Mikhail Samin's Shortform

by Mikhail Samin
7th Feb 2023

This is a special post for quick takes by Mikhail Samin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
276 comments, sorted by top scoring
[-]Mikhail Samin8mo*13172

Anthropic employees: stop deferring to Dario on politics. Think for yourself.

Do your company's actions actually make sense if it is optimizing for what you think it is optimizing for?

Anthropic lobbied against mandatory RSPs, against regulation, and, for the most part, didn't even support SB-1047. The difference between Jack Clark and OpenAI's lobbyists is that publicly, Jack Clark talks about alignment. But when they talk to government officials, there's little difference on the question of existential risk from smarter-than-human AI systems. They do not honestly tell the governments what the situation is like. Ask them yourself.

A while ago, OpenAI hired a lot of talent due to its nonprofit structure.

Anthropic is now doing the same. They publicly say the words that attract EAs and rats. But it's very unclear whether they institutionally care.

Dozens work at Anthropic on AI capabilities because they think it is net-positive to get Anthropic at the frontier, even though they wouldn't work on capabilities at OAI or GDM.

It is not net-positive.

Anthropic is not our friend. Some people there do very useful work on AI safety (where "useful" mostly means "shows that the predictions of MIRI-s... (read more)

[-]Knight Lee8mo151

I think you should try to clearly separate the two questions of

  1. Is their work on capabilities a net positive or net negative for humanity's survival?
  2. Are they trying to "optimize" for humanity's survival, and do they care about alignment deep down?

I strongly believe 2 is true, because why on Earth would they want to make an extra dollar if misaligned AI kills them in addition to everyone else? Won't any measure of their social status be far higher after the singularity, if it's found that they tried to do the best for humanity?

I'm not sure about 1. I think even they're not sure about 1. I heard that they held back on releasing their newer models until OpenAI raced ahead of them.

You (and all the people who upvoted your comment) have a chance of convincing them (a little) in a good faith debate maybe. We're all on the same ship after all, when it comes to AI alignment.

PS: AI safety spending is only $0.1 billion while AI capabilities spending is $200 billion. A company which adds a comparable amount of effort on both AI alignment and AI capabilities should speed up the former more than the latter, so I personally hope for their success. I may be wrong, but it's my best guess...
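As a rough illustration of that last point, using the figures above and a purely hypothetical extra $1 billion of effort added to each side (the $1 billion is for illustration only):

$$\frac{0.1 + 1}{0.1} = 11\times \text{ (alignment)} \qquad \text{vs.} \qquad \frac{200 + 1}{200} \approx 1.005\times \text{ (capabilities)}$$

The same absolute contribution multiplies the much smaller alignment effort far more than it multiplies the capabilities effort.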

3RHollerith8mo
There is very little hope IMHO in increasing spending on technical AI alignment because (as far as we can tell based on how slow progress has been on it over the last 22 years) it is a much thornier problem than AI capability research and because most people doing AI alignment research don't have a viable story about how they are going to stop any insights / progress they achieve from helping with AI capability research. I mean, if you have a specific plan that avoids these problems, then let's hear it, I am all ears, but advocacy in general of increasing work on technical alignment is counterproductive IMHO.
9Knight Lee8mo
EDIT: thank you so much for replying to the strongest part of my argument, no one else tried to address it (despite many downvotes).

I disagree with the position that technical AI alignment research is counterproductive due to increasing capabilities, but I think this is very complicated and worth thinking about in greater depth.

Do you think it's possible that your intuition on alignment research being counterproductive is because you compared the plausibility of the two outcomes:

1. Increasing alignment research causes people to solve AI alignment, and humanity survives.
2. Increasing alignment research led to an improvement in AI capabilities, allowing AI labs to build a superintelligence which then kills humanity.

And you decided that outcome 2 felt more likely? Well, that's the wrong comparison to make. The right comparison should be:

1. Increasing alignment research causes people to improve AI alignment, and humanity survives in a world where we otherwise wouldn't survive.
2. Increasing alignment research led to an improvement in AI capabilities, allowing AI labs to build a superintelligence which then kills humanity in a world where we otherwise would survive.

In this case, I think even you would agree that P(1) > P(2). P(2) is very unlikely because if increasing alignment research really would lead to such a superintelligence, and it really would kill humanity... then let's be honest, we're probably doomed in that case anyways, even without increasing alignment research. If that really was the case, the only surviving civilizations would have had different histories, or different geographies (e.g. only a single continent with enough space for a single country), leading to a single government which could actually enforce an AI pause.

We're unlikely to live in a world so pessimistic that alignment research is counterproductive, yet so optimistic that we could survive without that alignment research.
5RHollerith8mo
I believe we're probably doomed anyways. Sorry to disappoint you, but I do not agree. Although I don't consider it quite impossible that we will figure out alignment, most of my hope for our survival is in other things, such as a group taking over the world and then using their power to ban AI research. (Note that that is in direct contradiction to your final sentence.)

So for example, if Putin or Xi were dictator of the world, my guess is that there is a good chance he would choose to ban all AI research. Why? It has unpredictable consequences. We Westerners (particularly Americans) are comfortable with drastic change, even if that change has drastic unpredictable effects on society; non-Westerners are much more skeptical: there have been too many invasions, revolutions and peasant rebellions that have killed millions in their countries. I tend to think that the main reason Xi supports China's AI industry is to prevent the US and the West from superseding China, and if that consideration were removed (because for example he had gained dictatorial control over the whole world) he'd choose to just shut it down (and he wouldn't feel the need to have a very strong argument for shutting it down like Western decision-makers would: non-Western leaders shut important things down all the time, or at least they would if the governments they led had the funding and the administrative capacity to do so).

Of course Xi's acquiring dictatorial control over the whole world is extremely unlikely, but the magnitude of the technological changes and societal changes that are coming will tend to present opportunities for certain coalitions to gain and to keep enough power to shut AI research down worldwide. (Having power in all countries hosting leading-edge fabs is probably enough.) I don't think this ruling coalition necessarily needs to believe that AI presents a potent risk of human extinction for them to choose to shut it down. I am aware that some reading this will react to
[-]Knight Lee8mo112

I don't agree that the probability of alignment research succeeding is that low. 17 years or 22 years of trying and failing is strong evidence against it being easy, but doesn't prove that it is so hard that increasing alignment research is useless.

People worked on capabilities for decades, and never got anywhere until recently, when the hardware caught up, and it was discovered that scaling works unexpectedly well.

There is a chance that alignment research now might be more useful than alignment research earlier, though there is uncertainty in everything.

We should have uncertainty in the Ten Levels of AI Alignment Difficulty.

The comparison

It's unlikely that 22 years of alignment research is insufficient but 23 years of alignment research is sufficient.

But what's even more unlikely, is the chance that $200 billion on capabilities research plus $0.1 billion on alignment research is survivable, while $210 billion on capabilities research plus $1 billion on alignment research is deadly.

In the same way adding a little alignment research is unlikely to turn failure into success, adding a little capabilities research is unlikely to turn success into failure.

It's also unlikely that alignme... (read more)

5RHollerith8mo
This assumes that alignment success is the most likely avenue to safety for humankind, whereas, like I said, I consider other avenues more likely. Actually there needs to be a qualifier on that: I consider other avenues more likely than the alignment project's succeeding while the current generation of AI researchers remain free to push capabilities: if the AI capabilities juggernaut could be stopped for 150 years, giving the human population time to get smarter and wiser, then alignment is likely (say p = .7) to succeed in my estimation. I am informed by Eliezer in his latest interview that such a success would probably use some technology other than deep learning to create the AI's capabilities; i.e., deep learning is particularly hard to align.

Central to my thinking is my belief that alignment is just a significantly harder problem than the problem of creating an AI capable of killing us all. Does any of the reasoning you do in your section "the comparison" change if you started believing that alignment is much much harder than creating a superhuman (unaligned) AI?

It will probably come as no great surprise that I am unmoved by the arguments I have seen (including your argument) that Anthropic is so much better than OpenAI that it helps the global situation for me to support Anthropic (if it were up to me, both would be shut down today if I couldn't delegate the decision to someone else and had to decide right now, with no time to gather more information), but I'm not very certain and would pay attention to future arguments for supporting Anthropic or some other lab. Thanks for engaging with my comments.
3Knight Lee8mo
Thank you, I've always been curious about this point of view because a lot of people have a similar view to yours. I do think that alignment success is the most likely avenue, but my argument doesn't require this assumption. Your view isn't just that "alternative paths are more likely to succeed than alignment," but that "alternative paths are so much more likely to succeed than alignment, that the marginal capabilities increase caused by alignment research (or at least Anthropic), makes them unworthwhile." To believe that alignment is that hopeless, there should be stronger proof than "we tried it for 22 years, and the prior probability of the threshold being between 22 years and 23 years is low." That argument can easily be turned around to argue why more alignment research is equally unlikely to cause harm (and why Anthropic is unlikely to cause harm). I also think multiplying funding can multiply progress (e.g. 4x funding ≈ 2x duration). If you really want a singleton controlling the whole world (which I don't agree with), your most plausible path would be for most people to see AI risk as a "desperate" problem, and for governments under desperation to agree on a worldwide military which swears to preserve civilian power structures within each country.[1] Otherwise, the fact that no country took over the world during the last centuries strongly suggests that no country will in the next few years, and this feels more solid than your argument that "no one figured out alignment in the last 22 years, so no one will in the next few years." 1. ^ Out of curiosity, would you agree with this being the most plausible path, even if you disagree with the rest of my argument?
2RHollerith8mo
The most plausible story I can imagine quickly right now is the US and China fight a war and the US wins and uses some of the political capital from that win to slow down the AI project, perhaps through control over the world's leading-edge semiconductor fabs plus pressuring Beijing to ban teaching and publishing about deep learning (to go with a ban on the same things in the West). I believe that basically all the leading-edge fabs in existence or that will be built in the next 10 years are in the countries the US has a lot of influence over or in China. Another story: the technology for "measuring loyalty in humans" gets really good fast, giving the first group to adopt the technology so great an advantage that over a few years the group gets control over the territories where all the world's leading-edge fabs and most of the trained AI researchers are. I want to remind people of the context of this conversation: I'm trying to persuade people to refrain from actions that on expectation make human extinction arrive a little quicker because most of our (sadly slim) hope for survival IMHO flows from possibilities other than our solving (super-)alignment in time.
1Knight Lee8mo
I would go one step further and argue you don't need to take over territory to shut down the semiconductor supply chain, if enough large countries believed AI risk was a desperate problem they could convince and negotiate the shutdown of the supply chain. Shutting down the supply chain (and thus all leading-edge semiconductor fabs) could slow the AI project by a long time, but probably not "150 years" since the uncooperative countries will eventually build their own supply chain and fabs.
6RHollerith8mo
The ruling coalition can disincentivize the development of a semiconductor supply chain outside the territories it controls by selling world-wide semiconductors that use "verified boot" technology to make it really hard to use the semiconductor to run AI workloads similar to how it is really hard even for the best jailbreakers to jailbreak a modern iPhone.
1Knight Lee8mo
That's a good idea! Even today it may be useful for export controls (depending on how reliable it can be made). The most powerful chips might be banned from export, and have "verified boot" technology inside in case they are smuggled out. The second most powerful chips might be only exported to trusted countries, and also have this verified boot technology in case these trusted countries end up selling them to less trusted countries who sell them yet again.
3RHollerith8mo
If I believed that, then maybe I'd believe (like you seem to do) that there is no strong reason to believe that the alignment project cannot be finished successfully before the capabilities project creates an unaligned super-human AI.

I'm not saying scaling and hardware improvement have not been important: I'm saying they were not sufficient: algorithmic improvements were quite necessary for the field to arrive at anything like ChatGPT, and at least as early as 2006, there were algorithm improvements that almost everyone in the machine-learning field recognized as breakthrough or important insights. (Someone more knowledgeable about the topic might be able to push the date back into the 1990s or earlier.) After the publication 19 years ago by Hinton et al of "A Fast Learning Algorithm for Deep Belief Nets", basically all AI researchers recognized it as a breakthrough. Building on it was AlexNet in 2012, again recognized as an important breakthrough by essentially everyone in the field (and if some people missed it then certainly generative adversarial networks, ResNets and AlphaGo convinced them). AlexNet was the first deep model trained on GPUs, a technique essential for the major breakthrough in 2017 reported in the paper "Attention is all you need".

In contrast, we've seen nothing yet in the field of alignment that is as unambiguously a breakthrough as is the 2006 paper by Hinton et al or 2012's AlexNet or (emphatically) the 2017 paper "Attention is all you need". In fact I suspect that some researchers could tell that the attention mechanism reported by Bahdanau et al in 2015 or the Seq2Seq models reported on by Sutskever et al in 2014 was evidence that deep-learning language models were making solid progress and that a blockbuster insight like "attention is all you need" was probably only a few years away.

The reason I believe it is very unlikely for the alignment research project to succeed before AI kills us all is that in machine learning or the deep-learni
1Knight Lee8mo
Even if building intelligence requires solving many many problems, preventing that intelligence from killing you may just require solving a single very hard problem. We may go from having no idea to having a very good idea. I don't know. My view is that we can't be sure of these things.
[-]jbash8mo12-8

But it's very unclear whether they institutionally care.

There are certain kinds of things that it's essentially impossible for any institution to effectively care about.

6Lukas Finnveden8mo
What is this referring to?
[-]Mikhail Samin8mo140

People representing Anthropic argued against government-required RSPs. I don’t think I can share the details of the specific room where that happened, because it will be clear who I know this from.

Ask Jack Clark whether that happened or not.

4Nathan Helm-Burger8mo
Anthropic ppl had also said approximately this publicly. Saying that it's too soon to make the rules, since we'd end up misspecifying due to ignorance of tomorrow's models.
7Zac Hatfield-Dodds8mo
There's a big difference between regulation which says roughly "you must have something like an RSP", and regulation which says "you must follow these specific RSP-like requirements", and I think Mikhail is talking about the latter. I personally think the former is a good idea, and thus supported SB-1047 along with many other lab employees. It's also pretty clear to me that locking in circa-2023 thinking about RSPs would have been a serious mistake, and so I (along with many others) am generally against very specific regulations because we expect they would on net increase catastrophic risk.
[-]Adam Scholl8mo2219

When do you think would be a good time to lock in regulation? I personally doubt RSP-style regulation would even help, but the notion that now is too soon/risks locking in early sketches, strikes me as in some tension with e.g. Anthropic trying to automate AI research ASAP, Dario expecting ASL-4 systems between 2025—the current year!—and 2028, etc.

4Zac Hatfield-Dodds8mo
Here I am on record supporting SB-1047, along with many of my colleagues. I will continue to support specific proposed regulations if I think they would help, and oppose them if I think they would be harmful; asking "when" independent of "what" doesn't make much sense to me and doesn't seem to follow from anything I've said. My claim is not "this is a bad time", but rather "given the current state of the art, I tend to support framework/liability/etc regulations, and tend to oppose more-specific/exact-evals/etc regulations". Obviously if the state of the art advanced enough that I thought the latter would be better for overall safety, I'd support them, and I'm glad that people are working on that.
[-]Mikhail Samin8mo100

AFAIK Anthropic has not unequivocally supported the idea of "you must have something like an RSP" or even SB-1047 despite many employees, indeed, doing so.

1Zac Hatfield-Dodds8mo
To quote from Anthropic's letter to Governor Newsom,
2Mikhail Samin8mo
“we believe its benefits likely outweigh its costs” is “it was a bad bill and now it’s likely net-positive”, not exactly unequivocally supporting it. Compare that even to the language in calltolead.org. Edit: AFAIK Anthropic lobbied against SSP-like requirements in private.
4Zach Stein-Perlman8mo
My guess is it's referring to Anthropic's position on SB 1047, or Dario's and Jack Clark's statements that it's too early for strong regulation, or how Anthropic's policy recommendations often exclude RSP-y stuff (and when they do suggest requiring RSPs, they would leave the details up to the company).
2Lukas Finnveden8mo
SB1047 was mentioned separately so I assumed it was something else. Might be the other ones, thanks for the links.
6Nathan Helm-Burger8mo
Our worldviews do not match, and I fail to see how yours makes sense. Even when I relax my predictions about the future to take in a wider set of possible paths... I still don't get it.

AI is here. AGI is coming whether you like it or not. ASI will probably doom us. Anthropic, as an org, seems to believe that there is a threshold of power beyond which creating an AGI more powerful than that would kill us all. OpenAI may believe this also, in part, but it seems like their expectation of where that threshold is is further away than mine. Thus, I think there is a good chance they will get us all killed. There is substantial uncertainty and risk around these predictions.

Now, consider that, before AGI becomes so powerful that utilizing it for practical purposes becomes suicide, there is a regime where the AI product gives its wielder substantial power. We are currently in that regime. The further AI advances, the more power it grants. Anthropic might get us all killed. OpenAI is likely to get us all killed. If you trust the employees of Anthropic to not want to be killed by OpenAI... then you should realize that supporting them while hindering OpenAI is at least potentially a good bet. Then we must consider probabilities, expected values, etc.

Give me your model, with numbers, that shows supporting Anthropic to be a bad bet, or admit you are confused and that you don't actually have good advice to give anyone.
[-]Adam Scholl8mo*6443

Give me your model, with numbers, that shows supporting Anthropic to be a bad bet, or admit you are confused and that you don't actually have good advice to give anyone.

It seems to me that other possibilities exist, besides "has model with numbers" or "confused." For example, that there are relevant ethical considerations here which are hard to crisply, quantitatively operationalize!

One such consideration which feels especially salient to me is the heuristic that before doing things, one should ideally try to imagine how people would react, upon learning what you did. In this case the action in question involves creating new minds vastly smarter than any person, which pose double-digit risk of killing everyone on Earth, so my guess is that the reaction would entail things like e.g. literal worldwide riots. If so, this strikes me as the sort of consideration one should generally weight more highly than their idiosyncratic utilitarian BOTEC.

[-]Martin Randall8mo149

Does your model predict literal worldwide riots against the creators of nuclear weapons? They posed a single-digit risk of killing everyone on Earth (total, not yearly).

It would be interesting to live in a world where people reacted with scale sensitivity to extinction risks, but that's not this world.

[-]Mikhail Samin8mo102

nuclear weapons have different game theory. if your adversary has one, you want to have one to not be wiped out; once both of you have nukes, you don't want to use them.

also, people were not aware of real close calls until much later.

with ai, there are economic incentives to develop it further than other labs, but as a result, you risk everyone's lives for money and also create a race to the bottom where everyone's lives will be lost.

5Knight Lee8mo
I think you (or @Adam Scholl) need to argue why people won't be angry at you if you developed nuclear weapons, in a way which doesn't sound like "yes, what I built could have killed you, but it has an even higher chance of saving you!" Otherwise, it's hard to criticize Anthropic for working on AI capabilities without considering whether their work is a net positive. It's hard to dismiss the net positive arguments as "idiosyncratic utilitarian BOTEC," when you accept "net positive" arguments regarding nuclear weapons. Allegedly, people at Anthropic have compared themselves to Robert Oppenheimer. Maybe they know that one could argue they have blood on their hands, the same way one can argue that about Oppenheimer. But people aren't "rioting" against Oppenheimer. I feel it's more useful to debate whether it is a net positive, since that at least has a small chance of convincing Anthropic or their employees.
9Mikhail Samin8mo
My argument isn’t “nuclear weapons have a higher chance of saving you than killing you”. People didn’t know about Oppenheimer when rioting about him could help. And they didn’t watch The Day After until decades later. Nuclear weapons were built to not be used. With AI, companies don’t build nukes to not use them; they build larger and larger weapons because if your latest nuclear explosion is the largest so far, the universe awards you with gold. The first explosion past some unknown threshold will ignite the atmosphere and kill everyone, but some hope that it’ll instead just award them with infinite gold.  Anthropic could’ve been a force of good. It’s very easy, really: lobby for regulation instead of against it so that no one uses the kind of nukes that might kill everyone. In a world where Anthropic actually tries to be net-positive, they don’t lobby against regulation and instead try to increase the chance of a moratorium on generally smarter-than-human AI systems until alignment is solved. We’re not in that world, so I don’t think it makes as much sense to talk about Anthropic’s chances of aligning ASI on first try. (If regulation solves the problem, it doesn’t matter how much it damaged your business interests (which maybe reduced how much alignment research you were able to do). If you really care first and foremost about getting to aligned AGI, then regulation doesn't make the problem worse. If you’re lobbying against it, you really need to have a better justification than completely unrelated “if I get to the nuclear banana first, we’re more likely to survive”.)
3Knight Lee8mo
Hi, I've just read this post, and it is disturbing what arguments Anthropic made about how the US needs to be ahead of China. I didn't really catch up to this news, and I think I know where the anti-Anthropic sentiment is coming from now. I do think that Anthropic only made those arguments in the context of GPU export controls, and trying to convince the Trump administration to do export controls if nothing else. It's still very concerning, and could undermine their ability to argue for strong regulation in the future. That said, I don't agree with the nuclear weapon explanation. Suppose Alice and Bob were each building a bomb. Alice's bomb has a 10% chance of exploding and killing everyone, and a 90% chance of exploding into rainbows and lollipops and curing cancer. Bob's bomb has a 10% chance of exploding and killing everyone, and a 90% chance of "never being used" and having a bunch of good effects via "game theory." I think people with ordinary moral views will not be very angry at Alice, but forgive Bob because "Bob's bomb was built to not be used."
5Mikhail Samin8mo
(Dario’s post did not impact the sentiment of my shortform post.)
4Nathan Helm-Burger8mo
I don't believe the nuclear bomb was truly built to not be used from the point of view of the US gov. I think that was just a lie to manipulate scientists who might otherwise have been unwilling to help. I don't think any of the AI builders are anywhere close to "building AI not to be used". This seems even more clear than with nuclear, since AI has clear beneficial peacetime economically valuable uses.

Regulation does make things worse if you believe the regulation will fail to work as intended for one reason or another. For example, I've argued that putting compute limits on training runs (temporarily or permanently) would hasten progress to AGI by focusing research efforts on efficiency and exploring algorithmic improvements.
9Nathan Helm-Burger8mo
It has been pretty clearly announced to the world by various tech leaders that they are explicitly spending billions of dollars to produce "new minds vastly smarter than any person, which pose double-digit risk of killing everyone on Earth". This pronouncement has not yet incited riots. I feel like discussing whether Anthropic should be on the riot-target-list is a conversation that should happen after the OpenAI/Microsoft, DeepMind/Google, and Chinese datacenters have been burnt to the ground. Once those datacenters have been reduced to rubble, and the chip fabs also, then you can ask things like, "Now, with the pressure to race gone, will Anthropic proceed in a sufficiently safe way? Should we allow them to continue to exist?" I think that, at this point, one might very well decide that the company should continue to exist with some minimal amount of compute, while the majority of the compute is destroyed. I'm not sure it makes sense to have this conversation while OpenAI and DeepMind remain operational.
4Knight Lee8mo
That's a very good heuristic. I bet even Anthropic agrees with it. Anthropic did not release their newer models until OpenAI released ChatGPT and the race had already started. That's not a small sacrifice. Maybe if they released it sooner, they would be bigger than OpenAI right now due to the first mover advantage. I believe they want the best for humanity, but they are in a no-win situation, and it's a very tough choice what they should do. If they stop trying to compete, the other AI labs will build AGI just as fast, and they will lose all their funds. If they compete, they can make things better. AI safety spending is only $0.1 billion while AI capabilities spending is $200 billion. A company which adds a comparable amount of effort on both AI alignment and AI capabilities should speed up the former more than the latter. Even if they don't support all the regulations you believe in, they're the big AI company supporting relatively much more regulation than all the others. I don't know, I may be wrong. Sadly it is so very hard to figure out what's good or bad for humanity in this uncertain time.
[-]aysja8mo140

I don't think that most people, upon learning that Anthropic's justification was "other companies were already putting everyone's lives at risk, so our relative contribution to the omnicide was low" would then want to abstain from rioting. Common ethical intuitions are often more deontological than that, more like "it's not okay to risk extinction, period." That Anthropic aims to reduce the risk of omnicide on the margin is not, I suspect, the point people would focus on if they truly grokked the stakes; I think they'd overwhelmingly focus on the threat to their lives that all AGI companies (including Anthropic) are imposing.    

7Knight Lee8mo
Regarding common ethical intuitions, I think people in the post-singularity world (or afterlife, for the sake of argument) will be far more forgiving of Anthropic. They will understand, even if Anthropic (and people like me) turned out wrong, and actually were a net negative for humanity. Many ordinary people (maybe most) would have done the same thing in their shoes.

Ordinary people do not follow the utilitarianism that the awkward people here follow. Ordinary people also do not follow deontology or anything that's the opposite of utilitarianism. Ordinary people just follow their direct moral feelings. If Anthropic was honestly trying to make the future better, they won't feel that outraged at their "consequentialism." They may be outraged at perceived incompetence, but Anthropic definitely won't be the only one accused of incompetence.
[-]Mikhail Samin8mo*2716

If you trust the employees of Anthropic to not want to be killed by OpenAI


In your mind, is there a difference between being killed by AI developed by OpenAI and by AI developed by Anthropic? What positive difference does it make, if Anthropic develops a system that kills everyone a bit earlier than OpenAI would develop such a system? Why do you call it a good bet?

AGI is coming whether you like it or not

Nope.

You’re right that the local incentives are not great: having a more powerful model is hugely economically beneficial, unless it kills everyone.

But if 8 billion humans knew what many LessWrong users know, OpenAI, Anthropic, DeepMind, and others could not develop what they want to develop, and AGI wouldn't come for a while.

From the top of my head, it actually likely could be sufficient to either (1) inform some fairly small subset of 8 billion people of what the situation is or (2) convince that subset that the situation as we know it is likely enough to be the case that some measures to figure out the risks and not be killed by AI in the meantime are justified. It’s also helpful to (3) suggest/introduce/support policies that change the incentives to race or increase the chance of ... (read more)

9ozziegooen8mo
Are there good models that support that Anthropic is a good bet? I'm genuinely curious.

I assume that naively, if any side had more of the burden of proof, it would be Anthropic. They have many more resources, and are the ones doing the highly-impactful (and potentially negative) work.

My impression was that there was very little probabilistic risk modeling here, but I'd love to be wrong.
3Nathan Helm-Burger8mo
I don't feel free to share my model, unfortunately. Hopefully someone else will chime in. I agree with your point and that this is a good question! I am not trying to say I am certain that Anthropic is going to be net positive, just that I see it as the more probable outcome.
7ozziegooen8mo
I think it's totally fine to think that Anthropic is a net positive. Personally, right now, I broadly also think it's a net positive. I have friends on both sides of this.

I'd flag though that your previous comment suggested more to me than "this is just you giving your probability":

> Give me your model, with numbers, that shows supporting Anthropic to be a bad bet, or admit you are confused and that you don't actually have good advice to give anyone.

I feel like there are much nicer ways to phrase that last bit. I suspect that this is much of the reason you got disagreement points.
4Nathan Helm-Burger8mo
Fair enough. I'm frustrated and worried, and should have phrased that more neutrally. I wanted to make stronger arguments for my point, and then partway through my comment realized I didn't feel good about sharing my thoughts.

I think the best I can do is gesture at strategy games that involve private information and strategic deception like Diplomacy and Stratego and MtG and Poker, and say that in situations with high stakes and politics and hidden information, perhaps don't take all moves made by all players at literally face value. Think a bit to yourself about what each player might have in their hands, what their incentives look like, what their private goals might be. Maybe someone whose mind is clearer on this could help lay out a set of alternative hypotheses which all fit the available public data?
8Mikhail Samin8mo
The private data is, pretty consistently, Anthropic being very similar to OpenAI where it matters the most and failing to mention in private policy-related settings its publicly stated belief on the risk that smarter-than-human AI will kill everyone. 
1ZY8mo
I wonder if this is due to:

1. Funding - the company needs money to perform research on safety alignment (X risks, and assuming they do want to do this), and to get there they need to publish models so that they can 1) make profits from them, 2) attract more funding. A quick look at the funding sources shows Amazon, Google, some other ventures, and some other tech companies.
2. Empirical approach - they want to take an empirical approach to AI safety and would need some limited capable models.

But both of the points above are my own speculations.
[-]Mikhail Samin8d*993

The book is now a NYT bestseller: #7 in combined print&e-books nonfiction, #8 in hardcover nonfiction.

I want to thank everyone here who contributed to that. You're an awesome community, and you've earned a huge amount of dignity points.

[-]Zach Stein-Perlman8d250

https://www.nytimes.com/books/best-sellers/combined-print-and-e-book-nonfiction/

[-]Mikhail Samin4mo8935

Nobody at Anthropic can point to a credible technical plan for actually controlling a generally superhuman model. If it’s smarter than you, knows about its situation, and can reason about the people training it, this is a zero-shot regime.

The world, including Anthropic, is acting as if "surely, we’ll figure something out before anything catastrophic happens."

That is unearned optimism. No other engineering field would accept "I hope we magically pass the hardest test on the first try, with the highest stakes" as an answer. Just imagine if flight or nuclear technology were deployed this way. Now add having no idea what parts the technology is made of. We've not developed fundamental science about how any of this works.

As much as I enjoy Claude, it’s ordinary professional ethics in any safety-critical domain: you shouldn't keep shipping SOTA tech if your own colleagues, including the CEO, put double-digit chances on that tech causing human extinction.

You're smart enough to know how deep the gap is between current safety methods and the problem ahead. Absent dramatic change, this story doesn’t end well.

In the next few years, the choices of a technical leader in this field could literally determine not what the future looks like, but whether we have a future at all.

If you care about doing the right thing, now is the time to get more honest and serious than the prevailing groupthink wants you to be.

[-]Buck4mo6128

I think it's accurate to say that most Anthropic employees are abhorrently reckless about risks from AI (though my guess is that this isn't true of most people who are senior leadership or who work on Alignment Science, and I think that a bigger fraction of staff are thoughtful about these risks at Anthropic than other frontier AI companies). This is mostly because they're tech people, who are generally pretty irresponsible. I agree that Anthropic sort of acts like "surely we'll figure something out before anything catastrophic happens", and this is pretty scary.

I don't think that "AI will eventually pose grave risks that we currently don't know how to avert, and it's not obvious we'll ever know how to avert them" immediately implies "it is repugnant to ship SOTA tech", and I wish you spelled out that argument more.

I agree that it would be good if Anthropic staff (including those who identify as concerned about AI x-risk) were more honest and serious than the prevailing Anthropic groupthink wants them to be.

[-]Fabien Roger4mo2618

What if someone at Anthropic thinks P(doom|Anthropic builds AGI) is 15% and P(doom|some other company builds AGI) is 30%? Then the obvious alternatives are to do their best to get governments / international agreements to make everyone pause or to make everyone's AI development safer, but it's not completely obvious that this is a better strategy because it might not be very tractable. Additionally, they might think these things are more tractable if Anthropic is on the frontier (e.g. because it does political advocacy, AI safety research, and deploys some safety measures in a way competitors might want to imitate to not look comparatively unsafe). And they might think these doom-reducing effects are bigger than the doom-increasing effects of speeding up the race.

You probably disagree with P(doom|some other company builds AGI) - P(doom|Anthropic builds AGI) and with the effectiveness of Anthropic advocacy/safety research/safety deployments, but I feel like this is a very different discussion from "obviously you should never build something that has a big chance of killing everyone".

(I don't think most people at Anthropic think like that, but I believe at least some of the most influential employees do.)

Also my understanding is that technology is often built this way during deadly races where at least one side believes that them building it faster is net good despite the risks (e.g. deciding to fire the first nuke despite thinking it might ignite the atmosphere, ...).

[-]Mikhail Samin4mo1316

If this is their belief, they should state it and advocate for the US government to prevent everyone in the world, including them, from building what has a double-digit chance of killing everyone. They’re not doing that.

[-]Charbel-Raphaël4mo116

P(doom|Anthropic builds AGI) is 15% and P(doom|some other company builds AGI) is 30% --> You need to add to this the probability that Anthropic is first and that the other companies are not going to create AGI if Anthropic already created it. This is by default not the case.

2Fabien Roger4mo
I agree, the net impact is definitely not the difference between these numbers. Also I meant something more like P(doom|Anthropic builds AGI first). I don't think people are imagining that the first AI company to achieve AGI will have an AGI monopoly forever. Instead some think it may have a large impact on what this technology is first used for and what expectations/regulations are built around it.
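To make that correction concrete: using the 15%/30% figures from this thread and a purely hypothetical P(Anthropic first) = 0.2 (an illustrative number, not anyone's stated estimate), the direct term of the net impact looks roughly like

$$\Delta P(\text{doom}) \approx P(\text{Anthropic first}) \cdot \big[P(\text{doom} \mid \text{Anthropic first}) - P(\text{doom} \mid \text{another lab first})\big] = 0.2 \cdot (0.15 - 0.30) = -0.03,$$

i.e. about a three-percentage-point reduction from the direct term alone, before adding the indirect effects (race speed-up, advocacy, safety research) that could push in either direction.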
[-]Zach Stein-Perlman4mo1113

It would be easier to argue with you if you proposed a specific alternative to the status quo and argued for it. Maybe "[stop] shipping SOTA tech" is your alternative. If so: surely you're aware of the basic arguments for why Anthropic should make powerful models; maybe you should try to identify cruxes.

[-]habryka4mo5021

Separately from my other comment: It is not the case that the only appropriate thing to do when someone is going around killing your friends and your family and everyone you know is to "try to identify cruxes". 

It's eminently reasonable for people to just try to stop whatever is happening, which includes intention for social censure, convincing others, and coordinating social action. It is not my job to convince Anthropic staff they are doing something wrong. Indeed, the economic incentives point extremely strongly towards Anthropic staff being the hardest to convince of true beliefs here. The standard you invoke here seems pretty crazy to me.

[-]Neel Nanda4mo246

It is not clear to me that Anthropic "unilaterally stopping" will result in meaningfully better outcomes than the status quo, let alone that it would be anywhere near the best way for Anthropic to leverage its situation.

[-]Vaniver4mo2610

I do think there's a Virtue of Silence problem here.

Like--I was a ML expert who, roughly ten years ago, decided to not advance capabilities and instead work on safety-related things, and when the returns to that seemed too dismal stopped doing that also. How much did my 'unilateral stopping' change things? It's really hard to estimate the counterfactual of how much I would have actually shifted progress; on the capabilities front I had several 'good ideas' years early but maybe my execution would've sucked, or I would've been focused on my bad ideas instead. (Or maybe me being at the OpenAI lunch table and asking people good questions would have sped the company up by 2%, or w/e, independent of my direct work.)

How many people are there like me? Also not obvious, but probably not that many. (I would guess most of them ended up in the MIRI orbit and I know them, but maybe there are lurkers--one of my friends in SF works for generic tech companies but is highly suspicious of working for AI companies, for reasons roughly downstream of MIRI, and there might easily be hundreds of people in that boat. But maybe the AI companies would only actually have wanted to hire ten of them, and the others objecting to AI work didn't actually matter.) 

[-]Thane Ruthenis4mo267

It is not clear to me that Anthropic "unilaterally stopping" will result in meaningfully better outcomes than the status quo

I think that just Anthropic, OpenAI, and DeepMind stopping would plausibly result in meaningfully better outcomes than the status quo. I still see no strong evidence that anyone outside these labs is actually pursuing AGI with anything like their level of effectiveness. I think it's very plausible that everyone else is either LARPing (random LLM startups), or largely following their lead (DeepSeek/China), or pursuing dead ends (Meta's LeCun), or some combination.

The o1 release is a good example. Yes, everyone and their grandmother was absent-mindedly thinking about RL-on-CoTs and tinkering with relevant experiments. But it took OpenAI deploying a flashy proof-of-concept for everyone to pour vast resources into this paradigm. In the counterfactual where the three major labs weren't there, how long would it have taken the rest to get there?

I think it's plausible that if only those three actors stopped, we'd get +5-10 years to the timelines just from that. Which I expect does meaningfully improve the outcomes, particularly in AI-2027-style short-timeline worlds.

So I think getting any one of them to individually stop would be pretty significant, actually (inasmuch as it's a step towards "make all three stop").

[-]Vaniver4mo2314

I think more than this, when you look at the labs you will often see the breakthru work was done by a small handful of people or a small team, whose direction was not popular before their success. If just those people had decided to retire to the tropics, and everyone else had stayed, I think that would have made a huge difference to the trajectory. (What does it look like if Alec Radford had decided to not pursue GPT? Maybe the idea was 'obvious' and someone else gets it a month later, but I don't think so.)

[-]habryka4mo1412

I see no principle by which I should allow Anthropic to build existentially dangerous technology, but disallow other people from building it. I think the right choice is for no lab to build it. I am here not calling for particularly much censure of Anthropic compared to all labs, and my guess is we can agree that in aggregate building existentially dangerous AIs is bad and should face censure.

3Ben Pace4mo
If you are killing me and my friends because you think it better that you do the killing than someone else, then actually I will still ask you to stop, because I draw a hard line around killing me and my friends. Naturally, I have a similar line around developing tech that will likely kill me and my friends.
7Neel Nanda4mo
I think this would fail Anthropic's ideological Turing test. For example, they might make arguments like: by being a frontier lab, they can push for impactful regulation in a way they couldn't if they weren't; they can set better norms and demonstrate good safety practices that get adopted by others; or they can conduct better safety research that they could not do without access to frontier models. It's totally reasonable to disagree with this, or argue that their actions so far (e.g., lukewarm support and initial opposition to SB 1047) show that they are not doing this, but I don't think these arguments are, in principle, ridiculous.
8Mikhail Samin4mo
Yeah, sorry, I think it’s just very tricky for me to pass Anthropic’s ITT, because to imitate Anthropic, I would need to be concurrently saying stuff like “by being a frontier lab, we can push for impactful regulation”, typing stuff like “this bill will impose multi-million dollar fines for minor, technical violations, representing a risk to smaller companies” about a NY bill with requirements only for $100m+ training runs that would not impose multi-million dollar fines for minor violations, and misleading a part of me about Dario’s role (he is Anthropic’s politics and policy lead and was a lot more involved in SB 1047 than many at Anthropic think).

It’s generally harder to pass the ITT of an entity that lies to itself and others than to point out why it is incoherent and ridiculous. In my mind, a good predictor of Anthropic’s actions is something in the direction of “a bunch of Sam Altmans stuck with potentially unaligned employees (who care about x-risk), going hard on trying to win the race”.
2Neel Nanda4mo
I disagree, but this doesn't feel like a productive discussion, so I'll leave things there. Do you have a source for Anthropic comments on the NY bill? I couldn't find them and that one is news to me.
4Mikhail Samin4mo
A bill passed two chambers of the New York State legislature. It incorporated a lot of feedback from this community. This bill’s author actually talked about it as a keynote speaker at an event organized by FAR at the end of May. There’s no good theory of change for Anthropic compatible with them opposing and misrepresenting this bill. If you work at Anthropic on AI capabilities, you should stop.

From Jack Clark:

(Many such cases!)

Here’s what the bill’s author says in response:
4Ben Pace4mo
I’m not saying that it’s implausible that the consequences might seem better. I’m stating it’s still morally wrong to race toward causing a likely extinction-level event, as that’s a pretty Schelling place for a deontological line against action.
[-]Neel Nanda4mo111

Ah. In that case we just disagree about morality. I am strongly in favour of judging actions by their consequences, especially for incredibly high stakes actions like potential extinction level events. If an action decreases the probability of extinction I am very strongly in favour of people taking it.

I'm very open to arguments that the consequences would be worse, that this is the wrong decision theory, etc, but you don't seem to be making those?

[-]Ben Pace4mo*2423

I too believe we should ultimately judge things based on their consequences. I believe that having deontological lines against certain actions is something that leads humans to make decisions with better consequences, partly because we are bounded agents that cannot well-compute the consequences of all of our actions.

For instance, I think you would agree that it would be wrong to kill someone in order to prevent more deaths, today here in the Western world. Like, if an assassin is going to kill two people, but says if you kill one then he won’t kill the other, if you kill that person you should still be prosecuted for murder. It is actually good to not cross these lines even if the local consequentialist argument seems to check out. I make the same sort of argument for being first in the race toward an extinction-level event. Building an extinction-machine is wrong, and arguing you’ll be slightly more likely to pull back first does not stop it from being something you should not do. 

I think when you look back at a civilization that raced to the precipice and committed auto-genocide, and ask where the lines in the sand should’ve been drawn, the most natural one will be “building the extinction machine, and competing to be first to do so”. So it is wrong to cross this line, even for locally net positive tradeoffs.

[-]Neel Nanda4mo153

I think this just takes it up one level of meta. We are arguing about the consequences of a ruleset. You are arguing that your ruleset has better consequences, while others disagree. And so you try to censure these people - this is your prerogative, but I don't think this really gets you out of the regress of people disagreeing about what the best actions are.

Engaging with the object level of whether your proposed ruleset is a good one, I feel torn.

For your analogy of murder, I am very pro-not-murdering people, but I would argue this is convergent because it is broadly agreed upon by society. We all benefit from it being part of the social contract, and breaking that erodes the social contract in a way that harms all involved. If Anthropic unilaterally stopped trying to build AGI, I do not think this would significantly affect other labs, who would continue their work, so this feels disanalogous.

And it is reasonable in extreme conditions (e.g. when those prohibitions are violated by others acting against you) to abandon standard ethical prohibitions. For example, I think it was just for Allied soldiers to kill Nazi soldiers in World War II. I think having nuclear weapons is terribl... (read more)

[-]Ben Pace4mo*107

If Anthropic unilaterally stopped trying to build AGI, I do not think this would significantly affect other labs, who would continue their work, so this feels disanalogous.

Not a crux for either of us, but I disagree. When is the last time that someone shut down a multi-billion dollar profit arm of a company due to ethics, and especially due to the threat of extinction? If Anthropic announced they were ceasing development / shutting down because they did not want to cause an extinction-level event, this would have massive ramifications through society as people started to take this consequence more seriously, and many people would become more scared, including friends of employees at the other companies and more of the employees themselves. This would have massive positive effects.

For your analogy of murder, I am very pro-not-murdering people, but I would argue this is convergent because it is broadly agreed upon by society. We all benefit from it being part of the social contract, and breaking that erodes the social contract in a way that harms all involved.

This implies one should never draw lines in the sand about good/bad behavior if society has not reached consensus on it. In co... (read more)

6Neel Nanda4mo
The point I was trying to make is that, if I understood you correctly, you were trying to appeal to common sense morality to argue that deontological rules like this are good on consequentialist grounds. I was trying to give examples why I don't think this immediately follows and why you need to actually make object-level arguments about this and engage with the counterarguments. If you want to argue for deontological rules, you need to justify why those rules are good ones.

I am not trying to defend the claim that I am highly confident that what Anthropic is doing is ethical and net good for the world, but I am trying to defend the claim that there are vaguely similar plans to Anthropic's that I would predict are net good in expectation, e.g., becoming a prominent actor and then leveraging your influence to push for good norms and good regulations. Your arguments would also imply that plans like that should be deontologically prohibited, and I disagree. I don't think this follows from naive moral intuition.

A crucial disanalogy with murder is that if you don't kill someone, the counterfactual is that the person is alive, while if you don't race towards AGI, the counterfactual is that maybe someone else makes it and we die anyway. This means that we need to be engaging in discussion about the consequences of there being another actor pushing for this, the consequences of other actions this actor may take, and how this all nets out, which I don't feel like you're doing.

I expect AGI to be either the best or worst thing that has ever happened, and this means that important actions will typically be high variance, with major positive or negative consequences. Declining to engage in things with the potential for high negative consequences severely restricts your action space. And given that it's plausible that there's a terrible outcome even if we do nothing, I don't think the act-omission distinction applies.
6Ben Pace4mo
Thank you for clarifying, I think I understand now. I'm hearing that you're not arguing in defense of Anthropic's specific plan, but in defense of the claim that some part of the space of plans that involve racing to build something with a (say) >20% chance of causing an extinction-level event is good, and that Anthropic may or may not fall into that part.

This isn't disanalogous. As I have already said in this thread, you are not allowed to murder someone even if someone else is planning to murder them. If you find out multiple parties are going to murder Bob, you are not now allowed to murder Bob in a way that is slightly less likely to be successful.

Crucially, it is not to be assumed that we will build AGI in the next 1-2 decades. If the countries of the world decided to ban training runs of a particular size, because we don't want to take this sort of extinction-level risk, then it would not happen. Assuming this out of the hypothesis space will get you into bad ethical territory. Suppose a military general says "War is inevitable, the only question is how fast it's over when it starts and how few deaths there are." This general would never take responsibility for instigating one. Similarly, if you assume with certainty that AGI will be developed in the next few decades, you absolve yourself of all responsibility for being the one who does so.

I think you are failing to understand the concept of deontology by replacing "breaks deontological rules" with "highly negative consequences". Deontology doesn't say "you can tell a lie if it saves you from telling two lies later" or "lying is wrong unless you get a lot of money for it". It says "don't tell lies". There are exceptional circumstances for all rules, but unless you're in an exceptional circumstance, you treat them as rules, and don't treat violations as integers to be traded against each other. When the stakes get high it is not time to start lying, cheating, killing, or unilaterally betting the extinction of the human r
9Neel Nanda4mo
Yes, that is correct. I disagree. If a patient has a deadly illness, then I think it is fine for a surgeon to perform a dangerous operation to try to save their life. I think the word murder is obfuscating things and suggest we instead talk in terms of "taking actions that may lead to death", which I think is more analogous - hopefully we can agree Anthropic won't intentionally cause human extinction. I think it is totally reasonable to take actions that net decrease someone's probability of dying, while introducing some novel risks.

I think we're talking past each other. I understood you as arguing "deontological rules against X will systematically lead to better consequences than trying to evaluate each situation carefully, because humans are fallible". I am trying to argue that your proposed deontological rule does not obviously lead to better consequences as an absolute rule. Please correct me if I have misunderstood.

I am arguing that "things to do with human extinction from AI, when there's already a meaningful likelihood" are not a domain where ethical prohibitions like "never do things that could lead to human extinction" are productive. For example, you help run LessWrong, which I'd argue has helped raise the salience of AI x-risk, which plausibly has accelerated timelines. I personally think this is outweighed by other effects, but that's via reasoning about the consequences. Your actions and Anthropic's feel more like a difference in scale than a difference in kind.

I am not arguing that AI x-risk is inevitable, in fact I'm arguing the opposite. AI x-risk is both plausible and not inevitable. Actions to reduce this seem very valuable. Actions that do this will often have side effects that increase risk in other ways. In my opinion, this is not sufficient cause to immediately rule them out. Meanwhile, I would consider anyone pushing hard to make frontier AI to be highly reckless if they were the only one who could cause extinction, and they could unilate
8Ben Pace4mo
This is simplifying away key details. If you go up to a person with a deadly illness and non-consensually do a dangerous surgery on them, this is wrong. If you kill them via this, their family has a right to sue you / prosecute you for murder. Once again, simply because some bad outcome is likely, you do not have an ethical mandate to now go and cause it yourself.

Deontology is typically about forbidding classes of action that on net make the world worse even when locally you have a good reason. Talking about "taking actions that lead to death" explicitly obfuscates the mechanism. I know you won't endorse this once I point it out, but under this strictly-consequentialist framework "blogging on LessWrong about extinction-risk from AI" and "committing murder" are just two different "actions that lead to death" and neither can be thought of as having different deontological lines drawn. On the contrary, "don't commit murder" and "don't build a doomsday machine" are simple and natural deontological rules, whereas "don't build a blogging platform with unusually high standards for truthseeking" is not. I am not trying to argue for an especially novel deontological rule… "building a doomsday machine" is wrong. It's a far greater sin than murder.

I think you'd do better to think of the AI companies as more like competing political factions, each of whose base is very motivated toward committing a genocide against their neighbors. If your political faction commits a genocide, and you were merely a top-200 ranked official who didn't particularly want a genocide, you still bear moral responsibility for it even though you only did paperwork and took meetings and maybe worked in a different department. Just because there are two political factions whose bases are uncomfortably attracted to the idea of committing genocide does not now make it ethically clear for you to make a third one that hungers for genocide but has wiser people in charge. I am not advocating for some new int
[-]Neel Nanda4mo143

I continue to feel like we're talking past each other, so let me start again. We both agree that causing human extinction is extremely bad. If I understand you correctly, you are arguing that it makes sense to follow deontological rules, even if there's a really good reason breaking them seems locally beneficial, because on average, the decision theory that's willing to do harmful things for complex reasons performs badly.

The goal of my various analogies was to point out that this is not actually a fully correct statement about common sense morality. Common sense morality has several exceptions for things like having someone's consent to take on a risk, someone doing bad things to you, and innocent people being forced to do terrible things.

Given that exceptions exist, for times when we believe the general policy is bad, I am arguing that there should be an additional exception stating that: if there is a realistic chance that a bad outcome happens anyway, and you believe you can reduce the probability of this bad outcome happening (even after accounting for cognitive biases, sources of overconfidence, etc.), it can be ethically permissible to take actions that have side effects ar... (read more)

[-]Ben Pace4mo*2124

If I understand you correctly, you are arguing that it makes sense to follow deontological rules, even if there's a really good reason breaking them seems locally beneficial, because on average, the decision theory that's willing to do harmful things for complex reasons performs badly.

Hm… I would say that one should follow deontological rules like “don’t lie” and “don’t steal” and so on because we fail to understand or predict the knock-on consequences. For instance, lying and stealing can get the world into a much worse equilibrium of mutual liars/stealers in ways that are hard to predict, and being a good person can get the world into a much better equilibrium of mutually-honorable people in ways that are hard to predict. And also because, if breaking a rule does screw things up in some hard-to-predict way, then when you look back, the rule will often be the easiest line in the sand to draw.

For instance, if SBF is wondering at what point he could have most reliably intervened on his whole company collapsing and ruining the reputation of things associated with it, he might talk about certain deals he made or strategic plays with Binance or the US Govt, for he is not a very ethical person; I would tal... (read more)

1Neel Nanda4mo
Okay, after reading this it seems to me that we broadly do agree and are just arguing over price. I'm arguing that it is permissible to try to build a doomsday machine if there are really good reasons to believe it is net good for the probability of doomsday. It sounds like you agree, and give two examples of what "really good reasons" could be. I'm sure we disagree on the boundaries of where the really good reasons lie, but I'm trying to defend the point that you actually need to think about the consequences. What am I missing? Is it that you think these two are really good reasons, not because of the impact on the consequences, but because of the attitude/framing involved?
[-]ryan_greenblatt4mo*1718

I'm not Ben, but I think you don't understand. I think explaining what you are doing loudly in public isn't like "having a really good reason to believe it is net good"; it is instead more like asking for consent.

Like you are saying "please stop me by shutting down this industry", and if you don't get shut down, that is analogous to consent: you've informed society about what you're doing and why, and tried to ensure that if everyone else followed a similar sort of policy we'd be in a better position.

(Not claiming I agree with Ben's perspective here, just trying to explain it as I understand it.)

4Neel Nanda4mo
Ah! Thanks a lot for the explanation, that makes way more sense, and is much weaker than what I thought Ben was arguing for. Yeah this seems like a pretty reasonable position, especially "take actions where if everyone else took them we would be much better off" and I am completely fine with holding Anthropic to that bar. I'm not fully sold re the asking for consent framing, but mostly for practical reasons - I think there's many ways that society is not able to act constantly, and the actions of governments on many issues are not a reflection of the true informed will of the people, but I expect there's some reframe here that I would agree with.
2habryka4mo
I don't think Ryan (or I) was intending to imply a measure of degree, so my guess is unfortunately somehow communication still failed. Like, I don't think Ryan (or Ben) are saying "it's OK to do these things you just have to ask for consent". Ryan was just trying to point out a specific way in which things don't bottom out in consequentialist analysis. If you end up walking away with thinking that Ben believes "the key thing to get right for AI companies is to ask for consent before building the doomsday machine", which I feel like is the only interpretation of what you could mean by "weaker" that I currently have, then I think that would be a pretty deep misunderstanding.
4Neel Nanda4mo
OK, I'm going to bow out of the conversation at this point, I'd guess further back and forth won't be too productive. Thanks all!
4Ben Pace4mo
There is something important to me in this conversation about not trusting one’s consequentialist analysis when evaluating proposals to violate deontological lines, and from my perspective you still haven’t managed to paraphrase this basic ethical idea or shown you’ve understood it, which I feel a little frustrated over. Ah well. I still have been glad of this opportunity to argue it through, and I feel grateful to Neel for that.
5Mikhail Samin4mo
I actually agree with Neel that, in principle, an AI lab could race for AGI while acting responsibly and IMO not violating deontology. Releasing models exactly at the level of their top competitor, immediately after the competitor's release and a bit cheaper; talking to the governments and lobbying for regulation; having an actually robust governance structure and not doing a thing that increases the chance of everyone dying. This doesn't describe any of the existing labs, though.
4habryka4mo
I like a lot of your comment, but this feels like a total non-sequitur. Did anyone involved in this conversation say that Anthropic was acting under false pretenses? I don't think anyone brought up concerns that rest on assumptions of bad faith (though to be clear, Anthropic employees have mostly told me I should assume something like bad faith from Anthropic as an institution, that people should try to hold it accountable the same way as any other AI lab, and that people should not straightforwardly trust statements Anthropic makes without associated commitments; so I do think I would assume bad faith, but it mostly just feels beside the point in this discussion).
2Neel Nanda4mo
Ah, sorry, I was thinking of Mikhail's reply here, not anything you or Ben said in this conversation https://www.lesswrong.com/posts/BqwXYFtpetFxqkxip/mikhail-samin-s-shortform?commentId=w2doi6TzjB5HMMfmx But yeah, I'm happy to leave that aside, I don't think it's cruxy
2habryka4mo
Makes sense! I hadn't read that subthread, so was additionally confused.
2Mikhail Samin4mo
Killing anyone who hasn't done anything to lose deontological protection is wrong and clearly violates deontology. As a Nazi soldier, you lose deontological protection. There are many humans who are not even customers of any of the AI labs; they clearly have not lost deontological protection, and it's not okay to risk killing them without their consent.
6Neel Nanda4mo
I disagree with this as a statement about war; I'm sure a bunch of Nazi soldiers were conscripted, did not particularly support the regime, and were participating out of fear. Similarly, malicious governments have conscripted innocent civilians and kept them in line through fear in many unjust wars throughout history. And even people who volunteered may have done so due to being brainwashed by extensive propaganda that led to them believing they were doing the right thing. The real world is messy and strict deontological prohibitions break down in complex and high-stakes situations, where inaction also has terrible consequences - I strongly disagree with a deontological rule that says countries are not allowed to defend themselves against innocent people forced to do terrible things.
2Mikhail Samin4mo
My deontology prescribes not to join a Nazi army regardless of how much fear you're in. It's impossible to demand of people to be HPMOR!Hermione, but I think this standard works fine for real-world situations.

(While I do not wish any Nazi soldiers death, regardless of their views or reasons for their actions. There's a sense in which Nazi soldiers are innocent regardless of what they've done; none of them are grown up enough to be truly responsible for their actions. Every single death is very sad, and I'm not sure there has ever been even a single non-innocent human. At the same time, I think it's okay to kill Nazi soldiers (unless they're in the process of surrendering, etc.) or lie to them, and they don't have deontological protection.)

You're arguing it's okay to defend yourself against innocent people forced to do terrible things. I agree with that, and my deontology agrees with that. At the same time, killing everyone because otherwise someone else could've killed them with a higher chance = killing many people who aren't ever going to contribute to any terrible things. I think, and my deontology thinks, that this is not okay. Random civilians are not innocent Nazi soldiers; they're simply random innocent people. I ask of Anthropic to please stop working towards killing them.
3Neel Nanda4mo
And do you feel this way because you believe that the general policy of obeying such deontological prohibitions will on net result in better outcomes? Or because you think that even if there were good reason to believe that following a different policy would lead to better empirical outcomes, your ethics say that you should be deontologically opposed regardless?
5Mikhail Samin4mo
I think the general policy of obeying such deontological rules leads to better outcomes; this is the reason for having deontology in the first place. (I agree with that old post on what to do when it feels like there's a good reason to believe that following a different policy would lead to better outcomes.)
4habryka4mo
(Just as a datapoint, while largely agreeing with Ben here, I really don't buy this concept of deontological protection of individuals. I think there are principles we have about when it's OK to kill someone, but I don't think the lines we have here route through individuals losing deontological protection.  Killing a mass murderer while he is waiting for trial is IMO worse than killing a civilian in collateral damage as part of taking out an active combatant, because it violates and messes with different processes, which don't generally route through individuals "losing deontological protection" but instead are more sensitive to the context the individuals are in)
2Mikhail Samin4mo
Locally: can you give an example of when it’s okay to kill someone who didn’t lose deontological protection, where you want to kill them because of the causal impact of their death?
4Ben Pace4mo
To me the issue goes the other way. The idea of “losing deontological protection” suggests I’m allowed to ignore deontological rules when interacting with someone. But that seems obviously crazy to me. For instance I think there’s a deontological injunction against lying, but just because someone lies doesn’t now mean I’m allowed to kill them. It doesn’t even mean I’m allowed to lie to them. I think lying to them would still be about as wrong as it was before, not a free action I can take whenever I feel like it.
3habryka4mo
I mean, a very classical example that I've seen a few times in media is shooting a civilian who is about to walk into a minefield in which multiple other civilians or military members are located. It seems tragic but obviously the right choice to shoot them if they don't heed your warning.  IDK, I also think it's the right choice to pull the lever in the trolley problem, though the choice becomes less obvious the more it involves active killing as opposed to literally pulling a lever.
7Knight Lee3mo
Sorry for replying to a dead thread, but: murder implies an intent to kill someone.

Suppose I hire a hitman to kill you. But suppose there already are 3 hitmen trying to kill you, and I'm hoping my hitman would reach you first, and I know that my hitman has really bad aim. Once the first hitman reaches you and starts shooting, the other hitmen will freak out and run away, so I'm hoping you're more likely to survive. I have no other options for saving you, since the only contact I have is a hitman, and he's very bad at English and doesn't understand any instructions except trying to kill someone.

In this case, you can argue to the court that my plan to save you was retarded. But you cannot concede that my plan actually was a good idea consequentially yet deontologically unethical, since I didn't intend to kill anyone. Deontology only kicks in when your plan involves making someone die, or greatly increasing the chance that someone dies.
7Mikhail Samin3mo
I feel like it's actually a great analogy! The only difference is that if your hitman starts shooting and doesn't kill anyone, you get infinite gold. You know that in real life you go to the police instead of hiring a hitman, right? And I claim that it's really not okay to hire a hitman who might lower the chance of the person ending up dead, especially when your brain is aware of the infinite gold part. The good strategy for anyone in that situation to follow is to go to the police or go public, and not hire any additional hitmen.
3Knight Lee3mo
Yeah, it's less deontologically bad than murder, but I admit it's still not completely okay.

PS: Part of the reason I used the unflattering hitman analogy is because I'm no longer as optimistic about Anthropic's influence. They routinely describe other problems (e.g. winning the race against China to defend democracy) with the same urgency as AI Notkilleveryoneism. The only way to believe that AI Notkilleveryoneism is still Anthropic's main purpose is to hope that:

* They describe a ton of other problems with the same urgency as AI Notkilleveryoneism, but that is only due to political necessity.
* At the same time, their apparent concern for AI Notkilleveryoneism is not just a political maneuver, but significantly more genuine.

This "hope" is plausible since the people in charge of Anthropic prefer to live, and consistently claimed to have high P(doom). But it's not certain, and there is circumstantial evidence suggesting this isn't the case (e.g. their lobbying direction, and how they're choosing people for their board of directors). Maybe 50% this hope is just cope :(
6Ben Pace3mo
I don’t agree that deontology is about intent. Deontology is about action. Deontology is about not hiring hitmen to kill someone even if you have a really good reason, and even if your intent is good. Deontology is substantially about Schelling lines of action where everything gets hard to predict and goes bad after you commit it.

I imagine that your incompetent hitman has only like a 50% chance of succeeding, whereas the others have ~100%; that seems deontologically wrong to me. It seems plausible that what you mean to say by the hypothetical is that he has a 0% chance.

* I admit this is more confusing and I’m not fully resolved on this.
* I notice I am confused about how you can get that epistemic state in real life.
* I observe that society will still prosecute you for attempted murder if you buy a hitman off the dark web, even one with a clearly incompetent reputation for 0/10 kills or whatever.
* I think society’s ability to police this line is not as fine-grained as you’re imagining, and so you should not buy incompetent hitmen in order to not kill your friend, unless you’re willing to face the consequences.
1Knight Lee3mo
To be honest I couldn't resist writing the comment because I just wanted to share the silly thought :/

Now that I think about it, it's much more complicated. Mikhail Samin is right that the personal incentive of reaching AGI first really complicates the good intentions. And while a lot of deontology is about intent, it's hyperbole to say that deontology is just intent.

I think if your main intent is to save someone (and not personal gain), and your plan doesn't require or seek anyone's death, then it is deontologically much less bad than evil things like murder. But it may still be too bad for you to do, if you strongly lean towards deontology rather than consequentialism. Even if the court doesn't find you guilty of first degree murder, it may still find you guilty of... some... things.

One might argue that the enormous scale (risking everyone's death instead of only one person) makes it deontologically worse. But I think the balance does not shift in favor of deontology and against consequentialism as we increase the scale (it might even shift a little in favor of consequentialism?).
3MondSemmel4mo
That's fair, but the deontological argument doesn't work for anyone building the extinction machine who is unconvinced by x-risk arguments, or deludes themselves that it's not actually an extinction machine, or that extinction is extremely unlikely, or that the extinction machine is the only thing that can prevent extinction (as in all the alignment via AI proposals) etc. etc.
6Mikhail Samin4mo
This is not the case for many at Anthropic.
6Ben Pace4mo
True; in general, many people who behave poorly do not know that they do so.
1sjadler4mo
Plugging that I wrote a post which quotes Anthropic execs at length describing their views on race to the top: https://open.substack.com/pub/stevenadler/p/dont-rely-on-a-race-to-the-top (and yes agreed with Neel’s summary)
5[anonymous]4mo
I suppose if you think it's less likely there will be killing involved if you're the one holding the overheating gun than if someone else is holding it, that hard line probably goes away.
6Ben Pace4mo
Just because someone else is going to kill me, doesn’t mean we don’t have an important societal norm against murder. You’re not allowed to kill old people just because they’ve only got a few years left, or kill people with terminal diseases.
7[anonymous]4mo
I don't see how that at all addresses the analogy I made.
5Ben Pace4mo
I am not quite sure what an overheating gun refers to; I am guessing the idea is that it has some chance of going off without being fired. Anyhow, if that's accurate, it's acceptable to decide to be the person holding an overheating gun, but it's not acceptable to (for example) accept a contract to assassinate someone so that you get to have the overheating gun, or to promise to kill slightly fewer people with the gun than the next guy. Like, I understand that consequentially fewer deaths happen, but our society has deontological lines, which are good, against committing murder even given consequentialist arguments. You're not allowed to commit murder even if you have a good reason.
2MondSemmel4mo
I fully expect we're doomed, but I don't find this attitude persuasive. If you don't want to be killed, you advocate for actions that hopefully result in you not being killed, whereas this action looks like it just results in you being killed by someone else. Like you're facing a firing squad and pleading specifically with just one of the executioners.
1Ben Pace4mo
I just want to clarify that Anthropic doesn’t have the social authority of a governmental firing squad to kill people.
8MondSemmel4mo
For me the missing argument in this comment thread is the following: Has anyone spelled out the arguments for how it's supposed to help us, even incrementally, if one AI lab (rather than all of them) drops out of the AI race? Suppose whichever AI lab is most receptive to social censure could actually be persuaded to drop out; don't we then just end in an Evaporative Cooling of Group Beliefs situation where the remaining participants in the race are all the more intransigent?
[-][anonymous]4mo114

Has anyone spelled out the arguments for how it's supposed to help us, even incrementally, if one AI lab (rather than all of them) drops out of the AI race?

An AI lab dropping out helps in two ways:

  1. timelines get longer because the smart and accomplished AI capabilities engineers formerly employed by this lab are no longer working on pushing for SOTA models/no longer have access to tons of compute/are no longer developing new algorithms to improve performance even holding compute constant. So there is less aggregate brainpower, money, and compute dedicated to making AI more powerful, meaning the rate of AI capability increase is slowed. With longer timelines, there is more time for AI safety research to develop past its pre-paradigmatic stage, for outreach effort to mainstream institutions to start paying dividends in terms of shifting public opinion at the highest echelons, for AI governance strategies to be employed by top international actors, and for moonshots like uploading or intelligence augmentation to become more realistic targets.
  2. race dynamics become less problematic because there is one less competitor other top labs have to worry about, so they don't need to pump out top
... (read more)
1ProgramCrafter4mo
For that to be the case, instead of engineers entering another company, we should suggest other tasks to them. There are very questionable technologies shipped indeed (for example, social media with automatic recommendation algorithms), but someone would have to connect the engineers to the tasks.
6MichaelDickens4mo
I agree with sunwillrise but I think there is an even stronger argument for why it would be good for an AI company to drop out of the race. It is a strong jolt that has a good chance of waking up the world to AI risk. It sends a clear message: I don't know exactly what effect that would have on public discourse, but the effect would be large.
1MondSemmel4mo
Larger than the OpenAI board fiasco? I doubt it.
2MichaelDickens4mo
A board firing a CEO is a pretty normal thing to happen, and it was very unclear that the firing had anything to do with safety concerns because the board communicated so little. A big company voluntarily shutting down because its product is too dangerous is (1) a much clearer message and (2) completely unprecedented, as far as I know. In my ideal world, the company would be very explicit that they are shutting down specifically because they are worried about AGI killing everyone.
2Ben Pace4mo
I make the case here for stopping based on deontological rather than consequentialist reasons.
7faul_sname4mo
My understanding was that LessWrong, specifically, was a place where bad arguments are (aspirationally) met with counterarguments, not with attempts to suppress them through coordinated social action. Is this no longer the case, even aspirationally?
[-]habryka4mo106

I think it would be bad to suppress arguments! But I don't see any arguments being suppressed here. Indeed, I see Zack as trying to create a standard where (for some reason) arguments about AI labs being reckless must be made directly to the people who are working at those labs, and other arguments should not be made, which seems weird to me. The OP seems to me like it's making fine arguments.

I don't think it was ever a requirement for participation on LessWrong to only ever engage in arguments that could change the minds of the specific people who you would like to do something else, as opposed to arguments that are generally compelling and might affect those people in indirect ways. It's nice when it works out, but it really doesn't seem like a tenet of LessWrong.

2faul_sname4mo
Ah, I had (incorrectly) interpreted "It's eminently reasonable for people to just try to stop whatever is happening, which includes intention for social censure, convincing others, and coordinating social action" as being an alternative to engaging at all with the arguments of people who disagree with your positions here, rather than an alternative to having that standard in the outside world with people who are not operating under those norms.
2Zach Stein-Perlman4mo
Sure, censure among people who agree with you is a fine thing for a comment to do. I didn't read Mikhail's comment that way because it seemed to be asking Anthropic people to act differently (but without engaging with their views).
[-]habryka4mo1413

It's OK to ask people to act differently without engaging with your views! If you are stabbing my friends and family I would like you to please stop, and I don't really care about engaging with your views. The whole point of social censure is to ask people to act differently even if they disagree with you, that's why we have civilization and laws and society.

[-]habryka4mo3215

I think Anthropic leadership should feel free to propose a plan to do something that is not "ship SOTA tech like every other lab". In the absence of such a plan, seems like "stop shipping SOTA tech" is the obvious alternative plan.

Clearly in-aggregate the behavior of the labs is causing the risk here, so I think it's reasonable to assume that it's Anthropic's job to make an argument for a plan that differs from the other labs. At the moment, I know of no such plan. I have some vague hopes, but nothing concrete, and Anthropic has not been very forthcoming with any specific plans, and does not seem on track to have one.

9Vaniver4mo
Note that Anthropic, for the early years, did have a plan to not ship SOTA tech like every other lab, and changed their minds. (Maybe they needed the revenue to get the investment to keep up; maybe they needed the data for training; maybe they thought the first mover effects would be large and getting lots of enterprise clients or w/e was a critical step in some of their mid-game plans.) But I think many plans here fail once considered in enough detail.
2Stephen McAleese4mo
Anthropic’s responsible scaling policy does mention pausing scaling if the capabilities of their models exceed their best safety methods.

I think OP and others in the thread are wondering why Anthropic doesn’t stop scaling now given the risks. I think the reason why is that in practice doing so would create a lot of problems:

* How would Anthropic fund their safety research if Claude is no longer SOTA and becomes less popular?
* Is Anthropic supposed to learn from and test only models at current levels of capability, and how does it learn about future advanced model behaviors? I haven’t heard a compelling argument for how we could solve superalignment by studying much less advanced models. Imagine trying to align GPT-4 or o3 by only studying and testing GPT-2 from 2019. In reality, future models will probably have lots of unknown unknowns and emergent properties that are difficult or impossible to predict in advance. And then there are all the social consequences of AI, like misuse, which are difficult to predict in advance.

Although I’m skeptical that alignment can be solved without a lot of empirical work on frontier models, I still think it would be better if AI progress were slower.
4Mikhail Samin4mo
I don’t expect Anthropic to stick to any of their policies when competitive pressure means they have to train and deploy and release or be left behind. None of their commitments are of a kind they wouldn’t be able to walk back on. Anthropic accelerates capabilities more than safety; they don’t even support regulation, with many people internally being misled about Anthropic’s efforts. None of their safety efforts meaningfully contributed to solving any of the problems you’d have to solve to have a chance of having something much smarter than you that doesn’t kill you. I’d be mildly surprised if there’s a consensus at Anthropic that they can solve superalignment. The evidence they’re getting shows, according to them, that we live in an alignment-is-hard world. If any of these arguments are Anthropic’s, I would love for them to say that out loud.
7Mikhail Samin4mo
I’ve generally been aware of/can come up with some arguments; I haven’t heard them in detail from anyone at Anthropic, and would love for Anthropic to write up the plan that includes reasoning why shipping SOTA models helps humanity survive instead of doing the opposite thing. The last time I saw Anthropic’s claimed reason for existing, it later became an inspiration for
4eggsyntax4mo
I'm confused about why you're pointing to Anthropic in particular here. Are they being overoptimistic in a way that other scaling labs are not, in your view?
7Mikhail Samin4mo
Unlike other labs, Anthropic is full of people who care and might leave capabilities work or push for the leadership to be better. It’s a tricky place to be in: if you’re responsible enough, you’ll hear more criticism than less responsible actors, because criticism can still change what you’re doing. Other labs are much less responsible, to be clear. There’s not a lot (I think) my words here can do about that, though.
2eggsyntax4mo
Got it. It might be worth adding something like that to the post, which in my opinion reads as if it's singling out Anthropic as especially deserving of criticism.
4DirectedEvolution4mo
I understand your argument and it has merit, but I think the reality of the situation is more nuanced.

Humanity has long built buildings and bridges without access to formal engineering methods for predicting the risk of collapse. We might regard it as unethical to build such a structure now without using the best practically available engineering knowledge, but we do not regard it as having been unethical to build buildings and bridges historically due to the lack of modern engineering materials and methods. They did their best, more or less, with the resources they had access to at the time.

AI is a domain where the current state of the art safety methods are in fact being applied by the major companies, as far as I know (and I’m completely open to being corrected on this). In this respect, safety standards in the AI field are comparable to those of other fields. The case for existential risk is approximately as qualitative and handwavey as the case for safety, and I think that both of these arguments need to be taken seriously, because they are the best we currently have.

It is disappointing to see the cavalier attitude with which pro-AI pundits dismiss safety concerns, and obnoxious to see the overly confident rhetoric deployed by some in the safety world when they tweet about their p(doom). It is a weird and important time in technology, and I would like to see greater open-mindedness and thoughtfulness about the ways to make progress on all of these important issues.
3Viliam4mo
Perhaps the answer is right there, in the name. The future Everett branches where we still exist will indeed be the ones where we have magically passed the hardest test on the first try.
5Mikhail Samin4mo
Branches like that don’t have a lot of reality-fluid and lost most of the value of our lightcone; you’re much more likely to find yourself somewhere before that.
2Mikhail Samin4mo
Does “winning the race” actually give you a lever to stop disaster, or does it just make Anthropic the lab responsible for the last training run? Does access to more compute and more model scaling, with today’s field understanding, truly give you more control—or just put you closer to launching something you can’t steer? Do you know how to solve alignment given even infinite compute? Is there any sign, from inside your lab, that safety is catching up faster than capabilities? If not, every generation of SOTA increases the gap, not closes it.

“Build the bomb, because if we don’t, someone worse will.” Once you’re at the threshold where nobody knows how to make these systems steerable or obedient, it doesn't matter who is first—you still get a world-ending outcome.

If Anthropic, or any lab, ever wants to really make things go well, the only winning move is not to play, and to try hard to make everyone not play. If Anthropic was what it imagines itself being, it would build robust field-wide coordination and support regulation that would be effective globally, even if it means watching over your shoulder for colleagues and competitors across the world.

If everyone justifies escalation as “safety”, there is no safety. In the end, if the race leads off a cliff, the team that runs fastest doesn’t “win”: they just get there first. That’s not leadership. It’s tragedy.

If you truly care about not killing everyone, there will have to be a point—maybe now—where some leaders stop, even if it costs, and demand a solution that doesn't sacrifice the long-term for a financial gain due to having a model slightly better than those of your competitors.

Anthropic is in a tricky place. Unlike other labs, it is full of people who care. The leadership has to adjust for that. That makes you one of the few people in history who has the chance to say “no” to the spiral to the end of the world and demand of your company to behave responsibly.

(note: many of these points are AI-generated by
[-]Mikhail Samin24d*669

I have great empathy and deep respect for the courage of the people currently on hunger strikes to stop the AI race. Yet, I wish they hadn’t started them: these hunger strikes will not work.

Hunger strikes can be incredibly powerful when there’s a just demand, a target who would either give in to the demand or be seen as a villain for not doing so, a wise strategy, and a group of supporters.

I don’t think these hunger strikes pass the bar. Their political demands are not what AI companies would realistically give in to because of a hunger strike by a small number of outsiders.

A hunger strike can bring attention to how seriously you perceive an issue. If you know how to make it go viral, that is; in the US, hunger strikes are rarely widely covered by the media. And even then, you are more likely to marginalize your views than to make them go more mainstream: if people don’t currently think halting frontier general AI development requires hunger strikes, a hunger strike won’t explain to them why your views are correct: this is not self-evident just from the description of the hunger strike, and so the hunger strike is not the right approach here and now.

Also, our movement does not need... (read more)

[-]Michaël Trazzi23d6536

Hi Mikhail, thanks for offering your thoughts on this. I think having more public discussion on this is useful and I appreciate you taking the time to write this up.

I think your comment mostly applies to Guido in front of Anthropic, and not our hunger strike in front of Google DeepMind in London.

Hunger strikes can be incredibly powerful when there’s a just demand, a target who would either give in to the demand or be seen as a villain for not doing so, a wise strategy, and a group of supporters.

I don’t think these hunger strikes pass the bar. Their political demands are not what AI companies would realistically give in to because of a hunger strike by a small number of outsiders.

  • I don't think I have been framing Demis Hassabis as a villain and if you think I did it would be helpful to add a source for why you believe this.
  • I'm asking Demis Hassabis to "publicly state that DeepMind will halt the development of frontier AI models if all the other major AI companies agree to do so," which I think is a reasonable thing to state given all the public statements he has made regarding AI Safety. I think that is indeed something that a company such as Google DeepMind would give in to.

A hunger strike ca

... (read more)
4Mikhail Samin23d
Thanks for responding!

Yep! A hunger strike is not a good tool if you don’t want to paint someone as a villain in the eyes of the public when they don’t give in to your demand.

It is vanishingly unlikely that all other major AI companies would agree to do so without the US government telling them to; this statement would be helpful, but only to communicate their position and not because of the commitment itself. Why not ask them to ask the government to stop everyone (maybe conditional on China agreeing to stop everyone in China)?

If any of them go viral in the US with a good message, I’ll (somewhat) change my mind!

This was mainly my impression after talking to Guido; but do you want to say more about the impact you think you’ll have? (Can come back to it at the end of the year; if you have any advance predictions, they might be helpful to have posted!)

I hope you remain safe and are not proven otherwise! Hunger strikes do carry negative risks though. Do you have particular plans for how long to be on the hunger strike for?
5Ben Pace23d
I have sent myself an email to arrive on December 20th to send you both a reminder about this thread.
5Ben Pace23d
Is there any form of protest that doesn't implicitly imply that the person you're protesting is doing something wrong? When the thing wrong is "causing human extinction" it seems to me kind of hard for that to not automatically be assumed 'villainous'. (Asking genuinely; I think it quite probable that the answer is 'yes'.)
-1Mikhail Samin23d
Something like: hunger strikes are optimized hard specifically for painting someone as a villain, because that someone decides to make a person suffer or die (or be inhumanely fed). This is different from other forms of protest, which are more focused on, e.g., arguing that specific decisions are bad and should be revoked, and don't necessarily try to make people perceive the other side as evil.
3J Bostock22d
I don't really see the problem with painting people as evil in principle, given that some people are evil. You can argue against it in specific cases, but I think the case for AI CEOs being evil is strong enough that it can't be dismissed out of hand. The case in question is "AI CEOs are optimising for their short-term status/profits, and for believing things about the world which maximise their comfort, rather than doing the due diligence required of someone in their position, which is to seriously check whether their company is building something which kills everyone" Whether this is a useful frame for one's own thinking---or a good frame to deploy onto the public---I'm not fully sure, but I think it does need addressing. Of course it might also differ between CEOs. I think Demis and Dario are two of the CEOs who it's relatively less likely to apply to, but also I don't think it applies weakly enough for them to be dismissed out of hand even in their cases.
4Mikhail Samin22d
"People are on hunger strikes" is not really a lot of evidence for "AI CEOs are optimizing for their short-term status/profits and are not doing the due diligence" in the eyes of the public. I don't think there's any problem with painting people and institutions as evil, I'm just not sure why you would want to do this here, as compared to other things, and would want people to have answers to how they imagine a hunger strike would paint AI companies/CEOs and what would be the impact of that, because I expect little that could move the needle.
2J Bostock22d
That is true. "People are on hunger strikes and the CEOs haven't even commented" is (some) public evidence of "AI CEOs are unempathetic" I misunderstood your point, I thought you were arguing against painting individuals as evil in general.
1Matrice Jacobine22d
This seems to be exactly the point of the demand? This is a demand that would be cheap (perhaps even of negative cost) for DeepMind to accept (because the other AI companies wouldn't agree to that), and would also be a major publicity win for the Pause AI crowd. Even counting myself skeptical of the hunger strikes, I think this is a very smart move.
2Mikhail Samin21d
The demand is that a specific company agrees to halt if everyone halts; this does not help in reality, because in fact it won't be the case that everyone halts (absent gov intervention).
3Matrice Jacobine21d
I don't think the point of hunger strikes is to achieve immediate material goals, but publicity/symbolic ones.
[-]Cole Wyeth24d3723

Action is better than inaction; but please stop and think of your theory of change for more than five minutes,

I think there's a very reasonable theory of change - X-risk from AI needs to enter the Overton window. I see no justification here for going to the meta-level and claiming they did not think for 5 minutes, which is why I have weak downvoted in addition to strong disagree. 

This tactic might not work, but I am not persuaded by your supposed downsides. The strikers should not risk their lives, but I don't get the impression that they are. The movement does need people who are eating -> working on AI safety research, governance, and other forms of advocacy. But why not this too? Seems very plausibly a comparative advantage for some concerned people, and particularly high leverage when very few are taking this step. If you think they should be doing something else instead, say specifically what it is and why these particular individuals are better suited to that particular task.

4MichaelDickens24d
Michaël Trazzi's comment, which he wrote a few hours before he started his hunger strike, isn't directly about hunger striking but it does indicate to me that he put more than 5 minutes of thought into the decision, and his comment gestures at a theory of change.
4J Bostock24d
I spoke to Michaël in person before he started. I told him I didn't think the game theory worked out (if he's not willing to die, GDM should ignore him; if he does die then he's worsening the world, since he can definitely contribute better by being alive, and GDM should still ignore him).

I don't think he's going to starve himself to death or to the point of serious harm, but that does make the threat empty. I don't really think that matters too much on a game-theoretic-reputation basis, since nobody seems to be expecting him to do that.

His theory of change was basically "If I do this, other people might", which seems to be true: he did get another person involved. That other person has said they'll do it for "1-3 weeks", which I would say is unambiguously not a threat to starve oneself to death.

As a publicity stunt it has kinda worked in the basic sense of getting publicity. I think it might change the texture and vibe of the AI protest movement in a direction I would prefer it to not go in. It certainly moves the salience-weighted average of public AI advocacy towards Stop AI-ish things.
8yams24d
As Mikhail said, I feel great empathy and respect for these people. My first instinct was similar to yours, though - if you’re not willing to die, it won’t work, and you probably shouldn’t be willing to die (because that also won’t work / there are more reliable ways to contribute / timelines uncertainty).

I think ‘I’m doing this to get others to join in’ is a pretty weak response to this rebuttal. If they’re also not willing to die, then it still won’t work, and if they are, you’ve wrangled them in at more risk than you’re willing to take on yourself, which is pretty bad (and again, it probably still won’t work even if a dozen people are willing to die on the steps of the DeepMind office, because the government will intervene, or they’ll be painted as loons, or the attention will never materialize and their ardor will wane).

I’m pretty confused about how, under any reasonable analysis, this could come out looking positive EV. Most of these extreme forms of protest just don’t work in America (e.g. the soldier who self-immolated a few years ago). And if it’s not intended to be extreme, they’ve (I presume accidentally) misbranded their actions.
2J Bostock22d
Fair enough. I think these actions are +ev under a coarse-grained model where some version of "Attention on AI risk" is the main currency (or a slight refinement to "Not-totally-hostile attention on AI risk"). For a domain like public opinion and comms, I think that deploying a set of simple heuristics like "Am I getting attention?" "Is that attention generally positive?" "Am I lying or doing something illegal?" can be pretty useful.

Michael said on twitter here that he's had conversations with two sympathetic DeepMind employees, plus David Silver, who was also vaguely sympathetic. This itself is more +ev than I expected already, so I'm updating in favour of Michael here.

It's also occurred to me that if any of the CEOs cracks and at least publicly responds to the hunger strikers, then the CEOs who don't do so will look villainous, so you actually only need to have one of them respond to get a wedge in.
2Mikhail Samin22d
"Attention on AI risk" is a somewhat very bad proxy to optimize for, where available tactics include attention that would be paid to luddites, lunatics, and crackpots caring about some issue. The actions that we can take can: * Use what separates us from people everyone considers crazy: that our arguments check out and our predictions hold; communicate those; * Spark and mobilize existing public support; * Be designed to optimize for positive attention, not for any attention. I don't think DeepMind employees really changed their minds? Like, there are people at DeepMind with p(doom) higher than Eliezer's; they would be sympathetic; would they change anything they're doing? (I can imagine it prompting them to talk to others at DeepMind, talking about the hunger strike to validate the reasons for it.) I don't think Demis responding to the strike would make Dario look particularly villainous, happy to make conditional bets. How villainous someone looks here should be pretty independent, outside of eg Demis responding, prompting a journalist to ask Dario, which takes plausible deniability away from him. I'm also not sure how effective it would be to use this to paint the companies (or the CEOs-- are they even the explicit targets of the hunger strikes?) as villainous.
2Mikhail Samin24d
To clarify, "think for five minutes" was an appeal to people who might want to do these kinds of things in the future, not a claim about Guido or Michael. That said, I do in fact claim they have not thought carefully about their theory of change, and the linked comment from Michael lists very obvious surface-level reasons for why do this in front of anthropic and not openai; I really would not consider this on the level of demonstrating having thought carefully about the theory of change.
1Mikhail Samin24d
While in principle, as I mentioned, a hunger strike can bring attention, this is not an effective way to do this for the particular issue that AI will kill everyone by default. The diff to communicate isn't "someone is really scared of AI ending the world"; it's "scientists think AI might literally kill everyone and also here are the reasons why". This was not a claim about these people but an appeal to potential future people to maybe do research on this stuff before making decisions like this one.

That said, I talked to Guido prior to the start of the hunger strike, tried to understand his logic, and was not convinced he had any kind of reasonable theory of change guiding his actions, and my understanding is that he perceives it as the proper action to take, in a situation like that, which is why I called this vibe-protesting. (It's not very clear what would be the conditions for them to stop the hunger strikes.)

Hunger strikes can be very effective and powerful if executed wisely. My comment expresses my strong opinion that this did not happen here, not that it can't happen in general.
[-]Ben Pace24d1912

At the moment, these hunger strikes are people vibe-protesting.

I think I somewhat agree, but also I think this is a more accurate vibe than “yay tech progress”. It seems like a step in the right direction to me.

Please don’t risk your life; especially, please don’t risk your life in this particular way that won’t change anything.

Action is better than inaction; but please stop and think of your theory of change for more than five minutes, if you’re planning to risk your life, and then don’t risk your life; please pick actions thoughtfully and wisely and not because of the vibes.

You repeat a recommendation not to risk your life. Um, I’m willing to die to prevent human extinction. The math is trivial. I’m willing to die to reduce the risk by a pretty small percentage. I don’t think a single life here is particularly valuable on consequentialist terms.
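
(For concreteness, a minimal sketch of the arithmetic being gestured at here, using assumed illustrative numbers: a world population of roughly 8 billion and a risk reduction of one in a million, neither of which comes from the comment.)

$$\text{expected lives saved} \approx \Delta p \times N = 10^{-6} \times (8 \times 10^{9}) = 8{,}000 \gg 1$$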

There’s important deontology about not unilaterally risking other people’s lives, but this mostly goes away in the case of risking your own life. This is why there are many medical ethics guidelines that separate self-experimentation as a special case from rules for experimenting on others (and that’s been used very well in many cases and ... (read more)

Reply22
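Spelled out, the kind of arithmetic gestured at with "the math is trivial" looks like the following; the numbers are purely illustrative assumptions, not figures from the thread.

```latex
% Purely illustrative: N = currently living people, \Delta p = reduction in extinction risk.
\text{Expected lives saved} = \Delta p \cdot N,
\qquad \text{e.g. } \Delta p = 10^{-6},\ N = 8\times 10^{9}
\ \Rightarrow\ \Delta p \cdot N = 8000 \gg 1.
```

Counting future generations only strengthens the inequality; the deontological considerations raised in the same comment are a separate matter.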
7Garrett Baker24d
I don't think so, I agree we shouldn't have laws around this, but insofar as we have deontologies to correct for circumstances where historically our naive utility maximizing calculations have been consistently biased, I think there have been enough cases of people uselessly martyring themselves for their causes to justify a deontological rule not to sacrifice your own actual life. Edit: Basically, I don't want suicidal people to back-justify batshit insane reasons why they should die to decrease x-risk instead of getting help. And I expect these are the only people who would actually be at risk for a plan which ends with "and then I die, and there is 1% increased probability everyone else gets the good ending".
[-]Ben Pace24d*174

I recently read The Sacrifices We Choose to Make by Michael Nielsen, which was a good read. Here are some relevant extracts.

At the time, South Vietnam was led by President Ngo Dinh Diem, a devout Catholic who had taken power in 1955, and then instigated oppressive actions against the Buddhist majority population of South Vietnam. This began with measures like filling civil service and army posts with Catholics, and giving them preferential treatment on loans, land distribution, and taxes. Over time, Diem escalated his measures, and in 1963 he banned flying the Buddhist flag during Vesak, the festival in honour of the Buddha's birthday. On May 8, during Vesak celebrations, government forces opened fire on unarmed Buddhists who were protesting the ban, killing nine people, including two children, and injured many more.

[...]

Unfortunately, standard measures for negotiation – petitions, street fasting, protests, and demands for concessions – were ignored by the Diem government, or met with force, as in the Vesak shooting.

[...]

Since conventional measures were failing, the Inter-Sect Committee decided to consider more extreme measures, including the idea of a voluntary self-immolation. Wh

... (read more)
Reply2
4Garrett Baker24d
I'm not certain if there's a particular point you want me to take away from this, but thanks for the information, and including an unbiased sample from the article you linked. I don't think I changed my mind so much from reading this though.
6Ben Pace24d
Do you also believe there is a deontological rule against suicide? I have heard rumor that most people who attempt suicide and fail, regret it. At the same time, I think some lives are worse than death (for example, see Amanda Luce's Book Review: Two Arms And A Head that won the ACX book review prize), and so I believe it should be legal and sometimes supported, even if it were the case that most attempted suicides have been regretted.
[-]Wei Dai24d182

I have heard rumor that most people who attempt suicide and fail, regret it.

After doing some research on this, I think this is unlikely to be true. The only quantitative study I found says that among its sample of suicide attempt survivors, 35.6% are glad to have survived, while 42.7% feel ambivalent, and 21.6% regret having survived. I also found a couple of sources agreeing with your "rumor", but one cited just a suicide awareness trainer as its source, while the other cited the above study as the only evidence for its claim, somehow interpreting it as "Previous research has found that more than half of suicidal attempters regret their suicidal actions." (Gemini 2.5 Pro says "It appears the authors of the 2023 paper misinterpreted or misremembered the findings of the 2005 study they cited.")

If this "rumor" was true, I would expect to see a lot of studies supporting it, because such studies are easy to do and the result would be highly useful for people trying to prevent suicides (i.e., they can use it to convince potential suicide attempters that they're likely to regret it). Evidence to the contrary are likely to be suppressed or not gathered in the first place, as almost nob... (read more)

Reply75
4Lukas Finnveden24d
Interesting, thanks. I think I had heard the rumor before and believed it.  In the linked study, it looks like they asked the people about regret very shortly after the suicide attempt. This could both bias the results towards less regret to have survived (little time to change their mind) or more regret to have survived (people might be scared to signal intent to retry suicide, for fear of being committed, which I think sometimes happens soon after failed attempts). 
2Lucius Bushnaq24d
I think very very many people are not making an informed decision when they decide to commit suicide. For example, I think quantum immortality is quite plausibly a thing. Very few people know about quantum immortality and even fewer have seriously thought about it. This means that almost everyone on the planet might have a very mistaken model of what suicide actually does to their anticipated experience.[1] Also, many people are religious and believe in a pleasant afterlife. Many people considering suicide are mentally ill in a way that compromises their decision making. Many people think transhumanism is impossible and won't arrange for their brain to be frozen for that reason.

I agree that there is some threshold on the fraction of ill-considered suicides relative to total suicides such that suicide should be legal if we were below that threshold. I used to think we were maybe below that threshold. After I began studying physics at uni and so started taking quantum immortality more seriously, I switched to thinking we are maybe above the threshold.

1. ^ You might find yourself in a branch where your suicide attempt failed, but a lot of your body and mind were still destroyed. If you keep exponentially decreasing the amplitude of your anticipated future experience in the universal wave function further, you might eventually find that it is now dominated by contributions from weird places and branches far-off in spacetime or configuration space that were formerly negligible, like aliens simulating you for some negotiation or other purpose. I don't really know yet how to reason well about what exactly the most likely observed outcome would be here. I do expect that by default, without understanding and careful engineering our civilisation doesn't remotely have the capability for yet, it'd tend to be very Not Good.
6Ben Pace24d
This all feels galaxy-brained to me and like it proves too much. By analogy I feel like if you thought about population ethics for a while and came to counterintuitive conclusions, you might argue that people who haven't done that shouldn't be allowed to have children; or if they haven't thought about timeless decision theory for a while they aren't allowed to get a carry license.
2Lucius Bushnaq24d
I don't think it proves too much. Informed decision-making comes in degrees, and some domains are just harder? Like, I think my threshold for leaving people free to make their own mistakes if they are the only ones harmed by them is very low, compared to where the human population average seems to be at the moment. But my threshold is, in fact, greater than zero. For example, there are a bunch of things I think bystanders should generally prevent four-year-old human children from doing, even if the children insist that they want to do them. I know that stopping four-year-old children from doing these things will be detrimental in some cases, and that having such policies is degrading to the children's agency. I remember what it was like being four years old and feeling miserable because of kindergarten teachers who controlled my day and thought they knew what was best for me. I still think the tradeoff is worth it on net in some cases. I just think that the suicide thing happens to be a case where doing informed decision-making is maybe just too tough for way too many humans and thus some form of ban could plausibly be worth it on net. Sports betting is another case where I was eventually convinced that maybe a legal ban of some form could be worth it.
1Mikhail Samin24d
(I agree with Lucious in that I think it is important that people have the option of getting cryopreserved and also are aware of all the reality-fluid stuff before they decide to kill themselves.)
2Ben Pace24d
"Important" is ambiguous, in that I agree it matters, but it does for this civilization to ban whole life options from people until they have heard about niche philosophy. Most people will never hear about niche philosophy.
1Sohaib Imran24d
I don’t think quantum immortality changes anything. You can reframe this in terms of standard probability theory, condition on them continuing to have subjective experience, and still get the same calculus. However, only considering the branches in which you survive, or conditioning on having subjective experience after the suicide attempt, ignores the counterfactual suffering prevented in all the branches (or probability mass) in which you did die, which may be less unpleasant than the branches in which you survived, but are many many more in number! Ignoring those branches biases the reasoning toward rare survival tails that don't dominate the actual expected utility.
3Lucius Bushnaq24d
I agree that quantum mechanics is not really central for this on a philosophical level. You get a pretty similar dynamic just from having a universe that is large enough to contain many almost-identical copies of you. It's just that it seems at present very unclear and arguable whether the physical universe is in fact anywhere near that large, whereas I would claim that a universal wavefunction which constantly decoheres into different branches containing different versions of us is pretty strongly implied to be a thing by the laws of physics as we currently understand them.

It is very late here and I should really sleep instead of discussing this, so I won't be able to reply as in-depth as this probably merits. But, basically, I would claim that this is not the right way to do expected utility calculations when it comes to ensembles of identical or almost-identical minds. A series of thought experiments might maybe help illustrate part of where my position comes from:

1. Imagine someone tells you that they will put you to sleep and then make two copies of you, identical down to the molecular level. They will place you in a room with blue walls. They will place one copy of you in a room with red walls, and the other copy in another room with blue walls. Then they will wake all three of you up. What color do you anticipate seeing after you wake up, and with what probability? I'd say 2/3 blue, 1/3 red. Because there will now be three versions of me, and until I look at the walls I won't know which one I am.

2. Imagine someone tells you that they will put you to sleep and then make two copies of you. One copy will not include a brain. It's just a dead body with an empty skull. Another copy will be identical to you down to the molecular level. Then they will place you in a room with blue walls, and the living copy in a room with red walls. Then they will wake you and the living copy up. What color do you anticipate seeing after you wak
1Sohaib Imran23d
Again, not sure why a large universe is needed. The expected utility ends up the same either way, whether you have some fraction of branches in which you remain alive or some probability of remaining alive.

Regarding the expected utility calculus: I agree with everything you said, but I don't see how any of it allows you to disregard the counterfactual suffering from not committing suicide in your expected value calculation. Maybe the crux is whether we consider the utility of each "you" (i.e. you in each branch) individually and add it up for the total utility, or whether we consider all "you"s to have just one shared utility.

Let's say that not committing suicide gives you -1 utility in n branches, but committing suicide gives you -100 utility in n/m branches and 0 utility in the remaining n - n/m branches. If we treat all copies of you as having separate utilities and add them all up for a total expected utility calculation, not committing suicide gives -n utility while committing suicide leads to -100n/m utility. Therefore, as long as m > 100, it is better to commit suicide. If, on the other hand, you treat them as having one shared utility, you get either -1 or -100 utility, and -100 is of course worse.

Do you agree that this is the crux? If so, why do you think that all the copies share one utility rather than their utilities adding up?
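Written out as equations, the two aggregation rules above look like this (U_sum and U_shared are just labels introduced here for the two rules):

```latex
\begin{align*}
\textbf{Separate utilities, summed:}\quad
  & U_{\mathrm{sum}}(\text{no suicide}) = n\cdot(-1) = -n,\\
  & U_{\mathrm{sum}}(\text{suicide}) = \tfrac{n}{m}\cdot(-100) + \bigl(n-\tfrac{n}{m}\bigr)\cdot 0 = -\tfrac{100\,n}{m},\\
  & \text{so suicide is preferred iff } -\tfrac{100\,n}{m} > -n \iff m > 100.\\[4pt]
\textbf{One shared utility:}\quad
  & U_{\mathrm{shared}}(\text{no suicide}) = -1, \qquad U_{\mathrm{shared}}(\text{suicide}) = -100.
\end{align*}
```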
1Mikhail Samin22d
In a large universe, you do not end. It's not that in expectation you see some branch versus another; you just continue: the computation that is you continues. When you open your eyes, you're not likely to find yourself as a person in a branch computed only relatively rarely; still, that person continues, and does not die. Attempted suicide reduces your reality-fluid (how much you're computed and how likely you are to find yourself there), but you will continue to experience the world. If you die in a nuclear explosion, the continuation of you will be somewhere else, sort of isekaied; and mostly you will find yourself not in a strange world that recovers the dead but in a world where the nuclear explosion did not happen; still, in a large world, even after a nuclear explosion, you continue. You might care about having a lot of reality-fluid, because this makes your actions more impactful, because you can spend your lightcone better and improve the average experience in the large universe. You might also assign negative utility to others seeing you die; they'll have a lot of reality-fluid in worlds where you're dead and they can't talk to you, even as you continue. But I don't think it works out to assigning the same negative utility to dying as in branches of small worlds.
1Sohaib Imran22d
Yes, but the number of copies of you still reduces (or the probability that you are alive in standard probability theory, or the number of branches in many worlds). Why are these not equivalent in terms of the expected utility calculus?
1Mikhail Samin22d
Imagine that you're an agent in the Game of Life. Your world and its laws of physics are computed on a very large number of independent computers, all performing the same computation. You exist within the laws of causality of your world, computed as long as at least one server computes your world. If some of them stop performing the computation, it won't be the death of a copy; you'll just have one fewer instance of yourself.
1Sohaib Imran22d
What's the difference between fewer instances and fewer copies, and why is that load-bearing for the expected utility calculation?
2Mikhail Samin22d
You are of course right that there's no difference between reality-fluid and normal probabilities in a small world: it's just how much you care about various branches relative to each other, regardless of whether all of them will exist or only some. I claim that the negative utility from ceasing to exist is just not there, because you don't actually cease to exist in a way you reflectively care about when you have fewer instances. For normal things (e.g., how much you care about paperclips), the expected utility is the same; but here, it's the kind of terminal value that I expect would be different for most people: guaranteed continuation in 5% of instances is much better than a 5% chance of continuing in all instances; in the first case, you don't die!
1Sohaib Imran22d
But we are not talking about negative utility from ceasing to exist. We are talking about avoiding counterfactual negative utility by committing suicide, which still exists! I think this is an artifact of thinking of all of the copies as having a shared utility (i.e. you) rather than separate utilities that add up (i.e. so many yous will suffer if you don't commit suicide). If they have separate utilities, we should think of them as separate instances of yourself.
1Sohaib Imran22d
And even in the case where we are assigning negative utility to death, most people are really considering counterfactual utility from being alive, and 95% of that (expected) counterfactual utility is lost whether 95% of the "instances of you" die or whether there is a 95% chance that "you" die. 
2Garrett Baker24d
I think there is, and I think cultural mores support this well. Separately, I think we shouldn't legislate morality, and though suicide is bad, it should be legal[1]. There also exist cases where it is in fact correct from a utilitarian perspective to kill, but this doesn't mean there is no deontological rule against killing. We can argue about the specific circumstances where we need these rule carve-outs (eg war), but I think we'd agree that when it comes to politics and policy, there ought to be no carve-outs, since people are particularly bad at risk-return calculations in that domain.

----------------------------------------

1. But also this would mean we have to deal with certain liability issues, eg if ChatGPT convinces a kid to kill themselves, we'd like to say this is manslaughter or homicide iff the kid otherwise would've gotten better, but how do we determine that? I don't know, and probably on net we should choose freedom instead, or this isn't actually much of a problem in practice. ↩︎
4Ben Pace24d
Makes sense. I don't hold this stance; I think my stance is that many/most people are kind of insane on this, but that like with many topics we can just be more sane if we try hard and if some of us set up good institutions around it for helping people have wisdom to lean on in thinking about it, rather than having to do all their thinking themselves with their raw brain. (I weakly propose we leave it here, as I don't think I have a ton more to say on this subject right now.)
2Mikhail Samin24d
To clarify, I meant that the choice of actions was based on the vibes (on this seeming like the right thing to do in these circumstances), not on careful consideration. I maybe formulated this badly. I do not disagree with that part of your comment. I did, in fact, risk being prosecuted unjustly by the state and spending a great deal of my life in prison. I was also aware of the kinds of situations I'd want to go on hunger strikes for while in prison, though I didn't think about that often. And I, too, am willing to die to reduce the risk by a pretty small chance. Most of the time, though, I think people who think they face this choice don't actually face it; I think the bar for risking one's life should be very high. In particular, when people have time to carefully do the math, I really want them to carefully do the math before deciding to risk their lives, and in this particular case, some of my frustration is from the people getting their math wrong. I think as a community, we would also really want to make people err on the side of safety, and have a strong norm of assuming that most people who decide to sacrifice their lives got their math wrong. People really shouldn't be risking their lives without having carefully thought through the theory of change when they have the ability to do so. Like, I'd bet that if we find people competent in how movements achieve their goals, they will say that these particular hunger strikes are not great; and I expect that to be the case most of the time when individuals who share values with a larger movement decide to go on a hunger strike even as the larger movement thinks that would not be effective.
2Ben Pace24d
I think I somewhat agree that these hunger strikes will not shut down the companies or cause major public outcry. I think that there is definitely something to be said that potentially our society is very poor at doing real protesting, and will just do haphazard things and never do anything goal-directed. That's potentially a pretty fundamental problem. But setting that aside (which is a big thing to set aside!) I think the hunger-strike is moving in the direction of taking this seriously. My guess is most projects in the world don't quite work, but they're often good steps to help people figure out what does work. Like, I hope this readies people to notice opportunities for hunger strikes, and also readies them to expect people to be willing to make large sacrifices on this issue.
3Mikhail Samin24d
People do in fact try to be very goal-directed about protesting! They  have a lot of institutional knowledge on it! You can study what worked and what didn’t work in the past, and what makes a difference between a movement that succeeds and a movement that doesn’t. You can see how movements organize, how they grow local leaders, how they come up with ideas that would mobilize people. A group doesn’t have to attempt a hunger strike to figure out what the consequences would be; it can study and think, and I expect that to be a much more valuable use of time than doing hunger strikes.
4Ben Pace24d
I'd be interested to read a quick post from you that argued "Hunger-strikes are not the right tool for this situation; here is what they work for and what they don't work for. Here is my model of this situation and the kind of protests that do make sense."
4Ben Pace24d
I don't know much about protesting. Most of the recent ones that got big enough that I heard about them have been essentially ineffectual as far as I can recall (Occupy Wall Street, the Women's March, No Kings). I am genuinely interested in reading about clearly effective protests led by anyone currently doing protests, or within the last 10 years, even if on a small scale. (My thinking isn't that protests have not worked in the past – I believe they have: MLK, Malcolm X, the women's suffrage movement, the Vietnam War protests, surely more – but that the current protesting culture has lost its way and is no longer effective.)
2Mo Putera24d
Caveat that I don't know much more than this, but I'm reminded of James Ozden's lit reviews, e.g. How effective are protests? Some research and some nuance. Ostensibly relevant bits:
2Ben Pace23d
(Would be interested in someone going through this paper and writing a post or comment highlighting some examples and why they're considered successful.)
2Ben Pace24d
Not quite responding to your main point here, but I'll say that this position would seem valid to me and good to say if you believed it. I don't know what personal life tradeoffs any of them are making, so I have a hard time speaking to that. I just found out that Michael Trazzi is one of the people doing a hunger strike; I don't think it's true of him that he hasn't thought seriously about the issues given how he's been intellectually engaged for 5+ years.
2Mikhail Samin24d
Yep, I basically believe this. (Social movements (and comms and politics) are not easy to reason about well from first principles. I think Michael is wrong to be making this particular self-sacrifice, not because he hasn’t thought carefully about AI but because he hasn’t thought carefully about hunger strikes.)
4Ben Pace24d
Relevantly, if any of them actually die, and if also it does not cause major change and outcry, I will probably think they made a foolish choice (where 'foolish' means 'should have known in advance this was the wrong call on a majorly important decision'). My modal guess is that they will all make real sacrifice, and stick it out for 10-20 days, then wrap up.
5Ben Pace20d
Follow-up: Michael Trazzi wrapped up after 7 days due to fainting twice and two doctors saying he was getting close to being in a life-threatening situation. (Slightly below my modal guess, but also his blood glucose level dropped unusually fast.) FAO @Mikhail Samin.
2Mikhail Samin19d
Yep. Good that he stopped. Likely bad that he started.
6Ben Pace11d
Trazzi shared this on Twitter: The linked video seems to me largely successful at raising awareness of the anti-extinction position – it is not exaggerated, it is not mocked, it is accurately described and taken seriously. I take this as evidence of the strikes being effective at their goals (interested if you disagree). I think the main negative update about Dennis (in line with your concerns) is that he didn't tell his family he was doing this. I think that's quite different from the Duc story I linked above, where he made a major self-sacrifice with the knowledge and support of his community.
2Mikhail Samin11d
Yep, I’ve seen the video. Maybe a small positive update overall, because could’ve been worse? It seems to me that you probably shouldn’t optimize for publicity for publicity’s sake, and even then, hunger strikes are not a good way. Hunger strikes are very effective tools in some situations; but they’re not effective for this. You can raise awareness a lot more efficiently than this. “The fears are not backed up with evidence” and “AI might improve billions of lives” is what you get when you communicate being in fear of something without focusing on the reasons why.
2Cole Wyeth24d
On the object level it’s (also) important to emphasize that these guys don’t seem to be seriously risking their lives. At least one of them noted he’s taking vitamins, hydrating etc. On consequentialist grounds I consider this to be an overdetermined positive. 
3Mikhail Samin24d
a hunger strike will eventually kill you even if you take vitamins, electrolytes, and sugar. (a way to prevent death despite the target not giving in is often a group of supporters publicly begging the person on the hunger strike to stop and not kill themselves for some plausible reasons, but sometimes people ignore that and die.) I'm not entirely sure what Guido's intention is if Anthropic doesn't give in.
2Ben Pace24d
Sure, I just want to defend that it would also be reasonable if they were doing a more intense and targeted protest. “Here is a specific policy you must change” and “I will literally sacrifice my life if you don’t make this change”. So I’m talking about the stronger principle.
1henryaj23d
Isn't suicide already legal in most places?
2Mikhail Samin23d
I think in a lot of places the government will try to stop you, including using violence.
[-]MichaelDickens24d1715

I don't strongly agree or disagree with your empirical claims but I do disagree with the level of confidence expressed. Quoting a comment I made previously:

I'm undecided on whether things like hunger strikes are useful but I just want to comment to say that I think a lot of people are way too quick to conclude that they're not useful. I don't think we have strong (or even moderate) reason to believe that they're not useful.

When I reviewed the evidence on large-scale nonviolent protests, I concluded that they're probably effective (~90% credence). But I've seen a lot of people claim that those sorts of protests are ineffective (or even harmful) in spite of the evidence in their favor.[1] I think hunger strikes are sufficiently different from the sorts of protests I reviewed that the evidence might not generalize, so I'm very uncertain about the effectiveness of hunger strikes. But what does generalize, I think, is that many peoples' intuitions on protest effectiveness are miscalibrated.

[1] This may be less relevant for you, Mikhail Samin, because IIRC you've previously been supportive of AI pause protests in at least some contexts.

ETA: To be clear, I'm responding to the part of your... (read more)

Reply
9Mikhail Samin24d
To be very clear, I expect large social movements that use protests as one of their forms of action to have the potential to be very successful and impactful if done well. Hunger strikes are significantly different from protests. Hunger strikes can be powerful, but they're best for very different contexts.
4sjadler24d
Aside from whether or not the hunger strikes are a good idea, I'm really glad they have emphasized conditional commitments in their demands. I think that we should be pushing on these much, much more: getting groups to say "I'll do X if abc groups do X as well", and pushing companies/governments to be clear whether their objection is "X policy is net-harmful regardless of whether anyone else does it" vs "X is net-harmful for us if we're the only ones to do it". [I recognize that some of this pushing/clarification might make sense privately, and that groups will be reluctant to say stuff like this publicly because of posturing and whatnot.]
2Mikhail Samin23d
(While I like it being directed towards coordination, it would not actually make much of a difference: it won't be the case that all AI companies want to stop, so such a commitment would not be of great significance. The thing that works is a gov-supported ban on developing ASI anywhere in the world. A commitment to stop if everyone else stops doesn't actually come into force unless everyone is required to stop anyway. An ask that works is, e.g., "tell the government they need to stop everyone, including us".)
1sjadler10d
For sure, I think that would be a reasonable ask too. FWIW, if multiple leading AI companies did make a statement like the one outlined, I think that would increase the chance of non-complying ones being made to halt by the government, even though they hadn't made a statement themselves. That is, even one prominent AI company making this statement starts to widen the Overton window.
0Stephen Fowler24d
I think we should show some solidarity to people committed to their beliefs and making a personal sacrifice, rather than undermining them by critiquing their approach. Given that they're both young men and the hunger strikes are occurring in the first world, it seems unlikely anyone will die. But it does seem likely they or their friends will read this thread. Beyond that, the hunger strike is only on day 2 and has already received a small amount of media coverage. Should they go viral, then this one action alone will have a larger differential impact on reducing existential risk than most safety researchers will achieve in their entire careers. https://www.businessinsider.com/hunger-strike-deepmind-ai-threat-fears-agi-demis-hassabis-2025-9
2Mikhail Samin23d
This is surprising to hear on LessWrong, where we value truth without having to think of object-level reasons for why it is good to say true things. But on the object level: it would be very dangerous for a community to avoid saying true things because it is afraid of undermining someone’s sacrifice; this would lead to a lot of needless, and even net-negative, sacrifice, without mechanisms for self-correction. Like, if I ever do something stupid, please tell me (and everyone) that instead of respecting my sacrifice: I would not want others to repeat my mistakes. (There are lots of ways to get media coverage and it’s not always good in expectation. If they go viral, in a good way/with a good message, I will somewhat change my mind.)
[-]Mikhail Samin21d6524

"There is no justice in the laws of Nature, no term for fairness in the equations of motion. The universe is neither evil, nor good, it simply does not care. The stars don't care, or the Sun, or the sky. But they don't have to! We care! There is light in the world, and it is us!"

Reply
[-]Zach Stein-Perlman21d4213

And someday when the descendants of humanity have spread from star to star they won’t tell the children about the history of Ancient Earth until they’re old enough to bear it and when they learn they’ll weep to hear that such a thing as Death had ever once existed!

Reply4
9Caleb Biddulph21d
Credit for this quote goes to Eliezer Yudkowsky, for those who don't know
[-]Mikhail Samin2mo*412

Everyone should do more fun stuff![1]

I thought it'd just be very fun to develop a new sense.

Remember vibrating belts and ankle bracelets that made you have a sense of the direction of north? (1, 2)

I made some LLMs make me an iOS app that does this! Except the sense doesn't go away the moment you stop the app!

I am pretty happy about it! I can tell where north is and have become much better at navigating and at relating different parts of the (actual) territory to my map. Previously, I would remember my paths as collections of local movements (there, I turn left); now, I generally know where places are, and Google Maps feels much more connected to the territory.

If you want to try it, it's on the App Store: https://apps.apple.com/us/app/sonic-compass/id6746952992 

It can vibrate when you face north; even better, if you're in headphones, it can give you spatial sounds coming from north; better still, a second before playing a sound coming from north, it can play a non-directional cue sound to make you anticipate the north sound and learn very quickly.

None of this interferes with listening to any other kind of audio.

It’s all probably less relevant to the US, as your roads are in a grid anyway... (read more)

Reply
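For the curious, here is a rough sketch of the kind of heading-to-feedback mapping an app like this might use. This is not the actual Sonic Compass code; the function name, vibration threshold, and panning rule are all illustrative assumptions.

```python
import math

def north_feedback(heading_deg: float, vibrate_threshold_deg: float = 10.0):
    """Map a compass heading (degrees clockwise from north) to simple feedback.

    Returns (should_vibrate, stereo_pan), where stereo_pan runs from -1.0
    (hard left) to 1.0 (hard right), approximating where a sound "coming
    from north" should appear relative to the direction the user is facing.
    """
    # Signed angle from the user's facing direction to north, in [-180, 180).
    offset = (-heading_deg + 180.0) % 360.0 - 180.0

    # Vibrate only when the user is facing roughly north.
    should_vibrate = abs(offset) <= vibrate_threshold_deg

    # Crude pan: north straight ahead -> centered, north to the right -> pan
    # right, north behind -> ambiguous (also centered).
    stereo_pan = math.sin(math.radians(offset))

    return should_vibrate, stereo_pan


if __name__ == "__main__":
    for heading in (0, 45, 90, 180, 270):
        print(heading, north_feedback(heading))
```

A real iOS app would of course get the heading from the device's compass and drive haptics and spatial audio through the platform APIs; the point here is just the small amount of geometry involved.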
2AlphaAndOmega2mo
This is really cool! My ADHD makes me rather place-blind: if I'm not intentionally forcing myself to pay attention to a route and my surroundings, I can get lost or disoriented quite easily. I took the same bus route to school for a decade, and I can't trace the path; I only remember a sequence of stops. Hopefully someone makes an Android version; I'd definitely check it out.
1keltan2mo
Trying it out now; this is pretty fun! I think I'd use it more if it had an Apple Watch version that I could keep constantly running.
[-]Mikhail Samin3mo364

i made a thing!

it is a chatbot with 200k tokens of context about AI safety. it is surprisingly good (better than you'd expect current LLMs to be) at answering questions and counterarguments about AI safety. A third of its dialogues contain genuinely great and valid arguments.

You can try the chatbot at https://whycare.aisgf.us (ignore the interface; it hasn't been optimized yet). Please ask it some hard questions! Especially if you're not convinced of AI x-risk yourself, or can repeat the kinds of questions others ask you.

Send feedback to ms@contact.ms.

A couple of examples of conversations with users:

I know AI will make jobs obsolete. I've read runaway scenarios, but I lack a coherent model of what makes us go from "llms answer our prompts in harmless ways" to "they rebel and annihilate humanity".

Reply
6metachirality3mo
Confused about the disagreements. Is it because of the AI output or just the general idea of an AI risk chatbot?
3Michaël Trazzi3mo
how does your tool compare to stampy, or to just, say, asking these questions without the 200k tokens?
2Mikhail Samin3mo
It's better than Stampy (try asking both some interesting questions!). Stampy is cheaper to run, though. I wasn't able to get LLMs to produce valid arguments or answer questions correctly without the context, though that could be a scaffolding/skill issue on my part.
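To illustrate the architectural difference: a RAG bot retrieves a few relevant snippets per question, while a fixed-context bot prepends the whole curated corpus to every request. Below is a minimal sketch of the latter, assuming the Anthropic Python SDK; the actual scaffolding, prompt, corpus, and model behind the chatbot are not public, so every name here is a placeholder.

```python
# Minimal sketch of a "full fixed context" chatbot, as opposed to RAG.
# Assumes the Anthropic Python SDK (`pip install anthropic`) and an API key in
# ANTHROPIC_API_KEY. The corpus file, model name, and prompt are placeholders.
import anthropic

# ~200k tokens of curated AI-safety material (placeholder filename).
CORPUS = open("ai_safety_context.md", encoding="utf-8").read()

client = anthropic.Anthropic()

def answer(question: str) -> str:
    # The entire corpus rides along on every call as the system prompt,
    # instead of retrieving a handful of snippets per question.
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model choice
        max_tokens=1024,
        system=CORPUS + "\n\nAnswer questions about AI safety using the material above.",
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

print(answer("What makes a misaligned AI dangerous?"))
```

The tradeoffs mentioned in the thread fall out of this directly: every request pays for the full context (hence the cost and latency), but the model sees the whole corpus rather than whatever a retriever happened to surface.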
2Mikhail Samin3mo
Another example:
1Kabir Kumar3mo
Good job trying and putting this out there. Hope you iterate on it a lot and make it better. Personally, I utterly despise this current writing style. Maybe you can look at the Void bot on Bluesky, which is based on Gemini pro - it's one of the rare bots I've seen whose writing is actually ok. 
2Mikhail Samin3mo
Thanks, but, uhm, try to not specify “your mom” as the background and “what the actual fuck is ai alignment” as your question if you want it to have a writing style that’s not full of “we’re toast”
1Kabir Kumar3mo
Maybe the option of not specifying the writing style at all, for impatient people like me?  Unless you see this as more something to be used by advocacy/comms groups to make materials for explaining things to different groups, which makes sense.  If the general public is really the target, then adding some kind of voice mode seems like it would reduce latency a lot
2Mikhail Samin3mo
This specific page is not really optimized for any use by anyone whatsoever; there are maybe five bugs, each solvable with one query to Claude, and all not a priority; the cool thing I want people to look at is the chatbot (when you give it some plausible context)! (Also, non-personalized intros to why you should care about AI safety are still better done by people.) I really wouldn't want to give a random member of the US general public a thing that advocates for AI risk while having a gender drop-down like that.[1]

The kinds of interfaces it would have if we get to scale it[2] would be very dependent on where specific people are coming from. I.e., demographic info can be pre-filled and not necessarily displayed if it's from ads; or maybe we ask one person we're talking to to share it with two other people, and generate unique links with pre-filled info that was provided by the first person; etc. Voice mode would have a huge latency due to the 200k-token context and thinking prior to responding.

1. ^ Non-binary people are people, but the dropdown creates an unnecessary negative halo effect for a significant portion of the general public. Also, dropdowns = unnecessary clicks = bad.

2. ^ which I really want to! someone please give us the budget and volunteers! at the moment, we have only me working full-time (for free), $10k from SFF, and ~$15k from EAs who considered this to be the most effective nonprofit in this field. reach out if you want to donate your time or money. (donations are tax-deductible in the US.)
1Kabir Kumar3mo
Is the 200k context itself available to use anywhere? How different is it from the Stampy.ai dataset? Nw if you don't know, due to not knowing what exactly Stampy's dataset is. I get a lot of questions from regular ML researchers on what exactly alignment is, and I wish I had an actually good thing to send them. Currently I either give a definition myself or send them to the Alignment Forum.
2Mikhail Samin3mo
Nope, I'm somewhat concerned about unethical uses (eg talking to a lot of people without disclosing it's AI), so won't publicly share the context. If the chatbot answers questions well enough, we could in principle embed it into whatever you want if that seems useful. Currently have a couple of requests like that. DM me somewhere? Stampy uses RAG & is worse.
1don't_wanna_be_stupid_any_more3mo
this deserves way more attention. a big problem with AI safety advocacy is that we aren't reaching enough people fast enough: this problem doesn't have the same familiarity amongst the public as climate change or even factory farming, we don't have people running around in the streets preaching about the upcoming AI apocalypse, and most lesswrongers can't even come up with a quick 5min sales pitch for lay people even if their life literally depended on it. this might just be the best advocacy tool i have seen so far; if only we can get it to go viral, it might just make the difference.

edit: i take this part back; i have seen some really bad attempts at explaining AI x-risk in layman's terms and just assumed it was the norm, most of which were from older posts. now looking at newer posts, i think the situation has greatly improved; not ideal, but way better than i thought. i still think this tool would be a great way to reach the wider public, especially if it incorporates a better citation function so people can check the source material (it does sort of point the user to other websites but not technical papers).
4Mikhail Samin3mo
Thanks! I think we're close to a point where I'd want to put this in front of a lot of people, though we don't have the budget for this (which seems ridiculous, given the stats we have for our ads results etc.), and also haven't yet optimized the interface (as in, half the US public won't like the gender dropdown). Also, it's much better at conversations than at producing 5min elevator pitches. (It's hard to make it good at meeting the user where they are while still getting to a point, instead of being very sycophantic.) The end goal is to be able to explain the current situation to people at scale.
[-]Mikhail Samin3mo11-13

I want to signal-boost this LW post.

I long wondered why OpenPhil made so many obvious mistakes in the policy space. That level of incompetence just did not make any sense.

I did not expect this to be the explanation:

THEY SIMPLY DID NOT HAVE ANYONE WITH ANY POLITICAL EXPERIENCE ON THE TEAM until hiring one person in April 2025.

This is, like, insane. Not what I'd expect at all from any org that attempts to be competent.

(openphil, can you please hire some cracked lobbyists to help you evaluate grants? This is, like, not quite an instance of Graham's Design Paradox, because instead of trying to evaluate grants you know nothing about, you can actually hire people with credentials you can evaluate, who'd then evaluate the grants. thank you <3)

Reply2
[-]habryka3mo414

To be clear, I don't think this is an accurate assessment of what is going on. If anything, I think marginally people with more "political experience" seemed to me to mess up more.

In general, takes of the kind "oh, just hire someone with expertise in this" almost never make sense IMO. First of all, identifying actual real expertise is hard. Second, general competence and intelligence, which OpenPhil people have in abundance, are a better predictor of task performance in almost all domains after even just a relatively short acclimation period. Third, the standard practices in many industries are insane, and most of the time if you hire someone specifically for their expertise in a domain, not just as an advisor but as an active team member, they will push for adopting those standard practices even when they don't make sense.

Reply531
9yams3mo
I don't think Mikhail's saying that hiring an expert is sufficient. I think he's saying that hiring an expert, in a very high-context and unnatural/counter-intuitive field like American politics, is necessary, or that you shouldn't expect success trying to re-derive all of politics in a vacuum from first principles. (I'm sure OpenPhil was doing the smarter version of this thing, where they had actual DC contacts they were in touch with, but that they still should have expected this to be insufficient.) Often the dumb versions of ways of dealing with the political sphere (advocated by people with some experience) just don't make any sense at all, because they're directional heuristics that emphasize their most counterintuitive elements. But, in talking to people with decades of experience and getting the whole picture, the things they say actually do make sense, and I can see how the random interns or whatever got their dumb takes (by removing the obvious parts from the good takes, presenting only the non-obvious parts, and then over-indexing on them). I big agree with Habryka here in the general case and am routinely disappointed by input from 'experts'; I think politics is just a very unique space with a bunch of local historical contingencies that make navigation without very well-calibrated guidance especially treacherous. In some sense it's more like navigating a social environment (where it's useful to have a dossier on everyone in the environment, provided by someone you trust) than it is like navigating a scientific inquiry (where it's often comparatively cheap to relearn or confirm something yourself rather than deferring).
[-]habryka3mo100

I mean, it's not like OpenPhil hasn't been interfacing with a ton of extremely successful people in politics. For example, OpenPhil approximately co-founded CSET, and talks a ton with people at RAND, and has done like 5 bajillion other projects in DC and works closely with tons of people with policy experience. 

The thing that Jason is arguing for here is "OpenPhil needs to hire people with lots of policy experience into their core teams", but man, that's just such an incredibly high bar. The relevant teams at OpenPhil are like 10 people in-total. You need to select on so many things. This is like saying that Lightcone "DOESN'T HAVE ANYONE WITH ARCHITECT OR CONSTRUCTION OR ZONING EXPERIENCE DESPITE RUNNING A LARGE REAL ESTATE PROJECT WITH LIGHTHAVEN". Like yeah, I do have to hire a bunch of people with expertise on that, but it's really very blatantly obvious from where I am that trying to hire someone like that onto my core teams would be hugely disruptive to the organization.

It seems really clear to me that OpenPhil has lots of contact with people who have lots of policy experience, frequently consults with them on stuff, and that the people working there full-time seem, to me, reasonably selected. The only way I see the things Jason is arguing for working out is if OpenPhil were to much more drastically speed up their hiring, but hiring quickly is almost always a mistake.

Reply1
6Mass_Driver3mo
Part of the distinction I try to draw in my sequence is that the median person at CSET or RAND is not "in politics" at all. They're mostly researchers at think tanks, writing academic-style papers about what kinds of policies would be theoretically good for someone to adopt. Their work is somewhat more applied/concrete than the work of, e.g., a median political science professor at a state university, but not by a wide margin. If you want political experts -- and you should -- you have to go talk to people who have worked on political campaigns, served in the government, or led advocacy organizations whose mission is to convince specific politicians to do specific things. This is not the same thing as a policy expert.  For what it's worth, I do think OpenPhil and other large EA grantmakers should be hiring many more people. Hiring any one person too quickly is usually a mistake, but making sure that you have several job openings posted at any given time (each of which you vet carefully) is not.
5yams3mo
I agree that this is the same type of thing as the construction example for Lighthaven, but I also think that you did leave some value on the table there in certain ways (e.g. commercial-grade furniture vs consumer-grade furniture), and I think that policy knowledge should make up a larger share of the domain-specific knowledge I'd hope exists at Open Phil than hospitality/construction knowledge should of the domain-specific knowledge I'd hope exists at Lightcone. I hear you as saying 'experts aren't all that expert' * 'hiring is hard' + 'OpenPhil does actually have access to quite a few experts when they need them' = 'OpenPhil's strategy here is very reasonable.' I agree in principle here but think that, on the margin, it just is way more valuable to have the skills in-house than to have external people giving you advice (so that they have both sides of the context, so that you can make demands of them rather than requests, so that they're filtered for a pretty high degree of value alignment, etc). This is why Anthropic and OAI have policy teams staffed with former federal government officials. It just doesn't get much more effective than that. I don't share Mikhail's bolded-all-caps-shock at the state of things; I just don't think the effects you're reporting, while elucidatory, are a knockdown defense of OpenPhil being (seemingly) slow to hire for a vital role. But running orgs is hard and I wouldn't shackle someone to a chair to demand an explanation. Separately, a lot of people defer to some discursive thing like 'The OP Worldview' when defending or explicating their positions, and I can't for the life of me hammer out who the keeper of that view is. It certainly seems like a knock against this particular kind of appeal when their access to policy experts is on-par with e.g. MIRI and Lightcone (informal connections and advisors), rather than the ultra-professional, ultra-informed thing it's often floated as being. OP employees have said furtive things like 'you wouldn't belie
2Mikhail Samin3mo
To be clear, I was a lot more surprised when I was told about some of what OpenPhil did in DC, at one point starting to facepalm really hard after two sentences and continuing to facepalm very hard for most of a ten-minute-long story. It was so obviously dumb that even I, with basically zero exposure to American politics or local DC norms and only some tangential experience running political campaigns in a very different context (an authoritarian country), immediately recognized it as obviously very stupid. While listening, I couldn't think of better explanations than stuff like "maybe Dustin wanted x and OpenPhil didn't have a way to push back on it". But not having anyone on the team who could point out how this would be very, very stupid is a perfect explanation for the previous cringe over their actions; and it's also incredibly incompetent, on a level I did not expect. As Jason correctly noted, it's not about "policy". This is very different from writing papers and figuring out what a good policy should be. It is about advocacy: getting a small number of relevant people to make decisions that lead to the implementation of your preferred policies. OpenPhil's goals are not papers; and some of the moves they've made, which impact their utility more than any of the papers they've funded, are ridiculously bad. A smart enough person could figure it out from first principles, with no experience, or by looking at stuff like how climate change became polarized; but for most people, it's a set of intuitions, skills, and knowledge that are very separate from those that make you a good evaluator of research grants. It is absolutely obvious to me that someone experienced in advocacy should get to give feedback on a lot of decisions that you plan to make, including because some of them can have strategic implications you didn't think about. Instead, OpenPhil are a bunch of individuals who apparently often don't know the right questions to ask even despite their emp
9Mass_Driver3mo
I'm the author of the LW post being signal-boosted. I sincerely appreciate Oliver's engagement with these critiques, and I also firmly disagree with his blanket dismissal of the value of "standard practices."  As I argue in the 7th post in the linked sequence, I think OpenPhil and others are leaving serious value on the table by not adopting some of the standard grant evaluation practices used at other philanthropies, and I don't think they can reasonably claim to have considered and rejected them -- instead the evidence strongly suggests that they're (a) mostly unaware of these practices due to not having brought in enough people with mainstream expertise, and (b) quickly deciding that anything that seems unfamiliar or uncomfortable "doesn't make sense" and can therefore be safely ignored.  We have a lot of very smart people in the movement, as Oliver correctly points out, and general intelligence can get you pretty far in life, but Washington, DC is an intensely competitive environment that's full of other very smart people. If you try to compete here with your wits alone while not understanding how politics works, you're almost certainly going to lose.
8MichaelDickens3mo
Can you say more about this? I'm aware of the research on g predicting performance on many domains, but the quoted claim is much stronger than the claims I can recall reading.
6leogao3mo
random thought, not related to GP comment: i agree identifying expertise in a domain you don't know is really hard, but from my experience, identifying generalizable intelligence/agency/competence is less hard. generally it seems like a useful signal to see how fast they can understand and be effective at a new thing that's related to what they've done before but that they've not thought much specifically about before. this isn't perfectly correlated with competence at their primary field, but it's probably still very useful. e.g it's generally pretty obvious if someone is flailing on an ML/CS interview Q because they aren't very smart, or just not familiar with the tooling. people who are smart will very quickly and systematically figure out how to use the tooling, and people who aren't will get stuck and sit there being confused. I bet if you took e.g a really smart mathematician with no CS experience and dropped them in a CS interview, it would be very fascinating to watch them figure out things from scratch disclaimer that my impressions here are not necessarily strictly tied to feedback from reality on e.g job performance (i can see whether people pass the rest of the interview after making a guess at the 10 minute mark, but it's not like i follow up with managers a year after they get hired to see how well they're doing)
[-]Mikhail Samin3mo*100

PSA: if you're looking for a name for your project, most interesting .ml domains are probably available for $10, because the mainstream registrars don't support the TLD.

I bought over 170 .ml domains, including anthropic.ml (redirects to the Fooming Shoggoths song), closed.ml & evil.ml (redirect to OpenAI Files), interpretability.ml, lens.ml, evals.ml, and many others (I'm happy to donate them to AI safety projects).

Reply1
[-]Mikhail Samin7mo*10-5

Since this seems to be a crux, I propose a bet to @Zac Hatfield-Dodds (or anyone else at Anthropic): someone shows random people in San Francisco Anthropic's letter to Newsom on SB-1047. I would bet that among the first 20 who fully read at least one page, over half will say that Anthropic's response to SB-1047 is closer to presenting the bill as 51% good and 49% bad than to presenting it as 95% good and 5% bad.

Zac, at what odds would you take the bet?

(I would be happy to discuss the details.)

Reply
[-]Zac Hatfield-Dodds7mo151

Sorry, I'm not sure what proposition this would be a crux for?

More generally, "what fraction good vs bad" seems to me a very strange way to summarize Anthropic's Support if Amended letter or letter to Governor Newsom. It seems clear to me that both are supportive in principle of new regulation to manage emerging risks, and offering Anthropic's perspective on how best to achieve that goal. I expect most people who carefully read either letter would agree with the preceeding sentence and would be open to bets on such a proposition.

Personally, I'm also concerned about the downside risks discussed in these letters - because I expect they both would have imposed very real costs, and reduced the odds of the bill passing and of similar regulations passing and enduring in other jurisdictions. I nonetheless concluded that the core of the bill was sufficiently important and urgent, and the downsides manageable, that I supported passing it.

Reply
2Mikhail Samin7mo
I refer to the second letter. I claim that a responsible frontier AI company would’ve behaved very differently from Anthropic. In particular, the letter said basically “we don’t think the bill is that good and don’t really think it should be passed” more than it said “please sign”. This is very different from your personal support for the bill; you indeed communicated “please sign”. Sam Altman has also been “supportive of new regulation in principle”. These words sadly don’t align with either OpenAI’s or Anthropic’s lobbying efforts, which have been fairly similar. The question is, was Anthropic supportive of SB-1047 specifically? I expect people to not agree Anthropic was after reading the second letter.
3Zac Hatfield-Dodds7mo
I strongly disagree that OpenAI's and Anthropic's efforts were similar (maybe there's a bet there?). OpenAI formally opposed the bill without offering useful feedback; Anthropic offered consistent feedback to improve the bill, pledged to support it if amended, and despite your description of the second letter Senator Wiener describes himself as having Anthropic's support. I also disagree that a responsible company would have behaved differently. You say "The question is, was Anthropic supportive of SB-1047 specifically?" - but I think this is the wrong question, implying that lack of support is irresponsible rather than e.g. due to disagreements about the factual question of whether passing the bill in a particular state would be net-helpful for mitigating catastrophic risks. The Support if Amended letter, for example, is very clear: I don't expect further discussion to be productive though; much of the additional information I have is nonpublic, and we seem to have different views on what constitutes responsible input into a policy process as well as basic questions like "is Anthropic's engagement in the SB-1047 process well described as 'support' when the letter to Governor Newsom did not have the word 'support' in the subject line". This isn't actually a crux for me, but I and Senator Wiener seem to agree yes, while you seem to think no.
8MathiasKB7mo
One thing to highlight, which I only learned recently, is that the norm when submitting letters to the governor on any bill in California is to include "Support" or "Oppose" in the subject line, to clearly state the company's position. Anthropic importantly did NOT include "support" in the subject line of the second letter. I don't know how to read this as anything other than that Anthropic did not support SB-1047.
2Mikhail Samin7mo
Good point! That seems right; advocacy groups seem to believe the governor's staff sorts letters by "support"/"oppose"/"request for signature"/"request for veto" in the subject line, and they recommend adding one of those. Examples: 1, 2. Anthropic indeed did not include any of these in their letter to Gov. Newsom.
5Yonatan Cale7mo
(Could you link to the context?)
[-]Mikhail Samin4mo8-1

In its RSP, Anthropic committed to define ASL-4 by the time they reach ASL-3.

With Claude 4 released today, they have reached ASL-3. They haven’t yet defined ASL-4.

Turns out, they have quietly walked back on the commitment. The change happened less than two months ago and, to my knowledge, was not announced on LW or in other visible places, unlike other important changes to the RSP. It’s also not in the changelog on their website; in the description of the relevant update, they say they added a new commitment but don’t mention removing this one.

Anthropic’s behavior... (read more)

[This comment is no longer endorsed by its author]Reply
3Rasool4mo
The Midas Project is a good place to keep track of AI company policy changes. Here is their note on the Anthropic change: https://www.themidasproject.com/watchtower/anthropic-033125
1Sodium4mo
I don't think it's accurate to say that they've "reached ASL-3"? In the announcement, they say

And it's also inaccurate to say that they have "quietly walked back on the commitment." There was no commitment to define ASL-4 by the time they reach ASL-3 in the updated RSP, or in versions 2.0 (released October last year) and 2.1 (see all past RSPs here). I looked at all mentions of ASL-4 in the latest document, and this comes closest to what they have:

Which is what they did with Opus 4. Now they have indeed not provided a ton of details on what exactly they did to determine that the model has not reached ASL-4 (see report), but the comment suggesting that they "basically [didn't] tell anyone" feels inaccurate.
9Mikhail Samin4mo
* According to Anthropic’s chief scientist’s interview with Time today, they “work under the ASL-3 standard”. So they have reached the safety level—they’re working under it—and the commitment would’ve applied[1].
* There was a commitment in the RSP prior to Oct last year. They did walk back on this commitment quietly: the fact that they walked back on it was not announced in their posts and wasn’t noticed in the posts of others; only a single LessWrong comment in Oct 2024, from someone not affiliated with Anthropic, mentions it. I think this is very much “quietly walking back” on a commitment.
* According to Midas, the commitment was fully removed in 2.1: “Removed commitment to “define ASL-N+1 evaluations by the time we develop ASL-N models””; a pretty hidden (I couldn’t find it!) revision changelog also attributes the decision to not maintain the commitment to 2.1. At the same time, the very public changelog on the RSP page only mentions new commitments and doesn’t mention the decision to “not maintain” this one.

1. ^ “They’re not sure whether they’ve reached the level of capabilities which requires ASL-3 and decided to work under ASL-3, to be revised if they find out the model only requires ASL-2” could’ve been more accurate, but isn’t fundamentally different IMO. And Anthropic is taking the view that by the time you develop a model which might be ASL-n, the commitments for ASL-n should trigger until you rule that out. It’s not even clear what a different protocol could be, if you want to release a model that might be at ASL-n. Release it anyway and contain it only after you’ve confirmed it’s at ASL-n?
0Joseph Miller4mo
Meta-level comment now that this has been retracted. Anthropic's safety testing for Claude 4 is vastly better than DeepMind's testing of Gemini. When Gemini 2.5 Pro was released there was no safety testing info, and even the model card that was eventually released is extremely barebones compared to what Anthropic put out. DeepMind should be embarrassed by this. The upcoming PauseCon protest outside DeepMind's headquarters in London will focus on this failure.
[-]Mikhail Samin4mo130

I directionally agree!

Btw, since this is a call to participate in a PauseAI protest on my shortform: do your colleagues have plans to do anything about my ban from the PauseAI Discord server—like allowing me to contest it (as I was told there was a discussion of creating a procedure for that) or at least explaining it?

Because it’s lowkey insane!

For everyone else, who might not know: a year ago, on the PauseAI Discord server and in context, I explained my criticism of PauseAI’s dishonesty and, after being asked to, shared proof that Holly publicly lied about our personal communications, including screenshots of our messages; a large part of the thread was then deleted by the mods, because they were against personal messages getting shared, without warning me (I would’ve complied if anyone representing the server had asked me to delete something!) and without saving, or allowing me to save, any of the removed messages in the thread, including those clearly not related to the screenshots that you decided were violating the server norms; after a discussion of that, the issue seemed settled, and I was asked to maybe run some workshops for PauseAI to improve PauseAI’s comms/proofreading/factchecking; and then, mo... (read more)

Reply
2Mikhail Samin4mo
I reached out to Joep asking for the record; he said “Holly wanted you banned” and that it was a divisive topic in the team.
-1Joseph Miller4mo
Uhh, yeah, sorry that there hasn't been a consistent approach. In our defence, I believe yours is the only complex moderation case that PauseAI Global has ever had to deal with so far, and we've kinda dropped the ball on figuring out how to handle it. For context, my take is that you've raised some valid points. And also you've acted poorly in some parts of this long-running drama. And most importantly, you've often acted in a way that seems almost optimised to turn people off. Especially for people not familiar with LessWrong culture, the inferential distance between you and many people is so vast that they really cannot understand you at all. Your behavior pattern-matches to trolling / nuisance attention seeking in many ways, and I often struggle to communicate to more normie types why I don't think you're insane or malicious. I do sincerely hope to iron this out some time and put in place actual systems for dealing with similar disputes in the future. And I did read over the original post + Google doc a few months ago to try to form my own views more robustly. But this probably won't be a priority for PauseAI Global in the immediate future. Sorry.
0evhub4mo
This is false. Our ASL-4 thresholds are clearly specified in the current RSP—see "CBRN-4" and "AI R&D-4". We evaluated Claude Opus 4 for both of these thresholds prior to release and found that the model was not ASL-4. All of these evaluations are detailed in the Claude 4 system card.
[-]garrison4mo140

I wrote the article Mikhail referenced and wanted to clarify some things. 

The thresholds are specified, but the original commitment says, "We commit to define ASL-4 evaluations before we first train ASL-3 models (i.e. before continuing training beyond when ASL-3 evaluations are triggered). Similarly, we commit to define ASL-5 evaluations before training ASL-4 models, and so forth," and, regarding ASL-4, "Capabilities and warning sign evaluations defined before training ASL-3 models."

The latest RSP says this of CBRN-4 Required Safeguards, "We expect this threshold will require the ASL-4 Deployment and Security Standards. We plan to add more information about what those entail in a future update."

Additionally, AI R&D 4 (confusingly) corresponds to ASL-3 and AI R&D 5 corresponds to ASL-4. This is what the latest RSP says about AI R&D 5 Required Safeguards, "At minimum, the ASL-4 Security Standard (which would protect against model-weight theft by state-level adversaries) is required, although we expect a higher security standard may be required. As with AI R&D-4, we also expect an affirmative case will be required."

Reply
2evhub4mo
I agree that the current thresholds and terminology are confusing, but it is definitely not the case that we just dropped ASL-4. Both CBRN-4 and AI R&D-4 are thresholds that we have not yet reached, that would mandate further protections, and that we actively evaluated for and ruled out in Claude Opus 4.
[-]tylerjohnston4mo104

AFAICT, now that ASL-3 has been implemented, the upcoming AI R&D threshold, AI R&D-4, would not mandate any further security or deployment protections. It only requires ASL-3. However, it would require an affirmative safety case concerning misalignment.

I assume this is what you meant by "further protections" but I just wanted to point this fact out for others, because I do think one might read this comment and expect AI R&D 4 to require ASL-4. It doesn't.

I am quite worried about misuse when we hit AI R&D 4 (perhaps even moreso than I'm worried about misalignment) — and if I understand the policy correctly, there are no further protections against misuse mandated at this point.

Reply
6garrison4mo
Not meaning to imply that Anthropic has dropped ASL-4! Just wanted to call out that this does represent a change from the Sept. 2023 RSP.
[-]aysja4mo1310

Regardless, it seems like Anthropic is walking back its previous promise: "We have decided not to maintain a commitment to define ASL-N+1 evaluations by the time we develop ASL-N models." The stance that Anthropic takes toward its commitments—things which can be changed later if they see fit—seems to cheapen the term, and makes me skeptical that the policy, as a whole, will be upheld. If people want to orient to the RSP as a provisional intent to act responsibly, then this seems appropriate. But it should not be mistaken for, or conflated with, a real promise to do what was said.

Reply
7Mikhail Samin4mo
Oops. Thank you and apologies.
5tylerjohnston4mo
FYI, I was (and remain to this day) confused by AI R&D 4 being called an "ASL-4" threshold. AFAICT as an outsider, ASL-4 refers to a set of deployment and security standards that are now triggered by dangerous capability thresholds, and confusingly, AI R&D 4 corresponds to the ASL-3 standard. AI R&D 5, on the other hand, corresponds to ASL-4, but only on the security side (nothing is said about the deployment side, which matters quite a bit given that Anthropic includes internal deployment here and AI R&D 5 will be very tempting to deploy internally).

I'm also confused because the content of both AI R&D 4 and AI R&D 5 is seemingly identical to the content of the nearest upcoming threshold in the October 2024 policy (which I took to be the ASL-3 threshold).

A rough sketch of what I think happened:

A rough sketch of my understanding of the current policy:

When I squint hard enough at this for a while, I think I can kind of see the logic: the model likely to trigger the CBRN threshold requiring ASL-3 seems quite close, whereas we might be further from the very-high threshold that was the October AI R&D threshold (now AI R&D 4), so the October AI R&D threshold was just bumped to the next level (and the one after that, since causing dramatic scaling of effective compute is even harder than being an entry-level remote worker... maybe) with some confidence that we were still somewhat far away from it and thus it can be treated effectively as today's upcoming + to-be-defined (what would have been called n+1) threshold.

I just get lost when we call it an ASL-4 threshold (it's not, it's an ASL-3 threshold), and also it mostly makes me sad that these thresholds are so high, because I want Anthropic to get some practice reps in implementing the RSP before it's suddenly hit with an endless supply of fully automated remote workers (plausibly the next threshold, AI R&D 4, requiring nothing more than the deployment + security standards Anthropic already put in place as of today
[-]Mikhail Samin4mo70

Is there a way to use policy markets to make FDT decisions instead of EDT decisions?

Reply
[-]Martín Soto4mo190

Worked on this with Demski. Video, report.

Any update to the market is (equivalent to) updating on some kind of information. So all you can do is dynamically choose what to update on and what not to.* Unfortunately, whenever you choose not to update on something, you are giving up on the asymptotic learning guarantees of policy market setups. So the strategic gains from updatelessness (like not falling into traps) are in a fundamental sense irreconcilable with the learning gains from updatefulness. That doesn't mean you can't be pretty smart about deciding exactly what to update on... but due to embeddedness problems and the complexity of the world, it seems to be the norm (rather than the exception) that you cannot be sure a priori of what to update on (you just have to make some arbitrary choices).

*For avoidance of doubt, what matters for whether you have updated on X is not "whether you have heard about X", but rather "whether you let X factor into your decisions". Or at least, this is the case for a sophisticated enough external observer (assessing whether you've updated on X), not necessarily all observers.

Reply
[-]mattmacdermott4mo142

I think the first question to think about is how to use them to make CDT decisions. You can create a market about a causal effect if you have control over the decision and you can randomise it to break any correlations with the rest of the world, assuming the fact that you’re going to randomise it doesn’t otherwise affect the outcome (or bettors don’t think it will).

Committing to doing that does render the market useless for choosing policy, but you could randomly decide whether to randomise or to make the decision via whatever process you actually want to use, and have the market be conditional on the former. You probably don’t want to be randomising your policy decisions too often, but if liquidity wasn’t an issue you could set the probability of randomisation arbitrarily low.

Then FDT… I dunno, seems hard.

Reply1
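A minimal sketch of the setup described above, in Python. The names, the 1%-style EPSILON, and the N/A resolution convention are my own illustrative assumptions, not a description of any existing market platform:

```python
import random

EPSILON = 0.01  # assumed probability of randomising the decision (illustrative)

def run_decision(actions, chosen_policy):
    """Take a decision, randomising it with small probability EPSILON.

    Returns the action taken and whether it came from the coin flip;
    the conditional markets only count on the randomised branch."""
    if random.random() < EPSILON:
        return random.choice(actions), True   # randomised branch: breaks correlations
    return chosen_policy(), False             # normal decision process

def resolve_market(market_action, action_taken, was_randomised, outcome):
    """Resolve the market 'conditional on the decision being randomised to market_action'.

    Only the randomised branch with the matching action counts; otherwise the
    market resolves N/A and bets are refunded, so prices estimate causal effects."""
    if was_randomised and action_taken == market_action:
        return outcome
    return None  # N/A: bets returned
```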
6Mikhail Samin4mo
Yep! “If I randomize the pick, and pick A, will I be happy about the result?” “If I randomize the pick, and pick B, will I be happy about the result?” Randomizing 1% of the time and adding a large liquidity subsidy works to produce CDT.
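A toy Monte Carlo check of this point (my own made-up numbers, not from the thread): the normal policy's choice is correlated with a confounder, so naively conditioning on which action was taken overstates its effect, while conditioning on the rare randomised branch recovers the causal difference.

```python
import random

def simulate(n=200_000, eps=0.01):
    naive = {"A": [], "B": []}       # outcomes grouped by action, however it was chosen
    randomised = {"A": [], "B": []}  # outcomes from the randomised branch only
    for _ in range(n):
        hidden = random.random() < 0.5              # confounder the normal policy can see
        if random.random() < eps:
            action, is_random = random.choice(["A", "B"]), True
        else:
            action, is_random = ("A" if hidden else "B"), False  # policy tracks the confounder
        # True causal effect of A over B is +1; the confounder adds +5 regardless of action.
        outcome = (1 if action == "A" else 0) + (5 if hidden else 0) + random.gauss(0, 1)
        naive[action].append(outcome)
        if is_random:
            randomised[action].append(outcome)
    avg = lambda xs: sum(xs) / len(xs)
    print("naive A-B difference:     ", avg(naive["A"]) - avg(naive["B"]))           # ~6 (confounded)
    print("randomised A-B difference:", avg(randomised["A"]) - avg(randomised["B"]))  # ~1 (causal)

simulate()
```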
6RyanCarey4mo
I agree with all of this! A related shortform here.
4mattmacdermott4mo
An interesting development in the time since your shortform was written is that we can now try these ideas out without too much effort via Manifold. Anyone know of any examples?
[-]Mikhail Samin2mo*40

The IMO organizers asked AI labs not to share their IMO results until a week later to not steal the spotlight from the kids. IMO organizers consider OpenAI's actions "rude and inappropriate".

https://x.com/Mihonarium/status/1946880931723194389 

Reply
1ShardPhoenix2mo
Based on the last paragraph it doesn't sound like OpenAI specifically was asked to do this?
2Mikhail Samin2mo
The screenshot is not the source for "The IMO organizers asked OpenAI not to share their IMO results until a week later".
[-]Mikhail Samin2y3-1

People are arguing about the answer to the Sleeping Beauty problem! I thought this was pretty much dissolved with this post's title! But there are lengthy posts and even a prediction market!

Sleeping Beauty is an edge case where different reward structures are intuitively possible, and so people imagine different game payout structures behind the definition of “probability”. Once the payout structure is fixed, the confusion is gone. With a fixed payout structure and preference framework rewarding the number you output as “probability”, people don’t have a disagreem... (read more)

Reply
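To make the payout-structure point above concrete, here is a toy simulation (my own illustration, using a Brier-score payout as one arbitrary choice): if Beauty is penalised at every awakening, reporting 1/3 minimises her expected penalty; if she is penalised once per experiment, 1/2 does.

```python
import random

def average_brier_penalty(p, per_awakening, n=100_000):
    """Average Brier penalty for always answering 'probability of Heads = p',
    under two different payout conventions."""
    total, count = 0.0, 0
    for _ in range(n):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2                    # Heads: wake once; Tails: wake twice
        penalty = (p - (1.0 if heads else 0.0)) ** 2
        if per_awakening:
            total += penalty * awakenings                 # penalised each time she answers
            count += awakenings
        else:
            total += penalty                              # penalised once per experiment
            count += 1
    return total / count

for p in (1/3, 1/2):
    print(f"p={p:.3f}  per-awakening={average_brier_penalty(p, True):.4f}  "
          f"per-experiment={average_brier_penalty(p, False):.4f}")
# Lower is better: per-awakening scoring favours p=1/3, per-experiment scoring favours p=1/2.
```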
4Martin Randall8mo
As the creator of the linked market, I agree it's definitional. I think it's still interesting to speculate/predict what definition will eventually be considered most natural.
[-]Mikhail Samin6mo2-10

I do not believe Anthropic as a company has a coherent and defensible view on policy. It is known that, while hiring, they said things they didn't actually stand behind (they claim to have had good internal reasons for changing their minds, but people went to work for them because of impressions Anthropic created and later decided not to live up to). It is known among policy circles that Anthropic's lobbyists are similar to OpenAI's.

From Jack Clark, a billionaire co-founder of Anthropic and its chief of policy, today:

Dario is talking about countries of geniuses in datacenters in the... (read more)

Reply
1sjadler6mo
I’ve only seen this excerpt, but it seems to me like Jack isn’t just arguing against regulation because it might slow progress - rather, it’s something more like: “there’s some optimal time to have a safety intervention, and if you do it too early because your timeline bet was wrong, you risk having worse practices at the actually critical time because of backlash.” This seems probably correct to me? I think ideally we’d be able to be cautious early and still win the arguments to be appropriately cautious later too. But empirically, I think it’s fair not to take that as a given?
[-]Mikhail Samin3mo10

kudos to LW for making a homepage theme advertising the book!

Reply
2plex3mo
Yeah! This makes me want LW darkmode.
4Ben Pace3mo
You're one of today's lucky 10 – we already have a dark mode! It's in the menu in the top right, under 'theme'.
2plex3mo
There I was, looking under Account Settings -> Site Customizations like a fool
[-]Mikhail Samin3y*10

[RETRACTED after Scott Aaronson’s reply by email]

I'm surprised by Scott Aaronson's approach to alignment. He has mentioned in a talk that a research field needs to have at least one of two things: experiments or a rigorous mathematical theory, and so he's focusing on the experiments that are possible with current AI systems.

The alignment problem is centered on optimization producing powerful consequentialist agents when you search in spaces that contain capable agents. The dynamics at the level of superhuman general agents are not something ... (read more)

[This comment is no longer endorsed by its author]Reply