Mostly this is based on vibes, plus the MIRI team trying hard, seeming very successful, and getting a lot of buzz, great blurbs, some billboards, etc.
I saw this tweet:
E.g., the book is likely to become a NYT bestseller. The exact position can be improved by more pre-orders. (The figure is currently at around 5k pre-orders, according to the q&a; +20k more would make it a #1 bestseller).
ChatGPT says about that:
If preorders = 5k, you’re probably looking at 8k–15k total copies sold in week 1 (preorders + launch week sales).
Recently, nonfiction books debuting around 8k–12k week-1 copies often chart #8–#15 on the NYT list.
Lifetime Sales Ranges
Conservative: 20k–30k copies total (good for a nonfiction debut with moderate buzz).
Optimistic: 40k–60k (if reviews, media, podcasts, or TikTok keep it alive).
Breakout: 100k+ (usually requires either a viral moment, institutional adoption, or the author becoming part of a big public debate).
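For what it's worth, here's a minimal back-of-envelope sketch of the multiplier implied by those numbers. The 1.6x–3.0x factor is my own illustrative assumption, reverse-engineered from the "5k preorders → 8k–15k week-1 copies" figure above, not something from the Q&A:

```python
# Back-of-envelope sketch (illustrative assumptions only): week-1 sales as a
# multiple of preorders, using a 1.6x-3.0x factor reverse-engineered from the
# "5k preorders -> 8k-15k week-1 copies" estimate quoted above.

def week1_range(preorders: int, low_mult: float = 1.6, high_mult: float = 3.0):
    """Rough week-1 sales range given preorders and assumed multipliers."""
    return int(preorders * low_mult), int(preorders * high_mult)

if __name__ == "__main__":
    for preorders in (5_000, 25_000):  # 25k = the claimed threshold for #1
        lo, hi = week1_range(preorders)
        print(f"{preorders:,} preorders -> roughly {lo:,}-{hi:,} week-1 copies")
```

Treat the multipliers as a guess; the point is just that chart position depends heavily on how many launch-week copies get added on top of preorders.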
Is that a lot? I don't actually know; my guess would be that it's not that many, but it's a decent number and might get a lot of buzz, commentary, etc. This is a major crux, so I'd be interested in takes.
I think the arguments here are clear, but let me know if not.
e.g.
Very interested in takes!
I also think this is likely to cause folks to look into the situation and ask, "Is it really this bad?" I think it's helpful to point them to the fact that yes, Yudkowsky and Soares are accurately reporting that the AI CEOs think they're gambling with the world at roughly Russian-roulette odds [1]. I also think it's important to emphasize that a bunch of us have a bunch of disagreements with them, whether nuanced or blunt, and are still worried.
Why? Because lots of folks live in denial that it's even possible for AI as smart as humans to exist one day, much less superintelligent AI soon. Often their defense mechanism is to pick at bits of the story. Reinforcing that even if you pick at bits of the story you still are worried is a helpful thing.
[1] Not trying to pick round ninety zillion of the fight about whether this is a good or bad idea, etc.!
True, although I wish more people would engage with the common anti-AI-x-risk argument of "tech CEOs are exaggerating existential risk because they think it'll make their products seem more important and potentially world changing, and so artificially boost hype". Not saying I agree with this, but there's at least some extent to which it's true, and I think this community often fails to appropriately engage with and combat this argument.
In general, this is why "appeal to authority" arguments should generally be avoided if we're talking about people who are widely seen as untrustworthy and as having ulterior motives. At most, I think people like Geoffrey Hinton are seen as reputable and not as morally compromised, so they serve as better subjects for an appeal to authority; but mostly, rather than appealing to authority at all, we should just try to bring things back to the object-level arguments.
I think this community often fails to appropriately engage with and combat this argument.
What do you think that looks like? To me, that looks like "give object-level arguments for AI x-risk that don't depend on what AI company CEOs say." And I think the community already does quite a lot of that, although giving really persuasive arguments is hard (I hope the MIRI book succeeds).
Here are some of my attempts at it, which I think stand out as unusual compared to how most respond; there are subverbal insights I haven't yet nailed down in how I approached this, hence the link instead of an explanation.
I'd currently summarize the view not as "CEOs scare people" but as "any publicity seems to be good publicity, even when warning of extinction, as if the warnings of extinction are interpreted by most to be cynical lies even when backed up by argumentation". I suspect that at least part of what's going on is that when someone doesn't comprehend the details of an argument, there's some chance they interpret it as an intentional lie (or some other type of falsehood, perhaps one that's accidental on the author's part and yet valuable to the egregore).
“if you care about this, here’s a way to get involved”
My understanding is that MIRI expects alignment to be hard, expects an international treaty to be needed, and believes that a considerable proportion of the work that gets branded as "AI safety" is either unproductive or counterproductive.
MIRI could of course be wrong, and it's fine to have an ecosystem where people are pursuing different strategies or focusing on different threat models.
But I also think there's some sort of missing mood here insofar as the post is explicitly about the MIRI book. The ideal pipeline for people who resonate with the MIRI book may look very different from the typical pipelines for people who get interested in AI risk (and indeed, in many ways I suspect the MIRI book is intended to spawn a different kind of community and a different set of projects than the community/projects that dominated the 2020–2024 period, for example).
Relatedly, I think this is a good opportunity for orgs/people to reassess their culture, strategy, and theories of change. For example, I suspect many groups/individuals would not have predicted that a book making the AI extinction case so explicitly and unapologetically would have succeeded. To the extent that the book does succeed, it suggests that some common models of "how to communicate about risk" or "what solutions are acceptable/reasonable to pursue" may be worth re-examining.
But if what's actually happening is that people interpret it as cynical dishonesty that doesn't believe its own doom arguments, and therefore must be motivated by whatever the next most likely reason is to make a doom argument, which seems to be a common reaction, then it may instead backfire. I find it very hard to tell whether this is happening, and I know of many people who think it's the only thing that happens. I certainly think it's a thing that happens at least sometimes.
I've been thinking this same thing for a while now, but coming at it from a different direction. I'm worried, and I'm not sure what to do about it. I've tried writing up some suggestions, but nothing has felt useful enough to post. To try and explain my position, I'll give a vague ramble comment here instead.
--
Yeah, I think it's possible the book will be a big deal. If it does make a significant splash, the Overton window might take a big knock, all at once. It's possible that the collective eye of the world turns onto us. Onto LessWrong. How do we prep for that?
In a way that I adore, this community is a bunch of weirdos. We are not normal. We hold opinions that are vastly different from most of the world. If this book gets the reception it deserves, I think it'll be pretty easy to spin up articles dunking on LW. I imagine something like "Eugenics loving, Polygamous, vegan, SBF funded, Shrimp obsessed, Harry Potter fanfic, doomsday, sex cult, warns end times are near, in NYTs best seller".
I am afraid of the eye, looking down at us, calling us bad people, and I am afraid of the split. I do not want there to be the Blue tribe, the Red tribe, and the Grey tribe. I do not want this issue to become a culture war topic. How do we plan to avoid this outcome? If the book is successful, how do we steer the narrative away from "Group X wants to kill us all by doing Y!" and more into the realm of "Oh, this is a big deal, and we need to all work together to solve it"?
And how do we avoid being Carrie-ed in the cultural spotlight? How do we avoid people protesting in ways that are not beneficial to the cause? If we 'win' this thing, it seems to me, we need the support of the average person. But where is our relatable figure? Yudkowsky is a wonderful writer and a quick-thinking speaker. But he is not a relatable figurehead, and he is, unfortunately, somewhat easy to take jabs at.
Relevant fiction here is An Absolutely Remarkable Thing, by Hank Green, in which the protagonist, April May, is thrown into the world's spotlight after an encounter with a mysterious robot. I'd recommend the book any time, but to me it feels especially relevant now.
As stated, I am afraid, and it's possible my anxieties are projections of my own feelings. I'd be thankful to someone who could calm my anxiety with some logical argument. But, as of now, I think this emotion is telling me something important.
Is anyone using the book as a funnel to LessWrong? I don't think MIRI are (afaik). The only event going on in the UK (again, afaik) is being jointly hosted by Pause AI and Control AI, with some other local community members helping out, and it is not going to be a funnel for LW at all. I assume Lighthaven is doing something (haven't checked), but are they going to say "If you like this book, you'll love our online forum"?
Moreover, is using LessWrong as the default funnel a good idea in the first place? I'd guess not. I know lots of people (notably Oliver Habryka) don't approve of Pause AI or Control AI, but I assume there must be other directions for suddenly-invigorated normies to be pointed in (though I've not actually looked for them).
Conversational moves in EA / Rationality that I like for epistemics
Can I bet the last 3 points are a joke?
Anyway, do we have a method for finding checkpoints or milestones to bet on, to track progress against a certain problem (e.g. safe AI development, global warming)?
This is a butterfly idea, but it gestures at something that's probably true: our intuitions of whether something is a joke can be used to generate jokes, or at least be amused when we find out (in either direction - we were right, or we were wrong). I'm not quite up for a babble on the topic, but I kind of hope someone explores it.
"Butterfly idea" is real (there was a post proposing and explaining it as terminology; perhaps someone else can link it.)
"Gesture at something" is definitely real, I use it myself.
"Do a babble" is new to me but I'd bet on it being real also.
I got frightened off by the ratio you've offered, so I'm not taking it, but thank you for offering. I might reconsider with some lesser amount that I can consider play money. Is there even a viable platform/service for a (maybe) $1:$100 individual bet like this?
Haha! $1 is not worth the transaction cost to me. Let us consider it moot, and I'll let you know I've used all three phrases and had them used by others in convo with me.
Thanks for the links; they clarified a lot for me. The names of the tactics/techniques sounded strange to me, and after unsuccessfully googling for their meanings, I started to believe it was a joke played on your readers. Sorry if this suspicion of mine seemed rude.
The second part was curiosity about exploring some potential cases of "What could we bet on?"
Leopold Aschenbrenner is starting a cross between a hedge fund and a think tank for AGI. I have read only the sections of Situational Awareness most relevant to this project, and I don't feel like I understand anywhere near all of the implications, so I could end up being quite wrong. Indeed, I’ve already updated towards a better and more nuanced understanding of Aschenbrenner's points, in ways that have made me less concerned than I was to begin with. But I want to say publicly that the hedge fund idea makes me nervous.
Before I give my reasons, I want to say that it seems likely most of the relevant impact comes not from the hedge fund but from the influence the ideas from Situational Awareness have on policymakers and various governments, as well as the influence and power Aschenbrenner and any cohort he builds wield. This influence may come from this hedge fund or be entirely incidental to it. I mostly do not address this here, but it does make all of the below less important.
I also believe that some (though not all) of my concerns about the hedge fund are based on specific disagreements with Aschenbrenner’s views. I discuss some of those below, but a full rebuttal this is not (and on many of the points of disagreement I don’t yet feel confident in my view). There is still plenty to do to hash out the actual empirical questions at hand.
Why I am nervous
A hedge fund investing in AI-related assets means Aschenbrenner and his investors will gain financially from more and faster AGI progress. This seems to me to be one of the most important dynamics (excluding the points about influence above). It creates an incentive to push for more AGI progress, even at the cost of safety, which seems quite concerning. I will say that Leopold has a good track record here of turning down money: he declined to sign an NDA at OpenAI despite the loss of equity.
Aschenbrenner expresses strong support for the liberal democratic world to maintain a lead on AI advancement, and ensure that China does not reach an AI-based decisive military advantage over the United States[1]. The hedge fund, then, presumably aims to both support the goal of maintaining an AI lead over China and profit off of it. In my current view, this approach increases race dynamics and increases the risks of the worst outcomes (though my view on this has softened somewhat since my first draft, for reasons similar to what Zvi clarifies here[2]).
I especially think that it risks unnecessary competition when cooperation (the best outcome) could still be possible. It seems notable, for example, that no Chinese version of the Situational Awareness piece has come to my attention; going first in such a game ensures both that you are first and that the game is played at all.
It’s also important that the investors (e.g. Patrick Collison) appear to be more focused on economic and technological development, and less concerned about risks from AI. The incentives of this hedge fund are therefore likely to point towards progress and away from slowing down for safety reasons.
There are other potential lines of thought here that I have not yet fleshed out, including:
Ways that the hedge fund could in fact be a good idea:
EA and AI causes could really use funder diversification. If Aschenbrenner intends to use the money he makes to support these issues, that could be very valuable (though I’ve certainly become somewhat more concerned with moonshot “become a billionaire to save the world” plans than I used to be).
The hedge fund could position Aschenbrenner to have a deep understanding of and connections within the AI landscape, making the think tank outputs very good, and causing important future decisions to be made better.
Aschenbrenner of course could be right about the value of the US government’s involvement, maintaining a US lead, and the importance of avoiding Chinese military supremacy over the US. In that case, him achieving his goals would indeed be good. Cruxes include the likelihood of international cooperation, the possibility of international bans, the probability of catastrophic outcomes from AI, and the likelihood of “muddling through” on alignment.
I’m interested in hearing takes, ways I could be wrong, fleshing out of my arguments, or any other thoughts people have relevant to this. Happy to have private chats in DMs to discuss as well.
To be clear, Aschenbrenner wants that lead to exist to avoid a tight race in which safety and caution are thrown to the winds. If we can achieve that lead primarily through infosecurity (something he emphasizes), then added risks are low; but I think the views expressed in Situational Awareness also imply the importance of staying technologically ahead of China as their AI research improves. This comes with precisely the risks of creating and accelerating a race of this nature.
Additionally, when I read his description of the importance of even a two month lead, it implied to me that if the longer, more comfortable lead is lost, there will be strong reasons for the US to advance quickly so as to avoid China reaching superintelligence and subsequent military dominance first (which doesn’t mean he thinks we should actually do this if the time came). This seems to fairly explicitly describe the tight race scenario. I don’t think Aschenbrenner believes this would be a good situation to be in, but nonetheless thinks that’s what the true picture is.
From Zvi’s post: “He confirms he very much is NOT saying this:
The race to ASI is all that matters.
The race is inevitable.
We might lose.
We have to win.
Trying to win won’t mean all of humanity loses.
Therefore, we should do everything in our power to win.
I strongly disagree with this first argument. But so does Leopold.
Instead, he is saying something more like this:
ASI, how it is built and what we do with it, will be all that matters.
ASI is inevitable.
A close race to ASI between nations or labs almost certainly ends badly.
Our rivals getting to ASI first would also be very bad.
Along the way we by default face proliferation and WMDs, potential descent into chaos.
The only way to avoid a race is (at least soft) nationalization of the ASI effort.
With proper USG-level cybersecurity we can then maintain our lead.
We can then use that lead to ensure a margin of safety during the super risky and scary transition to superintelligence, and to negotiate from a position of strength.”