This is actually the first writing from Altman I've ever read in full, because I find him entirely untrustworthy, so perhaps there's a style shock hitting me here. Maybe he just always writes like an actual cult leader. But damn, it was so much worse than I expected.
Very little has made me more scared of AI in the last ~year than reading Sam Altman try to convince me that "the singularity will probably just be good and nice by default and the hard problems will just get solved as a matter of course."
Something I feel is missing from your criticism, and also from most responses to anything Altman says, is "What mechanism in your singularity-seeking plans is there to prevent you, Sam Altman CEO of OpenAI, from literally gaining control of the entire human race and/or planet earth on the other side of singularity?"
I would ask this question because while it's obvious that a likely outcome is the disempowerment of all humans, another well-known fear of AI is enabling indestructible autocracies through unprecedented power imbalances. If OpenAI's CEO can personally instruct a machine god to manipulate public opinion in ways we've never even conceptualized before, how do we not slide into an eternity of hyper-feudalism almost immediately?
He is handwaving away the threat of disempowerment in his official capacity as one of the few people on earth who could end up absorbing all that power. For me to personally make the statements he made would be merely stupid, but for OpenAI's CEO to make them is terrifying.
I guess I don't know if disempowerment-by-a-god-emperor is really worse than disempowerment-without-a-god-emperor, but my overall fear of disempowerment is raised by his obvious incentive to hide that outcome.
For me to personally make the statements he made would be merely stupid, but for OpenAI's CEO to make them is terrifying.
I think these statements would be terrifying coming from any arbitrary CEO, but Sam Altman in particular has a track record of manipulating people and squashing safety concerns for the specific goal of giving himself more power, and successfully thwarting attempts to reduce his power.
Also, it's not like these ideas are new to him or that he hasn't thought about them before. See the Musk v. Altman emails, ctrl-f "AGI dictatorship".
For those who have not heard it, "Singularity, Singularity, Singularity, Singularity, Oh, I Don’t Know" is, I believe, a reference to this (banger of a) song.
They all hope they’ll have enough time to discuss possible plans with the very smart AI systems that are coming.
Ilya has been very explicit about it, but all AI lab leaders must be hoping for that…
"in a way, o3 is more intelligent than any human" — seriously doubt this. It is more like Donald Trump, it knows nothing but tells you that everything is great and here is an inconcrete answer to your question, and now I ask you a follow-up question.
In nearly three years of asking ChatGPT questions, I leave the chat furious in 80% of cases with a "fuck you, I am talking to a Chinese room, not an entity that can tell me more than rubber duck debugging can".
The fucks in the conversation frequently start at my second question, and they make the chatbot's replies more concise and sometimes even make it backtrack and take a different path, so I like to swear a lot.
Thanks For the Memos
Sam Altman offers us a new essay, The Gentle Singularity. It’s short (if a little long to quote in full), so given that you read my posts it’s probably worth reading the whole thing.

First off, thank you to Altman for publishing this and sharing his thoughts. This was helpful, and contained much that was good. It’s important to say that first, before I start tearing into various passages and pointing out the ways in which this is trying to convince us that everything is going to be fine, when very clearly the default is for everything to be not fine.

Prepare For High Weirdness
Assuming we agree that the takeoff has started, I would call that the ‘calm before the storm,’ or perhaps ‘how exponentials work.’ Being close to building something is not going to make the world look weird. What makes the world look weird is actually building it. Some people (like Tyler Cowen) claim o3 is AGI, but everyone agrees we don’t have ASI (superintelligence) yet.

Also, frankly, yeah, it’s super weird that we have these LLMs we can talk to, it’s just that you get used to ‘weird’ things remarkably quickly. It seems like it ‘should be weird’ (or perhaps ‘weirder’?) because what we do have now is still unevenly distributed and not well-exploited, and many of us including Altman are comparing the current level of weirdness to the near-future True High Weirdness that is coming, much of which is already baked in.

If anything, I think the current low level of High Weirdness is due to us, as I argue later, not being used to these new capabilities. Why do we see so few scams, spam and slop and bots and astroturfing and disinformation, deepfakes, cybercrime, giant boosts in productivity, talking mainly to AIs all day, actual learning and so on? Mostly I think it’s because People Don’t Do Things and don’t know what is possible.

Short Timelines
That’s a bold prediction, modulo the ‘may’ and the values of ‘tasks in the real world’ and ‘novel insights.’ And yes, I agree that the following is true, as long as you notice the word ‘may’:

Note not only the ‘may’ but the low bar for ‘not be wildly different.’ The people of 1000 BCE did all those things, plausibly they also did them in 10,000 BCE or 100,000 BCE. Is that what would count as ‘not wildly different’? This is essentially asserting that people in the 2030s will be alive. Well, I hope so!

Not Used To It
I get why one would say this, but it seems very wrong? First of all, who is this ‘us’ of which you speak? If the ‘us’ refers to the people of Earth or of the United States, then the statement to me seems clearly false. If it refers to Altman’s readers, then the claim is at least plausible. But I still think it is false. I’m not used to o3-pro. Even I haven’t found the time to properly figure out what I can fully do with even o3 or Opus without building tools.

We are ‘used to this’ in the sense that we are finding ways to mostly ignore it because life is, as Agnes Callard says, coming at us 15 minutes at a time, and we are busy, so we take some low-hanging fruit and then take it for granted, and don’t notice how much is left to pick. We tell ourselves we are used to it so we can go about our day.

Singularity, Singularity, Singularity, Singularity, Oh, I Don’t Know (In That Order)
I note that Robin Hanson responded to this here: o3 pro, you want to take this one?

Oh right, nominal and contractual stickiness, institutional price controls, surplus division, supply response, measurement mismatch, time reallocation, you get maybe a 10% pay bump, wages track bargained marginal revenue not raw technical output, and both lag the technical shock by years. I did, however, notice my economics being several times more productive there.

Yep. Then table stakes, recursive self-improvement, self-perpetuating growth, a robot-based parallel physical production economy. His timeline seems to be AI 2028. Then true superintelligence, then it keeps going after that, then what?

It’s important to notice that this ‘adapt to anything’ is true in some ways and not in others. There are some things that are like decapitations, in that you very much cannot adapt because they kill you, dead. Or that deny you the necessary resources to survive, or to compete. You can’t ‘adapt’ to compete with someone or something sufficiently more capable than you.

I sigh every time I see this ‘well in the past we’ve adapted and there were more things to do, so in the future when we make superintelligent things universally better at everything no reason this shouldn’t still be true, we just need some time.’ Um, no? Or at least, definitely not by default?

I seriously don’t understand how you can expect robots by 2028 and wonders beyond the imagination along with superintelligence by 2035, and still think humans will mostly do the things we usually do, only with more capabilities at our disposal, or something? It’s like there’s some sort of semantic stop sign to not think about the obvious implications? Is there an actual model of what this world looks like?

They Took Our Fake Jobs
There are two halves to this. The first half is, would the subsistence farmer think the jobs were fake? For some jobs yes, but once you explained what was going on and they got over future shock, I don’t think their breakdown of real versus fake would be that different from that of a farmer today. They might think a lot of them are not ‘necessary,’ that they were products of great luxury, but that would not be different than how they thought about the jobs of those at their king’s court.

I too hope that when I look a thousand years into the future I see people at all, who are actually alive, doing things at all. I hope they move beyond thinking of them quite as ‘jobs’ but I will happily take jobs.

This time is different, however. Before, humans built tools and grew more capable through those tools, opening up our ability to do more things. The thing Altman is describing is very obviously, as I keep saying, not a mere tool. Humans will no longer be the strongest optimizers, or the smartest minds, or the most capable agents. Anything we can do, AI can do better, except insofar as the thing doesn’t count unless a human does it. Otherwise, an AI does the new job too.

Altman talks about some people deciding to ‘plug in’ to machine-human interfaces while others choose not to. Won’t this be like deciding not to have a phone or not use computers, only vastly more so, and also the computer and phone are applying for your job? Then again, if all the jobs that involve the AI are done better by the AI alone anyway, including manual labor via robots, perhaps you don’t lose that much by not plugging in? And indeed, if there are jobs that ‘require you be a human,’ they might also require that you not be plugged in.

There Isn’t Going To Be a Merge [Finish Meme Here]
Think about chess. First humans beat AIs. Then AIs beat humans, but for a brief period AI and humans working together, essentially these ‘merges,’ still beat AIs. Then the humans no longer added anything. We’re going through the same process in a lot of places, like diagnostic reasoning, where the doctor is arguably already a net negative when they don’t accept the AI’s opinion.

Now, humans use the AIs to train, perhaps, but they don’t ‘merge’ or ‘plug in’ because if they did that then the AIs would be playing chess. We want two humans to play chess, so they need to be fully unplugged, or the exercise loses its meaning.

So, again, seriously, ‘merge’? Why do people think this is a thing? I have no idea where this expectation is coming from, other than that the people won’t have a say in it. The singularity will, as Douglas Adams wrote about deadlines, give a whoosh as it flies by. That will be that. There’s nothing to manage, your services are no longer required.

Fun Little Detail About Compute
Jaime Sevilla notes that Altman’s estimate here (that the average ChatGPT query uses about 0.34 watt-hours, about what an oven would use in a little over one second, and roughly one fifteenth of a teaspoon of water) is similar to Epoch’s estimate of 0.3 watt-hours, which was a 90% reduction over previous estimates. Compute efficiency is improving rapidly. Also note o3’s 80% price drop.
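As a quick back-of-the-envelope check on those per-query figures, here is a minimal sketch; the oven wattage and teaspoon volume below are my assumptions, not numbers from the essay or from Epoch:

```python
# Back-of-the-envelope check of the per-query figures quoted above.
# Assumptions (mine, not from the essay): a typical electric oven draws
# roughly 1,000-2,400 W, and a US teaspoon is about 4.93 mL.

query_energy_wh = 0.34                   # Altman's estimate, watt-hours per query
query_energy_j = query_energy_wh * 3600  # convert watt-hours to joules

oven_watts = 1000                        # assumed oven draw, lower end of the range
print(f"Oven-equivalent time: {query_energy_j / oven_watts:.1f} s")  # ~1.2 s

teaspoon_ml = 4.93                       # assumed US teaspoon volume
print(f"Water per query: {teaspoon_ml / 15:.2f} mL")                 # ~0.33 mL
```

At the high end of that assumed wattage range the oven-equivalent time drops to about half a second, so ‘a little over one second’ implies an oven near the lower end; either way the per-query numbers are tiny, consistent with the point about rapidly improving compute efficiency.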
Oh Look It’s The Alignment Plan, And It’s “Solve Alignment”

Like Dario Amodei’s Machines of Loving Grace, the latest Altman essay spends the bulk of its time handwaving away all the important concerns about such futures, both in terms of getting there and what the ‘there’ is we even want to get to. It’s basically wish fulfillment.

There’s some value in pointing out that such worlds will, to the extent we can direct those worlds and their AIs to do things, be very good at wish fulfillment. There are some people who need to hear that. But that’s not the hard part.

Finally, we get to the ‘serious challenges to confront’ section. That’s it. Allow me to summarize this plan:

- Solve the alignment problem so AIs learn and act towards what we collectively really want over the long term.
- Make superintelligence cheap, widely available, and not too concentrated.
- We’ll adapt, muddle through, figure it out, user freedom, it’s all good, society is resilient and adapts quickly.
I’m sorry, but that’s contradictory, doesn’t address the hard questions, isn’t an answer. Instead it passes the buck to ‘society’ to discuss and answer the questions, when for various overdetermined reasons we do not seem capable of having these conversations in any serious fashion – and indeed, the moment Altman comes up against possibly joining such a conversation, he punts, and says ‘you first.’ Indeed, he seems to acknowledge this, and to want to wait until after the singularity to figure out how to deal with the effects of the singularity – he says ‘the sooner the better’ but the plan is clearly not to get to this all that soon.

This all assumes facts not in evidence and likely false in context. Why should we expect society to be resilient in this situation? What even is ‘society’ when the smartest minds and most capable agents are AIs? How do users have this freedom, and how is the intelligence widely available, if the agents are all going to act towards ‘what we collectively really want over the long term,’ and how do you reconcile these conflicting goals? Who decides what all of that means? If the power is diffused, how do we avoid inevitable gradual disempowerment? What ‘broad bounds’ are we going to ‘decide upon’ and how are we deciding? How would certain people feel about the call for diffusion of power outside of ‘any one country,’ and how does this square with all the ‘America must win the race’ talk? Either the power diffusion is meaningful or it isn’t. Given history, why would you expect there to be such a voluntary diffusion of power?

That’s in addition to not addressing the technical aspect of any of this, at all. Yes, good, ‘solve the alignment problem,’ but how the heck do you propose we do that? For any definition of that plan, especially if this has to survive wide distribution?

I get that none of that is ‘the point’ of The Gentle Singularity. But the right answers to those questions are the only way such a singularity can stay gentle, or end well. It’s not an optional conversation.

Social media feeds are in many ways a highly helpful example and intuition pump here, as they illustrate what ‘aligned’ means. Those feeds are clearly aligned to the companies in question. There’s an additional question of whether such actions are indeed in the best interests of the companies, but for this purpose I think we should accept that they likely are. Thus, alignment here means aligned to the user’s longer term best interests, and how long term and how paternalistic that should be are left as open questions.

The place this is potentially misleading is, if we did get a feed that was aligned to the user’s ‘true’ preferences in some sense, then is it aligned? What if that was against what we collectively ‘really want,’ let’s say because it encourages too much social media use by being too good, or it doesn’t push you enough towards making new friends? And that’s only a microcosm. Not doing the social media misalignment thing is relatively easy – we all know how to align the algorithm to the user far better, and we all know why it isn’t done. The general case job here is vastly harder.

The Plan After Solving Alignment Is To Muddle Through
Matt Yglesias wishes Sam Altman and others would tell us which policy ideas they think we should entertain, since he mentions that a much richer world could entertain new ideas. My honest-Altman response is two-fold.

- We’re not richer yet, so we can’t yet entertain them. There’s a reason Altman says we won’t adopt the new social contract all at once. So it would be unwise to tell the world what they are. I think this is actually a strong argument. There are many aspects of the future that will require sacred value tradeoffs; if you take any side of one of those you open yourself up to attack, and if you do so before it is clear the tradeoff is forced it is vastly worse. There’s no winning doing this.
- If we do get into this richer position with the ability to meaningfully enact policy, if we are all still alive and in control over the future, then this is the part where we can adapt and muddle through and fix things in post. We can have that discussion later (and have superintelligent help). There’s no need to get distracted by this.
An obvious objection is: what makes us think we can use, demand or enforce social contracts in such a future? The foundations of social contract theory don’t hold in a world with superintelligence. I think, contra many science fiction writers, that a very rich future world will choose to treat its less fortunate rather well even if nothing is forcing the elite to do so, but also nothing will be forcing the elite to do so.

Supposedly Smooth Sailing
Finally, like Rob Wiblin I notice I am confused by this closing: It is hard not to interpret this, and many aspects of the essay, as essentially saying ‘don’t worry, nothing to see here, we got this, wonders beyond your imagination with no downsides that can’t be easily fixed, so don’t regulate me. Just go gently into that good night, and everything will be fine.’

I don’t understand why people respond so often with the common counterpoint of ‘well the singularity hasn’t happened yet, so the idea that it will hit you hard when it does come hasn’t been borne out.’ That doesn’t bear on the question at all. Max Kesin sums up the appropriate response: