Thanks for the overview and for the link to that old LiveJournal post by Scott Alexander!
So, Ilya is no longer talking to us, and Sam is talking, but "does not tell us the whole truth", to say the least.
Yet, I think this interview indicates that Sam's thinking is evolving. His earlier thinking was closer to the old OpenAI "official superalignment position", namely aiming to steer and control superintelligent AI systems, which on that view should be thought of as (very powerful) tools. And there are all kinds of problems with that approach.
Now he seems to be moving closer to Ilya's way of thinking.
If he is comfortable with the idea of being replaced by an AI CEO of OpenAI, it seems to me that the aim to steer and control superintelligent AIs is being abandoned.
And his musings about AI which "really helps you to figure out what your true goals in life are" do resonate quite a bit with the second point of Ilya's thinking here (the "Second Challenge"): https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a
So, to the extent that we think Ilya's approach as sketched in 2023 makes better sense than the mainstream approach, this might be a positive sign...
Sam Altman talked recently to Theo Von.
Theo is genuinely engaging and curious throughout. This made me want to consider listening to his podcast more. I’d love to hang. He seems like a great dude.
The problem is that his curiosity gets redirected away from the places it would matter most. The Altman strategy of acting as if the biggest concerns, risks and problems flat out don’t exist successfully tricks Theo into not noticing them at all, and there are plenty of other things for him to focus on, so that is exactly what he does.
Meanwhile, Altman gets away with more of this ‘gentle singularity’ lie without using that term, letting it graduate to a background assumption. Dwarkesh would never.
Highlights, Quotes And Comments
Quotes are all from Altman.
Thank you, sir. Now actually take that to heart and consider the implications. It goes way beyond ‘maybe college isn’t a great plan.’
Why do you think the kids will be fine? Because they’re used to it? So it’s fine?
A new tool that is smarter than you are and super capable? Your words, sir.
True that. Can you please take your own statements seriously?
Again, please, I am begging you, take your own statements seriously.
And AI will be better at doing all of that. Yet Altman goes through all the past falsified predictions as if they apply here. He keeps going on and on as if the world he’s talking about is a bunch of humans with access to cool tools, except by his own construction those tools can function as OpenAI’s CEO and are smarter than people. It is all so absurd.
Highly plausible that this is important to people. I don’t see any plan for giving it to them? The solution here would be redistribution of a large percentage of the world’s compute, but even if you pull that off under ideal circumstances, no, that does not do it.
Well, sure, not at this capability level. Where is this hope coming from that it would continue for 100 years? Why does one predict the other? Which steps will humans still meaningfully do?
I think the actual plan is for the AI to lie to us? And for us to lie to ourselves? We’ll set it up so we have this idea that we matter, that we are important, and that will be fine? I disagree that this would be fine.
Altman discusses the parallel to discovering that Earth is not the center of the solar system, and the solar system is not the center of the galaxy, and so on, little blue dot. Well sure, but that wasn’t all that load bearing, we’re still the center of our own universes, and if there’s no other life out there we’re the only place that matters. This is very different.
Theo asks what Altman’s fears are about AI. Altman responds with a case where he couldn’t do something and GPT-5 could do it. But then he went on with his day. His second answer is impact on user mental health with heavy usage, which is a real concern and I’m glad he’s scared about that.
And then… that’s it. That’s what scares you, Altman? There’s nothing else you want to share with the rest of us? Nothing about loss of control issues, nothing about existential risks, and so on? I sure as hell hope that he is lying. I do think he is?
When asked about a legal framework for AI, Altman asks for AI privilege, sees this as urgent, and there is absolutely nothing else he thinks is worth mentioning that requires the law to adjust.
Theo then introduces Yoshua Bengio into the conversation, bringing up deception and sycophancy and neuralese.
Yes, we can all agree we don’t know. We get a lot of good attitude, the usually missing mood is present, but it doesn’t cash out in the missing concerns. ‘There’s clearly real risks,’ but in context that seems to refer to things like jobs and meaning and distribution.
The first half of this seems false for quite a lot of times and places? Sure, you don’t know how the fortunes of war might go, but for most of human history ‘100 years from now looks a lot like today’ was a very safe bet. ‘Nothing ever happens’ (other than cyclical wars and famines and plagues and so on) did very well. But yes, in 1800 or 1900 or 2000 you would have had remarkably little idea.
Theo equates this race to Formula 1 and asks what the race is for. AGI? ASI? Altman says benchmarks are saturated and it’s all about what you get out of the models, but we are headed for some model.
Yeah, those do seem like important things that represent effective ‘finish lines.’
NO NO NO NO NO! That is not what happens! The whole idea is this thing becomes better at solving all the problems, or at least a rapidly growing portion of all problems. He mentions this possibility shortly thereafter but says he doesn’t think ‘the simplistic thing works.’ The ‘simplistic thing’ will be us, the humans.
Please take this seriously, consider the implications of what you are saying, and solve for the equilibrium, or at least for what happens right away. Come on, man. The world doesn’t sit around acting normal while you get to implement some cool idea for an app.
Theo asks, would regular humans vote to keep AI or stop AI? Altman says users would say go ahead and non-users would say stop. Theo predicts most people would say stop it. My understanding is Theo is right for the West, but not for the East.
Altman asks Theo what he is afraid of with AI. Theo seems worried about They Took Our Jobs, about economic survival, and also about meaning, that we will be left to play zero-sum games of extraction. With Theo staying in Altman’s frame, Altman can pivot back to humans liking to be creative and help each other and so on, and pour on the hopium that we’ll all get to be creatives.
Altman says, you get less enjoyment from a ghost robotic kitchen setup, something is missing, you’d rather get the food from the dude who has been making it. To which I’d reply that most of this is that the authentic dude right now makes a better product, but ten years from now the robot will make a better product than the authentic dude. And yeah, there will still be some value you get from patronizing the dude, but mostly what you want is the food, and thus will the market speak. And then we’ve got Waymos with GLP-1 dart guns and burrito cannons for unknown reasons, when what you would actually get is a cheap, efficient and delicious food supply chain that I plan on enjoying very much, thank you.
I mean, I think a thing that efficiently gives you burritos does help you with your goals, and people will love it. If it’s violently shooting burritos into your face unprompted at random times, then no, but yeah, it’s not going to work like that.
I refer Altman to the parable of the whispering earring, but also this idea that the AI will remain a tool that helps individual humans accomplish their normal goals in normal ways, only smarter, is a fairy tale. Altman is providing hopium by implicitly assuming the overall structure of the world stays static, then assuming your personal AI is aligned to your goals and well-being, then making additional generous assumptions, and then saying that the result might turn out well.
On the moratorium on all AI regulations that was stripped from the BBB:
The proposal was, for all practical purposes, to have no guardrails. Lawmakers will say ‘it would be better to have one federal regulation than fifty state regulations’ and then ban the fifty state regulations but have zero federal regulation.
That’s good to hear compared to the alternative. Better to have those real concerns than an attempt to put a finger on the scale, although of course these are not the most important concerns.
As Altman points out, it would be easy to tell if they made the model biased. And I think doing it ‘cleanly’ is not so simple, as Musk has found out. Try to put your finger on the scale and you get a lot of side effects and it is all likely deeply embarrassing.
I’m noting that because I’m tired of people treating ‘maybe we build a Dyson sphere’ as a statement worthy of mockery and dismissal of a person’s perspective. Please note that Altman thinks this is very possibly the future.
Like being chased by a goose and asking ‘scared of what?’ But yes.
I’m skipping over a lot of interactions that cover other topics.
My Next Guest Needs No Introduction
Altman is a great guest, engaging, fun to talk to, shares a lot of interesting thoughts and real insights, except it is all in the service of painting a picture that excludes the biggest concerns. I don’t think the deflections I care about most (as in, flat out ignoring them hoping they will go away) are the top item on his agenda in such an interview, or in general, but such deflections are central to the overall strategy.
The problem is that those concerns are part of reality.
As in, something that, when you stop looking at it, doesn’t go away.
If you are interviewing Altman in the future, you want to come in with Theo’s curiosity and friendly attitude. You want to start by letting Altman describe all the things AI will be able to do. That part is great.
Except also do your homework, so you are ready when Altman gives answers that don’t make sense, answers that don’t take into account what Altman himself says AI will be able to do. Be ready to notice the negative space, the things that go unmentioned, and to point them out. Not as a gotcha or an accusation, but so as not to let him get away with ignoring them.
At minimum, you have to point out that the discussion is making one hell of a set of assumptions, ask Altman if he agrees that those assumptions are being made, and check how confident he is that those assumptions are true, and why, even if that isn’t going to be your focus. Get the crucial part on the record. If you ask in a friendly way, I don’t think there is a reasonable way to dodge answering.