Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go.
As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary.
If I am quoting directly I use quote marks, otherwise assume paraphrases.
The entire conversation takes place with an understanding, never stated explicitly, that no one is to mention existential risk or the fact that the world will likely transform. Both participants are happy to operate that way. I’m happy to engage in that conversation (while pointing out its absurdity in some places), but assume that every comment I make has an implicit ‘assuming normality’ qualification on it, even when I don’t say so explicitly.
On The Sam Altman Production Function
- Cowen asks how Altman got so productive, able to make so many deals and ship so many products. Altman says people almost never allocate their time efficiently, and that when you have more demands on your time you figure out how to improve. Centrally he figures out what the core things to do are and delegates. He says deals are quicker now because everyone wants to work with OpenAI.
- Altman’s definitely right that most people are inefficient with their time.
- Inefficiency is relative. As in, I think of myself as inefficient with my time, and think of the ways I could be a lot more efficient.
- Not everyone responds to pressure by improving efficiency, far from it.
- Altman is right to focus here on delegation.
- It is indeed still remarkable how many things OpenAI is doing at once, with the associated worries about it potentially being too many things, and not taking the time to do them responsibly.
On Hiring Hardware People
- What makes hiring in hardware different from in AI? Cycles are longer. Capital is more intense. So more time invested up front to pick wisely. Still want good, effective, fast-moving people and clear goals.
- AI seems to be getting pretty capital intensive?
- Nvidia’s people ‘are less weird’ and don’t read Twitter. OpenAI’s hardware people feel more like their software people than they feel like Nvidia’s people.
- My guess is there isn’t a right answer but you need to pick a lane.
- What makes Roon special? Lateral thinker, great at phrasing observations, lots of disparate skills in one place.
- I would add some more ingredients. There’s a sense of giving zero fucks, of having no filter, and having no agenda. Say things and let the chips fall.
- A lot of the disparate skills are disparate aesthetics, including many that are rare in AI, and taking all of them seriously at once.
- Altman doesn’t tell researchers what to work on. Researchers choose, that’s it.
- Email is very bad. Slack might not be good, it creates explosions of work including fake work to deal with, especially the first and last hours, but it is better than email. Altman suspects it’s time for a new AI-driven thing but doesn’t have it yet, probably due to lack of trying and unwillingness to pay focus and activation energy given everything else going on.
- I think email is good actually, and that Slack is quite bad.
- Email isn’t perfect but I like that you decide what you have ownership of, how you organize it, how you keep it, when you check it, and generally have control over the experience, and that you can choose how often you check it and aren’t being constantly pinged or expected to get into chat exchanges.
- Slack is an interruption engine without good information organization and I hate it so much, as in ‘it’s great I don’t have a job where I need slack.’
- There’s definitely room to build New Thing that integrates AI into some mix of information storage and retrieval, email slow communication, direct messaging and group chats, and which allows you to prioritize and get the right levels of interruption at the right times, and so on.
- However this will be tricky, you need to be ten times better and you can’t break the reliances people have. False negatives, where things get silently buried, can be quite bad.
On What GPT-6 Will Enable
- What will make GPT-6 special? Altman suggests it might be able to ‘really do’ science. He doesn’t have much practical advice on what to do with that.
- This seems like a case of hitting the wall of ‘…and nothing will change much’, forcing Altman to go into contortions.
- One thing we learned from GPT-5 is that the version numbers don’t have to line up with big capabilities leaps. The numbers are mostly arbitrary.
Tyler isn’t going to let him off that easy. I don’t normally do this, but at this point exact words seem important, so I’m going to quote the transcript.
COWEN: If I’m thinking about restructuring an entire organization to have GPT-6 or 7 or whatever at the center of it, what is it I should be doing organizationally, rather than just having all my top people use it as add-ons to their current stock of knowledge?
ALTMAN: I’ve thought about this more for the context of companies than scientists, just because I understand that better. I think it’s a very important question. Right now, I have met some orgs that are really saying, “Okay, we’re going to adopt AI and let AI do this.” I’m very interested in this, because shame on me if OpenAI is not the first big company run by an AI CEO, right?
COWEN: Just parts of it. Not the whole thing.
ALTMAN: No, the whole thing.
COWEN: That’s very ambitious. Just the finance department, whatever.
ALTMAN: Well, but eventually it should get to the whole thing, right? So we can use this and then try to work backwards from that. I find this a very interesting thought experiment of what would have to happen for an AI CEO to be able to do a much better job of running OpenAI than me, which clearly will happen someday. How can we accelerate that? What’s in the way of that? I have found that to be a super useful thought experiment for how we design our org over time and what the other pieces and roadblocks will be. I assume someone running a science lab should try to think the same way, and they’ll come to different conclusions.
COWEN: How far off do you think it is that just, say, one division of OpenAI is 85 percent run by AIs?
ALTMAN: Any single division?
COWEN: Not a tiny, insignificant division, mostly run by the AIs.
ALTMAN: Some small single-digit number of years, not very far. When do you think I can be like, “Okay, Mr. AI CEO, you take over”?
COWEN: CEO is tricky because the public role of a CEO, as you know, becomes more and more important.
- On the above in terms of ‘oh no’:
- Oh no. Exactly the opposite. Shame on him if OpenAI goes first.
- OpenAI is the company, in this scenario, out of all the companies, we should be most worried about handing over to an AI CEO, for obvious reasons.
- If you’re wondering how the AIs could take over? You can stop wondering. They will take over because we will ask them to.
- CEO is an adversarial and anti-inductive position, where any weakness will be systematically exploited, and big mistakes can entirely sink you, and the way that you direct and set up the ‘AI CEO’ matters quite a lot in all this. The bar for a net positive AI CEO is thus much higher than the AI making better decisions on average, or having better features on average. Altman says ‘on the actual decision making maybe the AI is pretty good soon’ but this is a place where I’m going to be the Bottleneck Guy.
- CEO is also a position where, very obviously, misaligned means your company can be extremely cooked, and basically everything in it subverted, even if that CEO is a single human. Most of the ways in which this is limited are because the CEO can only be in one place at a time and do one thing at a time, couldn’t keep an eye on most things let alone micromanage them, and would require conspirators. A hostile AI CEO is death or subversion of the company.
- The ‘public role’ of the CEO being the bottleneck does not bring comfort here. If Altman (as he suggests) is public face and the AI ‘figures out what to do’ and Altman doesn’t actually get to overrule the AI (or is simply convinced not to) then the problem remains.
- On the above in terms of ‘oh yeah’:
- There is the clear expectation from both of them that AI will rise, reasonably soon, to the level of at least ‘run the finance department of a trillion dollar corporation.’ This doesn’t have to be AGI but it probably will be, no?
- It’s hard for me to square ‘AIs are running the actual decision making at top corporations’ with predictions for only modest GDP growth. As Altman notes, the AI CEO needs to be a lot better than the human CEO in order to get the job.
- They are predicting billion-dollar 2-3 person companies, with AIs, within three years.
- Altman asks potential hires about their use of AI now to predict their level of AI adoption in the future, which seems smart. Using it as ‘better Google’ is a yellow flag, thinking about day-to-day in three years is a green flag.
- In three years Altman is aiming to have a ‘fully automated AI researcher.’ So it’s pretty hard to predict day-to-day use in three years.
On government backstops for AI companies
A timely section title.
- Cowen and Altman are big fans of nuclear power (as am I), but people worry about it. Cowen asks, do you worry similarly about AI and the similar Nervous Nellies, even if ‘AI is pretty safe’? Are the Feds your insurer? How will you insure everything?
- Before we get to Altman’s answer can we stop to think about how absolutely insane this question is as presented?
- Cowen is outright equating worries about AI to worries about nuclear power, calling both Nervous Nellies. My lord.
- The worry about AI risks is that the AI companies might be held too accountable? Might be asked to somehow provide too much insurance, when there is clearly no sign of any such requirement for the most important risks? They are building machines that will create substantial catastrophic and even existential risks, massive potential externalities.
- And you want the Federal Government to actively insure against AI catastrophic risks? To say that it’s okay, we’ve got you covered? This does not, in any way, actually reduce the public’s or world’s exposure to anything, and it further warps company incentives. It’s nuts.
- Not that even the Federal Government can actually insure us here even at our own expense, since existential risk or sufficiently large catastrophic or systemic risk also wipes out the Federal Government. That’s kind of the point.
- The idea that the people are the Nervous Nellies around nuclear, which has majority public support, while the Federal Government is the one calming them down and ensuring things can work, is rather rich.
- Nuclear power regulations are insanely restrictive and prohibitive, and the insurance the government writes does not substantially make up for this, nor is it that expensive or risky. The NRC and other regulations are the reason we can’t have this nice thing, in ways that don’t relate much if at all to the continued existence of these Nervous Nellies. Providing safe harbor in exchange for that really is the actual least you can do.
- AI regulations impose very few rules and especially very few safety rules.
- Yes, there is the counterpoint that AI has to follow existing rules and thus is effectively rather regulated, but I find this rather silly as an argument, and no I don’t think the new laws around AI in particular move that needle much.
- Altman points out the Federal Government is the insurer of last resort for anything sufficiently large, whether you want it to be or not, but no not in the way of explicitly writing insurance policies.
- I mean yes, if AI crashes the economy or does trillions in damages or whatnot, then the Federal Government will have to try and step in. This is a huge actual subsidy to the AI companies and they should (in theory anyway) be paying for it.
- A bailout for the actual AI companies if they are simply going bankrupt? David Sacks has made it clear our answer is no thank you, and rightfully so. Obviously, at some point the Fed Put or Trump Put comes into play in the stock market, that ship has sailed, but no we will not save your loans.
- And yeah, my lord, the idea that the Feds would write an insurance policy.
- Cowen then says he is worried about the Feds being the insurer of first resort and he doesn’t want that, Altman confirms he doesn’t either and doesn’t expect it.
- It’s good that they don’t want this to happen but this only slightly mitigates my outrage at the first question and the way it was presented.
- Cowen points out Trump is taking equity in Intel, lithium and rare earths, and asks how this applies to OpenAI. Altman mostly dodges, pivots to potential loss of meaning in the world, and points out the government might have strong opinions about AI company actions.
- Cowen doesn’t say it here, but to his credit he is on record opposing this taking of equity in companies, correctly identifying it as ‘seizing the means of production’ and pointing out it is the wrong tool for the job.
- This really was fully a non-answer. I see why that might be wise.
- Could OpenAI be coerced into giving up equity, or choose to do so as part of a regulatory capture play? Yeah. It would be a no-good, very bad thing.
- The government absolutely will and needs to have strong opinions about AI company actions and set the regulations and rules in place and otherwise play the role of being the actual government.
- If the government does not govern the AI companies, then the government will wake up one day to find the AI companies have become the government.
On monetizing AI services
- Tyler Cowen did a trip through France and Spain and booked all but one hotel with GPT-5 (not directly in the app), and almost every meal they ate, and Altman didn’t get paid for that. Shouldn’t he get paid?
- Before I get to Altman’s answer, I will say that for specifically Tyler this seems very strange to me, unless he’s running an experiment as research.
- As in, Tyler has very particular preferences and a lot of comparative advantage in choosing hotels and especially restaurants, especially for himself. It seems unlikely that he can’t do better than ChatGPT?
- I expect to be able to do far better than ChatGPT at finding restaurants, although maybe with a long and highly customized prompt it could get close? But that would require quite a lot of work.
- For hotels, yeah, I think it’s reasonably formulaic and AI can do fine.
- Altman responds that often ChatGPT is cited as the most trusted tech product from a big tech company. He notes that this is weird given the hallucinations. But it makes sense in that it doesn’t have ads and is in many visible ways more fully aligned with user preferences than other big tech products that involve financial incentives. He notes that a transaction fee probably is fine but any kind of payment for placement would endanger this.
- ChatGPT being most trusted is definitely weird given it is not very reliable.
- It being most trusted is an important clue to how people will deal with AI systems going forward, and it should worry you in important ways.
- In particular, trust for many people is about ‘are they Out To Get You?’ rather than reliability, overall quality, or whether expectations are set fairly. Compare to the many people who otherwise trust a Well Known Liar.
- I strongly agree with Altman about the payola worry, as Cowen calls it. Cowen says he’s not worried about it, but doesn’t explain why not.
- OpenAI’s instant checkout offerings and policies are right on the edge on this. I think in their present form they will be fine but they’re on thin ice.
- Cowen’s worry is that OpenAI will have a cap on how much commission they can charge, because stupider services will then book cheaply if you charge too much. Altman says he expects much lower margins.
- AI will as Altman notes make many markets much more efficient by vastly lowering search costs and transaction costs, which will lower margins, and this should include commissions.
- I still think OpenAI will be able to charge substantial commissions if it retains its central AI position with consumers, for the same reason that other marketplaces have not lost their ability to extract commissions, including some very large ones. Every additional hoop you ask a customer to go through loses a substantial portion of sales. OpenAI can pull the same tricks as Steam and Amazon and Apple including on price parity, and many will pay.
- This is true even if there are stupider services that can do the booking and are generally 90% as good, so long as OpenAI is the consumer default.
- Cowen doubles down on this worry about cheap competing agents, Altman notes that hotel booking is not the way to monetize, Cowen says but of course you do want to do that, Altman says no he wants to do new science, but ChatGPT and hotel booking is good for the world.
- This feels like a mix of a true statement and a dishonest dodge.
- As in, of course he wants to do hotel booking and make money off it, it’s silly to pretend that you don’t and there’s nothing wrong with that. It’s not the main goal, but it drives growth and valuation and revenue all of which is vital to the AGI or science mission (whether you agree with that mission or not).
- Cowen asks, you have a deal coming with Walmart, if you were Amazon would you make a deal with OpenAI or fight back? Altman says he doesn’t know, but that if he was Amazon he would fight back.
- Great answer from Altman.
- One thing Altman does well is being candid in places you would not expect, where it is locally superficially against his interests, but where it doesn’t actually cost him much. This is one of those places.
- Amazon absolutely cannot fold here because it loses too much control over the customer and customer flow. They must fight back. Presumably they should fight back together with their friends at Anthropic?
- Cowen asks about ads. Altman says some ads would be bad as per earlier, but other kinds of ads would be good although he doesn’t know what the UI is.
- Careful, Icarus.
- There definitely are ‘good’ ways to do ads if you keep them entirely distinct from the product, but the temptations and incentives here are terrible.
On AI’s future understanding of intangibles
- What should OpenAI management know about KSA and UAE? Altman says it’s mainly knowing who will run the data centers and what security guarantees they will have, with data centers being built akin to US embassies or military bases. They bring in experts and as needed will bring in more.
- I read this as a combination of outsourcing the worries and not worrying.
- I would be more worried.
- Cowen asks, how good will GPT-6 be at teaching these kinds of national distinctions, or do you still need human experts? Altman expects to still need the experts, confirms they have an internal eval for that sort of thing but doesn’t want to pre-announce.
- My anticipation is that GPT-6 and its counterparts will actually be excellent at understanding these country distinctions in general, when it wants to be.
- My anticipation is also that GPT-6 will be excellent at explaining things it knows to humans and helping those humans learn, when it wants to, and this is already sufficiently true for current systems.
- The question is, will you be able to translate that into learning and understanding such issues?
- Why is this uncertain? Two concerns.
- The first concern is that understanding may depend on analysis of particular key people and relationships, in ways that are unavailable to AI, the same way you can’t get them out of reading books.
- The second concern is that to actually understand KSA and UAE, or any country or culture in general, requires communicating things that it would be impolitic to say out loud, or for an AI to typically output. How do you pass on that information in this context? It’s a problem.
- Cowen asks about poetry, predicts you’ll be able to get the median Pablo Neruda poem but not the best, maybe you’ll get to 8.8/10 in a few years. Altman says they’ll reach 10/10 and Cowen won’t care, Cowen promises he’ll care but Altman equates it to AI chess players. Cowen responds there’s something about a great poem ‘outside the rubric’ and he worries humans that can’t produce 10s can’t identify 10s? Or that only humanity collectively and historically can decide what is a 10?
- This is one of those ‘AI will never be able to [X] at level [Y]’ claims so I’m on Altman’s side here, a sufficiently capable AI can do 10/10 on poems, heck it can do 11/10 on poems. But yeah, I don’t think you or I will care other than as a technical achievement.
- If an AI cannot produce sufficiently advanced poetry, that means that the AI is insufficiently advanced. Also we should not assume that future AIs or LLMs will share current techniques or restrictions. I expect innovation with respect to poetry creation.
- The thing being outside the rubric is a statement primarily about the rubric.
- If only people writing 10s can identify 10s then for almost all practical purposes there’s no difference between a 9 and a 10. Why do we care, if we literally can’t tell the difference? Whereas if we can tell the difference, if verification is easier than generation as it seems like it should be here, then we can teach the AI how to tell the difference (a minimal selection sketch follows this list).
- I think Cowen is saying that a 10-poem is a 9-poem that came along at the right time and got the right cultural resonance, in which case sure, you cannot reliably produce 10s, but that’s because it’s theoretically impossible to do that, and no human could do that either. Pablo Neruda couldn’t do it.
- As someone who has never read a poem by Pablo Neruda, I wanted to see what this 10.0 business was all about, so by Claude’s recommendation of ‘widely considered best Neruda poem’ without any other context, I selected Tonight I Can Write (The Saddest Lines). And not only did it not work on me, it seemed like something an AI totally could write today, on the level of ‘if you claimed to have written this in 2025 I’d have suspected an AI did write it.’
- With that in mind, I gave Claude context and it selected Ode to the Onion. Which also didn’t do anything for me, and didn’t seem like anything that would be hard for an AI to write. Claude suggests it’s largely about context, that this style was new at the time, and I was reading translations into English and I’m no poetry guy, and agrees that in 2025 yes an AI could produce a similar poem, it just wouldn’t land because it’s no longer original.
- I’m willing to say that whatever it is Tyler thinks AI can’t do, it is also something I don’t have the ability to notice. And which doesn’t especially motivate me to care? Or maybe what Tyler actually wants is something like ‘invent a new genre of poetry’?
- We’re not actually trying to get AIs to invent new genres of poetry, we’re not trying to generate the things that drive that sort of thing, so who is to say if we could do it. I bet we could actually. I bet somewhere in the backrooms is a 10/10 Claude poem, if you have eyes to see.
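A minimal sketch of the verification-versus-generation point above, under the assumption that a scorer really can tell better poems from worse ones: sample many candidates and keep what the scorer prefers. The names here (generate_poem, score_poem) are hypothetical stand-ins, not any particular model or API.

```python
# Hypothetical best-of-n selection: if verification is easier than generation,
# a verifier can pull sampled output toward the top of its own rubric.
def best_of_n(prompt: str, generate_poem, score_poem, n: int = 32) -> str:
    """Sample n candidate poems and return the one the verifier scores highest."""
    candidates = [generate_poem(prompt) for _ in range(n)]
    return max(candidates, key=score_poem)
```

This is also the simplest version of ‘teaching the AI to tell the difference’: the same scorer’s judgments can be used as a training signal rather than only as a filter.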
On Chip-Building
- It’s hard. Might get easier with time, chips designing chips.
- Why not make more GPUs? Altman says, because we need more electrons. What he needs most are electrons. They’re working hard on that. For now, natural gas, later fusion and solar. He’s still bullish on fusion.
- This ‘electrons’ thing is going to drive me nuts on a technical level. No.
- This seems simply wrong? We don’t build more GPUs because TSMC and other bottlenecks mean we can’t produce more GPUs.
- That’s not to say energy isn’t an issue but the GPUs sell out.
- Certainly plenty of places have energy but no GPUs to run with them.
- Cowen worries that fusion uses the word ‘nuclear.’
- I don’t. I think that this is rather silly.
- The problem with fusion is purely that it doesn’t work. Not yet, anyway.
- Again, the people are pro-nuclear power. Yay the people.
- Cowen asks do you worry about a scenario where superintelligence does not need much compute, so you’re betting against progress over a 30-year time horizon?
- Always pause when you hear such questions to consider that perhaps under such a scenario this is not the correct thing to worry about?
- As in, if we not only have superintelligence but it also does not need much compute, the last thing I am going to ponder is the return on particular investments of OpenAI, even if I am the CEO of OpenAI.
- If we have sufficiently cheap superintelligence that we have both superintelligence and an abundance of compute, ask not how the stock does, ask questions like how the humans survive or stay in control at all, notice that the entire world has been transformed, don’t worry about your damn returns.
- Altman responds if compute is cheaper people will want more. He’ll take that bet every day, and the energy will still be useful no matter the scenario.
- Good bet, so long as it matters what people want.
- Cowen loves Pulse, Altman says people love Pulse, the reason you don’t hear more is it’s only available to Pro users. Altman uses Pulse for a combination of work related news and family opportunities like hiking trails.
- I dabble with Pulse. It’s… okay? Most of the time it gives me stories I already know about, but occasionally there’s something I otherwise missed.
- I’ve tried to figure out things it will be good at monitoring, but it’s tough, maybe I should invest more time in giving it custom instructions.
- In theory it’s a good idea.
- It suffers from division of context, since the majority of my recent LLM activity has been on Claude and perhaps soon will include Gemini.
On Sam’s outlook on health, alien life, and conspiracy theories
Ooh, fun stuff.
- What is Altman’s nuttiest view about his own health? Altman says he used to be more disciplined when he was less busy, but now he eats junk food and doesn’t exercise enough and it’s bad. Whereas before, he once landed in the hospital for trying semaglutide before it was cool, which itself is very cool.
- There’s weird incentives here. When you have more going on it means you have less time to care about food and exercise but also makes it more important.
- I’d say that over short periods (like days and maybe weeks) you can and should sacrifice health focus to get more attention and time on other things.
- However, if you’re going for months or years, you want to double down on health focus up to some reasonable point, and Altman is definitely in that situation.
- That doesn’t mean obsess or fully optimize of course. 80/20 or 90/10 is good.
- Cowen says junk food doesn’t taste good and good sushi tastes better, Altman says yes junk food tastes good and sometimes he wants a chocolate chip cookie at 11:30 at night.
- They’re both right. Sometimes you want the (fresh, warm, gooey) chocolate chip cookie and not the sushi, sometimes you want the sushi and not the cookie.
- You get into habits and your body gets expectations, and you develop a palate.
- With effectively unlimited funds in this context you do want to be ‘spending your calories’ mostly on the high Quality things that are not junk, but yeah in the short term sometimes you really want that cookie.
- I think I would endorse that I should eat 25% less carbs and especially ‘junk’ than I actually do, maybe 50%, but not 75% less, that would be sad.
- Cowen asks if there’s alien life on the moons of Saturn, says he does believe this. Altman says he has no opinion, he doesn’t know.
- I’m actually with Altman in the sense that I’m happy to defer to consensus on the probability here, and I think it’s right not to invest in getting an opinion, but I’m curious why Cowen disagrees. I do think we can be confident there isn’t alien life there that matters to us.
- What about UAPs? Altman thinks ‘something’s going on there’ but doesn’t know, and doubts it’s little green men.
- I am highly confident it is not little green men. There may or may not be ‘something going on’ from Earth that is driving this, and my default is no.
- How many conspiracy theories does Altman believe in? Cowen says zero, at least in the United States. Altman says he’s predisposed to believe, has an X-Files ‘I want to believe’ t-shirt, but still believes in either zero or very few. Cowen says he’s the opposite, he doesn’t want to believe, maybe the White Sox fixed the World Series way back when, Altman points out this doesn’t count.
- The White Sox absolutely fixed that 1919 World Series, we know this. At the time it was a conspiracy theory, but now that we know it is true I think it no longer counts as one?
- I also believe various other sporting events have been fixed, but with less certainty, and to varying degrees – sometimes there’s an official’s finger on the scale but the game is real, other times you’re in Russia and the players literally part the seas to ensure the final goal is scored, and everything in between, but most games played in the West are on or mostly on the level.
- Very obviously there exist conspiracies, some of which succeed at things, on various scales. That is distinct from ‘conspiracy theory.’
- As a check, I asked Claude for the top 25 most believed conspiracy theories in America. I am confident that 24 out of the 25 are false. The 25th was Covid-19 lab origins, which is called a conspiracy theory but isn’t one. If you modify that to ‘Covid-19 was not only from a lab but was released deliberately’ then I’m definitely at all 25 are false.
- Cowen asks again, how would you revitalize St. Louis with a billion dollars and copious free time? Altman says start a Y-Combinator thing, which is pretty similar to what Altman said last time. But he suggests that’s because that would be Altman’s comparative advantage, someone else would do something else.
- This seems correct to me.
On regulating AI agents
- Should it be legal to release an AI agent into the wild, unowned, untraceable? Altman says it’s about thresholds. Anything capable of self-replication needs oversight, and the question is what is your threshold.
- Very obviously it should not be legal to, without checking first, release a self-replicating untraceable unowned highly capable agent into the wild that we have no practical means of shutting down.
- As a basic intuition pump, you should be responsible for what an AI agent you release into the wild does the same way you would be if you were still ‘in control’ of that agent, or you hired the agent, or if you did the actions yourself. You shouldn’t be able to say ‘oh that’s not on me anymore.’
- Thus, if you cannot be held accountable for it, I say you can’t release it. A computer cannot be held accountable, therefore a computer cannot make a management decision, therefore you cannot release an agent that will then make unaccountable management decisions.
- That includes if you don’t have the resources to take responsibility for the consequences, if they rise to the level where taking all your stuff and throwing you in jail is not good enough. Or if the effects cannot be traced.
- Certainly if such an agent poses a meaningful risk of loss of human control or of catastrophic or existential risks, the answer needs to be a hard no.
- If what you are doing is incompatible with such agents not being released into the wild, then what you are doing, via backchaining, is also not okay.
- There presumably should be a method whereby you can do this legally, with some set of precautions attached to it.
- Under what circumstances an open weight model would count as any of this is left as an open ended question.
- What to do if it happens and you can’t turn it off? Ring-fence it, identify, surveil, sanction the host location? Altman doesn’t know, it’s the same as the current version of this problem, more dangerous but we’ll have better defenses, and we need to urgently work on this problem.
- I don’t disagree with that response but it does not indicate a good world state.
- It also suggests the cost of allowing such releases is currently high.
On new ways to interface with AI
- Both note (I concur) that it’s great to read your own AI responses but other people’s responses are boring.
- I do sometimes share AI queries as a kind of evidence, or in case someone needs a particular thing explained and I want to lower activation energy on asking the question. It’s the memo you hope no one ever needs to read.
- Altman says people like watching other people’s AI videos.
- Do they, though?
- Altman points out that everyone having great personal AI agents is way more interesting than all that, with new social dynamics.
- Indeed.
- The new social dynamics include ‘AI runs the social dynamics’ potentially along with everything else in short order.
- Altman’s goal is a new kind of computer with an AI-first interface very different from the last 50 years of computing. He wants to question basic assumptions like an operating system or opening a window, and he does notice the skulls along the ‘design a new type of computer’ road. Cowen notes that people really like typing into boxes.
- Should AI get integrated into computers far more? Well, yeah, of course.
- How much should this redesign the computer? I’m more skeptical here. I think we want to retain control, fixed commands that do fixed things, the ability to understand what is happening.
- In gaming, Sid Meier called this ‘letting the player have the fun.’ If you don’t have control or don’t understand what is happening and how mechanics work, then the computer has all the fun. That’s no good, the player wants the fun.
- Thus my focus would be, how do we have the AI enable the user to have the fun, as in understand what is happening and direct it and control it more when they want to? And also to enable the AI to automate the parts the user doesn’t want to bother about?
- I’d also worry a lot about predictability and consistency across users. You simultaneously want the AI to customize things to your preferences, but also to be able to let others share with you the one weird trick or explain how to do a thing.
On how normies will learn to use AI
- What would an ideal partnership with a university look like? Altman isn’t sure, maybe try 20 different experiments. Cowen worries that higher education institutions lack internal reputational strength or credibility to make any major changes and all that happens is privatized AI use, and Altman says he’s ok with it.
- It does seem like academia and universities in America are not live players, they lack the ability to respond to AI or other changes, and they are mostly going to collect what rents they can until they get run over.
- In some senses I agree This Is Fine. Obviously all the time and money being wasted is a huge tragedy, but there is not much we can do about this, and it will be increasingly viable to bypass the system, or to learn in spite of it.
- How will the value of a typical college degree change in 5-10 years? Cowen notes it’s gone down in the last 10, after previously going up. Altman says further decline, faster than before, but not to zero as fast as it should.
- Sounds right to me under an ‘economic normal’ scenario.
- So what does get returns, other than learning AI? Altman says there are wide benefits to learning to use AI well, including but not limited to things like new science or starting companies.
- I notice Altman didn’t name anything non-AI that goes up in value.
- I don’t think that’s because he missed a good answer. Uh oh.
- How do you teach normies to use AI five years from now, for their own job? Altman says basically people learn on their own.
- It’s great that they can learn on their own, but this definitely is not optimal.
- As in, you should be able to do a lot better by teaching people?
- There’s definitely a common theme of lack of curiosity, where people need pushes in the right directions. Perhaps AI itself can help more with this.
- Will we still read books? Altman notes books have survived a lot of things.
- Books are already in rapid decline, though. Kids these days, AIUI, read lots of text, but basically don’t read books.
- Will we start creating our own movies? What else will change? Altman says how we use emails and calls and meetings and write documents will change a lot, family time or time in nature will change very little.
- There’s the ‘economic normal’ and non-transformational assumption here, that the outside world looks the same and it’s about how you personally interact with AIs. Altman and Cowen both sneak this in throughout.
- Time with family has changed a lot in the last 50-100 years. Phones, computers and television, even radio, the shift in need for various household activities, cultural changes, things like that. I expect more change here, even if in some sense it doesn’t change much, and even if those who are wisest in many ways let it change the least, again in these ‘normal’ worlds.
- All the document shuffling, yes, that will change a lot.
- Altman doesn’t take the bait on movies and I think he’s mostly right. I mostly don’t want customized movies, I want to draw from the same movies as everyone else, I want to consume someone’s particular vision, I want a fixed document.
- Then again, we’ve moved into a lot more consumption of ephemeral, customized media, especially short form video, mostly I think this is terrible, and (I believe Cowen agrees here) I think we should watch more movies instead, I would include television.
- I think there’s a divide. Interactive things like games and in the future VR, including games involving robots or LLM characters, are a different kind of experience that should often be heavily customizable. There’s room for personalized, unique story generation, and interactions, too.
On AI’s effect on the price of housing and healthcare
- Will San Francisco, at least within the West, remain the AI center? Altman says this is the default, and he loves the Bay Area and thinks it is making a comeback.
- What about housing costs? Can AI make them cheaper? Altman thinks AI can’t help much with this.
- Other things might help. California’s going at least somewhat YIMBY.
- I do think AI can help with housing quite a lot, actually. AI can find the solutions to problems, including regulations, and it can greatly reduce ‘transaction costs’ in general and reduce the edge of local NIMBY forces, and otherwise make building cheaper and more tractable.
- AI can also potentially help a lot with political dysfunction, institutional design, and other related problems, as well as to improve public opinion.
- AI and robotics could greatly impact space needs.
- Or, of course, AI could transform the world more generally, including potentially killing everyone. Many things impact housing costs.
- What about food prices? Altman predicts down, at least within a decade.
- Medium term I’d predict down for sure at fixed quality. We can see labor shift back into agriculture and food, probably we get more highly mechanized agriculture, and also AI should optimize production in various ways.
- I’d also predict people who are wealthier due to AI invest more in food.
- I wouldn’t worry about energy here.
- What about healthcare? Cowen predicts we will spend more and live to 98, and the world will feel more expensive because rent won’t be cheaper. Altman disagrees, says we will spend less on healthcare, we should find cures and cheap treatments, including through pharmaceuticals and devices and also cheaper delivery of services, whereas what will go up in price are status goods.
- There’s two different sets of dynamics in healthcare I think?
- In the short run, transaction costs go down, people get better at fighting insurance companies, better at identifying and fighting for needed care. Demand probably goes up, total overall real spending goes up.
- Ideally we would also be eliminating unnecessary, useless or harmful treatments along the way, and thus spending would go down, since much of our medicine is useless, but alas I mostly don’t expect this.
- We also should see large real efficiency gains in provision, which helps.
- Longer term (again, in ‘normal’ worlds), we get new treatments, new drugs and devices, new delivery systems, new understanding, general improvement, including making many things cheaper.
- At that point, lots of questions come into play. We are wealthier with more to buy, so we spend more. We are wiser and know what doesn’t work and find less expensive solutions and gain efficiency, so we spend less. We are healthier so we spend less now but live longer which means we spend more.
- In the default AGI scenarios, we don’t only live to 98, we likely hit escape velocity and live indefinitely, and then it comes down to what that costs.
- My default in the ‘good AGI’ scenarios is that we spend more on healthcare in absolute terms, but less as a percentage of economic capacity.
On reexamining freedom of speech
- Cowen asks if we should reexamine patents and copyright? Altman has no idea.
- Our current systems are obviously not first best, already were not close.
- Copyright needs radical rethinking, and already did. Terms are way too long. The ‘AI outputs have no protections’ rule isn’t going to work. Full free fair use for AI training is no good, we need to compensate creators somehow.
- Patents are tougher but definitely need rethinking.
- Cowen is big on freedom of speech and worries people might want to rethink the First Amendment in light of AI.
- I don’t see signs of this? I do see signs of people abandoning support for free speech for unrelated reasons, which I agree is terrible. Free speech will ever and always be under attack.
- What I mostly have seen are attempts to argue that ‘free speech’ means various things in an AI context that are clearly not speech. I think these arguments should not hold, and if they did, I would worry about them taking all of free speech down with them.
- They discuss the intention to expand free expression of ChatGPT, the famous ‘erotica tweet.’ Perhaps people don’t believe in freedom of expression after all? Cowen does have that take.
- People have never been comfortable with actual free speech, I think. Thus we get people saying things like ‘free speech is good but not [misinformation / hate speech / violence or gore / erotica / letting minors see it / etc].’
- I affirm that yes LLMs should mostly allow adults full freedom of expression.
- I do get the issue in which if you allow erotica then you’re doing erotica now, and ChatGPT would instantly become the center of erotica and porn, especially if the permissions expand to image and even video generation.
- Altman wants to change subpoena power with respect to AI, to allow your AI to have the same protections as a doctor or lawyer. He says America today is willing to trust AI on that level.
- It’s unclear here if Altman wants to be able to carve out protected conversations for when the AI is being a doctor or lawyer or similar, or if he wants this for all AI conversations. I think it is the latter one.
- You could in theory do the former, including without invoking it explicitly, by having a classifier ask (upon getting a subpoena) whether any given exchange should qualify as privileged (a minimal sketch of this follows this list).
- Another option is to ‘hire the AI lawyer’ or other specialist by paying a nominal fee, the way lawyers will sometimes say ‘pay me a dollar’ in order to nominally be your lawyer and thus create legal privilege.
- There could also be specialized models to act as these experts.
- But also careful what you wish for. Chances seem high that getting these protections would come with obligations AI companies do not want.
- The current rules for this are super weird in many places, and the result of various compromises of different interests and incentives and lobbies.
- What I do think would be good at a minimum is if ‘your AI touched this information’ did not invalidate confidentiality, whereas third party sharing of information often does invalidate confidentiality.
- Google search is a good comparison point because it ‘feels private’ but your search for ‘how to bury a body’ very much will end up in your court proceeding. I can see a strong argument that your AI conversations should be protected but if so then why not your Google searches?
- Similarly, when facing a lawsuit, if you say your ChatGPT conversations are private, do you also think your emails should be private?
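As a minimal sketch of the classifier idea above (all names here are hypothetical and illustrative, not any real API): upon receiving a subpoena, each stored exchange gets screened by an LLM judge, and only exchanges not flagged as privileged are produced.

```python
from dataclasses import dataclass


@dataclass
class Exchange:
    user_message: str
    assistant_message: str


# Illustrative prompt; in practice this is where the hard legal line-drawing lives.
PRIVILEGE_PROMPT = (
    "You are screening chat logs in response to a subpoena. Answer PRIVILEGED if this "
    "exchange is the kind of conversation that would be protected if held with a doctor, "
    "lawyer, or therapist; otherwise answer DISCOVERABLE.\n\n"
    "User: {user}\nAssistant: {assistant}\nAnswer:"
)


def is_privileged(exchange: Exchange, llm_judge) -> bool:
    """Return True if the judge flags the exchange as privileged.

    `llm_judge` is an assumed callable that takes a prompt string and returns text.
    """
    prompt = PRIVILEGE_PROMPT.format(
        user=exchange.user_message, assistant=exchange.assistant_message
    )
    return llm_judge(prompt).strip().upper().startswith("PRIVILEGED")


def respond_to_subpoena(exchanges: list[Exchange], llm_judge) -> list[Exchange]:
    """Produce only the exchanges the classifier does not withhold as privileged."""
    return [e for e in exchanges if not is_privileged(e, llm_judge)]
```

The design question this leaves open is exactly the one in the bullets above: whether the privilege standard applies to all conversations or only to those where the AI is acting as a doctor, lawyer, or similar.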
On humanity’s persuadability
- Cowen asks about LLM psychosis. Altman says it’s a ‘very tiny thing’ but not a zero thing, which is why the restrictions put in place in response to it pissed users off; most people are okay, so they just get annoyed.
- Users always get annoyed by restrictions and supervision, and the ones that are annoyed are often very loud.
- The actual outright LLM psychosis is rare, but people who actively want sycophancy and fawning and unhealthy interactions, and are mostly mad about not getting enough of that, are very common.
I’m going to go full transcript here again, because it seems important to track the thinking:
ALTMAN: Someone said to me once, “Never ever let yourself believe that propaganda doesn’t work on you. They just haven’t found the right thing for you yet.” Again, I have no doubt that we can’t address the clear cases of people near a psychotic break.
For all of the talk about AI safety, I would divide most AI thinkers into these two camps of “Okay, it’s the bad guy uses AI to cause a lot of harm,” or it’s, “the AI itself is misaligned, wakes up, whatever, intentionally takes over the world.”
There’s this other category, third category, that gets very little talk, that I think is much scarier and more interesting, which is the AI models accidentally take over the world. It’s not that they’re going to induce psychosis in you, but if you have the whole world talking to this one model, it’s not with any intentionality, but just as it learns from the world in this continually coevolving process, it just subtly convinces you of something. No intention, it just does. It learned that somehow. That’s not as theatrical as chatbot psychosis, obviously, but I do think about that a lot.
COWEN: Maybe I’m not good enough, but as a professor, I find people pretty hard to persuade, actually. I worry about this less than many of my AI-related friends do.
ALTMAN: I hope you’re right.
- On Altman’s statement:
- The initial quote is wise.
- The division into these three categories is a vast oversimplification, as all such things are. That doesn’t make the distinction not useful, but I worry about it being used in a way that ends up being dismissive.
- In particular, there is a common narrowing of ‘the AI itself is misaligned’ into ‘one day it wakes up and takes over the world’ and then people think ‘oh okay all we have to do is ensure that if one day one of them wakes up it doesn’t get to take over the world’ or something like that. The threat model within the category is a lot broader than that.
- There’s also ‘a bunch of different mostly-not-bad guys use the AI to pursue their particular interests, and the interactions and competitions and evolutions between them go badly or lead to loss of human control’ and there’s ‘we choose to put the AIs in charge of the world on purpose’ with or without AI having a hand in that decision, and so on and so forth.
- On the particular worry here of Altman’s, yes, I think that extended AI conversations are very good at convincing people of things, often in ways no one (including the AI) intended, and as AIs gain more context and adjust to it more, as they will, this will become a bigger and more common thing.
- People are heavily influenced by, and are products of, their environment, and of the minds they interact with on a regular basis.
- On Cowen’s statement:
- A professor is not especially well positioned to be persuasive, nor does a professor typically get that much time with engaged students one-on-one.
- When people talk about people being ‘not persuadable’ they typically talk about cases where people’s defenses are relatively high, in limited not-so-customized interactions in which the person is not especially engaged or following their curiosity or trusting, and where the interaction is divorced from their typical social context.
- We have very reliable persuasion techniques, in the sense that for the vast majority of human history most people in each area of the world believed in the local religion and local customs, were patriots of the local area, rooted for the local sports team, supported the local political perspectives, and so on, and were persuaded to pass all that along to their own children.
- We have a reliable history of armies being able to break down and incorporate new people, of cults being able to do so for new recruits, for various politicians to often be very convincing and the best ones to win over large percentages of people they interact with in person, for famous religious figures to be able to do massive conversions, and so on.
- Marxists were able to persuade large percentages of the world, somehow.
- Children who attend school and especially go to college tend to exit with the views of those they attend with, even when it conflicts with their upbringing.
- If you are talking to an AI all the time, and it has access to your details and stuff, this is very much an integrated social context, so yes, many people are going to become highly persuadable over time.
- This is all assuming AI has to stick to Ordinary Human levels of persuasiveness, which it won’t have to.
- There are also other known techniques to persuade humans that we will not be getting into here, that need to be considered in such contexts.
- Remember the AI box experiments.
- I agree that if we’re talking about ‘the AI won’t in five minutes be able to convince you to hand over your bank account information’ that this will require capabilities we don’t know about, but that’s not the threshold.
- If you have a superintelligence ready to go, that is ‘safety-tested,’ that’s about to self-improve, and you get a prompt to type in, what do you type? Altman raises this question, says he doesn’t have an answer but he’s going to have someone ask the Dalai Lama.
- I also do not know the right answer.
- You’d better know that answer well in advance.