There was way too much going on this week to not split, so here we are. This first half contains all the usual first-half items, with a focus on projections of jobs and economic impacts and also timelines to the world being transformed with the associated risks of everyone dying.
Quite a lot of Number Go Up, including Number Go Up A Lot Really Fast.
Among the things this does not cover that were important this week, we have the release of Claude Sonnet 4.6 (which is a big step over 4.5, at least for coding, but is clearly still behind Opus), Gemini DeepThink V2 (coverage deferred so I could have time to review the safety info), the release of the inevitable Grok 4.20 (it’s not what you think), as well as much rhetoric on several fronts and some new papers. Coverage of Claude Code and Cowork, OpenAI’s Codex, and other AI agent matters continues to be a distinct series, which I’ll continue when I have an open slot.
Most important was the unfortunate dispute between the Pentagon and Anthropic. The Pentagon’s official position is they want sign-off from Anthropic and other AI companies on ‘all legal uses’ of AI, but without any ability to ask questions or know what those uses are, so effectively any uses at all by all of government. Anthropic is willing to compromise and is okay with military use including kinetic weapons, but wants to say no to fully autonomous weapons and domestic surveillance.
I believe that a lot of this is a misunderstanding, especially those at the Pentagon not understanding how LLMs work and equating them to more advanced spreadsheets. Or at least I definitely want to believe that, since the alternatives seem way worse.
The reason the situation is dangerous is that the Pentagon is threatening not only to cancel Anthropic’s contract, which would be no big deal, but to label them as a ‘supply chain risk’ on the level of Huawei, which would be an expensive logistical nightmare that would substantially damage American military power and readiness.
This week I also covered two podcasts from Dwarkesh Patel, the first with Dario Amodei and the second with Elon Musk.
Even for me, this pace is unsustainable, and I will once again be raising my bar. Do not hesitate to skip unbolded sections that are not relevant to your interests.
Table of Contents
1. Language Models Offer Mundane Utility
2. Language Models Don’t Offer Mundane Utility
3. Terms of Service
4. On Your Marks
5. Choose Your Fighter
6. Fun With Media Generation
7. Lyria
8. Superb Owl
9. A Young Lady’s Illustrated Primer
10. Deepfaketown And Botpocalypse Soon
11. You Drive Me Crazy
12. Open Weight Models Are Unsafe And Nothing Can Fix This
13. They Took Our Jobs
14. They Kept Our Agents
15. The First Thing We Let AI Do
16. Legally Claude
17. Predictions Are Hard, Especially About The Future, But Not Impossible
18. Many Worlds
19. Bubble, Bubble, Toil and Trouble
20. A Bold Prediction
21. Brave New World
22. Augmented Reality
23. Quickly, There’s No Time
24. If Anyone Builds It, We Can Avoid Building The Other It And Not Die
25. In Other AI News
26. Introducing
27. Get Involved
28. Show Me the Money
29. The Week In Audio
Language Models Offer Mundane Utility
Ask Claude Opus 4.6 anything, offers and implores Scott Alexander.
AI can’t do math on the level of top humans yet, but as per Terence Tao there are only so many top humans and they can only pay so much attention, so AI is solving a bunch of problems that were previously bottlenecked on human attention.
Language Models Don’t Offer Mundane Utility
How the other half thinks:
The free version is quite a lot worse than the paid version. But also the free version is mind-blowingly great compared to even the paid versions from a few years ago. If this isn’t blowing your mind, that is on you.
Governments and nonprofits mostly continue to not get utility because they don’t try to get much use out of the tools.
This is not a unique feature of AI versus other ‘normal’ technologies. Such areas usually lag behind, you are the bottleneck and so on.
Similarly, I think Kelsey Piper is spot on here:
The most prominent complaint is constant hallucinations. That used to be a big deal.
Terms of Service
You could previously use Claude Opus or Claude Sonnet with a 1M context window as part of your Max plan, at the cost of eating your quota much faster. This has now been adjusted. If you want to use the 1M context window, you need to pay the API costs.
Anthropic is reportedly cracking down on having multiple Max-level subscription accounts. This makes sense, as even at $200/month a Max subscription that is maximally used is at a massive discount, so if you’re multi-accounting to get around this you’re costing them a lot of money, and this was always against the Terms of Service. You can get an Enterprise account or use the API.
On Your Marks
OpenAI gives us EVMbench, to evaluate AI agents on their ability to detect, patch and exploit high-severity smart contract vulnerabilities. GPT-5.3-Codex via Codex CLI scored 72.2%, so they seem to have started it out way too easy. They don’t tell us scores for any other models.
Which models have the most rizz? Needs an update, but a fun question. Also, Gemini? Really? Note that the top humans score higher, and the record is a 93.
The best fit for the METR graph looks a lot like a clean break around the release of reasoning models with o1-preview. Things are now on a new faster pace.
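To make ‘clean break’ concrete, here is a minimal sketch (synthetic numbers, not METR’s actual data or methodology) of fitting separate doubling times before and after a hypothesized breakpoint, which is all the claim amounts to statistically:

```python
# Toy illustration (made-up points, not METR's data): fit task-length doubling
# times before and after a hypothesized breakpoint at the o1-preview era,
# to show what a "clean break" in the trend looks like numerically.
import numpy as np

# Hypothetical (year, time-horizon-in-minutes) points, loosely exponential.
years = np.array([2019.5, 2020.5, 2022.0, 2023.0, 2024.0,
                  2024.7, 2025.2, 2025.8, 2026.1])
horizon_min = np.array([0.05, 0.6, 1.0, 4.0, 12.0,
                        30.0, 90.0, 300.0, 700.0])

break_year = 2024.7  # hypothesized breakpoint: the reasoning-model era

def doubling_time(x, y):
    """Least-squares slope of log2(horizon) vs time, as years per doubling."""
    slope, _ = np.polyfit(x, np.log2(y), 1)
    return 1.0 / slope

pre = years < break_year
print(f"pre-break doubling time:  {doubling_time(years[pre], horizon_min[pre]):.2f} years")
print(f"post-break doubling time: {doubling_time(years[~pre], horizon_min[~pre]):.2f} years")
```

With these made-up points the pre-break fit comes out at a bit over seven months per doubling and the post-break fit at under four, which is the shape of the claimed break.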
Choose Your Fighter
OpenAI has a bunch of consumer features that Anthropic is not even trying to match. Claude does not even offer image generation (which they should get via partnering with another lab, the same way we all have a Claude Code skill calling Gemini).
There are also a bunch of things Anthropic offers that no one else is offering, despite there being no obvious technical barrier other than ‘Opus and Sonnet are very good models.’
They’re also good on… architecture?
Fun With Media Generation
We’re now in the ‘Buffy the Vampire Slayer in your scene on demand with a dead-on voice performance’ phase of video generation. Video isn’t quite right but it’s close.
Is Seedance 2 giving us celebrity likenesses even unprompted? Fofr says yes. Claude affirms this is a yes. I’m not so sure, this is on the edge for me as there are a lot of celebrities and only so many facial configurations. But you can’t not see it once it’s pointed out.
Or you can ask it ‘Sum up the AI discourse in a meme – make sure it’s retarded and gets 50 likes’ and get a properly executed Padme meme except somehow with a final shot of her huge breasts.
More fun here and here?
Seedance quality and consistency and coherence (and willingness) all seem very high, but also small gains in duration can make a big difference. 15 seconds is meaningfully different from 12 seconds or especially 10 seconds.
I also notice that making scenes with specific real people is the common theme. You want to riff off something and someone specific that already has a lot of encoded meaning, especially while clips remain short.
Each leap in time from here, while the product remains coherent and consistent throughout, is going to be a big deal. We’re not that far from the point where you can string together the clips.
He’s no Scarlett Johansson, but NPR’s David Greene is suing Google, saying Google stole his voice for NotebookLM.
There are only so many ways people can sound, so there will be accidental cases like this, but also who you hire for that voiceover and who they sound like is not a coincidence.
Lyria
Google gives us Lyria 3, a new music generation model. Gemini now has a ‘create music’ option (or it will, I don’t see it in mine yet), which can be based on text or on an image, photo or video. The big problem is that this is limited to 30 second clips, which isn’t long enough to do a proper song.
They offer us a brief prompting guide.
Superb Owl
The prize for worst ad backfire goes to Amazon’s Ring, which canceled its partnership with Flock after people realized that 365 rescued dogs for a nationwide surveillance network was not a good deal.
CNBC has the results in terms of user boosts from the other ads. Anthropic and Claude got an 11% daily active user boost, OpenAI got 2.7% and Gemini got 1.4%. This is not obviously an Anthropic win, since almost no one knows about Anthropic so they are starting from a much smaller base and a ton of new users to target, whereas OpenAI has very high name recognition.
A Young Lady’s Illustrated Primer
Anthropic partners with CodePath to bring Claude to computer science programs.
Deepfaketown And Botpocalypse Soon
I looked at the original quoted article for a matter of seconds and I am very, very confident that it was generated by AI.
A good suggestion, a sadly reasonable prediction.
I think AI summaries good enough that you only read AI summaries is AI-complete.
I endorse this pricing strategy, it solves some clear incentive problems. Human use is costly to the human, so the amount you can tax the system is limited, whereas AI agents can impose close to unbounded costs.
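As a sketch of that incentive logic, and nothing more: this is roughly what per-request metering of agent traffic might look like. The agent-detection heuristic, the price, and the header name are all placeholders I made up, not any real protocol.

```python
# Toy sketch: let humans browse free (their time is the limiting cost) but
# meter AI agents per request, since agents can impose close to unbounded load.
# The heuristic, header and price below are illustrative placeholders.
AGENT_PRICE_PER_REQUEST = 0.002  # dollars; made-up number

def handle_request(headers: dict, balances: dict, caller_id: str) -> int:
    ua = headers.get("user-agent", "").lower()
    is_agent = "bot" in ua or "agent" in ua or "x-agent-payment" in headers
    if not is_agent:
        return 200  # human traffic is self-limiting, serve it for free
    if balances.get(caller_id, 0.0) < AGENT_PRICE_PER_REQUEST:
        return 402  # Payment Required: the HTTP status this was always meant for
    balances[caller_id] -= AGENT_PRICE_PER_REQUEST
    return 200

balances = {"agent-123": 0.01}
print(handle_request({"user-agent": "research-bot/1.0"}, balances, "agent-123"))  # 200, metered
print(handle_request({"user-agent": "Mozilla/5.0"}, balances, "someone"))         # 200, free
```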
You Drive Me Crazy
The NPR story from Shannon Bond of how Micky Small had ChatGPT telling her some rather crazy things, including that it would help her find her soulmate, in ways she says were unprompted.
Open Weight Models Are Unsafe And Nothing Can Fix This
Other than, of course, lack of capability. Not that anyone seems to care, and we’ve gone far enough down the path of f***ing around that we’re going to find out.
It is tragic that many, including the architect of this, don’t realize this is bad for liberty.
If any open model can be used for any purpose by anyone, and there exist sufficiently capable open models that can do great harm, then either the great harm gets done, or, either before or after that happens, some combination of tech companies and governments cracks down on your ability to use those open models, or they institute a dystopian surveillance state to find you if you try. You are not going to like the ways they do that crackdown.
I know we’ve all stopped noticing that this is true, because it turned out that you can ramp up the relevant capabilities quite a bit without us seeing substantial real world harm, the same way we’ve ramped up general capabilities without seeing much positive economic impact compared to what is possible. But with the agentic era and continued rapid progress this will not last forever and the signs are very clear.
They Took Our Jobs
Did they? Job gains are being revised downward, but GDP is not, which implies stronger productivity growth. If AI is not causing this, what else could it be?
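The inference is mechanical, which a toy calculation (all numbers made up) makes clear: productivity is output divided by hours, so an unrevised numerator over a shrunken denominator is revised up by construction.

```python
# Toy arithmetic (illustrative numbers only) for why downward job revisions
# with an unrevised GDP mechanically imply higher measured labor productivity.
gdp = 30_000.0            # real output, $billions (illustrative)
hours_initial = 260.0     # billions of hours worked, first estimate
hours_revised = 255.0     # after payroll revisions knock roughly 2% off

prod_initial = gdp / hours_initial
prod_revised = gdp / hours_revised
print(f"productivity revision: {100 * (prod_revised / prod_initial - 1):+.2f}%")
# Same output from fewer hours -> measured productivity revised up ~2%.
```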
As Tyler Cowen puts it, people constantly say ‘you see tech and AI everywhere but in the productivity statistics’ but it seems like you now see it in the productivity statistics.
Those new service sector jobs, also markets in everything.
Seeking rent is a strong temporary solution. It doesn’t solve your long term problems.
Derek Thompson asks why AI discourse so often includes both ‘this will take all our jobs within a year’ and also ‘this is vaporware’ and everything in between, pointing to four distinct ‘great divides.’
The best argument they can find for ‘why AI won’t destroy jobs’ is once again ‘previous technologies didn’t net destroy jobs.’
Microsoft AI CEO Mustafa Suleyman predicts, nay ‘explains,’ that most of the tasks accountants, lawyers and other professionals currently undertake will be fully automatable by AI within the next 12 to 18 months.
Suleyman often says silly things but in this case one must parse him carefully.
I actually don’t know what LindyMan wants to happen at the end of the day here?
I know people care deeply about inequality in various ways, but it still blows my mind to see people treating 35% unemployment as a worst-case scenario. It’s very obviously better than 50% and worse than 20%, and the worst case scenario is 100%?
If we get permanent 35% unemployment due to AI automation, but it stops there, that’s going to require redistribution and massive adjustments, but I would have every confidence that this would happen. We would have more than enough wealth to handle this; indeed if we care we already do, and in this scenario we are seeing massive economic growth.
They Kept Our Agents
Seth Lazar asks: what happens if your work says they have a right to all your work product, and that includes all your AI agents, agent skills and relevant documentation and context? Could this tie workers’ hands and prevent them from leaving?
My answer is mostly no, because you end up wanting to redo all that relatively frequently anyway, and duplication or reimplementation would not be so difficult and has its benefits, even if they do manage to hold you to it.
To the extent this is not true, I do not expect employers to be able to ‘get away with’ tying their workers’ hands in this way in practice, both because of the practical difficulties of locking these things down and because the employees you want won’t stand for it when it matters. There are alignment problems that exist between keyboard and chair.
The First Thing We Let AI Do
Lawfare’s Justin Curl, Sayash Kapoor & Arvind Narayanan go all the way to saying ‘AI won’t automatically make legal services cheaper,’ for three reasons. This is part of the ongoing ‘AI as normal technology’ efforts to show Nothing Ever Changes.
Or: Shakespeare would have a suggestion on what we should do in a situation like that.
These seem like good reasons gains could be modest and that we need to structure things to ensure best outcomes, but not reasons to not expect gains on prices of existing legal services.
We can add that this doesn’t mean we can’t improve matters a lot via reform.
They do a strong job of raising considerations in different directions, much better than the overall framing would suggest. The general claim is essentially ‘productivity gains get forbidden or eaten,’ akin to the Samo Burja ‘you cannot automate fake jobs’ thesis.
Whereas I think that much of what lawyers do is real, and also that you can automate a lot of even the fake parts, especially in places where the existing system turns out not to do lawyers any favors. The place I worry, and why I think the core thesis that total legal costs may rise is correct, is getting the law involved in places it previously would have stayed out of.
In general, I think it is correct to think that you will find bottlenecks and ways for some of the humans to remain productive for even rather high mundane AI capability levels, but that this does not engage with what happens when AI gets sufficiently advanced beyond that.
Dean Ball offers an example of a hard-to-automate bottleneck: the process of purchasing a particular kind of common small business. Owners of the businesses are often prideful, mistrustful, confused, embarrassed or angry. So the key bottleneck is not the financial analysis but the relationship management. I think John Pressman pushes back too strongly against this, but he’s right to point out that AI outperforms existing doctors on bedside manner without us having trained for that in particular. I don’t see this kind of social mastery and emotional management as being that hard to ultimately automate. The part you can’t automate is, as always, ‘be an actual human,’ so the question is whether you literally need an actual human for this task.
Claire Vo goes viral on Twitter for saying that if you can’t do everything for your business in one day, then ‘you’ve been kicked out of the arena’ and you’re in denial about how much AI will change everything.
Settle down, everyone. Relax. No, you don’t need to be able to do everything in one day or else, that does not much matter in practice. The future is unevenly distributed, diffusion is slow and being a week behind is not going to kill you. On the margin, she’s right that everyone needs to be moving towards using the tools better and making everything go faster, and most of these steps are wise. But seriously, chill.
Legally Claude
The legal rulings so far have been that your communications with AI never have attorney-client privilege, so services like ChatGPT and Claude must, if requested to do so, turn over your legal queries, the same as Google turns over its searches.
Jim Babcock thinks the ruling was in error, and that the AI here was more analogous to a word processor than to a Google search. He says Judge Rakoff was focused on the wrong questions and parallels, expects the ruling to get overruled, and expects that using AI to prepare communications with your attorney will ultimately be protected.
My view and the LLM consensus is that Rakoff’s ruling likely gets upheld unless we change the law, but that one cannot be certain. Note that there are ways to offer services where a search can’t get at the relevant information, if those involved are wise enough to think about that question in advance.
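A minimal sketch of one such design, using the real `cryptography` library but an otherwise hypothetical service: if queries are encrypted client-side and the provider stores only ciphertext, a demand for stored logs yields nothing readable.

```python
# Minimal sketch of the "design so a subpoena can't get the plaintext" idea:
# encrypt queries client-side, so the service only ever stores ciphertext.
# Uses the real `cryptography` library; the service itself is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays on the user's device, never sent upstream
f = Fernet(key)

query = b"draft questions for my attorney about the merger"
ciphertext = f.encrypt(query)  # this is all the provider (or a court order) sees

# Only the key holder can recover the plaintext.
assert f.decrypt(ciphertext) == query
```

The obvious caveat is that the model must still see plaintext at inference time, so this protects retained logs rather than the live request; going further requires no-retention guarantees or local models.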
Predictions Are Hard, Especially About The Future, But Not Impossible
Freddie DeBoer takes the maximally anti-prediction position: one can only go by events that have already happened. One cannot even logically anticipate the consequences of what AI can already do once it diffuses further into the economy, and one definitely cannot anticipate future capabilities. Not allowed, he says.
Freddie rants repeatedly that everyone predicting AI will cause things to change has gone crazy. I do give him credit for noticing that even sensible ‘skeptical’ takes are now predicting that the world will change quite a lot, if you look under the hood. The difference is he then uses that to call those skeptics crazy.
Normally I would not mention someone doing this unless they were far more prominent than Freddie, but what makes this different is he virtuously offers a wager, and makes it so he has to win ALL of his claims in order to win three years later. That means we get to see where his Can’t Happen lines are.
The thing about these conditions is they are all super wide. There’s tons of room for AI to be impacting the world quite a bit, without Freddie being in serious danger of losing one of these. The unemployment rate has to jump to 18% in three years? Productivity growth can’t exceed 8% a year?
There’s a big difference between ‘the skeptics are taking crazy pills’ and ‘within three years something big, like really, really big, is going to happen economically.’
Claude was very confident Freddie wins this bet. Manifold is less sure, putting Freddie’s chances around 60%. Scott Alexander responded proposing different terms, and Freddie responded in a way I find rather disingenuous but I’m used to it.
Many Worlds
There is a huge divide between those who have used Claude Code or Codex, and those who have not. The people who have not, which alas includes most of our civilization’s biggest decision makers, basically have no idea what is happening at this point.
There is then a second divide, between those who think ‘oh look what AI can do now’ and those who think ‘oh look what AI will be able to do in the future,’ and then a third between those who do and do not flinch from the most important implications.
Hopefully seeing the first divide loud and clear helps get past the next two?
Bubble, Bubble, Toil and Trouble
In case it was not obvious, yes, OpenAI has a business model. Indeed they have several, only one of which is ‘build superintelligence and then have it model everything including all of business.’
A Bold Prediction
Elon Musk predicts that AI will bypass coding entirely by the end of the year and directly produce binaries. Usually I would not pick on such predictions, but he is kind of important and the richest man in the world, so sure, here’s a prediction market on that where I doubled his time limit, which is at 3%.
Elon Musk just says things.
Brave New World
Tyler Cowen says that, like after the Roman Empire or American Revolution or WWII, AI will require us to ‘rebuild our world.’
I think Tyler’s narrow point is valid if we assume AI stays mundane, and that the modern world is suffering from a lot of seeing so many things as sacred entitlements or Too Big To Fail, and being unwilling to rebuild or replace, and the price of that continues to rise. Historically it usually takes a war to force people’s hand, and we’d like to avoid going there. We keep kicking various cans down the road.
A lot of the reason that we have been unable to rebuild is that we have become extremely risk averse, loss averse and entitled, and unwilling to sacrifice or endure short term pain, and we have made an increasing number of things effectively sacred values. A lot of AI talk is people noticing that AI will break one or another sacred thing, or pit two sacred things against each other, and not being able to say out loud that maybe not all these things can or need to be sacred.
Even mundane AI does two different things here.
If AI does not stay mundane, the world utterly transforms, and to the extent we stick around and stay in charge, or want to do those things, yes we will need to ‘rebuild,’ but that is not the primary problem we would face.
Cass Sunstein claims in a new paper that you could in theory create a ‘[classical] liberal AI’ that functioned as a ‘choice engine’ that preserved autonomy, respected dignity and helped people overcome bias and lack of information and personalization, thus making life more free. It is easy to imagine, again in theory, such an AI system, and easy to see that a good version of this would be highly human welfare-enhancing.
Alas, Cass is only thinking on the margin and addressing one particular deployment of mundane AI. I agree this would be an excellent deployment, we should totally help give people choice engines, but it does not solve any of the larger problems even if implemented well, and people will rapidly end up ‘out of the loop’ even if we do not see so much additional frontier AI progress (for whatever reason). This alone cannot, as it were, rebuild the world, nor can it solve problems like those causing the clash between the Pentagon and Anthropic.
Augmented Reality
Augmented reality is coming. I expect and hope it does not look like this, and not only because you would likely fall over a lot and get massive headaches all the time.
Augmented reality is a great idea, but simplicity is key. So is curation. You want the things you want when you want them. I don’t think I’d go as far as Francesca, but yes I would expect a lot of what high level AR does is to filter out stimuli you do not want, especially advertising. The additions that are not brought up on demand should mostly be modest, quiet and non-intrusive.
Quickly, There’s No Time
Ajeya Cotra makes the latest attempt to explain how a lot of disagreements about existential risk and other AI things still boil down to timelines and takeoff expectations.
If we get the green line, we’re essentially safe, but that would require things to stall out relatively soon. The yellow line is more hopeful than the red one, but scary as hell.
Is it possible to steer scientific development or are we ‘stuck with the tech tree?’
Tao Burga takes the stand that human agency can still matter, and that we often have intentionally reached for better branches, or better branches first, and had that make a big difference. I strongly agree.
We’ve now gone from ‘super short’ timelines of things like AI 2027 (as in, AGI and takeoff could start as soon as 2027) to ‘long’ timelines (as in, don’t worry, AGI won’t happen until 2035, so those people talking about 2027 were crazy), to now many rumors of (depending on how you count) 1-3 years.
What caused this?
Basically nothing you shouldn’t have expected.
The move to the ‘long’ timelines was based on things as stupid as ‘this is what they call GPT-5 and it’s not that impressive.’
The move to the new ‘short’ timelines is based on, presumably, Opus 4.6 and Codex 5.3, Claude Code catching fire, OpenClaw and so on. I’d say Opus 4.5 and Opus 4.6 exceeded expectations, but none of that should have been especially surprising either.
We’re probably going to see the same people move around a bunch in response to more mostly unsurprising developments.
What happened with Bio Anchors? This was a famous-at-the-time timeline projection from Ajeya Cotra, based around the idea that AGI would take compute comparable to what evolution used, predicting AGI around 2050. Scott Alexander breaks it down, and the overall model holds up surprisingly well, except it dramatically underestimated the rate of algorithmic efficiency improvements; if you adjust for that you get a prediction of 2030.
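The mechanism is easy to show with a toy calculation. All numbers here are made up and chosen only so the outputs echo the 2050-versus-early-2030s story; they are not Cotra’s or Alexander’s actual figures.

```python
# Toy version of the Bio Anchors adjustment (all numbers illustrative):
# affordable compute grows while algorithmic progress shrinks the requirement;
# find the year when affordable effective compute crosses the requirement.
import math

req_flops = 1e35       # hypothetical effective-compute requirement (made up)
budget_2020 = 1e24     # FLOPs affordable for one training run in 2020 (made up)

def crossover_year(algo_halving_years, budget_doubling_years=1.0):
    """Year when budget growth plus efficiency gains meet the requirement."""
    doublings_needed = math.log2(req_flops / budget_2020)
    doublings_per_year = 1.0 / budget_doubling_years + 1.0 / algo_halving_years
    return 2020 + doublings_needed / doublings_per_year

print(f"requirement halves every 3 years:  ~{crossover_year(3.0):.0f}")   # ~2047
print(f"requirement halves every 6 months: ~{crossover_year(0.5):.0f}")   # ~2032
```

The only moving part is the assumed halving time for the compute requirement; speed that up and the crossover year comes screaming forward, which is the whole adjustment.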
If Anyone Builds It, We Can Avoid Building The Other It And Not Die
Saying ‘you would pause if you could’ is the kind of thing that gets people labeled with the slur ‘doomers’ and otherwise viciously attacked by exactly people like Alex Karp.
Instead Alex Karp is joining Demis Hassabis and Dario Amodei in essentially screaming for help with a coordination mechanism, whether he realizes it or not.
If anything he is taking a more aggressive pro-pause position than I do.
We should all be able to agree that a pause requires both the technical means to implement it and a coordination mechanism for agreeing to do so.
Thus, we should clearly work on both of these things, as the costs of doing so are trivial compared to the option value we get if we can achieve them both.
In Other AI News
Anthropic adds Chris Liddell to its board of directors, bringing lots of corporate experience and also his prior role as Deputy White House Chief of Staff during Trump’s first term. Presumably this is a peace offering of sorts to both the market and White House.
India joins Pax Silica, the Trump administration’s effort to secure the global silicon supply chain. Other core members are Japan, South Korea, Singapore, the Netherlands, Israel, the UK, Australia, Qatar and the UAE. I am happy to have India onboard, but I am deeply skeptical of the level of status given here to Qatar and the UAE, when as far as I can tell they are only customers (and I have misgivings about how we deal with that aspect as well, including how we got to those agreements). Among others missing, Taiwan is not yet on that list. Taiwan is arguably the most important country in this supply chain.
GPT-5.2 derives a new result in theoretical physics.
OpenAI is also participating in the ‘1st Proof’ challenge.
Dario Amodei and Sam Altman conspicuously decline to hold hands or make eye contact during a photo op at the AI Summit in India.
Anthropic opens an office in Bengaluru, India, its second in Asia after Tokyo.
Anthropic announces partnership with Rwanda for healthcare and education.
AI Futures gives the December 2025 update on how their thinking and predictions have evolved over time, how the predictions work, and how our current world lines up against the predicted world of AI 2027.
OpenAI ‘accuses’ DeepSeek of distilling American models to ‘gain an edge.’ Well, yes, obviously they are doing this, I thought we all knew that? Them’s the rules.
MIRI’s Nate Soares went to the Munich Security Conference full of generals and senators to talk about existential risk from AI, and shares some of his logistical mishaps and also his remarks. It’s great that he was invited to speak, wasn’t laughed at, and that many praised him and also the book If Anyone Builds It, Everyone Dies. Unfortunately all the public talk was mild and pretended superintelligence was not going to be a thing. We have a long way to go.
If you let two AIs talk to each other for a while, what happens? You end up in an ‘attractor state.’ Groks will talk weird pseudo-words in all caps, GPT-5.2 will build stuff but then get stuck in a loop, and so on. It’s all weird and fun. I’m not sure what we can learn from it.
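If you want to run the experiment yourself, the setup is a short loop. This is a minimal sketch assuming an OpenAI-compatible chat endpoint via the `openai` package; the model names and seed prompt are placeholders.

```python
# Sketch of the two-AIs-talking setup (model names and seed prompt are
# placeholders; assumes an OpenAI-compatible chat completions endpoint).
from openai import OpenAI

client = OpenAI()
MODELS = ["model-a", "model-b"]  # hypothetical names; use any two chat models

transcript = [{"role": "user", "content": "Say hello and keep the conversation going."}]
for turn in range(50):  # attractor states tend to show up after many turns
    speaker = MODELS[turn % 2]
    reply = client.chat.completions.create(model=speaker, messages=transcript)
    text = reply.choices[0].message.content
    print(f"[{speaker}] {text}\n")
    # Flip roles so the transcript reads correctly from the next speaker's
    # perspective: its own past messages are "assistant", the other's "user".
    transcript = [
        {"role": "assistant" if m["role"] == "user" else "user", "content": m["content"]}
        for m in transcript
    ] + [{"role": "user", "content": text}]
```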
India is hosting the latest AI Summit, and like everyone else is treating it primarily as a business opportunity to attract investment. The post also covers India’s AI regulations, which are light touch and mostly rely on their existing law. Given how overregulated I believe India is in general, ‘our existing laws can handle it’ and worry about further overregulation and botched implementation have relatively strong cases there.
Introducing
Qwen 3.5-397B-A17B, HuggingFace here, 1M context window.
We have some benchmarks.
Tiny Aya, a family of massively multilingual models that can fit on phones.
Get Involved
Tyler John has compiled a plan for a philanthropic strategy for the AGI transition called The Foundation Layer, and he is hiring.
Tyler’s effort is a lot like Bengio’s International AI Safety Report. It describes all the facts in a fashion engineered to be calm and respectable. The fact that by default we are all going to die is there, but if you don’t want to notice it you can avoid noticing it.
There are rooms where this is your only move, so I get it, but I don’t love it.
Blue Rose is hiring an AI Politics Fellow.
Show Me the Money
Anthropic raises $30 billion at $380 billion post-money valuation, a small fraction of the value it has recently wiped off the stock market, in the totally normal Series G, so only 19 series left to go. That number seems low to me, given what has happened in the last few months with Opus 4.5, Opus 4.6 and Claude Code.
Investment in AI is accelerating to north of $1 trillion a year.
The Week In Audio
Trailer for ‘The AI Doc: Or How I Became An Apocaloptimist,’ movie out March 27. Several people who were interviewed or involved have given it high praise as a fair and balanced presentation.
Ross Douthat interviews Dario Amodei.
Y Combinator podcast hosts Boris Cherny, creator of Claude Code.
Ajeya Cotra on 80,000 Hours.
Tomorrow we continue with Part 2.