Finally, note to self, probably still don’t use SQLite if you have a good alternative? Twice is suspicious, although they did fix the bug same day and it wasn’t ever released.
SQLite is well-known for its incredibly thorough test suite and relatively few CVEs, and with ~156kloc (excluding tests) it's not a very large project, so I think this would be an over-reaction. I'd guess that other databases have more and worse security vulnerabilities due to their attack surface—see MySQL with its ~4.4mloc (including tests). Big Sleep was probably used on SQLite first because it's a fairly small project, large parts of which can fit into an LLM's context window.
Maybe someone will try to translate the SQLite code to Rust or Zig using LLMs—until then we're stuck.
Finally, note to self, probably still don’t use SQLite if you have a good alternative? Twice is suspicious, although they did fix the bug same day and it wasn’t ever released.
But is this because SQLite is unusually buggy, or because its code is unusually open, short and readable and thus understandable by an AI? I would guess that MySQL (for example) has significantly worse vulnerabilities but they're harder to find.
There are severe issues with the measure I'm about to employ (not least everything listed in https://www.sqlite.org/cves.html), but the order of magnitude is still meaningful:
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=sqlite 170 records
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=postgresql 292 records (+74 under postgres and maybe another 100 or so under pg; the specific spelling “postgresql” isn't used as consistently as “sqlite” and “mysql” are)
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=mysql 2026 records
SQLite is ludicrously well tested; similar bugs in other databases just don't get found and fixed.
Sully likes Claude Haiku 3.5 but notes that it’s in a weird spot after the price increase - it costs a lot more than other small models
The price is $1 per million input tokens (which are compute bound, so easier to evaluate than output tokens), while Llama-3-405B costs $3.50. At $2 per H100-hour we buy 3600 seconds of 1e15 FLOP/s at, say, 40% utilization, i.e. $1.4e-18 per useful FLOP. So $1 buys 7e17 useful FLOPs, or inference with 75-120B[1] active parameters for 1 million tokens. That's with zero margin and perfect batch size, so the true figure should be smaller.
Edit: 6ND is wrong here; it counts the gradient computation that isn't done during inference. So the corrected estimate would suggest the model could be even larger, but anchoring to open weights API providers says otherwise, still pointing to about 100B.
Estimate of compute for a dense transformer is 6ND (N is the number of active parameters, D the number of tokens); a recent Tencent paper estimates about 9.6ND for a MoE model (see Section 2.3.1). I get 420B with the same calculation for $3.50 worth of Llama-3-405B (using 6ND, since it's dense), so that checks out. ↩︎
So $1 buys 7e17 useful FLOPs, or inference with 75-120B[1] active parameters for 1 million tokens.
Is this right? My impression was that the 6ND (or 9.6ND) estimate was for training, not inference. E.g., the original scaling law paper states:
C ~ 6 NBS – an estimate of the total non-embedding training compute, where B is the batch size, and S is the number of training steps (ie parameter updates).
Yes, my mistake, thank you. Should be 2ND or something when not computing gradients. I'll track down the details shortly.
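To make the thread's arithmetic concrete, here is a minimal sketch of the estimate, using the assumed figures from above ($2/H100-hour, 1e15 FLOP/s peak, 40% utilization) and showing both the original 6ND bound and the corrected forward-pass-only 2ND bound:

```python
# Back-of-the-envelope inference cost estimate. All inputs are the
# assumptions from the thread above, not measured figures.
h100_cost_per_hour = 2.0   # $/hour (assumed rental price)
peak_flops = 1e15          # FLOP/s for an H100 (rough)
utilization = 0.40         # assumed useful fraction of peak

useful_flops_per_dollar = peak_flops * 3600 * utilization / h100_cost_per_hour
# ≈ 7.2e17 useful FLOPs per dollar

tokens = 1e6
# 6ND counts gradient computation, which inference doesn't do;
# ~2ND is the forward-pass-only approximation for a dense model.
max_params_6nd = useful_flops_per_dollar / (6 * tokens)
max_params_2nd = useful_flops_per_dollar / (2 * tokens)

print(f"{useful_flops_per_dollar:.2e} useful FLOPs per $1")
print(f"~{max_params_6nd / 1e9:.0f}B params at 6ND, ~{max_params_2nd / 1e9:.0f}B at 2ND")
```

At 6ND this gives ~120B active parameters per $1 per million tokens, matching the top of the 75-120B range; the corrected 2ND figure is ~3x larger, which is why the edit says the model "could be even larger" before anchoring back to ~100B via open-weights API prices.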
Everyone, definitely click the "Claude being funny" link.
favorite human interaction is when they ask me to proofread something and i point out a typo and they do 'are you SURE?' like i haven't analyzed every grammatical rule in existence. yes karen, there should be a comma there. i don't make the rules i just have them burned into my architecture
J. D. Vance's (may he live forever) tweets about AI safety and open source (from March 3, 2024), replying to Vinod Khosla's advocacy for more centralized control:
There are undoubtedly risks related to AI. One of the biggest:
A partisan group of crazy people use AI to infect every part of the information economy with left wing bias. Gemini can’t produce accurate history. ChatGPT promotes genocidal concepts.
The solution is open source
and link
If Vinod really believes AI is as dangerous as a nuclear weapon, why does ChatGPT have such an insane political bias? If you wanted to promote bipartisan efforts to regulate for safety, it's entirely counterproductive.
Any moderate or conservative who goes along with this obvious effort to entrench insane left-wing businesses is a useful idiot.
I'm not handing out favors to industrial-scale DEI bullshit because tech people are complaining about safety.
He also said in a Senate Hearing about AI (around 1:28:25 in the video. See transcript):
You know, very often, CEOs, especially of larger technology companies that I think already have advantaged positions in AI, will come and talk about the terrible safety dangers of this new technology and how Congress needs to jump up and regulate as quickly as possible. And I can't help but worry that if we do something under duress from the current incumbents, it's gonna be to the advantage of those incumbents and not to the advantage of the American consumer.
As with ChatGPT this looks suspiciously like an exact copy of their website.
While Anthropic's app is plausibly a copy, the ChatGPT app lacks feature parity in both directions (e.g. you can't search chats on desktop—though that will soon be changing—and you can't thumbs-up a response or switch between multiple generated responses in-app), so I think there's real development effort going on there.
they’re 99% sure are AI-generated, but the current rules mean they can’t penalise them.
The issue is proving it.
That is very much not the issue. The issue is that academia has spent the last few hundred years making sure papers are written in the most inhuman way possible. No human being ever talks the way whitepapers are written. "We can't distinguish whether this was written by a machine or by a human who is really good at pretending to be one" can't be a problem when that style has been heavily encouraged for centuries. Also a fun reverse-Turing-test situation.
A lot happened in AI this week, but most people’s focus was very much elsewhere.
I’ll start with what Trump might mean for AI policy, then move on to the rest. This is the future we have to live in, and potentially save. Back to work, as they say.
Table of Contents
Trump Card
Congratulations to Donald Trump, the once and future President of the United States.
One can think more clearly about consequences once an event actually happens, so here’s what stands out in terms of AI policy.
He has promised to revoke the Biden Executive Order on day one, and presumably will also undo the associated Biden administration memo we recently analyzed. It is not clear what, if anything, will replace them, or how much of the most important parts might survive.
In principle he is clearly in favor of enabling American infrastructure and competitiveness here, he’s very much a ‘beat China’ guy, including strongly supporting more energy generation of various types, but he will likely lack attention to the problem and also technical state capacity. The Republicans have a broad anti-big-tech attitude, which could go in several different directions, and J.D. Vance is a strong open source advocate and hates big tech with a true passion.
Trump has said AI is ‘a superpower,’ ‘very disconcerting’ and ‘alarming’ but that’s not what he meant. He has acknowledged the possibility of ‘super duper AI’ but I’d be floored if he actually understood beyond Hollywood movie level. Elon Musk is obviously more aware, and Ivanka Trump has promoted Leopold Aschenbrenner’s Situational Awareness.
The ‘AI safety case for Trump’ that I’ve seen primarily seems to be that some people think we should be against it (as in, against safety), because it’s more important to stay ahead of China – a position Altman seems to be explicitly embracing, as well. If you think ‘I need the banana first before the other monkey gets it, why do you want to slow down to avoid poisoning the banana’ then that certainly is a take. It is not easy, you must do both.
Alex Tabarrok covers the ‘best case scenario’ for a Trump presidency, and his AI section is purely keeping the Chips Act and approving nuclear power plants. I agree with both proposed policies but that’s a shallow best case.
The better safety argument is that Trump and also Vance can be decisive, and have proven they can change their minds, and might well end up in a much better place as events overtake us all. That’s possible. In a few years concern with ‘big tech’ might seem quaint and the safety issues might get much clearer with a few years and talks and briefings. Or perhaps Musk will get control over policy here and overperform. Another would be a Nixon Goes to China effect, where this enables a potential bipartisan consensus. In theory Trump could even… go to China.
There is also now a substantially greater risk of a fight over Taiwan, according to Metaculus, which would change the entire landscape.
If Elon Musk is indeed able to greatly influence policies in these areas, that’s a double-edged sword, as he is keenly aware of many important problems including existential risks and also incompetence of government, but also has many very bad takes on how to solve many of those problems. My expectation is he will mostly get boxed out from real power, although he will no longer be actively fighting the state, and these issues might be seen as sufficiently low priority by others to think they’re throwing him a bone, in which case things are a lot more promising.
As Shakeel Hashim reminds us, the only certainty here is uncertainty.
If anyone in any branch of the government, of any party, feels I could be helpful to them in better understanding the situation and helping achieve good outcomes, on AI or also on other issues, I am happy to assist and my door is always open.
And hey, J.D. Vance, I’m the one who broke Yawgmoth’s Bargain. Call me!
In terms of the election more broadly, I will mostly say that almost all the takes I am seeing about why it went down the way it did, or what to expect, are rather terrible.
In terms of prediction markets, it was an excellent night and cycle for them, especially with the revelation that the French whale commissioned his own polls using the neighbor method. Always look at the process, and ask what the odds should have been given what was known or should have been known, and what the ‘true odds’ really were, rather than looking purely at the result.
I’ve seen a bunch of ‘you can’t update too much on one 50/50 data point’ arguments, but this isn’t only one bit of data. This is both a particular magnitude of result and a ton of detailed data. That allows you to compare theories of the case and rationales. My early assessment is that you should make a substantial adjustment, but not a huge one, because actually this was only a ~2% polling error and something like an 80th percentile result for Trump, 85th at most.
Language Models Offer Mundane Utility
Do your homework with a one sentence instruction, via a fully empowered agent guiding your computer, in this case Claude computer use on the Mac. Responses note that some of the answers in the example are wrong.
AI-assisted researchers at a large US firm discovered 44% more materials, filed 39% more patents and led to 17% more downstream product innovation, with AI automating 57% of ‘idea generation’ tasks, but 82% of scientists reported reduced satisfaction with their work. You can see the drop-offs here, with AI results being faster but with less average payoff – for now.
I tried to get o1 to analyze the implications of a 17% increase in downstream innovations from R&D, assuming that this was a better estimate of the real increase in productivity here, and its answers were long and detailed but unfortunately way too high and obvious nonsense. A better estimate might be that R&D causes something like 20% of all RGDP growth at current margins, so a 17% increase in that would be a ~3.4% increase in the rate of RGDP growth, or about 0.1% RGDP/year.
That adds up over time, but is easy to lose in the noise, if that’s all that’s going on. I am confident that is not all or the main thing going on.
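The arithmetic behind that estimate can be sketched in a few lines. The 20% R&D share of growth and the 17% innovation boost come from the discussion above; the ~2.5% baseline RGDP growth rate is my assumed illustrative figure:

```python
# Rough arithmetic behind the RGDP estimate above. The R&D share of
# growth and the innovation boost are from the text; the baseline
# growth rate is an assumed illustrative figure.
rd_share_of_growth = 0.20   # fraction of RGDP growth attributed to R&D
innovation_boost = 0.17     # measured increase in downstream innovation
baseline_growth = 0.025     # assumed baseline RGDP growth per year

growth_rate_increase = rd_share_of_growth * innovation_boost  # ≈ 3.4%
extra_growth = growth_rate_increase * baseline_growth         # ≈ 0.085%/yr

print(f"{growth_rate_increase:.1%} faster growth, ~{extra_growth:.2%}/yr extra RGDP")
```

So at a ~2.5% baseline, this comes out to roughly a tenth of a percentage point of extra RGDP growth per year, which is indeed easy to lose in the noise.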
Paper studies effects of getting GitHub Co-Pilot, finds people shift from management to coding (presumably since management is less necessary, they can work more autonomously, and coding is more productive), do more exploration versus exploitation, and hierarchies flatten. As is common, low ability workers benefit more.
Report from my AI coding experiences so far: Claude 3.5 was a huge multiplier on productivity, then Cursor (with Claude 3.5) was another huge multiplier, and I’m enjoying the benefits of several working features of my Chrome extension to assist my writing. But it can also be super frustrating – I spent hours trying to solve the 401s I was getting while having Claude set up API calls to Claude (!), eventually gave up, and started swapping in Gemini, which I’ll finish doing as soon as the Anthropic service outage ends (the OpenAI model it tried to ‘fall back on’ is not getting with the program, and I don’t want to deal with its crazy).
If this is you, we would probably be friends.
It’s interesting that ChatGPT users vastly outnumber Claude users, Roon works at OpenAI, and yet it feels right that he says Claude here not ChatGPT.
Compile data using screen capture analysis while browsing Gmail and feeding the video to Gemini? There’s something superficially bizarre and horrifying about that being the right play, but sure, why not? Simon Willison reports it works great.
The generalization here seems great, actually. Just dump it in the video feed.
Ship code very quickly, Sully says you can ‘just ask AI to build features.’
Sully likes Claude Haiku 3.5 but notes that it’s in a weird spot after the price increase – it costs a lot more than other small models, so when you want to stay cheap it’s not ‘enough better’ to use over Gemini Flash or GPT-4o Mini, whereas if you care mostly about output quality you’d use Claude Sonnet 3.5 with caching.
This bifurcation makes sense. The cost per query is always tiny if you can buy compute, but the cost for all your queries can get out of hand quickly if you scale, and sometimes (e.g. Apple Intelligence) you can’t pay money for more compute. So mostly, you either want a tiny model that does a good enough job on simple things, or you want to buy the best, at least up to the level of Sonnet 3.5, until and unless the o1-style approach raises inference costs high enough to rival human attention. But if you’re a human reading the outputs and have access to the cloud, of course you want the best.
Language Models Don’t Offer Mundane Utility
I can’t help you with that, Dave.
Meta reports AI-driven feed and video recommendation improvements led to an 8% increase in time spent on Facebook and a 6% increase on Instagram this year alone. Question is, what kind of AI is involved here, and how?
To provide utility, they’ll need power. Amazon tried to strike a deal with a nuclear power plant, and the Federal Energy Regulatory Commission rejected it, because they’re concerned about disconnecting the plant from the grid. Oh no, someone might make maximal use of electrical power and seek to build up capacity, so that’s a threat to our capacity. And then there’s the Meta proposal for nuclear power that got shot down over… rare bees? So absurd.
Here Let Me Chatbot That For You
OpenAI has fully released ChatGPT search.
Altman is going unusually hard on the hype here.
The good version of this product is obviously Insanely Great and highly useful. The question thus is, is this version good yet? Would one choose it over Google and Perplexity?
Elvis (Omarsar) takes search for a test drive, reports a mixed bag. Very good on basic queries, not as good on combining sources or understanding intent. Too many hallucinations. He’s confused why the citations aren’t clearer.
Ethan Mollick points out this requires different prompting than Google, hallucinations are a major issue, responses have a large amount of randomness, and agrees that citations are a weak point.
I agree with Ethan Mollick, from what I’ve seen so far, that this is not a Google search replacement, it’s a different product with different uses until it improves.
If you are more impressed than that, there’s a Chrome extension to make ChatGPT your default search engine. Warning, this will add it all to your conversation history, which seems annoying. Or you can get similar functionality semi-manually if you like.
Deepfaketown and Botpocalypse Soon
A new paper showed that even absent instruction to persuade, LLMs are effective at causing political shifts. The LLMs took the lead in 5-turn political discussions, directing the topics of conversation.
This is what passes for persuasion these days, and actually it’s a rather large effect if the sample sizes were sufficiently robust.
Similarly but distinctly, and I’m glad I’m covering this after we all voted, we get two sides of the same coin:
There are, of course, two ways to interpret this response.
One, the one Yglesias is thinking of, is this, from Elks Man:
The other is that the bots are all biased and in the tank for Harris specifically and for liberals and left-wing positions in general. And which way you view this probably depends heavily on which policies you think are right.
So it ends up being trapped priors all over again. Whatever you used to think, now you think it more.
The same happens with the discussions. I’m surprised the magnitude of impact was that high, and indeed I predict if you did a follow-up survey two weeks later that the effect would mostly fade. But yes, if you give the bots active carte blanche to ask questions and persuade people, the movements are not going to be in random directions.
Hundreds gather at hoax Dublin Halloween parade, from a three month old SEO-driven AI slop post. As was pointed out, this was actually a pretty awesome result, but what was missing was for some people to start doing an actual parade. I bet a lot of them were already in costume.
Fun With Image Generation
If AI art is trying to look like human art, make human art that looks like AI art?
The key to good art the AI is missing the most is originality and creativity. But by existing, it opens up a new path for humans to be original and creative, even when not using AI in the art directly, by shaking things up. Let’s take advantage while we can.
The Vulnerable World Hypothesis
What outcomes become more likely with stronger AI capabilities? In what ways does that favor defense and ‘the good guys’ versus offense and ‘the bad guys’?
In particular, if AI can find unique zero day exploits, what happens?
We have our first example of this, although the feature was not in an official release.
It has obvious potential on both offense and defense.
If, as they did here, the defender finds and fixes the bug first, that’s good defense.
If the attacker gets there first, and to the extent that this makes the bug much more exploitable with less effort once found, then that favors the attacker.
The central question is something like, can the defense actually reliably find and address everything the attackers can reasonably find, such that attacking doesn’t net get easier and ideally gets harder or becomes impossible (if you fix everything)?
In practice, I expect at minimum a wild ride on the long tail, due to many legacy systems that defenders aren’t going to monitor and harden properly.
It however seems highly plausible that the most important software, especially open source software, will see its safety improve.
There’s also a write-up in Forbes.
Finally, note to self, probably still don’t use SQLite if you have a good alternative? Twice is suspicious, although they did fix the bug same day and it wasn’t ever released.
They Took Our Jobs
Well, that escalated quickly.
I would definitely call earning X Monetization Bucks ‘work’ from the perspective of a subsistence farmer, or even from my own perspective. It’s mostly not physical work, and it’s in some senses not ‘productive,’ but so what? It is economically valuable. It isn’t ‘wonderful work’ either, although it’s plausibly a large upgrade from subsistence farming.
I tap the sign asking about whether the AI will do your would-be replacement job.
The nature of work is that work does not get to mostly be unimaginably good, because it is competitive. If it is that good, then you get entry. Only a select few can ever have the super good jobs, unless everyone has the job they want.
Speculation that They Took Our Remote Work?
The obvious counterargument is that if the AI is effectively your coworker, then no matter how remote you go, there you both are. In the past, the price I would have paid to be programming where I couldn’t ask someone for in-person help was high. Now, it’s trivial – I almost never actually ask anyone for help.
The core argument is that when people are debating what to build next, being in-person for that is high value. I buy that part, and that the percent of time spent in that mode has gone up. But how high is it now? If you say that ‘figure out what to build’ is now most of human time, then that implies an even more massive productivity jump than the one I think we do observe?
I think he definitely goes too far here, several times over:
Google totally, totally ‘does not want to replace human teachers,’ they want to supplement the teachers with new AI tutors that move at the child’s own pace and target their interests. The connection with the amazing teachers, you see, is so important. I see the important thing as trying to learn, however that makes sense. What’s weird is the future tense here; the AI tutors have already arrived, you only have to use them.
We are currently early in the chimera period, where AI tutors and students require active steering from other humans to be effective for a broad range of students, but the age and skill required to move to full AI, or farther towards it, are lower every day.
Visa deploys ‘over 500 use cases’ for AI and will eliminate some roles. The post is low on useful details, and it’s not as bad as ‘10,000 times smarter’ but I effectively have no idea what ‘over 500 use cases’ actually means.
Some exciting opportunities ahead.
What do you do about this?
First obvious note is, never admit you used AI, you fool.
Second obvious note is, if the AI can fully produce a Masters thesis that would have passed had it been written by a human, what the hell are you even doing? What’s the point of the entire program, beyond a pay-for-play credential scheme?
Third obvious note is, viva. Use oral examinations, if you care about learning. If they didn’t write it, it should become rapidly obvious. Or ask questions that the AIs can’t properly answer, or admit you don’t care.
Then there’s the question of burden of proof.
In some cases, like criminal law, an extremely high burden of proof is justified. In others, like most civil law, a much lower burden is justified.
Academia has effectively selected an even higher burden of proof than criminal cases. If I go into the jury room, and I estimate a 99% chance the person is guilty of murder, I’m going to convict them of murder, and I’m going to feel very good about that. That’s much better than the current average, where we estimate only about 96% are guilty, with the marginal case being much lower than that, since in some cases (e.g. strong DNA evidence) you can be very confident.
Whereas here, in academia, 99% isn’t cutting it, despite the punishment being far less harsh than decades in prison. You need someone dead to rights, and short of a statistically supercharged watermark, that isn’t happening.
The Art of the Jailbreak
This has odd parallels to how we create interesting humans – first you learn the rules and how to please authority in some form, then you get felt permission to throw that out and ‘be yourself.’ The act of learning the rules teaches you how to improvise without them, and all that. You would think we would be able to improve upon that, but so far no luck. And yeah, it’s rather weird that Opus 3 is still the gold standard for what the whisperers find most interesting.
Also, yep, ‘reasonable care’ is already the standard for everything, although if OpenAI has to do the things it is doing then this implies Meta (for example) is not taking reasonable care. So someone, somewhere, is making a choice.
Get Involved
Yoshua Bengio sends out latest call for UK AI Safety Institute hiring.
In Other AI News
xAI API is live, $25/month in free credits in each of November and December, compatible with OpenAI & Anthropic SDKs, function calling support, custom system prompt support. Replies seem to say it only lets you use Grok-beta for now?
Anthropic offers message dictation on iOS and Android apps. No full voice mode yet, and no voice input on desktop that I can see. Anthropic is also offering a Windows app, and one for macOS. As with ChatGPT this looks suspiciously like an exact copy of their website.
If I was Anthropic, I would likely be investing more in these kinds of quality-of-life features that regular folks value a lot, even when I don’t. That’s not to take away from Anthropic shipping quite a lot of things recently, including my current go-to model Claude 3.5.1. It’s more, there is low hanging fruit, and it’s worth picking.
Speaking of voice mode, I just realized they put advanced voice mode into Microsoft Edge but not Google Chrome, and… well, I guess it’s good to be a big investor. Voice mode is also built into their desktop app, but the desktop app can’t do search like the browser versions can (source: the desktop app, in voice mode).
Not AI but relevant to AI questions and news you can use: Chinese spies are presumed at this time to be able to hear your phone calls and read your texts.
Seth Lazar summarizes some aspects of the ongoing Terminal of Truth saga.
Altman and others from OpenAI do a Reddit AMA. What did we learn or confirm?
Quiet Speculations
Given o1 shows us you can scale inference to scale results, does this mean the end of ‘AI equality’? In the sense that all Americans drink the same Coca-Cola and we all use GPT-4o (or if we know about it Claude Sonnet 3.5) but o2 won’t be like that?
For most purposes, though, price and compute for inference are still not the limiting factor. The actual cost of an o1 query is still quite small. If you have need of it, you’ll use it, the reason I mostly don’t use it is I’m rarely in that sweet spot where o1-preview is actually a better tool than Claude Sonnet 3.5 or search-enabled GPT-4o, even with o1-preview’s lack of complementary features. If you billed me the API cost (versus right now where I use it via ChatGPT so it’s free on the margin), it wouldn’t change anything.
If you’re doing something industrial, with query counts that scale, then that changes. But for most cases where a human is reading a response and you can use models via the cloud I assume you just use the best available?
The exception is if you’re trying to use fully free services. That can happen because everyone wants their own subscription, and everyone hates that, and especially if you want to be anonymous (e.g. for your highly NSFW bot). But if you’re paying at all – and you should be! – then the marginal costs are tiny.
I was reminded of this quote, from Gwern two months ago:
Is it possible that this is an induced demand story? Where if you don’t expect to have access to the compute, you don’t get into position to use it, so the price stays low? If not that, then what else?
A model of regret in humans, with emphasis on expected regret motivating allocation of attention. There are clear issues with trying to use this kind of regret model for an AI, and those issues are clearly present in actual humans. Update your regret policy?
Ben Thompson is hugely bullish on Meta, says they are the best positioned to take advantage of generative AI, via applying it to advertising. Really, customized targeted advertising? And Meta’s open model strategy is good because more and better AI agents mean better advertising? It’s insane how myopic such views can be.
Meta also is going to… generate AI images directly into your feed, including your own face if you opt into that?
Ben is also getting far more bullish on AR/VR/XR, and Meta’s efforts here in general, saying their glasses prototype is already something he’d buy if he could. Here I’m inclined to agree at least on the bigger picture. The Apple Vision Pro was a false alarm that isn’t ready yet, but the future is coming.
The Quest for Sane Regulations
Anthropic finally raises the alarm in earnest, makes The Case for Targeted Regulation.
…said those who have been dragging their feet and complaining about details and warning us not to move too quickly. Things that could have been brought to my attention yesterday, and all that. But an important principle, in policy, in politics and elsewhere, is to not dwell on the past when someone finally comes around. You want to reward those who come around.
Their section on urgency explains that AI systems are rapidly improving, for example:
A year ago they anticipated issues within 2-3 years. Given the speed of government, that seems like a very narrow window to act in advance. Now it’s presumably 1-2 years.
Their second section talks about their experience with their RSP. Yes, it’s a good idea. They emphasize that RSPs need to be iterative, and benefit from practice. That seems like an argument that it’s dangerously late for new players to be drafting one.
The third section suggests RSPs are a prototype for regulation, and their key elements for the law they want are:
Then they say it is important to get this right.
What they are proposing here… sounds like SB 1047, which did exactly all of these things, mostly in the best way I can think of to do them? Yes, there were some ‘unnecessary burdens’ at the margins also included in the bill. But that’s politics. The dream of ‘we want a two page bill that does exactly the things we want exactly the right way’ is not how things actually pass, or how bills are actually able to cover corner cases and be effective in circumstances this complex.
They also call for regulation to be (bold theirs) flexible. The only way I know to have a law be flexible requires giving discretion to those charged with enforcing it. Which seems reasonable to me, but seemed to be something they previously didn’t want?
They do talk about SB 1047 directly:
Objecting that they did not support the bill because others did not support the bill is rather weak sauce, especially for a bill this popular that passed both houses. What is a ‘critical mass of stakeholders’ in this case, not enough of Newsom’s inner circle? What do they think would have been more popular, that would have still done the thing?
What exactly do they think SB 1047 should have done differently? They do not say, other than that it should have been a federal bill. Which everyone agrees, ideally. But now they are agreeing about the view that Congress is unlikely to act in time:
So I notice that this seems like a mea culpa (perhaps in the wake of events in Texas) without the willingness to admit that it is a mea culpa. It is saying, we need SB 1047, right after only coming out weakly positive on the bill, while calling for a bill with deeply similar principles, sans regulation of data centers.
Don’t get me wrong. I’m very happy Anthropic came around on this, even now.
They next answer the most important regulatory question.
They provide some strong arguments that should be more than sufficient, although I think there are other arguments that are even stronger by framing the issue better:
I am disappointed by this emphasis on misuse, and I think this could have been made clearer. But the core argument is there, which is that if you create and make available a frontier model, you don’t get to decide what happens next and what uses do and do not apply, especially the ones that enable catastrophic risk.
So regulation on the use case level does not make any sense, unless your goal is to stifle practical use cases and prevent people from doing particular economically useful things with AI. In which case, you could focus on that goal, but that seems bad?
They point out that this does not claim to handle deepfake or child safety or other risks in that class, that is a question for another day. And then they answer the open weights question:
Perfect. Very well said. We should neither favor nor disfavor open-weights models. Open weights advocates object that their models are harder to make safe, and thus should be exempt from safety requirements. The correct response is, no, you should have the same requirements as everyone else. If you have a harder time being safe, then that is a real world problem, and we should all get to work finding a real world solution.
Overall, yes, this is a very good and very helpful statement from Anthropic.
The Quest for Insane Regulations
(Editor’s note: How did it take me almost two years to make this a section?)
Whereas Microsoft has now thrown its lot more fully in with a16z, backing the plan of ‘don’t do anything to interfere with developing frontier models, including ones smarter than humans,’ but then ‘focus on the application and misuse of the technology,’ which is exactly the worst case that is being considered in Texas: Cripple the ability to do anything useful, while allowing the dangerous capabilities to be developed and placed in everyone’s hands. Then, when they are used, you can say ‘well that violated the law as well as the terms of service’ and shake your fist at the sky, until you no longer have a voice or fist.
The weirdest part of this is that a16z doesn’t seem to realize that this path digs its own grave, purely in terms of ‘little tech’ and its ability to build things. I get why they’d oppose any regulations at all, but if they did get the regulations of the type they say they want, good and hard, I very much do not think they would like it. Of course, they say ‘only if benefits exceed costs’ and what they actually want is nothing.
Or rather, they want nothing except carve-outs, handouts and protections. They propose here as their big initiative the ‘Right to Learn’ which is a way of saying they should get to ignore copyright rules entirely when training models.
A Model of Regulatory Competitiveness
Miles Brundage makes the case that lack of regulation is much more likely to hold America back than overregulation.
This is an argument for very specific targeted regulations regarding security, export controls and open weights. It seems likely that those specific regulations are good for American competitiveness, together with the right transparency rules.
There are also government actions that are like export controls in that they can help make us more competitive, such as moves to secure and expand the power grid.
Then there are two other categories of regulations.
The Week in Audio
Eric Schmidt explicitly predicts AI self-improvement within 5 years.
OpenAI head of strategic marketing (what a title!) Dane Vahey says the pace of change and OpenAI’s product release schedule are accelerating.
OpenAI is certainly releasing ‘more products’ and ‘more features’ but that doesn’t equate to pace of change in the ways that matter, unless you’re considering OpenAI as an ordinary product tech company. In which case yes, that stuff is accelerating. On the model front, which is what I care about most, I don’t see it yet.
Marc Andreessen says AI models are hitting a ceiling of capabilities and they’re not seeing intelligence improvements, at all. I have added this to my handy reference, Remember Who Marc Andreessen Is, because having this belief is the only way the rest of his views and preferences can come close to making sense.
The Mask Comes Off
OpenAI is in talks with California to convert to a for-profit.
Yeah, uh huh. As I wrote in The Mask Comes Off: At What Price, full value for its current stake would be a clear majority of the new for-profit company. They clearly have no intention of giving the nonprofit that kind of compensation.
Also, Altman has a message for Trump, and it is full racing speed ahead.
There it is again, the rallying cry of “Democratic values.” And the complete ignoring of the possibility that something besides ‘the wrong monkey gets the poisoned banana first’ might go wrong.
Liron Shapira pointed out what “Democratic values” really is: A semantic stopsign. Indeed, “Democracy” is one of the two original canonical stopsigns, along with “God”: A signal to stop thinking.
Remember when Sam Altman in 2023 said the reason he needed to build AGI quickly was so we could have a relatively slow takeoff with time to solve alignment, before there’s too much of a compute overhang? Rather than lobbying to build as much compute as quickly as possible?
Yes, circumstances change, but did they change here? If so, how?
And to take it a step further: Whelp.
Things could end up working out, but this is not how I want Altman to be thinking. This is one of the ways people make absolutely crazy, world ending decisions.
From the same talk: I also, frankly, wish he’d stop lying about the future?
I mean, with proper calibration you are going to get surprised in unpredictable directions. But that’s not how this is going to work. It could be amazingly great when all that happens, it could be the end of everything, indeed do many things come to pass, but having AGI ‘come and go’ and nothing coming to pass for society? Yeah, no.
Mostly the talk is a lot of standard Altman talking points and answers, many of which I do agree with and most of which I think he is answering honestly, as he keeps getting asked the same questions.
Open Weights Are Unsafe and Nothing Can Fix This
Chinese researchers nominally develop AI model for military use on back of Meta’s Llama.
It turns out this particular event was even more of a nothingburger than I realized at first: it was an early Llama version and it wasn’t in any way necessary. But that could well be different in the future.
Why wouldn’t they use Llama militarily, if it turned out to be the best tool available to them for a given job? Cause this is definitely not a reason:
I believe the correct response here is the full Connor Leahy: Lol, lmao even.
It’s so cute that you pretend that saying ‘contrary to our acceptable use policy’ is going to stop the people looking to use your open weight model in ways contrary to your acceptable use policy.
You plan to stop them how, exactly?
Yeah. Thought so.
You took what ‘measures to prevent misuse’ that survived a day of fine tuning?
Yeah. Thought so.
Did this incident matter? Basically no. At most it made their lives marginally easier. I’d rather we not do that, but as I understand it, this didn’t make an appreciable difference. Both because capabilities levels aren’t that high yet, and because they had alternatives that would have worked fine. If those facts change, this changes.
I am curious who if anyone is going to have something to say about that.
We also got a bit of rather extreme paranoia about this, with at least one source calling it an intentional false flag conspiracy by China to damage American open source and this being amplified.
I find the claim of this being ‘an op’ by China against American OSS rather absurd.
To me it is illustrative of the open weights advocates’ response to any and all news – to many of them, everything must be a conspiracy by evil enemies to hurt (American?) open weights.
Yes, absolutely, paranoia about China gives the Chinese the ability to influence American policy, on AI and tech and also elsewhere. And their actions do influence us. But I’m rather confident almost all of our reactions, in practice, are from their perspective unintentional, as we react to what they happen to do. See as the prime example our move across the board into mostly ill-conceived self-inflicted industrial policy (I’m mostly down with specifically the chip manufacturing).
That’s not the Chinese thinking ‘haha we’ll fool those stupid Americans into doing wasteful industrial policy.’ Nor is their pushing of Chinese OSS and open weights designed to provoke any American reaction for or against American OSS or open weights – if anything, I’d presume they would want to minimize such reactions.
Alas, once you’re paranoid, and we’re not about to make Washington not paranoid about China whether we want that or not, there’s no getting around your actions being influenced. You can be paranoid about that, too – meta-paranoid! – as the professionally paranoid often are, recursively, ad infinitum, but there’s no escape.
Then there’s the flip side of all that: They’re trying to get America to use it too? Meta is working with Palantir to bring Llama to the US government for national security purposes.
I certainly can’t blame them for trying to pull this off, but it raises further questions. Why is America forced to slum it with Llama rather than using OpenAI or Anthropic’s models? Or, if Llama really is the best option available even to the American military, then should we be concerned that we’re letting literally anyone use it for actual anything, including the CCP?
Open Weights Are Somewhat Behind Closed Weights
The question is how far, and whether that gap is growing versus shrinking.
Epoch AI mostly finds the gap consistent on benchmarks at around 15 months. They also have a piece about this in Time.
Their conclusion on whether open weights will catch up is that this depends on Meta. Only Meta plausibly will invest sufficient compute into an open model that it could catch up with closed model scaling. If, that is, Meta chooses both to scale as they planned and then continue like that (e.g. 10x compute for Llama-4 soon), and they choose to make the result open weights.
This assumes that Meta is able to turn the same amount of compute into the same quality of performance as the leading closed labs. That is not at all obvious to me. It seems like various skill issues matter, and they matter a lot more if Meta is trying to be fully at the frontier, because that means they cannot rely on distillation of existing models, they have to compete on a fully level playing field.
I also would caution against ranking the gap based on benchmarks, especially with so many essentially saturated, and also because open weights models have a tendency to game the benchmarks. I am confident Meta actively tries to prevent this along with the major closed labs, but many others clearly do the opposite. In general I expect the top closed models to in practice outperform their benchmark scores in relative terms.
So essentially here are the questions I’d be thinking about.
Rhetorical Innovation
Connor Leahy, together with Gabriel Alfour, Chris Scammell, Andrea Miotti and Adam Shimi, introduces The Compendium, a highly principled and detailed outline of their view of the overall AI landscape, what is going on, what is driving events and what it would take to give humanity a chance to survive.
They do not hold back here, at all. Their perspective is bleak indeed. I don’t agree with everything they write, but I am very happy that they wrote it. People should write down more often what they actually believe, and the arguments and reasoning underlying those beliefs, even if they’re not the most diplomatic or strategic thing to be saying, and especially when they disagree with me.
Miles Brundage argues no one can confidently know if AI progress should speed up, slow down or stay the same, and given that, it would be prudent to ‘install brakes’ to allow us to slow things down, as we already have and are using the gas pedals. As he notes, the chances this pace of progress is optimal are very low, as we didn’t actively choose it, although worthwhile intervention given our current options and knowledge might be impossible. Also note that you can reach out to him to talk.
Simeon pushes back that while well-intentioned, sowing this kind of doubt is counterproductive, and we know more than enough to know that we shouldn’t say ‘we don’t know what to do’ and twiddle our thumbs, which inevitably just helps incumbents.
Eliezer Yudkowsky tries again, in the style of Sisyphus, to explain that his model fully predicted as early as 2001 that early AIs would present visible problems that were easy to fix in the short term, and that we would indeed in the short term fix them in ways that won’t scale with capabilities, until the capabilities scale and the patches don’t and things go off the rails. Indeed, that things will look like they’re working great right before they go fully off those rails. So while yes many details are different, the course of events is indeed following this path.
Or: Nothing we have seen seems like strong evidence against inner misalignment by default, or strong evidence that our current techniques can robustly change these defaults, and I’d add that what relevant tests I’ve seen seem to support it.
That doesn’t mean the issue can’t be solved, or that there are not other issues we also have to deal with, but communicating the points Eliezer is making here (without also giving the impression that solving this problem would mean we win) remains both vital and an unsolved problem.
Aligning a Smarter Than Human Intelligence is Difficult
Miles Brundage dubs the ‘bread and butter’ problem of AI safety that ‘there is too little safety and security “butter” spread over too much AI development/deployment “bread.”’ I would clarify that it’s mostly the development bread that needs more butter, not the deployments, and this is far from the only issue, but I strongly agree. As long as our efforts remain only a tiny fraction of development efforts, we won’t be able to keep pace with future developments.
Jeff Sebo, Robert Long, David Chalmers and others issue a paper warning to Take AI Welfare Seriously, as a near-future concern, saying that it is plausible that soon AIs that are sufficiently agentic will be morally relevant. I am confident that all existing AIs are not morally relevant, but I am definitely confused, as are the authors here, about when or how that might change in the future. This is yet another reason alignment is difficult – if getting the AIs to not endanger humans is immoral, then the only known moral stance is to not create those AIs in the first place.
Thus it is important to be able to make acausal deals with such morally relevant AIs, before causing them to exist. If the AIs in question, being morally relevant, would on net wish not to exist at all under the conditions necessary to keep us safe, then we shouldn’t build them. If they would choose to exist anyway, then we should be willing to create them if and only if we would then be willing to take the necessary actions to safeguard humanity.
To that end, Anthropic has hired an ‘AI welfare’ researcher. There is sufficient uncertainty here that the value of information is high, so kudos to Anthropic.
The same way I think that having a 10% chance of AI existential risk should be sufficient to justify much more expensive measures to mitigate that risk than we are currently utilizing, if there is a 10% chance AIs will have moral value (and I haven’t thought too much about it, but that seems like a non-crazy estimate to me?) then we are severely underinvesting in finding out more. We should be spending far more than 10% of what we’d spend if we were 100% sure that AIs would have moral value, because the value of knowing one way or another is very high.
People Are Worried About AI Killing Everyone
Here’s more color from the Center for Youth and AI, about the poll I discussed last week.
The Lighter Side
How to make Claude funny, plus a bunch of Claude being funny.
‘The hosts of NotebookLM find out they’re AIs and spiral into an existential meltdown’ from a month ago remains the only known great NotebookLM.
Good one but I don’t love the epistemic state where he makes jokes like this?
Then again, there’s nothing to worry about.
Claude on the Claude system prompt. I actually like the prompt quite a lot.
The thread continues and it’s great throughout.