Edouard Harris

Co-founder @ Gladstone AI.

Contact: edouard@gladstone.ai 

Website: eharr.is

Sequences

Experiments in instrumental convergence

Comments
A case for courage, when speaking of AI danger
Edouard Harris · 2mo

Credit where credit is due, incidentally: the biggest single inflection point for this phenomenon was clearly Situational Awareness. Almost zero reporting in the mainstream news; yet by the end of 2024, everyone in the relevant spaces had read & absorbed it.

A case for courage, when speaking of AI danger
Edouard Harris · 2mo

Correct & timely. There do exist margins where honesty and effectiveness trade off against each other, but today - in 2025, that is - this is no longer one of them. Your SB 1047 friend is quite right to suggest that things were different in 2023, though. The amount of resistance we got behind the scenes in trying to get words like "extinction" and even "AGI" (!!) published in our report (which was mostly written in 2023) was something to behold: back then you could only push the envelope so far before it became counterproductive. No longer. The best metaphor I've seen for what's happening right now in government circles is a kettle of boiling water: pockets of people who get it are coalescing and finding each other; entire offices are now AGI-pilled, where in 2023 you'd be lucky to find a single individual among a phalanx of skeptics. 

"The time is ripe" indeed.

Policymakers don't have access to paywalled articles
Edouard Harris · 8mo

Yeah, that could be doable. Dylan's pretty natsec-focused already, so I would guess he'd take a broad view of the ROI from something like this. From what I hear he is already in touch with some of the folks who are in the mix, which helps, but the core goal is to get random leaf-node action officers this access with minimum friction. I think an unconditional discount to all federal employees probably does pass muster with the regs, though of course folks would still be paying something out of pocket. I'll bring this up to SA next time we talk to them, though; it might move the needle. For all I know, they might even be doing it already.

Policymakers don't have access to paywalled articles
Edouard Harris · 8mo

Because of another stupid thing, which is that U.S. depts & agencies have strong internal regs against employees soliciting and/or accepting gifts other than in carefully carved-out exceptional cases. For more on this see, e.g., 5 CFR § 2635.204, but this isn't the only such reg. In practice, for example, U.S. government employees at all levels are broadly prohibited from accepting any gift with a market value above 20 USD. (As you'd expect, this leads to a lot of weird outcomes, including occasional hilarious minor diplomatic incidents with inexperienced foreign counterparties who have different gift-giving norms.)

Policymakers don't have access to paywalled articles
Edouard Harris · 8mo

Yep, can confirm this is true. And this often leads to shockingly stupid outcomes, such as key action officers at the Office of [redacted] in the Department of [redacted] not reading SemiAnalysis because they'd have to pay for their subscriptions out of pocket.

What’s the short timeline plan?
Edouard Harris · 8mo

This is a great & timely post.

On the Gladstone Report
Edouard Harris · 1y

Thanks very much for writing this. We appreciate all the feedback across the board, and I think this is a well-done and in-depth write-up.

On the specific numerical thresholds in the report (i.e., your Key Proposal section), I do need to make one correction that also applies to most of Brooks's commentary. All the numerical thresholds mentioned in the report, and particularly in that subsection, are solely examples and not actual recommendations. They are there only to show how one can calculate self-consistent licensing thresholds under the principles we recommend. They are not themselves recommendations. We had to do it this way for the same reason we propose granting fairly broad rule-setting flexibility to the regulatory entity: the field is changing so quickly that any concrete threshold risks being out of date (for one reason or another) in very short order. We would have liked to do otherwise, but that is not a realistic expectation for a report that we expect to be digested over the course of several months.

To avoid precisely this misunderstanding, the report states in several places that those very numbers are, in fact, only examples for illustration. A few screencaps of those disclaimers are below, but there are several others. Of course we could have included even more, but beyond a certain point one is simply adding more length to what you correctly point out is already quite a sizeable document. Note that the Time article, in the excerpt you quoted, does correctly note and acknowledge that the Tier 3 AIMD threshold is there as an example (emphasis added):

the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini.

Apart from this, I do think overall you've done a good and accurate job of summarizing the document and offering sensible and welcome views, emphasis, and pushback. It's certainly a long report, so this is a service to anyone who's looking to go one or two levels deeper than the Executive Summary. We do appreciate you giving it a look and writing it up.

Announcing Epoch’s dashboard of key trends and figures in Machine Learning
Edouard Harris · 2y

Gotcha, that makes sense!

Announcing Epoch’s dashboard of key trends and figures in Machine Learning
Edouard Harris · 2y

Looks awesome! Minor correction on the cost of the GPT-4 training run: the website says $40 million, but sama confirmed publicly that it was over $100M (and several news outlets have reported the latter number as well).

Inverse scaling can become U-shaped
Edouard Harris · 3y

Done, a few days ago. Sorry, I thought I'd responded to this comment.

Posts

Inverse scaling can become U-shaped (Ω) · 3y · 27 karma · 15 comments
POWERplay: An open-source toolchain to study AI power-seeking (Ω) · 3y · 29 karma · 0 comments
Instrumental convergence: scale and physical interactions (Ω) · 3y · 22 karma · 0 comments
Misalignment-by-default in multi-agent systems (Ω) · 3y · 21 karma · 8 comments
Instrumental convergence in single-agent systems (Ω) · 3y · 33 karma · 4 comments
AI Tracker: monitoring current and near-future risks from superscale models (Ω) · 4y · 67 karma · 13 comments
AI takeoff story: a continuation of progress by other means (Ω) · 4y · 76 karma · 13 comments
Defining capability and alignment in gradient descent (Ω) · 5y · 22 karma · 6 comments