Some people seem to think my timelines have shifted a bunch, when they've actually only changed moderately.
Relative to my views at the start of 2025, my median (50th percentile) for AIs fully automating AI R&D was pushed back by around 2 years—from something like Jan 2032 to Jan 2034. My 25th percentile has shifted similarly (though perhaps more importantly) from maybe July 2028 to July 2030. Obviously, my numbers aren't fully precise and vary some over time. (E.g., I'm not sure I would have quoted these exact numbers for this exact milestone at the start of the year; these numbers for the start of the year are partially reverse engineered from this comment.)
Fully automating AI R&D is a pretty high milestone; my current numbers for something like "AIs accelerate AI R&D as much as what would happen if employees ran 10x faster (e.g. by ~fully automating research engineering and some other tasks)" are probably 50th percentile Jan 2032 and 25th percentile Jan 2029.[1]
I'm partially posting this so there is a record of my views; I think it's somewhat interesting to observe this over time. (That said, I don't want to anchor myself, which does seem like a serious downside. I should slide around a bunch and be somewhat incoherent if I'm updating as much as I should: my past views are always going to be somewhat obviously confused from the perspective of my current self.)
While I'm giving these numbers, note that I think Precise AGI timelines don't matter that much.
See this comment for the numbers I would have given for this milestone at the start of the year. ↩︎
I've updated towards somewhat longer timelines again over the last 5 months. Maybe my 50th percentile for this milestone is now Jan 2032.
Mostly some AI company employees with shorter timelines than me. I also think that "why I don't agree with X" is a good prompt for expressing some deeper aspects of my models/views. It also makes for a reasonably engaging hook for a blog post.
I might write some posts responding to arguments for longer timelines that I disagree with if I feel like I have something interesting to say.
This is only somewhat related to what you were saying, but I do think the difference between 100-year medians and 10-year medians matters a bunch.
The Time article is materially wrong about a bunch of stuff
Agreed, which is why I noted this in my comment.[1] I think it's a bad sign that Anthropic seemingly actively sought out an article that ended up being wrong/misleading in a way that was convenient for Anthropic at the time and then didn't correct it.
I really don't want to get into pedantic details, but there's no "supposed to" time for LTBT board appointments; I think you're counting from the first day they were legally able to appoint someone. Also https://www.anthropic.com/company lists five board members out of five seats, and four Trustees out of a maximum five. IMO it's fine to take a few months to make sure you've found the right person!
First, I agree that there isn't a "supposed to" time; my wording here was sloppy, sorry about that.
My understanding was that there was a long delay (e.g. much longer than a few months) between the LTBT being able to appoint a board member and actually appointing one, and a long period during which the LTBT had only 3 members. I think a delay this long is somewhat concerning.
My understanding is that the LTBT could still fill one more seat (so that it determines a majority of the board). (Or maybe appoint 2 additional seats?) And that it has been able to do this for almost a year at this point. Maybe the LTBT thinks the current board composition is good such that appointments aren't needed, but the lack of any external AI safety expertise on the board or LTBT concerns me...
More broadly, the corporate governance discussions (not just about Anthropic) I see on LessWrong and in the EA community are very deeply frustrating, because almost nobody seems to understand how these structures normally function, why they're designed that way, or the failure modes that occur in practice. Personally, I spent about a decade serving on nonprofit boards and on oversight committees which appointed nonprofit boards, and I set up the governance for a for-profit company I founded.
I certainly don't have particular expertise in corporate governance, and I'd be interested in whether corporate governance experts who are unconflicted and very familiar with the AI situation think that the LTBT has the de facto power needed to govern the company through transformative AI. (And whether the public evidence should make me much less concerned about the LTBT than I would be about the OpenAI board.)
My view is that the normal functioning of a structure like the LTBT or a board would be dramatically insufficient for governing transformative AI (boards normally have a much weaker function in practice than the ostensible purposes of the LTBT and the Anthropic board), so I'm not very satisfied by "the LTBT is behaving how a body of this sort would/should normally behave".
I said something weaker: "For what it's worth, I think the implication of the article is wrong and the LTBT actually has very strong de jure power", because I didn't see anything that is literally false as stated, as opposed to merely misleading. But you'd know better. ↩︎
People seem to be reacting to this as though it is bad news. Why? I'd guess the net harm caused by these investments is negligible, and this seems like a reasonable earning-to-give strategy.
Mostly agentic software engineering; I don't think Genie matters.
Probably it would be more accurate to say "doesn't seem to help much, while it helps a lot for OpenAI models".