Ajeya Cotra

Comments

Christian homeschoolers in the year 3000
Ajeya Cotra · 5d

I agree with this particular reason to worry that we can't agree on a meta-philosophy, but separately think that there might not actually be a good meta-philosophy to find, especially if you're going for greater certainty/clarity than mathematical reasoning!

ryan_greenblatt's Shortform
Ajeya Cotra · 4mo

I agree that robust self-verification and sample efficiency are the main things AIs are worse at than humans, and that this is basically just a quantitative difference. But what's the best evidence that RL methods are getting more sample efficient (separate from AIs getting better at recognizing their own mistakes)? That's not obvious to me but I'm not really read up on the literature. Is there a benchmark suite you think best illustrates that?

How Much Are LLMs Actually Boosting Real-World Programmer Productivity?
Ajeya Cotra · 7mo

Yeah, I've cataloged some of that here: https://x.com/ajeya_cotra/status/1894821255804788876. Hoping to do something more systematic soon.

AI Timelines
Ajeya Cotra · 9mo

To put it another way: we probably both agree that if we had gotten AI personal assistants that shop for you and book meetings for you in 2024, that would have been at least some evidence for shorter timelines. So their absence is at least some evidence for longer timelines. The question is what your underlying causal model was: did you think that if we were going to get superintelligence by 2027, then we really should see personal assistants in 2024? A lot of people strongly believe that, you (Daniel) hardly believe it at all, and I'm somewhere in the middle.

If we had gotten both the personal assistants I was expecting, and the 2x faster benchmark progress than I was expecting, my timelines would be the same as yours are now.

AI Timelines
Ajeya Cotra · 9mo

I'm not talking about narrowly your claim; I just think this very fundamentally confuses most people's basic models of the world. People expect, from their unspoken models of "how technological products improve," that long before you get a mind-bendingly powerful product so good it can easily kill you, you get something that's at least a little useful to you (and then something that's a little more useful, and then something that's really useful, and so on). And in fact that is roughly how it's working — for programmers, but not for a lot of other people.

Because I've engaged so much with the conceptual case for an intelligence explosion (i.e. the case that this intuitive model of technology might be wrong), I roughly buy it even though I am getting almost no use out of AIs still. But I have a huge amount of personal sympathy for people who feel really gaslit by it all.

AI Timelines
Ajeya Cotra · 9mo

Yeah, to be clear, I'm at even less than 1-2 decades added — more like 1-5 years.

AI Timelines
Ajeya Cotra · 9mo

Interestingly, tons of skeptics I've talked to (e.g. Tim Lee, CSET people, AI Snake Oil) have told me that timelines to actual impacts in the world (such as significant R&D acceleration or industrial acceleration) are going to be way longer than we say, because AIs are too unreliable and risky and therefore people won't use them. I was more dismissive of this argument before, but:

  • It matches my own lived experience (e.g. I still use search way more than LLMs, even to learn about complex topics, because I have good Google Fu and LLMs make stuff up too much).
  • As you say, it seems like a plausible explanation for why my weird friends make way more use out of coding agents than giant AI companies.
AI Timelines
Ajeya Cotra · 9mo

Yeah, good point, I've been surprised by how uninterested the companies have been in agents.

AI Timelines
Ajeya Cotra · 9mo

One thing that I think is interesting, which doesn't affect my timelines that much but cuts in the direction of slower: once again I overestimated how much real-world use anyone who wasn't a programmer would get. I definitely expected an off-the-shelf agent product that would book flights and reserve restaurants and shop for simple goods, one that worked well enough that I would actually use it (and I expected that to happen before one-hour-plus coding tasks were solved; I expected it to be concurrent with half-hour coding tasks).

I can't tell if the fact that AI agents continue to be useless to me is a portent that the incredible benchmark performance won't translate as well as the bullish people expect to real world acceleration; I'm largely deferring to the consensus in my local social circle that it's not a big deal. My personal intuitions are somewhat closer to what Steve Newman describes in this comment thread.

It seems like anecdotally folks are getting like +5%-30% productivity boost from using AI; it does feel somewhat aggressive for that to go to 10x productivity boost within a couple years.

ryan_greenblatt's Shortform
Ajeya Cotra · 9mo

My timelines are now roughly similar on the object level (maybe a year slower for 25th and 1-2 years slower for 50th), and procedurally I also now defer a lot to Redwood and METR engineers. More discussion here: https://www.lesswrong.com/posts/K2D45BNxnZjdpSX2j/ai-timelines?commentId=hnrfbFCP7Hu6N6Lsp

Posts

  • Survey on the acceleration risks of our new RFPs to study LLM capabilities (2y)
  • AI Timelines (2y)
  • New roles on my team: come build Open Phil's technical AI safety program with me! (2y)
  • New blog: Planned Obsolescence (2y)
  • Two-year update on my personal AI timelines (3y)
  • Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover (3y)
  • ARC's first technical report: Eliciting Latent Knowledge (4y)
  • More Christiano, Cotra, and Yudkowsky on AI progress (4y)
  • Christiano, Cotra, and Yudkowsky on AI progress (4y)
  • Techniques for enhancing human feedback (4y)