LESSWRONG

O O

swe, speculative investor

Comments
My AI Predictions for 2027
O O · 10d · 30

AI progress can be rapid, but the pathway to it may involve different capability unlocks. For example, you might automate work more broadly and then reinvest the gains into more compute (or automate chipmaking itself). Or you might get the same unlocks without rapid progress: for example, you get a superhuman coder but run into different bottlenecks.

I think it's pretty obvious AI progress won't completely stall out, so I don't think that's the prediction you're making? It's one thing to say AI progress won't be rapid and then give a specific story as to why. Later, if you hit most of your marks, it'll look like a much more valuable prediction than simply saying it won't be rapid. (The same applies to AI 2027.)

The authors of AI 2027 wrote a pretty specific story before the release of ChatGPT and looked really prescient after the fact, since it turned out to be mostly accurate.

My AI Predictions for 2027
O O · 13d · 30

I just don't think there is much to this prediction. 

It takes a set of specific predictions, says none of them will happen, and by the nature of a conjunctive prediction, most indeed will not happen. It would be more interesting to hear how AI will and will not progress, rather than just denying a prediction that was already unlikely to be perfect.

Inevitably they'll be wrong on some of these, but on the surface they'll look more right, because they will be right on most of them.

My AI Predictions for 2027
O O · 16d · 20

It seems like basically everything in this is already true today. Not sure what you’re predicting here.

AGI: Probably Not 2027
O O · 1mo · 10

The author also seems not to realize that OpenAI's costs are mostly unrelated to its inference costs.

METR Research Update: Algorithmic vs. Holistic Evaluation
O O · 1mo · 50

I think the extra effort required to go from algorithmically passing to holistically passing scales linearly with task difficulty. Dense reward-model scaling on hard-to-verify tasks seems to have cracked this. DeepMind's polished, holistically passing IMO solutions probably required the same order of magnitude of compute/effort as OpenAI's technically correct but less polished IMO solutions. (They used similar levels of models, compute, and time to get their respective results.)

So while this gap will shift timelines, it is something that will fall to scale, and thus it shouldn't shift them too much.

I predict this gap will go away once these methods make their way into commercial models, in roughly one year. I'll check back in 2026 to see if I'm wrong.

O O's Shortform
O O · 1mo · 58

I think AI doomers as a whole lose some amount of credibility if timelines end up being longer than they project. Even if doomers technically hedge a lot, the most attention-grabbing part to outsiders is the short timelines plus the intentionally alarmist narrative, so they're ultimately associated with them.

tdko's Shortform
O O · 1mo · 61

It seems Gemini was ahead of OpenAI on the IMO gold. Its output was more polished, so presumably they achieved a gold-worthy model earlier. I thus expect Gemini's SWE-bench score to be at least ahead of OpenAI's 75%.

Anthropic Faces Potentially “Business-Ending” Copyright Lawsuit
O O · 2mo* · 130

Afaict this case has been generally good for the industry but especially bad for Anthropic.

Edit: overall win, you can use books in training. You just can't use pirated books.

Cole Wyeth's Shortform
O O2mo24

Progress-wise this seems accurate, but the usefulness gap is probably larger than the one this paints.

Is the political right becoming actively, explicitly antisemitic?
Answer by O O · Jul 15, 2025 · -1-1

The right has always been vaguely antisemitic. What's new is that the left is now also vaguely antisemitic, making it more normalized overall.

Posts

5 · If the DoJ goes through with the Google breakup, where does Deepmind end up? [Q] · 1y · 1
26 · Thoughts on Francois Chollet's belief that LLMs are far away from AGI? [Q] · 1y · 17
5 · What happens to existing life sentences under LEV? [Q] · 1y · 7
14 · Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) · 1y · 15
27 · Supposing the 1bit LLM paper pans out [Q] · 2y · 11
13 · OpenAI wants to raise 5-7 trillion · 2y · 29
1 · O O's Shortform · 2y · 124