A Bear Case: My Predictions Regarding AI Progress
Filipe Aleixo · 6mo · 40

Your bear case is cogently argued, yet I find it far too tethered to a narrow view of LLMs as static tools, limited by their pretraining and saddled with jagged competencies.

The evidence suggests broader potential. LLMs already power real-world leaps, from biotech breakthroughs (e.g., Evo 2’s protein design) to multi-domain problem-solving in software and strategy, outpacing human baselines in constrained but scalable tasks. Your dismissal of test-time compute and CoT scaling overlooks how these amplify cross-domain reasoning, not just in-distribution wins. 
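
To make "test-time compute" concrete, here is a minimal sketch of self-consistency sampling, one of the simplest ways to spend extra inference compute on a problem. `ask_model` is a hypothetical stand-in for whatever chat API you use, not any particular vendor's SDK.

```python
from collections import Counter

def ask_model(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical stand-in for a chat-completion call to any LLM API."""
    raise NotImplementedError("wire this up to your provider of choice")

def self_consistency(question: str, n_samples: int = 16) -> str:
    """Spend extra test-time compute: sample several chain-of-thought
    answers and return the most common final answer (majority vote)."""
    finals = []
    for _ in range(n_samples):
        reply = ask_model(
            f"{question}\n\nThink step by step, then give your final "
            f"answer on the last line prefixed with 'ANSWER:'."
        )
        # Keep only the final answer line; the reasoning trace is discarded.
        for line in reversed(reply.splitlines()):
            if line.strip().startswith("ANSWER:"):
                finals.append(line.strip().removeprefix("ANSWER:").strip())
                break
    # Majority vote across the sampled chains of thought.
    return Counter(finals).most_common(1)[0][0] if finals else ""
```

More samples at inference time buy you accuracy without touching the weights, which is the basic reason I don't think pretraining limits are the whole story.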

Regarding programming, your current view also risks badly underestimating the potential of these models. See the YC video linked below, where they mention that a quarter of the founders claim 95% of their codebase is already written by LLMs. This matches my experience as a software engineer: you need to know how to steer the ship, but if you do, this tech comfortably makes you 10x. (A rough sketch of what I mean by "steering" follows the link.)

https://www.youtube.com/watch?v=IACHfKmZMr8
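
By "steering" I mean something like the loop below: the human writes the spec and the tests, the model drafts, the test suite gatekeeps, and failures go back into the prompt. `draft_code` is again a hypothetical stand-in for a model call, and the sketch assumes pytest is installed.

```python
import subprocess
import tempfile
from pathlib import Path

def draft_code(spec: str, feedback: str = "") -> str:
    """Hypothetical stand-in for an LLM call that returns a Python module."""
    raise NotImplementedError("plug in your model here")

def steer(spec: str, test_file: str, max_rounds: int = 5) -> str | None:
    """Iterate: the model drafts code, your test suite judges it, and the
    failure log is fed back as context for the next draft."""
    feedback = ""
    for _ in range(max_rounds):
        code = draft_code(spec, feedback)
        workdir = Path(tempfile.mkdtemp())
        (workdir / "solution.py").write_text(code)
        (workdir / "test_solution.py").write_text(Path(test_file).read_text())
        result = subprocess.run(
            ["python", "-m", "pytest", "-q"],
            cwd=workdir, capture_output=True, text=True,
        )
        if result.returncode == 0:
            return code           # tests pass; accept this draft
        feedback = result.stdout  # otherwise loop, carrying the failure log
    return None                   # didn't converge; the human takes over
```

The leverage comes from the human owning the spec and the acceptance criteria, not from the model being flawless on the first try.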

I’m also skeptical of your claim that agency stalls as task complexity grows; current models already orchestrate complex workflows (e.g., agentic systems in logistics) with growing adeptness. Are you underweighting these strides because they don’t fit a clean AGI narrative, or do you see a ceiling I’m missing?
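
The pattern I have in mind is the standard plan-act-observe loop sketched below; `next_action` and the toy logistics tools are made up for illustration, but current models sustain surprisingly long chains of this before losing the thread.

```python
def next_action(goal: str, history: list[dict]) -> dict:
    """Hypothetical LLM call returning {'tool': ..., 'args': {...}} or
    {'tool': 'finish', 'args': {'answer': ...}}."""
    raise NotImplementedError("plug in your model here")

def run_agent(goal: str, tools: dict, max_steps: int = 20) -> str:
    """Minimal plan-act-observe loop: the model picks a tool, the harness
    executes it, and the observation is appended to the transcript."""
    history: list[dict] = []
    for _ in range(max_steps):
        action = next_action(goal, history)
        if action["tool"] == "finish":
            return action["args"]["answer"]
        observation = tools[action["tool"]](**action["args"])
        history.append({"action": action, "observation": observation})
    return "step budget exhausted"

# Toy "logistics" tools the loop could orchestrate.
tools = {
    "check_inventory": lambda sku: {"sku": sku, "on_hand": 42},
    "schedule_shipment": lambda sku, qty: f"shipment of {qty}x {sku} booked",
}
```

The question for your bear case is where exactly this loop breaks down, and whether that break point is moving.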

What’s your take on LLMs bridging inferential gaps across domains, say from code to ethics, where human steering already yields outsized returns?
