I’ve been exploring a personal hypothesis: the effectiveness of Large Language Models (LLMs) as tools for thought depends heavily on the user’s cognitive framework, specifically traits like critical thinking, introspection, and adaptability. As someone who uses LLMs extensively, I’ve noticed that my interactions yield unusually high value, not just because of the models but because of how I engage with them. This post proposes a model for why this happens and how one might apply the same approach to build an AI-powered "second brain". My strongest evidence is anecdotal but introspective: my own process consistently turns LLM outputs into actionable insights, far beyond what I see in others’ casual usage.
Relevance to LessWrong
This approach matters to rationalists because it’s a practical extension of “systematized winning.” If LLMs amplify reasoning, those...
Your bear case is cogently argued, yet I find it too tethered to a narrow view of LLMs as static tools bound by pretraining limits and jagged competencies.
The evidence suggests broader potential. LLMs already power real-world leaps, from biotech breakthroughs (e.g., Evo 2’s protein design) to multi-domain problem-solving in software and strategy, outpacing human baselines in constrained but scalable tasks. Your dismissal of test-time compute and CoT scaling overlooks how these amplify cross-domain reasoning, not just in-distribution wins.