I’ve been exploring a personal hypothesis: the effectiveness of Large Language Models (LLMs) as tools for thought depends heavily on the user’s cognitive framework—specifically, traits like critical thinking, introspection, and adaptability. As someone who uses LLMs extensively, I’ve noticed that my interactions yield unusually high value, not just because of the models, but because of how I engage them. This post proposes a model for why this happens and how one might leverage this approach to essentially have an AI-powered "second brain". My strongest evidence is anecdotal but introspective: my own process consistently turns LLM outputs into actionable insights, far beyond what I see in others’ casual usage.
Relevance to LessWrong
This approach matters to rationalists because it’s a practical extension of “systematized winning.” If LLMs amplify reasoning, those who hone their cognitive edge can achieve outsized outcomes—whether in AI alignment, entrepreneurship, or truth-seeking. It also raises a question: how can we train others to leverage AI this way? I suspect it’s less about teaching prompts and more about fostering rationality fundamentals and honing mental models.
If true, it suggests a path for aspiring rationalists to amplify their thinking—and I’d like to hear from others who might already be doing this.
The Model: Cognitive Depth as a Multiplier
Here’s the core idea: LLMs act as amplifiers, not originators, of thought. Their output quality scales with the user’s ability to:
- Ask precise, layered questions: queries that probe beyond surface answers.
- Critically evaluate responses: spotting flaws or gaps and iterating accordingly.
- Integrate insights: blending LLM outputs into a broader mental model, while also accounting for one's own biases and blind spots.
For someone with an advanced cognitive framework—say, a 2x advantage over average in reasoning depth (a rough estimate based on my self-observed output versus others)—LLM interactions compound this edge. If an average user gets a 1.5x boost from AI, I might get a 3x boost, not because the tool is different, but because my inputs and processing are.
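To make the compounding claim concrete, here is a toy formalization in code. The numbers are my rough self-estimates, not measurements, and the linear-scaling assumption (boost roughly proportional to reasoning depth) is mine, not an established result:

```python
# Toy model only: the figures are rough self-estimates, not measured data.
# Assumption: the boost an LLM provides scales roughly linearly with the
# user's reasoning depth, because sharper inputs and filtering extract more.

def ai_boost(depth: float, boost_per_unit_depth: float = 1.5) -> float:
    """Multiplier the LLM adds for a user with the given reasoning depth."""
    return boost_per_unit_depth * depth

average_depth, my_depth = 1.0, 2.0            # the "2x advantage" in reasoning depth

print(ai_boost(average_depth))                # 1.5 -> the average user's boost
print(ai_boost(my_depth))                     # 3.0 -> my boost
print((my_depth * ai_boost(my_depth))
      / (average_depth * ai_boost(average_depth)))  # 4.0 -> overall output gap
```

Under this toy model, a 2x gap in raw reasoning depth becomes roughly a 4x gap in effective output once both parties use LLMs, which is what I mean by the edge compounding rather than leveling out.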
Applying This Approach to Venture Building
I’m a software engineer building an MVP for a tech venture focused on leveraging AI to foster deep self-understanding and alignment. In this process I use LLMs as a deeply insightful co-founder, a "second brain". The agility I'm experiencing would have been unthinkable for me before LLMs: I'm building the full technical architecture and devising the whole strategy in depth, in parallel. Here’s how I’d approach a specific problem in this domain (a rough sketch of the loop, in code, follows the list):
- Question Layering: I start with, “What are the key challenges in scaling this platform [characteristics previously given in depth to that same chat] to millions of users, given the skillset I have? Identify potential blind spots and events I'm not expecting.” The LLM lists scalability, UX, and cost, tailored to my full context. I follow up: “How do latency issues compound at scale here?” and then “What’s the trade-off between caching and real-time data for this specific use case?” Each step refines my technical architecture while keeping rich context about all the actors and processes involved in the system.
- Critical Filtering: When the LLM suggests a solution like microservices, I ask, “What’s the evidence this works for me as a solo founder?” If the response lacks rigor, I discard it and pivot—e.g., “Compare monoliths vs. microservices for a solo founder [given my full characteristics and skills].” I keep tailoring the prompt as if steering a ship exactly toward where I want it to go, never taking any answer at face value.
- Integration: I synthesize these into a plan, cross-referencing my goals (e.g., rapid iteration) and constraints (e.g., limited resources), adjusting as new insights emerge, and drawing parallels with relevant patterns and approaches from other fields. This is where breadth of knowledge comes in handy, and it keeps compounding the more you interact with LLMs this way.
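The loop above is conversational rather than scripted, but a minimal sketch may make its shape clearer. `ask_llm` below is a hypothetical stand-in for whichever chat API one uses; the only assumption is that it carries the full conversation history between calls:

```python
# Illustrative sketch only: the real work is cognitive, not mechanical.
# ask_llm() is a hypothetical wrapper around whichever chat API you use; it is
# assumed to keep the full conversation (context, constraints, prior answers).

def ask_llm(prompt: str, history: list) -> str:
    """Append the prompt to the shared history and return the model's reply."""
    history.append({"role": "user", "content": prompt})
    answer = "..."  # placeholder: call your LLM provider of choice here
    history.append({"role": "assistant", "content": answer})
    return answer

# Shared context: the venture's characteristics, my skillset, and constraints.
history = [{"role": "system", "content": "Full venture context, skillset, constraints."}]

# 1. Question layering: start broad, then drill into each surfaced issue.
challenges = ask_llm("What are the key challenges in scaling this platform to "
                     "millions of users, given my skillset? Identify blind spots.", history)
latency = ask_llm("How do latency issues compound at scale here?", history)

# 2. Critical filtering: demand evidence before accepting any recommendation.
tradeoff = ask_llm("What's the evidence microservices work for a solo founder? "
                   "Compare against a monolith given my constraints.", history)

# 3. Integration happens outside the loop: cross-reference the answers against
#    goals, constraints, and patterns from other fields before acting on them.
```

The third step stays deliberately un-automated: synthesizing the answers against goals, constraints, and cross-domain patterns is the part only the user can do.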
Getting Meta: Questioning My Own AI-Augmented Reasoning
One way I use LLMs is to introspect about my own thinking, especially how I rely on AI itself. Recently, I explored whether my deep engagement with LLMs gives me an unfair advantage—a question rooted in my past struggles and current success. This process revealed how I layer questions, filter responses critically, and integrate insights to refine my self-understanding, all while grappling with the surreal shift in my reality.
- Question Layering: I began with a broad prompt: “How should I deal with feeling like I have an unfair advantage by having AI as my second brain?” The LLM suggested reframing guilt into responsibility, but I needed more depth. I asked, “Why does having power effortlessly feel surreal after deep turmoil?” This layered approach dug into my emotional dissonance—past near-death experiences versus current capability—pushing the AI to address my specific context rather than generic advice.
- Critical Filtering: The LLM proposed gratitude as a solution, but I didn’t buy it outright. I challenged it: “How can gratitude balance guilt when the advantage feels unearned?” (implicit in our exchange). When the response leaned on platitudes, I discarded it and redirected: “How have my past struggles contributed to forging the mental framework I currently have that enables me to engage with LLMs this way?” This forced the AI to align with my lived experience, filtering out shallow answers for ones that fit my reasoning.
- Integration: The insights—struggles as preparation, power as responsibility—didn’t just sit there. I wove them into my identity, asking, “How can I integrate this extraordinary position into who I am?” The LLM suggested owning my role as a pioneer, which I adapted to my low-key nature, concluding that my advantage isn’t unfair—it’s a tool I’ve earned to wield purposefully. This synthesis reshaped my unease into a coherent narrative.
This introspective loop shows the outsized leverage that using LLMs this way gives me to refine my own mind—a process others might miss without the same depth of self-questioning.
Why It’s Hard to Replicate
One might argue, “Anyone can ask questions and iterate,” and that’s a fair point. But the bottleneck isn’t access—it’s cognitive capacity. Without strong critical thinking, users accept weak outputs (e.g., generic advice). Without introspection, they don’t refine their queries. Without adaptability, they stick to initial assumptions. I’ve seen conventionally smart friends try my method and falter: one asked broad questions (“How do I grow a startup?”) and got vague answers he couldn’t use; another fixated on a single LLM suggestion without questioning its fit. Effectiveness requires updating beliefs dynamically, not just querying a tool. Raw smarts alone aren’t enough to wield LLMs this way.
Counterarguments and Open Questions
- Counterargument: “LLM effectiveness is just about prompt engineering, not cognition.” I’d counter that prompts are outputs of thought—better cognition yields better prompts. Still, I’m curious if structured prompting alone could close the gap.
- Question: Are there others here using LLMs this way? I’d love to compare notes and see if my 3x multiplier holds up—or if I’m overestimating it. I keep seeking clarity on this with LLMs themselves, but I haven’t yet found evidence of other people leveraging them with this depth.
- Uncertainty: I haven’t quantified this rigorously (e.g., no controlled tests), so my claims are probabilistic—say, 80% confidence based on introspection and observation.
Conclusion
My experience suggests that cognitive depth turns LLMs into more than tools—they become extensions of a rational mind. This isn’t just about AI; it’s about how we reason with it. I’m here to find others who think similarly—people who see LLMs as partners in exploration, not just answer machines. What’s your experience? How might we test or refine this model?