LESSWRONG
Method Iteration: An LLM Prompting Technique

by Davey Morse
30th Aug 2025
2 min read
Top comment, by The Dao of Bayes:

The simple question is: can this solve easy problems, like getting an AI to correctly count letters, reverse strings, etc.? Does this improve its results on an IQ test like https://trackingai.org/IQ-Test-Viewer?

"Creative" doesn't mean "more accurate", but you can still test that too: is it writing better fiction than you normally get from those models? Do you have a story you'd be confident posting?


TLDR: Method Iteration is an LLM prompting technique that elicits better responses to hard problems.

Some researchers think that for AI to solve truly hard problems, we need bigger models, more data, or new architectures.

I wonder if there's another way. The text you get from an LLM is downstream of the thought process you ask it to run. Induce a better process, and you'll push capability without touching the weights. (There's similar intuition for the effectiveness of CoT in reasoning models in the first place.)

By hard problems, I mean nearly impossible problems, where any progress toward an answer would be significant for society. For example:

  1. What's a plan to significantly reduce global emissions on a $10k budget?
  2. What's a 10-minute plan for an indie developer to build the seed of a general superintelligence?

I've been experimenting with different thought processes, i.e. LLM prompt chains. Some don't move the needle: asking the model to "try again but better," or asking it to brainstorm ten responses, critique them, pick the best, and rerun. These polish; they don't rethink. But I've come across one approach that centers on rethinking, helping LLMs improve the shape of their answers. I'll call it Method Iteration.

Instead of asking directly for an output, you ask for a way of thinking and then improve that. 

The Method Iteration loop:

  1. Generate a method. The model states how it will tackle the question—its reasoning procedure.
  2. Generate an output (using the method). It follows that procedure to produce a plan/answer.
  3. Critique the output. Where it falls short, what’s missing, what’s incoherent.
  4. Critique the method. How the procedure itself produced those failures; propose a better procedure.

Then repeat.
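The loop above can be sketched as a small prompt chain. This is a minimal illustration, not the author's exact prompts: `llm` stands in for any callable that maps a prompt string to a reply (e.g. a wrapper around your favorite chat API), and the prompt wording is assumed for illustration.

```python
def method_iteration(question, llm, n_loops=3):
    """Run the Method Iteration loop: generate a method, apply it,
    critique the output, then critique and revise the method itself.

    `llm` is any callable mapping a prompt string to a reply string
    (a hypothetical stand-in for a real chat-model call).
    """
    # 1. Generate a method: the model's stated reasoning procedure.
    method = llm(f"State a step-by-step method for tackling:\n{question}")
    output = ""
    for _ in range(n_loops):
        # 2. Generate an output by following the current method.
        output = llm(f"Question: {question}\nFollow this method exactly:\n{method}")
        # 3. Critique the output: where it falls short, what's missing.
        output_critique = llm(f"Critique this answer to '{question}':\n{output}")
        # 4. Critique the method: diagnose how the procedure produced
        #    those failures, and propose an improved procedure.
        method = llm(
            f"Method:\n{method}\n\nIts answer drew this critique:\n{output_critique}\n\n"
            "Diagnose how the method produced these failures, "
            "then write an improved method."
        )
    return method, output
```

Note that each loop costs three LLM calls (plus one up-front), and the improved method, not just the latest answer, is what carries over between iterations.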

Here are links to two conversations with GPT-5, where I take it through a few rounds of method iteration concerning building superintelligence and reducing climate change. I sense the responses improve each loop.

My guess at why method iteration works: a one-shot answer is a sample from a huge space with almost no structure. A method is a policy over thinking. Improving the policy compounds. Each loop doesn’t just edit the plan; it upgrades the generator of plans. You turn undirected sampling into directed search over procedures.

What happens in practice: even 5-10 manual loops shift the model into the territory of what strikes me as actual creativity—thoroughly digesting the problem, decomposing it, and then producing plans that actually match the ambition and scale of the prompts. In my experiments, it makes LLMs smarter. I'm curious how it works for others.

So, if you have a hard problem that both you and one-shot LLM prompts have struggled with, try applying method iteration. I'll be curious whether it works for you.

Also, I'm curious to run this at scale. If a handful of loops makes a visible difference, what happens with orders of magnitude more? I’ve run dozens of method-iteration loops (four calls per loop), but not yet hundreds or thousands. As LLM calls are cheap and getting cheaper, I'm curious how far method iteration can go. Where does improvement saturate? Where does it break?

If you're experimenting with method iteration or any other repeatable LLM prompting techniques, please comment.