Competitive Truth-Seeking
In many domains, you can get better primarily by being correct more frequently. If you're managing a team, or trying to improve your personal relationships, it's very effective to improve your median decision. The more often you're right, the better you'll do, so people often implicitly optimize their success rate.

But what about competitive domains, like prediction markets, investing, or hiring? One of the most common mistakes I see people make is trying to apply non-competitive intuitions to competitive dynamics. The winning strategies are very different! It is not enough to be right – others must be wrong. Rather than having a high success rate across all situations, you want to find an edge in some situations, and bet heavily when you find a mistake in the consensus.

For example, say you're a startup trying to hire software engineers, and your process finds two excellent candidates – Alex and Bob. You make both of them offers at market rate for excellent software engineers. Alex knows how to interview well, so every company believes (correctly) that he's excellent, while Bob is a bad interviewer, and most of your competitors think he's merely good. As a result, there might be a 10% chance Alex accepts your offer in this situation, and a 50% chance Bob does. Assuming a similar rate of Alexes and Bobs in the world, this means your process should be optimized almost entirely for finding Bobs – if you miss a few Alexes to find another Bob, that's totally worth it! But from the outside, this will look crazy – you're passing up obviously great candidates in order to specifically find people who aren't obviously great.

Furthermore, this means that copying mainstream interviewing strategies is one of the worst things you can do – if you only get points for beating consensus predictions, then matching them will get you a 0.

The next time you find yourself in one of these situations, try to figure out whether it's competitive or non-competitive, and adjust your strategy accordingly.
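The Alex/Bob arithmetic can be made concrete. A minimal sketch, using the illustrative acceptance rates from the example (10% for consensus-legible Alexes, 50% for overlooked Bobs) and an assumed budget of ten offers either way – these numbers are for illustration, not real hiring data:

```shell
# Expected hires from ten offers, comparing a consensus-chasing process
# (all offers to Alex-types) with a contrarian one (mostly Bob-types).
# Acceptance rates are the illustrative figures from the example above.
expected_hires() {
  # $1 = offers to Alex-types, $2 = offers to Bob-types
  awk -v a="$1" -v b="$2" 'BEGIN { printf "%.1f\n", a * 0.10 + b * 0.50 }'
}

expected_hires 10 0   # consensus process: 1.0 expected hire
expected_hires 2 8    # contrarian process: 4.2 expected hires
```

Even losing most of the Alexes, the process tuned for Bobs comes out far ahead, which is the point: the edge lives where the consensus is wrong.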
I haven't seen many people mention this tmux trick with LLMs: it's easy to write programmatically to another tmux session. So you can spawn a long-running process like a REPL or debugger in a tmux session, use it as a feedback loop for Claude Code, and – if you want to – inspect every command in detail, examine the state of the program, and so on. This works with other bash processes too: anything where you'd like to inspect in detail what the LLM has done.
Using this with a REPL has made a noticeable difference in my productivity.