I write specialized data structure software for bioinformatics. I use AI to help with this on a daily basis, and find that it speeds up my coding quite a bit. But it's not the 10x efficiency boost that some people report. I've been wondering why that is. It could of course just be a skill issue on my part, but I think there is a deeper explanation, which I want to try to articulate here.
In heavily AI-assisted programming, most of the time is spent trying to make the AI understand what you want, so that it can write a good approximation of it. For some people, most programming work has shifted from writing code to writing requirement documents for the AI and watching over it as it executes. In this mode of work, we don't write solutions; we describe problems, and the limiting factor is how fast we can specify.
I want to extend this idea one step deeper. I think that the bottleneck is actually in synchronizing the internal state of my mind with the internal state of the LLM. Let me explain.
The problem is that there is a very large context in my brain that dictates how the code should be written. Communicating this context to the AI through language is a lot of work. People are creating elaborate setups for Claude Code to get it to understand their preferences. But the thing is, my desires and preferences are mostly not stored in natural-language form in my brain. They are stored in some kind of native neuralese of my own mind. I cannot articulate my preferences completely and clearly. Sometimes I'm not even aware of a preference until I see it violated.
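As a concrete illustration, here is a minimal sketch of what such a setup might look like. Claude Code reads a CLAUDE.md file at the project root; the rules below are hypothetical examples of the kind of preferences one ends up writing down (the module and function names are made up for illustration, not from any real project):

```markdown
# CLAUDE.md — project preferences (illustrative sketch, not a real project)

## Style
- Prefer iterators over index loops; avoid allocations in hot paths.
- No unsafe code outside the (hypothetical) `bitpacking` module.

## Data structures
- Succinct structures must document their space bound in the docs.
- Re-run benchmarks for any change touching `rank`/`select` queries.

## Testing
- Every public function gets a property test against a naive reference
  implementation.
```

And yet, however many pages of rules like these I write, the file never contains the preference I only discover when I see the AI violate it.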
The hard part is transferring the high-dimensional, nuanced context in my head into the high-dimensional state of the LLM. But these two computers (my brain and the LLM) run on entirely different operating systems, and their internal representations are not compatible.
When I write a prompt, the AI tries to approximate my internal state: what I want, and how I want it done. If I could encode the entire state of my mind into the LLM, I'm sure it could do my coding work. It is vastly more knowledgeable than I am, and far faster at reasoning and typing. For any reasonable program I want to write, there exists a context and a short series of prompts that produces it.
But synchronizing two minds is a lot of work. This is why I find that for most important and precision-demanding programming tasks, adding another mind to the process usually slows me down.