When automobiles first appeared on city streets, people worried about how to make them coexist with horse‑drawn carriages — how to keep the engines from spooking the horses or to prevent manure from being splashed around by the new machines.
That, more or less, is where we are with programming and artificial intelligence.
I teach computer science, and my colleagues and I now negotiate awkward rules about when students may use powerful AI coding tools and when they must “really” program. Programming has always been central to a computer‑science education, so it’s hard to let go. These tools generate clean, human‑understandable code blazingly fast, leaving the programmer mainly to verify that it does what’s needed. Yet they are engines bolted onto horse‑drawn carriages — extra power strapped to a familiar but fundamentally slow technology.
For now, it’s easy to sustain the illusion that there will always be human‑readable code, even if machines write most of it. Today’s AI systems are trained on code written by people, so their output looks like what we know: functions with tidy names, modules arranged in sensible ways. Even when the machine surprises us, it tends to do so in a style we recognize.
That won’t last. In chess and Go, programs first imitated human play. Once they began training by playing millions of games against themselves, they quietly left us behind, producing moves even grandmasters struggle to explain. Code is likely to follow the same arc. As models train not merely to imitate but to optimize, they will shed our habits of clarity and structure. They will discover algorithmic shortcuts and constructs that work well and read terribly.
Much of what we depend on will come not from code we wrote, but from code compressed, transformed and reassembled by machines in ways no human can fully inspect or understand. What then?
Programs will become “algorithmic objects” — entities with observable behavior we can test, much like physical objects, but with internal workings that are complex and largely impenetrable. We will be able to look inside at great expense, just as physicists probe nature, but it isn’t clear how much we will learn. This shift will push computer science — the study of algorithmic processes — toward the natural sciences and away from today’s engineering‑centric mindset.
And what about correctness, performance and safety, the pillars of any software system? I’m reminded of the story of farmers weighing a pig in the marketplace. They place the pig on one side of the scale and carefully place rocks on the other until the scale balances. Then they estimate what a rock weighs… Coding already works a bit like that. We assign someone a task, trust that they will implement and test it, and then plug the result into our system. If it breaks, we ask them to fix it.
AI‑generated code will be similar. Correctness, performance and safety will be established by AI tools in which we place our trust. But there is a clear advantage: the AI “programmer” that created the code can always be brought back to analyze or modify it. We no longer need to invest enormous effort in comments and readability for human maintenance. With these changes, there is effectively no limit to the complexity of our software or to our ability to design hardware to run it.
If this is the future, then education and research — the bread and butter of computer‑science departments — must change in a fundamental way. The question isn’t how to help students write code with AI support; it is how to make meaningful use of pure AI‑created code objects that we never need to read at all.
Put in a crudely metaphorical way, human‑readable code is horseshit, a by‑product of an old technology’s way of doing things. It’s time to focus on designing a future in which it no longer exists.