Special thanks to Justis for proofreading this post and to the LW team for setting up the proofreading program.
As developers, we like to believe that we do engineering. Our industry is filled with methods and metaphors borrowed from traditional manufacturing: "architects" create "specifications" that software engineers use to "build" features, which are then "deployed" after going through a rigorous "quality assurance process."
But this picture doesn't match reality. For example, it's almost impossible to correctly estimate how long a task will take, and because estimation errors accumulate, projects are often late. And quality varies widely, even across a single company, leading to outages and data breaches that make the news, as well as imposing costs (read: pain) on end-users.
We can't engineer our way out of this problem because the engineering metaphor itself is part of it. It blinds us to some inconvenient yet critical facts about making software, so in our confusion we repeat the same mistakes again and again. It's a classic case of the map not fitting the territory.
There exists another metaphor that, while not recognized by mainstream programming culture, has shaped the industry from the very beginning. It doesn't even conflict with the engineering frame--instead, it extends it to acknowledge that writing software involves a kind of illegible creativity. That's a quality we usually associate with art, but I believe it has an outsized effect on the quality of software we make.
To explain what I mean, let's talk about how writers produce books.
Every writer can use the same words as every other writer--there are no secret dictionaries available only to masters of prose. Yet some writers produce boring, trivial pieces while others create works that move people, and continue doing so for decades or longer.
It boils down to how the words are arranged--composition. Somehow, those who do it well pick just the right ones and set them in just the right order, producing beautiful sentences. Then they string a couple of those into paragraphs, then paragraphs into chapters, and finally chapters into novels. But there doesn't seem to be a simple, clear process behind it, nothing someone else could just replicate.
I suspect it's because there are many dimensions to writing: grammar, style, voice, plot, world building, etc. Imagine that these are planes along which one can move. Inexperienced writers--most of us--consciously move along only one or two of them, leaving the rest to chance. This gets the job done, as you can see by reading any company announcement or professional email, but the pattern it produces is noisy, erratic.
In turn, experienced writers know how to chart an optimal path along most of these planes, composing an elegant pattern that's also dense in meaning. That's why, even though they use the same words as everyone else, a Hemingway or Bulgakov can express profound observations about life in just a handful of sentences.
It's difficult to teach this. After all, we all spend years at school writing [essays](http://www.paulgraham.com/essay.html) and reports, yet few ever advance beyond basic forms like the five-paragraph essay. I suspect it's because school is good at teaching things through repetition and memorization, but writing well requires playfulness and exploration, which entails making mistakes--and that's something the education system can't process.
Making software is similar:
Every programmer can use the same languages and tools as every other programmer--there are no secret programming languages available only to masters of code. Yet some programmers produce buggy code while others produce... less buggy code used by millions.
The difference comes down to skill in composition. Well-composed programs are easier to understand, which means they are easier to change, which makes fixing bugs or adding features faster. This is important because every time we change a program, we make the next change harder--it's why enterprise software works so poorly and why it takes so much effort to develop. But if we compose the software well--and recompose/refactor it regularly--we can retain the ability to change it for a long time.
This is visible in successful open source projects. Take Django, which is a web framework written in Python. It was first released in 2005, and by evolving and adapting to the changing environment of the web, it remains popular today. And if you look at the code, you'll probably find that it's easy to follow, even if Python is not your favorite programming language.
Most open source projects are elegant like that. Some more, others less. But to really see the difference, you have to compare them to in-house software, which is often a terrible mess. You'll likely find lots of confusing names and strong coupling, and the whole structure will be somewhat disjointed. It can take days to figure out how to make even a simple change.
We have to control for timing here, though. If an amazing piece of software showed up at the wrong time, it would get ignored. For example, if Django showed up in 1995 instead of 2005, I can't imagine it would get anything more than a polite glance--computers then were slow, so using a "slow" language like Python would be madness.
We also need to control for factors unique to in-house software, like deadlines and maze-like politics. So even if a company employs people able to compose beautiful programs, it's unlikely they would have the time or freedom to implement their designs.
Now, all of this is fine talk, but does it mean anything, or is it just counting how many angels can dance on the head of a pin? Looking at the past thirty years of development practices, the industry actually seems to be shifting toward making space for illegible creativity.
From Manufacturing to Crafting
Whenever I read about how software was made in the 90's, it sounds like manufacturing. Business leaders would create goals and turn them over to software architects so they could translate them into specifications for programmers to implement.
I think this is why UML diagrams and object-oriented programming became so popular back then--they promised what essentially amounts to software-as-lego-blocks, no skills required. Well, almost none, because you would need a handful of smart (read: expensive) architects to create designs that would then be handed off to low-paid programmers. These programmers (read: code monkeys) would just have to assemble everything together from pre-made pieces.
(I suspect this had something to do with the wave of IT outsourcing in the 90's).
However, by the end of that decade, it became apparent that something was wrong. Projects were late and often so buggy as to be barely usable. And much to the Planners' chagrin, forcing more legibility into the process (more plans, more specialized roles) just made things worse.
Around the same time, some people were experimenting with less legible methods of producing software, like Agile and Extreme Programming. They turned everything upside down: instead of precise, grand plans, they recommended a series of iterations; instead of a strict hierarchy, they recommended chaos; instead of strict roles, they recommended cross-functional teams. In short, they created space for individual developers to exercise their creativity.
Today, twenty years later, we know that it worked. Whoever adopted these methods could deliver better, more reliable software faster. Techniques like kanban, sprints, automated testing, and others became the norm. And even the Planners hopped on the wagon, in a fashion unique to their culture, by growing an industry of experts who sold these new techniques packaged as precise, acronym-laden... plans.
However, along the way, a group of people came together who wanted to take the illegible approach even further. They rallied around the flag of software craftsmanship and discussed stances, principles, and paradigms. (For the curious, check out The Clean Coder, SOLID, and Growing Object-Oriented Software, Guided by Tests.) What's more, they added a moral dimension to creating beautiful software, investing the craftsman with a type of social responsibility toward others like them as well as toward their users.
It never caught on. It was too hard to communicate to non-technical stakeholders. And perhaps the moral aspect seemed too intense to some, especially developers who treated making software as just a job. It remains a niche community. And yet, some of their ideas seeped into the broader community, with seemingly more people paying attention to details like naming and structure, and code formatters rapidly gaining popularity.
The software industry appears to be undergoing another fundamental shift toward encouraging the creative aspects of coding.
A few years ago, a community coalesced around the idea of bridging the gap between developers, who traditionally wrote programs, and operations, who ran them. They called it DevOps. Its goal was to solve operational problems with software and achieve better quality for the end user. There's a lot to talk about here, so to make the story short, I'll skip to site reliability engineering (SRE), which is a concrete implementation of DevOps ideas.
Core to SRE is the idea that, at the end of the day, the quality of software boils down to a handful of service level objectives (SLOs)--metrics that describe how the software is working for its users. Popular SLOs cover things like request latency, availability, and error rates. They're usually described in terms of "nines", which represent the fraction of time (or requests) for which the service meets its target. If a service offers "4 nines of availability" (99.99% uptime), then users can expect no more than 4.38 minutes of downtime per month.
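To get an intuition for what each "nine" buys you, note that the downtime budget is just the allowed unavailability multiplied by the period. A minimal sketch in Python (the function name and the 30.44-day average month are my own assumptions, not part of any SRE standard):

```python
def downtime_budget_minutes(nines: int, days: float = 30.44) -> float:
    """Allowed downtime per period for a given number of nines.

    4 nines -> 99.99% availability -> 0.01% allowed downtime.
    Defaults to an average-length month (30.44 days).
    """
    unavailability = 10 ** -nines  # e.g. 4 -> 0.0001
    return unavailability * days * 24 * 60

# 4 nines over an average month: roughly 4.38 minutes of downtime
print(round(downtime_budget_minutes(4), 2))
```

Each extra nine shrinks the budget tenfold, which is why the jump from "3 nines" to "4 nines" is so expensive in practice.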
From a developer's perspective, this means freedom. It means you can do whatever you want as long as what you build achieves those numbers. You can type up a simple Python script and run it via cron, or you can make everything a microservice, or you can write your code upside down--or you can even compose beautiful code--and it's alright as long as the SLOs are met.
Realistically though, there will be constraints. Maybe your teammates won't appreciate you using an obscure, esoteric language. Maybe you will be under time pressure and will have to work with whatever is fastest. Maybe you will have to integrate your solution with a legacy system. But within those constraints, you can play with the problem to your heart's desire.
Optimizing for a handful of metrics brings with it the risk of falling victim to Goodhart's Law. To counter that, SRE includes a ritual where engineers and stakeholders meet regularly, say once a quarter, and adjust SLOs to ensure the team is working on the right things.
When someone asks for advice on how to write great essays or stories, the writing community universally replies: write. Write every day. Write n words where n is between 50 and 5000. When you're having a bad day: write. When you're out of ideas: write. When doubts cloud your thoughts: write.
This approach isn't unique to writing. Tennis players improve by hitting the ball; carpenters, by putting together another piece; musicians, by replaying the same song. We call this grinding.
I think "grinding" is a great name. It emphasizes the repetitiveness of the process and how each iteration brings you closer to a finished work. It also describes the feeling that goes along with it: the discomfort, frustration, and intense focus of repeating an action, always slightly wrong, always hoping to make the next iteration slightly better. It's exhausting, but satisfying.
To learn to program, you must grind. In fact, this is how Paul Graham described it way back in 2003:
> What else can painting teach us about hacking?
>
> One thing we can learn, or at least confirm, from the example of painting is how to learn to hack. You learn to paint mostly by doing it. Ditto for hacking. Most hackers don't learn to hack by taking college courses in programming. They learn to hack by writing programs of their own at age thirteen. Even in college classes, you learn to hack mostly by hacking.
College teaches you everything about the medium--how the hardware works, how compilers translate code into CPU instructions, how data is arranged, etc.--but I've never seen any curriculum that included classes on composing programs.
(Edit: Apparently, compsci students spend more time writing software than they used to. Anyone out there want to share their experience?)
This leads to several easily observable results. First, when computer science students enter the industry, they face a completely new and confusing landscape. All the theory they learned gets pushed into the "interesting hobby" background, and the one thing they spent the least time on--writing programs--consumes 90% or more of their time. I've seen many people struggle with this transition.
Second, the industry is divided into those who learned how to compose programs and those who didn't. It's most visible during interviews. Candidate A comes in and you ask them to write a loop that counts from 1 to 100, printing out the numbers divisible by 3, 5, or both. They get it done in a few minutes, after which you move on to a harder question or dig into topics like testing or performance.
But when candidate B comes in and you ask them the same question, they struggle to put the statements together. Sometimes they get the ordering wrong. Or forget which operator to use. Or mix up languages. It's especially visible when a candidate professes five or ten or fifteen years of experience, including many years of using some language, and then cannot back that up in any way.
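For reference, here's one way candidate A might answer the question above. A quick sketch in Python (the helper name is mine; any loop that prints the multiples of 3 or 5 up to 100 would do):

```python
def multiples_of_3_or_5(limit: int = 100) -> list[int]:
    """Numbers from 1 to limit divisible by 3, 5, or both."""
    return [n for n in range(1, limit + 1) if n % 3 == 0 or n % 5 == 0]

for n in multiples_of_3_or_5():
    print(n)  # 3, 5, 6, 9, 10, 12, 15, ...
```

The point of the question isn't cleverness--it's whether writing a loop and a divisibility check is second nature or a struggle.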
We should learn from this. In college, students should get the opportunity to read, write, and critique code. Maybe we could make it a sort of optional specialization. We could also spin it off as its own major--computer science for those interested in theory and pursuing a career in academia; software engineering for those who want to compose programs for a living.
Approaching software as a question of composition isn't useful just for students, though. Professional programmers can gain much from it as well. Once you accept that some programs are better than others, and that producing better programs is a skill one can train, you begin seeing options everywhere: How good can I make this module or project? How good should I make it? Where do I need, or want, to improve?
And because programming is like writing or painting, the way to improve is straightforward: grind. Luckily, your job provides an endless supply of challenges to practice on, which makes it great for deliberate practice. Think about each thing you're working on now and consider what's going well and what isn't; what you should do differently next time; what problems came up--and which of them you could have controlled for versus which were true unknown unknowns.
Like other artists, you can also study the works of others, which thanks to open source is incredibly easy. You can take a peek at how others compose software. How do they structure it? What problems did they encounter, and how did they solve them? Which ideas should you steal, and which ones should you avoid?
When I look at programming from this angle, it seems... fun. There's excitement, curiosity, energy, and a yearning to play and explore--kind of like back when I first began messing around with computers, when I probably learned the most in the least amount of time.
Many people will find this idea strange and not to their liking. Managers have long complained that managing developers is like herding cats. I imagine they would prefer it if developers were perfectly fungible resources, in a "spherical cows" kind of way. That would make hiring, managing, and firing them a whole lot easier.
But if the last thirty years tell us anything, it's that this approach barely works at all. You can spend weeks putting together plans and diagrams and the project will still end up late and overly complex--if usable at all. And the more you coerce people with these high-modernist tools, the more likely you are to fail.
Thankfully, people have been exploring and colonizing the illegible, creative spaces that lead others to write great code. What started off with a handful of adventurers living in makeshift camps in the blogosphere, giving the occasional conference talk, has turned into bustling towns connected into states--entities that not only produce industry best practices, but can defend their ideas against all sorts of bureaucrats. I think it's important because we, humans, will only become more reliant on software, so we better make it good.