I was pretty much with you until this paragraph:
In many ways Inkhaven is an application of single piece flow to the act of writing. I do not believe intellectual progress must consist of long tomes that take months or years to write. Intellectual labor should aggregate minute-by-minute, with revolutionary insights emerging from hundreds of small changes. Publishing daily moves intellectual progress much closer to single piece flow.
Of course intellectual progress doesn't always require tomes, but I think in many fields of science, important conceptual progress has historically occurred so dominantly via tomes that they can almost be considered its unit. Take for example well-regarded tomes like Astronomia Nova, Instauratio Magna, Principia, Reflections on the Motive Power of Fire, On the Origin of Species, or A Treatise on Electricity and Magnetism: would you guess the discovery or propagation of these ideas would have been more efficient if undertaken in something closer to single piece flow? My sense is that tomes are just a pretty natural byproduct of ambitious, inferential-distance-crossing investigations like these.
Yep, definitely! The reason why these are big tomes is IMO largely downstream of the distribution methods at the time.
Like, yes, totally, sometimes you have to cross large inferential distances. For example, The Sequences are probably one of the most inferential-distance-spanning artifacts that I have read in my life. Nevertheless, they were written one blogpost a day over the course of two years.
Many pieces of intellectual progress were also first made in the form of a lecture series, where each lecture was prepared after the previous one was finished. Then that lecture series was eventually written up into a book. Indeed, I think that is, for most forms of intellectual progress, a better way of developing both ideas and pedagogical content knowledge.
I tend to write in large tomes that take months or years to complete, so I suppose I disagree with you too. Not that intellectual progress must consist of this, obviously, but that it can mark an importantly different kind of intellectual progress from the sort downstream of continuous shipping.
In particular, I think shipping constantly often causes people to be too moored to social reception, risks killing butterfly ideas, screens off deeper thought, and forces premature legibility. Like, a lot of the time when I feel ready to publish something, there is some bramble I pass in my writing, some inkling of "Is that really true? What exactly do I mean there?" These often spin up worthy investigations of their own, but I probably would've failed to notice them were I more focused on getting things out.
Intellectual labor should aggregate minute-by-minute, with revolutionary insights emerging from hundreds of small changes.
This doesn’t necessarily seem in conflict with “long tomes which take months to write.” My intellectual labor consists of insights aggregating from hundreds of small changes afaict, I just make those changes in my own headspace, or in contact with one or two other minds. Indeed, I have tried getting feedback on my work in this fashion and it’s almost universally failed to be helpful—not because everyone is terrible, but because it’s really hard to get someone loaded enough to give me relevant feedback at all.
Another way to put it: this sort of serial iteration can happen without publishing often, or even at all. It’s possible to do it on your own, in which case the question is more about what kind of feedback is valuable, and how much it makes sense to push for legibility versus pursuing the interesting thread formatted in your mentalese. I don’t really see one as obviously better than the other in general, and I think that doing either blindly can be pretty costly, so I'm wary of it being advocated as such.
I'm curious if we can somehow operationalize a bet between Lightcone-ish-folk and you/Adam. I think I agree that the social-environment-distortion is an important cost. I do think it's probably necessary for genius thinkers to have a period of time where they are thinking alone.
But, I do think there are also important benefits to publishing more, especially if you can develop an internal locus of "what's important". I also think doing things like "just publish on your private blog rather than LessWrong, such that a smaller number of higher-context people can weigh in" would help.
But, my gut says pretty strongly that you and Adam are erring way too far in the not-publishing direction, and like, I would pay money for you to publish more.
FWIW I think that much of our epistemic environment is too far from the true/right path. I think Habryka is able to think for himself in public better than most, and this has involved being one of the most intellectually aggressive arguers on the internet. I am not sure that being more in touch with everyone and their feedback would be healthy. For similar reasons, I don't tweet. I suspect I would become more insane.
Yep, definitely! The reason why these are big tomes is IMO largely downstream of the distribution methods at the time.
What distribution differences do you mean? Kepler and Bacon lived before academic journals, but I think all the others could easily have published papers; indeed Newton, Darwin and Maxwell published many, and while Carnot didn't, many around him did, so he would have known it was an option.
It seems more likely to me that they chose to write up these ideas as books rather than papers simply because the ideas were more "book-sized" than "paper-sized," i.e. because they were trying to discover and describe a complicated cluster of related ideas that was inferentially far from existing understanding, and this tends to be hard to do briefly.
I think that is, for most forms of intellectual progress, a better way of developing both ideas and pedagogical content knowledge
It sounds like you're imagining that the process of writing such books tends to involve a bunch of waterfall-style batching, analogous to e.g. finishing the framing in each room of a house before moving on to the flooring, or something like that? If so, I'm confused why; at least my own experience with large writing projects has involved little of this, I think, though I'm sure writing processes vary widely.
I don't think papers were set up for wide distribution at all. Like, how would Newton, Darwin and Maxwell have published 10+ papers and distributed them all to their target audience?
Papers and books are aimed at different audiences. Papers are aimed at a small community of experts with a lot of shared epistemic prerequisites, so the average contribution is short. Books are aimed at larger audiences.
The modern internet enables you to write in small batches to both audiences (or any audience) really. I am more talking about the ability to write things like blogpost series, or have people follow your Youtube channel, or follow your Twitter, etc. (Like, I am not saying Youtube and Twitter are bastions of intellectual progress, but they enable distribution mechanisms that I think generally outperform previous ones).
The application of single piece flow is also one big reason for why Lightcone is a gaggle of generalists. Specialization often breeds waterfall habits. A gaggle of generalists can all focus their efforts together on shipping whatever needs to be shipped right now, until it is shipped, and then reorient.
This seems in tension with the way you praised assembly lines, where each work unit along the line specializes in a particular task? I imagine that if the workshop snake had human facilitators, it'd probably make sense for them to specialize in their respective stations, too.
It's true there is a bit of a tension. Some thoughts:
Beyond that, it's great if you do end up with a process that produces value by creating copies of the exact same widget, each taking the exact same time; if you do, you can tolerate a lot of specialization. But of course things rarely work out that way, especially in anything software-adjacent, where software handles the uniform parts, so the labor you are trying to optimize is left with the heterogeneous parts.
So what do you do if different stations randomly take longer or shorter, and your work product is very non-uniform and so requires different amounts of input from different stations each time? You need to find ways to transfer capacity from one station on the assembly line to another. This is what generalist labor is about. Generalist labor provides the slack in the system that allows it to maintain high throughput and efficiency.
At Lightcone, the way this plays out is that our core team is mostly generalists, and when we do end up having really locked down what a task is like and where it fits, we hire labor to actually perform it every day, and those hires don't need to be generalists. Lighthaven's maintenance, restocking, and cleaning are not handled by generalists; we hire contractors and specialized staff for that.
Another thing I maybe under-emphasized in the post is that the key criterion you want to optimize for is total production time. The whole point of single piece flow and small batches is to maximize the speed at which you get feedback about the consequences of your actions. The depth of your assembly line is a cost to that! Every time your assembly line gets longer, you have to wait longer to see the consequences on the final product.
Now, that's a bit of a high-level take on the tension here, but I realize I didn't really explain a good model of why narrow specialization in a non-uniform work environment pushes heavily towards big batches and waterfall planning.
Let's say you have an assembly line with 4 stations (A, B, C, and D), with a lot of variance in how long the work at each station takes. Let's talk about a few different scenarios:
1. You have 4 employees each capable of only performing the job of one station.
Now, let's say station A takes twice as long this time. This means stations B, C, and D will now be idle 50% of the time. The total efficiency of your process is (100% + 3 × 50%) / 4 = 62.5%.
[Ok, I have to run, but I'll edit this comment with the full explanation later, though maybe it's already clear]
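A minimal simulation can stand in for the missing explanation. This is a sketch with assumed numbers, not anything from the original comment: station times are drawn uniformly between 0.5 and 2 hours, the specialist line advances only when its slowest station finishes, and the generalists are idealized as splitting the remaining work perfectly evenly.

```python
import random

random.seed(0)
PIECES = 1000  # number of pieces sent down the line

def station_times():
    # Each of the 4 stations nominally takes ~1 hour, with high variance.
    return [random.uniform(0.5, 2.0) for _ in range(4)]

specialist_busy = 0.0     # hours of productive work on the specialist line
specialist_elapsed = 0.0  # wall-clock hours on the specialist line
generalist_elapsed = 0.0  # wall-clock hours on the generalist line

for _ in range(PIECES):
    times = station_times()
    # Specialists: the line advances at the pace of the slowest station,
    # so the other three workers idle while the bottleneck finishes.
    specialist_elapsed += max(times)
    specialist_busy += sum(times)
    # Generalists (idealized): all four workers swarm wherever work
    # remains, so the total work is split evenly and nobody idles.
    generalist_elapsed += sum(times) / 4

print(f"Specialist utilization: {specialist_busy / (4 * specialist_elapsed):.0%}")
print(f"Generalists finish {specialist_elapsed / generalist_elapsed:.2f}x faster")
```

With these toy numbers the specialists sit idle roughly a quarter of the time purely because of variance; the generalist line recovers that capacity.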
Even in real assembly lines at physical factories, I get the impression that generalization has often worked well because it gives you the ability to change your process. Toyota is/was considered best-in-class, and a major innovation of theirs was having workers rotate across different areas, becoming more generalized and more able to suggest improvements to the overall system, with some groups rotating every 2 hours[1].
Tesla famously reduced automation around 2018, even where its marginal costs were lower than those of human operators, again because the lost flexibility wasn't worth it.[2] Though it's worth noting they started investing more in robots again in recent years, presumably once their process was more solidified[3].
[1]: https://michelbaudin.com/2024/02/07/toyotas-job-rotation-policy/
[2]: https://theconversation.com/teslas-problem-overestimating-automation-underestimating-humans-95388
[3]: https://newo.ai/tesla-optimus-robots-revolutionize-manufacturing/
Broadly agree. One thing I'll add is that you should structure a "piece" around your points of highest uncertainty, and a common mistake I see is for companies to iterate on the wrong thing. Real examples from my career:
Oh, yeah, I should write a future principles memo on falsifying the most load-bearing assumptions early. Agree that that is a really important aspect of doing small batches well!
I've looked at my workflows through the lens of this post, and I'm realizing I could indeed make some of them much more efficient by restructuring them the way suggested here.
So, thanks! I think this advice will end up directly useful to me.
Thanks for sharing this - jfyi I interpreted the title differently than I think you meant it? More like you were saying "You should do multiple of a thing at once, but not too many."
Whereas I now think you mean something more like "It's best if you can do one of a thing at a time," which doesn't code to me as a small batch (because one-at-a-time seems non-batchy). With the caveat, of course, that sometimes pure one-at-a-time isn't doable.
The original title was "Small batches and the mythical single piece flow", which I guess communicates that better, but "single piece flow" is a mouthful, and I like imperative titles for this principle series better.
I might be able to come up with something else. Suggestions also welcome!
Edit: I updated the title! Maybe it's too hard to parse, but I think it's better.
I've been wondering about this in terms of my own writing, whether I should be working on multiple pieces at once to a greater degree than I am. Thinking aloud a bit:
I guess part of the question is: what are the efficiency effects of batch-processing, versus the more diluted feedback signal from multiple pieces 'coming off the production line' at once? Though in my case, I'd probably still stagger the publication, so maybe that's less of a concern (though there may still be some dilution from having shallower focus on each piece-in-process).
Context: Post #8 in my sequence of private Lightcone Infrastructure memos edited for public consumption.
When you finish something, you learn something about how you did that thing. When you finish many things at the same time, you do not get to apply the lessons you learned from each of those things to the others. This insight, it turns out, was non-trivially a core cause of the Industrial Revolution.
The assembly line is one of the foundational technologies of modern manufacturing. In the platonically ideal assembly line the raw ingredients for exactly one item enter a factory on one end, and continuously move until they emerge as a fully assembled product at the other end (followed right by the second item, the third item, and so on). This platonic assembly line has indeed been basically achieved, even for some of humanity's most complicated artifacts. A Tesla factory converts a pile of unassembled aluminum and some specialized parts into a ready-to-ride car in almost exactly 10 hours, all on a continuously moving assembly line that snakes itself through the Gigafactory.
In manufacturing this is called "single piece flow": each piece moves through the process one at a time, from start to finish, without ever sitting in a batch of work-in-progress.
And a smoothly running single piece flow assembly line is also the sign of a perfectly calibrated process. We know that we are not spending too much time on any part of our assembly. The conveyor belt moves continuously, always at the same speed, calibrated to be exactly enough to complete each task. If for some reason a task takes longer, because e.g. a worker is slower than their predecessor, we notice immediately.
In contrast to all of this stand some human instincts around efficiency. If, instead of making each item from start to finish, we could just process a big batch of dozens or hundreds or thousands of items, we could, it seems, be so much more efficient. This is usually a lie.
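A toy calculation, with made-up numbers (ten items, each needing four one-hour steps), shows what the lie conceals: batching does not shorten total production time at all here, it only delays the moment you learn anything.

```python
# Toy numbers, purely illustrative: 10 items, 4 one-hour steps each.
ITEMS, STEPS = 10, 4

# Batch processing: run every item through step 1, then step 2, and so on.
# Nothing is finished (and nothing can teach you anything) until the end.
batch_first_feedback = ITEMS * STEPS  # 40 hours until the first finished item

# Single piece flow: finish item 1 completely before starting item 2.
flow_first_feedback = STEPS           # 4 hours until the first finished item

# Total production time is identical in this idealized model:
total = ITEMS * STEPS                 # 40 hours either way

print(f"First feedback: batch after {batch_first_feedback}h, "
      f"single piece flow after {flow_first_feedback}h; total {total}h both ways")
```

The flow worker gets thirty-six extra hours of lessons to apply to the remaining nine items; the batch worker gets none.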
The ever-changing parable of the students making pottery/paper-airplanes/photographs/etc. is usually invoked at this point. While the parable has undergone many variations, this story from Atomic Habits is, as far as I know, the original one:
ON THE FIRST day of class, Jerry Uelsmann, a professor at the University of Florida, divided his film photography students into two groups.
Everyone on the left side of the classroom, he explained, would be in the “quantity” group. They would be graded solely on the amount of work they produced. On the final day of class, he would tally the number of photos submitted by each student. One hundred photos would rate an A, ninety photos a B, eighty photos a C, and so on.
Meanwhile, everyone on the right side of the room would be in the “quality” group. They would be graded only on the excellence of their work. They would only need to produce one photo during the semester, but to get an A, it had to be a nearly perfect image.
At the end of the term, he was surprised to find that all the best photos were produced by the quantity group. During the semester, these students were busy taking photos, experimenting with composition and lighting, testing out various methods in the darkroom, and learning from their mistakes. In the process of creating hundreds of photos, they honed their skills. Meanwhile, the quality group sat around speculating about perfection. In the end, they had little to show for their efforts other than unverified theories and one mediocre photo.
While one might accept that the assembly line has been deeply transformative in manufacturing, it might be harder to see how the same principles would affect the operations of something like software engineering, which is a good chunk of what we do. However, the same principles have also driven a non-trivial fraction of modern software development progress.
In the dark old days of software engineering, software would be shipped in what they called "releases".
The lifecycle of a release would start with a bunch of managers coming together and making a big long list of features they think the software they are working on should have. This list would then be handed to a small set of lead engineers to transform into something they would call the "spec", usually at least hundreds of pages long. This spec would then be handed to a set of programmers to "implement". The resulting piece of software would then be handed to a set of testers to test. Then handed back to the programmers to fix. Then they would do a big pile of user-testing to get product feedback on the resulting software. This would then result in an additional list of features, which would be translated into a spec, which would be implemented, tested and fixed.
And then finally, after many months, or even years, the software would be burned on a CD, and then be shipped out to users.
Contrast this with the processes dominating modern software engineering. Everything is continuously deployed. A single engineer routinely goes from having an idea for a feature to having it shipped to users within hours, not months. Every small code change gets shipped immediately. We avoid shipping many things at once, since that would make it harder to roll them back. This is an application of the principle of single piece flow/small batches.
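One way to make the rollback point concrete: when a deploy breaks production, every change in it is a suspect. The sketch below is my framing, not from the post, and assumes the culprit is isolated by bisection (analogous to git bisect); the debugging cost grows with batch size.

```python
import math

# For a broken deploy of N changes: N suspects, and about log2(N)
# redeploys to isolate the bad change by bisection.
for batch_size in (1, 10, 100):
    bisect_steps = math.ceil(math.log2(batch_size)) if batch_size > 1 else 0
    print(f"batch of {batch_size:>3}: {batch_size:>3} suspect changes, "
          f"{bisect_steps} bisection redeploys to find the bad one")
```

With a batch size of one, the failing deploy is the diagnosis.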
At the management level, the opposite of single piece flow is usually called "waterfall planning". A waterfall planning process is structured into multiple distinct stages of product development in which big batches of changes get combined, audited, reviewed, iterated on, and eventually, sometime down the road, shipped to users. The alternatives to waterfall processes are often called "lean processes" (hence the title of the book The Lean Startup).
The application of the principle of single piece flow to new domains can often produce enormous efficiency gains. One domain stuck deeply in the old ways, for example, is architecture and construction. A building gets built, or renovated, in a set of discrete, long stages. First the client "figures out what they need", then an architect draws up the blueprints, then a planner reviews the blueprints, then a contractor builds the whole building, then an auditor reviews the construction.
This is complete madness. How are you supposed to know what you need in a building if you have never built any part of it? How are you supposed to know what materials to work with if you don't know how well the different materials will work for you?
Lighthaven was renovated drastically differently from basically all other buildings built or renovated in the Bay Area. During renovation we would aim to finish a single room before we started working on the next room. After every room we would review what worked, what didn't work, which parts took longer than expected, and which parts turned out to be surprisingly easy. Our contractors were not used to this. We needed to change a huge amount about how they operated, but I don't think Lighthaven could have successfully been built any other way.
Much of Lightcone's work should aim to ship as continuously as possible, even if there is no clear precedent for what single piece flow would look like in that domain. To show what this thinking looks like in-progress:
The ideal workshop, when I hold this consideration in mind, is a series of rooms that each participant walks through over the course of a week, with each station teaching them something, and preparing them for future stations. Every daylight hour, a newly educated participant leaves the workshop, with another person right behind them, and another person entering right at the start.[1]
Every single participant would be an opportunity to learn for all future participants. We could calibrate the efficiency and difficulty of each station (if necessary adapting to the participant), and would notice immediately if something was going wrong.
Unfortunately cohort effects loom large, as the experience of learning alone is very different from learning together, and this appears to be a big obstacle to making this ideal workshop happen. But I still think that maybe there is some way.
In many ways Inkhaven is an application of single piece flow to the act of writing. I do not believe intellectual progress must consist of long tomes that take months or years to write. Intellectual labor should aggregate minute-by-minute, with revolutionary insights emerging from hundreds of small changes. Publishing daily moves intellectual progress much closer to single piece flow.
For Lighthaven event rentals, the month-long lead and planning time also generates a lot of inefficiency. The ideal series of events would be created one piece at a time. Of course the obstacle lies in people's calendars and plans: attendees need to control their schedules weeks and days out, which requires locking in much about each event long before the previous one has completed.
The application of single piece flow is also one big reason for why Lightcone is a gaggle of generalists. Specialization often breeds waterfall habits. A gaggle of generalists can all focus their efforts together on shipping whatever needs to be shipped right now, until it is shipped, and then reorient.
A "workshop snake" as Justis affectionately named it while helping me edit this post