Matthew Cobb’s book The Idea of the Brain notes that the brain has historically been analogized to a hydraulic system, or to a telegraph network, or to a telephone exchange; today it’s often analogized to a supercomputer; and in the future, who knows. His suggested takeaway is: neuroscientists have never known how to think about the brain, and are grasping at straws.
But that’s the wrong takeaway. The brain is a machine that runs an algorithm. Many people throughout history have grasped that idea, at least intuitively. And they’ve tried to explain that idea by analogizing the brain to other machines that can run algorithms, of which there are many: clockwork, hydraulics, telephone exchanges, silicon chips, and more. All the analogies through the ages are pointing to a single, consistent, profound truth.
I've seen a similar claim, or possibly the same one: that humans just compare the brain to whatever the newest cool tech is. That's clearly not true. Once the airplane was the newest, coolest tech, and no one said the brain was an airplane, and the same goes for lots of other tech.
As you say, there is a clear trend in which tech we use as a metaphor for the brain.
The source for footnote 4 is a shortform by me that was specifically about Ayahuasca, which is much more likely than other psychedelics to have the described effects, though many push in that direction.
Edit: oops, overconfident! Many people can figure out the same thing; models of reality are convergent because there is a reality to study.
A mechanical adder is not “a computer”, analogous to a MacBook. Rather, it’s a machine that runs an algorithm.
What does that even mean?
Hard to describe exactly, but I'll take a stab at it. Digital computers are different from mechanical adders because they implement an algorithm which is very 'general' and can be very easily and rapidly configured to follow a wide set of other algorithms. This can be used to run simulations of things. This unusual flexibility is seen to a much lesser extent in the brain because there are many physical restrictions on the changes that can be made to the algorithm being run by the brain. Similarly, you can physically modify a mechanical adder machine to change its algorithm to no longer be 'adding', but switching it to a different algorithm isn't easy the way switching the algorithm running on the meta-level of the digital computer is.
Something something... degrees of freedom... markov blankets... meta level programming.... mumble mumble...
An algorithm is an information-processing task, i.e. something that a Turing machine could do.
Sometimes there’s a machine that’s purpose-built to instantiate a particular algorithm, by its construction. If you don’t like the mechanical adder example, here’s another: there’s an algorithm to increment a calendar date, including how many days are in each month and figuring out whether it’s a leap-year or not; and certain fancy watches have gear-based mechanisms that will instantiate that algorithm (and they can get the right answer for centuries, even with all the weird leap-year rules).
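That calendar-increment algorithm can be written out explicitly. Here's a minimal Python sketch (function and variable names are my own, for illustration), using the standard Gregorian leap-year rules:

```python
def is_leap_year(year: int) -> bool:
    # Gregorian rule: divisible by 4, except centuries not divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def next_date(year: int, month: int, day: int) -> tuple:
    """Increment a calendar date by one day."""
    days = DAYS_IN_MONTH[month - 1]
    if month == 2 and is_leap_year(year):
        days = 29
    if day < days:
        return (year, month, day + 1)
    if month < 12:
        return (year, month + 1, 1)
    return (year + 1, 1, 1)

# next_date(1900, 2, 28) → (1900, 3, 1), since 1900 is not a leap year
# next_date(2000, 2, 28) → (2000, 2, 29), since 2000 is a leap year
```

The fancy watch instantiates this same logic in gears and cams rather than code, which is exactly the point: the algorithm is the same, the machine is different.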
Another example would be a special-purpose ASIC, like the tiny high-speed over-voltage-protection IC that you can find in a cell phone. In many cases, these kinds of chips have no “hardware vs software”, they’re not reprogrammable at all, they just have one particular thing that they do, and that design functionality is “burned in” via the placement of wires and logic gates.
IIUC, very early chips were all like that: there was a particular thing that people designed the chip to do, and they burned in that algorithm via the physical construction of the chip. …And then by the 1970s people came up with the idea that it would save a lot of design time, and help economies of scale, to make a smaller number of reprogrammable chips (I believe the Intel 4004 was a pioneering early example). Such a chip is still “a machine that runs an algorithm”, but the algorithm that the machine runs involves reading another arbitrary algorithm from memory and then doing whatever it says. It’s kinda confusing to think about a big unchangeable burned-in algorithm that finds and runs an arbitrary smaller algorithm nested inside … so instead, we don’t normally think of these reprogrammable chips as “a machine that runs an algorithm”, but rather we use terms like “hardware” and “software” etc.
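That "big unchangeable burned-in algorithm that runs an arbitrary smaller algorithm nested inside" can be illustrated with a toy interpreter. This is a hedged sketch with a made-up three-instruction machine (not any real chip's instruction set):

```python
def run(program, x):
    """The fixed, 'burned-in' algorithm: fetch each stored
    instruction and execute it on the value x."""
    for op, arg in program:       # fetch
        if op == "ADD":           # execute
            x += arg
        elif op == "MUL":
            x *= arg
        elif op == "NEG":
            x = -x
    return x

# The interpreter never changes; swapping the stored program
# changes what the machine computes.
double_plus_one = [("MUL", 2), ("ADD", 1)]
negate_minus_3 = [("NEG", None), ("ADD", -3)]
```

Here `run` plays the role of the hardware and the instruction lists play the role of the software, which is why the hardware/software vocabulary is so natural for reprogrammable chips and so unnatural for the watch mechanism above.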
Does that help?
While I'm not disputing the substance of what you are saying here (besides the 4004 timeline), from a computer science perspective I am a bit annoyed at the terminology. A machine that can load computational instructions from a storage medium would traditionally be called a programmable computer, whereas the system you describe, "a machine that runs an algorithm", is just precisely a computer. I understand that this nuance is not represented in more popular terminology, but I feel an article that is precise about the difference could benefit from also using the more precise terminology.
Wow are you saying Universal Computers were only built in the 70s? Half a century after Turing?
They were not; computers were programmable long before that. Before the 4004, the functionality that we today find on a CPU was distributed over a larger collection of circuitry, with separate components for the ALU, the instruction interpreter, the register bank, the memory controller, and so on. But that assembly was already programmable and had functioned as a Turing-universal computer since the late 40s. The innovation of the Intel 4004 was that it was the first design to put all that machinery on a single integrated chip (the first CPU as we might understand it today, in the sense of being the first central processing unit -- earlier designs were decentralized, though the term "CPU" was already in use before then).
This is a nitpick but I think there's an important sense in which the brain is different from machines that run algorithms: With machines that run algorithms, you commonly start with a formal description of the algorithm (in modern computers, writing source code using your intuitive understanding of the algorithm), translate that to something the machine can accept, and have the machine execute them.
That is, there's a process that can absorb a wide variety of algorithms, which gets applied to a specific algorithm. Even the marble adder likely follows this pattern because someone probably thought up a combination of logic gates to be "compiled" into a marble machine ahead of time.
The closest brains have to this algorithm-machine separation is DNA. But I wouldn't expect there to be a 1:1 correspondence between genetic mechanisms and the algorithm-pieces we decode from the brain, for a number of reasons. First, obviously the genes have to do a lot of other stuff than encode the algorithm run by 99.8% of the brain. But secondly, genes would likely often encode the "boundaries" of the algorithm rather than the algorithm itself because they often control growth of structures rather than directly acting. And thirdly, genes probably take a lot of shortcuts and roundabout ways due to biological implementation details that need not generalize to our models.
I interpret you as saying: normally the process by which we wind up with “a machine that runs an algorithm” is: first we have some algorithm (information-processing procedure) in mind, and then second we come up with a machine that captures it. Whereas that didn’t happen with brains.
But by the same token, normally the process by which we wind up with “a machine that captures images” is: first we have some idea about lenses and light-sensitive substrates etc., and then we build a camera. The human eye was not “designed” by evolution in that order. But that’s not important—we are still correct to think of a human eye as “designed” to capture images via lenses and light-sensitive substrates.
The design is always implicit not explicit in evolution (“design without a designer”), but it’s still real design.
By the same token, I claim that there are real design principles underlying what the brain does to allow navigation, anticipation of danger, learning new skills, etc., and those same design principles will be findable in some future algorithms textbook (and in some cases, current algorithms textbooks), and the brain is running an algorithm that works due to those principles, just as your eye works due to the principles of optics and lenses. The fact that evolution did not explicitly write out the principles in advance is kinda incidental.
(Sorry if I’m misunderstanding.)
There's a reason I started out by calling it a nitpick. 😅
I'm not making a claim about the normal way we wind up with "a machine that runs an algorithm", such that one can just swap in other things for "runs an algorithm", so it was perhaps a mistake for me to justify it with "you commonly start with...". My point is more that the hardware-software distinction generalizes to the case of mechanical adders because you start with a logic gate diagram here, but not to the brain because it evolved in a different way.
As an analogy, if one called an eye "a machine that bends light according to a ray optics diagram[1]", that would be similarly misleading. The question is, I guess, whether "algorithm" means something more like "ray optics diagram" ("a set of instructions to be followed in calculations") or whether it means something less premeditated.
not sure whether that's the right term and whether ray optics diagrams are necessarily used for designing cameras...
My full position is a bit subtle, because it's quite hard to find a materialist-rationalist version of your statement in the OP that I would fully agree with. The word "design" is kinda objectionable because it implies a designer. Even "if one studied the brain well enough, one would come up with a model that could be used to substitute for the brain with equivalent behavior" is something I'm skeptical of. (But that skepticism is a bit separate from my objection above. Though both objections are motivated by a worry that one goes a bit too quickly from "supernaturalism is false" to "natural things are like artifice".)
The best I can come up with, without coining wholly new words to describe it, is to just have a disclaimer, perhaps in the comments like this one, pointing out that there's still a distinction.
Calling it a nitpick because I don't see any follow-up errors that would be made as a result of this terminology in this article.
What's odd to me about this post is that it doesn't define algorithm, so I don't know what the claim is. [1]
The closest to a definition is the third-last section. Which I'd critique because (a) the definition should be at the beginning, but okay that's just a nitpick about writing/structure, more importantly (b) I read it and I'm still not sure I know what the definition is, and (c) insofar as I understand it, I don't think it maps onto conventional usage very well (see last section).
My best guess of what you mean based on that section is "anything that's entirely about data processing in the widest sense is an algorithm"? But I don't think that maps onto the common usage of the term very well. Also I don't know how it applies to obscure cases. If you have a knotted wire and make the surface repulsive, it will 'compute' a way to disentangle, which is an 'output' in the sense that it corresponds to a set of topological transformations, but is this an algorithm? (Probably not because it actually does the disentangling itself, rather than just computing how you would disentangle, so therefore it's not entirely about data processing?)
Edit: even the mechanical adder seems like an unclear case because it physically instantiates the solution, I guess it's an algorithm because we don't care about the physical instantiation, so in this case we can view it as only an informational output rather than a device that "does" anything? But this is not a crisp distinction; what if the mechanical adder were part of a larger system where the physical arrangement of marbles were utilized more?
If I did grasp the distinction correctly, then I don't think this distinction is all that practically relevant. If the brain used some kind of quantum algorithm that had 100000x computational overhang if it were instantiated on a classical computer, [2] then your conclusion that you could replace the input/output mapping with a computer would still be true, so what does it prove? This seems to me like just a restatement of the Turing thesis, or I guess an application of the thesis to brains?
My practical concern RE (c) is that, in my experience, the concept of algorithm is generally seen as implying that the manner in which the brain does computation is similar to what a computer does, but that's totally orthogonal to what you discuss here. So I can see people taking away the wrong thing.
"anything that's entirely about data processing in the widest sense is an algorithm"? But I don't think that maps onto the common usage of the term very well.
Not sure exactly what you have in mind. I do think “the kinds of things that would be discussed in current and future algorithms textbooks” is the right conceptual vocabulary for understanding how the brain does useful things (like navigating the world, avoiding threats, seeking opportunities, inventing tools and concepts, etc.), just as “the kinds of things that would be discussed in camera textbooks” is the right conceptual vocabulary for understanding how the human eye captures images.
Well, more specifically, the human eye is in the Venn diagram overlap between the design & engineering principles of image capture, and the biological constraints (e.g., the affordances of biological cells, and there has to be an evolutionary pathway, etc.). So to understand the human eye, you need to understand both the design principles from physics textbooks and the biological constraints from biology textbooks.
By the same token, to understand the human brain, you need to understand both the design & engineering principles from algorithms textbooks and the biological constraints from biology textbooks.
This seems to me like just a restatement of the Turing thesis, or I guess an application of the thesis to brains
Something like that. If you think it’s obvious, that’s great! I guess you’re not the target audience.
Hmm, okay. So after reflecting on this a bunch, I think the things that still bug me about this post after reading your clarification aren't about factual merits but about implications and tone. I'm not sure what the best practice in such a case is; maybe just not saying anything is best. I guess tell me if you'd prefer I hadn't written this response lol. But I decided to say it this time.
I think this post is a) mostly attacking a strawman, in that most people who you think disagree with you actually do so for different reasons (although not everyone, I concede some people will disagree with the algorithm thing as you define it) and b) even insofar as they do exist, the net effect of this post will be substantially negative because it's antagonizing, mocking, and unpersuasive.
E.g.:
The hilarious irony of psychedelics is:[4]
Objectively, psychedelics should be the most clear-cut evidence you could imagine for the idea that the brain is a machine that runs an algorithm, and that the mind is something that this algorithm does. After all, these tiny molecules, which just so happen to lock onto a widespread class of neuron receptors, create seismic shifts in consciousness, beliefs, perceptions, and so on.
…And yet, the people who actually take psychedelics are much likelier to stop believing that. Ironic.
This would feel right at home in r/sneerclub, which is odd to me because your posts usually have a very humble vibe. And yea I guess I don't understand what your theory of mind is for how something good will result from anyone reading this.
But yea, feel free not to reply, and I can refrain from expressing similar things in the future if you want.
The sensory inputs and the body are much more integral than one would think. Biology is more of a cooperation between intelligent subsystems. Moreover, the quantum properties that are harnessed by biology are not at all clear. There might be fundamental ways in which reality works that enable DNA to do what it does, thanks to reliable interactions between particles like photons and electrons, and maybe other excitations of quantum fields. Does this mean that the brain is not a machine? Well, it's a machine just like society is a machine; the curse of semantics. Does it run an algorithm? Well, nobody wrote it. It just so happens that every complex form of matter is subject to a number of constraints proportional to its level of complexity. And it's those constraints that dictate its behavior. You can say that such an "algorithm" is sculpted by the constraints of the system; it's just a byproduct. So there isn't a "well-defined entity" that is running a "well-defined process".
Some people say “the brain is a computer”. Other people say “well, the brain is not really a computer, because, like, what’s the hardware versus the software?” I agree: “the brain is a computer” is kinda missing the mark. I prefer: “the brain is a machine that runs an algorithm”.[1]
Here’s a mechanical adder:
What’s “hardware versus software” for a mechanical adder? The question is nonsense.
A mechanical adder is not “a computer”, analogous to a MacBook. Rather, it’s a machine that runs an algorithm. (Namely, the binary addition algorithm.)
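For concreteness, the binary addition algorithm that such a machine embodies is just a ripple of full adders. Here's a minimal Python sketch of that same logic (the marble machine implements it with gravity and levers instead of code):

```python
def add_binary(a_bits, b_bits):
    """Ripple-carry addition of two equal-length bit lists,
    least significant bit first."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry       # full adder: sum of three bits
        result.append(total % 2)    # sum bit
        carry = total // 2          # carry bit
    result.append(carry)            # final carry-out
    return result

# Example: 3 is [1, 1] LSB-first, 1 is [1, 0]; 3 + 1 = 4 is [0, 0, 1]
```

The marbles are the bits, the flip-flop levers are the carry logic; same algorithm, different machine.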
And the brain is likewise a machine that runs a (much more complicated) algorithm.
“A machine??”, you say. Yeah, you heard me. A machine. An extraordinarily complex machine, but a machine all the same. If you could zoom in enough to really see it, it would just be obvious! You should pause here to marvel at some molecular simulations of cell biology in action: DNA replication, more crazy DNA stuff, kinesin, and so on.
…And what else would you expect? We live in a universe that follows orderly laws of physics.[2] And the laws apply to our bodies and brains just like everything else. After all, scientists have been measuring cells, including neurons and synapses, for a very long time, and everything they’ve found has been compatible with the normal laws of physics and chemistry, chugging along in a clockwork universe.
“An algorithm??”, you say. Yeah, you heard me. An algorithm. An extraordinarily complex algorithm, but an algorithm all the same. And the mind is part of that algorithm, i.e. a thing that the brain does.
What algorithm exactly? It’s as yet unknown to science. But it’s evidently an algorithm that helps animals survive and thrive by navigating their environment, anticipating danger, noticing opportunities, skillfully controlling their muscles, and so on.
“Well, the brain is not JUST a machine that runs an algorithm,” you say. Yeah sure. I oversimplified. The brain is also a gland. The brain is also a blood osmolality sensor. It’s a light sensor. It’s a muscle contractor. In other words: it has input channels and output channels—sensors and actuators.[3] These are indeed parts of the nervous system, and they are critically important.
…But they add up to ≈0.2% of the central nervous system.[4]
What’s the other 99.8% of the brain doing? Well, it’s a machine that runs an algorithm.
“No, I mean, you can’t reduce the brain to a mere machine that runs an algorithm,” you say. Who said anything about “reduce” and “mere”? You don’t have to be any less blown away by what the brain can do; instead you should be more blown away by the things that can be done by a machine that runs an algorithm.
Remember, there are infinitely many algorithms. They do all kinds of things. If, when you hear “algorithm”, you’re imagining lookup tables, and LLMs, and the procedure for calculating income tax, then you have still barely scratched the surface of the infinite variety of all possible algorithms.
Matthew Cobb’s book The Idea of the Brain notes that the brain has historically been analogized to a hydraulic system, or to a telegraph network, or to a telephone exchange; today it’s often analogized to a supercomputer; and in the future, who knows. His suggested takeaway is: neuroscientists have never known how to think about the brain, and are grasping at straws.
But that’s the wrong takeaway. The brain is a machine that runs an algorithm. Many people throughout history have grasped that idea, at least intuitively. And they’ve tried to explain that idea by analogizing the brain to other machines that can run algorithms, of which there are many: clockwork, hydraulics, telephone exchanges, silicon chips, and more. All the analogies through the ages are pointing to a single, consistent, profound truth.
“But mind is different from matter,” you say. Sure. An algorithm is different from a machine that runs that algorithm. If a silicon chip is running quicksort, the silicon chip does not thereby become quicksort.
Mind is an aspect of the algorithm that the brain runs. It’s a thing that the brain does, when the brain is working properly.
“But what about noise, randomness, and continuous (not discrete) quantities?”, you say. Well, what ABOUT noise, randomness, and continuous quantities? Algorithms are allowed to incorporate all those things! For example, Markov Chain Monte Carlo (MCMC) centrally incorporates randomness—and MCMC is, last I checked, an algorithm. Newton’s method involves continuous quantities, and that’s an algorithm too.
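To make the continuous-quantities point concrete, here's Newton's method finding √2 as a root of x² − 2 (a standard textbook sketch; the iteration count is an arbitrary choice of mine):

```python
def newton_sqrt(n, iterations=10):
    """Newton's method on f(x) = x^2 - n; the update rule
    x - f(x)/f'(x) simplifies to (x + n/x) / 2."""
    x = float(n)  # any positive starting guess works
    for _ in range(iterations):
        x = (x + n / x) / 2
    return x
```

No discreteness anywhere in sight, and yet it's unambiguously an algorithm.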
“But what about embodiment?”, you say. Well, what ABOUT embodiment? Algorithms are allowed to have inputs and outputs!
“But what about the brain rewiring itself?”, you say. Well what ABOUT the brain rewiring itself? Algorithms are allowed to do different calculations under different circumstances! See §2.3.3 of my Intro series, where I discuss the relationship between “brain plasticity” and “mutable variables”.
“An ‘algorithm’ can be anything, so your claim is vacuous,” you say. No. It’s definitely substantive.
As I mentioned above, 0.2% of the neurons in the human central nervous system are at least partly devoted to physical / chemical input and output—light detection, muscle contraction, etc. The other 99.8% are not. What I’m claiming is that you could (in principle) throw out that 99.8%, and instead connect that 0.2% to an external computer via some futuristic sci-fi radio input-output interface. I’m claiming that: if that computer was sufficiently fast, and if it was running just the right program, this 0.2%-brained human would be just as competent and genetically fit as they’d be with 100% of their neurons.
You cannot say this about the spleen. If you had a futuristic sci-fi radio interface to an external computer, you would not be able to scrape out 99.8% of the cells in an animal’s spleen, install the radio links, and have the animal wind up with high inclusive genetic fitness. Ditto with lung cells, muscle cells, and every other organ in the body.
The brain is a machine that runs an algorithm. An algorithm is about information processing. A sufficiently fast computer can (in principle) do that same information processing (cf. Turing completeness, Church–Turing thesis), and hence substitute for 99.8% of the brain. But computers cannot concentrate urine, nor produce stomach acid, nor grind food, nor shuttle oxygen from the air to the bloodstream. The brain is a machine that runs an algorithm, whereas other organs are not.
The hilarious irony of psychedelics is:[5]
Objectively, psychedelics should be the most clear-cut evidence you could imagine for the idea that the brain is a machine that runs an algorithm, and that the mind is something that this algorithm does. After all, these tiny molecules, which just so happen to lock onto a widespread class of neuron receptors, create seismic shifts in consciousness, beliefs, perceptions, and so on.
…And yet, the people who actually take psychedelics are much likelier to stop believing that. Ironic.
“But what about free will? And consciousness?”, you say. Oh jeez, this is getting outside the scope of a quick sassy little blog post. See my book-length discussion at: Intuitive Self-Models.
UPDATE 2026-02-22: A commenter informs me of some relevant CS jargon:
So I endorse “the brain is a computer” according to a specific obscure jargon definition of “computer”, but not according to how normal people use the term “computer”. For the purpose of this post, I’m sticking with more familiar terminology.
“The Standard Model of Particle Physics including weak-field quantum general relativity (GR)” (I wish it was better-known and had a catchier name) appears sufficient to capture everything that happens in the solar system (ref). Nobody has ever found any experiment violating it, despite extraordinarily precise and varied tests. This theory doesn’t capture everything that happens in the universe—in particular, it can’t make any predictions about either (A) microscopic exploding black holes or (B) the Big Bang. Also, (C) the Standard Model happens to include 18 elementary particles (depending on how you count), because those are the ones we’ve discovered; but the theoretical framework is fully compatible with other particles existing too, and indeed there are strong theoretical and astronomical reasons to think they do exist. It’s just that those other particles are irrelevant for anything happening on Earth—so irrelevant that we’ve spent decades and billions of dollars searching for any Earthly experiment whatsoever where they play a measurable role, without success. Anyway, I think there are strong reasons to believe that our universe follows some set of orderly laws—some well-defined mathematical framework that elegantly unifies the Standard Model with all of GR, not just weak-field GR—even if physicists don’t know what those laws are yet. (I think there are promising leads, but that’s getting off-topic.) …And we should strongly expect that, when we eventually discover those laws, we’ll find that they shed no new light whatsoever into how brains or minds work, beyond what’s already clear from the not-quite-complete laws of physics that we know today—just as we learned nothing new about brains or minds from previous advances in fundamental physics like GR or quantum field theory.
If you really want to be pedantic, the brain is also a weight that makes head-butts more forceful, and the brain is also a space heater, …
…By neuron count. Specifically, in the human central nervous system, the neurons doing physical / chemical input and output to non-brain parts of the body consist mostly of our ≈200 million retinal cells, along with a (comparatively) much smaller number of motoneurons and other odds and ends, as far as I can tell.
This section is parroting something I read once, but I can’t remember the source.