Jacob Cannell has claimed that biological systems get within 1 OOM not just of a local maximum, but of the global maximum in their abilities.

His comments about biology nearing various limits are reproduced here:

The paper you linked seems quite old and out of date. The modern view is that the inverted retina, if anything, is a superior design vs the everted retina, but the tradeoffs are complex.

This is all unfortunately caught up in some silly historical "evolution vs creationism" debate, where the inverted retina was key evidence for imperfect design and thus inefficiency of evolution. But we now know that evolution reliably finds pareto optimal designs:

  • biological cells operate close to the critical Landauer Limit, and thus are pareto-optimal practical nanobots.

  • eyes operate at optical and quantum limits, down to single photon detection.

  • the brain operates near various physical limits, and is probably also near pareto-optimal in its design space.

Link to comment here:


I am confused about the Landauer limit for biological cells other than nerve cells, since it only applies to computation. But I want to ask: is this notion actually true?

And if this view was true, what would the implications be for technology?

  • The existence of invasive species proves that, at any given time, there are probably loads of possible biological niches that no animal is exploiting.

  • I believe that plants are ≳ 1 OOM below the best human solution for turning solar energy into chemical energy, as measured in power conversion efficiency.[1] (Update: Note that Jacob is disputing this claim and I haven’t had a chance to get to the bottom of it. See thread below.) (Then I guess someone will say “evolution wasn't optimizing for just efficiency; it also has to get built using biologically-feasible materials and processes, and be compatible with other things happening in the cell, etc.” And then I'll reply, “Yeah, that's the point. Human engineers are trying to do a different thing with different constraints.”)

  1. The best human solution would be a 39%-efficient quadruple-junction solar cell, wired to a dedicated electrolysis setup. The electrolysis efficiency seems to be as high as 95%? Multiply those together and we get ≈10× the “peak” plant efficiency number mentioned here, with most plants doing significantly worse than that. ↩︎
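The footnote's arithmetic is easy to check (a quick sketch using the figures quoted in this thread; the 4.3% "C4 plants, peak" number is the Wikipedia figure discussed in the replies):

```python
# Back-of-envelope check of the "~10x" claim in the footnote above.
solar_cell_eff = 0.39     # quadruple-junction solar cell (figure from the footnote)
electrolysis_eff = 0.95   # optimistic electrolysis efficiency (figure from the footnote)
c4_peak_eff = 0.043       # "C4 plants, peak" (Wikipedia figure quoted in the replies)

human_chain = solar_cell_eff * electrolysis_eff   # sunlight -> storable chemical energy
print(f"human chain: {human_chain:.1%}")                    # ~37%
print(f"vs peak plants: {human_chain / c4_peak_eff:.1f}x")  # ~8.6x, roughly 1 OOM
```

Against typical field crops (the 3-6% figures discussed below, and lower) the gap widens past 1 OOM; against the peak algae figures it narrows.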

Do you have a source in mind for photosynthesis efficiency?

According to this source, some algae have photosynthetic efficiency above 20%:

On the other hand, studies have shown the photosynthetic efficiency of microalgae could well be in the range of 10–20 % or higher (Huntley and Redalje 2007). The simple structure of algae allows them to achieve substantially higher PE values compared to terrestrial plants. PE of different microalgal species has been given in (Table 2)

Pirt et al. (1980) have suggested that even higher levels of PE can be attained by microalgae

...
5 · Steven Byrnes · 5mo
Thanks. As it happened, I had edited my original comment to add a source, shortly before you replied (so you probably missed it). See footnote. Sorry that I didn’t do that when I first posted.

Your first source continues:

When we say solar cells are 39% efficient, that’s as a fraction of all incoming sunlight, so the 3-6% is the correct comparison point, not the 11%, right?

Within the 3-6% range, I think (low confidence) the 6% would be lower-intensity light and the 3% would be direct sunlight—I recall that plants start deliberately dumping light when intensity gets too high, because if downstream chemical reactions can’t keep up with upstream ones then you wind up with energetic intermediate products (free radicals) floating around the cell and destroying stuff. (Update: Confirmed! Actually this paper [https://escholarship.org/content/qt9cf6c2dq/qt9cf6c2dq.pdf?t=r9k3hj] says it’s even worse than that: “In leaves in full sun, up to 80% of the absorbed energy must be dissipated or risk causing serious damage to the system (41).”)

There are likewise solar cells that also can’t keep up with the flux of direct sunlight (namely dye-sensitized solar cells), but the most commonly-used solar cells are perfectly happy with direct sunlight—indeed, it can make their efficiency slightly higher. The 39%-efficiency figure I mentioned was tested under direct sunlight equivalent.

So the best point of comparison would probably be more like 3% than 6%, or maybe even less than 3%? Though it would depend a lot on local climate (e.g. how close to the equator? How often is it cloudy?)

Wikipedia [https://en.wikipedia.org/wiki/Photosynthetic_efficiency] says “C4 plants, peak” is 4.3%. But the citation goes here [https://escholarship.org/content/qt9cf6c2dq/qt9cf6c2dq.pdf?t=r9k3hj] which says: (Not sure why wikipedia says 4.3% not 4.5%.)

Again, we probably need to go down from there because lots of sunlight is the intense direct kind where the plant starts deliberately throwing some o
No, if you look at Table 1 in that source, the 3-6% is useful biomass conversion from crops, which is many steps removed. The maximum efficiency is:

  • 28%: conversion into the natural fuel for the plant cells (ATP and NADPH).

  • 9.2%: conversion to sugar, after 32%-efficient conversion of ATP and NADPH to glucose.

  • 3-6%: harvestable energy, as plants are not pure sugar storage systems and have various metabolic needs.

So it depends what one is comparing ... but it looks like individual photosynthetic cells can convert solar energy to ATP (a form of chemical energy) at up to 28% efficiency (53% of spectrum * 70% leaf efficiency (reflection/absorption etc.) * 76% chlorophyll efficiency). That alone seems to defeat the > 1 OOM claim, and some algae may achieve solar-cell-level efficiency.
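The efficiency chain in that comment multiplies out as claimed (my own check of the quoted figures):

```python
# Stepwise photosynthetic efficiency, using the figures quoted above.
spectrum = 0.53      # usable fraction of the solar spectrum
leaf = 0.70          # leaf-level losses (reflection/absorption etc.)
chlorophyll = 0.76   # chlorophyll conversion efficiency

to_atp = spectrum * leaf * chlorophyll
print(f"solar -> ATP/NADPH: {to_atp:.1%}")     # ~28%

to_glucose = to_atp * 0.32   # 32%-efficient ATP/NADPH -> glucose step
print(f"solar -> sugar:     {to_glucose:.1%}") # ~9%
```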
Overall, this debate would benefit from clarity on the specific metrics of comparison, along with an explanation for why we should care about that specific metric. Photosynthesis converts light into a form of chemical energy that is easy for plants to use for growth, but impractical for humans to use to power their machines. Solar cell output is an efficient conversion of light energy into grid-friendly electrical energy, but we can’t exploit that to power plant growth without then re-converting that electrical energy back into light energy. I don’t understand why we are comparing the efficiency of plants in generating ATP with the efficiency of solar cells generating grid power. It just doesn’t seem that meaningful to me.
I'm simply evaluating and responding to the claim:

It's part of a larger debate on pareto-optimality of evolution in general, probably based on my earlier statement:

(then I gave 3 examples: cellular computation, the eye/retina, and the brain)

So the efficiency of photovoltaic cells vs photosynthesis is relevant as a particular counterexample (and based on 30 minutes of googling it looks like biology did find solutions roughly on par - at least for conversion to ATP).
One source of interest is the prospect of improving food production efficiency by re-engineering photosynthesis.

No animals do nuclear fusion to extract energy from their food, meaning that they're about 11 orders of magnitude off from the optimal use of matter.

The inverted vs. everted retina thing is interesting, and it makes sense that there are space-and-mass-saving advantages to putting neurons inside the eye, especially if your retinas are a noticeable fraction of your weight (hence the focus on "small, highly-visual species"). But it seems like for humans in particular, having an everted retina would likely be better: "The results from modelling nevertheless indicate clearly that the inverted retina offers a space-saving advantage that is large in small eyes and substantial even in relatively large eyes. The advantage also increases with increasingly complex retinal processing and thus increasing retinal thickness. [...] Only in large-eyed species, the scattering effect of the inverted retina may indeed pose a disadvantage and the everted retina of cephalopods may be superior, although it also has its problems." (Kröger and Biehlmaier 2009)

But anyhow, which way around my vs. octopuses' retinas are isn't that big a mistake either way - certainly not an order of magnitude.

To get that big of an obvious failure you might have to go to more extreme stuff like the laryngeal nerve of the giraffe. Or maybe scurvy in humans.

Overall, [shrug]. Evolution's really good at finding solutions but it's really path-dependent. I expect it to be better than human engineering in plenty of ways, but there are plenty of ways the actual global optimum is way too weird to be found by evolution.

No animals do nuclear fusion to extract energy from their food, meaning that they're about 11 orders of magnitude off from the optimal use of matter.

That isn't directly related to any of the claims I made, which specifically concerned the thermodynamic efficiency of cellular computations, the eye, and the brain.

Nuclear fusion may simply be impossible to realistically harness by a cell sized machine self assembled out of common elements.

3 · Charlie Steiner · 5mo
Hence why it's an answer to a question called "Does biology reliably find the global maximum, or at least get close?" :P By analogy, I think it is in fact correct for brains as well. Brains don't use quantum computing or reversible computing, so they're very far from the global optimum use of matter for computation. Those are also hard if not impossible to realistically harness with something made out of living cells.
1 · M. Y. Zuo · 5mo
Neither of the alternatives have been proven to work at scale though? In fact there are still theoretical hurdles for a human brain-size implementation in either case that have not been fully addressed in the literature.
2 · Charlie Steiner · 5mo
Go on, what are some of the theoretical hurdles for a brain-scale quantum computer?
1 · M. Y. Zuo · 5mo
Interconnections between an enormous number of qubits?
If you're talking about decoherence issues, that's solvable with error correcting codes, and we now have a proof that it's possible to completely solve the decoherence problem via quantum error correcting codes. Link to article here: https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/ [https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/] Link to study: https://arxiv.org/abs/2111.03654 [https://arxiv.org/abs/2111.03654]
1 · M. Y. Zuo · 5mo
I'm referring to the real-world engineering problem that interconnection requirements scale exponentially with the number of qubits. There simply isn't enough volume to make it work beyond an upper threshold number of qubits, since they also have to be quite close to each other. It's not at all been proven what this upper limit is, or that it allows for capabilities matching or exceeding the average human brain. If the size is scaled down to reduce the distances, another problem arises: there's a maximum limit to the amount of power that can be supplied to any unit volume, especially when cryogenic cooling is required, as cooling and refrigeration systems cannot be perfectly efficient.

Something with 1/100th the efficiency of the human brain and the same size might work, i.e. 2 kW instead of 20 watts. But something with 1/1,000,000th the efficiency of the human brain and the same size would never work, since it's impossible for 20 MW of power to be supplied to such a concentrated volume while cooling away the excess heat sufficiently. That is a hard thermodynamic limit.

There is the possibility of the qubits being spread around quite a bit farther from each other, i.e. in a room-size space, but that goes back to the first issue, as it brings exponentially increasing losses from such things as signalling issues. Those may be partially mitigated by improvements such as error correcting codes, but there cannot exist a 'complete' solution, as perfectly lossless information transmission is only an ideal and not achievable in practice.
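The power-density claim can be put in rough numbers (a sketch; the ~1.3 L brain volume is my assumption, and the reactor comparison is only order-of-magnitude):

```python
# Rough power density for a brain-sized (~1.3 L) computer at various efficiencies.
BRAIN_VOLUME_L = 1.3   # assumed human brain volume in liters

for label, watts in [("brain itself (20 W)", 20.0),
                     ("1/100th efficiency (2 kW)", 2e3),
                     ("1/1,000,000th efficiency (20 MW)", 20e6)]:
    print(f"{label}: {watts / BRAIN_VOLUME_L:,.0f} W/L")

# A fission reactor core dissipates on the order of 1e5 W/L, so the ~1.5e7 W/L
# of the 20 MW case is far beyond any known way to deliver power and remove heat.
```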
One of the bigger problems that was solved recently is error correction. Without actively cooling things down, quantum computers need error correction, and it used to be a real issue. However, this was solved a year ago, at least in theory. It also solves the decoherence problem, which allows in theory room temperature computers. It's at least a possibility proof. The article's link is here: https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/ [https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/] And the actual paper is here: https://arxiv.org/abs/2111.03654 [https://arxiv.org/abs/2111.03654] Other than that, the problems are all practical.
2 · Charlie Steiner · 5mo
Oh, cool! I'm not totally clear on what this means - did things like the toric code [https://en.wikipedia.org/wiki/Toric_code] provide error correction in a linear number of extra steps, while this new result paves the way for error correction in a logarithmic number of extra steps?
Basically, the following properties hold for this code (I'm trusting Quanta Magazine to report the study correctly):

1. It is efficient like classical codes.
2. It can correct many more errors than previous codes.
3. It has constant ability to suppress errors, no matter how large the sequence of bits you've started with.
4. Each check sums only a very low number of bits/qubits, called the LDPC property in the Quanta article.
5. It has local testability: errors can't hide themselves, and any check can reveal a large proportion of errors, evading Goodhart's Law.
Yeah, that's the big one for brains. I might answer using a similar example soon, but that might be a big one, as provisionally the latter has 35 more orders of magnitude worth of computing power.
1 · the gears to ascension · 5mo
one might say you're talking about costreg foom, not kardashev foom
Even here, that doesn't apply to quantum/reversible computers, or superconducting wires.

The article I linked argues that the inverted retina is near optimal, if you continue reading...

The scattering effects are easily compensated for:

Looking out through a layer of neural tissue may seem to be a serious drawback for vertebrate vision. Yet, vertebrates include birds of prey with the most acute vision of any animal, and even in general, vertebrate visual acuity is typically limited by the physics of light, and not by retinal imperfections.

So, in general, the apparent challenges with an inverted retina seem to have been practically abolished.

...
2 · Charlie Steiner · 5mo
The benefit of the inverted retina doesn't scale with size. It decreases with size. Amount of retina scales like r^2, while amount of eyeball to put neurons in scales like r^3. This means that the smaller you are, the harder it is to find space to put neurons, while the bigger you are, the easier it is. This is why humans have eyeballs full of not-so-functional vitreous humor, while the compound eyes of insects are packed full of optical neurons. Yes, cephalopods also have eye problems. In fact, this places you in a bit of a bind - if evolution is so good at making humans near-optimal, why did evolution make octopus eyes so suboptimal? The obvious thing to do is to just put the neurons that are currently in front of the retina in humans behind the retina instead. Or if you're an octopus, the obvious thing to do is to put some pre-processing neurons behind the retina. But these changes are tricky to evolve as a series of small mutations (the octopus eye changes less so - maybe they have hidden advantages to their architecture). And they're only metabolically cheap for large-eyed, large-bodied creatures - early vertebrates didn't have all this free space that we do.
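The r²-vs-r³ point can be made concrete: the eyeball volume available per unit of retinal area grows linearly with radius (r/3 for a sphere), so small eyes are starved for neuron space while large eyes have plenty. The radii below are illustrative, not measurements:

```python
import math

def volume_per_retinal_area(radius_mm: float) -> float:
    """Sphere volume divided by surface area; simplifies to r/3."""
    volume = (4 / 3) * math.pi * radius_mm ** 3
    area = 4 * math.pi * radius_mm ** 2
    return volume / area

for r in (0.5, 12.0, 40.0):  # tiny eye, roughly human-sized, large-eyed animal
    ratio = volume_per_retinal_area(r)
    print(f"r = {r:4.1f} mm -> {ratio:.2f} mm^3 of interior per mm^2 of retina")
```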
It's the advantage of compression reduction that generally scales with size/resolution due to the frequency power spectrum of natural images. Obvious perhaps, but also wrong, it has no ultimate advantage. Evidence for near-optimality of inverted retina is not directly evidence for sub-optimality of everted retina: it could just be that either design can overcome tradeoffs around the inversion/eversion design choice.

How do you view the claim that human cells are near a critical upper limit?

2 · Charlie Steiner · 5mo
Here's what I'd agree with: Specific cell functions are near a local optimum of usefulness, in terms of small changes to DNA that could have been supported against mutation with the fraction of the selection budget that was allocated to those functions in the ancestral environment. This formulation explains why human scurvy is allowed - producing vitamin C was unimportant in our ancestral environment, so the gene for it was allowed to degrade. And it doesn't fault us for not using fusion to extract energy from food - there's no small perturbation to our current digestive tract that starts a thermonuclear reaction.
1 · Gerald Monroe · 5mo
It's probably just wrong. For a trivial disproof: I will assume, as stated, that human neurons are at the Landauer limit. Well, we know from measurements and other studies that nerve cells are unreliable. Their failures to fire, their exhaustion of internal fuel supplies so they stop pulsing when they should, all the numerous ways the brain makes system-level errors, and the slow speed of signaling mean that, as a system, the brain is nowhere close to optimal. (I can provide sources for all claims.)

The Landauer limit is for error-free computations. When you inject random errors you lose information and system precision, and thus a much smaller error-free system would be equal in effectiveness to the brain. This is likely why we are hitting humanlike performance in many domains with a small fraction of the estimated compute and memory of a brain.

Also, when you talk about artificial systems: the human brain has no expansion ports, no upload or download interfaces, and no way to use a gigawatt of power to solve more difficult problems, etc. So even if we could never do better for the 20 watts the brain uses, in practice that doesn't matter.

I am confused about the Landauer limit for biological cells other than nerve cells, since it only applies to computation. But I want to ask: is this notion actually true?

Biological cells are robots that must perform myriad physical computations, all of which are tightly constrained by the thermodynamic Landauer Limit. This applies to all the critical operations of cells including DNA/cellular replication, methylation, translation, etc.

The lower Landauer bound is 0.02 eV, which translates into a minimal noise voltage of 20 mV. Ion flows in neural signals operate on voltage swings around 100 mV, close to the practical limits at low reliability levels.

The basic currency of chemical energy in biology is ATP, which is equivalent to about 1e-19 J or roughly 1 eV, the practical limit for reliable computation. Proteins can perform various reliable computations from single or few ATP transactions, including transcription.
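Sanity-checking these figures from first principles (my own arithmetic; T ≈ 310 K for body temperature, and the 1e-19 J ATP figure is the one quoted above):

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 310.0              # approximate body temperature, K
EV = 1.602176634e-19   # joules per electron-volt

landauer_J = k_B * T * math.log(2)   # minimum cost to erase one bit
print(f"Landauer bound: {landauer_J / EV:.3f} eV")   # ~0.019 eV, i.e. ~20 mV per electron charge

atp_J = 1e-19   # ATP hydrolysis energy as quoted above (~0.6 eV)
print(f"ATP: {atp_J / EV:.2f} eV = {atp_J / landauer_J:.0f}x the Landauer bound")
```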

A cell has shared read only storage through the genome, and then a larger writable storage system via the epigenome, which naturally is also near thermodynamically optimal, typically using 1 ATP to read or write a bit or two reliably.

From "Information and the Single Cell":

Thus, the epigenome provides a very appreciable store of cellular information, on the order of 10 gigabytes per cell. It also operates over a vast range of time scales, with some processes changing on the order of minutes (e.g. receptor transcription) and others over the lifetime of the cell (irreversible cell fate decisions made during development). Finally, the processing costs are low: reading a 2-bit base-pair costs only 1 ATP.

Computation by wetware is vastly less expensive than cell signaling [11]; a 1-bit methylation event costs 1 ATP (though maintaining methylation also incurs some expense [63]).

According to estimates in "Science and Engineering Beyond Moore’s Law", an E. coli cell has a power dissipation rate of 1.4e-13 W and takes 2400 s for replication, which implies a thermodynamic limit of at most ~1e11 bits, which is close to their estimates of the cell's total information content:

This result is remarkably close to the experimental estimates of the informational content of bacterial cells based on microcalorimetric measurements which range from 1e11 to 1e13 bits per cell. In the following, it is assumed that 1 cell = 1e11 bit, i.e., the conservative estimate is used
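Reproducing that thermodynamic-limit estimate (my own arithmetic; assumes T ≈ 300 K):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed ambient temperature, K
power_W = 1.4e-13    # E. coli power dissipation, as quoted above
repl_time_s = 2400   # replication time, as quoted above

energy_J = power_W * repl_time_s                 # total energy budget per replication
max_bits = energy_J / (k_B * T * math.log(2))    # Landauer-limited bit erasures
print(f"{energy_J:.2e} J -> at most ~{max_bits:.1e} bits")   # ~1.2e11, i.e. ~1e11 bits
```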

A concrete setting in which to think about this would be the energy cost of an exonuclease severing a single base pair from a DNA molecule that was randomly synthesized and inserted into a test tube in a mixture of nucleotide and nucleoside monomers. The energy cost of severing the base pair, dissociating their hydrogen bond, and separating them irretrievably into the random mixture of identical monomers using thermal energy, would be the cost in energy of deleting 2 bits of information.

Unfortunately, I haven't been able to find the amount of ATP consumed ...

This is like saying there is massive redundancy in a GPU chip because the same bits are stored on wires in transit, in various caches, in the register file, and in correlated intermediate circuit states - and just as ridiculous. The comparison here is the energy required for important actions such as complete replication of the entire nanobot, which cells accomplish using efficiency close to the minimal thermodynamic limit. Any practical nanobot that does anything useful - such as replicate itself - will also need to read from its long term storage (with error correction), which will induce correlations into the various mechanical motor and sensor systems that combine the info from long-term storage with that sensed from the environment and perform the necessary intermediate computations chaining into motor outputs.
Far from ridiculous, I think this is a key point. As you point out, both cells and nanobots require information redundancy to replicate. We can consider the theoretical efficiency of information deletion in terms of two components:

1. The energy required to delete one bit from an individual information-storing structure, such as a DNA molecule.
2. The average amount of redundancy per bit in the cell or nanobot.

These are two separate factors and we have to consider them both to understand whether or not nanobots can operate with greater energy efficiency than cells.
Replication involves copying, and thus erasing bits from the environment, not from storage. The optimal non-redundant storage nanobot already exists - a virus. But it's hardly interesting, and regardless, the claim I originally made is about Pareto optimality.
Popping out to a meta-level, I am not sure if your aim in these comments is to communicate an idea clearly and defend your claims in a way that's legible and persuasive to other people? For me personally, if that is your aim, there are two or three things that would be helpful.

1. Use widely accepted jargon in ways that clearly (from other people's perspective) fit the standard definition of those terms. Otherwise, supply a definition, or an unambiguous example.
2. Make an effort to show how your arguments and claims tie into the larger point you're trying to make. If the argument is getting away from your original point, explain why, and suggest ways to reorient.
3. If your conversational partner offers you examples to illustrate their thinking, and you disagree with the examples or interpretation, then try using those examples to make your point.

For example, you clearly disagree with some aspect of my previous comment about redundancy, but based on your response, I can't really discern what you're disagreeing with or why. I'm ready to let go of this conversation, but if you're motivated to make your claims and arguments more legible to me, then I am happy to hear more on the subject. No worries either way.
Upstream, this subthread started when the OP said:

To which I replied

You then replied with a tangential thread (from my perspective) about 'erasing genetic information', which is not a subgoal of a biological cell (if anything, the goal of a biological cell is the exact opposite - to replicate genetic information!)

So let me expand my claim/argument: A robot is a physical computer built out of atomic widgets: sensors, actuators, connectors, logic gates, ROM, RAM, interconnect/wires, etc. Each of these components is also a physical computer bound by the Landauer limit. A nanobot/cell in particular is a robot with the unique ability to replicate - to construct a new copy of itself. This requires a large number of bit erasures and thus energy expenditure proportional to the information content of the cell.

Thermodynamic/energy efficiency is mostly a measure of the fundamental widgets themselves. For example, in a modern digital computer, the thermodynamic efficiency is a property of the node process, which determines the size, voltage, and electron flow of transistors and interconnect. CMOS chips have increased in thermodynamic efficiency over time ala Moore's Law.

So then we can look at a biological cell, as a nanobot, and analyze the thermodynamic efficiency of its various elemental computational widgets, which include DNA-to-RNA transcription (reading from DNA ROM to RNA RAM cache), computations (various RNA operations, methylation, protein interactions, etc.), and translation (RNA to proteins) - and I provided links to sources establishing that these operations are all efficient down to the Landauer Limit.

Then there is only one other notion of efficiency we may concern ourselves with, which is system-level circuit efficiency. I mostly avoided discussing this because it's more complex to analyze and also largely orthogonal to low-level thermodynamic/energy efficiency. For example, you could have 2 different circuits that both add 32 bit numbers, and one uses 100k l
The E. coli calculations make no sense to me. They posit huge orders-of-magnitude differences between an "optimal" silicon-based machine and a carbon one (an E. coli cell). I attribute this to bogus calculations.

The one part I scrutinized: they use equation 7 to estimate the information content of an E. coli bacterium at ~1/2 TB. Now that just sounds absurd to me. That sounds like the amount you'd need to specify the full state of an E. coli at a given point in time (and indeed, that is what equation seven seems to be doing). They then say that E. coli performs the task of forming an atomically precise machine out of a max-entropy state, instead of the actual task of "make a functioning E. coli cell, never mind the exact atomic conditions", and see how long it would take some kind of gimped silicon computer (because "surely silicon machines can't function in kilokelvin temperatures?") to do that task. Then they say "oh look, silicon machines are 3 OOM slower than biological cells".
The methodology they are using to estimate the bit info content of the bio cell is sound, but the values they plug in result in a conservative overestimate. A functioning E. coli cell does require atomically precise assembly of at least some components (notably DNA) - but naturally there is some leeway in the exact positioning and dynamic deformation of other components (like the cell wall), etc. But a bio cell is an atomically precise machine, more or less.

They assume 32 bits of xyz spatial position for each component, and they assume atoms as the building blocks, and they don't consider alternate configurations, but that seems to be a difference of one or a few OOM, not many. And indeed, from my calc, their estimate is 1 OOM from the maximum info content as implied by the cell's energy dissipation and time for replication (which worked out to 1e11 bits, I think). There was another paper linked earlier which used a more detailed methodology and got an estimate of a net energy use of only 6x the lower unreliable Landauer bound, which also constrains the true bit content to be in the range of 1e10 to 1e11 bits.

Not quite - they say a minimalist serial von Neumann silicon machine is 2 OOM slower:

Their silicon cell is OOM inefficient because: 1.) it is serial rather than parallel, and 2.) it uses digital circuits rather than analog computations.
Thanks for taking the time to write this out, it's a big upgrade in terms of legibility!

To be clear, I don't have a strong opinion on whether or not biological cells are or are not close to being maximum thermodynamic efficiency. Instead, I am claiming that aspects of this discussion need to be better-defined and supported to facilitate productive discussion here. I'll just do a shallow dive into a couple aspects.

Here's a quote from one of your sources [https://www-sciencedirect-com.proxy.lib.umich.edu/science/article/pii/S0959438821001173]:

I agree with this source that, if we ignore the energy costs to maintain the cellular architecture that permits transcription, it takes 1 ATP to add 1 rNTP to the growing mRNA chain. In connecting this to the broader debate about thermodynamic efficiency, however, we have a few different terms and definitions for which I don't yet see an unambiguous connection:

* The Landauer limit, which is defined as the minimum energy cost of deleting 1 bit.
* The energy cost of adding 1 rNTP to a growing mRNA chain and thereby (temporarily) copying 1 bit.
* The power per rNTP required to maintain a copy of a particular mRNA in the cell, given empirical rates of mRNA decay.

I don't see a well-grounded way to connect these energy and power requirements for building and maintaining an mRNA molecule to the Landauer limit. So at least as far as mRNA goes, I am not sold on (1).

I'm sure you understand this, but to be clear, "doing all the same things" as a cell would require being a cell. It's not at all obvious to me why being effective at replicating E. coli's DNA would be a design requirement for a nanobot. The whole point of building nanobots is to use different mechanisms to accomplish engineering requirements that humans care about. So for example, "can we build a self-replicating nanobot that produces biodiesel in a scalable manner more efficiently than a genetically engineered E. coli cell?" is a natural, if still unde
Entropy is conserved. Copying a bit of DNA/RNA/etc. necessarily erases a bit from the environment. The Landauer limit applies.

This is why I used the term Pareto optimal and the foundry process analogy. A 32nm node tech is not Pareto optimal - a later node could do pretty much everything it does, only better. If biology is far from Pareto optimal, then it should be possible to create strong nanotechnology - artificial cells that do everything bio cells do, but OOM better. Most importantly, strong nanotech could replicate much faster and using much less energy. Strong nanotech has been proposed as one of the main methods by which unfriendly AI could near-instantly kill humanity. If biology is Pareto optimal at what it does, then only weak nanotech is possible, which is just bioengineering by another (unnecessary) name.

This relates to the debate about evolution: my prior is that evolution is mysterious, subtle, and superhuman. If you think you found a design flaw, you are probably wrong. This has borne out well so far - the inverted retina is actually optimal, some photosynthesis is as efficient as efficient solar cells, etc.

None of this has anything to do with goals other than biological goals. Considerations of human uses of biology are irrelevant.
This is not a legible argument to me. To make it legible, you would need a person who does not have all the interconnected knowledge that is in your head to be able to examine these sentences and (quickly) understand how these arguments prove the conclusion. N of 1, but I am a biomedical engineering graduate student and I cannot parse this argument.

What is "the environment?" What do you mean mechanistically when you say "copying a bit?" What exactly is physically happening when this "bit" is "erased" in the case of, say, adding an rNTP to a growing mRNA chain?

Here's another thing you could do to flesh things out: Describe a specific form of "strong nanotech" that you believe some would view as a main method an AI could use to kill humanity nearly instantly, but that is ruled out based on your belief that biology is Pareto optimal. Obviously, I'm not asking for blueprints. Just a very rough general description, like "nanobots that self-replicate, infect everybody's bodies, and poison them all simultaneously at a signal from the AI."

I may be assuming familiarity with the physics of computation and reversible computing.

Copying information necessarily overwrites and thus erases information (whatever was stored prior to the copy write). Consider a simple memory with 2 storage cells. Copying the value of cell 0 to cell 1 involves reading from cell 0 and then writing said value to cell 1, overwriting whatever cell 1 was previously storing.

The only way to write to a memory without erasing information is to swap, which naturally is fully reversible. So a reversible circuit could swap the contents of the storage cells, but swap is fundamentally different than copy. Reversible circuits basically replace all copies/erasures with swaps, which dramatically blows up the circuit (they always have the same number of outputs as inputs, so simple circuits like AND produce an extra garbage output which must propagate indefinitely).
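A minimal sketch (my illustration, not from the comment) of the copy-vs-swap distinction on the two-cell memory described above: the copy destroys cell 1's prior value, so no function can undo it, while the swap is its own inverse.

```python
def copy(cells):
    """Irreversible: cell 1's prior value is overwritten and lost."""
    cells[1] = cells[0]
    return cells

def swap(cells):
    """Reversible: applying it twice restores the original state."""
    cells[0], cells[1] = cells[1], cells[0]
    return cells

state = [1, 0]
assert swap(swap(state.copy())) == state  # swap is self-inverse
copied = copy(state.copy())               # [1, 1] - the 0 is gone
# No operation can recover the original [1, 0] from [1, 1] alone:
# the erased bit's entropy has to go somewhere (the environment).
```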

An assembler which takes some mix of atoms/parts from the environment and then assembles them into some specific structure is writing information and thus also erasing information. The assembly process removes/erases entropy from the original configuration of the environment (atoms/parts) memory, ...

This is very helpful. I am definitely unfamiliar with the physics of computation and reversible computing, but your description was quite clear.

If I'm following you, "delete" in the case of mRNA assembly would mean that we have "erased" one rNTP from the solution, then "written" it into the growing mRNA molecule. The Landauer limit gives the theoretical minimal energy required for the "delete" part of this operation. You are saying that since 1 high-energy phosphate bond (~1 ATP) is all that's required to do not only the "delete" but also the "write," and since the energy contained in this bond is pretty close to the Landauer limit, we can say there's relatively little room to improve the energy efficiency of an individual read/write operation by using some alternative mechanism. As such, mRNA assembly approaches not only Pareto optimality, but a true minimum of energy use for this particular operation. It may be possible to improve other aspects of the read/write operation, such as its reliability (mRNA transcription is error-prone) or speed. However, if the cell is Pareto optimal, then this would come at a tradeoff with some other trait, such as energy efficiency.

If I am interpreting you correctly so far, then I think there are several points to be made.

1. There may be a file drawer problem operating here. Is a paper finding that some biological mechanism is far from Pareto optimal or maximally thermodynamically efficient going to be published? I am not convinced about how confidently we can extrapolate beyond specific examples. This makes me quite hesitant to embrace the idea that individual computational operations, not to mention whole cell-scale architectures, are maximally energy efficient.

2. The energy of ATP hydrolysis is still almost 30x the Landauer limit, even ignoring the energy-consuming cellular context in which its energy can be used to do useful delete/copy operations. So there seems to be theoretical room for improvement.
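To put rough numbers on point 2 (a back-of-the-envelope check; the ~57 kJ/mol figure for ATP hydrolysis under cellular conditions is an assumed ballpark, not a number from this thread):

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 310.0                           # body temperature, K
landauer = k_B * T * math.log(2)    # ~3.0e-21 J per bit erased

# Free energy of ATP hydrolysis under cellular conditions is roughly
# 50-60 kJ/mol; 57 kJ/mol is taken here as a representative value.
atp_per_molecule = 57e3 / 6.022e23  # ~9.5e-20 J per ATP

ratio = atp_per_molecule / landauer
print(f"Landauer limit at 310 K: {landauer:.2e} J")
print(f"ATP hydrolysis energy:   {atp_per_molecule:.2e} J")
print(f"Ratio: ~{ratio:.0f}x")     # on the order of 30x, as claimed
```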
1. If anything, I'd say the opposite is true - inefficiency in key biochemical processes that are under high selection pressure is surprising and more notable. For example, I encountered some papers the other day about the apparent inefficiency of a key photosynthesis enzyme.

2. I don't know quite what you are referring to here, but I'm guessing you are confusing the reliable vs. unreliable limits, which I discussed in my brain efficiency post and linked somewhere else in this thread. That paper Gunnar found analyzes replication efficiency in more depth: https://aip.scitation.org/doi/10.1063/1.4818538 I haven't read the paper in enough detail to know whether that 6x accounts for reliability/errors or not.
I don't follow this. In what sense is a bit getting moved to the environment?

I previously read Deconfusing Landauer's Principle [https://www.lesswrong.com/posts/9zKKweu5826vSigu3/deconfusing-landauer-s-principle] and... well, I don't remember it in any depth. But if I consider the model shown in figures 2-4, I get something like: "we can consider three possibilities for each bit of the grid. Either the potential barrier is up, and if we perform some measurement we'll reliably get a result we interpret as 1. Or it's up, and 0. Or the potential barrier is down (I'm not sure if this would be a stable state for it), and if we perform that measurement we could get either result."

But then if we lower the barrier, tilt, and raise the barrier again, we've put a bit into the grid, but it doesn't seem to me that we've moved the previous bit into the environment. I think the answer might be "we've moved a bit into the environment, in the sense that the entropy of the environment must have increased"? But that needs Landauer's principle to see it, and I take the example as being "here's an intuitive illustration of Landauer's principle," in which case it doesn't seem to work for that. But perhaps I'm misunderstanding something?

(Aside: I said in the comments of the other thread something along the lines of, it seems clearer to me to think of Landauer's principle as about the energy cost of setting bits than the energy cost of erasing them. Does that seem right to you?)
Yes, entropy/information is conserved, so you can't truly erase bits. Erasure just moves them across the boundary separating the computer and the environment. This typically manifests as heat. Landauer's principle is actually about the minimum amount of energy required to represent or maintain a bit reliably in the presence of thermal noise. Erasure/copying then results in equivalent heat energy release.
I want to jump in and provide another reference that supports jacob_cannell's claim that cells (and RNA replication) operate close to the thermodynamic limit. There are some caveats that apply if we compare this to different nanobot implementations:

* A substrate needing fewer atoms/bonds might be used - then we'd have to assemble fewer atoms and thus need less energy. DNA is already very compact, with no OOM left to spare, but maybe the rest of the cell content could be improved. As mentioned, for viruses there is really no OOM left.
* A heat bath and a solution of needed atoms are assumed, but no reuse of more complicated molecules. Maybe there are sweet spots in engineering space between macroscopic source materials (refined silicon, iron, pure oxygen, etc., as in industrial processes) and a nutrient soup.
This part about function is important, since I don't think the things we want out of nanotech perfectly overlap with biology itself, and that can cause energy efficiency to increase or decrease.
My comment above addresses this

I'm assuming you're using "global maximum" as a synonym for "pareto optimal," though I haven't heard it used in that sense before. There are plenty of papers arguing that one biological trait or another is pareto optimal. One such (very cool) paper, "Motile curved bacteria are Pareto-optimal," aggregates empirical data on bacterial shapes, simulates them, and uses the results of those simulations to show that the range of shapes represent tradeoffs for "efficient swimming, chemotaxis, and low cell construction cost."

It finds that most shapes are pretty efficient swimmers: slightly elongated round shapes and curved rods are fastest, while long straight rods are notably slower. However, these long straight rod-shaped bacteria have the highest chemotactic signal-to-noise ratio, because they can better resist being jostled around by random liquid motion. Meanwhile, spherical shapes are probably easiest to construct, since you need special mechanical structures to hold rod and helical shapes. Finally, they show that all but two of the bacterial species they examined have body shapes on the pareto frontier.

If true, what would this "pareto optimality" principle mean generally?

Conservatively, it would indicate that we won't often find bad biological designs. If a design appears suboptimal, it suggests we need to look harder to identify the advantage it offers. Along this theme, we should be wary of side effects when we try to manipulate biological systems. These rules of thumb seem wise to me.

It's more of a stretch to go beyond caution about side effects and claim that we're likely to hit inescapable tradeoffs when we try to engineer living systems. Human goals diverge from maximizing reproductive fitness, we can set up artificial environments to encourage traits not adaptive in the wild, and we can apply interventions to biological systems that are extremely difficult, if not impossible, for evolution to construct.

Take the bacteria as an example. If this paper's conclusions are true, then elongated rods have the highest chemotactic SNR, but are difficult to construct. In the wild, that might matter a lot. But if we want to grow a f*ckload of elongated rod bacteria, we can build some huge bioreactors and do so. In general, we can deal with a pareto frontier by eliminating the bottleneck that locks us into the position of the frontier.

Likewise, the human body faces a tradeoff between being too vigilant for cancer (and provoking harmful autoimmune responses) and being too lax (and being prone to cancer). But we humans can engineer ever-more-sophisticated systems to detect and control cancer, using technologies that simply are not available to the body (perhaps in part for other pareto frontier reasons). We still face serious side effects when we administer chemo to a patient, but we can adjust not only the patient's position on the pareto frontier, but also the location of that frontier itself.

The most relevant pareto-optimality frontiers are computational: biological cells being computationally near optimal in both storage density and thermodynamic efficiency seriously constrains or outright dashes the hopes of nanotech improving much on biotech. This also indirectly relates to brain efficiency.
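For a rough sense of the storage-density side of this claim, here is a back-of-the-envelope calculation (the 2 bits per base pair and ~650 g/mol average base-pair mass are my assumed figures, not numbers from the comment):

```python
AVOGADRO = 6.022e23
BP_MASS_G_PER_MOL = 650.0  # average mass of one DNA base pair (assumption)
BITS_PER_BP = 2.0          # 4 possible bases per position = 2 bits

bp_per_gram = AVOGADRO / BP_MASS_G_PER_MOL     # ~9.3e20 base pairs
bits_per_gram = bp_per_gram * BITS_PER_BP      # ~1.9e21 bits
exabytes_per_gram = bits_per_gram / 8 / 1e18
print(f"{exabytes_per_gram:.0f} EB per gram of DNA")  # roughly 230 EB/g
```

Hundreds of exabytes per gram is many orders of magnitude denser than any current storage medium, which is the sense in which DNA sits near the top of its design space.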

Not really, without further assumptions. The two largest assumptions are:

1. We are strictly limited to classical computing for the future, and there are no superconducting materials to help us. Now, I have a fairly low probability for superconducting/reversible/quantum computers this century, on the order of 2-3%. Yet my view is conditional on no x-risk; assuming 1,000-10,000 years are allowed, I have 1-epsilon probability on quantum computers and superconductors being developed, and more like 10-20% on reversible computing.

2. We can't use more energy. Charlie Steiner gives an extreme case, but if we can increase the energy, we can get much better results.

And note that this is disjunctive: if either assumption is wrong, your case collapses.
Neither 1 nor 2 is related to the thermodynamic efficiency of biological cells or hypothetical future nanotech machines. It is very unlikely that exotic superconducting/reversible/quantum computing is possible for cell-sized computers in a room-temperature heat bath environment. Too much entropy to deal with.
My point is that your implications only hold if the other assumptions hold too, not just the efficiency assumption. Also, error correction codes exist for quantum computers, which deal with the room-temperature decoherence issue you're talking about; that's why I'm so confident about quantum computers working. Superconductors are also now understood: https://www.quantamagazine.org/high-temperature-superconductivity-understood-at-last-20220921/

On quantum error correction: https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/

And the actual study: https://arxiv.org/abs/2111.03654
How does reversible/quantum computing help with protein transcription or DNA replication? Neither of those exotic computing techniques reduce the fundamental thermodynamic cost of physical bit erasures/copies from what I understand.
Because we get to use the Margolus-Levitin limit, which bounds computation at roughly 6x10^33 operations per second per joule of energy. This means we get a 15-orders-of-magnitude decrease from your estimate of 1e-19 joules for one bit, which is much better news for nanotech. I have even better news for total computation limits: the speed limit works out to 5.4x10^50 operations per second for a kilogram of matter. And since you claimed that computational limits matter for biology, the relevance is obvious.

A link to the Margolus-Levitin theorem: https://en.m.wikipedia.org/wiki/Margolus%E2%80%93Levitin_theorem

In the fully reversible case, the answer is that zero energy is expended.
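The arithmetic behind the per-kilogram figure can be checked directly (a sketch assuming the Margolus-Levitin bound of 4E/h operations per second and the rest-mass energy of 1 kg; this is my reconstruction, not a derivation from the thread):

```python
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

ops_per_joule = 4 / h              # Margolus-Levitin bound: <= 4E/h ops/s
E_one_kg = 1.0 * c**2              # rest-mass energy of 1 kg, ~9e16 J
ops_per_second = ops_per_joule * E_one_kg
print(f"{ops_per_second:.1e} ops/s")  # ~5.4e50, matching the quoted figure
```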
That doesn't help with bit erasures and is thus irrelevant to what I'm discussing - the physical computations cells must perform.
The nice thing about quantum computers is that they're mostly reversible, ie swaps can always be done with zero energy, until you make a measurement. Once you do, you have to pay the energy cost, which I showed in the last comment. We don't need anything else here. Thanks to porby for mentioning this.
You seem confused here - reversible computations do not, can not erase/copy bits, all they can do is swap/transfer bits, moving them around within the computational system. Bit erasure is actual transference of the bit entropy into the external environment, outside the bounds of the computational system (which also breaks internal quantum coherence from what I recall, but that's a side point). Replication/assembly requires copying bits into (and thus erasing bits from) the external environment. This is fundamentally an irreversible computation.
Could you elaborate on this? I'm pretty surprised by an estimate that low conditioned on ~normalcy/survival, but I'm no expert.
Admittedly, this is me thinking worst-case scenario, where no technology can reliably improve the speed of getting to those technologies. If I had to compute an average case, I'd operationalize the following predictions:

Will a quantum computer be sold to 10,000+ customers with a qubit count of at least 1,000 by 2100? Probability: 15-25%.

Will superconductors be used in at least one grid in Europe, China, or the US by 2100? Probability: 10-20%.

Will reversible computers be created by a company with at least $100 million in market cap by 2100? Probability: 1-5%.

Now, I'm somewhat pessimistic about reversible computers, as they may not be possible, but I think there's a fair chance of superconductors and quantum computers by 2100.
Thanks! My understanding is that a true quantum computer would be a (mostly) reversible computer as well, by virtue of quantum circuits being reversible [https://physics.stackexchange.com/questions/270266/why-do-quantum-gates-have-to-be-reversible?noredirect=1&lq=1]. Measurements aren't (apparently) reversible, but they are deferrable. Do you mean something like... in practice, quantum computers will be narrowly reversible, but closer to classical computers as a system because they're forced into many irreversible intermediate steps?
Not really. I'm focused on fully reversible systems here, as they theoretically allow you to reverse errors without dissipating any energy, so the energy stored there can keep on going. It's a great advance, and it's stronger than you think, since we don't need intermediate steps anymore; I'll link to the article here: https://www.quantamagazine.org/computer-scientists-eliminate-pesky-quantum-computations-20220119/

But I'm focused on full reversibility, i.e. the measurement step can't be irreversible.

Basically, as far as I can tell, the answer is no, except with a bunch of qualifiers. Jacob Cannell has at least given some evidence that biology reliably finds Pareto-optimal-ish designs, but not global maxima.

In particular, his claims about biology never being improved by nanotech are subject to Extremal Goodhart.

For example, quantum computing/reversible computing or superconductors would entirely break his statement about optimal nanobots.

Ultimate limits from reversible computing/quantum computers come here:


From Gwern:

No, it's not. As I said, a skyscraper of assumptions each more dubious than the last. The entire line of reasoning from fundamental physics is useless because all you get is vacuous bounds like 'if a kg of mass can do 5.4e50 quantum operations per second and the earth is 6e24 kg then that bounds available operations at 3e65 operations per second' - which is completely useless because why would you constrain it to just the earth? (Not even going to bother trying to find a classical number to use as an example - they are all, to put it technically, 'very big'.)

Why are the numbers spat out by appeal to fundamental limits of reversible computation, such as but far from limited to, 3e75 ops/s, not enough to do pretty much anything compared to the status quo of systems topping out at ~1.1 exaflops or 1.1e18, 57 orders of magnitude below that one random guess? Why shouldn't we say "there's plenty of room at the top"? Even if there wasn't and you could 'only' go another 20 orders of magnitude, so what? What, exactly, would it be unable to do that it would if you subtracted or added 10 orders of magnitude*, and how do you know that? Why would this not decisively change economics, technology, politics, recursive AI scaling research, and everything else? If you argue that this means it can't do something in seconds and would instead take hours, how is that not an 'intelligence explosion' in the Vingean sense of being an asymptote and happening far faster than prior human transitions taking millennia or centuries, and being a singularity past which humans cannot see nor plan? Is it not an intelligence explosion but an 'intelligence gust of warm wind' if it takes a week instead of a day? Should we talk about the intelligence sirocco instead?

This is why I say the most reliable parts of your 'proof' are also the least important, which is the opposite of what you need, and serves only to dazzle and 'Eulerize' the innumerate.

  • btw I lied; that multiplies to 3e75, not 3e65. Did you notice?

Landauer's limit only 'proves' that when you stack it on a pile of assumptions a mile high about how everything works, all of which are more questionable than it. It is about as reliable a proof as saying 'random task X is NP-hard, therefore, no x-risk from AI'; to paraphrase Russell, arguments from complexity or Landauer have all the advantages of theft over honest toil...
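A quick sanity check of the arithmetic in the quoted comment, using only its own figures:

```python
import math

OPS_PER_KG = 5.4e50      # quoted bound on operations per second per kg
EARTH_MASS_KG = 6e24     # quoted mass of the earth
CURRENT_FLOPS = 1.1e18   # quoted status quo, ~1.1 exaflops

bound = OPS_PER_KG * EARTH_MASS_KG          # 3.24e75 - "3e75, not 3e65"
oom_gap = math.log10(bound / CURRENT_FLOPS)
print(f"bound: {bound:.2e} ops/s, gap: ~{oom_gap:.0f} OOM")  # ~57 OOM
```

So the footnote's correction checks out: the product is ~3e75, and the gap to current systems is indeed about 57 orders of magnitude.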

Links to comments here:



One important implication is that in practice, it doesn't matter whether biology has found a pareto optimal solution, since we can usually remove at least one constraint that applies to biology and evolution, even if it's as simple as editing many, many genes at once to completely redesign the body.

This also regulates my Foom probabilities. My view is that I hold a 1-3% chance that the first AI will foom by 2100. Contra Jacob Cannell, Foom is possible, if improbable. Inside his model, everything checks out; outside the model is where he goes wrong.

For example, quantum computing/reversible computing or superconductors would entirely break his statement about optimal nanobots.

Reversible/Quantum computing is not as general as irreversible computing. Those paradigms only accelerate specific types of computations, and they don't help at all with bit erasing/copying. The core function of a biological cell is to replicate, which requires copying/erasing bits, which reversible/quantum computing simply don't help with at all, and in fact just add enormous extra complexity.

If biology reliably found the maximum, we would expect different species to converge on the same photosynthesis process, and we would not be able to improve one species' photosynthesis by swapping in that of another.

https://www.science.org/content/article/fight-climate-change-biotech-firm-has-genetically-engineered-very-peppy-poplar suggests that you can make trees grow faster by adding pumpkin and green algae genes. 

Without reading the link, that sounds like the exact opposite of the conclusion you should reach. Are they implanting specific genes, or many genes?

Green algae had more generation cycles to optimize their photosynthesis than trees have and achieved a better solution as a result.  That clearly suggests that organisms with generation cycles like trees or humans don't reliably find global maxima. 
Or have the green algae just reached a different local maximum? Right?

Unless by "global" you mean "local", I don't see why this statement would hold. Land animals never invented the wheel, even where wheeling is more efficient than walking (like steppes and non-sandy deserts). Same with catalytic coal burning for energy (or other easily accessible energy-dense fossil fuel consumption). Both would give extreme advantages in speed and endurance. There are probably tons of other examples, like direct brain-to-brain communication by linking nerve cells instead of miming and vocalizing.

Is this all there is to Jacob’s comment? Does he cite sources? It’s hard to interrogate without context.

His full comment is this:

The paper you linked seems quite old and out of date. The modern view is that the inverted retina, if anything, is a superior design vs the everted retina, but the tradeoffs are complex.

This is all unfortunately caught up in some silly historical "evolution vs creationism" debate, where the inverted retina was key evidence for imperfect design and thus inefficiency of evolution. But we now know that evolution reliably finds pareto optimal designs:

biological cells operate close to the critical Landauer Limit, and thus are pareto-optimal practical nanobots.

eyes operate at optical and quantum limits, down to single photon detection.

the brain operates near various physical limits, and is probably also near pareto-optimal in its design space.

He cites one source on the inverted eye's superiority over the everted eye, link here:


His full comment is linked here:


That's the context.