There is an idea.

The idea, which has proven somewhat memetically sticky and is at the core of the so-called "effective accelerationist" (e/acc for short) movement, is that there is a simple and powerful principle of non-equilibrium thermodynamics which favors the proliferation of ever more complex, intelligent and productive life (and life-like super-organisms, like countries or markets) over the alternative. It is phrased as follows in Beff Jezos' "Notes on e/acc principles and tenets":

e/acc is about having faith in the dynamical adaptation process and aiming to accelerate the advent of its asymptotic limit; often referred to as the technocapital singularity

Effective accelerationism aims to follow the “will of the universe”: leaning into the thermodynamic bias towards futures with greater and smarter civilizations that are more effective at finding/extracting free energy from the universe and converting it to utility at grander and grander scale

This idea originates in the work of physicist Jeremy England, and in particular in a paper which is explicitly cited in Jezos' post, "Statistical Physics of Adaptation". It's an open access paper, so anyone with the necessary knowledge (the basics of statistical mechanics will do) and goodwill can easily go check it by themselves. I suggest using the linked final PRX version rather than the arXiv preprint, as the latter has some significant differences that were edited in the review process, and I found the final version much clearer (which suggests the editors and referees did their job well, so props to them).

Now, there are obvious philosophical criticisms one can raise against this view, mainly that it seems to derive an ought from an is; even if Nature somehow preferred life that grows unchecked over every alternative, it does not mean we should stick to that preference any more than we did to its penchant for infant death; and if there's a choice for us to make to begin with, obviously it can't be that strong a preference. If you're thrown off an airplane in flight, you don't usually get to pick whether or not to obey the law of gravity. However, here I want to focus on the strength of the physical claim itself. I've read England's paper as well as some other related works, and I want to go over its claims in detail: what they imply, what kind of contexts they apply to, and what they can't instead be used for. The power of a physical theory lies in its ability to make useful predictions - so, is the concept of "dissipative adaptation" actually able to tell us anything about the world we wouldn't have been able to figure out otherwise?

The assumptions

The most important thing that is often overlooked when discussing a physical theory is its realm of applicability, defined by the assumptions the theory is built on. It's way too easy to forget these assumptions and then apply the theory outside of its original domain, where in fact it has no reason to hold any more. England sets out to study the question of non-equilibrium thermodynamics in a system that:

  1. has one or more internal degrees of freedom,
  2. has some macroscopic quantities we can measure, to which multiple microstates can correspond,
  3. is coupled to some external heat bath at temperature $T$, and
  4. is acted upon by some external "forcing" field, distinct from the coupling with the bath, that is capable of doing work on the system.

Hypothesis 1 is trivial, as there is no interest in studying a system without degrees of freedom. Hypothesis 2 is also very common; if your system were, for example, a Petri dish with a colony of E. coli bacteria, you might be able to count the precise number of bacteria, but to that macrostate (e.g. "there are 1,000 bacteria") corresponds an enormous number of possible arrangements of each atom in the dish that would still match that description. Ideally, you could define the macrostate as corresponding to some probability distribution over all possible microscopic arrangements of the atoms.

Hypothesis 3 is typical of the so-called "canonical ensemble" in thermodynamics; it allows the system to exchange energy with the outside, but the thermal nature of this exchange severely limits the possible evolution of the system based on it alone: it drives the system only towards thermal equilibrium (that is, a Boltzmann probability distribution over the system's internal energy states). This would be like putting your Petri dish in a water bath kept constantly at 38 °C to guarantee it keeps a stable temperature.

Hypothesis 4 is where the "non-equilibrium" part of thermodynamics really kicks in; without it, we're just doing good old regular thermodynamics. The presence of a forcing field is another way for the environment outside to introduce energy into the system, but with an all-important difference: where the energy that can be introduced by the bath is thermal, and thus degraded and high entropy, the forcing field can do mechanical or chemical work, and thus introduce high-grade, low-entropy free energy into the system. This is the kind of energy that can do stuff; in particular, it's the kind of energy that a living system usually needs to stay alive.

Examples of systems that match all four conditions are:

  • your Petri dish, if besides keeping it in a thermal bath you also shake it regularly;
  • a puddle of primordial soup, able to exchange heat with the air and ground surrounding it and periodically stimulated by lightning;
  • the Earth as a whole, receiving energy from the Sun and radiating it back out into space.

An example of a system that does not match the four conditions is the Universe as a whole, because as far as we know it doesn't exchange energy with anything outside of it, nor is it subject to any forcing. Most subsets of the Universe we care to define, though, probably qualify in some way. More on this later.

The theory

England's derivation begins from an observation made in 1999 by G. E. Crooks, which states that if you take two states of a system $x$ and $y$, the probability of going from one to the other in a time $\tau$ is related to the probability of the reverse process by:

$$\frac{\pi(y^{\dagger} \to x^{\dagger};\, \tau)}{\pi(x \to y;\, \tau)} = e^{-\Delta S_{\mathrm{bath}}}$$

Here $\pi(y^{\dagger} \to x^{\dagger};\, \tau)$ is the probability of the reverse process, from $y$ to $x$, happening on condition that we reverse all velocities (that's what the little dagger stands for). For example, if $x$ and $y$ were positions of the balls on a frictionless billiard table, it might be possible to go from $x$ to $y$ in a time $\tau$ by hitting the cue ball in a certain way, and then, if you could suddenly reverse the velocities of all the balls, you'd see them just as surely rewind from $y$ to $x$; the probabilities are both 1 in this case, and there's no entropy created, because this is a classic example of a reversible process. If however the table did have friction, then seeing the entire thing reverse would require the atoms of the mat to cooperate, by returning the thermal energy they received as an additional kick that speeds the balls up where they were slowed down before. This is obviously far more unlikely; the probability of the movie playing in reverse is far lower, and this relationship quantifies that: exactly how much more improbable the reversal is is determined by the amount of entropy produced in the forward process while exchanging energy with the thermal bath, $\Delta S_{\mathrm{bath}}$.[1] If you wait for the balls to stop completely, that's actually fairly easy to calculate: it's the total kinetic energy of the system (aka the energy you imparted to the cue ball with your first strike) divided by the temperature of the table to which that energy was conveyed by friction as heat. Plugging in some common sense numbers you can see that an average billiard play has a chance of rewinding spontaneously of roughly $e^{-10^{21}}$, a number also technically known as "nope, not gonna happen"; and this is for a relatively simple mechanical process.
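As a sanity check on that exponent, here is the back-of-the-envelope arithmetic; the ~10 J break shot and the room-temperature table are my own illustrative guesses, not numbers from England's paper:

```python
import math

E_strike = 10.0  # J, kinetic energy imparted to the cue ball (a guess)
T_table = 300.0  # K, the room-temperature bath the heat ends up in
k_B = 1.38e-23   # J/K

# Entropy dumped into the bath once all the balls have stopped (k_B = 1 units).
delta_S = E_strike / (k_B * T_table)
print(f"Delta S ~ {delta_S:.1e}")  # ~2.4e21

# e^(-delta_S) underflows any float; its base-10 exponent is -delta_S/ln(10).
print(f"P(reverse) ~ 10^(-{delta_S / math.log(10):.1e})")
```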

You might object, though, that the pool table analogy I used above is slightly oversimplified. In fact, there are many possible arrangements of atoms (microstates) corresponding to one possible arrangement of the balls on the table (macrostate). England's first step is to generalize Crooks' relation for this case:

$$\frac{\pi(\mathrm{II}^{\dagger} \to \mathrm{I}^{\dagger};\, \tau)}{\pi(\mathrm{I} \to \mathrm{II};\, \tau)} = \left\langle e^{-\Delta S_{\mathrm{tot}}} \right\rangle_{\mathrm{I} \to \mathrm{II}}$$

Here $\mathrm{I}$ and $\mathrm{II}$ correspond to two observable macrostates. There are two main differences between the previous formula and this one:

  • $\Delta S_{\mathrm{bath}}$ has been replaced by $\Delta S_{\mathrm{tot}}$, the total entropy production along the path, which includes both the entropy produced by exchanging heat with the bath, $\Delta S_{\mathrm{bath}}$, and the difference in entropy between macrostates inherent to their internal probability distribution of microstates;
  • The term on the right hand side is now averaged across all possible paths that lead from $\mathrm{I}$ to $\mathrm{II}$. Note that this is the average of the exponential, not the exponential of the average; this means that the largest terms (namely, the paths with the smallest generated entropies) disproportionately dominate the average, as the sketch below illustrates.
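To get a feel for how strongly the low-entropy paths dominate an average of this kind, here is a minimal numpy sketch; the Gaussian distribution of path entropies is a made-up toy, not anything taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble of total entropy productions along paths (k_B = 1 units).
S_tot = rng.normal(loc=20.0, scale=3.0, size=1_000_000)

avg_of_exp = np.exp(-S_tot).mean()   # what actually enters the relation
exp_of_avg = np.exp(-S_tot.mean())   # what naive intuition might expect

# For a Gaussian the ratio is exp(sigma^2 / 2) ~ 90 here: the rare
# low-entropy paths carry a wildly disproportionate share of the weight.
print(avg_of_exp / exp_of_avg)
```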

By taking the ratio of two such expressions involving two possible end states $\mathrm{II}$ and $\mathrm{III}$, both reachable from the same initial state $\mathrm{I}$, and with some manipulation, England arrives at this final equation:

$$\ln\left[\frac{\pi(\mathrm{I} \to \mathrm{II};\, \tau)}{\pi(\mathrm{I} \to \mathrm{III};\, \tau)}\right] = -\beta\, \Delta F + \ln\left[\frac{\pi(\mathrm{II}^{\dagger} \to \mathrm{I}^{\dagger};\, \tau)}{\pi(\mathrm{III}^{\dagger} \to \mathrm{I}^{\dagger};\, \tau)}\right] + \beta\, \Delta\langle W_d \rangle - \Delta\psi$$

Here every term of the form $\Delta X$ means the difference between the values $X$ takes for the $\mathrm{I} \to \mathrm{II}$ and the $\mathrm{I} \to \mathrm{III}$ process; in particular, setting $\beta = 1/T$,

$$\beta F = \beta \langle E \rangle - S$$

(the non-equilibrium free energy of the end macrostate) and

$$\psi = \ln\left\langle e^{-\beta\left(W_d - \langle W_d \rangle\right)}\right\rangle$$

(a measure of the fluctuations of the dissipated work across paths). Here $W_d$ is the total dissipated work along a given path: basically, work done by the external force that then gets dumped into the bath and produces entropy (rather than turning into internal energy of the system). This equation gives us the logarithm of the ratio between two transition probabilities; in other words, if the right hand side turns out positive, that means that $\mathrm{II}$ is a more likely end state than $\mathrm{III}$, and vice versa. Let's then go over the individual terms to see their influence:

  1. $-\beta\,\Delta F$ measures the distance of each of the end states from thermal equilibrium (with the bath temperature). The minus sign means that the preferred state is the one closer to thermal equilibrium;
  2. $\ln\left[\pi(\mathrm{II}^{\dagger} \to \mathrm{I}^{\dagger};\, \tau)/\pi(\mathrm{III}^{\dagger} \to \mathrm{I}^{\dagger};\, \tau)\right]$ measures the difference in probability of spontaneous reversion of the processes: basically, how likely the system is to randomly go back from $\mathrm{II}$ (or $\mathrm{III}$) to $\mathrm{I}$. This means that the preferred state is the one that is more likely to revert. This might seem counterintuitive, but you have to consider that the thermodynamic irreversibility of the process is in fact accounted for in the next term. This one instead is more affected by something like the complexity of the process: if going from $\mathrm{I}$ to $\mathrm{II}$ is a complex multi-step process, then this will be reflected also in the reverse process, and will make it less likely;
  3. $\beta\,\Delta\langle W_d \rangle$ means that the preferred state is the one that has the highest average dissipation; this is the term that drives the core claim about "dissipative adaptation";
  4. $-\Delta\psi$ introduces a skew based on the distribution of the dissipated work across possible trajectories; it mostly means that the preferred state is the one whose dissipation is most consistent across the possible trajectories that reach it.

Thus we have four different terms that determine which of the two possible final macrostates, $\mathrm{II}$ or $\mathrm{III}$, is more likely to materialize. Based on this, the theory argues that, all else being equal, states which perform more dissipation are preferred. If you have a Petri dish full of carbon, hydrogen, oxygen and nitrogen atoms, illuminated by sunlight, then of all the ways in which those atoms could arrange themselves, "a colony of photosynthetic algae" is a good example of such a dissipative state, as it will capture the free energy coming from sunlight and use it rather than let it pass through. Hence, life is thermodynamically inevitable, the Universe loves emerging complexity, and trying to stand in its way is moot, right?

I would say it's not quite so simple.
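To make the bookkeeping concrete before moving on, here is a toy numerical sketch of England's balance; every number in it is invented for illustration, and the point is precisely that the verdict depends on inputs the equation itself does not supply:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0  # working in k_B = 1 units

# Invented samples of dissipated work along paths to the two outcomes.
W_d_II = rng.normal(30.0, 2.0, 100_000)   # "alive" outcome: dissipates more
W_d_III = rng.normal(10.0, 1.0, 100_000)  # "dead" outcome: dissipates less

def psi(w):
    # Fluctuation term: log of the average exponential of the work fluctuations.
    return np.log(np.mean(np.exp(-beta * (w - w.mean()))))

# Invented values for the remaining ingredients of the balance.
beta_F_II, beta_F_III = 5.0, 1.0  # outcome II sits further from equilibrium
log_rev_ratio = -8.0              # ...and is also much harder to revert

log_odds = (
    -(beta_F_II - beta_F_III)                    # equilibrium term
    + log_rev_ratio                              # reversion term
    + beta * (W_d_II.mean() - W_d_III.mean())    # dissipation term
    - (psi(W_d_II) - psi(W_d_III))               # fluctuation skew
)
print(f"ln[pi(I->II)/pi(I->III)] = {log_odds:.2f}")  # ~6.5 with these inputs
# Positive only because the invented dissipation gap (20) happens to beat
# the equilibrium, reversion and fluctuation penalties (4 + 8 + 1.5);
# change the made-up numbers and the "winner" flips.
```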

The question of predictive power

Let me clarify what I mean by "predictive power". When we have a scientific theory, this theory by definition must allow us to make falsifiable predictions. Predictions depend on our ability to generalize the model we're working with to the circumstance we're applying it to, and on our ability to gather the relevant input data. If you try to apply Newtonian gravity to the orbit of Mercury, it fails, because it's the wrong model for it. If you try to apply the equations of fluid dynamics to the atmosphere to predict the weather six months from now, they fail, because while they are the right model, you can't possibly hope to have a precise enough description of the initial conditions to correctly carry out all the required computations.

One particularly powerful class of models is the kind that makes simple and universal predictions applying to any system. These models could be said to be the backbone of why reductionism works: they allow you to take incredibly complex systems, draw a boundary around them, and treat them as black boxes whose inputs and outputs will then obey some extremely simple laws. Thermodynamics has exactly this property. A human body's complexity is unholy; even the metabolism of a single cell is mind-boggling. But draw a surface around a human being and I can tell you with absolute certainty that the combined mass-energy of what goes in must eventually come out; the details of how your body turns water, air and food into a mixture of exhaled CO2, sweat, body heat, urine and poop are completely irrelevant. The complexity that no human intelligence could ever fully comprehend is reduced to a basic accounting problem that a 3rd grader could solve just fine. That's what makes thermodynamics useful and powerful.

Does the theory of dissipative adaptation even approach such broad applicability and power? I would contend it does not. I don't need a formal theory to tell me that, within certain conditions, there can obviously exist organisms that convert free energy into movement and self-replication, thus being able to dominate their environment. This is trivially known and not particularly useful. What would be really useful is a theory able to take an entire potential biosphere - draw a boundary, say, around a planet - and simplify all the immense complexity, all the ifs and buts, into a handful of basic rules that allow me to predict, for example, whether such a planet is likely to develop life or not. The e/acc interpretation of England's theory requires an even more ambitious application: that one might look at the entire Universe and claim that the only natural outcome of its history is to be eventually crawling with complex and self-replicating life (though not necessarily of the organic kind). That is no small task.

To be clear: I think England's derivation is fundamentally correct, and I can't see any mistakes in it (it has also passed the examination of peer reviewers likely more expert in the subject matter than me[2]). I'm very dubious, however, that it is particularly useful for making broad claims about life, the Universe and everything[3]. It is too context- and detail-dependent for that. I will provide discussions of several examples and sometimes try to "plug in the numbers", so to speak, if only very coarsely, to make my point. If all the equation says is "life happens, unless it doesn't, and we have no way to tell beforehand which is which", then it's not very useful at all, and we're back to the starting point: having to study the nitty gritty details of whichever system is at hand, and having to do the hard work of steering things the way we want them to go rather than giving ourselves up with abandon to the "thermodynamic will of the Universe".

The tree of life

Consider a tree.

The tree grows on dark, fertile soil. Let's erect an invisible force field to isolate this tree, and the soil in which its roots dwell, from everything else. The force field stops any matter from crossing; only radiation can come through. The tree now receives a constant influx of low-entropy energy (sunlight) and exchanges heat with a thermal bath at around ~20 °C (the rest of the soil and the atmosphere of the Earth). This system sounds like it matches England's definition perfectly. Let's apply the principles of dissipative adaptation to it.

Energy arrives in the form of light, and part of it is caught by the tree's leaves. The leaves photosynthesize, converting some of the energy into chemical energy within glucose, and losing the rest as waste heat, which produces entropy at a rate $\dot{S} = \dot{Q}/T$. The tree then slowly burns up the glucose too, producing more waste heat; the total amount, eventually, is equal to the incoming energy. Some of the energy never makes it to the leaves and just hits the trunk or the soil, heating them up directly. Again, this increases entropy. Roughly speaking, entropy is created at a rate of around $10^{23}$ per second for every square metre of ground (in our $k_B = 1$ units, using a ground-level solar constant of about $1\ \mathrm{kW/m^2}$).
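The per-square-metre figure comes from arithmetic like the following, where the ground-level irradiance and the temperatures are my rounded assumptions:

```python
k_B = 1.38e-23    # J/K
P_sun = 1000.0    # W/m^2, rough ground-level solar irradiance (assumption)
T_ground = 293.0  # K (~20 C), where the heat ultimately gets dumped
T_sun = 5800.0    # K, effective temperature of the incoming sunlight

# Net entropy production per second per square metre, in k_B = 1 units:
# energy arrives entropically "cheap" (high T) and leaves "expensive" (low T).
S_rate = P_sun * (1.0 / T_ground - 1.0 / T_sun) / k_B
print(f"{S_rate:.1e} per second per m^2")  # ~2.3e23
```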

How does this compare to a macrostate in which there is no tree, only soil and rotting organic matter? Entropy production isn't particularly different; sunlight heats up the soil directly instead, but the result is the same, as it gets turned into thermal energy[4]. The treeless system is certainly closer to thermal equilibrium; organic matter spontaneously rots as soon as it stops being repaired by its own living mechanisms for a reason. There is a certain continuity argument in that, obviously, if we start with a tree, the macrostate in which there still is a tree can more easily revert than the one in which the tree died (as reverting in the latter case would require the tree spontaneously reassembling). But if instead we started out with a simple seed in the ground, this argument would not hold; the path in which the seed dies and rots into the ground seems definitely more simply revertible than the one in which the seed sprouts and forms a tree (though both are spectacularly unlikely, an entire tree rewinding itself into seed form seems much more complex). So all things considered, it looks like the state with no tree should be favored in the long run. The tree's ability to capture and dissipate energy does nothing to alter the balance, at the very least; the energy would be dissipated anyway, and all the tree does by interposing itself between its source and its final destination is make the best out of whatever useful work it can extract from it. What dominates the system are other trends; and indeed, in the long term, the most likely outcome is that the tree simply dies and rots away, giving another significant burst to entropy, and that's the end of it.

But this was, obviously, a fairly limited system. If the tree can't reproduce, for lack of space and a need to compete with its progeny, it's going to eventually just die out. So maybe this was too small a boundary, and we need to encompass a much larger system instead.

Pale blue dot

Consider the Earth.

This is a much bigger system. Draw your invisible force field around it; it disrupts a bit of space travel, but very little of consequence for the biosphere. Energy still streams in, in the form of sunlight. Heat still goes out towards an incredibly cold heat bath - space, essentially a reservoir of infinite heat capacity constantly kept at a temperature of 2.726 K[5]. This system, again, matches England's assumptions. And we know it has given origin to life, so clearly here the logic must work out, right?

Well, no. Earth receives sunlight, it heats up, it dumps the resulting thermal radiation into space, and that's it. The internal mechanisms, what happens to that energy in the process of going from light to heat (namely, from almost black body radiation at 6000 K, the average temperature of the surface of the Sun, to almost black body radiation at 288 K, the average temperature of the surface of the Earth), are fundamentally irrelevant to the overall dissipated work, the $\Delta\langle W_d \rangle$ term in England's balance. There's an additional term in the thermal balance coming from Earth's own internal heat (and additional heat produced by fission of radioactive elements) dissipating out into space, but that's a change in internal energy, not dissipation of the incoming forcing energy. Life for the most part captures that light, stores part of it as chemical energy, and then at some point in the food chain turns it into mechanical work and thus heat. Humans have gone a bit beyond, but even by burning fossil fuels we merely liberate heat that had been stored away as chemical energy millions of years ago; over a long enough time horizon, that only evens out the balance. Perhaps the most thermally disruptive thing we've done in this sense is creating nuclear reactors and nuclear weapons, fissioning uranium that otherwise would never have been fissioned. But it took until the 1940s for us to even begin doing such things; obviously, if anything can explain the emergence of complex life, it must have applied to us a lot earlier already. And a civilization that subsisted entirely on solar energy would go back to functioning exactly like the rest of the biosphere does - pick up energy from the Sun, exploit the temperature differential to do work with it, radiate the heat back into space.
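For scale, here is the same arithmetic done globally, with textbook ballpark values (absorbed solar power and effective temperatures) as my assumptions:

```python
k_B = 1.38e-23       # J/K
P_absorbed = 1.2e17  # W, sunlight absorbed by Earth (1361 W/m^2, albedo ~0.3)
T_earth = 288.0      # K, effective emission temperature
T_sun = 5800.0       # K, temperature of the incoming radiation

S_rate = P_absorbed * (1.0 / T_earth - 1.0 / T_sun) / k_B
print(f"{S_rate:.1e} per second")  # ~2.9e37, with or without a biosphere
```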

According to Wikipedia, the planet Mercury receives between 4.6 and 10.6 times the solar energy per unit of surface that the Earth does, and has a radius 0.38 times the Earth's, which means it receives (and dissipates) roughly the same total amount of solar energy. And yet it doesn't have life, because obviously dissipation isn't everything.
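That claim about totals is a one-line check, using the Wikipedia figures just quoted:

```python
# Solar flux relative to Earth (aphelion to perihelion) and relative radius.
flux_lo, flux_hi, radius = 4.6, 10.6, 0.38

# Total intercepted power scales as flux times cross-section (~ radius^2).
print(flux_lo * radius**2, flux_hi * radius**2)  # ~0.66 to ~1.53 times Earth
```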

The devil, as it happens, is in the details. The first term of England's equation, the thermal equilibrium one, features the internal energy of the system. The second term, the reversal probability, features the kinetics (e.g., the multiple steps required for a chemical reaction). If our system is the Earth as a whole, this hides the entire complexity of the Earth's whole Hamiltonian. All the subtleties of chemistry and quantum mechanics, all the complexities of biology and population dynamics, are in there. Whether a path exists from a primordial lump of rocks, water, ammonia and methane to a world with a thriving civilization in which I get to type this stuff on LessWrong, instead of just to a slightly older lump of rocks, water, ammonia and methane, depends on which structures and how much complexity the specific details of that world's laws allow at the given temperature. Given the specific correct conditions, sure, certain forms of life are probably favored. But we don't know those conditions without going into the details. And those quantities are pretty much impossible to compute - I couldn't hand you the equations and a planet at 200 °C and ask you whether silicon-based life could exist and develop spontaneously on it. To try and fill in those terms even in the roughest way you'd need to go study the potential biochemistry of silicon at the very least - that is to say, just do the work you'd do anyway, with the equation being very little help. Perhaps in some circumstances it might indeed help you make decent educated guesses; I can't think of one. It's interesting in how it gives us a sense of the different trends at play, but their interplay is too complex, and the configuration space they need to be averaged over too vast, to actually run the calculation. And if that's already this hard for a planet, imagine if we wanted to go any bigger...

Across the Universe

Consider the Universe.

Consider, in particular, the observable universe. This universe is not an entirely closed system. The universe expands, and expands at an accelerating rate due to dark energy, which means that things appear to move away from us faster and faster the further away they are. Past a certain distance, galaxies appear to recede faster than the speed of light, and they can't affect us any more; that line is what we call the cosmic event horizon, and it is in some way akin to the other kind of event horizon, the one that wraps around the singularity of a black hole. So, stuff can leave the observable universe and never come back. The cosmic event horizon should emit Hawking radiation, same as a black hole, so it does act like a heat bath, albeit an extremely cold one (Gibbons and Hawking 1977; Leonhardt 2021). However, the observable universe is not subject to any external force introducing new energy into it, that we know of; thus it fails the assumptions needed for England's non-equilibrium thermodynamics to apply at all. Regular thermodynamics instead applies; all the observable universe can do is slowly roll towards thermal equilibrium with its bath, which means: burn through all its available free energy, increase entropy, and get really, really cold. The universe is on a timer; it will last a long time, but it will not last forever.

The time scale aspect is important, because time scale features in England's relation (that's the $\tau$ in the transition probabilities), and so even if it applied to the universe (which it doesn't, as long as we don't discover an external force introducing fresh free energy into it) it would not be a guarantee that any specific equilibrium, like a very life-rich one, could be favored in finite time. And the time scales for things like random reversals of a dissipative process are freakishly huge; so if any system needed to go through any such bottleneck to reach its point of maximum dissipation, it might as well never do it with all the time in the world. In a different paper, England estimates the entropy created during the division of a single E. coli bacterium at roughly $10^{11}$ (in $k_B$ units), and the division cycle as lasting around 20 minutes, or 1200 seconds. That means the inverse process (two E. coli bacteria merging into one) would be expected to happen on average once every $\sim 1200 \cdot e^{10^{11}}$ seconds - a number with more than forty billion digits. That is not simply a big number. That is a ridiculous number. You could cover every planet in the Milky Way with a thin biofilm of E. coli from the beginning of the universe to today, and then turn every second of the existence of the universe into as much as its entire existence, and you still would not see that happening once. The argument for life is predicated on the fact that life-rich states dissipate energy at a steady rate; but if it's easier for a system to fall into a dead state instead with a huge amount of one-time irreversibility, and climbing back up happens at those kinds of rates, you will never see the life-rich states winning over on anything less than an eternity of eternities. Again, the specific details of the system matter a lot, and the predictive power of the principle is somewhat wanting.
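To see where that waiting time comes from, here is the arithmetic, done in log space since the number itself overflows any float; the entropy per division is my rounded assumption from above:

```python
import math

delta_S = 1e11  # entropy per division, k_B = 1 units (rounded assumption)
t_div = 1200.0  # s, one division cycle

# Expected wait for a spontaneous reverse division: t_div * exp(delta_S).
# Work with the base-10 logarithm, since the number overflows any float.
log10_wait = delta_S / math.log(10) + math.log10(t_div)
print(f"~10^({log10_wait:.3e}) seconds")  # an exponent of ~4.3e10: the
                                          # number has over forty billion digits
```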

This timescale argument is particularly important because one could always make the case that slow and steady wins the race - that by growing exponentially, given enough time, life can always eventually become the biggest entropy producer in the Universe, and thus be the maximum dissipation path in that equation (which, I will stress again, does not even apply to the Universe as a whole as far as we can tell). But the "enough time" bit isn't trivial. It particularly isn't because the chasm to overcome to become the greatest entropy producers in the cosmos isn't small. In fact, as we'll see, it's mindbogglingly large.

Black Holes and Revelations

What is the total entropy of the observable Universe?

Don't worry, we won't need to calculate it, because helpfully enough Egan and Lineweaver did it for us in 2010. I'll just paste a table from their paper[6] here:

[Table 1 of Egan & Lineweaver (2010): the entropy budget of the observable Universe, sorted by contribution.]

The list is helpfully sorted in order of importance. Let's look at the top:

  1. The Cosmic Event Horizon, which as said above has Gibbons-Hawking entropy and technically emits radiation, easily wins the gold medal here with a whopping $\sim 10^{122}$ of entropy (in $k_B$ units, as always);
  2. supermassive black holes immediately follow with nineteen orders of magnitude less entropy;
  3. then come stellar black holes, five more orders of magnitude below;
  4. then still more black holes, just smaller. Another two orders of magnitude;
  5. then come photons, so basically just free traveling radiation (mostly CMB and starlight, I'd guess). Eight more orders of magnitude;
  6. relic neutrinos, barely interacting, nearly massless subatomic particles. Roughly same order of magnitude as the photons;
  7. dark matter, which we don't quite know what it is, but we know what it isn't: interacting with regular matter, or interesting in any way other than through its gravitational effects. Two more orders of magnitude;
  8. relic gravitons. We're eight places down and we're still encountering exotic, immaterial stuff that could never give rise to anything living. Same order of magnitude as dark matter;
  9. Interstellar Medium and Intergalactic Medium. Finally we're back to our good old baryonic matter! Except this is... mostly just hydrogen gas and extremely low density dust spread over vast, vast expanses of space. Six more orders of magnitude;
  10. and then finally we have stars, two more orders of magnitude below.

We went through ten places and a whopping forty-four orders of magnitude without even encountering the kind of matter with actual interesting chemistry that is most likely to give rise to life. All we learned is that most of the entropy in the universe is that of its cosmic event horizon. Barring that, it's the entropy of its black holes. It would take 223 generations of exponentially reproducing E. coli to match even just the entropy of the star contributions, at which point the bacterial colony would have to weigh some $10^{52}$ kg, or approximately the total mass of baryonic matter in the observable Universe[7].
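The arithmetic behind the colony mass, assuming a rough wet mass of one picogram per bacterium:

```python
m_coli = 1e-15       # kg, rough wet mass of one E. coli (assumption)
n_generations = 223  # doublings, starting from a single bacterium

n_bacteria = 2.0**n_generations
print(f"{n_bacteria:.1e} bacteria, {n_bacteria * m_coli:.1e} kg")
# ~1.3e67 bacteria and ~1.3e52 kg, within an order of magnitude of the
# ~1.5e53 kg of baryonic matter in the observable Universe.
```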

Let's stay on the E. coli example, because it's instructive and we have the data for it. Reproducing bacteria don't create matter out of nowhere; they need nutrients to grow and divide into more bacteria. So $n$ generations of exponential growth, resulting in $2^n$ bacteria, will require at least a mass $m = 2^n m_{\mathrm{coli}}$ of nutrients[8]. In the process it will also produce $2^n \Delta S_{\mathrm{div}}$ entropy. Compare that with the possibility of simply chucking the entire mass of nutrients into a black hole of mass $M$. Black hole entropy is tricky because it goes like the surface area of the event horizon, and thus scales like the square of the mass. On what condition will the produced entropy be larger by tossing the matter into a black hole than by letting E. coli reproduce? If we work in the approximation that $m \ll M$ we have:

$$\Delta S_{\mathrm{BH}} \approx \frac{8 \pi G M m}{\hbar c} > 2^n \Delta S_{\mathrm{div}} = \frac{m}{m_{\mathrm{coli}}}\, \Delta S_{\mathrm{div}}$$

which implies

$$M > \frac{\hbar c\, \Delta S_{\mathrm{div}}}{8 \pi G\, m_{\mathrm{coli}}} \approx 10^{9}\ \mathrm{kg}$$
So for any realistic stellar black hole - never mind supermassive ones - there is no amount of reproductive cycles that can ever produce more entropy than we would by simply throwing away the entire thing through an event horizon right away. In fact, the difference is so large that any additional entropy is barely a rounding error.
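Plugging numbers into that threshold, with the same rounded assumptions as before (entropy per division of order $10^{11}$, one picogram per bacterium):

```python
import math

hbar = 1.055e-34    # J*s
c = 3.0e8           # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
delta_S_div = 1e11  # entropy per division, k_B = 1 units (rounded assumption)
m_coli = 1e-15      # kg per bacterium (rounded assumption)

# Minimum black hole mass above which feeding it the nutrient mass beats
# any number of division cycles: M > hbar * c * dS / (8 * pi * G * m).
M_min = hbar * c * delta_S_div / (8 * math.pi * G * m_coli)
print(f"{M_min:.1e} kg")  # ~1.9e9 kg, twenty-one orders of magnitude
                          # below even a single solar mass (~2e30 kg)
```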

"What maximizes entropy" is a terrible proxy metric to try to evaluate futures because in most cases what maximizes entropy is just disappearing forever towards a gravitational singularity. The thermodynamic history of the universe is the history of event horizons; everything else is commentary. To paraphrase Eliezer Yudkowski:

The Universe does not hate you, nor does it love you, but you are made out of mass which it can throw into a black hole.

There is no great manifest thermodynamic destiny other than that which we carve for ourselves with whatever work we can steal from the great river of free energy whose source is the Big Bang and whose estuary is an impenetrable event horizon, and as far as we can tell, that's about it[9].

Conclusions

This is not an exhaustive exploration of the topic by any means. In fact, studying this paper and researching for this post, I've realized just how deep the rabbit hole might go. It should, for example, be genuinely possible to use England's equation to study the evolution of the geometry of the cosmos (if one treated dark energy as outside of the system, and thus a form of external forcing). I just don't have all the necessary knowledge for it, and the post would have grown too long and complicated. I've also tried some simulations of systems simple enough that computing England's quantities directly via Monte Carlo sampling should be possible, but I've left those out as they would have made the post more complicated still. Maybe for another day.

I also think that it might be possible perhaps to develop a better metric than just dissipation to characterize living systems at a thermodynamic level. Intuitively I feel like something akin to a "thermodynamic GDP" measuring the number and complexity of transactions through which useful work is extracted from a flow of degrading free energy might be such a thing, though I don't expect any straightforward rules to apply to it either.

I could, of course, be wrong on one or more things. I'd be glad for any interesting criticism and thought-provoking comments on the topic. There are still many details on which I am fuzzy myself.

Finally, to go back to the original inspiration from the e/acc philosophy, I think it's no secret that I disagree with it at a fundamental level. Even were the results different, I'd still say that they wouldn't warrant building that sort of bold claim on them - after all, we have no guarantee that the best possible self-replicator (or dissipator) is us and not a mindless paperclipper. But the results being what they are, I'd say even expecting that the preferred path over the history of the Universe contains any self-replicators at all might be a stretch. There is no obvious pre-ordained destiny or fate here, other than, most likely, the eventual heat death. Measuring a civilization by how much entropy it produces is like evaluating a meal prepared by a master chef by how much excrement it turns into.

We can do better than that.

 

  1. ^

    An important point about units: here and going forward I'll just work in units in which the Boltzmann constant, which has itself the units of an entropy, is 1. So entropies are pure numbers, and temperatures and energies have the same units. If you ever need to go back to SI units, for an entropy or an energy, just multiply it by $k_B \approx 1.38 \times 10^{-23}\ \mathrm{J/K}$; temperatures will be in Kelvin to begin with.

  2. ^

    Though England's paper does contain at least one typo that escaped them: the definition of work at the bottom of the left hand column on page 4 contains an extra term that should not be there. So maybe I read it more thoroughly than the referees after all.

  3. ^

    Since it evaluates to a pure number, though, it does theoretically provide a possible sense in which the answer to those things could be 42.

  4. ^

    One could object that depending on the shape of the invisible force field, maybe the tree captures some light that would have otherwise passed right through, thanks to its height. But that's easily fixed; for example, you could enclose the tree in a box without a top, or expand the field a bit to include empty ground near the tree, or replace the tree with some grass, or replace the Sun with a solar lamp that shines 12 hours a day but always perfectly vertically. The arguments are all still valid. 

  5. ^

    Other possible objection: Earth is in fact immersed in the so-called heliosphere, the bubble of extremely rarefied gases that surrounds the Sun. This gas is technically at a very high temperature, but it is so low in density that treating it as a vacuum is probably as good an approximation as we need.

  6. ^

    The publisher version is paywalled but their university has a mirror here.

  7. ^

    And the process would take about 3 days, if you could somehow make it happen without the resulting mass of E. coli, dextrose and agar collapsing into a black hole itself. That's one hell of a testament to the power of exponentials, I guess.

  8. ^

    We will ignore the tiny amount of mass corresponding to the chemical binding energy that gets released as heat by the bacteria's metabolism.

  9. ^

    Alternatively, one might hope that the scenario from Isaac Asimov's famous short story "The Last Question" could be somehow possible. In that case, the hallmark of a successful humanity would not be increasing the entropy of the Universe - if anything, it would be the unprecedented ability to reduce it instead.

Comments

This is a superlatively excellent post. A shame it's getting so little attention. 

This also reminds me of Dijkstra's famous note on programming:

My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.

Celebrating "energy spent" also seems like looking at the wrong side of the ledger. It's what you get for that energy that matters, now how much you wasted.

(I am mentioning this because referencing a known meme is how you can compress a longer argument into tweet size. Something like: "mistaking 'energy spent' for success is the e/acc version of evaluating programmers by 'lines produced'".)

This is a philosophically tricky question. Let's count material resources as spent. But on what? On cycles of computation? But these are also spent. Spent on what? Maybe on pleasant experiences. But pleasant experiences are a kind of consumption, and a worthy life isn't characterized by how much you've consumed. A worthy life is characterized by what you've output. But output what? Help others get material resources, for instance?

And so we find ourselves in a loop. It's not clear where the "right side of the ledger" is to be found at all.

But pleasant experiences are a kind of consumption, and a worthy life isn't characterized by how much you've consumed. A worthy life is characterized by what you've output.

This sounds to me like confusing instrumental and terminal goals.

The things I do for others are only worthy because they are needed. Imagine baking tons of bread that no one wants to eat, so it just gets thrown in the garbage. Would that be a worthy activity?

Pleasant experiences are intrinsically worthy; that's what the word "pleasant" means, kind of. (We can criticize them if they also have some negative side effect, of course. But that just means that their total value is a sum of some positive and some negative components.)

When we judge a person who spends 10% of their resources on pleasure and 90% on producing for others "more worthy" than a person who spends 100% of their resources on pleasure, a part of that calculation is that the production for others will result in some extra pleasure down the line (even if only in the sense of "pleasure of not starving for a while").

If we instead had one person who spends 10% resources on pleasure and 90% on generating useless paperclips, and another person who spends 100% resources on pleasure, we would probably just call the first person stupid. If we had a group of people, where everyone produces paperclips, gives them to the next person in the circle, who takes them apart and then reassembles them into new paperclips and gives them to the next person again... we would call the entire group crazy.

a part of that calculation is that the production for others will result in some extra pleasure down the line

I don't agree with this view. The person who does things, taken to a high degree, is Leonardo da Vinci. The person who consumes stuff, taken to a high degree, is a kind of couch potato or sea sponge that doesn't even need much of a brain. Saying that the former is lowly "instrumental" while the latter is lofty "terminal" sounds very wrong to me.

Either Leonardo enjoys what he is doing (in that case there is terminal value for himself) or he is doing it for other people to enjoy (instrumental value) or both (both kinds of value).

In a hypothetical world where neither is true, i.e. Leonardo hates doing things and no one cares whether he does them or not, he should stop doing that.

He enjoys what he's doing, but that's not the most important measure. If you offered him a hypothetical harmless drug that could bring him even more enjoyment but stop him from working, he would refuse.

dr_s:

But the refusal of wireheading is itself in service of a terminal value - because "satisfaction" is more than simple pleasure.

That's a tautology then, "people want what they want". If I understood Villiam right, he was making a more substantive point: that all aspects of "what we want" ultimately reduce to pleasure, because it's intrinsically valuable and (presumably) nothing else is. Which is what I'm arguing against.

dr_s:

The original point was against using energy consumption as a measure of worthiness. It's true that all worthy things tend to consume energy, but energy consumption isn't proportional to worthiness, and some things that consume energy aren't worth anything at all. This holds whether one adopts a purely hedonistic view of utility or not.

Terminal values, presumably. If such things exist. In practice, you just have to examine ends of ends of ends etc. far enough to eliminate bullshit work and yak shaving expeditions.

dr_s:

Right. See also: Zachtronics puzzles.

To steelman it, I think the "wasteful" position also makes a kind of sense: being pointlessly frugal when you're swimming in abundance can be a source of unnecessary suffering. But turning waste into a metric unto itself is begging to get Goodhart'd into doing some very stupid stuff.