A few days ago, I published The Darwinian Honeymoon - why I am not as impressed by human progress as I used to be. To my gratification, it was quite well-received on Twitter, Substack and LessWrong.[1] However, in subsequent conversations I realized that I did not communicate my core point well enough, given how abstract it is. So I wanted to write a short (and somewhat sloppy and galaxy-brained) post explaining what I mean a bit more.

Here’s what I am NOT saying:

“Humans will face the same fate as chickens”: Obviously, humans are much more powerful than chickens were, and things will go very differently.
“Humans will definitely get outcompeted by AIs”: I’m not making a direct claim like that, and I wouldn’t even be especially confident in it. My point is much narrower: it’s about how much evidence the outside view should provide. There are also hypothetical ways runaway competition could happen that have nothing to do with AI.
“There’s no guarantee you won’t get outcompeted even if you’ve been dominant in the Darwinian competition so far”: I’m saying something stronger than that: that it’s predictable you will do better and better for a while, even if you are about to get outcompeted - see below.
Here’s what I AM saying, though it is not my main point:
“Long-run evolutionary dynamics seem concerning - it seems like there could be a race to the bottom where entities that care a nonzero amount about anything other than gaining power and expanding get outcompeted”: I agree, but this has been said by many people, including Nick Bostrom, Robin Hanson, Allan Dafoe here and here (I’d recommend the latter), Dan Hendrycks, Carl Shulman here and here, Andrés Gómez-Emilsson, and even arguably Nick Land.
“It’s possible that humans are in a temporary non-Malthusian Golden Age because of how economically valuable they happen to be right now[2]”: I agree, but this has been touched on by e.g. Hanson and Bostrom again, Kulveit et al., Korinek and Stiglitz, and Garfinkel[3].
I hoped to convey an additional thing - here’s the original post’s description of it:
It makes complete sense that this nihilistic optimization process [uncontrollable runaway civilizational growth through Darwinian competition] at first actually benefits some class of agent - because initially, the easiest way to keep growing is to use some class of agent in the world and incentivize it by satisfying its preferences. But then, as the optimization becomes more and more advanced, it stops being beneficial - because there are almost certainly some more evolutionarily fit configurations out there than the class of agent that the process just happened to start out with.
I need a shorthand for this vague concept of “runaway civilizational growth, but specifically framed as increasingly sophisticated Darwinian competition” - let’s call it “evolutionary takeoff” (taken as broadly as possible, so including the emergence of life and of biological general intelligence up until the Industrial Revolution, AI and presumably the colonization of the universe).
At its core, what I’m saying is just an instance of Goodhart’s Law:
Evolutionary takeoff advances via classes of agents innovating more complexity, winning as a result, and spreading the innovation.[4] That means, for a while, the success of that class of agent correlates with the advancement of the evolutionary takeoff and the overall progress of the system.
The classic Goodhart pattern applies - as evolutionary takeoff inexorably advances, it carries more success for the currently dominant class of agent with it - until eventually the tails come apart and the class of agent’s level of success drops off sharply.
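To make the shape of this concrete, here’s a minimal toy simulation (entirely my own illustrative sketch - the growth rate, the takeover step and the decay rate are arbitrary assumptions, not claims about the world):

```python
# Toy model of the honeymoon-then-showdown shape: the dominant agent
# class's welfare compounds together with the system while growth is
# still routed through it, then drops sharply once a fitter successor
# class takes over. All numbers are illustrative assumptions.

def simulate(steps=100, takeover=70):
    system = 1.0        # overall "advancement" of the evolutionary takeoff
    welfare = []        # welfare of the currently dominant agent class
    for t in range(steps):
        system *= 1.05  # the takeoff compounds every step
        if t < takeover:
            # Honeymoon: the easiest way for the system to grow is to
            # use this class of agent, so its welfare tracks the system.
            welfare.append(system)
        else:
            # Tails come apart: a fitter configuration now drives growth,
            # and the old class's share of the surplus decays.
            welfare.append(welfare[-1] * 0.5)
    return welfare

w = simulate()
print(f"welfare just before takeover: {w[69]:.1f}")      # ~30.4, all-time high
print(f"welfare ten steps after takeover: {w[79]:.2f}")  # ~0.03, collapsed
```

The only point is the qualitative shape: the class of agent is doing better than ever right up until the step where it gets displaced.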
Humans being more and more economically valuable and capable of fulfilling their preferences recently - it’s not a random weird coincidence! It’s what you expect before the “showdown”, even if you end up getting outcompeted!
It would be extremely strange if AI appeared on the scene to potentially displace us in 13th century Europe, where humans had been stagnating for a while - where would it have come from? When evolutionary takeoff has advanced enough that the next generation of dominant agent threatens to displace you, the growth needed for that will necessarily have benefitted you first!
For chickens to be put into factory farms, humans first needed to grow in population and industrialize - which necessitated that chickens first benefitted a lot (grew in population, got food, etc.), because they would naturally be a part of the growth of the system too.[5]
The goodharting cannot happen, the tails can’t come apart, until you’ve actually moved along the curve!
This temporary windfall, which in some abstract sense has to exist, is what I call the Darwinian honeymoon.[6]
In short, it’s just the macro-level process of evolutionary takeoff being goodharted with respect to the dominant class of agent’s utility. I hadn’t seen this observation made anywhere before[7], so I thought making it and coining a name for the phenomenon would be useful.
So: If you haven’t integrated this conceptual point before now, you should downweight to some extent how much evidence (about how good the future will be) you draw from the outside view of “humans have been doing better, and getting more powerful and capable over time”, since that observation becomes less surprising.[8]
I guess what you should do is compare human progress and current human capacity to some intuitive sense of how much of it is expected, given this conceptual point. The amount that it exceeds what you would intuitively expect (if it does at all) is the amount of evidence that the outside view of human progress should give you.
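If you want a slightly more formal gloss on this (my own framing, with made-up notation - $D$ is the observation “humans have been doing better and getting more capable over time”, and the two hypotheses are “the future goes well for us” versus “we’re in a honeymoon before displacement”):

$$\frac{P(H_{\text{good}} \mid D)}{P(H_{\text{honeymoon}} \mid D)} \;=\; \frac{P(D \mid H_{\text{good}})}{P(D \mid H_{\text{honeymoon}})} \times \frac{P(H_{\text{good}})}{P(H_{\text{honeymoon}})}$$

The conceptual point of the post is that $P(D \mid H_{\text{honeymoon}})$ is high too - progress is expected even on the displacement hypothesis - so the likelihood ratio sits near 1 and the observation barely moves the posterior odds.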
[1] Weirdly enough, it was quickly downvoted to 0 on the EA Forum, and only recovered to lightly positive later - this makes me worry about the intellectual climate over there, vis-à-vis openness to criticism / new ideas.
[2] Specifically, because of how slow human reproduction is (allowing a kind of “evolutionary overhang” where economic growth can outpace population growth, making income growth possible, but only until faster replicators emerge). Malthus ended up embarrassed by the economic miracle of the 19th and 20th centuries, but maybe he was just too early.
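A toy illustration of that overhang (again entirely my own sketch, with made-up numbers): income per capita rises while output grows faster than the population can, then falls back toward subsistence once faster replicators arrive:

```python
# Toy Malthusian overhang: per-capita income rises while the economy
# grows faster than the population can reproduce, then erodes back
# toward subsistence once faster replicators show up at year 100.
# All growth rates are arbitrary illustrative assumptions.

output, population = 100.0, 100.0
for year in range(1, 201):
    output *= 1.03                               # economy grows 3%/year
    population *= 1.01 if year <= 100 else 1.05  # fast replicators arrive after year 100
    if year % 50 == 0:
        print(f"year {year}: income per capita = {output / population:.2f}")
# Prints roughly: 2.67 at year 50, 7.11 at year 100, 2.72 at 150, 1.04 at 200.
```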
[3] The conceptual connection to Garfinkel is that humans being economically useful leads to growth of bargaining power. And that this may have been a major part of what made democracy possible.
[4] For a simple example: around two billion years ago, the first eukaryotic cell internalized its power plants instead of having them on its membrane. This meant that energy production could scale with volume instead of surface area, which allowed eukaryotes to grow much larger and more complex than the existing prokaryotes. (Although apparently, this model is somewhat contested today.)
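The scaling argument in one line (my paraphrase of the standard textbook story, not something from the original post): power produced on the membrane scales with surface area while metabolic demand scales with volume, so per-volume power falls as the cell grows - unless the power plants are internalized:

$$P_{\text{membrane}} \propto r^{2}, \quad V \propto r^{3} \;\;\Rightarrow\;\; \frac{P_{\text{membrane}}}{V} \propto \frac{1}{r}, \qquad \frac{P_{\text{internal}}}{V} \propto \text{const.}$$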
[5] Here’s a framing that is much more speculative and a bit redundant, so I didn’t want it in the main body - but I do think there’s something here:
There is slack in the system until it’s optimized away. A temporarily evolutionarily dominant class of agent will produce an increasing surplus via its domination, and this surplus will itself be used as the search space in the global search process to find its successor, the next dominant class of agent (e.g. possibly humans building AIs in the course of their economic progress).
Take the example of chickens again. They were evolutionarily successful as farm animals (cheap to feed, fast-growing, prolific layers), and extracted more and more value from the world via humans giving them extra food and shelter in exchange for their surplus (meat, eggs). Then, the economic transformation that chickens helped humans set off with their surplus destroyed them.
[6] In some sense, it’s a misnomer, because in normal, “horizontal” Darwinian competition this obviously doesn’t happen - species are, I presume, even more likely to get outcompeted when they are doing badly because of some exogenous factor, like dinosaurs were by mammals after the asteroid impact. You need the additional factor of “verticality”/“increasing complexity”/“takeoff”, i.e. we have to restrict to “major Darwinian breakthroughs” only, or something like that.
[7] Possibly it exists somewhere in a footnote or aside that I haven’t seen - hard to say, of course.

[8] It may seem a bit weird to update a lot because of a purely conceptual trick - but in practice, in my personal experience, I think it has to be done sometimes. For example, every time you learn about a new “story”/“metanarrative” about the world (econ 101, Everything is Trauma, Everything is Status Competition, Everything is Oppression, etc. etc.), your brain sees things in a different light - and this is extremely epistemically valuable, even essential, but also completely non-empirical. We’re not ideal Bayesian agents who already have a probability distribution over all possible hypotheses; sometimes we learn about new ones and have to deal with that on the spot. (This specifically is not exactly a new story though, to be fair.)