The case

Most of the future things we care about – i.e., (dis)value – come, in expectation, from futures where humanity develops artificial general intelligence (AGI) and colonizes many other stars (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). 

Hanson (2021) and Cook (2022) estimate that we should expect to eventually “meet” (grabby) alien AGIs/civilizations – just AGIs, from here on – if humanity expands, and that our corner of the universe will eventually be colonized by aliens if humanity doesn’t expand. 

This raises the following three crucial questions:

  1. What would happen once/if our respective AGIs meet? Values handshakes (i.e., cooperation) or conflict? Of what forms?
  2. Do we have good reasons to think the scenario where our corner of the universe is colonized by humanity is better than that where it is colonized by aliens? Should we update on the importance of reducing existential risks?[1]
  3. Given that aliens might fill our corner of the universe with things we (dis)value, does humanity have an (inter-civilizational) comparative advantage in focusing on something the grabby aliens will neglect?

The answers to these three questions heavily depend on the values we expect the grabby aliens our AGI will meet to have. For instance, if we expect grabby alien AGIs to, say, care about suffering more than our AGI does, then AGI conflict that generates significant suffering is relatively unlikely, and the importance of reducing X-risks depends on whether you prefer the aliens’ degree of concern for suffering or that of our AGI.

Therefore, figuring out what aliens value (or Alien Values[2] Research) appears quite important,[3] although absolutely no one is working on it[4] as far as I know. 

Is it because it isn’t tractable? Although I see how it might seem so, I don’t think it is. First, thinking about the values of grabby aliens doesn’t strike me as harder than modeling their spread (see, e.g., Hanson 2021 and Cook 2022 for work on the latter). My EA Forum sequence What values will control the Future? is an instance of how simple observations/reasoning can make us significantly narrow down the range of values we should expect grabby aliens to have. Second, there seems to be – outside of the Effective Altruism sphere – a whole field of research focused on thinking about the evolution of aliens (most of which I’m not familiar with, yet), and there are already quite interesting takeaways (see, e.g., Kershenbaum 2020; Todd and Miller 2017). Although the moral preferences of aliens are by no means the focus so far, this is evidence that figuring stuff out about aliens is feasible, and there might even be potential for making Alien Values Research part of people’s alien-related research agenda.

Acknowledgment

Thanks to Elias Schmied for their helpful comments on a draft. All assumptions/claims/omissions are my own.

Appendix: Relevant work

(This list is not exhaustive.[5] More or less ranked in decreasing order of relevance.)

  1. ^ Charlie Guttman (2022) and Michael Aird (2020) ask questions very similar to this second one.

  2. ^ “Alien values” here literally means “the values of aliens”, not “values that look alien to us” as in this confusing LessWrong tag.

  3. ^ Besides helping us answer the two above questions, it might also give us useful insights regarding the future of human evolution and what our successors might value (see Buhler 2023). Robin Hanson makes a similar point around the beginning of this interview.

  4. ^ The Appendix lists a few pieces that raised relevant considerations, however.

  5. ^ This reflects my limited knowledge rather than an intent to keep the list short, so please send me other potentially relevant resources!

Comments

“figuring out what aliens value appears quite important”

My instant answer to this question is that it is not of practical importance, except insofar as we may already be inside an alien sphere of influence. 

You're talking primarily about scenarios of alien encounter in which it's a meeting between a human-descended superintelligence and an alien-descended superintelligence. But by definition, the human-descended superintelligence is going to be better than you at inferring the likely distribution of alien life and alien values in the cosmos. 

But since you're interested, I suggest you also look up "Xenology" by Robert Freitas, a big, obscure work from the 1970s by someone who went on to become one of the major theorists of mechanical nanotechnology. It has weird stuff like eleven metalaws of first contact, devised in 1970 by an Austrian space lawyer.

Apart from the fact that such works may contain valid observations that the current literature overlooks, they may also promote awareness of the extent to which current ideas about alien life are non-empirical guesswork and potentially quite wrong. 

Freitas opens his chapter 25 with the proposition that 

Many billions of intelligent races may exist in the Milky Way alone at the present time

which is a very Carl Sagan, birth-of-SETI perspective, and one which is still held by many, many people. On the other hand, our local avant-garde believe that intelligence in the universe is dominated by aggressively expansionist superintelligences that may be trading with other branches of the universal wavefunction. Maybe that's a very current-year outlook, but even Bing can point out just how many assumptions it's making. 

[anonymous]

I think questions like these are important, so thank you for thinking about and writing about this.

A hypothetical civilization which hasn't observed signs of other life might also be able to find and understand these arguments. This includes the first civilization to create an ASI, if it has no way to know whether it's the first.

If we accept this, then we may prefer to act as if we are the first, because we may think it best for (alien) civilizations in general to act as if they are the first, so that the actual first one acts appropriately (i.e., creates an aligned ASI, when the alternative would be an unaligned ASI tiling the universe). You could frame this as a form of acausal trade.

I apologize if this is confusing, I'm autistic and struggle with reducing meaning into language that others understand. Please let me know if you need clarification.

Interesting, thanks! This is relevant to question #2 in the post! Not sure everyone should act as if they were the first, considering the downsides of inter-civilizational conflicts, but yeah, that's a good point.

[anonymous]

I have two things I want to say, I'm not sure if this one is important (it's a physics question, out of curiosity, and you don't have to answer) so I'll make two separate comments. 

The question: Would an ASI in control of more matter have enough of an advantage to fully take over the lower amount of matter controlled by another ASI, or would the second ASI have other options, e.g things like "creating a black hole supercomputer that computes in ways it deems valuable"?

I don't know and this is outside the scope of this post I guess. There are a few organizations like the Center on Long-Term Risk studying cooperation and conflict between ASIs, however.

[comment deleted]