I recently thought of this scenario while reading about the Fermi paradox. It may sound highly unlikely, like the plot of a sci-fi horror movie, and there is no evidence that it will happen. Still, is it possible?

Scenario:

Highly technologically advanced aliens become aware of our existence or are already aware and decide to torture us (possible motivations are listed below).

They are able to make themselves invisible to us and travel to Earth or to send AI to do the same. Then, they enter our bodies (e.g. through nanobots). They edit our genomes to make us immortal and only able to do things that are necessary to keep ourselves alive (e.g. drinking water). Or they upload our minds.

They can then cause us pain and even edit our genomes to cause us to experience more pain (e.g. by adding more pain receptors). It could be possible for them to torture us until the end of the universe, if they have the ability to generate enough energy to keep us conscious. Even after the end of the universe, it might be possible for them to torture us in a different universe (Zeeya Merali writes about the possibility of creating a new universe in A Big Bang in a Little Room).

The following consists of more of my ideas about this. Obviously, please feel free not to read it.

Possible motivations for why aliens may want to torture us include:

  • They (or a group or an individual) have evolved to feel sadistic pleasure when they harm others who don't belong to their group.
  • They may gain feelings of status or power from harming us.
  • They may feel a strong desire for justice and wish to punish us for the actions of a small number of people who harm others (e.g. animals).
  • They may wish for aliens from other societies/planets to know about their technological abilities or cruelty (less likely).
  • They (or a group or an individual) develop a mental illness that causes them to wish to harm others. This seems less likely because such a technologically advanced society would likely be able to edit their own genomes to prevent and treat mental illness or would have non-biological bodies.

A possible reason for why aliens may torture us, without necessarily wanting to, is:

  • If their AI is hacked or programmed to harm others who don't belong to their group, the AI may decide to harm us.

Reasons why this scenario, although unlikely, may be possible:

Reasons why this scenario is unlikely include:

  • There is the possibility that we are the only life (or intelligent life) in the universe. So far, we haven't found evidence of aliens.
  • As far as we know, no aliens have visited Earth (though it's possible that they have without our knowledge). This suggests that aliens cannot visit, don't want to visit, and/or are not aware of us.
  • Highly intelligent aliens who live in a technologically advanced society are unlikely to care about us or to be concerned about what's happening on Earth.
  • They would likely be able to edit their own genomes or to program themselves to always experience positive emotions and pleasure. They would then be less likely to be motivated to travel to Earth and torture others.
  • Highly intelligent aliens may be more likely to have evolved to be kind toward others.
  • It could take at least hundreds of years for aliens to travel to Earth, depending on how far away they are (unless they are able to travel faster than the speed of light).


3 Answers

Technically, everything is possible unless it violates the laws of physics (I am not sure about the part about making new universes), so yes, there is a possibility.

Is there a good reason to focus on this specific scenario, instead of the billion other possibilities?

Thanks for your response.

I'm concerned about this scenario because, in my opinion, it would be the worst possible outcome for anyone. The other possibilities wouldn't cause as much suffering.

Viliam (2y):
You might be interested in this article [https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs-dust-specks]. The general idea is that the probability of outcome should be a part of the equation. Otherwise, insurance agents will love you as a customer.
onevoyager (2y):
Thanks for sharing the article. I didn't read anything in it about the probability of the outcome. Are you referring to a comment?
Viliam (2y):
Probabilities and frequencies are related concepts. The question "is it better if X happens to 1 person, or Y happens to 5 people?" should have a similar answer to "is it better if X happens (to 1 person) with probability 10%, or Y happens (to 1 person) with probability 50%?". If you imagine hypothetical futures, it's essentially the same thing: a 10% probability means it happens in 1 of 10 hypothetical futures; a 50% probability means it happens in 5 of 10.

If you want to prevent some hypothetical outcome, it usually comes with a cost. (It always comes with a cost, if we include your time spent thinking about the scenario.) Hence my analogy with insurance agents: they typically want to focus your entire attention on "what will be the consequences of X if I am not insured?"... and away from "what will be the consequences of paying for insurance if X does not happen?". To make a good decision, you need to consider both scenarios and their relative probabilities. Not being insured can ruin your life if an unexpected event happens, but being over-insured also decreases your quality of life by taking away part of your income; sometimes the latter cost (multiplied by its probability) outweighs the former (multiplied by its probability), and then not getting insured is the right choice.

Thinking about hypothetical futures and taking action to avoid them is analogous to insurance. You spend some resources (including the time spent thinking) now, in order to mitigate a possible problem in the future. The same equation applies: if the probability of the outcome is too small, it is not worth worrying about. The time you spend worrying about unlikely things is taken from the same budget you have for solving problems that are actually quite likely to happen.
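The insurance comparison above can be sketched as a small expected-cost calculation. All of the numbers below are illustrative assumptions, not figures from the comment:

```python
# Expected-cost comparison for deciding whether to insure against a rare event.
# Probabilities, losses, and premiums here are made-up illustrative values.

def expected_cost_uninsured(p_event, loss):
    """Expected cost without insurance: probability of the event times its loss."""
    return p_event * loss

def expected_cost_insured(premium, p_event, deductible=0):
    """Expected cost with insurance: the certain premium plus any expected deductible."""
    return premium + p_event * deductible

p = 0.001          # assumed 0.1% yearly chance of the bad event
loss = 50_000      # assumed cost if it happens and you are uninsured
premium = 200      # assumed yearly premium

# Expected loss uninsured is p * loss = 50; the premium is 200.
# By expected cost alone, not insuring is the better bet here --
# which is the point: the probability must be part of the equation.
print(expected_cost_uninsured(p, loss) < expected_cost_insured(premium, p))  # True
```

The same structure applies to worrying about unlikely futures: the "premium" is the time and attention spent on the scenario, and it should be weighed against the outcome's probability, not just its severity.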

The key problem with your question is that it ignores evolutionary pressures. To become an intergalactic civilisation, a species must be very good at cooperation, because at that scale many actors have the capability to blow everything up.

Other evolutionary pressures favour using resources effectively. Torturing people on Earth wouldn't be an effective use of resources.

Thanks for your response.

I do take evolutionary pressures into consideration in the list of reasons why this scenario is unlikely. I state that "highly intelligent aliens may be more likely to have evolved to be kind toward others." However, it's also possible that aliens "(or a group or an individual) have evolved to feel sadistic pleasure when they harm others who don't belong to their group."

Aliens who are excellent at cooperating with others may not necessarily be kind toward those who belong to out-groups. Lack of empathy...

ChristianKl (2y):
An intergalactic civilisation needs cultures that evolve separately for long periods of time to remain peaceful with each other, since it's likely possible for one solar system to extinguish another, if it desires, in a way that can't be traced back to the attacker. It's a core economic principle that actors who use resources more effectively outcompete those who use them less effectively, no matter how many resources are available.
onevoyager (2y):
Are you referring to multiple cultures that evolve alongside each other as part of the same civilization? Your point about the economic principle is good.

If aliens could torture us, another alien race could come and save us.

This is a great point.

I considered this possibility when I was writing my post. However, it's possible that the sadistic aliens would be capable of making our planet invisible to others, or that the benevolent aliens never travel close enough to Earth to be aware of us.

avturchin (2y):
If there is even one other alien race in the observable universe, there should be at least several more, and some of them may not like the idea of torture: they would be "exo-humanists", something like effective altruists but for other alien races. A superintelligent AI on Earth with a goal of global torture is worse, as it looks like help would never arrive (actually, it could, from other universes via complex acausal trade and indexical uncertainty).
onevoyager (2y):
Thanks for your response. I wonder if it's possible that intelligent life is so rare in the universe that there is only one other civilization, which could be sadistic. Would it be possible for you to expand more on how aliens from other universes can help "via complex acausal trade and indexical uncertainty"? I tried searching for those terms, but couldn't find anything about other universes.
avturchin (2y):
If an alien civilization is 1 billion light years from us in one direction (and that is the highest distance for contact), it implies that the median distance between civilizations is about 1 billion light years, and that there are at least 5 others: in the opposite direction, as well as up, down, left, and right. So, based on these symmetry considerations, there is either 1 civilization or at least 7 including ours. Exactly two seems unlikely. I discuss the idea of preventing s-risks via acausal trade here: https://forum.effectivealtruism.org/posts/3jgpAjRoP6FbeESxg/curing-past-sufferings-and-preventing-s-risks-via-indexical
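The symmetry argument can be sanity-checked with a rough density estimate. Assuming, as an illustration, one civilization per sphere of 1 billion light years radius (the spacing suggested above) and a comoving radius of roughly 46 billion light years for the observable universe, the expected count comes out far above two:

```python
from math import pi

def sphere_volume(r):
    """Volume of a sphere of radius r."""
    return 4 / 3 * pi * r**3

r_universe = 46.0  # rough comoving radius of the observable universe, in billions of ly
r_spacing = 1.0    # assumed typical spacing between civilizations, in billions of ly

# One civilization per spacing-sized sphere -> the ratio of volumes.
# The 4/3*pi factors cancel, so this is just (46/1)^3.
n_civs = sphere_volume(r_universe) / sphere_volume(r_spacing)
print(round(n_civs))  # 97336
```

Under these (very crude) assumptions, "exactly two civilizations" would require intelligent life to be almost exactly as rare as one occurrence per observable universe, which is why the all-or-many intuition holds.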
