If there IS alien superintelligence in our own galaxy, what could it be like?

by Coacher · 26th Feb 2016 · 1 min read · 59 comments


Personal Blog

For a moment let's assume there is some alien intelligent life in our galaxy which is older than us and that it has succeeded in creating super-intelligent self-modifying AI.

Then what set of values and/or goals is it plausible for it to have, given our current observations (i.e. that there is no evidence of its existence)?

Some examples:

It values non-interference with nature (some kind of hippie AI).

It values camouflage/stealth for its own defense/security purposes.

It just cares about exterminating its creators and nothing else.

 

Other thoughts?


It has a belief that capturing the resources of the galaxy would neither increase its security nor further its objectives. It doesn't mind interfering with us, which it is implicitly doing by hiding its presence and giving us the Fermi paradox, which in turn influences our beliefs about the Great Filter.

For example, maybe they figured out how to convince it to accept some threshold of certainty (so it doesn't eat the universe to prove that it produced exactly 1,000,000 paperclips), it achieved its terminal goal with a tiny amount of energy (less than one star's worth), and halted.
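That "threshold of certainty" idea can be sketched as a toy calculation (the sensor reliability, prior, and threshold here are all invented for illustration, not anyone's actual proposal): a satisficing agent keeps buying noisy confirmations of its paperclip count only until its posterior confidence crosses a cutoff, then halts, instead of spending unbounded resources driving the probability toward 1.

```python
# Toy satisficer: pay for noisy re-counts of the paperclip tally until
# the posterior probability that the count is correct exceeds a
# threshold, then halt -- rather than eating the universe for certainty.
def confidence_after_checks(n_checks: int, sensor_reliability: float,
                            prior_correct: float = 0.5) -> float:
    """Posterior P(count is correct) after n_checks independent agreeing
    checks, each of which is right with probability sensor_reliability."""
    p_h, p_not = prior_correct, 1.0 - prior_correct
    for _ in range(n_checks):
        # Each agreeing check multiplies the odds by r / (1 - r).
        p_h *= sensor_reliability
        p_not *= 1.0 - sensor_reliability
    return p_h / (p_h + p_not)

def checks_needed(threshold: float, sensor_reliability: float) -> int:
    """Number of checks before the agent is satisfied and halts."""
    n = 0
    while confidence_after_checks(n, sensor_reliability) < threshold:
        n += 1
    return n

# With a 99%-reliable sensor, a handful of checks suffices to reach
# 99.99% confidence, so the agent halts almost immediately.
print(checks_needed(0.9999, 0.99))
```

The point of the sketch: confidence converges geometrically, so the resource cost of "good enough" certainty is tiny compared to the cost of literal certainty.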

We should also consider what beliefs or knowledge it could have that would cause it to stay home. For instance:

  1. The Universe is a simulation, and making parts of it more complex or expensive to simulate would shorten the life of other complex parts, would crash the simulation, would fail because the simulation wouldn't allow it, would draw undesired attention from the simulators, etc.
  2. If their civilization spread to other stars, lightspeed limits would make the colonies effectively independent, and eventually value drift or selfishness would cause conflict or would harm their existing goals.
  3. There is some danger out there, perhaps a relic of an even older civilization, and the best course is to hide from it; humans have not yet been attacked because we only started beaming out radio signals less than a hundred years ago.

If (1) or (2) is correct they should destroy us.

If (1) is true, it would prevent them from spreading all over the galaxy, so they can't find us (to destroy us) this early in our evolution, a hundred years after we first used radio. They might still destroy us relatively soon, but they wouldn't be present in every star system. (Also, the fact that we evolved in the first place might be evidence against this scenario.)

If (2) is true, they would have to somehow destroy all other life in the galaxy without risking that destroying-mechanism value-drifting or being used by a rival faction. This might be hard. Also, their values might not endorse destroying aliens without even contacting them.

If (1) is true, the aliens should fear any type of life developing on other planets, because that life would greatly increase the complexity of the galaxy. My guess is that life on Earth has, for a very long time, done things to our atmosphere that would allow an advanced civilization to be aware that our planet harbors life.

This is actually a fairly healthy field of study. See, for example, Nonphotosynthetic Pigments as Potential Biosignatures.

Note that sending probes out any distance may increase computational requirements: approximations are no longer sufficient when an agent's eye comes up very close to them. Unless we can expect the superintelligence to detect these signs from a great distance, from the home star, it might not be able to afford to see them.

Also worth considering: probes that close their eyes to everything but life-supporting planets, so that they won't notice the low grain of the approximations, and approximations can continue to be used in their presence.

It may have discovered some property of physics which enabled it to expand more efficiently across alternate universes, rather than across space in any given universe. Thus it would be unlikely to colonize much of any universe (specifically, ours).

If physics allows for the spreading across alternative universes at a rate greater than that at which you can spread across our universe, the Fermi paradox becomes even more paradoxical.

If an infinite number of aliens have the potential to make contact with us (which I realize isn't necessarily implied by your comment) then some powerful subset must be shielding us from contact.

Infinity is really confusing.

This is only a paradox under naive definitions of infinity. Once one starts talking about cardinality, the "paradoxical" nature of the thought experiment fades away.

In other words, this is not really responsive to James_Miller's comment.

I am not a cosmologist, so forgive me if this theory is deranged, but what about dark matter? Is it possible there are vast banks of usable energy there, and that the ability to transition one's body to dark matter would make it easier for a civilization to agree to turn away from the resources available in light matter?

Similar to some of the other ideas, but here are my framings:

  1. Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of these rare areas because it would be impossible for us to exist in one of the colonized areas. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around them become colonized while they cannot develop fast enough to catch up.

  2. A dyson-sphere level intelligence knows basically everything. There is a limit to knowledge and power that can be approached. Once a species has achieved a certain level of power it simply doesn't need to continue expanding in order to guarantee its safety and the fulfillment of its values. Continued expansion has diminishing returns and it has other values or goals that counterbalance any tiny desire to continue expanding.

Evolution should favor species that have expansion as a terminal value.


Why terminal?

Care to show the path for that? Evolution favors individual outcomes, and species are a categorization we apply after the fact.

Survival of genotype is more likely for chains of individuals that value some diversity of environment and don't get all killed by a single local catastrophe, but it's not clear at all that this extends beyond subplanetary habitat diversity.

Care to show the path for that?

The Amish.

If you are not subject to the Malthusian trap, evolution favors subgroups that want to have lots of offspring. Given variation in a population not subject to the Malthusian trap concerning how many children each person wants to have, and given that one's preferences concerning children are in part genetically determined, the number of children the average member of such a species wants to have should steadily increase.
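This argument is quantitative, so here is a toy model of it (the rates, starting share, and perfect heritability are all invented assumptions, not demographic data): if desired family size is heritable and there is no Malthusian cap, a high-fertility subgroup's share of the population grows geometrically.

```python
# Toy model (invented numbers): two subgroups with fully heritable
# fertility preferences and no Malthusian limit on population.
def share_after(generations: int, low_rate: float = 2.0,
                high_rate: float = 4.0, high_share: float = 0.01) -> float:
    """Fraction of the population in the high-fertility subgroup after
    `generations` generations. Rates are children per couple, so the
    per-generation growth factor of each subgroup is rate / 2."""
    low_pop, high_pop = 1.0 - high_share, high_share
    for _ in range(generations):
        low_pop *= low_rate / 2.0
        high_pop *= high_rate / 2.0
    return high_pop / (low_pop + high_pop)

# A subgroup averaging 4 children per couple (vs. 2) doubles its
# relative share every generation: starting at 1% of the population,
# it becomes the majority within 7 generations.
print(round(share_after(7), 3))
```

The same mechanism is why selection on fertility preferences is fast on evolutionary timescales once the Malthusian constraint is lifted.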

Aren't the Amish (and other fast-spawning tribes) a perfect example of how this doesn't lead to universal domination? They're all groups that either embrace primitivity or are stuck in it, and to a large extent couldn't maintain their high reproductive rate without parasitism on surrounding cultures.

Depends on how you define domination. Over the long run if trends continue the Amish will dominate through demography. I don't think the Amish are parasites since they don't take resources from the rest of us.

They are parasitic on our infrastructure, healthcare system, and military. Amish reap the benefits of modern day road construction methods to transport their trade goods, but could not themselves construct modern day roads. Depending on the branch of Amish, a significant number of their babies are born in modern-day hospitals, something they could not build themselves and which contributes to their successful birth rate.

Everything you wrote is also true of my family, but because of specialization and trade we are not parasites.

Last time I checked you weren't arguing that your family was going to dominate the world through breeding.

But it is unethical to allow all the suffering that occurs on our planet.

Compared to what alternative?

That depends on your ethical system, doesn't it?

  1. Alien AI is using a SETI-attack strategy, but to convince us to bite, and to be sure that we have very strong computers able to run its code, it makes its signal very subtle and complex, so it is not easy to find. We haven't found it yet but soon will. I wrote about SETI attack here: http://lesswrong.com/lw/gzv/risks_of_downloading_alien_ai_via_seti_search/

  2. Alien AI exists in the form of alien nanobots everywhere (including my room and body), but they do not interact with us and try to hide from microscopy.

  3. They are berserkers and will be triggered to kill us if we reach some unknown threshold, most likely the creation of AI or nanotech.

2 may include 3.

  1. This looks far-fetched, but it is an interesting strategy. Does it ever occur in nature? I.e., do any predators wait for their prey to become stronger/smarter before luring them into a trap?

  2. I guess they could, but to what end?

  3. Why wait?

Does it perhaps ever occur in nature?

Notice how, say, the Andamanese are entirely safe from online phishing scams or identity theft.

  1. Andamanese ))
  2. Maybe alien nanobots control the part of the galaxy conquered by the host civilization and prevent any other civilization from invading or appearing.
  3. Observational selection: we could find ourselves only in a civilization whose berserkers have a high attack threshold (or do not exist).

The superintelligence could have been written to value-load based on its calculations about an alien (to its creators) superintelligence (what Bostrom refers to as the "Hail Mary" approach). This could cause it to value the natural development of alien biology enough to actively hide its activities from us.

Then it should also not have caused us to falsely have a Fermi paradox and so believe in a great filter. It could have done this in numerous ways, including by causing us to think that planets rarely form.

...Think of the Federation's "Prime Directive" in Star Trek.

Or we are an experiment (natural or artificial) that yields optimal information when unmanipulated or manipulated imperceptibly (from our point of view).

Or the way we try to keep isolated people isolated (https://en.wikipedia.org/wiki/Uncontacted_peoples)

Crazy Idea--What if we are an isolated people and the solution to the Fermi paradox is that aliens have made contact with earth, but our fellow humans have decided to keep this information from us. Yes, this seems extremely unlikely, but so do all other solutions to the Fermi paradox.

That is the basic idea behind the X-Files TV series and various UFO conspiracy theories, isn't it?

Then why would they even contact those few people?

It might not be direct contact; rather, our astronomers may have long since detected signs of alien life, but this has been kept from us.

To throw one out there, perhaps the first superintelligence was created by a people very concerned about AI risk and friendliness, and one of its goals is simply to subtly suppress (by a very broad definition) unfriendly AIs in the rest of the universe while minimizing disruption otherwise.

They place a high value on social unity, so spreading out over distances which would make it hard to keep a group-- or a mind-- together doesn't happen.

The most obvious and least likely possibility is that the superintelligence hasn't had enough time to colonize the galaxy (i.e. it was created very recently).

That's very unlikely. The universe is billions of years old, yet it would take only a few million years to colonize the galaxy, maybe tens of millions if they aren't optimally efficient, but still a short time in the history of Earth.
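A back-of-the-envelope check of that timescale claim (the figures are standard approximations: Milky Way diameter ~100,000 light-years, universe age ~13.8 billion years; the probe speeds are arbitrary assumptions):

```python
# Even at modest probe speeds, crossing the galaxy takes millions of
# years -- long for humans, but a rounding error against the age of
# the universe.
GALAXY_DIAMETER_LY = 100_000   # approximate Milky Way diameter, light-years
UNIVERSE_AGE_YR = 13.8e9       # approximate age of the universe, years

def crossing_time_years(speed_fraction_of_c: float) -> float:
    # Light covers one light-year per year, so time = distance / speed.
    return GALAXY_DIAMETER_LY / speed_fraction_of_c

for v in (0.1, 0.01):  # 10% and 1% of light speed
    t = crossing_time_years(v)
    print(f"at {v:.0%} of c: {t:,.0f} years, "
          f"or {t / UNIVERSE_AGE_YR:.5%} of the universe's age")
```

At 10% of light speed the crossing takes about a million years, roughly 0.007% of the universe's age, which is why "they haven't had time yet" requires the superintelligence to be extraordinarily young.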

I'm aware. Note that I did call it the "least likely possibility."

Maybe it uploaded all the minds it was programmed to help, and then ran them all on a series of small, incredibly efficient computers, only sending duplicates across space for the security of redundancy.

A few hundred parallel copies around as many stars would be pretty darn safe, and they wouldn't have any noticeable effect on the environment. We could have one around our sun right now without noticing.

And maybe the potential for outside destruction is better met by stealth than by salient power.

If it doesn't favor just making more people to have around, why should it ever go on beyond that?

Maybe the AI existed in the galaxy and halted for some internal reason, but left behind some self-replicating remnants which are only partly intelligent and so unable to fall into the same trap. Their behavior would look absurd to us, and that is why we can't find them.

Adding unneeded assumptions does not make a hypothesis more likely. Just halting, without leaving any half-intelligent offspring, explains the observations just as well if not better.

given our current observations (i.e. that there is no evidence of its existence)?

Our current observation is that we haven't detected and identified any evidence of their existence.

Another option: maybe they're not hiding, they're just doing their thing, and don't leak energy in a way that is obvious to us.

Usually lack of evidence is evidence of lacking. But given their existence AND the lack of evidence, I think the probability that they are purposely hiding (or at least being cautious about not showing off too much) is bigger than the probability that they are just doing their thing and we simply don't see it, even though we are looking really hard.

Usually lack of evidence is evidence of lacking.

Big difference between there being a lack of evidence, and a lack of an ability to detect and identify evidence which exists.

I think people are rather cheeky to assume that we necessarily have the ability to detect a SI.

There is no difference between saying that there is no evidence and saying that there might be evidence but we don't have the ability to detect it. Does God exist? Well, maybe there is plenty of evidence that it does; we just don't have the ability to see it?

Big difference.

You don't know how much money is in my wallet. I do. You have no evidence, and you don't have a means to detect it, but it doesn't mean there is no evidence to be had.

That third little star off the end of the Milky Way may be a gigantic alien beacon transmitting a spread-spectrum welcome message, but we just haven't identified it as such, or spent time trying to reconstruct the message from the spread-spectrum signal.

We see it. We record it at observatories every night. But we haven't identified it as a signal, nor decoded it.

There is indeed a difference between "we have observed good evidence of X" and "there is something out there that, had we observed it, would be good evidence of X".

Even so, absence of observed evidence is evidence of absence.

How strong it is depends, of course, on how likely it is that there would be observed evidence if the thing were real. (I don't see anyone ignoring that fact here.)
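That dependence can be made explicit with a quick Bayes calculation (the prior and detection probabilities below are invented for illustration): how much "no observed evidence" lowers P(superintelligence exists) depends entirely on how likely detection would be if it did exist.

```python
# Absence of observed evidence IS evidence of absence, but its strength
# depends on P(we detect evidence | it exists). Assumes no false
# positives, i.e. P(evidence | it doesn't exist) = 0.
def posterior_given_no_evidence(prior: float,
                                p_detect_if_real: float) -> float:
    """P(it exists | no evidence observed), by Bayes' theorem."""
    p_no_evidence = prior * (1.0 - p_detect_if_real) + (1.0 - prior)
    return prior * (1.0 - p_detect_if_real) / p_no_evidence

prior = 0.5
# A loud, expansionist superintelligence would almost surely be detected;
# a deliberately hiding one almost never. Seeing nothing nearly rules out
# the first (~0.0099) but barely moves the second (~0.497).
print(posterior_given_no_evidence(prior, 0.99))
print(posterior_given_no_evidence(prior, 0.01))
```

So the thread's disagreement reduces to a disagreement about P(detect | exists), not about whether absence of evidence counts at all.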

It seems you have some uncommon understanding of what the word evidence means. Evidence is peace of information, not some physical thing.

Evidence is peace of information

I like this :-)


Obviously, singleton AIs have a high risk of going extinct from low-probability events before they initiate the Cosmic Endowment; otherwise we would have found evidence. Given foom-level development speed, a singleton AI might decide after a few decades that it does not need human assistance any more, and extinguish humankind to maximize its resources. Biological life had billions of years to optimize even against the rarest events; a gamma-ray burst or any other stellar event could have killed such a singleton AI. The way we are currently designing AI will definitely not lead to a singleton AI that mangles its mind for 10 million years before deciding about the future of humankind.