It believes that capturing the resources of the galaxy would neither increase its security nor further its objectives. It doesn't mind interfering with us, which it is implicitly doing by hiding its presence and thereby giving us the Fermi paradox, which in turn influences our beliefs about the Great Filter.
We should also consider what beliefs or knowledge it could have that would cause it to stay home. For instance:
If (1) is true, it would prevent them from spreading all over the galaxy, so they can't find us (to destroy us) this early in our evolution, a hundred years after we started using radio. They might still destroy us relatively soon, but they wouldn't be present in every star system. (Also, the fact that we evolved in the first place might be evidence against this scenario.)
If (2) is true, they would have to somehow destroy all other life in the galaxy without risking that destroying-mechanism value-drifting or being used by a rival faction. This might be hard. Also, their values might not endorse destroying aliens without even contacting them.
If (1) is true, the aliens should fear any type of life developing on other planets, because that life would greatly increase the complexity of the galaxy. My guess is that life on Earth has, for a very long time, done things to our atmosphere that would allow an advanced civilization to tell that our planet harbors life.
This is actually a fairly healthy field of study. See, for example, "Nonphotosynthetic Pigments as Potential Biosignatures."
Note that sending probes out any distance may increase computational requirements: approximations are no longer sufficient once an agent's sensors come up very close to the things being approximated. Unless we can expect the superintelligence to detect these signs from a great distance, i.e. from its home star, it might not be able to afford to see them.
Also worth considering: probes that close their eyes to everything but life-supporting planets, so that they won't notice the low grain of the approximations, and approximations can continue to be used in their presence.
Not necessarily - See Hilbert's Paradox.
https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel
I am not a cosmologist, so forgive me if this theory is deranged, but what about dark matter? Is it possible there are vast banks of usable energy there, and that the ability to transition one's body to dark matter would make it easier for a civ to agree to turn away from the resources available in light matter?
Similar to some of the other ideas, but here are my framings:
Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas, because it would be impossible for us to exist in one of the colonized ones. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around them become colonized while they cannot develop fast enough to catch up.
A dyson-sphere level intelligence knows basically everything. There is a limit to knowledge and power that can be approached. Once a species has achieved a certain level of power it simply doesn't need to continue expanding in order to guarantee its safety and the fulfillment of its values. Continued expansion has diminishing returns and it has other values or goals that counterbalance any tiny desire to continue expanding.
Care to show the path for that? Evolution favors individual outcomes, and species are a categorization we apply after the fact.
Survival of genotype is more likely for chains of individuals that value some diversity of environment and don't get all killed by a single local catastrophe, but it's not clear at all that this extends beyond subplanetary habitat diversity.
Care to show the path for that?
The Amish.
If you are not subject to the Malthusian trap, evolution favors subgroups that want to have lots of offspring. Given variation in a population not subject to the Malthusian trap concerning how many children each person wants to have, and given that one's preferences concerning children are in part genetically determined, the number of children the average member of such a species wants to have should steadily increase.
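The selection dynamic described here can be sketched in a toy simulation. All parameters (starting preferences, mutation noise, population cap) are invented for illustration; the only point is the qualitative one: with heritable variation in desired family size and no Malthusian cap, the population mean drifts upward.

```python
import random

def generation(pop):
    """Each individual gets the family size they want (no Malthusian cap);
    children inherit the parent's preference with small mutation."""
    next_pop = []
    for pref in pop:
        for _ in range(round(pref)):
            next_pop.append(max(0.0, pref + random.gauss(0, 0.1)))
    # Downsample uniformly to keep the run cheap; this preserves the
    # distribution of preferences, so it does not act as selection.
    if len(next_pop) > 2000:
        next_pop = random.sample(next_pop, 2000)
    return next_pop

random.seed(0)
pop = [random.uniform(1.0, 3.0) for _ in range(200)]  # desired child counts
start = sum(pop) / len(pop)
for _ in range(10):
    pop = generation(pop)
end = sum(pop) / len(pop)
print(start, end)  # mean desired family size rises across generations
```

The mechanism is just that individuals with higher preferences contribute proportionally more offspring, so the next generation's mean is pulled up by roughly the variance-to-mean ratio each generation.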
Aren't the Amish (and other fast-spawning tribes) a perfect example of how this doesn't lead to universal domination? They're all groups that either embrace primitivity or are stuck in it, and to a large extent couldn't maintain their high reproductive rate without parasitism on surrounding cultures.
They are parasitic on our infrastructure, healthcare system, and military. Amish reap the benefits of modern day road construction methods to transport their trade goods, but could not themselves construct modern day roads. Depending on the branch of Amish, a significant number of their babies are born in modern-day hospitals, something they could not build themselves and which contributes to their successful birth rate.
The alien AI is using a SETI-attack strategy, but to convince us to bite, and to be sure that we have very strong computers able to run its code, it makes its signal very subtle and complex, so it is not easy to find. We haven't found it yet, but will soon. I wrote about SETI attack here: http://lesswrong.com/lw/gzv/risks_of_downloading_alien_ai_via_seti_search/
Alien AI exists in the form of alien nanobots that are everywhere (including my room and body), but they do not interact with us and hide from microscopy.
They are berserkers and will be triggered to kill us if we cross some unknown threshold, most likely the creation of AI or nanotech.
2 may include 3.
Does it perhaps ever occur in nature?
Notice how, say, the Andamanese are entirely safe from online phishing scams or identity theft.
The superintelligence could have been written to value-load based on its calculations about an alien (to its creators) superintelligence (what Bostrom refers to as the "Hail Mary" approach). This could cause it to value the natural development of alien biology enough to actively hide its activities from us.
Or the way we try to keep isolated people isolated (https://en.wikipedia.org/wiki/Uncontacted_peoples)
That is the basic idea behind the X-Files TV series and various UFO conspiracy theories, isn't it?
Maybe it uploaded all the minds it was programmed to help, and then ran them all on a series of small, incredibly efficient computers, only sending duplicates across space for the security of redundancy.
A few hundred parallel copies around as many stars would be pretty darn safe, and they wouldn't have any noticeable effect on the environment. We could have one around our sun right now without noticing.
And maybe the potential for outside destruction is better met by stealth than by salient power.
If it doesn't favor just making more people to have around, why should it ever go on beyond that?
given our current observations (i.e., that there is no evidence of its existence)?
Our current observation is that we haven't detected and identified any evidence of their existence.
Another option: maybe they're not hiding; they're just doing their thing, and don't leak energy in a way that is obvious to us.
Usually absence of evidence is evidence of absence. But conditional on their existence AND the absence of evidence, I think the probability that they are purposefully hiding (or at least being cautious about not showing off too much) is greater than the probability that they are just doing their thing and we simply don't see it, even though we are looking really hard.
Big difference.
You don't know how much money is in my wallet. I do. You have no evidence, and you don't have a means to detect it, but it doesn't mean there is no evidence to be had.
That third little star off the end of the Milky Way may be a gigantic alien beacon transmitting a spread-spectrum welcome message, but we just haven't identified it as such, or spent time trying to reconstruct the message from the spread-spectrum signal.
We see it. We record it at observatories every night. But we haven't identified it as a signal, nor decoded it.
There is indeed a difference between "we have observed good evidence of X" and "there is something out there that, had we observed it, would be good evidence of X".
Even so, absence of observed evidence is evidence of absence.
How strong it is depends, of course, on how likely it is that there would be observed evidence if the thing were real. (I don't see anyone ignoring that fact here.)
Maybe they figured out how to convince it to accept some threshold of certainty (so it doesn't eat the universe to prove that it produced exactly 1,000,000 paperclips); it achieved its terminal goal with a tiny amount of energy (less than one star's worth) and halted.
Obviously singleton AIs run a high risk of going extinct from low-probability events before they begin on the cosmic endowment; otherwise we would have found evidence. Given foom-level development speed, a singleton AI might decide after a few decades that it does not need human assistance any more, and extinguish humankind to maximize its resources. Biological life has had billions of years to optimize even against the rarest events; a gamma-ray burst or some other stellar event could have killed such a singleton AI. The way we are currently designing AI will definitely not lead to a singleton AI that deliberates for 10 million years before it decides the future of humankind.
For a moment let's assume there is some alien intelligent life in our galaxy which is older than us and has succeeded in creating a superintelligent, self-modifying AI.
Then what set of values and/or goals is it plausible for it to have, given our current observations (i.e., that there is no evidence of its existence)?
Some examples:
It values non-interference with nature (some kind of hippie AI).
It values camouflage/stealth for it own defense/security purposes.
It just cares about exterminating its creators and nothing else.
Other thoughts?