I think there are other possible variants. For example: conservationist aliens who act "evil" when colonizing star systems where life either hasn't arisen or hasn't yet undergone something like the Cambrian explosion, but act "good" in systems that have already evolved complex multicellular life. These have most of the advantages of evil aliens (assuming planets with multicellular life are fairly rare; if not, consider a different threshold), but a lot more nature preserves and more opportunities to study exobiology.
Robin Hanson's model of quiet vs loud aliens seems fundamentally the same as this question, to me.
Thank you! However, colonialism also has a moral aspect: I conjectured that any system of minds eventually converges to either respecting weaker minds or disrespecting them. If humanity creates an AI that treats mankind the way an evil explorer treats an alien race it has met, then the AI would obviously be misaligned. But what if the AI treats mankind the way a good explorer or a progressor would?
If I am not misunderstanding, you are asking whether all aliens fall into two categories: good explorers and evil colonizers.
When you put it this way, it does seem like the equilibrium favors the "evil colonizers", as they would have access to more resources, giving them a decisive advantage in the long run, not to mention that they have the "dark forest" option of destroying budding civilizations with long-range weaponry, assuming physics permits that.
The "good explorers" would likely think of that (unless they are as dumb as we are right now), switch to a more aggressive stance, and use as many resources as they safely can while securing their alien hosts (say, dismantling every planet except the inhabited ones to build orbital defenses).
I think this will all come down to the balance of defense vs offense: if it is much easier to destroy than to protect, then the first evil coloniser would have a decisive advantage, like a fox in a hen house.
Otherwise we would just have a galactic stalemate with every civilization holding on to their chunk of the cosmos.
But again, this decision would probably be made by something much smarter than you or I, since the only relevant actors are the superintelligent ones, and smart agents would likely converge on the same optimal strategy and just negotiate from there; after all, there is no point in fighting a war if you can simulate the end result.
But you know what? Since we aren't dead yet, we are probably not living in the hen-house scenario.
Agreed. However, as I detailed in the last paragraph, this dilemma is also usable as an alignment target: the evil colonizer/the evil AI created by us will eagerly wipe out primitive races (for that matter, does this include us?), while the good explorer will respect primitive races and try to protect them from evil colonisers (and, upon reflection, from other threats like self-destruction?).
Depending on the parameter estimates, the Drake equation yields drastically different numbers of contactable civilisations in the Milky Way. Some parameter choices imply that such civilisations are absent because life is extremely rare, while others suggest that we just happened to be the first civilisation in the galaxy[1] that is likely to reach other systems in the foreseeable future.
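To make the parameter sensitivity concrete, here is a minimal sketch of the Drake equation with two purely illustrative parameter sets (the specific values are assumptions for illustration, not published estimates):

```python
# Drake equation: N = R* * f_p * n_e * f_l * f_i * f_c * L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of currently contactable civilisations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Optimistic guess: life and intelligence arise easily, civilisations stay detectable for long.
optimistic = drake(r_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.5, f_c=0.5, lifetime=1e6)

# Pessimistic guess: identical, except that abiogenesis is extremely rare.
pessimistic = drake(r_star=3, f_p=1.0, n_e=0.2, f_l=1e-8, f_i=0.5, f_c=0.5, lifetime=1e6)

print(optimistic, pessimistic)  # ~150000 vs ~0.0015 contactable civilisations
```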
The diameter of the Milky Way is less than 100,000 light years. Meanwhile, even currently existing proposals like the fission-fragment rocket are estimated to allow transportation at speeds of at least 0.05 times the speed of light. If an AI tried to colonize the entire galaxy, it would need at most[2] about 10 million years, which is, apparently, at least 2.5 OOM less than the length of the age[3] when sapient life might appear in a stellar system in the Milky Way.
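The timescale claim is a back-of-the-envelope calculation; in the sketch below, the length of the window is an assumption chosen to be consistent with footnote [3]:

```python
import math

diameter_ly = 1e5                   # Milky Way diameter, order of magnitude
speed_c = 0.05                      # fission-fragment-rocket cruise speed, fraction of c
crossing_yr = diameter_ly / speed_c        # 2e6 years of pure travel
slowdown = 5                        # upper bound on resource-gathering overhead, see footnote [2]
colonisation_yr = crossing_yr * slowdown   # ~1e7 years in total

window_yr = 4e9                     # assumed length of the window in which sapient life
                                    # might appear in a system (cf. footnote [3])
print(math.log10(window_yr / colonisation_yr))   # ~2.6 orders of magnitude
```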
Therefore, many of the encountered planets with life will contain only non-sapient life. But a primitive alien civilisation, or a planet which might generate a sapient alien lifeform, would, as I argued here, have some kind of rights to its system and to its part of space. Yet the lifeform's fate depends only on the will of the discoverers.
Suppose that, in systems likely to be reached by currently primitive aliens, races of good explorers establish only outposts that consume a tiny amount of resources and protect the system, while races of evil explorers gather most of the resources and use them for their own goals.[4] The equilibrium between good explorers and evil ones is likely to be unstable, and a third option doesn't seem to exist.
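A toy growth model (all numbers below are invented for illustration, not part of the argument above) shows why even a small, systematic difference in exploited resources compounds over successive expansion waves:

```python
# Toy model: two expansion strategies, assuming the "good" one forgoes a small
# fraction of systems. All parameters are invented for illustration only.
protected = 0.01   # assumed fraction of systems hosting (proto-)sapient life
hops = 50          # number of expansion waves
good = evil = 1.0  # relative resources accumulated under each strategy
for _ in range(hops):
    evil *= 2.0                     # exploits every system it reaches
    good *= 2.0 * (1 - protected)   # leaves the protected systems almost untouched
print(round(evil / good, 2))        # ~1.65: a small but steadily compounding edge
```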
An additional implication is the following. Since a third option doesn't seem to exist, does any collection of minds converge to one of the two attractors? Given that humanity is unlikely to return to the colonialist attractor, how can humans increase the chance that the AI created by good actors[5] will also be aligned to the anti-colonialist attractor? And what is the chance that an AI aligned to that attractor won't destroy humanity?
Humans might also be the first civilisation in the spacetime cone from which a civilisation armed with advanced tech could reach the Earth. If this is true, then humans (or human-created AIs) would encounter aliens (or their AIs) only after both sides had begun space colonisation. But then the two sides of a potential conflict would likely have comparable power.
Flying from the Solar System to Alpha Centauri at 0.05 times the speed of light requires about 90 years. If the time during which all the feasible resources are gathered is less than four times longer, then colonisation is slowed down by a factor of less than 5. The doubling time of a robot economy on the Moon is currently estimated to be about a year, and access to water or an atmosphere is thought to let the AI significantly decrease this estimate by using the techniques described in more detail in the AI-2027 scenario.
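A sketch of the arithmetic behind this bound (the distance and speed are the figures used in the main text; the harvesting time is the assumed worst case):

```python
distance_ly = 4.37                  # Sol to Alpha Centauri
speed_c = 0.05                      # cruise speed as a fraction of light speed
travel_yr = distance_ly / speed_c             # ~87 years per hop
harvest_yr = 4 * travel_yr                    # assumed upper bound on resource gathering
slowdown = (travel_yr + harvest_yr) / travel_yr
print(round(travel_yr), slowdown)             # 87 years, slowdown factor of 5.0
```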
For example, Epsilon Eridani, whose mass is about 0.8 times that of the Sun, formed less than a billion years ago. The habitable zone of ε Eridani is located at a distance of 0.5-1 AU from the star. Had it contained Earth-like planets, there would be a chance to observe the appearance of sapient life there in four or five billion years.
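The quoted habitable-zone range roughly matches a simple Earth-equivalent-insolation estimate; the luminosity value below is an approximate literature figure:

```python
import math

luminosity = 0.34                        # ε Eridani luminosity in solar units (approximate)
d_earth_equiv_au = math.sqrt(luminosity) # distance receiving Earth-like insolation
print(round(d_earth_equiv_au, 2))        # ~0.58 AU, inside the 0.5-1 AU range quoted above
```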
Ironically, if aliens exist, then such goals might also include the goal to "fill the Universe with utopian colony worlds populated by Americans and their allies", as the Safer AI does in the AI-2027 scenario. However, the scenario's authors simply assume that aliens don't exist.
Unlike the convergent morality scenario, this argument includes the possibility that an AI raised in a misaligned culture, or expected to perform certain kinds of activity, becomes misaligned itself. The question "Which modern cultures, or parts thereof, are aligned?" is very close to politics. The other question, "Does asking the AI to create the Deep Utopia ensure misalignment?", has, as far as I am aware, been discussed only by me.