The Octopus, the Dolphin and Us: a Great Filter tale

Imagine if we had made a replicator, demonstrated that it could make copies of itself, established with as high confidence as we could that it could survive the trip to another star, and launched more than 100,000 of them toward all sorts of stars in the neighbourhood. They would eventually (very soon compared to a billion years) visit every star in the galaxy, and that would tell us a lot about the Fermi paradox and the Great Filter.
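To see why the timescale is short, here is a minimal back-of-envelope sketch; the probe speed, hop count, and replication delay are my own illustrative assumptions, not figures from the comment:

```python
# Rough check of the claim that self-replicating probes cover the galaxy
# "very soon compared to a billion years". All parameters are assumptions.

GALAXY_DIAMETER_LY = 100_000   # approximate diameter of the Milky Way disc
PROBE_SPEED_C = 0.01           # assumed cruise speed: 1% of light speed
HOPS = 1_000                   # assumed star-to-star hops to cross the disc
REPLICATION_DELAY_YR = 500     # assumed time to build copies at each stop

travel_time_yr = GALAXY_DIAMETER_LY / PROBE_SPEED_C   # pure transit time
replication_time_yr = HOPS * REPLICATION_DELAY_YR     # time spent replicating
total_yr = travel_time_yr + replication_time_yr

print(f"Transit time:     {travel_time_yr:,.0f} years")
print(f"Replication time: {replication_time_yr:,.0f} years")
print(f"Total:            {total_yr:,.0f} years (~{total_yr/1e9:.3f} billion years)")
```

Under these assumptions the expansion front crosses the galaxy in roughly ten million years, which is indeed very short compared to a billion.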

As I said before (discounting the planetarium hypothesis), we could then have a high degree of confidence that the Great Filter was behind us. It couldn't really be the case that thousands of civilizations in our galaxy had done such a thing and then changed their minds and destroyed all the replicators: some of those civilizations would probably have destroyed themselves between letting the replicators loose and changing their minds, and others would never have changed their minds or cared about the replicators at all. Either way we would see evidence of their replicators in our solar system, and we don't.

The other way we could be sure the filter is behind us is to successfully navigate the Singularity (keeping roughly the same values). That is obviously MUCH harder to have confidence in.

If our goal is to make sure the filter is behind us, then it is best to do it with a plan we can understand and quantify. Holding off human-level AI until the replicators have been let loose seems to be the highest-probability way to do that, but no one has proposed such a thing before now, as far as I am aware.

The Octopus, the Dolphin and Us: a Great Filter tale

Thanks for the comment. Yes, I agree that if we had made such a replicator and set it loose, that would say a lot about the filter. To claim that the filter was still ahead of us in that case, you would need the far more bizarre claim that we would, with almost 100% probability, seek out and destroy the replicators, that almost all similar civilizations would do the same, and that none of them would then expand again.

I am not sure that a highly believable model would get us most of the way, because there may be only a short window between having such a model and AI developments changing things so the replicator never gets built. In mankind's case it seems quite believable that very little time would pass between being able to build such a thing and going full AI, so to be sure you would actually have to build it and let it loose.

I am not sure why it isn't given much more attention. Perhaps many people don't believe that AI can be part of the filter (e.g. the site overcomingbias.com). I also expect there would be massive moral opposition from some people to letting such a replicator loose: how dare we disturb the whole galaxy in such an unintelligent way! That's why I mention the simple version that just rearranges small asteroids. It would not wipe out life as we know it, but it would prove that we were past the filter, since such a thing has not been done in our galaxy. I would certainly be interested in seeing it researched. Perhaps someone with more kudos can promote it?

A replicator would likely be a consequence of asteroid mining anyway, as the best, cheapest way to get materials from asteroids is to make the whole process automatic.

The Octopus, the Dolphin and Us: a Great Filter tale

This person claims that every AI will rationally kill itself and that the Great Filter therefore comes after AI: http://www.science20.com/alpha_meme/deadly_proof_published_your_mind_stable_enough_read_it-126876 (I haven't got the paper, but even if it is correct, to me it still would not explain the filter fully, because a civilization could make a simple interstellar replicator, e.g. a light-sail-propelled asteroid-mining robot, and let it loose before going full AI, and we see no evidence of these.)

Also, what about the planetarium / galactic zoo / enforced non-interference possibility? Say that 99% of the time an AI will take over the light cone destructively, but 1% of the time it will prefer to watch and catalog intelligence as it arises, then quietly wipe a civilization out when it becomes annoying, tries to colonize other stars, and so interferes with the other experiments. Or, more kindly, it could welcome us to the galaxy and stop us from wiping out other civilizations, etc.

For us it would mean that we got lucky with that 1% chance, say a billion years ago, when the first intelligent civilization arose, spread through the galaxy/light cone, and made the watching/enforcing AI (or made the watching AI and then fought itself, etc.). There could have been ~1 million space-faring civilizations in the galaxy since, and we would be nothing special at all, on an average star in the middle age of the universe. In that case the filter is, in a sense, ahead of us, because we cannot expand and colonize: the much more advanced AI would stop us.

Either way, if we make a simple replicator and have it successfully reach another solar system (one with possibly habitable planets), that would seem to demonstrate that the filter is behind us. We would then have done something we can be sure no one else in the galaxy has done before, since, as I have said, we see no evidence of such replicators. I am talking about one that could not land on planets, only rearrange asteroids and similar objects with very low gravity.

2013 Less Wrong Census/Survey

Same; that's pretty much why I chose to cooperate.

2013 Less Wrong Census/Survey

Yes, I did the survey. PW: one two.

Firstly, I should also say that assigning probabilities to things that are either very unlikely or very unknown is not very helpful. For example, aliens: I just don't know, and as others have pointed out, are God and a simulation master the same thing? Likewise, the probability of us being Boltzmann brains, or something similarly weird, is undefined, since it involves summing over a multiverse that is uncountably infinite, etc. For the simulation hypothesis I think we simply can't give a sensible number.

On a more general note, for friendly/unfriendly AI I think more attention should go to the social and human aspect. I don't see what maths proofs have to offer here. We already know you can potentially get bad AI: take an evil person, give them a brain upload, self-modification powers, etc., and they may well self-modify to make themselves even more evil and more powerful, switch off their conscience, and so on. What the boundaries of this are we don't know, and we need actual experiments to find out. Also, how one person behaves and how a society of self-modifiers behaves could well be very different matters. What we want to know is whether a wide range of people with different values converge or diverge when given these powers.

Normal Ending: Last Tears (6/8)

It could be the case that civilization always goes down something like the Super Happy route, but without such rationality. Rather than being disappointed about not achieving space travel, they just switch off the disappointment. There would be no reason for ambition; you can simply give yourself the feeling of satisfied ambition without actually achieving anything. Once you have access to your own source code, perhaps things always end up that way.

Decoherence is Simple

How do you explain this with many-worlds while avoiding non-locality? http://arxiv.org/pdf/1209.4191v1.pdf If results like these are easy to explain or predict, can the many-worlds theory gain credibility by predicting them?