Let's assume that Eliezer is right: soon we'll have an AGI that is very likely to kill us all. (Personally, I think Eliezer is right.)

There are several ways to reduce the risk, in particular speeding up alignment research and slowing down capabilities research by various means.

One underexplored way to reduce the risk is active SETI, also known as METI (messaging extraterrestrial intelligence).

The idea is as follows:

  • Send powerful radio signals into space: "guys, soon we'll be destroyed by a hostile AGI. Help us!" (e.g. using a language constructed for the task, like Lincos; a toy encoding sketch follows this list)
  • If a hostile alien civilization notices us, we're going to die. But if we're going to die from the AGI anyway, who cares?
  • If a benevolent alien civilization notices us, it could arrive in time to save us. 
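
For concreteness, here is a minimal sketch of what "packaging" such a message might look like, in the spirit of the 1974 Arecibo message rather than actual Lincos: a pictogram flattened into a bit string whose length is the product of two primes, so that a recipient can unambiguously refold it into a 2-D grid. The bitmap contents and dimensions below are made up purely for illustration.

```python
# Toy sketch: pack a tiny "distress" pictogram into an Arecibo-style bit string.
# The 5x7 bitmap below is a placeholder, not a real Lincos message; the only
# real idea shown is using prime dimensions so that the total length (35)
# factors uniquely and a recipient can recover the grid shape.

WIDTH, HEIGHT = 5, 7  # both prime, so WIDTH * HEIGHT factors unambiguously

DISTRESS_BITMAP = [   # 1 = pulse, 0 = silence; a crude "!" shape
    "00100",
    "00100",
    "00100",
    "00100",
    "00100",
    "00000",
    "00100",
]

def to_bitstream(bitmap):
    """Flatten the bitmap row by row into the bit string we would transmit."""
    assert len(bitmap) == HEIGHT and all(len(row) == WIDTH for row in bitmap)
    return "".join(bitmap)

def from_bitstream(bits, width, height):
    """What a recipient would do: refold the bit string into a grid."""
    assert len(bits) == width * height
    return [bits[i * width:(i + 1) * width] for i in range(height)]

if __name__ == "__main__":
    bits = to_bitstream(DISTRESS_BITMAP)
    print(f"{len(bits)} bits to transmit: {bits}")
    for row in from_bitstream(bits, WIDTH, HEIGHT):
        print(row.replace("1", "#").replace("0", "."))
```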

The main advantage of the method is that it can be implemented by a small group of people within a few months, without governments and without billions of dollars. Judging by the running costs of the Arecibo Observatory, one could theoretically rent it for a year for only $8 million. Sending just a few hundred messages into space could be even cheaper.

Obviously, the method relies on the existence of an advanced alien civilization within a few light-years of Earth. That existence seems unlikely, but who knows.

Is it worth trying?

avturchin (May 05, 2023)

It actually may work, but not because aliens will come to save us (there is no time). Rather, any signal we send into space will reach any given star ahead of the intelligence-explosion wave, so aliens may learn about the potentially hostile nature of our AI before it arrives.
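
A quick way to see the timing claim (a back-of-the-envelope sketch; here $t_0$ is when we transmit, $t_1 \ge t_0$ is when the AI begins expanding, $v \le c$ is the speed of its expansion wave, and $d$ is the distance to a given star):

$$t_0 + \frac{d}{c} \;\le\; t_1 + \frac{d}{v},$$

with strict inequality whenever the AI launches after the message or expands slower than light, so the message arrives first.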

Our AI will know all this, and if it wants to have better relations with aliens, it may invest some resources in simulating friendliness. Cheap for the AI, cheap for us.

BrooksT (May 05, 2023)

So imagine you're living your life as you are today, and somehow a message arrives from a remote tribe that has never had contact with civilization before. The message reads: "Help! We've discovered fire and it's going to kill us! Help! Come quick!"

What do you do?

I'm not sure this analogy teaches us much. A lot depends on what the surprise is: that there is a civilization out there that knows how to communicate but hasn't until now, or that the civilization we've been leaving alone for Prime Directive reasons has finally discovered fire. A lot also depends on whether we take that fear seriously.

The answer could be any of:

  • come teach them to be safe, knowing it won't be the last interference we make.
  • open the interference floodgates: come study/exploit them, force them to be safe.
  • kill them before their fire escapes.
  • build firebreaks so they can't hurt others, but can advance by themselves.

Dagon (May 05, 2023)

I'm not sure how you envision "sending signals into space" as noticeably different from what we've been doing for the last 100 years or so.  Any civilization close enough to hear a more directed plea, and advanced enough to intervene in any way, is already monitoring the Internet and knows whatever some subset of us could say.

Internet communications are mostly encrypted, so merely intercepting our traffic wouldn't reveal much; such a civilization would need to 1) be familiar with our network protocols, particularly TLS, 2) inject its own signals into our networks, and 3) somehow capture a response.

I'm not sure our radio noise is powerful enough to be heard at interstellar distances. Something like the Arecibo message is much more likely to reach another solar system. 

It could also be important to specifically send a call for help. A call for help indicates our explicit consent to intervention, which could matter to an advanced civilization that follows something like a non-intervention rule.

Dagon:
True enough. My point is that anyone who CAN help is close enough and tech-advanced enough to have noticed the signals long ago, and to have gotten closer (or at least sent listening stations) in order to hear. Aliens who have not already done so almost certainly can't. The specific call for help is problematic in a different way: the post is unclear about exactly who has the standing and ability to make the request.

Desiderata for an alien civ that saves us:

  • able to meaningfully fight a hard superintelligence
  • either not yet grabby but within distance to get here, or already grabby and on their way to us
  • isn't simply a strict maximizer gone grabby itself
  • won't just want everything that came from Earth dead as a result of retributive feelings towards grabby alien spores sent out to grow on nearby planets by the maximizer we generated

Up to y'all whether the intersection of these criteria sounds likely.

Regarding "able to meaningfully fight a hard superintelligence": I am pretty sure that the OP's main source of hope is that the alien civ will intervene before a superintelligence is created by humans.

ProgramCrafter:
This is not strictly necessary, because we can at least save our genetic information, for the possibility of being recreated in the future. So we can also hope for a friendly alien civ to defeat the superintelligence even if humanity is extinct. If a hostile superintelligence takes over the universe, it's not likely that humans will ever be recreated.

PeterMcCluskey (May 05, 2023)

"If a hostile alien civilization notices us, we're going to die. But if we're going to die from the AGI anyway, who cares?"

Anyone with a p(doom from AGI) < 99% should conclude that the harm from this outweighs the likely benefits.


Not sure about that. It depends on the proportion of alien civilizations that would cause more harm than good upon contact with us, and that proportion is unknown.

A common argument is that an interstellar civilization must be sufficiently advanced in both technology and ethics, but I don't find that argument very convincing.


Even if an alien civilization isn't benevolent, it would probably have more than enough selfish reasons to prevent a superintelligence from appearing on another planet.

So the question is whether they would be technologically advanced enough to arrive here within 5, 10, or 20 years, or whatever time we have left until AGI.

An advanced civilization that isn't itself a superintelligence would probably have already faced an AI extinction scenario and come through it, so it would stand a much better chance of aligning an AI than we do. But previous success at aligning an AI wouldn't guarantee future success.

Since we are certainly more stupid than said advanced alien civilization, they would either have to suppress our freedom, at least partially, or find a way to make us smarter or more risk-averse.

Another question is whether the s-risk from an alien civilization is worse than the s-risk from a superintelligence.