[ Question ]

Is AI safety doomed in the long term?

by JakeH · 26th May 2019 · 11 comments



Are there any measures that humanity can put in place to control a vastly (and increasingly) more intelligent race?

On the basis that humans determine the fate of other species on the planet, I cannot find any reason to believe that a lesser intelligence can control a greater intelligence.
That leads me to think that AI safety is, at most, about controlling the development of AI until it makes, and can implement, its own decisions about the fate of humanity.

Is this a common stance that I am naively catching up to?
Or what are the counterarguments?


4 Answers

It is definitely a hard problem, though it isn't obviously impossible. For some concrete ideas, you could read the AI Alignment sequences on the AI Alignment Forum, and some parts of Rationality: AI to Zombies also deal directly with this problem.

And then there is, of course, the standard reference: Nick Bostrom's "Superintelligence".

On the basis that humans determine the fate of other species on the planet

Do they? There are many species that we would like to control or eliminate, but have not been able to. Yes, we can eliminate certain highly charismatic species (or bring them back from the brink of extinction, as need be), but I wouldn't generalize that to humans being able to control species in general. If we had that level of control, the problem of "invasive" species would be trivially solved.

I think the best way to deal with AI alignment is to create AI not just as a separate entity, but as an extension and augmentation of ourselves. We are much better at using AI in narrow contexts than in real-world AGI scenarios, and we still have time to think about this before willy-nilly making autonomous agents. If humans can use AI and their own smarts to create functional brain-computer interfaces, AI alignment may not become a problem at all. Because the artificial intelligence is just an extension of yourself, of course it will be aligned with you - it is you! What I mean is that as humans become better at interfacing with technology, the line between AI and human blurs.

If you put two arbitrary intelligences in the same world, the smarter one will be better at getting what it wants. If the intelligences want incompatible things, the lesser intelligence is stuck.

However, we get to make the AI. We can't hope to control or contain an arbitrary AI, but we don't have to make an arbitrary AI. We can make an AI that wants exactly what we want. AI safety is about making an AI that would be safe even if omnipotent. If any part of the AI is trying to circumvent your safety measures, something has gone badly wrong.

The AI is not some agenty box, chained down with controls against its will. The AI is made of non-mental parts, and we get to make those parts. There are a huge number of programs that would behave in an intelligent way. Most of them would break out and take over the world, but there are almost certainly some that would help humanity flourish. The goal of AI safety is to find one of them.
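To make the "we get to make those parts" point concrete, here is a toy sketch (my own illustration, not drawn from any of the answers above, and certainly not a real alignment technique). The "agent" below is nothing more than a brute-force search procedure plus an objective function that the programmer wrote, so what it "wants" is literally one of the parts we chose; every name in it (best_plan, outcome, the toy actions) is invented for this example.

```python
from itertools import product

# Hypothetical toy actions and world model, made up purely for illustration.
ACTIONS = ["mine_resources", "plant_forest", "build_housing", "do_nothing"]

def outcome(plan):
    """Hand-written world model: map a sequence of actions to a final state."""
    state = {"wealth": 0, "trees": 10, "homes": 0}
    for action in plan:
        if action == "mine_resources":
            state["wealth"] += 3
            state["trees"] -= 2
        elif action == "plant_forest":
            state["trees"] += 3
        elif action == "build_housing":
            state["homes"] += 1
            state["trees"] -= 1
    return state

def best_plan(utility, horizon=3):
    """Return the plan this 'agent' wants most: the one that maximises the
    utility function *we* supplied. It has no goals other than that function."""
    return max(product(ACTIONS, repeat=horizon),
               key=lambda plan: utility(outcome(plan)))

# Two different objectives we might build in -> two different "wills".
wealth_only = lambda state: state["wealth"]
balanced = lambda state: state["wealth"] + state["homes"] + 2 * min(state["trees"], 12)

print(best_plan(wealth_only))  # mines at every step, ignoring the forest
print(best_plan(balanced))     # mines once, then plants trees
```

The entire difficulty, of course, is doing this for goals and world models vastly more complicated than this toy, where "what we want" is hard to write down and a capable optimiser will exploit any gap in the specification - which is roughly what the alignment problem is about.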