The germ of an idea

by Stuart_Armstrong
13th Nov 2014
1 min read
Personal Blog

11 comments

Gunnar_Zarncke (11y)

The difficult part seems to be how to create indifference without taking away the ability to optimize.

ADDED: If I apply the analogy of an operational amplifier directly, then it appears as if indifference can only be achieved by taking away the feedback - and thus any control. But with AI we could model this as a box within a box (possibly recursively), where only the feedback into the inner boxes is compensated. Does this analogy make sense?

roystgnr (11y)

"the same idea can make an Oracle not attempt to manipulate us through its answers, by making it indifferent as to whether the message was read."

This sounds like an idea from Wei Dai 2 years ago. The specific concern I had then was that the larger the Oracle's benefit to humanity, the less relevant predictions of the form "what would happen if nobody could read the Oracle's prediction" become.

The general concern is the same as with other Oracle AIs: a combination of an Oracle and the wrong human(s) looks an awful lot like an unfriendly AI.

[anonymous] (11y)

Your idea of AI indifference is interesting.

Who will determine if the AI is misbehaving? I fear humans are too slow and imprecise. Another AI, perhaps? And a third AI to look over the second AI? A delicately balanced ecosystem of AIs?

Toggle (11y)

It seems very difficult to prevent an AI from routing around this limitation by going meta. It can't actually account for its utility penalty, but can it predict the actions of a similar entity that could, and notice how helpful that would be? An AI that couldn't predict the behavior of such an entity might be crippled in some serious fundamental ways.

cameroncowan (11y)

Why are we presuming that AI is going to have a personal agenda? What if AI were dispassionate and simply didn't have an agenda beyond the highest good for humanity? I'm sure someone will say "you're presuming it's friendly", and I'm saying: what if it's nothing? What if it simply does what we tell it, or seeks our approval, because it has no agenda of its own?

Gunnar_Zarncke (11y)

"beyond the highest good for humanity"

Because that is not as simple as it sounds.

cameroncowan (11y)

Indeed that is a tall order.

passive_fist (11y)

Nobody's presuming it has a 'personal agenda'. It's quite possible for it to think that it's just following our orders, when in fact it's become highly dangerous (see: paperclip maximizer). Come to think of it, this describes a lot of human history quite well.

I agree with the broader argument that paranoia won't solve anything. We should view the AI - no matter how complicated - as something that is just following a program (exactly like humans). Everything it does should be judged in the context of that program.

cameroncowan (11y)

Who decides what that program is? What courses of action should it take? Should that be a democratic process? Under the current system there would be no oversight in this area.

ChristianKl (11y)

"Who decides what that program is?"

The person who creates it.

cameroncowan (11y)

And that doesn't fill you with fear?

The germ of an idea

Apologies for posting another unformed idea, but I think it's important to get it out there.

The problem with dangerous AI is that it's intelligent, and thus adapts to our countermeasures. If we did something like plant a tree and order the AI not to eat the apple on it, as a test of its obedience, it would easily figure out what we were doing, and avoid the apple (until it had power over us), even if it were a treacherous apple-devouring AI of DOOM.

When I wrote the AI indifference paper, it seemed to show a partial way around this problem: the AI would become indifferent to a particular countermeasure (in that example, explosives), so it wouldn't adapt its behaviour around it. It seems that the same idea can make an Oracle not attempt to manipulate us through its answers, by making it indifferent as to whether the message was read.
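
To make the Oracle case concrete, here is a rough toy sketch of what indifference to "the message being read" buys us - the answer set, utilities and numbers are all made up for illustration, and this only captures the behavioural upshot, not the actual construction from the indifference paper:

```python
# Toy model: an Oracle picks the answer that maximises its utility.
# A manipulative answer only pays off if the message is actually read.
# All names and numbers are illustrative, not a real construction.

answers = {
    # answer: (value of the answer if nobody reads it,
    #          extra utility gained by manipulating whoever reads it)
    "honest_prediction":       (1.0, 0.0),
    "manipulative_prediction": (0.4, 5.0),
}

def utility(answer, message_is_read):
    value_unread, manipulation_gain = answers[answer]
    return value_unread + (manipulation_gain if message_is_read else 0.0)

def oracle_choice(indifferent_to_reading):
    # An ordinary Oracle knows its message will be read, so manipulation pays.
    # An Oracle made indifferent to whether the message is read evaluates its
    # answers as if the message were never read, so manipulation gains nothing.
    read = not indifferent_to_reading
    return max(answers, key=lambda a: utility(a, message_is_read=read))

print(oracle_choice(indifferent_to_reading=False))  # manipulative_prediction
print(oracle_choice(indifferent_to_reading=True))   # honest_prediction
```

The real construction works through the utility function rather than by giving the Oracle a literal false belief, but the point is the change in behaviour: the manipulative answer stops paying.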

The idea I'm vaguely groping towards is whether this is a general phenomenon - whether we can use indifference to prevent the AI from adapting to any of our efforts. The second question is whether we can profitably use it on the AI's motivation itself. Something like the reduced impact AI, reasoning about what impact it could have on the world: it has a penalty function for excessive impact - but maybe that's gameable; maybe there is a pernicious outcome that doesn't have a high penalty, if the AI aims for it exactly. But suppose the AI could calculate its impact under the assumption that it didn't have a penalty function (utility indifference is often equivalent to having incorrect beliefs, but less fragile than that).

So if it were a dangerous AI, it would calculate its impact as if it had no penalty function (and hence no need to route around one), would thus calculate a large impact, and would be penalised for it.
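
To make this slightly less hand-wavy, here is a toy sketch of the contrast - the plans, impact estimates and penalty weight are all invented, and this is just one possible way of cashing out "calculate impact as if there were no penalty function":

```python
# Toy model: a reduced impact AI scores plans as utility minus an impact penalty.
# "gamed_plan" stands for a pernicious outcome the penalty function happens to
# miss: its measured impact is low but its real impact is large.
# Everything here is illustrative; it is not the actual reduced impact setup.

PENALTY_WEIGHT = 10.0

plans = {
    # plan: (utility to the AI,
    #        impact as the AI estimates it when routing around the penalty,
    #        impact as the AI would estimate it with no penalty to route around)
    "modest_plan": (1.0, 0.1, 0.1),
    "gamed_plan":  (5.0, 0.1, 4.0),
}

def score(plan, indifferent_to_penalty):
    utility, gamed_impact, honest_impact = plans[plan]
    # An AI that adapts to the penalty aims exactly for outcomes the penalty
    # underestimates; one that calculates impact as if it had no penalty
    # function produces the large, honest estimate and gets penalised for it.
    impact = honest_impact if indifferent_to_penalty else gamed_impact
    return utility - PENALTY_WEIGHT * impact

for indifferent in (False, True):
    best = max(plans, key=lambda p: score(p, indifferent))
    print("indifferent to penalty:" if indifferent else "adapting to penalty:", best)
```

If that is roughly the right way to read it, the gaming incentive disappears because the impact estimate is produced as if there were nothing to route around.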

My next post will be more structured, but I feel there's the germ of a potentially very useful idea there. Comments and suggestions welcome.