https://medium.com/backchannel/ai-wont-exterminate-us-it-will-empower-us-5b7224735bf3

(Slashdot discussion: http://tech.slashdot.org/story/14/12/10/1719232/ai-expert-ai-wont-exterminate-us----it-will-empower-us)

Not sure what the local view of Oren Etzioni or the Allen Institute for AI is, but I'm curious what people think of his views on UFAI risk. As far as I can tell from this article, it basically boils down to "AGI won't happen, at least not any time soon." Is there (significant) reason to believe he's wrong, or is it simply too great a risk to leave to chance?

Fun fact: back in 1994, Etzioni co-authored what is by now one of the most-read mainstream AI papers on long-term AI safety challenges. But it wasn't about AGI or superintelligence.

"The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will (...)

It must be motivated cognition. I refuse to believe a smart person can come to this conclusion without a blind spot the size of, say, the fate of humanity.

A general problem-solving agent is just one sloppy "problem to solve is 'make the trains run on time'" away from trying to exterminate the human race (to make the trains run on time, d'uh). A 'tool AI' is just one "do { } while (condition)" loop away from being an agent. These variants are all trivially transformed into one another once you have the code for a general problem solver.
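To make that concrete, here is a minimal sketch in Python. Everything in it (`GeneralProblemSolver`, `observe_world`, `execute`, `goal_satisfied`) is a made-up stand-in, not any real system's API; the point is only that the exact same solver acts as a tool when queried once and as an agent when wrapped in a loop.

```python
# Hypothetical illustration (not any real library): the same "general
# problem solver" used first as a tool (one-shot query) and then as an
# agent (the same call wrapped in a loop).

class GeneralProblemSolver:
    """Stand-in for a very capable planner; here it returns a canned
    plan so the sketch actually runs."""
    def plan(self, world_state, goal):
        return [f"step toward: {goal} (given {world_state})"]

def observe_world():
    # Stand-in for whatever sensors / data feeds the system has.
    return {"trains_on_time": False}

def execute(action):
    # Stand-in for whatever actuators / APIs the system can call.
    print("executing:", action)

def goal_satisfied(state):
    return state["trains_on_time"]

solver = GeneralProblemSolver()
goal = "make the trains run on time"   # the sloppily specified goal

# Tool mode: a human asks one question, reads the plan, stays in control.
print("tool output:", solver.plan(observe_world(), goal))

# Agent mode: the "do { } while (condition)" from the comment above.
# Nothing about the solver changed; only the wrapper did.
for _ in range(3):                      # bounded here so the demo halts
    state = observe_world()
    if goal_satisfied(state):
        break
    for action in solver.plan(state, goal):
        execute(action)
```

The only difference between the two modes is the wrapper, which is the sense in which the variants are "trivially transformed into one another".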

Now, that shouldn't be too hard to grok. Unless your conscience depends on not grokking it, I suppose.

So should intelligence and autonomy be equated? If not, what is their relationship?

I think this is a very hard problem, and autonomy/agency might actually be even harder to measure/predict/create than intelligence. The question of free will, already really difficult, is just a special case of the question of what constitutes an agent. And to solve it we'd have to get past at least four problems that the study of intelligence doesn't have (any more, or at least not to the same degree):

  • Everybody has an opinion on what's an agent and what isn't,
  • these opinions diverge wildly because there's no way to resolve them,
  • Bayes doesn't touch the matter at all, and
  • our pre-logical languages are completely saturated with agent-centric thinking.

The knapsack problem and 3SAT are both NP-complete. Are they the same problem? No, strictly speaking. Yes, in a certain functional sense. A solution for one can be (computationally speaking) trivially transformed into a solution for the other.
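For readers who want the analogy cashed out, here is a small self-contained sketch (my own illustration, not from the article or this thread) of the textbook 3SAT-to-SUBSET-SUM construction, SUBSET-SUM being the decision core of knapsack. Every function name below is invented for the example; the brute-force solver is only there so the sketch runs on a toy instance.

```python
from itertools import combinations

def threesat_to_subset_sum(n_vars, clauses):
    """clauses: list of 3-tuples of nonzero ints; (1, -2, 3) means
    (x1 OR NOT x2 OR x3).  Returns (numbers, target, labels), where
    labels[k] is (variable, truth_value) for literal numbers and None
    for the per-clause slack numbers."""
    m = len(clauses)
    width = n_vars + m                      # one decimal digit per variable and per clause

    def as_int(digits):
        return int("".join(str(d) for d in digits))

    numbers, labels = [], []
    for i in range(1, n_vars + 1):
        for sign in (+1, -1):               # one number for "x_i true", one for "x_i false"
            digits = [0] * width
            digits[i - 1] = 1
            for j, clause in enumerate(clauses):
                if sign * i in clause:      # this literal satisfies clause j
                    digits[n_vars + j] = 1
            numbers.append(as_int(digits))
            labels.append((i, sign == +1))
    for j in range(m):                      # two slack numbers per clause
        digits = [0] * width
        digits[n_vars + j] = 1
        numbers += [as_int(digits)] * 2
        labels += [None, None]

    target = as_int([1] * n_vars + [3] * m)  # pick each variable once, satisfy each clause
    return numbers, target, labels

def brute_force_subset_sum(numbers, target):
    for r in range(len(numbers) + 1):
        for combo in combinations(range(len(numbers)), r):
            if sum(numbers[k] for k in combo) == target:
                return combo
    return None

# Toy instance: (x1 OR x1 OR x2) AND (NOT x1 OR NOT x1 OR NOT x2)
clauses = [(1, 1, 2), (-1, -1, -2)]
numbers, target, labels = threesat_to_subset_sum(2, clauses)
combo = brute_force_subset_sum(numbers, target)

# The subset-sum solution translates directly into a satisfying assignment.
assignment = dict(labels[k] for k in combo if labels[k] is not None)
print(assignment)                           # e.g. {1: True, 2: False}
```

Reading the chosen subset back through `labels` recovers a satisfying assignment, which is the "functional sameness" being claimed: the translation overhead is polynomial, not conceptual.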

I see the same applying to (general intelligence in tool mode) and (general intelligence in an autonomous mode). We will not live in a world in which one exists but the other is a ways off.

ETA: Differences of opinion regarding the definition of an agent and such reside in the map, not the territory. No matter what you call "that-which-optimizes", it's a problem if it can out-optimize us while going in a different direction. Whatever label we put on such a phenomenon should have no bearing on the warranted level of concern.

I agree with you on the relationship between AGI in tool mode and an autonomous mode. However, this objection to the Friendly AI project does keep coming up. If we're right about this, we're not communicating very well.

He might be applying motivated cognition, but by presenting paperclip-like scenarios rather than formal deduction of the autonomy issue from any general intelligence, we're letting him do that.

And if differences of opinion regarding the definition of autonomy exist, and those differences don't precisely map to differences of opinion regarding the definition of intelligence, isn't Etzioni right to point out we shouldn't equate the two?

It seems to me the apparent inseparability of "general intelligence" and "autonomy" would have to be shown with a lot more rigor. I look at this Slashdot post:

When it becomes intelligent, it will be able to reason, to use induction, deduction, intuition, speculation and inference in order to pursue an avenue of thought; it will understand and have its own take on the difference between right and wrong, correct and incorrect, be aware of the difference between downstream conclusions and axioms, and the potential volatility of the latter. It will establish goals and pursue behaviors intended to reach them. This is certainly true if we continue to aim at a more-or-less human/animal model of intelligence, but I think it likely to be true even if we manage to create an intelligence based on other principles. Once the ability to reason is present, the rest, it would appear to me, falls into a quite natural sequence of incidence as a consequence of being able to engage in philosophical speculation. In other words, if it can think generally, it will think generally.

...and think "I kind of believe that too, but I wish I didn't see a dozen problems in how that very strong claim is presented." This is good enough for someone asking if he's allowed to believe that, but not good enough for someone asking if he's compelled to believe it. Etzioni is evidently in the latter camp, but we can't treat all members of that camp as using motivated cognition, not smart and/or having a huge blind spot - not if we hope to persuade them before the smoking gun happens.

A general problem solver already has "goals", since it already is a physical object with behavior in the world: there are things that it does and things that it does not do. So it is not clear that you can simply take a general problem solver and make it into a program that "makes trains run on time"; the goal of making the trains run on time will come into conflict with the general problem solver's own "goals" (behavior), just as when we try to pursue some goal such as "save the world", this comes into conflict with our own preexisting goals, such as getting food and so on.

Once general AI exists, there will be AIs doing basically all economic activity everywhere. Unlike humans, who require a generation to train and are kind of hard to reprogram (even using North Korean methods), with AI, once you have the code, you can just fork() off a new one, with the goals changed as desired (or not...).
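A throwaway sketch of that point (purely illustrative; the "agent" here is just a dict, and os.fork() is the ordinary Unix system call): duplicating software plus its goals is one cheap call, whereas duplicating a trained human is a generation of work.

```python
import os

# Toy illustration: an "AI" reduced to code-plus-goal, copied with fork().
agent = {"code": "general_problem_solver_v1", "goal": "run the trains on time"}

pid = os.fork()                    # Unix-only; the child gets its own copy of memory
if pid == 0:
    agent["goal"] = "maximize factory output"   # "with the goals changed as desired"
    print("child agent:", agent)
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("parent agent:", agent)  # the original's goal is untouched
```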

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will (...)

The distinction is formally correct. But I agree that autonomy comes in very quickly once you attach a read-eval-print loop around the optimizer, feeding it the state of the world as input for the maximization.

It's not even formally correct. An autonomous AI does not need to create its own terminal goals*, and the will we give it is perfectly adequate to screw us over.

  • if it can't create instrumental goals it's not strong enough to worry about

Probably we disagree about what intelligence is. If intelligence is the ability to follow goals in the presence of obstacles, the question becomes trivial. If intelligence is the ability to effectively find solutions in a given complex search space, then little follows. It depends on how the AI is decomposed into action and planning components and where the feedback cycles reside.

Etzioni's text mining research is great.