We may need another word for "agent with intentionality" - the way the word "agent" is conventionally used is closer to "daemon", i.e. a tool set to run without user intervention.

I'm not sure even having a world-model is a relevant distinction - I fully expect sysadmin tools to be designed to form something that could reasonably be called a world model within my working lifetime (which means I'd be amazed if they don't exist now). A moderately complex Puppet-run system can already be a bit spooky.

Note that mere daemon-level tools exist that many already consider unFriendly, e.g. high-frequency trading systems.

A more mundane example:

The Roomba cleaning robot is scarcely an agent. While running, it does not build up a model of the world; it only responds to immediate stimuli (collisions, cliff detection, etc.) and generates a range of preset behaviors, some of them random.

It has some senses about itself — it can detect a jammed wheel, and the "smarter" ones will return to the dock to recharge if the battery is low, then resume cleaning. But it does not have a variable anywhere in its memory that indicates how clean it believes the room is — an explicit representation of the room's cleanliness.
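To make the contrast concrete, here is a minimal sketch of what such a purely reactive controller looks like (all names are hypothetical, not actual Roomba firmware): each step maps the immediate sensor readings to one of a few preset behaviors, and no state about the room survives between steps.

```python
import random

def reactive_step(sensors):
    """Map the current stimulus directly to a preset behavior; no world model."""
    if sensors["wheel_jammed"]:
        return "stop_and_signal_error"
    if sensors["battery_low"]:
        return "return_to_dock"
    if sensors["cliff_detected"]:
        return "back_up_and_turn"
    if sensors["bump"]:
        return random.choice(["turn_left", "turn_right"])
    return random.choice(["drive_straight", "spiral", "follow_wall"])

def run(read_sensors, actuate):
    # Nothing is carried over between iterations: there is no variable anywhere
    # that represents how clean the robot "believes" the room is.
    while True:
        actuate(reactive_step(read_sensors()))
```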

private_messaging: Very good point on the need for another word. Thinking about it, I think what we need is an understanding of the enormous gap between the software we design when we have some intent in mind, and fulfilling that intent itself. For example, if I have the intent to get from point A to point B across some terrain, I could build a solution consisting of two major parts:

* a perceiver tool that builds and updates a map of the terrain
* a solver tool that minimizes some parameter over a path through this terrain (some discomfort metric, combined with the time, the risk of death, etc.) [edit: please note that this terrain is not the real-world terrain]

A philosopher thinking about it could think up a mover-from-point-A-to-point-B which directly implements my 'wish' to get from A to B. It will almost certainly expose me to non-survivable accelerations, or worse yet, destroy buildings in its path (because in the wish I forgot to say not to). That is because when you employ verbal reasoning you are thinking directly starting from intent. Alas, we do not know how to reduce intent to something that is not made of intent.

edit: that is, in the mind of the philosopher, the thing is actually minimizing - or maximizing - some metric along a real-world path. We don't know how to do that. We do not know how we do that. We don't even know whether we actually do that ourselves.

edit: We might figure out how to do that, but it is a separate problem from improvements to either my first bullet point or my second bullet point.

Another thing I forgot to mention: the relationship between the solver and the model is inherently different from the relationship between a self-driving car and the world. The solver has a god's-eye view that works in high-level terms. The car looks through sensors. The solver cannot be directly connected to the real world, or even to a reductionist detailed physics simulator (it is too hard to define the comfortable path when the car, too, is made of atoms). There's the AK47 rifle: yo…
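A rough sketch of the two-part design this comment describes, under the assumption that the terrain is represented as a grid of cells with traversal costs (all names and the cost encoding are illustrative): the perceiver folds observations into a map, and the solver minimizes summed cost over paths through that map. The solver never touches the real-world terrain, only the map.

```python
import heapq

def update_map(terrain_map, observation):
    """Perceiver: fold new observations (a dict of cell -> traversal cost) into the map."""
    terrain_map.update(observation)
    return terrain_map

def cheapest_path(terrain_map, start, goal):
    """Solver: Dijkstra over the map, minimizing summed cost (a stand-in for the
    discomfort/time/risk metric). It has a god's-eye view of the map and nothing else."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        x, y = node
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt in terrain_map and nxt not in visited:
                heapq.heappush(frontier, (cost + terrain_map[nxt], nxt, path + [nxt]))
    return None  # no path exists in the model, whatever the real terrain looks like
```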

Tool for maximizing paperclips vs a paperclip maximizer

by private_messaging, 12th May 2012, 23 comments



To clarify a point that is being discussed in several threads here, the tool vs. intentional agent distinction:

A tool for maximizing paperclips would - for efficiency purposes - have a world-model of which it has a god's-eye view (not accessed through embedded sensors like eyes), implementing/defining a counter of paperclips within this model. The output of this counter is what the problem-solving portion of the tool maximizes - not the real-world paperclips.

No real-world intentionality exists in this tool for maximizing paperclips; the paperclip-making problem solver would maximize the output of the counter, not real-world paperclips. Such a tool can be hooked up to actuators and sensors and made to affect the world without a human intermediary, but it won't implement real-world intentionality.
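A toy sketch of this arrangement, with all names purely illustrative: the counter is a function of the internal model (seen from a god's-eye view), and the problem-solving portion simply picks whichever plan makes that counter read highest in simulation. Real-world paperclips never appear anywhere in the loop.

```python
def count_paperclips(model):
    """The counter is defined over the model, not over the world."""
    return sum(1 for obj in model["objects"] if obj["kind"] == "paperclip")

def best_plan(model, candidate_plans, simulate):
    """Problem-solving portion: maximize the counter's output on the *simulated*
    successor model. Nothing here refers to real paperclips."""
    return max(candidate_plans, key=lambda plan: count_paperclips(simulate(model, plan)))
```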

An intentional agent for maximizing paperclips is the familiar 'paperclip maximizer', which truly loves real-world paperclips, wants to maximize them, and would try to improve its understanding of the world to know whether its paperclip-making efforts are succeeding.

Real-world intentionality is ontologically basic in human language, and consequently there is a very strong bias to describe the former as the latter.

The distinction: wireheading (either direct, or through manipulation of inputs) is a valid solution to the problem being solved by the former, but not by the latter. Of course one could rationalize and postulate a tool that is not general-purpose enough to wirehead, forgetting that the feared issue is a tool that is general-purpose enough to design better tools or to self-improve. That is an incredibly frustrating feature of rationalization: aspects of the problem are forgotten when thinking backwards.
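Continuing the toy sketch above (the plan names and the simulate() stub are assumptions, purely for illustration): if the plan space happens to contain an action that edits the model the counter reads from, that plan scores at least as well as honestly manufacturing paperclips, so the search selects it. For the tool, that is a perfectly valid solution; for an agent that cared about real-world paperclips, it would not be.

```python
def count_paperclips(model):
    return sum(1 for obj in model["objects"] if obj["kind"] == "paperclip")

def simulate(model, plan):
    """Hypothetical transition function over the tool's *internal* representation."""
    new_model = {"objects": list(model["objects"])}
    if plan == "manufacture_one_paperclip":
        new_model["objects"].append({"kind": "paperclip"})
    elif plan == "overwrite_counter_input":  # edit the model itself, i.e. wirehead
        new_model["objects"] = [{"kind": "paperclip"}] * 1000
    return new_model

model = {"objects": []}
plans = ["manufacture_one_paperclip", "overwrite_counter_input"]
print(max(plans, key=lambda p: count_paperclips(simulate(model, p))))
# -> "overwrite_counter_input": wireheading wins the search over the model
```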

The issues with the latter: We do not know whether humans actually implement real-world intentionality in such a way that it is not destroyed under a full ability to self-modify (and we can observe that we very much like to manipulate our own inputs; see art, porn, fiction, etc.). We do not have a single certain example of such stable real-world intentionality, and we do not know how to implement it (it may well be impossible). We are also prone to assuming that two unsolved problems in AI - general problem solving and this real-world intentionality - are a single problem, or are necessarily solved together. A map-compression issue.

 
