If I understand the Singularitarian argument espoused by many members of this community (e.g. Muehlhauser and Salamon), it goes something like this:
- Machine intelligence is getting smarter.
- Once an intelligence becomes sufficiently supra-human, its instrumental rationality will drive it toward cognitive self-enhancement (Bostrom), making it a super-powerful, resource-hungry superintelligence.
- If a superintelligence isn't sufficiently human-like or 'friendly', that could be disastrous for humanity.
- Machine intelligence is unlikely to be human-like or friendly unless we take precautions.
My response is that intelligences fitting much of this description already exist, and they are made of people. Here I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here are some of the reasons why:
- We can model human organizations as having preference functions (economists do this all the time; see the sketch after this list).
- Human organizations have a lot of optimization power.
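To make the first bullet concrete, here is a minimal sketch of the economist's move: treat the organization as an agent with a preference function (here, profit) and model its behavior as choosing whichever available action that function ranks highest. The demand curve, cost figure, and function names below are hypothetical, chosen only to make the example run; nothing here is drawn from Muehlhauser or Bostrom.

```python
# Toy model: an organization as a preference-maximizing agent.
# A firm picks an output quantity to maximize profit under a linear inverse-demand curve.
# All numbers are illustrative assumptions, not empirical estimates.

def profit(quantity, max_price=100.0, slope=0.5, unit_cost=20.0):
    """Profit = revenue - cost, with price p(q) = max_price - slope * q."""
    price = max_price - slope * quantity
    return price * quantity - unit_cost * quantity

def best_response(candidate_actions):
    """The 'organization' chooses the action its preference function ranks highest."""
    return max(candidate_actions, key=profit)

if __name__ == "__main__":
    quantities = [float(q) for q in range(0, 201)]
    q_star = best_response(quantities)
    print(f"chosen quantity: {q_star}, profit: {profit(q_star):.2f}")
```

The point is not that real organizations literally compute this, but that once you can describe their behavior as approximately maximizing some preference function over actions, the standard vocabulary of agents and optimization power applies.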
I talked with Mr. Muehlhauser about this specifically. I gather that at least at the time he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
So when I am talking about super-human intelligence, I specifically mean an agent that is as good as or better than humans at just about every skill set that humans possess for achieving their goals. So that would include not just things like mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain, not the kidneys.
...and then...
It would be a kind of weird [organization] that was better than the best human or even the median human at all the things that humans do. [Organizations] aren’t usually the best in music and AI research and theorem proving and stock markets and composing novels. And so there certainly are [organizations] that are better than median humans at certain things, like digging oil wells, but I don’t think there are [organizations] as good or better than humans at all things. More to the point, there is an interesting difference here because [organizations] are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse.
I think that Muehlhauser is slightly mistaken on a few subtle but important points. I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
- When judging whether an entity has intelligence, we should consider only the skills relevant to the entity's goals.
- So, if organizations are not as good as a human being at composing music, that shouldn't disqualify them from being considered broadly intelligent if music has nothing to do with their goals.
- Many organizations are quite good at AI research, or outsource their AI research to other organizations with which they are intertwined.
- The cognitive power of an organization is not limited to the size of skulls. The computational power of many organizations comprises both the skulls of their members and, possibly, "warehouses" of digital computers.
- With the ubiquity of cloud computing, it's hard to say that a particular computational process has a static spatial bound at all.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.
Organizations are highly disanalogous to potential AIs, and suffer from severe diminishing returns: http://www.nytimes.com/2010/12/19/magazine/19Urban_West-t.html?reddit=&pagewanted=all&_r=0
And so LessWrong has been proved correct that paperclips will be the end of us all.
Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren't that great at it, or they don't even have that purpose, the one that is evolutionarily fit and that law, culture, and their owners intend for them, in which case how can we consider them powerful at all, or remotely similar to potential AIs, etc.?
The same goes for any of the many disanalogies one could mention. I bet organizations would work a lot better if only they could brainwash employees into valuing nothing but the good of the organization - and that's just one nugatory difference between AIs (uploads or de novo) and organizations.
Never mind the singularity, organizations aren't friendly and I'm worried about them.
An organization could be viewed as a type of mind with an extremely redundant modular structure. Human minds contain a large number of interconnected specialized subsystems; in an organization, humans would be the subsystems. Comparing the two seems illuminating.
Individual subsystems of organizations are much more powerful and independent, making them very effective at scaling and multitasking. This is of limited value, though: it mostly just means organizations can complete parallelizable tasks faster.
Intersystem communication is horrendously inefficient in organizations: bandwidth is limited to speech/typing and latency can be hours. There are tradeoffs here: military and emergency response organizations cut the latency down to seconds, but that limits the types of tasks the subsystems can effectively perform. Humans suck at multitasking and handling interruptions. Communication patterns and quality are more malleable, though. Organizations like Apple and Google have had some success in creating environments that leverage human social tendencies to improve on-task communication.
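To put rough numbers on that bandwidth/latency gap, here is a small back-of-envelope sketch. Every figure in it is an assumed order-of-magnitude estimate chosen for illustration, not a measurement from the comment above or anywhere else.

```python
# Back-of-envelope comparison of inter-subsystem communication in organizations vs. machines.
# All figures are rough order-of-magnitude assumptions for illustration only.

speech_bits_per_sec = 50          # assumed information rate of conversational speech
email_latency_sec = 2 * 3600      # assumed hours-scale latency of an email round trip
radio_latency_sec = 5             # assumed seconds-scale latency of a military/emergency radio net

lan_bits_per_sec = 1e9            # commodity gigabit link between machines
lan_latency_sec = 1e-3            # ~1 ms round trip on a local network

print(f"bandwidth ratio (LAN vs speech): {lan_bits_per_sec / speech_bits_per_sec:.1e}x")
print(f"latency ratio (email vs LAN):    {email_latency_sec / lan_latency_sec:.1e}x")
print(f"latency ratio (radio vs LAN):    {radio_latency_sec / lan_latency_sec:.1e}x")
```

Even with generous assumptions for the humans, the machine-to-machine channel comes out many orders of magnitude wider and faster, which is the disanalogy the comment is pointing at.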
Specialization seems like a big one. Most humans are to some degree interchangeable: what one can do, most others can too. ...
One of the advantages of bureaucracy is creating value from otherwise low-value inputs. The collection of people working in the nearest McDonalds probably isn't capable of figuring out from scratch how to run a restaurant. But following the bureaucratic blueprint issued from headquarters allows those same folks to produce a hamburger on demand, and to get paid for it.
That's a major value of bureaucratic structure - lowering the variance and raising the downside (i.e. a fast food burger isn't great, but it meets some minimum quality and won't poison you).
There are academic fields that study the behavior and anatomy of groups of people who act together to pursue goals. These include sociology, organizational behavior, military science, and even logistics. Singularity researchers should take some note of these fields' practical results.
Is that pretty much the point here?
The reason why an AGI would go foom is because it either has access to its own source code, so it can self modify, or it is capable of making a new AGI that builds on itself. Organizations don't have this same power, in that they can't modify the mental structure of the people that make up the organization. They can change the people in it, and the structure connecting them, but that's not the same type of optimization power as an AGI would have.
Also:
Not if you're talking about general intelligence. Deep Blue isn't an AGI, because it can only play chess. Chess is its only goal, and we do not call it an AGI because it cannot take its algorithm and apply it to new fields.
Robin Hanson has said somewhat similar things in his talk of UberTools.
On one hand, I think Luke is too dismissive of organizations. There's no reason not to regard organizations as intelligences, and I think the most likely paths to AGI go through some organization (today, Google looks like the most-likely candidate). But the bottleneck on organizational intelligence is either human intelligence or machine intelligence. So a super-intelligent corporation will end up having super-intelligent computers (or super-intelligent people, but it seems like computers are easier). If we're very lucky, those computers will directly inherit the corporation's purported goal structure ("to enhance shareholder value"). Not that shareholder value is a good goal -- just that it's much less bad than a lot of the alternatives. Given the difficulty of AI programming (not to mention internal corporate politics and Goodhart's law), it seems like SIAI's central arguments still apply.
Free-market theorists from Smith onward have considered the market a benevolent superintelligence. In his novel 1984, Orwell envisioned an organization as a mean superintelligence. In both cases, the functional outcome of the superintelligence ran counter to the intent of the component agents.
There have been very mean superintelligences. Political organization matters: a superintelligence can be a benevolent invisible hand, or a malevolent boot stamping on a human face forever.
Yup. There exist established fields that study super intelligences with interests not necessarily aligned with ours -- polisci, socialsci and econ. Now you may criticize their methods or their formalisms, but they do have smart people and insights.
I think the research into Friendliness, if it's not a fake, would do well to connect with some subproblem in polisci, socialsci or econ. Such a subproblem ought to be easier than the full problem, and a solution would immediately pay off. I asked Vassar about this once, and he said that he did not think this would be easier. I never really understood that reply.
I would advise putting a little bit more effort into formatting. Some of the font jumps are somewhat jarring, and prevent your post from having as much of an impact as you might hope.
I made it clear in our dialogue that I was stipulating a particular definition for intelligence:
Here is my claim (contrary to Vassar). If you are worried about an unfriendly "foomy" optimizing process, then a natural way to approach that problem is to solve an easier related problem: make an existing unfriendly but "unfoomy" optimizing process friendly. There are lots of such processes of various levels of capability and unfriendliness: North Korea, Microsoft, the United Nations, a non-profit org., etc.
I claim this problem is easier because:
(a) we have a lot more time (there is no danger of "foom");
(b) we can use empirical methods (the processes already exist) to ground our theories; and
(c) these processes are super-humanly intelligent, but not so intelligent that their goals and methods are impossible to understand.
The claim is that if we can't make existing processes with all these simplifying features friendly, we have no hope to make a "foomy" AI friendly.
This post doesn't come close to refuting Intelligence Explosion: Evidence and Import.
That's true, but intelligence as defined in this context is not merely optimization power, but efficient cross-domain optimization power. There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.
This sounds similar to a position of Robin Hanson addressed in Footnote 25...
I felt an extreme sense of déjà vu when I saw the title for this.
I'm pretty sure I saw a post with the same name a couple of months ago. I don't remember what the post was actually about, so I can't really compare substance, but I have to ask. Did you post this before?
Again, sorry if this is me being crazy.
No, there was a very, very similar post, about how governments are already superintelligences and seem to show no evidence of fooming.
I cannot think of any route to recursive self-improvement for an organization that does not go through an AI. A priori, it's conceivable that there is such a route and I just haven't thought of it, but on the other hand, the corporate singularity hasn't happened, which suggests that it is extremely difficult to make happen with the resources available to corporations today.
Sure, but this is essentially the same problem - once you get around the thinkos.
I think trying to understand organizational intelligence would be pretty useful as a way of getting a feel for the variety of possible intelligences. Organizations also have a legal standing as artificial persons, so I imagine that any AI that wanted to protect its interests through legal means would want to be incorporated. I'd like to see this explored further. Any suggestions on good books on the subject of corporations considered as AIs?
This overall topic is known as collective intelligence, where the word "collective" is intended (at least by some proponents) as a contrast to both individual intelligence and AI. There are some folks studying rationality in organizations and management, most notably including Peter Senge who first formulated the idea of a learning organization as a rough equivalent to "rationality" as such.
At a glance this seems pretty silly, because the first premise fails. Organizations don't have goals. That's the main problem. Leaders have goals, which frequently conflict with the goals of their followers and sometimes with the existence of the organization.
I get the sense that "organization" is more or less a euphemism for "corporation" in this post. I understand that the term could have political connotations, but it's hard (for me at least) to easily evaluate an abstract conclusion like "many organizations are of supra-human intelligence and strive actively to enhance their cognitive powers" without trying to generate concrete examples. Imprecise terminology inhibits this.
When you quote lukeprog saying
I think the reason that organizations haven't gone 'FOOM' is the lack of a successful "goal focused self improvement method." There is no known way of building an organization that does not suffer from goal drift and progressive degradation of performance. Humans have not even managed to understand how to build "goals" into an organization's structure except in the crudest manner, which is nowhere near flexible enough to survive the assaults of modern environmental change, and I don't think the information in sparse inter-linkages of real...
You say this is why you are not worried about the singularity: organizations are already supra-human intelligences that seek to self-modify and become smarter.
So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.
Now, I don't think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardware...
Not that it's central or anything, but I find it amusing that you mention as examples Muehlhauser and Salamon (two very central figures, to be sure), without mentioning a particular third...