Completely artificial intelligence is hard.  But we've already got humans, and they're pretty smart - at least smart enough to serve some useful functions.  So I was thinking about designs that would use humans as components - like Amazon's Mechanical Turk, but less homogeneous.  Architectures that would distribute parts of tasks among different people.
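
A minimal sketch of what such an architecture might look like (the names HumanWorker and split_task, and the splitting rule, are mine and purely illustrative - not any real system or API):

```python
# Illustrative only: a coordinator splits a task into subtasks, hands each
# piece to a different person, and collects the answers.

from typing import Callable, List


class HumanWorker:
    """Stand-in for one person answering one narrow question."""

    def __init__(self, ask: Callable[[str], str]):
        self.ask = ask  # in a real system, a Mechanical-Turk-style request

    def do(self, subtask: str) -> str:
        return self.ask(subtask)


def split_task(task: str) -> List[str]:
    # Trivial splitter for illustration: one subtask per sentence.
    return [s.strip() for s in task.split(".") if s.strip()]


def run(task: str, workers: List[HumanWorker]) -> List[str]:
    subtasks = split_task(task)
    # Round-robin: different parts of the task go to different people.
    return [workers[i % len(workers)].do(sub) for i, sub in enumerate(subtasks)]


if __name__ == "__main__":
    # Simulated "humans" that just label their input; real ones would think.
    workers = [HumanWorker(lambda q, n=n: f"worker {n}: {q}") for n in range(3)]
    for answer in run("Summarize the report. Check the numbers. Draft a reply.", workers):
        print(answer)
```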

Would you be less afraid of an AI like that?  Would it be any less likely to develop its own values, and goals that diverged widely from the goals of its constituent people?

Because you probably already are part of such an AI.  We call them corporations.

Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents.  In that way they resemble AI from the 1970s.  But they may provide insight into the behavior of AIs.  The values of their human components can't be changed arbitrarily, or even aligned with the values of the company, which gives them a large set of problems that AIs may not have.  But despite being very different from humans in this important way, they end up acting similarly to us.
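
As a toy illustration of what "adding up small correlations" buys an architecture that can do it (my own simulation, not from the post): if each agent's evaluation of a yes/no question is only slightly better than chance, pooling all of them can still yield near-certainty.

```python
# Toy simulation: each agent is right only 55% of the time, but a simple
# majority vote over 201 such judgments is right well over 90% of the time.
# This pooling of weak signals is what the post says corporations are bad at.

import random


def pooled_accuracy(n_agents: int, per_agent_accuracy: float, trials: int = 10_000) -> float:
    correct = 0
    for _ in range(trials):
        # Count how many agents independently evaluate the question correctly.
        right_votes = sum(random.random() < per_agent_accuracy for _ in range(n_agents))
        # Majority vote: the pooled answer is right iff most agents were right.
        correct += right_votes * 2 > n_agents
    return correct / trials


if __name__ == "__main__":
    print("single agent accuracy: 0.55")
    print("201 agents, majority vote:", pooled_accuracy(201, 0.55))
```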

Corporations develop values similar to human values.  They value loyalty, alliances, status, resources, independence, and power.  They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and game-theoretic strategies.  They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law).  This despite having different physicality and different needs.

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident.  They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

As corporations are larger than us, with more intellectual capacity than a person, and more complex laws governing their behavior, it should follow that the ethics developed to govern corporations are more complex than the ethics that govern human interactions, and a good guide for the initial trajectory of values that (other) AIs will have.  But it should also follow that these ethics are too complex for us to perceive.

86 comments

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

Another possibility is that individual humans occasionally influence corporations' behavior in ways that cause that behavior to occasionally reflect human values.

4PhilGoetz13y
If that were the case, we would see specific humans influence corporations' behavior in ways that would cause the corporations to implement those humans' goals and values, without preservation of deictic references. For instance, Joe works for Apple Computer. Joe thinks that giving money to Amnesty International is more ethical than giving money to Apple Computer. And Joe values giving money to Joe. We should therefore see corporations give lots of their money to charity, and to their employees. That would be Joe making Apple implement Joe's values directly. Joe's values say "I want me to have more money". Transferring that value extensionally to Apple would replace "me" with "Joe".

Instead, we see corporations act as if they had acquired values from their employees, but with preservation of deictic references. That means, every place in Joe's value where it says "me", Apple's acquired value says "me". So instead of "make money for Joe", it says "make money for Apple". That means the process is not consciously directed by Joe; Joe would preserve the extensional reference to "Joe", so as to satisfy his values and goals.
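
A small sketch of the deictic-vs-extensional distinction (the Value type and its fields are hypothetical, introduced here just to make the two copying modes concrete; this is not from the original comment):

```python
# Illustrative only: a value template like "make money for {me}" can be copied
# extensionally (freeze "me" to Joe, the original holder) or with the deictic
# reference preserved ("me" rebinds to whoever now holds the value).

from dataclasses import dataclass
from typing import Optional


@dataclass
class Value:
    template: str                   # e.g. "make money for {me}"
    bound_to: Optional[str] = None  # None = "me" stays deictic, rebinds to the holder

    def as_held_by(self, holder: str) -> str:
        return self.template.format(me=self.bound_to or holder)


joes_value = Value("make money for {me}")
print(joes_value.as_held_by("Joe"))       # -> make money for Joe

# Extensional transfer: Joe's "me" is frozen as "Joe" before Apple adopts it.
extensional = Value("make money for {me}", bound_to="Joe")
print(extensional.as_held_by("Apple"))    # -> make money for Joe

# Deictic transfer: the indexical survives the copy, so it now points at Apple.
deictic = Value("make money for {me}")
print(deictic.as_held_by("Apple"))        # -> make money for Apple (what we actually observe)
```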
8CronoDAS13y
Some people point to executive compensation at U.S. firms as evidence that many corporations have been "subverted" in exactly that way.
3roystgnr13y
It says "make money for Apple", which is a roundabout way of saying "make money for Apple's shareholders", who are the humans that most directly make up "Apple". Apple's employees are like Apple's customers - they have market power that can strongly influence Apple's behavior, but they don't directly affect Apple's goals. If Joe wants a corporation to give more money to charity, but the corporation incorporated with the primary goal of making a profit, that's not the decision of an employee (or even of a director; see "duty of loyalty"); that's the decision of the owners. There's definitely a massive inertia in such decisions, but for good reason. If you bought a chunk of Apple to help pay for your retirement, you've got a ethically solid interest in not wanting Apple management to change it's mind after the fact about where its profits should go. If you want to look for places where corporate goals (or group goals in government or other contexts) really do differ from the goals of the humans who created and/or nominally control them, I'd suggest starting with the "Iron Law of Bureaucracy".
0TheOtherDave13y
Agreed that if Apple is making a lot of money, and none of the humans who nominally influence Apple's decisions are making that money, that is evidence that Apple has somehow adopted the "make money" value independent of those humans' values. Agreed that if Apple is not donating money to charity, and the humans who nominally influence Apple's decisions value donating money to charity, that is evidence that Apple has failed to adopt the "donate to charity" value from those humans.
0Lightwave13y
Also, corporations are restricted by governments, which implement other human-based values (different from pure profit), and they internalize these values (e.g. social/environmental responsibility) for (at the least) signaling purposes.

How similar are their values actually?

One obvious difference seems to be their position on the exploration/exploitation scale: most corporations do not get bored (the rare cases where they do seem to get bored can probably be explained by an individual executive getting bored, or by customers getting bored and the corporation managing to adapt).

Corporations also do not seem to have very much compassion for other corporations; while they do sometimes co-operate, I have yet to see an example of one corporation giving money to another, without anticipating some s... (read more)

0PhilGoetz13y
Altruism and merging: Two very good points! Altruism can be produced via evolution by kin selection or group selection. I don't think kin selection can work for corporations, for several reasons, including massive lateral transfer of ideas between corporations (so that helping a kin does not give a great boost to your genes), deliberate acquisition of memes predominating over inheritance, and the fact that corporations can grow instead of reproducing, and so are unlikely to be in a position where they have no growth potential themselves but can help a kin instead. Can group selection apply to corporations? What are the right units of selection / inheritance?
0timtyler12y
You don't think there's corporate parental care?!? IMO, corporate parental care is completely obvious. It is a simple instance of cultural kin selection. When a new corporation is spun off from an old one, there are often economic and resource lifelines - akin to the runners strawberry plants use to feed their offspring. Lateral gene transfer doesn't much affect this. Growth competes with reproduction in many plants - and the line between the two can get blurred. It doesn't preclude parental care - as the strawberry runners show.

Should other large human organizations like governments and some religions also count as UFAIs?

Yes, I find it quite amusing that some people of a certain political bent refer to "corporations" as superintelligences, UFAIs, etcetera, and thus insist on diverting marginal efforts that could have been directed against a vastly underaddressed global catastrophic risk to yet more tugging on the same old rope that millions of other people are pulling on, based on their attempt to reinterpret the category-word; and yet oddly enough they don't think to extend the same anthropomorphism of demonic agency to large organizations that they're less interested in devalorizing, like governments and religions.

4[anonymous]13y
Maybe those people are prioritising the things that seem to affect their lives? I can certainly see exactly the same argument about government or religion as about corporations, but currently the biggest companies (the Microsofts and Sonys and their like) seem to have more power than even some of the biggest governments.
1anonym13y
There is also the issue of legal personality, which applies to corporations and not to governments or religions. The corporation actually seems to me a great example of a non-biological, non-software optimization process, and I'm surprised at Eliezer's implicit assertion that there is no significant difference between corporations, governments, and religions with respect to their ability to be unfriendly optimization processes, other than that some people of a certain political bent have a bias to think about corporations differently than other institutions like governments and religions.
0NancyLebovitz13y
I think such folks are likely to trust governments too much. They're more apt to oppose specific religious agendas than to oppose religion as such, and I actually think that's about right most of the time.
4CronoDAS13y
Probably.
-12PhilGoetz13y
0Alexandros13y
Funny you should mention that. Just yesterday I added on my list of articles-to-write one by the title of "Religions as UFAI". In fact, I think the comparison goes much deeper than it does for corporations.
0timtyler12y
Some corporations may become machine intelligences. Religions - probably not so much.

Unlike programmed AIs, corporations cannot FOOM. This leaves them with limited intelligence and power, heavily constrained by other corporations, government, and consumers.

The corporations that have come the closest to FOOMing are known as monopolies, and they tend to be among the least friendly.

4RolfAndreassen13y
Is this obvious? True, the timescale is not seconds, hours, or even days. But corporations do change their inner workings, and they have also been known to change the way they change their inner workings. I suggest that if a corporation of today were dropped into the 1950s, and operated on 1950s technology but with modern technique, it would rapidly outmaneuver its downtime competitors; and that the same would be true for any gap of fifty years, back to the invention of the corporation in the Middle Ages.
4wedrifid13y
I suggest it is - for anything but the most crippled definition of "FOOM".
5Will_Newsome13y
Right, FOOM by its onomatopoeic nature suggests a fast recursion, not a million-year-long one.
2RolfAndreassen13y
I am suggesting that a ten-year recursion time is fast. I don't know where you got your million years; what corporations have been around for a million years?
2NancyLebovitz13y
I'm inclined to agree-- there are pressures in a corporation to slow improvement rather than to accelerate it. Any organization which could beat that would be extremely impressive but rather hard to imagine.
0PhilGoetz13y
This is true, but not relevant to whether we can use what we know about corporations and their values to infer things about AIs and their values.
0wedrifid13y
It is relevant. It means you can infer a whole lot less about what capabilities an AI will have, and also about how much effort an AI will likely spend on self-improvement early on. The payoffs and optimal investment strategy for resources are entirely different.

The SIAI is a "501(c)(3) nonprofit organization." Such organizations are sometimes called nonprofit corporations. Is SIAI also an unfriendly AI? If not, why not?

P.S. I think corporations exist mostly for the purpose of streamlining governmental functions that could otherwise be structured in law, although with less efficiency. Like taxation, and financial liability, and who should be able to sue and be sued. Corporations, even big hierarchical organizations like multinationals, are simply not structured with the complexity of Searle's Chinese Room.

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

I don't understand why you think this, or how the rest of your post suggests it. It appears to me that you're proposing that human (terminal?) values are universal to all intelligences at our level of intelligence, on the basis that humans and corporations share values; but this doesn't hold up, because corporations are composed of humans, so I think the natural state would be for them to value human values.

2PhilGoetz13y
I figured someone would say that, and it is a hypothesis worth considering, but I think it needs justification. Corporations are composed of humans, but they don't look like humans, or eat the things humans eat, or espouse human religions. Corporations are especially human-like in their values, and that needs explaining. The goals of a corporation don't overlap with the values of its employees. Values and goals are highly intertwined. I would not expect a corporation to acquire values from its employees without also acquiring their goals; and they don't acquire their employees' goals. They acquire goals that are analogous to human goals; but eg IBM does not have the goal "help Frieda find a husband" or "give Joe more money".
2timtyler13y
Both humans and corporations want more money. Their goals at least overlap.
0PhilGoetz13y
The corporation wants the corporation to have more money, and Joe wants Joe to have more money. Those are the same goals internally, but because the corporation's goal says "ACME Corporation" where Joe's says "Joe", it means the corporation didn't acquire Joe's goals via lateral transfer.
0timtyler13y
Normally, the corporation wants more money - because it was built by humans, who themselves want more money. They build the corporation to want to make money itself - and then to pay them a wage, or dividends. If the humans involved originally wanted cheese, the corporation would want cheese too. I think by considering this sort of thought experiment, it is possible to see that human goals do get transferred across.
knb13y30

I don't think it is useful to call Ancient Egypt a UFAI, even though they ended up tiling the desert in giant useless mausoleums at an extraordinary cost in wealth and human lives. Similarly, the Aztecs fought costly wars to capture human slaves, most of whom were then wasted as blood sacrifices to the gods.

If any human group can be UFAI, then does the term UFAI have any meaning?

2Nornagest13y
My understanding is that the human cost of the Ancient Egyptian mausoleum industry is now thought to be relatively modest. The current theory, supported by the recent discovery of workers' cemeteries, is that the famous monuments were generally built by salaried workers in good health, most likely during the agricultural off-season. Definitely expensive, granted, but as a status indicator and ceremonial institution they've got plenty of company in human behavior. There's some controversy over (ETA: the scale of) the Aztec sacrificial complex as well, but since that's entangled with colonial/anticolonial ideology I'd assume anything you hear about it is biased until proven otherwise.
2knb13y
There is no debate over whether the Aztecs engaged in mass human sacrifice. The main disagreement amongst academics is over the scale. The Aztecs themselves claimed sacrifices of over 40,000 people, but they obviously had good reason to lie (to scare enemies). Spanish and pre-Columbian Aztec sources agree that human sacrifice was widespread amongst the Aztecs.
0Nornagest13y
You're quite right; I should have been more explicit. Edited.

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

By human values we mean how we treat things that are not part of the competitive environment.

The greatness of a nation and its moral progress can be judged by the way its animals are treated.

-- Mahatma Gandhi

Obviously a paperclip maximizer wouldn't punch you in the face if you could destroy it. But if it is stronger than all oth... (read more)

0PhilGoetz13y
I don't think I mean that. I also don't know where you're going with this observation.
0wedrifid13y
Roughly, that you can specify human values by supplying a diff from optimal selfish competition.

Another point to consider would be my Imperfect levers article and this one. I believe that the organizations that show the first ability to foom would foom effectively and spread their values around. This is not in any way new. I, of Indian origin, am writing in English and share more values with some Californian transhumanists than with my neighbours. If not for the previous fooms of the British Empire, the computer revolution, and the internet, this would not have been possible.

The question is how close to sociopathic rationality are any of these organ... (read more)

2Morendil13y
This remark deserves an article of its own, mapping each of Omohundro's claims to the observed behaviour of corporations.
0PhilGoetz13y
I can't even find what Omohundro you're talking about using Google.
3NancyLebovitz13y
http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ I don't know why Google didn't work for you -- I used "omohundro's basic drives" and a bunch of links came up.

Common instrumental values are in the air today.

The more values are found to be instrumental, the more the complexity-of-value thesis is eroded.

0PhilGoetz13y
What particular instrumental values are you thinking of?

Charlie Stross seems to share this line of thought:

We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden.

sfb13y20

I was expecting a post questioning who/what is really behind this project to make paperclips invisible.

2Blueberry13y
Well, it's clear who benefits. Tiling the universe with invisible paperclips is less noticeable and less likely to start raising concerns.

Michael Vassar raised some of the same points in his talk at H+, 2 weeks before I posted this.

Corporations (and governments) are not usually regarded as sharing human values by those who consider the question. This brief blog post is a good example. I would certainly argue that the 'U' is appropriate; but then I tend to regard 'UFAI' as meaning 'the complement of FAI in mind space'.

0PhilGoetz13y
Those people are considering a different question, which is, "Do corporations treat humans the way humans treat humans?" Completely different question. If corporations develop values that resemble those of humans by convergent evolution (which is what I was suggesting), we would expect them to treat humans the way humans treat, say, cattle.

Corporations develop values similar to human values. They value loyalty, alliances, status, resources, independence, and power. They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and game-theoretic strategies. They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockh

... (read more)
1Dorikka13y
...I am laughing hard right now.

Is my grandma an Unfriendly AI?

4Alicorn13y
Your grandma probably isn't artificial.
-1Vladimir_Nesov13y
She was designed by evolution, so could just as well be considered artificial. And did I mention the Unfriendly AI part?
3wedrifid13y
Not when using the standard meanings of either of those words.
-3Vladimir_Nesov13y
But what do you mean by "meaning"? Not that naive notion, I hope? Edit: This was a failed attempt at sarcasm, see the parenthetical in this comment.
2wedrifid13y
Question: How many legs does a dog have if you call the tail a leg? Answer: I don't care; your grandma isn't artificial just because you call the natural "artificial". Presenting a counter-intuitive conclusion based on basically redefining the language isn't "deep". Sometimes things are just simple. Perhaps you have another point to make about the relative unimportance of the distinction between 'natural' and 'artificial' in the grand scheme of things? There is certainly a point to be made there, and one that could be made without just using the words incorrectly.
-2Vladimir_Nesov13y
But that would be no fun. (For the perplexed: see No Evolutions for Corporations or Nanodevices. Attaching too many unrelated meanings to a word is a bad idea that leads to incorrect implicit inferences. Meaning is meaning, even if we don't quite know what it is, grandma and corporations are not Unfriendly AIs, and natural selection doesn't produce artificial things.)
0timtyler12y
It does, but indirectly.
-1PhilGoetz13y
Corporations are artificial, and they are intelligent. Therefore, they are artificial intelligences. (ADDED: Actually this is an unimportant semantic point. What's important is how much we can learn about something that we all agree we can call "AI", from corporations. Deciding this on the basis of whether you can apply the name "AI" to them is literally thinking in circles.)
Emile13y-20

Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents.

I'd be cautious about the use of "good" here - the things you describe mostly seem "good" from the point of view of someone who cares about the humans being used by the corporations; it's not nearly as clear that they are "good" (bringing more benefits than downsides) for the final goals of the corporation.

If you were ta... (read more)

2PhilGoetz13y
Corporations are not good at using bottom-up information for their own benefit. Many companies have many employees who could optimize their work better, or know problems that need to be solved; yet nothing is done about it, and there is no mechanism to propagate this knowledge upward, and no reward given to the employees if they transmit their knowledge or if they deal with the problem themselves.

The differences between a 90% human / 10% machine company, and a 10% human / 90% machine company, may be instructive if viewed from this perspective.

0PhilGoetz13y
I don't understand what you're getting at. My company has about 300 people, and 2500 computers. And the computers work all night. Are we 90% machine?
0timtyler13y
There are various ways of measuring. My proposal for a metric is here: http://machine-takeover.blogspot.com/2009/07/measuring-machine-takeover.html - I propose weighing them, in particular their sensor, motor, and computing elements. So "no" - not yet.