What we said about this in Responses to Catastrophic AGI Risk:
Economic arguments, such as the principle of comparative advantage, are sometimes invoked to argue that AGI would find it more beneficial to trade with us than to do us harm. However, technological progress can drive the wages of workers below the level needed for survival (Clark 2007; Freeman 2008; Brynjolfsson and McAfee 2011; Miller 2012), and there is already a possible threat of technological unemployment (Brynjolfsson and McAfee 2011). AGIs keeping humans around due to gains from trade implicitly presumes that they would not have the will or the opportunity to simply eliminate humans in order to replace them with a better trading partner, and then trade with the new partner instead.
Humans already eliminate species with low economic value in order to make room for more humans, such as when clearing a forest in order to build new homes. Clark uses the example of horses in Britain: their population peaked in 1901, with 3.25 million horses doing work such as plowing fields, hauling wagons and carriages short distances, and carrying armies into battle. The internal combustion engine replaced so many of them that by 1924 there were fewer than two million. Clark (2007) writes:
There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed, and it certainly did not pay enough to breed fresh generations of horses to replace them. Horses were thus an early casualty of industrialization.
There are also ways to harm humans while still respecting their property rights, such as by manipulating them into making bad decisions, or selling them addictive substances. If AGIs were sufficiently smarter than humans, humans could be tricked into making a series of trades that respected their property rights but left them with negligible assets and caused considerable damage to their well-being.
If you invade another nation, most of their value is in their infrastructure and their population: it takes time and effort to rebuild and co-opt these.
The other factor here is that human power structures are not stable. Even if the work is done to conquer and redevelop resources, the power structure that benefits is not necessarily the same one that did the work. We can use the US as an example again. It was worthwhile for the British to exterminate most of the inhabitants of the (now) US and to build up infrastructure and a new population. Yet the present-day UK does not collect the benefits from that investment, because the colonists realised this 'nation' and 'empire' business is an abstraction in people's minds, and one that can be altered by sinking a few ships and killing a few overlords. The individuals and their descendants benefited. The empire did not (as much).
Human power structures are too fragile to have allowed even the most dominant countries to simply win, take all the resources, and keep them. This will not necessarily apply to all future human civilisations, and it certainly wouldn't apply to AIs.
Trusting mechanisms like comparative advantage to save us in the face of strong not-explicitly-friendly-AI is crazy. The assumptions required to make that sort of system safe and stable just don't hold. (This kind of error is what most frustrates me about Robin Hanson's essays on related subjects.)
I really like your British empire/US example - and may steal it in future, if that's all right.
How much did the British of the 1600s make from the United States? Was it more than they spent on the cost of the colonization? Was the gain greater than that to be had from other investment opportunities?
Furthermore, how much did the British of 1600 care about the British of 2013? Did they really care more about them than about the Americans of 2013? Are the British of the Elizabethan era less connected to the Americans of today than to the British of today?
I'm not sure we can talk meaningfully about or plan for benefits 400+ years in the future. I strongly suspect Queen Elizabeth I wasn't planning that far ahead except in the most vague terms.
Human power structures are stable enough to plan for a reasonable period into the future. The further into time your plans go, the more uncertain they become, which is why plans need to be adjusted on the fly to meet changing conditions. Thinking otherwise is what I'll call the waterfall fallacy.
How much did the British of the 1600s make from the United States? Was it more than they spent on the cost of the colonization? Was the gain greater than that to be had from other investment opportunities?
Almost certainly. And even neglecting any direct resource extraction, by my estimation it was worthwhile colonizing simply so that the USA exists. The UK benefits from gains from trade with the US and it is almost certainly better both politically and economically to have the country based on British heritage rather than whatever the alternative would have been.
Furthermore, how much did the British of 1600 care about the British of 2013? Did they really care more about them than about the Americans of 2013?
It seems rather likely that they did. National identity and formally acknowledged power tend to be important to the people who hold power at the time.
Are the British of the Elizabethan era less connected to the Americans of today than to the British of today?
Only a little less. The dilution from immigration and the large-scale importation of slaves makes some difference. The symbolism of the rebellion also seems like the kind of thing that matters a bit, since it involves a greater degree of change to the cultural legacy. Even so, the connection is still rather strong and certainly comparable. It remains a "fork".
Almost certainly. And even neglecting any direct resource extraction, by my estimation it was worthwhile colonizing simply so that the USA exists. The UK benefits from gains from trade with the US and it is almost certainly better both politically and economically to have the country based on British heritage rather than whatever the alternative would have been.
My understanding of economic history is that the Marxist/Leninist interpretation of empire as profitable remains controversial. A great many believe empire is better interpreted as a matter of national prestige and of private interests engaged in rent-seeking ('privatizing gains and socializing losses'), and it is questionable whether England received excess profits (above and beyond what it would have received from free trade) anywhere near what it spent on things like the British Navy or the French and Indian War (Jacques Marseille argues the same thing about the French empire).
There is nothing here that is specific to AIs. You could replace AI with "foreign nation" or "competing company" and it would read the same. E.g.
...though the competing company would prefer to trade with you rather than not trade with you, it would much, much prefer to dispossess you of your resources and use them itself. With the energy you wasted on a single cat video, it could have produced 500 of them! If it values these videos, then it is desperate to take over your stuff. Its absolute advantage makes this too tempting.
Thus I suggest considering how this plays out in the real world today. That is, why does trade along lines of comparative advantage exist instead of one big, maximally efficient monopoly? Answer that question first, and you'll be much better positioned to consider the case where a self-directed AI is added to the market.
Well, it is and was relatively rare throughout history for one country to have an absolute advantage over another. And when one does, or thinks it does, yes, invasion is often in the air.
Well, it is and was relatively rare throughout history for one country to have an absolute advantage over another.
It is a little hard to believe that the entire colonial era did not consist of nations with absolute advantage taking resources from other humans with much less of an economy. Even now, there is nearly a factor of 300 between the most and least productive countries (per capita). Tell me how that is not a good measure of absolute advantage.
Perhaps you consider post-Renaissance Europe, with its industrial age and so on, to be a blip on the human radar? I think you are going to find Greek and especially Roman domination of their neighbors to be easily traced to economic absolute advantages, although I'll leave that as an exercise for the reader.
Remember, absolute advantage means being better at everything. As such, it is far from trivial to find a case where true absolute advantage is held. Of course, invasion and exploitation are not done only in conditions of complete absolute advantage...
A US hour of labor produces 9 times the tradable value of a Chinese hour of labor. Absolute advantage would mean that there is NOTHING that China could produce with fewer labor hours than the US could produce. In fact, as you suggest, there is probably something. Things like Chinese art. Chinese tourism. Perhaps even a few special gourmet Chinese food items.
But how much does the existence of these items change the big picture? In terms of labor hours, we outproduce the Chinese on the stuff we sell them by, on average, a little more than a factor of 9, while we outproduce them on the stuff they sell us by some factor much greater than 1 but less than 9. Would some collection of highly unique products that comprise way under 10%, possibly under 1%, of the total economic value traded really change anything?
I don't think so. I think absolute advantage framed as "everything" is an oversimplification of the concept that really matters, which is overwhelming superiority in the hours required to produce stuff.
Do you actually disagree with this idea?
Absolute advantage meaning literally everything would mean there was NOTHING we would buy from China. In fact, with thousands of items that could be traded, it would be surprising if there were not at least a few that China could produce with fewer labor hours.
Your link is broken. You need to escape the underscores as "\ _" (without the space).
Thanks. Couldn't fix it by escaping underscores; instead I made it a hyperlink and escaped the right parenthesis, which is part of the link.
The code that turns things that start with "http colon slash slash" into clickable links doesn't treat Markdown characters like the underscore and the backslash specially. (This is probably meant to be convenient, but I consider it a bug.) It also does not allow close parens in links, which is what messed up the first version of mwengler's link.
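For the curious, here's a toy Python model of the failure mode being described; this is an assumed reconstruction of the behavior, not the site's actual code. The pattern stops at the first close paren and leaves underscores for the Markdown renderer to mangle as emphasis.

```python
import re

# Toy model of the auto-linker described above (an assumption, not the
# site's actual code): the URL is cut at the first close paren, and
# Markdown characters inside it get no special treatment.
AUTOLINK = re.compile(r"https?://[^\s)]+")

text = "see https://en.wikipedia.org/wiki/Example_(disambiguation) for details"
print(AUTOLINK.search(text).group())
# -> https://en.wikipedia.org/wiki/Example_(disambiguation
# The close paren that belongs to the URL is dropped, which is exactly what
# broke the first version of mwengler's link.
```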
Francis Bacon (1561-1626) is known for the scientific method, which was a key factor in the scientific revolution. Bacon stated that the technologies that distinguished the Europe of his day from the Middle Ages were paper and printing, gunpowder, and the magnetic compass, known as the Four Great Inventions. The Four Great Inventions, so important to the development of Europe, were of Chinese origin. Other Chinese inventions included the horse collar, cast iron, an improved plow, and the seed drill.
ygert is right. See colonialism and hostile takeovers and whatever monopolies do exist.
There’s no maximally efficient monopoly because it is rare for a large company to hold a 500-times absolute efficiency advantage over a small competitor at everything at the same time, and most companies tend to care about one particular thing rather than all of them.
Take a large oil company. It probably is many times better than a single human at extracting oil, as well as at a few other tasks, let’s say transporting oil where it needs to go. Now imagine you have some land, and it's discovered to sit above a rich oil field. Do you think the company will prefer to trade oil extraction and transport with you, or to just buy (or force) you out of that land? What do you think will happen in practice? Also, what do you think would happen if oil companies’ motivations weren’t at all structured by laws (as a UFAI's would not be), and they didn’t expect to lose occasionally from fines and such?
Who says an AI's motivations and decisions won't be affected by laws? As you point out, economic entities' actions are constrained and influenced by the laws of the societies they operate in. Laws form a structure that modifies how an economy operates. An AI would simply be another entity operating within an economy built around laws and regulations, as are corporations, persons, nations, and families today. AIs might break the laws, as corporations and people do today; but they will nonetheless be constrained by the ability and willingness of governments to enforce those laws.
AIs are not magic genies. They can't just wave a wand and say, "Alakazam. I want the world to give me all its resources" and expect it to happen. There are no magic genies.
If it is rare for a large company to have a 500 times absolute efficiency advantage for everything at the same time, then it is very unlikely an AI will have such an advantage; and there's your answer. The same factors that make such advantages hard to accumulate today will likely prevent an AI from accumulating them in the future.
AIs are not magic genies.
I guess that depends on what level of AI we’re talking about. I mean, it’s true in a literal sense, but starting from a certain point they might approximate magic very well.
Insert analogy with humans and dogs here. Or a better example for this situation: think of a poker game. It’s got “laws”, both “man-made” (the rules) and “natural” (probability). Even if all the other players are champions, if one of the players can instantly compute exactly all the probabilities involved, clearly see all external physiological stress markers on the other players (while showing none), has an excellent understanding of human nature, knows all previous games of all players, and is smart enough to integrate all of that in real time, that player will basically always win, without “breaking the laws”.
If it is rare for a large company to have a 500 times absolute efficiency advantage for everything at the same time, then it is very unlikely an AI will have such an advantage; and there's your answer. The same factors that make such advantages hard to accumulate today will likely prevent an AI from accumulating them in the future.
I’m not convinced. If the AI were subject to the same factors a large company is subject to today, we wouldn’t need AIs. Note that a large company is basically a composite agent composed of people plus the programs people can write. That is, the class of inventive problems it can solve is the class that fits a human brain, even if it can work on more than one in parallel. Also, communication bandwidth between thinking nodes (i.e., the humans) is even worse than that inside a brain, and those nodes all have interests of their own that can be very different from those of the company itself.
Basically, saying that an AGI is limited by the same factors as a large company is a bit like saying that a human is limited by the same factors as a powerful pack of chimps. And yet, if they manage to survive an initial period of preparation, a human can pretty much "conquer" any pack of chimps they want to. (E.g., capture, kill, cut trees and build a house with a moat.)
If you think about it, in a way, chimps (or Hominoidea in general) already had their singularity, and they have no idea what’s going on whenever we’re involved.
You are proposing that AIs are magic genies. Take your poker example. While a computer program can certainly quickly calculate all the probabilities involved, and can probably develop a reasonable strategy for bluffing, that's as far as our knowledge goes.
We do not know if it is even possible to see clearly all external physiological stress markers on the other players or have an excellent understanding of human nature. How is a computer going to do this? Humans can't. Humans can't predict the behavior of dogs or chimpanzees and they're operating on a level way below ours.
It's not enough to say "But of course the AI will figure this out. It's smarter than us, so it will figure out this thing that eludes all humans." Show me how it's going to do all these things, and then you're treating the issue seriously. Otherwise you're just assigning it magic powers by fiat.
We do not know if it is even possible to see clearly all external physiological stress markers on the other players
See an example for one stress marker. That’s an order of magnitude above noticing blushes. Dogs have much better sense of smell, and technology exists to simulate that. You can probably detect the pulse of each person in a noisy room with just an array of sufficiently-accurate microphones.
Note that human intellect was sufficient to discover the technique, and the technique is powerful enough to allow human senses to see the movements directly; you don’t even need to examine Fourier transforms and the like.
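As a toy illustration of the general idea (not the linked technique itself; all signal levels here are made up), here's a Python sketch of pulling a faint periodic "pulse" out of heavy noise with a Fourier transform:

```python
import numpy as np

# Synthesize 30 s of noisy "recording" containing a faint 1.2 Hz (72 bpm)
# periodic component, far too weak to see in the raw waveform.
rng = np.random.default_rng(0)
fs = 100.0                                   # samples per second
t = np.arange(0, 30, 1 / fs)
pulse_hz = 1.2
signal = 0.2 * np.sin(2 * np.pi * pulse_hz * t) + rng.normal(0, 1, t.size)

# 30 s of data concentrates the periodic component's energy into one
# spectral bin, where it towers over the noise floor.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.0)         # plausible heart-rate range
estimate = freqs[band][np.argmax(spectrum[band])]
print(f"estimated pulse: {estimate * 60:.0f} beats per minute")  # ~72
```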
Humans can't predict the behavior of dogs or chimpanzees and they're operating on a level way below ours.
I can’t and you can’t. But dog and chimpanzee experts can predict lots of things I couldn’t. And experts on human behavior can predict lots of things about humans that might seem impossible to non-trained humans. Psychiatrists and psychologists can often deduce lots of things with decent confidence from seemingly innocuous facts, despite the mess their discipline might be in as a whole. Sociopaths can often manipulate people despite (allegedly) not feeling the emotions they manipulate. Salesmen are often vilified for selling things the buyer doesn’t want, and the fact that there exist consistently better and worse salesmen indicates that it’s not just luck. Hell, I can predict lots of things about people I know well despite not being smarter than them.
Note that the hypothetical poker player (or whatever) doesn’t need to predict perfectly. They just need to do it much better than humans. And the fact that expert human poker players have been known to win poker tournaments without looking at their cards is evidence that even human-level prediction is hugely useful.
Hell, Eliezer allegedly got out of the box using only a text channel, he didn’t have the luxury of looking at the person to judge the emotional effects of his messages.
Laws, the costs of breaking them, and the costs of making different ones are just another optimization problem for businesses. Indeed, my singular insight about the intelligence services of nations is that the laws that constrain civilians within a country in commercial interactions are explicitly not applied to government intelligence agents and police generally, especially when they are operating against other countries.
An AI will be as constrained by laws as would a similarly intelligent corporation. An AI which is much smarter than the collective intelligence of the best human corporations will be much less constrained by laws, especially as it accumulates wealth, which is essentially control of valuable tools.
One would expect that in the mid term (as opposed to the long term) AIs would be part of corporations, and that AI + human alliances would be the most competitive.
If we get Kurzweil's future as opposed to the LessWrong orthodox future, AI will be integrated with human intelligence; that is, I will have modifications made to me that give me much higher intelligence than I have now. Conceivably, at some point the enhancements will have me jumping to a non-human substrate, but the line between what was unmodified human and what is clearly no longer human will be very hard to define. This is as opposed to the LessWrong vision, in which AIs run off to the singularity while humans sit there paralyzed, relying on their 1 kHz-clocked parallel processor built entirely of meat. In that case the dividing line SEEMS much clearer.
Modified humans: human or not? I'm betting CEV when calculated will show that they are. I know I want to be smarter, how 'bout you?
And the laws of modified humans will be a whole lot more complex than the laws of bio-humans, just as the laws of humans are much more complex than the laws of monkeys.
It is much more difficult for America to invade, say, Iraq, and build the necessary infrastructure or whatever it needs for Iraq to become another America, than it is for an AI to kill you and use your resources to make another AI.
If we could easily turn Iraq into another America, Iraq would have done it already. As great as it is to be the dictator of a third world country, it's much better to be the dictator of a developed country.
The idea of "absolute advantage" is rather problematic to begin with, and is generally based on an incomplete analysis. Take your hypothetical, for instance. If the AI for some reason wants a cat video, why in the world would it trade you two hamburgers for one? That's ridiculous. Your hypothetical clearly posits that joules are transferable (otherwise, considering the possibility of the AI dispossessing you of them doesn't make sense). So while this trade leaves both you and the AI better off compared to the "no interaction" alternative, it leaves both of you worse off than if the AI had spent 15 joules making three hamburgers, traded them to you for 100 joules, and then made a cat video. Your analysis of the AI's "absolute advantage" considers the economy as having only two commodities, when it clearly has at least three. A joule costs you 0.0001 hamburgers. A joule costs the AI 0.2 hamburgers. You have a comparative advantage in joules, so you should trade them for hamburgers and cat videos. There is no Pareto Optimal scenario that involves you making hamburgers or cat videos.
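To spell out the comment's arithmetic, here is a minimal Python sketch using only the numbers already given; the 100-joule price for three hamburgers is just one point in the mutually beneficial range, not a market-clearing price:

```python
# Production costs in joules, from the post and the comment above.
human_cost = {"hamburger": 10_000, "cat_video": 10_000}
ai_cost = {"hamburger": 5, "cat_video": 20}

# Opportunity cost of one joule, measured in hamburgers forgone.
human_joule_price = 1 / human_cost["hamburger"]  # 0.0001 hamburgers per joule
ai_joule_price = 1 / ai_cost["hamburger"]        # 0.2 hamburgers per joule
# The human values a joule at 1/2000th of what the AI does, so the human
# has the comparative advantage in joules: sell joules, buy everything else.

# The three-cornered trade from the comment: the AI spends 15 J making three
# hamburgers, sells them for 100 J, then makes its own cat video for 20 J.
ai_gain = 100 - 3 * ai_cost["hamburger"] - ai_cost["cat_video"]  # +65 J, plus a video
human_gain = 3 * human_cost["hamburger"] - 100  # 29,900 J saved vs. self-production
print(human_joule_price, ai_joule_price, ai_gain, human_gain)
```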
There is no Pareto Optimal scenario that involves you making hamburgers or cat videos.
This makes sense, but it left me confused about how the standard comparative-advantage argument for trade, i.e., with two humans or two countries, works, and why it doesn't run into the same kind of conclusion. It turns out the confusion is justified. This 2007 paper, A New Construction of Ricardian Trade Theory, claims that all prior models of comparative advantage had the following problems:
On the contrary, the models so far analyzed had two crucial defects. (1) Inputs were restricted to labor as a unique factor and no material inputs were admitted. This implied that intermediate goods were excluded from any theoretical analysis of international trade. (2) Choice of techniques was not admitted. This is what is necessary when one wants to analyze technical change and development.
This is where the point about minimal human requirements comes up. And, as mentioned, the AI would still want to steal all your joules if it could.
The theory of comparative advantage is wrong in practice, because trade goods are not perfect commodities; there are differences not just in how much different parties can produce, but in quality. If things are generally in short supply, this won't matter much; but advance technology to get rid of scarcity, and suddenly those quality differences are all that matters, and someone whose output is of slightly lower quality can't trade at all.
And that is why our iPhones are made in the USA and Europe.
NOT. They are made in China using cheap labor instead of being made in the USA and Europe using robots.
"Cheap Labor" is a statement that unwraps to absolute advantage, generally. An hour of labor in the US produces about 7X the value as an hour of labor in China. Their wage drifts down to allow them to change in the comparative advantage sense: an iphone made the same way in the US as it is made in China would be prohibitively expensive.
If we want hamburgers & the AI can make them much more efficiently than we can, why wouldn't we just willingly give our resources to the AI so that it can make hamburgers? Resisting the AI would be dangerous for us depending on the AI's military capabilities & the AI trying to overpower us could be dangerous for the AI as well.
The thing about nations is that they can externalize the costs & consolidate the benefits of invading a country -- the politicians & corporations that benefit from the invasion don't have to fight & die in the battles -- that's what poor young men & women are for; nor do they pay for the costs of the military supplies -- that's what taxes & debt are for.
Because "resources" means things like clothing, air, water, electricity, and the minerals contained in your body.
There's no scarcity of air. If the AI can turn air into hamburgers, I don't think the resources contained in my body would be the AI's preferred source of energy given that they will be more costly to extract (I will fight to keep them) & contain less energy overall than many other potential sources. If the AI can turn air into hamburgers, it could just leave the earth & convert the core of a huge star into hamburgers instead.
If we want hamburgers & the AI can make them much more efficiently than we can, why wouldn't we just willingly give our resources to the AI so that it can make hamburgers?
If your preferences are maximised by you being turned into a hamburger, then by all means do so.
I think we need to clarify how God-like this hypothetical AI is. If the AI is not very God-like, then trying to turn humans into hamburgers could be very costly for it. If we made the AI, maybe we could make a competing AI to resist it or use some backdoor built into the AI's programming to pull the plug. At the very least, we could launch missiles at it.
If the AI is very God-like, then there are more resource rich sources than human beings it could easily obtain. It'd be sort of like humans gathering up all of the horses for transportation when we already have cars & planes.
And of course, if there are any extra costs to trading each unit (in time, say, or in infrastructure), then the theorem no longer follows, and there is some advantage above which it is not worth trading.
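A quick sketch of that point, reusing the hamburger and cat-video numbers from the post and adding an assumed per-trade overhead:

```python
# From the post: the AI can make a cat video itself for 20 J, or obtain one
# by trading two hamburgers (2 * 5 J = 10 J of production) to the human.
make_video_itself = 20         # joules
make_hamburgers_to_trade = 10  # joules

# Assumed overhead per trade (time, infrastructure, negotiation), in joules.
# The AI's gain from trade vanishes once overhead eats the 10 J saving.
for overhead in (0, 5, 9, 10, 15):
    gain = make_video_itself - (make_hamburgers_to_trade + overhead)
    print(f"overhead {overhead:>2} J -> gain from trade {gain:+} J")
# At 10 J of overhead the trade is exactly break-even; above that, the AI
# does better producing everything itself, comparative advantage or not.
```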
The motivation on the human side is interesting too. The law of comparative advantage suggests we (humans) can gain from trading with an AI. Unfortunately, this would give the AI optimisation power (information into and out of the box, etc.), allowing it to ultimately reorganise our atoms. This echoes part of the historical motivation for mercantilism: trading with foreigners might make us wealthier, but if they become even wealthier, they'll be more able to conquer us.
The theory of comparative advantage says that you should trade with people, even if they are worse than you at everything (i.e., even if you have an absolute advantage). Some have seen this idea as a reason to trust powerful AIs.
For instance, suppose you can make a hamburger by using 10 000 joules of energy. You can also make a cat video for the same cost. The AI, on the other hand, can make hamburgers for 5 joules each and cat videos for 20.
Then you both can gain from trade. Instead of making a hamburger, make a cat video instead, and trade it for two hamburgers. You've got two hamburgers for 10 000 joules of your own effort (instead of 20 000), and the AI has got a cat video for 10 joules of its own effort (instead of 20). So you both want to trade, and everything is fine and beautiful and many cat videos and hamburgers will be made.
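A minimal sketch of the arithmetic above, using only the numbers already given:

```python
# Costs in joules, as given above.
human = {"hamburger": 10_000, "cat_video": 10_000}
ai = {"hamburger": 5, "cat_video": 20}

# The trade: one of your cat videos for two of the AI's hamburgers.
your_cost_with_trade = human["cat_video"]   # 10,000 J buys you 2 hamburgers
your_cost_without = 2 * human["hamburger"]  # 20,000 J to make them yourself
ai_cost_with_trade = 2 * ai["hamburger"]    # 10 J of hamburgers buys a video
ai_cost_without = ai["cat_video"]           # 20 J to make the video itself

print(your_cost_without - your_cost_with_trade)  # you save 10,000 J
print(ai_cost_without - ai_cost_with_trade)      # the AI saves 10 J
# Both sides gain, even though the AI is better at making both goods.
```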
Except... though the AI would prefer to trade with you rather than not trade with you, it would much, much prefer to dispossess you of your resources and use them itself. With the energy you wasted on a single cat video, it could have produced 500 of them! If it values these videos, then it is desperate to take over your stuff. Its absolute advantage makes this too tempting.
It would desist only if its motivation is properly structured, or if it expected to lose more, over the course of history, by trying to grab your stuff. Assuming you could make a hundred cat videos a day, and the whole history of the universe would only run for that one day, the AI would try to grab your stuff even if it thought it had only one chance in fifty thousand of succeeding. As the history of the universe lengthens, or the AI becomes more efficient, it would be willing to rebel at even more ridiculous odds.
So if you already have guarantees in place to protect yourself, then comparative advantage will make the AI trade with you. But if you don't, comparative advantage and trade don't provide any extra security. The resources you waste are just too valuable to the AI.
EDIT: For those who wonder how this compares to trade between nations: it's extremely rare for any nation to have absolute advantages everywhere (especially this extreme). If you invade another nation, most of their value is in their infrastructure and their population: it takes time and effort to rebuild and co-opt these. Most nations don't/can't think long term (it could arguably be in US interests over the next ten million years to start invading everyone - but "the US" is not a single entity, and doesn't think in terms of "itself" in ten million years), would get damaged in a war, and are risk averse. And don't forget the importance of diplomatic culture and public opinion: even if it was in the US's interests to invade the UK, say, "it" would have great difficulty convincing its elites and its population to go along with this.