RSI capabilities could be charted, and are likely to be AI-complete.
What does RSI stand for?
Lately I've been listening to audiobooks (at 2x speed) in my downtime, especially ones that seem likely to contain passages relevant to the question of how well policy-makers will deal with AGI. This basically continues this project, but with only the "collection" stage, not the "analysis" stage.
I'll post quotes from the audiobooks I listen to as replies to this comment.
More (#3) from Better Angels of Our Nature:
...let’s have a look at political discourse, which most people believe has been getting dumb and dumber. There’s no such thing as the IQ of a speech, but Tetlock and other political psychologists have identified a variable called integrative complexity that captures a sense of intellectual balance, nuance, and sophistication. A passage that is low in integrative complexity stakes out an opinion and relentlessly hammers it home, without nuance or qualification. Its minimal complexity can be quantified by counting words like absolutely, always, certainly, definitively, entirely, forever, indisputable, irrefutable, undoubtedly, and unquestionably. A passage gets credit for some degree of integrative complexity if it shows a touch of subtlety with words like usually, almost, but, however, and maybe. It is rated higher if it acknowledges two points of view, higher still if it discusses connections, tradeoffs, or compromises between them, and highest of all if it explains these relationships by reference to a higher principle or system. The integrative complexity of a passage is not the same as the intelligence of the person who wrote it, but the...
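For the curious, here is a minimal sketch of the word-counting idea in the passage above. The word lists come from the quote itself, but the scoring scheme is an illustrative toy of mine, not Tetlock's actual integrative-complexity coding, which relies on trained human raters:

```python
# Toy scorer based on the passage above. The word lists are taken from
# the quote; the scoring scheme itself is an illustrative assumption,
# not Tetlock's actual integrative-complexity coding.

ABSOLUTIST = {
    "absolutely", "always", "certainly", "definitively", "entirely",
    "forever", "indisputable", "irrefutable", "undoubtedly", "unquestionably",
}
QUALIFIED = {"usually", "almost", "but", "however", "maybe"}

def complexity_signal(passage: str) -> float:
    """Qualified-word rate minus absolutist-word rate, per 100 words."""
    words = [w.strip(".,;:!?\"'()").lower() for w in passage.split()]
    if not words:
        return 0.0
    qualified = sum(w in QUALIFIED for w in words)
    absolutist = sum(w in ABSOLUTIST for w in words)
    return 100.0 * (qualified - absolutist) / len(words)

# Low complexity: hammers one opinion home without qualification.
print(complexity_signal("This policy is absolutely, unquestionably wrong."))
# Higher: acknowledges nuance.
print(complexity_signal("This policy is usually harmful, but maybe not always."))
```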
Okay. In this comment I'll keep an updated list of audiobooks I've heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.
Outstanding:
Worthwhile if you care about the subject matter:
A process for turning ebooks into audiobooks for personal use, at least on Mac:
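(What follows is a hypothetical sketch of one way to do it, using macOS's built-in `say` text-to-speech command; it is not necessarily the process referred to above, and it assumes the ebook has already been exported to plain text.)

```python
# Hypothetical sketch: render a plain-text ebook to an audio file
# using the `say` command that ships with macOS. Assumes the ebook
# has already been exported to plain text (e.g., book.txt).
import subprocess

subprocess.run(
    [
        "say",
        "-f", "book.txt",    # read the text from a file
        "-o", "book.aiff",   # write audio to a file instead of the speakers
        "-r", "300",         # speech rate in words per minute
    ],
    check=True,
)
```

The resulting AIFF file can then be imported into iTunes like any other audio file.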
Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.
This seems obviously false. Local expenditures - of money, pride, the possibility of not being the first to publish, etc. - are still local; global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
RSI capabilities could be charted, and are likely to be AI-complete.
This is to be taken as an arguendo, not as the author's opinion, right? See IEM on the minimal conditions for takeoff. Albeit if "...
(I don't have answers to your specific questions, but here are some thoughts about the general problem.)
I agree with most of what you said. I also assign significant probability mass to most parts of the argument for hope (though I haven't thought about this enough to put numbers on it), but I too am not comforted by these parts, because I also assign a non-small chance to them going wrong. E.g., I have hope for "if AI is visible [and, I add, AI risk is understood] then authorities/elites will be taking safety measures".
That said, there are some steps in...
I personally am optimistic about the world's elites navigating AI risk as well as possible, subject to the inherent limitations I would expect any humans to have, and to the inherent risk. Some points:
I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to what one might expect a priori.
AI risk is a Global Catastrophic Risk i
The argument from hope or towards hope or anything but despair and grit is misplaced when dealing with risks of this magnitude.
Don't trust God (or semi-competent world leaders) to make everything magically turn out all right. The temptation to do so is either a rationalization of wanting to do nothing, or based on a profoundly miscalibrated optimism for how the world works.
/doom
I think there's a >15% chance AI will not be preceded by visible signals.
Aren't we seeing "visible signals" already? Machines are better than humans at lots of intelligence-related tasks today.
Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
Cryptography and cryptanalysis are obvious precursors of supposedly dangerous tech within IT.
Looking at their story, we can plausibly expect governments to attempt to delay the development of "weaponizable" technology by others.
These days, cryptography facilitates international trade. It seems like a mostly-positive force overall.
One question is whether AI is like CFCs, or like CO2, or like hacking.
With CFCs, the solution was simple: ban CFCs. The cost was relatively low, and the benefit relatively high.
With CO2, the solution is equally simple: cap and trade. It's just not politically palatable, because the problem is slower-moving, and the cost would be much, much greater (perhaps great enough to really mess up the world economy). So, we're left with the second-best solution: do nothing. People will die, but the economy will keep growing, which might balance that out, because ...
Here are my reasons for pessimism:
There are likely to be effective methods of controlling AIs that are of subhuman or even roughly human-level intelligence which do not scale up to superhuman intelligence. These include, for example, reinforcement by reward/punishment, mutually beneficial trading, and legal institutions. Controlling superhuman intelligence will likely require qualitatively different methods, such as having the superintelligence share our values. Unfortunately, the existence of effective but unscalable methods of AI control will probably lull el
Congress' non-responsiveness to risks to critical infrastructure from geomagnetic storms, despite scientific consensus on the issue, is also worrying.
Even if one organization navigates the creation of friendly AI successfully, won't we still have to worry about preventing anyone from ever creating an unsafe AI?
Unlike nuclear weapons, a single AI might have world-ending consequences, and an AI requires no special resources. Theoretically, a seed AI could be uploaded to Pirate Bay, from where anyone could download and compile it.
The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI."
What kind of "AI safety problems" are we talking about here? If they are like the "FAI Open Problems" that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could "early AIs" be of much help?
If "AI safety problems" here do not refer to FAI problems, then how do those problems get solved, according to this argument?
@Lukeprog, can you (1) update us, in brief, on your working answers to the posed questions, and (2) give your current confidence (and, if you would like, by proxy, MIRI's confidence as an organisation) in each of the 3:
Elites often fail to take effective action despite plenty of warning.
I think there's a >10% chance AI will not be preceded by visible signals.
I think the elites' safety measures will likely be insufficient.
Thank you for your diligence.
There's another reason for hope here, beyond what we have with global warming: the idea of a dangerous AI is already established in the public eye as one of the "things we need to be careful about." A big problem the global warming movement had, and is still having, is convincing the public that it's a threat in the first place.
Whom do you mean by "elites"? Keep in mind that major disruptive technical progress of the type likely to precede the creation of a full AGI tends to cause the kind of social change that shakes up the social hierarchy.
Combining the beginning and the end of your questions reveals an answer.
Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of [nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars] just fine?
Answer how "just fine" any of these went, and you have analogous answers.
You might also clarify whether you are interested in what is just fine for everyone, or just fine for the elites, or just fine for the AI in question. The answer will change accordingly.
From de Mesquita and Smith's The Dictator's Handbook:
How do tyrants hold on to power for so long? For that matter, why is the tenure of successful democratic leaders so brief? How can countries with such misguided and corrupt economic policies survive for so long? Why are countries that are prone to natural disasters so often unprepared when they happen? And how can lands rich with natural resources at the same time support populations stricken with poverty?
Equally, we may well wonder: Why are Wall Street executives so politically tone-deaf that they dole out billions in bonuses while plunging the global economy into recession? Why is the leadership of a corporation, on whose shoulders so much responsibility rests, decided by so few people? Why are failed CEOs retained and paid handsomely even as their company’s shareholders lose their shirts?
In one form or another, these questions of political behavior pop up again and again. Each explanation, each story, treats the errant leader and his or her faulty decision making as a one-off, one-of-a-kind situation. But there is nothing unique about political behavior.
...We look at each case and conclude they are different, uncharacteristic anomalies. Yet they are held together by the logic of politics, the rules ruling rulers.
...To understand politics properly, we must modify one assumption in particular: we must stop thinking that leaders can lead unilaterally.
No leader is monolithic. If we are to make any sense of how power works, we must stop thinking that North Korea’s Kim Jong Il can do whatever he wants. We must stop believing that Adolf Hitler or Joseph Stalin or Genghis Khan or anyone else is in sole control of their respective nation. We must give up the notion that Enron’s Kenneth Lay or British Petroleum’s (BP) Tony Hayward knew about everything that was going on in their companies, or that they could have made all the big decisions. All of these notions are flat out wrong because no emperor, no king, no sheikh, no tyrant, no chief executive officer (CEO), no family head, no leader whatsoever can govern alone.
...For leaders, the political landscape can be broken down into three groups of people: the nominal selectorate, the real selectorate, and the winning coalition.
The nominal selectorate includes every person who has at least some legal say in choosing their leader. In the United States it is everyone eligible to vote, meaning all citizens aged eighteen and over. Of course, as every citizen of the United States must realize, the right to vote is important, but at the end of the day no individual voter has a lot of say over who leads the country. Members of the nominal selectorate in a universal-franchise democracy have a toe in the political door, but not much more. In that way, the nominal selectorate in the United States or Britain or France doesn’t have much more power than its counterparts, the “voters,” in the old Soviet Union. There, too, all adult citizens had the right to vote, although their choice was generally to say Yes or No to the candidates chosen by the Communist Party rather than to pick among candidates. Still, every adult citizen of the Soviet Union, where voting was mandatory, was a member of the nominal selectorate.

The second stratum of politics consists of the real selectorate. This is the group that actually chooses the leader. In today’s China (as in the old Soviet Union), it consists of all voting members of the Communist Party; in Saudi Arabia’s monarchy it is the senior members of the royal family; in Great Britain, the voters backing members of parliament from the majority party.

The most important of these groups is the third, the subset of the real selectorate that makes up a winning coalition. These are the people whose support is essential if a leader is to survive in office. In the USSR the winning coalition consisted of a small group of people inside the Communist Party who chose candidates and who controlled policy. Their support was essential to keep the commissars and general secretary in power. These were the folks with the power to overthrow their boss—and he knew it. In the United States the winning coalition is vastly larger. It consists of the minimal number of voters who give the edge to one presidential candidate (or, at the legislative level in each state or district, to a member of the House or Senate) over another. For Louis XIV, the winning coalition was a handful of members of the court, military officers, and senior civil servants without whom a rival could have replaced the king.
Fundamentally, the nominal selectorate is the pool of potential support for a leader; the real selectorate includes those whose support is truly influential; and the winning coalition extends only to those essential supporters without whom the leader would be finished. A simple way to think of these groups is: interchangeables, influentials, and essentials.
In the United States, the voters are the nominal selectorate — interchangeables. As for the real selectorate — influentials — the electors of the electoral college really choose the president (just like the party faithful picked their general secretary back in the USSR), but the electors nowadays are normatively bound to vote the way their state’s voters voted, so they don’t really have much independent clout in practice. In the United States, the nominal selectorate and real selectorate are therefore pretty closely aligned. This is why, even though you’re only one among many voters, interchangeable with others, you still feel like your vote is influential — that it counts and is counted. The winning coalition — essentials — in the United States is the smallest bunch of voters, properly distributed among the states, whose support for a candidate translates into a presidential win in the electoral college. And while the winning coalition (essentials) is a pretty big fraction of the nominal selectorate (interchangeables), it doesn’t have to be even close to a majority of the US population. In fact, given the federal structure of American elections, it’s possible to control the executive and legislative branches of government with as little as about one fifth of the vote, if the votes are really efficiently placed...
Looking elsewhere we see that there can be a vast range in the size of the nominal selectorate, the real selectorate, and the winning coalition. Some places, like North Korea, have a mass nominal selectorate in which everyone gets to vote — it’s a joke, of course — a tiny real selectorate who actually pick their leader, and a winning coalition that surely is no more than maybe a couple of hundred people (if that) and without whom even North Korea’s first leader, Kim Il Sung, could have been reduced to ashes. Other nations, like Saudi Arabia, have a tiny nominal and real selectorate, made up of the royal family and a few crucial merchants and religious leaders. The Saudi winning coalition is perhaps even smaller than North Korea’s.
...These three groups provide the foundation of all that’s to come in the rest of this book, and, more importantly, the foundation behind the working of politics in all organizations, big and small. Variations in the sizes of these three groups give politics a three-dimensional structure that clarifies the complexity of political life. By working out how these dimensions intersect—that is, each organization’s mix in the size of its interchangeable, influential, and essential groups—we can come to grips with the puzzles of politics. Differences in the size of these groups across states, businesses, and any other organization, as you will see, decide almost everything that happens in politics—what leaders can do, what they can and can’t get away with, to whom they answer, and the relative qualities of life that everyone under them enjoys (or, too often, doesn’t enjoy).
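As an aside, the "one fifth of the vote" claim in the excerpt above is easy to illustrate with a toy model. The numbers below are invented for the example (the real calculation uses actual census and electoral-college data); the point is just that overweighted small states plus bare majorities shrink the required national vote share:

```python
# Toy model of the "one fifth of the vote" arithmetic. All numbers
# are invented for illustration; the real calculation uses actual
# census and electoral-college data.

# (voters in millions, electoral votes) -- small states get far more
# electoral votes per voter, mimicking the two-senator bonus.
states = (
    [(0.3, 3)] * 20     # 20 tiny states
    + [(2.0, 6)] * 15   # 15 small states
    + [(6.0, 12)] * 10  # 10 medium states
    + [(20.0, 40)] * 5  # 5 big states
)

total_ev = sum(ev for _, ev in states)
total_voters = sum(v for v, _ in states)
needed = total_ev // 2 + 1

# Strategy: win bare majorities in the states with the most electoral
# votes per voter, and concede everything else.
states.sort(key=lambda s: s[1] / s[0], reverse=True)
won_ev = 0
votes_spent = 0.0
for voters, ev in states:
    if won_ev >= needed:
        break
    won_ev += ev
    votes_spent += voters / 2  # just over half the vote in each state won

print(f"electoral votes won: {won_ev} of {total_ev}")
print(f"national vote share spent: {votes_spent / total_voters:.0%}")
```

With these invented numbers, bare majorities in the most overweighted states secure a winning coalition of electors with roughly a fifth of the national vote, matching the book's back-of-the-envelope claim.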
More (#2) from The Dictator's Handbook:
...Democratic leaders profess a desire for democratization. Yet the reality is that it is rarely in their interest. As the coalition size grows in a foreign nation, its leader becomes more and more compelled to enact policies that his people want and not the policies desired by the puppeteer’s people. If a democratic leader wants a foreign leader to follow his prescribed policies then he needs to insulate his puppet from domestic pressures. This means reducing coalition size in vanquished states. This makes it cheaper
One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?
Some reasons for concern include:
But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):
The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)
Personally, I am not very comforted by this argument because:
Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help on our research into this question.
In particular, I'd like to know: