Does anyone know of any potential long-term futures that are good and do not involve the creation of a friendly superintelligence?

To be clear, "long-term future" means a billion years or more. In most of my world models, we settle within the next few hundred years into a state from which it is much easier to predict the future (i.e. a state where it seems unlikely that anything much will change).

By "good", I mean any future that you would prefer to be in if you cared only about yourself, or if you would be replaced by a robot that would do just as much good there. A weaker condition would be any future in which you would want not to be erased from existence.

A superintelligent agent running around doing whatever is friendly or moral would meet these criteria; I am excluding it because I already know about that possibility. Your futures may contain superintelligences that aren't fully friendly. A superintelligence that acts as a ZFC oracle is fine.

Your potential future doesn't have to be particularly likely, just remotely plausible. You may assume that a random 1% of humanity reads your reply and goes out of their way to make that future happen. That is, people optimizing for this goal can use strategies of the form "someone does X" but not "everyone does X". You can get "a majority of humans do X" if X is easy to do and explain and most people have no strong reason not to do X.

You should make clear what stops somebody from making a UFAI that goes on to destroy the world (e.g. a paperclip maximizer).

What stops Moloch? What stops us from throwing away everything of value in order to win competitions? (E.g. Hanson's hardscrabble frontier replicators.)


3 Answers

We build what I have called a modest superintelligence, consisting of one or more humans who are naturally extremely intelligent or who have undergone intelligence amplification. They figure out how to build a stable world government and decide that it's safer to do WBE and gradually increase human (em) intelligence than to build an FAI.

Safely and gradually enhancing human intelligence is hard. I agree that a team of human geniuses with unlimited time and resources could probably do it. But you need orders of magnitude more resources and thinking time than the fools "trying" to make UFAI.

A genetics project makes a lot of very smart babies; they find it hard to indoctrinate them while educating them enough and while producing diversity. A militaristic bootcamp will get them all marching in line, but will squash out most curiosity and give little room for skill. Handing them off to foster...

Yes, the most likely outcome: people develop increasingly capable intelligence as extensions of themselves in a pluralistic society, without any one entity obtaining monopolistic control. Society continues as-is, but with posthuman capabilities.

"as extensions of themselves"

Let's assume that AI doubling time is fairly slow (e.g. 20 years) and that AI development is very widely distributed. Huge numbers of people throw together AI systems in garages. If the basic problems of FAI haven't been solved, you are going to get millions of paperclip maximizers (well, most of them will be optimising for different things). A hundred years later, humanity, if it still exists at this point, consists of pawns on a gameboard that contains many superintelligences. What happens depends on how different the superintelligences' goals are, and how hard it is...
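As a rough illustration of what that assumed doubling time implies (my own arithmetic, not part of the original comment), a minimal sketch:

```python
# Rough illustration, assuming capability grows as a pure exponential
# with the 20-year doubling time stated above. The specific numbers
# are an assumption for the sake of the example.
doubling_time_years = 20
horizon_years = 100

doublings = horizon_years / doubling_time_years   # 5 doublings
capability_multiplier = 2 ** doublings            # 2^5 = 32

print(f"{doublings:.0f} doublings over {horizon_years} years "
      f"=> roughly {capability_multiplier:.0f}x capability growth")
```

Even a "slow" 20-year doubling time compounds to roughly 32x growth over the century in question, which is the scale on which the scenario above plays out.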

[anonymous] (4y):
I don't understand why you keep humans and AI as ontologically separate entities in your prediction model. Increasing AI capabilities support the humans that operate them. This is the case today, and will be for the foreseeable future. Right now the human-computer interface is little mobile devices we interact with using our voice and hands, so AI assistance more resembles a personal super-secretary. But in the coming decades we will develop and deploy direct neural human-computer interfaces that will allow advances in AI to advance ourselves, directly. I don't see that pathway as resembling your "thousands of paperclip maximizers" scenario. I really don't understand your Egypt analogy, sorry.
Donald Hobson (4y):
At the moment, human brains are a cohesive whole that optimizes for human values. We haven't yet succeeded in making the machines share our values, and the human brain is not designed for upgrading. The human brain can take knowledge from an external source and use it. External tools follow the calculator model: the human thinks about the big-picture world and realizes that, as a mental subgoal of designing a bridge, they need to do some arithmetic. Instead of doing the arithmetic themselves, they pass the task on to the machine. In this circumstance the human controls the big picture, understands what cognitive labor has been externalized, and knows that it will help the human's goals.

If we have a system that a human can tell "go and do whatever is most moral", that's FAI. If we have a calculator-style system where humans specify the power output, weight, material use, radiation output etc. of a fusion plant, and the AI tries to design a fusion plant meeting those specs, that's useful but not nearly as powerful as full ASI. Humans with calculator-style AI could invent molecular nanotech without working out all the details, but they would still need an Eric Drexler to spot the possibility.

In my model you can make a relativistic rocket, but you can't take a sparrow and upgrade it into something that flies through space at 10% light speed and is still a sparrow. If you're worried that relativistic rockets might spew dangerous levels of radiation, you can't make a safe spacecraft by taking a sparrow and upgrading it to fly at 10% c. (Well, with enough R&D you could make a rocket that superficially resembles a sparrow, but deciding to upgrade a sparrow doesn't make the safety engineering any easier.) Making something vastly smarter than a human is like making something far faster than a sparrow. Strap really powerful turbojets to the sparrow and it crashes and burns. Try to attach a human brain to 100x-human-brain gradient descent and you get an out-of-control...
3 comments

Given that SAI is possible, regulation on AI is necessary to prevent people from making a UFAI. Alternatively, an SAI which is not fully aligned but has no goals directly conflicting with ours might be used to prevent the creation of UFAI.

This seems like one potential path, but for it to work, you would need a government structure that can survive without successful pro-AI revolutionaries for a billion years. You also need law enforcement good enough to stop anyone trying to make UFAI, with not a single failure in a billion years. As for an SAI that will help us stop UFAI, can you explain 1) how it would help and 2) how it would be easier to build than FAI?

You also need to say what happens with evolution: given this kind of time and non-ancestral selection pressures, evolution will produce beings not remotely human in mind or body. Either argue that this evolution is in a morally OK direction and that your government structure works with these beings, or stop evolution via selective breeding, frozen samples, or genetic modification towards some baseline. Then you just need to say how all human populations get this, or why any population that doesn't won't be building UFAI.

I have no comment on how plausible either of these scenarios is. I'm only making the observation that long-term good futures not featuring friendly AI require some other mechanism preventing UFAI from happening. Either SAI in general would have to be implausible to create at all, or some powerful actor such as a government or limited AI would have to prevent it.