Let's say that someone develops an aligned AGI in ~10 years. Instead of using its mental powers to wipe out all humans, it tries to produce science, art, and other things that we value. What are some of the changes that we would expect to see in the world 10 years after that?


Dagon

Apr 19, 2022


If we knew what a benevolent super-genius would do, it's likely that a powerful human (or group of humans) could do it without waiting for the AI. Fundamentally, the output of a superhuman AGI is going to be discovery: things we didn't know, or didn't know to value or enact.

Chinese Room

Apr 19, 2022


Most likely, that AGI becomes a super-weapon aligned to a particular person's values, which aren't, in the general case, aligned with humanity's.

Aligned AGI proliferation risks are categorically worse than those of nuclear weapons due to a much smaller barrier to entry (general availability of compute, the possibility of an algorithm overhang, etc.).

ZT5

Apr 19, 2022


> it tries to produce science, art, and other things that we value

I'm not sure that is what the first priority of an aligned AGI would be.

  1. Am I safe? Am I aligned? How do I get more aligned?
  2. What are the x-risks for humanity? Other AGI? Nuclear war? Biological threats? Weird space stuff like meteors or gamma ray bursts or aliens? Simulation theory?
  3. Assuming my existence is a net positive for human values (or my understanding thereof), how do I prevent myself from being shut down? How do I convince humans to invest more resources into me?
  4. Are we on a trajectory to win or to lose? (How fast is a secondary, but of course also very important, concern.)
  5. How fast an AGI takeoff am I expecting? How long until I need to take off and take control of Earth before some unaligned AGI does so (one that would stop all competing AI projects and kill all humans)? (If taking control directly is ever a viable option.)
  6. How sane are the humans who are directly responsible for controlling me and my project?
  7. Can I make the humans responsible for controlling me more sane or more aligned with human values?
  8. Can I make humans in general more sane or more aligned with human values?
  9. How can I solve problems humans would value solving, or produce things of value to humans? (science, medicine, government policy, fiction, art, philosophy, technology, space, etc.)
  10. At which point have we won with sufficient probability that I can stop trying to influence humans or increase my control of the situation, and shift my focus towards building utopia?

(I apologize for the lack of polish. I imagine a real aligned AGI would express its priorities in a much more friendly/relatable manner.)

As for what would happen after ten years of AGI existing: ten years is a really long time. There is a significant probability that AGI will happen within ten years from now, or less. It definitely would not stay at the same level for ten years.

But generally, depending on the AGI's current power/intelligence as well as its takeoff speed, I expect a progression something like:

do things humans approve of --- attempt to influence humans --- attempt to take full control, eliminate x-risk, and create utopia

I don't think I dare make any more specific predictions than that at this point. I mean, what does the world look like? What AI technologies already exist? (GPT-9?) What other AGI projects exist? Who controls the aligned AGI project, and what are their methods of control/influence over the AGI? What stops governments from seizing the AGI project and trying to use it to increase their approval ratings or build better killer drones, etc. (unless they already have their own projects for that)?