This is a linkpost for

The Age of AI has begun
Artificial intelligence is as revolutionary as mobile phones and the Internet.

30-Second Summary of Bill Gates' Views re Alignment:

  • Gates cites Bostrom and Tegmark's books as having shaped his thinking, but thinks that AI developments of the past few months don't make the control problem more urgent.
  • Gates asks what strong AI's goals will be, what happens if those goals conflict with humanity's interests, and whether we should try to prevent strong AI from ever being developed; he says these questions will get more pressing with time.

Quotations that Convey Key Views

From the section: "Risks and problems with AI":

  • "Three books have shaped my own thinking on this subject: Superintelligence, by Nick Bostrom; Life 3.0 by Max Tegmark; and A Thousand Brains, by Jeff Hawkins."
    • "I don’t agree with everything the authors say, and they don’t agree with each other either. But all three books are well written and thought-provoking."
  • "There's the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us?"
    • "Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months."
      • "[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI."
  • "Superintelligent AIs are in our future."
    • "Once developers can generalize a learning algorithm and run it at the speed of a computer—an accomplishment that could be a decade away or a century away—we’ll have an incredibly powerful AGI."
      • "It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. This will be a profound change."
  • "These “strong” AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time."


Comments

I have a kinda-unrelated question. Does Bill Gates write gatesnotes entirely himself, just because he wants to? Or is it a marketing/PR thing written by other people? If it's the former, then I want to read it. If it's the latter, I don't.

I read the post and see his blogs/videos from time to time. I feel like the tone/style across them is very consistent, so that's some weak evidence that he writes them himself?

I am not sure how anyone could say that "[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI" unless he hasn't really followed the breakthroughs of the past few months, or has read only bad secondhand reports.

Strong AI, yes. True AI, probably not (that's just my guess). I started following this fairly recently, can you (or someone) provide some links to articles/posts with updated predictions of timelines that factor in recent breakthroughs? How far are we from true AI?

I think there were some not-insignificant updates to Metaculus aggregate predictions for AGI timelines in the past few months:

What do you mean by true AI?

Y'know, True AI, like True Scotsmen.

I guess what I'm calling 'true AI' is not unlike the stated goal of general intelligence, or AGI. As opposed to narrow AI (also called weak AI), true AI is what the average sci-fi fan thinks of as AI (as in movies such as 'Ex Machina', '2001: A Space Odyssey', or 'Zoe'): systems that are seemingly conscious, exercise free will, and demonstrate human-like cognitive degrees of freedom.

With recent breakthroughs it may be useful to separate those terms, as we may have AGI soon, but it will still be narrow in a lot of ways. True AI is still far off, in my opinion. I don't think it'll emerge directly from large language models, but more likely from a new substrate that's more dynamic than current computer chips, circuit boards, semiconductors, etc. The invention/discovery of that new substrate will be the biggest bottleneck to true AI.

First, we should try to balance fears about the downsides of AI—which are understandable and valid—with its ability to improve people’s lives. To make the most of this remarkable new technology, we’ll need to both guard against the risks and spread the benefits to as many people as possible.

Second, market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity. Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world’s best AIs on its biggest problems.

Although we shouldn’t wait for this to happen, it’s interesting to think about whether artificial intelligence would ever identify inequity and try to reduce it. Do you need to have a sense of morality in order to see inequity, or would a purely rational AI also see it? If it did recognize inequity, what would it suggest that we do about it?
