I have a somewhat unrelated question. Does Bill Gates write gatesnotes entirely himself, just because he wants to? Or is it a marketing/PR thing written by other people? If it's the former, then I want to read it. If it's the latter, I don't.
I read the post and see his blogs/videos from time to time. I feel like the tone/style across them is very consistent, so that's some weak evidence that he writes them himself?
I am not sure how anyone could say that "[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI" unless he hasn't really followed those breakthroughs or has only read bad secondhand reports.
Strong AI, yes. True AI, probably not (that's just my guess). I started following this fairly recently; can you (or someone) provide some links to articles/posts with updated timeline predictions that factor in recent breakthroughs? How far are we from true AI?
I think there were some not-insignificant updates to the Metaculus aggregate predictions for AGI timelines in the past few months: https://www.lesswrong.com/posts/CiYSFaQvtwj98csqG/metaculus-predicts-weak-agi-in-2-years-and-agi-in-10#comments
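For anyone who wants to check the current numbers rather than rely on the snapshot in the linked post, Metaculus exposes question data over a public JSON API. The sketch below is a rough, unverified example, not an official recipe: the `/api2/questions/<id>/` path, the two question IDs (3479 for the "weak AGI" question, 5121 for the "AGI" question), and the response fields it looks for are all assumptions.

```python
# Minimal sketch: fetch the Metaculus question pages for the two AGI-timeline
# questions and print whatever aggregate forecast fields come back.
# Assumptions (not verified): the /api2/questions/<id>/ endpoint, the question
# IDs below, and the names of the forecast fields in the response JSON.
import requests

QUESTION_IDS = {"weak AGI": 3479, "AGI": 5121}  # assumed question IDs

for label, qid in QUESTION_IDS.items():
    url = f"https://www.metaculus.com/api2/questions/{qid}/"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    print(label, "-", data.get("title", "<no title>"))
    # Fall back to listing the raw keys if the assumed field names are wrong.
    forecast = data.get("community_prediction") or data.get("prediction")
    print("  aggregate forecast:", forecast if forecast is not None else sorted(data.keys()))
```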
I guess what I'm calling 'true AI' is not unlike the stated goal of general intelligence, or AGI. As opposed to narrow AI (also called weak AI), true AI is what the average sci-fi fan thinks of as AI: the machines in movies such as 'Ex Machina', '2001: A Space Odyssey' or 'Zoe' that are seemingly conscious, exercise free will, and demonstrate human-like cognitive degrees of freedom.
With recent breakthroughs, it may be useful to separate those terms, since we may have AGI soon and yet it will still be narrow in a lot of ways. True AI is still far off, in my opinion. I don't think it will emerge directly from large language models; more likely it will come from a new substrate that is more dynamic than current computer chips, circuit boards, semiconductors, etc. The invention/discovery of that new substrate will be the biggest bottleneck to true AI.
First, we should try to balance fears about the downsides of AI—which are understandable and valid—with its ability to improve people’s lives. To make the most of this remarkable new technology, we’ll need to both guard against the risks and spread the benefits to as many people as possible.
Second, market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity. Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world’s best AIs on its biggest problems.
Although we shouldn’t wait for this to happen, it’s interesting to think about whether artificial intelligence would ever identify inequity and try to reduce it. Do you need to have a sense of morality in order to see inequity, or would a purely rational AI also see it? If it did recognize inequity, what would it suggest that we do about it?
This is a linkpost for https://www.gatesnotes.com/The-Age-of-AI-Has-Begun#ALChapter6
The Age of AI has begun
Artificial intelligence is as revolutionary as mobile phones and the Internet.
30-Second Update on Bill Gates' Views re Alignment:
Quotations that Convey Key Views
From the section "Risks and problems with AI":