Superintelligent AI mentioned as a possible risk by Bill Gates

by Roko
28th Nov 2010
1 min read
"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."

- Bill Gates, from "Africa Needs Aid, Not Flawed Theories"

One wonders where Bill Gates read that superintelligent AI could be (but, in his estimation, in fact isn't) a global catastrophic risk (GCR). It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking or Martin Rees, or possibly Bill Joy (see comments).

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:

"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."

20 comments, sorted by top scoring
JoshuaZ:

"It seems that Bill is also something of a Bayesian with respect to global catastrophic risk"

This isn't Bayesianism; this is something closer to caring about expected utility. Not the same thing.
Roko:

It might be the way one phrases Bayesianism in a popular article, where the aim is to argue for the object-level proposal rather than weaken the article by relying explicitly on Bayesianism.
JoshuaZ:

How so? They seem disconnected. Bayesianism is an epistemological approach. There's nothing, for example, that would stop someone from being a Bayesian and a virtue ethicist, or a Bayesian with a deontology based on divine command theory.
CarlShulman:

Surely Bill Joy is another possibility, and Kurzweil does talk at least a bit about AI x-risk.
Roko:

True. I hadn't thought of that.

Where does Kurzweil talk about AI x-risk? I read the whole of The Singularity Is Near, and there is precisely one tiny paragraph in it about AI risk.
Roko:

By the way, my memory fails me: what exactly does Joy say about AI risk? What is his angle? If I recall correctly, he cites the dangers of robots, not of superintelligence.

E.g. the word "superintelligence"/"superintelligent" appears only once in Bill Joy's famous essay "Why the future doesn't need us", and that in a Moravec quote. "Robot(ics)" appears 52 times.
timtyler:

He says, of the "robots":

"If they are smarter than us, stronger than us, evolve quicker than us, they are likely to out-evolve us - in the same way that we have taken over the planet and out-evolved most of the other creatures" - source.
Roko:

Still, that doesn't tell me why Gates said "superintelligent computers" rather than "highly-evolved robots".
timtyler:

Give a superintelligence some actuators and it becomes a robot. A superintelligence without actuators is not much use to anyone.
Paul Crowley:

The point is that Gates's turn of phrase is informative about the provenance of his ideas.
timtyler:

It might so inform, but Gates has a brain between his ears and mouth, and these concepts are likely to be old and familiar ones for him, so internal concept processing also seems fairly likely.
Nic_Smith:

An oracle built with solid-state hard drives and no cooling fans would not be of use to anyone?
Emile:

"One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a GCR. It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking or Martin Rees, or possibly Bill Joy (see comments).

The idea is also quite common in Science Fiction."

Or reading OvercomingBias (unlikely), or talking to someone who did (more likely); my impression is that more people may have been in contact with the "Scary Idea" through Eliezer's writing than through that of the other people you list (except probably Kurzweil). Back when Eliezer was posting daily on OB, I'd see mentions of the blog from quite varied sources (all quite geeky).

Of course, still more people have been exposed to a form of the Scary Idea through the Terminator movies and other works of fiction.
NancyLebovitz:

Gates could have come up with the idea by himself, too.
Roko:

Very doubtful, since he goes on to reject it.

Unless he actually accepts it, but can't say so in public for fear of being branded a kook.
MichaelVassar:

Kurzweil does say that AGI is a GCR.
Roko:

Where?
CarlShulman:

In The Singularity Is Near, go to the index and look for "risk," "pathogen," and so on to find the relevant chapter. He says that the best way to reduce AI risk is to be moral, so that our future selves and successors respond well.
timtyler:

Note that "rational optimism" seems rather opposed to an apocalyptic end of civilisation.

Cultural evolution apparently consists of things getting better, since better things are what selection favours.
timtyler: [comment collapsed]
"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."

- Bill Gates 

From

Africa Needs Aid, Not Flawed Theories

One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a GCR. It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking or Martin Rees or possibly Bill Joy(See comments). 

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:

"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."