Comments

It seems that in 2014 he believed that p(doom) was less than 20%.

I do expect some of the potential readers of this post to live in a very unsafe environment - e.g. parts of current-day Ukraine, or if they live together with someone abusive - where they are actually in constant danger.

I live ~14 kilometers from the front line, in Donetsk. Yeah, it's pretty... stressful. 
But I think I'm much more likely to be killed by an unaligned superintelligence than by an artillery barrage.
Most people survive urban battles, so I have a good chance. 
And in fact, many people worry even less than I do! People get tired of feeling in danger all the time.

'“Then why are you doing the research?” Bostrom asked.

“I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”'

'I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.” He looked as if he might elaborate. Then a scientist called out, “Let’s all get drinks!”'

https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

Hinton seems to be more responsible now!

The level of concern and seriousness I see from ML researchers discussing AGI on any social media platform or in any mainstream venue seems wildly out of step with "half of us think there's a 10+% chance of our work resulting in an existential catastrophe".

In fairness, this is not quite half of the researchers. It is half of those who agreed to take the survey.

'We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021. [...] We received 738 responses, some partial, for a 17% response rate'.

I expect that worried researchers are more likely to agree to participate in the survey.
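
To make the selection-bias point concrete, here is a rough lower bound from the quoted figures (my own arithmetic, not from the survey itself). Even in the extreme case where every researcher who declined to respond is unworried:

$$\frac{0.5 \times 738}{4271} \approx \frac{369}{4271} \approx 8.6\%$$

So "half of respondents" still means at least ~8.6% of all contacted researchers hold the "10+% chance" view, and at most the full ~50% if respondents happen to be representative.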

Thanks for your answer, this is important to me.

I am not an American (so excuse my bad English!), so my opinion about the admissibility of an attack on US data centers is not so important. This is not my country.

But reading about the bombing of Russian data centers as an example was unpleasant. It sounds like Western bias to me. And not only to me.

'What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question?'.

If the text is aimed at readers not only from First World countries, then perhaps the authors should add a clarification like the one you made! Then it will not look like political hypocrisy. Or they could not write about air strikes at all, since people get distracted into discussing them.

I'm not an American, so my consent doesn't mean much :)

Suppose China and Russia accepted Yudkowsky's initiative, but the USA did not. Would you support bombing an American data center?

I can provide several links, and you can choose those that are suitable. If any are suitable. The problem is that I retained not the most complete justifications, but the most... certain and brief ones. I will try not to repeat those that are already in the answers here.

Ben Goertzel

Jürgen Schmidhuber

Peter J. Bentley

Richard Loosemore

Jaron Lanier and Neil Gershenfeld

Magnus Vinding and his list

Tobias Baumann

Brian Tomasik

Maybe Abram Demski? But he has probably changed his mind since.
Well, Stuart Russell. But this is a book. I can quote:

'I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic that there is a path of AI development that leads us to what we might describe as “provably beneficial AI systems.”'

There are also a large number of reasonable people who have directly called themselves optimists or put a relatively small probability on death from AI. But usually they did not justify this in ~500 words…

I also recommend this book.
