William the Kiwi

Comments

Thank you for writing this, Igor. It helps highlight a few of the biases that commonly influence people's decision-making around x-risk. I don't think people talk about this enough.

I was contemplating writing a similar post about the psychology involved, but I think you have done a better job than I was going to. Your description of 5 hypothetical people communicates the idea more smoothly than what I was planning. Well done. The fact that I feel a little upset that I didn't write something like this sooner, and the fact that the other comment has talked about motivated reasoning, produce an irony that is not lost on me.

I agree with your sentiment that most of this is influenced by motivated reasoning.

I would add that "Joep" in the  Denial story is motivated by cognitive dissonance, or rather the attempt to reduce cognitive dissonance by discarding one of the two ideas "x-risk is real and gives me anxiety" and "I don't want to feel anxiety".

In the People Don't Have Images story, "Dario" is likely influenced by the availability heuristic, where he is attempting to estimate the likelihood of a future event based on how easily he can recall similar past events.

I would agree that people lie way more than they realise. Many of these lies are self-deception.

"We can't shut it all down."

Why do you personally think this is correct? Is it that humanity does not know how to shut it down? Or is incapable? Or unwilling?

This post makes a range of assumptions, and looks at what is possible rather than what is feasible. You are correct that this post attempts to approximate the computational power of a Dyson sphere and compare it to an approximation of the computational power of all humans alive. Since posting, I have been made aware that there are multiple ways to break the Landauer Limit. I agree that these calculations may be off by an order of magnitude, but even if so, that doesn't break the conclusion that "the limit of computation, and therefore intelligence, is far above all humans combined".
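To make the comparison concrete, here is a rough order-of-magnitude sketch in Python. The specific numbers (the Sun's full luminosity as the captured power, 300 K operation, ~1e16 operations per second per brain, 8 billion people) are illustrative assumptions, not figures taken from the post.

```python
import math

# Order-of-magnitude sketch using assumed round numbers (not from the post):
# a Dyson sphere captures roughly the Sun's full luminosity, computation runs
# at ~300 K, and one human brain is taken as ~1e16 operations per second.

k_B = 1.380649e-23                        # Boltzmann constant, J/K
T = 300.0                                 # assumed operating temperature, K
landauer_energy = k_B * T * math.log(2)   # minimum energy per bit erasure, J

solar_luminosity = 3.8e26                 # W, approximate power a Dyson sphere could capture
dyson_bit_ops_per_s = solar_luminosity / landauer_energy

human_brain_ops = 1e16                    # ops/s per brain, a commonly cited rough estimate
all_humans_ops = human_brain_ops * 8e9    # ~8 billion people

print(f"Landauer-limited bit operations/s for a Dyson sphere: {dyson_bit_ops_per_s:.1e}")
print(f"Rough operations/s for all humans combined:           {all_humans_ops:.1e}")
print(f"Ratio: ~{dyson_bit_ops_per_s / all_humans_ops:.0e}")
```

Under these assumptions the gap is around twenty orders of magnitude, so even if individual estimates are off by a few orders of magnitude, the conclusion still holds.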

Yeah, you could, but you would be waiting a while. Your reply and 2 others have made me aware that this post's limit is too low.

[EDIT: spelling]

I just read the abstract. Storing information in momentum makes a lot of sense, as we know it is a conserved quantity. It would be practically challenging, but yes, this does move the theoretical limit even further away from all humans combined.

OK, I take your point. In your opinion, would this be an improvement: "Humans have never completed a large-scale engineering task without at least one mistake on the first attempt"?

As for the argument about AI: will the process used to make current AI scale to AGI level? From what I understand, that is not the case. Is that predicted to change?

Thank you for giving feedback. 

But isn't that also using a reward function? The AI is trying to maximise the reward it receives from the Reward Model, a model that was itself trained using Human Feedback.
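As a toy illustration of the two-stage structure I mean (a sketch with invented names and data, not any real system's code): a reward model is first fit to human preference labels, and the policy is then optimised against the score that learned model gives its outputs.

```python
import random

# Toy illustration of the two-stage structure; the data and scoring rule are
# made up for illustration, not taken from any real RLHF implementation.

# Stage 1: human feedback. Labellers compared pairs of outputs; here the
# (fabricated) labels say longer answers were preferred.
comparisons = [("hi", "a longer answer", 1),      # 1 = second output preferred
               ("a longer answer", "short", 0)]   # 0 = first output preferred

def reward_model(text):
    # Stand-in for a learned reward model: in reality a trained network that
    # generalises the human preferences. Rewarding length is consistent with
    # the comparisons above.
    return len(text)

# Stage 2: the "policy" chooses among candidate responses and is nudged
# toward whichever ones the learned reward model scores highest.
candidates = ["hi", "short", "a longer answer"]
weights = {c: 1.0 for c in candidates}

for _ in range(200):
    choice = random.choices(candidates, weights=[weights[c] for c in candidates])[0]
    reward = reward_model(choice)        # reward comes from the reward model,
    weights[choice] += 0.01 * reward     # not directly from a human

print(max(weights, key=weights.get))     # the policy drifts toward "a longer answer"
```

The point of the sketch is that after training, the human is out of the loop: the optimisation pressure is entirely toward whatever the learned reward model scores highly.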

Hi Edward, I gather that you personally care about censorship, and outside the field of advanced AI that seems like a valid opinion. You are right that humans keep each other aligned by mass consensus. As you read more about AI, you will see that this technique no longer works for AI. Humans and AI are different.

AI alignment is a strongly supported position in this community, and it is also supported by many people outside this community as well. The link below is to an open letter in which a range of noteworthy people discuss the dangers of AI and how alignment may help. I recommend you give it a read. Pause Giant AI Experiments: An Open Letter - Future of Life Institute

AI risk is an emotionally challenging topic, but I believe you can find a way to understand it better.
