Lightwave2

Comments

How Many LHC Failures Is Too Many?

I bet the terrorists would target the LHC itself, so after the attack there would be nothing left to turn on.

A Prodigy of Refutation

"Surely no supermind would be stupid enough to turn the galaxy into paperclips; surely, being so intelligent, it will also know what's right far better than a human being could."

Sounds like Bill Hibbard, doesn't it?

The Truly Iterated Prisoner's Dilemma

There's a dilemma or a paradox here only if both agents are perfectly rational intelligences. In the case of humans vs aliens, the logical choice would be "cooperate on the first round, and on each succeeding round do whatever the opponent did last time" (i.e. tit-for-tat). The risk of losing the first round (1 million people lost) is worth taking because of the extra 98-99 million people you can potentially save if the other side also cooperates.
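
A quick way to see the arithmetic is to simulate the two cases. This is only a minimal sketch: the payoff numbers below are hypothetical, chosen to match the rough figures in the comment (about 1 million extra lives lost in the worst case, roughly 100 million saved if both sides keep cooperating), not the post's actual scenario values.

```python
# Hypothetical losses, in millions of lives per round, per side:
#   both cooperate -> 1 each; both defect -> 2 each;
#   cooperate against a defector -> 3 (exploited) vs 0 (exploiter).
LOSSES = {
    ("C", "C"): (1, 1),
    ("C", "D"): (3, 0),
    ("D", "C"): (0, 3),
    ("D", "D"): (2, 2),
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []
    loss_a = loss_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        la, lb = LOSSES[(move_a, move_b)]
        loss_a += la
        loss_b += lb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return loss_a, loss_b

# Worst case: the other side always defects -- tit-for-tat loses only about
# 1 million more than mutual defection would (201 vs 200 over 100 rounds).
# Best case: both sides cooperate throughout -- 100 million lost instead of 200.
print(play(tit_for_tat, always_defect))  # (201, 198)
print(play(tit_for_tat, tit_for_tat))    # (100, 100)
```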

Rationality Quotes 13

The soldier protects your right to do any of those actions, and since there are always people who want to take those rights away from you, it is the soldier who stops them from doing so.

Qualitative Strategies of Friendliness

Just like you wouldn't want an AI to optimize for only some of the humans, you wouldn't want an AI to optimize for only some of the values. And, as I keep emphasizing for exactly this reason, we've got a lot of values.

What if the AI emulated some/many/all human brains in order to get a complete list of our values? It could then design its own value system better than any human could.

Magical Categories

I wonder if you'd consider a superintelligent human to have the same flaws as a superintelligent AI (and to be likely to eventually destroy the world). What about a group of superintelligent humans (assuming they have to cooperate in order to act)?