Comments

This doesn't feel like it's really engaging with the content of the post at all. I don't mention "legitimacy" 10 times for nothing.

It was meant as an April Fool's in the same way the Death with Dignity post was an April Fool's.

trying to make it look like belief in witchcraft is very similar to belief in viruses.

I feel like you're missing the point. Of course the germ theory of disease is superior to 'witchcraft.' But in the average person's use of the term 'virus,' the understanding of what is actually going on is almost as shallow as 'witchcraft.' The word 'virus' does point towards a much deeper and more important scientific understanding, but in its everyday use it serves the same role as 'witchcraft.'

The point of the quote is that sometimes, when you want to get a message across (like 'boil the water before drinking it'), it's easier to put yourself into the other person's ontology and deliver the message in terms they would understand, rather than trying to explain all of science.

I didn't mean to make scenario 1 sound bad. I'm only trying to put my finger on a crux. My impression is that most prosaic alignment work has scenario 2 in mind, even though MIRI/Bostrom/LW seem to believe that scenario 1 is actually what we should be aiming towards. Do prosaic alignment people think that work on human 'control' now will lead to scenario 1 in the long run, or do they just reject scenario 1?

I'm just confused about what "optimized for leaving humans in control" could even mean. If a Superintelligence is so much more intelligent than humans that it could find a way, without explicit coercion, for humans to ask it to tile the universe with paperclips, then "control" seems like a meaningless concept. You would have to force the Superintelligence to treat the human skull, or whatever other boundary of human decision-making, as some kind of inviolable and uninfluenceable black box.

I'm a little worried about what might happen if different parts of the community end up with very different timelines, and thus very divergent opinions on what to do.

It might be useful to come up with some form of community governance mechanism, or heuristics, to decide when it becomes justified to take actions that might be seen as alarmist by people with longer timelines. On the one hand, we want to avoid stuff like the unilateralist's curse; on the other, we can't wait for absolutely everyone to agree before raising the alarm.

For China, the Taliban, and the DPRK, I think Fukuyama would probably argue that they don't necessarily disprove his thesis; it's just that they are taking much longer to liberalize than he would have anticipated in the 90s (he also never said that any of this was inevitable).

For Mormons in Utah, I don't think they really pose a challenge, since they seem to quite happily exist within the framework of a capitalist liberal democracy.

Technology, and AGI in particular, is indeed the most credible challenge, and it may force us to reconsider some high-stakes first-principles questions about how power, the economy, society... are organized. Providing some historical context for how we arrived at the answers we now take for granted was one of the main motivations for this post.

Instrumentally, an invisible alpha provides a check on the power of the actual alpha. A king a few centuries ago may have had absolute power, but he still couldn't simply act against what people understood to be the will of the invisible alpha (God).
