Lone Pine

Why can't I eat what my grandparents ate 100 years ago? According to SMTM, that would have been three large, hearty meals full of bread, potatoes, meat, and dairy, with too much sugar and not enough vegetables. If I ate like that, not only would I get obese and diabetic, but I would also get severely sick from the gluten. Perhaps the gluten issues are specific to me, but I see a lot of people having food problems that go beyond the metabolic.

If your efforts improve the situation by one nanodoom, you've saved eight of the people alive today, in expectation.
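Back-of-the-envelope, assuming roughly 8 billion people alive today and taking "doom" to mean everyone dies: one nanodoom is a $10^{-9}$ reduction in the probability of doom, so

$$ 8 \times 10^{9}\ \text{people} \times 10^{-9} = 8\ \text{expected lives saved.} $$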

I think the technical answer comes down to the Church-Turing thesis and the computability of the physical universe (if physics is computable, then in principle a computer can do anything a human brain does), but obviously that's not a great answer for the compsci-degree-less among us.

On the topic of decision theories, is there a decision theory that is "least weird" from a "normal human" perspective? Most people don't factor alternate universes and people who don't actually exist into their everyday decision-making process, and it seems reasonable that there should be a decision theory that resembles human reasoning in that way.
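To make "resembles everyday human reasoning" concrete, here is a minimal sketch (in Python, with hypothetical actions, probabilities, and utilities of my own invention) of plain expected-utility maximization over ordinary outcomes, with no counterfactual copies or alternate universes in sight:

```python
# Minimal sketch: "everyday" expected-utility decision making.
# Each action maps to a list of (probability, utility) outcome pairs.
# No logical counterparts, no alternate universes -- just weigh the
# possible outcomes of each available action and pick the best one.

actions = {
    "take umbrella":  [(0.3, 5), (0.7, -1)],   # dry if it rains, mild hassle otherwise
    "leave umbrella": [(0.3, -10), (0.7, 0)],  # soaked if it rains, fine otherwise
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> take umbrella (EU 0.8 vs -3.0)
```

Something like this naive procedure is arguably the "least weird" baseline; the exotic decision theories only diverge from it in unusual situations like Newcomb's problem.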

If the AI is a commercial service like Google Search or Wikipedia, so embedded into society that we have come to depend on it, or if the AI is seen as a national security priority, do you really think we will turn it off?

Presumably an AI this advanced would have the ability to eliminate all superficial forms of suffering, such as hunger and disease. So how would we suffer? If the AI cannot fix higher-order suffering such as ennui or existential dread, that is not an alignment problem.

The horror story people are worried about is "we suffer a lot, but the AI doesn't care or actively makes it worse, and it doesn't allow you to escape through death."

Great post!

“I don’t want to talk about (blah) aspect of how I think future AGI will be built, because all my opinions are either wrong or infohazards—the latter because (if correct) they might substantially speed the arrival of AGI, which gives us less time for safety / alignment research.”

It seems to me that infohazards are the unstated controversy behind this post. The researchers you are debating don't believe in infohazards, or more precisely, they believe that framing problems as infohazards makes progress impossible, since you can't solve an engineering problem if you aren't allowed to talk about it freely.

Presumably in the endgame there will be no infohazards, since all the important dangerous secrets will already be widely known, or it will be too late to keep secrets anyway. I think most researchers would prefer to work in an environment where they don't have to deal with censorship. Therefore, if we can work as if it were the endgame already, we might make more progress. That is the impetus behind getting to the endgame.

What do you mean by an oracular AI? Do you mean an oracle AI?
