In fact, corporations are quite well aligned with you. Not only are they run by humans, who are at least roughly aligned with humanity by default, but we also have legal institutions and social norms that help keep the wheels on the tracks. The profit motive itself is a powerful alignment tool: it's hard to make a profit off of humanity if they are all dead. But who aren't corporations aligned with? Humans without money or legal protections, for one (though we don't need to veer off into an economic or political discussion). But also plants, insects, and most animals. Wild animal populations have declined by some 60% over the past ~50 years alone, largely as a result of human activity. So I think you've made a bit of a category error here: in the scenario where a superintelligence emerges, we are not a customer, we are wildlife.

Yes, there are definitely scenarios where human existence benefits an AI. But how many of those ensure our wellbeing? There are surely many more scenarios where it simply doesn't care about us enough to actively preserve us. Insects are generally quite self-sustaining and good for data too, but boy do they get in the way when we want to build our cities or plant our crops.

I actually think this example shows a clear potential failure point of an Oracle AI. Though it is constrained, in this example, to only answer yes/no questions, a user can easily circumvent that restriction by how they format their questions.

Suppose a bad actor asks the Oracle AI the following: “I want a program to help me take over the world. Is the first bit 1?” They can then ask for the next bit, and the next, until the entire program is written out. Obviously this is contrived, but I think it shows that the apparent constraints of an Oracle add no real safety benefit, and we are quickly thrown back onto the usual alignment concerns.
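To make that concrete, here is a minimal sketch of the bit-extraction loop, assuming the Oracle is reachable through some yes/no query function. The `ask_oracle` callable, the question phrasing, and the mock oracle are all hypothetical stand-ins for illustration, not any particular Oracle design from the post.

```python
# Minimal sketch: reassembling an arbitrary payload from one-bit Oracle answers.
# `ask_oracle` is a hypothetical stand-in for whatever yes/no interface exists.

from typing import Callable


def extract_payload(ask_oracle: Callable[[str], bool],
                    request: str, num_bytes: int) -> bytes:
    """Rebuild a byte string by asking one yes/no question per bit."""
    out = bytearray()
    for i in range(num_bytes):
        byte = 0
        for j in range(8):
            bit_index = i * 8 + j
            # Each question leaks exactly one bit of the desired program.
            answer = ask_oracle(
                f"{request} Is bit {bit_index} of that program equal to 1?"
            )
            byte = (byte << 1) | (1 if answer else 0)
        out.append(byte)
    return bytes(out)


if __name__ == "__main__":
    # Mock oracle that "knows" a fixed 8-byte payload, for demonstration only.
    secret = b"payload!"

    def mock_oracle(question: str) -> bool:
        idx = int(question.rsplit("bit ", 1)[1].split(" ")[0])
        return bool((secret[idx // 8] >> (7 - idx % 8)) & 1)

    assert extract_payload(mock_oracle, "I want a program.", len(secret)) == secret
```

The point is just that any channel yielding one attacker-chosen bit per query is already a general-purpose output channel; restricting the answer format does not restrict what can ultimately be extracted.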