Crossposted at the Intelligent Agents Forum.
Quick, is there anything wrong with a ten-minute, pleasant, low-intensity conversation with someone we happen to disagree with?
Our moral intuitions say no, as do our legal system and most philosophical or political ideals since the Enlightenment.
Quick, is there anything wrong with brainwashing people into perfectly obedient sheep, willing and eager to take any orders and betray all their previous ideals?
There’s a bit more disagreement there, but brainwashing is generally seen as a bad thing.
But what happens when the low-intensity conversation and the brainwashing are the same thing? At the moment, no human can overwhelm most other humans in the course of a ten-minute talk and rewrite their goals into anything else. But an AI may well be capable of doing so: people have certainly fallen in love within less than ten minutes, and we don’t know how “hard” this is to pull off, in some absolute sense.
This is a warning that relying on revealed and stated preferences or meta-preferences won’t be enough. Our revealed and (most) stated preferences are that the ten-minute conversation is probably OK. But disentangling how much of that “OK” relies on our understanding of the consequences will be a challenge.