Not all human opinions can be created by persuasion alone, because opinions have to start somewhere. People can and do think for themselves, and that is what creates opinions in the first place. They might then persuade others to adopt those opinions, but persuasion is clearly not the sole source, and even then persuasion isn't a one-way process where you hit the "persuade" button and the other person is switched. Your argument seems to be that any human can be persuaded of any opinion at any time, and I just can't buy that. Humans are malleable, and we've made a huge number of mistakes in the past, but I don't see us as so bad that anyone's mind can be changed to anything regardless of the merit behind it. This entire site is built around getting people not to be arbitrarily malleable and to require rationality in making decisions: that there are objective conclusions and we should strive for them. Is this site and community a failure, then? Is everyone subject to mere persuasion in spite of rationality, unable to think for themselves?
Regarding actions that cause outrage: I never said you were constrained by the outrage of others. I said an AI that maximizes human well-being is not going to take actions that cause extreme outrage.
All human opinions cannot be created by persuasion alone because opinions have to start somewhere. People can and do think for themselves and that's what creates opinions.
This is completely wrong. Again, you give "persuasion" a very narrow scope.
A baby is born without language, and certainly without many opinions. It can be shaped by its environment ("persuasion") into almost anything. Certainly, very few of the extremely diverse cultures and sub-cultures known from history have had any trouble raising their kids to behave like the adults around them.
I put "trivial" in quotes because some exceptionally large technical achievements would obviously still need to occur to get here. But suppose we had an AI with a utilitarian utility function of maximizing subjective human well-being (meaning well-being is not something as simple as the physical sensation of "pleasure" and depends on the mental facts of each person). Let us also assume the AI can model this "well" (say, at least as well as the best of us can deduce another person's values concerning their well-being). Finally, assume the AI does not possess the ability to manually rewire the human brain to change what a human values; in other words, its ability to manipulate another person's values is limited to what we as humans are capable of today. Given all this, is there any concern we should have about building this AI? Would it succeed in being a friendly AI?
One argument I can imagine for why this fails to be a friendly AI is that the AI would wire people up to virtual reality machines. However, I don't think that works very well, because a person (Cypher from The Matrix excepted) wouldn't appreciate being wired into a virtual reality machine and having their autonomy forcibly removed. This means the action does not succeed in maximizing their well-being.
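The thought experiment can be made concrete with a toy model. Everything here is my own illustrative assumption (the names, the value weights, the features of a "world state"), not a real proposal: well-being is each person's own valuation of the world, values are fixed because the AI cannot rewire them, and wireheading loses precisely because the person weighs autonomy heavily.

```python
# Toy sketch of the hypothetical AI above. All names and numbers are my
# own illustrative assumptions, not a real proposal.
from dataclasses import dataclass

@dataclass
class Person:
    # The AI cannot rewire these values (by assumption), so they are fixed.
    values: dict

    def well_being(self, world: dict) -> float:
        # Subjective well-being: how well a world state satisfies this
        # person's own values, including how much they value autonomy.
        return sum(weight * world.get(feature, 0.0)
                   for feature, weight in self.values.items())

def choose_action(actions: dict, people: list) -> str:
    # The AI picks the action whose resulting world maximizes total
    # subjective well-being, evaluated through each person's own values.
    return max(actions, key=lambda name: sum(p.well_being(actions[name])
                                             for p in people))

# A person who, like most people, values autonomy heavily.
alice = Person(values={"pleasure": 1.0, "autonomy": 10.0})

actions = {
    # Wireheading maxes out pleasure but destroys autonomy...
    "wirehead": {"pleasure": 10.0, "autonomy": 0.0},
    # ...so a less intrusive action wins under Alice's own values.
    "leave_alone": {"pleasure": 2.0, "autonomy": 1.0},
}

print(choose_action(actions, [alice]))  # leave_alone
```

The point of the sketch is that whether wireheading gets chosen depends entirely on whether the person's own values (as the AI models them) include autonomy; the objection in the paragraph above is exactly that for most people they do.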
But I am curious to hear what arguments exist for why such an AI might still fail as a friendly AI.