If wireheading were a serious policy proposal being actively pursued with non-negligible chances of success, I would be shooting to kill wireheaders, not arguing with them.
I am arguing precisely because Jeff and other people musing about wireheading are not actual criminals—but might inspire a future criminal AI if their argument is accepted.
Arguing about a thought experiment means taking it seriously, which I do. And if the conclusion is criminal, that is an important point that needs to be stated. When George Bernard Shaw calmly claims the political necessity of the large-scale extermination of people unfit for his socialist paradise, and of doing it scientifically, he is not being a criminal—but it is extremely relevant to note that his ideas, if implemented, would be criminal, and that accepting them as true might indeed inspire criminals to act, and inspire good people to let criminals act.
If I am not to take wireheading seriously, there is nothing to argue. Just a good laugh to have.
And I am not angry at all about wireheading. But apparently, the first post of this series did make a lot of commenters angry, and they took it personally.
The STV supposes that pleasantness is valuable independently of the agent's embedding in reality, and is thus a Pixie Dust Theory of Happiness, which I indeed argue against in my essay (see the section "A Pixie Dust Theory of Happiness").
While the examples and repetition in the paragraph cited are meant to elicit a strong emotion, the underlying point holds: if you're trying to find the most intensely happy moment to reproduce, a violent joyful emotion from an insane criminal mastermind is more likely to be it than a peaceful zen moment from a mellow sage. The extreme negative cost to the victims, however great, is under this hypothesis counted only once; it is thus dwarfed by the infinitely replicated benefit to the criminal.
Emotions are a guide. You ought to feel them, and if they're wrong, you ought to explain them away, not ignore them. But, especially in an already long essay, it's easier and more convincing to show than to explain. If mass murder in the name of wireheading feels deeply wrong, that's a strong argument that it indeed is. Maybe I should update the essay to add this very explanation right afterwards.
Admittedly, my essay may not be optimized for the LessWrong audience, but these are my first couple of essays, optimized for my preexisting audience. I wanted to share it here because of the topic, which is extremely relevant to LessWrong.
Finally, I'll reply to meta with meta: if you are "totally out of [your] depth... and am not interested in learning it", that's perfectly fine, but then you should disqualify yourself from having an opinion on an "objective utility function of the universe" that you start by claiming you believe in, when you later admit that understanding one issue depends on understanding the other. Or maybe you somehow have an independent proof, using a completely different line of argument, that makes you confident enough not to look at my argument—in which case you should express more sympathy towards those who'd dismiss Jeff's argument as insane without examining it in detail.
After my massive negative score from the post above was reduced by time, I could eventually post the sequel on this site: https://www.lesswrong.com/posts/w4MenDETroAm3f9Wj/a-refutation-of-global-happiness-maximization
You don't get it. Murder is NOT an abstract variable in the previous comment. It's a constant.
No, no, no. The point is: for any fixed set of questions, higher IQ will be positively correlated with believing in better answers. Yet people with higher IQ will develop beliefs about new, bigger and grander questions; and all in all, on their biggest and grandest questions, they fail just as much as lower-IQ people on theirs. Just with more impact. Including more criminal impact when these theories, as they are wont to do, imply the shepherding (and often barbecuing) of the mass of their intellectual inferiors.
Once again, "ideology" is but an insult for theories you don't like. All in all, your post is but gloating at being more subtle than other people. Talk about an "analytical" state of mind.
But granted - you ARE more subtle than most. And yet, you still maintain blissful ignorance of some basic laws of human action.
PS: the last paragraph of your previous comment suggests that if you're into computer science, you might be interested in Gerald J. Sussman's talk about "degeneracy".
Even in engineering and business schools, socialism is stronger than it ought to be and plays a strong role in censorship, "affirmative" action, the selection of who's allowed to rise, etc. But it has less impact there, because (1) confrontation with reality and reason weakens it, (2) engineering is about control over nature, not over men, so politics isn't directly relevant, and (3) power-mongers want to maximize their impact as such, and therefore flock to other schools.
I assume no such causation. I do assume a correlation, which is brought about by evolution: cooperation beats conflict.
I don't understand your "simpler rejection" as stated.