Do you think the sensation of pleasure in and of itself has moral value?
No.
Have you written more about your position elsewhere?
I have, but not in a convenient form. I'll just paste some stuff into this comment box:
Hitherto I had been some kind of utilitarian: The purest essence of wrongness is causing suffering to a sentient being, and the amount of wrongness increases with the amount of suffering. Something similar is true concerning virtue and happiness, though I realized even then that one has to be very careful in how 'happiness' is formulated. After all, we don't want to end up concluding that synthesizing Huxley's drug "soma" is humanity's highest ethical goal. If pressed to refine my concept of happiness, I had two avenues open:
(i) Try to prise apart "animal happiness" - a meaningless and capricious flood of neurochemicals - from a higher "rational happiness" which can only be derived from recognition of truth or beauty.
(ii) Retreat to the view that "in any case, morality is just a bunch of intuitions that helped our ancestors to survive. There's no reason to assume that our moral intuitions are a 'window' onto any larger and more fundamental domain of moral truth."
(Actually, I still regard a weaker version of (ii) as the 'ultimate truth of the matter': On the one hand, it's not hard to believe that in any community of competing intelligent agents, more similar to each other than different, who have evolved by natural selection, moral precepts such as 'the golden rule' are almost guaranteed to arise. On the other, it remains the case that the spectrum of 'ethical dilemmas' that could reasonably arise in our evolutionary history is narrow, and it is easy for ethicists to devise strange situations that escape its confines. I see no reason at all to expect that the principles by which we evaluate the morality of real-world decisions can be refined and systematised to give verdicts on all possible decisions.)
[i.e. "don't take the following too seriously."]
I believe moral value is inherent in those systems and entities that we describe as 'fascinating', 'richly structured' and 'beautiful'. A snappy way of characterising this view is "value-as-profundity". On the other hand, I regard pain and pleasure as having no value at all in themselves.
In the context of interpersonal affairs, then, to do good is ultimately to make the people around you more profound, more interesting, more beautiful - their happiness is irrelevant. To do evil, on the other hand, is to damage and degrade something, shutting down its higher features and closing off its possibilities. Note that feelings of joy usually accompany activities I classify as 'good' (e.g. learning, teaching, creating things, improving fitness), and conversely, pain and suffering tend to accompany damage and degradation. However, in those situations where value-as-profundity diverges from utilitarian value, our moral intuitions tend to favour the former. For instance:
Drug abuse: Taking drugs such as heroin produces feelings of euphoria but only at the cost of degrading and constraining our future behaviour, and damaging our bodies. It is the erosion of profundity that makes heroin abuse wrong, not the withdrawal symptoms, or the fact that the addict's behaviour tends to make others in his community less happy. The latter are both incidental - we can hypothetically imagine that the withdrawal symptoms do not exist and that the addict is all alone in a post-apocalyptic world, and we are still dismayed by the degradation of behaviour that drug addiction produces (just as we would be dismayed by a giraffe with brain damage, irrespective of whether the giraffe felt happy).
The truth hurts: We accept that there are situations where the best way to help someone is to criticise them in a way that we know they will find upsetting. We do this because we want our friend to grow into a better (more profound) version of herself, which cannot happen until she sees her flaws as flaws rather than lovable idiosyncrasies. On the utilitarian view, the rightness of this harsh criticism can only be accounted for by its remote consequences - the greater happiness of our improved friend and of those with whom she interacts - yet there is no necessary reason why the end result of successful self-improvement must be increased happiness, and if it is not, the initial upset forces us to say that our actions were immoral. Surely it is preferable for our ethical theory to place value in the improvements themselves rather than in their contingent psychological effects.
Nature red in tooth and claw (see Q6): Consider the long and eventful story of life on earth. Consider that before the arrival of humankind, almost all animals spent almost all of their lives perched on the edge, struggling against starvation, predators and disease. In a state of nature, suffering is far more prevalent than happiness. Yet suppose we were given a planet like the young earth, and that we knew life could evolve there with a degree of richness comparable to our own, but that the probability of technological, language-using creatures like us evolving was very remote. Sadly, this planet lies in a solar system on a collision course with a black hole, and may be swallowed up before life even appears. Suppose it is within our power to 'deflect' the solar system away from the black hole - should we do so? On the utilitarian view, to save the planet would be to bring a vast amount of unnecessary suffering into being, and (almost certainly) a relatively tiny quantity of joy. However, saving the planet increases the profundity and beauty of the universe, and this is clearly in line with our ethical intuitions.
...
Is it a standard one that I can look up?
I'm not sure, but it's vaguely Nietzschean. For instance, here's a quote from Thus Spoke Zarathustra:
Man is a rope, fastened between animal and Superman - a rope over an abyss. A dangerous going-across, a dangerous wayfaring, a dangerous looking-back, a dangerous shuddering and staying-still. What is great in man is that he is a bridge and not a goal; what can be loved in man is that he is a going-across and a down-going.
Actually, that one quote doesn't really suffice, but if you're interested, please read sections 4 and 5 of "Zarathustra's Prologue".
If pressed to refine my concept of happiness, I had two avenues open
What about Eliezer's position, which you don't seem to address, that happiness is just one value among many? Why jump to the (again, highly counterintuitive) conclusion that happiness is not a value at all?
Related To: Eliezer's Zombies Sequence, Alicorn's Pain
Today you volunteered for what was billed as an experiment in moral psychology. You enter a small room with a video monitor, a red light, and a button. Before you entered, you were told that you would be paid $100 for participating in the experiment, but that $10 would be deducted each time you hit the button. On the monitor, you see a person sitting in another room, and you appear to have a two-way audio connection with him. That person is tied down to his chair, with what appear to be electrical leads attached to him. He now explains to you that your red light will soon turn on, which means he will be feeling excruciating pain. But if you press the button in front of you, his pain will stop for a minute, after which the red light will turn on again. The experiment will end in ten minutes.
You're not sure whether to believe him, but pretty soon the red light does turn on, and the person on the monitor cries out in pain and starts struggling against his restraints. You hesitate for a second, but it looks and sounds very convincing, so you quickly hit the button. The person on the monitor breathes a big sigh of relief and thanks you profusely. You make some small talk with him, and soon the red light turns on again. You repeat this ten times and are then released from the room. As you're about to leave, the experimenter tells you that there was no actual person behind the video monitor. Instead, the audio/video stream you experienced was generated by one of the following ECPs (exotic computational processes).
Then she asks, would you like to repeat this experiment for another chance at earning $100?
Presumably, you answer "yes", because you think that despite appearances, none of these ECPs actually feels pain when the red light turns on. (To some of these ECPs, your button presses would constitute positive reinforcement or lack of negative reinforcement, but mere negative reinforcement, when it happens to others, doesn't seem to be a strong moral disvalue.) Intuitively this seems to be the obviously correct answer, but how would we describe the difference between actual pain and the appearance of pain, or mere negative reinforcement, at the level of bits or atoms, if we were specifying the utility function of a potentially super-intelligent AI? (If we cannot even clearly define what seems to be one of the simplest values, then the approach of trying to manually specify such a utility function would appear completely hopeless.)
One idea to try to understand the nature of pain is to sample the space of possible minds, look for those that seem to be feeling pain, and check if the underlying computations have anything in common. But as in the above thought experiment, there are minds that can convincingly simulate the appearance of pain without really feeling it.
Another idea is that perhaps what is bad about pain is that it is a strong negative reinforcement as experienced by a conscious mind. This would be compatible with the thought experiment above, since (intuitively) ECPs 1, 2, and 4 are not conscious, and 3 does not experience strong negative reinforcements. Unfortunately it also implies that fully defining pain as a moral disvalue is at least as hard as the problem of consciousness, so this line of investigation seems to be at an immediate impasse, at least for the moment. (But does anyone see an argument that this is clearly not the right approach?)
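(For concreteness, here is a minimal toy sketch - my own illustration, with every environment detail and parameter made up - of what "negative reinforcement" looks like as a purely computational process: a tabular Q-learning agent is given a negative reward for one action in a tiny two-state world and duly steers its behaviour away from that action. Nothing in the update rule implies that anything is felt.)

```python
# Toy illustration only: negative reinforcement as a mechanical update rule.
# Hypothetical two-state, two-action environment; all numbers are arbitrary.
import random

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

# Q-value table, initialised to zero
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy dynamics: action 1 in state 0 is 'punished' with a negative reward."""
    reward = -1.0 if (state == 0 and action == 1) else 0.1
    next_state = (state + action) % N_STATES
    return reward, next_state

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

state = 0
for _ in range(1000):
    action = choose_action(state)
    reward, next_state = step(state, action)
    # Standard Q-learning update; a negative reward lowers Q[state][action],
    # making the punished action less likely to be chosen in future.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # Q[0][1] ends up well below Q[0][0]: behaviour is steered away from the punished action
```

The point of the sketch is only that "experiences strong negative reinforcement" can be cashed out at the level of bits, whereas "feels pain" apparently cannot be, without something like the consciousness condition above.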
What other approaches might work, hopefully without running into one or more problems already known to be hard?