I don't like the expression "carve reality at the joints"; I think it's very vague, and it's hard to verify whether a concept carves it there or not. The best way I can picture this is that you have lots of events or 'things' in some description space, you notice some clusterings, and you pick out those clusters as concepts. But a lot depends on which subspace you choose and on what scale you're working at... 'Good' may form a cluster or it may not; I just don't even know how you could give evidence either way. It's unclear how you could formalize this in practice.
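To make the subspace/scale point concrete, here is a minimal sketch (hypothetical data and feature names; assumes numpy and scikit-learn are available) showing that whether the same 'things' fall into neat clusters depends entirely on which features you cluster on:

```python
# Minimal sketch: the same objects cluster differently depending on which
# subspace of the description space you look at. Data is made up.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 200 'things' described by three hypothetical features.
size  = np.concatenate([rng.normal(1.0, 0.1, 100), rng.normal(5.0, 0.5, 100)])   # two tight clumps
hue   = np.concatenate([rng.normal(0.2, 0.05, 50), rng.normal(0.8, 0.05, 150)])  # two clumps, different split
depth = rng.uniform(0, 10, 200)                                                  # no structure at all

things = np.column_stack([size, hue, depth])

# Cluster on the full space and on two different one-feature subspaces.
for name, cols in [("all features", [0, 1, 2]), ("size only", [0]), ("hue only", [1])]:
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(things[:, cols])
    print(name, np.bincount(labels))
```

Which joints the clustering 'finds' (a 100/100 split, a 50/150 split, or an arbitrary split of structureless data) is purely a function of the chosen subspace, which is exactly the worry about whether 'good' carves anything.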

My take on pleasure and the concept of good is that your problem is that you're trying to discover the sharp edges of these categories, whereas concepts don't work like that. Take a look at this LW post and this one from Slate Star Codex. From the second one: the concept of a behemah/dag exists because fishing and hunting exist.

Try to make it clearer what you're trying to ask. "What is pleasure really?" is a useless question. You may ask "what is going on in my body when I feel pleasure?" or "how could I induce that state again?"

You seem to be looking for some mathematical description of the pattern of pleasure that would unify pleasure in humans and in aliens with totally unknown properties (aliens that may be based on fundamentally different chemistry, or whose processes may run on the strong nuclear force instead of electromagnetism-based chemistry, or whatever). What do you really have in mind here? A formula, like one part of space giving off pulses at rate X and another part of space 1 cm away pulsating at rate Y?

You may just as well ask how we would detect alien life at all. And then I'd say "life" is a human concept, not a divine platonic object out there that you can go to and see what it really is. We even have edge cases here on Earth, like viruses or prions. But the importance of these sorts of questions disappears if you think about what you'd do with the answer. If it's "I just want to know how it really is, I can't imagine doing anything practical with the answer" then it's too vague to be answered.

> Try to make it clearer what you're trying to ask. "What is pleasure really?" is a useless question.

Asking "how do qualia systematically relate to physics" is not a useless question, since answering it would make physicalism knowledge with no element of commitment.

johnsonmx: I think we're still not seeing eye-to-eye on the possibility that valence, i.e., whatever pattern within conscious systems innately feels good, can be described crisply. If it's clear a priori that it can't, then yes, this whole question is necessarily confused. But I see no argument to that effect, just an assertion.

From your perspective, my question takes the form "what's the thing that all dogs have in common?", and you're trying to tell me it's misguided to look for some platonic 'essence of dogness', because concepts don't work like that. I do get that, and I agree that most concepts are like that. But from my perspective, your assertion sounds like, "all concepts pertaining to this topic are necessarily vague, so it's no use trying to even hypothesize that a crisp mathematical relationship could exist." I.e., you're assuming your conclusion.

Now, we can point to other contexts where rather crisp mathematical models do exist: electromagnetism, for instance. How do you know the concept of valence is more like 'dogness' than like electromagnetism?

Ultimately, the details, or mathematics, behind any 'universal' or 'rigorous' theory of valence would depend on having a well-supported, formal theory of consciousness to start from. It's no use talking about patterns within conscious systems when we don't have a clear idea of what constitutes a conscious system. A quantitative approach to valence needs a clear ontology, which we don't have yet (Tononi's IIT is a good start, but hardly a final answer).

But let's not mistake the difficulty of answering these questions for them being inherently unanswerable. We can imagine someone making similar critiques a few centuries ago regarding whether electromagnetism was a sharply-defined concept, or whether understanding it mattered. It turned out electromagnetism was a relatively sharply-defined concept: there was something to get, and getting it did matter. I suspect a similar relationship holds with valence in conscious systems.
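For what it's worth, the kind of crispness being gestured at with the electromagnetism example is that the whole phenomenon eventually compressed into a short, exact set of equations. Purely as an illustration of what a 'crisp mathematical relationship' looks like (and implying nothing about whether valence admits one), Maxwell's equations in SI form:

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$

The open question is whether valence is the kind of thing that could ever compress this well.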

The mystery of pain and pleasure

by johnsonmx · 1 min read · 1st Mar 2015 · 43 comments



 

Some arrangements of particles feel better than others. Why?

We have no general theories, only descriptive observations within the context of the vertebrate brain, about what produces pain and pleasure. It seems like there's a mystery here, a general principle to uncover.

Let's try to chart the mystery. I think we should, in theory, be able to answer the following questions:


(1) What are the necessary and sufficient properties for a thought to be pleasurable?

(2) What are the characteristic mathematics of a painful thought?

(3) If we wanted to create an artificial neural network-based mind (i.e., using neurons, but not slavishly patterned after a mammalian brain) that could experience bliss, what would the important design parameters be?

(4) If we wanted to create an AGI whose nominal reward signal coincided with visceral happiness -- how would we do that?

(5) If we wanted to ensure an uploaded mind could feel visceral pleasure of the same kind a non-uploaded mind can, how could we check that? 

(6) If we wanted to fill the universe with computronium and maximize hedons, what algorithm would we run on it?

(7) If we met an alien life-form, how could we tell if it was suffering?


It seems to me these are all empirical questions that should have empirical answers. But we don't seem to have many handholds to give us a starting point.

Where would *you* start on answering these questions? Which ones are good questions, and which aren't? And if you think certain questions aren't good, could you offer some you think are?

 

As suggested by shminux, here's some research I believe is indicative of the state of the literature (though this falls quite short of a full literature review):

Tononi's IIT seems relevant, though it only addresses consciousness and explicitly avoids valence. Max Tegmark has a formal generalization of IIT which he claims should apply to non-neural substrates. And although Tegmark doesn't address valence either, he posted a recent paper on arxiv noting that there *is* a mystery here, and that it seems topical for FAI research.
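For readers who haven't run into IIT: very loosely, it scores how much a system's parts constrain each other beyond what they would do independently. The snippet below is a deliberately crude stand-in for that idea (plain mutual information between two parts, nothing like Tononi's actual Φ, which is defined over cause-effect repertoires and partition searches), included only to show what 'an information-theoretic quantity computed on a physical system' can look like:

```python
# Crude stand-in for the IIT intuition (NOT Tononi's phi): how far is the
# joint distribution of two binary parts from the product of its marginals?
# Independent parts score 0; tightly coupled parts score more.
import numpy as np

def toy_integration(joint):
    """joint: 2x2 array of probabilities P(a, b) over two binary parts."""
    joint = np.asarray(joint, dtype=float)
    pa = joint.sum(axis=1)  # marginal distribution of part A
    pb = joint.sum(axis=0)  # marginal distribution of part B
    bits = 0.0
    for a in range(2):
        for b in range(2):
            if joint[a, b] > 0:
                bits += joint[a, b] * np.log2(joint[a, b] / (pa[a] * pb[b]))
    return bits

independent = [[0.25, 0.25], [0.25, 0.25]]  # parts say nothing about each other -> 0.0 bits
coupled     = [[0.50, 0.00], [0.00, 0.50]]  # parts perfectly correlated        -> 1.0 bit

print(toy_integration(independent), toy_integration(coupled))
```

Even granting something along these lines, note that it is a consciousness-flavored quantity, not a valence-flavored one, which is exactly the gap the post is pointing at.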

Current models of emotion based on brain architecture and neurochemicals (e.g., EMOCON) are somewhat relevant, though ultimately correlative or merely descriptive, and seem to have little universalization potential.

There's also a great deal of quality literature about specific correlates of pain and happiness, e.g., Building a neuroscience of pleasure and well-being and An fMRI-Based Neurologic Signature of Physical Pain. Luke covers Berridge's research in his post, The Neuroscience of Pleasure. Short version: 'liking', 'wanting', and 'learning' are all handled by different systems in the brain. Opioids within very small regions of the brain seem to induce the 'liking' response; elsewhere in the brain, opioids only produce 'wanting'. We don't know how or why yet. This sort of research constrains a general principle, but doesn't really hint toward one.

 

In short, there's plenty of research around the topic, but it's focused exclusively on humans/mammals/vertebrates: our evolved adaptations, our emotional systems, and our architectural quirks. Nothing on general or universal principles that would address any of (1)-(7). There is interesting information-theoretic / patternist work being done, but it's highly concentrated around consciousness research.

 

---

 

Bottom line: there seems to be a critically important general principle as to what makes certain arrangements of particles innately preferable to others, and we don't know what it is. Exciting!
