Brian_Tomasik

Comments

How is reinforcement learning possible in non-sentient agents?

Thanks. :) What do you mean by "unconscious biases"? Do you mean unconscious RL, like how the muscles in our legs might learn to walk without us being aware of the feedback they're getting? (Note: I'm not an expert on how our leg muscles actually learn to walk, but maybe it's RL of some sort.) I would agree that simple RL agents are more similar to that. I think these systems can still be considered marginally conscious to themselves, even if the parts of us that talk have no introspective access to them, but they're much less morally significant than the parts of us that can talk.

Perhaps pain and pleasure are what we feel when getting punishment and reward signals that are particularly important for our high-level brains to pay attention to.

Quick general thoughts on suffering and consciousness

Me: 'Conscious' is incredibly complicated and weird. We have no idea how to build it. It seems like a huge mechanism hooked up to tons of things in human brains. Simpler versions of it might have a totally different function, be missing big parts, and work completely differently.

What's the reason for assuming that? Is it based on a general feeling that value is complex, and you don't want to generalize much beyond the prototype cases? That would be similar to someone who really cares about piston steam engines but doesn't care much about other types of steam engines, much less other types of engines or mechanical systems.

I would tend to think that a prototypical case of a human noticing his own qualia involves some kind of higher-order reflection that yields the quasi-perceptual illusions that illusionism talks about, with reference to some mental state being reflected upon (such as redness, painfulness, feeling at peace, etc.). The specific ways that humans do this reflection and report on it are complex, but it's plausible that other animals might do simpler forms of such things in their own ways, and I would tend to think that those simpler forms might still count for something (in a similar way as other types of engines may still be somewhat interesting to a piston-steam-engine aficionado). Also, I think some states in which we don't actively notice our qualia probably also matter morally, such as when we're in flow states totally absorbed in some task.

Here's an analogy for my point about consciousness. Humans have very complex ways of communicating with each other (verbally and nonverbally), while non-human animals have a more limited set of ways of expressing themselves, but they still do so to greater or lesser degrees. The particular algorithms that humans use to communicate may be very complex and weird, but why focus so heavily on those particular algorithms rather than the more general phenomenon of animal communication?

Anyway, I agree that there can be some cases where humans have a trait to such a greater degree than non-human animals that it's fair to call the non-human versions of it negligible, such as if the trait in question is playing chess, calculating digits of pi, or writing poetry. I do maintain some probability (maybe like 25%) that the kinds of things in human brains that I would care most about in terms of consciousness are almost entirely absent in chicken brains.

Quick general thoughts on suffering and consciousness

I've had a few dreams in which someone shot me with a gun, and it physically hurt about as much as a moderate stubbed toe or something (though the pain was in my abdomen where I got shot, not my toe). But yeah, pain in dreams seems pretty rare for me unless it corresponds to something that's true in real life, as you mention, like being cold, having an upset stomach, or needing to urinate.

Googling {pain in dreams}, I see a bunch of discussion of this topic. One paper says:

Although some theorists have suggested that pain sensations cannot be part of the dreaming world, research has shown that pain sensations occur in about 1% of the dreams in healthy persons and in about 30% of patients with acute, severe pain.

Quick general thoughts on suffering and consciousness

[suffering's] dependence on higher cognition suggests that it is much more complex and conditional than it might appear on initial introspection, which on its own reduces the probability of its showing up elsewhere

Suffering is surely influenced by things like mental narratives, but that doesn't mean it requires mental narratives to exist at all. I would think that the narratives exert some influence over the amount of suffering. For example, if (to vastly oversimplify) suffering were represented by some number in the brain, and if by default that number were -10, then maybe the right narrative could add +7 so that it became just -3.

Top-down processing by the brain is a very general thing, not just for suffering. But the fact that a brain process is influenced by top-down processing doesn't mean the process can't exist without it. (OTOH, depending on how broadly we define top-down processing, maybe it's also somewhat ubiquitous in brains. The overall output of a neural network will often be influenced by multiple inputs, some from the senses and some from "higher" brain regions.)

Quick general thoughts on suffering and consciousness

Thanks for this discussion. :)

I think consciousness will end up looking something like 'piston steam engine', if we'd evolved to have a lot of terminal values related to the state of piston-steam-engine-ish things.

I think that's kind of the key question. Is what I care about as precise as "piston steam engine" or is it more like "mechanical devices in general, with a huge increase in caring as the thing becomes more and more like a piston steam engine"? This relates to the passage of mine that Matthew quoted above. If we say we care about (or that consciousness is) this thing going on in our heads, are we pointing at a very specific machine, or are we pointing at machines in general with a focus on the ones that are more similar to the exact one in our heads? In the extreme, a person who says "I care about what's in my head" is an egoist who doesn't care about other humans. Perhaps he would even be a short-term egoist who doesn't care about his long-term future (since his brain will be more different by then). That's one stance that some people take. But most of us try to generalize what we care about beyond our immediate selves. And then the question is how much to generalize.

It's analogous to someone saying they love "that thing" and pointing at a piston steam engine. How much generality should we apply when saying what they value? Is it that particular piston steam engine? Piston steam engines in general? Engines in general? Mechanical devices in general with a focus on ones most like the particular piston steam engine being pointed to? It's not clear, and people take widely divergent views here.

I think a similar fuzziness will apply when trying to decide for which entities "there's something it's like" to be those entities. There's a wide range in possible views on how narrowly or broadly to interpret "something it's like".

yet I'm confident we shouldn't expect to find that rocks are a little bit repressing their emotions, or that cucumbers are kind of directing their attention at something, or that the sky's relationship to the ground is an example of New Relationship Energy.

I think those statements can apply to vanishing degrees. It's usually not helpful to talk that way in ordinary life, but if we're trying to have a full theory of repressing one's emotions in general, I expect that one could describe some strained (or poetic, as you said) ways in which rocks are doing that. (Simple example: the chemical bonds in rocks are holding their atoms together, and without that the atoms of the rocks would move around more freely the way the atoms of a liquid or gas do.) IMO, the degree of applicability of the concept seems very low but not zero. This very low applicability is probably only going to matter in extreme situations, like if there are astronomical numbers of rocks compared with human-like minds.

Rob B's Shortform Feed

Thanks for sharing. :) Yeah, it seems like most people have in mind type-F monism when they refer to panpsychism, since that's the kind of panpsychism that's growing in popularity in philosophy in recent years. I agree with Rob's reasons for rejecting that view.

How is reinforcement learning possible in non-sentient agents?

An oversimplified picture of a reinforcement-learning agent (in particular, roughly a Q-learning agent with a single state) could be as follows. A program has two numerical variables: go_left and go_right. The agent chooses to go left or right based on which of these variables is larger. Suppose that go_left is 3 and go_right is 1. The agent goes left. The environment delivers a "reward" of -4. Now go_left gets updated to 3 - 4 = -1 (which is not quite the right math for Q-learning, but ok). So now go_right > go_left, and the agent goes right.
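
To make that concrete, here's a minimal Python sketch of the toy agent just described (my own rendering, not code from the original discussion). It deliberately uses the same simplified update as in the text -- folding the raw reward into the chosen action's value -- rather than the proper Q-learning update.

```python
# Minimal sketch of the single-state agent described above (illustrative only).
action_values = {"go_left": 3.0, "go_right": 1.0}

def choose_action():
    # Pick whichever action currently has the larger value.
    return max(action_values, key=action_values.get)

def update(action, reward):
    # Simplified update from the text: fold the reward into the action's value.
    # (Real Q-learning would instead move the value partway toward the reward.)
    action_values[action] += reward

action = choose_action()      # "go_left", since 3 > 1
update(action, reward=-4.0)   # go_left becomes 3 - 4 = -1
print(choose_action())        # now "go_right", since 1 > -1
```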

So what you said is exactly correct: "It is just physics. What we call 'reward' and 'punishment' are just elements of a program forcing an agent to do something". And I think our animal brains do the same thing: they receive rewards that update our inclinations to take various actions. However, animal brains have lots of additional machinery that simple RL agents lack. The actions we take are influenced by a number of cognitive processes, not just the basic RL machinery. For example, if we were just following RL mechanically, we might keep eating candy for a long time without stopping, but our brains are also capable of influencing our behavior via intellectual considerations like "Too much candy is bad for my health". It's possible these intellectual thoughts lead to their own "rewards" and "punishments" that get applied to our decisions, but at least it's clear that animal brains make choices in very complicated ways compared with barebones RL programs.

You wrote: "Sentient beings do because they feel pain and pleasure. They have no choice but to care about punishment and reward." The way I imagine it (which could be wrong) is that animals are built with RL machinery (along with many other cognitive mechanisms) and are mechanically driven to care about their rewards in a similar way as a computer program does. They also have cognitive processes for interpreting what's happening to them, and this interpretive machinery labels some incoming sensations as "good" and some as "bad". If we ask ourselves why we care about not staying outside in freezing temperatures without a coat, we say "I care because being cold feels bad". That's a folk-psychology way to say "My RL machinery cares because being outside in the cold sends rewards of -5 at each time step, and taking the action of going inside changes the rewards to +1. And I have other cognitive machinery that can interpret these -5 and +1 signals as pain and pleasure and understand that they drive my behavior."
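
Continuing the toy sketch from earlier (again my own illustration, with made-up numbers matching the ones above), the interpretive step could be pictured as a separate function bolted onto the same reward-driven loop:

```python
# Illustrative only: the same simplified reward-driven choice as before, plus a
# separate interpretive layer that labels the raw reward signals.
action_values = {"stay_outside": 0.0, "go_inside": 0.0}

def reward_for(action):
    # Hypothetical environment: freezing outside (-5 per step), warm inside (+1).
    return -5.0 if action == "stay_outside" else 1.0

def interpret(reward):
    # Separate machinery that reports on the signal driving behavior.
    return "feels bad" if reward < 0 else "feels good"

for step in range(3):
    action = max(action_values, key=action_values.get)
    reward = reward_for(action)
    action_values[action] += reward   # same simplified update as before
    print(step, action, interpret(reward))
# Output: on step 0 the agent stays outside ("feels bad"); on later steps it
# goes inside ("feels good"), because go_inside now has the larger value.
```

Of course, this is just the bare-bones RL part; the point above is that animal brains layer lots of additional machinery on top of anything like it.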

Assuming this account is correct, the main distinction between simple programs and ourselves is one of complexity -- how much additional cognitive machinery there is to influence decisions and interpret what's going on. That's the reason I argue that simple RL agents have a tiny bit of moral weight. The difference between them and us is one of degree.

"The Conspiracy against the Human Race," by Thomas Ligotti

Great post. :)

Tomasik might contest Ligotti's position

I haven't read Ligotti, but based on what you say, I would disagree with his view. This section discusses a similar idea to the one you mention about why animals might even suffer more than humans in some cases.

In fairness to the view that suffering requires some degree of reflection, I would say that I think consciousness itself is plausibly some kind of self-reflective process in which a brain combines information about sense inputs with other concepts like "this is bad", "this is happening to me right now", etc. But I don't think those need to be verbal, explicit thoughts. My guess is that those kinds of mental operations are happening at a non-explicit lower level, and our verbal minds report the combination of those lower-level operations as being raw conscious suffering.

In other words, my best guess would be:

raw suffering = low-level mental reflection on a bad situation

reflected suffering = high-level mental reflection on low-level mental reflection on a bad situation

That said, one could dispute the usefulness of the word "reflection" here. Maybe it could equally well be called "processing".

Solipsism is Underrated

My comment about Occam's razor was in reply to "the idea that all rational agents should be able to converge on objective truth." I was pointing out that even if you agree on the data, you still may not agree on the conclusions if you have different priors. But yes, you're right that you may not agree on how to characterize the data either.

Solipsism is Underrated

I have "faith" in things like Occam's razor and hope they help us get toward objective truth, but there's no way to know for sure. Without constraints on the prior, we can't say much of anything beyond the data we have.

https://en.wikipedia.org/wiki/No_free_lunch_theorem#Implications_for_computing_and_for_the_scientific_method

choosing an appropriate algorithm requires making assumptions about the kinds of target functions the algorithm is being used for. With no assumptions, no "meta-algorithm", such as the scientific method, performs better than random choice.

For example, without an assumption that nature is regular, a million observations of the sun having risen on past days would tell us nothing about whether it will rise again tomorrow.
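
As a hedged illustration of that prior-dependence (my own example, not from the original comment): consider two hypotheses that both fit a million observed sunrises perfectly, "the sun always rises" and "the sun rises for exactly the first 1,000,000 days and then stops". Since both assign the observed data probability 1, the posterior ratio equals the prior ratio, so the prediction for tomorrow depends entirely on the prior:

```python
# Illustrative sketch: same data, different priors, opposite conclusions.
# H1: "the sun rises every day"
# H2: "the sun rises for the first 1,000,000 days, then never again"
# Both hypotheses predict the observed million sunrises with probability 1.

def prob_sun_rises_tomorrow(prior_h1, prior_h2):
    likelihood_h1 = 1.0                      # H1 fits the data perfectly
    likelihood_h2 = 1.0                      # so does H2
    unnorm_h1 = prior_h1 * likelihood_h1
    unnorm_h2 = prior_h2 * likelihood_h2
    post_h1 = unnorm_h1 / (unnorm_h1 + unnorm_h2)
    post_h2 = unnorm_h2 / (unnorm_h1 + unnorm_h2)
    # H1 says tomorrow's sunrise is certain; H2 says it's impossible.
    return post_h1 * 1.0 + post_h2 * 0.0

print(prob_sun_rises_tomorrow(prior_h1=0.99, prior_h2=0.01))  # 0.99
print(prob_sun_rises_tomorrow(prior_h1=0.01, prior_h2=0.99))  # 0.01
```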
