Brian_Tomasik

Comments

How is reinforcement learning possible in non-sentient agents?

An oversimplified picture of a reinforcement-learning agent (in particular, roughly a Q-learning agent with a single state) could be as follows. A program has two numerical variables: go_left and go_right. The agent chooses to go left or right based on which of these variables is larger. Suppose that go_left is 3 and go_right is 1. The agent goes left. The environment delivers a "reward" of -4. Now go_left gets updated to 3 - 4 = -1 (which is not quite the right math for Q-learning, but ok). So now go_right > go_left, and the agent goes right.
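In case it helps to see the toy example as code, here's a minimal Python sketch of the agent described above (the function names and the learning_rate parameter are just illustrative additions; as noted, a real Q-learning update would also use a discounted estimate of future value):

```python
# Toy single-state "reinforcement learner" with two action values.
go_left = 3.0
go_right = 1.0

def choose_action():
    # Pick whichever action value is currently larger.
    return "left" if go_left >= go_right else "right"

def update(action, reward, learning_rate=1.0):
    # Simplistic update: add the reward to the chosen action's value.
    # (With learning_rate < 1 and a term for estimated future value,
    # this would look more like a real Q-learning update.)
    global go_left, go_right
    if action == "left":
        go_left += learning_rate * reward
    else:
        go_right += learning_rate * reward

action = choose_action()   # "left", since 3 > 1
update(action, reward=-4)  # go_left becomes 3 - 4 = -1
print(choose_action())     # now "right", since -1 < 1
```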

So what you said is exactly correct: "It is just physics. What we call 'reward' and 'punishment' are just elements of a program forcing an agent to do something". And I think our animal brains do the same thing: they receive rewards that update our inclinations to take various actions. However, animal brains have lots of additional machinery that simple RL agents lack. The actions we take are influenced by a number of cognitive processes, not just the basic RL machinery. For example, if we were just following RL mechanically, we might keep eating candy for a long time without stopping, but our brains are also capable of influencing our behavior via intellectual considerations like "Too much candy is bad for my health". It's possible these intellectual thoughts lead to their own "rewards" and "punishments" that get applied to our decisions, but at least it's clear that animal brains make choices in very complicated ways compared with barebones RL programs.

You wrote: "Sentient beings do because they feel pain and pleasure. They have no choice but to care about punishment and reward." The way I imagine it (which could be wrong) is that animals are built with RL machinery (along with many other cognitive mechanisms) and are mechanically driven to care about their rewards in a similar way as a computer program does. They also have cognitive processes for interpreting what's happening to them, and this interpretive machinery labels some incoming sensations as "good" and some as "bad". If we ask ourselves why we care about not staying outside in freezing temperatures without a coat, we say "I care because being cold feels bad". That's a folk-psychology way to say "My RL machinery cares because being outside in the cold sends rewards of -5 at each time step, and taking the action of going inside changes the rewards to +1. And I have other cognitive machinery that can interpret these -5 and +1 signals as pain and pleasure and understand that they drive my behavior."

Assuming this account is correct, the main distinction between simple programs and ourselves is one of complexity -- how much additional cognitive machinery there is to influence decisions and interpret what's going on. That's the reason I argue that simple RL agents have a tiny bit of moral weight. The difference between them and us is one of degree.

"The Conspiracy against the Human Race," by Thomas Ligotti

Great post. :)

Tomasik might contest Ligotti's position

I haven't read Ligotti, but based on what you say, I would disagree with his view. This section discusses an idea similar to the one you mention about why animals might even suffer more than humans in some cases.

In fairness to the view that suffering requires some degree of reflection, I would say that consciousness itself is plausibly some kind of self-reflective process in which a brain combines information about sense inputs with other concepts like "this is bad", "this is happening to me right now", etc. But I don't think those need to be verbal, explicit thoughts. My guess is that those kinds of mental operations are happening at a non-explicit lower level, and our verbal minds report the combination of those lower-level operations as being raw conscious suffering.

In other words, my best guess would be:

raw suffering = low-level mental reflection on a bad situation

reflected suffering = high-level mental reflection on low-level mental reflection on a bad situation

That said, one could dispute the usefulness of the word "reflection" here. Maybe it could equally well be called "processing".

Solipsism is Underrated

My comment about Occam's razor was in reply to "the idea that all rational agents should be able to converge on objective truth." I was pointing out that even if you agree on the data, you still may not agree on the conclusions if you have different priors. But yes, you're right that you may not agree on how to characterize the data either.

Solipsism is Underrated

I have "faith" in things like Occam's razor and hope it helps get toward objective truth, but there's no way to know for sure. Without constraints on the prior, we can't say much of anything beyond the data we have.

https://en.wikipedia.org/wiki/No_free_lunch_theorem#Implications_for_computing_and_for_the_scientific_method

choosing an appropriate algorithm requires making assumptions about the kinds of target functions the algorithm is being used for. With no assumptions, no "meta-algorithm", such as the scientific method, performs better than random choice.

For example, without an assumption that nature is regular, a million observations of the sun having risen on past days would tell us nothing about whether it will rise again tomorrow.
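As a toy illustration (my own, not from the linked article) of how the conclusion depends on the prior, consider two priors that fit a million observed sunrises equally well but disagree completely about tomorrow:

```python
# Two priors, same data: a million consecutive sunrises, zero failures.
n = 1_000_000

# Prior 1: "nature is regular" -- a uniform prior over an unknown, fixed
# probability p that the sun rises on any given day. Laplace's rule of
# succession gives the posterior predictive probability of another sunrise:
p_rise_regular = (n + 1) / (n + 2)   # ~0.999999

# Prior 2: "anti-inductive" -- all prior mass on the hypothesis "the sun
# rises exactly 1,000,000 times and then never again". This hypothesis is
# perfectly consistent with the data but predicts no sunrise tomorrow:
p_rise_anti = 0.0

print(p_rise_regular, p_rise_anti)
```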

Solipsism is Underrated

I wouldn't support a "don't dismiss evidence as delusory" rule. Indeed, there are some obvious delusions in the world, as well as optical illusions and such. I think the reason to have more credence in materialism than theist creationism is the relative prior probabilities of the two hypotheses: materialism is a lot simpler and seems less ad hoc. (That said, materialism can organically suggest some creationism-like scenarios, such as the simulation hypothesis.)

Ultimately the choice of what hypothesis seems simpler and less ad hoc is up to an individual to decide, as a "matter of faith". There's no getting around the need to start with bedrock assumptions.

Solipsism is Underrated

I think it's all evidence, and the delusion is part of the materialist explanation of that evidence. Analogously, part of the atheist hypothesis has to be an explanation of why so many cultures developed religions.

That said, as we discussed, there's debate over what the nature of the evidence is and whether delusions in the materialist brains of us zombies can adequately explain it.

Solipsism is Underrated

Makes sense. :) To me it seems relatively plausible that the intuition of spookiness regarding materialist consciousness is just a cognitive mistake, similar to Capgras syndrome. I'm more inclined to believe this than to adopt weirder-seeming ontologies.

Solipsism is Underrated

Nice post. I tend to think that solipsism of the sort you describe (a form of "subjective idealism") ends up looking almost like regular materialism, just phrased in a different ontology. That's because you still have to predict all the things you observe, and in theory, you'd presumably converge on "physical laws" similar to a materialist's to describe how the things you observe change. For example, you'll still have your own idealist form of quantum mechanics to explain the observations you make as a quantum physicist (if you are a quantum physicist). In practice you don't have the computing power to figure all these things out by yourself based just on your own observations, but presumably an AIXI version of you would be able to deduce the full laws of physics from just these solipsist observations.

So if the laws of physics are the same, the only difference seems to be that in the case of idealism, we call the ontological primitive "mental", and we say that external phenomena don't actually exist but instead we just model them as if they existed to predict experiences. I suppose this is a consistent view and isn't that different in complexity from regular materialism. I just don't see much motivation for it. It seems slightly more elegant to just assume that all the stuff we're modeling as if it existed actually does exist (whatever that means).

And I'm not sure how much difference it makes to postulate that the ontological primitive is "mental" (whatever that means). Whether the ontological primitive is mental or not, there are still mechanical processes in our brains that cause us to believe we're conscious and to ask why there's a hard problem of consciousness. Maybe that already explains all the data, and there's no need for us to actually be conscious (whatever that would mean).

Anyway, I find these questions to be some of the most difficult in philosophy, because it's so hard to know what we're even talking about. We have to explain the datum that we're conscious, but what exactly does that datum look like? It seems that how we interpret the datum depends on what ontology we're already assuming. A materialist interprets the datum as saying that we physically believe that we're conscious, and materialism can explain that just fine. A non-materialist insists that there's more to the datum than that.

Electrons don’t think (or suffer)

Electrons have physical properties that vary all the time: position, velocity, distance to the nearest proton, etc. (ignoring Heisenberg uncertainty complications). But yeah, these variables rely on the electron being embedded in an environment.

Preliminary thoughts on moral weight

The naive form of the argument is the same for the classic and moral-uncertainty two-envelopes problems. But yes: while the classic version can be resolved by taking expected values of absolute rather than relative amounts, there's no similar resolution for the moral-uncertainty version, where there are no unique absolute measurements.
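For readers unfamiliar with that resolution, here's a rough Monte Carlo sketch (my own illustration, with an arbitrary made-up prior over the smaller amount) showing that when you compute expected values of the absolute amounts, keeping and switching envelopes come out the same, so the apparent 1.25x gain from switching disappears:

```python
import random

# Classic two-envelopes setup: one envelope holds x, the other 2x.
# The naive argument says "the other envelope is worth
# 0.5*(m/2) + 0.5*(2m) = 1.25*m, where m is my amount, so I should
# always switch" -- but that treats the relative amount m as fixed
# across two scenarios where it isn't. Averaging the absolute amounts
# under an explicit prior dissolves the paradox.

def sample_envelopes(base_amounts=(1, 2, 4, 8)):
    # Hypothetical prior over the smaller amount x (illustrative only).
    x = random.choice(base_amounts)
    envelopes = [x, 2 * x]
    random.shuffle(envelopes)
    return envelopes

trials = 100_000
keep_total = switch_total = 0
for _ in range(trials):
    mine, other = sample_envelopes()
    keep_total += mine
    switch_total += other

# Both averages converge to the same value: switching gains nothing.
print(keep_total / trials, switch_total / trials)
```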
