Reasons against anti-aging

As someone who is very much in favor of anti-aging, I'd answer it something like this: "I'm fine with you entertaining all these philosophical arguments, and if you like them so much you literally want to die for them, by all means. But please don't insist that I and everyone I care about, or will care about, should also die for your philosophical arguments."

What Do We Know About The Consciousness, Anyway?

we're perceiving things as "qualities", as "feels", even though all we are really perceiving is data

I consider it my success as a reductionist that this phrase genuinely does not make any sense to me.

But he says he doesn't think the word "illusion" is a helpful word for expressing this, and illusionism should have been called something else, and I think he's probably right.

Yep, couldn't agree more - basically that's why I was asking: "illusion" doesn't sound like the right concept here.


Those are all great points. Regarding your first question, no, that's not my reasoning. I think consciousness is the ability to reflect on myself firstly because it feels like the ability to reflect on myself. It's much like the reason I believe I can see: when I open my eyes I start seeing things, and when I interact with those things they really are mostly where I see them - nothing more sophisticated than that. There's a bunch of longer, more theoretical arguments I could bring for this point, but I never thought I should, because I was taking it as a given. It may well be me falling into the typical mind fallacy, if, as you say, some people report otherwise. So if you have different intuitions about consciousness, can you tell:

  1. How do you subjectively, from the first-person view, know that you are conscious?
  2. Can you genuinely imagine being conscious but not self-aware, from the first-person view?
  3. If you got to talk to and interact with an alien, or an AI of unknown power and architecture, how would you go about finding out whether they are conscious?

And because it doesn't automatically fit into "If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something" if you just replace "conscious" with "able to perceive itself".

Well, no, it doesn't fit quite so simply, but overall I think it works out. If you have an agent able to reflect on itself and model itself perceiving something, it's going to reflect on the fact that it perceives something. That is, it's going to have some mental representation both for the perception and for itself perceiving it. It will be able to reason about itself perceiving things, and if it can communicate it will probably also talk about it. Different perceptions will stand in relation to each other (e.g. the sky is not the same color as grass, and grass color is associated with summer and warmth and so on). And, perhaps most importantly, it will have models of other such agents perceiving things, and it will model, at a high level of abstraction, that they have the same perceptions in them. But it will only have access to the lower-level data for such perceptions from its own sensory inputs, not others', so it won't be able to tell for sure what it "feels like" to them, because it won't be getting their stream of low-level sensory inputs.

In short, I think - and please do correct me if you have a counterexample - that we have reasons to expect such an agent to make any claim humans make (given similar circumstances and training examples), and we can make any testable claim about such an agent that we can make about a human.


Ah, I see. My take on this question would be that we should focus on the word "you" rather than "qualia". If you have a conscious mind subjectively perceiving anything about the outside world (or its own internal workings), it has to feel like something, almost by definition. Like, if you went to get your covid shot and it hurt, you'd say "it felt like something". If, on the other hand, you somehow didn't even feel the needle piercing your skin, you'd say "I didn't feel anything". There have been experiments showing that people can react to a stimulus they are not subjectively aware of (mostly visual stimuli), but I'm pretty sure in all those cases they'd say they didn't see anything - that's basically how we know they were not subjectively aware of it. What would it even mean for a conscious mind to be aware of a stimulus without it "feeling like something"? It must have some representation in consciousness; that's basically what we mean by "being aware of X" or "consciously experiencing X".

So I'd say that given a consciousness experiencing stuff, you necessarily have conscious experiences (aka qualia) - that's basically a tautology. So the question becomes why some things have consciousness, or, to narrow it down to your question - why are (certain) recursively self-modeling systems conscious? That's roughly what I was trying to explain in part 4 of the post, and approximately the same idea, from another perspective, is covered much better in this book review and this article.

But if I tried to put it in one paragraph, I'd start with: how do I know that I'm conscious, and why do I think I know it? And the answer would be a ramble along the lines of: well, when I look into my mind I can see me, i.e. some guy who thinks and makes decisions and is aware of things, and has emotions and memories and so on and so forth. And at the same time as I see this guy, I also am this guy! I can have different thoughts whenever I choose to (to a degree), I can do different things whenever I choose to (to a still more limited degree), and at the same time I can reflect on the choice process. So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model - which is me - has trained to perceive other human minds so well that it learned to generalize to itself (see the whole post for the details). Also, Graziano, in the article and book I linked, provides a very convincing explanation of why this self-modeling would also be very helpful for general reasoning ability - something I was unsuccessfully trying to figure out in part 5.


Your "definition" (which really isn't a definition but just three examples) have almost no implications at all, that's my only issue with it.


I don't think qualia - to the degree it is a useful term at all - have much to do with the ability to feel pain, or anything else. In my understanding, all definitions of qualia assume they are a different thing from purely neurological perceptions (which is what I'd understand by "feelings") - more specifically, that perceptions can sometimes generate qualia in some creatures, but they don't automatically do so.

Otherwise you'd have to argue one of the two:

  1. Either even the most primitive animals, like worms, which you can literally simulate neuron by neuron, have qualia as long as they have some senses and neurons,
  2. ...or "feeling pain" and e.g. "feeling warmth" are somehow fundamentally different, where the first necessarily requires/produces a quale and the second may or may not produce one.

Both sound rather indefensible to me, so it follows that an animal can feel pain without experiencing a quale of it, just like a scallop can see light without experiencing a quale of it. But two caveats on this. First, I don't have a really good grasp on what a quale is, and as wikipedia attests, neither do the experts. I feel there's some core of truth that people are trying to get at with this concept (something along the lines of what you said in your first comment), but it's also very often used as a rug for people to hide their confusion under, so I'm always skeptical about using the term. Second, whether or not one should ascribe any moral worth to agents without consciousness/qualia is decidedly not part of what I'm saying here. I personally do, but as you say, it depends on one's preferences, and so it is largely orthogonal to the question of how consciousness works.


Looking at your debate both with me and with Gordon below, it seems like your side of the argument mostly consists of telling your opponent "no, you're wrong" without providing any evidence for that claim. I honestly did my best to raise the sanity waterline a little, but with no success, so I don't see much sense in continuing.


Sure, I wasn't claiming at any point to provide a precise mathematical model, let alone an implementation, if that's what you're talking about. What I was saying is that I have guesses as to what that mathematical model should be computing. In order to tell whether a person experiences a quale of X (in the sense of them perceiving the sensation), you'd want to see whether the sensory input from the eyes corresponding to the red sky is propagated all the way up to the top level of the predictive cascade - the level capable of modeling itself to a degree - and whether this top level's state is altered in a way that reflects itself observing the red sky.

And admittedly what I'm saying is super high level, but I've just finished reading a much more detailed and I think fully compatible account of this in this article that Kaj linked. In their sense, I think the answer to your question is that the qualia (perceived sensation) arises when both attention and awareness are focused on the input - see the article for specific definitions.

The situation where the input reaches the top level and affects it, but is not registered subjectively, corresponds to attention without awareness in their terms (or to the information having propagated to the top level, but the corresponding change in the top level's state not being reflected in itself). It's observed in people with blindsight, and has also been recreated experimentally.


Replacing it with another word which you then use identically isn't the same as tabooing it - that kind of defeats the purpose.

there can still be agreement that they are in some sense about sensory qualities.

There may be, but then it seems there's no agreement about what sensory qualities are.

I've said so already, haven't I? A solution to the HP would allow you to predict sensory qualities from detailed brain scans, in the way that Mary can't.

No, you have not - in fact, in all your comments you haven't mentioned "predict" or "Mary" or "brain" even once. But now we're getting somewhere! How do you tell whether a given solution can or can't predict "sensory qualities"? Or better: when you say "predict qualities from brain scans", do you mean "feel/imagine them yourself as if you'd experienced those sensory inputs firsthand", or do you mean something else?


Yeah, although it seems only in the sense where "everything [we perceive] is illusion"? Which is not functionally different from "nothing is illusion". Unless I'm missing something?
