Signer

Comments

Vim

Modal editing is a nice idea for compressing many hotkeys - it's a shame IDEs don't support defining your own modes.
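
A minimal sketch of the idea (plain Python, not tied to any real editor's API; the modes and command names are made up for illustration): with modes, the same small set of keys dispatches to different commands depending on state, which is what "compressing many hotkeys" amounts to.

```python
# Toy modal dispatcher: one keymap per mode, so a single key can carry
# a different command in each mode instead of needing a unique chord.
KEYMAPS = {
    "normal": {"d": "delete-line", "y": "copy-line", "p": "paste"},
    "window": {"d": "close-window", "y": "duplicate-window", "p": "pin-window"},
}

class ModalDispatcher:
    def __init__(self):
        self.mode = "normal"

    def switch(self, mode):
        # A user-defined mode is just another keymap to dispatch into.
        self.mode = mode

    def press(self, key):
        return KEYMAPS[self.mode].get(key, "unbound")

d = ModalDispatcher()
print(d.press("d"))      # delete-line
d.switch("window")
print(d.press("d"))      # close-window: same key, different meaning
```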

What Do We Know About The Consciousness, Anyway?
  1. To me it looks like the defining feature of the consciousness intuition is one's certainty of having it, so I define consciousness as the only thing one can be certain about, and then I know I am conscious by executing "cogito ergo sum".

  2. I can imagine disabling specific features associated with awareness, starting with memory: seeing something without remembering it feels like seeing something and then forgetting about it. Usually, when you don't remember seeing something recent, it means your perception wasn't conscious, but you have certainly forgotten some conscious moments in the past.
    Then I can imagine not having any thoughts. That is harder for long periods of time, but I can create short stretches of just seeing that, as far as I remember, are not associated with any thoughts.
    At that point it becomes harder to describe this process as self-awareness. You could argue that if there is a representation of the lower level somewhere in the higher level, then it is still modeling. But there is no more reason to consider these levels parts of the same system than to consider any sender-receiver pair a self-modeling system.

  3. I don't know. It's all ethics, so I'll probably just check for some arbitrary similarity-to-human-mind metric.

we have reasons to expect such an agent to make any claim humans make

Depending on the detailed definitions of "reflect on itself" and "model itself perceiving", I think you can make an agent that wouldn't claim to be perfectly certain of its own consciousness. For example, I don't see a reason why some simple Cartesian agent with direct read-only access to its own code would think in terms of consciousness.
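
A toy sketch of what I mean (my own construction, not anything from the post): an agent whose only self-knowledge is read-only access to its own source text, so questions about itself reduce to lookups in that text and nothing in its vocabulary ever forces a "consciousness" claim.

```python
# Hypothetical "Cartesian" agent: it inspects its own code directly instead of
# running a learned self-model, so it reports facts about the code and nothing else.
import inspect

class CartesianAgent:
    def __init__(self):
        # Direct read-only access to its own source.
        self.own_code = inspect.getsource(CartesianAgent)

    def answer(self, question):
        if question == "what are you?":
            return f"A program of {len(self.own_code)} characters."
        if question == "are you conscious?":
            # It can only report facts derivable from its code; "conscious"
            # matches nothing there, so there is nothing to be certain about.
            return "That term does not refer to anything in my code."
        return "Unknown question."

agent = CartesianAgent()
print(agent.answer("are you conscious?"))
```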

What Do We Know About The Consciousness, Anyway?

So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model—which is me—has trained to perceive other human minds so well that it learned to generalize to itself.

What's your theory for why consciousness is actually your ability to perceive yourself as a human mind? From your explanation it seems to be:

  1. You think (and say) you have consciousness.
  2. When you examine why you think it, you use your ability to perceive yourself as a human mind.
  3. Therefore consciousness is your ability to perceive yourself as a human mind.

You are basically saying that the consciousness detector in the brain is an "algorithm of awareness" detector (and the algorithm of awareness can work as an "algorithm of awareness" detector). But what are the actual reasons to believe it? Only that if it is awareness, then that explains why you can detect it? It certainly is not a perfect detector, because some people will explicitly say "no, my definition of consciousness is not about awareness". And because it doesn't automatically fit into "If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something" if you just replace "conscious" with "able to perceive itself".

What Do We Know About The Consciousness, Anyway?

Ok, by these definitions what I was saying is "why does not having the ability to do recursion stop you from having pain-qualia?". Just feeling like there is a core of truth to qualia ("conceivability" in zombie language) is enough to ask your world-model to provide a reason why not everything, including recursively self-modeling systems, feels like qualia-less feelings - why is recursive self-modeling not just another kind of reaction and perception?

What Do We Know About The Consciousness, Anyway?

I believe it depends on one's preferences. Wait, you think it doesn't? By "ability to do recursion" I meant "ability to predict its own state altered by receiving the signal", or whatever the difference at the top level is supposed to be. I assumed that in your model whoever doesn't implement it doesn't have qualia and therefore doesn't feel pain, because there is no one to feel it. And for those interested in the Hard Problem the question would be "why does this specific physical arrangement, interpreted as recursive modeling, feel so different from when the pain didn't propagate to the top level?".

Why We Launched LessWrong.SubStack

So the money play is supporting Substack in GreaterWrong and maximizing engagement metrics by unifying LessWrong's and ACX's audiences in preparation for the inevitable LessWrong ICO?

What Do We Know About The Consciousness, Anyway?

when a sensation perceived by a human (in the biological sense of perceiving) stops being a quale?

When it stops feeling like your "self-awareness" and starts feeling like "there was nobody “in there”". And then it raises questions like "why does not having the ability to do recursion stop you from feeling pain".

Could billions of spatially disconnected "Boltzmann neurons" give rise to consciousness?

No value-free arguments against it, but it probably can be argued that you can't do anything to help Boltzmann brains anyway.

Toward A Bayesian Theory Of Willpower

I don't understand what the point is of calling it "evidence" instead of "updating weights", unless the brain literally implements P(A|B) = [P(A)*P(B|A)]/P(B) for high-level concepts like “it’s important to do homework”. And even then, this story about evidence and beliefs doesn't add anything to an explanation with a specific weight-aggregation algorithm.
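
A toy illustration with made-up numbers: a Bayesian update on "it's important to do homework" is just one particular weight-aggregation rule, and a generic aggregation rule with a suitably chosen weight produces the same posterior.

```python
# Bayes' rule as written above, with P(B) expanded over A and not-A.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# A generic "nudge the weight on this concept toward the signal" rule.
def weighted_aggregation(prior, signal, weight):
    return (1 - weight) * prior + weight * signal

p = 0.3  # prior on "it's important to do homework" (arbitrary toy value)
print(bayes_update(p, likelihood_if_true=0.8, likelihood_if_false=0.2))  # ~0.63
print(weighted_aggregation(p, signal=1.0, weight=0.47))                  # ~0.63
```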

What I'd change about different philosophy fields

The LEDs are physical objects, so your list of firings could be wrong about the physical fact of actual firing if you were hallucinating when you made that list. Same with the neurons: it's either indirect knowledge about them, or no one actually knows whether some neuron is on or off.

Well, except you can say that neurons or LEDs themselves know about themselves. But first, that's just renaming "knowledge and reality" to "knowledge and direct knowledge", and second, it still leaves almost all seemings (except "the left half of a rock seeming like the left half of a rock to the left half of a rock") as uncertain - even if your sensations can be certain about themselves, you can't be certain that you are having them.

Or you could have an explicitly Cartesian model where some part of the chain "photons -> eye -> visual cortex -> neocortex -> expressed words" is arbitrarily defined as always-true knowledge. Like if the visual cortex says "there is an edge at (123, 123) of visual space", you interpret it as true or as an input. But now you have the problem of determining "true about what?". It can't be certain knowledge about the eye, because the visual cortex could be wrong about the eye, and it can't be about the visual cortex for any receiver of that knowledge, because it could be spoofed in transit. I guess implementing a Cartesian agent would be easier, or maybe some part of any reasonable agent is even required to be Cartesian, but I don't see how certainty in inputs can be justified.
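
A toy sketch of that chain (every stage here is hypothetical, just to make the point concrete): declaring one stage's report "always true" doesn't settle what it is true about, because the stage upstream can be wrong and the report can be altered before the trusted reader sees it.

```python
# Hypothetical processing chain where the neocortex treats whatever it
# receives as axiomatic input.
def eye(photons):
    return {"edge_at": (123, 123)} if photons else {}

def visual_cortex(eye_report):
    # Could misreport the eye's output (hallucination).
    return {"edge_at": eye_report.get("edge_at", (0, 0))}

def channel(cortex_report, spoof=False):
    # Transit between cortex and neocortex; may be corrupted on the way.
    return {"edge_at": (999, 999)} if spoof else cortex_report

def neocortex(received):
    # Treats the received report as certain knowledge, whatever it says.
    return f"There is an edge at {received['edge_at']}."

print(neocortex(channel(visual_cortex(eye(photons=True)))))              # trusted path
print(neocortex(channel(visual_cortex(eye(photons=True)), spoof=True)))  # same certainty, wrong content
```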
