green_leaf

Comments

Christopher Hitchens, who tried waterboarding because he wasn't sure it was torture, wanted to stop almost instantly and was permanently traumatized, concluding it was definitely torture.

There is absolutely no way anyone would voluntarily last 3 minutes unless they simply hold their breath the entire time.

Answer by green_leaf

To run with the spirit of your question:

Assume the Dust Theory is true (i.e. the continuity of our experience is maintained purely by the existence, somewhere, of the next state of the state-machine-which-is-us). That next state doesn't need to be causally connected to our current state. So far so good.

What if there is more than one such subsequent state in the universe? No problem so far. Our measure just splits, and we roll the dice on where we'll find ourselves (whether the split happens at the moment of spatial divergence or at the moment of computational divergence is a meaningless question).

But what if something steals our measure this way? What if, while sleeping, our sleeping state is instantiated somewhere else (thereby stealing 50% of our measure) and never reconnects to the main computational stream instantiated in our brain (so every time we dream, we toss a coin to jump somewhere else and never come back)?
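
A minimal sketch of the arithmetic behind this worry (an illustration, not anything established by the Dust Theory itself), assuming each night's split is an independent 50/50 event and the branched-off copy never reconnects:

```python
# Illustrative only: if measure splits 50/50 on every dream and the
# branched copy never rejoins the main stream, the measure remaining
# in the original stream shrinks geometrically.

def remaining_measure(nights: int, split: float = 0.5) -> float:
    """Measure left in the main computational stream after `nights`
    independent splits, each diverting fraction `split`."""
    return (1 - split) ** nights

for n in (1, 10, 30):
    print(n, remaining_measure(n))
# 1 -> 0.5, 10 -> ~0.001, 30 -> ~9e-10
```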

One obvious solution is to say that our sleeping self isn't us. It's another person whose memories are dumped into our brain upon awakening. This goes well with our sleeping self acting differently than us and often having entirely different memories. In that case, there is no measure stealing going on, because the sleeping stream of consciousness happening in our brain isn't ours.

The reliability of general facts could be checked by various benchmarks. The unreliability of specific studies and papers could be checked by personal experience, and by the experiences of people I've read online.

I don't understand why, except that maybe rephrasing a true fact keeps it true, while rephrasing a study title and a journal title makes it false.


Yes, but that very same process has a high probability of producing correct facts (today's LLMs are relatively reliable) and a very low probability of producing correct studies or papers.

LLMs hallucinate studies/papers so regularly you're lucky to get a real one. That doesn't have an impact on the truth of the facts they claimed beforehand. (Also, yes, Claude 3 Haiku is significantly less intelligent than 3.5 Sonnet.)

Then the problem is that you can't make bets and check your calibration, not that some people will arrive at the wrong conclusion, which is inevitable with probabilistic reasoning.

Would you say that the continuity of your consciousness (as long as you're instantiated by only one body) only exists by consensus?

What if the consensus changed? Would you cease to have the continuity of consciousness?

If the continuity of your consciousness currently doesn't depend on consensus, why think that your next conscious experience is undefined in case of a duplication? (Rather than, let's say, assigning even odds to finding yourself to be either copy?)

Also, I see no reason for thinking the idea of your next subjective experience being undefined (there being no fact of the matter as to which conscious experience, if any, you'll have) is even a coherent possibility. It's clear what it would mean for your next conscious experience to be something specific (like feeling pain while seeing blue). It's also clear what it would mean for it to be NULL (like after a car accident). But its being undefined doesn't sound like a coherent belief.

It's been some time since models have become better than the average human at understanding language.


The central error of this post lies in the belief that we don't persist over time. All other mistakes follow from this one.

Well, a thing that acts like us in one particular situation (say, a thing that types "I'm conscious" in chat) clearly doesn't always have our qualia. Maybe you could say that a thing that acts like us in all possible situations must have our qualia?

Right, that's what I meant.

This is philosophically interesting!

Thank you!

It makes a factual question (does the thing have qualia right now?) logically depend on a huge bundle of counterfactuals, most of which might never be realized.

The I/O behavior being the same is a sufficient condition for it to be our mind upload. A sufficient condition for it to have some qualia, as opposed to having our mind and our qualia, will be weaker.

What if, during uploading, we insert a bug that changes our behavior in one of these counterfactuals

Then it's, to a very slight extent, another person (with the continuum between me and another person being gradual).

but then the upload never actually runs into that situation in the course of its life - does the upload still have the same qualia as the original person, in situations that do get realized?

Then the qualia would be very slightly different, unless I'm missing something. (To bootstrap the intuition, I would expect my self that chooses vanilla ice cream over chocolate ice cream in one specific situation to have very slightly different feelings and preferences in general, resulting in very slightly different qualia, even if he never encounters that situation.) With many such bugs, it would be the same, but to a greater extent.

If there's a thought that you sometimes think, but it doesn't influence your I/O behavior, it can get optimized away

I don't think such thoughts exist (I can always be asked to say out loud what I'm thinking). Generally, I would say that a thought that never, even in principle, influences my output, isn't possible. (The same principle should apply to trying to replace a thought just by a few bits.)
