Just this guy, you know?
Have you tried this on humans? Humans with dementia? Very young humans? Humans who speak a different language? How about chimpanzees or whales? I'd love to see the truth table of different entities you've tested, how they did on the test, and how you perceive their consciousness.
I think we need to start out by biting some pretty painful bullets:
1) "consciousness" is not operationally defined, and has different meanings in different contexts. Sometimes intentionally, as a bait-and-switch to make moral or empathy arguments rather than more objective uses.
2) For any given usage of a concept of consciousness, there are going to be variations among humans. If we don't acknowledge or believe there ARE differences among humans, then it's either not a real measure or it's so coarse that a lot of non-humans already qualify.
3) If you DO come up with a test for some conception of consciousness, it won't change anyone's attitudes, because whatever they say, that's not the dimension they actually care about.
3b) Most people don't actually care about any metric or concrete demonstration of consciousness; they care about their intuition-level empathy, which is far more based on "like me in identifiable ways" than on anything that can be measured and contradicted.
I was excited, but skeptical, to hear that you could empirically test anything on this topic, and disappointed but unsurprised to find that you don't. There's nothing empirical about this: there is zero data you're collecting, measuring, or observing.
It will be an empirical test when you ACTUALLY have and use this teleporter.
Until then, you're just finding new ways to show that our intuitions are not consistent on esoteric and currently-impossible (and therefore irrelevant) topics.
Useful write-up. I think it's missing a very important point, which is that "responsibility" has multiple different uses and meanings, and this ambiguity is sometimes intentional. Most of these are somewhat correlated, but not enough to mix them up safely.
1) Legal responsibility. Who can be compelled to change, or be punished (or who deserves rewards, for positive outcomes).
2) Causal decision responsibility. Whether one made choices that resulted in some consequence.
3) Experiential responsibility. Whether one experiences the situation directly, or only indirectly.
4) Intent responsibility. Whether one believes they have significant influence over the thing.
5) Moral responsibility (a). Whether one is pressured (by self or socially) to do something in the future.
6) Moral responsibility (b). Whether one is blamed (by self or socially) for something in the past.
To me, it sounds like you did not mention the whole AI alignment question.
True. The question didn't specify anything about it, so I tried to answer based on default assumptions.
You need to be clear who is included in "us". AI is likely to be trained on human understanding of identity and death, which is very much based on generational replacement rather than continuity over centuries. Some humans wish this weren't so, and hope it won't apply to them, but there are not enough examples (none in reality, few and unrealistic ones in fiction) to train on or learn from.
It seems likely that if "happy people" ends up in the AI goalset, it'll create new ones that have higher likelihood of being happy than those in the past. Honestly, I'm going to be dead, so my preference doesn't carry much weight, but I think I prefer to imagine tiling the universe with orgasmium more than I do paperclips.
It's FAR more effort to make an existing damaged human (as all are in 2025) happy than just to make a new happy human.
I'm confused. You start with
“But you can’t have a story where everyone is happy and everything is perfect! Stories need conflict!”
And then list a bunch of conflicts. David Mamet's three rules are: Who wants what? What happens if they don't get it? Why now?
And all of your examples are fair places to answer these. I think we don't actually disagree, but I'm not sure I understand the objection you're responding to. Did someone actually say "there's no tension without dystopia"? If they only said "dystopia is the lazy person's generator of fictional tension", then I kind of agree.
The main good bit of market pricing this would miss is the demand reduction and reallocation caused by the higher prices.
True. The main thing "tax the price increase" misses is that it mutes the supply-incentive effects of the price increase. I'd need to understand the elasticities of the two (including the pre-supply incentives for some goods: a decision to store more than current demand BEFORE the emergency gets paid for DURING it) to really make a recommendation, and it would likely be specific enough to the time, place, product, and reason for the emergency that "don't get involved at a one-size-fits-all level" is the only thing I really support.
This is a novel (to me) line of thinking, and I'm happy to hear about it! I'm not sure it's feasible, as one of the things the public hates even more than price increases during a shortage is higher taxes at any time.
That said, the REVERSE of this (slightly raise taxes in normal times, and make emergencies a tax holiday) might really work. This gives producers/distributors room to raise prices WITHOUT as much impact on consumers. It gets some of the good bits of market pricing, with less of the bad bits (both limited to the magnitude of the tax change relative to the scarcity-based price change).
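To make the magnitudes concrete, here's a toy sketch in Python; the prices, tax, and tax-on-increase rate are invented for illustration (not from the thread), and it only shows how each scheme splits a scarcity-driven price rise between the supply signal and the consumer, not a real elasticity analysis.

```python
# Toy numbers, purely illustrative: a good that normally sells for 10 and
# would clear at 15 during an emergency shortage.
normal_price = 10.00
emergency_price = 15.00

# Scheme A: tax part of the emergency price increase (assumed 60% of the increase).
increase_tax_rate = 0.60
producer_a = normal_price + (emergency_price - normal_price) * (1 - increase_tax_rate)
consumer_a = emergency_price
print(f"A: consumer pays {consumer_a:.2f}, producer keeps {producer_a:.2f} "
      f"-> only {producer_a - normal_price:.2f} of the 5.00 scarcity signal reaches supply")

# Scheme B: a small tax (1.00) in normal times, waived during the emergency.
normal_tax = 1.00
consumer_normal_b = normal_price + normal_tax   # what consumers are used to paying
producer_b = emergency_price                    # full scarcity signal reaches supply
consumer_b = emergency_price                    # no tax during the emergency
print(f"B: producer keeps {producer_b:.2f}; consumer pays {consumer_b:.2f} "
      f"vs the usual {consumer_normal_b:.2f} -> the jump shrinks by the {normal_tax:.2f} tax")
```

The consumer cushion in scheme B is exactly the size of the normal-times tax, which is the "limited to the magnitude of the tax change relative to the scarcity-based price change" caveat above.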
It can? Depending on what you mean by "similar", either we can find them without this thought experiment or they don't exist and this doesn't help. Your example is absolutely not similar in the key area of individual continuity.