I've been trying to get MIRI to stop calling this blackmail (extortion for information) and start calling it extortion (since it fits the definition of extortion). Can we use this opportunity to just make the switch?
I support this, whole-heartedly :) CFAR has already created a great deal of value without focusing specifically on AI x-risk, and I think it's high time to start trading the breadth of perspective CFAR has gained from being fairly generalist for some more direct impact on saving the world.
"Brier scoring" is not a very natural scoring rule (log scoring is better; Jonah and Eliezer already covered the main reasons, and it's what I used when designing the Credence Game for similar reasons). It also sets off a negative reaction in me when I see someone naming their world-changing strate...
This is a cryonics-fails story, not a cryonics-works-and-is-bad story.
Seems not much worse than actual-death, given that in this scenario you (or the person who replaces you) could still choose to actually-die if you didn't like your post-cryonics life.
This is an example where cryonics fails, and so not the kind of example I'm looking for in this thread. Sorry if that wasn't clear from the OP! I'm leaving this comment to hopefully prevent more such examples from distracting potential posters.
Hmm, this seems like it's not a cryonics-works-for-you scenario, and I did mean to exclude this type of example, though maybe not super clearly:
OP: There's a separate question of whether the outcome is positive enough to be worth the money, which I'd rather discuss in a different thread.
(2) A rich sadist finds it somehow legally or logistically easier to lay hands on the brains/minds of cryonics patients than of living people, and runs some virtual torture scenarios on me where I'm not allowed to die for thousands of subjective years or more.