That is easily patched via full-dive VR. You can keep your biological body; no upload is necessary.
Have you read Friendship is Optimal? Is that outcome unappealing to you in a way which can't be easily patched (e.g. removing the "become a pony" requirement)? Do you think it would be unappealing to almost everyone in ways that can't be easily patched?
Spencer's post isn't loading for me, and I don't see any post about this on his Facebook feed -- is the link right?
Specifically for ME/CFS and Long Covid, I recommend s4me.info. Pretty much all of the major studies on the mind-body methods already have threads there with discussion. The tl;dr is that they are extremely low-quality studies, in ways that will not surprise anyone familiar with the replication crisis, and these techniques very likely do not work for ME/CFS or LC.
This blog is good as well: https://mecfsscience.org/
Anthropic researchers estimate that Opus 4.5 provides a 2-3x speedup to their research, if I'm reading this correctly. That seems very important, and I'm surprised I haven't seen more discussion of it.
Twitter thread: https://x.com/HjalmarWijk/status/1993752035536331113
Unrolled version (no login required): https://twitter-thread.com/t/1993752035536331113
@HjalmarWijk, Nov 26:
Anthropic says in their system card that *all* their AI R&D evals are close to saturation, and report a median self-reported uplift of 2X (mean over 3X!) for power users. They provide very little evidence ruling out imminent dramatic AI R&D acceleration.
I personally suspect that their self-report uplift numbers are inflated and that agent time horizons are still limited. But if taken at face value, then even the most aggressive scenarios (e.g. AI 2027 or https://blog.redwoodresearch.org/p/whats-up-with-anthropic-predicting) would have underestimated progress.
I didn't quote the whole thread, there's more if you follow the link.
I recommend Deep Utopia for extensive discussion of this issue, if you haven't already read it.
I agree with most of this, but I think you're typical-minding when you assume that successionists are using this to resolve their own fear or sadness surrounding AI progress. I think instead, they mostly never seriously consider the downsides because of things like the progress heuristic. They never experience the fear or sadness you refer to in the first place. For them, it is not "painful to think about" as you describe.
Here is Eliezer's post on this topic from 17 years ago for anyone interested: https://www.lesswrong.com/posts/3Jpchgy53D2gB5qdk/my-childhood-role-model
Anna Salamon's comment and Eliezer's reply to it are particularly relevant.
Why?