Friday, July 24th 2020
Writing with GPT-3
Longevity interventions when young
No Ultimate Goal and a Small Existential Crisis
Meaning is Quasi-Idempotent
An Old Way to Visualize Biases
Comment Replies for Chains, Bottlenecks and Optimization
A strong vision can cover for a lot of internal tension - the external tension between your vision and what you want can hide internal tension from not meeting all your needs. But it can't cover forever: eventually, your other needs get louder and louder until they drown out your vision, leading to a crash in productivity. It helps to know what your leading indicators of ignoring your needs are, so you can catch a crash before it happens and resolve that internal tension.

For me, it's my weight creeping up. I use food as a way to ignore negative emotions. So when I see my weight creeping up over the course of a few days, I take time out to process emotions, take care of myself, and see what needs I've been ignoring. One hour of attending to other needs can save me weeks of total burnout.
I was talking with Rupert McCallum about the simulation hypothesis yesterday. Rupert suggested that this argument is self-defeating; that is, it pulls the rug out from under its own feet. It assumes the universe has particular properties, then estimates the probability of being in a simulation from those properties; if the probability is sufficiently high, we conclude that we are in a simulation. But if we are likely to be in a simulation, then our initial assumptions about the universe are likely to be false, so we've disproved the very assumptions we relied on to obtain those probabilities.

This all seems correct to me, although I don't see it as a fatal objection. Suppose we start by assuming that the universe has particular properties AND that we are not in a simulation. We can then estimate the odds of someone with our kind of experiences being in a simulation under these assumptions. If the probability is low, our assumptions are self-consistent; but if the probability is sufficiently high, they become probabilistically self-defeating, and we would have to adopt different assumptions. Maybe the most sensible update would be to believe that we are in a simulation, or maybe it would be more sensible to conclude that we were wrong about the properties of the universe. And maybe there's still scope to argue that we should do the former.
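The consistency check described above can be sketched as a toy calculation. This is only an illustrative observer-counting model in the spirit of the standard simulation argument; the function name, the numbers, and the 0.5 threshold are my assumptions, not anything from the conversation:

```python
# Toy model: under the assumption that we are NOT simulated and that the
# universe has certain properties (here, that each real civilization runs
# some number of ancestor simulations), what fraction of observers like
# us would be simulated? All numbers are illustrative assumptions.

def p_simulated(n_real_civilizations: int, sims_per_civilization: int) -> float:
    """Fraction of observers who are simulated, given the assumed
    properties of the presumed-real universe."""
    real = n_real_civilizations
    simulated = n_real_civilizations * sims_per_civilization
    return simulated / (real + simulated)

# Start from the assumption "we are not simulated" plus universe
# properties that permit many simulations per real civilization.
p = p_simulated(n_real_civilizations=1, sims_per_civilization=1000)
print(f"P(simulated | assumptions) = {p:.4f}")

# If this probability comes out high, the assumption set that produced
# it (which included "we are not simulated") is probabilistically
# self-defeating, and we must revise either "we are not simulated" or
# the assumed properties of the universe.
if p > 0.5:
    print("Assumptions are self-undermining; revise them.")
```

The point of the sketch is only to make the self-reference explicit: the probability is computed *inside* a set of assumptions, so a high value doesn't directly prove we are simulated - it only rules out that particular combination of assumptions.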
But can't you just believe in Roko's anti-basilisk, the aligned AI that will punish you if you bring a malevolent AI into existence?