Of course, I'm not expecting you to support the idea in your answers, but simply to mention its conclusion :)


5 Answers

romeostevensit

Dec 10, 2019

150

People like to pretend they are doing fine by using a cognitive algorithm for judging that is riddled with the availability heuristic, epistemically unsound dialectics, and other biases. Almost everyone I meet is physically and emotionally unwell and shies away from thinking about it. What rare engagement does happen occurs with close intimates who are selected for having the same blind spots as they do.

It's like everyone has this massive assumption that things will turn out fine, even though the default outcome is terrible (see obesity and medicated mental health rates). Or they just have learned helplessness about learned helplessness.

Under what circumstances do you get people telling you they are fine? That doesn't happen to me very much--"I'm fine" as part of normal conversation does not literally mean that they are fine.

4 romeostevensit 4y
It's more like strong resistance to change on the theory that the current trajectory doesn't wind up as a flaming pile of wreckage.

I think you'd need to define "fine" a little better for me to understand your argument. The likely result, for each of us, is death. I feel pretty helpless about that, and it's both learned and reasoned.

In the meantime, I do some things that make it slightly more pleasant for me and others, and perhaps (very hard to measure) more likely that there will be more others in the future than there would otherwise be. But I also do things that are contrary to those long-term goals, which bring shorter-term joy or expectation of survival.

The default (and inevitable) outcome _is_ terrible. And that's fine.

1 Mati_Roy 4y
by "that's fine", you mean "I learned helplessness", right? (just checking, because I'm not sure what it means to say that something terrible is fine)
4 Dagon 4y
I think it's actual helplessness, not "learned helplessness". "learned", in this context, usually implies "incorrect". "that's fine" means I believe that a change in my actions will cause more harm than it does good to my life-satisfaction index (or whatever we're calling the artificial construct of "sum of (possibly discounted) future utility stream"). It's perfectly reasonable to say it's "terrible" compared to some non-real ideal, and "fine" compared to actual likely futures. Unless you're just saying "people are hopelessly bad at modeling the world and making decisions", in which case I agree, but the problem is WAY deeper than you imply here.
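For concreteness, one standard way to formalize that index (just an illustration; nothing here hinges on the exact form) is a discounted sum of per-period utilities,

$$V = \sum_{t=0}^{\infty} \gamma^{t} u_{t}, \qquad \gamma \in (0,1],$$

where $u_t$ is utility at time $t$ and $\gamma$ is the discount factor. "That's fine" then just says: no change in my actions that's actually available to me raises $V$.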
2 romeostevensit 4y
I did a bad job of conveying that I'm specifically trying to highlight the attentional failures involved.
1 Mati_Roy 4y
I'm not sure what causes you to like this framing and what it does to you psychologically, but personally it seems important to me to differentiate what's aligned with my preferences and what's fixable as two different concepts. I think having a single word for both "things that can be changed, but are okay as they are" and "things that can't be changed, but are not okay as they are" would render my cognition pretty confused, but maybe that's a cognitive hack to feel better or something.
3 Dagon 4y
Interesting - I do suspect there's a personality difference that makes us prefer different framings for this. For me, it would be maddening to have preferences over unreachable states.

shminux

Dec 11, 2019

60

Traversable wormholes, were they to exist for any length of time, would act as electric and gravitational Faraday cages, i.e. they would attenuate non-normal electric and gravitational fields exponentially inside their throats, with a scale parameter set by the mouth size/throat circumference. Consequently, the electric/gravitational field around them is non-conservative. This follows straightforwardly from solving the Laplace equation, but is never discussed in the literature as far as I can find.
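A minimal sketch of the attenuation claim, under the simplifying assumption that the throat can be modeled as the product geometry $\mathbb{R} \times S^2$ with sphere radius $a$ (the exact profile depends on the wormhole metric): expand a static potential in spherical harmonics and solve the Laplace equation along the throat.

$$\nabla^2\phi = \partial_z^2\phi + \frac{1}{a^2}\Delta_{S^2}\phi = 0, \qquad \phi(z,\theta,\varphi) = \sum_{\ell,m} f_{\ell m}(z)\, Y_{\ell m}(\theta,\varphi)$$

$$f_{\ell m}''(z) = \frac{\ell(\ell+1)}{a^2} f_{\ell m}(z) \;\Rightarrow\; f_{\ell m}(z) \propto e^{-\sqrt{\ell(\ell+1)}\,|z|/a} \quad (\ell \ge 1)$$

Every non-uniform ($\ell \ge 1$) multipole imposed at one mouth decays along the throat on a length of order $a$, i.e. set by the mouth size / throat circumference $2\pi a$, while the $\ell = 0$ piece passes through, which is the Faraday-cage behavior described above.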

Dagon

Dec 10, 2019

60

Not new, but possibly more important than it gets credit for. I haven't had time to figure out why it doesn't apply pretty broadly to all optimization-under-constraints problems.

https://en.wikipedia.org/wiki/Theory_of_the_second_best
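A toy numerical illustration of the theorem's core point (a hypothetical example of mine, not taken from the article): when choice variables interact, forcing one of them away from its unconstrained optimum means the remaining first-best conditions are no longer the right targets for the others.

```python
# Second-best toy example (hypothetical numbers, not from the linked article).
# Two interacting choice variables x and y; a distortion fixes x away from its
# first-best value, and the constrained optimum for y then moves away from
# *its* first-best value too.
from scipy.optimize import minimize_scalar

def welfare(x, y):
    # Concave objective with a cross term coupling the two decisions.
    return -(x - 1.0) ** 2 - (y - 1.0) ** 2 + 0.5 * x * y

# First best: both first-order conditions hold, giving x = y = 4/3.
y_fb = 4.0 / 3.0

# Second best: suppose an unremovable constraint forces x = 0.
x_fixed = 0.0
y_sb = minimize_scalar(lambda y: -welfare(x_fixed, y)).x  # ~1.0, not 4/3

print(f"naively keeping first-best y: welfare = {welfare(x_fixed, y_fb):.3f}")
print(f"re-optimized second-best y={y_sb:.2f}: welfare = {welfare(x_fixed, y_sb):.3f}")
```

If the cross term is removed, the first-best y stays optimal even under the constraint, which is the special case where the theorem has no bite; the coupling is what makes "optimize everything else as if the constraint weren't there" the wrong move.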


Mati_Roy

Dec 10, 2019

40

Updated: 2019-12-10

Two of them:

  • there are a lot of advantages to video-recording your life (I want to write much more about this, and have only taken time for a very brief overview so far: https://matiroy.com/writings/Should-I-record-my-life.html)
  • if MWI is true and today's cryonics is good enough, we can use a quantum lottery to cryopreserve literally everyone for the cost of setting up the lottery plus some overhead (probably much less than 100k USD); see the toy sketch below
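A toy sketch of the quantum-lottery idea (my own illustration, with made-up names and numbers; Python's `secrets` module stands in for the genuinely quantum randomness the scheme actually requires):

```python
# Toy sketch of a quantum lottery for cryonics (illustrative assumptions only).
# If the draw is made from quantum randomness, then under MWI every possible
# outcome is realized in some branch of the wavefunction.
import secrets  # classical stand-in for a true quantum RNG

def quantum_lottery_winner(participants):
    """Select one winner; under MWI, each participant wins in some branch."""
    return participants[secrets.randbelow(len(participants))]

participants = [f"person_{i}" for i in range(1_000_000)]
winner = quantum_lottery_winner(participants)

# In the branch we happen to observe, only `winner` is cryopreserved, paid for
# from the pooled lottery funds. But a branch exists in which each participant
# is the winner, so everyone is preserved *somewhere* in the multiverse, while
# any single branch only bears the cost of running the lottery plus one
# preservation.
print(winner)
```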

I am confused. If MWI is true, we are all already immortal, and every living mind is instantiated a very large number of times, probably literally forever (since entropy doesn't actually decrease in the full multiverse, and is just a result of statistical correlation, but if you buy the quantum immortality argument you no longer care about this).

3 Mati_Roy 4y
I disagree, but haven't had time to write why yet:)
3 Viliam 4y
If the lottery would pay for cryonics and a luxurious life afterwards, we could increase the chance of luxurious immortality. Quantum immortality only makes you immortal, but you probably also want to have a good life.
2 comments

Maybe it would be a useful norm for people to have such a list of ideas; it would allow us to move faster.

It seems that in many cases ideas don't just need arguments in their favor but also explanation/model building to be useful to others.