Comments

Seconded (after working with this concept-handle for a day). This seems to be the exact key for (dis)solving the way my brain executes self-deception (clinging, attachment, addiction, ...).

(I'm noticing that in writing this, my brain is fabricating an option that has all the self-work results I envision, without any work required)

I find that [letting go of the (im)possible worlds where I'm not trapped] helps reframe/dissolve the feeling of trappedness.

However, that kind of letting go often feels like paying a large price. E.g., in the case of sensory overload, it can feel like giving up on having any sense of control over reality/sensory input whatsoever.

Does that maybe get at what you were asking?


It all does! Again, thanks for sharing.

Exciting stuff. This feels like a big puzzle piece I'd been missing. Have you written more about this, somewhere?

~vague gesturing at things I find interesting:

- How do different people (different neurotypes? different childhoods? personality types?) differ in the realities they want to share?

- How do shared realities relate to phenomena like extraversion, charisma, and autism?

- What's the significance of creating shared realities by experiencing things together?

Also, do you use other neglected people-models that are similarly high-yield? Vague gesturing appreciated.

Problem: Abyss-staring is aversive, and for some people (much) more than for others.

In my case, awareness hasn't removed that roadblock. Psychedelics have, to some degree, but I find it hard to aim them well. MDMA, maybe?

Example: Dividing the cake according to NEEDS versus CONTRIBUTION (progressive tax, capitalism/socialism, ...)

Also this entire post by Duncan Sabien

(@ Tech Executives, Policymakers & Researchers)

Back in February 2020, the vast majority of people didn't see Covid becoming a global event, even though all the signs were there. All it would have taken was a fresh look at the evidence and some honest extrapolation.

Looking at recent AI progress, it seems very possible that we're in the "February of 2020" of AI.

(original argument by Duncan Sabien, rephrased)

