DonyChristie

Money can be thrown at my Patreon here: https://www.patreon.com/reflectivealtruist

DonyChristie's Comments

Sayan's Braindump

Can you define a post-scarcity economy in terms of what you anticipate the world to look like?

bgaesop's Shortform

I am currently very skeptical that the PNSE paper has anything of worth, given that Jeffery Martin's Finders Course is basically a scam according to this review and some others. (I don't know whether the paper is based on Finders Course participants.) It would be valuable for someone to fact-check the paper.

Invisible Choices, Made by Default

Actually making the cards is what stops me.

What's your big idea?

"Let's finish what Engelbart started"

1. Recursively decompose all the problem(s) (prioritizing the bottleneck(s)) behind AI alignment until they are simple and elementary.

2. Get massive 'training data' by solving each of those problems elsewhere, in many contexts, more than we need, until we have asymptotically reached some threshold of deep understanding of that problem. Also collect wealth from solving others' problems. Force multiplication through parallel collaboration, with less mimetic rivalry creating stagnant deadzones of energy.

3. We now have plenty of slack from which to construct Friendly AI assembly lines and allow for deviations in output along the way. No need to wring our hands with doom anymore as though we were balancing on a tightrope.

In the game Factorio, the goal is to build a rocket from many smaller inputs and escape the planet. I know someone who got up to producing 1 rocket/second. Likewise, we should aim much higher so we can meet minimal standards with monstrous reliability rather than scrambling to avoid losing.

See: Ought
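Step 1's recursive decomposition can be sketched as a toy tree-building routine. This is purely illustrative: the `is_elementary` predicate and the split-on-"and" rule are stand-ins for real problem analysis, which is exactly the hard part.

```python
from dataclasses import dataclass, field

@dataclass
class Problem:
    """A node in a problem-decomposition tree."""
    name: str
    subproblems: list["Problem"] = field(default_factory=list)

def is_elementary(problem: Problem) -> bool:
    # Stand-in predicate: treat a problem as "elementary" once its
    # description can no longer be split on " and ".
    return " and " not in problem.name

def decompose(problem: Problem) -> Problem:
    """Recursively split a problem until every leaf is elementary."""
    if not is_elementary(problem):
        for part in problem.name.split(" and "):
            problem.subproblems.append(decompose(Problem(part.strip())))
    return problem

def leaves(problem: Problem) -> list[str]:
    """Collect the elementary subproblems (the tree's leaves)."""
    if not problem.subproblems:
        return [problem.name]
    return [name for sub in problem.subproblems for name in leaves(sub)]

tree = decompose(Problem("specify values and verify behavior and scale oversight"))
print(leaves(tree))  # ['specify values', 'verify behavior', 'scale oversight']
```

The substance is of course in the real analogues of `is_elementary` and the splitting rule; the point is only that "decompose until elementary, then collect the leaves" is a well-defined recursion.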

Book summary: Unlocking the Emotional Brain

I'm so glad someone did a writeup of this! Part of me has wanted to; I think I have a draft... I remember going through severe depression over four years ago, and one of my reprieves was joyfully reading the papers written about coherence psychology. I will definitely be linking this post as a reference.

There are many times when I am talking with people and want to reference the conceptual structure of coherence psychology, but there is way too much inferential distance, especially with aspiring rationalists who are not therapy geeks, so I end up mentally flailing my arms in frustration. The theory seems like a better candidate for The One True Psychotherapy than almost any other, and it pains me to see people go about solving their problems without it in their toolkit, and not to be able to communicate this to them. E.g., it's frustrating to see people trying to correct the output of emotional schemas without accessing the generating model for disconfirmation: a person may feel uncomfortable with someone else who has low self-esteem, so they will try to correct it verbally without engaging in a process that will change the underlying 'pro-symptom position'.

There's the related problem that there are very few coherence therapists. I don't think most psychologists have heard of this and I find that confusing.

Oh, there's also the fact that I tried a coherence therapist and didn't find it that helpful the way it was done. They were fine to talk to, but in retrospect it seems like they were cargo-culting the motions of coherence therapy as outlined by Ecker et al. I haven't had other therapists, but I suspect the inefficacy is only very weak evidence against the modality relative to other modalities, and more a problem with cramming an attempt at powerful introspection into expensive 1-hour blocks. I.e., I think psychotherapeutic structure across the board is broken, and when the singularity happens it won't be a problem anymore.

My hope is that we can develop new delivery structures into which we can import psychological techniques and have them deployed at scale while being better than 1-hour weeklies, 8-hour shamanic trips, or that annoying app with the emotionally saccharine bird.

See also: The Method of Levels

Dony's Shortform Feed

How might a person develop INCREDIBLY low time preference? (That is, valuing their future selves decades to a century out nearly as much as they value their current selves.)

Who are people who have this, or have acquired this, and how did they do it?

Do these concepts make sense, or might they rest on a misunderstanding? Tabooing/decomposing them: what is happening, cognitively and experientially, when a human mind does this thing?

What would a literature review say?
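For a rough quantitative handle on "nearly as much" (illustrative arithmetic only, assuming simple exponential discounting, which is an assumption about the model rather than about actual minds): valuing a self 50 years out at 90% of the present self corresponds to an annual discount factor of roughly 0.998, i.e. a discount rate around 0.2% per year.

```python
# Assuming exponential discounting, find the annual discount factor d
# such that d ** years == target. (Illustrative numbers only, not a
# claim about how human minds actually discount.)
def annual_factor(target: float, years: float) -> float:
    return target ** (1 / years)

for years in (10, 50, 100):
    d = annual_factor(0.9, years)
    print(f"{years:>3} years: factor {d:.5f} (~{(1 - d) * 100:.2f}%/yr)")
```

A literature review would likely note that observed human discounting is closer to hyperbolic than exponential, which is part of what makes the question nontrivial.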

Dony's Shortform Feed

I'm really noticing how the best life improvements come from purchasing or building better infrastructure, rather than trying permutations of the same set of things and expecting different results. (Much of this results from having more money, granting an expanded sense of possibility to buying useful things.)

The guiding question is, "What upgrades would make my life easier?" In contrast with the question that is more typically asked: "How do I achieve this hard thing?"

It seems like part of what makes this not just immediately obvious is that I feel a sense of resistance (that I don't really identify with). Part of that is a sense of... naughtiness? Like we're supposed to signal how hardworking we are. For me this relates to this fear I have that if I get too powerful, I will break away from others (e.g. skipping restaurants for a Soylent Guzzler Helmet, metaphorically) as I re-engineer my life and thereby invite conflict. There's something like a fear that buying or engaging in nicer things would be an affront to my internalized model of my parents?

The infrastructure guideline relates closely to the observation that, to a first approximation, we are stimulus-response machines reacting to our environment, and that the best way to improve is to actually change the environment, rather than continuing to throw resources past the point of diminishing marginal returns in adapting to the current one. For the same reasons, the implications can scare me: they may imply leaving the old environment behind, and even that the larger the environmental change you make, the more variance you have for a good or bad update to your life. That would mean we should strive for large positive environmental shifts while minimizing the risk of bad ones.

(This also gives me a small update towards going to Mars being more useful for x-risk, although I may still need to propagate a larger update in the other direction, away from space marketing.)

Of course, most of one's upgrades should be tiny and within one's comfort zone. What portfolio of small vs. huge changes one should make in one's life is an open question to me, because while it makes sense to be mostly conservative in allocating one's life resources, I suspect that fear brings people to justify the static zone of safety they've created with their current structure, preventing them from seeking out better states of being that involve jettisoning sunk costs they identify with. Better coordination infrastructure could make such changes easier, if people didn't have to risk as much social conflict.

Dony's Shortform Feed

You bring to mind a visual of the Power of a Mind as this dense directed cyclic graph of beliefs where updates propagate in one fluid circuit at the speed of thought.

I wonder what formalized measures of [agency, updateability, connectedness, coherence, epistemic unity, whatever sounds related to this general idea] are put forth by different theories (schools of psychotherapy, predictive processing, Buddhism, Bayesian epistemology, sales training manuals, military strategy, machine learning, neuroscience...) related to the mind and how much consilience there is between them. Do we already know how to rigorously describe peak mental functioning?

Dony's Shortform Feed

Do humans actually need breaks from working, physiologically? How much of this is a cultural construct? And if it is, can those assumptions be changed? Could a person be trained to enjoyably have 100-hour workweeks? (Assume, if the book Deep Work is correct that you get at most 4 hours of highly productive work in a domain, that my putative powerhuman is working on 2-4 different skill domains that synergize.)

Dony's Shortform Feed

I find the question, "What would change my mind?", to be quite powerful, psychotherapeutic even. AKA "singlecruxing". It cuts right through to seeking disconfirmation of one's model, and can make the model more explicit, legible, object. It's proactively seeking out the data rather than trying to reduce the feeling of avoidant deflection associated with shielding a beloved notion from assault. Seems like it comports well with the OODA loop as well. Taken from Raemon's "Keeping Beliefs Cruxy".

I am curious how others ask this question of themselves. What follows is me practicing the question.

What would change my mind about the existence of the moon? Here are some hypotheses:

  • I would look up in the sky every few hours for several days and nights and see that it's not there.
  • I see over a dozen posts on my Facebook feed talking about how it turns out it was just a cardboard cutout and SpaceX accidentally tore a hole in it. They show convincing video of the accident and footage of people reacting, such as leaders of the world convening to discuss it.
  • Multiple friends are very concerned about my belief in this luminous, reflective rocky body. They suggest I go see a doctor or the government will throw me in the lunatics' asylum. The doctor prescribes me a pill and I no longer believe.
    • It turns out I was deluded and now I'm relieved to be sane.
    • It turns out they have brainwashed me and now I'm relieved to be sane.
  • I am hit over the head with a rock which permanently damages my ability to form lunar concepts. Or it outright kills me. I think this Goodharts (is that the closest term I'm looking for?) the question but it's interesting to know what are bad/nonepistemic/out-of-context reasons I would stop believing in a thing.

These anticipations were System 2 generated and I'm still uncertain to what extent I can imagine them actually happening and changing my mind. It's probably sane and functional that the mind doesn't just let you update on anything you imagine, though I also hear the apocryphal saying that the mind 80% believes whatever you imagine is real.
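The exercise has a standard Bayesian reading: each imagined observation carries a likelihood ratio, and "what would change my mind" asks which observations are strong enough to flip the posterior. A toy calculation, with all numbers made up for illustration:

```python
# Toy Bayesian update (all numbers made up for illustration).
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

prior = 0.9999                      # P(moon exists)
odds = prior / (1 - prior)          # 9999:1 in favor

# Suppose the cardboard-cutout scenario is a million times likelier
# in the hoax world than in the real-moon world (LR = 1e-6 in favor
# of "exists").
odds = update_odds(odds, 1e-6)
posterior = odds / (1 + odds)
print(f"posterior P(moon exists) ~ {posterior:.4f}")
```

This makes the list above a little more precise: for a belief this entrenched, only evidence with an enormous likelihood ratio should move it, while the non-epistemic routes (the rock to the head) change the belief without any update at all.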
