moridinamael's Comments

Reality-Revealing and Reality-Masking Puzzles

I'm reminded of the post Purchase Fuzzies and Utilons Separately.

The actual human motivation and decision system operates by something like "expected valence," where "valence" is determined by some complex and largely unconscious calculation. When you start asking questions about "meaning," it's very easy to decouple your felt motivations (actually experienced and internally meaningful System-1-valid expected valence) from what you think your motivations ought to be (something like "utility maximization," where "utility" is an abstracted, logical, System-2-valid rationalization). This is almost guaranteed to make you miserable, unless you're lucky enough that your System-1 valence calculation happens to match your System-2 logical deduction of the correct utilitarian course.

Possible courses of action include:

1. Brute-forcing it: just doing what System-2 calculates is correct. This will involve a lot of suffering, since your System-1 will be screaming bloody murder the whole time, and I think most people will simply fail to achieve this. They will break.

2. Retraining your System-1 to find different things intrinsically meaningful. This can also be painful because System-1 generally doesn't enjoy being trained. Doing it slowly, and leveraging your social sphere to help warp reality for you, can help.

3. Giving up, basically. Determining that you'd rather just do things that don't make you miserable, even if you're being a bad utilitarian. This will cause ongoing low-level dissonance as you're aware that System-2 has evaluated your actions as being suboptimal or even evil, but at least you can get out of bed in the morning and hold down a job.

There are probably other options. I think I basically tried option 1, collapsed into option 3, and then eventually found my people and stabilized into the slow glide of option 2.

The fact that utilitarianism is not only impossible for humans to execute, but actually a potential cause of great internal suffering even to know about, is probably not talked about enough.

ialdabaoth is banned

For the record, I view the fact that I commented in the first place, and that I now feel compelled to defend my comment, as Exhibit A of the thing that I'm whining about. We chimps feel compelled to get in on the action when the fabric of the tribe is threatened. Making the banning of a bad guy the subject of a discussion, rather than an act of unremarked moderator fiat, basically sucks everybody nearby into a vortex of social wagon-circling, signaling, and reading a bunch of links to figure out which chimps are on the good guy team and which chimps are on the bad guy team. It's a significant cognitive burden to impose on people, a bit like an @everyone in a Discord channel, in that it draws attention and energy in vastly disproportionate scope relative to the value it provides.

If we were talking about something socio-emotionally neutral like changing the color scheme or something, cool, great, ask the community. I have no opinion on the color scheme, and I'm allowed to have no opinion on the color scheme. But if you ask me what my opinion is on Prominent Community Abuser, I can't beg off. That's not an allowed social move. Better not to ask, or if you're going to ask, be aware of what you're asking.

Sure, you can pull the "but we're supposed to be Rationalists(tm)" card, as you do in your last paragraph, but the Rationalist community has pretty consistently failed to show any evidence of actually being superior, or even very good, at negotiating social blow-ups.

ialdabaoth is banned

I wasn’t really intending to criticize the status quo. Social consensus has its place. I’m not sure moderation decisions like this one require social consensus.

ialdabaoth is banned

If you're looking for feedback ...

On one level I appreciate this post as it provides delicious juicy social drama that my monkey brain craves and enjoys on a base, voyeuristic level. (I recognize this as being a moderately disgusting admission, considering the specific subject matter; but I'm also pretty confident that most people feel the same, deep down.) I also think there is a degree of value in understanding the thought processes behind community moderation, but that value is mixed.

On another level, I would rather not know about this. I am fine with Less Wrong being moderated by a shadowy cabal. If the shadowy cabal starts making terrible moderation decisions, for example banning everyone who is insufficiently ideologically pure, or just going crazy in some general way, it's not like there's anything I can do about it anyway. The good/sane/reasonable moderator subjects their decisions to scrutiny, and thus stands to be perpetually criticized. The bad/evil moderator does whatever they want, doesn't even try to open up a dialogue, and usually gets away with it.

Fundamentally, you stand to gain little and lose much by making posts like this, and now I've spent my morning indulging myself reading up on drama that has not improved my life in any way.

Mental Mountains

Maybe, but I don't think that we developed our tendency to lock in emotional beliefs as a kind of self-protective adaptation. I think that all animals with brains lock in emotional learning by default because brains lock in practically all learning by default. The weird and new thing humans do is to also learn concepts that are complex, provisional, dynamic and fast-changing. But this new capability is built on the old hardware that was intended to make sure we stayed away from scary animals.

Most things we encounter are not as ambiguous, complex and resistant to empirical falsification as the examples in the Epistemic Learned Helplessness essay. The areas where both right and wrong positions have convincing arguments usually involve distant, abstract things.

moridinamael's Shortform

I thought folks might enjoy our podcast discussion of two of Ted Chiang's stories, Story of Your Life and The Truth of Fact, the Truth of Feeling.

Myalgia of Imbalance. Physical Restrictions, Pain, Tension & Weird Sensations.

Thanks for writing this up. Do you think massage would materially help with this type of issue?

I've been able to help a few people (including myself) with chronic neck/shoulder pain by getting them to use their rhomboids rather than their trapezius to hold their shoulders back. The rhomboids have a significant mechanical advantage for that purpose. Most people can't even intentionally activate their rhomboids; they have no kinesthetic awareness of even possessing them. I wondered if you had a response to this, within the framework of the "main muscles of movement".

On Internal Family Systems and multi-agent minds: a reply to PJ Eby

My examples of subagents appearing to mysteriously answer questions were meant to suggest that there are subtle things that IFS explains/predicts which aren't automatically explained in other models. Examples of phenomena that contradict the IFS model would be even more useful, though I'm failing to think of what those would look like.

On Internal Family Systems and multi-agent minds: a reply to PJ Eby

I'm still not sure what it would mean for humans to actually have subagents, versus to just behave exactly as if they have subagents. I don't know what empirical finding would distinguish between those two theories.

There are some interesting things that crop up during IFS sessions that I think require explanation.

For example, I find it surprising that you can ask a Part a verbal question, and that Part will answer in English, and the answer it gives can often be startling, and true. The whole process feels qualitatively different from just "asking yourself" that same question. It also feels qualitatively different from constructing fictional characters and asking them questions.

I also find that taking an IFS approach, in contrast to a pure Focusing approach, results in much more dramatic and noticeable internal/emotional shifts. The IFS framework is accessing internal levers that Focusing alone isn't.

One thing I wanted to show with my toy model, but didn't really succeed in showing, was that arranging an agent architecture so that certain functions belong to the "subagents" rather than the "agent" can be more elegant, parsimonious, or strictly simpler. Philosophically, I would have preferred to write the code without using any for loops, because I'm pretty sure human brains never do anything that looks like a for loop. Rather, all of the subagents are running constantly, in parallel, and doing something more like message-passing according to their individual needs. The "agent" doesn't check each subagent, sequentially, for its state; the subagents proactively inject their states into the global workspace when a certain threshold is met. This is almost certainly how the brain works, regardless of whether you wish to use the word "subagent" or "neural submodule" or something else. In this light, at least algorithmically, the submodules do seem to qualify as agents in most senses of the word.

The first step of rationality

Unfortunately there are many prominent examples of Enlightened/Awakened/Integrated individuals who act like destructive fools and ruin their lives and reputations, often through patterns of abusive behavior. When this happens over and over, I don't think it can be written off as "oh those people weren't actually Enlightened." Rather, I think there's something in the bootstrapping dynamics of tinkering with your own psyche that predictably (sometimes) leads in this direction.

My own informed guess as to how this happens is something like this: imagine your worst impulse arising, and imagine that you've been so careful to take every part of yourself seriously that you take that impulse seriously rather than automatically swatting it away with the usual superegoic separate shard of self; imagine that your normal visceral aversion to following through on that terrible impulse is totally neutralized, toothless. Perhaps you see the impulse arise and you understand intellectually that it's Bad but somehow its Badness is no longer compelling to you. I don't know. I'm just putting together the pieces of what certain human disasters have said.

Anyway, I don't actually think you're wrong to think integration is an important goal. The problem is that integration is mostly neutral. You can integrate in directions that are holistically bad for you and those around you, maybe even worse than if you never attempted it in the first place.
