LESSWRONG

Self

Posts (sorted by new)

2 · Self's Shortform · 7mo · 33

Comments (sorted by newest)
leogao's Shortform
Self · 2mo · 10

(I at least suspect this is my comparative advantage. But I'm not good at communicating [insights], a skill that comes neither with <analytical rigor> nor with <high-res introspective access>. 

It also seems like the <after controlling for situational factors, status psychology explains more than half of the variance in human behavior> camp is essentially right, which paints most genuine discussion in a less flattering light than most people would prefer, especially those with less introspective insight.

I (somewhat predictably, given my status incentives) hold that this is an important, central problem for civilization, because mutual information is the foundation of cooperation. Put more concretely: the better we model each other, the easier it is to avoid common deception and adversity attractors.)

Expectation = intention = setpoint
Self · 2mo · 30

You [don't] have to believe!

You know how high school sports coaches like to go on about how "You have to believe you will win!"? And how the standard rationalist response is "Nonsense, of course you don't. Beliefs are supposed to track reality, not be wishful thinking. Believe what looks to be true, try your best, and find out if you win"?

The coach does have a point though, and there's a reason he's so adamant. If you expect to lose -- if you're directing attention towards the experience of your upcoming loss -- then you are intending to lose, and good luck winning if you aren't even going to try. The problem is that he's forming expectations on the level of "Will we win this game?", which, according to the data, isn't something we can control. He doesn't know what else to do, and he doesn't want to just give up, so of course he engages in motivated thinking. Fudging the data until he can expect success is the only way he can hope to succeed. It's a load-bearing delusion.

One way to do better is to deliberately trade correctness of expectation for effort, without letting the delusion spread to infect the rest of your thinking. "Yeah, I'm probably going to lose. I don't care. I intend to win anyway." Or, in other words, "Do or do not. There is no 'try'." That means setting yourself up for failure: expecting success while knowing the expectation is unlikely to be realized. It's not pleasant, and that gap between your expectations and the data coming in from reality is what suffering is. But with suffering comes hope, and sometimes the tradeoff is worthwhile.

 

This post seems highly relevant.

It describes <a solution to this dilemma> that is also <a mental mechanism humans use natively>.

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
Self · 2mo · 30

“Pretend the emotion is a person or cute animal who can talk” is a pretty great trick.

 

Huh. Tried this on my social media cravings.

Couldn't visualize them as an animal, but managed <a stream of energy between me and my laptop screen>, and managed to make the stream talk in my mind.

This behaved like a "talking lens" laid over my perception. As if the craving itself was live-reacting to objects on my screen while I clicked and scrolled.

Informative: it made the needs involved concrete.

Elite Coordination via the Consensus of Power
Self · 6mo · 20

Improved my intuitions, ty.

Propagating Facts into Aesthetics
Self · 6mo · 10

Keeps baffling me how much easier having a concept for something makes thinking about it.

Self's Shortform
Self · 6mo · 1 · -2

What about this one:

"Hivemind" is best characterized as a state of zero adversarial behavior.

Self's Shortform
Self · 6mo · 1 · -9

"Humanity becomes a hivemind" is the single least dystopic coherent image of the future.

AI as Super-Demagogue
Self · 6mo · 32

Illustrative post. The downvotes confuse me.

Self's Shortform
Self · 6mo · 21

Depression is a formidable cognitive specialization.

Transformative trustbuilding via advancements in decentralized lie detection
Self · 6mo · 10

There may have been other, unmentioned optimization targets that also require eloquence.

Predictions:

  • (75%) Groups who successfully[1] adopt trust technology will economically and politically outcompete the rest of their respective societies rather quickly (less than 10 years).
  • The efficiency gains feasibly up for grabs in the first 15 years, compared to the status quo, are over 100% (75% confidence) or over 400% (50% confidence).
  • (66%) Society-wide adoption of trustbuilding tech is a practical path / perhaps the only practical path towards sane politics in general and sane AI politics in particular.

The whole gestalt of why this is a huge affordance seems self-evident to me; it's a cognitive weakness of mine that I often don't know which parts of my thinking need more words written out to be legible.

But one intuition is: Regular "natural" human cultures are accidental products sampled from environments where deception-heavy strategies are dominant, and this imposes large deadweight costs on all pursuits of value, including economic value, happiness, friendship, and morality. Explicitly: Most of our cognition goes into deceiving others, and the density of useful acts could be multiple times higher.

  1. ^

    i.e., build mutual understanding at least to, and ideally beyond, the point of family-like intimacy / feeling the others as extensions of oneself
