
Ulisse Mini

Born too late to explore Earth; born too early to explore the galaxy; born just the right time to save humanity.

https://uli.rocks/about

Sequences

Alignment stream of thought

Posts

3 · Ulisse Mini's Shortform · 3y · 28 comments

Comments
johnswentworth's Shortform
Ulisse Mini · 12d

EDIT: It's also possible John felt fine emotionally, was fully aware of his emotional state, and was simply so good at not latching on to emotions that the state was highly nontrivial to spot, or some combination. Leaving this comment in case it's useful for others. I don't like the tone, though; I might've been very dissociated as a rationalist (and many are), but it's not obvious from this alone whether John is.

As a meditator I pay a lot of attention, in high resolution, to what emotion I'm feeling and to the causality between it and my thoughts and actions. I highly recommend this practice. What John describes in "plan predictor predicts failure" is something I notice several times a month & address. It's 101 stuff when you're orienting at it from the emotional angle, and there's a variety of practices I can deploy (feeling emotions, jhanas, many hard-to-describe mental motions...) to get back to equilibrium and clear thinking & action. This has overall been a bigger update to my effectiveness than the sequences, plausibly to my rationality too (I can finally be unbiased instead of trying to correct for bias or pretending I'm not biased!)

Like, when I hear you say "your instinctive plan-evaluator may end up with a global negative bias", I'm like: hm, why not just say "if you notice everything feels subtly heavier and like the world has metaphorically lost color"? (That's how I notice it in myself; to be clear, fully nonverbally.) Noticing through patterns of verbal thought also works, but it's just less data to do metacognition over. You're noticing correlations and inferring the territory (how you feel) instead of paying attention to how you feel directly (something which can be learned over time by directing attention toward noticing, not instantly).

I may write on this. Till then, I highly recommend Joe Hudson's work; it may require a small amount of woo tolerance, but only a small amount. He coached Sam Altman & other top execs on emotional clarity & fluidity. Extremely good. Requires some practice & willingness to embrace emotional intensity (sometimes locally painful), though.

Of Gender and Rationality
Ulisse Mini · 19d

Biggest failure of the Rat community right now is neglecting emotional work; biggest upgrade to my rationality BY FAR (possibly more than reading the sequences, even) has been feeling all my emotions & letting them move through me till I'm back to clarity. This is feminine-coded rationality imo (though for silly cultural reasons). AoA / Joe Hudson is the best resource on all this. He also works with Sama & OAI compute teams (lol).

A few concrete examples from my life.

  • When I fully feel my anger and let it move through me (apologies for the woo terms!) I get back to clarity. My natural thoughts are correct; I don't need to do galaxy-brained metacognition & self-correction to maintain the semblance of clear thinking like I used to.
  • When I fully feel my shame & forgive/accept myself it becomes much easier for me to execute long-term self-improvement plans, where I tackle character flaws (e.g. lower conscientiousness than I'd like) with a bunch of 5% improvements, whereas previously I felt too much shame to "sit in the problem" for so long in a gradual improvement approach. Self-acceptance has made self-improvement stuff so much easier to think about clearly.


In general: Emotion biasing me -> fully welcome the emotion -> no longer biasing me, just integrated information/perspective! It also feels better and is a practice I can do. Highly recommend!

I doubt a more emotionally integrated rationalist community would fix the gender problem, but it would definitely help. I've heard girls I know call the Rat/EA cluster "inhumane", and IMO this is getting at something that repulses a lot of people: a serious focus on head over emotions/heart/integrated-bodymind. Not as bad as Spock, but still pretty bad. Some lip service is paid to Focusing and Internal Double Crux (which are emotion-y, kind of), but empirically most rats aren't very emotionally well-integrated; there's still a "Logical part" hammering down the more "Irrational" parts, as opposed to working together. And this requires inner work! Not just reading Replacing Guilt once, for example.

All this relates to insecurity as well; it's very hard to think rationally when you're insecure. Preverbal thoughts and expectations will be warped at a deep level by emotional pieces trying to protect you. A lot can be done about that, though. Chris is the main pioneer in the emotional security space IMO, though the AoA/Joe Hudson stuff helps a ton too. All paths to the same goal.

Orienting Toward Wizard Power
Ulisse Mini · 25d

I really don't like how this post blends supernatural, fictional elements with the practical. The caveats about how wizard power in reality isn't like wizard power in stories are good, but not sufficient; the actively misleading term continues to warp people's cognition.

For example, it wasn't mentioned how technology (I'm not going to call it wizard power) generally requires a lot of coordination and capital ("king power") to get working and to produce at a reasonable price. Magic is sexy and cool because you're "doing it yourself", whereas technology is a large team effort.

John seems to be warped by this effect: notice how he talks about DIY ~entirely in terms of doing stuff alone instead of in large groups, because that's sexier if you're an individualist who distrusts all large groups. You would not come up with "making your own toothbrush" as something that's "wizard power" without these cognitive distortions (individualism + magical thinking).

But really my main problem with this isn't that it lacks some caveats; it's the general pattern of Rats actively distancing themselves from reality, often in a way with undercurrents of "our thoughts here are special and our community is the best". I know this isn't enough to convey the feeling to those who don't share it. It's hard to see when you're in the community, but looking back after leaving, the distortions are extremely obvious. I might write more about this at some point, or maybe not.

I like this sentiment:

Forget RadVac. I wish for the sort of community which could produce its own COVID vaccine in March 2020, and have a 100-person challenge trial done by the end of April.

I wish there were more action and clear thinking in the rat community, without weird cognitive distortions that are hard to see except from outside.

Laziness death spirals
Ulisse Mini · 3mo

You decide what counts as a win. If you're spiraling, give yourself wins for getting out of bed, going outside, etc. Morale compounds and you'll get out of it. This is the biggest thing to do imo. Lower your "standards" temporarily. What we reward ourselves for is a tool for being productive, not an objective measure of how much we did that needs to stay fixed.

Suffering Is Not Pain
Ulisse Mini · 1y

I think asking people like Daniel Ingram, Frank Yang, Nick Cammeratta, Shinzen Young, Roger Thisdell, etc. how they experience pain post-awakening is much more productive than debating 2500-year-old teachings which have been (mis)translated many times.

What ML gears do you like?
Answer by Ulisse Mini · Nov 12, 2023

Answering my own question, a list of theories I have yet to study that may yield significant insight:

  • Theory of Heavy-Tailed Self-Regularization (https://weightwatcher.ai/; see the sketch after this list)
  • Singular learning theory
  • Neural tangent kernels et al. (deep learning theory book)
  • Information theory of deep learning
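
For the first bullet, here's a minimal sketch of what probing Heavy-Tailed Self-Regularization looks like in practice, assuming the weightwatcher package's documented analyze/get_summary interface; the ResNet-18 model is just an illustrative choice, and the reading of alpha follows Martin & Mahoney's rule of thumb (roughly 2-4 ≈ strongly self-regularized, well-trained layers):

    # Sketch, not a definitive implementation.
    # Assumes: pip install weightwatcher torch torchvision
    import weightwatcher as ww
    import torchvision.models as models

    # Any trained PyTorch (or Keras) model can be analyzed; ResNet-18
    # with pretrained ImageNet weights is just an example.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    watcher = ww.WeightWatcher(model=model)
    details = watcher.analyze()             # DataFrame, one row per layer
    summary = watcher.get_summary(details)  # aggregate quality metrics

    # HTSR fits a power law to each layer's weight-matrix eigenvalue
    # spectrum; the fitted exponent "alpha" is the per-layer metric.
    print(summary)
    print(details[["layer_id", "alpha"]])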
[UPDATE: deadline extended to July 24!] New wind in rationality’s sails: Applications for Epistea Residency 2023 are now open
Ulisse Mini · 2y

Excited to see what comes out of this. I do want to raise attention to this failure mode covered in the sequences, however. I'd love for those who do the program to try to bind their results to reality in some way, ideally with a concrete account of how they're substantively stronger afterwards, and of whether this replicated with other participants who did the training.

42 · What rationality failure modes are there? [Q] · 1y · 11 comments
25 · What ML gears do you like? [Q] · 2y · 4 comments
70 · Paper: Understanding and Controlling a Maze-Solving Policy Network [Ω] · 2y · 0 comments
105 · ActAdd: Steering Language Models without Optimization [Ω] · 2y · 3 comments
51 · Open problems in activation engineering [Ω] · 2y · 2 comments
23 · [ASoT] GPT2 Steering & The Tuned Lens · 2y · 0 comments
16 · LIMA: Less Is More for Alignment [Ω] · 2y · 6 comments
67 · TinyStories: Small Language Models That Still Speak Coherent English [Ω] · 2y · 8 comments
437 · Steering GPT-2-XL by adding an activation vector [Ω] · 2y · 98 comments
40 · How to get good at programming · 2y · 3 comments