This is a special post for quick takes by a gently pricked vein. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

The expectations you do not know you have control your happiness more than you know. High expectations that you currently have don't look like high expectations from the inside; they just look like how the world is, or would be.

But "lower your expectations" can often be almost useless advice, kind of like "do the right thing".

Trying to incorporate "lower expectations" often amounts to "be sad". How low should you go? That's not clear at all if all you're using are territory-free, un-asymmetric, simple rules like "lower". Like any other attempt at truth-finding, it is not magic. It requires thermodynamic work.

The thing is, the payoff is rather amazing. You can just get down to work. As soon as you're free of a constant stream of abuse from beliefs previously housed in your head, you can Choose without Suffering.

The problem is, I'm not sure how to strategically go about doing this, other than using my full brain with Constant Vigilance.

Coda: A large portion of the LW project (or at least more than a few offshoots) is about noticing you have beliefs that respond to incentives other than purely epistemic ones, and trying not to reload when shooting your foot off with those. So unsurprisingly, there's a failure mode here: when you publicly declare really low expectations (e.g. "everyone's an asshole"), it can work as a challenge, urging people to prove you wrong. It's a cool trick for winning games of Chicken, but as usual, it works by handicapping you. So make sure you at least understand the costs and the contexts it works in.

Is metarationality about (really tearing open) the twelfth virtue?

It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void.

(This is also why metarationality has always felt like it only provides comments where Eliezer would've just given you the code.)

The part that doesn't quite seem to follow is where meaning-making and epistemology collide. I can try to see it as "all models are false, some models are useful", but I'm not sure that's the right perspective.

Coming from within that framing, I'd say yes.

From a certain perspective, "more models" becomes one model anyway, because you still have to choose which of the models you are going to use at a specific moment. Especially when multiple models, all of them "false but useful", would each suggest a different action.

As an analogy, it's like saying that your artificial intelligence will be an artificial meta-intelligence, because instead of following one algorithm, as other artificial intelligences do, it will choose between multiple algorithms. At the end of the day, "if P1 then A1 else if P2 then A2 else A3" still remains one algorithm. So the actual question is not whether one algorithm or many algorithms is better, but whether having a big if-switch at the top level is the optimal architecture. (Dunno, maybe it is, but from this perspective it suddenly feels much less "meta" than advertised.)
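To make that concrete, here's a minimal sketch (the predicates and sub-algorithms are invented purely for illustration): an agent that "chooses between multiple algorithms" is still just one algorithm, namely the top-level if-switch.

```python
# Illustrative sketch: a "meta" agent that picks between algorithms
# is itself one algorithm -- a big if-switch at the top level.

def cautious_algorithm(observation):
    return "hedge your bets"

def bold_algorithm(observation):
    return "commit fully"

def default_algorithm(observation):
    return "gather more information"

def meta_agent(observation):
    # "if P1 then A1 else if P2 then A2 else A3" -- still one algorithm.
    if observation.get("uncertainty", 0.0) > 0.7:     # P1
        return cautious_algorithm(observation)         # A1
    elif observation.get("opportunity", 0.0) > 0.7:   # P2
        return bold_algorithm(observation)              # A2
    else:
        return default_algorithm(observation)           # A3

print(meta_agent({"uncertainty": 0.9}))  # -> "hedge your bets"
```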

becomes one model anyway, because you still have to choose which of the models you are going to use at a specific moment.

The architecture feels way different when you're not trying to have consistency though. Your rules for switching can themselves switch based on the current model, and the whole thing becomes way more dynamic.
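A hedged sketch of what that could look like (names invented for illustration): the switching rule lives in the same state the models act on, so the currently active model can replace it mid-run.

```python
# Illustrative sketch: the rule that selects the current model is itself
# mutable state, so the active model can swap in a different switching rule.

def cautious_model(state):
    state["log"].append("cautious step")
    if state["evidence"] > 3:
        # This model decides that, from here on, switching should favour boldness.
        state["switch_rule"] = lambda s: bold_model
    return state

def bold_model(state):
    state["log"].append("bold step")
    return state

def default_switch_rule(state):
    return cautious_model

state = {"evidence": 0, "log": [], "switch_rule": default_switch_rule}
for _ in range(6):
    state["evidence"] += 1
    model = state["switch_rule"](state)  # the switching rule may have changed
    state = model(state)

print(state["log"])  # cautious steps at first, bold steps once the rule itself switched
```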

Cold Hands Fallacy/Fake Momentum/Null-Affective Death Stall

Although Hot Hands has been the subject of enough controversy to perhaps no longer be termed a fallacy, there is a sense in which I've fooled myself before with a fake momentum. I mean the case where you change your strategy using a faulty bottom line: incorrectly updating on your current dynamic.

As a somewhat extreme but real example from my own life: when filling out answer sheets for multiple-choice questions (with negative marks for incorrect responses) as a kid, I'd sometimes get excited about having marked almost all of the questions near the end, and then completely, obviously, irrationally decide to mark them all. This came out of some completion urge, and the positive affect around having filled in most of them. It involved a fair bit of self-deception to carry out, since I was aware at some level that I had left some of them unanswered because I was in fact unsure, and to mark them I had to feel sure.

Now, you could certainly make the case that there are times when you're thinking more clearly, or when you know the subject, where you can correctly infer this about yourself and then rationally ramp up your confidence (even if only slightly). But this wasn't one of those cases; it was simply that I felt great about myself.
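For what it's worth, the underlying arithmetic is simple. Here's a sketch under an assumed marking scheme (+1 for a correct answer, -1/3 for an incorrect one, four options); the actual exam's scheme isn't stated above.

```python
# Hedged sketch of the expected-value arithmetic. The marking scheme here
# (+1 correct, -1/3 incorrect, 4 options) is an assumption for illustration.

def expected_mark(p_correct, reward=1.0, penalty=-1/3):
    """Expected score from answering when you believe you're right with probability p_correct."""
    return p_correct * reward + (1 - p_correct) * penalty

print(expected_mark(0.25))  # ~0: blind guessing is break-even under this scheme
print(expected_mark(0.20))  # negative: worse than leaving it blank
print(expected_mark(0.60))  # positive: genuine partial knowledge makes answering worthwhile
```

Feeling great about myself moved none of those numbers; only actual knowledge of the subject could.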

Anyway, the real point of this post is that there's a flipside (or straightforward generalization) of this: we can talk about this fake inertia for subjects at rest or in motion. What I mean is that there's a similar tendency to not feel like doing something because you don't have that dynamic right now, hence all the clichés of the form "the first blow is half the battle". In a sense, that's all I'm communicating here, but seeing it as a simple irrational mistake (as in the example above) really helped me get over this without drama: just remind yourself of the bottom line and start moving in the correct flow, ignoring the uncalibrated halo (or lack thereof) of emotion.

Above, a visual depiction of strangepoop.

[This comment is no longer endorsed by its author]

Ideally, I'd make another ninja-edit that would retain the content in my post and the joke in your comment in a reflexive manner, but I am crap at strange loops.