aaq

An engineering student at Northwestern University.

aaq's Comments

Bayesian examination

Try to think about this in terms of expected value. In your specific example they do score more, but this is probabilistic thinking, so we want to think in terms of the long-run trend.

Suppose we no longer know what the answer is, and you are genuinely 50/50 on it being either A or B. This is what you truly believe; you don't think there's a chance in hell it's C. If you sit there and ask yourself, "Maybe I should do a 50-25-25 split, just in case," you're going to immediately realize, "Wait, that's moronic. I'm throwing away 25% of my points on something I'm certain is wrong. This is like betting on a three-legged horse."

Now let's say you do a hundred of these questions, and your 50-50s mostly come up as one or the other of the two options you put mass on. Your opponent consistently does 50-25-25 splits, so they end up more wrong than you overall: half the time the answer lands on one of their two 25s rather than on their single 50, while your probability mass was always on the two options that actually had a chance.

It's not a game of being more correct, it's a game of being less wrong.
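To make that concrete, here's a minimal sketch in Python. The post's exact scoring rule isn't reproduced in this thread, so I'm assuming a quadratic (Brier-style) rule for illustration, and the function and variable names are mine. The key property is that the rule is proper: your expected score is maximized by reporting what you actually believe.

```python
# Minimal sketch, assuming a quadratic (Brier-style) scoring rule:
#   S(report, answer) = 2 * report[answer] - sum(p**2 for p in report)
# This rule is "proper": reporting your true credences maximizes your
# expected score, so hedging away from them costs you points on average.

def expected_score(report, belief):
    """Expected score of `report` when the answer is drawn from `belief`."""
    penalty = sum(p * p for p in report)
    return sum(q * (2 * p - penalty) for p, q in zip(report, belief))

belief = [0.5, 0.5, 0.0]    # genuinely 50/50 between A and B, certain it's not C
honest = [0.5, 0.5, 0.0]    # report exactly what you believe
hedged = [0.5, 0.25, 0.25]  # the "just in case" split

print(expected_score(honest, belief))  # 0.5
print(expected_score(hedged, belief))  # 0.375
```

Per question the honest report expects 0.5 while the hedged one expects 0.375, so over a hundred questions the gap just compounds.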

Bayesian examination

I disagree with your first point; I'd say the 50:25:25:0 thing is the point. It's hard to swallow, because admitting ignorance rather than appearing falsely confident always is, but that's why it makes for such a good value to train.

aaq's Shortform

Agreed on the difference. Different subcultures, I think, all try to push narratives about how they are significantly different from other subcultures; they are in competition with other subcultures for brain-space. On that observation, my prior that rationalist content is importantly different from other subcultures' content in that regard is low.

I suppose my real point in writing this is to advise against a sort of subcultural Fear Of Being Ordinary -- rationalism doesn't have to be qualitatively different from other subcultures to be valuable. For people under its umbrella it can be immensely so, for reasons that have almost nothing to do with the quirks of the subculture itself.

Bíos brakhús

This actually seems like a really, really good idea. Thanks!

Bayesian examination

Great post! Simple and useful. For the spaced-repetition junkies in the crowd, I made a small Anki deck from this post to help me retain the basics.

27 cards: https://ankiweb.net/shared/info/187030147

Bayesian examination

You could normalize the scoring rule back to 1, so that should be fine.

aaq's Shortform

Scattered thoughts on how the rationalist movement has helped me:

On the topic of rationalist self-improvement, I would like to raise the point that simply feeling as though there's a community of people who get me and that I can access when I want to has been hugely beneficial to my sense of happiness and belonging in the world.

That generates a lot of hedons for me, which then on occasion allow me to "afford" doing other things I wouldn't otherwise, like spending a little more time studying mathematics or running through Anki flashcards. There's a part of me that feels like I'm not just building up this knowledge for myself, but for the future possible good of "my people". I might tie together stuff in a way that other people find interesting, or insightful, or at least enjoy reading about, and that's honestly fricking awesome and blows standard delayed-gratification "self-improvement" tactics outta the water. 10/10, would recommend.

Also there's the whole thing where Ozy, who is rat-almost-maybe-adjacent, wrote the greatest summary of the greatest dating advice book I ever read, and I literally read that effortpost every day for like 8 months while I was learning how to be a half-decent romantic option, and holy SHIT is my life better for that. But again -- nothing specific to the rationalist techniques themselves there; the value of the community was in pointing me to someone who thinks and writes in a way that makes my brain go "mmm yes tasty good word soup i liek thanke", and then that person happened to write a post that played a big role in helping me with a problem that was causing me a ton of grief.

TLDR rationalists > rationalism

{Math} A times tables memory.

When I stop to think of people I support who I would peg as "extreme in words, moderate in actions", I think I feel a sense of overall safety that might be relevant here.

Let's say I'm in a fierce, conquering mood. I can put my weight behind their extremism, and feel powerful. I'm Making A Difference, going forth and reshaping the world a little closer to utopia.

When I'm in a defeatist mood, where nothing makes sense and I feel utterly hopeless, I can *also* get behind the extremism -- but it's in a different light, now. It's more, "I am so small, and the world is so big, but I can still live by what I feel is right".

Those are really emotionally powerful and salient times for me, and ones that have a profound effect on my sense of loyalty to certain causes. But most of the time, I'm puttering along and happy to be in the world of moderation. Intellectually, I understand that moderation is almost always going to be the best way forward; emotionally, it's another story entirely.

Upon first reading, I had the thought that a lot of people don't notice the extreme/moderate dichotomy in most of their leaders. I still think that's true. And then a lot of people do learn of that dichotomy, become disgusted by it, and turn away from anyone who falls into that camp. Which makes sense: honesty is a great virtue, so why can't they just say what they mean? But then I look at myself, and while it doesn't feel *optimal* to me, it does feel like just another element of playing the game of power. There's this skill of reading between the lines that I think most people know is there, but they're a little reluctant to look straight at it.

(Reinventing wheels) Maybe our world has become more people-shaped.

Causality seems to be a property we can infer in the Democritan atoms and how they interact with one another. But when you start reasoning with abstractions, rather than with the interactions of the atoms directly, you lose information in the compression, and that makes causality in the interactions of abstractions with one another much harder to infer from watching them.

I don't yet have a stronger argument than that; this is a fairly new topic of interest to me.
