Scott Alexander's Comments

Is Rationalist Self-Improvement Real?

I'd similarly worry that the "manioc detoxification is the norm + human societies are as efficient at installing mental habits and group norms as they are at detoxifying manioc" model should predict that the useful heuristics underlying the 'scientific method' (e.g., testing literally everything, using controls, trying to randomize) would reach fixation in more societies earlier.

I'd disagree! Randomized controlled trials have many moving parts, removing any of which makes them worse than useless. Remove placebo control, and your trials are always positive and you do worse than intuition. Remove double-blinding, same. Remove power calculations, and your trials give random results and you do worse than intuition. Remove significance testing, same. Even in our own advanced civilization, if RCTs give a result different from common sense it's a 50-50 chance which is right; a primitive civilization that replaced its intuitions with the results of proto-RCTs would be a disaster. This ends up like the creationist example where evolution can't use half an eye so eyes don't evolve; obviously this isn't permanently true with either RCTs or eyes, but in both cases it took a long time for all the parts to evolve independently for other reasons.
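
To illustrate the "remove placebo control and your trials are always positive" point, here's a toy simulation - my own sketch with made-up effect sizes, not anything from the original discussion. A treatment with zero real effect looks like a winner if you only ask "did patients improve?", because most conditions improve on their own; it correctly looks like nothing once you compare against a control arm and demand a clear margin.

```python
# Toy simulation (illustrative numbers only): a useless treatment evaluated
# with and without a placebo control arm.
import random
import statistics

random.seed(0)

def symptom_change(n):
    # Change in symptoms for n patients. The treatment does nothing, but most
    # patients improve anyway (natural recovery / regression to the mean).
    return [random.gauss(1.0, 2.0) for _ in range(n)]

trials = 1000
n_per_arm = 30
uncontrolled_positive = 0
controlled_positive = 0

for _ in range(trials):
    treated = symptom_change(n_per_arm)
    control = symptom_change(n_per_arm)  # the placebo arm improves just as much

    # No control arm: call the trial "positive" if patients improved on average.
    if statistics.mean(treated) > 0:
        uncontrolled_positive += 1

    # Controlled comparison: positive only if treated beats placebo by a clear
    # margin (a crude stand-in for a significance test).
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / n_per_arm
          + statistics.variance(control) / n_per_arm) ** 0.5
    if diff / se > 1.96:
        controlled_positive += 1

print(f"'Positive' without a control arm: {uncontrolled_positive / trials:.0%}")   # ~99%
print(f"'Positive' with control + threshold: {controlled_positive / trials:.0%}")  # ~2-3%
```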

Also, you might be underestimating inferential distance - tribes that count "one, two, many" are not going to be able to run trials effectively. Did you know that people didn't consistently realize you could take an average of more than two numbers until the Middle Ages?

Also, what would these tribes use RCTs to figure out? Whether their traditional healing methods work? St. John's Wort is a traditional healing method; there have now been about half a dozen high-quality RCTs investigating it, with thousands of patients, and everyone is *still* confused. I am pretty sure primitive civilizations would not really have benefited much from this.

I am less sure about trigger-action plans. I think a history of the idea of procrastination would be very interesting. I get the impression that ancient peoples had very confused beliefs around it. I don't feel like there is some corpus of ancient anti-procrastination techniques from which TAPs are conspicuously missing, but why not? And premodern people seem weirdly productive compared to moderns in a lot of ways. Overall I notice I am confused here, but this could be an example where you're right.

I'm confused about how manioc detox is more useful to the group than the individual - each individual self-interestedly would prefer to detox manioc, since they will die (eventually) if they don't. This seems different to me than the prediction market example, since (as Robin has discussed) decision-makers might self-interestedly prefer not to have prediction markets so they can keep having high status as decision-makers.

Is Rationalist Self-Improvement Real?

Thanks, all good points.

I think the efficient-market argument doesn't just suggest we can't do much better at starting companies. It also means we can't do much better at providing self-help, which is a service that can make people lots of money and status if they do it well.

I'm not sure if you're using index fund investing as an example of rationalist self-help, or just as a metaphor for it. If you're using it as an example, I worry that your standards are so low that almost any good advice could be rationalist self-help. I think if you're from a community where you didn't get a lot of good advice, being part of the rationalist community can be really helpful in exposing you to it (sort of like the theory where college makes you successful because it inducts you into the upper-middle class). I think I got most of my "invest in index funds" level good advice before entering the rationalist community, so I didn't count that.

Being part of the rationalist community has definitely improved my life, partly through giving me better friends and partly through giving me access to good ideas of the "invest in index funds" level. I hadn't counted that as part of our discussion, but if I do, then I agree it is great. My archetypal idea of "rationalist self-help" is sitting around at a CFAR workshop trying very hard to examine your mental blocks. I'm not sure if we agree on that or if I'm caricaturing your position.

I'm not up for any gigantic time commitment, but if you want to propose some kind of rationalist self-help exercise that I should try (of the order of 10 minutes/day for a few weeks) to see if I change my mind about it, I'm up for that, though I would also believe you if you said such a halfhearted commitment wouldn't be a good test.

Is Rationalist Self-Improvement Real?

You're right in catching and calling out the appeal to consequences there, of course.

But aside from me really caring about the movement, I think part of my thought process is that "the movement" is also the source of these self-help techniques. If some people go into this space and then report back later on what they think, I am worried that this information is less trustworthy than information that would have come from these same people before they started dealing with this question.

Is Rationalist Self-Improvement Real?

I have some pretty complicated thoughts on this, and my heart isn't really in responding to you because I think some things are helpful for some people, but a sketch of what I'm thinking:

First, a clarification. Some early claims - like the ones I was responding to in my 2009 essay - were that rationalists should be able to basically accomplish miracles, become billionaires with minimal work, unify physics with a couple of years of study, etc. I still occasionally hear claims along those lines. I am still against those, but I interpret you as making weaker claims, like that rationalists can be 10% better at things than nonrationalists, after putting in a decent amount of work. I'm less opposed to those claims, especially if "a decent amount of work" is interpreted as "the same amount of work you would need to get good at those things through other methods". But I'm still a little bit concerned about them.

Next: I'm interpreting "rationalist self-help" to mean rationalist ideas and practices that are helpful for achieving common real-life goals like financial, social, and romantic success. I'm not including things like doing charity better, for reasons that I hope will become clear later.

These are the kinds of things most people want, which means two things. First, we should expect a lot of previous effort has gone into optimizing them. Second, we should expect that normal human psychology is designed to optimize them. If we're trying to do differential equations, we're outside our brain's design specs; if we're trying to gain status and power, we're operating exactly as designed.

When the brain fails disastrously, it tends to be at things outside the design specs, that don't matter for things we want. For example, you quoted me describing some disastrous failures in people understanding some philosophy around atheism, and I agree that sort of thing happens often. But this is because it's outside of our common sense. I can absolutely imagine a normal person saying "Since I can't prove God doesn't exist, God must exist", but it would take a much more screwed-up person to think "Since I can't prove I can't fly, I'm going to jump off this cliff."

Another example: doctors fail miserably on the Bayes mammogram problem, but usually handle actual breast cancer diagnosis okay. And even diagnosing breast cancer is a little outside common sense and everyday life. Faced with the most chimpish possible version of the Bayes mammogram problem - maybe something like "This guy I met at a party claims he's the king of a distant country, and admittedly he is wearing a crown, but what's the chance he's *really* a king?" - my guess is people are already near-optimal.
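
For anyone who hasn't seen it, the mammogram problem is just Bayes' theorem with an unintuitive base rate. A quick sketch using the figures the problem is usually stated with (treat the exact numbers as illustrative):

```python
# Standard statement of the problem: 1% of women screened have breast cancer,
# the test catches 80% of real cancers, and it false-positives on 9.6% of
# healthy women. Given a positive result, how likely is cancer?
p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096

# Bayes: P(cancer | pos) = P(pos | cancer) * P(cancer) / P(pos)
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(f"{p_cancer_given_pos:.1%}")  # ~7.8%, while many doctors guess 70-80%
```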

If you have this amazing computer perfectly-tuned for finding strategies in a complex space, I think your best bet is just to throw lots and lots of training data at it, then try navigating the complex space.

I think it's ironic that you use practicing basketball as your example here, because rationalist techniques very much are *not* practice. If you want to become a better salesman, practice is going out and trying to make lots of sales. I don't think this is a "rationalist technique" and I think the kind of self-help you're arguing for is very different (though it may involve better ways to practice). We both agree that practice is useful; I think our remaining disagreement is on whether there are things other than practice that are more useful to do, on the margin, than another unit of practice.

Why do I think this is unlikely?

1. Although rationalists have done pretty well for themselves, they don't seem to have done too remarkably well. Even lots of leading rationalist organizations are led by people who haven't put particular effort into anything you could call rationalist self-help - which would be really surprising if these techniques gave a real edge!

2. Efficient markets. Rationalists developed rationalist self-help by thinking about it for a while. This implies that everyone else left a $100 bill on the ground for the past 4000 years. If there were techniques to improve your financial, social, and romantic success that you could develop just by thinking about them, the same people who figured out the manioc detoxification techniques, or oracle bone randomization for hunting, or all the other amazingly complex adaptations they somehow developed, would have come up with them. Even if they only work in modern society, one of the millions of modern people who wanted financial, social, and romantic success before you would have come up with them. Obviously this isn't 100% true - someone has to be the first person to discover everything - but you should expect the fruits here to be very high up, high enough that a single community putting in a moderate amount of effort shouldn't be able to get too many of them.

(some of this becomes less relevant if your idea of rationalist self-help is just collecting the best self-help from elsewhere and giving it a stamp of approval, but then some of the other considerations apply more.)

3. Rationalist self-help starts looking a lot like therapy. If we're trying to make you a more successful computer programmer using something other than studying computer programming, it's probably going to involve removing mental blocks or something. Therapy has been pretty well studied, and the most common conclusion is that its benefits are mostly nonspecific factors and the techniques themselves don't seem to have any special power. I am prepared to suspend this conclusion for occasional miracles when extremely charismatic therapists meet exactly the right patient and some sort of non-scalable flash of lightning happens, but this also feels different from "the techniques do what they're supposed to". If rationalists are trying to do therapy, they are competing with a field of tens of thousands of PhD-level practitioners with all the resources of the academic and health systems who have worked on the problem for decades. This is not the kind of situation that encourages me to think we can make fast progress. See https://slatestarcodex.com/2019/11/20/book-review-all-therapy-books/ for more on this.

4. General skepticism of premature practical application. It took 300 years between Harvey discovering the circulatory system and anyone being very good at treating circulatory disease. It took 50 years between Pasteur discovering germ theory and anyone being very good at treating infections. It took 250 years between Newton discovering gravity and anyone being very good at flying. I have a lower prior than you on good science immediately translating into useful applications. And I am just not too impressed with the science here. Kahneman and Tversky discovered a grab bag of interesting facts, some of which in retrospect were false. I still don't think we're anywhere near the deep understanding of rationality that would make me feel happy here.

This doesn't mean I think rationality is useless. I think there are lots of areas outside our brain's normal design specs where rationality is really useful. And because these don't involve getting sex or money, there's been a lot less previous exploration of the space and the low-hanging fruit hasn't been gobbled up. Or, when the space has been explored, people haven't done a great job formalizing their insights, or they haven't spread, or things like that. I am constantly shocked by how much really important knowledge there is sitting around that nobody knows about or thinks about because it doesn't have an immediate payoff.

Along with all of this, I'm increasingly concerned that anything that has a payoff in sex or money is an epistemic death zone. Because you can make so much money teaching it, it attracts too much charlatanry to navigate easily, and it subjects anyone who enters to extreme pressure to become charlatan-adjacent. Because it touches so closely on our emotions and sense of self-worth, it's a mind-killer in the same way politics is. Because everybody is so different, there's almost irresistible pressure to push the thing that saved your own life, without checking whether it will help anyone else. Because it's such a noisy field and RCTs are so hard, I don't trust us to be able to check our intuitions against reality. And finally, I think there are whole things lurking out there of the approximate size and concerningness of "people are homeostasis-preserving control systems who will expend their entire energy on undoing any change you make to them" that we just have no idea about, even though they have the potential to make everything in this sphere useless if we don't respond to them.

I actually want to expand on the politics analogy. If someone were to say rationality was great at figuring out whether liberalism or conservatism was better, I would agree that this is the sort of thing rationality should be able to do, in principle. But it's such a horrible topic that has the potential to do so much damage to anyone trying to navigate it that I would be really nervous about it - about whether we were really up to the task, and about what it would do to our movement if we tried. These are some of my concerns around self-help too.

bgaesop's Shortform

I'd assumed what I posted was the LW meditator consensus, or at least compatible with it.

Free Money at PredictIt?

In prediction markets, the cost of capital to do trades is a major distorting factor, as are fees and taxes and other physical costs, and participants are much less certain of correct prices and much more worried about impact and about how many others are in the same trade. Almost everyone who is looking to correct inefficiencies will only fade very large and very obvious ones, given all the costs.

https://blog.rossry.net/predictit/ has a really good discussion of how this works, with some associated numbers that show how you will probably outright lose money on even apparently ironclad trades like the 112-total candidates above.
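
To make that concrete, here's some back-of-the-envelope arithmetic with invented prices - reading "112-total" as Yes prices summing to $1.12, and assuming PredictIt's usual fee structure of 10% of the profit on each winning contract plus 5% on withdrawals (the linked post has the careful version):

```python
# Hypothetical market: ten candidates whose Yes prices sum to $1.12, so buying
# one No share of everything locks in a 12-cent gross edge... before fees.
yes_prices = [0.30, 0.20, 0.15, 0.12, 0.10, 0.08, 0.07, 0.05, 0.03, 0.02]
assert abs(sum(yes_prices) - 1.12) < 1e-9

cost = sum(1 - p for p in yes_prices)       # capital tied up: $8.88
# Suppose the $0.30 favorite wins; the other nine No contracts pay $1 each.
winner = 0
payout = len(yes_prices) - 1
# 10% fee on the profit of each winning contract (losses aren't netted against it).
profit_fee = 0.10 * sum(p for i, p in enumerate(yes_prices) if i != winner)
net_on_platform = payout - profit_fee - cost
# 5% fee if you then withdraw your whole stake.
net_after_withdrawal = net_on_platform - 0.05 * (cost + net_on_platform)

print(f"cost ${cost:.2f}, gross edge ${payout - cost:.2f}, "
      f"after trading fee ${net_on_platform:.2f}, "
      f"after withdrawal ${net_after_withdrawal:.2f}")
# -> roughly $0.12 of edge shrinks to ~$0.04, and goes negative if you cash out.
```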

Could we solve this email mess if we all moved to paid emails?

I'm sorry, I didn't understand that. Yes, this answers my objection (although it might cause other problems, like making me less likely to answer "sorry, I can't do that" compared to just ghosting someone).

Could we solve this email mess if we all moved to paid emails?

I think it's great that you're trying this and I hope it succeeds.

But I won't be using it. For me, the biggest problem is lowering the sense of obligation I feel to answer other people's emails. Without a sense of obligation, there's no problem - I just delete it and move on. But part of me feels like I'm incurring a social cost by doing this, so it's harder than it sounds.

I feel like using a service like this would make the problem worse, not better. It would make me feel a strong sense of obligation to answer someone's email if they had paid $5 to send it. What sort of monster deletes an email they know the other person had to pay money to send?

In the same way, I would feel nervous sending someone else a paid email, because I would feel like I was imposing a stronger sense of obligation on them to respond to my request, rather than it being a harmless ask they can either answer or not. This would be true regardless of how important my email was. Meanwhile, people who don't care about other people's feelings won't really be held back, since $5 is not a lot of money for most people in this community.

I think the increased obligation would dominate any tendency for me to get fewer emails, and make this a net negative in my case. I still hope other people try it and report back.

How to Ignore Your Emotions (while also thinking you're awesome at emotions)

What would you recommend to people who are doing this (or to people who aren't sure whether they're doing it)?

Mistake Versus Conflict Theory of Against Billionaire Philanthropy

I'm a little confused, and I think it might be because you're using "conflict theorist" differently from how I do.

For me, a conflict theorist is someone who thinks the main driver of disagreement is self-interest rather than honest mistakes. There can be mistake theorists and conflict theorists on both sides of the "is billionaire philanthropy good?" question, and on the "are individual actions acceptable even though they're nondemocratic?" question.

It sounds like you're using it differently, so I want to make sure I know exactly what you mean before replying.

You say you've given up on understanding the number of basically good people who disagree with things you think are obvious and morally obligatory. I suspect there's a big confusion about what 'basically good' means here - I'm making a note of it for a future post - but moving past that for now: when you examine specific cases of such disagreements, what do you find, and how often? (I keep writing out possible answers, but on reflection it's better not to anchor you.)

I think I usually find we're working off different paradigms, in the really strong Kuhnian sense of paradigm.
