## LessWrong

What are rationalist presumptions?

I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/).

Epistemic rationality

I suppose we do presume things, like we are not dreaming/under global and permanent illusion by a demon/a brain in a vat/in a Truman show/in a matrix. And, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalis...


> I suppose we do presume things, like we are not dreaming/under global and permanent illusion by a demon/a brain in a vat/in a Truman show/in a matrix.

No. If you look at our yearly census, you will find that it asks for the probability that we are living in a simulation. If my memory is right, most people don't presume that this probability is zero; they enter numbers different from zero.

**Bound_up** · 1 point · 4y

Okay, I don't know why everyone is making this so complicated.

In theory, nothing is presupposed. We aren't certain of anything and never will be.

In practice, if induction works for you (it will), then use it! Once it's just a question of practicality, try anything you like, and use what works. It won't let you be certain, but it'll let you move with power within the world.

As for values and morals, your question suggests you might be interested in A Thousand Shards of Desire in the Sequences. We value what we do, with lots of similarities to each other, because evolution designed our psychology that way. Evolution is messy and uncoordinated, so we ended up with a lump of half-random values that are not at all coherent. So we don't look for, or recommend looking for, any One Great Guiding Principle of morality; there probably isn't one. We just care about life and fairness and happiness and fun and freedom and such, like anyone else.

Lots of LW people get a lot of mileage out of consequentialism, utilitarianism, and particularly preference utilitarianism. But these are not presumed. Morality is, more or less, just a pile of things that humans value. You don't HAVE to prove it to get people to try to be happy or to like freedom (all else equal).

If I've erred here, I would much like to know. I puzzled over these questions myself and thought I understood them.
**RomeoStevens** · 6 points · 4y

Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but it can get you in trouble in certain places (see: the valley of bad rationality).

Rationalists often fail to compartmentalize, even when it would be highly useful.

Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot; also, just knowing about a bias is not enough to unbias you).

Rationalists don't even lift, bro.

Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power-leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)

Rationalists often presume that others are being stupidly irrational when really the other people just have significantly different values, and/or operate largely in domains where there aren't strong reinforcement mechanisms for systematic thought, or are stuck in a local maximum in an area where crossing a chasm is very costly.
