This is a special post for quick takes by Sunny from QAD. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

It's happened again: I've realized that one of my old beliefs (pre-LW) is just plain dumb.

I used to look around at all the various diets (Paleo, Keto, low carb, low fat, etc.) and feel angry at people for having such low epistemic standards. Like, there's a new theory of nutrition every two years, and people still put faith in them every time? Everybody swears by a different diet and this is common knowledge, but people still swear by diets? And the reasoning is that "fat" (the nutrient) has the same name as "fat" (the body part people are trying to get rid of)?

Then I encountered the "calories in = calories out" theory, which says that the only thing you need to do to lose weight is to make sure that you burn more calories than you eat.

And I thought to myself, "Yeah, obviously."

Because, you see, if the orthodox asserts X and the heterodox asserts Y, and the orthodox is dumb, then Y must be true!

Anyway, I hadn't thought about this belief in a while, but I randomly remembered it a few minutes ago, and as soon as I remembered its origins, I chucked it out the window.

Oops!

(PS: I wouldn't be flabbergasted if the belief turned out true anyway. But I've reverted my map from the "I know how the world is" state to the "I'm awaiting additional evidence" state.)

4Viliam4y
On one hand, yeah, obviously. On the other hand, "burning calories" is not an elementary action. Suppose I tell you to burn 500 calories now; how exactly would you achieve it? If your answer is that you would exercise or do something physically demanding, such actions spend ATP in the cells; so what if you don't have enough ATP in your muscle cells? What is your plan B for burning calories?

From the opposite side, you can limit the intake of calories. But what if your metabolism is such that if you don't provide enough calories, you will gradually fall into a coma and die? Your metabolism can make it impossible for you to reduce the "calories in" or increase the "calories out", if it is somehow set up in a way that does not convert the calories into useful energy in your muscle cells, and it starts damaging your organs if the calories are missing in general.

So while the theory is almost tautologically true, it may still be impossible to use it to lose weight. And the problem is that the proponents of "calories in = calories out" usually smugly pretend that it is actionable advice, instead of mere description. The actionable advice needs to be about how the metabolism works, and the things that impact it, such as what you eat, and who knows what else. Also, if you have some hormonal imbalance, or allergy, or whatever, your metabolism may differ from other people's even if you eat the same things and try to live the same lifestyle. So, while e.g. eating less refined sugar would probably help everyone, no advice would guarantee a perfect outcome for everyone.
2Sunny from QAD4y
You make a good point -- even if my belief was technically true, it could still have been poorly framed and inactionable (is there a name for this failure mode?). But in fact, I think it's not even obvious that it was technically true. If we say "calories in" is the sum of the calorie counts on the labels of each food item you eat (let's assume the labels are accurate) then could there not still be some nutrient X that needs to be present for your body to extract the calories? Say, you need at least an ounce of X to process 100 calories? If so, then one could eat the same amount of food, but less X, and potentially lose weight. Or perhaps the human body can only process food between four and eight hours after eating it, and it doesn't try as hard to extract calories if you aren't being active, so scheduling your meals to take place four hours before you sit around doing nothing would make them "count less". Calories are (presumably?) a measure of chemical potential energy, but remember that matter itself can also be converted into energy. There's no antimatter engine inside my gut, so my body fails to extract all of the energy present in each piece of food. Couldn't the mechanism of digestion also fail to extract all the chemical potential energy of species "calorie"?
2Viliam4y
Yes, there are many steps in the metabolism that are not under your conscious control. I am not an expert, so I don't want to speculate too much about the technical details, but I think that gut bacteria probably also play a role. Simply, not everything you put in your mouth necessarily ends up in your bloodstream, and not everything that you absorbed is necessarily available in the form of muscle energy.

I don't know any standard name. Seems to me the problem is confusing "a rephrasing of the desired outcome" with "an algorithm that actually gets you there". Something like:

Q: How can I get a lot of money?
A: Become a millionaire!

Like, yeah, technically, everyone who successfully followed this advice ended up with lots of money, and everyone who didn't can be accused of not following the advice properly, but that's simply because those are two ways to describe the same thing.

Q: How can I lose weight?
A: Get rid of the extra atoms! I mean, extra calories!

Charitably, the advice is not absolutely bad, because for a hypothetical completely clueless listener it would provide some little information. But then, using this advice in practice means insinuating that your target is completely clueless, which is probably not the case.
2Sunny from QAD4y
But atoms aren't similar to calories, are they? I maintain that this hypothesis could be literally false, rather than simply unhelpful.
3Viliam4y
Okay, it's not the same. But the idea is that the answer is equally unhelpful, for similar reasons.
2[comment deleted]4y
4Eli Tyre4y
I want to give a big thumbs up of positive reinforcement. I think it's great that I got to read an "oops! That was dumb, but now I've changed my mind." Thanks for helping to normalize this.
2Sunny from QAD4y
Thanks for the feedback! Here's another one for ya. A relatively long time ago I used to be pretty concerned about Pascal's wager, but then I devised some clever reasoning why it all cancels out and I don't need to think about it. I reasoned that one of three things must be true: 1. I don't have an immortal soul. In this case, I might as well be a good person. 2. I have an immortal soul, and after my bodily death I will be assigned to one of a handful of infinite fates, depending on how good of a person I was. In this case it's very important that I be a good person. 3. Same as above, but the decision process is something else. In this case I have no way of knowing how my infinite fate will be decided, so I might as well be a good person during my mortal life and hope for the best. But then, post-LW, I realized that there are two issues with this: * It doesn't make any sense to separate out case 2 from the enormous ocean of possibilities allowed for by case 3. Or rather, I can separate it, but then I need to probabilistically penalize it relative to case 3, and I also need to slightly shift the "expected judgment criterion" found in case 3 away from "being a good person is the way to get a good infinite fate", and it all balances out. * More importantly, this argument flippantly supposes that I have no way of discerning what process, if any, will be used to assign me an infinite fate. An infinite fate, mind you. I ought to be putting in more thought than this even if I thought the afterlife only lasted an hour, let alone eternity. So now I am back to being rather concerned about Pascal's wager, or more generally, the possibility that I have an immortal soul and need to worry about where it eventually ends up. From my first read-through of the sequences I remember that it claims to show that the idea of there being a god is somewhat nonsensical, but I didn't quite catch it the first time around. So my first line of attack is to read through the sequences ag
2Viliam4y
I think the first step is to ask yourself what you even mean by saying "god". Because if you have a definition like "the spirit of our dead chieftain, who sometimes visits me in my dreams", I have no problem with that. Like, I find it completely plausible that you see the image of the chieftain in your dreams; nothing unscientific about that. It's just that the image is generated by your brain, so if you try to communicate with it, it can only give you advice that your brain generated, and it can only grant your wishes in the sense that you accomplish them yourself. But if you agree that this is exactly what you meant, then such a god is perfectly okay for me.

But the modern definition is more like: an intelligent being that exists outside of our universe, but can observe it and change it. Suspending disbelief for a moment; how exactly can a being be "intelligent", whether inside our universe or not? Intelligence implies processing data. That would imply that the god has some... supernatural neurons, or some other architecture capable of processing data. So the god is a mechanism (in the sense that a human is a biological mechanism) composed of parts, although those parts do not necessarily have to be found in our physics. Still kinda plausible, maybe gods are made out of dark matter, who knows. But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution; were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.

Then the theologists try to come up with some smart-sounding arguments, like "god is actually supremely simple" -- therefore Occam's razor does not disprove him, and no evolution was needed, because simple things are more likely a priori, so a supreme being is supremely likely. Or something like that. But this is nonsense, because "supremely simple" and "capable of processing information"
1TAG4y
The supernatural isn't supposed to be the natural done all over again. The typical theological claim is that God's wisdom or whatever is an intrinsic quality, not something with moving parts.
2Viliam4y
Well, "wisdom as an intrinsic quality" is a mysterious answer. And what is "wisdom without moving parts"? A precomputed table of wise answers to all possible questions? Who precomputed it and how? I agree that this is how theology usually answers it, but it is an answer that doesn't make any sense when you look at it closer; it's just some good-sounding words randomly glued together. And if you try to make it refer to something, even a hypothetical something, the whole explanation falls apart.
1TAG4y
Reductionism is a combination of three claims:

1. That many things are made of smaller components.
2. That those particulars can be understood in terms of the operations of their components.
3. That there's an irreducibly basic level. It's not turtles all the way down.

If it's always the case that something that isn't explicable in terms of its parts is mysterious, then the lowest level is mysterious. If nothing is mysterious, if you apply the argument against mysterious answers without prejudice, then reductionism is false. There isn't a consistent set of principles here.

Continued... Naturalism is the claim that there is a bunch of fundamental properties that just are, at the bottom of the stack, and everything is built up from that. Supernaturalism is the claim that the intrinsic stuff is at the top of the stack, and everything else is derived from it top-down. That may be 100% false, but it is the actual claim.

There's a thing called the principle of charity, where one party interprets the other's statements so as to maximize their truth value. This only enhances communication if the truth is not basically in dispute; that's the easy case. The hard case is when there is a basic dispute about what's true. In that case, it's not helpful to fix the other person's claims by making them more reasonable from your point of view. Anyway, that's how we ended up with "God must have superneurons in his superbrain".
2Viliam4y
Feels like in the top-down universe, science shouldn't work at all. I mean, when you take a magnifying glass and look at the details, they are supposedly generated on the fly to fit the larger picture. Then you apply munchkinry to the details and invent an atomic bomb or quantum computer... which means... what exactly, from the top-down perspective?

Yeah, you can find an excuse, e.g. that some of those top-down principles are hidden like Easter eggs, waiting to be discovered later. That the Platonic idea of smartphones has been waiting for us since the creation of the universe, but was only revealed to the recent generation. Which would mean that the top-down universe has some reason to pretend to be bottom-up, at least in some aspects... Okay, the same argument could be made that quantum physics pretends to be classical physics at larger scale, or relativity pretends to be Newtonian mechanics at low speeds... as if the scientists are trying to make up silly excuses for why their latest magic works but totally "doesn't contradict" what the previous generations of scientists were telling us...

Well, at least it seems like the bottom-up approach is fruitful, whether the true reason is that the universe is bottom-up, or that the universe is top-down in a way that tries really hard to pretend that it is actually bottom-up (either in the sense that when it generates the -- inherently meaningless -- details for us, it makes sure that all consequences of those details are compatible with the preexisting Platonic ideas that govern the universe... or like a Dungeon Master who allows the players to invent all kinds of crazy stuff and throw the entire game off balance, because he values consistency above everything).

More importantly, in a universe where there is magic all the way up, what sense does it make to adopt the essentially half-assed approach, where you believe in the supernatural but also kinda use logic except not too seriously... might as well throw the logic aw
1TAG4y
The basic claim of a top down universe is a short string that doesn't contain much information. About the same amount as a basic claim of reductionism. The top down claim doesn't imply a universe of immutable physical law, but it doesn't contradict it either. The same goes for the bottom-up claim. A universe of randomly moving high entropy gas is useless for science and technology, but compatible with reductionism. But all this is rather beside the point. Even if supernaturalism is indefensible, you can't refute it by changing it into something else.
1Sunny from QAD4y
I wouldn't call the dead chieftain a god -- that would just be a word game.

Wait wait! You say a god-like being created by evolution cannot be a creator of the universe. But that's only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful "alien" in another universe, who itself came about from an evolutionary process in its own universe. It might be "omniscient" in the sense that it can think 1000x as fast as us and has 1000x as much working memory and is familiar with thinking habits that are 1000x as good as ours, but that's a moot point. The real thing I'm worried about isn't whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes. I haven't yet put much thought into this, since I only recently came to believe that this topic merits serious thought, but the existence of such a powerful creature seems like a plausible avenue to the conclusion "I have an infinite fate and it depends on me doing/avoiding X".

This is another area where my understanding could stand to be improved (and where I expect it will be during my next read-through of the sequences). I'm not sure exactly what kind of simplicity Occam's razor uses. Apparently it can be formalized as Kolmogorov complexity, but the only definition I've ever found for that term is "the Kolmogorov complexity of X is the length of the shortest computer program that would output X". But this definition is itself in need of formalization. Which programming language? And what if X is something other than a stream of bits, such as a dandelion? And even once that's answered, I'm not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered. (All that being said, I'd like to note that I'm keeping in mind that just because I don't
2Viliam4y
Some people in history did, specifically, ancient Romans. But now we don't. Just making it obvious. And this is similar to the chieftain thing. You can have a religion that defines "god" as "an alien from another universe", but many religions insist that god is not created but eternal.

Yes, this is a practical approach.

Well, you cannot disprove such a thing, because it is logically possible. (Obviously, "possible" does not automatically imply "it happened".) But unless you assume it is "simulations all the way up", there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that. There is no reason to assume that the existing religions have something in common with the simulating alien. When I play Civilization, the "people" in my simulation have a religion, but it doesn't mean that I believe in it, or even approve of it, or that I am somehow going to reward them for it.

It's just a cosmic horror that you need to learn to live with. There are more.

Any programming language; for large enough values it doesn't matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and implement the algorithm in Python. So if you chose the wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied to debating what happens in general when the data grow. But I agree that if you talk about small data, it is underspecified. I don't really know what it could mean to "have a universe defined by the following three bits: 0, 0, 1", and maybe no one has a meaningful answer to this. But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y. Just an intuition pump: Imagine that ther
1Sunny from QAD4y
The Civ analogy makes sense, and I certainly wouldn't stop at disproving all actually-practiced religions (though at the moment I don't even feel equipped to do that). Are you sure it's logically possible in the strict sense? Maybe there's some hidden line of reasoning we haven't yet discovered that shows that this universe isn't a simulation! (Of course, there's a lot of question-untangling that has to happen first, like whether "is this a simulation?" is even an appropriate question to ask. See also: Greg Egan's book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.) This sounds like the kind of thing someone might say who is already relatively confident they won't suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then... (WARNING: graphic imagery) ...upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END). Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn't still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate? The constant-sized penalty makes sense. But I don't understand the claim that this concept is usually applied in the context of looking at how things grow. Occam's razor is (apparently) formulated in terms of raw Kolmogorov complexity -- the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X.  Let's say general relativity is being compared against Theory T, and the programming language is Python. Doesn't it make a huge difference whether you're allowed to "pip install general-relativity" before you begin?  I agree that these intuitions can exist, but if I'm going to use them, then I detest this process being called a formalization! If I'm allowed to invoke my sens
3Viliam4y
I find it difficult to imagine how such an argument could even be constructed. "Our universe isn't a simulation because it has property X" doesn't explain why the simulator could not simulate X. The usual argument is "because quantum stuff, the simulation would require insane amounts of computing power", which is true, but we have no idea what the simulating universe looks like, and what kind of physics it has... maybe what's an insane amount for us is peanuts for them. But maybe there is some argument why computing power in principle (like, for some mathematical reason) cannot exceed a certain value, ever. And the value may turn out to be insufficient to simulate our universe. And we can somehow make sure that our entire universe is simulated in sufficient resolution (not something like: the Earth or perhaps the entire Solar system is simulated in full quantum physics, but everything else is just a credible approximation). Etc. Well, if such a thing happens, then I would accept the argument.

Yeah, I just... stopped worrying about these kinds of things. (In my case, "these kinds of things" refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can't win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; and there is often nothing you can do about them, in some cases you would be required to do contradictory things (you spend your entire life trying to appease the bloodthirsty Jehovah, but it turns out the true master of the universe is the goddess Kali and she is very displeased with your Christianity...) or it could be some god you don't even know because it is a god of the Aztecs, or some future god that will only be revealed to humanity in year 3000. Maybe humans are just a precursor to an intelligent species that will exist million years in the future, and from their god's perspective humans are e
1Sunny from QAD4y
I see. In that case, I think we're reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian -- you can often say things like "if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I'll just eat that cost; I need groceries!". My state of uncertainty is that I've barely put five minutes of thought into the question "I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long." Well, that's another reference to "popular" theism. Popular theism is a subset of theism in general, which itself is a subset of "worlds in which there's something I should be doing that has infinite importance". Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O This advice makes sense, though given the state of uncertainty described above, I would say I'm already on it. This is a good fallback plan for the contingency in which I can't figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter! Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but... ...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a "reasonable" set of priors. I would still hesitate to call it a "formalism", though IIRC I don't think you've used that word. In my re-listen of the sequences, I've just gotten to the part where Eliezer uses that word. Well, I guess I'll take it up with somebody who calls it that. By the way, it's just popped in
2Viliam4y
Definitely not interested. My understanding of these things is kinda intuitive (with intuition based on decent knowledge of math and computer science, but still), so I believe that "I'll know it when I see it" (give me two options, and I'll probably tell you whether one of them seems "simpler" than the other), but I wouldn't try to put it into exact words.
1Sunny from QAD4y
Kk! Thanks for the discussion :)
3Dagon4y
Congratulations - noticing that you are confused is an important step! What are you doing while "awaiting additional evidence"? This is a topic that doesn't have a neutral/agnostic position - biology forces you to eat, you have some influence (depending on your willpower model) over what and how much.
2Sunny from QAD4y
This belief wasn't really affecting my eating habits, so I don't think I'll be changing much. My rules are basically: 1. No meat (I'm a vegetarian for moral reasons). 2. If I feel hungry but I can see/feel my stomach being full by looking at / touching my belly, I'm probably just bored or thirsty and I should consider not eating anything. 3. Try to eat at least a meal's worth of "light" food (like toast or cereal as opposed to pizza or nachos) per day. This last rule is just to keep me from getting stomach aches, which happens if I eat too much "heavy" food in too short a time span. I think I might contend that this kind of reflects an agnostic position. But I'm glad you asked, because I hadn't noticed before that rule 2 actually does implicitly assume some relationship between "amount of food" and "weight change", and is put in place so I don't gain weight. So I guess I should really have said that what I tossed out the window was the extra detail that calories alone determine the effect food will have on one's weight. I still believe, for normal cases, that taking the same eating pattern but scaling it up (eating more of everything but keeping the ratios the same) will result in weight gain.

The ball-on-a-hill model of reputation

This is a model I came up with in middle school to explain why it felt like I was treated differently from others even when I acted the same. I invented it long before I fully understood what models were (which only occurred sometime in the last year) and as such it's something of a "baby's first model" (ha ha) for me. As you'd expect for something authored by a middle schooler regarding their problems, it places minimal blame on myself. However, even nowadays I think there's some truth to it.

Here's the model. Your reputation is a ball on a hill. The valley on one side of the hill corresponds to being revered, and the valley on the other side corresponds to being despised. The ball begins on top of the hill. If you do something that others see as "good" then the ball gets nudged to the good side, and if you do something that others see as "bad" then it gets nudged to the other side.

Here's where the hill comes in. Once your reputation has been nudged one way or the other, it begins to affect how others interpret your actions. If you apologize for something you did wrong and your reputat... (read more)
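(Editor's aside: the model reads as an unstable equilibrium, and that reading can be sketched as a toy simulation. The update rule, parameter names, and values below are invented for illustration and aren't part of the original post.)

```python
def simulate_reputation(actions, bias=0.3, step=0.1):
    """Toy ball-on-a-hill: rep = 0.0 is the top of the hill;
    positive is the revered valley, negative the despised one.
    Current reputation biases how each new action is perceived,
    so a small early nudge compounds over time."""
    rep = 0.0
    for quality in actions:               # each action's "true" quality in [-1, 1]
        perceived = quality + bias * rep  # observers see through a biased lens
        rep += step * perceived
    return rep

# Identical neutral behavior after one early good or bad deed:
print(simulate_reputation([1.0] + [0.0] * 50))   # drifts further positive
print(simulate_reputation([-1.0] + [0.0] * 50))  # drifts further negative
```

With fifty perfectly neutral actions, the reputation never returns to the top of the hill; it keeps rolling in whichever direction the single early action pushed it, which is the self-reinforcement the model describes.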

4Gordon Seidoh Worley5y
This very much matches my own model. Once you are high or low status, it's self-reinforcing and people will interpret the evidence to support the existing story, which is why when you are high you can play low and you won't lose status (you're just "slumming it" or something similar) and when you are low you can play high and will not gain any status (you're "reaching above your station").
4mako yass5y
We used to talk about a "halo effect" here (and sometimes, "negative halo effect"), I like this way of describing it. I think it might be more valuable to just prefer to use a general model of confirmation bias though. People find whatever they're looking for. They only find the truth if they're really really looking for the truth, whatever it's going to be, and nothing else, and most people aren't, and that captures most of what is happening.
3Raemon5y
Heh, I like this sentence a lot (both for being funny, sort of adorable, and also just actually being a useful epistemic status) This model certainly seems relevant, but should probably be properly seen as one particular lens, or a facet of a much more complicated equation. (In particular, people can have different kinds of reputation in different domains)
1Sunny from QAD5y
That's true. I didn't notice this as I was writing, but my entire post frames "reputation" as being representable as a number. I think this might have been more or less true for the situations I had in mind, all of which were non-work social groups with no particular aim. Here's another thought. For other types of reputations that can still be modeled as a ball on a hill, it might be useful to parameterize the slope on each side of the hill. * "Social reputation" (the vague stuff that I think I was perceiving in the situations that inspired this model) is one where the rep/+ side is pretty shallow, but the rep/- side is pretty steep. It's not too hard to screw up and lose a good standing — in particular, if the social group gets it in their head that you were "faking it" and that you're "not actually a good/kind/confident/funny person" — but once you're down the well, it's very hard to climb out. * "Academic reputation", on the other hand, seems like it might be the reverse. I can imagine that if someone is considered a genius, and then they miss the mark on a few problems in a row, it wouldn't do much to their standing, whereas if the local idiot suddenly pops out and solves an outstanding problem, everyone might change their minds about them. (This is based on minimal experience.) Of course, it also depends on the group. I'm curious — do you have any types of reputation in mind that you wouldn't model like this, or any particular extra parts that you would add to it?

When you estimate how much mental energy a task will take, you are just as vulnerable to the planning fallacy as when you estimate how much time it will take.

I'm told that there was a period of history where only the priests were literate and therefore only they could read the Bible. Or maybe it was written in Latin and only they knew how to read it, or something. Anyway, as a result, they were free to interpret it any way they liked, and they used that power to control the masses.

Goodness me, it's a good thing we Have Science Now and can use it to free ourselves from the overbearing grip of Religion!

Oh, totally unrelatedly, the average modern person is scientifically illiterate and absorbs their knowledge of what is "scientific" through a handful of big news sources and through cultural osmosis.

Hmm.

Moral: Be wary of packages labeled "science" and be especially wary of social pressure to believe implausible-sounding claims just because they're "scientific". There are many ways for that beautiful name to get glued onto random memes.

1MathiasKB5y
"Science confirms video games are good" is essentially the same statement as "The bible confirms video games are bad", just with the authority changed. Luckily there remains a closer link between the authority "Science" and truth than between the authority "The bible" and truth, so it's still an improvement. Most people still update their worldview based upon whatever their tribe has agreed upon as their central authority. I'm having a hard time criticising people for doing this, however. This is something we all do! If I see Nick Bostrom writing something slightly crazy that I don't fully understand, I will still give credence to his view simply for being an authority in my worldview. I feel like my criticism of people blindly believing anything labeled "science" is essentially criticising people for not being smart enough to choose better authorities, but that's a criticism that applies to everyone who doesn't have the smartest authority (who just so happens to be Nick Bostrom, so we're safe). Maybe there's a point to be made about not blindly trusting any authority, but I'm not smart enough to make that point, so I'll default to someone who is.
1Sunny from QAD5y
Oh yes, that's certainly true! My point is that anybody who has the floor can say that science has proven XYZ when it hasn't, and if their audience isn't scientifically literate then they won't be able to notice. That's why I lead with the Dark Ages example where priests got to interpret the bible however was convenient for them.

I just saw a funny example of Extremal Goodhart in the wild: a child was having their picture taken, and kept being told they weren't smiling enough. As a result, they kept screaming "CHEEEESE!!!" louder and louder.

A koan:

If the laundry needs to be done, put in a load of laundry.
If the world needs to be saved, save the world.
If you want pizza for dinner, go preheat the oven.

When you ask a question to a crowd, the answers you get back have a statistical bias towards overconfidence, because people with higher confidence in their answers are more likely to respond.

From my personal wiki. Seems appropriate for LessWrong.
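The selection effect above can be sketched with a small simulation. This is a toy model, not anything from the original take: I'm assuming, purely for illustration, that each person's confidence is uniform on [0, 1] and that the probability they volunteer an answer equals their confidence.

```python
import random

random.seed(0)  # deterministic for reproducibility

# Hypothetical model: each person's confidence in their answer is uniform on
# [0, 1], and the probability they speak up equals that confidence.
population = [random.random() for _ in range(100_000)]
responders = [c for c in population if random.random() < c]

avg_all = sum(population) / len(population)
avg_responders = sum(responders) / len(responders)

# The whole crowd averages ~0.5 confidence, but the answers you actually
# hear average ~2/3: selecting on confidence inflates the sample.
print(f"everyone: {avg_all:.2f}, responders only: {avg_responders:.2f}")
```

Under this model the analytic answer is exactly 2/3 (the responders' confidence has density proportional to c), so even a crowd that is perfectly calibrated on average sounds overconfident once you only count the people who answered.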

The End-product Substitution is a hypothesis proposed by me about my behavior when choosing projects to work on. The hypothesis is that when I am evaluating how much I would like to work on a project, I substitute judgment of how much I will enjoy the end product for judgment of how much I will enjoy the process of creating it. For example, I recently [Sep 2019] considered creating a series of videos mirroring the content of the LessWrong sequences, and found myself fawning over how nice it would be to... (read more)

I just learned a (rationalist) lesson. I'm taking a course that has some homework that's hosted on a third party site. There was one assignment at the beginning of the semester, a few weeks ago. Then, about a week ago, I was wondering to myself whether there would be any more assignments any time soon. In fact, I even wondered if I had somehow missed a few assignments, since I'd thought they'd be assigned more frequently.

Well, I checked my course's website (different from the site where the homework was hosted) and didn't see ... (read more)

3Raemon5y
Congrats on saying oops!
1eigen5y
How likely is it now that you are going to miss any more assignments? Not likely at all!
2Sunny from QAD5y
Yup. And the key thing that I'm reminding myself of is that this can't be achieved by convincing myself that there aren't any assignments to miss. It can only be achieved for sure by knowing whether there are assignments or not.

I've been thinking of signing up for cryonics recently. The main hurdle is that it seems like it'll be kind of complicated, since at the moment I'm still on my parent's insurance, and I don't really know how all this stuff works. I've been worrying that the ugh field surrounding the task might end up being my cause of death by causing me to look on cryonics less favorably just because I subconsciously want to avoid even thinking about what a hassle it will be.

But then I realized that I can get around the problem by pre-committing to sign up for cryonics no... (read more)

I just caught myself substituting judgment of representativeness for judgment of probability.

I'm a conlang enthusiast, and specifically I study loglangs, which are a branch of conlangs that are based around predicate logic. My motivation for learning these languages was that I was always bothered by all the strange irregularities in my natural language (like the simple past tense being the same as the past participle, and the word inflammable meaning two opposite things).

Learning languages like these has only drawn my attention to even more natural-la... (read more)

3Dagon5y
You may also be integrating something you've read and then forgotten you read, which added weight to your visible-and-suspect thought process in order to make a true statement. It would not surprise me to learn that at least some of your study has included examples of irregularity from MANY natural languages, including Chinese. So "I guarantee it does" may be coming from multiple places in your knowledge. So, was it actually incorrect, or just illegibly-justified?
1Sunny from QAD5y
Hmm, good question. I guess I wouldn't be surprised to learn that I'd read about Chinese having irregularities, though the main text I've read about this (The Complete Lojban Language) didn't mention any IIRC.
1mic5y
I wouldn't be surprised if Chinese had no irregularities in the tense system – it's a very isolating language. But here's one irregularity: the negation of 有 is 没有 ("to not have/possess"), but the simple negation of every other verb is 不 + verb. You can negate other verbs with 没, but then it's implied to be 没有 + verb, which makes the verb into something like a present participle. E.g., 没吃 = "to have not eaten".

I was 100%, completely, unreservedly fooled by this year's April Fools' joke. Hilarious XDD

1Sunny from QAD3y
Also as a side note, I'm curious what's actually in the paywalled posts. Surely people didn't write a bunch of really high-quality content just for an April Fools' day joke?

I would appreciate an option to hide the number of votes that posts have. Maybe not hide entirely, but set them to only display at the bottom of a post, and not at the top nor on the front page. With the way votes are currently displayed, I think I'm getting biased for/against certain posts before I even read them, just based on the number of votes they have.

3habryka5y
Yeah, this was originally known as "Anti-Kibitzer" on the old LessWrong. It isn't something we prioritized, but I think greaterwrong has an implementation of it. Though it would also be pretty easy to create a stylish script for it (this hides it on the frontpage, and makes the color white on the post-page, requiring you to select the text to see the score): https://userstyles.org/styles/175379/lesswrong-anti-kibitzer
4Sunny from QAD5y
Oh, good idea! I don't have Stylish installed, but I have something similar, and I was able to hide it that way. Thanks!
1Eli Tyre5y
Can you share it?
7Sunny from QAD5y
Sure. The Firefox plugin is Custom Style Sheet and the code is as follows:
2Eli Tyre5y
Thanks!
2Raemon5y
Presumably you'd prefer them not to appear in post-list-items as well? (i.e. on the frontpage?)
1Sunny from QAD5y
Right: :)
2Raemon5y
ah, whoops.

The other day, my roommate mentioned that the bias towards wanting good things for people in your in-group and bad things for those in your out-group can be addressed by including ever more people in your in-group.

Here's a way to do that: take a person you want to move into your in-group, and try to imagine them as the protagonist of a story. What are their desires? What obstacles are they facing right now? How are they trying to overcome them?

I sometimes feel annoyed at a person just by looking at them. I invented this technique just now, but I used ... (read more)

Why is it my responsibility to heal the wounds that somebody else dealt to me??

Because if you don't heal your wounds, you will bleed on people who didn't cut you.

2Matt Goldenberg5y
Alternatively: Because you're the one that hurts if you don't.
1Sunny from QAD5y
(I see this has been posted elsewhere. I don't know if I invented it independently or if I read it somewhere and then forgot about it until now.)

Idea: "Ugh-field trades", where people trade away their obligations that they've developed ugh-fields for in exchange for other people's obligations. Both people get fresh non-ugh-fielded tasks. Works only in cases where the task can be done by somebody else, which won't be every time but might be often enough for this to work.

2Dagon3y
Interesting thought.  Unfortunately, most tasks where I'm blocked/delayed by an ugh field either dissolve it as soon as I identify it, or include as part of the ugh that only I can do it.

I just caught myself committing a bucket error.

I'm currently working on a text document full of equations that use variables with extremely long names. I'm in the process of simplifying it by renaming the variables. For complicated reasons, I have to do this by hand.

Just now, I noticed that there's a series of variables O1-O16, and another series of variables F17-F25. For technical reasons relating to the work I'm doing, I'm very confident that the name switch is arbitrary and that I can safely rename the F's to O's without changing the meaning of the equa... (read more)

Epistemic status: really shaky, but I think there's something here.

I naturally feel a lot of resistance to the way culture/norm differences are characterized in posts like Ask and Guess and Wait vs Interrupt Culture. I naturally want to give them little pet names, like:

  • Guess culture = "read my fucking mind, you badwrong idiot" culture.
  • Ask culture = nothing, because this is just how normal, non-insane people act.

I think this feeling is generated by various negative experiences I've had with people around me, who, no matter where I am, always seem to share b... (read more)

2Pattern4y
Is it because they're expecting you to read their mind, and go along with their "culture", instead of asking you?
1Sunny from QAD4y
I couldn't parse this question. Which part are you referring to by "it", and what do you mean by "instead of asking you"?
2Pattern4y
it (the negative experiences) - Are *they* (the negative experiences) the result of (people with a "culture" whose rules you don't understand) expecting you to read *their* mind, and go along with their "culture", instead of asking you to go along with their culture?
1Sunny from QAD4y
Aha, no, the mind reading part is just one of several cultures I'm mentioning. (Guess Culture, to be exact.) If I default to being an Asker but somebody else is a Guesser, I might have the following interaction with them: Me: [looking at some cookies they just made] These look delicious! Would it be all right if I ate one? Them: [obviously uncomfortable] Uhm... uh... I mean, I guess so... Here, it's retroactively clear that, in their eyes, I've overstepped a boundary just by asking. But I usually can't tell in advance what things I'm allowed to ask and what things I'm not allowed to ask. There could be some rule that I just haven't discovered yet, but because I haven't discovered it yet, it feels to me like each case is arbitrary, and thus it feels like I'm being required to read people's minds each time. Hence why I'm tempted to call Guess Culture as "Read-my-mind Culture". (Contrast this to Ask Culture, where the rule is, to me, very simple and easy to discover: every request is acceptable to make, and if the other person doesn't want you to do what you're asking to do, they just say "no".)
1[anonymous]4y
It might be hard to take a normative stance, but if culture 1 makes you feel better AND leads to better results AND helps people individuate and makes adults out of them, then maybe it's just, y'know, better. Not "better" in the naive mistake-theorist assumption that there is such a thing as a moral truth, but "better" in the correct conflict-theorist assumption that it just suits you and me and we will exert our power to make it more widely adopted, for the sake of us and our enlightened ideals.

When somebody is advocating taking an action, I think it can be productive to ask "Is there a good reason to do that?" rather than "Why should we do that?" because the former phrasing explicitly allows for the possibility that there is no good reason, which I think makes it both intellectually easier to realize that and socially easier to say it.

I just noticed that I've got two similarity clusters in my mind that keep getting called to my attention by wording dichotomies like high-priority and low-priority, but that would themselves be better labeled as big and small. This was causing me to interpret phrases like "doing a string of low-priority tasks" as having a positive affect (!) because what it called to mind was my own activity of doing a string of small, on-average medium-priority tasks.

My thought process might improve overall if I toss out the "big" and "small"... (read more)