All of MoreOn's Comments + Replies

Rationality Quotes January 2013

I am firmly atheist right now, lounging in my mom's warm living room in a comfy armchair, tipity-typing on my keyboard. But when I go out to sea, alone, and the weather turns, a storm picks up, and I'm caught out after dark, and thanks to a rusty socket only one bow light works... well, then, I pray to every god I know, starting with Poseidon, and sell my soul to the devil while I'm at it.

I'm not sure why I do it.

Maybe that's what my brain does to occupy the excess processing time? In high school, when I still remembered it, I used to recite the litany against... (read more)

How would you talk a stranger off the ledge?

I'd love to redirect everyone in my blast radius who's ever mentioned suicide to a hotline, but somehow I think that's the first thing just about anyone says when someone mentions suicide... to the point where "get professional help" is synonymous with "I don't want to deal with this personally."

In a similar vein, do suicide hotlines actually work? I'm reading up on them right now, and found this alarming article, which basically says that sometimes the call centers screw up, but that overall they work reasonably well, and that lapses need to be ... (read more)

1David_Gerard10yThey appear (from the experience of friends who have brains such that they have had to frequently resort to them) to be a vast improvement over not having them. The volunteers are imperfect humans, but actually care about what they're doing, which seems to help.
5CronoDAS10yI once actually tried calling one of those hotlines to see what they were like; I waited on hold for a while and then gave up.
Why Our Kind Can't Cooperate

“If I agree, why should I bother saying it? Doesn’t my silence signal agreement enough?”

That’s been my non-verbal reasoning for years now! Not just here: everywhere. People have been telling me, with varying degrees of success, that I never even speak except to argue. To those who have been successful in getting through to me, I would respond with, “Maybe it sounds like I’m arguing, but you’re WRONG. I’m not arguing!”

Until I read this post, I wasn’t even aware that I was doing it. Yikes!

6Omegaile10yThe fact is that there is a strong motive to disagree: either I change my opinion, or you do. On the other hand, the motives for agreeing are much more subtle: there is an ego boost; and I can influence other people to conform. Unless I am a very influential person, these two reasons are important as a group, but not much individually. Which leads us to think: there is a similar problem with elections, and why economists don't vote. Anyway, there is a nice analogy with physics: electromagnetic forces are much stronger than gravitational ones, but at large scales gravity is much more influential. (Which is kind of obvious, and made me wonder why no one pointed this out on this post before.)
[SEQ RERUN] Why is the Future So Absurd?

Gotcha. I wasn't aware that there had been more discussion about sequence reruns than that one thread.

Magic Tricks Revealed: Test Your Rationality

If that teacher's students were absolutely not expecting a lie, then another out-of-the-box question based on physics they should understand wouldn’t trick them: the trust has now been broken. On the other hand, if the problem is their inability to be creative enough, they won’t become creative just because they learned not to trust the teacher.

My high school physics teacher liked tricking us. Demonstrating his point about reflections off of light/dark surfaces, he covered up the laser pointer while shining it at a black binder. He put a co... (read more)

[SEQ RERUN] Why is the Future So Absurd?

Discuss the post here (rather than in the comments to the original post).

This comment by alexflint doesn't look like it got much exposure back when sequence reruns were first discussed.

Maybe the template shouldn't be instructing people to leave comments here?

0MinibearRex10yWhen it was discussed, many people asked that comments go in the rerun, so that the "recent comments" feature wasn't showing comments on EY's old posts. If you do want to discuss something about a rerun, by all means leave a comment. But if you do have a question about a post that we won't get to for another year, then definitely leave it on the original post and hope someone spots it.
If You Demand Magic, Magic Won't Help

So, magic is easy. Then, everyone else is doing it, too. (And you're spending a good portion of your learning curve struggling with the magical equivalent of flipping a light switch). It's even more mundane than difficult magic.

By comparison, how many times today have you thought, "Wow! I'm really glad I have eyesight!" Well, now you have. But it's not something you go around thinking all the time. Why do you expect that you'd think "Wow! I'm really glad I have easy magic!" any more frequently?

3DanielLC10yTrue, but eyesight is awesome whether or not I explicitly think about it. I'm happy because I have eyesight. It's just that there's a somewhat longer chain of causality than if I'm happy that I have eyesight. I have eyesight, therefore I can use a monitor, therefore I can use the internet, therefore I can do fun stuff on the internet, therefore I am happy.
Joy in Discovery

The problem with routine discoveries, like my most recent discovery of how a magic trick works or the QED-euphoria I get after getting a proof down, is that the feeling doesn't last long. I can't output 5 proofs/solutions an hour.



Introduction to the Sequence Reruns


At this point, [SEQ RERUNS] get very few responses. Barely any discussion happens in [SEQ RERUNS]. Might as well post comments on the original post and hope someone will respond in a couple of months.

Trust in Math

"Huh, if I didn't spot this flaw at first sight, then I may have accepted some flawed congruent evidence too. What other mistaken proofs do I have in my head, whose absurdity is not at first apparent?"

Has this question ever been answered? It is one of those things I go around worrying about.

Magic Tricks Revealed: Test Your Rationality

Bwahahahahahaha! I'll admit I kinda freaked out at first.

[SEQ RERUN] Availability

Subjects thought that accidents caused about as many deaths as disease.

Lichtenstein et al.'s research subjects were 1) college students and 2) members of a chapter of the League of Women Voters. Students thought that accidents were 1.62 times more likely than diseases, and league members thought they were 11.6 times more likely (geometric mean). Sadly, no standard deviation was given. The true value is 15.4. Note that only 57% of students and 79% of league members got the direction right, which further biased the geometric mean down.
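The direction-error point is easy to see concretely. A toy sketch (the ratio estimates below are made up for illustration, not Lichtenstein's data): any estimate below 1 has a negative logarithm, so even a few wrong-direction answers drag the geometric mean down sharply.

```python
import math

def geom_mean(xs):
    """Geometric mean: exponentiate the average of the logs."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical ratio estimates of accident deaths vs. disease deaths.
# Values below 1 mean the subject got the direction wrong.
right_direction = [5.0, 10.0, 20.0, 40.0]
with_wrong = right_direction + [0.5, 0.8]

print(round(geom_mean(right_direction), 2))  # 14.14
print(round(geom_mean(with_wrong), 2))       # 5.02
```

Two wrong-direction answers out of six cut the geometric mean by roughly two-thirds, which is the bias described above.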

Th... (read more)

[This comment is no longer endorsed by its author]
Preference For (Many) Future Worlds

People have been gambling for millennia. Most of the people who have lost bets have done so without killing themselves. Much can be learned from this. For example, that killing yourself is worse than not killing yourself. This intuition is one that should follow over to ‘quantum’ gambles rather straightforwardly.

You weren't one of those people.

That non-ancestor of yours who played Quantum Russian Roulette with fifteen others is dead from your perspective, his alleles slightly underrepresented in the gene pool. In fact, if there were an allele for "... (read more)

Preference For (Many) Future Worlds

Reality wouldn't be mean to you on purpose.

Of course there would be worlds where something would have gone horribly wrong if you won the lottery. But there's no reason for you to expect that you'd wake up in one of those worlds because you won the lottery. The difference between your "horribly wrong" worlds (don't care about money/ inflation / no money) and wedrifid's (lost the lottery and became crippled) is that waking up in wedrifid's is caused by one's participation in the lottery.

The Mystery of the Haunted Rationalist

A Toilet Flush Monster climbed out of my toilet whenever I flushed at night. If I could get back into bed completely covered by a blanket before it fully climbed out (i.e., the tank filled with water and stopped making noises), then I was safe. All lights had to be off the whole time, or else the monster could see me.

Don't laugh.

In one of my childhood's flashes of clarity, I must have wondered how I knew about the monster if I'd never actually seen it. So one day I watched the toilet flush, and no monster came out. I checked with the lights off, an... (read more)

Making Beliefs Pay Rent (in Anticipated Experiences)

Taboo'ed. See edit.

Although I have a bone to pick with the whole "belief in belief" business, right now I'll concede that people actually do carry beliefs around that don't lead to anticipated experiences. Wulky Wilkinsen being a "post-utopian" (as interpreted from my current state of knowing 0 about Wulky Wilkinsen and post-utopians) is a belief that doesn't pay any rent at all, not even a paper that says "moneeez."

Use curiosity

The fact that I haven't noticed the same thing in casual conversations either speaks volumes for my conversation skills (lack thereof), or suggests that maybe not all people are as trigger-happy on the ignore button as you suggest.

2Dorikka11yOr that the people who you have casual conversations with are significantly different from those that JGWeissman does.
Making Beliefs Pay Rent (in Anticipated Experiences)

Two people have semantically different beliefs.

Both beliefs lead them to anticipate the same experience.

EDIT: In other words, two people might think they have different beliefs, but when it comes to anticipated experiences, they have similar enough beliefs about the properties of sound waves and the properties of falling trees and recorders and etc etc that they anticipate the same experience.

3JGWeissman11yTaboo "semantically". See also the example of The Dragon in the Garage, as discussed in the followup article.
Making Beliefs Pay Rent (in Anticipated Experiences)

That said, I don't actually know anyone for whom this is true.

I don't know too many theist janitors, either. Doesn't mean they don't exist.

From my perspective, it sucks to be them. But once you're them, all you can do is minimize your misery by finding some local utility maximum and staying there.

Making Beliefs Pay Rent (in Anticipated Experiences)

If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.

In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?

1Steven_Bukal11yOr they pay you with forged bills. You think you'll be able to deposit them at the bank and spend them to buy stuff, but what actually happens is the bank freezes your account and the teller at the store calls the police on you.
0JGWeissman11yNo, for an example of beliefs that don't pay rent in any anticipated experience, see the first 3 paragraphs of this article:
Making Beliefs Pay Rent (in Anticipated Experiences)

"Smart and beautiful" Joe is being Pascal's-mugged by his own beliefs. His anticipated experiences lead to exorbitantly high utility. When failure costs (relatively) little, it subtracts little utility by comparison.

I suppose you could use the same argument for the lottery-playing Joe. And you would realize that people like Joe, on average, are worse off. You wouldn't want to be Joe. But once you are Joe, his irrationality looks different from the inside.

Making Beliefs Pay Rent (in Anticipated Experiences)

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’d broken the ice—”

She thinks: “What a butt-ugly idiot!” and gets the hell away from him.

Joe goes on happily believing that he’s smart and beautifu... (read more)

1Viktor Riabtsev3yI am going to try and sidetrack this a little bit. Motivational speeches, pre-game speeches: these are real activities that serve to "get the blood flowing" as it were. Pumping up enthusiasm, confidence, courage and determination. These speeches are full of cheering lines, applause lights etc., but this doesn't detract from their efficacy or utility. Bad morale is extremely detrimental to success. I think that "Joe has utility-pumping beliefs", in the sense that he actually believes the false fact that "he is smart and beautiful", is the wrong way to think of this subject. Joe can go in front of a mirror and proceed to tell/chant to himself 3-4 times: "I am smart! I am beautiful! Mom always said so!". Is he not, in fact, simply pumping himself up? Does it matter that he isn't using any coherent or quantitative evaluation methods with respect to the terms "smart" or "beautiful"? Is he not simply trying to improve his own morale? I think the right way to describe this situation is actually: "Joe delivers self-motivational mantras/speeches to himself" and believes that this is beneficial. This belief does pay in anticipated experiences. He does feel more confident afterwards; it does make him more effective in conveying himself and his ideas in front of others. It's a real effect, and it has little to do with a false belief that he is actually "smart and beautiful".
4buybuydandavis10yI think you've hit on one of the conceptual weaknesses of many Rationalists. Beliefs can pay rent in many ways, but Rationalists tend to only value the predictive utility of beliefs, and pooh-pooh the other utilities of belief. Comfort utility - it makes me feel good to believe it. Social utility - people will like me for believing it. Efficacy utility - I can be more effective if I believe it. Predictive truth is a means to value, and even if a value in itself, it's surely not the only value. Instead of pooh-poohing other types of utility, to convince people you need to use that predictive utility to analyze how the other utilities can best be fulfilled.
0JGWeissman11yIn this example, Joe's belief that he's smart and beautiful does pay rent in anticipated experience. He anticipates a favorable reaction if he approaches a girl with his gimmick and pickup line. As it happens, his inaccurate beliefs are paying rent in inaccurate anticipated experiences, and he goes wrong epistemically by not noticing that his actual experience differs from his anticipated experience and that he should update his beliefs accordingly. The virtue of making beliefs pay rent in anticipated experience protects you from forming incoherent beliefs, maps not corresponding to any territory. Joe's beliefs are coherent, correspond to a part of the territory, and are persistently wrong.
0NancyLebovitz11yIs there a difference between utility and anticipated experiences? I can see a case that utility is probability of anticipated, desired experiences, but for purposes of this argument, I don't think that makes for an important difference.
9jimrandomh11yThey can. They just do so very rarely, and since accepting some inaccurate beliefs makes it harder to determine which beliefs are and aren't beneficial, in practice we get the highest utility from favoring accuracy. It's very hard to keep the negative effects of a false belief contained; they tend to have subtle downsides. In the example you gave, Joe's belief that he's already smart and beautiful might be stopping him from pursuing self-improvements. But there definitely are cases where accurate beliefs are definitely detrimental; Nick Bostrom's Information Hazards has a partial taxonomy of them.
1TheOtherDave11yWell, he might. Or, rather, there might be available ways of becoming smarter or prettier for which jettisoning his false beliefs is a necessary precondition. But, admittedly, he might not. Anyway, sure, if Joe "terminally" values his beliefs about the world, then he gets just as much utility out of operating within a VR simulation of his beliefs as out of operating in the world. Or more, if his beliefs turn out to be inconsistent with the world. That said, I don't actually know anyone for whom this is true.
4Manfred11yThe trouble is that this rationale leads directly to wireheading at the first chance you get - choosing to become a brain in a vat with your reward centers constantly stimulated. Many people don't want that, so those people should make their beliefs only a means to an end. However, there are some people who would be fine with wireheading themselves, and those people will be totally unswayed by this sort of argument. If Joe is one of them... yeah, sure, a sufficiently pleasant belief is better than facing reality. In this particular case, I might still recommend that Joe face the facts, since admitting that you have a problem is the first step. If he shapes up enough, he might even get married and live happily ever after.
4Spurlock11yIt's sort of taken for granted here that it is in general better to have correct beliefs (though there have been some discussions as to why this is the case). It may be that there are specific (perhaps contrived) situations where this is not the case, but in general, so far as we can tell, having the map that matches the territory is a big win in the utility department. In Joe's case, it may be that he is happier thinking he's beautiful than he is thinking he is ugly. And it may be that, for you, correct beliefs are not themselves terminal values (ends in themselves). But in both cases, having correct beliefs can still produce utility. Joe for example might make a better effort to improve his appearance, might be more likely to approach girls who are in his league and at his intellectual level, thereby actually finding some sort of romantic fulfillment instead of just scaring away disinterested ladies. He might also not put all his eggs in the "underwear model" and "astrophysicist" baskets career-wise. You can further twist the example to remove these advantages, but then we're just getting further and further from reality. Overall, the consensus seems to be that wrong beliefs can often be locally optimal (meaning that giving them up might result in a temporary utility loss, or that you can lose utility by not shifting them far enough towards truth), but a maximally rational outlook will pay off in the long run.
How to Not Lose an Argument

More generally you cannot rigorously prove that for all integers n > 0, P(n) -> P(n+1) if it is not true, and in particular if P(1) does not imply P(2).

Sorry, I can't figure out what you mean here. Of course you can't rigorously prove something that's not true.

I have a feeling that our conversation boils down to the following:

Me: There exists a case where induction fails at n=2.

You: For all cases, if induction doesn’t fail at n=2, doesn’t mean induction doesn’t fail. Conversely, if induction fails, it doesn’t mean it fails at n=2. You have to care... (read more)

7Sniffnoy11yTo butt in, I doubt your interlocutors were attempting to argue this point; they seem like they were having more fundamental issues. But your original argument does seem to be a bit confused. Induction fails here because the inductive step fails at n=2. The inductive step happens to be true for n>2, but it is not true in general, hence the induction is invalid. The point is, rather than "you have to check n=2" or something similar, all that's going on here is that you have to check that your inductive step is actually valid. Which here means checking that you didn't sneak in any assumptions about n being sufficiently large. What's missing is not additional parts to the induction beyond base case and inductive step, what's missing is part of the proof of the inductive step.
3JGWeissman11yYour hindsight is accurate, but more than just recognizing the claim as true when presented to you, I am trying to get you to take it seriously and actively make use of it, by trying to rigorously prove things rather than produce sloppy verbal arguments that feel like a proof, which is possible to do for things that aren't true. This is accurate, and related, but not the entire point. Distinguish between a proof by mathematical induction and the process of attempting to produce a proof by mathematical induction. One possible result of attempting to produce a proof is a proof. Another possible result is the identification of some difficulty in the proof that is the basis of an insight that induction isn't the right approach or, as in the colored horses examples, that the thing you are trying to prove is not actually true. The point is that if you are properly attempting to produce a proof, which includes noticing difficulties that imply that the claim you are trying to prove is not actually true, you will either produce a valid proof or identify why your approach fails to provide a proof. No, your interlocutors were not arguing this point. Their performance, as reported by you, was horribly irrational. But you should apply as much scrutiny to your own beliefs and arguments as to your interlocutors.
How to Not Lose an Argument

"I refuse to cede you the role of instructor by letting you define the hypothetical."

You know, come think of it, that's actually a very good description of the second person... who is, by the way, my dad.

I am a lot more successful if I adopt the stance of "I am thinking about a problem that interests me," and if they express interest, explaining the problem as something I am presenting to myself, rather than to them. Or, if they don't, talking about something else.

This hasn't ever occurred to me, but I'll try it the next time a similar situation arises.

How to Not Lose an Argument

But why can you take a horse from the overlap? You can if the overlap is non-empty. Is the overlap non-empty? It has n-1 horses, so it is non-empty if n-1 > 0. Is n-1 > 0? It is if n > 1. Is n > 1? No, we want the proof to cover the case where n=1.

That's exactly what I was trying to get them to understand.

Do you think that they couldn't, and that's why they started arguing with me on irrelevant grounds?

And the point that I am trying to get you to understand, is that you do not need special rule to always check P(2) when making a proof by induction, in this case where the induction fails at P(1) -> P(2), carefully trying to prove the induction step will cause you to realize this. More generally you cannot rigorously prove that for all integers n > 0, P(n) -> P(n+1) if it is not true, and in particular if P(1) does not imply P(2).

How to Not Lose an Argument

.... The first n horses and the second n horses have an overlap of n-1 horses that are all the same color. So the first and the last horse have to be the same color. Sorry, I thought that was obvious.

I see your point, though. This time, I was trying to reduce the word count because the audience is clearly intelligent enough to make that leap of logic. I can say the same for both of my "opponents" described above, because both of them are well above average intellectually. I honestly don't remember if I took that extra step in real life. If I haven't,... (read more)

6JGWeissman11yYou need to make this more explicit, to expose the hidden assumption: Take a horse from the overlap, which is the same color as the first horse and the same color as the last horse, so by transitivity, the first and last horse are the same color. But why can you take a horse from the overlap? You can if the overlap is non-empty. Is the overlap non-empty? It has n-1 horses, so it is non-empty if n-1 > 0. Is n-1 > 0? It is if n > 1. Is n > 1? No, we want the proof to cover the case where n=1.
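JGWeissman's breakdown can be checked mechanically. A minimal sketch (the index bookkeeping is my own framing, not from the thread):

```python
def overlap(n):
    """Among n+1 horses numbered 0..n, the inductive step compares the
    'first n' (indices 0..n-1) with the 'last n' (indices 1..n) and
    needs a horse in their overlap (indices 1..n-1)."""
    first_n = set(range(0, n))
    last_n = set(range(1, n + 1))
    return first_n & last_n

for n in (1, 2, 3):
    print(n, sorted(overlap(n)))
# 1 [] -- empty overlap: the step from P(1) to P(2) has nothing to stand on
# 2 [1]
# 3 [1, 2]
```

The overlap has n-1 elements, so it is non-empty exactly when n > 1, which is the hidden assumption the comment identifies.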
How to Not Lose an Argument

I suspect that I lost the second person way before horses even became an issue. When he started picking on my words, "horses" and "different world" and "hypothetical person" didn't really matter anymore. He was just angry. What he was saying didn't make sense from that point on. For whatever reason, he stopped responding to logic.

But I don't know what I said to make him this angry in the first place.

Leaving aside the actual argument, I can tell you that there exist people (my husband is one of them, and come to think of it so is my ex-girlfriend, which makes me suspect that I bear some responsibility here, but I digress) whose immediate emotional reaction to "here, let me walk you through this illustrative hypothetical case" is strongly negative.

The reasons given vary, and may well be confabulatory.

I've heard the position summarized as "I don't believe in hypothetical questions," which I mostly unpack to mean that they understand ... (read more)

How to Not Lose an Argument

I don't think I ever got to my "ultimate" conclusion (that all of the operations that appear in step n must appear in the basis step).

I was trying to use this example where the proof failed at n=2 to show that it's possible in principle for a (specific other) proof to fail at n=2. Higher-order basis steps would be necessary only if there were even more operations.

How to Not Lose an Argument

Induction based on n=1 works sometimes, but not always. That was my point.

The problem with the horses of one color problem is that you are using sloppy verbal reasoning that hides an unjustified assumption that n > 1.

I'm not sure what you mean. I thought I stated it each time I was assuming n=1 and n=2.

0Nebu6yIn the induction step, we reason "The first horse is the same colour as the horses in the middle, and the horses in the middle have the same colour as the last horse. Therefore, all n+1 horses must be of the same colour". This reasoning only works if n > 1, because if n = 1, then there are no "horses in the middle", and so "the first horse is the same colour as the horses in the middle" is not true.
How to Not Lose an Argument

Most of the comments in this discussion focused on topics that are emotionally significant for your "opponent." But here's something that happened to me twice.

I was trying to explain to two intelligent people (separately) that mathematical induction should start with the second step, not the first. In my particular case, a homework assignment had us do induction on the rows of a lower triangular matrix as it was being multiplied by various vectors; the first row only had multiplication, the second row both multiplication and addition. I figured i... (read more)

3Pfft8yIn The Society of Mind, Marvin Minsky writes about "Intellectual Trauma". This seems to fit the anecdote very well--your interlocutor could not find a fault in the reasoning, noticed it led to an absurdity, and decided that this intellectual area is dangerous, scary, and should be evacuated as soon as possible.
7Douglas_Reay8yYou might find enlightening the part of the TED talk given by James Flynn (of the Flynn effect), where he talks about concrete thinking.
-3David_Gerard11y"No. Just an example. Lies propagate, that's what I'm saying. You've got to tell more lies to cover them up, lie about every fact that's connected to the first lie. And if you kept on lying, and you kept on trying to cover it up, sooner or later you'd even have to start lying about the general laws of thought. Like, someone is selling you some kind of alternative medicine that doesn't work, and any double-blind experimental study will confirm that it doesn't work. So if someone wants to go on defending the lie [] , they've got to get you to disbelieve in the experimental method. Like, the experimental method is just for merely scientific kinds of medicine, not amazing alternative medicine like theirs. Or a good and virtuous person should believe as strongly as they can, no matter what the evidence says. Or truth doesn't exist and there's no such thing as objective reality. A lot of common wisdom like that isn't just mistaken, it's anti-epistemology, it's systematically wrong. Every rule of rationality that tells you how to find the truth, there's someone out there who needs you to believe the opposite. If you once tell a lie, the truth is ever after your enemy; and there's a lot of people out there telling lies."
2ArisKatsaris11yYou didn't actually prove that n+1 horses have one color with this, you know, even given the assumption. You just said twice that n horses have one color, without proving that their combined set still has one color. For example, consider the following: "Suppose every n horses can fit in my living room. Add the (n+1)th horse, and take n out of those horses. They can fit in my living room by assumption. Remove 1 horse and take the one that’s been left out. You again have n horses, so they must again fit in my living room. Therefore, all n+1 horses fit in my living room." That's not proper induction. It doesn't matter if you begin with an n of 1, 2, 5, or 100 horses; such an attempt at induction would still be wrong, because it never shows that the proposition actually applies to the set of n+1.
4Tyrrell_McAllister11yThe problem was that your ultimate conclusion was wrong. It is not in fact the case that "mathematical induction should start with the second step, not the first." It's just that, like all proofs, you have to draw valid inferences at each step. As JGWeissman points out, the horse proof fails at the n=2 step. But one could contrive examples in which the induction proof fails at the kth step for arbitrary k.
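Tyrrell_McAllister's point that the break can sit at an arbitrary step k is easy to illustrate with a toy predicate of my own (not from the thread): let P(n) be "n <= K". The base case holds, yet the inductive step P(n) -> P(n+1) fails at exactly n = K.

```python
K = 5  # choose any k

def P(n):
    # Toy claim: "n <= K". True at the base case, false eventually.
    return n <= K

assert P(1)  # base case holds

# Find where the inductive step P(n) -> P(n+1) breaks.
broken = [n for n in range(1, 11) if P(n) and not P(n + 1)]
print(broken)  # [5] -- the induction fails exactly at step n = K
```

So "always check n=2" is no safeguard; only verifying the inductive step in full generality is.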
1Alicorn11yWhy didn't you drop the "horses" example when it tripped him up and go with, I dunno, emeralds or ceramic pie weights or spruckels, stipulated to in fact have uniform color?
5JGWeissman11yMathematical induction using the first step as the base case is valid. The problem with the horses of one color problem is that you are using sloppy verbal reasoning that hides an unjustified assumption that n > 1. If you had tried to make a rigorous argument that the set of n+1 elements is the union of two of its subsets with n elements each, with those subsets having a non-empty intersection, this would be clear.

So what you're basically saying is that EDT is vulnerable to Simpson's Paradox?

But then, aren't all conclusions drawn from incomplete sets of data potentially at risk from unobserved causations? And complete sets of data are ridiculously hard (if not impossible) to obtain anyway.
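For a concrete picture of the reversal such an unobserved factor can cause, here is a sketch using the textbook kidney-stone numbers commonly quoted for Simpson's Paradox (the code framing is mine):

```python
# (successes, trials) for treatments A and B, stratified by stone size.
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each stratum, treatment A has the higher success rate.
for stratum, arms in data.items():
    assert rate(*arms["A"]) > rate(*arms["B"])

# Pooled over strata, the direction flips and B looks better.
totals = {t: [sum(x) for x in zip(*(arms[t] for arms in data.values()))]
          for t in ("A", "B")}
print({t: round(rate(*v), 2) for t, v in totals.items()})
# {'A': 0.78, 'B': 0.83}
```

A decision procedure that conditions only on the pooled table picks B; stratifying on the confounder picks A.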

Enjoying musical fashion: why not?

I'm sure that you're absolutely technically correct in what you said, but I had to reread it 5 times just to figure out what you meant, and I'm still not sure.

Are you saying that the strategy to indiscriminately like whatever's popular will lead to worse outcomes because of random effects, as in this experiment that showed that popularity is largely random? Then you're right--because what are the chances that your preferences exactly match the popular choice?

On the other hand, if it so happens that you end up liking something that's popular and ... (read more)

3Clippy11yYes, that's correct. Generally, permitting your valuations or beliefs to be influenced hysteretically in the direction of what's popular is bad. However, because aesthetic judgments have minor epistemic harm, and have instrumental value due to better "bonding" with other humans, that general heuristic does not apply here, so you can safely permit yourself to be drawn toward the judgment of others with respect to music. I shouldn't, but it's safe for humans.
The Hidden Complexity of Wishes

"I wish that the genie could understand a programming language."

Then I could program it unambiguously. I obviously wouldn't be able to program my mother out of the burning building on the spot, but at least there would be a host of other wishes I could make that the genie won't be able to screw up.

Enjoying musical fashion: why not?

I think alexflint's point is something along the lines of "it's okay to like popular things just because they're popular."

No, it is not okay (in the sense of being non-detrimental to one's terminal values) to like popular things simply on the basis that they are popular. Decision theories following this heuristic are vulnerable to numerous low-complexity attack vectors, leading them to (for example) perpetuate, generate, and incorrectly update on information cascades.

It would be more accurate to say that "Giving in to social pressure to have aesthetic preference on the basis of the popularity of such preferences has non-obvious and immodular benefits to one's terminal values, which are likely to outweigh their decision-theoretic vulnerabilities within human cultures."
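The information-cascade failure mode mentioned above can be sketched in a few lines, assuming the standard urn-model setup (the agent rule and parameters here are my own simplifications, not anyone's canonical model): each agent gets a noisy private signal, sees everyone's earlier public choices, and follows a naive tally of prior choices plus its own signal. Once the tally reaches plus or minus 2, no signal can flip it, so every later agent copies, and sometimes the whole population locks onto the wrong answer.

```python
# A minimal information-cascade sketch: agents choose between options 0
# and 1, where `true_option` is correct. Each private signal is right
# with probability p. An agent tallies earlier public choices plus its
# own signal and goes with the majority (ties follow the signal). Once
# the running tally of choices hits +/-2, later signals can't overturn
# it, and the cascade locks in, sometimes on the wrong option.

import random

def run_cascade(true_option, n_agents, p=0.7, seed=None):
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = true_option if rng.random() < p else 1 - true_option
        score = sum(1 if c == 1 else -1 for c in choices)
        score += 1 if signal == 1 else -1
        if score > 0:
            choices.append(1)
        elif score < 0:
            choices.append(0)
        else:
            choices.append(signal)
    return choices

# Over many runs, a nonzero fraction of cascades end on the wrong option.
wrong_cascades = sum(
    run_cascade(true_option=1, n_agents=50, seed=s)[-1] == 0
    for s in range(200)
)
print(wrong_cascades)
```

Note that the naive "count public choices as evidence" rule is precisely the incorrect updating being criticized: once a cascade starts, the copied choices carry no new information, yet the tally keeps treating them as if they did.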

Ability to react

Thanks for bringing this up. Now that you've said it, I think I'd observed something similar about myself. Like you, I find it far easier to solve internal problems than external. In SCUBA class, I could sketch the inner mechanism of the 2nd stage, but I'd be the last to put my equipment together by the side of the pool.

Your description maps really well onto introversion and extroversion. I searched for psychology articles on extraversion, introversion and learning styles. A lot of research has been done in that area. For example:

Through the use of EPQ vs.... (read more)

0Swimmer96311yWOW MoreOn...this is exactly the kind of thorough research/data I was looking for. I think I may have done the Eysenck test or a similar one in high school, during our "Civics and Careers" class.
3Torvaun11yUh, no. Pressure affects boiling point. If you're at a different pressure, it should not boil at 100 degrees C. If your water is contaminated by, say, alcohol, the boiling point will change. We aren't trying to explain away datapoints, we're using them to build a system that's larger than "Water boils at 100 degrees Centigrade." Just adding "at standard temperature and pressure," to the end of that gives a wider range of predictable and falsifiable results. What we're doing is rationality, not rationalization.
Variable Question Fallacies

Well in that case Earth doesn't really go around the sun, it just goes around the center of this galaxy on this weird wiggly orbit and the sun happens to always be in a certain position with respect to... ouch! See what I did? I babbled myself into ineptness by trying to be "absolutely technically correct." I just can't. Even if I finished that "absolutely technically correct" sentence, I'd probably be wrong in some other way I haven't even imagined yet.

So let's accept the fact that not everything that is said which is true is "... (read more)

0bigjeff511yI believe you missed my point entirely. I was simply describing why Hunga Huntergatherer might not have realized that it is the earth that goes round the sun. Hunga's map is still extremely useful, particularly for getting your bearings. The old saying "the sun rises in the east and sets in the west" is still useful even though it is the earth spinning to create the effect rather than the sun actually moving around the earth (which is implied in the saying). It's worth noting that Hunga's map is included in Amara's map, not eliminated by it. Albert's map also includes Barry's map, just like Einstein's map of gravity includes Newton's map. They're all still just maps though, and should be treated as such.
The Scales of Justice, the Notebook of Rationality

You're right, of course.

I'd written the above before I read this defense of researchers, before I knew to watch myself when I'm defending research subjects. Maybe I was too much in shock to actually believe that people would honestly think that.

2bigjeff511yYeah, it's a roundabout inference that I think happens a lot. I notice it myself sometimes when I hear X, assume X implies Y, and then later find out Y is not true. It's pretty difficult to avoid, since it's so natural, but I think the key is when you get surprised like that (and even if you don't), you should re-evaluate the whole thing instead of just adjusting your overall opinion slightly to account for the new evidence. Your accounting could be faulty if you don't go back and audit it.

Try answering this without any rationalization:

In my middle school science lab, a thermometer showed me that water boiled at 99.5 degrees C and not 100. Why?

8jslocum11yYou've missed a key point, which is that rationalization refers to a process in which one of many possible hypotheses is arbitrarily selected, which the rationalizer then attempts to support using a fabricated argument. In your query, you are asking that a piece of data be explained. In the first case, one filters the evidence, rejecting any data that too strongly opposes a pre-selected hypothesis. In the second case, one generates a space of hypotheses that all fit the data, and selects the most likely one as a guess. The difference is between choosing data to fit a hypothesis, and finding a hypothesis that best fits the data. Rationalization is pointing to a blank spot on your map and saying, "There must be a lake somewhere around there, because there aren't any other lakes nearby," while ignoring the fact that it's hot and there's sand everywhere.
1Desrtopa11yWhat elevation was your school at?
5Torvaun11yMy experience leads me to assume that the thermometer was mismarked. My high school chemistry teacher drilled into us that the thermometers we had were all precise, but of varying accuracy. A thermometer might say that water boils at 99.5 C, but if it did, it would also say that it froze at -0.5 C. Again, there are conditions that actually change the temperature at which water boils, so it's possible you were at a lower atmospheric pressure or that the water was contaminated. But, given that we have a grand total of one data point, I can't narrow it down to a single answer.
4ksvanhorn11yWhat altitude were you at?

I suspect you have a point that I'm missing.

My take is: either the reading was wrong (experimental error of some kind), or it wasn't wrong. If it wasn't wrong, then your water was boiling at 99.5 degrees. There are a number of plausible explanations for the latter; the one that I assign the highest prior to is that you were at an elevation higher than sea level.

So, my answer is in the form of a probability distribution. Give me more evidence, and I will refine it, or demand an answer now, and I will tell you "altitude", my current most plausi... (read more)
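For what it's worth, "altitude" is quantitatively plausible. Here's a back-of-the-envelope sketch, assuming an isothermal barometric pressure profile and Clausius-Clapeyron with a constant heat of vaporization (both are simplifications, so treat the numbers as rough):

```python
# Rough estimate: at what altitude does water boil at 99.5 C?
# Assumptions: isothermal barometric formula for pressure vs height,
# and Clausius-Clapeyron with constant molar heat of vaporization.

import math

R = 8.314    # J/(mol K), gas constant
L = 40660.0  # J/mol, molar heat of vaporization of water (approx.)
T0 = 373.15  # K, boiling point at sea-level pressure

def boiling_point(pressure_ratio):
    """Boiling temperature in K at pressure P/P0, from Clausius-Clapeyron."""
    return 1.0 / (1.0 / T0 - R * math.log(pressure_ratio) / L)

def pressure_ratio_at(height_m, air_temp=288.0):
    """Isothermal barometric formula for P/P0 at a given height."""
    molar_mass_air, g = 0.0289644, 9.81
    return math.exp(-molar_mass_air * g * height_m / (R * air_temp))

# Scan heights and watch the boiling point fall below 100 C.
for h in range(0, 501, 50):
    t_celsius = boiling_point(pressure_ratio_at(h)) - 273.15
    print(h, round(t_celsius, 2))
```

On this model, roughly 150 m of elevation is already enough to bring the boiling point down to about 99.5 degrees C, so the middle-school reading needn't imply any experimental error at all.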

The Correct Contrarian Cluster

Why would you expect someone who has a high correct contrarian factor in one area to have it in another?

Bad beliefs do seem to travel in packs (according to Penn and Teller, and Eliezer, anyhow). Lots of alien conspiracy nuts are government conspiracy nuts as well. That's not surprising, because bad beliefs are easy to pick up and they seem to be tribally maintained by the same tribe that maintains other bad beliefs.

But good beliefs? Really good ones? They're difficult. They take years. If you don't know of Less Wrong (or similar) as a source of good beli... (read more)

0William_Quixote9yI think people who are subject matter experts are likely to have several correct contrarian beliefs in their subject area. This should provide at least some clustering. People would also tend to have clusters of correct beliefs in areas their friends were subject matter experts in.
Possibility and Could-ness

The bigger something is, the more predetermined it gets.

But I assume that whenever a classical coin is flipped, there was an earlier quantum, world-splitting event which resulted in two worlds

Then your classical coin is a quantum coin that simply made its decision before you observed it. The outcome of a toss of a real classical coin would be the result of so many quantum events that you might as well consider the toss predetermined (my post above elaborates).

Are there thermodynamic coin flips too?

The exact same goes for a thermodynamic coin flip, ... (read more)

Possibility and Could-ness

I apologize. That's not how I meant it. All events are quantum, and they add up to reality. What I meant was, is free will lost in the addition?

This intuition is hellishly difficult to describe, but the authors of Quantum Russian Roulette and this post on Quantum Immortality seemed to have it, as well as half the people I’d ever heard mentioning Schrödinger's cat. It’s the reason why the life of the person/cat in question is tied to a single quantum event, as opposed to a roll of a classical die that’s determined by a whole lot of quantum events.

Our decision... (read more)

0TobyBartels11yIn the case of Schrödinger's Cat, Schrödinger was criticising the København interpretation, in which there is a distinction drawn between classical and quantum worlds. In this and other thought experiments, if somebody who makes such a distinction might be listening in, then you have to make sure that they will accept that the relevant event is quantum. (Sometimes you also want precise probabilities to work with, so it helps to specify exactly what quantum event is the deciding factor.) This is the reverse of supposing that something is not a quantum event and hoping that those who don't make this distinction will accept it. Yes, but we're back to the objection that there are still a small portion of worlds that come out differently.
Possibility and Could-ness

My free will is in choosing which world my consciousness would observe. If I have that choice, I have free will.

There’re instances when I don’t have free will. Sprouting wings is physically improbable. If I estimate the chance of it happening at epsilon, within the constraints of physics, and even then as a result of random chance, this option wouldn’t really figure in my tree diagram of choices. Likewise, if quantum immortality is correct, then observing myself dying is physically impossible. (But what if the only way not to die would be to sprout wings?... (read more)

1Perplexed11yYou seem to be using a model in which there are two kinds of coin flips. There are quantum coin flips, which cause the world to split. And then there are classical coin flips - deterministic and non world-splitting, though due to our own lack of knowledge of initial conditions, there is a subjective probability of 0.5 for each outcome. I use a model something like this. But I assume that whenever a classical coin is flipped, there was an earlier quantum, world-splitting event which resulted in two worlds, one in which heads is the winning call, and one in which tails is destined to be the result.
3TobyBartels11yThere is no such thing as a non-quantum event. As far as we can tell, quantum physics is reality. Obligatory link to old material: Egan's Law
Possibility and Could-ness

No, I haven’t. I’ve derived my views entirely from this post, plus the article above.

Since you mentioned “The Fabric Of Reality,” I tried looking it up on Less Wrong, and failing that, found its Wikipedia entry. I know not to judge a book by its Wikipedia page, but I still fail to see the similarity. Please enlighten me if you don't mind.

The following are statements about my mind-state, not about what is:

I don’t see why my view would be incapable of distinguishing free decisions from randomly determined ones. I’d go with naïve intuition: if I chose X and n... (read more)

Possibility and Could-ness

Zombie-mes are the replicas of me in alternate worlds. They aren't under my conscious control, so they're "zombies" from my perspective.

Except, in my understanding, they are created every time I make a choice, in proportion to the probability that I would choose X or Y. That is, if there's a 91% chance that I'd choose X, then in 91% of the worlds the zombie-mes have chosen X, and in the remaining 9% they've chosen Y.

Again, caveat: I don't think physics and probability were meant to be interpreted this way.

5AlephNeil11yYour views on free will sound suspiciously as though you've derived them from "The Fabric Of Reality". Like Deutsch, you don't seem to appreciate that this isn't actually a response to the 'problem of free will' as generally understood, because it's inherently incapable of distinguishing free decisions from randomly determined ones, and is silent on questions of moral responsibility.
Newcomb's Problem and Regret of Rationality

4) Eliezer: just curious about how you deal with paradoxes about infinity in your utility function. If for each n, on day n you are offered to sacrifice one unit of utility that day to gain one unit of utility on day 2n and one unit on day 2n+1 what do you do? Each time you do it you seem to gain a unit of utility, but if you do it every day you end up worse than you started.

dankane, Eliezer answered your question in this comment, and maybe somewhere else, too, that I don't yet know of.

0dankane11yIf he wasn't really talking about infinities, how would you parse this comment (the living forever part): "There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever." At the very least this should imply that for every N there is an f(N) so that he would rather have a 50% chance of living f(N) years and a 50% chance of dying instantly than having a 100% chance of living for N years. We could then consider the game where if he is going to live for N years he is repeatedly offered the chance to instead live f(N) years with 50% probability and 0 years with 50% probability. Taking the bet n+1 times clearly does better than taking it n times, but the strategy "take the bet until you lose" guarantees him a very short life expectancy. If your utility function is unbounded you can run into paradoxes like this.
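dankane's game is easy to instantiate. Take the illustrative choice f(N) = 3N (my pick, not anything from the original comment), so that with utility linear in years each 50/50 bet has positive expected value:

```python
# A toy instance of dankane's game with f(N) = 3N. Taking the bet
# exactly k times means surviving k fair coin flips (probability 0.5**k)
# and then living 3**k * n_years; losing any flip means dying instantly.

def expected_life_after_k_bets(n_years, k):
    return (0.5 ** k) * (3 ** k) * n_years

for k in range(5):
    print(k, expected_life_after_k_bets(10, k))
# Each extra bet multiplies expected lifespan by 1.5, so "bet k+1 times"
# always beats "bet k times" in expectation.

# Yet "bet until you lose" survives k flips with probability 0.5**k -> 0,
# so that strategy ends in the 0-years outcome with probability 1.
```

"Bet exactly k times" improves without bound in expectation, while "bet until you lose" yields death with probability 1, which is exactly the paradox an unbounded utility function invites.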
Possibility and Could-ness

At the risk of drawing wrong conclusions from physics I don't understand, I propose this model of free will within a lawful universe:

As I stand there thinking whether or not I should eat a banana, I can be confident that there's a world where a zombie-me is already eating a banana, and another world where a zombie-me has walked away from a banana.

As I stand near the edge of the cliff, there's a world where a zombie-me has jumped off the cliff to test quantum immortality, and Inspector Darwin has penciled in a slightly lower frequency of my alleles. But the... (read more)

3Manfred11yThere are in fact an infinite number of worlds in which you jump, sprout wings, and survive. To change the words and make the physics entirely well-accepted: if you jump off a cliff, there is a nonzero probability you sprout wings. I don't understand how this gives or doesn't give you free will, though - your actions are just (note: me using that word means I'm playing devil's advocate :D) random. Even though they're weighted towards certain outcomes, that doesn't mean a loaded die has free will when it "chooses" 6.
3ata11yWhat and why are the zombie-yous?
Outside the Laboratory

scientific inquiry with the choice of subject matter motivated by theism is of lower quality than science done without that motivation.

Absolutely. Hence, the warning flag. A scientist expecting to find the evidence of God doesn't just have freeloading beliefs, but beliefs that pay rent in wrong expectations. That's akin to a gambling economist.

best scientists ... tend to be less theistic.

I'd say it's good evidence in favor of P(good science | scientist is theist) < P(good science). Of course, your point about correlation not causation is ... (read more)

Outside the Laboratory

Fixed. Thanks. I didn't realize that my statement read, "A priori reasoning can only be justified if it's a posteriori."

Edit: so what about my actual statement? Or, are we done having this discussion?
