All of KPier's Comments + Replies

We celebrate the May date because May is a good time for a holiday (not close to other major holidays, good weather in our part of the world) and December is very close to the date of Solstice and also close to Christmas, Thanksgiving, etc. 

I appreciate this post. I get the sense that the author is trying to do something incredibly complicated and is aware of exactly how hard it is, and the post does it as well as it can be done. 

I want to try to contribute by describing a characteristic thing I've noticed from people who I later realized were doing a lot of frame control on me: 

Comments like "almost no one is actually trying but you, you're actually trying", "most people don't actually want to hear this, and I'm hoping you're different", "I can only tell you this if you want to hear"... (read more)

The common thread is that you, the listener, are special, and the speaker is the person who gets to recognize you as special, and the proof of your specialness is...

The speaker has granted you a "special" status, and now they can also set the rules you have to follow unless you want that status revoked. How much are you willing to pay in order to keep that precious status?

Antidotes: "I am not special" or "whether I am special or not, does not depend on whether X thinks I am".

Ahhh these are fantastic examples that clearly map onto frame controllers I know and I didn't think of it when writing this post; really great points.

Were the positive tests from the same batch/purchased all together?

Three of them (two positives, one negative) were from the same box, and the other two from different boxes. Other people at my house have taken tests from the same boxes, but no one else had positive results.

And same question for a positive test: if you get a positive and then retest and get a negative, do you have a sense of how much of an overall update you should make? I've been treating that as 'well, it was probably a false positive then', but multiplying the two updates together would imply it's probably legit?

Yeah, based on the Cochrane paper I'd interpret "one positive result and one negative result" as an overall update towards having COVID. In general, both rapid antigen tests and NAATs are more specific than they are sensitive (more likely to return false negatives than false positives). Though also see the "Caveats about infectiousness" section, which suggests that NAATs have a much higher false positive rate for detecting infectiousness than they do for detecting illness. I don't have numbers for this, unfortunately, so I'm not sure if 1 positive NAAT + 1 negative NAAT is overall an update toward or away from infectiousness.
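To make the direction of that update concrete, here's a minimal Bayes sketch. The sensitivity/specificity numbers are illustrative placeholders, not figures from the Cochrane paper, and it assumes test errors are independent (probably wrong in practice, per the correlated-errors discussion):

```python
# Rough Bayes sketch of "one positive, one negative": multiply the
# prior odds by each test's likelihood ratio. The sensitivity and
# specificity values below are illustrative, not Cochrane's numbers.

def posterior(prior, sensitivity, specificity, results):
    """results: list of booleans, True = positive test."""
    odds = prior / (1 - prior)
    for positive in results:
        if positive:
            # likelihood ratio of a positive result
            odds *= sensitivity / (1 - specificity)
        else:
            # likelihood ratio of a negative result
            odds *= (1 - sensitivity) / specificity
    return odds / (1 + odds)

prior = 0.05              # pre-test probability of COVID
sens, spec = 0.70, 0.995  # illustrative rapid-antigen-like figures

print(posterior(prior, sens, spec, [True, False]))
```

With these placeholder numbers the posterior lands around 0.69, far above the 5% prior: the positive's likelihood ratio (~140) dwarfs the negative's (~0.3), which is the "specific but not very sensitive" point in numbers.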

Are test errors going to be highly correlated? If you take two tests (either of the same type or of different types) and both come back negative, how much of an update is the second test?

I'm not super sure; I wrote about this a little in the section "What if you take multiple tests?", but that's just a guess. I'd love to hear from anyone who has a more detailed understanding of what causes failures in NAATs and antigen tests. Naively, I'd expect that if the test fails due to low viral load, that would probably cause correlated failures across all tests taken on the same day. Waiting a few days between tests is probably a good idea, especially if you were likely to be in the early-infection stage (and so likely low viral load) during your first test. The instructions for the BinaxNOW rapid antigen test say that if you get a negative result, you shouldn't repeat the test until 3 days later.

Given your described desiderata, I would think that a slightly more rural location along the coast of California ought to be up there. Large properties in Orinda are not that expensive (there are gorgeous 16-30 acre lots for about $1 million on Zillow right now), and right now, for better and for worse, the Bay is the locus of the rationalist and EA communities and of the tech industry; convincing people to move to a pastoral retreat an hour from the city everyone already lives in is a much easier sell and smoother transition than convincing them to move acros... (read more)

That is, of course, consistent with it being net neutral to give people money which they spend on school fees, if the mechanism here is 'there are X good jobs, all of which go to people who've had formal education, but formal education adds no value here'. In that scenario it's in any individual's interest to send their kid to school, but all of the kids being sent to school does not net improve anything.

It seems kind of unlikely to me that primary school teaches nothing - and even just teaching English and basic literacy and numeracy seems really valuable - but if it does, that wouldn't make this woman irrational, though it would make cash transfers spent on schooling poorly spent overall.

Formal education does improve skills and knowledge, so it's not merely a positional good.
If there is substantial inequality it may still be beneficial on balance to give the poorest people money which they spend on purely-positional school fees.

Thanks for answering this. It sounds like the things in the 'maybe concerns, insufficient info' categories are largely not concerns, which is encouraging. I'd be happy to privately contribute salary and CoL numbers to someone's effort to figure out how much people would save. The salary situation is a little discouraging; there are Lead Java Developer roles listed for £30-50k, no equity, which would pay $150,000-$180,000 base in SF and might well see more than $300k in total compensation. Even if you did want to buy a house... (read more)

Thank you. The LW 2.0 message system doesn't seem to be working properly, so I have sent a Facebook request.

That might be exaggerating the compensation gap just a little bit. The first senior software engineering position on that list offers £57k-£64k and 2-5% equity. (Also, $300k annual compensation would put someone into the top 1% of all US adults; this is not your typical scenario. Even among rationalists, the median non-student income is $75,000.)

Also, hold on, Silicon Valley hasn't seceded from the US. (Yet.) That $300k after taxes comes out to $185k (£141.5k), as opposed to £60k, which comes out to £42.5k here ($55.5k). If we remove only the rent from the living expenses, we have around $167k and £39k. If the houses are $1m and £125k respectively, then you can buy a house there in 6 years, whereas here it would only take 3.2 years. (And no, houses are not meant to be offloaded to a greater fool when it comes time to sell. Although houses in the area we are buying are rising at ~5%/year due to the central location.)

This is basically the general point, yes? The opening line to the economics section: pointing out that "well actually, if you are a super-talented software dev you can make much more money in SF" is, well, not even wrong. If all the talk about optimizing for human flourishing didn't make it clear to people, perhaps the general outlook can be approximated by reading SSC Gives A Graduation Speech.

One of the underlying ideas for the project is that the cost of living is low enough that you can generate your own basic income. When you significantly reduce financial constraints, you can do a lot more things. A few that come to mind:

* writing open-source software that benefits the world but can't be made into a profitable business model
* full-time blogging on a $1000 a month Patreon fanbase
* raising five children with
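The house-buying arithmetic in that comparison can be sanity-checked in a few lines. These are the comment's own rough post-tax savings and house-price figures, not verified market data, and interest, appreciation, and transaction costs are all ignored:

```python
# Years of saving needed to buy a house outright, ignoring mortgage
# interest, house-price appreciation, and transaction costs.

def years_to_house(annual_savings, house_price):
    return house_price / annual_savings

sf = years_to_house(167_000, 1_000_000)       # SF: ~$167k/yr savings, $1m house
manchester = years_to_house(39_000, 125_000)  # Manchester: ~£39k/yr, £125k house
print(round(sf, 1), round(manchester, 1))     # → 6.0 3.2
```

So the quoted "6 years vs 3.2 years" does follow from the stated inputs; the real disagreement is over whether those inputs (especially the savings rates) are representative.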

Are you disagreeing with my prediction? I'd be happy to bet on it and learning that two of the four initial residents are trans women does not change it.

I wrote a post listing reasons why I would not move to Manchester. Since writing it I've gotten more confident about the 'bad culture fit' conclusion by reading bendini's blog. I would also add that the part of the community with the best gender ratio (rationalist tumblr) and the adjacent community with the best gender ratio (Alicorn's fan community) are also the ones with the norms that the founders of this project seem to find most objectionable, and the ones who seem to be the worst culture fit for the project. I think things li... (read more)

I'd rather not lose this exchange to Tumblr's archiving system, so I'll respond to relevant bits here:

Startups are also a thing here too, just obviously not to the same extent as in the startup capital of the world. Good data on startups is very hard to find, but in absolute numbers Manchester is approximately second only to London in the UK.

It is very much true that if you are renting a room in a group house, the pay cut you take here will not be made up for by the significantly reduced cost of living. This does, however, start to change if you want to buy property (you can buy 3 bedroom houses on the street behind ours, a few miles from downtown, for around £125k), have kids, or retire early in a place with rationalists nearby. If people get the data I've been requesting, then I should be able to work out exactly how it compares and at which point someone would be better off here.

Fair point, and something to think about for anyone with ties in the UK/EU.

It's worth pointing out that the main frugality nerd in this project is me; everyone else ranges from "unironically premium mediocre" to "simple tastes but doesn't think too strategically about it". Although everyone has the option to benefit from the fact that my brain treats optimizing spending for X and Y constraints as a game. Also, food is really quite cheap here, partially because supermarkets produce their own branded stuff and it is actually decent quality, so the £60/month you are horrified by is not ramen and water but beef/chicken/pork, butter/olive oil, bread, and a small but adequate serving of fruits and vegetables. I wouldn't recommend it, but you can meet your energy and protein needs with homemade fried chicken for only £25/month.

It might not be mentioned in the doc, but that's because it was written as part of the now-defunct Accelerator project, and its main goal was to convince its leader, Eric Bruylant (yes, that one), not to build a rationalist cult compound in rural Spain. It
welp, 2/4 current residents and the next one planned to come there are trans women so um, what gender ratio issue again?

I would live in this if it existed. Buying an apartment building or hotel seems like the most feasible version of this, and (based on very very minimal research) maybe not totally intractable; the price-per-unit on some hotels/apartments for sale is like $150,000, which is a whole lot less than the price of independently purchasing an SF apartment and a pretty reasonable monthly mortgage payment.

[commented in the incorrect place]

I am suspicious of this as an explanation. Most straight-identified women I know who will dance with/jokingly flirt with other women are in fact straight and not 'implicitly bisexual'; plenty of them live in environments where there'd be no social cost to being bisexual, and they are introspective enough that 'they are actually just straight and don't interpret those behaviors as sexual/romantic' seems most likely.

Men face higher social penalties for being gay or bisexual (and presumably for being thought to be gay or bisexual) which seems a more likely e... (read more)

I am not sure that we're communicating meaningfully here. I said that there's a place to set a threshold that weighs the expense against the lives. All that is required for this to be true is that we assign value to both money and lives. Where the threshold is depends on how much we value each, and obviously this will be different across situations, times, and cultures.

You're conflating a practical concern (which behaviors should society condemn?) and an ethical concern (how do we decide the relative value of money and lives?) which isn't even a particula... (read more)

Do you mean one common threshold, or do you mean an individual threshold that might be different for each person? I read you as arguing for one common threshold -- if we are talking about individual thresholds then I don't see any issues -- everyone just sets them wherever they like and that's it. I don't believe I said anything about what society should condemn. My interest started with this, as my post noted, and it mostly focuses on determining the morality of the action solely on the basis of mental states, past and present.

Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.

It doesn't have to be well-known. Morally there's a threshold. Everyone who is trying to act morally is trying to ascertain where it should be, and everyone who isn't acting morally is taking advantage of the... (read more)

So, are you assuming moral realism? That moral threshold which "is", does it objectively exist? Is it the same for everyone, all times and all cultures? Why do you think there is one specific place? That threshold depends on, among other things, risk tolerance. Are you saying that everyone has (or should have) the same risk tolerance?

Assume there's a threshold at which sending the ship for repairs is morally obligatory (if we're utilitarians, that is the point at which the cost of the repairs is less than the probability of the ship sinking times the cost if it does, taking into account the lives aboard, but the threshold needn't be utilitarian for this to work).

Let's say that the threshold is 5% - if there's more than a 5% chance the ship will go down, you should get it repaired.
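As a toy calculation - all numbers invented purely for illustration - the utilitarian version of that threshold falls straight out of an expected-cost comparison:

```python
# Toy version of the utilitarian threshold: repair when the expected
# loss from sailing exceeds the repair cost. All figures are made up.

def should_repair(p_sink, loss_if_sunk, repair_cost):
    return p_sink * loss_if_sunk > repair_cost

loss = 2_000_000           # value of ship, cargo, and lives aboard
repair = 100_000
threshold = repair / loss  # 0.05 -> the 5% figure in the example
print(threshold, should_repair(0.06, loss, repair))
```

With these made-up stakes the break-even probability is exactly 5%: above it, sailing unrepaired has a higher expected cost than repairing, which is the sense in which repair becomes obligatory.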

Mr. Grumpy's thought process seems to be 'I alieve that my ship will sink, but this alief is harmful and I should avoi... (read more)

Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate. In particular, Mr.Doc has his own threshold which does not necessarily match yours or anyone else's or even whatever passes for the society's consensus.

The next passage confirms that this is the author's interpretation as well:

Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out.

And clearly what he is guilty of (or if you prefer, blameworthy) is rati... (read more)

A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that sh... (read more)

An interesting quote. It essentially puts forward the "reasonable person" legal theory. But that's not what's interesting about it. The shipowner is pronounced "verily guilty" solely on the basis of his thought processes. He had doubts, he extinguished them, and that's what makes him guilty. We don't know whether the ship was actually seaworthy -- only that the shipowner had doubts. If he were an optimistic fellow and never even had these doubts in the first place, would he still be guilty? We don't know what happened to the ship -- only that it disappeared. If the ship met a hurricane that no vessel of that era could survive, would the shipowner still be guilty? And, flipping the scenario, if solely by improbable luck the ship did arrive unscathed at its destination, would the shipowner still be guilty?

From the upvotes I'm concluding it's worthwhile to go ahead and write it: I agree it serves as a pretty decent example of applying rationality concepts for long-term decision making. It'll have to wait a week until Thanksgiving Break, though.

I'm a freshman in college now, but a post or two analyzing the reasons for choosing an (expensive, high status) private college versus an (essentially free, low status) state college, or going to school in America versus Europe versus somewhere else, would have been immensely valuable to me a year ago.

This would belong on LessWrong because typical advice on this topic is either "follow your dreams, do what you love, everything will work out", or "you're an idiot to take on debt, if you can't pay your own way through college you're a lazy, e... (read more)

I think it would be a good idea for a post, not just for the specifics but also because it would relate to making decisions which involve long range predictions.

give 300 bucks to the Against Malaria Foundation, saving the lives of 1-3 children.

Source? The most recent estimate I've seen was that saving a life costs around $2000.

Fixed, sorry! (I'm female and that mistake doesn't bother me at all, but I know it really annoys some people. I'll be more careful in future.)

I completely agree that characterizing RW as contributing to existential risk is absurd.

Thanks for linking to the context! In fairness, though, if people are citing RationalWiki as proof that LessWrong has a "reputation", then devoting a discussion-level post to it doesn't strike me as excessive.

(On a related note: I hadn't read Jade's comments, but I did after you flagged them as interesting; they struck me as totally devoid of value. Would you mind explaining what you think the valid concern he/she's expressing is?)

Well, for one thing, Jade appears to be a "she". But never mind, I'm sure it'll all work out fine.

LW paying RW this much attention while also claiming that the entire future of human value itself is at stake looks on the surface like a failure of apportionment of cognitive resources, but perhaps I've missed something.

What do you mean by "this much attention"? If Konkvistador's links at the top are reasonably comprehensive (and a quick search doesn't turn up much more), there have been 2 barely-upvoted discussion posts about RW in four years, which hardly seems like much attention. For comparison, LW has devoted several times as much energy... (read more)

I'm not sure this comparison supports your point terribly well. Dating advice itself is incredibly instrumentally useful. The problems with the dating advice threads are the lack of quality content and the focus on irrelevant conflict. LessWrong being unqualified to discuss a topic is a very different thing from a topic being insignificant or unworthy of attention.

I suppose I'm really thinking of an LW regular telling me in conversation that they consider RW a serious existential risk. You know, serious enough to even think about compared to everything else in the world.

This article is a response to this comment, which was actually mostly about this comment. Posting an entire article in response to half of that comment does strike me as an overreaction. (I'd be interested in Konkvistador's similar-length response to Jade's comment, though; there's a body of work there raising quite apposite concerns about problems with LW as a social environment - specifically, the existing real world problem of creepers at LW meetups - that won't disappear by merely downvoting them.)

... and if your utility scales linearly with money up to $1,001,000, right?

Yes, that sort of thing was addressed in the parenthetical in the grandparent. It doesn't specifically have to scale linearly.
Or if the payoffs are reduced to fall within the (approximately) linear region.
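One way to see why the linear region matters - assuming log utility purely for illustration; nothing below is from the original exchange:

```python
import math

# Certainty equivalent of a 50/50 gamble for `payoff`, for an agent
# with log utility and starting wealth `wealth`. Log utility is an
# arbitrary illustrative choice, not anything claimed in the thread.

def certainty_equivalent(wealth, payoff, p=0.5):
    # expected utility of the gamble, converted back to dollars
    eu = p * math.log(wealth + payoff) + (1 - p) * math.log(wealth)
    return math.exp(eu) - wealth

big = certainty_equivalent(100_000, 1_001_000)  # well below the EV of 500,500
small = certainty_equivalent(100_000, 1_000)    # close to the EV of 500
print(round(big), round(small))
```

For small stakes relative to wealth, expected-value and expected-utility reasoning nearly coincide; for a $1,001,000 payoff they diverge badly, which is exactly why the linearity caveat matters for the $1,001,000 bet.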

I don't think there's anything wrong with the topic, if it comes with a little bit of discussion along the lines of palladius's comment below, or along the lines of "What evidence would convince us that the sanity waterline is actually rising, as opposed to just more people being raised non-religious?"

It would be very interesting to see this study in the context of trendlines for other popular sanity-correlated topics, such as belief in evolution, disbelief in ghosts, non-identification with a political party, knowledge about GMOs, etcetera, even though there are lots and lots of confounding variables.

One alone, though, without commentary about rationality, probably does not belong on LessWrong.

Indeed. At least where I am, ISTM that the main factor in determining whether one is a theist is where and when their parents grew up. (And I suspect that once you control for that, the oft-mentioned correlation between atheism and high IQ would be much weaker.) And the fact that theists are more likely to be anthropogenic global warming sceptics is likely mediated by political affiliations --right-wingers are more likely both to be theists and to be AGW sceptics-- and maybe it would disappear once you control for position on the Political Compass.

I don't think he's saying that motives are morally irrelevant - I think he's saying that they are irrelevant to the point he is trying to make with that blog post.

I just want to experience being wrong sometimes.

Your comments are consistent with wanting to be proved wrong. No one experiences "being wrong" - from the inside, it feels exactly like "being right". We do experience "realizing we were wrong", which is hopefully followed by updating so that we once again believe ourselves to be right. Have you never changed your mind about something? Realized on your own that you were mistaken? Because you don't need to "lose" or to have other people "beat you" to experie... (read more)

That's insightful. And I realize now that my statement wasn't clearly worded. What I should have said was more like: "I need to experience other people being right sometimes." And I can explain why, in a re-framed way, because of your example: I don't experience being double-checked if I am the one who figures it out.

I know I am flawed, and I know I can't see all of my own flaws. If people aren't finding holes in my ideas (they find plenty of spelling errors and social mistakes, but rarely find a problem in my ideas) I'm not being double-checked at all. This makes me nervous because if I don't see flaws with my ideas, and nobody else does either, then my most important flaws are invisible.

I feel cocky toward disagreements with people. Like "Oh, it doesn't matter how badly they disagree with me in the beginning. After we talk, they won't anymore." I keep having experiences that confirm this for me. I posted a risk on a different site that provoked normalcy bias and caused a whole bunch of people to jump all over me with every (bad) reason under the sun that I was wrong. I blew down all the invalid refutations of my point and ignored the ad hominem attacks. A few days later, one of the people who had refuted me did some research, changed her mind and told her friends; then a bunch of the people jumping all over me were converted to my perspective. Everyone stopped arguing.

This is useful in the cases where I have important information. It is unhealthy from a human perspective, though. When you think that you can convince other people of things, it feels a little creepy. It's like I have too much power over them. Even if I am right, and the way that I wield this gift is 100% ethical (and I may not be, and nobody's double-checking me), there's still something that feels wrong.

I want checks and balances. I want other people with this power to do the same to me. I want them to double-check me. To remind me that I am not "the most powerful". I am a perfectionist wi

It looks like I won here, but I thought of some reasons why I may still have lost:

You should stop thinking about discussions in these terms.

Imagine you have 100 instances where you do a bunch of research, with the intention of having an unbiased view of the situation. Then you tell somebody about the result and they don't agree. But they don't support their points well. So you share the information you found and point out that their points were unsupported. They fail to produce any new information or points that actually add to the conversation. You may not have been trying to win, but if they're unable to support their points or supply new information and yet believe themselves to be right, when you destroy that illusion, the feeling of "oh I guess I was right" is a natural result.

Imagine that during the same period of time, this happens to you zero times. Nobody finds a logical fallacy or poorly supported point. This is not because you are perfect - you aren't. It is probably due to hanging out with the wrong people - people who are not dedicated to reasoning well.

Knowing I am not perfect is not reducing the cockiness that is starting to result from this, for me. It is making me nervous instead - this knowledge that I am not perfect has become a vague intellectual acknowledgement, not a genuine sense of awareness. The sense that I have flawed ideas and could be wrong at any time no longer feels real.

Now that I am in a much bigger pond, I am hoping to experience a really good ass-kicking. I want to wake up from this dream of feeling like I'm right all the time. The reason I want to lose is because I agree with you that I shouldn't see these debates as things for me to win. I am tired of the experience of being right. I am tired of the nervousness that is knowing I am imperfect, that there are flaws I'm unaware of, but not having the sense that somebody will point them out. I just want to experience being wrong sometimes.

My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down.

It seems like it's your estimate of the programming knowledge of the commenters that should go down. Most of the proposed solutions have in common that they sound really simple to implement, but would in fact be complicated - which someone with high general intelligence and rationality, but limited domain-specific knowledge, might not know.

Should people who can't program refrain from suggesting programming fixes? Maybe. But maybe it's worth the tim... (read more)

Generally speaking, there are fewer upvotes later in a thread, since fewer people read that far. If the children of your comment have more karma than your comment, it's reasonable to assume that people saw both comments and chose to upvote theirs, but if a parent of your comment has more karma, you can't really draw any inference from that at all.

Except that when I made my comment, Eliezer's was at zero. Er, it might have been +1, but it certainly wasn't +4.

Not to fall into the "trap" of buying warm fuzzies? Do you advocate a policy of never buying yourself any warm fuzzies, or just of never buying warm fuzzies specifically through donating to charity (because it's easy to trick your brain into believing it just did good)?

Yes, I am deeply suspicious of Eliezer's post on warm fuzzies vs utilons because while I accept that it can be a good strategy, I am skeptical that it actually is: my suspicion is that for pretty much all people, buying fuzzies just crowds out buying utilons.

For example, I asked Konkvistador on IRC, since he was planning on buying fuzzies by donating to this person, what utilons he was planning on buying, especially since he had just mentioned he had very little money to spare. He replied with something about not eating ice cream and drinking more water.

Looks like PMing is down, actually. You can email me at kelseyp [at] (not written out to avoid spambots).

I was accepted to Stanford this spring. At the welcome weekend, we talked a lot with the admissions representatives about what they're looking for - I'd be happy to share tips and my own essays. PM me.

Congratulations! And thank you. :)

The July matching drive was news to me; I wonder how many other readers hadn't even heard about it.

Is there a reason this hasn't been published on LessWrong, i.e. with the usual public-commitment thread?

Also, if a donation is earmarked for CFAR, does the "matching" donation also go to CFAR?

I'll post about it shortly. Yes.

Instrumental rationality is doing whatever has the best expected outcome. So spending a ton of time thinking about metaethics may or may not be instrumentally rational, but saying "thinking rationally about metaethics is not rational" is using the word two different ways, and is the reason your post is so confusing to me.

On your example of a witch, I don't actually see why believing that would be rational. But if you take a more straightforward example, say, "Not knowing that your boss is engaging in insider trading, and not looking, coul... (read more)

most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.


For most people, a rational ethics system costs far more than it provides in benefits.

I don't think this follows. Calculating every decision costs far more than it provides in benefits, sure. But having a moral system for when serious... (read more)

"And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or ""optimized" or "thought-through"." * I think this encapsulates our disagreement. First, I challenge you to define rationality while excluding those mechanisms. No, I don't really, just consider how you would do it. Can we call rationality as "A good decision-making process"? (Borrowing from [] ) I think the disconnect is in considering the problem as one decision, or two discrete decisions. "A witch did it" is not a rational explanation for something, I hope we can agree, and I hope I established that one can rationally choose to believe this, even though it is an irrational belief. The first decision is about what decision-making process to use to make a decision. "Blame the witch" is not a good process - it's not a process at all. But when the decision is unimportant, it may be better to use a bad decision making process than a good one. Given two decisions, the first about what decision-making process to use, and the second to be the actual decision, you can in fact use a good-decision making process (rationally conclude) that a bad-decision making process (an irrational one) is sufficient for a particular task. For your examples, picking one to address specifically, I'd suggest that it is ultimately unimportant on an individual basis to most people whether or not to support universal health care; their individual support or lack thereof has almost no effect on whether or not it is implemented. Similarly with abortion and gay marriage. For effective charities, this decision-making process can be outs

I think what you're trying to say is:

"Morally as computation" is expensive, and you get pretty much the same results from "morality as doing what everyone else is doing." So it's not really rational to try to arrive at a moral system through precise logical reasoning, for the same reasons it's not a good idea to spend an hour evaluating which brand of chips to buy. Yeah, you might get a slightly better result - but the costs are too high.

If that's right, here are my thoughts:

Obviously you don't need to do all moral reasoning from scratc... (read more)

You have a good encapsulation of what I'm trying to say, yes. I'm not arguing against "all moral reasoning from scratch," however, which I would regard as a strawman representation of rational ethics. (It was difficult to wholly avoid an apparent argument against morality from scratch while establishing that rationality is not always rational, and trying to establish this in ethics as well, so I suspect I failed to some extent there, in particular the bit about the reasons for adopting rational ethics.)

My focus, although it might not have been plain, was primarily on day-to-day decisions; most people might encounter one or two serious Moral Questions in their entire -lives-, whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions: don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test. For most people, a rational ethics system costs far more than it provides in benefits. For a few people it doesn't, either because they (like me) enjoy the act of calculation itself, or because they (say, a priest, or a counselor) are in a position such that they regularly encounter such Moral Questions and must be capable of answering them sufficiently.

We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others and evaluating the results (listening to the arguments), a considerably cheaper operation.

He also says:

As in so many other areas, our most important information comes from reality television.

I'm guessing both are a joke.

Yeah, I also took it as a joke.

Your article describes the consequences of being perceived as "right-wing" on American campuses. Is pick-up considered "right wing"? Or is your point more generally that students do not have as much freedom of speech on campus as they think?

I'm specifically curious about the claim that most professors would consider what you are doing to be evil. Is that based on personal experience with this issue?

Racism, sexism and homophobia are the three primary evils for politically correct professors. From what I've read of pick-up (i.e. Roissy's blog), it is in part predicated on a negative view of women's intelligence, standards and ethics, making it indeed sexist.

See this to get a feel for how feminists react to criticisms of women. Truth is not considered a defense against this kind of "sexism". (A professor suggested I should not be teaching at Smith College because, during a panel discussion on free speech, I said Summers was probably correct.)

I've ... (read more)

My favorite explanation of Bayes' Theorem barely requires algebra. (If you don't need the extended explanation, just scroll to the bottom, where the problem is solved.)

That is a good article. I am looking for dozens of examples, all worked out in subtly different ways, just like a mini-textbook. OR a chapter in a regular text book. I'll probably have to just do it myself.
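In lieu of a mini-textbook, here is one standard worked example of the kind that article walks through, done as code. The numbers are the usual illustrative mammography figures, not taken from the linked article itself:

```python
# Worked Bayes example: P(disease | positive test).
# Numbers are the classic illustrative ones, chosen for the exercise.
prior = 0.01         # P(disease): 1% base rate
sensitivity = 0.8    # P(positive | disease)
false_pos = 0.096    # P(positive | no disease)

# Total probability of a positive result across both groups.
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: posterior = P(positive | disease) * P(disease) / P(positive)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # → 0.078
```

The counterintuitive bit the article emphasizes falls straight out of the arithmetic: even with a positive test, the posterior is under 8%, because the many false positives from the healthy 99% swamp the true positives.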

Chapter 79:

I think we're supposed to be able to figure this one out. My mental model of Eliezer says he thinks he's given us more than enough hints, and we have a week to wait despite it being a short, high tension chapter. He makes a big deal out of how Harry only has thirty hours, which isn't enough; he gives us a week, and a lot of information Harry doesn't have.

Who benefits from isolating Harry from both of his friends, and/or making him do something stupid to protect Hermione in front of the most powerful people in the Wizarding World?

Evidence again... (read more)

Harry isn't stupid, he has to realize that getting Hermione and Draco out of the way obviously benefits the defense professor. And Quirrell would know this, and not want to make Harry think he's someone who would ruin an innocent 12-year-old girl's life. Their next conversation is going to be interesting.

It's also mentioned in Circular Altruism.

This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money. When a sacred value is traded off against an unsacred one, subjects express great indignation (sometimes they want to punish the person who made the suggestion).

My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the co

... (read more)
Also here [] :
I didn't spot that. Probably a better source than mine, as it reflects EY's thoughts on things.
Quite a few ways this could be relevant. Lucius and the sacred value of his son, Dumbledore giving up on Hermione so as not to be blackmailed by Lucius, Harry considering throwing away all his plans to save Hermione from Azkaban, Hermione having to abandon one of her host of sacred values, the list goes on.

An egoist is generally someone who cares only about their own self-interest; that should be distinct from someone who has a utility function over experiences, not over outcomes.

But a rational agent with a utility function only over experiences would commit quantum suicide, if we also assume there's minimal risk of the suicide attempt failing, of the lottery not really being random, etc.

In short, it's an argument that works in the LCPW but not in the world we actually live in, so the absence of suiciding rationalists doesn't imply MWI is a belief-in-belief.

I believe that my death has negative utility. (Not just because my family and friends will be upset; also because society has wasted a lot of resources on me and I am at the point of being able to pay them back, I anticipate being able to use my life to generate lots of resources for good causes, etc.)

Therefore, I believe that the outcome (I win the lottery ticket in one world; I die in all other worlds) is worse than the outcome (I win the lottery in one world; I live in all other worlds) which is itself worse than (I don't waste money on a lottery ticke... (read more)

The LCPW is the one where your argument fails while mine works: suppose only the worlds where you live matter to you, so you happily suicide if you lose. So any egoist believing the MWI should use quantum immortality early and often if he/she is rational.
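To make the disagreement in this thread concrete, here is a toy calculation; every probability and utility in it is invented purely for illustration, and the point is only the ordering of the results:

```python
# Toy branch-weighted utilities; all numbers are made up for illustration.
p_win = 1e-7             # probability of the winning branch
u_win = 1_000_000.0      # utility of the branch where the ticket wins
u_lost_alive = -1.0      # lost the ticket price, still alive
u_dead = -100_000.0      # death has large negative utility (parent's premise)

# Policies evaluated over ALL branches (utility over outcomes):
eu_ticket_and_suicide = p_win * u_win + (1 - p_win) * u_dead
eu_ticket_only        = p_win * u_win + (1 - p_win) * u_lost_alive
eu_no_ticket          = 0.0

# Egoist who only counts branches they experience: after quantum suicide,
# the sole surviving branch is the winning one.
eu_experienced = u_win

print(eu_ticket_and_suicide, eu_ticket_only, eu_no_ticket, eu_experienced)
```

With these numbers, the outcome-valuing agent gets exactly the ordering described above (suicide-lottery < plain lottery < no ticket), while the experience-only egoist assigns the suicide-lottery the highest value of all — which is the LCPW divergence the parent is pointing at.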

In the original books, Harry's cohort was born ten years into an extremely bloody civil war. I always assumed birth rates were extremely low for Harry's age group, which would imply that the overall population is much larger than what you'd extrapolate from class sizes.

Of course, the numbers still don't work. There are 40 kids in canon!Harry's class. Even if you assume that's a tenth of the normal birthrate and the average person lives to 150, you get a wizarding population of 60,000.

In MoR, class sizes are around 120 (more than half the kids are in the ar... (read more)

In that case, shouldn't we see evidence of a baby boom occurring immediately following the end of the war, probably in the form of the year-groups after Harry's being noticeably bigger than those that came before? canon!Harry is rather unobservant, but you'd think he'd have noticed at least that.
I'd forgotten about that quote. For reference, it's Chapter 74 [].

Kolmogorov complexity/Solomonoff induction and Minimum Message Length have been proven equivalent in their most-developed forms. Essentially, correct mathematical formalizations of Occam's Razor are all the same thing.

This is a pretty unhelpful way of justifying this sort of thing. Kolmogorov complexity doesn't give a unique result: what programming system one uses as one's basis can change things up to a constant. So simply looking at the fact that Solomonoff induction is equivalent to a lot of formulations isn't really that helpful for this purpose. Moreover, there are other formalizations of Occam's razor which are not formally equivalent to Solomonoff induction; PAC learning [] is one natural example.
The whole point is superfluous, because nobody is going to sit around and formally write out the axioms of these competing theories. It may be a correct argument, but it's not necessarily convincing.
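The "up to a constant" point can be illustrated with off-the-shelf compressors standing in, very loosely, for two different description languages. Compressed length is only a crude upper bound on description length, not actual Kolmogorov complexity, but the analogy shows both the agreement and the slack:

```python
import bz2
import random
import zlib

# Two compressors play the role of two universal description languages.
structured = b"ab" * 5000  # highly regular data
rng = random.Random(0)     # fixed seed for reproducibility
noisy = bytes(rng.randrange(256) for _ in range(10000))  # near-incompressible

for name, data in [("structured", structured), ("noisy", noisy)]:
    print(name, len(zlib.compress(data)), len(bz2.compress(data)))
```

Both "languages" agree that the regular string is far simpler than the noisy one, but their absolute lengths differ — that difference is the language-dependent constant the parent comment is pointing at, and it is why "equivalent up to a constant" does not by itself settle which of two specific theories is simpler.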

You not being Will_Newsome. (I can't imagine how bizarre it must be to be watching this conversation from your perspective.)

Wait, but what changed that caused Mitchell_Porter to realize that?

I think 1) should probably be split into two arguments, then. One of them is that Many Worlds is strictly simpler (by any mathematical formalization of Occam's Razor). The other one is that collapse postulates are problematic (which could itself be split into sub-arguments, but that's probably unnecessary).

Grouping those makes no sense. They can stand (or fall) independently, they aren't really connected to each other, and they look at the problem from different angles.

The claim in parentheses isn't obvious to me and seems to be probably wrong. If one replaced any with "many" or "most" it seems more reasonable. Why do you assert this applies to any formalization?
Ah, okay, that makes more sense. 1a) (that MWI is simpler than competing theories) would be vastly more convincing than 1b) (that collapse is bad, mkay). I'm going to have to reread the relevant subsequence with 1a) in mind.

Eliezer said in the comments that it was in fact a fully fleshed out idea, but taken from a different story, and that it didn't seem right in the context of this story because it belonged to a different universe.

But yes, the out-of-placeness is noticeable.
