All of Elithrion's Comments + Replies

Utilitarianism twice fails

I was supposed to check on this a long time ago, but forgot/went inactive on LW. Anyway, the post actually ended up at -26, so seemingly slightly lower than it was, which is evidence against your regression-to-0 theory.

Boring Advice Repository

I agree with tut that increasing speed might help. Sometimes if I listen at default speed, I find my attention drifting off mid-sentence just because it's going so slowly. (Conversely, at higher speed, when my attention does drift off briefly, I sometimes miss a full sentence or two and have to rewind slightly.)

If that doesn't work, I don't really have many other ideas. Maybe you could try other repetitive mechanical actions to see if they coexist well with audiobooks. For example, maybe cooking, drawing, or exercising might work (if you do any of those). In general, I find it easy to not miss anything in an audiobook so long as I'm simultaneously doing something that does not also involve words.

Open Thread, June 16-30, 2013

[I made a request for job finding suggestions. I didn't really want to leave details lying around indefinitely, to be honest, so, after a week, I edited it to this.]

2ModusPonies8yFor job searching, focus less on sending out applications and more on asking [professors | friends | friends of friends | mentors | parents | parents' friends] if they know of anyone who's hiring for [relevant field]. When they say no, ask if they know anyone else you should talk to. To generalize from one example, every job I've ever worked has come from some sort of connection. I found my current position through my mom's dance instructor's husband. For figuring out what to do with your long-term future, there's not much I can say without knowing your goals, but http://80000hours.org/ [80000 hours] might or might not be relevant. If so, they're willing to advise you one-on-one.
Maximizing Your Donations via a Job

Incidentally, if your discount rate is really this high (you mention 22% annual at one point), you should be borrowing as much as you can from banks (including potentially running up credit cards if you have to - many of those seem to be 20% annual) and just using your income to pay down your debt.

I'd say just use your cost of borrowing (probably 7% or so?) for the purposes of discounting your salary and things, and then decide whether you should borrow to donate or not based on whether that rate is less than the expected rate of return for charities. (This is assuming that you can get access to adequate funds at this rate - I'm not entirely sure, but it seems plausible.)
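To make the comparison concrete, here's a toy sketch (the 7% borrowing cost and 22% figures are the assumptions from above; the $1000 and 5-year horizon are made up for illustration):

```python
def future_value(principal, rate, years):
    """Value of `principal` compounded annually at `rate` for `years` years."""
    return principal * (1 + rate) ** years

borrow_rate = 0.07   # assumed cost of borrowing
charity_rate = 0.22  # assumed rate at which donated money compounds in impact
years = 5

# Borrow $1000 and donate it today, vs. the debt you must eventually repay:
impact_now = future_value(1000, charity_rate, years)  # grows at the charity's rate
debt_later = future_value(1000, borrow_rate, years)   # grows at the borrowing rate

# Borrowing to donate wins exactly when charity_rate > borrow_rate:
print(impact_now > debt_later)
```

With these numbers the impact side compounds to roughly double the debt side, which is just the point above restated: borrow-to-donate is worthwhile whenever the charity's expected rate of return beats your cost of borrowing.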

Open Thread, June 2-15, 2013

I am really disappointed in you, gwern. Why would you use an English auction when you can use an incentive-compatible one (a second price auction, for example)? You're making it needlessly harder for bidders to come up with valuations!

(But I guess maybe if you're just trying to drive up the price, this may be a good choice. Sneaky.)
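For anyone unfamiliar, a second-price (Vickrey) auction is trivial to run, and truthful bidding is a dominant strategy in it; a minimal sketch (bidder names and values made up):

```python
def second_price_auction(bids):
    """Winner is the highest bidder, but pays the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

# With everyone bidding their true value, A wins and pays B's bid of 8.
# Bidding truthfully is dominant: shading your bid can't lower the price
# you pay (that's set by others), it can only lose you an auction you'd
# have wanted to win.
values = {"A": 10, "B": 8, "C": 5}
winner, price = second_price_auction(values)
print(winner, price)
```

Note that if A shades down to 9, the outcome is identical, which is exactly why bidders don't have to agonize over strategic valuations the way they do in an English auction.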

Having read about auctions before, I am well aware of the winner's curse and expect coordination among bidders to be hard for this unique item.

Bwa ha ha! Behold - the economics of the damned.

Drowning In An Information Ocean

Hm, that's true, I have heard that. Although in that particular case, it's actually unknown whether the shape is constructible or not, and I was trying to prove (in)constructibility rather than construct.

A Rational Altruist Punch in The Stomach

This is more like a conservative investment in various things by the managing funds for 200 years, followed by a reckless investment in the cities of Philadelphia and Boston at the end of the 200 years. It probably didn't do much more for the people 200 years later than it did for people in the interim.

Also, the most recent comment by cournot is interesting on the topic:

You may also be using the wrong deflators. If you use standard CPI or other price indices, it does seem to be a lot of money. But if you think about it in terms of relative we

... (read more)
1Neotenic9yThat is unreasonable because we have more access to means of helping the poor today. If you expect the trend to go on into the future, then 2 million tomorrow is always better than a thousand today, which approximates maximal 3 lives on AMF or SCI
Buridan's ass and the psychological origins of objective probability

Interestingly, that trick does get the ass to walk to at least one bale in finite time, but it's still possible to get it to do silly things, like walk right up to one bale of hay, then ignore it and eat the other.

Okay, sure, but that seems like the problem is "solved" (i.e. the donkey ends up eating hay instead of starving).

0DanielLC9yIt can also use the "always eat the left bale first" strategy, although that gets kind of odd if it does it with a bale of size zero. There is a problem if you want to make it make an actual binary decision, like go to one bale and stay.
Buridan's ass and the psychological origins of objective probability

Does that really work for all (continuous? differentiable?) functions? For example, if his preference for the bigger/closer one is linear with size/closeness, but his preference for the left one increases quadratically with time, I'm not sure there's a stable solution where he doesn't move. I feel like if there's a strong time factor, either a) the ass will start walking right away and get to the size-preferred hay, or b) he'll start walking once enough time has passed and get to the time-preferred hay. I could write down an equation for precision if I figure out what it's supposed to be in terms of, exactly...

1DanielLC9yLike I said, the hay doesn't move, but the donkey does. He starts walking right away to the bigger pile, but he'll slow down as time passes and he starts wanting the other one. Interestingly, that trick does get the ass to walk to at least one bale in finite time, but it's still possible to get it to do silly things, like walk right up to one bale of hay, then ignore it and eat the other. The solutions are almost certainly unstable. That is, once you find some ratio of bale sizes that will keep the donkey from eating, an arbitrarily small change can get it to eat eventually.
A Rational Altruist Punch in The Stomach

I'm not sure what an investment in a particular far-future time would look like. Money does not, in fact, breed and multiply when left in a vault for long enough. It increases by being invested in things that give payoffs or otherwise rise in value. Even if you have a giant stockpile of cash and put it in a bank savings account, the bank will then take it and lend it out to people who will make use of it for whatever projects they're up to. If you do that, all you're doing is letting the bank (and the borrowers) choose the uses of your money for the first ... (read more)

3Pablo9yMaybe like this [http://www.overcomingbias.com/2010/03/parable-of-the-multiplier-hole.html]:
2roystgnr9yYou're letting the bank and borrowers choose uses which they expect to be worth more than the cost, under the knowledge that they may be bankrupted if they choose poorly and keep the surplus profits if they choose well. These constraints tend to lead to fewer consumable luxury purchases and more carefully selected productive investments, and having more of the latter increases the potential economic output of the future. There are many caveats to this, though. Does our potential economic output really have no upper bound within a hundred orders of magnitude of its present state? That seems unlikely, but if not then those exponential returns are just the bottom tails of S-curves. Is this economic system going to be protected from overwhelming corruption, violence, and theft for a future period longer than all prior human history? That would be historically unprecedented, but it only takes one disaster to wipe out a fortune.
9moridinamael9yLikewise, if one actually expects to collect a googol dollars from investment, then either (a) galactic economies would need to be servicing the interest payments or (b) inflation has rendered dollars nearly valueless.
[SEQ RERUN] You're Calling *Who* A Cult Leader?

Alternatively, if it's done by someone whom you already know decently well, and who you know isn't really a crazy obsessive pedant, it can instead signal a liking of international or British English over American.

[SEQ RERUN] You're Calling *Who* A Cult Leader?

That sounds like good policy, although there may be significant variation in what sounds awful to different people (specifically, "whom" is generally more popular outside the US). "Who" is probably the safer choice when in doubt, admittedly.

[SEQ RERUN] You're Calling *Who* A Cult Leader?

Nope, in fact that one should also be "Whom are you calling a cult leader?" "Who" is the subject form, i.e. it's supposed to be used when the "who" person is the one doing the action. In this case, though, the subject is "you", who is doing the action ("calling" someone something), and the object is the someone being called something ("whom").

2Zaine9yFor sake of colloquial informality some purposefully adopt incorrect grammar. Regardless of whether that was the intent, such is the effect; a better question: "Does informality conveyed through use of colloquialisms benefit the author's purposes more than correct use of grammar?" The above line of enquiry presumes correct grammar is desirable - a separate but sound debate prerequisite answering the former question.
Buridan's ass and the psychological origins of objective probability

Okay, thanks for the explanation. It does seem that you're right*, and I especially like the needle example.

*Well, assuming you're allowed to move the hay around to keep the donkey confused (to prevent algorithms where he tilts more and more left or whatever from working). Not sure that was part of the original problem, but it's a good steelman.

1DanielLC9yYou don't have to move the hay during the experiment. The donkey is the one that moves. If he goes left as he gets hungry, you move the bale to his right a tad closer, and he'll slowly inch towards it. He'll slow down instead of speed up as he approaches it because he's also getting hungrier.
Existential risks open thread

This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.

I think the civil war that would result combined with extreme proximity between Chinese and US troops (the latter supporting South Korea and trying to contain nuclear weapons) is probably an abysmal thing from an x-risk reduction standpoint.

2TitaniumDragon9yChina has privately told the US that they would support the US in extending South Korean control over the entire Korean peninsula per the diplomatic cables leak. The Chinese would probably be happy if the US rolled in and flattened the entire country as long as they didn't have to let too many refugees into China, and really, at this point, the way that North Korea is acting China is probably willing to take the risk given that the North Koreans seem eager to cause trouble, and there's no guarantee it won't happen in a worse way later on. Honestly I think that crushing the North Korean government and military completely would probably pretty much end it. North Korea has a ton of propaganda about their country's superiority over the rest of the world; without the tight control over the country that the present government has, I don't think that vision of superiority would last very long. Not to say that they'd be terribly awesomely happy with us, but the US rolled into Japan after WWII and it worked out quite well. Given the present day poverty of the country, really all you'd have to do to win is wait for a bad famine to hit the country and roll in then; showing the people that you care about them with food is a dirty but probably effective way to make them distrust you less, especially if you have the South Koreans move in and the US move out as much as possible. Though of course other options exist. It would be a mess, but I think it would probably be significantly less messy than Afghanistan, given that rather than having twenty different angry groups, you really have the government and that's about it.
[SEQ RERUN] You're Calling *Who* A Cult Leader?

Is using "whom" uncool or something? Maybe I'm just elitist (in a bad way) for liking it.

0[anonymous]9yIt's possible to avoid the "whom" and be grammatical: "*Who* is Being Called a Cult Leader By You?".
3Qiaochu_Yuan9yWhom use, even correct use but especially incorrect use, can signal an excessive concern with pedantry.
1Eliezer Yudkowsky9yI use "who" for the subject form or when "whom" sounds awful.
0NancyLebovitz9yI'm pretty sure it should be "who", since the title is an inversion of "Who are you calling a cult leader?".
Buridan's ass and the psychological origins of objective probability

Thanks (and I actually read the other new comments on the post before responding this time!). I still have two objections.

The first one (which is probably just a failure of my imagination and is in some way incorrect) is that I still don't see how some simple algorithms would fail. For example, the ass stares at the bales for 15 seconds, then it moves towards whichever one it estimates is larger (ignoring variance in estimates). If it turns out that they are exactly equal, it instead picks one at random. For simplicity, let's say it takes the first letter o... (read more)

2DanielLC9yYour problem is that you're using an algorithm that can only be approximated on an analog computer. You can't do flow control like that. If you want it to do A if it has 0 as an input and B if it has 1 as an input, you can make it do A+(B-A)x where x is the input, but you can't just make it do A under one condition and B under another. If continuity is your only problem, you can make it do A+(B-A)f(x), where f(x)=0 for 0<=x<=0.49 and f(x)=1 for 0.51<=x<=1, but f(x) still has to come out to 1/2 when x is somewhere between 0.49<x<0.51. If you tried to do your algorithm, after 15 seconds, there'd have to be some certainty level where the Ass will end up doing some combination of going left and choosing at random, which will keep it in the same spot if "random" was right. If "random" is instead left, then it stops if it's half way between that and right. I'm not really sure where that idea came from. Quantum physics is continuous. In fact, derivatives are vital to it, and you need continuity to have them. The position of an object is spread out over a waveform instead of being at a specific spot like a billiard ball, but the waveform is a continuous function of position. The waveform has a center of mass that can be specified however much you want. Also, the Planck length seems kind of arbitrary. It means something if you have an object with size one Planck mass (about the size of a small flea), but a smaller object would have a more spread out waveform, and a larger object would have a tighter one. That would make it so you can't purposely fool the Ass, but it won't keep that from happening on accident. For example, if you try to balance a needle on the tip outside when there's a little wind, you're (probably) not going to be able to do it by making it stand up perfectly straight. It's going to have to tilt a little so it leans into every gust of wind. But there's still some way to get it to balance indefinitely.
Buridan's ass and the psychological origins of objective probability

Sorry, I'm not sure I understand what you mean. What particle should we move to change the fact that the ass will eventually get hungry and choose to walk forward towards one of the piles at semi-random? It seems to me like you can move a particle to guarantee some arbitrarily small change, but you can't necessarily move one to guarantee the change you want (unless the particle in question happens to be in the brain of the ass).

1DanielLC9yIf you slowly move the particles one at a time from one bale to the other, you know that once you've moved the entire bale the Ass will change its decision. At some point before that it won't be sure. There might not actually be a choice where the Ass stands there until it starves. It might walk forward, or split in half down the middle and have half of it take one bale of hay and half take the other, or any number of other things. It's really more that there's a point where the Ass will eventually take a third option, even if you make sure all third options are worse than the first two.
1Eugine_Nier9ySee Daniel's comment here [http://lesswrong.com/lw/h3e/buridans_ass_and_the_psychological_origins_of/8oj1] .
Drowning In An Information Ocean

don't get fixed in proving the constructibility of enormously large polygons

Is this common? 'Cause um, at one point I did try to prove (or disprove) the constructibility of a hendecagon (11 sides) with neusis, but I didn't realise this was a popular pursuit. This isn't really related to the post, but I was very surprised constructibility got a mention.

(I ran into equations lacking an easy solution - they were sufficiently long/hard that Maple refused to chug through them - and decided it wasn't worth the effort to keep trying.)

1mare-of-night9yI forget what it was called, but I remember a past post about trying to disprove very, very settled rules of math or science. A lot of the people who commented on it said that they had tried to do this as teenagers. (I never tried to construct unconstructable shapes, but I tried for a couple weeks to design a perpetual motion machine, once. I stopped after my middle school science teacher explained why a certain design wouldn't work - the explanation was what I needed to finally grok the laws of thermodynamics.)
Buridan's ass and the psychological origins of objective probability

The problem with the Problem is that it simultaneously assumes a high cost of thinking (gradual starvation) and an agent that completely ignores the cost of thinking. An agent who does not ignore this cost would solve the Problem as Vaniver says.

1DanielLC9yThe Problem only assumes the universe is continuous. If you move a particle by a sufficiently small amount, you can guarantee an arbitrarily small change any finite distance in the future. Thanks to the butterfly effect, it has to be an absurdly tiny amount, but it's only necessary that it exists. Also, it assumes that the Ass will eventually die, but that's really more for effect. The point is that it can't make the decision in bounded time.
[SEQ RERUN] The Pascal's Wager Fallacy Fallacy

That's fair. I guess adopting exponential discounting is also good enough to rule out Christianity. Not about trying to live infinitely long, though - it would depend on how much believing in Christianity would hinder you in achieving that. (Same for other religions that don't promise sufficiently amazing bliss.)

0Desrtopa9yI would think that a properly self consistent Christian would probably not try to live forever given the expectation of being able to go to Heaven and stay there forever after a lifespan of ordinary length. On the other hand, Christians with sufficient alief to look forward to their own deaths are pretty rare.
[SEQ RERUN] The Pascal's Wager Fallacy Fallacy

Sure, but it doesn't matter how much probability mass atheism gets, because the religions are the only ones offering infinities*, and we're probably interested in best expected payoff, not highest probability. If religions have 1/10^50 residual probability mass and atheism has all the rest, you'd still probably have to choose one of them if at least one is offering immense payoffs and you haven't solved Pascal's Mugging.

*I guess one could argue that a Solomonoff prior assigns a zero probability to truly infinite things, but I'm not sure that's an argument I'd want to rely on (also I know Buddhism offers some merely vast numbers, although I'm not sure they're vast enough, and some other religions do too, I'd imagine).

2Desrtopa9yWhile apologetics have adopted the idea that going to heaven offers infinite utility, the actual descriptions of heaven in the texts from which Christianity is derived (what little there are,) don't describe anything like infinite utility, except if you accept that a finite utility times an infinite duration equals infinite utility. Given an infinitesimal chance of payout on Christianity, you'd probably be giving yourself better odds trying to achieve infinite utility by living infinitely long, even given the rather low odds of the fundamental laws of physics being amenable to that.
[SEQ RERUN] The Pascal's Wager Fallacy Fallacy

No, it's not (at least if we take the generous view and consider the Wager as an argument for belief in some type of deity, rather than the Christian one for which it was intended), because after considering all the hypotheses, you will still have to choose one (or more, I guess) of them, and it almost certainly won't be atheism. I also feel like you completely missed the point of my previous comment, but I'm not sure why, and am consequently at a loss as to how to clarify.

0Desrtopa9yWhy is that? While a belief having adherents may be evidence for that belief, other people believing differently will be evidence against it. When religions function as evidence against each other and lose probability mass, more of the lost probability mass goes to atheism than to other religions.
Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold

I suppose I should have said "reasonably inhabited land".

I don't think it's a good idea to discuss this, not only because it may give people ideas, but also because there is only one possible side to the argument that can really be mentioned.

0[anonymous]9yI was worried about that. Do you suggest I edit or take the entire post down? Edit: For example by focusing on the comparison between 'reducing extreme poverty' and 'reducing xrisk'. 2nd Edit: I removed the 'identifiable targets', hopefully that will help.
[SEQ RERUN] The Pascal's Wager Fallacy Fallacy

I am not. The problem with Pascal's Wager is sort of that it fails to consider other hypotheses, but not in the conventional sense that most arguments use. Invoking an atheist god, as is often done, really does not counterbalance the Christian god, because the existence of Christianity gives a few bits of evidence in favour of it being true, just like being mugged gives a few bits of probability in favour of the mugger telling the truth. So, using conventional gods and heavens and hells like that won't balance to them cancelling out, and you will end up ha... (read more)

2JQuinton9yI'm pretty sure that actually is the problem with Pascal's Wager. You even just committed it when you only included Christianity. What about Islam? You go to hell in Islam if you are either an atheist or believe that Allah had a son. And then there are the various heretical versions of Christianity where you lose out on eternal life for "worshipping a dead man" instead of learning true knowledge from his teachings. There are so many other competing hypotheses to choose from besides just Christianity or atheism that this is the massive failing point in the wager.
Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold

No, we would not. The Southern hemisphere is just generally warmer (at least on land).

0prase9yAntarctica excluded?
Solved Problems Repository

I think it depends on the reading. If you read it in a sort of snooty dismissive voice, yes, certainly. But if you read it in a genuinely perplexed kind of voice, it mostly sounds confused.

3jooyous9yThat's why I put the confused-face! I was pretty confused by "every situation" because I can definitely think of some situations where status considerations factor only negligibly into your decision process. For example: you are out with some people and notice your shoe is untied. Do you tie it? Uhh. Does it really matter if your friends are higher or lower status? Maybe if they can't afford shoes or something, but otherwise, not really. I think?
Solved Problems Repository

I was looking up the Marines' fitness requirements at some point randomly, and for females the pull-up requirement is apparently replaced with a flexed-arm hang (per Wikipedia and About.com), so you could maybe try doing that.

3jsteinhardt9yFlexed-arm hang won't work as a replacement for pull-ups in the context of this particular program; maintaining a load in a static position is very different from moving that load up and down.
Upgrading moral theories to include complex values

By the way, 3^^3 = 3^27 is "only" 7625597484987, which is less than a quadrillion. If you want a really big number, you should add a third arrow (or use a higher number than three).
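For the curious, Knuth's up-arrow notation is easy to compute, right up until it isn't; a small sketch:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: one arrow is plain exponentiation,
    and each extra arrow iterates the previous operator b times."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):  # a ^(n) b = a ^(n-1) (a ^(n) (b - 1)), unrolled
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 7625597484987 threes - don't run that.
```

So two arrows with threes only gets you to about 7.6 trillion, while the third arrow immediately leaves anything computable behind.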

[SEQ RERUN] The Pascal's Wager Fallacy Fallacy

I feel like this post is dated by the fact that it came before Pascal's Mugging discussions to the point of being fairly wrong. The problem with Pascal's Wager actually is that the payoffs are really high, so they overwhelm an unbounded utility function (and they don't precisely cancel out, since we do have a little evidence). On the other hand, I suppose the core point that you shouldn't dismiss things out of hand if they have a low (but not tiny) probability and a large payoff is sound.

2MinibearRex9yI think you're confusing Pascal's Wager with Pascal's Mugging. The problem with Pascal's Mugging is that the payoffs are really high. The problem with Pascal's Wager is that it fails to consider any hypotheses other than "there is the christian god" and "there is no god".
Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold

I'm really sceptical that this is as big a factor as some of the others, but I can see how it might be a significant factor. I've also lived in cold places most of my life, so I'm not in a very good position to judge. I feel like the biggest factor will ultimately turn out to be "that's how history played out", though. Looking back, it's not clear that the hypothetical dominance of the North was really noticeable until maybe the 17th century (I'm not entirely confident on this, so correct me if I'm wrong), so I'd be more inclined to attribute it ... (read more)

8Zaine9yAlso consider the Mayans, Aztecs, Egyptians, Mesopotamians, Indian regimes ( exempli gratia Indus Valley, Mauryan), Carthaginians, Caliphates, Ottomans, etcetera.
Personal Evidence - Superstitions as Rational Beliefs

What if the house merely floated the thing over there with reaction (pushing back on the floors/walls), and its floor rotted slightly (accumulating entropy, losing chemical energy) in proportion to the necessary force? In that case, he's only discovered ghostly energy transfer at small distances, which may be completely impractical (only one or two Nobels).

0CellBioGuy7yAlso possibly cheap (wooden, plaster, concrete) computational substrate.
LessWrong help desk - free paper downloads and more

This appears to be all that exists for 3 (page 2): http://jech.bmj.com/content/suppl/2003/09/23/57.9.DC1/Abstracts.pdf

It was so small that after finding it I kept looking for a good 15 minutes, but I'm pretty sure the abstract is all there is and the full article was never published (the first author doesn't list it on his personal page, and all the references seem to be to the abstract).

0gwern9yThanks for looking. It seems that it's another one of the many papers which get presented as an abstract and never published ("Full publication of results initially presented in abstracts (Review)" [http://dl.dropbox.com/u/85192141/2008-scherer.pdf]).
Reflection in Probabilistic Logic

This idea reminds me of some things in mechanism design, where it turns out you can actually prove a lot more for weak dominance than for strong dominance, even though one might naïvely suppose that the two should be equivalent when dealing with a continuum of possible actions/responses. (I'm moderately out of my depth with this post, but the comparison might be interesting for some.)

Personal Evidence - Superstitions as Rational Beliefs

I think there are non-anthropic problems with even rational!humans communicating evidence.

One is that it's difficult to communicate that you're not lying, and it is also difficult to communicate that you're competent at assessing evidence. A rational agent may have priors saying that OrphanWilde is an average LW member, including the associated wide distribution in propensity to lie and competence at judging evidence. On the other hand, rational!OrphanWilde would (hopefully) have a high confidence assessment of himself (herself?) along both dimensions. How... (read more)

Personal Evidence - Superstitions as Rational Beliefs

I meant "someone close to him" in a relationship, not a spatial, sense (so, "other family member or friend he knows about"). Which I guess is still kind of just a different connotation, but I think one worth noticing separately from the "crazy lurker who's been around for a while" hypothesis.

2ModusPonies9yI see. Thanks for clarifying.
0[anonymous]9yAh. Thanks for clarifying.
Personal Evidence - Superstitions as Rational Beliefs

Either that or maybe either OrphanWilde or his sister or someone else close to him really enjoys messing with everyone and making it seem that the house is haunted.

0A4FB53AC9yEspecially to mess with one of those people intolerant of our beliefs in the supernatural, who always have to go about how this or that can easily be dismissed if only you were rational. How ironical could it be then to get one to believe in a haunted house because it was the rational thing to do given the "evidence"?
0atucker9yThough, the other stuff in the post, and his other comments on the thread, really make it seem to me to be related to the house rather than to him, or his friends.
0ModusPonies9yThat sounds like a restatement of the parent comment, with different connotation.
[LINK] Transcendence (2014) -- A movie about "technological singularity"

IMDb lists Wally Pfister as director. The shooting date is also from its Q&A section.

0RomeoStevens9yawww crap, cinematography was a huge glaring weakness of Nolan's films. This does not bode well.
[LINK] Transcendence (2014) -- A movie about "technological singularity"

Shooting is apparently scheduled to start in April, so you probably don't have long to wait.

0RomeoStevens9ywithout a director?
Another community about existential risk - Arctic news

Technically, LW isn't about x-risk. It's about "refining the art of human rationality", as you can see up there in the header.

I am also not sure that a blogspot blog that gets 0-6 comments per post is really worth calling "a community" or taking particular notice of. The other ones you mention seem to more closely resemble communities, but have even less to do with x-risk.

2turchin9yTechnically you are right. But my post was not about x-risks themselves but about the tendency of different groups of people to aggregate around one particular x-risk and ignore other x-risks, for example by not granting them the right to be called true x-risks.
[LINK] Transcendence (2014) -- A movie about "technological singularity"

Apparently an early script summary leaked. Spoilers:

Nppbeqvat gb gur fhzznel, n tebhc bs nagv-grpuabybtl greebevfgf nffnffvangr Jvyy, Riryla hcybnqf uvf oenva vagb n cebgbglcr fhcrepbzchgre. Nygubhtu fur ng svefg svaqf gur rkcrevzrag frrzf gb unir tbar jebat, orsber gbb ybat Riryla svaqf Jvyy erfcbaqvat va pbzchgre sbez.

Fur tbrf ba gb pbaarpg Jvyy gb gur Vagrearg fb ur pna uryc znxr shegure fpvragvsvp oernxguebhtuf. Jvyy nfxf Riryla gb pbaarpg n zvpebcubar naq n pnzren hc gb gur pbzchgre fb ur pna frr naq fcrnx gb ure nf jryy.

Jvyy perngrf n onpxhc bs uvzfr... (read more)

2Vaniver9yFb, guvf fhzznel fbhaqf n ybg gb zr yvxr Tubfg va gur Furyy.
0lukeprog9yChris Nolan is executive producer.
Critiques of the heuristics and biases tradition

You put your MP3 player on random. You have a playlist of 20 songs. What are the odds that the next song played is the same song which was just played?

I think the option is more typically called "shuffle", which actually accurately represents what it does.

1[anonymous]9yI once had a MP3 player where “random” actually did what it said (for some value of “actually”), including playing the same song twice in a row once in a while.
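A quick simulation of the difference (assuming "random" means independent uniform picks, as on that MP3 player):

```python
import random

def repeat_rate(n_songs, n_plays, seed=0):
    """Fraction of transitions where truly random play immediately
    repeats the song that was just played."""
    rng = random.Random(seed)
    plays = [rng.randrange(n_songs) for _ in range(n_plays)]
    repeats = sum(a == b for a, b in zip(plays, plays[1:]))
    return repeats / (n_plays - 1)

# With 20 songs, roughly 1 transition in 20 is an immediate repeat,
# matching the analytic answer of 1/20 = 0.05. A shuffle, by contrast,
# never repeats a song within one pass through the playlist.
print(round(repeat_rate(20, 100_000), 3))
```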
Caring about possible people in far Worlds

I care about possible people. My child, if I ever have one, is one of them, and it seems monstrous not to care about one's children.

I think you may have found one of the quickest ways to tempt me into downvoting a post without reading further (it wasn't quite successful - I read all the way through before downvoting). Poor reasoning and stereotypical appeal to emotion are probably not the ideal opener.

Beyond that, you never made clear what the purpose of the following arguments is and gave them really confusing titles.

  • I'm not sure in what way argumen
... (read more)
1drethelin9yVery strongly seconding the first part of this.
Open thread, March 17-31, 2013

I don't think your second point really is one, seeing as a CEO can not be installed without being affiliated with the power holders.

Why not? Some CEOs (especially for smaller companies, I think) are found via specialised recruiting companies, which I'd say is pretty unaffiliated. And in any case, it's not clear to me how you think the affiliation would be increasing pay. Do you imagine potential CEO candidates hold an auction in which they offer kickbacks to major shareholders/powerholders from their pay or something? Because I haven't heard of that eve... (read more)

0Metus9yInteresting. I will have to read through that later.
Open thread, March 17-31, 2013

I suspect there's too much of a difference in how much LW members know about basketball to get particularly wide participation. For example, I had to look up "March Madness" to figure out what this is about.

Also, there's a significant chance that either people would just copy the odds from Pinnacle, or maybe even arbitrage against it (valuing karma or whatever at 1-2 cents). Or, well, I'd certainly be tempted to =]

Open thread, March 17-31, 2013

I'm pretty sure that low salaries are a dysfunction of democracies rather than high salaries being a dysfunction of companies. In particular, it's not the case with every company that a couple of people hold enormous shares. And aside from that, even when there is clear evidence that "the majority" gets directly involved in CEO compensation, it doesn't seem that the salaries go down all that much.

Or looking at it differently, if the high salaries were the consequence of an undue concentration of power, we would expect that when one CEO leaves, an... (read more)

0Metus9yI don't think your second point really is one, seeing as a CEO can not be installed without being affiliated with the power holders. Can you back up your first point?
Open thread, March 17-31, 2013

I'm also curious, and would like to add a poll: [pollid:420]

Open thread, March 17-31, 2013

Regarding the note, in statistics you could call that a population parameter. While parameters that are used are normally things like "mean" or "standard deviation", the definition is broad enough that "the centre of mass of a collection of atoms" plausibly fits the category.
