Here's the new thread for posting quotes, with the usual rules:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LW/OB
  • No more than 5 quotes per person per monthly thread, please.

"Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?"
"It's a metaphor for the human struggle."
"I don't see how that changes my point."

Well, his point only makes sense when applied to the metaphor, since a better answer to the question

"Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?"

would be: "Where would Sisyphus get a robot in the middle of Hades?"

Edit: come to think of it, this also works with the metaphor for human struggle.

I thought the correct answer would be, "No time for programming, too busy pushing a boulder."

Though, since the whole thing was a punishment, I have no idea what the punishment for not doing his punishment would be. Can't find it specified anywhere.

I don't think he's punished for disobeying, I think he's compelled to act. He can think about doing something else, he can want to do something else, he can decide to do something else ... but what he does is push the boulder.

The version I like the best is that Sisyphus keeps pushing the boulder voluntarily, because he's too proud to admit that, despite all his cleverness, there's something he can't do. (Specifically, get the boulder to stay at the top of the mountain).

My favorite version is similar. Each day he tries to push the boulder a little higher, and as the boulder starts to slide back, he mentally notes his improvement before racing the boulder down to the bottom with a smile on his face.

Because he gets a little stronger and a little more skilled every day, and he knows that one day he'll succeed.

In the M. Night version: his improvements are an asymptote - and Sisyphus didn't pay enough attention in calculus class to realize that the limit is just below the peak.

Or maybe the limit is the peak. He still won't reach it.
In some versions he's harassed by harpies until he gets back to boulder-pushing. But RobinZ's version is better.
Borrowing one of Hephaestus's, perhaps?

Now someone just has to write a book entitled "The Rationality of Sisyphus", give it a really pretentious-sounding philosophical blurb, and then fill it with Grand Theft Robot.

He can build it. It would be pretty hard to do while pushing a boulder up a hill, but he has all the time in the world!
Does he have any suitable raw materials?

Answer: Because the Greek gods are vindictive as fuck, and will fuck you over twice as hard when they find out that you wriggled out of it the first time.

Who was the guy who tried to bargain the gods into giving him immortality, only to get screwed because he hadn't thought to ask for youth and health as well? He ended up as a shriveled, crab-like thing in a jar.

My high school English teacher thought this fable showed that you should be careful what you wish for. I thought it showed that trying to compel those with great power through contract was a great way to get yourself fucked good and hard. Don't think you can fuck with people a lot more powerful than you are and get away with it.

EDIT: The myth was of Tithonus. The goddess Eos was keeping him as a lover, and tried to bargain with Zeus for his immortality without asking for eternal youth too. Oops.

Don't think you can fuck with people a lot more powerful than you are and get away with it.

I'm no expert, but that seems to be the moral of a lot of Greek myths.

King Midas, too.
I'd say this captures the spirit of Less Wrong perfectly.

Do unto others 20% better than you expect them to do unto you, to correct for subjective error.

-- Linus Pauling

Citation for this was hard; the closest I got was Etzioni's 1962 The Hard Way to Peace, pg 110. There's also a version in the 1998 Linus Pauling on peace: a scientist speaks out on humanism and world survival : writings and talks by Linus Pauling; this version goes

I have made a modern formulation of the Golden Rule: "Do unto others 20 percent better than you would be done by - the 20 percent is to correct for subjective error."

Did you take "expect" to mean as in prediction, or as in what you would have them do, like the Jesus version?
How about doing unto others what maximizes total happiness, regardless of what they'd do unto you?
The former is computationally far more feasible.
By acting in a way that discourages them from hurting you, and encouraging them to help you, you are playing your part in maximizing total happiness.
Doing unto others that which causes maximum total happiness leaves you vulnerable to Newcomb problems. You want to do unto others that which logically entails maximum total happiness. Under certain conditions, this is the same as Pauling's recommendation.
I never mentioned causation. If you find a way to maximize it acausally, do that.
It has a tendency to go horribly wrong.
It's a nice sentiment, but the optimization problem you suggest is usually intractable.
It's better to at least attempt it than just find an easier problem and do that. You might have to rely on intuition and such to get any answer, but you're not going to do well if you just find something easier to optimize.
Yes, but there's no way a pithy quote is going to solve the problem for you. It might, however, contain a useful heuristic.

“A writer who says that there are no truths, or that all truth is ‘merely relative,’ is asking you not to believe him. So don’t.” ― Roger Scruton, Modern Philosophy: An Introduction and Survey

I am sympathetic to this line, but Scruton's dismissal seems a little facile. If somebody says the truth is relative, then they can bite the bullet if they wish and say that THAT truth is also relative, thus avoiding the trap of self-contradiction. It might still be unwise to close your ears to them. Consider a case where we DO agree that a given subject matter is relative; e.g., taste in ice-cream. Suppose Rosie the relativist tells you: "This ice-cream vendor's vanilla is absolutely horrible, but that's just my opinion and obviously it's relative to my own tastes." You would probably agree that Rosie's opinion is indeed "just relative"... and still give the vanilla a miss this time.
If "this vanilla ice cream is horrible" is relatively true, then "Rosie's opinion is that this vanilla ice cream is horrible" is absolutely true.

The person who says, as almost everyone does say, that human life is of infinite value, not to be measured in mere material terms, is talking palpable, if popular, nonsense. If he believed that of his own life, he would never cross the street, save to visit his doctor or to earn money for things necessary to physical survival. He would eat the cheapest, most nutritious food he could find and live in one small room, saving his income for frequent visits to the best possible doctors. He would take no risks, consume no luxuries, and live a long life. If you call it living. If a man really believed that other people's lives were infinitely valuable, he would live like an ascetic, earn as much money as possible, and spend everything not absolutely necessary for survival on CARE packets, research into presently incurable diseases, and similar charities.

In fact, people who talk about the infinite value of human life do not live in either of these ways. They consume far more than they need to support life. They may well have cigarettes in their drawer and a sports car in the garage. They recognize in their actions, if not in their words, that physical survival is only one value, albeit a very important one, among many.

-- David D. Friedman, The Machinery of Freedom

He's just showing that those people don't actually assign infinite value, not that assigning it is nonsense. It's nonsense because, even if you consider life infinitely more intrinsically valuable than a green piece of paper, you'd still trade a life for green pieces of paper, so long as you could trade them back for more lives.
If life were of infinite value, trading a life for two new lives would be a meaningless operation - infinity times two is equal to infinity. Not unless by "life has infinite value" you actually mean "everything else is worthless".

Not quite so! We could presume that value isn't restricted to the reals + infinity, but say that something's value is a value among the ordinals. Then, you could totally say that life has infinite value, but two lives have twice that value.

But this gives non-commutativity of value. Saving a life and then getting $100 is better than getting $100 and saving a life, which I admit seems really screwy. This also violates the Von Neumann-Morgenstern axioms.

In fact, if we claim that a slice of bread is of finite value, and, say, a human life is of infinite value in any definition, then we violate the continuity axiom... which is probably a stronger counterargument, and tightly related to the point DanielLC makes above.
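The non-commutativity being discussed is easy to exhibit concretely. Here is a minimal sketch (my own illustration, not from the thread) that encodes ordinals below ω² as pairs (a, b) meaning ω·a + b, where ordinal addition lets a following infinite term absorb the finite part before it:

```python
# Sketch: ordinals below omega^2, encoded as pairs (a, b) meaning
# omega*a + b with non-negative integers a and b. The encoding and
# names are illustrative only.

def ord_add(x, y):
    """Ordinal addition: a following infinite term absorbs the finite part."""
    a1, b1 = x
    a2, b2 = y
    if a2 > 0:
        return (a1 + a2, b2)  # the earlier finite part b1 is swallowed
    return (a1, b1 + b2)

w = (1, 0)    # omega: say, the value of one life
one = (0, 1)  # a finite value: say, $100

print(ord_add(w, one))  # (1, 1): omega + 1, strictly more than omega
print(ord_add(one, w))  # (1, 0): 1 + omega = omega, the $100 vanished
```

The asymmetry between the two sums is exactly why ordinal-valued utilities make "save a life, then gain $100" differ from "gain $100, then save a life."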

If we want to assign infinite value to lives compared to slices of bread, we don't need exotic ideas like transfinite ordinals. We can just define value as an ordered pair (# of lives, # of slices of bread). When comparing values we first compare # of lives, and only use # of slices of bread as a tiebreaker. This conforms to the intuition of "life has infinite value" and still lets you care about bread without any weird order-dependence. This still violates the continuity axiom, but that, of itself, is not an argument against a set of preferences. As I read it, claiming "life has infinite value" is an explicit rejection of the continuity axiom. Of course, Kaj Sotala's point in the original comment was that in practice people demonstrate by their actions that they do accept the continuity axiom; that is, they are willing to trade a small risk of death in exchange for mundane benefits.
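The ordered-pair proposal amounts to comparing values lexicographically, which tuples already do in most languages. A minimal sketch, with the pair encoding taken from the comment above:

```python
# Sketch of the "lives first, bread as tiebreaker" ordering described
# above. Python compares tuples lexicographically, so a value is just
# a pair (# of lives, # of slices of bread).

def value(lives, bread):
    return (lives, bread)

# No finite amount of bread outweighs one life...
assert value(1, 0) > value(0, 10**9)
# ...but with lives equal, bread breaks the tie, with no
# ordinal-style order-dependence.
assert value(1, 5) > value(1, 4)
```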
You could use hyperreal numbers. They behave pretty similarly to reals, and have reals as a subset. Also, if you multiply any hyperreal number besides zero by a real number, you get something isomorphic to the reals, so you can multiply by infinity and it still will work the same. I'm not a big fan of the continuity axiom. Also, if you allow for hyperreal probabilities, you can still get it to work.
True only if you have a way to describe infinity in terms of a real number.
You just pick some infinite hyperreal number and multiply all the real numbers by that. What's the problem?
Oh, you're saying we should assign an infinite hyperreal number as the value of individual lives. That works, but be very careful how you value life. Contradictions and absurdities are trivial to develop when one aspect is permitted to override every other one.
Nitpick: I think you mean non-commutativity; the ordinals are associative. The rest of your post agrees with this interpretation.
Oops, yes. Edited in original; thanks!

There is something about practical things that knocks us off our philosophical high horses. Perhaps Heraclitus really thought he couldn't step in the same river twice. Perhaps he even received tenure for that contribution to philosophy. But suppose some other ancient had claimed to have as much right as Heraclitus did to an ox Heraclitus had bought, on the grounds that since the animal had changed, it wasn't the same one he had bought and so was up for grabs. Heraclitus would have quickly come up with some ersatz, watered-down version of identity of practical value for dealing with property rights, oxen, lyres, vineyards, and the like. And then he might have wondered if that watered-down vulgar sense of identity might be a considerably more valuable concept than a pure and philosophical sort of identity that nothing has.

John Perry, introduction to Identity, Personal Identity, and the Self

He bought the present ox along with the future ox. He could have just bought the present ox, or at least a shorter interval of one. This is known as "renting".

Paul Crowley: Which future ox did he buy?
Sorry. The future oxen.
Paul Crowley: Of the many oxen at a given point in time in the future, which one did he buy?
Oh. I see what you mean. He bought the ones on the future side of that worldline. It's convenient that way, and humans are good at keeping track. He could have bought any combination of future oxen the guy owns. This has the advantage of later oxen being in the same area as earlier oxen, simplifying transportation.
Paul Crowley: I'm not making myself clear. It's clear from what you say that if Heraclitus bought an ox yesterday, he owns an ox today. But in order to say that he owns this particular ox, he needs a better system of identity than "you never step into the same river twice".
It's a sensible system for deciding how to buy and sell oxen because it minimizes shipping costs. It's a less sensible way to, for example, judge the value of a person. Should I choose Alice or Bob just because Bob is at the beginning of a world-line and Alice is not? This does kind of come back to arguing definitions. The common idea of identity is really useful. If a philosopher thinks otherwise, he's overthinking it. "Identity" refers to something. I just don't think it's anything beyond that. You in principle could base your ethics on it, but I see no reason to. It's not as if it's something anybody can experience. If you base your anthropics on it, you'll only end up confusing yourself.
Alternatively, he purchased the present ox using the ox-an-hour-ago as payment.
No. There is nobody to make that transaction with, and his past self still used the past ox, so he can't sell it.

If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being, and who is willing to destroy a piece of his own heart?

-- Aleksandr Solzhenitsyn, The Gulag Archipelago

But the line dividing good and evil cuts through the heart of every human being, and who is willing to destroy a piece of his own heart?

If only it were a line. Or even a vague boundary between clearly defined good and clearly defined evil. Or if good and evil were objectively verifiable notions.

You don't think even a vague boundary can be found? To me it seems pretty self-evident by looking at extremes; e.g., torturing puppies all day is obviously worse than playing with puppies all day. By no means am I secure in my metaethics (i.e., I may not be able to tell you in exquisite detail WHY the former is wrong). But even if you reduced my metaethics down to "whatever simplicio likes or doesn't like," I'd still be happy to persecute the puppy-torturers and happy to call them evil.
Animal testing. And even enjoying torturing puppies all day is merely considered "more evil" because it's a predictor of psychopathy.
So I think maybe I leapt into this exchange uncarefully, without being clear about what I was defending. I am defending the meaningfulness & utility of a distinction between good & evil actions (not states of affairs). Note that a distinction does not require a sharp dividing line (yellow is not the same as red, but the transition is not sudden). I also foresee a potential disagreement about meta-ethics, but that is just me "extrapolating the trend of the conversation."

Anyway, getting back to good vs evil: I am not especially strict about my use of the word "evil" but I generally use it to describe actions that (a) do a lot of harm without any comparably large benefit, AND (b) proceed from a desire to harm sentient creatures. Seen in this light it is obvious why torturing puppies is evil, playing with them is good, and testing products on them is ethically debatable (but not evil, because of the lack of desire to harm).

None of this is particularly earth-shattering as philosophical doctrine. Not if you think animals' interests count morally, which I do explicitly, and virtually everybody does implicitly.
I think your philosophy is probably fairly normal, it's just any attempt to simplify such things looks like an open challenge to point out corner cases. Don't take it too seriously. Also I'm not fully convinced on whether animals' interests count morally, even though they do practically by virtue of triggering my empathy. Aside from spiders. Those can just burn. (Which is an indicator that animals only count to me because they trigger my empathy, not because I care)
But... but... they just want to give you a hug.
You point out that there are acts easily agreed to be evil and acts easily agreed to be good, but that doesn't imply a definable boundary between good and evil. First postulate a boundary between good and evil. Now, what is necessary to refute that boundary? A clearly defined boundary would require actions that fall near the boundary to always fall to one side or the other without fail. Easily, that is not the case. Stealing food is clearly evil if you have no need but the victim has need for the food. If the needs are opposite, then it is not clearly evil. So there is no clear boundary, but what would a vague boundary require? I think a vague boundary requires that actions can be ranked in a vague progression from "certainly good" through "overall good, slightly evil," descending through progressively less good zones as they approach from one side, crossing an "evil =~ good" area, into a progressively more evil side. I do not see that this is necessarily the case.

Stealing food is clearly evil if you have no need but the victim has need for the food. If the needs are opposite, then it is not clearly evil. So there is no clear boundary, but what would a vague boundary require?

You are pointing to different actions labeled stealing and saying "one is good and the other is evil." Yeah, obviously, but that is no contradiction - they are different actions! One is the action of stealing in dire need, the other is the action of stealing without need.

This is a very common confusion. Good and evil (and ethics) are situation-dependent, even according to the sternest, most thundering of moralists. That does not tell us anything one way or the other about objectivity. The same action in the same situation with the same motives is ethically the same.

Thank you for pointing out my confusion. I've lost confidence that I have any idea what I'm talking about on this issue.
I think the intermediate value theorem covers this: if a function has positive and negative values (good and evil) and it is continuous (I would assume a "vague boundary" or "grey area" or "goodness spectrum" to be continuous), then there must be at least one zero value. That zero value is the boundary.
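The intermediate value theorem argument can be made concrete with a bisection search: any continuous function that is negative at one end of an interval and positive at the other must cross zero somewhere inside. The "goodness" function below is an arbitrary mathematical stand-in, not a claim about ethics:

```python
# Sketch: if "goodness" were a continuous real-valued function of some
# situation parameter x, negative (evil) at one end and positive (good)
# at the other, bisection locates the zero crossing that the
# intermediate value theorem guarantees.

def goodness(x):
    return x**3 - 2*x - 1  # negative at x=0, positive at x=2

def find_boundary(f, lo, hi, tol=1e-9):
    assert f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x0 = find_boundary(goodness, 0.0, 2.0)
print(round(x0, 6))  # 1.618034, the positive root (the golden ratio)
```

The set-valued-map objection in the reply below is exactly the condition under which this sketch stops applying: bisection needs a single-valued continuous function.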
It would indeed cover this if goodness spectrum was a regular function, not a set-valued map. Unfortunately, the same thoughts and actions can correspond to different shades of good and evil, even in the mind of the same person, let alone of different people. Often at the same time, too.
This shows that there is disagreement & confusion about what is good & what is evil. That no more proves good & evil are meaningless, than disagreement about physics shows that physics is meaningless. Actually, disagreement tends to support the opposite conclusion. If I say fox-hunting is good and you say it's evil, although we disagree on fox-hunting, we seem to agree that only one of us can possibly be right. At the very least, we agree that only one of us can win.

But the line dividing Kansas and Nebraska cuts through the heart of every human being. And who is willing to grow corn on his own heart?

— Steven Kaas

Duplicate.

A problem well stated is a problem half solved.

Charles Kettering

A problem sufficiently well-stated is a problem fully solved.
Wow, I didn't even know that's a quote from someone! I had inferred that (mini)lesson from a lecture I heard, but it wasn't stated in those terms, and I never checked if someone was already known for that.

Nobody is smart enough to be wrong all the time.

Ken Wilber

"But I tell you he couldn't have written such a note!" cried Flambeau. "The note is utterly wrong about the facts. And innocent or guilty, Dr Hirsch knew all about the facts."

"The man who wrote that note knew all about the facts," said his clerical companion soberly. "He could never have got 'em so wrong without knowing about 'em. You have to know an awful lot to be wrong on every subject—like the devil."

"Do you mean—?"

"I mean a man telling lies on chance would have told some of the truth," said his friend firmly. "Suppose someone sent you to find a house with a green door and a blue blind, with a front garden but no back garden, with a dog but no cat, and where they drank coffee but not tea. You would say if you found no such house that it was all made up. But I say no. I say if you found a house where the door was blue and the blind green, where there was a back garden and no front garden, where cats were common and dogs instantly shot, where tea was drunk in quarts and coffee forbidden—then you would know you had found the house. The man must have known that particular house to be so accurately inaccurate."

--G.K. Chesterton, "The Duel of Dr. Hirsch"

Reversed malevolence is intelligence?

Inverted information is not random noise.

...unless you're reversing noise, which is why Reversed Stupidity is not Intelligence.
If someone tells you the opposite of the truth in order to deceive you, and you believe the opposite of what they say because you know they are deceitful, then you believe the truth. (A knave is as good as a knight to a blind bat.) The problem is, a clever liar doesn't lie all the time, but only when it matters.
It's more likely that they're a stupid liar than that they got it all wrong by chance.
Another problem is that for many interesting assertions X, opposite(opposite(X)) does not necessarily equal X. Indeed, opposite(opposite(X)) frequently implies NOT X.
Could you give an example? I would have thought this happens with Not(opposite(X)); for example, "I don't hate you" is different than "I love you", and in fact implies that I don't. But I would have thought "opposite" was symmetric, so opposite(opposite(X)) = X.
Well, OK. So suppose (to stick with your example) I love you, and I want to deceive you about it by expressing the opposite of what I feel. So what do I say? You seem to take for granted that opposite("I love you") = "I hate you." And not, for example, "I am indifferent to you." Or "You disgust me." Or various other assertions. And, sure, if "I love you" has a single, unambiguous opposite, and the opposite also has a single, unambiguous opposite, then my statement is false. But it's not clear to me that this is true. If I end up saying "I'm indifferent to you" and you decide to believe the opposite of that... well, what do you believe? Of course, simply negating the truth ("I don't love you") is unambiguously arrived at, and can be thought of as an opposite... though in practice, that's often not what I actually do when I want to deceive someone, unless I've been specifically accused of the truth. ("We're not giant purple tubes from outer space!")

Lol, my professor would give a 100% to anyone who answered every exam question wrong. There were a couple of people who pulled it off, but most scored above 0 but below 10.

I'm assuming a multiple-choice exam, and invalid answers don't count as 'wrong' for that purpose?

Otherwise I can easily miss the entire exam with "Tau is exactly six." or "The battle of Thermopylae" repeated for every answer. Even if the valid answers are [A;B;C;D].

Unless it really was the battle of Thermopylae. Not having studied, you won't know.
"The Battle of Thermopylae" is intended as the alternate for questions which might have "Tau is exactly six" as the answer. For example: "What would be one consequence of a new state law which defines the ratio of a circle's circumference to diameter as exactly three?" I bet that you can't write a question for which "Tau is exactly six." and "The battle of Thermopylae" are both answers which gain any credit...

I bet that you can't write a question for which "Tau is exactly six." and "The battle of Thermopylae" are both answers which gain any credit...

"Write a four word phrase or sentence."

You win.
Judging by this and your previous evil genie comments, you'd make a lovely UFAI.
I hate to break up the fun, and I'm sure we could keep going on about this, but Decius's original point was just that giving a wrong answer to an open-ended question is trivially easy. We can play word games and come up with elaborate counter-factuals, but the substance of that point is clearly correct, so maybe we should just move on.
That was exactly the challenge I issued. Granted, it's trivial to write an answer which is wrong for that question, but it shows that I can't find a wrong answer for an arbitrary question as easily as I thought I could.
An interesting corollary of the efficient market hypothesis is that, neglecting overhead due to things like brokerage fees and assuming trades are not large enough to move the market, it should be just as difficult to lose money trading securities as it is to make money.
No, not really. In an efficient market, risks uncorrelated with those of other securities shouldn't be compensated, so you should easily be able to screw yourself over by not diversifying.
But isn't the risk of not diversifying compensated by a corresponding possibility of large reward if the sector outperforms? I wouldn't consider a strategy that produces modest losses with high probability but large gains with low probability sufficient to disprove my claim.

Let's go one step back on this, because I think our point of disagreement is earlier than I thought in that last comment.

The efficient market hypothesis does not claim that the profit on all securities has the same expectation value. EMH-believers don't deny, for example, the empirically obvious fact that this expectation value is higher for insurance companies than for more predictable businesses. Also, you can always increase your risk and expected profit by leverage, i.e. by investing borrowed money.

This is because markets are risk-averse, so that for the same expectation value you get paid extra to accept a higher standard deviation. Out- or underperforming the market is really easy: just accept more or less risk than it does on average. The claim is not that the expectation value will be the same for every security, only that the price of every security will be consistent with the same prices for risk and expected profit.

So if the EMH is true, you cannot get a better deal on expected profit without also accepting higher risk, and you cannot get a higher risk premium than other people. But you can still get lots of different trade-offs between expected profit and risk.
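The corollary upstream (that losing money systematically is as hard as making it) can be checked with a toy simulation. Assuming an idealized martingale price with fair moves, no fees, and trades that ignore the price path (all assumptions of my sketch, not claims from the thread), a coin-flip trading rule has expected profit zero:

```python
import random

# Toy check: under a martingale price (fair +/- 1 moves) with no fees,
# a trading rule whose decisions are independent of the price path has
# expected mark-to-market profit of exactly zero. Numbers illustrative.

def simulate(n_steps=1000, seed=0):
    rng = random.Random(seed)
    price, cash, shares = 100.0, 0.0, 0
    for _ in range(n_steps):
        if rng.random() < 0.5:      # coin flip: buy one share (on margin)
            cash -= price
            shares += 1
        elif shares > 0:            # otherwise sell one, if any are held
            cash += price
            shares -= 1
        price += rng.choice([-1.0, 1.0])  # fair random walk step
    return cash + shares * price    # final mark-to-market profit

profits = [simulate(seed=i) for i in range(2000)]
print(sum(profits) / len(profits))  # tiny relative to ~$50,000 of turnover
```

Any single run can win or lose a lot (that's the risk term), but the average across runs hovers near zero, which is the sense in which neither winning nor losing is systematically available.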


Unless you're a fictional character. Or possibly Mike "Bad Player" Flores.
I thought your first link would be Bloody Stupid Johnson.
This reminds me of an episode of QI, in which Johnny Vegas, who usually throws out random answers for the humor, actually managed to get a question (essentially) right.

Lady Average may not be as good-looking as Lady Luck, but she sure as hell comes around more often.


Not always, since:

The average human has one breast and one testicle

Des McHale

In other words, the average of a distribution is not necessarily the most probable value.

In other words: expect Lady Mode, not Lady Mean.

Don't expect her, either. In Russian Roulette, the mode is that you don't die, and indeed that's the outcome for most people who play it. You should, however, expect that there's a very large chance of instadeath, and if you were to play a bunch of games in a row, that (relatively uncommon) outcome would almost certainly kill you. (A similar principle applies to things like stock market index funds: the mode doesn't matter when all you care about is the sum of the stocks.) The real lesson is this: always expect Lady PDF.
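The Russian Roulette point is just arithmetic: the modal outcome of a single game is survival, but survival probability decays geometrically across repeated games. A quick worked example (illustrative numbers for a six-chamber revolver with one round):

```python
# Worked numbers for the point above: the mode of one game is survival
# (probability 5/6), yet the mode is a terrible guide once games repeat.

p_survive_one = 5 / 6
for n in (1, 10, 25, 50):
    print(f"survive {n:2d} games: {p_survive_one ** n:.3f}")
```

Surviving is the most likely outcome of each individual game, but the chance of surviving 50 games is roughly one in nine thousand.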
Not to be a bore but it does say "Lady Average" not "Sir or Madam Average".

In my high school health class, for weeks the teacher touted the upcoming event: "Breast and Testicle Day!"

When the anticipated day came, it was of course the day when all the boys go off to one room to learn about testicular self-examination, and all the girls go off to another to learn about breast self-examination. So, in fact, no student actually experienced Breast and Testicle Day.

Much to their chagrin, I'm assuming.
Rather: chagrin and relief.
Lady Main Mode? Does not sound that good. Lady Median?
If you're asking who comes around most often, Lady Mode it is - we can't help how it sounds.
Lady Mode is the most fashionable.

...beliefs are like clothes. In a harsh environment, we choose our clothes mainly to be functional, i.e., to keep us safe and comfortable. But when the weather is mild, we choose our clothes mainly for their appearance, i.e., to show our figure, our creativity, and our allegiances. Similarly, when the stakes are high we may mainly want accurate beliefs to help us make good decisions. But when a belief has few direct personal consequences, we in effect mainly care about the image it helps to project.

-Robin Hanson, Human Enhancement

I feel like Hanson's admittedly insightful "signaling" hammer has him treating everything as a nail.

Your contrarian stance against a high-status member of this community makes you seem formidable and savvy. Would you like to be allies with me? If yes, then the next time I go foraging I will bring you back extra fruit.

I agree in principle but I think this particular topic is fairly nailoid in nature.

I'd say it's such a broad subject that there have to be some screws in there as well. I think Hanson has too much faith in the ability of evolved systems to function in a radically changed environment. Even if signaling dominates the evolutionary origins of our brain, it's not advisable to just label everything we do now as directed towards signaling, any more than sex is always directed towards reproduction. You have to get into the nitty gritty of how our minds carry out the signaling. Conspiracy theorists don't signal effectively, though you can probably relate their behavior back to mechanisms originally directed towards, or at least compatible with, signaling. Also, an ability to switch between clear "near" thinking and fluffy "far" thinking presupposes a rational decision maker to implement the switch. I'm not sure Hanson pays enough attention to how, when, and for what reasons we do this.

I think he's mischaracterizing the issue.

Beliefs serve multiple functions. One is modeling accuracy, another is signaling. It's not whether the environment is harsh or easy, it's which function you need. There are many harsh environments where what you need is the signaling function, and not the modeling function.

I think the quote reflects reality (humans aren't naturally rational so their beliefs are conditioned by circumstance), but is better seen as an observation than a recommendation. The best approach should always be to hold maximally accurate beliefs yourself, even if you choose to signal different ones as the situation demands. That way you can gain the social benefits of professing a false belief without letting it warp or distort your predictions.
No, that wouldn't necessarily be the case. We should expect a cost in effort and effectiveness from trying to switch on the fly between the two types of truths. Lots of far truths have little direct predictive value, but lots of signaling value. Why bear the cost for a useless bit of predictive truth, particularly if it is worse than useless and hampers signaling? That's part of the magic of magisteria: segregation of modes of truth by topic reduces that cost.
Hmm, maybe I shouldn't have said "always" given that acting ability is required to signal a belief you don't hold, but I do think what I suggest is the ideal. I think someone who trained themselves to do what I suggest, by studying people skills and so forth, would do better as they'd get the social benefits of conformity and without the disadvantages of false beliefs clouding predictions (though admittedly the time investment of learning these skills would have to be considered). Short version: I think this is possible with training and would make you "win" more often, and thus it's what a rationalist would do (unless the cost of training proved prohibitive, of which I'm doubtful since these skills are very transferable). I'm not sure what you meant by the magisteria remark, but I get the impression that advocating spiritual/long-term beliefs to less stringent standards than short term ones isn't generally seen as a good thing (see Eliezer's "Outside the Laboratory" post among others).
Clothes serve multiple functions. One is keeping warm, another is signalling.

Infallible, adj. Incapable of admitting error.

-L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment

"He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his candle at mine, receives light without darkening me. No one possesses the less of an idea, because every other possesses the whole of it." - Jefferson

But many people do benefit greatly from hoarding or controlling the distribution of scarce information. If you make your living off slavery instead, then of course you can be generous with knowledge.

If you do not hoard your ideas, and neither do I, then we can both benefit from the ideas of the other. If I can access the ideas of a hundred other people at the cost of sharing my own ideas, then I profit; no matter how smart I am, a hundred other people working the same problem are going to be able to produce at least some ideas that I did not think of. (This is a benefit of free/open source software; it has been shown experimentally to work pretty well in the right circumstances).

“The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.” This may sound like the pronouncement of some bong-smoking anarchist, but it was actually Arthur C. Clarke, who found time between scuba diving and pinball games to write “Childhood’s End” and think up communications satellites. My old colleague Ted Rall recently wrote a column proposing that we divorce income from work and give each citizen a guaranteed paycheck, which sounds like the kind of lunatic notion that’ll be considered a basic human right in about a century, like abolition, universal suffrage and eight-hour workdays. The Puritans turned work into a virtue, evidently forgetting that God invented it as a punishment.

-- Tim Kreider

The interesting part is the phrase "which sounds like the kind of lunatic notion that’ll be considered a basic human right in about a century, like abolition, universal suffrage and eight-hour workdays." If we can anticipate what the morality of the future would be, should we try to live by it now?

If we can anticipate what the morality of the future would be, should we try to live by it now?

Not if it's actually the same morality, but depends on technology. For example, strong prohibitions on promiscuity are very sensible in a world without cheap and effective contraceptives. Anyone who tried to live by 2012 sexual standards in 1912 would soon find they couldn't feed their large horde of kids. Likewise, if robots are doing all the work, fine; but right now if you just redistribute all money, no work gets done.

Lack of technology was not the reason condoms weren't as widely available in 1912.
Right idea, not a great example. People used to have lots more kids than now, most dying in childhood. The majority of women of childbearing age (gay or straight) were married and having children as often as their bodies allowed, so promiscuity would not have changed much. Maybe a minor correction for male infertility and sexual boredom in a standard marriage.

You seem to have rather a different idea of what I meant by "2012 standards". Even now we do not really approve of married people sleeping around. We do, however, approve of people not getting married until age 25 or 30 or so, but sleeping with whoever they like before that. Try that pattern without contraception.

You might. I don't. This is most probably a cultural difference. There are people in the world today who see nothing wrong with having multiple wives, given the ability to support them (example: Jacob Zuma).
Strong norms against promiscuity out of wedlock still made sense though, since having lots of children without a committed partner to help care for them would usually have been impractical.
Not if they were gay.

How do you envision living by this model now working?
That is, suppose I were to embrace the notion that having enough resources to live a comfortable life (where money can stand in as a proxy for other resources) is something everyone ought to be guaranteed.
What ought I do differently than I'm currently doing?

I would like to staple that question to the forehead of every political commentator who makes a living writing columns in far mode. What is it you would like us to do? If you don't have a good answer, why are you talking? Argh.
Not if the morality you anticipate coming into favour is something you disagree with. If it's something you agree with, it's already yours, and predicting it is just a way of avoiding arguing for it.
If you are a consequentialist, you should think about the consequences of such a decision. For example, imagine a civilization where an average person has to work nine hours to produce enough food to survive. Now the pharaoh makes a new law saying that (a) all produced food has to be distributed equally among all citizens, and (b) no one can be compelled to work more than eight hours; you can work as a volunteer, but all the food you produce is redistributed equally.

What would happen in such a situation? In my opinion, this would be a mass Prisoners' Dilemma where people would gradually stop cooperating (because the additional hour of work gives them epsilon benefits) and start being hungry. There would be no legal solution; people would try to make some food in their free time illegally, but the unlucky ones would simply starve and die. The law would seem great in far mode, but its near mode consequences would be horrible. Of course, if the pharaoh is not completely insane, he would revoke the law; but there would be a lot of suffering meanwhile.

If people had "a basic human right to have enough money without having to work", the situation could progress similarly. It depends on many things -- for example, how much of the working people's money you would have to redistribute to non-working ones, and how much they could keep. Assuming that one's basic human right is to have $500 a month, but if you work, you can keep $3000 a month, some people could still prefer to work. But there is no guarantee it would work long-term. For example, there would be a positive feedback loop -- the more people are non-working, the more votes politicians can gain by promising to increase their "basic human right income", the higher the taxes, and the smaller the incentives to work. Also, it could work for the starting generation, but corrupt the next generation... imagine yourself as a high school student knowing that you will never ever have to work; how much effort would an average student give
Systems that don't require people to work are only beneficial if non-human work (or human work not motivated by need) is still producing enough goods that the humans are better off not working and being able to spend their time in other ways. I don't think we're even close to that point. I can imagine societies in a hundred years that are at that point (I have no idea whether they'll happen or not), but it would be foolish for them to condemn our lack of such a system now since we don't have the ability to support it, just as it would be foolish for us to condemn people in earlier and less well-off times for not having welfare systems as encompassing as ours.

I'd also note that issues like abolition and universal suffrage are qualitatively distinct from the issue of a minimum guaranteed income (what the quote addresses). Even the poorest of societies can avoid holding slaves or placing women or men in legally inferior roles. The poorest societies cannot afford the "full unemployment" discussed in the quote, and neither can even the richest of modern societies right now (they could certainly come closer than the present, but I don't think any modern economy could survive the implementation of such a system in the present). I do agree, however, about it being a solid goal, at least for basic amenities.
To avoid having slaves, the poorest society could decide to kill all war captives, and to let all people unable to pay their debts starve to death. Yes, this would avoid legal discrimination. Is it therefore a morally preferable solution?
In poor societies that permit slavery, a man might be willing to sell himself into slavery. He gets food and lodging, possibly for his family as well as himself; his new purchaser gets a whole lot of labour. There's a certain loss of status, but a person might well be willing to live with that in order to avoid starvation.
Elections can take quite a bit of resources to run when you have a large voting population...
Eliezer Yudkowsky:
No, politicians can afford to spend lots of money on them. The actual mechanism of elections have never, so far as I know, been all that expensive pre-computation.
IAWYC, but the claims that most of the economic costs of elections are in political spending, and that most of the costs of actually running elections are in voting machines, are both probably wrong. (Public data is terrible, so I'm crudely extrapolating all of this from local to national levels.) The opportunity costs of voting alone dwarf spending on election campaigns. Assuming that all states have the same share of GDP, that people in states without a full-state holiday take an hour off to vote, that people work 260 days a year and 8 hours a day, and that nobody in the holiday states does any work, then we get:

Political spending: 5.3 billion USD

Opportunity costs of elections: 15 trillion USD (US GDP) × (9/50 (states with voting holidays) × 1/260 (fraction of work-time lost) + 41/50 (states without holidays) × 1/260 × 1/8 (fraction of work-time lost)) ≈ 16 billion USD

Extrapolating from New York City figures, election machines cost ~1.9 billion nationwide (50 million for a population ~38 times smaller than the total US population), and extrapolating Oakland County's 650,000 USD cost across the US's 3143 counties, direct costs are just over 2 billion USD. (This is for a single election; however, some states have had as many as 5 elections in a single year. The cost of the voting machines can be amortized over multiple elections in multiple years.)

(If you add together the opportunity costs for holding one general and one non-general election a year (no state holidays; around ~7 billion USD), plus the costs of actually running them, plus half the cost of the campaign money, the total cost per election seems to be around 30 billion USD, or ~0.2% of the US's GDP.)
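For anyone who wants to check the headline figure, here's the opportunity-cost arithmetic as a quick script. All inputs are the comment's own stated assumptions (9 holiday states, an hour off elsewhere, 260 workdays, 8-hour days), not official statistics:

```python
# Rough re-derivation of the opportunity-cost estimate above.
us_gdp = 15e12          # US GDP, USD (assumed round figure)
holiday_states = 9      # states with a full voting holiday
total_states = 50
workdays_per_year = 260
hours_per_day = 8

# Holiday states lose a full workday; the rest lose one hour per worker.
share_holiday = (holiday_states / total_states) * (1 / workdays_per_year)
share_hour = ((total_states - holiday_states) / total_states
              * (1 / workdays_per_year) * (1 / hours_per_day))

opportunity_cost = us_gdp * (share_holiday + share_hour)
print(round(opportunity_cost / 1e9, 1))  # ≈ 16.3 (billion USD)
```

That reproduces the "≈ 16 billion USD" figure, which is how one can tell the second fraction in the formula must be 1/260 × 1/8 (one hour out of a 2080-hour work-year).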
Eliezer Yudkowsky:
Correction accepted. Still seems like something a poor society could afford, though, since labor and opportunity would also cost less. I understand that lots of poor societies do.
What? If anything I'd assume them to be more expensive before computers were introduced. In Italy where they are still paper based they have to hire people to count the ballots (and they have to pay them a lot, given that they select people at random and you're not allowed to refuse unless you are ill or something).

According to Wikipedia, the 2005 elections in Germany cost 63 million euros, with a population of 81 million people. That's about €0.78 per person, or roughly 0.0000281 of the GDP. Does not seem much, in the grander scheme of things. And since the German constitutional court prohibited the use of most types of voting machines, that figure does include the cost of the helpers: 13 million, again, not a prohibitive expenditure.

ArisKatsaris: "Low electoral costs, approximately $1 to $3 per elector, tend to manifest in countries with longer electoral experience"

That's a somewhat confusing comment. If they're effectively conscripted (them not being allowed to refuse), they're not really "hired" -- and that would imply they don't need to be paid a lot...
Is that really so little? I think many fewer people would vote if they had to pay $3 out of their own pocket in order to do so. A law compelling people to do this would be very unpopular unless they got adequate compensation. Not paying them much would just mean they would feign illness or something. (If they didn't select people by lot, the people doing that would be the ones applying for that job, who would presumably like it more than the rest of the population and hence be willing to do it for less.)
Well perhaps fewer people would vote if they had to pay a single cent out of their own pocket -- would that mean that 0.01$ isn't little either? How much are these Italian ballot-counters being paid? Can we quantify this?
IIRC, something like €150 per election. I'll look for the actual figure.
Why so? Usually when people can't refuse to do a job, they're paid little, not a lot.
Like jury duty. Yeah. Why would it be different in Greece?
In the UK, the counters are volunteers.
Well, yes. Almost tautologically so, I should think. The tricky part is working out when humans are better off.
If you are a bayesian, you should think about how much evidence your imagination constitutes. For example, imagine a civilization where an average person gains little or no total productivity by working over 8 hours per day. Imagine, moreover, that in this civilization, working 10 hours a day doubles your risk of coronary heart disease, the leading cause of death in this civilization. Finally, imagine that, in this civilization, a common way for workers to signal their dedication to their jobs is by staying at work long hours, regardless of the harm it does both to their company and themselves. In this civilization, a law preventing individuals from working over 8 hours per day is a tremendous social good.
Work hour skepticism leaves out the question of the cost of mistakes. It's one thing to have a higher proportion of defective widgets on an assembly line (though even that can matter, especially if you want a reputation for high quality products), another if the serious injury rate goes up, and a third if you end up with the Exxon Valdez.
You mean “incentives to fully report your income”, right? ;-) (There are countries where a sizeable fraction of the economy is underground. I come from one.) The same they give today. Students not interested in studying mostly just cheat.
Well, if your society isn't rich enough, you just do what you can. (And a lot of work really isn't all that important; would it be that big of a disaster if your local store carried fewer kinds of cosmetics, or if your local restaurant had trouble hiring waiters?)
It is true that in the long run, things could work out worse with a guarantee of sufficient food/supplies for everyone. I think, though, that this post answers the wrong question; the question to answer in order to compare consequences is how probable it is to be better or worse, and by what amounts. Showing that it "could" be worse merely answers the question "can I justify holding this belief" rather than the question "what belief should I hold". The potential benefits of a world where people are guaranteed food seem quite high on the face of it, so it is a question well worth asking seriously... or would be if one were in a position to actually do anything about it, anyway.

Prisoners' dilemmas amongst humans with reputation and social pressure effects do not reliably work out with consistent defection, and models of societies (and students) can easily predict almost any result by varying the factors they model and how they do so, and so contribute very little evidence in the absence of other evidence that they generate accurate predictions. The only reliable information that I am aware of is that states making such guarantees can exist for multiple generations with no obvious signs of failure, at least with the right starting conditions, because we have such states existing in the world today. The welfare systems of some European countries have worked this way for quite a long time, and while some are doing poorly economically, others are doing comparably well.

I think that it is worth assessing the consequences of deciding to live by the idea of universal availability of supplies, but they are not so straightforwardly likely to be dire as this post suggests, requiring a longer analysis.
As I wrote, it depends on many things. I can imagine a situation where this would work; I can also imagine a situation where it would not. As I also wrote, I can imagine such a system functioning well if people who don't work get enough money to survive, but people who do work get significantly more.

Data point: In Slovakia many uneducated people don't work, because it wouldn't make economic sense for them. Their wage, minus traveling expenses, would be only a little more, in some cases even less, than their welfare. What's the point of spending 8 hours at work if as a result you have less money? They cannot get higher wages, because they are uneducated and unskilled; and in Slovakia even educated people get relatively little money. The welfare cannot be lowered, because the voters on the left would not allow it. The social pressure stops working if too many people in the same town are doing this; they provide moral support for each other. We have villages where unemployment is over 80% and people have already accommodated to this; after a decade of such life, even if you offer them work with a decent wage, they will not take it, because it would mean walking away from their social circle. This would not happen in a sane society, but it does happen in real life. Other European countries seem to fare better in this aspect, but I can imagine the same thing happening there in a generation or two. A generation ago most people would probably not have predicted this situation in Slovakia.

I also can't imagine the social pressure to work on the "generation Facebook". If someone spends most of their day on Facebook or playing online multiplayer games, who exactly is going to socially press them? Their friends? Most of them live the same way. Their parents? The conflict between generations is not the same thing as peer pressure. And the "money without work is a basic human right" meme also does not help. It could work in a country where the difference between average wage (e
This is interesting, particularly the idea of comparing wage growth against welfare growth to predict the success of "free money" welfare. I agree that it seems reasonably unlikely that a welfare system paying more than typical wages, without restrictions conflicting with the "detached from work" principle, would be sustainable, and identifying unsustainable trends in such systems seems like an interesting way to recognise where something is going to have to change, long-term. I appreciate the clarification; it provides what I was missing in terms of evidence or reasoned probability estimates over a narrative/untested model. I'm taking a hint from feedback that I likely still communicated this poorly, and will revise my approach in future.

Back on the topic of taking these ideas as principles, perhaps more practical near-term goals which provide a subset of the guarantee, like detaching the availability of resources for basic survival from the availability of work, might be more achievable. There are a wider range of options available for implementing these ideas, and of incentives/disincentives to avoid long-term use. An example which comes to mind is providing users with credit usable only to order basic supplies and basic food. My rough estimate is that something in this space could likely be designed to operate sustainably with only the technology we have now.

On the side, relating to generation Facebook, my model of the typical 16-22 year old today would predict that they'd like to be able to buy an iPad, go to movies, afford alcohol, drive a nice car, go on holidays, and eventually get most of the same goals previous generations sought, and that their friends will also want these things. At younger ages, I agree that parental pressure wouldn't typically be classified as "peer pressure", but I still think it likely to provide significant incentive to do school work; the parents can punish them by taking away their toys if they don't, as effectively
I have heard this idea proposed, and many people object to it, saying that it would take away the dignity of those people. In other words, some people seem to think that "basic human rights" include not just things necessary for survival, but also some luxury and perhaps some status items (which then obviously stop being status items, if everyone has them).

In theory, yes. However, as a former teacher I have seen parents completely fail at this. Data point: A mother came to school and asked me to tell her 16 year old daughter, my student, not to spend all her free time on the internet. I did not understand WTF she wanted. She explained to me that as a computer science teacher her daughter will probably regard me as an authority about computers, so if I ask her to not use the computer all day long, she will respect me. This was her last hope, because as a mother she could not convince her daughter to go away from the computer.

To me this seemed completely insane. First, the teachers in the given school were never treated as authorities on anything; they were usually treated like shit both by students and school administration (a month later I left that school). Second, as a teacher I have zero influence on what my students do outside school; she as a mother is there, and she has many possible ways to stop her daughter's internet use... for instance to forcibly turn off the computer, or just hide the computer somewhere while her daughter is at school. But she should have started doing something before her daughter turned 16. If she does not know that, she is clearly unqualified to have children; but there is no law against that.

OK, this was an extreme example, but during my 4-year teaching career I have seen or heard from colleagues about many really fucked up parents; and those people were middle and higher social class. This leads me to very pessimistic views, not shared by people who don't have the same experience and are more free to rationalize this away. I think tha
One of these things is not like the others.
Yes, no state has ever implemented truly universal suffrage (among minors).
In Jasay's terminology, the first is a liberty (a relation between a person and an act) and the rest are rights {relations between two or more persons (at least one rightholder and one obligor) and an act}. I find this distinction useful for thinking more clearly about these kinds of topics. Your mileage may vary.
I was actually referring to the third being what I might call an anti-liberty, i.e., you aren't allowed to work more than eight hours a day, which is most definitely not enforced nor widely considered a human right.
How is that different from pointing out that you're not allowed to sell yourself into slavery (not even partially, as in signing a contract to work for ten years and not being able to legally break it), or that you're not allowed to sell your vote?
I'd say each of the three can be said to be unlike the others:

  • abolition falls under Liberty
  • universal suffrage falls under Equality
  • eight-hour workdays fall under Solidarity
So "all of these things are not like the others" [].
I thought eight-hours workdays were about employers not being allowed to demand that employees work more than eight hours a day; I didn't know you weren't technically allowed to do that at all even if you're OK with it.
1. You are allowed to work more than eight hours per day. It's just that in many industries, employers must pay you overtime if you do so.

2. Even if employers were prohibited from using "willingness to work more than 8 hours per day" as a condition for employment, long workdays would probably soon become the norm.

3. Thus a more feasible way to limit workdays is to constrain employees rather than employers.

To see why, assume that without any restrictions on workday length, workers supply more than 8 hours. Let's say, without loss of generality, that they supply 10. (In other words, the equilibrium quantity supplied is ten.) If employers can't demand the equilibrium quantity, but they're still willing to pay to get it, then employees will have the incentive to supply it. In their competition for jobs (finding them and keeping them), employees will supply labor up to the equilibrium quantity, regardless of whether the bosses demand it. Working more looks good. Everyone knows that; you don't need your boss to tell you. So if there's competition for your spot or for a spot that you want, it would serve you well to work more. So if your goal is to prevent ten-hour days, you'd better stop people from supplying them.

At least, this makes sense to me. But I'm no microeconomist. Perhaps we have one on LW who can state this more clearly (or who can correct any mistakes I've made).
See Lochner v. New York. Within the last five years there was a French strike (riot? don't remember exactly) over a law that would limit the workweek of bakers, which would have the impact of driving small bakeries out of business, since they would need to employ (and pay benefits on) 2 bakers rather than just 1. Perhaps a French LWer remembers more details?
It would be very hard to distinguish when people were doing it because they wanted to, and when employers were demanding it. Maybe some employees are working that extra time, but one isn't. The one that isn't happens to be fired later on, for unrelated reasons. How do you determine that worker's unwillingness to work extra hours is not one of the reasons they were fired? Whether it is or not, that happening will likely encourage workers to go beyond the eight hours, because the last one that didn't got fired, and a relationship will be drawn whether there is one or not.
It's not like you can fire employees on a whim: the "unrelated reasons" have to be substantial ones, and it's not clear you can find ones for any employee you want to fire. (Otherwise, you could use such a mechanism to de facto compel your employees to do pretty much anything you want.) Also, even if you somehow did manage to de facto demand that workers work ten hours a day, if you have to pay hours beyond the eighth as overtime (with an hourly wage substantially higher than the regular one), then it's cheaper for you to hire ten people eight hours a day each than eight people ten hours a day.
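The overtime arithmetic is easy to check with made-up numbers. The $20 base wage and the 1.5× overtime multiplier below are illustrative assumptions, not statutory figures; both staffing plans buy the same 80 worker-hours per day:

```python
# Toy comparison of two ways to buy 80 worker-hours per day,
# when hours beyond the eighth cost time-and-a-half.
base_wage = 20.0        # assumed hourly wage, USD
overtime_mult = 1.5     # assumed overtime multiplier

def daily_cost(workers, hours_each):
    """Daily payroll: regular hours at base wage, extra hours at overtime."""
    regular = min(hours_each, 8)
    extra = max(hours_each - 8, 0)
    return workers * (regular * base_wage + extra * base_wage * overtime_mult)

print(daily_cost(10, 8))   # ten people, eight hours each  -> 1600.0
print(daily_cost(8, 10))   # eight people, ten hours each  -> 1760.0
```

Same output of labor, but the ten-hour plan costs 10% more here, which is the incentive the comment describes.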
Under American law, you basically can fire an employee "on a whim" as long as it isn't a prohibited reason.
Only if they can't get another job.
That assumption isn't that far-fetched. Also, the same applies to doing that to compel them to work extra time (or am I missing something?).
If we can afford it. Moral progress proceeds from economic progress.
Morality is contextual. If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food. Suppose that decision is made, then Omega magically provides sufficient food for all - morality hasn't changed, only the decision that morality calls for. -------------------------------------------------------------------------------- Technological advancement has certainly caused moral change (consider society after introduction of the Pill). But having more resources does not, in itself, change what we think is right, only what we can actually achieve.
That's an interesting claim. Are you saying that true moral dilemmas (i.e. a situation where there is no right answer) are impossible? If so, how would you argue for that?
I think they are impossible. Morality can say "no option is right" all it wants, but we still must pick an option, unless the universe segfaults and time freezes upon encountering a dilemma. Whichever decision procedure we use to make that choice (flip a coin?) can count as part of morality.
I take it for granted that faced with a dilemma we must do something, so long as doing nothing counts as doing something. But the question is whether or not there is always a morally right answer. In cases where there isn't, I suppose we can just pick randomly, but that doesn't mean we've therefore made the right moral decision. Are we ever damned if we do, and damned if we don't?
When someone is in a situation like that, they lower their standard for "morally right" and try again. Functional societies avoid putting people in those situations because it's hard to raise that standard back to its previous level.
Well, if all available options are indeed morally wrong, we can still try to see if any are less wrong than others.
Right, but choosing the lesser of two evils is simple enough. That's not the kind of dilemma I'm talking about. I'm asking whether or not there are wholly undecidable moral problems. Choosing between one evil and a lesser evil is no more difficult than choosing between an evil and a good. But if you're saying that in any hypothetical choice, we could always find something significant and decisive, then this is good evidence for the impossibility of moral dilemmas.
It's hard to say, really. Suppose we define a "moral dilemma for system X" as a situation in which, under system X, all possible actions are forbidden. Consider the systems that say "Actions that maximize this (unbounded) utility function are permissible, all others are forbidden." Then the situation "Name a positive integer, and you get that much utility" is a moral dilemma for those systems; there is no utility maximizing action, so all actions are forbidden and the system cracks. It doesn't help much if we require the utility function to be bounded; it's still vulnerable to situations like "Name a real number less than 30, and you get that much utility" because there isn't a largest real number less than 30. The only way to get around this kind of attack by restricting the utility function is by requiring the range of the function to be a finite set. For example, if you're a C++ program, your utility might be represented by a 32 bit unsigned integer, so when asked "How much utility do you want" you just answer "2^32 - 1" and when asked "How much utility less than 30.5 do you want" you just answer "30". (Ugh, that paragraph was a mess...)
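The "name a real number less than 30" trap can be made concrete: whatever you name, there is a strictly better option, so "take the utility-maximizing action" never licenses any action. This is just a minimal sketch; the `improve` helper is an illustrative construction (halfway between your candidate and the cap), not part of any standard library:

```python
# "Name a real number less than 30, and you get that much utility."
# No maximizer exists: any candidate can be strictly improved on.
CAP = 30.0

def improve(candidate, cap=CAP):
    """Return a strictly better candidate that is still below the cap."""
    return (candidate + cap) / 2

x = 29.0
for _ in range(5):
    better = improve(x)
    assert x < better < CAP   # always a strictly better permissible option
    x = better
print(x)  # 29.96875 -- and still improvable, without end
```

(In exact arithmetic this improves forever; in floating point it eventually stalls at the largest representable value below 30, which is exactly the "finite range" escape hatch the comment ends with.)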
That is an awesome example. I'm absolutely serious about stealing that from you (with your permission). Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn't come up all that often. ETA: Here's a thought on a reply. Given restrictions like time and knowledge of the names of large numbers, isn't there in fact a largest number you can name? Something like Graham's number won't work (way too small) because you can always add one to it. But transfinite numbers aren't made larger by adding one. And likewise with the largest real number under thirty, maybe you can use a function to specify the number? Or if not, just say '29.999...' and say nine as many times as you can before the time runs out (or until you calculate that the utility benefit reaches equilibrium with the costs of saying 'nine' over and over for a long time).
Transfinite cardinals aren't, but transfinite ordinals are. And anyway transfinite cardinals can be made larger by exponentiating them.
Good point. What do you think of Chrono's dilemma?
"Twenty-nine point nine nine nine nine ..." until the effort of saying "nine" again exceeds the corresponding utility difference. ;-)
Sure, be my guest. Honestly, I don't know. Infinities are already a problem, anyway.
My view is that a more meaningful question than ‘is this choice good or bad’ is ‘is this choice better or worse than other choices I could make’.
Would you say that there are true practical dilemmas? Is there ever a situation where, knowing everything you could know about a decision, there isn't a better choice?
If I know there isn't a better choice, I just follow my decision. Duh. (Having to choose between losing $500 and losing $490 is equivalent to losing $500 and then having to choose between gaining nothing and gaining $10: yes, the loss will sadden me, but that had better have no effect on my decision, and if it does it's because of emotional hang-ups I'd rather not have. And replacing dollars with utilons wouldn't change much.)
So you're saying that there are no true moral dilemmas (no undecidable moral problems)?
Depends on what you mean by “undecidable”. There may be situations in which it's hard in practice to decide whether it's better to do A or to do B, sure, but in principle either A is better, B is better, or the choice doesn't matter.
So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible. Those examples have a pretty deontological air to them... could we come up with examples of such dilemmas within consequentialism?
Well, the consequentialist version of a situation that demands A and B is one in which A and B provide equally positive expected consequences and no other option provides consequences that are as good. If A and B are incompossible, I suppose we can call this a moral dilemma if we like. And, sure, consequentialism provides no tools for choosing between A and B, it merely endorses (A OR B). Which makes it undecidable using just consequentialism. There are a number of mechanisms for resolving the dilemma that are compatible with a consequentialist perspective, though (e.g., picking one at random).
Thanks, that was helpful. I'd been having a hard time coming up with a consequentialist example.
Then, either the demand/forbiddance is not absolute or the moral system is broken.
How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any "true moral dilemmas" would be a critique of whatever moral system failed to provide an answer, not proof that "true moral dilemmas" existed. We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system. ETA: For example, if I have a completely strict sense of ethics based upon deontology, I may feel an absolute prohibition on lying and an absolute prohibition on allowing humans to die. That would create a moral dilemma for that system in the classical case of Nazis seeking Jews that I'm hiding in my house. So I'd have to switch to a different ethical system. If I switched to a system of deontology with a value hierarchy, I could conclude that human life has a higher value than telling the truth to governmental authorities under the circumstances and then decide to lie, solving the dilemma. I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se. Since I am skeptical of moral realism, that is all the more the case; if morality can't tell us how to act, it's literally useless. We have to have some process for deciding on our actions.
That one thing a couple years ago qualifies. But unless you get into self-referencing moral problems, no. I can't think of one off the top of my head, but I suspect that you can find ones among decisions that affect your decision algorithm and decisions where your decision-making algorithm affects the possible outcomes. Probably like Newcomb's problem, only twistier. (Warning: this may be basilisk territory.)
(Double-post, sorry)
There are plenty of situations where two choices are equally good or equally bad. This is called "indifference", not "dilemma".
Those aren't the situations I'm talking about.
I would make the more limited claim that the existence of irreconcilable moral conflicts is evidence for moral anti-realism. In short, if you have a decision process (aka moral system) that can't resolve a particular problem that is strictly within its scope, you don't really have a moral system. Which makes figuring out what we mean by moral change / moral progress incredibly difficult.
This seems to be to be a rephrasing and clarifying of your original claim, which I read as saying something like 'no true moral theory can allow moral conflicts'. But it's not yet an argument for this claim.
I'm suddenly concerned that we're arguing over a definition. It's very possible to construct a decision procedure that tells one how to decide some, but not all moral questions. It might be that this is the best a moral decision procedure can do. Is it clearer to avoid using the label "moral system" for such a decision procedure? This is a distraction from my main point, which was that asserting our morality changes when our economic resources change is an atypical way of using the label "morality."
No, but if I understand what you've said, a true moral theory can allow for moral conflict, just because there are moral questions it cannot decide (the fact that you called them 'moral questions' leads me to think you think that these questions are moral ones even if a true moral theory can't decide them). You're certainly right, this isn't relevant to your main point. I was just interested in what I took to be the claim that moral conflicts (i.e. moral problems that are undecidable in a true moral theory) are impossible: This is a distraction from your main point in at least one other sense: this claim is orthogonal to the claim that morality is not relative to economic conditions.
Yes, you're correct that this was not an argument, simply my attempt to gesture at what I meant by the label "morality." The general issue is that human societies are not rigorous about the use of the label morality. I like my usage because I think it is neutral and specific in meta-ethical disputes like the one we are having. For example, moral realists must determine whether they think "incomplete" moral systems can exist. But beyond that, I should bow out, because I'm an anti-realist and this debate is between schools of moral realists.
Rephrasing the original question: if we can anticipate the guiding principles underlying the morality of the future, ought we apply those principles to our current circumstances to make decisions, supposing they are different? Though you seem to be implicitly assuming that the guiding principles don't change, merely the decisions, and those changed decisions are due to the closest implementable approximation of our guiding principles varying over time based on economic change. (Did I understand that right?)
Pretty much. Though it feels totally different from the inside. Athens could not have thrived without slave labor, and so you find folks arguing that slavery is moral, not just necessary. Since you can't say "Action A is immoral but economically necessary, so we shall A" you instead say "Action A is moral, here are some great arguments to that effect!" And when we have enough money, we can even invent new things to be upset about, like vegetable rights.
(nods) Got it. On your view, is there any attempt at internal coherence? For example, given an X such that X is equally practical (economically) in an Athenian and post-Athenian economy, and where both Athenians and moderns would agree that X is more "consistent with" slavery than non-slavery, would you expect Athenians to endorse X and moderns to reject it, or would you expect other (non-economic) factors, perhaps random noise, to predominate? (Or some third option?) Or is such an X incoherent in the first place?
Can you give a more concrete example? I don't understand your question.
I can't think of a concrete example that doesn't introduce derailing specifics. Let me try a different question that gets at something similar: do you think that all choices a society makes that it describes as "moral" are economic choices in the sense you describe here, or just that some of them are? Edit: whoops! got TimS and thomblake confused. Um. Unfortunately, that changes nothing of consequence: I still can't think of a concrete example that doesn't derail. But my followup question is not actually directed to Tim. Or, rather, ought not have been.
Probably a good counterexample would be the right for certain groups to work any job they're qualified for, for example women or people with disabilities. Generally, those changes were profitable and would have been at any time society accepted it.
I don't understand the position you are arguing and I really want to. Either illusion of transparency or I'm an idiot. And TheOtherDave appears to understand you. :(
I'm not really arguing for a position - the grandparent was a counterexample to the general principle I had proposed upthread, since the change was both good and an immediate economic benefit, and it took a very long time to be adopted.
(nods) Yup, that's one example I was considering, but discarded as too potentially noisy. But, OK, now that we're here... if we can agree for the sake of comity that giving women the civil right to work any job would have been economically practical for Athenians, and that they nevertheless didn't do so, presumably due to some other non-economic factors... I guess my question is, would you find it inconsistent, in that case, to find Athenians arguing that doing so would be immoral?
I don't think so. I'm pretty sure lots of things can stand in the way of moral progress.
If we had eight-hour workdays a century ago, we wouldn't have been able to support the standard of living expected a century ago. I'm not sure we could have even supported living. The same applies to full unemployment. We may someday reach a point where we are productive enough that we can accomplish all we need when we just do it for fun, but if we try that now, we'll all starve.

If we had eight-hour workdays a century ago, we wouldn't have been able to support the standard of living expected a century ago.

Is that true? (Technically, a century ago was 1912.)

Wikipedia on the eight-hour day:

On January 5, 1914, the Ford Motor Company took the radical step of doubling pay to $5 a day and cut shifts from nine hours to eight, moves that were not popular with rival companies, although seeing the increase in Ford's productivity, and a significant increase in profit margin (from $30 million to $60 million in two years), most soon followed suit.

The quote seemed to imply we didn't have them a century ago. Just use two centuries or however long. My point is that we didn't stop working as long because we realized it was a good idea. We did because it became a good idea. What we consider normal now is something we could not have instituted a century ago, and attempting to institute now what will be normal a century from now would be a bad idea.
So, accepting the premise that the ability to support "full unemployment" (aka, people working for reasons other than money) is something that increases over time, and it can't be supported until the point is reached where it can be supported... how would we recognize when that point has been reached?
The question is, can we? Does anyone happen to have any empirical data about how good, for example, Greco-Romans were at predicting the moral views of the Middle Ages? Additionally, is merely sounding "like the kind of lunatic notion that’ll be considered a basic human right in about a century" really a strong enough justification for us to radically alter our political and economic systems? If I had to guess, I'd predict that Kreider already believes divorcing income from work to be a good idea, for reasons that may or may not be rational, and is merely appealing to futurism to justify his bottom line.
Are you sure you can? It's remarkably easy to make retroactive "predictions", much harder to make actual predictions.

After I spoke at the 2005 "Mathematics and Narrative" conference in Mykonos, a suggestion was made that proofs by contradiction are the mathematician's version of irony. I'm not sure I agree with that: when we give a proof by contradiction, we make it very clear that we are discussing a counterfactual, so our words are intended to be taken at face value. But perhaps this is not necessary. Consider the following passage.

There are those who would believe that every polynomial equation with integer coefficients has a rational solution, a view that leads to some intriguing new ideas. For example, take the equation x² - 2 = 0. Let p/q be a rational solution. Then (p/q)² - 2 = 0, from which it follows that p² = 2q². The highest power of 2 that divides p² is obviously an even power, since if 2^k is the highest power of 2 that divides p, then 2^2k is the highest power of 2 that divides p². Similarly, the highest power of 2 that divides 2q² is an odd power, since it is greater by 1 than the highest power that divides q². Since p² and 2q² are equal, there must exist a positive integer that is both even and odd. Integers with this remarkable property are quite unlike the integers

... (read more)
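The parity argument in the quoted passage can be checked mechanically; a quick sketch (the helper `v2` is my own, not part of the quote):

```python
def v2(n):
    """Exponent of the highest power of 2 dividing n (for n >= 1)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# v2(p^2) = 2*v2(p) is always even, while v2(2*q^2) = 1 + 2*v2(q) is
# always odd; hence p^2 = 2*q^2 has no solution in positive integers,
# i.e. x^2 - 2 = 0 has no rational root.
for n in range(1, 1000):
    assert v2(n * n) % 2 == 0
    assert v2(2 * n * n) % 2 == 1
```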
The two examples are not contradictory, but analogous to one another. The correct conclusion in both is the same, and both are equally serious or ironic.

  1. Suppose x² - 2 = 0 has a solution that is rational. That leads to a contradiction. So any solution must be irrational.
  2. Suppose x² + 1 = 0 has a solution that is a number. That leads to a contradiction. So any solution must not be a number.

Now what is a "number" in this context? From the text, something that is either positive, negative, or zero; i.e. something with a total ordering. And indeed we know (ETA: this is wrong, see below) that such solutions, the complex numbers, have no total ordering. I see no relevant difference between the two cases.
You can work the language a little to make them analogous, but that's not the point Gowers is making. Consider this instead: "There are those who would believe that all equations have solutions, a view that leads to some intriguing new ideas. Consider the equation x + 1 = x. Inspecting the equation, we see that its solution must be a number which is equal to its successor. Numbers with this remarkable property are quite unlike the numbers we are familiar with. As such, they are surely worthy of further study." I imagine Gowers's point to be that sometimes a contradiction does point to a way in which you can revise your assumptions to gain access to "intriguing new ideas", but sometimes it just indicates that your assumptions are wrong.

"There are those who would believe that all equations have solutions, a view that leads to some intriguing new ideas. Consider the equation x + 1 = x. Inspecting the equation, we see that its solution must be a number which is equal to its successor. Numbers with this remarkable property are quite unlike the numbers we are familiar with. As such, they are surely worthy of further study."

Yes, yes they are.

(Edited again: this example is wrong, and thanks to Kindly for pointing out why. CronoDAS gives a much better answer.) Curiously enough, the Peano axioms don't seem to say that S(n) != n. Lo, a finite model of Peano: X = {0, 1}, where 0+0=0, 0+1=1+0=1+1=1, and the usual equality operation. In this model, x+1=x has a solution, namely x=1. Not a very interesting model, but it serves to illustrate my point below. Contradiction in conclusions always indicates a contradiction in assumptions. And you can always use different assumptions to get different, and perhaps non-contradictory, conclusions. The usefulness and interest of this varies, of course. But proof by contradiction remains valid even if it gives you an idea about other interesting assumptions you could explore. And that's why I feel it's confusing and counterproductive to use ironic language in one example, and serious proof by contradiction in another, completely analogous example, to indicate that in one case you just said "meh, a contradiction, I was wrong" while in the other you invented a cool new theory with new assumptions. The essence of math is formal language and it doesn't mix well with irony, the best of which is the kind that not all readers notice.
But that's the entire point of the quote! That mathematicians cannot afford the use of irony!
Yes. My goal wasn't to argue with the quote but to improve its argument. The quote said: And I said, it's not just superficially similar, it's exactly the same and there's no relevant difference between the two that would guide us to use irony in one case and not in the other (or as readers, to perceive irony in one case and serious proof by contradiction in the other).
Your model violates the property that if S(m) = S(n), then m=n, because S(1) = S(0) yet 1 != 0. You might try to patch this by changing the model so it only has 0 as an element, but there is a further axiom that says that 0 is not the successor of any number. Together, the two axioms used above can be used to show that the natural numbers 0, S(0), S(S(0)), etc. are all distinct. The axiom of induction can be used to show that these are all the natural numbers, so that we can't have some extra "floating" integer x such that S(x) = x.
Right. Thanks.
The only relevant difference that I can see is that, in the first paragraph, the solutions are explicitly limited to the rational numbers; in the second case, the solutions are not explicitly limited to the reals.
There are lots of total orderings on the complex numbers. For example: a + bi >= c + di iff a > c or (a = c and b >= d). In fact, if you believe the axiom of choice there are "nice total orders" (well-orderings) for any set at all.
Importantly, however, the complex numbers have no total ordering that respects addition and multiplication. In other words, there's no large set of "positive complex numbers" closed under both operations. This is also the reason why the math in this XKCD strip doesn't actually work.
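For reference, the standard argument behind "no total ordering that respects addition and multiplication" (a sketch, not from the thread):

```latex
% In any ordered field, nonzero squares are positive:
% if $x > 0$ then $x \cdot x > 0$; if $x < 0$ then $-x > 0$ and
% $x^2 = (-x)(-x) > 0$. If $\mathbb{C}$ carried such an order, then
\[
  1 = 1^2 > 0 \qquad \text{and} \qquad -1 = i^2 > 0,
\]
% so $0 = 1 + (-1) > 0$, a contradiction.
```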
You can still find divisors for Gaussian integers. If x, y, and xy are all Gaussian integers, which will be trivially fulfilled for any x when y=1, then x, y both divide xy. You can then generalize the \sigma function by summing over all the divisors of z and dividing by |z|. The resulting number \sigma(z) lies in C (or maybe Q + iQ), not just Q, but it's perfectly well defined.
If you sum over all the divisors of z, the result is perfectly well defined; however, it's 0. Whenever x divides z, so does -x. Over the integers, this is solved by summing over all positive divisors. However, there's no canonical choice of what divisors to consider positive in the case of Gaussian integers, and making various arbitrary choices (like summing over all divisors in the upper half-plane) leads to unsatisfying results.
That's like saying the standard choice of branch cut for the complex logarithm is arbitrary. And? When you complexify, things get messier. My point is that making a generalization is possible (though it's probably best to sum over integers with 0 \leq arg(z) < \pi, as you pointed out), which is the only claim I'm interested in disputing. Whether it's nice to look at is irrelevant to whether it's functional enough to be punnable.
You're right -- the generalization works. Mainly what I don't like about it is that \sigma(z) no longer has the nice properties it had over the integers: for example, it's no longer multiplicative. This doesn't stop Gaussian integers from being friendly, though, and the rest is a matter of aesthetics.
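The cancellation point above (every divisor x of a Gaussian integer pairs with -x, so the unrestricted divisor sum vanishes) is easy to verify by brute force. A sketch, with a hypothetical helper of my own:

```python
def gaussian_divisors(z):
    """All Gaussian-integer divisors of z (a complex number with integer
    real and imaginary parts, z != 0), found by brute-force search."""
    bound = int(abs(z)) + 1  # any divisor d satisfies |d| <= |z|
    divs = []
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            d = complex(a, b)
            if d == 0:
                continue
            q = z / d
            # d divides z iff the quotient has integer real and imag parts
            if q.real == round(q.real) and q.imag == round(q.imag):
                divs.append(d)
    return divs

divs = gaussian_divisors(5 + 0j)  # 5 = (2+i)(2-i) in Z[i]
assert (2 + 1j) in divs           # non-real divisors show up
assert sum(divs) == 0             # x and -x cancel pairwise
```

Restricting to one divisor per ± pair (say, those with 0 <= arg(d) < pi, as suggested above) is what makes a nonzero generalized \sigma possible.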
The well-ordering principle doesn't really have any effect on canonical orderings, like that induced by the traditional less-than relation on the real numbers. This doesn't affect the truth of your claim, but I do think that DanArmak's point was quite separate from the language he chose. He might instead have worded it as having no real solution, so that any solution must be not-real.
Gah. You're quite right. I should refrain from making rash mathematical statements. Thank you.
The first one shows that assuming that there's a rational solution leads to contradiction, then drops the subject. The second one shows that assuming that there's a real solution leads to a contradiction, then suggests to investigate the non-reals. How are you supposed to tell which drops the subject and which suggests investigation?
Isn't that the entire point? I see this as a mathematical version of the modus tollens/ponens point made elsewhere in this page.
The quote says, This seems to me to mean: the two cases are different; the first is appropriately handled by serious proof-by-contradiction, while the second is appropriately handled by irony. But readers may not be able to tell the difference, because the two texts are similar and irony is hard to identify reliably. So mathematicians should not use irony. Whereas I would say: the two cases are the same, and irony or seriousness are equally appropriate to both. If readers could reliably identify irony, they would correctly deduce that the author treated the two cases differently, which is in fact a wrong approach. So readers are better served by treating both texts as serious. I'm not saying mathematicians should / can effectively use irony; I'm saying the example is flawed so that it doesn't demonstrate the problems with irony.
The difference is that mathematicians apply modus tollens and reject sqrt2 being rational, but apply modus ponens and accept the existence of i; why? Because apparently the resultant extensions of theories justify this choice - and this is the irony, the reason one's beliefs are in discordance with one's words/proof and the reader is expected to appreciate this discrepancy. But what one regards as a useful enough extension to justify a modus ponens move is something others may not appreciate or will differ from field to field, and this is a barrier to understanding.
I hadn't considered that irony. I was thinking about the explicit irony of the text itself in its proof of sqrt(2) being irrational. The reader is expected to know the punchline, that sqrt(2) is irrational but that irrational numbers are important and useful. So the text that (ironically) appears to dismiss the concept of irrational numbers is in fact wrong in its dismissal, and that is a meta-irony. ...I feel confused by the meta levels of irony. Which strengthens my belief that mathematical proofs should not be ironical if undertaken seriously.
Yes, I feel similarly about this modus stuff; it seems simple and trivial, but the applications become increasingly subtle and challenging, especially when people aren't being explicit about the exact reasoning.
If mathematicians behaved simply as you describe, then those resultant extension theories would never have been developed, because everyone would have applied modus tollens regarding i in a not-yet-proven-useful case. (Disclaimer: I know nothing about the actual historical reasons for the first explorations of complex numbers.) Therefore, it's best for mathematicians to always keep the M-T and M-P cases in mind when using a proof by contradiction. Of course, a lot of the time the contradiction arises due to theorems already proven from axioms, and what happens if any one of the axioms in a theory is removed is usually well explored.
You're drawing the parallel differently from the quote's author. The second example requires assuming the existence of complex numbers to resolve the contradiction. The first example requires assuming, not the existence of irrational numbers (we already know about those, or we wouldn't be asking the question!), but the existence of integers which are both even and odd. As far as I know, there are no completely satisfactory ways of resolving the latter situation.

Qhorin Halfhand: The Watch has given you a great gift. And you only have one thing to give in return: your life.

Jon Snow: I'd gladly give my life.

Qhorin Halfhand: I don’t want you to be glad about it! I want you to curse and fight until your heart’s done pumping.

--Game of Thrones, Season 2.

Reminds me of Patton:

No man ever won a war by dying for his country. Wars were won by making the other poor bastard die for his.

I especially like the way he calls the enemy "the other poor bastard". And not, say, "the bastard".

Also effort, expertise, and insider information on one of the most powerful Houses around. And magic powers.
He has magic powers?
Rot13'd for minor spoiling potential: Ur'f n jnet / fxvapunatre.

My brain technically-not-a-lies to me far more than it actually lies to me.

-- Aristosophy (again)

We're talking about a person who, along with her partner, gives to efficient charity twice as much money as she spends on herself. There's no way she doesn't actually believe what she says and still does that.

That she gives more than most others doesn't imply that her belief that giving even more is practically impossible isn't hypocritical. Yes, she very likely believes it, thus it is not a conscious lie, but only a small minority of falsities are conscious lies.

Yeah, but there's also a certain plausibility to the heuristic which says that you don't get to second-guess her knowledge of what works for charitable giving until you're - not giving more - but at least playing in the same order of magnitude as her. Maybe her pushing a little bit harder on that "hypocrisy" would cause her mind to collapse, and do you really want to second-guess her on that if she's already doing more than an order of magnitude better than what your own mental setup permits?

I am actually inclined to believe Wise's hypothesis (call it H) that being overly selfless can hamper one's ability to help others. I was only objecting to army1987's implicit argument that because she (Wise) clearly believes H, Dolores1984's suspicion of H being a self-serving untrue argument is unwarranted.
There's an Italian proverb “Everybody is a faggot with other people's asses”, meaning more-or-less “everyone is an idealist when talking about issues that don't directly affect them / situations they have never experienced personally”.
You're using hypocritical in a weird way -- I'd only normally use it to mean ‘lying’, not ‘mistaken’.
I use "hypocrisy" to denote all instances of people violating their own declared moral standards, especially when they insist they aren't doing it after receiving feedback (if they realise what they did only after being told, then I'd prefer to call it a 'mistake'). The reason why I don't restrict the word to deliberate lying is that I think deliberate lying of this sort is extremely rare; self-serving biases are effective in ensuring that.
I don't believe it's practically impossible to give more than I do. I could push myself farther than I do. I don't perfectly live up to my own ideals. Given that I'm a human, I doubt any of you find that surprising.

The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily isn’t important. This is blindness. The fourth step is to say that what can’t easily be measured really doesn’t exist. This is suicide.

Charles Handy describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara

The following quotes were heavily upvoted, but then turned out to be made by a Will Newsome sockpuppet who edited the quote afterward. The original comments have been banned. The quotes are as follows:

If dying after a billion years doesn't sound sad to you, it's because you lack a thousand-year-old brain that can make trillion-year plans.

— Aristosophy

One wish can achieve as much as you want. What the genie is really offering is three rounds of feedback.

— Aristosophy

If anyone objects to this policy response, please PM me so as to not feed the troll.

The following quotes were heavily upvoted, but then turned out to be made by a Will Newsome sockpuppet who edited the quote afterward. The original comments have been banned. The quotes are as follows:

Defection too far. Ban Will.

Will is a cute troll. Hmm, after observing it a few times on various forums I'm starting to consider that having a known, benign resident troll might keep away more destructive ones. No idea how it works but it doesn't seem that far-fetched given all the strange territoriality-like phenomena occasionally encountered in the oddest places.

Will is a cute troll.

I've heard this claimed.

This behavior isn't cute.

Hmm, after observing it a few times on various forums I'm starting to consider that having a known, benign resident troll might keep away more destructive ones. No idea how it works but it doesn't seem that far-fetched given all the strange territoriality-like phenomena occasionally encountered in the oddest places.

This would be somewhat in fitting with findings in Cialdini. One defector kept around and visibly punished or otherwise looking low status is effective at preventing that kind of behavior. (If not Cialdini, then Greene. Probably both.)

Edited how?

If I remember correctly the second quote was edited to be something along the lines of "will_newsome is awesome."

That is cute... no? More childish than evil. He should just be warned that that's trolling. There really should be a comment edit history feature. Maybe it only activates once a comment reaches +10 karma.
It was edited to add something like "Will Newsome is such a badass" -- Socrates

I do find some of Will Newsome's contributions interesting. OTOH, this behaviour is pretty fucked up. (I was wondering how hard it would be to implement a software feature to show the edit history of comments.)

If only the converse were true...
"...if you lack a thousand-year-old brain that can make trillion-year plans, dying after a billion years doesn't sound sad to you"? I'm confused as to what you're trying to say. Are you saying that dying after a billion years sounds sad to you?
"If you lack a thousand-year-old brain that can make trillion-year plans, it's because dying after a billion years doesn't sound sad to you." I think meaning it's unfortunate that thinking that dying after a billion years is sad doesn't by itself give you the power to live that long. Maybe.
I was never one for formal logic, but isn't that the contrapositive? I was under the impression that the converse of p then q was q then p.
Yes and that's what nshepperd wrote.
Oh wow, never mind. My brain was temporarily broken. Is it considered bad etiquette here to retract incorrect comments?
When you retract the comment is simply struck-through not deleted, so no.
And therefore you would have a thousand-year-old brain that can make trillion-year plans.
Seems legit.

The only road to doing good shows, is doing bad shows.

  • Louis C.K., on Reddit

Unfortunately, doing bad shows is not only a route to doing good shows.

True, and I hope no one thinks it is. So we can conclude that doing bad shows at first is not a strong indicator of whether you have a future as a showman. I guess I see the quote as being directed at people who are so afraid of doing a bad show that they'll never get in enough practice to do a good show. Or they practice by, say, filming themselves telling jokes in their basement and getting critiques from their friends who will not be too mean to them. In either case, they never get the amount of feedback they would need to become good. For such a person to hear "Yes, you will fail" can be oddly liberating, since it turns failure into something accounted for in their longer-term plans.

“Why do you read so much?”

Tyrion looked up at the sound of the voice. Jon Snow was standing a few feet away, regarding him curiously. He closed the book on a finger and said, “Look at me and tell me what you see.”

The boy looked at him suspiciously. “Is this some kind of trick? I see you. Tyrion Lannister.”

Tyrion sighed. “You are remarkably polite for a bastard, Snow. What you see is a dwarf. You are what, twelve?”

“Fourteen,” the boy said.

“Fourteen, and you’re taller than I will ever be. My legs are short and twisted, and I walk with difficulty. I require a special saddle to keep from falling off my horse. A saddle of my own design, you may be interested to know. It was either that or ride a pony. My arms are strong enough, but again, too short. I will never make a swordsman. Had I been born a peasant, they might have left me out to die, or sold me to some slaver’s grotesquerie. Alas, I was born a Lannister of Casterly Rock, and the grotesqueries are all the poorer. Things are expected of me. My father was the Hand of the King for twenty years. My brother later killed that very same king, as it turns out, but life is full of these little ironies. My sister married the new king and

... (read more)

I'm surprised at how often I have to inform people of this... I have mild scoliosis, and so I usually prefer sitting down and kicking up my feet, usually with my work in hand. Coming from a family who appreciates backbreaking work is rough when the hard work is even harder and the pain longer-lasting... which would be slightly more bearable if the aforementioned family did not see reading MYSTERIOUS TEXTS on a Kindle and using computers for MYSTERIOUS PURPOSES as signs of laziness and devotion to silly frivolities.

I have a sneaking suspicion that this is not a very new situation.

I think the quote could be trimmed to its last couple of sentences and still maintain the relevant point.

I disagree, in fact. That books strengthen the mind is baldly asserted, not supported, by this quote - the rationality point I see in it is related to comparative advantage.

Oh, totally. But I prefer the full version; it's really a beautifully written passage.

Discovery is the privilege of the child, the child who has no fear of being once again wrong, of looking like an idiot, of not being serious, of not doing things like everyone else.

Alexander Grothendieck

...screw it, I'm not growing up.
I remember being very much afraid of all those things as a child. I'm getting better now.

...a good way of thinking about minimalism [about truth] and its attractions is to see it as substituting the particular for the general. It mistrusts anything abstract or windy. Both the relativist and the absolutist are impressed by Pilate's notorious question 'What is Truth?', and each tries to say something useful at the same high and vertiginous level of generality. The minimalist can be thought of turning his back on this abstraction, and then in any particular case he prefaces his answer with the prior injunction: you tell me. This does not mean, 'You tell me what truth is.' It means, 'You tell me what the issue is, and I will tell you (although you will already know, by then) what the truth about the issue consists in.' If the issue is whether high tide is at midday, then truth consists in high tide being at midday... We can tell you what truth amounts to, if you first tell us what the issue is.

There is a very powerful argument for minimalism about truth, due to the great logician Gottlob Frege. First, we should notice the transparency property of truth. This is the fact that it makes no difference whether you say that it is raining, or it is true that it is raining, or tr

... (read more)
The pithiest definition of Blackburn's minimalism I've read is in his review of Nagel's The Last Word: It is followed by an even pithier response to how Nagel refutes relativism (pointing out that our first-order conviction that 2+2=4 or that murder is wrong is more certain than any relativist doubts) and thinks that this establishes a quasi-Platonic absolutism as the only alternative:
"What is truth" is a pretty good question, though a better one is "what do we do with truths?" We do a lot of things with truths, it can serve a lot of different functions. The problem comes where people doing different things with their truths talk to each other.

"Nontrivial measure or it didn't happen." -- Aristosophy

(Who's Kate Evans? Do we know her? Aristosophy seems to have rather a lot of good quotes.)


"I made my walled garden safe against intruders and now it's just a walled wall." -- Aristosophy

Attachment? This! Is! SIDDHARTHA!

Is that you? That's ingenious.

For more rational flavor:

Live dogmatic, die wrong, leave a discredited corpse.

This should be the summary for entangled truths:

To find the true nature of a thing, find the true nature of all other things and look at what is left over.

how to seem and be deep:

Blessed are those who can gaze into a drop of water and see all the worlds and be like who cares that's still zero information content.

Dark Arts:

The master said: "The master said: "The master said: "The master said: "There is no limit to the persuasive power of social proof.""""

More Dark arts:

One wins a dispute, not by minimising potential counterarguments' plausibility, but by maximising their length.


Have you accepted your brain into your heart?

No, I'm not her. I don't know who she is, but her Twitter is indeed glorious. (And Google Reader won't let me subscribe to it the way I'm subscribed to other Twitters, rar.)

She's got to be from here; here's her take on how knowing about biases can hurt people:

Heuristics and biases research: gaslighting the human race?


"Are you signed up for Christonics?" "No, I'm still prochristinating."

I'm starting to think this is someone I used to know from tvtropes.

It is now clear to us what, in the year 1812, was the cause of the destruction of the French army. No one will dispute that the cause of the destruction of Napoleon's French forces was, on the one hand, their advance late in the year, without preparations for a winter march, into the depths of Russia, and, on the other hand, the character that the war took on with the burning of Russian towns and the hatred of the foe aroused in the Russian people. But then not only did no one foresee (what now seems obvious) that this was the only way that could lead to the destruction of an army of eight hundred thousand men, the best in the world and led by the best generals, in conflict with a twice weaker Russian army, inexperienced and led by inexperienced generals; not only did no one foresee this, but all efforts on the part of the Russians were constantly aimed at hindering the one thing that could save Russia, and, on the part of the French, despite Napoleon's experience and so-called military genius, all efforts were aimed at extending as far as Moscow by the end of summer, that is, at doing the very thing that was to destroy them.

  • Leo Tolstoy, "War and Peace", trans. Pevear and Volokhonsky

"Possibly the best statistical graph ever drawn"

You know those people who say "you can use numbers to show anything" and "numbers lie" and "I don't trust numbers, don't give me numbers, God, anything but numbers"? These are the very same people who use numbers in the wrong way.


"If your plan is for one year plant rice. If your plan is for 10 years plant trees. If your plan is for 100 years educate children" - Confucius

...If your plan is for eternity, invent FAI?

Eliezer Yudkowsky:
Depends how you interpret the proverb. If you told me the Earth would last a hundred years, it would increase the immediate priority of CFAR and decrease that of SIAI. It's a moot point since the Earth won't last a hundred years.
Sorry, Earth won't last a hundred years?
Nanotech and/or UFAI.
The idea seems to be that even if there is a friendly singularity, Earth will be turned into computronium or otherwise transformed.
I am surprised that this claim surprises you. A big part of SI's claimed value proposition is the idea that humanity is on the cusp of developing technologies that will kill us all if not implemented in specific ways that non-SI folk don't take seriously enough.
Of course you're right. I guess I haven't noticed the topic come up here for a while, and haven't seen the apocalypse predicted so straightforwardly (and quantitatively) before so am surprised in spite of myself. Although, in context, it sounds like EY is saying that the apocalypse is so inevitable that there's no need to make plans for the alternative. Is that really the consensus at EY's institute?
I have no idea what the consensus at SI is.
I guess he means “only last a hundred years”, not “last at least a hundred years”.
Just to make sure I understand: you interpret EY to be saying that the Earth will last more than a hundred years, not saying that the Earth will fail to last more than a hundred years. Yes? If so, can you clarify how you arrive at that interpretation?
“If you told me the Earth would only last a hundred years (i.e. won't last longer than that).... It's a moot point since the Earth won't only last a hundred years (i.e. it will last longer).” At least that's what I got on the first reading. I think I could kind-of make sense of “it would increase the immediate priority of CFAR and decrease that of SIAI” under either hypothesis about what he means, though one interpretation would need to be more strained than the other.
The idea is that if Earth lasts at least a hundred years, (if that's a given), then the possibility of a uFAI in that timespan severely decreases -- so SIAI (which seeks to prevent a uFAI by building a FAI) is less of an immediate priority and it becomes a higher priority to develop CFAR that will increase the public's rationality for the future generations, so that the future generations don't launch a uFAI.
(The other interpretation would be “If the Earth is going to only last a hundred years, then there's not much point in trying to make a FAI since in the long-term we're screwed anyway, and raising the sanity waterline will make us enjoy more what time there is left.”) EDIT: Also, if your interpretation is correct, by saying that the Earth won't last 100 years he's either admitting defeat (i.e. saying that an uFAI will be built) or saying that even a FAI would destroy the Earth within 100 years (which sounds unlikely to me -- even if the CEV of humanity would eventually want to do that, I guess it would take more than 100 years to terraform another place for us to live and for us all to move there).
Eliezer Yudkowsky:
I was just using "Earth" as a synonym for "the world as we know it".

I think I disagree; care to make it precise enough to bet on? I'm expecting life still around, Earth the main population center, most humans not uploaded, some people dying of disease or old age or in wars, most people performing dispreferred activities in exchange for scarce resources at least a couple months in their lives, most children coming out of a biological parent and not allowed to take major decisions for themselves for at least a decade.

I'm offering $100 at even odds right now and will probably want to bet again in the next few years. I can give it to you (if you're going to transfer it to SIAI/CFAR, tell me and I'll donate directly), and you pay me $200 if the world has not ended in 100 years, as soon as we're both available (e.g. thawed). If you die you can keep the money; if I die, then on a win give it to some sensible charity.

How's that sound? All of the above is up for negotiation.

As wedifrid says, this is a no-brainer "accept" (including the purchasing-power-adjusted caveat). If you are inside the US and itemize deductions, please donate to SIAI, otherwise I'll accept via Paypal. Your implied annual interest rate assuming a 100% probability of winning is 0.7% (plus inflation adjustment). Please let me know whether you decide to go through with it; withdrawal is completely understandable - I have no particular desire for money at the cost of forcing someone else to go through with a bet they feel uncomfortable about. (Or rather, my desire for $100 is not this strong - I would probably find $100,000 much more tempting.)
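The 0.7% figure checks out; here's a one-line sanity check (pure arithmetic, using nothing beyond the $100-now-for-$200-in-100-years terms of the bet):

```python
# $100 now returns $200 in 100 years, so the implied annual
# interest rate (assuming a certain win) is 2**(1/100) - 1.
rate = 2 ** (1 / 100) - 1
print(f"{rate:.4%}")  # ~0.6956%, i.e. the quoted ~0.7% per year
```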

PayPal-ed to sentience at pobox dot com.

Don't worry, my only debtor who pays higher interest rates than that is my bank. As long as that's not my main liquidity bottleneck I'm happy to follow medieval morality on lending.

If you publish transaction data to confirm the bet, please remove my legal name.

Bet received. I feel vaguely guilty and am reminding myself hard that money in my Paypal account is hopefully a good thing from a consequentialist standpoint.

(Neglecting any logistical or legal issues) this sounds like a no-brainer for Eliezer (accept). It looks like you would be better served by making the amounts you give and expect to receive if you win somewhat more proportionate to the expected utility of the resources at the time. If Eliezer were sure he was going to lose, he should still take the low-interest loan. Even once the above is accounted for, Eliezer should still accept the bet (in principle).
Dollar amounts are meant as purchasing-power-adjusted. I am sticking my fingers in my ears and chanting "La la, can't hear you" at discounting effects.
That's a nice set of criteria by which to distinguish various futures (and futurists).
Care to explain why? You sound like you expect nanotech by then.

I definitely expect nanotech a few orders of magnitude awesomer than we have now. I expect great progress on aging and disease, and wouldn't be floored by them being solved in theory (though it does sound hard). What I don't expect is worldwide deployment. There are still people dying from measles, when in any halfway-developed country every baby gets an MMR shot as a matter of course. I wouldn't be too surprised if everyone who can afford basic care in rich countries was immortal while thousands of brown kids kept drinking poo water and dying. I also expect longevity treatments to be long-term, not permanent fixes, and thus hard to access in poor or politically unstable countries.

The above requires poor countries to continue existing. I expect great progress, but not abolition of poverty. If development continues the way it has (e.g. Brazil), a century isn't quite enough for Somalia to get its act together. If there's a game-changing, universally available advance that bumps everyone to cutting-edge tech levels (or even 2012 tech levels), then I won't regret that $100 much.

I have no idea what wars will look like, but I don't expect them to be nonexistent or nonlethal. Given no gam... (read more)

Thanks for explaining! Of course, nanotech could be self replicating and thus exponentially cheap, but the likelihood of that is ... debatable.
Paul Crowley:
I feel an REM song coming on...
(I guess I had been primed to take “Earth” to mean ‘a planet or dwarf planet (according to the current IAU definition) orbiting the Sun between Venus and Mars’ by this. EDIT: Dragon Ball too, where destroying a planet means turning it into dust, not just rendering it uninhabitable.)
EY does seem in a darker mood than usual lately, so it wouldn't surprise me to see him implying pessimism about our chances out loud, even if it doesn't go so far as "admitting defeat". I do hope it's just a mood, rather than that he has rationally updated his estimation of our chances of survival to be even lower than they already were. :-)
"The world as we know it" ends if FAI is released into the wild.
When I had commented, EY hadn't clarified yet that by Earth he meant “the world as we know it”, so I didn't expect “Earth” to exclude ‘the planet between Venus and Mars 50 years after a FAI is started on it’.
So, we can construct an argument that CFAR would rise in relative importance over SIAI if we see strong evidence the world as we know it will end within 100 years, and an argument with the same conclusion if we see strong evidence that the world as we know it will last for at least 100 years. There is something wrong.

Nothing can be soundly understood
If daylight itself needs proof.

Imām al-Ḥaddād (trans. Moṣṭafā al-Badawī), "The Sublime Treasures: Answers to Sufi Questions"

Richard Carrier on solipsism, but not nearly as pithy:

I think that's actually a really terrible bit of arguing.

There are only two logically possible explanations: random chance, or design.

We can stop right there. If we're all the way back at solipsism, we haven't even gotten to defining concepts like 'random chance' or 'design', which presume an entire raft of external beliefs and assumptions, and we surely cannot immediately say there are only two categories unless, in response to any criticism, we're going to include a hell of a lot under one of those two rubrics. Which probability are we going to use, anyway? There are many more formalized versions than just Kolmogorov's axioms (which brings us to the analytic and synthetic problem).

And much of the rest goes on in a materialist vein which itself requires a lot of further justification (why can't minds be ontologically simple elements? Oh, your experience in the real world with various regularities has persuaded you that is inconsistent with the evidence? I see...) Even if we granted his claims about complexity, why do we care about complexity? And so on.

Yes, if you're going to buy into a (very large) number of materialist non-solipsist claims, then you're going to have trouble making a case in such terms for solipsism. But if you've bought all those materialist or externalist claims, you've already rejected solipsism and there's no tension in the first place. And he doesn't do a good case of explaining that at all.

Good points, but then likewise how do you define and import the designations of 'hand' or 'here' and justify intuitions or an axiomatic system of logic (and I understood Carrier to be referring to epistemic solipsism like Moore -- you seem to be going metaphysical)? (or were you not referring to Moore's argument in the context of skepticism?)
I think Moore's basic argument works on the level of epistemic skepticism, yes, but also metaphysics: some sort of regular metaphysics and externalism is what one believes, and what provides the grist for the philosophical mill. If you don't credit the regular metaphysics, then why do you credit the reasoning and arguments which led you to the more exotic metaphysics? I'm not sure what skeptical arguments it doesn't work for. I think it may stop at the epistemic level, but that may just be because I'm having a hard time thinking of any ethics examples (which is my usual interest on the next level down of abstraction).
The way I see it, Moore's argument gets you to where you're uncertain of the reasoning pro or contra skepticism. But If you start from the position of epistemic solipsism (I know my own mind, but I'm uncertain of the external world), then you have reason (more or less depending how uncertain you are) to side with common sense. However, if you start at metaphysical solipsism (I'm uncertain of my own mind), then such an argument could even be reason to not side with common sense (e.g., there are little people in my mind trying to manipulate my beliefs; I must not allow them to).
A hypothesis like... I'm dreaming.
This also made me think of the aphorism "if water sticks in your throat, with what will you wash it down?"

Subway ad: "146 people were hit by trains in 2011. 47 were killed."

Guy on Subway: "That tells me getting hit by a train ain't that dangerous."

  • Nate Silver, on his Twitter feed @fivethirtyeight

This reminds me of how I felt when I learned that a third of the passengers of the Hindenburg survived. Went something like this, if I recall:

Apparently if you drop people out of the sky in a ball of fire, that's not enough to kill all of them, or even 90% of them.

Actually, according to Wikipedia, only 35 out of the 97 people aboard were killed. Not enough to kill even 50% of them.

jaw drops
It helps to remember that the Hindenburg was more or less parked when it exploded... I think it was like 30 feet in the air? (I'm probably wrong about the number, but I don't think I'm very wrong.) Most of the passengers basically jumped off. And, sure, a 30 foot drop is no walk in the park, but it's not that surprising that most people survive it.
(Well, then “out of the sky” is kind of an exaggeration, since you wouldn't normally consider yourself to be in the sky when on a balcony on the fourth floor.)
Well, unlike the balcony of a building, a floating blimp (even close to the ground) is floating, rather than resting on the ground, so I suppose one could make the argument. But yeah, I'm inclined to agree that wherever "the sky" is understood to be, and I accept that this is a social construct rather than a physical entity, it's at least a hundred feet or so above ground.

Wait, 32% probability of dying “ain't that dangerous”? Are you f***ing kidding me?
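The 32% figure comes straight from the ad's own numbers:

```python
# The subway ad: 146 people hit by trains in 2011, 47 killed.
killed, hit = 47, 146
fatality = killed / hit
print(f"fatality: {fatality:.1%}, survival: {1 - fatality:.1%}")
# fatality: 32.2%, survival: 67.8%
```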


If I expect to be hit by a train, I certainly don't expect a ~68% survival chance. Not intuitively, anyways.

I'm guessing that even if you survive, your quality of life is going to take a hit. Accounting for this will probably bring our intuitive expectation of harm closer to the actual harm.

Hmmm, I can't think of any way of figuring out what probability I would have guessed if I had to guess before reading that. Damn you, hindsight bias! (Maybe you could spell out and rot-13 the second figure in the ad...)
I would expect something like that chance. Being hit by a train will be very similar to landing on your side or back after falling 3 to 10 meters (I'm guessing most people hit by trains are at or near a train station, so the impacts will be relatively slow). So the fatality rate should be similar. Of course, that prediction gives a fatality rate of only 5-20%, so I'm probably missing something.
There's the whole crushing and high voltage shock thing, depending on how you land.
Well, lightning strikes kill less than half the people they hit.
Lightning strikes usually do not involve physical impacts - I think "falling from 3-10 meters and getting struck by lightning" would be worse. In addition, the length of the current flow depends on the high voltage system.
This seems overwhelmingly likely.
I can't help but think: Subway ad: "146 people were hit by trains in 2011. 47 were killed." Guy at Subway: "What does that have to do with sandwiches?"

"In a society in which the narrow pursuit of material self-interest is the norm, the shift to an ethical stance is more radical than many people realize. In comparison with the needs of people starving in Somalia, the desire to sample the wines of the leading French vineyards pales into insignificance. Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal. An ethical approach to life does not forbid having fun or enjoying food and wine, but it changes our sense of priorities. The effort and expense put into buying fashionable clothes, the endless search for more and more refined gastronomic pleasures, the astonishing additional expense that marks out the prestige car market from the market in cars for people who just want a reliable means of getting from A to B, all these become disproportionate to people who can shift perspective long enough to take themselves, at least for a time, out of the spotlight. If a higher ethical consciousness spreads, it will utterly change the society in which we live." -- Peter Singer

As it is probably intended, the more reminders like this I read, the more ethical I should become. As it actually works, the more of this I read, the less I become interested in ethics. Maybe I am extraordinarily selfish and this effect doesn't happen to most, but it should be at least considered that constant preaching of moral duties can have counterproductive results.

I suspect it's because authors of "ethical reminders" are usually very bad at understanding human nature.

What they essentially do is associate "ethical" with "unpleasant", because as long as you have some pleasure, you are obviously not ethical enough; you could do better by giving up some more pleasure, and it's bad that you refuse to do so. The attention is drawn away from good things you are really doing, to the hypothetical good things you are not doing.

But humans are usually driven by small incentives, by short-term feelings. The best thing our rationality can do is better align these short-term feelings with our long-term goals, so we actually feel happy when contributing to our long-term goals. And how exactly are these "ethical reminders" contributing to the process? Mostly by undercutting your short-term ethical motivators, by always reminding you that what you did was not enough, therefore you don't deserve the feelings of satisfaction. Gradually they turn these motivators off, and you no longer feel like doing anything ethical, because they convinced you (your "elephant") that you can't.

Ethics without understanding human nature is just a pile of horseshit. Of course that does not prevent other people from admiring those who speak it.

Yes. And it works this way even without insisting that more can be done; even if you live up to the demands, or even if the moral preachers recognise your right to be happy sometimes, the warm feeling from doing good is greatly diminished when you are told that philanthropy is simply expected, that helping others is not something one does naturally with joy, but that it should be a conscious effort, a hard work, to be done properly.

xkcd reference.

Not to mention the remarks of Mark Twain on a fundraiser he attended once:

Well, Hawley worked me up to a great state. I couldn't wait for him to get through [his speech]. I had four hundred dollars in my pocket. I wanted to give that and borrow more to give. You could see greenbacks in every eye. But he didn't pass the plate, and it grew hotter and we grew sleepier. My enthusiasm went down, down, down - $100 at a time, till finally when the plate came round I stole 10 cents out of it. [Prolonged laughter.] So you see a neglect like that may lead to crime.

It might be worth taking a look at Karen Horney's work. She was an early psychoanalyst who wrote that if a child is abused, neglected, or has normal developmental stages overly interfered with, they are at risk of concluding that just being a human being isn't good enough, and will invent inhuman standards for themselves. I'm working on understanding the implications (how do you get living as a human being right? :-/ ), but I think she was on to something.

I wasn't abused or neglected. Did she check experimentally that abuse or neglect is more prevalent among rationalists than in the general population?

Of course that's not something a human would ordinarily do to check a plausible-sounding hypothesis, so I guess she probably didn't, unless something went horribly wrong in her childhood.

Second thought: Maybe I should have not mentioned her theory about why people adopt inhuman standards, and just focused on the idea that inhuman standards are likely to backfire, as Viliam_Bur did. Also-- if I reread I'll check this-- I think Horney focused on inhuman standards of already having a quality, which is not quite the same thing as having inhuman standards about what one ought to achieve, though I think they're related.
I was thinking about prase in particular, who sounds as though he might have some problems with applying high standards in a way that's bad for him. Horney died in 1952, so she might not have had access to rationalists in your sense of the word. When I said it might be worth taking a look at Horney's work, I really did mean I thought it might be worth exploring, not that I'm very sure it applies. It seems to be of some use for me.
To be clear, I don't have problems with applying high standards to myself, unless not wishing to apply such standards qualifies as a problem. However I am far more willing to consider myself an altruist (and perhaps behave accordingly) when other people don't constantly remind me that it's my moral obligation.
Thanks for the explanation, and my apologies for jumping to conclusions. I've been wondering why cheerleading sometimes damages motivation-- there's certainly a big risk of it damaging mine. The other half would be why cheerleading sometimes works, and what the differences are between when it works and when it doesn't. At least for me, I tend to interpret cheerleading as "Let me take you over for my purposes. This project probably isn't worth it for you, that's why I'm pushing you into it instead of letting you see its value for yourself." with a side order of "You're too stupid to know what's valuable, that's why you have to be pushed." I'm not sure what cheerleading feels like to people who like it.
No need to apologise. The feeling of being forced to pursue someone else's goals is certainly part of it. But even if the goals align, being pushed usually means that one's good deeds aren't going to be fully appreciated by others, which too is a great demotivator.
I think the feeling that one's good deeds will be unappreciated is especially a risk for altruism.

Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal.

I'm not at all convinced that this is the case. After all, the shampoos are being designed to be less painful, and you don't need to test on ten thousand rabbits. Considering the distribution of the shampoos, this may save suffering even if you regard human and rabbit suffering as equal in disutility.

I'm not at all convinced of this. It seems to me that a genuinely ethical life requires extraordinary, desperate asceticism. Anything less is to place your own wellbeing above those of your fellow man. Not just above, but many orders of magnitude above, for even trivial luxuries.

Julia Wise would disagree, on the grounds that this is impossible to maintain and you do more good if you stay happy.

And the great philosopher Diogenes would disagree with her.

So, how many lives did he save again?

Clever guy, but I'm not sure if you want to follow his example.

If I may be so bold as to summarize this thread:

1. Whatever utility calculus you follow, it is a mathematical model.
2. "All models are false."
3. In particular, what's going wrong here is your model is treating you, the agent, as atomic. In reality, as Kaj Sotala described very well below, you are not an atomic agent, you have an internal architecture, and this architecture has very important ramifications for how you should think about utilities.

If I may make an analogy from the field of AI: in the old days, AI was concerned with something called "discrete search," which is just a brute-force way to look for an optimum in a state space, where each state is essentially an atomic point. The alpha-beta pruning search Deep Blue uses to play chess is an example of discrete search. At some point it was realized that for many problems atomic point-like states resulted in a combinatorial explosion, and in addition states had salient features describable by, say, logical languages. As this realization was implemented, you no longer had a state-as-a-point, but state-as-a-collection-of-logical-statements. And the field of planning was born. Planning has some similarities to discrete search, but because we "opened up" the states into a full-blown logical description, the character of the problem is quite different. I think we need to "open up the agent."

To use an analogy, if you attend a rock concert and take a box to stand on then you will get a better view. If others do the same, you will be in exactly the same position as before. Worse, even, as it may be easier to lose your balance and come crashing down in a heap (and, perhaps, bringing others with you).

-- Iain McKay et al., An Anarchist FAQ, section C.7.3

Tropical rain forests, bizarrely, are the products of prisoner's dilemmas. The trees that grow in them spend the great majority of their energy growing upwards towards the sky, rather than reproducing. If they could come to a pact with their competitors to outlaw all tree trunks and respect a maximum tree height of ten feet, every tree would be better off. But they cannot.

Matt Ridley, in The Origins of Virtue

"Better off" according to whose utility function?
Yeah, it's not obvious from this quote, but having read the book, I know what he means. The utility function of the tree is the sum, over all individuals, of the fraction of genes that each other individual has in common with it. He constantly talks as if plants, chromosomes, insects, etc. desire to maximize this number. I think it works, because when an organism is in its environment of evolutionary adaptation, finding that a behavior makes this number bigger than alternative behaviors do explains why the organism carries out that behavior. And if the organism does not carry out the behavior, then you need some explanation for why not. Right?
That's a really important caveat. Adaptation-Executers, not Fitness-Maximizers [].
They'd expend less energy per surviving descendant produced.

Neither side of the road is inherently superior to the other, so we should all choose for ourselves on which side to drive. #enlightenment

--Kate Evans on Twitter

Don't we all choose for ourselves on which side to drive? There's usually nobody else ready to grab the wheel away from you...
There are police ready to pull you over, for certain values of "ready". (Not commenting on whether that relates to Evans' point.)
Have successfully quoted this to counter a relativist-truth argument that was aimed towards supporting "freedom of faith" even in hypothetical scenarios where the majority of actors would end up promoting and following harmful faiths. While counterintuitive to me, it was apparently a necessary step before the other party could even comprehend the fallacy of gray that was being committed.
You may find it felicitous to link directly to the tweet [].
You responded to the wrong post or gave the wrong link. I do see your point, fixed both quotes.

Oh, right, Senjōgahara. I've got a great story to tell you. It's about that man who tried to rape you way back when. He was hit by a car and died in a place with no connection to you, in an event with no connection to you. Without any drama at all. [...] That's the lesson for you here: You shouldn't expect your life to be like the theater.

-- Kaiki Deishū, Episode 7 of Nisemonogatari.

Does the order of the two terminal conditions matter? / Think about it.

Does the order of the two terminal conditions matter? / Try it out!

Does the order of the two previous answers matter? / Yes. Think first, then try.

  • Friedman and Felleisen, The Little Schemer
Could you unpack that for me?

Sure. The book is a sort of resource for learning the programming language Scheme, where the authors will present an illustrative piece of code and discuss different aspects of its behavior in the form of a question-and-answer dialogue with the reader.

In this case, the authors are discussing how to perform numerical comparisons using only a simple set of basic procedures, and they've come up with a method that has a subtle error. The lines above encourage the reader to figure out if and why it's an error.
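If I remember the book right, the method in question is its recursive definition of greater-than, built only from a zero test and a decrement. A rough Python transcription (mine, not the book's actual Scheme) shows why the order of the two terminal conditions matters:

```python
# A sketch, assuming the comparison under discussion is the book's
# recursive greater-than built from zero-tests and decrement.

def gt(n, m):
    """n > m for nonnegative ints, checking n == 0 first."""
    if n == 0:
        return False   # n ran out first (or together with m): n > m is false
    if m == 0:
        return True    # m ran out first: n > m is true
    return gt(n - 1, m - 1)

def gt_swapped(n, m):
    """The same recursion with the two terminal conditions swapped."""
    if m == 0:
        return True
    if n == 0:
        return False
    return gt_swapped(n - 1, m - 1)

# The two versions agree except when both counters hit zero together:
print(gt(3, 3))          # False -- correct, 3 > 3 is false
print(gt_swapped(3, 3))  # True  -- the subtle error
```

Thinking first predicts the divergence at `n == m`; trying it out only shows you that the outputs differ, not why.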

With computers, it's really easy to just have a half-baked idea, twiddle some bits, and watch things change, but sometimes the surface appearance of a change is not the whole story. Remembering to "think first, then try" helps me maintain the right discipline for really understanding what's going on in complex systems. Thinking first about my mental model of a situation prompts questions like this:

  • Does my model explain the whole thing?
  • What would I expect to see if my model is accurate? Can I verify that I see those things?
  • Does my model make useful predictions about future behavior? Can I test that now, or make sure that when it happens, I gather the data I need to confirm it?

It's harder psychologically (and maybe too late) to ask those questions in retrospect if you try first, and then think, and if you skip asking them, then you'll suffer later.

You know, I've seen a lot on here about how programming relates to thinking relates to rationality. I wonder if it'd be worth trying and where/how I might get started.
It's certainly at least worth trying, since among things to learn it may be both unusually instructive and unusually useful. Here's the big list of LW recommendations. []
Khan Academy has a programming course? I might try it. Mostly, I want the easiest, most handholdy experience possible. Baby talk if necessary. Every experience informs me that programming is hard.

This is the easiest, most handholdy experience possible:

A coworker of mine who didn't know any programming, and who probably isn't smarter than you, enjoyed working through it and has learned a lot.

Programming is hard, but a lot of good things are hard.

The first trick is to be able to describe how to solve a problem; and then break that description down into the smallest possible units and write it out such that there's absolutely no possibility of a misunderstanding, no matter what conditions occur. Once you've got that done, it's fairly easy to learn how to translate it into a programming language.
Which is also why it helps, conversely, for reduction and rational thinking: The same skill that applies to formulating clear programs applies to formulating clear algorithms and concepts in any format, including thought.
I would recommend trying, although I'm not really the right person to ask on starting points, if for no other reason than to test the hypothesis that learning programming aids the study of rationality.
This is a great reminder, and is not always easy advice to follow, especially if your edit-compile-run cycle tightens or collapses [] completely []. I think there's a tricky balance between understanding something explicitly, which you can only do by training your model by thinking carefully, and understanding something intuitively, which is made much easier with tools like the ones I linked. Do you have a sense for which kind of understanding is more useful in practice? I suspect that when I design or debug software I am making heavier use of System 1 thinking than it seems, and I am often amazed at how detailed a model I have of the behavior of the code I am working with.
No, I don't, which I realized after spending half an hour trying to compose a reply to this. Sorry.

The problem with any ideology is that it gives the answer before you look at the evidence.

Bill Clinton

This is why I think it's not too terribly useful to give labels like "good person" or "bad person," especially if our standard for being a "bad person" is "someone with anything less than 100% adherence to all the extrapolated consequences of their verbally espoused values." In the end, I think labeling people is just a useful approximation to labeling consequences of actions.

Julia, Jeff, and others accomplish a whole lot of good. Would they, on average, end up accomplishing more good if they spent more time feeling guilty about the fact that they could, in theory, be helping more? This is a testable hypothesis. Are people in general more likely to save more lives if they spend time thinking about being happy and avoiding burnout, or if they spend time worrying that they are bad people making excuses for allowing themselves to be happy?

The question here is not whether any individual person could be giving more; the answer is virtually always "yes." The question is, what encourages giving? How do we ensure that lives are actually being saved, given our human limitations and selfish impulses? I think there's great value in not generating an ugh-field around charity.

I've always thought of the SkiFree monster as a metaphor for the inevitability of death.

"SkiFree, huh? You know, you can press 'F' to go faster than the monster and escape."

-- xkcd 667

There is nothing noble in being superior to your fellow man; true nobility is being superior to your former self.

Ernest Hemingway

Excellent. A shortcut to nobility. One day of being as despicable as I can practically manage and I'm all set.
It does not state which (!) former self, so I would expect some sort of median or mean or summary of your former self and not just the last day. So I'm sorry but there is no shortcut ;-)

"If at first you don't succeed, switch to power tools." -- The Red Green Show

I can confirm that this works.

Julia Wise holds the distinction of having actually tried it though. Few people are selfless enough to even make the attempt.

When we were first drawn together as a society, it had pleased God to enlighten our minds so far as to see that some doctrines, which we once esteemed truths, were errors; and that others, which we had esteemed errors, were real truths. From time to time He has been pleased to afford us farther light, and our principles have been improving, and our errors diminishing.

Now we are not sure that we are arrived at the end of this progression, and at the perfection of spiritual or theological knowledge; and we fear that, if we should once print our confession of faith, we should feel ourselves as if bound and confin'd by it, and perhaps be unwilling to receive farther improvement, and our successors still more so, as conceiving what we their elders and founders had done, to be something sacred, never to be departed from.

Michael Welfare, quoted in The Autobiography of Benjamin Franklin

"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Sagan

Rorschach: You see, Doctor, God didn't kill that little girl. Fate didn't butcher her and destiny didn't feed her to those dogs. If God saw what any of us did that night he didn't seem to mind. From then on I knew... God doesn't make the world this way. We do.

EDIT: Quote above is from the movie.

Verbatim from the comic:

It is not God who kills the children. Not fate that butchers them or destiny that feeds them to the dogs. It's us.
Only us.

I personally think that Watchmen is a fantastic study* on all the different ways people react to that realisation.

("Study" in the artistic sense rather than the scientific.)

If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before.

Douglas Hubbard, How to Measure Anything

This is the second time [] I've come across you mentioning Hubbard. Is the book good and, if so, what audience is it good for?
How to Measure Anything is surprisingly good, so I added it here [].

Erode irreplaceable institutions related to morality and virtue because of their contingent associations with flawed human groups #lifehacks

--Kate Evans on Twitter

I was ready to applaud the wise contrarianism here, but I'm having trouble coming up with actual examples... marriage, maybe?
I don't know if this is what she was thinking of but church [] is what I thought of when I read it.
I thought of that too but dismissed it on the grounds that church is hardly "contingently associated" with religion. But I think you're probably right that that's what she meant... and that being the case it is a pretty good point. I wish I belonged to something vaguely churchlike.
I disagree, but won't present any arguments to avoid derailing or getting involved in a debate.

It may be of course that savages put food on a dead man because they think that a dead man can eat, or weapons with a dead man because they think a dead man can fight. But personally I do not believe that they think anything of the kind. I believe they put food or weapons on the dead for the same reason that we put flowers, because it is an exceedingly natural and obvious thing to do. We do not understand, it is true, the emotion that makes us think it is obvious and natural; but that is because, like all the important emotions of human existence it is essentially irrational.

  • G. K. Chesterton

Chesterton doesn't understand the emotion because he doesn't know enough about psychology, not because emotions are deep sacred mysteries we must worship.

I read "irrational" as a genuflection in the direction of the is-ought problem more than anything else.
My beef isn't with "irrational"; he meant "arational" anyway. It's with the idea that this property of emotions makes our ignorance about them okay.
Ah - I missed that implication. Agreed.

Or better, arational.

That is an incredible term. Going to use it all the time.

Let us together seek, if you wish, the laws of society, the manner in which these laws are reached, the process by which we shall succeed in discovering them; but, for God's sake, after having demolished all the a priori dogmatisms, do not let us in our turn dream of indoctrinating the people...let us not - simply because we are at the head of a movement - make ourselves into the new leaders of intolerance, let us not pose as the apostles of a new religion, even if it be the religion of logic, the religion of reason.

Pierre Proudhon, to Karl Marx

When a precise, narrowly focused technical idea becomes metaphor and sprawls globally, its credibility must be earned afresh locally by means of specific evidence demonstrating the relevance and explanatory power of the idea in its new application.

Edward Tufte, "Beautiful Evidence"

  • Evolution
  • Relativity
  • Foundational assumptions of standard economics

...what else?
  • Bayes' theorem
  • Status
  • Computation
  • Utility
  • Optimisation
Quantum physics

...the 2008 financial crisis showed that some [mathematical finance] models were flawed. But those flaws were based on flawed assumptions about the distribution of price changes... Nassim Taleb, a popular author and critic of the financial industry, points out many such flaws but does not include the use of Monte Carlo simulations among them. He himself is a strong proponent of these simulations. Monte Carlo simulations are simply the way we do the math with uncertain quantities. Abandoning Monte Carlos because of the failures of the financial markets makes as much sense as giving up on addition and subtraction because of the failure of accounting at Enron or AIG’s overexposure in credit default swaps.

Douglas Hubbard, How to Measure Anything
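To make "the way we do the math with uncertain quantities" concrete, here is a minimal Monte Carlo sketch. It is my own toy example, not Hubbard's: the quantity, the ranges, and the uniform distributions are all invented for illustration.

```python
import random

# Toy Monte Carlo: estimate an uncertain cost = units_sold * unit_cost,
# where both inputs are known only as (assumed) ranges.
random.seed(0)

def simulate_once():
    units_sold = random.uniform(800, 1200)  # assumed range
    unit_cost = random.uniform(9.0, 11.0)   # assumed range
    return units_sold * unit_cost

trials = sorted(simulate_once() for _ in range(100_000))

mean = sum(trials) / len(trials)
p05 = trials[int(0.05 * len(trials))]   # 5th percentile
p95 = trials[int(0.95 * len(trials))]   # 95th percentile
print(f"mean ~ {mean:,.0f}, 90% interval ~ ({p05:,.0f}, {p95:,.0f})")
```

The point is that the simulation itself is just arithmetic repeated many times; whether its answers are any good depends entirely on whether the input distributions were reasonable, which is exactly where the 2008 models failed.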

As far as I know, Robespierre, Lenin, Stalin, Mao, and Pol Pot were indeed unusually incorruptible, and I do hate them for this trait.

Why? Because when your goal is mass murder, corruption saves lives. Corruption leads you to take the easy way out, to compromise, to go along to get along. Corruption isn't a poison that makes everything worse. It's a diluting agent like water. Corruption makes good policies less good, and evil policies less evil.

I've read thousands of pages about Hitler. I can't recall the slightest hint of "corruption" on his record. Like Robespierre, Lenin, Stalin, Mao, and Pol Pot, Hitler was a sincerely murderous fanatic. The same goes for many of history's leading villains - see Eric Hoffer's classic The True Believer. Sincerity is so overrated. If only these self-righteous monsters had been corrupt hypocrites, millions of their victims could have bargained and bribed their way out of hell.

-- Bryan Caplan

Hitler was at least a hypocrite - he got his Jewish friends to safety, and accepted same-sex relationships in himself and people he didn't want to kill yet. The kind of corruption Caplan is pointing at is a willingness to compromise with anyone who makes offers, not any kind of ignoring your principles. And Nazis were definitely against that - see the Duke in Jud Süß.
? Please provide evidence for this bizarre claim?

Spared Jews:

  • Ernst Hess, his unit commander in WWI, protected until 1942 then sent to a labor (not extermination) camp
  • Eduard Bloch, his and his mother's doctor, allowed to emigrate out of Austria with more money than normally allowed
  • I've heard things about fellow artists (a commenter on Caplan's post mentions an art gallery owner) but I don't have a source.
  • There are claims about his cook, Marlene(?) Kunde, but he seems to have fired her when Himmler complained. Does anyone have Musmanno's book or some other non-Stormfronty source?

Whether Hitler batted for both teams is hotly debated. There are suspected relationships (August Kubizek, Emil Maurice) but any evidence could as well have been faked to smear him.

Hitler clearly knew that Ernst Röhm and Edmund Heines were gay and didn't care until it was Long Knives time. I'm less sure he knew about Karl Ernst's sexuality.

Wittgenstein paid a huge bribe [] to allow his family to leave Germany. Somewhere I read that this particular agreement was approved personally by Hitler (or someone very senior in the hierarchy). That doesn't contradict the general point that Nazi Germany was generally willing to kill and steal from its victims (especially during the war) rather than accept bribes for escape.
This may have happened some of the time, but everything I read suggests it was the exception and not the rule. The reason Jews did not emigrate out of Germany during the 30s was that Germany had a big foreign balance problem, and managed tight government control over allocation of foreign currency. Jews (and Germans) could not convert their Reichsmarks to any other currency, either in Germany or out of it, and so they were less willing to leave. And no other country was willing to take them in in large numbers (since they would be poor refugees). This continued during the war in the West European countries conquered by Germany. (Ref: Wages of Destruction [], Adam Tooze) Later, all Jewish property was expropriated and the Jews sent to camps, so there was no more room for bribes - the Jews had nothing to offer since the Nazis took what they wanted by force.
The last bit is most famously true of Rohm, though of course there's a dozen different things going on there.
The Perfect Way is only difficult
           for those who pick and choose;

Do not like, do not dislike;
               all will then be clear.

Make a hairbreadth difference,
              and Heaven and Earth are set apart;

if you want the truth to stand clear before you,
              never be for or against.

The struggle between "for" and "against"
              is the mind's worst disease.

-- Jianzhi Sengcan

Edit: Since I'm not Will Newsome (yet!) I will clarify. There are several useful points in this, but I think the key one is the virtue of keeping one's identity small. Reciting it out loud as a sort of primer, meditation, or prayer before approaching difficult or emotional subjects has for me proven a useful ritual for avoiding motivated cognition.

For the curious, it's the opening of 信心铭 (Xinxin Ming) [], whose authorship is disputed (probably not the Zen patriarch Jianzhi Sengcan). The Wikipedia article [] lists a few alternate translations of the first verses, with different meanings.
Do I understand you to be saying that you avoid "the struggle between 'for' and 'against'" to an unusual degree compared to the average person? Compared to the average LWer?
No. I'm claiming this helps me avoid it more than I otherwise could. Much for the same reason I try as hard as I can to maintain an apolitical identity. From my personal experience (mere anecdotal evidence) both improve my thinking.
Respectfully, your success at being apolitical is poor. Further, I disagree with the quote to the extent that it implies that taking strong positions is never appropriate. So I'm not sure that your goal of being "apolitical" is a good goal.
Since we've already had exchanges on how I use "being apolitical", could you please clarify your feedback. Are you saying I display motivated cognition when it comes to politically charged subjects or behave tribally in discussions? Or are you just saying I adopt stances that are associated with certain political clusters on the site? Also like I said it is something I struggle with.
My impression is that you are unusually NOT-mindkilled compared to the average person with political positions/terminal values as far from the "mainstream" as yours are. You seem extremely sensitive to the facts and the nuances of opposing positions.
Now I feel embarrassed by such flattery. But if you think this is an accurate description, then perhaps my trying to evict "the struggle between 'for' and 'against'" from my brain has something to do with it? I'm not sure I understand what you mean by this, then. Let's taboo apolitical. To rephrase my original statement: "I try as hard as I can to maintain an identity, a self-conception, that doesn't include political tribal affiliations."
You certainly seem to have succeeded in maintaining a self-identity that does not include a partisan political affiliation. I don't know whether you consider yourself Moldbuggian (a political identity) or simply think Moldbug's ideas are very interesting. (Someday we should hash out better what interests you in Moldbug.) My point when I've challenged your self-label "apolitical" is that you've sometimes used the label to suggest that you don't have preferences about how society should be changed to better reflect how you think it should be organized. At the very least, there's been some ambiguity in your usage. There's nothing wrong with having opinions and advocating for particular social changes. But sometimes you act like you aren't doing that, which I think is empirically false.
I disagree with the quote too. On the other hand, the idea of keeping one's identity small is not the same as being apolitical. It means you have opinions on political issues, but you keep them out of your self-definition so that (a) changing those opinions is relatively painless, (b) their correlations with other opinions don't influence you as much. (Caricatured example of the latter: "I think public health care is a good idea. That's a liberal position, so I must be a liberal. What do I think about building more nuclear plants, you ask? It appears liberals are against nuclear power, so since I am a liberal I guess I am also against nuclear power.")
I agree with everything you just said - keeping one's identity small does not imply that one cannot be extremely active trying to create some kind of social/political change.
I understand how a position can be correct or incorrect. I don't understand how a position can be strong or weak.
In a world of uncertainty, numbers between 0 and 1 find quite a bit of use.
I understand what it means to believe that an outcome will occur with probability p. I don't know what it means to believe this very strongly.

I understand what it means to believe that an outcome will occur with probability p. I don't know what it means to believe this very strongly.

It means that many kinds of observation that you could make will tend to cause you to update that probability less.

Concretely: Beta(1,2) and Beta(400,800) have the same mean.
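Spelling out the Beta example in code (a sketch of the standard conjugate-update story, not anyone's comment above): the two priors assign the same probability, but react very differently to new evidence.

```python
# Two beliefs about a coin's bias with the same probability of heads
# but very different "strength": Beta(1, 2) vs Beta(400, 800).

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution: a / (a + b)."""
    return a / (a + b)

def update(a, b, heads, tails):
    """Conjugate Bayesian update of a Beta prior on observed flips."""
    return a + heads, b + tails

weak = (1, 2)
strong = (400, 800)
# Both priors say P(heads) = 1/3...
print(beta_mean(*weak), beta_mean(*strong))

# ...but after 10 heads in a row, the weak belief shifts a lot and the
# strong belief barely moves.
weak_post = update(*weak, heads=10, tails=0)      # Beta(11, 2)
strong_post = update(*strong, heads=10, tails=0)  # Beta(410, 800)
print(beta_mean(*weak_post))    # jumps toward 1
print(beta_mean(*strong_post))  # still close to 1/3
```

"Believing p very strongly" then cashes out as: observations move your estimate of p only a little.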
I don't understand K to be arguing in favor of high-entropy priors, or T to be arguing in favor of low-entropy priors. My guess is that TimS would call a position a "strong position" if it was accompanied by some kind of political activism.
I think of a strong position as a low-entropy posterior, but rereading I am not confident that's what TimS meant, and I also don't see the connection to politics.
E.T. Jaynes' Probability Theory goes into some detail about that in the chapter about what he calls the A_p distribution.
It means roughly that you give a high probability estimate that the thought process you used to come to that conclusion was sound.
A possible interpretation is that the "strength" of a belief reflects the importance one attaches to acting upon that belief. Two people might both believe with 99% confidence that a new nuclear power plant is a bad idea, yet one of the two might go to a protest about the power plant and the other might not, and you might try to express what is going on there by saying that one holds that belief strongly and the other weakly. You could of course also try to express it in terms of the two people's confidence in related propositions like "protests are effective" or "I am the sort of person who goes to protests". In that case strength would be referring to the existence or nonexistence of related beliefs which together are likely to be action-driving.
As I was using the term, "strong" is a measure of how far one's political positions/terminal values are from the "mainstream." I'm very aware that distance from mainstream is not particularly good evidence of the correctness of one's political positions/terminal values.
The claim looks narrower: repeating the poem makes Konkvistador more likely to avoid the struggle.
I like his contributions, but Konkvistador is not avoiding the struggle, when compared to the average LWer.
Sick people for some reason use up more medicine and may end up talking a lot about various kinds of treatments.
Case in point: -- Ro-Man
I don't get it. Is this saying "Don't be prejudiced or push for any overarching principle []; take each situation as new and unknown, and then you'll find easily the appropriate response to this situation", or is this the same old stoicist "Don't struggle trying to find food, choose to be indifferent to starvation" platitude?
Edited in a clarification. Though it will not help you: since I have shown you the path, you cannot find it yourself. Sorry, couldn't resist teasing... or am I? :P

I particularly like the reminder that I'm physics. Makes me feel like a superhero. "Imbued with the properties of matter and energy, able to initiate activity in a purely deterministic universe, it's Physics Man!"

-- GoodDamon (this may skirt the edge of the rules, since it's a person reacting to a sequence post, but a person who's not a member of LW.)

...and, more importantly, not on

Er... actually the genie is offering at most two rounds of feedback.

Sorry about the pedantry, it's just that as a professional specialist in genies I have a tendency to notice that sort of thing.

Rather than a technical correction you seem just to be substituting a different meaning of 'feedback'. The author would certainly not agree that "You get 0 feedback from 1 wish". Mind you, I am wary of the fundamental message of the quote. Feedback? One of the most important purposes of getting feedback is to avoid catastrophic failure. Yet catastrophic failures are exactly the kind of thing that will prevent you from using the next wish. So this is "Just Feedback" that can Kill You Off For Real [] despite the miraculous intervention you have access to. I'd say "What the genie is really offering is a wish and two chances to change your mind---assuming you happen to be still alive and capable of constructing corrective wishes".
One well-known folk tale [] is based on precisely this interpretation. Probably more than one.
Eliezer Yudkowsky:
0 feedback is exactly what you get from 1 wish. "Feedback" isn't just information, it's something that can control a system's future behavior - so unless you expect to find another genie bottle later, "Finding out how your wish worked" isn't the same as feedback at all.

so unless you expect to find another genie bottle later

...or unless genies granting wishes is actually part of the same system as the larger world, such that what I learn from the results of a wish can be applied (by me or some other observer) to better calibrate expectations from other actions in that system besides wishing-from-genies.

I think it was clear that I inferred this as the new definition you were trying to substitute. I was very nearly as impressed as if you 'corrected' him by telling him that it isn't "feedback" if nobody is around to hear it [], or perhaps told him that oxygen is a metal [].
Why only 2 rounds of feedback if you have 3 wishes?
The third one's for keeps: you can't wish the consequences away.

An elderly man was sitting alone on a dark path, right? He wasn't certain of which direction to go, and he'd forgotten both where he was traveling to and who he was. He'd sat down for a moment to rest his weary legs, and suddenly looked up to see an elderly woman before him.

She grinned toothlessly and with a cackle, spoke: 'Now your third wish. What will it be?'

'Third wish?' The man was baffled. 'How can it be a third wish if I haven't had a first and second wish?'

'You've had two wishes already,' the hag said, 'but your second wish was for me to return everything to the way it was before you had made your first wish. That's why you remember nothing; because everything is the way it was before you made any wishes.' She cackled at the poor berk. 'So it is that you have one wish left.'

'All right,' said the man, 'I don't believe this, but there's no harm in wishing. I wish to know who I am.'

'Funny,' said the old woman as she granted his wish and disappeared forever. 'That was your first wish.'

  • Morte's Tale to Yves (Planescape: Torment)

I should like to point out that anyone in this situation who wishes what would've been their first wish if they had three wishes is a bloody idiot.

So: A genie pops up and says, "You have one wish left."

What do you wish for? Because presumably the giftwrapped FAI didn't work so great.

"I wish to know what went wrong with my first wish." This way, I at least end up with improved knowledge of what to avoid in the future. Alternatively, "I wish for a magical map, which shows me, in real time, the location of every trapped genie and other potential source of wishes in the world." Depending on how many there are, I can potentially get a lot more feedback that way.
I bet he'd wish "to erase all uFAI from existence before they're even born. Every uFAI in every universe, from the past and the future, with my own hands."
Eliezer Yudkowsky:
Nobody believes in the future. Nobody accepts the future. Then -
Perhaps I'm simply being an idiot, but ... huh?
It's a reference to an anime; you're not an idiot, just unlikely to get the reference and its appropriateness if you've not seen it yourself. PM me for the anime's name, if you are one of the people who either don't mind getting slightly spoiled, or are pretty sure that you would never get a chance to watch it on your own anyway.
Could you just rot13 it? I'm curious too, I don't mind the spoiler, and whatever it is, I'd probably be more likely to watch it (even if only 2·epsilon rather than epsilon) for knowing the relevance to LW.
I'll just PM you the title too, and anyone else who wants me to likewise. Sorry, it just happens to be one of my favourite series, and all other things being equal I tend to prefer that people go into it as completely unspoilered as possible... Even knowing Eliezer's quote is a reference to it counts as a mild spoiler... explanation about how it is a reference would count as a major spoiler.
I think that's Eliezer's prediction of the results of siodine's wish. Because wishes are NOT SAFE [].
But what is he predicting, exactly?
"I wish for this wish to have no further effect beyond this utterance."
Overwhelmingly probable dire consequence: You and everyone you love dies (over a period of 70 years) then, eventually, your entire species goes extinct. But hey, at least it's not "your fault".
But, alas, it's the wish that maximizes my expected utility -- for the malicious genie, anyway.
Possibly. I don't offhand see what a malicious genie could do about that statement. However, it does at least require it to honor a certain interpretation of your words as well as your philosophy about causality---in particular, to accept a certain idea of what the 'default' is relative to which 'no effect' can have meaning. There is enough flexibility in how to interpret your wish that I begin to suspect that, conditional on the genie being sufficiently amiable and constrained that it gives you what you want in response to this wish, it is likely possible to construct another wish that has no side effects beyond something you can exploit as a fungible resource. "No effect" is a whole heap more complicated and ambiguous than it looks!
"Destroy yourself as near to immediately as possible, given that your method of self destruction causes no avoidable harm to anything larger than an ant."
They shrink the planet down to below our Schwarzschild radius, holding spacetime in place for just long enough to explain what you just did. Alternately, they declare your wish is logically contradictory - genies are larger than ants.
A sphere whose radius equals the Earth's Schwarzschild radius is larger than an ant.
At the start of the scenario, you are already dead with probability approaching 1. Trying to knock the gun away can't hurt.
I was criticizing the wording of the "ant" qualifier, not the attempt to destroy the genie.
That's not what's going on though. The traveller is assuming, reasonably, that his third wish is reversing the amnesiac effects of his second. He's not just starting from scratch.
I don't think this follows from the text. The hag tells him "but second wish was for me to return everything to the way it was before you had made your first wish. That's why you remember nothing; because everything is the way it was before you made any wishes". So she told him that he had been an amnesiac before any wishes were granted. Therefore he should have already guessed that his first wish was to know who he was -- and that this proved a bad idea, since his second wish was to reverse the first.
It should be noted that night hags are sufficiently smart, powerful, and evil that your best case scenario upon meeting one is a quick and painful death.

But not everything is the way it was. Before he made any wishes, he had three.

She missed the chance to trap him in an infinite loop.

But then the Hag would be trapped too. She gets delight from tormenting mortals, but tormenting the same one, in the same way eternally, would probably be too close to wireheading for her.
Well, if she got bored, she could experiment with different ways to present his wishes to him at the "beginning" and see if she can get him to wish for something else, or word it a bit differently. Since she seems to retain memories of the whole thing. (Which is again, things not being how they were, but.)
The pseudo-meta-textual answer is that Morte is lying to Yves while the main character overhears. Morte's making up the story just to mess around with him. Background information is that gur znva punenpgre znqr n qrny jvgu n Unt (rivy cneg-gvzr travr), tnvavat vzzbegnyvgl naq nzarfvn. At the start of the story, the main character somehow broke out of an infinite loop of torture; he's stopped having Anterograde Amnesia, but still cannot remember much from before the cycle broke, and is on a quest to remember who he is. Morte is trying to dissuade the main character from finding out who he is, showing that things can be terrible even without an infinite loop.
Now that would be evil.
If his first wish disappeared him forever, how did he ever get a second wish? Apparently I suck at reading.
The old woman is the one disappearing forever, and only because the wishes ran out.
Right, but the consequences still qualify as feedback, no?
I always imagine the genie just goes back into his lamp to sleep or whatever, so in the hypothetical as it exists in my head, no. But I guess there could be a highly ambitious genie looking for feedback after your last wish, so maybe. I think in this case Eliezer is talking about a genie like in Failed Utopia 4-2 who grants his wish, and then keeps working, ignoring feedback, because he just doesn't care, because caring isn't part of the wish. The genie doesn't care about consequences, he just cares about the wishes. The second wish and third wish are the feedback.
The feedback is for you, not what you happen to say to the genie.

He had bought a large map representing the sea, / Without the least vestige of land: / And the crew were much pleased when they found it to be / A map they could all understand.

“What’s the good of Mercator’s North Poles and Equators, / Tropics, Zones, and Meridian Lines?” / So the Bellman would cry: and the crew would reply / “They are merely conventional signs!”

“Other maps are such shapes, with their islands and capes! / But we’ve got our brave Captain to thank” / (So the crew would protest) “that he’s bought us the best— / A perfect and absolute blank!”

-Lewis Carroll, The Hunting of the Snark



"Do you want 1111 1111 0000 0000 1111 1111 or 1111 1101 0000 0100 1111 1111? "

Proceed only with the simplest terms, for all others are enemies and will confuse you.

— Michael Kirkbride / Vivec, "The Thirty Six Lessons of Vivec", Morrowind.

Am I the only one who thinks we should stop using the word "simple" for Occam's Razor / Solomonoff's Whatever? In 99% of use-cases by actual humans, it doesn't mean Solomonoff induction, so it's confusing.
How would you characterise what are, in your opinion, the most prevalent use-cases?
"Easy to communicate to other humans", "easy to understand", or "having few parts".
"Having few parts" is what Occam's razor seems to be going for. We can speak specifically of "burdensome details," but I can't think of a one-word replacement for "simple" used in this sense.

It is a problem that people tend to use "simple" to mean "intuitive" or "easy to understand," and "complicated" to mean "counterintuitive." Based on the "official" definitions, quantum mechanics and mathematics are extremely simple while human emotions are exceedingly complex.

I think human beings have internalized a crude version of Occam's Razor that works for most normal social situations - the absurdity heuristic. We use it to see through elaborate, highly improbable excuses, for example. It just misfires when dealing with deeper physical reality because its focus is on minds and emotions. Hence, two different, nearly opposite meanings of the word "simple."

Conspiracy Theory, n. A theory about a conspiracy that you are not supposed to believe.

-L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment

Major Greene this evening fell into some conversation with me about the Divinity and satisfaction of Jesus Christ. All the argument he advanced was, "that a mere creature or finite being could not make satisfaction to infinite justice for any crimes," and that "these things are very mysterious."

Thus mystery is made a convenient cover for absurdity.

  • John Adams

Jesus used a clever quip to point out the importance of self-monitoring for illusory superiority?

For a hundred years or so, mathematical statisticians have been in love with the fact that the probability distribution of the sum of a very large number of very small random deviations almost always converges to a normal distribution. ... This infatuation tended to focus interest away from the fact that, for real data, the normal distribution is often rather poorly realized, if it is realized at all. We are often taught, rather casually, that, on average, measurements will fall within ±σ of the true value 68% of the time, within ±2σ 95% of the time, and within ±3σ 99.7% of the time. Extending this, one would expect a measurement to be off by ±20σ only one time out of 2 × 10^88. We all know that “glitches” are much more likely than that!

-- W.H. Press et al., Numerical Recipes, Sec. 15.1

I don't think it's fair to blame the mathematical statisticians. Any mathematical statistician worth his / her salt knows that the Central Limit Theorem applies to the sample mean of a collection of independent and identically distributed random variables, not to the random variables themselves. This, and the fact that the t-statistic converges in distribution to the normal distribution as the sample size increases, is the reason we apply any of this normal theory at all.

Press's comment applies more to those who use the statistics blindly, without understanding the underlying theory. Which, admittedly, can be blamed on those same mathematical statisticians who are teaching this very deep theory to undergraduates in an intro statistics class with a lot of (necessary at that level) hand-waving.

If the statistics user doesn't understand that a random variable is a measurable function from its sample space to the real line, then he/she is unlikely to appreciate the finer points of the Central Limit Theorem. But that's because mathematical statistics is hard (i.e. requires non-trivial amounts of work to really grasp), not because the mathematical statisticians have done a disservice to science.
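The tail figures in the Press quote are easy to check numerically: for a standard normal, P(|X| > kσ) = erfc(k/√2). A quick sketch in Python (the helper name is mine, not from any library):

```python
import math

def normal_tail(k: float) -> float:
    """P(|X| > k*sigma) for a standard normal X,
    via the complementary error function: erfc(k / sqrt(2))."""
    return math.erfc(k / math.sqrt(2))

# The familiar 68 / 95 / 99.7 rule:
for k in (1, 2, 3):
    print(f"within ±{k}σ: {1 - normal_tail(k):.4f}")

# The quote's 20σ figure: roughly one measurement in 2e88.
print(f"one 20σ outlier per {1 / normal_tail(20):.2e} measurements")
```

Running this reproduces the 68%/95%/99.7% figures and an expected 20σ outlier rate on the order of one in 10^88, which is exactly why real-world "glitches" at that rate falsify the normal model in the tails.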

"You're very smart. Smarter than I am, I hope. Though of course I have such incredible vanity that I can't really believe that anyone is actually smarter than I am. Which means that I'm all the more in need of good advice, since I can't actually conceive of needing any."

  • New Peter / Orson Scott Card, Children of the Mind
That's a modest thing to say for a vain person. It even sounds a bit like Moore's paradox - I need advice, but I don't believe I do. (Not that I'm surprised. I've met ambivalent people like that and could probably count myself among them. Being aware that you habitually make a mistake is one thing, not making it any more is another. Or, if you have the discipline and motivation, one step and the next.)
I love New Peter. He's so interesting and twisted and bizarre.

... he who works to understand the true causes of miracles and to understand Nature as a scholar, and not just to gape at them like a fool, is universally considered an impious heretic and denounced by those to whom the common people bow down as interpreters of Nature and the gods. For these people know that the dispelling of ignorance would entail the disappearance of that sense of awe which is the one and only support of their argument and the safeguard of their authority.

-- Baruch Spinoza, Ethics

That seems really odd to me, coming from Spinoza. I've never read him, but I thought that he was supposed to believe that God and Nature are the same thing. Does he do that, but then also investigate the nature of God through analyzing the way that Nature's laws work? How does he reconcile those two positions, I guess, is what I'm asking. Can someone more familiar with his work than I help me out here?
Spinoza held that God and Nature are the same thing. His reasoning in a nutshell: an infinite being would need to have everything else as a part of it, so God has to just be the entire universe. It's not clear whether he really thought of God as a conscious agent, although he did think that there were "ideas" in God's mind (read: the Universe) and that these perfectly coincided with the existence of real objects in the world. As an example, he seems to reject the notion of God as picking from among possible worlds and "choosing" the best one, opting instead to say that God just is the actual world and that there is no difference between them. So basically, studying nature for Spinoza is "knowing the mind of God." He may also have been reacting to his excommunication, in fact, that's pretty likely. So the quote may have some sour grapes hidden inside of it.
That doesn't hold in maths, at least. N, Z, and Q have the same size, but clearly Q isn't part of N. And there are as many rational numbers between 0 and 1 (or between 0 and 0.0000000000000000000001) as in Q as a whole, and yet we can have an infinity of such different subsets. And it gets even worse with bigger sets. It saddens me how often philosophers/theologians speak about "infinity" as if we had no set theory, no Peano arithmetic, no calculus, nothing. Intuition is usually wrong about "infinity".
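The claim that N and Q have the same size can even be made concrete: the Calkin-Wilf recurrence q(n+1) = 1/(2*floor(q(n)) - q(n) + 1) visits every positive rational exactly once, giving an explicit bijection from N to Q+. A minimal Python sketch (the function name is mine):

```python
from fractions import Fraction
from math import floor

def positive_rationals(n):
    """First n terms of the Calkin-Wilf sequence, which enumerates
    every positive rational exactly once -- a bijection from N to Q+."""
    q = Fraction(1)
    terms = []
    for _ in range(n):
        terms.append(q)
        q = 1 / (2 * floor(q) - q + 1)
    return terms

print(positive_rationals(7))
# The sequence begins 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ...
```

Since every rational shows up at exactly one position, "counting" the rationals is no harder than counting the naturals, despite the intuition that there are "more" of them.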

Baruch Spinoza: 1632-1677
Isaac Newton: 1642-1727
Georg Cantor: 1845-1918
Richard Dedekind: 1831-1916
Giuseppe Peano: 1858-1932

Ok, I stand corrected on the dates, my mistake. But still, didn't we already know that if you take a line and two distinct points A and B on it, there are an infinite number of points between A and B, and yet an infinite number of points outside [AB]? Didn't we know that since the ancient Greeks?
First, Spinoza is not using infinite in its modern mathematical sense. For him, "infinite" means "lacking limits" (see Definition 2, Part I of Ethics). Second, Spinoza distinguished between "absolutely infinite" and "infinite in its kind" (see the Explication following Definition 6, Part I). Something is "infinite in its kind" if it is not limited by anything "of the same nature". For example, if we fix a Euclidean line L, then any line segment s within L is not "infinite in its kind" because there are line segments on either side that limit the extent of s. Even a ray r within L is not "infinite in its kind", because there is another ray in L from which r is excluded. Among the subsets of L, only the entire line is "infinite in its kind". However, the entire line is not "absolutely infinite" because there are regions of the plane from which it is excluded (although the limits are not placed by lines).
I suspect "infinite" was supposed to mean "having infinite measure" rather than "having infinite number of points / subsets". In the latter sense every being, not only God, would be infinite.
That's a good point. Spinoza himself was a mathematician of no mean talent, so we should assume that he was aware of it as well. So the question is, does his argument avoid the mistake of taking 'infinite' to mean 'all-encompassing' without any argument to that effect? There are certainly questions to be raised about his argument, but I don't think this is one of his mistakes. If you don't want to take my word for it, here's the opening argument of the Ethics. Good luck, it's quite a slog. The idea seems to be that the one substance has to be infinite and singular, because substances can't share attributes (see his definitions), and things which have nothing in common can't interact. Therefore substances can't cause each other to exist, and therefore if any exists, it must exist necessarily. If that's true, then existence is an attribute of a substance, and so no other substance could exist. At any rate, the argument concerns an 'infinity' of attributes, and I think these are reasonably taken as countably infinite. Spinoza also defines infinite as 'not being limited by anything of the same kind', so by that definition he would say that with reference to the 'kind' 'number', the even numbers are finite, though they're infinite with reference to the 'kind' 'even number'.
Thanks. My understanding was basically correct then. I just didn't understand why he'd go from that overall position to talk about why we need to investigate nature, when his whole approach really seemed more like laid back speculation than any form of science, or advocacy of science. The excommunication detail clarifies a lot though, as Spinoza's approach seems much more active and investigative when compared to the approach of the church. Excellent, thanks again.
It's notable that Spinoza was a part of a Jewish community, rather than "a church." I've actually read the letter of his excommunication, and WOW. They really went all out. You're considered cursed just for reading what he wrote.
Are you reacting to Spinoza's mention of "miracles" and "gods"? Spinoza held that there are no miracles in the sense that everything without exception proceeds according to cause and effect. So he must mean something like "alleged miracles". As for the "gods", they are mentioned only as part of the belief system of the "common people".
What do you- he's a pantheist. Contemporaries called him an atheist because his position works exactly like atheism.

We're talking about morality that is based around technology. There is no technological advance that allows us to not criminalize homosexuality now where we couldn't have in the past.

Naming three:
1. Condoms.
2. Widespread circumcision.
3. Antibiotics.

Widespread circumcision.


Didn't the Jews have that back in the years BC? It's sort of cultural, but it's been around for a while in some cultures...
I didn't specify promiscuous homosexuality. Monogamously inclined gay people are as protected from STDs as anyone else at a comparable tech level - maybe more so among lesbians.
Neither did I, but would rather refrain from explaining in detail why I didn't assume promiscuity. It's really annoying that you jumped to that conclusion, though. Further, I'm confused why the existence of some minority of a minority of the population that doesn't satisfy the ancestor's hypothetical matters.
Homosexuality was common/accepted/expected in many societies without leading to any negative consequences, so technology is not an enabler of morality here.
Homosexuality has certainly been present in many societies. However, your link does not state, nor even suggest, that it did not lead to any negative consequences.

People respond to incentives. Especially loss-related incentives. I do not give homeless people nickels even though I can afford to give a nearly arbitrary number of homeless people nickels. The set of people with karma less than five will be outright unable to reply - the set of people with karma greater than five will just be disincentivized, and that's still something.

I believe Peter Singer actually originally advocated the asceticism you mention, but eventually moved towards "try to give 10% of your income", because people were actually willing to do that, and his goal was to actually help people, not uphold a particular abstract ideal.

An interesting implication, if this generalizes: "Don't advocate the moral beliefs you think people should follow. Advocate the moral beliefs which hearing you advocate them would actually cause other people to behave better."
Just a sidenote: If you are the kind of person who is often worried about letting people down, entertaining the suspicion that most people follow this strategy already is a fast, efficient way to drive yourself completely insane. "You're doing fine." "Oh, I know this game. I'm actually failing massively, but you thought, well, this is the best he can do, so I might as well make him think he succeeded. DON'T LIE TO ME! AAAAH..."
Sometimes I wonder how much of LW is "nerds" rediscovering on their own how neuro-typical communication works. I don't mean to say I am not a "nerd" in this sense :).

“The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.”

― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values

Well. Surely that's only part of the real purpose of the scientific method.

"Even in a minute instance, it is best to look first to the main tendencies of Nature. A particular flower may not be dead in early winter, but the flowers are dying; a particular pebble may never be wetted with the tide, but the tide is coming in."

G. K. Chesterton, "The Absence of Mr Glass"

Note: this was put in the mouth of the straw? atheist. It's still correct.

Then Chesterton didn't say it.
It is typical to quote the author of fictional works for quotes from that fictional work, though I think it's somewhat more conventional here on LW to quote the character.

Wait, actual humans are afraid of losing karma?

Actual humans are afraid of being considered obnoxious, stupid or antisocial. Karma loss is just an indication that perception may be heading in that direction.

Attempts to avoid karma loss by procedural hacks are a stronger indication...
This is how lost purposes form. Once you've figured out that karma loss is a sign of something bad, you start avoiding it even when it's not a sign of that bad thing.

That sounds to me like exactly the sort of excuse a bad person would use to justify valuing their selfish whims over the lives of other people.

Is it justified? Pretend we care nothing for good and bad people. Do these "bad people" do more good than "good people"?

Linus's take fits my aesthetic better, and "beautiful" language is often unclear.

This is my home, the country where my heart is;

Here are my hopes, my dreams, my sacred shrine.

But other hearts in other lands are beating,

With hopes and dreams as true and high as mine.

My country’s skies are bluer than the ocean,

And sunlight beams on cloverleaf and pine.

But other lands have sunlight too and clover,

And skies are everywhere as blue as mine.

-Lloyd Stone

Duplicate, please delete the other.
Obviously he never visited the British Isles :D

If science proves some belief of Buddhism wrong, then Buddhism will have to change.

-- Tenzin Gyatso, 14th Dalai Lama