Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
Well, his point only makes any sense when applied to the metaphor since a better answer to the question
is:
"where would Sisyphus get a robot in the middle of Hades?"
Edit: come to think of it, this also works with the metaphor for human struggle.
I thought the correct answer would be, "No time for programming, too busy pushing a boulder."
Though, since the whole thing was a punishment, I have no idea what the punishment for not doing his punishment would be. Can't find it specified anywhere.
I don't think he's punished for disobeying, I think he's compelled to act. He can think about doing something else, he can want to do something else, he can decide to do something else ... but what he does is push the boulder.
The version I like the best is that Sisyphus keeps pushing the boulder voluntarily, because he's too proud to admit that, despite all his cleverness, there's something he can't do. (Specifically, get the boulder to stay at the top of the mountain).
My favorite version is similar. Each day he tries to push the boulder a little higher, and as the boulder starts to slide back, he mentally notes his improvement before racing the boulder down to the bottom with a smile on his face.
Because he gets a little stronger and a little more skilled every day, and he knows that one day he'll succeed.
In the M. Night version: his improvements are an asymptote - and Sisyphus didn't pay enough attention in calculus class to realize that the limit is just below the peak.
Now someone just has to write a book entitled "The Rationality of Sisyphus", give it a really pretentious-sounding philosophical blurb, and then fill it with Grand Theft Robot.
Answer: Because the Greek gods are vindictive as fuck, and will fuck you over twice as hard when they find out that you wriggled out of it the first time.
Who was the guy who tried to bargain the gods into giving him immortality, only to get screwed because he hadn't thought to ask for youth and health as well? He ended up as a shriveled, crab-like thing in a jar.
My high school English teacher thought this fable showed that you should be careful what you wish for. I thought it showed that trying to compel those with great power through contract was a great way to get yourself fucked good and hard. Don't think you can fuck with people a lot more powerful than you are and get away with it.
EDIT: The myth was that of Tithonus. The goddess Eos was keeping him as a lover, and tried to bargain with Zeus for his immortality, without asking for eternal youth too. Oops.
I'm no expert, but that seems to be the moral of a lot of Greek myths.
-- Linus Pauling
Citation for this was hard; the closest I got was Etzioni's 1962 The Hard Way to Peace, pg 110. There's also a version in the 1998 Linus Pauling on peace: a scientist speaks out on humanism and world survival : writings and talks by Linus Pauling; this version goes
“A writer who says that there are no truths, or that all truth is ‘merely relative,’ is asking you not to believe him. So don’t.” ― Roger Scruton, Modern Philosophy: An Introduction and Survey
Not quite so! We could presume that value isn't restricted to the reals plus infinity, but instead takes values among the ordinals. Then you could totally say that life has infinite value, but two lives have twice that value.
But this gives non-commutativity of value. Saving a life and then getting $100 is better than getting $100 and saving a life, which I admit seems really screwy. This also violates the Von Neumann-Morgenstern axioms.
In fact, if we claim that a slice of bread is of finite value, and, say, a human life is of infinite value in any definition, then we violate the continuity axiom... which is probably a stronger counterargument, and tightly related to the point DanielLC makes above.
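The screwiness is just ordinal arithmetic. Writing ω for the value of one life and 100 for the value of $100 (a bookkeeping convention I'm assuming purely for illustration):

```latex
\omega + 100 \;>\; \omega \;=\; 100 + \omega
```

Adding 100 "after" ω yields a strictly larger ordinal, while adding it "before" gets absorbed — which is exactly the non-commutativity of "save a life, then get $100" versus the reverse order.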
John Perry, introduction to Identity, Personal Identity, and the Self
He bought the present ox along with the future ox. He could have just bought the present ox, or at least a shorter interval of one. This is known as "renting".
Solzhenitsyn
If only it were a line. Or even a vague boundary between clearly defined good and clearly defined evil. Or if good and evil were objectively verifiable notions.
You are pointing to different actions labeled stealing and saying "one is good and the other is evil." Yeah, obviously, but that is no contradiction - they are different actions! One is the action of stealing in dire need, the other is the action of stealing without need.
This is a very common confusion. Good and evil (and ethics) are situation-dependent, even according to the sternest, most thundering of moralists. That does not tell us anything one way or the other about objectivity. The same action in the same situation with the same motives is ethically the same.
— Steven Kaas
Charles Kettering
Ken Wilber
--G.K. Chesterton, "The Duel of Dr. Hirsch"
Inverted information is not random noise.
Lol, my professor would give a 100% to anyone who answered every exam question wrong. There were a couple of people who pulled it off, but most scored somewhere between 0 and 10.
I'm assuming a multiple-choice exam, and invalid answers don't count as 'wrong' for that purpose?
Otherwise I can easily miss the entire exam with "Tau is exactly six." or "The battle of Thermopylae" repeated for every answer. Even if the valid answers are [A;B;C;D].
"Write a four word phrase or sentence."
Let's go one step back on this, because I think our point of disagreement is earlier than I thought in that last comment.
The efficient market hypothesis does not claim that the profit on all securities has the same expectation value. EMH-believers don't deny, for example, the empirically obvious fact that this expectation value is higher for insurers than for more predictable businesses. Also, you can always increase your risk and expected profit by leverage, i.e. by investing borrowed money.
This is because markets are risk-averse, so for the same expectation value you get paid extra to accept a higher standard deviation. Out- or underperforming the market is really easy by accepting more or less risk than it does on average. The claim is not that the expectation value will be the same for every security, only that the price of every security will be consistent with the same prices for risk and expected profit.
So if the EMH is true, you cannot get a better deal on expected profit without also accepting higher risk, and you cannot get a higher risk premium than other people. But you can still get lots of different trade-offs between expected profit and risk.
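A toy numerical sketch of that last point (the numbers are made up and purely illustrative): leverage moves you along the risk/return line, but the price of risk — the Sharpe ratio — stays put.

```python
# Toy illustration, not a market model: under the EMH as described above,
# leverage scales expected excess return and risk together, so the
# risk-adjusted "price of risk" (Sharpe ratio) is unchanged.

def levered(expected_excess, stdev, leverage):
    """Return (expected excess profit, risk) of a levered position."""
    return expected_excess * leverage, stdev * leverage

base_excess, base_risk = 0.04, 0.10   # assumed numbers, purely illustrative
for lev in (1.0, 2.0, 3.0):
    excess, risk = levered(base_excess, base_risk, lev)
    print(f"leverage {lev:.0f}x: excess {excess:.2%}, risk {risk:.2%}, "
          f"Sharpe {excess / risk:.2f}")
```

Every line prints a different risk/return pair but the same Sharpe ratio — different trade-offs, same price of risk.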
Now can you ...
Anonymous
Not always, since:
Des McHale
In other words, the average of a distribution is not necessarily the most probable value.
In other words: expect Lady Mode, not Lady Mean.
In my high school health class, for weeks the teacher touted the upcoming event: "Breast and Testicle Day!"
When the anticipated day came, it was of course the day when all the boys go off to one room to learn about testicular self-examination, and all the girls go off to another to learn about breast self-examination. So, in fact, no student actually experienced Breast and Testicle Day.
-Robin Hanson, Human Enhancement
I feel like Hanson's admittedly insightful "signaling" hammer has him treating everything as a nail.
Your contrarian stance against a high-status member of this community makes you seem formidable and savvy. Would you like to be allies with me? If yes, then the next time I go foraging I will bring you back extra fruit.
I agree in principle but I think this particular topic is fairly nailoid in nature.
I think he's mischaracterizing the issue.
Beliefs serve multiple functions. One is modeling accuracy, another is signaling. It's not whether the environment is harsh or easy, it's which function you need. There are many harsh environments where what you need is the signaling function, and not the modeling function.
-L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment
"He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his candle at mine, receives light without darkening me. No one possesses the less of an idea, because every other possesses the whole of it." - Jefferson
But many people do benefit greatly from hoarding or controlling the distribution of scarce information. If you make your living off slavery instead, then of course you can be generous with knowledge.
Or if, say, you run a university.
-- Tim Kreider
The interesting part is the phrase "which sounds like the kind of lunatic notion that’ll be considered a basic human right in about a century, like abolition, universal suffrage and eight-hour workdays." If we can anticipate what the morality of the future would be, should we try to live by it now?
Not if it's actually the same morality, but depends on technology. For example, strong prohibitions on promiscuity are very sensible in a world without cheap and effective contraceptives. Anyone who tried to live by 2012 sexual standards in 1912 would soon find they couldn't feed their large horde of kids. Likewise, if robots are doing all the work, fine; but right now if you just redistribute all money, no work gets done.
You seem to have rather a different idea of what I meant by "2012 standards". Even now we do not really approve of married people sleeping around. We do, however, approve of people not getting married until age 25 or 30 or so, but sleeping with whoever they like before that. Try that pattern without contraception.
How do you envision living by this model now working?
That is, suppose I were to embrace the notion that having enough resources to live a comfortable life (where money can stand in as a proxy for other resources) is something everyone ought to be guaranteed.
What ought I do differently than I'm currently doing?
According to Wikipedia, the 2005 federal elections in Germany cost 63 million euros, for a population of 81 million people. That's about 0.78 euros per person, or roughly 0.0000281 of GDP. Does not seem much, in the grander scheme of things. And since the German constitutional court prohibited the use of most types of voting machines, that figure does include the compensation for the election helpers; 13 million, again, not a prohibitive expenditure.
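Checking that arithmetic: the cost and population figures are from the comment; the GDP-share figure assumes 2005 German GDP of roughly 2.24 trillion euros, which is what the stated fraction implies.

```python
# Sanity-checking the election-cost figures quoted above.
cost = 63_000_000        # euros, 2005 federal election (from the comment)
population = 81_000_000  # from the comment
gdp = 2_240_000_000_000  # euros; assumed 2005 German GDP

print(f"per person: {cost / population:.2f} EUR")  # about 0.78 EUR
print(f"share of GDP: {cost / gdp:.7f}")           # about 0.0000281
```

So it is 0.78 euros (78 cents) per person, not 0.78 eurocent.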
Is that true? (Technically, a century ago was 1912.)
Wikipedia on the eight-hour day:
Yes, yes they are.
--Game of Thrones, Season 2.
Reminds me of Patton:
I especially like the way he calls the enemy "the other poor bastard". And not, say, "the bastard".
-- Aristosophy (again)
We're talking about a person who, along with her partner, gives to efficient charity twice as much money as she spends on herself. There's no way she doesn't actually believe what she says and still does that.
Yeah, but there's also a certain plausibility to the heuristic which says that you don't get to second-guess her knowledge of what works for charitable giving until you're - not giving more - but at least playing in the same order of magnitude as her. Maybe her pushing a little bit harder on that "hypocrisy" would cause her mind to collapse, and do you really want to second-guess her on that if she's already doing more than an order of magnitude better than what your own mental setup permits?
Charles Handy describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara
The following quotes were heavily upvoted, but then turned out to be made by a Will Newsome sockpuppet who edited the quote afterward. The original comments have been banned. The quotes are as follows:
— Aristosophy
— Aristosophy
If anyone objects to this policy response, please PM me so as to not feed the troll.
Defection too far. Ban Will.
I've heard this claimed.
This behavior isn't cute.
This would be somewhat in fitting with findings in Cialdini. One defector kept around and visibly punished or otherwise looking low status is effective at preventing that kind of behavior. (If not Cialdini, then Greene. Probably both.)
Edited how?
If I remember correctly the second quote was edited to be something along the lines of "will_newsome is awesome."
I do find some of Will Newsome's contributions interesting. OTOH, this behaviour is pretty fucked up. (I was wondering how hard it would be to implement a software feature to show the edit history of comments.)
Unfortunately, doing bad shows is not only a route to doing good shows.
I'm surprised at how often I have to inform people of this... I have mild scoliosis, and so I usually prefer sitting down and kicking up my feet, usually with my work in hand. Coming from a family who appreciates backbreaking work is rough when the hard work is even harder and the pain longer-lasting... which would be slightly more bearable if the aforementioned family did not see reading MYSTERIOUS TEXTS on a Kindle and using computers for MYSTERIOUS PURPOSES as signs of laziness and devotion to silly frivolities.
I have a sneaking suspicion that this is not a very new situation.
I disagree, in fact. That books strengthen the mind is baldly asserted, not supported, by this quote - the rationality point I see in it is related to comparative advantage.
Alexander Grothendieck
"Nontrivial measure or it didn't happen." -- Aristosophy
(Who's Kate Evans? Do we know her? Aristosophy seems to have rather a lot of good quotes.)
*cough*
"I made my walled garden safe against intruders and now it's just a walled wall." -- Aristosophy
Is that you? That's ingenious.
For more rational flavor:
This should be the summary for entangled truths:
how to seem and be deep:
Dark Arts:
More Dark arts:
Luminosity:
She's got to be from here, here's learning biases can hurt people:
Cryonics:
I'm starting to think this is someone I used to know from tvtropes.
"Possibly the best statistical graph ever drawn" http://www.edwardtufte.com/tufte/posters
"Junior", FIRE JOE MORGAN
"If your plan is for one year plant rice. If your plan is for 10 years plant trees. If your plan is for 100 years educate children" - Confucius
...If your plan is for eternity, invent FAI?
I think I disagree; care to make it precise enough to bet on? I'm expecting life still around, Earth the main population center, most humans not uploaded, some people dying of disease or old age or in wars, most people performing dispreferred activities in exchange for scarce resources at least a couple months in their lives, most children coming out of a biological parent and not allowed to take major decisions for themselves for at least a decade.
I'm offering $100 at even odds right now and will probably want to bet again in the next few years. I can give it to you (if you're going to transfer it to SIAI/CFAR, tell me and I'll donate directly), and you pay me $200 if the world has not ended in 100 years, as soon as we're both available (e.g. thawed). If you die you can keep the money; if I die and the bet resolves in my favor, give the $200 to some sensible charity.
How's that sound? All of the above is up for negotiation.
As wedifrid says, this is a no-brainer "accept" (including the purchasing-power-adjusted caveat). If you are inside the US and itemize deductions, please donate to SIAI, otherwise I'll accept via Paypal. Your implied annual interest rate assuming a 100% probability of winning is 0.7% (plus inflation adjustment). Please let me know whether you decide to go through with it; withdrawal is completely understandable - I have no particular desire for money at the cost of forcing someone else to go through with a bet they feel uncomfortable about. (Or rather, my desire for $100 is not this strong - I would probably find $100,000 much more tempting.)
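Checking the 0.7% figure: doubling $100 into $200 over a century, assuming a certain win, implies the annual rate below.

```python
# Implied annual interest rate on the bet: $100 now for $200 in 100 years,
# treating the payout as certain.
rate = (200 / 100) ** (1 / 100) - 1
print(f"{rate:.2%}")  # about 0.70% per year
```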
PayPal-ed to sentience at pobox dot com.
Don't worry, my only debtor who pays higher interest rates than that is my bank. As long as that's not my main liquidity bottleneck I'm happy to follow medieval morality on lending.
If you publish transaction data to confirm the bet, please remove my legal name.
Bet received. I feel vaguely guilty and am reminding myself hard that money in my Paypal account is hopefully a good thing from a consequentialist standpoint.
Bet recorded: LW bet registry, PB.com.
I definitely expect nanotech a few orders of magnitude awesomer than we have now. I expect great progress on aging and disease, and wouldn't be floored by them being solved in theory (though it does sound hard). What I don't expect is worldwide deployment. There are still people dying from measles, when in any halfway-developed country every baby gets an MMR shot as a matter of course. I wouldn't be too surprised if everyone who can afford basic care in rich countries was immortal while thousands of brown kids kept drinking poo water and dying. I also expect longevity treatments to be long-term, not permanent fixes, and thus hard to access in poor or politically unstable countries.
The above requires poor countries to continue existing. I expect great progress, but not abolition of poverty. If development continues the way it has (e.g. Brazil), a century isn't quite enough for Somalia to get its act together. If there's a game-changing, universally available advance that bumps everyone to cutting-edge tech levels (or even 2012 tech levels), then I won't regret that $100 much.
I have no idea what wars will look like, but I don't expect them to be nonexistent or nonlethal. Given no gam...
Imām al-Ḥaddād (trans. Moṣṭafā al-Badawī), "The Sublime Treasures: Answers to Sufi Questions"
Reminds me of Moore's "here is a hand" paradox (or one man's modus tollens is another's modus ponens).
I think that's actually a really terrible bit of arguing.
We can stop right there. If we're all the way back at solipsism, we haven't even gotten to defining concepts like 'random chance' or 'design', which presume an entire raft of external beliefs and assumptions, and we surely cannot immediately say there are only two categories unless, in response to any criticism, we're going to include a hell of a lot under one of those two rubrics. Which probability are we going to use, anyway? There are many more formalized versions than just Kolmogorov's axioms (which brings us to the analytic and synthetic problem).
And much of the rest goes on in a materialist vein which itself requires a lot of further justification (why can't minds be ontologically simple elements? Oh, your experience in the real world with various regularities has persuaded you that is inconsistent with the evidence? I see...) Even if we granted his claims about complexity, why do we care about complexity? And so on.
Yes, if you're going to buy into a (very large) number of materialist non-solipsist claims, then you're going to have trouble making a case in such terms for solipsism. But if you've bought all those materialist or externalist claims, you've already rejected solipsism and there's no tension in the first place. And he doesn't do a good case of explaining that at all.
Subway ad: "146 people were hit by trains in 2011. 47 were killed."
Guy on Subway: "That tells me getting hit by a train ain't that dangerous."
This reminds me of how I felt when I learned that a third of the passengers of the Hindenburg survived. Went something like this, if I recall:
Actually, according to Wikipedia, only 35 out of the 97 people aboard were killed. Not enough to kill even 50% of them.
Wait, 32% probability of dying “ain't that dangerous”? Are you f***ing kidding me?
If I expect to be hit by a train, I certainly don't expect a ~68% survival chance. Not intuitively, anyways.
I'm guessing that even if you survive, your quality of life is going to take a hit. Accounting for this will probably bring our intuitive expectation of harm closer to the actual harm.
"In a society in which the narrow pursuit of material self-interest is the norm, the shift to an ethical stance is more radical than many people realize. In comparison with the needs of people starving in Somalia, the desire to sample the wines of the leading French vineyards pales into insignificance. Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal. An ethical approach to life does not forbid having fun or enjoying food and wine, but it changes our sense of priorities. The effort and expense put into buying fashionable clothes, the endless search for more and more refined gastronomic pleasures, the astonishing additional expense that marks out the prestige car market in cars from the market in cars for people who just want a reliable means to getting from A to B, all these become disproportionate to people who can shift perspective long enough to take themselves, at least for a time, out of the spotlight. If a higher ethical consciousness spreads, it will utterly change the society in which we live." -- Peter Singer
As it is probably intended, the more reminders like this I read, the more ethical I should become. As it actually works, the more of this I read, the less I become interested in ethics. Maybe I am extraordinarily selfish and this effect doesn't happen to most, but it should be at least considered that constant preaching of moral duties can have counterproductive results.
I suspect it's because authors of "ethical reminders" are usually very bad at understanding human nature.
What they essentially do is associate "ethical" with "unpleasant", because as long as you have some pleasure, you are obviously not ethical enough; you could do better by giving up some more pleasure, and it's bad that you refuse to do so. The attention is drawn away from good things you are really doing, to the hypothetical good things you are not doing.
But humans are usually driven by small incentives, by short-term feelings. The best thing our rationality can do is better align these short-term feelings with our long-term goals, so we actually feel happy when contributing to our long-term goals. And how exactly are these "ethical reminders" contributing to the process? Mostly by undercutting your short-term ethical motivators, by always reminding you that what you did was not enough, therefore you don't deserve the feelings of satisfaction. Gradually they turn these motivators off, and you no longer feel like doing anything ethical, because they convinced you (your "elephant") that you can't.
Ethics without understanding human nature is just a pile of horseshit. Of course that does not prevent other people from admiring those who speak it.
xkcd reference.
Not to mention the remarks of Mark Twain on a fundraiser he attended once:
I wasn't abused or neglected. Did she check experimentally that abuse or neglect is more prevalent among rationalists than in the general population?
Of course that's not something a human would ordinarily do to check a plausible-sounding hypothesis, so I guess she probably didn't, unless something went horribly wrong in her childhood.
I'm not at all convinced that this is the case. After all, the shampoos are being designed to be less painful, and you don't need to test on ten thousand rabbits. Considering the distribution of the shampoos, this may save suffering even if you regard human and rabbit suffering as equal in disutility.
Julia Wise would disagree, on the grounds that this is impossible to maintain and you do more good if you stay happy.
So, how many lives did he save again?
Clever guy, but I'm not sure if you want to follow his example.
-- Iain McKay et al., An Anarchist FAQ, section C.7.3
Matt Ridley, in The Origins of Virtue
--Kate Evans on Twitter
-- Kaiki Deishū, Episode 7 of Nisemonogatari.
Sure. The book is a sort of resource for learning the programming language Scheme, where the authors will present an illustrative piece of code and discuss different aspects of its behavior in the form of a question-and-answer dialogue with the reader.
In this case, the authors are discussing how to perform numerical comparisons using only a simple set of basic procedures, and they've come up with a method that has a subtle error. The lines above encourage the reader to figure out if and why it's an error.
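Not the book's actual Scheme, but a hypothetical Python rendering of that kind of subtle error: compare two naturals using only a zero-test and a decrement (`sub1`), where the order of the zero-tests is the whole bug.

```python
# Hypothetical illustration of the class of bug described above, not the
# book's code: "less than" built from only zero-tests and decrement.

def sub1(n):
    return n - 1

def less_buggy(n, m):
    # Subtle error: testing n first means that when both arguments hit
    # zero together (i.e. n == m), we wrongly report n < m.
    if n == 0:
        return True
    if m == 0:
        return False
    return less_buggy(sub1(n), sub1(m))

def less_fixed(n, m):
    # Testing m first restores the correct behavior for n == m.
    if m == 0:
        return False
    if n == 0:
        return True
    return less_fixed(sub1(n), sub1(m))

print(less_buggy(3, 3))  # True  (wrong: 3 is not less than 3)
print(less_fixed(3, 3))  # False (right)
```

The two versions agree everywhere except when the arguments are equal — exactly the sort of error that's easy to miss if you "try first, think later."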
With computers, it's really easy to just have a half-baked idea, twiddle some bits, and watch things change, but sometimes the surface appearance of a change is not the whole story. Remembering to "think first, then try" helps me maintain the right discipline for really understanding what's going on in complex systems. Thinking first about my mental model of a situation prompts questions like this:
It's harder psychologically (and maybe too late) to ask those questions in retrospect if you try first, and then think, and if you skip asking them, then you'll suffer later.
This is the easiest, most handholdy experience possible: http://learnpythonthehardway.org/book/
A coworker of mine who didn't know any programming, and who probably isn't smarter than you, enjoyed working through it and has learned a lot.
Programming is hard, but a lot of good things are hard.
Bill Clinton
This is why I think it's not too terribly useful to give labels like "good person" or "bad person," especially if our standard for being a "bad person" is "someone with anything less than 100% adherence to all the extrapolated consequences of their verbally espoused values." In the end, I think labeling people is just a useful approximation to labeling consequences of actions.
Julia, Jeff, and others accomplish a whole lot of good. Would they, on average, end up accomplishing more good if they spent more time feeling guilty about the fact that they could, in theory, be helping more? This is a testable hypothesis. Are people in general more likely to save more lives if they spend time thinking about being happy and avoiding burnout, or if they spend time worrying that they are bad people making excuses for allowing themselves to be happy?
The question here is not whether any individual person could be giving more; the answer is virtually always "yes." The question is, what encourages giving? How do we ensure that lives are actually being saved, given our human limitations and selfish impulses? I think there's great value in not generating an ugh-field around charity.
-- xkcd 667
Ernest Hemingway
"If at first you don't succeed, switch to power tools." -- The Red Green Show
Julia Wise holds the distinction of having actually tried it though. Few people are selfless enough to even make the attempt.
Michael Welfare, quoted in The Autobiography of Benjamin Franklin
"Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Sagan
EDIT: Quote above is from the movie.
Verbatim from the comic:
I personally think that Watchmen is a fantastic study* on all the different ways people react to that realisation.
("Study" in the artistic sense rather than the scientific.)
Douglas Hubbard, How to Measure Anything
--Kate Evans on Twitter
Chesterton doesn't understand the emotion because he doesn't know enough about psychology, not because emotions are deep sacred mysteries we must worship.
Or better, arational.
Pierre Proudhon, to Karl Marx
Edward Tufte, "Beautiful Evidence"
Douglas Hubbard, How to Measure Anything
-- Bryan Caplan
Spared Jews:
Whether Hitler batted for both teams is hotly debated. There are suspected relationships (August Kubizek, Emil Maurice) but any evidence could as well have been faked to smear him.
Hitler clearly knew that Ernst Röhm and Edmund Heines were gay and didn't care until it was Long Knives time. I'm less sure he knew about Karl Ernst's sexuality.
-- Jianzhi Sengcan
Edit: Since I'm not Will Newsome (yet!) I will clarify. There are several useful points in this but I think the key one is the virtue of keeping one's identity small. Speaking it out loud is a sort of primer, meditation or prayer before approaching difficult or emotional subjects has for me proven a useful ritual for avoiding motivated cognition.
It means that many kinds of observation that you could make will tend to cause you to update that probability less.
-- GoodDamon (this may skirt the edge of the rules, since it's a person reacting to a sequence post, but a person who's not a member of LW.)
Er... actually the genie is offering at most two rounds of feedback.
Sorry about the pedantry, it's just that as a professional specialist in genies I have a tendency to notice that sort of thing.
...or unless genies granting wishes is actually part of the same system as the larger world, such that what I learn from the results of a wish can be applied (by me or some other observer) to better calibrate expectations from other actions in that system besides wishing-from-genies.
I should like to point out that anyone in this situation who wishes what would've been their first wish if they had three wishes is a bloody idiot.
So: A genie pops up and says, "You have one wish left."
What do you wish for? Because presumably the giftwrapped FAI didn't work so great.
But not everything is the way it was. Before he made any wishes, he had three.
She missed the chance to trap him in an infinite loop.
-Lewis Carroll, The Hunting of the Snark
— Michael Kirkbride / Vivec, "The Thirty Six Lessons of Vivec", Morrowind.
-L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment
Jesus used a clever quip to point out the importance of self-monitoring for illusory superiority?
-- W.H. Press et al., Numerical Recipes, Sec. 15.1
Baruch Spinoza Ethics
Baruch Spinoza: 1632-1677
Isaac Newton: 1642-1727
Georg Cantor: 1845-1918
Richard Dedekind: 1831-1916
Giuseppe Peano: 1858-1932
We're talking about morality that is based around technology. There is no technological advance that allows us to not criminalize homosexuality now where we couldn't have in the past.
What?
Condoms may be older than you think.
People respond to incentives. Especially loss-related incentives. I do not give homeless people nickels even though I can afford to give a nearly arbitrary number of homeless people nickels. The set of people with karma less than five will be outright unable to reply - the set of people with karma greater than five will just be disincentivized, and that's still something.
I believe Peter Singer actually originally advocated the asceticism you mention, but eventually moved towards "try to give 10% of your income", because people were actually willing to do that, and his goal was to actually help people, not uphold a particular abstract ideal.
― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values
G. K. Chesterton, "The Absence of Mr Glass"
Note: this was put in the mouth of the (straw?) atheist. It's still correct.
Actual humans are afraid of being considered obnoxious, stupid or antisocial. Karma loss is just an indication that perception may be heading in that direction.
Is it justified? Pretend we care nothing for good and bad people. Do these "bad people" do more good than "good people"?
Linus's take fits my aesthetic better, and "beautiful" language is often unclear.
This is my home, the country where my heart is;
Here are my hopes, my dreams, my sacred shrine.
But other hearts in other lands are beating,
With hopes and dreams as true and high as mine.
My country’s skies are bluer than the ocean,
And sunlight beams on cloverleaf and pine.
But other lands have sunlight too and clover,
And skies are everywhere as blue as mine.
-Lloyd Stone
-- Tenzin Gyatso, 14th Dalai Lama