A good rule of thumb to ask yourself in all situations is, “If not now, then when?” Many people delay important habits, work and goals for some hypothetical future. But the future quickly becomes the present and nothing will have changed.
Hollywood is filled with feel-good messages about how robotic logic is no match for fuzzy, warm, human irrationality, and how the power of love will overcome pesky obstacles such as a malevolent superintelligent computer. Unfortunately there isn’t a great deal of cause to think this is the case, any more than there is that noble gorillas can defeat evil human poachers with the power of chest-beating and the ability to use rudimentary tools.
From the British newspaper 'The Telegraph', in its article on Nick Bostrom's awesome new book 'Superintelligence'.
I just thought it was a great analogy. Nice to see AI as an X-Risk in the mainstream media too.
..."I want information. I want to understand you. To understand what exactly I'm fighting. You can help me."
"I obviously won't."
"I will kill you if you don't help me. I'm not bluffing, Broadwings. I will kill you and you will die alone and unseen, and frankly you are far too intelligent to simply believe that the stories of ancestral halls are true. You will die and that will probably be it, and nobody will ever know if you talked or not—not that conversing with an enemy in a war you don't support is dishonorable in the first place."
"You'll let me leave if I stonewall, because you don't want to set a precedent of murdering surrendered officers."
"We'll see. Would you like another cup?"
"No."
Derpy smiled deviously. "You know, in that last battle? We didn't fly our cannon up there to the cliffs. Nope. We had Earth ponies drag them. Earth ponies are capable of astounding physical feats, you know. We're probably going to be using more mobility in our artillery deployment going forward, now that they've demonstrated how effective the concept is."
"...why did you tell me that? What would drive you to tell me that?"
Yes, that's exactly what I was thinking. General Broadwings thinks General Derpy is bluffing, so Derpy credibly precommits herself to not releasing him by telling him information that would surely doom her army if she did. She gives up the choice of freeing Broadwings, and comes out ahead for it.
A man is walking on the moon with his eyes turned up toward space
And the bright blue world that watches him reflected on his face.
The whole world sees the hero there and the module crew also.
But few can see the guiding team that guards him from below.

Here's a health to the man who walked the moon, and the module crew above,
And the team that watches from the sky with worry, joy, and love.
To all who blazed the sky-trail come raise your glasses 'round;
And a health to the unknown heroes, too, who never left the ground.

Here's a health to the ship's designers, and the welders of her seams,
And all who man the radar-scan to watch our dawning dreams.
For all the unknown heroes, sing out to every shore:
"What makes one step a giant leap is all the steps before".
Leslie Fish, musically praising the Hufflepuff virtues.
Surgeons finally did upgrade their antiseptic standards at the end of the nineteenth century. But, as is often the case with new ideas, the effort required deeper changes than anyone had anticipated. In their blood-slick, viscera-encrusted black coats, surgeons had seen themselves as warriors doing hemorrhagic battle with little more than their bare hands. A few pioneering Germans, however, seized on the idea of the surgeon as scientist. They traded in their black coats for pristine laboratory whites, refashioned their operating rooms to achieve the exacting sterility of a bacteriological lab, and embraced anatomic precision over speed.
The key message to teach surgeons, it turned out, was not how to stop germs but how to think like a laboratory scientist. Young physicians from America and elsewhere who went to Germany to study with its surgical luminaries became fervent converts to their thinking and their standards. They returned as apostles not only for the use of antiseptic practice (to kill germs) but also for the much more exacting demands of aseptic practice (to prevent germs), such as wearing sterile gloves, gowns, hats, and masks. Proselytizing through their own students and colleagues, they finally spread the ideas worldwide.
That's why I'm skeptical of people who look at some catastrophic failure of a complex system and say, "Wow, the odds of this happening are astronomical. Five different safety systems had to fail simultaneously!" What they don't realize is that one or two of those systems are failing all the time, and it's up to the other three systems to prevent the failure from turning into a disaster.
-- Raymond Chen
Corollary: if you're running a system for which five simultaneous failures is a disaster, monitor each safety system separately and treat any three simultaneous failures as if they were a disaster.
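A minimal sketch of what that corollary might look like in practice (Python for illustration; the layer names, health checks, and the three-out-of-five threshold are all hypothetical placeholders, not anyone's actual monitoring setup):

    # Hypothetical sketch: check each safety layer independently and escalate
    # long before all of them have failed. Names and thresholds are illustrative.
    def check_safety_layers(layer_checks, alarm_threshold=3):
        """layer_checks maps a layer name to a zero-argument callable that
        returns True if that layer is currently healthy."""
        failing = [name for name, is_healthy in layer_checks.items() if not is_healthy()]
        if len(failing) >= alarm_threshold:
            # Treat this as the disaster, even though the system "still works":
            # the few remaining layers are all that stand between you and one.
            raise RuntimeError(
                "Only %d of %d safety layers healthy; failing: %s"
                % (len(layer_checks) - len(failing), len(layer_checks), failing))
        return failing

    if __name__ == "__main__":
        layers = {
            "backup_power":   lambda: True,
            "pressure_valve": lambda: False,   # quietly failing
            "operator_alarm": lambda: False,   # quietly failing
            "auto_shutdown":  lambda: True,
            "containment":    lambda: True,
        }
        print("Currently failing:", check_safety_layers(layers))

The point is simply that the alarm fires on loss of redundancy, not only on the final, visible failure.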
It was a gamble: would people really take time out of their busy lives to answer other people’s questions, for nothing more than fake internet points and bragging rights?
It turns out that people will do anything for fake internet points.
Just kidding. At best, the points, and the gamification, and the focused structure of the site did little more than encourage people to keep doing what they were already doing. People came because they wanted to help other people, because they needed to learn something new, or because they wanted to show off the clever way they’d solved a problem.
...
An incredible number of people jumped at the chance to help a stranger
-- Jay Hanlon, Five year retrospective on StackOverflow
On the other hand, a Slashdot comment that's stuck in my mind (and on my hard disks) since I read it years ago:
...In one respect the computer industry is exactly like the construction industry: nobody has two minutes to tell you how to do something...but they all have forty-five minutes to tell you why you did it wrong.
When I started working at a tech company, as a lowly new-guy know-nothing, I found that any question starting with "How do I..." or "What's the best way to..." would be ignored; so I had to adopt another strategy. Say I wanted to do X. Research showed me there were (say) about six or seven ways to do X. Which is the best in my situation? I don't know. So I pick an approach at random, though I don't actually use it. Then I wander down to the coffee machine and casually remark, "So, I needed to do X, and I used approach Y." I would then, inevitably, get a half-hour discussion of why that was stupid, and what I should have done was use approach Z, because of this, this, and this. Then I would go off and use approach Z.
In ten years in the tech industry, that strategy has never failed once. I think the key difference is the subtext...
In addition to the specific advice, this is an excellent example of rationality because it's about getting the best from people as they are rather than being resentful because they aren't behaving as they would if they were ideally rational.
The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.
-- Alberto Brandolini (via David Brin)
Refuting frequently appearing bullshit could be made more efficient by having a web page with standard explanations which could be linked from the debate. Posting a link (perhaps with a short summary, which could also be provided on the top of that web page) does not require too much energy.
Which would create another problem, of protecting that web page from bullshit created by reversing stupidity, undiscriminating skepticism, or simply affective death spirals about that web page. (Yes, I'm thinking about RationalWiki.) Maybe we could have multiple anti-bullshit websites, which would sometimes explain using their own words, and sometimes merely by linking to another website's explanation they agree with.
http://www.talkorigins.org/indexcc/ is considered a good one on the single issue of creationism vs. evolution.
is consciousness more like the weather, or is it more like multiplication?
More context:
a perfect simulation of the weather doesn’t make it rain—at least, not in our world. On the other hand, a perfect simulation of multiplying two numbers does multiply the numbers: there’s no difference at all between multiplication and a “simulation of multiplication.” Likewise, a perfect simulation of a good argument is a good argument, a perfect simulation of a sidesplitting joke is a sidesplitting joke, etc.
Maybe the hardware substrate is relevant after all. But [...] I think the burden is firmly on those of us who suspect so, to explain what about the hardware matters and why. Post-Turing, no one gets to treat consciousness’s dependence on particular hardware as “obvious”—especially if they never even explain what it is about that hardware that makes a difference.
But if that were the case, then moral philosophers - who reason about ethical principles all day long - should be more virtuous than other people. Are they? The philosopher Eric Schwitzgebel tried to find out. He used surveys and more surreptitious methods to measure how often moral philosophers give to charity, vote, call their mothers, donate blood, donate organs, clean up after themselves at philosophy conferences, and respond to emails purportedly from students. And in none of these ways are moral philosophers better than other philosophers or professors in other fields.
Schwitzgebel even scrounged up the missing-book lists from dozens of libraries and found that academic books on ethics, which are presumably mostly borrowed by ethicists, are more likely to be stolen or just never returned than books in other areas of philosophy. In other words, expertise in moral reasoning does not seem to improve moral behavior, and it might even make it worse (perhaps by making the rider more skilled at post hoc justification). Schwitzgebel still has yet to find a single measure on which moral philosophers behave better than other philosophers.
There's probably a selection effect at work. Would a highly moral person with a capable and flexible mind become a full-time moral philosopher? Take their sustenance from society's philanthropy budget?
Or would they take the Talmudists' advice and learn a trade so they can support themselves, and study moral philosophy in their free time? Or perhaps GiveWell's advice and learn the most lucrative art they can and give most of their earnings to charity? Or study whichever field allows them to make the biggest difference in people's lives? (Probably medicine, engineering or diplomacy.)
Granted, such a person might think they could make such a large contribution to the field of moral philosophy that it would be comparable in impact to other research fields. This seems unlikely.
The same reasoning would keep highly moral people out of other sorts of philosophy, but people who don't have an interest in moral philosophy per se might not notice the point. It's hard to avoid if you specifically study it.
A good argument is like a piece of technology. Few of us will ever invent a new piece of technology, and on any given day it’s unlikely that we’ll adopt one. Nevertheless, the world we inhabit is defined by technological change. Likewise, I believe that the world we inhabit is a product of good moral arguments. It’s hard to catch someone in the midst of reasoned moral persuasion, and harder still to observe the genesis of a good argument. But I believe that without our capacity for moral reasoning, the world would be a very different place.
-Joshua Greene, “Moral Tribes”, Endnotes
Come back with your shield - or on it.
Our kind might not be able to cooperate, but the Spartans certainly could. The Spartans were masters of hoplite phalanx warfare, in which each individual would often have been better off running away, but everyone was collectively better off if no one ran than if all did. The above quote is what Plutarch says Spartan mothers would tell their sons before battle. (Because shields were heavy, if you were going to run away you would drop it, and coming back on your shield meant you were dead.) Spreading memes to overcome collective action problems is civilization-level rationality.
This seems like an elegant and funny take on Ben Franklin's wisdom.
Walter Sobchak: "Am I wrong?"
The Dude: "No you're not wrong."
Walter Sobchak: "Am I wrong?"
The Dude: "You're not wrong Walter. You're just an asshole."
-The Big Lebowski, Directed by Joel Coen and Ethan Coen, 1998
Sometimes the biggest disasters aren't noticed at all -- no one's around to write horror stories.
Vernor Vinge, A Fire Upon the Deep
Most of the time what we do is what we do most of the time.
-Daniel Willingham, Why Don't Students Like School? The point is that, quite often, the reason we're doing something is simply that it's what we're used to doing in that situation.
Note: He attributes the quote to some other psychologists.
Most of the time he asked questions. His questions were very good, and if you tried to answer them intelligently, you found yourself saying excellent things that you did not know you knew, and that you had not, in fact, known before. He had "educed" them from you by his question. His classes were literally "education" - they brought things out of you, they made your mind produce its own explicit ideas.
Thomas Merton, about professor Mark Van Doren
After describing
blind certainty, a close-mindedness that amounts to an imprisonment so total that the prisoner doesn't even know he's locked up.
David Foster Wallace continues
The point here is that I think this is one part of what teaching me how to think is really supposed to mean. To be just a little less arrogant. To have just a little critical awareness about myself and my certainties. Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded. I have learned this the hard way, as I predict you will, too.
"Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation." -Richard Feynman
Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.
I don't suppose you have a source for the quote? (at this point, my default is to disbelieve any attribution of a quote unknown to me to Einstein)
...There is a real joy in doing mathematics, in learning ways of thinking that explain and organize and simplify. One can feel this joy discovering new mathematics, rediscovering old mathematics, learning a way of thinking from a person or text, or finding a new way to explain or to view an old mathematical structure.
This inner motivation might lead us to think that we do mathematics solely for its own sake. That’s not true: the social setting is extremely important. We are inspired by other people, we seek appreciation by other people, and we like to help...
The power is not in the choice of metaphor, it is in the ability to shift among metaphors. Teaching people this other metaphor [...] but not leaving them with the flexibility to move freely in and out is not having enabled them at all.
-- Kent Pitman
Elsewhere in the thread he says the following. I have corrected some typos and added emphasis.
- I expect a firestorm of complaining over the use of the word `stack'. Maybe I'll be pleasantly surprised. I prefer to use such metaphors because I think such abstractions give people a useful handhold when they are coming from other backgrounds. I get jumped on a lot for using a stack metaphor when talking about Scheme because people apparently think I've forgotten that it's not a strict stack; personally, I think the people who are so quick to jump on me have forgotten that even a metaphor that has a flaw can be a powerful way to reason and express even when not speaking rigorously. The remark here is intended to allow someone who is just barely reading along to confirm that something he may have strong knowledge of in another domain is in fact what is being discussed here. To not offer that handhold seems to me to be impolite.
..."We must not criticize an idiom [...] because it is not yet well known and is, therefore, less strongly connected with our sensory reactions and less plausible than is another, more 'common' idiom. Superficial criticisms of this kind, which have been elevated into an entire 'philosophy', abound in discussions of the mind-body problem. Philosophers who want to introduce and to test new views thus find themselves faced not with arguments, which they could most likely answer, but with an impenetrable stone wall of well-entrenched reactions. This is not
Language exists only on the surface of our consciousness. The great human struggles are played out in silence and in the ability to express oneself.
Challenge my assumption, not my conclusion, and do it with new evidence, instead of trying to twist the old stuff.
"The Originist", by Orson Scott Card
I believe the first part is frequently good advice. The second half is good, but not quite as good -- there may still be good new angles on old evidence.
Two mares, each convinced she was standing firmly on The Shores Of Rationality, stared helplessly into The Sea Of Confusion and despaired over their inability to ever rescue the friend helplessly floundering within.
A vivid description of inferential distance from Twilight's Escort Service.
Edit: It's from a comedy that relies on misunderstandings; Twilight chooses the word "escort" to advertise her teleportation abilities. If you don't enjoy awkwardness-based comedies, I recommend you stay away. The actual quote is about explaining a value difference...
'Deep pragmatism' is Joshua Greene's name for 'utilitarianism'.
Today we, some of us, defend the rights of gays and women with great conviction. But before we could do it with feeling, before our feelings felt like “rights,” someone had to do it with thinking. I’m a deep pragmatist, and a liberal, because I believe in this kind of progress and that our work is not yet done.
Joshua Greene, “Moral Tribes"
"Just as eating against one’s will is injurious to health, so studying without a liking for it spoils the memory, and it retains nothing it takes in." -Da Vinci
I'm starting a new 30 day challenge: the month of no "should." Instead of tediously working down a list of all the little chores and errands that I "should" be doing, I'll work to listen to what that little voice inside me wants to do. I think it will be interesting.
I've realized that I had started noticing and mitigating trivial inconveniences some time after reading Yvain's post. Something as simple as leaving the door open or taking cookies from the wrapper and placing them in a bowl (or supporting form auto-fill, or placing the button (physical or virtual) you want the user to press right there in front, if you are a developer) makes a difference in the "feature" being used (e.g. cookies being eaten).
Up next: figure out a way to use fewer parentheses (including nested ones (yes, I've heard of commas)).
But the more central point is that trying to explain or predict [institutional] behavior idealistically, in terms of things called "values" or moral fortitude, is foolish. It's magical thinking. "One party believes in..." Institutions don't have beliefs. They have incentives.
There is, to the [Slytherin adept], only one reality governing everything from quarks to galaxies. Humans have no special place within it. Any idea predicated on the special status of the human — such as justice, fairness, equality, talent — is raw material for a theater of mediated realities that can be created via subtraction of conflicting evidence, polishing and masking.
society should not be looking for ways to maintain privacy. It should be looking for ways to make privacy unnecessary. We will never be free until we lose our unnecessary secrets and discover we are better off without them.
(Please read the link for context before commenting on the quote alone)