Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
Paul Dirac
On counter-signaling, and how not to do it:
-- The Irish Independent, "News In Brief"
Maybe the guy had been reading too much Edgar Allan Poe? As a child, I loved "The Purloined Letter" and tried to play that trick on my sister - taking something from her and hiding it "in plain sight". Of course, she found it immediately.
ETA: it was a girl, not a guy.
You are probably right that more information drew police attention to the car, but "near the border" gets one most of the way to legally justified. In the 1970s, the US Supreme Court explicitly approved a permanent checkpoint approximately 50 miles north of the Mexican border.
Chris Bucholz
It surprises people like Greg Egan, and they're not entirely stupid, because brains are Turing complete modulo the finite memory - there's no analogue of that for visible wavelengths.
If this weren't Less Wrong, I'd just slink away now and pretend I never saw this, but:
I don't understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?
When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something that another can't.
With computation it is known that sufficiently powerful programming languages are in a certain sense equivalent. For example, you could argue about the relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them -- in the worst case, as a simulation of another language running the original implementation.
There are some technical details, though. Simulating another program is slower and requires more memory than running the original program. So it could be argued that on given hardware you could write a program in language X that uses all the memory and all the available time, in which case it does not necessarily follow that you could write the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precise... (read more)
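The equivalence argument above can be made concrete with a toy interpreter. Below is a minimal sketch (in Python, chosen arbitrarily as the "host" language) that simulates Brainfuck, a famously tiny Turing-complete language; the sample program and the step limit are illustrative, not part of any formal claim.

```python
def run_bf(code, steps=100000):
    """Interpret a Brainfuck program: a proof-of-concept that one
    Turing-complete language (Python) can simulate another."""
    # Precompute matching brackets so loops can jump both ways.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    while pc < len(code) and steps > 0:
        c = code[pc]
        if c == '+':   tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:  pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0:  pc = jumps[pc]
        pc += 1
        steps -= 1
    return ''.join(out)

# "A" is ASCII 65: increment one cell 65 times, then print it.
print(run_bf('+' * 65 + '.'))  # → A
```

The step limit is exactly the "technical detail" mentioned above: the simulation is correct but slower, and on bounded hardware the host eventually runs out of time or tape.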
What does that statement mean in the context of thoughts?
That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my "verbal manipulation" module to do formal logic, that doesn't mean I have a formal logic module.
Any defects in my ability to repurpose might be specific to me: I might be able to think the thought "A -> B, ~A, therefore ~B" with the flavor of trueness, while another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.
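Incidentally, the inference pattern quoted above is the classic fallacy of denying the antecedent, and the "flavor of falseness" can be checked mechanically by brute-forcing the truth table; a quick sketch:

```python
from itertools import product

# Check whether "A -> B, ~A, therefore ~B" is a valid inference by
# enumerating the truth table: collect every assignment where both
# premises hold but the conclusion fails.
counterexamples = [
    (a, b)
    for a, b in product([False, True], repeat=2)
    if ((not a) or b)      # premise: A -> B
    and (not a)            # premise: ~A
    and not (not b)        # conclusion ~B fails, i.e. B is true
]
print(counterexamples)  # → [(False, True)]
```

The single counterexample (A false, B true) is why the inference "tastes false" to anyone who has internalized the truth table.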
Aren't there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?
It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn't stupid.
Have you ever tried to teach math to anyone who is not good at math? In my youth I once tutored a woman who was poor, but motivated enough to pay $40/session. A major obstacle was teaching her how to calculate (a^b)^c and getting her to reliably notice that minus times minus equals plus. Despite my attempts at creative physical demonstrations of the notion of a balanced scale, I couldn't get her to really understand the notion of doing the same things to both sides of a mathematical equation. I don't think she would ever understand what was going on in matrix calculus, period, barring "teaching methods" that involve neural reprogramming or gain of additional hardware.
Your claim is too large for the evidence you present in support of it.
Teaching someone math who is not good at math is hard, but "will in all probability never understand matrix calculus"!? I don't think you're using the Try Harder.
Assume teaching is hard (a list of weak evidence: it's a three-year undergraduate degree; humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem to be generally ignored by professional practitioners; it's massively subject to the typical mind fallacy, and most practitioners don't know that fallacy exists). That you, "in your youth" (without having studied teaching), "once" tutored a woman whom you couldn't teach very well… doesn't support any very strong conclusion.
It seems very likely to me that Omega could teach matrix calculus to someone with IQ 90 given reasonable time and motivation from the student. One of the things I'm willing to devote significant resources to in the coming years is making education into a proper science. Given the tools of that proper science I humbly submit that you could teach your former student a lot. Track the progress of the Khan Academy for some promising developments in the field.
I fear you're committing the typical mind fallacy. The dyscalculic could simulate a Turing machine, but all of mathematics, including basic arithmetic, is whaargarbl to them. They're often highly intelligent (though of course the diagnosis is "intelligent elsewhere, unintelligent at maths"), good at words and social things, but literally unable to calculate 17+17 more accurately than "somewhere in the twenties or thirties" or "I have no idea" without machine assistance. I didn't believe it either until I saw it.
Obviously the man in the Chinese room lacks understanding, by most common definitions of understanding. It is the room as a system which understands Chinese. (Assuming lookup tables can understand. By functional definitions, they should be able to.)
http://languagelog.ldc.upenn.edu/nll/?p=2074
It shocked the hell out of me, too.
Extremely large numbers.
(among other things)
Just visualize n dimensions, and then set n = 4.
Barbara Alice Mann
I agree with the necessity of making life more fair, and disagree with the connotational noble Pocahontas lecturing a sadistic western patriarch. (Note: the last three words are taken from the quote.)
Agree that that looks an awful lot like an abuse of the noble savage meme. Barbara Alice Mann appears to be an anthropologist and a Seneca, so that's at least two points where she should really know better -- then again, there's a long and more than somewhat suspect history of anthropologists using their research to make didactic points about Western society. (Margaret Mead, for example.)
Not sure I entirely agree re: fairness. "Life's not fair" seems to me to succinctly express the very important point that natural law and the fundamentals of game theory are invariant relative to egalitarian intuitions. This can't be changed, only worked around, and a response of "so make it fair" seems to dilute that point by implying that any failure of egalitarianism might ideally be traced to some corresponding failure of morality or foresight.
Unfair is the opposite of fair, not the logical complement. The moon is neither happy nor sad.
That is indeed possible if F is incoherent or has no referent. The assertion seems equivalent to "There's no such thing as fairness".
I'm confused because it was Eliezer who taught me this.
EDIT: I'm now resisting the temptation to tell Eliezer to "read the sequences".
Original parent says, "The world is neither fair nor unfair", meaning, "The world is neither deliberately fair nor deliberately unfair", and my comment was meant to be interpreted as replying, "Of course the world is unfair - if it's not fair, it must be unfair - and it doesn't matter that it's accidental rather than deliberate." Also to counteract the deep wisdom aura that "The world is neither fair nor unfair" gets from counterintuitively violating the (F \/ ~F) axiom schema.
I didn't think I could remove the quote from that attitude about it very effectively without butchering it. I did lop off a subsequent sentence that made it worse.
Don't they usually say it about situations that they could choose to change, to people who don't have the choice?
Introspection tells me this statement usually gets trotted out when the cost of achieving fairness is too high to warrant serious consideration.
EDIT: Whoops, I just realised that my imagination only outputted situations involving adults. When imagining situations involving children I get the opposite of my original claim.
The automatic pursuit of fairness might lead to perverse incentives. I have in mind some (non-genetically related) family in Mexico who don't bother saving money for the future because their extended family and neighbours would expect them to pay for food and gifts if they happen to acquire "extra" cash. Perhaps this "Western" patriarchal peculiarity has some merit after all.
Is this really about fairness? Seems like different people agree that fairness is a good thing, but use different definitions of fairness. Or perhaps the word fairness is often used to mean "applause lights of my group".
For one person, fairness means "everyone has food to eat"; for another, fairness means "everyone pays for their own food". Then proponents of each definition accuse the others of not being fair -- the debate is framed as if the problem were not different definitions of fairness, but rather our group caring about fairness while the other group ignores it; which of course means that we are morally right and they are morally wrong.
Mad Men, "My Old Kentucky Home"
Another good one from Don Draper:
Arthur C. Clarke
The trouble is, the most problematic kinds of faith can survive it just fine.
Which leads us to today's Umeshism: "Why are existing religions so troublesome? Because they're all false, so the only ones that still exist are the ones dangerous enough to survive the truth."
The other day I was thinking about Discworld, and then I remembered this and figured it would make a good rationality quote...
-- Terry Pratchett, Feet of Clay
Reminded of a quote I saw on TV Tropes of a MetaFilter comment by ericbop:
Douglas Adams, Dirk Gently's Holistic Detective Agency
-- C. S. Lewis
-G. K. Chesterton, The Curse of the Golden Cross
-- Douglas Adams. The Long Dark Tea-Time of the Soul (1988) p.169
I can't find the quote easily (it's somewhere in God, No!), but Penn Jillette has said that one aspect of magic tricks is the magician putting in more work to set them up than anyone sane would expect.
I'm moderately sure that he's overestimating how clearly the vast majority of people think about what's needed to make a magic trick work.
His partner Teller says the same thing here:
Edit: That trick is 19 minutes and 50 seconds into this video.
Choosing something that's "too obvious" out of a large search space can work if you're playing against a small number of competitors, but when there are millions of people involved, not only are some of them going to un-ironically choose "1-2-3-4-5-6", but more than one person will choose it for the same reason it appeals to you.
The ghost of Parnell is Far, the presentation to the Queen is Near?
On politics as the mind-killer:
-- Julian Sanchez (the whole post is worth reading)
We've reached the point where the weather is political, and so are third person pronouns.
Tell that to Socrates.
"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It is shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."
Frank Herbert, Dune
It took me years to learn not to feel afraid due to a perceived status threat when I was having a hard time figuring something out.
A good way to make it hard for me to learn something is to tell me that how quickly I understand it is an indicator of my intellectual aptitude.
Interesting article about a study on this effect:
This seems like a more complicated explanation than the data supports. It seems simpler, and equally justified, to say that praising effort leads to more effort, which is a good thing on tasks where more effort yields greater success.
I would be interested to see a variation on this study where the second-round problems were engineered to require breaking of established first-round mental sets in order to solve them. What effect does praising effort after the first round have in this case?
Perhaps it leads to more effort, which may be counterproductive for those sorts of problems, and thereby lead to less success than emphasizing intelligence. Or, perhaps not. I'm not making a confident prediction here, but I'd consider a praising-effort-yields-greater-success result more surprising (and thus more informative) in that scenario than the original one.
Dupe
-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)
David Pearce
When some Lesswrong-users use 'metaphysics', they mean other people's metaphysics. This is much like how some Christians use the term 'religion'.
-- Marvin Minsky, The Society of Mind
Johan Liebert, Monster
Alfred North Whitehead, “An Introduction to Mathematics” (thanks to Terence Tao)
On specificity and sneaking in connotations; useful for the liberal-minded among us:
How about:
Someone who, following an honest best effort to evaluate the available evidence, concludes that some of the beliefs that nowadays fall under the standard definition of "racist" nevertheless may be true with probabilities significantly above zero.
Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people, and whose conclusion happens to reflect negatively on this person or group in some way. (Or, alternatively, someone who doesn't believe that making such inferences is grossly immoral as a matter of principle.)
Both (1) and (2) fall squarely under the common usage of the term "racist," and yet I don't see how they would fit into the above cited classification.
Of course, some people would presumably argue that all beliefs in category (1) are in fact conclusively proven to be false with p~1, so it can be only a matter of incorrect conclusions motivated by the above listed categories of racism. Presumably they would also claim that, as a well-established general principle, no correct inferences in category (2) are ever possible. But do you really believe this?
It's not an improbable claim so much as a nigh-unfalsifiable claim.
I mean, imagine the following conversation between two hypothetical people, arbitrarily labelled RZ and EN here:
EN: By finding enough "code words" you can make any criticism of Obama racist.
RZ: What about this criticism?
EN: By declaring "epic", "confirmation mess", and "death blow" to be racist "code words", you can make that criticism racist.
RZ: But "epic", "confirmation mess", and "death blow" aren't racist code words!
EN: Right. Neither is "food stamps".
Of course, one way forward from this point is to taboo "code word" -- for example, to predict that an IAT would find stronger associations between "food stamps" and black people than between "epic" and black people, but would not find stronger associations between "food stamps" and white people than between "epic" and white people.
Marvin Minsky
--Nietzsche
"The mind commands the body and it obeys. The mind orders itself and meets resistance."
-St Augustine of Hippo
Augustine has obviously never tried to learn something which requires complicated movement, or at least he didn't try it as an adult.
-George Orwell
G. K. Chesterton
Zach Wiener's elegant disproof:
(Although to be fair, it's possible that the disproof fails because "think of the strangest thing that's true" is impossible for a human brain.)
It also fails in the case where the strangest thing that's true is an infinite number of monkeys dressed as Hitler. Then adding one doesn't change it.
More to the point, the comparison is more about typical fiction, rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.
In real life the major players are immune to mindreading, can communicate securely and instantaneously worldwide, and have tens of thousands of people working under them. You are, ironically, overlooking the strangeness of reality.
Conservation of detail may be a valid argument though.
This quote seems relevant:
G. H. Hardy, upon receiving a letter containing mathematical formulae from Ramanujan
-Tim Ferriss, The 4-Hour Workweek
-Robert Kurzban, Why Everyone (Else) is a Hypocrite: Evolution and the Modular Mind
— Jack Vance, The Languages of Pao
-Game of Thrones (TV show)
-- Isuna Hasekura, Spice and Wolf vol. 5 ("servant" is justified by the medieval setting).
--Jonathan Haidt, source
I first encountered this in a physics newsgroup, after some crank was taking some toy model way too seriously:
Thaddeus Stout Tom Davidson
(I remembered something like "if you pull them too much, they break down", actually...)
My old physics professor David Newton (yes, apparently that's the name he was born with) on how to study physics.
--Some AI Koans, collected by ESR
-- Mark Rippetoe, Starting Strength
Yoshinori Kitase
-- Christina Rossetti, Who has seen the Wind?
A shortcut for making less-biased predictions, taking base averages into account.
Regarding this problem: "Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?"
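The shortcut being referenced (Kahneman's corrected intuitive prediction) amounts to one line of arithmetic: anchor on the base average and move toward the intuitive estimate only in proportion to the evidence's correlation with the outcome. A sketch, with entirely hypothetical numbers:

```python
def regressive_prediction(intuitive, mean, correlation):
    """Kahneman-style corrected prediction: start from the base
    average and move toward the intuitive guess only as far as the
    evidence's correlation with the outcome justifies."""
    return mean + correlation * (intuitive - mean)

# Hypothetical numbers: precocious reading suggests a GPA of 3.8,
# the campus average GPA is 3.0, and early reading might correlate
# with college GPA at something like r = 0.3.
print(round(regressive_prediction(3.8, 3.0, 0.3), 2))  # → 3.24
```

With zero correlation the prediction collapses to the base rate, and with perfect correlation it equals the intuitive guess — which is exactly the bias the shortcut corrects.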
--Razib Khan, source
-- Ray Bradbury, Fahrenheit 451
I'll be sticking around a while, although I'm not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it's beautiful). It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.
Also I doubt that I would be able to resist commenting even if I wanted to. That's probably mostly it.
Tips for dealing with people with big egos:
I'll add to this that actually paying attention to wedrifid is instructive here.
My own interpretation of wedrifid's behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what's going on,
2) correctly recognizing that attempts to lower someone's status are attacks, and
3) honoring the obligations of implicit social alliances when an ally is attacked.
I endorse this and have been trying to get better about #3 myself.
Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.
The phrase "social alliances" makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?
If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)
I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this... (read more)
Start to worry if you begin to feel morally obliged to engage in activity 'Z' that neither you, Sam or Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.
Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.
It wasn't quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics... hm.
I remain uncomfortable discussing the specifics in public.
This discussion is off-topic for the "Rationality Quotes" thread, but...
If you're interested in an easy way to gain karma, you might want to try an experimental method I've been kicking around:
Take an article from Wikipedia on a bias that we don't have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer's more straightforward posts on a bias, examples first.
My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.
Oh, you want a Quest, not a goal. :-)
In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.
Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.
One day I will write "How to karmawhore with LessWrong comments" if I can work out how to do it in such a way that it won't get -5000 within an hour.
I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) whom LW would trust hold on to it in secret.
Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.
Once that's done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a "strategy" onto a run of comments that happened to succeed.
Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)
Nitpick: cryptography solves this much more neatly.
Of course, people could accuse you of having an efficient way of factorising numbers, but if you do karma is going to be the least of anyone's concerns.
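For the record, the cryptographic version needs no trusted third party at all: a plain hash commitment lets you publish a digest of the strategy document now and reveal the text later, and anyone can check that they match. A minimal sketch using Python's hashlib (the strategy text here is obviously made up):

```python
import hashlib
import secrets

def commit(message: bytes):
    """Commit to a message without revealing it: publish the digest
    now, keep the salted message secret until reveal time."""
    salt = secrets.token_bytes(16)  # blinds low-entropy messages
    digest = hashlib.sha256(salt + message).digest()
    return digest, salt

def verify(digest: bytes, salt: bytes, message: bytes) -> bool:
    """Check a revealed (salt, message) pair against the digest."""
    return hashlib.sha256(salt + message).digest() == digest

strategy = b"1. Post quotes early in the monthly thread. 2. ..."
digest, salt = commit(strategy)           # post `digest` publicly today
print(verify(digest, salt, strategy))     # reveal later → True
print(verify(digest, salt, b"tampered"))  # any edit fails → False
```

The random salt matters: without it, anyone who guesses the strategy text could confirm the guess by hashing it themselves.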