The other rationality quotes thread operates under the rule:

Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.

Lately it seems that quotes from MIRI or CFAR employees are also excluded.

As there are still interesting quotes that happen on LessWrong, Overcoming Bias, in HPMoR, and from MIRI/CFAR employees in general, I think it makes sense to open this thread to provide a place for those quotes.

 


The first terrifying shock comes when you realize that the rest of the world is just so incredibly stupid.

The second terrifying shock comes when you realize that they're not the only ones.

-- Nominull3 here, a nearly six-year-old quote

Tenoke

"Goedel's Law: as the length of any philosophical discussion increases, the probability of someone incorrectly quoting Goedel's Incompleteness Theorem approaches 1"

--nshepperd on #lesswrong

There's a theorem which states that you can never truly prove that.

The probability that someone will say bullshit about quantum mechanics approaches 1 even faster.

At least, the possible worlds in which they don't start collapsing... Or something...

I love that 'bullshit' is now an academic term.

That doesn't say much; perhaps it approaches 1 as 1 - 1/(1 + 1/2 + 1/3 + ... + 1/n)?

I like your example: it implies that the longer the discussion goes, the less likely it is that somebody misquotes G.I.T. in any given statement (or per unit time, etc.). Kinda the opposite of what the intent of the original quote seems to be.
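To spell out that implication (a sketch, under the reading that the quantity above is the probability of at least one misquote somewhere in the first n statements; the harmonic-series form is just the example suggested above, not anything from the original quote): writing

\[
P_n \;=\; 1 - \frac{1}{H_n}, \qquad H_n = \sum_{k=1}^{n} \frac{1}{k} \;\approx\; \ln n,
\]

we do get \(P_n \to 1\), but only logarithmically slowly, and the chance that statement \(n+1\) is the first one to misquote the theorem is

\[
P_{n+1} - P_n \;=\; \frac{1}{H_n} - \frac{1}{H_{n+1}} \;=\; \frac{1}{(n+1)\,H_n H_{n+1}} \;\longrightarrow\; 0,
\]

i.e. the per-statement probability shrinks as the discussion grows.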

Yeah, but it's clear what he's trying to convey: for any event that has some (fixed) epsilon > 0 probability of happening, it's gonna happen eventually if you give it enough chances. Trivially includes the mentioning of Gödel's incompleteness theorems.

However, it's also clear what the intent of the original quote was. The pedantry in this case is fair game, since the quote, in an attempt to sound sharp and snappy and relevant, actually obscures what it's trying to say: that Gödel is brought up way too often in philosophical discussions.

Edit: Removed link, wrong reference.

For any event that has some epsilon > 0 probability of happening, it's gonna happen eventually if you give it enough chances.

This is not true (and you also misapply the Law of Large Numbers here). For example: in a series (one single, continuing series!) of coin tosses, the probability that you get a run of heads at least half as long as the overall length of the series (e.g. ttththtHHHHHHH) is always > 0, but it is not guaranteed to happen, no matter how many chances you give it. Even if the number of coin tosses is infinite (whatever that might mean).
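A quick sketch of why (my own back-of-the-envelope argument, not part of the comment above): let \(A_n\) be the event that the first \(n\) tosses contain a run of heads of length at least \(n/2\). A union bound over the at most \(n\) possible starting positions of such a run gives

\[
P(A_n) \;\le\; n \cdot 2^{-n/2}, \qquad \sum_{n \ge 1} P(A_n) < \infty,
\]

so by the (first) Borel-Cantelli lemma only finitely many of the \(A_n\) occur, with probability 1. Each \(A_n\) has positive probability, yet letting the series run forever does not make the event certain to keep happening.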

Interestingly, I read the original quote differently from you - I thought the intent was to say "any bloody thing will be brought up in a discussion, eventually, if it is long enough, even really obscure stuff like G.I.T.", rather than "Gödel is brought up way too often in philosophical discussions". What did you really mean, nshepperd???

Interestingly, I read the original quote differently from you - I thought the intent was to say "any bloody thing will be brought up in a discussion, eventually, if it is long enough, even really obscure stuff like G.I.T.", rather than "Gödel is brought up way too often in philosophical discussions". What did you really mean, nshepperd???

It was the latter. Also, I am assuming that you haven't heard of Godwin's law, which is what the wording here references.

in a series (one single, continuing series!) of coin tosses, the probability that you get a run of heads at least half as long as the overall length of the series (e.g. ttththtHHHHHHH) is always > 0, but it is not guaranteed to happen, no matter how many chances you give it.

... any event for which you don't change the epsilon such that the sum becomes a convergent series. Or any process with a Markov property. Or any event with a fixed epsilon >0.

That should cover just about any relevant event.
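For reference, the standard result being leaned on here is the second Borel-Cantelli lemma, and it does need the events to be independent (which is where the Markov-property caveat matters): if \(A_1, A_2, \dots\) are independent and

\[
\sum_{n \ge 1} P(A_n) = \infty \qquad \text{(in particular, if } P(A_n) \ge \epsilon > 0 \text{ for every } n\text{)},
\]

then with probability 1 infinitely many of the \(A_n\) occur.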

(and also you mis-apply the Law of large Numbers here)

Explain.

The Law of Large Numbers states that the average of a large number of i.i.d. variables approaches their mathematical expectation. Roughly speaking, "big samples reliably reveal properties of the population".

It doesn't state that "everything can happen in large samples".
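For reference, the usual statement: for i.i.d. random variables \(X_1, X_2, \dots\) with finite mean \(\mu = E[X_1]\),

\[
\bar{X}_n \;=\; \frac{1}{n} \sum_{i=1}^{n} X_i \;\longrightarrow\; \mu
\]

in probability (weak law) and almost surely (strong law). By itself this says nothing about whether any particular rare event is bound to turn up in a long enough run.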

Thanks. Memory is more fragile than I thought; wrong folder. Updated.

Fhyve

"How do you not have arguments with idiots? Don't frame the people you argue with as idiots!"

-- Cat Lavigne at the July 2013 CFAR workshop

If idiots do exist, and you have reason to conclude that someone is an idiot, then you shouldn't deny that conclusion -- at least if you subscribe to epistemic primacy: the view that forming true beliefs takes precedence over other priorities.

The quote is suspiciously close to being a specific application of "Don't like reality? Pretend it's different!"

That quote summarizes a good amount of material from a CFAR class, and, presented in isolation, its intended meaning is not as clear.

The idea is that people are too quick to dismiss people they disagree with as idiots, without really forming accurate beliefs, or even real anticipation-controlling beliefs. So, if you find yourself thinking this person you are arguing with is an idiot, you are likely to get more out of the argument by trying to understand where the person is coming from and what their motivations are.

So, if you find yourself thinking this person you are arguing with is an idiot, you are likely to get more out of the argument by trying to understand where the person is coming from and what their motivations are.

Having spent some time on the 'net I can boast of considerable experience of arguing with idiots.

My experience tells me that it's highly useful to determine whether the one you're arguing with is an idiot or not as soon as possible. One reason is that it makes it clear whether the conversation will evolve in an interesting direction or in the kicks-and-giggles direction. It is quite rare for me to take an interest in where a 'net idiot is coming from or what his motivations are -- because there are so many of them.

Oh, and the criteria for idiocy are not what one believes or whether his beliefs match mine. The criteria revolve around the ability (or inability) to use basic logic, a tendency to hysterics, competency in reading comprehension, and other things like that.

[anonymous]

Yes, but fishing out non-idiots from, say, Reddit's front page is rather futile. Non-idiots tend to flee from idiots anyway, so just go where the refugees generally go.

LW as a refugee camp... I guess X-D

The quote is suspiciously close to being a specific application of "Don't like reality? Pretend it's different!"

That can be a useful method of learning. Pretend it's different, act accordingly, and observe the results.

This is more to address the common thought process "this person disagrees with me, therefore they are an idiot!"

Even if they aren't very smart, it is better to frame them as someone who isn't very smart rather than with the directly derogatory term "idiot."

This is more to address the common thought process "this person disagrees with me, therefore they are an idiot!"

(Certainly not my criterion, nor that of the LW herd/caravan/flock, a couple stragglers possibly excepted.)

a couple stragglers possibly excepted.

I think you missed a trick here...

The term 'idiot' contains a value judgement that a certain person isn't worth arguing with. It's more than just seeing the other person as having an IQ of 70.

Trying to understand the world view of someone with an IQ of 70 might still provide for an interesting conversation.

The term 'idiot' contains a value judgement that a certain person isn't worth arguing with.

Except that often it can't be avoided / is "worth" it, if only for status/hierarchy-squabbling reasons (i.e. even when the arguments' contents don't matter).

Except that often it can't be avoided / is "worth" it, if only for status/hierarchy-squabbling reasons (i.e. even when the arguments' contents don't matter).

That's why it's not a good idea to think of others as idiots.

Indeed, just as it can be smart to "forget" when you have a terminal condition. The "pretend it's different" from my ancestor comment sometimes works fine from an instrumental rationality perspective, just not from an epistemic one.

Whether someone is worth arguing with is a subjective value judgement.

And given your values you'd ideally arrive at those through some process other than the one you use to judge, say, a new apartment?

I think that trying to understand the worldview of people who are very different from you is often useful.

Trying to explain ideas in a way that you never explained them before can also be useful.

I agree. I hope I didn't give the impression that I didn't. Usefulness belongs to instrumental rationality more so than to epistemic rationality.

That's ... not quite what "framing" means.

"How do you not have arguments with idiots? Don't frame the people you argue with as idiots!"

I predict the opposite effect. Framing idiots as idiots tends to reduce the amount that you end up arguing (or otherwise interacting) with them. If a motivation for not framing people as idiots is required look elsewhere.

This doesn't look as bad as it looks like it looks.

Qiaochu_Yuan

"Taking up a serious religion changes one's very practice of rationality by making doubt a disvalue." ~ Orthonormal

The Arguments From My Opponent Believes Something are a lot like accusations of arrogance. They’re last-ditch attempts to muddy up the waters. If someone says a particular theory doesn’t explain everything, or that it’s elitist, or that it’s being turned into a religion, that means they can’t find anything else.

Otherwise they would have called it wrong.

-- Scott Alexander, On first looking into Chapman’s “Pop Bayesianism”

I strongly recommend not using stupid.

-- NancyLebovitz

You can find the comment here, but it is even better when taken completely out of context.

Now that we've beaten up on these people all over the place, maybe we should step up to the plate and say "how can we do better?".

-- Robin Hanson, in a Bloggingheads.tv conversation with Daniel Sarewitz. Sarewitz was spending a lot of time criticizing naive views which many smart people hold about human enhancement.

Experience is the result of using the computing power of reality.

-- Roland

[anonymous]

Quoting yourself is probably a bit too euphoric even for this thread.

Humans are not adapted for the task of scientific research. Humans are adapted to chase deer across the savanna, throw spears into them, cook them, and then—this is probably the part that takes most of the brains—cleverly argue that they deserve to receive a larger share of the meat.

It's amazing that Albert Einstein managed to repurpose a brain like that for the task of doing physics.

Not a very advanced idea, and most people here probably already realised it -- I did too -- but this essay uniquely managed to strike me with the full weight of just how massive the gap really is.

I used to think "human brains aren't natively made for this stuff, so just take your biases into account and then you're good to go". I did not think "my god, we are so ridiculously underequipped for this."

Perhaps the rule should be "Rationality Quotes from people associated with LessWrong that they made elsewhere", which would be useful, but not simply duplicate other parts of LW.

I think the rule should be simply the exact converse of the existing Rationality Quotes rule, so every good quote has a home in exactly one such place.

How about a waiting period? I'm thinking that quotes from LW have to be at least 3 years old. It's a way of keeping good quotes from getting lost in the past while not having too much redundancy here.

tim

I think three years is too long. I would imagine that there are a large number of useful quotes that are novel to many users that are much less than three years old.

Personally I would say we should just let it ride as is with no restrictions. If redundancy and thread bloat become noticeable issues then yeah, we might want to set up a minimum age for contributions.

I think the rule should be simply the exact converse of the existing Rationality Quotes rule, so every good quote has a home in exactly one such place.

This would be ideal. I like the notion of having a place for excellent rationalist quotes but like having the "non-echo chamber" rationality quotes page too.

I think let's see what happens.

It is tempting but false to regard adopting someone else's beliefs as a favor to them, and rationality as a matter of fairness, of equal compromise. Therefore it is written: "Do not believe you do others a favor if you accept their arguments; the favor is to you." -- Eliezer Yudkowsky

For a self-modifying AI with causal validity semantics, the presence of a particular line of code is equivalent to the historical fact that, at some point, a human wrote that piece of code. If the historical fact is not binding, then neither is the code itself. The human-written code is simply sensory information about what code the humans think should be written.

— Eliezer Yudkowsky, Creating Friendly AI

The rule of derivative validity—“Effects cannot have greater validity than their causes.”—contains a flaw; it has no tail-end recursion. Of course, so does the rule of derivative causality—“Effects have causes”—and yet, we’re still here; there is Something rather than Nothing. The problem is more severe for derivative validity, however. At some clearly defined point after the Big Bang, there are no valid causes (before the rise of self-replicating chemicals on Earth, say); then, at some clearly defined point in the future (i.e., the rise of homo sapiens sapiens) there are valid causes. At some point, an invalid cause must have had a valid effect. To some extent you might get around this by saying that, [e.g.], self-replicating chemicals or evolved intelligences are pattern-identical with (represent) some Platonic valid cause—a low-entropy cause, so that evolved intelligences in general are valid causes—but then there would still be the question of what validates the Platonic cause. And so on.

— Eliezer Yudkowsky, Creating Friendly AI

I have an intuition that there is a version of reflective consistency which requires R to code S so that, if R was created by another agent Q, S would make decisions using Q's beliefs even if Q's beliefs were different from R's beliefs (or at least the beliefs that a Bayesian updater would have had in R's position), and even when S or R had uncertainty about which agent Q was. But I don't know how to formulate that intuition to something that could be proven true or false. (But ultimately, S has to be a creator of its own successor states, and S should use the same theory to describe its relation to its past selves as to describe its relation to R or Q. S's decisions should be invariant to the labeling or unlabeling of its past selves as "creators". These sequential creations are all part of the same computational process.)

— Steve Rayhawk, commenting on Wei Dai's "Towards a New Decision Theory"

Yes, any physical system could be subverted with a sufficiently unfavorable environment. You wouldn't want to prove perfection. The thing you would want to prove would be more along the lines of, "will this system become at least somewhere around as capable of recovering from any disturbances, and of going on to achieve a good result, as it would be if its designers had thought specifically about what to do in case of each possible disturbance?". (Ideally, this category of "designers" would also sort of bleed over in a principled way into the category of "moral constituency", as in CEV.) Which, in turn, would require a proof of something along the lines of "the process is highly likely to make it to the point where it knows enough about its designers to be able to mostly duplicate their hypothetical reasoning about what it should do, without anything going terribly wrong".

We don't know what an appropriate formalization of something like that would look like. But there is reason for considerable hope that such a formalization could be found, and that this formalization would be sufficiently simple that an implementation of it could be checked. This is because a few other aspects of decision-making which were previously mysterious, and which could only be discussed qualitatively, have had powerful and simple core mathematical descriptions discovered for cases where simplifying modeling assumptions perfectly apply. Shannon information was discovered for the informal notion of surprise (with the assumption of independent identically distributed symbols from a known distribution). Bayesian decision theory was discovered for the informal notion of rationality (with assumptions like perfect deliberation and side-effect-free cognition). And Solomonoff induction was discovered for the informal notion of Occam's razor (with assumptions like a halting oracle and a taken-for-granted choice of universal machine). These simple conceptual cores can then be used to motivate and evaluate less-simple approximations for situations where the assumptions about the decision-maker don't perfectly apply. For the AI safety problem, the informal notions (for which the mathematical core descriptions would need to be discovered) would be a bit more complex -- like the "how to figure out what my designers would want to do in this case" idea above. Also, you'd have to formalize something like our informal notion of how to generate and evaluate approximations, because approximations are more complex than the ideals they approximate, and you wouldn't want to need to directly verify the safety of any more approximations than you had to. (But note that, for reasons related to Rice's theorem, you can't (and therefore shouldn't want to) lay down universally perfect rules for approximation in any finite system.)

Steve Rayhawk

[anonymous]

Beware of self-fulfilling thoughts: thoughts the truth conditions of which are subsets of their existence conditions.

-Luke, Pale Blue Dot

[anonymous]

Stirring quotes from this video about the Singularity Institute (now MIRI):

It's very hard to predict when you're going to get a piece of knowledge you don't have now - EY

paraphrase: nanotechnology is about the future of the material world, AI is about the future of the information world - a female SI advisor with nanotech experience - sounded very intelligent

(speaking about SI [now MIRI] and that they are seen as cutting edge/beyond the pale of respectability): "...in my experience it's only by pushing things beyond the pale of respectability that you get things done and push the dial" - Thiel

paraphrase: universities can only have near-term goals (up to 7 years max, usually from 3 to 5 years), so non-profits can have longer-term goals, greater than 10 years - Thiel

IMO synthetic biology constitutes a third domain of advancement - the future of the living world

IMO synthetic biology constitutes a third domain of advancement - the future of the living world

Isn't that a subset of the material world? I imagine nanotechnology is going to play a part in medicine and the like too, eventually.
Of course, more than one thing can be about the future of the somethingsomething world.

[anonymous]

Anything is a subset of another thing in one dimension or another.

[anonymous]

21st-century Western males are shocked by the idea of rape because it violates cultural assumptions about gentlemanly conduct and the rules of how men compete among themselves for women; so another possibility I was wondering about is if, indeed, men would simply be more shocked by the whole idea than women. It just wasn't clear from the comments whether this was actually the case, or if my female readers were so offended as to not even bother commenting.

EY - Interlude with the Confessor

EY is right according to contemporary theories:

Structural inequality encompasses the lower status of women in our community, lower rates of pay, and underrepresentation of women in leadership positions. Societies with greater structural inequality have higher levels of violence against women. Normative inequality refers to attitudes and beliefs that support male dominance and male entitlement. Men who perpetrate violence against women are more likely to hold these attitudes. [2]

Though he's making a very different point, I'd like to point out something else, inspired by this piece, that I don't feel would fit in with the narrative in the generic thread.

In my opinion, violence against men -- or intimate partner violence as a gender-neutral construct -- is equally important but more neglected, and yet, judging from a more neutral piece, just as tractable as violence against women.

To satisfy anyone's curiosity, I identify neither as a feminist, nor as a men's rights activist, nor as a humanist, but as a rationalist.

If I missed something along the line, I'm really willing to learn.

kamenin on Collapse Postulates

Jonvon, there is only one human superpower. It makes us what we are. It is our ability to think. Rationality trains this superpower, like martial arts trains a human body. It is not that some people are born with the power and others are not. Everyone has a brain. Not everyone tries to train it. Not everyone realizes that intelligence is the only superpower they will ever have, and so they seek other magics, spells and sorceries, as if any magic wand could ever be as powerful or as precious or as significant as a brain.

Eliezer Yudkowsky