All of ChrisHallquist's Comments + Replies

The Best Textbooks on Every Subject

On philosophy, I think it's important to realize that most university philosophy classes don't assign textbooks in the traditional sense. They assign anthologies. So rather than read Russell's History of Western Philosophy or The Great Conversation (both of which I've read), I'd recommend something like The Norton Introduction to Philosophy.

Here is the original thread proposing this as a solution to the prophecy, and here is the comment by Eliezer Yudkowsky confirming that he was influenced by that thread.

Harry Potter and the Methods of Rationality discussion thread, January 2015, chapter 103

OH MY GOD. THAT WAS IT. THAT WAS VOLDEMORT'S PLAN. RATIONAL!VOLDEMORT DIDN'T TRY TO KILL HARRY IN GODRIC'S HOLLOW. HE WAITED ELEVEN YEARS TO GIVE HARRY A GRADE IN SCHOOL SO THAT ANY ASSASSINATION ATTEMPT WOULD BE IN ACCORDANCE WITH THE PROPHECY.

3buybuydandavis7yYes. Various theories have Q trying to build up Harry as appearing to be the savior of the magical world. Q tends to have the smug psychotic smiles when he is putting something over on someone. Harry thinks he has a Hallmark moment, while Q is just gloating over the "Mission Accomplished" sign in his head.
2014 Less Wrong Census/Survey

Duplicate comment, probably should be deleted.

2JoachimSchipper7yI assume that TheAncientGeek has actually submitted the survey; in that case, their comment is "proof" that they deserve karma.
2014 Less Wrong Census/Survey

Agreed. I actually looked up tax & spending for UK vs. Scandinavian countries, and they aren't that different. It may not be a good distinction.

5[anonymous]7yBut IIRC the way the tax money is spent is very different in the US vs in Scandinavia (and I'd guess the UK is somewhere in between): in the former it's mostly spent on means-tested transfer payments and in the latter it's mostly spent on in-kind services, such as healthcare and education, that anyone can (in principle) avail of.
2014 Less Wrong Census/Survey

I thought of this last year after I completed the survey, and rated anti-agathics less probable than cryonics. This year I decided cryonics counted, and rated anti-agathics 5% higher than cryonics. But it would be nice for the question to be clearer.

2014 Less Wrong Census/Survey

Done, except for the digit ratio, because I do not have access to a photocopier or scanner.

Non-standard politics

Liberal here; I think my major heresy is being pro-free trade.

Also, I'm not sure if there's actually a standard liberal view of zoning policy, but it often feels like the standard view is that we need to keep restrictive zoning laws in place to keep out those evil gentrifiers, in which case my support for looser zoning regulations is another major heresy.

You could argue I should call myself a libertarian, because I agree with the main thrust of Milton Friedman's book Capitalism and Freedom. However, I suspect a politician running on Friedman's platform today wou...

2Barry_Cotter7yHe supported a large negative income tax for those on the lowest (earned) incomes, tapering off to zero, then positive as earned income increased. This is really very far from a guaranteed basic income.
6Azathoth1237yDepends, is this in addition to or in place of the existing welfare state? Friedman's position was "in place of", someone running on that position today would probably be considered a "heartless fascist".
3fubarobfusco7yGood thing! We're going to end up in a world where robots do the poor-people jobs. (Just as we are now in a world where machines do the horse and ox jobs, like plowing and pulling carriages.) I for one would prefer that the poor people not starve to death as a result.
question: the 40 hour work week vs Silicon Valley?

and anyone smart has already left the business since it's not a good way of making money.

Can you elaborate? The impression I've gotten from multiple converging lines of evidence is that there are basically two kinds of VC firms: (1) a minority that actually know what they're doing, make money, and don't need any more investors and (2) the majority that exist because lots of rich people and institutions want to be invested in venture capital, can't get in on investing with the first group, and can't tell the two groups apart.

A similar pattern appears to ...

Could Robots Take All Our Jobs?: A Philosophical Perspective

Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.

Announcing The Effective Altruism Forum

I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor.

Ah... you just resolved a bit of confusion I didn't know I had. Eliezer often seems quite wise about "how to manage a community" stuff, but also strikes me as a bit too ban-happy at times. I had thought it was just overcompensation in response to a genuine problem, but it makes a lot more sense as coming from a context where more sophisticated ways of promoting good content aren't available.

[link] [poll] Future Progress in Artificial Intelligence

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.

I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at x2 speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.

Consider giving an explanation for your deletion this time around. "Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild"

I like the idea of this fanfic, but it seems like it could have been executed much better.

EDIT: Try re-writing later? As the saying goes, "Write drunk; edit sober."

4Will_Newsome7yThat's what I did, actually. Maybe I should write sober too. But that Kentucky bourbon was just so inspiring.
Self-Congratulatory Rationalism

So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)

0TheAncientGeek7yIf what philosophers specialise in is clarifying questions, they can be trusted to get the question right. A typical failure mode of amateur philosophy is to substitute easier questions for harder ones.
9Protagoras7yAlso, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can't really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that's also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).
[moderator action] Eugine_Nier is now banned for mass downvote harassment

Have you guys given any thought to doing pagerankish stuff with karma?

Can you elaborate more? I'm guessing you mean people with more karma --> their votes count more, but it isn't obvious how you do that in this context.

6gwern7yEver since https://en.wikipedia.org/wiki/Advogato there have been a lot of proposed trust metrics. Many of them function like Pagerank: you start off with a set of 'seed' users and then propagate influence based on how well users match them.
5IlyaShpitser7yI agree, it is not obvious. Unlike morality though, this seems like the right application area for pagerank ideas. Example: if you want to know about someone in academia, you ask the top 20 people in a field to get a sensible idea. So it seems worthwhile to think about/experiment with. I think one would need to iterate, I don't think one can get a sensible system from the armchair.
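To make gwern's sketch concrete, here is a minimal toy version of seed-anchored trust propagation in the personalized-PageRank style (hypothetical Python; the endorsement graph, damping factor, and seed choice are all invented for illustration — Advogato's actual metric is a network-flow computation, not PageRank):

```python
import numpy as np

def trust_scores(adjacency, seeds, damping=0.85, iterations=50):
    """Propagate trust from 'seed' users through an endorsement graph.

    adjacency[i][j] = 1 means user i endorses (e.g. upvotes) user j.
    """
    A = np.array(adjacency, dtype=float)
    n = len(A)
    # Split each user's outgoing trust evenly among everyone they endorse.
    degree = A.sum(axis=1, keepdims=True)
    A = np.divide(A, degree, out=np.zeros_like(A), where=degree > 0)
    # Unlike vanilla PageRank, the restart mass goes to the seed users,
    # so trust stays anchored to hand-picked trustworthy accounts.
    restart = np.zeros(n)
    restart[list(seeds)] = 1.0 / len(seeds)
    scores = restart.copy()
    for _ in range(iterations):
        scores = damping * (scores @ A) + (1 - damping) * restart
    return scores

# Toy graph: user 0 is the seed; user 3 is only reachable via user 2.
graph = [
    [0, 1, 1, 0],  # seed endorses users 1 and 2
    [0, 0, 1, 0],  # user 1 endorses user 2
    [0, 1, 0, 1],  # user 2 endorses users 1 and 3
    [0, 0, 0, 0],  # user 3 endorses nobody (their trust simply leaks away)
]
print(trust_scores(graph, seeds=[0]))  # scores fall off with distance from the seed
```

Votes could then be weighted by the voter's trust score, which is one way to cash out "people with more karma --> their votes count more" while keeping a sockpuppet ring from voting itself into influence.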
[moderator action] Eugine_Nier is now banned for mass downvote harassment

Everyone following the situation knew it was Eugine. At least one victim named him publicly. Sometimes he was referred to obliquely as "the person named in the other thread" or something like that, but the people who were following the story knew what that meant.

3David_Gerard7yAnd I just dropped from 9800ish to 8909. But still at +269 last 30 days. What?
[moderator action] Eugine_Nier is now banned for mass downvote harassment

I'm glad this was done, if only to send a signal to the community that something is being done, but you have a point that this is not an ideal solution and I hope a better one is implemented soon.

Downvote stalkers: Driving members away from the LessWrong community?

I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."

If that's the intended message, I submit that when someone is a public figure, who writes and speaks about controversial subjects and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.

-4[anonymous]7yYou could have simply not responded. It wasn't, no. It was a reminder to everyone else of XiXi's general MO, and the benefit he gets from convincing others that EY is a megalomaniac, using any means necessary.
Downvote stalkers: Driving members away from the LessWrong community?

The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).

My guess is that he cares not nearly as much about LW in general now as he used to...

This. Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major ...

7XiXiDu7yHe receives a massive number of likes there, no matter what he writes. My guess is that he needs that kind of feedback, and he doesn't get it here anymore. Recently he requested that a certain topic should not be mentioned on the HPMOR subreddit, or otherwise he would go elsewhere. On Facebook he can easily ban people who mention something he doesn't like.
Truth: It's Not That Great

...my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".

Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you're imitating them on are the cause of their success? Are the people you're imitating more successful than other people who don't do those things, but who you don't interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?

It is indeed a cue to look for motivated reasoning. I am not neglecting to do that. I have scrutinized extensively. It is possible to be motivated by very simple emotions while constraining the actions you take to the set endorsed by deliberative reasoning.

The observation that something fits the status-seeking patterns you've cached is not strong evidence that nothing else is going on. If you can write off everything anybody does by saying "status" and "signaling" without making predictions about their future behavior--or even looking i...

LessWrong as social catalyst

I love how understated this comment is.

Thanks for posting this. I don't normally look at the posters' names when I read a comment.

Request for concrete AI takeover mechanisms

People voluntarily hand over a bunch of resources (perhaps to a bunch of different AIs) in the name of gaining an edge over their competitors, or possibly for fear of their competitors doing the same thing to gain such an edge. Or just because they expect the AI to do it better.

Open Thread April 8 - April 14 2014

Maximizing your chances of getting accepted: Not sure what to tell you. It's mostly about the coding questions, and the coding questions aren't that hard—"implement bubble sort" was one of the harder ones I got. At least, I don't think that's hard, but some people would struggle to do that. Some people "get" coding, some don't, and it seems to be hard to move people from one category to another.
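For reference, a bare-bones bubble sort in Python — a sketch of roughly the level of code that question is asking for:

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until the list is sorted."""
    n = len(items)
    for i in range(n):
        swapped = False
        # After i passes, the last i elements are already in their final places.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # No swaps on a full pass means we're done early.
            break
    return items

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```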

Maximizing value given that you are accepted: Listen to Ned. I think that was the main piece of advice people from our cohort gave people in the...

Effective Effective Altruism Fundraising and Movement-Building

Presumably. The question is whether we should accept that belief of theirs.

A few remarks about mass-downvoting

And the solution to how not to catch false positives is to use some common sense. You're never going to have an automated algorithm that can detect every instance of abuse, but even an instance that is not detectable by automatic means can be detectable if someone with sufficient database access takes a look when it is pointed out to them.

Right on. The solution to karma abuse isn't some sophisticated algorithm. It's extremely simple database queries, in plain English along the lines of "return list of downvotes by user A, and who was downvoted," "return downvotes on posts/comments by user B, and who cast the vote," and "return lists of downvotes by user A on user B."
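In SQL terms those plain-English queries are one-liners. A sketch in Python/sqlite, assuming a hypothetical `votes` table with `voter_id`, `target_author_id`, and `direction` columns (the real LessWrong schema surely differs):

```python
import sqlite3

conn = sqlite3.connect("forum.db")  # hypothetical database
user_a, user_b = "UserA", "UserB"   # placeholder account IDs

# Who user A downvoted, and how often.
downvotes_cast = conn.execute(
    """SELECT target_author_id, COUNT(*) AS n FROM votes
       WHERE voter_id = ? AND direction = -1
       GROUP BY target_author_id ORDER BY n DESC""",
    (user_a,),
).fetchall()

# Who downvoted user B, and how often.
downvotes_received = conn.execute(
    """SELECT voter_id, COUNT(*) AS n FROM votes
       WHERE target_author_id = ? AND direction = -1
       GROUP BY voter_id ORDER BY n DESC""",
    (user_b,),
).fetchall()
```

The third query ("downvotes by A on B") is just the first with an extra `AND target_author_id = ?` filter. As Vaniver notes below, the hard part is interpreting the counts, not computing them.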

-2Vaniver8yAnd then what will you do with that data? If you find that GrumpyCat666 cast most of the downvotes, does that mean that GrumpyCat666 is a karmassassin, or that GrumpyCat666 is one of the gardeners? (I can't find the link now, but early on there was a coded rule to prevent everyone from downvoting more than their total karma. This prevented a user whose name I don't recall, who had downvoted about some massive fraction of all the comments the site had received, from downvoting any more comments, but this was seen as not helpful for the site, since that person was making the junk less visible.)
Rationality Quotes March 2014

Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.

This is a failure mode I worry about, but I'm not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, "oh yeah, my deconversion process was totally slowed down by stuff like that from atheists," but I'd be surprised.

Self-Congratulatory Rationalism

Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going...

Self-Congratulatory Rationalism

Skimming the "disagreement" tag in Robin Hanson's archives, I found I few posts that I think are particularly relevant to this discussion:

The sin of updating when you can change whether you exist

Username explicitly linked to torture vs. dust specks as a case where it makes sense to use torture as an example. Username is just objecting to using torture for general decision theory examples where there's no particular reason to use that example.

Self-Congratulatory Rationalism

But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them.

With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions, which are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accompli...

2Scott Alexander8yI agree that disagreement among philosophers is a red flag that we should be looking for alternative positions. But again, I don't feel like that's strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones? Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that "my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age." So Marshall decided since he couldn't get anyone to fund a study, he would study it on himself, drank a serum of bacteria, and got really sick. Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission "talked about alien babies being adopted by Nancy Reagan", before they made it into legitimate medical journals. I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index..."believes the establishment is ignoring him because of their biases", "believes his discovery will instantly solve a centuries-old problem with no side effects", "does his studies on himself", "studies get published in tabloid rather than journal", but these were just things he naturally felt or had to do because the establishment wouldn't take him seriously and he couldn't do things "right". I think it is much much less than the general public, but I don't
Self-Congratulatory Rationalism

I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best and pay more attention to mainstream academics who actually have PhDs.

FWIW, actual heuristics I use to determine who's worth paying attention to are

  • What I know of an individual's track recor
...

Your heuristics are, in my opinion, too conservative or not strong enough.

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rules out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanism, you can rule out cryonicists because t...

A few remarks about mass-downvoting

Oh, I see now. But why would Eliezer do that? Makes me worry this is being handled less well than Eliezer's public statements indicate.

Self-Congratulatory Rationalism

Plantinga's argument defines God as a necessary being, and assumes it's possible that God exists. From this, and the S5 axioms of modal logic, it follows that God exists. But you can just as well argue, "It's possible the Goldbach Conjecture is true, and mathematical truths are, if true, necessarily true, therefore the Goldbach Conjecture is true." Or even "Possibly it's a necessary truth that pigs fly, therefore pigs fly."
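Schematically — compressing a lot, with $\Diamond$ read as "possibly", $\Box$ as "necessarily", and $G$ as "God exists" — the argument runs:

```latex
\begin{align*}
1.\;& \Diamond\Box G && \text{premise: possibly, God (a necessary being) exists} \\
2.\;& \Diamond\Box p \rightarrow \Box p && \text{theorem of S5} \\
3.\;& \Box G && \text{from 1 and 2} \\
4.\;& G && \text{from 3, since } \Box p \rightarrow p
\end{align*}
```

The parodies work by substituting the Goldbach Conjecture or flying pigs for $G$ in line 1; all the work is done by the innocent-sounding possibility premise.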

(This is as much as I can explain without trying to give a lesson in modal logic, which I'm not confident in my ability to do.)

1cousin_it8yThat's nice, thanks!
Self-Congratulatory Rationalism

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yo...

8Nornagest8yThe -ist suffix can mean several things in English. There's the sense of "practitioner of [an art or science, or the use of a tool]" (dentist, cellist). There's "[habitual?] perpetrator of" or "participant in [an act]" (duelist, arsonist). And then there's "adherent of [an ideology, doctrine, or teacher]" (theist, Marxist). Seems to me that the problem has to do with equivocation between these senses as much as with the lack of an "aspiring". And personally, I'm a lot more comfortable with the first sense than the others; you can after all be a bad dentist. Perhaps we should distinguish between rationaledores and rationalistas? Spanglish, but you get the picture.
A few remarks about mass-downvoting

His assertion that there is no way to check seems to me a better outcome than these posts shouting into the wind that don't get any response.

Did he assert that, exactly? The comment you linked to sounds more like "it's difficult to check." Even that puzzles me, though. Is there a good reason for the powers that be at LessWrong not to have easy access to their own database?

0Douglas_Knight8yMy two paragraphs refer to two different things Eliezer said. The contrast is indicated by the word "but." Tenoke says that he asserted that, exactly. I assume Eliezer was just lying to get him to shut up. There are a lot of reasons not to look directly at the database. But once the person wrote an acceptable query years ago, yes, it should be easy to just try it again.
Self-Congratulatory Rationalism

The right rule is probably something like, "don't mix signaling games and truth seeking." If it's the kind of thing you'd expect in a subculture that doesn't take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it's probably fine.

Self-Congratulatory Rationalism

You're right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I'd emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.

4ialdabaoth8yBorrowing from the "Guess vs. Tell (vs. Ask)" meta-discussion, then, perhaps it would be useful for the community to have an explicit discussion about what kinds of signals we want to converge on? It seems that people with a reasonable understanding of game theory and evolutionary psychology would stand a better chance of deliberately engineering our group's social signals than simply trusting our subconsciouses to evolve the most accurate and honest possible set.
4Jiro8yIt's happened to me again. At one point I lost about 20 karma in a few hours. Now it seems everything I post gets voted down. At an estimated loss of 30 karma per week, I'll end up being forced off the site by August.
Lifestyle interventions to increase longevity

How much have you looked into potential confounders for these things? With the processed meat thing in particular, I've wondered what could be so bad about processing meat, and if this could be one of those things where education and wealth are correlated with health, so if wealthy, well-educated people start doing something, it becomes correlated with health too. In that particular case, it would be a case of processed meat being cheap, and therefore eaten by poor people more, while steak tends to be expensive.

(This may be totally wrong, but it seems like an important concern to have investigated.)

2RomeoStevens8yMy process is to collect a list of confounders by looking at things controlled for in different studies, and then downgrading my estimation of evidence strength if I see obvious ones from the list not mentioned in a study. This is probably not the best way to do this but I haven't come up with anything better yet.
Self-Congratulatory Rationalism

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn...

2blacktrance8yThe danger of this approach is obvious, but it can have benefits as well. You may not know that a particular LessWronger is sane, but you do know that on average LessWrong has higher sanity than the general population. That's a reason to be more charitable.

4Solvent8yThat's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.
Self-Congratulatory Rationalism

Saying

Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?

sort of implies you're updating towards the other's position. If you not only disagree but are totally unswayed by hearing the other person's opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).

-2Gunnar_Zarncke8yBut shouldn't you always update toward the other's position? And if the argument isn't convincing, you can truthfully say that you updated only slightly.
Self-Congratulatory Rationalism

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other

...
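As a toy illustration of that dynamic, here is a small simulation (hypothetical Python; the uniform prior, the two partitions, and the event are a made-up four-state example in the style of Geanakoplos and Polemarchakis, not anything taken from the quoted papers):

```python
from fractions import Fraction

states = [1, 2, 3, 4]                        # possible worlds, uniform common prior
prior = {w: Fraction(1, 4) for w in states}
event = {1, 4}                               # the proposition both agents estimate

# Private information: which states each agent can tell apart.
partition_i = [{1, 2}, {3, 4}]
partition_j = [{1, 2, 3}, {4}]

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(info):
    total = sum(prior[w] for w in info)
    return sum(prior[w] for w in info if w in event) / total

def announce(partition, common, true_w):
    """Announce P(event | own cell, common knowledge); listeners then discard
    every state in which that announcement could not have been made."""
    p = posterior(cell(partition, true_w) & common)
    consistent = set()
    for c in partition:
        info = c & common
        if info and posterior(info) == p:
            consistent |= info
    return p, consistent

true_w, common = 1, set(states)
for _ in range(2):
    for name, part in (("I", partition_i), ("J", partition_j)):
        p, common = announce(part, common, true_w)
        print(f"{name} announces {p}; common knowledge shrinks to {sorted(common)}")
```

Running it, J first announces 1/3 and then jumps to agreement with I at 1/2 — not by averaging toward I's number, but by deducing from I's repeated announcement which information cell I must occupy, which is exactly the "convoluted chains of deduction" point the quote is making.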
1Eugine_Nier8yWei Dai's description is correct, see here [http://lesswrong.com/lw/jq7/selfcongratulatory_rationalism/an78] for an example where the final estimate is outside the range of the initial two. And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.
Self-Congratulatory Rationalism

Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.

1PeterDonis8yIs that because you think it's necessary to Wei_Dai's argument, or just because you would like people to be up front about what they think?
White Lies

Upvoted for publicly changing your mind.

White Lies

Further, the idea that the tribe of Honest Except When I Benefit is the vast majority while Always Honest is a tiny minority is not one that I'll accept without evidence.

Here's one relevant paper: Lying in Everyday Life

I read that paper, and was distressed, so I set about finding other papers to disprove it. Instead I found links to it, and other works that backed it up. I was wrong. Liars are the larger tribe. Thanks for educating me.

Rationality Quotes February 2014

We can't forecast anything so let's construct some narratives..?

I think the point is more "good forecasting requires keeping an eye on what your models are actually saying about the real world."

0Lumifer8yThat's not what the quote expresses. The quote basically says that there must be a dumbed-down version of describing whatever you are doing and you must know it -- otherwise you're clueless. And that just ain't true. Specifically, it's not true in most hard sciences (and in math, too, of course). Krugman, however, used to do research in economics where there is not much hard stuff and even that spectacularly doesn't work. Accordingly, there is nothing which is really complicated but does produce real results (as in, again, hard sciences). Given this, he thinks that really complicated things are just pointless and one must construct narratives -- because that's how economics, basically, exerts its influence. It makes up stories (about growth rates and money and productivity and... ) and if a story is too complicated it's no good. That's fine for economics but a really bad path to take for disciplines which are actually grounded in reality.
White Lies

In addition to mistakes other commenters have pointed out, it's a mistake to think you can neatly divide the world into "defectors" and "non-defectors," especially when you draw the line in a way that classifies the vast majority of the world as defectors.

-2WalterL8yMaybe, but it's not a mistake that I manufactured. The notion that there are honest folks and dishonest folks isn't unique to me. I didn't invent it. Nor is it a fringe view. It is, I would posit, the common position. Further, the idea that the tribe of Honest Except When I Benefit is the vast majority while Always Honest is a tiny minority is not one that I'll accept without evidence. I think the reverse is true. Many sheep, few wolves.
4fubarobfusco8yThose sorts of mistakes are just gonna happen. A lot of folks also still believe that words have meanings (as a one-place function [http://lesswrong.com/lw/ro/2place_and_1place_words/]), that the purpose of language is to communicate statements of fact, and that dishonesty and betrayal can be avoided by not saying any statements that are "technically" false. Someone ought to write up "Five Geek Linguistic Fallacies" to accompany this old thing [http://www.plausiblydeniable.com/opinion/gsf.html]. "I am afraid we are not rid of God because we still have faith in grammar." —Nietzsche
Rationality Quotes February 2014

"Much of real rationality is learning how to learn from others."

Robin Hanson

[This comment is no longer endorsed by its author]
3Said Achmiz8yQuote thread rules say:
Rationality Quotes February 2014

I once talked to a theorist (not RBC, micro) who said that his criterion for serious economics was stuff that you can’t explain to your mother. I would say that if you can’t explain it to your mother, or at least to your non-economist friends, there’s a good chance that you yourself don’t really know what you’re doing.

--Paul Krugman, "The Trouble With Being Abstruse"

-2Lumifer8yThe implication seems anti-rationality. As Noah Smith points out [http://noahpinionblog.blogspot.com/2014/02/explaining-theories-to-mom.html] Um. We can't forecast anything so let's construct some narratives..? :-/
6simplicio8ybig inferential distances usually --> long chain of reasoning --> at least one step is more likely to be wrong
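To put a rough number on simplicio's point: if each of $n$ steps in the chain holds with independent probability $p$, the whole chain holds with probability

```latex
P(\text{chain correct}) = p^{n}, \qquad \text{e.g. } 0.95^{10} \approx 0.60
```

so even quite reliable individual steps compound into a substantial chance that the conclusion fails somewhere along the way.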