Open thread, 25-31 August 2014

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


A few months ago I started using the Ultimate Geography Anki deck after performing quite abysmally on some silly geography quiz that was doing the rounds on Facebook. I now know where all the damn countries are, like an informed citizen of the world. This has proven itself very useful in a variety of ways, not least of which is in reading other material with a geographical backdrop. For example, the chapter in Guns, Germs and Steel on Africa is much more readable if you know where all the African countries are in relation to one another.

(In the process of doing this, coupled with an international event in Sweden, I've learned that the Scandinavian education systems are much, much better than that of the UK at teaching children about the rest of the world)

The geography deck was particularly easy to slip into because it developed an area I already (weakly) knew about. I'm looking for some new Anki content of a similar nature: a cross-domain-application body of knowledge I probably sort-of know a little bit already, that I can comprehensively improve upon.

Suggestions and anecdotes of similar experiences welcome.

Yep, I find the world a much less confusing place since I learned capitals and locations on the map. I had (and to some extent still do have) a mental block on geography, which was ameliorated by it.

Rundown of positive and negative results:

In a similar but lesser way, I found learning English counties (and to an even lesser extent, Scottish counties) made UK geography a bit less intimidating. I used this deck because it's the only one on the Anki website I found that worked on my old-ass phone; it has a few howlers and throws some cities in there to fuck with you, but I learned to love it.

I suspect that learning the dates of monarchs and Prime Ministers (e.g. of England/UK) would have a similar benefit in contextualising and de-intimidating historical facts, but I never finished those decks and haven't touched them in a while, so never reached the critical mass of knowledge that allowed me to have a good handle on periods of British history. I found it pretty difficult to (for example) keep track of six different Georges and map each to dates, so slow progress put me off. Let me know if you're interested and want to set up a pact, e.g. 'We'll both do at least ten cards from each deck a day and report back to the other regularly' or something. In fact that offer probably stands for any readers.

I installed some decks for learning definitions in areas of math that I didn't know, but found memorising decontextualised definitions hard enough that I wasn't motivated to do it, given everything else I was doing and Anki-ing at the time. I still think repeated exposure to definitions might be a useful developmental strategy for math that nobody seems to be using deliberately and systematically, but I'm not sure Anki is the right way to do it, or, if it is, that shooting so far ahead of my current knowledge was the best way to go about it. The same goes for a LaTeX deck I got, having pretty much never used LaTeX and not practising it while learning the deck.

Canadian provinces/territories I have not yet found useful beyond feeling good for ticking off learning the deck, which was enough for me since I did them in a session or two.

Languages Spoken in Each Country of the World (I was trying to do not just country-->languages but country-->languages with proportions of population speaking the languages) was so difficult and unrewarding in the short term that I lost motivation extremely quickly (this was months ago). The mental association between 'Berber' and 'North Africa' has come up a surprising number of times, though. Most recently yesterday night.

Periodic table (symbol <--> name, name <--> number) took lots of time and hasn't been very useful for me personally (I pretty much just learned it in preparation for a quiz). Learning just which elements are in which groups/sections of the Periodic table might be more useful and a lot quicker (since by far the main difficulty was name <--> number).

I relatively often find myself wanting demographic and economic data, e.g. populations of countries, populations of major world cities, populations of UK places, GDPs. Ideally I'd not just do this for major places, since I want to get a good intuitive sense of these figures all the way from very large or major places down to tiny ones.

Similarly if one has a hobby horse it could be useful. Examples off the top of my head (not necessarily my hobby horse): Memorising the results from the LessWrong surveys. Memorising the results from the PhilPapers survey. Memorising data about resource costs of meat production vs. other food production. Memorising failed AGI timeline predictions. Etc.

I found that starting to learn Booker Prize winners on Memrise has let me have a few 'Ah, I recognise that name and literature seems less opaque to me, yay!' moments, but there are probably higher-priority decks for you to learn unless that's more your area.

What about learning a sense of scale, for both time and space?

planets and stars

replies to most common comments to the previous video

sub-atomic to hypothetical multi-universes-- uses pictures and numbers, no zooming. I hadn't realized how much overlap there is in size between the larger moons and smaller planets, and (in spite of having seen many pictures) hadn't registered that nebulas are much bigger than stars.

I'm going to post this before I spend a while noodling around science videos, but it might also be good to work on time scales and getting oriented among geological and historical time periods, including what things were happening at the same time in different parts of the world.

I'm a 4th-year economics undergrad preparing to start applying to PhD programs, and while I've never formally attempted to memorize GDPs, I've found that having a rough idea of where a country's per capita GDP falls is very useful in understanding world news and events (for example, I've noticed that the $8,000-12,000 per year range seems to be the point where the median household gets an internet connection). If you do attempt to go the memorization route, be sure to use PPP-adjusted figures, as non-adjusted numbers tend to systematically underestimate incomes in developing countries.

I did British monarchs last year while on a history kick (which I'm still on). Pro-tip: watch films, television shows and plays featuring said monarchs, as they include salient contemporary historical events. For example, Nigel Hawthorne was the mad George. Hugh Laurie was his son, the Prince Regent, a contemporary of the Duke of Wellington (Stephen Fry), which places him temporally alongside the Napoleonic wars. Colin Firth was Queen Elizabeth II's stuttering dad in The King's Speech. His brother was Mike from Neighbours (or the bad guy from Iron Man 3 if you're under 30) and their dad was Dumbledore.

(It turns out that royal history has plenty of independently interesting features, because it contains a lot of murders and wars and speculation about parentage. Contemporary introductions to historiography emphasise the movement away from history as the deeds of powerful men exercising their will through war and conquest, but the kings and wars are a lot more memorable and easier to place in time than the ephemeral stuff like trade routes and adoption of crops.)

Great effort. If I may suggest a topic without providing a deck, I'd say learn the vocabulary of personal finance. Or more generally, learn the vocabulary of subjects relevant to most lives from time to time, like medicine, law and, well, finance. This helps you search for the right things when needed and helps you communicate with the relevant professionals.

Attempting to learn medical vocabulary without understanding the underlying knowledge base is quite hard. Taking a ready-made deck of corresponding vocabulary usually leads to attempting to learn something without understanding it. That leads to forgotten cards and is ineffective.

Since I don't have the books around me, I'll have to write from memory without specific source. It might have been Decisive but I'm not sure.

In the book the authors described the problem of physicians leading patients with their questions: given the complaint "my stomach hurts", they ask "does it hurt here?", pointing to where they feel the pain should be located. The patient, intimidated by the professional in front of them, affirms the question without supplying the information that it also hurts elsewhere. This is not yet a problem of vocabulary, but it is nonetheless important to keep in mind. The more interesting example was of an old man who described his problem as "feeling dizzy", so physicians tried to treat him for the syndromes that typically occur at higher ages. After some time a physician actually asked him "when do you feel dizzy?", and received the answer "I feel dizzy all the time: when I get up, when I stand in the kitchen, when I read my newspaper." It turns out that what this patient described as "being dizzy" was more along the lines of "feeling confused", and was a symptom of an as-yet-undiagnosed depression over his late wife's passing.

The whole episode could have been avoided if the patient had known how to correctly describe his issue - and if the physicians had been more conscientious in diagnosing a specific issue.

Learning that there is a quale of "being dizzy" and a quale of "feeling confused", and learning to distinguish those two qualia, isn't easy if you don't have those qualia in the first place.

Words are cheap once you have the qualia.

Most people have fairly little awareness of what goes on inside their own body. Furthermore, these days most doctors also lack the ability to perceive that information kinesthetically, and focus instead on various tests and verbal feedback.

Practically, it's also important to think about what information a doctor actually needs. That means you have to know what's normal and how you deviate from that.

How about starting with the Greek and Latin roots for medical terms?

Why not then just get a working knowledge of those languages? I had a classical education and so took Latin and Greek. I also speak some German, as well as having a working, if halting, conversational knowledge of French. When you have that, you can understand the Latin root of a word like "ambulate" or "return".

A working knowledge of the languages is a much larger project. I'm moderately good at figuring out medical terms, but that doesn't mean I can do more than guess at translations of a text.

The actual medical knowledge is still required, but if you know the roots of things you can learn a great deal. The medical knowledge will give you meaning these can be quickly researched. In this day and time the ability to know a great deal of something requires only simple searches. I make a list of things to search all the time regular reading a research is Aristotelian.

Could you unpack that last sentence? I can't make sense of it. Aristotelian?

My procedure for coming up with search terms is to vaguely imagine a boring piece of writing about the subject I'm interested in, and then look for the least common words from it. If that doesn't work, then loosely mull for words inspired by the results that came up in the unsatisfactory searches.

Aristotelian = of Aristotle. Aristotle believed that regular reading, research, and expansion of the mind on various subjects was necessary to a good life. He wrote a great deal about the good life. I suggest adding that to your reading list.

I just take a subject, say "Scoliosis", and put that into Google and see what comes up. I start with the most popular sites and then look to more personal accounts once I know what is or is not scientific about it. For example, I am working on a novel right now and I needed to know how people performed check fraud. So I put that into Google and started to read, and eventually found a book by a detective about different cases he had solved. That helped me create a scenario for the book that was very good and true to life. If you so choose you can do that regularly on a variety of subjects to learn more about something. The luxury of having a background in classical languages is that you can decode language and derive some meaning from it. Research is about layering. You start at the surface and then go deeper, then deeper and then deeper still. Think about the hierarchy of media:

Social Media (instant)
Newspapers (or daily up-to-the-minute news)
Magazines (or media taking 4-5 days to create)
Content aggregators/Monthly Publications
Books

So for example researching check fraud I might see:

Tweets/posts about it
A newspaper article about a check fraud ring
A magazine piece about its prevalence in America
A group of these items over a period of time between one month and one year
A book about check fraud rings by an expert

How far you go in the hierarchy depends on how much you want to know or where that information might be located. Also, for more effective searches in the future you may wish to use full sentences (Google is getting good at that) or learn Boolean search operators.

KnaveOfAllTrades's idea of learning demographic & economic (GDP and its component parts) statistics of various places has occurred to me as a candidate for a useful Anki deck, so I second that.

Knowing some mathematical constants to a few significant figures can be useful. Memorizing √10 = 3.16 lets you interpret midpoints between ticks on a logarithmic scale, and √2 & √3 are the lengths of diagonals of unit squares & cubes. And knowing all three roots makes it easier to guesstimate square roots in general, using the √(ab) = (√a)(√b) result for non-negative a & b. Likewise for e.g. exp(2), exp(3), ln 2 & ln 3. The 68-95-99.7 rule should go on the list as well.
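For illustration, here's a minimal Python sketch of that guesstimation trick using the memorized roots above (the test values are just examples I picked, not anything from the comment):

```python
import math

# Memorized approximations mentioned above
SQRT2, SQRT3, SQRT10 = 1.414, 1.732, 3.162

# Guesstimate sqrt(a*b) as sqrt(a) * sqrt(b)
estimates = {
    6: SQRT2 * SQRT3,     # sqrt(6)  = sqrt(2) * sqrt(3)
    20: SQRT2 * SQRT10,   # sqrt(20) = sqrt(2) * sqrt(10)
    30: SQRT3 * SQRT10,   # sqrt(30) = sqrt(3) * sqrt(10)
}

for n, approx in estimates.items():
    exact = math.sqrt(n)
    print(f"sqrt({n}): estimate {approx:.3f}, exact {exact:.3f}, "
          f"error {100 * abs(approx - exact) / exact:.2f}%")
```

The errors stay well under a percent, which is plenty for mental arithmetic.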

Is there an existing post on people's tendency to be confused by explanations that don't include a smaller version of what's being explained?

For example, confusion over the fact that "nothing touches" in quantum mechanics seems common. Instead of being satisfied by the fact that the low-level phenomena (repulsive forces and the Pauli exclusion principle) didn't assume the high-level phenomena (intersecting surfaces), people seem to want the low-level phenomena to be an aggregate version of the high-level phenomena. Explaining something without using it is one of the best properties an explanation can have, but people are somehow unsatisfied by such explanations.

Other examples of "but explain(X) doesn't include X!": emotions from biology, particles from waves, computers from solid state physics, life from chemistry.

More controversial examples: free will, identity, [insert basically any other introspective mental concept here].

Examples of the opposite: any axiom/assumption of a theory, billiard balls in Newtonian mechanics, light propagating through the ether, explaining a bar magnet as an aggregation of atom-sized magnets, fluid mechanics using continuous fields instead of particles, love from "God wanted us to have love".

Most people want the explanations (models) to make intuitive sense, though a few are satisfied with the underlying math only. And intuition is based on what we already know and feel.

The Pauli exclusion (or inclusion, if you take bosons) principle feels to me like rubbery wave-functions pushing against each other (or sticking together), even though I understand that antisymmetrization is not actually a microscopic force, and interacting electrons are not actually separate entities.

I do not think that one should lump free will and identity in the same category as basic QM, however, as we do not have nearly the degree of understanding of the cognitive processes in System 1 which produce the feeling of either.

What is the goal of having an explanation?

Do you want the explanation to change your model of the world in a way that allows you to have the right intuition about a subject matter? Do you want the explanation to allow you to make better predictions about the subject matter?

Beliefs are supposed to pay rent.

If someone without a physics background hears about quantum mechanics, they are supposed to be confused. If they aren't, they are simply projecting their old ideas onto the new theory and not really updating anything on a deeper level.

emotions from biology

I'm not aware of a published theory of emotions as an extension of biology that describes all aspects of emotions that I observe on a day-to-day basis.

computers from solid state physics

Understanding hardware does require solid-state physics, but you also need to understand software to understand computers.

The one thing missing from that video (at least up to 4:23, when I got frustrated - and he had explicitly disclaimed talking about the Pauli Exclusion Principle before this point) which really gets to the heart of it is that the Pauli Exclusion Principle kicks in when one thing literally runs into the other - when parts of two things are trying to occupy exactly the same state. If 'couldn't go any further or you'd be inside the other thing, but you can't do that' isn't 'contact' then the word has no meaning.

The interviewer is exactly right at 4:17 - he did the demonstration wrong. He should have brought them into contact. Only when he was pushing inwards and the balls were pushing back hard enough to balance -- that's when he'd say they're in contact.

So this isn't a great example because the proper explanation does include a smaller version of what's being explained.

What people complaining about this usually do is link to this video (or better), but I'm not sure it's actually helpful for people who don't get it.

So I'm apparently a fictional spaceship now [1 2]. Also someone who's been instructed to keep an eye on it.

I have uploaded a collection of My Little Pony one-shots called Flashes of Insight to both FIMFiction.net and FanFiction.net. While most of the stories have no particular relevance to LessWrong, "Good Night" draws heavily on ideas I first encountered on this site, and I expect most people here will find it enjoyable. Eliezer Yudkowsky called it "chilling," which, coming from him, I consider a very great compliment.

These make me sad, but not in an objectionable way. Liked and Follow'd. Good Night seems specifically optimised to chill EY, was it your goal?

I am a bit puzzled by one aspect of Good Night, but that may be because I don't understand the tech level that the characters are operating at. In Twilight's place, it seems that the obvious thing to do would be to znxr n pbcl bs urefrys jvgu gur nccebcevngr oberqbz-erqhpgvba arhebzbqvsvpngvba, naq yrnir vg gb xrrc Pryrfgvn pbzcnal. Vs guvf vf cbffvoyr va gur frggvat, V qba'g frr jul guvf vfa'g n pyrne jva; fvapr Gjvyvtug rkcyvpvgyl qbrf abg jnag gung shgher sbe urefrys, fur gurerol fubhyq abg vqragvsl jvgu n ure-jub-qbrf-jnag-gung-shgher, be srne fhowrpgvir pbagvahngvba nf gung pbcl. Lbhe Gjvyvtug vf bs pbhefr serr gb abg-jnag guvf fbyhgvba, ohg vg ohtf zr gung fur qvqa'g guvax bs vg gb erwrpg vg.

Good Night seems specifically optimised to chill EY, was it your goal?

Oh, good heavens no! The thought that Mr. Yudkowsky would ever read the story did not even occur to me until long after it was finished.

I am a bit puzzled by one aspect of Good Night, but that may be because I don't understand the tech level that the characters are operating at.

At the level of magitek I envisioned the characters having, your solution should definitely be possible. The realistic answer is that the prompt gave us twelve hours of prep time and one hour of writing time; I did not think of your idea during the allotted time, and if I had I would have mercilessly cut it at the planning stage so that I could fit the whole story into one hour. Even disregarding the time limit, rnpu nethzrag V unq Pryrfgvn naq Gjvyvtug qvfphff jnf n fvatyr, ovt, eryngviryl fvzcyr pbaprcg; vzzbegnyf zhfg zbqvsl fb gung gurl pna rgreanyyl ybbc, be gurl zhfg tebj, be gurl zhfg qvr. Your idea is more complex, and it doesn't fit the theme. If you had handed me a beautifully written section which covered the whole issue in three paragraphs while I was writing, I would have had no choice but to murder it for the sake of the story as a whole.

Literary concerns aside, my Twilight would disagree with the notion that lbh pna pubbfr juvpu vafgnaprf bs lbh lbh fhowrpgviryl rkcrevrapr onfrq ba jurgure lbh vqragvsl jvgu gurz be abg.

No such accusation intended! In all honesty, my thought process was "Guvf fgbel erpncvghyngrf gur svany gevyrzzn (nf lbh fnl, ybbc/tebj/qvr) bs Pnryrz rfg Pbagreeraf, juvpu vf nyernql xabja gb cbffrff RL-puvyyvat cebcregvrf; lbh pbaqrafr vg irel rssrpgviryl, naq gura lbh unir Pryrfgvn rpub bar bs gur zber ubcrshy Sha Gurbel cbfgf jvgu 'Vg znl jryy or gung n zber pbagebyyrq pyvzo hc gur vagryyvtrapr gerr vf cbffvoyr'; naq gura Gjvyvtug erwrpgf vg." I just read it as very pointed, which clearly was not the intended reading.

I can't dispute your claim about story structure; it worked!

I think you got my motivations backwards, though - I agree with your Twilight on that cite! V qba'g guvax qrpynevat "V bayl vqragvsl jvgu shgher ybggrel-jvaavat!zr" yrgf zr rkcrpg gb jnxr hc n ybggrel-jvaare. V qb, ubjrire, trarenyyl rkcrpg gb jnxr hc jvgu inyhrf ynjshyyl qrevirq sebz gubfr bs abj!zr, naq abg jvgu bccbfrq inyhrf. Guvf qbrfa'g srry yvxr n znggre bs pubvpr be gnfgr. Vs V jnagrq gb qvr (V qba'g), V jbhyqa'g bcg gb znxr na vqragvpny pbcl bs zlfrys va gur cebprff - ohg V jbhyqa'g arprffnevyl bowrpg gb znxvat n pbcl bs zlfrys zbqvsvrq gb jnag gb yvir. Vaghvgviryl, vg frrzf gung gur svefg pnfr gujnegf zl qrfverq qrngu va n jnl gung gur frpbaq pnfr qbrf abg.

Enjoyed that, thanks. Have you read Diaspora by Egan?

I have not. All I know of it is what Eliezer quoted in CFAI.

For LessWrong meetup organizers: Do you bring in new long term members who are already the stereotypical STEM/intellectual/utilitarian/etc. type? Or do you attract a significant number of people who don't meet that description but nonetheless do become long-term members?

This has implications for community-building. It means that LW groups, instead of trying to find members from a wide base, should be trying to find the people in the population who share the LessWronger mentality.

Agreed; GiveWell has mentioned this with respect to EA community building: there seems to be a type that EA appeals to, and reaching people who are predisposed to be sympathetic to EA memes is a better use of time than trying to convince people who don't much care.

So, I made two posts sharing potentially useful heuristics from Bayesianism. So what?

Should I move one of them to Main? On the one hand, these posts "discuss core Less Wrong topics". On the other, I'm honestly not sure that this stuff is awesome enough. But I feel like I should do something, so these things aren't lost (I tried to do a talk about "which useful principles can be reframed in a Bayesian terms" on a Moscow meetup once, and learned that those things weren't very easy to find using site-wide search).

Maybe we need a wiki page with a list of relevant lessons from probability theory, which can be kept up-to-date?

I might need some recalibration, but I'm not sure.

I research topics of interest in the media, and I feel frustrated, angry and annoyed about the half-truths and misleading statements that I encounter frequently. The problem is not the feelings, but whether I am 'wrong'. I figure there are two ways that I might be wrong:

(i) Maybe I'm wrong about these half-truths and misleading statements not being necessary. Maybe authors have already considered telling the facts straight and that didn't get the best message out.

(ii) Maybe I'm actually wrong about whether these are half-truths or really all that misleading. Maybe I am focused on questions of fact and the meanings of particular phrases that are overly subtle.

The reason I think I might need re-calibration is that I don't consider it likely that I am much less pragmatic, smarter or more accurate than all these writers I am critical of (some of them, inevitably, but not all of them -- also these issues are not that difficult intellectually).

Here are some concrete examples, all regarding my latest interest in the Ebola outbreak:

  • Harvard poll: Most recently, the HSPH-SSRS poll with headlines, "Poll finds US lack knowledge about ebola" or, "Many Americans harbor unfounded fears about Ebola". But when you look at the poll questions, they ask whether Americans are "concerned" about the risk, not what they believe the risk to be, and whether they think Ebola is spread 'easily'. The poll didn't appear to be about Americans' knowledge of Ebola, but how they felt about the knowledge they had. The question about whether Ebola transmits easily especially irks me, since everyone knows (don't they??) that whether something is 'easy' is subjective.

  • "Bush meat": I've seen many places that people need to stop consuming bush meat in outbreak areas (for example). I don't know that much about how Ebola is spreading through this route, but wouldn't it be the job of the media and epidemiologists to report on the rate of transmission from eating bats (I think there has only been one ground zero patient in West Africa who potentially contracted Ebola from a bat) and weigh this with the role of local meat as an important food source (again, don't know, media to blame)? Just telling people to stop eating would be ridiculous, hopefully it's not so extreme. Also, what about cooking rather than drying local meat sources? This seems a very good example of the media unable to nuance a message in a reasonable way, but I allow I could be wrong.

  • Media reports "Ebola Continues to spread in Nigeria" when the increase in Ebola cases at that time was due to contact with the same person, and those affected had already been in quarantine. This seemed to hype up the outbreak when in fact the Nigerians were successfully containing it. Perhaps this is an example of being too particular and over-analyzing something subtle?

  • Ever using the phrase 'in the air' to describe how Ebola does or doesn't transmit, because this is a phrase that can mean completely different things to anyone using or hearing the phrase. Ebola is not airborne but can transmit within coughing distance.

  • The apparent internal inconsistency of "a case of Ebola might come to the US, but an outbreak cannot happen here". Some relative risk numbers would be helpful here.

All of these examples upset me to various degrees since I feel like it is evidence that people -- even writers and the scientists they are quoting -- are unable to think critically and message coherently about issues. How should I update my view so that I am less surprised, less argumentative, or less of a crazy-pedantic-fringe person?

My first suggestion would be to look at the incentives of people who write for the media. Their motivations are NOT to "get the best message out". That's not what they're paid for. Nowadays their principal goal is to attract eyeballs and hopefully monetize them by shoving ads into your face. The critical thing to recognize is that their goals and criteria of what constitutes a successful piece do not match your goals and your criteria of what constitutes a successful piece.

The second suggestion would be to consider that writers write for a particular audience and, I think, most of the time you will not be a member of that particular audience. Mass media doesn't write for people like you.

Your comment is well-received. I'm continuing to think about it and what this means for finding reliable media sources.

My impression of journalists has always been that they would have to be fairly idealistic about information, and about communicating that information, to be attracted to their profession. I also imagine that their goals are constantly antagonized by the goals of their bosses, who do want to make money, and it is probably the case that the most successful sell out or find a good trade-off that is not entirely ideal for them or for the critical reader.

I'll link this article by Michael Volkmann, a disillusioned journalist.

My impression of journalists has always been that they would have to be fairly idealistic about information, and about communicating that information, to be attracted to their profession.

Unfortunately, in practice this frequently translates to "show the world how evil those blues are even if I have to bend the literal truth a little to do it."

finding reliable media sources

My feeling is that this quest is misguided. There is no such thing as a pure spring which gushes only truth -- you cannot find one.

My own approach is to accept that reality is fuzzy, multilayered, multidimensional, looks very different from different angles, and is almost always folded, spindled, and mutilated for the purpose of producing a coherent and attractive story. Read lots of different (but, hopefully, smart and well-informed) sources which disagree with each other. Together they will weave a rich tapestry which might not coalesce into a simple picture but will be more "true", in a way, than a straight narrative.

Having said this, I should point out that adding pretty clear lies to the mix is not useful and there are enough sources sufficiently tainted to just ignore.

The link is making a different argument-- it says the problem isn't with the journalists or with their bosses, it's that the public isn't paying attention to the stories journalists are risking their necks to get.

True. I linked the article as an example of the idealistic journalist, one that is disappointed that his motives are distrusted by the public.

All of these examples upset me to various degrees since I feel like it is evidence that people -- even writers and the scientists they are quoting -- are unable to think critically and message coherently about issues.

That's a funny sentence. You yourself blame scientists with whom you didn't interact at all based on the way they got quoted without critically asking yourself whether your behavior makes sense.

If a journalist quotes a scientist the process might be: the journalist picks up the phone and calls the scientist. They talk for 15 minutes about the issue. Then the journalist, who thinks that it's his job to quote an authority, picks one sentence from that interview that fits into the narrative the journalist wants to tell. It's quite possible that the scientist didn't even say that sentence word for word.

It's also quite possible that you spend more time investigating the issue in detail than some of the journalists you read.

My limited experience with journalists supports this -- when they speak with you, they often already have the outline of the story ready (the nearest existing cliche); they only need a few words they can take out of context and use to support their bottom line. You can try to educate them, but they don't really listen to you to learn about the topic; they listen to catch some nice keywords.

I recently decided to bite the bullet and started to use the Markdown standard in my plain-text documents (I would have preferred the syntax of txt2tags or Org-mode, but neither of those is nearly as widespread and well-supported). It's proven so useful that I am seriously considering uninstalling LibreOffice. Who needs a WYSIWYG editor when you have readable source code which can be easily converted to an html document? Not to mention that Notepad++ opens instantly, while LibreOffice Writer takes forever.

I highly recommend that anyone who deals with lots of text documents try Markdown. It will change your digital life. If you need help getting started, try the Markdown Tutorial.
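For the curious, here is a minimal sketch of the "readable source to HTML" step, assuming the third-party Python-Markdown package (installed with `pip install markdown`); Pandoc or any other converter would do the same job:

```python
import markdown  # pip install markdown (Python-Markdown); one option among many

# A hypothetical document, just to show the round trip
source = """# Meeting notes

Things to do *before* Friday:

1. Email **Alice**
2. Book the room
"""

html = markdown.markdown(source)
print(html)  # e.g. <h1>Meeting notes</h1> ... <ol><li>Email <strong>Alice</strong></li> ...
```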

Who needs a WYSIWYG editor when you have readable source code which can be easily converted to an html document?

Many people. Text editors and word processors are different tools serving different needs.

Try auto-generating a table of contents in your Markdown document or inserting a table with live formulas (you can, of course, use external tools to achieve any functionality...).

On a similar note, if the documents you are writing will end up printed, or in PDF format, I recommend TeX. It's significantly more complex than Markdown, but also far more powerful. And absolutely irreplaceable if you are writing something Math/Formula heavy.
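As a taste of what that looks like, here is a minimal formula-heavy LaTeX sketch (the example equation is just an illustration, compiled with pdflatex):

```latex
% Minimal sketch of a formula-heavy document; compile with pdflatex.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The normal density is
\begin{equation}
  f(x) = \frac{1}{\sigma\sqrt{2\pi}}
         \exp\!\left( -\frac{(x-\mu)^2}{2\sigma^2} \right),
\end{equation}
which would be painful to express in Markdown.
\end{document}
```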

I also make a point of editing all my documents in a text editor, and then compiling them. Seconding Jaime's recommendation.

Patrick McKenzie explains a bit more why he hates bitcoin:

My feelings about Bitcoin remind me of a fractal. It's a bad idea, and you zoom in at any part of it and discover new bad ideas, and there is in fact infinite resolution on how bad those ideas get. I have been working on a Why I Dislike Bitcoin For Technical Reasons essay for almost a year and cannot just hit publish because I don't know if I'll ever be able to finish it.

Let's talk non-technical reasons.

When will I add support for crypto-currencies? Well, I sell a variety of products and services at price points between $29.95 and $30,000 a week, almost exclusively to professional Americans, with a small portion of the business being B2C and the larger part being B2B. I would accept Bitcoin if it were in demand from my customers. It isn't. It never will be because taking payment for legal products from the global rich is straightforward.

If you, like me, want to sell software, get a Stripe account and you're done. If you can't code sufficiently then use Gumroad or one of the numerous e-commerce systems and you're done. You will, at no point, have to explain to a Kansas schoolteacher (who thinks Internet Explorer is called the blue Googles) what crypto currency is, why she needs to download a new program to pay you, why she needs to give a company she's never heard of which isn't you withdraw access to her bank account to buy Scary Internet Voodoo, or how to process a novel transaction type she's never had to deal with before. You just get her to put in one of the several Visa cards she has in her pocket.

But wait Patrick. You routinely transfer tens of thousands of dollars from the US to Japan. Wouldn't Bitcoin make that fast and easy?

No. Wire transfers are fast and easy -- annoyingly expensive, to be true ($50 in fees plus the currency slippage), but J->US takes 45 minutes and US->J takes 48 hours. Throwing Bitcoin in the mix introduces numerous sources of risk. The odds-on place to do business for them in Japan -- which people told me I should go get a job at (!) -- is currently undergoing bankruptcy proceedings after losing about half a billion dollars of depositors' money. If one of my $50k transfers had been tied up there, well, that's all she wrote for that, but at least I saved $50 on wire fees?

Asking my clients to acquire Bitcoin and pay me in them, which cuts out Bitcoin exchanges from the value chain, is a poor use of my time. Large companies, like the ones which happily pay $X0,000 invoices, will not do it. Full stop. To the extent that I want to change their minds about processes which are deep in the sinews of their company, it should not be their Accounts Payable department but rather their Marketing department, and after successfully implementing my ideas (which, unlike Bitcoin, will make them a lot of money) I will send in an invoice for a huge amount of money and predictably receive it.

But Patrick, isn't Bitcoin a great platform for remittances? No, it's a terrible platform for remittances because 98% of the problem of remittances is what is called in networks the Last Quarter Mile Problem and Bitcoin has no infrastructure for solving it on either end of the remittance and, even if they did, would not find themselves cost-competitive with Western Union. (The part between the last quarter miles being close-to-free doesn't help. Western Union can transfer money internally for close-to-free. The supermajority of their costs is maintaining an office which someone can go to in abuelita's village. Seriously, check their annual report.)

The above does not apply if you want to acquire nootropics from a questionable source overseas.

Most of Patrick's arguments against Bitcoin are actually against offering Bitcoin exclusively rather than against offering Bitcoin as an option, probably with an added fee.

Patrick McKenzie explains a bit more why he hates bitcoin:

I think "hate" is too strong a word. There are a lot of things for which I find no use but that I don't hate.

I agree that "hate" isn't doing much here besides giving emotional valence to empiricals, but this

My feelings about Bitcoin remind me of a fractal. It's a bad idea, and you zoom in at any part of it and discover new bad ideas, and there is in fact infinite resolution on how bad those ideas get. I have been working on a Why I Dislike Bitcoin For Technical Reasons essay for almost a year and cannot just hit publish because I don't know if i'll ever be able to finish it.

sounds a little stronger to me than "finds no use for".

The cookie example here is a nice explanation of the difference between frequentists and Bayesians.

Stanovich draws an interesting distinction between intelligence and rationality, where intelligence, as measured by IQ tests, is so to say the strength of an individual's analytical abilities, whereas rationality is this individual's tendency to use these analytical abilities (as opposed to fast and unreliable System 1 processes); i.e., his or her tendency to "overcome his or her biases". According to Stanovich, there are large individual differences not only regarding IQ but also regarding rationality, and he is now in the process of constructing a test measuring people's rationality quotient, RQ. Now my question is this: in which areas do you think that high-RQ people have comparative advantages, and in which areas do you think that high-IQ people have comparative advantages? My hunch is that IQ pays off better in precise fields like mathematics, physics and computer science, where the problems are often so hard that most people can't solve them even if they overcome their biases and use their System 2, whereas high RQ pays off better in more ill-structured fields like qualitative sociology, where any individual line of reasoning usually is fairly simple, and therefore does not require a very high IQ, but where it is easy to fall prey to (politically) motivated reasoning, confirmation bias, and all sorts of other biases.

Hence in order to arrive at true theories, it seems to me that you need a high RQ in the social sciences. On the other hand, in order to sell your theories, RQ is not necessarily always helpful: on the contrary, a fair dose of overconfidence bias can be useful here. Many bigshot social scientists during the last century or so were anything but rational (Foucault and Freud are two of many examples), but were able to convince other (equally biased people) that they were.

As fields become more exact (as, for instance, psychology gradually has become), you gradually need a higher and higher IQ to compete: rationality is no longer enough. My guess is that as more and more fields grow more exact, moderate-IQ people will be of less and less use in academia.

Many bigshot social scientists during the last century or so were anything but rational (Foucault and Freud are two of many examples), but were able to convince other (equally biased people) that they were.

I understand that bashing Freud is a popular way to signal "rationality" -- more precisely, to signal loyalty to the STEM tribe which is so much higher status than the social sciences tribe -- but it really irritates me because I would bet that most people doing this are merely repeating what they heard from others, building their model completely on other people's strawmans.

Mostly, it feels to me horribly unfair towards Freud as a person, to use him as a textbook example of irrationality. Compared with the science we have today, of course his models (based on armchair reasoning after observing some fuzzy psychological phenomena) are horribly outdated and often plainly wrong. So throw those models away and replace them by better models whenever possible; just like we do in any science! I mostly object to the connotation that Freud was less rational compared with other people living in the same era, working in the same field. Because it seems to me he was actually highly above the average; it's just that the whole field was completely diseased, and he wasn't rational enough to overcome all of that single-handedly. I repeat, this is not a defense of factual correctness of Freud's theories, but a defense of Freud's rationality as a person.

To put things in context, to show how diseased psychology was in Freud's era, let me just say that Freud's most famous student and later competitor, Carl Gustav Jung, rejected much of Freud's teachings and replaced them with astrology / religion / magic, and this was considered by many people an improvement over the horribly offensive ideas that people could be predictably irrational, motivated by sexual desires, and generally frustrated with a modern society based on farmers' values. (Then there was also the completely different school of Vulcan psychologists who said: Thoughts and emotions cannot be measured, therefore they don't exist, and anyone who says otherwise is unscientific.) This was the environment which started the "Freud is stupid" meme, which keeps replicating on LW today.

I think the bad PR comes from a combination of two facts: 1) some of Freud's ideas were wrong, and 2) all of his ideas were controversial, including those which were correct. So, first we have this "Freud is stupid" meme most people agree with, however mostly for wrong reasons. Then the society gradually changes, and those of Freud's ideas which happened to be correct become common sense and are no longer attributed to him; they are further developed by other people whom we remember as their authors. Only the wrong ideas are remembered as his legacy. (By the way, I am not saying that Freud invented all those correct ideas. Just that popularizing them in his era was a part of what made him controversial; what made the "Freud is stupid" meme so popular. Which is why I consider that meme very unfair.) So today we associate human irrationality with Dan Ariely, human sexuality with Matt Ridley, and Sigmund Freud only reminds us of lying on a couch debating which object in a dream represented a penis, and underestimating the importance of the clitoris in female sexuality.

As someone who actually read a few of Freud's books long ago (before reading books by Ariely, Ridley, etc.), here are a few things that impressed me. Things that someone got right a hundred years ago, when "it's obviously magic" and "no, thoughts and emotions actually don't exist" were the alternative famous models of human psychology.

(continued in next comment...)

(...continued)

The general ability of updating. At the beginning of Freud's career, the state-of-the-art psychotherapy was hypnosis, which was called "magnetism". Some scientists had discovered that the laws of nature are universal, and some other scientists jumped to the seemingly obvious conclusion that, analogically, all kinds of psychological forces among humans must be the same as the forces which make magnets attract or repel each other. So Freud learned hypnosis, used it in therapy, and was enthusiastic about it. But later he noticed that it had some negative side effects (female patients frequently falling in love with their doctors, returning to their original symptoms when the love was not reciprocated), and that the positive effects could also be achieved without hypnosis, simply by talking about the subject (assuming that some conditions were met, such as the patient actually focusing on the subject instead of focusing on their interaction with the doctor; a large part of psychoanalysis is about optimizing for these conditions). The old technique was thrown away because the new one provided better results. Not exactly the "evidence based medicine" of our current standards, but perhaps we could use as a control group all those doctors who stubbornly refused to wash their hands between doing autopsies and treating their patients, despite their patients dropping like flies. -- Later, Freud discarded his original model of unconscious, preconscious and conscious mind, and replaced it with the "id, ego, superego" model. (This is provided as evidence of the ability to update, to discard both commonly accepted models and one's own previous models. Which we consider an important part of rationality.)

Speaking about the "id, ego, superego" model, here is the idea of a human brain not being a single agent, but composed of multiple modules, sometimes opposed to each other. Is this something worth considering for Less Wrong readers, either as a theoretical step towards reduction of consciousness, or as a practical tool for e.g. overcoming akrasia? "Ego" as the rational part of the brain, which can evaluate consequences, but often doesn't have enough power to enforce its decisions without emotional support from some other part of brain. "Id" as the emotional part which does not understand the concept of time. "Superego" as a small model of other people in our brain. Today we could probably locate the parts of the physical brain they correspond to.

"The Psychopathology of Everyday Life" is a book describing how seemingly random human errors (random movements, forgetting words, slips of the tongue) sometimes actually make sense if we perceive them as goal-oriented actions of some mental subagent. The biggest problem of the book is that it is heavy with theory, and a large part of it focuses on puns in German language... but remove all of this, don't mention the origin, and you could get a highly upvoted article on Less Wrong! (The important part would be not to give any credit to Freud, and merely present it as an evidence for some LW wisdom. Then no one will doubt your rationality.) -- On the other hand, "Civilization and Its Discontents" is a perfect book to be rewritten into a series of articles on Overcoming Bias, about a conflict between forager mentality and farmer social values.

But updating and modelling human brains, those are topics interesting for Less Wrong readers. Most people would focus on, you know, sex. Well, how exactly could we doubt the importance of sexual impulses in a society where displaying a pretty lady is advertising 101, Twilight is a popular book, and the internet is full of porn? (Also, scientists accept the importance of sexual selection in evolution.) Our own society is a huge demonstration that Freud was right about the most controversial part of his theory. The only way to make him wrong about this is to create a strawman and claim that according to Freud everything was about sex, so if we find a single thing that isn't, we have proved him wrong. -- But that strawman was already used in Freud's era; he actually started one of his books by disproving it. Too bad I don't remember which one. One of the case histories, probably. (It starts like this: So, people keep simplifying my theories to say that all dreams are dogmatically about sex, so here is a simple example to correct the misunderstanding. And he describes a situation where a child wanted an ice cream, the parents forbade it, and the child was unhappy and cried. That night, the child had a dream about travelling to the North Pole, through mountains of snow. This, says Freud, is what resolving a suppressed desire in a dream typically looks like: the child wanted the ice cream, that's desire #1, but the child also wanted to avoid conflict with their parents, that's desire #2. How to satisfy both of them? The "mountains of snow" obviously symbolize the ice cream; the child wants it, and gets it, a lot! But to avoid a conflict with the parents, even in the dream, the ice cream is censored and becomes snow, so the child can plausibly deny to themselves disobeying their parents. This is Freud's model of human dreams. It's just that an adult person would probably not obsess so much about an ice cream, which they can buy if they really want it so much, but about something unavailable, such as a sexy neighbor; and also a smart adult would use more complex censorship to fool themselves.) Also, he had a whole book called "Beyond the Pleasure Principle" where he argues that some mind modules may be guided by principles other than pleasure, for example nightmares, repetition compulsion, aggression. (His explanation of this other principle is rather poor: he invents a mystical death principle opposing the pleasure principle. Anyway, it's evidence against the "everything is about sex" strawman.)

Freud was an atheist, and very public about it. He essentially described religion as a collective mental disease, in a book called "The Future of an Illusion". He used and recommended using cocaine... if he lived in the Bay Area today, and used modafinil instead, I can easily imagine him being a very popular Less Wrong member. -- But instead he lived a century ago, so he could only be one of those people spreading controversial ideas which are now considered obvious in hindsight.

tl;dr -- I strongly disagree with using Freud as a textbook example of insanity. Many of his once controversial ideas are so obvious to us now that we simply don't attribute them to him. Instead we just associate him with the few things he got wrong. And the whole meme was started by people who were even more wrong.

Hi Viliam,

thanks for your interesting and thoughtful response. Possibly I should have used another example. There are other, more clearcut cases in e.g. the postmodernist tradition, but I wanted someone more well-known.

The reason I chose him was not to signal loyalty to the STEM tribe, but rather because he is taken to be a textbook example of irrationality by Popper and Gellner, two of my favourite philosophers. Popper claimed that Freud's theories were unfalsifiable and that for any possible event E, both E and not-E were standardly taken to confirm his theories. This is inconsistent with probability theory, as pointed out in "Conservation of Expected Evidence" (which is a very Popperian post). The reason Freud and his followers (I think some people have held that some of his followers were actually worse on this point than Freud) made this mistake (if they did) was presumably confirmation bias (falsificationism can be seen as a tool to counter confirmation bias).

There is a huge literature on whether this claim is actually true. I have read Freud, and Gellner's (to my mind very interesting) book on psycho-analysis, as well as some of Popper's texts on the topic, so I'm not merely repeating ideas I've heard from others. That said, I don't know the subject well enough to go into a detailed discussion of your claims. Also, it's sort of tangential to the topic. My point was not to bash Freud - that was, so to say, a side-effect of my claim.

Regarding your historical claims, I think that it's very hard to establish who introduced nebulous ideas such as Freud's tripartite model of the mind. Some claim that Plato's theory of the mind foreshadowed it. Gellner claims that all the good original ideas in Freud are taken from Nietzsche. I don't know enough of the topic to determine whether any of these claims are true, but in order to establish whether they are, or whether Freud really was as significant and original as you claim, one would need to take a deep plunge into the history of ideas.