A fascinating article about rationality or the lack thereof as it applied to curing scurvy, and how hard trying to be less wrong can be: http://idlewords.com/2010/03/scott_and_scurvy.htm
Call for examples
When I posted my case study of an abuse of frequentist statistics, cupholder wrote:
Still, the main post feels to me like a sales pitch for Bayes brand chainsaws that's trying to scare me off Neyman-Pearson chainsaws by pointing out how often people using Neyman-Pearson chainsaws accidentally cut off a limb with them.
So this is a call for examples of abuse of Bayesian statistics; examples by working scientists preferred. Let’s learn how to avoid these mistakes.
How do you introduce your friends to LessWrong?
Sometimes I'll start a new relationship or friendship, and as this person becomes close to me I'll want to talk about things like rationality and transhumanism and the Singularity. This hasn't ever gone badly, as these subjects are interesting to smart people. But I think I could introduce these ideas more effectively, with a better structure, to maximize the chance that those close to me might be as interested in these topics as I am (e.g. to the point of reading or participating in OB/LW, or donating to SIAI, or attending/founding rationalist groups). It might help to present the futurist ideas in increasing order of outrageousness as described in Yudkowsky1999's future shock levels. Has anyone else had experience introducing new people to these strange ideas? Any thoughts or tips?
Edit: for futurist topics, I've sometimes begun (in new relationships) by reading and discussing science fiction short stories, particularly those relating to alien minds or the Singularity.
For rationalist topics, I have no real plan. One girl really appreciated a discussion of the effect of social status on the persuasiveness of argume...
The following stuff isn't new, but I still find it fascinating:
TL;DR: Help me go less crazy and I'll give you $100 after six months.
I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.
I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).
One-time tricks to do one important thing are also welcome, but I'd offer less.
I've got a weaker form of this, but I manage. The number one thing that seems to work is a tight feedback loop (as in daily) between action and reward, preferably reward from other people. That's how I was able to do OB/LW. Right now I'm trying to get up to a reasonable speed on the book, and seem to be slowly ramping up.
This matches my experience very closely. One observation I'd like to add is that one of my strongest triggers for procrastination spirals is having a task repeatedly brought to my attention in a context where it's impossible to follow through on it - i.e., reminders to do things from well-intentioned friends, delivered at inappropriate times. For example, if someone reminds me to get some car maintenance done, the fact that I obviously can't go do it right then means it gets mentally tagged as a wrong course of action, and then later when I really ought to do it the tag is still there.
Has anyone had any success applying rationalist principles to Major Life Decisions? I am facing one of those now, and am finding it impossible to apply rationalist ideas (maybe I'm just doing something wrong).
One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.
Weirdly, the most convincing argument I've contemplated so far is basically a "what would X do?" style analysis, where X is a fictional character.
It feels to me that rationalist principles are most useful in avoiding failure modes. But they're much less useful in coming up with new things you should do (as opposed to specifying things you shouldn't do).
Pigeons can solve the Monty Hall Dilemma (MHD)?
A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy.
Behind a paywall
But freely available from the website of one of the authors.
Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
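For anyone who wants to check the optimal strategy the birds converge on, here is a quick Monte Carlo sketch (plain Python; the trial count is arbitrary):

```python
import random

def play(switch, trials=100_000):
    """Simulate the standard Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)     # door hiding the prize
        choice = random.randrange(3)    # contestant's initial pick
        # Host opens a door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining closed door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print("stay:  ", play(switch=False))   # ~= 1/3
print("switch:", play(switch=True))    # ~= 2/3
```

Switching wins exactly when the initial pick was wrong, which happens two-thirds of the time.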
This was in my drafts folder, but due to the lackluster performance of my latest few posts I decided it doesn't deserve to be a top-level post. As such, I am making it a comment here. It also does not answer the question being asked, so it probably wouldn't have made the cut even if my last few posts had been voted to +20 and promoted... but whatever. :P
Perceived Change
Once, I was dealing a game of poker for some friends. After dealing some but not all of the cards I cut the deck and continued dealing. This irritated them a great deal because I altered the ord...
To venture a guess: their true objection was probably "you didn't follow the rules for dealing cards". And, to be fair to your friends, those rules were designed to defend honest players against card sharps, which makes violations Bayesian grounds to suspect you of cheating.
Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?
Because people aren't good at stating their actual reasons for disagreement. I suspect that they are aware that the particular rule is arbitrary and doesn't influence the game, and almost everybody agrees that blindly following the rules is not a good idea. So "you broke the rules" doesn't sound like a good justification. "You have influenced the outcome", on the other hand, does sound like a good justification, even if it is irrelevant.
The lottery ticket example is a valid argument, which is easily explained by attachment to random objects and which can't be explained by a rule-changing heuristic. However, rule-fixing sentiments certainly exist, and I am not sure which plays a stronger role in the poker scenario. My intuition was that the poker scenario was more akin to, say, playing tennis in non-white clothes back when white was demanded, or missing the obligatory bow before a judo match.
Now, I am not sure which of these effects is more important in the poker scenario, and moreover I don't see what experiment would let us discriminate between the explanations.
RobinZ ventured a guess that their true objection was not their stated objection; I stated it poorly, but I was offering the same hypothesis with a different true objection--that you were disrupting the flow of the game.
I'm not entirely sure if this makes sense, partially because there is no reason to disguise unhappiness with an unusual order of game play. From what you've said, your friends worked to convince you that their objection was really about which cards were being dealt, and in this instance I think we can believe them. My fallacy was probably one of projection, in that I would have objected in the same instance, but for different reasons. I was also trying to defend their point of view as much as possible, so I was trying to find a rational explanation for it.
I suspect that the real problem is related to the certainty effect. In this case, though no probabilities were altered, there was a new "what-if" introduced into the situation. Now, if they lose (or rather, when all but one of you lose) they will likely retrace the situation and think that if you hadn't cut the deck, they could have won. Which is true, of course, but irrelevant, since it also could have ...
I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to Less Wrong. Is there demand?
How important are 'the latest news'?
These days many people follow an enormous number of news sources. I myself notice that skimming through my Google Reader items is increasingly time-consuming.
What is your take on it?
I wonder if there is really more to it than just curiosity and leisure. Are there news sources (blogs, the latest research, 'lesswrong'-2.0 etc.), besides lesswrong.com, that every rationalist s...
I searched for a good news filter that would inform me about the world in ways that I found to be useful and beneficial, and came up with nothing.
In any source that did contain news items I categorized as useful, those items made up less than 5% of the information presented, and thus were drowned out and took too much time and effort, on a daily basis, to find. Thus, I mostly ignore news, except what I get indirectly through following particular communities like LessWrong or Slashdot.
However, I perform this exercise on a regular basis (perhaps once a year), clearing out feeds that have become too junk-filled, searching out new feeds, and re-evaluating feeds I did not accept last time, to refine my information access.
I find that this habit of perpetual long-term change (significant reorganization, from first principles of the involved topic or action) is highly beneficial in many aspects of my life.
ETA: My feed reader contains the following:
"Why Self-Educated Learners Often Come Up Short" http://www.scotthyoung.com/blog/2010/02/24/self-education-failings/
Quotation: "I have a theory that the most successful people in life aren’t the busiest people or the most relaxed people. They are the ones who have the greatest ability to commit to something nobody else forces them to do."
It turns out that Eliezer might not have been as wrong as he thought he was about passing on calorie restriction.
Pick some reasonable priors and use them to answer the following question.
On week 1, Grandma calls on Thursday to say she is coming over, and then comes over on Friday. On week 2, Grandma once again calls on Thursday to say she is coming over, and then comes over on Friday. On week 3, Grandma does not call on Thursday to say she is coming over. What is the probability that she will come over on Friday?
ETA: This is a problem, not a puzzle. Disclose your reasoning, and your chosen priors, and don't use ROT13.
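For concreteness, here is one minimal way such a calculation could be set up; the two hypotheses and every number in it are illustrative assumptions, not "the" reasonable priors:

```python
# Toy setup for the Grandma problem. Every number below is an illustrative
# assumption, not a recommended prior.

# Two hypotheses about Grandma's behaviour:
#   H1: she calls on Thursday if and only if she is coming on Friday.
#   H2: calling and visiting are independent.
p_h1, p_h2 = 0.5, 0.5        # prior over hypotheses (assumed)
p_visit = 0.5                # assumed chance of a visit on any given Friday
p_call = 0.5                 # assumed chance of a call, under H2

# Weeks 1 and 2: she called and then visited, both times.
like_h1 = p_visit ** 2                 # under H1, a visit implies the call
like_h2 = (p_call * p_visit) ** 2      # under H2, call and visit co-occur by chance

post_h1 = p_h1 * like_h1
post_h2 = p_h2 * like_h2

# Week 3: no call on Thursday. Update on that evidence too.
post_h1 *= 1 - p_visit    # under H1, no call means she is not coming
post_h2 *= 1 - p_call     # under H2, the call just didn't happen this week
norm = post_h1 + post_h2
post_h1, post_h2 = post_h1 / norm, post_h2 / norm

# Finally: probability of a visit on Friday given no call.
#   Under H1 it is 0; under H2 it is still p_visit.
p_friday = post_h1 * 0 + post_h2 * p_visit
print(round(post_h1, 3), round(post_h2, 3), round(p_friday, 3))  # 0.8 0.2 0.1
```

Different assumed numbers will of course move the answer around; that is why the priors need to be disclosed.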
Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."
I said: "I do!"
He paused a moment and then said: "Hmm. Yeah, so do I."
I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.
New on arXiv:
David H. Wolpert, Gregory Benford (2010). What does Newcomb's paradox teach us?
...In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb's paradox using a recent extension of game theory
Warning: Your reality is out of date
tl;dr:
There are established facts that don't change perceptibly (the boiling point of water), and there are facts that change constantly (outside temperature, time of day)
In between these two intuitive categories, however, a third class of facts could be defined: facts that do change measurably, or even drastically, over human lifespans, but still so slowly that people, after first learning about them, have a tendency to dump them into the "no-change" category unless they're actively paying attention to the f...
Which very-low-effort activities are most worthwhile? By low effort, I mean about as hard as solitaire, facebook, blogs, TV, most fantasy novels, etc.
When I was young, I happened upon a book called "The New Way Things Work," by David Macaulay. It described hundreds of household objects, along with descriptions and illustrations of how they work. (Well, a nuclear power plant, and the atoms within it, aren't household objects. But I digress.) It was really interesting!
I remember seeing someone here mention that they had read a similar book as a kid, and it helped them immensely in seeing the world from a reductionist viewpoint. I was wondering if anyone else had anything to say on the matter.
I have two basic questions that I am confused about. This is probably a good place to ask them.
What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.
Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probabili
LHC to shut down for a year to address safety concerns: http://news.bbc.co.uk/2/hi/science/nature/8556621.stm
I've just finished reading Predictably Irrational by Dan Ariely.
I think most LWers would enjoy it. If you've read the sequences, you probably won't learn that many new things (though I did learn a few), but it's a good way to refresh your memory (and it probably helps memorization to see those biases approached from a different angle).
It's a bit light compared to going straight to the studies, but it's also a quick read.
Good to give as a gift to friends.
Game theorists discuss the one-shot Prisoner's Dilemma, why people who don't know game theory suggest the irrational strategy of cooperating, and how to make them intuitively see that defection is the right move.
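For the bare logic of why defection is called the rational move in the one-shot game, here is a minimal sketch (the payoff numbers are the conventional illustrative ones, not taken from the linked discussion):

```python
# One-shot Prisoner's Dilemma, with the conventional payoff ordering
# T > R > P > S (the numbers themselves are illustrative only).
T, R, P, S = 5, 3, 1, 0
payoff = {
    ("defect", "cooperate"):    T,  # temptation
    ("cooperate", "cooperate"): R,  # reward for mutual cooperation
    ("defect", "defect"):       P,  # punishment for mutual defection
    ("cooperate", "defect"):    S,  # sucker's payoff
}

# Defection strictly dominates: whatever the other player does,
# defecting pays more than cooperating.
for other in ("cooperate", "defect"):
    assert payoff[("defect", other)] > payoff[("cooperate", other)]

print("Defection dominates, even though mutual cooperation (R, R) "
      "beats mutual defection (P, P) for both players.")
```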
Is there a way to view an all time top page for Less Wrong? I mean a page with all of the LW articles in descending order by points, or something similar.
While not so proficient in math, I do scour arXiv on occasion, and am rewarded with gems like this. Enjoy :)
"Lessons from failures to achieve what was possible in the twentieth century physics" by Vesselin Petkov http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf
I have a problem with the wording of "logical rudeness". Even after having seen it many times, I reflexively parse it to mean being rude by being logical-- almost the opposite of the actual meaning.
I don't know whether I'm the only person who has this problem, but I think it's worth checking.
"Anti-logical rudeness" strikes me as a good bit better.
Thermodynamics post on my blog. Not directly related to rationality, but you might find it interesting if you liked Engines of Cognition.
Summary: molar entropy is normally expressed as Joules per Kelvin per mole, but can also be expressed, more intuitively, as bits per molecule, which shows the relationship between a molecule's properties and how much information it contains. (Contains references to two books on the topic.)
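The unit conversion itself is just division by R·ln 2. A minimal sketch, with an approximate textbook value for liquid water used purely for illustration:

```python
import math

R = 8.314  # gas constant in J/(K*mol), i.e. Boltzmann's constant times Avogadro's number

def bits_per_molecule(s_molar):
    """Convert a molar entropy in J/(K*mol) into bits per molecule."""
    return s_molar / (R * math.log(2))

# Example: liquid water's standard molar entropy is roughly 70 J/(K*mol)
# (an approximate textbook value, used here only to make the units concrete).
print(bits_per_molecule(70))  # ~= 12 bits per molecule
```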
I'm considering doing a post about "the lighthouse problem" from Data Analysis: a Bayesian Tutorial, by D. S. Sivia. This is example 3 in chapter 2, pp. 31-36. It boils down to finding the center and width of a Cauchy distribution (physicists may call it Lorentzian), given a set of samples.
I can present a reasonable Bayesian handling of it -- this is nearly mechanical, but I'd really like to see a competent Frequentist attack on it first, to get a good comparison going, untainted by seeing the Bayesian approach. Does anyone have suggestions for ways to structure the post?
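For context, the "nearly mechanical" Bayesian part amounts to a brute-force grid over the Cauchy centre and width with flat priors; here is a rough sketch (the data and grid ranges are made up for illustration, not taken from Sivia):

```python
import numpy as np

# Likelihood of a flash arriving at shore position x, given a lighthouse at
# position alpha along the shore and distance beta out to sea, is Cauchy:
#   p(x | alpha, beta) = beta / (pi * (beta**2 + (x - alpha)**2))
# The data below are invented purely for illustration.
x = np.array([-2.1, 0.3, 1.2, 1.9, 2.4, 3.0, 4.8, 11.6])

alphas = np.linspace(-5, 10, 301)   # candidate centres (flat prior)
betas = np.linspace(0.1, 5, 200)    # candidate widths  (flat prior)
A, B = np.meshgrid(alphas, betas, indexing="ij")

# Log-posterior (up to an additive constant): sum of log-likelihoods.
logpost = np.zeros_like(A)
for xi in x:
    logpost += np.log(B) - np.log(np.pi) - np.log(B**2 + (xi - A)**2)

i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
print("posterior mode: alpha ~", alphas[i], " beta ~", betas[j])
```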
We've had these for a year, I'm sure we all know what to do by now.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.