This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

This thread brought to you by quantum immortality.

Open Thread June 2010, Part 4
4Unnamed
See also
0khafra
See also
0h-H
upvoted both - I remember yours from a couple of years ago
0[anonymous]
The veil of ignorance was a nice touch.
0JoshuaZ
That's one of the funniest things I've seen in a while. I wish I could upvote that more.

A visual study guide to 105 types of cognitive biases

"The Royal Society of Account Planning created this visual study guide to cognitive biases (defined as "psychological tendencies that cause the human brain to draw incorrect conclusions"). It includes descriptions of 19 social biases, 8 memory biases, 42 decision-making biases, and 36 probability / belief biases."

knb120

Some random thoughts about thinking, based mostly on my own experience.

I've been playing minesweeper lately (and I've never played before). For the uninitiated, minesweeper is a game that involves using deductive reasoning (and rarely, guessing) to locate the "mines" in a grid of identical boxes. For such an abstract puzzle, it really does a good job of working the nerves, since one bad click can spoil several minutes' effort.

I was surprised to find that even when I could be logically certain about the state of a box, I felt afraid that I was incorrect (before I clicked), and (mildly) amazed when I turned out to be correct. It felt like some kind of low level psychic power or something. So it seems that our brains don't exactly "trust" deductive reasoning. Maybe because problems in the ancestral environment didn't have clean, logical solutions?

I also find that when I'm stymied by a puzzle, if I turn my attention to something else for a while, when I come back, I can easily find some way forward. The effect is stunning: an unsolvable problem becomes trivial five minutes later. I'm pretty sure there is a name for this phenomenon, but I don't know what it is. In any case, it's jarring.

Another random thought. When I'm sad about something in my life, I usually can make myself feel much better by simply saying, in a sentence, why I'm sad. I don't know why this works, but it seems to make the emotion abstract, as though it happened to somebody else.

7Alicorn
Explicitly acknowledging emotions as things with causes is a huge chunk of managing them deliberately. (I have a post in the works on this, but I'm not sure when I'll pull it together.)
0Will_Newsome
Lots of references to the CBT literature would be nice... no need to reinvent the wheel; CBT has a lot of useful things to say about NATs, and strategies to take care of them. (Then again this applies mostly to negative emotions, and deliberately managing positive emotions seems like a cool thing to do too.) That said, more instrumental rationality posts would be great.
0CronoDAS
What does NAT stand for?
2CronoDAS
I don't think that works for me. I often can't identify a specific cause of my sad feeling, and when I can, thinking about it often makes me feel worse rather than better.
3knb
Well, I don't mean ruminating about the cause of the sad feeling. That is probably one of the worst things you can do. Rather, I meant just identifying it. For example, when a girlfriend and I broke up (this was a couple years ago) I spent maybe two days feeling really depressed. Eventually, I thought to myself, "You're sad because you broke up with your girlfriend." That really put it in perspective for me. It made me think of all the cheesy teen movies where kids break up with their sweethearts and act like it's the end of the world, when the viewer sees it as a normal, even banal rite of passage to adulthood. I had always thought people who reacted like that were ridiculous. In other words, it feels like that thought put the issue in "far mode" for me.
0Blueberry
That works if there is a specific cause, but like some other people have said, my sad feelings aren't caused by external events.
3SilasBarta
Same here. I also found that often there's not any cause in the sense of something specific upsetting me; it's just an automatic reaction to not getting enough social interaction.
0Daniel_Burfoot
Arguably, problems in the modern environment don't have clean, logical solutions either! Note also that people get good at games like minesweeper and chess through learning. If the brain was primarily a big deductive logic machine, it would become good at these games immediately upon understanding the rules; no learning would be necessary.
0h-H
I'm nitpicking, but maybe it was simple pleasure at getting the game?

http://arxiv.org/abs/1006.3868

Philosophy and the practice of Bayesian statistics

Andrew Gelman, Cosma Rohilla Shalizi (Submitted on 19 Jun 2010) A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.

2cousin_it
I guess everyone here already understands this stuff, but I'll still try to summarize why "model checking" is an argument against "naive Bayesians" like Eliezer's OB persona. Shalizi has written about this at length on his blog and elsewhere, as has Gelman, but maybe I can make the argument a little clearer for novices. Imagine you have a prior, then some data comes in, you update and obtain a posterior that overwhelmingly supports one hypothesis. The Bayesian is supposed to say "done" at this point. But we're actually not done. We have only "used all the information available in the sample" in the Bayesian sense, but not in the colloquial sense! See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is, a perfect Bayesian wouldn't. The question is, why?
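A minimal sketch of the two-stage procedure described above; everything specific (the heavy-tailed data, the grid of normal-location hypotheses, a KS test standing in for the histogram comparison) is an illustrative assumption, not anything from the thread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.standard_t(df=2, size=2000)  # true process is heavy-tailed

# Prior: the data is N(mu, 1) for some mu on a grid; no other shape allowed.
mus = np.linspace(-3, 3, 61)
log_like = np.array([stats.norm.logpdf(data, loc=mu, scale=1).sum() for mu in mus])
post = np.exp(log_like - log_like.max())
post /= post.sum()
best_mu = mus[np.argmax(post)]  # the posterior piles up on one hypothesis

# Bayesian updating stops here. The model check continues: could the
# winning hypothesis plausibly have generated this data at all?
ks = stats.kstest(data, "norm", args=(best_mu, 1))
print(best_mu, ks.pvalue)  # p-value near 0: no N(mu, 1) fits, so the prior was wrong
```

The update dutifully picks the best normal, while the check reveals that the whole family, i.e. the prior, was misspecified.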
6steven0461
But my sense is that the "substantial school in the philosophy of science [that] identifies Bayesian inference with inductive inference and even rationality as such", as well as Eliezer's OB persona, is talking more about a prior implicit in informal human reasoning than about anything that's written down on paper. You can then see model checking as roughly comparing the parts of your prior that you wrote down to all the parts that you didn't write down. Is that wrong?
1cousin_it
I don't think informal human reasoning corresponds to Bayesian inference with any prior. Maybe you mean "what informal human reasoning should be". In that case I'd like a formal description of what it should be (ahem).
2Cyan
Solomonoff induction, mebbe?
2cousin_it
Wei Dai thought up a counterexample to that :-)
1steven0461
Gelman/Shalizi don't seem to be arguing from the possibility that physics is noncomputable; they seem to think their argument (against Bayes as induction) works even under ordinary circumstances.
0magfrump
It seems to me that Wei Dai's argument is flawed (and I may be overly arrogant in saying this; I haven't even had breakfast this morning). He says that the probability of knowing an uncomputable problem would be evaluated at 0 originally; I don't fundamentally see why "measure zero hypothesis" is equivalent to "impossible." For example, the hypothesis "they're making it up as they go along," having probability 2^(-S) based on the size of the set, shrinks at a certain rate as evidence arrives. That means that given any finite amount of inference the AI should be able to distinguish between two possibilities (they are very good at computing or guessing vs. all humans have been wrong about mathematics forever); unless new evidence comes in to support one over the other, "humans have been wrong forever" should have a consistent probability mass which will grow in comparison to the other hypothesis, "they are making it up." Nobody seems to propose this (although I may have missed it skimming some of the replies), and it seems like a relatively simple thing (to me) to adjust the AI's prior distribution to give "impossible" things low but nonzero probability.
0cousin_it
Wei Dai's argument was specifically against the Solomonoff prior, which assigns probability 0 to the existence of halting problem oracles. If you have an idea how to formulate another universal prior that would give such "impossible" things positive probability, but still sum to 1.0 over all hypotheses, then by all means let's hear it.
0magfrump
Yeah, well, it is certainly a good argument against that. The title of the thread is "is induction unformalizable?", a point I'm unconvinced of. If I were to formalize some kind of prior, I would probably use a lot of epsilons (since zero is not a probability), including an epsilon for "things I haven't thought up yet." On the other hand I'm not really an expert on any of these things, so I imagine Wei Dai would be able to poke holes in anything I came up with anyway.
1cousin_it
There's no general way to have a "none of the above" hypothesis as part of your prior, because it doesn't make any specific prediction and thus you can't update its likelihood as data comes in. See the discussion with Cyan and others about NOTA somewhere around here.
0magfrump
Well then I guess I would hypothesize that solving the problem of a universal prior is equivalent to solving the problem of NOTA. I don't really know enough to get technical here. If your point is that it's not a good idea to model humans as Bayesians, I agree. If your point is that it's impossible, I'm unconvinced. Maybe after I finish reading Jaynes I'll have a better idea of the formalisms involved.
5SilasBarta
I thought that what I'm about to say is standard, but perhaps it isn't. Bayesian inference, depending on how detailed you do it, does include such a check. You construct a Bayes network (as a directed acyclic graph) that connects beliefs with anticipated observations (or intermediate other beliefs), establishing marginal and conditional probabilities for the nodes. As your expectations are jointly determined by the beliefs that lead up to them, getting a wrong answer will knock down the probabilities you assign to the beliefs leading up to them. Depending on the relative strengths of the connections, you know whether to reject your parameters, your model, or the validity of the observation. (Depending on how detailed the network is, one input belief might be "I'm hallucinating or insane", which may survive with the highest probability.) This determination is based on which of them, after taking this hit, has the lowest probability.

Pearl also has written Bayesian algorithms for inferring conditional (in)dependencies from data, and therefore what kinds of models are capable of capturing a phenomenon. He furthermore has proposed causal networks, which have explicit causal and (oppositely) inferential directions. In that case, you don't turn a prior into a posterior: rather, the odds you assign to an event at a node are determined by the "incoming" causal "message", and, from the other direction, the incoming inferential message.

But neither "model checking" nor Bayesian methods will come up with hypotheses for you. Model checking can attenuate the odds you assign to wrong priors, but so can Bayesian updating. The catch is that, for reasons of computation, a Bayesian might not be able to list all the possible hypotheses and so arbitrarily restricts the hypothesis space, potentially being left with only bad ones. But Bayesians aren't alone in that either. (Please tell me if this sounds too True Believerish.)
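A toy numerical sketch of the determination described above, with invented priors and likelihoods (none of these numbers come from Pearl or from the comment):

```python
# Candidate culprits upstream of a failed prediction, treated here as
# mutually exclusive for simplicity, with prior probabilities...
priors = {"parameters wrong": 0.10, "model wrong": 0.02,
          "observation bad": 0.05, "everything fine": 0.83}
# ...and the probability each assigns to seeing the anomalous data point.
lik = {"parameters wrong": 0.60, "model wrong": 0.70,
       "observation bad": 0.80, "everything fine": 0.01}

joint = {h: priors[h] * lik[h] for h in priors}
z = sum(joint.values())
posterior = {h: joint[h] / z for h in joint}
print({h: round(p, 3) for h, p in posterior.items()})
# "everything fine" takes the hit; the relative strengths of the prior
# connections decide whether to blame parameters, model, or observation.
```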
1DPiepgrass
I have been googling for references to "computational epistemology", "algorithmic epistemology", "bayesian algorithms" and "epistemic algorithm" on LessWrong, and (other than my article) this is the only reference I was able to find to things in the vague category of (i) proposing that the community work on writing real, practical epistemic algorithms (i.e. in software), (ii) announcing having written epistemic algorithms or (iii) explaining how precisely to perform any epistemic algorithm in particular. (A runner-up is this post which aspires to "focus on the ideal epistemic algorithm" but AFAICT doesn't really describe an algorithm.) Who is "Pearl"?
2SilasBarta
Oh wow, thanks. I think at the time I was overconfident that some more educated Bayesian had worked through the details of what I was describing. But the causality-related stuff is definitely covered by Judea Pearl (the Pearl I was referring to) in his book *Causality* (2000).
4saturn
This sounds like a confusion between a theoretical perfect Bayesian and practical approximations. The perfect Bayesian wouldn't have any use for model checking because from the start it always considers every hypothesis it is capable of formulating, whereas the prior used by a human scientist won't ever even come close to encoding all of their knowledge. (A more "Bayesian" alternative to model checking is to have an explicit "none of the above" hypothesis as part of your prior.)
2CarlShulman
NOTA is addressed in the paper as inadequate. What does it predict?
0Cyan
See here.
2cousin_it
I don't see how that's possible. How do you compute the likelihood of the NOTA hypothesis given the data?
3Cyan
NOTA is not well-specified in the general case, but in at least one specific case it's been done. Jaynes's student Larry Bretthorst made a useable NOTA hypothesis in a simplified version of a radar target identification problem (link to a pdf of the doc). (Somewhat bizarrely, the same sort of approach could probably be made to work in certain problems in proteomics in which the data-generating process shares the key features of the data-generating process in Bretthorst's simplified problem.)
0cousin_it
If I'm not mistaken, such problems would contain some enumerated hypotheses - point peaks in a well-defined parameter space - and the NOTA hypothesis would be a uniformly thin layer over the rest of that space. Can't tell what key features the data-generating process must have, though. Or am I failing reading comprehension again?
0Cyan
Yep. I think the key features that make the NOTA hypothesis feasible are (i) all possible hypotheses generate signals of a known form (but with free parameters), and (ii) although the space of all possible hypotheses is too large to enumerate, we have a partial library of "interesting" hypotheses of particularly high prior probability for which the generated signals are known even more specifically than in the general case.
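A toy sketch in the spirit of features (i) and (ii) only, not Bretthorst's actual radar model: a small library of hypotheses with fully known signals, plus a NOTA component whose free frequency parameter is marginalized over a coarse grid (all specifics are my own choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
data = np.sin(9 * np.pi * t) + rng.normal(0.0, 0.3, t.size)  # not in the library

def log_lik(signal):
    # Gaussian noise model shared by every hypothesis.
    return stats.norm.logpdf(data - signal, 0.0, 0.3).sum()

# Partial library of "interesting" hypotheses with fully specified signals.
lib = {"H1": np.sin(2 * np.pi * t), "H2": np.sin(4 * np.pi * t)}
logs = {name: log_lik(sig) for name, sig in lib.items()}

# NOTA: "a signal of the known form but unknown frequency" -- a thin layer
# spread over the rest of the parameter space.
freqs = np.arange(1, 20)
logs["NOTA"] = np.logaddexp.reduce(
    [log_lik(np.sin(f * np.pi * t)) - np.log(len(freqs)) for f in freqs])

names = list(logs)
w = np.exp(np.array([logs[n] for n in names]) - max(logs.values()))
print(dict(zip(names, (w / w.sum()).round(3))))  # NOTA soaks up the posterior
```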
2Matt_Simpson
Model checking is completely compatible with "perfect Bayesianism." In the practice of Bayesian statistics, how often is the prior distribution you use exactly the same as your actual prior distribution? The answer is never. Really, do you think your actual prior follows a gamma distribution exactly? The prior distribution you use in the computation is a model of your actual prior distribution. It's a map of your current map. With this in mind, model checking is an extremely handy way to make sure that your model of your prior is reasonable. However, a difference between the data and a simulation from your model doesn't necessarily mean that you have an unreasonable model of your prior. You could just have really wrong priors. So you have to think about what's going on to be sure. This does somewhat limit the role of model checking relative to what Gelman is pushing.
0cousin_it
You shouldn't need real-world data to determine if your model of your own prior was reasonable or not. Something else is going on here. Model checking uses the data to figure out if your prior was reasonable, which is a reasonable but non-Bayesian idea.
0Matt_Simpson
Well, if you're just checking your prior, then I suppose you don't need real data at all. Make up some numbers and see what happens. What you're really checking (if you're being a Bayesian about it, i.e. not like Gelman and company) is not whether your data could come from a model with that prior, but rather whether the properties of the prior you chose seem to match up with the prior you're modeling. For example, maybe the prior you chose forces two parameters, a and b, to be independent no matter what the data say. In reality, though, you think it's perfectly reasonable for there to be some association between those two parameters. If you don't already know that your prior is deficient in this way, posterior predictive checking can pick it up. In reality, you're usually checking both your prior and the other parts of your model at the same time, so you might as well use your data, but I could see using different fake data sets in order to check your prior in different ways.
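A minimal sketch of the deficiency described above; the product prior and the coupling strength are invented numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
# The prior as modeled: a and b independent standard normals (a product prior).
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(0.0, 1.0, 5000)
print(np.corrcoef(a, b)[0, 1])  # ~0 for every draw: this model of your
                                # prior cannot express any a-b association

# The prior you actually believe: b tends to move with a. A joint prior
# such as b | a ~ N(0.5 * a, 1) repairs the deficiency.
b2 = rng.normal(0.5 * a, 1.0)
print(np.corrcoef(a, b2)[0, 1])  # ~0.45: the association is expressible
```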
2WrongBot
Apologies if this has already been covered elsewhere, but isn't a prior just a belief? The prior is by definition whatever it was rational to believe before the acquisition of new evidence (assuming a perfect Bayesian, anyway). I'm not quite sure what you mean when you propose that a prior could be wrong; either all priors are statements of belief and therefore true, or all priors are statements of probability that must be less accurate than a posterior that incorporates more evidence. I suspect that there are additional steps I'm not considering.
2cousin_it
Nope, this isn't part of the definition of the prior, and I don't see how it could be. The prior is whatever you actually believe before any evidence comes in. If you have a procedure to determine which priors are "rational" before looking at the evidence, please share it with us. Some people here believe religiously in maxent, others swear by the universal prior, I personally rather like reference priors, but the Bayesian apparatus doesn't really give us a means of determining the "best" among those. I wrote about these topics here before. If you want the one-word summary, the area is a mess.
0WrongBot
Thanks for the links (and your post!), I now have a much clearer idea of the depths of my ignorance on this topic. I want to believe that there is some optimal general prior, but it seems much more likely that we do not live in so convenient a world.
0thomblake
But if you can evaluate how good a prior is, then there has to be an optimal one (or several). You have to have something as your prior, and so whichever one is the best out of those you can choose is the one you should have. As for how certain you are that it's the best, it's (to some extent) turtles all the way down.
0WrongBot
Instead of using "optimal general prior", I should have said that I was pessimistic about the existence of a standard for evaluating priors (or, more properly, prior probability distributions) that is optimal in all circumstances, if that's any clearer. Having thought about the problem some more, though, I think my pessimism may have been premature. A prior probability distribution is nothing more than a weighted set of hypotheses. A perfect Bayesian would consider every possible hypothesis, which is impossible unless hypotheses are countable, and they aren't; the ideal for Bayesian reasoning as I understand it is thus unattainable, but this doesn't mean that there aren't benefits to be found in moving toward that ideal. So, perfect Bayesian or not, we have some set of hypotheses which need to be located before we can consider them and assign them a probabilistic weight. Before we acquire any rational evidence at all, there is necessarily only one factor that we can use to distinguish between hypotheses: how hard they are to locate. If it is also true that hypotheses which are easier to locate make more predictions and that hypotheses which make more predictions are more useful (and while I have not seen proofs of these propositions I'm inclined to suspect that they exist), then we are perfectly justified in assigning a probability to a hypothesis based on its locate-ability. This reduces the problem of prior probability evaluation to the problem of locate-ability evaluation, to which it seems maxent and its fellows are proposed answers. It's again possible there is no objectively best way to evaluate locate-ability, but I don't yet see a reason for this to be so. Again, if I've mis-thought or failed to justify a step in my reasoning, please call me on it.
8cousin_it
This doesn't sound right to me. Imagine you're tossing a coin repeatedly. Hypothesis 1 says the coin is fair. Hypothesis 2 says the coin repeats the sequence HTTTHHTHTHTTTT over and over in a loop. The second hypothesis is harder to locate, but makes a stronger prediction. The proper formalization for your concept of locate-ability is the Solomonoff prior. Unfortunately we can't do inference based on it because it's uncomputable. Maxent and friends aren't motivated by a desire to formalize locate-ability. Maxent is the "most uniform" distribution on a space of hypotheses; the "Jeffreys rule" is a means of constructing priors that are invariant under reparameterizations of the space of hypotheses; "matching priors" give you frequentist coverage guarantees, and so on. Please don't take my words as gospel just because I sound knowledgeable! At this point I recommend you to actually study the math and come to your own conclusions. Maybe contact user Cyan, he's a professional statistician who inspired me to learn this stuff. IMO, discussing Bayesianism as some kind of philosophical system without digging into the math is counterproductive, though people around here do that a lot.
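For concreteness, one standard worked instance of the "Jeffreys rule" mentioned above (a textbook result, not something derived in this thread): for a Bernoulli parameter $\theta$ the Fisher information is $I(\theta) = 1/(\theta(1-\theta))$, so

$$\pi(\theta) \propto \sqrt{I(\theta)} = \frac{1}{\sqrt{\theta(1-\theta)}}, \qquad \theta \in (0,1),$$

which is the Beta(1/2, 1/2) distribution, and the construction gives the same answer under any smooth reparameterization of $\theta$.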
0WrongBot
I'm in the process of digging into the math, so hopefully at some point soon I'll be able to back up my suspicions in a more rigorous way. I was talking about the number of predictions, not their strength. So Hypothesis 1 predicts any sequence of coin-flips that converges on 50%, and Hypothesis 2 predicts only sequences that repeat HTTTHHTHTHTTTT. Hypothesis 1 explains many more possible worlds than Hypothesis 2, and so without evidence as to which world we inhabit, Hypothesis 1 is much more likely. Since I've already conceded that being a Perfect Bayesian is impossible, I'm not surprised to hear that measuring locate-ability is likewise impossible (especially because the one reduces to the other). It just means that we should determine prior probabilities by approximating Solomonoff complexity as best we can. Thanks for taking the time to comment, by the way.
2cousin_it
Then let's try this. Hypothesis 1 says the sequence will consist of only H repeated forever. Hypothesis 2 says the sequence will be either HTTTHHTHTHTTTT repeated forever, or TTHTHTTTHTHHHHH repeated forever. The second one is harder to locate, but describes two possible worlds rather than one. Maybe your idea can be fixed somehow, but I see no way yet. Keep digging.
2WrongBot
I've just reread Eliezer's post on Occam's Razor and it seems to have clarified my thinking a little. I originally said: But I would now say: This solves the problem your counterexample presents: Hypothesis 1 describes only one possible world, but Hypothesis 2 requires say, ~30 more bits of information (for those particular strings of results, plus a disjunction) to describe only two possible worlds, making it 2^30 / 2 times less likely.
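Spelling out that arithmetic under an assumed description-length prior $P(H) \propto 2^{-L(H)}$ (one way to formalize the bit-counting above, not necessarily the exact one intended):

$$\frac{P(H_2)}{P(H_1)} = \frac{2 \cdot 2^{-(L+30)}}{2^{-L}} = 2^{-29},$$

so the extra ~30 bits cost a factor of $2^{30}$, and covering two worlds instead of one buys back only a factor of 2.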
2cousin_it
Then let's try this. Hypothesis 1 says the sequence will consist of only H repeated forever. Hypothesis 2 says the sequence will be HTTTHHTHTHTTTT* repeated forever, where the * can take different values on each repetition. The second hypothesis is harder to locate but describes an infinite number of possible worlds :-) If at first you don't succeed, try, try again!
0WrongBot
The problem with this counterexample is that you can't actually repeat something forever. Even taking the case where we repeat each sequence 1000 times, which seems like it should be similar, you'll end up with 1000 coin flips and 15000 coin flips for Hypothesis 1 and Hypothesis 2, respectively. So the odds of being in a world where Hypothesis 1 is true are 1 in 2^1000, but the odds of being in a world where Hypothesis 2 is true are 1 in 2^15000. It's an apples to balloons comparison, basically. (I spent about twenty minutes staring at an empty comment box and sweating blood before I figured this out, for the record.)
0cousin_it
I think this is still wrong. Take the finite case where both hypotheses are used to explain sequences of a billion throws. Then the first hypothesis describes one world, and the second one describes an exponentially huge number of worlds. You seem to think that the length of the sequence should depend on the length of the hypothesis, and I don't understand why.
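For a sense of scale, assuming the wildcard version of Hypothesis 2 from above (14 fixed symbols plus one free symbol, so 15 throws per repetition; the bookkeeping is mine):

$$\#\{\text{worlds consistent with } H_2\} = 2^{\lfloor 10^9 / 15 \rfloor} \approx 2^{6.7 \times 10^7},$$

against exactly one world for Hypothesis 1.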
0WrongBot
That is an awesome counter-example, thank you. I think I may wait to ponder this further until I have a better grasp of the math involved.
0thomblake
I'm not sure I'm willing to grant that's impossible in principle. Presumably, you need to find some way of choosing your priors, and some time later you can check your calibration, and you can then evaluate the effectiveness of one method versus another. If there's any way to determine whether you've won bets in a series, then it's possible to rank methods for choosing the correct bet. And that general principle can continue all the way down. And if there isn't any way of determining whether you've won, then I'd wonder if you're talking about anything at all (weird thought experiments aside).
0Blueberry
That check should be part of updating your prior. If you updated and got a hypothesis that didn't fit the data, you didn't update very well. You need to take this into account when you're updating (and you also need to take into account the possibility of experimental error: there's a small chance the data are wrong).
3Morendil
Hopefully the Book Club will get around to covering that as part of Chapter 4. I can't recall that it has anything to do with "updating your prior"; Jaynes just says that if you get nonsense posterior probabilities, you need to go back and include additional hypotheses in the set you're considering, and this changes the analysis. See also the quote (I can't be bothered to find it now but I posted it a while ago to a quotes thread) where Jaynes says probability theory doesn't do the job of thinking up hypotheses for you.

About the Rumsfeld quote mentioned in the most recent top-level post:

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we now know we don’t know. But there are also unknown unknowns. These are things we do not know we don’t know.

Why is it that people mock Rumsfeld so incessantly for this? Whatever reason you might have not to like him, this is probably the most insightful thing any government official has said at a press conference. And yet he's ridiculed for it by the very same people that are emphasizing, or at least should be emphasizing, the importance of the insight.

Heck, some people even thought it was clever to format it into a poem.

What gives? Is this just a case of "no good deed goes unpunished"?

ETA: In your answer, be sure to say, not just what's wrong with the quote or its context, but why people don't make that as their criticism instead of just saying, ha ha, the quote sure is funny.

4simplicio
I agree that the quote is insightful and brilliant. I think it was seen by certain (tribally liberal) people as somehow euphemistic or sophistic, as though he were trying to invent a whole new epistemology to justify war. Politics is the mind-killer.
3cupholder
Some ideas.

* People didn't/don't like Rumsfeld.
* In the quote's original context, Rumsfeld used it as the basis of a non-answer to a question:
* People think Rumsfeld's particular phrasing is funny, and people don't judge it as insightful enough to overcome the initial 'hee hee that sounds funny' reaction.
* However insightful the quote is, Rumsfeld arguably failed to translate it into appropriate action (or appropriate non-action), which might have made it seem simply ironic or contrary rather than insightful.

(Edit to fix formatting.)
1SilasBarta
So what would be the non-funny way to say it? IMHO, Rumsfeld's phrasing is what you get if you just say it the most direct way possible. This is what always bothers me: people who say, "hey, what you said was valid and all, but the way you said it was strange/stupid". Er, so what would be the non-strange/stupid way to say it? "Uh, implementation issue." In the exchange, it looks like the reporter's followup question is nonsense. It only makes sense to ask if it's a known unknown, since you, er, never know the unknown unknowns. (Hee hee! I said something that sounds funny! Now you can mock me while also promoting what I said as insightful!) See also the edit to my original comment.
0cupholder
I'm not sure I'm capable of a good answer for the edited version of the question. I would guess (even more so than I'm guessing in my grandparent comment!) that once someone's 'ha ha' reaction kicks in (whether it's a 'ha ha his syntax is funny,' 'ha ha how ironic those words are in that context,' or a 'ha ha look at him scramble to avoid that question' kind of 'ha ha'), it obscures the perfectly rational denotation of what Rumsfeld said. I don't know of a way to make it less funny without losing directness. I think the verbal (as opposed to situational) humor comes from a combination of saying the word 'known' and its derivatives lots of times in the same paragraph, using the same kind of structure for consecutive clauses/sentences, and the fact that what Rumsfeld is saying appears obvious once he's said it. And I can't immediately think of a direct way of expressing precisely what Rumsfeld's saying without using the same kind of repetition, and what he's saying will always sound obvious once it's said. Things that are obvious once thought of, but not before, are often funny when pointed out, especially when pointed out in a direct and pithy way. That's basically how observational comedians operate. (See also Yogi Berra.) It's one of those quirks of human behavior a public speaker just has to contend with. Strictly speaking that's true, although for Rumsfeld to avoid the question on that basis is IMO at best pedantic; it's not hard to get an idea of what the reporter is trying to get at, even though their question's ill-phrased. (Belated edit - I should say that it would be pedantic, not that it is pedantic. Rumsfeld didn't actually avoid the question based on the reporter's phrasing, he just refused to answer.)
1SilasBarta
Right, that would make sense, except that the very same people, upon shifting gears and nominally changing topics, suddenly find this remark insightful -- "but ignore this when we go back to mocking Rumsfeld!" Wow, you have got to see Under Siege 2. It has this exchange (from memory): Bad guy #2: What's that? [...] Bad guy #1: It's a chemical weapons plant. And we know about it. And they know that we know. But we make-believe that we don't know, and they make-believe that they believe that we don't know, but know that we know. Everybody knows. Yes, "damned if you do, damned if you don't" is fun, but ultimately to be avoided by respectable people. Right, but aren't they typically followed by the appreciation of the insight rather than derision of whoever points it out? True, but it's not really Rumsfeld's job to improve reporters' questions. I mean, he might be a Bayesian master if he did, but it's not really to be expected.
0cupholder
I imagine the people who used the quote to mock Rumsfeld were already inclined to treat the quote uncharitably, and used its funniness/odd-soundingness as a pretext to mock him. Yeah, that got a giggle from me. Makes me wonder why some kinds of repetition are funny and some aren't! Agreed - I didn't mean to condone simultaneously mocking Rumsfeld's quote while acknowledging its saneness, just to explain why one might find it funny. It is (well, was) his job to make a good faith effort to try and answer their questions. (At least on paper, anyway. If we're being cynical, we might argue that his actual job was to avoid tough questions.) If I justified evading otherwise good questions in a Q&A because of minor lexical flubs, that would make the Q&A something of a charade.
2NancyLebovitz
It's possibly a matter of people being already disposed to dislike Rumsfeld, combined with a feeling that if he had so much understanding of ignorance, he shouldn't have been so pro-war.
1WrongBot
I agree that it's a brilliant idea, and that's why I cited him. He does the best job of describing that particular idea that I know of, and I'm amazed, as you are, that he said it at a press conference. I vehemently disagree with his politics, but that doesn't make him stupid or incapable of brilliance. If the tone of my post came across as mocking, that was not at all my intention.
0SilasBarta
I didn't mean to imply you were mocking him; I just mentioned your post because that's what reminded me to ask what I've been wondering about -- and you saved me some effort in finding something to cut-and-paste ;-)
1Richard_Kennaway
I am surely not the first to recognise the similarity to this poem. ETA: no, I'm not.
0SilasBarta
Hm, those are superficially similar, maybe, but I'm glad that someone, at least, was asking "er, what's the deal with the Rumsfeld quote?" back in '03.
0wedrifid
Wow, really? I honestly didn't know that quote ever provoked ridicule! Of course I also don't know how Rumsfeld is and didn't know he was a politician.

Part one of a five part series on the Dunning-Kruger effect, by Errol Morris.

http://opinionator.blogs.nytimes.com/2010/06/20/the-anosognosics-dilemma-1/

Also note that Oscar-winning director Morris's next project is a dark comedy that is a fictionalized version of the founding of Alcor!

2arundelo
Ooh, it's nice to see more details on the lemon juice bank robber. When I first heard about him I thought he was probably schizophrenic. Maybe he was, but the details make it sound like he may indeed have been just really stupid.
2gwern
Isn't that a bad thing? I suspect a major source will be that recent book...
0Kevin
I thought that Morris's 30 minute interview with Saul Kent showed a favorable perspective on cryonics, or at least a true non-bias. Watch and decide for yourself: http://www.youtube.com/watch?v=HaHavhQllDI&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=36 http://www.youtube.com/watch?v=Psm96dR1d1A&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=37 http://www.youtube.com/watch?v=gBYIzWblGTI&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=38

On not being able to cut reality at the joints because you don't even know what a joint is: diagnosing schizophrenia

If you gave Aristotle ten thousand unplugged computers of different makes and models, no matter how systematically he analyzed them he'd not only be wrong, he'd be misleadingly wrong. He would find that they were related by shape -- rectangles/squares; by color -- black, white, or tan; by size/weight; by material.

Aristotle was smart, but there is nothing he could ever learn about computers from his investigations. His science is all wrong for w

... (read more)
h-H70

genes, memes and parasites?

tl;dr: "People who suffer from schizophrenia are, in fact, three times more likely to carry T. gondii than those who do not."

"Over the last five years or so, evidence has been building that some human cultural shifts might be influenced, or even caused, by the spread of Toxoplasma gondii."

"In the United States, 12.3 percent of women tested carried the parasite, and in the United Kingdom only 6.6 percent were infected. But in some countries, statistics were much higher. 45 percent of those tested in France were infected, and in Yugoslavia 66.8 percent were infected!"

1wedrifid
Wow. How is this parasite spread? Could those 'girly germs' that I avoided in primary school actually reduce my chances of getting schizophrenia?
1h-H
wait, what's a girly germ? I googled it and it gave me a link about a Micronesian island :/
2wedrifid
Do young kids where you come from tease each other about the other sex? 'Cooties?' Whatever they call it. My question is how the parasite is spread. What does that 12.3% mean for the rest of the population? Why did they only test women?
3Morendil
It's a major pregnancy risk.
0wedrifid
Ick. My double posting browser bug again.
2JoshuaZ
Have you tried using another browser? That might help you figure out if the problem is actually on the browser end and not something weird with the LW software.
1wedrifid
I'm using a different browser (different computer same browser by name) now and it is working fine. My other browser seems to work fine for a while after I restart it until some event causes it to thereafter double post every time. My hunch is that I could identify the triggering of one of the plugins as the cause. Even then the symptom is outright bizarre. What kind of bug would make the browser double send all post requests? Perhaps a failed attempt at spyware! No matter. I don't like my other computer anyway.

A recent study found that one effective way to resist procrastination in future tasks is to forgive previous procrastination, because the negative emotions that would otherwise remain create an ugh field around that task.

I only found the study recently, but I'd personally found this to be effective before. Forcing your way through an ugh field isn't sustainable due to our limited supply of willpower (this is hardly a new idea, but I haven't seen it referenced in my limited readings on LW).

0wedrifid
Some people have tried to emphasize that point, but it isn't universally understood.

Deus Ex: Human Revolution

IGN Preview

It has been a while since I needed to buy a new computer to play a game.

In addition to the game being a sequel to Deus Ex and looking generally bad-ass, transhumanism is explicitly mentioned. From the FAQ:

Essentially, DX: HR explores the beginnings of human augmentation and the transhumanism movement is a major influence in the game. There are people who think it's "playing God" to modify the body whatsoever and there are people (Transhumanists) who think it's the natural evolution of the human species to utilise tech

... (read more)

I remember a post by Eliezer in which he was talking about how a lot of people who believe in evolution are actually exhibiting the same thinking styles that creationists use when they justify their belief in evolution (using buzz words like "evidence" and "natural selection" without having a deep understanding of what they're talking about, having Guessed the Teacher's Password). I can't remember what this post was called - does anybody remember? I remember it being good and wanted to refer people to it.

8Vladimir_M
I remember reading a post titled "Science as Attire," which struck me as making a very good point along these lines. It could be what you're looking for. As a related point, it seems to me that people who do understand evolution (and generally have a strong background in math and natural sciences) are on average heavily biased in their treatment of creationism, in at least two important ways. First, as per the point made in the above linked post, they don't stop to think that the great majority of folks who do believe in evolution don't actually have any better understanding of it than creationists. (In fact, I would say that the best informed creationists I've read, despite the biases that lead them towards their ultimate conclusions, have a much better understanding of evolution than, say, a typical journalist who will attack them as ignorant.) Second, they tend to way overestimate the significance of the phenomenon. Honestly, if I were to write down a list of widespread delusions sorted by the practical dangers they pose, creationism probably wouldn't make the top fifty.
7Mass_Driver
I'm extremely curious to hear both your list and JoshuaZ's list of the top 20 or so most harmful delusions. Feel free to sort by category (1-4, 5-10, 11-20, etc.) rather than rank in individual order.
7CronoDAS
I'll give you a big one: Dying a martyr's death gives you a one-way ticket to Paradise.
6JoshuaZ
I've separated some forms of alternative medicine out when one might arguably put them closer together. Also, I'm including Young Earth Creationism, but not creationism as a whole. Where that goes might be a bit more complicated. There's some overlap between some of these (such as young earth creationism and religion). The list also does not include any beliefs that have a fundamentally moral component. I've tried to not include beliefs which are stupid but hard to deal with empirically (say that there's something morally inferior about specific racial groups). Finally, when compiling this list I've tried to avoid thinking too much about the overall balance that the delusion provides. So for example, religion is listed where it is based on the harm it does, without taking into account the societal benefits that it also produces.

1-4: Religion, Ayurveda, Homeopathy, Traditional Chinese medicine (as standardized post 1950s)

5-10: The belief that intelligence differences have no strong genetic component. The belief that intelligence differences have no strong environmental component. The belief that there are no serious existential threats to humans. The belief that external cosmetic features or national allegiances are strong indicators of mental superiority or inferiority. That human females have fundamentally less mental capacity and that this difference is enough to be a useful data point when evaluating humans. The belief that the Chinese government can be trusted to benefit its people or decide what information they should or should not have access to. (The primary reason this gets on the list is the sheer size of China. There are other governments which are much, much worse and have similar delusions by the people. But the damage level done is frequently much smaller.)

11-20: Vaccines cause autism. Young Earth Creationism. Invisible Hand of the Market solves everything. Government solves everything. Providence. That there are not fundamental limits on certain n
3wedrifid
Is it trust or fear that is the real problem in that case? What would you do as an average Chinese citizen who wanted to change the policy? (Then, the same question assuming you were an actual Chinese citizen who didn't have your philosophical mind, intelligence, idealism and resourcefulness.)
5JoshuaZ
It seems like it is a mix. From people I've spoken to in China and the impression I get from what I've read about the Chinese censorship, the majority of people are generally ok with letting the government control things and think that that's really for the best. This seems to be changing slightly with the younger generation but it is hard to tell. Good points certainly. I'm not sure any average Chinese citizen alone can do anything. If I were an actual Chinese citizen alone given my "philosophical mind, intelligence, idealism and resourcefulness," I'm not sure I'd do anything either, not because I can't, but because the risk would be high. It is easy to say "oh, people in X situation should do Y because that's morally better or better for everyone overall" when one isn't in that situation. When one's life, family, or livelihood is the one being threatened then it is obviously going to be a lot more difficult. It isn't that I'm a coward (although I might be) it is just that standing up to the government in that sort of situation takes a lot of courage that I'm pretty sure I (and most people) don't have. But if the general population took an attitude that was more willing to do minor things (spread things like TOR or other methods of getting around the Great Firewall for example), then things might be different. But even that might not have a large impact. So yeah, I may need to take this off the list.
2Emile
I get the impression that overall, the younger generation is more apathetic about politics than the older one. (Though there is also the relatively recent phenomenon of "angry youths" (fenqing), who rant on forums and such.)
3Emile
Lists like that are good! I'm a bit surprised at that one - the current Chinese government seems pretty rational and efficient to me, and I'd be hard-pressed to say what I would do differently in its place (or rather, there are things I would do differently, but I'm not sure I'd get better results). Control of information by the government should be seen mostly as a way of preserving its own power. So I'm not really sure how to interpret "The belief that the Chinese government can be trusted to [...] decide what information they should or should not have access to" - could you rephrase that belief so that its irrationality becomes more apparent, maybe tabooing "can be trusted to"? If you mean "Chinese people wrongly believe that the government is restricting information access for their own good", then I'm not sure that a lot of people actually believe that, or, for those that do, that believing it does any harm.
1JoshuaZ
Ok. My impression is that that is a common belief in China and is connected to the belief that the government doesn't actively lie. I don't have a very good citation for this other than general impressions, so I'm going to point to a relevant blog entry by a friend who spent a few years in China where she discusses this with examples. There are of course even limits to how far that will go. This is also complicated by the fact that much of the really serious harm in China (detainment of citizens for questioning policies, beatings and torture, ignoring of basic environmental and safety issues) stems from the local governments rather than the central government, and the relationship between Beijing and the local governments is very complicated. See also my remarks above to wedrifid which touch on these issues. So yeah, it may make sense to take this off the list given the lack of harm directly coming from this issue.
2Douglas_Knight
I don't interpret the story in that blog post that way at all. People repeating nationalist lies doesn't mean they've been fooled. I highly recommend these posts about the psychology of mass lies. I don't recommend the third part.
2Risto_Saarelma
This one caught my eye; I don't think I've seen this listed as an obvious delusion before. Can you maybe expand more on this? I guess the idea is that a much larger number of people could make use of math or science if they weren't predisposed to think that they belong in an incapable segment? I'm thinking of something like picking the quarter of the population that scores at the bottom of a standard IQ test or the local SAT-equivalent as the "large segment of population", though. A test for basic science and mathematics skills could be being able to successfully figure out solutions for some introductory exercises from a freshman university course in mathematics or science, given the exercise, relevant textbooks and prerequisite materials, and, say, up to a week to work things out from the textbook. It doesn't seem obvious to me that such a test would end up with results that would make the original assertion go straight into 'delusion' status. My suspicions are somewhat based on the article from a couple of years back which claimed that many freshman computer science students seem to simply lack the basic mental model building ability needed to start comprehending programming.
2JoshuaZ
Yes. And more people would go into math and science. That's a very interesting article. I think that the level of, and type of, abstraction necessary to program is already orders of magnitude beyond where most people stop being willing to do math. My own experience in regards to tutoring students who aren't doing well in math is that one of the primary issues is confidence: students of all types think they aren't good at math and thus freeze up when they see something that is slightly different from what they've done before. If they understand that they aren't bad at math, or that they don't need to be bad at math, they are much more likely to be willing to play around with a problem a bit rather than just panic. I was an undergraduate at Yale, which is generally considered to be a decent school that admits people who are by and large not dumb. And one thing that struck me was that even in that sort of setting, many people minimized the amount of math and science they took. When asked about it, the most common claim was that they weren't good at it. Some of those people are going to end up as future senators and congressmen and have close to zero idea of how science works or how statistics work, other than at the level they got from high school. If we're lucky, they know the difference between a median and a mean.
2Emile
Does anybody actually claim to believe that ?
3JoshuaZ
This view is surprisingly common. I don't want to move too much toward a potentially mind-killing subject, but the idea isn't uncommon among certain groups in US politics. Indeed, they think it so strongly about some resources that they take it almost as an ideological point. This occurs most frequently when discussing oil. Emphasis is placed on things like the Eugene Island field and abiotic oil, which they argue show we won't run out of oil. The second is particularly galling because even if the abiotic oil hypotheses were correct, the level of oil production would still be orders of magnitude below the consumption rate. I'd point more generally to followers of Julian Simon (not Simon himself per se; his own arguments were generally more nuanced and subtle than what many people seem to get out of them).
2JanetK
Where would you put 'belief in free will' and 'belief in determinism'?
4JoshuaZ
They probably wouldn't get anywhere on the list for the reason that a) I'm not convinced that either determinism or free will as often given are actually well-defined notions and b) I don't see either belief as causing much harm in practice.
5Vladimir_M
Mass_Driver: I'm not sure if that would be a smart move, since it would mean an extremely high concentration of unsupported controversial claims in a single post. Many of my opinions on these matters would require non-obvious lengthy justifications, and just dumping them into a list would likely leave most readers scratching their heads. If you're really curious, you can read the comment threads I've participated in for a sample, in particular those in which I argue against beliefs that aren't specific to my interlocutors. Also, it should be noted that the exact composition of the list would depend on the granularity of individual entries. If each entry covered a relatively wide class of beliefs, creationism might find itself among the top fifty (though probably nowhere near the top ten).
7wedrifid
In this format that sounds like a good thing! At worst it would spark curiosity and provoke discussion. At best people would encounter a startling opinion that they had never seriously considered, think about for 60 seconds then form an understanding that either agrees with yours or disagrees, for a considered reason.
1h-H
seconded, but a list of 20 seems too long/too much work, no?
1wedrifid
I'd be thinking 5. :)
2LucasSloan
Taking into account what I already said about needing to influence people who can actually use beliefs (thus controlling for things like atheism, evolution, etc.)...

1. FAI and related.
2. Inability to do math.
3. Failures around believing the state of the world is good (thinking aging is a good thing and the like).
4. Believing that politics is the best way to influence the world.
2JoshuaZ
What is the delusion here? What is the delusion here? Do you mean people convincing themselves that they can't do math? This seems too subjective to label a delusion. What do you mean by best and by influence?
2wedrifid
Inability to do math? Really? Are you talking 'disinclination to shut up and multiply' or actual ability to do math? I love math but don't really think most people need it.
4Richard_Kennaway
Dredging this up from deep nesting, because I think it's important: wedrifid says Yes. Never tell anyone that what you're teaching them is hard. When you do that, you're telling them they'll fail, telling them to fail.
5Alicorn
But if you tell them it's easy, then they will be embarrassed for failing at something easy, or can't be proud of succeeding at something easy.
3Richard_Kennaway
Telling them it's easy is also a bad idea.
2Alicorn
It strikes me that giving no information about the general difficulty of the subject is also a bad idea. (I imagined myself struggling with a topic where I had no information on how hard others found it, and my hypothetical self was ashamed, because clearly if it were something everyone found hard, they'd warn people and teach it more slowly, so it must be easy for everybody else but me.)
6Blueberry
Ideally, you'd teach the student not to be concerned with how well or how quickly they learn compared to others, which is a general learning technique that can apply to any field.
7Alicorn
Simply telling people not to worry about that doesn't... actually work, does it? That would genuinely surprise me.
8Blueberry
Math anxiety is actually very common, and one of the ways to reduce it is to make students aware of the problem. It's not as simple as saying "just don't worry", but in my experience as a tutor, it can be helpful to give gentle reminders that everyone learns at their own pace and that it may take some effort to understand a concept. Math is all about trying many blind alleys before you figure out the correct approach, and teaching using examples where you try many wrong approaches first can help students understand that you don't have to "get it" immediately, and it's ok to struggle through it sometimes. It's less "this is hard" and more "this often takes some effort to understand completely, so don't panic".
0Richard_Kennaway
When I teach, I don't say anything about "easy" or "difficult". I just teach the material. What is this "easy", this "difficult"? There is no "easy" or "difficult" for a Jedi -- there is only the work to be done and the effort it takes. "Difficult" means "I will fail". "Effort" means "I will succeed". You are torturing yourself by inventing fictional evidence. You have an entire imaginary scenario there, shadows and fog conjured from thin air.
4Peter_de_Blanc
I don't think Alicorn's evidence is completely fictional. It's a simulation. It's not as much evidence as if she had experienced it in real life, but it's much better than, e.g., the evidence Terminator provides about future AIs.
-1Richard_Kennaway
This is a distinction without a difference. "Terminator" is a simulation -- the writers didn't make it up out of nothing. Granted, their purpose is to tell an entertaining story, but the idea that this is what future AIs would be like has been around for a long time, despite Asimov's efforts to create a framework for telling stories of friendly robots. Or to put it the other way round, Alicorn's scenario is as fictional as "Terminator". It is made out of plausible-sounding elements, as Terminator is, but the "clearly" and "must" and "everybody else but me" are signs that far too much belief is being placed in it.
3SilasBarta
Right, and there's the issue of whose fault the difficulty is. Sure, the student might not really be trying. But also, the teacher may not be explaining in a way that speaks to the learner's natural fluency. A method that works for the geeky types won't work for more neurotypical types. For my part, I never have trouble explaining high school math to those who haven't completed it, even if they're told that trig, calculus, etc. is hard. It's because I first focus on finding out where exactly their knowledge deficit is and why the subject matter is useful. Of course, teachers don't have the luxury of one-on-one instruction, but yes, how you present the material matters greatly.
2PhilGoetz
Most people don't need to understand evolution. Maybe we should distinguish between "harmful to self", "harmful to society", and "harmful to a democratic society". If you can't do math at a fairly advanced level - at least having competence with information theory, probability, statistics, and calculus - you can't understand the world beyond what's visible on its (metaphorical) surface.

I've got a tangential question: what math, if learned by more people, would give the biggest improvement in understanding for the effort put into learning it?

Take calculus, for example. It's great stuff if you want to talk about rates of change, or understand anything involving physics. There's the benefit; how about the cost? Most people who learn it have a very hard time doing so, and they're already well above average in mathematical ability. So, the benefit mostly relates to understanding physics, and the cost is fairly high for most people.

Compare this with learning basic probability and statistical thinking. I'm not necessarily talking about learning anything in depth, but people should have at least some exposure to ideas like probability distributions, variance, normal distributions and how they arise, and basic design of experiments -- blinding, controlling for variables, and so on. This should be a lot easier to learn than calculus, and it would give insight into things that apply to more people.

I'll give a concrete example: racism. Typical racist statements, like "black people are lazy and untrustworthy," couldn't possibly be true in more than a statistical sen... (read more)

3taiyo
Probability theory as extended logic. I think it can be presented in a manner accessible to many (Jaynes PT:LOS is not accessible to many).
2nhamann
Tangential question to your tangential question: I'm puzzled, which math are you talking about here? The only math relevant to programming that I can think of that engineering students would also learn would be discrete math, but the extent needed for good programming competency is pretty small and easy to pick up. Are we talking numerical computing instead, with optimization problems and approximating solutions to DE's? That's the only thing I can think of relevant to engineering for which the math background might be more difficult than calculus.
2sketerpot
I was thinking more basic: induction, recursion, reasoning about trees. Understanding those things on an intuitive level is one of the main barriers that people face when they learn to program. It's one thing to be able to solve problems out of a textbook involving induction or recursion, but another thing to learn them so well that they become obvious -- and it's that higher level of understanding that's important if you want to actually use these concepts.
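To make that concrete, here's a toy sketch (Python, with a deliberately simple tree representation I just made up for illustration). Once the inductive step is internalized, the whole function *is* the inductive step:

    # Depth of a tree, where a leaf is any non-list value and an internal
    # node is a list of subtrees.  The inductive step: a tree's depth is
    # 1 + the depth of its deepest subtree.
    def depth(tree):
        if not isinstance(tree, list):
            return 0  # base case: a leaf
        return 1 + max((depth(child) for child in tree), default=0)

    print(depth([1, [2, [3, 4]], 5]))  # -> 3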
0taiyo
I'm not sure about all the details, but I believe that there was a small kerfuffle a few decades ago over a suggestion to change the apex of U.S. "school mathematics" from calculus to a sort of discrete-math-for-programming course. I cannot remember what sort of topics were suggested though. I do remember having the impression that the debate was won by the pro-calculus camp fairly decisively -- of course, we all see that school mathematics hasn't changed much.
0Nisan
Calculus might not be the best example of a skill with relatively low payoff, because you need some calculus to understand what a continuous probability distribution is.
0wedrifid
I do? I thought I understood both calculus and continuous probability but I didn't know one relied on the other. You are probably right; sometimes things that are 'obvious' just don't get remembered.
3Nisan
For example, suppose you have a biased coin which lands heads up with probability p. A probability distribution that represents your belief about p is usually a non-negative real function f on the unit interval whose integral is 1. Your credence in the proposition that p lies between 1/3 and 1/2 is the integral of f from 1/3 to 1/2.
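Computationally it's just a difference of CDF values. A minimal sketch, with a Beta(2, 2) density chosen purely for illustration (any density on [0, 1] would do):

    # Credence that the coin's bias p lies in [1/3, 1/2], for a belief
    # represented by a Beta(2, 2) density on the unit interval.
    from scipy.stats import beta

    f = beta(2, 2)
    credence = f.cdf(1/2) - f.cdf(1/3)  # integral of the density from 1/3 to 1/2
    print(credence)  # ~0.24

    # With the uniform density f(p) = 1 the integral is just 1/2 - 1/3 = 1/6.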
0wedrifid
Yes, extremely obvious now that you mention it. :)
0wedrifid
Well above average mathematical ability and cannot do calculus to the extent of understanding rates of change? For crying out loud. You multiply by the number up to the top right of the letter then reduce that number by 1. Or you do the reverse in the reverse order. You know, like you put on your socks then your shoes but have to take off your shoes then take off your socks. Sometimes drawing a picture helps prime an intuitive understanding of the physics. You start with a graph of velocity vs time. The upward slope of the line is the 'acceleration'. See... it is getting faster each second. Now, use a pencil and progressively color in under the line. That's the distance that is getting covered. See how later on, when it is going faster, more distance is being traveled at one time and we have to shade in more area? Now, remember how we can find the area of a triangle? Well, will you look at that... the maths came out the same!
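(If you'd rather check the picture with arithmetic than with a pencil, here's a throwaway numerical sketch with made-up numbers: add up thin pencil-strips under the line and compare with the triangle formula.)

    # v(t) = a*t; the shaded area under the line up to time T is the distance.
    a, T, n = 2.0, 5.0, 100000   # acceleration, total time, number of strips
    dt = T / n
    strips = sum(a * (i * dt) * dt for i in range(n))  # pencil strips
    triangle = 0.5 * T * (a * T)                       # half base times height
    print(strips, triangle)  # both come out ~25.0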
6taiyo
I teach calculus often. Students don't get hung up on mechanical things like (x^3)' = 3x^2. They instead get hung up on what $f'(x) = \lim_{h \to 0} \dfrac{f(x+h) - f(x)}{h}$ has to do with the derivative as a rate of change or as a slope of a tangent line. And from the perspective of a calculus student who has gone through the standard run of American school math, I can understand. It does require a level up in mathematical sophistication.
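One bridge I've found between the picture and the symbols is letting students watch the quotient converge numerically before the formal limit is ever written down. A quick sketch:

    # For f(x) = x^3 at x = 2, the quotient should approach 3*(2^2) = 12.
    f = lambda x: x**3
    x = 2.0
    for h in [1.0, 0.1, 0.01, 0.001]:
        print(h, (f(x + h) - f(x)) / h)
    # 1.0    19.0
    # 0.1    12.61
    # 0.01   12.0601
    # 0.001  12.006001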
-1wedrifid
That's the problem. See that bunch of symbols? That isn't the best way to teach stuff. It is like trying to teach them math while speaking a foreign language (even if technically we are saving the Greek till next month). To teach that concept you start with the kind of picture I was previously describing, have them practice that till they get it, then progress to diagrams that change once in the middle, etc. Perhaps the students here were prepared differently, but the average student started getting problems with calculus when it reached a point slightly beyond what you require for the basic physics we were talking about here. I.e. they would be able to do 1 but have no chance at all with 2:
3taiyo
I'm not claiming that working from the definition of derivative is the best way to present the topic. But it is certainly necessary to present the definition if the calculus is being taught in a math course. Part of doing math is being rigorous. Doing derivatives without the definition is just calling on a black box. On the other hand, once one has the intuition for the concept in hand through more tangible things like pictures, graphs, velociraptors, etc., the definition falls out so naturally that it ceases to be something which is memorized and is something that can be produced "on the fly".
2wedrifid
A definition is a black box (that happens to have official status). The process I describe above leads, when managed with foresight, to an intuitive way to produce a definition. Sure, it may not include the slogan "brought to you by apostrophe, the letters LIM and an arrow" but you can go on to tell them "this is how impressive mathematicians say you should write this stuff that you already understand" and they'll get it. I note that some people do learn best by having a black box definition shoved down their throats while others learn best by building from a solid foundation of understanding. Juggling both types isn't easy.
1Richard_Kennaway
That is close to saying "this stuff is hard". How about first showing the students the diagram that that definition is a direct transcription of, and then getting the formula from it?
1wedrifid
(Actually, many would struggle with 1. due to difficulty with comprehension and abstract problem solving. They could handle the calculus but need someone to hold their hand with the actual thinking part. That's what we really fail to teach effectively.)
4sketerpot
People get the simple concepts mixed together with a bunch of mathy-looking symbols and equations, and it all congeals into an undifferentiated mass of confusing math. Yes, I know calculus is actually pretty straightforward, but we're probably not a representative sample. Talk with random bewildered college freshmen to combat sample bias. I did this, and what I learned is that most people have serious trouble learning calculus. Now, if you want to be able to partially understand a bunch of physics stuff but you don't necessarily need to be able to do the math, you could probably get away with a small subset of what people learn in calculus classes. If you learned about integration and differentiation (but not how to do them symbolically), as well as vectors, vector fields, and divergence and curl, then you could probably get more benefit-per-hour-of-study than if you went and learned calculus properly. It leaves a bad taste in my mouth, though.
2wedrifid
When taught well the calculus required for the sort of applications you mentioned is not something that causes significant trouble, certainly not compared to vector fields, divergence or curl. By 'taught well' I mean, if you will excuse my lack of seemly modesty, the way I taught it in my (extremely brief - don't let me get started on what I think of western school systems) stint teaching high school physics. The biggest problem for people learning basic calculus is that people teaching it try to convey that it is hard. I'm only talking here about the level of stuff required for everyday physics. Definitely not for the vast majority of calculus that we try to teach them.
0Emile
Aw, please ? I'd be interested in hearing about the differences with other systems :)
8wedrifid
It has been said that democracy is the worst form of government except all the others that have been tried. --Sir Winston Churchill

I'm not quite going to make that analogy but I will hasten to assert that there are far worse systems of education than ours. Including some that are 'like ours but magnified'. In terms of healthy psychological development and practical skill acquisition the apprenticeship systems of various cultures have been better. Right now I can refer to the school system on one of the Solomon Islands. The culture is that of a primitive coastal village but with western influences. Western teaching materials and a teacher are provided but school occurs in the morning for 4 hours a day. No breaks are needed and nor is any pointless time wasting. The children then spend their time surfing. But they surf carrying spears and catch fish while they are doing it. What appeals to me about that system is:

* The shorter time period.
* Most of the time kids spend at school is a blatant waste. In particular, in the youngest years a lot of what the kids are doing is 'growing older'. That is what is required for their brains to handle the next critical learning skills.
* Much more than 4 hours a day of learning is squandered on diminishing returns. The Cambridge Handbook of Expertise and Expert Performance suggests that 4 hours per day of deliberate practice (7 days a week for 10 years) is a good approximate guide for how to gain world-class expert level performance in a field. It is remarkably stable across many domains.
* The children's social lives are not dominated by playground politics and are not essentially limited to same-age peers.
* Not only are the extracurricular activities physically healthier than more time wasted in classes, they are better for brain development too. What is the formula for increased release of Neurotrophic Growth Factors, consolidation into stable Neurogenesis and optimized attention control and cognitive performance? Aerobi
2SilasBarta
I agree, and this is a tragedy in that it makes it so students don't have a marketable skill by 14 as they would in an apprentice system, and so are dependent on mommy and daddy. This "age of genuine independence from parents" is increasing all the time, and there's no excuse for it. It disenfranchises children more than any legal age restrictions on this or that.
8JoshuaZ
While as a mathematician I find that claim touching, I can't really agree with it. To use the example that was one of the starting points of this conversation, how much math do you need to understand evolution? Sure, if you want to really understand the modern synthesis in detail you need math. And if you want to make specific predictions about what will happen to allele frequencies you'll need math. But in those cases it is very basic probability and maybe a tiny bit of calculus (and even then, more often than not you can use the formulas without actually knowing why they work beyond a very rough idea). Similar remarks apply to other areas. I don't need a deep understanding of any of those subjects to have a basic idea about atoms, although again I will need some of them if I want to actually make useful predictions (say for Brownian motion). Similarly, I don't need any of those subjects to understand the Keplerian model of orbits, and I'll only need one of those four (calculus) if I want to make more precise estimations for orbits (using Newtonian laws). The amount of actual math needed to understand the physical world is pretty minimal unless one is doing hard core physics or chemistry.
1wedrifid
For example... trying to work out what happens when I shoot a never-ending stream of electrons at a black hole. The related theories were more or less incomprehensible to me at first glance. Not being able to do off-the-wall theorizing on everything at the drop of a hat has to at least make #49!
0PhilGoetz
The human-scale physical world is relatively easy to understand, and we may have evolved or learned to perform the trickier computations using specialized modules, such as perhaps recognizing parabolas to predict where a thrown object will land. You get far with linear models, for instance, assuming that the distance something will move is proportional to the force that you hit it with, or that the damage done is proportional to the size of the rock you hit something with. You rarely come across any trajectory where the second derivative changes sign. The social world, the economic world, ecology, game theory, predicting the future, and politics are harder to understand. There are a lot of nonlinear and even non-differentiable interactions. To understand a phenomenon qualitatively, it's helpful to perform a stability analysis, and recognize likely stable areas, and also unstable regions where you have phase transitions, period doublings, and other catastrophes. You usually can't do the math and solve one of these systems; but if you've worked with a lot of toy systems mathematically, you'll understand the kind of behaviors you might see, and know something about how the number of variables and the correlations between them affect the limits of linear extrapolation. So you won't assume that a global warming rate of 0.017C/year will lead to a temperature increase of 1.7C in 100 years. I'm making this up as I go; I don't have any good evidence at hand. I have the impression that I use math a lot to understand the world (but not the "physical" world of kinematics). I haven't observed myself and counted how often it happens.
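To illustrate the extrapolation point with invented numbers (a toy sketch, emphatically not a climate model): the same present-day rate of change is compatible with very different 100-year outcomes once feedback is allowed.

    import math

    rate_now, years = 0.017, 100
    linear = rate_now * years  # naive extrapolation: 1.7

    # Amplifying feedback: rate proportional to the quantity itself
    # (take the quantity to be 1.0 today, so the growth constant is rate_now).
    exponential = math.exp(rate_now * years) - 1.0  # ~4.47

    # Damping feedback: rate decays as some driving reservoir empties,
    # dT/dt = rate_now * exp(-0.03 * t), with 0.03 picked arbitrarily.
    damped = (rate_now / 0.03) * (1 - math.exp(-0.03 * years))  # ~0.54

    print(linear, exponential, damped)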
2Will_Newsome
I'd like this to be true, as I want the time I spend learning math in the future to be as useful as you say, but I seem to have come rather far by knowing the superficial version of a lot of things. Knowing the actual math from something like PT:LOS would be great, and I plan on reaching at least that level in the Bayesian conspiracy, but I can currently talk about things like quantum physics and UDT and speed priors and turn this into changes in expected anticipation. I don't know what Kolmogorov complexity is, really, in a strictly formal from-the-axioms sense, nor Solomonoff induction, but I reference it or things related to it about 10 times a day in conversations at SIAI house, and people who know a lot more than I do mostly don't laugh at my postulations. Perhaps you mean a deeper level of understanding? I'd like to achieve that, but my current level seems to be doing me well. Perhaps I'm an outlier. (I flunked out of high school calculus and 'Algebra 2' and haven't learned any math since. I know the Wikipedia/Scholarpedia versions of a whole bunch of things, including information theory, computer science, algorithmic probability, set theory, etc., but I gloss over the fancy Greek letters and weird symbols and pretend I know the terms anyway.)
5Will_Newsome
A public reminder to myself so as to make use of consistency pressure: I shouldn't write comments like the one I wrote above. It lingers too long on a specific argument that is not particularly strong and was probably subconsciously fueled by a desire to talk about myself and perhaps countersignal to someone whose writing I respect (Phil Goetz).
4CronoDAS
I have a belief that I can fix things like this, having spent time working with other students in high school. If I ever meet you in person, will you assist me in testing that belief? ;)
2LucasSloan
I'm pretty sure that most people around lesswrong have about the same level of familiarity with most subjects (outside whatever field they actually specialize in). I do think that you are relatively weak in mathematics, but advanced math just really isn't that important vis-a-vis being generally well educated and rational.
0LucasSloan
This one.
2JoshuaZ
Are you then asserting that non-utilitarian views constitute a delusion?
3LucasSloan
I'm asserting that saying "We must do X, because it produces good effect Y", when there is option Z, which delivers the same Y for half the cost, is a delusion.
1JoshuaZ
This seems more like a common cognitive error than a delusion. How are you defining delusion? It seems like I am using a more narrow definition of delusion. I'm using something like "statement or class of statements about the physical world that are demonstrably extremely unlikely to be true." What definition are you using?
1wedrifid
Lucas's statement fits this definition. This may be clearer if you consider just "we must do X", which is a claim about the physical world. The because part does not happen to change this. If you don't agree that the truncated claim fits the criteria, I infer that the most likely difference in definitions between you and Lucas is not so much around 'delusion' but rather about what 'must' means in relation to the physical world. This would make what you say true even if it isn't grounded in my preferred ontology.
0JoshuaZ
Ah, so the issue is that I see "must" as entangled with moral and ethical claims that aren't necessarily connected to the physical world in any useful fashion.
1wedrifid
Exactly! And to delve somewhat deeper into the levels of meaning: there are many who would say that 'must' or weaker 'should' claims are about satisfying a given 'rightness' function. Of those people, many will say that the 'rightness' function can't reasonably be described as something that is part of the physical world. After accepting that position, some will say that a 'must' claim is making an objective assertion about what best satisfies a known 'rightness' function. In perhaps simpler terms, I'll look at the X/Y example we already have. There are various things that can be accepted or rejected as 'delusions', that may be considered claims about the physical world. (In most cases the proposed delusion would be the negation, but the 'can?' is symmetric.)

1. Can lacking the belief "We must do things that have good effects" be a delusion?
2. Can lacking the belief "Y is a good effect" be a delusion?
3. Can lacking the belief "X has the effect Y" be a delusion?
4. Can lacking the belief "Z delivers the same Y for half the cost" be a delusion?
5. Can having the belief "We must do X even if Z does Y for half the cost" be a delusion?

Let's see...

* Your claim requires that you reject 1, 2 and 5 as possible candidates for delusion.
* There are some who would reject 1 and 2 as candidates for delusion but say that 5 is a candidate because it implies fallacious reasoning based on the other arbitrary non-physical premises.
* I accept 1 and 2 as possible candidates too, via an ontology that formalises (and grounds in the physical) the way that normative claims are actually used in practice. But I never presume this definition in conversation unless I know the others in the conversation are either familiar with technical formalism or using the colloquial meaning of 'should/must'.
* When I assert that Lucas's statement is correct it is based off the "5" claim. It didn't even occur to me to reject 5 as a possible delusion because it just seems so obvio
2wedrifid
1. The creation of an FAI is not the most important thing the species could be doing.
2. The best way to create an FAI is not...
2LucasSloan
If I might jump in on the listing of delusions, I think that perhaps one of the most important things to understand about widespread delusions is who, in fact, holds them. A bunch of rednecks in Louisiana not believing in evolution isn't important, because even if they did, it wouldn't inform other parts of their worldview. In general, the specific delusions of ordinary people (IQ < 120) aren't important, because they aren't the ones who are actually affecting anything. Even improving the rationality and general problem awareness of smart people (120 < IQ < 135) doesn't really help, because then you get people who will expend enormous effort doing things like evangelizing atheism to the ordinary people and fighting global warming and the like. Raising the sanity waterline is important, but effort should be focused on people with the ability to actually use true beliefs.
3cupholder
I'm less sure. I would have thought that they affect things indirectly at least through social transmission of beliefs, what they choose to spend their money on, and the demands they make of politicians. Arguably, one should expect it to help less than improving the rationality and awareness of people with IQ < 120, just because there are 11 times as many people with IQ < 120 as there are with 120 < IQ < 135.
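(The "11 times" figure does check out under the conventional IQ ~ Normal(100, 15) assumption; a quick sketch:)

    from scipy.stats import norm

    iq = norm(100, 15)
    below = iq.cdf(120)                  # ~0.909 of the population
    between = iq.cdf(135) - iq.cdf(120)  # ~0.081
    print(below / between)               # ~11.2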
3Mass_Driver
I sincerely hope that you are using IQ as only the crudest shorthand for "ability to actually use true beliefs," but your point in general is very well taken. Please do jump in if you have a listing of the most harmful delusions. :-)
2wedrifid
IQ >= 120 is a fairly low bar. IQ is also a strong indicator for the potential for someone's behavior to be influenced by delusions (rather than near mode thinking + social pressure being the dominant adaptation.)
2Mass_Driver
Do you mean do say that people of ordinary intelligence, as a general rule, don't actually believe whatever it is they say they believe, but instead just parrot what those around them say? You might be right. I think I need to find a way to re-immerse myself in a crowd of people of average intelligence; it's been far too long, and my predictive/descriptive powers for such people are fraying. Note that none of this is sarcasm; this comment is entirely sincere.
6Douglas_Knight
Wedrifid only said "potential"; most people, smart or not, behave as you say. And I would expand "delusion" to 'belief": being smart is correlated with being influenced by beliefs, true or false. That people act on beliefs or have at all coherent world-views is the most dangerous widespread delusion. ("The world is mad.") Immersing yourself in a crowd of average intelligence might help you see this, but I rather doubt that your associates act on their beliefs.
3wedrifid
Another thing that is dangerous is the people that actually act on their beliefs. They are much harder to control. People 'acting as if' pragmatically don't do things that we strongly socially penalize.
1Mass_Driver
Not on their stated beliefs, surely; but don't most people have a set of actual beliefs? Can't these actual beliefs, at least in some contexts, be nudged so as to influence the level and direction of cognitive dissonance, which in turn can influence actions?
4JoshuaZ
There's certainly evidence that intelligent people are more likely to have more coherent worldviews. For example, the GSS data shows that higher vocabulary is associated with more extreme political views to either end of the traditional political spectrum. There's similar research for IQ scores but I don't have a citation for that.
1Blueberry
Are you saying more extreme political views are more coherent? I'm not following this.
6Vladimir_M
Blueberry: That seems like an almost self-evident observation to me. I have never seen anyone state clearly any political or ideological principles, of whatever sort and from whatever position, whose straightforward application wouldn't lead to positions that are utterly extremist by the standards of the present centrist opinion. Getting people with regular respectable opinions to contradict themselves by asking a few Socratic questions is a trivial exercise (though not one that's likely to endear you to them!). The same is not necessarily true for certain extremist positions.
2Blueberry
And it seems self-evidently false to me, so I'm very curious what exactly you mean. If you take any one principle and apply it across the board, to everything, without limitation, you'll end up with an extremist position, basically by definition. So in that sense, extremist positions may be simpler than moderate ones. But that's more "extrapolation" and "exaggeration" than "straightforward application". Moderate positions tend to carefully draw lines to balance out many different principles. I'm not sure how to discuss this without giving contemporary political examples, so I'll do so with the warning that I'm not necessarily for or against any of the following moderate positions, and I'm not intending to debate any of them; I'm just claiming that they're moderate and consistent.

* The government should be able to impose a progressive tax on people's incomes, which it can then use for national defense, infrastructure, and social programs, while still allowing individuals to make profits (contrast communism and pure libertarianism)
* Individuals over 18 who have not been convicted of a felony should be able to carry a handgun, but not an automatic weapon, after a brief background check, except in certain public places (contrast with complete banning of guns and with a free market on all weapons)
* The government should regulate and approve the sale of some kinds of chemicals, completely banning some, allowing some with a doctor's prescription, and allowing some to be sold freely over the counter after careful review
* People over a certain age X should be able to freely have consensual sex in private with each other without government interference; people under X-n should not be allowed to engage in sex; people in between should be allowed to have sex only with people close to their own age
* The country should guard its borders and not let anyone in without approval, and deport anyone found to have entered illegally, but should grant entry to tourists and

What you list are explicit descriptions of concrete positions on various issues, not the underlying principles and logic. However, what I had in mind is that if you take some typical persons whose positions on concrete issues are moderate and respectable by the contemporary standards, and ask them to state some abstract principles underlying their beliefs, a simple deduction from the stated principles will often lead to different and much more extreme positions in a straightforward way. If called out on this, your interlocutors will likely appeal to a disorganized and incoherent set of exceptions and special cases to rationalize away the problem, even though before the problem is pointed out, they would affirm these principles in enthusiastic and absolute terms.

Let me give you an example of Socratic questioning of this sort that I applied in practice once. In the remainder of the comment, I'll assume that we're in the U.S. or some other contemporary Western society.

Let's discuss the principle that religion and state should be separate, in the sense that each citizen should be free to affirm and follow any religious beliefs whatsoever as long as this doesn't imply any illegal actio... (read more)

6CronoDAS
Do you ever find people who bite the other bullet and say that, well, the principle wasn't really all that good after all, since it didn't allow for this particular exception? As far as I'm concerned, religious beliefs should be given exactly the same protections as political beliefs, and no more. (Religion is given all too much deference in the United States today.) If you can refuse to hire people because they belong to the Raving Loonies Party - and it's legal to do so in most states - then it should also be legal to refuse to hire people who belong to the Church of Raving Loonies.
7Alicorn
If we started treating religious and political beliefs as commensurate, I think this would result in - at least in some regions - greater deference to politics, not lesser deference to religion.
2Mass_Driver
OK, what if we reword this as "the state should consider religious beliefs as a matter of purely private and personal choice, because they are very important and the state is not good at identifying or encouraging appropriate religious beliefs." Isn't that a coherent, moderate principle that explains much of American policy on what to do when religion intrudes onto the public sphere? According to this principle, the state can ban religious discrimination because this reinforces private choice of religion and does not require the state to inquire at all into which religious beliefs are appropriate. Yet, also according to this principle, public schools should not allow prayer during class time, because this would interfere with private choice of religion and requires the state to express an opinion about which religious beliefs are appropriate. I don't deny the general assertion that many Americans fail the "express Socratically consistent principles and policies" test, but I'm with Blueberry in that I think moderate, coherent principles are quite possible.
3Vladimir_M
Mass_Driver: Trouble is, this still requires that the state must decide what qualifies as a religious belief, and what not. Once this determination has been made, things in the former category will receive important active support from the state. There is also the flipside, of course: the government is presently prohibited from actively promoting certain beliefs because it would mean "establishing religion" according to the reigning precedent, but it can actively promote others because they don't qualify as "religious." Now, if there existed some objective way -- a way that would carve reality at the joints -- to draw limits between religion on one side and ideology, philosophy, custom, moral outlook, and just plain personal opinions and tastes on the other, such determination could be made in a coherent way. But I don't see any coherent way to draw such limits, certainly not in a way that would be consistent with the present range of moderate positions on these issues. (By the way, another interesting way to get respectable-thinking folks into a tremendous contradiction is to get them to enthusiastically affirm that legal discrimination on the basis of attributes that are a pure accident of birth is evil -- and then point out that this implies that any system of citizenships, passports, visas, and immigration laws must be evil. Especially if you add that religion is usually much easier to change than nationality! Pursuing this line of thought further leads to a gold mine of incoherences in the whole "normal" range of beliefs nowadays, as regularly demonstrated on Overcoming Bias.) Another thing I should perhaps make sure to point out is that I don't necessarily consider coherence as a virtue in human affairs, though that's a complex topic in its own right.
4LucasSloan
Typically, yes. People with extreme views typically don't fail to make inferences from their beliefs along the lines of "X is good, so doing Y, which creates even more of X's goodness, would be even better!" Y might in fact be utterly stupid and evil and wrong, and a moderate with less extreme views might be against it, but the moderate and the extremist might both agree with X; the belief that X is good is exactly what leads the extremist, by consistent inference, to endorse the evil Y.
0cupholder
Do more extreme political views signify more coherent worldviews?
-1Mass_Driver
You really should watch your grammar, syntax, and spelling while commenting on intelligence. The irony is distracting, otherwise. Unless you were referring to the CIA and FBI?
1JoshuaZ
It might be more generally a sign that I shouldn't comment when it is late at night in my timezone. Also, it should constitute evidence that we need better spellcheckers that don't just catch non-words but also words that are clearly wrong from minimal context (although in this particular case catching that that was the wrong word would almost seem to require solving the natural language problem unless one had very good statistical methods).
2wedrifid
I differentiate between 'actually believe' and 'act as if they are an agent with the belief that'. All people mostly do the latter but high IQ people are somewhat more likely to let 'actual beliefs' interfere with their lives.
1LucasSloan
I would say that people of ordinary intelligence don't actually have anything that I would identify as a non-trivial belief. They might say they believe in god, but they don't actually expect to get the pony they prayed for (even if they say that they do). However, they do have accurate beliefs regarding, say, how to cook food, or whether jumping off a building is a healthy idea, because they actually have to use such beliefs.
0PhilGoetz
In a democracy, specific delusions of ordinary people are important.
1LucasSloan
In a representative democracy, the specific delusions of the elected and unelected officials are important.
3JoshuaZ
If you said that it wouldn't make the top 10, I'd find that not implausible. Claiming it wouldn't make the top 50 seems implausible. Actual dangers posed by creationism:

1) It makes people have a generally more anti-science attitude and makes children less likely to become scientists.
2) It takes up large sets of resources that would be spent usefully otherwise.
3) It actively includes the spreading of a lot of misinformation.
4) It snags otherwise bright minds who might otherwise become productive individuals (Jonathan Sarfati for example is a chess master, unambiguously quite bright, and had multiple good scientific papers before getting roped into YECism. Michael Behe is in a similar situation although for ID rather than young earth creationism).
5) The young earth variants encourage a narrow time outlook which is not helpful for long-term planning about the world or appreciation of serious existential threats (although honestly so few people pay attention to existential risks this is probably a minor issue).
6) It causes actual scientists and teachers to lose their jobs or have their work restricted (admittedly this isn't common but that's partially because creationism doesn't have much ground).
7) It encourages general extremist religious attitudes.

So not in the top 10? I'd agree with that. But I have trouble seeing it not in the top 50 most dangerous widespread delusions.
2multifoliaterose
Thanks, this is what I had in mind.
4wedrifid
I don't remember a post by Eliezer on the subject but it is oh so true. I often feel a 'cringe' reaction when I hear 'evidence' being used as a religious symbol. It is the same cringe reaction I get when I hear people say "God says" about something that I know isn't even covered in their bible. In both cases something BAD is going on that has nothing to do with whether or not there is a God.

Here is some javascript to help follow LW comments. It only works if your browser supports offline storage. You can check that here.

To use it, follow the pastebin link, select all that text and make a bookmark out of it. Then, when reading a LW page, just click the bookmark. Unread comments will be highlighted, and you can jump to next unread comment by clicking on that new thing in the top left corner. The script looks up every (new) comment on the page and stores its ID in the local database.

Edit: to be more specific, all comments are marked as read as s... (read more)

1W-Shadow
I made a similar Greasemonkey script some time ago.

Strange occurrence in US South Carolina Democratic primary.

The only explanation, Mr. Rawl’s representatives told the committee, was faulty voting machines — not chance, name order on the ballot, or Republicans crossing over to vote for the weaker Democrat. With testimony dominated by talk of standard variances, preference theories and voting machine software, the hearing took on the spirit of a political science seminar.

The Washington Post profiled Alvin Greene last week

10 minute video interview with Greene

What happened here?

Wikipedia has a list of po... (read more)

9SilasBarta
Not ready to answer the rationalist questions, but why is it that, as soon as elections don't go toward someone who played the standard political game, suddenly, "it must be a mistake somehow"? You guys set the terms of the primaries, you pick the voting machines. If you're not ready to trust them before the election, the time to contest them was back then, not when you don't like the result. Where was Rawl on the important issue of voting machine reliability when they did "what they're supposed to"? I understand that elections are evidence, and given the prior on Greene, this particular election may be insufficient to justify a posterior that Greene has the most "support", however defined. But elections also serve as a bright line to settle an issue. We could argue forever about who "really" has the most votes, but eventually we have to say who won, and elections are just as much about finality on that issue as they are as an evidential test of fact. To an extent, then, it doesn't matter that Greene didn't "really" get the most votes. If you allow every election to be indefinitely contested until you're convinced there's no reason the loser really should have won, elections never settle anything. The price for indifference to voting procedure reliability (in this case, the machines) should be acceptance of a bad outcome for that time, to be corrected for the next election, or through the recall process. Frankly, if Greene had lost but could present evidence of the strength Rawl presented, we wouldn't even be having this conversation. ETA: Oh, and you gotta love this: Damn those candidates with autism symptoms! Only manipulative people like us deserve to win elections!
7jimrandomh
I should point out that most of the people who ought to know about the issue, have been screaming bloody murder about electronic voting machines for some time now. Politicians and the general public just haven't been listening. This issue is surfacing now, not because it wasn't an issue before, but because having a specific election to point to makes it easier to get people to listen. It also helps that the election wasn't an important one (it was a Democratic primary for a safe Republican seat), and the candidates involved don't have the resources to influence the discussion like they normally would.
5JoshuaZ
This doesn't sound like autism to me. It sounds more like a neurotypical individual who is dealing with a very unexpected and stressful set of events and having to talk about them.
4SilasBarta
Be that as it may, those are typical characteristics of high-functioning autistics, and I'm more than a little bothered that they view this as justification for reversing his victory. Take the part I bolded and remove the "incoherent rambling" bit, and you could be describing me! Well, at least my normal mode of speech without deliberate self-adjustment. And my lack of incoherent rambling is a judgment call ;-)
3wedrifid
Well... knowing that someone is autistic is some inferential evidence in favor of them being a good hacker.
2Blueberry
Yes. Exactly. This is true for lawsuits as well: getting a final answer is more important than getting the "right" answer, which is why finality is an important judicial value that courts balance.
5Morendil
My most likely explanations would be 1) software bug(s) 2) voter whim or confusion 3) odd hypothesis no one has thought of yet. Active intent to steal the nomination a distant fourth. Make it 60/30 among the first two. Evidence? Well, anything credible, but how likely is that. :)
4JoshuaZ
I put a very high probability that some form of tampering occurred, primarily due to the failure of the data to obey a generalized Benford's law. Although a large amount of noise has been made about the fact that some counties had more votes cast in the Republican governor's race than reported turnout, I don't see that as strong evidence of fraud since turnout levels in local elections are often based on the counting ability of the election volunteers, who often aren't very competent. I'd give probability estimates very similar to those of Jim's but with a slightly higher percentage for people actually voting for him. I'd do that, I think, by moving most of the probability mass from the idea of someone tampering with the election to expose the insecure voting machines, which implies a very strange set of ethical thought processes. I've also had enough experience in local elections to know that sometimes very weird things happen for reasons that no one can explain (and that this occurs even with systems that are difficult to tamper with). So using the primary breakdown given by Jim I'd put it as follows:

* Voters actually voted for him: 0.25
* Someone tampered with the voting machines or memory cards to make Alvin Greene win: 0.25
  * ...and that person did it because they wanted Alvin Greene to win: 0.1
  * ...and that person did it for kicks: 0.1
  * ...and that person did it because they wanted to expose the insecure voting machines: 0.05
* Someone meant to tamper with a different election on the same ballot, but accidentally altered the democratic primary additionally or instead: 0.1
* The votes were altered by leftover malware from a previous election which was also hacked: 0.2
* There was a legitimate error in setting up or managing the voting machines that altered the vote totals: 0.2

Edit: Thinking this through, another possibility that should be listed is deliberate Republican cross-over (since it is an open primary), but given the evidence that seems of negligible prob
1jimrandomh
I would count that under "voters actually voted for him"
1JoshuaZ
Ok. Yeah, so that should probably be a subcategory of that one, since it explains the weird results in a sensible fashion.
3prase
I don't know the details about the American voting system, but (or maybe therefore) I am surprised how low estimates all people give to the possibility that the result is genuine. My estimate (without much research, I've just read the links) is:

* 0.5 voters actually voted for Greene
* 0.3 error of some kind
* 0.2 conspiracy

In order to update, any evidence is accepted, of course. What I would most like to see: results of some statistical survey, conducted either before or, better, after the election; historical data concerning the performance of black candidates; historical data from elections with a big difference in the intensity of the campaign between the competing candidates; a lot of independent testimonies of trustworthy voters reporting non-standard behaviour of the voting machines; a description of how the results can be altered (and what is normally done to avoid that).
0[anonymous]
I would say:

* 0.6 voters actually voted for Greene
* >0.3 error of some kind
* <0.1 conspiracy

Conspiracy is a really stupid claim for this result - it is an incredibly unimportant election. If someone was going to purposely jigger the results of an election, they would do it where it actually mattered. The only reason it is still on there is that people sometimes do do really stupid things (as opposed to normally stupid things that they do all the time).
3jimrandomh
Here is my probability distribution:

* Voters actually voted for him: 0.1
* Someone tampered with the voting machines or memory cards to make Alvin Greene win: 0.4
  * ...and that person did it because they wanted Alvin Greene to win: 0.1
  * ...and that person did it for kicks: 0.1
  * ...and that person did it because they wanted to expose the insecure voting machines: 0.2
* Someone meant to tamper with a different election on the same ballot, but accidentally altered the democratic primary additionally or instead: 0.1
* The votes were altered by leftover malware from a previous election which was also hacked: 0.2
* There was a legitimate error in setting up or managing the voting machines that altered the vote totals: 0.2

Note that I started researching this topic with an atypically high prior probability for voting machine fraud, and believe that it is very likely that major US elections in the past were altered this way. The strongest direct evidence I see for fraud having occurred is that there were "three counties with more votes cast in Republican governor's race than reported turnout in the Republican primary" (FiveThirtyEight). Note that this means botched vote fraud, not correctly-implemented vote fraud, since correctly implemented vote fraud, using a strategy such as the Hursti hack, would have changed the votes but not the turnout numbers. The Benford's Law analysis on FiveThirtyEight, on the other hand, I find very unconvincing - first because it has a low p-value, and second because it doesn't represent the way voting machine fraud really works; it can only detect if someone makes up vote totals from scratch, rather than adding to or subtracting from real vote totals.
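For anyone who hasn't seen one, here's what the mechanics of a first-digit Benford check look like; the precinct totals below are invented for illustration, not the actual South Carolina returns:

    import math
    from collections import Counter
    from scipy.stats import chisquare

    precincts = [312, 47, 1580, 92, 230, 118, 764, 401, 55, 1893,
                 27, 333, 610, 149, 88, 502, 71, 960, 214, 36]  # made up

    counts = Counter(int(str(n)[0]) for n in precincts)
    observed = [counts.get(d, 0) for d in range(1, 10)]
    # Benford's law: digit d leads with frequency log10(1 + 1/d)
    expected = [len(precincts) * math.log10(1 + 1 / d) for d in range(1, 10)]

    print(chisquare(observed, f_exp=expected))

With this few precincts, spanning barely two orders of magnitude, the test has little power -- which is part of why I find the analysis unconvincing.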
3wedrifid
Probability that this person would have a worse influence on the senate than a more standard politician: 5%.
0Kevin
I would give it lower than that, US Senators have surprisingly little power.
1AlexMennen
That is not important when considering the probability that Alvin Greene would have a worse influence on the Senate than the average politician if he got elected. It is only important when considering the probability that he would have a much worse influence on the Senate than average.
0Daniel_Burfoot
??? I mean, in the sense that the US government is like a massive Ouija board that is not really controlled by anyone, then sure. But the senators seem to have a particularly heavy hand on the board.
0Kevin
Sorry, I meant "influence", not "power".
0Larks
Conditional on their winning the election, presumably.
0wedrifid
I'm not sure that is technically necessary given the precise phrasing.
0Larks
Because, unless he is a politician, the sentence fails to make sense, because 'more standard politician' requires him to be one? If so, I think being selected as a candidate makes you a politician.
0wedrifid
It seems to make sense without any fancy interpretation.
1Liron
I think voters were clueless about both candidates, but they like to fill in all the boxes on the ballot, so they chose the name that has the higher positive affect by far: "Alvin Greene". To me that would be sufficient to explain the entire anomaly, if not for the mysterious origin of Greene's $10,000 filing fee.
4Kevin
Also the possible "Al Green" effect -- voters may have thought they were voting for the famous soul singer.
1wedrifid
The next election being won by a ficus would boost my estimate. Or, you know, something else ridiculous like an action hero actor.
2LucasSloan
Why is this at all ridiculous? Is there any reason to believe Arnold Schwarzenegger has done a significantly worse job than other governors, controlling for ability of the legislature to agree on anything and the health of the economy?
3wedrifid
It merely serves to illustrate what politics is really about. It certainly isn't about voting for people who are the best suited for making and implementing the decisions that are best for the country, planet or species. I actually would have voted for him unless he had a particularly remarkable opponent. All else being equal, I take a contribution in another field that is popular and that I appreciate as a more important signal than success as a pure courtier. It is unfortunate that I do not have reason to consider political popularity a stronger signal of country-leading competence than creating 'Kindergarten Cop'. I've already assigned a low probability to Alvin being at all worse than the alternatives. I expect Arnold would be 'even' better. (Oh, and I do think that one liner is sub par. It would be better to stick to the actually ridiculous rather than the superficially ridiculous.)
-2Roko
my breakdown:

* Conspiracy: 19%
* Error of some kind: 80%
* Voters actually voted for him: 1%

(Given that there is a unique cause, and it is one of those three, of course.)
3LucasSloan
Surely the "voters aren't actually paying attention" hypothesis deserves more than 1% probability.
0h-H
that could fall under 'error of some kind'.

The sting of poverty

What bees and dented cars can teach about what it means to be poor - and the flaws of economics

http://www.boston.com/bostonglobe/ideas/articles/2008/03/30/the_sting_of_poverty/?page=full

and lots of Hacker News comments: http://news.ycombinator.com/item?id=1467832

[-][anonymous]50

Has anybody looked into OpenCog? And why is it that the wiki doesn't include much in the way of references to previous AI projects?

1Mitchell_Porter
If making a Friendly AI is compared to landing on the moon, I'd say OpenCog is something like the scaffolding for a backyard rocket. It still needs something extra - the rocket - and even then it won't achieve escape velocity. But a radically scaled-up version of OpenCog - with a lot more theory behind it, and tailored to run at the level of a whole data center rather than on a single PC - is the sort of toolset that could make a singularity.
[-][anonymous]50

For those of you who don't want to register at fanfic.com to receive notifications of new chapters of Harry Potter and the Methods of Rationality, I have added a mailing list. You can add yourself here: http://felix-benner.com/cgi-bin/mailman/listinfo/fanfic It is still untested, so I don't know whether it will work, but I assume so.