
"Do you mean to say that there is no way to employ/train our brains to do rational thinking more effectively and intuitively?"

I don't know whether RickJS meant to say that or not. But this blog post suggests to me a way forward: whenever confronted with questions about likelihood or probability, consciously step back and assess whether a frequentist analysis is possible. Use that approach if it is. If not, shift toward Bayesian views. But in either case, also ask: can I really compute this accurately, or is it too complex? Some things you can do well enough in your head, especially when perfect accuracy isn't necessary (or even possible). Some things you can't.

Maybe if you started kids on this in their junior year of high school, they might be pretty skilled at telling which was which (of the four possibilities inherent in what I outline above) by the end of their senior year.

You watch someone flip a coin a hundred times. After a while, you get your frequentist sense of the probability that it will come up heads.
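That "frequentist sense" is just the observed frequency. A minimal sketch of the idea (my own illustration, not from the comment; the fair coin and the 100-flip count are assumptions for the example):

```python
import random

# Simulate watching 100 flips of a fair coin (True = heads).
flips = [random.random() < 0.5 for _ in range(100)]

# The frequentist estimate of P(heads) is the observed frequency.
estimate = sum(flips) / len(flips)
print(f"observed frequency of heads after 100 flips: {estimate:.2f}")
```

With only 100 flips the estimate will wander around 0.5 rather than hit it exactly, which is part of the point: the sense you acquire is approximate.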

Then somebody takes a small, flat, square piece of metal and writes "heads" on one side. Before flipping it, he asks you: "What's the chance it's going to come up 'heads' 100 times in a row?"

Would you say, "I have no idea"?

If you said, "Well, very unlikely, obviously", what makes it so obvious to you? What's your degree of certainty about each statement in your line of reasoning? And where did those degrees of certainty come from?

Sure, all sorts of past "reference classes" and analogous events might turn up as we discussed your reasoning. But the fact would still remain: most people, whether asked about that coin or about that small flat square piece of metal, will give you a pretty inaccurate answer if you ask how likely it is to come up heads five times in a row, no matter whether you ask in frequentist terms or Bayesian terms.
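For a fair coin the exact answer is simple arithmetic, which makes the inaccuracy of intuitive guesses easy to check. A sketch (the function name and the fair-coin assumption are mine):

```python
# Independent flips of a fair coin: P(n heads in a row) = (1/2)**n.
def p_run_of_heads(n, p_heads=0.5):
    """Probability of n consecutive heads, assuming independent flips."""
    return p_heads ** n

print(p_run_of_heads(5))    # 0.03125, i.e. 1 in 32
print(p_run_of_heads(100))  # roughly 7.9e-31
```

Five heads in a row is about a 3% event, which many people overestimate; a hundred in a row is so close to impossible that "very unlikely, obviously" is, for once, exactly right.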

When it comes to assessing the chance of a certain series of independent events, bias of some kind does seem to enter. This is probably (um, heh) because, although we might be fairly frequentist when it comes to notable and frequent events, we don't naturally note any given series of independent events as an event in itself. (For one thing, the sheer combinatorics prevent even encountering many such series.)

I wouldn't be surprised if the ultimate synthesis shows our brains are effectively frequentist (even almost optimally so) when it comes to the sorts of situations they evolved under, but also that these evolved optimizations break under conditions found in our increasingly artificial world. One does not find things much like coins in nature, nor much reason for people to use their computed fairness to resolve issues.