This recent Slate article thinks so:

Why Your 4-Year-Old Is As Smart as Nate Silver

It turns out that even very young children reason [using Bayes' theorem]. For example, my student Tamar Kushnir, now at Cornell, and I showed 4-year-olds a toy and told them that blocks made it light up. Then we gave the kids a block and asked them how to make the toy light up. Almost all the children said you should put the block on the toy—they thought, sensibly, that touching the toy with the block was very likely to make it light up. That hypothesis had a high “prior.”

Then we showed 4-year-olds that when you put a block right on the toy it did indeed make it light up, but it did so only two out of six times. But when you waved a block over the top of the toy, it lit up two out of three times. Then we just asked the kids to make the toy light up.

The children adjusted their hypotheses appropriately when they saw the statistical data, just like good Bayesians—they were now more likely to wave the block over the toy, and you could precisely predict how often they did so. What’s more, even though both blocks made the machine light up twice, the 4-year-olds, only just learning to add, could unconsciously calculate that two out of three is more probable than two out of six. (In a current study, my colleagues and I have found that even 24-month-olds can do the same).
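To make the arithmetic concrete, here is a minimal sketch of the update described above, assuming a uniform Beta(1, 1) prior on each action's success rate (the article doesn't quantify the children's actual prior, so the prior here is only illustrative; the 2/6 and 2/3 counts are from the article):

```python
# Posterior probability that each action lights the toy, under an
# illustrative uniform Beta(1, 1) prior on each action's success rate.

def beta_posterior_mean(successes, trials, prior_a=1.0, prior_b=1.0):
    """Posterior mean of a Bernoulli success rate under a Beta prior."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

on_toy = beta_posterior_mean(2, 6)  # block placed on toy: lit 2 of 6 -> 0.375
waving = beta_posterior_mean(2, 3)  # block waved over toy: lit 2 of 3 -> 0.6

print(f"P(light | block on toy) ~ {on_toy:.3f}")
print(f"P(light | waving block) ~ {waving:.3f}")
# Waving comes out more probable, matching the children's shift in behavior.
```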

There also seems to be a reference to the Singularity Institute:

The Bayesian idea is simple, but it turns out to be very powerful. It’s so powerful, in fact, that computer scientists are using it to design intelligent learning machines, and more and more psychologists think that it might explain human intelligence.

(Of course, I don't know how many other AI researchers use Bayes' theorem, so the author might not have SI in mind at all.)

If children really are natural Bayesians, then why and how do you think we change?

 

16 comments
prase · 12y · 31 points

So, being able to observe that one behaviour causes the desired outcome more often than another behaviour counts as reasoning using Bayes' theorem? On this level of vagueness we could proclaim children natural frequentists, or Popperian falsificationists, or whatever else with equal ease.

The children adjusted their hypotheses appropriately when they saw the statistical data

Using such words to describe small children trying to light up a toy makes me suspect that this post is a parody.
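To make the first point concrete: a bare frequentist comparison of the observed frequencies picks the same action as the Bayesian sketch above, so the children's choice by itself doesn't discriminate between the two accounts (again using the 2/6 and 2/3 counts from the article):

```python
# Maximum-likelihood (relative-frequency) estimates favor the same action
# as the Bayesian posterior, so the behavior alone underdetermines the theory.
on_toy_mle = 2 / 6  # success rate when placing the block on the toy
waving_mle = 2 / 3  # success rate when waving the block over the toy

best = "wave the block" if waving_mle > on_toy_mle else "place the block"
print(f"MLE on toy = {on_toy_mle:.3f}, waving = {waving_mle:.3f} -> {best}")
```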

There also seems to be a reference to the Singularity Institute:

You should get out more :)

By this I mean that you should become more acquainted with non-SI efforts in machine learning and AI (which is almost the same as "efforts in machine learning and AI").

gwern · 12y · 10 points

Yeah, he's definitely over-thinking it. Bayesian techniques are all over AI (and the version of AI relabeled 'machine learning') these days.

Yeah, I don't remember hearing anything about any AI work SI has done with Bayes' theorem. It's definitely used in the field, though.

If children really are natural Bayesians, then why and how do you think we change?

"Change"? Are you saying that many adults would use an obviously less-reliable technique? Somehow I doubt it. Did they run the same experiment with the adult subjects?

Cyan · 12y · 7 points

Did they run the same experiment with adult subjects?

Yes, they did. Gopnik writes:

As we get older our “priors,” rationally enough, get stronger and stronger. We rely more on what we already know, or think we know, and less on new data. In some studies we’re doing in my lab now, my colleagues and I found that the very fact that children know less makes them able to learn more. We gave 4-year-olds and adults evidence about a toy that worked in an unusual way. The correct hypothesis about the toy had a low “prior” but was strongly supported by the data. The 4-year-olds were actually more likely to figure out the toy than the adults were.

Interesting, and still perfectly Bayesian. Adults have stronger priors, so their updates are not as large.
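A toy Beta-Bernoulli illustration of "stronger priors, smaller updates" (the prior strengths and data below are invented for the example, not taken from the study):

```python
# A child and an adult share the same prior mean (0.9 that the toy works
# the usual way) but differ in prior strength; shown the same surprising
# data, the child's belief moves much further. All numbers are made up.

def posterior_mean(a, b, successes, failures):
    """Posterior mean of a Beta(a, b) prior after observing Bernoulli data."""
    return (a + successes) / (a + b + successes + failures)

data = (1, 9)  # 1 success, 9 failures: strong evidence against the usual hypothesis

child = posterior_mean(1.8, 0.2, *data)   # weak prior (strength 2)
adult = posterior_mean(45.0, 5.0, *data)  # strong prior (strength 50)

print(f"child posterior: {child:.2f}")  # ~0.23: a large revision
print(f"adult posterior: {adult:.2f}")  # ~0.77: barely moves
```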

Cyan · 12y · 1 point

Yup. The nature of the change in JQuinton's question was a change in the available evidence. (A quibble: this is not perfectly Bayesian, since adults ought not to treat toys in psychology experiments as exchangeable with toys encountered in the wild. I'd posit that Thinking, Fast and Slow is relevant here.)

Reality: 2 + 2 = 4.

Newspaper headline: "2 + 2 = 4 000 000 000. Scientists worry: Is our society prepared for a dramatic impact of large numbers, or will our civilization collapse? (read more on page 13...)"

Cyan · 12y · 2 points

Bayesian statistician extraordinaire Andrew Gelman also posted a discussion of the article.

As to the question of how we change (accepting, for the sake of argument, that adults are intuitive Bayesians to some extent), I think K? O'Rourke nailed it.

It seems my model of LessWrong is somehow broken, and so I want to know why--

The OP is at -3. Why is that? (Note: I am not the OP.) The article is relevant, not a re-post, and contains both a link and a synopsis. The only reason I can think of is that people thought it should go in the Open Thread (and didn't leave a comment saying so). The post seems too old, too downvoted, and not controversial enough for this to merely be the initial downvote wave that posts sometimes get.

Anyways, my expectations would have been that the post is in the low positive numbers. A -3 punches my expectations in the face and insults my expectation's mother. So now I'm curious. Ideas?

[This comment is no longer endorsed by its author]

I'm pretty sure that the -3 is just the initial downvote wave; it'll climb back up to ~2 during the next 24hrs. Of course the fact that this discussion is in the comments might affect things.

I am part of the "initial downvote wave". I downvoted the post because, although the "Bayesian" hypothesis might be interesting to LessWrong, the academic articles linked from the Slate article didn't really support it; the Slate article was just written by a researcher pushing their own research angle, and the LW post didn't add any further analysis.

My advice when linking to something like this is to link directly to the academic paper and to draw your summary directly from the abstract of the paper, so that you don't misrepresent what the paper claims. Popular science pieces normally write whatever they feel like and then link to a couple of vaguely related papers, so they can't be trusted at all.

My advice when linking to something like this is to link directly to the academic paper and to draw your summary directly from the abstract of the paper, so that you don't misrepresent what the paper claims.

Hear, hear.

I observe that there are also three reasonably highly upvoted comments critical of the OP.
My working theory is that the post was downvoted for reasons similar to those listed in those comments.
Perhaps even by the commenters themselves.

Ah, thank you. I could have sworn that I read the comments, trying to see if they mentioned the reason for the downvotes, but I must have been skimming, because it didn't click until you posted the explanation. Brains work weird.

I'm retracting the OC.

Note: I did not downvote.