Reto Schneider acquired, and has posted on his blog, footage of the Dr. Fox lecture experiment of 1970.  Experimenters prepared a nonsensical lecture on 'Mathematical Game Theory as Applied to Physician Education', taught it to an actor, Michael Fox, and had him deliver it to three audiences totaling 55 professionals, who asked questions and gave the lecture positive evaluations.  Details here, including the results of the 8-question evaluation.

The three groups were:

  • 11 attendees of a teacher training conference in continuing education
  • 11 mental health educators
  • 33 educators and administrators enrolled in a graduate-level university course on educational philosophy

(BTW, feel free not to vote this up.  I'm just relaying a link I found interesting.)


Thank you for linking to the original paper. I read it, and I no longer think this was a completely valid experiment. Or, at least, it is not as strong as it seems to be.

The questionnaire that they gave subjects read:

  • Did he dwell upon the obvious?
  • Did he seem interested in his subject?
  • Did he use enough examples to clarify his material?
  • Did he present his material in a well-organized form?
  • Did he stimulate your thinking?
  • Did he put his material across in an interesting way?
  • Have you read any of this speaker's publications?

Few of these seem at all incompatible with illogical nonsense. I'm sure he sounded interested in his subject, and for all I know he used lots of examples and put things across in an interesting way (something like "educating physicians is much like hunting tigers, because of the part with the stethoscopes" is interesting and provides examples, but is still total nonsense).

Another section asked for written comments, and they received comments like "Enjoyed listening", "Has warm manner", "Good flow, seems enthusiastic", all of which I'm sure were true, as well as a few like "Too intellectual a presentation", "left out relevant examples", and my favorite, "He misses the last few phrases which I believe would have tied together his ideas for me." These last ones seem to me like face-saving ways of saying "I didn't actually understand the slightest bit of what he was talking about".

If you have a really nice, warm presenter, probably following a string of stuffy old guys who have bored everyone at the conference to death, and you hand out an evaluation that never asks the questions you actually care about but leaves enough waffle room for respondents to praise a presentation they didn't understand, I'm not at all surprised that they would do exactly that.

Why, oh why, couldn't the experimenters have included a simple "Did you or did you not understand what this man was talking about?" It almost seems suspicious, like they were worried they wouldn't get as interesting a result.

...or maybe it's just normal incompetence. I have this same problem with course evaluations in my own university: they consist entirely of closed questions on peripheral issues that force me to end up giving very positive evaluations to awful classes. For example, it might ask me to rate from 1 to 5 the answers to lots of questions like "Was the professor always available to help students?" and "Was the work load reasonable?" and other things I am forced to admit were satisfactory, but nothing like "Did the professor drone on in a monotone about his own research interests for two hours a day and never actually get to covering the course material?"

Well, I think the similarity to actual IRL course evaluations is probably intentional: they were likely modeling the questions on either a particular course-evaluation questionnaire or a mixture of many. And this shows that course evaluations are pretty bad at picking out professors who cannot explain to people what they are talking about. Given how useful a little impenetrability can be in many fields of research, one wonders how intentional this might be...

Agreed. "Did he use enough examples to clarify his material?" and "Did he present his material in a well-organized form?" are the only relevant questions.

Why, oh why, couldn't the experimenters have included a simple "Did you or did you not understand what this man was talking about?"

Yes, that would have been better.

I have this same problem with course evaluations in my own university.

My favorites are the course evaluations that the instructor picks up at the same time as the final exam. Before the final exam in a microeconomics course I was taking, I drew a graph on the board showing the distribution (as goods) of grades and evaluations, and showed there were benefits to trade. (I still have no way of knowing whether that had an effect.)

The problem is, that's a one-shot prisoner's dilemma, and a microecon professor ranks just below a literal sociopath in terms of how likely he is to defect on the one-shot prisoner's dilemma.
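
That trade-then-defect structure is easy to make concrete. Below is a minimal one-shot sketch in Python; the payoff numbers, the move labels, and the best_response helper are all illustrative assumptions, not anything from the thread or the experiment:

```python
# One-shot prisoner's dilemma between a student and a professor.
# "C" = cooperate (kind evaluation / lenient grading),
# "D" = defect (harsh evaluation / harsh grading).
# Payoffs are (student, professor); the numbers are purely illustrative.
PAYOFFS = {
    ("C", "C"): (3, 3),   # the trade happens: good evals for good grades
    ("C", "D"): (0, 5),   # student praises the course, professor grades harshly anyway
    ("D", "C"): (5, 0),   # student trashes the course, professor stays lenient
    ("D", "D"): (1, 1),   # mutual defection
}

def best_response(opponent_move, player):
    """Payoff-maximizing move for one player, holding the other's move fixed."""
    profile = lambda m: (m, opponent_move) if player == 0 else (opponent_move, m)
    return max("CD", key=lambda m: PAYOFFS[profile(m)][player])

# Whatever the other side does, defecting pays more for both players,
# so the unique one-shot equilibrium is mutual defection:
for opp in "CD":
    assert best_response(opp, 0) == "D" and best_response(opp, 1) == "D"
```

Whatever move the other side commits to, defection yields the higher payoff, which is exactly why the graph-on-the-board promise is not self-enforcing in a single term.
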

There are reputation effects!
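
To put toy numbers on that: here is a hypothetical sketch of the repeated game from the professor's side, reusing the illustrative payoffs above (3 for mutual cooperation, 5 for a one-time defection, 1 once reputation collapses) and assuming students play grim trigger, i.e. cooperate until burned once, then never again. The discount factor and horizon are arbitrary assumptions:

```python
# Repeated version of the same toy game, professor's perspective.
# Assumption: future cohorts hear about past defections (reputation)
# and play grim trigger. All numbers are illustrative.
COOP, TEMPT, PUNISH = 3, 5, 1  # mutual cooperation / one-time defection / punishment

def professor_total(defect_now, delta=0.9, horizon=50):
    """Discounted payoff over `horizon` terms with discount factor `delta`."""
    total = 0.0
    for t in range(horizon):
        if not defect_now:
            payoff = COOP      # cooperation sustained every term
        elif t == 0:
            payoff = TEMPT     # one-time gain from defecting
        else:
            payoff = PUNISH    # reputation ruined thereafter
        total += delta ** t * payoff
    return total

print(professor_total(defect_now=True))    # ~13.9: short-term gain, long punishment
print(professor_total(defect_now=False))   # ~29.8: steady cooperation pays more
```

If delta is low enough (cohorts that never talk to each other, or a professor about to retire), the inequality flips and the one-shot logic above takes over.
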

(BTW, feel free not to vote this up. I'm just relaying a link I found interesting.)

I'd prefer that links with short explanations go in the Discussion area, and that Main be reserved for more substantial pieces.

Why?

I think that discussion should be for things that are incomplete that an author wants help with, or that are trivial, funny, or otherwise not as important. That way Main vs Discussion has a functional distinction: Main is things you want to read if you're a lurker and just want to learn the important stuff; Discussion is things you want to read if you are a community member and want to contribute to the community and help explore vague ill-formed thoughts.

I post some very long, substantive pieces in Discussion because they are incomplete and for discussion.

Dividing things into "short" and "long", or "significant contribution by poster" vs. "no significant contribution by poster", is not useful. The Michael Fox lecture is a notable experiment in rationality studies; this post provides new details on it and is therefore significant. It is also complete. The fact that I, the person who linked to the content, have little to add is of no interest to someone trying to learn things. I don't know why you care whether the explanation found here is short or long. The content, the thing linked to, is long and substantive and important; putting it in Discussion will make people looking for long, substantive, important things miss it.

I'll move this into Discussion because the parent of this comment has 24 votes - but I think it belongs in Main, and I suspect the worry about links going in Main comes from people fretting that somebody else is getting karma too easily, rather than from thinking about how to make the site effective.

I see someone has already moved it into Discussion, without asking or telling me. How rude.

Well, I'm not vehement about it, but I see the goal of Main as putting our best face forward, and I want it to be reserved for posts that do a lot of work on the reader's behalf. Yes, this is partly about signaling, but it's good for new readers if the most prominent material shows strong signals of quality.

I think the authors of this experiment were afraid to draw the correct (and obvious) conclusions from it. If a phony "expert" doesn't immediately stand out among the "real" ones during collaborative expert work, there are only two plausible hypotheses. The milder one is that the particular business transacted there is just a phony pretense; the stronger one is that the entire field of expertise is as phony as a three-dollar bill.

The experimenters themselves, however, hold prominent positions within the same system, so instead we get a rambling discussion that evades the obvious issues.

Those are not the only two plausible hypotheses, particularly because the phony expert was presenting to psychology professors et al.

What would the alternative hypotheses be, in your view?

Well, the most obvious is that people not in the same field are bad at detecting expertise. But I suppose there are even more - one could blame the audience as much as you blame the subject, that is, psychology professors :P Or you could guess at response bias - maybe people are bad at admitting they didn't understand something.

Yeah, I'll give a high performance evaluation if I enjoyed the course, and I can enjoy the course either because I learned something or because the instructor is a skilled performance artist. I don't think the issue is that I can't tell I'm not learning anything - I think the issue is that I'm inclined to reward things other than teaching me things even in an environment ostensibly about learning.

I don't think the issue is that I can't tell I'm not learning anything - I think the issue is that I'm inclined to reward things other than teaching me things even in an environment ostensibly about learning.

One of the issues was that, even after Fox had been revealed as a fake, people were earnestly interested in the application of game theory to physician education. That is, how much academics care about various ideas depends heavily on the panache with which those ideas are delivered.

Man, I'm interested in the application of game theory to physical education and I ain't even seen the lecture!

I can think of plenty of applications, but they all seem to involve recess. :)

First thing that came to mind: artillery truces in WWI + dodgeball.

You know you're from a tough school when you think of artillery as phys ed.

I think it's about physician education, ie of doctors.

I totally read "physical education." That's interesting.

Good catch.

Is there a video of the full lecture?

Email Reto Schneider and ask.

I know somebody is going to say, "Teachers and philosophers - so what?" So I'll say it first, and request that anybody else wishing to harp on the point provide some data showing that teachers and philosophers are not very smart or not very critical.

gwern:

provide some data showing that teachers and philosophers are not very smart or not very critical.

I wouldn't say these are philosophers at all ('educational philosophy' sounds pretty dubious as well). You can look at some GRE scores by major: http://www.ncsu.edu/chass/philo/GRE%20Scores%20by%20Intended%20Graduate%20Major.htm Philosophy scores at or near the top on all three sections; education varies widely. ('Education-Early Childhood' is apparently not for the sharpest tools in the shed...)