by [anonymous]
2 min read · 16th Feb 2019 · 9 comments


[See also: On Giving Advice and Recognizing vs Generating]

So this quarter, I’ve been tutoring for an undergraduate computer science course, and one interesting result is that I’ve revised how I think about teaching/learning.

Let me try to illustrate. Here’s a conversation that happened between some staff and the professor one day; it’s about what advice to give students on testing their code.

Tutor A: “I think that we should remind students to be very thorough about how they approach testing their code. Like, so far, I’ve been telling them that every line of code they write should have a test.”
Tutor B: “Hmmm, that might backfire. After all, lots of those lines of code are in functions, and we really just want them to make sure that their functions are working as intended. It’s not really feasible to test lines of code inside a function.”
Professor: “Actually…that seems like a fine outcome. We want students to be thinking about testing, and I’d actually be very excited if someone came up to me and asked how to test for what goes on inside a function…”
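
To make the tutors’ disagreement concrete, here’s a minimal sketch of my own (not from the course; the function and the values are hypothetical), in Python. You can’t attach a separate test to every line inside the function, but you can test the function’s observable behavior, including the boundaries where its internal branches switch.

```python
# A hypothetical example, not from the course: a small function whose
# internal lines can't each get their own test, but whose overall
# behavior can be checked.

def letter_grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    else:
        return "C"

def test_letter_grade():
    # Test the function as a unit: exercise each branch and the
    # boundary values where the branches switch.
    assert letter_grade(95) == "A"
    assert letter_grade(90) == "A"   # boundary between B and A
    assert letter_grade(85) == "B"
    assert letter_grade(80) == "B"   # boundary between C and B
    assert letter_grade(79) == "C"

test_letter_grade()
```

Asking how to check what goes on inside the function (say, the individual comparisons) is exactly the kind of question the professor was hoping students would bring up.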

This was surprising to me because most discussions I’d previously had about student learning focused on how to reduce confusion for the students. But in this scenario, the professor was fine with confusion happening; if anything, they seemed pleased that the concepts, when taken to their extremes, incited more questions.

And this general concept, of giving students something that's Not The Answer in an effort to move them closer to The Answer, seems to show up in several other areas.

For example, from my shallow understanding of how kōans work in Zen Buddhism, there’s a similar mechanic going on. The point of a kōan isn’t to develop a fully satisfactory answer to the question it asks, but to wrestle with its strangeness and paradoxical nature. The auxiliary things that happen along the way, en route to an answer, are really what the kōan is about.

To be clear, the thing I’m trying to point at isn’t just giving people practice problems, the way that we already do for math or physics.

Rather, I’m imagining things like people purposefully writing incorrect or confusing material, such that it prompts students to ask more questions. I guess there’s already a good number of people in the rationality space who write abstrusely, but I wonder how many are doing so for pedagogical reasons? Seeing as noticing confusion is an oft-cited useful skill, I think there’s more to do here, especially if you’re upfront about how some of the material is going to be incomplete, and maybe sometimes even wrong.

It seems like there’s a slew of related useful skills here. Several times in math class, for example, I’ve had my instructor make an error while doing some proof, leaving me confused about how we got from step N to step N+1. Sometimes I figure that they’re just wrong, and I write down what I think is correct. And sometimes I get scared that I’m the one who doesn’t understand.

But this whole process raises good questions. What if I hadn’t noticed that something was wrong? What does that say about my understanding? When I do notice that something is wrong, how do I know if it’s me or the other person?

This all seems applicable outside of a pedagogical context.

9 comments

The idea of "purposefully telling people incorrect information to make them learn even faster than by giving them correct information" feels like rationalization. I strongly doubt that people who claim to use this method actually bother measuring its efficiency. It is probably more like: "I gave them wrong information, some students came to the right conclusion anyway, which proves that I am a fantastic teacher, and other students came to a wrong conclusion, which proves that those students were stupid and unworthy of my time." Congratulations, now the teacher can do nothing wrong!

The goal of abstruse writing (if done intentionally, as opposed to merely lacking the skill to write clearly) is to avoid falsification. If my belief is never stated explicitly, and I only give you vague hints, you can never prove me wrong. Even if you guess correctly that I believe X, and then you write an argument about why X is false, I still have an option to deny believing X, and can in turn accuse you of strawmanning me (and being too stupid to understand the true depths of my thinking). If my writing becomes popular, I can let other people steelman my ideas, and then wisely smile and say "yes, that was a part of the deep wisdom I wanted to convey, but it goes even deeper than that", taking credit for their work and making them happy by doing so.

"purposefully telling people incorrect information to make them learn even faster than by giving them correct information"

For what it's worth, I think this is a poor characterization of the point (although I know it's stated in so many words in the post). It's not that you want to tell people things you know are wrong so much as prod folks into engaging with their confusion, and one way to do that is to sometimes deliberately ask them to lean into that confusion by serving it up to them to be seen clearly. Done skillfully, this doesn't necessarily mean telling people things you believe to be incorrect (although making "mistakes", or telling folks there are deliberate inaccuracies in something to get them to look for them, is a well-worn technique that can work well). It may instead mean working through the implications of what someone claims until you reach the point where it runs into trouble, or asking them to do something that you know will force them to grapple with their misunderstanding because they wouldn't be able to do it otherwise.

Yet it's a fair point to call out that there's a difference between chaos and chaos skillfully applied to a purpose, and the former would look like what you seem to be concerned with while the latter is more the intention of the post as I read it.

I feel like, for the people from whom I learned to distinguish social-reality from reality-reality, their technique depended a lot on being deliberately confusing or weird, in large part to break me out of established patterns of thought.

Eliezer wrote rather plainly about distinguishing social reality, beliefs-as-attire, etc. And I think this was sufficient to help me notice the reality/social-reality distinction in groups I wasn't part of myself, or no longer primarily identified with. But it seemed surprisingly useful to listen to weird rants by other iconoclasts in order to get a clearer sense of how social reality feels from the inside.

(I think those people may also have had other agendas that the confusion was part of. I do wonder at the fact that the people who most wanted to break me of my immersion in social reality also had weird agendas that benefited from me being disoriented.)

((Apologies for being a bit cryptic))

Optimising for student learning and growth requires participation from the students themselves, who know what they need. It takes some amount of structure, plus letting them figure things out for themselves.

This brings up the question of what you're trying to optimize for when teaching; in particular, which segment of the student population are you trying to best teach? If the median, then this strategy will at best be useless and at worst actively harm their learning. If the top percentile, then it may very well produce better outcomes than a more straightforward approach. But it does seem to be the case that there's a trade-off.

I've seen this referred to as discursive learning: the idea that things stick much better when you're forced to expend some cognition to figure out how they fit with what you already know.

I think this technique only works for live, one-on-one (or small-group) interactions; i.e., it doesn't work well for online writing.

The two components that are important for ensuring this technique is successful are:

1. You should tailor the confusion to the specific person you're trying to teach.

2. You have to be able to detect when the confusion is doing more damage than good, and abort it if necessary.

What you're talking about makes sense in the context you've given: you know people are paying attention when they ask "how to test for what goes on inside a function…". Focusing on understanding, rather than on a stream of words that aren't necessarily going to go into people's heads, might be something that isn't done as much (or maybe it depends on where you are), because the student-to-teacher ratio is so high (and students only have "1 teacher" per subject at a time).

I think if you ask people hard enough questions, and they solve them by themselves, then misinformation might not be necessary. ('Solve the quadratic equation', instead of 'here is the solution to the quadratic equation'.)

Strong agree. One of my subgoals during teaching is often to confuse students. See also this video, which basically captures the reason why.