One caveat: learning will matter more in scenarios like the medium one or slopworld, where the AIs' capabilities somehow stop at the level attainable by humans. If the AIs are indeed coming for all the jobs and all instrumental goals, as Bostrom proposes in Deep Utopia, then what's left for humanity?
As for real reasons to learn, maybe one could point to science-like questions that are easy for human creativity and very hard for AIs? Or to parallels that are easy for humans to notice and hard for AIs?
(crossposted from AI and Intrinsic Motivation to Learn - by M Flood)
Zvi Mowshowitz, in a recent AI roundup (emphasis added):
In order to use an opportunity to learn, LLM or otherwise, you need to be keeping up with the material so you can follow it, and then choose to follow it. If you fall sufficiently behind or don’t pay attention, you might be able to fake it (or cheat on the exams) and pass. But you won’t be learning, not really.
So it isn’t crazy that there could be a breakpoint around age 16 or so for the average student, where you learn enough skills that you can go down the path of using AI to learn further, whereas relying on the LLMs before that gets the average student into trouble. This could be fixed by improving LLM interactions, and new features from Google and OpenAI are plausibly offering this if students can be convinced to use them.
I’m skeptical we know whether that breakpoint exists. To my knowledge we don’t yet have graphs showing a discontinuity in skills or test scores across cohorts. We should be actively looking for it. If it appears, the right response is to rethink how schools work — not to try, in vain, to ban LLMs.
A friend who teaches seminars at Oxford told me about students returning from summer with no books read, most with little interest in the syllabus for a course they voluntarily paid for. That surprised her. It shouldn’t surprise us: most students are not the 1% who end up in graduate school; they’re the 99%. And yes, many of those 99% are probably handing in essays drafted by LLMs.
So, to ask plainly: is the wide availability of AI degrading metacognition, critical thinking, deep reading, and writing?
Or is AI merely exposing who already lacks intrinsic motivation to learn?
I don’t have a grand theory of intrinsic motivation. I have no idea where it comes from: genetics, upbringing, luck, or some mix. I see it in myself and in the high achievers I know; plenty of otherwise bright people simply don’t have it. Spend time around children and you’ll notice the variance immediately. Some are voraciously curious; others care more about social standing, comfort, or dominance. Parents who read produce children who read; incurious homes tend to produce incurious kids. Exceptions, like a brilliant child raised in a non-reading home, exist, but they’re rare.
I’ll be blunt: school can support curiosity and hard work, or it can crush them, but it cannot manufacture intrinsic motivation out of nothing. That means external incentives matter. We need real reasons for people to learn, not theatrical threats about AI taking all the jobs (which are demotivating), but concrete signals: you actually need this skill to eat, to earn, to participate. Hopefully we can do better than “beatings will continue until motivation improves,” maybe something like “you’ll live better with this skill than without it.”
Call it harsh, but call it honest: if we fail to redesign learning and labor, a nontrivial slice of this generation risks arriving at adulthood fluent in the appearance of competence and empty of the habit of competence. They’ll be, functionally, next-token predictors.
They will write clever memos, pass automated checks, and still fail at the unforgiving, judgment-heavy tasks that make organizations and democracies function. If that outcome is plausible, our job is triage: rewrite assessments, fund apprenticeships, and demand stronger signals of process, not just product. Schooling has always rewarded the product, the student in cap and gown holding a diploma, and then moved on. That won’t cut it in an age of believable machine fluency. Signals need to be harder to fake.
“AI is coming for all jobs sooner rather than later” may be true. Saying it aloud, repeatedly, is still demotivating for most minds. If we insist on that rhetoric without also building pathways that teach, credential, and pay for real skill, we risk turning a technology problem into a generational social crisis. And it makes a self-reinforcing loop: low-skilled grads increase demand for AI to take up the slack, which reduces the demand for skilled grads, which decreases the motivation to learn, which produces more low-skilled grads…