Alternatively or in addition to this, you can embrace AI and design new tasks and assignments that cause students to learn together with the AI.
This magical suggestion needs explication.
From what I've seen via Ethan Mollick, the new assignments that would be effective are the same as the usual - just "do the work," but at the AI, rather than copy-pasting from it. Enter a simulation, but please don't dual-screen the task. Teach the AI (I guess the benefit here is immediate feedback, as if you couldn't use yourself or a friend as a sounding board), but please don't dual-screen the task. Have a conversation (again, not in class or on a discussion board or among friends), but please don't dual-screen the task. Then show us you "did it." You could of course do these things without AI, though maybe AI makes a better (and certainly faster) partner. But the crux is that you have to do the task yourself. Also note that this admits the pre-existing value of these kinds of tasks.
Students who will good-faith do the work and leverage AI for search, critique, and tutoring are... doing the work and getting the value, like those who do the work without AI (probably more efficiently, possibly with higher returns). Students who won't... are not doing the work and not getting the value, aside from the signaling value of passing the class. So there you have it - educators can be content that not doing the assignment delivers worse results for the student, but the student doesn't mind as long as they get their grade, which is problematic. Thus, educators are not going quietly and are in fact very concerned about AI-proofing the work, including shifts to in-person testing and tasks.
However, that only preserves the benefit of the courses and in turn the degree (I'm not saying pure signaling value doesn't exist, I'm just saying human capital development value is non-zero and under threat). It does not insulate the college graduate from competition in knowledge work from AI (here's the analogy: it would obviously be bad for the Ford brand to send lemons into the vehicle market, but even if they are sending decent cars out, they should still be worried about new entrants).
Obligatory shill/reminder for any teacher reading this that if they want a never-before-seen educational data science challenge which can't be solo'd by current-gen AI (and is, incidentally, super easy to mark), they can just DM me. I might want some kind of compensation if it needs to be extensively customized and/or never released to the public, but just sharing scenarios a few months before I post them on LW is something I'd absolutely do for the love of the game. (And if they want any other kind of challenge, then, uh, good luck with that.)
I will say that my experience of a European university (in Sweden) has been completely different.
If other Europeans want to comment, it would be cool to hear, but I honestly think this might just be an American skill issue, in the sense that the institutions here are simply better. (Or maybe my experience of university is non-standard.)
Cheaters. Kids these days, everyone says, are all a bunch of blatant cheaters via AI. Then again, look at the game we are forcing them to play, and how we grade it. If you earn your degree largely via AI, that changes two distinct things: how much you actually learn, and what the degree signals.
Both learning and signaling are under threat if there is too much blatant cheating. There is too much cheating going on, too blatantly. Why is that happening? Because the students are choosing to do it.
Ultimately, this is a preview of what will happen everywhere else as well. It is not a coincidence that AI starts its replacement of work in the places where the work is the most repetitive, useless and fake, but its ubiquity will not stay confined there. These are problems and also opportunities we will face everywhere. The good news is that in other places the resulting superior outputs will actually produce value.
You Could Take The White Pill, But You Probably Won’t
As I always say, if you have access to AI, you can:

(A) use it to learn and grow strong and work better,
(B) use it to avoid learning, growing and working,
(C) refuse to use it at all, or
(D) use it in strictly limited capacities that you choose deliberately, to save time while avoiding the ability to avoid learning.

Choosing (A) and using AI to learn better and smarter is strictly better than choosing (C) and refusing to use AI at all. If you choose (B) and use AI to avoid learning, you might be better or worse off than choosing (C), depending on the value of the learning you are avoiding. If the learning in question is sufficiently worthless, there’s no reason to invest in it, and (B) is not only better than (C) but also better than (A).
I notice I am confused. What is the difference between ‘learning that’ and ‘just finding it out’? And what’s to stop Daniel from walking through a derivation or explanation with the AI if he wants to do that? I’ve done that a bunch with ML, and it’s great. o3’s example here was being told, and memorizing, that the integral of sin x is -cos x rather than deriving it, but that was what most students always did anyway. The path you take is up to you.
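For the record, the derivation in question is a single step: the derivative of -cos x is sin x, so by the fundamental theorem of calculus,

$$\int \sin x \, dx = -\cos x + C.$$

The gap between memorizing that fact and deriving it is one line, and the AI will happily walk you through either.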
I would instead ask, why are you assigning essays the AI can do for them, without convincing the students why they should still write the essays themselves? The problem, as I understand it, is that in general students are more often than not:
If the reward for painting is largely money, which it is, then clearly giving artists the ability to cheat means many of them will cheat, as in things like forgery, as they often have in the past. The way to stop them is to catch the ones who try. The reason the Buddhist monk presumably wouldn’t ‘cheat’ at meditation is that they are not trying to Be Observed Performing Meditation, they want to meditate. But yes, if they were getting other rewards for meditation, I’d expect some cheating, sure, even if the meditation also had intrinsic rewards. Back to the school question. If the students did know how to use AI to learn, why would they need the school, or to do the assignments? The entire structure of school is based on the thesis that students need to be forced to learn, and that this learning must be constantly policed.
Is Our Children Learning
The thesis has real validity. At this point, with not only AI but also YouTube and plenty of other free online materials, the primary educational (non-social, non-signaling) product is the forcing function: the class schedule and physical presence, and the exams and assignments, get you to do the damn work and pay attention, even if inefficiently.
It’s not at all obvious whether it would be a good idea to convince the kid otherwise. Using AI is going to be the most important skill, and it can make the learning much better, but maybe it’s fine to let the kid wait, given the downside risks? The reason taking such a drastic (in)action might make sense is that the kids know the assignments are stupid and fake. The whole thesis of commitment devices that lead to forced work is based on the idea that the kids (or their parents) understand that they do need to be forced to work, so they need this commitment device, and also that the commitment device is functional. Now both of those halves are broken. The commitment devices don’t work; you can simply cheat. And the students are in part trying to be lazy, sure, but they’re also very consciously not seeing any value here. Lee here is not typical in that he goes on to actively create a cheating startup, but I mean, hey, was he wrong?
Bingo. Lee knew this is no way to learn. That’s not why he was there. Columbia can call its core curriculum ‘intellectually expansive’ and ‘personally transformative’ all it wants. That doesn’t make it true, and it definitely isn’t fooling that many of the students.
Cheaters Never Stop Cheating
The key fact about cheaters is that they don’t merely never stop cheating on their own. They escalate the extent of their cheating until they are caught. Once you pop enough times, you can’t stop. Cheaters learn to cheat as a habit, not as the result of an expected value calculation in each situation. For example, if you put a Magic: the Gathering cheater onto a Twitch stream, where they will leave video evidence of their cheating, will they stop? No, usually not. Thus, you can literally be teaching ‘Ethics and AI’ and ask for a personal reflection, essentially writing a new verse of Ironic, and students will absolutely get it from ChatGPT.
This is a way to know students are indeed cheating rather than using AI to learn. The good news? Teachable moment. Lee in particular clearly doesn’t have a moral compass in any of this. He doesn’t get the idea that cheating can be wrong even in theory:
If you’re enabling widespread cheating on the LSATs and GREs, you’re no longer a morally ambiguous rebel against the system. Now you’re just a villain. Or you can have a code:
Wendy will use AI for ‘all aid short of copy-pasting,’ the same way you would use Google or Wikipedia or ask a friend questions, but she won’t copy-and-paste. The article goes on to describe her full technique. AI can generate an outline, and brainstorm ideas and arguments, so long as the words are hers. That’s not an obviously wrong place to draw the line. It depends on which part of the assignment is the active ingredient. Is Wendy supposed to be learning:
Wendy says planning the essay is fun, but ‘she’d rather get good grades.’ As in, the system actively punishes her for trying to think about such questions rather than producing the correct form of fake. She is still presumably learning about the actual content of the essay, and by producing it, if there’s any actual value to the assignment, and if she pays attention she’ll pick up the reasons why the AI structures the essay the way it does.

I don’t buy that this is going to destroy Wendy’s ‘critical thinking’ skills. Why are we teaching her that school essay structures and such are the way to train critical thinking? Everything in my school experience says the opposite.

The ‘cheaters’ who only cheat or lie a limited amount and then stop have a clear and coherent model of why what they are doing, in the contexts where they cheat or lie, is not cheating, or why it is acceptable or justified, in contrast with other contexts. They know why some rules are valid and others are not. Even then, it usually takes a far stronger person to hold that line than to not cheat in the first place.
If You Know You Know
Another way to look at this is: if it’s obvious from the vibes that you cheated, you cheated, even if the system can’t prove it. The level of obviousness varies, and you can’t always sneak in smoking-gun instructions to catch it. But if you invoke the good Lord Bayes, you know.
Not that they flag it.
But there’s a huge difference between ‘I flag this as AI and am willing to fight over this’ and knowing that something was probably or almost certainly AI. What about automatic AI detectors? They’re detecting something. It’s noisy, it’s measuring something different from cheating, it’s not that hard to largely fool if you care, and it has huge issues (especially for ESL students), but I don’t think either of these responses is an error?
If you’re directly block-quoting Genesis without attribution, your essay is plagiarized. Maybe it came out of the AI and maybe it didn’t, but it easily could have; the AI knows Genesis and is allowed to quote from it. So 93% seems fine. Whereas Wendy’s essay is written by Wendy; the AI was used to make it conform to the dumb structures and passwords of the course. 11% seems fine.
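To make ‘they’re detecting something, noisily’ concrete, here is a deliberately crude sketch of the kind of surface features a score like that can key on. This is a toy under my own assumptions, not how any real detector works; real detectors are trained models, and the phrase list below is invented for the example.

```python
import re
import statistics

# Hypothetical list of "AI-flavored" stock phrases, invented for this toy.
HEDGE_PHRASES = [
    "in today's fast-paced world",
    "it is important to note",
    "delve into",
    "multifaceted",
]

def ai_score(text: str) -> float:
    """Crude 0-100 'AI likelihood' score from two surface features."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.split()]
    lengths = [len(s.split()) for s in sentences]
    # Human prose tends to be "bursty" (high variance in sentence length);
    # uniform sentence lengths push the score up.
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    uniformity = max(0.0, 1.0 - burstiness / 10.0)
    # Each stock phrase found pushes the score up further.
    hits = sum(phrase in text.lower() for phrase in HEDGE_PHRASES)
    return round(min(100.0, 60.0 * uniformity + 20.0 * hits), 1)

# Formulaic human prose (think ESL students, or block quotes of Genesis)
# scores high; varied prose scores low.
print(ai_score("It is important to note that history matters. "
               "It is equally vital to see that context matters. "
               "It is also crucial to accept that framing matters."))
```

Anything built from features like these will misfire on formulaic human writing and is trivially gamed by anyone who varies their sentences, which is exactly the noisiness described above.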
The Real Victims Here
I’m sorry, what? Given how estimations work, I can totally believe we might be overestimating the number of kids who are cheating. Of course, the number is constantly rising, especially for the broader definitions of ‘cheating,’ so even if you were overestimating at the time you might not be anymore. But no, this is not about ‘a few more plagiarized assignments per term,’ both because this isn’t plagiarism, it’s a distinct other thing, and also because by all reports it’s not only a few cases, it’s an avalanche even if underestimated. Doing the assignments yourself is now optional unless you force the student to do it in front of you. Deal with it.

As for this being ‘grief and hassle’ for educators, yes, I am sure it is annoying when your system of forced fake work can be faked back at you more effectively and more often, and when there is a much better source of information and explanations available than you and your textbooks, such that very little of what you are doing really has a point to it anymore. If you think students have to do certain things themselves in order to learn, then as I see it you have two options, and you can do either or both: make the work happen in front of you, or convince the students the work is worth doing.
Alternatively or in addition to this, you can embrace AI and design new tasks and assignments that cause students to learn together with the AI. That’s The Way. Trying to ‘catch’ the ‘cheating’ is pointless. It won’t work. Trying only turns this at best into a battle over obscuring tool use and makes the whole experience adversarial. If you assign fake essay forms to students, and then grade them on those essays and use those grades to determine their futures, what the hell do you think is going to happen? This form of essay assignment is no longer valid, and if you assign it anyway you deserve what you get.
I think that is wrong. We are a long way away from the last people giving up this ghost. But seriously, it is pretty insane to think ‘using AI for homework’ is cheating. I’m actively trying to get my kids to use AI for homework more, not less.
What percentage of that 90% was ‘cheating’? We don’t know, and definitions differ, but I presume a lot less than all of them. Now and going forward, I think you could say that particular uses are indeed really cheating; it depends on how you use it. But if you think ‘use AI to ask questions about the world and learn the answer’ is ‘cheating,’ then explain what the point of the assignment was, again? The whole enterprise is broken, and will stay broken while there is a fundamental disconnect between what is measured and what they want to be managing.
The entire article makes clear that students almost never buy that their efforts would be worthwhile. A teacher can think ‘this will teach them effort,’ but if that’s the goal, then why not go get an actual job? No one is buying this, so if the grades don’t reward effort, why should there be effort? How dare you let 18-year-olds decide whether to engage with their assignments that produce no value to anyone but themselves. This is all flat-out text.
There’s no point. Was there ever a point?
The question is, once you know, what do you do about it? How do you align what is measured with what is to be managed? What exactly do you want from the students?
What is measured gets managed. You either give the better grade to the ‘barely literate’ essay, or you don’t. My children get assigned homework. The school’s literal justification – I am not making this up, I am not paraphrasing – is that they need to learn to do homework so that they will be prepared to do more homework in the future. Often this involves giving them assignments that we have to walk them through because there is no reasonable way for them to understand what is being asked. If it were up to me, damn right I’d have them use AI.
Great! Now we can learn.
Taking Note
Another AI application at university is note-taking. AI can do excellent transcription and rather strong active note-taking. Is that a case of learning, or of not learning? There are competing theories, which I think are true for different people at different times.
AI also means that even if you don’t have it take notes or a transcript, you don’t have to worry as much about missing facts, because you can ask the AI for them later. My experience is that having to take notes is mostly a negative. Every time I focus on writing something down, I’m not listening, or not fully listening, and definitely not truly thinking.
Of course your laptop is open to an AI. It’s like being able to ask the professor any questions you like without interrupting the class or paying any social costs, including stupid questions. If there’s a college lecture, and at no point do you want to ask Gemini, Claude or o3 any questions, what are you even doing? That also means everyone gets to learn much better, removing the tradeoff of each question disrupting the rest of the class. Similarly, devising study materials and practice tests seems clearly good.
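Since ‘devising study materials and practice tests’ is the clearly-good case, here is a minimal sketch of what that looks like, assuming the OpenAI Python client and an API key in the environment; the model name is a placeholder, and the prompt is just one obvious way to phrase the request.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_practice_quiz(notes: str, n_questions: int = 5) -> str:
    """Turn lecture notes into exam-style questions plus an answer key."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whatever model you use
        messages=[
            {"role": "system",
             "content": "You write exam-style practice questions with answer keys."},
            {"role": "user",
             "content": (f"Write {n_questions} practice questions, then an "
                         f"answer key, based only on these lecture notes:\n\n"
                         f"{notes}")},
        ],
    )
    return response.choices[0].message.content

print(make_practice_quiz("Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."))
```

The value is in the follow-up loop: you answer the questions, then ask the model to grade you and explain what you got wrong.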
What You Going To Do About It, Punk?
The most amazing thing about the AI ‘cheating’ epidemic at universities is the extent to which the universities are going quietly into the night. They are mostly content to let nature take its course. Could the universities adapt to the new reality? Yes, but they choose not to.
The obvious interpretation is that college had long shifted into primarily being a Bryan Caplan style set of signaling mechanisms, so the universities are not moving to defend themselves against students who seek to avoid learning. The problem is, this also destroys key portions of the underlying signals.
How Bad Are Things?
Periodically you see talk about how students these days (or kids these days) are in trouble. How they’re stupider, less literate, they can’t pay attention, they’re lazy and refuse to do work, and so on.
The thing is, this is a Pessimists Archive speciality; this pattern dates back at least to Socrates. People have always worried about this, and the opposite has very clearly been true overall. It’s learning, and also many other things, where ‘kids these days’ are always ‘in crisis’ and ‘falling behind’ and ‘at risk’ and so on. My central explanation for this is that as times change, people compare kids now to kids of old both through rose-colored memory glasses, and also by checking against the exact positive attributes of the previous generations. Whereas as times change, the portfolio of skills and knowledge shifts. Today’s kids are masters at many things that didn’t even exist in my youth. That’s partly going to be a shift away from other things, most of which are both less important than the new priorities and less important than they used to be.
Is it finally ‘learning to think’ this time? Really? Were they reading the Sequences? Could previous students have written them? And yes, people really will use justifications for our university classes that are about as strong as ‘blacksmithing is an extremely useful skill.’ So we should be highly suspicious of yet another claim of new tech destroying kids’ ability to learn, especially when it is also the greatest learning tool in human history. Notice how much better it is to use AI than it is to hire a human to do your homework, if both had the same cost, speed and quality profiles.
With AI, you create the prompt and figure out how to frame the assignment, you can ask follow-up questions, you are in control. If you hire a human, you are much less likely to do any of that. It matters. Ultimately, this particular cataclysm is not one I am so worried about. I don’t think our children were learning before, and they have a much better opportunity to do so now. I don’t think they were acting with or being selected for integrity at university before, either. And if this destroys the value of degrees? Mostly, I’d say: Good.
The Road to Recovery
If you are addicted to TikTok, ChatGPT or your phone in general, it can get pretty grim, as was often quoted.
The ‘catch’ that isn’t mentioned is that She Got Better.
I think it’s both interesting and important context. If your example of a student addicted to ChatGPT and her phone beat that addiction, that’s highly relevant. It’s totally within Bounded Distrust rules to not mention it, but hot damn. Also, congrats to maybeimnotsosmart.
The Whispering Earring
Ultimately the question is, if you have access to increasingly functional copies of The Whispering Earring, what should you do with that? If others get access to it, what then? What do we do about educational situations ‘getting there first’? In case you haven’t read The Whispering Earring, it’s short and you should, and I’m very confident the author won’t mind, so here’s the whole story.
This is very obviously not the optimal use of The Whispering Earring, let alone of the ability to manufacture copies of it. But, and our future may depend on the answer, what is your better plan? And in particular, what is your plan for when everyone has access to one (for now imperfect and scope-limited, but continuously improving), and you are at a rather severe disadvantage if you do not put one on? The actual problem we face is far trickier than that. Both in education, and in general.