Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/yNitwYkHP6DtkkSrG/should-ai-writers-be-prohibited-in-education 

 

Note: this is an attempt to engage with and interpret legislation on the usage of AI. I don't have a strong opinion on this yet, and I expect it to be controversial, which is why I preferred the Question format.

 

In the AI Act, i.e., the EU's regulatory framework for the usage of AI-related technologies, it is stated that:

The following artificial intelligence practices shall be prohibited:

the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.

(Title II, Article 5, p. 43, English version)

I'll set up one interpretation of this statement in debate form: 

Question: should AI writers be prohibited in education? 

Claim: we can stretch this statement to apply to the use of AI writing products by underage students for their assignments. This technology exploits students' inability to make a fully informed and thoughtful decision about what would be beneficial for their intellectual development and education. Therefore, the practice should be prohibited.

Counterclaim: the AI system is not exploiting anyone's vulnerability, as the notion of vulnerability should not be taken to include one's proneness to dishonesty or cheating. Therefore, AI writers should not be prohibited, and students should instead be held accountable for cheating when they use AI writing models to compose their assignments.

 

Feel free to continue the debate in the comments section. 


2 Answers

ChristianKl

Jan 17, 2023


Writing in a way that clearly communicates the writer's intent to an AI is likely the most important writing ability for most students to learn.

Prohibiting students from using AI writers hampers their efforts to learn important writing abilities. It's like forbidding the use of calculators.

It's a more important skill than writing essays for teachers to evaluate.

Astynax

Jan 17, 2023


I speak as someone who teaches college freshmen.

On the one hand, I see AI writers as a disaster for classes involving writing. I tried ChatGPT last night and gave it an assignment like one I might assign in a general studies class; it involved two dead philosophers. I would definitely have given the paper an A. It was partly wrong, but the writing was perfect and the conclusion correct and well argued.

This isn't like Grammarly, where you write and the computer suggests ways to write better. I didn't write my paper; I wrote a query. Learning to craft the query took me almost no time, and here it is: cut and paste the assignment into the prompt, and add the phrase "with relevant citations, and a reference section using MLA format." OK, now you can do it too!

The reason I think this matters is that both effective communication and rational thinking in the relevant field -- the things you practice if you write a paper for a class -- are things we want the next generation to know. 

On the other hand, a legal ban feels preposterous. (I know, I'm on LW, I'm trying to rein in my passion here.) A site that generates fake papers can exist anywhere on the globe, outside EU or US law. Its owners can reasonably argue that its real purpose isn't to write your paper for you, but for research purposes (like ChatGPT!), or for generating content for professional websites (I've seen this advertised on Facebook), or to help write speeches, or as souped-up web searches, which is essentially what they are anyway.

What has worked so far as technology changed was to find technical solutions to technical problems. For a long time colleagues have used TurnItIn.com to be sure students weren't buying or sharing papers. Now they can use a tool that detects whether a paper was written by ChatGP3. I don't know if eventually large language models (LLMs) will outpace associated detectors, but that's what's up now.