OPINION: Allowing AI in schools is becoming more likely, but is it a good idea?

Lillith Dunn, Editor-in-Chief, and Maayna Parikh, guest contributor
eSomethin Staff

Artificial intelligence is no longer something only seen in sci-fi movies or high-tech offices—it is now showing up in classrooms across the country. From AI-powered tutoring programs to tools that can write essays in seconds, schools are facing a major shift in how students learn and teachers teach. Supporters argue that AI can help personalize education, save time, and give students extra support when they need it most. However, critics worry that these tools could encourage cheating, reduce critical thinking skills, and create unfair advantages. As schools try to decide whether to ban, limit, or embrace artificial intelligence, one question remains at the center of the debate: is AI helping students learn, or making learning easier to avoid?

The introduction paragraph above was written entirely by ChatGPT, but what if the entire article was written by AI? Picture that: we put in a prompt, AI conducts our “interviews,” does the research, and perfects the grammar, and readers never know the difference. This could be a possibility in the near future as schools continue to consider the use of AI in classrooms. But is adopting artificial intelligence in schools actually a good idea?

In our opinion? No. 

Here are some common words you’ll see throughout this article, and what we mean by them:
Cheating: breaking the rules laid out for a certain task or activity. Plagiarism, more specifically, is the act of claiming someone (or something) else’s work as your own.
AI: specifically generative AI within this article. “Generative AI creates new content, like text or images, based on patterns in data.” This is different from traditional AI, such as Siri and Alexa, which use pre-built code to complete simple tasks. The same goes for tools such as spell check, which is an NLP (natural language processing) AI.
Co-pilot: a person or tool that assists you in completing tasks. Co-pilots can provide feedback and advice, but you are doing most, if not all, of the work. Co-pilots don’t generate ideas or make decisions about how a task is completed, and they do minimal editing overall. “Co-pilot” can be a general term with regard to AI, but Copilot is also the name of Microsoft’s specific AI program.
Collaborator: a person or tool that works with you in completing tasks. Collaborators can provide ideas, advice, and feedback, and they help make decisions about how a task is completed. The work is split roughly 50/50 or 60/40 with a collaborator. Once you start allowing ChatGPT to make decisions (i.e., generating examples and choosing sources) and produce content for you, rather than simply provide feedback on a completed task, it becomes a collaborator, not a co-pilot.
ChatGPT: an OpenAI app that is a popular AI source, especially among students. Since it is entirely generative AI, it will be used interchangeably with “AI” throughout this article.

Currently, Perrysburg’s AI policy isn’t as strict as you might expect. While AI is prohibited for completing assignments and essays, many AI resources are allowed to help students study and learn material they may be struggling with.

But as AI sources become more popular, many schools have opened up options for AI help with essays and certain assignments. For example, you could hypothetically use ChatGPT to give you essay ideas and sources, so long as you write the essay itself. Schools are also becoming more lax about teachers using AI for their lessons and grading.

All of these changes are more harmful than they seem.

First, if we start allowing AI in schools, how would one even start to regulate it? Currently, the line is drawn clearly in the sand as to how AI can be used in schools: students are told not to use it. However, once we start adding nuances, it becomes harder to tell when that line has been crossed. For example, one student may have used AI only to generate ideas for an article, while another may have used it to write an entire paragraph.

The tension between students, teachers, and AI

A study done by Temple University tested the accuracy of a popular AI-detection software, Turnitin.com. Turnitin correctly identified 28 of 30 samples of human-written text, or 93%. One sample was incorrectly rated as 11% AI-generated, and another sample could not be rated.

Turnitin also correctly identified 23 of 30 samples of AI-generated text as being 100% AI-generated, or 77%. Five samples were rated as partially (52-97%) AI-generated, and two samples could not be rated. When hybrid texts were introduced, however, things got murky.

According to the study, “Determining the correctness of Turnitin’s scores on the hybrid texts proved to be a challenging exercise. Technically, since the texts were all partly human-written and partly AI-generated, a “correct” score could be considered to be any score between 1-99% (in other words, not 0% and not 100%.) By that metric, Turnitin correctly identified 13 of 30 hybrid texts as being neither fully human-written nor fully-AI-generated, or 43%. Of the remaining texts, 6 were identified as 100% AI and 7 were identified as 100% human-written. One text was unable to be rated.”

Overall, the study found that Turnitin had an 86% success rate. It is safe to say that when texts are entirely human-written or entirely AI-generated, Turnitin has great accuracy. However, students are unlikely to submit an assignment written entirely by AI, and when the website is presented with a blend of human and bot, it often becomes confused and inaccurate. If these hybrid assignments are allowed by schools, how will teachers determine what is and isn’t cheating?

Detecting AI usage is not the only problem with allowing it in schools. Students’ use of AI can inhibit their critical thinking skills, as well as how they process and interact with new material. Once you start using AI to write essays or complete assignments, you are no longer interacting with the material or learning the lessons the same ways you would without relying on ChatGPT.

We aren’t saying that AI should not be used for tutoring purposes or translating for transfer students, but it should not be completing the work for you. ChatGPT can make a good co-pilot; using it as a collaborator is what prevents learning. 

Students’ use of AI isn’t the only thing that harms learning; teachers’ use of AI can erode students’ trust as well. A study from the University of Hong Kong found that when teachers use AI to do things such as create assignments, it lessens trust between those teachers and their students. One student in the study, identified as LM, explained that she found the use of AI hypocritical: “The teacher said we cannot use AI, but she’s using it!”

This sentiment was echoed by other students who disliked their teachers’ use of AI in a classroom setting. Fundamentally disagreeing with how you are being taught, as well as viewing that teacher as a hypocrite, creates a rift between students and educators, an issue that can be prevented if AI use is limited in schools.

Image caption: A screenshot of the inaccurate Gemini statement. While the error has since been corrected, the claim was up for long enough to spread across the internet. This is a prime example of why relying on AI can be harmful for students. Image from: Fact-Checking AI | Saint Mary’s College

Furthermore, AI is not the most reliable source. Not too long ago, Google’s AI assistant, Gemini, claimed that eating rocks was beneficial to people. While the error has since been resolved, the information had been pulled from an article in The Onion, a well-known satirical news site; the AI took the joke and treated it as fact.

When students use artificial intelligence, not only might they fail to understand the material as well as they should, they might also be absorbing false information. Realistically, if a student is using AI because they don’t even have time to complete their homework, the likelihood of them fact-checking the AI’s output is low.

Ultimately, AI can be a useful tool if used correctly. Instances such as AI creating flashcards or reteaching material are great examples of AI acting as a co-pilot, and they can be extremely beneficial. However, using AI as a collaborator is not the same. Once you start allowing AI as a collaborator, the line in the sand becomes less clear until it is swept away by the waves of nuance and technological inaccuracy in grading. If using ChatGPT to complete assignments is automatically considered cheating, the line stays unmuddled, and teachers’ and administrators’ jobs become easier when it comes to grading.

The fact is, you wouldn’t want this article to have been written by AI while we slapped our names on it and acted as if it were our own work. So why should that be considered the norm for schools in the future?
