As schools nationwide grapple with the growing impact of artificial intelligence on student learning, a new challenge has emerged: maintaining academic integrity in a world where AI tools can write essays, solve complex problems, and even mimic human writing.
In response, educators in the San Ramon Valley Unified School District (SRVUSD) seem divided between embracing AI as a tool for deeper learning and banning it outright.
“On one side, AI has the potential to support students and make learning accessible for students with challenges,” said Mitch Bathke, the journalism adviser and a world history teacher at Dougherty Valley High School in San Ramon. “On the other hand, it enables people’s worst impulses for laziness and deception. And I don’t think teachers have figured out how to stop that considerable downside.”
Concerns about AI are now compounded as the SRVUSD debates integrating Gemini AI into school-issued computers next school year.
“As an English teacher, it’s pretty much hard to be anything but against it,” AP Lit and English 11 teacher Sarah Militante explained. “Not only is AI doing the writing for students, but it’s doing the thinking for them. And it’s my job to teach students how to think critically. So if they’re having a computer do that for them, it just goes against everything I believe about education, about writing, about the arts, etc.”
A study published by the Center for Democracy & Technology, a civil rights nonprofit, found that student discipline stemming from suspected plagiarism with generative AI is notably on the rise, with rates climbing from 48% to 64% between the 2022-23 and 2023-24 school years.
Tools both to plagiarize and to detect AI use are proliferating as well. The same study indicates that the share of teachers relying on AI detection tools has surged, jumping from 30% to 68% during the 2023-24 school year.
But these tools have flaws and inaccuracies.
“I have used the tools built into Turnitin.com and Google Classroom, but they are bordering on useless,” Bathke said. “I have spotted obviously AI-generated content that passed those tools with flying colors. I have examined flagged writing by these tools only to find that they are perfectly fine.”
Added Militante: “The true problem is the time it takes [checking for AI], as instead of giving actual feedback on writing, we’re spending it all asking if student writing is AI.”
How AI detectors work
So how does an AI “checker” actually work? GPTZero, one of the pioneering AI detectors, uses algorithms to analyze word choice, sentence structure, and textual meaning, and then compares the text to large collections of known AI-generated and human-written content.
Detectors like GPTZero examine “perplexity,” or how predictable a piece of text is to an AI model based on what it learned during training. Essentially, they measure how surprising the language is. Predictable, formulaic language (low perplexity) is more likely to be flagged as AI-generated, while surprising, idiosyncratic writing (high perplexity) is more likely to be judged human-written.
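GPTZero’s actual models and scoring thresholds are proprietary, so the following is only a rough sketch of the perplexity idea, using the openly available GPT-2 model from the Hugging Face transformers library; the model choice and sample sentences are illustrative, not anything GPTZero itself uses.

```python
# Rough sketch of the perplexity idea (illustrative only; not GPTZero's method).
# Lower perplexity = the language model finds the text more predictable.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable the text is to GPT-2 (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input tokens as labels makes the model return the
        # average cross-entropy of predicting each next token.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Formulaic phrasing tends to score lower than unusual phrasing.
print(perplexity("In conclusion, it is important to note that technology is changing."))
print(perplexity("The kettle sulked on the stove, quietly plotting its revenge."))
```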
Another property of text measured by GPTZero is “burstiness,” or how much the writing varies from sentence to sentence within a passage. Humans tend to be more variable (high burstiness) in their writing, mixing sentence lengths and structures throughout a piece. AI models, however, tend to produce more consistent, uniform text (low burstiness).
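Again as a simplified illustration only (real detectors use richer statistics than this), burstiness can be approximated as how much sentence lengths vary across a passage; the example passages below are made up.

```python
# Toy illustration of "burstiness" as variation in sentence length.
# Human writing tends to mix long and short sentences; AI text is often more uniform.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words (higher = more varied)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

varied = "I ran. Then, out of nowhere, the storm rolled in and soaked everything I was carrying. Silence again."
uniform = "The storm arrived quickly. The rain soaked my belongings. The weather calmed down soon."
print(burstiness(varied))   # higher score: sentence lengths swing widely
print(burstiness(uniform))  # lower score: sentences are all about the same length
```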
But humans can have low-perplexity content if they use common phrases and low burstiness if they have a consistent writing style. So, human-generated content can still be flagged as AI written.
GPTZero itself acknowledges it is not 100% accurate, claiming a 96.5% accuracy rate when detecting AI in “mixed” documents containing both human-written and AI-generated content. OpenAI, the company that created ChatGPT, even shut down its own AI detection software in mid-2023, only six months after it was unveiled, because of its poor accuracy.
Turnitin.com is the tool most widely used by educators, with more than 16,000 academic institutions, publishers, and corporations using its plagiarism-detection services. But the University of Kansas Center for Teaching Excellence has found that the software appears to have a margin of error of plus or minus 15 percentage points. That means a paper scored as, say, 20% AI-written could plausibly fall anywhere from 5% to 35%, a swing large enough to cause significant consequences for a student who might be innocent.
Dougherty Valley High School 11th grader Arshia Chhabra has firsthand experience with the limitations of AI detection tools.
“I’ve had AI detection tools like Turnitin.com flag multiple essays of my own work as AI generated, which has significantly limited my creativity,” Chhabra said. “My writing style tends to be a bit more flowery and elaborate, which gets flagged immediately. I have to change the way that I phrase things to avoid being punished for a crime I didn’t commit.”
The inaccuracies of AI detection also hurt teachers, because in some cases they let AI-generated content slip through. Once students find ways to bypass detectors, including the growing number of tools that “humanize” AI-generated content, there is no reliable way to reprimand the plagiaristic behavior.
“I’ve had a number of students who are just straight-up cheating, even in an elective like journalism, where one should only be enrolled if they actually like writing,” Bathke said. “AI is enabling the worst behaviors in far too many students obsessed with obtaining high scores and not at all concerned with actual learning.”
As the difficulties with AI plagiarism continue to mount, schools are left with crucial policy decisions to make in order to address them.
Students and teachers seek improved guidelines
Students like Janisha Potipireddi, another Dougherty Valley High 11th grader, are calling for more detailed and clear AI policies.
“Right now, the border between what is considered acceptable use of AI and what is not acceptable is blurred,” Potipireddi said. “That can cause further issues that are unfair to the student if it is not addressed.”
Added Chhabra, “I think that the AI policies that we have right now are understandably strict, but they definitely need some nuance.”
As an alternative to AI detection tools, Chhabra suggests Draftback, a Chrome extension some of her teachers use that shows whether a large amount of text was copy-pasted and lets teachers replay a document’s version history keystroke by keystroke. That way, even if an essay gets flagged, teachers can go back and check whether the student actually wrote the work or copied it.
There seems to be a consensus among students and teachers that SRVUSD AI guidelines need to be changed.
“Currently, SRVUSD guidelines for the use of AI are too vague,” Bathke said. “There are efforts currently underway to provide clarity, but not all educators agree on what that should look like.”
Teachers have varying approaches to dealing with AI use in the classroom. Militante describes an English 11 final paper that her students wrote by hand over three days, working only in class while she monitored their screens through Securly, a cloud-based web filtering and school safety tool, and collected their phones in an organizer.
“I wish it didn’t have to be like that,” Militante said. “But the thing is, if it was just a free-for-all, then a lot of kids would take advantage of using AI, and then we would never learn how to be good writers.”
While concerns about misuse exist, teachers and students also see the potential benefits of integrating AI into education. The positive uses of AI, including fostering creativity and improving learning outcomes, are an important consideration for school districts.
“Integrating AI tools in the classroom has the potential to be a force for good,” Bathke explained. “It will take work, discipline, planning, and patience on behalf of the teachers. It will take a shift in priorities amongst some students who are just looking to cut corners. But once that all gets sorted out, I think AI could make people better organized and stronger writers.”
Some teachers have already begun implementing AI in their classrooms. Dougherty Valley High 11th grader Mili Prakash has taken courses such as AP Psychology and AP Seminar in which, she said, students are allowed to use AI to grade AAQs, a type of free-response question, so they can get quick feedback and understand how to improve. In AP Seminar, she is able to use AI tools to simplify lengthy research papers and understand their content.
Looking ahead, educators say striking a delicate balance between discouraging AI plagiarism and encouraging learning with AI is important. But they also agree that embracing AI as a permanent part of the educational landscape is essential.
“AI is here to stay,” Bathke emphasized. “We need to accept that and work towards teaching responsible use. Denying it and cursing it won’t make it go away; it will just make us look like luddites out of touch with the modern world.”
Anushka Kabra is an 11th grader at Dougherty Valley High School in San Ramon.