As artificial intelligence (AI) software like ChatGPT becomes more prevalent in everyday life, universities are grappling with how to handle the new technology with regard to academic misconduct.
One French university even banned its use, but at the University of Guelph, it’s not so clear-cut.
“The whole question of AI and things like ChatGPT is complicated,” said Byron Sheldrick, associate vice-president (academic) at the University of Guelph. “It’s not just a question of academic misconduct, although it can be seen that way.”
Sheldrick said this is why, rather than banning the software, the university is currently working on a policy and a statement regarding the use of ChatGPT and AI more generally, which he called “both a threat to academic integrity in the sense that students could use it to cheat, but it’s also a powerful tool.”
The university is also undertaking a complete review of its academic misconduct policy.
The goal is to strike a balance: encouraging the software’s use as a helpful tool while also preventing plagiarism.
“It's not just all about cheating, it's also about, how do we teach students to use tools like artificial intelligence because these are going to be the tools of the future and the tools of the work world,” he said. “People are going to have to understand how to use those tools, but also how to do so with integrity.”
Considering the complexity of this, he said they need to develop resources for students and instructors, and to rethink how assessments are designed to limit the potential for plagiarism.
There are AI detection tools as well: OpenAI, the creator of ChatGPT, has released a tool to detect AI-generated content. The plagiarism-checking tool Turnitin is also in the process of adding AI writing and ChatGPT detection capability to its software.
But Sheldrick would prefer to “think about how we assess so that we can limit the exposure of our students, or it makes it more difficult for them to use ChatGPT.”
For example, he said instructors at some institutions are asking students to submit a short two- or three-minute video along with their essay, discussing how they wrote it, did their research and developed their ideas.
Another tactic might be for instructors to ask students to specify how they used ChatGPT, if its use complies with the requirements of the assignment.
Whether students can use it or not will be up to the instructor's discretion.
He expects some instructors will also prioritize face-to-face assessments over take-home ones, while others will design their assessments in ways that make it more difficult to rely on ChatGPT.
“So there are different ways of approaching it. All of which is to say it's more complicated than just saying AI is bad and any use of it should be banned, but we need to be careful and thoughtful about how we use it.”
He said using the software as a jumping off point, to help structure an essay, for instance, is “potentially a legitimate use.”
Instructors often encourage collaboration in assignments to help in a similar way, he said, so this would add another dimension to that.
In his mind, it can be used as a tool to help plan or organize thoughts, for instance, as long as the final product is clearly the student’s own work.
But for Ruediger Mueller, depending on how it’s used, it’s just another way to plagiarize.
Mueller is a professor and the associate dean academic in the College of Arts, meaning he often deals with plagiarism cases.
Right now, he has three cases of academic misconduct before him, where students have been accused of using AI despite being told not to.
If an instructor said it could be used, he said that’s fair game. The key is to clearly articulate what is and isn’t permitted.
But if a faculty member said not to use a tool like ChatGPT, and a student does, “in my opinion, black and white, they’re absolutely committing academic misconduct, because what they’re doing is they're giving someone else's ideas and passing them off as their own.”
The same goes if an instructor said not to use an external aid; ChatGPT would count the same as any other resource, as it’s stealing someone else’s idea without giving them credit.
Philosophy professor Andrew Bailey disagrees.
Since ChatGPT’s content is not written by a person, it’s not technically stealing intellectual property, “which I think is part of what makes plagiarism immoral,” said Bailey, who is affiliated with the Centre for Advancing Responsible and Ethical Artificial Intelligence.
However, he said he wants students to learn and be engaged with the material, thinking about it in their own way.
“And things like ChatGPT can be a barrier to that,” he said, because too much reliance on the software means students haven’t done the required work. But although it may seem foolproof to some students, they can also get caught.
While he said it can produce “some high quality pieces of writing” that could potentially be mistaken for the work of a student, it’s also unpredictable.
“So I think this is a bit of a trap for students because sometimes ChatGPT produces outputs that sound sensible but are wrong,” he said.
For instance, it can write convincingly, but not always logically. It also can’t source material accurately all the time, Sheldrick said, and sometimes even makes up sources.
And while it can consider a prompt and cleverly string one word after another until it’s a full piece of text, with a different response every time, Bailey noted it isn’t designed to “say things that are true.”
This is why, they said, it’s important to educate students on the limitations – and benefits – of tools like ChatGPT, as well as to reiterate what constitutes academic misconduct.
Like Sheldrick, Bailey could see incorporating more in-person assessments in his classes; but he’s also interested in redesigning his assessments with ChatGPT in mind, making his questions less generic and requiring responses that include personal experience or Guelph-specific information – things ChatGPT wouldn’t necessarily know about.
In terms of banning the software, neither Bailey nor Mueller thinks such measures are necessary.
“It makes absolutely no sense to ban AI from a campus because AI is here to stay, and it's only going to become more complex. We can't ban the future,” Mueller said.
Likewise, Sheldrick cautioned others against acting too quickly.
“We're going to have to wait to really see how it all falls out,” he said.