ChatGPT worries teachers who seek to detect cheating by AI

Teachers and professors across the education system are in a near-panic as they confront a revolution in artificial intelligence that could allow for cheating on a grand scale.

The source is ChatGPT, an artificial intelligence bot released a few weeks ago that allows users to ask questions and, moments later, receive well-written answers that are eerily human.

Almost immediately, educators began experimenting with the tool. While the bot’s answers to academic questions weren’t perfect, they were awfully close to what teachers would expect from many of their students. How long, educators wonder, will it be before students begin using the site to write essays or computer code for them?

Māra Corey, an English teacher at Irondale Senior High School in New Brighton, Minn., said she discussed the matter with her students almost immediately so they could understand how using the tool could impede their learning.

“Some of them were shocked that I knew about it,” she said. She didn’t worry that the conversation might plant bad ideas in their heads. “Hoping that teenagers don’t notice the new flashy thing that will save them time is a fool’s errand.”

Within days of its launch, more than a million people had tried ChatGPT. Some asked innocent questions, such as how to explain to a 6-year-old that Santa Claus isn’t real. Other queries demanded complex responses, such as finishing a piece of tricky software code.

For some students, the temptation is obvious and enormous. One senior at a Midwestern school, who spoke on the condition of anonymity for fear of expulsion, said he had already used the text generator twice to cheat on his schoolwork. He got the idea after seeing people expound on Twitter about how powerful the word generator was after it was released on Nov. 30.

He was staring at an at-home computer-science quiz that asked him to define certain terms. He put them into the ChatGPT box and, almost immediately, the definitions came back. He wrote them by hand onto his quiz paper and submitted the assignment.

Later that day, he used the generator to help him write a piece of code for a homework question for the same class. He was stumped, but ChatGPT wasn’t. It popped out a string of text that worked perfectly, he said. After that, the student said, he was hooked, and plans to use ChatGPT to cheat on exams instead of Chegg, a homework help website he’s used in the past.

He said he’s not worried about getting caught because he doesn’t think the professor can tell his answers are computer-generated. He added that he has no regrets.

“It’s kind of on the professor to make better questions,” he said. “Use it to your own benefit. … Just don’t get through an entire course on this thing.”

The tool was created by OpenAI, an artificial intelligence laboratory launched several years ago with funding from Elon Musk and others. The bot is powered by a “large language model,” AI software that is trained to predict the next word in a sentence by analyzing massive amounts of internet text and finding patterns by trial and error. ChatGPT was also refined by humans to make its answers more conversational, and many have noted its ability to produce paragraphs that are often humorous or even philosophical.
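
For readers who want a concrete sense of what “predicting the next word” looks like, the sketch below uses the small, openly released GPT-2 model through Hugging Face’s transformers library. It is only an illustration: ChatGPT’s own model is not publicly available, so GPT-2 stands in, and the prompt is invented for the example.

```python
# Illustration of next-word prediction, the basic mechanism behind large
# language models. Uses the small public GPT-2 checkpoint, not ChatGPT,
# whose model has not been released.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Teachers worry that students will use AI to"
# The model repeatedly picks a likely next token until it has added
# 20 tokens of continuation text.
result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```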

Still, some of its responses have been blatantly wrong or bigoted, such as when a user got it to write a rap lyric that said: “If you see a woman in a lab coat, she’s probably just there to clean the floor.” Creators acknowledge that ChatGPT isn’t perfect and can give misleading answers.

Educators assume that with time the tool will improve and knowledge of it among students will grow. Some say teachers will adjust their assessments to take the possibility of cheating into account. For example, they’ll require students to write papers by hand or during class, when they can be monitored. Others are contemplating how to write questions that require deeper thinking, which is more challenging for the bot.

The stakes are high. Many teachers agree that learning to write can only take place as students grab ideas and put them into sentences. Students start out not knowing what they want to say, and as they write, they figure it out. “The process of writing transforms our knowledge,” said Joshua Wilson, an associate professor in the School of Education at the University of Delaware. “That will completely get lost if all you’re doing is jumping to the end product.”

Wilson added that while universities are buzzing about this, many secondary teachers remain blissfully unaware.

“The average K-12 teacher — they’re just trying to get their [semester-end] grades in,” he said. “It’s definitely a wave that’s going to hit.”

Department chairs at Sacred Heart University in Connecticut have already discussed how to deal with artificial intelligence, and faculty members know they must find ways to contend with it, said David K. Thomson, an associate professor of history at the school.

Thomson said he realized by experimenting with the site that it does pretty well with the sort of questions that appear on many take-home tests, such as one asking the student to compare the development of the northern and southern American colonies before the Revolution in economic and other terms. “It wasn’t perfect,” he said. “Nor are college students perfect.”

But when he asked a more sophisticated question, such as how Frederick Douglass made his argument against the institution of slavery, the response was far less cogent. Professors, he said, will have to design assessments that judge analytical reasoning, not just facts that can be looked up.

At the same time, others see possible upsides. The technology is an opportunity for teachers to think more deeply about the assignments they give — and to talk to students about why it’s important to create their own work — said Joshua Eyler, an assistant professor at the University of Mississippi who directs the Center for Excellence in Teaching & Learning and who pointed derisively to a “moral panic.”

“This is a kind of calculator moment for the teaching of writing,” Eyler said. “Just as calculators changed the way we teach math, this is a similar moment for teaching writing.”

“Predictably, what we’ve seen is a kind of moral panic. There’s a great fear that students are going to use these tools to cheat.”

Michael Feldstein, an educational consultant and publisher of the blog e-Literate, said that along with panic, there’s curiosity among educators. He said some professors in trade-oriented fields see AI-generated writing as a potentially useful tool. A marketing student might use it to write marketing copy in school, he said, and also in a future job. If it works, he asked, what’s wrong with that?

“They don’t care if students will be the next Hemingway. If the goal is communication, it’s just another tool,” Feldstein said. The most important thing, he said, is for the tool to be used as part of learning, not in place of learning.

As educators consider how to live with the technology, some companies are thinking about ways to defeat it.

Turnitin, a company that has created widely used software to detect plagiarism, is now looking at how it might detect AI-generated material.

The automated essays differ from student-written work in many ways, company officials say. Students write with their own voice, which is absent from ChatGPT content. AI-written essays sound like the average person, but any given student is not spot-on average, so the essays won’t sound like them, said Eric Wang, vice president for AI at Turnitin.

“They tend to be probabilistically vanilla,” he said.

But detecting cheaters who use the technology will be difficult.

Sasha Luccioni, a research scientist at the open-source AI start-up Hugging Face, said OpenAI should allow the public to browse ChatGPT’s code, because only then can scientists build truly robust tools to catch cheaters.

“You’re working with a black box,” she said. “Unless you really have [access to] these layers and how they’re connected, it’s really hard to create a meaningful [cheating detection] tool.”

Hugging Face hosts a detection tool for a previous chatbot model, GPT-2, which Luccioni said could potentially help teachers detect ChatGPT text, though it would probably be less accurate for newer models.
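
As a rough sketch of how such a tool might be tried in practice, the snippet below queries that GPT-2 output detector (the “roberta-base-openai-detector” checkpoint hosted on the Hugging Face hub) with a sample passage. It is an illustration under those assumptions, not an endorsed workflow, and, as noted above, the detector was built for GPT-2 rather than ChatGPT.

```python
# Sketch: run the hosted GPT-2 output detector on a suspect passage.
# It was built for GPT-2, so results on ChatGPT-era text are likely
# to be less reliable.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

essay = "The northern and southern colonies developed along different economic lines..."
# Returns a label (e.g. "Real" or "Fake") with a confidence score.
print(detector(essay))
```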

Scott Aaronson, a guest researcher at OpenAI, said the company is exploring different ways to battle misuse, including the use of watermarks and models that differentiate between bot-generated and human-written text. Some have questioned whether the watermark approach is enough.

“We’re still running experiments to determine the best approach or combination of approaches,” Aaronson said in an email.

ChatGPT had its own ideas about the solution. Asked how to confront the possibility of cheating, the bot offered several suggestions: educate students about the consequences of cheating, proctor exams, make questions more sophisticated, and give students the support they need so they don’t feel compelled to cheat.

“Ultimately, it is important to communicate clearly with students about your expectations for academic integrity and to take steps to prevent cheating,” the bot explained. “This can help create a culture of honesty and integrity in your classroom.”

Corrections

An earlier version of this article incorrectly said that Hugging Face created a detection tool for a chatbot model called GPT-2. It hosts the tool. The article has been corrected.