
BYU professors, students debate use of AI in classrooms

The controversy surrounding ethical uses of AI has increased, and students and teachers have various perspectives. (Pexels/Tara Winstead)

Editor's Note: This article was written by Dallin Burningham, a guest contributor to The Daily Universe.

Since November 2022, AI has disrupted an untold number of industries, markets and jobs across the globe. Higher education has been no exception, and neither have Brigham Young University's students and professors. Students almost immediately realized the potential AI had to help them with their schoolwork.

“It was scary,” Kaiona Worthington, a junior studying philosophy, said. “I thought I was cheating.”

“It seemed like Google on crack to me,” said Sonia Chini, a BYU sophomore studying wildlife and wildlands conservation.

Senior Calvin Boyce said of his reaction to using ChatGPT for the first time: “I was shocked. It was way better than I thought it was going to be. I was expecting Siri-like responses, and AI is a whole lot more impressive than that.”

To many students, AI’s application to their schoolwork was clear from the get-go. Others took a bit longer to incorporate it into their field of study. Matthew Schwartz, a sociology major, said he was one of the latter.

“I am kind of a grandpa when it comes to (using AI). But ever since Fall Semester my professors have been encouraging me to use it more. So that’s helped me,” Schwartz said.

AI use in schools has picked up significantly since OpenAI shocked the world in 2022.

Many students find this exciting because of the time AI saves them. However, some of those concerned with academic honesty are quick to point out the red flags this trend raises and question whether it indicates an increased rate of learning or a spike in cheating.

Many institutions of higher education are dedicated to academic honesty, and BYU is no exception.

“The first injunction of the Honor Code is the call to ‘be honest,’” the academic honesty policy for BYU reads.

“BYU students should seek to be totally honest in their dealings with others. They should complete their own work and be evaluated based upon that work. They should avoid academic dishonesty and misconduct in all its forms, including but not limited to plagiarism, fabrication or falsification, cheating, and other academic misconduct.”

In addition to the academic honesty policy in its Honor Code, BYU was quick to release generative AI guidelines following the advent of ChatGPT. Students can view those guidelines at genai.byu.edu. Under the Academic Integrity section, the site reads:

"BYU does not currently have any specific school-wide policies to define permissible AI use for assignments, tests, or other schoolwork. For general principles, please refer to the Academic Honesty Policy, Data Use, Privacy, and Security Policy and CES Honor Code.”

Even with guidance from the university, students and professors alike still feel like it’s the Wild West when it comes to AI use in school.

Professors have taken the latitude the university has granted them and come to vastly different conclusions about when AI use is permissible and when it is not.

“I’ve had some professors that didn’t really care. It’s like, use whatever you need to study, but obviously don’t use it on a test. But I do have one professor, he said you can use ChatGPT as long as you tell me that you used it, and then I'll take 30% off (your grade). So I feel I’ve gotten mixed feelings from professors for sure,” Chini said.

Professor Aaron Miller of the Marriott School of Business said of his AI policy: “They can use it on assignments where I say they can, and I ask them not to in other assignments.”

Professor Jamie Jensen of the College of Life Sciences said, “What I usually tell students is that you can use AI as a learning tool, but you should not submit AI as your work.” She acknowledged that students struggle to tell the two apart.

“Students are welcome to use AI, and probably should, but the final product that they turn in to me needs to be their own work, and they should indicate to me the way in which they used it,” said Eric Dursteler, a professor of humanities.

Other professors take a more laissez-faire approach, choosing to hardly address AI at all in their classroom policies.

“Yeah, I kind of have (an AI policy) in my graduate class … but I don’t even address it in my undergraduate classes because it’s not relevant,” said Brian Harker, a professor of music history.

Professors vary not only in their actual AI policies, but also in their overall feelings toward the new technology. Emotions range from excited to scared, and many professors are a bit of both.

"I tell my students to use it as a great starting point. It's a powerful tool that can be used in an imaginative sort of process of creating something yourself,” Professor Dursteler said.

Miller said that large language models, or LLMs, which specialize in reading and writing, are a “great way for students to evaluate their thinking.”

But Miller also said, “If you hand off all the process of thinking to AI, whatever you submit doesn’t reflect your actual capacity, for sure. And so that’s the biggest tension. I do think there’s an underlying integrity issue, and that that’s always been true for cheating generally. It’s important that we graduate students with integrity.”

Jensen added, “I do feel like AI is robbing (students) of developing the skills. Number one, how do you find information, and number two, how do you decide if it’s legit … The problem is when they come to a new problem they don’t know where the information applies.”

Most students and professors still see AI use as an academic gray area, which makes potential misuse hard to identify and police. That uncertainty has also kept students and professors from effectively harnessing the technology’s full potential.

Some professors don’t feel that AI misuse is a serious problem or that universities should restrict its use.

“I don’t think it’s a huge problem, to be very honest,” Dursteler said.

Still, others worry about how to control AI use to prevent cheating without hindering too much of its potential. Many teachers have even had to change the types of assignments they give and the way those assignments are administered in order to bring grades back to normal.

“I was having them do the exams online. But I heard it through the grapevine from students to TAs that they were all using ChatGPT,” Jensen said. “I also noticed that the averages were way higher than they’ve ever been. So I have completely abandoned online. I thought I was doing students a favor by having it online. Now they’re going to take all their tests in the testing center.”

Harker added, “I used to have them write research papers, and as soon as AI became relevant, I stopped doing that and I have them write short essays in the testing center.”

Many have pointed out that cheating also became much easier when the internet arrived, yet education has not changed much in many regards. Now, though, students can not only look up answers to multiple-choice or fill-in-the-blank questions but also generate original-sounding responses with a well-worded AI prompt.

Miller expressed his concerns about the rise of students cheating with AI: “The really interesting challenge of cheating is no matter how high the integrity of the student is, the probability of their cheating goes up as cheating gets easier and the stakes go up. And that’s basically the equation. As pressure goes up, as opportunity goes up, rationalization becomes really easy and students will cheat. And AI has taken the ease of cheating to a new level. Like it’s made the opportunity part of cheating, like, so ridiculously high.”

Some professors have also wished for more guidance from the university on the subject, and for tools to deal with the problems AI poses.

Jensen said she has struggled to find good tools for detecting AI.

“(With) another tool that I was trying at home, I purposely generated stuff in ChatGPT and put it in, and then I purposely wrote stuff, and it was not good at all. It was telling me my own writing was ChatGPT,” Jensen said.

She also said she feels the university could give professors better tools for AI detection.

“I have zero resources to test for AI and to know that it's doing a good job. Because I don't want to accuse a student of AI if they didn't use AI,” he said.

Some professors even worry that ChatGPT could have broad impacts on how society values education, and that it may lead to an even greater decrease in the value of college degrees in the workplace if students can’t prove to employers that they have the skills to handle jobs themselves.

“There are a lot of ways to learn, and they don't have to be at a university. The core feature of a university education is credentialing. We’re promising the world that our graduates can do certain things. As long as we’re making that promise, we have to uphold it,” Professor Miller said. “We're always going to have to trade off that credentialing problem and weigh like, okay, ‘how is this impacting our credentialing of our students?’”

So while the advent of AI has brought major efficiencies, the jury is still out on how it will continue to change higher education, for better or for worse.