As the U.S. continues to face extreme political divisions, several BYU faculty from the political science and computer science departments came together to study how artificial intelligence can help reduce misunderstanding and conflict in American political discourse.
Political discussions on divisive issues can bring out emotional and unproductive dialogue, as was seen in the 2020 presidential debates between Joe Biden and Donald Trump. Comments on the YouTube video of the debate ranged from "love watching Joe Biden speaking coherently" to "Donald will you just be quiet for a minute."
The recent study, titled "Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale," was led by BYU political science professors Lisa P. Argyle, Ethan C. Busby and Joshua R. Gubler. BYU computer science professor David Wingate and Christopher A. Bail, a Duke University sociology and public policy professor, contributed to the research, as did two University of Washington grad students and Taylor Sorensen, a BYU undergrad studying computer science. The researchers explored how AI might help people engage more empathetically with those holding differing political views.
Lisa Argyle, an assistant political science professor at BYU, said the study was several years in the making.
“We have been working together for several years to see how large language models can be used in social science research, and we have a collective hope that we can find ways for these tools to make the world a better place,” Argyle said.
To conduct their study, Argyle and fellow researchers used the large language model GPT-3 to make suggestions that would help participants feel more understood. In one part of the study, the researchers used the divisive issue of gun control as an example of how people could use GPT-3 to make their online political dialogue more civil. Participants received suggestions on how to rephrase their words so other participants felt more understood. For example, rather than someone who supports stricter gun control regulation saying "guns are a stain on democracy," the GPT-3 software would suggest "I understand that you value guns."
Following this exercise, the participants reported an improvement in conversation quality, democratic reciprocity and tone without systematically changing the conversation or moving people’s policy attitudes.
“I often try to keep my expectations about what we will find realistic, so I was pleasantly surprised when everything pretty much turned out as we hoped and expected,” Argyle said regarding the results of the research.
Although their study suggests AI can be used for good in political dialogue, Ethan Busby, BYU political science professor and co-author of the study, said using AI can have some drawbacks.
“When you’re working with these kinds of generative AI tools, there is always the possibility of unforeseen consequences,” Busby said. “We worked quite hard to get the AI tool to not change whatever political stance the person presented, which means that we didn’t correct any misinformation people provided to each other.”
Argyle and Busby’s research is particularly interesting to students like BYU political science student David Acre, who is excited to see how AI can help students engage in thoughtful political discourse.
“I think AI can be used as an instrument in today’s day, especially with all the innovative technology we have,” Acre said.
Although he is a proponent of utilizing AI in political discourse, Acre believes it will ultimately have to be real people who choose to engage in thoughtful discussion.
“I think we can learn from AI and use it as a crutch until we learn to walk, and we shouldn’t depend on it for our real and intimate conversations,” Acre said.
The researchers who conducted the study are hopeful that language models like GPT-3 and GPT-4 will help others engage in more productive political discourse. They believe AI tools can help address major societal problems in ways that keep people in charge. While questions remain about AI and its capacity for harm, Argyle and her fellow researchers believe AI can help manage some of our most hard-to-solve problems and enhance our ability to understand each other.