Elections in the Age of AI

Good Systems Researchers Analyze AI's Role in Modern Political Campaigns

November 1, 2024

In September, a story broke about a group of well-known conservative influencers who unwittingly became mouthpieces for a Russian disinformation campaign. The Justice Department has indicted two Russian nationals for orchestrating the scheme, which allegedly funneled almost $10 million through a U.S. media company, Tenet Media, to spread pro-Russian content via popular right-wing influencers. From YouTube videos to tweets, these influencers reached millions of Americans, many of whom were unaware they were consuming content shaped by foreign adversaries and designed to stoke division.

"The influencers didn’t realize what was happening at first," said Josephine ("Jo") Lukito, an assistant professor who studies political misinformation and disinformation at the Moody College of Communication’s School of Journalism and Media. "They were hired through an intermediary and thought it was just another paid promotion. But the funding was directly connected to the Russian government, and the content they were producing aligned with Kremlin interests."

The Tenet Media incident is just the latest example of how sophisticated disinformation campaigns have become. Just last month, another Russian disinformation campaign manufactured and then promoted salacious allegations against Vice President Kamala Harris’s running mate, Minnesota Governor Tim Walz — allegations quickly proved false by The Washington Post.

Artificial intelligence (AI) tools are only amplifying the problem, allowing mis- and disinformation to spread faster and more convincingly than ever before. As part of a Good Systems core project called Designing Responsible AI Technologies to Protect Information Integrity, Lukito and her teammates are focused on building AI tools to combat these threats.

The Power of Connection over Correction

Although Lukito has been studying the online spread of mis- and disinformation for more than a decade — her research on Russian meddling in the 2016 election was referenced in Robert Mueller’s 2018 report — her focus on the nuances of political communication was inspired by a much more personal experience: her students’ disagreements with family members during holiday gatherings. "A lot of students struggled when they went home for the holidays," Lukito said. "They asked me, 'How do I communicate in a way that helps them understand my political viewpoint?'"

This frustration led her to investigate why traditional methods of argumentation often fail in today's polarized climate. Through her research, Lukito discovered that deliberative language, which relies on logical arguments to persuade, often doesn’t resonate in these emotionally charged conversations. Instead, she found that connective language, which engages emotions and fosters dialogue, is more effective. "What research has shown is that emotional connection tends to open people up to conversation," she said. "So, don’t try to convince someone by telling them they’re wrong and backing it up with facts. Instead, ask questions like, 'Where did you hear that?' or use connective language like, 'I can see why you think that.'"

This understanding has informed her work on how misinformation spreads and how AI tools can be used to both amplify and counteract false narratives.

False Narratives in the Age of AI

The rise of generative AI has transformed how false information is created and disseminated. Misinformation — the unintentional spread of false information — and disinformation — the deliberate sharing of lies — can now be generated and amplified at an unprecedented scale. "AI has been used for mis- and disinformation for decades now," Lukito said. "What has changed is the rise of generative AI — specifically the use of artificial intelligence to generate or create text, videos, audio and images. And certainly in 2024, what we have seen is a noticeable increase in the use of generative AI to produce politically oriented content."

This isn’t just happening in fringe corners of the internet. Last year, the Republican National Committee released an AI-generated ad that depicted a dystopian future under President Joe Biden. Though the ad included a small disclaimer about the use of AI, the lack of regulation around such content is concerning, according to Matthew Lease, a professor in UT’s School of Information and lead of the Information Integrity project. "In general, technology is always ahead of the law, and right now there is no law requiring that AI-generated content be disclosed in political ads," he said.

AI has been used for mis- and disinformation for decades now. What has changed is the rise of generative AI.

— Jo Lukito, Moody College of Communication

The speed at which AI-generated content can spread — especially when shared by public figures or influencers — makes real-time monitoring essential, according to Lease. "We’re pretty good when it comes to responding to false information after the fact, but what happens if it’s Election Day and something goes viral before anyone can verify it?" he said. "What do we do for these real-time attacks? I think that’s a concern."

The Tenet Media incident illustrates just how quickly disinformation can take hold. Even after the influencers involved were made aware of the Russian connection, the damage had been done: thousands of Americans had already been exposed to the false narratives. "Once disinformation is out there, it’s hard to reel it back in," Lukito said. "That’s why real-time detection is so important."

Humans in the Loop

Despite the power of AI, both Lukito and Lease emphasize that humans remain a critical part of the solution. To enhance human oversight, Lukito and her colleagues have been exploring information sorting, which employs AI to sift through and organize vast amounts of content, allowing fact-checkers to focus on the most relevant claims. This collaborative approach — "human-in-the-loop," as it’s known in computer science — leverages AI’s strengths to support rather than replace human oversight.
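The article does not spell out how the team’s information-sorting pipeline is built, but the core idea, ranking incoming claims so that human fact-checkers see the most pressing ones first, can be sketched in a few lines. In the minimal sketch below, the scoring function, field names, and weights are hypothetical stand-ins for a trained check-worthiness model, not the Good Systems team’s actual system.

```python
# Hypothetical illustration of "information sorting" for human-in-the-loop fact-checking.
# The scoring function is a crude stand-in for a trained check-worthiness model;
# field names and weights are assumptions, not the research team's actual system.

from dataclasses import dataclass


@dataclass
class Claim:
    text: str       # the claim as it appeared online
    source: str     # where it was seen
    shares: int     # rough measure of how far it has spread
    score: float = 0.0


def check_worthiness(claim: Claim) -> float:
    """Score how urgently a claim deserves human review (0 to 1)."""
    # Content signal: does the claim touch on election mechanics?
    content = 1.0 if any(k in claim.text.lower() for k in ("election", "ballot", "vote")) else 0.2
    # Reach signal: how widely has it already spread?
    reach = min(claim.shares / 10_000, 1.0)
    return 0.6 * content + 0.4 * reach


def triage(claims: list[Claim], top_k: int = 5) -> list[Claim]:
    """Sort the incoming stream so fact-checkers see the most pressing claims first."""
    for c in claims:
        c.score = check_worthiness(c)
    return sorted(claims, key=lambda c: c.score, reverse=True)[:top_k]


if __name__ == "__main__":
    queue = [
        Claim("Polling places in District 4 are closed today", "social post", shares=12_000),
        Claim("Local bakery wins statewide award", "news blurb", shares=40),
    ]
    for claim in triage(queue, top_k=2):
        print(f"{claim.score:.2f}  {claim.text}")
```

The design point is that the model never issues a verdict; it only orders the queue that human fact-checkers work through.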

One Good Systems paper, co-authored by Lease, describes "co-design" workshops the team conducted with professional fact-checkers worldwide. These sessions aimed to identify specific AI advancements needed to accelerate fact-checking. Through this "gap analysis," they pinpointed deficiencies in current natural language processing (NLP) technology, helping guide future NLP research. The approach mirrors Good Systems' broader goal: to advance AI capabilities in response to real-world challenges, then channel these advancements back into society to address those same issues.

In terms of the effect on society and democracy, it's terrible if you have people who are not well informed, and it does remain our individual responsibility to make sure we're getting our information from respected providers.

— Matthew Lease, School of Information 

Another tool supporting this mission is Entendre, an AI-powered bot detection tool developed by the Information Integrity team. By identifying bot-like behavior such as high posting volume and specific metadata patterns, Entendre can detect social bots that often amplify misinformation on fringe and extreme platforms with minimal moderation, like Parler. This tool supports fact-checking and moderation by detecting automated accounts spreading disinformation, helping human moderators focus on the most influential or harmful content.
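Entendre’s internal features and model are not detailed here beyond posting volume and metadata patterns, but a minimal, hypothetical version of that kind of heuristic flagging might look like the sketch below. The thresholds and account fields are illustrative assumptions only.

```python
# Hypothetical sketch of heuristic bot scoring from posting volume and account metadata.
# Entendre's real features and model are not described in the article; the thresholds
# and fields below are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Account:
    handle: str
    posts_last_24h: int       # posting volume
    account_age_days: int     # metadata: account age
    followers: int
    following: int
    has_default_avatar: bool  # metadata: sparse profile


def bot_likelihood(acct: Account) -> float:
    """Combine simple behavioral and metadata signals into a 0-1 score."""
    score = 0.0
    if acct.posts_last_24h > 100:                     # unusually high posting volume
        score += 0.4
    if acct.account_age_days < 30:                    # very new account
        score += 0.2
    if acct.followers < 10 and acct.following > 500:  # lopsided follow ratio
        score += 0.2
    if acct.has_default_avatar:                       # thin profile metadata
        score += 0.2
    return min(score, 1.0)


def flag_for_review(accounts: list[Account], threshold: float = 0.6) -> list[Account]:
    """Return accounts that a human moderator should examine first."""
    return [a for a in accounts if bot_likelihood(a) >= threshold]


if __name__ == "__main__":
    suspects = flag_for_review([
        Account("newsbot4821", 340, 12, 3, 900, True),
        Account("longtime_user", 6, 2400, 250, 180, False),
    ])
    print([a.handle for a in suspects])  # -> ['newsbot4821']
```

As with information sorting, the score here only prioritizes accounts for human review; it does not remove anything automatically.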

AI Literacy and Civic Engagement

While AI holds promise for combating disinformation, its use in political campaigns raises significant ethical concerns. AI can be used to micro-target voters with messages tailored to their personal fears and anxieties, raising questions about data privacy and manipulation.

As the 2024 election approaches, both Lease and Lukito agree that public awareness is critical. "In terms of the effect on society and democracy, on shared governance, it's terrible if you have people who are not well informed, and it does remain our individual responsibility to make sure we're getting our information from respected providers," Lease said. "If you see something online that you might want to share with a friend, take a moment to pause before you send it on."

Beyond media literacy, Lukito advocates for AI literacy so that people can better understand the data behind these tools and identify AI-generated content. Still, she’s careful not to place the onus entirely on individuals — corporations and governments need to step up too, she said, stressing that companies should prioritize ethical AI practices, while policymakers must create safeguards for public interests.

Finally, to engage voters who feel disconnected from politics — those with "low political self-efficacy," in poli-sci parlance — Lukito suggests focusing on issues closer to home, like education and housing. That way citizens can feel their actions have real impact, fostering a stronger sense of civic engagement at the grassroots level. "I think anyone who’s doing this sort of work in this space, we’re very motivated to make sure that not only is our research relevant in an academic journal, but that it has a public relevance to it, that it's helping empower people to vote and make educated decisions when they're online, when they're consuming information," Lukito said.
