Designing Responsible AI Technologies to Curb Disinformation

 

The rise of social media and the growing scale of online information have fueled a surge of both intentional disinformation and incidental misinformation. Telling fact from fiction is increasingly difficult, and the challenge is more complex than simply separating “fake news” from the truth. This project uses qualitative methods and machine learning models to understand how digital disinformation arises and spreads, how it affects different groups in society, and how to design effective human-centered interventions.

 

Team Members


Dhiraj Murthy
School of Journalism and Media
Project Co-Lead
Maria De-Arteaga
Information, Risk and Operations Management
Greg Durrett
Computer Science
Jessy Li
Linguistics
Josephine “Jo” Lukito
School of Journalism and Media

News


March 2, 2023
Understanding the Ethical Future of AI
Dr. Matt Lease, professor in the School of Information at the University of Texas at Austin, provides a better understanding of AI and ChatGPT.
Nov. 11, 2022
Op-Ed: Social Media Platforms’ Struggles with Misinformation and Racism: Challenges and Paths Forward
From “fake news” screenshots to conspiratorial claims, the lead-up to the 2022 midterm elections has shown that misinformation remains a problem in public discourse. It is especially harmful to minority groups and underrepresented populations, who tend to be the targets of misinformation-motivated vitriol.
Nov. 9, 2022
Disinformation Day 2022 Considers Pressing Need for Cross-sector Collaboration and New Tools for Fact Checkers
Good Systems researchers and partners explore issues of bias, fairness, and justice and examine challenges and opportunities for fact checkers at the inaugural Disinformation Day conference.
March 5, 2022
Challenging the Status Quo in Machine Learning
UT researchers Maria De-Arteaga and Min Kyung Lee talk about their different but complementary work to make algorithms less biased and harmful.
Jan. 29, 2021
Machine Learning for Social Good
Last year, Maria De-Arteaga joined the McCombs School of Business faculty as an assistant professor in the Information, Risk and Operations Management Department. During her Ph.D., she became increasingly concerned about the risk that applications of machine learning could overburden or underserve historically marginalized populations. She has since devoted her career to understanding the risks and opportunities of using ML to support decision-making in high-stakes settings.

Select Publications


Larissa Doroshenko and Josephine Lukito. “Trollfare: Russia’s Disinformation Campaign During Military Conflict in Ukraine.” International Journal of Communication, 2021.

Sina Fazelpour and Maria De-Arteaga. “Diversity in Sociotechnical Machine Learning Systems.” Big Data & Society, 2022.

Nikhil L. Kolluri and Dhiraj Murthy. “CoVerifi: A COVID-19 News Verification System.” Online Social Networks and Media, 2021.

Chenyan Jia, Alexander Boltz, Angie Zhang, Anqing Chen, and Min Kyung Lee. “Understanding Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation.” ACM CSCW 2022.

Li Shi, Nilavra Bhattacharya, Anubrata Das, Matthew Lease, and Jacek Gwizdka. “The Effects of Interactive AI Design on User Behavior: An Eye-tracking Study of Fact-checking COVID-19 Claims.” Proceedings of the 7th ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR 2022).

 

View all publications