Designing Responsible AI Technologies to Curb Disinformation

Our project investigates two key societal challenges. First, how can we design responsible AI technologies to curb the digital spread of disinformation? Second, by grounding responsible AI research in a real societal challenge – disinformation – what broader lessons can we learn about how to better design, build, and evaluate responsible AI technologies in general? For example, while there is a vast body of research on creating explainable AI models, much of it today lacks a real, practical use case and meaningful evaluation. Similarly, because disinformation often targets and disproportionately impacts particular demographic groups, how can we measure and ensure the fairness of AI models that prioritize human fact-checking effort? The problem of disinformation thus offers an invaluable, concrete testbed for grounding broader research on developing and evaluating fair and explainable AI models.

Team Members
Dhiraj Murthy
School of Journalism and Media
Maria De-Arteaga
Information, Risk and Operations Management
Greg Durrett
Computer Science
Jessy Li
Linguistics
Josephine “Jo” Lukito
School of Journalism and Media
News
March 5, 2022
Challenging the Status Quo in Machine Learning
UT researchers Maria De-Arteaga and Min Kyung Lee talk about their different but complementary work to make algorithms less biased and harmful.
January 29, 2021
Machine Learning for Social Good
Last year, Maria De-Arteaga joined the McCombs School of Business faculty as an assistant professor in the Information, Risk and Operations Management Department. During her Ph.D., she became increasingly concerned about the risk of overburdening or underserving historically marginalized populations through the application of machine learning. She's now devoted her career to understanding the risks and opportunities of using ML to support decision-making in high-stakes settings.
Select Publications

Larissa Doroshenko and Josephine Lukito. “Trollfare: Russia’s Disinformation Campaign During Military Conflict in Ukraine.” International Journal of Communication.

Sina Fazelpour and Maria De-Arteaga. “Diversity in Sociotechnical Machine Learning Systems.” Big Data & Society.

Nikhil L. Kolluri and Dhiraj Murthy. “CoVerifi: A COVID-19 News Verification System.” Online Social Networks and Media.

Chenyan Jia, Alexander Boltz, Angie Zhang, Anqing Chen, and Min Kyung Lee. “Understanding Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation.” ACM CSCW 2022.

Li Shi, Nilavra Bhattacharya, Anubrata Das, Matthew Lease, and Jacek Gwizdka. “The Effects of Interactive AI Design on User Behavior: An Eye-tracking Study of Fact-checking COVID-19 Claims.” Proceedings of the 7th ACM SIGIR Conference on Human Information Interaction and Retrieval.
