The rise of social media and the growing scale of online information have led to a surge of intentional disinformation and incidental misinformation. It is increasingly difficult to tell fact from fiction, and the challenge goes beyond simply distinguishing “fake news” from fact. This project uses qualitative methods and machine learning models to understand how digital disinformation arises and spreads, how it affects different groups in society, and how to design effective human-centered interventions.
Team Members
School of Information (Project Lead)
Dhiraj Murthy, School of Journalism and Media (Project Co-Lead)
Maria De-Arteaga, Information, Risk and Operations Management
Greg Durrett, Computer Science
School of Information
Jessy Li, Linguistics
Josephine “Jo” Lukito, School of Journalism and Media
Select Publications
Larissa Doroshenko and Josephine Lukito. “Trollfare: Russia’s Disinformation Campaign During Military Conflict in Ukraine.” International Journal of Communication.
Sina Fazelpour and Maria De-Arteaga. “Diversity in Sociotechnical Machine Learning Systems.” Big Data & Society.
Nikhil L. Kolluri and Dhiraj Murthy. “CoVerifi: A COVID-19 News Verification System.” Online Social Networks and Media.
Chenyan Jia, Alexander Boltz, Angie Zhang, Anqing Chen, and Min Kyung Lee. “Understanding Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation.” ACM CSCW 2022.
Li Shi, Nilavra Bhattacharya, Anubrata Das, Matthew Lease, and Jacek Gwizdka. “The Effects of Interactive AI Design on User Behavior: An Eye-tracking Study of Fact-checking COVID-19 Claims.” Proceedings of the 7th ACM SIGIR Conference on Human Information Interaction and Retrieval.