Designing Responsible AI Technologies to Curb Disinformation

 

The rise of social media and the growing scale of online information have led to a surge of intentional disinformation and incidental misinformation. It is increasingly difficult to tell fact from fiction, and the challenge is more complex than simply separating “fake news” from verified facts. This project uses qualitative methods and machine learning models to understand how digital disinformation arises and spreads, how it affects different groups in society, and how to design effective human-centered interventions.

 

Team Members


News


April 17, 2023
“They’re Coming to Take over Our Country”: Researching Global Circuits of Racist Misinformation
“The others are coming. They are coming to get us, take over our country, colonize us, and replace us. They’re an existential threat.” This type of racist logic is a recurring trope around the world. Some governments incorporate such messaging into disinformation campaigns, which produce ripple effects of unintentional misinformation on social media.
March 2, 2023
Understanding the Ethical Future of AI
Dr. Matt Lease, professor in the School of Information at the University of Texas at Austin, provides a better understanding of AI and ChatGPT.
November 11, 2022
Op-Ed: Social Media Platforms’ Struggles with Misinformation and Racism: Challenges and Paths Forward
From “fake news” screenshots to conspiratorial claims, the lead-up to the 2022 midterm elections has shown that misinformation remains a problem in public discourse. This is especially harmful for minority groups and underrepresented populations, as they tend to be the targets of misinformation-motivated vitriol.

Videos


Documents