Designing Responsible AI Technologies to Protect Information Integrity

The rise of social media and the massive scale of online information sharing have led to a dramatic increase in the spread of both inadvertent misinformation and strategic disinformation (e.g., foreign influence operations seeking to undermine democratic nations). Additional challenges arise in helping decision-makers navigate conflicting information (e.g., information coming from different sources or evolving during a crisis, such as a natural disaster or pandemic). To meet this challenge, our mission is to design, build, and test innovative AI technologies to support journalists, professional fact-checkers, and information analysts. Our use-inspired research to protect information integrity worldwide drives our broader work to develop responsible AI technologies that are both fair (protecting stakeholders who may bear disproportionate impacts) and explainable (so that stakeholders can best capitalize on AI's speed and scalability alongside their own knowledge, experience, and human ingenuity).

News


Nov. 11, 2022
Op-Ed: Social Media Platforms’ Struggles with Misinformation and Racism: Challenges and Paths Forward
From “fake news” screenshots to conspiratorial claims, the lead-up to the 2022 midterm elections showed that misinformation remains a problem in public discourse. It is especially harmful to minority groups and underrepresented populations, who are often the targets of misinformation-motivated vitriol.
Nov. 9, 2022
Disinformation Day 2022 Considers Pressing Need for Cross-sector Collaboration and New Tools for Fact Checkers
Good Systems Researchers and Partners Explore Issues of Bias, Fairness, and Justice and Examine Challenges and Opportunities for Fact Checkers at the Inaugural Disinformation Day Conference.
March 5, 2022
Challenging the Status Quo in Machine Learning
UT researchers Maria De-Arteaga and Min Kyung Lee talk about their different but complementary work to make algorithms less biased and harmful.
