Designing Responsible AI Technologies to Protect Information Integrity


The rise of social media and massive online information sharing have led to a dramatic increase in the spread of both inadvertent misinformation and strategic disinformation (e.g., foreign influence operations seeking to undermine democratic nations). Additional challenges arise in helping decision-makers navigate conflicting information (e.g., information coming from different sources or evolving during a crisis, such as a natural disaster or pandemic). To meet these challenges, our mission is to design, build, and test innovative AI technologies that support journalists, professional fact-checkers, and information analysts. Our use-inspired research to protect information integrity worldwide drives our broader work to develop responsible AI technologies that are both fair (protecting the different stakeholders who may bear disproportionate impacts) and explainable (so that stakeholders can best capitalize on AI's speed and scalability alongside their own knowledge, experience, and human ingenuity).

News


Jan. 29, 2021
Machine Learning for Social Good
Last year, Maria De-Arteaga joined the McCombs School of Business faculty as an assistant professor in the Information, Risk and Operations Management Department. During her Ph.D., she became increasingly concerned about the risk of overburdening or underserving historically marginalized populations through the application of machine learning. She has since devoted her career to understanding the risks and opportunities of using ML to support decision-making in high-stakes settings.
