AI and the Future of Racial Justice


AI-based technologies used by cities and organizations have been shown to exhibit racial bias, yet they can determine who gets access to employment, care, and adequate housing. This project seeks to understand the racial disparities in AI-based systems and to design and implement solutions in the areas of public safety, transportation, and health. Engagement with local government offices, public agencies, industry, and communities is at the center of its effort to tackle the challenge of achieving racially equitable AI.


Team Members

Civil, Architectural and Environmental Engineering
Project Co-Lead


Feb. 21, 2022
How AI Can Help Combat Systemic Racism
S. Craig Watkins looks beyond algorithmic bias to an AI future where models deal more effectively with systemic inequality.
Nov. 1, 2021
Designing AI for Racial Equity: Translating Ethics into Practice
Among the principles for ethical and responsible AI, none is more prominent than "fairness and non-discrimination": the idea that data-driven systems like AI/ML should avoid unjust impacts on people, especially those from historically marginalized populations.
Sept. 28, 2021
Building Equity in AI: Insights from The University of Texas and Microsoft
S. Craig Watkins, professor in the Moody College of Communication, shares guiding strategies and principles for approaching AI and its role in society, focusing on how to use AI technology to solve health, humanitarian, and accessibility problems while keeping transparency, equity, and ethics at the forefront.


Select Publications

Angela J. Haddad, Aupal Mondal, Chandra R. Bhat, Angie Zhang, Madison C. Liao, Lisa J. Macias, Min Kyung Lee, and S. Craig Watkins. “Pedestrian Crash Frequency: Unpacking the Effects of Contributing Factors and Racial Disparities.” Accident Analysis & Prevention 182 (March 1, 2023).