AI and the Future of Racial Justice

Many cities and organizations are adopting technologies like artificial intelligence (AI) to guide how departments organize data, make decisions, and deliver services to citizens. The widespread adoption of these systems is propelled by the desire to automate decision-making, manage large-scale databases, and reduce the likelihood of human bias in decisions related to the allocation of resources and services. However, even as these technologies augment human labor and lead to greater workplace efficiency, a critical question emerges: to what extent do these systems encode bias and replicate inequities?

Because AI systems rank, profile, and sort, they have the power to shape allocative practices (Eubanks, 2018). In the context of local governments and organizations, this means, for example, that AI systems can determine who gets access to employment, care, or adequate housing. Without the proper mechanisms to evaluate and test these systems through an equity lens, they may replicate systemic inequalities.  

This interdisciplinary research collaboration will identify, propose, and develop ways to systematically address the propensity for bias and disparate impacts that characterizes AI systems. We will do this in the following ways:

  • In consultation with the City of Austin’s Equity Office, our team will develop and implement a pilot research collaboration with at least one department to identify disparate impacts in the adoption and deployment of technology and propose solutions to address them.

  • Through measuring and understanding racial disparities in treatment and service provision related to the use of AI in key areas of life, such as public safety, transportation, and health. We will also produce a set of recommendations and solutions that mitigate bias and racially disparate impacts in the application of data-based systems.

  • Through the building of “good systems” and “action plans” that are informed by interdisciplinary approaches and designed to expand access to critical life-affirming services. 

Finally, our research activities center racial justice and equity as the main locus of study and problem-solving interventions. The project is also designed to realize a key goal of the Good Systems Grand Challenge: pursuing scholarly inquiry that engages community stakeholders and strives for real-world impact.

Team Members
Civil, Architectural and Environmental Engineering
Feb. 21, 2022
S. Craig Watkins
How AI Can Help Combat Systemic Racism
S. Craig Watkins looks beyond algorithm bias to an AI future where models more effectively deal with systemic inequality.
Nov. 1, 2021
Designing AI for Racial Equity: Translating Ethics into Practice
Among the principles for ethical and responsible AI, none is more prominent than the “fairness and non-discrimination” principle: the idea that data-based systems like AI/ML should avoid unjust impacts on people, especially those from historically marginalized populations.
Sept. 28, 2021
Building Equity in AI: Insights from The University of Texas and Microsoft
S. Craig Watkins, professor in the Moody College of Communication, shares guiding strategies and principles for approaching AI and its role in society, specifically how to use AI technology to solve health, humanitarian, and accessibility issues while keeping transparency, equity, and ethics at the forefront.