

Designing AI technologies that benefit society is our grand challenge.
AI-based technologies are helping us solve complex problems in nearly every discipline and industry, but they have the capacity to be harmful to us in ways we might not predict or intend.
Coexisting with AI
Artificial intelligence refers to systems that can correctly interpret data, learn from it, and use what they have learned to adapt in order to achieve specific goals autonomously. It improves our everyday lives, but not without risk.
AI is changing the way we do everything because it’s everywhere — from dating apps to the most advanced military weapons systems. AI does many things faster and better than humans can alone, but there are ethical and societal implications to consider.
How can we ensure that AI is beneficial — not detrimental — to humanity? What unintended consequences are we overlooking by developing technology that can be manipulated and misused?
It is ethically irresponsible to focus only on what AI can do. We believe it is equally important to ask what it should (and should not) do.
Our goal is to better understand what changes new technologies will bring, predict how those changes will unfold, and mitigate the harms or unintended consequences they could cause while still leveraging the benefits AI provides.
To do that, our team brings students and researchers together from more than two dozen schools and units on The University of Texas at Austin campus to investigate how to define, evaluate, and build a “Good System.”
Core research projects
Our team is working to establish a framework for evaluating, developing, implementing, and regulating AI-based technologies so they reflect human values. In 2021, we launched a set of six core research projects, which explore critical topics including surveillance, disinformation, smart cities, racial justice, human-machine partnerships, and the future of work.


AI and the Future of Racial Justice
Explores racial disparities in AI-based systems and seeks to design and implement solutions in the areas of public safety, transportation, and health.

Being Watched: Embedding Ethics in Public Cameras
Investigates the social acceptance of cameras and video data and how to develop technical solutions that will satisfy privacy concerns.

Living and Working with Robots
Aims to overcome the technical and social hurdles to deploying robots by building and studying them in the communities where they will be used.


Designing Responsible AI Technologies to Curb Disinformation
Employs machine learning to understand how disinformation arises and spreads and how to design effective human-centered interventions.

A Good System for Smart Cities
Seeks to build a system that will link city datasets to predict the effects of urban development projects, including Austin’s Project Connect.

Making Smart Tools Work for Everyone
Designs smart hand tools that have embedded AI to empower workers to accomplish more while keeping their jobs secure.
Latest News
Good Systems’ Executive Team represents the College of Liberal Arts, the College of Natural Sciences, the Moody College of Communication, the Cockrell School of Engineering, the LBJ School of Public Affairs, the School of Architecture, and the School of Information. Visit our full team page to see the full list of Good Systems researchers, affiliates, and staff.
Stacey Ingram Kaleh
Network Relationship Manager, Office of the Vice President for Research, Scholarship and Creative Endeavors

Lea Sabatini
Program Director, Office of the Vice President for Research, Scholarship and Creative Endeavors
Explore our interactive network map to see how different researchers, schools, and organizations are connected to Good Systems. Search by name, CSU, or project, or click any node on the map and pause to see its connections appear. You can magnify or expand the view, and you can click on any individual to see which projects they have been affiliated with.