Coexisting with AI
Artificial intelligence (AI) refers to systems that can correctly interpret data, learn from it, and then use what they have learned to adapt and achieve specific goals autonomously. AI improves our everyday lives, but not without risk.
AI is changing the way we do everything because it’s everywhere — from dating apps to the most advanced military weapons systems. AI does many things faster and better than humans can alone, but there are ethical and societal implications to consider.
How can we ensure that AI is beneficial — not detrimental — to humanity? What unintended consequences are we overlooking by developing technology that can be manipulated and misused?
Our goal is to create a new way of designing values-based AI technologies that both protect and improve our world.
To do that, our team brings together students and researchers from more than two dozen schools and units on The University of Texas at Austin campus to investigate how to define, evaluate, and build “Good Systems” from the beginning, instead of relying on policy to make them safer once they’re in use.
It is ethically irresponsible to focus only on what AI can do. We believe it is equally important to ask what it should (and should not) do.
All advances in society require balancing what we value. We weigh the value of national security against the values of privacy and liberty, and sometimes we must choose safety over profit margins. Recent advances in AI technology, though, have brought us to a crossroads where we will soon be making new and difficult choices about what we prioritize as a society. This is important because while AI by itself isn’t good or bad, it’s also not neutral: it reflects the biases of the people who design, train, and use it.
Our goal is to better understand what changes these new technologies will bring, predict how those changes will unfold, and mitigate the harms or unintended consequences they could cause while still leveraging the benefits AI provides. Over the next eight years, our team will establish a framework for evaluating, developing, implementing, and regulating AI-based technologies so they reflect human values at their core.
DEFINE Good Systems
Human values are different across individuals and groups of people, and they change over time. What does it mean for a system that uses AI technology to be “good” (or not)?
EVALUATE Good Systems
Once a system that includes AI technology is in use, how do we determine whether it is good (or not)? We will develop a framework for making that assessment.
BUILD Good Systems
How can we ensure that the systems we create will be good? We must prepare our future workforce to design productive human-AI partnerships.
IMPLEMENT Good Systems
We will develop a set of best practices that are transparent and adaptable for designing and using AI and for predicting and mitigating its risks.
Year One Funded Projects
Good systems are human-AI partnerships that address the needs and values of society. Developing them is our mission.
Building and Testing Machine Learning Methods for Metadata Generation in Audiovisual Collections
Audiovisual (AV) materials play a fundamental role as historical and scientific records. AV materials provide evidence of every activity on Earth, from endangered languages to rare bird calls to the …
Design of Fair AI Systems via Human-Centric Detection and Mitigation of Biases
AI systems may not only reproduce data bias but also amplify it. Unfortunately, even defining data bias is difficult, let alone detecting and mitigating it. For example, consider bias by …
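To make concrete why detection depends on which definition of bias one adopts, here is a minimal sketch that assumes demographic parity (equal positive-outcome rates across groups) as the working definition; the toy loan-approval records, column names, and helper function are illustrative inventions for this example, not this project’s actual approach.

```python
# A minimal sketch of detecting one kind of data bias, assuming demographic
# parity as the definition. The toy records and names are illustrative only.
from collections import defaultdict

def demographic_parity_gap(records, group_key, outcome_key):
    """Largest difference in positive-outcome rates between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[outcome_key]
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions, with 'group' as a protected attribute.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap, rates = demographic_parity_gap(data, "group", "approved")
print(rates)               # per-group approval rates: A ~0.67, B ~0.33
print(f"gap = {gap:.2f}")  # a large gap flags potential bias -- under this definition
```

Under a different definition (say, equalized odds), the same data could score very differently, which is exactly the difficulty this project highlights.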
Designing Human-AI Partnerships for Information Search and Evaluation
Grounding the pursuit of responsible AI in the context of a real societal challenge — misinformation — forces researchers to translate abstract questions into real, practical problems to solve. We …
Probabilistically Safe and Correct Imitation Learning
Most AI research is concerned with best-case (single demo) or average-case performance. However, in many safety-critical tasks such as robotics, average-case performance is often not good …
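As one way to picture the gap between average-case evaluation and a probabilistic safety guarantee, here is a minimal sketch that reports a high-confidence lower bound on a policy’s mean return via bootstrapping; the sample returns, the 95% level, and the bound itself are illustrative assumptions, not this project’s actual method.

```python
# A minimal sketch contrasting average-case evaluation with a high-confidence
# guarantee: instead of trusting a policy's mean return, bootstrap a lower
# confidence bound on it. The returns and 95% level are illustrative only.
import random

def bootstrap_lower_bound(returns, confidence=0.95, n_resamples=10_000, seed=0):
    """Bootstrap a lower confidence bound on the mean of `returns`."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(returns, k=len(returns))) / len(returns)
        for _ in range(n_resamples)
    )
    # Take the alpha-quantile of the bootstrap means as the lower bound.
    return means[int((1.0 - confidence) * n_resamples)]

# Hypothetical per-episode returns from rolling out an imitation policy:
# mostly good, with one rare catastrophic failure.
returns = [9.5, 10.1, 9.8, 10.0, 9.7, 10.2, 9.9, -50.0]

avg = sum(returns) / len(returns)
lcb = bootstrap_lower_bound(returns)
print(f"mean return: {avg:.2f}")      # the average-case summary
print(f"95% lower bound: {lcb:.2f}")  # the high-confidence guarantee
```

A policy whose average return looks tolerable can still carry unacceptable tail risk, which is what a high-confidence lower bound is meant to expose in safety-critical settings.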
Bad AI and Beyond: Exploring How Popular Media Shape the Perceived Opportunities and Threats of AI
We will report on how media representations shape public perceptions of AI and then use what we learn to explore how we might better represent everyday interactions with AI to …
How African-American and Latinx Youth Evaluate Their Experiences with Digital Assistants
Although AI-driven devices and toys play an increasingly important role in children’s lives, little research has considered how social and economic disparities and cultural differences shape children’s engagement with AI. …
Disinformation in Context: AI, Platforms, and Policy
Artificial intelligence and machine-driven content creation and circulation are significant components of contemporary disinformation efforts. We will describe and evaluate aspects of social media systems involved in disinformation campaigns. We …
Ethical Data Design for Good Systems
Data and the systems that manage it are not neutral but, instead, are part of the process that affects how AI-based technologies work. Not all data and computer scientists, however, …
Privacy Preferences and Values for Computer Vision Applications
Technology is transforming people’s lives, but it’s a constant struggle to ensure that technology designs address people’s values and preferences, especially those of traditionally underserved groups. Computer vision, as an …