Ethics, Values, and A.I.

“Technology is neither good nor bad; nor is it neutral.” This is the first law of technology, set out by historian Melvin Kranzberg in 1985. It means that whether a technology is good or bad depends on how we perceive it through our own value systems. At the same time, because the people who design technology value some things more than others, their values inevitably shape their designs.

We use that technology — and, increasingly, artificial intelligence — to entertain ourselves, communicate, get places faster, make predictions, swipe left or right, protect our homes, and solve complex problems quickly and easily. In short, A.I. is changing the way we do everything because it’s everywhere — from dating apps to the most advanced military technology.

But because technology is never neutral, it has the capacity to harm us in ways we might not intend or predict. The difficulty for us, as scientists and engineers, is that A.I. is genuinely helpful.

It can do many things faster, better, and more easily than humans can, and humans reap the rewards. But how will A.I. affect society, work, and the way we interact with one another? We need to answer these questions proactively rather than waiting for bad things to happen and reacting after it’s too late.

In the words of Michael Crichton’s “Jurassic Park” mathematician, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think about if they should.”


Can We Ensure That A.I. Protects Humanity, Not Destroys It?

That’s the question we have to ask now: Should we? How can we ensure that advances in A.I. are beneficial to humanity, not detrimental? How can we develop technology that makes life better for all of us, not just some? What unintended consequences are we overlooking or ignoring by developing technology that has the power to be manipulated and misused, from undermining elections to exacerbating racial inequality?

Our goal is to provide a way for prosocial values to drive the design of artificial intelligence in autonomous and semi-autonomous technologies so that those systems both protect and improve society.


YEAR ZERO

This marks our development year as a future UT grand challenge. Our focus during “year zero” of this eight-year research project is to develop what we’re calling the Good Systems Values Networks Method and to grow our network of colleagues, partners, and supporters.

Our Values Networks Method combines two important approaches to technology development:

  • Value-Sensitive Design (VSD) puts procedures in place early in a product’s design process to account for varied — or even conflicting — social values among technology’s end-users.
  • Socio-Technical Interaction Networks (STINs) seek to understand the complex interactions and relationships among people, information, and technology.

Our proposed Values Networks Method connects VSD (on the microscale) and STINs (on the macroscale) to forge a novel research approach.


Meet the Team

Research groups around the world are asking similar questions about A.I., but their backgrounds are traditionally in computer science. Our grand challenge team is composed of computer scientists as well as natural and social scientists, technologists, ethicists, engineers, health and transportation experts, and more.

  • Samuel Baker, Associate Professor, English
  • Chandra Bhat, Professor, Civil, Architectural and Environmental Engineering
  • Tanya Clement, Associate Professor, English
  • Kenneth R. Fleischmann, Associate Professor, School of Information
  • Junfeng Jiao, Assistant Professor, School of Architecture
  • Matthew Lease, Associate Professor, School of Information
  • Peter Stone, Professor, Computer Science
  • Sharon Strover, Professor, Radio, Television and Film
  • William Tierney, Professor and Department Chair, Population Health
  • Amelia Acker, Assistant Professor, School of Information
  • Alan Bovik, Professor, Electrical and Computer Engineering
  • Casey Boyle, Assistant Professor, Rhetoric and Writing
  • Aaron Choate, Director, Digital Strategies, University of Texas Libraries
  • Maria Esteva, Research Associate, Texas Advanced Computing Center
  • Joydeep Ghosh, Professor, Electrical and Computer Engineering
  • Sherri Greenberg, Clinical Professor, LBJ School of Public Affairs
  • Natalia Ruiz Juri, Research Associate, Center for Transportation Research
  • Katheryn Pierce Meyer, Head of Architectural Collections, University of Texas Libraries
  • Alison Norman, Associate Professor of Instruction, Computer Science
  • Bruce Porter, Professor, Computer Science
  • Chris Rossbach, Assistant Professor, Computer Science
  • Suzanne Scott, Assistant Professor, Radio, Television and Film
  • Paul Toprac, Associate Professor of Instruction, Computer Science
  • Maytal Saar-Tsechansky, Associate Professor, Information, Risk, and Operations Management
  • J. Craig Wheeler, Professor, Astronomy
  • Paul Woodruff, Professor, Philosophy
  • Weijia Xu, Research Scientist, Texas Advanced Computing Center
  • Yan Zhang, Associate Professor, School of Information

News & Events

Injustices of Digital Disruption: More Tepid Policy or a Radical Democratic Turn?
April 4, 2019, 3:30 p.m. · Belo Center for New Media, Room 5.208

Led by Professor Robin Mansell from the London School of Economics and Political Science. We will examine a fundamentally important question for the future of society: Is there ever likely to be an effective challenge to the pursuit of wealth through inequitable mass individualization?

The ‘platformization of everything’ — by Google, Baidu, Facebook, Amazon and a few others — is implicated in the spread of misinformation and in the deepening of many kinds of inequalities. This lecture explores reasons for the persistence of cautious and relatively weak policy responses to platform power and whether a turn to radical democratic theory and practice might help to promote policy responses that work as a counterpoint to platform dominance. This event is part of the Digital Media Speaker Series and is sponsored by the Technology and Information Policy Institute (TIPI) and Good Systems.

Human-Machine Networks: Working Together or Working Apart?
March 7, 2019, 3:30 p.m. · CMA 5.136, Lady Bird Johnson Room

Led by Dean Eric Meyer from the School of Information. The story of society is inextricably bound up with the rise of tools and machines. In the digital age, the machines we have created have become immensely powerful, yet they remain limited in many ways. This talk uses examples from research over the last decade — including citizen science, digital scholarship, crisis response, and knowledge creation on the internet — to explore how humans and machines work jointly and independently in complex socio-technical assemblages. This event is part of the Digital Media Speaker Series at the Moody College of Communication.

The Future of Ubiquitous Spoken Content with Doug Oard, University of Maryland
April 12, 2019, 9:00 a.m. · POB Vislab

Ubiquitous recorders are capturing our daily sounds on the street, at our places of work and leisure, and in our homes. The study of the cultural, political, ethical, and technological impact of automating sound and incorporating it into different systems is in its infancy, and scholars and technologists often lack a clear understanding of what kinds of data can be collected, what techniques and algorithms can be applied, and what interpretations can be drawn from audio data in systems that use A.I. technologies.

This event is part of our Acoustic Surveillance and Big Data Series. In this talk, Doug Oard will review the technologies that have brought us these challenges and opportunities, and he’ll identify some remaining technical challenges that currently limit their reach and application. He will then focus on the interplay between technology and policy that will shape the ways in which we might seek to achieve a balance between the risks and benefits that this cornucopia of new information could offer.

Introduction to Sound Analysis with Brian McFee, NYU
April 5, 2019, 9:00 a.m. · PCL Learning Lab 1


This event is part of our Acoustic Surveillance and Big Data Series and will ask participants to identify their research objectives with sound and use computational analytics to analyze sound files. Questions to consider include: What is the nature of the quantification of sound? What does it mean to break sound down into feature sets for big data analysis?

Alexas, Wiretaps, and Gunshots: Some Notes on Acoustic Surveillance with Leonardo Cardoso, Texas A&M University
March 29, 2019, 9:00 a.m. · POB 6.304


This event is part of our Acoustic Surveillance and Big Data Series and will ask participants what social and personal values are at play in given scenarios involving sound and voice recording, and what concerns they have about potential ethical or policy issues.

Fireside Chat and Q/A with Microsoft’s Julie Brill
March 11, 2019, 11:00 a.m. · EER Mulva Auditorium

Regulation and Responsibility: Join Microsoft’s Julie Brill, corporate VP and deputy general counsel, for a fireside chat and Q/A with Professor Sharon Strover from the Moody College of Communication. They’ll cover privacy, data protection, and tech policy.

Designing Good AI+Human Hybrid Systems to Curb Misinformation
February 15, 2019, 10:00 a.m. · Perry-Castañeda Library

At this hackathon we will tackle the question: What novel systems might we design to help curb the rise of online misinformation and disinformation? We will also expand the scope from purely automated AI systems to hybrid AI+human systems and consider what it means for such hybrid systems to be “good.”

The Ethical Operating System: How Not to Regret the Things You Build
January 29, 2019, 1:30 p.m. · Moody College of Communication

Join Sam Woolley from the Institute for the Future at UT’s Digital Media Speaker Series this month. The current wave of computational propaganda has taken the world by surprise. Technology firms, policy makers, journalists, and the general public are scrambling to respond to the societal threats posed by disinformation and politically motivated trolling. This talk outlines one method for responding to these issues: the Ethical Operating System (ethicalOS.org), a toolkit for anticipating future uses of technology. Jane McGonigal and Samuel Woolley, with support from Omidyar Network, constructed this guide to help a wide variety of groups think about how to design technology with democracy and human rights in mind. The toolkit has been used by major companies in Silicon Valley, by legislators at the state and federal levels, and by students in Stanford’s design school and introductory computer science courses. It’s time, however, to put it into the hands of the U.S. public so that they can help in the fight against disinformation and manipulative technology.

UT, Microsoft Researchers Seek to Make Computers More Accessible to People Who Are Blind
April 9, 2019

“The whole purpose of Good Systems is to ensure that AI is making the world a better place and is helping people. That’s exactly what we’re doing in this project. We’re using AI to help people.”

Can Trump’s New Initiative Make American AI Great Again?
February 15, 2019

“When developing policy guidelines and regulation, it is critically important to separate these various technologies and applications so as to deal with them individually. Any effort to consider all of AI as one unit when developing policies and initiatives would be very misguided.” — Peter Stone, Department of Computer Science professor and Good Systems founding researcher

Newsletter: Hacking Open Data to Improve Our Cities
October 26, 2018

In October, Good Systems team members hosted a Good Systems 311 Calls and 500 City Hackathon. UT students used A.I. and machine learning methods to analyze large-scale data sets of 311 calls, which log resident complaints, concerns, and non-emergency problems. This is valuable information that, when examined in aggregate, can help inform local decision-makers and city planners.
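To give a flavor of what “examined in aggregate” means, here is a toy sketch in Python of counting 311 requests by type and district. The column names (`complaint_type`, `council_district`) and the sample rows are hypothetical stand-ins for fields in a real 311 export; the hackathon teams worked with far larger data sets and applied machine-learning methods beyond simple counting.

```python
# Toy sketch: aggregate 311 service requests by (complaint type, district).
# Column names and data are illustrative, not a real city export.
from collections import Counter
import csv
import io

sample = io.StringIO(
    "complaint_type,council_district\n"
    "Loose Dog,1\n"
    "Pothole,3\n"
    "Loose Dog,1\n"
    "Street Light Out,3\n"
)

# Count how often each (complaint, district) pair appears.
counts = Counter(
    (row["complaint_type"], row["council_district"])
    for row in csv.DictReader(sample)
)

# The most frequent pairs hint at where city resources are needed most.
for (complaint, district), n in counts.most_common(2):
    print(f"District {district}: {complaint} x{n}")
```

Even this trivial aggregation illustrates the idea: individual calls are noise, but their distribution across neighborhoods is a signal planners can act on.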

Newsletter: Good Systems Update and Hackathon
September 25, 2018

The Good Systems development year is off to a running start! Thank you to everyone who has made our first two events a success. You have shown drive and initiative in this new UT Grand Challenge from Bridging Barriers, and we hope to keep that enthusiasm going throughout the year.

New Bridging Barriers Themes in Development Announced
October 27, 2017

Vice President for Research Dan Jaffe introduces Planet Texas 2050 and new projects in development that could one day become grand challenges at The University of Texas at Austin.

 


Please Join Us on This Journey

2018 marks the beginning of our development year. This is the time when we grow our team, ask hard questions (then, even harder ones), and decide how to design our work over the next decade. We welcome and value your feedback, your thoughts, and your contributions.

Please follow us on social media, meet us at our events, and let us know why you think this is a grand challenge.