Ethics, Values, and A.I.

“Technology is neither good nor bad; nor is it neutral.” This is the first law of technology, outlined by historian Melvin Kranzberg in 1986. It means that whether a technology turns out to be good or bad depends on how it is used and on the values we bring to it. At the same time, because the people who design technology value some things more than others, their values inevitably shape their designs.

We use that technology — and, increasingly, artificial intelligence — to entertain ourselves, communicate, get places faster, make predictions, swipe left or right, protect our homes, and solve complex problems quickly and easily. In short, A.I. is changing the way we do everything because it’s everywhere — from dating apps to the most advanced military technology.

But because technology is never neutral, it has the capacity to be harmful to us in ways we might not intend or predict. The difficulty for us, as scientists and engineers, is that A.I. is helpful.

It can do many things faster, better, and more easily than humans can, and humans reap the rewards. But how will A.I. affect society, work, and the way we interact with one another? We need to answer these questions proactively rather than waiting for bad things to happen and reacting once it’s already too late.

As Ian Malcolm, the mathematician in Michael Crichton’s “Jurassic Park,” put it: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”


Can We Ensure That A.I. Protects Humanity Instead of Destroying It?

That’s the question we have to ask now: Should we? How can we ensure that advances in A.I. are beneficial to humanity, not detrimental? How can we develop technology that makes life better for all of us, not just some? And what unintended consequences are we overlooking or ignoring by developing technology with the power to be manipulated and misused, from undermining elections to exacerbating racial inequality?

Our goal is to provide a way for prosocial values to drive the design of artificial intelligence in autonomous and semi-autonomous technologies so that those systems both protect and improve society.


YEAR ZERO

This marks our development year as a future UT grand challenge. Our focus during “year zero” of this eight-year research project is to develop what we’re calling the Good Systems Values Networks Method and to grow our network of colleagues, partners, and supporters.

Our Values Networks Method combines two important approaches to technology development:

  • Value-Sensitive Design (VSD) puts procedures in place early in a product’s design process to account for varied — or even conflicting — social values among technology’s end-users.
  • Socio-Technical Interaction Networks (STINs) seek to understand the complex interactions and relationships among people, information, and technology.

Our proposed Values Networks Method connects VSD (on the microscale) and STINs (on the macroscale) to forge a novel research approach.


Meet the Team

Research groups around the world are asking similar questions about A.I., but their backgrounds are traditionally in computer science. Our grand challenge team includes computer scientists as well as natural and social scientists, technologists, ethicists, engineers, health and transportation experts, and more.

Samuel Baker
Associate Professor, English

Chandra Bhat
Professor, Civil, Architectural and Environmental Engineering

Tanya Clement
Associate Professor, English

Kenneth R. Fleischmann
Associate Professor, School of Information

Junfeng Jiao
Assistant Professor, School of Architecture

Matthew Lease
Associate Professor, School of Information

Peter Stone
Professor, Computer Science

Sharon Strover
Professor, Radio, Television and Film

William Tierney
Professor and Department Chair, Population Health

Amelia Acker
Assistant Professor, School of Information

Alan Bovik
Professor, Electrical and Computer Engineering

Casey Boyle
Assistant Professor, Rhetoric and Writing

Aaron Choate
Director, Digital Strategies, University of Texas Libraries

Maria Esteva
Research Associate, Texas Advanced Computing Center

Joydeep Ghosh
Professor, Electrical and Computer Engineering

Sherri Greenberg
Clinical Professor, LBJ School of Public Affairs

Natalia Ruiz Juri
Research Associate, Center for Transportation Research

Katheryn Pierce Meyer
Head of Architectural Collections, University of Texas Libraries

Alison Norman
Associate Professor of Instruction, Computer Science

Bruce Porter
Professor, Computer Science

Chris Rossbach
Assistant Professor, Computer Science

Suzanne Scott
Assistant Professor, Radio, Television and Film

Paul Toprac
Associate Professor of Instruction, Computer Science

Maytal Saar-Tsechansky
Associate Professor, Information, Risk, and Operations Management

J. Craig Wheeler
Professor, Astronomy

Paul Woodruff
Professor, Philosophy

Weijia Xu
Research Scientist, Texas Advanced Computing Center

Yan Zhang
Associate Professor, School of Information
Follow us: @UTGoodSystems

News & Events

Designing Good AI+Human Hybrid Systems to Curb Misinformation
February 15, 2019, 10:00 a.m. | Perry-Castañeda Library

At this hackathon we will tackle the problem: what novel systems might we design to help curb the rise of online misinformation and disinformation? We will also expand our scope from purely automated AI systems to hybrid AI+human systems, and consider what it means for such hybrid systems to be “good.”
The Ethical Operating System: How Not to Regret the Things You Build
January 29, 2019, 1:30 p.m. | Moody College of Communication

Join Sam Woolley of the Institute for the Future at UT’s Digital Media Speaker Series this month. The current wave of computational propaganda has taken the world by surprise. Technology firms, policymakers, journalists, and the general public are scrambling to respond to the societal threats posed by disinformation and politically motivated trolling. This talk outlines one method for responding to these issues: the Ethical Operating System (ethicalOS.org), a toolkit for anticipating future uses of technology. Jane McGonigal and Samuel Woolley, with support from Omidyar Network, constructed this guide to help a wide variety of groups think about how to design technology with democracy and human rights in mind. The toolkit has been used by major companies in Silicon Valley, by legislators at the state and federal levels, and by students in Stanford’s design school and introductory computer science courses. It’s time, however, to put it into the hands of the U.S. public so that they can help in the fight against disinformation and manipulative technology.

Can Trump’s New Initiative Make American AI Great Again?
February 15, 2019

“When developing policy guidelines and regulation, it is critically important to separate these various technologies and applications so as to deal with them individually. Any effort to consider all of AI as one unit when developing policies and initiatives would be very misguided.” — Peter Stone, Department of Computer Science professor and Good Systems founding researcher

Newsletter: Hacking Open Data to Improve Our Cities
October 26, 2018

In October, Good Systems team members hosted a Good Systems 311 Calls and 500 City Hackathon. UT students used A.I. and machine learning methods to analyze large-scale data sets of 311 calls, which log resident complaints, concerns, and non-emergency problems. This is valuable information that, when examined in aggregate, can help inform local decision-makers and city planners.

Newsletter: Good Systems Update and Hackathon
September 25, 2018

The Good Systems development year is off to a running start! Thank you to everyone who has made our first two events a success. You have shown drive and initiative in this new UT Grand Challenge from Bridging Barriers, and we hope to keep that enthusiasm going throughout the year.

New Bridging Barriers Themes in Development Announced
October 27, 2017

Vice President for Research Dan Jaffe introduces Planet Texas 2050 and new projects in development that could one day become grand challenges at The University of Texas at Austin.



Please Join Us on This Journey

2018 marks the beginning of our development year. This is the time when we grow our team, ask hard questions (and then even harder ones), and decide how to shape our work over the next decade. We welcome and value your feedback, your thoughts, and your contributions.

Please follow us on social media, meet us at our events, and let us know why you think this is a grand challenge.