Ethics, Values, and A.I.
“Technology is neither good nor bad; nor is it neutral.” This is the first law of technology, articulated by historian Melvin Kranzberg in 1985. It means that technology is good or bad only insofar as we judge it through our own value systems. At the same time, because the people who design technology value some things more than others, their values inevitably shape their designs.
We use that technology — and, increasingly, artificial intelligence — to entertain ourselves, communicate, get places faster, make predictions, swipe left or right, protect our homes, and solve complex problems quickly and easily. In short, A.I. is changing the way we do everything because it’s everywhere — from dating apps to the most advanced military technology.
But because technology is never neutral, it can harm us in ways we might not intend or predict. The difficulty for us, as scientists and engineers, is that A.I. is helpful.
It can do many things faster, better, and more easily than humans can, and humans reap the rewards. But how will A.I. affect society, work, and the way we interact with one another? We need to answer these questions proactively rather than waiting for harm to occur and reacting after it’s already too late.
In the words of Michael Crichton’s “Jurassic Park” mathematician, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Can We Ensure That A.I. Protects Humanity, Not Destroys It?
That’s the question we have to ask now: Should we? How can we ensure that advances in A.I. are beneficial to humanity rather than detrimental? How can we develop technology that makes life better for all of us, not just some? What unintended consequences are we overlooking or ignoring by developing technology that can be manipulated and misused, from undermining elections to exacerbating racial inequality?
Our goal is to provide a way for prosocial values to drive the design of artificial intelligence in autonomous and semi-autonomous technologies so that those systems both protect and improve society.
This is our development year as a future UT grand challenge. Our focus during “year zero” of this eight-year research project is to develop what we’re calling the Good Systems Values Networks Method and to grow our network of colleagues, partners, and supporters.
Our Values Networks Method combines two important approaches to technology development:
- Value-Sensitive Design (VSD) puts procedures in place early in a product’s design process to account for varied — or even conflicting — social values among technology’s end-users.
- Socio-Technical Interaction Networks (STINs) seek to understand the complex interactions and relationships among people, information, and technology.
Our proposed Values Networks Method connects VSD (on the microscale) and STINs (on the macroscale) to forge a novel research approach that will:
- Foster collaboration among humanists, social scientists, and technologists, who will combine conceptual, empirical, and technical investigations
- Expand STINs into broader and larger values networks that consider the diverse values that should (or should not) be built into A.I. systems
- Identify the values that individuals consider important in life, with an emphasis on prosocial values like democracy, fairness, transparency, and agency
Meet the Team
Research groups around the world are asking similar questions about A.I., but those teams have traditionally been rooted in computer science. Our grand challenge team is composed of computer scientists as well as natural and social scientists, technologists, ethicists, engineers, health and transportation experts, and more.