Bringing Robots into the Real World: Q&A with Peter Stone and Elliott Hauser

November 8, 2021
Three men stand around a robot on wheels in a robotics lab on the University of Texas at Austin campus.
(Left to right) Computer Science Professor Peter Stone, Assistant Professor Joydeep Biswas and Professor of Practice Justin Hart test a robot in a robotics laboratory at the Anna Hiss Gymnasium on the University of Texas at Austin campus. Photo credit: Callie Richmond

Computer Science Professor Peter Stone and School of Information Assistant Professor Elliott Hauser are among several UT researchers working on a new project for the Good Systems grand challenge called “Living and Working with Robots.” It’s a new core research project that seeks to deploy human-robot systems in an ethical and responsible way — right here on the Forty Acres. Their project looks not only at the technical aspects of building service robots but also at the social aspects, like how people feel about sharing space with this kind of technology. We talked with Peter and Elliott about their work and how it will change the way we interact with robots.

To start, when we’re talking about robots, what do we mean?

Peter Stone: “Robot” is a historically loaded term. People mean a lot of different things by it. Some people mean an autonomous system that behaves without human intervention. Essentially, in that case, robots are just software, and robotics has nothing to do with physical agents. When I think of robots, though, I think of something that has a physical presence in the world and performs some form of automated behavior. Traditionally, that has meant robots that are stiff and do something repetitive, like those that perform tasks as part of a factory assembly line.

My research has always been about having robots that can operate under uncertainty in the real world and don’t just do the same task over and over. They perform different motions at different times, perhaps because there are people around them or because the things they pick up are not always in the same place. In “Living and Working with Robots,” we are trying to create robots that can inhabit the same spaces as people. There are robots, and there are intelligent robots.

Elliott Hauser: My work has previously explored how information systems affect our social reality. So, robots are an interesting new avenue for me. They take unambiguous physical action in the world, and that action can have a direct effect on people’s social realities. Part of understanding robots is exploring our own ideas about what robots are, ideas that often come from science fiction. Even those imaginary robots have surprisingly real-world consequences because they influence what roboticists see as possible. We must have a shared vision of what we are working toward and become more deliberate in the ways we use our imaginations so that we can avoid the most harmful consequences often depicted in science fiction.

Can you describe your project, “Living and Working with Robots”?

Peter: I see it as an outgrowth of a project that I’ve been working on for a long time with several other UT researchers that looks at how to create robots that are a part of the social fabric. Our work started by creating robots that have long-term autonomy, meaning they can navigate large spaces like the entire UT campus instead of just the inside of small buildings. To make that happen, though, it’s important that there’s no backlash against these robots — that people don’t feel like the robots are bothering them, making too much noise, or running into things. People must be confident that their privacy is being respected and that the robots aren’t doing anything they aren’t comfortable with. This requires not only computer science experts but also people with social science and humanities backgrounds. If we are going to build robots that can interact with people in the social space, we must consider the best way to do it and what effect it will have on people.

“It’s not just technologists who must play a part in developing AI and addressing these risks but also social scientists.”

Elliott: A major ambition of this project is to deploy robots to work in UT’s libraries, an important part of our community with a rich history and distinctive culture where robots have the potential to directly support the university’s mission. The typical way of thinking would be to put the robot in the library and see what happens. But we want to take a different view and understand the existing labor context — how materials are moved around, what tasks employees are typically required to do, and what frustrates them about their jobs. Instead of saying, “We have this technology, let’s apply it,” our goal is to identify the right problem before we try to solve it with robotics. We’re prepared to hear responses like “robots don’t make sense here,” but we’re going in with open minds and expect to be surprised. What can library staff and patrons teach us about how they work and what they value? It’s a really different way of working that requires time, patience, and resources.

Aren’t robots already being designed and tested in real-world contexts?

Peter: Real-world testing is starting to happen. The most famous examples are the robots being used at Amazon fulfillment centers, but those are in a warehouse and not in the open world. There are also now autonomous cars and delivery robots like Amazon Scout. However, even then, they are being used to carry out a specific task and usually move point-to-point, then turn off. This is different from what we would call a general-purpose robot, which would be turned on regularly and asked to do different things by different people. In industry, where we see most robots today, engineers build them for a specific purpose, which is actually better for building a business. That’s because locomotion, manipulation, and computer vision are each incredibly difficult areas. So, the simpler and more repetitive the task, the more successful the robot is going to be. Sometimes it’s easier to deliver limited value while a technology is still maturing.

We are taking the next step. We are thinking about things industrial labs may not be focusing on now because it’s beyond their horizon — robots that can do more than a specific function. It’s a hard problem to solve in an affordable way, but it introduces a whole bunch of research challenges, and solving them will help with future applications.

When I started in this field, the place to do AI was in academia. Now, there are a lot of companies building products based on AI. The question is: are we working on something that is better done in one of those settings or better done in academia? My answer: Let’s look at that horizon and imagine capabilities no one is exploring right now. We are not building a business plan or trying to make money off our work. We are simply trying to answer the question of what’s possible and — to the extent we have choices in how we build robots — how we can do it in a way that’s most acceptable to the public.

The “Living and Working with Robots” team includes faculty from Computer Science, Communication Studies, the School of Information, Liberal Arts, and others. Photo credit: Callie Richmond

Elliott: Another huge part of this project is understanding how people perceive and interact with the robots, whether the robots are accepted by the public, and whether they cause harm. Harm and acceptance are two separate things from a research perspective. Acceptance is someone’s willingness to be around or use a technology: their attitude toward it. In robotics, when we test acceptance, we’re looking at whether this robot is something you’d be comfortable around and, if it’s designed to be interacted with, whether and how you interact with it. Separately, we can investigate whether the introduction of a technology has harmful effects. Ultimately, you put these two areas of study together and you can distinguish technologies that have positive effects but aren’t accepted by a community from technologies that have negative effects but are accepted and adopted by a community — and everything in between. We are working, hopefully, toward technologies that are equitably beneficial and that are accepted and used by the people they benefit.

You have been deploying robots on campus for a little while in advance of this project. How have they been received so far?

Peter: I can recall a rather funny story. We had a robot wandering through a computer lab where students were working, with a human operator behind it to oversee its movements. There was a backpack on the floor that the robot didn’t detect, so it bumped into it. Even though there was a robot handler right there, the student still turned around and apologized to the robot, rather than the person, for being in its path. That’s how we want to think about robots, as social actors. If you open the door for a robot, maybe it will do something nice for you in return.

What do you hope to accomplish with this project?

Elliott: Fundamentally, this is a research project, and the long-term goal is to uncover new knowledge and to train graduate students who can advance robotics. When you have robots doing jobs on campus, it opens up a wealth of opportunities for researchers. There are not a lot of places in the world that have access to this kind of system. As each of our students gets involved in this project in different ways, there’s going to be a lot of mutual learning and teaching, and technical, social, ethical, and critical perspectives are going to mix fruitfully.

Also, deploying technology into an existing community will be of interest to regulators and industry. People working on robotics technology will be curious about what we discover, and we can use it to create toolkits for them as well as lists of best practices.

One of the first outcomes we hope to see from our project is robots that can carry out deliveries on campus, ferrying books and other materials between the libraries and different buildings and offices. These robots won’t replace library staff but will hopefully lighten their existing workloads.

“This allows both students and researchers to see that it can be done in a way that’s good and a way that’s bad. We need those crucibles — test areas to evaluate our methodologies. ‘Living and Working with Robots’ provides that.”

Once we achieve these goals, we will then test whether we can artificially interrupt the robots’ normal processes and ask them to do something different. We have considered this in the context of disaster response, where we could have a fleet of robots on campus stop their normal behavior to carry out critical tasks. A disaster can be anything from a devastating blizzard, like the one we had this past winter, to a more long-term situation like we’ve had with COVID, where social distancing was required. Robots are already used in many disaster situations, but those are special-purpose robots that search for survivors or help disarm explosives. Our opportunity is to look at what an installed robotic infrastructure does for organizations’ ability to use their existing resources to adapt in times of need. And we think this work will be very relevant to an increasing number of places as robotic deployments become more common.

Peter, the latest report from the One Hundred Year Study on Artificial Intelligence, known as AI100, was just released. You were on the committee for the study. Do you have any important findings or recommendations to share from this year’s report that are applicable to the “Living and Working with Robots” project?

Peter: The AI100 project is unique because it’s a longitudinal study of AI, meaning that there are going to be reports every five years for 100 years. It will show a lot about the evolution of artificial intelligence. The title of this year’s report is “Gathering Strength, Gathering Storms” — the idea being that there has been a lot of technical progress in the past five years, and that a lot of risks related to AI have been identified and studied over the same period. The report authors share the same philosophy we do in this project: that it’s not just technologists who must play a part in developing AI and addressing these risks but also social scientists.

Social scientists have a perspective that core technologists lack on the impact robots will have on our social interactions and on society. They have the tools and language to predict, analyze, and explain what is going on and what might happen when we deploy these systems.

When we stepped back and examined the field of AI for the AI100 report, one of the things we looked at was robots operating in the care industry, particularly eldercare and nursing. We found that it is important to have robots working alongside people, not replacing them. Much of the philosophy in this project reflects that idea.

How will this project advance the mission of Good Systems, to design values-based AI?

Peter: One of the things that Good Systems needs is testbeds where we can study AI-based technological systems that have the potential to interact with people and that exist in the real world. This allows both students and researchers to see that it can be done in a way that’s good and a way that’s bad. We need those crucibles — test areas to evaluate our methodologies. “Living and Working with Robots” provides that. The technological AI systems we study are the kinds of systems that are already becoming pervasive now. So, the lessons we learn will not just be applied to this project but to current and future Good Systems projects as we try to develop responsible technologies that have a positive, rather than negative, impact on our world.

 

Peter Stone, Ph.D., is a professor in the Department of Computer Science and founder and director of the Learning Agents Research Group within the Artificial Intelligence Laboratory. He is also the Director of Texas Robotics and Executive Director of Sony AI America. Stone’s research focuses on machine learning, multiagent systems, and robotics.

Elliott Hauser, Ph.D., is an assistant professor in the School of Information. He explores how we use information systems to make things true or false, and what we do after we are certain. To study this, Hauser applies a sociotechnical perspective to a variety of information systems, from books to databases to algorithms. His work incorporates philosophy, information ethics, infrastructure studies, and algorithm studies.

Grand Challenge:
Good Systems