Designing Culturally Sensitive AI Devices

September 1, 2020
Amazon Echo. Photo courtesy of Adam Bowie.

As digital assistants like Siri and Alexa become more common in our lives, people increasingly treat them as companions throughout the day. Young children, especially, are apt to see these devices as real people or friends.

University of Texas researcher S. Craig Watkins, a professor of Journalism and Media in the Moody College of Communication, says that’s why it is more important than ever that these devices reflect the diverse backgrounds of their users.

Watkins and his team have been studying how Black and LatinX children ages 8 to 12 experience digital assistants as part of a project for the UT grand challenge Good Systems, which aims to design ethical, values-based AI technologies.

“We have come to understand over the years that technology we presumed to be without bias can have these unintended consequences based on the kinds of ideas and belief systems that go into the actual building and designing of these AI-based systems,” says Watkins. “I think the big concern that technology companies are beginning to recognize right now is that these systems are being built without a lot of diversity in the room.”

Bias exists in a number of technological innovations we use today. Consider facial recognition software used by police, which has been found to misidentify Black and Hispanic people, leading, in at least one instance, to a wrongful arrest. Watkins says algorithms that help with criminal justice sentencing by using historical data to determine risk also have unfair results for Black and Hispanic people.

For digital assistants, Watkins says he and his team have initially identified issues with the voice used for devices like Siri and Alexa, which is typically that of a white woman. He says Black and Hispanic children could feel a lack of connection to a device that doesn’t sound like them. Devices like Siri and Alexa also may struggle to answer some culturally specific questions or know how to respond when asked about race and racism — all things the team is looking into as it conducts its experiments.

Siqi Yi, a doctoral student from the School of Information, says one child they spoke with said using a digital assistant made him feel different from his friends.

“He felt that most of the people in his home country in Mexico didn’t have technology like Alexa or Google Home, so it made him feel less connected to his ethnicity,” Yi says.

Moving experiments online

The research project is still in its early phases.

The team started by interviewing children and their parents and found that homes with children were most likely to adopt technology like digital assistants. Parents’ basic concerns with the technology centered on privacy, and they worried about how their children would use the devices — issues that are universal and not necessarily tied to race and ethnicity.

The next step of the project will consist of experiments in which researchers walk kids through several scenarios and observe how they interact with the devices. This will include things like giving the kids an age-appropriate homework assignment and asking them to use the digital assistant to help complete it. Some of the tasks might leverage aspects of the children’s cultural identity to help better understand how digital assistants respond in those situations, such as asking questions in certain languages, asking about culturally relevant pop culture interests, or even having the kids ask Siri or Alexa what ethnicity it is.

“We cannot conduct face-to-face experiments in the lab because of COVID-19, so we changed our research methods for phase two,” Yi says. “We plan to do online contextual inquiry to move this project forward.”

Watkins adds that video-based experiments could work even better because they may allow the team to involve more people, including those who couldn’t otherwise participate due to transportation limitations or other logistical challenges.

“It might potentially give us access to a greater diversity of kids and parents,” Watkins says.

He adds that they are starting small and hoping to expand the project in the future if they get access to additional funding.

The team’s ultimate goal is to be able to pass the research findings on to developers so they can design technology that is more inclusive in the future.

Their research is significant because it is among the first to look at children. Most research on digital assistants to date has centered on adults. But more and more children are gaining access to these devices, which affects their cognitive development as well as how they consume information and build knowledge. That makes understanding how children use these devices especially important. “Because children’s brains are still developing, they experience digital assistants much differently than adults,” Yi explains.

“If we discover, for example, that Black and LatinX children bring certain kinds of cultural expectations, certain kinds of cultural sensibilities or cultural norms, then if designers are aware of those, they can begin calibrating and developing algorithms that can be responsive to those cultural nuances,” Watkins adds.

Watkins says the research is especially pertinent right now as the country is giving more attention to race and racial issues. He says developers might be even more receptive to their findings.

“We are all figuring out how to respond to the growing concern and recognition around systemic racism. Virtually every institution is implicated in this, including technology,” Watkins says. “There’s a lot of hype and hope about artificial intelligence and how it can benefit humankind and transform society in ways that are impactful and that empower us to create more equitable outcomes, but it’s not going to do that on its own. It’s only going to do that through the human ingenuity and human intentionality. Part of that includes developing teams that are diverse and bring different perspectives and views of the world into the research and development process. That can lead to products and experiences that are much more reflective of the world around us.”

Grand Challenge: Good Systems

S. Craig Watkins, Ph.D., is a journalism professor at The University of Texas at Austin. An internationally recognized expert in media, Watkins is the author of five books exploring young people’s engagement with media and technology. His two most recent books — The Digital Edge and Don’t Knock the Hustle — result from his work with the Connected Learning Research Network, a research collaborative funded by the MacArthur Foundation. Watkins is the Good Systems Research Director for Racial and Social Justice.

Siqi Yi is a doctoral student in the University of Texas at Austin’s School of Information. Her research focuses primarily on the impact of immersive media on people’s information-seeking behavior and learning. Prior to joining the iSchool’s doctoral program, she earned her master’s degree in Learning Technologies at UT and worked as an instructional designer on the Alien Rescue project, helping build 3D models.