UT, Universities Join Forces on AI Ethics

Good Systems Leads a Growing Effort to Center Ethics in AI Research and Teaching

April 30, 2025
Good Systems’ Stacey Ingram Kaleh (far left), Sharon Strover (back row, third from right), Sam Baker (back row, fourth from right), Sherri Greenberg (front row, center), Luis Sentis (front row, fourth from left) and Chandra Bhat (sixth from right) were among the six team members who represented Good Systems at the consortium’s gathering in Washington, D.C., on February 19.

In February, a group of scholars from top universities gathered in Washington, D.C., with a shared mission: ensuring that AI innovation is grounded in ethics and the public good. The meeting was the latest step forward for a growing initiative that Good Systems helped launch: a "community of practice" uniting universities across the country that are committed to responsible AI research and teaching.

In late 2022, members of the Good Systems Executive Team (GSET) began reaching out to other institutions with strong interdisciplinary AI ethics programs. The goal was simple but ambitious: build a space where researchers could share ideas and practices, collaborate on projects and advocate for AI that prioritizes societal welfare.

"It's important, as a group, to say, 'Ethics should be at the forefront of the conversation in AI.'"

— Stacey Ingram Kaleh, manager of partnerships and programs

"What we have done, essentially, is get together and agree that we should be working together, because it's important, as a group, to say, 'Ethics should be at the forefront of the conversation in AI,'" said Stacey Ingram Kaleh, Good Systems' manager of partnerships and programs. "And not only do we want to have interdisciplinary collaboration on our campus, we want multi-university collaboration around this."

The group, informally known for now as the "ethical and responsible AI university collaborative," includes researchers and faculty leaders from public and private universities across the country. Members include Johns Hopkins’ Institute for Assured Autonomy, NC State’s Center for AI in Society and Ethics, Critical AI @ Rutgers, Penn State’s Center for Socially Responsible AI, Carnegie Mellon’s CREATE Lab, the University of Washington, the University of Georgia, the University at Buffalo, and the University of Richmond’s Center for Liberal Arts and AI. They meet quarterly, with plans for a public-facing website this summer to collect research, teaching resources, and curriculum materials.

Good Systems has been a driving force in shaping the collaborative's mission. "The idea was, ‘How can we further this notion of a collaborative, a community of practice that brings [together] values-aligned researchers in this fairly multidisciplinary space?’" said engineering professor Chandra Bhat, a founding member of Good Systems and part of its Executive Team.

‘Fascinating and Scary’: AI in the Classroom and Beyond 

For Bhat and others, teaching students to think critically about AI is a top priority. "How do we engender critical thinking within our students, not have AI be seen as the panacea that can do everything?" said Bhat, who delivered the opening remarks in Washington.

One key insight that emerged from the gathering was the need to help students understand what AI cannot do, and where it fails. “We need to have a scaffolding structure for our students,” Bhat said. “We need to be sure we are mentoring them. We need to be sure they probe the questions, that they are always curious.”

The collaborative also aims to advance research on the societal biases that can be perpetuated by AI systems, which, as Bhat said, are designed by “the privileged, the educated." Without civic and community engagement, AI systems risk embedding human biases into automated decision-making, reinforcing injustices rather than correcting them.

"We are in an age which has been compared to the Industrial Revolution or the Renaissance. It is fascinating and sometimes scary."

— Chandra Bhat, Cockrell School of Engineering

One striking example: companies using AI to sort through job applications found that women and minorities were disproportionately weeded out. "The data fed to AI is based on human perceptions and human biases, those still continue to get embedded in these algorithms," Bhat said. This makes public and community engagement in AI development not just valuable, but essential.

The work is far from done, and many questions remain. “We are in an age which has been compared to the Industrial Revolution or the Renaissance,” Bhat said. “We are in a place where I can only shake my head. It is fascinating and sometimes scary.”

Thankfully, efforts like Good Systems — and now, its national consortium — are working to ensure AI evolves in a more just, inclusive and human-centered way. In other words, to make it a little less scary.
