In April 2022, Good Systems hosted its annual symposium – entitled “Defining, Evaluating, and Building Ethical AI Systems” – over two days at UT Austin. Experts from academia, industry, government, and nonprofits met to share their latest insights on leveraging AI for social good and to identify next steps for addressing the most pressing societal issues we face.
The symposium started on Thursday, April 7, with a keynote speech by Milind Tambe, Ph.D., Gordon McKay Professor of Computer Science at Harvard University and Director of AI for Social Good at Google Research India. In his speech, Tambe explored the global social impact of AI, especially in terms of getting the most from limited resources, by sharing four projects led by his team. Three of the projects focused on public health, and the fourth centered on wildlife conservation.
“This work requires that we step out of the lab and into the field, and try to understand firsthand the problems actually faced as opposed to cool technology that we can develop in the lab and helicopter into the field,” Tambe said.
Tambe’s first public health example was a partnership with ARMMAN, an India-based nonprofit dedicated to reducing maternal and child mortality and morbidity. Tambe’s team found that AI could help call center workers decide which mothers to call with reminders to enroll their children in government programs or attend their health check-ups.
Other public health projects Tambe’s team worked on included helping the UCLA School of Social Work use social networks for HIV prevention among youth experiencing homelessness, and collaborating with Michael Mina, Ph.D., assistant professor of epidemiology in the T.H. Chan School of Public Health at Harvard University, to determine whether widespread use of rapid tests or PCR tests would better prevent the spread of COVID-19.
Tambe also highlighted a wildlife conservation project that uses game theory and machine learning to prevent poaching in Queen Elizabeth National Park and Murchison Falls National Park, both in Uganda, and Srepok Wildlife Sanctuary in Cambodia. His team built a model called PAWS (Protection Assistant for Wildlife Security) that predicts which high-risk, infrequently patrolled areas of a park should be searched for snares, advising park rangers on where best to concentrate their efforts.
“These may seem like very different projects, but what ties them together is the underlying research area of multiagent systems,” Tambe said.
Tambe concluded his keynote speech with four lessons learned:
- Achieving social impact and advancing AI go hand in hand.
- Partnerships with communities and nonprofits are crucial; the goal is to empower nonprofits to use AI tools themselves rather than acting as gatekeepers to AI technology for social impact.
- We have to look at the entire data-to-deployment pipeline; this work is not just about improving algorithms.
- Lack of data is the norm, and identifying new forms of data should be part of our project strategy.
The keynote speech was followed by a poster session showcasing nineteen Good Systems projects from the first two years of the research grand challenge. Attendees explored the projects and spoke with faculty and student researchers who shared their key takeaways.
The second day of the symposium began with Good Systems Core Research Project Presentations. View the full presentations below to learn more about the ways in which Good Systems is investigating issues including privacy and surveillance, the spread of disinformation, navigating multiple datasets in smart cities, the intersection of AI and racial justice, smart hand tools and the future of work, and human-AI relationships.
- Being Watched: Embedding Ethics in Public Cameras by Sharon Strover, Ph.D. (School of Journalism), and Atlas Wang, Ph.D. (Electrical and Computer Engineering)
- Designing Responsible AI Technologies to Curb Disinformation by Jessy Li, Ph.D. (Linguistics)
- A Good System for Smart Cities by Weijia Xu, Ph.D. (Texas Advanced Computing Center), and Junfeng Jiao, Ph.D. (School of Architecture)
- AI & the Future of Racial Justice by Min Kyung Lee, Ph.D. (School of Information), Angela Haddad, and Angie Zhang
- Making Smart Tools Work for Everyone by Kenneth Fleischmann, Ph.D. (School of Information), Sherri Greenberg (LBJ School of Public Affairs), and Raul Longoria, Ph.D. (Mechanical Engineering)
- Living and Working with Robots by Elliott Hauser, Ph.D. (School of Information)
The symposium concluded with two panel sessions in the afternoon. In the first – “Putting Good Systems into Practice” – Will Griffin of Hypergiant, Mikel Rodriguez of MITRE, Alice Xiang of Sony AI, and moderator Kenneth Fleischmann explored the tensions and opportunities that exist when embedding ethics in the development of AI.
To curb the negative effects of unregulated AI, Griffin argued for a two-pronged approach: require AI ethics coursework alongside coding in K-12 and higher education, much as law and medical schools have long required ethics in their curricula, and work with policymakers at the local, state, and federal levels to ban unethical uses of AI.
“It can destroy innovation, if you don’t have ethics in your policy. There are going to be bans from jurisdictions, which creates a huge disincentive for companies to use AI – not in a malicious way – but, in the world we live in, it would be like nuclear weapons. If there weren’t rules around nuclear weapons, any use of nuclear weapons is unethical, so the same is true of these types of technologies that touch so many people,” Griffin said.
Watch the “Putting Good Systems into Practice” panel discussion below:
The final panel discussion – “Smart, Equitable, Responsive – Can a City Be All Three?” – brought together Chelsea Collier of Digi.City, Brandon Kroos of the City of Austin, and T. Leo Cao of UT Austin, and was moderated by Junfeng Jiao. Panelists emphasized the importance of community-centered strategies and stakeholder input in smart cities.
“One way we like to frame this is we start from the actual problem statement. If we are building a new AI or starting to implement new technologies, what problem are we trying to solve for? Who is most directly impacted by that? At the end of the day, they are the ones you need to be serving. They are the ones we are serving in our office,” Kroos said.
Watch the “Smart, Equitable, Responsive – Can a City Be All Three?” panel discussion below:
The two-day symposium brought UT researchers and external partners together to spark innovation, build valuable relationships, and engage in cross-sector conversations to identify next steps.
To watch all videos of the Good Systems Annual Symposium, check out our YouTube playlist.