Fall 2023 – Spring 2024

Good Systems Speaker Series

The Good Systems Speaker Series brings together diverse perspectives on ethical AI from across the academic, industry, nonprofit, and public service sectors. Thought leaders will discuss topics related to Good Systems’ core research areas, which include AI and racial justice, surveillance, disinformation, smart cities, human-robot interactions, smart tools, and the future of work.

Upcoming Speaker Series Events


No events at this time.

Past Speaker Series Events


April 3, 2024

Preserving Digital Images and Data: Procedural, Policy, and Privacy Issues
Howard Besser, Professor Emeritus, Cinema Studies, New York University

21st-century media pose challenges to preserving the historical record. Collecting institutions need guidance and new strategies in order to selectively save cellphone video, GPS data, and video from surveillance cameras, drones, and police bodycams. In this talk, Howard Besser will discuss the procedural, policy, and privacy issues that saving this type of material poses, and will demonstrate the ongoing tension between preservation and privacy. The presentation will include a case study of preserving cellphone videos from the Occupy Movement and a close look at police body-cam videos.


March 4, 2024

Effective Human-Machine Partnerships in High Stakes Settings
Julie Shah, Professor of Aeronautics and Astronautics; Faculty Director, Industrial Performance Center; Director, Interactive Robotics Group, MIT

Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do a job better than most of us, it can't explain how. The result is often an either/or choice between human and machine, resulting in what we call zero-sum automation. In this talk, Dr. Julie Shah presents research case studies from industry and shares the MIT Interactive Robotics Group's latest research on effectively blending the unique decision-making strengths of humans and intelligent machines.


February 21, 2024

Don’t Believe the Hype: AI in Entertainment Media
Andrew Augustin (Arts and Entertainment Technologies, UT Austin), Rakeda L. Ervin (Austin Film Society), Geoff Marslett (University of Colorado Boulder), and Suzanne Scott (Radio-Television-Film, UT Austin). Moderated by Sam Baker (English, UT Austin).

How are our perceptions of AI technologies influenced by popular media? Join us for a dynamic and thought-provoking discussion on the blurred lines, hard lines, and connections between AI hype, portrayals of AI in entertainment media, and reality. Hear perspectives from experts in film, game design, TV, and fan culture in this roundtable moderated by Dr. Sam Baker.


February 5, 2024

What can the participatory disinformation ecosystem teach us about how AI will be weaponized?
Claire Wardle, Professor of the Practice, Department of Health Services and Co-Director, Information Futures Lab, Brown University

For the past twenty years, we have watched new technologies enable the creation and dissemination of user-generated content (UGC). The impact has been enormous, bringing numerous challenges around copyright, verification, and networked harassment. The leaps in AI taking place today are causing similar and even more extraordinary challenges. What can we learn from the ways in which industries adapted to UGC that we can apply to AI? How can we be better prepared for the dark side of AI when large-scale weaponization of the technology inevitably occurs? What safeguards can we start to build that might mitigate the eventual harms?


October 4, 2023

The Importance of Open Models in AI, and Implications for the Future of Oversight
Ben Brooks, Head of Public Policy for Stability AI

Open models play a vital role in helping to promote transparency, security, and competition in AI. However, open models are highly capable and widely available, posing a challenge for regulators and policymakers considering the future of oversight. In this session, Ben Brooks, Head of Public Policy for Stability AI, discusses recent developments in open models, AI regulation, and practical steps that policymakers can take to mitigate emerging risks while continuing to foster open and grassroots innovation.