Cross-Campus Collaborations

Good Systems partners with departments and initiatives across the UT Austin campus to develop interdisciplinary programming for diverse audiences. Speakers in our co-sponsored events explore a dynamic range of issues related to Ethical AI. 

Upcoming Events


Coming soon!

Past Events


February 23, 2023 

ChatGPT and the Future of Healthcare
IC2 Colloquium 

Ying Ding (School of Information), Greg Durrett (Computer Science), Will Griffin (Blockchain Creative Labs FOX), Justin Rousseau (Population Health and Neurology), and S. Craig Watkins, Moderator (School of Journalism and Media and IC2)  

The rise of large language models (LLMs) like ChatGPT raises a number of compelling questions about the future of artificial intelligence systems and society. This colloquium, co-hosted by UT Austin’s IC2 Institute and Good Systems, considers the implications of LLMs in the context of health care. From health-based conversational chatbots to systems designed to expedite health-related administrative tasks, LLMs will almost certainly shape how we design future health services. LLMs can facilitate access to health services and information, addressing the needs of anyone, anytime, from anywhere. As a result, individuals will have unprecedented access to systems that can respond to their queries or even offer specialized forms of support. On the other hand, the growing adoption of LLMs raises concerns about accuracy of information, reliability, and bias. As one critic notes, “these models have no comprehension of the prompts and queries they are responding to.” This is especially relevant when the query concerns a chronic disease, a mental illness, or a medication. The colloquium offers nuanced perspectives on LLMs in the context of health.

Read the colloquium highlights here.

 

February 22, 2023 

Coexisting with AI: Embedding Ethics in Education and Practice 
Texas Science Festival

Kenneth Fleischmann (School of Information), Will Griffin (Blockchain Creative Labs FOX), Peter Stone (Computer Science), and Sharon Strover, Moderator (School of Journalism and Media) 

Why is it important for artificial intelligence systems to prioritize ethics? AI technologies are becoming a part of nearly everything we do and interact with: from social media platforms and search engines, to mortgage lending, job recruitment, and law enforcement systems, to the cars we drive and the devices we use in our homes. These technologies can help connect us, increase efficiency, and improve safety, but they also have the capacity to harm us in ways we may not often consider. How do we weigh the potential benefits and harms of AI technologies? How can we ensure these technologies are designed to benefit everyone and not just a select few? This panel, which includes Good Systems researchers and partners, will consider the role of ethics in education and practice to set the stage for how we, as a society, can embed ethics in AI today and in the future. This is a featured event in the Texas Science Festival, sponsored by the College of Natural Sciences at The University of Texas at Austin.

 

January 27, 2023

Responsible Machine Learning’s Causal Turn: Promises and Pitfalls
Ethics in AI Seminar

Zachary Lipton (Carnegie Mellon University) 

Amid widespread excitement about the capabilities of machine learning systems, this technology has been deployed to influence an ever-greater sphere of societal systems, often in contexts where what is expected of the systems goes far beyond the narrow tasks on which their performance was certified. Areas where our requirements of systems exceed their capabilities include (i) robustness and adaptivity to changes in the environment, (ii) compliance with notions of justice and non-discrimination, and (iii) providing actionable insights to decision-makers and decision subjects. In all cases, research has been stymied by confusion over how to conceptualize the critical problems in technical terms. And in each area, causality has emerged as a language for expressing our concerns, offering a philosophically coherent formulation of our problems but exposing new obstacles, such as an increasing reliance on stylized models and a sensitivity to assumptions that are unverifiable and (likely) unmet. This talk will introduce a few recent works, providing vignettes of reliable ML’s causal turn in the areas of distribution shift, fairness, and transparency research. Co-hosted with the Institute for Foundations of Machine Learning and the Forum for Artificial Intelligence.