Good Systems Speaker Series
April 24, 2023
AI for Intelligent Societies
James Hodson
How will Artificial Intelligence make human environments more livable and sustainable? How can we make it easy for municipalities and communities to discover and deploy emerging technologies in ways that support shared values and reduce societal frictions and risks? The AI for Good Foundation has been working with cities, regional governments, national governments, and supranational organizations like the UN and OECD to answer these questions through institutional frameworks, experimentation, and partnerships. This session will explore AI for Good's approach, case studies, and a vision for AI in urban settings.
April 12, 2023
Ethics Helps Our Society Prosper: What Should We Do About AI?
Benjamin Kuipers, PhD
Recent advances in AI, ML, and Robotics have driven increased attention to ethical questions about the impact of technology on society. These include relationships between the value of technology and its safety; surveillance and privacy; fairness and bias; and the future of work and economic inequality, among others. We can take steps toward a unified approach to these questions by drawing on several disciplines, including theories of evolution, games, business, cooperation, and trust. In this talk, Benjamin Kuipers will explore that approach and its implications for the deployment and regulation of AI technologies.
March 6, 2023
Predict and Surveil: Data, Discretion, and the Future of Policing
Sarah Brayne, PhD
The scope of criminal legal surveillance, from the police to the prisons, has expanded rapidly in recent decades. At the same time, the use of big data has spread across a range of fields, including finance, politics, health, and criminal justice. Drawing on fieldwork conducted within the Los Angeles Police Department, Brayne shows how law enforcement uses predictive analytics and new surveillance technologies to allocate resources, identify criminal suspects, and conduct investigations. She then analyzes how the adoption of big data analytics transforms organizational practices, and how the police themselves respond to these new data-driven strategies. Proponents argue that big data can be used to make law enforcement practices more effective, fair, accountable, and objective, in part by stripping discretion from biased front-line actors. This research reveals the ways that police use of big data does not eliminate discretion, but rather displaces discretionary power to earlier, less visible parts of the policing process.
January 30, 2023
Imagining Infinite Futures: AI in Art and Design
Artificial intelligence tools are increasingly being used by artists, architects, and designers in creative work. And, these days, it’s not uncommon to see AI-generated art making media headlines. How are text-to-image AI programs like DALL-E, Midjourney, and Stable Diffusion changing the design disciplines and the public’s perception of art and design? Why is it important for art and design to remain fundamentally human-driven, and AI-assisted? Good Systems explores these questions in a roundtable discussion with architect Kory Bieg (UT Austin School of Architecture); artist, designer, and founder Jiabao Li (UT Austin College of Fine Arts); artist Jason Salavon (University of Chicago); and curator Claudia Schmuckli (Fine Arts Museums of San Francisco), moderated by Good Systems Chair Dr. Sharon Strover.
November 28, 2022
Is the Next Winter Coming for AI? Elements of Making Secure and Robust AI
Josh Harguess, PhD
While the recent boom in Artificial Intelligence (AI) has fueled the technology’s use and popularity across many domains, it has also exposed the technology’s vulnerability to threats that could cause the next “AI winter.” AI is no stranger to “winters,” or periods of reduced funding and interest in the technology and its applications. There is some consensus that another AI winter is all but inevitable in some shape or form; however, current thinking about the next winter does not consider secure and robust AI or the implications of success or failure in these areas. The emergence of AI as an operational technology introduces potential vulnerabilities to AI’s longevity. In this talk, Harguess will introduce four pillars of AI assurance that, if implemented, will help us avoid the next AI winter: security, fairness, trust, and resilience.
October 3, 2022
Detecting “Fake News” Before It Was Even Written, Media Literacy, and Flattening the Curve of the COVID-19 Infodemic
Preslav Nakov, PhD
Given the recent proliferation of disinformation online, there has been growing research interest in automatically debunking rumors, false claims, and "fake news." A number of fact-checking initiatives have been launched so far, both manual and automatic, but the whole enterprise remains in a state of crisis: by the time a claim is finally fact-checked, it could have reached millions of users, and the harm caused could hardly be undone. In his talk, Dr. Preslav Nakov shares how the Tanbih news aggregator limits the impact of “fake news,” propaganda, and media bias by making users aware of what they are reading. Dr. Nakov also demonstrates the Prta system, a media literacy tool that detects techniques used in malicious content. Finally, Dr. Nakov advocates for a holistic approach to combating disinformation that combines the perspectives of journalists, fact-checkers, policymakers, social media platforms, and society as a whole, and presents recent COVID-19-related research in that direction.