AI is in our lives, influencing the way we work and teach, connect with one another, and move about our cities and communities. It’s increasingly a topic of discussion in our classrooms, board rooms, and around the dinner table. Good Systems researchers and partners are optimistic about the ways in which AI can enhance quality of life for everyone and transform society for the better when ethics are at the core of design, deployment, and policy. They are also keenly aware of the risks and potential harms posed by AI technologies, from exacerbated inequities and biases to disinformation at a previously unimaginable scale. How can we mitigate and eliminate these risks to ensure that AI technologies benefit everyone, not just a subset of the population? Addressing a grand challenge such as this one requires diverse perspectives.
As the 2023-24 academic year begins, Good Systems Executive Team members and close collaborators share their thoughts on the most exciting developments in AI for social good as well as the most pressing ethical concerns facing society today. Read dynamic, personal perspectives from expert practitioners and UT Austin researchers as they reflect on two questions: “What is one recent development in AI that excites you regarding its potential for social good?” and “In your opinion, when it comes to AI and society, what’s the most pressing ethical concern?”
Kay Firth-Butterfield
Executive Director, Centre for Trustworthy Technology – a World Economic Forum Centre for the Fourth Industrial Revolution
What is one recent development in AI that excites you regarding its potential for social good?
The excellent advances in AI’s ability to detect the most subtle changes in breast imaging, which could mean that all breast tumors are picked up, should enable many more women to survive. This is important not only for them, but also for their families, friends, and society.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
Disinformation and the potential to manipulate voters using AI and deepfakes. Countries representing over 50% of the world's population and GDP go to the polls in 2024, and we have no useful safeguards in place against these critical AI-enabled problems.
Kenneth R. Fleischmann, PhD
Professor, School of Information
Founding Chair, Good Systems
What is one recent development in AI that excites you regarding its potential for social good?
Advances in AI have positioned it to become a guardian angel for workers, from grammar checkers that help workers catch errors to smart hand tools that provide feedback to help workers avoid workplace accidents and repetitive stress injuries. If AI is designed to assist and benefit workers, rather than replace them, it can have a beneficial impact on the global economy and on the lives of workers.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
The most pressing concern with AI today is whether it will be used to create a more equitable society, or if it will be used to exacerbate existing inequities or create new ones. AI that is designed to benefit workers and community members could result in a more stable and harmonious society. However, if AI is used to displace workers and widen the divide between the haves and the have-nots, the end result may be even greater societal unrest and tensions, which could eventually boil over. It is vital to invest in research to understand how to ensure that AI-based systems can be used to create a more equitable, just, and sustainable society.
Sherri Greenberg
Assistant Dean for State and Local Government Engagement, LBJ School of Public Affairs
Professor of Practice, LBJ School and Steve Hicks School of Social Work
Chair, Good Systems
What is one recent development in AI that excites you regarding its potential for social good?
AI has great potential for social good in many areas. Certainly, many applications can benefit people, including fighting hunger by improving food production, expanding access to education, and providing better healthcare. One very exciting recent development in healthcare was AI’s ability to assist researchers in quickly developing the mRNA vaccines for COVID-19.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
AI presents ethical concerns in various forms. However, generative AI is currently the AI development that most people hear about. The most pressing ethical concern is the potential for machines to make certain decisions on their own, above all life-and-death choices such as the use of weapons.
Junfeng Jiao, PhD
Associate Professor, School of Architecture
Director, Urban Information Lab
Past Chair, Good Systems
What is one recent development in AI that excites you regarding its potential for social good?
The development of generative AI (GAI) and large language foundation models, represented by GPT and Bard. It will significantly change many things in our society, from how we write emails to how we solve math problems to how we write code. There are many other applications yet to be discovered. GAI is moving us one step closer to Artificial General Intelligence (AGI), which you can think of as a computer as smart as a human being.
Another possible impact of GAI is the emergence of superintelligence: imagine an intelligence that is smarter than every human combined. AGI and superintelligence are different but related concepts.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
The alignment of artificial intelligence: how to ensure our AI systems align with core human ethics and, more importantly, how to ensure the safety of AI or superintelligence. For example, how do we ensure AI will make the right decision in the case of an attack or other adversarial scenarios?
Jason JonMichael
Senior Strategist, Highly Automated Systems Safety (HASS) Center of Excellence, U.S. Department of Transportation
What is one recent development in AI that excites you regarding its potential for social good?
For someone who lives in the transportation automation world, it would be easy to assume I would reference a recent advancement in that marketplace. Rather than comment on any recent advances in AI science, I would characterize my overall excitement as being about the abundance of use cases being developed around nearly every step of human development, from walking to rocket science. These are the most exciting and serious conversations I am having these days. When you unpack the use cases around early human development, education, and individualized learning, it’s fascinating to envision the possibilities. That fascination fades into fear as one begins to factor in the unintended, the marginalized, and the knowledge that without intentionally engineering for good, bad things will happen.
The Good Systems vision, mission, and approach are what help academia, government, and the public gain hands-on experience with AI, learn from one another, and do it better the next time.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
Intentionality in design with a safety continuum in mind. In no other field is this more important. Without guardrails, the risk is not merely reproducing bias, inequity, and discrimination; it is exacerbating them to the point of threatening human rights and liberties, and even human life itself.
Assume we can program AI not to directly threaten human life; even so, to what extent are unintended consequences allowed? The pharmaceutical field has lessons for us to glean from drug testing, clinical studies, and trials. The risk to human life is the same. To what level will we assess, test, and validate the safety of AI? Will we provide the same amount of effort and rigor we have come to expect from other, very similar industries? Are we prepared for the AI equivalent of the Tylenol murders of 1982, which led to something we take for granted today: tamper-evident safety seals on over-the-counter medication? Who is responsible for the iterative uses of your technology?
Matt Lease, PhD
Professor, School of Information
Good Systems Executive Team
What is one recent development in AI that excites you regarding its potential for social good?
The vision of “democratizing access to AI” seeks to lower the barriers to creating and using AI capabilities so that anyone with an idea or need can realize their desired AI functionality, regardless of their knowledge, resources, location, etc. For example, cloud-based services enable people to access powerful computing resources from a basic laptop and internet connection. A related goal is “zero-code” programming: enabling non-programmers to accomplish various tasks of interest without needing to program. Putting these ideas together, an exciting development is the rise of large language models, such as GPT, that enable people to create custom AI solutions without programming. By describing the task in natural language, i.e., by “prompting” the AI, people can now achieve strong AI performance across a wide range of tasks.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
There are a host of intertwined issues, but one central one is insufficient public understanding of AI: what is possible and what is not, including a realistic understanding of opportunities and risks, as well as how the technology is likely to change and improve in the future. Without understanding these key aspects, it is difficult to avoid unrealistic hype and exaggerated fears (grounded in sensationalist emotion rather than informed reason), and to engage effectively in practical democratic discourse around sound investment and regulation. Like everything else in a democracy, people need quality information and understanding for effective self-governance. This is why, if I slipped a second pressing ethical concern into this answer, I'd mention the risk of AI abuse for disinformation and misinformation, since they similarly strike at the heart of the effective functioning of democracy.
Sarvapali “Gopal” Ramchurn, FIET
Chair and Director, UKRI Trustworthy Autonomous Systems (TAS) Hub
CEO, Responsible AI UK
Professor of Artificial Intelligence, University of Southampton
What is one recent development in AI that excites you regarding its potential for social good?
I am excited at the prospect of causal AI systems. These technologies have the potential to revolutionize how we build AI systems that account for causes and effects in the real world rather than learn from pure correlations. We will trust AI-based systems more if they can capture an understanding of physical and social interactions rather than purely infer such interactions from correlations. Causal AI systems will help us reduce bias and make better decisions. For example, they may be able to explain the reasons for diagnosing cancers from complex histopathology data, or the reasons for classifying images as offensive online, with higher degrees of trust.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
I am most concerned about the admission by some of the big tech companies that systems built on Large Language Models should be allowed to generate fake news or content with impunity on the basis that we should respect free speech. This shows a disregard for fundamental elements of a stable society, i.e., truth and honesty. I am also worried about the conflicts of interest arising out of close collaborations between governments and Big Tech when it comes to regulating AI. History has shown that commercial benefit trumps ethics and social values. The pervasiveness of AI is growing rapidly, and unless we build notions of responsibility into the design, operation, and governance of AI-based systems, we risk creating new societal problems that will prove hugely costly to those whose voices are not accounted for in the current conversations on AI.
Joshua Stadlan
Lead Computational Social Scientist
Modeling Lead, Social Justice Platform
Model-based Analytics Department
MITRE
What is one recent development in AI that excites you regarding its potential for social good?
I am excited about the potential for Large Language Models (LLMs) to help people interact in everyday language with public systems and processes, from the justice system to social services. For instance, by taking care of the routine interactions, LLMs can free up human staff to spend more time with people who need more tailored support.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
An ethical concern across AI risks in the current landscape, from AI-based discrimination to deepfakes, is the potential diffusion of responsibility in the AI ecosystem. When automated systems discriminate against a marginalized community, who is ethically responsible: the data collectors who just compiled available data, the algorithm team who worked in isolation, the end users with little insight into systems’ underlying models, or the whole society that generated the biased data?
That’s why it makes sense to take a coalition approach to AI assurance: formulating new standards and best practices across government, industry, and academia so that ethics are considered across the AI ecosystem and no one passes the buck. I also see a role for computational ethics: to act ethically in an AI-integrated world, we need to integrate the perspectives of people who will interact with AI in different ways, alongside Big Data and computational models, into our ethical reasoning for AI policymaking, to help us advance AI for the public good.
Sharon Strover, PhD
Philip G. Warner Regents Professor in Communication, School of Journalism and Media
Past Chair, Good Systems
What is one recent development in AI that excites you regarding its potential for social good?
It is exciting to see inklings that companies and policymakers are taking AI’s potential impact on social transactions and processes of all sorts more seriously. This signals the moment when scientific advances have left the lab environment and are now out in the wild, and people are anxiously figuring out how to harness their potential for a large range of good purposes. As someone who studies emerging technologies, I know there are predictable developmental sequences in play, and we are in the early phases of simply defining what we want such powerful systems embodied in AI to do. Right now, there are no norms.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
I’m not sure there is one single concern, but cultivating an awareness of the many flavors of AI and moving away from thinking of it as one blunt instrument represent important precursors to effectively establishing where we want these systems to go. Figuring out how we want them to serve humanity is high on the social agenda. The more informed and thoughtful people can be about it, and the more that future directions are not consumed entirely by the profit-making potential of these applications, the better off we will be.
Peter Stone, PhD
Truchard Foundation Chair in Computer Science, College of Natural Sciences
Director, Texas Robotics
Executive Director, Sony AI America
Good Systems Executive Team
What is one recent development in AI that excites you regarding its potential for social good?
There are so many developments in important application areas, such as healthcare, traffic, epidemiology, agriculture, and so many others, that it's impossible to pick just one. But that's really the most exciting development in AI from my perspective — more people are excited about, learning about, and actively innovating with AI than ever before. The widespread familiarity with and adoption of the latest AI technologies is bound to enable many new applications for social good.
In your opinion, when it comes to AI and society, what’s the most pressing ethical concern? Why?
There are unfortunately also many such concerns. Just as people striving to enhance social good have greater access to AI tools than ever before, people with ill intentions — the proverbial "bad actors" — also have such access. There are many as-yet unsolved concerns, including the spread of mis- and disinformation, infringement of privacy, bias in models, and abusive uses in pornography. The latest AI technologies are proving to be quite disruptive, both for better and for worse.
------
Good Systems looks forward to continuing to work with partners across disciplines and sectors to define, evaluate, and build AI systems for the benefit of society.
Have a question? Email goodsystems@austin.utexas.edu and join the conversation on Twitter/X using #UTEthicalAI.
If you are interested in learning more about Ethical AI and how you can get involved to ensure AI technologies are human-centered and values-driven, consider joining Good Systems throughout the year for public programs like the Good Systems Speaker Series.