Good Systems Researchers and Partners on Shaping AI for Social Good
The start of a new academic year seems to usher in a renewed sense of possibility. Campus is buzzing with classes, research projects and public programs in full swing. In the thick of the Year of AI at UT Austin, Good Systems researchers and partners continue to ask the big ethics questions at the intersection of technology and society, challenge the status quo when it comes to AI innovation and expand our thinking around what it means to change the world.
Collaboration is at the heart of our grand challenge to design responsible AI systems for the benefit of society. When it comes to tackling some of the most pressing issues facing society today, Good Systems believes it's essential to bring together perspectives across disciplines and sectors and people from all walks of life and from every corner of the globe. We are grateful for our partners in academia, industry, government and the nonprofit sector who help us imagine and develop new AI systems and policies to advance equity and transparency, protect privacy and information integrity, equip and empower skilled workers, make our cities smarter and build robots that improve our lives.
As AI technologies continue to develop at a rapid pace, we keep our minds open and ready to examine, learn, challenge, design and redesign the ways we interact with AI on an ongoing basis.
With this constant state of change in mind, we asked some of our researchers and closest partners with a dynamic range of expertise to share their insights on what we should know about AI today, innovative applications of AI for social good, how we can all influence technology design and deployment and take agency over the ways AI shows up in our lives, and what they are most looking forward to from Good Systems this year.
Discover their perspectives:
Darla Cameron
Interim Chief Product Officer
Texas Tribune
What is one thing you want everyone to know about AI technologies?
Recent leaps in artificial intelligence tools and large language models are rapidly changing the way people consume information. This could be a scary prospect for the news industry, but there are reasons for cautious optimism. At The Texas Tribune, a nonprofit news leader that covers state politics and policy, we're committed to trust and transparency in all aspects of our work, including our approach to using these tools. We’re interested in using AI to make our content more accessible, with tools like AI-generated audio versions of stories. And we have long used AI as an efficiency tool to transcribe interviews and public meetings for internal use, with journalists always verifying published quotes. Our organization introduced an AI policy earlier this year to guide our decision making as we start to consider how we might adopt these new technologies while listening to Texans' needs.
What is a particularly innovative and socially beneficial use of AI you've seen in the past year?
I love Digital Democracy from CalMatters, a digital nonprofit news organization in California. It's a superpowered political directory, campaign finance and legislation-tracking tool that uses AI to make information about state government accessible and searchable. CalMatters' journalists worked with researchers from Cal Poly San Luis Obispo and 10up, a global web development firm, to create this powerful new tool.
Daniel Culotta
Chief Innovation Officer, Office of Innovation
City of Austin
What is one thing you want everyone to know about AI technologies?
AI technologies are tools. Don't get too wrapped up in the tool itself; focus on what it can help you do and what you want to accomplish with it. If you have a good understanding of where you want to go, AI could be a tool that helps you get there.
What are you most looking forward to from Good Systems this year?
I'm looking forward to deepening our collaboration around what it means to operationalize good systems in city settings and creating strong feedback loops that generate new evidence, partnerships and positive outcomes for Austin residents.
Ozgur Eris, Ph.D.
Managing Director, AI and Autonomy Innovation Center
MITRE
What is one piece of advice you would give people who are interested in learning more about AI and taking agency over their use of technology?
AI does not have a purpose other than what we, humans, assign to it. Use AI when and where it can add value to your life.
What is one thing you want everyone to know about AI technologies?
In addition to helping us do what we already do better, AI can inspire us. It does not have to be right all the time to be inspiring.
Arya Farahi, Ph.D.
Assistant Professor, Department of Statistics and Data Science
Director, D3 (Data, Discovery, Decision) Lab
What is one thing you want everyone to know about AI technologies?
We are reaching a tipping point for trust in AI — before society loses confidence in these technologies, now is the time to make major investments in building trusted and trustworthy AI. This requires engagement from diverse stakeholders, reflecting the complexity and multi-faceted nature of creating AI systems that are fair, transparent and accountable, so people can rely on these technologies with confidence.
What is a particularly innovative and socially beneficial use of AI you've seen in the past year?
I am very excited to see the infiltration of AI technologies into the scientific discovery pipeline, especially recent advances in using foundation models. Progress in basic science has always been a precursor to societal change, and today, AI is accelerating scientific discovery at an unprecedented pace. While this is incredibly exciting, the speed of these advancements demands a measure of caution to ensure that ethical considerations and societal impacts are carefully managed.
Joel Fischer
Professor, University of Nottingham
Research and Engagement Director, UKRI Trustworthy Autonomous Systems Hub
What is one thing you want everyone to know about AI technologies?
One thing that’s worth emphasizing about AI is that there is often a surprising amount of human labor involved in making the technology work “behind the scenes.” The development, training, maintenance, monitoring and fine-tuning of AI technology requires not just developers and engineers, but often also the work of untrained “workers” who input and correct data and who moderate and control AI-generated content. It’s worth bearing in mind these often “hidden” forms of labor and acknowledging the contributions these people make to the successful working of what are euphemistically portrayed as completely autonomous technologies.
What are you most looking forward to from Good Systems this year?
Updates on outputs from Good Systems projects, such as publications, as well as events like conferences and workshops to participate in.
Nancy Morgan
CEO, Ellis Morgan Enterprises
Former Intelligence Community Chief Data Officer
Client Advisor, The Cantellus Group
What is one thing you want everyone to know about AI technologies?
Data is key — getting the right data, to the right humans and machines, at the right time and in the right form. While everyone is excited about the potential of AI technologies, each organization must have a solid data foundation and put in place proper data management practices to support the safe and trustworthy use of AI for any particular use case. Everyone in an organization has a responsibility to do their part to ensure data and model quality and integrity. This work is not a one-and-done proposition but an ongoing effort throughout the design, development and testing of a capability, and after it is initially released into production and beyond.
What is a particularly innovative and socially beneficial use of AI you've seen in the past year?
A young relative of mine has a very rare disease for which there is not enough funding to research innovative treatments, create new paths to clinical trials or develop a cure. Researchers are working with AI to rapidly organize, search and compare large volumes of data from other families of diseases that share certain characteristics, which is accelerating their ability to identify and rapidly test treatment options to understand how this disease might respond.
The research approach is based on a collaborative, international public-private partnership, and AI is a true force- and staffing-multiplier. They are also able to look at the data through different lenses — region, ethnicity, age range, etc. — to discern whether those differences could affect outcomes. AI could fundamentally change life expectancy for people struggling with rare diseases.
Dhiraj Murthy, Ph.D.
Professor, School of Journalism and Media
Moody College of Communication
What is one piece of advice you would give people who are interested in learning more about AI and taking agency over their use of technology?
Learn about how the technologies actually work. I recommend that people read AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, by Arvind Narayanan and Sayash Kapoor.
What is one thing you want everyone to know about AI technologies?
That AI is decades old, not something that suddenly crept up on us and should be feared. Back in 1989, John Anderson published a piece spelling out some of the origins of human knowledge and machine knowledge. He prophetically stated, “The speed with which a [AI] program will run depends on how cleverly it is compiled into code and on which machine it runs.” Because today’s AI can be near instantaneous due to clever coding and arrays of graphics processing units (GPUs), some think AI just came out of nowhere when ChatGPT became popular.
What is a particularly innovative and socially beneficial use of AI you've seen in the past year?
Using AI to quickly detect health information that may cause harm to people.
What are you most looking forward to from Good Systems this year?
Building accessible online tools for stakeholders in public health to use AI in socially beneficial ways.
Julie Schell, Ed.D.
Assistant Vice Provost of Academic Technology and Director of the Office of Academic Technology
Assistant Professor of Practice, Design, School of Design and Creative Technologies
College of Fine Arts
What is one piece of advice you would give people who are interested in learning more about AI and taking agency over their use of technology?
Experiment. There’s nothing better than hands-on engagement to build expertise and confidence in using AI. I spent over 50 hours experimenting with image-based models and then moved into large language model experiments. I think on average it takes about 10 to 15 hours to learn that AI is not actually better than you. It might be faster, but it is not as good, as creative, as ethical or as interesting as you are in your specific area of expertise. Once you truly grasp this, you can start to learn with AI in both forward and responsible ways.
What is a particularly innovative and socially beneficial use of AI you've seen in the past year?
Imagine you are a student trying hard to learn a difficult concept in one of your courses and that concept is foundational to future learning. You have to work during your instructor's office hours, and your peers are not around when you have time to study. Now imagine that there is a 24/7 tutor available to help you with the concept and that tutor was trained specifically by your instructor on the specific topic you are struggling with. Sure, the tutor is not your instructor, so they won't have the same level of expertise and might not get everything right. However, the tutor is available as long as you need them, they are welcoming, polite and responsive, and they give you as many examples as you need until you feel confident you've got the concept down. In the Office of Academic Technology, we are leading the development of a custom GPT-based AI Tutor called UT Sage to deliver this beneficial use of AI to students and faculty at UT.
What makes Sage different from a tool such as ChatGPT is that we've designed Sage to leverage a potent set of learning science principles. We know that students learn best when they are interactively engaged in their learning, when they feel welcome and a sense of belonging in the learning space and when they can evaluate their learning states and then self-direct and self-regulate their study behaviors to bridge gaps. Sage will offer an AI-based tutor that meets all of these principles and has the added benefit of being trained by our expert faculty.
Ultimately, the vision for UT Sage is to provide a safe and secure way to advance learning with generative AI. We are excited to be working with a cohort of faculty to build UT Sage now (faculty can get involved by email at oat@utexas.edu). We plan to make Sage available to all faculty in the spring of 2025.
Some of the most exciting use cases with Sage reveal how we might be able to use generative AI in both forward (i.e., innovative) and responsible (i.e., safe and ethical) ways to solve longstanding or unyielding teaching and learning problems. For example, one faculty member is thinking about how to use Sage to bridge the content gaps that always exist between laboratory and lecture courses. Others are using it to help teach concepts that students always find challenging and necessary for success in their courses and beyond. Using AI-based technology to solve teaching and learning problems that we cannot otherwise solve is what excites me most about Sage and generative AI more broadly.
What are you most looking forward to from Good Systems this year?
Teaching and learning use cases and projects that show how to engage with generative AI in really creative yet responsible ways!
Keri K. Stephens, Ph.D.
Professor and Distinguished Teaching Professor, Communication Studies
Moody College of Communication
What is one piece of advice you would give people who are interested in learning more about AI and taking agency over their use of technology?
I’ve spent a lot of time observing people who are interacting with AI technologies in what we call Human-AI Teaming. The advice I want to share is to remember that you, as the human, are the expert! Don’t let yourself be lulled into thinking the computer is all-knowing, because it isn’t. It is so important for humans to look over things the machines produce and continue to be the expert when working with AI technologies.
Learn more about Prof. Stephens’ work on this subject here.
What is a particularly innovative and socially beneficial use of AI you've seen in the past year?
The speed at which generative AI systems are being brought behind firewalls and the conversations around productive uses of AI are really exciting. Many people who have only been exposed to generative AI in the past year and a half have no idea how to use it productively and to protect their data. Some are afraid to use it at all, and most people have no training around how to craft a detailed prompt to get responses that can move their own ideas forward. I’m using it in all the classes I teach, and we are focused on where these systems can be most helpful as we carry out our normal work. From brainstorming to refining clarity in assignments I create for my students, we will continue to see innovative and socially beneficial ways to interact with generative AI.
Atlas Wang, Ph.D.
Associate Professor, Chandra Family Department of Electrical and Computer Engineering
Cockrell School of Engineering
What is one piece of advice you would give people who are interested in learning more about AI and taking agency over their use of technology?
Start by building a solid foundation in the fundamentals of AI — learn the basics of machine learning, data science and programming. It's also crucial to stay curious and critically engage with the ethical implications of AI.
What are you most looking forward to from Good Systems this year?
I'm excited to see how Good Systems continues to advance the development of AI systems that are not only technically sound but also aligned with societal values. Their commitment to interdisciplinary research and public engagement promises to drive forward AI that benefits all aspects of society, particularly in areas like ethical AI deployment and community-centered design.
S. Craig Watkins, Ph.D.
Executive Director, IC2 Institute
Ernest A. Sharpe Centennial Professor, School of Journalism and Media
Moody College of Communication
What is one thing you want everyone to know about AI technologies?
Increasing media attention to, and fascination with, AI technologies tends to diminish the role that humans can and do play in the impacts these systems generate. It is important that a greater diversity of people understand that these systems are largely derived from, and dictated by, human values and motivations, including power and self-interest. For AI technologies to deliver on the promise of building a better world, a greater diversity of people will need to participate in how these are developed and deployed.
What are you most looking forward to from Good Systems this year?
I’m really excited about some new research that our team will be doing this year related to the capture and analysis of real-world data to understand the human behaviors, interactions and other factors that impact social and racial disparities in two specific domains: transportation and health. For example, members of our team will be capturing, annotating and applying machine learning to traffic data to analyze pedestrian-vehicle interactions. Previous research from the team suggests that Black and Latinx pedestrians are much more likely than their white counterparts to be victims of fatal pedestrian crashes due to a variety of factors, including pedestrian behavior, yielding variations among drivers and matters related to the built environment.
A second project will be experimenting with the use of sensor-based technologies via wearables and smartphones to provide high-density, automated and objective assessments of interactions between mothers and infants. One of the goals of this project is to develop deeper insights into the daily environments that affect maternal health outcomes. Both of these projects will transform these novel data into algorithmic models to help identify significant patterns that can illuminate how social and racial disparities take form in everyday life situations.
See more expert perspectives from Good Systems leaders and partners in “Looking Forward: Good Systems Leaders & Partners on Opportunities and Challenges in Ethical AI.”
Learn more about Good Systems partnership opportunities here.