Core Research Projects
In 2021, we launched a set of six core research projects which explore critical areas within Ethical AI: racial justice, surveillance and privacy, misinformation and disinformation, smart cities, living and working with robots, and smart hand tools and the future of work.
AI and the Future of Racial Justice
Explores racial disparities in AI-based systems and seeks to design and implement solutions in the areas of public safety, transportation, and health.
Being Watched: Embedding Ethics in Public Cameras
Investigates the social acceptance of cameras and video data and how to develop technical solutions that will satisfy privacy concerns.
Designing Responsible AI Technologies to Curb Disinformation
Employs machine learning to understand how disinformation arises and spreads and how to design effective human-centered interventions.
A Good System for Smart Cities
Seeks to build a system that will link city datasets to predict the effects of urban development projects, including Austin’s Project Connect.
Living and Working with Robots
Overcomes the technical and social hurdles to deploying robots by building and studying them in the communities where they will be used.
A lack of affordable housing is a major problem in US cities from the Bay Area to Boston. Austin is no exception. In 2015, the Austin-Round Rock metropolitan area was named one of the most economically segregated areas in the country.
This project reports on how media representations shape public perceptions of AI and then uses findings to explore how Good Systems might better represent everyday interactions with AI to the public.
This project develops methodology and workflows for libraries, archives, and museums to use machine learning and supercomputing resources to generate metadata for AV materials in the humanities.
This project investigates comparative policies around the creation and use of video data in the public sector. As more cities deploy monitoring and sensing technologies, cameras are on the front lines of data gathering in traffic, policing, and health and safety.
Older adults are especially vulnerable to believing and circulating disinformation online. This project aims to enable this population to use social media more responsibly.
This project examines how human-centered approaches to assessing bias and fairness can address a critical gap in research on algorithmic fairness.
This project designs and prototypes new ways to find, interpret, and evaluate online information with the goal of helping to combat rampant misinformation. It studies how people evaluate and integrate information from disparate online sources, focusing on fact-checking as it relates to the COVID-19 pandemic.
This project examines the temporal dynamics of emotional appeals in Russian campaign messages used in the 2016 election on Facebook and Twitter.
This project investigated how data ethics can be a point of departure in designing and evaluating good systems, examining the contradictions and pressure points among various data practices.
This project examined how children from groups underrepresented in STEM programs understand, interact with, and evaluate AI-driven digital assistants.