Data and integrated technologies can transform cities. Sensors and artificial intelligence can assist people crossing busy streets, provide broadband access at bus stops, or guide drones that help fight fires. These are what we call smart cities: using technology to improve transportation, healthcare, emergency services, energy, and water infrastructure so that they function reliably and efficiently for residents.
I have been researching and working in and with cities my entire career. I have focused on the nexus of technology, governance, and equity in areas such as housing, transportation, and healthcare. My interest in smart cities stems from my belief that a truly smart city must improve service delivery and quality of life for all its residents.
Success in this arena requires looking beyond the promised gains of smart city technologies to the full range of possible outcomes from using these tools, both good and bad. Innovation must not sacrifice civil liberties in a bid for progress. Some cities have rushed to purchase facial recognition technologies to help law enforcement identify criminals, only to discover that the systems were biased against people of color and women and raised numerous privacy concerns. Often these systems were deployed disproportionately against immigrants and lower-income residents, or they were simply inaccurate.
Additionally, many cities have procured drones, cameras, and other surveillance technologies without outlining misuse policies or options for redress. In Port Arthur, Texas, police used automatic license plate readers to target residents with traffic tickets, which exacerbated financial difficulties for Port Arthur’s poorest residents, who became indebted to the city. In New York City, Wi-Fi kiosks sell residents’ personal information to advertisers and provide that data to law enforcement personnel. Smart tools like cameras and location trackers promise enhanced safety or service delivery but can be, and have been, used to surveil and harm the very residents they were supposed to help.
The repercussions of these tools highlight the need to fully prepare for the intended and unintended consequences of these technologies once put into use. What are the costs despite the societal benefit? Who owns the data and how is it stored? Whom is it shared with? Can people be identified, monitored, followed? If so, how, when, and by whom? To be truly smart, cities must have answers to these questions long before a tool is implemented. People’s privacy and autonomy must be protected, and cities must be transparent in their justification and use of technologies. Innovation should strengthen, not undermine, the relationship between residents and their government.
At The University of Texas at Austin, the research grand challenge Good Systems seeks to answer these questions with the goal of ensuring that AI is beneficial — not detrimental — to society. Technology can have harmful outcomes, so we must have an ethical framework based on human values to evaluate these technologies.
In February 2020, Good Systems began working directly with the City of Austin to co-organize a workshop that would build relationships between UT Austin and city agencies. This event resulted in seven city-university collaborations, including one with Austin-Travis County EMS to design an algorithm that improves ambulance response times as well as my team’s project, Smart Cities Should be Good Cities: AI, Equity and Homelessness.
As part of my team’s work, we are partnering with local government and nonprofit homelessness service providers to understand the data needs for providing tailored services, such as linking people experiencing homelessness to emergency shelters on a particularly cold night or connecting individuals to substance use and mental health support. Our qualitative research team has completed more than 80 surveys with people experiencing homelessness and more than 30 with service providers so far, and stakeholder insight guides the project at every stage. Meanwhile, our machine learning team has improved our AI-assisted decision-making tool, which can make sense of large amounts of data, by carefully measuring and reducing biases against underrepresented and underserved communities.
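To make the idea of "measuring bias" concrete, one common starting point is to compare how often a decision tool recommends a resource across different groups. The sketch below is a hypothetical illustration of that kind of audit (a demographic parity check); the data, group labels, and function names are invented for this example and are not our project's actual tool or data.

```python
def selection_rate(decisions):
    """Fraction of cases where the tool recommended a resource (1 = yes)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates across groups.

    A gap near 0 suggests the tool recommends resources at similar
    rates for each group; a large gap flags a disparity to investigate.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical recommendations for two illustrative groups
outcomes = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 0.375
}

print(demographic_parity_gap(outcomes))  # 0.25
```

A check like this is only one lens; in practice teams compare several fairness metrics and, crucially, pair them with the kind of qualitative stakeholder input described above.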
As one of my graduate research assistants, Destiny Moreno, says, “Technological progress carries a big risk of disproportionately harming or excluding certain groups.” Moreno is part of our Smart Cities Should be Good Cities project investigating the usefulness of AI for effective allocation of permanent housing resources, showcasing one way that cities can leverage emerging technology to benefit underrepresented populations. She also studies how technology affects criminal justice systems and policing strategies.
Moreno makes the case that far too many municipal projects presuppose the need for technological intervention, drawing resources and attention away from alternative solutions that might have been less costly or more appropriate for the community. Projects can also rush past establishing connections between those implementing a service and those receiving it, risking deepened distrust of and dissatisfaction with local government among residents.
For example, people in Singapore lost trust in their COVID-19 contact tracing mobile application after it was revealed that its data was shared with law enforcement despite assurances otherwise. Deployments like these move something once thought of as science fiction into people’s backyards without soliciting community input or establishing proper guardrails, or they threaten public spaces through overzealous, unchecked data collection.
Our goal at Good Systems has been to first identify local challenges and determine whether there is, and should be, a role for a new technology. If there is, we can help explore how to deploy the technology in such a way that upholds — not erodes — democratic principles.
We do not want smart cities to be code for surveillance cities, in which innovation compromises freedom, civil rights, and privacy. Therefore, we recommend the following framework:
- lead with the problem that needs solving and not the solution that should be applied,
- put residents’ needs at the center by engaging with them directly through surveys, interviews, and focus groups,
- build collaborative partnerships with government agencies, nonprofits, and community groups to co-create solutions, and
- protect residents’ privacy, freedom, and autonomy when they are in public spaces.
Perhaps the most defining characteristic of a smart city is its commitment to long-term societal benefit. Smart cities cannot be short-sighted. They cannot view technology as a shiny, new toy to deploy for its own sake. Instead, they must investigate all facets of a tool to determine if it adequately addresses the problem at hand. They must weigh both the capacity to cause harm and to produce benefits — and then put technology to work for residents so they can feel safer, get places faster, and live healthier lives. Smart cities hold a lot of promise, if created responsibly.