Ethics Belongs in Industry — Here’s How We Ensure It’s a Priority

February 26, 2021

Issues surrounding the ethical use of AI make front-page news all the time. In 2020, Amazon paused police use of its facial recognition software in response to rising criticism from civil rights activists about its tendency to misidentify Black and Hispanic people. The use of video surveillance tools to build smart cities, such as cameras deployed to improve traffic flow, has raised privacy concerns as well. Films like “Coded Bias” and “The Social Dilemma” raise questions about implicit bias (when people unconsciously hold attitudes toward others or associate stereotypes with them) in AI and on social media. On these platforms, for example, algorithms match consumers’ past behavior to predict what they want to see next, leading people to seek out more content that confirms their existing beliefs.

Ethical concerns regarding modern technology seem to show up everywhere.

These issues give rise to questions about whether industry is ready to face the challenges associated with AI and how ethical considerations can be implemented in the day-to-day workflow at technology companies.

As a graduate student in the School of Information, I recently completed a capstone project that looked at how companies can ensure their practices meet ethical standards. I worked with a local Austin startup that offers AI and machine learning services. The company allowed me to interview employees, observe its weekly ethics discussions, and analyze its ethics Slack channel to see whether and how ethical considerations play into employees’ day-to-day jobs.

To compare this startup’s approach with those of its peers, I conducted a Qualtrics survey of around 70 participants from technology companies in Texas and beyond. Respondents ranged from CEOs to software engineers and computer scientists and came from companies of varying sizes, though mostly large ones.

My research produced surprising results. While almost all survey participants found ethical concerns “very” or “extremely” important, about half thought their company faced ethical issues only a couple of times a year or not at all. This stood in stark contrast to other respondents, who believed ethical issues arose multiple times a week or even daily.

I wondered what caused the huge discrepancy in responses.

During interviews I conducted at the local startup, ethical issues such as built-in biases, equity, and unintended consequences came up frequently, both in recurring team discussions and on the company’s ethics Slack channel. That keeps these issues top of mind, so employees are more likely to recognize them when they arise. This may not be the case at most companies, though.

Only about half of the companies I surveyed hold regular ethics discussions, some only once a year as a formality, or raise ethical concerns only during onboarding or annual training events. About half of the respondents did not know whether their company had an ethics officer or, if it did, what that person does. The responses about the types of ethical issues companies face further confirmed this lack of awareness: fewer than half mentioned biases, transparency, or unintended consequences as concerns, and fewer than a third mentioned equity.

It’s not part of their corporate culture.

To make it so, ethics has to be infused into nearly every part of a company, from attracting, hiring, and retaining ethical employees to internal auditing, product development, marketing, and community engagement. And larger companies may need to take different approaches than smaller ones, such as instituting formal procedures and roles: routine audits, ethics officers, ethics hotlines, and whistleblower protections.

My hope for the future is that a growing number of public interest technologists, who work to use technology for the public good, will help ensure that ethics is embedded in companies’ daily processes. They could raise awareness of potential ethical concerns, build a company culture of responsible technology use, and guide companies in finding ways to implement ethical behavior in the daily workflow.

About Tina Lassiter

Tina Lassiter is a 2020 graduate of the School of Information at the University of Texas at Austin. She originally worked as a lawyer in Germany, specializing in information technology, intellectual property, and international law. On returning to graduate school, she took a special interest in ethics and AI. She worked as a graduate research assistant for the Good Systems Grand Challenge project “Bad AI and Beyond,” participates in ongoing research on AI and recruiting, and partnered with the local Austin tech company KUNGFU.AI for her capstone project.

Grand Challenge: Good Systems