Privacy Preferences and Values for Computer Vision Applications


Technology is transforming people’s lives, but it is a constant struggle to ensure that technology designs address people’s values and preferences, especially those of traditionally underserved groups. Computer vision, for example, empowers individuals with vision impairments, but it also raises privacy concerns: service providers risk leaking private information when collecting and sharing pictures.

This research project addresses the conflict between convenience and privacy inherent to computer vision, with the goal of developing future computer vision technologies that support diverse users with visual impairments, especially those who are traditionally technologically underserved. The research identified what users value, what concerns they have, and what they prefer, in order to develop privacy recommendations for inclusive computer vision technologies.

Methodology and Findings  

The research team completed an investigation into the values and privacy concerns that individuals who are blind or have low vision (BLV) hold in the context of their use of Visual Assistance Technologies. The investigation resulted in a publication at the 22nd International ACM SIGACCESS Conference on Computers and Accessibility.

In this publication, findings are reported from interviews with N=20 participants who are totally blind (M=9, F=11; ages 22-73). The interviews included open-ended and short-answer questions about users’ experiences with Visual Assistance Technologies, their everyday privacy concerns, and their concerns about 22 specific types of image content that may be considered private when shared with the general public, with Visual Assistance Technologies that employ humans, and with Visual Assistance Technologies that employ computer vision.

Two key novel findings can be noted and used to improve the design of privacy-aware technology for people who depend on VIDS. First, the project team identifies the types of information that BLV people consider of greatest privacy concern. This information is conveyed through an easy-to-reference table that lists each private information type and its priority. Computer vision developers may use the table as a taxonomy from which to create datasets that train algorithms to better recognize content containing private information, with a focus on the content types that matter most.
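To make this concrete, such a taxonomy could be encoded as a simple priority mapping that a recognition pipeline consults before an image is shared. This is a minimal illustrative sketch: the content types and priority levels below are hypothetical placeholders, not the study’s actual table.

```python
from enum import Enum

class Priority(Enum):
    HIGH = 3
    MEDIUM = 2
    LOW = 1

# Hypothetical subset of a private-content taxonomy; the real table
# in the publication defines 22 content types and their priorities.
PRIVATE_CONTENT_TAXONOMY = {
    "credit_card": Priority.HIGH,
    "prescription_label": Priority.HIGH,
    "home_address": Priority.MEDIUM,
    "book_cover": Priority.LOW,
}

def flag_for_review(detected_labels, min_priority=Priority.MEDIUM):
    """Return detected content types whose priority meets the threshold,
    so the pipeline can warn the user before an image is shared."""
    return [
        label for label in detected_labels
        if PRIVATE_CONTENT_TAXONOMY.get(label, Priority.LOW).value
           >= min_priority.value
    ]

print(flag_for_review(["credit_card", "book_cover", "home_address"]))
# ['credit_card', 'home_address']
```

A mapping like this would let developers prioritize training data collection for the high-priority types first, rather than treating all private content as equally sensitive.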

Second, the team identifies the perceived risks, benefits, and trade-offs that people who are BLV encounter when using AI-powered VIDS. For example, many participants use AI-powered VIDS to access information even though they mistrust, or are unaware of, how these services handle their data. Underlying these risks, benefits, and trade-offs, participants were observed to hold five values: anonymity, accountability, trust, choice, and independence.

Select Publications

Abigale Stangl, Kristina Shiroma, Bo Xie, Kenneth R. Fleischmann, and Danna Gurari. “Visual Content Considered Private by People Who are Blind.” In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’20). Virtual Event, Greece. ACM, New York, NY, USA. October 26–28, 2020.