‘People conflate privacy with secrecy’ — An Interview with Good Systems Symposium Keynote Speaker Helen Nissenbaum

March 19, 2024
Helen Nissenbaum

Privacy is often confused with secrecy, and secrecy is often viewed negatively: it blocks promising avenues and slows innovation, among other drawbacks. Privacy, however, is a far more nuanced, often empowering, and crucial right, especially in an age in which ubiquitous technology has the power to track our every move. Philosopher Helen Nissenbaum is a leading authority on privacy and the architect of a framework that captures the positive contribution privacy makes to all of us: her theory of contextual integrity (CI), which defines privacy as the appropriate flow of information.

We spoke with Nissenbaum, in advance of her keynote address at the upcoming Good Systems symposium, to learn more about CI, how it might be incorporated into ethical AI, and why she’s just as excited to attend the event as she is to be one of its keynote speakers.

You developed the concept of contextual integrity about 20 years ago. How would you summarize it?

It's a concept that's supposed to address the way individuals respond to the systems in our lives that track us and profile us and respond to us. These digital systems have been big for a while, but they’ve become more massive with AI, and even more so with generative AI. People feel an onslaught, and they may express their concern as, “My privacy is violated.” So, there's a need to understand, What's the nature of the threat that the term “privacy” captures in this kind of environment? The theory of contextual integrity responds to that question and explains what it is that’s concerning about the way data is recorded, used, and integrated into decisions about us.

What inspired you to conceive it?

The concept grew out of the existing theories of privacy. I looked, and it seemed to me, first, that neither the regulations protecting privacy nor the philosophical understandings or conceptions of privacy at that time were able to capture what the technology was doing. And it struck me that [with] some conceptions of privacy, people conflate privacy with secrecy. They may say you're giving up your privacy when you're sharing information or when information about you is being captured. But it didn’t seem that that was the problem. People are sensible. We share information with one another, and it's very productive. It greases the wheels in our social life, in our commercial existence, all walks of life. We understand that sharing information is not a problem. What people were upset about was sharing information inappropriately, not sharing [in general].

Why is it called “contextual integrity”?

To talk about appropriate flow is to say that social life is not one undifferentiated space; there are different contexts that make up social life — education context, health context, political context, family context, and so on.

And then one further step is that these contexts are identified in terms of the purposes they serve, and this can be very complicated. The idea is that data flows should serve the context. That's how we get to “contextual integrity.” We say appropriate data flows serve the integrity of the context.

Can you give me an example?

Take healthcare. Let’s say we agree that health equity is a value in a just society. If you're sick, you get treated, not if you're rich or good looking or connected or educated. So, there are purposes in the context, and there are values in the context, and ultimately the flows of information — the best rules that we could have — are rules that serve the purposes.

If we didn't feel confident that the information was treated in the right way, we may choose not to see our physicians, or we may lie about our symptoms, which is bad for us but also bad for society: you didn't take that COVID test because you were concerned the results would be made public, and now 20 people are infected.

What about something we all encounter daily, like location tracking?

That’s an interesting example because people are really savvy about certain things, like location tracking. When you look at how they respond, it makes a difference whether the location information is going to the FBI, your family, or an advertiser. They’re discriminating about that. It's not that they say nobody should know my location. They say, “Yes, location is a sensitive piece of information, but I say yes to these people and no to those people.”

How does CI apply there?

Sometimes we may not be able to predict [how CI fits in], in part because when there's new technology, we don't yet know what a good rule is. For example, when Congress passed the Telecommunications Act of 1996, part of what they were doing was recognizing that these telecom companies play a certain important role in our lives, and we hadn't properly regulated the services they provide. So they included privacy regulation, because it takes a while to have a sense of what role these actors are playing. And I think in tech, we're very much in the space of learning what these companies are doing. Think about Facebook or Meta and how our understanding of the role they’re playing in society has evolved.

How does AI fit into all this?

I have an article called “Contextual Integrity Up and Down the Data Food Chain” that touches on this a little. Even before AI [was ubiquitous], it was algorithmic decision systems, it was machine learning, it was data science — all these practices are now encapsulated by this term “AI,” and it’s difficult to be precise. So, a lot of what I say is about AI under the new definition of AI. Mainly, it's a phenomenon of decision systems that are based on data.

So, when you're making a decision that a human might make, like whether you're going to get a mortgage or not, now we would say, “Oh, you use AI to make it.” What we really mean is that we've trained our algorithms on personal data; now they are predicting whether you're going to be able to pay. So, it's about AI in that sense. The hand-wavy part is generative AI. CI actually can help with privacy issues regarding generative AI in ways that other theories of privacy are not equipped to do, and that has to do with the ends and purposes we have to look at to establish whether a certain practice involves inappropriate data flows.

This is still very much a work in progress. I have a lot to learn in this space myself, so I was delighted to be asked to join the discussion at the Good Systems symposium. I'm happy to talk about my work, but I'm looking forward to what I can learn from being there, because I always learn a lot from everyone else.

 

Helen Nissenbaum is the Andrew H. and Ann R. Tisch Professor of Information Science and the founding director of the Digital Life Initiative at Cornell Tech. RSVP here to see her keynote address, “Positive Thinking about Privacy,” at Good Systems’ annual symposium.
