Saturday, February 6, 2010

Radical Shock

Make no mistake: the privacy debate is hotter than ever. The recent uproar over Facebook’s new Terms of Service – and then, even more recently, Twitter’s new service terms – is all about privacy, says privacy scholar Helen Nissenbaum. The Internet, she says, has introduced a "radical shock" to our notions of privacy in society, disrupting our long-held distinctions between what is private and what is not.

But what do people mean in today's world of YouTube and Facebook and email when they say their privacy has been violated? What matters to them, Nissenbaum says, is not so much that their personal information has been shared, but whether it has been shared appropriately. That's why personal information, she says, ought to be distributed and protected according to social context—what's appropriate, say, in the workplace, or a medical clinic, or a social network, or a school, or among family and friends.

Today’s privacy policies and rules are not nuanced enough, Nissenbaum says. We've got “one size fits all” protections that either go too far by ignoring these distinctions or fail to go far enough.

“The rapid adoption and infiltration of digital information technologies into virtually all aspects of life, to my mind, have resulted in a schism — many schisms — between our experience of and expectations for privacy today,” says Nissenbaum, the author of the just-published Privacy in Context: Technology, Policy, and the Integrity of Social Life. These gaps, she says, are producing in society “a kind of radical shock, and we need some new ways to talk about privacy.”

I caught up with Nissenbaum earlier this week at her NYU office just off Manhattan’s Washington Square. She is an associate professor in NYU’s Department of Culture and Communication and a Senior Fellow of the NYU Information Law Institute. What follows is an edited transcript of our conversation:

Last week in Davos, social media company CEOs met at the World Economic Forum to talk about the impact of social networks like Facebook and MySpace on society. Reid Hoffman, the LinkedIn CEO, told the group that “all these concerns about privacy tend to be old people’s issues.” He said the value of being connected and transparent is so great, that privacy is not so much a concern any more. What do you think? Is it an “old people’s issue?”

Nissenbaum: [Laughs.] Reid, actually, was one of my students at Stanford, years ago. But no, I totally disagree with those kinds of critiques that say young people don’t care about privacy. Some people say privacy involves withholding information or is the right to control information. But when I see people getting into a flap over privacy, I don’t think that’s what they’re really after. People want to share information; what they care about is the appropriate flow of information. They want the right information to go to the right people and under the right circumstances. They want this “contextual integrity” for the information going around about them. Everybody is interested in privacy under that definition.

Teenagers yell if their parents read their diaries; I have 18- and 20-year-olds in college coming to me all the time, saying, “Oh my god, my 12-year-old sister wants to friend me on Facebook! That’s awful.” I think these are all expressions of a desire for privacy. A number of years ago, at Princeton, where I used to work, I had an alumni event, with an audience of all different ages. I asked those assembled, “How would you feel if you were in a job interview and as a condition of that, you had to yield your medical records?” There was a huge difference in the responses. Older people were much more indignant about that request but many of the younger people said they wouldn’t mind. Does that mean they don’t care about privacy?

You say that individuals shouldn’t be able to control the flow of information.

That’s right. The nuts-and-bolts of my theory says that privacy depends on the social context of information being shared and what’s appropriate for those contexts. Right now, we take information and divvy it up into public information and private information, sensitive or non-sensitive – and then have two different ways of dealing with it. I think that’s problematic. People then get all wrapped up in knots trying to figure out if their IP address is personal or not. I know the EU is struggling with questions like these right now, and it’s a non-starter. Privacy isn’t ‘one-size-fits-all.’

We really need to be much more nuanced and descriptive, and to open ourselves up to the diversity of categories of all types of information and the range of social contexts for that information – and then act appropriately in each situation.

You and I are in a structured situation at the moment. I know, more or less, what you expect of me in this interview and you know what I expect of you. These things are governed by social norms. So much of what privacy is depends on the nature of the information at issue and what our roles are as individuals within a certain social context. And then there are the constraints on the flow of information. You could check out my Web site, for example. And then you could ask a whole lot of people to give you some information about me. And then you could go to ChoicePoint and pay them to write up a whole long report on me. In each of these cases, the way you’re getting information about me is governed by certain information flows and different constraints on the flow of that information. You could ask me some questions directly about myself, and I could choose not to answer some of those questions.

So there are circumstances in which people should control the information about them. But in other instances, this may not be appropriate. Let’s say you’re under investigation for having committed a murder and the police are investigating you, and they want to find out where you were on Friday night at 8 p.m. They may ask you, but ultimately, they must — behind your back — verify where you were at that time. And in this society, we’re not going to allow you to control that piece of information. We want the police to actually ferret out that information by any means. Nobody would say the police violated your privacy in this case, because we understand their need to get it independently of you. I think it’s intuitive.
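
Her framework is usually described in terms of contexts, actors, information types, and transmission principles. As a rough sketch only – the context names, roles, and norms below are invented for illustration, not drawn from Nissenbaum's book – checking a flow of information against context-relative norms might look something like this:

```python
# A minimal sketch of a contextual-integrity check: each norm names a context,
# an information type, the roles of sender and recipient, and a transmission
# principle (the constraint on how the information may flow). All values here
# are hypothetical examples, not Nissenbaum's own formalism.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    context: str      # e.g. "medical", "workplace", "friendship"
    info_type: str    # e.g. "diagnosis", "salary", "location"
    sender: str       # role of the party passing the information along
    recipient: str    # role of the party receiving it
    principle: str    # transmission principle, e.g. "confidentiality"

# Hypothetical entrenched norms; real contexts carry many, mostly implicit, norms.
NORMS = {
    Flow("medical", "diagnosis", "patient", "physician", "confidentiality"),
    Flow("friendship", "location", "friend", "friend", "reciprocity"),
}

def respects_contextual_integrity(flow: Flow) -> bool:
    """A flow is presumptively appropriate if it matches an entrenched norm."""
    return flow in NORMS

# A diagnosis flowing from physician to insurer under a "sale" principle matches
# no entrenched norm, so it is flagged as a prima facie violation.
print(respects_contextual_integrity(
    Flow("medical", "diagnosis", "physician", "insurer", "sale")))  # False
```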

Why did you write this book?

Too much time has been wasted deciding whether this or that piece of information – or this or that place – is private or public. What people really care about is whether information is shared appropriately, within the social context of any given situation.

You say some of this is intuitive. But do we need a set of rules that would lead to public policies that could more intelligently codify these distinctions – to honor what you call this “contextual integrity” of information?

Yes and no. We depend on entrenched social norms for guidance, so there are a lot of people who know already what should be public and private, particularly in the realms of the family. In the workplace, on the other hand, we need to be told what the rules are, and this is where information technology has been a radical shock. There, it’s not good enough just to have implicit behavioral norms, like those which tell you how you should behave at a cocktail party. If you screw up there, it’s not so terrible. But if you’re a doctor, it’s probably a good idea to be required to write down what your responsibilities are when it comes to somebody else’s information.

What is contextual integrity – the theory you put forward in this book?

There are two parts to it. The first asks us to identify the places where people are getting freaked out about information flow and privacy issues and to recognize the kinds of challenges that we’re confronting with technology. And then the second part is the moral part of the theory, which says that not all change is bad. The first part says here’s how we recognize the nature of the change in our expectations about the flow of information. The second part says look, we have much better medical monitoring devices and, using them, we can now save lives, so that’s fabulous.

There are a lot of ways in which we’re being monitored that are good and to our benefit, and there are other ways that aren’t so great. Information that previously was available to your doctor is now being made available to entire consortiums of research institutions and insurance companies and so forth. We need to map these flows and how they’re changing. We need a way of looking at what types of information flows are appropriate so that we can start talking as a society about what works and what doesn’t – or what should. We need to be talking about all of this more intelligently.

Why now?

There are now things we can do with technology that we couldn’t do before – but we, as a society, never really stopped to ask whether we should.

When we are suddenly confronted with something like Google Street View, we have the possibility of surveillance cameras, if you will. Back in the day, it was considered okay if I saw you, so long as you could see me. But now, with Street View, a surveillance image gets posted on the Web and, suddenly, this completely challenges our expectations of how some information flows, and is supposed to flow. Suddenly, there are people who can view you and you have no clue.

So my theory of contextual integrity really pushes for society to map out these technology changes, these points of radical shock where suddenly, information flows in highly unexpected ways and it challenges us. We freak out because it’s so unexpected. And no matter what you say about being in a public place so you should have no expectations, the truth is that you do have expectations – because that’s how life and (information) flow were governed for years and years. My book seeks to acknowledge the changes that information technology brings to our expectations, characterize the changes, and then advocate for us all to get on to discussing whether these changes are good or bad. Who are the winners and losers? Can we regulate the flow of information, or should we?

I mean, first you recognize the changes – such as the massive databases that can be aggregated from distinct sources, and then be used to mine different kinds of information and create profiles that can be used to make decisions about an individual. These are the types of radical, unexpected shifts in the flow of information that my theory seeks to address.
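
To make the aggregation point concrete, here is a toy sketch – the person, data sources, and field names are invented for illustration – of how records collected in separate contexts can be joined on a common identifier into a profile that none of the original sources held on its own:

```python
# Toy data standing in for records gathered in three distinct contexts.
purchases = {"jane@example.com": ["prenatal vitamins", "unscented lotion"]}
location_pings = {"jane@example.com": ["clinic on 5th Ave", "gym"]}
public_records = {"jane@example.com": {"age": 29, "home": "Brooklyn"}}

def build_profile(person_id: str) -> dict:
    """Merge data gathered in distinct contexts into a single profile."""
    return {
        "purchases": purchases.get(person_id, []),
        "locations": location_pings.get(person_id, []),
        "demographics": public_records.get(person_id, {}),
    }

# The merged profile supports inferences, and decisions about the individual,
# that no single source could have supported on its own.
print(build_profile("jane@example.com"))
```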

Hasn’t the legal environment been able to help add clarity to some of this already?

U.S. law has been heavily critiqued because it’s sectoral – based on different sectors. You have, for example, financial privacy and communications privacy and video privacy, and so forth. People have said this is problematic, but I think the U.S. approach has merit because it has in mind particular contexts in which the information flow is occurring. I’m not saying that U.S. law is perfect: ChoicePoint and LexisNexis, for example, are out of control and highly problematic because they bring information from all different kinds of places, take it out of context and fail to respect the norms under which it was shared with other actors – and then make that information available in contexts and under constraints that are inappropriate. This is an area in which the law, hopefully, will catch up. But I think we can do better.

It’s not hopeless. When the FTC, for example, was asked to create privacy rules for the financial industry, I think they did a pretty good job because they were able to focus on very specific types of information relevant to different contexts. For instance, there was an argument about whether your name and address, shown above the line in a credit report, should be public. Credit companies argued that it should be because it’s not financial information. But the FTC said it should be private, because it appears in the context of a financial action. The FTC went to court over it and won, and I thought that was fabulous. When laws are made correctly – with information flows and social contexts in mind – I think they can serve us all well.

Wouldn’t this all be easier if we simply put limits on what data could be archived, an approach raised by Viktor Mayer-Schonberger in his recent book, Delete? Should all the information about us be allowed to exist in digital perpetuity?

I do think information should be deleted, but even arguing for deletion can be a somewhat arbitrary move. Restricting access to information may, in some cases, require deletion, but the word a lot of legal scholars use is “tailor” – we’d want to tailor that deletion appropriately. There may be some instances where we decide there’s a whole lot of information being kept somewhere that should just be wiped out. But we want those constraints to be tied to the specific individuals and the contexts of given situations.

Some of the new mobile devices – from PDAs to the new iPad – are creating completely new contexts for the flow of personal information. Does the mapping of real-time, geographically specific behaviors demand a new definition of privacy?

There’s an interesting re-configuration going on in what we think of as social space. People see their social space differently as a result of social networks and location-aware devices. I think we’re just now being forced to confront the question of geo-location. It’s now becoming a new aspect of information available about people that’s going to force us to start asking these same sets of questions all over again.

On Foursquare, for example, some people feel that by playing, they’ve already given their implicit permission to give up their personal information.

Nonsense. I think that before we start going around saying that anything is implicit in this way, we ought to explore whether it should be. What should the rules be? If you had to sit down and read every privacy policy on the Web or for every device that you bought, it would take you – and I’m making this up – two and a half years, right? [Laughter] Ultimately, a lot of great work on privacy has been written about constraining the flow of information one way or another. But what I want to add to the mix of our discussion about privacy in society is the notion that we have to look at the contexts themselves to determine what’s appropriate, and under which circumstances. Thinking about privacy in this way leads us to ask much bigger questions.

I like working with computer scientists. Together with them, I’ve created a bit of subversive software, such as TrackMeNot, which is aimed at protecting privacy in Web search.
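
TrackMeNot obfuscates a user’s search queries by mixing them with decoys. The sketch below illustrates only that general decoy-query idea – it is not TrackMeNot’s actual code, and the decoy list, timing, and function names are invented for this example:

```python
import random
import time

# Invented decoy queries used purely for this illustration.
DECOY_QUERIES = ["weather tomorrow", "pasta recipes", "used bicycles",
                 "train schedule", "movie showtimes"]

def issue_query(query: str) -> None:
    # Stand-in for sending the query to a search engine.
    print(f"searching: {query}")

def search_with_cover_traffic(real_query: str, decoys: int = 3) -> None:
    """Send the real query hidden among randomly timed decoy queries,
    so the search log no longer cleanly reflects the user's interests."""
    queries = random.sample(DECOY_QUERIES, decoys) + [real_query]
    random.shuffle(queries)
    for q in queries:
        issue_query(q)
        time.sleep(random.uniform(0.1, 0.5))  # jitter to avoid an obvious pattern

search_with_cover_traffic("symptoms of measles")
```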

We’ve also created something called Adnostic, which is supposed to help against online behavioral targeting. And there’s another project we’re working on about court records and placing them online in certain circumstances.

So many of the questions about privacy and what’s appropriate that come up when we’re creating this software take us back and force us to ask what the functions of our institutions in society are. Because of technology’s challenge to previous flows of personal information, we find ourselves almost having to go back to these first principles, even asking, what are the purposes of the court? What are records? That sort of thing.

For example, with the courts, if you don’t take care and simply dump everything onto the Web, including the names and addresses of jurors, then maybe the next time you get asked to serve on a jury, you will struggle hard to avoid it, and that won’t promote the values of the court. It will make the courts function worse, forcing us to reach all the way back to consider the roles of the institutions themselves.

Very delicate considerations need to be embedded in these technologies.

Where has technology changed the traditional flow of information most radically – to the point of what you refer to as “shock status”?

One is in monitoring and tracking. This isn’t visual anymore. It’s online and it can happen when you’re interacting with your supermarket. Second is this arena of aggregating information and analyzing it. It’s all behind-the-scenes and it’s driving a lot of the monitoring, so people are not so obviously aware of it. Sometimes, some little surprising thing happens and you think, hmmmm, I wonder how they knew that? And then, if you’re thoughtful, you realize that somebody has a database somewhere. But it’s not in your face.

Third, there’s the worry about communications and media because this is not just about information that sits in a database somewhere. It’s about distribution. This is Twitter and Facebook and blogs and email. In information science, this whole notion of aggregating information from different sources and then using it to profile people – to see if they’re terrorists or good mortgage prospects – it’s very cutting-edge stuff, involving statistical techniques and operations research. But here’s the problem. It’s not directly experienced except in the ways your bank will reply to you.

Are you hopeful about the future of privacy?

My hope level is in constant flux. When I think of the vast back end of information aggregators interacting directly and indirectly with personal information, such as Google, Choicepoint, ISPs, government agencies, and financial conglomerates, I fear the worst. I worry that the landscape of incentives will swamp just about any moral consideration we might bring to bear. At the same time, I’m buoyed by the growth in size and quality of privacy scholarship and practice, the guile, brilliance, and insubordination of computer hackers and NGO players. And sometimes, watershed events can be enormously important; grim as it is, the Google/China debacle may turn a few heads.


-- Marcia Stepanek

[Editor's Note: This interview was originally published on PopTech.com and is being reposted here with permission]
