Going online initiates information exchanges. We go to a website, open an email, or check on someone’s status update, and we receive information. Usually, we also give some up. The website tracks our activity on the site, and it may leave something called a “cookie” in our browser that allows it to track certain other places we visit. The email can only be opened in a specific account, and it may send information back to the sender the moment it is opened. And to look at a Facebook status, we need to be logged in to an account with permission to see it.
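To make those mechanics concrete, here is a minimal sketch in Python of both pieces: a cookie that lets a site recognize the same browser across visits, and an image request that reports back when an email is opened. Nothing here is taken from any real site; the paths, names, and cookie format are invented for illustration.

```python
# A minimal, hypothetical sketch of the two mechanisms described above:
# a cookie that lets a site recognize a returning browser, and an image
# request ("tracking pixel") that reports back when an email is opened.
# All names and paths are invented; run with: python3 tracker_sketch.py
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackerSketch(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/open.gif"):
            # An email could embed <img src="http://localhost:8000/open.gif?msg=123">.
            # When the mail client fetches the image, the sender learns that the
            # message was opened, plus the reader's address and mail client.
            print("email opened:", self.path, "from", self.client_address[0])
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.end_headers()
            # A real tracker would return a 1x1 transparent GIF body here.
        else:
            # An ordinary page view. If the browser has no cookie yet, hand it a
            # persistent identifier; the browser sends it back on every later
            # visit, which lets the site tie those visits together.
            cookie = self.headers.get("Cookie")
            self.send_response(200)
            if cookie is None:
                cookie = f"visitor={uuid.uuid4()}"
                self.send_header("Set-Cookie", cookie + "; Max-Age=31536000")
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            print("page view:", self.path, "by", cookie)
            self.wfile.write(b"<p>Hello again.</p>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackerSketch).serve_forever()
```

The third-party version of this works the same way; the only difference is that the server setting the cookie is embedded, as an ad, a button, or another pixel, on many unrelated sites, so the same identifier comes back from all of them.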
The current web economy is built on data about the people who use it. The winners are those who grab the most data and use it well. But we struggle with this, because it often runs up against our competing needs for privacy, anonymity, and identity.
Remember the television show “Cheers”? It’s about a bar where “everybody knows your name.” The same people always go to the same bar and sit in the same places. One of our basic human needs is to have a place like that, an environment in which people know who we are. Or, more precisely, they know us as the person we want to be.
On the other hand, we don’t want them following us home. Or into the bathroom. Or watching us when we take a potentially embarrassing risk like asking someone on a date. In the real world, we have a very complex set of rules about when to watch someone, when to pay attention, and when we deliberately don’t catch a person’s eye because that would trigger an acknowledgement that we just don’t have time for right now.
In the real world, we also give and receive constant signals about each other’s lives and backgrounds. We can read a great deal about a person’s education, socio-economic status, religion, authority, and political leanings, but in many situations the polite thing is to ignore all these signals. When someone starts accumulating a disproportionate amount of information about another specific person, it seems “creepy.” There is no standard amount of data; it’s very contextual. But we tend to recognize when that contextual norm is being violated.
We often get it wrong. I lived, for a few months, in a small town in Virginia. So small that people I’d never met knew me as “the guy who’d left the top down on his convertible and it rained.” This identification was enough even for the people at the bank. A waitress at one of the two restaurants took it into her head that I always drank my coffee with cream and two sugars. Every time I arrived, the coffee was ready, and I was too polite to complain. I drank the coffee, perpetuating the misidentification.
One conception of the Web is that it’s some kind of giant store, or maybe a big magazine stand, that we browse anonymously. Nobody in the store ever knows who we are, but someone stands by in case we need something. There are no store cameras, no customer loyalty programs. If we buy porn, it doesn’t go on our permanent record. I am not sure why this sounds ideal, but I think it is because we don’t want to be committed to anything. We don’t want to have our coffee preference known in advance. We’re afraid that if we pick up a pair of shoes, the salesperson will be disappointed if we don’t buy them. If we try on a sweater, people might see us as the kind of person who wears “that” type of sweater. We want to experiment, safely.
Of course, whenever we add content to the Web, we have the opposite feeling. If we post an article, we want to know who read it. If we send out newsletters, we want to know whether they were opened. If we’re selling t-shirts, we can barely keep ourselves from following customers through the store, constantly asking whether they like the design and, if not, why not.
Online, instead of politeness separating out these competing drives, we have algorithms. Algorithms don’t do a good job of adjusting to context or adapting to individual feelings about privacy and anonymity. Algorithms divide people into very broad categories. Yes, we can get a lot of feedback on a specific person’s product preferences. We may even find the best time to offer them more information. But an algorithm can’t read that little signal that says “you’re getting a little too close, a bit too persistent.”
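As a rough illustration of what “very broad categories” can look like, here is a deliberately crude sketch. It is not any real vendor’s system, and the labels and thresholds are invented, but the shape is typical: a person’s history is reduced to a bucket so they can be addressed in bulk.

```python
# A deliberately crude, invented sketch of audience segmentation: a person's
# activity is reduced to a coarse bucket. Real systems use fancier statistics,
# but the output is still a label, not an understanding of context.
from dataclasses import dataclass

@dataclass
class Visitor:
    pages_viewed: int
    purchases: int
    opened_last_newsletter: bool

def segment(v: Visitor) -> str:
    if v.purchases >= 5:
        return "loyal buyer"
    if v.opened_last_newsletter and v.pages_viewed > 20:
        return "highly engaged"
    if v.pages_viewed > 20:
        return "browses, never buys"
    return "casual visitor"

# None of these buckets can represent "you're getting a little too close."
print(segment(Visitor(pages_viewed=42, purchases=0, opened_last_newsletter=True)))
# prints: highly engaged
```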
This is just the introduction to our Privacy, Anonymity, and Identity series. We will be exploring these topics in more depth over the next several weeks. If you have comments or suggestions, please let us know.
Sign up below to receive updates about what we are up to.