We Should Have the Right to Disappear

Facebook’s #10YearChallenge shows the creeping paranoia around facial recognition. Is it time to fight to be unseen?

There’s a new trend among Facebook users. The #10YearChallenge asks users to juxtapose a photo of themselves from 10 years ago against one from today. The meme has sparked questions about whether it’s a ruse designed to help Facebook perfect its facial recognition software. This is unlikely, but it points to widespread fears about the fate of our faces in an era of expanding A.I. — and what we might need to do about it.

The theory claims that while the social network already has all the photos it might need to identify you now (and a decade ago), getting users to verify both images by posting them side by side shortcuts the potentially time-consuming work the program would have to do to make that positive identification on its own.

But not everyone’s convinced.

“If you’re one of the 350 million people or so who’s been on Facebook since 2009 — or if you’ve uploaded older photos to the platform after joining — the world’s biggest social network already knows what you look like now, in the past, and probably in the future, too,” New York’s Max Read wrote Wednesday. “If I were going to build a conspiracy theory around the meme, I might suggest that it was planted by a social network, not as a way to secretly extract data, but as a way to secretly drive engagement.”

But that debate is almost beside the point. The larger takeaway is that, even if it’s not a secret way for either Facebook or anyone else to inform a facial recognition program, people think it could be.

That speculation is valid and expected. After all, we are increasingly surrounded by machines that take pictures of our faces, with and without our permission. And we are becoming more aware of what those photos can be used for, and what kind of access they might grant or limit, depending on who we are and where else our face has been seen. Can we get on an overseas flight? Can we buy groceries? Can we jaywalk? Doing these common things is already, or will soon be, dependent on facial recognition.

All of which means guessing games like this one — about whether or not a Facebook meme is an attempt to perfect these systems — are just the first of many that will inevitably follow as facial recognition spreads. We will begin to wonder (if we don’t already), each time our image is captured by a camera, whether our photo is being used for its stated purpose or for something else entirely. That will make us increasingly paranoid.

For many, Jeremy Bentham’s “panopticon” — a jail in which the prisoners are always potentially being watched by a jailer — is an insufficient analogy for the current surveillance climate, where a more pervasive and overarching form of surveillance has been created (Bentham didn’t, and couldn’t, account for GPS or ad tracking, for instance). Nevertheless, the idea that the constant feeling of surveillance affects our behaviour has held for centuries. By and large, we act differently when we feel we’re being watched. It’s a subtle form of control, but it is control nonetheless.

How do we take it back?

The internet erases time, most poignantly when it comes to personal history. Online, there is frequently no separation between who we are and who we once were.

Not that long ago, a different kind of conversation arose about how much we’ve changed, and how much online platforms like Facebook or Google remember. At that time, the concern wasn’t about what information could be accessed by analyzing our faces, but what could be discovered by simply knowing our name.

As we began to more fully incorporate platforms like Facebook, Twitter, and LinkedIn into our lives, and filled them with our personal information, and as that information — along with other public records — became accessible via a simple Google search, we speculated about what we might be unknowingly revealing about ourselves. Or how much someone could discover about us at the stroke of a key. And we wondered then, too, about the control we might be granting others over our lives.

Out of that came a decision, not yet adopted universally, but generally accepted as sensible practice: we should have the right to be forgotten. We should have the right, in other words, to have our digital past erased so as not to be a hindrance, nor a source of external pressure on our present or future. We should have the right to be in control.

Maybe it’s time we started expanding the scope of the memory hole into which we fight to cast our digital doubles — those datafied mirror images of us that sit somewhere beyond the screen, against which our real lives are measured, evaluated, and judged.

We should reframe things to reflect the expanding scope of the surveillance that surrounds us: the smart devices now in every room of our homes, attaching themselves to more and more parts of our bodies. Because the right to be forgotten still implies that our names and faces and data are being collected and will be used, and will only be discarded — forgotten — at a later date. The way things are moving, this may prove insufficient.

Because what the Facebook photo meme controversy shows is that the control exerted by the broadening consumer surveillance apparatus is already taking hold. We are already beginning to act like panopticon prisoners, increasingly wary of everything we do as we do it.

This prisoner mindset contributes to what David Lyon calls the surveillance imaginary: “How the various features of what has been called the surveillance society influence how people picture themselves in their social arrangements and relationships, such that in ordinary everyday life they include and even embrace surveillance in their vision of how societies are ordered and their roles within that.” That is, it helps affirm new assumptions, not just about how we live our lives, but about how our society works, including the roles of data and privacy — namely, that data is necessary to properly organize everything, and that privacy can and should be traded for access to products or services.

As facial recognition technology proliferates, these assumptions will become more entrenched than ever. It will be tougher and tougher to evade or escape them, to find a corner behind which to hide. And we may find that asking the A.I. program to simply forget what we’ve done or that we existed will not be good enough. Instead, we may begin to wish we were simply invisible.

And so we might need something more than the right to be forgotten. Just as we can choose to have parts of our past hidden from the search engines, we might need the ability to stay hidden from the cameras that will soon be nearly inescapable. To force the system not to see us.

We will need the right to disappear.
