“Cat food. Cat food. Cat food.”
Recently, a friend of mine, who does not own a cat, repeated this phrase over and over as he sat at my dining table, searching something online on his phone.
His theory is that his phone is constantly monitoring what he says — what we are all saying — and he set out to prove it by loudly talking about something he would not otherwise ever search for. If at some point ads for cat food or other cat-related products suddenly started appearing on webpages or in his social media feeds, he thought, it might suggest that merely talking about something around a smartphone would trigger an ad. A few weeks later, he’d still not seen a cat food ad, but had seen one for something cat-related.
He is not alone in suspecting this is how things work. People recount instances where they’ve talked about a product — or merely thought about one — and, pop, suddenly there it is on their phone. It’s even happened to me. A friend told me about a children’s movie he was taking his daughter to see. I’d never heard of it, let alone searched for it. Later that evening, an ad for that film appeared in my Instagram feed.
There are, of course, plausible explanations for these occurrences, other than that our phones are listening in on everything we say.
The cat-related ad in my friend’s web browser might have been placed in front of his eyes by an unknowable algorithmic calculation that deduced, from a number of factors other than his muttering about cat food, that he might need a cat product. The movie ad I saw in my Instagram feed may have appeared because my friend searched for showtimes whilst in my house, and a program decided there was a good chance other phones nearby might be joining him at the film.
Still, it’s difficult to rationalize — especially when the stories competing for ad space in our social media feeds detail how modern computer assistants like Google Home or Amazon’s Alexa and Echo are constantly listening to our conversations and, in some cases, repeating that information without our knowledge.
Last week, a couple discovered their Amazon Echo — one of the company’s computer assistants — had accidentally picked up on a keyword that it interpreted as a request to record their conversation. Echo subsequently heard another phrase they used as an instruction to send that recording to their friends, which it dutifully did. Amazon said it was an error and reassured customers that it was an “unlikely string of events.”
But what is it about these episodes that is so unnerving?
Over the past few years, fierce debate over freedom of speech has erupted in western society. Without detailing the arguments too closely, suffice it to say that, as it relates to speech on technology platforms, the debate has focused largely on political language. For instance, the accusations that Facebook’s trending news topics were biased against conservative-leaning sources and stories. Or accusations that Twitter is either too liberal or too libertarian — depending on what day it is, and who’s been offended — regarding what people say on its platform.
But by concentrating on whether tech platforms limit certain political language or not — which is a conversation that is inherently controversial and often undertaken with a political agenda in mind — we have been distracted from thinking about other sorts of boundaries technology has erected around speech that might affect us in a more profound way.
In 2016, Google’s Year in Search review had an accompanying ad campaign featuring its familiar search bar laid overtop of poignant images from the previous year: over protesters at Standing Rock; over people holding a banner reading “refugees welcome”; over David Bowie’s face; and so on. An accompanying video did the same. (Google used the same technique for another video this spring marking International Women’s Day.)
Google’s apparent intent was to show how it helps people make sense of the endless stream of images we see — to shape them into a coherent narrative — positioning itself as a unique tool for discovery and for broadening one’s view. The images stretch beyond the search window for that reason. And, in truth, modern technology often makes us believe that to be the case — that there is nothing we cannot know or do or see, now that we are connected to these devices.
But the borders of that box matter.
As we slowly discover — sometimes by deliberate trial — what the microphones that now surround us might be picking up, we are testing and learning the new boundaries of our speech. What we are finding is that what we say and, by extension, what we think and how we think it, is increasingly positioned and framed just as Google’s ad campaign suggests it should be — that is, as being akin to a search term.
Conversation may soon no longer be about an exchange of ideas, so much as a series of statements and sentences that can convey thought while avoiding verbal computer command prompts. Some words will no longer go together. Some phrases will no longer be used before, or after, others.
And this will not be a shift driven by politics. Nor will it be brought about by social progress. Nor will it be in aid of gaining a deeper understanding of one another, or of deepening empathy — the ends language has so often served, and which can move us forward.
This is different. This is the introduction of boundaries on language that a few companies in Silicon Valley created, arbitrarily. This is also a change that will lack the very thing that language has traditionally thrived upon and adapted to: humanity.
This re-framing of speech, this re-contextualization of words by computer programs, is occurring as we change our lives in other programmatic ways. We download data and statistics about our sleep patterns, our steps, and our meals with increasing frequency — examining natural human behaviour as if it were carried out by machines, constantly adjusting it to reach optimized targets and performance levels.
We are, in short, not only beginning to think about ourselves as computers, but now, faced with new barriers on how we communicate, we are starting to think like computers, too — or, at least, in ways that might trigger a computer’s response, rather than a human’s.
To turn the idea around is perhaps more unsettling: as the distance closes between our computers and ourselves, we may find that, more and more, before we walk, eat, sleep, or talk, we will first have to consider how to do those things in such a way that we act and sound human.
Cat food. Cat food. Cat food.