Can You Murder Information?
Here’s a new version of a story we’ve heard before:
An August 2020 internal presentation at Facebook revealed that Groups on the platform “tied to mercenary and hyperpartisan entities” were using the platform’s tools “to build large audiences,” the Wall Street Journal reported last week. “Many of the most successful Groups were under the control of administrators that tolerated or actively cultivated hate speech, harassment and graphic calls for violence,” the presentation reportedly showed. One top Group was even found to aggregate “‘the most inflammatory news stories of the day and feed them to a vile crowd that immediately and repeatedly calls for violence,’” the presentation noted.
As the November election drew near, Facebook tried to contain the problem, temporarily halting “algorithmic recommendations to Groups dedicated to political or civic issues” and instituting other measures in an effort to limit their growth and influence. It didn’t work; the situation was too far gone. Groups continued to expand on their own, gaining organic traction, connecting people with shared interests.
The mechanics of this are easy to explain. As Nathan Jurgenson wrote this week, Facebook’s focus on Groups in the wake of the 2016 election “meant emphasizing the formation of groups that were native to Facebook — that drafted on its affordances and adopted its incentives.”
“It wasn’t a matter of giving already existing groups from the world a Facebook equivalent and hub; it was a matter of generating groups that had explosive growth potential,” Jurgenson wrote. “Rather than a diverse set of groups to meet a variety of social needs, Facebook drove the production of a single kind of group, committed to engagement at any cost.”
Inevitably, that which Facebook had been designed to encourage became another monster it could no longer control.
“Facebook stands for bringing us closer together and building a global community,” Facebook CEO Mark Zuckerberg wrote in 2017. “The most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.” This is the language of the social internet. Facebook is the company that says it most frequently and most seriously, but for the first decades of the 21st century, its essence — that people are better off if they are more closely connected — acted as a driver for user adoption and equally as the message of a new global cultural and economic model.
It’s ended up being a self-fulfilling prophecy. The more Facebook and its ilk sold themselves as integral to the workings of society, the more they became a necessity for people to operate in society — and the more, it seems, society has operated like a social platform.
That is to say, our expectations of the world we inhabit now reflect the expectations we have of the social internet, not the other way around. If before, we expected the world to be diverse and nuanced, we now expect it to be, like a social platform, homogeneous and rigid. If we once expected the world to be indifferent to our personal needs, we now expect it to cater directly to our every whim. If we once expected the world to be full of compromise, we now expect it to be composed solely of winners and losers. If we once expected that accepted truth could be established, we now expect that it cannot.
And if there ever was a time when we could expect political beliefs to be a form of genuine personal expression, we can now never be so sure it’s not just a way to game the metrics, an audience-building tactic or data-mining exercise. Or that maybe it’s just for the lulz.
This expectations reversal is why Facebook’s problem — the one of inhumanity breeding within it — keeps repeating. Where we might have once expected people to be, well, people, our expectation on the social internet is that people are something different. People on the social internet are avatars we collect and surveil, bits of data that stream past, worthy of our attention the same way an advertisement might be — and frequently literally given equal importance by the platform’s algorithms. We consume and dispense with them equally. In a sense, they’re not real.
Because the platforms have always been more than what they claimed — not just means of communication, but the foundations of a new form of economic, social and cultural exchange — and because we adopted them, and not the other way around, their logic has become our logic, their rules have become our rules. Their expectations of who we are and the rules of our interactions — that we are essentially a collection of data points engaged in an endless, zero-sum metrics game — have become ours as well.
Is it at all surprising, in that case, that, when correctly prompted, so many of us find our capacity for cut-throat cruelty expanded in feverish, endless one-upmanship? Or that the sense of inhumanity that pervades these so-called communities can tip so quickly toward outright barbarism? After all, if we are all nothing but data, what do our actions really mean? Can you hang an avatar? Can you murder information? Can anything or anyone ever be really real?
If you give people a technology and tell them often enough that it’s the tool that will help them figure out how the world works, you can’t be surprised when they end up believing you. Nor should you be surprised if, once they start using the technology as they’ve been told to, that the world actually ends up working that way.