As authorities rushed to stop a gunman on a mass killing spree in New Zealand, engineers, programmers and content moderators around the globe were scrambling to keep the rampage from going viral.
It didn’t work.
The terrorist in Christchurch killed 50 people at two mosques Friday, livestreaming part of the attack on Facebook. The original video was taken down within an hour. But copies proliferated across the major platforms.
On Saturday night, Facebook announced that it had removed 1.5 million copies of the video, with 1.2 million blocked at the moment they were uploaded. YouTube declined to give numbers, although its chief product officer told The Washington Post that at times, a copy of the video was being uploaded every second.
After decades of shunning responsibility for user content, Big Tech is slowly making its products safer for society — banning anti-vaccine misinformation, for instance, and cracking down on political disinformation. More moderation comes with heavy risks, of course. Decisions about the limits of free speech would shift to companies whose priorities are driven by shareholders. But the viral spread of the Christchurch shooting video shows the limits of the content moderation machine in the face of technologies that have been designed to be attention traps. Stricter moderation or filtering systems are not the answer. It must be a priority to redesign technology to respect the common good, instead of ignoring it.
There is a special urgency when it comes to mass killings, since media amplification can inspire copycat killings, and only so much can be done in a crisis where every hour counts.
Filtering systems have been relatively successful where time is not of the essence — copyright infringement never kills anyone, even if it stays up for a week. On longer time scales, moderation works reasonably well.
For a decade or more, the tech companies have automatically filtered out child pornography in partnership with the National Center for Missing and Exploited Children, which maintains a database of “fingerprinted” videos. Starting in 2016, major platforms adapted that technology to filter for “terrorist” content.
But this kind of filtering works only when new content matches known banned content through a form of digital fingerprinting. Although the video of the New Zealand attack was fingerprinted immediately, countless thousands of people beat the filter over and over again by recutting, editing, watermarking and otherwise modifying the video. While harder to find on major social media platforms, copies of the video are still easily found online.
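To make that failure mode concrete, here is a minimal sketch of fingerprint-style matching, assuming a simple perceptual “difference hash” over video frames and a hypothetical list of banned fingerprints. This is not any platform’s actual system; production fingerprints are far more robust, but the matching logic is similar in spirit, and the sketch shows why aggressive edits can push a copy past the match threshold.

```python
# A rough illustration, not any platform's actual system: a simple "difference
# hash" fingerprint for frames, and a matcher against a hypothetical list of
# banned fingerprints. Requires Pillow (pip install Pillow).

from PIL import Image


def dhash(frame: Image.Image, hash_size: int = 8) -> int:
    """Fingerprint a frame: 1 bit per comparison of adjacent pixel brightness."""
    # Shrink to a tiny grayscale grid so only coarse visual structure survives.
    small = frame.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits where two fingerprints differ."""
    return bin(a ^ b).count("1")


def matches_banned(frame: Image.Image, banned_hashes: list[int], threshold: int = 10) -> bool:
    """Flag a frame whose fingerprint is close to any known banned fingerprint.

    Straight re-uploads and light re-encodes usually land within the threshold;
    heavy recuts, crops, overlays and watermarks can push the distance past it,
    which is how modified copies slip through this kind of filter.
    """
    fingerprint = dhash(frame)
    return any(hamming_distance(fingerprint, banned) <= threshold for banned in banned_hashes)
```

The arms race, in other words, plays out in the threshold: set it too tight and edited copies slip through; set it too loose and unrelated videos get swept up.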
It was once the sole province of the press to think about how to cover such tragedies responsibly. As media consumption shifts online, it now falls to the tech companies to use their considerable ingenuity to help mitigate the public health crisis of mass killings.
Keeping the internet, or at the very least social media, free from vile content is grueling work. The experience of content moderators for Facebook raises troubling questions about the future of human moderation and the wider danger that online content poses to public health.
Repeated exposure to conspiracy theories — say, that the Earth is flat or that the Holocaust didn’t happen — turns out to sway content moderators, an effect that may very well be playing out in the population at large. Repeated exposure to images of violence and sexual exploitation often leaves moderators with post-traumatic stress disorder. Moderators have reported crying on the job or sleeping with guns by their side. Turnover is high, pay is low, and although they have access to on-site counselors, many moderators develop symptoms of PTSD after leaving the job.
We don’t know if moderators are canaries for the social-media-consuming public at large or if their heavy dose of the worst of the web makes them outliers. Is repeated exposure to conspiracy theories — often given boosts by recommendation algorithms — swaying the general public, in some cases leading to public health emergencies like the measles outbreak? Is extremist propaganda fueling a surge in right-wing violence?
The killer in New Zealand sought to hijack the attention of the internet, and the millions of uploads of his video — both attempted and achieved — were a natural consequence of what the platforms are designed to promote in users: the desire to make content go viral.
In the midst of the crisis this weekend, YouTube resorted to temporarily disabling the ability to search recently uploaded videos. It’s not the first time a platform has disabled a function of its product in response to tragedy. In July, WhatsApp limited message forwarding in India in the wake of lynchings fueled by rumors spread by users of the service. The change became global in January in an effort to fight “misinformation and rumors.”
It’s telling that the platforms must make themselves less functional in the interests of public safety. What happened this weekend gives an inkling of how intractable the problem may be. Internet platforms have been designed to monopolize human attention by any means necessary, and the content moderation machine is a flimsy check on a system that strives to overcome all forms of friction. The best outcome for the public now may be that Big Tech limits its own usability and reach, even if that comes at the cost of some profitability. Unfortunately, it’s also the outcome least likely to happen.
This article originally appeared in The New York Times.