
Editorials of The Times

As authorities rushed to stop a gunman on a mass killing spree in New Zealand, engineers, programmers and content moderators around the globe were scrambling to keep the rampage from going viral.

It didn’t work.

The terrorist in Christchurch killed 50 people at two mosques Friday, livestreaming part of the attack on Facebook. The original video was taken down within an hour. But copies proliferated across the major platforms.

On Saturday night, Facebook announced that it had removed 1.5 million copies of the video, with 1.2 million blocked at the moment they were uploaded. YouTube declined to give numbers, although its chief product officer told The Washington Post that at times, a copy of the video was being uploaded every second.

After decades of shunning responsibility for user content, Big Tech is slowly making its products safer for society — banning anti-vaccine misinformation, for instance, and cracking down on political disinformation. More moderation comes with heavy risks, of course. Decisions about the limits of free speech would shift to companies whose priorities are driven by shareholders. But the viral spread of the Christchurch shooting video shows the limits of the content moderation machine in the face of technologies that have been designed to be attention traps. Stricter moderation or filtering systems are not the answer. It must be a priority to redesign technology to respect the common good, instead of ignoring it.

There is a special urgency when it comes to mass killings, since media amplification can inspire copycat killings, and only so much can be done in a crisis where every hour counts.

Filtering systems have been relatively successful where time is not of the essence — copyright infringement never kills anyone, even if it stays up for a week. On longer time scales, moderation works reasonably well.

For more than a decade, the tech companies have automatically filtered out child pornography in a partnership with the National Center for Missing and Exploited Children, which maintains a database of “fingerprinted” videos. Starting in 2016, major platforms adapted that technology to filter for “terrorist” content.

But this kind of filtering only works when new content matches known banned content through a form of digital fingerprinting. Although the video of the New Zealand attack was fingerprinted immediately, countless thousands of people beat the filter over and over again by recutting, editing, watermarking and modifying the video itself. While harder to find on major social media platforms, copies of the video are still easily found online.

It was once the sole province of the press to think about how to cover such tragedies in a responsible way. As media consumption shifts online, it now falls to the tech companies to use their considerable ingenuity to mitigate the public health crisis of mass killings.

Keeping the internet, or at the very least social media, free from vile content is grueling work. The experience of content moderators for Facebook raises troubling questions about the future of human moderation and the wider danger that online content poses to public health.

Repeated exposure to conspiracy theories — say, that the Earth is flat or that the Holocaust didn’t happen — turns out to sway content moderators, an effect that may very well be playing out in the population at large. Repeated exposure to images of violence and sexual exploitation often leaves moderators with post-traumatic stress disorder. Moderators have reported crying on the job or sleeping with guns by their side. Turnover is high, pay is low, and although they have access to on-site counselors, many moderators develop symptoms of PTSD after leaving the job.

We don’t know if moderators are canaries for the social-media-consuming public at large or if their heavy dose of the worst of the web makes them outliers. Is repeated exposure to conspiracy theories — often given boosts by recommendation algorithms — swaying the general public, in some cases leading to public health emergencies like the measles outbreak? Is extremist propaganda fueling a surge in right-wing violence?

The killer in New Zealand sought to hijack the attention of the internet, and the millions of uploads of his video — both attempted and achieved — were a natural consequence of what the platforms are designed to promote in users: the desire to make content go viral.

In the midst of the crisis this weekend, YouTube resorted to temporarily disabling the ability to search recently uploaded videos. It’s not the first time a platform has disabled a function of its product in response to tragedy. In July, WhatsApp limited message forwarding in India in the wake of lynchings fueled by rumors spread by users of the service. The change became global in January in an effort to fight “misinformation and rumors.”

It’s telling that the platforms must make themselves less functional in the interests of public safety. What happened this weekend gives an inkling of how intractable the problem may be. Internet platforms have been designed to monopolize human attention by any means necessary, and the content moderation machine is a flimsy check on a system that strives to overcome all forms of friction. The best outcome for the public now may be that Big Tech limits its own usability and reach, even if that comes at the cost of some profitability. Unfortunately, it’s also the outcome least likely to happen.

__

Shedding Any Last Illusions About the Saudis

It comes as little surprise, sadly, that the Saudi thugs who slaughtered Jamal Khashoggi were a secret crew of enforcers for Crown Prince Mohammed bin Salman. They had been watching, kidnapping, detaining and torturing Saudi dissidents for more than a year before they traveled to Istanbul to kill and dismember the Washington Post journalist.

According to a report in The Times by Mark Mazzetti and Ben Hubbard, citing officials who have read classified intelligence, the team is known to U.S. officials as the “Saudi Rapid Intervention Group,” and it carried out at least a dozen operations before the Khashoggi murder.

This comes as little surprise for two reasons. First, a great deal of sordid information has come out about the Saudi hit men and the crown prince since the brazen assassination of Khashoggi inside the Saudi Consulate in Istanbul shocked the world in October. Second, Crown Prince Mohammed made clumsy initial attempts to deny the killing and then to pin it on underlings who purportedly exceeded their orders and are now said to be standing trial in a courtroom to which no independent witnesses have been admitted.

In the five months since the killing, most of the world has learned that Crown Prince Mohammed is not the modernizing liberal of the image he cultivated among Western leaders and visitors but rather a despot who suppressed those who challenged his image and his power.

That he would create a secret team of enforcers further affirms that Khashoggi’s killing was not some rogue operation by loyal courtiers against a nettlesome critic. Instead, it was part of a systematic campaign to silence dissidents that was overseen by a top aide to the crown prince, Saud al-Qahtani, and led in the field by an intelligence officer who had traveled abroad with the crown prince, Maher Abdulaziz Mutreb.

According to U.S. officials, the team was worked so hard that, in June, it asked for holiday bonuses.

Saudi Arabia, an absolute monarchy with a particularly stern form of Islam, has always been high on the list of human-rights violators. But with the ascent of Crown Prince Mohammed’s father, King Salman, to the throne in January 2015, and with Crown Prince Mohammed’s subsequent emergence as the heir and the power behind the throne, the pace of arrests, repression and executions rose to levels unseen in two decades.

According to a Saudi group that tracks political prisoners, Prisoners of Conscience, more than 2,600 Saudi dissidents — including scientists, writers, lawyers and women’s rights campaigners — were locked up in the kingdom while the crown prince was building his image abroad as a reformer. His celebrated decision to let women drive, in the best-known example, was accompanied by the imprisonment of the women who had campaigned for the right.

The revelation that the killing of Khashoggi was part of a systematic campaign against dissidents strips away any remaining illusions about Crown Prince Mohammed. The Guardian newspaper has reported signs that King Salman has begun to curb his son’s power and that Crown Prince Mohammed has missed a series of high-profile ministerial and diplomatic meetings over the past two weeks.

President Donald Trump and his senior adviser and son-in-law Jared Kushner, who shaped much of their Middle East policy around their friendly relations with the crown prince, have tried to minimize the fallout from the Khashoggi killing. But the Senate demonstrated this past week that it was prepared to part with the president on Saudi Arabia when it voted 54-46 to end U.S. military assistance for the Saudi-led war in Yemen, a conflict that has created a humanitarian crisis in the country. The House is expected to follow suit.

Congress should demand a full disclosure of intelligence records about Khashoggi’s murder, about the team that committed it and about the role of Crown Prince Mohammed. It also needs to demand the immediate release of the political prisoners in whose support Khashoggi wrote the articles that sealed his fate. And even if Trump insists on continuing to back this damaged and damaging prince, the president should be using his leverage to extract such concessions on human rights.

This article originally appeared in The New York Times.

