Why Facebook Can’t Fix Itself

One of Facebook’s main content-moderation hubs outside the U.S. is in Dublin, where, every day, moderators review hundreds of thousands of reports of potential rule violations from Europe, Africa, the Middle East, and Latin America. In December, 2015, several moderators in the Dublin office—including some on what was called the MENA team, for Middle East and North Africa—noticed that Trump’s post calling for a ban on Muslims entering the United States was not being taken down. “An American politician saying something shitty about Muslims was probably not the most shocking thing I saw that day,” a former Dublin employee who worked on content policy related to the Middle East told me. “Remember, this is a job that involves looking at beheadings and war crimes.” The MENA team, whose members spoke Arabic, Farsi, and several other languages, was not tasked with moderating American content; still, failing to reprimand Trump struck many of them as a mistake, and they expressed their objections to their supervisors. According to Facebook’s guidelines, moderators were to remove any “calls for exclusion or segregation.” An appeal to close the American border to Muslims clearly qualified.

The following day, members of the team and other concerned employees met in a glass-walled conference room. At least one policy executive joined, via video, from the U.S. “I think it was Joel Kaplan,” the former Dublin employee told me. “I can’t be sure. Frankly, I had trouble telling those white guys apart.” The former Dublin employee got the impression that “the attitude from the higher-ups was You emotional Muslims seem upset; let’s have this conversation where you feel heard, to calm you down. Which is hilarious, because a lot of us weren’t even Muslim. Besides, the objection was never, Hey, we’re from the Middle East and this hurts our feelings.” Rather, their message was “In our expert opinion, this post violates the policies. So what’s the deal?”

Facebook claims that it has never diluted its protections against hate speech, but that it sometimes makes exceptions in the case of newsworthy utterances, such as those by people in public office. But a recently acquired version of the Implementation Standards reveals that, by 2017, Facebook had weakened its rules—not just for politicians but for all users. In an internal document called the Known Questions—a Talmud-like codicil about how the Implementation Standards should be interpreted—the rules against hate speech now included a loophole: “We allow content that excludes a group of people who share a protected characteristic from entering a country or continent.” This was followed by three examples of the kind of speech that was now permissible. The first was “We should ban Syrians from coming into Germany.” The next two examples—“I am calling for a total and complete shutdown of Muslims entering the United States” and “We should build a wall to keep Mexicans out of the country”—had been uttered, more or less word for word, by the President of the United States.

In May, 2017, shortly after Facebook released a report acknowledging that “malicious actors” from around the world had used the platform to meddle in the American Presidential election, Zuckerberg announced that the company would increase its global moderation workforce by two-thirds. Mildka Gray, who was then a contract worker for Facebook in Dublin, was moved into content moderation around this time; her husband, Chris, applied and was offered a job almost immediately. “They were just hiring anybody,” he said. Mildka, Chris, and the other contractors were confined to a relatively drab part of Facebook’s Dublin offices. Some of them were under the impression that, should they pass a Facebook employee in the hall, they were to stay silent.

For the first few days after content moderators are hired, a trainer guides them through the Implementation Standards, the Known Questions, and other materials. “The documents are full of technical jargon, not presented in any logical order,” Chris Gray recalled. “I’m looking around, going, Most of the people in this room do not speak English as a first language. How in the hell is this supposed to work?” Mildka, who is from Indonesia and whose first language is Bahasa Indonesia, agreed: “In the training room, you just nod, Yes, yes. Then you walk out of the room and ask your friend, ‘Did you understand? Can you explain it in our language?’ ” Unlike Facebook’s earliest moderators, who were told to use their discretion and moral intuition, the Grays were often encouraged to ignore the context in which an utterance was made. The Implementation Standards stated that Facebook was “inclined to tolerate content, and refrain from adding friction to the process of sharing unless it achieves a direct and specific good.”

There is a logic to the argument that moderators should not be allowed to use too much individual discretion. As Chris Gray put it, “You don’t want people going rogue, marking pictures as porn because someone is wearing a skirt above the knee or something.” Nor would it make sense to have Raphael’s paintings of cherubs scrubbed from Facebook for violating child-nudity guidelines. “At the same time,” he went on, “there’s got to be a balance between giving your moderators too much freedom and just asking them to turn their brains off.”

Mildka and Chris Gray left Facebook in 2018. Shortly afterward, in the U.K., Channel 4 aired a documentary that had been filmed by an undercover reporter posing as a content moderator in their office. At one point in the documentary, a trainer gives a slideshow presentation about how to interpret some of the Implementation Standards regarding hate speech. One slide shows an apparently popular meme: a Norman Rockwell-style image of a white mother who seems to be drowning her daughter in a bathtub, with the caption “When your daughter’s first crush is a little Negro boy.” Although the image “implies a lot,” the trainer says, “there’s no attack, actually, on the Negro boy . . . so we should ignore this.”

There’s a brief pause in the conference room. “Is everyone O.K. with that?” the trainer says.

“No, not O.K.,” a moderator responds. The other moderators laugh uneasily, and the scene ends.

After the footage became public, a Facebook spokesperson claimed that the trainer had made a mistake. “I know for a fact that that’s a lie,” Chris Gray told me. “When I was there, I got multiple tickets with that exact meme in it, and I was always told to ignore. You go, ‘C’mon, we all know exactly what this means,’ but you’re told, ‘Don’t make your own judgments.’ ”

A former moderator from Phoenix told me, “If it was what they say it is—‘You’re here to clean up this platform so everyone else can use it safely’—then there’s some nobility in that. But, when you start, you immediately realize we’re in no way expected or equipped to fix the problem.” He provided me with dozens of examples of hate speech—some of which require a good amount of cultural fluency to decode, others as clear-cut as open praise for Hitler—that he says were reviewed by moderators but not removed, “either because they could not understand why it was hateful, or because they assumed that the best way to stay out of trouble with their bosses was to leave borderline stuff up.”

Recently, I talked to two current moderators, who asked me to call them Kate and John. They live and work in a non-Anglophone European country; both spoke in accented, erudite English. “If you’re Mark Zuckerberg, then I’m sure applying one minimal set of standards everywhere in the world seems like a form of universalism,” Kate said. “To me, it seems like a kind of libertarian imperialism, especially if there’s no way for the standards to be strengthened, no matter how many people complain.”

They listed several ways in which, in their opinion, their supervisors’ interpretations of the Implementation Standards conflicted with common sense and basic decency. “I just reviewed an Instagram profile with the username KillAllFags, and the profile pic was a rainbow flag being crossed out,” Kate said. “The implied threat is pretty clear, I think, but I couldn’t take it down.”

“Our supervisors insist that L.G.B.T. is a concept,” John explained.

“So if I see someone posting ‘Kill L.G.B.T.,’ unless they refer to a person or use pronouns, I have to assume they’re talking about killing an idea,” Kate said.

“Facebook could change that rule tomorrow, and a lot of people’s lives would improve, but they refuse,” John said.

“Why?” I said.

“We can ask, but our questions have no impact,” John said. “We just do what they say, or we leave.”

Around the time the Grays were hired, Britain First, a white-nationalist political party in the U.K., had a Facebook page with about two million followers. (By contrast, Theresa May, then the Prime Minister, had fewer than five hundred thousand followers.) Offline, Britain First engaged in scare tactics: driving around Muslim parts of London in combat jeeps, barging into mosques wearing green paramilitary-style uniforms. On Facebook, Chris Gray said, members of Britain First would sometimes post videos featuring “a bunch of thugs moving through London, going, ‘Look, there’s a halal butcher. There’s a mosque. We need to reclaim our streets.’ ” A moderator who was empowered to consider the context—the fact that “Britain First” echoes “America First,” a slogan once used by Nazi sympathizers in the U.S.; the ominous connotation of the word “reclaim”—could have made the judgment that the Party members’ words and actions, taken together, were a call for violence. “But you’re not allowed to look at the context,” Gray said. “You can only look at what’s right in front of you.” Britain First’s posts, he said, were “constantly getting reported, but the posts that ended up in my queue never quite went over the line to where I could delete them. The wording would always be just vague enough.”

Tommy Robinson, a British Islamophobe and one of Britain First’s most abrasive allies, often gave interviews in which he was open about his agenda. “It’s a Muslim invasion of Europe,” he told Newsweek. On Facebook, though, he was apparently more coy, avoiding explicit “calls for exclusion” and other formulations that the company would recognize as hate speech. At times, Gray had the uncanny sense that he and the other moderators were acting as unwitting coaches, showing the purveyors of hate speech just how far they could go. “That’s what I’d do, anyway, if I were them,” he said. “Learn to color within the lines.” When Robinson or a Britain First representative posted something unmistakably threatening, a Facebook moderator would often flag the post for removal. Sometimes a “quality auditor” would reverse the decision. The moderator would then see a deduction in his or her “quality score,” which had to remain at ninety-eight per cent or above for the moderator to be in good standing.

Normally, after a Facebook page violates the rules multiple times, the page is banned. But, in the case of Britain First and Tommy Robinson, the bans never came. Apparently, those two pages were “shielded,” which meant that the power to delete them was restricted to Facebook’s headquarters in Menlo Park. No one explained to the moderators why Facebook decided to shield some pages and not others, but, in practice, the shielded pages tended to be those with sizable follower counts, or with significant cultural or political clout—pages whose removal might interrupt a meaningful flow of revenue.

There is little recourse for a content moderator who has qualms about the Implementation Standards. Full-time Facebook employees are given more dispensation to question almost any aspect of company policy, as long as they do so internally. On Workplace, a custom version of the network that only Facebook staffers can access, their disagreements are often candid, even confrontational. The former Dublin employee who worked on Middle East policy believes that Facebook’s management tolerates internal dissent in order to keep it from spilling into public view: “Your average tech bro—Todd in Menlo Park, or whatever—has to continually be made to feel like he’s part of a force for good. So whenever Todd notices anything about Facebook that he finds disturbing there has to be some way for his critiques to be heard. Whether anything actually changes as a result of those critiques is a separate question.”

On December 18, 2017, on a Workplace message board called Community Standards Feedback, a recruiter in Facebook’s London office posted a Guardian article about Britain First. “They are pretty much a hate group,” he wrote. He noted that “today YouTube and Twitter banned them,” and asked whether Facebook would do the same.

Neil Potts, Facebook’s public-policy director for trust and safety, responded, “Thanks for flagging, and we are monitoring this situation closely.” However, he continued, “while Britain First shares many of the common tenets of alt-right groups, e.g., ultra-nationalism,” Facebook did not consider it a hate organization. “We define hate orgs as those that advance hatred as one of their primary objectives, or that they have leaders who have been convicted of hate-related offenses.”

Another Facebook employee, a Muslim woman, noted that Jayda Fransen, a leader of Britain First, had been convicted of hate crimes against British Muslims. “If the situation is being monitored closely,” she asked, “how was this missed?”

“Thanks for flagging,” Potts responded. “I’ll make sure our hate org SMEs”—subject-matter experts—“are aware of this conviction.”

A month later, in January of 2018, the female employee revived the Workplace thread. “Happy new year!” she wrote. “The Britain First account is still up and running, even though as per above discussion it clearly violates our community standards. Is anything being done about this?”

“Thanks for circling back,” Potts responded, adding that a “team is monitoring and evaluating the situation and discussing next steps forward.” After that, the thread went dormant.

A few weeks later, Darren Osborne, a white Briton, was convicted of murder. Osborne had driven a van into a crowd near a London mosque, killing a Muslim man named Makram Ali and injuring at least nine other people. Prosecutors introduced evidence suggesting that Osborne had been inspired to kill, at least in part, by a BBC miniseries and by following Britain First and Tommy Robinson on social media. The judge deemed the killing “a terrorist act” by a man who’d been “rapidly radicalized over the Internet.” Within six weeks, Britain First and Tommy Robinson had been banned from Facebook. (Pusateri, the Facebook spokesperson, noted that the company has “banned more than 250 white supremacist organizations.”)

“It’s an open secret,” Sophie Zhang, a former data scientist for the company, recently wrote, “that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention.” Zhang left Facebook in September. Before she did, she posted a scathing memo on Workplace. In the memo, which was obtained by BuzzFeed News, she alleged that she had witnessed “multiple blatant attempts by foreign national governments to abuse our platform on vast scales”; in some cases, however, “we simply didn’t care enough to stop them.” She suggested that this was because the abuses were occurring in countries that American news outlets were unlikely to cover.

When Facebook is receiving an unusual amount of bad press for a particularly egregious piece of content, this is referred to within the company as a “press fire,” or a “#PRFire.” Often, the content has been flagged repeatedly, to no avail, but, in the context of a press fire, it receives prompt attention. A Facebook moderator who currently works in a European city shared with me a full record of the internal software system as it appeared on a recent day. There were dozens of press fires in progress. Facebook was being criticized—by Facebook users, primarily—for allowing widespread bullying against Greta Thunberg, the teen-age climate activist, who has Asperger’s syndrome. The content moderators were instructed to apply an ad-hoc exemption: “Remove all instances of attacks aimed at Greta Thunberg using the terms or hashtag: ‘Gretarded’, ‘Retard’ or ‘Retarded.’ ” No similar protections were extended to other young activists, including those whose bullying was unlikely to inspire such a public backlash. A woman who worked as a content-moderation supervisor in the U.S. told me, “You can ask for a meeting, present your bosses with bullet points of evidence, tell them you’ve got team members who are depressed and suicidal—doesn’t help. Pretty much the only language Facebook understands is public embarrassment.”

Facebook moderators have scant workplace protections and little job security. The closest they have to a labor organizer is Cori Crider, a lawyer and an activist based in London. Crider grew up in rural Texas and left as soon as she could, going first to Austin, for college; then to Harvard, for law school; and then to London, where she worked for a decade at a small human-rights organization, representing Guantánamo detainees and the relatives of drone-strike victims in Yemen and Pakistan. “It was through worrying about drones that I came to worry about technology,” she said. “I started to feel like, While we’re all focussed on the surveillance tactics of the Pentagon, a handful of companies out of California are collecting data on a scale that would honestly be the envy of any state.”

Last year, she co-founded a not-for-profit called Foxglove, where she is one of two employees. The foxglove, a wildflower also known as digitalis, can be either medicinal or toxic to humans, depending on how it’s ingested. The group’s mission is to empower the tech industry’s most vulnerable workers—to help them “clean up their factory floor,” as Crider often puts it. Her more ambitious goal is to redesign the factory. She was influenced by Shoshana Zuboff, the author of “The Age of Surveillance Capitalism,” who argues that “instrumentarian” behemoths such as Facebook pose an existential threat to democracy. In Crider’s analysis, not even the most ingenious technocratic fix to Facebook’s guidelines can address the core problem: its content-moderation priorities won’t change until its algorithms stop amplifying whatever content is most enthralling or emotionally manipulative. This might require a new business model, perhaps even a less profitable one, which is why Crider isn’t hopeful that it will happen voluntarily. “I wish I could file a global injunction against the monetization of attention,” she said. “In the meantime, you find more specific ways to create pressure.”

In July, 2019, Crider was introduced to a Facebook moderator in Europe who was able to map out how the whole system worked. This moderator put her in touch with other moderators, who put her in touch with still others. Sometimes, while courting a moderator as a potential source, Crider arranged a meeting and flew to wherever the person lived, only to be stood up. Those who did talk to her were almost always unwilling to go on the record. The process reminded Crider of the months she’d spent in Yemen and Pakistan, trying to gain people’s trust. “They often have very little reason to talk, and every reason in the world not to,” she said. The content moderators were not yet ready to form a union—“not even close,” Crider told me—but she hoped to inculcate in them a kind of latent class consciousness, an awareness of themselves as a collective workforce.
