Why Can’t Facebook Get Hate Speech Right?

Published: 5 October

Region: Worldwide

By Mikhail Yakovlev

From censoring breast-feeding mothers to deleting the iconic photo of the ‘napalm girl’ posted by the Norwegian PM, Facebook’s procedures for identifying and removing offensive and dangerous content are infamously flawed.

Facebook have recently published their Community Standards in full, but it is not clear how effectively they are enforced. We at the Media Diversity Institute (MDI) decided to test this for ourselves.


Historically, Facebook’s track record has been problematic. According to “Post No Evil”, a recent podcast episode produced by WNYC’s Radiolab, Facebook have acted as a secretive multinational, refusing to publish their Community Standards in full until April this year. Officially, Facebook claimed that publishing the rules in full would let some users find ways to circumvent them.

But this lack of transparency alienated many users and frequently provoked a backlash. Probably the most famous example is Facebook’s decision to ban images of breast-feeding mothers back in 2012. A well-publicised campaign of offline and virtual nurse-ins eventually forced them to reverse that policy.

In any case, given that Facebook promote themselves as a community of users whose “mission is to give people the power to share and make the world more open & connected,” it makes sense for them to make these rules public so that ‘members of the community’ can identify and report content that should not be there.

Long hours, little pay and trauma

But creating and publishing a foolproof set of rules is only one part of the problem. The other is enforcement. Although more than 95% of terrorist content in English is now removed by automated software, Facebook employs thousands of moderators to manually remove other undesirable content, such as hate speech, nudity and harassment.

According to one of the podcast’s interviewees, a former Facebook moderator from the Philippines identified only as Marie, some moderators struggle to properly evaluate every post due to the sheer volume of content they must review in a day – up to 25,000 individual posts per worker. All for an hourly wage of between $1 and $3.*

It is hardly surprising, then, that some of Marie’s co-workers would indiscriminately approve every post as a way of getting back at Facebook.

There is nothing to suggest that Marie did this herself. But she admitted to banning posts that she found unacceptable because of her “conservative background,” even when they did not violate the Community Standards. For example, she banned pictures of breastfeeding mothers because she felt a moral obligation to protect young children from nudity.

This raises the question: does anything go?

Can a homophobic moderator simply remove a picture of two people of the same gender kissing?

As the Community Standards are now available in full, I decided to test for myself how well they are enforced.

In an official blog post explaining their hate-speech policy, Facebook claimed that, “after conflict started in the region in 2014,” they no longer allow posts by Russians that “call Ukrainians ‘khokhol,’ literally ‘topknot,’” or by Ukrainians that “call Russians ‘moskal.’”

However, I noticed that some posts like these were still visible, and I reported two of them. One read “I hate [all] khokhols. Who is with me?” and the other, “khokhols are shit!!!!” I also reported a Facebook profile under the clearly fake and inflammatory name “I Hate Bendera-loving khokhols.”

Facebook removed all of these within four hours. However, each of these posts had been on Facebook for at least three months, and none of them had been flagged for removal by other users or by Facebook’s automated software. Posts containing equally offensive slurs in English almost never survive this long.

This raises the additional question of whether it is desirable or even possible to uniformly apply the same set of rules to content produced globally.

It seems that Facebook themselves do not really know the answer.

Tech company? Online community? Media?

Another anonymous former employee, who worked for the team responsible for Community Standards at the Facebook HQ in California, confided to Radiolab that his decision to leave the company was partly due to the inconsistent way in which the standards were applied.

He recounted how a Facebook executive overturned his team’s decision to remove the now-iconic photo of a Boston Marathon bombing victim in a wheelchair with his legs blown off, which the team had judged to violate Facebook’s ban on “visible internal organs.” His team was told that the image was “newsworthy” and so should be allowed to stay.

About six months later, his team decided not to remove a video of a Mexican woman being beheaded by a member of the Los Zetas drug cartel, judging it ‘newsworthy’ because it raised awareness of Mexico’s endemic drug-cartel problem. This time, the same executive told them to take the video down.

Both times this executive essentially made an editorial decision based on ‘newsworthiness’.

Does this mean that Facebook have become a novel type of media outlet, with an editorial policy of sorts? For the time being, Facebook continue to claim that they are a “technology company,” not media.

A slave to profit?

More problematically, it has also been suggested that Facebook are sometimes prepared to disregard their policies for profit.

According to an undercover investigation by Channel 4, moderators at a second outsourcing partner of Facebook, this one located in Ireland, were told: “If you start censoring too much then people stop using the platform. It’s all about money at the end of the day”.

Facebook denies this. But if true, such an attitude is deeply irresponsible.

Of course, some argue that social networks should not be too sanitised, otherwise nobody would use them and free speech would be undermined. An administrator of a US Navy group was unequivocal – “comments in a closed or private group should not be moderated by FB support people.”

He explained that members of his group “conducted a test to see how” Facebook responded to certain words, namely “‘fag’ or any derivative” and the n-word. They found that while posts featuring “fag” were consistently removed when reported, posts featuring the n-word “were found to be acceptable.”

Likewise, in my own test I reported a post implying that all Arabs are murderers and terrorists. In my opinion, this post contradicts Facebook’s ban on hate speech by comparing a race to “violent and sexual criminals.” Yet unlike the anti-Ukrainian posts, this one was not removed, and Facebook provided no explanation as to why they decided it does not breach their rules.

I should mention that this post generated over thirty responses. Most, but not all, of these urged the author to desist from using discriminatory Islamophobic and Arabophobic language.

But, is this reason enough to allow the original post to remain?

In my opinion, the fact that Facebook removes some posts is not a problem. The real problem is that by removing some posts that apparently contradict their Community Standards but not others, Facebook are creating a sense of arbitrary authority and perpetuating systematic bias against certain groups.

Online hate speech. Offline violence.

Even more problematically, Facebook’s inability and/or unwillingness to enforce the Community Standards fairly across languages and cultures can have real-world consequences for some of the most marginalised individuals and groups.

A recent UN report on anti-Rohingya violence in Myanmar highlights that “in a context where Facebook is the internet…Facebook posts and messages [can lead and] have led to real-world discrimination and violence.”

Unfortunately, prior experience suggests that Facebook only acts when its inaction generates enough negative media attention in the West.

The fundamental problem here seems to be that users who do not live in Western Europe and the US are a low priority for a private company that derives most of its profits from those markets.

Facebook’s treatment of anti-Rohingya hate speech is a case in point. Although the conflict in Myanmar started in 2012, Facebook still had only two Burmese-speaking moderators as recently as last year. Rising media and government outrage finally prompted Facebook to act in April this year, when Mark Zuckerberg promised the US Congress to “ramp up our effort there [in Myanmar] dramatically.” Since then, Facebook have hired more Burmese-speaking staff and tried to make their automated moderation software more responsive to Burmese text. Unfortunately, most commentators agree that this was too little, too late.

This poses an even more fundamental question. Can we trust a private company to regulate speech?

Other models have been tried. Wikipedia – the world’s largest online encyclopaedia – is famously openly editable by its users. Aside from a few wealthy individuals paying PR agencies to alter entries about them, this user-led approach has been remarkably successful.

In contrast, the unmoderated depths of Reddit have become home to multiple hateful and/or radicalised cyber-communities. At least one of these played a role in a recent terrorist attack.

Reddit themselves were forced to take the unprecedented step of banning some of the most hateful subreddits in November 2017. Likewise, Wikipedia has made some pages ‘protected’ to prevent damaging and inaccurate edits. Interestingly, the page about the Rohingya people is openly editable in both English and Burmese.


*This wage relates to outsourced moderators in the Philippines only.