Liking genocide on Facebook

The social media network Facebook is 15 years old today. The role of the platform in fueling hate speech has been widely discussed recently. At the heart of it was the case of Myanmar, where Facebook was used in the campaign to force hundreds of thousands of Rohingya out of the country, in a crime that, according to the UN, amounts to genocide. Should Facebook be held accountable in the future? Will it?


The multi-billion-dollar US social media company Facebook has spent the last year being pilloried as the platform through which fake news has been spread and amplified, potentially altering election and referendum results worldwide.

Apparently low on the list of international hashtag outrage, though, is the way the company has made mistakes as it has expanded and hoovered up millions of new users in fragile states – especially in Myanmar, where an estimated 700,000 Rohingya Muslims were forced out of their homes following a vicious hate speech campaign, in what the UN has described as a genocide.


Will Facebook ever truly be held to account for its (in-)actions in places like Myanmar? Emma Irving, Assistant Professor at Leiden University Law School, believes it is very unlikely. “They’ve monopolized this public space, and they have control over it, with no accountability,” she says. “I think times may be changing,” suggests Alexa Koenig, Director of the Human Rights Center (HRC) at the University of California, Berkeley. “I wouldn’t be surprised if relatively soon there [are] new regulatory frameworks put in place, to help clarify the very unknown space around the duty that platforms have ultimately to the populations that they serve in very different ways.”

The Myanmar Campaign

Before discussing how those frameworks may lead towards accountability for crimes, it’s worth recounting what exactly has happened in Myanmar.

Essentially, Facebook stands accused of being used to further hate speech. There appears to have been “an orchestrated, concerted and military-controlled campaign to use the platform to misinform the public,” says Félim McMahon, Technology Director of the HRC. A journalist and investigator by trade, McMahon is currently on leave from his work at the International Criminal Court and is looking specifically at the nexus of social media and war crimes. A major Reuters investigation to which the HRC contributed explains the context: a rapid spread of smartphones as democracy was reintroduced in Myanmar after 2011. “By 2016, nearly half the population had mobile phone subscriptions... Most purchased smartphones with internet access,” the report noted. And with specific low-cost packages, the Facebook app went viral.

A New York Times investigation uncovered exactly how “the propaganda campaign — which was hidden behind fake names and sham [Facebook] accounts — went undetected. The campaign, described by five people who asked for anonymity because they feared for their safety, included hundreds of military personnel who created troll accounts and news and celebrity pages on Facebook and then flooded them with incendiary comments and posts timed for peak viewership.”

“We looked at hate speech in Myanmar for two years. Our teams were collecting instances of hate speech. It was apparent for anyone who cares to see it and collect it that there was a lot of negative sentiment being driven on the platform,” confirms McMahon.

The limits of the algorithm

That Facebook was overwhelmed is evident. The company made efforts to moderate content in Burmese, and offensive posts were taken down. But these efforts were both a struggle against a flood and limited by a lack of deep cultural knowledge. An example outlined in the Reuters report is one particular racial slur – "kalar" – which can be a highly derogatory term used against Muslims, but can also have a much more innocent meaning: "chickpea". Banning the word on Facebook, by itself, makes no sense. The push towards a purely technologically-driven, automated response to these issues worries Koenig. She recognizes that the sheer “scale of content delivered on the platforms” means a technological approach will be needed. But until social media platforms do “more sociological analysis,” automated algorithms are “going to continue to be a crude tool in light of very sensitive issues.”
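To make the "kalar" example concrete, here is a minimal, purely illustrative sketch in Python – not anything resembling Facebook's actual moderation systems, and with a made-up blocklist – of why a keyword-only rule cannot tell a slur from an innocent mention of a chickpea dish.

```python
# Illustrative sketch only: a naive keyword filter has no notion of
# context or intent, which is exactly the "kalar"/chickpea problem
# described above. The blocklist below is hypothetical.

BANNED_WORDS = {"kalar"}

def naive_filter(post: str) -> bool:
    """Return True if a keyword-only rule would flag this post."""
    words = (w.strip(".,!?") for w in post.lower().split())
    return any(w in BANNED_WORDS for w in words)

# A hateful use and a harmless recipe post are treated identically.
print(naive_filter("get the kalar out of our country"))     # True
print(naive_filter("this kalar curry recipe is delicious"))  # True
```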


Finally, Facebook took down 18 accounts and 52 pages linked to Burmese officials in August 2018. The company said it had "found evidence that many of these individuals and organizations committed or enabled serious human rights abuses in the country."


The rhetorical responsibility

“The press release Facebook [issued] on taking down these accounts would indicate to me that at least they want the world to think they are taking it seriously,” says Irving, who also studies digital data and atrocity accountability. Koenig is more positive: “I definitely feel that they do want to get it right. I do think we are seeing a movement from the highest levels of leadership policy development at all the platforms right now of kind of struggling with how to translate that into practice in digital space.”


Further, the company commissioned an independent human rights report, based – it says – on the UN Guiding Principles on Business and Human Rights, which it made public in November 2018. The report admonished Facebook for failing to prevent its platform from being used to “foment division and incite offline violence” in Myanmar, Facebook executive Alex Warofka acknowledged. The report paints an extraordinary picture of a company that was apparently deeply unaware – at the time – of its own potential for doing harm. It shows that Facebook leadership did little to figure out the facts on the ground.

Irving describes such company-commissioned reports as part of the “rhetoric of responsibility” which she has seen increasingly being adopted by social media companies. Priya Pillai, a senior consultant and researcher in international law who blogs at Opinio Juris, says she had expected more substance: “Facebook needs to actually acknowledge its actions and potential impact in Myanmar in a more comprehensive way, which the report really did not do. It’s essentially avoided a tough discussion.”

Proving intent

So can – will – Facebook ever be held accountable for the way its platform was abused? It seems unlikely.

As a business, Facebook was required to conduct due diligence, says Irving, and to “know whether your business conduct is going to be a problem for human rights in that area.” But ultimately those human rights principles are not binding. “So you hit dead ends, to be honest.”


Pillai suggests looking back to the example of how media figures were prosecuted for incitement to genocide at the UN-backed Rwanda Tribunal, including top board members of the notorious Radio Télévision Libre des Mille Collines (RTLM). “I do think there is a parallel,” she says, arguing that even though it “may not be exactly the same situation,” you could submit that because Facebook was the medium for dissemination of hate speech, that “arguably created conditions for the commission of mass atrocities.”

However, Irving points out that Facebook is a platform, not a media production house, broadcaster or author. Unlike Rwanda’s RTLM, “it’s not that Facebook itself is generating content that’s hateful and inciteful.” But potentially, she continues, “if you build your algorithms in such a way that it promotes hateful content and inciteful content to the top of someone’s news feed, you’re doing more than being just a neutral hosting platform.”

“There’s a lot of chatter right now around what are the different legal theories for holding platforms responsible,” agrees Koenig, “and one of the questions I’ve been asking myself as a lawyer is: Is there something akin to the ‘knew, or should have known’, like in command responsibility [cases]?” Under international humanitarian law, commanders can in some circumstances be held responsible for war crimes committed by subordinates, when they knew or should have known about the crimes and did nothing to prevent or punish them. But even though she recognizes that social media platforms have “tremendous control over what comes into their communities,” she doesn’t see that it would be easy to find a suitable jurisdiction in which to hold them accountable.


Irving says flatly that she believes it is impossible, because Facebook is a company, not an individual. “I’ve looked in every nook and cranny that I can think of and from a strictly legal perspective, Facebook is a private company,” and “if you wanted to try to prosecute Mark Zuckerberg [Facebook’s founder and chairman] for aiding and abetting genocide and crimes against humanity… I think you’d still have a hard time proving intent.”


What now for Facebook?

“It’s too late for the Myanmar situation,” says McMahon. You can’t mitigate the past, says Irving, but you can work out “how not to commit that kind of harm again in the future”. The social media behemoth is certainly engaging with civil society and academics to work out how to improve. And the pressure to reform – fueled by critical investigative journalism – continues.

Fragile states like Myanmar, often with deeply divisive pasts and lacking traditions of media literacy, have become the spaces in which a social media platform such as Facebook in fact IS the internet for most people. Irving likens Facebook to a public utility: “if Facebook starts to equal ‘the internet’ – as a company there are corresponding sets of duties.”

“It’s sort of like the Wild West,” suggests Koenig, “where historically we populated the land, people staked their claim and the next evolution is really going to have to be the equivalent of a bill of rights or a constitution.”

“What we are seeing is the company shifting from just shifting responsibility, avoiding regulation, now leaning in to the problem, and not just deny and disrupt,” says McMahon. “The question is going to be whether they are willing to pivot,” agrees Koenig. “In any institution or any field of practice there’s usually a learning curve, a time period during which people are very willing to trade off responsibility for innovation,” she continues. “But I do think that now that social media companies like Facebook have matured there is less of a willingness to say anything goes.”

New approaches are needed with “engagement from civil society, states and platforms,” says McMahon. “It’s going to be an issue for leadership to decide if they’re willing to listen and to learn from communities that may not naturally seem to have an obvious technological expertise,” says Koenig, “but certainly have the social and cultural expertise. I think it’s highly possible. But listening is a very difficult skill.”

The danger of self-regulation

Another danger is that of going too far the other way and over-regulating the space for freedom of speech – banning everything – to the detriment of all, including journalists and human rights investigators. The criticism by “civil society [has been] pushing [Facebook] in the direction of becoming ‘Big Brother’ and we don’t want that,” says McMahon. Irving agrees on the current risk: “Essentially what you’re doing is giving private companies the task of deciding what speech is hateful, what speech is permissible, what counts as incitement… We are giving them the role of deciding what we as a society consider protected speech and what speech should be removed, and the policies for the way they do this are very untransparent and unaccountable.”


At fifteen years old, Facebook may act as an unaccountable hate speech platform or an equally unaccountable self-regulator. Or be a more responsible adult.