The Implications of Inconsistent Content Moderation: Reflections on Ukraine and Yemen Conflicts

By Caroline Tynan - 07 April 2022

Writing for GPNG, Caroline Tynan argues that to tackle online hate and extremism during conflicts, online platforms must adhere to international human rights law and carefully balance its competing protections.

Over the last several years, human rights organizations have noted with alarm the problem of automated removal of extremist content. Not only have these policies lacked transparency and been used against journalists and activists, but they have also removed evidence of war crimes. While free speech advocates have voiced concerns over censorship, international human rights investigators using open source technology have led the way in critically engaging with the effects of content moderation policies on legal evidence of crimes by both state and non-state actors. In recent weeks, free speech and tech experts have seized on the Russian invasion of Ukraine to challenge platforms to address more systematically the opaque algorithms that have allowed disinformation to flourish. Lessons from more complex conflicts further reveal just how far platforms have to go in confronting the root of their problems.

Because so much international attention is focused on what is a clear case of an illegal military intervention in Europe, social media platforms are facing a reckoning that they cannot overlook as easily as they have intra-state conflicts outside the Western world. But even as major Russian state media outlets RT and Sputnik face bans, content moderation and free speech experts argue that the responses of tech platforms are not only too late, but fall short of upholding international law. Facebook and Twitter have limited, but still not completely banned, Russian state media and official accounts, instead responding with piecemeal attempts to limit Russian state propaganda. In the last few days alone, new research shows that RT Arabic's and RT's Spanish-language Facebook pages have actually seen increased traffic since Facebook and other companies moved to restrict access to Russia Today and Sputnik for European-based users. In addition to platforms' disregard for their role in disinformation and incitement to violence in non-English-speaking settings, their belated and limited response to Russian pro-war propaganda reflects a larger systemic failure to hold to account governments responsible for inciting violence.

Automated content regulation

The problem is rooted in the opaque and politicized nature of tech companies' profit-based approach. There is little transparency about the criteria that shape AI-enforced content moderation policies, or about why so many instances of government-incited violence, from India to Ethiopia, still go unchecked. The result is an approach that is simultaneously overbroad and under-inclusive.

The one commonality is that companies all base their decisions on some combination of the US State Department's and the UN Security Council's designated terror lists. The Global Internet Forum to Counter Terrorism (GIFCT) began as an industry-led attempt to deal with the use of social media by ISIS and al Qaeda. Its criteria were initially based on whether an account was directly associated with a group on the UN sanctions list for terrorism. It has since expanded with the encouragement of Western governments interested in dealing with Islamic extremism and the growing threat of far-right extremist content online. However, given its increasingly broad mandate, its lack of transparency around 'hashing', the automated identification and removal of extremist content, risks reinforcing power divides between states and societies. This is not only because it has failed to explain how it will safeguard against its methods being abused to silence journalists, but also because its model, focused on non-state actors, sets the stage for inconsistent accountability.
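The mechanics of hash-sharing are simple enough to sketch, which makes the opacity around how the lists are compiled all the more striking. The snippet below is a minimal illustration, not the GIFCT's actual implementation: it uses an ordinary cryptographic hash as a stand-in for the perceptual hashes deployed in practice, and the database entries and function names are invented for the example.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a fingerprint of uploaded content.

    SHA-256 is used here purely as a stand-in: it only matches exact copies,
    whereas deployed systems use perceptual hashes that tolerate re-encoding,
    cropping, or watermarking.
    """
    return hashlib.sha256(data).hexdigest()

# Illustrative only: a shared database of fingerprints of previously flagged
# content, of the kind consortium members would contribute. Real hash lists
# are not public, so this entry is invented for the example.
shared_hash_database = {content_hash(b"previously flagged propaganda clip")}

def should_flag(upload: bytes) -> bool:
    """Flag an upload if its fingerprint appears in the shared database."""
    return content_hash(upload) in shared_hash_database

# An exact re-upload of flagged content matches; anything else does not.
assert should_flag(b"previously flagged propaganda clip")
assert not should_flag(b"independent journalism documenting a strike")
```

As even this toy sketch suggests, the difficulty lies less in the matching step than in who decides which fingerprints enter the shared database, on what grounds, and with what recourse when journalism or evidence of war crimes is swept up.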

US, UN, and other governmental terror definitions and sanctions lists are not reliable sources for identifying actors inciting violence online. They are highly politicized, and are more limited when it comes to governments as opposed to non-state actors. Russia, Saudi Arabia, China and India, for example, are well known for spreading their own propaganda and disinformation, while also going to great lengths to exert control through platforms. These governments are simply too powerful to land on a state-sponsor of terror sanctions list. Increasing the scope of such designations would not solve the problem. Instead, platforms need to hold government discourse online to a higher standard altogether, regardless of whether governments are considered democratic, historical security partners to the US or major global economic powerhouses.

Treating Ukraine as an exception

The current war in Ukraine thus raises two overlapping problems with the GIFCT and with the larger content moderation policies shaped by Silicon Valley's evolving relationship with repressive governments. First, there is not yet a clear standard for how content moderation is applied in line with international law. International law prohibiting incitement to hatred and violence, as well as propaganda for war, has existed for decades, enshrined in Article 20 of the International Covenant on Civil and Political Rights. Yet it has rarely been implemented, as defining propaganda can be difficult and subjective, and seems potentially in contradiction with Article 19's guarantee of free expression. Because Article 20 primarily holds governments, rather than the public, accountable for war propaganda, states have been wary of implementing it. The US, with its unique degree of free expression, has been particularly skeptical due to potential conflicts with its First Amendment.

Yet if tech companies like Facebook and Twitter had grappled more seriously with international law, rather than scrambling to respond as events unfold, there might have been a more coherent response to Russian information warfare by now. This points to the second problem: instead of developing such a standard, platforms are treating Ukraine as an exception. Meta's president of global affairs, Nick Clegg, justified Facebook's decision to temporarily allow Ukrainians' hate speech against Russians as "a temporary decision in extraordinary and unprecedented circumstances". This not only creates confusion and anger over double standards, but also potentially establishes a dangerous precedent for other conflicts.

The complexity of information warfare overlooked: the case of Yemen

In late 2020, as I was working with Mnemonic to lead Yemeni Archive's project documenting attacks against Yemeni media and journalists, the team encountered a mass removal of tweets that were potential evidence. Most were affiliated with the Houthi media outlet al-Masirah, but many others were less clearly affiliated. Twitter offered a response to Mnemonic, but the platform has yet to provide a rationale for these particular accounts' removal beyond their presumed affiliation with an organization that the US government had labeled a foreign terrorist organization. In its final hours in office, the Trump administration designated the Houthis a foreign terrorist organization as part of its efforts to ramp up tensions with Iran, with which the Houthis have increasingly been aligned. The Biden administration's reversal of this decision did not restore these accounts' access to Twitter, and since the UN designated the Houthis as terrorists last month, their removal from platforms is now in line with GIFCT policy.

There is, without a doubt, a case to be made for removing Houthi content on platforms. They have been the most sophisticated actor disseminating disinformation in Yemen. But platforms must do so in a way that adheres to a transparent process, rather than simply relying on a highly politicized and opaque 'methodology' based on UN or US terror designations. Furthermore, it is imperative that platforms ensure online content is accessible to researchers and human rights investigators, so that war crimes committed by all parties can be documented and used in international court cases.

The current norm enshrined in the GIFCT fails to hold internationally recognized governments, in this case the Saudi- and UAE-led coalition, accountable for their own role in war propaganda. If state policies causing mass civilian deaths and destruction of civilian infrastructure consistently informed the GIFCT's standards for content moderation, a precedent would already exist for treating Russia's war on Ukraine as grounds for such a designation. But few have argued in favor of designating Russia a state sponsor of terrorism, and even fewer have made this case for Saudi Arabia or the UAE for their actions in Yemen.

Politicized or uneven removals of social media content meant to address incitement to violence or disinformation arguably raise the stakes of inadvertent fallout in the form of censorship. Because it is by now well documented that all sides in Yemen have engaged in severe violations of both human rights and humanitarian law, poorly defined removal policies that target only Houthi-affiliated media end up providing additional fuel for the group's propaganda and claims to legitimacy.

The need for an approach rooted in international human rights law (IHRL)

When it comes to content moderation on extremism and propaganda inciting violence, social media companies need to have a serious reckoning over consistency and context. International law is complex for a reason: context matters. But this is all the more reason to engage with its different components and to tackle difficult questions on freedom of expression and propaganda. Adherence to both Articles 19 and 20 requires from platforms a careful balancing act which, it seems, cannot be achieved with current automated content moderation policies. At the same time, rightfully removed propaganda must be treated like the evidence it is, and this is especially important in the very places to which platforms are least attentive: non-English-speaking settings, mostly in the Global South. Finally, governments must be subjected to the same scrutiny the GIFCT has applied to non-state purveyors of extremist content: to Putin and Assad as much as to ISIS, and to Saudi Arabia and the UAE as much as to the Houthis.

Caroline Tynan recently worked as Research Manager for the Committee to Protect Journalists in New York, and as an independent consultant to the Berlin-based Mnemonic. She completed her PhD in political science at Temple University in 2019, and is the author of Saudi Interventions in Yemen: a Historical Comparison of Ontological Insecurity (Routledge 2020). Views here are her own.

Photo by Pixabay