Accelerating Digital Repression and Its Existential Threat to Democracy

By Chris Ogden and Olivia Hagen - 19 June 2023

Chris Ogden and Olivia Hagen provide the conclusion to Global Policy's e-book on 'Digital Repression: Causes, Consequences and Policy Responses'. The e-book's chapters can all be found here and the full e-book will be available in late summer.

This e-book has provided an in-depth analysis of digital repression, a growing threat to democratic governance globally. Comprising eleven chapters written by leading scholars and policymakers, it has highlighted how the rapid expansion of new and emerging information and communication technologies (ICTs) has significantly increased states’ capacity for repression and social control. This ever-growing technological capacity poses a serious threat to internet freedom and human rights, and can potentially have a devastating – and irrevocable – impact upon societies worldwide. Although digital repression is often associated with autocracies, many of the contributors have also shown how democracies utilise repressive technologies, albeit less frequently, as they are subject to more significant normative and constitutional constraints (Feldstein 2021).

Through the diverse perspectives presented in this book, stakeholders at the local, national and global levels can now better understand the intricate environment of digital repression and develop effective strategies to combat this growing threat. As such, this e-book serves as a valuable resource for those seeking to safeguard internet freedom and human rights in the face of digital repression. The project explored the various facets of digital repression through four distinct themes, each delving into an important aspect of this phenomenon.

The first theme, emphasised by Steven Feldstein, Andrea Kendall-Taylor and Erica Frantz, focused on identifying and understanding digital repression. In Chapter 1, Feldstein shed light on the underlying causes of digital repression – and dispelled some common misconceptions surrounding it – noting that ‘policymakers should look at regime incentives, political interests, and resource capacity to better understand why regimes acquire and deploy repressive technologies’. Frantz and Kendall-Taylor’s Chapter 2 then considered the complex and multi-faceted reasons behind states adopting or abstaining from digital repression tactics, looking specifically at regime type, digital capacity and levels of wealth.

The second theme then looked into the question of responsibility in digital repression, with contributions from Marcus Michaelsen, Xiao Qiang and Adrian Shahbaz. In Chapter 3, Michaelsen illuminated how autocrats employ digital repression tactics beyond their borders, including phishing campaigns, and examined the associated risks of such a strategy. Chapter 4 by Xiao then investigated China’s role in global digital repression through three key dimensions, namely the export of surveillance technology, investment in digital infrastructure and influencing international organisations. Finally, in Chapter 5, Shahbaz investigated private sector companies’ involvement in the digital repression nexus, elucidating their complicity as either unwitting or unscrupulous agents of state repression.

The third theme accentuated the perils of digital repression, and featured insights from Jessica Brandt, Anita Gohdes and Jaclyn Kerr. In Chapter 6, Brandt scrutinised the utilisation of digital repression by democracies and the resulting implications for democratic governance. In turn, Gohdes, in Chapter 7, examined whether ICTs primarily benefit states or civil society, ultimately identifying three spheres of control: criminalising civil society content, weaponising digital infrastructures and manipulating the information space. Finally, Kerr, in Chapter 8, assessed the ‘dictator’s digital dilemma’, exploring how autocracies navigate the delicate balance between complete internet control and fostering economic development, thereby helping us to decipher the evolution of digital repression.

The concluding theme then underscored effective policy responses to digital repression, featuring contributions from Allie Funk, Richard Crespin, Caroline Logan, Ana Blanco and Jennifer Earl. In Chapter 9, Funk outlined practical strategies for states to counter digital repression at the local, national and international levels, including more effective multilateral coordination, bolstering national protections for human rights online and increasing investment in local actors. Subsequently, Chapter 10 by Crespin, Logan and Blanco highlighted eight ways in which multinational corporations can combat digital repression, so as ‘to ensure their platforms promote an open exchange of information and are not used as the weapon of choice by autocrats and their allies’. In the final chapter, Earl discussed how non-governmental organisations (NGOs) and people’s movements can oppose digital repression, specifically by applying existing resistance techniques, making repression risky, and keying mitigation tactics to different kinds of digital repression.

Generative AI’s Exponential Threat

In light of the recent release of generative AI language models such as OpenAI’s ChatGPT and Google’s Bard, understanding how governments employ digital repression and how to respond to it has become even more crucial. Disinformation researchers have voiced concerns that these models could be harnessed as potent tools for spreading misinformation at an exponential rate. Whilst disinformation ‘is not a new problem’ (Sanders and Schneier 2023), with Facebook, for example, removing over a billion fake accounts a year that generate ‘fake news’ (ibid.), experts warn that rampant AI technology can make disinformation easier to produce on an industrial scale, and thus more challenging to stop.

With personalised chatbots that can mimic language, tone and human logic, disinformation could be spread in ever more credible and persuasive ways (Hsu and Thompson 2023). A 2020 study by the Center on Terrorism, Extremism, and Counterterrorism at the Middlebury Institute of International Studies found that GPT-3, the technology behind ChatGPT, had an impressive knowledge of extremist communities and could create online content that mimics that produced by such groups (Hsu and Thompson 2023). Although OpenAI has policies in place to prevent the creation of harmful or biased content and offers moderation tools to protect against misuse (OpenAI 2023), these measures are unlikely to be entirely effective. As ChatGPT itself has acknowledged, it ‘may occasionally produce harmful instructions or biased content’, and Sam Altman, CEO of OpenAI, has noted that AI can be used to manipulate voters and target disinformation (Fung 2023).

In addition to concerns about the spread of disinformation, ChatGPT and similar AI technologies could also make ‘democracy even more messy’ (Cowen 2022), as they have the potential to intervene in democratic regulatory processes. In the US, for example, there is a public comment period before new regulations take effect, which interested parties could flood with the help of ChatGPT, much as the Russian Internet Research Agency attempted to influence the 2016 US elections (Sanders and Schneier 2023). Experts note that there are currently no effective mitigation tactics against such disinformation, adding to the complexity – and ambiguity – of democratic processes (Hsu and Thompson 2023). As a result, stakeholders must be aware of the potential impact of both known and unknown AI technologies on democratic systems and develop appropriate strategies to mitigate these risks.

In autocracies, where digital repression has become a large part of the repressive toolkit, the threat to internet freedom and human rights is further amplified by the advent of AI technologies. For example, in the years leading up to the 2021 military coup in Myanmar, Facebook turned into an ‘echo chamber of anti-Rohingya content’ (Amnesty International 2022), allowing the military and radical Buddhist nationalist groups to spread disinformation targeting the Muslim Rohingya community. The consequences of this disinformation campaign were devastating, culminating in the military’s 2017 crackdown on the Rohingya, in which they were subjected to widespread atrocities, including murder, rape and torture, forcing hundreds of thousands of people to flee to neighbouring Bangladesh.

An Amnesty International report from 2022 also revealed that Facebook ‘knew or should have known’ (Amnesty International 2022) that its algorithms were not merely spreading but actively intensifying the dissemination of anti-Rohingya disinformation. This active role played by Facebook’s platform ultimately contributed significantly to the Rohingya genocide (Amnesty International 2022). Facebook later revealed that a key reason disinformation was allowed to flow on its platform was the lack of Burmese-speaking content moderators, with the company having only two such specialists available as of early 2015 (Solon 2018). This example underscores how AI has the capacity to contribute to the rapid spread, intensification and even normalisation of digital repression across different ICT platforms. Furthermore, it highlights the urgent need for stakeholders to proactively recognise the implications of AI technologies and develop robust strategies – regulatory, educational and practical – to counteract their negative impact on internet freedom and human rights.

In an authoritarian context, the development of potent AI software can therefore turbocharge digital repression and authoritarian tactics. In countries like Myanmar, where the state lacks the incentive to moderate online content, AI could facilitate the mass production of disinformation, perpetuating hatred and exacerbating the persecution of marginalised groups and activists. In more advanced autocratic states, with China as the poster child, AI technology could also be used much more systematically by leaders to deeply manipulate information, heighten social control and bolster regime survival. Once developed domestically, such technology could then be exported to other autocracies.

- - -

AI technology can – and most likely will – be exported in efforts to influence and subvert political processes in established democracies. Such efforts are entirely conceivable vis-à-vis the coming 2024 general elections in the United States and India, and those in the United Kingdom in 2025. We can thus expect new AI-powered versions of Cambridge Analytica to target voters individually and collectively on an industrial scale, in a highly specific, evolving and manipulative manner. Such tactics will inflame the highly polarised – and emotionally charged – political atmosphere within these countries and elsewhere, significantly disrupting the conduct and outcome of these elections. If unchallenged, this technology will therefore be a destabilising, frightening and destructive force that poses a major existential threat to the world’s oldest, largest and most essential democracies. Such an attack would invigorate authoritarian regimes and risk tipping humanity into an autocratic future.

Dr Chris Ogden is Senior Lecturer / Associate Professor in Asian Security and Asian Affairs in the School of International Relations at the University of St Andrews, Scotland. His research interests concern the global rise of India and China, great power politics, shifting world orders, authoritarianism, the Asian Century, Hindu nationalism, and the interplay between national identity, security and domestic politics in South Asia (primarily India) and East Asia (primarily China). Chris’s latest book is The Authoritarian Century: China’s Rise and the Demise of the Liberal International Order (Bristol University Press), and he was also the Series Consultant for the 2023 BBC documentary series India: The Modi Question. For more information, see http://chris-ogden.org

Olivia Mills Hagen is currently in her final year of an MA (Hons.) in International Relations at the University of St Andrews and an intern for Global Policy Online. Before university, she completed her National Service, spending a year in Northern Norway with the Norwegian Army’s Artillery Battalion. During her time at St Andrews, Olivia has been the director of the Lumsden Leadership Summit, a platform that invites successful women to speak to the student body and help them become the next generation of leaders. As director, she focused the summit on sustainability, inviting women whose diverse careers shared sustainability as a common denominator. Her academic research centres on the intricate and multifaceted phenomenon of digital repression, as well as international development, the foreign policies of India and China, and force and statecraft.

Photo by Brett Sayles

References

Amnesty International. 2022. “Myanmar: Facebook’s Systems Promoted Violence against Rohingya; Meta Owes Reparations – New Report.” https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/.

Cowen, Tyler. 2022. “ChatGPT Could Make Democracy Even More Messy.” Washington Post, December 6, 2022. https://www.washingtonpost.com/business/chatgpt-could-make-democracy-even-more-messy/2022/12/06/e613edf8-756a-11ed-a199-927b334b939f_story.html.

Feldstein, Steven. 2021. The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance. Oxford: Oxford University Press.

Fung, Brian. 2023. “Mr. ChatGPT Goes to Washington: OpenAI CEO Sam Altman Testifies before Congress on AI Risks.” CNN Business, May 16, 2023. https://edition.cnn.com/2023/05/16/tech/sam-altman-openai-congress/index.html.

Hsu, Tiffany, and Stuart A. Thompson. 2023. “Disinformation Researchers Raise Alarms about A.I. Chatbots.” The New York Times, February 8, 2023, sec. Technology. https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html.

OpenAI. 2023. “OpenAI API.” https://platform.openai.com/docs/guides/moderation/overview.

Sanders, Nathan E., and Bruce Schneier. 2023. “How ChatGPT Hijacks Democracy.” The New York Times, January 15, 2023, sec. Opinion. https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.

Solon, Olivia. 2018. “Facebook’s Failure in Myanmar Is the Work of a Blundering Toddler.” The Guardian, August 16, 2018. https://www.theguardian.com/technology/2018/aug/16/facebook-myanmar-failure-blundering-toddler.
