Why the EU needs a supranational agency to regulate AI

By Pramuan Bunkanwanicha and Diego Abellán Martínez - 04 January 2024

Regulating artificial intelligence has become a major challenge for policymakers, and the laws recently adopted by the European Union are not enough. Pramuan Bunkanwanicha and Diego Abellán Martínez write that countries must collaborate and create a supranational regulatory body to promote AI's safe, secure, and peaceful use. Otherwise, countries competing for AI investment will have an incentive to apply loose laws to gain a competitive edge.

Disruptive innovations accelerate economic growth by modifying markets. Artificial intelligence (AI) is set to become the most significant disruptive innovation because it will profoundly affect people’s lives and the market structure of all industries. Andreas Kaplan and Michael Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”.

AI is already transforming companies' business models, and many of its applications (in industries such as healthcare, automotive, and finance) are expected to change people's lives. Regulators and business leaders are anticipating some negative consequences. In 2020, the European Parliament's think tank assessed the potential adverse effects on the labour market, estimating that 14 per cent of jobs in OECD countries were potentially automatable and that a further 32 per cent could be severely affected by AI. In a recent interview, Arvind Krishna, IBM's CEO, supported this view, predicting that AI could replace around 30 per cent of the company's back-office workers within five years (some 7,800 jobs). However, quantifying these consequences remains complex, and complementary effects between AI and humans (rather than substitution) may yet emerge.

AI poses one of the greatest challenges to governments, which face a difficult task: maximising the efficiency gains from home-grown technology while minimising its adverse effects.

Risks and challenges

AI’s capacity to perform routine and non-routine cognitive tasks generates high expectations of productivity improvements. However, there are risk factors associated with its use:

Impact on the labour market

AI is disruptive: using it to automate processes will potentially affect every sector of the economy. The main concern is that workers will be replaced by the technology, potentially generating massive layoffs. This effect is unlikely to be homogeneous. High-skilled workers have more options to adapt and use AI as a complementary tool to improve productivity, which may not be the case for many white-collar workers performing routine cognitive tasks. Two additional effects could amplify the impact: 1) productivity gains may accrue to shareholders rather than to labour, increasing income inequality; and 2) emerging economies may lose their labour-cost advantage, shifting investment towards developed countries where automation is already established, widening the economic gap between countries and ultimately increasing forced migration.

Biased algorithms

Algorithms are black boxes and may contain bias in their design, which can cause real harm. A relevant case affected the Netherlands: in 2022, the Dutch government admitted that its tax authority had used racially discriminatory algorithms to spot childcare benefit fraud, leading to wrongful accusations and irreparable damage to the victims. The black-box nature of algorithms can also produce undesired outcomes such as monopolistic behaviour: if an algorithm determines that a monopolistic equilibrium is the best solution, it can devise strategies to reach it early on, eliminating competitors before they become a threat. Even well-implemented algorithms can generate distrust through their lack of transparency. Apple Card was accused of gender discrimination a few months after its launch in August 2019; while the news raised alarms about AI bias, the subsequent investigation found no evidence of unlawful discrimination.

Cyberterrorism

With AI technology, bad actors can severely damage democracy by manipulating elections through the dissemination of fake information, doing so more effectively by identifying the right audience, timing, and channel to influence voters' will. The United Nations Office of Counter-Terrorism (UNOCT) identifies further risks of malicious AI use, highlighting AI as a powerful tool to facilitate terrorism through physical attacks with drones and self-driving cars, cyberattacks on critical infrastructure, and more effective incitement to violence on social media.

How governments should control the risks

Contrary to the belief in market self-regulation, government intervention is necessary to limit the adverse effects of AI. So far, countries have made individual efforts at different speeds: China, the US and the European Union have all recently announced AI regulation. The EU's Artificial Intelligence Act, the world's first comprehensive AI law, is a step in the right direction. The initiative aims to ensure that AI developed and used in Europe meets safety and transparency standards. However, even though the co-rapporteurs of the AI Act propose the creation of an EU agency and academic research shows the need for one, the act requires each member state to establish its own agency to handle all AI questions. Spain has already created Europe's first such national supervisory body.

The need for a supranational agency

Even though some consider a global watchdog premature, it is necessary. The main concern with purely national laws is governments' incentive to issue loose regulations to become more competitive and attract AI investment. Another concern is that powerful domestic players, mainly large corporations and economic interest groups, may capture the benefits of regulation. Both factors significantly limit the effectiveness of individual national policies in controlling the harmful effects of AI.

The comparison with nuclear energy illustrates the need for coordinated AI regulation. Atomic power can cause destruction and devastate entire regions; it can also improve citizens' welfare, for example by producing energy. Just as the International Atomic Energy Agency (IAEA) was created in 1957 to promote and control the use of nuclear technology and to ensure it is not used for military purposes, AI regulation needs global coordination through a supranational regulatory body that promotes the technology's safe, secure, and peaceful use.

Pramuan Bunkanwanicha is a Professor of Finance at ESCP Business School (Paris campus), as well as the school's Faculty Dean. His research lies at the intersection of family business, political connections and finance. 

Diego Abellán Martínez is a Global Executive PhD student at ESCP Business School. He is a co-founding partner of Outliers Ideas, an independent firm that implements illiquid investments in the European market, with a particular focus on Spain. 

This blog post is based on Market regulation and disruptive innovation: The case of Artificial Intelligence, part of ESCP Business School's New technologies and the future of individuals, organisations, and society impact paper series. It first appeared on LSE Business Review. The post represents the views of its author(s), not the position of LSE Business Review or the London School of Economics and Political Science.

