Fragmentation and the Future: Investigating Architectures for International AI Governance

The international governance of artificial intelligence (AI) is at a crossroads: should it remain fragmented or be centralised? We draw on the history of environment, trade, and security regimes to identify advantages and disadvantages of centralising AI governance. Some considerations, such as efficiency and political power, speak for centralisation. The risk of creating a slow and brittle institution, and the difficulty of pairing deep rules with adequate participation, speak against it. Other considerations depend on the specific design. A centralised body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and fragmented institutions could self‐organise. In sum, these trade‐offs should inform the development of the AI governance architecture, which is only now emerging. We apply the trade‐offs to the case of the potential development of high‐level machine intelligence. We conclude with two recommendations. First, the outcome will depend on the exact design of a central institution: a well‐designed centralised regime covering a coherent set of issues could be beneficial, but locking in an inadequate structure may prove a fate worse than fragmentation. Second, fragmentation will likely persist for now. The developing landscape should be monitored to see whether it is self‐organising or simply inadequate.

Policy Implications

  • Secretariats of emerging AI initiatives (for example, the OECD AI Policy Observatory, the Global Partnership on AI, the UN High‐level Panel on Digital Cooperation, and the UN System Chief Executives Board (CEB)) should coordinate to halt further regime fragmentation and, where possible, reduce it.
  • Academia has an important role to play in providing objective monitoring and assessment of the emerging AI regime complex, identifying conflict, coordination, and catalysts for addressing governance gaps, free of vested interests. Secretariats of emerging AI initiatives should be similarly empowered to monitor the emerging regime. The CEB appears particularly well placed and mandated to address this challenge, but other options exist.
  • Which AI issues and applications need to be tackled in tandem is an open question, and one on which the centralisation debate sensitively turns. We encourage scholars working across AI issues, from privacy to military applications, to organise venues to consider this vital question more closely.
  • Non‐state actors, especially those with technical expertise, will exert a potent influence in either a fragmented or a centralised regime. Their contributions should be drawn upon, but safeguards against regulatory capture are also needed.
  • The AI regime complex is at an embryonic stage, where informed interventions can be expected to have an outsized impact. The influence of academics as norm entrepreneurs should not be underestimated.
