AI could kickstart a new global arms race – we need better ways to govern it before it's too late
There is a lot of money to be made from Artificial Intelligence. By one estimate, the market is projected to hit US$36.8 billion by 2025. Some of this money will undoubtedly go to social good, like curing illness, disease and infirmity. Some will also go to better understanding intractable social problems like wealth distribution, urban planning, smart cities, and more “efficient” ways to do just about everything. But the key word here is “some”.
There’s no shortage of people touting the untold benefits of AI. But once you look past the utopian/dystopian and techno-capitalist hyperbole, what we are left with is a situation where various stakeholders want to find new and exciting ways to part you from your money. In other words: it’s business, not personal.
While the immediate benefits of AI might be clear from a strategic business perspective, the longer-term repercussions are not. It's not just that the future is impossible to predict, complex technologies are hard to control, and human values are difficult to align with computer code; it's also that, in the present, it is hard to hear the voices calling for temperance and judiciousness over the din of companies clamouring for market advantage.
This is neither a new nor a recent phenomenon. Whether it was the social media boom, the smartphone "revolution", or the commercialisation of the world wide web, if there's money to be made, entrepreneurs will try to make it. And they should. For better or worse, economic prosperity and stability depend on what brilliance can be conjured up by scientific minds.
But that's only one side of the coin. The flipside is that prosperity and stability can only be maintained if equally brilliant minds work together to ensure we have durable ways to govern these technologies: legally, ethically, and for the social good. In some cases, this might mean agreeing that there are simply certain things we should not do with AI, some things from which profit should not be derived. We might call this "conscious capitalism" – but it is, in fact, now a societal imperative.
#AIEthics
There are structural problems in how the AI industry is shaping up, and serious asymmetries in the work that is being done. It's all well and good for large companies invested in presenting themselves as the softer, cuddlier, but no less profitable, face of this new technological revolution to tout hashtags like #responsibleAI or #AIEthics. No rational person would object to either, but they should not distract from the fact that hashtags aren't coherent policy. Effective policy costs money to research, devise, and implement – and right now, there is not enough time, cash, brainpower and undivided attention being devoted to building the robust governance infrastructure that will be required to complement this latest wave of technological terraforming.
There are people out there doing the thinking and implementing that need to be done on the law, policy and governance side, but they are being drowned out by the PR campaigns, social media "influencers" and marketers who want to turn a profit from AI, or to tell you how they can help your company do so.
Ultimately, our reach exceeds our grasp. We are far better at building new, exciting and powerful technologies than we are at governing them. To an extent, this has always been the case with new technologies, but the gap between what we create and the extent to which we can control it is widening and deepening.
Over the course of my PhD, where I researched long-term strategies for AI governance and regulation, I was offered some sage advice: "If you want to ensure you're remembered as a fool, make predictions about the future." While I try to keep that in mind, I am going to go out on a limb: AI will fundamentally remake society beyond all imagination.
Our commitment to ensuring safe and beneficial AI should amount to more than hashtags, handshakes and “changing the narrative”. It should be internalised into the ethos of AI development. Technical research must go hand in hand with law and policy research on both the public and private side. With great power comes great shared responsibility – and it’s about time we recognise that this is the best business model we have for AI going forward.
If we are going to try to socialise the benefits of AI across society – as the familiar refrain goes – we need to get serious about the distribution of money across the AI industry today. Public and private research and public engagement have a critical role to play in this, even if it's easier (and cheaper) to co-opt them into in-house research. We need to build a robust government-led research infrastructure in the UK, Europe and beyond to meet head-on the challenges that AI and other "tech" will pose. This means we need to think about more than just data protection, algorithmic transparency and bias.
We also need to get serious about how our legal and political institutions will need to adapt to meet the challenges of tomorrow. And they will need to adapt, just as they have proven able to do in the face of earlier technological changes, whether it was planes, trains, automobiles or computers. From legal personhood to antitrust laws, or criminal culpability to corporate liability, we are starting to confront the incommensurability of certain legal norms with the lived reality of the 21st century.
The challenges of tomorrow
AI is a new type of beast. We cannot do governance as usual, which has meant waiting for the latest and greatest "tech" to appear and then frantically reacting to keep it in check. Despite protestations to the contrary, we must be proactive in engaging with AI development, not reactive. In the parlance of regulation, we need to think ex ante and not just ex post. The hands-off, we-are-just-a-platform-and-have-no-responsibility-here tone of Silicon Valley must be rejected once and for all.
If we are going to adapt our institutions to the 21st century we must understand how they have adapted before, and what can be done today to equip them for the challenges of tomorrow. These changes must be premised upon evidence; not fatalistic conceits about the machines taking over, not philosophical frivolity, not private interests. We need smart people on the law and policy side working with the smart people sitting at the keyboards and toiling in the labs at the companies where these engines of tomorrow are being assembled line by line. Some might see this as an unholy alliance, but it is, in fact, a noble goal.
The governance and regulation of AI is not a national issue; it is a global one. The untold fortunes being poured into the technical side of AI research need to start making their way into the hands of those devoted to understanding how we might best actualise the technology, and how we can in good conscience use it to solve problems where there is no profit to be made.
The risk we run is that AI research kick-starts a new global arms race: one where finishing second is framed as tantamount to economic hara-kiri. There is tremendous good that the AI industry can do to help change this, but so far these good intentions have not manifested themselves in ways conducive to building the robust law, policy and social-scientific infrastructure that must complement the technical side. As long as this imbalance continues, be afraid. Be very afraid.
Christopher Markou is on the Legal Expertise Committee of Responsible Robotics, an NGO that promotes the responsible design, development, implementation, and policy of robots embedded in our society. He receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC). This post first appeared on The Conversation.
Photo credit: GLAS-8 via Foter.com / CC BY-NC-ND