Kay Firth-Butterfield

Atlantic Dialogues: Our Future with AI Hinges on Global Cooperation

Kay Firth-Butterfield’s article, “Our Future with AI Hinges on Global Cooperation,” was published in Atlantic Dialogues: On AI, Society and what comes next.

Since large language models began making headlines in the fall of 2022, millions of words have been written about the dangers of AI. Those of us who work on these technologies and their implications have been talking about this since 2014, but now the conversation has gone mainstream—so much so that it risks drowning out necessary discussion of how to achieve AI’s many benefits, including real progress on our most pressing challenges.

The solution is governance. The AI world needs the public’s trust to achieve the benefits of AI, and it won’t get there without regulation. We must ensure the safety of the technology as it is used today, a practice known as Responsible AI, while also looking to the future. More than 60 percent of Americans say they are concerned about AI’s negative impacts, according to an AI Policy Institute poll from spring 2023, but without strong laws we will neither prevent those harms nor have the tools to deal with them when they arise.

Yet right when we need it most, public trust in AI in democratic societies is falling at an alarming rate. In a recent Luminate survey, 70 percent of British and German voters who identified as understanding AI said they were concerned about its effect on their elections. Similarly, an Axios/Morning Consult poll showed that more than half of Americans believe AI will definitely or probably affect the 2024 election outcome, while more than one-third expect AI to erode their confidence in the results. More generally, two out of five American workers are worried about losing their jobs to AI, according to an American Psychological Association poll, while Gallup found that 80 percent do not trust companies to self-govern their use of AI. We will never realize the technology’s economic and societal benefits without addressing these concerns.

There is common ground, however: in 2020, PwC surveyed more than 90 sets of ethical AI principles from groups around the world and found that all of them agreed on the need for nine ethical concepts, including accountability, data privacy, and human agency. Now, governments need to work together to figure out how to make these concepts a reality, building a coalition of the willing across nations that can do the hard work of planning for an uncertain future.

If we continue to simply react to technological advances without thinking ahead, there is a very real risk that we will arrive in 2050 to find that we live in a world that no longer meets our needs as humans. The EU has thus far chosen a risk-mitigation approach, which addresses current problems but not the essential issue of how humans wish to interact with AI in the future. Individual U.S. states are enacting their own laws, which could slow innovation and make cooperation more difficult. It is guaranteed that future generations will work beside AI systems and robots. Because regulation of AI has been slow to develop, regulators and individuals are falling back on existing laws to drive best practices. And because the law treats AI as a decision-making tool, liability falls on the tool’s user or developer, not on the AI itself. Rather than simply attempting to mitigate harm, we should be creating best practices and policies that ensure our children live in a human-focused world served by AI, rather than as humans in an AI world.

By working together, democratic governments around the world, acting with stakeholders from civil society, academia, and business, could create laws not to address every specific situation, which would be impossible, but instead to outline specific requirements that organizations around the world must follow when designing, developing, or using AI systems. This must be a priority because many users of AI have little understanding of the harmful effects that might result even when they believe they are using it for good. A proactive set of regulations would require AI development teams to adopt proven best practices and adhere to all existing and new legislation for creating Responsible AI systems from the outset.

It is tempting to think the domestic governance gap might be filled by international regulation or treaties, but there are risks to this approach: the U.N. Security Council is often at an impasse, and despite calls from the Secretary-General and smaller nations, we have waited, without result, since 2013 for an agreement on the control of lethal autonomous weapons. International treaties generally prohibit or clarify things that already exist; if we are unable to achieve even that, it will be very hard for the international community to agree on proactively designing policy for a world acceptable to all of us.

The U.N. is expected to name members of a high-level panel on AI, which is a welcome development, but it is unlikely that the creation of an advisory board will result in meaningful regulation as quickly as we need it. The world simply does not have five years to figure out its next steps.  

But international cooperation need not run through the U.N. Promising suggestions include emulating the model of the European Organization for Nuclear Research (CERN), an intergovernmental organization with 23 member states, or Gavi, the Vaccine Alliance. Taking that path, together with increasing access to the internet, would ensure that the Global North does not have unilateral control over AI technology, reduce inequality, and allow AI to serve many different cultures. Governments around the world would come together to envision a positive future for their citizens with AI and create the regulations necessary to achieve it.

Governance is hard. True global governance is even harder. Even this faster path will take time, requiring companies designing, developing, and using AI to self-regulate in the meantime, with full support from their boards and C-suites. But ultimately, collaboration is necessary to build a world in which humanity benefits from AI rather than being forced to adapt to it. A comprehensive approach is essential, and we must act now.
