A few months after the UK Government published a White Paper on its approach to artificial intelligence regulation and invited comments from stakeholders, Prime Minister Rishi Sunak has announced that the UK will host a global summit on the regulation of AI in autumn this year. Since the publication of the White Paper, and in light of the frenetic pace at which AI technology is developing, there have been high-profile calls, not just from politicians but also from leading industry figures and academics, to put robust regulatory standards in place quickly to mitigate AI risks.
So, what is the ultimate purpose of the UK’s proposed global summit and what could it achieve?
The UK’s White Paper, published in March this year, was primarily focused on the domestic approach to regulation. It proposed a set of overarching regulatory principles (safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress), to be applied in different regulatory contexts by expert sectoral regulators, each relying on their existing powers. Regulators were also encouraged to publish guidance on how the principles apply in their sector or area. This contrasts with the more prescriptive legislative framework being developed through the EU’s AI Act.
Crucially, the UK’s White Paper also included an Annex setting out the UK’s intention to foster the development of international AI standards, arguing that the UK is well positioned to do so as it ranks third in the world for AI publications and for the number of AI companies. It pointed to existing forums for multilateral engagement on AI, such as the OECD AI Governance Working Party, the G7 and the Council of Europe Committee on AI (the latter of which is currently developing an AI human rights treaty).
There has been much talk of the AI Act ultimately becoming the de facto global standard if major industry players are forced to comply with it and apply their compliance mechanisms across their global operations. However, the development and implementation of EU legislation of such technical complexity and significance will inevitably take some time.
Mr Sunak’s summit appears to be an attempt to bring together political and industry leaders from across the globe to design an internationally coherent approach to mitigating the risks of AI, which can then shape national regulation. His announcement referred to the need for multilateral agreement on “guardrails” to prevent the risks that have been raised in public debate.
International regulatory standards are by no means a novel idea. They exist across many sectors and have in several cases succeeded in securing coherent, coordinated approaches to regulating risk where problems cross borders. Such standards are most necessary when risks affecting global organisations and activities are not constrained by geographical boundaries. Their development is typically achieved through discussion and consensus-building between experts, industry and political leaders. A challenge of the scale, importance, complexity and global impact of the risks arising from AI development and use therefore appears a natural candidate for international standard setting.
There are precedents which the parties to the UK’s planned summit can look to for inspiration.
The UK has not yet announced which countries, companies and other stakeholders will be invited to the summit, but its success will require the broadest possible participation and engagement, including from those who have already started down a domestic path to regulation. The active involvement of the organisations and institutions from across the world that have already contributed their own blueprints for the future of AI governance will be particularly critical to ensuring a truly global impact.
In terms of the UK’s own domestic approach, its regulators have begun considering how to implement the AI principles outlined in the White Paper, with the Competition and Markets Authority launching a review into the competition and consumer protection impacts of AI foundation models. With a general election on the cards next year, the opposition Labour Party has also been setting out its vision, indicating that it would introduce a licensing regime for AI development akin to those used in the regulation of medicines and nuclear power.
What is clear is that, whichever government is in power, the UK will seek to play an influential role in the development of a global AI regulatory framework, in line with its vision of a Global Britain at the heart of responding to global challenges and leading policy debate on the international stage. This may involve the establishment of a new intergovernmental organisation for the safe and secure development of advanced AI. Mr Sunak’s AI adviser, Matt Clifford, has stated that a global regulatory framework ensuring appropriate auditing and assessment of AI tools before they are released is needed within the next two years to ensure AI safety. The UK will hope to be at the centre of achieving that goal.
Authored by Eduardo Ustaran, Charles Brasted, Dan Whitehead, and Telha Arshad.