Roundtable discussion on Global Standards for AI Governance and the UK's AI regulatory framework


A cross-practice Global Regulatory & IPMT team hosted a roundtable discussion on the key requirements that industry would like to see incorporated in the UK Government’s approach to developing its AI regulatory framework.

Lawyers Charles Brasted, Eduardo Ustaran, Dan Whitehead, Telha Arshad, and Imogen Ireland, together with policy professionals and in-house counsel from many of the leading global AI businesses, met with officials from the UK government’s Office for Artificial Intelligence to discuss the UK’s approach to AI regulation and its interplay with other existing or developing regulatory frameworks globally.

A packed room of representatives from leading technology, banking, consumer, energy, entertainment, media, and transport organisations discussed the challenges around designing proportionate and future-proofed regulation to secure AI safety and how the UK can shape global governance of AI, including through its upcoming global AI Safety Summit.

As the UK government seeks to set a gold standard domestically that delivers on its five principles for AI regulation, including safety, transparency, and fairness, there was agreement in the room that robust regulation is needed to create the right environment for artificial intelligence to flourish safely in the UK and beyond.

Below is a summary of key takeaways from the discussion:

Model for emerging AI regulation

  • Role of sector-by-sector regulation – Ensuring that regulation is risk-based and applied proportionately in different contexts supports sector-by-sector regulation, but this approach generates the risk of inconsistent application. The deployment by central government of AI expertise and resources to sector-specific regulators will be key, and there is industry appetite for a UK central coordinating regulator or statutory body.
  • Support for a hybrid approach – An entirely principles-based regime could also create inconsistency at the sectoral level, so there is scope for a hybrid approach: some prescription in overarching legislation (e.g. to define “high-risk applications” and clarify where in the supply chain liability sits), with a sector-based approach building on this (e.g. to outline what obligations are appropriate for high-risk applications in a particular sector).

Harmonisation of global AI regulation

  • Need for clear and consistent definitions at a global level – Divergence in regulatory regimes across different jurisdictions is unhelpful for global businesses and can be mitigated by the agreement of global standards, such as a shared definition of what “transparency” means in the context of AI regulation.
  • Role of global standards – Given its first-mover advantage and extra-territorial effect, the EU AI Act could become a de facto global framework for multinationals, but there is a role for global standards to plug gaps left by the AI Act or to influence change in areas where the AI Act falls short. Dialogue between the UK, US, and EU (as well as other major global players) is needed to facilitate this, and the UK’s AI Safety Summit provides an opportunity.

The UK’s iterative approach

  • Regulation needs to focus on the “input” as well as the “output” – There are, for example, critical IP issues with respect to the ownership of data used to feed generative AI models that need to be resolved, with appropriate involvement from central government.
  • Risk of a licensing regime – A regulatory approach which requires AI applications to be approved before release carries a real risk of stifling innovation and creating a barrier to entry for the smallest players.


Authored by Charles Brasted, Eduardo Ustaran, Dan Whitehead, Telha Arshad, and Imogen Ireland.
