2024-2025 Global AI Trends Guide
The announcement also marks a shift in tone on AI, from discussion of “safety and risk” to “opportunity and growth”, positioning the UK as forward-looking and open to innovation. As discussed below, the announcement could have implications for the UK’s developing regulatory framework for AI, which is more sector-specific and principles-based than the EU’s comprehensive and prescriptive AI Act, and it aligns with expectations that the forthcoming UK AI Bill will be targeted at advanced foundation models and limited in scope.
What is the AI Opportunities Action Plan?
The AI Opportunities Action Plan (the “Plan”) was commissioned by Peter Kyle (Secretary of State for Science, Innovation and Technology) from Matt Clifford (technology entrepreneur and adviser), who was asked to devise an AI action plan for the British Government. It sets out a roadmap for Britain to become a “world leader in AI”.
The Plan includes 50 recommendations and is split into three key sections: (1) lay the foundations to enable AI; (2) change lives by embracing AI; and (3) secure our future with homegrown AI. The key recommendations to note are as follows:
Why does this matter?
The AI Opportunities Action Plan is an essential component of the Government’s existing “pro-growth and pro-innovation” economic and technology strategy for the UK. This strategy was central to the Government’s manifesto and has remained its primary mission since the election last July. Until now, however, the Government has been fairly quiet on AI, previously highlighting the need for protection against AI risks. The adoption of the Plan is arguably the first concrete commitment to championing AI investment in the UK that we have seen from the new Government, and it responds to calls from industry to embrace the technology.
The Government’s announcement further suggests that the forthcoming AI Bill, announced last year, will be (as expected) light-touch and narrow in scope, focusing on ‘frontier’ AI risks, to avoid UK regulation being perceived as inhibiting AI development and investment. This approach is in marked contrast with that of the European Union, which will regulate the development, use and distribution of a wide range of AI systems and models across industry sectors.
By comparison, the UK’s Action Plan continues to promote a sector-specific approach to regulation. For example, it proposes accountability measures for existing regulators, who will be required to publish annually how they have enabled AI-driven innovation and growth in their sector, placing responsibility on those regulators to secure growth as well as safety. At the same time, the Plan recommends that, if evidence demonstrates that individual regulators are not promoting innovation, the Government introduce more radical changes to the regulatory model for AI, for example by empowering a central body.
What’s next?
The Government has confirmed that we can expect the publication of draft legislation, which is likely to be consulted on, relating to critical risks associated with frontier AI. It will be particularly interesting to see whether this future bill focuses solely on the primary developers of foundation models, or also brings into scope the wider ecosystem of organisations involved in the fine-tuning, configuration and deployment of these technologies.
The UK will also likely want to ensure its approach is aligned, as far as possible, with that of the US, and this will mean having early conversations with the incoming Trump administration later this month and in early February. The evolving global AI regulatory landscape reinforces the importance of being flexible and proactive in response to change.
Authored by Dan Whitehead, Robert Gardener, Telha Arshad and Pemi Arowojolu.