Against the backdrop of the recent release of the close-to-final EU AI Act and the UK government’s response to the AI White Paper, discussions about the need for AI regulation in the UK are gaining momentum. The timely introduction of the Artificial Intelligence (Regulation) Bill by Lord Holmes on 22 November 2023 marks a crucial step forward. This article looks at the rationale for the Bill and its key aspects.
The Artificial Intelligence (Regulation) Bill (the “Bill”) was introduced as a Private Members’ Bill by Lord Holmes of Richmond in the House of Lords on 22 November 2023. The primary purpose of the Bill is to establish a framework for the regulation of AI in the UK. This involves putting AI regulatory principles on a statutory footing, and establishing a central AI Authority responsible for overseeing the regulatory approach to AI.
Commenting on the rationale of the Bill, Lord Holmes provided his views in Hogan Lovells’ Digital Transformation: The Influencers Podcast (link to the podcast here):
“We need to legislate… in a way which is entirely possible, in fact necessary, to hold consumer protection, citizen rights and pro innovation, all in the same hand. It's essential that any legislation or any regulation is built on those key principles.”
In addition to the social risks for citizens, Lord Holmes also stressed the importance of creating a clear framework for the recognition and protection of intellectual property rights that arise in the use of AI systems:
“Work is urgently needed and it can't be that we take a wait-and-see approach here, because if we wait and see, it will be desperately difficult to try and reassert those rights retrospectively. Wait-and-see, for me, is never the way to achieve optimal outcomes. We need to lead, and IP and copyright is but one very clear example of why we need to lead and why we need to lead right now.”
Private Members’ Bills, which may originate in either the House of Commons or the House of Lords, provide a distinct avenue for individual legislators to advance novel legislative proposals. They can play a valuable role in the legislative process by stimulating meaningful discourse on important emerging issues and potentially influencing society at large. Private Members’ Bills that capture the public imagination have successfully become law, and there are various precedents for Private Members’ Bills receiving Royal Assent, such as the Computer Misuse Act 1990 and the Tobacco Advertising and Promotion Act 2002. With the recent release of the close-to-final EU AI Act and the UK government’s response to the AI White Paper, public discussion about the need for AI regulation in the UK is gaining momentum. The Bill’s strong alignment with current priorities, coupled with growing cross-party support for AI regulation in the UK, has bolstered hopes for its potential passage.
The Bill offers valuable insights into regulating the development and usage of AI systems in the UK. It may serve as a useful framework if the government chooses to adopt a statutory AI regulation. Some key aspects covered under the Bill include:
AI is defined broadly in the Bill as “technology enabling the programming or training of a device or software to—
(a) perceive environments through the use of data;
(b) interpret data using automated processing designed to approximate cognitive abilities; and
(c) make recommendations, predictions or decisions;
with a view to achieving a specific objective.”
The Bill’s definition of AI is comprehensive, covering data-driven decision-making. The broad scope ensures the framework remains adaptable to future advancements in the field.
Similar to the EU AI Act, the definition of AI includes generative AI, meaning deep or large language models able to generate text and other content based on the data on which they are trained.
The Bill requires the AI Authority to have regard to the principles that—
*These principles can be amended by the Secretary of State by regulations.
The Bill incorporates the five cross-sectoral principles listed in (a) above and outlined in the UK government’s White Paper. It further elaborates on these principles, offering guidance for the proposed AI Authority. The focus on these principles aligns with the UK government’s recent response to the AI White Paper and is crucial for building public trust in AI.
The Bill empowers the Secretary of State to create the AI Authority, which will have the following functions and responsibilities:
The proposed AI Authority would play a vital role in coordinating existing regulatory bodies and promoting responsible AI practices. Its functions, including supporting testbed and sandbox initiatives and conducting horizon-scanning, will be critical for encouraging innovation.
The Bill envisages that existing regulators, including the ICO and the CMA, will continue their roles in regulating and providing guidance on AI within their respective fields.
The Secretary of State, in consultation with the AI Authority, must by regulations require any business which develops, deploys or uses AI to have a designated AI officer, who will be responsible for ensuring the safe, ethical, unbiased and non-discriminatory use of AI by the business, and for ensuring that data used by the business in any AI technology is unbiased.
The requirement for an AI Responsible Officer within businesses that develop, deploy, or use AI signifies a commitment to responsible AI governance. While details regarding skillsets and potential overlap with Data Protection Officers under the GDPR need clarification, this can be addressed during the refinement process.
The Secretary of State, in consultation with the AI Authority, must by regulations provide that:
Regulations under this section may provide for informed consent to be express (opt-in) or implied (opt-out) and may make different provision for different cases.
The Bill emphasises transparency through clear labelling and informed consent for users of AI systems. This could be an important step forward, not only to protect users but also to protect the creative industries, particularly in light of the UK government shelving, in February 2024, a long-awaited voluntary AI copyright code setting out rules on the training of AI models using copyrighted materials such as books and music. This clause would ensure that AI developers can only use such copyrighted materials after seeking informed consent and while complying with all applicable IP and copyright obligations.
According to the Bill, the AI Authority must implement a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI, and must consult the general public and such persons as it considers appropriate on the most effective frameworks for public engagement, having regard to international comparators.
Commenting on the importance of public engagement, Lord Holmes stated: “That public engagement is absolutely critical and AI needs to be able to prove itself trustworthy. Otherwise, it won't really achieve any of the optimal benefit, but we may well be saddled with many of the potential downsides. And we've seen how to get this right. Take IVF, for example, in vitro fertilisation. What could be more terrifying? What could be more science fiction than bringing life into being in a laboratory test tube? Why is it now seen as a positive part of our society? Because years ago, a colleague of mine, Baroness Warnock, had the Warnock Commission to do exactly this; to engage with people, to engage with their concerns, their issues, and to have that real sense of engagement around an issue, so we get to a positive societal benefit from it. That's what we need with AI. That's why, as I say, the public engagement clause is probably, to my mind, the most important of all the ones that are in the bill.”
This Bill presents a crucial opportunity and a blueprint for regulating AI systems and building trust in the safe use, development and deployment of AI systems in the UK. By fostering responsible AI development, the UK government can unlock the potential of AI technology while mitigating potential risks.
The Bill completed its first reading in the House of Lords in November 2023 and will proceed to its second reading, scheduled for 22 March 2024. There will be subsequent committee and report stages prior to the third reading. Following the completion of those stages in the House of Lords, all of them will be repeated in the House of Commons, provided Parliamentary time allows and cross-party support continues.
If you are interested in listening to the full conversation with Lord Holmes about the Bill, please click the link here. You can also learn more about the Bill and Lord Holmes’s work by visiting his website, his blog, or LinkedIn @LordChrisHolmes.
Authored by John Salmon, Louise Crawford and Daniel Lee.