This article is the third in a series examining the range of legal areas impacted by artificial intelligence and machine learning. This article focuses on the potential benefits and associated risks of financial firms’ increasing reliance on AI and considers how existing financial services rules and guidance such as the Senior Managers and Certification Regime can be used to manage those risks and challenges.
As examined broadly in the first article in this series, the FCA, PRA and Bank of England published a joint Discussion Paper (DP5/22) on 11 October 2022 on the use of artificial intelligence (AI) and machine learning in financial services. DP5/22 focuses on how regulators should approach the “safe and responsible” adoption of AI in financial services and opens the debate on whether the regulation of AI in financial services can be managed through clarifications of the existing regulatory framework or whether a new approach is needed.
In particular, DP5/22 considers the additional challenges and risks that AI brings to firms’ decision-making and governance processes and how this might be addressed through existing regulatory rules and guidance.
This article considers how the Senior Managers and Certification Regime (SM&CR) could be applied to manage the potential regulatory challenges and risks posed by the use of AI in financial markets.
In its 2022/2023 Business Plan, published in April 2022, the FCA expressed its commitment to become a “data-led regulator”. In addition to exploring how it can use AI in discharging its own supervisory and enforcement objectives, the FCA also stated its intention to better understand how AI is changing UK financial markets. The FCA-commissioned Alan Turing Institute report, published in June 2021, and the final report of the UK’s AI Public-Private Forum (AIPPF), published in February 2022, highlight the risks and benefits of the use of AI in financial services. In its report, the AIPPF made it clear to the supervisory authorities that the private sector wants regulators to have a role in supporting the safe adoption of AI in UK financial services. A common theme of the Alan Turing and AIPPF reports and DP5/22 is an expectation that human involvement in the design, operation, governance and oversight of AI systems is necessary to offset the potential challenges and risks of AI.
Notwithstanding the potential benefits of AI, such as speed, scale and accuracy of outputs, as well as greater financial innovation and reduced costs, novel challenges and increased risks can arise, for example because AI models are able to learn the rules and alter model parameterisation iteratively. Such risks emphasise the need to involve humans in the decision-making loop and the AI process. As highlighted in DP5/22: “The human element can act as a valuable safeguard against harmful outcomes, providing contextual knowledge that may be outside the capability of a model, and identifying where an automated decision could be problematic, and therefore requiring further review”.
DP5/22 examines how human involvement in AI may apply at a number of levels including in the design and operation of the AI system. For the purpose of this article we will look further at the regulators’ expectations around the governance and operational oversight of AI.
DP5/22 suggests the SM&CR as a potential solution for the governance and oversight framework needed to manage the risks and challenges posed by AI. An FCA speech given by Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, on 9 November 2022 also mentions the SM&CR as an existing framework that could be applied to “the many new regulatory challenges posed by the use of AI in UK financial markets”. In September 2021, the International Organization of Securities Commissions (IOSCO) published a report on the use of AI and machine learning proposing global best practices for firms to address conduct risks associated with the development, testing and deployment of AI and machine learning. It is notable that the UK SM&CR is mentioned in this global report as an example of how senior managers are made ultimately accountable for the activities of the firm.
The existing FCA and PRA rules and guidance implementing the SM&CR emphasise senior management accountability and responsibility, which are relevant to the use of AI and the risks the regulators are looking to mitigate. Within the SM&CR, the FCA and PRA set out their expectations on the accountability of Senior Management Functions (SMF) for branches and subsidiaries.
In DP5/22, the regulators acknowledge that there is at present no dedicated SMF for AI within the SM&CR. Currently, technology systems are the responsibility of the SMF24 (Chief Operations function). Separately, the SMF4 (Chief Risk function) has responsibility for the overall management of a firm’s risk controls, including the setting and managing of its risk exposures. SMF4 and SMF24 apply to PRA-authorised SM&CR banking and insurance firms and FCA-authorised enhanced scope SM&CR firms, but are not requirements for core or limited scope SM&CR firms. This potential gap in the current regime means that certain firms may fall short of any future regulatory expectations around the governance and oversight of AI projects.
DP5/22 specifically requests feedback from stakeholders on creating a dedicated SMF and/or a new Prescribed Responsibility under the SM&CR. The regulators acknowledge that AI use may not yet have reached a level of materiality or pervasiveness sufficient to justify such changes, but the AIPPF has highlighted current uncertainty regarding the split of responsibilities and potential gaps in regulation for firms around AI. Further guidance on governance functions, roles and responsibilities could provide much-needed clarity.
Expanding the SM&CR certification regime to create a new certification function for AI is another possible solution suggested in DP5/22. Given the technical complexity of AI systems, it is key that the staff responsible for deploying or developing them are competent to do so. The current certification requirement for staff responsible for algorithmic trading extends to persons who: (i) approve the deployment of trading algorithms; (ii) approve amendments to trading algorithms; and (iii) have significant responsibility for monitoring, or deciding, whether or not trading algorithms are compliant with a firm’s obligations. Given the rapid developments in AI and machine learning technologies, regulators and firms may look to consider further whether the SM&CR will need to be extended to other individuals managing AI systems, including data analysts, data scientists and data engineers, who may not typically have performed roles subject to regulatory scrutiny.
The concept of ‘reasonable steps’ is a core element of the SM&CR. SMFs can be subject to enforcement action under s.66A(5) and/or s.66B(5) of FSMA if an area of the firm for which the SMF is responsible breaches regulatory requirements and the FCA and/or PRA can demonstrate that the SMF failed to take such steps as a person in the senior manager’s position could reasonably be expected to take to prevent or stop those breaches.
What may constitute ‘reasonable steps’ in an AI context is addressed in DP5/22 as an area needing further discussion. Will any reasonable steps taken in an AI context differ from current steps taken by SMFs and if so how?
PRA SS28/15, SS35/15 and DEPP 6.2 (‘Deciding whether to take action’) are the key reference sources on the ‘reasonable steps’ criteria under the SM&CR, setting out detailed, but not exhaustive, expectations on what constitutes reasonable steps and on how firms and SMFs can document and evidence them. However, this guidance was built on the PRA’s, FCA’s and FSA’s prior enforcement activity and supervisory experience, and was issued before autonomous decision-making technology such as AI was widespread; as a result, it does not explicitly refer to such technology. DP5/22 suggests that firms could consider what may constitute reasonable steps at each successive stage of the lifecycle of a typical AI system.
The use of the existing SM&CR framework for the oversight and governance of AI systems in firms appears to be a logical step, given the investment many firms have already made in operationalising the SM&CR. Nevertheless, further clarity and guidance will be needed to extend the SM&CR to an AI context. The existing SM&CR guidance was written before many of the current technologies existed, and therefore key aspects, such as how ‘reasonable steps’ will apply in an AI context and which roles will be in scope, will need to be carefully considered. Given the fast-moving and innovative pace of AI, any guidance may need to build in an element of flexibility to ensure new technology is captured.
Authored by Melanie Johnson.