The G7 - comprising seven of the world’s leading economies - has published a set of principles designed to promote the safe use of artificial intelligence systems, together with a code of conduct to assist organisations in meeting those principles.
The International Guiding Principles on Artificial Intelligence (the “Principles”) have been developed under the “Hiroshima Process” which was established at the G7 Summit in May 2023, as part of coordinated efforts by a number of international bodies (including the OECD and the EU-U.S. Trade and Technology Council) to establish global guardrails and standards for AI. The accompanying International Code of Conduct for Advanced AI Systems (the “Code of Conduct”) contains a set of rules that AI developers are encouraged to follow, on a voluntary basis, to mitigate risks throughout the AI lifecycle.
In the statement accompanying the publication of the Principles and the Code of Conduct on 30 October 2023, the G7 highlighted that both documents will be reviewed and updated continually to ensure that they remain relevant given the fast-moving nature of AI technology. The Principles and the Code of Conduct come at a critical time, with the global AI Safety Summit convening in the UK on 1 and 2 November 2023, an event the G7 specifically refers to in its statement. One of the key Summit objectives is to discuss the measures that individual organisations should take to enhance the safe development of frontier AI.
The Code of Conduct expressly recognises that different jurisdictions may take their own approaches to implementing its rules. For EU Member States, the Artificial Intelligence Act (the "AI Act"), which is expected to be finalised by early next year, will provide prescriptive and legally binding rules on AI development and use, and it is likely that the AI Act will serve as a template on which other jurisdictions model their own AI legislative frameworks.
It is notable that on the same day the G7’s Principles and Code of Conduct were published, U.S. President Joe Biden issued an Executive Order on “Safe, Secure and Trustworthy Artificial Intelligence”, calling for the establishment of new standards for AI safety and security. It is not yet clear how the U.S. approach will compare with that of the G7.
The G7 countries (the United States, Canada, France, Germany, Italy, Japan and the UK) established the Hiroshima AI Process with the objective of creating a comprehensive policy framework that fosters the development of safe, secure and trustworthy AI systems and mitigates the risks arising, in particular, from generative AI. The top five risks identified by the G7 are the spread of disinformation and manipulation, intellectual property infringement, threats to privacy, discrimination and bias, and risks to safety.
The Hiroshima AI Process includes the Principles and the Code of Conduct, which are discussed in turn below.
The Principles have been designed with a view to being endorsed by organisations from the private sector, public sector and academia. They are intended to help manage risks in AI development on a transitional basis whilst governments develop a more enduring, regulation-based approach for advanced AI.
The European Commission has already noted that the Principles are consistent with the provisions of the AI Act (which is in the final stages of negotiation), and has expressly welcomed the Code of Conduct.
The Principles provide guidance for organisations developing AI systems, including foundation models and generative AI. Eleven key principles are promoted.
The Principles encourage a risk-based approach, involving risk mitigation measures such as security controls, data and IP protection, and authentication mechanisms, as well as active monitoring of AI systems to prevent misuse. Additionally, the Principles recommend implementing transparent internal AI governance structures, disclosing AI governance policies, and sharing information relating to incidents involving AI systems.
The Code of Conduct contains practical advice on how organisations can comply with the Principles. It also identifies key risk areas for advanced AI systems, which organisations should monitor as appropriate.
The Principles and the Code of Conduct provide guidance and standards for the development of AI, albeit on a voluntary basis. A new chapter in AI regulation will begin when the AI Act becomes law and its prescriptive requirements become legally binding.
In the meantime, the conversation will continue at an international level as to how industries can benefit from the many opportunities presented by AI technological developments without loss of control, risks to safety, or erosion of democratic values and ethics.
The Principles and the Code of Conduct will continue to be reviewed, including through ongoing multi-stakeholder consultations, to ensure that they adapt and remain relevant as AI rapidly evolves.
Undoubtedly, they will be among the topics of discussion at the AI Safety Summit, as the risks and safe development of AI are key aspects of the Summit's objectives.
Authored by Louise Crawford and Sam Krips.