G7 releases Guiding Principles and a voluntary Code of Conduct for AI developers

The G7 - comprising seven of the world’s leading economies - has published a set of principles designed to promote the safe use of artificial intelligence systems, together with a code of conduct to assist organisations in meeting those principles.

The International Guiding Principles on Artificial Intelligence (the “Principles”) have been developed under the “Hiroshima AI Process”, which was established at the G7 Summit in May 2023 as part of coordinated efforts by a number of international bodies (including the OECD and the EU-U.S. Trade and Technology Council) to establish global guardrails and standards for AI. The accompanying International Code of Conduct for Advanced AI Systems (the “Code of Conduct”) contains a set of rules that AI developers are encouraged to follow, on a voluntary basis, to mitigate risks throughout the AI lifecycle.

In the statement accompanying publication of the Principles and the Code of Conduct on 30 October 2023, the G7 highlighted that both documents will be continually reviewed and updated to ensure that they remain relevant given the fast-moving nature of AI technology. The Principles and the Code of Conduct come at a critical time, with the global AI Safety Summit convening in the UK on 1 and 2 November, an event the G7 specifically refers to in its statement. One of the key Summit objectives is to discuss the measures which individual organisations should take to enhance the safe development of frontier AI.

The Code of Conduct specifically recognises that different jurisdictions may take their own unique approaches to implementing its rules. For EU Member States, the Artificial Intelligence Act (the "AI Act"), which is expected to be finalised by early next year, will provide prescriptive and legally binding rules on AI development and use, and it is likely that the AI Act will serve as a template on which other jurisdictions model their own AI legislative frameworks.

It is interesting to note that on the same day the G7’s Principles and Code of Conduct were published, U.S. President Joe Biden issued an Executive Order on “Safe, Secure and Trustworthy Artificial Intelligence”, calling for the establishment of new standards for AI safety and security. It is not yet clear how the US approach will compare with that of the G7.

The Hiroshima AI Process

The G7 countries (the United States, Canada, France, Germany, Italy, Japan and the UK) established the Hiroshima AI Process with the objective of creating a comprehensive policy framework that fosters the development of safe, secure and trustworthy AI systems and mitigates the risks arising, in particular, from generative AI. The top five risks identified by the G7 are the spread of disinformation and manipulation, intellectual property infringement, threats to privacy, discrimination and bias, and risks to safety.

The Hiroshima AI Process consists of the following:

  1. analysis of priority risks, challenges and opportunities of generative AI;
  2. the Principles;
  3. the Code of Conduct; and
  4. project-based cooperation in support of developing responsible AI tools and best practices.

The Principles have been designed with a view to being endorsed by organisations from the private sector, public sector and academia. They are intended to help manage risks in AI development on a transitional basis whilst governments develop a more enduring, regulation-based approach for advanced AI.

The European Commission has already noted that the Principles are consistent with provisions of the AI Act (which is in the final stages of negotiation) and has expressly welcomed the Code of Conduct.

The Principles

The Principles provide guidance for organisations developing AI systems, including foundation models and generative AI. Eleven key principles are promoted:

  1. Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle;
  2. Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market;
  3. Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increased accountability;
  4. Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia;
  5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures, in particular for organizations developing advanced AI systems;
  6. Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle;
  7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content;
  8. Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures;
  9. Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education;
  10. Advance the development of and, where appropriate, adoption of international technical standards; and
  11. Implement appropriate data input measures and protections for personal data and intellectual property.

The Principles encourage a risk-based approach involving mitigation measures such as security controls, data and IP protection, and content authentication mechanisms, as well as active monitoring of AI systems to prevent misuse. Additionally, the Principles recommend that organisations implement transparent internal AI governance structures, disclose their AI governance policies and share information relating to incidents involving AI systems.

The Code of Conduct

The Code of Conduct contains practical advice on how organisations can comply with the Principles. It also identifies key areas of risk for advanced AI systems, which organisations should monitor as appropriate, including:

  • Increased chemical, biological, radiological and nuclear risks due to AI systems lowering barriers to entry for weapons development or use;
  • Offensive cyber capabilities;
  • AI models that are 'self-replicating' or able to train other models; and
  • Societal risks, such as bias and discrimination or violation of legal frameworks including data protection.

What can we expect to see next?

The Principles and the Code of Conduct provide guidance and standards for the development of AI, albeit on a voluntary basis. A new chapter in AI regulation will begin when the AI Act becomes law and its prescriptive requirements become legally binding.

In the meantime, the conversation will continue at an international level as to how industries can benefit from the many opportunities presented by AI technological developments without losing control, compromising safety, or eroding democratic values and ethics.

The Principles and the Code of Conduct will continue to be reviewed, including through ongoing multi-stakeholder consultations, to ensure that they can adapt and remain relevant to the rapidly evolving nature of AI.

Undoubtedly, they will be among the topics of discussion at the AI Safety Summit, as the risks and safe development of AI constitute key aspects of the Summit's objectives.

Authored by Louise Crawford and Sam Krips.
