On 30 July 2020 the UK Information Commissioner's Office (ICO) published new guidance on AI and data protection. The guidance is intended to provide organisations that are using or developing artificial intelligence (AI) technologies with practical recommendations on the steps they should take to comply with data protection law.
While privacy professionals will be familiar with many of the topics addressed in the guidance, it also raises a number of challenges specific to the use of AI that may be less commonly encountered. Many of these issues, such as the possibility of inherent bias, inaccuracies in model outputs and the difficulty of transparently explaining how decisions are made, arise from characteristics particular to AI and will often require positive interventions to meet regulatory expectations.
Consistent with the ICO's general approach to compliance, the guidance emphasises the importance of organisations taking a risk-based approach to AI. First, there should be an assessment of the risks to the rights and freedoms of data subjects that may arise in the circumstances. This should be followed by the identification and implementation of appropriate technical and organisational measures to mitigate those risks.
For organisations that are looking to use or develop AI (or are already doing so), some of the key issues identified by the ICO which need to be considered include:
Careful consideration needs to be given to the controllership status of each party involved in the use, provision and development of AI, taking into account the particular circumstances. The roles of developers and service providers may be particularly unclear, as their status can change over the product lifecycle. For instance, where personal data is used to train a model, the organisation responsible for the training is likely to be a controller. However, that same organisation may act as a processor when it makes the model available to its customers.
Concerns about the potential for inherent bias and discriminatory outcomes arising from decisions taken through the use of AI have been widespread in recent years. The ICO emphasises that preventing bias is a key component in ensuring that processing is considered fair and protects individuals' rights and freedoms under the GDPR. Organisations should look to identify the risks of potential bias in their AI models and deploy technical measures to mitigate those risks, such as modifying the training data or the underlying algorithms. In the UK, the presence of bias should be assessed by reference to what constitutes discrimination under the Equality Act 2010.
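By way of illustration, the minimal Python sketch below shows one simple technical check of the kind such measures might include: comparing the rate of positive outcomes across protected groups and flagging a material disparity. The column names, data and 0.8 threshold are all hypothetical choices for demonstration, not values taken from the ICO guidance.

```python
# Illustrative sketch only: compares an AI model's positive-outcome
# rate across protected groups. All names, data and thresholds are
# assumptions for demonstration purposes.
import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Proportion of positive predictions within each group."""
    return df.groupby(group_col)[pred_col].mean()

def disparity_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest; 1.0 means parity."""
    return rates.min() / rates.max()

# Hypothetical model outputs for two groups.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0],
})

rates = selection_rate_by_group(df, "group", "prediction")
ratio = disparity_ratio(rates)
print(rates)
print(f"Disparity ratio: {ratio:.2f}")
if ratio < 0.8:  # threshold chosen purely for illustration
    print("Potential bias: investigate training data and model design.")
```

A check of this kind is only a starting point; which fairness metric is appropriate, and what level of disparity is acceptable, will depend on the context and on the requirements of the Equality Act 2010.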
Where AI is being used to make predictions or decisions about a particular individual, it is important that there is a reasonable degree of confidence in their accuracy. While this does not mean that outputs from AI models need to be 100% accurate, the ICO expects reasonable steps to be taken to ensure that potentially incorrect inferences are corrected and errors are minimised.
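One practical way of minimising the impact of errors, sketched below, is to route low-confidence predictions to human review rather than acting on them automatically. The sketch is our own illustration, not a method from the ICO guidance; it assumes a scikit-learn-style classifier, and the model, data and threshold are hypothetical.

```python
# Minimal sketch: predictions below a confidence threshold are flagged
# for human review instead of being acted on automatically. Model,
# data and threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(10, 4))
probs = model.predict_proba(X_new)   # per-class probabilities
confidence = probs.max(axis=1)       # confidence in the top class
REVIEW_THRESHOLD = 0.75              # illustrative value

for i, c in enumerate(confidence):
    if c < REVIEW_THRESHOLD:
        print(f"Record {i}: confidence {c:.2f} -> flag for human review")
    else:
        print(f"Record {i}: confidence {c:.2f} -> automated decision")
```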
Being able to explain why an AI model reached a particular inference or prediction is vital to the GDPR's principle of transparency. The ICO expects organisations to provide clear and detailed information about the basis on which automated decisions are taken about individuals. This will likely include the reasons for a decision, the data used to make it and details of the technical steps taken to ensure the AI operates in a fair and unbiased manner. The ICO has separately published extensive guidance on this topic, 'Explaining decisions made with AI', produced with The Alan Turing Institute.
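One generic technique that can feed into such explanations is permutation importance, which ranks how strongly each input feature influences a model's outputs. The sketch below is illustrative only: the feature names and data are invented, and the ICO guidance does not prescribe this or any other particular method.

```python
# Illustrative sketch: permutation importance as one possible input to
# an explanation of automated decisions. Feature names and data are
# hypothetical; this is not a method prescribed by the ICO.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["income", "tenure", "age", "noise"]  # hypothetical
X = rng.normal(size=(300, 4))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by two features

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Print features from most to least influential on model performance.
for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda t: -t[1]
):
    print(f"{name}: {importance:.3f}")
```

Feature-level rankings of this kind are not a complete explanation on their own, but they can help organisations identify, and then articulate to individuals, which inputs drive a given decision.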
Special category data may be used across an AI product's lifecycle. For example, a facial recognition system may use biometric data to train the model to recognise a person's characteristics. Equally, the ICO acknowledges that special category data may be used in testing to check for potential bias between particular groups, such as different ethnicities. When doing so, careful consideration will need to be given to which of the GDPR's conditions for processing special category data can be satisfied. That may have to be explicit consent in some circumstances, but it may also be possible to rely on an alternative condition under the UK Data Protection Act 2018.
The ICO highlights a number of particular security risks arising from the use of AI which will need to be assessed, taking into account the particular circumstances. One example given is the potential for 'model inversion' attacks, in which a threat actor uses a model's outputs to infer or reconstruct personal data about the individuals whose data was used to train it.
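A full model inversion attack is beyond the scope of a short example, but a related and simpler probe, often called membership inference testing, can indicate how exposed a model may be: if a model is markedly more confident on its training records than on unseen records, it may be memorising (and could therefore leak) personal data. The sketch below is our own illustration, with invented data and an arbitrary gap threshold.

```python
# Simplified leakage probe (membership-inference style), not a model
# inversion attack itself. A large confidence gap between training and
# unseen records suggests memorisation risk. Data and threshold are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X_train = rng.normal(size=(100, 5))
y_train = rng.integers(0, 2, size=100)   # random labels force memorisation
X_unseen = rng.normal(size=(100, 5))

# Deliberately overfitted model: unrestricted trees on random labels.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_conf = model.predict_proba(X_train).max(axis=1).mean()
unseen_conf = model.predict_proba(X_unseen).max(axis=1).mean()

print(f"Mean confidence on training records: {train_conf:.2f}")
print(f"Mean confidence on unseen records:   {unseen_conf:.2f}")
if train_conf - unseen_conf > 0.1:  # illustrative gap threshold
    print("Large gap: model may be memorising personal training data.")
```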
We advise organisations that are already using AI models, or are planning to use them in the future, to take proactive steps towards compliance, including: