“My lord, my need is sore! Spirits that I've cited my commands ignore.” Are today’s AI solutions like Goethe’s bewitched broomsticks – which the sorcerer’s apprentice has called to life and then desperately tries, in vain, to control with his commands? How can we effectively prompt, control and monitor the output of generative functionalities based on large language models – which, much like the poem’s broomsticks, are capable of human-like interactions?
The technology at the forefront of public debate is Generative Artificial Intelligence (GenAI). GenAI is based on AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio and video. The current text of the EU AI Act, based on the political agreement submitted on 9 December 2023 and leaked on 22 January 2024, also addresses transparency obligations for providers and users of GenAI under Art. 52 (1) AI Act.
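By way of illustration: what a machine-readable transparency marking could look like in practice. The sketch below is a loose illustration only – the AI Act calls for AI-generated content to be identifiable as such, but the field names and JSON schema here are invented for this example, not prescribed by the Act.

```python
import json
from datetime import datetime, timezone


def with_ai_disclosure(generated_text: str, model_name: str) -> str:
    """Attach a machine-readable provenance label to GenAI output.

    The schema below is illustrative only: the AI Act envisages
    machine-readable marking of artificially generated content,
    but does not mandate this (or any other specific) format.
    """
    return json.dumps({
        "content": generated_text,
        "provenance": {
            "artificially_generated": True,
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    })
```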
The use of GenAI has the potential to bring enormous benefits to businesses and individuals: GenAI tools can help with idea generation, content planning, research, review and editing, and ultimately, with content creation. GenAI can be used as standalone software, or as part of another service – for instance through integration into a social media app, an email inbox, or a search engine.
At the same time, from an ethical and societal as well as a legal point of view, the potential risks following in the wake of any new technology are as significant as its vast benefits. With GenAI, one of these risks is the auto-generation of illegal or harmful content. Conversely, GenAI can also be part of the solution to addressing online harms – embedded into content moderation processes, it enables the automated and efficient removal of illegal and violative content at scale. So let us look at both sides of the equation: First, what risks does GenAI pose in the context of content creation? Second, what role can it play in the content moderation that platform operators are already intensively engaged in? In that context, we also look at the central requirements for content moderation as freshly stipulated in the EU Digital Services Act (DSA) and the UK Online Safety Act (OSA). And finally: Does the presence of GenAI on both sides of the equation create benefits, or friction? Can the broomsticks effectively control each other? What lies ahead when we consider GenAI as a tool for the creation and the moderation of content alike?
Harmful and illegal content, from hate speech and disinformation campaigns (fake news) to counterfeit goods and fake social media profiles, has been a frequent phenomenon ever since the creation of the internet. GenAI could be abused by bad actors to amplify such material even further:
Where illegal and harmful content is shared by users of online services, it falls to the service providers to prevent its spread. Corresponding notice-and-takedown mechanisms and content moderation processes have long been well-established features of online platforms rich in third-party engagement, as part of the ongoing effort to curb the spread of illegal and harmful content – which is impossible to avoid entirely in the user-centric ecosystem of the internet.
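By way of illustration, a minimal sketch of the data a notice-and-takedown flow typically carries – all names, fields and statuses here are hypothetical and for illustration only, not drawn from any statute or platform API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class NoticeStatus(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    CONTENT_REMOVED = "content_removed"
    NOTICE_REJECTED = "notice_rejected"


@dataclass
class TakedownNotice:
    """A structured notice, loosely modelled on the elements a
    notice-and-action mechanism typically collects."""
    content_id: str
    reporter_contact: str
    alleged_violation: str   # e.g. "hate speech", "counterfeit goods"
    explanation: str         # why the reporter considers the content illegal
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    status: NoticeStatus = NoticeStatus.RECEIVED


def process_notice(notice: TakedownNotice, remove: bool) -> TakedownNotice:
    """Apply a (human or automated) review decision to a pending notice."""
    notice.status = (NoticeStatus.CONTENT_REMOVED if remove
                     else NoticeStatus.NOTICE_REJECTED)
    return notice
```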
Over the last year, GenAI has become central to the content moderation challenge across online services of all shapes and sizes – from social media to online marketplaces, app stores, gaming platforms, dating apps, fitness and other community platforms, booking sites, file sharing and cloud hosting services. The debate over how platforms should be required to tackle not only outright illegal content, but also contributions that are “merely” harmful and violate the operator’s terms of service, such as hate speech and fake news, plays an important part in the digital strategies of the EU and UK.
The Digital Services Act is the EU’s new central regime regulating how online platforms moderate content, ensure transparency of their advertising, and use algorithmic processes for ranking and recommendations. The DSA introduces a set of staggered obligations for all intermediary services, depending on which category they fall into – from mere conduit to hosting, online platform services, B2C ecommerce platforms, and, at the top of the ladder, very large online platforms (VLOPs) with more than 45 million average monthly active users in the EU. For these VLOPs – 19 designated so far, with more in the pipeline – the DSA has applied since August 2023. For all other intermediaries, it becomes applicable on 17 February 2024.
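The staggered structure can be pictured as a simple decision ladder. The sketch below illustrates the categories only and is not a legal classification tool – the 45 million figure reflects the DSA’s VLOP designation threshold; everything else is heavily simplified:

```python
def dsa_category(service_type: str, avg_monthly_active_users_eu: int) -> str:
    """Roughly map a service onto the DSA's staggered categories.

    Heavily simplified illustration: actual classification turns on
    legal criteria in the DSA, not merely on user counts.
    """
    VLOP_THRESHOLD = 45_000_000  # DSA threshold for VLOP designation

    if (service_type == "online_platform"
            and avg_monthly_active_users_eu > VLOP_THRESHOLD):
        return "very large online platform (VLOP)"
    return {
        "mere_conduit": "mere conduit service",
        "hosting": "hosting service",
        "online_platform": "online platform service",
        "b2c_marketplace": "B2C ecommerce platform",
    }.get(service_type, "unknown")
```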
Let us take a look at how the DSA applies to GenAI in various use cases, and which corresponding obligations it imposes:
Despite its horizontal applicability to a wide range of intermediary services, the DSA’s language is not a clear-cut fit when applied to GenAI use cases:
To the extent the DSA does apply to content generated with the help of AI, its most relevant content moderation obligations are the following:
The use of AI in content creation and moderation is part of both the challenge and the solution: Online services increasingly rely on AI systems to moderate the vast amounts of third-party content they host. Automated content filters and review algorithms can identify inappropriate material highly efficiently. GenAI, in particular, can proactively scan a service to identify illegal or harmful content (referred to by the OSA as “proactive technology”). The OSA’s definition of proactive technology (in section 231(10)) makes clear that it is intended to cover “technology which utilises artificial intelligence or machine learning”. The OSA also obliges service providers to include “clear and accessible provisions” in their terms informing users about any use of proactive technology to comply with the OSA’s duties, and to allow users to complain about the way proactive technology has been used in content moderation.
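To illustrate the mechanics, here is a minimal sketch of how such proactive technology might be wired into a moderation pipeline. A trivial keyword heuristic stands in for the trained ML or LLM classifier a real system would use; all names and thresholds are hypothetical:

```python
from typing import NamedTuple


class ModerationResult(NamedTuple):
    label: str    # "remove", "human_review" or "allow"
    score: float  # model confidence that the content is violative


# Stand-in for a real ML/LLM classifier ("proactive technology" in OSA
# terms). A production system would call a trained model here.
BLOCKLIST = {"example-slur", "example-scam-phrase"}


def classify(text: str) -> float:
    """Toy scorer: fraction of blocklist hits, capped at 1.0."""
    hits = sum(1 for token in text.lower().split() if token in BLOCKLIST)
    return min(1.0, hits / 2)


def moderate(text: str,
             remove_threshold: float = 0.9,
             review_threshold: float = 0.5) -> ModerationResult:
    """Route content based on model confidence: high-confidence hits are
    removed automatically, borderline cases go to human review."""
    score = classify(text)
    if score >= remove_threshold:
        return ModerationResult("remove", score)
    if score >= review_threshold:
        return ModerationResult("human_review", score)
    return ModerationResult("allow", score)
```

The design point is the routing, not the classifier: high-confidence hits can be actioned automatically, while borderline scores are escalated to human reviewers.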
In its draft codes of practice on the OSA’s illegal content duties, published in November 2023, Ofcom specified that, in order to comply with the code (and thereby demonstrate compliance with the OSA’s core duties to tackle illegal content), certain services would need to take automated content moderation measures, such as the use of “hash matching” technology to proactively detect and remove child sexual abuse material.
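Hash matching itself is conceptually simple: known illegal images are reduced to compact fingerprints, and new uploads are compared against that list. Below is a minimal average-hash (aHash) sketch using Pillow – real deployments rely on far more robust perceptual hashing schemes such as PhotoDNA or Meta’s open-sourced PDQ, and the distance threshold here is purely illustrative:

```python
from PIL import Image  # pip install Pillow


def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a simple perceptual hash: downscale to an 8x8 grayscale
    image, then set one bit per pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


def matches_known_hash(upload_hash: int, known_hashes: set[int],
                       max_distance: int = 5) -> bool:
    """Flag an upload whose hash falls within a small Hamming distance
    of any fingerprint on the denylist."""
    return any(hamming_distance(upload_hash, known) <= max_distance
               for known in known_hashes)
```

The distance threshold trades recall against false positives: a small threshold tolerates minor edits such as re-compression or resizing, while a larger one catches more derivative copies at the cost of more erroneous matches.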
Therefore, as with the DSA, AI will continue to exacerbate the problem of illegal and harmful content the OSA seeks to address, but will also be part of the solution deployed by online services to meet their new regulatory obligations.
GenAI is changing – challenging, facilitating, accelerating, standardising – how we communicate, create, and work in many aspects of our lives. Where user-generated content is concerned, GenAI is part of both the problem and the solution. Going forward, it will be imperative that the positive impact of GenAI on content moderation solutions is perceived as stronger than its contribution to the creation and dissemination of illegal and harmful content. User trust is an essential condition of our online ecosystem, and no online business model can operate sustainably without it.
In that context, content moderation is essential for creating a safe, predictable, and trustworthy online environment. Amidst all the excitement about the EU AI Act, the specific regulation of illegal content and online harms under the DSA and the OSA seeks to factor in automation and the use of AI both in content creation and in content moderation – but deficiencies are already apparent in both of these brand-new regimes. One thing is certain: Our understanding of the current generation of GenAI systems, and of their potential, is still very much incomplete and growing. Robust yet flexible, and thus future-proof, regulation is crucial at a juncture where technological progress is constant and fast-moving.
Authored by Anthonia Ghalamkarizadeh, Telha Arshad, Jasper Siems