The explosion of artificial intelligence (AI) offerings and integrations in recent years has sent sepia-toned twentieth-century legal doctrines scrambling to keep up with decidedly technicolor twenty-first-century technology. As more and more companies and consumers continue down the yellow brick road of using AI to buy and sell products and services, 2024 is set to be a year in which advertising law comes to the forefront to address the dramatic increase in false and misleading commercial information, generated by or associated with AI, entering the marketplace.
2023 brought countless legal questions and lawsuits over whether the training of generative AI constituted copyright infringement (or fair use), as regulatory and legal frameworks attempted to get their arms around what AI is or can be. As we move into 2024, the legal focus is poised to respond to AI’s undeniable potential – and nascent implementation – in advertising. In traditional consumer retail, there has been an explosion of AI-enabled “try-on” features, personalized product consultations, and chatbot influencers, as well as a surge of increasingly tailored advertising targeted on the basis of a consumer’s online and even offline behavior. And in the B2B context, companies are racing to make their products and services more efficient (and competitive) through AI enhancements and integrations, which they then tout in marketing those offerings.
AI’s growing prominence in advertising brings increased legal risk and scrutiny – from government regulators, competitors, consumer advocacy groups, and more. Each of these AI-related use cases implicates different aspects of regulation, and different regulators:
Consumer protection agencies have noted the potential discriminatory impacts of advertising practices based on biased algorithms, and the possible associated liabilities.
There has been an uptick in activity from the United States Federal Trade Commission (FTC) expressing concern over claims relating to AI, including how an AI system was trained, what benefits it provides, whether it functions as advertised, and whether a technology is properly characterized as "AI."
Potential exists for Lanham Act false advertising claims, particularly to the extent AI-related representations are deemed material.
Perennial advertising topics such as basic accuracy and claim substantiation – along with newer concerns like the potential use of deepfakes – are slated to rise to the forefront of public and legal awareness.
We’re already seeing increased activity across these channels. Last month, AI-generated video scams duped consumers into believing Taylor Swift was giving away “free” sets of Le Creuset cookware – in exchange for a “small shipping fee,” of course – with the ads using AI-generated versions of the singer’s face and voice. Ms. Swift is just the latest in a line of celebrities contending with AI versions of themselves being used to peddle deceptive products and services. Actress Scarlett Johansson, for example, sent a cease-and-desist letter back in November to the company behind an AI image and voice simulation mobile app that purportedly used “deepfakes” of Ms. Johansson in an advertisement for the app. The US Congress has taken note of the rise in deepfake advertising and introduced legislation to address this increasingly common – and deceptive – practice: the “Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2021” [DEEP FAKES Accountability Act] (H.R. 2395; 2021-2022 Session) and the “No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act of 2024” [NO AI FRAUD Act] (H.R. 6943; 2023-2024 Session) in the House of Representatives, as well as the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023” [NO FAKES Act] (discussion draft) in the Senate.
Even a standard advertising practice – product placement – is getting the AI treatment. Advertising companies like Rembrand are developing software platforms that use generative AI to insert virtual products into videos and photographs in post-production. The technology lets advertisers partner with content creators and embed sponsored placements directly into the content itself, where they cannot be skipped or ignored. It also raises a host of truth-in-advertising questions.
And, of course, just as the legal market has recently seen lawyers rely on case law wholly invented by AI (without checking the citations), advertisers proceed at their own risk if they publicize AI-generated advertising claims that lack the requisite substantiation or grounding in data, exposing themselves to potentially significant false advertising liability.
As AI advertising continues to take root, companies can expect increased regulation, litigation, and policymaking in this space. Advertisers must stay informed of these trends and should apply the principles of transparency, accuracy, and human oversight that guide the use of AI more broadly. If you or your company plan on using AI in your marketing – or marketing your AI – our extensive team of advertising law leaders around the globe is ready to advise you.