
This article describes key technological terms and their relationship to the Responsible Corporate Officer Doctrine and the Food, Drug, and Cosmetic Act. It discusses how the government establishes a prima facie case of a violation of the FDCA and the real-world (rather than hypothetical) risk of prosecutions based on the use of AI. The article concludes with tips for mitigating those risks.
Although artificial intelligence (AI) is driving rapid innovation at an inhuman speed, the need for human intervention has never been greater. AI is also prompting the US Department of Justice (DOJ) to pursue innovative theories of criminal liability. One such innovation will likely relate to the Responsible Corporate Officer Doctrine (RCOD). Corporate executives should understand the risks associated with AI and the RCOD and prepare accordingly. While corporate executives across industries should take note, executives of companies dealing with most foods, food additives, dietary supplements, drugs, medical devices, tobacco products, and cosmetics should pay particular attention given the risks stemming from the Food, Drug, and Cosmetic Act (the FDCA).
At the outset, it is important to note that there is no universally accepted definition of “artificial intelligence.” Indeed, many people interchangeably—and improperly—use terms such as “AI,” “algorithm,” “machine learning,” and “generative AI.” Although the terms are related, each is a distinct concept. DOJ has acknowledged that “[a]rtificial intelligence [] generally refers to machine-based systems that can make predictions, recommendations, or decisions influencing real or virtual environments.” Artificial Intelligence and Civil Rights, Civ. Rts. Div., U.S. Dep’t of Just. (Oct. 1, 2024). In other words, AI is a broad field that focuses on creating systems that can perform tasks that typically require human intelligence.
Algorithms are the core building blocks of AI systems. An “algorithm” is essentially a set of step-by-step instructions (“rules”) designed to perform a specific task and produce a result (an “output”). For example, following a recipe (the rules) to make sweet potato pie (the output) is an algorithm.
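For readers who want to see the analogy in code, the short Python sketch below (purely illustrative; the function name and ingredient list are invented for this example) expresses an algorithm the same way: a fixed sequence of rules applied to inputs to produce an output.

    # A minimal, illustrative "algorithm": fixed, step-by-step rules that
    # turn inputs (ingredients) into an output (a pie).
    def make_sweet_potato_pie(ingredients):
        # Rule 1: confirm the required ingredients are present.
        required = {"sweet potatoes", "sugar", "eggs", "pie crust"}
        missing = required - set(ingredients)
        if missing:
            return "cannot bake: missing " + ", ".join(sorted(missing))
        # Rule 2: perform the steps in a fixed order.
        for step in ["mash the sweet potatoes", "mix in sugar and eggs",
                     "fill the crust", "bake until set"]:
            print("Step:", step)
        # The output of the algorithm.
        return "sweet potato pie"

    print(make_sweet_potato_pie(["sweet potatoes", "sugar", "eggs", "pie crust"]))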
Machine learning (ML) is a subset of AI that focuses on training algorithms to improve over time with minimal human intervention. A common example is a filter that learns to identify spam emails by analyzing many samples. A less common, but no less important, example could be a pharmaceutical company using ML to predict the efficacy of new drug compounds.
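As a purely illustrative sketch of the spam-filter example (assuming the open-source scikit-learn library is available; the sample emails and labels below are invented), note that the model is not given explicit rules for what counts as spam; it infers them from labeled examples:

    # Toy machine-learning example: the classifier "learns" from labeled
    # samples rather than being programmed with explicit spam rules.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = ["win a free prize now", "claim your free money today",
              "meeting agenda for Monday", "quarterly budget attached"]
    labels = ["spam", "spam", "not spam", "not spam"]

    vectorizer = CountVectorizer()            # convert text into word counts
    features = vectorizer.fit_transform(emails)

    model = MultinomialNB()                   # simple probabilistic classifier
    model.fit(features, labels)               # training step

    new_email = ["free prize waiting for you"]
    print(model.predict(vectorizer.transform(new_email)))  # likely ['spam']

The same pattern, training a model on historical data and then relying on its predictions, is what a pharmaceutical company would be doing at much larger scale when using ML to screen drug compounds.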
Finally, generative AI is a subset of AI that focuses on creating new content, such as text, art, or music. In the pharmaceutical context, for example, a company might use this technology to accelerate the process of identifying compounds for possible new drugs.
As the everyday application of this technology continues to evolve, executives in companies regulated by the FDCA are uniquely vulnerable to novel theories of criminal prosecution.
Generally, the FDCA aims to ensure the safety of food, drugs, medical devices, and cosmetics. It lists several prohibited acts, including prohibitions against “[t]he introduction or delivery for introduction into interstate commerce of any food, drug, device, tobacco product, or cosmetic that is adulterated or misbranded” and “[t]he adulteration or misbranding of any food, drug, device, tobacco product, or cosmetic in interstate commerce.” 21 U.S.C. § 331(a), (b). An unusual feature of the FDCA is that certain violations are strict liability offenses. In other words, there is no requirement that the government prove “mens rea” or “criminal intent.” Indeed, a defendant can be convicted without proof of knowledge of the prohibited act or intention to perform the prohibited act.
A decades-old principle of the FDCA is that it “dispenses with the conventional requirement for criminal conduct—awareness of some wrongdoing. In the interest of the larger good, it puts the burden of acting at hazard upon a person otherwise innocent but standing in responsible relation to a public danger.” United States v. Dotterweich, 320 U.S. 277, 281 (1943). Ultimately, this principle has become known as the RCOD (also known as the Park Doctrine), wherein a violation of the FDCA does not require “awareness of some wrongdoing.” United States v. Park, 421 U.S. 658, 672 (1975). Instead, the government establishes “a prima facie case when it introduces evidence sufficient to warrant a finding by the trier of the facts that the defendant had, by reason of his position in the corporation, responsibility and authority either to prevent in the first instance, or promptly to correct, the violation complained of, and that he failed to do so.” Id. at 674–75.
The real-world consequences of the RCOD can lead to incarceration. In United States v. DeCoster, two commercial farm executives pled guilty to misdemeanor violations of the FDCA for introducing into interstate commerce eggs contaminated by salmonella. 828 F.3d 626 (8th Cir. 2016). “In their plea agreements, the DeCosters stated that they had not known that the eggs were contaminated at the time of shipment but stipulated that they were in positions of sufficient authority to detect, prevent, and correct the sale of contaminated eggs had they known about the contamination.” Id. In upholding the convictions, the court relied on the foundational principle that the corporate officers did not need to know that their company violated the FDCA in order to be convicted and sentenced to a period of incarceration.
The absence of a mens rea requirement for the convictions in DeCoster could provide a roadmap for a conviction where a corporate executive lacked knowledge about the outputs of AI. The stipulated lack of knowledge of the prohibited acts under the FDCA did not save the defendants in DeCoster, and lack of knowledge likely will not save a defendant accused of an FDCA violation stemming from the use of AI.
This risk is not merely hypothetical. DOJ has increased its scrutiny of AI in criminal prosecutions. Earlier this year, Deputy Attorney General Lisa O. Monaco announced “Justice AI,” an initiative to “understand and prepare for how AI will affect [DOJ’s] mission and how to ensure [DOJ] accelerate[s] AI’s potential for good while guarding against its risks.” Deputy Att’y Gen. Lisa O. Monaco, Remarks at the University of Oxford on the Promise and Perils of AI (Feb. 14, 2024). In the same announcement, Deputy AG Monaco made explicitly clear that “[g]oing forward, where Department of Justice prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI—they will.” Indeed, federal prosecutors already have the tools to carry out Deputy AG Monaco’s warning. For example, the US Sentencing Guidelines enhancements for use of a special skill (§ 3B1.3) and for employing complex, intricate, or sophisticated means (§ 2B1.1(b)(10)) are two mechanisms federal prosecutors could use to seek a stiffer sentence for a conviction under the FDCA stemming from unknowingly relying on the outputs of an AI system.
Given DOJ’s increased scrutiny surrounding AI and the potential for strict liability criminal prosecutions, corporate governance is more important than ever. Companies using cutting-edge AI technologies should consider reviewing training data sets, testing AI systems and their outputs, and reviewing controls designed to mitigate the risks related to undesirable outputs (e.g., an AI system contributing to a prohibited act under the FDCA). Periodic monitoring—human intervention—remains an essential method of mitigation despite technological advancements occurring at an inhuman speed.
Authored by Jason Downs.
Originally published by the ABA.