Abstract
The decision to regulate artificial intelligence (AI) has far-reaching consequences. How budding applications of AI technology should be addressed depends on their effects. Building on Orly Lobel’s taxonomy of regulatory tools, this article describes how regulation should be carefully tailored to avoid harm while maximizing social welfare. Part I examines the foundational difficulties in governing AI, including industry influence over regulation and deficiencies in enforcement. Part II elaborates on Lobel’s framework, detailing the benefits and limitations of a variety of tools, such as voluntary standards, soft law mechanisms, and public-private partnerships. It describes how bringing in diverse stakeholders can yield a more practical approach to AI governance but cautions against evaluations of AI that overlook its effects on areas such as access, autonomy, privacy, and the environment. Part III introduces the legislative carve-out as a potential instrument of AI governance. Using the 21st Century Cures Act’s exclusion of certain Clinical Decision Support (CDS) software from FDA oversight as a case study, it evaluates the carve-out’s implications for innovation, safety, and physician liability. The article concludes by advocating a nuanced approach to AI governance that furthers innovation while mitigating risks, underscoring the importance of tailoring regulation to the degree of likely harm.
Recommended Citation
Brenda M. Simon, Bespoke Regulation of Artificial Intelligence, 57 Loy. L.A. L. Rev. 989 (2025).
Available at: https://digitalcommons.lmu.edu/llr/vol57/iss4/3