Document Type

Article

Publication Date

Spring 2025

Abstract

Artificial Intelligence (“AI”) startups have taken center stage, rapidly disrupting conventional industries at an unprecedented pace with their groundbreaking innovations. Hailed by many as the most significant technological advancement of our era, AI and its profound societal impact have drawn heightened public and governmental scrutiny. The spotlight has recently fallen on OpenAI, the creator of ChatGPT, which weathered a tumultuous period marked by the ouster and subsequent rehiring of CEO Sam Altman, a board reconfiguration, and Altman’s later return to the board. Concerns over AI safety were offered as the rationale for OpenAI’s tandem nonprofit and for-profit corporate governance structure, which led to board friction, a management coup, and the defection of its superalignment team. Concerns over AI safety likewise underlie the corporate structures adopted at Anthropic and xAI.

This Article explores the innovative corporate governance models that have emerged from leading AI startups such as OpenAI, Anthropic, and xAI, assessing their long-term viability as these companies race against one another to build AI foundation models. Ultimately, it proposes a path toward improved governance by advocating an amendment to corporate law requiring a board-level AI Safety Committee at AI startups.
