By: Brian Brown
I. Introduction
On July 14, 2025, Anthropic, a leading artificial intelligence (“AI”) entity, announced that the U.S. Department of War awarded it a “two-year prototype other transaction agreement with a $200 million ceiling.”[1] The award provided Anthropic with the opportunity to “prototype frontier AI capabilities that advance U.S. national security.”[2] It enabled the integration of Anthropic’s products, including its Claude Gov model, to help “U.S. defense and intelligence organizations” with tasks such as “process[ing] and analyz[ing] vast amounts of complex data.”[3] However, by way of its usage policy, Anthropic maintained safeguard provisions to, among other things, restrict its models from being used for mass domestic surveillance and for fully autonomous weapons.[4]
The Department of War stated that it “will only contract with AI companies who accede to ‘any lawful use’” and demanded that Anthropic remove the safeguard provisions.[5] Anthropic insisted on its policies because such uses are too risky, unreliable, and incompatible with democratic values.[6] As a result, the Pentagon designated Anthropic a supply chain risk to national security and ordered federal agencies and military contractors to cease doing business with Anthropic.[7] Litigation commenced in federal court and is ongoing.[8] This dispute illuminates the intensity of the emerging competing interests at stake in the world of AI development.
As the competing interests that arise from the development and implementation of AI continue to emerge and materialize, an opportunity presents itself to examine, through a legal lens, how companies have anticipated and prepared for them. This blog briefly discusses the corporate frameworks, approaches, and mechanisms that AI companies have adopted and deployed to address these emerging controversies.
II. The Corporate Structure for AI Companies
AI companies tend to structure under the Delaware Public Benefit Corporation (“PBC”) framework.[9] A PBC is a corporate structure that is advantageous for mission-driven entrepreneurs seeking both profit and the protection to pursue a mission beneficial to the public.[10] The Delaware General Corporation Law states that in managing the PBC, the directors must balance the stockholders’ interests, the interests of those materially affected by the corporation’s conduct, and the public benefit stated in the PBC’s charter.[11] Fiduciaries of a PBC do not owe duties to any person solely because that person has an interest in the PBC’s public benefit.[12] Stockholders of a PBC may bring claims to challenge the directors’ performance of their duty to balance these interests, but a stockholder must own at least two percent of the PBC’s outstanding shares or, if the PBC is publicly traded, either two percent of the outstanding shares or shares worth at least $2 million in market value.[13] PBCs are, unless otherwise provided by statute, subject to ordinary corporate law rules, including derivative requirements, the business judgment rule, and the exculpation of liability under 8 Del. C. § 102(b)(7).[14]
The PBC particularly suits AI entities because the framework best supports the long-term nature of the industry.[15] The industry, as it continues to evolve, is subject to significant, far-reaching impacts, and it poses grave risks and uncertainties.[16] The PBC enables directors of AI companies, looking long-term, to balance the interests of the many stakeholders who may be affected by those impacts and risks against their investors’ pecuniary interests.[17]
III. Anthropic
Anthropic adopted the PBC as its corporate structure to remain for-profit while gaining greater freedom to pursue its mission to responsibly develop and maintain AI for the long-term benefit of humanity.[18] The PBC structure gives its “board the legal latitude to weigh long- and short-term externalities[.]”[19] The framework affords Anthropic a better opportunity to align its governance with its public benefit mission.[20]
However, Anthropic determined that the PBC structure, alone, was not sufficient in light of the “governance challenges [it] fores[aw] in the development of transformative AI.”[21] To combat this insufficiency, Anthropic adopted an innovative corporate governance framework—specifically, the Long-Term Benefit Trust (the “Trust”)—to pair with the PBC framework. Anthropic’s Trust, organized under Delaware law as a purpose trust, is a five-member independent body of financially disinterested trustees that participates in the selection and removal of Anthropic’s board of directors.[22] Anthropic characterizes it as a “different kind of stockholder” that can “primarily concern itself with [] long-range issues.”[23] The Trust, for example, “can ensure that the organizational leadership is incentivized to carefully evaluate future models for catastrophic risks or ensure they have nation-state level security, rather than prioritizing being the first to market above all other objectives.”[24] However, Anthropic is “not ready to hold [the Trust] out as an example to emulate;”[25] it is merely “a modest experiment.”[26]
IV. Other AI Companies: OpenAI and xAI
OpenAI took a different approach. It started out as a non-profit.[27] Then, in 2019, it estimated that it needed more capital to build its generative AI models, so it created a “bespoke structure” where the non-profit controlled a new for-profit corporation with a capped profit share for investors and employees.[28] Recently, OpenAI realized it needed even more capital than it initially anticipated, so it converted the for-profit corporation into a PBC.[29] It adopted a mission to “ensure the benefits of artificial intelligence to all of humanity.”[30] The converted “PBC now run[s] and control[s] OpenAI’s operations and business, while the non-profit” entity maintains oversight and charitable initiatives.[31] OpenAI seeks to continue to evolve, viewing its mission as a continuous objective.[32]
On the other hand, Elon Musk’s AI company, xAI, was formed in 2023 as a PBC under Nevada law.[33] The birth of xAI followed a dispute between Musk and OpenAI, and the tension has continued and recently resulted in a lawsuit.[34] While the litigation was ongoing, xAI terminated its own PBC status.[35] The company later merged with X (formerly known as Twitter) and “remained without its PBC structure.”[36] Musk’s AI company appears less engaged than its competitors in anticipating and planning for potential risks and societal impacts through refinements to its corporate structure or governance model.[37]
V. Conclusion
Scholars have suggested that utilizing innovative corporate structures and governance mechanisms to address externality concerns associated with the development of AI may ultimately prove ineffective.[38] Others have suggested that perhaps the PBC is not necessary to achieve desired social benefits and goals.[39] As new controversies emerge, like the intense national security dispute between Anthropic and the Department of War, investigating and monitoring how companies react may expose helpful patterns and suggest useful trends for the future.
About the Author

Brian is a third-year regular division student at Widener University Delaware Law School and serves as the External Managing Editor for Volume 51 of the Delaware Journal of Corporate Law. Brian earned his bachelor’s degree from Bucknell University and studied Political Science. Brian is currently an intern at the Delaware Supreme Court and will be joining a large corporate firm in Wilmington, Delaware in the fall.
[1] Statement from Dario Amodei on Our Discussions with the Department of War, Anthropic (Feb. 26, 2026) [hereinafter Statement from Dario Amodei], https://www.anthropic.com/news/statement-department-of-war. Dario Amodei is the CEO of Anthropic. Dario Amodei, Dario Amodei, https://www.darioamodei.com (last visited Apr. 13, 2026).
[2] Id.
[3] Id.
[4] See Statement from Dario Amodei, supra note 1; see also Usage Policy, Anthropic (effective Sept. 25, 2025), https://www.anthropic.com/legal/aup.
[5] Statement from Dario Amodei, supra note 1 (citing Secretary of War’s Memorandum to Senior Pentagon Leadership, Artificial Intelligence Strategy for the Department of War (Jan. 9, 2026), https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ARTIFICIAL-INTELLIGENCE-STRATEGY-FOR-THE-DEPARTMENT-OF-WAR.PDF).
[6] Statement from Dario Amodei, supra note 1 (stating that “using these systems for mass domestic surveillance is incompatible with democratic values” and that the systems are too risky and “simply not reliable enough to power fully autonomous weapons”).
[7] Lisa Eadicicco, Anthropic Sues the Trump Administration After it was Designated a Supply Chain Risk, CNN (Mar. 9, 2026), https://www.cnn.com/2026/03/09/tech/anthropic-sues-pentagon.
[8] Ashley Capoot, Anthropic Sues Trump Administration Over Pentagon Blacklist, CNBC (Mar. 9, 2026), https://www.cnbc.com/2026/03/09/anthropic-trump-claude-ai-supply-chain-risk.html.
[9] Project Liberty, The New Trend in Tech: Public Benefit Corporations, LinkedIn (May 13, 2025), https://www.linkedin.com/pulse/new-trend-tech-public-benefit-corporations-projectliberty-flwof/.
[10] See Del. Code tit. 8, §§ 361–368. The public benefit sought to be achieved must be indicated in the certificate of incorporation. Id. § 362(a)(1).
[11] Id. § 362(a).
[12] Id. § 365(b).
[13] Del. Code tit. 8, § 367.
[14] Id. § 361 (“If a corporation elects to become a public benefit corporation under this subchapter in the manner prescribed in this subchapter, it shall be subject in all respects to the provisions of this chapter, except to the extent this subchapter imposes additional or different requirements, in which case such requirements shall apply.”).
[15] See Project Liberty, supra note 9 (“[A] PBC structure can help tech firms stay focused on long-term societal impact.”).
[16] See, e.g., Recognize Potential Harms and Risks, Nat’l Telecomms. & Info. Admin. (Mar. 27, 2024), https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report/requisites-for-ai-accountability-areas-of-significant-commenter-agreement/recognize-potential-harms-and-risks.
[17] See Del. Code tit. 8, § 362(a).
[18] See Long-Term Benefit Trust, Anthropic (Sept. 19, 2023), https://www.anthropic.com/news/the-long-term-benefit-trust.
[19] Id.
[20] Id.
[21] See id. Anthropic found that “[the PBC] does not make directors of the corporation directly accountable to other stakeholders or align their incentives with the interests of the general public.” Long-Term Benefit Trust, supra note 18.
[22] Id. The Trust’s authority to select and remove members of the board grows over time. Id.
[23] Long-Term Benefit Trust, supra note 18.
[24] Id.
[25] Id.
[26] John Morley, David Berger & Amy Simmerman, Anthropic Long-Term Benefit Trust, Harv. L. Sch. F. Corp. Governance (Oct. 28, 2023), https://corpgov.law.harvard.edu/2023/10/28/anthropic-long-term-benefit-trust/.
[27] Why OpenAI’s Structure Must Evolve to Advance Our Mission, OpenAI (Dec. 27, 2024), https://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission/.
[28] Id. “That year, the for-profit raised an initial round of over $100M, followed by $1 billion from Microsoft.” Id.
[29] Id.
[30] OpenAI, supra note 27.
[31] Id.
[32] Id.
[33] Lora Kolodny, Elon Musk’s xAI Secretly Dropped its Benefit Corporation Status While Fighting OpenAI, CNBC (Aug. 25, 2025), https://www.cnbc.com/2025/08/25/elon-musk-xai-dropped-public-benefit-corp-status-while-fighting-openai.html.
[34] Id.
[35] Id.
[36] Id.
[37] See supra Parts III–IV.
[38] See Amoral Drift in AI Corporate Governance, 138 Harv. L. Rev. 1633 (2025).
[39] Alanna Potter, Purpose or Profit?: The Rise of Public Benefit Corporations in the Technology Industry, 20 Duke L. & Tech. Rev. 90, 90 (2023).