What It Is
The Frontier Model Forum (FMF) is a non-profit industry body — technically a 501(c)(6) business league under US tax law, the same legal form as a trade association — whose stated mission is “ensuring safe and responsible development of frontier AI models.” It was founded in July 2023 by Anthropic, Google, Microsoft, and OpenAI; Amazon and Meta joined in May 2024, bringing current membership to six companies. The Forum defines a “frontier AI model” deliberately broadly: a general-purpose model that outperforms all models widely deployed for at least the preceding twelve months. The effect of this definition is to keep membership criteria tied to empirical capability rather than self-designation.
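Read operationally, that definition is a relative predicate over currently deployed models rather than a fixed capability bar. The sketch below is a hypothetical formalization for illustration only, with invented field names and a single aggregate capability score standing in for whatever benchmark suite one might actually use; it is not an FMF artifact.

```python
from dataclasses import dataclass

@dataclass
class DeployedModel:
    name: str
    capability: float     # hypothetical aggregate benchmark score
    months_deployed: int  # how long it has been widely deployed

def is_frontier(candidate_capability: float,
                deployed: list[DeployedModel]) -> bool:
    # Hypothetical reading of the FMF definition: a frontier model
    # outperforms every model widely deployed for >= 12 months.
    baseline = [m for m in deployed if m.months_deployed >= 12]
    return all(candidate_capability > m.capability for m in baseline)
```

One consequence the formalization makes visible: because the baseline advances as new models become widely deployed, yesterday's frontier models fall out of scope automatically, keeping the defined tier small and current.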
The FMF operates across three stated mandates: developing best practices and safety standards for frontier model deployment; funding independent research through its AI Safety Fund; and facilitating information-sharing among member companies, governments, academia, and civil society. Notably, the organization explicitly states it “does not engage in lobbying,” a commitment that distinguishes it formally from conventional industry advocacy bodies, though critics argue the distinction is operationally thin. The Forum has no regulatory authority; all of its frameworks, thresholds, and commitments are voluntary.
Day-to-day operations are led by Executive Director Chris Meserole, appointed in October 2023 and previously Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution. The board is composed of representatives from member organizations, though individual names are not publicly listed.
Why It Matters for AI Governance and Narratives
The FMF sits at a structurally significant position in the AI governance landscape: it is the only industry-constituted body specifically organized around the handful of companies deploying the most capable general-purpose models. Other governance actors — the US and UK AI Safety Institutes, the OECD’s Global Partnership on AI, ISO/IEC technical committees — are either intergovernmental or multi-stakeholder. The FMF speaks exclusively for the frontier tier, and it does so by claiming to produce the technical substrate that regulators need: evaluation taxonomies, capability thresholds, risk frameworks. Its fourteen published issue briefs and five technical reports (as of April 2026) are positioned as inputs to formal standards processes rather than as lobbying documents.
For the observatory’s analytical purposes, the FMF embodies a recurring dynamic in AI governance: incumbent actors proposing the frameworks by which they will be evaluated. This is not a new pattern — pharmaceutical industry participation in drug approval criteria offers a precedent — but it shapes which safety risks become legible and which remain outside the frame. The FMF’s published work focuses predominantly on catastrophic misuse risks (biosecurity, cybersecurity, nuclear threat uplift) and model extraction attacks. Labor displacement, market concentration, and surveillance infrastructure appear in neither the Forum’s issue briefs nor its grant priorities. The choice of what to study is itself a framing decision.
Key Facts and Dates
July 2023 — Founded by Anthropic, Google, Microsoft, and OpenAI, announced during a period of intense congressional attention to AI and concurrent with negotiations over the White House voluntary commitments.
October 2023 — Chris Meserole appointed as Executive Director; AI Safety Fund announced with an initial pool exceeding $10 million, funded by the four founding companies plus the Patrick J. McGovern Foundation, Schmidt Sciences, and several individual philanthropists.
February 2024 — FMF joins the US AI Safety Institute Consortium as a founding member.
May 2024 — Amazon and Meta join; all six member companies are among the sixteen companies that sign the Frontier AI Safety Commitments at the AI Seoul Summit.
November 2024 / December 2025 — First and second rounds of AI Safety Fund grants completed. The second round awarded over $5 million across eleven grantees selected from more than 100 proposals, including Apollo Research, FAR.AI, and university teams at Caltech, UIUC, and Toronto. Research areas: biosecurity red-teaming, AI agent cybersecurity, multi-agent oversight, and scheming detection.
February 2026 — The FMF published an issue brief on adversarial distillation — the practice of using a frontier model’s outputs to train a competing model without authorization — alongside a progress update on a “first-of-its-kind information-sharing agreement” under which member firms exchange threat intelligence about attempted model extraction. The mechanism became operationally visible in April 2026, when reporting confirmed that OpenAI, Anthropic, and Google were coordinating through FMF infrastructure to detect and block distillation attempts attributed to entities, including Chinese actors, that violated their terms of service.
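Distillation itself is a standard, well-documented training technique; what the brief frames as adversarial is the unauthorized use of a frontier model as the teacher. The sketch below is a generic illustration of the underlying pattern, with all identifiers and the training setup invented for this example; it does not depict any FMF or member-company system.

```python
# Minimal sketch of knowledge distillation: a student model is trained
# to imitate a teacher's output distribution. In the adversarial variant
# described in the FMF brief, the "teacher" is a frontier model queried
# without authorization. All names here are hypothetical placeholders.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, temperature=2.0):
    """One training step: minimize KL divergence between the student's
    and teacher's softened output distributions."""
    with torch.no_grad():
        teacher_logits = teacher(batch)  # in the adversarial case, API responses
    student_logits = student(batch)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2                   # standard temperature rescaling
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the adversarial teacher is reachable only through an API, large volumes of structured queries are the attacker's main observable footprint, which is plausibly why cross-company exchange of query-pattern threat intelligence sits at the center of the information-sharing mechanism.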
Structural criticism worth noting: the FMF’s voluntary framework means companies retain discretion to override safety commitments, and its evaluation taxonomies are developed by the same organizations that will be assessed against them. The 501(c)(6) designation, while fully disclosed, positions it legally as a business league whose members share commercial interests — a fact relevant to assessing the independence of its research outputs.
Where to Learn More
- Frontier Model Forum official website — mission, publications, grant announcements, and working group outputs: https://www.frontiermodelforum.org
- FMF Issue Brief: Adversarial Distillation (February 2026) — the brief directly relevant to the editorial’s reference to Chinese model-copying: https://www.frontiermodelforum.org/issue-briefs/issue-brief-adversarial-distillation/
- AI Seoul Summit Frontier AI Safety Commitments (UK Government, May 2024) — the voluntary commitments signed by all FMF members: https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024
- Springer/AI and Ethics: “The harms of terminology: why we should reject so-called ‘frontier AI’” (2024) — academic critique of the underlying framing: https://link.springer.com/article/10.1007/s43681-024-00438-1