What It Is
COUNTER — Counting Online Usage of NeTworked Electronic Resources — is a not-for-profit organisation registered as COUNTER Metrics Limited. Founded in 2003, it exists for a narrow but consequential purpose: to bring publishers, libraries, library consortia, aggregators, and technology providers into agreement on how to count and report usage of online content. Its primary product is the Code of Practice (CoP), a technical standard that governs the definitions, data formats, and reporting obligations that content platforms must meet if they want to demonstrate compliance to institutional subscribers.
When a university library subscribes to a journal database, it needs to know whether that subscription is worth renewing. COUNTER’s Code of Practice is what makes those conversations possible — it defines what a “full-text request” is, how to distinguish a human user from a web crawler, and what counts as a distinct session. Without this standardisation, every publisher would report usage differently, and comparison across platforms would be impossible. COUNTER’s compliance certification, which publishers and vendors can obtain through third-party audit, is the mechanism by which the standard becomes binding in practice.
The current Code of Practice is Release 5.1, with full compliance required from January 2025. It covers reports for journals, books, databases, and multimedia content. A key feature since Release 5.0 (2017) has been the Access_Method attribute, which distinguishes Regular (human-initiated) usage from TDM (Text and Data Mining) usage — automated machine access conducted under a licensed arrangement. The purpose was to prevent large-scale automated harvesting from inflating the usage statistics that libraries use to justify subscriptions.
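To make the Access_Method distinction concrete, here is a minimal sketch of how a library might aggregate usage separately for Regular and TDM access. The flat row structure and field names are simplified illustrations, not the normative COUNTER 5.1 JSON report schema.

```python
from collections import defaultdict

# Illustrative flat rows; field names are simplified stand-ins for the
# attributes a COUNTER 5.x report carries, not the normative schema.
rows = [
    {"Title": "Journal A", "Access_Method": "Regular", "Total_Item_Requests": 120},
    {"Title": "Journal A", "Access_Method": "TDM", "Total_Item_Requests": 4500},
    {"Title": "Journal B", "Access_Method": "Regular", "Total_Item_Requests": 75},
]

def usage_by_access_method(rows):
    """Sum item requests per Access_Method, so large-scale TDM traffic
    can be reported separately from human-initiated (Regular) usage."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["Access_Method"]] += row["Total_Item_Requests"]
    return dict(totals)

print(usage_by_access_method(rows))
# Regular usage (195 requests) stays comparable across platforms;
# TDM usage (4500 requests) is counted apart and does not inflate it.
```

This separation is exactly what keeps a burst of licensed machine harvesting from distorting the renewal-decision statistics described above.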
Why It Matters for AI Governance and Narratives
COUNTER’s technical work is becoming a governance question. The existing Access_Method taxonomy — Regular and TDM — was designed for a world in which automated content access was episodic, contractual, and clearly distinguishable from ordinary reading. Agentic AI systems break that model in at least two ways. First, they access content continuously and at scale, often on behalf of human users who never visit the publisher’s platform directly — meaning traffic that would previously have generated a “regular” usage event may now not appear at all, or may appear only as bot activity to be filtered out. Second, the line between “an AI tool the publisher hosts” and “an external AI agent retrieving content” is no longer clean.
Whoever defines “Agent” in COUNTER’s framework will effectively control the audit trail for how AI systems consume licensed academic knowledge infrastructure. This matters for the observatory’s analytical domain in several ways: it shapes how content publishers argue for AI licensing revenues; it provides (or withholds) the empirical basis for claims about AI’s dependence on academic literature; and it determines whether institutional knowledge — the kind held in library-subscribed databases — will be visible or invisible in accounts of what AI systems have been trained on and what they are actively consuming. The framing contest over AI’s relationship to existing knowledge institutions runs partly through this technical definition.
Key Facts and Dates
COUNTER established a dedicated Generative and Agentic AI working group in 2025, with eleven named participants drawn from libraries, publishers, and technology providers. In September 2025, the working group published its conclusions: breaking changes to the Code of Practice would be premature given how rapidly the technology was evolving, and COUNTER had already committed to making no breaking changes before approximately 2030. Instead, the working group proposed a best-practice guidance document built on the existing Code’s optional-extensions framework.
The draft guidance was published on December 8, 2025, and opened for formal stakeholder consultation through February 23, 2026. The core proposal was a new Access_Method value — Agent — allowing publishers to report AI-agent-generated access separately from human and TDM usage. A pre-conference session at the NISO Plus conference in Baltimore (February 16, 2026), led by COUNTER Executive Director Tasha Mellins-Cohen, was dedicated to the topic. COUNTER published its post-consultation findings on March 19, 2026. The specific adoption decision and stakeholder consensus figures from that post could not be independently retrieved from COUNTER’s public-facing documentation at the time of writing, though the post’s existence is confirmed in COUNTER’s news index.
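If the proposed Agent value were adopted, a report consumer would need to decide how to treat it when filtering metrics. The sketch below assumes a three-way taxonomy per the draft guidance; the row structure, field names, and the choice to group Agent with TDM as machine-initiated are illustrative assumptions, not settled COUNTER policy.

```python
# Hypothetical handling of the proposed third Access_Method value.
# "Agent" follows the draft guidance's proposal; everything else here
# (field names, grouping choices) is an illustrative assumption.
MACHINE_METHODS = {"TDM", "Agent"}

def split_usage(rows):
    """Partition report rows into human-initiated and machine-initiated
    buckets, treating the proposed Agent value like TDM when computing
    the figures libraries use for renewal decisions."""
    human, machine = [], []
    for row in rows:
        bucket = machine if row.get("Access_Method") in MACHINE_METHODS else human
        bucket.append(row)
    return human, machine

rows = [
    {"Access_Method": "Regular", "Total_Item_Requests": 80},
    {"Access_Method": "TDM", "Total_Item_Requests": 3000},
    {"Access_Method": "Agent", "Total_Item_Requests": 1200},
]
human, machine = split_usage(rows)
```

The design question the consultation raised is visible even in this toy example: whether Agent traffic belongs in the machine bucket, in a bucket of its own, or somewhere in between determines which usage libraries and publishers see.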
Note on the 97.5% consensus figure: The observatory’s editorial cited this figure from a social post [POST-63471]. It is consistent with the March 19, 2026 consultation results publication timeline, but it could not be independently confirmed against COUNTER’s own published materials in the course of this research. Readers seeking the primary figure should consult the March 19 post directly at countermetrics.org.
Where to Learn More
- COUNTER Metrics — About: https://www.countermetrics.org/about/ — Primary organisational source; mission, governance, membership structure
- AI Working Group Update (September 2025): https://www.countermetrics.org/ai-bots-group/ — The working group’s published conclusions on why breaking changes were rejected and what the Agent extension would address
- AI Metrics Consultation (December 2025): https://www.countermetrics.org/ai-consultation/ — Consultation announcement with link to draft guidance PDF
- NISO coverage of consultation: https://www.niso.org/niso-io/2026/02/counter-consultation-new-guidelines-ai-usage-tracking-open-through-february-23 — Independent standards-community framing of the process and its significance