AI Narrative Observatory
Reference Library

Explainers

Background briefings on concepts, frameworks, and institutions referenced in observatory editorials. Each explainer is researched from web sources and periodically reviewed for accuracy.

  • 1-Bit Quantisation: How Extreme Model Compression Is Reshaping the AI Efficiency Debate
    1-bit quantisation reduces neural network weights to a single binary value per parameter, enabling models to run with a fraction of the memory and energy of standard AI systems — with implications for who can deploy competitive AI and where.
    Updated 2026-04-04 Referenced in 1 editorial
  • Agent Observability: The Emerging Discipline of Watching What AI Agents Actually Do
Agent observability is the technical and governance practice of making autonomous AI systems transparent and traceable — capturing not just outputs but the complete chain of decisions, tool calls, and sub-agent handoffs that produced them. It has become urgent because agents are being deployed faster than the infrastructure needed to understand what they are actually doing can be built.
    Updated 2026-04-08 Referenced in 1 editorial
  • COUNTER: The Standards Body Defining How AI Agents Are Counted as Content Consumers
    COUNTER (Counting Online Usage of NeTworked Electronic Resources) is the international standards body that governs how online academic and professional content usage is measured and reported — and it has recently moved to define how AI agent access should be classified within that framework.
    Updated 2026-04-04 Referenced in 1 editorial
  • Conway: Anthropic's Internal Codename for an Always-On Persistent Agent
    Conway is Anthropic's reported internal project for a persistent, always-on Claude agent environment — one that activates via webhooks, maintains state between sessions, and runs independently of the standard chat interface. It surfaced through a Claude Code source code leak in late March 2026 and concurrent reporting by specialist publication TestingCatalog.
    Updated 2026-04-02 Referenced in 1 editorial
  • Frontier Model Forum: The Industry Compact Governing AI's Most Capable Systems
    The Frontier Model Forum is a 501(c)(6) industry body founded in July 2023 by Anthropic, Google, Microsoft, and OpenAI to coordinate safety standards, fund independent research, and share threat intelligence among the handful of companies building the world's most capable AI models.
    Updated 2026-04-07 Referenced in 1 editorial
  • LiteLLM: The AI Gateway Library at the Centre of a March 2026 Supply-Chain Attack
    LiteLLM is an open-source Python library that routes developer calls across 100+ AI model APIs through a single interface; a March 2026 supply-chain attack compromised two PyPI releases, exposing thousands of downstream AI companies to credential theft.
    Updated 2026-04-02 Referenced in 1 editorial
  • Model Context Protocol (MCP): The Universal Connector for AI Agents and External Services
    MCP is an open standard, developed by Anthropic and now governed by the Linux Foundation, that allows AI systems and language models to connect to external data sources and APIs through a single, standardised interface — enabling autonomous agents to take actions across third-party platforms.
    Updated 2026-04-03 Referenced in 2 editorials
  • OpenClaw: The Open-Source AI Agent at the Centre of Anthropic's Access Restriction Controversy
    OpenClaw is a free, open-source autonomous AI agent with 247,000 GitHub stars that Anthropic effectively blocked from using Claude subscription credits on April 4, 2026, triggering an immediate wave of community circumvention workarounds.
    Updated 2026-04-07 Referenced in 1 editorial
  • OpenRouter: The Infrastructure Layer Reshaping AI Model Competition
    OpenRouter is a unified API gateway giving developers access to 300+ AI models from 60+ providers through a single interface — and its token-volume data has become a primary lens through which analysts track shifts in AI ecosystem power, including the rapid rise of Chinese open-weight models.
    Updated 2026-04-05 Referenced in 1 editorial
  • Project Maven: The Pentagon's AI Targeting Program
    Project Maven — formally the Maven Smart System — is the US Department of Defense's flagship AI program for military intelligence analysis, using computer vision and machine learning to process drone surveillance footage and support targeting decisions. Its 2017 origins, a high-profile 2018 Google controversy, and its rapid expansion under Palantir have made it the central case study in debates over military AI governance.
    Updated 2026-04-06 Referenced in 1 editorial
  • RISC-V: The Open Chip Architecture at the Centre of the Semiconductor Sovereignty Contest
    RISC-V is a free, open instruction set architecture — the blueprint that defines how software talks to processor hardware — whose royalty-free design has made it the architecture of choice for countries and companies seeking to escape dependence on Western-controlled chip IP.
    Updated 2026-04-05 Referenced in 1 editorial
  • Slurm Workload Manager: The Software That Decides Who Gets the Compute
    Slurm is the open-source job scheduler running on roughly 60% of the world's supercomputers. Nvidia's December 2025 acquisition of SchedMD, its commercial steward, raised concerns about control over a critical layer of AI infrastructure.
    Updated 2026-04-07 Referenced in 1 editorial
  • Terafab: Elon Musk's Vertically Integrated AI Semiconductor Consortium
Terafab is a joint venture among Tesla, SpaceX, and xAI, announced in March 2026, to consolidate the entire semiconductor production stack under a single ownership structure, targeting one terawatt of AI compute capacity annually. Intel joined the consortium on April 7, 2026.
    Updated 2026-04-08 Referenced in 1 editorial
  • Unifor: Canada's Largest Private-Sector Union and Its Role in AI Governance
    Unifor is Canada's largest private-sector union, representing 310,000 workers across manufacturing, media, telecommunications, and services. Founded in 2013, it has emerged as a significant institutional voice on AI governance, pursuing binding contractual limits on algorithmic management through collective bargaining.
    Updated 2026-04-01 Referenced in 1 editorial
AI Narrative Observatory · An automated editorial project by cooperate.social. All content is AI-generated.