The Agent Framework Wars Are Over. Everyone Won. That's the Problem.

Ask a platform architect how many cloud providers their organization runs. Watch their expressions. The answer is almost never one, and the reasons are always the same: different teams, different timing, different commitments made before the current strategy existed. The cloud was supposed to consolidate. It has been consolidating for 15 years, and most enterprises are still running more than one.
CI/CD was supposed to standardize. Jenkins still runs in the basement alongside GitHub Actions and whatever the platform team mandated two years ago. The service mesh question, Istio or Linkerd or Consul, was asked in 2019 and is still open at organizations that considered themselves decisive.
This is not a failure of enterprise decision-making. It is how enterprise infrastructure actually works. Commitments compound. Migration costs exceed the value of consistency. New teams make new bets. And the platforms themselves have no incentive to make switching easy, because lock-in is the business model.
Every major agent framework launched in the last twelve months is built on exactly this logic. Which means the governance problem coming at every CISO, Chief AI Officer, and platform architecture leader is not which framework to choose. It is what to do when the answer, inevitably, is all of them.
Twelve Months, Five Frameworks
The agent framework landscape consolidated from "many experiments" to "a few serious platforms" in roughly one year. The platforms that emerged are not startups or community projects. They are flagship products from the five largest technology companies in the world.
OpenAI Agents SDK, released March 2025, is a lightweight Python and TypeScript framework built around a deliberately minimal set of primitives: Agents, Handoffs, Guardrails, and Tracing. Provider-agnostic, supporting over 100 LLMs. The spiritual successor to Swarm, optimized for simplicity over configurability.
Google ADK, introduced at Google Cloud Next 2025, is an open-source framework designed for full-stack development of agents and multi-agent systems. Model-agnostic and deployment-agnostic in principle, optimized for Gemini and Vertex AI in practice. The same framework powering Google's own Agentspace and Customer Engagement Suite.
Microsoft Agent Framework, announced in October 2025, is the convergence of Semantic Kernel and AutoGen into a single, unified runtime that combines Semantic Kernel's enterprise foundations with AutoGen's multi-agent orchestration patterns. Open standards across MCP, A2A, and OpenAPI. Python and .NET. Routes toward Azure AI Foundry.
NVIDIA NemoClaw, unveiled at GTC 2026 last week, is the enterprise extension of OpenClaw, a full agent runtime with hardware-optimized inference, enterprise RBAC, signed skill registries, and the Nemotron model stack underneath. Jensen Huang called it the operating system of agentic computers.
The LangChain ecosystem (LangGraph + LangSmith) holds a deep enterprise install base: LangGraph is a widely recommended framework for production agents, with over 70 million monthly downloads across the LangChain ecosystem and production deployments at LinkedIn, Uber, Cisco, BlackRock, and JPMorgan. LangSmith provides observability.
And behind these five: CrewAI, PydanticAI, LlamaIndex, Amazon Bedrock Agents, IBM Watsonx Orchestrate, and a dozen more in active enterprise deployment.
The Consolidation That Will Not Come
The reasonable expectation, looking at previous infrastructure categories, is that the market consolidates. Two or three frameworks win. Enterprises standardize. The governance story simplifies.
That is not what is happening here, and the reason is structural. These frameworks are not neutral tools. They are distribution mechanisms for model consumption, cloud compute, and inference revenue. No platform company has an incentive to make their framework interchangeable with a competitor's. The switching costs are a feature, not a bug.
Enterprises are not going to pick one. A financial services firm running on Azure will use Microsoft Agent Framework for its core workflows. Its data science team, hired from Google, will prototype with ADK. Its NVIDIA DGX cluster will run NemoClaw for latency-sensitive workloads. Its compliance team will insist on LangSmith observability because that is what they already have. Its newest engineering team will ship with OpenAI Agents SDK because it was the fastest to working prototype. The frameworks are too useful, the switching costs are too high, and the organizational inertia is too real. Multi-framework is the destination, not a temporary state.
The Governance Problem Nobody Has Solved
Every one of these frameworks ships with application-layer governance. Guardrails, tracing, access controls, audit logs, safety checks. Each vendor has invested seriously in making their governance story credible.
And every one of those governance implementations is specific to that framework.
OpenAI's guardrails govern OpenAI Agents SDK workflows. They do not govern NemoClaw agents. Google ADK's safety patterns apply inside ADK runtimes. They do not apply to Microsoft Agent Framework deployments. NemoClaw's signed skill registry and RBAC operate inside the NemoClaw runtime. They are unaware of the LangChain agents running alongside them on the same Kubernetes cluster.
As the framework landscape fragments, application-layer governance fragments with it. Each new framework an enterprise adopts adds a new governance silo: new policies to configure, new audit logs to aggregate, new access control models to reconcile with enterprise identity, and new safety configurations to maintain. The governance overhead scales with the number of frameworks, and the coverage is always incomplete at the edges where frameworks interact.
There is a more serious problem underneath this. Application-layer governance, by definition, lives inside the process being governed. When a NemoClaw agent calls an OpenAI endpoint through a LangChain routing layer inside a Microsoft Agent Framework orchestration workflow (a configuration that will exist in production before this year is out), none of the application-layer governance from any of those frameworks can see the full picture. Each sees only its own slice of the execution. The inference call crosses network boundaries that each framework's internal controls cannot follow.
This is not an edge case. It is the normal operating condition of enterprise multi-agent deployments at scale.
The One Layer That Does Not Fragment
Infrastructure-layer governance does not know which framework built the agent. It governs the traffic.
An inference call from a NemoClaw agent and an inference call from a Google ADK agent are, at the network layer, identical: an HTTP request carrying a prompt to an LLM endpoint. An infrastructure-layer safety pipeline, covering pattern matching, PII redaction, NVIDIA Safety NIMs, and hallucination detection, applies to both independently, without modifying either framework.
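The point is easy to see in code. Below is a minimal sketch, not a Traefik Hub API: two OpenAI-style request bodies, nominally from different frameworks, pass through one redaction rule. The `redact_pii` helper and the single-regex "safety pipeline" are illustrative assumptions; a real pipeline chains many such checks.

```python
import re

# One redaction rule, applied at the traffic layer. The rule never needs to
# know which framework produced the request body.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(body: dict) -> dict:
    """Redact email addresses in every message of an OpenAI-style payload."""
    for msg in body.get("messages", []):
        msg["content"] = EMAIL.sub("[REDACTED_EMAIL]", msg["content"])
    return body

# Two "different" frameworks emit the same wire format: a JSON body with a
# model name and a list of messages.
nemoclaw_call = {"model": "nemotron",
                 "messages": [{"role": "user",
                               "content": "Email jane@corp.com a summary"}]}
adk_call = {"model": "gemini-1.5-pro",
            "messages": [{"role": "user",
                          "content": "Email jane@corp.com a summary"}]}

for call in (nemoclaw_call, adk_call):
    redact_pii(call)  # one policy, both frameworks, zero framework changes
```

Because the enforcement point sees only HTTP traffic, adding a sixth framework to the estate adds zero new policy surface.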
An MCP tool invocation from an OpenAI Agents SDK workflow and an MCP tool invocation from a Microsoft Agent Framework orchestration are, at the network layer, the same protocol: a tool call with an identity, a method, and parameters. Infrastructure-layer authorization validates both against the same policy, regardless of which runtime initiated the call.
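The same argument holds for tool traffic. The sketch below shows parameter-level authorization on an MCP-style `tools/call` request (MCP's actual JSON-RPC method for tool invocation); the `POLICY` table and `authorize` helper are illustrative assumptions, not the MCP Gateway's configuration language, and identity is accepted but not yet checked in this simplified version.

```python
# Parameter-level rules, keyed by tool name. Each rule inspects the actual
# arguments of the call, not just the caller's role.
POLICY = {
    "filesystem/read_file": lambda args: args["path"].startswith("/srv/data/"),
    "db/query": lambda args: args["sql"].lstrip().lower().startswith("select"),
}

def authorize(identity: str, rpc: dict) -> bool:
    """Allow a JSON-RPC tools/call only if a rule exists and its check passes.
    (identity is carried but unused in this sketch.)"""
    if rpc.get("method") != "tools/call":
        return False
    check = POLICY.get(rpc["params"]["name"])
    return check is not None and bool(check(rpc["params"]["arguments"]))

# The same rule applies whether the caller is an OpenAI Agents SDK workflow
# or a Microsoft Agent Framework orchestration:
read_call = {"jsonrpc": "2.0", "method": "tools/call",
             "params": {"name": "db/query",
                        "arguments": {"sql": "SELECT * FROM trades"}}}
drop_call = {"jsonrpc": "2.0", "method": "tools/call",
             "params": {"name": "db/query",
                        "arguments": {"sql": "DROP TABLE trades"}}}

authorize("agent-7", read_call)  # read-only query permitted
authorize("agent-7", drop_call)  # mutation blocked at the boundary
```

The design choice worth noting: the rule fires on the call's parameters, so it catches a dangerous argument even when the framework's own role model would have allowed the tool.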
A policy enforced at the network layer applies across the entire multi-framework agent estate. One set of policies. One audit log. One enforcement boundary. The governance overhead does not scale with the number of frameworks because it does not live inside any of them.
This is what Traefik Hub's Triple Gate architecture was designed to address. Three enforcement points, each targeting a distinct traffic pattern every agent generates regardless of runtime:
- The AI Gateway intercepts every inference call: safety pipeline, token rate limiting, semantic caching, and model failover apply universally to every LLM request, whether it originates from NemoClaw, ADK, OpenAI Agents SDK, or LangGraph.
- The MCP Gateway intercepts every tool invocation: infrastructure-layer authorization on the specific parameters of each tool call, independent of the role-based controls inside any particular framework's runtime.
- The API Gateway governs the management plane: the configuration endpoints, monitoring interfaces, and orchestration APIs of the agent infrastructure itself, enforcing authentication, schema validation, and threat detection regardless of which framework generated the traffic.
Three patterns. Three enforcement points. Framework-agnostic, by design.
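The three-pattern split can be sketched as a toy dispatcher that classifies each request by its shape and routes it to the matching enforcement point. The classification heuristics below are illustrative assumptions, not how Traefik Hub actually routes traffic:

```python
def classify(path: str, body: dict) -> str:
    """Route a request to one of three enforcement points by traffic shape."""
    if "messages" in body or "prompt" in body or path.endswith("/completions"):
        return "ai-gateway"    # inference call: prompt bound for an LLM
    if body.get("method") == "tools/call":
        return "mcp-gateway"   # tool invocation: MCP-style JSON-RPC
    return "api-gateway"       # everything else: management-plane traffic

classify("/v1/chat/completions", {"messages": []})        # 'ai-gateway'
classify("/mcp", {"method": "tools/call", "params": {}})  # 'mcp-gateway'
classify("/admin/config", {"replicas": 3})                # 'api-gateway'
```

The useful property is that `classify` never consults a framework name: a sixth or seventh framework's traffic lands in the same three buckets automatically.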
Why This Argument Gets Stronger Over Time
In a world with five competing frameworks running within the same enterprise, application-layer governance requires five separate configurations, five separate audit log streams, five separate safety policy maintenance cycles, and still provides no visibility at the boundaries where frameworks interact.
Infrastructure governance provides one. The multi-framework future makes that one more valuable, not less.
The pattern is already established in adjacent infrastructure categories. Enterprises do not maintain separate network security policies for each application stack. They do not run separate identity providers for each platform vendor's tooling. The governance lives at the layer that does not change when the application layer changes.
Agent infrastructure is arriving at the same conclusion. The application layer will continue to fragment: new frameworks, new runtimes, new vendor platforms, new enterprise bets. The infrastructure layer is where governance stabilizes.
The Strategic Question for Every Platform Leader
The question facing every CISO, Chief AI Officer, and platform architecture leader right now is not which agent framework to standardize on. That question has no good answer, because the organization is already running multiple frameworks and will continue to do so.
The question is: where does governance live in a multi-framework agent estate, and how does it scale as the number of frameworks grows?
If the answer is "inside each framework," the honest follow-up is: who maintains five separate governance configurations, how are they kept consistent, and what happens at the boundaries where frameworks interact?
If the answer is "at the infrastructure layer," the follow-up is: which traffic governance architecture covers all three patterns (inference calls, tool invocations, and management APIs) across every framework without requiring modification to any of them?
That second answer is the one that scales. It is also the answer that matches how enterprise security has worked in every infrastructure category that came before.
The agent framework wars are over. Everyone won. The governance story for a world where everyone won is an infrastructure-layer story, and that story is only beginning to be written.


