Key takeaways
- Three credible industry voices — a16z, Rippletide, and Gartner — have independently converged on the same diagnosis: agentic AI systems require both context infrastructure and enforcement infrastructure to operate reliably in production.
- The emerging two-layer architecture (context layer → LLM agent → decision layer) is an improvement over the current norm, but it introduces three structural failure modes: context-authority desynchronization, split-brain memory, and fragmented feedback.
- These failure modes are not edge cases. They are the predictable consequence of running two systems that must agree on the state of the enterprise at the exact moment of every AI decision.
- Context OS is the alternative: a unified operating system where context, policy enforcement, decision memory, and feedback share a single state model and operate as one transaction.
- Enterprise AI does not need more layers. It needs fewer seams. Every seam between systems is a production failure surface.
Why Are Three Industry Voices Converging on the Same Enterprise AI Diagnosis?
In the span of three weeks, three distinct sources — a venture capital firm, a governance startup, and the leading enterprise analyst organization — arrived at the same diagnosis from different angles.
- Andreessen Horowitz (Jason Cui and Jennifer Li) argued that AI agents fail because they lack business context. Their framework proposes a context layer: structured, API-exposed, continuously refined by human expertise. Their diagnosis is correct. An agent that does not understand business semantics will produce confidently wrong outputs.
- Rippletide (Patrick Joubert) responded with a sharpening observation: context without enforcement is not infrastructure. A context layer tells agents what data means. It does not prevent unauthorized actions. His framework proposes a deterministic enforcement engine — a Decision Context Graph — that intercepts every agent action before execution. He is also correct.
- Gartner's Data & Analytics Summit 2026 declared context the new critical infrastructure, with VP Analyst Adam Ronthal projecting that 60% of agentic analytics projects relying solely on MCP will fail by 2028 without semantic foundations. Simultaneously, Gartner projects AI governance platform spending will reach $492 million in 2026 — validating both the context and enforcement markets at once.
The convergence is real and the diagnosis is correct: agentic AI systems require both context infrastructure and decision infrastructure to operate in production.
Where I diverge from this emerging consensus is on architecture. Building context and enforcement as two separately maintained systems — integrated at runtime — is the wrong topology. And the consequences of getting it wrong will appear in production within 12 months.
The industry is building the right components in the wrong topology. Context and enforcement are not separate infrastructure layers. They are primitives within a single operating system.
What Is the Two-Layer AI Governance Model — and Where Does It Break?
The architecture the industry is converging toward follows this flow:
Data Stack → Context Layer → LLM Agent → Decision Layer → Execution
The context layer (Atlan, Collibra, or an in-house metadata platform) feeds business semantics to the agent at inference time via MCP or API. The decision layer intercepts the agent's proposed action and evaluates it deterministically against authority rules.
This is a genuine improvement over the current dominant architecture:
Data Stack → LLM Agent → Execution
Adding a context layer reduces semantic errors. Adding a decision layer prevents unauthorized actions. Both improvements are real.
But the two-layer model introduces three architectural failure modes that neither layer, operating independently, can resolve.
Can the two layers be integrated well enough to avoid these failures?
Not reliably. The failures described below are structural: they arise whenever two systems independently maintain state that must agree at the moment of decision. Better APIs and faster sync cycles shrink the drift window; they do not eliminate it.
What Are the Three Failure Modes of Separated AI Decision Infrastructure?
Failure Mode 1: Context-Authority Desynchronization in Agentic AI Systems
The context layer and the decision layer must agree on the current state of the enterprise at the moment of every decision. When they disagree, the system makes structurally unsound decisions.
Scenario: On Monday, the compliance team revises the auto-approval ceiling for Category B vendors from €5,000 to €2,000. The policy engine in the decision layer is updated. The context layer, which maintains the knowledge graph of vendor categories and spending patterns, is not updated until its nightly synchronization cycle.
On Monday afternoon, a procurement agent evaluates a €3,500 payment to a Category B vendor. The context layer provides complete, valid context: vendor is certified, budget is sufficient, contract terms are satisfied. The agent reasons the payment should be approved. The decision layer then blocks it — the auto-approval ceiling is now €2,000.
- The benign case: The decision layer caught the discrepancy, but the agent wasted reasoning capacity on a path that was never going to be authorized. The user sees what appears to be an arbitrary block on a well-justified decision.
- The malign case: The ceiling is raised (not lowered), the decision layer updates, but the context layer still carries the old ceiling. The agent escalates a €3,500 payment that should now be auto-approved. Human reviewers are pulled into a decision the system should have handled autonomously. Multiply this across hundreds of daily decisions and the governance layer creates the operational friction it was supposed to eliminate.
The root cause is not a configuration error. It is structural: context and authority are maintained in separate systems with separate update cycles. They will drift — not occasionally, but systematically. This is the same problem that plagued ETL pipelines and microservice architectures. Any distributed system with shared state and independent update paths will experience this failure.
Architectural lesson: When two systems must agree on state to make correct decisions, they should share state — not synchronize it.
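The desynchronization window can be made concrete with a minimal sketch. All names here (`PolicyStore`, `ContextStore`, the ceiling field) are illustrative, not any vendor's actual API; the point is only that two stores with independent update cycles can disagree at decision time.

```python
# Hypothetical sketch of Failure Mode 1: two stores, two update cycles.

class PolicyStore:
    """Decision layer: updated the moment compliance changes a rule."""
    def __init__(self):
        self.auto_approval_ceiling = 5_000

class ContextStore:
    """Context layer: refreshed only on its nightly sync cycle."""
    def __init__(self):
        self.auto_approval_ceiling = 5_000

    def nightly_sync(self, policy: "PolicyStore"):
        self.auto_approval_ceiling = policy.auto_approval_ceiling

policy, context = PolicyStore(), ContextStore()

# Monday morning: compliance lowers the ceiling. Only the policy store sees it.
policy.auto_approval_ceiling = 2_000

# Monday afternoon: the agent reasons from the context store, which still
# says 5,000, so it recommends auto-approval of the 3,500 payment...
amount = 3_500
agent_recommends_approval = amount <= context.auto_approval_ceiling   # True

# ...while the decision layer, reading its own store, blocks the same payment.
decision_layer_approves = amount <= policy.auto_approval_ceiling      # False

assert agent_recommends_approval != decision_layer_approves  # the desync window
```

The `nightly_sync` call would eventually reconcile the two stores, but every decision made inside the window hits the disagreement.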
Failure Mode 2: Split-Brain Decision Memory Across AI Governance Systems
In the two-layer model, the institutional memory of AI decisions is split across two systems. The context layer remembers what the agent knew. The decision layer remembers what the agent was permitted to do. Neither holds the complete decision record.
When a regulator asks why a specific action was authorized, the enterprise must reconstruct the full picture by correlating records from two separate stores:
- What context was available at decision time? (Ask the context layer.)
- What rules were applied? (Ask the decision layer.)
- Did the context the agent used correspond to the context the decision layer was evaluating against? (Requires correlating timestamps and state versions across two independent systems that may not share a clock.)
This is not a hypothetical audit inconvenience. It is a fundamental integrity problem. If the two systems log at different granularities, use different versioning schemes, or have different retention policies, the complete decision record is structurally incomplete — not accidentally incomplete.
In Context OS, this is solved with a unified Decision Trace: a single, immutable record that captures the full decision lifecycle — the context compiled, the policies evaluated, the authority verified, the action taken, and the evidence produced. One record. One system. One source of institutional decision memory.
Architectural lesson: Decision memory must be atomic. A decision recorded in two systems is a decision that cannot be reliably reconstructed. The audit trail must be unified by construction, not by correlation.
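What "atomic by construction" means can be sketched as a single record type. This is an illustrative shape, not the actual Context OS schema; the field names are assumptions. The key property is that one `state_version` covers both the context and the policy evaluation, so there is nothing to correlate afterward.

```python
# Illustrative sketch of a unified Decision Trace: one immutable record
# per decision, carrying context, policy, authority, action, and evidence.
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass(frozen=True)
class DecisionTrace:
    decision_id: str
    state_version: int       # one version: context AND policy both read it
    context_compiled: dict   # what the agent knew
    policies_evaluated: list # what rules were applied
    authority_verified: bool # was the action within bounds
    action_taken: str
    evidence: dict
    timestamp: float = field(default_factory=time.time)

trace = DecisionTrace(
    decision_id="dec-2041",
    state_version=318,
    context_compiled={"vendor": "acme", "category": "B", "amount": 3500},
    policies_evaluated=["auto_approval_ceiling=2000"],
    authority_verified=False,
    action_taken="escalate",
    evidence={"reason": "amount exceeds ceiling"},
)

# One serialized record answers all three audit questions at once;
# no cross-system correlation, no clock reconciliation.
record = json.dumps(asdict(trace), sort_keys=True)
```

Because the record is frozen and written once, a regulator's question reduces to retrieving one document rather than joining two logs.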
Failure Mode 3: Fragmented Feedback Loops in Separated Decision Infrastructure
The most consequential failure mode of separated infrastructure is the feedback problem.
- The context layer can learn that certain context sources produce stale information. But it has no visibility into how its context was used in authorization decisions. It does not know which context gaps caused incorrect escalations. It optimizes in isolation from enforcement outcomes.
- The decision layer can learn that certain policies generate excessive escalations. But it has no visibility into context quality. It cannot determine whether a blocked action was blocked because the policy was correct, or because the context delivered to the agent was incomplete. It optimizes in isolation from context quality.
In a unified system, feedback operates across the full decision infrastructure lifecycle. When a policy generates excessive escalations, the system can trace whether those escalations were caused by correctly calibrated policies acting on complete context — or by stale context triggering rules that would not have fired with current information. The feedback signal is richer because it has full visibility across both dimensions simultaneously.
Organizations using Context OS report 10–17% quarterly improvement in decision accuracy. This compounding improvement is possible only because the feedback loop operates across context quality, policy precision, authority calibration, and decision outcome quality simultaneously. In a separated architecture, each layer optimizes its own domain without visibility into the other, producing local improvements that do not translate to system-level improvement.
Architectural lesson: Feedback that crosses a system boundary is always lossy. The learning signal that matters most — the causal chain from context quality through authority evaluation to decision outcome — only exists in a unified system.
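The root-cause attribution described above can be sketched in a few lines, assuming (as an illustration, not a documented schema) that each trace records the state version each side read. In a unified system the two versions are the same field by construction, so classifying an escalation is a comparison, not a cross-system investigation.

```python
# Hedged sketch: attributing an escalation to stale context vs. correct policy.
# Field names are illustrative assumptions.

def classify_escalation(trace: dict, current_state_version: int) -> str:
    """Decide why an escalation fired, given a decision trace."""
    if trace["context_state_version"] != trace["policy_state_version"]:
        # Impossible by construction in a unified system; the branch exists
        # only to show what a two-layer audit must rule out on every decision.
        return "indeterminate: layers read different states"
    if trace["context_state_version"] < current_state_version:
        return "stale context"
    return "correct policy"

trace = {"context_state_version": 317, "policy_state_version": 317}
print(classify_escalation(trace, current_state_version=318))  # → stale context
```

In the two-layer topology the first branch is the common case, which is exactly why the causal chain from context quality to decision outcome is lost.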
What Is the Correct Architecture for Enterprise AI Governance?
The three failure modes share a single root cause: context and enforcement are treated as separate infrastructure concerns, with separate state, separate memory, and separate learning. Every integration between them — whether via API, MCP, or a shared data store — introduces seams. Every seam is a production failure surface.
The alternative is not two layers with better integration. It is one operating system with four unified primitives:
┌─────────────────────────────┐
│         Context OS          │
│                             │
│     State   →   Context     │
│       ↑            ↓        │
│    Feedback  ←   Policy     │
│                             │
└─────────────────────────────┘
The Four Primitives of Context OS
| Primitive | What It Does | Why It Must Be Unified |
|---|---|---|
| State | Canonical, versioned representation of every entity, relationship, and condition across the enterprise | Context and policy both read from the same state. No synchronization lag. A policy change at 2:00 PM is reflected in the 2:01 PM context compilation. |
| Context | Decision-grade compilation scoped to the specific decision at hand | Context compilation includes policy-relevant elements because the policy engine and context engine share state. The compiled context is not just semantically relevant — it is governance-complete. |
| Policy | Dual-gate enforcement before reasoning commits and before actions execute | Policy evaluation can trigger context recompilation if critical information is missing. This bidirectional interaction is impossible when context and policy are in separate systems. |
| Feedback | Closed-loop learning from real agent decisions, tied to execution traces | Feedback operates across context quality AND policy precision simultaneously. When an escalation is caused by stale context rather than correct policy, the system traces the root cause across the full decision lifecycle. |
The Decision Trace — the structured, immutable record of every decision — is generated as a single atomic transaction capturing all four primitives. There is no cross-system correlation required. There is no question of whether the context record and the policy record refer to the same decision state. They do, by construction.
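The single-transaction property in the table above can be illustrated with a minimal sketch. Everything here is hypothetical (the class name, the lock-based transaction, the trace shape); the point is that context compilation, policy evaluation, and trace writing all read one state snapshot under one version.

```python
# Minimal sketch of a decision as a single transaction over shared state.
import threading

class ContextOS:
    def __init__(self):
        self._lock = threading.Lock()
        self.state_version = 0
        self.state = {"auto_approval_ceiling": 5_000}
        self.traces = []

    def update_policy(self, ceiling: int):
        with self._lock:                        # every change bumps ONE version
            self.state["auto_approval_ceiling"] = ceiling
            self.state_version += 1

    def decide(self, amount: int) -> dict:
        with self._lock:                        # one snapshot for everything
            snapshot, version = dict(self.state), self.state_version
            context = {"amount": amount, **snapshot}               # compile context
            approved = amount <= snapshot["auto_approval_ceiling"] # evaluate policy
            trace = {"state_version": version,                     # atomic memory
                     "context": context,
                     "approved": approved}
            self.traces.append(trace)
            return trace

cos = ContextOS()
cos.update_policy(2_000)   # the "2:00 PM" policy change
t = cos.decide(3_500)      # the "2:01 PM" decision already reflects it
```

There is no sync cycle to lag behind: the decision at version 1 cannot see pre-change state, and its trace records the version it saw.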
Is Context OS replacing the context layer, the decision layer, or both?
Context OS can operate in two modes: as a standalone system with its own context compilation (via 80+ enterprise integrations), or as a governance and enforcement layer that inherits context from an existing catalog like Atlan or Collibra and adds decision enforcement, memory, and feedback on top. Either way, the integration surface between context and enforcement is eliminated.
Why Is "Layer" the Wrong Abstraction for AI Agent Decision Systems?
The word "layer" carries an implicit assumption: data flows in one direction. Each layer transforms its input and passes it to the next. This is the mental model of the OSI stack, the TCP/IP model, the traditional data pipeline. It works well for pipelines.
Enterprise AI agent decisions do not flow in one direction. They form a loop.
Context informs reasoning. Reasoning proposes action. Authority evaluates the proposed action against context and policy simultaneously. The evaluation may modify the context scope (requesting additional information before deciding). The decision produces memory. The memory feeds back into future context and future policy. This is a cycle, not a pipeline.
When you implement a cycle as two separate layers with linear data flow between them, you lose the bidirectional interactions that make the system work:
- Authority cannot request richer context. In the two-layer model, context is delivered to the agent before the decision layer is involved. If the decision layer determines that a critical piece of context is missing — a vendor certification not included in the initial compilation — it cannot request it. It can only block the action and force a retry. In a unified system, policy evaluation can trigger context recompilation mid-decision.
- Context cannot learn from authority outcomes. The context layer does not know whether the context it delivered was sufficient for the authority evaluation. Without this feedback, it cannot optimize for governance-relevant completeness.
- Memory cannot be atomic. In the two-layer model, the decision trace must correlate records from two separate systems. Any discrepancy in timing, granularity, or versioning creates an incomplete or inconsistent audit record.
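The first bullet, authority requesting richer context mid-decision, can be sketched as a small loop. All names and the two-rule policy are illustrative assumptions; the structure to notice is that a missing context element triggers recompilation instead of a block-and-retry.

```python
# Hypothetical sketch of bidirectional flow: policy evaluation asks for
# missing context elements, and compilation answers from shared state.

STATE = {"vendor_certified": True, "budget_ok": True}  # toy shared state

def compile_context(required: set) -> dict:
    """Compile only the requested slice of shared state."""
    return {k: STATE[k] for k in required if k in STATE}

def evaluate(context: dict) -> tuple:
    """Return (decision, missing-context keys)."""
    needed = {"vendor_certified", "budget_ok"}
    missing = needed - context.keys()
    if missing:
        return "needs_context", missing        # request, don't block
    decision = "approve" if all(context[k] for k in needed) else "block"
    return decision, set()

def decide() -> str:
    context = compile_context({"budget_ok"})   # initial, incomplete compilation
    decision, missing = evaluate(context)
    while decision == "needs_context":         # mid-decision recompilation
        context.update(compile_context(missing))
        decision, missing = evaluate(context)
    return decision

print(decide())  # → approve
```

In the two-layer topology the `while` loop cannot exist, because `compile_context` and `evaluate` live in different systems with a one-way interface between them.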
The "layer" abstraction is popular because it maps to familiar mental models from data engineering. But the AI agent decision lifecycle is not a layer problem. It is a state management problem. And state management problems are solved by operating systems, not by stacking independent infrastructure layers.
Can Your AI Governance Architecture Pass the Decision Audit Test?
Joubert proposed three diagnostic questions to test whether an enterprise has genuine governance infrastructure or merely the appearance of it. Below, those three questions are extended with three more that specifically test whether the architecture is unified or merely integrated.
| # | Question | Two-Layer Answer | Context OS Answer |
|---|---|---|---|
| 1 | Can you reproduce a past decision? | Partially. Must correlate records from two systems with separate versioning. | Yes. Single Decision Trace with full context, policy, authority, and evidence in one record. |
| 2 | Where are authority boundaries encoded? | In the decision layer's graph — separate from the context the agent reasons over. | In the same state model that context is compiled from. One source of truth. |
| 3 | Who is accountable when the agent acts incorrectly? | Unclear. Was the error in context (Layer 1) or enforcement (Layer 2)? | Traceable. The Decision Trace shows exactly what context was compiled and what policy was applied. |
| 4 | If a policy changes at 2 PM, does the 2:01 PM decision reflect it? | Depends on the sync cycle between layers. May lag hours or days. | Immediately. Policy and context read from shared state in the same transaction. |
| 5 | Can the system trace whether an escalation was caused by policy or by stale context? | No. Root cause crosses the boundary between two independent systems. | Yes. Feedback loops operate across context quality and policy precision simultaneously. |
| 6 | Can authority evaluation request additional context mid-decision? | No. Context is delivered before authority evaluation begins. No bidirectional flow. | Yes. Policy evaluation can trigger context recompilation for missing elements. |
Questions 1–3 are Joubert's. Both architectures can answer them, though with different levels of reliability. Questions 4–6 test the integration surface between context and enforcement. The two-layer model cannot reliably answer any of 4–6, because the answers depend on interactions that cross the boundary between two independently maintained systems.
What Does Rippletide Get Right — and Where Does the Architecture Diverge?
Before describing what Context OS implements differently, it is worth being precise about what Joubert's analysis gets exactly right — because the points of agreement are substantial.
- Reading agents vs. acting agents. The distinction between agents that produce answers and agents that take actions is the most important framing in the current discourse. The failure mode for a reading agent is a wrong answer. The failure mode for an acting agent in an agentic AI system is an unauthorized action. These require fundamentally different infrastructure.
- Three diagnostic questions. If you cannot reproduce a decision, locate your authority boundaries, or assign accountability, you do not have governance. You have hope.
- Prompt-embedded authority is not governance. If your agent's authority boundaries live in a system prompt, they are not enforceable, not versioned, not auditable, and not consistently applied. A system prompt written before a policy revision is a liability, not a control.
- The eight-step framework. Particularly steps 6 (structure authority rules), 7 (enforce pre-execution), and 8 (write the decision trace) constitute a rigorous specification for what enterprise agent deployment actually requires.
The divergence is not in the diagnosis — it is in the architecture. Joubert proposes context layers and decision layers as complementary infrastructure. The argument here is that they must be unified: not because complementary systems cannot work, but because separated systems with shared state invariably drift, fragment, and fail at the integration surface.
There is a historical precedent. In the early 2000s, enterprises ran separate authentication (LDAP) and authorization (custom ACLs) systems. The failure modes were predictable and persistent: users authenticated against stale role assignments, or were deprovisioned in one system but not the other. The industry converged on unified identity platforms — Active Directory, then Okta, then identity-as-a-service — not because separate systems could not work, but because the integration surface created a permanent governance vulnerability. The same convergence will happen in AI governance.
How long did the identity platform convergence take?
Approximately one decade. The convergence from separated to unified AI governance systems is expected to happen faster because the failure modes manifest immediately in real-time agent decisions, not in batch processes.
How Does Context OS Implement Unified Decision Infrastructure for Agentic AI?
Context OS is the unified operating system for enterprise AI agent governance. It eliminates the integration surface between context and enforcement through four primitives operating on a single state model.
The Eight-Step Framework, Unified
Joubert extends the a16z five-step context framework to eight steps. In a unified architecture, the responsibility split collapses:
| # | Step | Two-Layer Ownership | Context OS Implementation |
|---|---|---|---|
| 1 | Make data accessible | Context layer | State primitive: 80+ enterprise integrations |
| 2 | Build context via LLM | Context layer | Context primitive: LLM-powered compilation from shared State |
| 3 | Refine with human expertise | Context layer | State primitive: human refinement updates shared State immediately |
| 4 | Expose via API or MCP | Context layer | Context OS exposes via MCP, API, and internal compilation |
| 5 | Maintain continuously | Context layer | Feedback primitive: continuous learning from real decisions |
| 6 | Structure authority rules | Decision layer | Policy primitive: rules encoded in the same State model as context |
| 7 | Enforce pre-execution | Decision layer | Policy primitive: Dual-Gate enforcement at reasoning and execution |
| 8 | Write the decision trace | Decision layer | Decision Memory: unified atomic trace across all four primitives |
In the two-layer model, steps 1–5 are owned by one system and steps 6–8 by another. In Context OS, all eight steps are operations within a single system. The integration surface disappears. The state is shared. The memory is atomic. The feedback is unified.
Deployment Options
- Managed SaaS — 4-week deployment
- Customer VPC — for data residency requirements
- On-Premises / Hybrid — for regulated environments
Context OS is model-agnostic (OpenAI, Anthropic, Google, AWS, Azure, self-hosted) and integrates with Snowflake, Databricks, ServiceNow, SAP, Oracle EBS, and 80+ additional enterprise systems.
Can Context OS coexist with an existing data catalog?
Yes. When an existing catalog (Atlan, Collibra, Snowflake Horizon) is in place, Context OS can inherit its metadata and add decision governance, authority management, decision memory, and feedback loops on top. The catalog remains the metadata system of record; Context OS becomes the decision governance layer — unified.
Conclusion: Enterprise AI Needs an Operating System, Not More Layers
The industry convergence described here is real. Models generate. Frameworks orchestrate. Context enriches. Governance enforces. All four functions are necessary.
The question is whether they require four separate systems, or fewer.
Models and orchestration frameworks are correctly separated — they have genuinely independent concerns and weak coupling. A model upgrade should not require a framework rewrite.
Context and enforcement, however, are incorrectly separated. They have shared concerns with strong coupling. A policy change must be reflected in the next context compilation. A context gap must be traceable as the root cause of an enforcement failure. Memory must be atomic across both. Feedback must operate across both. These are properties of a unified AI governance system, not of an integration.
This is why Context OS is an operating system, not a layer. An operating system manages state, enforces access control, persists memory, and provides feedback to applications. Context OS manages enterprise state, enforces AI authority governance, persists institutional memory, and provides feedback to AI agents. The analogy is not decorative — it is architectural.
The decision lifecycle — compile context, evaluate authority, persist memory, learn from outcomes — is a transaction, not a pipeline. Transactions require atomicity, consistency, isolation, and durability: the ACID properties. And ACID guarantees require a single system, not a stack of layers. They require an operating system.
A context layer informs agents. A decision layer constrains agents. Context OS does both — as a single transaction, with a single state model, producing a single memory record, feeding a single learning loop.
Context and enforcement, unified, are the operating system. And the operating system is what enterprises actually need to run.
Frequently Asked Questions
Does Context OS replace context layers like Atlan?
Context OS can work in two modes. It can operate as a standalone system with its own context compilation (connecting directly to enterprise data sources via 80+ integrations). Or it can inherit and extend context from an existing data catalog like Atlan, Collibra, or Snowflake Horizon. In the second mode, the catalog remains the system of record for metadata, lineage, and definitions. Context OS adds decision governance, authority management, decision memory, and feedback loops on top. Either way, context and enforcement are unified within Context OS — the integration surface between layers is eliminated.
Does Context OS replace decision layers like Rippletide?
Context OS provides deterministic enforcement (Dual-Gate Governance) as one of its four unified primitives. It also provides the three capabilities that a decision layer alone does not: context compilation, decision memory, and feedback loops. The unification of these four capabilities in one system is the architectural argument of this article. Organizations evaluating decision enforcement infrastructure should assess whether they need a decision layer or an operating system.
Is the unification argument purely theoretical?
No. The three failure modes described (desynchronization, split-brain memory, fragmented feedback) are derived from production deployments. The LDAP/ACL historical precedent is well-documented. The convergence from separated to unified identity systems took approximately one decade. We expect the convergence from separated to unified AI governance systems to happen faster because the failure modes manifest immediately in real-time agent decisions, not in batch processes.
What is the deployment model for Context OS?
Context OS deploys in three configurations: Managed SaaS (4-week deployment), Customer VPC, or On-Premises/Hybrid. It is model-agnostic (OpenAI, Anthropic, Google, AWS, Azure, self-hosted). It integrates with enterprise systems including Snowflake, Databricks, ServiceNow, SAP, and Oracle EBS.
Related Resources
- What Is Context OS? — The Complete Guide
- Context Layer vs. Context OS: What's the Difference?
- What Is Decision Memory? — The Complete Guide
- The Decision Gap: Why Enterprise AI Agents Fail in Production
- Dual-Gate Governance: How It Works and Why It Matters
- What Is AI Governance? The Complete 2026 Enterprise Guide


