
Context Graph and Decision Graph for Robotics and Physical AI

Written by Navdeep Singh Gill | Jan 6, 2026 12:44:13 PM

In software, a bug crashes an application. In robotics, a bug can kill a person. On March 18, 2018, an Uber self-driving vehicle struck and killed Elaine Herzberg in Tempe, Arizona. The vehicle detected her six seconds before impact — more than enough time to stop safely.

The perception system oscillated between classifications (unknown object, vehicle, bicycle) 17 times. Each reclassification reset the motion prediction, so the system never built a stable trajectory for her. Emergency braking had been disabled to avoid “erratic behavior.” No decision to stop was ever made. One decision failure. One death. Uber shut down its autonomous driving program. This is the defining reality of Physical AI:

“Robots don’t fail safely by default — they fail physically.”

Why Physical AI Demands Physical Accountability

Robots now operate:

  • Outside cages

  • Beside humans

  • In hospitals, warehouses, highways, and homes

Failures no longer mean downtime. They mean injury, liability, regulation, and public trust collapse.

Across industries, the same pattern repeats:

  • Amazon warehouses show higher injury rates after robot deployment

  • Tesla Autopilot is linked to 40+ fatalities under investigation

  • Industrial robots kill ~2 workers per year in the US

  • Surgical robots face increasing FDA scrutiny

Each incident is framed as a technical failure. In reality, every one is a decision failure.

Why is explainability critical for Physical AI?
Because failures cause physical harm, regulators and investigators require defensible evidence of why a robot acted.

The Core Problem: Robots Decide Without Governed Context

Modern robotics stacks rely on:

  • Foundation models for perception

  • Reinforcement learning for policy optimization

  • End-to-end neural pipelines from sensors to actions

These systems are powerful — and opaque.

They optimize outcomes without preserving:

  • Why a decision was made

  • What alternatives were considered

  • What uncertainty existed

  • Who had authority at the moment of action

When something goes wrong, investigations rely on reconstruction — not evidence. That is unacceptable in physical systems.

The Pattern Behind Major Robotics Incidents

  • Uber AV fatality: classification uncertainty never defaulted to safety. Consequence: death.

  • Amazon robot injuries: human–robot coordination was implicit, not governed. Consequence: elevated injury rates.

  • Tesla Autopilot: human–AI handoff authority was unclear. Consequence: 40+ deaths.

  • Industrial robots: safety-zone decisions were undocumented. Consequence: ~2 deaths/year.

Every fatality occurs at a decision boundary, not a mechanical one.

The Four Failure Modes of Physical AI

Without a decision substrate, robots fail predictably:

  • Context Rot: acting on stale world models

  • Context Pollution: sensor noise distorts decisions

  • Context Confusion: ambiguous situations are misclassified

  • Decision Amnesia: lessons from past incidents are not applied to new situations

The Uber crash was a textbook case of Context Confusion, with no mechanism to say:
“I’m uncertain. Default to safety.”
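What that missing mechanism could look like is easy to sketch. The snippet below is a minimal, hypothetical illustration (the names, threshold, and actions are assumptions, not any production autonomy stack): when confidence is low or the label keeps flipping, the planner defaults to a controlled stop instead of re-planning on every reclassification.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    label: str         # e.g. "bicycle", "vehicle", "unknown object"
    confidence: float  # 0.0 (no idea) .. 1.0 (certain)

CONFIDENCE_FLOOR = 0.8  # illustrative threshold, not a real calibration
MAX_FLIPS = 3           # tolerated reclassifications before defaulting to safety

def select_action(perception: Perception, flip_count: int) -> str:
    """Default to a conservative action under uncertainty.

    If the classifier is unsure, or the label keeps flipping,
    stop re-planning: brake and hand control back.
    """
    if perception.confidence < CONFIDENCE_FLOOR or flip_count > MAX_FLIPS:
        return "CONTROLLED_STOP"      # "I'm uncertain. Default to safety."
    return "CONTINUE_PLANNED_PATH"

# The oscillating-classification case: low confidence, 17 flips.
print(select_action(Perception("bicycle", 0.41), flip_count=17))
# -> CONTROLLED_STOP
```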

From Task-Based Robotics to Context-Based Physical AI

Traditional robotics assumed:

  • Known environments

  • Known objects

  • Known failure modes

Physical AI breaks those assumptions.

Modern robots must:

  • Act under partial, ambiguous perception

  • Infer intent from language

  • Adapt without retraining

  • Learn from outcomes — not labels

This shift makes explainable, governable decision-making mandatory.

How does Context OS improve robot safety?
By enforcing safety and authority structurally, so unsafe actions are impossible, not merely discouraged.

What Is a Governed Context Graph in Robotics?

A Governed Context Graph is not a map or scene graph. It is a living representation of how the world behaves, learned through interaction.

It captures:

  • Entities (humans, tools, zones)

  • Affordances (what can be done)

  • Spatial and temporal dynamics

  • Task constraints

  • Safety boundaries

  • Authority models

  • Uncertainty levels

Key principle:
Context is learned, updated, and governed — not statically modeled. The Context Graph becomes the robot’s situational memory.
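As a rough sketch of the idea (the classes and fields below are illustrative assumptions, not the actual Context OS schema), a context graph can be modeled as typed entities carrying affordances, safety boundaries, an explicit uncertainty level, and a staleness guard against context rot:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One node in the context graph: a human, tool, or zone."""
    entity_id: str
    kind: str                 # "human" | "tool" | "zone"
    affordances: list[str]    # what can be done with or near it
    safety_boundary_m: float  # minimum standoff distance, in meters
    uncertainty: float        # 0.0 (certain) .. 1.0 (unknown)
    last_observed_s: float    # timestamp of the last observation

@dataclass
class ContextGraph:
    """Living situational memory, updated on every perception cycle."""
    entities: dict[str, Entity] = field(default_factory=dict)

    def update(self, entity: Entity) -> None:
        self.entities[entity.entity_id] = entity

    def stale(self, now_s: float, max_age_s: float = 0.5) -> list[Entity]:
        """Entities the robot should no longer trust (context rot guard)."""
        return [e for e in self.entities.values()
                if now_s - e.last_observed_s > max_age_s]

graph = ContextGraph()
graph.update(Entity("worker-7", "human", ["hand_over", "avoid"],
                    safety_boundary_m=1.2, uncertainty=0.1,
                    last_observed_s=10.0))
print(graph.stale(now_s=11.0))  # worker-7 is stale -> act conservatively
```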

What Is a Decision Graph?

If Context Graph captures the world, Decision Graph captures the decision. A Decision Graph records complete Decision Lineage:

  • Trigger: instruction, anomaly, or perception change

  • Context: objects, constraints, uncertainty

  • Options: actions considered

  • Safety: constraints evaluated

  • Authority: human or system approval

  • Action: what was chosen, and why

  • Outcome: success, failure, learning signal

This is not chain-of-thought logging. It is decision provenance. When incidents occur, evidence already exists.
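A single record in that lineage might look like the sketch below. The fields mirror the list above; the class itself is a hypothetical illustration, not the Context OS API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One node in the decision graph: full provenance for one action."""
    trigger: str                    # instruction, anomaly, perception change
    context_snapshot: dict          # objects, constraints, uncertainty
    options: list[str]              # actions considered
    safety_checks: dict[str, bool]  # constraint name -> passed?
    authority: str                  # who approved: "human", "policy", ...
    action: str                     # what was chosen
    rationale: str                  # ...and why
    outcome: str = "pending"        # success, failure, learning signal
    timestamp: float = field(default_factory=time.time)

record = DecisionRecord(
    trigger="human entered work cell",
    context_snapshot={"worker-7": {"distance_m": 0.9, "uncertainty": 0.1}},
    options=["continue", "slow", "stop"],
    safety_checks={"min_standoff_1.2m": False},
    authority="policy:safe-stop",
    action="stop",
    rationale="standoff constraint violated; stopping is the safe default",
)
print(record.action, record.outcome)  # evidence exists before any incident review
```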

Why Decision Graph Complements Reinforcement Learning

Reinforcement Learning answers:

Did this action work?

Decision Graph answers:

Why did the agent choose this action under these constraints?

Together, they enable safe, transferable learning.

Pairing RL with a Decision Graph changes its character (a code sketch follows this list):

  • Reward optimization becomes contextual learning

  • Opaque failures become diagnosable failures

  • Unsafe exploration becomes governed exploration

  • Brittle generalization becomes context-aware generalization
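One hedged way to picture the combination (the policy and constraint functions below are placeholders, not a real training loop): the learned policy still proposes actions, but every proposal is filtered through governed constraints derived from the context graph and written into the lineage, so exploration stays bounded and failures stay diagnosable.

```python
import random

def rl_policy(state: dict) -> str:
    """Stand-in for a learned policy; proposes an action."""
    return random.choice(["advance", "slow", "stop"])

def safe_actions(state: dict) -> set[str]:
    """Stand-in governance check, derived from the context graph."""
    if state["human_distance_m"] < 1.2:
        return {"slow", "stop"}          # exploration is bounded, not free
    return {"advance", "slow", "stop"}

def governed_step(state: dict, lineage: list) -> str:
    """Filter the policy's proposal, then record the full decision."""
    allowed = safe_actions(state)
    proposal = rl_policy(state)
    action = proposal if proposal in allowed else "stop"  # safe fallback
    lineage.append({"state": state, "proposal": proposal,
                    "allowed": sorted(allowed), "action": action})
    return action

lineage: list = []
print(governed_step({"human_distance_m": 0.9}, lineage))
print(lineage[-1])  # diagnosable: what was proposed vs. what was allowed
```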

Deterministic Enforcement: Safety by Construction

Safety cannot rely on monitoring. By the time you review logs, someone may already be injured.

With Context OS:

  • Unsafe paths do not exist

  • Authority is verified structurally

  • Uncertainty forces conservative behavior

  • Overrides are recorded by design

“If a robot can violate a safety rule, the architecture is already broken.”
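A minimal sketch of safety by construction, under the assumption of a hypothetical actuation interface: actuators only accept approval tokens minted by a safety gate, so an unsafe command has no way to be expressed. (Python cannot fully seal a constructor; the point is the structural pattern, which stricter languages can enforce outright.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedAction:
    """Approval token; actuators accept nothing else."""
    name: str
    authority: str

class SafetyGate:
    """Only path to an ApprovedAction; unsafe paths simply do not exist."""
    def __init__(self, forbidden: set[str]):
        self._forbidden = forbidden

    def approve(self, name: str, authority: str) -> ApprovedAction:
        if name in self._forbidden:
            raise PermissionError(f"unsafe action cannot be expressed: {name}")
        if authority not in {"human", "verified-policy"}:
            raise PermissionError(f"no authority to act: {authority}")
        return ApprovedAction(name, authority)

class Actuator:
    def execute(self, action: ApprovedAction) -> None:
        # The type of the argument is the safety check: no token, no motion.
        print(f"executing {action.name} under {action.authority}")

gate = SafetyGate(forbidden={"enter_safety_zone_at_speed"})
Actuator().execute(gate.approve("slow_stop", "verified-policy"))
# gate.approve("enter_safety_zone_at_speed", "verified-policy")  # PermissionError
```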

Progressive Autonomy: How Trust Is Earned

Autonomy is not a switch. It is a contract.

  • Assistive: a human approves each action

  • Supervised: the robot acts within approved bounds

  • Autonomous: full decision lineage, audit-ready

Trust benchmarks include:

  • Safety compliance (100%)

  • Uncertainty handling

  • Near-miss learning

  • Incident-free duration

If any benchmark slips, the robot’s authority contracts immediately.
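A toy illustration of the contract (levels, metrics, and thresholds are invented for the example): autonomy is promoted one level at a time on sustained performance and demoted immediately when any benchmark slips.

```python
LEVELS = ["assistive", "supervised", "autonomous"]

def next_level(current: str, metrics: dict) -> str:
    """Promote on sustained performance; demote the moment a benchmark slips."""
    earned = (metrics["safety_compliance"] == 1.0         # must be 100%
              and metrics["uncertainty_handled"] >= 0.99
              and metrics["incident_free_hours"] >= 500)  # invented bar
    idx = LEVELS.index(current)
    if not earned:
        return LEVELS[max(idx - 1, 0)]   # authority contracts immediately
    return LEVELS[min(idx + 1, len(LEVELS) - 1)]

print(next_level("supervised",
                 {"safety_compliance": 1.0,
                  "uncertainty_handled": 0.995,
                  "incident_free_hours": 640}))   # -> autonomous
print(next_level("autonomous",
                 {"safety_compliance": 0.999,     # one slip is enough
                  "uncertainty_handled": 0.99,
                  "incident_free_hours": 640}))   # -> supervised
```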

Is this approach compatible with foundation models?
Yes. Context Graph and Decision Graph govern foundation models rather than replacing them.

Final Takeaway

The future of robotics is not just better control or bigger models. It is robots that can explain, justify, and defend every decision they make.

Capability without accountability is liability. Autonomy without explainability is unacceptable. Physical AI without physical accountability is dangerous.

Does Decision Graph slow down real-time robotics?
No. Lineage is captured asynchronously and does not block control loops.
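A sketch of why this can hold, assuming a simple queue-based design (the APIs shown are standard Python, but the architecture is an illustration, not the Context OS implementation): the control loop only enqueues a record, which is cheap and non-blocking, while a background thread persists it.

```python
import queue
import threading

lineage_queue: "queue.Queue[dict]" = queue.Queue(maxsize=1024)

def control_loop_step(record: dict) -> None:
    """Hot path: hand off the record without blocking on I/O."""
    try:
        lineage_queue.put_nowait(record)
    except queue.Full:
        pass  # never stall the control loop; a real system would count drops

def lineage_writer() -> None:
    """Cold path: persist records off the control thread."""
    while True:
        record = lineage_queue.get()
        if record is None:  # shutdown signal
            break
        print("persisted:", record)  # stand-in for a durable write

writer = threading.Thread(target=lineage_writer, daemon=True)
writer.start()
control_loop_step({"action": "slow_stop", "authority": "verified-policy"})
lineage_queue.put(None)
writer.join()
```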