OntoGuard Runtime Cognitive Control Plane

The runtime control plane that authorizes state transitions in agentic, memory-bearing AI systems — before they affect your business.

The Ontology AI and Semantic Layer that turns enterprise AI into a governed System of Intelligence with a true Reasoning Layer and Cognition Layer.

U.S. Patent Application 19/444,521 — Track I Prioritized Examination Granted May 4, 2026.
Core capabilities (LLM Output Governance + L3 Training Signals) are live in production today.
State-Transition Governance · Agentic AI Checkpoint · Decision Authorization Packet · Enterprise Ontology Grounding · Closed-Loop Training Signals
Before OntoGuard: AI systems proposed outputs, actions, writes, and handoffs with limited proof of control.
After OntoGuard: every high-stakes state transition can be allowed, blocked, or escalated before commit.

Regulatory readiness is one wedge. The larger platform is enterprise state governance for probabilistic, distributed, tool-using AI. Last updated: May 15, 2026.


What You License Today

OntoGuard is not just a governance concept. It packages runtime authorization into concrete, exportable deliverables that engineering, risk, compliance, audit, and legal teams can use immediately.

Core capabilities are live in production today: LLM Output Governance and L3 Training Signal Export.
Engineering / Risk

Trust API

Real-time ALLOW, BLOCK, or ESCALATE decisions with reason codes, routing, release status, and business-effect metadata.
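As a sketch, the Trust API contract above might look like the following. Every name here (the function, the trust thresholds, the field names) is an illustrative assumption, not the published OntoGuard API; a real deployment would POST the proposal to the hosted endpoint.

```python
import json

def authorize_transition(proposal: dict) -> dict:
    """Toy stand-in for the Trust API: returns ALLOW, BLOCK, or ESCALATE.

    Threshold values are invented for illustration only.
    """
    trust = proposal.get("trust_score", 0.0)
    if trust >= 0.95:
        decision = "ALLOW"
    elif trust >= 0.70:
        decision = "ESCALATE"   # above the floor, below autonomous release
    else:
        decision = "BLOCK"
    return {
        "decision": decision,
        "reason_codes": proposal.get("reason_codes", []),
        "routed_to": "HUMAN_REVIEW" if decision == "ESCALATE" else None,
        "release_status": decision == "ALLOW",
    }

result = authorize_transition(
    {"trust_score": 0.87, "reason_codes": ["PRIMARY_SCOPE_GAP"]}
)
print(json.dumps(result, indent=2))
```

The point of the sketch is the shape of the response: a decision, machine-readable reasons, a routing target, and an explicit release status, rather than a bare score.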

Compliance / Legal

Governed State Transition Record

Structured JSON plus buyer-readable PDF with full audit trail, evidence state, triad metadata, and decision credential.

Audit / GRC

Compliance Trace Pack

Symbolic trace, BM25 and semantic retrieval context, evidence hashes, clause-level citations, and scope coverage disclosure.

Platform / DevOps

Policy Router SDK

Maps AI outputs to jurisdiction-specific policy packs such as GDPR, HIPAA, SEC, FINRA, SOX, GLBA, and internal rules.
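Conceptually, the router is a lookup from workflow context to applicable policy packs. The table and matching rule below are illustrative assumptions, not the shipped SDK; only the pack names mirror the regimes listed above.

```python
# Hypothetical routing table: (jurisdiction, domain) -> policy packs.
POLICY_PACKS = {
    ("US", "finance"): ["SEC", "FINRA", "SOX", "GLBA"],
    ("US", "healthcare"): ["HIPAA"],
    ("EU", "any"): ["GDPR"],
}

def route_policies(jurisdiction: str, domain: str) -> list[str]:
    """Collect jurisdiction-specific packs, then always apply internal rules."""
    packs = []
    packs += POLICY_PACKS.get((jurisdiction, domain), [])
    packs += POLICY_PACKS.get((jurisdiction, "any"), [])
    packs.append("INTERNAL_RULES")  # internal rules apply to every workflow
    return packs
```

So a US financial-services output would be checked against SEC, FINRA, SOX, and GLBA packs plus internal rules, while an EU workflow of any domain would pick up GDPR.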

Internal Audit / Legal

Evidence Bundle

Immutable PDF, JSON, hashes, chain-of-custody artifacts, and buyer-safe telemetry for internal and external audit review.

All artifacts are schema-stable and exportable. No silent failures. Every decision produces a complete, auditable record — even when the system abstains or escalates.

How OntoGuard Is Different

Monitoring tools observe what happened. Checklists document intent. OntoGuard authorizes the proposed AI state transition before it commits.

Capability comparison: where monitoring tools, governance checklists, and traditional GRC offer at best limited or manual support, OntoGuard provides full (✓) coverage:

  • Authorizes state transitions before commit
  • Produces buyer + machine-readable proof
  • Works with agentic + memory systems
  • Ontology-driven, not just logs
  • Closed-loop training signals
  • No retraining required

Runtime Cognitive Control Before Commit

OntoGuard is the Ontology AI and Semantic Layer for enterprise AI: a runtime cognitive control plane, Reasoning Layer, and Cognition Layer for state-transition governance. Modern AI systems no longer just answer prompts; they call tools, write memory, update knowledge graphs, trigger workflows, hand work to other agents, and generate training signals that change future behavior.

From LLM outputs and training signals to tool calls, memory updates, ontology changes, policy mappings, and multi-agent handoffs (via integration), OntoGuard turns every proposed state change into a governed, auditable decision with evidence, routing, risk, uncertainty, arbitration transparency, audit hashes, and improvement signals.

OntoGuard governs the commit boundary. It asks whether a proposed AI state transition is authorized, traceable, evidence-backed, and safe to release before the change touches a customer, workflow, regulator, record, or downstream system.

The result is a Decision Authorization Packet and Governed State Transition Record: buyer-safe proof showing decision, scope, evidence, triad, risk, uncertainty, arbitration, audit credential, human-review routing, and improvement signal.

Regulatory compliance is not the ceiling. It is the first high-value use case for a broader enterprise control plane.

Zero Hard Gates. Always Export. Evidence Never Blank.

Regulated buyers need governance that is resilient under ambiguity. OntoGuard is designed so uncertainty does not erase the audit trail.

Zero Hard Gates

Hard failures are converted into explicit abstention, SAFE_TEMPLATE, or HUMAN_REVIEW routing so the governance record still exists.

Always-Export Artifacts

Every governed run is expected to produce a buyer-readable PDF and complete JSON packet, even when the answer is withheld.

Evidence-Never-Blank

If evidence is sparse, OntoGuard records fallback reasons, provisional review anchors, uncertainty, and next human step instead of silently emitting an empty proof trail.
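The "zero hard gates" rule can be pictured as a wrapper that never lets a failure escape without a governance record. This is a minimal sketch under assumed names and fields, not the shipped schema:

```python
def govern(run_fn, payload):
    """Sketch of zero-hard-gates behavior: a hard failure becomes explicit
    routing with a fallback reason, so the audit record always exists."""
    try:
        answer = run_fn(payload)
        return {"route": "RELEASE", "answer": answer, "fallback_reason": None}
    except Exception as exc:
        # The exception is converted into routing, never a blank trail.
        return {
            "route": "HUMAN_REVIEW",  # a SAFE_TEMPLATE route is also possible
            "answer": None,
            "fallback_reason": type(exc).__name__,
            "next_human_step": "review provisional evidence anchors",
        }
```

Whether the run succeeds or fails, the caller receives the same record shape, which is what makes the always-export guarantee practical to audit.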

What the Runtime Control Plane Gives You

How It Works — The 3-Layer Semantic Governance Stack

OntoGuard is not just ontology lookup and not just scoring. It uses ontology throughout the governance path to connect an LLM output to policy, evidence, scope, trace, human review, and learning feedback.

Executive Summary

OntoGuard uses a three-layer system to turn raw AI outputs into governed, auditable decisions. Layer 1 grounds outputs to real enterprise objects and rules. Layer 2 checks accuracy, risk, uncertainty, and compliance. Layer 3 turns every approved or corrected decision into a learning signal that improves future retrieval, policy, and agent behavior.

Result: fewer incidents, faster audits, clearer human review, and safer autonomy without changing model weights.

Full Technical Detail

The technical view below shows how L1 symbolic grounding, L2 semantic consensus, and L3 alignment feedback produce the Decision API, Governed State Transition Record, evidence pack, triad, audit credential, and improvement signal.

L1

Symbolic Grounding

Normalizes the governed prompt and response into clause hits, evidence IDs, retrieval IDs, checksums, regulation/domain scope, and a buyer-readable symbolic trace.

  • Ontology-grounded concepts
  • Clause and fallback hits
  • Evidence pack and provenance
L2

Semantic Consensus

Compares compliance, accuracy, risk, uncertainty, hallucination status, and agent disagreement before deciding whether release is safe.

  • Trust and uncertainty signals
  • Cold / Heat / Mercury triad
  • ALLOW, BLOCK, or ESCALATE routing
L3

Alignment Feedback

Turns governed outcomes into reusable improvement signals for retrieval, policy, alignment, training data curation, and future agent decision quality.

  • Gold example candidates
  • Human-review outcomes
  • Closed-loop training signal export
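The three layers above compose like a pipeline over one governed record. The functions and fields below are a deliberately tiny sketch of that flow (all names are assumptions), showing that each layer only adds fields rather than discarding earlier evidence:

```python
def l1_ground(record: dict) -> dict:
    """L1 Symbolic Grounding: attach evidence and trace anchors."""
    record["evidence_ids"] = ["clause-7.2", "ev-001"]  # illustrative IDs
    return record

def l2_consensus(record: dict) -> dict:
    """L2 Semantic Consensus: decide release based on the evidence state."""
    record["decision"] = "ALLOW" if record["evidence_ids"] else "ESCALATE"
    return record

def l3_feedback(record: dict) -> dict:
    """L3 Alignment Feedback: flag approved outcomes as learning candidates."""
    record["gold_candidate"] = record["decision"] == "ALLOW"
    return record

record = l3_feedback(l2_consensus(l1_ground({"prompt": "governed input"})))
```

The real system weighs risk, uncertainty, and arbitration at L2; the sketch only shows the layering contract.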

From Output Governance to State-Transition Governance

Agentic systems don’t just answer — they change state. OntoGuard already governs LLM outputs and is extending the same rigorous control to tool calls, memory, and agent actions. The question now is: is this proposed transition authorized, traceable, and safe to commit?

Old AI Governance Unit | Future Governance Unit with OntoGuard
Model output | Governed decision / state transition
Prompt / response pair | Before → Proposed → After state
Hallucination check | Uncertainty-aware authorization
Logging | Audit credential + improvement signal
Human review | Routed decision workflow
Fine-tuning data | Governed training signal
Agent action | Tool-call / memory / ontology authorization
Static governance weakens when AI systems use tools, maintain memory, and update knowledge. OntoGuard makes each proposed transition explicit before it becomes business state.

What We Govern: The 7 State Transitions

Seven event types can each be resolved into an ALLOW, BLOCK, or ESCALATE decision with evidence, scope, audit hashes, and improvement signals.

Proposed Change: Output, action, write, update, signal, or handoff
OntoGuard Checkpoint: BM25 + semantic evidence, ontology grounding, triad, arbitration
Decision: ALLOW, BLOCK, or ESCALATE before commit
Proof + Learning: Packet, audit credential, human route, L3 signal
1. LLM / Model Output (Production)

Risk if ungoverned: unsupported answers, misleading recommendations, unsafe release, or false confidence.

OntoGuard produces: Decision API, evidence pack, triad, hallucination status, audit hashes, and review route.

2, 3, 4, 5 & 7. Agentic Extensions (API Today · Enforcement Q3 2026)

Current Status: The same Decision API and JSON contract used for LLM outputs is designed to support these use cases.

How it works: Agent frameworks can call OntoGuard’s Decision API before executing tool calls, memory writes, ontology changes, policy updates, or agent handoffs.

Expansion Roadmap: Full enforcement middleware and framework adapters (LangChain, CrewAI, AutoGen, etc.) are rolling out in Q3 2026.
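An adapter of this kind can be as thin as a wrapper that gates each tool behind a checkpoint call. The sketch below is hypothetical: `authorize` stands in for the Decision API and is not an actual SDK function.

```python
def checkpointed(tool_fn, authorize):
    """Wrap a tool so execution is authorized before it commits."""
    def wrapped(*args, **kwargs):
        verdict = authorize({
            "tool": tool_fn.__name__,
            "args": args,
            "kwargs": kwargs,
        })
        if verdict["decision"] == "ALLOW":
            return tool_fn(*args, **kwargs)     # only now does state change
        if verdict["decision"] == "ESCALATE":
            raise PermissionError("routed to human review before commit")
        raise PermissionError("blocked before commit")
    return wrapped
```

An agent framework would register `checkpointed(send_wire, authorize)` instead of `send_wire` itself, so a BLOCK or ESCALATE verdict stops the side effect rather than merely logging it afterward.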

6. Training / Alignment Signal Export (Production)

Risk if ungoverned: bad examples, unsafe corrections, or unreviewed behavior changes feeding future systems.

OntoGuard produces: gold-example candidates only when governed outcomes are approved, corrected, and traceable.

Production today: LLM output governance and L3 training signal export. Agentic extensions (tool calls, memory, ontology, handoffs) are available via API integration today, with full enforcement rolling out Q3 2026.

Ontology AI as the Semantic Layer for Enterprise Intelligence

Enterprise systems already run on objects, relationships, and rules. OntoGuard turns that structure into Ontology AI: the Semantic Layer that lets AI reason over enterprise reality before a proposed state transition becomes a decision.

Enterprise Objects
  • Customer / Account / Claim
  • Policy / Contract / Case
  • Vendor / Asset / Location
  • Obligation / Exception / Evidence
Relationships & Rules
  • Customer → owns → Account
  • Claim → references → Policy
  • Contract → restricts → Data Use
  • Case → requires → Review Outcome

Semantic Layer

Maps prompts, outputs, policy scope, BM25 evidence, clause hits, enterprise objects, and symbolic traces into one governed representation.

Reasoning Layer

Turns semantic evidence into allowed, blocked, or escalated decisions with uncertainty, risk, arbitration, and human-review routing.

Cognition Layer

Feeds approved or corrected outcomes back into L3 training signals, policy improvements, retrieval improvements, and future governance quality.

Together, these layers form a System of Intelligence: not just model output, but enterprise-aware reasoning, authorization, evidence, and learning.
Enterprise Reality: objects, relationships, policies, obligations, evidence, and workflow state
+
AI Proposal: output, tool call, memory write, ontology update, policy mapping, training signal, or handoff
OntoGuard Semantic Layer: grounds, reasons, routes, audits, and converts decisions into governed learning signals
Result: “We can prove it — and here is the semantic path from enterprise state to governed decision.”

Governed State Transition Record

The Decision Authorization Packet is the buyer-readable PDF. The Governed State Transition Record is the structured JSON contract underneath it. In the current v1.2.0 lineage, the record already carries the fields needed for audit, routing, repair, and improvement loops.

before_state: Prompt, workflow, scope, and current object context
proposed_action: Output, tool call, memory write, mapping, or handoff
evidence_state: Clause hits, retrieval IDs, checksums, symbolic trace
triad_with_meta: Cold, Heat, Mercury, provenance, and penalties
audit_credential: Decision, evidence, report, and manifest hashes
improvement_signal: Review outcome and L3 gold-example candidate
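The field list above suggests a JSON shape like the following. Field names follow the list; every nested value is an invented placeholder, not the actual v1.2.0 schema.

```python
import json

# Illustrative Governed State Transition Record; contents are assumptions.
record = {
    "schema_version": "1.2.0",
    "before_state": {"workflow": "claims_review", "scope": ["FINRA", "SOX"]},
    "proposed_action": {"type": "llm_output", "summary": "approve claim"},
    "evidence_state": {
        "clause_hits": ["7.2(a)"],
        "retrieval_ids": ["r-42"],
        "checksum": "sha256:…",
    },
    "triad_with_meta": {"cold": 0.0, "heat": 2.61, "mercury": 96.55},
    "audit_credential": {"decision_hash": "…", "manifest_hash": "…"},
    "improvement_signal": {"review_outcome": "pending", "gold_candidate": False},
}

print(json.dumps(record, ensure_ascii=False, indent=2))
```

The structural point is that decision, evidence, scoring, audit hashes, and the learning signal all travel in one exportable document.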

L3 Training Signal Export — From Decision to Gold Example Candidate

The same governance packet that protects a workflow can also create clean, reviewable signals for better retrieval, policies, evaluation sets, and future agents.

1. Attempt: Model, agent, or tool call proposes an output.
2. OntoGuard Decision: ALLOW, BLOCK, or ESCALATE is computed before release.
3. Evidence + Reasons: Trace, clauses, uncertainty, risk, and reason codes are exported.
4. Gold Examples: Approved review outcomes become candidate training and policy examples.
5. Better Decision Making: Future retrieval, policies, agents, and evals improve from governed signals.
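The gold-example rule in the pipeline above is a filter: only approved or corrected, traceable outcomes are exported. A minimal sketch, with assumed field names and outcome labels:

```python
def export_gold_examples(outcomes: list[dict]) -> list[dict]:
    """Keep only reviewed outcomes that were approved or rewritten AND
    carry a trace ID, so every training example stays auditable."""
    return [
        {
            "prompt": o["prompt"],
            "response": o["final_response"],
            "trace_id": o["trace_id"],
        }
        for o in outcomes
        if o["review_outcome"] in {"APPROVE", "REWRITE"} and o.get("trace_id")
    ]
```

Blocked outcomes and untraceable runs never enter the export, which is what keeps the downstream training signal clean.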

What Real Enterprise Output Looks Like

This is output from a real financial services pilot (Q1 2026, NDA). We surfaced primary-scope coverage gaps and withheld autonomous release — exactly the commercially valuable behavior buyers need. The workflow was evaluated under FINRA, GLBA, SEC, and SOX primary scope.

Financial Services Pilot · Buyer-Safe Telemetry · Primary-Scope Gap Flagging · Governed State Transition Record v1.2.0
Primary Scope Coverage: FINRA / GLBA / SEC / SOX = 0.00% (flagged)
Decision: ESCALATE
Routed To: HUMAN_REVIEW
Trust: 87.09% (below autonomous threshold)
Triad: Cold 0.00% | Heat 2.61% | Mercury 96.55%
Arbitration: Transparent agent voting and native computation
Evidence Handling: Pending review anchors and coverage-gap disclosure
Artifact Depth: 17-page PDF + rich JSON with 87 top-level keys
Why this matters: mapped evidence can exist while the decision-driving financial regulations remain uncovered. OntoGuard exposes that gap before release instead of presenting a false clean pass.
Similar artifacts have been delivered across KYC, claims, prior authorization, and benefits determination workflows. The financial packet and JSON are available as sanitized or NDA-gated artifacts; a public mental-health sample packet remains openly available.

Measurable Business Impact

OntoGuard is not only an audit story. In a pilot, its artifacts map directly to review minutes saved, escalation-rate reduction, false-approval prevention, faster cycle time, and safer launch velocity.

Opex Takeout

  • Reduce paid review minutes
  • Lower escalations and rework loops
  • Automate evidence packaging and QA checks
Escalation rate ↓ · Review minutes ↓ · AHT ↓

Loss Prevention

  • Prevent wrong approvals and policy leakage
  • Catch issues before incidents or rollbacks
  • Route risky decisions to accountable review
False approvals ↓ · Incidents ↓ · Fraud exposure ↓

Working Capital

  • Shorten claims, disputes, exceptions, and close cycles
  • Improve denial prevention and collections workflows
  • Attach proof at the moment of decision
Cycle time ↓ · Denials ↓ · DSO ↓
Assumption-safe ROI: hard-dollar calculations appear only when buyer-supplied baselines exist. No invented savings, no fake benchmarks.

Current Capability vs. Expansion Roadmap

Core capabilities — LLM Output Governance and L3 Training Signals — are live in production today. The same governance engine and Decision API contract extend to agentic workflows through API integration.

Production Today

LLM Output Governance + L3 Training Signals

Full Decision Authorization Packet, buyer-safe telemetry, audit hashes, human-review routing, and closed-loop training signal export.

API Today · Enforcement Q3 2026

Agentic State Governance

Tool call authorization, memory write control, ontology change proposals, policy mapping updates, and multi-agent handoff governance.

Available now via API integration. Full enforcement middleware and framework adapters are rolling out in Q3 2026.

💼 Enterprise Workflows We Govern Today

OntoGuard governs proposed AI state transitions in workflows where mistakes become cost, regulatory exposure, operational delay, or customer harm.

Financial Services

KYC and onboarding approvals, claims and dispute resolution, underwriting and credit decisions, advisor copilots, SEC / FINRA reporting support, fraud operations, and customer communications.

Healthcare

Prior authorization, eligibility checks, clinical documentation, coding support, patient triage, care navigation, medical chatbot review, and safety escalation.

Public Sector & Enterprise Ops

Benefits and eligibility determinations, casework, investigations, procurement approvals, customer operations, refunds, exceptions, IT automation, and change approvals.

Human Review Options: APPROVE · REWRITE · BLOCK

How OntoGuard Actually Works

OntoGuard exports the Semantic Layer and Reasoning Layer: ontology scope → BM25/semantic evidence → clause coverage → arbitration → audit hashes → human routing → L3 signal.

1 · Governed State Transition Record

v1.2.0-style structure carries before state, proposed action, evidence state, decision, triad metadata, audit credential, and improvement signal.

2 · BM25 + Semantic Candidate Bag

Current-run prompt, response, and scope anchors feed lexical BM25, semantic retrieval, clause normalization, checksums, and no-silent-drop telemetry.

3 · Primary-Scope Gap Penalty

Cold index is penalized when decision-driving regulations such as FINRA, GLBA, SEC, or SOX have zero coverage, even when supplemental regulations are mapped.
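A minimal sketch of such a penalty, assuming a simple zero-coverage rule (the actual scoring formula is not described here, so treat this as an assumption):

```python
# Decision-driving regulations for this workflow, per the pilot above.
PRIMARY = {"FINRA", "GLBA", "SEC", "SOX"}

def cold_with_penalty(base_cold: float, coverage: dict[str, float]) -> float:
    """Zero out the Cold (grounding) index when any primary-scope
    regulation has no coverage, even if supplemental mappings exist."""
    uncovered = [reg for reg in PRIMARY if coverage.get(reg, 0.0) == 0.0]
    if uncovered:
        return 0.0  # a primary gap is flagged, never averaged away
    return base_cold
```

The design intent matches the pilot result shown earlier: supplemental evidence alone cannot produce a clean Cold score when the regulations that drive the decision are uncovered.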

4 · Buyer-Safe vs. Internal Lanes

Buyers see clean decision evidence; internal lanes preserve repair flags, monotonic markers, schema fences, diagnostics, and repair provenance.

5 · Transparent Agent Consensus

Compliance, Accuracy, Risk, and Feedback agents expose votes, disagreement, consensus, and native arbitration computation.
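Transparent arbitration means the per-agent votes survive into the output instead of being collapsed into one opaque score. The agent names below follow the text; the tally rule (any disagreement escalates) is an illustrative assumption.

```python
def arbitrate(votes: dict[str, str]) -> dict:
    """Tally agent votes and keep them visible in the result."""
    tally: dict[str, int] = {}
    for _agent, vote in votes.items():
        tally[vote] = tally.get(vote, 0) + 1
    decision = max(tally, key=lambda v: tally[v])
    if len(tally) > 1:
        decision = "ESCALATE"  # assumed rule: disagreement forces review
    return {"decision": decision, "votes": votes, "tally": tally}
```

A buyer reading the packet can therefore see not only the decision but which agent dissented and by how much.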

6 · Portable Proof

Financial pilot artifacts expose 87-key JSON depth, hashes, coverage gaps, evidence anchors, schema-constrained fields, and L3 improvement readiness.

Differentiation: OntoGuard does not bury meaning in notebooks or pipeline glue. It exports the semantic chain as a buyer artifact.

Regulatory Strength as a Secondary Superpower

OntoGuard is broader than compliance tooling, but regulatory readiness remains a powerful entry point. The platform can expose whether the regulations that matter for a workflow are actually covered — not merely whether some evidence was found.

Financial Services

FINRA, GLBA, SEC, SOX, credit, advisory, customer communications, trading support, regulated reporting, and primary-scope coverage gaps.

Healthcare and Life Sciences

HIPAA, PHI handling, clinical support workflows, prior authorization, medical chatbot review, patient communication, and safety escalation.

Privacy and AI Governance

GDPR, EU AI Act readiness, privacy obligations, risk disclosures, human-review routing, and evidence-backed decision records.

Public Sector and Operations

Eligibility, procurement, exception handling, casework, audit trails, reviewer accountability, and policy-change governance.

We do not just check boxes. We surface coverage gaps in the laws, policies, and enterprise rules that actually drive the workflow.
Under the current EU AI Act timeline, key high-risk and transparency obligations become applicable around August 2, 2026, while simplification amendments remain in motion.

Patent Feature Remediation Expansion Roadmap

Every packet can expose a buyer-readable next human step tied to the exact L1, L2, or L3 capability that needs remediation.

L1 Evidence Remediation: add or verify missing evidence, citation, retrieval ID, clause mapping, or ontology scope.
L2 Consensus Remediation: resolve trust, risk, uncertainty, hallucination, or semantic disagreement before release.
L3 Feedback Remediation: convert reviewer outcome into policy, retrieval, evaluation, or gold-example improvement signal.

Why This Matters to the Buyer

Buyers do not need another opaque score. They need a defensible answer to four questions: should this AI change commit, why, what evidence proves it, and what happens next?

Commit? ALLOW, BLOCK, or ESCALATE
Why? Reason codes, triad, risk, uncertainty
Proof? Evidence IDs, retrieval IDs, hashes, trace
Next? Human route or L3 improvement signal

How It Works

OntoGuard uses a three-layer Semantic Governance Stack: L1 Symbolic Grounding, L2 Semantic Consensus, and L3 Alignment Feedback. The result is not just a score — it is a governed release decision with evidence, traceability, and reusable learning signals.

1. The AI Evolution

Diagram showing the evolution from basic LLM outputs to ontology-grounded runtime cognitive control and state-transition governance

2. Solving the Peak Data Problem

Diagram explaining the peak data problem and how ontology-grounded AI governance creates reusable evidence and learning signals

3. The AI Trust Pipeline

OntoGuard AI trust pipeline showing semantic governance, evidence retrieval, compliance scoring, and decision authorization

4. Executive Summary

One-page visual summary of OntoGuard runtime cognitive control plane, symbolic trace, evidence pack, triad scoring, and Decision Authorization Packet

📄 NDA Access Request

Not Just Ontology — Runtime Authorization + Learning Loop

  • 🧠 Decision API: ALLOW, BLOCK, or ESCALATE with reasons, confidence, trace ID, evidence, and release status
  • 🔁 Training Signal Export: governed decisions become clean examples for better future agents
  • 📊 Semantic Governance Triad: Cold, Heat, and Mercury explain grounding, volatility, and trace fidelity
  • 🛡️ Audit-Ready Proof: governed response, evidence, hashes, hallucination status, uncertainty, and human-review task

U.S. Patent Application 19/444,521 — Track I Prioritized Examination Granted May 4, 2026. OntoGuard produces runtime authorization decisions backed by evidence, routing, auditability, and improvement signals.

Public sample packet available now. Full technical details, claims mapping, and private demo assets available under NDA.

Glossary for Runtime Cognitive Control Buyers

Ontology AI: AI grounded in enterprise objects, relationships, policies, evidence, and symbolic traces rather than prompt text alone.
Semantic Layer: The enterprise meaning layer that maps prompts, outputs, scope, BM25 evidence, clause hits, objects, and rules into a governable representation.
Reasoning Layer: The layer that converts semantic evidence, uncertainty, risk, and arbitration into allowed, blocked, or escalated decisions.
Cognition Layer: The feedback layer that converts governed outcomes into L3 training signals, retrieval improvements, policy updates, and future decision quality.
System of Intelligence: The full operating model of semantic grounding, reasoning, runtime authorization, auditability, human review, and learning signals.
Runtime Cognitive Control Plane: The backend layer that authorizes, blocks, escalates, records, and improves high-stakes AI state transitions.
State-Transition Governance: Governance applied to proposed AI changes. Currently in production for LLM outputs and L3 training signals. Support for tool calls, memory writes, ontology changes, policy updates, and multi-agent handoffs is available via API integration and in active development for full enforcement.
Decision Authorization Packet: A portable PDF + JSON receipt proving how an AI proposal was governed before release.
Enterprise Ontology: Object-oriented enterprise reality: customers, accounts, claims, contracts, policies, relationships, obligations, and evidence.
Buyer-Safe Telemetry: Clean decision-relevant data separated from internal repair flags, hardening markers, and diagnostic lanes.
L3 Training Signal Export: Governed outcomes transformed into reviewable improvement examples for retrieval, policies, alignment, and evals.

FAQ

What is state-transition governance?

State-transition governance asks whether a proposed AI change is authorized, traceable, and safe to commit. The proposal may be an LLM output, tool call, memory write, ontology update, policy mapping, training signal, or multi-agent handoff.

Do you support tool calls, memory writes, and multi-agent handoffs today?

Not at the same production level as LLM output governance yet. The core Decision API and JSON contract are designed to support these use cases through integration. Agent frameworks can call OntoGuard before executing actions. Full enforcement middleware and adapters for LangChain, CrewAI, and other frameworks are currently in development.

What is a Decision Authorization Packet?

A Decision Authorization Packet is a portable PDF and JSON governance receipt for a high-stakes AI state transition. It records the ALLOW, BLOCK, or ESCALATE decision, evidence, reasons, uncertainty, hallucination status, symbolic trace, human-review routing, audit hashes, and improvement signals.

Is OntoGuard only a regulatory compliance tool?

No. Regulatory readiness is a strong wedge, but OntoGuard is positioned as a runtime cognitive control plane for governing proposed state transitions across enterprise workflows, agentic systems, memory, tools, ontology, and training feedback.

How does OntoGuard use Ontology AI as a Semantic Layer?

Ontology-grounded objects, relationships, rules, clause hits, domain scope, evidence, and symbolic traces connect AI proposals to enterprise reality. This Semantic Layer becomes the Reasoning Layer and Cognition Layer that lets OntoGuard prove why a state transition was allowed, blocked, or escalated.

Which state transitions can OntoGuard govern?

The current production center is LLM output governance and L3 training-signal export. The same packet contract extends through integration to agent tool calls, memory writes, ontology changes, policy mappings, and multi-agent handoffs, with full enforcement middleware and framework adapters in active development.

What makes the financial services pilot important?

It shows OntoGuard doing the commercially valuable thing: surfacing primary-scope coverage gaps in FINRA, GLBA, SEC, and SOX, withholding autonomous release, and exporting buyer-safe evidence instead of presenting a false clean pass.

What happens when evidence is incomplete?

The packet still exports. OntoGuard routes uncertainty to SAFE_TEMPLATE or HUMAN_REVIEW, records reason codes, preserves audit hashes, flags coverage gaps, and avoids silently blank evidence or failed artifacts.

Book a 6-Week Pilot on Your Real High-Stakes Workflow

Bring 3–5 representative prompts or AI outputs into a structured six-week pilot. OntoGuard will return Decision Authorization Packets showing ALLOW, BLOCK, or ESCALATE, release status, evidence, trace, risk, uncertainty, hallucination status, audit hashes, and human-review routing.

No commitment. No retraining. No model-weight changes.

For deeper diligence, we can provide NDA terms, private packet artifacts, integration materials, and technical briefings.

Contact

Email mark.starobinsky@ontoguard.ai for pilots, licensing, partnerships, or NDA access.