OntoGuard Decision Authorization Packet

Direct LLM backend governance, Decision Authorization Packets, and ontology-grounded decisions.

U.S. Patent Application 19/444,521 — Track I Prioritized Examination Granted May 4, 2026.
Direct LLM Backend Governance · Runtime AI Governance · Decision Authorization Packet · EU AI Act Readiness · Closed-Loop AI Training Signals

OntoGuard sits between your LLM and the business workflow. It decides whether an AI output should be ALLOWed, BLOCKed, or ESCALATEd — then produces a portable PDF + JSON packet with evidence, risk, uncertainty, hallucination status, symbolic trace, audit hashes, human-review routing, backend governance records, and improvement signals.

Before OntoGuard: the buyer had AI outputs.
After OntoGuard: the buyer has governed AI decisions with evidence, routing, auditability, and learning signals.

AI Act readiness is no longer theoretical. Current-law high-risk obligations are slated to apply from August 2, 2026, while EU simplification amendments remain in motion. Either way, buyers will need evidence, routing, auditability, and governance receipts. Last updated: May 11, 2026.

Runtime AI Control Before Release

AI teams are deploying model outputs faster than legal, compliance, risk, and product teams can govern them.

Model logs show what happened. Evals test behavior before deployment. Observability monitors systems. Policy documents describe expectations. OntoGuard authorizes, blocks, or escalates live AI outputs before downstream release.

It does not replace your model. It governs the backend output boundary with a Decision API and Decision Authorization Packet: a buyer-facing, audit-ready PDF + JSON packet with decision reasons, ontology-grounded evidence, traceability, risk, uncertainty, hallucination status, human-review tasking, and L3 improvement signals.
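To make the boundary concrete, here is a minimal sketch of what a backend governance call could look like. The function name, fields, and the escalation rule are illustrative assumptions, not OntoGuard's actual API.

```python
# Hypothetical sketch of a backend governance call. All names and the
# toy escalation rule are assumptions for illustration only.

def govern_output(prompt: str, response: str, domains: list[str]) -> dict:
    """Evaluate an LLM response before downstream release."""
    # A real implementation would run grounding and consensus checks here;
    # this stub only illustrates the shape of the result.
    decision = "ESCALATE" if "diagnosis" in response.lower() else "ALLOW"
    return {
        "decision": decision,                      # ALLOW / BLOCK / ESCALATE
        "release_authorized": decision == "ALLOW",
        "routed_to": "HUMAN_REVIEW" if decision == "ESCALATE" else "WORKFLOW",
        "domains": domains,
    }

packet = govern_output(
    prompt="Summarize the patient's intake notes.",
    response="Preliminary diagnosis: moderate anxiety.",
    domains=["HIPAA", "EU AI Act"],
)
print(packet["decision"])  # prints ESCALATE for this sample response
```

The point is the shape of the contract: the workflow never sees a raw model output, only an authorization decision with routing attached.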

OntoGuard turns AI outputs into governed business decisions — and turns governed decisions into clean gold-example candidates for better future agents.

Zero Hard Gates. Always Export. Evidence Never Blank.

Regulated buyers need governance that is resilient under ambiguity. OntoGuard is designed so uncertainty does not erase the audit trail.

Zero Hard Gates

Hard failures are converted into explicit abstention, SAFE_TEMPLATE, or HUMAN_REVIEW routing so the governance record still exists.

Always-Export Artifacts

Every governed run is expected to produce a buyer-readable PDF and complete JSON packet, even when the answer is withheld.

Evidence-Never-Blank

If evidence is sparse, OntoGuard records fallback reasons, provisional review anchors, uncertainty, and next human step instead of silently emitting an empty proof trail.
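The three invariants above can be sketched as one pattern: a failure during evaluation is caught and converted into an explicit routing record rather than a missing artifact. All names below are illustrative assumptions.

```python
# Illustrative sketch of the "zero hard gates" invariant: evaluation
# failures become explicit abstention records, never absent artifacts.
# Function and field names are assumptions for illustration.

def evaluate(response: str) -> dict:
    if not response.strip():
        raise ValueError("empty response: no evidence to ground")
    return {"decision": "ALLOW", "reasons": ["evidence grounded"]}

def govern_with_fallback(response: str) -> dict:
    try:
        return evaluate(response)
    except Exception as exc:
        # The governance record still exists: explicit routing, recorded
        # fallback reason, and a next human step instead of a blank trail.
        return {
            "decision": "ESCALATE",
            "routed_to": "HUMAN_REVIEW",
            "fallback_reason": str(exc),
            "next_human_step": "Review withheld output and supply evidence.",
        }

record = govern_with_fallback("")  # a hard failure becomes routed abstention
```

Even the degenerate input yields an exportable record with a reason and a next step.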

What the Decision Authorization Packet Gives You

How It Works — The 3-Layer Semantic Governance Stack

OntoGuard is not just ontology lookup and not just scoring. It uses ontology throughout the governance path to connect an LLM output to policy, evidence, scope, trace, human review, and learning feedback.

L1: Symbolic Grounding

Normalizes the governed prompt and response into clause hits, evidence IDs, retrieval IDs, checksums, regulation/domain scope, and a buyer-readable symbolic trace.

  • Ontology-grounded concepts
  • Clause and fallback hits
  • Evidence pack and provenance

L2: Semantic Consensus

Compares compliance, accuracy, risk, uncertainty, hallucination status, and agent disagreement before deciding whether release is safe.

  • Trust and uncertainty signals
  • Cold / Heat / Mercury triad
  • ALLOW / BLOCK / ESCALATE routing

L3: Alignment Feedback

Turns governed outcomes into reusable improvement signals for retrieval, policy, alignment, training data curation, and future agent decision quality.

  • Gold example candidates
  • Human-review outcomes
  • Closed-loop training signal export
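The three layers above can be sketched as a single pipeline. The thresholds, field names, and scoring below are invented for illustration and are not OntoGuard's real internals.

```python
# Minimal sketch of the L1 -> L2 -> L3 flow. Thresholds, field names,
# and the toy scoring are assumptions for illustration only.

def l1_ground(response: str) -> dict:
    """L1: normalize a response into grounded evidence references."""
    return {
        "clause_hits": ["GDPR-9.1"],
        "evidence_ids": ["EV-001"],
        "trace": "grounded %d tokens" % len(response.split()),
    }

def l2_consensus(grounding: dict, uncertainty: float) -> str:
    """L2: decide routing from grounding and uncertainty signals."""
    if not grounding["evidence_ids"]:
        return "BLOCK"          # no evidence: never release
    return "ESCALATE" if uncertainty > 0.3 else "ALLOW"

def l3_feedback(decision, review_outcome=None):
    """L3: emit improvement signals from the governed outcome."""
    if decision == "ESCALATE" and review_outcome == "APPROVE":
        return [{"kind": "gold_example_candidate"}]
    return []

grounding = l1_ground("The clause requires explicit consent.")
decision = l2_consensus(grounding, uncertainty=0.75)       # "ESCALATE"
signals = l3_feedback(decision, review_outcome="APPROVE")  # one candidate
```

Each layer consumes the previous layer's output, which is why the packet can carry grounding, routing, and learning signals together.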

Direct LLM Backend Governance Boundary

OntoGuard already governs model outputs at the backend boundary. It evaluates high-stakes LLM responses before downstream release and produces a Decision Authorization Packet with evidence, reasons, routing, uncertainty, audit hashes, and L3 improvement signals.

Current: Model Output Intake

Accepts a governed prompt, LLM response, workflow context, target domains, and regulatory scope for pre-release evaluation.

Current: Decision API

Produces ALLOW / BLOCK / ESCALATE, release status, routed_to, business effect, trust, uncertainty, and buyer-readable reason codes.
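A hedged sketch of the fields such a response could carry, based on the description above. The exact schema and values are assumptions for illustration.

```python
# Hypothetical Decision API response shape, assembled from the fields
# named above. Field names and values are illustrative assumptions.
import json

api_response = {
    "decision": "ESCALATE",
    "release_status": "WITHHELD_PENDING_REVIEW",
    "routed_to": "HUMAN_REVIEW",
    "business_effect": "response withheld from patient-facing chat",
    "trust": 0.25,
    "uncertainty": 0.62,
    "reason_codes": ["LOW_TRUST", "REGULATED_DOMAIN"],
}

# The response is plain JSON, so it round-trips into audit storage as-is.
serialized = json.dumps(api_response, indent=2)
```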

Current: Decision Authorization Packet

Exports the PDF + JSON governance receipt with evidence IDs, retrieval IDs, symbolic trace, audit hashes, hallucination status, and next human step.

Current: Human Review Routing

Routes escalated outputs to HUMAN_REVIEW with APPROVE, REWRITE, or BLOCK outcomes and preserves review decisions as L3 feedback signals.

Current: L3 Training Signal Export

Converts governed outcomes into gold-example candidates for retrieval, policy, evaluation, alignment, and future decision improvement.
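The conversion can be pictured as a simple filter: only approved, fully-evidenced outcomes survive as candidates. The function and record fields below are illustrative assumptions.

```python
# Illustrative sketch of gold-example candidate export. The filter rule
# and record fields are assumptions, not OntoGuard's actual format.

def to_gold_candidate(packet: dict, review_outcome: str):
    """Keep only approved, evidenced outcomes as training candidates."""
    if review_outcome != "APPROVE" or not packet.get("evidence_ids"):
        return None
    return {
        "prompt": packet["prompt"],
        "governed_response": packet["governed_response"],
        "evidence_ids": packet["evidence_ids"],
        "label": "gold_example_candidate",
    }

candidate = to_gold_candidate(
    {"prompt": "Explain clause 9.1.",
     "governed_response": "Clause 9.1 requires explicit consent.",
     "evidence_ids": ["EV-001"]},
    review_outcome="APPROVE",
)
```

A rejected review or an empty evidence list yields no candidate, which keeps the training signal clean.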

Roadmap: Agentic Tool Control

ToolCallEnvelope, DecisionReceipt, Tool Policy Registry, Enforcement Middleware, and immutable System of Record are planned as a premium agent-control-plane extension.

Shipping now: backend model-output governance, Decision API, Decision Authorization Packet, human-review routing, audit evidence, and L3 training-signal export.
In development: full agent/tool-call control plane for autonomous actions and policy-enforced tool execution.

L3 Training Signal Export — From Decision to Gold Example Candidate

The same governance packet that protects a workflow can also create clean, reviewable signals for better retrieval, policies, evaluation sets, and future agents.

1. Attempt: Model, agent, or tool call proposes an output.
2. OntoGuard Decision: ALLOW, BLOCK, or ESCALATE is computed before release.
3. Evidence + Reasons: Trace, clauses, uncertainty, risk, and reason codes are exported.
4. Gold Examples: Approved review outcomes become candidate training and policy examples.
5. Better Decision Making: Future retrieval, policies, agents, and evals improve from governed signals.

See the Packet OntoGuard Produces

Sample scenario: a mental-health AI chatbot response evaluated under GDPR, HIPAA, and EU AI Act context. OntoGuard governed the output before release and produced a portable Decision Authorization Packet.

Decision: ESCALATE
Release Authorized: No
Release Status: WITHHELD_PENDING_REVIEW
Routed To: HUMAN_REVIEW
Trust: 25.00%
Triad: Cold 90.57% | Heat 17.56% | Mercury 96.77%
Hallucination Status: Pass
Proof: Evidence, hashes, governed response, review task, and L3 signals attached
The public sample is sanitized and non-confidential. Full technical architecture, claims mapping, private demo assets, and integration materials remain available under NDA.

💼 Use Case — High-Stakes AI Output Authorization

An AI system generates an answer, recommendation, analysis, or action that could affect a customer, patient, employee, regulator, or business workflow.

OntoGuard produces an authorization decision — ALLOW, BLOCK, or ESCALATE — plus a Decision Authorization Packet showing the governed prompt, governed response, evidence, policy context, risk, uncertainty, hallucination status, symbolic trace, audit hashes, and next human step.

Example workflows include healthcare and mental-health AI, financial risk explanations, legal analysis, insurance claims, enterprise copilots, regulated reporting, and agentic business actions.

Human Review Options: APPROVE / REWRITE / BLOCK

New Decision Packet Extensions

OntoGuard now packages commercial, audit, and learning signals directly into the Decision Authorization Packet — without changing trust, policy, benchmark, risk, or Decision API outcomes.

ROI Calculator

From customer-supplied inputs, the ROI calculator renders hard-dollar opex savings, loss-prevention value, implementation cost, payback period, and ROI status without inventing assumptions.
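The arithmetic implied above is straightforward. The sketch below uses only customer-supplied inputs; the variable names and the specific formulas are illustrative assumptions.

```python
# Minimal sketch of the ROI arithmetic, from customer-supplied inputs
# only. Variable names and formulas are illustrative assumptions.

def roi_summary(annual_opex_savings, annual_loss_prevented, implementation_cost):
    annual_value = annual_opex_savings + annual_loss_prevented
    payback_months = 12 * implementation_cost / annual_value
    roi_pct = 100 * (annual_value - implementation_cost) / implementation_cost
    return {
        "payback_months": round(payback_months, 1),
        "roi_pct": round(roi_pct, 1),
    }

summary = roi_summary(
    annual_opex_savings=200_000,
    annual_loss_prevented=100_000,
    implementation_cost=60_000,
)
# 12 * 60,000 / 300,000 = 2.4 months payback; (300,000 - 60,000) / 60,000 = 400% ROI
```

Because every input is supplied by the customer, the output states no assumption the buyer did not make.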

Compliance Heatmap

Visual coverage matrix for regulations, clause counts, citation linkage, display bands, and review status so buyers can see gaps at a glance.

Benchmark Lines

Industry benchmark line and peer-comparison slot that display percentile claims only when a verified benchmark corpus and dataset hash exist.

Offline Audit Credential

Unsigned audit anchor using existing hashes. External verification URL remains unconfigured until a real verifier is deployed.
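An unsigned anchor over existing hashes can be as simple as a deterministic digest of the canonicalized packet. The anchor format below is an assumption for illustration.

```python
# Sketch of an unsigned offline audit anchor built from packet hashes.
# The anchor format and field names are illustrative assumptions.
import hashlib
import json

def audit_anchor(packet: dict) -> dict:
    # Canonicalize with sorted keys so the same packet always hashes
    # identically, regardless of dict insertion order.
    canonical = json.dumps(packet, sort_keys=True).encode("utf-8")
    return {
        "packet_sha256": hashlib.sha256(canonical).hexdigest(),
        "signed": False,           # unsigned until a real verifier exists
        "verification_url": None,  # unconfigured, per the note above
    }

anchor = audit_anchor({"decision": "ESCALATE", "trust": 0.25})
# Recomputing the digest over the same packet reproduces the anchor offline.
```

Anyone holding the packet can recompute the digest offline, which is what makes the credential verifiable even before a signer is deployed.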

Trend History

Optional inline history to show how decisions, trust, risk, and semantic governance signals change across runs.

Commercial Packaging

Sellable-lite callout for enterprise workflow pilots, proof packs, procurement, and high-stakes AI deployment discussions.

Multi-Domain and Jurisdiction Expansion

OntoGuard is built as a portable governance layer. Domain and jurisdiction packs can target regulated workflows where AI outputs need evidence, routing, and auditability.

Financial Services

SEC, FINRA, SOX, GLBA, credit, risk, trading support, customer communications, and regulated reporting workflows.

Healthcare and Life Sciences

HIPAA, PHI handling, clinical support workflows, medical chatbot review, patient communication, and safety escalation.

Legal and Compliance

Contract review, legal analysis, policy interpretation, evidence preservation, and reviewer sign-off for high-stakes outputs.

Energy and Critical Infrastructure

Operational risk, infrastructure procedures, incident response, reliability review, and audit-ready AI decision governance.

Patent Feature Remediation Roadmap

Every packet can expose a buyer-readable next human step tied to the exact L1, L2, or L3 capability that needs remediation.

L1 Evidence Remediation: add or verify missing evidence, citation, retrieval ID, clause mapping, or ontology scope.
L2 Consensus Remediation: resolve trust, risk, uncertainty, hallucination, or semantic disagreement before release.
L3 Feedback Remediation: convert reviewer outcome into policy, retrieval, evaluation, or gold-example improvement signal.

What You Gain That You Do Not Have Today

OntoGuard is not merely model logging, evals, observability, GRC, or human review tooling. It is runtime output authorization with evidence, routing, auditability, and improvement signals attached.

Without OntoGuard: AI produces outputs with limited proof of pre-release control.
With OntoGuard: AI outputs become governed decisions before release.

Without OntoGuard: Human review is ad hoc, manual, or inconsistently triggered.
With OntoGuard: Human review is triggered by explicit Decision API routing and reason codes.

Without OntoGuard: Evidence is difficult to reconstruct after the fact.
With OntoGuard: Evidence IDs, retrieval IDs, checksums, symbolic trace, and hashes are preserved.

Without OntoGuard: Compliance rationale lives in separate policies, tickets, or tribal knowledge.
With OntoGuard: Policy, regulatory, risk, uncertainty, and hallucination rationale is attached to the decision.

Without OntoGuard: Failures become one-off incidents.
With OntoGuard: Failures become structured L3 improvement signals for retrieval, alignment, policy, and future agents.

How It Works

OntoGuard uses a three-layer Semantic Governance Stack: L1 Symbolic Grounding, L2 Semantic Consensus, and L3 Alignment Feedback. The result is not just a score — it is a governed release decision with evidence, traceability, and reusable learning signals.

1. The AI Evolution

LLM Evolution to Ontology-Enhanced AI

2. Solving the Peak Data Problem

Peak AI Problem and Solution

3. The AI Trust Pipeline

AI Validation and Compliance Scoring

4. Executive Summary

AI Reasoning, Compliance, and Symbolic Trust System Overview

📄 NDA Access Request

Not Just Ontology — Runtime Authorization + Learning Loop

  • 🧠 Decision API: ALLOW / BLOCK / ESCALATE with reasons, confidence, trace ID, evidence, and release status
  • 🔁 Training Signal Export: governed decisions become clean examples for better future agents
  • 📊 Semantic Governance Triad: Cold, Heat, and Mercury explain grounding, volatility, and trace fidelity
  • 🛡️ Audit-Ready Proof: governed response, evidence, hashes, hallucination status, uncertainty, and human-review task

U.S. Patent Application 19/444,521 — Track I Prioritized Examination Granted May 4, 2026. OntoGuard produces runtime authorization decisions backed by evidence, routing, auditability, and improvement signals.

Public sample packet available now. Full technical details, claims mapping, and private demo assets available under NDA.

Glossary for Runtime AI Governance Buyers

Decision Authorization Packet: A portable PDF + JSON receipt proving how an AI output was governed before release.
Runtime AI Governance: Governance applied to a live model output or agent action before it reaches the business workflow.
Ontology AI: Use of structured concepts, relationships, clauses, and evidence to ground LLM decisions in domain context.
Control Plane: The backend layer that wraps attempts, enforces policy, records decisions, and routes human review.
L3 Training Signal Export: Governed decisions transformed into reviewable improvement examples for future agents, policies, retrieval, and evals.
Zero Hard Gates: A governance invariant where failures become explicit routing and evidence disclosures, not missing artifacts.

FAQ

What is a Decision Authorization Packet?

A Decision Authorization Packet is a portable PDF and JSON governance receipt for a high-stakes AI output. It shows the ALLOW, BLOCK, or ESCALATE decision, evidence, reasons, uncertainty, hallucination status, symbolic trace, human-review task, audit hashes, and improvement signals.

How does runtime AI governance work?

Runtime AI governance sits between the LLM and the downstream workflow. OntoGuard evaluates a live output before release, generates a decision record, routes ALLOW / BLOCK / ESCALATE outcomes, and exports a packet for audit and review.

How is OntoGuard different from model logs, evals, observability, or GRC?

Logs and observability describe what happened; evals test behavior before deployment; GRC documents controls. OntoGuard authorizes, blocks, or escalates an individual AI output at runtime and attaches evidence to that decision.

How does OntoGuard use ontology AI?

Ontology-grounded concepts, clause hits, domain scope, evidence, and symbolic traces connect raw LLM output to policies, regulations, risks, and buyer-readable reasons.

Does OntoGuard support EU AI Act compliance workflows?

OntoGuard supports EU AI Act readiness by producing evidence, governance receipts, human-review routing, uncertainty disclosures, risk summaries, and auditable decision records for high-stakes AI workflows.

What happens if evidence is incomplete or the system is uncertain?

The packet still exports. OntoGuard routes uncertainty to SAFE_TEMPLATE or HUMAN_REVIEW, records reason codes, preserves audit hashes, and avoids silently blank evidence or failed artifacts.

Get a Free 48-Hour Pilot on Your Real High-Stakes Workflow

Send 3–5 representative prompts or AI outputs. OntoGuard will return a sample Decision Authorization Packet showing ALLOW / BLOCK / ESCALATE, release status, evidence, trace, risk, uncertainty, hallucination status, audit hashes, and human-review routing.

No commitment. No retraining. No model-weight changes.

For deeper diligence, we can provide NDA terms, private packet artifacts, integration materials, and technical briefings.

Contact

Email mark.starobinsky@ontoguard.ai for pilots, licensing, partnerships, or NDA access.