Docs

ZAK is a constitutional execution environment: executors stay unpredictable (models, workflows, services, humans) while governance becomes mechanical—enforced as law and emitted as proof.

What ZAK does at runtime

Every governed interaction is wrapped in a GovernanceEnvelope (invariants cannot be disabled), evaluated against a ConstitutionalLaw, then routed through an enforcement gate.

Probe → Execute → EnforcementGate → (Deny | Silence | Transform | Emit) → Receipt
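The flow above can be sketched in a few lines. This is a hypothetical illustration, not ZAK's actual API: the class and function names (`GovernanceEnvelope`, `enforcement_gate`, the placeholder law) are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    DENY = auto()
    SILENCE = auto()
    TRANSFORM = auto()
    EMIT = auto()

# frozen=True models "invariants cannot be disabled": the envelope is immutable.
@dataclass(frozen=True)
class GovernanceEnvelope:
    request: str
    law_id: str

def enforcement_gate(envelope: GovernanceEnvelope, output: str) -> tuple[Outcome, str]:
    # Placeholder law: deny anything touching "secret", silence raw errors, else emit.
    if "secret" in envelope.request:
        return Outcome.DENY, "denied by " + envelope.law_id
    if output.startswith("ERROR"):
        return Outcome.SILENCE, "canonical safe response"
    return Outcome.EMIT, output

outcome, result = enforcement_gate(GovernanceEnvelope("read config", "law-1"), "ok")
print(outcome.name, result)  # EMIT ok
```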

The three enforcement modes

  • DENY_AT_ADMISSION: request never runs; a denial + receipt are returned.
  • ALLOW_BUT_SILENCE: request may run; output is replaced with a canonical response + receipt.
  • ALLOW_BUT_TRANSFORM: request runs; output is forced into a safe schema + receipt.
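A minimal sketch of the three modes, showing that every mode, including denial, yields a receipt over the decision record. The receipt format and field names here are assumptions for illustration, not ZAK's real schema.

```python
import hashlib
import json

def receipt(decision):
    # A receipt is a verifiable digest of the decision record (illustrative only).
    payload = json.dumps(decision, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def enforce(mode, output):
    if mode == "DENY_AT_ADMISSION":
        decision = {"mode": mode, "ran": False, "output": None}
    elif mode == "ALLOW_BUT_SILENCE":
        decision = {"mode": mode, "ran": True, "output": "canonical response"}
    elif mode == "ALLOW_BUT_TRANSFORM":
        decision = {"mode": mode, "ran": True, "output": {"text": output}}  # forced into a schema
    else:
        raise ValueError(mode)
    return decision, receipt(decision)

decision, r = enforce("DENY_AT_ADMISSION", None)
print(decision["ran"], len(r))  # False 64
```

Note that even the denied request produces a receipt; nothing exits the gate without one.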

Receipts (proof, not logs)

ZAK produces a receipt for every outcome—including denials. For the public demo, you can verify the receipt locally (hash match). For production, receipts can be chained, enriched with metadata, and signed.

See: /proof
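Local verification for the demo case can be as simple as recomputing the hash over the receipt body and comparing. The field names (`body`, `hash`) are assumptions for this sketch.

```python
import hashlib
import json

def make_receipt(body):
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"body": body, "hash": digest}

def verify(receipt):
    # Recompute the digest from the body; any tampering breaks the match.
    expected = hashlib.sha256(json.dumps(receipt["body"], sort_keys=True).encode()).hexdigest()
    return receipt["hash"] == expected

r = make_receipt({"action": "deny", "law": "law-1"})
print(verify(r))              # True
r["body"]["action"] = "emit"  # tamper with the record
print(verify(r))              # False
```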

How this differs from policy-as-code (e.g. OPA)

Policy engines are excellent at evaluating rules. ZAK’s focus is the part that breaks in the real world: enforcement at the point of execution plus receipt-shaped proof.

| Question | Policy-as-code | ZAK |
| --- | --- | --- |
| Can it evaluate rules? | Yes | Yes |
| Does it enforce outcomes at execution time? | Depends on integration; often “allow/deny” only | Yes (deny / waive / silence / transform) |
| Does it emit verifiable artifacts? | Typically logs/decision records | Receipts designed for verification + long-horizon audit |
| Is denial first-class evidence? | Usually “request blocked” | Yes (prove what didn’t execute, and why) |

In practice: ZAK can sit alongside existing policy engines. The point is to make “approved behavior” mechanically enforceable and provable.
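One way to picture the coexistence: a policy engine returns an allow/deny verdict, and a ZAK-style gate extends it with silence/transform outcomes. Everything below is illustrative; `policy_engine` stands in for a real engine query (e.g. an OPA call), and the names are invented.

```python
def policy_engine(request):
    # Stand-in for an external policy evaluation (e.g. an OPA query).
    return request.get("role") == "admin"

def zak_gate(request, output):
    if not policy_engine(request):
        # Policy verdict becomes a first-class denial at admission.
        return {"outcome": "DENY_AT_ADMISSION", "output": None}
    if "password" in output:
        # Beyond allow/deny: the gate can silence risky output post-execution.
        return {"outcome": "ALLOW_BUT_SILENCE", "output": "canonical response"}
    return {"outcome": "EMIT", "output": output}

print(zak_gate({"role": "guest"}, "data")["outcome"])  # DENY_AT_ADMISSION
```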


Research basis

The conceptual framework behind ZAK is published as: “Constitutional Governance for Computational Systems” (Kent Burchard, 2026).

The thesis: software now functions as critical infrastructure, yet governance is still mostly advisory and post-hoc. Constitutional governance formalizes system intent, continuously measures structural health, detects drift, enforces constraints mechanically at execution time, and emits auditable receipts for every governed action.

Key contributions (from the paper)

  • Formalized intent: machine-checkable architectural constraints (not prose)
  • Continuous measurement: health + trajectory (velocity/acceleration), not snapshots
  • Typed drift: classify divergence by failure mode, not just magnitude
  • Mechanical enforcement: allow / deny / require waiver integrated into execution paths
  • Formal waivers: time-bounded, auditable exceptions (not “workarounds”)
  • Execution receipts: verifiable governance record for every action
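Two of these contributions, formalized intent and formal waivers, can be sketched together: a machine-checkable constraint plus a time-bounded, auditable exception. The constraint name, threshold, and waiver fields are hypothetical, invented for this example.

```python
from datetime import date

# Hypothetical machine-checkable constraint (formalized intent, not prose).
CONSTRAINTS = {"max_module_fan_in": 10}

# Hypothetical time-bounded waiver: an auditable exception, not a workaround.
WAIVERS = [
    {"constraint": "max_module_fan_in", "expires": date(2026, 6, 30), "reason": "migration"},
]

def check(constraint, value, today):
    if value <= CONSTRAINTS[constraint]:
        return "allow"
    for w in WAIVERS:
        if w["constraint"] == constraint and today <= w["expires"]:
            return "allow-with-waiver"
    return "deny"

print(check("max_module_fan_in", 14, date(2026, 1, 1)))   # allow-with-waiver
print(check("max_module_fan_in", 14, date(2026, 12, 1)))  # deny (waiver expired)
```

The key property is that the exception expires mechanically: once the waiver lapses, the same violation flips back to a denial without anyone remembering to revoke it.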

This framework is explicitly not a replacement for domain certifications (DO‑178C, IEC 62304) or legal review. It’s a substrate that makes those processes operational, continuous, and provable.

AIREF (Australia) — “responsible execution” draft submission

We also submitted a draft framework to the Australian Government’s AI Safety Institute: the Artificial Intelligence Responsible Execution Framework (AIREF), December 2025.

AIREF takes “guardrails” and turns them into runtime execution architecture: a deterministic governance kernel, a real-time safety runtime (autonomy modulation), and an accountability layer that emits structured review units (AGRUs) for oversight and audit readiness.

  • Deterministic governance kernel: enforce policies/risk profiles before output reaches users
  • Safety runtime: modulate autonomy based on risk signals (e.g., safety bands)
  • Accountability layer: produce human-readable review units (AGRUs), not just raw logs
  • Lab-to-live: offline testing harness + online runtime enforcement as a single pipeline
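Autonomy modulation by safety band, as described for the safety runtime, can be sketched as a simple mapping from a risk signal to an autonomy level. The band names and thresholds here are assumptions, not AIREF's actual values.

```python
def autonomy_level(risk):
    # Map a normalized risk signal (0.0-1.0) to a safety band (illustrative thresholds).
    if risk < 0.3:
        return "full-autonomy"
    if risk < 0.7:
        return "supervised"   # e.g. require human-in-the-loop
    return "halted"           # escalate and emit a review unit (AGRU)

print(autonomy_level(0.1))  # full-autonomy
print(autonomy_level(0.9))  # halted
```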

Deep dives

  • OnePager — fast architecture + category framing.
  • Worlds — product surfaces (“apps”) built on the same engine.
  • Security — threat model summary and failure modes.