Legible's three-phase model validates every deployment against what your system actually does in production — building a Trusted Change Boundary before anything reaches your users.
From the moment a deploy is triggered to the moment production behaviour is confirmed, Legible governs every step using evidence, not assumptions.
We ingest your existing OpenTelemetry traces and build versioned production fingerprints — a living map of how your services actually interact. No new instrumentation. No code changes.
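A fingerprint of this kind can be thought of as an aggregation over exported span records. The sketch below is purely illustrative, not Legible's actual API: the span fields (`service`, `parent_service`) and the edge-share representation are assumptions for the example.

```python
from collections import Counter

def build_fingerprint(spans):
    """Aggregate caller->callee service edges from span records into a
    fingerprint: each edge mapped to its share of observed traffic."""
    edges = Counter()
    for span in spans:
        if span.get("parent_service"):
            edges[(span["parent_service"], span["service"])] += 1
    total = sum(edges.values())
    return {edge: count / total for edge, count in edges.items()}

spans = [
    {"service": "checkout", "parent_service": "gateway"},
    {"service": "payments", "parent_service": "checkout"},
    {"service": "payments", "parent_service": "checkout"},
]
fp = build_fingerprint(spans)
# fp maps (caller, callee) edges to their traffic share
```

Versioning such fingerprints per deploy is what turns them into a "living map": two fingerprints can then be diffed to see how service interactions changed.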
Before a deployment ships, Legible generates a hypothesis — the Inferred Intended Change — and validates it against staging behaviour. The result is a Trusted Change Boundary: the maximum allowed surface.
Production behaviour is checked against the boundary. Changes inside: explained. Changes outside: unexplained and flagged. Verdict: ALLOW, MONITOR, HOLD, or ESCALATE — with a full evidence chain.
When a deployment is detected, Legible normalises evidence from your CI/CD pipeline, changelogs, PRs, feature flags, and config systems into a unified evidence model.
From this evidence, the system generates an Inferred Intended Change (IIC) — a structured prediction of what behavioural changes the deployment will produce.
The IIC is a hypothesis, not a source of truth. It has no governance authority on its own. It must be validated by what actually happens.
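One way to picture an IIC is as a structured record that bundles predicted behavioural changes with the evidence they were inferred from. This is a hypothetical sketch, assuming a simple edge-set representation of behaviour; the field names are illustrative, not Legible's schema.

```python
from dataclasses import dataclass, field

@dataclass
class InferredIntendedChange:
    """A structured prediction of behavioural change, carrying its evidence.
    It is a hypothesis only: it confers no governance authority by itself."""
    deploy_id: str
    predicted_edges_added: set = field(default_factory=set)
    predicted_edges_removed: set = field(default_factory=set)
    evidence: list = field(default_factory=list)  # normalised source records

iic = InferredIntendedChange(
    deploy_id="deploy-1042",
    predicted_edges_added={("checkout", "fraud-check")},
    evidence=[{"source": "pr", "ref": "PR#88"},
              {"source": "feature-flag", "ref": "fraud_check_enabled"}],
)
```

Keeping the evidence inline is what makes the later verdict auditable: every prediction can be traced back to the CI/CD event, PR, or flag change that produced it.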
After deployment to staging, Legible observes actual runtime behaviour and computes a Stage Behavioral Delta (SBD) — a multi-dimensional measurement across structural topology, traffic distribution, retry patterns, and latency.
The SBD is compared against the Phase 1 hypothesis. Verdicts: CONFIRMED, SUPERSET, SUBSET, DIVERGENT, or UNVERIFIABLE.
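Along a single dimension (here, the set of changed service edges), the five verdicts reduce to set comparisons. A minimal sketch, assuming edge sets; the real SBD is multi-dimensional, so this is one slice of the comparison, not the whole of it:

```python
def stage_verdict(predicted, observed):
    """Compare the stage behavioural delta to the Phase 1 hypothesis."""
    if observed is None:
        return "UNVERIFIABLE"  # stage telemetry unavailable
    if observed == predicted:
        return "CONFIRMED"
    if observed > predicted:   # everything predicted happened, plus more
        return "SUPERSET"
    if observed < predicted:   # only part of the prediction materialised
        return "SUBSET"
    return "DIVERGENT"         # overlapping but inconsistent change sets
```

For example, `stage_verdict({("checkout", "fraud-check")}, {("checkout", "fraud-check"), ("checkout", "audit-log")})` returns `"SUPERSET"`: staging showed more change than the hypothesis predicted.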
The system then constructs the Trusted Change Boundary — the maximum allowed surface — from stage-validated changes, confirmed predictions, known environment divergences, and bounded transitive dependencies.
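In set terms, the boundary is the union of everything Phase 2 can vouch for. The sketch below is illustrative, assuming the same edge-set representation as above; how Legible actually bounds transitive dependencies is not shown here.

```python
def trusted_change_boundary(stage_validated, known_env_divergences, transitive_deps):
    """The maximum allowed surface: the union of stage-validated changes,
    known stage/production divergences, and bounded transitive dependencies."""
    return set(stage_validated) | set(known_env_divergences) | set(transitive_deps)

boundary = trusted_change_boundary(
    stage_validated={("checkout", "fraud-check")},
    known_env_divergences={("checkout", "mock-payments")},
    transitive_deps={("fraud-check", "ml-scoring")},
)
```

Anything production does inside this set is attributable to the deployment; anything outside it has no validated explanation.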
After deployment to production, Legible computes a Production Behavioral Delta and checks it against the Trusted Change Boundary.
Changes inside the boundary: explained. Changes outside: unexplained and classified by confidence tier.
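The explained/unexplained split is a partition of the production delta against the boundary. A minimal sketch under the same illustrative edge-set assumption (confidence-tier classification is omitted):

```python
def check_production(production_delta, boundary):
    """Partition the production behavioural delta against the
    Trusted Change Boundary."""
    return {
        "explained": production_delta & boundary,    # inside the boundary
        "unexplained": production_delta - boundary,  # outside: flag for review
    }

result = check_production(
    production_delta={("checkout", "fraud-check"), ("checkout", "legacy-db")},
    boundary={("checkout", "fraud-check")},
)
# ("checkout", "legacy-db") falls outside the boundary and would be flagged
```

The unexplained set is what drives the final verdict: an empty one supports ALLOW, while unexplained changes push toward MONITOR, HOLD, or ESCALATE depending on their classification.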
Governance confidence depends on the boundary, not prediction accuracy. Runtime behaviour is always the source of truth.
The question isn't "is the system healthy?" — it's "did the deployment produce exactly what it was supposed to, and nothing else?"
No SDKs. No agents. No new instrumentation. Connect your existing telemetry and deployment pipelines.
We'll show you what Legible sees in your first 30 minutes — hidden dependencies, drift from recent deployments, and changes that are risky right now.
Get your Day-0 safety report →