Most AI tools are black boxes. They synthesize, you trust — or you don't. There's no way to see the evidence, audit the reasoning, or trace a claim back to its source.
Gamut is different. Every claim is source-attributed. Every score is decomposable. Every assertion is traceable to ground truth. We don't ask you to trust the AI — we give you the tools to audit it.
The Verification Firewall sits between your existing data sources and the decisions those sources inform. It doesn't replace your tools. It makes them trustworthy.
Entity data flows in from any source. Gamut verifies it against government registries, cross-validates claims across independent sources, scores confidence deterministically, and surfaces the evidence chain — so your team sees the proof, not just the conclusion.
Each layer is designed around a different architectural principle and catches a different failure mode. No single layer is assumed to be sufficient.
Additional APAC jurisdictions (Thailand, Vietnam, Malaysia, Indonesia) on the roadmap via AsiaVerify. Every new registry expands the verifiable universe across all verticals without pipeline changes.
The Verification Confidence Score is a 0–100 number computed entirely in Python. It decomposes into four observable, auditable dimensions.
Every point traces to an observable fact. No narrative. No black-box model score. When you see 73, you can decompose it in one click.
The four dimensions are architectural — they define what verification means. What's configurable is the policy on top: how much each dimension weighs, which entity attributes count toward completeness, which risk flags are active, and what severity they carry. A Scoring Template is a runtime configuration loaded from Firestore — not a code change, not a redeployment.
Same engine, different policy. Bring your own risk framework — we verify the evidence underneath it.
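The split between fixed dimensions and configurable policy can be sketched as follows. This is an illustrative sketch only: the dimension names, weights, and flag severities below are assumptions, not the production Firestore template schema.

```python
# Hypothetical sketch of a deterministic, template-driven score.
# Dimension names, weights, and flag severities are illustrative.

def compute_score(dimensions: dict, template: dict) -> int:
    """Weighted sum of 0-100 dimension scores, minus risk-flag penalties."""
    weighted = sum(
        dimensions[name] * weight
        for name, weight in template["weights"].items()
    )
    penalty = sum(
        template["flag_severity"][flag]
        for flag in dimensions.get("active_flags", [])
        if flag in template["flag_severity"]
    )
    return max(0, min(100, round(weighted - penalty)))

# One possible policy: a compliance-flavored template that weights
# registry match heavily. A different vertical swaps the template,
# never the engine.
compliance_template = {
    "weights": {"registry_match": 0.4, "cross_validation": 0.3,
                "completeness": 0.2, "recency": 0.1},
    "flag_severity": {"struck_off": 40, "filing_gap": 10},
}
dims = {"registry_match": 90, "cross_validation": 70,
        "completeness": 60, "recency": 50,
        "active_flags": ["filing_gap"]}
print(compute_score(dims, compliance_template))  # -> 64
```

Because the computation is plain arithmetic over observable inputs, the same inputs always produce the same score, and every point of the result can be traced back to a dimension value or a flag penalty.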
Each vertical is a Scoring Template — not a separate product. Every vertical uses the same verification engine, the same Sentinel monitoring, the same audit trail, and the same evidence pipeline.
Entities verified today may change tomorrow. A company gets struck off. Ownership transfers. Financial health deteriorates. Filing activity stops. If your verification is point-in-time only, you're blind between reviews.
Sentinel watches registry status, filing activity, and ownership changes across every entity on your watch list. When a material change is detected — status change, filing gap, ownership transfer — Sentinel generates a signal within hours, not months.
Every signal is written to BigQuery with the same audit trail as the initial verification. Regulators can trace not just the original score, but every subsequent change and the evidence that triggered it.
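The shape of a change signal can be sketched as a flat record that carries its own evidence fields. The field names below are assumptions for illustration, not the production BigQuery schema.

```python
# Illustrative shape of a Sentinel change signal. Field names and the
# example registry are assumptions, not the production schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class SentinelSignal:
    entity_id: str
    signal_type: str       # e.g. "status_change", "filing_gap", "ownership_transfer"
    previous_value: str
    current_value: str
    source_registry: str   # the registry record that triggered the signal
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

signal = SentinelSignal(
    entity_id="acme-ltd-gb-0001",
    signal_type="status_change",
    previous_value="active",
    current_value="struck_off",
    source_registry="companies_house",
)
# The same row shape is appended for every subsequent change, so the
# original verification and each later signal share one audit trail.
row = asdict(signal)
print(row["signal_type"])  # -> status_change
```

Keeping the signal and the original verification in the same table shape is what lets a regulator walk the full history of an entity in one query.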
Every verification query enriches your entity dataset. Verified claims, confidence scores, source attributions, and risk flags are written back to an intelligence cache after every run. The second time you encounter an entity, the pipeline starts with everything it learned from the first encounter — registry status, funding data, founder details, HQ confirmation — before running a single new search.
This is a structural advantage, not a model advantage. The AI model doesn't improve between queries. The evidence base does. Query 1,000 produces better results than query 10 because the pipeline accumulated verified evidence over 999 prior runs.
Your queries build your dataset. No competitor ran them. No competitor has it.
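The write-back loop above can be sketched in a few lines: start each run from the cached evidence, search only for the gaps, and write the result back. Function names, claim keys, and the in-memory cache are illustrative assumptions.

```python
# Minimal sketch of the write-back cache idea. Names and claim keys
# are illustrative; the point is the structure, not the schema.

cache: dict[str, dict[str, dict]] = {}   # entity_id -> claim_key -> claim

def verify(entity_id: str, wanted: list[str], search) -> dict[str, dict]:
    known = dict(cache.get(entity_id, {}))    # seed from prior runs
    for claim_key in wanted:
        if claim_key not in known:            # search only for gaps
            known[claim_key] = search(entity_id, claim_key)
    cache[entity_id] = known                  # write back for the next run
    return known

calls = []
def fake_search(entity_id, claim_key):
    calls.append(claim_key)
    return {"source": "registry", "confidence": "high"}

verify("acme", ["registry_status", "hq_country"], fake_search)             # 2 searches
verify("acme", ["registry_status", "hq_country", "founder"], fake_search)  # 1 search
print(len(calls))  # -> 3
```

The second run re-requests three claims but triggers only one new search; the other two are answered from evidence accumulated on the first run.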
When you ask "why is this entity scored at 73%?" — Gamut answers with raw evidence. Each line shows the claim, the value, the source, and the confidence tier. No narrative. No synthesis.
If a user can see exactly which sources support every claim, in one click, with no LLM involved in the render — the architecture is transparent. If they can't, it's a black box with a confidence score on top. The IC Memo synthesis layer sits on top of this evidence for the full investment story — but the first thing you see is the proof.
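An LLM-free evidence render is just string formatting over structured rows. The rows, source names, and column layout below are illustrative assumptions; the point is that nothing generative sits between the evidence and the screen.

```python
# Sketch of a deterministic evidence render: plain string formatting,
# no model in the loop. Row contents and sources are illustrative.

evidence = [
    {"claim": "registry_status", "value": "active",
     "source": "companies_house", "tier": "verified"},
    {"claim": "hq_country", "value": "GB",
     "source": "opencorporates", "tier": "corroborated"},
]

def render(rows: list[dict]) -> str:
    header = f"{'CLAIM':<18}{'VALUE':<10}{'SOURCE':<18}{'TIER'}"
    lines = [f"{r['claim']:<18}{r['value']:<10}{r['source']:<18}{r['tier']}"
             for r in rows]
    return "\n".join([header, *lines])

print(render(evidence))
```

Because the render is deterministic, the audit view is reproducible: the same evidence rows always produce the same table, byte for byte.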
Verity is a three-layer content authenticity detection pipeline, available under Apache 2.0. Verity handles trust in content. Gamut handles trust in entities.
The Gamut platform adds government registry verification, entity resolution, deterministic confidence scoring, configurable Scoring Templates, Sentinel monitoring, and the full IC Memo intelligence pipeline on top of Verity's authenticity layer.
gamutagent/verity

Filed 2025–2026, covering the core platform architecture.
Patent claims explicitly cover compliance, insurance underwriting, KYC, and government contractor verification as deployment verticals.
Glass-box systems — where every claim is sourced, every score is decomposable, every assertion is traceable to ground truth — are the only defensible position in a landscape where regulators, courts, and counterparties increasingly demand proof, not promises.
Hallucination is not a model problem. It is an architecture problem. And architecture problems have architecture solutions.
Get in Touch