
Glass-Box AI
Architecture.

Most AI tools are black boxes. They synthesize, you trust — or you don't. There's no way to see the evidence, audit the reasoning, or trace a claim back to its source.

Gamut is different. Every claim is source-attributed. Every score is decomposable. Every assertion is traceable to ground truth. We don't ask you to trust the AI — we give you the tools to audit it.


Your Data Stack Tells You What Entities Claim to Be.
Gamut Tells You What's Actually True.

The Verification Firewall sits between your existing data sources and the decisions those sources inform. It doesn't replace your tools. It makes them trustworthy.

Your existing data sources
PitchBook · D&B · CRM / MDM · Internal databases · Web data
        ↓
Gamut Verification Firewall
Registry Match · Claim Validation · Cross-Source Consistency · Confidence Scoring · Audit Trail
        ↓
Trusted regulated decisions
Investment Committee · Compliance Sign-off · Policy Binding · Counterparty Onboarding

Entity data flows in from any source. Gamut verifies it against government registries, cross-validates claims across independent sources, scores confidence deterministically, and surfaces the evidence chain — so your team sees the proof, not just the conclusion.

A Fabricated Claim Would Need to Survive All Six Layers Independently

Each layer is designed around a different architectural principle and catches a different failure mode. No single layer is assumed to be sufficient.

L1
Structured Schema Enforcement
The model cannot generate open-ended claims. Every field must be populated from a specific retrieval step. No schema slot, no claim surfaces. If the model can't find a source for a field, it surfaces as "unverified" — not as a plausible invention.
L2
Source-Provenance Trust Hierarchy
Government registry data sits at the top. Official filings, verified news, company self-reported data, and web-scraped content each carry architecturally assigned confidence tiers. The LLM does not decide confidence — the architecture assigns it based on source provenance. A web-scraped claim cannot overwrite a registry response.
L3
CAER — Claim-Attribution-Evidence-Reasoning
Every factual assertion carries an explicit attribution to a source, evidence that supports the claim, and reasoning that connects evidence to conclusion. Embedded in the generation process, not applied after the fact. If the model cannot provide attribution, the claim does not surface.
L4
Isolated Adversarial Review
A dedicated Critic agent examines outputs and probes for fabrication. It operates with its own context, isolated from the generative agents — so it cannot be influenced by the same reasoning chains that produced a potentially fabricated output.
L5
Source Attribution Chains
Full chain of custody for every data point: which agent retrieved it, from which source, at what time, through what verification steps. An analyst, regulator, or auditor can follow any claim back to its origin.
L6
Government Registry Ground Truth
Live registry checks are binary: the entity exists with these attributes, or it doesn't. No LLM can override a registry response. When the architecture includes live registry verification as a mandatory step, fabricated entities are caught at the point of generation.

Three Jurisdictions Live in Production

🇸🇬
Singapore — ACRA
Every registered company in Singapore
BigQuery-cached dataset with live API fallback. Registration status, UEN, entity type, incorporation date, registered address, officer count, SSIC classification.
Live
🇬🇧
United Kingdom — Companies House
Every active company in the United Kingdom
REST API integration. Company number, status, incorporation date, registered office, SIC codes, filing history.
Live
🇺🇸
United States — SEC EDGAR
Every SEC filer in the United States
Free API, no authentication required. CIK, filing status, ticker, exchange listing, filing recency as a proxy for active status.
Live

Additional APAC jurisdictions (Thailand, Vietnam, Malaysia, Indonesia) on the roadmap via AsiaVerify. Every new registry expands the verifiable universe across all verticals without pipeline changes.
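As an illustration of the binary registry check this layer performs, here is a sketch against the shape of SEC EDGAR's free submissions endpoint (`https://data.sec.gov/submissions/CIK##########.json`, which requires only a declared `User-Agent` header). The abbreviated payload and the `registry_check` helper are illustrative, not Gamut's integration code.

```python
import json

# Abbreviated sample in the shape of EDGAR's submissions response.
sample = json.loads("""{
  "cik": "320193",
  "name": "Apple Inc.",
  "tickers": ["AAPL"],
  "exchanges": ["Nasdaq"]
}""")

def registry_check(payload: dict, claimed_name: str) -> bool:
    """Binary ground truth: the filer exists under this name, or it does not."""
    return payload.get("name", "").lower() == claimed_name.lower()

assert registry_check(sample, "Apple Inc.")
assert not registry_check(sample, "Apple Computing Ltd")
```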

No LLM in the Scoring Path. Pure Deterministic Code.

The Verification Confidence Score is a 0–100 number computed entirely in Python. It decomposes into four observable, auditable dimensions.

Registry Match
Does a government registry confirm this entity exists with the claimed attributes?
30 pts
Source Diversity
How many independent sources corroborate the entity's claims? Diversity of source types matters more than volume.
30 pts
Cross-Source Consistency
Do independent sources agree on key attributes? Contradictions reduce the score.
25 pts
Entity Completeness
How many key attributes are verified? Missing fields reduce confidence proportionally.
15 pts
Example decomposition — score of 73
registry confirmed               +30 pts  (of 30)
three sources, limited type mix  +14 pts  (of 30)
cross-source consistency         +23 pts  (of 25)  # −2: employee count conflict
entity completeness               +6 pts  (of 15)  # −9: beneficial ownership unverified
──────────────────────────────
total                             73 / 100

Every point traces to an observable fact. No narrative. No black-box model score. When you see 73, you can decompose it in one click.
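A deterministic scorer over the four dimensions might look like the sketch below. The 30/30/25/15 weights come from the table above; the sub-formulas inside each dimension (points per distinct source type, direct contradiction penalty, proportional completeness) are assumptions for illustration.

```python
def confidence_score(registry_confirmed: bool,
                     distinct_source_types: int,
                     contradiction_penalty: int,
                     verified_fields: int,
                     total_fields: int) -> int:
    """Deterministic 0-100 Verification Confidence Score.
    No model call anywhere in this path: same inputs, same score."""
    registry = 30 if registry_confirmed else 0                 # Registry Match
    diversity = min(distinct_source_types * 10, 30)            # Source Diversity
    consistency = max(25 - contradiction_penalty, 0)           # Cross-Source Consistency
    completeness = round(15 * verified_fields / total_fields)  # Entity Completeness
    return registry + diversity + consistency + completeness

assert confidence_score(True, 2, 2, 4, 5) == 85
```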

Configurable Scoring Policy

The four dimensions are architectural — they define what verification means. What's configurable is the policy on top: how much each dimension weighs, which entity attributes count toward completeness, which risk flags are active, and what severity they carry. A Scoring Template is a runtime configuration loaded from Firestore — not a code change, not a redeployment.

PE/VC
Registry match + source diversity weighted highest. Funding verification, registry status confirmation.
Compliance
Registry match + cross-source consistency highest. Deregistration alerts, filing recency, sanctions flags.
Insurance
Entity completeness + filing recency highest. Incorporation stability, ownership continuity, financial health.

Same engine, different policy. Bring your own risk framework — we verify the evidence underneath it.
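A Scoring Template can be pictured as a plain configuration document loaded at runtime. The field names and weightings below are hypothetical, not the production Firestore schema:

```python
# Hypothetical Scoring Template documents -- selecting one changes weights
# and flags, never the verification code path.
TEMPLATES = {
    "pe_vc": {
        "weights": {"registry_match": 30, "source_diversity": 30,
                    "cross_source_consistency": 25, "entity_completeness": 15},
        "completeness_fields": ["funding_rounds", "founders", "registry_status"],
        "risk_flags": {"unverified_funding": "flag"},
    },
    "compliance": {
        "weights": {"registry_match": 35, "source_diversity": 15,
                    "cross_source_consistency": 35, "entity_completeness": 15},
        "completeness_fields": ["registry_status", "filing_recency", "sanctions"],
        "risk_flags": {"deregistered": "block", "sanctions_hit": "block",
                       "stale_filings": "flag"},
    },
}

def load_template(vertical: str) -> dict:
    """Stand-in for a Firestore read: policy is data, not code."""
    return TEMPLATES[vertical]

# Every template's weights still sum to 100 points.
assert sum(load_template("compliance")["weights"].values()) == 100
```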

Three Verticals, One Engine

Each vertical is a Scoring Template — not a separate product. Every vertical uses the same verification engine, the same Sentinel monitoring, the same audit trail, and the same evidence pipeline.

PE/VC Due Diligence
Investment Intelligence
Startup verification for investment decisions. Funding verification, founder validation, registry status confirmation. IC Memo output with source-attributed evidence chains. The proof-of-concept vertical — deployed and validated.
Deployed & validated
Compliance & Third-Party Risk
Continuous Monitoring
Vendor and counterparty verification for regulated environments. Sanctions screening weight, filing recency emphasis, deregistration alerts. BLOCK/FLAG/PASS verdict with audit-ready evidence. Scoring Template configured for MAS, FCA, OCC, or any regulatory regime.
Scoring Template live
Insurance Underwriting
Risk Assessment
Policyholder risk assessment at binding and renewal. Incorporation stability, ownership continuity, financial health scoring. Four sub-dimension risk scorecard with binding snapshot export. Risk flags map directly to underwriting categories — loss prevention, not loss processing.
Scoring Template live

Verification at a Point in Time Is Not Enough

Entities verified today may change tomorrow. A company gets struck off. Ownership transfers. Financial health deteriorates. Filing activity stops. If your verification is point-in-time only, you're blind between reviews.

Sentinel watches registry status, filing activity, and ownership changes across every entity on your watch list. When a material change is detected — status change, filing gap, ownership transfer — Sentinel generates a signal within hours, not months.

Every signal is written to BigQuery with the same audit trail as the initial verification. Regulators can trace not just the original score, but every subsequent change and the evidence that triggered it.

Registry Status Change
Entity struck off, dissolved, or status downgraded. Alert generated within hours of registry update.
Filing Gap Detected
Expected filing activity absent. Compliance flag raised, evidence snapshot captured.
Ownership Transfer
Director or beneficial owner change detected. Full before/after evidence snapshot written to audit trail.
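The three signal types above reduce to a diff over watched fields with a before/after snapshot. A minimal sketch, assuming a flat entity record; the real Sentinel schema and BigQuery sink are not shown:

```python
from datetime import datetime, timezone

WATCHED = {"registry_status", "last_filing_date", "officers"}

def detect_signals(previous: dict, current: dict) -> list[dict]:
    """Emit one signal per watched field that changed between snapshots."""
    signals = []
    for field in WATCHED:
        if previous.get(field) != current.get(field):
            signals.append({
                "field": field,
                "before": previous.get(field),  # before/after evidence snapshot
                "after": current.get(field),
                "detected_at": datetime.now(timezone.utc).isoformat(),
            })
    return signals

before = {"registry_status": "Registered", "last_filing_date": "2025-06-30",
          "officers": ["A. Tan"]}
after_ = {"registry_status": "Struck Off", "last_filing_date": "2025-06-30",
          "officers": ["A. Tan"]}
assert [s["field"] for s in detect_signals(before, after_)] == ["registry_status"]
```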

The Pipeline Gets Smarter With Every Query

Every verification query enriches your entity dataset. Verified claims, confidence scores, source attributions, and risk flags are written back to an intelligence cache after every run. The second time you encounter an entity, the pipeline starts with everything it learned from the first encounter — registry status, funding data, founder details, HQ confirmation — before running a single new search.

This is a structural advantage, not a model advantage. The AI model doesn't improve between queries. The evidence base does. Query 1,000 produces better results than query 10 because the pipeline accumulated verified evidence over 999 prior runs.

Your queries build your dataset. No competitor ran them. No competitor has it.
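The enrichment loop sketches as a read-merge-write-back cycle. The in-memory dict and the `pipeline` stub below are stand-ins for the real cache and retrieval steps:

```python
CACHE: dict[str, dict] = {}

def verify(entity_id: str, run_pipeline) -> dict:
    prior = CACHE.get(entity_id, {})        # start from everything learned before
    result = run_pipeline(entity_id, prior)
    CACHE[entity_id] = {**prior, **result}  # write verified claims back
    return CACHE[entity_id]

def pipeline(entity_id, prior):
    # Second encounter: registry status is already known, so that search is skipped.
    if "registry_status" in prior:
        return {"hq": "Singapore"}
    return {"registry_status": "Registered"}

verify("acme", pipeline)
assert verify("acme", pipeline) == {"registry_status": "Registered", "hq": "Singapore"}
```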

The User Sees the Evidence, Not Just the Conclusion

When you ask "why is this entity scored at 73%?", Gamut answers with raw evidence: each line shows the claim, the value, the source, and the confidence tier. No narrative. No synthesis.

Registry: ACRA Registered (UEN 201912345A) — verified, confidence 1.0
Funding: $10M Series A, Vertex Ventures (Mar 2026) — 1 source, confidence 0.7
HQ: Singapore — 2 sources, confidence 0.9
Employees: Not verified
Founded: 2019 — registry, confidence 1.0
The glass-box test

If a user can see exactly which sources support every claim, in one click, with no LLM involved in the render — the architecture is transparent. If they can't, it's a black box with a confidence score on top. The IC Memo synthesis layer sits on top of this evidence for the full investment story — but the first thing you see is the proof.
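Rendering that evidence view is pure string formatting over verified records; no model sits in the render path. A sketch with an assumed record shape:

```python
def render_evidence(records: list[dict]) -> list[str]:
    """Format evidence lines directly from stored records -- no LLM in the render."""
    lines = []
    for r in records:
        if not r.get("verified"):
            lines.append(f"{r['claim']}: Not verified")
            continue
        lines.append(f"{r['claim']}: {r['value']} — {r['sources']} source(s), "
                     f"confidence {r['confidence']}")
    return lines

records = [
    {"claim": "HQ", "value": "Singapore", "sources": 2, "confidence": 0.9,
     "verified": True},
    {"claim": "Employees", "verified": False},
]
assert render_evidence(records)[1] == "Employees: Not verified"
```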

Open Source

Verity — Trust, but Verify.
Verity Handles the Trust.

Verity is a three-layer content authenticity detection pipeline, available under Apache 2.0. Verity handles trust in content. Gamut handles trust in entities.

The Gamut platform adds government registry verification, entity resolution, deterministic confidence scoring, configurable Scoring Templates, Sentinel monitoring, and the full IC Memo intelligence pipeline on top of Verity's authenticity layer.

gamutagent/verity
Layer 1 — Source Credibility
Scores the reliability of content sources. YAML-driven domain trust tiers. Reuters scores differently than a PR wire. Zero API cost.
Layer 2 — Relevance Filtering
Ensures only topically relevant content enters the pipeline. 7 deterministic Python checks: AI hedge phrases, uniform sentence length, missing bylines, clickbait patterns.
Layer 3 — Authenticity Detection (optional)
Identifies synthetic, manipulated, or low-integrity content. One API call per article. Uses your existing scoring key. Disable for high-volume runs.
Intellectual Property

Three U.S. Provisional Patents Filed

Filed 2025–2026, covering the core platform architecture.

Config-driven verification pipeline instantiation via shell architecture without code changes
Deterministic confidence scoring using pluggable scoring templates with external signal ingestion
Continuous entity monitoring through compounding enrichment loops

Patent claims explicitly cover compliance, insurance underwriting, KYC, and government contractor verification as deployment verticals.

The Question Isn't Whether to Use AI.
It's Whether Your AI Can Survive an Audit.

Glass-box systems — where every claim is sourced, every score is decomposable, every assertion is traceable to ground truth — are the only defensible position in a landscape where regulators, courts, and counterparties increasingly demand proof, not promises.

Hallucination is not a model problem. It is an architecture problem. And architecture problems have architecture solutions.

Get in Touch