1 · Source ingestion

Every input is fingerprinted before it enters the engine. We accept primary sources only — filings, transcripts, registry records, and first-party documents — plus a curated alt-data layer with explicit provenance.
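Fingerprinting can be sketched as a content hash over the raw bytes plus the provenance record, so identical bytes arriving from different sources still get distinct fingerprints. This is a minimal illustration, not the engine's actual scheme: the SHA-256 choice, the `fingerprint` function, and the provenance fields are all assumptions.

```python
import hashlib
import json

def fingerprint(raw_bytes: bytes, provenance: dict) -> str:
    """Hypothetical sketch: hash the document bytes together with a
    canonicalized provenance record, so the same filing retrieved from
    a different source yields a different fingerprint."""
    h = hashlib.sha256()
    h.update(raw_bytes)
    h.update(json.dumps(provenance, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

fp = fingerprint(b"10-K filing body...", {"source": "EDGAR", "retrieved": "2024-05-01"})
```

The same input always produces the same fingerprint, which is what lets downstream figures be traced back to exact source bytes.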

2 · Deterministic calculation

Models do not do final arithmetic. Every financial figure on every published page is computed in audited code from the underlying primary source. If the input changes, the figure changes; if the calculation changes, it’s versioned and visible.
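The shape of a deterministic, versioned calculation can be sketched as plain audited arithmetic that carries its own version tag. The metric (free cash flow), the `CALC_VERSION` tag, and the function name are illustrative assumptions, not the engine's actual code.

```python
CALC_VERSION = "fcf-v2"  # hypothetical version tag; bumped whenever the formula changes

def free_cash_flow(operating_cash: float, capex: float) -> dict:
    """Deterministic arithmetic: the same inputs always yield the same
    figure, and the published figure carries the calculation version
    so any change to the formula is visible."""
    return {
        "value": operating_cash - capex,
        "calc_version": CALC_VERSION,
    }

figure = free_cash_flow(100.0, 30.0)
```

If the underlying filing changes, the inputs change and the figure follows; if the formula changes, `CALC_VERSION` changes and the difference is auditable.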

3 · Adversarial review

Bull case, bear case, and a data-quality gate run against every memo before publication. Coverage gaps, missing citations, or weak source bases halt the pipeline and escalate to an editor.

4 · Editorial gate

A named editor signs every published note. Updates are versioned in place — never silently rewritten. Post-mortems on closed positions are first-class artifacts, not buried.

5 · AI assist (read this)

AI surfaces (briefs, ask, scavenger) operate under the same hard rules as published memos — with one extra constraint: the model never produces numbers. Every numeric claim in an AI-generated paragraph is a placeholder the renderer expands from the deterministic metric layer. The model is an assistant, not a source.
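The placeholder mechanism can be sketched as a renderer that substitutes metric tokens from the deterministic layer into model-composed text. The `{{metric:...}}` syntax and the `render` function are illustrative assumptions; the point is that every number originates in the metric layer, never in the model's output.

```python
import re

# Hypothetical metric layer: figures computed deterministically upstream.
METRICS = {"revenue_ttm": "$4.2B", "fcf_margin": "18%"}

PLACEHOLDER = re.compile(r"\{\{metric:(\w+)\}\}")

def render(paragraph: str, metrics: dict) -> str:
    """Expand metric placeholders; the model never emitted the numbers."""
    return PLACEHOLDER.sub(lambda m: metrics[m.group(1)], paragraph)

text = render(
    "Revenue reached {{metric:revenue_ttm}} with an FCF margin of {{metric:fcf_margin}}.",
    METRICS,
)
```

A model paragraph containing a raw digit instead of a placeholder fails validation before it ever reaches the renderer.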

How confidence is computed

Confidence is observed, not learned. The seeded adapter rates an output high when at least three indexed sources and four deterministic metrics back the company; medium when the backing falls short of that bar; and low when only a single source backs the brief. A low rating gates the body of the AI brief behind an explicit “show anyway” click. A live LLM adapter overrides this with the model’s self-reported confidence when available, but the seeded floor still applies.
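The seeded rule above can be written down directly; the thresholds come from the text, while the function names are illustrative.

```python
def seeded_confidence(num_sources: int, num_metrics: int) -> str:
    """Seeded adapter's observed rule: high needs at least three indexed
    sources AND four deterministic metrics; a single source is low;
    anything in between is medium."""
    if num_sources >= 3 and num_metrics >= 4:
        return "high"
    if num_sources <= 1:
        return "low"
    return "medium"

def gate_brief(confidence: str) -> bool:
    """A low rating hides the brief body behind a 'show anyway' click."""
    return confidence == "low"
```

How a live adapter's self-reported confidence combines with this floor is not specified beyond "the seeded floor still applies", so the override logic is deliberately omitted here.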

What the flags mean

  • single-source — only one indexed source backs every claim. Read the citations carefully.
  • low-density — fewer than three deterministic metrics computed. The brief is necessarily thin.
  • uncertain — top retrieval hit was a weak match against the question. Cross-check.
  • stale — newest cited source is more than 90 days old. Re-run scavenger before relying.
  • off-topic — the question wandered outside what’s indexed for this name.
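The flag definitions above translate into simple threshold checks. The source-count, metric-count, and 90-day thresholds come from the text; the retrieval match-score cutoff (0.5) and the function signature are illustrative assumptions.

```python
from datetime import date

def compute_flags(num_sources: int, num_metrics: int, top_match_score: float,
                  newest_source_date: date, today: date, on_topic: bool) -> list:
    """Apply the flag rules described above to one AI answer."""
    flags = []
    if num_sources == 1:
        flags.append("single-source")   # only one indexed source backs every claim
    if num_metrics < 3:
        flags.append("low-density")     # fewer than three deterministic metrics
    if top_match_score < 0.5:           # assumed cutoff for a "weak match"
        flags.append("uncertain")
    if (today - newest_source_date).days > 90:
        flags.append("stale")           # newest cited source older than 90 days
    if not on_topic:
        flags.append("off-topic")
    return flags
```

A fully backed, fresh, on-topic answer carries no flags at all.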

Quoted vs composed

Paragraphs the model composed must obey the no-raw-numbers rule. Paragraphs quoted verbatim from primary sources may carry numbers — the citation chain stands behind them. Quoted paragraphs render in italics with a left border.

When the assistant refuses

The assistant refuses when no indexed source resolves the question, when the question is empty or too long, or when the model output fails the validator (raw number, hallucinated source ID, schema violation). Refusal is preferred over a confident-sounding but unsupported answer.
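The validator's two structural checks — no raw digits in composed paragraphs, no citation of a source ID outside the index — can be sketched as below. The paragraph schema, the digit test, and the `KNOWN_SOURCES` set are illustrative assumptions about how such a validator might be laid out.

```python
import re

RAW_NUMBER = re.compile(r"\d")
KNOWN_SOURCES = {"src-001", "src-002"}  # hypothetical indexed source IDs

def validate(paragraphs: list) -> bool:
    """Return False (refuse) if any composed paragraph carries a raw
    digit, or if any paragraph cites a source ID not in the index.
    Quoted paragraphs may carry numbers; their citation chain stands
    behind them."""
    for p in paragraphs:
        if p["kind"] == "composed" and RAW_NUMBER.search(p["text"]):
            return False
        if any(sid not in KNOWN_SOURCES for sid in p.get("sources", [])):
            return False
    return True
```

Composed paragraphs are expected to carry metric placeholders rather than digits, so this check and the placeholder renderer are two sides of the same rule.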

What this means in practice

  • Every figure on every page traces to a primary source.
  • Every published note has a citation ledger you can audit.
  • Every published note has a version history.
  • Closed and stopped-out positions are public.
  • AI surfaces obey the same citation rules; the model is an assistant, not a source.