Methodology
The trust infrastructure around the AI.
Every competitor will use AI. The advantage isn't the model — it's what the model has to pass before its output reaches you. This page documents the four pillars of that infrastructure: deterministic kill gates, public calibration tracking, audit-grade source provenance, and an architecturally enforced Chinese Wall between the ranking engine and broker economics.
Four pillars of the trust infrastructure
How the ranking is computed
Each tracked company is scored on six structural criteria, then a momentum overlay (capped at 2.0×) is applied from operational signals. Final score = base × momentum; momentum signals are tracked over a 90-day rolling window and down-weighted with a 60-day exponential decay. The criteria are deliberately structural, not narrative: they're the things you'd ask about a company on a partner-meeting dial-in.
- Time since last financing. Stale rounds are penalized; recent priced rounds are rewarded.
- Top-decile VC investor presence. Weighted by fund vintage performance, not name recognition.
- Serial-founder status. Prior outcomes matter; first-time founders are not penalized but receive less weight here.
- Senior management technical credentials. Sourced from public bios, patents, conference appearances — not LinkedIn (constitutional prohibition).
- Senior management commercial credentials. Documented revenue ownership at prior roles, named customer wins, GTM experience at scale.
- Partner-level VC attribution. Specific partner, not just firm — partner outcomes vary meaningfully within a single fund.
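The base-and-overlay arithmetic above can be sketched in a few lines. Only the base × capped-momentum product is taken from this page; the criterion keys, the equal weights, and the 0-to-1 per-criterion scale are illustrative assumptions, not the production values.

```python
# Hypothetical criterion weights -- illustrative placeholders, not production values.
CRITERIA_WEIGHTS = {
    "time_since_last_financing": 1.0,
    "top_decile_vc_presence": 1.0,
    "serial_founder_status": 1.0,
    "technical_credentials": 1.0,
    "commercial_credentials": 1.0,
    "partner_attribution": 1.0,
}

def base_score(criterion_scores: dict[str, float]) -> float:
    """Weighted sum of the six structural criteria (each assumed scored 0..1)."""
    return sum(CRITERIA_WEIGHTS[name] * criterion_scores.get(name, 0.0)
               for name in CRITERIA_WEIGHTS)

def final_score(base: float, momentum_multiplier: float) -> float:
    """Final score = base x momentum, with the momentum overlay capped at 2.0x."""
    return base * min(momentum_multiplier, 2.0)
```

The cap is applied before multiplication, so a runaway overlay can at most double the structural base.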
Momentum overlay
On top of the base score, we compute a momentum multiplier from operational signals: GitHub commit and release velocity, HuggingFace model releases, public benchmark performance, disclosable compute-infrastructure contracts, and ATS-feed hiring velocity. Sources are public or licensed (Revelio Labs, ATS-feed providers); we do not scrape LinkedIn.
The multiplier is capped at 2.0× to prevent a single hot signal from dominating the ranking. The 60-day exponential decay means a month-old GitHub burst carries roughly 60% of the weight it had the day it landed, and a two-month-old burst about a third.
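A minimal sketch of the decay-and-cap mechanics. Two assumptions are baked in, since the production formula isn't published here: that "60-day exponential decay" names an exponential time constant, and that the multiplier starts at a neutral 1.0 and accumulates decayed signal contributions.

```python
import math

DECAY_TIME_CONSTANT_DAYS = 60.0  # assumption: one reading of "60-day exponential decay"
MOMENTUM_CAP = 2.0               # from the page: overlay capped at 2.0x

def signal_weight(age_days: float) -> float:
    """Exponential down-weighting: older signals contribute less."""
    return math.exp(-age_days / DECAY_TIME_CONSTANT_DAYS)

def momentum_multiplier(signals: list[tuple[float, float]]) -> float:
    """signals: (raw_strength, age_days) pairs from GitHub, HuggingFace,
    benchmarks, compute contracts, ATS hiring feeds. Neutral baseline 1.0
    is an assumption; the cap is not."""
    raw = 1.0 + sum(strength * signal_weight(age) for strength, age in signals)
    return min(raw, MOMENTUM_CAP)
```

Under the time-constant reading, a 30-day-old signal keeps exp(-0.5) ≈ 0.61 of its day-one weight.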
The calibration loop
Every IC memo we ship records its predictions — IRR, MOIC, exit multiple, revenue growth band, margin trajectory — with a resolution horizon. When the horizon arrives, an independent grader pulls the realized data and compares. The result feeds two things: the public calibration retrospectives (so customers can see how our predictions in their sectors have actually played out) and the prompt-revision proposals that ship through our internal adaptation pipeline.
Some predictions resolve fast: revenue growth at the next reporting cycle, margin expansion at the next 10-Q, financing rounds within two quarters. Some resolve slowly: IRR resolves only at exit. We surface interim signals where available rather than waiting for terminal resolution, and we're explicit about which is which.
The point is not to be right every time. The point is to know how right we are, sector by sector, and to publish that.
How the Chinese Wall is enforced
For a customer base whose entire job is sniffing out misaligned incentives, "trust us" is not enough. The wall is enforced in three layers:
1. Architectural isolation
The ranking engine and the IOI workflow live in separate code paths with separate data stores and separate read permissions. The ranking job has no credentials to read the IOI tables and no network path to the broker integration. A constitutional code-level prohibition prevents IOI economics from feeding back into ranking scores under any circumstance, and is enforced statically by a checker that fails the build on any forbidden import. The technical contract — module boundaries, the single allowed bridge, and the enforcement layers — is documented in docs/CHINESE_WALL.md in the public source.
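A build-failing import checker of the kind described can be sketched with Python's `ast` module. The module names in `FORBIDDEN` are hypothetical placeholders for the IOI/broker side; the real module boundary and allowed bridge are documented in docs/CHINESE_WALL.md.

```python
import ast
import sys
from pathlib import Path

# Hypothetical names for the IOI/broker side -- placeholders, not the real modules.
FORBIDDEN = {"ioi", "broker_integration"}

def forbidden_imports(source: str) -> list[str]:
    """Return any imports of forbidden top-level packages found in the source."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names
                     if a.name.split(".")[0] in FORBIDDEN]
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in FORBIDDEN:
                hits.append(node.module)
    return hits

def check_tree(root: str) -> None:
    """Scan the ranking engine's source tree; exit non-zero (failing the
    build) if any module imports from the IOI/broker side."""
    violations = {str(p): hits for p in Path(root).rglob("*.py")
                  if (hits := forbidden_imports(p.read_text()))}
    if violations:
        sys.exit(f"Chinese Wall violation: {violations}")
```

Run as a CI step, `check_tree` turns the prohibition from policy into a property of every merged commit.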
2. Broker structure
TrancheBook is a financial publisher and research platform — not a broker-dealer. Indications of interest are routed to our registered broker-dealer partner (Rainmaker Securities) on a flat-fee basis. We are paid for routing, not for deal outcome — so the ranking has nothing to gain from boosting a company we expect to transact.
3. Public commitment
The prohibition is published here — in plain language, on the methodology page. If the wall ever weakens, this page must weaken with it. Customers reading along will notice. That public-commitment cost is the point.
Read the calibration retrospectives
Methodology is words. Calibration retrospectives are the receipts. We publish them monthly on the blog.