ExitValue.ai

METHODOLOGY

How ExitValue.ai estimates business value

The engine produces a defensible valuation range for a private business by selecting the methodology used by actual buyers in that vertical, pulling stratified medians from a database of 25,592 real M&A transactions, and applying calibrated adjustments for risk and growth signals.

Quick answers

How does ExitValue.ai calculate a valuation?
Classifies the business into one of 107 sub-verticals, selects the methodology actually used by buyers in that market (EBITDA, SDE, Revenue, % of Revenue, or Asset), pulls stratified medians of recent M&A deals at the matching size bracket, and applies multiplicative adjustments for owner dependency, customer concentration, recurring revenue, growth, margin quality, and deal structure.
What data sources?
S&P Capital IQ for the historical backbone, SEC EDGAR 8-K and S-4 filings (auto-ingested daily), and verified press releases. 25,592 unique deduplicated transactions. No estimated or imputed values.
How accurate?
On a 1,075-deal funnel-realistic sample: median absolute % error 47%, R2 0.64. The EBITDA path is strongest (MAPE 31%, R2 0.85). Each result surfaces the typical per-deal error band for that specific (sub-vertical × size) cell.

The data layer

The transaction database holds 25,592 unique completed M&A deals from 1977 to 2026, deduplicated across multiple source feeds. 13,896 transactions (54.3%) have disclosed EV/EBITDA and 24,758 (96.7%) have disclosed EV/Revenue. Sources: S&P Capital IQ for the historical backbone, SEC EDGAR for newly disclosed deals (8-K and S-4 filings auto-ingested through a daily pipeline), and verified press releases. The data layer is strictly factual — no estimated, modeled, or imputed values. Estimation only happens in the calculation engine, never in the raw record.

Deals are classified into one of 107 sub-verticals across 17 industries. For each (sub-vertical × size bracket) cell, the engine computes stratified medians, p25, and p75 from the most recent (post-2018) cohort with at least 7 disclosed deals; it falls back to all-time data when the recent cohort is too thin.
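The stratified-median step can be sketched as follows. This is a minimal illustration, not the engine's implementation: the deal-record fields, the helper name, and the bracket keys are hypothetical; the 7-deal threshold and post-2018 cohort rule come from the text.

```python
from statistics import median, quantiles

MIN_RECENT_DEALS = 7  # minimum disclosed deals in the post-2018 cohort

def cell_stats(deals, sub_vertical, size_bracket):
    """Median / p25 / p75 for one (sub-vertical x size bracket) cell,
    preferring the recent (post-2018) cohort, falling back to all-time."""
    cell = [d for d in deals
            if d["sub_vertical"] == sub_vertical
            and d["size_bracket"] == size_bracket
            and d["multiple"] is not None]          # disclosed values only
    recent = [d for d in cell if d["year"] > 2018]
    cohort = recent if len(recent) >= MIN_RECENT_DEALS else cell
    values = sorted(d["multiple"] for d in cohort)
    p25, _, p75 = quantiles(values, n=4)
    return {"median": median(values), "p25": p25, "p75": p75,
            "n": len(values),
            "cohort": "recent" if cohort is recent else "all-time"}
```

The fallback keeps thin cells usable: a cell with only a handful of post-2018 deals widens to its all-time history rather than returning noise.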

Methodology selection

Different industries trade on different valuation conventions. The engine selects the right one per sub-vertical:

  • Dental practices, optometry, primary care, vet: percent of trailing-12-month collections (60-85% typical range), with a practice-based clamp on adjustments
  • SaaS, software, fintech, healthtech, cybersecurity: EV/Revenue (3-15x depending on growth and recurring mix)
  • HVAC, plumbing, electrical, MSP, pest control: EV/EBITDA with recurring-revenue adjustment
  • Restaurants, salons, Main Street services: SDE multiple (1.5-4x typical)
  • Insurance agencies: multiple of annual commissions
  • Asset-heavy industrials (manufacturing, distribution): blended asset + EBITDA
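The selection logic above amounts to a dispatch table keyed by sub-vertical, with industry-level fallbacks. The keys and enum strings below are illustrative, not the engine's actual identifiers:

```python
# Illustrative methodology dispatch; keys and values are hypothetical.
METHODOLOGY = {
    "dental":        "pct_of_collections",
    "saas":          "ev_revenue",
    "hvac":          "ev_ebitda",
    "restaurant":    "sde_multiple",
    "insurance":     "commission_multiple",
    "manufacturing": "asset_ebitda_blend",
}

def select_methodology(sub_vertical, parent_industry=None):
    """Pick the buyer convention for a sub-vertical, falling back to
    the parent industry's convention, then to EV/EBITDA."""
    return (METHODOLOGY.get(sub_vertical)
            or METHODOLOGY.get(parent_industry)
            or "ev_ebitda")
```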

When an EBITDA-method business has an unusually thin margin (below 5%), the engine auto-switches to revenue methodology with a thin-margin band — this reflects how buyers actually price distressed-margin operators. The switch is suppressed for high-multiple verticals (SaaS, software, cybersecurity), where low margin is normal investment-stage behavior.
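The switch rule is simple to state in code. The 5% threshold and the suppressed verticals come from the text; the function and return-value names are hypothetical:

```python
# Sketch of the thin-margin auto-switch; identifiers are illustrative.
HIGH_MULTIPLE_VERTICALS = {"saas", "software", "cybersecurity"}
THIN_MARGIN = 0.05  # below 5% EBITDA margin, buyers price on revenue

def effective_methodology(methodology, sub_vertical, ebitda, revenue):
    margin = ebitda / revenue if revenue else 0.0
    if (methodology == "ev_ebitda"
            and margin < THIN_MARGIN
            and sub_vertical not in HIGH_MULTIPLE_VERTICALS):
        return "ev_revenue_thin_margin"
    return methodology
```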

Source blending

The base multiple is a weighted blend of: direct-bracket medians from the transaction database (highest weight when the cell has 7+ deals); published sub-vertical benchmarks; adjacent-bracket medians (one step down for SMB conservatism); fallback medians from the parent industry; and Damodaran-style public-company comps adjusted for size and liquidity. When a registered business-type segmentation applies (DSO vs solo dental, corporate vs independent ASC, growth-stage vs bootstrapped SaaS), the type-specific multiples dominate the blend.
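The blend itself is a weighted average over whichever sources are available. The weights below are invented for illustration; the engine's actual weights, and the rule that type-specific multiples dominate, are calibrated and not shown here:

```python
# Minimal weighted-blend sketch; weights are illustrative only.
def blend_base_multiple(sources):
    """sources: list of (multiple, weight) pairs; missing sources are skipped
    and the remaining weights renormalize automatically."""
    avail = [(m, w) for m, w in sources if m is not None]
    total = sum(w for _, w in avail)
    return sum(m * w for m, w in avail) / total

base = blend_base_multiple([
    (4.2, 0.50),   # direct-bracket median (cell has 7+ deals)
    (4.8, 0.20),   # published sub-vertical benchmark
    (3.9, 0.15),   # adjacent bracket, one step down (SMB conservatism)
    (4.5, 0.10),   # parent-industry fallback median
    (5.6, 0.05),   # size/liquidity-adjusted public-company comps
])
```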

Adjustments (multiplicative)

On top of the base multiple, the engine applies adjustment factors that compound multiplicatively, clamped overall to a defensible range:

  • Owner dependency — minimal: +15%, important: -5%, critical: -20% to -30%
  • Customer concentration — graduated penalty topping out at -35% for B2B at 100% single-customer
  • Recurring revenue mix — super-linear premium curve: 30% recurring → +13%, 70%+ → +25%
  • Growth trend — declining: -15%, flat: 0, growing: +5-10%, rapid (30%+ YoY): +20-30%
  • Margin quality — premium for sustained 40%+ EBITDA margin
  • Years in business — small maturity premium for 10+ year operators
  • Deal structure — earnout exposure compresses upfront; clean stock sale neutral
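Compounding works as a product of per-factor multipliers, with a clamp on the overall effect. The factor values below mirror the listed examples; the clamp bounds (here ±40%) are an assumption, not the engine's actual range:

```python
# Multiplicative adjustment sketch; clamp bounds are illustrative.
def adjusted_multiple(base, factors, lo=0.60, hi=1.40):
    combined = 1.0
    for f in factors:
        combined *= f
    combined = max(lo, min(hi, combined))  # clamp the overall effect
    return base * combined

m = adjusted_multiple(
    base=4.0,
    factors=[
        0.80,   # critical owner dependency: -20%
        0.90,   # moderate customer concentration
        1.25,   # 70%+ recurring revenue: +25%
        1.10,   # growing: +10%
    ],
)
```

Because the factors multiply rather than add, a business with several moderate penalties compresses faster than any single headline number suggests, which is why the overall clamp matters.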

Sanity checks

The output passes through a sanity layer that caps multiples by methodology (e.g., a revenue cap of 5x for non-tech and 15x for high-multiple verticals; an EBITDA cap of 22x non-tech, 25x tech), enforces valuation floors and ceilings relative to revenue (0.15x to 8x), and runs an EBITDA-vs-Revenue cross-check that blends the two when they diverge by more than 1.7x.
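The cross-check step can be sketched as below. The 1.7x divergence trigger comes from the text; the even 50/50 blend weight is an assumption:

```python
# EBITDA-vs-Revenue cross-check sketch; blend weight is illustrative.
DIVERGENCE_TRIGGER = 1.7

def cross_check(ev_from_ebitda, ev_from_revenue):
    hi = max(ev_from_ebitda, ev_from_revenue)
    lo = min(ev_from_ebitda, ev_from_revenue)
    if hi / lo > DIVERGENCE_TRIGGER:
        return (ev_from_ebitda + ev_from_revenue) / 2  # blend when divergent
    return ev_from_ebitda  # primary path stands
```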

Validation

Three-layer validation runs nightly:

  • Twelve mathematical invariants on engine output — ordering, clamping, methodology-bound enforcement, NaN rejection
  • Seventy-nine probe regression cases against operator-curated business profiles with expected output bands; any change that pushes a probe out of band gets reverted
  • Thirty-eight public-company controls across three size bands ($100M-500M, $500M-2B, $2B+) verifying the engine's private-acquisition output sits appropriately above public trading multiples (20-30% control premium)
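A few of the first layer's mathematical invariants, written as plain checks. The result fields and function name are hypothetical; the invariants themselves (ordering, positivity, NaN rejection, methodology-bound enforcement) are the ones named above:

```python
# Illustrative invariant checks on engine output; field names are hypothetical.
import math

def check_invariants(result):
    low, mid, high = result["low"], result["mid"], result["high"]
    assert all(math.isfinite(v) for v in (low, mid, high)), "NaN/inf rejected"
    assert low <= mid <= high, "range ordering"
    assert low > 0, "positive valuations only"
    assert result["multiple"] <= result["methodology_cap"], \
        "methodology-bound enforcement"
    return True
```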

Honest precision

On a sample of 1,075 funnel-realistic transactions (revenue $500K-$25M, deals post-2018, services and consumer clusters), the engine achieves:

  • Median absolute % error: 47%
  • Within ±20% of actual: 24% of deals
  • Within ±30% of actual: 34% of deals
  • R2 (log space, vs actual EV): 0.64

Per-methodology breakdown: EBITDA path (n=274) MAPE 31%, R2 0.85 (strong); Revenue path (n=209) MAPE 51%, R2 0.66; Percent of Revenue (n=71) MAPE 62%, R2 0.56; Negative-earnings Revenue (n=521) MAPE 52%, R2 0.46.
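The headline metrics above are standard and easy to reproduce: median absolute percentage error against actual EV, and R2 computed in log space (so a 2x miss on a $1M deal and a $20M deal weigh equally). The toy data in the test is synthetic, not the validation sample:

```python
# Median absolute % error and log-space R^2, as used in the metrics above.
import math
from statistics import median

def median_ape(predicted, actual):
    """Median of per-deal absolute percentage errors."""
    return median(abs(p - a) / a for p, a in zip(predicted, actual))

def r2_log(predicted, actual):
    """Coefficient of determination on log(EV)."""
    lp = [math.log(p) for p in predicted]
    la = [math.log(a) for a in actual]
    mean_a = sum(la) / len(la)
    ss_res = sum((a - p) ** 2 for p, a in zip(lp, la))
    ss_tot = sum((a - mean_a) ** 2 for a in la)
    return 1 - ss_res / ss_tot
```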

Each /results page surfaces the typical per-deal error band specific to the user's (sub-vertical × size bracket) cell, computed from actual engine performance against historical deals at that exact size and industry. This is the user-facing honest accuracy number — not an aspirational confidence claim.

Continuous calibration

Four feedback loops keep the engine improving without per-session rebuilds:

  • Probe regression suite — operator-curated business profiles with expected output bands, replayed nightly
  • Closed-deal feedback — when a funnel deal actually closes, actual price gets compared to the engine's original output, feeding back into per-cell error rates
  • Quarterly drift detection — every 90 days the engine configuration is checked against the latest population medians per (sub-vertical × bracket); drift exceeding thresholds triggers retuning
  • Daily EDGAR ingestion — new SEC 8-K M&A filings auto-ingested into the database, refreshing per-cell stats nightly
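The quarterly drift check reduces to comparing each configured per-cell multiple against a freshly computed population median and flagging cells past a threshold. The 15% threshold here is illustrative; the engine's actual thresholds are calibrated:

```python
# Drift-detection sketch; the threshold value is an assumption.
DRIFT_THRESHOLD = 0.15

def drifted_cells(configured, fresh_medians):
    """Return (sub-vertical, bracket) cells whose configured multiple
    drifted past the threshold vs the latest population median."""
    flagged = []
    for cell, cfg in configured.items():
        fresh = fresh_medians.get(cell)
        if fresh and abs(cfg - fresh) / fresh > DRIFT_THRESHOLD:
            flagged.append(cell)
    return flagged
```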

Cite this methodology

ExitValue.ai. (2026). Business Valuation Methodology. https://exitvalue.ai/methodology

Apply this methodology

Healthcare-services depth (operator's lane)

The engine is calibrated most heavily on healthcare-services data because that is where the operator's domain expertise sits (four years of M&A work across dental, vet, ASC, primary care, and behavioral health).