
Point Estimates Lie to Small Business Owners

April 2026 · 9 min read

A founder friend sent me a text last month: “We’re projecting $40k in revenue for April.” I asked how confident she was. She said “pretty confident.” Two weeks later she texted me: “Tracking $31k. What happened?”

Nothing happened. Her projection was a point estimate — a single number without an error bar. Point estimates always feel more certain than they are, because they don’t visually display their own uncertainty. $40k sounds like a target. $31k to $47k (80% confidence) sounds like a forecast. If your model is honest, the two carry the same information, but only the second one stops you from making decisions the forecast can’t actually support.

This post is about how to produce calibrated ranges instead of point estimates, for a small business, in a spreadsheet, without a data team.

Why point estimates are the default (and why they’re dangerous)

Every tool the small-business world gives you demands a single number. QuickBooks projections. Loan applications. Investor decks. Supplier contracts. "Next month’s revenue" has a single cell, not a range. So owners learn to pick a number that sounds plausible and hope.

The harm is subtle. You make decisions — hire a person, take on a loan, commit to a lease — as if the point estimate is true. When reality comes in at 70% of the projection (as it often does), you’re over-committed. Not because the projection was malicious, but because it didn’t expose its own error bar, so nobody planned around the downside.

This is the same mistake a weather forecaster would make by telling you "tomorrow’s high is 72°" instead of "tomorrow’s high is between 68° and 76° with high confidence." Except weather forecasters stopped doing that decades ago, because they noticed people were under-dressing for cold days.

What "calibrated" actually means

An estimate is calibrated when, over many attempts, the stated confidence matches reality: when you say "80% confident," the truth lands inside your range about 80% of the time; when you say "95%," about 95% of the time.

Most untrained estimators are wildly overconfident. When people say "I’m 95% sure," the right answer is inside their stated range maybe 60% of the time. This is called overconfidence bias and it’s been measured hundreds of times across domains from weather to finance.

The fix isn’t being smarter. It’s keeping score and correcting. Every time you make a ranged estimate, log it. When the truth comes in, check whether it fell inside your range. After 20-30 of these, you’ll know whether your "80%" is really 80% or really 55%. If it’s 55%, widen your ranges.

The three-step method for a small business

Step 1: Replace every point estimate with a triplet

Instead of "April revenue: $40k," report:

April revenue forecast:
  p10 (low case):   $32k
  p50 (most likely): $40k
  p90 (high case):   $49k

p10 = you’d be surprised to come in below this. p50 = your best single-number guess. p90 = you’d be surprised to come in above this. The p10-to-p90 range is your 80% confidence interval.

If you have historical data, you can compute these from the last N months’ actuals:

# Excel, with last 12 months in B2:B13
p10 = PERCENTILE.INC(B2:B13, 0.10)
p50 = PERCENTILE.INC(B2:B13, 0.50)
p90 = PERCENTILE.INC(B2:B13, 0.90)
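
The same calculation works in Python if you keep your actuals outside a spreadsheet — a minimal sketch with made-up monthly figures; `method="inclusive"` in the standard library’s `statistics.quantiles` uses the same linear interpolation as Excel’s PERCENTILE.INC:

```python
from statistics import quantiles

# Last 12 months of revenue actuals (hypothetical numbers)
actuals = [34_000, 29_000, 41_000, 37_000, 33_000, 45_000,
           38_000, 31_000, 42_000, 36_000, 40_000, 35_000]

# n=10 gives the 9 decile cut points; method="inclusive" matches
# Excel's PERCENTILE.INC interpolation
deciles = quantiles(actuals, n=10, method="inclusive")
p10, p50, p90 = deciles[0], deciles[4], deciles[8]
print(f"p10=${p10:,.0f}  p50=${p50:,.0f}  p90=${p90:,.0f}")
```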

If you don’t have 12 months of data, start with a rule of thumb: p10 = 0.75 × p50, p90 = 1.25 × p50. That gives you a ±25% range, which is in the right ballpark for most small-business monthly revenue and is almost certainly better than reporting only the point estimate.

Step 2: Make the decision against the range, not the midpoint

This is the part most people skip and it’s the part that matters.

Every decision that depends on the forecast should be tested against all three numbers, not just the p50.

The trick: different decisions care about different tails of the distribution. Planning against the p50 is like a weather forecaster telling a wedding planner "the expected rainfall is 0.3 inches." That number doesn’t answer the only question the wedding planner has, which is "what’s the probability of more than 1 inch?"
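
To make "test against the tail" concrete, here is a hypothetical hiring check in Python — all of the cost figures are assumptions for illustration, not numbers from this post:

```python
# A downside decision (hiring) must survive the p10 case,
# not merely pencil out at the p50 midpoint.
forecast = {"p10": 32_000, "p50": 40_000, "p90": 49_000}
fixed_costs = 26_000      # existing monthly overhead (assumed)
new_hire_cost = 5_000     # fully loaded monthly cost (assumed)

low_case_margin = forecast["p10"] - fixed_costs - new_hire_cost
decision = "hire" if low_case_margin >= 0 else "wait"
print(decision, low_case_margin)
```

An upside-capacity decision would run the mirror-image check against the p90: can you actually fulfill demand at the top of the range?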

Step 3: Keep a scoreboard

In a spreadsheet:

Forecast date   p10    p50    p90    Actual   Inside 80% CI?
Mar 1           $28k   $35k   $43k   $34k     yes
Apr 1           $32k   $40k   $49k   $31k     no (below p10)
May 1           $34k   $42k   $51k   $44k     yes

After 10 forecasts, count how often "Inside 80% CI?" was yes. If it’s 7-9 out of 10, you’re calibrated. If it’s 3-5 out of 10, you’re overconfident and need to widen your ranges. Multiply p10 by 0.9 and p90 by 1.1 as a correction and try again next round.
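
If you keep the log in code rather than a spreadsheet, the scoreboard check is a few lines of Python; the rows below mirror the example table above:

```python
# Calibration scoreboard: each row is (p10, p90, actual).
# An 80% interval should contain the actual roughly 8 times
# out of 10 over many forecasts.
log = [
    (28_000, 43_000, 34_000),   # Mar
    (32_000, 49_000, 31_000),   # Apr: below p10 -> a miss
    (34_000, 51_000, 44_000),   # May
]
hits = sum(lo <= actual <= hi for lo, hi, actual in log)
print(f"{hits}/{len(log)} inside the 80% interval")

# If the long-run hit rate is well under 0.8, widen the ranges
# (e.g. p10 * 0.9, p90 * 1.1, as above) and re-score.
```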

This is a feedback loop, not a one-time exercise. You’ll keep nudging the ranges for a year or two until they’re trustworthy. Then you’ll make better decisions forever.

Worked example: should we sign the catering contract?

Real scenario we ran. A prospective wholesale account proposed a contract that guaranteed us $8k/month in minimum orders for 12 months, but required us to commit staffing and ingredient capacity to cover their peak-week demand of $14k.

The point-estimate analysis: "average $11k/month × 12 = $132k, great deal."

The ranged analysis: at p10, they order only the minimum ($8k), which, after the committed staffing and ingredients, would leave us with a tiny margin. At p90, their peak weeks stacked on our existing wholesale base would exceed our production capacity, meaning we’d miss existing orders and damage those accounts.

We negotiated a cap on peak-week volume and a smaller minimum. Final contract was ~80% of the original dollar value but had zero bad outcomes in either tail. The point-estimate analysis would have cheerfully said yes to a structure that hurt us in both tails.

The broader pattern

Weather forecasting moved to probabilistic forecasts in the 1960s. Finance moved to probabilistic risk models in the 1970s. Hospital capacity planning, airline seat pricing, utility load forecasting — all operate on distributions, not point estimates. Small businesses are 50 years late to this party, not because the math is hard, but because the tools they buy (QuickBooks, POS systems, loan software) assume the old way.

The fix is a spreadsheet, a weekly habit, and a willingness to report ranges instead of single numbers in places you can control. Your lender will still want a single number on the application. But your own planning doc doesn’t have to.

The forecasting systems I work on now at ZenHodl output nothing but calibrated ranges — every predicted outcome is a probability distribution with an explicit confidence, and every day a monitoring job checks whether those confidences held up against realized results. The measurement loop is the thing that makes the ranges trustworthy over time. Same principle, different industry.

Three takeaways

  1. Replace point estimates with p10/p50/p90 everywhere you can. If you have no history, start with ±25%.
  2. Make decisions against the relevant tail, not the midpoint. Downside decisions: p10. Upside capacity: p90.
  3. Keep a scoreboard of your ranges versus actuals. Widen the ranges until they’re right about 80% of the time.
