Pricing check

The check before you ship the pricing page.

Paste your pricing page. 500 ICP buyers tell you which tier they'd pick, what feels overpriced, what's missing, and which competitor they'd switch from. Test up to 3 variants side by side. In 60 seconds.

No card. No signup. Three free checks in your first minute.

How the check runs

Four steps. 60 seconds end to end.

/ 01

Paste

URL, copy, or up to 3 pricing-page variants. We'll run them side by side.

/ 02

Pick an audience

Match the audience to who you sell to today: SMB SaaS buyer, indie hacker, or mid-market.

/ 03

Run

500 buyers per variant pick a tier, name what feels overpriced, name what's missing.

/ 04

Read the report

Tier-pick distribution, perceived value, switch-from competitors, drop-off points.

What you get back

A real report, not a vibe check.

Verdict, sentiment distribution, verbatim quotes from 500 simulated ICP buyers, the most common objection, friction points, recommendations. One artifact you can paste into your team Slack and act on tomorrow.

Pricing check · sample
Run #04219 · 500 buyers
Positive 58% · Neutral 18% · Negative 24%
Top objection: “I don't see a price anywhere, feels like enterprise sales.”
Full report

What founders catch

Three real examples of what the check found.

Default tier was wrong

Before

Highlighted the €99 'Team' tier as the recommended choice.

After

ICP rated 'Team' 38% lower on perceived value than 'Starter'. Defaulted to Starter, conversion held.

Mis-ranked tier hierarchy.

Free tier looked too generous

Before

Generous free tier next to a €49 paid tier.

After

ICP saw no reason to upgrade. Free tier gated more aggressively, paid conversion +19%.

Paid value not legible vs. free.

'Talk to us' on every tier

Before

Three of four tiers had 'Talk to us' instead of a price.

After

Indie-hacker ICP bounced. Self-serve prices added to two tiers. Sign-ups doubled in a week.

PLG audience reads 'Talk to us' as 'enterprise sales motion.'

Common questions

Things SaaS founders ask before running a check.

Is this just GPT?

No. Every check runs through nine independent corrections, including a multi-model ensemble across independent frontier model families, calibration against historical ground truth, revealed-preference weighting, and distribution-shape matching. One model wrapped in a persona prompt is one model's opinion. We give you 500.

How accurate is this?

87% median accuracy across calibrated SaaS clusters, audited monthly. Every cluster is dated, sourced, and visible on the validation page. If a cluster drifts below 80%, we pause it automatically.

What audiences are available?

Pre-built clusters for B2B SaaS buyers (SMB and mid-market), indie hackers, dev-tool buyers, marketing-led SaaS, sales-led SaaS, PLG users, agency owners, and API-first buyers. New clusters land monthly. See the audiences page for status and accuracy.

Does it work for B2C?

Today the calibrated SaaS clusters are the focus. The same engine powers our enterprise customers' B2C work; see the enterprise page for that. If you're a SaaS founder targeting consumers, the indie-hacker and product-led clusters are the closest fit while we calibrate B2C-specific SaaS audiences.

Test your pricing page free.

No card. No sales call. Three free checks in your first minute.