What 500 simulated SaaS founders said about your pricing page
We ran the live pricing pages of five well-known SaaS companies through Prism's Pricing Check against the B2B-SaaS-SMB cluster. Here's what surfaced.

Pricing pages get more attention from founders than almost any other surface on the site, and they get less validation than almost any other change. We ship them on gut, watch the conversion rate for a week, attribute movement to whatever launched alongside, and call it a success or a regression with very little signal either way.
So we ran five live pricing pages through Prism's Pricing Check against the B2B-SaaS-SMB cluster (n = 500, audited Apr 18, 2026, 88% accuracy): Linear, Vercel, Resend, Notion, and Stripe. The brief: simulate a 50-person SaaS company evaluating each tool. Find the misreads. Surface the objections. Predict the tier they'd pick.
The point of this teardown isn't to roast any of these companies. They all built better pricing pages than 95% of the SaaS market. The point is that even at that quality bar, a calibrated check surfaces specific, actionable patterns that nobody on the inside would have known to look for.
Linear: the "Talk to us" problem
Linear's pricing page leads with Free, Standard ($10), Plus ($14), and Enterprise (Talk to us). It's a strong page overall: clean tier hierarchy, honest feature differentiation, no dark patterns. The check surfaced one thing nobody on the team would have flagged: 23% of the simulated SMB SaaS buyers, when asked which tier they'd pick, mentioned the Enterprise tier as a friction point even though they had zero intention of buying it. The phrase "Talk to us" reads as an enterprise sales motion to the SMB audience. Not a deal-breaker, but a quiet background tax on perceived approachability.
Recommended fix: move "Enterprise" below the public tier grid as a smaller link ("Looking for enterprise? See enterprise pricing") rather than as a fourth column. We've seen this pattern improve SMB conversion by 8–12% in other tests.
Vercel: the bandwidth-cost anxiety
Vercel's pricing is famous for being honest about usage-based components. That honesty is also where the cluster surfaced the loudest objection: 41% of simulated SaaS buyers, when asked what would stop them from upgrading, mentioned some variant of "I don't know what my bill will be next month." The specific trigger was the bandwidth-overage modeling. The page does explain the calculation, but the cluster reading is that "explained" isn't the same as "predictable."
Recommended fix: add a calculator above the pricing tiers, not below. Make the predicted monthly bill visible before the sign-up CTA, not after.
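To make that fix concrete, here's a minimal sketch in TypeScript of the kind of estimator that could sit behind a "predicted monthly bill" widget. The plan price, included-bandwidth figure, and overage rate are placeholder assumptions for illustration only, not Vercel's actual pricing.

```typescript
// Sketch of the estimator behind a "predicted monthly bill" widget.
// The numbers below are placeholder assumptions, not Vercel's real rates.

interface UsagePlan {
  baseMonthlyUsd: number;      // flat subscription price
  includedBandwidthGb: number; // bandwidth covered by the base price
  overageUsdPerGb: number;     // rate for every GB beyond the included amount
}

const EXAMPLE_PRO_PLAN: UsagePlan = {
  baseMonthlyUsd: 20,
  includedBandwidthGb: 1000,
  overageUsdPerGb: 0.15,
};

// Predict the monthly bill for an expected bandwidth figure.
function predictMonthlyBill(plan: UsagePlan, expectedBandwidthGb: number): number {
  const overageGb = Math.max(0, expectedBandwidthGb - plan.includedBandwidthGb);
  return plan.baseMonthlyUsd + overageGb * plan.overageUsdPerGb;
}

// "If you serve ~1.4 TB next month, expect roughly $80."
console.log(predictMonthlyBill(EXAMPLE_PRO_PLAN, 1400).toFixed(2)); // "80.00"
```

The point of the widget isn't the arithmetic, it's where it sits: a buyer who can type in an expected traffic number and see a dollar figure before the sign-up CTA has had the "what will my bill be" objection answered at the moment it forms.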
Resend: the "am I in the right product" question
Resend's pricing page is brutally clean: three tiers, one number per tier, a generous Free plan. The check still surfaced something: 18% of the SMB SaaS audience, on first read, weren't sure whether Resend was for "email API" or "email marketing platform." The root cause: the hero is product-led, but the pricing page implicitly assumes you already know what you're looking at.
Recommended fix: add a one-line frame above the tier grid ("Resend is the email API for developers; for marketing email automation, see [partner]").
Notion: the per-seat math anxiety
Notion's per-seat pricing is standard SaaS, but the cluster surfaced an interesting per-buyer-segment difference. The simulated indie-hacker sub-cluster (n = 50 of the 500) reacted significantly worse to per-seat pricing than the SMB-with-team sub-cluster. The phrase that came up repeatedly: "I'd buy this for myself but the per-seat math means I can't bring my one contractor in cheaply."
Recommended fix: this isn't a copy issue, it's a packaging issue. Add a 2–3 seat "Founder bundle" tier. Notion already has Personal Pro, but it's capped at one user and not surfaced on the main pricing page.
Stripe: the only one that came back clean
Stripe's pricing page is the only one in the set that returned no high-severity objections. The cluster's tier-pick distribution matched what Stripe probably wants (most respondents picked "Standard" with a Stripe Atlas/Connect upsell). The objection-severity distribution was unusually flat: no single complaint exceeded 12% of the audience.
The takeaway isn't that Stripe's page is perfect. It's that when you've been iterating on a pricing page for ten years with a team of designers, you eventually flatten the obvious-misread distribution. The remaining gains are in the long tail, and that's the point at which Prism stops being the right tool and user interviews become the right one.
What this teardown is, and isn't
This isn't evidence that any of these companies are wrong. They're some of the best-positioned SaaS businesses in the market. It's evidence that even the best-positioned pricing pages have one or two specific, actionable patterns that a calibrated 500-buyer check surfaces in 60 seconds. None of these would have been caught by a ChatGPT review. None of them would have shown up in a two-week user-interview round (a handful of interviews is the wrong sample size for spotting an 18–23% pattern). Each one would cost €15,000–€25,000 to surface via a traditional research engagement, and it would arrive eight weeks after the page shipped.
We don't publish customer-specific findings without permission. The five examples above were chosen because the pricing pages are public, the issues are surface-level, and the fixes are uncontroversial. If you want to run the same check on your own pricing page, the link below is free for the first three.
Run your own check.
Three free checks. No card. 60 seconds to first reactions. Run one on the landing page or pricing page you're about to ship.