The Pre‑Build Idea Audit: 9 Quick Signals That Tell You Not to Build (and What to Test Instead)
Written by AppWispr editorial
Before you carve road‑map months into sprints and pull an engineer off work that’s already shipping, run this pre‑build idea audit. It’s a short, opinionated checklist of nine high‑signal red flags that should pause any build — and for each signal we give the exact 1–3 rapid experiments you can run in days (not months) to prove demand or kill the idea fast. Use this when your instinct says “we should build,” but you need real evidence first.
Section 1
Why a pre‑build audit beats premature shipping
Most founders confuse ‘I want to build’ with ‘people will pay for this’. The pre‑build audit reframes the question: do real customers exhibit behaviors that make building worthwhile? Pretotyping and lightweight validation methods — fake doors, concierge MVPs, landing‑page presales — are designed to answer that in days, not quarters.
The point of these experiments isn’t a polished product; it’s measurable commitment. Pretotyping pioneer Alberto Savoia popularized cheap, quick simulations of a product to test demand before committing to expensive engineering. If the audit raises red flags, run one of the suggested rapid experiments below instead of starting a full build.
- Aim for observable commitment: email signups are weaker than clicks that convert to payment or explicit preorders.
- Prefer manual delivery (concierge) when outcomes matter more than automation — it reveals true value and refinements before code.
- Use fake‑door and presale pages to quantify willingness to pay quickly.
Section 2
Nine red flags that should stop you from building (and why they’re high signal)
Each signal below is a quick, observable indicator that your idea currently lacks the behavioral evidence to justify engineering time. If you spot one or more, don’t tweak product scope — pause and run the paired experiment.
These aren’t wishy‑washy market‑research items. They focus on behavior (not opinion) and on signals that reliably predict wasted engineering cycles: expressed interest that doesn’t convert, a nebulous buyer identity, nice‑to‑have feature requests, negative unit economics, and so on.
- 1) No paying commitments after a minimal presale page — people sign up but won’t prepay. (High signal: willingness to pay is missing.)
- 2) Customer jobs are fuzzy — users can’t describe the outcome they’d trade money for. (High signal: unclear value proposition.)
- 3) Buyers are multiple conflicting personas — no single segment shows repeatable need. (High signal: no focused initial beachhead.)
- 4) Solution complexity is high (multi‑party flows, hardware, compliance) but you can’t show a manual path to deliver the outcome. (High signal: engineering risk + unclear traction path.)
- 5) Existing manual workarounds accomplish the job cheaply — customers use spreadsheets, contractors, or Slack effectively. (High signal: low pain to justify switching.)
- 6) Acquisition cost unknown or obviously high — you don’t have early channels that scale (for example, a niche B2B feature without a clear outbound list). (High signal: growth model unproven.)
Section 3
Nine red flags (continued) and their single best rapid experiment
Continue the checklist and pair each remaining red flag with a concrete, single best experiment you can run in days to measure real demand.
For each experiment we specify what to measure and which results mean ‘stop’ versus ‘go’.
- 7) Early users demand many niche customizations before buying → Experiment: Concierge MVP for 3–10 customers (charge real money). Measure: retention + willingness to pay repeatedly. Stop if you can’t get 3 customers to pay at full or near‑full price.
- 8) Market education looks necessary (users don’t understand the benefit) → Experiment: Landing‑page presale + explainer video. Measure: conversion rate from targeted ads or cold traffic. Stop if conversion from targeted audience is under a conservatively defined threshold (e.g., <1% paid conversion from warm channels).
- 9) Feature adds marginal value to an existing product (customers say “nice to have”) → Experiment: Fake‑door test (button/CTA for the feature leading to an email/payment flow). Measure: CTA clickthrough → payment or deposit. Stop if clicks don’t translate into paid commitments (see the funnel sketch after this list).
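To make red flag 9 concrete, here is a minimal sketch of the fake‑door funnel math. The counts are invented for illustration; the point is to measure click‑to‑paid separately from raw clickthrough, because curiosity clicks alone are not commitment.

```python
# Fake-door funnel sketch: CTA views -> clicks -> paid commitments.
# All counts are illustrative, not benchmarks.
cta_views, cta_clicks, paid = 2400, 190, 2

clickthrough = cta_clicks / cta_views   # raw interest in the feature
click_to_paid = paid / cta_clicks       # clicks that became commitments

print(f"CTA clickthrough: {clickthrough:.1%}")   # ~7.9%: plenty of curiosity
print(f"Click-to-paid:    {click_to_paid:.1%}")  # ~1.1%: curiosity isn't converting
# Per red flag 9: stop if clicks don't translate into paid commitments
# at whatever rate you pre-committed to before running the test.
```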
Section 4
Exact experiment templates: fake‑door, concierge, and landing‑page presales
Template A — Fake‑door (best when you need to test feature demand quickly): Build a single landing page or in‑app button that promises the feature, with a clear CTA: “Get early access — reserve for $X.” Route the CTA to a simple payment or ‘reserve’ form. If you can’t accept money, capture a refundable deposit or require a calendar reservation. Measure the conversion rate from targeted users and follow up personally.
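For illustration, here is one minimal way the fake‑door CTA could be wired up, sketched with Flask. The `/reserve` destination, the UTM parameter, and the in‑memory click log are placeholder assumptions; in practice you would log to your analytics tool and route to a real checkout or deposit form.

```python
# Minimal fake-door sketch (assumes Flask is installed).
from datetime import datetime, timezone

from flask import Flask, redirect, request

app = Flask(__name__)
clicks = []  # placeholder: log to your analytics tool or database in practice


@app.route("/feature-cta")
def feature_cta():
    # Record each click with its traffic source so you can separate
    # warm audiences from cold ones when reading the results.
    clicks.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "source": request.args.get("utm_source", "direct"),
    })
    # The feature doesn't exist yet; send people straight to the
    # reservation/payment step to measure real commitment.
    return redirect("/reserve")  # hypothetical checkout or deposit form


if __name__ == "__main__":
    app.run(port=5000)
```

The only signal worth counting here is what happens after the redirect: deposits and reservations, not clicks.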
Template B — Concierge MVP (best for outcome‑heavy or complex workflows): Sell the outcome at a real price, deliver it manually for the first customers, and document the time and cost to serve. Use the manual engagement to refine your onboarding questions and delivery steps. Measure repeat purchases, gross margin on manual delivery, and whether customers keep paying once you announce the automation roadmap.
- Fake‑door specifics: clear headline, pricing, scarcity (limited seats), 1‑page checkout (Stripe/PayPal). Signal to proceed: consistent paid conversions at or above your target conversion rate from a warm audience.
- Concierge specifics: charge full or near‑full price (not free trials), offer a refund policy, instrument time to deliver and customer outcomes, and ask for referrals. Signal to proceed: profitable or near‑profitable manual delivery and recurring demand (see the unit‑economics sketch after this list).
- Landing‑page presale specifics: combine explainer video + pricing tiers + social proof placeholders (if you have them). Drive traffic from narrow, relevant channels. Signal to proceed: paid presales hitting the minimum viable cohort size you need to justify engineering.
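To show what “instrument time to deliver” can look like in practice, here is a tiny sketch of concierge unit economics. The hourly cost, prices, and hours are all assumptions; substitute your own delivery logs.

```python
# Concierge unit-economics sketch; every number below is illustrative.
HOURLY_COST = 60.0  # assumed fully loaded cost of manual delivery

customers = [
    {"name": "cust_a", "price": 500.0, "hours": 4.5},
    {"name": "cust_b", "price": 500.0, "hours": 7.0},
    {"name": "cust_c", "price": 450.0, "hours": 3.0},
]

for c in customers:
    cost = c["hours"] * HOURLY_COST
    margin = (c["price"] - cost) / c["price"]
    print(f'{c["name"]}: gross margin {margin:.0%} (manual cost ${cost:.0f})')
# Per Template B: proceed only if manual delivery is profitable or
# near-profitable and customers come back for more.
```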
Section 5
How to run the experiments, interpret results, and decide next steps
Run each experiment with a clearly defined minimum‑success threshold before you start. For example: 50 paid presales from a targeted list, 3 paying concierge customers with positive ROI after manual costs, or a 2% paid conversion from warm traffic on a fake‑door test. Predefine your stop/go thresholds and the time window (often 7–21 days).
Decision framing: “Stop” means the experiment fails to reach the pre‑set threshold; do not build. “Pivot” means you learned something specific (wrong pricing, wrong channel, different persona) and run a new, focused test. “Go” means the experiment validates the riskiest assumption (value + willingness to pay + channel) and you can responsibly allocate engineering time.
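One way to keep yourself honest is to encode the thresholds before the test starts, so the stop/pivot/go call is mechanical. A minimal sketch, with illustrative thresholds:

```python
# Pre-committed decision rule for a landing-page presale.
# Thresholds are examples; set yours from your own economics first.
def presale_decision(visitors: int, paid: int,
                     min_conversion: float = 0.02,  # e.g. 2% paid from warm traffic
                     min_paid: int = 50) -> str:    # e.g. minimum viable cohort
    conversion = paid / visitors if visitors else 0.0
    if paid >= min_paid and conversion >= min_conversion:
        return "go"     # riskiest assumptions validated
    if paid > 0:
        return "pivot"  # some commitment, below threshold: re-test pricing/channel
    return "stop"       # no willingness to pay: do not build


print(presale_decision(visitors=1800, paid=41))  # "pivot": ~2.3% converts, cohort too small
```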
- Set thresholds before running tests — do not move goalposts after seeing early data.
- Charge money when possible — free interest is noise. Even refundable deposits are stronger signals than email signups.
- Record operational cost during concierge runs — that tells you if automation is worth building.
FAQ
Common follow-up questions
What is a fake‑door test and when should I use it?
A fake‑door test presents a feature or product to users (landing page, in‑app CTA) and measures how many try to sign up or pay before the feature exists. Use it when you want to validate demand for a single feature or upgrade quickly and with minimal build cost.
How much should I charge in a concierge MVP?
Charge near your planned price — enough to reflect real value. Avoid free trials for validation. If customers hesitate, try refundable deposits or discounted early‑access pricing, but record the true willingness to pay.
What sample size or conversion rate proves I should build?
There’s no universal number: set thresholds tied to your economics. Examples: 3 paying concierge customers may validate a niche B2B feature; 50 paid presales might validate a consumer app. Base thresholds on LTV, payback period, and minimum cohort size needed to iterate.
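As a deliberately simple illustration of tying a threshold to your economics (both numbers below are assumptions, not recommendations):

```python
# Illustrative only: derive a presale threshold from your own numbers.
build_cost = 30_000.0  # assumed engineering cost to ship v1
ltv = 600.0            # assumed lifetime value per customer

min_presales = build_cost / ltv  # customers needed just to recoup the build
print(f"Minimum presale cohort: {min_presales:.0f} paying customers")  # -> 50
# A real threshold should also account for payback period and the
# cohort size you need to iterate, as noted above.
```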
Can these experiments replace user interviews?
No — interviews complement experiments. Interviews help shape hypotheses and targeting; experiments measure real behavior. Use both: interviews to craft the test, experiments to confirm whether people will act.
Sources
Research used in this article
Barry O'Reilly
Pretotyping to Build the Right "It" with Alberto Savoia
https://barryoreilly.com/explore/podcast/pretotyping-build-right-alberto-savoia/
FourWeekMBA
Pretotyping: How To Find The Right Idea To Avoid Business Failure With Alberto Savoia
https://fourweekmba.com/pretotyping-alberto-savoia/
When Notes Fly
Quick Validation MVP Ideas
https://whennotesfly.com/ideas/startup-mvp-ideas/
Shortform
What's a Concierge MVP? How Do You Build One?
https://www.shortform.com/blog/concierge-mvp/
Solopreneur Global
Validate SaaS Feature Ideas Before Building (concierge MVP example)
https://www.solopreneur.global/posts/validate-saas-feature-ideas-before-building
Next step
Turn the idea into a build-ready plan.
AppWispr takes the research and packages it into a product brief, mockups, screenshots, and launch copy you can use right away.