Kill‑or‑Build Decision Kit: 7 Rapid Tests to Prioritize App Features and Avoid Wasted Engineering
Written by AppWispr editorial
Founders and small product teams can’t afford the wrong build. This decision kit gives you seven short, repeatable experiments — with exact outputs you can use in a prioritization meeting — that separate ideas that deserve engineering time from the ones you should kill. Each test is executable within two weeks and requires little or no engineering. At the end of each test you’ll have one of three outcomes: kill, iterate the hypothesis, or move to build brief.
Section 1
How to use this kit (2‑week sprint, clear outcomes)
Treat every feature hypothesis as a single test. Define the riskiest assumption (demand, willingness to pay, or retention impact) and pick the experiment that most directly challenges that assumption. Each test below is designed to be run in 2–14 days, with a one‑metric decision rule (example thresholds below).
Run experiments sequentially or in parallel depending on risk. If the idea affects activation or retention, prioritize activation smoke tests first. If it’s monetization‑led, run paid preorders or a price anchor fake door. After each experiment, record the outcome as one of: Kill (stop investing), Iterate (change hypothesis and re‑test), or Build Brief (green light to write specs and estimate engineering).
- Timebox each test to 2–14 days.
- Pick one primary metric and a clear numerical threshold before you start.
- Use manual (concierge) versions before automating.
- Record explicit outcome: Kill, Iterate, or Build Brief.
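One way to keep the decision rule honest is to write the outcome mapping down as a tiny script before the test starts. This is an illustrative sketch, not a prescription; the metric value and thresholds are assumptions you would replace with your own:

```python
def decide(metric_value, kill_below, build_at):
    """Map one primary metric against prespecified thresholds.

    kill_below and build_at must be set before the test runs,
    so the outcome can't be rationalized after the fact.
    """
    if metric_value >= build_at:
        return "Build Brief"
    if metric_value < kill_below:
        return "Kill"
    return "Iterate"

# Hypothetical fake-door result: 1.2% conversion,
# kill below 0.5%, build at 2% or better.
print(decide(0.012, kill_below=0.005, build_at=0.02))  # Iterate
```

Recording the threshold alongside the outcome also makes the prioritization meeting faster: the discussion is about whether the rule was right, not about re-litigating the number.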
Section 2
1) Fake‑door landing (demand & willingness to wait)
Create a simple landing page or in‑app CTA that presents the feature as if it exists, and measure clicks and signups. The goal is behavioral evidence: people who click a CTA or join a waitlist reveal more than survey answers do. Use copy that clarifies the value and an explicit next step (join the waitlist, preorder, schedule a demo).
Decision rule: if conversion to meaningful action (click → email or preorder) meets your threshold (example: 2–5% of reachable audience via a short ad or mailing), move toward paid‑preorder or concierge; if <0.5% across a credible sample, kill or pivot the idea.
- Tools: Carrd, Unbounce, simple in‑app CTA, or even an email capture behind a button.
- Measure click-through rate and email-to-action conversion, not vanity page views.
- Keep experiment ethical: disclose waitlist or expected ship date when closing the test.
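To make the conversion math concrete, the funnel can be computed with the reachable audience (not vanity page views) as the denominator. The figures below are hypothetical:

```python
def fake_door_conversion(reachable, clicks, emails):
    """Conversion funnel for a fake-door test.

    reachable: credible audience size (ad reach or mailing list),
    not raw page views. The email rate is the number to compare
    against a prespecified threshold (e.g. the 2-5% example above).
    """
    ctr = clicks / reachable
    email_rate = emails / reachable
    return ctr, email_rate

# Hypothetical run: 4,000 people reached, 260 clicks, 120 emails.
ctr, rate = fake_door_conversion(reachable=4000, clicks=260, emails=120)
print(f"CTR {ctr:.1%}, email conversion {rate:.1%}")  # 6.5% and 3.0%
```

A 3% email conversion would sit inside the example 2-5% band, so this hypothetical result would point toward a paid-preorder or concierge follow-up rather than a kill.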
Section 3
2) Concierge MVP (real value, high‑signal feedback)
Deliver the feature manually to a small set of users. For subscription or service features this exposes genuine product‑market fit signals: are users willing to trade time or money for the manual workflow? Concierge tests are especially valuable for personalization, onboarding or complicated flows where automated engineering is expensive.
Decision rule: if 4–8 pilot users repeatedly return within the manual flow and report concrete value (NPS-like qualitative signal + willingness to continue), convert the manual steps into acceptance criteria for a build brief. If manual delivery is unsustainably expensive with weak retention, kill or rework the value prop.
- Scope the concierge deliverable tightly: one persona, one use case, 2–3 manual deliveries.
- Measure repeat usage and explicit willingness to pay or continue.
- Use notes from every session to convert manual steps into acceptance tests.
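If you log each concierge delivery, repeat usage is straightforward to compute. The user ids and the two-visit threshold below are illustrative assumptions, not part of any specific tool:

```python
from collections import Counter

def repeat_pilots(sessions, min_visits=2):
    """List pilot users who came back to the manual flow.

    sessions: one user id per concierge delivery, in order.
    A returning user has min_visits or more sessions.
    """
    counts = Counter(sessions)
    return [user for user, n in counts.items() if n >= min_visits]

# Hypothetical delivery log across a two-week concierge pilot.
log = ["ana", "ben", "ana", "cho", "ana", "ben"]
print(repeat_pilots(log))  # ['ana', 'ben']
```

Pairing this count with the session notes (who returned, and what they said the value was) is what turns a manual pilot into acceptance criteria for a build brief.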
Section 4
3) Paid preorders & price anchors (money talks fast)
A paid preorder — charging before the feature exists — is the cleanest signal for monetizable demand. Offer an early‑access price or lifetime discount with an explicit ship window and refund policy. Even a modest paid‑preorder conversion rate from your engaged audience demonstrates real willingness to buy.
Decision rule: if your preorder conversions reach a prespecified revenue threshold (for example, cover expected initial engineering + 20%), proceed to build; if you get strong interest but low willingness to pay, iterate pricing or packaging; if interest is negligible, kill.
- Use Stripe Checkout or Gumroad to accept payments quickly.
- Be explicit about delivery timelines and refund terms to remain ethical and compliant.
- Use preorder revenue to de‑risk build costs where possible.
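The example rule above, cover expected engineering plus 20%, can be checked in a few lines. The revenue and cost figures here are hypothetical:

```python
def preorder_green_light(preorder_revenue, engineering_estimate, buffer=0.20):
    """Apply the 'cover expected engineering + 20%' example rule.

    buffer is the illustrative 20% margin from the decision rule above;
    adjust it to your own risk profile.
    """
    threshold = engineering_estimate * (1 + buffer)
    return preorder_revenue >= threshold

# Hypothetical: $18,000 in preorders against a $15,000 build estimate.
print(preorder_green_light(preorder_revenue=18_000, engineering_estimate=15_000))  # True
```

Falling short of the threshold doesn't automatically mean kill; strong interest with weak revenue is the "iterate pricing or packaging" branch of the decision rule.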
Section 5
4) Micro‑surveys with behavioral follow‑ups (cheap signal, scalable)
Combine a short in‑product or email micro‑survey with a follow‑up behavioral ask (e.g., “Would you like to join a pilot? Click to schedule”). Micro‑surveys convert stated preference into a small action and filter out noise from vague opinions.
Decision rule: if >20% of respondents take the follow‑up action or self‑select into pilots, treat that as positive signal to run a concierge or preorder. If survey responses cluster around 'not for me' or show low follow‑up, kill or reframe the target persona.
- Keep surveys under 3 questions: problem severity, current workaround, willingness to try/pay.
- Always attach a behavioral follow‑up (book call, join pilot, click CTA).
- Segment responses by persona to avoid average results masking pockets of demand.
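Segmenting follow-up rates by persona, per the last bullet above, can be sketched as follows; the persona labels and the (persona, took_followup) logging format are assumptions about how you record survey results:

```python
def followup_rate_by_persona(responses):
    """Per-persona follow-up rates, so a flat average
    doesn't hide a pocket of demand.

    responses: list of (persona, took_followup) pairs.
    """
    totals, hits = {}, {}
    for persona, took in responses:
        totals[persona] = totals.get(persona, 0) + 1
        hits[persona] = hits.get(persona, 0) + (1 if took else 0)
    return {p: hits[p] / totals[p] for p in totals}

# Hypothetical responses: founders convert, hobbyists don't.
data = [("founder", True), ("founder", True), ("founder", False),
        ("hobbyist", False), ("hobbyist", False)]
print(followup_rate_by_persona(data))
```

In this made-up sample the blended rate is 40%, but splitting by persona shows founders at roughly 67% and hobbyists at 0%, exactly the kind of pocket an average would mask.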
FAQ
Common follow-up questions
How fast can I run these tests?
Most tests here are designed to produce a decision within 2–14 days. Fake‑door landing and micro‑surveys can return usable data in 48–72 hours if you have an audience to reach; concierge tests often take a week to recruit and deliver; paid preorders may need up to two weeks to surface meaningful conversions.
What thresholds should I use to 'kill' an idea?
Use outcome thresholds tied to your risk profile and channels. Example rules: if fake‑door conversion <0.5% from a credible sample, or paid preorders fail to cover the expected MVP engineering cost within two weeks, mark for kill. Always set thresholds before running a test to avoid confirmation bias.
Is a fake‑door test ethical if the feature doesn’t exist?
Yes, if you disclose the waitlist or preorder status wherever users complete a transaction, provide easy refunds, or clearly state that the feature is coming. Ethical fake‑door tests focus on measuring interest, not on deceiving users into thinking a product already works.
When should I skip validation and just build?
Skip only when you have repeated behavioral evidence from customers (paying users demanding the feature, clear retention lift in production experiments) or when the cost of testing exceeds the cost of building and the risk is low. For early‑stage, founder‑led teams, the tests here usually de‑risk the decision faster and cheaper than blind builds.
Sources
Research used in this article
Each generated article keeps its own linked source list so the underlying reporting is visible and easy to verify.
- Chameleon, “Fake Door Testing - How it Works, Benefits & Risks”: https://www.chameleon.io/blog/fake-door-testing
- Launching Next, “Fake Door Test: A Guide to Quickly Validating Ideas”: https://www.launchingnext.com/blog/fake-door-test/
- N-iX, “Concierge MVP: Everything you need to know”: https://www.n-ix.com/concierge-mvp/
- Learning Loop, “Concierge MVP Experiment (Concierge Test)”: https://learningloop.io/plays/concierge
- Evelance, “Fake Door Testing: The Complete Guide”: https://evelance.io/blog/fake-door-testing-the-complete-guide/
- Learning Loop, “Fake Door Testing: What It Is and How to Run One”: https://learningloop.io/plays/fake-door-testing
- Wikipedia, “Lean startup”: https://en.wikipedia.org/wiki/Lean_startup
Next step
Turn the idea into a build-ready plan.
AppWispr takes the research and packages it into a product brief, mockups, screenshots, and launch copy you can use right away.