
From 10 Interviews to a One‑Page Spec: A 90‑Minute Template + Worked Example


Written by AppWispr editorial


Market Research · April 12, 2026 · 5 min read · 981 words

You ran ten interviews. Now what? Stop letting qualitative notes sit in folders. This post gives a 90‑minute, timeboxed workflow and a ready-to-export one‑page spec template you can copy, plus a filled example from an indie app idea. Use it after your first 10 interviews to produce a prioritized, build‑ready spec that designers and engineers can action.

Tags: turn user interviews into product spec template, user interview synthesis, one page product spec, founder research workflow, product requirements template


The promise: what a one‑page spec should do for you


A one‑page spec is not a replacement for detailed docs — it’s a decision artifact you use to align team priorities and start building. In 90 minutes you can move from messy interview notes to a prioritized feature set framed by problem, user, and evidence. That reduces ambiguity for engineering handoffs and keeps early builds honest to customer needs.

This one‑page approach borrows from lean research and product one‑pagers used across UX teams and collaborative whiteboard templates: capture the job to be done, three core user stories, acceptance criteria, and a short prioritization rationale. The goal is fast clarity — enough context for design and engineering to scope an MVP sprint without waiting for a lengthy MRD.

  • Produce a single decision artifact that communicates: problem, who, why, evidence, and next steps.
  • Prioritize by frequency, pain, and implementation cost to avoid feature bloat.
  • Use the spec to run a follow‑up prototype or experiment quickly.


90‑minute, timeboxed workflow (who does what and when)


Set a 90‑minute session with one facilitator (product founder or PM), one synthesizer, and optionally a designer or engineer for feasibility checks. The agenda: 0–10m align on interview cadence and top hypotheses; 10–40m rapid extraction (note cards); 40–70m affinity mapping + vote; 70–85m build the one‑page spec; 85–90m assign next experiments. Timeboxing forces tradeoffs and prevents hours of rumination.

For extraction, use a simple note card per meaningful quote or observed pain (digital sticky notes in Miro, Mural, or FigJam work well). Each note should include: a verbatim quote (or short paraphrase), the inferred need, and the interview id. During affinity mapping, cluster by problem, not by suggested solutions — that preserves why the user cares rather than how they imagine a fix.

  • 0–10m: Align (audience, target outcome, top 3 hypotheses).
  • 10–40m: Extraction — one card per meaningful insight across 10 interviews.
  • 40–70m: Affinity map and vote on clusters by frequency and severity.
  • 70–85m: Draft the one‑page spec and quick feasibility check.
  • 85–90m: Assign experiment owner and next step (prototype, metric, timeline).
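The extraction-and-vote steps can be sketched in code: a minimal Python example, assuming each note card is a small record (quote, inferred need, cluster label, interview id, 1–3 severity vote). The sample cards and field names are hypothetical, not a prescribed schema.

```python
from collections import defaultdict

# One card per meaningful insight: verbatim quote (or paraphrase),
# inferred need, affinity cluster, interview id, and a 1-3 severity vote.
cards = [
    {"quote": "I lose track of which invoices are paid", "need": "payment visibility",
     "cluster": "status tracking", "interview_id": "I03", "severity": 3},
    {"quote": "Chasing clients by email is awkward", "need": "less manual follow-up",
     "cluster": "reminders", "interview_id": "I07", "severity": 2},
    {"quote": "I re-check my bank app every morning", "need": "payment visibility",
     "cluster": "status tracking", "interview_id": "I09", "severity": 3},
]

def rank_clusters(cards):
    """Rank clusters by how many distinct interviews mention them,
    breaking ties with total severity votes."""
    interviews = defaultdict(set)
    severity = defaultdict(int)
    for card in cards:
        interviews[card["cluster"]].add(card["interview_id"])
        severity[card["cluster"]] += card["severity"]
    return sorted(interviews,
                  key=lambda c: (len(interviews[c]), severity[c]),
                  reverse=True)

print(rank_clusters(cards))  # "status tracking" ranks first: 2 interviews vs. 1
```

Counting distinct interview ids (rather than raw cards) keeps one talkative interviewee from inflating a cluster's apparent frequency.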


One‑page template (exportable) and a worked example


Use this exact layout when you build the one‑page spec: Title; Target user (1‑sentence); Core problem statement (1 sentence); Supporting evidence (3 bullet quotes with interview ids); Top 3 jobs‑to‑be‑done or user stories; Acceptance criteria (1–2 checks per story); Priority and rationale (RICE-lite or simple High/Med/Low); Quick feasibility note and next experiment. Keep text concise — the document must be skimmed fast.

Worked example (indie app idea: 'Inbox for freelance invoices'): Target user — solo freelancers who invoice monthly. Core problem — tracking sent invoices and client payment status is time-consuming and error-prone. Evidence — three short quotes pulled from different interviews showing frequency and impact. Top stories and acceptance criteria follow directly from those clusters; the priority line explains why a payment‑status dashboard (MVP) beats automated reminders (later).

  • Template fields you can copy: Title | Target user | Problem (1 line) | Evidence (3 quotes w/ ids) | Top 3 user stories | Acceptance criteria | Priority & rationale | Feasibility note | Next experiment.
  • Worked example focuses on a single, high‑impact slice (status dashboard) and lists the exact minimum acceptance checks needed for an engineer to estimate.
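The template fields above can be laid out as a copy‑paste skeleton; the bracketed placeholders are illustrative, not prescribed wording:

```
# [Title]
Target user: [one sentence]
Core problem: [one sentence]
Evidence:
  - "[quote]" ([interview id])
  - "[quote]" ([interview id])
  - "[quote]" ([interview id])
Top 3 user stories:
  1. As a [user], I want [job] so that [outcome]
     Acceptance: [check]; [check]
  2. ...
  3. ...
Priority & rationale: [High/Med/Low + one line]
Feasibility note: [one line from engineer or quick search]
Next experiment: [prototype type, primary metric, owner, timeline]
```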


How to prioritize and move from spec to experiment


Prioritize clusters by a combination of: frequency across interviews, intensity of emotional language (how painful the problem felt in the quote), and relative engineering cost. For speed, use a three‑bucket system (High/Med/Low) plus one feasibility note from an engineer, or a quick search of existing solutions. That’s enough to pick an MVP slice for a single sprint.
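A minimal sketch of that bucketing, assuming frequency out of ten interviews and 1–3 scores for pain and cost; the weighting and thresholds are an illustrative heuristic, not a standard formula:

```python
def bucket(frequency, pain, cost):
    """Three-bucket priority from the three inputs.

    frequency: distinct interviews mentioning the cluster (out of 10)
    pain:      1-3, intensity of emotional language in the quotes
    cost:      1-3, rough engineering estimate (1 = days, 3 = months)
    """
    score = frequency * pain / cost  # frequent + painful + cheap rises to the top
    if score >= 10:
        return "High"
    if score >= 4:
        return "Med"
    return "Low"

print(bucket(7, 3, 2))  # payment-status dashboard: score 10.5 -> High
print(bucket(4, 2, 2))  # automated reminders:      score 4.0  -> Med
print(bucket(2, 1, 3))  # nice-to-have polish:      score 0.67 -> Low
```

Dividing by cost (rather than subtracting it) means a very cheap fix for a moderately common pain can still outrank an expensive fix for a slightly more common one.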

Convert the one‑page into an experiment plan: hypothesis, prototype type (clickable vs. code), primary metric, cohort, and duration. Use your interview evidence in the hypothesis (e.g., “Because 7 of 10 freelancers said X, we believe a payment‑status dashboard will increase on‑time payments by making status visible”). After the experiment, update the spec with measured learning — that closes the loop and preserves evidence for future roadmap decisions.

  • Prioritization inputs: frequency, pain intensity, and implementation cost.
  • Experiment plan fields: hypothesis (evidence‑based), prototype, primary metric, cohort, duration.
  • After experiment: replace assumptions with results and iterate the one‑page spec.
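The experiment‑plan fields can be captured as a simple record; the field names mirror the list above, and the values are illustrative, drawn from the worked example:

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    hypothesis: str       # evidence-based: cite interview counts
    prototype: str        # "clickable" or "code"
    primary_metric: str
    cohort: str
    duration_days: int

plan = ExperimentPlan(
    hypothesis=("Because 7 of 10 freelancers said tracking payment status is "
                "painful, a status dashboard will increase on-time payments."),
    prototype="clickable",
    primary_metric="share of invoices whose status is checked via the dashboard",
    cohort="10 freelancers from the interview pool",
    duration_days=14,
)
```

Keeping the plan as structured data (rather than prose) makes it trivial to paste into the one‑page spec and to diff against measured results after the experiment.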

FAQ

Common follow-up questions

Do I need recordings and transcripts to use this workflow?

Recordings and transcripts help speed extraction but are not required. If you don’t have them, use detailed notes and tag each insight with interview ids. Tools like Miro and Mural offer templates for capturing notes during interviews, which makes the 30‑minute extraction step faster.

What if my ten interviews contradict each other?

Contradictions are useful signals. During affinity mapping, separate clusters that conflict and flag them for follow‑up. Prioritize clusters that show consistent frequency or severity; treat opposing clusters as hypothesis tests to run in your experiment stage.

How do I surface non‑verbal signals (frustration, confusion) from interviews?

Translate non‑verbal signals into short, labeled evidence cards (e.g., 'confusion: hesitated 10s when describing X') and include them in your clustering. Non‑verbal cues increase the weight of a cluster when prioritizing because they indicate emotional intensity.

Can I use AI to speed synthesis?

Yes—AI tools can accelerate transcription and draft synthesis, but always validate AI summaries against the original notes or transcripts. Use AI outputs as first drafts, then quickly check representative quotes from actual interviews before placing them in the one‑page spec.


Next step

Turn the idea into a build-ready plan.

AppWispr takes the research and packages it into a product brief, mockups, screenshots, and launch copy you can use right away.