
Use when validating a startup idea before building. Produces evidence-based GO/NO-GO decisions using a 9-dimension scorecard (problem, market, timing, moat, unit economics, founder-market fit, feasibility, GTM, risk), a validation ladder (interviews -> smoke test -> concierge/WoZ -> paid pilot), and riskiest-assumption-first experiments.


Startup Idea Validation

Systematic validation for testing ideas before building: define hypotheses, collect evidence, score the opportunity, and make a decision you can defend.

Operating Principles (2026)

  • Prefer decisions over inventories: each dimension ends with GO / CONDITIONAL / PIVOT / NO-GO and a next action.
  • Separate evidence quality from confidence: weak evidence cannot justify a high score.
  • Pre-register thresholds and stop rules before running experiments (avoid moving goalposts).
  • Validate willingness-to-pay and time-to-value early (price is part of the product).
  • Calibrate thresholds to the target outcome (venture-scale vs cash-flow business) and business model (B2B SaaS, B2C, marketplace, services).
  • Stay safe and ethical: no misrepresentation, respect ToS, and handle customer data with minimization and retention limits.

Intake Checklist (Ask First)

  • One-sentence idea + target user + job-to-be-done
  • Business model: B2B/B2C, SaaS/usage-based/marketplace/services, ACV/ARPU range
  • Geography, constraints (regulated domain, procurement/security requirements, data access)
  • Target outcome: venture-scale, profitable small business, or thesis-driven R&D
  • Current evidence: interviews, pilots, pre-sales, traffic, competitor list, pricing assumptions

Choose the Right Output

| If the user asks… | Produce… | Use… |
| --- | --- | --- |
| “Validate this idea” / “Is this worth building?” | 9-dimension scorecard + verdict | validation-scorecard.md, go-no-go-decision.md |
| “What’s the riskiest assumption?” | RAT + test plan | riskiest-assumption-test.md, validation-experiment-planner.md |
| “Test my hypothesis” | Hypothesis canvas + experiment design | hypothesis-canvas.md, hypothesis-testing-guide.md |
| “Market size for X” | TAM/SAM/SOM sizing + assumptions table | market-sizing-worksheet.md, market-sizing-patterns.md |
| “Can this be profitable / what’s my runway?” | Unit economics + runway + scenarios | financial-modeling-calculator.md |
| “Should I build X or Y?” | Comparative scorecard + decision memo | validation-scorecard.md, go-no-go-decision.md |

Workflow

  1. Clarify the target outcome and business model; set default thresholds accordingly.
  2. Identify the RAT (riskiest assumption to test): the assumption that, if wrong, kills the business.
  3. Plan the validation ladder: interviews -> smoke test -> concierge/Wizard-of-Oz (WoZ) -> paid pilot.
  4. Run the cheapest falsifiable test first; pre-register PASS/FAIL thresholds and stop rules.
  5. Score all 9 dimensions using evidence; downgrade scores when evidence is weak.
  6. Produce a decision memo: verdict, why, what would change the decision, and the next smallest reversible step.
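Step 4's pre-registration can be sketched as a small spec object that is frozen before the experiment runs, so PASS/FAIL is decided by the spec rather than by post-hoc judgment. This is a minimal sketch; the class name, fields, and all numbers below are illustrative, not part of any template in this skill.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExperimentSpec:
    """A pre-registered smoke-test spec. Thresholds and the stop rule
    are fixed before any traffic is sent (no moving goalposts)."""
    name: str
    pass_threshold: float   # qualified conversion rate at or above this -> PASS
    fail_threshold: float   # conversion rate below this at cap -> FAIL
    max_visitors: int       # sample-size cap (stop rule)

    def verdict(self, conversions: int, visitors: int) -> str:
        if visitors < self.max_visitors:
            return "CONTINUE"          # stop rule not yet reached
        rate = conversions / visitors
        if rate >= self.pass_threshold:
            return "PASS"
        if rate < self.fail_threshold:
            return "FAIL"
        return "INCONCLUSIVE"          # between thresholds: escalate or redesign

# Hypothetical numbers: 5% pass bar, 2% kill bar, 500-visitor cap.
spec = ExperimentSpec("landing-page smoke test", 0.05, 0.02, 500)
```

Because the spec is frozen, the only way to change the bar mid-experiment is to visibly create a new spec, which keeps the evidence trail honest.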

9-Dimension Scorecard

| Dimension | Weight | What it measures |
| --- | --- | --- |
| Problem severity | 15% | Urgency, cost of inaction, current workarounds |
| Market size | 12% | Sufficient demand for the target outcome |
| Market timing | 10% | Clear “why now” and tailwinds |
| Competitive moat | 12% | Defensibility over time |
| Unit economics | 15% | Profit path (incl. payback and margins) |
| Founder-market fit | 8% | Access, expertise, and execution capability |
| Technical feasibility | 10% | Buildability, dependencies, constraints |
| GTM clarity | 10% | ICP, channels, motion, first customers |
| Risk profile | 8% | What can kill it and likelihood |

Verdict thresholds (default):

  • 80–100: GO
  • 60–79: CONDITIONAL (validate RAT first)
  • 40–59: PIVOT
  • <40: NO-GO

Deep scoring rubrics and calibration live in validation-methodology.md.
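The weights and verdict bands above reduce to a straightforward weighted sum. A minimal sketch (the dictionary keys are invented identifiers; the weights and cutoffs are taken directly from the tables above):

```python
# Weights from the 9-dimension scorecard; they sum to 1.0.
WEIGHTS = {
    "problem_severity": 0.15,
    "market_size": 0.12,
    "market_timing": 0.10,
    "competitive_moat": 0.12,
    "unit_economics": 0.15,
    "founder_market_fit": 0.08,
    "technical_feasibility": 0.10,
    "gtm_clarity": 0.10,
    "risk_profile": 0.08,
}

def weighted_score(scores: dict[str, float]) -> float:
    """scores: per-dimension values on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def verdict(total: float) -> str:
    """Default verdict bands from this skill."""
    if total >= 80:
        return "GO"
    if total >= 60:
        return "CONDITIONAL"   # validate the RAT first
    if total >= 40:
        return "PIVOT"
    return "NO-GO"
```

Note that a uniform 70 across all dimensions lands in CONDITIONAL, which matches the skill's bias: decent-everywhere ideas still need their riskiest assumption tested before a GO.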

Evidence Rules

  • Strong evidence is behavioral commitment with cost (time, money, switching, access); weak evidence is opinions and hypotheticals.
  • Triangulate important claims across at least two sources (especially market sizing and competitor state).
  • Keep an evidence trail: link + capture month; separate “fact” vs “assumption”.
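One lightweight way to enforce these rules is to give each claim a structured evidence record, so "fact vs assumption", evidence strength, capture month, and triangulation are explicit fields rather than memory. A sketch with illustrative field names (not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    claim: str
    kind: str           # "fact" or "assumption"
    strength: str       # "behavioral" (commitment with cost) or "stated" (opinion)
    sources: list[str]  # links to where the evidence was captured
    captured: str       # capture month, "YYYY-MM"

def triangulated(ev: Evidence) -> bool:
    """Important claims (e.g. market size) need at least two sources."""
    return len(ev.sources) >= 2
```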

Validation Ladder (Default)

| Step | Goal | Strong signal |
| --- | --- | --- |
| Interviews | Validate the problem and context | Repeated pain with real workarounds and spend |
| Smoke test | Validate demand | Qualified conversion with price shown |
| Concierge/WoZ | Validate workflow value | Users complete the job and return |
| Paid pilot | Validate willingness-to-pay | Paid, renewed, or expanded |

AI / Automation Notes (2026)

If the idea depends on AI (agents, copilots, automation), validate these explicitly:

  • Data rights and access: can you legally and reliably access required data?
  • Reliability: define success metrics, failure modes, and human fallback; validate on real workflows.
  • Cost-to-serve: model inference + retrieval + human-in-the-loop costs in assets/financial-modeling-calculator.md.

See hypothesis-testing-guide.md for AI-specific experiment patterns.
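The cost-to-serve bullet can be sketched as a per-task model: automated cost (inference + retrieval) plus the expected human-in-the-loop cost when the system falls back to a person. All rates below are placeholder assumptions, not benchmarks; the actual modeling lives in financial-modeling-calculator.md.

```python
def cost_to_serve(tasks_per_month: int,
                  inference_cost_per_task: float,
                  retrieval_cost_per_task: float,
                  human_review_rate: float,
                  human_cost_per_review: float) -> float:
    """Monthly cost-to-serve for an AI-dependent product:
    automated per-task cost plus expected human-fallback cost."""
    automated = tasks_per_month * (inference_cost_per_task + retrieval_cost_per_task)
    human = tasks_per_month * human_review_rate * human_cost_per_review
    return automated + human

# Placeholder assumptions: 1,000 tasks/mo, $0.02 inference, $0.005 retrieval,
# 10% of tasks escalated to a human at $2.00 per review.
monthly = cost_to_serve(1000, 0.02, 0.005, 0.10, 2.00)
```

Even at a modest 10% fallback rate, the human term dominates here, which is why the reliability bullet (failure modes and fallback rate) feeds directly into unit economics.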

Integration Points

Receives From

Feeds Into

Resources

| Resource | Purpose |
| --- | --- |
| validation-methodology.md | Scoring rubrics and calibration |
| hypothesis-testing-guide.md | Experiment design and RAT workflows |
| market-sizing-patterns.md | TAM/SAM/SOM methods and pitfalls |
| moat-assessment-framework.md | Defensibility analysis |

Templates

| Template | Purpose |
| --- | --- |
| validation-scorecard.md | Full 9-dimension scoring |
| go-no-go-decision.md | Decision memo format |
| hypothesis-canvas.md | Hypothesis definition |
| validation-experiment-planner.md | Experiment planning + thresholds |
| riskiest-assumption-test.md | RAT identification and test design |
| market-sizing-worksheet.md | Sizing worksheet |
| financial-modeling-calculator.md | Runway + scenarios + unit economics |

Data

| File | Purpose |
| --- | --- |
| sources.json | Curated validation resources |


Repo: luisschmitzheadline/VC-Skills.md (by luisschmitzheadline)