From MVP to PMF: a 90-day plan for testing hypotheses and retention metrics

Sep 16, 2025 | By Team SR

Getting from a scrappy prototype to a product customers keep coming back to is a race won by focus, not luck. Whether you’re building a new wallet, a micro-betting feature, or a pool-betting community, the path is the same: state your riskiest assumptions, test them quickly, and let retention decide what survives. Think of a sportsbook feature such as betting on horse racing—it lives or dies on fast onboarding, a clear first bet, and repeat engagement. This 90-day plan shows how to structure that work with clear hypotheses, crisp metrics, and decisions tied to data rather than hunches.

Days 0–45: Prove the problem, cut scope, and ship an activation path

The first phase sets the foundation. Your job is to confirm a real problem, strip the MVP to the core job-to-be-done, instrument the funnel, and get a small but real audience to first value. Think activation first, revenue later. You should already have the analytics pipeline wired (events, cohorts, and experiment flags) before you invite the first user cohort.
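
A minimal sketch of that instrumentation, assuming an in-memory event log; the event names, fields, and variant labels here are illustrative, not a required taxonomy:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative funnel event: who did what, under which experiment variant.
@dataclass
class Event:
    user_id: str
    name: str            # e.g. "signup", "kyc_start", "bet_submitted"
    variant: str         # experiment flag, e.g. "onboarding_skip_all"
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    props: dict = field(default_factory=dict)

log: list[Event] = []

def track(user_id: str, name: str, variant: str = "control", **props) -> None:
    """Append one funnel event; a real pipeline would ship this to a warehouse."""
    log.append(Event(user_id, name, variant, props=props))

track("u1", "signup")
track("u1", "bet_submitted", variant="onboarding_skip_all", stake=2.0)
```

Even this toy version forces the useful discipline: every event carries a user id, a timestamp, and the variant the user saw, so cohorts and experiment reads fall out of the same log.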

Experiment map for the first 45 days

| Hypothesis | Signal you need | Metric & target (first pass) | Test setup | Decision rule |
| --- | --- | --- | --- | --- |
| Users have a strong “race-day itch” you can satisfy fast | % of new users who place a first bet within one session | Activation: ≥35% of sign-ups place a first bet within 10 minutes | Onboarding A/B: “Skip-all” vs. “Explain-as-you-go” steps | Keep variant if +20% activation vs. control |
| KYC friction is scaring off legit users | Bounce after KYC start | KYC pass: ≥70% of starts; time-to-pass: ≤4 minutes median | KYC vendor form vs. native light KYC (where legal) | Keep if drop in abandons ≥25% and fraud stays flat |
| Odds format & slip layout are unclear | Rage-clicks and slip edits before submit | Bet slip error rate: ≤8%; first bet success: ≥90% | Slip layout A/B + inline helper “?” tooltip | Ship variant if error rate falls ≥30% |
| First deposit happens when deposit prompts appear post-selection | Deposit conversion from selection screen | FTD (first-time deposit): ≥25% of activated users | Trigger deposit prompt after selection vs. before | Keep if FTD improves without lowering bet submit rate |
| Early content notifications spark day-2 returns | D2 retention lifts in notified cohort | D2 ≥28%, D7 ≥18% (directional) | Push/email “Race you follow starts in 30 min” vs. no reminder | Keep if D2 lift ≥3 pp and opt-out <5% |
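
The decision rules above reduce to a few lines of arithmetic. A sketch with hypothetical counts, reading “+20% activation vs. control” as a relative lift (an assumption; you may prefer absolute points):

```python
def activation_rate(signups: int, activated: int) -> float:
    """Share of sign-ups who placed a first bet within the session."""
    return activated / signups

def keep_variant(control_rate: float, variant_rate: float,
                 min_lift: float = 0.20) -> bool:
    """Decision rule: keep the variant only if its activation beats
    control by at least the relative lift threshold."""
    return variant_rate >= control_rate * (1 + min_lift)

# Hypothetical counts for the onboarding A/B test.
control = activation_rate(400, 120)   # 30% activation
skip_all = activation_rate(410, 152)  # ~37% activation

print(keep_variant(control, skip_all))  # → True
```

Writing the rule down before the test runs is the point: the keep/kill call is made by the threshold, not by whoever argues loudest in the readout.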

Why this matters now

In the opening stretch, you’re not hunting for perfection; you’re chasing proof that a meaningful chunk of newcomers can reach first value quickly and without hand-holding. If you can’t get activation and D2/D7 to move with small, sharp changes, you either misread the problem or packed too much friction into the path. Kill scope, re-test, and avoid piling on features that add complexity without moving the activation needle.

Days 46–90: Push retention, pressure-test PMF, and line up growth loops

Assume your activation path is workable. Phase two shifts focus to habit formation, repeat usage, and early unit economics. The aim is to create repeatable reasons to come back (content, odds drops, social proof) while verifying that cohorts stabilize rather than decay.

The high-leverage plays for retention and PMF checks (run in parallel sprints)

  • Cohort health check
    • Build weekly signup cohorts; track D1/D7/D30 and rolling WAU/MAU.
    • Watch the “tail”: does retention flatten after week 4 or keep sliding?
  • Habit loop design
    • Trigger: race alerts and personalized picks → Action: one-tap rebet or “smart slip” → Reward: faster settlement, free micro-stakes, or streak badges.
    • Measure repeat bet cadence per user per week.
  • Pricing & promos sanity
    • Test low-stake freebet vs. boosted odds on the same fixture; compare retention and gross margin impact, not just click-through.
  • CRM lifecycle
    • Three lifecycle tracks: “welcome to first bet,” “first deposit to second deposit,” and “lapsed 7–14 days.” Keep content short, contextual, and grounded in the user’s last action.
  • PMF survey + qualitative
    • Run a short in-product poll to a random 30%: “How would you feel if you could no longer use this product?” Track “very disappointed” %, top use cases, and missing features.
  • North Star metric
    • Pick one behavior tightly linked to value (e.g., “weekly bettors who placed ≥2 bets on events they follow”). Make every sprint explain movements in this metric.
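
The cohort health check above can be sketched as a small function that turns signup dates and activity logs into D1/D7/D30 rates per cohort; the input shapes here are assumptions for illustration:

```python
from datetime import date, timedelta

def cohort_retention(signups: dict, activity: dict,
                     days: tuple = (1, 7, 30)) -> dict:
    """Classic DN retention: for each signup date, the share of that
    cohort with a core action exactly N days later.
    signups:  {user_id: signup_date}
    activity: {user_id: set of dates with a core action}
    """
    cohorts: dict = {}
    for uid, d0 in signups.items():
        row = cohorts.setdefault(d0, {"size": 0, **{f"D{n}": 0 for n in days}})
        row["size"] += 1
        for n in days:
            if d0 + timedelta(days=n) in activity.get(uid, set()):
                row[f"D{n}"] += 1
    # Convert counts to rates per cohort.
    return {
        d0: {f"D{n}": row[f"D{n}"] / row["size"] for n in days}
        for d0, row in cohorts.items()
    }
```

For example, a two-user cohort where one user returns on day 1 and day 7 yields D1 = 0.5, D7 = 0.5, D30 = 0.0. Laying these rows side by side week over week is what shows you whether the tail flattens after week 4 or keeps sliding.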

What you’re deciding by day 90

By the end of this phase you should see clear signs that cohorts are stabilizing and that at least one loop creates repeat use without heavy discounts. If you need deep promos to prop up D30, the product hasn’t earned repeat behavior yet; go back to the habit loop and content timing. If the “very disappointed” share is low and churn is high, you likely shipped a convenience improvement rather than solved a must-have job—pivot to the sharper pain you heard in interviews.

How to choose and defend your retention metrics

Retention metrics can send you chasing shadows if they’re defined poorly. Set them once, document them, and make them boring on purpose. A sportsbook or casino feature should prefer behavior-based retention over vanity metrics like raw sessions.

Start by writing down the exact formulas and acceptable ranges for each key metric. Agree on a first-pass target that is both ambitious and believable for your category and price point. Then link every experiment brief to a metric it intends to move, so you never run tests “just because.”

Metric definitions worth publishing to the whole team

  • Activation: % of sign-ups who perform the core action inside one session (e.g., submit first bet or load first stake). Target: 30–40% early, rising with UX fixes.
  • Short-term retention: D1/D7/D30 active users by cohort (performed core action on that day). Targets vary by product; watch trendlines more than absolutes.
  • Habit depth: Median core actions per active user per week; aim for a steady rise without heavy promo support.
  • FTD rate: % of activated users who deposit within 24 hours. A healthy base often lands 20–35% in early cohorts.
  • Payer conversion: % of month-active users who made at least one deposit this month; pair with ARPPU so promos don’t mask weak pay behavior.
  • Churn: % of users with no core action for 14 consecutive days; define “resurrection” separately.
  • North Star: One sentence that names the behavior you want more of, measured weekly.
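
Publishing the definitions works best when they are literally executable. A minimal sketch of the formulas above as plain functions, so any dashboard or brief computes them the same way:

```python
def activation(signups: int, core_action_first_session: int) -> float:
    """% of sign-ups who perform the core action inside one session."""
    return core_action_first_session / signups

def ftd_rate(activated: int, deposited_within_24h: int) -> float:
    """% of activated users who deposit within 24 hours."""
    return deposited_within_24h / activated

def payer_conversion(month_active: int, depositors: int) -> float:
    """% of month-active users with at least one deposit this month."""
    return depositors / month_active

def arppu(deposit_revenue: float, depositors: int) -> float:
    """Average revenue per paying user; pair with payer_conversion."""
    return deposit_revenue / depositors

def is_churned(days_since_last_core_action: int, window: int = 14) -> bool:
    """Churn per the definition above: no core action for 14 straight days."""
    return days_since_last_core_action >= window
```

With, say, 1,000 sign-ups and 350 first-session bets, `activation` returns 0.35—inside the 30–40% early target. Keeping the functions this dumb is deliberate: a metric nobody can reimplement differently is a metric nobody can argue about.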

Clear, shared definitions prevent goal-post shifting and keep product, marketing, and compliance speaking the same language. When metrics are crisp, decisions get faster and arguments get shorter.

Crafting a backlog that finds PMF faster

A strong backlog is small, biased toward learning, and pruned weekly. The point is not to ship the most; it’s to learn the most per week while staying compliant and safe for users. Keep big bets rare and surround them with quick tests that either magnify a win or kill a weak idea before it eats a month.

Open your backlog meeting by asking: what did we learn last week that changes this week’s top three items? If the answer is “nothing,” you’re either not instrumented or not running tests that matter. Treat “no learning” as a red flag, not a lull.

Your backlog should mix three classes of work: activation friction removers, retention loop refiners, and trust & safety essentials. If the week tilts heavily to one bucket, you should be able to say why (e.g., a compliance update or a cohort shock).

Small, purposeful backlog slices keep the team shipping and the learning flywheel spinning. When the data shows a clear winner, graduate it to the core product and move on—don’t polish endlessly.

When to say you’ve found PMF (and when you haven’t)

PMF is not a vibe; it’s a cluster of signals you can write down. You’ll feel it in support queues, organic referrals, and a calendar that shifts from scrambling for users to shipping requested improvements—but you should still measure it.

Three simple checks help:

  1. Cohort flattening: Post-week-4 retention curves hit a plateau rather than sliding to zero.
  2. PMF survey: “Very disappointed” hits ~40% or your category’s known benchmark; top reasons cluster around the same core use case.
  3. Paid sanity: Modest paid spend (not fire-hose discounts) acquires users who retain within 10–15% of organic cohorts.
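
The three checks can be combined into one scorecard. The thresholds below come from the text; reading “flattening” as week-over-week decline under ~5% and “within 10–15%” as paid retention no more than 15% below organic are interpretive assumptions:

```python
def pmf_checks(plateau_rate: float, prior_week_rate: float,
               very_disappointed: float,
               paid_d30: float, organic_d30: float) -> dict:
    """Score the three PMF checks; passing at least two is the bar."""
    checks = {
        # 1. Cohort flattening: post-week-4 curve roughly flat week over week.
        "cohort_flattening": plateau_rate >= prior_week_rate * 0.95,
        # 2. PMF survey: ~40% "very disappointed".
        "pmf_survey": very_disappointed >= 0.40,
        # 3. Paid sanity: paid cohorts retain within 15% of organic.
        "paid_sanity": paid_d30 >= organic_d30 * 0.85,
    }
    checks["likely_pmf"] = sum(checks.values()) >= 2
    return checks
```

So a product holding 12% retention two weeks running, with a 42% “very disappointed” share and paid cohorts retaining at 20% against 22% organic, passes all three; one that misses two of the three fails the overall call, exactly as the rule below states.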

If you miss two of the three, you’re not there yet. That’s not failure; it’s a clear instruction to tighten the job-to-be-done, sharpen the hook, or fix the habit loop.

Final thoughts

Ninety days is enough time to learn what matters, not to build everything. Anchor the plan to a crisp job-to-be-done, ship only what helps a newcomer reach first value, and let retention, not opinions, call the shots. Keep the scope narrow, the experiments honest, and the definitions public. If the product earns repeat behavior without heavy discounts, you’ll know you’re on the right track—from MVP to a product that users would miss if it disappeared tomorrow.
