What this round tests
- North Star metric definition and decomposition
- Input vs output metrics — which lever you'd pull
- Counter-metrics that protect users and trust
What interviewers are listening for
- A clean metric tree, not a single vanity number
- Trade-off awareness — what could regress
- Data-informed decisions, not data-driven theatre
Common mistakes
- Picking a North Star metric that's easy to game
- Forgetting counter-metrics entirely
- Treating retention and activation as interchangeable
Practice questions (78)
- Choose metrics for a creator marketplace
- Diagnose a 12-point activation drop
- North Star for a B2B collaboration tool
- North Star for a music free tier
- A/B test a checkout redesign
- Conversion up, session length down
- Onboarding completion vs. day-7 retention
- Two segments, opposite signals
- Engagement drops a week after launch
- Retention up, ARPU down
- Input metrics for a creator's first 14 days
- Power and runtime for an A/B test
- Drop-off at 'add payment'
- North Star for a podcast platform
- North Star for a job-search product
- North Star for a clinic EHR
- North Star for a school-district product
- Rewriting a North Star at year five
- Input metrics for trial-to-paid
- Input metrics for week-1 retention
- Input metrics for content quality
- Input metrics for marketplace liquidity
- Pricing-page bounce
- Sign-up → first action funnel
- Drop-off at 'invite teammates'
+ 53 more in the full bank.
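Several questions above (e.g. "Power and runtime for an A/B test", "A/B test a checkout redesign") come down to one calculation interviewers expect you to sketch: how many users per arm you need, and how long that takes at your traffic level. Below is a minimal sketch using the standard normal-approximation formula for a two-proportion test; the function names, baseline rate, and traffic figures are illustrative assumptions, not part of the question bank.

```python
from statistics import NormalDist
import math

def sample_size_per_arm(p_base, mde_abs, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-proportion z-test.

    p_base: baseline conversion rate (e.g. 0.10)
    mde_abs: minimum detectable effect in absolute terms (e.g. 0.01)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    p_bar = p_base + mde_abs / 2                   # pooled-rate approximation
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / mde_abs ** 2
    return math.ceil(n)

def runtime_days(n_per_arm, eligible_daily_traffic, arms=2):
    """Days to fill all arms given daily traffic eligible for the test."""
    return math.ceil(arms * n_per_arm / eligible_daily_traffic)

# Illustrative: detect a 10% -> 11% lift with 5,000 eligible users/day.
n = sample_size_per_arm(p_base=0.10, mde_abs=0.01)
print(n, runtime_days(n, eligible_daily_traffic=5000))
```

In an interview, the point is less the arithmetic than the levers it exposes: halving the detectable effect roughly quadruples the sample size, so runtime trades off directly against sensitivity.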
Practice this round in PrepOS
Calibrate your target level, set the practice category to execution, and the adaptive queue will surface the highest-impact reps first.
Practice execution →