Executive Mentor — Stress Test
Business assumption stress testing and validation.
$skillfish add ~skills/executive-mentor-stress-test

Real Conversations
See how the Executive Mentor — Stress Test skill responds to actual requests. These are representative examples of what your AI agent will produce.
My Series B model shows $8M ARR in 24 months from $2.2M today. Key assumptions: 3 new AEs hired in Q1 ramping to $200K quota, 2x marketing budget generating 150 MQLs/month, NRR improving from 92% to 108%. Stress test this for me.
Running a structured stress test across your three key assumption clusters.
Cluster 1 — AE ramp and quota attainment: Base assumption: 3 AEs × $200K annual quota × 2 years = $1.2M gross new ARR from new AEs. Stress scenario: SaaS AE average quota attainment runs around 68%, and ramp takes 9–12 months for enterprise AEs. Realistic contribution by month 24 from Q1 hires: $200K × 68% attainment × 1.5 effective years (accounting for ramp) × 3 AEs = $612K gross new ARR, not $1.2M. That's roughly a $600K gap in your model from this assumption alone.
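The Cluster 1 stress arithmetic can be sketched as a quick calculation. This is a minimal sketch using the benchmark figures from the example (68% attainment, a 9–12 month ramp leaving about 1.5 effective selling years in 24 months); swap in your own comp-plan numbers.

```python
# Stress-case AE contribution: quota x attainment x effective selling time x headcount.
annual_quota = 200_000    # per-AE quota ($/year)
attainment = 0.68         # benchmark SaaS AE quota attainment
effective_years = 1.5     # of 24 months, after a 9-12 month ramp
n_aes = 3                 # Q1 hires

realistic_new_arr = annual_quota * attainment * effective_years * n_aes
print(f"${realistic_new_arr:,.0f} gross new ARR")  # $612,000 gross new ARR
```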
Cluster 2 — Marketing MQL scaling: Base assumption: 2x budget → 150 MQLs/month → pipeline to support $8M ARR. Stress scenario: Marketing channels exhibit diminishing returns. If your current budget generates 60 MQLs/month, doubling spend rarely doubles MQLs; 1.5x–1.8x is typical. At 1.6x, you get 96 MQLs/month, not 150. At your current MQL-to-close conversion rates, that's a 36% reduction in revenue contribution from this channel. Ask: What's your empirical cost per lead (CPL) at current spend, and what evidence do you have that it holds at 2x budget?
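The diminishing-returns math above, as a sketch. The 1.6x scale factor is the midpoint assumption from the example, not an empirical figure:

```python
current_mqls = 60         # MQLs/month at current spend
plan_mqls = 150           # MQLs/month assumed in the model
scale_factor = 1.6        # realistic MQL lift from 2x spend (1.5x-1.8x range)

scaled_mqls = current_mqls * scale_factor    # 96 MQLs/month
shortfall = 1 - scaled_mqls / plan_mqls      # fraction below plan
print(f"{scaled_mqls:.0f} MQLs/month, {shortfall:.0%} below plan")
```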
Cluster 3 — NRR improvement (92% → 108%): This is your most optimistic and least supported assumption. Because churn reduction alone caps NRR at 100% plus your expansion rate, a 16-point improvement requires a meaningful expansion motion (an upsell/cross-sell engine you may not have built) in addition to lower gross churn. What specifically changes between now and month 24 that drives this?
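One way to see why the NRR jump is so demanding: NRR decomposes as 1 + expansion − gross churn, so cutting churn alone caps NRR at 100% plus the expansion rate. The 6%/14% split below is purely illustrative; the actual mix is not given in the conversation:

```python
expansion = 0.06          # assumed expansion rate (illustrative)
gross_churn = 0.14        # chosen so that NRR = 1 + 0.06 - 0.14 = 0.92

nrr_now = 1 + expansion - gross_churn
nrr_ceiling = 1 + expansion   # best case if gross churn falls to zero
print(f"current NRR {nrr_now:.0%}, churn-only ceiling {nrr_ceiling:.0%}")
```

Under this split, even eliminating churn entirely tops out at 106%, so reaching 108% requires expansion of at least 8 points, i.e., a real upsell motion.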
Downside scenario: AE attainment at 68%, MQL scale at 1.6x, NRR improves modestly to 98%. Result: $4.8M ARR at month 24, not $8M.
Recommendation: Build three cases (base/upside/downside) with explicit sensitivities. Present the downside to investors before they find it.
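The three-case build-out can be sketched as a toy model. Every driver here is an assumption to replace with your own: in particular, the $900 of ARR per MQL and the base/upside attainment and scale figures are invented for illustration.

```python
def arr_at_month_24(start_arr, attainment, mql_scale, nrr,
                    annual_quota=200_000, n_aes=3, effective_years=1.5,
                    base_mqls=60, arr_per_mql=900):
    """Toy model: retained ARR + AE-sourced ARR + marketing-sourced ARR."""
    retained = start_arr * nrr ** 2                     # two years of NRR compounding
    ae_arr = annual_quota * attainment * effective_years * n_aes
    mkt_arr = base_mqls * mql_scale * 24 * arr_per_mql  # 24 months of MQLs
    return retained + ae_arr + mkt_arr

scenarios = {
    "downside": {"attainment": 0.68, "mql_scale": 1.6, "nrr": 0.98},
    "base":     {"attainment": 0.75, "mql_scale": 1.7, "nrr": 1.03},
    "upside":   {"attainment": 0.85, "mql_scale": 1.8, "nrr": 1.08},
}

for name, drivers in scenarios.items():
    print(f"{name}: ${arr_at_month_24(2_200_000, **drivers):,.0f}")
```

With these assumed drivers, the downside case lands within rounding of the $4.8M figure quoted above; the value of the exercise is replacing each default with a number you can defend.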
Get Started
How to install and use this skill in your preferred environment.
Skills are designed for AI coding agents (Claude Code, Cursor, Windsurf) and IDE-based workflows where the agent can read files, run scripts, and act on your codebase. Web-based AI can use the knowledge and frameworks, but won't have tool access.
Models & Context
Which AI models and context windows work best with this skill.
Recommended Models
Larger models produce more detailed, production-ready outputs.
Context Window
This skill's SKILL.md is typically 3–10 KB — fits in any modern context window.
All current frontier models (Claude, GPT, Gemini) support 100K+ context. Use the full window for complex multi-scenario financial models.
Pro tips for best results
Be specific
Include numbers (current ARR, quotas, conversion rates, budget) so the skill can pressure-test the model.
Share constraints
Runway, hiring plans, and board expectations all improve the output.
Iterate
Start with a high-level stress test, then ask follow-ups for sensitivity tables, downside scenarios, or an investor-ready summary.
Combine skills
Pair with companion skills below for end-to-end coverage.
Ready to try Executive Mentor — Stress Test?
Install the skill and start getting expert-level guidance in your workflow — any agent, any IDE.
$skillfish add ~skills/executive-mentor-stress-test