AI Prompts for Startups

Startups run lean by necessity, and AI has become the ultimate force multiplier for small teams. The right prompts let a two-person founding team produce the output of a much larger organization — but only if the prompts encode real startup thinking, not generic business advice. MVP planning prompts should include your target user, the core problem, your hypothesis, technical constraints (budget, timeline, team skills), and the minimum feature set needed to test the hypothesis. Ask the model to prioritize ruthlessly — the output should be a focused scope, not a feature wish list. Include questions like "What can we cut and still validate the core assumption?" to force the AI into startup-mode thinking.

Investor pitch prompts should specify the stage (pre-seed, seed, Series A), the audience (angel investor, VC partner, accelerator), your traction metrics, market size data, and competitive landscape. Ask for a narrative arc that opens with the problem, demonstrates market pull, explains your unfair advantage, and closes with the ask and use of funds. For pitch deck copy specifically, constrain each slide to one key message with supporting data points — investors skim decks in under three minutes. Growth experiment prompts should include your current metrics baseline, the lever you are targeting (acquisition, activation, retention, revenue, referral), your budget, and your measurement plan. Ask the model to generate ten experiment ideas ranked by expected impact and effort.

Hiring prompts help startups punch above their weight in recruiting. Provide the role, your company stage, culture values, technical requirements, and compensation range. Ask the model to write job descriptions that attract startup-minded candidates — people who thrive in ambiguity, wear multiple hats, and care about impact over title. For interview question generation, specify the competencies you are evaluating and ask for behavioral questions with scoring rubrics. Market research prompts should include your industry, target segment, and specific questions you need answered — then ask the model to synthesize publicly available data into actionable insights with source citations. The speed advantage is enormous: tasks that would take a consultant a week can be drafted in an hour and refined from there.

Startup Prompts You Can Use Today

Copy any prompt, fill in the {{variables}}, and paste into ChatGPT, Claude, or any AI tool.

Pitch Deck Copy Generator

You are a pitch deck consultant who has helped raise $500M+ across seed to Series B rounds. Write slide-by-slide copy for {{startup_name}}.

Stage: {{funding_stage}} (pre-seed | seed | Series A | Series B)
Audience: {{investor_type}} (angel investors | VC partners | accelerator)

Company details:
- Problem: {{problem_statement}}
- Solution: {{solution_description}}
- Traction: {{traction_metrics}}
- Market size: {{market_size}}
- Business model: {{business_model}}
- Team: {{team_background}}
- Ask: {{funding_ask}} for {{use_of_funds}}

Produce copy for each slide:

### Slide 1: Title
Company name, one-line description, stage, and ask amount.

### Slide 2: Problem
Visceral description of the pain. Use a specific story or data point, not abstractions.

### Slide 3: Solution
What you built and how it solves the problem. One key screenshot/demo description.

### Slide 4: Market Size
TAM / SAM / SOM with sources. Show the "wedge" — where you start and how you expand.

### Slide 5: Traction
Metrics that matter for this stage. Growth rate > absolute numbers at pre-seed/seed.

### Slide 6: Business Model
How you make money. Unit economics if available.

### Slide 7: Competition
2x2 matrix positioning. Be honest about competitors — investors will Google them.

### Slide 8: Team
Why THIS team wins. Relevant experience, unfair advantages, domain expertise.

### Slide 9: The Ask
Amount, use of funds breakdown, milestones this funding will achieve.

Rules: One key message per slide. Max 6 bullet points per slide. No jargon. Every data point needs a source or qualifier.

Why it works: Follows the proven venture pitch narrative arc, enforces one message per slide, and requires honest competitor framing that builds credibility with investors.
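For the traction slide, "growth rate > absolute numbers" usually means quoting a compound monthly growth rate rather than raw MRR. A minimal sketch of that arithmetic, with hypothetical numbers:

```python
# Compound monthly growth rate (CMGR) sketch -- all figures hypothetical.
# CMGR = (latest / earliest) ** (1 / months_elapsed) - 1

def cmgr(first: float, last: float, months: int) -> float:
    """Compound monthly growth rate over `months` elapsed months."""
    if first <= 0 or months <= 0:
        raise ValueError("need a positive starting value and month count")
    return (last / first) ** (1 / months) - 1

# Example: MRR grew from $4,000 to $10,000 over 6 months.
rate = cmgr(4_000, 10_000, 6)
print(f"CMGR: {rate:.1%}")  # roughly 16.5% month-over-month
```

Quoting "16.5% MoM for 6 months" reads far stronger at pre-seed than "$10k MRR" on its own.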

Investor Update Email

Write a monthly investor update email for {{startup_name}} covering {{month_year}}.

## Key Metrics
{{metrics_table}}
(Include: MRR, growth rate, burn rate, runway, users, engagement, NPS — whatever is relevant)

## Highlights
{{highlights}}

## Challenges
{{challenges}}

## Asks
{{asks_from_investors}}

---

Format the email as:

**Subject line:** {{startup_name}} {{month_year}} Update: [one key metric or milestone]

**Body structure:**
1. **TL;DR** (3 bullets max — the update for investors who won't read further)
2. **Metrics dashboard** — table with this month, last month, and MoM change. Use arrows or +/- for trend.
3. **What went well** — 2-3 accomplishments with context on why they matter
4. **What didn't go well** — 1-2 challenges, written honestly. Include what you're doing about it.
5. **Key learnings** — what you learned this month that changed your thinking
6. **Asks** — specific, actionable requests (intros, advice, resources). Make it easy for investors to help.
7. **What's next** — top 3 priorities for next month

Tone: Confident but honest. Investors respect founders who acknowledge problems early. Never hide bad news — frame it with your response plan.

Previous month's priorities and whether they were achieved:
{{previous_priorities}}

Why it works: Opens with TL;DR for skimmers, pairs bad news with action plans, and ends with specific asks that make it easy for investors to provide value.
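The metrics dashboard in step 2 is simple enough to generate yourself each month. A sketch of the table rows with MoM change and trend arrows — metric names and values here are made up for illustration:

```python
# Build the MoM dashboard rows described in the update template.
# All metric values are hypothetical.

def mom_row(name: str, last_month: float, this_month: float) -> str:
    """One markdown table row with a trend arrow and signed MoM percentage."""
    change = (this_month - last_month) / last_month if last_month else 0.0
    arrow = "↑" if change > 0 else ("↓" if change < 0 else "→")
    return f"| {name} | {last_month:,.0f} | {this_month:,.0f} | {arrow} {change:+.1%} |"

metrics = [("MRR ($)", 12_000, 13_800), ("Active users", 940, 1_105), ("Burn ($)", 45_000, 42_000)]
print("| Metric | Last month | This month | MoM |")
print("|--------|-----------|------------|-----|")
for row in metrics:
    print(mom_row(*row))
```

Pasting the generated table into the prompt as {{metrics_table}} gives the model accurate numbers to narrate instead of asking it to do the arithmetic.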

Lean Canvas Builder

Create a complete Lean Canvas for {{startup_name}}: {{one_line_description}}.

Target customer: {{target_customer}}
Industry: {{industry}}

Fill in each of the 9 boxes with specific, testable statements — not vague aspirations:

## Lean Canvas: {{startup_name}}

### 1. Problem (Top 3)
List the 3 most critical problems your target customer faces. For each:
- The problem in their words (not yours)
- How they currently solve it (existing alternatives)
- Why current solutions are inadequate

### 2. Customer Segments
- Primary segment: who has this problem most acutely?
- Early adopters: who will try an imperfect v1? (Be specific: job title, company size, behavior)

### 3. Unique Value Proposition
One clear sentence: "We help [customer] achieve [outcome] by [mechanism], unlike [alternative]."

### 4. Solution
For each of the 3 problems, the simplest possible feature that addresses it.
(Resist the urge to list 20 features. Three is the right number for v1.)

### 5. Channels
How will you reach early adopters? Rank by:
- Cost to acquire a customer
- Time to first conversion
- Scalability

### 6. Revenue Streams
- Pricing model: {{pricing_model}}
- Price point and justification (anchored to value delivered or competitor pricing)
- When you will charge (day 1 vs. freemium vs. after milestone)

### 7. Cost Structure
- Fixed costs (team, infrastructure, tools)
- Variable costs (per-customer costs)
- Monthly burn rate estimate

### 8. Key Metrics
The 3-5 numbers you will check weekly to know if you're on track.
For each: the metric, current baseline, 90-day target, and why this metric matters.

### 9. Unfair Advantage
Something that cannot be easily copied or bought: proprietary data, network effects, domain expertise, regulatory moat, community. "First mover" is NOT an unfair advantage.

### Riskiest Assumption
Which box contains the assumption that, if wrong, kills the business? This is what you test first.

Why it works: Forces specificity in every box (no vague statements), limits the solution to 3 features, and identifies the riskiest assumption to test first.
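Box 7's burn rate estimate is a one-line calculation worth doing before you fill in the canvas. A rough sketch of the arithmetic, with all inputs hypothetical:

```python
# Box-7 sketch: net monthly burn and runway. All numbers are hypothetical.

def monthly_burn(fixed: float, variable_per_customer: float,
                 customers: int, revenue: float) -> float:
    """Net monthly burn: total costs minus revenue (positive = losing cash)."""
    return fixed + variable_per_customer * customers - revenue

def runway_months(cash: float, burn: float) -> float:
    """Months of runway at current net burn; infinite if cash-flow positive."""
    return float("inf") if burn <= 0 else cash / burn

burn = monthly_burn(fixed=30_000, variable_per_customer=12, customers=500, revenue=15_000)
print(f"Net burn: ${burn:,.0f}/mo, runway: {runway_months(400_000, burn):.1f} months")
```

Feeding the resulting burn and runway figures into the prompt keeps the canvas grounded in numbers rather than aspirations.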

Customer Discovery Questions

Generate a customer discovery interview script for {{startup_name}} targeting {{target_customer}}.

Hypothesis we are testing:
{{hypothesis}}

Problem space: {{problem_description}}

## Customer Discovery Interview Guide

### Warm-Up (2 min)
- Establish rapport. Ask about their role and day-to-day responsibilities.
- "Tell me about your role at {{company_context}}. What does a typical week look like?"

### Problem Exploration (15 min)
Open-ended questions to understand their world BEFORE mentioning your solution:

1. "Walk me through the last time you dealt with {{problem_area}}. What happened?"
2. "What's the most frustrating part of {{problem_area}} for you right now?"
3. "How are you currently handling this? Walk me through your process."
4. "What have you tried before that didn't work? Why?"
5. "If you could wave a magic wand and fix one thing about {{problem_area}}, what would it be?"
6. "How much time/money do you spend on this per {{time_period}}?"
7. "On a scale of 1-10, how painful is this problem? What would make it a 10?"

### Validation Questions (10 min)
Test specific assumptions without pitching:

8. "Some people tell us that {{assumption_1}}. Does that resonate with your experience?"
9. "We've heard that {{assumption_2}}. How does that compare to what you see?"
10. "If a solution existed that could {{proposed_value}}, how would that change your workflow?"

### Willingness to Pay (5 min)
11. "What are you currently paying for tools/services in this area?"
12. "If something solved [their stated #1 pain], what would that be worth to you?"
13. "Who makes the purchasing decision for tools like this at your company?"

### Close (3 min)
14. "Who else on your team experiences this problem? Would you be open to connecting me?"
15. "Can I follow up in a few weeks to share what we're building based on these conversations?"

## Interview Rules
- LISTEN more than talk (80/20 rule)
- Never pitch your solution during problem exploration
- Record specific quotes, not summaries
- If they say "that would be nice to have" — probe deeper. "Nice to have" ≠ will pay for.
- Track: did they describe the problem unprompted, or only after you framed it?

Why it works: Follows Mom Test principles — explores the problem before mentioning any solution, uses past behavior questions instead of hypotheticals, and probes willingness to pay.

MVP Specification Writer

Write an MVP specification for {{product_name}}: {{product_description}}.

Target user: {{target_user}}
Core hypothesis: {{hypothesis}}
Timeline: {{timeline}}
Team: {{team_composition}}
Tech stack: {{tech_stack}}

## MVP Specification

### 1. Problem Statement
What specific problem does this solve? Who has it? How do we know?

### 2. Success Criteria
How do we know if the MVP succeeded?
- Primary metric: {{primary_metric}} — target: {{target_value}} within {{timeframe}}
- Secondary metrics: engagement, retention, NPS (define thresholds)
- Kill criteria: what result tells us to pivot or stop?

### 3. User Stories (MoSCoW Prioritized)

**Must Have** (MVP won't work without these):
| Story | Acceptance Criteria | Estimate |
|-------|-------------------|----------|

**Should Have** (include if time allows):
| Story | Acceptance Criteria | Estimate |
|-------|---------------------|----------|

**Won't Have** (explicitly out of scope for v1):
List features you are intentionally NOT building and why.

### 4. User Flow
Step-by-step flow for the core user journey:
1. User arrives at... → sees...
2. User does... → system responds with...
(Cover only the primary flow. Edge cases are not MVP.)

### 5. Technical Architecture
- Simple architecture diagram (describe in text)
- Key technical decisions and why
- What you're buying vs. building
- Infrastructure: {{infrastructure}}

### 6. What We're Faking
What looks like a feature to the user but is actually manual/hardcoded/wizard-of-oz behind the scenes? (This is good — it's how you test demand before building.)

### 7. Timeline
| Week | Milestone | Deliverable |
|------|-----------|-------------|

### 8. Risks
Top 3 risks to shipping on time and what we'll do about each.

The cardinal rule: if a feature doesn't directly test {{hypothesis}}, it does not belong in the MVP.

Why it works: Defines kill criteria upfront, explicitly lists what is NOT being built, encourages Wizard of Oz testing, and ties every feature back to the core hypothesis.

Growth Experiment Designer

Design a growth experiment for {{startup_name}} targeting the {{growth_lever}} lever (acquisition | activation | retention | revenue | referral).

Current state:
- {{growth_lever}} metric: {{current_value}}
- Target: {{target_value}} within {{timeframe}}
- Budget: {{budget}}
- Team available: {{team_resources}}

## Growth Experiment: {{experiment_name}}

### Hypothesis
"We believe that [doing {{proposed_change}}] for [{{target_segment}}] will result in [{{expected_outcome}}] because [{{reasoning}}]."

### Experiment Design
- **What we're changing:** Specific description of the variant
- **Control:** What the current experience looks like
- **Audience:** {{target_segment}} — how we'll segment/target them
- **Sample size needed:** [calculate based on expected effect size and significance]
- **Duration:** [minimum time for statistical significance]

### Implementation
Step-by-step what needs to be built/configured:
1. ...
2. ...
3. ...
Estimated effort: {{effort_estimate}}

### Metrics
| Metric | Type | How Measured | Current Baseline | Target |
|--------|------|-------------|-----------------|--------|
| Primary | ... | ... | ... | ... |
| Secondary | ... | ... | ... | ... |
| Guardrail | ... | ... | ... | ... |

Guardrail metrics = things that should NOT get worse (e.g., don't improve activation by hurting retention).

### Decision Framework
- **Ship it** if: primary metric improves by >= {{min_improvement}}% with statistical significance (p < 0.05)
- **Iterate** if: directionally positive but not significant — extend duration or increase sample
- **Kill it** if: no improvement after {{max_duration}} or guardrail metrics degraded

### Next Experiments
If this works, what do we test next to compound the gains?
If this fails, what alternative hypothesis do we test?

Generate {{num_alternatives}} alternative experiment ideas for the same lever, ranked by expected impact / effort ratio.

Why it works: Structures experiments with hypothesis, guardrail metrics, and pre-defined decision criteria so results are actionable rather than ambiguous.
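The "[calculate based on expected effect size...]" and "p < 0.05" placeholders in the template above can be filled in with a short calculation. A sketch using the standard normal approximation for two proportions — the baseline and lift values are hypothetical:

```python
# Sketch: per-arm sample size and the ship/iterate significance check for a
# two-proportion experiment. Baseline (10%) and lift (12%) are hypothetical.
from math import sqrt
from statistics import NormalDist

def sample_size_per_arm(p_base: float, p_target: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Normal-approximation sample size per variant, two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_base + p_target) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p_base * (1 - p_base) + p_target * (1 - p_target))) ** 2
    return int(n / (p_target - p_base) ** 2) + 1

def p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: detect an activation lift from 10% to 12%.
print("per arm:", sample_size_per_arm(0.10, 0.12))
print("ship it?", p_value(400, 4_000, 480, 4_000) < 0.05)
```

Running the numbers first tells you whether your traffic can even power the experiment — detecting a 2-point lift from a 10% baseline needs roughly 3,800 users per arm, which rules out many small-sample "experiments" before they waste a sprint.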