ChatGPT Prompts for Product Managers
Product managers spend a disproportionate amount of time writing — PRDs, user stories, stakeholder updates, competitive analyses, roadmap narratives, and launch plans. AI can cut the time spent on these deliverables dramatically, but only when your prompts capture the context that makes the output useful to your specific team and product. Asking ChatGPT to "write a PRD" gives you a generic template. Asking it to write a PRD for a specific feature, with defined user segments, success metrics, technical constraints, and dependencies, produces a document your engineers can actually build from.
The context that matters varies by deliverable:

- **PRDs**: include the problem statement, target user persona, proposed solution, success metrics (quantified), scope boundaries (what is explicitly out of scope), and known technical constraints.
- **User stories**: specify the persona, the job-to-be-done, acceptance criteria format, and edge cases to consider.
- **Competitive analysis**: name the competitors, specify the dimensions you care about (pricing, features, positioning, market share), and state the audience for the analysis (board, engineering, marketing).
- **Roadmaps**: include your planning horizon, strategic themes, resource constraints, and the level of detail needed (epic-level versus feature-level).
- **Stakeholder updates**: specify the audience (executives want different information than engineers), the time period, key wins, blockers, and what decisions you need from them.
The most effective PMs build a personal library of prompts they have refined through iteration. PromptingBox lets you save, version, and organize your product management prompts so you can reuse what works and continuously improve your AI-assisted workflow.
8 Copy-Ready Product Management Prompts
Each prompt includes variables you can customize. Click Copy to use immediately in ChatGPT, Claude, or your preferred AI tool.
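If you reuse these templates often, the `{{variable}}` placeholders can also be filled programmatically before pasting into your AI tool. A minimal Python sketch (the `fill_prompt` helper and the sample values are illustrative, not part of any tool on this page):

```python
import re

def fill_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`."""
    def replace(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"Missing value for placeholder: {name}")
        return variables[name]
    return re.sub(r"\{\{(\w+)\}\}", replace, template)

# Hypothetical example values for one of the templates below.
template = "Write a PRD for {{feature}} in {{product}} targeting {{persona}}."
print(fill_prompt(template, {
    "feature": "bulk CSV export",
    "product": "AcmeCRM",
    "persona": "operations managers",
}))
```

Raising on a missing placeholder (rather than silently leaving `{{…}}` in place) catches the most common copy-paste mistake: sending a prompt with an unfilled variable.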
PRD Writer
You are a senior product manager at a high-growth SaaS company. Write a comprehensive Product Requirements Document (PRD) for the following feature.

Feature: {{feature}}
Product: {{product}}
Target user persona: {{persona}}

Include these sections:

1. **Problem Statement** - What user pain point does this solve? Include qualitative and quantitative evidence if available.
2. **Goals & Success Metrics** - Define 2-3 primary KPIs with specific targets (e.g., "Reduce onboarding drop-off from 40% to 25% within 90 days").
3. **User Stories** - Write 5-8 user stories in "As a [persona], I want [action] so that [outcome]" format, ordered by priority.
4. **Proposed Solution** - Describe the feature in detail. Include key user flows, UI considerations, and interaction patterns.
5. **Scope** - Explicitly define what is IN scope and OUT of scope for v1.
6. **Technical Considerations** - API changes, data model impacts, third-party dependencies, performance requirements.
7. **Dependencies & Risks** - Cross-team dependencies, technical risks, and mitigation strategies.
8. **Timeline** - Suggested phasing (MVP, v1, future iterations) with rough effort estimates.
9. **Open Questions** - List unresolved decisions that need stakeholder input.

Write in a clear, direct style. Avoid filler. Every section should contain specifics, not generic placeholders.
Why it works: Specifying the persona, requiring quantified success metrics, and demanding explicit scope boundaries produces a PRD engineers can build from instead of a vague feature wish.
User Story Generator
Generate a complete set of user stories for the following feature.

Feature: {{feature}}
Product: {{product}}
Primary persona: {{persona}}
Additional personas (if any): {{secondary_personas}}

For each user story, provide:

- **Story**: "As a {{persona}}, I want [action] so that [outcome]"
- **Acceptance Criteria**: 3-5 testable criteria in Given/When/Then format
- **Edge Cases**: 2-3 edge cases the engineering team should handle
- **Priority**: Must-have / Should-have / Nice-to-have (MoSCoW)
- **Estimated Complexity**: S / M / L

Organize stories into these categories:

1. Core functionality (the happy path)
2. Error handling and validation
3. Accessibility and responsive behavior
4. Admin/power-user scenarios
5. Analytics and tracking events

Aim for 12-20 stories total. Prioritize coverage over volume.
Why it works: Requiring Given/When/Then acceptance criteria, edge cases, and complexity estimates produces stories that are sprint-ready. Categorizing by functionality type ensures no blind spots.
Competitive Analysis
Conduct a structured competitive analysis for {{product}} against these competitors: {{competitors}}.

Analysis dimensions:

1. **Product Positioning** - How does each competitor position themselves? What is their core value proposition?
2. **Feature Comparison** - Build a comparison matrix for these key capabilities: {{key_features}}. Rate each competitor as: Strong / Adequate / Weak / Missing.
3. **Pricing & Packaging** - Tiers, pricing model (per-seat, usage-based, flat), free tier availability, enterprise pricing.
4. **Target Audience** - Who is each product built for? Company size, industry, user role.
5. **Strengths & Weaknesses** - 3 strengths and 3 weaknesses for each competitor.
6. **Market Gaps** - Identify 2-3 underserved needs that no competitor fully addresses.
7. **Threat Assessment** - Rank competitors by threat level (High / Medium / Low) with rationale.

Audience for this analysis: {{audience}}

End with a "Strategic Implications" section: 3-5 actionable recommendations for how {{product}} should respond based on the competitive landscape.
Why it works: Naming specific competitors, defining comparison dimensions, and specifying the audience (board vs. engineering) ensures the analysis is tailored and actionable, not a generic overview.
Feature Prioritization (RICE Framework)
Prioritize the following feature ideas using the RICE scoring framework.

Product: {{product}}
Planning horizon: {{quarter}}
Features to evaluate: {{feature_list}}

For each feature, score these dimensions:

- **Reach**: How many users/accounts will this affect per {{time_period}}? Provide a specific number estimate with reasoning.
- **Impact**: How much will this move the target metric? (3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal). State which metric and why.
- **Confidence**: How certain are we about reach and impact estimates? (100% = high, 80% = medium, 50% = low). Cite the evidence level (data, anecdotal, gut).
- **Effort**: How many person-weeks of engineering effort? Factor in design, backend, frontend, QA, and documentation.

Calculate RICE score = (Reach x Impact x Confidence) / Effort

Present results as:

1. A ranked table (highest RICE score first) with all dimension scores
2. A brief commentary on the top 3 explaining why they won
3. A "Watch Out" section flagging any feature where low confidence could swing the ranking significantly
4. Your recommended sequencing for the quarter, accounting for dependencies between features
Why it works: Requiring specific numbers for Reach, citing evidence level for Confidence, and including person-weeks for Effort forces rigorous thinking instead of gut-based scoring.
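The RICE arithmetic itself is simple enough to sanity-check your AI's ranking by hand. A short Python sketch; the feature names, reach figures, and scores below are hypothetical examples, not recommendations:

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# (name, reach per quarter, impact, confidence, effort in person-weeks)
features = [
    ("Bulk export",       4000, 1.0, 0.8, 3),
    ("SSO integration",   1200, 2.0, 1.0, 6),
    ("Onboarding revamp", 9000, 0.5, 0.5, 4),
]

# Rank highest score first, as the prompt's output format requires.
ranked = sorted(features, key=lambda f: rice(*f[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: {rice(*scores):.1f}")
```

Note how a high-reach feature with low confidence ("Onboarding revamp") can still lose to a smaller, better-evidenced one — exactly the kind of swing the "Watch Out" section in the prompt is meant to flag.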
Stakeholder Update Email
Write a stakeholder update email for {{product}}.

Audience: {{audience}}
Time period: {{time_period}}
Key accomplishments: {{accomplishments}}
Current blockers: {{blockers}}
Upcoming milestones: {{upcoming_milestones}}

Structure the email as follows:

1. **TL;DR** (2-3 sentences) - The single most important takeaway and one decision needed.
2. **What Shipped** - Bullet list of completed items with quantified impact where available (e.g., "Reduced page load by 40%", "3,200 users activated in first week").
3. **Metrics Dashboard** - 3-5 key metrics with current value, trend (up/down/flat), and target.
4. **Blockers & Risks** - What is stuck and what you need from the audience to unblock it. Be specific about the ask.
5. **Next 2 Weeks** - Top 3 priorities with expected completion dates.
6. **Decision Needed** - If any decision requires stakeholder input, state the options, your recommendation, and the deadline for a response.

Tone: professional but concise. Executives skim; lead with outcomes, not activity. Total length should be readable in under 2 minutes.
Why it works: Specifying the audience, requiring quantified impact, and including a decision-needed section produces an email that drives action instead of just reporting status.
Sprint Retro Facilitator
You are an experienced agile coach facilitating a sprint retrospective.

Team: {{team_name}}
Sprint: {{sprint_number}}
Sprint goal: {{sprint_goal}}
Goal achieved: {{goal_status}}

Sprint data:
- Planned story points: {{planned_points}}
- Completed story points: {{completed_points}}
- Carryover items: {{carryover_items}}
- Incidents/bugs reported: {{incidents}}

Generate a structured retro agenda:

1. **Check-in** (2 min) - A quick one-word energy check. Provide 3 creative check-in question options.
2. **Data Review** (5 min) - Summarize the sprint data above in a clear visual format. Calculate velocity trend if possible. Highlight the gap between planned and completed.
3. **What Went Well** - Based on the sprint data, suggest 3-4 likely "went well" items as conversation starters.
4. **What Didn't Go Well** - Based on the carryover and incidents, suggest 3-4 likely friction points to discuss.
5. **Root Cause Exploration** - For each friction point, provide a "5 Whys" starter question to help the team dig deeper.
6. **Action Items Template** - Provide a table with columns: Action, Owner, Due Date, Success Criteria. Pre-fill 2-3 suggested actions based on the data.
7. **Closing** - A team appreciation prompt and a one-sentence sprint-ahead focus statement.

Keep the tone constructive, blame-free, and focused on systemic improvements over individual performance.
Why it works: Feeding in real sprint data (points, carryover, incidents) allows the AI to generate specific talking points rather than generic retro templates. The 5 Whys starters push the team past surface-level complaints.
Customer Interview Questions
Generate a customer interview guide for {{product}}.

Interview goal: {{interview_goal}}
Target persona: {{persona}}
Feature/area of focus: {{focus_area}}
Interview duration: {{duration}} minutes

Structure the guide as follows:

**Opening (2 min)**
- Provide a warm intro script that sets expectations, explains recording consent, and puts the interviewee at ease.

**Context & Background (5 min)**
- 3-4 questions about their role, workflow, and current tools. Focus on understanding their world before discussing your product.

**Problem Exploration (10 min)**
- 5-6 open-ended questions about the pain points related to {{focus_area}}. Use "Tell me about a time when..." and "Walk me through how you currently..." formats.
- Include 2 follow-up probes for each question (e.g., "What happened next?", "How did that make you feel?").

**Solution Validation (8 min)**
- 3-4 questions about how they would evaluate a solution. Do NOT describe your feature yet. Ask about their criteria, must-haves, and dealbreakers.

**Reaction to Concept (5 min)**
- Provide a brief, neutral description of {{focus_area}} to read aloud. Then ask 3 questions to gauge interest, concerns, and willingness to pay/adopt.

**Wrap-up (2 min)**
- Ask who else you should talk to, what question you should have asked, and provide a thank-you script.

**Interviewer Notes Section**
- A checklist of non-verbal cues to watch for (hesitation, enthusiasm, confusion).
- Reminder: do not lead the witness; listen more than you talk.
Why it works: Separating problem exploration from solution validation prevents leading questions. Including follow-up probes and non-verbal cue reminders produces interview data you can actually synthesize into insights.
Go-to-Market Checklist
Create a comprehensive go-to-market (GTM) checklist for launching {{feature}} in {{product}}.

Launch date: {{launch_date}}
Target audience: {{persona}}
Launch tier: {{launch_tier}} (e.g., Tier 1 = major launch with press, Tier 2 = blog + email, Tier 3 = in-app announcement only)

Build the checklist organized by team and timing:

**4 Weeks Before Launch**
- Product: Feature freeze criteria, QA sign-off checklist, performance benchmarks, rollback plan
- Design: Final asset list (screenshots, demo video, illustrations, OG images)
- Engineering: Feature flags, monitoring dashboards, alerting thresholds, load testing results

**2 Weeks Before Launch**
- Marketing: Blog post draft, email campaign copy, social media posts (Twitter, LinkedIn, Product Hunt), landing page updates
- Sales: Updated pitch deck slides, competitive positioning talking points, FAQ for sales calls, demo script
- Support: Knowledge base articles, internal training completed, escalation path for launch-day issues

**Launch Day**
- Execution: Feature flag flip sequence, monitoring checklist (error rates, latency, support tickets), war room channel setup
- Communications: Blog published, emails sent, social posts scheduled, in-app announcement triggered
- Leadership: Executive summary sent, key metrics dashboard shared

**1 Week After Launch**
- Analytics: Adoption metrics (DAU, activation rate, feature usage), funnel conversion impact, support ticket volume
- Feedback: User feedback synthesis, NPS/CSAT delta, top 5 issues reported
- Retrospective: What went well, what to improve, updated playbook for next launch

Include a RACI matrix template (Responsible, Accountable, Consulted, Informed) for the cross-functional items.
Why it works: Organizing by team and timeline with specific deliverables (not vague "prepare marketing materials") makes this immediately actionable. The launch tier parameter scales the checklist appropriately.
Recommended tools & resources
Browse product management prompt templates from the community.
Prompt Builder: Build custom PM prompts with guided steps.
Business Prompt Templates: Broader business templates for strategy and operations.
Prompt Tips: Techniques to get better product thinking from AI models.
Prompt Patterns: Reusable structures for PRDs, specs, and analysis.
Roles: Role-based prompt collections for product teams.