AI Prompts for Code Review
AI-assisted code review is not about replacing human reviewers — it is about catching the things humans routinely miss under time pressure. Security vulnerabilities, inconsistent error handling, missing edge cases, performance antipatterns, and deviations from project conventions are all areas where AI excels because it does not get fatigued or distracted. But "review this code" is too vague to be useful. Effective code review prompts specify the language and framework, the type of review you want (security, performance, readability, or all three), the project's conventions, and the severity level of feedback you care about. The prompts below are organized by review focus area so you can run targeted passes rather than one unfocused sweep.
Security review prompts should instruct the AI to check for the OWASP Top 10 categories relevant to your stack — SQL injection, XSS, insecure deserialization, broken authentication — and to flag any user input that reaches a database query, file system operation, or external API call without sanitization. Performance review prompts work best when you specify your scale: an N+1 query is irrelevant in a script that runs once a day but critical in a request handler serving thousands of concurrent users. Include your ORM, database, and expected load. For refactoring suggestions, provide context about the codebase's age and your team's bandwidth — the AI should suggest practical improvements, not a rewrite. Test coverage prompts should ask the AI to identify untested code paths, suggest specific test cases, and flag any logic that is difficult to test, which often indicates a design problem worth addressing.
Standardize your code review prompts across your team. When everyone uses the same review templates, you get consistent coverage and catch the same categories of issues regardless of who runs the review. PromptingBox lets you save, version, and share code review prompts so your entire engineering team benefits from the same proven templates.
Code Review Prompt Templates
Copy any prompt and paste it into your AI tool with your code. Replace the {{variables}} with your specifics.
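If you run these templates often, the {{variable}} substitution is worth automating so an unfilled placeholder never reaches the AI tool. A minimal sketch in Python (the `fill_template` helper is illustrative, not part of any PromptingBox API):

```python
import re

def fill_template(template: str, **values: str) -> str:
    """Replace every {{name}} placeholder; fail loudly if one is missing."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in values:
            raise KeyError(f"missing value for placeholder: {name}")
        return values[name]
    return re.sub(r"\{\{(\w+)\}\}", replace, template)

prompt = fill_template(
    "Review the following {{language}} code for security vulnerabilities.",
    language="Python",
)
```

Failing on a missing value is the important design choice: a prompt sent with a literal `{{framework}}` left in it silently degrades the review.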
Security Audit
Review the following {{language}} code for security vulnerabilities. Focus on the OWASP Top 10 categories relevant to this stack, including:
- SQL injection and NoSQL injection vectors
- Cross-site scripting (XSS) — stored, reflected, and DOM-based
- Insecure deserialization
- Broken authentication or session management
- Sensitive data exposure (hardcoded secrets, logging PII)

For each vulnerability found, provide:
1. The exact line(s) affected
2. The severity (Critical / High / Medium / Low)
3. A concrete fix with code

Context:
- Framework: {{framework}}
- This code handles: {{code_purpose}}

```
{{code}}
```
Why it works: Specifying OWASP categories and the framework narrows the AI to relevant attack surfaces instead of generic advice. Requiring severity levels and concrete fixes makes the output immediately actionable.
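To make the injection category concrete, here is the classic pattern this prompt is designed to catch, sketched with Python's built-in sqlite3 module (the table and inputs are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"

# VULNERABLE: user input is interpolated into the SQL string, so the
# injected OR clause turns the WHERE filter into a tautology.
unsafe = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# FIX: a parameterized query treats the input as a value, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
```

A good security-audit response should flag the f-string query, rate it Critical, and propose exactly this parameterized rewrite.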
Performance Review
Analyze the following {{language}} code for performance issues. Consider the production context: this code runs in {{deployment_context}} handling approximately {{expected_load}}.

Review for:
- Time complexity issues (nested loops, unnecessary iterations)
- N+1 query patterns or redundant database calls
- Memory allocation and potential leaks
- Blocking operations that should be async
- Missing caching opportunities
- Inefficient data structures for the access pattern

ORM/Database: {{orm_and_database}}

For each issue, estimate the performance impact (orders of magnitude) and provide an optimized alternative.

```
{{code}}
```
Why it works: Including deployment context and expected load lets the AI calibrate which optimizations actually matter. An N+1 query in a cron job is different from one in a hot request path.
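For reference, this is what the N+1 pattern looks like next to its fix, sketched with sqlite3 rather than a real ORM (the schema is invented for the example; with an ORM the same fix is usually an eager-load or join option):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'intro'), (2, 1, 'update'), (3, 2, 'hello');
""")

# N+1 PATTERN: one query for the posts, then one query per post for its
# author -- 1 + N round trips to the database.
posts = conn.execute("SELECT author_id, title FROM posts ORDER BY id").fetchall()
n_plus_one = [
    (title, conn.execute(
        "SELECT name FROM authors WHERE id = ?", (author_id,)
    ).fetchone()[0])
    for author_id, title in posts
]

# FIX: a single JOIN returns the same data in one round trip.
joined = conn.execute("""
    SELECT posts.title, authors.name
    FROM posts JOIN authors ON authors.id = posts.author_id
    ORDER BY posts.id
""").fetchall()
```

In a once-a-day script both versions are fine; in a hot request path the per-row queries multiply latency by the number of rows, which is exactly the calibration the {{expected_load}} variable gives the AI.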
Architecture Review
Review this {{language}} module for architectural quality. The codebase follows {{architecture_pattern}} and this module is responsible for {{module_responsibility}}.

Evaluate:
- Single Responsibility: Does each class/function do one thing well?
- Dependency direction: Are dependencies pointing inward (toward domain logic)?
- Abstraction levels: Is the code mixing high-level orchestration with low-level details?
- Interface boundaries: Are the public APIs clean and minimal?
- Extensibility: Can new behavior be added without modifying existing code?
- Testability: Can this code be unit tested without complex mocking?

Flag any violations of {{team_conventions}} if present. Provide a summary rating (Strong / Acceptable / Needs Refactoring) with prioritized recommendations.

```
{{code}}
```
Why it works: Grounding the review in your specific architecture pattern and team conventions produces targeted feedback rather than textbook SOLID lectures. The rating system forces a clear verdict.
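The "dependency direction" and "testability" criteria are two sides of the same property. A minimal Python sketch (all class names here are invented for illustration): the domain logic depends on a small abstract port, and the low-level detail implements it, so tests can substitute a fake without any mocking framework.

```python
from abc import ABC, abstractmethod
from datetime import date

class Clock(ABC):
    """Abstract port owned by the domain; dependencies point inward to it."""
    @abstractmethod
    def today(self) -> date: ...

class SystemClock(Clock):
    """Low-level detail at the edge of the system."""
    def today(self) -> date:
        return date.today()

class InvoiceService:
    """Domain logic: depends only on the Clock abstraction."""
    def __init__(self, clock: Clock) -> None:
        self.clock = clock  # injected, never instantiated internally

    def is_overdue(self, due: date) -> bool:
        return self.clock.today() > due

class FixedClock(Clock):
    """Test double: makes time deterministic without complex mocking."""
    def __init__(self, day: date) -> None:
        self.day = day
    def today(self) -> date:
        return self.day

service = InvoiceService(FixedClock(date(2024, 6, 1)))
```

If `InvoiceService` instead called `date.today()` directly, the dependency would point outward and the unit test would need to patch the standard library, which is the kind of finding this prompt should surface.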
PR Review Checklist
Act as a senior engineer reviewing this pull request. The PR is: {{pr_description}}

Review the diff below against this checklist:

**Correctness**
- Does the logic match the stated intent?
- Are edge cases handled (null, empty, boundary values)?
- Are error paths handled gracefully?

**Readability**
- Are variable and function names descriptive?
- Is the code self-documenting or does it need comments?
- Is there unnecessary complexity?

**Testing**
- Are the changes covered by tests?
- What test cases are missing?

**Side Effects**
- Could this break existing functionality?
- Are there backward compatibility concerns?
- Does this affect any shared state or global config?

Output a structured review with APPROVE, REQUEST_CHANGES, or COMMENT for each section and specific inline feedback referencing line numbers.

```diff
{{diff}}
```
Why it works: A structured checklist mirrors how experienced reviewers actually think. Requiring a verdict per section prevents wishy-washy feedback and forces the AI to commit to its assessment.
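Teams often wire this template into a script so every PR gets the same review prompt. A hedged sketch of the assembly step (in a real pipeline the diff would come from something like `git diff main...HEAD`; here it is inlined so the example is self-contained, and the template text is abbreviated):

```python
# Backticks are built programmatically so the template can embed a
# fenced diff block without confusing surrounding markdown.
FENCE = "`" * 3

PR_TEMPLATE = (
    "Act as a senior engineer reviewing this pull request.\n"
    "The PR is: {pr_description}\n\n"
    "Review the diff below against the checklist.\n\n"
    + FENCE + "diff\n{diff}\n" + FENCE
)

def build_pr_prompt(pr_description: str, diff: str) -> str:
    """Fill the review template with this PR's description and diff."""
    return PR_TEMPLATE.format(pr_description=pr_description, diff=diff)

sample_diff = (
    "--- a/app.py\n"
    "+++ b/app.py\n"
    "@@ -1 +1 @@\n"
    "-x = 1\n"
    "+x = 2\n"
)
prompt = build_pr_prompt("Bump default x", sample_diff)
```

Keeping the checklist in one shared template, rather than letting each engineer paraphrase it, is what makes the coverage consistent across reviewers.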
Tech Debt Assessment
Analyze the following {{language}} code for technical debt. This code is {{code_age}} and is maintained by a team of {{team_size}}.

Categorize any tech debt found into:
1. **Critical** — Will cause production incidents if not addressed
2. **Strategic** — Slows down feature development significantly
3. **Cosmetic** — Code smell but low practical impact

For each item found, provide:
- What the debt is and where it lives
- Why it accumulated (likely cause)
- Effort estimate to fix (hours: 1-2h, 4-8h, multi-day)
- Suggested approach (quick patch vs. proper refactor)
- Risk of the fix introducing regressions

Conclude with a prioritized action plan that respects the team's bandwidth: what to fix this sprint, what to schedule, and what to accept.

```
{{code}}
```
Why it works: Including code age and team size produces realistic recommendations instead of idealistic rewrites. The three-tier severity and effort estimates make the output useful for sprint planning.
Code Style Enforcer
Review this {{language}} code against our team's style guide. Enforce the following conventions strictly:

{{style_rules}}

For each violation:
- Quote the offending line
- State which rule it violates
- Provide the corrected version

Do NOT flag issues outside of the rules above — focus only on style consistency, not logic or architecture. If the code fully complies, say so explicitly. Group violations by rule for easy scanning. At the end, provide a single copy-pasteable corrected version of the full code.

```
{{code}}
```
Why it works: Constraining the review to explicit style rules prevents scope creep into architecture or logic opinions. Requiring the corrected version makes fixes instant rather than manual.
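Note that purely mechanical rules are cheaper to pre-check locally, reserving the AI pass for conventions a linter cannot express. A small sketch of that split, with two example rules of the kind that might appear in {{style_rules}} (the rule set and function names are invented for illustration):

```python
import re

MAX_LINE = 100  # example rule: maximum line length
FUNC_DEF = re.compile(r"^\s*def\s+([A-Za-z_]\w*)\s*\(")  # example rule: snake_case defs

def check_style(source: str) -> list[str]:
    """Report mechanical style violations as human-readable strings."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE:
            violations.append(f"line {lineno}: exceeds {MAX_LINE} characters")
        m = FUNC_DEF.match(line)
        if m and not re.fullmatch(r"[a-z_][a-z0-9_]*", m.group(1)):
            violations.append(
                f"line {lineno}: function '{m.group(1)}' is not snake_case"
            )
    return violations

report = check_style("def GetUser(x):\n    return x\n")
```

Anything this kind of check can catch deterministically does not need to spend prompt budget; the AI pass then focuses on the judgment-based rules in {{style_rules}}.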
Recommended tools & resources
Browse community-shared prompt templates for every use case.
Prompt Builder: Build structured code review prompts step by step.
Best Prompts for Coding: Curated AI prompts specifically designed for software engineers.
ChatGPT Prompts for Developers: Developer-focused prompts for ChatGPT across all workflows.
Prompt Patterns: Proven prompt structures for consistent, high-quality output.
AI Coding Assistant Prompts: Prompts designed for AI-powered coding assistants and IDEs.