Windsurf AI Prompts & Configuration Guide
Windsurf (formerly Codeium) is an AI-native code editor that combines intelligent autocomplete with Cascade, its agentic coding assistant. Like Cursor, Windsurf reads project-level configuration to understand your codebase, but it uses its own rules format and has a distinct workflow built around multi-step task execution. Getting the most out of Windsurf means understanding how to structure your rules files and write prompts that leverage Cascade's ability to plan and execute across multiple files.
Windsurf supports global rules (applied to all projects) and project-specific rules that live in your repository. These rules tell Cascade about your preferred frameworks, coding style, file organization, and testing requirements. A well-configured rules file means Cascade generates code that matches your existing codebase from the first interaction. Windsurf also supports MCP (Model Context Protocol), which means you can connect external tools, like your PromptingBox prompt library, directly into your development environment.
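To make this concrete, here is a sketch of what a project rules file might contain. The exact file name and location vary by Windsurf version (older releases used a single `.windsurfrules` file; newer ones support a rules directory), and every detail below — the stack, paths, and conventions — is an invented example you should replace with your own:

```markdown
# Project rules (illustrative example)

## Stack
- Next.js 14 (App Router), TypeScript in strict mode, PostgreSQL via Prisma

## Conventions
- Components live in src/components, one component per file, PascalCase names
- Server-only code goes in src/lib; never import it from client components
- Prefer named exports over default exports

## Testing
- Vitest + React Testing Library; test files sit next to source as *.test.tsx
- Every new utility function gets a happy-path and an edge-case test
```

Short, declarative bullets like these work better than long prose: Cascade applies them as constraints on every generation rather than having to re-infer your conventions each session.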
When prompting Cascade, be explicit about scope and expected behavior. Instead of "add authentication," try "add email/password authentication using NextAuth.js with a PostgreSQL adapter, matching the patterns in src/lib/auth." Cascade works best when it has clear context about what you want and where it should look. Browse our configuration templates and coding prompts to set up Windsurf for your stack.
Copy-Paste Windsurf Prompts
Ready-to-use prompts for Windsurf Cascade. Copy, fill in the {{variables}}, and run in Cascade.
Codebase Navigation
I'm new to this codebase. Help me understand the architecture:

1. Read the top-level directory structure and identify the main entry points
2. Find the {{framework}} configuration file and summarize the key settings
3. Identify the routing pattern — list all routes/pages and what they do
4. Find where {{core_concept}} is implemented (the main service/module)
5. Map the data flow: how does data get from {{data_source}} to {{data_destination}}?

Output a brief architecture summary with:
- Tech stack (framework, language, database, key libraries)
- Directory structure overview (what lives where)
- Key files I should read first
- Any patterns or conventions the codebase follows
Why it works: Gives Cascade a structured exploration plan instead of a vague 'explain this codebase.' The numbered steps guide it through a logical discovery process that builds understanding incrementally.
Multi-File Refactoring
Refactor {{what_to_refactor}} across the codebase:

Current state: {{current_pattern}}
Desired state: {{desired_pattern}}

Scope:
- Files to modify: {{file_pattern}} (e.g., src/components/**/*.tsx)
- Files to NOT touch: {{excluded_files}}

Steps:
1. First, search for all usages of the current pattern and list them
2. Create the new {{abstraction_type}} (interface/function/component) in {{target_file}}
3. Update each file to use the new pattern
4. Verify no broken imports or type errors remain

Constraints:
- Preserve all existing functionality — this is a pure refactor
- Keep the same exports (no breaking changes for consumers)
- Match existing code style (indentation, naming conventions)
- Run the TypeScript compiler after changes to verify
Why it works: Defines both the current and desired state, scopes the file changes, and includes a verification step. This prevents Cascade from partially refactoring and leaving the codebase in an inconsistent state.
Test Generation
Write tests for {{file_path}}:

Test framework: {{test_framework}}
Test file location: {{test_file_path}}

Coverage requirements:
- Happy path for each exported function/component
- Edge cases: empty inputs, null/undefined, boundary values
- Error cases: invalid inputs, network failures, timeouts
- {{specific_scenario}} scenario

For each test:
- Use descriptive test names: "should [expected behavior] when [condition]"
- Arrange-Act-Assert pattern
- Mock external dependencies ({{dependencies_to_mock}})
- No test interdependencies — each test should run independently

Do not modify the source file. Only create/update the test file. Aim for the tests to pass on the first run — read the source implementation carefully before writing assertions.
Why it works: Specifies the test framework, naming convention, and coverage expectations upfront. The instruction to read the source first prevents Cascade from writing tests against assumed behavior.
Code Explanation
Explain how {{feature_name}} works in this codebase:
1. Find the entry point — where does the user trigger this feature?
2. Trace the execution path step by step:
- Which components/functions are called?
- What data transformations happen?
- Where does it interact with the database/API?
3. Identify the key files involved and their roles
4. Note any error handling, edge cases, or important business logic
Format your explanation as:
- **Entry point:** [file and function]
- **Flow:** Step-by-step numbered list
- **Key files:** Table of file path + responsibility
- **Gotchas:** Anything non-obvious or surprising about the implementation
Keep the explanation technical but concise. I want to understand it well enough to modify it confidently.

Why it works: Asks Cascade to trace a real execution path rather than summarize files in isolation. The structured output format ensures you get actionable understanding, not a wall of text.
Debugging Assistant
I'm hitting this bug:

Error: {{error_message}}
Where: {{where_it_occurs}}
When: {{reproduction_steps}}
Expected behavior: {{expected}}
Actual behavior: {{actual}}

Debug this step by step:
1. Read the file where the error originates
2. Trace backward — what calls this code and with what arguments?
3. Check for common causes: null/undefined values, type mismatches, async timing issues, missing env vars
4. Look at recent changes to the involved files (if git history is available)
5. Propose a fix with an explanation of the root cause

Before applying the fix:
- Explain WHY the bug happens, not just how to fix it
- Show me the specific line(s) that are wrong
- Confirm the fix won't break other code paths that use the same function
Why it works: Provides all the context a debugger needs (error, location, reproduction, expected vs actual) and asks for root cause analysis before applying a fix, preventing superficial patches.
Architecture Review
Review the architecture of {{module_or_feature}} and identify improvements:
Focus areas:
1. **Separation of concerns** — Is business logic mixed with UI/framework code?
2. **Error handling** — Are errors caught, logged, and shown to users appropriately?
3. **Performance** — Any N+1 queries, unnecessary re-renders, missing indexes, or large bundle imports?
4. **Type safety** — Any `any` types, missing null checks, or unsafe type assertions?
5. **Duplication** — Code that's copied between files that should be extracted into shared utilities?
For each issue found:
- File and line number
- What the problem is
- Severity: critical / moderate / minor
- Suggested fix (code snippet if helpful)
Do not make any changes. Only analyze and report. Sort findings by severity (critical first).

Why it works: A structured review checklist with severity ratings focuses Cascade on high-impact issues. The 'do not make changes' constraint ensures you review findings before anything is modified.
Recommended tools & resources
Configuration templates for Windsurf, Cursor, Claude Code, and more.
Prompt Builder: Generate structured prompts optimized for Windsurf Cascade.
Save Across AI Tools: One prompt library that works with Windsurf and every other AI tool.
Prompt Patterns: Proven prompting structures for agentic coding workflows.
Prompt Templates: Browse coding and development templates for any framework.
What is MCP? Learn about Model Context Protocol and how Windsurf uses it.