Test Generation
Write tests for {{file_path}}:

Test framework: {{test_framework}}
Test file location: {{test_file_path}}

Coverage requirements:
- Happy path for each exported function/component
- Edge cases: empty inputs, null/undefined, boundary values
- Error cases: invalid inputs, network failures, timeouts
- {{specific_scenario}} scenario

For each test:
- Use descriptive test names: "should [expected behavior] when [condition]"
- Arrange-Act-Assert pattern
- Mock external dependencies ({{dependencies_to_mock}})
- No test interdependencies — each test should run independently

Do not modify the source file. Only create/update the test file. Aim for the tests to pass on the first run — read the source implementation carefully before writing assertions.
Variables to customize
- {{file_path}}: the source file under test
- {{test_framework}}: the test framework to use (e.g. Jest, Vitest, pytest)
- {{test_file_path}}: where the test file should be created or updated
- {{specific_scenario}}: a domain-specific scenario that must be covered
- {{dependencies_to_mock}}: external dependencies to stub out
Why this prompt works
Specifies the test framework, naming convention, and coverage expectations upfront. The instruction to read the source first prevents Cascade from writing tests against assumed behavior.
Related prompts
Forcing the agent to plan before acting prevents premature execution and wasted steps. Explicit dependency mapping enables parallel execution and catches logical gaps early.
Tool Selection Agent: The ReAct pattern (Reason + Act) creates an explicit reasoning trace that improves tool selection accuracy. The error-handling rule prevents infinite retry loops.
Prompt Compressor: Explicitly requiring all functional requirements to be preserved prevents the model from over-compressing and losing critical instructions.
Memory Management Agent: Explicit memory read/write instructions create agents that improve over time. Categorization keeps memories organized, and the deduplication rule prevents context bloat.