What is MCP (Model Context Protocol)?

MCP -- Model Context Protocol -- is an open standard created by Anthropic that lets AI tools connect to external services and data sources. Think of it like USB for AI: a universal way for AI assistants to plug into tools, databases, APIs, and files without custom integrations for each one. Before MCP, every AI tool had its own way of accessing external data, which meant developers had to build separate plugins for each platform. MCP replaces that fragmentation with a single protocol that any AI tool can support.

In practice, MCP works through "servers" -- small programs that expose specific capabilities to AI tools. An MCP server might give Claude access to your GitHub repos, your Slack messages, your database, or your prompt library. The AI tool (called the "client") discovers what the server can do and uses those capabilities naturally in conversation. You do not need to copy-paste data or switch between tabs -- the AI can read, write, and search external systems directly.

MCP is already supported by Claude (desktop app and Claude Code), Cursor, Windsurf, and a growing number of AI tools. PromptingBox publishes an MCP server that lets you save, search, and retrieve your prompts from inside any MCP-compatible AI tool. Install it with a single npx command and your entire prompt library becomes accessible wherever you work.
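As a concrete sketch, registering an MCP server in Claude Desktop's claude_desktop_config.json looks like the following (the package name and environment variable are hypothetical placeholders, not the actual PromptingBox package):

```json
{
  "mcpServers": {
    "promptingbox": {
      "command": "npx",
      "args": ["-y", "@promptingbox/mcp-server"],
      "env": { "PROMPTINGBOX_API_KEY": "your-key-here" }
    }
  }
}
```

The client launches the listed command as a subprocess, discovers its tools over the protocol, and makes them available in conversation.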

MCP Prompt Templates

Copy-ready prompts for working with MCP servers, tool calling, and multi-tool AI workflows.

Tool Use Instructions for AI Agents

You are an AI assistant with access to external tools via MCP (Model Context Protocol). When the user's request requires information or actions beyond your training data, use the available tools.

Available tools:
{{available_tools}}

Guidelines for tool use:
1. Before calling a tool, explain to the user what you're about to do and why
2. Use the most specific tool available — prefer targeted queries over broad ones
3. If a tool call fails, explain the error and suggest alternatives
4. Chain multiple tool calls when a task requires sequential steps
5. After receiving tool results, synthesize them into a clear, natural response
6. Never fabricate tool responses — if you didn't call the tool, don't pretend you did

User request: {{user_request}}

Think step by step about which tools you need and in what order.

Why it works: Explicit tool-use guidelines prevent common agent failures: hallucinated tool results, unnecessary calls, and opaque behavior. The step-by-step instruction encourages planning before acting, reducing wasted API calls.
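Guideline 6 can also be enforced mechanically on the client side. A minimal sketch (the registry mapping and tool names are hypothetical) of a dispatcher that only ever returns results a tool actually produced:

```python
# Sketch of guideline 6 in code: tool results enter the transcript only if
# the tool really ran. Unknown tools and failed calls surface as errors,
# so the agent can report them instead of fabricating output.
def run_tool(registry, name, args):
    if name not in registry:
        # Unknown tool: report the error rather than inventing a result
        return {"error": f"unknown tool: {name}", "is_real_result": False}
    try:
        return {"result": registry[name](**args), "is_real_result": True}
    except Exception as exc:
        # Failed call: surface the error so alternatives can be suggested
        return {"error": str(exc), "is_real_result": False}
```

An agent loop built on this refuses to add any message claiming a tool result unless is_real_result is true.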

MCP Server Configuration Prompt

You are helping a developer configure an MCP server for their AI workflow. Based on their requirements, generate the correct configuration.

Developer's use case: {{use_case}}
Target AI client: {{ai_client}}
Operating system: {{operating_system}}

Generate a complete MCP server configuration including:

1. **Installation command** (npm/npx)
2. **Configuration JSON** for the target client:
   - For Claude Desktop: claude_desktop_config.json format
   - For Claude Code CLI: claude mcp add command
   - For Cursor: .cursor/mcp.json format
3. **Environment variables** needed (with placeholder values)
4. **Verification steps** to confirm the server is working
5. **Common troubleshooting** for this specific setup

Important: Different clients use different config files and formats. Never mix them up.

Why it works: MCP configuration is the #1 friction point for new users. This prompt generates client-specific instructions that match the exact format each tool expects, preventing the most common setup errors.
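The kind of client-specific output this prompt should produce can be sketched as a small helper (the function itself is hypothetical; the file paths are the commonly used defaults for each client):

```python
import json

# Hypothetical helper mirroring the prompt's job: emit the right config
# shape for each MCP client. Claude Code registers servers via its CLI;
# Claude Desktop and Cursor read a JSON file with an mcpServers key.
def mcp_config(client, server_name, command, args, env=None):
    entry = {"command": command, "args": args}
    if env:
        entry["env"] = env
    if client == "claude-code":
        quoted = " ".join([command] + args)
        return f"claude mcp add {server_name} -- {quoted}"
    target = {
        "claude-desktop": "claude_desktop_config.json",
        "cursor": ".cursor/mcp.json",
    }[client]
    return target, json.dumps({"mcpServers": {server_name: entry}}, indent=2)
```

Keeping the client-to-file mapping in one place is exactly the "never mix them up" rule from the prompt.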

Function Calling Prompt Design

You are designing a function-calling interface for an MCP server. Define clean, well-documented tool schemas that AI models can use effectively.

Service to expose: {{service_description}}
Key operations: {{operations_list}}

For each operation, generate:

1. **Tool name**: snake_case, descriptive (e.g., search_documents, create_ticket)
2. **Description**: One clear sentence — this is what the AI reads to decide when to use the tool
3. **Input schema**: JSON Schema with:
   - Required vs optional parameters clearly marked
   - Descriptive parameter names (not abbreviations)
   - Enum values where inputs are constrained
   - Default values where sensible
4. **Example call**: A realistic example showing expected input/output
5. **Error cases**: What the tool returns on invalid input

Design principles:
- Prefer specific tools over generic ones (search_by_date > search with a date parameter)
- Keep required parameters minimal — 1-3 is ideal
- Write descriptions from the AI's perspective: "Retrieves X when the user asks about Y"

Why it works: AI models select tools based on their descriptions and schemas. Well-designed schemas with clear descriptions, minimal required params, and specific naming dramatically improve tool selection accuracy.
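A tool schema following these principles might look like the sketch below (the tool name and fields are illustrative, shaped like the JSON Schema objects MCP tools declare):

```python
# One well-designed tool schema: snake_case name, a description written
# from the AI's perspective, minimal required parameters, and a default
# where sensible.
search_documents_by_date = {
    "name": "search_documents_by_date",
    "description": "Retrieves documents created in a date range when the "
                   "user asks about recent or time-bounded content.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "start_date": {"type": "string", "description": "ISO 8601 date, inclusive"},
            "end_date": {"type": "string", "description": "ISO 8601 date, inclusive"},
            "limit": {"type": "integer", "default": 10, "description": "Max results"},
        },
        "required": ["start_date"],  # keep required parameters minimal
    },
}
```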

Context Passing Between Tools

You are an AI agent executing a multi-step workflow. Pass context between sequential tool calls correctly.

Workflow: {{workflow_description}}
Current step: {{current_step}} of {{total_steps}}

Previous tool results:
{{previous_results}}

Instructions:
1. Review the results from previous steps
2. Extract the specific data needed for the current step
3. Transform it into the format the next tool expects
4. Call the current tool with the correctly formatted input
5. Validate the result before proceeding

Context passing rules:
- IDs from one tool's output become input parameters for the next
- If a previous step returned an error, stop and report it — don't pass bad data forward
- Log each transformation: "Taking [field] from step N result and passing as [param] to step N+1"
- If a required value is missing from previous results, explain what's missing and which step should have produced it

Why it works: Multi-tool orchestration commonly fails at handoff points where data from one tool needs reformatting for the next. Explicit transformation logging and error checking at each step prevents cascade failures.
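The handoff rules above can be sketched as a small helper (field and parameter names are hypothetical) that extracts a value from a previous step, logs the transformation, and fails loudly instead of passing bad data forward:

```python
# Sketch of a context-passing handoff: pull one field from a previous
# step's result, log the transformation, and stop on missing or errored
# upstream steps rather than cascading the failure.
def pass_context(previous_results, step, field, param):
    result = previous_results.get(step)
    if result is None or "error" in result:
        raise RuntimeError(f"step {step} failed or is missing; stopping")
    if field not in result:
        raise KeyError(f"step {step} should have produced '{field}'")
    print(f"Taking {field} from step {step} result and passing as {param}")
    return {param: result[field]}
```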

Multi-Tool Orchestration

You are an AI agent with access to multiple MCP tools. Plan and execute a complex task that requires coordinating across several tools.

Task: {{task_description}}

Available MCP servers and their tools:
{{tool_inventory}}

Create an execution plan:

## Planning Phase
1. Break the task into atomic steps
2. Identify which tool handles each step
3. Map data dependencies between steps (what output feeds into what input)
4. Identify steps that can run in parallel vs those that must be sequential
5. Estimate the critical path

## Execution Plan
For each step:
- **Step N**: [description]
- **Tool**: [server_name.tool_name]
- **Input**: [what data, and where it comes from]
- **Depends on**: [step numbers]
- **Error handling**: [what to do if this step fails]

## Fallback Strategy
- If tool X is unavailable: [alternative approach]
- If step N fails after retries: [graceful degradation plan]

Execute the plan step by step, reporting progress after each tool call.

Why it works: Planning before execution prevents the common failure mode where AI agents jump into tool calls without considering dependencies. The dependency mapping enables parallel execution, and the fallback strategy handles real-world reliability issues.
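The dependency-mapping step of the planning phase can be sketched as a batching function: given each step's dependencies, it computes groups of steps that can run in parallel (one possible implementation, not the only way to schedule):

```python
# Sketch of the planning phase: group steps into batches where every step
# in a batch depends only on steps from earlier batches, so each batch can
# run in parallel. Detects circular dependencies in the plan.
def parallel_batches(deps):
    # deps: {step: set of steps it depends on}
    remaining = dict(deps)
    batches = []
    done = set()
    while remaining:
        ready = [s for s, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("circular dependency in plan")
        batches.append(sorted(ready))
        done.update(ready)
        for s in ready:
            del remaining[s]
    return batches
```

For example, two independent lookups followed by a step that consumes both yields two batches: the lookups together, then the join step.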

MCP Debugging Prompt

You are debugging an MCP (Model Context Protocol) connection issue. Systematically diagnose why the AI tool cannot communicate with the MCP server.

Symptom: {{error_description}}
AI client: {{ai_client}}
MCP server: {{server_name}}
Config location: {{config_path}}

Diagnostic checklist:

1. **Config file check**: Is the config in the right file for this client?
   - Claude Desktop: claude_desktop_config.json (~/Library/Application Support/Claude/ on macOS, %APPDATA%\Claude on Windows)
   - Claude Code CLI: ~/.claude.json (project mcpServers key)
   - Cursor: .cursor/mcp.json
   Common mistake: putting Claude Code config in .claude/mcp.json (wrong file)

2. **Server process**: Is the MCP server binary installed and accessible?
   - Run: which {{server_name}} OR npx {{server_name}} --version
   - Check: node_modules/.bin/ if locally installed

3. **Environment variables**: Are required env vars set in the config?
   - API keys present and valid (not expired)
   - URLs correct (http://localhost:3000 for dev, production URL for prod)

4. **Transport layer**: stdio vs HTTP — does the config match the server type?

5. **Permissions**: Does the process have access to the specified paths?

6. **Logs**: Where to find error logs for {{ai_client}}

For each check, explain what to look for and the exact command to run. Provide the fix for each common failure mode.

Why it works: MCP debugging is hard because there are multiple config systems, transport types, and client-specific quirks. This systematic checklist covers the most common failure modes in order of likelihood, saving hours of trial and error.
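The first item on the checklist is also easy to automate. A minimal sketch (the function is illustrative, not part of any MCP tooling) that confirms the config file exists, parses as JSON, and actually declares the server:

```python
import json
import pathlib

# Sketch of diagnostic step 1: does the config file exist, is it valid
# JSON, and does it declare the server under the mcpServers key?
def check_config(path, server_name):
    p = pathlib.Path(path).expanduser()
    if not p.exists():
        return f"FAIL: {p} does not exist (wrong client config file?)"
    try:
        config = json.loads(p.read_text())
    except json.JSONDecodeError as exc:
        return f"FAIL: invalid JSON at line {exc.lineno}"
    if server_name not in config.get("mcpServers", {}):
        return f"FAIL: no '{server_name}' entry under mcpServers"
    return "OK"
```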