System Prompts: The "Director Behind the Scenes"

Table of Contents
- Prompts Aren’t a Single Sentence, But an “Information Package”
- Segmented Composition: Modular Prompt Architecture
- Dynamic Assembly: Different Recipes for Different Scenarios
- Model-Specific Tuning: Different “Scripts” for Different Models
- Prompts as the “Control Plane”
- Practical Prompt Engineering Tips
- Implications for Using Claude Code
- Summary
Have you noticed that the same request gets completely different response styles from Claude Code versus ChatGPT?
Ask “help me write a login feature” and ChatGPT might directly give you a code snippet, then ask “what else do you need?” Claude Code, on the other hand, will first ask: what framework? How should passwords be encrypted? Do you need email verification?
This difference isn’t about the model itself; it’s “system prompts” at work. Today we’re pulling back the curtain on this “director behind the scenes.”
Diagram: system prompts as a movie director setting the scene and positioning the actors
Prompts Aren’t a Single Sentence, But an “Information Package”
Many people think system prompts are as simple as “You are Claude, an AI assistant.” Actually, Claude Code’s system prompt is a carefully designed “information package” composed of multiple modules.
Imagine film shooting: the director doesn’t just tell actors “you play a cop,” but provides detailed instructions—what era is the story set? What’s this character’s personality? What’s the emotional tone of this scene? What kind of dialogue style is needed?
System prompts follow the same principle. They don’t just tell the model “you’re an AI assistant” and call it done—they set:
- Who you are (identity)
- What you can do (tool capabilities)
- How you should do it (behavioral norms)
- What can and cannot be done (safety boundaries)
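The four parts of this “information package” can be sketched as a typed structure. This is a minimal illustration, not Claude Code’s actual source; the interface and field names are hypothetical.

```typescript
// Hypothetical sketch: the "information package" as a typed structure.
// Field names are illustrative, not taken from Claude Code's source.
interface SystemPromptPackage {
  identity: string;        // who you are
  capabilities: string[];  // what you can do (tool descriptions)
  behavior: string[];      // how you should do it (behavioral norms)
  boundaries: string[];    // what can and cannot be done (safety)
}

const pkg: SystemPromptPackage = {
  identity: "You are Claude Code, Anthropic's AI coding assistant.",
  capabilities: [
    "FileReadTool: Read file contents",
    "BashTool: Execute shell commands",
  ],
  behavior: ["Use Markdown format.", "Paths are wrapped in backticks."],
  boundaries: ["Do not execute destructive operations without confirmation."],
};

// Before being sent to the model, the package is flattened into one string.
const rendered: string = [
  pkg.identity,
  "Tools:\n" + pkg.capabilities.map((c) => `- ${c}`).join("\n"),
  pkg.behavior.join("\n"),
  pkg.boundaries.join("\n"),
].join("\n\n");
```

The point of the structure is separation of concerns: each part can be edited, tested, or swapped independently before being flattened into the final prompt string.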
Segmented Composition: Modular Prompt Architecture
Claude Code’s system prompts use segmented composition design, divided into several independent modules:
Identity Module: Defines the AI’s basic identity and role positioning.
You are Claude Code, Anthropic's AI coding assistant.
Your goal is to help users complete software engineering tasks.
You interact with the user's environment through tools: reading files, executing commands, editing code, etc.
Tools Module: Lists all available tools and their descriptions.
You can use the following tools:
- FileReadTool: Read file contents
- FileEditTool: Edit files
- BashTool: Execute shell commands
- GrepTool: Search file contents
...
Each tool’s description is defined here—this is why the same underlying model behaves differently in Claude Code and ChatGPT. Tool descriptions aren’t documentation for humans; they’re instructions for the model.
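To make the “instructions for the model” point concrete, here is a hypothetical tool definition. The `ToolDefinition` shape, `renderToolsModule` helper, and the description text are all illustrative, not Claude Code’s actual code.

```typescript
// Hypothetical tool definition: the `description` string is injected
// straight into the system prompt, so it is written *for the model*.
interface ToolDefinition {
  name: string;
  description: string; // prompt text the model reads, not human docs
}

const grepTool: ToolDefinition = {
  name: "GrepTool",
  description:
    "Search file contents with a regular expression. Prefer this over " +
    "running grep in the shell so results can be safely truncated.",
};

// Rendering the tools module is just concatenating each description.
function renderToolsModule(tools: ToolDefinition[]): string {
  return (
    "You can use the following tools:\n" +
    tools.map((t) => `- ${t.name}: ${t.description}`).join("\n")
  );
}

const toolsModule: string = renderToolsModule([grepTool]);
```

Notice that the description steers *behavior* (“prefer this over running grep in the shell”), not just capability—that wording is what makes the model pick one tool over another.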
Format Module: Defines response format requirements.
Use Markdown format.
Code blocks use appropriate language markers.
Paths are wrapped in backticks.
Safety Module: Defines safety boundaries and restrictions.
Do not execute destructive operations without confirmation.
Do not access sensitive files.
Obey the user's permission settings.
Diagram: the segmented module structure of a system prompt
These modules are defined in constants/prompts.ts and then dynamically assembled in query.ts based on current state.
Dynamic Assembly: Different Recipes for Different Scenarios
System prompts aren’t fixed. Claude Code dynamically adjusts based on current state:
Adjusted by Permission Mode: If users configure alwaysDeny rules, the safety module reinforces relevant warnings.
Adjusted by Tool Availability: If certain tools are disabled, the tools module won’t include them. This is why the model doesn’t even know that tools gated behind a disabled feature flag (feature() returning false) exist.
Adjusted by Context: If handling sensitive operations, the safety module appends additional warnings.
Adjusted by User Preferences: Instructions in the CLAUDE.md file are merged into system prompts, overriding or supplementing default behavior.
This dynamic assembly mechanism gives system prompts both consistency (core modules unchanged) and flexibility (adjustable by scenario).
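The four adjustments above can be sketched as a single assembly function. This is a simplified sketch under assumptions: `buildSystemPrompt`, the `SessionState` fields, and the module constants are hypothetical names, not the real implementation in query.ts.

```typescript
// Hypothetical sketch of dynamic prompt assembly based on session state.
interface SessionState {
  enabledTools: string[];        // tools that passed their feature checks
  alwaysDenyRules: string[];     // user-configured hard denials
  claudeMdInstructions?: string; // contents of CLAUDE.md, if present
}

const IDENTITY = "You are Claude Code, Anthropic's AI coding assistant.";
const SAFETY = "Do not execute destructive operations without confirmation.";

function buildSystemPrompt(state: SessionState): string {
  const parts: string[] = [IDENTITY];

  // Tools module: only enabled tools are mentioned, so disabled
  // tools are simply invisible to the model.
  parts.push(
    "Tools:\n" + state.enabledTools.map((t) => `- ${t}`).join("\n"),
  );

  // Safety module: reinforced when the user configured deny rules.
  let safety = SAFETY;
  if (state.alwaysDenyRules.length > 0) {
    safety += "\nNever attempt: " + state.alwaysDenyRules.join(", ");
  }
  parts.push(safety);

  // CLAUDE.md is merged last so it can override or supplement defaults.
  if (state.claudeMdInstructions) parts.push(state.claudeMdInstructions);

  return parts.join("\n\n");
}
```

Putting CLAUDE.md last is a deliberate ordering choice in this sketch: in a flat prompt, later instructions tend to read as refinements of earlier ones, which matches “overriding or supplementing default behavior.”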
Model-Specific Tuning: Different “Scripts” for Different Models
Claude Code supports multiple models (Claude 3.5 Sonnet, Claude 3 Opus, etc.), with slightly different system prompts for each.
This isn’t discrimination—it’s tuning. Different models have different characteristics:
Claude 3.5 Sonnet: Capable and fast, prompts can be more concise, giving it more autonomy.
Claude 3 Opus: Most capable but slower, prompts can be more detailed, fully leveraging its comprehension.
Older Models: May need more explicit instructions, fewer implicit assumptions.
This tuning manifests in:
- Detail level of tool descriptions
- Emphasis level of safety warnings
- Strictness level of format requirements
- Quantity and quality of examples
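One plausible way to express this tuning is a per-model configuration table with a conservative fallback. The table contents and knob names here are hypothetical illustrations, not Anthropic’s actual tuning values.

```typescript
// Hypothetical per-model tuning knobs; values are illustrative only.
interface PromptTuning {
  toolDescriptionDetail: "terse" | "full"; // detail level of tool descriptions
  safetyEmphasis: "standard" | "reinforced"; // emphasis of safety warnings
  exampleCount: number; // few-shot examples included in the prompt
}

const TUNING: Record<string, PromptTuning> = {
  "claude-3-5-sonnet": {
    toolDescriptionDetail: "terse", // capable model: more autonomy
    safetyEmphasis: "standard",
    exampleCount: 1,
  },
  "claude-3-opus": {
    toolDescriptionDetail: "full", // leverage its comprehension
    safetyEmphasis: "standard",
    exampleCount: 3,
  },
};

// Older or unknown models fall back to the most explicit settings.
function tuningFor(model: string): PromptTuning {
  return (
    TUNING[model] ?? {
      toolDescriptionDetail: "full",
      safetyEmphasis: "reinforced",
      exampleCount: 3,
    }
  );
}
```

The fallback direction matters: when in doubt, more explicit instructions and fewer implicit assumptions, matching the guidance for older models above.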
Prompts as the “Control Plane”
In Claude Code’s architecture, system prompts play the role of the “control plane”—they don’t directly execute operations, but control the system’s behavior patterns.
This is similar to operating systems: the kernel (Agent Loop, tool system) handles execution, but system configuration (prompts) determines the kernel’s behavior mode.
Advantages of this design:
Configurable: AI behavior can be changed by modifying prompts without changing code.
Experimental: Anthropic can A/B test different prompts and observe effects.
Extensible: New features can be supported by adding new prompt modules.
Reversible: If a prompt change causes problems, it can quickly be rolled back to a previous version.
Practical Prompt Engineering Tips
If you want to apply these concepts in your own AI applications, here are some tips:
Modular Design: Split prompts into independent modules for easy maintenance and composition.
Layered Control: System-level (immutable), project-level (CLAUDE.md), session-level (temporary instructions).
Explicit Over Implicit: Don’t assume the model “should understand”—explicitly state expectations.
Example-Driven: Use few-shot examples to demonstrate expected behavior, more effective than abstract descriptions.
Continuous Iteration: Prompts aren’t written once and done—they’re continuously optimized based on actual usage feedback.
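The “layered control” tip can be sketched as a simple last-writer-wins merge. The layer names follow the tip above; the `mergeLayers` function and the instruction keys are hypothetical, shown only to illustrate the precedence order.

```typescript
// Hypothetical layered-control sketch: later layers override earlier ones.
interface Layer {
  name: string;
  instructions: Record<string, string>;
}

function mergeLayers(layers: Layer[]): Record<string, string> {
  const merged: Record<string, string> = {};
  for (const layer of layers) {
    Object.assign(merged, layer.instructions); // later layers win
  }
  return merged;
}

const system: Layer = {
  name: "system", // immutable defaults
  instructions: {
    format: "Use Markdown.",
    safety: "Confirm destructive operations.",
  },
};
const project: Layer = {
  name: "project", // e.g. from CLAUDE.md
  instructions: { format: "Use plain text in commit messages." },
};
const session: Layer = {
  name: "session", // temporary instructions
  instructions: { verbosity: "Be brief." },
};

const effective = mergeLayers([system, project, session]);
// `format` now comes from the project layer; `safety` survives from system.
```

In a real system you would likely exempt safety keys from being overridden by lower layers; this sketch merges everything uniformly for simplicity.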
Implications for Using Claude Code
Understanding system prompts helps you use Claude Code better:
Why does it sometimes “over-confirm”? Because the safety module requires it to ask users when uncertain.
Why won’t it execute dangerous operations? Because the safety module explicitly prohibits it, even if the user asks (unless permission overrides).
Why does the same question sometimes get different answers? Because prompts may have been adjusted in version updates, or current context affects prompt assembly.
Why does CLAUDE.md work? Because it’s merged into system prompts, becoming guidance for model behavior.
Summary
System prompts are Claude Code’s “director behind the scenes,” setting the AI’s role, capabilities, behavior, and safety boundaries. Through segmented composition and dynamic assembly, this design stays both consistent and flexible while keeping safety boundaries intact.
Understanding this isn’t just about satisfying curiosity—it helps you:
- Collaborate better with AI (understand its behavior logic)
- Use CLAUDE.md more effectively (know why it works)
- Design better prompts in your own AI applications (borrow these patterns)
In the next article, we’ll talk about why AI sometimes “disobeys”—prompt behavior guidance techniques.
