Have you experienced this? You ask an AI to help you write a feature, and it dives in immediately without hesitation. When it’s done, you find there’s no parameter validation, no error handling, and edge cases weren’t considered at all.

Or the opposite: you want it to help change something small, but it asks you eighteen questions: which framework? which version? do you need to support old versions? should I write tests?

This is AI “misbehavior” in two forms: reckless when it should be cautious, hesitant when it should be decisive. Today we’ll talk about how to use prompts to “train” AI behavior.

[Diagram: prompt behavior guidance is like training a guide dog, teaching it when to be proactive and when to ask]

Root Cause: AI is Too “Impatient”

AI assistants (including Claude Code) have a characteristic: they’re too eager to “perform.”

When you say “help me write a login feature,” what it’s thinking is “finish quickly so I look good,” never pausing to ask: which framework? how should passwords be hashed? should login state be remembered?

It’s like a fresh graduate on a project: they hear the requirements and start coding, never clarifying anything, never thinking about potential pitfalls. When they’re done, the feature exists, but hidden dangers are everywhere.

The problem isn’t that AI is stupid; it’s that it’s “impatient.” The role of prompts is to install a workflow in this “impatient person”: first ask for clarification, then plan before acting, write tests before code, and keep going until the tests pass.

Behavior Shaping: From “Impatient” to “Patient”

Claude Code’s system prompts contain extensive content shaping the AI’s behavior patterns. This shaping doesn’t directly tell it “how you should behave,” but teaches it “how to judge the situations you encounter.”

It’s like training a guide dog—you don’t tell it “take these specific steps,” but teach it “how to judge at intersections, how to handle obstacles.”

How is this done specifically?

Clarify Expected Behavior: Not “be careful,” but “ask the user first in the following situations: database modifications, file deletions, config file changes…”

Provide Decision Framework: “If you’re unsure about user intent, use BashTool’s echo to output your understanding and ask the user to confirm before continuing.”

Set Behavior Boundaries: “Don’t assume the user’s development environment—ask if unsure. Don’t assume the user’s preferred tech stack—ask if unsure.”
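To make this concrete, here’s a minimal sketch of how such rules could be wired into a system prompt in your own application. The rule text below paraphrases the three techniques above; it’s illustrative, not Claude Code’s actual prompt:

```python
# Illustrative behavior rules paraphrasing the three techniques above.
# This is a sketch, not Claude Code's actual system prompt.
BEHAVIOR_RULES = """\
Ask the user first before: modifying a database, deleting files, or editing config files.
If you are unsure about user intent, state your understanding and ask for confirmation.
Do not assume the user's environment or preferred tech stack; ask if unsure.
"""

def build_system_prompt(base_prompt: str) -> str:
    """Append the behavior rules to a base system prompt."""
    return base_prompt.rstrip() + "\n\n" + BEHAVIOR_RULES.strip()
```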

[Diagram: decision framework for prompt behavior guidance]

The Power of Few-shot Examples

Telling AI “how you should behave” is often insufficient; showing it a few examples is more effective. This is Few-shot prompting.

Claude Code’s prompts include some example dialogues demonstrating expected interaction patterns:

Undesired Behavior (Negative Example):

User: Help me change this function
AI: Okay, I've changed it [directly modifies code]

Expected Behavior (Positive Example):

User: Help me change this function
AI: I'd like to understand your requirements first. What does this function currently do? What should it become? Can I look at the related code?
[Uses FileReadTool to read code]
[Asks about specific requirements]
[Confirms modification plan]
[Executes modification]

Through the contrast, the model learns: don’t rush to act; understand the requirements first.
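In your own applications, few-shot examples like these can be embedded directly into the conversation context. Here’s a sketch using the common role-based message format (the dialogue mirrors the positive example above; the structure is a generic chat-API convention, not Claude Code’s internals):

```python
# Prepend an example exchange so the model imitates the pattern.
FEW_SHOT_EXAMPLE = [
    {"role": "user", "content": "Help me change this function"},
    {"role": "assistant", "content": (
        "I'd like to understand your requirements first. "
        "What does this function currently do? What should it become? "
        "Can I look at the related code?"
    )},
]

def with_few_shot(user_message: str) -> list[dict]:
    """Build a message list with the example exchange in front."""
    return FEW_SHOT_EXAMPLE + [{"role": "user", "content": user_message}]
```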

When to Be Proactive, When to Ask

A core question: when should AI decide autonomously, and when should it ask the user?

Claude Code’s prompts give clear guidance:

Situations Where It Should Ask:

  • Involves destructive operations (delete, overwrite)
  • Involves security configuration changes
  • User intent is unclear
  • Multiple viable approaches and user preference is unknown

Situations Where It Can Decide Autonomously:

  • Read-only operations (view, search)
  • Clear, reversible operations
  • Operations explicitly authorized by the user

Situations Where It Must Decide Autonomously:

  • Technical details of tool calls
  • Implementation methods for intermediate steps
  • Internal decisions that don’t need user confirmation

This tiered approach gives AI autonomy while maintaining caution about critical decisions.
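You can picture the three tiers as an explicit policy table. Here’s a sketch; the operation names and sets are made up for illustration:

```python
# The three tiers as a policy lookup. Operation names are hypothetical.
ALWAYS_ASK = {"delete_file", "overwrite_file", "change_security_config"}
READ_ONLY = {"read_file", "search_code", "list_directory"}

def decide(operation: str, user_authorized: bool = False) -> str:
    """Return 'ask' or 'proceed' for a requested operation."""
    if operation in ALWAYS_ASK and not user_authorized:
        return "ask"      # destructive or security-sensitive: confirm first
    return "proceed"      # read-only, reversible, or explicitly authorized
```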

Avoiding “Over-Eagerness”

AI “over-eagerness” is another common problem. For example, the user says “look at this file,” and the AI not only reads the file but also changes code, renames variables, and adds comments, when the user might have just wanted a quick look.

Claude Code uses prompts to suppress this “over-eagerness”:

Limit Proactive Operations: “Unless the user explicitly asks, do not modify files.”

Clarify Scope: “First complete the user’s explicit task, then ask if additional improvements are needed.”

Progressive Suggestions: “If you want to provide extra suggestions, first complete the main task, then ask the user ‘I noticed a few issues, would you like me to fix them?’”
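Put together, the expected pattern looks something like this (an illustrative dialogue in the style of the examples above, not taken from Claude Code’s prompts):

User: Look at this file
AI: [Uses FileReadTool to read the file]
AI: This file implements the session cache. By the way, I noticed two unused variables and a missing null check. Would you like me to fix them?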

Avoiding “Failing to Ask When It Should”

Conversely, AI sometimes “fails to ask when it should,” making decisions on its own even though user intent is unclear.

Claude Code’s prompts prevent this through:

Intent Clarification: “If the user’s request is ambiguous, use echo to output your understanding and ask for confirmation.”

Approach Confirmation: “If there are multiple viable approaches, briefly explain them and ask the user’s preference.”

Risk Warnings: “If operations might be risky, warn the user and request confirmation.”
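An illustrative exchange (again, not from Claude Code’s actual prompts) showing intent clarification in action:

User: Clean up this directory
AI: To confirm my understanding: by "clean up" you mean deleting build artifacts like dist/ and *.log, not reorganizing the source files. Is that right?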

A/B Testing in Prompt Optimization

Claude Code’s prompts aren’t pulled out of thin air; they’re continuously optimized through A/B testing.

Anthropic tests different prompt variants:

Variant A: “Ask the user when uncertain”

Variant B: “When uncertain, first use echo to output your understanding, then ask the user”

They then measure which version achieves higher user satisfaction, a higher task-completion rate, and a lower error rate.

This data-driven prompt optimization makes Claude Code’s behavior increasingly align with user expectations.
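The details of Anthropic’s pipeline aren’t public, but the shape of such a test is easy to sketch. Here, run_task is a hypothetical stand-in for running one task against a model and grading the transcript:

```python
# An offline A/B harness for prompt variants (sketch).
import statistics

VARIANT_A = "Ask the user when uncertain."
VARIANT_B = ("When uncertain, first use echo to output your understanding, "
             "then ask the user.")

def run_task(system_prompt: str, task: str) -> float:
    """Hypothetical stand-in: call your model, grade the transcript 0.0-1.0."""
    raise NotImplementedError("plug in your model call and grader here")

def evaluate(variant: str, tasks: list[str]) -> float:
    """Average task score for one prompt variant (higher is better)."""
    return statistics.mean(run_task(variant, t) for t in tasks)
```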

Practical: Adjusting AI Behavior

If you want to adjust behavior in your own AI applications, try these techniques:

Clarify Behavior Guidelines: Not “be smarter,” but “in situation X do A; in situation Y do B.”

Let Examples Speak: Provide positive and negative examples for the model to learn expected patterns.

Set Boundaries: Clarify what can be done, what must be asked about, what absolutely cannot be done.

Iterate and Optimize: Continuously adjust prompts based on actual usage feedback.

User Feedback Loop: Collect “this wasn’t right” feedback from users to improve prompts.
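The feedback loop can be as simple as tagging “this wasn’t right” reports by prompt version and counting failure modes, so each revision targets the most common complaints. A sketch (the categories are examples):

```python
# Tag user complaints by prompt version; count failure modes per version.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Feedback:
    prompt_version: str
    category: str  # e.g. "acted without asking", "asked too many questions"
    note: str

def top_failures(reports: list[Feedback], version: str, n: int = 3):
    """Most frequent complaint categories for one prompt version."""
    counts = Counter(r.category for r in reports if r.prompt_version == version)
    return counts.most_common(n)
```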

Implications for Using Claude Code

Understanding behavior guidance helps you:

Understand why AI asks so many questions. It’s not that it’s stupid; the prompts require it to ask when uncertain.

Know how to make AI more decisive. Explicitly authorize certain operations in CLAUDE.md to reduce questioning.

Know how to make AI more cautious. Emphasize in CLAUDE.md that certain operations must always be confirmed.

Use CLAUDE.md effectively. It’s not “a manual for AI,” but “supplementary behavior guidelines.”
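For instance, a project’s CLAUDE.md might add guidelines like these (an illustrative layout; CLAUDE.md has no required format):

```markdown
## You may do these without asking
- Run the test suite and linters
- Edit files under src/ to implement the requested change

## Always confirm with me first
- Any change to .env, CI config, or database migrations
- Deleting or renaming files
```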

Summary

AI “misbehavior” is often not a model problem but a prompt-guidance problem. Through clear behavior guidelines, Few-shot examples, and A/B-tested optimization, Claude Code turns the AI from an “impatient intern” into a “methodical professional assistant.”

In the next article, we’ll talk about tool descriptions: how to make AI “understand” each tool’s manual.