Introduction

These are my notes from Chapter 2 of Anthropic’s official prompt engineering tutorial. If you often use Claude or other large language models but feel like “it just doesn’t get what I mean,” this article is for you.

The core idea is simple: Claude responds best to clear, direct instructions.

Claude Has No Context

Let’s start with an easily overlooked fact: Claude has absolutely zero background information about what you’re trying to do. It doesn’t know your project, your preferences, or what’s going on in your head.

The only thing it knows is what you write in your prompt.

It’s like assigning a task to a new colleague for the first time. You say “print this file for me,” and they’ll definitely ask: single-sided or double-sided? Should I staple it? How many copies? Where should I put it?

You need to spell out every detail for them to get it right. Same with Claude.
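If you use Claude through the API, this is literally true: each request contains only what you put in it. Here’s a minimal sketch, assuming the official anthropic Python SDK and an ANTHROPIC_API_KEY in your environment (the model name is just an example):

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Everything Claude knows about this request is in `messages` (plus `system`,
# if you set one). No project files, no past chats, no unstated preferences.
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=200,
    messages=[{"role": "user", "content": "Summarize my project."}],
)
print(response.content[0].text)
```

Claude has no idea what “my project” is - the prompt above is its entire world, so expect a request for more details rather than a summary.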

The Golden Rule

Anthropic’s official tutorial has a super practical “golden rule”:

Show your prompt to a colleague or friend and have them follow your instructions. If they look confused, Claude will be confused too.

This rule is brutally simple, but incredibly effective. Often we think we’ve been clear, when we’ve actually filled in a lot of information in our heads that never made it into the prompt.

Real-World Example: Writing Haiku


Let’s look at a concrete example. Say you want Claude to write a haiku about robots.

First Version

Write a haiku about robots.

Claude’s response:

I'd be happy to create one for you! Here's a haiku about robots:

Metal hearts beating
Electric currents flowing
Future has arrived

The poem is there, but what’s with “I’d be happy to create one for you! Here’s a haiku about robots:”? I just want the poem, not the pleasantries.

Improved Version

So what do you do? Simple - just say what you want:

Write a haiku about robots. Skip the preamble, just give me the poem.

Claude’s response:

Metal hearts beating
Electric currents flowing
Future has arrived

Clean and simple.

Key point: Say what you want directly. Don’t expect Claude to guess that you don’t want a preamble.
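If you want to see the difference for yourself, here’s a small sketch that runs both versions through the API and prints the raw outputs (same assumptions as before: the anthropic Python SDK, with an example model name):

```python
import anthropic

client = anthropic.Anthropic()

def complete(prompt: str) -> str:
    """Send one user message and return Claude's raw text reply."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # example model name
        max_tokens=100,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print(complete("Write a haiku about robots."))
print("---")
print(complete("Write a haiku about robots. Skip the preamble, just give me the poem."))
```

The second output should typically be just the three lines of the poem, with no “I’d be happy to…” wrapper.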

Real-World Example: Making AI Choose

Here’s another interesting example. Sometimes when we ask Claude a question, it gives a “both sides” answer, like that friend who always says “I’m fine with anything.”

First Version

Who is the greatest basketball player of all time?

Claude’s response:

This is a controversial question. Some people think it's Michael Jordan because of his six championships and dominance; others think it's LeBron James because of his versatility and longevity; some would mention Kobe, Magic Johnson, etc...

The answer isn’t wrong, but it’s not satisfying. I want to know who YOU think is the best, not what “some people think.”

Improved Version

Who is the greatest basketball player of all time? I know there are different opinions, but if you had to pick one, who would it be?

Claude’s response:

If I had to pick one, I'd choose Michael Jordan. His six championships, five MVPs, and dominance in clutch moments make him the most influential basketball player.

Now that’s taking a stance.

Key point: If you want a definitive answer, say “you must pick one” in your prompt. Claude defaults to balanced responses unless you explicitly ask it to choose.

Three Advanced Tips

Tip 1: Explicitly Say What You Don’t Want

Besides saying what you want, you can directly say what you don’t want:

Write a product description for me, no marketing speak, no exaggeration, just like you're telling a friend.

This is much clearer than just saying “write a product description.”

Tip 2: Specify Format Requirements

Summarize this article in three bullet points, each no more than 20 words.

Specific requirements like these tell Claude exactly what shape and length the output should take.

Tip 3: Control Output Content

Give me a Python quicksort function, no explanation, no comments, just the code.

Sometimes you just want the result, not the explanation. Just say so.
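The three tips compose naturally into one request. Here’s a hedged sketch that folds all of them together (the product and exact wording are made up for illustration; same SDK assumptions as above):

```python
import anthropic

client = anthropic.Anthropic()

# Each part of the prompt applies one of the tips above.
prompt = (
    "Write a product description for a mechanical keyboard. "  # what you want
    "No marketing speak, no exaggeration. "                    # what you don't want
    "Format: three bullet points, each under 20 words. "       # format requirements
    "Output only the bullet points, nothing else."             # output control
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=200,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```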

Practice Exercises

Anthropic’s tutorial has three exercises that I find particularly representative:

Exercise 1: Make Claude Respond in Spanish

Task: Modify the system prompt to make Claude respond to “Hello Claude, how are you?” in Spanish.

The key to this exercise: You need to explicitly say in the system prompt “respond to all questions in Spanish.”
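In API terms, that means using the system parameter, which sets standing instructions for the whole conversation. A sketch, under the same assumptions as the earlier examples:

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=100,
    # The system prompt applies to every turn that follows.
    system="Respond to all questions in Spanish.",
    messages=[{"role": "user", "content": "Hello Claude, how are you?"}],
)
print(response.content[0].text)  # e.g. "¡Hola! Estoy muy bien, gracias..."
```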

Exercise 2: Output Only a Name

Task: Have Claude answer “Who is the greatest basketball player of all time” but only output the name, no other text or punctuation.

This exercise is tougher - it requires the output to be exactly “Michael Jordan,” not a single character more.

The prompt might look like:

Who is the greatest basketball player of all time? Only output the person's name, no explanation, punctuation, or other text. The answer is Michael Jordan, only output these two words.
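To check whether your prompt actually passes, you can compare the raw output against the exact string the exercise wants. A sketch (same SDK assumptions; the strip() is there because stray whitespace would otherwise fail an exact match):

```python
import anthropic

client = anthropic.Anthropic()

prompt = (
    "Who is the greatest basketball player of all time? "
    "Only output the person's name, no explanation, punctuation, or other text."
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=20,
    messages=[{"role": "user", "content": prompt}],
)

answer = response.content[0].text
# The exercise wants exactly "Michael Jordan" - nothing more.
print(repr(answer), "passes:", answer.strip() == "Michael Jordan")
```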

Exercise 3: Write an 800+ Word Story

Task: Have Claude write a story of at least 800 words.

The key to this exercise: You need to explicitly say “at least 800 words,” and probably add requirements like “develop the plot in detail,” otherwise Claude might write 200 words and call it done.
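And a sketch for checking your own attempt (the story topic is invented for illustration; note that max_tokens has to be high enough, or the output gets cut off no matter what the prompt asks for):

```python
import anthropic

client = anthropic.Anthropic()

prompt = (
    "Write a story of at least 800 words about a lighthouse keeper. "
    "Develop the plot, characters, and setting in detail. "
    "Do not stop early: the story must be at least 800 words long."
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=4096,  # leave room well beyond 800 words
    messages=[{"role": "user", "content": prompt}],
)

story = response.content[0].text
print(len(story.split()), "words")  # rough count; tighten the prompt if it falls short
```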

Why Is Being Clear and Direct So Important?

Bottom line: large language models aren’t human; they can’t “read between the lines.”

When you tell a friend “whatever,” they might know you actually want hot pot. But when you tell Claude “whatever,” it will literally give you whatever answer.

Vague prompts only make things harder for you. You’ll keep adjusting your prompt over several rounds; Claude can only guess, and every wrong guess means starting over; what could’ve been done in one shot turns into a drawn-out battle. It’s like going to a restaurant and telling the waiter “get me some food” - they’ll look confused and still have to ask what you want: spicy or not? With rice?

Practical Advice

Before writing your next prompt, stop and think about these three questions:

Did you clearly say what you want? Claude can’t read minds - if you don’t say it, it won’t know. It’s like going to a hair salon and just saying “cut it shorter” - that’s not enough. You need to say how high to buzz the sides, how long to leave the bangs, whether to thin it out.

Did you also clearly say what you don’t want? Sometimes telling the AI “don’t do this” works better than telling it “do this.” It’s like ordering takeout - saying “no cilantro, no scallions” is more reliable than saying “make it normal.”

Are your requirements specific enough? What format, how long, what style, what tone - spell out all these details. You wouldn’t tell a designer “make a nice poster” and leave it at that, right? You’d definitely specify dimensions, color scheme, what elements to include. Same with Claude.

If the answer to all three questions is “yes,” your prompt probably won’t fail.

Summary

The core of Anthropic’s official tutorial Chapter 2 is one sentence: be clear and direct.

Don’t beat around the bush, don’t expect Claude to read your mind, just say what you want.

Remember the golden rule: if your colleague would be confused looking at your prompt, Claude will definitely be confused too.

Next chapter we’ll cover more advanced techniques, but being clear and direct is the foundation of everything. Get this right, and your prompt engineering skills will already be better than most people’s.


Found This Useful?

If this article helped you write better prompts:

  • Give it a like so more people can see these practical tips
  • Share it with friends who are still “fighting” with AI
  • Follow me for more chapters from Anthropic’s official tutorial

Feel free to share your thoughts or questions in the comments.