Last week I had dinner with a friend who spent three days debugging a Rust service. The service was running, Tokio was spinning, CPU usage was embarrassingly low, and memory was rock solid. But latency kept spiking and requests randomly stalled. The dashboard said everything was fine; there wasn’t even a warning in the logs.

I asked him: “Did you use std::fs::read inside an async function?”

He froze for a second, then swore.

Yep, he’d stepped on that classic Rust async trap—blocking operations inside async code. The interesting thing about this async blocking pitfall is that it compiles, runs, passes tests, even survives stress testing. But the moment you hit high concurrency in production, the whole service chokes like someone’s got their hands around its throat.

The Restaurant Waiter Story

To understand this problem, let’s talk about how Tokio works. Think of Tokio’s worker threads as waiters in a restaurant.

One waiter handles multiple tables at once. Here’s how they work: walk to table A and ask “is your food ready?” If not, check table B. If not, check table C. After one round, maybe table A’s food is ready, so they serve it. This approach is super efficient: one waiter can handle a dozen tables simultaneously.

┌───────────────────────────┐
│   Worker Thread (Waiter)  │
├───────────────────────────┤
│ poll(Table A) → Not ready │
│ poll(Table B) → Not ready │
│ poll(Table C) → Ready!    │
│ poll(Table A) → Ready!    │
└───────────────────────────┘

This is the essence of async programming—don’t wait, poll. Each poll should return quickly, either “I’m ready” or “not yet, go do something else.”
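If you want to see what that contract looks like in code, here is a minimal hand-written Future, just a sketch with made-up names: the first poll answers “not ready” and arranges to be asked again, the second answers “ready”. The important property is that poll itself returns immediately either way.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A toy future that is "not ready" on the first poll and "ready" on the second.
struct TwoPolls {
    polled_once: bool,
}

impl Future for TwoPolls {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.polled_once {
            Poll::Ready("food is ready")
        } else {
            self.polled_once = true;
            // Ask to be polled again. A real future would instead register the
            // waker with whatever it is waiting on: a socket, a timer, a channel.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

#[tokio::main]
async fn main() {
    let answer = TwoPolls { polled_once: false }.await;
    println!("{answer}"); // prints after exactly two polls
}

Tokio’s worker thread is the waiter calling poll; every async fn you write compiles down to a state machine with this same shape.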

Now here’s the problem. What if the waiter walks to a table and the customer starts chatting about life for a full minute? Every other table is left waiting, food gets cold with no one to serve it, new customers have no one to greet them. The entire restaurant’s service quality collapses instantly.

That’s exactly what happens when you do blocking operations in async code.

That Innocent-Looking Code

Check out this code:

async fn handle_request() {
    let config = std::fs::read("config.json").unwrap();
    process(config).await;
}

Looks normal, right? Just reading a config file. But std::fs::read is a blocking call—it makes the current thread sit there waiting until the file is read. In regular synchronous code, that’s fine. But in an async function, this one line blocks the entire worker thread.

Not just one task—every task on that thread.

What’s worse, the Rust compiler has no idea. It won’t give you any warnings because syntactically this is perfectly legal. The compiler doesn’t know std::fs::read blocks; it just sees a normal function call.
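You can watch the damage happen with a small, self-contained demo. This is a sketch that assumes the tokio crate with the rt, macros, and time features enabled; the two-second std::thread::sleep stands in for a slow blocking read, and the single-threaded runtime makes the effect obvious:

use std::time::{Duration, Instant};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let start = Instant::now();

    // A victim task that only wants to tick every 100 ms.
    let ticker = tokio::spawn(async move {
        for i in 0..5 {
            tokio::time::sleep(Duration::from_millis(100)).await;
            println!("tick {i} at {:?}", start.elapsed());
        }
    });

    // The offender: a blocking call inside async code.
    let blocker = tokio::spawn(async {
        std::thread::sleep(Duration::from_secs(2)); // blocks the whole worker thread
    });

    let _ = tokio::join!(ticker, blocker);
}

With the blocking call in place, the first tick shows up after roughly two seconds instead of 100 milliseconds. Swap std::thread::sleep for tokio::time::sleep(...).await and the ticks arrive on time.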

This is Rust’s design philosophy: it trusts the programmer rather than doing runtime magic behind your back. Go’s runtime will quietly shuffle goroutines onto other threads when one of them blocks. JavaScript barely has blocking I/O to begin with. But Rust says: “You’re an adult, you should know what you’re doing.”

Three Common Ways to Step on This Trap

Besides file I/O, there are two other common ways to hit this trap.

First: using the standard library’s Mutex:

async fn handle(state: Arc<Mutex<Data>>) {
    let guard = state.lock().unwrap();
    guard.update();
    do_something_async().await;  // Disaster happens here
}

The problem here is you’re holding a lock while awaiting. Imagine: waiter A grabs the kitchen’s only knife, then goes to chat with customers. Waiter B wants to cut vegetables, finds the knife is gone, has to wait. Waiter C also wants to cut vegetables, also has to wait. The entire kitchen grinds to a halt.
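If you want to see the knife analogy play out in real code, here is a minimal sketch. It assumes a single-threaded Tokio runtime, and the names are made up for illustration: task A holds a std::sync::Mutex across an .await, task B then calls lock() and parks the only worker thread, so task A can never resume to release the lock, and the whole program hangs.

use std::sync::Mutex;
use std::time::Duration;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let state = Mutex::new(0u32);

    // Task A: takes the lock, then awaits while still holding it (the mistake).
    let task_a = async {
        let mut guard = state.lock().unwrap();
        *guard += 1;
        tokio::time::sleep(Duration::from_millis(10)).await; // yields with the lock held
        *guard += 1;
    };

    // Task B: tries to take the same lock with a blocking call.
    let task_b = async {
        // Give task A a moment to grab the lock first, so the interleaving is deterministic.
        tokio::time::sleep(Duration::from_millis(1)).await;
        // This parks the only worker thread until the lock is free, but the lock
        // can only be freed once the runtime gets the thread back. Deadlock.
        let guard = state.lock().unwrap();
        println!("task B saw {}", *guard);
    };

    tokio::join!(task_a, task_b);
    println!("this line is never reached");
}

One nuance: if you try to tokio::spawn a future that holds a std MutexGuard across an .await, the compiler will actually refuse, because the guard isn’t Send. The trap bites in futures that never go through spawn, like the ones above.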

Second: CPU-intensive computation:

async fn handle() {
    let result = calculate_prime_numbers(1000000);  // Takes 200ms
    send_response(result).await;
}

Many people think only I/O operations count as blocking, but CPU-intensive work blocks just the same. Your waiter is standing there doing mental math for 200 milliseconds while every other table waits. Async is not the same as parallel: it lets one thread juggle many waiting tasks, but it doesn’t add CPU cores for heavy computation.

The Right Way to Do It

Once you know where the problem is, the solutions are straightforward.

For file I/O, use Tokio’s async version:

// Wrong
let data = std::fs::read("file.txt").unwrap();

// Right
let data = tokio::fs::read("file.txt").await.unwrap();

For operations that must block, use spawn_blocking to offload to a dedicated thread pool:

let data = tokio::task::spawn_blocking(|| {
    // runs on the blocking thread pool, not on an async worker
    std::fs::read("config.json").unwrap()
}).await.unwrap(); // this unwrap handles the JoinError

It’s like when a restaurant gets an order that takes a long time to prepare—the waiter doesn’t make it themselves. They hand it to the kitchen and continue serving other tables. When the kitchen finishes, they notify the waiter, who then serves it.
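The same trick covers the CPU-heavy case from earlier. A sketch, reusing the hypothetical calculate_prime_numbers and send_response from the example above:

async fn handle() {
    // Run the 200 ms computation on Tokio's blocking thread pool,
    // so the async worker threads stay free to poll other tasks.
    let result = tokio::task::spawn_blocking(|| calculate_prime_numbers(1_000_000))
        .await
        .unwrap();
    send_response(result).await;
}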

For locks, either use Tokio’s async Mutex, or make sure to release the lock before awaiting:

// Option 1: Use async Mutex
let guard = state.lock().await;
guard.update();
do_something_async().await;

// Option 2: Release lock before await
{
    let guard = state.lock().unwrap();
    guard.update();
}  // Lock released here
do_something_async().await;
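A variant of Option 2, if the extra scope feels awkward: drop the guard explicitly before awaiting. Same assumptions as above:

// Option 2, variant: drop the guard explicitly before the await
let guard = state.lock().unwrap();
guard.update();
drop(guard);  // lock released here
do_something_async().await;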

The 9x Performance Boost Story

Back to my friend’s story. His service loaded a config file with std::fs::read, and the read wasn’t confined to startup; it sat right in the request path. Normally no problem: the config file was small and reading it was fast. But under high concurrency that “fast” operation got amplified. Every request read the file once, hundreds of requests arrived simultaneously, and the worker threads got completely jammed.

After changing that one line to tokio::fs::read, throughput jumped 9x instantly. No other changes, just one line of code. This kind of Rust performance optimization is often that simple and brutal—find the blocking point, kill it.

This is both the beauty and the trap of Rust async programming. It gives you extreme performance and control, but only if you know the rules. The compiler won’t stop you from making mistakes; it’ll just let production flames teach you the lesson.

Every Rust async engineer steps on this trap. The only difference is how many times. Smart engineers remember after stepping on it once, then share the lesson with others to help them avoid the same pain.


If you found this useful, share it with your Rust friends—might save them three days of debugging. Follow me for more on those love-hate aspects of Rust design.