If async tasks keep waking you up at 3 a.m., this guide is for you. We’ll build a resilient microservice with Rust + Axum that accepts jobs over HTTP, processes them reliably in the background, and shuts down gracefully.
The mental model: a busy coffee shop
An HTTP request is like a customer placing an order. You shouldn’t block the cashier until the coffee is brewed. Instead: accept the order, hand it to the kitchen, and move on to the next customer. Our job queue is the order slip; workers are the baristas.
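The order-slip queue is just a bounded channel. Before bringing in Tokio, a minimal standard-library sketch (the names here are illustrative) shows the property we care about: when the queue is full, the cashier gets immediate backpressure instead of the orders piling up without limit.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // A bounded queue with room for two "order slips"
    let (tx, rx) = sync_channel::<&str>(2);

    tx.try_send("latte").unwrap();
    tx.try_send("espresso").unwrap();

    // Queue is full: the third order is rejected right away
    assert!(matches!(tx.try_send("mocha"), Err(TrySendError::Full(_))));

    // A "barista" takes one order, freeing a slot
    assert_eq!(rx.recv().unwrap(), "latte");
    assert!(tx.try_send("mocha").is_ok());

    println!("backpressure works: full queue pushes back on the cashier");
}
```

The async service below uses `tokio::sync::mpsc::channel` with the same bounded semantics, except that an awaited `send` waits for a free slot instead of failing.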
Dependencies
```toml
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
tokio-util = "0.7"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tower = "0.4"
rand = "0.8"
```
HTTP handlers (the “order desk”)
```rust
use std::sync::Arc;

use axum::{
    extract::Extension,
    routing::{get, post},
    Json, Router,
};
use serde::{Deserialize, Serialize};
use tokio::sync::mpsc;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Job {
    pub id: u64,
    pub task_type: String,
    pub payload: String,
}

type JobSender = mpsc::Sender<Job>;

async fn submit_job(
    Extension(sender): Extension<Arc<JobSender>>,
    Json(payload): Json<Job>,
) -> &'static str {
    // `send` waits while the queue is full (natural backpressure); it only
    // errors once the worker has shut down and dropped the receiver.
    match sender.send(payload).await {
        Ok(_) => "✅ Job accepted. Processing shortly...",
        Err(_) => "❌ Worker is shutting down. Please try later.",
    }
}

async fn health_check() -> &'static str {
    "Service status: All good!"
}

fn create_app(sender: Arc<JobSender>) -> Router {
    Router::new()
        .route("/submit", post(submit_job))
        .route("/health", get(health_check))
        .layer(Extension(sender))
}
```
Background worker: efficient like a sorting center
```rust
use tokio::sync::mpsc::Receiver as JobReceiver;
use tokio::time::{sleep, Duration};
use tokio_util::sync::CancellationToken;

async fn worker_loop(mut receiver: JobReceiver<Job>, cancel: CancellationToken) {
    println!("🚀 Worker started");
    // Pull jobs until the channel closes or shutdown is requested.
    while let Some(job) = tokio::select! {
        job = receiver.recv() => job,
        _ = cancel.cancelled() => {
            println!("🛑 Shutdown signal received. Exiting worker...");
            None
        }
    } {
        println!("🔧 Processing job: {:?}", job);
        process_job_with_retry(&job).await;
    }
    println!("👷 Worker exited");
}
```
Retry policy: exponential backoff
```rust
use rand::Rng;

async fn process_job_with_retry(job: &Job) {
    let mut attempts = 0;
    let max_retries = 3;
    loop {
        attempts += 1;
        let success = try_process_job(job).await;
        if success {
            println!("✅ Job {} completed", job.id);
            break;
        }
        if attempts >= max_retries {
            println!("💔 Job {} failed after {} attempts", job.id, max_retries);
            break; // consider a dead-letter queue here
        }
        // Exponential backoff: 1s, 2s, 4s, ...
        let delay = std::time::Duration::from_secs(2_u64.pow(attempts - 1));
        println!("⚠️ Job {} attempt {} failed. Retrying in {}s", job.id, attempts, delay.as_secs());
        sleep(delay).await;
    }
}

async fn try_process_job(job: &Job) -> bool {
    println!("🔄 Handling task type: {}", job.task_type);
    // Simulate a 50% failure rate
    let success = rand::thread_rng().gen_bool(0.5);
    if success {
        println!("🎉 Job processed successfully");
    } else {
        println!("😞 Job processing failed. Network issues?");
    }
    success
}
```
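The schedule above grows unbounded with the attempt count. One refinement is to factor the delay into a pure helper and cap it; a small sketch (the helper name and the 30-second cap are illustrative, not part of the service above):

```rust
use std::time::Duration;

/// Exponential backoff: 1s, 2s, 4s, ... capped at `max`.
/// `attempt` is 1-based, matching the retry loop.
fn backoff_delay(attempt: u32, max: Duration) -> Duration {
    // Shift instead of pow, clamped so large attempt counts can't overflow
    let secs = 1u64 << attempt.saturating_sub(1).min(16);
    Duration::from_secs(secs).min(max)
}

fn main() {
    let cap = Duration::from_secs(30);
    for attempt in 1..=6 {
        println!("attempt {} -> wait {:?}", attempt, backoff_delay(attempt, cap));
    }
    assert_eq!(backoff_delay(1, cap), Duration::from_secs(1));
    assert_eq!(backoff_delay(3, cap), Duration::from_secs(4));
    assert_eq!(backoff_delay(6, cap), cap); // 32s would exceed the cap
}
```

A pure function like this is also trivial to unit-test, unlike a delay buried inside the retry loop. Adding random jitter on top is a common next step to avoid thundering-herd retries.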
Graceful shutdown: say goodbye politely
```rust
use std::net::SocketAddr;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("🚀 Booting async microservice...");

    // Bounded job queue: capacity 100 gives natural backpressure
    let (tx, rx) = mpsc::channel::<Job>(100);
    let tx = Arc::new(tx);

    // Cancellation token shared between server and worker
    let cancel = CancellationToken::new();

    // Spawn the background worker
    let worker_handle = tokio::spawn(worker_loop(rx, cancel.clone()));

    // Build the app and bind (axum 0.7 no longer has axum::Server;
    // it uses a Tokio listener plus axum::serve instead)
    let app = create_app(tx.clone());
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    let listener = TcpListener::bind(addr).await?;
    println!("🌐 Listening on http://{}", addr);

    // Graceful shutdown: Ctrl+C stops the server and cancels the worker
    let shutdown_signal = {
        let cancel = cancel.clone();
        async move {
            tokio::signal::ctrl_c()
                .await
                .expect("failed to listen for shutdown");
            println!("🛑 Ctrl+C received. Starting graceful shutdown...");
            cancel.cancel();
        }
    };

    axum::serve(listener, app)
        .with_graceful_shutdown(shutdown_signal)
        .await?;

    // Drop the sender to close the channel, then wait for the worker
    // to finish its current job and exit cleanly.
    drop(tx);
    let _ = worker_handle.await;

    println!("👋 Service stopped gracefully");
    Ok(())
}
```
Suggested project structure
```text
src/
├── main.rs           # entrypoint
├── handlers/         # HTTP handlers
│   ├── mod.rs
│   └── jobs.rs
├── workers/          # background workers
│   ├── mod.rs
│   └── job_worker.rs
├── models/           # data models
│   ├── mod.rs
│   └── job.rs
└── config/           # configuration management
    ├── mod.rs
    └── settings.rs
```
Quick test with curl
```bash
# Run the service
cargo run

# Health check
curl http://localhost:3000/health

# Submit a job
curl -X POST http://localhost:3000/submit \
  -H "Content-Type: application/json" \
  -d '{"id": 1, "task_type": "email", "payload": "send welcome email"}'
```
Level up ideas
- Persistent job-status store with Redis or a database
- A `/status/:id` endpoint to query job status
- Horizontal scaling with multiple instances behind a load balancer
- Observability: Prometheus + Grafana metrics and alerts
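On the horizontal-scaling point: even within one process you can fan a single queue out to several workers. Tokio's mpsc receiver can't be cloned, so a common pattern is to share one receiver behind a mutex. A thread-based sketch of the same idea (illustrative, not part of the service above):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u64>();
    let rx = Arc::new(Mutex::new(rx)); // one receiver, shared by all workers
    let processed = Arc::new(AtomicU64::new(0));

    let workers: Vec<_> = (0..4)
        .map(|_| {
            let rx = Arc::clone(&rx);
            let processed = Arc::clone(&processed);
            thread::spawn(move || loop {
                // Hold the lock only long enough to take one job
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok(n) => {
                        processed.fetch_add(n, Ordering::Relaxed);
                    }
                    Err(_) => break, // channel closed and drained: exit
                }
            })
        })
        .collect();

    for n in 1..=100 {
        tx.send(n).unwrap();
    }
    drop(tx); // close the queue so workers exit after draining it

    for w in workers {
        w.join().unwrap();
    }
    assert_eq!(processed.load(Ordering::Relaxed), 5050); // 1 + 2 + ... + 100
    println!("4 workers drained all 100 jobs");
}
```

The same shape works in the async service with `Arc<Mutex<Receiver<Job>>>` and several spawned `worker_loop` tasks; whichever worker wins the lock takes the next job.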
Example: status endpoint skeleton
```rust
use axum::{extract::Path, http::StatusCode, Json};
use serde::Serialize;

#[derive(Debug, Serialize)]
pub struct JobStatus {
    pub id: u64,
    pub state: String, // e.g. "queued" | "running" | "done" | "failed"
}

async fn get_job_status(
    Path(job_id): Path<u64>,
) -> Result<Json<JobStatus>, StatusCode> {
    // TODO: look up the real status in Redis or a database;
    // return Err(StatusCode::NOT_FOUND) for unknown ids.
    Ok(Json(JobStatus { id: job_id, state: "queued".to_string() }))
}
```
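Before reaching for Redis, an in-memory store is enough to prototype the lookup. A standard-library sketch (the `StatusStore` name and its states are illustrative); in the real service it would be shared with handlers and the worker via `Extension`, just like the job sender:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

#[derive(Debug, Clone, PartialEq)]
enum JobState {
    Queued,
    Running,
    Done,
    Failed,
}

/// Thread-safe in-memory job-status store.
#[derive(Clone, Default)]
struct StatusStore {
    inner: Arc<Mutex<HashMap<u64, JobState>>>,
}

impl StatusStore {
    fn set(&self, id: u64, state: JobState) {
        self.inner.lock().unwrap().insert(id, state);
    }
    fn get(&self, id: u64) -> Option<JobState> {
        self.inner.lock().unwrap().get(&id).cloned()
    }
}

fn main() {
    let store = StatusStore::default();
    store.set(1, JobState::Queued);
    store.set(1, JobState::Done); // the worker updates state as it goes
    assert_eq!(store.get(1), Some(JobState::Done));
    assert_eq!(store.get(42), None); // unknown id -> 404 in the handler
    println!("status store round-trip works");
}
```

A `std::sync::Mutex` is fine here because the critical sections are short and nothing is held across an `.await`; for longer-held locks in async code, `tokio::sync::Mutex` is the safer choice.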
Why Rust?
Rust combines C/C++ level performance with memory safety. For high-concurrency services, async Rust with Tokio is a natural fit.
- Memory usage: typically far lower than in garbage-collected languages, with no GC pauses
- CPU efficiency: in common benchmarks well ahead of Node.js and close to C++
- Concurrency: handles very large numbers of connections with modest resources
Production deployment (Docker)
```dockerfile
# Build stage
FROM rust:1.70 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Slim runtime stage
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
# "async-microservice" must match the package name in Cargo.toml
COPY --from=builder /app/target/release/async-microservice /usr/local/bin/
EXPOSE 3000
CMD ["async-microservice"]
```
Build with care, and your service will run like a Swiss watch — and let you actually sleep at night.