At 10 p.m., only a few warm lights are still on in the coffee shop. I’m behind the bar, watching the new teammate tie on an apron.
“Remember,” I hand over a walkie-talkie, “everyone has their own inbox. When an order comes in, don’t shout to the whole shop—just slip it into the right box. They’ll make one drink at a time, and no one fights over the kettle. The shop stays calm.”
He nods. I point to the duty roster on the wall: “What if someone drops off? Simple. We have a supervisor on duty. If someone goes down, the supervisor brings them back up or swaps in a fresh hire. Everyone’s name is in the front desk directory. If you need someone, call them by name.”
“Doesn’t sound complicated,” he smiles.
“Right.” I slide over the checklist. “In Rust, this is the Actor model. Don’t think of concurrency as a group chat. It’s more like a small, orderly shop: messages are orders, Actors are baristas, addresses are walkie-talkies, supervision is the manager, and the directory is the registry. Let’s open for business.”
Let’s set up this small shop. The code is short; the path is clear.
1) Define Actor: set the rules for the barista
```rust
#[async_trait::async_trait]
pub trait Actor: Send + 'static {
    /// The kind of order this barista accepts.
    type Message: Send + 'static;
    /// Called once per message, in arrival order.
    async fn handle(&mut self, msg: Self::Message);
}
```
The rules are really just two: what kind of order you take (`Message`), and when an order arrives, don’t fuss, just take it and process it yourself (`handle`). The one who takes the order decides its fate.
2) Address `Addr<T>`: give the barista a walkie-talkie
I put the walkie-talkie in his hand and say: “When the red light blinks, press and speak. The order flies into the other person’s box. No chasing, no yelling to the entire shop.” In code, this walkie-talkie is the sending half of an `mpsc` channel, wrapped as `Addr<T>`.
```rust
use tokio::sync::mpsc;

/// The walkie-talkie: a cloneable sending handle to one actor's inbox.
pub struct Addr<T> {
    sender: mpsc::Sender<T>,
}

// Implement Clone by hand: deriving it would wrongly require `T: Clone`,
// but the underlying Sender clones regardless of the message type.
impl<T> Clone for Addr<T> {
    fn clone(&self) -> Self {
        Self { sender: self.sender.clone() }
    }
}

impl<T> Addr<T> {
    pub async fn send(&self, msg: T) -> Result<(), mpsc::error::SendError<T>> {
        self.sender.send(msg).await
    }
}
```
3) `spawn_actor`: clock the barista in
He clocks in, ties the apron, and stands at his station. There’s a click in the inbox—an order comes in; he processes it. No orders? He waits quietly for the next one. That’s “being on shift.”
```rust
use tokio::sync::mpsc;
use tokio::task::JoinHandle;

pub fn spawn_actor<A>(mut actor: A, capacity: usize) -> (Addr<A::Message>, JoinHandle<()>)
where
    A: Actor,
{
    let (tx, mut rx) = mpsc::channel::<A::Message>(capacity);
    let handle = tokio::spawn(async move {
        // The shift: one order at a time, until every Addr has been dropped.
        while let Some(msg) = rx.recv().await {
            actor.handle(msg).await;
        }
    });
    (Addr { sender: tx }, handle)
}
```
- If the inbox is full, `send` will wait. That’s natural throttling, like a line at the counter.
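You can watch that throttling happen with a bounded channel from the standard library. A minimal std-only sketch, separate from the async code above, purely to illustrate the full-queue behavior (the drink names and `place_order` helper are made up for the example):

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};

/// Try to place an order without blocking; report whether the line had room.
fn place_order(tx: &SyncSender<&'static str>, order: &'static str) -> bool {
    match tx.try_send(order) {
        Ok(()) => true,
        // Backpressure: the queue is full, the caller must wait or retry.
        Err(TrySendError::Full(_)) => false,
        Err(TrySendError::Disconnected(_)) => false,
    }
}

fn main() {
    // A bounded inbox with room for 2 orders, like a short counter line.
    let (tx, rx): (SyncSender<&'static str>, Receiver<&'static str>) = sync_channel(2);
    assert!(place_order(&tx, "latte"));
    assert!(place_order(&tx, "mocha"));
    assert!(!place_order(&tx, "flat white")); // full: natural throttling
    assert_eq!(rx.recv().unwrap(), "latte");  // serving one drink...
    assert!(place_order(&tx, "flat white")); // ...frees a slot
    println!("backpressure works");
}
```

The async `send` in the article does the same thing, except that instead of returning "full" it simply parks the sender until a slot opens.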
4) `supervise_actor`: the manager keeps watch
The supervisor is the last to leave. They don’t talk much, just watch two things: is the person still up, and is the shop still running? If someone suddenly collapses, the supervisor lets the shop breathe, brings the person back up, or starts a new shift. Steady is their only style.
```rust
use std::time::Duration;

pub fn supervise_actor<A, F>(make: F, capacity: usize) -> Addr<A::Message>
where
    A: Actor,
    F: Fn() -> A + Send + Sync + 'static,
{
    let (addr_tx, addr_rx) = std::sync::mpsc::sync_channel::<Addr<A::Message>>(1);
    std::thread::spawn(move || {
        let rt = tokio::runtime::Runtime::new().expect("rt");
        rt.block_on(async move {
            loop {
                let (addr, handle) = spawn_actor(make(), capacity);
                let _ = addr_tx.send(addr.clone());
                // Wait for the actor to finish (normal exit or crash).
                let _ = handle.await;
                // Take a breath, then bring someone back on shift
                // (continuous supervision).
                tokio::time::sleep(Duration::from_millis(200)).await;
            }
        });
    });
    // First available address. Note: after a restart the replacement actor
    // gets a fresh inbox, so this handle goes stale; a production design
    // would route through a stable forwarding address instead.
    addr_rx.recv().expect("supervisor failed to start")
}
```
In production, you’d run supervision in the same Tokio runtime, with backoff, caps, and alerts—like an old shop’s SOP, every piece in place.
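The backoff part of that SOP pins down nicely as a pure function: each restart doubles the pause, up to a cap. A minimal sketch; the 200 ms base and 5 s cap here are illustrative numbers, not recommendations:

```rust
use std::time::Duration;

/// Exponential backoff with a cap: 200ms, 400ms, 800ms, ... up to 5s.
/// Base and cap are illustrative; tune them per service.
fn restart_delay(attempt: u32) -> Duration {
    let base_ms: u64 = 200;
    let cap_ms: u64 = 5_000;
    // Clamp the shift so a long outage can't overflow the multiplier.
    let ms = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(ms.min(cap_ms))
}

fn main() {
    assert_eq!(restart_delay(0), Duration::from_millis(200));
    assert_eq!(restart_delay(1), Duration::from_millis(400));
    assert_eq!(restart_delay(10), Duration::from_millis(5_000)); // capped
    println!("backoff schedule ok");
}
```

In the supervisor loop above, you would replace the fixed 200 ms sleep with `restart_delay(attempt)` and reset `attempt` after a healthy stretch of uptime.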
5) Named registry: the front desk directory
There’s a notebook in the front drawer with everyone’s name and walkie-talkie channel. You don’t have to remember numbers—just call by name and your message gets delivered. The registry is that notebook.
```rust
use once_cell::sync::Lazy;
use std::any::Any;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// One global map from name to type-erased address; downcast on lookup.
static REGISTRY: Lazy<Arc<Mutex<HashMap<String, Box<dyn Any + Send + Sync>>>>> =
    Lazy::new(|| Arc::new(Mutex::new(HashMap::new())));

pub fn register_addr<T: 'static + Send + Sync>(name: &str, addr: Addr<T>) {
    let mut map = REGISTRY.lock().unwrap();
    map.insert(name.to_string(), Box::new(addr));
}

pub fn get_addr<T: 'static + Send + Sync>(name: &str) -> Option<Addr<T>> {
    let map = REGISTRY.lock().unwrap();
    map.get(name)
        .and_then(|b| b.downcast_ref::<Addr<T>>())
        .cloned()
}
```
Note: the above is for demonstration. In a real project, prefer `dashmap` with type-safe wrappers, or manage addresses centrally at the app layer and avoid cross-type erasure entirely.
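One way to avoid the type erasure entirely is to keep one registry per address type, so lookups are statically typed and there is no `Any` or downcast at all. A minimal std-only sketch, using `u32` as a stand-in payload where a real app would store an `Addr<T>`:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// One registry per stored type: `Registry<Addr<CounterMsg>>` and
/// `Registry<Addr<LogMsg>>` are separate maps, so no downcasting is needed.
pub struct Registry<T: Clone> {
    inner: Arc<Mutex<HashMap<String, T>>>,
}

impl<T: Clone> Registry<T> {
    pub fn new() -> Self {
        Self { inner: Arc::new(Mutex::new(HashMap::new())) }
    }

    /// Write the name into the front-desk notebook.
    pub fn register(&self, name: &str, value: T) {
        self.inner.lock().unwrap().insert(name.to_string(), value);
    }

    /// Look someone up by name; None if they never clocked in.
    pub fn get(&self, name: &str) -> Option<T> {
        self.inner.lock().unwrap().get(name).cloned()
    }
}

fn main() {
    // Stand-in payload; in the article this would be an Addr<T>.
    let reg: Registry<u32> = Registry::new();
    reg.register("counter", 7);
    assert_eq!(reg.get("counter"), Some(7));
    assert_eq!(reg.get("missing"), None);
    println!("typed registry ok");
}
```

The trade-off is one registry value per message type instead of one global map, which is usually easy to manage at the app layer.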
6) Example: make a “counting coffee”
On the first night shift, there aren’t many customers. I let him play a small game: for every drink made, increment a number; when someone asks how many, report it. Two message styles are enough for practice: `Inc(n)` means “add n more,” and `Get` means “what’s the number now?”
```rust
use tokio::sync::oneshot;

enum CounterMsg {
    Inc(u64),
    Get(oneshot::Sender<u64>), // one-shot reply envelope for request/response
}

struct Counter {
    value: u64,
}

#[async_trait::async_trait]
impl Actor for Counter {
    type Message = CounterMsg;
    async fn handle(&mut self, msg: Self::Message) {
        match msg {
            CounterMsg::Inc(n) => self.value += n,
            CounterMsg::Get(tx) => {
                let _ = tx.send(self.value);
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (addr, _h) = spawn_actor(Counter { value: 0 }, 64);
    addr.send(CounterMsg::Inc(3)).await.unwrap();
    addr.send(CounterMsg::Inc(7)).await.unwrap();
    let (tx, rx) = oneshot::channel();
    addr.send(CounterMsg::Get(tx)).await.unwrap();
    let v = rx.await.unwrap();
    println!("counter = {}", v); // 10
}
```
See? Two “add another cup,” one “report the number.” No complaints and no race for work—everything in order, like a clock.
7) Extensions (pick as needed)
As the shop gets busier, you’ll naturally add tools:
- a small HTTP takeout window at the front: hold an `Addr<T>` in your `axum`/`warp` handler and dispatch to the back;
- a conveyor belt in the back: `mpsc`/`broadcast`/`Stream` to move half-finished items forward;
- a splitter and a worker pool as the team grows;
- SOP upgrades for the supervisor: backoff, tiers, limits;
- dashboards with `tracing` for latency, queue depth, and error rates, so you can sleep soundly.
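The worker-pool idea can be sketched with std threads and one shared inbox; in this article’s async setup you would instead spawn several actor tasks reading from one shared channel, but the shape is the same. A std-only illustration (the `run_pool` helper and its workload are made up for the example):

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

/// A tiny worker pool: one shared inbox, several baristas pulling from it.
/// Returns the total amount of work done across all workers.
fn run_pool(orders: Vec<u64>, workers: usize) -> u64 {
    let (tx, rx) = mpsc::channel::<u64>();
    // Workers take turns pulling orders from the shared receiver.
    let rx = Arc::new(Mutex::new(rx));
    for o in orders {
        tx.send(o).unwrap();
    }
    drop(tx); // close the inbox so workers stop once it drains

    let mut handles = Vec::new();
    for _ in 0..workers {
        let rx = Arc::clone(&rx);
        handles.push(thread::spawn(move || {
            let mut done = 0u64;
            loop {
                // The guard is dropped as soon as this statement finishes;
                // fine here because the queue is pre-filled before workers start.
                let msg = rx.lock().unwrap().recv();
                match msg {
                    Ok(n) => done += n, // "make the drink"
                    Err(_) => break,    // inbox closed and empty
                }
            }
            done
        }));
    }
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    // Total work is conserved no matter how it's split across workers.
    assert_eq!(run_pool(vec![1, 2, 3, 4, 5], 3), 15);
    println!("pool ok");
}
```

The splitter from the list above is just whatever routing policy feeds that shared inbox: round-robin, hash by key, or a single fan-in channel as here.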
FAQ
Q: Can I just `await` a return value? A: You can, but the idiomatic way is to include a one-shot reply envelope (`oneshot`) in the message. When the work is done, the actor sends the result back through it.
Q: Will messages get lost? A: Queues have capacity. When they’re full, you’ll wait a bit (backpressure). With good capacity and pacing, the line stays orderly and messages stay safe.
Q: How’s performance? A: More than enough for most business workloads. If you’re chasing extremes, use lock-free structures, a dedicated executor, and locality optimizations.
Wrap-up
That’s our little shop: roles are clear, routes are clear, busy but not chaotic. The nouns you see (`Actor`, `Addr`, `spawn`, `supervise`, registry) are just ways to encode this order. Don’t rush for features. First make “message-driven, serial handling, and supervisable” solid. When the line gets long someday, you’ll find this old discipline is your confidence to scale.
Follow Mengshou Programming for more engineering deep-dives.