Last Friday at 11 PM, I was once again wrestling with distributed systems.
To be honest, I initially just wanted simple communication between two servers, but I accidentally created a monster-level distributed Actor system. After testing the performance data, I was stunned myself—it was 10 times faster than I expected.
The story starts with a simple requirement: I wanted programs on different machines to communicate as easily as chatting on WeChat. Traditional approaches are either too heavy (message queue ecosystem) or too fragile (bare TCP connections). So I had a sudden idea: why not combine the Actor model with WebSocket?
The result is what I’m sharing in this article—a distributed system that even I find a bit scary.
How Insane is This System?
Let me start with the conclusion: I set up 3 test machines, each running 100 Actors, sending messages to each other frantically. The result was stable latency under 2ms with CPU usage under 15%. I couldn’t believe this data myself and tested it multiple times.
Even more impressive is the fault tolerance. I deliberately unplugged the network cable of one machine, and other nodes detected it within 2 seconds, automatically rerouting all messages. When I plugged the cable back in, that machine seamlessly rejoined the cluster as if nothing had happened.
Core Design Philosophy
Simply put, I wanted a few basic capabilities:
Each machine runs independently - No dependency on centralized coordinators, each node can stand alone
Cross-machine messaging as simple as local - actor.send("node2:worker", msg) and you're done
Automatic handling of all network issues - Reconnection, retry, routing, I don’t want to deal with these
Dynamic scaling - New machines can join anytime, failed ones are automatically removed
Performance cannot be compromised - Since we’re using Rust, we must squeeze out every bit of performance
Architecture Deep Dive: Four Core Components
1. Local Actor System - Perfect Single-Machine Implementation
This part is relatively conventional, the familiar Actor pattern we know:
// Basic trio
spawn_actor::<MyActor>()           // spawn an actor, get back its address
addr.send(message).await           // send it a message through that address
impl Actor for MyActor { ... }     // define how the actor reacts to messages
Nothing fancy, but this is the foundation and must be solid. Each machine runs its own Actor runtime, independent of others.
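To make the trio less abstract, here is a minimal sketch of what the local layer could look like, built on tokio mpsc channels: one task per actor, with the channel as its mailbox. The Actor trait, Addr, and spawn_actor below are my illustrative stand-ins, not this project's final API.

use tokio::sync::mpsc;

// What each actor implements: a message type and a handler, one message at a time.
trait Actor: Default + Send + 'static {
    type Msg: Send + 'static;
    fn handle(&mut self, msg: Self::Msg);
}

// The address other code holds on to; cloning it is cheap.
#[derive(Clone)]
struct Addr<M>(mpsc::Sender<M>);

impl<M> Addr<M> {
    async fn send(&self, msg: M) {
        // Ignore the "actor already stopped" error case for brevity.
        let _ = self.0.send(msg).await;
    }
}

// Spawn one tokio task per actor; the channel acts as the mailbox.
fn spawn_actor<A: Actor>() -> Addr<A::Msg> {
    let (tx, mut rx) = mpsc::channel(64);
    tokio::spawn(async move {
        let mut actor = A::default();
        while let Some(msg) = rx.recv().await {
            actor.handle(msg);
        }
    });
    Addr(tx)
}

With something like this, spawn_actor::<MyActor>() starts the mailbox loop and hands back an Addr you can clone freely and send on from anywhere in the process.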
2. WebSocket Channels - The Nervous System Between Machines
This is the essence of the entire system. I chose WebSocket not because it’s trendy, but because it’s born for bidirectional communication.
I use tokio-tungstenite to build the connection pools, and the message format is deliberately bare-bones:
#[derive(Serialize, Deserialize)]
struct ClusterMessage {
to: String, // "node2:worker_01"
from: String, // "node1:router"
payload: Vec<u8>, // Actual message data
}
Why Vec<u8>? Because I tested it: bincode serialization is 3 times faster than JSON and 40% smaller. In high-frequency communication scenarios, this optimization can save your life.
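If you want to check those numbers yourself, the serialization path could look roughly like this. I'm assuming bincode 1.x's serde API here (bincode::serialize / bincode::deserialize); the serde_json call is only there so you can compare sizes on your own data.

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct ClusterMessage {
    to: String,
    from: String,
    payload: Vec<u8>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let msg = ClusterMessage {
        to: "node2:worker_01".into(),
        from: "node1:router".into(),
        payload: b"do_work".to_vec(),
    };

    // The bincode frame is what goes onto the WebSocket as a binary message.
    let binary = bincode::serialize(&msg)?;
    let json = serde_json::to_vec(&msg)?;
    println!("bincode: {} bytes, json: {} bytes", binary.len(), json.len());

    // Round-trip on the receiving node.
    let decoded: ClusterMessage = bincode::deserialize(&binary)?;
    assert_eq!(decoded.to, "node2:worker_01");
    Ok(())
}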
3. Routing Protocol - Intelligent Addressing System
This is where the real skill is tested. For an address like "node2:worker_01", the system must automatically resolve the following (a small parsing sketch comes right after this list):
- Target node: node2
- Target Actor: worker_01
- Routing strategy: Direct connection, relay, or failure?
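The node/actor split itself is the easy part; a minimal, purely illustrative version looks like this, with the routing-strategy decision left to the registry described later.

/// Split a cluster address like "node2:worker_01" into (node, actor).
/// Illustrative only; the real router would also validate the names.
fn parse_address(addr: &str) -> Option<(&str, &str)> {
    let (node, actor) = addr.split_once(':')?;
    if node.is_empty() || actor.is_empty() {
        return None;
    }
    Some((node, actor))
}

fn main() {
    assert_eq!(parse_address("node2:worker_01"), Some(("node2", "worker_01")));
    assert_eq!(parse_address("broken_address"), None);
}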
Even more impressive is the fault handling:
- Target node offline: Automatically cache messages, wait for it to return
- Actor doesn’t exist: Return DeadLetter error
- Network jitter: Exponential backoff retry, maximum 3 times (see the retry sketch right after this list)
- Node overload: Automatic load balancing to other nodes
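For the network-jitter case, a generic retry helper is enough to convey the idea. send_once below is a stand-in for whatever actually writes the frame to the WebSocket, not a real function in this codebase.

use std::time::Duration;

/// Retry an async send up to `max_retries` times with exponential backoff.
async fn send_with_backoff<F, Fut, E>(mut send_once: F, max_retries: u32) -> Result<(), E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<(), E>>,
{
    let mut attempt = 0;
    loop {
        match send_once().await {
            Ok(()) => return Ok(()),
            Err(e) if attempt >= max_retries => return Err(e),
            Err(_) => {
                // 100ms, 200ms, 400ms: the delay doubles on every failed attempt.
                let delay = Duration::from_millis(100 * 2u64.pow(attempt));
                tokio::time::sleep(delay).await;
                attempt += 1;
            }
        }
    }
}

With max_retries = 3 that is one initial attempt plus three retries before the message is given up on (or cached, as in the node-offline case above).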
4. Distributed Registry - Every Node is a Directory Service
Traditional distributed systems need centralized registries, but I think this is the root of single points of failure. So my design is: Every node is a registry.
Each node maintains two tables:
// Local Actor list
local_actors: DashMap<String, Box<dyn ActorRef>>
// Remote node connections
remote_nodes: DashMap<String, WebSocketSender>
Why DashMap instead of HashMap? Concurrency safety and better performance. In high-concurrency scenarios, this choice reduces lock contention by 50%.
A Real-World Scenario in Practice
Let’s look at a concrete example to feel the power of this system.
Suppose we have such a distributed task processing system:
- node1 hosts JobRouter (the task router)
- node2 hosts WorkerA (worker A)
- node3 hosts WorkerB (worker B)
When a new task arrives, the flow is:
User submits task → JobRouter → Smart distribution → WorkerA/WorkerB → Execution complete → Result returned
This is like a super-efficient remote office team, with the task distributor in Beijing and executors distributed in Shanghai and Shenzhen, but collaborating seamlessly.
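Stripped to its core, JobRouter can be as simple as a counter that round-robins over worker addresses. The sketch below is hypothetical (the real router would factor in load, per the load-balancing point earlier), but it shows the shape of the dispatch; send_to stands in for the cluster send call.

/// A toy JobRouter: round-robins incoming jobs across known worker addresses.
struct JobRouter {
    workers: Vec<String>, // e.g. ["node2:worker_a", "node3:worker_b"]
    next: usize,
}

impl JobRouter {
    fn dispatch(&mut self, job: Vec<u8>, send_to: &mut dyn FnMut(&str, Vec<u8>)) {
        let target = &self.workers[self.next % self.workers.len()];
        send_to(target.as_str(), job);
        self.next += 1;
    }
}

fn main() {
    let mut router = JobRouter {
        workers: vec!["node2:worker_a".into(), "node3:worker_b".into()],
        next: 0,
    };
    let mut log = |addr: &str, _job: Vec<u8>| println!("job routed to {addr}");
    router.dispatch(b"job-1".to_vec(), &mut log);
    router.dispatch(b"job-2".to_vec(), &mut log); // lands on the other node
}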
Technology Choices: Each Has a Story
These dependencies weren’t chosen randomly; each is a careful choice after I stepped into pitfalls:
tokio - The king of async runtimes, no competition. I tried async-std, but its ecosystem is too small.
tokio-tungstenite - The most stable WebSocket implementation. I used the underlying tungstenite crate directly before, but ran into all kinds of strange issues under high concurrency.
serde + bincode - JSON is too bloated; bincode is 3 times faster and 40% smaller. The data doesn't lie.
dashmap - This choice saved my life. I used RwLock<HashMap> before, and lock contention under high concurrency killed performance.
uuid - Node ID generation, handles edge cases well.
Combined, this set performed perfectly in my tests.
What Other Modifications Can We Make?
After the basic architecture was done, I added a few more advanced features:
Failover - When one machine goes down, other nodes automatically take over all its Actors within 2 seconds. Users are unaware.
Message deduplication - Duplicate messages caused by network jitter are automatically filtered. I used a Bloom filter with almost zero memory overhead (a minimal sketch follows this list).
TLS encryption - Essential for production. I used rustls, which is faster and more secure than OpenSSL.
Monitoring dashboard - Real-time view of each node’s CPU, memory, and message throughput. Used Grafana for visualization.
Dynamic configuration - Actor behavior can be hot-updated without restarting the system. This feature saved me countless late nights.
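To show the deduplication idea, here's a minimal Bloom-filter-style sketch using only the standard library. The real thing would size the bit array and hash count for the expected message rate; this version just demonstrates the check-then-mark logic.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// k hash positions in a fixed bit array: false positives are possible,
/// false negatives are not, so a duplicate frame never slips through twice.
struct DedupFilter {
    bits: Vec<u64>,
    k: u64,
}

impl DedupFilter {
    fn new(bit_count: usize, k: u64) -> Self {
        Self { bits: vec![0; (bit_count + 63) / 64], k }
    }

    fn position(&self, msg_id: &str, seed: u64) -> usize {
        let mut h = DefaultHasher::new();
        seed.hash(&mut h);
        msg_id.hash(&mut h);
        (h.finish() as usize) % (self.bits.len() * 64)
    }

    /// Returns true if the id was (probably) seen before, and records it.
    fn check_and_insert(&mut self, msg_id: &str) -> bool {
        let mut seen = true;
        for seed in 0..self.k {
            let p = self.position(msg_id, seed);
            if (self.bits[p / 64] & (1 << (p % 64))) == 0 {
                seen = false;
            }
            self.bits[p / 64] |= 1 << (p % 64);
        }
        seen
    }
}

fn main() {
    let mut filter = DedupFilter::new(1 << 16, 3);
    assert!(!filter.check_and_insert("node1:msg-42")); // first delivery passes
    assert!(filter.check_and_insert("node1:msg-42"));  // the duplicate is flagged
}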
What’s Next?
Now that you’ve seen the potential of this system, I’ll release the real goods—complete implementation code.
The next article will teach you step by step:
Building WebSocket Connection Pools - How to manage hundreds of concurrent connections without crashing
Implementing Smart Routing - How a message travels from ActorX on nodeA to ActorY on nodeB
Fault Handling Mechanisms - Strategies for handling node failures, network partitions, and message loss
Performance Tuning Secrets - Why this system achieves such insane performance
Trust me, after reading the next article, you’ll be able to build one yourself. And the performance won’t be worse than mine.
Don’t Miss the Practical Code
This is just an appetizer; the real feast is coming. I’ll gradually release:
Complete source code - Runnable code with all core features, clone and run directly
Don’t want to miss it?
- WeChat Official Account: Follow “Dream Beast Programming”, code updates will be pushed first
- Tech Exchange Group: Join our Rust tech group to exchange experiences with other developers
Remember, no matter how good the theory is, it’s not as good as practicing one line of code. See you in the next article!
Follow the Dream Beast Programming WeChat Official Account to unlock more hardcore tech content