System Overview
Calimero Core v0.10.1-rc.8 — a 22-crate Rust workspace
High-Level Architecture
The system is organized in horizontal layers. External clients and CLIs connect through the Server layer, which delegates to Actix actors. Actors coordinate via typed messages and share a common storage layer at the bottom.
Crate Dependency Graph
Directed edges show compile-time dependencies. Top-level binaries depend on the server and node crates, which fan out through the core libraries, all converging on shared primitives at the bottom.
Actor System
Calimero uses Actix for its actor model. Three primary actors manage the system, plus a long-running async task for synchronization and a background garbage-collection actor.
NodeManager actor
Central orchestrator. Handles network events (peer join/leave, stream open), dispatches incoming deltas to the right context, and manages the blob cache and heartbeat intervals. Owns LazyRecipient<ContextMessage> and LazyRecipient<NetworkMessage> handles.
Key messages:
- HandleNetworkEvent — process gossip, stream, discovery events
- HandleStateDelta — route incoming delta to context
- HandleGroupOp — forward signed group ops
- BootContext — initialize context after creation
ContextManager actor
Manages all contexts and their groups. Provides 20+ message handlers covering execution, governance, membership, and upgrades. Owns a GroupStore instance per context. Coordinates with the runtime for WASM execution.
Key messages:
- ExecuteMethod — run WASM method, produce delta
- HandleGroupOp — ingest governance operation
- CreateContext / JoinContext
- UpdateApplication — upgrade WASM binary
NetworkManager actor
Wraps the libp2p swarm. Publishes and subscribes to gossipsub topics, manages the Kademlia DHT, and handles stream-based request/response protocols. One gossipsub topic per context, plus separate group topics.
Key messages:
- Publish — send gossip message on topic
- Subscribe / Unsubscribe
- OpenStream — direct peer-to-peer stream
- ProvideRecord / GetRecord — DHT ops
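The one-topic-per-context layout (plus separate group topics) can be sketched as a naming scheme. The `/calimero/...` prefix below is illustrative only; the actual topic format is an assumption, not taken from the source.

```rust
// Hypothetical topic-name scheme: one gossipsub topic per context for
// state deltas, plus a separate topic per group for governance ops.
// The "/calimero/..." prefix is a made-up example, not the real format.
fn context_topic(context_id: &str) -> String {
    format!("/calimero/context/{context_id}")
}

fn group_topic(context_id: &str, group_id: &str) -> String {
    format!("/calimero/context/{context_id}/group/{group_id}")
}
```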
SyncManager async task
Not an Actix actor — runs as a long-lived async task spawned at startup. Implements four sync protocols: hash comparison (cheap), level-wise (medium), snapshot (heavy), and direct delta exchange. Triggered by heartbeat mismatches.
Communicates via tokio::broadcast channels and direct LazyRecipient sends to NodeManager.
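The escalation between the four sync protocols might look like the sketch below. The `estimated_missing` metric and the thresholds are assumptions; the real SyncManager derives its choice from heartbeat root-hash mismatches.

```rust
// Sketch of sync-protocol escalation, cheapest first. Thresholds are
// invented for illustration.
#[derive(Debug, PartialEq)]
enum SyncProtocol {
    HashComparison, // cheap: confirm root hashes agree
    DirectDelta,    // exchange a handful of missing deltas directly
    LevelWise,      // medium: walk the DAG level by level
    Snapshot,       // heavy: transfer full state
}

fn pick_protocol(local_root: u64, remote_root: u64, estimated_missing: usize) -> SyncProtocol {
    if local_root == remote_root {
        SyncProtocol::HashComparison
    } else if estimated_missing <= 4 {
        SyncProtocol::DirectDelta
    } else if estimated_missing <= 64 {
        SyncProtocol::LevelWise
    } else {
        SyncProtocol::Snapshot
    }
}
```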
GC actor (background)
Garbage collection actor that periodically cleans up expired blobs, stale peer entries, and orphaned context data. Runs on a configurable timer. Low-priority Actix actor.
Reads from storage columns and issues deletes for entries past their TTL.
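The TTL sweep can be sketched as a pure filter over stored entries. The `Entry` shape is an assumption; the real actor reads storage columns directly.

```rust
use std::time::{Duration, SystemTime};

// Minimal sketch of a GC pass: collect the keys of entries whose age
// exceeds the TTL. Entry layout is hypothetical.
struct Entry {
    key: String,
    created_at: SystemTime,
}

fn expired_keys(entries: &[Entry], ttl: Duration, now: SystemTime) -> Vec<String> {
    entries
        .iter()
        // duration_since fails for entries "from the future"; treat those as live
        .filter(|e| now.duration_since(e.created_at).map_or(false, |age| age > ttl))
        .map(|e| e.key.clone())
        .collect()
}
```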
Communication Patterns
Data Flow: Method Execution
End-to-end flow from a client calling a WASM method to state convergence across all peers.
Client → Server
External client sends a JSON-RPC 2.0 request to the /rpc endpoint. The request specifies context_id, method, and args. Authenticated via JWT bearer token (mero-auth).
Server → ContextManager
The JSON-RPC handler constructs an ExecuteMethod message and sends it to ContextManager via LazyRecipient<ContextMessage>. Includes caller identity, payload, and executor kind.
ContextManager → Runtime
ContextManager loads the WASM module (cached in-memory), builds a VMContext with storage handles, caller identity, and method metadata. Invokes runtime::execute().
Runtime → WASM
Wasmer executes the compiled WASM function. The application calls host functions for storage reads/writes, CRDT operations, event emissions, and cross-context calls. All mutations are collected in a pending changeset.
State Commit
On successful return, the changeset is committed to storage. A CausalDelta is constructed containing the diff, new root_hash, parent hashes, and the operation metadata. Appended to the context's DAG.
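An illustrative shape for the delta follows, using the fields named above (diff, root_hash, parents). The hashing scheme is a std-library stand-in, not the real content-address function.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical CausalDelta: the new root hash is derived from the
// parent hashes plus the diff, so identical history yields an
// identical hash. Real hashing is assumed to differ.
struct CausalDelta {
    parents: Vec<u64>,
    diff: Vec<u8>,
    root_hash: u64,
}

fn make_delta(parents: Vec<u64>, diff: Vec<u8>) -> CausalDelta {
    let mut h = DefaultHasher::new();
    parents.hash(&mut h);
    diff.hash(&mut h);
    let root_hash = h.finish();
    CausalDelta { parents, diff, root_hash }
}
```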
Gossip Broadcast
The delta is serialized and published via NetworkManager::Publish on the context's gossipsub topic. The message includes the StateDelta payload and the sender's peer ID.
Remote Nodes Receive + Apply
Peer nodes receive the gossip message. NodeManager::HandleStateDelta routes it to the correct context. ContextManager verifies the delta's parent hash exists in the local DAG, then applies it to storage.
Convergence
Heartbeats carry each context's root_hash. If peers disagree, SyncManager triggers a catch-up protocol (hash comparison → level-wise → snapshot fallback). All peers eventually converge to the same state.
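The remote-apply rule (apply only when the parent is known, otherwise hold the delta until its parent arrives) can be sketched as follows. Single-parent deltas are used for brevity; the real DAG allows several parents.

```rust
use std::collections::{HashMap, HashSet};

// Sketch of ingest-with-pending-queue: a delta whose parent is missing
// waits until the parent is applied, then is retried. Hashes are plain
// u64s here for illustration.
struct Delta {
    hash: u64,
    parent: u64,
}

struct Dag {
    applied: HashSet<u64>,
    pending: HashMap<u64, Vec<Delta>>, // keyed by the missing parent hash
}

impl Dag {
    fn new(root: u64) -> Self {
        let mut applied = HashSet::new();
        applied.insert(root);
        Dag { applied, pending: HashMap::new() }
    }

    fn ingest(&mut self, d: Delta) {
        if !self.applied.contains(&d.parent) {
            // Parent unknown: park the delta until the parent arrives.
            self.pending.entry(d.parent).or_default().push(d);
            return;
        }
        self.applied.insert(d.hash);
        // Applying this delta may unblock children that were waiting on it.
        if let Some(children) = self.pending.remove(&d.hash) {
            for c in children {
                self.ingest(c);
            }
        }
    }
}
```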
Data Flow: Governance Operation
How an admin action (e.g. adding a member, changing capabilities) propagates across the group and reaches eventual consistency.
Admin Action
An admin (via REST API or meroctl) submits a governance action. The server translates this into a GroupOp variant — e.g. MemberAdded, CapabilitiesSet, VisibilitySet. Includes the target group ID.
Sign + Apply Locally
The operation is signed with the node's Ed25519 key, producing a SignedGroupOpV1. Includes: state_hash (current group state), nonce (monotonic), and parent hash. The op is applied to local GroupStore and appended to the OpLog DAG.
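An illustrative shape for the signed op, following the fields named above (state_hash, monotonic nonce, parent hash), is shown below. The signature is opaque bytes here; real signing uses the node's Ed25519 key.

```rust
// Hypothetical SignedGroupOpV1 shape; field names follow the text,
// the signature is left as raw bytes.
struct SignedGroupOpV1 {
    state_hash: u64,
    nonce: u64,
    parent: u64,
    signature: Vec<u8>,
}

// A receiver can reject replayed or stale ops by requiring the nonce to
// advance past the last one seen (assumption: nonces are per-signer and
// strictly increasing).
fn nonce_ok(last_seen: u64, op: &SignedGroupOpV1) -> bool {
    op.nonce > last_seen
}
```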
Gossip Broadcast
The signed operation is published via gossipsub on the group topic (separate from context data topics). All group members subscribed to this topic receive the message.
Remote Ingestion
Receiving nodes verify the Ed25519 signature and check the sender has sufficient capabilities to perform the operation. The op is added to the local DagStore. If parents are missing, the op enters a pending queue until parents arrive.
Catch-Up Protocol
Heartbeats carry each group's latest state_hash. On mismatch, peers exchange GroupDeltaRequest / GroupDeltaResponse messages to transfer missing operations. The DAG structure ensures causal ordering is preserved during catch-up.
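The core of the GroupDeltaRequest/GroupDeltaResponse exchange is a set difference: the requester advertises the op hashes it has, and the responder sends back everything else. A sketch (real messages carry full ops, not bare hashes):

```rust
use std::collections::HashSet;

// Responder side of catch-up: everything in the local op set that the
// requester does not already have.
fn missing_ops(local: &HashSet<u64>, remote_have: &HashSet<u64>) -> Vec<u64> {
    let mut out: Vec<u64> = local.difference(remote_have).copied().collect();
    out.sort(); // deterministic order here; the real transfer preserves causal order
    out
}
```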
Cross-Crate Communication
Crates communicate via thin async façade structs that wrap LazyRecipient<M>. Each façade provides typed, async methods that hide the Actix message-passing details. This keeps crate boundaries clean — consumers never import Actix directly.
NodeClient
LazyRecipient<NodeMessage>
Thin async façade used by Server and ContextManager to send commands to the NodeManager actor.
Key Methods
- boot_context()
- handle_network_event()
- get_peers()
- broadcast_delta()
- request_blob()
ContextClient
LazyRecipient<ContextMessage>
Used by Server (for RPC execution) and NodeManager (for delta routing) to invoke context-level operations.
Key Methods
- execute_method()
- create_context()
- join_context()
- handle_group_op()
- update_application()
- get_context_info()
NetworkClient
LazyRecipient<NetworkMessage>
Used by NodeManager, ContextManager, and SyncManager to publish messages, open streams, and manage topic subscriptions.
Key Methods
- publish()
- subscribe()
- unsubscribe()
- open_stream()
- get_peers()
- provide_record()
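The façade idea behind these clients can be modeled with a std-only sketch: typed methods that hide the message-passing transport. The real clients are async and wrap LazyRecipient<M>; here a plain mpsc channel stands in for Actix, and the enum variants are abbreviated.

```rust
use std::sync::mpsc::{channel, Sender};

// Stand-in for the NetworkMessage envelope (abbreviated).
#[derive(Debug, PartialEq)]
enum NetworkMessage {
    Publish { topic: String, data: Vec<u8> },
    Subscribe { topic: String },
}

// Typed façade: callers invoke methods, never touch the transport.
struct NetworkClient {
    tx: Sender<NetworkMessage>,
}

impl NetworkClient {
    fn publish(&self, topic: &str, data: Vec<u8>) {
        self.tx.send(NetworkMessage::Publish { topic: topic.into(), data }).unwrap();
    }
    fn subscribe(&self, topic: &str) {
        self.tx.send(NetworkMessage::Subscribe { topic: topic.into() }).unwrap();
    }
}
```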
Façade Pattern
How LazyRecipient works
LazyRecipient<M> is a wrapper around Actix's Recipient<M> that defers resolution. It stores an Arc<OnceCell<Recipient<M>>> internally. On first use, the address is resolved from the actor registry. Subsequent sends reuse the cached recipient.
This pattern breaks circular initialization dependencies — actors can hold LazyRecipients to each other without needing all actors to be started simultaneously. The send() method is async and returns the actor's response.
pub struct LazyRecipient<M: Message> {
    inner: Arc<OnceCell<Recipient<M>>>,
}

impl<M: Message> LazyRecipient<M> {
    pub async fn send(&self, msg: M) -> Result<M::Result> { /* ... */ }
}
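The deferred-resolution behavior can be demonstrated with a std-only analogue: the inner slot starts empty, is filled exactly once, and every clone then sees the resolved value. Actix's Recipient<M> is replaced by a plain String for the sketch.

```rust
use std::sync::{Arc, OnceLock};

// Std-only analogue of LazyRecipient: clones share one OnceLock slot,
// so whoever resolves first wins and all clones observe the result.
#[derive(Clone)]
struct LazySlot {
    inner: Arc<OnceLock<String>>,
}

impl LazySlot {
    fn new() -> Self {
        LazySlot { inner: Arc::new(OnceLock::new()) }
    }
    fn resolve(&self, addr: String) {
        let _ = self.inner.set(addr); // first set wins; later sets are no-ops
    }
    fn get(&self) -> Option<&String> {
        self.inner.get()
    }
}
```

This mirrors how actors can hold handles to each other before any of them is started: resolution happens on first use, not at construction.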
Message envelope pattern
Each actor defines a single top-level enum that wraps all possible messages. This allows a single Handler impl to dispatch on variants, simplifying the API surface.
pub enum ContextMessage {
    ExecuteMethod { ctx: ContextId, method: String, args: Vec<u8> },
CreateContext { params: CreateParams },
JoinContext { invitation: Invitation },
HandleGroupOp { op: SignedGroupOpV1 },
UpdateApplication { ctx: ContextId, blob: BlobId },
// ... 15+ more variants
}