System Overview

Calimero Core v0.10.1-rc.8 — Rust workspace at a glance:

  • 22 crates
  • 11 core libraries
  • 2 CLI binaries
  • 15+ sample apps

High-Level Architecture

The system is organized in horizontal layers. External clients and CLIs connect through the Server layer, which delegates to Actix actors. Actors coordinate via typed messages and share a common storage layer at the bottom.

Diagram summary, top to bottom:

  • External clients — meroctl CLI (operator / admin), the embedded Admin Dashboard SPA (React), and external clients over RPC / WS / SSE
  • Server layer — Axum: REST Admin API, JSON-RPC 2.0, WebSocket, SSE events, Prometheus metrics
  • NodeManager actor (crates/node, Handler<NodeMessage>) — network event dispatch, blob cache, heartbeat; delta handling, stream routing, join/leave; startup orchestration, context boot
  • ContextManager actor (crates/context, Handler<ContextMessage>) — contexts, groups, governance DAGs; application lifecycle with 20+ message handlers; GroupStore, upgrades, member management
  • NetworkManager actor (crates/network, Handler<NetworkMessage>) — libp2p: gossipsub, Kademlia, mDNS, relay; streams, AutoNAT v2, DCUtR, rendezvous
  • SyncManager — hash compare, level-wise sync, snapshot / delta
  • WASM Runtime — Wasmer + Cranelift, 50+ host functions, CRDT collections
  • Storage layer — RocksDB: 11 column families, typed keys, causal DAG, CRDT collections, blob store
  • mero-auth (crates/mero-auth) — JWT issuing + verification, pluggable providers (NEAR, ETH, …), challenge / nonce auth flow, embedded or reverse-proxy mode
  • P2P network — other Calimero nodes, reached via gossipsub, streams, and the DHT, with topic-based message routing

Crate Dependency Graph

Directed edges show compile-time dependencies. Top-level binaries depend on the server and node crates, which fan out through the core libraries, all converging on shared primitives at the bottom.

Graph nodes: merod, meroctl, server, node, context-primitives, node-primitives, mero-auth, context, network, store, runtime, sync, dag, network-primitives, store-rocksdb, storage, sys, calimero-sdk, primitives. The diagram's legend colors crates by subsystem: binary / primitives, server / SDK, node, context, network, storage, runtime, auth.

Actor System

Calimero uses Actix for its actor model. Three primary actors manage the system, plus a long-running async task for synchronization and a background garbage-collection actor.

NodeManager actor

Central orchestrator. Handles network events (peer join/leave, stream open), dispatches deltas to the right context, manages blob cache and heartbeat intervals. Owns LazyRecipient<ContextMessage> and LazyRecipient<NetworkMessage>.

Key messages:

  • HandleNetworkEvent — process gossip, stream, discovery events
  • HandleStateDelta — route incoming delta to context
  • HandleGroupOp — forward signed group ops
  • BootContext — initialize context after creation

ContextManager actor

Manages all contexts and their groups. 20+ message handlers for execution, governance, membership, upgrades. Owns GroupStore instances per context. Coordinates with the runtime for WASM execution.

Key messages:

  • ExecuteMethod — run WASM method, produce delta
  • HandleGroupOp — ingest governance operation
  • CreateContext / JoinContext
  • UpdateApplication — upgrade WASM binary

NetworkManager actor

Wraps the libp2p swarm. Publishes and subscribes to gossipsub topics, manages Kademlia DHT, and handles stream-based request/response protocols. One topic per context + group topics.

Key messages:

  • Publish — send gossip message on topic
  • Subscribe / Unsubscribe
  • OpenStream — direct peer-to-peer stream
  • ProvideRecord / GetRecord — DHT ops

SyncManager async task

Not an Actix actor — runs as a long-lived async task spawned at startup. Implements four sync protocols: hash comparison (cheap), level-wise (medium), snapshot (heavy), and direct delta exchange. Triggered by heartbeat mismatches.

Communicates via tokio::broadcast channels and direct LazyRecipient sends to NodeManager.
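The cheap-to-heavy escalation among these protocols can be sketched in plain Rust. This is a minimal illustration, not the actual crates/sync API: types, thresholds, and the "more than half the levels differ" fallback rule are all illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn h(data: &impl Hash) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

/// Illustrative sync decision, cheapest protocol first.
#[derive(Debug, PartialEq)]
pub enum SyncPlan {
    InSync,             // root hashes match: nothing to do
    Levels(Vec<usize>), // level-wise: indices of mismatching levels
    Snapshot,           // too divergent: transfer a full snapshot
}

/// Pick a strategy by comparing per-level hashes of two states.
pub fn plan_sync(local: &[Vec<u64>], remote: &[Vec<u64>]) -> SyncPlan {
    // 1. Hash compare (cheap): one hash over everything.
    if h(&local) == h(&remote) {
        return SyncPlan::InSync;
    }
    // 2. Level-wise (medium): find which levels actually differ.
    let mismatched: Vec<usize> = (0..local.len().max(remote.len()))
        .filter(|&i| local.get(i) != remote.get(i))
        .collect();
    // 3. Snapshot (heavy) fallback when most levels diverge
    //    (illustrative threshold: more than half).
    if mismatched.len() * 2 > local.len().max(remote.len()) {
        SyncPlan::Snapshot
    } else {
        SyncPlan::Levels(mismatched)
    }
}

fn main() {
    let a = vec![vec![1u64, 2], vec![3, 4]];
    let b = vec![vec![1, 2], vec![3, 5]];
    assert_eq!(plan_sync(&a, &a), SyncPlan::InSync);
    assert_eq!(plan_sync(&a, &b), SyncPlan::Levels(vec![1]));
    let c = vec![vec![9], vec![9]];
    assert_eq!(plan_sync(&a, &c), SyncPlan::Snapshot);
}
```

The ordering mirrors the heartbeat-triggered flow described above: a matching root hash short-circuits before any per-level work happens.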

GC actor (background)

Garbage collection actor that periodically cleans up expired blobs, stale peer entries, and orphaned context data. Runs on a configurable timer. Low-priority Actix actor.

Reads from storage columns and issues deletes for entries past their TTL.

Communication Patterns

NodeManager, ContextManager, and NetworkManager exchange typed messages through LazyRecipient handles; SyncManager communicates over a tokio::broadcast channel. Heartbeat events, sync triggers, and state updates flow through the broadcast channel.

Data Flow: Method Execution

End-to-end flow from a client calling a WASM method to state convergence across all peers.

1. Client → Server

External client sends a JSON-RPC 2.0 request to the /rpc endpoint. The request specifies context_id, method, and args. Authenticated via JWT bearer token (mero-auth).
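A request of the shape described above might look like the following. Only the envelope fields (jsonrpc, id, method, params) are fixed by JSON-RPC 2.0; the method name "execute" and the params layout are illustrative, not the documented wire format.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "execute",
  "params": {
    "context_id": "<context-id>",
    "method": "set_value",
    "args": { "key": "greeting", "value": "hello" }
  }
}
```

The request is sent to /rpc with an Authorization: Bearer <jwt> header issued by mero-auth.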

2. Server → ContextManager

The JSON-RPC handler constructs an ExecuteMethod message and sends it to ContextManager via LazyRecipient<ContextMessage>. Includes caller identity, payload, and executor kind.

3. ContextManager → Runtime

ContextManager loads the WASM module (cached in-memory), builds a VMContext with storage handles, caller identity, and method metadata. Invokes runtime::execute().

4. Runtime → WASM

Wasmer executes the compiled WASM function. The application calls host functions for storage reads/writes, CRDT operations, event emissions, and cross-context calls. All mutations are collected in a pending changeset.

5. State Commit

On successful return, the changeset is committed to storage. A CausalDelta is constructed containing the diff, new root_hash, parent hashes, and the operation metadata. Appended to the context's DAG.
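The commit step can be sketched as follows. The struct fields mirror the description above, but the names and the hashing are illustrative stand-ins for the real crate types: each committed delta links back to the previous heads and becomes the new head of the context's DAG.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Illustrative stand-ins for the real types.
pub type HashId = u64;

#[derive(Clone, Debug)]
pub struct CausalDelta {
    pub id: HashId,
    pub parents: Vec<HashId>,         // DAG edges to the previous heads
    pub diff: Vec<(String, Vec<u8>)>, // collected changeset
    pub root_hash: HashId,            // state hash after applying the diff
}

pub struct ContextDag {
    nodes: HashMap<HashId, CausalDelta>,
    heads: Vec<HashId>,
}

impl ContextDag {
    pub fn new() -> Self {
        Self { nodes: HashMap::new(), heads: vec![] }
    }

    /// Commit a changeset: build a delta linked to the current heads,
    /// append it to the DAG, and make it the sole head.
    pub fn commit(&mut self, diff: Vec<(String, Vec<u8>)>, root_hash: HashId) -> HashId {
        let mut s = DefaultHasher::new();
        (&diff, root_hash, &self.heads).hash(&mut s);
        let id = s.finish();
        let delta = CausalDelta { id, parents: self.heads.clone(), diff, root_hash };
        self.nodes.insert(id, delta);
        self.heads = vec![id];
        id
    }

    pub fn contains(&self, id: HashId) -> bool {
        self.nodes.contains_key(&id)
    }

    pub fn parents_of(&self, id: HashId) -> Option<&[HashId]> {
        self.nodes.get(&id).map(|d| d.parents.as_slice())
    }
}

fn main() {
    let mut dag = ContextDag::new();
    let first = dag.commit(vec![("k".into(), b"v1".to_vec())], 1);
    let second = dag.commit(vec![("k".into(), b"v2".to_vec())], 2);
    assert!(dag.contains(first));
    assert_eq!(dag.parents_of(second).unwrap(), &[first]);
}
```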

6. Gossip Broadcast

The delta is serialized and published via NetworkManager::Publish on the context's gossipsub topic. The message includes the StateDelta payload and the sender's peer ID.

7. Remote Nodes Receive + Apply

Peer nodes receive the gossip message. NodeManager::HandleStateDelta routes it to the correct context. ContextManager verifies the delta's parent hash exists in the local DAG, then applies it to storage.

8. Convergence

Heartbeats carry each context's root_hash. If peers disagree, SyncManager triggers a catch-up protocol (hash comparison → level-wise → snapshot fallback). All peers eventually converge to the same state.

Flow: Client (JSON-RPC) → Server (Axum) → ContextManager (ExecuteMethod) → Runtime (WASM exec) → Storage (commit delta) → gossip broadcast → peers apply + verify → convergence (heartbeat sync).

Data Flow: Governance Operation

How an admin action (e.g. adding a member, changing capabilities) propagates across the group and reaches eventual consistency.

1. Admin Action

An admin (via REST API or meroctl) submits a governance action. The server translates this into a GroupOp variant — e.g. MemberAdded, CapabilitiesSet, VisibilitySet. Includes the target group ID.

2. Sign + Apply Locally

The operation is signed with the node's Ed25519 key, producing a SignedGroupOpV1. Includes: state_hash (current group state), nonce (monotonic), and parent hash. The op is applied to local GroupStore and appended to the OpLog DAG.
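The replay-protection checks implied above (a monotonic nonce per signer, and a known parent) can be sketched in plain Rust. This is an illustration, not the real SignedGroupOpV1: the Ed25519 signature verification is stubbed out, and field names are invented for the example.

```rust
use std::collections::{HashMap, HashSet};

// Illustrative stand-in for a signed group op; the real type carries
// an Ed25519 signature, which is stubbed out here.
pub struct SignedOp {
    pub signer: String,
    pub nonce: u64,          // must be strictly increasing per signer
    pub parent: Option<u64>, // id of the op this one builds on
    pub id: u64,
}

pub struct OpLog {
    seen: HashSet<u64>,
    last_nonce: HashMap<String, u64>,
}

impl OpLog {
    pub fn new() -> Self {
        Self { seen: HashSet::new(), last_nonce: HashMap::new() }
    }

    /// Accept an op only if its nonce is fresh and its parent is known.
    pub fn ingest(&mut self, op: &SignedOp) -> Result<(), &'static str> {
        // (the real code verifies the Ed25519 signature first)
        if let Some(parent) = op.parent {
            if !self.seen.contains(&parent) {
                return Err("missing parent");
            }
        }
        let last = self.last_nonce.get(&op.signer).copied().unwrap_or(0);
        if op.nonce <= last {
            return Err("stale nonce (replay?)");
        }
        self.last_nonce.insert(op.signer.clone(), op.nonce);
        self.seen.insert(op.id);
        Ok(())
    }
}

fn main() {
    let mut log = OpLog::new();
    let a = SignedOp { signer: "admin".into(), nonce: 1, parent: None, id: 10 };
    let b = SignedOp { signer: "admin".into(), nonce: 2, parent: Some(10), id: 11 };
    assert!(log.ingest(&a).is_ok());
    assert!(log.ingest(&b).is_ok());
    assert!(log.ingest(&a).is_err()); // replayed nonce is rejected
}
```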

3. Gossip Broadcast

The signed operation is published via gossipsub on the group topic (separate from context data topics). All group members subscribed to this topic receive the message.

4. Remote Ingestion

Receiving nodes verify the Ed25519 signature and check the sender has sufficient capabilities to perform the operation. The op is added to the local DagStore. If parents are missing, the op enters a pending queue until parents arrive.
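The pending-queue mechanics can be sketched as follows, under invented types: an op with missing parents is parked, and every successful apply re-checks the queue, so a late-arriving parent drains its waiting children.

```rust
use std::collections::{HashMap, HashSet};

// Illustrative op: an id plus the parent ids it depends on.
#[derive(Clone)]
pub struct Op {
    pub id: u64,
    pub parents: Vec<u64>,
}

pub struct DagStore {
    applied: HashSet<u64>,
    pending: HashMap<u64, Op>, // ops waiting for missing parents
}

impl DagStore {
    pub fn new() -> Self {
        Self { applied: HashSet::new(), pending: HashMap::new() }
    }

    fn ready(&self, op: &Op) -> bool {
        op.parents.iter().all(|p| self.applied.contains(p))
    }

    /// Apply the op if all parents are present; otherwise park it.
    /// Each successful apply may unblock previously queued ops.
    pub fn ingest(&mut self, op: Op) {
        if !self.ready(&op) {
            self.pending.insert(op.id, op);
            return;
        }
        self.applied.insert(op.id);
        // Drain: keep applying pending ops whose parents are now in.
        loop {
            let unblocked: Vec<u64> = self
                .pending
                .values()
                .filter(|o| self.ready(o))
                .map(|o| o.id)
                .collect();
            if unblocked.is_empty() {
                break;
            }
            for id in unblocked {
                self.pending.remove(&id);
                self.applied.insert(id);
            }
        }
    }

    pub fn is_applied(&self, id: u64) -> bool { self.applied.contains(&id) }
    pub fn pending_len(&self) -> usize { self.pending.len() }
}

fn main() {
    let mut dag = DagStore::new();
    // Child arrives before its parent: it must wait.
    dag.ingest(Op { id: 2, parents: vec![1] });
    assert!(!dag.is_applied(2));
    // Parent arrives: the queued child is drained and applied too.
    dag.ingest(Op { id: 1, parents: vec![] });
    assert!(dag.is_applied(1) && dag.is_applied(2));
    assert_eq!(dag.pending_len(), 0);
}
```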

5. Catch-Up Protocol

Heartbeats carry each group's latest state_hash. On mismatch, peers exchange GroupDeltaRequest / GroupDeltaResponse messages to transfer missing operations. The DAG structure ensures causal ordering is preserved during catch-up.

Flow: Admin API (GroupOp) → sign + apply (Ed25519, OpLog) → gossip (group topic) → remote verify (DagStore, pending queue) → catch-up (GroupDeltaRequest / GroupDeltaResponse). Each arrow represents a message boundary; failures at any stage are recoverable via the catch-up protocol.

Cross-Crate Communication

Crates communicate via thin async façade structs that wrap LazyRecipient<M>. Each façade provides typed, async methods that hide the Actix message-passing details. This keeps crate boundaries clean — consumers never import Actix directly.
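The façade shape can be sketched without Actix. This is a synchronous stand-in using a std channel: the real clients are async and wrap LazyRecipient, and the enum and method here are trimmed to a single illustrative variant.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// A trimmed-down message enum and a façade that hides the channel.
pub enum ContextMessage {
    ExecuteMethod { method: String, reply: Sender<Vec<u8>> },
}

#[derive(Clone)]
pub struct ContextClient {
    pub tx: Sender<ContextMessage>,
}

impl ContextClient {
    /// Typed method: callers never see the message enum or the channel.
    pub fn execute_method(&self, method: &str) -> Vec<u8> {
        let (reply, rx) = channel();
        self.tx
            .send(ContextMessage::ExecuteMethod { method: method.into(), reply })
            .expect("actor gone");
        rx.recv().expect("no reply")
    }
}

// A toy "actor" loop standing in for Handler<ContextMessage>.
pub fn run_actor(rx: Receiver<ContextMessage>) {
    for msg in rx {
        match msg {
            ContextMessage::ExecuteMethod { method, reply } => {
                let _ = reply.send(format!("ran {method}").into_bytes());
            }
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    let handle = std::thread::spawn(move || run_actor(rx));
    let client = ContextClient { tx };
    let out = client.execute_method("set_value");
    assert_eq!(out, b"ran set_value");
    drop(client); // closing the sender ends the actor loop
    handle.join().unwrap();
}
```

The point of the pattern is visible in the signature: execute_method takes and returns plain domain types, so consumers compile against the façade crate without importing Actix.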

NodeClient

LazyRecipient<NodeMessage>

Thin async façade used by Server and ContextManager to send commands to the NodeManager actor.

Key Methods

  • boot_context()
  • handle_network_event()
  • get_peers()
  • broadcast_delta()
  • request_blob()

ContextClient

LazyRecipient<ContextMessage>

Used by Server (for RPC execution) and NodeManager (for delta routing) to invoke context-level operations.

Key Methods

  • execute_method()
  • create_context()
  • join_context()
  • handle_group_op()
  • update_application()
  • get_context_info()

NetworkClient

LazyRecipient<NetworkMessage>

Used by NodeManager, ContextManager, and SyncManager to publish messages, open streams, and manage topic subscriptions.

Key Methods

  • publish()
  • subscribe()
  • unsubscribe()
  • open_stream()
  • get_peers()
  • provide_record()

Façade Pattern

The Server's RPC handler (async fn handle_request()) makes a typed call on ContextClient (.execute_method(ctx, args)), which awaits LazyRecipient::send(); the message lands in the ContextManager's Actix mailbox (Handler<ContextMessage>), and the Result<T, Error> is returned to the caller.
How LazyRecipient works

LazyRecipient<M> is a wrapper around Actix's Recipient<M> that defers resolution. It stores an Arc<OnceCell<Recipient<M>>> internally. On first use, the address is resolved from the actor registry. Subsequent sends reuse the cached recipient.

This pattern breaks circular initialization dependencies — actors can hold LazyRecipients to each other without needing all actors to be started simultaneously. The send() method is async and returns the actor's response.

pub struct LazyRecipient<M: Message> {
    inner: Arc<OnceCell<Recipient<M>>>,
}

impl<M: Message> LazyRecipient<M> {
    /// Resolves the recipient from the registry on first use,
    /// then reuses the cached one.
    pub async fn send(&self, msg: M) -> Result<M::Result> { /* ... */ }
}
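Actix aside, the defer-then-cache idea can be sketched with std's OnceLock (a stand-in for the OnceCell in the real type). The "recipient" here is just a function pointer, and the registry lookup is stubbed: only the resolve-once-then-reuse behavior is the point.

```rust
use std::sync::{Arc, OnceLock};

// Stand-in recipient: a plain function we "resolve" late.
type Recipient = fn(&str) -> String;

#[derive(Clone)]
pub struct LazyRecipient {
    inner: Arc<OnceLock<Recipient>>,
}

impl LazyRecipient {
    /// Can be created before the target "actor" exists: this is what
    /// breaks the circular-initialization problem described above.
    pub fn new() -> Self {
        Self { inner: Arc::new(OnceLock::new()) }
    }

    /// Resolve once; later calls reuse the cached recipient.
    pub fn send(&self, msg: &str) -> String {
        let recipient = self.inner.get_or_init(|| {
            // In the real type this looks up the actor registry.
            |m: &str| format!("handled: {m}")
        });
        recipient(msg)
    }
}

fn main() {
    let lazy = LazyRecipient::new();
    // No resolution has happened yet.
    assert!(lazy.inner.get().is_none());
    assert_eq!(lazy.send("ping"), "handled: ping");
    // The second send reuses the cached recipient.
    assert!(lazy.inner.get().is_some());
    assert_eq!(lazy.send("pong"), "handled: pong");
}
```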
Message envelope pattern

Each actor defines a single top-level enum that wraps all possible messages. This allows a single Handler impl to dispatch on variants, simplifying the API surface.

pub enum ContextMessage {
    ExecuteMethod { ctx: ContextId, method: String, args: Vec<u8> },
    CreateContext { params: CreateParams },
    JoinContext { invitation: Invitation },
    HandleGroupOp { op: SignedGroupOpV1 },
    UpdateApplication { ctx: ContextId, blob: BlobId },
    // ... 15+ more variants
}
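The single dispatch point that this enum enables can be sketched without Actix (stub types, two variants only; the real handler is async and returns typed results rather than strings):

```rust
// Stub type so the envelope compiles standalone.
pub type ContextId = u64;

pub enum ContextMessage {
    ExecuteMethod { ctx: ContextId, method: String, args: Vec<u8> },
    HandleGroupOp { ctx: ContextId, op: Vec<u8> },
}

pub struct ContextManager;

impl ContextManager {
    /// One match on the envelope replaces many separate Handler impls.
    pub fn handle(&mut self, msg: ContextMessage) -> String {
        match msg {
            ContextMessage::ExecuteMethod { ctx, method, args } => {
                format!("execute {method} on {ctx} ({} arg bytes)", args.len())
            }
            ContextMessage::HandleGroupOp { ctx, op } => {
                format!("group op on {ctx} ({} bytes)", op.len())
            }
        }
    }
}

fn main() {
    let mut mgr = ContextManager;
    let out = mgr.handle(ContextMessage::ExecuteMethod {
        ctx: 7,
        method: "set_value".into(),
        args: vec![1, 2, 3],
    });
    assert_eq!(out, "execute set_value on 7 (3 arg bytes)");
}
```

Adding a new message is then a local change: one new variant plus one new match arm, with the compiler flagging any handler that forgets to cover it.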