Node Crate

calimero-node + calimero-node-primitives

Purpose

NodeManager is an Actix actor that orchestrates the entire node. It receives network events via a dedicated channel bridge; manages blob caching, periodic heartbeats, and delta handling; and routes incoming streams to the sync manager.

It does not run libp2p directly — NetworkManager does that. NodeManager communicates with the network through NodeClient, which wraps a LazyRecipient<NodeMessage> and a NetworkClient handle. The crate also contains the SyncManager (a long-lived async task, not an Actix actor) and a GarbageCollector actor for storage tombstone cleanup.

2 crates · 30+ source files · 3 actors / tasks · 4 sync protocols

Module Structure

File layout across the two crates. Each handler lives in its own focused file following the Single Responsibility Principle.

calimero-node

  • NodeManager actor definition — NodeClients, NodeManagers, NodeState, SyncSession, CachedBlob
  • start() entry point — NodeConfig; wires up Store / BlobManager / actors / server / GC
  • Handler<NodeMessage> dispatch — routes GetBlobBytes and specialized node invites
  • Handler<NetworkEvent> — gossip, stream, subscription, and broadcast message routing
  • handle_state_delta — DeltaStore buffering, context routing, sync-session awareness
  • Blob vs sync stream routing based on the /calimero/blob/ vs /calimero/stream/ protocol prefix
  • Blob transfer handler — serves blob data to requesting peers
  • Reliable mpsc channel replacing LazyRecipient<NetworkEvent> to avoid cross-arbiter loss
  • NetworkEventBridge — tokio task that bridges the channel to the NodeManager actor
  • Per-context delta buffering for out-of-order delta arrival
  • GarbageCollector actor — periodic tombstone cleanup across all contexts
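The reliable-channel idea behind the NetworkEventBridge can be sketched without tokio or Actix: a bounded channel on the network side, drained by one dedicated task that delivers each event to the actor side. The `Event` enum, `spawn_bridge` function, and the use of std threads here are illustrative stand-ins for the real calimero types.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for the real NetworkEvent enum from
// calimero-network-primitives.
#[derive(Debug, PartialEq)]
pub enum Event {
    Gossip(String),
    StreamOpened(u64),
}

// Minimal sketch of the bridge: the network side pushes into a bounded
// channel, and a dedicated task drains it and hands each event to the
// actor-side callback, so no event is lost between execution contexts.
pub fn spawn_bridge<F>(capacity: usize, mut deliver: F) -> mpsc::SyncSender<Event>
where
    F: FnMut(Event) + Send + 'static,
{
    let (tx, rx) = mpsc::sync_channel(capacity);
    thread::spawn(move || {
        // The loop ends once every sender handle has been dropped.
        for event in rx {
            deliver(event);
        }
    });
    tx
}
```

The bounded capacity is the point: backpressure is applied at the sender instead of events being silently dropped across arbiters.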

calimero-node / sync

  • SyncManager, SyncConfig, protocol re-exports, metrics collector traits
  • Periodic sync loop, protocol selection, stream handling, peer tracking
  • Merkle DFS traversal — the cheapest sync protocol
  • BFS level-wise sync for wide trees
  • Full snapshot transfer — the heaviest fallback protocol
  • Blob sharing protocol
  • Key sharing / challenge domain constants
  • Per-peer sync history tracking
  • SyncMetricsCollector trait, PhaseTimer, NoOpMetrics
  • Production Prometheus metrics implementation
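The metrics seam can be sketched as a trait plus an RAII timer. The names SyncMetricsCollector, PhaseTimer, and NoOpMetrics come from the file list above; the specific method set here is an assumption.

```rust
use std::time::{Duration, Instant};

// Collector trait: production wires this to Prometheus, tests and
// metrics-free nodes use NoOpMetrics.
pub trait SyncMetricsCollector {
    fn record_phase(&self, phase: &str, elapsed: Duration);
}

pub struct NoOpMetrics;
impl SyncMetricsCollector for NoOpMetrics {
    fn record_phase(&self, _phase: &str, _elapsed: Duration) {}
}

// RAII timer: records the phase duration when it goes out of scope, so
// every exit path of a sync phase is measured, including early returns.
pub struct PhaseTimer<'a> {
    phase: &'a str,
    start: Instant,
    sink: &'a dyn SyncMetricsCollector,
}

impl<'a> PhaseTimer<'a> {
    pub fn start(phase: &'a str, sink: &'a dyn SyncMetricsCollector) -> Self {
        Self { phase, start: Instant::now(), sink }
    }
}

impl Drop for PhaseTimer<'_> {
    fn drop(&mut self) {
        self.sink.record_phase(self.phase, self.start.elapsed());
    }
}
```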

calimero-node-primitives

  • NodeClient façade — subscribe, broadcast, heartbeat, group ops, sync trigger
  • Application install / uninstall / list / bundle operations
  • Blob add / get / delete / list / auth operations
  • Generic alias create / delete / lookup / resolve / list
  • NodeMessage enum — GetBlobBytes, RegisterPendingSpecializedNodeInvite, etc.
  • BroadcastMessage, sync wire types, protocol traits, state machine
  • DeltaBuffer — bounded buffer for deltas received during snapshot sync
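The bounded-buffer behaviour can be sketched in a few lines. The real DeltaBuffer in calimero-node-primitives may differ in its eviction and error handling; `push` returning `bool` and the generic element type are assumptions here.

```rust
use std::collections::VecDeque;

// Sketch of a bounded delta buffer used while a snapshot sync is active.
pub struct DeltaBuffer<T> {
    cap: usize,
    items: VecDeque<T>,
}

impl<T> DeltaBuffer<T> {
    pub fn new(cap: usize) -> Self {
        Self { cap, items: VecDeque::new() }
    }

    // Rejects the delta when full, so a stalled snapshot sync cannot grow
    // memory without bound.
    pub fn push(&mut self, delta: T) -> bool {
        if self.items.len() >= self.cap {
            return false;
        }
        self.items.push_back(delta);
        true
    }

    // Replays buffered deltas in arrival order once the snapshot lands.
    pub fn drain(&mut self) -> impl Iterator<Item = T> + '_ {
        self.items.drain(..)
    }
}
```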

Key Types

Central structs in the node crate. NodeManager is the actor; its fields are split into three injected concerns: clients, managers, and mutable state.

NodeManager

pub struct NodeManager {
    clients: NodeClients,     // context + node façades
    managers: NodeManagers,   // blobstore + sync
    state: NodeState,       // mutable runtime state
}

NodeClients

struct NodeClients {
    context: ContextClient,
    node: NodeClient,
}

NodeManagers

struct NodeManagers {
    blobstore: BlobManager,
    sync: SyncManager,
}

NodeState

struct NodeState {
    blob_cache: Arc<DashMap<BlobId, CachedBlob>>,
    delta_stores: Arc<DashMap<ContextId, DeltaStore>>,
    pending_specialized_node_invites: PendingSpecializedNodeInvites,
    accept_mock_tee: bool,
    node_mode: NodeMode,
    sync_sessions: Arc<DashMap<ContextId, SyncSession>>,
}

NodeConfig

pub struct NodeConfig {
    home: Utf8PathBuf,
    identity: Keypair,
    group_identity: Option<GroupIdentityConfig>,
    network: NetworkConfig,
    sync: SyncConfig,
    datastore: StoreConfig,
    blobstore: BlobStoreConfig,
    context: ContextConfig,
    server: ServerConfig,
    gc_interval_secs: Option<u64>,
    mode: NodeMode,
    specialized_node: SpecializedNodeConfig,
}

NodeClient primitives

Thin async façade in calimero-node-primitives. Used by Server and ContextManager to interact with node-level concerns. Wraps a Store, BlobManager, NetworkClient, and a LazyRecipient<NodeMessage>.

pub struct NodeClient {
    datastore: Store,
    blobstore: BlobManager,
    network_client: NetworkClient,
    node_manager: LazyRecipient<NodeMessage>,
    event_sender: broadcast::Sender<NodeEvent>,
    ctx_sync_tx: mpsc::Sender<(Option<ContextId>, Option<PeerId>)>,
    specialized_node_invite_topic: String,
}

Context & Group Subscriptions

  • subscribe(context_id) — subscribe to a context's gossipsub topic via NetworkClient
  • unsubscribe(context_id) — unsubscribe from a context topic
  • subscribe_group(group_id) — subscribe to the group/{hex} topic
  • unsubscribe_group(group_id) — unsubscribe from a group topic
  • broadcast(context, sender, artifact, …) — publish a StateDelta on the context's gossipsub topic
  • publish_signed_group_op(group_id, op) — publish a SignedGroupOpV1 on the group topic
  • publish_group_heartbeat(group_id, …) — broadcast a GroupStateHeartbeat for catch-up detection
  • get_peers_count(context) — global peer count or per-topic mesh peers

Application & Blob Management

NodeClient also exposes application lifecycle and blob storage through sub-modules:

Applications

  • install_application()
  • install_from_path()
  • install_from_url()
  • install_from_bundle_blob()
  • uninstall_application()
  • list_applications()
  • list_packages()

Blobs

  • add_blob()
  • get_blob() / get_blob_bytes()
  • delete_blob()
  • list_blobs()
  • find_blob_providers()
  • announce_blob_to_network()
  • create_blob_auth()

Aliases

  • create_alias()
  • delete_alias()
  • lookup_alias()
  • resolve_alias()
  • list_aliases()

Startup Flow

The start() function in run.rs bootstraps the entire node. Each component is initialized in dependency order, with LazyRecipients breaking circular initialization.

1. merod run — CLI entry point, calls start() in run.rs
2. Store — RocksDB opened, with optional encryption
3. BlobManager — filesystem backend
4. NetworkManager — Actix actor running the libp2p swarm; yields a NetworkClient plus an mpsc(1000) event channel that warns at 80% capacity
5. NodeClient — wraps store + blob + network + events
6. ContextManager actor (context_recipient) and NodeManager actor (node_recipient)
7. NetworkEventBridge — tokio::spawn task bridging the channel to the actor
8. SyncManager.start() — async task running the periodic loop
9. Server — Axum, spawned as a tokio task
10. GC actor — 12-hour default interval

The process then blocks on tokio::select! { sync, server, bridge }.
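The role LazyRecipient plays in breaking circular initialization can be sketched with std types alone. The real type lives in calimero-utils-actix; this version, with its closure-based receiver and `init`/`send` methods, is only an illustration of the write-once-handle idea.

```rust
use std::sync::{Arc, OnceLock};

// A cloneable handle that can be passed to other components before the
// target actor exists, then bound exactly once during startup. This is how
// mutually dependent components (e.g. NodeManager and ContextManager) can
// each hold a handle to the other.
pub struct LazyRecipient<M> {
    slot: Arc<OnceLock<Box<dyn Fn(M) + Send + Sync>>>,
}

impl<M> Clone for LazyRecipient<M> {
    fn clone(&self) -> Self {
        Self { slot: Arc::clone(&self.slot) }
    }
}

impl<M> LazyRecipient<M> {
    pub fn new() -> Self {
        Self { slot: Arc::new(OnceLock::new()) }
    }

    // Bind the actual receiver once it has started; later calls are
    // ignored because the slot is write-once.
    pub fn init(&self, recv: impl Fn(M) + Send + Sync + 'static) {
        let _ = self.slot.set(Box::new(recv));
    }

    // Returns false while the recipient has not been initialized yet.
    pub fn send(&self, msg: M) -> bool {
        match self.slot.get() {
            Some(recv) => {
                recv(msg);
                true
            }
            None => false,
        }
    }
}
```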

Event Handling

NodeManager implements Handler<NetworkEvent> (via the bridge) and Handler<NodeMessage>. Network events arrive through a dedicated mpsc channel to avoid cross-arbiter message loss.

BroadcastMessage Dispatch

When a gossipsub message arrives, NodeManager deserializes it as a BroadcastMessage variant and routes accordingly:

1. StateDelta — routed to handle_state_delta, which checks sync-session state: if a snapshot sync is active the delta is buffered, otherwise it is forwarded to ContextManager for application.

2. HashHeartbeat — periodic root-hash announcement, compared against local state; a mismatch triggers sync via the ctx_sync_tx channel.

3. SignedGroupOpV1 — forwarded to ContextManager's group-operation handler; the signature is verified, then the op is applied to the local GroupStore DAG.

4. GroupStateHeartbeat — group-level hash heartbeat for governance DAG catch-up detection.

5. GroupGovernanceDelta — governance state delta, routed to ContextManager for DAG ingestion.

6. GroupMutationNotification — notification of group membership changes; triggers re-evaluation of subscriptions and capabilities.
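The dispatch above amounts to one match over the message enum. The variant names follow the list; the handler names on the right are placeholders for the real NodeManager methods, not their actual identifiers.

```rust
// Variants as listed above; payloads omitted for brevity.
pub enum BroadcastMessage {
    StateDelta,
    HashHeartbeat,
    SignedGroupOpV1,
    GroupStateHeartbeat,
    GroupGovernanceDelta,
    GroupMutationNotification,
}

pub fn route(msg: &BroadcastMessage) -> &'static str {
    match msg {
        // Buffered or forwarded depending on sync-session state.
        BroadcastMessage::StateDelta => "handle_state_delta",
        // A root-hash mismatch triggers a sync via ctx_sync_tx.
        BroadcastMessage::HashHeartbeat => "compare_root_hash",
        // Group traffic is forwarded to ContextManager.
        BroadcastMessage::SignedGroupOpV1
        | BroadcastMessage::GroupStateHeartbeat
        | BroadcastMessage::GroupGovernanceDelta => "forward_to_context_manager",
        // Membership changes re-evaluate subscriptions and capabilities.
        BroadcastMessage::GroupMutationNotification => "reevaluate_subscriptions",
    }
}
```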

Stream Routing

When a peer opens a stream, stream_opened.rs inspects the protocol prefix to route:

  • /calimero/stream/… — sync protocol streams, handed off to SyncManager for hash comparison, level-wise sync, snapshot transfer, or key sharing.

  • /calimero/blob/… — blob transfer protocol, handled by blob_protocol.rs; serves blob data from BlobManager to requesting peers.
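The routing reduces to a prefix check. The `Route` enum and the version suffixes in the usage below are illustrative; the two protocol prefixes are the ones named above.

```rust
// Sketch of the prefix check performed in stream_opened.rs.
#[derive(Debug, PartialEq)]
pub enum Route {
    Sync,
    Blob,
    Unknown,
}

pub fn route_stream(protocol: &str) -> Route {
    if protocol.starts_with("/calimero/stream/") {
        Route::Sync // handed off to SyncManager
    } else if protocol.starts_with("/calimero/blob/") {
        Route::Blob // served by the blob transfer handler
    } else {
        Route::Unknown
    }
}
```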

Subscription Lifecycle

NodeManager manages gossipsub subscriptions for both context topics (one per context) and group topics (group/{hex32}).

Context Topics

Subscribed when a context is joined/created via NodeClient. Carries StateDelta and HashHeartbeat messages. Unsubscribed on context leave.

Group Topics

Subscribed for each group the node participates in. Carries SignedGroupOpV1, GroupStateHeartbeat, and GroupGovernanceDelta. On peer subscription events, the node auto-triggers group sync and alias re-broadcast.

Periodic Tasks

Heartbeats

SyncManager periodically broadcasts HashHeartbeat per context and GroupStateHeartbeat per group. Interval is configurable via SyncConfig.

Tombstone GC

GarbageCollector actor runs every gc_interval_secs (default 12 hours). Scans all contexts for expired CRDT tombstones past TOMBSTONE_RETENTION_NANOS and deletes them.
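The sweep for one context can be sketched as a retain over tombstone records. The real TOMBSTONE_RETENTION_NANOS constant lives in calimero-storage; the record shape and function signature here are assumptions.

```rust
// Minimal tombstone record; the real entry carries CRDT metadata.
pub struct Tombstone {
    pub deleted_at_nanos: u64,
}

// Removes tombstones older than the retention window and returns how many
// were deleted, so the GC actor can log per-context counts.
pub fn sweep(
    tombstones: &mut Vec<Tombstone>,
    now_nanos: u64,
    retention_nanos: u64,
) -> usize {
    let before = tombstones.len();
    tombstones.retain(|t| now_nanos.saturating_sub(t.deleted_at_nanos) <= retention_nanos);
    before - tombstones.len()
}
```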

Blob Eviction

Cached blobs in NodeState::blob_cache are evicted based on last_accessed timestamps. The CachedBlob::touch() method refreshes access time on read.
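A sketch of the last-accessed scheme: `touch()` is named in the text above, while the field names and the `is_stale` eviction predicate are assumptions.

```rust
use std::time::{Duration, Instant};

// Cached blob entry keyed by BlobId in the real DashMap.
pub struct CachedBlob {
    pub bytes: Vec<u8>,
    last_accessed: Instant,
}

impl CachedBlob {
    pub fn new(bytes: Vec<u8>) -> Self {
        Self { bytes, last_accessed: Instant::now() }
    }

    // Called on every read so hot blobs stay cached.
    pub fn touch(&mut self) {
        self.last_accessed = Instant::now();
    }

    // Eviction predicate for the periodic sweep.
    pub fn is_stale(&self, ttl: Duration) -> bool {
        self.last_accessed.elapsed() > ttl
    }
}
```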

Sync Loop

SyncManager runs a continuous async loop: listen for sync triggers from ctx_sync_rx, select the cheapest applicable protocol (hash comparison → level-wise → snapshot), execute, and track results per peer.
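The cheapest-first escalation can be sketched as a pure selection function. The ordering follows the paragraph above; the inputs and the "few divergent levels" threshold are illustrative, not the real SyncManager heuristics.

```rust
#[derive(Debug, PartialEq)]
pub enum SyncProtocol {
    HashComparison, // roots match: verify only, transfer nothing
    LevelWise,      // BFS over a handful of divergent levels
    Snapshot,       // deep divergence: full state transfer
}

pub fn select_protocol(roots_match: bool, divergent_levels: usize) -> SyncProtocol {
    const LEVEL_WISE_LIMIT: usize = 4; // illustrative threshold
    if roots_match {
        SyncProtocol::HashComparison
    } else if divergent_levels <= LEVEL_WISE_LIMIT {
        SyncProtocol::LevelWise
    } else {
        SyncProtocol::Snapshot
    }
}
```

Escalating only when a cheaper protocol cannot converge keeps routine syncs at the cost of a hash exchange, reserving snapshot transfer for badly diverged peers.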

Dependencies

What the node crate depends on and what depends on it.

calimero-node depends on

calimero-context
ContextManager actor — started by node during bootstrap
calimero-context-primitives
ContextClient façade for sending messages to ContextManager
calimero-context-config
ContextGroupId, group configuration types
calimero-network
NetworkManager actor — started by node, runs libp2p swarm
calimero-network-primitives
NetworkClient, NetworkEvent, NetworkConfig, specialized invite types
calimero-node-primitives
NodeClient, NodeMessage, BroadcastMessage, sync wire types, DeltaBuffer
calimero-store
Store handle, typed keys, column families
calimero-store-rocksdb
RocksDB backend for Store
calimero-store-encryption
Optional AES encryption layer for the datastore
calimero-storage
CRDT storage engine, tombstone constants, EntityIndex
calimero-blobstore
BlobManager for binary object storage
calimero-server
Axum HTTP server, started by node as tokio task
calimero-primitives
Core types: ContextId, BlobId, PublicKey, events
calimero-crypto
SharedKey for encrypted sync handshakes
calimero-dag
DAG operations used by sync protocols
calimero-tee-attestation
TEE attestation verification for specialized node invites
calimero-utils-actix
LazyRecipient, ActorExt, Actix utilities

Depended on by

merod
Binary crate — calls calimero_node::start() from merod run command
calimero-server
Uses NodeClient (from primitives) for blob and application operations
calimero-context
Uses NodeClient (from primitives) for broadcast, subscribe, and sync triggers