Node Crate
calimero-node + calimero-node-primitives
Purpose
NodeManager is an Actix actor that orchestrates the entire node. It receives network events via a dedicated channel bridge, manages blob caching, periodic heartbeats, and delta handling, and routes incoming streams to the sync manager.
It does not run libp2p directly — NetworkManager does that. NodeManager communicates with the network through NodeClient, which wraps a LazyRecipient<NodeMessage> and a NetworkClient handle. The crate also contains the SyncManager (a long-lived async task, not an Actix actor) and a GarbageCollector actor for storage tombstone cleanup.
Module Structure
File layout across the two crates. Each handler lives in its own focused file following the Single Responsibility Principle.
calimero-node
calimero-node / sync
calimero-node-primitives
Key Types
Central structs in the node crate. NodeManager is the actor; its fields are split into three injected concerns: clients, managers, and mutable state.
NodeManager

    struct NodeManager {
        clients: NodeClients,   // context + node façades
        managers: NodeManagers, // blobstore + sync
        state: NodeState,       // mutable runtime state
    }

NodeClients

    struct NodeClients {
        context: ContextClient,
        node: NodeClient,
    }

NodeManagers

    struct NodeManagers {
        blobstore: BlobManager,
        sync: SyncManager,
    }

NodeState

    struct NodeState {
        blob_cache: Arc<DashMap<BlobId, CachedBlob>>,
        delta_stores: Arc<DashMap<ContextId, DeltaStore>>,
        pending_specialized_node_invites: PendingSpecializedNodeInvites,
        accept_mock_tee: bool,
        node_mode: NodeMode,
        sync_sessions: Arc<DashMap<ContextId, SyncSession>>,
    }

NodeConfig

    struct NodeConfig {
        home: Utf8PathBuf,
        identity: Keypair,
        group_identity: Option<GroupIdentityConfig>,
        network: NetworkConfig,
        sync: SyncConfig,
        datastore: StoreConfig,
        blobstore: BlobStoreConfig,
        context: ContextConfig,
        server: ServerConfig,
        gc_interval_secs: Option<u64>,
        mode: NodeMode,
        specialized_node: SpecializedNodeConfig,
    }
NodeClient primitives
Thin async façade in calimero-node-primitives. Used by Server and ContextManager to interact with node-level concerns. Wraps a Store, BlobManager, NetworkClient, and a LazyRecipient<NodeMessage>.
    struct NodeClient {
        datastore: Store,
        blobstore: BlobManager,
        network_client: NetworkClient,
        node_manager: LazyRecipient<NodeMessage>,
        event_sender: broadcast::Sender<NodeEvent>,
        ctx_sync_tx: mpsc::Sender<(Option<ContextId>, Option<PeerId>)>,
        specialized_node_invite_topic: String,
    }
Context & Group Subscriptions
- subscribe(context_id): subscribe to a context's gossipsub topic via NetworkClient
- unsubscribe(context_id): unsubscribe from a context topic
- subscribe_group(group_id): subscribe to the group/{hex} topic
- unsubscribe_group(group_id): unsubscribe from a group topic
- broadcast(context, sender, artifact, …): publish a StateDelta on the context's gossipsub topic
- publish_signed_group_op(group_id, op): publish a SignedGroupOpV1 on the group topic
- publish_group_heartbeat(group_id, …): broadcast a GroupStateHeartbeat for catch-up detection
- get_peers_count(context): global peer count, or per-topic mesh peers
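The group/{hex} topic convention used by the group subscription and publish methods can be sketched as follows. This is a hypothetical helper with assumed names; the crate's actual topic-construction code and id types may differ.

```rust
/// Build the gossipsub topic name for a group, following the
/// "group/{hex}" convention described above. Illustrative helper,
/// not the crate's real function.
fn group_topic(group_id: &[u8; 32]) -> String {
    let hex: String = group_id.iter().map(|b| format!("{b:02x}")).collect();
    format!("group/{hex}")
}

fn main() {
    let id = [0xabu8; 32];
    let topic = group_topic(&id);
    assert!(topic.starts_with("group/abab"));
    println!("{topic}");
}
```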
Application & Blob Management
NodeClient also exposes application lifecycle and blob storage through sub-modules:
Applications
- install_application()
- install_from_path()
- install_from_url()
- install_from_bundle_blob()
- uninstall_application()
- list_applications()
- list_packages()
Blobs
- add_blob()
- get_blob() / get_blob_bytes()
- delete_blob()
- list_blobs()
- find_blob_providers()
- announce_blob_to_network()
- create_blob_auth()
Aliases
- create_alias()
- delete_alias()
- lookup_alias()
- resolve_alias()
- list_aliases()
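To illustrate the shape of the blob lifecycle listed above, here is a minimal in-memory stand-in. All names and signatures are assumptions for illustration; the real BlobManager is content-addressed and backed by persistent storage.

```rust
use std::collections::HashMap;

// Minimal in-memory stand-in for the add/get/delete blob lifecycle.
// Illustrative only: the real crate keys blobs by BlobId (a content hash).
#[derive(Default)]
struct MockBlobStore {
    blobs: HashMap<u64, Vec<u8>>,
    next_id: u64,
}

impl MockBlobStore {
    fn add_blob(&mut self, data: Vec<u8>) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.blobs.insert(id, data);
        id
    }
    fn get_blob_bytes(&self, id: u64) -> Option<&[u8]> {
        self.blobs.get(&id).map(Vec::as_slice)
    }
    fn delete_blob(&mut self, id: u64) -> bool {
        self.blobs.remove(&id).is_some()
    }
}

fn main() {
    let mut store = MockBlobStore::default();
    let id = store.add_blob(b"wasm bytes".to_vec());
    assert_eq!(store.get_blob_bytes(id), Some(&b"wasm bytes"[..]));
    assert!(store.delete_blob(id));
    assert!(store.get_blob_bytes(id).is_none());
}
```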
Startup Flow
The start() function in run.rs bootstraps the entire node. Each component is initialized in dependency order, with LazyRecipient handles used to break circular initialization dependencies.
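The circular-initialization break can be sketched with a minimal stand-in for LazyRecipient: a handle created before its target exists and bound later. Names here are assumptions; the real LazyRecipient is an Actix-aware handle, not a raw channel wrapper.

```rust
use std::sync::{mpsc, Arc, Mutex};

// Simplified stand-in for LazyRecipient<M>: dependents (e.g. NodeClient)
// get the handle first; the actor's mailbox is bound once it starts.
#[derive(Clone)]
struct LazyHandle<M> {
    inner: Arc<Mutex<Option<mpsc::Sender<M>>>>,
}

impl<M> LazyHandle<M> {
    fn new() -> Self {
        Self { inner: Arc::new(Mutex::new(None)) }
    }
    /// Bind the real recipient once it has been constructed.
    fn bind(&self, tx: mpsc::Sender<M>) {
        *self.inner.lock().unwrap() = Some(tx);
    }
    /// Returns false until the handle is bound.
    fn send(&self, msg: M) -> bool {
        match self.inner.lock().unwrap().as_ref() {
            Some(tx) => tx.send(msg).is_ok(),
            None => false,
        }
    }
}

fn main() {
    // 1. Create the handle before the actor exists and hand it to dependents.
    let handle: LazyHandle<&'static str> = LazyHandle::new();
    assert!(!handle.send("too early")); // target not started yet

    // 2. Later, start the actor and bind its mailbox.
    let (tx, rx) = mpsc::channel();
    handle.bind(tx);
    assert!(handle.send("hello"));
    assert_eq!(rx.recv().unwrap(), "hello");
}
```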
Event Handling
NodeManager implements Handler<NetworkEvent> (via the bridge) and Handler<NodeMessage>. Network events arrive through a dedicated mpsc channel to avoid cross-arbiter message loss.
BroadcastMessage Dispatch
When a gossipsub message arrives, NodeManager deserializes it as a BroadcastMessage variant and routes accordingly:
- StateDelta: routed to handle_state_delta, which checks sync-session state. If a snapshot sync is active, the delta is buffered; otherwise it is forwarded to ContextManager for application.
- HashHeartbeat: periodic root-hash announcement, compared with local state; a mismatch triggers sync via the ctx_sync_tx channel.
- SignedGroupOpV1: forwarded to ContextManager's group-operation handler. The signature is verified, then the op is applied to the local GroupStore DAG.
- GroupStateHeartbeat: group-level hash heartbeat for governance-DAG catch-up detection.
- GroupGovernanceDelta: governance state delta, routed to ContextManager for DAG ingestion.
- GroupMutationNotification: notification of group-membership changes; triggers re-evaluation of subscriptions and capabilities.
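The variant-based routing can be sketched as a match over a simplified local enum. The variant names come from the text; the handler names other than handle_state_delta are illustrative assumptions, and the real variants carry payloads.

```rust
// Simplified mirror of the BroadcastMessage variants described above.
enum BroadcastMessage {
    StateDelta,
    HashHeartbeat,
    SignedGroupOpV1,
    GroupStateHeartbeat,
    GroupGovernanceDelta,
    GroupMutationNotification,
}

/// Route a decoded gossipsub message. Returns a handler label for
/// illustration; the real code calls into NodeManager / ContextManager.
fn dispatch(msg: &BroadcastMessage) -> &'static str {
    match msg {
        BroadcastMessage::StateDelta => "handle_state_delta",
        BroadcastMessage::HashHeartbeat => "compare_root_hash_and_maybe_sync",
        BroadcastMessage::SignedGroupOpV1 => "apply_group_op",
        BroadcastMessage::GroupStateHeartbeat => "group_catchup_check",
        BroadcastMessage::GroupGovernanceDelta => "ingest_governance_delta",
        BroadcastMessage::GroupMutationNotification => "reevaluate_subscriptions",
    }
}

fn main() {
    assert_eq!(dispatch(&BroadcastMessage::StateDelta), "handle_state_delta");
    assert_eq!(
        dispatch(&BroadcastMessage::GroupMutationNotification),
        "reevaluate_subscriptions"
    );
}
```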
Stream Routing
When a peer opens a stream, stream_opened.rs inspects the protocol prefix to route:
- /calimero/stream/…: sync-protocol streams, handed off to SyncManager for hash comparison, level-wise sync, snapshot transfer, or key sharing.
- /calimero/blob/…: blob-transfer protocol, handled by blob_protocol.rs, which serves blob data from BlobManager to the requesting peer.
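The prefix-based routing in stream_opened.rs can be sketched as follows. The protocol prefixes come from the text; the route names and the suffix in the example (elided as "…" in the source) are illustrative.

```rust
// Sketch of prefix-based stream routing. Route names are assumptions.
#[derive(Debug, PartialEq)]
enum StreamRoute {
    Sync,    // SyncManager: hash comparison, level-wise, snapshot, key share
    Blob,    // blob_protocol.rs: serve blob data to the requesting peer
    Unknown, // unrecognized protocol
}

fn route_stream(protocol: &str) -> StreamRoute {
    if protocol.starts_with("/calimero/stream/") {
        StreamRoute::Sync
    } else if protocol.starts_with("/calimero/blob/") {
        StreamRoute::Blob
    } else {
        StreamRoute::Unknown
    }
}

fn main() {
    // Suffixes below are placeholders; the source elides the real ones.
    assert_eq!(route_stream("/calimero/stream/anything"), StreamRoute::Sync);
    assert_eq!(route_stream("/calimero/blob/anything"), StreamRoute::Blob);
    assert_eq!(route_stream("/other/proto"), StreamRoute::Unknown);
}
```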
Subscription Lifecycle
NodeManager manages gossipsub subscriptions for both context topics (one per context) and group topics (group/{hex32}).
Context Topics
Subscribed when a context is joined/created via NodeClient. Carries StateDelta and HashHeartbeat messages. Unsubscribed on context leave.
Group Topics
Subscribed for each group the node participates in. Carries SignedGroupOpV1, GroupStateHeartbeat, and GroupGovernanceDelta. On peer subscription events, the node auto-triggers group sync and alias re-broadcast.
Periodic Tasks
Heartbeats
SyncManager periodically broadcasts HashHeartbeat per context and GroupStateHeartbeat per group. Interval is configurable via SyncConfig.
Tombstone GC
GarbageCollector actor runs every gc_interval_secs (default 12 hours). It scans all contexts for CRDT tombstones older than TOMBSTONE_RETENTION_NANOS and deletes them.
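The expiry predicate the collector applies per tombstone can be sketched as an age check against the retention window. TOMBSTONE_RETENTION_NANOS exists in the crate, but the 12-hour value below is illustrative, not the crate's actual constant.

```rust
// Illustrative retention window; the crate defines its own value.
const TOMBSTONE_RETENTION_NANOS: u64 = 12 * 60 * 60 * 1_000_000_000;

/// A tombstone is collectable once its age exceeds the retention window.
fn is_expired(deleted_at_nanos: u64, now_nanos: u64) -> bool {
    now_nanos.saturating_sub(deleted_at_nanos) > TOMBSTONE_RETENTION_NANOS
}

fn main() {
    let deleted_at = 1_000;
    // Exactly at the boundary: still retained.
    assert!(!is_expired(deleted_at, deleted_at + TOMBSTONE_RETENTION_NANOS));
    // One nanosecond past the window: collectable.
    assert!(is_expired(deleted_at, deleted_at + TOMBSTONE_RETENTION_NANOS + 1));
}
```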
Blob Eviction
Cached blobs in NodeState::blob_cache are evicted based on last_accessed timestamps. The CachedBlob::touch() method refreshes access time on read.
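The touch-on-read eviction scheme can be sketched like this. CachedBlob and touch() exist in the crate; the staleness threshold and field layout here are illustrative assumptions.

```rust
use std::time::{Duration, Instant};

// Illustrative cached-blob entry; real fields may differ.
struct CachedBlob {
    bytes: Vec<u8>,
    last_accessed: Instant,
}

impl CachedBlob {
    /// Refresh the access timestamp on read, as described above.
    fn touch(&mut self) {
        self.last_accessed = Instant::now();
    }
    /// Eviction candidate once idle longer than max_idle (assumed policy).
    fn is_stale(&self, max_idle: Duration) -> bool {
        self.last_accessed.elapsed() > max_idle
    }
}

fn main() {
    let mut blob = CachedBlob { bytes: vec![1, 2, 3], last_accessed: Instant::now() };
    blob.touch();
    assert!(!blob.is_stale(Duration::from_secs(60)));
    // Wait until any time at all has elapsed, then a zero idle budget is exceeded.
    while blob.last_accessed.elapsed().is_zero() {}
    assert!(blob.is_stale(Duration::ZERO));
    assert_eq!(blob.bytes.len(), 3);
}
```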
Sync Loop
SyncManager runs a continuous async loop: listen for sync triggers from ctx_sync_rx, select the cheapest applicable protocol (hash comparison → level-wise → snapshot), execute, and track results per peer.
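The "cheapest applicable protocol" selection can be sketched as an escalation function. The ordering (hash comparison, then level-wise, then snapshot) comes from the text; the divergence heuristic and threshold are illustrative assumptions.

```rust
// Protocols in ascending cost order, per the escalation described above.
#[derive(Debug, PartialEq)]
enum SyncProtocol {
    LevelWise, // walk the tree level by level, exchanging only differences
    Snapshot,  // full state transfer, most expensive
}

/// Hash comparison is the cheapest step: matching roots mean no further
/// sync is needed (None). Otherwise escalate based on an (assumed)
/// estimate of how far the peers have diverged.
fn choose_protocol(local_root: u64, remote_root: u64, divergence_estimate: u32) -> Option<SyncProtocol> {
    if local_root == remote_root {
        return None; // roots agree: hash comparison settles it
    }
    Some(if divergence_estimate < 1_000 {
        SyncProtocol::LevelWise
    } else {
        SyncProtocol::Snapshot
    })
}

fn main() {
    assert_eq!(choose_protocol(7, 7, 0), None);
    assert_eq!(choose_protocol(1, 2, 10), Some(SyncProtocol::LevelWise));
    assert_eq!(choose_protocol(1, 2, 10_000), Some(SyncProtocol::Snapshot));
}
```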
Dependencies
What the node crate depends on and what depends on it.