Network & P2P

calimero-network + calimero-network-primitives

Purpose

NetworkManager is an Actix actor that owns and drives the libp2p Swarm with 10+ composed behaviours. It handles gossipsub pub/sub, Kademlia DHT, mDNS discovery, relay circuits, rendezvous registration, hole punching, and multiplexed streams.

The network layer only moves opaque bytes — message decoding happens in the node layer. This clean boundary means calimero-network never imports application-level types; it simply ferries Vec<u8> payloads between peers and hands them up through NetworkEvent variants.

At a glance: 10+ libp2p behaviours · 2 stream protocols · 14 NetworkClient methods

libp2p Behaviour Stack

All behaviours are composed into a single #[derive(NetworkBehaviour)] struct. Each behaviour manages one protocol concern, and the swarm dispatches events through pattern-matched handlers.

Core protocol
  • Gossipsub: pub/sub mesh relay, topic-based broadcast
  • Kademlia: DHT peer routing (/calimero/kad/1.0.0)

Discovery
  • mDNS: LAN discovery (optional toggle)
  • Rendezvous: namespace registration, peer discovery bootstrap

NAT / relay
  • Relay: circuit relay v2, NAT traversal fallback
  • DCUtR: direct connection upgrade, hole punching
  • AutoNAT v2: reachability detection (public / private probe)

Diagnostics
  • Identify: version & protocol push, agent string exchange
  • Ping: liveness heartbeat, RTT measurement

Connectivity
  • Streams (multiplexed): /calimero/stream/0.0.2 · /calimero/blob/0.0.2 for general sync, delta exchange, blob transfer

Specialized
  • Request-Response: specialized node invite verification & response

All of the above live in a single #[derive(NetworkBehaviour)] struct Behaviour { … }, composed into one libp2p Swarm driven by the NetworkManager actor.
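The pattern-matched dispatch can be sketched without libp2p: a single enum aggregates per-behaviour events, and the driving loop matches each variant to its handler. The enum and variant names below are illustrative stand-ins, not the crate's actual event types.

```rust
// Illustrative stand-ins for the aggregated behaviour event types.
#[derive(Debug)]
enum BehaviourEvent {
    Gossipsub { topic: String, data: Vec<u8> },
    Kademlia { peer: String },
    Ping { rtt_ms: u64 },
}

// Dispatch mirrors how the swarm loop fans events out to handlers.
fn dispatch(event: BehaviourEvent) -> &'static str {
    match event {
        BehaviourEvent::Gossipsub { .. } => "gossipsub",
        BehaviourEvent::Kademlia { .. } => "kademlia",
        BehaviourEvent::Ping { .. } => "ping",
    }
}

fn main() {
    let e = BehaviourEvent::Gossipsub { topic: "ctx".into(), data: vec![1, 2] };
    println!("{}", dispatch(e)); // prints "gossipsub"
}
```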

Topic Management

Gossipsub uses IdentTopic for two distinct topic namespaces. The network crate only sees opaque topic strings — the semantic meaning is defined at the node layer.

Context Topics

let topic = IdentTopic::new(context_id);
// Carries: StateDelta, HashHeartbeat
// One topic per context — peers subscribe on context join

Group Topics

let topic = IdentTopic::new(format!("group/{}", hex::encode(group_id)));
// Carries: SignedGroupOpV1, GroupGovernanceDelta,
// GroupStateHeartbeat, GroupMutationNotification
// One topic per group — cross-context governance traffic
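The group-topic construction above can be reproduced with a dependency-free sketch; the hex encoding is done inline here, whereas the crate uses the hex crate.

```rust
// Build the group topic string: "group/" followed by the lowercase hex
// encoding of the raw group id bytes.
fn group_topic(group_id: &[u8]) -> String {
    let hex: String = group_id.iter().map(|b| format!("{:02x}", b)).collect();
    format!("group/{}", hex)
}

fn main() {
    assert_eq!(group_topic(&[0xde, 0xad]), "group/dead");
    println!("{}", group_topic(&[0x01, 0xff])); // prints "group/01ff"
}
```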

Important boundary

BroadcastMessage is not defined in the network crate — it lives in calimero-node-primitives. The network layer only sees raw bytes; deserialization into BroadcastMessage variants happens in NodeManager.
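A toy decoder illustrates the boundary: the network layer hands over raw bytes, and a node-side function interprets them. The one-byte tag scheme here is invented purely for illustration; the real BroadcastMessage has its own serialization format.

```rust
// Hypothetical node-side decoding: the network layer never sees these types,
// it only ferries the Vec<u8> payload upward.
#[derive(Debug, PartialEq)]
enum BroadcastKind {
    StateDelta,
    HashHeartbeat,
    Unknown,
}

// Invented tag scheme for illustration: first byte selects the variant.
fn decode_kind(payload: &[u8]) -> BroadcastKind {
    match payload.first().copied() {
        Some(0) => BroadcastKind::StateDelta,
        Some(1) => BroadcastKind::HashHeartbeat,
        _ => BroadcastKind::Unknown,
    }
}

fn main() {
    assert_eq!(decode_kind(&[0, 9, 9]), BroadcastKind::StateDelta);
}
```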

Stream Protocols

Two multiplexed stream protocols ride on the same libp2p::Stream infrastructure. Both use length-delimited framing for reliable message boundaries.

/calimero/stream/0.0.2

General-purpose sync stream for delta exchange, key sharing, and state negotiation.

  • Length-delimited frames
  • 8 MiB max frame size
  • StreamMessageInitPayload dispatch
  • Used by sync engine for catch-up & real-time replication
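The length-delimited framing can be sketched synchronously with a 4-byte big-endian length prefix and the 8 MiB cap; the real implementation sits on async stream I/O, so this is a minimal model of the frame format only.

```rust
// Minimal length-delimited framing: 4-byte big-endian length prefix,
// frames larger than 8 MiB refused on both encode and decode.
const MAX_FRAME: usize = 8 * 1024 * 1024; // 8 MiB

fn encode_frame(payload: &[u8]) -> Option<Vec<u8>> {
    if payload.len() > MAX_FRAME {
        return None; // oversized frames are rejected
    }
    let mut frame = (payload.len() as u32).to_be_bytes().to_vec();
    frame.extend_from_slice(payload);
    Some(frame)
}

fn decode_frame(buf: &[u8]) -> Option<&[u8]> {
    let len = u32::from_be_bytes(buf.get(..4)?.try_into().ok()?) as usize;
    if len > MAX_FRAME {
        return None;
    }
    buf.get(4..4 + len)
}

fn main() {
    let frame = encode_frame(b"hello").unwrap();
    assert_eq!(decode_frame(&frame), Some(&b"hello"[..]));
}
```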

/calimero/blob/0.0.2

Blob transfer protocol for large binary payloads (WASM, application data).

  • JSON BlobRequest / BlobResponse handshake
  • Borsh-encoded BlobChunk stream
  • Chunked transfer for large payloads
  • Announce / query / request lifecycle
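The chunked-transfer step can be sketched as splitting a blob into indexed fixed-size chunks, standing in for the Borsh-encoded BlobChunk stream; the chunk size here is an illustrative choice, not the crate's actual value.

```rust
// Split a blob into (index, chunk) pairs, a stand-in for the chunk
// stream sent over /calimero/blob/0.0.2.
fn chunk_blob(blob: &[u8], chunk_size: usize) -> Vec<(u32, &[u8])> {
    blob.chunks(chunk_size)
        .enumerate()
        .map(|(i, c)| (i as u32, c))
        .collect()
}

fn main() {
    let blob = vec![7u8; 10];
    let chunks = chunk_blob(&blob, 4); // 4 + 4 + 2 bytes
    assert_eq!(chunks.len(), 3);
    assert_eq!(chunks[2].1.len(), 2);
}
```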
Diagram: Peer A calls open_stream(); Peer B calls accept_stream(). Both protocols ride one Yamux-multiplexed connection, and each frame is a 4-byte length prefix followed by a payload of up to 8 MiB.

NetworkClient

NetworkClient is the async handle held by other actors to communicate with NetworkManager. Each method sends a typed message and awaits the response.

Connection

fn dial(addr) → Result
fn listen_on(addr) → Result
fn bootstrap() → Result

Pub/Sub

fn subscribe(topic) → Result
fn unsubscribe(topic) → Result
fn publish(topic, data) → Result

Streams

fn open_stream(peer) → Stream

Mesh Info

fn peer_count() → usize
fn mesh_peer_count(topic) → usize
fn mesh_peers(topic) → Vec

Blob Transfer

fn announce_blob(hash) → Result
fn query_blob(hash) → Providers
fn request_blob(peer, hash) → Blob

Specialized Node

fn send_specialized_node_verification_request(..) → Result
fn send_specialized_node_invitation_response(..) → Result
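The client-to-actor pattern can be sketched with plain channels: each call sends a typed command carrying a reply channel, and a manager thread stands in for the NetworkManager actor. Command names and the stubbed peer count are illustrative, not the crate's real API.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative command pattern: a NetworkClient-style handle sends typed
// commands with a reply channel and awaits the response.
enum Command {
    PeerCount { reply: mpsc::Sender<usize> },
    Publish { topic: String, data: Vec<u8>, reply: mpsc::Sender<Result<(), String>> },
}

// A plain thread stands in for the NetworkManager actor loop.
fn spawn_manager() -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel::<Command>();
    thread::spawn(move || {
        for cmd in rx {
            match cmd {
                Command::PeerCount { reply } => {
                    let _ = reply.send(3); // stubbed value for the sketch
                }
                Command::Publish { reply, .. } => {
                    let _ = reply.send(Ok(()));
                }
            }
        }
    });
    tx
}

fn main() {
    let client = spawn_manager();
    let (tx, rx) = mpsc::channel();
    client.send(Command::PeerCount { reply: tx }).unwrap();
    assert_eq!(rx.recv().unwrap(), 3);
}
```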

Discovery

The discovery subsystem combines multiple libp2p mechanisms into a state machine that progressively establishes connectivity with the mesh.

Sequential phases
  1. Boot: dial bootstrap nodes
  2. Register: register in the rendezvous namespace
  3. Discover: find context peers
  4. Connect: dial discovered peers

Concurrent mechanisms
  • Relay Reservation: request a circuit relay slot from configured relay nodes
  • AutoNAT Probe: test reachability from external peers
  • DCUtR Upgrade: attempt a hole-punch upgrade from relayed to direct
  • Public IP (optional): advertise an external address
  • mDNS (LAN): concurrent local discovery, if enabled
  • Kademlia: DHT routing table & provider records
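The sequential phases can be modeled as a small state machine; the relay, AutoNAT, and DCUtR mechanisms run alongside rather than strictly in sequence, so the states here are an illustrative simplification.

```rust
// Discovery phases as a linear state machine (illustrative only).
#[derive(Debug, Clone, Copy, PartialEq)]
enum DiscoveryState {
    Boot,     // dial bootstrap nodes
    Register, // register in the rendezvous namespace
    Discover, // find context peers
    Connect,  // dial discovered peers
}

fn advance(state: DiscoveryState) -> DiscoveryState {
    use DiscoveryState::*;
    match state {
        Boot => Register,
        Register => Discover,
        Discover => Connect,
        // Steady state: keep dialing newly discovered peers.
        Connect => Connect,
    }
}

fn main() {
    let mut s = DiscoveryState::Boot;
    for _ in 0..3 {
        s = advance(s);
    }
    assert_eq!(s, DiscoveryState::Connect);
}
```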

Transport

The transport layer supports multiple stacks for different network conditions. All connections are authenticated and encrypted.

TCP Stack

Primary transport for reliable connections.

// Transport chain:
TCP → TLS (libp2p-tls) or Noise (XX handshake)
   → Yamux (stream multiplexing)

QUIC Stack

UDP-based transport with built-in encryption and muxing.

// Transport chain:
QUIC (UDP + TLS 1.3 built-in)
   → native stream multiplexing
// No separate Noise/Yamux needed
Yamux frame header: version (1B) · type (1B) · flags (2B) · stream ID (4B) · length (4B), followed by a variable-length payload.

Noise XX handshake:
  → e
  ← e, ee, s, es
  → s, se
  ✓ mutual authentication complete
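The 12-byte Yamux header layout can be parsed directly; per the Yamux spec, all multi-byte fields are big-endian.

```rust
// Parse a 12-byte Yamux frame header:
// version (1B) | type (1B) | flags (2B) | stream ID (4B) | length (4B).
#[derive(Debug, PartialEq)]
struct YamuxHeader {
    version: u8,
    frame_type: u8,
    flags: u16,
    stream_id: u32,
    length: u32,
}

fn parse_header(buf: &[u8; 12]) -> YamuxHeader {
    YamuxHeader {
        version: buf[0],
        frame_type: buf[1],
        flags: u16::from_be_bytes([buf[2], buf[3]]),
        stream_id: u32::from_be_bytes([buf[4], buf[5], buf[6], buf[7]]),
        length: u32::from_be_bytes([buf[8], buf[9], buf[10], buf[11]]),
    }
}

fn main() {
    let raw = [0, 1, 0, 2, 0, 0, 0, 5, 0, 0, 0, 16];
    let h = parse_header(&raw);
    assert_eq!(h.stream_id, 5);
    assert_eq!(h.length, 16);
}
```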

Dependencies

Key crate dependencies and their roles in the network layer.

calimero-network

libp2p
Core networking stack — swarm, transports, all behaviour crates
tokio
Async runtime for swarm event loop and stream I/O
futures
Stream/Sink adapters for length-delimited framing
serde / serde_json
Blob protocol JSON handshake serialization
borsh
Blob chunk binary serialization
tracing
Structured logging for network events
calimero-network-primitives
Shared types: NetworkConfig, stream protocol IDs

calimero-network-primitives

libp2p-identity
PeerId, Keypair re-exports
multiaddr
Multi-address types for listen/dial configuration
serde
Config serialization (NetworkConfig, BootstrapConfig, etc.)
strum
Enum display/from_str for protocol IDs