Error Handling
MeroboxError hierarchy, result pattern, retry configuration, and error codes
MeroboxError Hierarchy
All merobox errors inherit from a single MeroboxError base class, so callers can handle any merobox failure with one except clause and rely on consistent error messages.
Error Types
NodeResolutionError
Raised when the NodeResolver cannot find a suitable backend for a declared node. All resolution strategies (remote, URL, Docker, binary) have been exhausted.
AuthenticationError
Raised when JWT authentication fails: invalid credentials, expired tokens, failed refresh, or missing auth configuration.
WorkflowError
Base error for workflow-level issues. Parent of StepValidationError and StepExecutionError.
StepValidationError
Raised during the validate() phase when a step’s configuration is missing required fields, references invalid nodes, or has incompatible options.
StepExecutionError
Raised during the execute() phase when a step fails: RPC errors, unexpected results, assertion failures, or timeout during execution.
ConfigurationError
Raised when the YAML workflow file is malformed, missing required fields, or contains invalid values that prevent parsing.
MeroboxTimeoutError
Raised when an operation exceeds its configured timeout: health checks, sync waits, sandbox startup, or HTTP requests.
ValidationError
Generic validation error for non-step configuration: invalid port ranges, malformed URLs, unsupported backend types.
ClientError
Raised by calimero-client-py when a JSON-RPC call returns an error response or the HTTP connection fails.
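The types above can be pictured as one tree. Only the WorkflowError parentage is stated explicitly; treating the remaining types as direct subclasses of MeroboxError is an assumption, and ClientError is omitted because it comes from calimero-client-py rather than merobox itself:

```python
# Hypothetical sketch of the hierarchy described above.
class MeroboxError(Exception):
    """Base class for all merobox errors."""

class NodeResolutionError(MeroboxError): ...
class AuthenticationError(MeroboxError): ...

class WorkflowError(MeroboxError): ...          # parent of the two step errors
class StepValidationError(WorkflowError): ...   # raised in validate()
class StepExecutionError(WorkflowError): ...    # raised in execute()

class ConfigurationError(MeroboxError): ...
class MeroboxTimeoutError(MeroboxError): ...
class ValidationError(MeroboxError): ...

# A single except clause catches everything in the hierarchy:
try:
    raise StepExecutionError("RPC call failed")
except MeroboxError as e:
    print(type(e).__name__, e)
```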
ok() / fail() Pattern
Steps use the ok() and fail() helper functions to return structured results, enabling consistent error handling and result propagation.
ok(data)
return StepResult(
    success=True,
    data=data or {},
    error=None,
)
Used when a step completes successfully. The data dict is stored in workflow_results if the step has a name.
fail(error, exc=None)
return StepResult(
    success=False,
    data={},
    error=error,
    exception=exc,
)
Used when a step fails. The WorkflowExecutor checks result.success and either continues or triggers cleanup.
StepResult fields
success: bool                   # Whether the step completed
data: dict                      # Step output; stored in workflow_results
error: Optional[str]            # Failure message, if any
exception: Optional[Exception]  # Underlying exception, if any
duration_ms: float              # Execution time in milliseconds
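Pulling the pieces above together, a minimal runnable sketch (the dataclass layout and helper signatures are assumptions based on the fields listed):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StepResult:
    success: bool
    data: dict = field(default_factory=dict)
    error: Optional[str] = None
    exception: Optional[Exception] = None
    duration_ms: float = 0.0

def ok(data=None):
    # Successful step: data is stored in workflow_results if the step is named.
    return StepResult(success=True, data=data or {})

def fail(error, exc=None):
    # Failed step: error is the message, exc the optional underlying exception.
    return StepResult(success=False, data={}, error=error, exception=exc)

# Executor-side handling, as described above:
result = fail("RPC call returned an unexpected result")
if result.success:
    pass  # continue; merge result.data into workflow_results
else:
    print(f"step failed: {result.error}")  # then trigger cleanup
```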
RetryConfig & @with_retry
Operations that may transiently fail (HTTP calls, health checks, sync waits) use the retry system.
RetryConfig
max_retries: int = 3
base_delay: float = 1.0 # seconds
max_delay: float = 30.0
backoff_factor: float = 2.0
jitter: bool = True
retryable_errors: tuple = (
    MeroboxTimeoutError,
    ClientError,
    ConnectionError,
)
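The delay schedule implied by these fields can be sketched as follows (the function name and the "full jitter" formula are assumptions; merobox may jitter differently):

```python
import random

def next_delay(attempt: int,
               base_delay: float = 1.0,
               max_delay: float = 30.0,
               backoff_factor: float = 2.0,
               jitter: bool = True) -> float:
    # Exponential backoff: base * factor^attempt, capped at max_delay.
    delay = min(base_delay * (backoff_factor ** attempt), max_delay)
    if jitter:
        # Randomize to avoid many clients retrying in lockstep.
        delay = random.uniform(0, delay)
    return delay

# Without jitter the schedule is deterministic:
print([next_delay(a, jitter=False) for a in range(5)])
# → [1.0, 2.0, 4.0, 8.0, 16.0]
```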
@with_retry Decorator
@with_retry(RetryConfig(max_retries=5))
async def health_check(node: Node) -> bool:
    # Will retry up to 5 times with
    # exponential backoff on failure
    resp = await node.get("/health")
    return resp.status == 200
The decorator wraps async functions with automatic retry logic. On each failure, it checks if the exception matches retryable_errors, calculates the next delay with exponential backoff and optional jitter, then retries. Non-retryable errors propagate immediately.
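That retry loop can be sketched like this. This is an illustrative implementation under assumed names, not merobox's actual code, and jitter is omitted for brevity; only ConnectionError/TimeoutError are used as retryable errors so the sketch stays self-contained:

```python
import asyncio
import functools
from dataclasses import dataclass

@dataclass
class RetryConfig:
    max_retries: int = 3
    base_delay: float = 1.0
    max_delay: float = 30.0
    backoff_factor: float = 2.0
    retryable_errors: tuple = (ConnectionError, TimeoutError)

def with_retry(config: RetryConfig):
    """Wrap an async function with exponential-backoff retries."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            for attempt in range(config.max_retries + 1):
                try:
                    return await fn(*args, **kwargs)
                except config.retryable_errors:
                    if attempt == config.max_retries:
                        raise  # retries exhausted: re-raise the last error
                    delay = min(
                        config.base_delay * config.backoff_factor ** attempt,
                        config.max_delay,
                    )
                    await asyncio.sleep(delay)
                # non-retryable exceptions propagate immediately
        return wrapper
    return decorator

# Fails twice, then succeeds on the third call:
calls = {"n": 0}

@with_retry(RetryConfig(base_delay=0.01))
async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(asyncio.run(flaky()))  # → ok
```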