CLI Reference

Complete reference for all PetalTrace CLI commands.
Global Options

```
petaltrace [command] [flags]
```

Flags:

```
      --config string   Path to config file (default: petaltrace.yaml)
  -h, --help            Help for petaltrace
  -v, --version         Version for petaltrace
```

serve

Start the PetalTrace daemon.
```
petaltrace serve [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --config | Path to configuration file | petaltrace.yaml |
What it starts:
- HTTP API server on port 8090
- OTLP/gRPC collector on port 4317
- OTLP/HTTP collector on port 4318
- SQLite store with automatic migrations
Example:
```
# Start with defaults
petaltrace serve

# Start with custom config
petaltrace serve --config /etc/petaltrace/petaltrace.yaml
```

Output:

```
time=2026-03-17T18:29:08.672-07:00 level=INFO msg="starting petaltrace" version=0.1.0-dev
time=2026-03-17T18:29:08.675-07:00 level=INFO msg="database initialized" path=/Users/user/.petaltrace/data.db
time=2026-03-17T18:29:08.676-07:00 level=INFO msg="starting API server" addr=0.0.0.0:8090
time=2026-03-17T18:29:08.676-07:00 level=INFO msg="petaltrace ready" api=0.0.0.0:8090 otlp_http=[::]:4318 otlp_grpc=[::]:4317
```

Shutdown gracefully with Ctrl+C.
runs

Manage workflow runs.
runs list
List recent runs with optional filtering.
```
petaltrace runs list [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --workflow | Filter by workflow name | |
| --status | Filter by status: running, completed, failed | |
| --since | Show runs since duration (e.g., 24h, 7d) | |
| --limit | Maximum number of runs | 50 |
| --json | Output as JSON | false |
Examples:
```
# List recent runs
petaltrace runs list

# List completed runs from the last 24 hours
petaltrace runs list --status completed --since 24h

# List runs for a specific workflow as JSON
petaltrace runs list --workflow email-processor --json

# Filter failed runs with limit
petaltrace runs list --status failed --limit 10
```

Output:

```
STATUS  WORKFLOW           RUN ID           DURATION  TOKENS  COST     STARTED
✓       email-processor    run-01JK3ABC...  1.2s      5000    $0.0150  2026-03-17 10:15:30
✗       research-pipeline  run-01JK3DEF...  3.4s      2100    $0.0089  2026-03-17 10:14:15
●       content-writer     run-01JK3GHI...  -         1200    $0.0042  2026-03-17 10:16:00

Showing 3 runs
```

Status icons: ✓ completed, ✗ failed, ● running, ○ cancelled
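Flags like --since take compact duration strings such as 24h and 7d. A minimal sketch of how a script wrapping the CLI might parse such values (this is an illustration, not PetalTrace's own parser):

```python
import re
from datetime import timedelta

# Map a unit suffix to a timedelta keyword argument.
_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def parse_duration(value: str) -> timedelta:
    """Parse compact durations like '24h' or '7d' into a timedelta."""
    match = re.fullmatch(r"(\d+)([smhd])", value)
    if not match:
        raise ValueError(f"invalid duration: {value!r}")
    amount, unit = match.groups()
    return timedelta(**{_UNITS[unit]: int(amount)})

print(parse_duration("24h"))  # 1 day, 0:00:00
print(parse_duration("7d"))   # 7 days, 0:00:00
```

The same shape works for the --retain flag of gc, which also takes values like 14d.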
runs show
Show detailed information about a specific run.
```
petaltrace runs show <run-id> [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --spans | Include span tree | false |
| --node | Filter spans by node ID (requires --spans) | |
| --json | Output as JSON | false |
Examples:
```
# Show run details
petaltrace runs show run-01JK3ABC

# Show run with full span tree
petaltrace runs show run-01JK3ABC --spans

# Show spans for a specific node
petaltrace runs show run-01JK3ABC --spans --node researcher_agent

# Output as JSON
petaltrace runs show run-01JK3ABC --json
```

runs delete
Delete a run and all its spans.
```
petaltrace runs delete <run-id> [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| -y, --yes | Skip confirmation prompt | false |
Example:
```
# Delete with confirmation
petaltrace runs delete run-01JK3ABC

# Delete without confirmation
petaltrace runs delete run-01JK3ABC -y
```

prompt
Display the full prompt and completion for an LLM node.
```
petaltrace prompt <run-id> <node-id> [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| -c, --completion | Include the completion/response | false |
| -f, --format | Output format: text, json, curl, sdk | text |
Examples:
```
# Show prompt only
petaltrace prompt run-01JK3ABC researcher_agent

# Show prompt and completion
petaltrace prompt run-01JK3ABC researcher_agent --completion

# Generate cURL command
petaltrace prompt run-01JK3ABC researcher_agent --format curl

# Generate Python SDK code
petaltrace prompt run-01JK3ABC researcher_agent --format sdk
```

Output formats:

- text: Human-readable prompt with message formatting
- json: Structured JSON with all prompt/completion data
- curl: Copy-paste ready cURL command for the API call
- sdk: Python SDK code (Anthropic or OpenAI, based on provider)
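The json format is the easiest to post-process in scripts. A hedged sketch of pulling fields out of it, using a made-up sample payload; the field names (prompt, completion, node_id) are illustrative assumptions, so check the actual output for the real schema:

```python
import json

# Sample payload standing in for `petaltrace prompt ... --format json` output.
# Field names here are illustrative, not the documented schema.
payload = json.loads("""
{
  "node_id": "researcher_agent",
  "prompt": [
    {"role": "system", "content": "You are a research assistant."},
    {"role": "user", "content": "Summarize the findings."}
  ],
  "completion": {"role": "assistant", "content": "Here is a summary..."}
}
""")

# Reconstruct a flat transcript from the message list.
transcript = "\n".join(f"{m['role']}: {m['content']}" for m in payload["prompt"])
print(transcript)
print("assistant:", payload["completion"]["content"])
```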
cost

Cost analysis commands.
cost summary
Aggregate cost metrics across runs.
```
petaltrace cost summary [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --since | Time window | 7d |
| --group-by | Group by: workflow, provider, model | |
| --json | Output as JSON | false |
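Conceptually, --group-by sums per-run cost over the chosen key and reports each group's share of the total. A minimal sketch of that aggregation, using illustrative numbers rather than PetalTrace internals (the record shape here is assumed, e.g. from runs list --json):

```python
from collections import defaultdict

# Illustrative per-run records; field names are assumptions for this sketch.
runs = [
    {"provider": "anthropic", "cost": 8.90},
    {"provider": "openai", "cost": 3.44},
]

# Sum cost per group key, then compute each group's share.
totals: dict[str, float] = defaultdict(float)
for run in runs:
    totals[run["provider"]] += run["cost"]

grand_total = sum(totals.values())
for provider, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    share = round(100 * cost / grand_total)
    print(f"{provider:<10} ${cost:.2f} ({share}%)")
```

With these numbers the shares come out to 72% and 28%, matching the sample output below.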
Examples:
```
# Summary for last 7 days
petaltrace cost summary

# Summary for last 30 days grouped by provider
petaltrace cost summary --since 30d --group-by provider

# Summary grouped by model as JSON
petaltrace cost summary --group-by model --json
```

Output:

```
Cost Summary (last 7 days)
────────────────────────────────────────
Total Runs:   142
Total Tokens: 1,234,567
Total Cost:   $12.34

By Provider:
  anthropic  $8.90 (72%)
  openai     $3.44 (28%)

By Workflow:
  research-pipeline  $6.50
  email-processor    $3.20
  content-writer     $2.64
```

cost run

Per-run cost breakdown.
```
petaltrace cost run <run-id> [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --by-node | Show breakdown by node | false |
| --json | Output as JSON | false |
Examples:
```
# Run cost summary
petaltrace cost run run-01JK3ABC

# Run cost with node breakdown
petaltrace cost run run-01JK3ABC --by-node
```

diff

Compare two runs.
```
petaltrace diff <base-run-id> <compare-run-id> [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --include-content | Include full text diffs | false |
| --include-inputs | Include input/output data diffs | false |
| --format | Output format: table, json, summary | table |
| -o, --output | Write output to file | |
| --no-cache | Don’t use or store cached diff | false |
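The deltas in the summary output are plain base-versus-compare differences with an explicit sign. A sketch of computing and formatting them (illustrative only, not the diff engine itself):

```python
def fmt_delta(value: float, prefix: str = "", precision: int = 6) -> str:
    """Format a delta with an explicit sign, e.g. +$0.002150."""
    sign = "+" if value >= 0 else "-"
    return f"{sign}{prefix}{abs(value):.{precision}f}"

# Hypothetical per-run summaries for the two runs being compared.
base = {"cost": 0.015230, "tokens": 5000}
compare = {"cost": 0.017380, "tokens": 5342}

print("Cost Delta: ", fmt_delta(compare["cost"] - base["cost"], prefix="$"))  # +$0.002150
print("Token Delta:", f"{compare['tokens'] - base['tokens']:+d}")             # +342
```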
Examples:
```
# Compare two runs (summary)
petaltrace diff run-01JK3ABC run-01JK3XYZ --format summary

# Compare with full text diffs
petaltrace diff run-01JK3ABC run-01JK3XYZ --include-content

# Export diff as JSON
petaltrace diff run-01JK3ABC run-01JK3XYZ --format json -o diff.json
```

Output (summary format):

```
=== Diff Summary ===

Base Run:    run-01JK3ABC
Compare Run: run-01JK3XYZ

Status Match:    Yes
Path Divergence: No
Duration Delta:  +1250ms
Token Delta:     +342
Cost Delta:      +$0.002150
Nodes Changed:   3

=== Cost Breakdown ===

Base Cost:    $0.015230
Compare Cost: $0.017380
Delta:        +$0.002150

By Model:
  claude-sonnet-4-20250514: +$0.002150
```

replay
Replay a captured run.
```
petaltrace replay <run-id> [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --mode | Replay mode: live, mocked, hybrid | live |
| --model | Override LLM model | |
| --provider | Override LLM provider | |
| --temperature | Override sampling temperature | |
| --max-tokens | Override max tokens | |
| --diff | Auto-diff after completion | false |
| --sync | Wait for replay to complete | true |
| --tag | Add tags (format: key=value, repeatable) | |
| --json | Output as JSON | false |
| --petalflow-url | PetalFlow daemon URL | http://localhost:8080 |
Replay Modes:
| Mode | LLM Calls | Tool Calls | Use Case |
|---|---|---|---|
| live | Real | Real | Re-execute with different config |
| mocked | Captured | Captured | Deterministic testing |
| hybrid | Real | Captured | Test prompt changes |
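The mode table above amounts to a small lookup: each mode decides, per call type, whether to re-execute against the real backend or serve the captured response. A sketch of that dispatch (illustrative, not PetalTrace's implementation):

```python
# For each mode: whether LLM and tool calls are "real" (re-executed)
# or "captured" (replayed from the recorded run).
REPLAY_MODES = {
    "live":   {"llm": "real",     "tool": "real"},
    "mocked": {"llm": "captured", "tool": "captured"},
    "hybrid": {"llm": "real",     "tool": "captured"},
}

def call_source(mode: str, call_type: str) -> str:
    """Return 'real' or 'captured' for a given mode and call type."""
    try:
        return REPLAY_MODES[mode][call_type]
    except KeyError:
        raise ValueError(f"unknown mode/call type: {mode}/{call_type}") from None

print(call_source("hybrid", "llm"))   # real
print(call_source("hybrid", "tool"))  # captured
```

Hybrid is the interesting case: prompts hit the live model while tool results stay frozen, which isolates the effect of prompt changes.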
Examples:
```
# Live replay with different model
petaltrace replay run-01JK3ABC --mode live --model claude-3-opus-20240229 --diff

# Deterministic mocked replay
petaltrace replay run-01JK3ABC --mode mocked

# Hybrid replay with temperature override
petaltrace replay run-01JK3ABC --mode hybrid --temperature 0.2 --diff

# Add tags to replay run
petaltrace replay run-01JK3ABC --tag experiment=v2 --tag team=platform
```

replay status

Check status of a replay operation.

```
petaltrace replay status <replay-id>
```

replay diff

Compute diff for a completed replay.

```
petaltrace replay diff <replay-id>
```

export

Export a run to a JSON file.
```
petaltrace export <run-id> [output-file] [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --include-search-text | Include extracted FTS text | false |
Examples:
```
# Export to file
petaltrace export run-01JK3ABC my-run.json

# Export with search text
petaltrace export run-01JK3ABC my-run.json --include-search-text

# Export to stdout (omit filename)
petaltrace export run-01JK3ABC
```

import

Import a run from a JSON file.
```
petaltrace import <file> [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --new-id | Generate new run and span IDs | false |
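A --new-id import has to rewrite every identifier consistently, so spans still point at their run and at their parent spans. A sketch of that remapping over a hypothetical export structure (the real export format, ID scheme, and field names may differ):

```python
import json
import uuid

def remap_ids(run: dict) -> dict:
    """Give a run and its spans fresh IDs, preserving parent links."""
    # Build old-ID -> new-ID mapping first, then apply it everywhere.
    mapping = {run["run_id"]: f"run-{uuid.uuid4().hex[:10].upper()}"}
    for span in run.get("spans", []):
        mapping[span["span_id"]] = f"span-{uuid.uuid4().hex[:10].upper()}"

    run["run_id"] = mapping[run["run_id"]]
    for span in run.get("spans", []):
        span["span_id"] = mapping[span["span_id"]]
        span["run_id"] = run["run_id"]
        if span.get("parent_span_id") in mapping:
            span["parent_span_id"] = mapping[span["parent_span_id"]]
    return run

exported = {
    "run_id": "run-OLD",
    "spans": [
        {"span_id": "a", "run_id": "run-OLD", "parent_span_id": None},
        {"span_id": "b", "run_id": "run-OLD", "parent_span_id": "a"},
    ],
}
remapped = remap_ids(json.loads(json.dumps(exported)))  # deep copy via JSON
assert remapped["spans"][1]["parent_span_id"] == remapped["spans"][0]["span_id"]
```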
Examples:
```
# Import preserving IDs
petaltrace import my-run.json

# Import with new IDs (avoids conflicts)
petaltrace import my-run.json --new-id
```

gc

Run garbage collection to delete old runs.
```
petaltrace gc [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --retain | Override retention period | Config value |
| --dry-run | Preview deletions without executing | false |
| --include-starred | Also delete starred runs | false |
| --verbose | Show each run being deleted | false |
Examples:
```
# Preview what would be deleted
petaltrace gc --dry-run

# Run GC with custom retention
petaltrace gc --retain 14d

# Run GC including starred runs (verbose)
petaltrace gc --include-starred --verbose
```

Output:

```
Garbage collection complete
  Deleted:   42 runs
  Freed:     128 MB
  Remaining: 158 runs
```

stats

Display storage and system statistics.
```
petaltrace stats [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --json | Output as JSON | false |
Example:
```
petaltrace stats
```

Output:

```
PetalTrace Statistics
────────────────────────────────────────
Database:
  Path:        /Users/user/.petaltrace/data.db
  Size:        256 MB
  Total Runs:  200
  Total Spans: 4,521

Data Range:
  Oldest Run: 2026-03-01 08:15:30
  Newest Run: 2026-03-17 18:29:08

Top Workflows (by run count):
  email-processor    85
  research-pipeline  62
  content-writer     53

Top Workflows (by cost):
  research-pipeline  $45.23
  email-processor    $28.90
  content-writer     $18.45
```

mcp

Run the MCP server for agent integration.
```
petaltrace mcp [flags]
```

Flags:
| Flag | Description | Default |
|---|---|---|
| --config | Path to configuration file | petaltrace.yaml |
The MCP server uses stdio transport (stdin/stdout) for the MCP protocol. Logs are written to stderr.
Example:
```
# Run MCP server (typically invoked by MCP client)
petaltrace mcp
```

Claude Code Configuration:

```json
{
  "mcpServers": {
    "petaltrace": {
      "command": "petaltrace",
      "args": ["mcp"],
      "env": {}
    }
  }
}
```