
CLI Reference

Complete reference for all PetalTrace CLI commands.

petaltrace [command] [flags]

Flags:
  --config string   Path to config file (default: petaltrace.yaml)
  -h, --help        Help for petaltrace
  -v, --version     Version for petaltrace

Start the PetalTrace daemon.

petaltrace serve [flags]

Flags:

Flag      Description                 Default
--config  Path to configuration file  petaltrace.yaml

What it starts:

  • HTTP API server on port 8090
  • OTLP/gRPC collector on port 4317
  • OTLP/HTTP collector on port 4318
  • SQLite store with automatic migrations

Example:

# Start with defaults
petaltrace serve
# Start with custom config
petaltrace serve --config /etc/petaltrace/petaltrace.yaml

Output:

time=2026-03-17T18:29:08.672-07:00 level=INFO msg="starting petaltrace" version=0.1.0-dev
time=2026-03-17T18:29:08.675-07:00 level=INFO msg="database initialized" path=/Users/user/.petaltrace/data.db
time=2026-03-17T18:29:08.676-07:00 level=INFO msg="starting API server" addr=0.0.0.0:8090
time=2026-03-17T18:29:08.676-07:00 level=INFO msg="petaltrace ready" api=0.0.0.0:8090 otlp_http=[::]:4318 otlp_grpc=[::]:4317

Shut down gracefully with Ctrl+C.
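The OTLP collectors started by `serve` accept standard OpenTelemetry exporter settings, so most OTel SDKs can be pointed at a local daemon with nothing more than environment variables. A minimal sketch (the variable names are standard OTel SDK conventions, not PetalTrace-specific):

```shell
# Point any OpenTelemetry SDK at the local PetalTrace collectors.
# OTLP over HTTP (port 4318):
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"

# Or, for OTLP over gRPC (port 4317):
# export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
# export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
```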


Manage workflow runs.

List recent runs with optional filtering.

petaltrace runs list [flags]

Flags:

Flag        Description                                   Default
--workflow  Filter by workflow name
--status    Filter by status: running, completed, failed
--since     Show runs since duration (e.g., 24h, 7d)
--limit     Maximum number of runs                        50
--json      Output as JSON                                false

Examples:

# List recent runs
petaltrace runs list
# List completed runs from the last 24 hours
petaltrace runs list --status completed --since 24h
# List runs for a specific workflow as JSON
petaltrace runs list --workflow email-processor --json
# Filter failed runs with limit
petaltrace runs list --status failed --limit 10

Output:

STATUS  WORKFLOW           RUN ID           DURATION  TOKENS  COST     STARTED
✓       email-processor    run-01JK3ABC...  1.2s      5000    $0.0150  2026-03-17 10:15:30
✗       research-pipeline  run-01JK3DEF...  3.4s      2100    $0.0089  2026-03-17 10:14:15
●       content-writer     run-01JK3GHI...  -         1200    $0.0042  2026-03-17 10:16:00
Showing 3 runs

Status icons indicate completed (✓), failed (✗), running (●), and cancelled runs.
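The `--json` flag makes `runs list` scriptable, for example to flag a CI job when recent runs have failed. A sketch under the assumption that the JSON output is an array of run objects with a `status` field; the exact shape is not documented here, so inspect your own output first:

```shell
# Sample stand-in for: runs_json="$(petaltrace runs list --status failed --since 24h --json)"
# The field names below are assumptions about the JSON shape.
runs_json='[{"run_id":"run-01JK3DEF","status":"failed"},{"run_id":"run-01JK3ABC","status":"completed"}]'

# Count failed entries and surface them.
failed=$(( $(printf '%s' "$runs_json" | grep -o '"status":"failed"' | wc -l) ))
echo "failed runs: $failed"
if [ "$failed" -gt 0 ]; then
  echo "recent failures detected" >&2
fi
```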

Show detailed information about a specific run.

petaltrace runs show <run-id> [flags]

Flags:

Flag     Description                                 Default
--spans  Include span tree                           false
--node   Filter spans by node ID (requires --spans)
--json   Output as JSON                              false

Examples:

# Show run details
petaltrace runs show run-01JK3ABC
# Show run with full span tree
petaltrace runs show run-01JK3ABC --spans
# Show spans for a specific node
petaltrace runs show run-01JK3ABC --spans --node researcher_agent
# Output as JSON
petaltrace runs show run-01JK3ABC --json

Delete a run and all its spans.

petaltrace runs delete <run-id> [flags]

Flags:

Flag       Description               Default
-y, --yes  Skip confirmation prompt  false

Example:

# Delete with confirmation
petaltrace runs delete run-01JK3ABC
# Delete without confirmation
petaltrace runs delete run-01JK3ABC -y

Display the full prompt and completion for an LLM node.

petaltrace prompt <run-id> <node-id> [flags]

Flags:

Flag              Description                           Default
-c, --completion  Include the completion/response       false
-f, --format      Output format: text, json, curl, sdk  text

Examples:

# Show prompt only
petaltrace prompt run-01JK3ABC researcher_agent
# Show prompt and completion
petaltrace prompt run-01JK3ABC researcher_agent --completion
# Generate cURL command
petaltrace prompt run-01JK3ABC researcher_agent --format curl
# Generate Python SDK code
petaltrace prompt run-01JK3ABC researcher_agent --format sdk

Output formats:

  • text: Human-readable prompt with message formatting
  • json: Structured JSON with all prompt/completion data
  • curl: Copy-paste ready cURL command for the API call
  • sdk: Python SDK code (Anthropic or OpenAI based on provider)

Cost analysis commands.

Aggregate cost metrics across runs.

petaltrace cost summary [flags]

Flags:

Flag        Description                          Default
--since     Time window                          7d
--group-by  Group by: workflow, provider, model
--json      Output as JSON                       false

Examples:

# Summary for last 7 days
petaltrace cost summary
# Summary for last 30 days grouped by provider
petaltrace cost summary --since 30d --group-by provider
# Summary grouped by model as JSON
petaltrace cost summary --group-by model --json

Output:

Cost Summary (last 7 days)
────────────────────────────────────────
Total Runs: 142
Total Tokens: 1,234,567
Total Cost: $12.34
By Provider:
anthropic $8.90 (72%)
openai $3.44 (28%)
By Workflow:
research-pipeline $6.50
email-processor $3.20
content-writer $2.64
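The `--json` output makes cost summaries easy to feed into a budget alert. A sketch assuming a top-level total-cost field; the field name `total_cost_usd` is a guess at the JSON shape, not documented behavior, so check your actual output:

```shell
# Sample stand-in for: summary_json="$(petaltrace cost summary --since 7d --json)"
# "total_cost_usd" is an assumed field name -- inspect your actual output.
summary_json='{"total_cost_usd":12.34}'
budget=20.00

# Extract the total and compare it to the budget.
cost=$(printf '%s' "$summary_json" | sed -n 's/.*"total_cost_usd":\([0-9.]*\).*/\1/p')
over=$(awk -v c="$cost" -v b="$budget" 'BEGIN { if (c+0 > b+0) print "over"; else print "under" }')
echo "spend \$$cost is $over budget \$$budget"
```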

Show a per-run cost breakdown.

petaltrace cost run <run-id> [flags]

Flags:

Flag       Description             Default
--by-node  Show breakdown by node  false
--json     Output as JSON          false

Examples:

# Run cost summary
petaltrace cost run run-01JK3ABC
# Run cost with node breakdown
petaltrace cost run run-01JK3ABC --by-node

Compare two runs.

petaltrace diff <base-run-id> <compare-run-id> [flags]

Flags:

Flag               Description                          Default
--include-content  Include full text diffs              false
--include-inputs   Include input/output data diffs      false
--format           Output format: table, json, summary  table
-o, --output       Write output to file
--no-cache         Don't use or store cached diff       false

Examples:

# Compare two runs (summary)
petaltrace diff run-01JK3ABC run-01JK3XYZ --format summary
# Compare with full text diffs
petaltrace diff run-01JK3ABC run-01JK3XYZ --include-content
# Export diff as JSON
petaltrace diff run-01JK3ABC run-01JK3XYZ --format json -o diff.json

Output (summary format):

=== Diff Summary ===
Base Run: run-01JK3ABC
Compare Run: run-01JK3XYZ
Status Match: Yes
Path Divergence: No
Duration Delta: +1250ms
Token Delta: +342
Cost Delta: +$0.002150
Nodes Changed: 3
=== Cost Breakdown ===
Base Cost: $0.015230
Compare Cost: $0.017380
Delta: +$0.002150
By Model:
claude-sonnet-4-20250514: +$0.002150

Replay a captured run.

petaltrace replay <run-id> [flags]

Flags:

Flag             Description                               Default
--mode           Replay mode: live, mocked, hybrid         live
--model          Override LLM model
--provider       Override LLM provider
--temperature    Override sampling temperature
--max-tokens     Override max tokens
--diff           Auto-diff after completion                false
--sync           Wait for replay to complete               true
--tag            Add tags (format: key=value, repeatable)
--json           Output as JSON                            false
--petalflow-url  PetalFlow daemon URL                      http://localhost:8080

Replay Modes:

Mode    LLM Calls  Tool Calls  Use Case
live    Real       Real        Re-execute with different config
mocked  Captured   Captured    Deterministic testing
hybrid  Real       Captured    Test prompt changes

Examples:

# Live replay with different model
petaltrace replay run-01JK3ABC --mode live --model claude-3-opus-20240229 --diff
# Deterministic mocked replay
petaltrace replay run-01JK3ABC --mode mocked
# Hybrid replay with temperature override
petaltrace replay run-01JK3ABC --mode hybrid --temperature 0.2 --diff
# Add tags to replay run
petaltrace replay run-01JK3ABC --tag experiment=v2 --tag team=platform

Check status of a replay operation.

petaltrace replay status <replay-id>

Compute diff for a completed replay.

petaltrace replay diff <replay-id>

Export a run to a JSON file.

petaltrace export <run-id> [output-file] [flags]

Flags:

Flag                   Description                 Default
--include-search-text  Include extracted FTS text  false

Examples:

# Export to file
petaltrace export run-01JK3ABC my-run.json
# Export with search text
petaltrace export run-01JK3ABC my-run.json --include-search-text
# Export to stdout (omit filename)
petaltrace export run-01JK3ABC

Import a run from a JSON file.

petaltrace import <file> [flags]

Flags:

Flag      Description                    Default
--new-id  Generate new run and span IDs  false

Examples:

# Import preserving IDs
petaltrace import my-run.json
# Import with new IDs (avoids conflicts)
petaltrace import my-run.json --new-id

Run garbage collection to delete old runs.

petaltrace gc [flags]

Flags:

Flag               Description                          Default
--retain           Override retention period            Config value
--dry-run          Preview deletions without executing  false
--include-starred  Also delete starred runs             false
--verbose          Show each run being deleted          false

Examples:

# Preview what would be deleted
petaltrace gc --dry-run
# Run GC with custom retention
petaltrace gc --retain 14d
# Run GC including starred runs (verbose)
petaltrace gc --include-starred --verbose

Output:

Garbage collection complete
Deleted: 42 runs
Freed: 128 MB
Remaining: 158 runs

Display storage and system statistics.

petaltrace stats [flags]

Flags:

Flag    Description     Default
--json  Output as JSON  false

Example:

petaltrace stats

Output:

PetalTrace Statistics
────────────────────────────────────────
Database:
Path: /Users/user/.petaltrace/data.db
Size: 256 MB
Total Runs: 200
Total Spans: 4,521
Data Range:
Oldest Run: 2026-03-01 08:15:30
Newest Run: 2026-03-17 18:29:08
Top Workflows (by run count):
email-processor 85
research-pipeline 62
content-writer 53
Top Workflows (by cost):
research-pipeline $45.23
email-processor $28.90
content-writer $18.45

Run the MCP server for agent integration.

petaltrace mcp [flags]

Flags:

Flag      Description                 Default
--config  Path to configuration file  petaltrace.yaml

The MCP server uses stdio transport (stdin/stdout) for the MCP protocol. Logs are written to stderr.

Example:

# Run MCP server (typically invoked by MCP client)
petaltrace mcp

Claude Code Configuration:

{
  "mcpServers": {
    "petaltrace": {
      "command": "petaltrace",
      "args": ["mcp"],
      "env": {}
    }
  }
}