The Cortex CLI enables powerful workflows for managing AI agent memory, knowledge bases, and state. This guide covers practical patterns for development, automation, and production use.
Install Cortex using Go:
```bash
go install github.com/petal-labs/cortex/cmd/cortex@latest
```

Verify the installation:
```bash
cortex --help
```

Enable shell completion for a better experience:
```bash
# Add to ~/.bashrc
eval "$(cortex completion bash)"

# Add to ~/.zshrc
eval "$(cortex completion zsh)"

# Add to ~/.config/fish/config.fish
cortex completion fish | source
```

Start the Cortex server:
```bash
# Background mode for development
cortex serve &
```

Create a knowledge collection:
```bash
cortex knowledge create-collection --name docs --namespace myproject
```

Ingest your first document:
```bash
cortex knowledge ingest \
  --namespace myproject \
  --collection docs \
  --title "README" \
  --file README.md
```

Search your knowledge base:
```bash
cortex knowledge search "how to install" --namespace myproject
```

A common use case is creating a searchable knowledge base from project documentation.
```bash
# Create a collection for your project
cortex knowledge create-collection \
  --namespace my-app \
  --name documentation \
  --description "Project documentation and guides"

# Ingest individual files with semantic chunking
cortex knowledge ingest \
  --namespace my-app \
  --collection documentation \
  --title "Architecture Overview" \
  --file docs/architecture.md \
  --chunk-strategy semantic
```
```bash
# Bulk ingest an entire directory
cortex knowledge ingest-dir \
  --namespace my-app \
  --collection documentation \
  --dir ./docs \
  --pattern "*.md" \
  --recursive

# Create a separate collection for code
cortex knowledge create-collection \
  --namespace my-app \
  --name codebase \
  --description "Source code for context-aware assistance"
```
```bash
# Ingest Go files
cortex knowledge ingest-dir \
  --namespace my-app \
  --collection codebase \
  --dir ./src \
  --pattern "*.go" \
  --recursive
```
```bash
# Ingest with metadata
cortex knowledge ingest \
  --namespace my-app \
  --collection codebase \
  --title "Main Package" \
  --file cmd/main.go \
  --metadata '{"package": "main", "type": "entrypoint"}'
```

```bash
# Semantic search across documentation
cortex knowledge search "authentication flow" \
  --namespace my-app \
  --collection documentation \
  --mode hybrid \
  --limit 5
```
```bash
# Search with JSON output for scripting
cortex knowledge search "error handling" \
  --namespace my-app \
  --json | jq '.results[].content'
```
```bash
# View collection statistics
cortex knowledge stats --namespace my-app
```

```bash
#!/bin/bash
# rebuild-knowledge-base.sh - Refresh knowledge base from latest docs

NAMESPACE="my-app"

echo "Clearing existing documentation..."
# Note: You may want to implement a refresh strategy instead

echo "Ingesting documentation..."
cortex knowledge ingest-dir \
  --namespace "$NAMESPACE" \
  --collection documentation \
  --dir ./docs \
  --pattern "*.md" \
  --recursive

echo "Ingesting source code..."
cortex knowledge ingest-dir \
  --namespace "$NAMESPACE" \
  --collection codebase \
  --dir ./src \
  --pattern "*.go" \
  --recursive

echo "Knowledge base updated!"
cortex knowledge stats --namespace "$NAMESPACE"
```

Manage persistent conversation history for AI agents across sessions.
```bash
# Start a new conversation thread
THREAD_ID="support-$(date +%s)"

# Record user message
cortex conversation append \
  --namespace chat-agent \
  --thread-id "$THREAD_ID" \
  --role user \
  --content "I need help configuring the API"

# Record assistant response
cortex conversation append \
  --namespace chat-agent \
  --thread-id "$THREAD_ID" \
  --role assistant \
  --content "I'd be happy to help! What API are you trying to configure?"

# Continue the conversation...
cortex conversation append \
  --namespace chat-agent \
  --thread-id "$THREAD_ID" \
  --role user \
  --content "The authentication API"
```

```bash
# Get full history
cortex conversation history \
  --namespace chat-agent \
  --thread-id "$THREAD_ID"
```
```bash
# Get recent messages only
cortex conversation history \
  --namespace chat-agent \
  --thread-id "$THREAD_ID" \
  --limit 10
```
```bash
# Export as JSON for agent context
cortex conversation history \
  --namespace chat-agent \
  --thread-id "$THREAD_ID" \
  --json | jq '.messages'
```

```bash
# Find conversations mentioning specific topics
cortex conversation search "authentication error" \
  --namespace chat-agent \
  --limit 10
```
```bash
# List all active threads
cortex conversation list --namespace chat-agent
```

When conversations get long, compress them while preserving context:
```bash
# Summarize, keeping the last 5 messages intact
cortex conversation summarize \
  --namespace chat-agent \
  --thread-id "$THREAD_ID" \
  --keep-recent 5
```
The summary is stored with the thread and is automatically included when retrieving history.

```bash
#!/bin/bash
# agent-memory.sh - Helper for agent conversation memory
# Usage: agent-memory.sh [thread-id] [action] [content]

NAMESPACE="my-agent"
THREAD_ID="${1:-default}"
ACTION="${2:-history}"

case "$ACTION" in
  history)
    cortex conversation history \
      --namespace "$NAMESPACE" \
      --thread-id "$THREAD_ID" \
      --json
    ;;
  add-user)
    cortex conversation append \
      --namespace "$NAMESPACE" \
      --thread-id "$THREAD_ID" \
      --role user \
      --content "$3"
    ;;
  add-assistant)
    cortex conversation append \
      --namespace "$NAMESPACE" \
      --thread-id "$THREAD_ID" \
      --role assistant \
      --content "$3"
    ;;
  summarize)
    cortex conversation summarize \
      --namespace "$NAMESPACE" \
      --thread-id "$THREAD_ID" \
      --keep-recent 10
    ;;
  clear)
    cortex conversation clear \
      --namespace "$NAMESPACE" \
      --thread-id "$THREAD_ID"
    ;;
esac
```

Use workflow context to share state across agent tasks and runs.
```bash
# Store simple values
cortex context set "config/model" "gpt-4o" --namespace workflow

# Store JSON values
cortex context set "user/preferences" \
  '{"theme": "dark", "language": "en"}' \
  --namespace workflow
```
```bash
# Retrieve values
cortex context get "config/model" --namespace workflow
cortex context get "user/preferences" --namespace workflow --json
```

Isolate state to specific workflow runs:
```bash
RUN_ID="run-$(uuidgen)"

# Store run-specific state
cortex context set "progress" "50" \
  --namespace workflow \
  --run-id "$RUN_ID"

cortex context set "results/step1" \
  '{"status": "complete", "output": "..."}' \
  --namespace workflow \
  --run-id "$RUN_ID"

# List all keys for this run
cortex context list \
  --namespace workflow \
  --run-id "$RUN_ID"

# Get run-specific value
cortex context get "progress" \
  --namespace workflow \
  --run-id "$RUN_ID"
```

```bash
# Cache with 1-hour expiration
cortex context set "cache/api-response" \
  '{"data": [...]}' \
  --namespace workflow \
  --ttl 1h
```
```bash
# Session data with 24-hour expiration
cortex context set "session/user-123" \
  '{"authenticated": true}' \
  --namespace workflow \
  --ttl 24h
```

```bash
# Append to an array
cortex context set "log/events" '["event1"]' --namespace workflow

cortex context merge "log/events" '["event2", "event3"]' \
  --namespace workflow \
  --strategy append
# Result: ["event1", "event2", "event3"]
```
```bash
# Deep merge objects
cortex context set "config" '{"a": 1, "b": {"x": 1}}' --namespace workflow

cortex context merge "config" '{"b": {"y": 2}, "c": 3}' \
  --namespace workflow \
  --strategy merge_deep
# Result: {"a": 1, "b": {"x": 1, "y": 2}, "c": 3}
```
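If you want to sanity-check what these strategies will produce before writing to the context store, jq reproduces both locally: array addition (`+`) matches `--strategy append`, and the recursive merge operator (`*`) matches `--strategy merge_deep`. This assumes jq is installed; the inputs mirror the examples above.

```bash
# append: array addition
echo '["event1"]' | jq -c '. + ["event2", "event3"]'
# → ["event1","event2","event3"]

# merge_deep: recursive object merge
echo '{"a": 1, "b": {"x": 1}}' | jq -c '. * {"b": {"y": 2}, "c": 3}'
# → {"a":1,"b":{"x":1,"y":2},"c":3}
```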
```bash
# Numeric accumulation
cortex context set "metrics/count" "10" --namespace workflow
cortex context merge "metrics/count" "5" --namespace workflow --strategy sum
# Result: 15
```

```bash
# View change history for a key
cortex context history "config/model" --namespace workflow
```
This is useful for debugging and auditing state changes.

```bash
#!/bin/bash
# workflow-state.sh - Manage workflow state
# Usage: workflow-state.sh <run-id>

NAMESPACE="pipeline"
RUN_ID="${1}"

# Initialize workflow
init_workflow() {
  cortex context set "status" "running" \
    --namespace "$NAMESPACE" \
    --run-id "$RUN_ID"

  cortex context set "started_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --namespace "$NAMESPACE" \
    --run-id "$RUN_ID"
}

# Update progress
update_progress() {
  local step="$1"
  local status="$2"

  cortex context set "steps/$step" \
    "{\"status\": \"$status\", \"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}" \
    --namespace "$NAMESPACE" \
    --run-id "$RUN_ID"
}

# Complete workflow
complete_workflow() {
  cortex context set "status" "completed" \
    --namespace "$NAMESPACE" \
    --run-id "$RUN_ID"

  cortex context set "completed_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --namespace "$NAMESPACE" \
    --run-id "$RUN_ID"
}

# Get workflow state
get_state() {
  cortex context list --namespace "$NAMESPACE" --run-id "$RUN_ID" --json
}
```

Build and query knowledge graphs with automatic entity extraction.
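One caveat with `update_progress`: it builds the step JSON by string interpolation, which breaks if a status value ever contains quotes or backslashes. Building the payload with jq is safer. This is a sketch assuming jq is available; the `steps/extract` key is illustrative.

```bash
# Build the step payload with jq instead of string interpolation
status="complete"
step_json=$(jq -nc --arg status "$status" \
  --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  '{status: $status, timestamp: $ts}')
echo "$step_json"

# The result can then be stored as before, e.g.:
# cortex context set "steps/extract" "$step_json" \
#   --namespace "$NAMESPACE" --run-id "$RUN_ID"
```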
```bash
# Create entities
cortex entity create \
  --namespace research \
  --name "OpenAI" \
  --type organization \
  --description "AI research company" \
  --metadata '{"founded": "2015", "headquarters": "San Francisco"}'

cortex entity create \
  --namespace research \
  --name "Sam Altman" \
  --type person \
  --description "CEO of OpenAI"
```
```bash
# Add aliases
cortex entity add-alias <entity-id> --alias "Samuel Altman"
```

```bash
# Link entities
cortex entity add-relationship <sam-id> <openai-id> \
  --namespace research \
  --type "works_at" \
  --metadata '{"role": "CEO", "since": "2019"}'
```
```bash
# Query relationships
cortex entity relationships <openai-id> \
  --namespace research \
  --direction incoming
```

```bash
# Semantic search
cortex entity search "artificial intelligence companies" \
  --namespace research \
  --limit 10
```
```bash
# Filter by type
cortex entity list \
  --namespace research \
  --type organization \
  --sort mention_count
```

Entities are automatically extracted from ingested content:
```bash
# Check extraction queue status
cortex entity queue-stats --namespace research
```

Entities are extracted from:
- Conversation messages
- Knowledge chunks
```bash
# View extracted entities
cortex entity list --namespace research --sort created_at
```

```bash
# When duplicates are detected, merge them
cortex entity merge <keep-id> <remove-id> --namespace research
# The removed entity's relationships and mentions are transferred
```

All commands support `--json` for structured output:
```bash
# Parse search results
results=$(cortex knowledge search "query" --namespace app --json)
echo "$results" | jq '.results[] | {title: .document_title, score: .score}'
```
```bash
# Get conversation as context
context=$(cortex conversation history --thread-id chat-1 --namespace agent --json)
echo "$context" | jq '.messages | map({role, content})'
```
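The `.messages | map({role, content})` filter can be checked locally against a sample payload before wiring it into an agent. The field names below mirror the examples in this guide; the exact shape of the exported history may differ, so treat this as an illustration of the jq filter rather than the canonical schema.

```bash
# Sample history payload (field names assumed from the examples above)
history='{"messages": [
  {"id": "m1", "role": "user", "content": "I need help configuring the API"},
  {"id": "m2", "role": "assistant", "content": "Which API are you configuring?"}
]}'

# Keep only the fields an LLM request needs
echo "$history" | jq -c '.messages | map({role, content})'
# → [{"role":"user","content":"I need help configuring the API"},{"role":"assistant","content":"Which API are you configuring?"}]
```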
```bash
# Extract entity data
cortex entity list --namespace research --json | jq '.entities[].name'
```

```bash
#!/bin/bash
# batch-ingest.sh - Parallel document ingestion

NAMESPACE="docs"
COLLECTION="manuals"
# Export so the variables are visible inside the xargs subshells
export NAMESPACE COLLECTION

# Process files in parallel (4 at a time)
find ./manuals -name "*.pdf" | \
  xargs -P 4 -I {} sh -c '
    filename=$(basename "{}" .pdf)
    cortex knowledge ingest \
      --namespace "$NAMESPACE" \
      --collection "$COLLECTION" \
      --title "$filename" \
      --file "{}"
  '
```

```bash
#!/bin/bash
set -e

# Check if Cortex is available
if ! cortex knowledge stats --namespace test > /dev/null 2>&1; then
  echo "Error: Cortex is not available"
  exit 1
fi

# Handle ingestion errors
if ! cortex knowledge ingest \
  --namespace app \
  --collection docs \
  --title "Document" \
  --file doc.md 2>&1; then
  echo "Ingestion failed, retrying..."
  sleep 2
  cortex knowledge ingest \
    --namespace app \
    --collection docs \
    --title "Document" \
    --file doc.md
fi
```

```bash
# Development
cortex serve --config ~/.cortex/dev.yaml &
```
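The single manual retry shown earlier can be factored into a reusable helper that retries any command with a fixed delay. This is an illustrative sketch, not part of the CLI: the function name, attempt count, and delay are all choices you would tune.

```bash
# retry: run a command up to N times, sleeping between attempts.
# Usage: retry <attempts> <delay-seconds> <command> [args...]
retry() {
  local attempts=$1; shift
  local delay=$1; shift
  local n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "retry: giving up after $attempts attempts" >&2
      return 1
    fi
    sleep "$delay"
    n=$((n + 1))
  done
}

# Hypothetical use with the ingestion command from above:
# retry 3 2 cortex knowledge ingest --namespace app --collection docs \
#   --title "Document" --file doc.md
```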
```bash
# Production with restricted namespace
cortex serve \
  --config /etc/cortex/prod.yaml \
  --namespace production \
  --transport sse \
  --port 9810
```

```yaml
name: Index Documentation

on:
  release:
    types: [published]

jobs:
  index-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Cortex
        run: go install github.com/petal-labs/cortex/cmd/cortex@latest

      - name: Index Documentation
        env:
          CORTEX_DATABASE_URL: ${{ secrets.CORTEX_DATABASE_URL }}
        run: |
          VERSION="${{ github.event.release.tag_name }}"

          # Create versioned collection
          cortex knowledge create-collection \
            --namespace docs \
            --name "v${VERSION}" \
            --description "Documentation for version ${VERSION}"

          # Ingest all markdown files
          cortex knowledge ingest-dir \
            --namespace docs \
            --collection "v${VERSION}" \
            --dir ./docs \
            --pattern "*.md" \
            --recursive
```

```yaml
sync-knowledge:
  stage: deploy
  script:
    - go install github.com/petal-labs/cortex/cmd/cortex@latest
    - |
      cortex knowledge ingest-dir \
        --namespace $CI_PROJECT_NAME \
        --collection documentation \
        --dir ./docs \
        --pattern "*.md" \
        --recursive
  only:
    - main
```

```bash
#!/bin/bash
# Test that documentation can be parsed
for file in $(git diff --cached --name-only -- '*.md'); do
  if ! cortex knowledge ingest \
    --namespace test \
    --collection validation \
    --title "$(basename "$file")" \
    --file "$file" \
    --dry-run 2>/dev/null; then
    echo "Error: $file failed validation"
    exit 1
  fi
done
```

```bash
#!/bin/bash
# maintenance.sh - Weekly maintenance script

# Run garbage collection
cortex gc --all

# Clean expired TTL values
cortex context cleanup --expired-ttl

# Clean old run-scoped context (older than 7 days)
cortex context cleanup --old-runs

# Show namespace statistics
cortex namespace stats --namespace production
```

```bash
# Create database backup (SQLite)
cortex backup --output "backup-$(date +%Y%m%d).db"

# Export specific namespace to JSON
cortex export \
  --namespace production \
  --output "export-$(date +%Y%m%d).json"
```

```bash
# Check server health (when running with SSE)
curl http://localhost:9811/health

# Get Prometheus metrics
curl http://localhost:9811/metrics

# Monitor extraction queue
watch -n 5 'cortex entity queue-stats --namespace production'
```

CLI Reference
Complete command reference. CLI Reference →
MCP Integration
Use Cortex with AI agents. MCP Tools →
Knowledge Ingestion
Deep dive into ingestion strategies. Ingestion Guide →
Examples
See complete working examples. Examples →