
CLI Workflows

The Cortex CLI enables powerful workflows for managing AI agent memory, knowledge bases, and state. This guide covers practical patterns for development, automation, and production use.

Install Cortex using Go:

go install github.com/petal-labs/cortex/cmd/cortex@latest

Verify the installation:

cortex --help

Enable shell completion for a better experience:

# Add to ~/.bashrc
eval "$(cortex completion bash)"

Quick start:

  1. Start the Cortex server

     # Background mode for development
     cortex serve &

  2. Create a knowledge collection

     cortex knowledge create-collection --name docs --namespace myproject

  3. Ingest your first document

     cortex knowledge ingest \
       --namespace myproject \
       --collection docs \
       --title "README" \
       --file README.md

  4. Search your knowledge base

     cortex knowledge search "how to install" --namespace myproject
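Because `cortex serve &` returns before the server is ready, scripted setups can race the startup. A minimal pure-bash readiness wait helps; the probe command shown in the comment is just an example, and any cheap cortex call will do:

```shell
#!/bin/bash
# wait_for_ready ATTEMPTS CMD... - poll CMD once per second until it
# succeeds or ATTEMPTS runs out; returns 0 on success, 1 otherwise.
wait_for_ready() {
  local attempts="$1"; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@" > /dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example probe:
# cortex serve &
# wait_for_ready 10 cortex knowledge stats --namespace myproject
```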

Workflow 1: Building a Project Knowledge Base


A common use case is creating a searchable knowledge base from project documentation.

# Create a collection for your project
cortex knowledge create-collection \
--namespace my-app \
--name documentation \
--description "Project documentation and guides"
# Ingest individual files with semantic chunking
cortex knowledge ingest \
--namespace my-app \
--collection documentation \
--title "Architecture Overview" \
--file docs/architecture.md \
--chunk-strategy semantic
# Bulk ingest an entire directory
cortex knowledge ingest-dir \
--namespace my-app \
--collection documentation \
--dir ./docs \
--pattern "*.md" \
--recursive
# Create a separate collection for code
cortex knowledge create-collection \
--namespace my-app \
--name codebase \
--description "Source code for context-aware assistance"
# Ingest Go files
cortex knowledge ingest-dir \
--namespace my-app \
--collection codebase \
--dir ./src \
--pattern "*.go" \
--recursive
# Ingest with metadata
cortex knowledge ingest \
--namespace my-app \
--collection codebase \
--title "Main Package" \
--file cmd/main.go \
--metadata '{"package": "main", "type": "entrypoint"}'
# Semantic search across documentation
cortex knowledge search "authentication flow" \
--namespace my-app \
--collection documentation \
--mode hybrid \
--limit 5
# Search with JSON output for scripting
cortex knowledge search "error handling" \
--namespace my-app \
--json | jq '.results[].content'
# View collection statistics
cortex knowledge stats --namespace my-app

Tie the steps together in a refresh script:

#!/bin/bash
# rebuild-knowledge-base.sh - Refresh knowledge base from latest docs
NAMESPACE="my-app"
echo "Clearing existing documentation..."
# Note: You may want to implement a refresh strategy instead
echo "Ingesting documentation..."
cortex knowledge ingest-dir \
--namespace "$NAMESPACE" \
--collection documentation \
--dir ./docs \
--pattern "*.md" \
--recursive
echo "Ingesting source code..."
cortex knowledge ingest-dir \
--namespace "$NAMESPACE" \
--collection codebase \
--dir ./src \
--pattern "*.go" \
--recursive
echo "Knowledge base updated!"
cortex knowledge stats --namespace "$NAMESPACE"

Workflow 2: Conversation Memory for Chat Agents


Manage persistent conversation history for AI agents across sessions.

# Start a new conversation thread
THREAD_ID="support-$(date +%s)"
# Record user message
cortex conversation append \
--namespace chat-agent \
--thread-id "$THREAD_ID" \
--role user \
--content "I need help configuring the API"
# Record assistant response
cortex conversation append \
--namespace chat-agent \
--thread-id "$THREAD_ID" \
--role assistant \
--content "I'd be happy to help! What API are you trying to configure?"
# Continue the conversation...
cortex conversation append \
--namespace chat-agent \
--thread-id "$THREAD_ID" \
--role user \
--content "The authentication API"
# Get full history
cortex conversation history \
--namespace chat-agent \
--thread-id "$THREAD_ID"
# Get recent messages only
cortex conversation history \
--namespace chat-agent \
--thread-id "$THREAD_ID" \
--limit 10
# Export as JSON for agent context
cortex conversation history \
--namespace chat-agent \
--thread-id "$THREAD_ID" \
--json | jq '.messages'
# Find conversations mentioning specific topics
cortex conversation search "authentication error" \
--namespace chat-agent \
--limit 10
# List all active threads
cortex conversation list --namespace chat-agent

When conversations get long, compress them while preserving context:

# Summarize, keeping the last 5 messages intact
cortex conversation summarize \
--namespace chat-agent \
--thread-id "$THREAD_ID" \
--keep-recent 5
# The summary is stored with the thread and automatically
# included when retrieving history

A small wrapper script makes these operations easy to call from agent code:

#!/bin/bash
# agent-memory.sh - Helper for agent conversation memory
NAMESPACE="my-agent"
THREAD_ID="${1:-default}"
ACTION="${2:-history}"
case "$ACTION" in
history)
cortex conversation history \
--namespace "$NAMESPACE" \
--thread-id "$THREAD_ID" \
--json
;;
add-user)
cortex conversation append \
--namespace "$NAMESPACE" \
--thread-id "$THREAD_ID" \
--role user \
--content "$3"
;;
add-assistant)
cortex conversation append \
--namespace "$NAMESPACE" \
--thread-id "$THREAD_ID" \
--role assistant \
--content "$3"
;;
summarize)
cortex conversation summarize \
--namespace "$NAMESPACE" \
--thread-id "$THREAD_ID" \
--keep-recent 10
;;
clear)
cortex conversation clear \
--namespace "$NAMESPACE" \
--thread-id "$THREAD_ID"
;;
esac

Workflow 3: Workflow Context for State Management


Use workflow context to share state across agent tasks and runs.

# Store simple values
cortex context set "config/model" "gpt-4o" --namespace workflow
# Store JSON values
cortex context set "user/preferences" \
'{"theme": "dark", "language": "en"}' \
--namespace workflow
# Retrieve values
cortex context get "config/model" --namespace workflow
cortex context get "user/preferences" --namespace workflow --json

Isolate state to specific workflow runs:

RUN_ID="run-$(uuidgen)"
# Store run-specific state
cortex context set "progress" "50" \
--namespace workflow \
--run-id "$RUN_ID"
cortex context set "results/step1" \
'{"status": "complete", "output": "..."}' \
--namespace workflow \
--run-id "$RUN_ID"
# List all keys for this run
cortex context list \
--namespace workflow \
--run-id "$RUN_ID"
# Get run-specific value
cortex context get "progress" \
--namespace workflow \
--run-id "$RUN_ID"
# Cache with 1-hour expiration
cortex context set "cache/api-response" \
'{"data": [...]}' \
--namespace workflow \
--ttl 1h
# Session data with 24-hour expiration
cortex context set "session/user-123" \
'{"authenticated": true}' \
--namespace workflow \
--ttl 24h
# Append to an array
cortex context set "log/events" '["event1"]' --namespace workflow
cortex context merge "log/events" '["event2", "event3"]' \
--namespace workflow \
--strategy append
# Result: ["event1", "event2", "event3"]
# Deep merge objects
cortex context set "config" '{"a": 1, "b": {"x": 1}}' --namespace workflow
cortex context merge "config" '{"b": {"y": 2}, "c": 3}' \
--namespace workflow \
--strategy merge_deep
# Result: {"a": 1, "b": {"x": 1, "y": 2}, "c": 3}
# Numeric accumulation
cortex context set "metrics/count" "10" --namespace workflow
cortex context merge "metrics/count" "5" --namespace workflow --strategy sum
# Result: 15
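You can preview the expected results of the append and deep-merge strategies locally with jq, which the examples in this guide already use: `+` concatenates arrays and `*` performs the same kind of recursive object merge. This only illustrates the strategy semantics; cortex performs the actual merge server-side.

```shell
# append: array concatenation
echo '["event1"]' | jq -c '. + ["event2", "event3"]'
# ["event1","event2","event3"]

# merge_deep: recursive object merge
echo '{"a": 1, "b": {"x": 1}}' | jq -c '. * {"b": {"y": 2}, "c": 3}'
# {"a":1,"b":{"x":1,"y":2},"c":3}
```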
# View change history for a key
cortex context history "config/model" --namespace workflow
# Useful for debugging and auditing state changes

A helper script for pipeline state management:

#!/bin/bash
# workflow-state.sh - Manage workflow state
NAMESPACE="pipeline"
RUN_ID="${1}"
# Initialize workflow
init_workflow() {
cortex context set "status" "running" \
--namespace "$NAMESPACE" \
--run-id "$RUN_ID"
cortex context set "started_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
--namespace "$NAMESPACE" \
--run-id "$RUN_ID"
}
# Update progress
update_progress() {
local step="$1"
local status="$2"
cortex context set "steps/$step" \
"{\"status\": \"$status\", \"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}" \
--namespace "$NAMESPACE" \
--run-id "$RUN_ID"
}
# Complete workflow
complete_workflow() {
cortex context set "status" "completed" \
--namespace "$NAMESPACE" \
--run-id "$RUN_ID"
cortex context set "completed_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
--namespace "$NAMESPACE" \
--run-id "$RUN_ID"
}
# Get workflow state
get_state() {
cortex context list --namespace "$NAMESPACE" --run-id "$RUN_ID" --json
}
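One caveat with `update_progress` above: interpolating shell variables directly into a JSON string breaks if a value ever contains quotes. Where jq is available, `jq -n --arg` escapes values correctly; this is a general shell technique, not part of the cortex CLI:

```shell
# Build the step payload safely even when values contain quotes
status='needs "review"'
payload=$(jq -n \
  --arg status "$status" \
  --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  '{status: $status, timestamp: $ts}')

# $payload is valid JSON despite the embedded double quotes
echo "$payload" | jq -r '.status'
# needs "review"
```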

Workflow 4: Entity Memory for Knowledge Graphs


Build and query knowledge graphs with automatic entity extraction.

# Create entities
cortex entity create \
--namespace research \
--name "OpenAI" \
--type organization \
--description "AI research company" \
--metadata '{"founded": "2015", "headquarters": "San Francisco"}'
cortex entity create \
--namespace research \
--name "Sam Altman" \
--type person \
--description "CEO of OpenAI"
# Add aliases
cortex entity add-alias <entity-id> --alias "Samuel Altman"
# Link entities
cortex entity add-relationship <sam-id> <openai-id> \
--namespace research \
--type "works_at" \
--metadata '{"role": "CEO", "since": "2019"}'
# Query relationships
cortex entity relationships <openai-id> \
--namespace research \
--direction incoming
# Semantic search
cortex entity search "artificial intelligence companies" \
--namespace research \
--limit 10
# Filter by type
cortex entity list \
--namespace research \
--type organization \
--sort mention_count

Entities are automatically extracted from ingested content:

# Check extraction queue status
cortex entity queue-stats --namespace research
# Entities are extracted from:
# - Conversation messages
# - Knowledge chunks
# View extracted entities
cortex entity list --namespace research --sort created_at
# When duplicates are detected, merge them
cortex entity merge <keep-id> <remove-id> --namespace research
# The removed entity's relationships and mentions are transferred

All commands support --json for structured output:

# Parse search results
results=$(cortex knowledge search "query" --namespace app --json)
echo "$results" | jq '.results[] | {title: .document_title, score: .score}'
# Get conversation as context
context=$(cortex conversation history --thread-id chat-1 --namespace agent --json)
echo "$context" | jq '.messages | map({role, content})'
# Extract entity data
cortex entity list --namespace research --json | jq '.entities[].name'
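Field names such as `document_title` depend on the cortex version you run, so it can help to dry-run a jq filter against a stand-in payload before wiring it to live output:

```shell
# Stand-in for `cortex knowledge search ... --json` output
results='{"results": [{"document_title": "README", "score": 0.92}, {"document_title": "Guide", "score": 0.87}]}'

# Shape results into a compact title/score array
echo "$results" | jq -c '[.results[] | {title: .document_title, score}]'
# [{"title":"README","score":0.92},{"title":"Guide","score":0.87}]
```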

Parallel ingestion speeds up large batches:

#!/bin/bash
# batch-ingest.sh - Parallel document ingestion
# Export so the xargs subshells inherit these variables
export NAMESPACE="docs"
export COLLECTION="manuals"
# Process files in parallel (4 at a time); pass each filename as a
# positional argument instead of splicing {} into the script
find ./manuals -name "*.pdf" -print0 | \
xargs -0 -P 4 -I {} sh -c '
filename=$(basename "$1" .pdf)
cortex knowledge ingest \
--namespace "$NAMESPACE" \
--collection "$COLLECTION" \
--title "$filename" \
--file "$1"
' _ {}

Handle failures defensively in automation:

#!/bin/bash
set -e
# Check if Cortex is available
if ! cortex knowledge stats --namespace test > /dev/null 2>&1; then
echo "Error: Cortex is not available"
exit 1
fi
# Handle ingestion errors
if ! cortex knowledge ingest \
--namespace app \
--collection docs \
--title "Document" \
--file doc.md 2>&1; then
echo "Ingestion failed, retrying..."
sleep 2
cortex knowledge ingest \
--namespace app \
--collection docs \
--title "Document" \
--file doc.md
fi
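The single fixed retry above can be generalized. Here is a small pure-bash helper with exponential backoff; the cortex invocation in the trailing comment is illustrative:

```shell
#!/bin/bash
# retry MAX CMD... - run CMD up to MAX times, doubling the delay
# between attempts; returns 0 on the first success, 1 if all fail.
retry() {
  local max="$1"; shift
  local delay=1 attempt
  for ((attempt = 1; attempt <= max; attempt++)); do
    if "$@"; then
      return 0
    fi
    if ((attempt < max)); then
      sleep "$delay"
      delay=$((delay * 2))
    fi
  done
  return 1
}

# Example:
# retry 3 cortex knowledge ingest --namespace app --collection docs \
#   --title "Document" --file doc.md
```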
Server configuration differs between environments:

# Development
cortex serve --config ~/.cortex/dev.yaml &
# Production with restricted namespace
cortex serve \
--config /etc/cortex/prod.yaml \
--namespace production \
--transport sse \
--port 9810

GitHub Actions: Index Documentation on Release

name: Index Documentation

on:
  release:
    types: [published]

jobs:
  index-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Cortex
        run: go install github.com/petal-labs/cortex/cmd/cortex@latest

      - name: Index Documentation
        env:
          CORTEX_DATABASE_URL: ${{ secrets.CORTEX_DATABASE_URL }}
        run: |
          VERSION="${{ github.event.release.tag_name }}"

          # Create versioned collection
          cortex knowledge create-collection \
            --namespace docs \
            --name "v${VERSION}" \
            --description "Documentation for version ${VERSION}"

          # Ingest all markdown files
          cortex knowledge ingest-dir \
            --namespace docs \
            --collection "v${VERSION}" \
            --dir ./docs \
            --pattern "*.md" \
            --recursive

The GitLab CI equivalent (.gitlab-ci.yml):

sync-knowledge:
  stage: deploy
  script:
    - go install github.com/petal-labs/cortex/cmd/cortex@latest
    - |
      cortex knowledge ingest-dir \
        --namespace "$CI_PROJECT_NAME" \
        --collection documentation \
        --dir ./docs \
        --pattern "*.md" \
        --recursive
  only:
    - main

Validate changed docs in a pre-commit hook (.git/hooks/pre-commit):
#!/bin/bash
# Test that staged documentation can be parsed; read filenames line by
# line so paths containing spaces survive
while IFS= read -r file; do
if ! cortex knowledge ingest \
--namespace test \
--collection validation \
--title "$(basename "$file")" \
--file "$file" \
--dry-run 2>/dev/null; then
echo "Error: $file failed validation"
exit 1
fi
done < <(git diff --cached --name-only -- '*.md')

Schedule routine maintenance:

#!/bin/bash
# maintenance.sh - Weekly maintenance script
# Run garbage collection
cortex gc --all
# Clean expired TTL values
cortex context cleanup --expired-ttl
# Clean old run-scoped context (older than 7 days)
cortex context cleanup --old-runs
# Show namespace statistics
cortex namespace stats --namespace production

Back up data regularly:

# Create database backup (SQLite)
cortex backup --output "backup-$(date +%Y%m%d).db"
# Export specific namespace to JSON
cortex export \
--namespace production \
--output "export-$(date +%Y%m%d).json"

Monitor a running server:

# Check server health (when running with SSE)
curl http://localhost:9811/health
# Get Prometheus metrics
curl http://localhost:9811/metrics
# Monitor extraction queue
watch -n 5 'cortex entity queue-stats --namespace production'