
Examples

Practical examples demonstrating Cortex memory primitives in real-world scenarios.

RAG Pipeline

Build a retrieval-augmented generation system using the knowledge store.

Agent Memory

Implement persistent conversation memory for long-running agents.

Workflow State

Use context storage for cross-step state in multi-stage workflows.

Entity Tracking

Track people and organizations across interactions.

RAG Pipeline
# 1. Ingest your documentation
cortex knowledge ingest-dir \
  --collection docs \
  --dir ./documentation \
  --pattern "*.md"
# 2. Search for relevant context
cortex knowledge search "authentication setup" --mode hybrid --limit 3
# 3. Use results in your LLM prompt
# (Results provide grounded context for responses)
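The three steps above can be sketched end to end. The `SearchResult` type and `buildPrompt` helper below are illustrative assumptions, not part of the Cortex API; they only show how retrieved chunks become grounded prompt context in step 3:

```go
package main

import (
	"fmt"
	"strings"
)

// SearchResult is a hypothetical stand-in for one hit returned by
// `cortex knowledge search`; the field names are illustrative only.
type SearchResult struct {
	Source  string
	Content string
}

// buildPrompt grounds the LLM prompt in the retrieved chunks so the
// model answers from your documentation rather than from memory.
func buildPrompt(question string, results []SearchResult) string {
	var b strings.Builder
	b.WriteString("Answer using only the context below.\n\n")
	for _, r := range results {
		fmt.Fprintf(&b, "[%s]\n%s\n\n", r.Source, r.Content)
	}
	b.WriteString("Question: " + question)
	return b.String()
}

func main() {
	prompt := buildPrompt("authentication setup", []SearchResult{
		{Source: "docs/auth.md", Content: "Set AUTH_SECRET before starting the server."},
	})
	fmt.Println(prompt)
}
```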
Agent Memory
# Agent stores conversation as it runs
cortex conversation append \
  --thread-id agent-session-123 \
  --role assistant \
  --content "I found 3 relevant documents about authentication."
# Later, retrieve context
cortex conversation history --thread-id agent-session-123 --limit 20
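A minimal sketch of how an agent might consume that history, assuming a hypothetical `Message` shape (the real Cortex record format may differ); `lastN` mirrors the effect of `--limit`:

```go
package main

import "fmt"

// Message is an assumed shape for one conversation entry,
// not the actual Cortex record format.
type Message struct {
	Role    string
	Content string
}

// lastN keeps only the most recent n messages, the same windowing
// that `cortex conversation history --limit n` performs.
func lastN(history []Message, n int) []Message {
	if len(history) <= n {
		return history
	}
	return history[len(history)-n:]
}

func main() {
	h := []Message{
		{"user", "Find docs on authentication."},
		{"assistant", "I found 3 relevant documents about authentication."},
		{"user", "Summarize the first one."},
	}
	for _, m := range lastN(h, 2) {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```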
Workflow State
# Store state that persists
cortex context set "project/analyzed-files" '["main.go", "handler.go"]'
# Next run retrieves it
cortex context get "project/analyzed-files"
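The persistence contract can be sketched with an in-memory stand-in. `contextSet` and `contextGet` below are hypothetical helpers, not the Cortex SDK; the real store persists values across process runs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// store is an in-memory stand-in for the persistent context store;
// the real `cortex context` commands survive across runs.
var store = map[string]string{}

// contextSet serializes any value to JSON under a key, matching the
// JSON-string argument accepted by `cortex context set`.
func contextSet(key string, value any) error {
	raw, err := json.Marshal(value)
	if err != nil {
		return err
	}
	store[key] = string(raw)
	return nil
}

// contextGet decodes the stored JSON back into out.
func contextGet(key string, out any) error {
	return json.Unmarshal([]byte(store[key]), out)
}

func main() {
	_ = contextSet("project/analyzed-files", []string{"main.go", "handler.go"})
	var files []string
	_ = contextGet("project/analyzed-files", &files)
	fmt.Println(files) // [main.go handler.go]
}
```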
Entity Tracking
# List people mentioned across conversations
cortex entity list --type person --sort mention_count
# See who works where
cortex entity relationships ent-jane-123 --type works_at
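For illustration, sorting entities by mention count looks like this; the `Entity` shape is an assumption, not the actual output schema of `cortex entity list`:

```go
package main

import (
	"fmt"
	"sort"
)

// Entity is an illustrative shape for an entity record; the real
// fields returned by `cortex entity list` may differ.
type Entity struct {
	ID           string
	Type         string
	MentionCount int
}

// sortByMentions orders entities most-mentioned first, mirroring
// the effect of `--sort mention_count`.
func sortByMentions(entities []Entity) {
	sort.Slice(entities, func(i, j int) bool {
		return entities[i].MentionCount > entities[j].MentionCount
	})
}

func main() {
	entities := []Entity{
		{ID: "ent-bob-7", Type: "person", MentionCount: 2},
		{ID: "ent-jane-123", Type: "person", MentionCount: 9},
	}
	sortByMentions(entities)
	for _, e := range entities {
		fmt.Printf("%-14s %d mentions\n", e.ID, e.MentionCount)
	}
}
```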

Cortex uses the Iris SDK directly; no separate Iris service is required. Configure your provider:

~/.cortex/config.yaml
embedding:
  provider: openai # Or: anthropic, voyageai, gemini, ollama
  model: text-embedding-3-small
  dimensions: 1536
  batch_size: 100
  cache_size: 1000

Set your API key:

# For OpenAI (most common)
export OPENAI_API_KEY="sk-..."
# For local inference with Ollama (no API key needed)
export OLLAMA_BASE_URL="http://localhost:11434"

The SDK handles batching, caching, and error recovery automatically.
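As a rough illustration of why `cache_size` matters, here is a naive lookup cache that avoids repeat embedding calls; the SDK's actual caching and eviction strategy is not shown here and will differ:

```go
package main

import "fmt"

// embedCache sketches a size-bounded embedding cache (the role of
// cache_size in the config). Keeping entries only while under max is
// a naive assumption, not the SDK's real eviction policy.
type embedCache struct {
	max  int
	data map[string][]float32
}

func newEmbedCache(max int) *embedCache {
	return &embedCache{max: max, data: make(map[string][]float32)}
}

// get returns the cached vector if present; otherwise it calls embed
// (standing in for a provider API call) and caches the result.
func (c *embedCache) get(text string, embed func(string) []float32) []float32 {
	if v, ok := c.data[text]; ok {
		return v // cache hit: no provider call
	}
	v := embed(text)
	if len(c.data) < c.max {
		c.data[text] = v
	}
	return v
}

func main() {
	calls := 0
	fake := func(s string) []float32 { calls++; return []float32{float32(len(s))} }
	c := newEmbedCache(2)
	c.get("hello", fake)
	c.get("hello", fake) // served from cache
	fmt.Println("provider calls:", calls) // provider calls: 1
}
```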

Use Cortex as the memory backend for PetalFlow workflows:

// In a PetalFlow node, store state
ctx.ContextSet("user-preferences", prefs)
// Later nodes retrieve it
prefs, _ := ctx.ContextGet("user-preferences")
// Search knowledge for grounding
docs, _ := ctx.KnowledgeSearch("billing questions", 5)

Any MCP client can use Cortex tools:

{
  "mcpServers": {
    "cortex": {
      "command": "cortex",
      "args": ["serve"]
    }
  }
}