Practical examples demonstrating Cortex memory primitives in real-world scenarios.
- **RAG Pipeline**: Build a retrieval-augmented generation system using the knowledge store.
- **Agent Memory**: Implement persistent conversation memory for long-running agents.
- **Workflow State**: Use context storage for cross-step state in multi-stage workflows.
- **Entity Tracking**: Track people and organizations across interactions.
## RAG Pipeline

```shell
# 1. Ingest your documentation
cortex knowledge ingest-dir \
  --collection docs \
  --dir ./documentation \
  --pattern "*.md"
```
```shell
# 2. Search for relevant context
cortex knowledge search "authentication setup" --mode hybrid --limit 3
```
```shell
# 3. Use results in your LLM prompt
# (Results provide grounded context for responses)
```

## Agent Memory

```shell
# Agent stores conversation as it runs
cortex conversation append \
  --thread-id agent-session-123 \
  --role assistant \
  --content "I found 3 relevant documents about authentication."
```
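When the agent later fetches history with a `--limit`, it gets only the most recent turns. To illustrate that windowing, a self-contained Go sketch of trimming a transcript to the last N turns (the `Turn` struct is an assumption, not the Cortex schema):

```go
package main

import "fmt"

// Turn models one conversation message. Field names are
// illustrative assumptions, not the Cortex schema.
type Turn struct {
	Role    string
	Content string
}

// lastN mirrors what a history query with a limit of n returns:
// only the most recent n turns, oldest first.
func lastN(turns []Turn, n int) []Turn {
	if len(turns) <= n {
		return turns
	}
	return turns[len(turns)-n:]
}

func main() {
	var transcript []Turn
	for i := 1; i <= 25; i++ {
		transcript = append(transcript, Turn{"user", fmt.Sprintf("message %d", i)})
	}
	recent := lastN(transcript, 20)
	fmt.Println(len(recent), recent[0].Content) // 20 message 6
}
```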
```shell
# Later, retrieve context
cortex conversation history --thread-id agent-session-123 --limit 20
```

## Workflow State

```shell
# Store state that persists
cortex context set "project/analyzed-files" '["main.go", "handler.go"]'
```
```shell
# Next run retrieves it
cortex context get "project/analyzed-files"
```

## Entity Tracking

```shell
# List people mentioned across conversations
cortex entity list --type person --sort mention_count
```
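`--sort mention_count` implies each entity carries a running mention counter. A minimal illustrative model of that ordering in Go (the `Entity` fields are assumptions, not the Cortex schema):

```go
package main

import (
	"fmt"
	"sort"
)

// Entity is an illustrative stand-in for a tracked person or org.
type Entity struct {
	ID           string
	Type         string
	MentionCount int
}

// byMentions orders entities most-mentioned first, as the
// --sort mention_count flag suggests.
func byMentions(es []Entity) []Entity {
	sort.Slice(es, func(i, j int) bool {
		return es[i].MentionCount > es[j].MentionCount
	})
	return es
}

func main() {
	people := []Entity{
		{"ent-bob-456", "person", 3},
		{"ent-jane-123", "person", 9},
	}
	for _, e := range byMentions(people) {
		fmt.Println(e.ID, e.MentionCount)
	}
}
```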
```shell
# See who works where
cortex entity relationships ent-jane-123 --type works_at
```

## Embedding Configuration

Cortex uses the Iris SDK directly; no separate Iris service is required. Configure your provider:
```yaml
embedding:
  provider: openai   # Or: anthropic, voyageai, gemini, ollama
  model: text-embedding-3-small
  dimensions: 1536
  batch_size: 100
  cache_size: 1000
```

Set your API key:
```shell
# For OpenAI (most common)
export OPENAI_API_KEY="sk-..."

# For local inference with Ollama (no API key needed)
export OLLAMA_BASE_URL="http://localhost:11434"
```

The SDK handles batching, caching, and error recovery automatically.
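To make "batching and caching" concrete, here is an illustrative sketch of what the SDK does on your behalf: deduplicating repeated texts through a cache and embedding only the misses in one batched call. This is not the Iris API, just the idea; `embedBatch` stands in for a provider request.

```go
package main

import "fmt"

// embedBatch stands in for one provider API call that embeds
// many texts at once (the "batching" the SDK handles for you).
func embedBatch(texts []string) [][]float32 {
	out := make([][]float32, len(texts))
	for i, t := range texts {
		// Fake embedding: just the text length, for illustration.
		out[i] = []float32{float32(len(t))}
	}
	return out
}

// cachingEmbed embeds texts, reusing cached vectors so repeated
// inputs cost no extra provider calls (the "caching" part).
func cachingEmbed(cache map[string][]float32, texts []string) [][]float32 {
	var misses []string
	seen := map[string]bool{}
	for _, t := range texts {
		if _, ok := cache[t]; !ok && !seen[t] {
			seen[t] = true
			misses = append(misses, t)
		}
	}
	if len(misses) > 0 {
		for i, v := range embedBatch(misses) {
			cache[misses[i]] = v
		}
	}
	result := make([][]float32, len(texts))
	for i, t := range texts {
		result[i] = cache[t]
	}
	return result
}

func main() {
	cache := map[string][]float32{}
	cachingEmbed(cache, []string{"hello", "world", "hello"})
	fmt.Println(len(cache)) // 2: "hello" is embedded only once
}
```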
## PetalFlow Integration

Use Cortex as a memory backend for PetalFlow workflows:
```go
// In a PetalFlow node, store state
ctx.ContextSet("user-preferences", prefs)

// Later nodes retrieve it
prefs, _ := ctx.ContextGet("user-preferences")

// Search knowledge for grounding
docs, _ := ctx.KnowledgeSearch("billing questions", 5)
```

## MCP Integration

Any MCP client can use Cortex tools:
```json
{
  "mcpServers": {
    "cortex": {
      "command": "cortex",
      "args": ["serve"]
    }
  }
}
```