# Getting Started with Cortex
This guide walks you through installing Cortex, starting the MCP server, and using the core memory primitives.
## Installation
Download a prebuilt binary for your platform:

```sh
# macOS (Apple Silicon)
curl -LO https://github.com/petal-labs/cortex/releases/latest/download/cortex_darwin_arm64.tar.gz
tar -xzf cortex_darwin_arm64.tar.gz
sudo mv cortex /usr/local/bin/
```

```sh
# macOS (Intel)
curl -LO https://github.com/petal-labs/cortex/releases/latest/download/cortex_darwin_amd64.tar.gz
tar -xzf cortex_darwin_amd64.tar.gz
sudo mv cortex /usr/local/bin/
```

```sh
# Linux (x86_64)
curl -LO https://github.com/petal-labs/cortex/releases/latest/download/cortex_linux_amd64.tar.gz
tar -xzf cortex_linux_amd64.tar.gz
sudo mv cortex /usr/local/bin/
```

```sh
# Linux (ARM64)
curl -LO https://github.com/petal-labs/cortex/releases/latest/download/cortex_linux_arm64.tar.gz
tar -xzf cortex_linux_arm64.tar.gz
sudo mv cortex /usr/local/bin/
```

Alternatively, build from source. Requires Go 1.24+ with CGO enabled:
```sh
git clone https://github.com/petal-labs/cortex.git
cd cortex
go build -o cortex ./cmd/cortex
sudo mv cortex /usr/local/bin/
```

Verify the installation:
```sh
cortex --version
```

## Quick Start
**1. Start the MCP Server**
The simplest way to run Cortex is with the default stdio transport:
```sh
cortex serve
```

For web-based MCP clients, use SSE transport:
```sh
cortex serve --transport sse --port 9810
```
**2. Ingest Some Knowledge**
Add a document to the knowledge store:
```sh
cortex knowledge ingest \
  --collection docs \
  --title "Getting Started" \
  --file README.md
```

Or ingest an entire directory:
```sh
cortex knowledge ingest-dir \
  --collection docs \
  --dir ./documentation \
  --pattern "*.md"
```
**3. Search Your Knowledge**
Find relevant information:
Terminal window cortex knowledge search "how to configure"Use hybrid search for best results:
Terminal window cortex knowledge search "authentication setup" --mode hybrid -
**4. Store Workflow Context**
Save state that persists across runs:
```sh
cortex context set "project/config" '{"debug": true, "env": "dev"}'
```

Retrieve it later:
```sh
cortex context get "project/config"
```
**5. Launch the TUI**
Explore your data interactively:
```sh
cortex tui
```

Navigate with `1`-`5` to switch sections, `j`/`k` to move, `Enter` to select, and `q` to quit.
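The `--mode hybrid` flag in step 3 combines lexical (keyword) and semantic (vector) retrieval. One common way to merge two ranked result lists is reciprocal rank fusion (RRF); whether Cortex uses RRF internally is not documented here, but the sketch below illustrates the general idea:

```python
def rrf_merge(keyword_hits, vector_hits, k=60):
    """Merge two ranked result lists with reciprocal rank fusion.

    Each input is a list of document IDs, best first. A document's fused
    score is the sum of 1 / (k + rank) over the lists it appears in, so
    items ranked well by either retriever rise to the top.
    """
    scores = {}
    for hits in (keyword_hits, vector_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "auth.md" appears in both lists, so it outranks single-list hits.
merged = rrf_merge(["auth.md", "setup.md"], ["config.md", "auth.md"])
```

The constant `k` damps the influence of top ranks so that one retriever's first place cannot drown out broad agreement between retrievers.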
## Configuration
Cortex works with sensible defaults, but you can customize behavior via `~/.cortex/config.yaml`:
```yaml
storage:
  backend: sqlite              # or "pgvector" for PostgreSQL
  data_dir: ~/.cortex/data

embedding:
  provider: openai             # openai, anthropic, voyageai, gemini, ollama
  model: text-embedding-3-small
  dimensions: 1536
  batch_size: 100
  cache_size: 1000

summarization:
  provider: anthropic          # LLM provider for conversation summaries
  model: claude-sonnet-4-6
  max_tokens: 1024

conversation:
  auto_summarize_threshold: 50
  semantic_search_enabled: true

knowledge:
  default_chunk_strategy: sentence
  default_chunk_max_tokens: 512
  default_chunk_overlap: 50

entity:
  extraction_mode: full        # off, sampled, whitelist, full
  extraction_model: claude-haiku-4-5

server:
  metrics_enabled: true
  metrics_port: 9811
  structured_logging: true
```

## Storage Backends
The default backend is SQLite, using the vec0 extension for vector operations; it requires no extra infrastructure:
```yaml
storage:
  backend: sqlite
  data_dir: ~/.cortex/data
```

This is the recommended choice for local development and single-node deployments.
For production deployments with multiple instances or higher throughput, use the pgvector backend:
```yaml
storage:
  backend: pgvector
  database_url: postgres://user:pass@localhost:5432/cortex
```

Ensure the pgvector extension is installed:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```

## Embedding Configuration
Cortex uses the Iris SDK to generate embeddings directly via provider APIs; no separate Iris service is required.
```yaml
embedding:
  provider: openai             # openai, anthropic, voyageai, gemini, ollama
  model: text-embedding-3-small
  dimensions: 1536
  batch_size: 100              # Texts per API call (reduces round trips)
  cache_size: 1000             # LRU cache entries (reduces API costs)
```
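The `cache_size` setting caps an LRU (least-recently-used) cache of previously embedded texts, so repeated inputs skip the provider API entirely. A minimal sketch of the idea; the `embed_fn` callable and cache shape are illustrative, not Cortex's actual internals:

```python
from collections import OrderedDict

class EmbeddingCache:
    """LRU cache keyed by input text; evicts least-recently-used entries."""

    def __init__(self, embed_fn, max_entries=1000):
        self._embed = embed_fn           # e.g. a provider API call
        self._max = max_entries
        self._cache = OrderedDict()

    def get(self, text):
        if text in self._cache:
            self._cache.move_to_end(text)    # mark as recently used
            return self._cache[text]
        vec = self._embed(text)              # cache miss: call the API
        self._cache[text] = vec
        if len(self._cache) > self._max:
            self._cache.popitem(last=False)  # evict the oldest entry
        return vec

calls = []
cache = EmbeddingCache(lambda t: calls.append(t) or [0.0], max_entries=2)
cache.get("hello")
cache.get("hello")   # second lookup is a cache hit: no new API call
```

A larger `cache_size` trades memory for fewer embedding API calls, which matters most when the same documents are re-ingested or re-searched often.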
**Environment variables.** Set the API key for your chosen provider:
# For OpenAI (default)export OPENAI_API_KEY="sk-..."
# For Anthropicexport ANTHROPIC_API_KEY="sk-ant-..."
# For VoyageAIexport VOYAGEAI_API_KEY="..."
# For Geminiexport GEMINI_API_KEY="..." # or GOOGLE_API_KEY
# For Ollama (local, no key required)export OLLAMA_BASE_URL="http://localhost:11434"Using with MCP Clients
### Claude Desktop
Add Cortex to your Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "cortex": {
      "command": "cortex",
      "args": ["serve"]
    }
  }
}
```

Restart Claude Desktop to discover Cortex tools.
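The `command`/`args` entry tells the client which process to spawn; with the stdio transport, client and server then exchange newline-delimited JSON-RPC messages over the child process's stdin and stdout. A rough sketch of that plumbing, using `cat` (which echoes its input) as a stand-in for `cortex serve`, since the exact MCP message schema is beyond this guide:

```python
import json
import subprocess

# Spawn the server process. `cat` echoes stdin back to stdout, standing
# in for a real MCP server launched as `cortex serve`.
proc = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

# Send one newline-delimited JSON-RPC request and read the reply line.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.close()               # signal end of input
reply = json.loads(proc.stdout.readline())
proc.wait()
```

Because `cat` echoes, `reply` here equals `request`; a real server would instead return a JSON-RPC response with the same `id`.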
### Custom MCP Clients
For SSE-based clients, start Cortex with HTTP transport:
```sh
cortex serve --transport sse --port 9810
```

Connect your client to http://localhost:9810.
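Server-Sent Events deliver messages as a `text/event-stream` body: `event:` and `data:` lines grouped into frames separated by blank lines. A minimal parser for that framing, independent of whatever event payloads Cortex actually emits:

```python
def parse_sse(stream_text):
    """Split a text/event-stream body into (event, data) pairs."""
    events = []
    event_type, data_lines = "message", []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            # A blank line terminates a frame; multi-line data joins with \n.
            events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
    return events

frames = parse_sse('event: endpoint\ndata: /message\n\ndata: {"ok": true}\n\n')
```

Per the SSE format, a frame with no `event:` line defaults to the event type `message`, which is why the second frame above parses that way.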
## Namespaces
All data in Cortex is isolated by namespace. Use namespaces to separate:
- Different projects or workflows
- Development vs production data
- Multi-tenant deployments
```sh
# Commands accept a --namespace flag
cortex knowledge search "query" --namespace acme/research

# Restrict the MCP server to a namespace
cortex serve --namespace acme/research
```

The default namespace is `default` when none is specified.
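One simple way to picture namespace isolation is key scoping: every stored item lives under its namespace, and lookups only see items in the caller's namespace. This sketch is conceptual, not Cortex's actual storage layout:

```python
class NamespacedStore:
    """Toy key-value store where every key is scoped to a namespace."""

    def __init__(self):
        self._data = {}

    def set(self, namespace, key, value):
        self._data[(namespace, key)] = value

    def get(self, namespace, key, default=None):
        # A key written under one namespace is invisible to all others.
        return self._data.get((namespace, key), default)

store = NamespacedStore()
store.set("acme/research", "project/config", {"env": "dev"})
store.get("default", "project/config")  # returns None: different namespace
```

This is why running `cortex serve --namespace acme/research` acts as a hard boundary: every read and write the server performs is scoped before it reaches storage.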
## Observability
### Metrics
Enable Prometheus metrics in your config:
```yaml
server:
  metrics_enabled: true
  metrics_port: 9811
```

Access metrics at http://localhost:9811/metrics.
Key metrics include:
- `cortex_operations_total`: operations by primitive, action, namespace, and status
- `cortex_operation_duration_seconds`: operation latency histogram
- `cortex_search_latency_seconds`: search-specific latency
- `cortex_embedding_requests_total`: embedding API calls
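The `/metrics` endpoint serves the standard Prometheus text exposition format: one `metric_name{label="value",...} number` per line. A small parser for a single such line; the sample label values below are made up for illustration:

```python
import re

def parse_metric_line(line):
    """Parse one Prometheus exposition line into (name, labels, value)."""
    match = re.match(r'(\w+)\{([^}]*)\}\s+([\d.eE+-]+)', line)
    name, label_blob, value = match.groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', label_blob))
    return name, labels, float(value)

name, labels, value = parse_metric_line(
    'cortex_operations_total{primitive="knowledge",action="search",status="ok"} 42'
)
```

In practice you would let Prometheus scrape the endpoint and query these series directly, but the parser makes the wire format concrete.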
### Health Check
```sh
curl http://localhost:9811/health
```

## Next Steps
Now that Cortex is running:
- Learn about Concepts to understand the four memory primitives
- Explore the CLI Reference for all available commands
- See MCP Tools for programmatic access
- Try the Knowledge Ingestion Guide for advanced document processing