CLI Guide

The Iris CLI provides a powerful command-line interface for interacting with LLM providers, managing API keys, and automating AI workflows. It’s designed for rapid prototyping, prompt testing, and integration into shell scripts and CI/CD pipelines.

Install the CLI using Go:

go install github.com/petal-labs/iris/cli/cmd/iris@latest

Verify the installation:

iris version
# iris v0.11.0 (go1.22.0, darwin/arm64)

Enable shell completion for a better experience:

# Add to ~/.bashrc
eval "$(iris completion bash)"

Get started in four steps:

  1. Set up an API key

     export OPENAI_API_KEY=sk-...
     # Or use the secure keystore
     iris keys set openai

  2. Send your first chat

     iris chat "What is the capital of France?"

  3. Try streaming output

     iris chat --stream "Write a haiku about programming"

  4. Switch providers

     iris chat --provider anthropic --model claude-3-5-sonnet-20241022 "Hello!"

Send chat completions to any configured provider.

iris chat [flags] <prompt>

Flag           Short  Description                         Default
--provider     -p     LLM provider to use                 openai
--model        -m     Model name                          Provider default
--system       -s     System prompt                       None
--temperature  -t     Sampling temperature (0.0-2.0)      0.7
--max-tokens          Maximum tokens in response          Model default
--stream              Stream output in real time          false
--json         -j     Output as JSON                      false
--raw          -r     Output only the response text       false
--file         -f     Read prompt from file               None
--image        -i     Include image URL or path           None
--interactive         Start interactive chat session      false
--timeout             Request timeout                     60s
--verbose      -v     Show request/response details       false

# Basic chat
iris chat "Explain recursion in simple terms"
# With specific model
iris chat -p anthropic -m claude-3-opus "Write a sonnet"
# With system prompt
iris chat -s "You are a pirate" "What's the weather like?"
# Lower temperature for deterministic output
iris chat -t 0.1 "List the planets in order"
# Streaming for long responses
iris chat --stream "Write a short story about a robot"
# JSON output for scripting
iris chat --json "Summarize this text" | jq '.output'
# Read prompt from file
iris chat -f prompt.txt
# With image (vision models)
iris chat -i "https://example.com/photo.jpg" "Describe this image"
iris chat -i ./local-image.png "What's in this picture?"
# Multiple images
iris chat -i image1.jpg -i image2.jpg "Compare these images"
# Verbose mode for debugging
iris chat -v "Hello" 2>&1 | head -20

Manage encrypted API keys in the local keystore.

iris keys <subcommand> [args]

Subcommand         Description
set <provider>     Store a new API key
get <provider>     Retrieve a stored key (masked)
list               List all stored providers
remove <provider>  Delete a stored key
test <provider>    Test whether the key is valid
export             Export keys (for backup)
import             Import keys from backup

# Store API keys (prompts for value)
iris keys set openai
iris keys set anthropic
iris keys set gemini
# List stored keys
iris keys list
# openai
# anthropic
# gemini
# Test a key works
iris keys test openai
# ✓ OpenAI key is valid (organization: org-xxx)
# Remove a key
iris keys remove gemini
# Export for backup (encrypted)
iris keys export > keys-backup.enc
# Import from backup
iris keys import < keys-backup.enc

List and inspect available models.

iris models <subcommand> [args]

Subcommand          Description
list [provider]     List available models
info <model>        Show model details
default [provider]  Show or set the default model

# List all models for a provider
iris models list openai
# gpt-4o
# gpt-4o-mini
# gpt-4-turbo
# gpt-3.5-turbo
# text-embedding-3-small
# text-embedding-3-large
# ...
# List models for all providers
iris models list
# Get detailed info about a model
iris models info gpt-4o
# Name: gpt-4o
# Provider: openai
# Context: 128000 tokens
# Max Output: 16384 tokens
# Features: chat, streaming, tools, vision, json_mode
# Pricing: $5.00/1M input, $15.00/1M output
# Show default model
iris models default openai
# gpt-4o
# Set default model
iris models default openai gpt-4o-mini

Generate embeddings for text.

iris embed [flags] <text>

Flag          Short  Description                          Default
--provider    -p     Embedding provider                   openai
--model       -m     Embedding model                      Provider default
--dimensions  -d     Output dimensions                    Model default
--file        -f     Read texts from file (one per line)  None
--output      -o     Output file (JSON)                   stdout

# Generate embedding for text
iris embed "The quick brown fox"
# With specific model
iris embed -m text-embedding-3-large "Hello world"
# Reduce dimensions
iris embed -d 512 "Search query"
# Batch from file
iris embed -f documents.txt -o embeddings.json
# Pipe to other tools
iris embed "query text" | jq '.embedding[:5]'
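
Embedding vectors are commonly compared with cosine similarity. Below is a minimal shell sketch; the cosine function is a helper defined here for illustration (it is not an iris command), and it assumes you have extracted the .embedding array into a space-separated string, e.g. with jq -r '.embedding | join(" ")':

```shell
# cosine: print the cosine similarity of two space-separated vectors.
cosine() {
  awk -v a="$1" -v b="$2" 'BEGIN {
    n = split(a, x, " "); split(b, y, " ")
    for (i = 1; i <= n; i++) {
      dot += x[i] * y[i]      # dot product
      na  += x[i] * x[i]      # squared norm of a
      nb  += y[i] * y[i]      # squared norm of b
    }
    printf "%.4f\n", dot / (sqrt(na) * sqrt(nb))
  }'
}

# Hypothetical usage with two embeddings:
# v1=$(iris embed "cat" | jq -r '.embedding | join(" ")')
# v2=$(iris embed "kitten" | jq -r '.embedding | join(" ")')
# cosine "$v1" "$v2"
```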

Scaffold new Iris projects.

iris init [flags] <project-name>

Flag        Description              Default
--template  Project template         basic
--provider  Default provider         openai
--no-git    Skip git initialization  false

Template  Description
basic     Simple chat application
agent     Tool-using agent scaffold
rag       RAG pipeline with embeddings
api       HTTP API server

# Create basic project
iris init myproject
# Create agent project
iris init --template agent my-agent
# Create RAG project with Anthropic
iris init --template rag --provider anthropic my-rag-app
# Skip git
iris init --no-git quick-test

Manage CLI configuration.

iris config <subcommand> [args]

Subcommand         Description
show               Display current configuration
set <key> <value>  Set a configuration value
get <key>          Get a configuration value
edit               Open config file in editor
path               Show config file path

# Show all config
iris config show
# Set default provider
iris config set default_provider anthropic
# Set default model
iris config set default_model gpt-4o-mini
# Get a value
iris config get default_provider
# anthropic
# Edit config file
iris config edit
# Show config path
iris config path
# /Users/username/.iris/config.yaml

Display version and build information.

iris version [flags]

Flag     Description
--json   Output as JSON
--short  Version number only

iris version
# iris v0.11.0 (go1.22.0, darwin/arm64)
# Built: 2024-01-15T10:30:00Z
# Commit: abc123def
iris version --short
# v0.11.0
iris version --json
# {"version":"v0.11.0","go":"1.22.0","os":"darwin","arch":"arm64",...}

The CLI reads configuration from ~/.iris/config.yaml:

# Default settings
default_provider: openai
default_model: gpt-4o

# Provider configurations
providers:
  openai:
    api_key_env: OPENAI_API_KEY
    default_model: gpt-4o
    organization: org-xxx # Optional
  anthropic:
    api_key_env: ANTHROPIC_API_KEY
    default_model: claude-3-5-sonnet-20241022
  gemini:
    api_key_env: GEMINI_API_KEY
    default_model: gemini-1.5-pro
  ollama:
    base_url: http://localhost:11434
    default_model: llama3.2

# Chat defaults
chat:
  temperature: 0.7
  max_tokens: 4096
  stream: false

# Output preferences
output:
  color: true
  format: text # text, json, markdown

# Telemetry
telemetry:
  enabled: false
  endpoint: ""

The CLI also recognizes these environment variables:

Variable               Description
IRIS_KEYSTORE_KEY      Master password for the encrypted keystore
IRIS_CONFIG            Custom config file path
IRIS_DEFAULT_PROVIDER  Override the default provider
IRIS_DEFAULT_MODEL     Override the default model
OPENAI_API_KEY         OpenAI API key
ANTHROPIC_API_KEY      Anthropic API key
GEMINI_API_KEY         Google Gemini API key
XAI_API_KEY            xAI (Grok) API key
PERPLEXITY_API_KEY     Perplexity API key
HUGGINGFACE_API_KEY    Hugging Face API key
VOYAGEAI_API_KEY       Voyage AI API key

Configuration values are resolved in this order (highest priority first):

  1. Command-line flags
  2. Environment variables
  3. Configuration file (~/.iris/config.yaml)
  4. Built-in defaults
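
As a sketch, the precedence above can be mirrored for a single setting such as the provider. The resolve_provider function below is illustrative only, not part of the CLI; it assumes the config value has already been parsed out of ~/.iris/config.yaml:

```shell
# resolve_provider: pick a provider by precedence:
# flag > environment variable > config file > built-in default.
resolve_provider() {
  local flag_value="$1"    # e.g. from --provider / -p
  local config_value="$2"  # e.g. parsed from ~/.iris/config.yaml
  if [ -n "$flag_value" ]; then
    echo "$flag_value"                 # 1. command-line flag
  elif [ -n "$IRIS_DEFAULT_PROVIDER" ]; then
    echo "$IRIS_DEFAULT_PROVIDER"      # 2. environment variable
  elif [ -n "$config_value" ]; then
    echo "$config_value"               # 3. configuration file
  else
    echo "openai"                      # 4. built-in default
  fi
}
```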

Start an interactive chat session for multi-turn conversations:

iris chat --interactive

Command               Description
/help                 Show available commands
/clear                Clear conversation history
/system <prompt>      Set the system prompt
/model <name>         Switch model
/provider <name>      Switch provider
/temperature <value>  Set temperature
/save <file>          Save conversation to file
/load <file>          Load conversation from file
/export               Export as JSON
/tokens               Show token usage
/quit or /exit        Exit interactive mode

$ iris chat --interactive
Iris Interactive Mode (gpt-4o)
Type /help for commands, /quit to exit
You: Hello! What can you help me with?
Assistant: Hello! I'm Iris, your AI assistant. I can help you with:
- Answering questions and explaining concepts
- Writing and reviewing code
- Analyzing data and documents
- Creative writing and brainstorming
What would you like to explore?
You: /model gpt-4o-mini
Switched to model: gpt-4o-mini
You: What's 2+2?
Assistant: 2 + 2 = 4
You: /tokens
Session tokens: 127 (prompt: 89, completion: 38)
You: /save chat-history.json
Conversation saved to chat-history.json
You: /quit
Goodbye!

The CLI is designed for integration into shell scripts and CI/CD pipelines.

# Get structured output
result=$(iris chat --json "List 3 programming languages")
echo "$result" | jq '.output'
# Extract specific fields
tokens=$(iris chat --json "Hello" | jq '.usage.total_tokens')
echo "Used $tokens tokens"
# Pipe file content
cat README.md | iris chat "Summarize this document"
# Pipe command output
git diff | iris chat "Explain these changes"
# Chain commands
curl -s https://api.example.com/data | iris chat "Analyze this JSON"
# Process multiple files
for file in docs/*.md; do
  echo "Processing $file..."
  iris chat -f "$file" --raw "Summarize this document" > "summaries/$(basename "$file")"
done
# Parallel processing with xargs
find . -name "*.go" | xargs -P 4 -I {} sh -c 'iris chat -f {} --raw "Review this code" > {}.review'
#!/bin/bash
set -e

# Check if command succeeds
if iris chat --json "Hello" > /dev/null 2>&1; then
  echo "API is working"
else
  echo "API connection failed"
  exit 1
fi

# Capture errors
result=$(iris chat --json "Test" 2>&1) || {
  echo "Error: $result"
  exit 1
}
# GitHub Actions example
- name: Generate Release Notes
  run: |
    git log --oneline v1.0.0..HEAD | \
      iris chat --raw "Generate release notes from these commits" > RELEASE_NOTES.md

# GitLab CI example
generate_docs:
  script:
    - iris chat -f src/main.go --raw "Generate API documentation" > docs/api.md

# Development
IRIS_DEFAULT_MODEL=gpt-4o-mini iris chat "Quick test"
# Production (use specific config)
IRIS_CONFIG=/etc/iris/prod.yaml iris chat "Generate report"
# Testing with local Ollama
iris chat -p ollama -m llama3.2 "Test local model"

If the iris command is not found after installation, ensure the Go bin directory is on your PATH:

export PATH=$PATH:$(go env GOPATH)/bin
# Add to ~/.bashrc or ~/.zshrc for persistence

If requests fail with authentication errors, check your key configuration:

# Check environment
echo $OPENAI_API_KEY
# Check keystore
iris keys list
# Test the key
iris keys test openai

If a request is rejected with an unknown-model error, verify the model name:

# List available models
iris models list openai
# Check model info
iris models info gpt-4o

Increase the timeout for long operations:

iris chat --timeout 120s "Write a long document..."

If you hit provider rate limits, add delays between requests in scripts:

for prompt in "${prompts[@]}"; do
  iris chat "$prompt"
  sleep 2  # Wait 2 seconds between requests
done
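
For bursty workloads, a fixed sleep can be replaced with exponential backoff. The run_with_backoff helper below is an illustrative sketch, not an iris feature; substitute your own iris invocation for the command passed to it:

```shell
# run_with_backoff: retry a command, doubling the wait after each failure.
run_with_backoff() {
  local max_attempts=5 delay=1 attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    if "$@"; then
      return 0                 # command succeeded
    fi
    sleep "$delay"             # wait before retrying
    delay=$((delay * 2))       # double the delay each attempt
    attempt=$((attempt + 1))
  done
  return 1                     # all attempts failed
}

# Hypothetical usage:
# run_with_backoff iris chat "prompt that may hit a rate limit"
```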

Enable verbose output to diagnose issues:

# Show request/response details
iris chat -v "Hello"
# Show even more detail
IRIS_DEBUG=1 iris chat "Hello"

Check logs for detailed error information:

# Default log location
cat ~/.iris/logs/iris.log
# Tail logs in real-time
tail -f ~/.iris/logs/iris.log

Tools Guide

Build tool-augmented agents. Tools →