The Iris CLI provides a powerful command-line interface for interacting with LLM providers, managing API keys, and automating AI workflows. It’s designed for rapid prototyping, prompt testing, and integration into shell scripts and CI/CD pipelines.
Install the CLI using Go:
```shell
go install github.com/petal-labs/iris/cli/cmd/iris@latest
```

Verify the installation:
```shell
iris version
# iris v0.11.0 (go1.22.0, darwin/arm64)
```

Enable shell completion for a better experience:
```shell
# Add to ~/.bashrc
eval "$(iris completion bash)"

# Add to ~/.zshrc
eval "$(iris completion zsh)"

# Add to ~/.config/fish/config.fish
iris completion fish | source
```

Set up an API key
```shell
export OPENAI_API_KEY=sk-...

# Or use the secure keystore
iris keys set openai
```

Send your first chat
```shell
iris chat "What is the capital of France?"
```

Try streaming output
```shell
iris chat --stream "Write a haiku about programming"
```

Switch providers
```shell
iris chat --provider anthropic --model claude-3-5-sonnet-20241022 "Hello!"
```

Send chat completions to any configured provider.
```shell
iris chat [flags] <prompt>
```

| Flag | Short | Description | Default |
|---|---|---|---|
| `--provider` | `-p` | LLM provider to use | `openai` |
| `--model` | `-m` | Model name | Provider default |
| `--system` | `-s` | System prompt | None |
| `--temperature` | `-t` | Sampling temperature (0.0-2.0) | 0.7 |
| `--max-tokens` | | Maximum tokens in response | Model default |
| `--stream` | | Stream output in real time | `false` |
| `--json` | `-j` | Output as JSON | `false` |
| `--raw` | `-r` | Output only the response text | `false` |
| `--file` | `-f` | Read prompt from file | None |
| `--image` | `-i` | Include image URL or path | None |
| `--interactive` | | Start interactive chat session | `false` |
| `--timeout` | | Request timeout | `60s` |
| `--verbose` | `-v` | Show request/response details | `false` |
```shell
# Basic chat
iris chat "Explain recursion in simple terms"

# With specific model
iris chat -p anthropic -m claude-3-opus "Write a sonnet"

# With system prompt
iris chat -s "You are a pirate" "What's the weather like?"

# Lower temperature for deterministic output
iris chat -t 0.1 "List the planets in order"

# Streaming for long responses
iris chat --stream "Write a short story about a robot"

# JSON output for scripting
iris chat --json "Summarize this text" | jq '.output'

# Read prompt from file
iris chat -f prompt.txt

# With image (vision models)
iris chat -i "https://example.com/photo.jpg" "Describe this image"
iris chat -i ./local-image.png "What's in this picture?"

# Multiple images
iris chat -i image1.jpg -i image2.jpg "Compare these images"

# Verbose mode for debugging
iris chat -v "Hello" 2>&1 | head -20
```

Manage encrypted API keys in the local keystore.
```shell
iris keys <subcommand> [args]
```

| Subcommand | Description |
|---|---|
| `set <provider>` | Store a new API key |
| `get <provider>` | Retrieve a stored key (masked) |
| `list` | List all stored providers |
| `remove <provider>` | Delete a stored key |
| `test <provider>` | Test if the key is valid |
| `export` | Export keys (for backup) |
| `import` | Import keys from backup |
```shell
# Store API keys (prompts for value)
iris keys set openai
iris keys set anthropic
iris keys set gemini

# List stored keys
iris keys list
# openai
# anthropic
# gemini

# Test a key works
iris keys test openai
# ✓ OpenAI key is valid (organization: org-xxx)
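To validate every stored key in one pass, `keys list` and `keys test` can be combined in a loop. A sketch, assuming `iris keys list` prints one provider name per line as in the example above:

```shell
# Test each stored key and report failures
for provider in $(iris keys list); do
  if iris keys test "$provider" > /dev/null 2>&1; then
    echo "$provider: ok"
  else
    echo "$provider: FAILED"
  fi
done
```

Running this before a batch job lets missing or revoked keys fail fast.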
# Remove a key
iris keys remove gemini

# Export for backup (encrypted)
iris keys export > keys-backup.enc

# Import from backup
iris keys import < keys-backup.enc
```

List and inspect available models.
```shell
iris models <subcommand> [args]
```

| Subcommand | Description |
|---|---|
| `list [provider]` | List available models |
| `info <model>` | Show model details |
| `default [provider]` | Show or set default model |
```shell
# List all models for a provider
iris models list openai
# gpt-4o
# gpt-4o-mini
# gpt-4-turbo
# gpt-3.5-turbo
# text-embedding-3-small
# text-embedding-3-large
# ...

# List models for all providers
iris models list

# Get detailed info about a model
iris models info gpt-4o
# Name: gpt-4o
# Provider: openai
# Context: 128000 tokens
# Max Output: 16384 tokens
# Features: chat, streaming, tools, vision, json_mode
# Pricing: $5.00/1M input, $15.00/1M output

# Show default model
iris models default openai
# gpt-4o

# Set default model
iris models default openai gpt-4o-mini
```

Generate embeddings for text.
```shell
iris embed [flags] <text>
```

| Flag | Short | Description | Default |
|---|---|---|---|
| `--provider` | `-p` | Embedding provider | `openai` |
| `--model` | `-m` | Embedding model | Provider default |
| `--dimensions` | `-d` | Output dimensions | Model default |
| `--file` | `-f` | Read texts from file (one per line) | None |
| `--output` | `-o` | Output file (JSON) | stdout |
```shell
# Generate embedding for text
iris embed "The quick brown fox"

# With specific model
iris embed -m text-embedding-3-large "Hello world"

# Reduce dimensions
iris embed -d 512 "Search query"

# Batch from file
iris embed -f documents.txt -o embeddings.json
```
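Once you have embeddings, comparing them is a common next step. A rough cosine-similarity helper can be built with `jq`; this is a sketch, and the `cosine` function (and its reliance on the `.embedding` output field) is illustrative, not part of the CLI:

```shell
# cosine VEC_A VEC_B: cosine similarity of two JSON number arrays
# (requires jq; vectors must have the same length)
cosine() {
  jq -n --argjson a "$1" --argjson b "$2" '
    ([range($a|length)] | map($a[.] * $b[.]) | add) /
    ((($a|map(.*.)|add)|sqrt) * (($b|map(.*.)|add)|sqrt))'
}

# Usage (illustrative):
#   cosine "$(iris embed "king"  | jq '.embedding')" \
#          "$(iris embed "queen" | jq '.embedding')"
```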
```shell
# Pipe to other tools
iris embed "query text" | jq '.embedding[:5]'
```

Scaffold new Iris projects.
```shell
iris init [flags] <project-name>
```

| Flag | Description | Default |
|---|---|---|
| `--template` | Project template | `basic` |
| `--provider` | Default provider | `openai` |
| `--no-git` | Skip git initialization | `false` |

| Template | Description |
|---|---|
| `basic` | Simple chat application |
| `agent` | Tool-using agent scaffold |
| `rag` | RAG pipeline with embeddings |
| `api` | HTTP API server |
```shell
# Create basic project
iris init myproject

# Create agent project
iris init --template agent my-agent

# Create RAG project with Anthropic
iris init --template rag --provider anthropic my-rag-app

# Skip git
iris init --no-git quick-test
```

Manage CLI configuration.
```shell
iris config <subcommand> [args]
```

| Subcommand | Description |
|---|---|
| `show` | Display current configuration |
| `set <key> <value>` | Set a configuration value |
| `get <key>` | Get a configuration value |
| `edit` | Open config file in editor |
| `path` | Show config file path |
```shell
# Show all config
iris config show

# Set default provider
iris config set default_provider anthropic

# Set default model
iris config set default_model gpt-4o-mini

# Get a value
iris config get default_provider
# anthropic

# Edit config file
iris config edit

# Show config path
iris config path
# /Users/username/.iris/config.yaml
```

Display version and build information.
```shell
iris version [flags]
```

| Flag | Description |
|---|---|
| `--json` | Output as JSON |
| `--short` | Version number only |
```shell
iris version
# iris v0.11.0 (go1.22.0, darwin/arm64)
# Built: 2024-01-15T10:30:00Z
# Commit: abc123def

iris version --short
# v0.11.0

iris version --json
# {"version":"v0.11.0","go":"1.22.0","os":"darwin","arch":"arm64",...}
```

The CLI reads configuration from `~/.iris/config.yaml`:
```yaml
# Default settings
default_provider: openai
default_model: gpt-4o

# Provider configurations
providers:
  openai:
    api_key_env: OPENAI_API_KEY
    default_model: gpt-4o
    organization: org-xxx  # Optional

  anthropic:
    api_key_env: ANTHROPIC_API_KEY
    default_model: claude-3-5-sonnet-20241022

  gemini:
    api_key_env: GEMINI_API_KEY
    default_model: gemini-1.5-pro

  ollama:
    base_url: http://localhost:11434
    default_model: llama3.2

# Chat defaults
chat:
  temperature: 0.7
  max_tokens: 4096
  stream: false

# Output preferences
output:
  color: true
  format: text  # text, json, markdown

# Telemetry
telemetry:
  enabled: false
  endpoint: ""
```

The CLI also honors these environment variables:

| Variable | Description |
|---|---|
| `IRIS_KEYSTORE_KEY` | Master password for encrypted keystore |
| `IRIS_CONFIG` | Custom config file path |
| `IRIS_DEFAULT_PROVIDER` | Override default provider |
| `IRIS_DEFAULT_MODEL` | Override default model |
| `OPENAI_API_KEY` | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `GEMINI_API_KEY` | Google Gemini API key |
| `XAI_API_KEY` | xAI (Grok) API key |
| `PERPLEXITY_API_KEY` | Perplexity API key |
| `HUGGINGFACE_API_KEY` | Hugging Face API key |
| `VOYAGEAI_API_KEY` | Voyage AI API key |
Configuration values are resolved in this order (highest priority first):
1. Command-line flags
2. Environment variables (`IRIS_DEFAULT_PROVIDER`, `IRIS_DEFAULT_MODEL`, ...)
3. Config file (`~/.iris/config.yaml`)

Start an interactive chat session for multi-turn conversations:
```shell
iris chat --interactive
```

| Command | Description |
|---|---|
| `/help` | Show available commands |
| `/clear` | Clear conversation history |
| `/system <prompt>` | Set system prompt |
| `/model <name>` | Switch model |
| `/provider <name>` | Switch provider |
| `/temperature <value>` | Set temperature |
| `/save <file>` | Save conversation to file |
| `/load <file>` | Load conversation from file |
| `/export` | Export as JSON |
| `/tokens` | Show token usage |
| `/quit` or `/exit` | Exit interactive mode |
```shell
$ iris chat --interactive
Iris Interactive Mode (gpt-4o)
Type /help for commands, /quit to exit

You: Hello! What can you help me with?
Assistant: Hello! I'm Iris, your AI assistant. I can help you with:
- Answering questions and explaining concepts
- Writing and reviewing code
- Analyzing data and documents
- Creative writing and brainstorming

What would you like to explore?

You: /model gpt-4o-mini
Switched to model: gpt-4o-mini

You: What's 2+2?
Assistant: 2 + 2 = 4

You: /tokens
Session tokens: 127 (prompt: 89, completion: 38)

You: /save chat-history.json
Conversation saved to chat-history.json

You: /quit
Goodbye!
```

The CLI is designed for integration into shell scripts and CI/CD pipelines.
```shell
# Get structured output
result=$(iris chat --json "List 3 programming languages")
echo "$result" | jq '.output'

# Extract specific fields
tokens=$(iris chat --json "Hello" | jq '.usage.total_tokens')
echo "Used $tokens tokens"

# Pipe file content
cat README.md | iris chat "Summarize this document"

# Pipe command output
git diff | iris chat "Explain these changes"

# Chain commands
curl -s https://api.example.com/data | iris chat "Analyze this JSON"

# Process multiple files
for file in docs/*.md; do
  echo "Processing $file..."
  iris chat -f "$file" --raw "Summarize this document" > "summaries/$(basename $file)"
done
```
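When the same prompts recur across runs, a small cache avoids paying for the same completion twice. A sketch; the `cached_chat` helper, the cache location, and the `cksum`-based key are illustrative, not CLI features:

```shell
# cached_chat PROMPT: reuse a saved response for a repeated prompt
cached_chat() {
  cache_dir="${IRIS_CACHE_DIR:-$HOME/.iris/cache}"
  mkdir -p "$cache_dir"
  # Derive a filename from the prompt text
  key=$(printf '%s' "$1" | cksum | tr ' ' '_')
  if [ ! -f "$cache_dir/$key" ]; then
    iris chat --raw "$1" > "$cache_dir/$key"
  fi
  cat "$cache_dir/$key"
}
```

Subsequent calls with an identical prompt read from disk instead of hitting the API.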
```shell
# Parallel processing with xargs
find . -name "*.go" | xargs -P 4 -I {} sh -c 'iris chat -f {} --raw "Review this code" > {}.review'
```

```shell
#!/bin/bash
set -e

# Check if command succeeds
if iris chat --json "Hello" > /dev/null 2>&1; then
  echo "API is working"
else
  echo "API connection failed"
  exit 1
fi
```
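Transient failures such as rate limits are often better handled with retries than a hard exit. A sketch of a backoff wrapper; `with_retries` is a hypothetical helper, not a CLI feature:

```shell
# with_retries CMD...: re-run CMD with exponential backoff until it
# succeeds, giving up after a fixed number of attempts
with_retries() {
  max=5 attempt=0 delay=1
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max" ]; then
      echo "with_retries: giving up after $max attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
  done
}

# Usage (illustrative):
#   with_retries iris chat --raw "Summarize this document"
```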
```shell
# Capture errors
result=$(iris chat --json "Test" 2>&1) || {
  echo "Error: $result"
  exit 1
}
```

```yaml
# GitHub Actions example
- name: Generate Release Notes
  run: |
    git log --oneline v1.0.0..HEAD | \
      iris chat --raw "Generate release notes from these commits" > RELEASE_NOTES.md
```

```yaml
# GitLab CI example
generate_docs:
  script:
    - iris chat -f src/main.go --raw "Generate API documentation" > docs/api.md
```

```shell
# Development
IRIS_DEFAULT_MODEL=gpt-4o-mini iris chat "Quick test"

# Production (use specific config)
IRIS_CONFIG=/etc/iris/prod.yaml iris chat "Generate report"

# Testing with local Ollama
iris chat -p ollama -m llama3.2 "Test local model"
```

Ensure Go bin is in your PATH:
```shell
export PATH=$PATH:$(go env GOPATH)/bin
# Add to ~/.bashrc or ~/.zshrc for persistence
```

Check your key configuration:
```shell
# Check environment
echo $OPENAI_API_KEY

# Check keystore
iris keys list

# Test the key
iris keys test openai
```

Verify the model name:
```shell
# List available models
iris models list openai

# Check model info
iris models info gpt-4o
```

Increase the timeout for long operations:
```shell
iris chat --timeout 120s "Write a long document..."
```

Add delays between requests in scripts:

```shell
for prompt in "${prompts[@]}"; do
  iris chat "$prompt"
  sleep 2  # Wait 2 seconds between requests
done
```

Enable verbose output to diagnose issues:
```shell
# Show request/response details
iris chat -v "Hello"

# Show even more detail
IRIS_DEBUG=1 iris chat "Hello"
```

Check logs for detailed error information:

```shell
# Default log location
cat ~/.iris/logs/iris.log

# Tail logs in real-time
tail -f ~/.iris/logs/iris.log
```

Streaming Guide
Advanced streaming patterns. Streaming →
Tools Guide
Build tool-augmented agents. Tools →
Providers
Configure different providers. Providers →
Examples
See complete working examples. Examples →