Iris Examples

These examples demonstrate production-ready patterns for building AI-powered applications with Iris. Each example includes complete, runnable code with proper error handling, multiple provider options, and real-world use cases.

Unlike toy demos, these examples show:

  • Complete implementations — not snippets, but full working code
  • Production patterns — error handling, retries, timeouts, and graceful degradation
  • Provider flexibility — each example works with multiple providers
  • Real use cases — patterns extracted from production applications

The examples cover four areas:

  • Retrieval-augmented generation — Build intelligent search and retrieval systems that combine embeddings, vector databases, and language models.
  • Agents and tools — Create autonomous agents that can reason, plan, and execute multi-step workflows using external tools.
  • Multimodal — Process images, documents, and mixed media alongside text for richer AI interactions.
  • Streaming — Build responsive applications with streaming responses and real-time processing.

Each example is a complete Go program. To run any example:

# Clone the examples repository
git clone https://github.com/petal-labs/iris-examples
cd iris-examples
# Set up your API keys
iris keys set openai
iris keys set anthropic
iris keys set voyageai
# Run an example
go run rag-pipeline/main.go

Different examples work best with different providers:

| Example             | Recommended Provider            | Alternatives                                |
| ------------------- | ------------------------------- | ------------------------------------------- |
| RAG Pipeline        | OpenAI + Voyage AI              | Anthropic + Gemini                          |
| Batch Embeddings    | Voyage AI                       | OpenAI, Gemini                              |
| Agent With Tools    | Claude claude-sonnet-4-20250514 | GPT-4o, Gemini Pro                          |
| Multimodal QA       | GPT-4o                          | Claude claude-sonnet-4-20250514, Gemini Pro |
| Streaming Summaries | GPT-4o                          | Claude, Gemini                              |

Each example follows a consistent structure:

example-name/
├── main.go # Entry point with CLI flags
├── config.go # Configuration and environment setup
├── types.go # Domain types and interfaces
├── tools.go # Tool definitions (if applicable)
└── README.md # Usage instructions and requirements
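The main.go entry point parses CLI flags and dispatches to the example's logic. A minimal sketch of that flag handling; the flag names (-model, -prompt) and defaults here are illustrative, not a fixed part of the examples' interface:

```go
package main

import (
	"flag"
	"fmt"
)

// parseFlags mirrors the kind of CLI-flag handling each example's main.go
// performs. Using a FlagSet keeps it testable without touching os.Args.
func parseFlags(args []string) (model, prompt string, err error) {
	fs := flag.NewFlagSet("example", flag.ContinueOnError)
	fs.StringVar(&model, "model", "gpt-4o", "model ID to use")
	fs.StringVar(&prompt, "prompt", "", "prompt to send")
	err = fs.Parse(args)
	return model, prompt, err
}

func main() {
	model, prompt, err := parseFlags([]string{"-prompt", "Summarize this document"})
	if err != nil {
		panic(err)
	}
	fmt.Println(model, "|", prompt)
}
```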

All examples use the keystore pattern for secure API key management:

// Primary provider from keystore
provider, err := openai.NewFromKeystore()
if err != nil {
    // Fall back to environment variable
    provider, err = openai.NewFromEnv()
    if err != nil {
        log.Fatal("No API key found. Run: iris keys set openai")
    }
}

Examples demonstrate proper error handling with typed errors:

const maxRetries = 3

// The continue statements below require an enclosing retry loop.
for attempt := 0; attempt < maxRetries; attempt++ {
    resp, err := client.Chat(model).User(prompt).GetResponse(ctx)
    if err != nil {
        var apiErr *core.APIError
        if errors.As(err, &apiErr) {
            switch apiErr.StatusCode {
            case 429:
                // Handle rate limiting with exponential backoff
                time.Sleep(calculateBackoff(attempt))
                continue
            case 500, 503:
                // Retry on server errors
                continue
            default:
                return fmt.Errorf("API error: %w", err)
            }
        }
        return err
    }
    // Success: use resp here
    _ = resp
    return nil
}
return fmt.Errorf("giving up after %d attempts", maxRetries)

Examples use context for timeouts and cancellation:

// Create context with timeout
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Pass context to all operations
resp, err := client.Chat(model).User(prompt).GetResponse(ctx)

Examples show how to configure retry behavior:

client := core.NewClient(provider,
    core.WithRetryPolicy(&core.RetryPolicy{
        MaxRetries:        3,
        InitialInterval:   1 * time.Second,
        MaxInterval:       30 * time.Second,
        BackoffMultiplier: 2.0,
        RetryOn:           []int{429, 500, 503},
    }),
)