Iris is a production-grade Go SDK and CLI for building AI-powered applications and agent workflows. It provides a unified interface for working with multiple LLM providers, enabling teams to ship consistent, reliable AI features without rewriting provider-specific code.
```go
// One API for all providers
client := core.NewClient(openai.New(apiKey))
resp, _ := client.Chat("gpt-4o").
    System("You are a helpful assistant.").
    User("Explain quantum computing in simple terms.").
    GetResponse(ctx)
```

Modern AI applications need to work with multiple LLM providers, whether for cost optimization, feature access, latency requirements, or redundancy. Iris solves the complexity of multi-provider integration with a unified, type-safe API that handles the differences behind the scenes.
Unified Provider Interface
One fluent API across OpenAI, Anthropic, Gemini, xAI, Z.ai, Perplexity, Ollama, Voyage AI, and Hugging Face. Switch providers with a single line change.
Streaming-First Design
Built-in streaming responses with well-defined channels, helpers for aggregation, and support for SSE-style real-time output.
Tools & Function Calling
Native tool calling support with automatic schema generation, structured outputs, and seamless integration with reasoning models like GPT-5 and Claude.
Secure by Default
Encrypted keystore with AES-256-GCM and Argon2id key derivation (sketched after this feature list). API keys never appear in logs or stack traces.
Multimodal Support
Process images, PDFs, and files alongside text. Vision models work consistently across providers.
Embeddings & RAG
Generate embeddings for semantic search and RAG pipelines. Batch processing for efficiency.
Telemetry Hooks
Instrument requests with custom telemetry for observability, cost tracking, and debugging.
Retry Policies
Configurable retry logic with exponential backoff for transient failures, plus circuit breaker patterns.
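To make the keystore claim above concrete, here is a minimal sketch of the described scheme: a secret sealed with AES-256-GCM under an Argon2id-derived key. This is generic Go using the standard library and `golang.org/x/crypto/argon2`, not Iris's keystore code, and the Argon2id parameters are illustrative assumptions rather than Iris's actual configuration.

```go
package keystore

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"

    "golang.org/x/crypto/argon2"
)

// sealSecret encrypts plaintext with AES-256-GCM under a key derived
// from the passphrase via Argon2id. Parameter choices are illustrative.
func sealSecret(passphrase, plaintext []byte) (salt, nonce, ciphertext []byte, err error) {
    salt = make([]byte, 16)
    if _, err = rand.Read(salt); err != nil {
        return
    }
    // Argon2id: time=1, memory=64 MiB, threads=4, 32-byte key for AES-256.
    key := argon2.IDKey(passphrase, salt, 1, 64*1024, 4, 32)

    block, err := aes.NewCipher(key)
    if err != nil {
        return
    }
    gcm, err := cipher.NewGCM(block)
    if err != nil {
        return
    }
    nonce = make([]byte, gcm.NonceSize())
    if _, err = rand.Read(nonce); err != nil {
        return
    }
    // The GCM tag authenticates the ciphertext, so tampering is detected
    // on decryption.
    ciphertext = gcm.Seal(nil, nonce, plaintext, nil)
    return
}
```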
Iris is built on principles that make it suitable for production deployments:
All provider interactions use explicit Go structs and interfaces, with no hidden map[string]any conversions or runtime reflection. Type errors surface at compile time rather than in production.
Provider-specific logic lives entirely within providers/<name> packages. The core package never
imports providers directly, and your application code can depend only on the core interface:
```go
// Your code depends on the interface, not the implementation
func ProcessQuery(ctx context.Context, p core.Provider, query string) (*core.ChatResponse, error) {
    client := core.NewClient(p)
    return client.Chat(p.DefaultModel()).User(query).GetResponse(ctx)
}
```

Streaming is not an afterthought; it is the primary mode of operation. Every provider must implement streaming if the underlying API supports it, and the ChatStream type provides consistent access to the response stream: well-defined channels, aggregation helpers, and SSE-style real-time output.
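As a hedged sketch of what consuming a stream could look like (the `GetStream`, `Deltas`, and `Text` names are assumptions for illustration, not confirmed Iris API; see the Streaming guide for the real calls):

```go
// Hypothetical sketch: GetStream, Deltas, and Text are assumed names.
stream, _ := client.Chat("gpt-4o").
    User("Summarize this document.").
    GetStream(ctx)
for delta := range stream.Deltas() { // channel of incremental output
    fmt.Print(delta.Text)
}
```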
The core.Client is safe for concurrent use across goroutines. Builders are per-request and should
not be shared. This allows connection pooling and high-throughput patterns:
```go
client := core.NewClient(provider) // Create once, use everywhere

// Safe to call from multiple goroutines
go func() { client.Chat("gpt-4o").User("Query 1").GetResponse(ctx) }()
go func() { client.Chat("gpt-4o").User("Query 2").GetResponse(ctx) }()
```

Iris supports a comprehensive range of LLM providers:
| Provider | Chat | Streaming | Tools | Vision | Embeddings | Reasoning |
|---|---|---|---|---|---|---|
| OpenAI | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Anthropic | ✓ | ✓ | ✓ | ✓ | - | ✓ |
| Gemini | ✓ | ✓ | ✓ | ✓ | ✓ | - |
| xAI (Grok) | ✓ | ✓ | ✓ | ✓ | - | - |
| Z.ai | ✓ | ✓ | - | - | - | - |
| Perplexity | ✓ | ✓ | - | - | - | - |
| Ollama | ✓ | ✓ | ✓ | ✓ | ✓ | - |
| Voyage AI | - | - | - | - | ✓ | - |
| Hugging Face | ✓ | ✓ | - | - | ✓ | - |
See the Providers section for detailed setup instructions and model-specific features.
Build chat interfaces that automatically fall back between providers for reliability:
```go
// Try OpenAI first, fall back to Anthropic on failure
func askWithFallback(ctx context.Context, query string) (*core.ChatResponse, error) {
    providers := []core.Provider{
        openai.New(openaiKey),
        anthropic.New(anthropicKey),
    }
    for _, p := range providers {
        client := core.NewClient(p)
        resp, err := client.Chat(p.DefaultModel()).User(query).GetResponse(ctx)
        if err == nil {
            return resp, nil
        }
        log.Printf("Provider %s failed: %v, trying next", p.ID(), err)
    }
    return nil, errors.New("all providers failed")
}
```

Combine embeddings and chat completions for retrieval-augmented generation:
```go
// Generate embedding for the query
embedClient := core.NewClient(voyageai.New(voyageKey))
embedding, _ := embedClient.Embed("voyage-3").Text(query).GetEmbedding(ctx)

// Search vector store (Qdrant, pgvector, etc.)
docs := vectorStore.Search(embedding, 5)

// Generate response with context
chatClient := core.NewClient(openai.New(openaiKey))
resp, _ := chatClient.Chat("gpt-4o").
    System("Answer based on the provided context.").
    User(fmt.Sprintf("Context:\n%s\n\nQuestion: %s", docs, query)).
    GetResponse(ctx)
```
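The example above embeds a single query; for indexing many document chunks, the Embeddings card mentions batch processing. A hedged sketch of what that might look like, where `Texts` and `GetEmbeddings` are assumed names for illustration rather than confirmed Iris API:

```go
// Hypothetical batch form: Texts and GetEmbeddings are assumed names.
// chunks is a []string of document chunks; batching amortizes
// per-request overhead when building an index.
embeddings, _ := embedClient.Embed("voyage-3").
    Texts(chunks...).
    GetEmbeddings(ctx)
```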
Create agents that can call external functions and APIs:

```go
tools := []core.Tool{
    {
        Name:        "search_database",
        Description: "Search the product database",
        Parameters: core.ToolParameters{
            Type: "object",
            Properties: map[string]core.Property{
                "query": {Type: "string", Description: "Search query"},
                "limit": {Type: "integer", Description: "Max results"},
            },
            Required: []string{"query"},
        },
    },
}

resp, _ := client.Chat("gpt-4o").
    System("You are a shopping assistant.").
    User("Find me wireless headphones under $100").
    Tools(tools...).
    GetResponse(ctx)

// Handle tool calls
for _, tc := range resp.ToolCalls {
    result := executeToolCall(tc)
    // Continue conversation with tool result
}
```
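The loop above elides the round trip back to the model. A hedged sketch of that continuation follows; `ToolResult` is an assumed builder method and `tc.ID` / `followup.Text` are assumed field names, shown only to illustrate the flow, not confirmed Iris API:

```go
// Hypothetical continuation (inside the loop above): send each tool's
// output back so the model can compose a final answer.
followup, _ := client.Chat("gpt-4o").
    System("You are a shopping assistant.").
    ToolResult(tc.ID, result). // assumed method and field names
    GetResponse(ctx)
fmt.Println(followup.Text)
```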
Process images and documents alongside text:

```go
resp, _ := client.Chat("gpt-4o").
    System("Analyze images and provide detailed descriptions.").
    UserMultimodal().
    Text("What's in this image?").
    ImageURL("https://example.com/photo.jpg").
    Done().
    GetResponse(ctx)
```

Use the Iris CLI for rapid prototyping and prompt exploration:
```bash
# Quick chat
iris chat "Explain the CAP theorem"

# With specific model
iris chat --model claude-3-opus "Write a haiku about Go"

# Streaming output
iris chat --stream "Generate a short story"

# With system prompt
iris chat --system "You are a Go expert" "How do I handle errors idiomatically?"
```

```mermaid
flowchart TB
    subgraph Application["Your Application"]
        Code[Go Code]
        CLI[Iris CLI]
    end

    subgraph Core["iris/core"]
        Client[core.Client]
        Builder[ChatBuilder]
        Stream[ChatStream]
        Tools[Tool Registry]
    end

    subgraph Middleware["Middleware Layer"]
        Telemetry[Telemetry Hooks]
        Retry[Retry Policy]
        Secrets[Secret Management]
    end

    subgraph Providers["iris/providers/*"]
        OpenAI[openai]
        Anthropic[anthropic]
        Gemini[gemini]
        XAI[xai]
        Ollama[ollama]
        Others[...]
    end

    subgraph External["External Services"]
        APIs[(LLM APIs)]
        Keystore[(Encrypted Keystore)]
    end

    Code --> Client
    CLI --> Client
    Client --> Builder
    Builder --> Stream
    Client --> Tools

    Client --> Telemetry
    Client --> Retry
    Client --> Secrets

    Telemetry --> OpenAI
    Telemetry --> Anthropic
    Telemetry --> Gemini
    Telemetry --> XAI
    Telemetry --> Ollama
    Telemetry --> Others

    OpenAI --> APIs
    Anthropic --> APIs
    Gemini --> APIs
    XAI --> APIs
    Ollama --> APIs
    Others --> APIs

    Secrets --> Keystore
```

Requests flow from your code or the CLI into a core.Client constructed with a chosen provider, through the middleware layer (telemetry, retry, secret management), and out to the provider packages that call the external APIs.

Iris integrates seamlessly with PetalFlow, the graph-based workflow engine:
```go
// Use Iris as the LLM backend for PetalFlow nodes
provider := openai.New(os.Getenv("OPENAI_API_KEY"))
adapter := irisadapter.NewProviderAdapter(provider)

// Create LLM nodes powered by Iris
classifyNode := petalflow.NewLLMRouter("classify", adapter, petalflow.LLMRouterConfig{
    Model:    "gpt-4o-mini",
    InputKey: "message",
    Categories: []petalflow.Category{
        {Name: "billing", Description: "Billing and payment issues"},
        {Name: "technical", Description: "Technical support"},
    },
})
```

This combination pairs Iris's multi-provider abstraction with PetalFlow's graph-based orchestration.
Getting Started
Concepts
Providers
Guides
API Reference
Examples