# LLM Nodes With Iris

PetalFlow integrates with Iris to power LLM operations. The irisadapter package bridges Iris providers into PetalFlow nodes, giving you access to all Iris features while keeping orchestration logic in your graphs.
## Provider Setup

### Installing the Adapter

```sh
go get github.com/petal-labs/petalflow/irisadapter
```
### Creating a Provider Adapter

The adapter wraps any Iris provider for use in PetalFlow nodes:

**OpenAI**

```go
import (
	"github.com/petal-labs/iris/providers/openai"
	"github.com/petal-labs/petalflow/irisadapter"
)

provider := openai.New(os.Getenv("OPENAI_API_KEY"))
client := irisadapter.NewProviderAdapter(provider)
```

**Anthropic**

```go
import (
	"github.com/petal-labs/iris/providers/anthropic"
	"github.com/petal-labs/petalflow/irisadapter"
)

provider := anthropic.New(os.Getenv("ANTHROPIC_API_KEY"))
client := irisadapter.NewProviderAdapter(provider)
```

**Ollama**

```go
import (
	"github.com/petal-labs/iris/providers/ollama"
	"github.com/petal-labs/petalflow/irisadapter"
)

provider := ollama.New(ollama.WithBaseURL("http://localhost:11434"))
client := irisadapter.NewProviderAdapter(provider)
```
## Basic LLM Node

Create an LLM node with a provider adapter:

```go
llmNode := petalflow.NewLLMNode("chat", client, petalflow.LLMNodeConfig{
	Model:          "gpt-4o-mini",
	System:         "You are a helpful assistant.",
	PromptTemplate: "{{.question}}",
	OutputKey:      "answer",
	Timeout:        30 * time.Second,
})
```
## Configuration Options

| Field | Type | Description |
|---|---|---|
| Model | string | Model identifier (provider-specific) |
| System | string | System message for the conversation |
| PromptTemplate | string | Go template for user prompt |
| InputVars | []string | Envelope variables to include in prompt |
| OutputKey | string | Envelope key for storing response |
| JSONSchema | map[string]any | JSON Schema for structured output |
| Temperature | *float64 | Sampling temperature (0.0-2.0) |
| MaxTokens | *int | Maximum response tokens |
| Timeout | time.Duration | Maximum time to wait for response |
| RetryPolicy | core.RetryPolicy | Retry behavior for transient failures |
| Budget | *core.Budget | Resource limits for the LLM call |
| RecordMessages | bool | Append conversation to envelope.Messages |
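Temperature and MaxTokens are pointer-typed so that "unset" can be distinguished from an explicit zero. A small generic helper makes literal values convenient to pass; `ptr` is our illustrative helper, not part of PetalFlow:

```go
package main

import "fmt"

// ptr returns a pointer to v, handy for optional pointer-typed
// config fields such as Temperature (*float64) and MaxTokens (*int).
func ptr[T any](v T) *T { return &v }

func main() {
	temperature := ptr(0.7) // *float64
	maxTokens := ptr(512)   // *int
	fmt.Println(*temperature, *maxTokens)
}
```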
## Prompt Templates

Templates use Go’s text/template syntax with direct access to envelope variables:
### Accessing Variables

```go
config := petalflow.LLMNodeConfig{
	PromptTemplate: `Analyze the following customer message:

Customer: {{.customer_name}}
Message: {{.message}}
Previous interactions: {{.interaction_count}}

Provide a sentiment analysis and suggested response.`,
}
```
### Conditional Content

```go
config := petalflow.LLMNodeConfig{
	PromptTemplate: `{{if .context}}Context: {{.context}}
{{end}}Question: {{.question}}
{{if .format_instructions}}{{.format_instructions}}{{end}}`,
}
```
### Iterating Over Lists

```go
config := petalflow.LLMNodeConfig{
	PromptTemplate: `Based on these search results:

{{range $i, $doc := .documents}}[{{$i}}] {{$doc.title}}
{{$doc.content}}
{{end}}
Answer the question: {{.query}}`,
}
```
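These templates behave exactly like any Go text/template executed against a map of envelope variables. A standalone sketch (standard library only, no PetalFlow; `renderPrompt` is our illustrative helper) shows how a conditional section drops out when a variable is absent:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderPrompt executes a PromptTemplate-style template against a
// map of variables, as Go's text/template does.
func renderPrompt(tmplText string, vars map[string]any) (string, error) {
	tmpl, err := template.New("prompt").Parse(tmplText)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, vars); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// {{if .context}} renders nothing when "context" is missing.
	const tmpl = `{{if .context}}Context: {{.context}}
{{end}}Question: {{.question}}`

	out, _ := renderPrompt(tmpl, map[string]any{"question": "What is PetalFlow?"})
	fmt.Println(out)
}
```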
### Using InputVars

If no template is provided, InputVars are concatenated with newlines:

```go
config := petalflow.LLMNodeConfig{
	InputVars: []string{"context", "question"},
	OutputKey: "answer",
}
// Prompt becomes: "{context}\n{question}"
```
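The fallback can be pictured as a newline join over the listed variables' values. A stdlib-only sketch (`joinInputVars` is our illustrative helper, not a PetalFlow API):

```go
package main

import (
	"fmt"
	"strings"
)

// joinInputVars mimics the no-template fallback: each listed
// envelope variable's value, joined with newlines, forms the prompt.
func joinInputVars(vars map[string]any, keys []string) string {
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprint(vars[k]))
	}
	return strings.Join(parts, "\n")
}

func main() {
	vars := map[string]any{
		"context":  "PetalFlow docs",
		"question": "How do LLM nodes work?",
	}
	fmt.Println(joinInputVars(vars, []string{"context", "question"}))
}
```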
## Streaming Responses

PetalFlow automatically uses streaming when the provider supports it. The adapter detects whether the underlying Iris provider implements StreamingLLMClient and, if so, takes the streaming path.

During streaming, the runtime emits EventNodeOutputDelta events for each chunk:

```go
opts := petalflow.DefaultRunOptions()
opts.EventHandler = func(e petalflow.Event) {
	switch e.Kind {
	case petalflow.EventNodeOutputDelta:
		// Real-time token output
		fmt.Print(e.Payload["delta"])
	case petalflow.EventNodeOutputFinal:
		// Complete response
		fmt.Println("\nFinal:", e.Payload["text"])
	}
}

runtime := petalflow.NewRuntime()
runtime.Run(ctx, graph, env, opts)
```
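If you want the full text without waiting for EventNodeOutputFinal, you can assemble the deltas yourself. A stdlib sketch of the per-chunk append an event handler would perform (`accumulate` is our illustrative helper):

```go
package main

import (
	"fmt"
	"strings"
)

// accumulate rebuilds the complete text from streamed chunks,
// mirroring what a handler would do per EventNodeOutputDelta.
func accumulate(deltas []string) string {
	var b strings.Builder
	for _, d := range deltas {
		b.WriteString(d)
	}
	return b.String()
}

func main() {
	deltas := []string{"Hel", "lo, ", "world!"}
	fmt.Println(accumulate(deltas))
}
```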
## Multi-Provider Setup

Use different providers for different tasks in the same graph:

```go
// Fast model for classification
classifyClient := irisadapter.NewProviderAdapter(
	openai.New(os.Getenv("OPENAI_API_KEY")),
)

// Powerful model for generation
generateClient := irisadapter.NewProviderAdapter(
	anthropic.New(os.Getenv("ANTHROPIC_API_KEY")),
)

// Local model for privacy-sensitive operations
localClient := irisadapter.NewProviderAdapter(
	ollama.New(ollama.WithBaseURL("http://localhost:11434")),
)

// Build graph with multiple providers
g := petalflow.NewGraph("multi-provider")

g.AddNode(petalflow.NewLLMNode("classify", classifyClient, petalflow.LLMNodeConfig{
	Model:          "gpt-4o-mini",
	PromptTemplate: "Classify this text: {{.input}}",
	OutputKey:      "classification",
}))

g.AddNode(petalflow.NewLLMNode("generate", generateClient, petalflow.LLMNodeConfig{
	Model:          "claude-sonnet-4-20250514",
	PromptTemplate: "Based on classification {{.classification}}, generate: ...",
	OutputKey:      "response",
}))
```
## Structured Output

### JSON Schema

Request structured JSON responses with a schema:

```go
llmNode := petalflow.NewLLMNode("extract", client, petalflow.LLMNodeConfig{
	Model:          "gpt-4o-mini",
	System:         "Extract contact information from the text.",
	PromptTemplate: "Text: {{.text}}",
	OutputKey:      "contact",
	JSONSchema: map[string]any{
		"type": "object",
		"properties": map[string]any{
			"name":  map[string]any{"type": "string"},
			"email": map[string]any{"type": "string"},
			"phone": map[string]any{"type": "string"},
		},
		"required":             []string{"name"},
		"additionalProperties": false,
	},
})
```

When JSONSchema is set, the response is automatically parsed and stored as map[string]any:

```go
// Access parsed JSON
contact, _ := result.GetVar("contact")
if c, ok := contact.(map[string]any); ok {
	name := c["name"].(string)
	email := c["email"].(string)
}
```
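Note that an unchecked type assertion like `c["email"].(string)` panics if the model omits a property that is not in `required`. A defensive stdlib sketch (`strField` is our illustrative helper, not part of PetalFlow):

```go
package main

import "fmt"

// strField reads an optional string field from a parsed JSON object,
// reporting whether it was present and of the expected type instead
// of panicking on a missing optional property.
func strField(obj map[string]any, key string) (string, bool) {
	v, ok := obj[key]
	if !ok {
		return "", false
	}
	s, ok := v.(string)
	return s, ok
}

func main() {
	// "email" omitted by the model; only "name" was required.
	contact := map[string]any{"name": "Ada"}

	name, _ := strField(contact, "name")
	email, ok := strField(contact, "email")
	fmt.Println(name, email, ok)
}
```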
## Error Handling

### Retry Policy

Configure retries directly in the LLMNodeConfig:

```go
llmNode := petalflow.NewLLMNode("generate", client, petalflow.LLMNodeConfig{
	Model:          "gpt-4o-mini",
	PromptTemplate: "{{.prompt}}",
	OutputKey:      "response",
	RetryPolicy: petalflow.RetryPolicy{
		MaxAttempts: 3,
		Backoff:     time.Second,
	},
	Timeout: 30 * time.Second,
})
```
### Budget Limits

Set resource limits to prevent runaway costs:

```go
llmNode := petalflow.NewLLMNode("generate", client, petalflow.LLMNodeConfig{
	Model:          "gpt-4o",
	PromptTemplate: "{{.prompt}}",
	OutputKey:      "response",
	Budget: &petalflow.Budget{
		MaxInputTokens:  1000,
		MaxOutputTokens: 500,
		MaxTotalTokens:  1500,
		MaxCostUSD:      0.10,
	},
})
```
### Error Handling in Graphs

Route errors to dedicated handlers:

```go
g := petalflow.NewGraph("with-error-handling")

g.AddNode(petalflow.NewLLMNode("generate", client, generateConfig))

g.AddNode(petalflow.NewRuleRouter("error_check", petalflow.RuleRouterConfig{
	Routes: []petalflow.RouteRule{
		{When: petalflow.RouteCondition{Var: "llm_error", Op: petalflow.OpNotEmpty}, To: "error_handler"},
	},
	Default: "continue",
}))

g.AddNode(petalflow.NewTransformNode("error_handler", petalflow.TransformNodeConfig{
	Transform: func(inputs map[string]any) (any, error) {
		err := inputs["llm_error"].(error)
		log.Printf("LLM error: %v", err)
		return "I apologize, but I encountered an error. Please try again.", nil
	},
	OutputKey: "response",
}))
```
## Token Usage Tracking

LLMNode automatically records token usage in the envelope:

```go
// Run the workflow
result, _ := runtime.Run(ctx, graph, env, opts)

// Access token usage (stored as {OutputKey}_usage)
usage, _ := result.GetVar("answer_usage")
if u, ok := usage.(petalflow.TokenUsage); ok {
	fmt.Printf("Tokens: %d input, %d output, $%.4f\n",
		u.InputTokens, u.OutputTokens, u.CostUSD)
}
```
## Recording Conversation History

Enable RecordMessages to append the conversation to the envelope:

```go
llmNode := petalflow.NewLLMNode("chat", client, petalflow.LLMNodeConfig{
	Model:          "gpt-4o-mini",
	System:         "You are a helpful assistant.",
	PromptTemplate: "{{.user_input}}",
	OutputKey:      "response",
	RecordMessages: true,
})

// After execution, envelope.Messages contains:
// - User message with the prompt
// - Assistant message with the response
```

This is useful for multi-turn conversations or audit trails.
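For audit trails, the recorded messages can be rendered as a simple transcript. A stdlib sketch (the Message type here only mirrors the role/content shape described above; the actual envelope message type may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// Message mirrors the role/content shape of a recorded conversation.
type Message struct {
	Role    string
	Content string
}

// formatHistory renders envelope-style messages as an audit transcript.
func formatHistory(msgs []Message) string {
	var b strings.Builder
	for _, m := range msgs {
		fmt.Fprintf(&b, "%s: %s\n", m.Role, m.Content)
	}
	return b.String()
}

func main() {
	// What one RecordMessages-enabled call appends: the user prompt,
	// then the assistant response.
	history := []Message{
		{Role: "user", Content: "What is PetalFlow?"},
		{Role: "assistant", Content: "A graph orchestration library."},
	}
	fmt.Print(formatHistory(history))
}
```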