
Petal Flow Overview

Petal Flow is a graph runtime for building AI agent workflows. Chain LLM calls, tools, routers, and transformations into directed graphs that are inspectable, testable, and production-ready.

  • Graph-first: Model workflows as nodes and edges with explicit entry points (a minimal sketch follows this list).
  • Runtime control: Built-in runtime events, debugging hooks, and step controllers.
  • LLM-native: LLM nodes, routers, tools, and guards built for AI operations.
  • Composable: Mix built-in nodes with custom Go functions.
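
The graph-first idea can be sketched roughly as follows. This is a minimal illustration under assumed names: NewGraph, AddNode, SetEntry, and AddEdge do not appear in the examples on this page (only NewLLMNode below does), so treat the shape, not the exact calls, as the point.

// Illustrative sketch: NewGraph, AddNode, SetEntry, and AddEdge are assumed
// names, not the documented Petal Flow API.
g := petalflow.NewGraph("support-triage")

g.AddNode(classify)  // an LLM node that labels the incoming ticket
g.AddNode(escalate)  // a tool node that hands urgent tickets to a human
g.AddNode(autoReply) // an LLM node that drafts an automated response

g.SetEntry(classify)          // explicit entry point
g.AddEdge(classify, escalate) // taken when the label is "urgent"
g.AddEdge(classify, autoReply) // taken otherwise

Because routing lives in the graph structure rather than in application code, the workflow stays inspectable and testable as it grows.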

Typical use cases include:

  • Customer support routers that triage and escalate tickets (a router sketch follows this list)
  • RAG workflows that retrieve, synthesize, and cite sources
  • Data enrichment pipelines with validation and transformation stages
  • Human-in-the-loop review systems with approval gates
  • Observable agent workflows with event-driven monitoring
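
The Composable point above means a triage step like the first use case can be an ordinary Go function wrapped in a node. NewRouterNode and the state-map signature below are assumptions made for illustration, not the documented Petal Flow API:

// Hypothetical router node: NewRouterNode and its selector signature are
// assumed for illustration.
triage := petalflow.NewRouterNode("triage", func(state map[string]any) string {
    if priority, _ := state["priority"].(string); priority == "urgent" {
        return "escalate" // branch to a human-in-the-loop approval gate
    }
    return "auto_reply" // branch to an automated LLM response
})

Because the branch decision is plain Go, it can be unit-tested without calling a model.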

Petal Flow and Iris are designed to work together. Use Iris providers inside Petal Flow nodes to power LLM calls while keeping routing and orchestration inside a graph.

// Use Iris providers inside Petal Flow nodes
llmNode := petalflow.NewLLMNode("summarize", llmConfig{
    Provider: openai.NewFromKeystore(),
    Model:    "gpt-4o",
    Prompt:   "Summarize the following: {{.input}}",
})

This keeps your AI stack modular and observable. Petal Flow handles the workflow orchestration while Iris handles the LLM interactions.
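
To make that concrete, running the node above might look roughly like this. The graph wiring, the OnEvent callback, the Event fields, and the Run signature are assumptions for illustration; only NewLLMNode comes from the example itself.

// Hypothetical execution sketch: NewGraph, AddNode, SetEntry, OnEvent, Event,
// and Run are assumed names, not the documented Petal Flow API.
g := petalflow.NewGraph("summarizer")
g.AddNode(llmNode) // the Iris-backed LLM node defined above
g.SetEntry(llmNode)

g.OnEvent(func(ev petalflow.Event) {
    log.Printf("node=%s status=%s", ev.Node, ev.Status) // step-level monitoring hook
})

out, err := g.Run(context.Background(), map[string]any{
    "input": "The text to summarize goes here.",
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(out["summarize"]) // summary produced via the Iris provider

Swapping the provider or model is a change to the node configuration; the graph, its routing, and its events stay the same.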