Build, test, debug and deploy on your own infrastructure. Beta now available!
Design agent graphs, simulate judge-reviewed runs, and get cost and latency estimates. Mix OpenAI and self-hosted models (e.g., Ollama). Export an agent-compose JSON and run it anywhere.
*UI latency for local preview, not model inference time.
{
  "agents": [
    {
      "id": "planner",
      "name": "Planner",
      "model": "openai:gpt-4.1",
      "instruction": "Break the user goal into validated steps.",
      "outputs": ["plan"]
    },
    {
      "id": "executor",
      "name": "Executor",
      "model": "ollama:qwen2.5",
      "instruction": "Execute each step using tools and return structured JSON.",
      "outputs": ["result"]
    },
    {
      "id": "judge",
      "name": "Judge",
      "model": "openai:gpt-4o-mini",
      "instruction": "Evaluate result against plan. If failing, request one regeneration (maxTurns=5).",
      "outputs": ["verdict"]
    }
  ],
  "edges": [
    { "from": "planner.plan", "to": "executor" },
    { "from": "executor.result", "to": "judge" }
  ],
  "userFacing": "executor",
  "policies": { "judgeMaxTurns": 5, "retryOnFailure": true }
}
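As a rough sketch of what "run it anywhere" can mean, the snippet below types the agent-compose export shown above and derives the execution order from its edges via a topological sort. The `AgentCompose` interfaces are inferred from the JSON sample; the `executionOrder` helper is illustrative, not the product's actual runner API.

```typescript
// Types inferred from the agent-compose JSON export above (assumption,
// not an official schema).
interface Agent {
  id: string;
  name: string;
  model: string;
  instruction: string;
  outputs: string[];
}

interface Edge { from: string; to: string }

interface AgentCompose {
  agents: Agent[];
  edges: Edge[];
  userFacing: string;
  policies: { judgeMaxTurns: number; retryOnFailure: boolean };
}

// The sample config from the export above, inlined for a self-contained demo.
const compose: AgentCompose = {
  agents: [
    { id: "planner", name: "Planner", model: "openai:gpt-4.1",
      instruction: "Break the user goal into validated steps.", outputs: ["plan"] },
    { id: "executor", name: "Executor", model: "ollama:qwen2.5",
      instruction: "Execute each step using tools and return structured JSON.", outputs: ["result"] },
    { id: "judge", name: "Judge", model: "openai:gpt-4o-mini",
      instruction: "Evaluate result against plan. If failing, request one regeneration (maxTurns=5).", outputs: ["verdict"] },
  ],
  edges: [
    { from: "planner.plan", to: "executor" },
    { from: "executor.result", to: "judge" },
  ],
  userFacing: "executor",
  policies: { judgeMaxTurns: 5, retryOnFailure: true },
};

// Topologically sort agents so upstream outputs ("planner.plan") are
// produced before the downstream agents that consume them.
function executionOrder(c: AgentCompose): string[] {
  const indegree = new Map(c.agents.map((a) => [a.id, 0]));
  const downstream = new Map<string, string[]>();
  for (const e of c.edges) {
    const src = e.from.split(".")[0]; // "planner.plan" -> "planner"
    downstream.set(src, [...(downstream.get(src) ?? []), e.to]);
    indegree.set(e.to, (indegree.get(e.to) ?? 0) + 1);
  }
  const ready = [...indegree].filter(([, d]) => d === 0).map(([id]) => id);
  const order: string[] = [];
  while (ready.length > 0) {
    const id = ready.shift()!;
    order.push(id);
    for (const next of downstream.get(id) ?? []) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) ready.push(next);
    }
  }
  if (order.length !== c.agents.length) throw new Error("cycle in agent graph");
  return order;
}

console.log(executionOrder(compose)); // ["planner", "executor", "judge"]
```

Because edges name a specific output (`planner.plan`) rather than just an agent, a runner can also check that each referenced output actually appears in the source agent's `outputs` list before executing anything.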
Export as JSON · Run locally via npm or use our managed cloud