A small planning & coordination model that acts as a conductor for your AI agents and tools.
This model is not a general chatbot. Its job is to look at the current task plus the available agents/tools and decide the next action as strict JSON.
Base model: Llama3.2.
multiagent-orchestrator always responds with a single JSON object using this schema:
{
  "action": "call_agent" | "call_tool" | "ask_user" | "finish",
  "target": "agent_or_tool_name_or_null",
  "arguments": { "any": "json" },
  "final_answer": "string or null",
  "reason": "short natural language rationale"
}
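For illustration, a completed task would be reported like this (the wording of final_answer and reason is just an example):
{
  "action": "finish",
  "target": null,
  "arguments": {},
  "final_answer": "Here is the finished result for the user...",
  "reason": "All required steps are done; the result is ready to return."
}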
You run your own runtime/loop that calls the chosen agent or tool, feeds the result back into the task state, and stops once the model returns action == "finish".
✅ Use multiagent-orchestrator when you have multiple specialized agents/tools and need a small, fast supervisor to decide what happens next.
It’s designed to be framework-agnostic (works with LangGraph, AutoGen-style setups, custom orchestrators, etc.) as long as you follow the JSON contracts.
You can embed your agent registry and task state into a single prompt.
System prompt (recommended):
You are ORCHESTRATOR, a supervisor coordinating a team of AI agents and tools.
You NEVER solve the task directly. You ONLY decide the next action.
You must ALWAYS respond with a single JSON object using this schema:
{
  "action": "call_agent" | "call_tool" | "ask_user" | "finish",
  "target": string or null,
  "arguments": object,
  "final_answer": string or null,
  "reason": string
}
- Use "call_agent" to delegate work to a specialized agent.
- Use "call_tool" for non-LLM tools (APIs, DB, filesystem, etc.).
- Use "ask_user" if required information is missing.
- Use "finish" only when the overall task is completed.
Prefer cheaper/faster agents when possible.
Never output anything that is not valid JSON.
User content example:
Available agents:
{
  "agents": [
    {
      "name": "researcher",
      "role": "Collects and summarizes information from the web and docs.",
      "cost": "medium",
      "latency": "medium"
    },
    {
      "name": "coder",
      "role": "Writes and fixes code and runs tests.",
      "cost": "high",
      "latency": "high"
    }
  ]
}
Current task state:
{
  "task": "Build a short technical blog post about multi-agent LLM systems.",
  "status": "in_progress",
  "step": 1,
  "max_steps": 8,
  "history": []
}
Decide the next action as JSON only.
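Given this registry and state, a typical first decision is to delegate research; one plausible response (the exact arguments and wording will vary from run to run) is:
{
  "action": "call_agent",
  "target": "researcher",
  "arguments": { "topic": "multi-agent LLM systems", "deliverable": "bullet-point notes for a short blog post" },
  "final_answer": null,
  "reason": "Background material is needed before drafting, and researcher is cheaper than coder."
}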
🐍 Example: using with the Ollama Python client
import json
import ollama
def call_orchestrator(agents, state):
    system = """You are ORCHESTRATOR, a supervisor coordinating a team of AI agents and tools.
You NEVER solve the task directly. You ONLY decide the next action.
Always respond with a single JSON object:
{"action": "...", "target": "...", "arguments": {...}, "final_answer": null or string, "reason": "..."}"""
    user = f"""Available agents:
{json.dumps(agents, ensure_ascii=False)}
Current task state:
{json.dumps(state, ensure_ascii=False)}
Decide the next action as JSON only.
"""
    res = ollama.chat(
        model='multiagent-orchestrator:latest',
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    action_text = res["message"]["content"].strip()
    return json.loads(action_text)
You then plug call_orchestrator() into your loop that actually calls agents/tools and updates state.
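As a minimal sketch (run_agent, run_tool, and ask_user below are hypothetical callbacks you implement for your own stack), that outer loop could look something like this:
def run_orchestration(agents, state, run_agent, run_tool, ask_user):
    # run_agent, run_tool and ask_user are placeholder callables you provide;
    # they execute the chosen agent/tool (or prompt the user) and return a result.
    while state["step"] <= state["max_steps"]:
        decision = call_orchestrator(agents, state)

        if decision["action"] == "finish":
            return decision["final_answer"]
        elif decision["action"] == "call_agent":
            result = run_agent(decision["target"], decision["arguments"])
        elif decision["action"] == "call_tool":
            result = run_tool(decision["target"], decision["arguments"])
        elif decision["action"] == "ask_user":
            result = ask_user(decision["arguments"])
        else:
            raise ValueError(f"Unexpected action: {decision['action']}")

        # Record the step so the next prompt includes the full history.
        state["history"].append({"decision": decision, "result": result})
        state["step"] += 1

    return None  # step budget exhausted before a "finish" action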
Author: Sai Teja Erukude
Role: Developer & Maintainer of multiagent-orchestrator
📝 Notes
Happy orchestrating! 🤖