
Quickstart

Build and run a multi-agent society in under 5 minutes. This guide walks you through one society -- a PM who delegates work to a Coder, reviewed by a Reviewer -- in three stages. Each stage changes only a few lines from the previous one.

Stage             What changes                      API key needed?
1. Run it now     MockLLM with scripted responses   No
2. Make it real   Swap to auto_detect_llm()         Yes
3. Watch it live  Add the live dashboard            Yes

Installation

pip install claw
# or
uv add claw

Stage 1: Run It Now (No API Key)

This builds a 3-agent PR review society using MockLLM so it runs entirely offline. Save this as my_society.py:

import asyncio

from claw import (
    Agent,
    Delegation,
    LLMResponse,
    LocalRuntime,
    MockLLM,
    Oversight,
    Society,
    ToolCall,
)

# 1. Define agents
pm = Agent(name="pm", role="project manager", model="mock")
coder = Agent(name="coder", role="implementer", model="mock")
reviewer = Agent(name="reviewer", role="code reviewer", model="mock")

# 2. Build the society graph
s = Society(name="pr-review")
s.connect(pm, coder, Delegation())          # PM delegates to Coder
s.connect(coder, reviewer, Oversight(max_rounds=3))  # Reviewer oversees Coder

# 3. Script MockLLM responses
llm = MockLLM()
llm.script("You are pm,", responses=[
    LLMResponse(tool_calls=[
        ToolCall(name="emit_event", arguments={
            "event_type": "task_delegated",
            "target_agent": "coder",
            "data": {"task": "Write hello.py"},
        })
    ]),
    LLMResponse(content="acknowledged"),
])
llm.script("You are coder,", responses=[
    LLMResponse(tool_calls=[
        ToolCall(name="emit_event", arguments={
            "event_type": "complete",
            "target_agent": "pm",
            "data": {"status": "done"},
        })
    ]),
])
llm.script("You are reviewer,", responses=[
    LLMResponse(tool_calls=[
        ToolCall(name="approve", arguments={"comment": "LGTM"})
    ]),
])

# 4. Run
async def main():
    runtime = LocalRuntime(llm)
    result = await runtime.run(s, "Write hello.py")
    print(f"Status: {result.status}")
    print(f"Events: {len(result.trace)}")
    for e in result.trace:
        print(f"  {e.source} -> {e.target}: {e.type}")

asyncio.run(main())

Run it:

python my_society.py

You should see output like:

Status: completed
Events: 3
  pm -> coder: task_delegated
  coder -> pm: complete
  pm -> pm: acknowledged

If that works, you have a running society. Time to make it real.
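The llm.script() calls above key each agent's canned replies to the opening of its compiled system prompt ("You are pm," and so on). The matching idea can be sketched in plain Python, independent of claw (ScriptedMock here is a made-up stand-in, not the real MockLLM implementation):

```python
class ScriptedMock:
    """Toy scripted LLM: match prompt prefixes, replay canned replies in order."""

    def __init__(self):
        self.scripts = {}  # prompt prefix -> remaining scripted replies

    def script(self, prefix, responses):
        self.scripts[prefix] = list(responses)

    def complete(self, prompt):
        # The first registered prefix that matches the prompt wins;
        # each call consumes the next reply in that script.
        for prefix, responses in self.scripts.items():
            if prompt.startswith(prefix) and responses:
                return responses.pop(0)
        raise LookupError("no scripted response for this prompt")


llm = ScriptedMock()
llm.script("You are pm,", ["delegate to coder", "acknowledged"])
print(llm.complete("You are pm, a project manager."))  # -> delegate to coder
print(llm.complete("You are pm, a project manager."))  # -> acknowledged
```

This is why each script's prefix must match the start of the prompt the runtime compiles for that agent.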


Stage 2: Make It Real (One-Line Change)

Replace MockLLM() with auto_detect_llm() and remove the scripted responses. The agents now talk to a real LLM.

First, set an API key for whichever provider you have:

# Pick one:
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="AI..."
export OPENAI_API_KEY="sk-..."

Now update my_society.py. Two changes from Stage 1:

  1. Replace the MockLLM import with auto_detect_llm
  2. Replace the MockLLM() constructor and all llm.script() calls with a single auto_detect_llm() call

Everything else stays the same:

import asyncio

from claw import (
    Agent,
    Delegation,
    LocalRuntime,
    Oversight,
    Society,
    auto_detect_llm,
)

# 1. Define agents (same graph as before)
pm = Agent(name="pm", role="project manager")
coder = Agent(name="coder", role="implementer")
reviewer = Agent(name="reviewer", role="code reviewer")

# 2. Build the society graph (unchanged)
s = Society(name="pr-review")
s.connect(pm, coder, Delegation())
s.connect(coder, reviewer, Oversight(max_rounds=3))

# 3. Auto-detect LLM from environment
llm = auto_detect_llm()  # Uses whichever API key you have set

# 4. Run (unchanged)
async def main():
    runtime = LocalRuntime(llm)
    result = await runtime.run(s, "Write hello.py")
    print(f"Status: {result.status}")
    print(f"Events: {len(result.trace)}")
    for e in result.trace:
        print(f"  {e.source} -> {e.target}: {e.type}")

asyncio.run(main())

auto_detect_llm() checks for API keys in order -- Anthropic, Gemini, OpenAI -- and returns a configured LiteLLMBackend. No model strings, no provider config. See the LLM backends guide for details.
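The detection order can be illustrated in a few lines of plain Python. This is a sketch of the idea only; claw's actual auto_detect_llm() does more, such as constructing the LiteLLMBackend:

```python
import os

# Checked in this order, per the docs: Anthropic, then Gemini, then OpenAI.
PROVIDER_KEYS = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("gemini", "GEMINI_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
]

def detect_provider(env=None):
    """Return the first provider whose API key is set, or None."""
    env = os.environ if env is None else env
    for provider, key in PROVIDER_KEYS:
        if env.get(key):
            return provider
    return None

# With both keys set, Anthropic wins because it is checked first.
print(detect_provider({"OPENAI_API_KEY": "sk-...", "ANTHROPIC_API_KEY": "sk-ant-..."}))
# -> anthropic
```

If you have multiple keys set and want a specific provider, unset the higher-priority keys before running.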


Stage 3: Watch It Live (Add Dashboard)

Swap runtime.run() for serve() to get a real-time dashboard in your browser. Two changes from Stage 2:

  1. Import serve instead of LocalRuntime
  2. Replace the LocalRuntime + run() call with serve()

import asyncio

from claw import (
    Agent,
    Delegation,
    Oversight,
    Society,
    auto_detect_llm,
    serve,
)

# 1. Define agents (same graph)
pm = Agent(name="pm", role="project manager")
coder = Agent(name="coder", role="implementer")
reviewer = Agent(name="reviewer", role="code reviewer")

# 2. Build the society graph (unchanged)
s = Society(name="pr-review")
s.connect(pm, coder, Delegation())
s.connect(coder, reviewer, Oversight(max_rounds=3))

# 3. Auto-detect LLM (unchanged)
llm = auto_detect_llm()

# 4. Run with live dashboard
async def main():
    result = await serve(s, task="Write hello.py", llm=llm)
    # Open http://localhost:8765 in your browser
    print(f"Status: {result.status}")

asyncio.run(main())

Run it and open your browser:

python my_society.py
# Dashboard: http://127.0.0.1:8765

The dashboard shows the society graph, streams events in real time as agents communicate, and tracks artifact changes. It shuts down automatically when the society completes. See the Dashboard guide for configuration options.


What Just Happened

The runtime executes a four-phase loop:

  1. Compile -- The society graph is validated. Each agent's system prompt, visible context, and available actions are derived from the graph topology and edge types.
  2. Seed -- The task string becomes the first task_assigned event, routed to the entry agent (the delegation source with no incoming delegation edges -- pm in this case).
  3. Drain -- The event bus processes events one at a time. Each event is delivered to its target agent, the agent's LLM is called with the compiled prompt plus event context, and the LLM response is parsed into new events and artifact writes. New events go back onto the bus. This continues until a termination condition is met.
  4. Return -- The runtime returns a SocietyResult containing the status, full event trace, final artifact state, and execution stats.
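The drain phase is the heart of the loop. Stripped of LLM calls and artifact writes, it is a single-consumer event queue; here is a minimal standalone sketch (the Event shape, handler functions, and termination check are illustrative, not claw's internals):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    target: str
    type: str

def drain(seed, handlers):
    """Deliver events one at a time; each handler may emit follow-up events."""
    bus = deque([seed])
    trace = []
    while bus:
        event = bus.popleft()
        trace.append(event)
        if event.type == "complete":  # toy termination condition
            break
        # Deliver to the target agent; its response becomes new events.
        for new_event in handlers[event.target](event):
            bus.append(new_event)
    return trace

handlers = {
    "pm": lambda e: [Event("pm", "coder", "task_delegated")],
    "coder": lambda e: [Event("coder", "pm", "complete")],
}
trace = drain(Event("user", "pm", "task_assigned"), handlers)
for e in trace:
    print(f"{e.source} -> {e.target}: {e.type}")
```

In the real runtime, "handler" means compiling the agent's prompt, calling its LLM, and parsing the response into events and artifact writes.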

Events flow along edges. When pm emits an event targeting coder, it travels along the Delegation edge connecting them. The edge type determines what actions and context are available to each agent.

Adding a Reviewer

In the quickstart above, the reviewer is already part of the graph. Here is what the Oversight edge gives you:

from claw import Oversight

reviewer = Agent(name="reviewer", role="critic", model="mock")
s.connect(coder, reviewer, Oversight(max_rounds=3))

This creates a coder -> reviewer oversight relationship. The reviewer can approve, reject, or post comments. If max rounds are exhausted, the edge resolves automatically. See examples/pr_review.py for a full working example with a reviewer.
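The max_rounds=3 budget can be pictured as a bounded loop: each round the reviewer approves or sends the work back, and if approval never arrives within the budget, the edge resolves on its own. A toy illustration (run_oversight and this auto-resolve policy are invented for the sketch; see the Edges guide for the real semantics):

```python
def run_oversight(review_round, max_rounds=3):
    """Run up to max_rounds review cycles; auto-resolve if the budget runs out."""
    for round_number in range(1, max_rounds + 1):
        verdict = review_round(round_number)
        if verdict == "approve":
            return f"approved after {round_number} round(s)"
        # "reject" sends the work back to the coder for another round
    return "auto-resolved: max rounds exhausted"

# Reviewer rejects twice, then approves on round 3.
print(run_oversight(lambda n: "approve" if n == 3 else "reject"))
# -> approved after 3 round(s)

# Reviewer never approves: the edge resolves automatically.
print(run_oversight(lambda n: "reject"))
# -> auto-resolved: max rounds exhausted
```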

Next Steps

  • Agents -- Agent creation, configuration, tools, and human agents
  • Edges -- Edge types, when to use each, group edges
  • Artifacts -- Versioned shared work products, token budgets
  • Societies -- Building and composing society graphs
  • Runtime -- Execution engine, concurrency, termination conditions
  • LLM Backends -- Configuring LLM providers
  • Dashboard -- Live visualization of running societies