LLM

Protocol and Data Models

base

LLM protocol and data models for the Claw runtime.

Defines the structured request/response types used by all LLM backends (MockLLM, LiteLLM, etc.) and the LLM protocol itself.

ContextType

Bases: StrEnum

Type of context block in a prompt.

ContextBlock(type, content, source='') dataclass

A single block of context within a prompt.

Enables granular prompt assembly and debugging — each block is tagged with its type and source for traceability.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `type` | `ContextType` | What kind of context this is. |
| `content` | `str` | The text content. |
| `source` | `str` | Where this came from (artifact name, edge id, etc.). |

ToolCall(name, arguments=dict(), id='') dataclass

A tool invocation from an LLM response.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `name` | `str` | Tool name (e.g. `"emit_event"`, `"write_artifact"`). |
| `arguments` | `dict[str, Any]` | Parsed arguments dictionary. |
| `id` | `str` | Provider-assigned tool call ID, echoed in tool results. |

Usage(input_tokens=0, output_tokens=0) dataclass

Token usage for an LLM call.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `input_tokens` | `int` | Tokens consumed by the prompt. |
| `output_tokens` | `int` | Tokens in the response. |

LLMResponse(content='', tool_calls=list(), usage=Usage()) dataclass

Response from an LLM call.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `content` | `str` | Text response. |
| `tool_calls` | `list[ToolCall]` | Structured tool invocations. |
| `usage` | `Usage` | Token usage statistics. |

input_tokens property

Convenience accessor for usage.input_tokens.

output_tokens property

Convenience accessor for usage.output_tokens.
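A minimal sketch of the response shape and its convenience accessors, reconstructed from the signatures above (the property bodies are assumed to simply read through to `usage`):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ToolCall:
    name: str
    arguments: dict[str, Any] = field(default_factory=dict)
    id: str = ""


@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0


@dataclass
class LLMResponse:
    content: str = ""
    tool_calls: list[ToolCall] = field(default_factory=list)
    usage: Usage = field(default_factory=Usage)

    @property
    def input_tokens(self) -> int:
        # Convenience accessor for usage.input_tokens.
        return self.usage.input_tokens

    @property
    def output_tokens(self) -> int:
        # Convenience accessor for usage.output_tokens.
        return self.usage.output_tokens


resp = LLMResponse(
    content="Done.",
    tool_calls=[ToolCall(name="emit_event", arguments={"type": "task_done"})],
    usage=Usage(input_tokens=120, output_tokens=8),
)
print(resp.input_tokens, resp.output_tokens)  # reads through to usage
```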

Message(role, content='', tool_calls=list(), tool_call_id='') dataclass

A single message in a multi-turn conversation.

Used within the ReAct tool loop: the agent calls a tool, the tool result is appended as a message, and the agent continues with the updated conversation history.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `role` | `str` | Message role (`"assistant"`, `"tool"`, `"user"`). |
| `content` | `str` | Text content of the message. |
| `tool_calls` | `list[ToolCall]` | Tool invocations (for assistant messages). |
| `tool_call_id` | `str` | ID linking this result to its tool call (for tool result messages). |
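One turn of the tool loop described above can be sketched like this; the tool name, arguments, and call ID are illustrative, not taken from the runtime:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ToolCall:
    name: str
    arguments: dict[str, Any] = field(default_factory=dict)
    id: str = ""


@dataclass
class Message:
    role: str
    content: str = ""
    tool_calls: list[ToolCall] = field(default_factory=list)
    tool_call_id: str = ""


# One ReAct turn: the assistant requests a tool, the runtime executes it,
# and the result is appended as a "tool" message linked by tool_call_id.
history: list[Message] = []
call = ToolCall(name="write_artifact", arguments={"name": "plan.md"}, id="call_1")
history.append(Message(role="assistant", tool_calls=[call]))
history.append(Message(role="tool", content="artifact written", tool_call_id=call.id))
print([m.role for m in history])
```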

Prompt(system='', context=list(), event='', tools=list(), messages=list()) dataclass

Structured prompt for an LLM call.

Assembles system prompt, context blocks, event description, tool definitions, and optional conversation history into a single request object.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `system` | `str` | System prompt text. |
| `context` | `list[ContextBlock]` | Ordered list of context blocks. |
| `event` | `str` | Event description for this turn. |
| `tools` | `list[dict[str, Any]]` | Tool definitions available to the agent. |
| `messages` | `list[Message]` | Conversation history for multi-turn tool loops. |
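Assembling a request object along these lines might look as follows; the tool-definition dictionary shape and the event string are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ContextBlock:
    type: str  # stand-in for ContextType in this sketch
    content: str
    source: str = ""


@dataclass
class Prompt:
    system: str = ""
    context: list[ContextBlock] = field(default_factory=list)
    event: str = ""
    tools: list[dict[str, Any]] = field(default_factory=list)
    messages: list[Any] = field(default_factory=list)


prompt = Prompt(
    system="You are the dev agent.",
    context=[ContextBlock(type="artifact", content="# plan", source="plan.md")],
    event="task_assigned: implement the parser",
    tools=[{"name": "emit_event"}],  # tool schema shape is illustrative
)
print(len(prompt.context), prompt.event)
```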

LLM

Bases: Protocol

Protocol for LLM backends.

All LLM backends (MockLLM, LiteLLM, etc.) implement this protocol. The runtime calls complete() with a structured Prompt and receives a structured LLMResponse.

complete(prompt, *, model='') async

Send a prompt to the LLM and return a response.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `prompt` | `Prompt` | Structured prompt with system, context, event, tools. | *required* |
| `model` | `str` | Model identifier (e.g. `"anthropic/claude-sonnet-4-20250514"`). Backends may use this or ignore it if pre-configured. | `''` |

Returns:

| Type | Description |
| --- | --- |
| `LLMResponse` | Structured LLM response with content and/or tool calls. |
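Because `LLM` is a Protocol, any class with a matching async `complete()` conforms structurally, with no inheritance required. A minimal sketch with a toy echo backend (the `EchoLLM` class and the trimmed-down `Prompt`/`LLMResponse` are stand-ins for illustration):

```python
import asyncio
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Prompt:
    system: str = ""
    event: str = ""


@dataclass
class LLMResponse:
    content: str = ""


class LLM(Protocol):
    async def complete(self, prompt: Prompt, *, model: str = "") -> LLMResponse: ...


# A trivial conforming backend: echoes the event back as content.
class EchoLLM:
    async def complete(self, prompt: Prompt, *, model: str = "") -> LLMResponse:
        return LLMResponse(content=f"echo: {prompt.event}")


resp = asyncio.run(EchoLLM().complete(Prompt(event="ping")))
print(resp.content)
```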

MockLLM

mock

MockLLM — deterministic LLM backend for testing.

Supports scripted responses keyed by (agent_name, event_type) or agent pattern matching on the system prompt. Records all prompts for test assertions.

ScriptEntry(response, agent_pattern='', event_pattern='', data_pattern=dict()) dataclass

A single scripted response entry.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `response` | `LLMResponse` | The `LLMResponse` to return. |
| `agent_pattern` | `str` | Regex or substring to match in the system prompt. |
| `event_pattern` | `str` | Substring to match in the event description. |
| `data_pattern` | `dict[str, str]` | Key-value pairs that must appear in the event text. |

MockLLM(default=None)

Deterministic LLM for testing.

Features:

- Scripted responses keyed by (agent_pattern, event_pattern)
- Sequence support: agent's Nth call returns Nth scripted response
- Pattern matching on event data content
- Default fallback response
- Record mode: logs all prompts for inspection

Example:

```python
llm = MockLLM()
llm.script("dev", responses=[
    LLMResponse(content="I'll implement this"),
    LLMResponse(tool_calls=[ToolCall(name="emit_event", ...)]),
])
llm.script("reviewer", event_pattern="review_requested", responses=[
    LLMResponse(tool_calls=[ToolCall(name="approve")]),
])
```

call_count property

Total number of calls made.

script(agent_pattern='', *, event_pattern='', responses=None)

Register scripted responses.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `agent_pattern` | `str` | Substring/regex to match in system prompt. | `''` |
| `event_pattern` | `str` | Substring to match in event description. | `''` |
| `responses` | `list[LLMResponse] \| None` | Ordered list of responses for sequential calls. | `None` |

complete(prompt, *, model='') async

Return a scripted response matching the prompt.

Matching priority:

1. Agent + event pattern match (most specific)
2. Agent pattern only
3. Default fallback
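The most-specific-first lookup can be sketched as a plain function; the entry shape and the sample patterns below are illustrative, not MockLLM's actual internals:

```python
import re


def pick_response(entries, system_prompt, event, default):
    # entries: list of (agent_pattern, event_pattern, response) tuples.
    # Agent patterns are treated as regexes, event patterns as substrings.
    # 1. Agent + event pattern match (most specific)
    for agent_pat, event_pat, resp in entries:
        if agent_pat and event_pat and re.search(agent_pat, system_prompt) \
                and event_pat in event:
            return resp
    # 2. Agent pattern only
    for agent_pat, event_pat, resp in entries:
        if agent_pat and not event_pat and re.search(agent_pat, system_prompt):
            return resp
    # 3. Default fallback
    return default


entries = [
    ("reviewer", "review_requested", "approve"),
    ("dev", "", "implement"),
]
print(pick_response(entries, "You are the dev agent.", "task_assigned", "noop"))
```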

calls_for(agent_pattern)

Return calls where the system prompt matches the pattern.

reset()

Reset call history and counters.

LiteLLM Backend

litellm_backend

LiteLLM backend — unified LLM gateway for all providers.

Wraps litellm.acompletion() to provide a single class that works with any LLM provider (Anthropic, Google, OpenAI, etc.) via model string prefixes.

API keys are read from environment variables automatically by LiteLLM (ANTHROPIC_API_KEY, GOOGLE_API_KEY, OPENAI_API_KEY, etc.).

LiteLLMBackend(default_model='anthropic/claude-sonnet-4-20250514', temperature=0.0, max_tokens=4096)

LLM backend using LiteLLM for multi-provider support.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `default_model` | `str` | Default model string if not specified per-call. Uses LiteLLM model format (e.g. `"anthropic/claude-sonnet-4-20250514"`, `"gemini/gemini-2.0-flash"`, `"gpt-4o"`). | `'anthropic/claude-sonnet-4-20250514'` |
| `temperature` | `float` | Sampling temperature (0.0 = deterministic). | `0.0` |
| `max_tokens` | `int` | Maximum tokens in the response. | `4096` |

complete(prompt, *, model='') async

Send a prompt to LiteLLM and return a structured response.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `prompt` | `Prompt` | Structured prompt with system, context, event, tools. | *required* |
| `model` | `str` | Model override. Falls back to `default_model`. | `''` |

Returns:

| Type | Description |
| --- | --- |
| `LLMResponse` | `LLMResponse` with content, tool calls, and usage. |
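A backend like this has to flatten the structured `Prompt` into the chat-messages list that `litellm.acompletion()` accepts. The exact layout below (system prompt in the system message, tagged context blocks plus the event in one user message) is an assumption for illustration, not the documented behavior of this backend:

```python
def to_chat_messages(system, context_blocks, event):
    # context_blocks: list of (type, source, content) tuples.
    # Each block keeps its [type:source] tag so the flattened prompt
    # stays traceable, mirroring ContextBlock's purpose.
    parts = [f"[{btype}:{source}]\n{content}"
             for btype, source, content in context_blocks]
    parts.append(f"Event: {event}")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": "\n\n".join(parts)},
    ]


msgs = to_chat_messages(
    "You are the dev agent.",
    [("artifact", "plan.md", "# plan")],
    "task_assigned",
)
print(msgs[0]["role"], msgs[1]["role"])
```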