Agents

An Agent project defines an AI agent — a conversational loop that calls an LLM, handles tool use, manages message history, and produces a final response. The generated agent.py exports an AgentRunner class you can import and use anywhere.

How It Works

The canvas is the agent's behavior graph — the logic that runs on every turn. Exec flow controls the sequence of operations. Data flow passes messages, responses, and values between nodes.

The agent's identity — system prompt, model, temperature, tool access — is defined in the Personas panel, not in the graph. The graph is persona-agnostic. The same behavior graph compiles differently depending on which persona is active.

Basic Agent Loop

A minimal agent looks like this:

Agent Start
  → User Message (wraps initial_input as a user message)
  → Append Message (adds to history)
  → LLM Call (sends messages to the model)
  → On Tool Call (branches: tool use vs. final response)
      ├─ exec_tool → Dispatch Tool → Tool Result Message → Append Message → loop back to LLM Call
      └─ exec_text → Agent End (return the text response)

The Loop Back on the tool-use path creates a multi-turn loop — the agent keeps calling the LLM until it produces a final text response.
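In compiled form, that loop reduces to a shape like the following. This is a minimal sketch, not the actual generated code; call_llm and run_tool are hypothetical stand-ins for the provider client and the generated tool dispatch.

```python
def call_llm(messages):
    # Stub: pretend the model answers immediately with a final text response.
    return {"stop_reason": "end_turn", "text": "Paris"}

def run_tool(tool_call):
    # Stub for the Dispatch Tool step.
    return "tool result"

def run(initial_input):
    # User Message + Append Message: seed the history.
    messages = [{"role": "user", "content": initial_input}]
    while True:
        response = call_llm(messages)          # LLM Call
        if response["stop_reason"] == "tool_use":  # On Tool Call: exec_tool
            result = run_tool(response["tool_call"])
            messages.append({"role": "tool", "content": result})
            continue                            # Loop Back to the LLM Call
        return response["text"]                 # exec_text -> Agent End

print(run("What is the capital of France?"))  # → Paris
```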

Loop Control Nodes

Agent Start

Entry node. Starts exec flow and outputs initial_input (str) — the first user message passed to the agent when run() is called.

Agent End

Statement node. Ends the agent loop and returns a value. Connect the final output string to the output input port. Generates return output.

Loop Back

Statement node. Wires back to a previous node in the exec flow to create a multi-turn loop. Use it to loop back to the LLM Call after handling a tool call. Exec cycles through a Loop Back node are valid — they are not flagged as errors by validation.

LLM Nodes

LLM Call

Statement node. Sends a message list to the model and returns the response. The model, system prompt, max tokens, and temperature are injected from the active persona at compile time — there are no model settings on this node.

Outputs:

  • response — The full AiResponse object
  • text — The text content of the response (shortcut for simple cases)

LLM Stream

Statement node. Like LLM Call but streams the response token by token. Has two exec outputs: on_text fires for each chunk, on_done fires when the stream completes.

Outputs:

  • chunk — The current text chunk (on each on_text tick)
  • full_text — The complete text (on on_done)

Parse Response

Expression node. Extracts fields from an AiResponse object.

Outputs:

  • text — Text content
  • stop_reason — Why the model stopped (end_turn, tool_use, max_tokens)
  • tool_calls — List of AiToolCall objects if the model called tools
  • input_tokens — Number of input tokens consumed (response.usage.input_tokens)
  • output_tokens — Number of output tokens generated (response.usage.output_tokens)

Structured Output

Statement node. Sends messages to the model and extracts structured data matching a JSON schema. The implementation adapts per provider — Anthropic uses forced tool choice, OpenAI uses response_format JSON schema, and Google uses response_mime_type. All produce structured JSON matching your schema.

Configure schema_name and schema_json in state. Connect a message list to messages.

Outputs:

  • data — The extracted structured data as a dict
  • raw_response — The full AiResponse for inspection
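A typical schema_json value is an ordinary JSON Schema object. The field names below are illustrative, not prescribed by Loom:

```python
import json

# Illustrative schema for a sentiment-extraction agent.
schema_json = json.dumps({
    "type": "object",
    "properties": {
        "sentiment": {
            "type": "string",
            "enum": ["positive", "negative", "neutral"],
        },
        "confidence": {"type": "number"},
    },
    "required": ["sentiment"],
})
```

The data output would then be a dict with sentiment and confidence keys.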

Message Nodes

User Message

Expression node. Wraps a string as a {"role": "user", "content": "..."} message.

Assistant Message

Expression node. Wraps a string as a {"role": "assistant", "content": "..."} message.

System Message

Expression node. Wraps a string as a {"role": "system", "content": "..."} message. Same pattern as User Message and Assistant Message. Use this to inject system-level instructions into a message list programmatically.

Append Message

Expression node. Adds a message to a message list and returns the new list. Use this to accumulate conversation history.
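As expression nodes, the message wrappers and Append Message compose without side effects. A sketch of the equivalent Python:

```python
def user_message(text):
    # User Message node: wrap a string as a user-role message.
    return {"role": "user", "content": text}

def append_message(history, message):
    # Append Message node: returns a new list rather than mutating,
    # matching expression-node semantics.
    return history + [message]

history = []
history = append_message(history, user_message("Hello"))
```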

Message History

Statement node. A stateful accumulator — maintains a running list of messages across loop iterations. Set max_length to cap the history size (older messages are dropped). Outputs the current history as AiMessageList.
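A sketch of the accumulator's max_length behavior as described (oldest messages dropped once the cap is exceeded); the class name is illustrative:

```python
class MessageHistory:
    def __init__(self, max_length=None):
        self.max_length = max_length
        self.messages = []

    def append(self, message):
        self.messages.append(message)
        # Cap the history: drop the oldest messages past max_length.
        if self.max_length is not None and len(self.messages) > self.max_length:
            self.messages = self.messages[-self.max_length:]

history = MessageHistory(max_length=2)
for i in range(3):
    history.append({"role": "user", "content": str(i)})
# Only the two most recent messages remain.
```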

Tool Use Nodes

On Tool Call

Branch node. Routes exec flow based on why the LLM stopped:

  • exec_tool path — The model wants to call a tool (stop_reason == "tool_use")
  • exec_text path — The model returned a final response (stop_reason == "end_turn")

Also outputs:

  • tool_call — The AiToolCall object (on the tool path)
  • text — The response text (on the text path)

Dispatch Tool

Statement node. Routes an AiToolCall to the correct handler function. Configure tool bindings in state — each binding maps a tool name to either a Loom project reference or a Python function name.

The active persona's tool whitelist filters which bindings are included in the compiled output. Tools not on the whitelist are excluded from the generated code.

Tool Result Message

Expression node. Packages a tool call and its result into an AiMessage suitable for adding back to the conversation history. Connect both tool_call and result inputs.
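Together, Dispatch Tool and Tool Result Message look roughly like this in compiled form. The AiToolCall shape (name, input, id keys) and the message shape are assumptions for illustration:

```python
# Tool bindings, as configured in Dispatch Tool state (after persona
# whitelist filtering).
TOOL_BINDINGS = {
    "get_weather": lambda args: f"Sunny in {args['city']}",
}

def dispatch_tool(tool_call):
    # Route the call to the bound handler by tool name.
    handler = TOOL_BINDINGS[tool_call["name"]]
    return handler(tool_call["input"])

def tool_result_message(tool_call, result):
    # Package the result for appending back to the history.
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": result}

call = {"id": "t1", "name": "get_weather", "input": {"city": "Paris"}}
msg = tool_result_message(call, dispatch_tool(call))
```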

Memory Nodes

Memory Write

Statement node. Stores a value under a key. In the compiled output, uses an in-memory dict by default — a comment marks the swap point for SQLite, Redis, or a memory MCP server.

Memory Read

Expression node. Retrieves a value by key. Outputs both the value and a found boolean. Set a default in state for the case where the key doesn't exist.
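The default dict-backed implementation is small enough to sketch in full; the function names are illustrative:

```python
_memory = {}  # swap point for SQLite, Redis, or a memory MCP server

def memory_write(key, value):
    _memory[key] = value

def memory_read(key, default=None):
    # Returns (value, found) to match the node's two outputs.
    found = key in _memory
    return _memory.get(key, default), found

memory_write("user_name", "Ada")
value, found = memory_read("user_name")
missing, found2 = memory_read("nope", default="")
```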

Flow Nodes

Human Input

Statement node. Places a blocking input() call in the exec flow. The prompt_text state field becomes the literal string passed to Python's input(). Outputs the user's response as input (str).

Generates: user_input_0 = input('Enter input: ')

This only works for CLI-based agents. For web or API contexts, use Named Variables (input direction) instead — they become typed function parameters on run() that callers pass programmatically. See the Named Variables topic for details.

Get Persona Property

Expression node. Reads a named property from the active persona's Properties list (defined in the Personas panel). The property_name state field is the lookup key — it must match a property name on the persona (e.g. system_prompt, context, constraints).

At compile time, Loom looks up the property by name in the active persona and embeds its value as a Python string literal. The value output port produces that string wherever you wire it — typically into a System Message or Format Prompt node.

If the property name doesn't match anything on the persona, the output is an empty string. If you change a persona property, regenerate to see the change in output.

Error Handling Nodes

Try/Catch

Statement node. Wraps a section of exec flow in error handling. Has two exec outputs: exec_try for the happy path and exec_catch for the error path. If any node in the try chain raises an exception, execution jumps to the catch path with the error output set to the exception message string.

Codegen: generates a try/except wrapper around the try-path code.
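The generated wrapper is shaped roughly like this (risky_step stands in for whatever node chain sits on the try path):

```python
def risky_step():
    # Hypothetical node on the try path that raises.
    raise ValueError("upstream failure")

try:
    risky_step()
    outcome = "ok"           # exec_try path continues
except Exception as e:
    error = str(e)           # the node's error output
    outcome = "handled"      # exec_catch path runs
```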

Retry

Statement node. Retries a section of exec flow up to a configurable number of times. Has two exec outputs: exec_body (the code to attempt on each iteration) and exec_exhausted (runs if all retries fail).

Configure max_retries (default 3) and delay_seconds (default 1) in state.

Outputs:

  • attempt — Current attempt number (0-indexed)
  • last_error — The error message from the most recent failed attempt

Codegen: generates a for loop with try/except and time.sleep(delay_seconds) between attempts.
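The generated loop is shaped roughly as follows. flaky_call is a hypothetical exec_body that succeeds on the third attempt; delay_seconds is set to 0 here so the sketch runs instantly:

```python
import time

calls = {"n": 0}

def flaky_call():
    # Fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

max_retries, delay_seconds = 3, 0
result, last_error = None, None
for attempt in range(max_retries):   # attempt is 0-indexed
    try:
        result = flaky_call()
        break
    except Exception as e:
        last_error = str(e)          # the node's last_error output
        time.sleep(delay_seconds)
else:
    pass  # exec_exhausted path: all retries failed
```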

Parallel Execution

Parallel

Statement node. Runs multiple branches of exec flow concurrently. Has dynamic exec outputs (branch_0, branch_1, ..., branch_n) plus exec_done which fires after all branches complete.

Configure branch_count (default 2) in state.

Outputs:

  • results — A list containing the results from all branches

Codegen: generates asyncio.gather(*branches) to run all branches concurrently.
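A sketch of the gather shape with two branches (branch_0 and branch_1 are hypothetical coroutines standing in for the branch code):

```python
import asyncio

async def branch_0():
    return "a"

async def branch_1():
    return "b"

async def main():
    # results preserves branch order, matching the node's results output.
    return await asyncio.gather(branch_0(), branch_1())

results = asyncio.run(main())  # → ["a", "b"]
```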

Response Checking

Check Response

Statement node. Inspects a text string and branches based on a pattern match. Designed for checking AI text output against expected patterns. Has two exec outputs: exec_match (pattern matched) and exec_else (no match).

Configure in state: check_type (contains, not_contains, equals, starts_with, ends_with, or regex) and pattern (the pattern string to check against).

Inputs:

  • text — The string to check

Outputs:

  • matched — Boolean indicating whether the pattern matched
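The match logic for each check_type can be sketched as follows (function name illustrative):

```python
import re

def check_response(text, check_type, pattern):
    if check_type == "contains":
        return pattern in text
    if check_type == "not_contains":
        return pattern not in text
    if check_type == "equals":
        return text == pattern
    if check_type == "starts_with":
        return text.startswith(pattern)
    if check_type == "ends_with":
        return text.endswith(pattern)
    if check_type == "regex":
        return re.search(pattern, text) is not None
    raise ValueError(f"unknown check_type: {check_type}")

matched = check_response("The answer is 42.", "regex", r"\d+")
```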

HTTP Requests

HTTP Request

Statement node. Makes an HTTP request to an external URL. Supports GET, POST, PUT, PATCH, and DELETE methods with optional authentication (none, bearer, or basic).

Inputs:

  • url — The URL to request
  • headers — Optional dict of HTTP headers
  • body — Optional request body string (for POST/PUT/PATCH)

Outputs:

  • response_body — The response body as a string
  • status_code — HTTP status code (e.g. 200, 404)
  • response_headers — Response headers as a dict

Configure method (GET/POST/PUT/PATCH/DELETE) and auth_type (none/bearer/basic) in state. Codegen uses httpx.AsyncClient.

Data Transformation

JSON Parse

Expression node. Parses a JSON string into a dict.

Inputs: text (str) / Outputs: data (dict)

JSON Stringify

Expression node. Serializes a dict to a JSON string.

Inputs: data (dict) / Outputs: text (str)

Dict Get

Expression node. Extracts a value from a nested dict using a dot-notation path (e.g. result.items[0].name). Set path and optional default_value in state.

Inputs: data (dict) / Outputs: value (any)
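A sketch of the dot-notation lookup, including the bracketed-index form shown above; falling back to default_value when any step is missing:

```python
import re

def dict_get(data, path, default=None):
    current = data
    for part in path.split("."):
        # Support "items[0]"-style indexed segments.
        m = re.match(r"(\w+)\[(\d+)\]$", part)
        try:
            if m:
                current = current[m.group(1)][int(m.group(2))]
            else:
                current = current[part]
        except (KeyError, IndexError, TypeError):
            return default
    return current

data = {"result": {"items": [{"name": "first"}]}}
value = dict_get(data, "result.items[0].name")  # → "first"
```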

Cache

Statement node. In-memory key-value cache with TTL expiry. Useful for avoiding redundant LLM calls or HTTP requests for the same input.

Configure ttl_seconds in state (default 300).

Inputs:

  • key — Cache key string
  • value — Value to store (optional — omit to read only)

Outputs:

  • cached_value — The cached value (if found)
  • hit — Boolean indicating whether the key was in cache and not expired
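The TTL behavior can be sketched with a dict of (value, expiry) pairs. Note this sketch reads when value is omitted and writes otherwise, matching the optional value input; it cannot cache None:

```python
import time

_cache = {}

def cache(key, value=None, ttl_seconds=300):
    now = time.time()
    if value is not None:
        # Write path: store the value with its expiry timestamp.
        _cache[key] = (value, now + ttl_seconds)
        return value, True
    # Read path: a hit requires the key to exist and not be expired.
    entry = _cache.get(key)
    if entry and entry[1] > now:
        return entry[0], True
    return None, False

cache("greeting", "hello")
cached_value, hit = cache("greeting")
```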

File I/O

File Read

Statement node. Reads the contents of a file from disk.

Inputs: path (str)

Outputs:

  • content — The file contents as a string
  • exists — Boolean indicating whether the file was found

File Write

Statement node. Writes content to a file on disk. Set mode in state: write (overwrite) or append.

Inputs:

  • path — File path
  • content — String content to write
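A sketch of the File Read side, including the exists output for missing files (function name illustrative; shown here against a temporary file):

```python
import os
import tempfile

def file_read(path):
    # Missing file: empty content, exists=False, no exception.
    if not os.path.exists(path):
        return "", False
    with open(path) as f:
        return f.read(), True

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello")
    path = f.name

content, exists = file_read(path)
os.unlink(path)
```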

RAG (Retrieval)

Embed Text

Statement node. Generates a vector embedding for a text string using a Voyage AI model. Set model in state: voyage-3, voyage-3-lite, or voyage-code-3.

Inputs: text (str)

Outputs:

  • embedding — The embedding vector as a list of floats
  • dimensions — Dimensionality of the embedding

Chunk Text

Expression node. Splits a long text string into overlapping chunks for embedding or processing. Configure chunk_size (default 500) and overlap (default 50) in state.

Inputs: text (str)

Outputs:

  • chunks — List of text chunks
  • count — Number of chunks produced
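Overlapping chunking with a fixed stride can be sketched as follows; shown with small sizes so the overlap is visible:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    # Each chunk starts (chunk_size - overlap) characters after the last,
    # so consecutive chunks share `overlap` characters.
    step = chunk_size - overlap
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)]
    return chunks, len(chunks)

chunks, count = chunk_text("abcdefghij", chunk_size=4, overlap=2)
# chunks: ["abcd", "cdef", "efgh", "ghij", "ij"]
```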

Delay

Delay

Statement node. Pauses execution for a configurable number of seconds. Set seconds in state (default 1). Useful for rate limiting API calls or adding delays between retries.

Parameterizing with Named Variables

Use Named Variables to expose configuration as typed function parameters instead of hardcoding values in node state. Promote any node field to an input variable (right-click → "Promote to Input") and it becomes a parameter on run():

# Before: hardcoded temperature
agent.run("Hello")

# After: temperature promoted to input variable
agent.run("Hello", temperature=0.9, context="Be creative")

See the Named Variables topic for the full system — field promotion, Get/Set Variable nodes, and output variables.

Running Your Agent

from agent import AgentRunner

agent = AgentRunner()
result = agent.run("What is the capital of France?")
print(result)

Or run directly:

python agent.py