Server Graph

A Server Graph project builds a complete MCP server: you define tools, resources, and prompts inline on the canvas, import agents as reusable components, and wire orchestration logic between them.

How It Works

The Server Graph canvas is where you define your server's capabilities directly. Tools, resources, and prompts are defined inline using definition nodes — each one starts an exec flow that contains the handler logic. Agents from other Weave AI projects can be imported and called as part of tool or orchestration logic.

The compiled output is a folder structure you can run with one command or push directly to a git repo for automatic deployment.

Inline Tool Definition

Tools are defined directly on the Server Graph canvas using the Tool Definition node. The pattern is: Tool Definition → handler logic → Tool Return.

  1. Add a Tool Definition node — set name, description, and parameters
  2. Use Get Tool Arg nodes to extract parameter values from the tool input
  3. Wire your handler logic (core nodes, agent calls, etc.)
  4. End with a Tool Return node (or Tool Error for error cases)

Each Tool Definition compiles to an @server.tool() decorated async function inside server.py. All tools live in the same file — no separate server projects needed.
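As a rough sketch of that shape — using a stand-in server object, since the real one is created by the compiled runtime in server.py; the search_web name, query parameter, and handler body are all illustrative:

```python
import asyncio

# Minimal stand-in for the compiled server object; in the real output,
# server.py builds it from the MCP runtime.
class _Server:
    def __init__(self):
        self.tools = {}

    def tool(self):
        def register(fn):
            self.tools[fn.__name__] = fn
            return fn
        return register

server = _Server()

# Roughly what a Tool Definition named search_web might compile to.
@server.tool()
async def search_web(query: str) -> str:
    """Search the web for a query."""
    # handler logic wired between Tool Definition and Tool Return goes here
    return f"results for {query}"
```

The decorator registers the function under its name, which is why every tool can live in the same file.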

Between Tool Definition and Tool Return you can also use:

  • Error handling — Try/Catch, Retry
  • Parallel — concurrent execution
  • Check Response — pattern matching on AI text output
  • HTTP Request — external API calls
  • Data transformation — JSON Parse, JSON Stringify, Dict Get, Cache
  • File I/O — File Read, File Write
  • RAG — Embed Text, Chunk Text
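For instance, a Retry node presumably wraps a stretch of handler logic in a loop shaped roughly like this (the attempts and delay names are illustrative, not the node's actual state fields):

```python
import asyncio

async def with_retry(handler, attempts=3, delay=0.1):
    # Re-run the wrapped handler until it succeeds or attempts run out,
    # pausing between tries; re-raise the last error on exhaustion.
    last_error = None
    for _ in range(attempts):
        try:
            return await handler()
        except Exception as err:
            last_error = err
            await asyncio.sleep(delay)
    raise last_error
```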

Tool Nodes

Tools are functions your server exposes for clients to call.

Tool Definition

Defines a tool and starts its handler logic. Configure in the node state:

  • name — The tool name (snake_case, e.g. search_web)
  • description — What the tool does. Claude uses this to decide when to call it.
  • params — List of parameters with name, type, and description

The input data output port provides the validated tool arguments as an McpToolInput value. Use Get Tool Arg nodes to extract individual parameters.

Generates a @server.tool() decorated async function.

Tool Return

Ends a tool handler and returns a value to the caller. Connect any value to the value input port. Accepts strings, numbers, booleans, lists, and dicts.

Tool Error

Returns an error from a tool handler. Set an error_code in state and connect an error message string to the message input.

Get Tool Arg

Expression node. Extracts a named parameter from an McpToolInput value. Set param_name in state to match one of the parameters defined on your Tool Definition.
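Taken together, the argument-extraction pattern behaves roughly like the sketch below. McpToolInput's internals here are an assumption; only the node behavior is from the descriptions above:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the McpToolInput value carried by the
# Tool Definition's input data port.
@dataclass
class McpToolInput:
    args: dict = field(default_factory=dict)

def get_tool_arg(tool_input: McpToolInput, param_name: str):
    # Rough behavior of a Get Tool Arg node: pull one named parameter
    # out of the validated tool arguments.
    return tool_input.args[param_name]
```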

Resource Nodes

Resources expose data that clients can read by URI.

Resource Definition

Defines a static resource at a fixed URI. Set uri_template, description, and mime_type in state. The uri_params output provides any path parameters if the URI contains variables.

Resource Template

Defines a parameterized resource where the URI contains variables (e.g. /users/{user_id}/profile). The params output provides the matched URI parameters.
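The URI matching presumably behaves like this sketch; the regex-based matcher is illustrative, as the real matching is done by the compiled runtime:

```python
import re

def match_uri_template(template: str, uri: str):
    # Turn each {var} slot into a named capture group, then match
    # the incoming URI against the resulting pattern.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template) + "$"
    match = re.match(pattern, uri)
    return match.groupdict() if match else None
```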

Resource Return

Returns content from a resource handler. Connect your content value to the content input port. Set mime_type in state (e.g. text/plain, application/json).

Prompt Nodes

Prompts are reusable message templates clients can request by name.

Prompt Definition

Defines a named prompt. Set name, description, and arguments in state. The args output provides the argument values passed by the caller.

Prompt Return

Returns a list of messages from a prompt handler. Connect an AiMessageList value to the messages input. Use Build Message and Build Message List nodes to construct the list.

Helper Nodes

Build Message

Expression node. Creates a single AiMessage from a string. Set role to user or assistant in state.

Build Message List

Expression node. Combines multiple messages into an AiMessageList. The number of inputs is configurable via the count state field.
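A minimal sketch of what these two helpers amount to, assuming AiMessage is a simple role/content pair (its real fields may differ):

```python
from dataclasses import dataclass

# Hypothetical stand-in for the AiMessage value; roles mirror the
# node's user/assistant setting.
@dataclass
class AiMessage:
    role: str
    content: str

def build_message(role: str, text: str) -> AiMessage:
    # Rough behavior of a Build Message node
    return AiMessage(role, text)

def build_message_list(*messages: AiMessage) -> list:
    # Rough behavior of a Build Message List node with count inputs
    return list(messages)
```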

Format Prompt

Expression node. Fills {param} slots in a template string with connected data values. Set the template in state — each {param_name} slot becomes a data input port.
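In Python terms, Format Prompt behaves roughly like str.format over the template (a sketch; the template and parameter names are illustrative):

```python
def format_prompt(template: str, **params) -> str:
    # Each {param_name} slot is filled from the matching data input.
    return template.format(**params)
```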

Output Structure

agents/
  <agent_name>/
    agent.py
    requirements.txt
server.py          <- the server (entry point, contains inline tools)
requirements.txt   <- merged dependencies
README.md          <- run instructions

server.py contains all inline tool, resource, and prompt definitions as decorated functions. Imported agents live in subfolders:

from agents.my_researcher.agent import AgentRunner as ResearcherAgent

Everything runs locally: pip install -r requirements.txt, then python server.py.

Importing Agents

Use the Agents panel on the left to import Agent projects. Click Add Agent, select a project, and an Agent Ref node appears on the canvas with ports stamped from the agent's interface contract.

When you import an agent, Loom takes a contract snapshot — a frozen copy of the agent's exposed ports and active persona at that moment. The ref node includes a persona dropdown with the selected_persona_id so you can choose which persona the agent compiles with.

Staleness

If the referenced agent changes after you import it — new tools added, ports renamed, or the agent's active persona changed — the ref node shows a stale warning. Click Refresh on the node to pull the latest contract. Ports that no longer exist are highlighted so you can rewire them.

Agent Ref Node

Represents a referenced Agent project. Ports are generated from the agent's interface contract. The node includes a persona dropdown showing available personas with their selected_persona_id. If the agent's active persona has changed since import, the node shows a stale warning.

Orchestration Nodes

Gateway

Entry node. Defines how your compiled server is exposed to clients. Set transport in state:

  • stdio — For Claude Desktop and local MCP clients
  • http — For remote clients over HTTP
  • sse — For remote clients using Server-Sent Events

Set port for HTTP/SSE transports. Exec flow from the Gateway node defines the request handling pipeline.

Request Router

Statement node. Routes incoming requests to different paths based on rules. Define routes in state — each route is a pattern or keyword that maps to a named exec output port. Add as many routes as you need; each becomes a route_* exec output.
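A first-match router over named patterns captures the idea (a sketch; the route names and rule format are illustrative):

```python
import re

def route_request(text: str, routes: dict):
    # The first route whose pattern or keyword matches the request wins;
    # each key corresponds to a route_* exec output.
    for name, pattern in routes.items():
        if re.search(pattern, text):
            return name
    return None
```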

Response Aggregator

Statement node. Collects results from multiple upstream components and combines them into a single output. Set strategy in state:

  • first — Returns the first result that arrives
  • merge — Merges dict results together
  • all — Returns all results as a list
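The three strategies can be sketched as follows (illustrative; the real node collects results from upstream exec flows):

```python
def aggregate(results: list, strategy: str = "all"):
    # first: the first result that arrived
    if strategy == "first":
        return results[0]
    # merge: combine dict results into one dict
    if strategy == "merge":
        merged = {}
        for result in results:
            merged.update(result)
        return merged
    # all: every result as a list
    return list(results)
```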

Server Logger

Statement node. Logs data at any point in the pipeline. Set log_mode to console, file, or both. Connect any value to the data input and optionally a label string.
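A rough Python equivalent of the three log_mode settings, using the standard logging module (the logger name and setup details are assumptions):

```python
import logging

def make_server_logger(log_mode: str = "console", path: str = "server.log"):
    # Attach a console handler, a file handler, or both,
    # depending on the node's log_mode state.
    logger = logging.getLogger("server")
    logger.handlers.clear()
    logger.setLevel(logging.INFO)
    if log_mode in ("console", "both"):
        logger.addHandler(logging.StreamHandler())
    if log_mode in ("file", "both"):
        logger.addHandler(logging.FileHandler(path))
    return logger
```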

Deployment

Download ZIP

Click Generate to download a ZIP of the full output structure. Unzip, install dependencies, and run:

pip install -r requirements.txt
python server.py

Push to GitHub

Click Push to GitHub to push the output directly to your connected repository. The folder structure is written to the repo root (or a configured subdirectory).

If your deployment platform (Railway, Render, Fly.io) is connected to the repo, it redeploys automatically on every push. The workflow becomes:

  1. Update a persona, add a tool, or rewire the server graph
  2. Click Generate + Push to GitHub
  3. Your deployment picks up the commit and redeploys
  4. Claude Desktop reconnects to the updated server

Adding to Claude Desktop

Point Claude Desktop at server.py using stdio transport:

{
  "mcpServers": {
    "my_system": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}

The server exposes all inline tools plus tools from imported agents as a single unified MCP server. Claude sees one server with all the tools — it doesn't know or care that some come from imported agent projects underneath.