Server Graph
A Server Graph project builds a complete MCP server — defining tools, resources, and prompts inline on the canvas, importing agents as reusable components, and wiring orchestration logic between them.
How It Works
The Server Graph canvas is where you define your server's capabilities directly. Tools, resources, and prompts are defined inline using definition nodes — each one starts an exec flow that contains the handler logic. Agents from other Weave AI projects can be imported and called as part of tool or orchestration logic.
The compiled output is a folder structure you can run with one command or push directly to a git repo for automatic deployment.
Inline Tool Definition
Tools are defined directly on the Server Graph canvas using the Tool Definition node. The pattern is: Tool Definition → handler logic → Tool Return.
- Add a Tool Definition node — set name, description, and parameters
- Use Get Tool Arg nodes to extract parameter values from the tool input
- Wire your handler logic (core nodes, agent calls, etc.)
- End with a Tool Return node (or Tool Error for error cases)
Each Tool Definition compiles to an @server.tool() decorated async function inside server.py. All tools live in the same file — no separate server projects needed.
Between Tool Definition and Tool Return you can also use error handling nodes (Try/Catch, Retry), Parallel for concurrent execution, Check Response for pattern matching on AI text output, HTTP Request for external API calls, data transformation nodes (JSON Parse, JSON Stringify, Dict Get, Cache), File I/O (File Read, File Write), and RAG nodes (Embed Text, Chunk Text).
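The exact code the compiler emits depends on the MCP SDK in use, but the shape is sketched below with a stub decorator standing in for the real @server.tool(); the tool name search_web and its parameters are hypothetical:

```python
# Standalone sketch of the compiled shape of a Tool Definition.
# `server.tool` is a stub registry standing in for the MCP SDK
# decorator; the tool name and params are hypothetical.
import asyncio

class _StubServer:
    def __init__(self):
        self.tools = {}

    def tool(self):
        def register(fn):
            self.tools[fn.__name__] = fn
            return fn
        return register

server = _StubServer()

@server.tool()
async def search_web(arguments: dict) -> str:
    # Get Tool Arg nodes compile to lookups on the validated input.
    query = arguments["query"]          # Get Tool Arg (param_name="query")
    limit = arguments.get("limit", 5)   # optional param with a default
    # Handler logic would go here (core nodes, agent calls, ...).
    # Tool Return compiles to a plain return of the connected value.
    return f"searched for {query!r} (limit={limit})"

result = asyncio.run(search_web({"query": "mcp servers"}))
```

The point is the structure, not the stub: one decorated async function per Tool Definition, all living side by side in server.py.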
Tool Nodes
Tools are functions your server exposes for clients to call.
Tool Definition
Defines a tool and starts its handler logic. Configure in the node state:
- name — The tool name (snake_case, e.g. search_web)
- description — What the tool does. Claude uses this to decide when to call it.
- params — List of parameters with name, type, and description
The input data output port provides the validated tool arguments as a McpToolInput value. Use Get Tool Arg nodes to extract individual parameters.
Generates a @server.tool() decorated async function.
Tool Return
Ends a tool handler and returns a value to the caller. Connect any value to the value input port. Accepts strings, numbers, booleans, lists, and dicts.
Tool Error
Returns an error from a tool handler. Set an error_code in state and connect an error message string to the message input.
Get Tool Arg
Expression node. Extracts a named parameter from a McpToolInput value. Set param_name in state to match one of the parameters defined on your Tool Definition.
Resource Nodes
Resources expose data that clients can read by URI.
Resource Definition
Defines a static resource at a fixed URI. Set uri_template, description, and mime_type in state. The uri_params output provides any path parameters if the URI contains variables.
Resource Template
Defines a parameterized resource where the URI contains variables (e.g. /users/{user_id}/profile). The params output provides the matched URI parameters.
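How a template URI yields the params output can be sketched with a regex translation; this is an illustration of the matching semantics, not the actual runtime code:

```python
# Sketch of Resource Template matching: turn /users/{user_id}/profile
# into a regex and extract the URI parameters. The runtime's actual
# matcher may differ.
import re

def match_uri_template(template: str, uri: str):
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(pattern, uri)
    return m.groupdict() if m else None

params = match_uri_template("/users/{user_id}/profile", "/users/42/profile")
# params == {"user_id": "42"}
```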
Resource Return
Returns content from a resource handler. Connect your content value to the content input port. Set mime_type in state (e.g. text/plain, application/json).
Prompt Nodes
Prompts are reusable message templates clients can request by name.
Prompt Definition
Defines a named prompt. Set name, description, and arguments in state. The args output provides the argument values passed by the caller.
Prompt Return
Returns a list of messages from a prompt handler. Connect an AiMessageList value to the messages input. Use Build Message and Build Message List nodes to construct the list.
Helper Nodes
Build Message
Expression node. Creates a single AiMessage from a string. Set role to user or assistant in state.
Build Message List
Expression node. Combines multiple messages into an AiMessageList. The number of inputs is configurable via the count state field.
Format Prompt
Expression node. Fills {param} slots in a template string with connected data values. Set the template in state — each {param_name} slot becomes a data input port.
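The three helper expression nodes compute simple values; a sketch, with AiMessage modeled as a plain dict (the compiled type may differ):

```python
# Sketch of the three helper expression nodes. AiMessage is modeled
# as a plain dict here; the compiled type may differ.

def build_message(role: str, text: str) -> dict:
    # Build Message: one AiMessage from a string, role set in state.
    return {"role": role, "content": text}

def build_message_list(*messages: dict) -> list:
    # Build Message List: combines `count` inputs into an AiMessageList.
    return list(messages)

def format_prompt(template: str, **params) -> str:
    # Format Prompt: each {param_name} slot becomes a data input port.
    return template.format(**params)

text = format_prompt("Summarize {topic} in {n} bullet points.", topic="MCP", n="3")
messages = build_message_list(build_message("user", text))
```

A Prompt Return node would receive the resulting list on its messages input.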
Output Structure
agents/
<agent_name>/
agent.py
requirements.txt
server.py <- the server (entry point, contains inline tools)
requirements.txt <- merged dependencies
README.md <- run instructions
server.py contains all inline tool, resource, and prompt definitions as decorated functions. Imported agents live in subfolders:
from agents.my_researcher.agent import AgentRunner as ResearcherAgent
Everything is local. pip install -r requirements.txt then python server.py.
Importing Agents
Use the Agents panel on the left to import Agent projects. Click Add Agent, select a project, and an Agent Ref node appears on the canvas with ports derived from the agent's Named Variables (input variables become input ports, output variables become output ports).
When you import an agent, Loom takes a contract snapshot — a frozen copy of the agent's variable-derived I/O ports and available personas at that moment. The ref node includes a persona dropdown, stored as selected_persona_id in state, so you can choose which persona the agent compiles with.
Staleness
If the referenced agent changes after you import it — new tools added, ports renamed, or the agent's active persona changed — the ref node shows a stale warning. Click Refresh on the node to pull the latest contract. Ports that no longer exist are highlighted so you can rewire them.
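The staleness check amounts to diffing the frozen snapshot against the agent's current contract; a sketch with hypothetical field names:

```python
# Sketch of the staleness check behind the stale warning. The
# snapshot/contract field names here are hypothetical.

def diff_contract(snapshot: dict, current: dict) -> dict:
    snap_ports, cur_ports = set(snapshot["ports"]), set(current["ports"])
    return {
        "removed_ports": sorted(snap_ports - cur_ports),  # need rewiring
        "added_ports": sorted(cur_ports - snap_ports),
        "persona_changed": snapshot["active_persona"] != current["active_persona"],
    }

changes = diff_contract(
    {"ports": ["query", "result"], "active_persona": "analyst"},
    {"ports": ["query", "summary", "result"], "active_persona": "critic"},
)
stale = bool(changes["removed_ports"] or changes["added_ports"] or changes["persona_changed"])
```

Refresh replaces the snapshot with the current contract, clearing the warning.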
Agent Ref Node
Represents a referenced Agent project. Ports are derived from the agent's Named Variables (inputs and outputs only — local variables are not exposed). The node includes a persona dropdown listing the available personas; the choice is stored as selected_persona_id. If the agent's active persona has changed since import, the node shows a stale warning.
Orchestration Nodes
Gateway
Entry node. Defines how your compiled server is exposed to clients. Set transport in state:
- stdio — For Claude Desktop and local MCP clients
- http — For remote clients over HTTP
- sse — For remote clients using Server-Sent Events
Set port for HTTP/SSE transports. Exec flow from the Gateway node defines the request handling pipeline.
Request Router
Statement node. Routes incoming requests to different paths based on rules. Define routes in state — each route is a pattern or keyword that maps to a named exec output port. Add as many routes as you need; each becomes a route_* exec output.
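Keyword-based dispatch to named route_* outputs can be sketched as a first-match scan; the route table below is hypothetical:

```python
# Sketch of Request Router dispatch: each configured route maps a
# keyword to a named route_* exec output. Routes are hypothetical.

ROUTES = {"search": "route_search", "summarize": "route_summarize"}

def route_request(message: str, default: str = "route_default") -> str:
    lowered = message.lower()
    for keyword, exec_port in ROUTES.items():
        if keyword in lowered:
            return exec_port  # the exec flow continues on this output
    return default

port = route_request("Please search for recent MCP clients")
```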
Response Aggregator
Statement node. Collects results from multiple upstream components and combines them into a single output. Set strategy in state:
- first — Returns the first result that arrives
- merge — Merges dict results together
- all — Returns all results as a list
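The three strategies can be sketched over a list of already-collected upstream results:

```python
# Sketch of the three Response Aggregator strategies.

def aggregate(results: list, strategy: str):
    if strategy == "first":
        return results[0]             # first result that arrived
    if strategy == "merge":
        merged = {}
        for r in results:             # later dicts overwrite earlier keys
            merged.update(r)
        return merged
    if strategy == "all":
        return list(results)          # everything, in arrival order
    raise ValueError(f"unknown strategy: {strategy}")

results = [{"a": 1}, {"b": 2, "a": 3}]
```

Note that merge is order-sensitive: when two results share a key, the later one wins.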
Server Logger
Statement node. Logs data at any point in the pipeline. Set log_mode to console, file, or both. Connect any value to the data input and optionally a label string.
Event System
Events are the connective tissue between agents, tools, and the outside world in a Server Graph. They let you react to things that happen — an agent finishing, an error occurring, an inbound HTTP request — without coupling the handler logic directly into the main request flow.
Events live exclusively in the Server Graph. Agent graphs have their own control flow via exec edges — adding a parallel pub/sub system inside a single agent's reasoning would create competing control flows.
Emit Event
Statement node. Fires a named event with an optional payload. Any On Event node subscribed to the same event name will trigger.
Set event_name in state. Connect any value to the payload input port.
On Event
Entry node. Subscribes to a named event and starts an exec flow when it fires. Set event_name in state to match the name used in Emit Event.
Outputs:
- payload — The value attached to the event by the emitter
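The Emit Event / On Event pair behaves like a minimal pub/sub bus; a sketch of the semantics:

```python
# Minimal pub/sub sketch of Emit Event / On Event semantics.
from collections import defaultdict

subscribers = defaultdict(list)  # event_name -> subscribed exec flows

def on_event(event_name):
    # On Event: subscribe a handler to a named event.
    def register(handler):
        subscribers[event_name].append(handler)
        return handler
    return register

def emit_event(event_name, payload=None):
    # Emit Event: every subscriber to the same name triggers.
    for handler in subscribers[event_name]:
        handler(payload)

seen = []

@on_event("agent_finished")
def log_completion(payload):
    seen.append(payload)

emit_event("agent_finished", {"agent": "researcher"})
```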
On Agent Complete
Entry node. Fires when a specific Agent Ref node finishes execution. Select which agent to watch via agent_ref_index in state.
Outputs:
- result — The agent's return value
- agent_name — The name of the agent that completed
On Agent Error
Entry node. Fires when a specific Agent Ref node throws an unrecoverable error — meaning the agent exhausted its internal retries and error handling. Select which agent to watch via agent_ref_index in state.
Outputs:
- error — The error message string
- agent_name — The name of the agent that failed
On Webhook
Entry node. Fires on an inbound HTTP request to a configured path. Set path (e.g. /webhook) and method (POST or GET) in state.
Outputs:
- body — The request body as a dict
- headers — Request headers as a dict
- path — The matched URL path
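Dispatch on (method, path) pairs can be sketched as a lookup table; the path and response shapes are hypothetical:

```python
# Sketch of On Webhook matching: an inbound request fires the entry
# node whose configured path and method match. Handlers and paths
# are hypothetical.

webhooks = {
    ("POST", "/webhook"): lambda body, headers, path: {"ok": True, "got": body},
}

def dispatch(method, path, body=None, headers=None):
    handler = webhooks.get((method, path))
    if handler is None:
        return {"status": 404}
    return handler(body or {}, headers or {}, path)

resp = dispatch("POST", "/webhook", body={"event": "push"})
```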
Error Handling: Three Layers
Errors follow a strict escalation model. Each layer handles what it can and only escalates what it cannot.
- Agent graph (internal recovery) — Try/Catch, Retry, Check Response nodes handle transient failures inside the agent's cognitive loop. If the agent recovers, the server graph never knows anything went wrong.
- Server graph events (observation) — On Agent Error fires only when an error escapes the agent graph — the agent exhausted its retries or the error was unrecoverable. It observes the failure; it does not reach inside the agent to handle it.
- Server graph orchestration (fallback) — The server graph decides what happens after an agent fails. Route to a fallback agent, return a cached response, notify an operator, or let the error propagate to the client.
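The three layers above can be sketched as nested error handling; the function names and the cached-response fallback are hypothetical:

```python
# Sketch of the three-layer escalation model. Names and the fallback
# value are hypothetical.

def run_agent_with_retries(agent, attempts=3):
    # Layer 1: the agent graph retries transient failures internally.
    last_error = None
    for _ in range(attempts):
        try:
            return agent()
        except Exception as exc:
            last_error = exc
    # Retries exhausted: the error escapes the agent graph.
    raise last_error

def orchestrate(agent, on_agent_error, fallback):
    try:
        return run_agent_with_retries(agent)
    except Exception as exc:
        on_agent_error(str(exc))   # Layer 2: On Agent Error observes
        return fallback()          # Layer 3: orchestration decides

errors = []

def flaky_agent():
    raise RuntimeError("model timeout")

result = orchestrate(flaky_agent, errors.append, lambda: "cached response")
```

If the agent succeeds on any attempt, layers 2 and 3 never run — the server graph never knows anything went wrong.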
Deployment
Download ZIP
Click Generate to download a ZIP of the full output structure. Unzip, install dependencies, and run:
pip install -r requirements.txt
python server.py
Push to GitHub
Click Push to GitHub to push the output directly to your connected repository. The folder structure is written to the repo root (or a configured subdirectory).
If your deployment platform (Railway, Render, Fly.io) is connected to the repo, it redeploys automatically on every push. The workflow becomes:
- Update a persona, add a tool, or rewire the server graph
- Click Generate + Push to GitHub
- Your deployment picks up the commit and redeploys
- Claude Desktop reconnects to the updated server
Connecting MCP Clients
Your compiled server is a standard MCP server. Any MCP-compatible client can connect to it. The configuration format varies by client, but the pattern is the same: point the client at server.py.
stdio transport (local clients):
{
"mcpServers": {
"my_system": {
"command": "python",
"args": ["/path/to/server.py"]
}
}
}
HTTP/SSE transport (remote clients): Run the server on a host and point clients at the URL (e.g. http://localhost:8080).
Compatible Clients
| Client | Transport | Config Location |
|---|---|---|
| Claude Desktop | stdio | claude_desktop_config.json |
| Claude Code | stdio | .mcp.json in project root |
| Cursor | stdio | Cursor settings → MCP |
| Windsurf | stdio | Windsurf settings → MCP |
| VS Code Copilot | stdio | .vscode/mcp.json |
| Continue.dev | stdio / SSE | ~/.continue/config.json |
| Zed | stdio | Zed settings → Extensions |
The server exposes all inline tools plus tools from imported agents as a single unified MCP server. Clients see one server with all the tools — they don't know or care that some come from imported agent projects underneath.