Using Generated Code

The code Loom generates is standard Python with no Loom dependency. This section covers how to use it in real projects after exporting.

Agent Integration

As a standalone CLI

python agent.py
# Prompts: "Enter your message: "
# Prints: "Agent output: ..."

Importing into another Python project

from agent import AgentRunner

agent = AgentRunner()

# Simple call
result = agent.run("Summarize this document")
print(result)

# With named variables (if you promoted fields to input variables)
result = agent.run("Summarize this", temperature=0.3, context="academic paper")

# In a web server (FastAPI)
# run with: uvicorn app:app  (assuming this file is saved as app.py)
from fastapi import FastAPI

app = FastAPI()
agent = AgentRunner()  # reuse one instance instead of rebuilding it per request

@app.post("/chat")
async def chat(message: str, temperature: float = 0.7):
    return agent.run(message, temperature=temperature)

Feeding input at runtime

There are three input mechanisms, from most to least composable:

  1. Named Variables (recommended) — Input variables become typed parameters on run(). Callers pass values programmatically:
    agent.run("Hello", temperature=0.9, context="Be creative")
    Use Get Variable nodes in the graph to read these values. See the Named Variables topic for details.
  2. initial_input parameter — Always the first argument to run(). The Agent Start node's initial_input port maps to this. This is the primary user message.
  3. human_input node — Places a blocking input() call mid-flow. Only works in CLI contexts. For web/API agents, use Named Variables instead.
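Putting the first two mechanisms together, the generated run() interface typically looks something like the sketch below. This is a hypothetical shape, not the literal export: the keyword parameter names (temperature, context) stand in for whatever variables you promoted, and the body is a stub.

```python
class AgentRunner:
    # Hypothetical sketch of the generated interface, for illustration only.
    def run(self, initial_input: str, *, temperature: float = 0.7,
            context: str = "") -> str:
        # initial_input maps to the Agent Start node's initial_input port;
        # keyword arguments correspond to promoted Named Variables.
        return f"[{temperature}] {context}: {initial_input}"

agent = AgentRunner()
result = agent.run("Hello", temperature=0.9, context="Be creative")
```

Because initial_input is positional and everything else is keyword-only, callers cannot accidentally pass a promoted variable as the user message.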

Modifying the generated code

Common modifications after export:

# Change the system prompt without regenerating
SYSTEM_PROMPT = "Your new system prompt"

# Swap the memory backend (look for this comment in agent.py)
# MEMORY BACKEND — swap this dict for SQLite, Redis, or a memory MCP server
_memory: dict = {}  # ← replace with your backend

# Add custom tools
# Find the dispatch_tool section and add your own handlers

# Change the model
MODEL = "claude-sonnet-4-20250514"  # ← or "gpt-4o", "gemini-2.0-flash", "grok-3"
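As a concrete example of the memory-backend swap, the dict can be replaced by any object with dict-like access. The sketch below backs it with SQLite from the standard library; the class name and string-only values are assumptions you should adapt to what your agent actually stores.

```python
import sqlite3

class SqliteMemory:
    """Dict-like store backed by SQLite (a sketch; adapt key/value
    types and the table schema to your agent's needs)."""

    def __init__(self, path: str):
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)"
        )

    def __setitem__(self, key: str, value: str) -> None:
        self._db.execute(
            "INSERT OR REPLACE INTO memory VALUES (?, ?)", (key, value)
        )
        self._db.commit()

    def __getitem__(self, key: str) -> str:
        row = self._db.execute(
            "SELECT value FROM memory WHERE key = ?", (key,)
        ).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

    def get(self, key: str, default=None):
        try:
            return self[key]
        except KeyError:
            return default

# Use a file path like "agent_memory.db" to persist across runs.
_memory = SqliteMemory(":memory:")
```

Because the generated code only reads and writes _memory through dict-style access, no other lines need to change.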

Server Graph Integration

Running locally

pip install -r requirements.txt
python server.py

This starts the server. By default it runs with stdio transport (for Claude Desktop). Set transport to http or sse on the Gateway node to run as a network server.

Deploying to Railway / Render / Fly.io

  1. Push to GitHub via the Push to GitHub button in Loom
  2. Connect your deployment platform to the repo
  3. Set the start command to python server.py
  4. Set the port environment variable if using HTTP/SSE transport

The output is a standard Python project — it works with any deployment platform that runs Python.

Folder structure after export

agents/
  researcher/             <- each imported agent is self-contained
    agent.py
    requirements.txt
server.py                 <- the server (inline tools + orchestration)
requirements.txt          <- merged dependencies (deduplicated)
README.md                 <- run instructions

Agent subfolders are independently runnable. server.py imports from them via relative imports and contains all inline tool definitions. You can also run individual agents standalone:

python agents/researcher/agent.py              # run one agent alone
python server.py                               # run the full system

Environment Variables

Generated code reads an API key from the environment based on the persona's model provider:

export ANTHROPIC_API_KEY=sk-ant-...   # Claude models
export OPENAI_API_KEY=sk-...          # GPT / o-series models
export XAI_API_KEY=xai-...            # Grok models
export GOOGLE_API_KEY=AI...           # Gemini models

Only the key for your chosen provider is required. No other environment variables are needed unless you add them to your tool handlers or server configuration.
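The lookup inside the generated code amounts to something like this sketch (the function and mapping names are illustrative, not the literal export; the environment variable names match the table above):

```python
import os

# Maps each supported provider to its expected environment variable.
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "xai": "XAI_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def get_api_key(provider: str) -> str:
    """Read the key for the chosen provider, failing loudly if unset."""
    var = PROVIDER_ENV_VARS[provider]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running the agent")
    return key
```

Failing at startup with the exact variable name is friendlier than letting the provider SDK raise an authentication error mid-run.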