Using Generated Code
The code Loom generates is standard Python with no Loom dependency. This section covers how to use it in real projects after exporting.
Agent Integration
As a standalone CLI
python agent.py
# Prompts: "Enter your message: "
# Prints: "Agent output: ..."
Importing into another Python project
from agent import AgentRunner
agent = AgentRunner()
# Simple call
result = agent.run("Summarize this document")
print(result)
# With named variables (if you promoted fields to input variables)
result = agent.run("Summarize this", temperature=0.3, context="academic paper")
# In a web server (FastAPI)
from fastapi import FastAPI
app = FastAPI()
@app.post("/chat")
async def chat(message: str, temperature: float = 0.7):
    agent = AgentRunner()
    return agent.run(message, temperature=temperature)
Feeding input at runtime
There are three input mechanisms, from most to least composable:
- Named Variables (recommended) — Input variables become typed parameters on run(). Callers pass values programmatically:
  agent.run("Hello", temperature=0.9, context="Be creative")
  Use Get Variable nodes in the graph to read these values. See the Named Variables topic for details.
- initial_input parameter — Always the first argument to run(). The Agent Start node's initial_input port maps to this. This is the primary user message.
- human_input node — Places a blocking input() call mid-flow. Only works in CLI contexts. For web/API agents, use Named Variables instead.
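To see how the first two mechanisms map onto a call, here is a minimal stub standing in for the generated run() — an illustrative signature only (the real generated agent.py may differ, and the context variable name is assumed):

```python
# Illustrative stub for AgentRunner.run() — NOT the generated code.
# initial_input   -> first positional argument (Agent Start's initial_input port)
# named variables -> keyword arguments read by Get Variable nodes in the graph

def run(initial_input: str, **variables) -> str:
    context = variables.get("context")  # a promoted input variable (assumed name)
    prefix = f"[{context}] " if context else ""
    return prefix + initial_input

print(run("Summarize this document"))                        # initial_input only
print(run("Hello", temperature=0.9, context="Be creative"))  # named variables
```

The point is the calling convention: the primary message is always positional, and any fields you promoted to input variables arrive as keyword arguments.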
Modifying the generated code
Common modifications after export:
# Change the system prompt without regenerating
SYSTEM_PROMPT = "Your new system prompt"
# Swap the memory backend (look for this comment in agent.py)
# MEMORY BACKEND — swap this dict for SQLite, Redis, or a memory MCP server
_memory: dict = {} # ← replace with your backend
# Add custom tools
# Find the dispatch_tool section and add your own handlers
# Change the model
MODEL = "claude-sonnet-4-20250514"  # ← or "gpt-4o", "gemini-2.0-flash", "grok-3"
Server Graph Integration
Running locally
pip install -r requirements.txt
python server.py
This starts the server. By default it runs with stdio transport (for Claude Desktop). Set transport to http or sse on the Gateway node to run as a network server.
Deploying to Railway / Render / Fly.io
- Push to GitHub via the Push to GitHub button in Loom
- Connect your deployment platform to the repo
- Set the start command to python server.py
- Set the port environment variable if using HTTP/SSE transport
The output is a standard Python project — it works with any deployment platform that runs Python.
Folder structure after export
agents/
researcher/ <- each imported agent is self-contained
agent.py
requirements.txt
server.py <- the server (inline tools + orchestration)
requirements.txt <- merged dependencies (deduplicated)
README.md          <- run instructions
Agent subfolders are independently runnable. server.py imports from them via relative imports and contains all inline tool definitions. You can also run individual agents standalone:
python agents/researcher/agent.py # run one agent alone
python server.py                   # run the full system
Environment Variables
Generated code reads an API key from the environment based on the persona's model provider:
export ANTHROPIC_API_KEY=sk-ant-... # Claude models
export OPENAI_API_KEY=sk-... # GPT / o-series models
export XAI_API_KEY=xai-... # Grok models
export GOOGLE_API_KEY=AI...      # Gemini models
Only the key for your chosen provider is required. No other environment variables are needed unless you add them to your tool handlers or server configuration.
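If you want to fail fast with a clear message when the key is missing, a small guard like the following can be added near the top of agent.py. This is a sketch, not generated code; the provider names and environment-variable mapping are taken from the list above.

```python
import os

# Provider -> expected environment variable (from the list above)
PROVIDER_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "xai": "XAI_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def require_api_key(provider: str) -> str:
    """Return the API key for the chosen provider, or exit with a clear error."""
    var = PROVIDER_KEYS[provider]
    key = os.environ.get(var)
    if not key:
        raise SystemExit(f"Missing {var} — export it before running the agent.")
    return key
```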