Creating a new Service
Services are containerized tools that Agents can call to perform complex operations. This guide walks through the full lifecycle of creating a service — from project structure to deployment.
Prerequisites
Before you begin, you'll need:
- A running server in Agent Studio with Docker available
- Familiarity with Docker and your chosen programming language
- An understanding of how services work
Project structure
Every service follows a standard structure. Here's the layout for a Python-based service:
```
my-service/
├── .devcontainer/
│   ├── Dockerfile          # Container image definition
│   └── devcontainer.json   # Dev container config (optional)
├── service.yaml            # Service configuration — tools, inputs, outputs
├── entrypoint.py           # Main entry point
├── my_tool.py              # Tool implementation
├── requirements.txt        # Dependencies
├── README.md               # Service documentation and usage examples
└── AGENTS.md               # Instructions for the Agent when editing this service
```
You can start from a template by asking the Agent to list available templates, or by browsing the templates in ~/.renderedai/templates.
service.yaml
The service.yaml file defines your service's metadata, tools, inputs, and outputs. This is how the platform and Agents understand what your service can do.
```yaml
service:
  name: my-service
  description: A brief description of what the service does
  volumes: []
  tools:
    process:
      description: Processes input data and produces results
      inputs:
        input_file:
          type: string
          required: true
          description: Path to the input file
        output_directory:
          type: string
          required: false
          description: Directory to write output files
      outputs: []
git:
  remote: https://github.com/your-org/my-service.git
  branch: main
  autotag: true
  autocommit: true
rules: README.md
```
Key fields
- `service.name` — The service identifier. Must be unique within your Organization.
- `service.description` — Displayed in the service catalog and read by Agents.
- `service.volumes` — Volume dependencies the service requires.
- `service.tools` — Each tool is an executable operation the Agent can call. Define inputs with a type, required flag, and description.
- `rules` — Path to a markdown file containing rules for how the Agent should use this service.
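As a rough illustration of how the inputs schema constrains tool calls, the snippet below validates a call against a hand-copied mirror of the `service.yaml` tools section. The `SCHEMA` dict and `validate_inputs` helper are illustrative only, not something the platform provides:

```python
# Hypothetical mirror of the inputs schema from service.yaml above.
SCHEMA = {
    "process": {
        "input_file": {"type": "string", "required": True},
        "output_directory": {"type": "string", "required": False},
    }
}

def validate_inputs(tool, inputs):
    """Return a list of problems with a tool call's inputs (empty if valid)."""
    spec = SCHEMA.get(tool)
    if spec is None:
        return [f"unknown tool: {tool}"]
    problems = []
    for name, rules in spec.items():
        if rules["required"] and name not in inputs:
            problems.append(f"missing required input: {name}")
    for name in inputs:
        if name not in spec:
            problems.append(f"unexpected input: {name}")
    return problems
```

Checking inputs up front like this lets a service print a precise error instead of failing partway through a run.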
Dockerfile
The Dockerfile defines your service's container image. Place it at .devcontainer/Dockerfile.
```dockerfile
FROM ubuntu:24.04

RUN apt-get update && apt-get install -y python3 python3-venv && \
    python3 -m venv /venv
ENV PATH="/venv/bin:$PATH"

WORKDIR /service
COPY . /service
RUN /venv/bin/pip install -r /service/requirements.txt

USER ubuntu

ENTRYPOINT ["python3"]
CMD ["/service/entrypoint.py"]
```
Key points:
- Use `USER ubuntu` (uid=1000, gid=1000) for correct file permissions when writing to mounted volumes.
- Install dependencies during the build so they're cached in the image layer.
- Set the entrypoint to run your main script.
Entrypoint
The entrypoint script reads the payload environment variable, parses it as JSON, and dispatches to the appropriate tool.
```python
import os
import json

from my_tool import process

payload = os.getenv("payload")
payload = json.loads(payload)
tool = payload.get("tool")
inputs = payload.get("inputs")

if tool == "process":
    input_file = inputs.get("input_file")
    output_directory = inputs.get("output_directory", "/workspace")
    process(input_file, output_directory)
else:
    print(f"Unknown tool: {tool}")
```
The payload structure is always:
```json
{
  "tool": "tool_name",
  "inputs": {
    "param1": "value1",
    "param2": "value2"
  }
}
```
⚠ Warning
Always include the "inputs" object in the payload, even if it's empty. Services will fail if the inputs key is missing.
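If you want the entrypoint to fail with a clear message rather than a raw traceback when the payload is malformed, the parsing step can be made defensive. The `parse_payload` helper below is an illustrative sketch, not part of the platform API:

```python
import json

def parse_payload(raw):
    """Parse the payload JSON string, failing loudly on malformed payloads."""
    if not raw:
        raise ValueError("payload environment variable is not set")
    payload = json.loads(raw)
    tool = payload.get("tool")
    if not tool:
        raise ValueError("payload is missing the 'tool' field")
    # Fall back to an empty dict so tools without parameters still run,
    # even though well-formed payloads should always include "inputs".
    inputs = payload.get("inputs") or {}
    return tool, inputs
```

The entrypoint would then call `parse_payload(os.getenv("payload"))` before dispatching.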
Progress updates
For long-running operations, you can send progress updates back to the platform. This gives users visibility into what the service is doing.
```python
import os
import requests

update_url = os.environ.get("UPDATE_URL")
api_key = os.environ.get("RENDEREDAI_API_KEY")

def update_progress(status, message, progress=None):
    if update_url and api_key:
        requests.put(update_url, json={
            "status": status,      # running, failed, success
            "progress": progress,  # 0-100
            "message": message,    # Max 256 characters
        }, headers={"x-api-key": api_key})
```
Call update_progress("running", "Processing step 1 of 3...", 33) from within your tool to keep users informed.
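For example, a multi-step tool might report progress at the start of each stage. The `report` parameter below is a hypothetical stand-in for `update_progress` so the tool can also run locally without platform credentials:

```python
def process(input_file, output_directory, report=lambda *args: None):
    """Illustrative three-step tool that reports progress as it works."""
    steps = ["Reading input", "Transforming data", "Writing output"]
    for i, step in enumerate(steps, start=1):
        report("running", f"{step} ({i} of {len(steps)})...", int(100 * i / len(steps)))
        # ... real work for this step would go here ...
    report("success", "Done", 100)
```

On the platform you would wire it up with `process(input_file, output_directory, report=update_progress)`; locally the default no-op keeps the tool runnable.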
Volume mounts
Services write output to the /workspace directory. There are three volume options:
- Local volumes — Mounted from the server's `/workspace` directory. Good for local development and testing.
- Workspace volumes (recommended) — Network volumes that persist across servers. Located at `/workspace/volumes/<volume-name>/`.
- Service volumes — Network volumes attached directly to the service.
ℹ Tip
Always confirm the output location with the user before running a service. Default to writing to the workspace volume.
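One way to honor that default in a tool is to prefer a workspace volume when it is mounted and fall back to `/workspace` otherwise. This helper is illustrative, not a platform API:

```python
import os

def choose_output_dir(volume_name, base="/workspace"):
    """Prefer a mounted workspace volume; fall back to the base directory."""
    volume_dir = os.path.join(base, "volumes", volume_name)
    return volume_dir if os.path.isdir(volume_dir) else base
```

A tool could call `choose_output_dir("my-volume")` when the user has not specified `output_directory` explicitly.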
Testing locally
Build and test your service with Docker before deploying:
```bash
# Build the image
docker build -t my-service -f .devcontainer/Dockerfile .

# Run with a test payload
docker run \
  -e payload='{"tool":"process","inputs":{"input_file":"/workspace/input.txt"}}' \
  -v /workspace:/workspace \
  -u $(id -u):$(id -g) \
  my-service
```
For GPU-enabled services, add --gpus all to the docker run command.
You can also mount your service source code for faster iteration without rebuilding:
```bash
docker run \
  -e payload='{"tool":"process","inputs":{"input_file":"/workspace/input.txt"}}' \
  -v /workspace:/workspace \
  -v $(pwd):/service \
  -u $(id -u):$(id -g) \
  my-service
```
Deploying
Once your service is tested locally, deploy it to the platform. There are two deployment methods:
Deploying from a server (Agent deploy tool)
From within a running server, you can deploy using the Agent's built-in deploy tool:
- Review your `service.yaml`, `README.md`, and Dockerfile to ensure they're aligned.
- Test the service using the Agent's test tool to verify it runs correctly on the platform.
- Deploy using the Agent's deploy tool. The Agent will build the container, push it to the registry, and register the service.
Deploying with anadeploy (CLI)
The anadeploy command from the anatools Python package lets you deploy services from any terminal — including your local machine or a CI/CD pipeline.
```bash
# Install anatools
pip install anatools

# Deploy from the service directory (auto-detects service.yaml)
cd my-service/
anadeploy

# Or specify the service file explicitly
anadeploy --service path/to/service.yaml

# Deploy to a specific existing service by ID
anadeploy --serviceId <service-id>

# Deploy without interactive prompts (for CI/CD)
anadeploy --noninteractive
```
anadeploy will:
- Authenticate with the platform (using your saved credentials or `--email`/`--password`)
- Build and push the container image
- Register the service version with the platform
- Optionally save the remote config back to `service.yaml` for future deploys
- Sync service rules from your local `README.md` if configured
If your service.yaml includes a git section with autocommit: true, anadeploy will also commit and push the deployment to your git remote.
After deployment, the service appears in your Organization's service catalog and can be attached to any workspace.
⚠ Warning
Always test your service locally before deploying. Deploying a broken service will make it unavailable to all workspaces that have it attached.
Best practices
- Keep services focused — Each service should do one thing well. Prefer multiple small services over one large monolith.
- Write clear tool descriptions — Agents rely on these descriptions to understand when and how to use each tool.
- Document with README.md — Include Docker run examples and detailed parameter descriptions. This file is used as service rules.
- Handle errors gracefully — Print clear error messages so the Agent can diagnose issues and retry or ask the user for help.
- Use progress updates — For operations longer than a few seconds, send progress updates so users know the service is working.
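Putting a few of these practices together, the entrypoint's dispatch step can catch tool failures and print a clear, Agent-readable message before exiting nonzero. The `dispatch` helper below is a sketch of one way to structure this, not a required pattern:

```python
import traceback

def dispatch(tool, inputs, tools):
    """Run a tool from a name->callable map; return a process exit code."""
    handler = tools.get(tool)
    if handler is None:
        print(f"Unknown tool: {tool}. Available: {', '.join(sorted(tools))}")
        return 1
    try:
        handler(**inputs)
        return 0
    except Exception as exc:
        # A clear message helps the Agent decide whether to retry
        # or ask the user for help.
        print(f"Tool '{tool}' failed: {exc}")
        traceback.print_exc()
        return 1
```

The entrypoint can then end with `sys.exit(dispatch(tool, inputs, {"process": process}))` so a failure is visible in the container's exit status.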

