# Blueprints

Blueprints are pre-configured agent templates that bundle a sandbox image, inference profiles, network policies, and resource limits into a single deployable spec. They follow the NemoClaw-compatible blueprint format.

## Using a Blueprint

Create an agent from a blueprint:

```shell
legible agent create my-analyst --blueprint legible-default
```

With a specific inference profile:

```shell
legible agent create my-analyst --blueprint legible-default --profile anthropic
```

## Built-in Blueprints

Legible ships blueprints optimized for different data sources:

| Blueprint | Data Sources | Description |
| --- | --- | --- |
| `legible-default` | All connectors | General-purpose agent with broad connector support |
| `legible-postgres` | PostgreSQL | Optimized for PostgreSQL workloads |
| `legible-bigquery` | BigQuery | Optimized for BigQuery analytics |
| `legible-snowflake` | Snowflake | Optimized for Snowflake data warehouse |
| `legible-mysql` | MySQL | Optimized for MySQL databases |
| `legible-clickhouse` | ClickHouse | Optimized for ClickHouse analytics |
| `legible-duckdb` | DuckDB | Optimized for DuckDB local analytics |
| `legible-mssql` | SQL Server | Optimized for Microsoft SQL Server |
| `legible-oracle` | Oracle | Optimized for Oracle databases |
| `legible-trino` | Trino | Optimized for Trino distributed SQL |
| `legible-redshift` | Redshift | Optimized for Amazon Redshift |
| `legible-databricks` | Databricks | Optimized for Databricks lakehouse |
| `legible-athena` | Athena | Optimized for Amazon Athena |
| `legible-analyst` | All connectors | Analysis-focused with additional tools |

Blueprints are stored in `~/.legible/blueprints/` or bundled with the CLI binary.

## Blueprint Spec

A blueprint is a directory containing a `blueprint.yaml` file:

```
my-blueprint/
├── blueprint.yaml
├── Dockerfile              # Optional: custom sandbox build
└── policies/
    └── legible-sandbox.yaml
```

### Full Example

```yaml
version: "0.1.0"
min_openshell_version: "0.1.0"
description: |
  Default Legible agent blueprint. Provides a sandboxed AI coding agent
  with MCP access to your Legible project's semantic layer.

supported_connectors:
  - POSTGRES
  - BIG_QUERY
  - SNOWFLAKE
  - MYSQL
  - DUCKDB

components:
  sandbox:
    image: "legible-sandbox:latest"
    name: "legible-agent"
    forward_ports:
      - 9000
    resources:
      cpus: "4.0"
      memory: "16g"
      max_sandboxes: 20

  inference:
    profiles:
      nvidia:
        provider_type: "nvidia"
        provider_name: "nvidia-inference"
        endpoint: "https://integrate.api.nvidia.com/v1"
        model: "nvidia/nemotron-3-super-120b-a12b"
      openai:
        provider_type: "openai"
        provider_name: "openai-inference"
        endpoint: "https://api.openai.com/v1"
        model: "gpt-4o"
      anthropic:
        provider_type: "anthropic"
        provider_name: "anthropic-inference"
        endpoint: "https://api.anthropic.com/v1"
        model: "claude-sonnet-4-20250514"
      local:
        provider_type: "ollama"
        provider_name: "local-inference"
        endpoint: "http://host.docker.internal:11434/v1"
        model: "llama3.1:8b"

  mcp:
    servers:
      legible:
        transport: "streamable-http"
        url: "http://host.docker.internal:9000/mcp"

policies:
  network: "policies/legible-sandbox.yaml"
  filesystem:
    read_only:
      - /usr
      - /lib
      - /proc
      - /app
      - /etc
    read_write:
      - /home/sandbox
      - /workspace
      - /tmp
  process:
    deny_privilege_escalation: true
    run_as_user: sandbox
    run_as_group: sandbox

agent:
  type: "claude"
  allowed_types:
    - claude
    - codex
    - opencode
    - copilot
  entrypoint: "/usr/local/bin/sandbox-entrypoint.sh"
```

## Spec Reference

### Top-Level Fields

| Field | Required | Description |
| --- | --- | --- |
| `version` | Yes | Blueprint spec version (e.g., `"0.1.0"`) |
| `min_openshell_version` | No | Minimum required OpenShell CLI version |
| `description` | Yes | Human-readable description |
| `supported_connectors` | No | List of supported data source types. Empty = all connectors. |
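As a sketch of how these requirements compose, the following Python function checks a parsed `blueprint.yaml` against the required/optional rules in the table above. The function name and error messages are illustrative, not part of the Legible CLI:

```python
def validate_blueprint(spec: dict) -> list[str]:
    """Collect validation errors for a parsed blueprint spec (illustrative only)."""
    errors = []
    # `version` and `description` are the only required top-level fields.
    for field in ("version", "description"):
        if field not in spec:
            errors.append(f"missing required field: {field}")
    # `supported_connectors` is optional; an empty list means all connectors.
    connectors = spec.get("supported_connectors", [])
    if not isinstance(connectors, list):
        errors.append("supported_connectors must be a list")
    return errors
```

An empty return value means the top-level fields pass; `min_openshell_version` is omitted here since its check would depend on the running CLI version.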

### components.sandbox

| Field | Description |
| --- | --- |
| `image` | Container image name |
| `build.dockerfile` | Path to a Dockerfile for custom builds |
| `build.context` | Docker build context path |
| `name` | Default sandbox name |
| `forward_ports` | Ports to expose from the sandbox |
| `env` | Environment variables (key-value map) |
| `resources.cpus` | CPU allocation (e.g., `"4.0"`, `"8.0"`) |
| `resources.memory` | Memory allocation (e.g., `"16g"`, `"32g"`) |
| `resources.disk` | Disk allocation |
| `resources.max_sandboxes` | Maximum concurrent sandboxes on the gateway |

### components.inference

Defines inference routing profiles. Each profile configures a different LLM provider:

| Field | Description |
| --- | --- |
| `provider_type` | Provider type: `nvidia`, `openai`, `anthropic`, `ollama` |
| `provider_name` | Name for the OpenShell provider |
| `endpoint` | Inference API endpoint URL |
| `model` | Model identifier |

Select a profile at creation time:

```shell
legible agent create my-agent --blueprint legible-default --profile nvidia
```

If no profile is specified, Legible picks the first available profile in this priority order: `nvidia` → `anthropic` → `openai` → `local` → first defined.

### components.mcp

| Field | Description |
| --- | --- |
| `servers.<name>.transport` | MCP transport type (usually `streamable-http`) |
| `servers.<name>.url` | MCP server URL |

### components.tools

| Field | Description |
| --- | --- |
| `install` | List of packages to install in the sandbox |
| `scripts` | Named scripts available in the sandbox |
| `scripts[].name` | Script name |
| `scripts[].command` | Shell command to execute |
| `scripts[].description` | Human-readable description |
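The full example earlier omits a `tools` section, so here is a fragment showing the shape implied by the table above. The package names, script name, and command path are made up for illustration:

```yaml
components:
  tools:
    install:
      - jq
      - ripgrep
    scripts:
      - name: "refresh-schema"
        command: "/workspace/scripts/refresh.sh"   # hypothetical script path
        description: "Re-sync the project schema inside the sandbox"
```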

### policies

| Field | Description |
| --- | --- |
| `network` | Path to a network policy YAML file |
| `filesystem.read_only` | Paths mounted as read-only |
| `filesystem.read_write` | Paths mounted as read-write |
| `process.deny_privilege_escalation` | Block privilege escalation (`true`/`false`) |
| `process.run_as_user` | Container user |
| `process.run_as_group` | Container group |

### agent

| Field | Description |
| --- | --- |
| `type` | Default agent type (`claude`, `codex`, `opencode`, `copilot`) |
| `allowed_types` | Agent types this blueprint supports |
| `entrypoint` | Custom entrypoint command |

## Custom Blueprints

### Creating a Blueprint

1. Create a directory with a `blueprint.yaml`:

   ```shell
   mkdir my-blueprint
   cd my-blueprint
   ```

2. Write the spec:

   ```yaml
   version: "0.1.0"
   description: "My custom agent blueprint"

   components:
     sandbox:
       image: "my-sandbox:latest"
       resources:
         cpus: "8.0"
         memory: "32g"

     inference:
       profiles:
         openai:
           provider_type: "openai"
           provider_name: "openai-inference"
           endpoint: "https://api.openai.com/v1"
           model: "gpt-4o"

   policies:
     network: "policies/my-policy.yaml"

   agent:
     type: "claude"
   ```

3. Create the agent from your custom blueprint:

   ```shell
   legible agent create my-agent --blueprint ./my-blueprint
   ```

### Installing Blueprints

Place blueprint directories in `~/.legible/blueprints/` to make them available by name:

```shell
cp -r my-blueprint ~/.legible/blueprints/my-blueprint
legible agent create my-agent --blueprint my-blueprint
```

## Auto-Provisioning

When auto-provisioning is enabled, Legible automatically selects the best blueprint for your data source. The mapping is:

| Data Source | Blueprint |
| --- | --- |
| PostgreSQL | `legible-postgres` |
| BigQuery | `legible-bigquery` |
| Snowflake | `legible-snowflake` |
| MySQL | `legible-mysql` |
| ClickHouse | `legible-clickhouse` |
| DuckDB | `legible-duckdb` |
| Others | `legible-default` |

If a connector-specific blueprint isn't found, `legible-default` is used as a fallback.
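The mapping and fallback behavior above can be sketched in Python; the dictionary and function names are illustrative, and the connector keys follow the `supported_connectors` spelling from the full example:

```python
# Connector-to-blueprint mapping from the auto-provisioning table.
BLUEPRINT_FOR_CONNECTOR = {
    "POSTGRES": "legible-postgres",
    "BIG_QUERY": "legible-bigquery",
    "SNOWFLAKE": "legible-snowflake",
    "MYSQL": "legible-mysql",
    "CLICKHOUSE": "legible-clickhouse",
    "DUCKDB": "legible-duckdb",
}

def blueprint_for(connector: str, installed: set[str]) -> str:
    """Pick the connector-specific blueprint, else legible-default (illustrative only)."""
    candidate = BLUEPRINT_FOR_CONNECTOR.get(connector, "legible-default")
    # If the specific blueprint isn't installed, fall back to the default.
    return candidate if candidate in installed else "legible-default"
```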