Sandbox
Secure code execution in isolated environments
Documentation Status: This documentation describes the planned API design for the .do platform. Code examples represent the intended interface and may not reflect the current implementation state. See roadmap for implementation status.
The .do platform provides secure sandbox environments for executing arbitrary code safely, supporting both Edge and Node.js runtimes with industry-leading security through V8 isolates and Cloudflare's sandbox technology.
Overview
Our sandbox technology enables:
- Secure execution of untrusted code with multi-layered defense
- Edge runtime support with V8 isolates for maximum performance
- Node.js runtime compatibility for broader ecosystem support
- Resource limits including timeout, memory, and CPU constraints
- Network isolation with configurable access controls
- File system restrictions with read-only and temporary writable access
- Memory constraints with automatic cleanup after execution
Terminology
Understanding key terms used throughout this documentation:
- Isolate: A lightweight execution context within the V8 JavaScript engine that provides memory isolation. Isolates run within the same process but cannot access each other's memory. Think of them as independent JavaScript runtimes that share the same underlying engine.
- Sandbox: The complete security environment that includes the isolate plus additional security layers (filesystem restrictions, network controls, resource limits). While an isolate provides memory isolation, a sandbox provides comprehensive execution isolation.
- Worker: A Cloudflare Workers script that runs in an isolate. In the context of dynamic execution, this refers to code loaded and executed on-demand via Worker Loaders.
- Cordon (Trust Cordon): A group of workers with the same trust level that are physically isolated from other cordons. Workers are distributed among cordons based on their trust level (high, medium, low, untrusted) to provide defense in depth.
- Trust Level: A classification of code based on its source and verification status. Determines which cordon the code runs in and what resource limits apply:
  - high: Internal, fully trusted code
  - medium: Verified third-party code
  - low: User-submitted code with basic validation
  - untrusted: Unverified external code with maximum restrictions
- Worker Loader: A Cloudflare Workers feature that enables dynamic loading of worker code at runtime. Returns a WorkerStub that can be used to execute the loaded worker.
- WorkerStub: A synchronous reference to a dynamically loaded worker that can be used to invoke the worker via RPC or fetch requests.
- Bindings: Environment resources (KV namespaces, D1 databases, service bindings, etc.) made available to sandboxed code. Custom bindings use RPC patterns to provide safe, controlled API surfaces.
Prerequisites
Before using the sandbox system, ensure you have the following:
Required Configuration
- Cloudflare Account: A Cloudflare account with Workers enabled
- Worker Plan: Workers Paid plan (required for Worker Loaders)
- wrangler.toml Configuration: Add Worker Loader configuration:
name = "my-sandbox-worker"
main = "src/index.ts"
compatibility_date = "2024-01-15"
[[unsafe.bindings]]
name = "WORKER_LOADER"
type = "worker_loader"
- Environment Setup: Install required dependencies (the cloudflare:workers module is built into the Workers runtime and is not installed from npm):
npm install sdk.do hono
Optional Configuration
- D1 Database (for SafeDatabaseAPI examples):
[[d1_databases]]
binding = "DB"
database_name = "sandbox-db"
database_id = "your-database-id"
- KV Namespace (for storage examples):
[[kv_namespaces]]
binding = "CACHE"
id = "your-kv-id"
- Trust Cordons: Configure worker cordons in your Cloudflare dashboard (Enterprise feature)
Permissions
- Worker Loaders: Enable in Cloudflare dashboard under Workers & Pages > Settings
- Service Bindings: Configure RPC services for custom API bindings
- Network Access: Configure allowed/blocked domains in worker settings
Architecture Overview
V8 Isolates
The .do platform leverages V8 isolates as the foundation for secure code execution. V8 isolates are lightweight contexts that provide isolated execution environments within a single process.
Key Benefits:
- 10-100x faster cold starts compared to containers or VMs
- Lower memory footprint enabling thousands of isolates per machine
- Strong isolation preventing memory access outside the isolate
- Rapid switching between isolates with minimal overhead
- Multi-tenant by design, built into V8's architecture
Cloudflare Worker Loader
Dynamic Worker Loaders enable spawning isolates that run arbitrary code on-demand:
import { WorkerEntrypoint } from 'cloudflare:workers'
export default class DynamicExecutor extends WorkerEntrypoint {
async fetch(request: Request) {
// Derive the worker ID from the request (assumed passed as a query parameter)
const workerId = new URL(request.url).searchParams.get('worker')!
// get() returns a WorkerStub synchronously
const worker = this.env.WORKER_LOADER.get(workerId)
// WorkerStub can be used to invoke the Worker
return await worker.fetch(request)
}
}
Features:
- Dynamic loading of Workers by ID
- Isolate caching keeps warm isolates in memory
- Synchronous stubs returned immediately without awaiting
- Custom bindings via environment object serialization
- RPC support through WorkerEntrypoint classes
Multi-Layered Security
The platform implements defense in depth with multiple security layers:
- Trust-based separation: Workers distributed among cordons based on trust levels
- Process-level sandboxing: Linux namespaces and seccomp restrict filesystem/network access
- Memory protection keys: Each isolate has random keys protecting V8 heap data
- V8 sandbox: Additional sandboxing within V8 itself
- Custom bindings: Strict API surface control through RPC patterns
Security Model and Isolation
Isolate-Level Security
V8 isolates prevent code from accessing memory outside the isolate—even within the same process. Each isolate runs in its own secure context:
import { $ } from 'sdk.do'
const result = await $.sandbox.execute.code({
code: untrustedCode,
runtime: 'edge',
isolation: {
memory: true, // Memory isolation via V8 isolate
network: false, // Disable network access
filesystem: false, // Disable filesystem access
bindings: [] // No custom bindings
}
})
Trust Cordons
Workers are assigned trust levels and distributed among cordons:
const result = await $.sandbox.execute.code({
code: userCode,
trustLevel: 'low', // Runs in low-trust cordon
timeout: 5000, // Stricter limits for low trust
memory: 128, // Lower memory allocation
})
Custom Binding Controls
Define precise API surfaces using RPC patterns:
import { WorkerEntrypoint } from 'cloudflare:workers'
interface QueryResult {
rows: Array<Record<string, any>>
rowCount: number
}
// Define a safe API binding
class SafeDatabaseAPI extends WorkerEntrypoint {
private db: D1Database
constructor(db: D1Database) {
super()
this.db = db
}
async query(sql: string, params?: Array<string | number>): Promise<QueryResult> {
// Validate query - check for common SQL injection patterns
if (!this.isQuerySafe(sql)) {
throw new Error('Unsafe query blocked')
}
// Use parameterized queries to prevent SQL injection
const statement = this.db.prepare(sql)
const result = params ? await statement.bind(...params).all() : await statement.all()
return {
rows: result.results,
rowCount: result.results.length,
}
}
private isQuerySafe(sql: string): boolean {
// Prevent multiple statements
if (
sql.includes(';') &&
sql
.trim()
.split(';')
.filter((s) => s.trim()).length > 1
) {
return false
}
// Block dangerous SQL keywords
const dangerousKeywords = /\b(DROP|DELETE|TRUNCATE|ALTER|EXEC|EXECUTE)\b/i
if (dangerousKeywords.test(sql)) {
return false
}
// Ensure no string concatenation patterns that bypass parameterization
if (sql.includes('||') || sql.includes('CONCAT')) {
return false
}
return true
}
}
// Provide to dynamic Worker
const worker = env.WORKER_LOADER.get(workerId, {
bindings: {
DB: new SafeDatabaseAPI(env.DB),
},
})
Edge Runtime Support
Execute code in V8 isolates for maximum performance with near-instant cold starts.
Basic Execution
import { $ } from 'sdk.do'
const result = await $.sandbox.execute.code({
runtime: 'edge',
code: `
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url)
return new Response(\`Hello \${url.pathname}\`)
}
}
`,
timeout: 10000,
memory: 128
})
With Custom Environment
const result = await $.sandbox.execute.code({
runtime: 'edge',
code: edgeWorkerCode,
env: {
// Custom bindings serialized to the isolate
API_KEY: 'safe-api-key',
DATABASE: databaseBinding,
QUEUE: queueBinding,
},
})
RPC API Pattern
import { WorkerEntrypoint } from 'cloudflare:workers'
// Define RPC API for sandboxed code
class SandboxAPI extends WorkerEntrypoint {
private kv: KVNamespace
constructor(kv: KVNamespace) {
super()
this.kv = kv
}
async storeData(key: string, value: any): Promise<void> {
// Validate key format
if (!/^[a-zA-Z0-9_-]+$/.test(key)) {
throw new Error('Invalid key format')
}
// Implement safe storage with validation
await this.kv.put(key, JSON.stringify(value))
}
async fetchData(key: string): Promise<any | null> {
// Validate key format
if (!/^[a-zA-Z0-9_-]+$/.test(key)) {
throw new Error('Invalid key format')
}
const data = await this.kv.get(key)
return data ? JSON.parse(data) : null
}
}
// Provide to sandbox
const result = await $.sandbox.execute.code({
runtime: 'edge',
code: userCode,
env: {
api: new SandboxAPI(env.KV),
},
})
Edge Runtime Benefits:
- Instant cold starts (<1ms typically)
- Minimal overhead compared to containers
- Global distribution via Cloudflare network
- Cost-effective due to efficient resource usage
- Web API compatibility with standard fetch, streams, etc.
Node.js Runtime Support
Full Node.js compatibility when broader ecosystem support is needed.
Basic Execution
const result = await $.sandbox.execute.code({
runtime: 'node',
code: `
const fs = require('fs')
const path = require('path')
module.exports = async function(context) {
// Access to Node.js APIs within sandbox constraints
const files = fs.readdirSync('/tmp')
return { files, cwd: process.cwd() }
}
`,
timeout: 30000,
memory: 512,
})
With npm Packages
const result = await $.sandbox.execute.code({
runtime: 'node',
code: `
const lodash = require('lodash')
const moment = require('moment')
module.exports = async function(data) {
const processed = lodash.map(data, item => ({
...item,
timestamp: moment().toISOString()
}))
return processed
}
`,
packages: ['lodash', 'moment'],
timeout: 30000,
})
Node.js Runtime Benefits:
- Full Node.js APIs including fs, path, crypto, etc.
- npm ecosystem access to millions of packages
- Familiar environment for Node.js developers
- Broader compatibility for existing codebases
- Advanced debugging via Chrome DevTools protocol
Resource Limits and Quotas
Comprehensive Limits
Configure precise resource constraints for safe execution:
await $.sandbox.execute.code({
code: userCode,
runtime: 'edge',
limits: {
// Time constraints
timeout: 10000, // 10 seconds max wall time
cpuTime: 5000, // 5 seconds max CPU time
// Memory constraints
memory: 128, // 128MB max memory
memoryThreshold: 0.8, // Warn at 80% usage
// Network constraints
network: {
enabled: true,
maxRequests: 10, // Max 10 outbound requests
maxBytes: 1024000, // Max 1MB transferred
allowedDomains: ['api.example.com', '*.safe-domain.com'],
blockedDomains: ['internal.company.com'],
},
// Filesystem constraints
filesystem: {
readOnly: ['/data'], // Read-only paths
writable: ['/tmp'], // Writable paths
maxFileSize: 10240, // 10KB max file size
maxFiles: 100, // Max 100 files
},
// Execution constraints
maxSubprocesses: 0, // No subprocesses
maxFileDescriptors: 10, // Max 10 open files
},
})
Default Quotas by Runtime
Edge Runtime Defaults:
- Timeout: 10 seconds
- Memory: 128MB
- CPU time: 50ms
- Network: disabled
- Filesystem: disabled
Node.js Runtime Defaults:
- Timeout: 30 seconds
- Memory: 512MB
- CPU time: 5 seconds
- Network: disabled
- Filesystem: read-only /tmp
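These defaults combine with per-call options: anything you pass explicitly wins, and the rest falls back to the runtime default. A minimal sketch of that merge (the `resolveLimits` helper and its field names are illustrative, not part of the SDK):

```typescript
// Hypothetical helper showing how per-call options override runtime defaults
type Runtime = 'edge' | 'node'

interface Limits {
  timeout: number // ms wall time
  memory: number // MB
  cpuTime: number // ms
  network: boolean
}

const DEFAULTS: Record<Runtime, Limits> = {
  edge: { timeout: 10_000, memory: 128, cpuTime: 50, network: false },
  node: { timeout: 30_000, memory: 512, cpuTime: 5_000, network: false },
}

function resolveLimits(runtime: Runtime, overrides: Partial<Limits> = {}): Limits {
  // Spreading overrides last gives explicit per-call values precedence
  return { ...DEFAULTS[runtime], ...overrides }
}
```

Passing a partial override such as `{ memory: 1024 }` raises only the memory ceiling; the other defaults for that runtime stay in effect.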
Quota Monitoring
const result = await $.sandbox.execute.code({
code: userCode,
monitoring: true,
})
console.log(result.metrics)
// {
// cpuTime: 234, // ms
// memory: 45, // MB peak
// networkRequests: 3,
// networkBytes: 15423,
// filesRead: 5,
// filesWritten: 2,
// duration: 1250 // ms total
// }
Code Execution Flow
Execution Lifecycle
- Request received - Code and configuration submitted
- Trust evaluation - Assign trust level and cordon
- Isolate allocation - Get cached isolate or create new one
- Environment setup - Serialize and inject bindings
- Code loading - Parse and validate code
- Execution - Run code with resource monitoring
- Result capture - Collect return value and metrics
- Cleanup - Release resources, cache isolate if warm
- Response - Return result and execution metadata
Detailed Flow Diagram
// 1. Submit execution request
const executionId = await $.sandbox.execution.submit({
code: userCode,
runtime: 'edge',
async: true, // Returns immediately with execution ID
})
// 2. Check execution status
const status = await $.sandbox.executionStatus.get({ executionId })
// { status: 'running', progress: 0.45, metrics: {...} }
// 3. Wait for completion
const result = await $.sandbox.completion.wait({
executionId,
timeout: 30000,
})
// 4. Get execution details
const details = await $.sandbox.executionDetails.get({ executionId })
console.log(details)
// {
// id: 'exec_abc123',
// status: 'completed',
// startTime: '2025-01-15T10:30:00Z',
// endTime: '2025-01-15T10:30:02Z',
// duration: 2150,
// trustLevel: 'medium',
// cordon: 'worker-pool-2',
// isolateId: 'iso_xyz789',
// isolateCached: true,
// result: { data: [...] },
// metrics: {
// cpuTime: 1840,
// memory: 67,
// networkRequests: 4,
// filesAccessed: 8
// },
// logs: [...],
// errors: []
// }
Error Handling
try {
const result = await $.sandbox.execute.code({
code: potentiallyFailingCode,
runtime: 'edge',
})
} catch (error) {
if (error.code === 'SANDBOX_TIMEOUT') {
console.error('Execution exceeded timeout')
} else if (error.code === 'SANDBOX_MEMORY_LIMIT') {
console.error('Memory limit exceeded')
} else if (error.code === 'SANDBOX_NETWORK_BLOCKED') {
console.error('Attempted blocked network access')
} else if (error.code === 'SANDBOX_SECURITY_VIOLATION') {
console.error('Security violation detected')
}
// Access execution metadata even on failure
console.log(error.executionId)
console.log(error.metrics)
console.log(error.logs)
}
Debugging Capabilities
Console Logging
Capture console output from sandboxed code:
const result = await $.sandbox.execute.code({
code: `
console.log('Starting execution')
console.warn('This is a warning')
console.error('This is an error')
export default {
async fetch() {
console.debug('Request received')
return new Response('OK')
}
}
`,
captureConsole: true,
})
console.log(result.logs)
// [
// { level: 'log', message: 'Starting execution', timestamp: '...' },
// { level: 'warn', message: 'This is a warning', timestamp: '...' },
// { level: 'error', message: 'This is an error', timestamp: '...' },
// { level: 'debug', message: 'Request received', timestamp: '...' }
// ]
Performance Profiling
const result = await $.sandbox.execute.code({
code: userCode,
profiling: {
enabled: true,
cpuProfile: true,
heapSnapshot: true,
},
})
// Access profiling data
console.log(result.profile.cpuProfile)
console.log(result.profile.heapSnapshot)
console.log(result.profile.timeline)
Breakpoint Debugging
// Enable remote debugging
const session = await $.sandbox.debug.startSession({
code: userCode,
runtime: 'node',
debugPort: 9229,
})
console.log(`Chrome DevTools URL: ${session.devtoolsUrl}`)
// Connect Chrome DevTools for live debugging
// Execute with breakpoints
const result = await session.execute({
breakOnStart: true,
})
// Cleanup
await session.close()
Error Stack Traces
const result = await $.sandbox.execute.code({
code: buggyCode,
stackTraces: true,
sourceMap: true, // Enable source map support
})
if (result.error) {
console.log(result.error.stack)
// Full stack trace with source positions
console.log(result.error.sourceLocation)
// { file: 'user-code.js', line: 42, column: 15 }
}
Worker Loader Examples
Dynamic Worker Loading
import { WorkerEntrypoint } from 'cloudflare:workers'
interface Env {
WORKER_LOADER: WorkerLoader
}
interface WorkerLoader {
get(workerId: string, options?: { bindings?: Record<string, any> }): WorkerStub
}
interface WorkerStub {
fetch(request: Request): Promise<Response>
}
export default class DynamicLoader extends WorkerEntrypoint<Env> {
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url)
const workerId = url.searchParams.get('worker')
if (!workerId) {
return new Response('Missing worker ID', { status: 400 })
}
// Load worker dynamically - returns synchronously
const worker = this.env.WORKER_LOADER.get(workerId)
// Forward request to loaded worker
return await worker.fetch(request)
}
}
With Custom Bindings
import { WorkerEntrypoint } from 'cloudflare:workers'
interface Env {
WORKER_LOADER: WorkerLoader
DB: D1Database
CACHE: KVNamespace
}
export default class extends WorkerEntrypoint<Env> {
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url)
const workerId = url.searchParams.get('worker')
if (!workerId) {
return new Response('Missing worker ID', { status: 400 })
}
// Define custom bindings for the dynamic worker
const customEnv = {
// Service binding with RPC API
DATABASE: new SafeDatabaseAPI(this.env.DB),
// KV namespace
CACHE: this.env.CACHE,
// Configuration
CONFIG: {
apiUrl: 'https://api.example.com',
maxRetries: 3,
},
}
// Load worker with custom environment
const worker = this.env.WORKER_LOADER.get(workerId, {
bindings: customEnv,
})
return await worker.fetch(request)
}
}
Caching Strategy
import { WorkerEntrypoint } from 'cloudflare:workers'
interface Env {
WORKER_LOADER: WorkerLoader
}
export default class extends WorkerEntrypoint<Env> {
private workerCache = new Map<string, WorkerStub>()
async fetch(request: Request): Promise<Response> {
const workerId = this.getWorkerId(request)
// Check manual cache first
// Note: Worker Loader also caches internally, so this provides an additional
// application-level cache. Consider cache invalidation strategy based on your needs.
if (!this.workerCache.has(workerId)) {
// Loader also caches internally
const worker = this.env.WORKER_LOADER.get(workerId)
this.workerCache.set(workerId, worker)
}
const worker = this.workerCache.get(workerId)!
return await worker.fetch(request)
}
private getWorkerId(request: Request): string {
const url = new URL(request.url)
const workerId = url.searchParams.get('worker')
if (!workerId) {
throw new Error('Missing worker ID')
}
return workerId
}
}
Error Handling Best Practices
Worker Loaders can fail for various reasons. Implement comprehensive error handling:
import { WorkerEntrypoint } from 'cloudflare:workers'
interface Env {
WORKER_LOADER: WorkerLoader
}
export default class extends WorkerEntrypoint<Env> {
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url)
const workerId = url.searchParams.get('worker')
if (!workerId) {
return new Response('Missing worker ID', { status: 400 })
}
try {
// Get worker stub - this is synchronous but the worker may not exist
const worker = this.env.WORKER_LOADER.get(workerId)
// The fetch call is where errors typically occur
const response = await worker.fetch(request)
return response
} catch (error) {
// Handle different types of errors
if (error instanceof Error) {
// Worker not found
if (error.message.includes('not found')) {
return new Response('Worker not found', { status: 404 })
}
// Worker execution error
if (error.message.includes('execution')) {
return new Response('Worker execution failed', { status: 500 })
}
// Timeout error
if (error.message.includes('timeout')) {
return new Response('Worker execution timed out', { status: 504 })
}
// Memory limit exceeded
if (error.message.includes('memory')) {
return new Response('Worker memory limit exceeded', { status: 507 })
}
}
// Generic error handling
console.error('Worker execution error:', error)
return new Response('Internal server error', { status: 500 })
}
}
}
Common Worker Loader Errors:
- Worker Not Found: The worker ID doesn't exist or was deleted
  - Solution: Validate worker IDs before calling get()
  - Implement fallback logic or a default worker
- Isolate Creation Failed: Unable to spawn a new isolate
  - Solution: Implement retry logic with exponential backoff
  - Monitor isolate creation metrics
- Binding Serialization Failed: Custom bindings can't be serialized
  - Solution: Ensure bindings are serializable (RPC-compatible)
  - Use WorkerEntrypoint classes for complex bindings
- Execution Timeout: Worker execution exceeded the timeout
  - Solution: Set appropriate timeout limits
  - Implement async execution for long operations
- Memory Exceeded: Worker used more memory than allocated
  - Solution: Increase memory limits or optimize worker code
  - Monitor memory usage metrics
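Several of the solutions above amount to retrying transient failures with exponential backoff. A generic sketch (the `withRetry` helper is illustrative; tune the attempt count and delays to your workload):

```typescript
// Illustrative retry helper for transient failures such as isolate creation.
// The delay doubles each attempt: baseDelayMs, 2x, 4x, ...
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
      // Back off before the next attempt (skip the wait after the final failure)
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt))
      }
    }
  }
  throw lastError
}
```

For example, `await withRetry(() => worker.fetch(request))` retries a failing fetch a few times before surfacing the error to the caller.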
Cloudflare Sandbox Examples
AI Code Execution
Execute AI-generated code safely:
import { $ } from 'sdk.do'
// Generate code with AI
const generatedCode = await $.ai.generate.code({
prompt: 'Write a function to calculate the Fibonacci sequence',
language: 'javascript',
model: 'claude-sonnet-4.5'
})
// Execute in sandbox
const result = await $.sandbox.execute.code({
code: generatedCode.code,
runtime: 'edge',
test: {
cases: [
{ input: [10], expected: 55 },
{ input: [20], expected: 6765 }
]
}
})
if (result.allTestsPassed) {
console.log('AI-generated code is correct!')
await $.code.deploy.function({
name: 'fibonacci',
code: generatedCode.code
})
}
User-Provided Scripts
Run user scripts with strict security:
import { Hono } from 'hono'
import { $ } from 'sdk.do'
const app = new Hono()
app.post('/execute', async (c) => {
const { code, input } = await c.req.json()
try {
const result = await $.sandbox.execute.code({
code,
runtime: 'node',
timeout: 5000,
memory: 256,
limits: {
network: false,
filesystem: { writable: ['/tmp'] },
maxSubprocesses: 0,
},
context: { input },
})
return c.json({
success: true,
output: result.output,
metrics: result.metrics,
})
} catch (error) {
return c.json(
{
success: false,
error: error.message,
code: error.code,
},
400
)
}
})
Plugin System
Enable safe third-party plugin execution:
import { $ } from 'sdk.do'
interface Plugin {
id: string
code: string
verified: boolean
name: string
version: string
}
interface PluginExecutionResult {
success: boolean
output?: any
error?: Error
metrics?: {
cpuTime: number
memory: number
duration: number
}
}
class PluginManager {
async executePlugin(pluginId: string, context: any): Promise<PluginExecutionResult> {
// Load plugin metadata
const plugin = await $.database.plugin.get({ id: pluginId }) as Plugin
if (!plugin.verified) {
// Run unverified plugins with stricter limits
return await $.sandbox.execute.code({
code: plugin.code,
runtime: 'edge',
trustLevel: 'low',
timeout: 3000,
memory: 64,
limits: {
network: false,
maxFileDescriptors: 5
},
context
})
} else {
// Verified plugins get more resources
return await $.sandbox.execute.code({
code: plugin.code,
runtime: 'edge',
trustLevel: 'medium',
timeout: 10000,
memory: 128,
context
})
}
}
}
Workflow Execution
Execute workflow steps in isolated environments:
import { $ } from 'sdk.do'
$.workflow.define.process({
name: 'data-pipeline',
steps: [
{
name: 'extract',
executor: 'sandbox',
code: `
export default async function extract(context) {
const response = await fetch(context.sourceUrl)
return await response.json()
}
`
},
{
name: 'transform',
executor: 'sandbox',
code: `
export default async function transform(data) {
return data.map(item => ({
id: item.id,
value: item.value * 2,
processed: new Date().toISOString()
}))
}
`
},
{
name: 'load',
executor: 'sandbox',
code: `
export default async function load(data, context) {
await context.database.insert('results', data)
return { count: data.length }
}
`
}
]
})
// Execute workflow
const result = await $.workflow.execute.process({
name: 'data-pipeline',
input: {
sourceUrl: 'https://api.example.com/data'
}
})
Performance Considerations
Edge vs Node.js Runtime Selection
Choose the appropriate runtime based on your execution requirements:
Use Edge Runtime When:
- Cold start time is critical (<1ms required)
- Code is compatible with Web APIs (fetch, streams, crypto)
- Memory footprint is small (<128MB)
- Execution time is short (<50ms CPU time)
- Global distribution is important
- Cost optimization is a priority
Use Node.js Runtime When:
- You need full Node.js APIs (fs, child_process, etc.)
- npm packages are required
- Memory requirements are higher (>128MB)
- Longer execution times are needed (>50ms CPU)
- Advanced debugging with Chrome DevTools is required
- Broader ecosystem compatibility is essential
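This decision can be encoded as a small helper; the field names and thresholds below mirror the guidance above but are illustrative, not SDK API:

```typescript
// Illustrative runtime chooser based on the selection guidance above.
interface ExecutionNeeds {
  needsNodeApis: boolean // fs, child_process, native modules, etc.
  needsNpmPackages: boolean
  memoryMb: number // expected peak memory
  cpuTimeMs: number // expected CPU time per execution
}

function selectRuntime(needs: ExecutionNeeds): 'edge' | 'node' {
  // Anything requiring the Node.js ecosystem rules out the edge runtime
  if (needs.needsNodeApis || needs.needsNpmPackages) return 'node'
  // Edge isolates suit small, short-lived executions within default quotas
  if (needs.memoryMb > 128 || needs.cpuTimeMs > 50) return 'node'
  return 'edge'
}
```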
Cold Start Optimization
// ✅ Good: Pre-warm isolates for frequently used code
const workerStub = env.WORKER_LOADER.get(workerId)
// Keep stub reference to maintain warm isolate
globalThis.warmWorkers = globalThis.warmWorkers || new Map()
globalThis.warmWorkers.set(workerId, workerStub)
// ❌ Bad: Creating new isolates on every request
// This discards the stub, causing cold starts
const result = await env.WORKER_LOADER.get(workerId).fetch(request)
Resource Limit Impact
Memory Limits:
- Lower limits (64-128MB): Faster allocation, more restrictive, better for simple code
- Higher limits (256-512MB): Slower allocation, allows data processing, better for complex operations
- Impact: +1-5ms allocation time per 128MB
Timeout Settings:
- Edge runtime: 10ms-10s (recommend: 5s for user code)
- Node.js runtime: 100ms-30s (recommend: 10s for user code)
- Impact: No performance cost, only determines max execution time
Network Access:
- Disabled: Fastest, zero overhead
- Enabled with allowlist: +10-50ms per request (DNS + connection)
- Enabled without restrictions: +10-100ms per request + security risk
Execution Patterns
Synchronous Pattern (Faster):
// Best for: Quick operations, simple transformations
const result = await $.sandbox.execute.code({
code: simpleFunction,
runtime: 'edge',
timeout: 1000,
})
Async Pattern (Better for Long Operations):
// Best for: Long-running tasks, batch processing
const executionId = await $.sandbox.execution.submit({
code: longRunningCode,
async: true,
})
// Check status periodically
const result = await $.sandbox.completion.wait({ executionId })
Bundle Size Optimization
// ✅ Good: Minimal imports, tree-shaking friendly
import { execute } from 'sdk.do/sandbox'
// ❌ Bad: Full SDK import
import * as sdk from 'sdk.do'
Monitoring Performance
const result = await $.sandbox.execute.code({
code: userCode,
monitoring: true,
profiling: { enabled: true },
})
// Analyze performance bottlenecks
console.log('Cold start:', result.metrics.coldStart)
console.log('Execution time:', result.metrics.duration)
console.log('CPU time:', result.metrics.cpuTime)
console.log('Memory peak:', result.metrics.memory)
// Alert on performance degradation
if (result.metrics.duration > 5000) {
await alertTeam('Sandbox execution exceeded 5s', result.metrics)
}
Best Practices for Sandbox Usage
1. Always Set Timeouts (Priority: P0 - Critical)
Prevent runaway code with appropriate timeouts:
// ✅ Good: Reasonable timeout
await $.sandbox.execute.code({
code: userCode,
timeout: 10000, // 10 seconds
})
// ❌ Bad: No timeout or excessive timeout
await $.sandbox.execute.code({
code: userCode,
// No timeout = potential infinite loop
})
2. Configure Memory Limits (Priority: P0 - Critical)
Set appropriate memory limits based on expected workload:
// ✅ Good: Right-sized memory
await $.sandbox.execute.code({
code: dataProcessingCode,
runtime: 'node',
memory: 512, // 512MB for data processing
})
// ❌ Bad: Excessive memory for simple task
await $.sandbox.execute.code({
code: simpleCalculation,
memory: 2048, // Too much
})
3. Implement Proper Error Handling (Priority: P0 - Critical)
Always catch and handle sandbox errors gracefully:
// ✅ Good: Comprehensive error handling
try {
const result = await $.sandbox.execute.code({
code: userCode,
runtime: 'edge',
})
if (result.success) {
await processResult(result.output)
} else {
await logError(result.error)
}
} catch (error) {
// Handle different error types
switch (error.code) {
case 'SANDBOX_TIMEOUT':
await notifyUser('Execution took too long')
break
case 'SANDBOX_MEMORY_LIMIT':
await notifyUser('Code used too much memory')
break
case 'SANDBOX_SECURITY_VIOLATION':
await alertSecurityTeam(error)
break
default:
await logError(error)
}
}
4. Enable Logging and Monitoring (Priority: P1 - Important)
Always enable logging for debugging and auditing:
// ✅ Good: Logging enabled
const result = await $.sandbox.execute.code({
code: userCode,
captureConsole: true,
monitoring: true,
metadata: {
userId: user.id,
requestId: request.id,
source: 'api',
},
})
// Store execution logs
await $.database.insert.sandboxLog({
executionId: result.id,
userId: user.id,
logs: result.logs,
metrics: result.metrics,
timestamp: new Date(),
})
5. Use Trust Levels Appropriately (Priority: P0 - Critical)
Assign appropriate trust levels based on code source:
// ✅ Good: Trust levels based on source
const getTrustLevel = (codeSource) => {
switch (codeSource) {
case 'internal':
return 'high'
case 'verified-partner':
return 'medium'
case 'user-submitted':
return 'low'
default:
return 'untrusted'
}
}
const trustLevel = getTrustLevel(codeSource)
await $.sandbox.execute.code({
code: userCode,
trustLevel,
limits: getLimitsForTrustLevel(trustLevel),
})
6. Validate Code Before Execution (Priority: P1 - Important)
Perform static analysis before running code:
// ✅ Good: Validation before execution
import { validateCode } from '@dotdo/code-validator'
const validation = await validateCode(userCode)
if (!validation.safe) {
throw new Error(`Unsafe code detected: ${validation.issues.join(', ')}`)
}
const result = await $.sandbox.execute.code({
code: userCode,
runtime: 'edge',
})
7. Implement Rate Limiting (Priority: P0 - Critical)
Prevent abuse with rate limiting:
// ✅ Good: Rate limiting per user
import { RateLimiter } from '@dotdo/rate-limiter'
const limiter = new RateLimiter({
max: 100, // 100 executions
window: 3600000, // per hour
})
app.post('/execute', async (c) => {
const userId = c.get('userId')
if (!(await limiter.check(userId))) {
return c.json({ error: 'Rate limit exceeded' }, 429)
}
const result = await $.sandbox.execute.code({
code: userCode,
runtime: 'edge',
metadata: { userId },
})
return c.json(result)
})
8. Use Network Allowlists (Priority: P0 - Critical)
When network access is needed, use strict allowlists:
// ✅ Good: Strict network allowlist
await $.sandbox.execute.code({
code: apiIntegrationCode,
limits: {
network: {
enabled: true,
allowedDomains: ['api.trusted-service.com', 'webhooks.partner.com'],
maxRequests: 10,
},
},
})
// ❌ Bad: Open network access
await $.sandbox.execute.code({
code: apiIntegrationCode,
limits: {
network: { enabled: true },
// No restrictions!
},
})
9. Clean Up Resources (Priority: P1 - Important)
Ensure proper cleanup of long-running executions:
// ✅ Good: Cleanup with timeout
const executionId = await $.sandbox.execution.submit({
code: longRunningCode,
async: true,
})
try {
const result = await $.sandbox.completion.wait({
executionId,
timeout: 30000,
})
await processResult(result)
} catch (error) {
if (error.code === 'TIMEOUT') {
// Cancel execution if timeout
await $.sandbox.execution.cancel({ executionId })
}
} finally {
// Always cleanup
await $.sandbox.execution.cleanup({ executionId })
}
10. Test Sandbox Security (Priority: P1 - Important)
Regularly test security boundaries:
// ✅ Good: Security testing
describe('Sandbox Security', () => {
it('should prevent filesystem access', async () => {
const result = await $.sandbox.execute.code({
code: `
export default async function() {
const fs = require('fs')
return fs.readdirSync('/')
}
`,
runtime: 'node',
})
expect(result.error.code).toBe('SANDBOX_SECURITY_VIOLATION')
})
it('should enforce network restrictions', async () => {
const result = await $.sandbox.execute.code({
code: `
export default async function() {
return await fetch('https://blocked.com')
}
`,
runtime: 'edge',
limits: { network: false },
})
expect(result.error.code).toBe('SANDBOX_NETWORK_BLOCKED')
})
})
Related Documentation
- Code - Deploy code to sandbox environments
- Functions - Execute functions in sandbox
- MCP - MCP tool execution in sandbox
- Workflows - Sandbox execution in workflows