Infrastructure
Understand the deployment infrastructure and edge computing architecture
The .do platform leverages Cloudflare's global edge network to deploy your Business-as-Code and Services-as-Software with maximum performance and reliability.
Edge Computing Architecture
Global Network
Deploy to 300+ cities worldwide:
- Sub-50ms latency: Users connect to the nearest data center
- Automatic routing: Traffic routes to the optimal location
- DDoS protection: Built-in security at the edge
- 99.99% uptime: Distributed resilience
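The nearest-data-center behavior can be observed from inside a Worker via the Cloudflare-populated `request.cf` object. A minimal sketch — `colo`, `country`, and `city` are real `request.cf` fields, but the simplified `EdgeInfo` type and the debug handler itself are illustrative:

```typescript
// Report which edge location served the request, using request.cf.
// EdgeInfo is a simplified, assumed shape of the Cloudflare-specific metadata.
interface EdgeInfo {
  colo?: string    // IATA-style code of the serving data center, e.g. "SJC"
  country?: string // visitor country code, e.g. "US"
  city?: string
}

export function describeEdge(cf: EdgeInfo | undefined): string {
  if (!cf) return 'edge metadata unavailable (not running on Cloudflare)'
  return `served from ${cf.colo ?? 'unknown colo'} for a visitor in ${cf.city ?? cf.country ?? 'unknown location'}`
}

export default {
  async fetch(request: Request): Promise<Response> {
    // request.cf is Cloudflare-specific and absent from standard Request types
    const cf = (request as { cf?: EdgeInfo }).cf
    return new Response(describeEdge(cf))
  },
}
```

Deploy this and hit it from two regions: the reported colo changes, which is the routing behavior the bullets above describe.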
Deployment Layers
┌─────────────────────────────────────┐
│ Edge Layer (Global) │
│ - API Endpoints │
│ - Static Assets │
│ - Rate Limiting │
└─────────────────────────────────────┘
↓
┌─────────────────────────────────────┐
│ Compute Layer (Regional) │
│ - Durable Objects │
│ - Workflow Engine │
│ - Background Jobs │
└─────────────────────────────────────┘
↓
┌─────────────────────────────────────┐
│ Data Layer (Regional) │
│ - Database (D1) │
│ - Key-Value (KV) │
│ - R2 Storage │
└─────────────────────────────────────┘
Cloudflare Workers
Serverless functions running at the edge.
Features
- Near-zero cold starts: Isolates spin up in milliseconds
- V8 isolates: Lighter-weight and faster to start than containers
- Global deployment: Deploy once, run everywhere
- Auto-scaling: Handle any load
Configuration
# wrangler.toml
name = "my-api"
main = "src/index.ts"
compatibility_date = "2024-01-01"
[env.production]
workers_dev = false
routes = [
{ pattern = "api.example.com/*", zone_name = "example.com" }
]
Durable Objects
Stateful, strongly-consistent compute units.
Use Cases
- Real-time collaboration
- Chat and messaging
- Game servers
- WebSocket connections
- State machines
Example
export class Agent {
state: DurableObjectState
constructor(state: DurableObjectState) {
this.state = state
}
async fetch(request: Request) {
// Handle requests with persistent state
const count = (await this.state.storage.get<number>('count')) ?? 0
await this.state.storage.put('count', count + 1)
return new Response(`Count: ${count + 1}`)
}
}
Data Storage
D1 (SQL Database)
SQLite at the edge:
// Query database
const users = await env.DB.prepare('SELECT * FROM users WHERE active = ?').bind(1).all() // SQLite has no boolean type; bind 1/0
KV (Key-Value)
Low-latency key-value storage:
// Store and retrieve
await env.KV.put('key', 'value')
const value = await env.KV.get('key')
R2 (Object Storage)
S3-compatible object storage:
// Upload file
await env.R2.put('file.jpg', fileData)
// Download file
const file = await env.R2.get('file.jpg')
Network Architecture
Traffic Routing
User Request
↓
[Cloudflare Edge]
↓
[Worker / API]
↓
[Durable Object] (if needed)
↓
[Database / Storage]
↓
Response to User
Caching Strategy
- Edge cache: Static assets cached at edge
- Worker cache: Dynamic content cached in Workers
- Database cache: Query results cached
- Application cache: Custom caching logic
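The "Worker cache" layer above can be sketched with the Workers Cache API (`caches.default` is a real Workers API; the TTL policy, the `cacheTtlFor` helper, and the `handleRequest` origin stub are assumptions for illustration):

```typescript
// Choose a TTL per path; static assets are cached far longer than API data.
// These values are illustrative policy, not platform defaults.
export function cacheTtlFor(pathname: string): number {
  if (/\.(js|css|png|jpg|svg|woff2)$/.test(pathname)) return 86400 // static assets: 1 day
  if (pathname.startsWith('/api/')) return 30                      // dynamic API data: 30 seconds
  return 300                                                       // everything else: 5 minutes
}

// Assumed origin handler; replace with real application logic.
async function handleRequest(request: Request): Promise<Response> {
  return new Response('hello from origin')
}

export default {
  async fetch(request: Request, env: unknown, ctx: { waitUntil(p: Promise<unknown>): void }): Promise<Response> {
    const cache = (globalThis as any).caches.default // Workers-specific default cache
    const cached = await cache.match(request)
    if (cached) return cached

    const response = await handleRequest(request)
    const ttl = cacheTtlFor(new URL(request.url).pathname)
    const toCache = new Response(response.body, response)
    toCache.headers.set('Cache-Control', `public, max-age=${ttl}`)
    ctx.waitUntil(cache.put(request, toCache.clone())) // write-behind, off the response path
    return toCache
  },
}
```

Writing to the cache inside `ctx.waitUntil` keeps the cache fill off the critical path of the response.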
Scaling
Automatic Scaling
- Request-based: Scale based on requests per second
- Geographic: Scale regionally based on traffic
- Resource-based: Scale based on CPU/memory usage
Performance
- Sub-50ms P50 latency globally
- 100K+ requests per second per endpoint
- 10GB+ data transfer per day
- Unlimited concurrent connections
Security
Built-in Protection
- DDoS mitigation
- Rate limiting
- Bot detection
- WAF rules
- SSL/TLS encryption
Configuration
// Rate limiting
export default {
async fetch(request, env) {
const ip = request.headers.get('CF-Connecting-IP')
const limit = await env.RATE_LIMITER.limit({ key: ip })
if (!limit.success) {
return new Response('Rate limit exceeded', { status: 429 })
}
return handleRequest(request)
},
}
Monitoring
Built-in Metrics
- Request rate
- Error rate
- Latency (P50, P95, P99)
- CPU usage
- Memory usage
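The percentile view above (P50, P95, P99) can be recomputed from raw latency samples when validating dashboards; a nearest-rank sketch:

```typescript
// Nearest-rank percentile over collected latency samples.
// A local analysis helper, not the platform's own aggregation.
export function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples')
  const sorted = [...samples].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length) // nearest-rank method
  return sorted[Math.max(0, rank - 1)]
}
```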
Custom Metrics
// Track custom metrics
await env.ANALYTICS.writeDataPoint({
blobs: ['api-call'],
doubles: [latency],
indexes: [endpoint],
})
Cost Optimization
Pricing Model
- Pay per request
- Free tier: 100K requests/day
- Paid plan: flat base fee plus usage-based pricing beyond included quotas
- No servers to provision or maintain
Optimization Tips
- Use edge caching
- Minimize database queries
- Batch operations
- Use KV for hot data
- Implement request coalescing
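The last tip, request coalescing, deduplicates concurrent identical lookups: callers asking for the same key share one in-flight promise, so a burst of identical requests costs a single database or origin hit. A per-isolate sketch (state is not shared globally across the edge):

```typescript
// In-flight promises keyed by request identity. Concurrent callers of the
// same key reuse the pending promise instead of issuing duplicate work.
const inflight = new Map<string, Promise<unknown>>()

export function coalesce<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inflight.get(key)
  if (existing) return existing as Promise<T>

  const promise = fetcher().finally(() => inflight.delete(key)) // clear when settled
  inflight.set(key, promise)
  return promise
}
```

A Worker could wrap a D1 lookup as `coalesce(`user:${id}`, () => env.DB.prepare('SELECT ...').first())` (binding name assumed).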
Infrastructure as Code
Terraform Configuration
Manage Cloudflare infrastructure with Terraform:
# terraform/main.tf
terraform {
required_providers {
cloudflare = {
source = "cloudflare/cloudflare"
version = "~> 4.0"
}
}
}
provider "cloudflare" {
api_token = var.cloudflare_api_token
}
variable "cloudflare_api_token" {
type = string
sensitive = true
}
variable "account_id" {
type = string
}
variable "zone_id" {
type = string
}
variable "api_key" {
type = string
sensitive = true
}
# Worker Script
resource "cloudflare_worker_script" "api" {
account_id = var.account_id
name = "api-worker"
content = file("../workers/api/dist/index.js")
plain_text_binding {
name = "ENVIRONMENT"
text = "production"
}
secret_text_binding {
name = "API_KEY"
text = var.api_key
}
kv_namespace_binding {
name = "KV"
namespace_id = cloudflare_workers_kv_namespace.api_kv.id
}
d1_database_binding {
name = "DB"
database_id = cloudflare_d1_database.api_db.id
}
r2_bucket_binding {
name = "STORAGE"
bucket_name = cloudflare_r2_bucket.api_storage.name
}
service_binding {
name = "AUTH"
service = "auth-worker"
environment = "production"
}
analytics_engine_binding {
name = "ANALYTICS"
}
}
# Worker Route
resource "cloudflare_worker_route" "api" {
zone_id = var.zone_id
pattern = "api.example.do/*"
script_name = cloudflare_worker_script.api.name
}
# KV Namespace
resource "cloudflare_workers_kv_namespace" "api_kv" {
account_id = var.account_id
title = "api-production-kv"
}
# D1 Database
resource "cloudflare_d1_database" "api_db" {
account_id = var.account_id
name = "api-production-db"
}
# R2 Bucket
resource "cloudflare_r2_bucket" "api_storage" {
account_id = var.account_id
name = "api-production-storage"
location = "auto"
}
# KV namespace (note: Durable Object classes are bound via migrations on the
# worker script, not created as a standalone namespace resource)
resource "cloudflare_workers_kv_namespace" "durable_objects" {
account_id = var.account_id
title = "durable-objects-production"
}
# Custom Domain for Worker (proxied placeholder record; the worker route
# intercepts matching traffic, so the CNAME target itself is never contacted)
resource "cloudflare_record" "api" {
zone_id = var.zone_id
name = "api"
value = "workers.dev"
type = "CNAME"
proxied = true
}
# Rate Limiting
resource "cloudflare_rate_limit" "api" {
zone_id = var.zone_id
threshold = 1000
period = 60
match {
request {
url_pattern = "api.example.do/*"
}
}
action {
mode = "challenge"
timeout = 86400
}
}
# WAF Rules
resource "cloudflare_ruleset" "waf" {
zone_id = var.zone_id
name = "API WAF Rules"
description = "WAF rules for API worker"
kind = "zone"
phase = "http_request_firewall_custom"
rules {
action = "block"
expression = "(http.request.uri.path contains \"/admin\" and not ip.src in {192.0.2.0/24})"
description = "Block admin access from non-whitelisted IPs"
enabled = true
}
rules {
action = "challenge"
expression = "(cf.threat_score gt 14)"
description = "Challenge medium threat score"
enabled = true
}
}
# KV namespace for analytics metadata (Analytics Engine datasets are
# provisioned implicitly via the worker's analytics_engine_binding)
resource "cloudflare_workers_kv_namespace" "analytics" {
account_id = var.account_id
title = "analytics-production"
}
# Output values
output "worker_url" {
value = "https://api.example.do"
}
output "kv_namespace_id" {
value = cloudflare_workers_kv_namespace.api_kv.id
}
output "d1_database_id" {
value = cloudflare_d1_database.api_db.id
}
output "r2_bucket_name" {
value = cloudflare_r2_bucket.api_storage.name
}
Apply infrastructure:
# Initialize Terraform
cd terraform
terraform init
# Plan changes
terraform plan -var-file="production.tfvars"
# Apply changes
terraform apply -var-file="production.tfvars"
# Destroy infrastructure
terraform destroy -var-file="production.tfvars"
Pulumi Configuration
Alternative IaC with Pulumi:
// infrastructure/index.ts
import * as pulumi from '@pulumi/pulumi'
import * as cloudflare from '@pulumi/cloudflare'
const config = new pulumi.Config()
const accountId = config.require('accountId')
const zoneId = config.require('zoneId')
// KV Namespace
const kvNamespace = new cloudflare.WorkersKvNamespace('api-kv', {
accountId,
title: 'api-production-kv',
})
// D1 Database
const d1Database = new cloudflare.D1Database('api-db', {
accountId,
name: 'api-production-db',
})
// R2 Bucket
const r2Bucket = new cloudflare.R2Bucket('api-storage', {
accountId,
name: 'api-production-storage',
location: 'auto',
})
// Worker Script
const workerScript = new cloudflare.WorkerScript('api-worker', {
accountId,
name: 'api-worker',
content: pulumi.output(import('fs').then((fs) => fs.readFileSync('../workers/api/dist/index.js', 'utf8'))),
plainTextBindings: [
{
name: 'ENVIRONMENT',
text: 'production',
},
],
kvNamespaceBindings: [
{
name: 'KV',
namespaceId: kvNamespace.id,
},
],
d1DatabaseBindings: [
{
name: 'DB',
databaseId: d1Database.id,
},
],
r2BucketBindings: [
{
name: 'STORAGE',
bucketName: r2Bucket.name,
},
],
analyticsEngineBindings: [
{
name: 'ANALYTICS',
},
],
})
// Worker Route
const workerRoute = new cloudflare.WorkerRoute('api-route', {
zoneId,
pattern: 'api.example.do/*',
scriptName: workerScript.name,
})
// DNS Record
const dnsRecord = new cloudflare.Record('api-dns', {
zoneId,
name: 'api',
value: 'workers.dev',
type: 'CNAME',
proxied: true,
})
// Export outputs
export const workerUrl = pulumi.interpolate`https://api.example.do`
export const kvNamespaceId = kvNamespace.id
export const d1DatabaseId = d1Database.id
export const r2BucketName = r2Bucket.name
Deploy with Pulumi:
# Install dependencies
pnpm install @pulumi/pulumi @pulumi/cloudflare
# Set configuration
pulumi config set cloudflare:apiToken $CLOUDFLARE_API_TOKEN --secret
pulumi config set accountId $CLOUDFLARE_ACCOUNT_ID
pulumi config set zoneId $CLOUDFLARE_ZONE_ID
# Preview changes
pulumi preview
# Deploy stack
pulumi up
# View outputs
pulumi stack output
# Destroy stack
pulumi destroy
Resource Management
KV Namespace Management
Manage key-value storage:
// scripts/manage-kv.ts
import { Cloudflare } from 'cloudflare'
const cf = new Cloudflare({
apiToken: process.env.CLOUDFLARE_API_TOKEN,
})
const accountId = process.env.CLOUDFLARE_ACCOUNT_ID!
const namespaceId = process.env.KV_NAMESPACE_ID!
// Write key
await cf.kv.namespaces.values.update(accountId, namespaceId, 'my-key', {
value: 'my-value',
metadata: { timestamp: Date.now() },
})
// Read key
const value = await cf.kv.namespaces.values.get(accountId, namespaceId, 'my-key')
// List keys
const keys = await cf.kv.namespaces.keys.list(accountId, namespaceId, {
limit: 1000,
prefix: 'user:',
})
// Delete key
await cf.kv.namespaces.values.delete(accountId, namespaceId, 'my-key')
// Bulk write
const records = [
{ key: 'key1', value: 'value1' },
{ key: 'key2', value: 'value2' },
{ key: 'key3', value: 'value3' },
]
for (const record of records) {
await cf.kv.namespaces.values.update(accountId, namespaceId, record.key, {
value: record.value,
})
}
D1 Database Management
Manage SQLite databases:
# Create database
pnpm wrangler d1 create production-db
# List databases
pnpm wrangler d1 list
# Execute query
pnpm wrangler d1 execute production-db \
--command "SELECT * FROM users LIMIT 10"
# Execute from file
pnpm wrangler d1 execute production-db \
--file ./migrations/001_create_users.sql
# Create migration
pnpm wrangler d1 migrations create production-db add_email_column
# Apply migrations
pnpm wrangler d1 migrations apply production-db
# Backup database
pnpm wrangler d1 export production-db --output backup.sql
# Import data
pnpm wrangler d1 execute production-db --file backup.sql
Migration example:
-- migrations/0001_create_tables.sql
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
email TEXT UNIQUE NOT NULL,
name TEXT NOT NULL,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);
CREATE INDEX idx_users_email ON users(email);
CREATE TABLE IF NOT EXISTS sessions (
id TEXT PRIMARY KEY,
user_id TEXT NOT NULL,
token TEXT UNIQUE NOT NULL,
expires_at INTEGER NOT NULL,
FOREIGN KEY (user_id) REFERENCES users(id)
);
CREATE INDEX idx_sessions_token ON sessions(token);
CREATE INDEX idx_sessions_user_id ON sessions(user_id);
R2 Storage Management
Manage object storage:
// scripts/manage-r2.ts
export default {
async fetch(request: Request, env: Env) {
const url = new URL(request.url)
const key = url.pathname.slice(1)
switch (request.method) {
case 'GET':
// Download file
const object = await env.STORAGE.get(key)
if (!object) {
return new Response('Not found', { status: 404 })
}
return new Response(object.body, {
headers: {
'Content-Type': object.httpMetadata?.contentType || 'application/octet-stream',
'Content-Length': object.size.toString(),
ETag: object.httpEtag,
},
})
case 'PUT':
// Upload file
await env.STORAGE.put(key, request.body, {
httpMetadata: {
contentType: request.headers.get('Content-Type') || 'application/octet-stream',
},
customMetadata: {
uploadedBy: request.headers.get('X-User-ID') || 'unknown',
uploadedAt: new Date().toISOString(),
},
})
return new Response('Uploaded', { status: 201 })
case 'DELETE':
// Delete file
await env.STORAGE.delete(key)
return new Response('Deleted', { status: 204 })
case 'HEAD':
// Get metadata
const head = await env.STORAGE.head(key)
if (!head) {
return new Response('Not found', { status: 404 })
}
return new Response(null, {
headers: {
'Content-Type': head.httpMetadata?.contentType || 'application/octet-stream',
'Content-Length': head.size.toString(),
ETag: head.httpEtag,
},
})
default:
return new Response('Method not allowed', { status: 405 })
}
},
}
R2 CLI operations:
# Create bucket
pnpm wrangler r2 bucket create production-storage
# List buckets
pnpm wrangler r2 bucket list
# Upload file
pnpm wrangler r2 object put production-storage/file.txt --file ./local-file.txt
# Download file
pnpm wrangler r2 object get production-storage/file.txt --file ./downloaded-file.txt
# List objects
pnpm wrangler r2 object list production-storage --prefix uploads/
# Delete object
pnpm wrangler r2 object delete production-storage/file.txt
# Delete bucket
pnpm wrangler r2 bucket delete production-storage
Environment Configuration
Environment Variables
Manage environment-specific configuration:
// workers/api/src/config.ts
export interface Config {
environment: 'development' | 'staging' | 'production'
apiUrl: string
databaseUrl: string
logLevel: 'debug' | 'info' | 'warn' | 'error'
features: {
analytics: boolean
rateLimit: boolean
caching: boolean
}
}
export function getConfig(env: Env): Config {
const environment = env.ENVIRONMENT || 'development'
const configs: Record<string, Config> = {
development: {
environment: 'development',
apiUrl: 'https://dev-api.example.do',
databaseUrl: env.DEV_DATABASE_URL,
logLevel: 'debug',
features: {
analytics: false,
rateLimit: false,
caching: false,
},
},
staging: {
environment: 'staging',
apiUrl: 'https://staging-api.example.do',
databaseUrl: env.STAGING_DATABASE_URL,
logLevel: 'info',
features: {
analytics: true,
rateLimit: true,
caching: true,
},
},
production: {
environment: 'production',
apiUrl: 'https://api.example.do',
databaseUrl: env.PRODUCTION_DATABASE_URL,
logLevel: 'warn',
features: {
analytics: true,
rateLimit: true,
caching: true,
},
},
}
return configs[environment] ?? configs.development // fall back for unknown environments
}
// Usage in worker
export default {
async fetch(request: Request, env: Env) {
const config = getConfig(env)
if (config.features.analytics) {
// Track request
await trackAnalytics(request, env)
}
if (config.features.rateLimit) {
// Check rate limit
const limited = await checkRateLimit(request, env)
if (limited) {
return new Response('Rate limit exceeded', { status: 429 })
}
}
return handleRequest(request, env, config)
},
}
Secret Management
Secure secret storage:
# Set secret
pnpm wrangler secret put API_KEY --env production
# Prompt: Enter value for API_KEY:
# Set from file
cat api-key.txt | pnpm wrangler secret put API_KEY --env production
# Set from environment variable
echo $API_KEY | pnpm wrangler secret put API_KEY --env production
# List secrets (doesn't show values)
pnpm wrangler secret list --env production
# Delete secret
pnpm wrangler secret delete API_KEY --env production
Using secrets in code:
export default {
async fetch(request: Request, env: Env) {
// Access secrets from env
const apiKey = env.API_KEY
const dbPassword = env.DB_PASSWORD
const jwtSecret = env.JWT_SECRET
// Use in API calls
const response = await fetch('https://api.example.com', {
headers: {
Authorization: `Bearer ${apiKey}`,
},
})
// Use in JWT signing
const token = await sign({ userId: '123' }, jwtSecret)
return new Response('OK')
},
}
Multi-Region Architecture
Geographic Distribution
Deploy across regions for low latency:
// workers/router/src/index.ts
export default {
async fetch(request: Request, env: Env) {
// Get user's region from Cloudflare
const country = request.cf?.country as string
const region = getRegion(country)
// Route to the regional endpoint (env binding names are uppercase, e.g. US_EAST_ENDPOINT)
const regionalEndpoint = env[`${region.toUpperCase()}_ENDPOINT`]
const response = await fetch(regionalEndpoint, {
method: request.method,
headers: request.headers,
body: request.body,
})
return response
},
}
function getRegion(country: string): string {
const regions: Record<string, string> = {
US: 'us_east',
CA: 'us_east',
GB: 'eu_west',
FR: 'eu_west',
DE: 'eu_west',
JP: 'ap_northeast',
CN: 'ap_northeast',
AU: 'ap_southeast',
BR: 'sa_east',
}
return regions[country] || 'us_east'
}
Regional Failover
Automatic failover to backup regions:
export default {
async fetch(request: Request, env: Env) {
const regions = ['us-east', 'eu-west', 'ap-southeast']
for (const region of regions) {
try {
const endpoint = env[`${region.toUpperCase().replace('-', '_')}_ENDPOINT`]
const response = await fetch(endpoint, {
method: request.method,
headers: request.headers,
body: request.clone().body,
signal: AbortSignal.timeout(5000), // 5s timeout
})
if (response.ok) {
return response
}
} catch (error) {
// Continue to next region
console.error(`Failed to reach ${region}:`, error)
}
}
return new Response('All regions unavailable', { status: 503 })
},
}
Resource Limits
Cloudflare Workers Limits
Be aware of platform limits:
- CPU time: 10ms per request (free), up to 30s (paid)
- Memory: 128MB per isolate
- Bundle size: a few MB compressed, plan-dependent
- Request body size: 100MB (higher on Enterprise)
- Response size: effectively unlimited when streamed
- Subrequests: 50 per request (free), 1,000 (paid)
- KV: 100,000 reads/day and 1,000 writes/day on the free tier
- D1: 5M rows read/day and 100K rows written/day on the free tier; higher metered quotas on paid plans
- R2: 1M Class A and 10M Class B operations/month on the free tier
These figures change over time; confirm against Cloudflare's published limits before relying on them.
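The subrequest cap in particular bites fan-out code. One defensive pattern is to fail fast when a batch would exceed the per-invocation budget and to issue fetches in small chunks (the default budget below mirrors the free-tier figure; the chunk size of 6 reflects the platform's cap on simultaneous open connections, but both are assumptions to tune per plan):

```typescript
// Split a list into fixed-size chunks.
export function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = []
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size))
  return out
}

// Fan out fetches while staying under the per-invocation subrequest budget.
export async function fanOut(urls: string[], budget = 50, concurrency = 6): Promise<Response[]> {
  if (urls.length > budget) {
    throw new Error(`would exceed subrequest budget: ${urls.length} > ${budget}`)
  }
  const results: Response[] = []
  for (const group of chunk(urls, concurrency)) {
    results.push(...(await Promise.all(group.map((u) => fetch(u)))))
  }
  return results
}
```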
Optimization Strategies
Optimize for resource limits:
// 1. Minimize bundle size
import { z } from 'zod' // full-featured but comparatively heavy validator
// consider a lighter validator or hand-rolled checks when bundle size is tight
// 2. Use streaming for large responses
export default {
async fetch(request: Request) {
const { readable, writable } = new TransformStream()
const writer = writable.getWriter()
const encoder = new TextEncoder()
// Stream data in chunks
;(async () => {
for (let i = 0; i < 1000; i++) {
await writer.write(encoder.encode(`Chunk ${i}\n`))
}
await writer.close()
})()
return new Response(readable, {
headers: { 'Content-Type': 'text/plain' },
})
},
}
// 3. Cache expensive operations (per-isolate, in-memory: not shared globally,
// and lost whenever the isolate is recycled)
const cache = new Map<string, any>()
async function getCachedData(key: string, fetcher: () => Promise<any>) {
if (cache.has(key)) {
return cache.get(key)
}
const data = await fetcher()
cache.set(key, data)
return data
}
// 4. Batch KV operations
async function batchKVReads(keys: string[], env: Env) {
// Read multiple keys in parallel
const promises = keys.map((key) => env.KV.get(key))
const values = await Promise.all(promises)
return Object.fromEntries(keys.map((key, i) => [key, values[i]]))
}
// 5. Use Durable Objects for stateful operations
export class Counter {
state: DurableObjectState
count: number = 0
constructor(state: DurableObjectState) {
this.state = state
this.state.blockConcurrencyWhile(async () => {
this.count = (await this.state.storage.get('count')) || 0
})
}
async fetch(request: Request) {
this.count++
await this.state.storage.put('count', this.count)
return new Response(String(this.count))
}
}
Disaster Recovery
Backup Strategy
Automated backup procedures:
// scripts/backup.ts
import { Cloudflare } from 'cloudflare'
const cf = new Cloudflare({
apiToken: process.env.CLOUDFLARE_API_TOKEN!,
})
async function backupKV(accountId: string, namespaceId: string) {
const backup: Record<string, string> = {}
let cursor: string | undefined
do {
const response = await cf.kv.namespaces.keys.list(accountId, namespaceId, {
limit: 1000,
cursor,
})
for (const key of response.result) {
const value = await cf.kv.namespaces.values.get(accountId, namespaceId, key.name)
backup[key.name] = value as string
}
cursor = response.result_info?.cursor
} while (cursor)
// Save to file
await Bun.write(`backups/kv-${Date.now()}.json`, JSON.stringify(backup, null, 2))
}
async function backupD1(databaseId: string) {
// Export via wrangler
const { $ } = await import('bun')
await $`pnpm wrangler d1 export ${databaseId} --output backups/d1-${Date.now()}.sql`
}
async function backupR2(bucketName: string) {
// Export the object listing (the objects themselves must be fetched individually)
const { $ } = await import('bun')
await $`pnpm wrangler r2 object list ${bucketName} --output backups/r2-${bucketName}-${Date.now()}.json`
}
// Run backups
await backupKV(process.env.ACCOUNT_ID!, process.env.KV_NAMESPACE_ID!)
await backupD1(process.env.D1_DATABASE_ID!)
await backupR2(process.env.R2_BUCKET_NAME!)
Restore Procedures
Restore from backups:
// scripts/restore.ts
async function restoreKV(accountId: string, namespaceId: string, backupFile: string) {
const backup = JSON.parse(await Bun.file(backupFile).text())
for (const [key, value] of Object.entries(backup)) {
await cf.kv.namespaces.values.update(accountId, namespaceId, key, {
value: value as string,
})
}
}
async function restoreD1(databaseId: string, backupFile: string) {
const { $ } = await import('bun')
await $`pnpm wrangler d1 execute ${databaseId} --file ${backupFile}`
}
async function restoreR2(bucketName: string, backupDir: string) {
const { $ } = await import('bun')
const { readdirSync } = await import('node:fs')
// wrangler r2 object put takes one key and one --file per call,
// so upload each file from the backup directory individually
for (const file of readdirSync(backupDir)) {
await $`pnpm wrangler r2 object put ${bucketName}/${file} --file ${backupDir}/${file}`
}
}
Next Steps
- CI/CD → Automate deployments
- Configuration → Manage settings
- Strategies → Deployment patterns
- Observe → Monitor infrastructure
Infrastructure Tip: Edge computing provides unmatched performance and scalability. Design for the edge first, with Infrastructure as Code for repeatability and disaster recovery.