# llm

Unified gateway for Large Language Models, providing a consistent API across OpenAI, Anthropic, Google, Meta, and other providers.
## Overview

The LLM primitive abstracts away provider-specific APIs, giving you a single interface for text generation, chat completion, embeddings, and function calling across all major AI providers.
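As a rough sketch of what a single cross-provider interface means in practice, one request object can drive any backend. The `LlmRequest` type and `toJsonBody` helper below are illustrative assumptions, not the SDK's actual types; only the `prompt`/`model` body shape is taken from the REST example on this page:

```typescript
// Illustrative provider-agnostic request shape (an assumption, not the
// real sdk.do type). The same object works whichever provider serves it.
interface LlmRequest {
  prompt: string
  model: string
  provider?: string // e.g. 'openai.llm.do'; omitted → platform default
}

// Serialize a request into the JSON body used by the REST endpoint
// example elsewhere on this page.
function toJsonBody(req: LlmRequest): string {
  return JSON.stringify({ prompt: req.prompt, model: req.model })
}

console.log(toJsonBody({ prompt: 'Explain quantum computing', model: 'gpt-5' }))
// {"prompt":"Explain quantum computing","model":"gpt-5"}
```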
## SDK Object Mapping

This primitive maps to the `ai` SDK object, one of the 8 core platform objects (generative functions):

```typescript
import { ai, llm } from 'sdk.do'

// AI - Generate text (ai is one of 8 core SDK objects)
const response = await ai.generate({
  prompt: 'Write a product description',
  model: 'gpt-5',
  provider: 'openai.llm.do',
})

// AI - Chat completion
const chat = await ai.chat({
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
  model: 'claude-sonnet-4.5',
  provider: 'anthropic.llm.do',
})

// AI - Helper functions
const items = await ai.list('Generate a list of fruit names')
const isValid = await ai.is('this is a valid email address', '[email protected]')
const areValid = await ai.are('these are valid emails', ['[email protected]', '[email protected]'])

// Direct LLM access
const response2 = await llm.generate({
  prompt: 'Explain quantum computing',
  model: 'gpt-5',
})
```

## Subdomain Architecture
The llm primitive uses infinite free subdomains for AI providers:

```
llm.do                    # Root - Universal LLM interface
├── openai.llm.do         # OpenAI (GPT-4, GPT-5)
├── anthropic.llm.do      # Anthropic (Claude)
├── meta.llm.do           # Meta (Llama)
├── google.llm.do         # Google (Gemini)
├── mistral.llm.do        # Mistral AI
├── cohere.llm.do         # Cohere
└── {custom}.llm.do       # Custom providers
```

## Child Primitives

- **models** - Model management and selection
- **embeddings** - Vector embeddings generation
- **vectors** - Vector operations and search
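The subdomain convention above is mechanical enough to derive an endpoint from a provider name. The `providerEndpoint` helper below is hypothetical, not part of sdk.do; it only encodes the `{provider}.llm.do` pattern shown in the tree:

```typescript
// Build the llm.do endpoint for a provider, following the
// {provider}.llm.do subdomain convention shown above.
// NOTE: providerEndpoint is a hypothetical helper, not part of sdk.do.
function providerEndpoint(provider?: string): string {
  // The bare root domain is the universal, provider-agnostic interface.
  if (!provider) return 'https://llm.do'
  return `https://${provider}.llm.do`
}

console.log(providerEndpoint('openai'))    // https://openai.llm.do
console.log(providerEndpoint('anthropic')) // https://anthropic.llm.do
console.log(providerEndpoint())            // https://llm.do
```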
## Quick Example

```typescript
import { llm } from 'sdk.do'

// Generate text
const response = await llm.generate({
  prompt: 'Write a product description',
  model: 'gpt-5',
})

// Streaming
for await (const chunk of llm.stream({ prompt: 'Tell me a story', model: 'claude-sonnet-4.5' })) {
  process.stdout.write(chunk)
}
```

## Core Capabilities

- **Multi-Provider** - OpenAI, Anthropic, Google, Meta, and custom providers
- **Unified API** - Consistent interface across all providers
- **Streaming** - Real-time token streaming for responsive UIs
- **Function Calling** - Structured outputs and tool use
- **Cost Optimization** - Automatic model selection and caching
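To make the cost-optimization idea concrete, automatic model selection can be sketched as a routing policy that sends cheap-to-serve prompts to one model and the rest to another. Everything below — the `pickModel` function, the tier assignments, and the length threshold — is an illustrative assumption, not the platform's actual routing logic; only the model names come from this page:

```typescript
// Hypothetical model-routing sketch (NOT sdk.do's real logic): route
// short prompts to one tier and long prompts to another.
interface RoutingPolicy {
  cheap: string            // model used for short/simple prompts
  strong: string           // model used for long/complex prompts
  maxCheapPromptChars: number
}

function pickModel(prompt: string, policy: RoutingPolicy): string {
  return prompt.length <= policy.maxCheapPromptChars ? policy.cheap : policy.strong
}

// Tier assignments and threshold are placeholders for illustration only.
const policy: RoutingPolicy = {
  cheap: 'gpt-5',
  strong: 'claude-sonnet-4.5',
  maxCheapPromptChars: 200,
}

console.log(pickModel('Explain quantum computing', policy)) // gpt-5 (short prompt)
```

A real router would weigh per-token pricing, latency, and task complexity rather than raw prompt length, but the shape — a pure function from request to model name — stays the same.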
## Access Methods

### SDK

TypeScript/JavaScript library for LLM operations:

```typescript
await llm.generate({ prompt: 'Explain quantum computing', model: 'gpt-5' })
```

### CLI

Command-line tool for LLM interactions:

```bash
do llm generate "Explain quantum computing" --model gpt-5
```

### API

REST/RPC endpoints for LLM access:

```bash
curl -X POST https://api.do/v1/llm/generate \
  -d '{"prompt":"Explain quantum computing","model":"gpt-5"}'
```

### MCP

Model Context Protocol for AI-to-AI communication:

```
Generate text using GPT-5: "Explain quantum computing"
```

## Related Primitives
### Child Primitives

- **models** - Model management and selection
- **embeddings** - Vector embeddings generation
- **vectors** - Vector operations and search