Reference MCP

Model Context Protocol reference for reference.do - API reference and technical docs

Overview

The Model Context Protocol (MCP) provides AI models with direct access to reference.do through a standardized interface.

Installation

pnpm add @modelcontextprotocol/sdk

Configuration

Add to your MCP server configuration:

{
  "mcpServers": {
    "reference": {
      "command": "npx",
      "args": ["-y", "@dotdo/mcp-server"],
      "env": {
        "DO_API_KEY": "your-api-key"
      }
    }
  }
}

Tools

reference/invoke

Main tool for reference.do operations.

{
  "name": "reference/invoke",
  "description": "API reference and technical docs",
  "inputSchema": {
    "type": "object",
    "properties": {
      "operation": {
        "type": "string",
        "description": "Operation to perform"
      },
      "parameters": {
        "type": "object",
        "description": "Operation parameters"
      }
    },
    "required": ["operation"]
  }
}

Usage in AI Models

Claude Desktop

// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "reference": {
      "command": "npx",
      "args": ["-y", "@dotdo/mcp-server", "--tool=reference"],
      "env": {
        "DO_API_KEY": "undefined"
      }
    }
  }
}

OpenAI GPTs

# Custom GPT configuration
tools:
  - type: mcp
    server: reference
    operations:
      - invoke
      - query
      - status

Custom Integration

import { Client } from '@modelcontextprotocol/sdk/client/index.js'
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js'

const transport = new StdioClientTransport({
  command: 'npx',
  args: ['-y', '@dotdo/mcp-server', '--tool=reference'],
})

const client = new Client(
  {
    name: 'reference-client',
    version: '1.0.0',
  },
  {
    capabilities: {},
  }
)

await client.connect(transport)

// Call tool
const result = await client.callTool({
  name: 'reference/invoke',
  arguments: {
    operation: 'reference',
    parameters: {},
  },
})
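
The result of callTool follows the standard MCP content format. A minimal sketch for unpacking it, assuming the reference server returns a single JSON text block (the actual payload depends on the operation):

// Tool results arrive as a content array; text payloads can be parsed
// back into structured data. This assumes a single JSON text block.
const [first] = result.content as Array<{ type: string; text?: string }>
if (first?.type === 'text' && first.text) {
  console.log(JSON.parse(first.text))
}

// Close the connection when finished
await client.close()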

Tool Definitions

Available Tools

{
  "tools": [
    {
      "name": "reference/invoke",
      "description": "Invoke reference.do",
      "inputSchema": {
        /* ... */
      }
    },
    {
      "name": "reference/query",
      "description": "Query reference.do resources",
      "inputSchema": {
        /* ... */
      }
    },
    {
      "name": "reference/status",
      "description": "Check reference.do status",
      "inputSchema": {
        /* ... */
      }
    }
  ]
}
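
Rather than hard-coding these names, a client can discover them at runtime. A minimal sketch using the connected client from the Custom Integration example:

// List the tools the server advertises and guard against a missing one
const { tools } = await client.listTools()

for (const tool of tools) {
  console.log(`${tool.name}: ${tool.description}`)
}

const hasInvoke = tools.some((tool) => tool.name === 'reference/invoke')
if (!hasInvoke) {
  throw new Error('reference/invoke is not available on this server')
}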

Resources

Available Resources

{
  "resources": [
    {
      "uri": "reference://config",
      "name": "Reference Configuration",
      "mimeType": "application/json"
    },
    {
      "uri": "reference://docs",
      "name": "Reference Documentation",
      "mimeType": "text/markdown"
    }
  ]
}
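
Resources are read by URI with the same client. A sketch, assuming the connected client from the Custom Integration example and the URIs listed above:

// Enumerate resources, then read the JSON configuration resource
const { resources } = await client.listResources()
console.log(resources.map((resource) => resource.uri))

const config = await client.readResource({ uri: 'reference://config' })
const [entry] = config.contents
if ('text' in entry && typeof entry.text === 'string') {
  console.log(JSON.parse(entry.text))
}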

Prompts

Pre-configured Prompts

{
  "prompts": [
    {
      "name": "reference-quick-start",
      "description": "Quick start guide for reference.do",
      "arguments": []
    },
    {
      "name": "reference-best-practices",
      "description": "Best practices for reference.do",
      "arguments": []
    }
  ]
}
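
Prompts are fetched the same way; their messages can be passed straight to a model as conversation context. A sketch with the same connected client:

// List available prompts, then fetch the quick-start prompt
const { prompts } = await client.listPrompts()
console.log(prompts.map((prompt) => prompt.name))

const quickStart = await client.getPrompt({
  name: 'reference-quick-start',
  arguments: {},
})

// quickStart.messages is ready to use as model context
console.log(quickStart.messages)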

Examples

Basic Usage

// AI model calls the tool via MCP
mcp call reference/invoke

With Parameters

// Call with parameters
await mcp.callTool('reference/invoke', {
  operation: 'process',
  parameters: {
    // Operation-specific parameters
  },
  options: {
    timeout: 30000,
  },
})

Error Handling

try {
  const result = await mcp.callTool('reference/invoke', {
    operation: 'process',
  })
  return result
} catch (error) {
  if (error.code === 'TOOL_NOT_FOUND') {
    console.error('Reference tool not available')
  } else {
    throw error
  }
}
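
When calling through the official SDK client rather than a wrapper, protocol failures surface as McpError instances with numeric error codes. A sketch, assuming the connected client from the Custom Integration example:

import { McpError, ErrorCode } from '@modelcontextprotocol/sdk/types.js'

try {
  const result = await client.callTool({
    name: 'reference/invoke',
    arguments: { operation: 'process' },
  })
  console.log(result)
} catch (error) {
  if (error instanceof McpError && error.code === ErrorCode.MethodNotFound) {
    console.error('Reference tool not available on this server')
  } else {
    throw error
  }
}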

AI Integration Patterns

Agentic Workflows

// AI agent uses reference.do in workflow
const workflow = {
  steps: [
    {
      tool: 'reference/invoke',
      operation: 'analyze',
      input: 'user-data',
    },
    {
      tool: 'reference/process',
      operation: 'transform',
      input: 'analysis-result',
    },
  ],
}
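
A minimal executor for this workflow shape, assuming the connected client from the Custom Integration example; how one step's output is threaded into the next step's parameters is an assumption of this sketch:

// Run each step in order, feeding the previous step's content forward
let previousOutput: unknown = 'user-data'

for (const step of workflow.steps) {
  const result = await client.callTool({
    name: step.tool,
    arguments: {
      operation: step.operation,
      parameters: { input: previousOutput },
    },
  })
  previousOutput = result.content
}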

Chain of Thought

AI models can reason about reference.do operations:

User: "I need to process this data"

AI: "I'll use the reference tool to:
1. Validate the data format
2. Process it through reference.do
3. Return the results

Let me start..."

[Calls: mcp call reference/invoke]

Server Implementation

Custom MCP Server

import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js'

const server = new Server(
  {
    name: 'reference-server',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
      resources: {},
      prompts: {},
    },
  }
)

// Register the tool handler (the SDK routes requests by schema, not by method string)
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'reference/invoke') {
    // Handle the reference.do operation (placeholder result for illustration)
    const result = { operation: request.params.arguments?.operation, status: 'ok' }
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify(result),
        },
      ],
    }
  }
  throw new Error(`Unknown tool: ${request.params.name}`)
})

const transport = new StdioServerTransport()
await server.connect(transport)
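
Clients call tools/list before tools/call, so the server should also advertise the tool it handles. A minimal sketch for the server above (register this handler before connecting the transport):

import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js'

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'reference/invoke',
      description: 'API reference and technical docs',
      inputSchema: {
        type: 'object',
        properties: {
          operation: { type: 'string', description: 'Operation to perform' },
          parameters: { type: 'object', description: 'Operation parameters' },
        },
        required: ['operation'],
      },
    },
  ],
}))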

Best Practices

  1. Tool Design - Keep tools focused and single-purpose
  2. Error Messages - Provide clear, actionable errors
  3. Documentation - Include examples in tool descriptions
  4. Rate Limiting - Implement appropriate limits
  5. Security - Validate all inputs from AI models (see the validation sketch after this list)
  6. Monitoring - Track tool usage and errors
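
For points 2 and 5, a minimal input-validation sketch using zod (which the MCP TypeScript SDK itself depends on); it assumes the server and CallToolRequestSchema from the Server Implementation section, and the operation names are illustrative:

import { z } from 'zod'

// Validate reference/invoke arguments before doing any work
const InvokeArgs = z.object({
  operation: z.enum(['analyze', 'process', 'query']),
  parameters: z.record(z.unknown()).optional(),
})

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const parsed = InvokeArgs.safeParse(request.params.arguments)
  if (!parsed.success) {
    // Clear, actionable error returned to the model instead of a crash
    return {
      isError: true,
      content: [{ type: 'text', text: `Invalid arguments: ${parsed.error.message}` }],
    }
  }
  // ...handle parsed.data here
  return { content: [{ type: 'text', text: 'ok' }] }
})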