Experiment
Test hypotheses, run A/B tests, and optimize business outcomes with data-driven experimentation using the .do platform's decide and send primitives.
Overview
The Experiment phase is where you validate assumptions, test variations, and discover what works best for your business. Using the .do platform's experimentation primitives, you can run sophisticated experiments without complex infrastructure.
This phase enables:
- A/B Testing: Test multiple variants to find winners
- Feature Flags: Control feature availability dynamically
- Multi-Armed Bandits: Automatically optimize allocation
- Event Streaming: Track all user interactions in real-time
- Statistical Analysis: Make data-driven decisions with confidence
Core Primitives
decide: A/B Testing and Variant Selection
The decide primitive assigns users to experiment variants:
// Simple A/B test
on($.User.visits.HomePage, async (event, $) => {
const variant = await decide('homepage-hero', {
userId: event.userId,
variants: ['control', 'variant-a', 'variant-b'],
weights: [1/3, 1/3, 1/3], // Equal distribution
})
// Track which variant was shown
await send($.Analytics.track, {
event: 'experiment_assigned',
userId: event.userId,
experiment: 'homepage-hero',
variant,
})
// Serve appropriate content
return {
hero: getHeroContent(variant),
variant,
}
})
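For assignment to be consistent, decide must return the same variant for a given user on every call. Conceptually this can be done without storing any state by hashing the user and experiment into a stable bucket; the sketch below is a hypothetical helper, not the platform's internals, mapping an FNV-1a hash onto the cumulative weights:

```typescript
// Illustrative sketch of deterministic, weighted variant assignment.
// Hypothetical helper, not the platform's actual implementation.
function assignVariant(
  experiment: string,
  userId: string,
  variants: string[],
  weights: number[],
): string {
  // FNV-1a hash of "experiment:userId" gives a stable bucket in [0, 1).
  let hash = 2166136261
  for (const ch of `${experiment}:${userId}`) {
    hash ^= ch.charCodeAt(0)
    hash = Math.imul(hash, 16777619)
  }
  const bucket = (hash >>> 0) / 2 ** 32
  // Walk the cumulative weight distribution to pick a variant.
  let cumulative = 0
  for (let i = 0; i < variants.length; i++) {
    cumulative += weights[i]
    if (bucket < cumulative) return variants[i]
  }
  return variants[variants.length - 1]
}

// The same user always lands in the same variant:
const v1 = assignVariant('homepage-hero', 'user-42', ['control', 'variant-a', 'variant-b'], [1/3, 1/3, 1/3])
const v2 = assignVariant('homepage-hero', 'user-42', ['control', 'variant-a', 'variant-b'], [1/3, 1/3, 1/3])
// v1 === v2
```

Because the bucket depends only on the experiment name and userId, assignments survive restarts and need no lookup table.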
// Conversion tracking
on($.User.clicks.CTA, async (event, $) => {
const variant = await decide.get('homepage-hero', event.userId)
await send($.Analytics.track, {
event: 'cta_clicked',
userId: event.userId,
experiment: 'homepage-hero',
variant,
})
})
send: Event Streaming
The send primitive streams events for analysis:
// Track user actions
on($.User.performs.Action, async (action, $) => {
await send($.Analytics.track, {
event: action.type,
userId: action.userId,
properties: {
timestamp: new Date(),
page: action.page,
...action.metadata,
},
})
})
// Track business metrics
on($.Order.completed, async (order, $) => {
await send($.Analytics.track, {
event: 'purchase_completed',
userId: order.userId,
properties: {
amount: order.total,
items: order.items.length,
plan: order.plan,
},
})
// Update experiment metrics
const experiment = await decide.get('checkout-flow', order.userId)
if (experiment) {
await send($.Experiment.conversion, {
experiment: 'checkout-flow',
variant: experiment,
userId: order.userId,
value: order.total,
})
}
})
// Real-time event processing
on($.Analytics.receives.Event, async (event, $) => {
// Process immediately for real-time dashboards
await $.Metrics.increment(event.name, {
userId: event.userId,
value: event.value || 1,
})
// Also queue for batch processing
await send($.Queue.analytics, event)
})
Experimentation Patterns
Pattern 1: Simple A/B Test
Test two variants to find the winner:
// Define experiment
const pricingTest = $.experiment({
name: 'pricing-page-2024',
hypothesis: 'Annual pricing emphasis increases conversions',
variants: {
control: {
name: 'Monthly First',
description: 'Show monthly pricing prominently',
},
treatment: {
name: 'Annual First',
description: 'Show annual pricing prominently with savings badge',
},
},
allocation: 0.5, // 50/50 split
metrics: {
primary: 'conversion_rate',
secondary: ['time_on_page', 'scroll_depth'],
},
minSampleSize: 1000,
significanceLevel: 0.05,
})
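A minSampleSize like 1000 should come from a power calculation rather than a round number. Assuming a two-proportion test, the standard normal-approximation formula gives the required per-variant sample size from the baseline rate and the minimum lift you want to detect; the helper below is an illustrative sketch, not a platform API:

```typescript
// Rough per-variant sample size for a two-proportion test (illustrative,
// not a platform API). Uses the normal approximation with
// alpha = 0.05 (z = 1.96, two-sided) and power = 0.8 (z = 0.84).
function requiredSampleSize(baselineRate: number, minDetectableLift: number): number {
  const p1 = baselineRate
  const p2 = baselineRate * (1 + minDetectableLift)
  const zAlpha = 1.96
  const zBeta = 0.84
  const pBar = (p1 + p2) / 2
  const numerator =
    (zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
      zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
  return Math.ceil(numerator / (p2 - p1) ** 2)
}

// e.g. 5% baseline conversion, detect a 20% relative lift:
const n = requiredSampleSize(0.05, 0.2)
// n is per variant, on the order of several thousand users here
```

Smaller baseline rates or smaller detectable lifts push the required sample size up quickly, which is why low-traffic pages often need longer experiments.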
// Assign variant
on($.User.visits.PricingPage, async (event, $) => {
const variant = await decide(pricingTest.name, {
userId: event.userId,
variants: ['control', 'treatment'],
})
await send($.Analytics.track, {
event: 'pricing_page_view',
userId: event.userId,
experiment: pricingTest.name,
variant,
})
return { layout: variant }
})
// Track conversions
on($.User.starts.Checkout, async (event, $) => {
const variant = await decide.get(pricingTest.name, event.userId)
await send($.Experiment.conversion, {
experiment: pricingTest.name,
variant,
userId: event.userId,
plan: event.plan,
})
})
// Analyze results
const results = await $.Experiment.analyze(pricingTest.name)
if (results.winner && results.confidence > 0.95) {
// Winner found with 95% confidence
await $.Experiment.graduate(pricingTest.name, results.winner)
}
Pattern 2: Multi-Variant Test (A/B/C/D)
Test multiple variants simultaneously:
// Define multi-variant test
const emailSubjectTest = $.experiment({
name: 'welcome-email-subject',
variants: {
control: 'Welcome to Acme',
personalized: 'Welcome {{name}} - Get Started',
benefit: 'Start Saving Time Today',
urgency: '24h to Unlock Your Benefits',
},
allocation: [0.25, 0.25, 0.25, 0.25],
})
// Assign variant when sending email
on($.User.signup, async (user, $) => {
const variant = await decide(emailSubjectTest.name, {
userId: user.id,
variants: ['control', 'personalized', 'benefit', 'urgency'],
})
const subject = getEmailSubject(variant, user)
await send($.Email.welcome, {
to: user.email,
subject,
experiment: emailSubjectTest.name,
variant,
})
await send($.Analytics.track, {
event: 'welcome_email_sent',
userId: user.id,
experiment: emailSubjectTest.name,
variant,
})
})
// Track email engagement
on($.User.opens.Email, async (event, $) => {
const variant = await decide.get(emailSubjectTest.name, event.userId)
await send($.Experiment.conversion, {
experiment: emailSubjectTest.name,
variant,
userId: event.userId,
metric: 'email_opened',
})
})
Pattern 3: Multi-Armed Bandit
Automatically optimize allocation based on performance:
// Define bandit experiment
const ctaButtonTest = $.bandit({
name: 'cta-button-color',
variants: ['blue', 'green', 'orange', 'red'],
strategy: 'thompson-sampling', // or 'epsilon-greedy', 'ucb'
epsilon: 0.1, // exploration rate for 'epsilon-greedy' (10% explore, 90% exploit)
})
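The listed strategies differ in how they trade exploration against exploitation. Epsilon-greedy is the simplest to picture: with probability epsilon pick a random arm, otherwise pick the arm with the best observed reward rate so far. A hypothetical sketch, not the platform's implementation:

```typescript
// Illustrative epsilon-greedy arm selection (not the platform's internals).
interface ArmStats { pulls: number; rewards: number }

function epsilonGreedy(
  arms: Record<string, ArmStats>,
  epsilon: number,
  random: () => number = Math.random,
): string {
  const names = Object.keys(arms)
  if (random() < epsilon) {
    // Explore: uniform random arm.
    return names[Math.floor(random() * names.length)]
  }
  // Exploit: highest empirical reward rate (unpulled arms count as 0).
  const rate = (a: ArmStats) => (a.pulls === 0 ? 0 : a.rewards / a.pulls)
  return names.reduce((best, name) => (rate(arms[name]) > rate(arms[best]) ? name : best))
}

const stats = {
  blue: { pulls: 100, rewards: 12 },
  green: { pulls: 100, rewards: 18 }, // best observed rate so far
  orange: { pulls: 100, rewards: 9 },
}
// With epsilon = 0 this always exploits:
const chosen = epsilonGreedy(stats, 0)
// chosen === 'green'
```

Thompson sampling and UCB replace the fixed epsilon with uncertainty-aware exploration, which usually converges faster, but the reward bookkeeping is the same.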
// Assign variant (automatically shifts traffic to winners)
on($.User.visits.LandingPage, async (event, $) => {
const variant = await decide(ctaButtonTest.name, {
userId: event.userId,
variants: ctaButtonTest.variants,
strategy: 'bandit',
})
await send($.Analytics.track, {
event: 'landing_page_view',
userId: event.userId,
experiment: ctaButtonTest.name,
variant,
})
return { ctaColor: variant }
})
// Track conversions (bandit automatically updates)
on($.User.clicks.CTA, async (event, $) => {
const variant = await decide.get(ctaButtonTest.name, event.userId)
await send($.Experiment.reward, {
experiment: ctaButtonTest.name,
variant,
userId: event.userId,
reward: 1, // Binary reward (clicked = 1)
})
})
Pattern 4: Feature Flag with Gradual Rollout
Control feature availability with gradual rollout:
// Define feature flag
const newDashboard = $.feature({
name: 'dashboard-v2',
enabled: false,
rollout: {
initial: 0,
step: 0.1, // Increase by 10% each step
interval: '1 day',
target: 1.0,
},
targeting: {
users: ['beta-tester-1', 'beta-tester-2'],
segments: ['premium', 'enterprise'],
rules: [
{ attribute: 'signupDate', operator: 'before', value: '2024-01-01' }, // Existing users first
],
},
})
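Percentage rollouts are typically implemented by giving each user a stable bucket in [0, 1) and enabling the feature when the bucket falls below the current rollout percentage. Because the bucket never changes, raising the percentage only ever adds users; nobody flips back and forth between v1 and v2. A hypothetical sketch of that check (not the platform's internals):

```typescript
// Illustrative rollout check (not the platform's implementation).
// Each user gets a stable hash bucket in [0, 1); the feature is on
// when the bucket falls below the current rollout percentage.
function inRollout(feature: string, userId: string, percentage: number): boolean {
  let hash = 2166136261
  for (const ch of `${feature}:${userId}`) {
    hash ^= ch.charCodeAt(0)
    hash = Math.imul(hash, 16777619)
  }
  const bucket = (hash >>> 0) / 2 ** 32
  return bucket < percentage
}

// A user enabled at 10% stays enabled at 50%:
const at10 = inRollout('dashboard-v2', 'user-7', 0.1)
const at50 = inRollout('dashboard-v2', 'user-7', 0.5)
// at10 implies at50 (rollout is monotone)
```

Targeting rules and allowlists then layer on top: check explicit users and segments first, and fall back to the percentage bucket for everyone else.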
// Check feature availability
on($.User.visits.Dashboard, async (user, $) => {
const enabled = await decide.feature('dashboard-v2', {
userId: user.id,
attributes: {
plan: user.plan,
signupDate: user.createdAt,
},
})
await send($.Analytics.track, {
event: 'dashboard_view',
userId: user.id,
feature: 'dashboard-v2',
enabled,
})
return { version: enabled ? 'v2' : 'v1' }
})
// Gradual rollout management
await $.Feature.startRollout('dashboard-v2')
// Automatically increases from 0% → 10% → 20% → ... → 100%
// Emergency kill switch (immediately disables feature for all users)
await $.Feature.disable('dashboard-v2')
// Other feature management methods:
// - $.Feature.enable('name', { users: [...] }) - Enable for specific users
// - $.Feature.setRollout('name', 0.5) - Set to 50% rollout
// - $.Feature.pause('name') - Pause rollout at current percentage
Pattern 5: Sequential Testing
Test one thing at a time for clear causality:
// Define sequential tests
const experimentQueue = $.experimentQueue({
experiments: [
{
name: 'headline-test',
duration: '7 days',
minSampleSize: 1000,
},
{
name: 'pricing-test',
duration: '14 days',
minSampleSize: 500,
},
{
name: 'cta-test',
duration: '7 days',
minSampleSize: 1000,
},
],
})
// Automatically progress through queue
on($.Experiment.completed, async (experiment, $) => {
// Analyze and store results
const results = await $.Experiment.analyze(experiment.name)
await db.create($.ExperimentResult, results)
// Graduate winner
if (results.winner) {
await $.Experiment.graduate(experiment.name, results.winner)
}
// Start next experiment
const next = await experimentQueue.next()
if (next) {
await $.Experiment.start(next.name)
}
})
Advanced Experimentation
Segmented Experiments
Run different experiments for different user segments:
// Segment-specific experiments
on($.User.visits.Page, async (user, $) => {
// Determine user segment
const segment = await $.Segment.identify(user, {
dimensions: ['plan', 'industry', 'size'],
})
// Run segment-specific experiment
const experiment = getExperimentForSegment(segment)
const variant = await decide(experiment.name, {
userId: user.id,
variants: experiment.variants,
segment,
})
await send($.Analytics.track, {
event: 'experiment_assigned',
userId: user.id,
experiment: experiment.name,
variant,
segment,
})
})
Holdout Groups
Maintain control groups to measure cumulative impact:
// Create holdout group
const holdout = $.holdout({
name: 'optimization-holdout-2024',
percentage: 0.05, // 5% holdout
description: 'Measures cumulative impact of all 2024 optimizations',
})
// Check if user is in holdout
on($.User.visits, async (user, $) => {
const isHoldout = await decide.holdout(holdout.name, {
userId: user.id,
})
if (isHoldout) {
// Show baseline experience (no optimizations)
return { experience: 'baseline' }
} else {
// Show optimized experience (all experiments)
return { experience: 'optimized' }
}
})
Metric Attribution
Attribute metric changes to specific experiments:
// Track metric with experiment context
on($.Metric.changed, async (metric, $) => {
// Get all active experiments for this user
const experiments = await decide.getActive(metric.userId)
// Attribute metric change to experiments
for (const experiment of experiments) {
await send($.Experiment.metric, {
experiment: experiment.name,
variant: experiment.variant,
userId: metric.userId,
metric: metric.name,
value: metric.value,
change: metric.delta,
})
}
})
Experiment Analysis
Statistical Significance
Calculate confidence and significance:
// Analyze experiment results
const analysis = await $.Experiment.analyze('pricing-test', {
metrics: ['conversion_rate', 'revenue_per_user'],
method: 'bayesian', // or 'frequentist'
})
console.log(analysis)
// {
// experiment: 'pricing-test',
// status: 'conclusive',
// winner: 'treatment',
// confidence: 0.97,
// lift: {
// conversion_rate: 0.15, // 15% improvement
// revenue_per_user: 0.23, // 23% improvement
// },
// recommendation: 'graduate-winner',
// }
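For the frequentist method, a conversion-rate comparison reduces to a two-proportion z-test. The sketch below (illustrative, not a platform API) computes the z statistic from raw conversion counts; |z| > 1.96 corresponds to significance at the 5% level, two-sided:

```typescript
// Illustrative frequentist check: two-proportion z-test comparing
// control vs. treatment conversion counts (not a platform API).
function twoProportionZ(
  controlConv: number, controlN: number,
  treatmentConv: number, treatmentN: number,
): number {
  const p1 = controlConv / controlN
  const p2 = treatmentConv / treatmentN
  // Pooled rate under the null hypothesis that both variants convert equally.
  const pooled = (controlConv + treatmentConv) / (controlN + treatmentN)
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / controlN + 1 / treatmentN))
  return (p2 - p1) / se
}

// 500/10,000 control conversions vs. 600/10,000 treatment conversions:
const z = twoProportionZ(500, 10000, 600, 10000)
// z is well above 1.96, so this difference is significant at the 5% level
```

Bayesian analysis instead reports the probability that each variant is best, which avoids the fixed significance threshold but requires a prior.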
// Auto-graduate if confident
if (analysis.confidence > 0.95 && analysis.lift.conversion_rate > 0.1) {
await $.Experiment.graduate('pricing-test', analysis.winner)
}
Real-Time Dashboards
Monitor experiments in real-time:
// Create experiment dashboard
const dashboard = $.dashboard({
experiment: 'pricing-test',
widgets: [
{
type: 'metric',
metric: 'conversion_rate',
breakdown: 'variant',
},
{
type: 'chart',
metric: 'daily_conversions',
groupBy: 'variant',
},
{
type: 'significance',
metric: 'conversion_rate',
confidence: true,
},
],
refresh: '1 minute',
})
// Real-time updates via websocket
on($.Experiment.metric, async (event, $) => {
await send($.Dashboard.update, {
dashboard: dashboard.id,
metric: event.metric,
variant: event.variant,
value: event.value,
})
})
Best Practices
Do's
- Test one thing at a time - Isolate variables for clear causality
- Define success metrics upfront - Know what you're measuring
- Calculate sample size - Ensure statistical power
- Run long enough - Account for day-of-week effects
- Track everything - Comprehensive event tracking
- Document hypotheses - Record why you're testing
- Learn from losers - Negative results are valuable
Don'ts
- Don't peek early - Wait for statistical significance
- Don't ignore seasonality - Account for temporal effects
- Don't test too much - Avoid experiment fatigue
- Don't forget mobile - Test across devices
- Don't ship without testing - Validate before launch
- Don't stop learning - Continuous experimentation
- Don't ignore small wins - Compound improvements matter
CLI Tools
# Create experiment
do experiment create --name "pricing-test" --variants control,treatment
# Start experiment
do experiment start pricing-test
# Check status
do experiment status pricing-test
# Analyze results
do experiment analyze pricing-test --metric conversion_rate
# Graduate winner
do experiment graduate pricing-test --variant treatment
# List active experiments
do experiment list --status active
Next Steps
- Iterate → Iterate based on experiment results
- Launch → Launch winning variants
- Analytics → Deep dive into analytics
Experiment Tip: Ship experiments, not features. Every change is a hypothesis to be validated.