
The Three Enriches

AI Butler’s YAML config isn’t a flat bag of knobs. It’s organized into three tiers with progressive disclosure: simple toggles at the top, power-user structure in the middle, developer tunables at the bottom. This pattern is called the Three Enriches.

Every configurable parameter belongs to exactly one tier. This is what makes the setup wizard, web UI, and AI-assisted configuration possible without drowning new users in choices.
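That invariant (one tier per parameter) is easy to enforce mechanically. A minimal sketch, assuming a tier-to-parameter schema; the parameter names and the `check_disjoint` helper below are illustrative, not AI Butler's actual internals:

```python
# Illustrative tier -> parameter mapping (not the real schema).
TIER_SCHEMAS = {
    "settings": {"language", "timezone", "model", "cost.strategy"},
    "configurations": {"models.primary", "channels.telegram.enabled"},
    "options": {"models.temperature", "database.cache_size_mb"},
}

def check_disjoint(schemas):
    """Raise ValueError if any parameter is declared in two tiers."""
    seen = {}
    for tier, params in schemas.items():
        for name in params:
            if name in seen:
                raise ValueError(
                    f"{name!r} declared in both {seen[name]} and {tier}")
            seen[name] = tier

check_disjoint(TIER_SCHEMAS)  # passes: each parameter has exactly one tier
```

Running a check like this at startup keeps the three UIs honest: a parameter can never appear in two disclosure levels at once.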

Tier            Audience     Exposed in
--------------  -----------  -------------------------------------------
Settings        Everyone     Setup wizard, web form, natural language
Configurations  Power users  Web form (advanced), YAML, natural language
Options         Developers   YAML only

Settings

“Everyday knobs.” The setup wizard shows only these: simple values like timezone, enabled channels, persona name, and cost strategy.

settings:
  language: en
  timezone: Europe/Madrid
  persona_name: "Butler"
  active_channels:
    - webchat
    - telegram
  model: claude
  agent_mode: swarm
  cost:
    strategy: balanced        # frugal | balanced | quality
    monthly_budget_usd: 25
  offline_mode: false
  telemetry_enabled: false

Configurations

“Power-user structure.” Used by people who know what they want, organized by subsystem: models, channels, memory, auth, plugins, and so on.

configurations:
  models:
    primary: claude-sonnet-4-6
    fallback: claude-haiku-4-5
  channels:
    telegram:
      enabled: true
      typing_indicators: true
      voice_response: auto
    slack:
      enabled: true
      typing_indicators: true
      voice_response: text
  web:
    port: 3377
    bind_address: 0.0.0.0
    max_upload_size_mb: 25
  auth:
    enabled: true
    require_totp: true
    session_timeout: 12h
  memory:
    providers: [local]
    import_sources: [claude, chatgpt]
    mcp: true
  iot:
    homeassistant:
      enabled: true
  plugins:
    plugin_dir: /data/plugins
    max_plugins: 50

Options

“Developer tunables.” Fine-grained internals such as database cache sizes, model sampling parameters, and embedding thresholds. The average user never touches these.

options:
  models:
    temperature: 0.7
    max_tokens: 4096
    timeout: 120s
    max_retries: 3
  database:
    cache_size_mb: 64
    mmap_size_mb: 256
    wal_autocheckpoint: 1000
  memory:
    embedding_model: all-MiniLM-L6-v2
    embedding_dimensions: 384
    similarity_threshold: 0.7
    max_thoughts: 100000
  plugins:
    max_memory_mb: 64
    max_execution_seconds: 30
  transaction:
    per_transaction_limit_usd: 5
    daily_limit_usd: 20

The same question has to be answered at multiple levels of detail. Take “which model should I use?”:

  • Settings: model: claude — pick a provider by name, done
  • Configurations: primary: claude-sonnet-4-6, fallback: claude-haiku-4-5 — specific models, fallback strategy
  • Options: temperature: 0.7, max_tokens: 4096, timeout: 120s — sampling and request tuning

Each tier is sufficient on its own. A user who only sets settings.model gets sensible defaults for everything else. A developer who needs to tune options.models.temperature still inherits the provider from settings.model.

At startup, values are resolved in this order:

  1. Built-in defaults
  2. options: section
  3. configurations: section (overrides options)
  4. settings: section (overrides configurations)
  5. Environment variables (AIBUTLER_*)
  6. Command-line flags

Later sources win. This means you can leave configurations.models.primary unset and let settings.model drive the whole model stack.
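The precedence above can be sketched as a deep dictionary merge where each later layer overrides the last. This is a minimal illustration, not AI Butler's actual resolver; the `deep_merge` and `resolve` names are hypothetical, and the environment-variable and CLI layers would simply merge in last the same way:

```python
def deep_merge(base, override):
    """Recursively merge override into base; override wins on conflicts."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

def resolve(defaults, options, configurations, settings):
    """Apply the startup precedence: each later source overrides the last."""
    merged = defaults
    for layer in (options, configurations, settings):
        merged = deep_merge(merged, layer)
    return merged

effective = resolve(
    defaults={"model": "claude", "cost": {"strategy": "balanced"}},
    options={"models": {"temperature": 0.7}},
    configurations={},                     # models.primary left unset...
    settings={"model": "claude"},          # ...so settings.model drives it
)
```

Because the merge is recursive, a layer only overrides the keys it actually sets: `options.models.temperature` survives even when `settings` replaces the top-level `model`.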

Plugins declare their own parameters using the same three tiers in their manifest:

name: weather-plugin
parameters:
  settings:
    default_location: "Madrid"
    units: metric
  configurations:
    providers: [openweathermap, meteostat]
    cache_ttl_minutes: 15
  options:
    http_timeout_seconds: 10
    retry_count: 3

The plugin management UI shows the right parameters at the right disclosure level — end users see default_location, developers see http_timeout_seconds.
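One way that disclosure filtering could work — a hypothetical sketch, not the plugin manager's real code — is to slice the manifest's parameters by the viewer's level, with the tier ordering from the table above:

```python
# Tiers in increasing disclosure level; a viewer at tier N sees tiers 0..N.
TIER_ORDER = ["settings", "configurations", "options"]

# The weather-plugin manifest example rendered as a Python dict.
MANIFEST = {
    "name": "weather-plugin",
    "parameters": {
        "settings": {"default_location": "Madrid", "units": "metric"},
        "configurations": {"providers": ["openweathermap", "meteostat"],
                           "cache_ttl_minutes": 15},
        "options": {"http_timeout_seconds": 10, "retry_count": 3},
    },
}

def visible_parameters(manifest, audience_tier):
    """Return all parameters at or below the viewer's disclosure level."""
    cutoff = TIER_ORDER.index(audience_tier)
    params = manifest["parameters"]
    return {tier: params[tier]
            for tier in TIER_ORDER[:cutoff + 1] if tier in params}

end_user_view = visible_parameters(MANIFEST, "settings")   # settings only
developer_view = visible_parameters(MANIFEST, "options")   # all three tiers
```

Because plugins reuse the same tier names as the core config, this one function covers both built-in and plugin parameters.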