Configuration

The Configuration page is where you set up everything the daemon needs to function at full capability. Models must be configured for embedding and (optionally) summarization before CI can provide semantic search and session summaries.

Open it from the dashboard sidebar, or navigate to `http://localhost:{port}/config`.

*Configuration page showing model setup and settings*

Models are the engine behind CI’s semantic search and session summaries. You need at least an embedding model for code search. A summarization model is optional but highly recommended for session summaries, titles, and memory extraction.

OAK supports any local provider with an OpenAI-compatible API:

| Provider | Default URL | Notes |
| --- | --- | --- |
| Ollama | `http://localhost:11434` | Most popular choice. Free, easy to set up. |
| LM Studio | `http://localhost:1234` | Desktop app with a visual model browser. |
| Custom | Any URL | Any OpenAI-compatible endpoint (vLLM, llama.cpp, etc.) |
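
You can verify that a provider is reachable before touching the UI. Here's a quick check against Ollama's default port (for LM Studio, swap in `http://localhost:1234`):

```sh
# List the models the provider exposes through its OpenAI-compatible API
curl http://localhost:11434/v1/models

# Ollama also reports installed models through its native API
curl http://localhost:11434/api/tags
```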

Setup steps:

  1. Select your provider or enter a custom URL
  2. The UI fetches available models from the provider
  3. Select a model
  4. Click Test & Detect — OAK auto-detects dimensions and context window
  5. Save the configuration

Embedding models convert code and text into vectors for semantic search. You need one running before CI can index your codebase.

Recommended models for Ollama:

| Model | Dimensions | Context | Size | Pull Command |
| --- | --- | --- | --- | --- |
| nomic-embed-text | 768 | 8K | ~270 MB | `ollama pull nomic-embed-text` |
| bge-m3 | 1024 | 8K | ~1.2 GB | `ollama pull bge-m3` |
| nomic-embed-code | 768 | 8K | ~270 MB | `ollama pull nomic-embed-code` |

For LM Studio: Search the Discover tab for nomic-embed-text-v1.5 or bge-m3 and download.
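
Once an embedding model is pulled, you can confirm it responds and see its dimensions for yourself. A minimal check against Ollama's OpenAI-compatible endpoint (uses `jq` to count the vector's elements):

```sh
# Request an embedding for a short string and count the vector length.
# For nomic-embed-text this should print 768.
curl -s http://localhost:11434/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "nomic-embed-text", "input": "hello world"}' \
  | jq '.data[0].embedding | length'
```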

For the full list of models OAK recognizes, run:

```sh
oak ci config --list-models
```

Summarization uses a general-purpose LLM (not an embedding model) to generate session summaries and titles and to extract memory observations. This is a separate configuration from embeddings — you can use different providers or models for each.

Recommended models for Ollama:

| Model | Resource Level | Context | Size | Pull Command |
| --- | --- | --- | --- | --- |
| gemma3:4b | Low (8 GB RAM) | 8K | ~3 GB | `ollama pull gemma3:4b` |
| gpt-oss:20b | Medium (16 GB RAM) | 32K | ~12 GB | `ollama pull gpt-oss:20b` |
| qwen3:8b | Medium (16 GB RAM) | 32K | ~5 GB | `ollama pull qwen3:8b` |
| qwen3-coder:30b | High (32+ GB RAM) | 32K | ~18 GB | `ollama pull qwen3-coder:30b` |

For LM Studio: Search the Discover tab for any of the models above and download. Make sure to start the local server with your chosen model loaded.
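
Before pointing OAK at a summarization model, you can smoke-test it with a small chat completion. A sketch against Ollama's OpenAI-compatible endpoint, assuming `gemma3:4b` is already pulled:

```sh
# Any coherent reply confirms the chat endpoint and model are working
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma3:4b",
    "messages": [
      {"role": "user", "content": "Summarize in one sentence: a daemon that indexes code for semantic search."}
    ]
  }' | jq -r '.choices[0].message.content'
```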

To see what summarization models are available from your provider:

```sh
oak ci config --list-sum-models
```

Local models often default to small context windows that limit summarization quality. The context window determines how much of a session the model can “see” at once.

- **Ollama**: Default `num_ctx` is typically 2048 tokens. For better summaries, increase it (the snippet after this list shows how to verify the change):

  ```sh
  # Create a Modelfile with a larger context window
  echo 'FROM gemma3:4b
  PARAMETER num_ctx 8192' > Modelfile
  ollama create gemma3-4b-8k -f Modelfile
  ```

- **LM Studio**: Check the model's context length in the UI settings and increase it if needed.
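
After creating a variant such as `gemma3-4b-8k` above, you can confirm the parameter took effect:

```sh
# Print the parameters baked into the custom model;
# the output should include num_ctx 8192
ollama show gemma3-4b-8k --parameters
```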

Larger context windows produce better summaries but use more memory. 8K is a good minimum for summarization; models like qwen3:8b support up to 32K natively.

You can also auto-detect and set the context window via CLI:

```sh
oak ci config --sum-context auto   # Auto-detect from provider
oak ci config --sum-context 8192   # Set explicitly
oak ci config --sum-context show   # Show current setting
```

Configure automatic backups and related policies from the Teams page or via the configuration file. See Teams — Automatic Backups for the full guide.

| Setting | Config Key | Default | Description |
| --- | --- | --- | --- |
| Automatic backups | `backup.auto_enabled` | `false` | Enable periodic automatic backups |
| Include activities | `backup.include_activities` | `true` | Include the activities table in backups |
| Backup interval | `backup.interval_minutes` | `30` | Minutes between automatic backups (5–1440) |
| Backup before upgrade | `backup.on_upgrade` | `true` | Create a backup before `oak upgrade` |

Backup settings are per-machine (stored in `.oak/config.{machine_id}.yaml`), except `on_upgrade`, which is project-level (stored in `.oak/config.yaml`).

```yaml
# Per-machine backup settings (in .oak/config.{machine_id}.yaml)
codebase_intelligence:
  backup:
    auto_enabled: true
    include_activities: true
    interval_minutes: 30
```

```yaml
# Project-level backup settings (in .oak/config.yaml)
codebase_intelligence:
  backup:
    on_upgrade: true
```

Control when background jobs process sessions:

| Setting | Description | Default |
| --- | --- | --- |
| `min_activities` | Minimum activity count before a session qualifies for background processing | Varies |
| `stale_timeout` | How long (in seconds) an inactive session sits before cleanup | Varies |

- A higher `min_activities` means only substantial sessions get summarized (reduces noise)
- A lower `stale_timeout` cleans up sessions faster (more responsive, but may cut off sessions that pause briefly)

Configure log rotation to manage disk usage:

| Setting | Description |
| --- | --- |
| Max file size | Maximum size of each log file before rotation |
| Backup count | Number of rotated log files to keep |

See the Logs page for details on viewing and filtering logs.

Control which directories are skipped during codebase indexing.

OAK includes sensible defaults out of the box:

- `.git`, `node_modules`, `__pycache__`, `.venv`, `venv`
- `dist`, `build`, `.next`, `.nuxt`
- And other common build/dependency directories

OAK also respects your project’s `.gitignore` — anything gitignored is automatically excluded from indexing.
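
To check whether a path is already covered by your `.gitignore` (and therefore skipped with no extra configuration), git itself can tell you:

```sh
# Prints the matching .gitignore rule if the path is excluded;
# exits non-zero if it is not ignored
git check-ignore -v node_modules
```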

Add your own patterns from the Configuration page. Patterns use glob syntax (e.g., `vendor/**`, `generated/**`). Custom exclusions are saved to the daemon’s configuration file.

If you’ve added patterns you no longer need, use the Reset to Defaults button to restore the built-in exclusion list.

The Test & Detect buttons on the Configuration page let you verify your provider setup:

- **Test connection** — Verifies the provider URL is reachable and the model exists
- **Auto-detect dimensions** — Sends a test embedding to determine the model’s vector dimensions
- **Discover context window** — Probes the model to find its maximum context length

This saves you from having to look up model specifications manually.
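
If you do want to check a model’s specifications yourself, Ollama can report them directly; for example:

```sh
# Prints model metadata; recent Ollama versions include
# context length and embedding length (the vector dimensions)
ollama show nomic-embed-text
```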