Configuration

Environment variables

Olaf picks backends and models from environment variables. Typical examples:

llama.cpp (primary)

export LLAMACPP_ENDPOINT="http://localhost:8080"
export LLAMACPP_MODEL="your-model-name"
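
To confirm these values point at a running server, a quick check helps. The sketch below is illustrative only (not part of Olaf) and assumes llama.cpp's bundled HTTP server, which exposes a GET /health route:

import os
import urllib.request

# Read the endpoint configured above; fall back to the common local default.
endpoint = os.environ.get("LLAMACPP_ENDPOINT", "http://localhost:8080")

# llama.cpp's server answers GET /health with 200 once the model is loaded.
with urllib.request.urlopen(f"{endpoint}/health", timeout=5) as resp:
    print(resp.status, resp.read().decode())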

llama.cpp (embeddings)

export LLAMACPP_EMBED_ENDPOINT="http://localhost:8081"
export LLAMACPP_EMBED_MODEL="your-embed-model"

Ollama

export OLLAMA_ENDPOINT="http://localhost:11434"
export OLLAMA_MODEL="your-model-name"
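
As an illustration (again, not part of Olaf), Ollama's documented GET /api/tags route lists locally installed models, which makes it easy to verify that OLLAMA_MODEL is actually available:

import json
import os
import urllib.request

endpoint = os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434")
model = os.environ.get("OLLAMA_MODEL", "")

# GET /api/tags returns {"models": [{"name": "llama3:latest", ...}, ...]}.
with urllib.request.urlopen(f"{endpoint}/api/tags", timeout=5) as resp:
    names = [m["name"] for m in json.load(resp).get("models", [])]

print("installed models:", names)
# Names carry a tag suffix (e.g. ":latest"), so compare loosely.
if model and not any(n == model or n.split(":")[0] == model for n in names):
    print(f"warning: {model} not found; try `ollama pull {model}`")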

External OpenAI-compatible API

export OPENAI_ENDPOINT="https://api.openai.com/v1"
export OPENAI_MODEL="gpt-4"
export OPENAI_API_KEY="your-api-key"
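
A quick way to verify these three values together is to point the standard OpenAI Python client at them. This is only a sanity check, not how Olaf itself talks to the API:

import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url=os.environ["OPENAI_ENDPOINT"],
    api_key=os.environ["OPENAI_API_KEY"],
)

# Any OpenAI-compatible server should answer a model listing and a tiny completion.
print([m.id for m in client.models.list()])
reply = client.chat.completions.create(
    model=os.environ.get("OPENAI_MODEL", "gpt-4"),
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(reply.choices[0].message.content)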

Observability (optional)

export PROMETHEUS_ENDPOINT="http://localhost:9090"
export GRAFANA_ENDPOINT="http://localhost:3000"
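
Both services expose health endpoints (Prometheus at /-/healthy, Grafana at /api/health), so a small reachability check like the following can confirm the values; it is illustrative only:

import os
import urllib.request

checks = {
    "prometheus": (os.environ.get("PROMETHEUS_ENDPOINT", "http://localhost:9090"), "/-/healthy"),
    "grafana": (os.environ.get("GRAFANA_ENDPOINT", "http://localhost:3000"), "/api/health"),
}

for name, (endpoint, path) in checks.items():
    try:
        with urllib.request.urlopen(endpoint + path, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}")
    except OSError as exc:
        print(f"{name}: unreachable ({exc})")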

Typical defaults include http://localhost:8080 for the endpoint and a bundled model name; confirm the exact values with your deployment notes or your administrator.
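
As a rough sketch of how such variables are typically resolved (the function name and fallback values here are hypothetical, not Olaf's actual defaults):

import os

def resolve_backend(endpoint_var: str, model_var: str,
                    default_endpoint: str, default_model: str) -> tuple[str, str]:
    # Environment values win; otherwise fall back to deployment defaults.
    return (os.environ.get(endpoint_var, default_endpoint),
            os.environ.get(model_var, default_model))

# Example: the llama.cpp primary backend with the typical local default.
endpoint, model = resolve_backend("LLAMACPP_ENDPOINT", "LLAMACPP_MODEL",
                                  "http://localhost:8080", "bundled-model")
print(endpoint, model)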

YAML tool configuration (config.yaml)

Olaf loads an optional YAML file from config.yaml in the working directory unless your binary or launcher overrides the path.

Structure:

tools:
  some_tool_name:
    enabled: true
    commands:
      - name: example
        command: /usr/bin/example --flag

If the file is missing, the configuration defaults to empty maps and tools fall back to whatever other checks their callers implement.
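
A minimal sketch of a loader matching that behaviour, assuming PyYAML (Olaf's actual loader may differ):

import os
import yaml  # pip install pyyaml

def load_tool_config(path: str = "config.yaml") -> dict:
    # A missing file falls back to empty maps, as described above.
    if not os.path.exists(path):
        return {"tools": {}}
    with open(path, encoding="utf-8") as fh:
        data = yaml.safe_load(fh) or {}
    data.setdefault("tools", {})
    return data

config = load_tool_config()
for name, tool in config["tools"].items():
    if tool.get("enabled"):
        for cmd in tool.get("commands", []):
            print(f"{name}: {cmd['name']} -> {cmd['command']}")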