# Logging & Traceability
This page is the canonical guide for KalamDB runtime logging, JSON log output, Docker log collection, and distributed tracing.
Use it when you want to:

- switch the backend to JSON logs
- configure logging from `server.toml`
- run with logging overrides from environment variables
- ship logs through Docker or Dokploy
- enable OTLP tracing for Jaeger or another collector
## What KalamDB Supports Today
KalamDB currently supports two practical runtime log output modes:
- `compact`: human-readable text logs written to `server.log`
- `json`: JSON Lines logs written to `server.jsonl`

When `log_to_console = true`, the same events are also emitted to container stdout or your local terminal.

For traceability beyond logs, KalamDB also supports OTLP trace export under `[logging.otlp]`.
## JSON Logging Syntax
KalamDB JSON logs are emitted as JSON Lines:

- Each log event is one JSON object
- Each object is written on its own line
- The log file is `server.jsonl`
- This format is the one read by `system.server_logs`
Example real log line:

```json
{"timestamp":"2026-05-13T04:39:33.428337Z","level":"INFO","message":"KalamDB Server v0.5.0-beta.1 | Build: 2026-05-13 03:42:16 UTC","log.target":"kalamdb_server","log.module_path":"kalamdb_server","log.file":"backend/src/main.rs","log.line":340,"target":"kalamdb_server","threadName":"main"}
```

Common fields you should expect:
| Field | Meaning |
|---|---|
| `timestamp` | UTC event timestamp in RFC 3339 format |
| `level` | Log severity such as INFO, WARN, ERROR, DEBUG, TRACE |
| `message` | Event message text |
| `target` | Rust tracing target for the event |
| `threadName` | Thread name that emitted the event |
| `log.file` | Source file when available |
| `log.line` | Source line when available |
| `log.module_path` | Rust module path when available |
Depending on the event and span context, extra tracing metadata can also appear.
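Because each event is one JSON object per line, the file can be consumed with a plain line-by-line JSON parse. A minimal Python sketch, using a sample event shaped like the documented output above (the shortened message is illustrative):

```python
import json

def parse_log_line(line: str) -> dict:
    """Parse one JSON Lines log event into a dict of fields."""
    return json.loads(line)

# Sample event shaped like the documented output above
sample = ('{"timestamp":"2026-05-13T04:39:33.428337Z","level":"INFO",'
          '"message":"KalamDB Server v0.5.0-beta.1","target":"kalamdb_server",'
          '"threadName":"main","log.line":340}')

event = parse_log_line(sample)
print(event["level"], event["message"])
```

The same loop works for tailing `server.jsonl`: read a line, parse it, inspect the fields listed in the table.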
## Important Limitation: No Custom JSON Schema Yet
KalamDB does not currently support custom JSON log templates.
That means:

- you can choose `compact` or `json`
- you can choose the log level and destination
- you can export OTLP traces
- you cannot rename JSON keys or define a custom JSON logging pattern yet
If you need a different schema for downstream systems, transform the JSON lines in your log pipeline after KalamDB emits them.
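As an example of such a pipeline step, a hypothetical Python filter could rename keys before forwarding; the old-to-new key mapping here is purely illustrative, not a KalamDB or Dokploy convention:

```python
import json

# Hypothetical mapping from KalamDB's keys to a downstream schema
KEY_MAP = {"timestamp": "@timestamp", "threadName": "thread", "message": "msg"}

def remap_event(line: str) -> str:
    """Rename known keys in one JSON log line; pass other keys through."""
    event = json.loads(line)
    out = {KEY_MAP.get(k, k): v for k, v in event.items()}
    return json.dumps(out)

remapped = remap_event('{"timestamp":"2026-05-13T04:39:33Z","level":"INFO","message":"started"}')
print(remapped)
```

The same idea applies to any log shipper that supports a transform stage; the point is that the reshaping happens after KalamDB emits the line, not inside KalamDB.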
## Configure Logging In `server.toml`
Use this when you want structured logs on disk and on stdout:
```toml
[logging]
level = "info"
format = "json"
logs_path = "./logs"
log_to_console = true
slow_query_threshold_ms = 1200

[logging.targets]
datafusion = "warn"
arrow = "warn"
parquet = "warn"
```

With that config:

- file logs go to `./logs/server.jsonl`
- console logs also use the same event stream
- `system.server_logs` can read the JSON log file
If you use `format = "compact"`, the server writes `server.log` instead.
## Configure Logging With Environment Variables
Environment variables override `server.toml` at startup.
### JSON Logging From Env
```bash
KALAMDB_LOG_LEVEL=info \
KALAMDB_LOG_FORMAT=json \
KALAMDB_LOGS_DIR=./logs \
KALAMDB_LOG_TO_CONSOLE=true \
cargo run --manifest-path backend/Cargo.toml --bin kalamdb-server
```

### Logging Environment Variables
| Environment variable | Maps to | Notes |
|---|---|---|
| `KALAMDB_LOG_LEVEL` | `logging.level` | `error`, `warn`, `info`, `debug`, `trace` |
| `KALAMDB_LOG_FORMAT` | `logging.format` | Use `json` for JSON Lines, `compact` for text |
| `KALAMDB_LOGS_DIR` | `logging.logs_path` | Directory for `server.log` or `server.jsonl` |
| `KALAMDB_LOG_TO_CONSOLE` | `logging.log_to_console` | `true`, `1`, `yes` enable console emission |
| `KALAMDB_SLOW_QUERY_THRESHOLD_MS` | `logging.slow_query_threshold_ms` | Slow query threshold in milliseconds |
| `KALAMDB_OTLP_ENABLED` | `logging.otlp.enabled` | Enables trace export |
| `KALAMDB_OTLP_ENDPOINT` | `logging.otlp.endpoint` | OTLP collector endpoint |
| `KALAMDB_OTLP_PROTOCOL` | `logging.otlp.protocol` | `grpc` or `http` |
| `KALAMDB_OTLP_SERVICE_NAME` | `logging.otlp.service_name` | Service name shown in tracing backends |
| `KALAMDB_OTLP_TIMEOUT_MS` | `logging.otlp.timeout_ms` | OTLP export timeout in milliseconds |
Current limitation:
- per-target overrides under `[logging.targets]` are currently configured in `server.toml`, not through dedicated environment variables
## Docker Logging
For Docker deployments, the simplest pattern is:
- emit JSON logs to stdout
- keep `log_to_console = true`
- optionally persist the same logs under `/data/logs`
The single-node Docker config shipped with KalamDB already does this:
```toml
[logging]
level = "info"
logs_path = "/data/logs"
log_to_console = true
format = "json"
```

### docker run Example
```bash
docker run --rm \
  -p 8080:8080 \
  -v "$PWD/data:/data" \
  -e KALAMDB_LOG_LEVEL=info \
  -e KALAMDB_LOG_FORMAT=json \
  -e KALAMDB_LOG_TO_CONSOLE=true \
  -e KALAMDB_LOGS_DIR=/data/logs \
  jamals86/kalamdb:latest
```

### Docker Compose Example
```yaml
services:
  kalamdb:
    image: jamals86/kalamdb:latest
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data
    environment:
      KALAMDB_LOG_LEVEL: info
      KALAMDB_LOG_FORMAT: json
      KALAMDB_LOG_TO_CONSOLE: "true"
      KALAMDB_LOGS_DIR: /data/logs
```

### Dokploy Compatibility
Dokploy runtime logs are compatible with KalamDB JSON logs as raw log lines.
What this means in practice:
- if KalamDB writes JSON logs to stdout, Dokploy can display them
- each KalamDB log event appears as one JSON line
- this works best when `KALAMDB_LOG_FORMAT=json` and `KALAMDB_LOG_TO_CONSOLE=true`
What you should not assume today:
- Dokploy does not document a KalamDB-specific schema for these fields
- there is no evidence that Dokploy automatically maps fields like `threadName`, `target`, or `log.line` into structured columns for KalamDB logs
The safe assumption is:
- Dokploy will show the JSON lines correctly
- any deeper structured parsing depends on Dokploy-side features or downstream tooling, not on a KalamDB-specific format contract
Dokploy’s log UI is built around Docker log streaming, so stdout compatibility matters more than matching a custom syntax.
## OTLP Tracing
For distributed tracing, configure [logging.otlp] in addition to normal logs.
```toml
[logging.otlp]
enabled = true
endpoint = "http://127.0.0.1:4317"
protocol = "grpc"
service_name = "kalamdb-server"
timeout_ms = 3000
```

Environment equivalent:
```bash
export KALAMDB_OTLP_ENABLED=true
export KALAMDB_OTLP_ENDPOINT="http://127.0.0.1:4317"
export KALAMDB_OTLP_PROTOCOL="grpc"
export KALAMDB_OTLP_SERVICE_NAME="kalamdb-server"
export KALAMDB_OTLP_TIMEOUT_MS=3000
```

Use OTLP when you want:
- trace spans in Jaeger, Tempo, or an OpenTelemetry Collector
- request correlation across services
- timing and span visibility beyond plain log lines
Use JSON logging when you want:
- structured application logs
- Docker or Dokploy log shipping
- queryable `system.server_logs` entries
Most production setups use both.
## Query Logs From SQL
`system.server_logs` reads the JSON log file generated by the backend.
Example:
```sql
SELECT timestamp, level, target, message
FROM system.server_logs
ORDER BY timestamp DESC
LIMIT 50;
```

This view only works reliably when the backend log format is `json`.
## Recommended Production Setup
For most production deployments:
- set `format = "json"`
- keep `log_to_console = true`
- write logs under a persistent directory such as `/data/logs`
- enable OTLP if you need request tracing in Jaeger or Tempo
- use Dokploy or Docker for raw log streaming, and a collector for deeper analytics if needed
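Putting those recommendations together, a production `server.toml` might look like the following sketch; the collector endpoint, service name, and slow-query threshold are illustrative values, not required defaults:

```toml
[logging]
level = "info"
format = "json"
logs_path = "/data/logs"
log_to_console = true
slow_query_threshold_ms = 1200

[logging.otlp]
enabled = true
endpoint = "http://otel-collector:4317"
protocol = "grpc"
service_name = "kalamdb-server"
timeout_ms = 3000
```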