
Logging & Traceability

This page is the canonical guide for KalamDB runtime logging, JSON log output, Docker log collection, and distributed tracing.

Use it when you want to:

  • switch the backend to JSON logs
  • configure logging from server.toml
  • run with logging overrides from environment variables
  • ship logs through Docker or Dokploy
  • enable OTLP tracing for Jaeger or another collector

What KalamDB Supports Today

KalamDB currently supports two practical runtime log output modes:

  • compact: human-readable text logs written to server.log
  • json: JSON Lines logs written to server.jsonl

When log_to_console = true, the same events are also emitted to container stdout or your local terminal.

For traceability beyond logs, KalamDB also supports OTLP trace export under [logging.otlp].

JSON Logging Syntax

KalamDB JSON logs are emitted as JSON Lines.

  • Each log event is one JSON object
  • Each object is written on its own line
  • The log file is server.jsonl
  • This format is the one read by system.server_logs

An example log line:

```json
{"timestamp":"2026-05-13T04:39:33.428337Z","level":"INFO","message":"KalamDB Server v0.5.0-beta.1 | Build: 2026-05-13 03:42:16 UTC","log.target":"kalamdb_server","log.module_path":"kalamdb_server","log.file":"backend/src/main.rs","log.line":340,"target":"kalamdb_server","threadName":"main"}
```

Common fields you should expect:

| Field | Meaning |
| --- | --- |
| `timestamp` | UTC event timestamp in RFC 3339 format |
| `level` | Log severity such as `INFO`, `WARN`, `ERROR`, `DEBUG`, `TRACE` |
| `message` | Event message text |
| `target` | Rust tracing target for the event |
| `threadName` | Thread name that emitted the event |
| `log.file` | Source file when available |
| `log.line` | Source line when available |
| `log.module_path` | Rust module path when available |

Depending on the event and span context, extra tracing metadata can also appear.
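Because each event is a standalone JSON object, the log file is trivial to process with standard tooling. As a sketch, the helper below tallies events by severity level; the `./logs/server.jsonl` path in the usage comment is an assumption based on the default `logs_path`, not a fixed location:

```python
import json
from collections import Counter

def count_levels(lines):
    """Tally KalamDB JSON Lines log events by severity level."""
    levels = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines defensively
        event = json.loads(line)  # one JSON object per line
        levels[event.get("level", "UNKNOWN")] += 1
    return levels

# Typical usage against the default JSON log file (path assumes
# logs_path = "./logs" and format = "json" in server.toml):
#
# with open("./logs/server.jsonl", encoding="utf-8") as f:
#     print(count_levels(f))
```

The same pattern extends to filtering on any field listed above, such as `target` or `log.file`.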

Important Limitation: No Custom JSON Schema Yet

KalamDB does not currently support custom JSON log templates.

That means:

  • you can choose compact or json
  • you can choose the log level and destination
  • you can export OTLP traces
  • you cannot rename JSON keys or define a custom JSON logging pattern yet

If you need a different schema for downstream systems, transform the JSON lines in your log pipeline after KalamDB emits them.
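A downstream transform step can be very small. The sketch below renames a few keys per line and passes everything else through unchanged; the target key names (`@timestamp`, `severity`, `msg`) are illustrative assumptions, not a KalamDB contract:

```python
import json

# Map KalamDB JSON log keys to a downstream schema.
# The target key names here are illustrative assumptions.
KEY_MAP = {
    "timestamp": "@timestamp",
    "level": "severity",
    "message": "msg",
}

def remap_event(raw_line):
    """Rename known keys in one JSON log line; unknown keys pass through."""
    event = json.loads(raw_line)
    return json.dumps({KEY_MAP.get(k, k): v for k, v in event.items()})

# Typical usage as a pipeline filter reading JSON Lines on stdin, e.g.:
#   docker logs kalamdb | python remap_logs.py
#
# import sys
# for line in sys.stdin:
#     line = line.strip()
#     if line:
#         print(remap_event(line))
```

The same idea applies whether the transform runs as a sidecar script, a Fluent Bit/Logstash filter, or a collector processor.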

Configure Logging In server.toml

Use this when you want structured logs on disk and on stdout:

```toml
[logging]
level = "info"
format = "json"
logs_path = "./logs"
log_to_console = true
slow_query_threshold_ms = 1200

[logging.targets]
datafusion = "warn"
arrow = "warn"
parquet = "warn"
```

With that config:

  • file logs go to ./logs/server.jsonl
  • console logs also use the same event stream
  • system.server_logs can read the JSON log file

If you use format = "compact", the server writes server.log instead.

Configure Logging With Environment Variables

Environment variables override server.toml at startup.

JSON Logging From Env

```bash
KALAMDB_LOG_LEVEL=info \
KALAMDB_LOG_FORMAT=json \
KALAMDB_LOGS_DIR=./logs \
KALAMDB_LOG_TO_CONSOLE=true \
cargo run --manifest-path backend/Cargo.toml --bin kalamdb-server
```

Logging Environment Variables

| Environment variable | Maps to | Notes |
| --- | --- | --- |
| `KALAMDB_LOG_LEVEL` | `logging.level` | `error`, `warn`, `info`, `debug`, `trace` |
| `KALAMDB_LOG_FORMAT` | `logging.format` | Use `json` for JSON Lines, `compact` for text |
| `KALAMDB_LOGS_DIR` | `logging.logs_path` | Directory for `server.log` or `server.jsonl` |
| `KALAMDB_LOG_TO_CONSOLE` | `logging.log_to_console` | `true`, `1`, `yes` enable console emission |
| `KALAMDB_SLOW_QUERY_THRESHOLD_MS` | `logging.slow_query_threshold_ms` | Slow query threshold in milliseconds |
| `KALAMDB_OTLP_ENABLED` | `logging.otlp.enabled` | Enables trace export |
| `KALAMDB_OTLP_ENDPOINT` | `logging.otlp.endpoint` | OTLP collector endpoint |
| `KALAMDB_OTLP_PROTOCOL` | `logging.otlp.protocol` | `grpc` or `http` |
| `KALAMDB_OTLP_SERVICE_NAME` | `logging.otlp.service_name` | Service name shown in tracing backends |
| `KALAMDB_OTLP_TIMEOUT_MS` | `logging.otlp.timeout_ms` | OTLP export timeout |

Current limitation:

  • per-target overrides under [logging.targets] are currently configured in server.toml, not through dedicated environment variables

Docker Logging

For Docker deployments, the simplest pattern is:

  1. emit JSON logs to stdout
  2. keep log_to_console = true
  3. optionally persist the same logs under /data/logs

The single-node Docker config shipped with KalamDB already does this:

```toml
[logging]
level = "info"
logs_path = "/data/logs"
log_to_console = true
format = "json"
```

docker run Example

```bash
docker run --rm \
  -p 8080:8080 \
  -v "$PWD/data:/data" \
  -e KALAMDB_LOG_LEVEL=info \
  -e KALAMDB_LOG_FORMAT=json \
  -e KALAMDB_LOG_TO_CONSOLE=true \
  -e KALAMDB_LOGS_DIR=/data/logs \
  jamals86/kalamdb:latest
```

Docker Compose Example

```yaml
services:
  kalamdb:
    image: jamals86/kalamdb:latest
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data
    environment:
      KALAMDB_LOG_LEVEL: info
      KALAMDB_LOG_FORMAT: json
      KALAMDB_LOG_TO_CONSOLE: "true"
      KALAMDB_LOGS_DIR: /data/logs
```

Dokploy Compatibility

Dokploy's runtime log view is compatible with KalamDB JSON logs, displaying them as raw log lines.

What this means in practice:

  • if KalamDB writes JSON logs to stdout, Dokploy can display them
  • each KalamDB log event appears as one JSON line
  • this works best when KALAMDB_LOG_FORMAT=json and KALAMDB_LOG_TO_CONSOLE=true

What you should not assume today:

  • Dokploy does not document a KalamDB-specific schema for these fields
  • there is no evidence that Dokploy automatically maps fields like threadName, target, or log.line into structured columns for KalamDB logs

The safe assumption is:

  • Dokploy will show the JSON lines correctly
  • any deeper structured parsing depends on Dokploy-side features or downstream tooling, not on a KalamDB-specific format contract

Dokploy’s log UI is built around Docker log streaming, so stdout compatibility matters more than matching a custom syntax.

OTLP Tracing

For distributed tracing, configure [logging.otlp] in addition to normal logs.

```toml
[logging.otlp]
enabled = true
endpoint = "http://127.0.0.1:4317"
protocol = "grpc"
service_name = "kalamdb-server"
timeout_ms = 3000
```

Environment equivalent:

```bash
export KALAMDB_OTLP_ENABLED=true
export KALAMDB_OTLP_ENDPOINT="http://127.0.0.1:4317"
export KALAMDB_OTLP_PROTOCOL="grpc"
export KALAMDB_OTLP_SERVICE_NAME="kalamdb-server"
export KALAMDB_OTLP_TIMEOUT_MS=3000
```

Use OTLP when you want:

  • trace spans in Jaeger, Tempo, or an OpenTelemetry Collector
  • request correlation across services
  • timing and span visibility beyond plain log lines

Use JSON logging when you want:

  • structured application logs
  • Docker or Dokploy log shipping
  • queryable system.server_logs entries

Most production setups use both.

Query Logs From SQL

system.server_logs reads the JSON log file generated by the backend.

Example:

```sql
SELECT timestamp, level, target, message
FROM system.server_logs
ORDER BY timestamp DESC
LIMIT 50;
```

This view only works reliably when the backend log format is json.

For most production deployments:

  1. set format = "json"
  2. keep log_to_console = true
  3. write logs under a persistent directory such as /data/logs
  4. enable OTLP if you need request tracing in Jaeger or Tempo
  5. use Dokploy or Docker for raw log streaming, and a collector for deeper analytics if needed