
Export User Data

KalamDB lets any authenticated user export a snapshot of all their own data as a downloadable .zip archive. The export captures every user-owned Parquet file (all rows the user has written), so you always have a portable, offline copy of your data.

Role requirement: Any authenticated user can export their own data. No elevated role is required.


EXPORT USER DATA

Trigger an asynchronous export job that flushes all user tables, copies the Parquet files, and bundles them into a ZIP archive.

EXPORT USER DATA;

The command returns immediately with a Job ID. The export runs in the background and can take up to a few minutes on large data sets (the executor first flushes all buffered writes to Parquet before copying).

Idempotency

At most one export job per user can exist for a given calendar day. If you run EXPORT USER DATA again on the same day, the second call returns the existing job ID rather than creating a duplicate job.
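
The calendar-day idempotency rule can be sketched as follows. This is an illustrative model of the observable behavior only, not KalamDB's actual server-side implementation; the function and variable names are invented:

```python
from datetime import date

def resolve_export_job(jobs_by_day: dict[date, str], today: date,
                       new_job_id: str) -> str:
    """Return the job ID the caller sees: reuse today's existing job if one
    was already started, otherwise register a new one."""
    if today in jobs_by_day:
        return jobs_by_day[today]      # second call same day: existing job ID
    jobs_by_day[today] = new_job_id    # first call today: create the job
    return new_job_id

jobs: dict[date, str] = {}
first = resolve_export_job(jobs, date(2026, 2, 25), "UE-abc123")
second = resolve_export_job(jobs, date(2026, 2, 25), "UE-def456")
# first == second == "UE-abc123": the duplicate call returns the existing job
```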

Response

User data export started. Job ID: UE-abc123. Use SHOW EXPORT to check status and get the download link.

Example

-- Export all your data
EXPORT USER DATA;

SHOW EXPORT

Check the status of your export jobs and, once complete, retrieve the download URL.

SHOW EXPORT;

Returns the most recent 20 export jobs for the calling user, sorted by creation time descending.
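
The selection rule (most recent 20, newest first) is equivalent to this small sketch; the job records here are placeholders:

```python
from datetime import datetime

def show_export(jobs: list[dict], limit: int = 20) -> list[dict]:
    """Most recent `limit` export jobs, sorted by creation time descending."""
    return sorted(jobs, key=lambda j: j["created_at"], reverse=True)[:limit]

sample = [
    {"job_id": "UE-a", "created_at": datetime(2026, 2, 24, 9, 0)},
    {"job_id": "UE-b", "created_at": datetime(2026, 2, 25, 13, 0)},
]
# show_export(sample)[0]["job_id"] == "UE-b" (newest job listed first)
```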

Result columns

| Column | Description |
|---|---|
| job_id | The export job ID (prefix UE-) |
| status | Job status: Queued, Running, Completed, Failed |
| created_at | When the export job was created |
| completed_at | When the job finished (null if not yet done) |
| download_url | Full URL to download the ZIP (populated only when status = Completed) |

Example

-- Check status of your latest export
SHOW EXPORT;

Sample output (completed)

| job_id | status | created_at | completed_at | download_url |
|---|---|---|---|---|
| UE-abc123 | Completed | 2026-02-25 13:00:00 | 2026-02-25 13:01:30 | http://localhost:8080/v1/exports/alice/export-alice-20260225-130000.zip |

Downloading the export

Once SHOW EXPORT shows Completed, use the download_url to fetch your ZIP with any HTTP client. Authenticate as the exporting user using a Bearer token.

Using curl

# 1. Get access token via Basic auth
TOKEN=$(curl -s -u alice:Password123 http://localhost:8080/v1/auth/token \
  | jq -r '.access_token')

# 2. Download the ZIP with Bearer auth
curl -H "Authorization: Bearer $TOKEN" \
  -o my_data.zip \
  "http://localhost:8080/v1/exports/alice/export-alice-20260225-130000.zip"
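
The same two-step flow can be sketched in Python with only the standard library. The endpoint paths and credentials are copied from the curl example above; the helper names are illustrative, and no network call is made here:

```python
import base64
import urllib.request

BASE = "http://localhost:8080"

def build_token_request(user: str, password: str) -> urllib.request.Request:
    """Step 1: request an access token via HTTP Basic auth."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{BASE}/v1/auth/token",
        headers={"Authorization": f"Basic {creds}"},
    )

def build_download_request(token: str, user: str,
                           filename: str) -> urllib.request.Request:
    """Step 2: download the ZIP with Bearer auth."""
    return urllib.request.Request(
        f"{BASE}/v1/exports/{user}/{filename}",
        headers={"Authorization": f"Bearer {token}"},
    )

# With a running server you would send these requests, e.g.:
#   resp = urllib.request.urlopen(build_token_request("alice", "Password123"))
# then parse the access_token field from the JSON body and reuse it for step 2.
```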

Access control

The download endpoint enforces strict ownership:

  • The authenticated user must match the {user_id} in the URL path.
  • Any other authenticated user receives 403 Forbidden.
  • Unauthenticated requests receive 401 Unauthorized.
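
The three rules above reduce to a simple decision function. The status codes come from the list above; the function itself is an illustrative sketch, not KalamDB's actual handler:

```python
from http import HTTPStatus
from typing import Optional

def authorize_download(authenticated_user: Optional[str],
                       path_user_id: str) -> HTTPStatus:
    """Decide the response for a GET on /v1/exports/{user_id}/..."""
    if authenticated_user is None:
        return HTTPStatus.UNAUTHORIZED   # 401: no credentials presented
    if authenticated_user != path_user_id:
        return HTTPStatus.FORBIDDEN      # 403: authenticated, but not the owner
    return HTTPStatus.OK                 # owner may download
```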

ZIP archive structure

The downloaded ZIP contains one Parquet file per table that belongs to the exporting user, laid out as:

export-<user_id>-<timestamp>.zip
└── <namespace>/<table_name>/
    ├── <parquet_segment_file>.parquet
    └── ...

Each Parquet file contains all rows written by the user for that table (after the flush-first pass ensures buffered writes are included).
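
The archive layout can be inspected with Python's standard zipfile module. This sketch builds a stand-in ZIP with the documented namespace/table/segment layout (all names are placeholders) and groups its members by table:

```python
import io
import zipfile

# Build a stand-in archive following the documented layout (placeholder names).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("app/events/segment-0001.parquet", b"")
    zf.writestr("app/events/segment-0002.parquet", b"")
    zf.writestr("app/users/segment-0001.parquet", b"")

def tables_in_export(zip_bytes: bytes) -> dict[tuple[str, str], list[str]]:
    """Map (namespace, table_name) -> list of Parquet member paths."""
    tables: dict[tuple[str, str], list[str]] = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            namespace, table, _segment = name.split("/", 2)
            tables.setdefault((namespace, table), []).append(name)
    return tables

layout = tables_in_export(buf.getvalue())
# layout has keys ("app", "events") and ("app", "users")
```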


Async job execution

Export jobs run as background UserExport jobs in the UnifiedJobManager. Monitor progress via SQL:

SELECT job_id, status, message
FROM system.jobs
WHERE job_type = 'UserExport'
ORDER BY created_at DESC
LIMIT 5;

| status value | Meaning |
|---|---|
| Queued | Export is waiting to start |
| Running | Flushing tables and copying Parquet files |
| Completed | ZIP file is ready to download |
| Failed | An error occurred; see the message column for details |

GDPR / data portability

EXPORT USER DATA is designed to support data portability requirements. The export includes a complete, machine-readable snapshot of all data the user has written, in the open Parquet columnar format — compatible with DuckDB, Apache Spark, pandas, and most data tools.
