TypeScript SDK Examples

The KalamDB repository currently ships three runnable TypeScript examples, plus a documented vector search workflow you can drive from the same SDK.

They live in the repository under:

  • examples/simple-typescript
  • examples/chat-with-ai
  • examples/summarizer-agent

Each runnable example has its own setup path and its own test.

1. Realtime Ops Feed

Repository path: examples/simple-typescript

What it demonstrates:

  • queryAll() for the initial SQL read
  • live() for live browser updates
  • browser auth with Auth.basic(...)
  • cross-tab realtime sync from one write

Why it matters:

This is the smallest browser example that still feels real. It avoids wrapper layers, keeps the full flow inside one React component, and shows the recommended pattern: one live query returns the current rows while USER-table isolation keeps each signed-in user inside their own data partition.
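The queryAll()-then-live() flow can be sketched in miniature. This is a hypothetical in-memory stand-in, not the real SDK: the actual client is asynchronous, takes SQL strings, and has its own types, while the row shape and function signatures here are illustrative only.

```typescript
// Hypothetical in-memory sketch of the queryAll()/live() pattern.
// The real KalamDB SDK is asynchronous and its API surface differs.
type OpsRow = { id: number; message: string };

const rows: OpsRow[] = [{ id: 1, message: "boot" }];
const listeners: Array<(rows: OpsRow[]) => void> = [];

// Stand-in for queryAll(): the initial read of current rows.
function queryAll(): OpsRow[] {
  return [...rows];
}

// Stand-in for live(): subscribe, starting from the current rows.
function live(onRows: (rows: OpsRow[]) => void): () => void {
  listeners.push(onRows);
  onRows(queryAll());
  return () => {
    const i = listeners.indexOf(onRows);
    if (i >= 0) listeners.splice(i, 1); // unsubscribe
  };
}

// One write notifies every subscriber -- the cross-tab sync the test checks.
function insert(row: OpsRow): void {
  rows.push(row);
  listeners.forEach((l) => l(queryAll()));
}

// Two "tabs" subscribe; a single insert updates both views.
let tabA: OpsRow[] = [];
let tabB: OpsRow[] = [];
live((r) => { tabA = r; });
live((r) => { tabB = r; });
insert({ id: 2, message: "deploy started" });
```

The point of the sketch is the shape of the pattern: one subscription delivers the current rows immediately, and every later write fans out to all open subscriptions.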

Run it:

cd examples/simple-typescript
npm install
npm run setup
npm run dev

Test it:

npm test

The Playwright test opens two tabs, inserts one row, and verifies both tabs update.

2. Chat With AI

Repository path: examples/chat-with-ai

What it demonstrates:

  • writing to a shared table from the browser
  • topic fan-out from ALTER TOPIC ... ADD SOURCE ...
  • background work with runAgent()
  • live browser updates when the worker writes the reply row

Why it matters:

This example shows the table → topic → worker → browser loop without needing a large app shell or external auth setup.
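The table → topic → worker → browser loop can be sketched with in-memory stand-ins. Everything here is illustrative: the table and topic are plain arrays, the worker step stands in for what runAgent() would do, and names like the row shape and the echoed reply are made up for the sketch.

```typescript
// Hypothetical sketch of the table -> topic -> worker -> browser loop.
// Arrays stand in for the shared table and the topic; the real wiring
// comes from ALTER TOPIC ... ADD SOURCE ... and runAgent().
type Message = { id: number; role: "user" | "assistant"; text: string };

const table: Message[] = [];   // shared table the browser writes to
const topic: Message[] = [];   // topic fed by the table's writes
const liveListeners: Array<(rows: Message[]) => void> = [];

// A table write fans out to the topic and to live subscribers.
function insert(row: Message): void {
  table.push(row);
  topic.push(row);
  liveListeners.forEach((l) => l([...table]));
}

// Worker role: consume one topic entry, write a reply row back.
function runWorkerOnce(): void {
  const msg = topic.shift();
  if (msg && msg.role === "user") {
    insert({ id: msg.id + 1, role: "assistant", text: `echo: ${msg.text}` });
  }
}

// Browser role: a live subscription sees both the user row and the reply.
let view: Message[] = [];
liveListeners.push((rows) => { view = rows; });
insert({ id: 1, role: "user", text: "hi" });
runWorkerOnce();
```

The same write that lands in the table reaches the topic, the worker's reply is just another table write, and the browser's live query picks up both without any extra plumbing.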

Run it:

cd examples/chat-with-ai
npm install
npm run setup
npm run agent
npm run dev

Test it:

npm test

The Playwright test starts the agent, opens two tabs, sends a message, and waits for the assistant reply in both tabs.

3. Summarizer Agent

Repository path: examples/summarizer-agent

What it demonstrates:

  • a worker-only runAgent() example
  • row enrichment back into the source table
  • a failure sink table for exhausted retries

Why it matters:

If you do not need a browser at all, this is the cleanest place to start. It shows how little code is required to build a useful background worker around KalamDB topics.
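The enrichment-or-failure-sink shape can be sketched as follows. The row shape, the retry limit, and the toy summarize() are all illustrative assumptions; the real example's retry policy and table names live in its own setup.

```typescript
// Hypothetical sketch of the summarizer pattern: enrich the source row,
// or move it to a failure sink once retries are exhausted.
// MAX_RETRIES and the row shapes are illustrative, not the example's config.
type Doc = { id: number; body: string; summary?: string };

const documents: Doc[] = [
  { id: 1, body: "a long article" },
  { id: 2, body: "" }, // will fail and land in the sink
];
const failures: Array<{ id: number; error: string }> = [];
const MAX_RETRIES = 3;

function summarize(body: string): string {
  if (body.length === 0) throw new Error("empty body");
  return body.slice(0, 10) + "…";
}

function process(doc: Doc): void {
  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
      doc.summary = summarize(doc.body); // enrichment written back to the source row
      return;
    } catch (err) {
      if (attempt === MAX_RETRIES) {
        failures.push({ id: doc.id, error: String(err) }); // failure sink table
      }
    }
  }
}

documents.forEach(process);
```

Successful rows get their summary column filled in place; rows that keep failing end up in the sink with the last error, so nothing is silently dropped.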

Run it:

cd examples/summarizer-agent
npm install
npm run setup
npm run start

Test it:

npm test

The integration test inserts a row and waits until the worker writes the summary back.

4. Vector Search Pattern

Repository reference: backend/tests/scenarios/scenario_14_vector_rag.rs

What it demonstrates:

  • EMBEDDING(n) columns for document and attachment vectors
  • ALTER TABLE ... CREATE INDEX ... USING COSINE
  • nearest-neighbor SQL with ORDER BY COSINE_DISTANCE(...) LIMIT k
  • joining vector hits back to FILE-backed document rows

Why it matters:

This is the current KalamDB pattern for semantic retrieval and RAG. You keep rich document rows in one table, keep embeddings in a keyed companion table, and run vector search with normal SQL. The same flow works with TYPE = 'USER', so each signed-in user only searches their own embeddings.

TypeScript query example:

const rows = await client.queryAll(`
  SELECT d.id, d.title, d.body
  FROM rag.documents AS d
  JOIN rag.documents_vectors AS v ON v.id = d.id
  ORDER BY COSINE_DISTANCE(v.doc_embedding, '[1.0,0.0,0.0]')
  LIMIT 5
`);

SQL setup example:

CREATE TABLE rag.documents_vectors (
  id BIGINT PRIMARY KEY,
  doc_embedding EMBEDDING(384)
) WITH (TYPE = 'USER');

ALTER TABLE rag.documents_vectors CREATE INDEX doc_embedding USING COSINE;
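For intuition about what the index accelerates: cosine distance is conventionally 1 minus cosine similarity. This sketch shows that math in plain TypeScript; KalamDB's server-side COSINE_DISTANCE may differ in edge-case handling (zero vectors, normalization).

```typescript
// Cosine distance as conventionally defined: 1 - (a.b / (|a| * |b|)).
// Illustrative only; the server-side COSINE_DISTANCE is the source of truth.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical direction -> distance 0; orthogonal -> distance 1.
const same = cosineDistance([1, 0, 0], [1, 0, 0]);
const orthogonal = cosineDistance([1, 0, 0], [0, 1, 0]);
```

Smaller distance means more similar, which is why the query above orders ascending and takes LIMIT 5.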
