
TaleKeeper

Record, transcribe, and summarize D&D sessions — entirely offline.

TaleKeeper captures audio from your game table, transcribes speech on-device using Whisper, identifies who said what via speaker diarization, and generates narrative session recaps using a local LLM. No cloud services required.

Prerequisites

  • Python 3.11+
  • Node.js 18+ (for building the frontend)
  • ffmpeg — required by pydub for audio conversion
    brew install ffmpeg
    
  • Ollama (optional) — for AI-powered session summaries
    brew install ollama
    

Quick Start

1. Clone and install

git clone <repo-url> && cd TaleKeeper

# Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install the Python package in editable mode
pip install -e .

2. Build the frontend

cd frontend
npm install
npm run build
cd ..

This compiles the Svelte app into src/talekeeper/static/, which FastAPI serves automatically.

3. Run

talekeeper serve

This starts the server at http://127.0.0.1:8000 and opens your browser.

4. (Optional) Set up Ollama for summaries

ollama serve            # start the Ollama server
ollama pull llama3.1:8b # download a model

TaleKeeper works without Ollama — recording, transcription, and diarization all function independently. Ollama is only needed for generating session summaries.
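Because summaries are optional, the server only needs to probe Ollama before using it. A minimal sketch of such a reachability check (the helper name and fallback behavior are illustrative, not TaleKeeper's actual code; `/api/tags` is Ollama's standard model-listing endpoint):

```python
import urllib.request
import urllib.error

def ollama_available(base_url: str = "http://127.0.0.1:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url.

    Probes /api/tags, which lists locally pulled models.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: treat Ollama as unavailable
        return False
```

A check like this lets the UI hide or disable the "Generate summary" action instead of failing mid-request.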

Docker Compose (alternative)

If you prefer not to install Python, Node.js, ffmpeg, and Ollama manually, you can run everything with Docker Compose:

docker compose up -d --build

This starts two services:

  • talekeeper — the app at http://localhost:8000
  • ollama — the LLM backend at http://localhost:11434

Pull a model for summaries:

docker compose exec ollama ollama pull llama3.1:8b

Data is persisted via bind-mounts (./data/db and ./data/audio) and a named volume for Ollama models, so nothing is lost when containers restart.
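The persistence setup described above corresponds to a compose fragment along these lines (container paths and the volume name are assumptions for illustration; the repository's actual `docker-compose.yml` may differ):

```yaml
services:
  talekeeper:
    volumes:
      - ./data/db:/app/data/db        # SQLite database
      - ./data/audio:/app/data/audio  # recorded audio
  ollama:
    volumes:
      - ollama-models:/root/.ollama   # pulled models survive container restarts

volumes:
  ollama-models:
```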

Development

Run the backend and frontend dev servers in separate terminals:

# Terminal 1 — backend with auto-reload
talekeeper serve --reload --no-browser

# Terminal 2 — frontend with hot-reload
cd frontend && npm run dev

The Vite dev server (port 5173) proxies /api and /ws requests to the FastAPI backend (port 8000).
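That proxying is configured in Vite's `server.proxy` option; a sketch of what the relevant part of `frontend/vite.config.ts` looks like (the project's actual config may differ in detail):

```typescript
// vite.config.ts (sketch)
import { defineConfig } from 'vite';
import { svelte } from '@sveltejs/vite-plugin-svelte';

export default defineConfig({
  plugins: [svelte()],
  server: {
    proxy: {
      // REST calls go to the FastAPI backend
      '/api': 'http://127.0.0.1:8000',
      // WebSocket routes additionally need ws: true
      '/ws': { target: 'ws://127.0.0.1:8000', ws: true },
    },
  },
});
```

With this in place the frontend can use relative URLs like `/api/campaigns` in both dev and production builds.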

CLI Options

talekeeper serve [OPTIONS]

  --host TEXT       Host to bind to (default: 127.0.0.1)
  --port INT        Port to bind to (default: 8000)
  --reload          Enable auto-reload for development
  --no-browser      Don't open browser on startup

Project Structure

src/talekeeper/
  app.py              FastAPI application
  cli.py              CLI entry point
  db/                 SQLite schema and async connection management
  routers/            API endpoints (campaigns, sessions, recording, etc.)
  services/           ML pipelines (transcription, diarization, summarization)
  static/             Compiled frontend assets (generated by npm run build)

frontend/
  src/
    components/       Svelte UI components
    routes/           Page-level components
    lib/              API client, router utilities

Data Storage

All data lives in the data/ directory relative to where you run the server:

data/
  db/talekeeper.db    SQLite database (campaigns, transcripts, summaries)
  audio/<campaign-id>/ Recorded audio files (.webm)

Back up this folder to preserve your recordings and transcripts.
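Given this layout, locating a campaign's files is plain path composition. A hypothetical sketch (helper names and signatures are mine, not the project's):

```python
from pathlib import Path

# data/ is resolved relative to where the server runs
DATA_DIR = Path("data")

def audio_dir(campaign_id: str) -> Path:
    """Directory holding a campaign's .webm recordings."""
    return DATA_DIR / "audio" / campaign_id

def db_path() -> Path:
    """Location of the SQLite database."""
    return DATA_DIR / "db" / "talekeeper.db"
```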

Hardware Notes

  • Targets Apple Silicon Macs (M1/M2+) with Metal acceleration for ML workloads
  • 16 GB RAM recommended for running Whisper + diarization during recording
  • Smaller Whisper models (tiny, base, small) work on 8 GB machines — configurable in Settings
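The model-size tradeoff can be made concrete. A sketch of RAM-based model selection (the helper and its thresholds are rough rules of thumb for illustration; TaleKeeper exposes the choice as a Settings option rather than auto-detecting):

```python
def pick_whisper_model(ram_gb: float) -> str:
    """Suggest a Whisper model size for a given amount of system RAM.

    Larger models need several GB of headroom alongside
    diarization and the rest of the app, so these cutoffs
    are deliberately conservative.
    """
    if ram_gb >= 16:
        return "small"   # good accuracy, comfortable on 16 GB
    if ram_gb >= 8:
        return "base"    # workable on 8 GB machines
    return "tiny"        # minimal footprint
```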
