
TaleKeeper

Record, transcribe, and summarize D&D sessions — entirely offline.

TaleKeeper captures audio from your game table, transcribes speech on-device using Whisper, identifies who said what via speaker diarization, and generates narrative session summaries and character POV recaps with a local LLM. It is fully self-hosted, with a Python/FastAPI backend and a Svelte 5 frontend, and requires no cloud services.
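The pipeline above can be sketched as a sequence of stages. Every function name below is a hypothetical stand-in to illustrate the data flow, not TaleKeeper's actual internal API:

```python
# Conceptual sketch of the capture -> transcribe -> diarize -> summarize
# pipeline. All names and return shapes are illustrative assumptions.

def transcribe(audio_path: str) -> list[dict]:
    """Whisper-style stage: audio in, timed text segments out (stubbed)."""
    return [
        {"start": 0.0, "end": 2.1, "text": "Roll for initiative!"},
        {"start": 2.1, "end": 4.0, "text": "I grab my sword."},
    ]

def diarize(segments: list[dict]) -> list[dict]:
    """Diarization stage: attach a speaker label to each segment (stubbed)."""
    return [{**seg, "speaker": f"SPEAKER_{i % 2}"} for i, seg in enumerate(segments)]

def summarize_prompt(segments: list[dict]) -> str:
    """LLM recap stage: turn the labeled transcript into a summary prompt."""
    transcript = "\n".join(f'{s["speaker"]}: {s["text"]}' for s in segments)
    return f"Write a narrative recap of this RPG session:\n{transcript}"

labeled = diarize(transcribe("session.webm"))
print(summarize_prompt(labeled))
```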

Prerequisites

  • Python 3.11+
  • Node.js 18+ (for building the frontend)
  • ffmpeg — required by pydub for audio conversion
    brew install ffmpeg
    
  • Pango — required by WeasyPrint for PDF export
    brew install pango
    
  • Ollama (optional) — for AI-powered session summaries and image generation. Install the official macOS app (not Homebrew — the brew formula lacks MLX support needed for image generation).
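A quick way to confirm the native dependencies are on your PATH (tool names as listed above; this prints a status line per tool rather than failing):

```shell
for tool in python3 node ffmpeg; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```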

Quick Start

1. Clone and install

git clone <repo-url> && cd TaleKeeper

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install all dependencies (backend + frontend)
make install

2. Build and run

make serve

This builds the frontend, then starts the server at http://127.0.0.1:8000 and opens your browser.

Run make help to see all available targets.

3. (Optional) Set up Ollama for summaries

ollama serve            # start the Ollama server
ollama pull llama3.1:8b # download a model

TaleKeeper works without Ollama — recording, transcription, and diarization all function independently. Ollama is only needed for generating session summaries.
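As a sketch of what a summary request to Ollama looks like, using its `/api/generate` endpoint (the prompt wording is illustrative; TaleKeeper's actual prompts may differ):

```python
import json
import urllib.request

def build_summary_request(transcript: str, model: str = "llama3.1:8b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": f"Write a narrative recap of this RPG session:\n\n{transcript}",
        "stream": False,  # return one complete response instead of a stream
    }

def request_summary(transcript: str, host: str = "http://localhost:11434") -> str:
    """POST the request to a running Ollama server and return its response text."""
    body = json.dumps(build_summary_request(transcript)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

`request_summary` needs `ollama serve` running locally; `build_summary_request` just constructs the payload.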

Docker Compose (alternative)

If you prefer not to install Python, Node.js, ffmpeg, and Ollama manually, you can run everything with Docker Compose:

docker compose up -d --build

This starts two services:

  • talekeeper — the app at http://localhost:8000
  • ollama — the LLM backend at http://localhost:11434

Pull a model for summaries:

docker compose exec ollama ollama pull llama3.1:8b

Data is persisted via bind-mounts (./data/db and ./data/audio) and a named volume for Ollama models, so nothing is lost when containers restart.
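A compose file matching that layout might look like the following (service names, build context, and mount paths are assumptions; the repository's own docker-compose.yml is authoritative):

```yaml
services:
  talekeeper:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./data/db:/app/data/db       # SQLite database
      - ./data/audio:/app/data/audio # recorded audio
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-models:/root/.ollama  # named volume for pulled models

volumes:
  ollama-models:
```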

Image Generation

TaleKeeper can generate scene illustrations from your session content using any OpenAI-compatible image generation API. No image service is bundled — you bring your own provider.

Supported providers

Any API that implements the OpenAI /v1/images/generations endpoint works, including:

  • Ollama (local, with models like x/flux2-klein:9b)
  • OpenAI (DALL-E)
  • Self-hosted (ComfyUI, Stable Diffusion WebUI with --api flag, etc.)

Configuration

Configure the image provider in the Settings page:

  • Base URL — e.g., http://localhost:11434/v1 for Ollama or https://api.openai.com/v1 for OpenAI
  • API Key — if required by your provider
  • Model — the model name your provider expects

These can also be set via the IMAGE_API_KEY and IMAGE_MODEL environment variables.

Development

Run the backend and frontend dev servers in separate terminals:

# Terminal 1 — backend with auto-reload
make dev

# Terminal 2 — frontend with hot-reload
cd frontend && npm run dev

The Vite dev server (port 5173) proxies /api and /ws requests to the FastAPI backend (port 8000).

Testing

make test           # run all tests (backend + frontend)
make test-backend   # backend only (pytest)
make test-frontend  # frontend only (vitest)
make coverage       # backend tests with coverage
make check          # frontend type checking (svelte-check)

CLI Options

talekeeper serve [OPTIONS]

  --host TEXT       Host to bind to (default: 127.0.0.1)
  --port INT        Port to bind to (default: 8000)
  --reload          Enable auto-reload for development
  --no-browser      Don't open browser on startup

Project Structure

src/talekeeper/
  app.py              FastAPI application
  cli.py              CLI entry point
  db/                 SQLite schema and async connection management
  routers/            API endpoints (campaigns, sessions, recording, etc.)
  services/           ML pipelines (transcription, diarization, summarization)
  static/             Compiled frontend assets (generated by npm run build)

frontend/
  src/
    components/       Svelte UI components
    routes/           Page-level components
    lib/              API client, router utilities

Data Storage

All data lives in the data/ directory relative to where you run the server:

data/
  db/talekeeper.db      SQLite database (campaigns, transcripts, summaries)
  audio/<campaign-id>/  Recorded audio files (.webm)
  images/<session-id>/  Generated scene illustrations (.png)

Back up this folder to preserve your recordings and transcripts.
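For example, a dated compressed backup (the filename convention is just a suggestion; stop the server first so the SQLite database is not mid-write):

```shell
mkdir -p data    # no-op if the directory already exists
tar czf "talekeeper-backup-$(date +%F).tar.gz" data/
```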

Documentation

TaleKeeper uses Zensical for its documentation site.

# Install docs dependencies (one-time)
venv/bin/pip install -e ".[docs]"

# Build and serve docs locally
make docs

This builds the docs and serves them at http://127.0.0.1:8080. Use make docs-build to build without serving.

Hardware Notes

  • Targets Apple Silicon Macs (M1/M2+) with Metal acceleration for ML workloads
  • 16 GB RAM recommended for running Whisper + diarization during recording
  • Smaller Whisper models (tiny, base, small) work on 8 GB machines — configurable in Settings
