Record, transcribe, and summarize D&D sessions — entirely offline.
TaleKeeper captures audio from your game table, transcribes speech on-device using Whisper, identifies who said what via speaker diarization, and generates narrative session recaps using a local LLM. No cloud services required.
```sh
brew install ffmpeg
brew install pango
```
```sh
git clone <repo-url> && cd TaleKeeper

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install all dependencies (backend + frontend)
make install
make serve
```
This builds the frontend, then starts the server at http://127.0.0.1:8000 and opens your browser.
Run `make help` to see all available targets.
```sh
ollama serve             # start the Ollama server
ollama pull llama3.1:8b  # download a model
```
TaleKeeper works without Ollama — recording, transcription, and diarization all function independently. Ollama is only needed for generating session summaries.
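Summary generation talks to Ollama's HTTP API. As an illustration (this is not TaleKeeper's actual code, and the prompt wording is invented), a request against Ollama's standard `/api/generate` endpoint can be built like this:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_summary_request(transcript: str, model: str = "llama3.1:8b") -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    The prompt here is a placeholder; TaleKeeper's real prompt differs.
    """
    return {
        "model": model,
        "prompt": (
            "Summarize the following D&D session transcript as a short "
            "narrative recap:\n\n" + transcript
        ),
        "stream": False,  # return one complete response instead of a stream
    }


def request_summary(transcript: str) -> str:
    """Send the request to a running Ollama server and return the text."""
    body = json.dumps(build_summary_request(transcript)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```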
If you prefer not to install Python, Node.js, ffmpeg, and Ollama manually, you can run everything with Docker Compose:
```sh
docker compose up -d --build
```
This starts two services:

- `talekeeper` at http://localhost:8000
- `ollama` at http://localhost:11434

Pull a model for summaries:
```sh
docker compose exec ollama ollama pull llama3.1:8b
```
Data is persisted via bind-mounts (`./data/db` and `./data/audio`) and a named volume for Ollama models, so nothing is lost when containers restart.
TaleKeeper can generate scene illustrations from your session content using any OpenAI-compatible image generation API. No image service is bundled — you bring your own provider.
Any API that implements the OpenAI `/v1/images/generations` endpoint works, including:

- Ollama (with an image-capable model such as `x/flux2-klein:9b`)
- Other local servers that expose an OpenAI-compatible API (`--api` flag, etc.)
- OpenAI itself

Configure the image provider in the Settings page:

- **Base URL**: `http://localhost:11434/v1` for Ollama or `https://api.openai.com/v1` for OpenAI
- **API key** and **model** for your provider

Environment variables `IMAGE_API_KEY` and `IMAGE_MODEL` can also be used.
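A request against such an endpoint can be sketched as follows. This is illustrative, not TaleKeeper's internals; the base URL is a parameter, and only the `IMAGE_API_KEY` and `IMAGE_MODEL` environment variables from above are assumed:

```python
import os


def build_image_request(prompt: str, base_url: str) -> tuple[str, dict, dict]:
    """Build the URL, headers, and JSON body for an OpenAI-compatible
    /v1/images/generations call."""
    url = base_url.rstrip("/") + "/images/generations"
    headers = {"Content-Type": "application/json"}
    api_key = os.environ.get("IMAGE_API_KEY")
    if api_key:
        # OpenAI-style bearer auth; local servers often ignore this
        headers["Authorization"] = f"Bearer {api_key}"
    body = {
        "model": os.environ.get("IMAGE_MODEL", "x/flux2-klein:9b"),
        "prompt": prompt,
        "n": 1,
        "size": "1024x1024",
        "response_format": "b64_json",  # image bytes returned inline as base64
    }
    return url, headers, body
```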
Run the backend and frontend dev servers in separate terminals:
```sh
# Terminal 1 — backend with auto-reload
make dev

# Terminal 2 — frontend with hot-reload
cd frontend && npm run dev
```
The Vite dev server (port 5173) proxies `/api` and `/ws` requests to the FastAPI backend (port 8000).
```sh
make test           # run all tests (backend + frontend)
make test-backend   # backend only (pytest)
make test-frontend  # frontend only (vitest)
make coverage       # backend tests with coverage
make check          # frontend type checking (svelte-check)
```
```
talekeeper serve [OPTIONS]

  --host TEXT    Host to bind to (default: 127.0.0.1)
  --port INT     Port to bind to (default: 8000)
  --reload       Enable auto-reload for development
  --no-browser   Don't open browser on startup
```
```
src/talekeeper/
  app.py       FastAPI application
  cli.py       CLI entry point
  db/          SQLite schema and async connection management
  routers/     API endpoints (campaigns, sessions, recording, etc.)
  services/    ML pipelines (transcription, diarization, summarization)
  static/      Compiled frontend assets (generated by npm run build)
frontend/
  src/
    components/  Svelte UI components
    routes/      Page-level components
    lib/         API client, router utilities
```
All data lives in the `data/` directory relative to where you run the server:

```
data/
  db/talekeeper.db      SQLite database (campaigns, transcripts, summaries)
  audio/<campaign-id>/  Recorded audio files (.webm)
  images/<session-id>/  Generated scene illustrations (.png)
```
Back up this folder to preserve your recordings and transcripts.
TaleKeeper uses Zensical for its documentation site.
```sh
# Install docs dependencies (one-time)
venv/bin/pip install -e ".[docs]"

# Build and serve docs locally
make docs
```
This builds the docs and serves them at http://127.0.0.1:8080. Use `make docs-build` to build without serving.