A lightweight, local-first web UI for chatting with AI characters.
I built this because I wanted a fast, no-nonsense frontend for roleplaying with local LLMs (running via vLLM, Ollama, text-generation-webui, etc.), and a place to test some potential improvements of my own. It's built with Svelte 5 (using runes for state management) and runs on Bun.
Note: This is very much a WIP pet project. It works for my local setup, but expect rough edges. There is no auth or complex database layer, and I've only actually validated it against KoboldCPP.
- Connects to any OpenAI-compatible `/v1/chat/completions` endpoint. (Yes, I know it should support text completion.)
- Parses `<think>` tags for reasoning models. You can also toggle whether reasoning blocks are fed back into the prompt context (no idea if this is a good idea?).
- Characters are stored as plain `.json` files. (Yes, I should fix this.)

You'll need Bun installed.
```bash
# Clone the repo
git clone https://github.com/yourusername/BunnyTavern.git
cd BunnyTavern

# Install dependencies
bun install

# Start the dev server
bun run dev
```
The app will be available at http://localhost:5173.
Settings are currently managed in the UI and persisted to your browser's localStorage.
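A minimal sketch of how that persistence might work, assuming a simple JSON blob under a single key (the key name and the settings fields here are illustrative, not the actual code); the store is injected so the same logic also runs outside a browser:

```typescript
// Hypothetical settings shape -- field names are illustrative only.
interface Settings {
  apiUrl: string;
  temperature: number;
}

const DEFAULTS: Settings = {
  apiUrl: "http://127.0.0.1:5000/v1", // the README's default endpoint
  temperature: 0.7,
};

// Anything with getItem/setItem works: window.localStorage in the
// browser, or an in-memory shim in tests.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function loadSettings(store: KVStore): Settings {
  try {
    const raw = store.getItem("bunnytavern.settings");
    // Spread over defaults so newly added fields still get sane values.
    return { ...DEFAULTS, ...(raw ? JSON.parse(raw) : {}) };
  } catch {
    return { ...DEFAULTS };
  }
}

function saveSettings(store: KVStore, settings: Settings): void {
  store.setItem("bunnytavern.settings", JSON.stringify(settings));
}
```

In the browser you'd pass `window.localStorage` as the store; merging over defaults means a stale blob from an older version never leaves a field undefined.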
By default, the app points to http://127.0.0.1:5000/v1. You'll need to update the API URL in the settings menu to match your local inference server (e.g., http://127.0.0.1:8080/v1 for llama.cpp or http://127.0.0.1:11434/v1 for Ollama).
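For reference, a request/response round trip against such an endpoint might look roughly like the sketch below. The payload fields and the `<think>` handling mirror the features described earlier; none of this is the app's actual code, and the `"local"` model name is a placeholder (most local servers ignore or alias it):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build an OpenAI-style chat completion request for the configured base URL.
function buildRequest(apiUrl: string, messages: ChatMessage[]) {
  return {
    url: `${apiUrl.replace(/\/$/, "")}/chat/completions`,
    body: { model: "local", messages, stream: false },
  };
}

// Split a raw assistant reply into reasoning (from <think> tags) and the
// visible message, so the UI can render them separately.
function splitThink(raw: string): { reasoning: string; content: string } {
  const m = raw.match(/<think>([\s\S]*?)<\/think>/);
  return {
    reasoning: m ? m[1].trim() : "",
    content: raw.replace(/<think>[\s\S]*?<\/think>/, "").trim(),
  };
}
```

Whether the extracted `reasoning` is fed back into the next prompt's context is exactly the toggle mentioned in the features list.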
Since there's no external database, everything lives in a data/ folder at the root of the project.
- `data/characters/` - Character definitions.
- `data/chats/` - Chat histories, formatted as `{characterId}_{chatId}.json`.

If you're migrating characters from other platforms, you can just drop the JSON files into the `data/characters/` directory as long as they match the basic schema (`id`, `name`, `description`, `systemPrompt`, `firstMessage`). Yes, it is different from SillyTavern's format, but I didn't want to handle that right now. :P
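Based on the schema fields listed above, a character file and the chat-path convention might be sketched like this (the interface, the helper, and the example character are illustrative, not taken from the repo):

```typescript
// Basic character schema, per the fields listed above.
interface Character {
  id: string;
  name: string;
  description: string;
  systemPrompt: string;
  firstMessage: string;
}

// A minimal (made-up) character file that satisfies the schema.
const example: Character = {
  id: "bunny",
  name: "Bunny",
  description: "A cheerful test character.",
  systemPrompt: "You are Bunny, a cheerful assistant.",
  firstMessage: "Hi! Ready when you are.",
};

// Chat histories follow the {characterId}_{chatId}.json convention.
function chatFilePath(characterId: string, chatId: string): string {
  return `data/chats/${characterId}_${chatId}.json`;
}
```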
Built with:

- Svelte 5 (`$state` runes)
- Bun (`svelte-adapter-bun`)