A modern, feature-rich web interface for local LLMs
Features • Quick Start • Documentation • Contributing
Vessel is intentionally small and focused.

If you want a universal, highly configurable platform → open-webui is a great choice.
If you want a small, focused UI for local LLM usage → Vessel is built for that.
📖 Full documentation on the Wiki →
- Clean chat interface
- Syntax-highlighted code
- Integrated web search
- Model browser
Ollama must listen on all interfaces so that Vessel's Docker container can connect:
```bash
# Option A: systemd (Linux)
sudo systemctl edit ollama
# Add under [Service]: Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# Option B: manual
OLLAMA_HOST=0.0.0.0 ollama serve
```
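To confirm Ollama is actually reachable on the new address, you can query its model-listing endpoint from another machine or from inside the container; `<your-host-ip>` below is a placeholder for your machine's LAN address:

```bash
# Should return a JSON list of installed models if Ollama is listening
curl http://<your-host-ip>:11434/api/tags
```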
```bash
# One-line install
curl -fsSL https://somegit.dev/vikingowl/vessel/raw/main/install.sh | bash

# Or clone and run
git clone https://github.com/VikingOwl91/vessel.git
cd vessel
./install.sh
```
Open http://localhost:7842 in your browser.
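If the page doesn't load, a quick sanity check from the terminal (7842 is Vessel's default port):

```bash
# Expect HTTP response headers if Vessel is up
curl -I http://localhost:7842
```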
```bash
./install.sh --update     # Update to the latest version
./install.sh --uninstall  # Remove Vessel
```
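If you'd like updates to happen automatically, one option is a weekly cron job built on the documented `--update` flag; the `~/vessel` path below is an assumption about where you cloned the repo:

```bash
# crontab entry: run the updater every Sunday at 04:00
# (assumes Vessel was cloned to ~/vessel — adjust the path)
0 4 * * 0 cd ~/vessel && ./install.sh --update
```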
📖 Detailed installation guide →
Full documentation is available on the GitHub Wiki:
| Guide | Description |
|---|---|
| Getting Started | Installation and configuration |
| LLM Backends | Configure Ollama, llama.cpp, or LM Studio |
| Projects | Organize conversations into projects |
| Knowledge Base | RAG with document upload and semantic search |
| Search | Semantic and content search across chats |
| Custom Tools | Create JavaScript, Python, or HTTP tools |
| System Prompts | Manage prompts with model defaults |
| Custom Models | Create models with embedded prompts |
| Built-in Tools | Reference for web search, calculator, etc. |
| API Reference | Backend endpoints |
| Development | Contributing and architecture |
| Troubleshooting | Common issues and solutions |
Vessel prioritizes usability and simplicity over feature breadth.
Completed:
Planned:
Non-Goals:
Do one thing well. Keep the UI out of the way.
Contributions are welcome!
Create a feature branch (`git checkout -b feature/amazing-feature`) and open a pull request.

Issues: github.com/VikingOwl91/vessel/issues
GPL-3.0 — See LICENSE for details.
Made with Svelte • Supports Ollama, llama.cpp, and LM Studio