A modern, feature-rich web interface for Ollama
Why Vessel • Features • Screenshots • Quick Start • Installation • Roadmap
Vessel and open-webui solve different problems.
Vessel is intentionally focused: it exists for users who want a UI that is fast and uncluttered, makes browsing and managing Ollama models simple, and stays out of the way once set up.
open-webui aims to be a feature-rich, extensible frontend supporting many runtimes, integrations, and workflows. That flexibility is powerful — but it comes with more complexity in setup, UI, and maintenance.
Vessel deliberately avoids becoming a platform. Its scope is narrow by design.
Vessel includes five powerful tools that models can invoke automatically:
| Tool | Description |
|---|---|
| Web Search | Search the internet for current information, news, weather, prices |
| Fetch URL | Read and extract content from any webpage |
| Calculator | Safe math expression parser with functions (sqrt, sin, cos, log, etc.) |
| Get Location | Detect user location via GPS or IP for local queries |
| Get Time | Current date/time with timezone support |
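For context, tool calling in Ollama works by sending JSON function schemas alongside a chat request; the model then emits structured tool calls that the client executes. The request below is a sketch against Ollama's `/api/chat` endpoint. The `get_time` schema shown is illustrative only and is not Vessel's actual tool definition (those live in `frontend/src/lib/tools`):

```bash
# Sketch: ask a tool-capable model to use a hypothetical time tool.
# The schema here is illustrative; Vessel's built-in definitions may differ.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "What time is it in UTC?"}],
  "stream": false,
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_time",
      "description": "Return the current date and time for a timezone",
      "parameters": {
        "type": "object",
        "properties": {"timezone": {"type": "string"}},
        "required": ["timezone"]
      }
    }
  }]
}'
```

If the model decides to use the tool, the response contains a structured tool call rather than plain text, and the client runs the tool and feeds the result back into the conversation.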
Screenshots:

- Clean, modern chat interface
- Syntax-highlighted code output
- Integrated web search with styled results
- Light theme for daytime use
- Browse and manage Ollama models
Ollama must listen on all interfaces for Docker containers to connect. Configure it by setting `OLLAMA_HOST=0.0.0.0`:
Option A: Using systemd (Linux, recommended)
```bash
sudo systemctl edit ollama
```
Add these lines:
```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```
Then restart:
```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
Option B: Manual start
```bash
OLLAMA_HOST=0.0.0.0 ollama serve
```
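To confirm Ollama is reachable from another machine or container, a quick version check works (replace `<host-ip>` with your machine's address on the relevant network):

```bash
# Should return JSON such as {"version":"..."}
curl http://<host-ip>:11434/api/version
```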
```bash
curl -fsSL https://somegit.dev/vikingowl/vessel/raw/main/install.sh | bash
```

Or clone the repository and run the installer manually:

```bash
git clone https://somegit.dev/vikingowl/vessel.git
cd vessel
./install.sh
```
The installer takes care of the full setup. Once it finishes, open http://localhost:7842 in your browser.
The install script handles everything automatically:

```bash
./install.sh             # Install and start
./install.sh --update    # Update to latest version
./install.sh --uninstall # Remove installation
```
Requirements: Docker (with the Compose plugin) and a local Ollama installation.
```bash
# Make sure Ollama is running first
ollama serve

# Start Vessel
docker compose up -d
```
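To verify the stack came up, check the container status and that the UI answers on its port (7842 is the default from the install section; adjust if you changed it):

```bash
# Containers should show as "running"
docker compose ps

# The UI should answer with an HTTP status code (e.g. 200)
curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:7842
```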
```bash
cd frontend
npm install
npm run dev
```

Frontend runs on http://localhost:5173
```bash
cd backend
go mod tidy
go run cmd/server/main.go -port 9090
```

Backend API runs on http://localhost:9090
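A quick smoke test against a documented endpoint (see the API reference below) confirms the backend is serving:

```bash
# GET /api/v1/location resolves an approximate location from your IP
curl http://localhost:9090/api/v1/location
```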
| Variable | Default | Description |
|---|---|---|
| `OLLAMA_API_URL` | `http://localhost:11434` | Ollama API endpoint |
| `BACKEND_URL` | `http://localhost:9090` | Vessel backend API |
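How these variables are consumed is wired inside the frontend, but assuming they are read from the process environment at startup, pointing a dev run at a remote Ollama host might look like this (the address is a placeholder):

```bash
# Hypothetical values; substitute your own hosts
OLLAMA_API_URL=http://192.168.1.50:11434 \
BACKEND_URL=http://localhost:9090 \
npm run dev
```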
| Variable | Default | Description |
|---|---|---|
| `OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint |
| `PORT` | `8080` | Backend server port |
| `GIN_MODE` | `debug` | Gin mode (`debug` or `release`) |
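Using the documented defaults, a release-mode run of the backend could look like this sketch:

```bash
# Values mirror the defaults above, with GIN_MODE switched to release
OLLAMA_URL=http://localhost:11434 \
PORT=8080 \
GIN_MODE=release \
go run cmd/server/main.go
```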
Create `docker-compose.override.yml` for local customizations:

```yaml
services:
  frontend:
    environment:
      - CUSTOM_VAR=value
    ports:
      - "3000:3000" # Different port
```
```
vessel/
├── frontend/                    # SvelteKit 5 application
│   ├── src/
│   │   ├── lib/
│   │   │   ├── components/      # UI components
│   │   │   ├── stores/          # Svelte 5 runes state
│   │   │   ├── tools/           # Built-in tool definitions
│   │   │   ├── storage/         # IndexedDB (Dexie)
│   │   │   └── api/             # API clients
│   │   └── routes/              # SvelteKit routes
│   └── Dockerfile
│
├── backend/                     # Go API server
│   ├── cmd/server/              # Entry point
│   └── internal/
│       ├── api/                 # HTTP handlers
│       │   ├── fetcher.go       # URL fetching with wget/curl/chromedp
│       │   ├── search.go        # Web search via DuckDuckGo
│       │   └── routes.go        # Route definitions
│       ├── database/            # SQLite storage
│       └── models/              # Data models
│
├── docker-compose.yml           # Production setup
└── docker-compose.dev.yml       # Development with hot reload
```
```bash
# Frontend unit tests
cd frontend
npm run test

# With coverage
npm run test:coverage

# Watch mode
npm run test:watch
```
```bash
cd frontend
npm run check
```
Use the dev compose file for hot reloading:

```bash
docker compose -f docker-compose.dev.yml up
```
| Method | Endpoint | Description |
|---|---|---|
| `POST` | `/api/v1/proxy/search` | Web search via DuckDuckGo |
| `POST` | `/api/v1/proxy/fetch` | Fetch URL content |
| `GET` | `/api/v1/location` | Get user location from IP |
| `GET` | `/api/v1/models/registry` | Browse Ollama model registry |
| `GET` | `/api/v1/models/search` | Search models |
| `POST` | `/api/v1/chats/sync` | Sync conversations |
All requests to `/ollama/*` are proxied to the Ollama API, which sidesteps browser CORS restrictions.
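As a worked example, the search endpoint can be hit directly with curl. The body shape below is an assumption; the actual field names aren't documented in this README:

```bash
# Hypothetical request body; the real field name may differ from "query"
curl -X POST http://localhost:9090/api/v1/proxy/search \
  -H 'Content-Type: application/json' \
  -d '{"query": "ollama quantization formats"}'
```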
Vessel is intentionally focused on being a clean, local-first UI for Ollama. The roadmap prioritizes usability, clarity, and low friction over feature breadth.
These improve the existing experience without expanding scope.
Still local-first, still focused — but easing onboarding and workflows.
These are explorations, not promises. They are intentionally separated to avoid scope creep.
Vessel intentionally avoids becoming a platform.
If a feature meaningfully compromises simplicity, it likely doesn't belong in core Vessel.
Do one thing well. Keep the UI out of the way. Prefer clarity over configurability.
Contributions are welcome! Please feel free to submit a Pull Request.
Issues and feature requests are tracked on GitHub: https://github.com/VikingOwl91/vessel/issues
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

Copyright (C) 2026 VikingOwl
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.