Local, privacy-friendly search with grounded summaries: SearXNG behind a small reverse proxy, plus a Tauri desktop UI that calls your self-hosted metasearch and, optionally, a local llama.cpp server (e.g. a Nemotron-class GGUF).
From the repo root:
```sh
cd infra/searxng
docker compose up -d   # or: docker-compose up -d
```
JSON search (via Caddy, bound to localhost only):

```sh
curl -sG 'http://127.0.0.1:8080/search' --data-urlencode 'q=searxng' --data-urlencode 'format=json' | head -c 400
```
If the container layout or settings path differs for your image tag, adjust docker-compose.yml and settings.yml per SearXNG docs.
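The same JSON endpoint the curl command hits can be queried from application code. A minimal TypeScript sketch, assuming the Caddy endpoint stays at its default and that the response carries a `results` array of objects with `title` and `url` fields (as the curl output above shows):

```typescript
// Sketch: build a SearXNG JSON search URL and pull title/url pairs
// out of the response body. The base URL is an assumption matching
// the compose setup above (Caddy on 127.0.0.1:8080).
interface SearxResult {
  title: string;
  url: string;
}

function searchUrl(baseUrl: string, query: string): string {
  const u = new URL("/search", baseUrl);
  u.searchParams.set("q", query);      // the search terms
  u.searchParams.set("format", "json"); // ask SearXNG for JSON
  return u.toString();
}

function extractResults(body: { results?: SearxResult[] }): SearxResult[] {
  // Keep only the fields the UI needs; tolerate a missing array.
  return (body.results ?? []).map((r) => ({ title: r.title, url: r.url }));
}
```

In the actual app this request goes through Rust + reqwest rather than browser `fetch`, which is why CORS never comes up.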
```sh
cd apps/desktop
npm install
npm run tauri dev
```
Set the SearXNG base URL in the app settings (default `http://127.0.0.1:8080`). The LLM is optional (you run `llama-server` yourself). The app uses Rust + reqwest to query SearXNG (no browser CORS issues). Answers stream from llama.cpp's OpenAI-compatible endpoint when the process is running.
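The streamed answers arrive as Server-Sent Events in the standard OpenAI chat-completions format (`data:` lines carrying JSON deltas, terminated by `data: [DONE]`). A minimal sketch of the chunk parsing, assuming that delta shape:

```typescript
// Sketch: extract generated text from one SSE chunk of an
// OpenAI-compatible streaming response. Assumes the standard
// `choices[0].delta.content` shape; a real client would also
// buffer lines split across chunk boundaries.
function parseSseChunk(chunk: string): string {
  let text = "";
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // SSE payload lines only
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    try {
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (typeof delta === "string") text += delta;
    } catch {
      // partial JSON (split mid-chunk); skip here, buffer in practice
    }
  }
  return text;
}
```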
Use a build that provides `llama-server` with your model, for example:

```sh
llama-server -m ./model.Q4_K_M.gguf --host 127.0.0.1 --port 8081
```

Match the host and port in the app settings.
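The host and port from the settings are only ever combined into the standard OpenAI-compatible endpoint path. A small sketch of that wiring (the setting names here are illustrative, not the app's actual keys):

```typescript
// Sketch: derive the chat-completions endpoint and request body from
// the configured host/port. llama-server exposes the OpenAI-compatible
// API under /v1; the `model` field is nominal since the server already
// has a single model loaded.
function chatEndpoint(host: string, port: number): string {
  return `http://${host}:${port}/v1/chat/completions`;
}

function chatRequestBody(prompt: string): object {
  return {
    model: "local",  // placeholder name, see comment above
    stream: true,    // request token-by-token SSE streaming
    messages: [{ role: "user", content: prompt }],
  };
}
```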
| Path | Purpose |
|---|---|
| `apps/desktop/` | Tauri 2 + SvelteKit + TypeScript UI |
| `infra/searxng/` | Docker Compose: Caddy + SearXNG |
| `mockup/` | Static glass UI mockup |
| `docs/design/` | UI design guide |