Local AI chat application built with Svelte 5 and Vite. Connects to an LM Studio instance via OpenAI-compatible API with real-time streaming.
```sh
# Install dependencies
npm install

# Create environment file
cp .env.example .env
# Edit .env with your LM Studio API address

# Start development server
npm run dev
```
| Variable | Default | Description |
|---|---|---|
| `VITE_API_BASE` | `http://10.3.58.20:1234/v1` | LM Studio API base URL |
| `VITE_MODEL_ID` | `qwen/qwen2.5-coder-14b` | Default model identifier |
| `VITE_SYSTEM_PROMPT` | `You are a helpful assistant.` | System prompt for all conversations |
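In the app these variables are exposed through Vite's `import.meta.env`. A minimal sketch of how they might be resolved with the table's defaults as fallbacks (the `resolveConfig` helper and field names are assumptions, not part of this codebase):

```typescript
// Hypothetical helper: resolve settings from a Vite-style env object
// (in the running app this object would be import.meta.env).
export function resolveConfig(env: Record<string, string | undefined>) {
  return {
    // Fallbacks mirror the defaults in the table above.
    apiBase: env.VITE_API_BASE ?? "http://10.3.58.20:1234/v1",
    modelId: env.VITE_MODEL_ID ?? "qwen/qwen2.5-coder-14b",
    systemPrompt: env.VITE_SYSTEM_PROMPT ?? "You are a helpful assistant.",
  };
}
```

Note that only variables prefixed with `VITE_` are exposed to client code by Vite.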
| Command | Description |
|---|---|
| `npm run dev` | Start development server |
| `npm run build` | Production build |
| `npm test` | Run test suite |
| `npm run test:watch` | Run tests in watch mode |
```
src/
  lib/          # API client, state management, file processing, search
  components/   # Svelte 5 components (< 200 lines each)
  App.svelte    # Root layout
  app.css       # Linear-inspired design system tokens
```
Data flow: `Input` → `App.send()` → `store.send()` → `streamChat()` yields tokens → store mutates `$state` → `Chat`/`Message` re-render reactively.
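The token-yielding step can be sketched as a Server-Sent Events parser. This is a hedged illustration of what `streamChat()` might do internally, based on the OpenAI-compatible streaming format (`data:` lines, `[DONE]` sentinel), not the actual implementation:

```typescript
// Parse one SSE line from an OpenAI-compatible /chat/completions stream.
// Returns the token text it carries, or null for comments, keep-alives,
// empty deltas, and the [DONE] sentinel.
export function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;       // comment or blank line
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;             // end-of-stream sentinel
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null; // token text, if any
}
```

A real `streamChat()` would read the `fetch` response body incrementally, split it into lines, and yield each non-null result from a parser like this.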
Linear-inspired dark theme with Inter Variable font. Semi-transparent borders, indigo accent, luminance-stacked backgrounds. Responsive at 320px, 768px, and 1440px breakpoints.
State management uses Svelte 5 runes (`$state`, `$derived`, `$effect`).