A privacy-focused email classification tool powered by local AI. It automatically sorts your email into spam, newsletter, and keep categories using Ollama LLMs, all processed locally without sending data to external servers. Features a stunning "2028 design" UI with animated progress indicators and real-time statistics.
- spam - Phishing, scams, unsolicited junk
- newsletter - Marketing emails, subscriptions
- keep - Personal correspondence, work emails, important notifications

Dashboard Tab
Emails Tab
Settings Tab
This application can run in two modes:
Desktop App (Electron)

Best for: Maximum privacy, offline use, local Ollama processing
Run as a native desktop application with system tray support and automatic updates (see the comparison table below).
Quick Start:
```bash
npm install
npm run electron:dev    # Development mode
```
Build Installers:
```bash
npm run electron:build          # All platforms
npm run electron:build:win      # Windows (.exe, portable)
npm run electron:build:mac      # macOS (.dmg)
npm run electron:build:linux    # Linux (.AppImage, .deb, .rpm)
```
Installers will be in the dist-electron/ directory.
Web App

Best for: Quick testing, remote access, shared hosting
Run as a traditional web application.
Quick Start:
```bash
npm install

# Terminal 1: Backend
npm run dev

# Terminal 2: Frontend
npm run dev:frontend
```
Navigate to http://localhost:5173
| Feature | Desktop (Electron) | Web App |
|---|---|---|
| Privacy | Maximum (all local) | Good (can use local Ollama) |
| Setup | One-click install | Manual start |
| Access | Single machine | Any browser |
| Updates | Auto-updater | Manual pull |
| System Tray | Yes | No |
| Offline | Yes (with local Ollama) | No |
Before you begin, ensure you have the following installed:
- Node.js and npm
- Ollama, with a model pulled: `ollama pull qwen2.5:0.5b` (recommended for speed)

If using real IMAP (not mock mode):

- A Gmail account with an app password for your `.env` file

Clone the repository and install dependencies:

```bash
git clone https://github.com/discoverNorman/email-ollama-cleaner.git
cd email-ollama-cleaner
npm install
```
Copy the example environment file:
```bash
cp .env.example .env
```
Edit .env with your configuration (see Configuration below).
Run database migrations:
```bash
npm run db:migrate
```
This creates a SQLite database (emails.db) with the required schema.
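The schema itself lives in `src/db/schema.ts` (Drizzle ORM). That file is not reproduced in this README; the sketch below shows what the `emails` table could look like, with column names taken from the Database Schema section further down and types assumed:

```ts
// Sketch only: the real src/db/schema.ts is not reproduced in this README.
// Column names follow the Database Schema section below; types are assumptions.
import { sqliteTable, text, integer, real } from 'drizzle-orm/sqlite-core';

export const emails = sqliteTable('emails', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  subject: text('subject'),
  from: text('from'),
  to: text('to'),
  date: text('date'),
  body: text('body'),
  classification: text('classification'), // 'spam' | 'newsletter' | 'keep'
  confidence: real('confidence'),          // 0-1 score from the model
  processedAt: text('processed_at'),
});
```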
Development mode (recommended for testing):
```bash
# Terminal 1: Start backend server (port 3000)
npm run dev

# Terminal 2: Start frontend dev server (port 5173)
npm run dev:frontend
```
Production mode:
```bash
npm run build
npm start
```
Navigate to http://localhost:5173 (dev) or http://localhost:3000 (production).
Edit your .env file with the following variables:
```bash
# Gmail IMAP Configuration
GMAIL_EMAIL=your-email@gmail.com
GMAIL_APP_PASSWORD=xxxx-xxxx-xxxx-xxxx   # 16-character app password

# Use mock IMAP for testing (generates fake emails)
USE_MOCK_IMAP=true                       # Set to false to use real IMAP

# Ollama server URL (local or remote)
OLLAMA_HOST=http://localhost:11434

# Model to use (see supported models below)
OLLAMA_MODEL=qwen2.5:0.5b                # Fastest model (recommended)
# OLLAMA_MODEL=llama3.2                  # More accurate but slower
# OLLAMA_MODEL=gemma2                    # Alternative option

# Database file location
DATABASE_URL=./emails.db

# Number of emails to fetch per IMAP scan
BATCH_SIZE=50

# Server port
PORT=3000
```
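For reference, `src/config.ts` presumably exposes these variables to the rest of the app as a typed object. The sketch below is an assumption about that file (including the dotenv import and defaults), not its actual contents:

```ts
// Sketch of src/config.ts: how the variables above might be surfaced to the app.
// The real file is not shown here; the dotenv import and defaults are assumptions.
import 'dotenv/config';

export const config = {
  gmailEmail: process.env.GMAIL_EMAIL ?? '',
  gmailAppPassword: process.env.GMAIL_APP_PASSWORD ?? '',
  useMockImap: process.env.USE_MOCK_IMAP !== 'false',           // mock IMAP unless disabled
  ollamaHost: process.env.OLLAMA_HOST ?? 'http://localhost:11434',
  ollamaModel: process.env.OLLAMA_MODEL ?? 'qwen2.5:0.5b',
  databaseUrl: process.env.DATABASE_URL ?? './emails.db',
  batchSize: Number(process.env.BATCH_SIZE ?? 50),              // emails per IMAP scan
  port: Number(process.env.PORT ?? 3000),
};
```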
Model-specific prompts are optimized for:
- `qwen2.5:0.5b` - Recommended (fastest, ~96 emails/round)
- `llama3.2` - Balanced speed/accuracy
- `phi3` - Microsoft's small model
- `gemma2` / `gemma3` - Google's models
- `mistral` - Mistral AI
- `tinyllama` - Ultra-fast, lower accuracy
- `orca-mini` - Small OpenOrca model

Speed Tip: Pre-warm your model before scanning:

```bash
curl http://localhost:11434/api/generate -d '{"model":"qwen2.5:0.5b","prompt":"hi"}'
```
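The project's `ollama-client.ts` and its model-specific prompts are not shown in this README. As a rough illustration of the classification step, the sketch below sends one email to Ollama's `/api/generate` endpoint and maps the reply onto the three categories; the prompt text and response parsing are illustrative assumptions:

```ts
// Sketch: classifying a single email with a local Ollama model (Node 18+ fetch).
// The real prompts are model-specific and live in the project; this one is illustrative.
type Classification = 'spam' | 'newsletter' | 'keep';

async function classifyEmail(subject: string, body: string): Promise<Classification> {
  const prompt =
    `Classify this email as exactly one of: spam, newsletter, keep.\n` +
    `Subject: ${subject}\n` +
    `Body: ${body.slice(0, 500)}\n` +
    `Answer with one word.`;

  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'qwen2.5:0.5b', prompt, stream: false }),
  });
  const { response } = (await res.json()) as { response: string };

  const answer = response.trim().toLowerCase();
  if (answer.includes('spam')) return 'spam';
  if (answer.includes('newsletter')) return 'newsletter';
  return 'keep';
}
```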
Open the UI in your browser (http://localhost:5173). The backend exposes a REST API on port 3000:
GET /api/health
Response:
```json
{
  "status": "ok",
  "ollama": {
    "connected": true,
    "model": "qwen2.5:0.5b"
  }
}
```
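The same check from code, assuming Node 18+ with a global `fetch` (a hypothetical helper, not part of the project):

```ts
// Programmatic version of the health check above.
async function checkHealth(base = 'http://localhost:3000') {
  const health = await fetch(`${base}/api/health`).then((r) => r.json());
  console.log(
    health.ollama?.connected ? `Ollama OK (${health.ollama.model})` : 'Ollama not reachable'
  );
}
```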
GET /api/stats
Response:
```json
{
  "total": 150,
  "spam": 45,
  "newsletter": 60,
  "keep": 45,
  "avgConfidence": 0.87
}
```
GET /api/emails?classification=spam&limit=20&offset=0
Query Parameters:
- `classification` (optional): `spam`, `newsletter`, or `keep`
- `limit` (optional): Number of results (default: 50)
- `offset` (optional): Pagination offset (default: 0)

POST /api/scan
Starts an email scan with parallel workers. Returns scan progress.
GET /api/unsubscribe/pending
Returns newsletters with unsubscribe links.
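Unsubscribe links are pulled out by `src/services/unsubscribe.ts` (URL extraction); its logic is not shown here. A common approach, and only an assumption about this project, is to prefer the RFC 2369 `List-Unsubscribe` header and fall back to scanning the HTML body:

```ts
// Sketch of unsubscribe-link extraction; the project's actual implementation may differ.
function extractUnsubscribeUrl(headers: Record<string, string>, html: string): string | null {
  // List-Unsubscribe: <https://example.com/unsub?id=1>, <mailto:unsub@example.com>
  const header = headers['list-unsubscribe'];
  if (header) {
    const urls = [...header.matchAll(/<([^>]+)>/g)].map((m) => m[1]);
    const http = urls.find((u) => u.startsWith('http'));
    if (http) return http;
  }
  // Fall back to any link whose href mentions "unsubscribe".
  const link = html.match(/href="([^"]*unsubscribe[^"]*)"/i);
  return link ? link[1] : null;
}
```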
POST /api/unsubscribe/:id/complete
Marks an unsubscribe task as completed.
GET /api/scans
Returns all past scan logs with statistics.
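A minimal TypeScript client for the endpoints above might look like the sketch below. Response shapes beyond the documented examples, and the helper names, are assumptions:

```ts
// Minimal client for the REST API documented above; works in the browser or Node 18+.
const API = 'http://localhost:3000';

interface Stats {
  total: number;
  spam: number;
  newsletter: number;
  keep: number;
  avgConfidence: number;
}

async function getStats(): Promise<Stats> {
  return fetch(`${API}/api/stats`).then((r) => r.json());
}

async function getEmails(classification?: 'spam' | 'newsletter' | 'keep', limit = 50, offset = 0) {
  const params = new URLSearchParams({ limit: String(limit), offset: String(offset) });
  if (classification) params.set('classification', classification);
  return fetch(`${API}/api/emails?${params}`).then((r) => r.json());
}

async function startScan() {
  // POST /api/scan kicks off a scan with parallel workers and returns scan progress.
  return fetch(`${API}/api/scan`, { method: 'POST' }).then((r) => r.json());
}
```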
Backend:

- Hono (HTTP server)
- Drizzle ORM with SQLite (better-sqlite3)
- TypeScript

Frontend:

- Svelte
- Vite

AI:

- Ollama (local LLMs)
```
email-ollama-cleaner/
├── src/
│   ├── server.ts              # Hono server entry point
│   ├── config.ts              # Environment configuration
│   ├── types.ts               # Shared TypeScript types
│   ├── db/
│   │   ├── schema.ts          # Drizzle ORM schema (emails, scans, tasks)
│   │   └── index.ts           # Database connection
│   ├── services/
│   │   ├── imap-client.ts     # IMAP client (mock + real)
│   │   ├── ollama-client.ts   # Ollama API client
│   │   ├── email-processor.ts # Email processing pipeline
│   │   ├── queue-worker.ts    # Parallel queue workers
│   │   └── unsubscribe.ts     # URL extraction
│   └── routes/
│       └── api.ts             # API route handlers
├── frontend/
│   ├── index.html             # Instant loading screen
│   ├── src/
│   │   ├── App.svelte         # Main app (tabs, routing)
│   │   ├── app.css            # 2028 design system
│   │   ├── main.ts            # Entry point
│   │   └── lib/
│   │       └── api.ts         # API client
│   └── vite.config.ts
├── .env.example
├── package.json
├── drizzle.config.ts
├── tsconfig.json
└── README.md
```
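To connect the pieces: `src/server.ts` is described above as the Hono server entry point, with handlers living in `src/routes/api.ts`. The sketch below shows one plausible wiring using Hono's Node adapter; it is not the actual file:

```ts
// Sketch of how src/server.ts might wire things together, assuming Hono with the
// Node adapter. In the real project the route handlers live in src/routes/api.ts.
import { Hono } from 'hono';
import { serve } from '@hono/node-server';

const app = new Hono();

app.get('/api/health', (c) =>
  c.json({ status: 'ok', ollama: { connected: true, model: 'qwen2.5:0.5b' } })
);

serve({ fetch: app.fetch, port: 3000 }, (info) => {
  console.log(`API listening on http://localhost:${info.port}`);
});
```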
emails table:

- id, subject, from, to, date, body
- classification (spam/newsletter/keep)
- confidence (0-1)
- processed_at

unsubscribe_tasks table:

- id, email_id, url, status, created_at

scan_logs table:

- id, started_at, completed_at, total_emails, spam_count, newsletter_count, keep_count

Performance settings:

| Setting | Default | Description |
|---|---|---|
| Model | qwen2.5:0.5b | Fastest model with good accuracy |
| Workers | 16 | Parallel queue workers |
| Concurrency | 12→16 | Auto-scaling parallel requests |
| Batch Size | 8 | Emails per Ollama request |
Throughput: ~96 emails/round (8 emails × 12 concurrent requests)
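The queue workers in `src/services/queue-worker.ts` are not reproduced here, but the arithmetic above (8 emails per request, 12 requests in flight) corresponds to a batched worker pool. A minimal sketch of that pattern, with `classifyBatch` left as a hypothetical callback:

```ts
// Sketch of batched, concurrency-limited processing consistent with the numbers above:
// split emails into batches of 8 and keep up to 12 classification requests in flight.
async function processAll<T>(emails: T[], classifyBatch: (batch: T[]) => Promise<void>) {
  const BATCH_SIZE = 8;
  const CONCURRENCY = 12;

  const batches: T[][] = [];
  for (let i = 0; i < emails.length; i += BATCH_SIZE) {
    batches.push(emails.slice(i, i + BATCH_SIZE));
  }

  // Simple worker pool: each worker pulls the next batch until none remain.
  let next = 0;
  const workers = Array.from({ length: CONCURRENCY }, async () => {
    while (next < batches.length) {
      const batch = batches[next++];
      await classifyBatch(batch);
    }
  });
  await Promise.all(workers);
}
```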
Pre-warm the model before the first scan:

```bash
curl http://localhost:11434/api/generate -d '{"model":"qwen2.5:0.5b","prompt":"test"}'
```

Tips:

- Use the fastest model (`qwen2.5:0.5b`) for initial testing
- Increase `BATCH_SIZE` in `.env` if your LLM has high throughput

Available scripts:

```bash
npm run dev                   # Start backend dev server (hot reload)
npm run dev:frontend          # Start Svelte dev server (hot reload)
npm run build                 # Build both backend and frontend
npm run preview               # Preview production build
npm run electron:dev          # Start Electron in development mode
npm run electron:build        # Build production Electron app (all platforms)
npm run electron:build:win    # Build for Windows only
npm run electron:build:mac    # Build for macOS only
npm run electron:build:linux  # Build for Linux only
npm run db:generate           # Generate new Drizzle migrations
npm run db:migrate            # Run database migrations
```
Backend changes (src/):
- Edit files in `src/`; `tsx watch` will automatically reload the server

Frontend changes (frontend/src/):

- Edit files in `frontend/src/`; the Vite dev server hot-reloads them in the browser

After modifying `src/db/schema.ts`:

```bash
npm run db:generate   # Generates migration SQL
npm run db:migrate    # Applies migration
```
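For example, a schema change can be as small as adding one column; the `archived` column below is purely hypothetical and only meant to show the edit, generate, migrate loop:

```ts
// Hypothetical change to src/db/schema.ts; the "archived" column is illustrative only.
// After saving, run `npm run db:generate` and then `npm run db:migrate` as shown above.
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';

export const emails = sqliteTable('emails', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  subject: text('subject'),
  // ...existing columns from the schema sketch earlier...
  archived: integer('archived').default(0), // new column -> db:generate emits a migration
});
```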
Problem: "Failed to connect to Ollama"
Solutions:
- Ensure Ollama is running: `ollama list`
- Check that `OLLAMA_HOST` in `.env` matches your Ollama server
- Test the connection: `curl http://localhost:11434/api/tags`
- Pull the model if it's missing: `ollama pull qwen2.5:0.5b`

Problem: "Invalid credentials" or "Authentication failed"
Solutions:
- Use a Gmail app password (16 characters), not your regular account password
- Set `USE_MOCK_IMAP=true` in `.env` for testing without real IMAP

Problem: Email processing is too slow
Solutions:
- Switch to the fastest model: `OLLAMA_MODEL=qwen2.5:0.5b`
- Reduce `BATCH_SIZE` if hitting memory limits

Problem: "Database is locked"
Solutions:
- Delete the `emails.db-shm` and `emails.db-wal` files (if safe)

Problem: "App won't start" or blank window
Solutions:
- Rebuild the app: `npm run build`
- Delete `node_modules` and reinstall: `rm -rf node_modules && npm install`
- Run in development mode to see errors: `npm run electron:dev`

Problem: "Backend not connecting" in Electron
Solutions:
- Check the `electron/main.js` console output for backend errors
- Make sure port 3000 is free: `lsof -i :3000` (Mac/Linux) or `netstat -ano | findstr :3000` (Windows)
- Start the backend manually with `npm run dev`, then run `electron .`

Problem: Missing app icons
Solutions:
- Add icons to the `electron/assets/` directory
- See `electron/assets/README.md` for icon requirements

Problem: Build fails with "Cannot find module"
Solutions:
- Make sure the `dist/` directory exists: `npm run build`
- Check that `electron/main.js` and `electron/preload.js` are present
- Reinstall dependencies: `npm install`
- Clean and rebuild: `rm -rf dist dist-electron && npm run build`

This project is licensed for personal use only. Commercial use and derivative works are not permitted under the CC BY-NC-ND 4.0 license.
If you have questions or feedback, contact: info@agentdrivendevelopment.com
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
You are free to:

- Share: copy and redistribute the material in any medium or format

Under these terms:

- Attribution: you must give appropriate credit
- NonCommercial: you may not use the material for commercial purposes
- NoDerivatives: if you remix, transform, or build upon the material, you may not distribute the modified material
For more details, see the LICENSE file.
norman
Email: info@agentdrivendevelopment.com
GitHub: @discoverNorman
Built with Claude Code - AI-assisted development tool by Anthropic