Multi-LLM content polishing for privacy-filtered transcripts.

This module takes privacy-filtered content and provides AI-assisted editing and polishing. It supports multiple LLM providers, including a fully local Ollama option. It is the ONLY module that requires internet access (and only when using a cloud LLM).
- Input: `filtered.md` - Privacy-filtered markdown from Module 2
- Output: `polished.md` - AI-enhanced content
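
For orientation, here is a minimal sketch of the polish step using the Anthropic provider. The function name, model alias, and prompt are illustrative assumptions, not this module's actual code; the shape of the flow (read `filtered.md`, call the model, write `polished.md`) follows the input/output contract above.

```python
# Sketch of the polish step. Model alias and prompt wording are assumptions.
from pathlib import Path

import anthropic  # pip install anthropic; requires ANTHROPIC_API_KEY in the env


def polish(input_path: str = "filtered.md", output_path: str = "polished.md") -> None:
    text = Path(input_path).read_text()
    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption: any Claude model works here
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"Polish this markdown for clarity; keep the meaning:\n\n{text}",
        }],
    )
    Path(output_path).write_text(message.content[0].text)


if __name__ == "__main__":
    polish()
```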
## Setup

```bash
# Use Node 20
nvm use 20

# Install frontend dependencies
npm install

# Set up Python backend
cd backend
uv venv
source .venv/bin/activate
uv pip install -r requirements.txt
```
## Development

```bash
# Terminal 1: Start FastAPI backend
cd backend
source .venv/bin/activate
uvicorn main:app --reload

# Terminal 2: Start Tauri frontend
npm run tauri:dev
```

## Testing & Build

```bash
# Run tests
npm test
cd backend && pytest

# Build production app
npm run tauri:build
```
## Configuration

### Anthropic (Recommended)

```bash
export ANTHROPIC_API_KEY=your-key-here
```

### OpenAI

```bash
export OPENAI_API_KEY=your-key-here
```

### OpenRouter

```bash
export OPENROUTER_API_KEY=your-key-here
```

### Gemini

```bash
export GOOGLE_API_KEY=your-key-here
```

### Ollama

```bash
# Install Ollama
brew install ollama

# Pull a model
ollama pull llama2
```
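
A minimal sketch of how env-var-based provider selection might work. The function names and the fallback priority (cloud keys first, local Ollama as the no-key default) are assumptions; the Ollama call uses its standard local HTTP API on port 11434.

```python
# Sketch: pick a provider from whichever API key is set; fall back to local
# Ollama when none is. Names and priority order are assumptions.
import os

import requests  # pip install requests


def pick_provider() -> str:
    for env_var, name in [
        ("ANTHROPIC_API_KEY", "anthropic"),
        ("OPENAI_API_KEY", "openai"),
        ("OPENROUTER_API_KEY", "openrouter"),
        ("GOOGLE_API_KEY", "gemini"),
    ]:
        if os.environ.get(env_var):
            return name
    return "ollama"  # fully local default; no key required


def polish_with_ollama(text: str, model: str = "llama2") -> str:
    # Ollama's local generate endpoint (default port 11434).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": f"Polish this text:\n\n{text}", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```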
## References

Based on working implementations from:

- `/Users/kjd/01-projects/IAC-033-writer/` - AI editor patterns
- `/Users/kjd/01-projects/IAC-011-sovereign-backend/` - API patterns

## Project Structure

```
src/
├── components/ # Svelte UI components
│ ├── Editor.svelte
│ └── ProviderSelector.svelte
├── stores/ # Svelte stores
└── tauri/ # Rust backend
backend/
├── main.py # FastAPI app
├── providers/ # LLM provider integrations
│ ├── anthropic.py
│ ├── openai.py
│ ├── openrouter.py
│ ├── gemini.py
│ └── ollama.py
└── utils/               # Shared utilities
```
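
For orientation, a minimal sketch of what `backend/main.py` might look like. The `/polish` route, the request model, and the provider wiring are illustrative assumptions, not the module's actual code.

```python
# backend/main.py (sketch): a FastAPI app exposing a single polish endpoint.
# Route name, request shape, and provider dispatch are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PolishRequest(BaseModel):
    text: str                 # contents of filtered.md
    provider: str = "ollama"  # "anthropic", "openai", "openrouter", "gemini", or "ollama"


@app.post("/polish")
async def polish(req: PolishRequest) -> dict:
    # In the real module this would dispatch to backend/providers/<provider>.py.
    polished = req.text  # placeholder: echo input unchanged
    return {"polished": polished}
```

With the backend running via `uvicorn main:app --reload`, a request like `curl -X POST http://127.0.0.1:8000/polish -H 'Content-Type: application/json' -d '{"text": "hello"}'` would exercise the route.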
Suggested local models for Ollama: `llama2` or `mistral`.

## License

[To be determined]
## Contributing

See the main project repository for contribution guidelines.