IAC-099 Module 3: AI Editor

Multi-LLM content polishing with privacy-filtered transcripts.

Purpose

Takes privacy-filtered content from Module 2 and provides AI-assisted editing and polishing. Supports multiple LLM providers, including a fully local Ollama option. This is the only module that requires internet access, and only when using a cloud LLM.

Input/Output Contract

Input: filtered.md - Privacy-filtered markdown from Module 2
Output: polished.md - AI-enhanced content

Tech Stack

  • Framework: Tauri 2
  • Frontend: Svelte 5
  • Backend: FastAPI (local server)
  • LLM Providers: Anthropic, OpenAI, OpenRouter, Gemini, Ollama
  • Database: SQLite (local storage)
  • Platform: macOS (Apple Silicon, M2/M3)

Prerequisites

  • macOS with Apple Silicon (M2/M3)
  • Node.js 20 (via nvm)
  • Python 3.11+ with uv
  • Rust (latest stable)
  • (Optional) Ollama for local AI processing

Installation

# Use Node 20
nvm use 20

# Install frontend dependencies
npm install

# Set up Python backend
cd backend
uv venv
source .venv/bin/activate
uv pip install -r requirements.txt

Development

# Terminal 1: Start FastAPI backend
cd backend
source .venv/bin/activate
uvicorn main:app --reload

# Terminal 2: Start Tauri frontend
npm run tauri:dev

# Run tests
npm test
cd backend && pytest
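
The uvicorn command above expects an ASGI app named app in backend/main.py. A minimal sketch of that entry point, assuming a simple health-check route (the actual routes are not documented here):

# backend/main.py - illustrative sketch only; the real app wiring may differ
from fastapi import FastAPI

app = FastAPI(title="IAC-099 Module 3 Editor Backend")

@app.get("/health")
def health() -> dict:
    # Liveness check the Tauri frontend could poll to confirm the local server is up
    return {"status": "ok"}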

Building

# Build production app
npm run tauri:build

Usage

  1. Load filtered.md from Module 2
  2. Select AI provider:
    • Anthropic (Claude) - Requires API key
    • OpenAI (GPT-4) - Requires API key
    • OpenRouter - Requires API key
    • Gemini - Requires API key
    • Ollama - Fully local, no API key needed
  3. Click "AI Assist" to get suggestions
  4. Review and accept/reject suggestions
  5. Click "Export" to save polished.md
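
Outside the UI, the local FastAPI backend can in principle be scripted directly. The sketch below assumes a hypothetical POST /polish route; the real endpoint names, payload fields, and response shape are not documented here and may differ:

# Hypothetical: "/polish", its payload, and the response shape are assumptions.
import requests

with open("filtered.md") as f:
    text = f.read()

resp = requests.post(
    "http://127.0.0.1:8000/polish",              # uvicorn's default host/port
    json={"text": text, "provider": "ollama"},
    timeout=120,
)
resp.raise_for_status()

with open("polished.md", "w") as f:
    f.write(resp.json()["text"])                 # assumed response field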

LLM Provider Configuration

Cloud Providers (Require Internet + API Keys)

Anthropic (Recommended)

export ANTHROPIC_API_KEY=your-key-here

OpenAI

export OPENAI_API_KEY=your-key-here

OpenRouter

export OPENROUTER_API_KEY=your-key-here

Gemini

export GOOGLE_API_KEY=your-key-here
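
One plausible way for the backend to resolve these keys is a plain environment lookup, as sketched below; this mapping is an assumption, since the app may instead read keys from its encrypted local store:

# Sketch: the provider-to-env-var mapping is assumed, not taken from the project.
import os

API_KEY_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "gemini": "GOOGLE_API_KEY",
}

def get_api_key(provider: str) -> str | None:
    # Ollama needs no key, so providers without an entry return None
    env_var = API_KEY_ENV_VARS.get(provider)
    return os.environ.get(env_var) if env_var else None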

Local Provider (No Internet Required)

Ollama

# Install Ollama
brew install ollama

# Pull a model
ollama pull llama2
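
Once the Ollama server is running (via ollama serve or the desktop app), it exposes a local HTTP API on port 11434. A minimal non-streaming generation call looks like this:

# Non-streaming request against Ollama's local /api/generate endpoint.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Polish this sentence: ...", "stream": False},
    timeout=300,                      # local models can be slow on first load
)
resp.raise_for_status()
print(resp.json()["response"])        # the generated text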

Configuration

  • API keys (stored locally, encrypted)
  • Default LLM provider
  • Model selection per provider
  • Temperature/creativity settings
  • Max token limits
  • System prompts
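
These settings might be modeled roughly as in the Pydantic sketch below; the field names and defaults are illustrative assumptions, not the project's actual schema:

# Hypothetical settings model; names and defaults are illustrative only.
from pydantic import BaseModel

class EditorSettings(BaseModel):
    default_provider: str = "ollama"     # anthropic, openai, openrouter, gemini, or ollama
    model: str = "llama2"                # per-provider model selection
    temperature: float = 0.7             # creativity setting
    max_tokens: int = 2048               # response length cap
    system_prompt: str = "You are a careful copy editor."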

Source Code References

Based on working implementations from:

  • /Users/kjd/01-projects/IAC-033-writer/ - AI editor patterns
  • /Users/kjd/01-projects/IAC-011-sovereign-backend/ - API patterns

Success Criteria

  • ✅ Loads markdown
  • ✅ Connects to at least one LLM (Anthropic or Ollama)
  • ✅ Provides AI suggestions
  • ✅ Outputs polished markdown
  • ✅ Works with API keys (user provides)
  • ✅ Ollama option for fully local AI

Architecture

src/
├── components/      # Svelte UI components
│   ├── Editor.svelte
│   └── ProviderSelector.svelte
├── stores/          # Svelte stores
└── tauri/           # Rust backend

backend/
├── main.py         # FastAPI app
├── providers/      # LLM provider integrations
│   ├── anthropic.py
│   ├── openai.py
│   ├── openrouter.py
│   ├── gemini.py
│   └── ollama.py
└── utils/          # Shared utilities
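
The modules under backend/providers/ plausibly implement a shared contract so the rest of the app can swap providers freely. A hypothetical sketch of that interface (the real code may be structured differently):

# Hypothetical shared interface for backend/providers/*.
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Contract each provider module (anthropic.py, ollama.py, ...) would implement."""

    @abstractmethod
    def suggest(self, text: str, system_prompt: str) -> str:
        """Return AI-polished text for the given markdown."""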

Privacy & Security

  • API keys stored locally (encrypted at rest)
  • API keys never logged or transmitted except to respective LLM providers
  • User explicitly triggers AI assistance (no automatic processing)
  • Ollama option provides fully local processing
  • No telemetry or analytics
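
How the at-rest encryption is implemented is not specified here. One minimal sketch using symmetric encryption, assuming the cryptography package and a master key kept in a secure local store such as the macOS Keychain:

# Sketch: Fernet symmetric encryption for keys at rest; key management is assumed.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()    # in practice, load from a secure local store
f = Fernet(master_key)

token = f.encrypt(b"sk-ant-...")      # ciphertext is safe to persist in SQLite
plain = f.decrypt(token)              # recoverable only with the master key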

Performance Notes

  • Cloud LLMs: Fast response (1-5 seconds)
  • Ollama (local): Slower on CPU, faster with GPU (5-30 seconds)
  • Recommended Ollama model for M2: llama2 or mistral

License

[To be determined]

Contributing

See main project repository for contribution guidelines.
