A browser-based AI application that runs entirely on your device without sending data to external servers. This project uses Svelte 5, SvelteKit, WebAssembly, and various AI libraries for chat, transcription, text-to-speech, and image processing.
You can try the application at: https://enclave.page
1. Clone the repository

2. Install dependencies

   ```bash
   nvm use
   npm install
   ```

3. Configure environment variables

   ```bash
   cp .env.example .env
   ```

   Available variables:

   - `PUBLIC_DISABLE_OPFS=true` - Disable OPFS caching for testing fallback behavior (a sketch of how this flag might be read in app code follows these steps)

4. Start the development server

   ```bash
   npm run dev
   ```

5. Open your browser and navigate to http://localhost:5173
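For reference, SvelteKit exposes `PUBLIC_`-prefixed variables to client-side code through its `$env/static/public` module. The helper below is a hypothetical sketch, not taken from this project's source, of how the `PUBLIC_DISABLE_OPFS` flag might be read; it assumes the variable is defined in your `.env` file.

```ts
// Hypothetical sketch: read the PUBLIC_DISABLE_OPFS flag in SvelteKit client code.
// SvelteKit only exposes variables prefixed with PUBLIC_ to the browser bundle.
import { PUBLIC_DISABLE_OPFS } from '$env/static/public';

// Treat the literal string 'true' as "disable OPFS"; anything else keeps caching on.
export function isOpfsDisabled(): boolean {
  return PUBLIC_DISABLE_OPFS === 'true';
}
```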
To create a production build:

```bash
npm run build
```
```bash
# Build the Docker image
docker build -t enclave .

# Run the container
docker run -p 3000:3000 enclave
```
The application downloads AI models directly to your browser and runs inference locally using WebAssembly. Models are cached in the Origin Private File System (OPFS) when the browser supports it. This approach keeps your data private and lets the application work offline after the initial model download.
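To make the caching behavior concrete, here is a minimal TypeScript sketch, not the project's actual implementation, of fetching a model once and storing it in OPFS, with a plain network fetch as the fallback when OPFS is disabled or unsupported. The `loadModel` function and the URL-derived cache key are assumptions for illustration, and `createWritable()` is not available in every browser, so a real implementation would need additional fallbacks (for example, sync access handles in a worker).

```ts
// Hypothetical sketch: cache a downloaded model in the Origin Private File System (OPFS).
// Falls back to a plain fetch when OPFS is unavailable or explicitly disabled.
async function loadModel(url: string, disableOpfs = false): Promise<ArrayBuffer> {
  // OPFS is exposed through navigator.storage.getDirectory() in supporting browsers.
  const opfsSupported =
    !disableOpfs &&
    typeof navigator !== 'undefined' &&
    'storage' in navigator &&
    typeof navigator.storage.getDirectory === 'function';

  if (!opfsSupported) {
    // Fallback path: download the model on every page load.
    return await (await fetch(url)).arrayBuffer();
  }

  const root = await navigator.storage.getDirectory();
  const fileName = encodeURIComponent(url); // derive a cache key from the model URL

  try {
    // Cache hit: read the previously stored model bytes.
    const handle = await root.getFileHandle(fileName);
    return await (await handle.getFile()).arrayBuffer();
  } catch {
    // Cache miss: download the model, then persist it for next time.
    const bytes = await (await fetch(url)).arrayBuffer();
    const handle = await root.getFileHandle(fileName, { create: true });
    const writable = await handle.createWritable();
    await writable.write(bytes);
    await writable.close();
    return bytes;
  }
}
```

A caller could combine this with the environment flag shown earlier, for example `await loadModel(modelUrl, isOpfsDisabled())`, so that setting `PUBLIC_DISABLE_OPFS=true` exercises the fallback path.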