Local AI code generation platform built for private, GPU-backed development on your own hardware.
Repository: https://github.com/tylerdotai/titan-ai

Titan AI is a local-first coding assistant that keeps inference on your own hardware. The repository combines a FastAPI backend, a static web interface, lightweight auth, and streaming chat endpoints that connect to a local model server for code generation and related tasks.
| Layer | Technology |
|---|---|
| Backend | FastAPI |
| Database | SQLite |
| Auth | OAuth2-style token flow |
| Inference | Local model server over HTTP |
| Frontend | Static HTML in app/static/ |
- `app/main.py`: FastAPI app with auth, chat, and TTS endpoints
- `app/static/index.html`: Local web interface
- `tasks.db`: SQLite data store
- `LICENSE`: MIT license
- `README.md`: Project overview
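Since `tasks.db` is a plain SQLite file, it can be inspected or initialized with Python's standard library alone. The schema below is purely illustrative; the real tables are created by the application and their names and columns are not documented here.

```python
import sqlite3


def init_db(path: str = "tasks.db") -> sqlite3.Connection:
    """Create a hypothetical tasks table if it does not exist.

    The table and column names are assumptions for illustration,
    not the schema Titan AI actually creates.
    """
    conn = sqlite3.connect(path)
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS tasks (
            id     INTEGER PRIMARY KEY AUTOINCREMENT,
            prompt TEXT NOT NULL,
            status TEXT NOT NULL DEFAULT 'pending'
        )
        """
    )
    conn.commit()
    return conn
```

Passing `":memory:"` as the path gives a throwaway in-memory database, which is handy for experiments.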
Clone the repository and install the Python dependencies with pip:

```shell
git clone https://github.com/tylerdotai/titan-ai.git
cd titan-ai
pip install fastapi uvicorn sqlalchemy requests httpx python-multipart
```
Titan AI is currently optimized for local/self-hosted use.
Start the development server, then open http://localhost:3000 in your browser:

```shell
uvicorn app.main:app --reload --port 3000
```
MIT License - see LICENSE for details.