# @forgespace/siza-gen

Siza AI generation engine: multi-framework code generation, a component registry, and ML-powered quality scoring.

`@forgespace/siza-gen` is the AI brain extracted from `siza-mcp`.
## Installation

```bash
npm install @forgespace/siza-gen
```
## Quick Start

```ts
import {
  searchComponents,
  initializeRegistry,
  GeneratorFactory,
} from '@forgespace/siza-gen';

await initializeRegistry();
const results = searchComponents('hero section');
const generator = GeneratorFactory.create('react');
```
## Modules

| Module | Description |
|---|---|
| `generators/` | React, Vue, Angular, Svelte, and HTML code generators |
| `registry/` | 502 snippets (357 component, 85 animation, 60 backend) |
| `ml/` | Embeddings (all-MiniLM-L6-v2), quality scoring, training pipeline |
| `feedback/` | Self-learning loop, pattern promotion, feedback-boosted search |
| `quality/` | Anti-generic rules, diversity tracking |
| `artifacts/` | Generated artifact storage and learning loop |
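To illustrate the kind of lookup the registry performs, here is a minimal keyword-overlap search over an in-memory snippet list. This is a sketch only: the real `registry/` module's ranking (embeddings, feedback boosting) is more sophisticated, and the `Snippet` shape and function names here are hypothetical.

```typescript
// Hypothetical snippet shape for illustration; the real registry schema differs.
interface Snippet {
  id: string;
  tags: string[];
}

// Score a snippet by how many query tokens appear among its tags.
function scoreSnippet(queryTokens: string[], snippet: Snippet): number {
  return queryTokens.filter((t) => snippet.tags.includes(t)).length;
}

// Rank matching snippets by score, best first.
function searchSnippets(query: string, registry: Snippet[], limit = 5): Snippet[] {
  const tokens = query.toLowerCase().split(/\s+/);
  return registry
    .map((s) => ({ snippet: s, score: scoreSnippet(tokens, s) }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((r) => r.snippet);
}

const demoRegistry: Snippet[] = [
  { id: 'hero-gradient', tags: ['hero', 'section', 'landing'] },
  { id: 'pricing-table', tags: ['pricing', 'section'] },
  { id: 'fade-in', tags: ['animation', 'entrance'] },
];

console.log(searchSnippets('hero section', demoRegistry).map((s) => s.id));
// 'hero-gradient' matches both tokens, 'pricing-table' only one
```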
## LLM Providers

Built-in multi-provider support with auto-fallback:

```ts
import { createProviderWithFallback } from '@forgespace/siza-gen';

// Tries Ollama first (local), falls back to OpenAI/Anthropic/Gemini
const provider = await createProviderWithFallback();
```

Supports: Ollama (local), OpenAI, Anthropic, Gemini (via the OpenAI adapter).
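The auto-fallback pattern can be sketched as "try each provider in order, use the first one that is reachable." This is an illustrative sketch, not the package's actual implementation; the `Provider` interface and `isAvailable`/`complete` methods are assumed names.

```typescript
// Hypothetical provider interface for illustration.
interface Provider {
  name: string;
  isAvailable(): Promise<boolean>;
  complete(prompt: string): Promise<string>;
}

// Walk the candidate list and return the first healthy provider.
async function pickProvider(candidates: Provider[]): Promise<Provider> {
  for (const p of candidates) {
    if (await p.isAvailable()) return p;
  }
  throw new Error('No LLM provider available');
}

// Mock providers: the local Ollama is "down", OpenAI is "up".
const ollama: Provider = {
  name: 'ollama',
  isAvailable: async () => false,
  complete: async (prompt) => `ollama: ${prompt}`,
};
const openai: Provider = {
  name: 'openai',
  isAvailable: async () => true,
  complete: async (prompt) => `openai: ${prompt}`,
};

pickProvider([ollama, openai]).then((p) => console.log(p.name)); // "openai"
```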
## Brand Integration

Transform branding-mcp tokens into design context:

```ts
import { brandToDesignContext } from '@forgespace/siza-gen';

const designContext = brandToDesignContext(brandIdentity);
```
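Conceptually, this adapter maps brand tokens onto the fields generators consume. The sketch below uses hypothetical `BrandIdentity` and `DesignContext` shapes to show the kind of mapping involved; the real types live in branding-mcp and `@forgespace/siza-gen` and will differ.

```typescript
// Hypothetical input/output shapes, for illustration only.
interface BrandIdentity {
  colors: { primary: string; secondary: string };
  typography: { heading: string; body: string };
}

interface DesignContext {
  palette: Record<string, string>;
  fonts: Record<string, string>;
}

// Flatten brand tokens into the fields a code generator would consume.
function toDesignContext(brand: BrandIdentity): DesignContext {
  return {
    palette: { primary: brand.colors.primary, secondary: brand.colors.secondary },
    fonts: { heading: brand.typography.heading, body: brand.typography.body },
  };
}

const ctx = toDesignContext({
  colors: { primary: '#0ea5e9', secondary: '#f97316' },
  typography: { heading: 'Inter', body: 'Inter' },
});
console.log(ctx.palette.primary); // "#0ea5e9"
```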
## Python ML Sidecar

An optional Python FastAPI sidecar handles compute-intensive ML operations. When it is unavailable, the system gracefully degrades to Transformers.js and heuristics.

```bash
cd python && pip install -e ".[dev]"
python -m uvicorn siza_ml.app:app --port 8100
```

Or via npm:

```bash
npm run sidecar:start  # Launch Python sidecar
npm run sidecar:test   # Run Python tests (41 tests)
```
| Endpoint | Description |
|---|---|
| `POST /embed` | Sentence-transformer embeddings |
| `POST /embed/batch` | Batch embeddings |
| `POST /vector/search` | FAISS k-NN similarity search |
| `POST /score` | LLM-based quality scoring |
| `POST /enhance` | LLM-based prompt enhancement |
| `POST /train/start` | LoRA fine-tuning via PEFT |
| `GET /health` | Liveness check |
| `GET /metrics/report` | ML observability metrics |
Fallback chain: Python sidecar → Transformers.js/local LLM → heuristics.
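The degradation behavior amounts to trying each tier in order and returning the first success. The generic sketch below illustrates that pattern with mock backends; the package's real tier logic (sidecar, Transformers.js, heuristics) is internal and the names here are assumed.

```typescript
type Embedder = (text: string) => Promise<number[]>;

// Try each backend in order; return the first successful result.
async function embedWithFallback(text: string, chain: Embedder[]): Promise<number[]> {
  let lastError: unknown;
  for (const embed of chain) {
    try {
      return await embed(text);
    } catch (err) {
      lastError = err; // remember the failure, fall through to the next tier
    }
  }
  throw lastError ?? new Error('no embedder succeeded');
}

// Mock tiers: the "sidecar" is unreachable; the heuristic always works.
const sidecar: Embedder = async () => {
  throw new Error('ECONNREFUSED');
};
const heuristic: Embedder = async (t) => [t.length, t.split(/\s+/).length];

embedWithFallback('hello world', [sidecar, heuristic]).then((v) => console.log(v));
// falls through to the heuristic tier
```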
## Development

```bash
npm install && npm run build
npm test                # 424 tests, 21 suites
npm run validate        # lint + format + typecheck + test
npm run registry:stats  # Report snippet counts
```
## License

MIT