Opinionated SvelteKit-based LLM frontend for LangGraph server.
https://svelte-langgraph-demo.synergyai.nl/
The monorepo uses a single .env file at the root to configure both frontend and backend. Copy the example file and update it with your values:
```bash
cp .env.example .env
```
The .env file is organized into sections:
Common Variables:
- `AUTH_OIDC_ISSUER` - Your OIDC provider's issuer URL (e.g., `http://localhost:8080` for the local mock)

Backend Variables:

- `OPENAI_API_KEY` - Your OpenAI-compatible API key (e.g., OpenAI, OpenRouter)
- `OPENAI_BASE_URL` - OpenAI-compatible API base URL (optional, defaults to OpenAI)
- `CHAT_MODEL_NAME` - OpenAI-compatible model to use (defaults to `gpt-4o-mini`)
- `LANGSMITH_API_KEY` - Your LangSmith API key for tracing (optional)
- `LANGSMITH_ENDPOINT` - LangSmith endpoint URL (optional, defaults to the EU region)

Frontend Variables:

- `AUTH_TRUST_HOST` - Enable auth trust host (set to `true` for development)
- `AUTH_OIDC_CLIENT_ID` - Your OIDC client ID (e.g., `svelte-langgraph`)
- `AUTH_OIDC_CLIENT_SECRET` - Your OIDC client secret
- `AUTH_SECRET` - Random string for session encryption (generate with `npx auth secret`)
- `PUBLIC_LANGGRAPH_API_URL` - URL of your LangGraph server (typically `http://localhost:2024`)
- `PUBLIC_SENTRY_DSN` - Public DSN for Sentry error tracking (optional)

This application supports multiple OpenAI-compatible providers. Configure your preferred provider using the environment variables above.
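Putting these together, a minimal `.env` for local development against the mock OIDC provider might look like the following (the values are illustrative; the client ID and secret match the mock provider defaults described in the local development section):

```bash
# Common
AUTH_OIDC_ISSUER=http://localhost:8080

# Backend
OPENAI_API_KEY=your_openai_api_key
CHAT_MODEL_NAME=gpt-4o-mini

# Frontend
AUTH_TRUST_HOST=true
AUTH_OIDC_CLIENT_ID=svelte-langgraph
AUTH_OIDC_CLIENT_SECRET=secret
AUTH_SECRET=replace_with_output_of_npx_auth_secret
PUBLIC_LANGGRAPH_API_URL=http://localhost:2024
```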
To use OpenAI directly (no additional configuration needed):
```bash
# .env
OPENAI_API_KEY=your_openai_api_key
# OPENAI_BASE_URL not needed for OpenAI (uses default)
CHAT_MODEL_NAME=gpt-4o-mini # Default OpenAI model
```
Popular OpenAI models:
- `gpt-4o-mini` - Fast, cost-effective model (default)
- `gpt-4o` - Most capable model
- `gpt-3.5-turbo` - Legacy model

OpenRouter provides access to multiple AI models, including free options:
```bash
# .env
OPENAI_API_KEY=your_openrouter_api_key
OPENAI_BASE_URL=https://openrouter.ai/api/v1
CHAT_MODEL_NAME=x-ai/grok-4-fast:free # Free Grok model
```
Popular OpenRouter models:
- `x-ai/grok-4-fast:free` - Free Grok model
- `meta-llama/llama-3.2-3b-instruct:free` - Free Llama model
- `anthropic/claude-3.5-sonnet` - Claude 3.5 Sonnet (paid)

For local AI models using Ollama:
```bash
# .env
OPENAI_API_KEY=ollama # Can be any value for local usage
OPENAI_BASE_URL=http://localhost:11434/v1
CHAT_MODEL_NAME=llama3.2 # Your local Ollama model
```
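Whichever provider you choose, the backend resolves these settings from the environment. The helper below is an illustrative sketch (not the actual backend code) showing how the documented defaults apply:

```python
import os


def chat_model_config() -> dict:
    """Resolve OpenAI-compatible provider settings from the environment.

    Mirrors the defaults documented above: CHAT_MODEL_NAME falls back to
    gpt-4o-mini, and OPENAI_BASE_URL is optional (None means the official
    OpenAI endpoint is used).
    """
    return {
        "api_key": os.environ["OPENAI_API_KEY"],        # required
        "base_url": os.environ.get("OPENAI_BASE_URL"),  # optional override
        "model": os.environ.get("CHAT_MODEL_NAME", "gpt-4o-mini"),
    }
```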
For local development and testing, the project includes a mock OIDC provider using oidc-provider-mock. This lightweight Python-based mock server simulates a real OIDC provider, allowing you to develop and test authentication flows without needing to set up a full OAuth2/OIDC provider.
What it does:
- Serves the standard OIDC discovery document (`.well-known/openid-configuration`)
- Integrates with `moon :dev` for seamless development

Configuration:

- Issuer URL: `http://localhost:8080`
- Client ID: `svelte-langgraph`
- Client secret: `secret`
- Test user: `test-user` (subject claim in JWT)

The following command ensures dependencies are installed and starts dev servers for the frontend, backend, and OIDC mock provider, with hot reload:
```bash
moon :dev :oidc-mock
```
This automatically starts:
- Frontend at `http://localhost:5173`
- LangGraph backend at `http://localhost:2024`
- OIDC mock provider at `http://localhost:8080` (for local authentication)

Make sure to configure your `.env` file to point to the OIDC mock provider (see the Configuration section above).
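Once it is running, you can sanity-check the mock provider by fetching its discovery document (for example with `curl http://localhost:8080/.well-known/openid-configuration`). The helper below is an illustrative sketch, not repo code, of how that URL derives from `AUTH_OIDC_ISSUER`:

```python
def discovery_url(issuer: str) -> str:
    """Build the OIDC discovery URL for a given issuer.

    Per OIDC Discovery, provider metadata is served at
    /.well-known/openid-configuration under the issuer URL.
    """
    return issuer.rstrip("/") + "/.well-known/openid-configuration"
```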
Run all checks (linting, type checking, formatting, building, unit and E2E tests):
```bash
moon check --all
```
This currently requires Docker to be running for the LangGraph server build.
The backend uses LangGraph for AI workflow orchestration.
The frontend is built with SvelteKit and modern web tooling.
End-to-end tests are written using Playwright with fixtures and page object models. They live in their own project in e2e/.
They can be run with:
```bash
moon e2e:test
```
Or, interactively, with:
```bash
moon e2e:test-ui
```
To run a Docker build of the project, use Docker Compose:
```bash
docker compose build
```
To run it:
```bash
docker compose up [--build]
```
For now, we do not run the backend in Docker. To test against the dev backend, you need to make it reachable from the Docker container and tell the container where to find it:
```bash
moon backend:dev -- --host 0.0.0.0
```
And in a different terminal:
```bash
PUBLIC_LANGGRAPH_API_URL=http://host.docker.internal:2024 docker compose up --build
```
This project uses Paraglide-JS for type-safe internationalization. Paraglide offers a developer-friendly, fully type-checked way to manage translations.
All translations are stored in `apps/frontend/messages/`, with one JSON file per locale (e.g., `en.json`, `nl.json`).
Each file contains key-value pairs for all UI text:
```json
{
  "$schema": "https://inlang.com/schema/inlang-message-format",
  "hello_world": "Hello, {name}!",
  "local_name": "English"
}
```
The `local_name` key is special: it defines how each language refers to itself in the language switcher.
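As an illustration, a hypothetical `fr.json` in the same format could look like this (the French strings below are examples, not files that ship with the repo):

```json
{
  "$schema": "https://inlang.com/schema/inlang-message-format",
  "hello_world": "Bonjour, {name} !",
  "local_name": "Français"
}
```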
To add a new language:

1. Create a new JSON file in `apps/frontend/messages/` (e.g., `fr.json` for French)
2. Copy the keys from `en.json` and translate all values
3. Add the locale to `apps/frontend/project.inlang/settings.json`:

```json
{
  "locales": ["en", "nl", "hi", "fr"]
}
```
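When translating, it is easy to miss a key. The script below is a small sanity-check sketch (not part of the repo; the directory path matches the layout described above) that compares each locale file against `en.json`:

```python
import json
from pathlib import Path


def missing_keys(messages_dir: str, reference: str = "en.json") -> dict:
    """Report message keys present in the reference locale but missing elsewhere.

    Expects one JSON file per locale in `messages_dir`, as in
    apps/frontend/messages/.
    """
    base = Path(messages_dir)
    ref_keys = set(json.loads((base / reference).read_text()))
    report = {}
    for path in sorted(base.glob("*.json")):
        if path.name == reference:
            continue
        missing = sorted(ref_keys - set(json.loads(path.read_text())))
        if missing:
            report[path.name] = missing
    return report
```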
That's it! The language will automatically appear in the language switcher with the name specified in local_name.