Multiple providers: plug in your API keys (stored entirely locally) and you're good to go
Tool use. Works with OpenAI, Anthropic, and Groq models. Parallel tool calls are supported.
Tools are defined in server/toolfns/toolfns.go. You only need to write functions. The function comment is the description the model receives, so it knows when to use each tool. Click the Sync button in the web UI to refresh your tools.
Multimodal input: upload, paste, or share links to images
Image generation using DALL-E 3
Multi-shot prompting. Also edit, delete, regenerate messages, whatever. The world is your oyster
Pre-filled responses (where supported by provider)
Support for all available models across all providers
Change model mid-conversation
Conversation sharing (if you choose to share a conversation, it must be stored on an external server so the share link can be made available. Self-hosted share options coming soon. No, I will not view any of your stuff.)
Branching conversation history (like the left-right ChatGPT arrows that you can click to go back to a previous response)
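As a sketch of the tool mechanism described above (you write a Go function and its comment becomes the tool description the model sees), a minimal tool might look like the following. It is shown here as a standalone program so it runs on its own; in the repository it would live alongside the functions in server/toolfns/toolfns.go, and the function name here is illustrative, not taken from the project:

```go
package main

import (
	"fmt"
	"strings"
)

// WordCount returns the number of whitespace-separated words in text.
// In lluminous, a comment like this one is what the model receives as
// the tool's description, so it knows when to call the tool.
func WordCount(text string) int {
	return len(strings.Fields(text))
}

func main() {
	fmt.Println(WordCount("count these words")) // prints 3
}
```

After adding a function like this and recompiling, clicking Sync in the web UI would pick it up as an available tool.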
If you don't want to use tools, you don't need to install anything. A hosted instance is available at: https://llum.chat
If you want to use tools, proceed below.
The server and client are available prebuilt as a single binary. Download prebuilt packages from the releases page.
Download the binary for your platform, then run it, which will start both the client and the server:
./lluminous-darwin-amd64
Running at http://localhost:8081
Open the link in your browser and you're good to go!
If you want to build your own tools and recompile into a single client+server binary, download dist-client.tar.gz from the releases page, unzip it into server/dist-client, then run:
go build -tags release
This gets you a new binary that contains the tools you just added and works just like before.
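Assuming dist-client.tar.gz has been downloaded into the current directory, the rebuild flow above might look like this (a sketch of the commands, not a tested script; adjust paths to your setup):

```shell
# Unpack the prebuilt client into the server tree
mkdir -p server/dist-client
tar -xzf dist-client.tar.gz -C server/dist-client

# Rebuild the combined client+server binary with your tools included
cd server
go build -tags release

# Run the new binary as before
./server
```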
Alternatively, you can proceed below with a full setup of both the client and server.
Client: run
npm i && npm run dev
The client will be accessible at http://localhost:5173.

Server: run
cd server && go generate ./... && go build && ./server -password chooseapassword -llama "path/to/llama.cpp"
The -llama flag is optional. The server will be accessible at http://localhost:8081. You can plug this address into the server address field in the chat UI, along with the password you selected.