This is a very basic frontend for the local Ollama API built with SvelteKit.
(macOS) Download the zip from https://ollama.com/download and extract its contents to the Applications folder, and that's it.
The first time you run Ollama you will be prompted to "Run your first model". Copy the command, open a terminal window, and run it.
Ollama is now installed and will start automatically with the operating system. Note that Ollama unloads a model after 4 minutes of inactivity.

Important: depending on your connection speed, Ollama may take a while to start when you are running (and therefore pulling) a model for the first time.
Once the model has finished downloading, Ollama will run it locally.
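Under the hood, a frontend like this talks to the local Ollama server over its REST API (port 11434 by default). The sketch below shows a non-streaming request to the `/api/generate` endpoint; the model name `llama3.2` is only an example — use whichever model you pulled.

```typescript
// Minimal sketch of a request to the local Ollama API.
// Assumes Ollama is running on its default port (11434) and that
// the example model "llama3.2" has already been pulled.
type GenerateRequest = {
  model: string;
  prompt: string;
  stream: boolean;
};

// Build the JSON payload for a non-streaming generate call.
function buildGenerateRequest(model: string, prompt: string): GenerateRequest {
  return { model, prompt, stream: false };
}

async function generate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGenerateRequest("llama3.2", prompt)),
  });
  const data = await res.json();
  // With stream: false, the full answer arrives in the `response` field.
  return data.response;
}
```

With `stream: true` (Ollama's default) the server instead returns newline-delimited JSON chunks, which is what a chat UI would typically consume.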
Clone the repo: https://github.com/Mark2M/svollama-ai-portal
Once you've cloned the repo and installed dependencies with `npm install` (or `pnpm install` or `yarn`), start a development server:

```bash
npm run dev

# or start the server and open the app in a new browser tab
npm run dev -- --open
```
To create a production version of your app:

```bash
npm run build
```

You can preview the production build with `npm run preview`.
To deploy your app, you may need to install an adapter for your target environment.
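For example, to deploy to a plain Node server you might pick `@sveltejs/adapter-node` (one possible choice; use whichever adapter matches your target) and reference it in `svelte.config.js`:

```javascript
// svelte.config.js — example configuration, assuming adapter-node
// was installed with `npm i -D @sveltejs/adapter-node`.
import adapter from '@sveltejs/adapter-node';

/** @type {import('@sveltejs/kit').Config} */
const config = {
	kit: {
		// The adapter transforms the build output for the target platform.
		adapter: adapter()
	}
};

export default config;
```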