This project is a simple frontend for animagine-xl-3.1 that I built in order to test out a simple tag autocomplete/correction system. If you haven't already set up the project, see the setup guide below.
You can run the project automatically with:

```
./run.sh
```

or on Windows:

```
run.bat
```

or alternatively by running the backend script directly:

```
cd backend
python main.py
```
The program accepts a `--device` flag to set the device that inference is run on. It tries to use `mps` (Apple Silicon) by default and falls back to `cpu` if that is unavailable. Use `cuda` to run on NVIDIA GPUs. You will probably need around 16GB of VRAM to run the program.
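The backend's actual flag handling isn't shown in this README, but the device-selection behaviour described above (honour an explicit `--device`, otherwise prefer `mps` and fall back to `cpu`) could be sketched roughly like this. The `pick_device` function and the `mps_available` parameter are illustrative stand-ins, not names from the project; a real implementation would query something like PyTorch's availability checks instead:

```python
import argparse

def pick_device(requested, mps_available):
    """Resolve the --device flag: honour an explicit request,
    otherwise prefer mps (Apple Silicon) and fall back to cpu."""
    if requested is not None:
        return requested  # the user asked for a specific device, e.g. cuda
    return "mps" if mps_available else "cpu"

parser = argparse.ArgumentParser()
parser.add_argument("--device", choices=["cpu", "mps", "cuda"], default=None)
args = parser.parse_args(["--device", "cuda"])

print(pick_device(args.device, mps_available=False))  # prints "cuda"
print(pick_device(None, mps_available=True))          # prints "mps"
print(pick_device(None, mps_available=False))         # prints "cpu"
```

Keeping the fallback logic in one small function like this makes the default behaviour easy to test independently of any GPU hardware.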
To set up the project automatically:

```
git clone https://github.com/e74000/diffusion_frontend_thing
cd diffusion_frontend_thing
./init.sh
```

(or `init.bat` for Windows users). You may need to make the script executable first. Then start the app with `./run.sh` (or `run.bat` for Windows users); again, you may need to make the script executable.

To set up manually, first clone the repository:

```
git clone https://github.com/e74000/diffusion_frontend_thing
cd diffusion_frontend_thing
```
Build the frontend:

```
cd frontend
npm install
npm run build
cd ..
```

Then set up the backend:

```
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

On Windows, activate the virtual environment with `venv\Scripts\activate` instead.

Check that `safe.h5` exists in the backend directory. If it is missing, fetch it with:

```
curl -O https://r2.e74000.net/diffusion_frontend_thing/safe.h5
```

Finally, start the backend:

```
python main.py
```
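The "check that safe.h5 exists, fetch it if missing" step can also be automated. This is a sketch, not a script that ships with the repo; `ensure_weights` is a hypothetical helper built on Python's standard `urllib`:

```python
from pathlib import Path
from urllib.request import urlretrieve

# URL from the setup guide above
WEIGHTS_URL = "https://r2.e74000.net/diffusion_frontend_thing/safe.h5"

def ensure_weights(path=Path("safe.h5")):
    """Download safe.h5 into the backend directory if it isn't there yet."""
    if not path.exists():
        print(f"{path} missing, fetching from {WEIGHTS_URL} ...")
        urlretrieve(WEIGHTS_URL, path)  # saves the response body to `path`
    return path
```

Calling `ensure_weights()` at backend startup would make the manual check unnecessary; if the file is already present, the function returns immediately without touching the network.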