Welcome to Arcadia, the AI Medieval Drama Simulator. This project is an experiment in using generative AI technologies to create a narrative experience, while also, hopefully, serving as an example to the larger community of best practices for using various generative AI APIs.
Note that the latest stable version is always hosted at https://www.generativestorytelling.ai
Arcadia highlights the use of LLMs to generate animated narratives, detailing the drama in a royal court of your choosing. List the names of your friends and family, and watch as they betray, poison, marry, and stab one another to take control of the Kingdom!
- Narration is provided by GPT-3.5.
- The 3D background is from Blockade Labs.
- Character portraits are generated by DALL·E.
Upcoming features include text-to-speech narration, music, sound effects, and the ability to save, share, and replay stories.
To ensure this can serve as an example for people, I have attempted to minimize third party dependencies in the code, and to also keep the build system as simple as possible. When there is a choice between doing things a fancy way (e.g. Sass, OpenAPI) or a simple way (shared directory of TypeScript files), the simple way has been chosen.
Ideally if you are familiar with TypeScript, Express, and HTML, you should be able to understand the code base.
Svelte is used on the front end. If you are not familiar with Svelte, it is a very minimal set of tooling for doing data binding in HTML; the basics can be picked up in less than an hour, with the full tutorial taking two or three hours at most. Even if you do not know Svelte, hopefully the front-end code can still be easily understood.
The code is split into three folders. The `shared` folder is symlinked into the `backend` and `frontend` folders, a nifty trick that allows sharing TypeScript types and modules between projects without having to set up a full mono-repo. If you are using an old version of Windows (pre Windows 10), you may have to manually enable symlinks for non-administrator accounts on your system.
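The symlink trick can be reproduced from a repo root roughly as follows (a sketch only; the directory layout is assumed from the description above, and the actual link names in the repo may differ):

```shell
# Illustrative layout; directory names are assumed from the README.
mkdir -p arcadia/shared arcadia/backend arcadia/frontend
echo "export type CourtMember = string;" > arcadia/shared/types.ts

# Link shared/ into both projects so each sees a single copy of the types.
ln -s ../shared arcadia/backend/shared
ln -s ../shared arcadia/frontend/shared

# The same file is now visible through both links:
ls arcadia/backend/shared/types.ts
```

Because both links resolve to the same files, editing a shared type immediately updates what the backend and frontend compilers see.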
The backend is a minimal Express.js server that demonstrates how to prompt GPT so it gives structured responses, and how to parse those responses. Two examples of fetching from GPT are included: one hitting the REST endpoint for chat, and a second showing streaming responses for chat. In both cases the results are sent from the server over a WebSocket to the front-end web client.
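As a rough illustration of the "structured responses" idea (the event format and function shape here are assumptions for the example, not the repo's actual prompt contract): the model can be asked to emit one event per line in a fixed `speaker: action: text` layout, which the server then splits into typed objects before pushing them over the WebSocket.

```typescript
// Hypothetical sketch of the parsing step; the real prompt format in the
// repo may differ. Each non-empty line becomes one structured event.
interface StoryEvent {
  speaker: string;
  action: string;
  text: string;
}

function parseOutEvents(story: string): StoryEvent[] {
  return story
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => {
      // Only the first two colons delimit fields; the text may contain more.
      const [speaker, action, ...rest] = line.split(":");
      return {
        speaker: (speaker ?? "").trim(),
        action: (action ?? "").trim(),
        text: rest.join(":").trim(),
      };
    });
}

const sample = "Queen Anne: poison: She slips hemlock into the goblet.\n";
console.log(parseOutEvents(sample));
```

Asking for a rigid line format like this keeps the parser trivial and makes malformed model output easy to detect and discard.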
To get the backend working you will need to create your own `.env` file with your `OPENAI_API_KEY` in it. If you want to save stories you'll also need to add `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. Note that, sadly, Vultr is hard-coded as the S3 provider; that needs to be extracted into the env file as well at some point.
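One common pattern for keys like these (a sketch only; `requireEnv` is a hypothetical helper, not a function in the repo) is to fail fast at startup when a required variable is missing, instead of failing mid-request:

```typescript
// Hypothetical helper: read a required key from process.env, or throw
// immediately so the server dies at startup rather than mid-request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at server startup (key names from the README):
// const openAiKey = requireEnv("OPENAI_API_KEY");
```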
The front end is a Svelte web app that gathers the names of the members of the royal court, sends them off to the backend, and, when a story is fetched, animates it in the `Dialogue.svelte` component.
Common types and utility functions live in `shared`, which is primarily used for data types that are shared by the front end and the back end.
Due to build issues on some OSes, the backend sadly now has a hard copy of the shared types; why the symlink isn't being picked up on macOS still needs to be figured out.
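As an illustration of what lives in `shared` (the type names here are made up for the example; see the actual files for the real definitions), a single module can define the message shapes that both sides import:

```typescript
// shared/messages.ts (hypothetical): one definition imported by both
// the Express backend and the Svelte frontend through the symlink.
export interface StoryRequest {
  courtMembers: string[]; // the names the player entered
}

export interface StoryChunk {
  kind: "chunk" | "done"; // streaming piece vs. end-of-story marker
  text: string;
}

// Both sides agree on the wire format simply by importing the same file.
export const example: StoryRequest = {
  courtMembers: ["Queen Anne", "Duke Henry"],
};
```

Because there is exactly one copy of these interfaces, a change to the wire format is a compile error on both sides at once, rather than a runtime surprise.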
To run the project locally, do the following:

1. In `backend`, create a file `.env` and populate it with `OPENAI_API_KEY=`, filling in your key.
2. In `shared`, `backend`, and `frontend`, run `npm install` and `npm run build`.
3. In `frontend`, enter `npm run dev`. That will start up Vite, and you can connect to http://localhost:5173/, select both checkboxes up top, and then make some drama.
   a. To alter which pregenerated story is displayed, you can change the index on line 14 of `StoryFetcherws.ts`: `const events = parseOutEvents(pregenStories[2].story);`
4. In `backend`, enter `npm run dev`. The frontend, when run locally, will automatically attempt to connect to the backend on localhost. `npm run debug` is supported on the backend to run `node --inspect`.
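The frontend's end of the WebSocket might be handled along these lines (a sketch; `StoryFetcherws.ts` does the real work, and the message shape here is an assumption for the example):

```typescript
// Hypothetical handler for streamed story messages arriving over the
// WebSocket: appends text chunks until a "done" marker is seen.
interface StoryMessage {
  kind: "chunk" | "done";
  text: string;
}

function handleStoryMessage(raw: string, lines: string[]): boolean {
  const msg: StoryMessage = JSON.parse(raw);
  if (msg.kind === "chunk") {
    lines.push(msg.text); // hand the chunk off for animation
  }
  return msg.kind === "done"; // true once the story has finished
}
```

Keeping the handler a pure function of the raw message and an output buffer makes it easy to unit test without opening a real socket.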
To run the project with Docker Compose locally:

1. Clone the repo and create a `.env` file in the project root:

```
OPENAI_API_KEY=your_key_here
AWS_ACCESS_KEY_ID=your_key_here
AWS_SECRET_ACCESS_KEY=your_key_here
WS_BASE=ws://localhost:8080
```

2. Build and start the services:

```bash
docker compose build
docker compose up -d
```

3. Access the application at http://localhost:8080
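For orientation, a `docker-compose.yml` consistent with the commands above might look roughly like this (service names, ports, and build contexts are assumptions for illustration; the repo's actual file is authoritative):

```yaml
services:
  backend:
    build: ./backend
    env_file: .env
  frontend:
    build: ./frontend
    ports:
      - "8080:80"
```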
To deploy to production on a VPS behind `traefik`:
1. Clone the repository on your VPS:

```bash
ssh user@your-vps
git clone https://github.com/yourusername/arcadia.git
cd arcadia
```

2. Create a production `.env` file:

```bash
cat > .env <<EOF
OPENAI_API_KEY=your_actual_key
AWS_ACCESS_KEY_ID=your_actual_key
AWS_SECRET_ACCESS_KEY=your_actual_key
DOMAIN=arcadia.yourdomain.com
WS_BASE=wss://arcadia.yourdomain.com
EOF
```

3. Build and start the services:

```bash
docker compose build
docker compose up -d
```

4. Verify Traefik picked it up:

```bash
docker compose logs -f
# Check the Traefik dashboard or logs to see if the routes registered
```
To deploy updates to your production instance:

1. **SSH into the VPS and pull the latest code:**

```bash
ssh user@your-vps
cd arcadia
git pull origin main
```

2. **Rebuild changed services:**

```bash
docker compose build frontend
docker compose build backend
# or rebuild everything:
docker compose build
```

3. **Recreate containers (zero-downtime update):**

```bash
docker compose up -d
```

Docker Compose automatically recreates only the containers whose image or configuration changed, leaving the rest running.

4. **Verify deployment:**

```bash
docker compose ps
docker compose logs -f frontend
docker compose logs -f backend
```

5. **Rollback if needed:**

```bash
git checkout <previous-commit-hash>
docker compose build
docker compose up -d
```