
Rust Svelte Setup

An exploratory Rust + Svelte setup using a Rust backend with microservices.


This project explores a standard setup for a microservice backend in Rust. The main focus is on backend architecture with simple CRUD operations; there is no event-driven architecture. The guiding principles are simplicity, type safety and testability.

Architecture

Services

The backend consists of Rust microservices. A client's request always reaches the gateway service first, where it is authenticated and forwarded to the respective microservice. The gateway exposes a RESTful HTTP server (axum). Within the backend, communication happens over gRPC (tonic). Each microservice has its own protobuf file defining its service and models.

Microservice structure

A typical microservice (see dummy) will have the following files:

  • main.rs: setup (read env variables, open db connection) and run the service
  • lib.rs: exposes service boundaries (such as proto.rs) for other microservices (see Microservice boundaries)
  • server.rs: implements the gRPC endpoints and service logic
    • Each endpoint typically gets its own file (e.g., get_entity.rs)
  • db.rs: database/repository layer for CRUD operations
  • proto.rs: generated code from the protobuf definitions (does not need to be checked into git)
  • utils.rs: shared methods between endpoints, models, etc.
  • error.rs: error types for endpoints and database operations
  • client.rs: gRPC client implementation + service mocks (auto-generated code)

The architecture decouples the database/repository layer from the service logic. See also Master hexagonal architecture in Rust.
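As a minimal sketch of that decoupling (all names here are illustrative, not taken from the repo), the service logic can depend on a repository trait rather than a concrete database client:

```rust
use std::collections::HashMap;

/// The "port" the service logic depends on; db.rs would provide the
/// real implementation on top of postgres.
trait EntityRepo {
    fn get_name(&self, id: u32) -> Option<String>;
}

/// In-memory implementation, handy as a fake in unit tests.
struct InMemoryRepo {
    rows: HashMap<u32, String>,
}

impl EntityRepo for InMemoryRepo {
    fn get_name(&self, id: u32) -> Option<String> {
        self.rows.get(&id).cloned()
    }
}

/// Endpoint logic written against the trait, not a concrete database.
fn get_entity_label(repo: &impl EntityRepo, id: u32) -> String {
    repo.get_name(id).unwrap_or_else(|| "unknown".to_string())
}
```

In tests the trait is satisfied by the in-memory fake; in production the same logic runs against the real database layer.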

Microservice boundaries (lib.rs)

Microservices must have access to the API layer of other microservices, which means they need the proto-generated clients and request/response messages of those services. There are two common solutions: compile all protos into a common proto library that every microservice includes, or compile each service's proto as part of the service itself and expose it in lib.rs. This setup uses the second solution. It avoids introducing a shared proto library, and additionally each service can decide which parts of its proto it wants to expose. Note: lib.rs should not expose more than other services need, so it usually exposes only proto.rs, in full or in part.
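As an illustration (module and package names here are hypothetical, not the repo's actual code), a service's lib.rs might expose nothing but the generated proto code:

```rust
// lib.rs -- hypothetical sketch: re-export only the generated proto
// code so other services can use the client and message types, while
// server.rs, db.rs etc. stay private to this crate.
pub mod proto {
    // Generated by tonic-build from this service's protobuf file;
    // the package name "dummy" is an assumption for this sketch.
    tonic::include_proto!("dummy");
}
```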

Database

I use tokio-postgres for database access and deadpool-postgres for connection pooling. I tried sqlx with compile-time checked SQL statements, but found it caused more problems than it solved for me. To me, a plain uncompiled SQL statement with good unit testing is the way to go.
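A hedged sketch of what such a repository function might look like (the table, the Entity type and the error handling are illustrative, not the repo's actual code):

```rust
use deadpool_postgres::Pool;

pub struct Entity {
    pub id: i64,
    pub name: String,
}

// Plain, uncompiled SQL executed through a pooled connection.
// Correctness of the statement is covered by unit tests against a
// real database rather than by compile-time checking.
pub async fn get_entity(
    pool: &Pool,
    id: i64,
) -> Result<Option<Entity>, Box<dyn std::error::Error>> {
    let client = pool.get().await?;
    let row = client
        .query_opt("SELECT id, name FROM entities WHERE id = $1", &[&id])
        .await?;
    Ok(row.map(|r| Entity {
        id: r.get(0),
        name: r.get(1),
    }))
}
```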

Shared dependencies (workspace)

Microservices have a lot of dependencies in common, such as tonic, prost, tokio, serde etc. This can lead to drift in dependency versions, where microservice A depends on a different version of package X than microservice B. The solution is to put all microservices in a workspace and define the shared dependencies as workspace dependencies.
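In Cargo.toml terms this looks roughly like the following (member paths and versions are illustrative):

```toml
# Root Cargo.toml: versions are declared once for the whole workspace.
[workspace]
members = ["services/gateway", "services/dummy"]

[workspace.dependencies]
tokio = { version = "1", features = ["full"] }
tonic = "0.12"
prost = "0.13"
serde = { version = "1", features = ["derive"] }
```

Each microservice then inherits the pinned version with e.g. `tokio = { workspace = true }` in its own Cargo.toml, so all members stay on the same version.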

Deployment of microservices

Deploy a single microservice (docker)

The Dockerfile for each microservice is autogenerated using the scripts/docker-gen script. This approach ensures that the Dockerfile only includes the microservice itself and the code from any services or packages it depends on. This is critical for optimal caching, especially when using cargo-chef to separate dependency compilation from service code builds.

All microservices of the backend are deployed together with docker compose.

Cache external dependencies between docker builds (cargo-chef)

This setup uses cargo-chef to split the build into two steps:

  1. Compile all external dependencies (which change rarely)
  2. Compile the microservice's actual binary

This separation allows Docker to cache the dependency layer, so rebuilding is much faster when only your service code changes.
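The two steps typically map onto a multi-stage Dockerfile like this hedged sketch (base images and the binary name are assumptions; the autogenerated Dockerfiles in the repo may differ):

```dockerfile
FROM rust:1 AS chef
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Step 1: build only external dependencies -- this layer is cached
# until the recipe (i.e. the dependency set) changes.
RUN cargo chef cook --release --recipe-path recipe.json
# Step 2: build the microservice's actual binary.
COPY . .
RUN cargo build --release --bin dummy

FROM debian:bookworm-slim
COPY --from=builder /app/target/release/dummy /usr/local/bin/dummy
ENTRYPOINT ["/usr/local/bin/dummy"]
```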

I use a custom version of cargo-chef (not the main release), because of a fix I contributed (PR #324) that minimizes the recipe for workspaces. With this fix, a workspace member (microservice or package) will only rebuild if one of its dependencies changes, instead of rebuilding too often as before.

Alternative docker strategy

At the moment I build the binary within the docker build process. For Rust images this can be very slow 🐌. I put a lot of effort into caching everything optimally to reduce this time, but if a central dependency changes it can still be a pain. An alternative would be to build the binary outside of docker and copy it into a minimal docker image (e.g., scratch or alpine). If I'm honest, this sounds like the more scalable approach. But there is something more elegant about building everything within docker.

CI/CD

This project currently does not have a CI/CD pipeline set up, but you definitely should add one. I've just not gotten around to doing it yet.

Authentication

Authentication is hand-rolled using information from lucia and implements OAuth login with Google and GitHub. This is not production-grade security; I'm not a security expert. Really, do not use this for your super-private production app!

Protos

Communication in the backend is done via gRPC. Proto files are compiled into Rust and TypeScript code, so the backend can share request/response models with the frontend.
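For illustration, a per-service proto file might look like this (service and message names are hypothetical):

```protobuf
syntax = "proto3";
package dummy.v1;

message GetEntityRequest { int64 id = 1; }
message GetEntityResponse { Entity entity = 1; }
message Entity {
  int64 id = 1;
  string name = 2;
}

// Compiled into a Rust server/client (tonic) and TypeScript types,
// so both sides share the same request/response models.
service DummyService {
  rpc GetEntity(GetEntityRequest) returns (GetEntityResponse);
}
```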

Routing

I use Traefik as a reverse proxy to route requests to the backend or the frontend. Setting it up was straightforward; at least I don't remember any major issues.

Testing

Unit tests

For unit tests I use rstest for table-driven testing. This makes it easy to cover multiple scenarios.
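A small sketch of the rstest style (the function under test is made up for this example): each #[case] attribute expands into its own generated test.

```rust
use rstest::rstest;

fn clamp_percent(v: i32) -> i32 {
    v.max(0).min(100)
}

// One test function, three generated test cases.
#[rstest]
#[case(-5, 0)]
#[case(50, 50)]
#[case(120, 100)]
fn clamps_into_range(#[case] input: i32, #[case] expected: i32) {
    assert_eq!(clamp_percent(input), expected);
}
```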

Database unit tests

Database unit tests use testcontainers to spin up a real postgres database.

Integration tests

Integration tests also use testcontainers to spin up all required services. These tests are located in services/gateway/tests and check the interactions between microservices in a realistic environment.

Tracing

I use OpenTelemetry to instrument and collect traces. The traces are sent to Jaeger by default, but this can be swapped with other OpenTelemetry compatible backends.

Inter-service tracing

Traces are propagated between microservices:

  • Sending: interceptors inject the current trace context and trace_id into the request metadata.
  • Receiving: middleware extracts the context and records the trace_id.
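On the sending side, the interceptor idea can be sketched like this with tonic (a real implementation would pull the id from the current OpenTelemetry span context rather than hard-coding it):

```rust
use tonic::{metadata::MetadataValue, Request, Status};

// Hypothetical client interceptor: attach a trace id to outgoing
// request metadata so the receiving service's middleware can pick
// it up and continue the trace.
fn inject_trace_id(mut req: Request<()>) -> Result<Request<()>, Status> {
    // Placeholder id; in practice this comes from the active span.
    let trace_id = MetadataValue::from_static("00000000000000000000000000000001");
    req.metadata_mut().insert("trace-id", trace_id);
    Ok(req)
}
```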

Further Reading

How to run

  1. Copy .env.example to .env and adjust as needed.
  2. Generate code and Dockerfiles:
    just generate
    
  3. Build and deploy the backend:
    just build-services
    just deploy
    
  4. To run the app locally (in the app directory):
    npm run dev -- --open
    
    Or build and deploy the app:
    just build-app
    just deploy-app
    

Do I promise this works flawlessly? No. There might be one or two steps you have to do manually. Feel free to let me know.

But now, be real: how does it compare to Go?

I use Go professionally, so I think I can give a bit of perspective. The TL;DR is: for large software projects I'd still choose Go for the majority of services, but I'd definitely consider Rust for performance-critical parts (see this good read: https://engineering.grab.com/counter-service-how-we-rewrote-it-in-rust). So having a standard Rust setup in the toolkit is a win. For a hobby project like this one? I just prefer writing Rust. It's like solving puzzles for me.

What I love about Rust:

  • I just love Rust more than Go.
  • Type safety. In Go it's easy to forget to pass values to structs, and let's be honest, who creates explicit constructors for everything?
  • Performance: it's straightforward to write fast Rust code. Is it always needed for a web app that has 1 user? No, but it's nice not to worry about it and to see traces under 1 ms.
  • Nil pointer exception: In Go it’s just a tad too easy to get a nil pointer exception and crash your microservice. Want to access a nested proto struct but haven’t checked the parent for nil? Boom...
  • Compile with features: for example you can gate testutils behind a compile time feature. In Go, it’s not straightforward to share testutils without polluting the public API between services.
  • Error handling: I don’t mind Go’s verbosity, but Rust has more batteries here with anyhow and thiserror. It just clicks more for me even though I haven’t fully found my groove.
  • No garbage collection: Just one problem less to care for.
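The nil-pointer point above can be made concrete with a small self-contained example (the structs mirror how prost wraps nested messages in Option; the names are made up):

```rust
// Hypothetical structs shaped like generated proto messages, where
// every nested message field is an Option.
struct User {
    profile: Option<Profile>,
}

struct Profile {
    display_name: Option<String>,
}

// The compiler forces every None case to be handled; there is no way
// to "forget the nil check" and crash at runtime.
fn display_name(user: &User) -> &str {
    user.profile
        .as_ref()
        .and_then(|p| p.display_name.as_deref())
        .unwrap_or("anonymous")
}
```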

The negatives:

  • The big one is compile time/docker time. Rebuilding a full service from scratch in Docker on a mac can take up to 10 minutes. Want to parallelize this over 10 microservices? Your memory is killed. I put a lot of effort into optimizing caching, using cargo-chef, fixing cargo-chef, autogenerating optimal Dockerfiles (see architecture). But here Go just wins. How does it compile so fast? Maybe I just need to crank some compiler flags in Rust to make it bearable, but I haven’t gotten around to that.
  • Table testing is a bit cumbersome in Rust. I use rstest and really like it, but it’s macro-based, which always breaks my formatting in nvim...
  • gRPC gateway: I thought this was a standard gRPC thing. Was surprised Rust doesn’t have a good gRPC gateway. Maybe tonic adds one at some point? (https://github.com/hyperium/tonic/issues/332)
  • HTTP/gRPC middleware: Took me quite some time to write gRPC middleware in Rust. That’s a lot easier in Go, but once you figure out the Rust/tower way, it’s kinda fun.
  • I like how easy it is to onboard new people in Go while in Rust I’d probably spend days explaining generics, lifetimes, async traits and would fumble most of the explanations. What's that Pin thing again?
  • The test harness in Go is so much simpler if you want to do some pre/post setup for all tests. For example, in database tests I want to spin up a single postgres container for all tests and destroy it afterwards. That's straightforward in Go, but in Rust I really struggled and only managed with a hacky approach that kills the docker containers with a system call. I am not the only one with this problem: Single container start for whole integration test file.

Where is it used so far?

A backend with a similar setup to this one powers my personal website for tracking running data: runaround.world (feel free to give it a try, but it's early stage; it only supports data from Polar and Strava at the moment). It works really well. Rust + Postgres delivers the performance you'd expect, and in practice there's no need to optimize beyond just writing sane Rust code. So don't worry about a few clones here and there. I like the type safety that Rust provides; there are rarely any issues I have to debug after it compiles. And when there are, tracing helps track them down quickly.

Similar Projects

There are a few similar projects from which I drew inspiration; however, there weren't as many as I expected. Here are some of them:
