This project is an exploration in creating a standard setup for a microservice backend in Rust. The focus is on backend architecture and simple CRUD operations; there is no event-driven architecture. The guiding principles are simplicity, type safety, and testability.
The backend consists of Rust microservices. A client's request always reaches the gateway service first, where it is authenticated and forwarded to the responsible microservice.
The gateway exposes a RESTful HTTP server (axum). Within the backend, communication happens over gRPC (tonic). Each microservice has its own protobuf file defining its service and models.
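As a sketch of what such a per-service protobuf definition might look like (the package, service, and message names here are hypothetical, not taken from this repository):

```proto
syntax = "proto3";

package dummy.v1;

// Hypothetical service definition for a "dummy" microservice.
service DummyService {
  rpc GetEntity(GetEntityRequest) returns (GetEntityResponse);
}

message GetEntityRequest {
  string id = 1;
}

message GetEntityResponse {
  string id = 1;
  string name = 2;
}
```

tonic/prost compile this into the Rust server trait, client, and message structs; the same file can also be compiled to TypeScript for the frontend.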
A typical microservice (see dummy) has the following files:

- `main.rs`: setup (read env variables, open the DB connection) and run the service
- `lib.rs`: exposes service boundaries (such as `proto.rs`) to other microservices (see Microservice boundaries)
- `server.rs`: implements the gRPC endpoints and service logic, with one module per endpoint (e.g., `get_entity.rs`)
- `db.rs`: database/repository layer for CRUD operations
- `proto.rs`: generated code from the protobuf definitions (does not need to be checked into git)
- `utils.rs`: methods shared between endpoints, models, etc.
- `error.rs`: error types for endpoint and database operations
- `client.rs`: gRPC client implementation + service mocks (auto-generated code)

The architecture decouples the database/repository layer from the service logic. See also Master hexagonal architecture in Rust.
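A minimal sketch of that decoupling, in the hexagonal spirit: the service logic depends on a repository trait (the "port"), and the database layer is just one implementation of it. All names here (`Entity`, `EntityRepository`, `InMemoryRepo`) are illustrative, not the actual types in this repo:

```rust
use std::collections::HashMap;

// Hypothetical domain model and error type (error.rs would hold the real ones).
#[derive(Debug, Clone, PartialEq)]
struct Entity {
    id: String,
    name: String,
}

#[derive(Debug)]
enum DbError {
    NotFound,
}

// The service logic depends on this trait, not on a concrete database.
trait EntityRepository {
    fn get(&self, id: &str) -> Result<Entity, DbError>;
}

// In-memory implementation, handy for unit tests; db.rs would provide
// a Postgres-backed implementation of the same trait.
struct InMemoryRepo {
    entities: HashMap<String, Entity>,
}

impl EntityRepository for InMemoryRepo {
    fn get(&self, id: &str) -> Result<Entity, DbError> {
        self.entities.get(id).cloned().ok_or(DbError::NotFound)
    }
}

// Service logic (as it might live in server.rs) is generic over the port.
fn get_entity_name(repo: &impl EntityRepository, id: &str) -> Option<String> {
    repo.get(id).ok().map(|e| e.name)
}

fn main() {
    let mut entities = HashMap::new();
    entities.insert(
        "1".to_string(),
        Entity { id: "1".into(), name: "demo".into() },
    );
    let repo = InMemoryRepo { entities };
    assert_eq!(get_entity_name(&repo, "1").as_deref(), Some("demo"));
    assert!(get_entity_name(&repo, "2").is_none());
    println!("ok");
}
```

Swapping `InMemoryRepo` for the real Postgres repository changes nothing in the service logic, which is what makes the endpoints easy to unit test.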
## Microservice boundaries (lib.rs)

Microservices must have access to the API layer of other microservices, which means they need the proto-generated client and request/response messages of those services. This can be solved either by compiling all protos into a common proto library that every microservice includes, or by compiling each proto as part of the service it belongs to and exposing it in lib.rs.
This setup uses the second solution. It avoids introducing a shared proto library, and each service can decide which parts of its proto it wants to expose. Note: lib.rs should not expose more than other services need, so it usually exposes only the full `proto.rs` or parts of it.
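A minimal sketch of such a lib.rs, assuming a hypothetical "dummy" service (in the real service the `proto` module is generated by tonic/prost at build time; here it is stubbed so the sketch compiles on its own):

```rust
// lib.rs of a hypothetical "dummy" microservice.
// In the real service, `proto` would be the tonic/prost-generated code;
// this stub stands in for it so the example is self-contained.
pub mod proto {
    #[derive(Debug, Clone, Default, PartialEq)]
    pub struct GetEntityRequest {
        pub id: String,
    }
}

// Expose only what other services need: the generated client and
// request/response messages, nothing else.
pub use proto::GetEntityRequest;
```

Other microservices then depend on this crate and construct requests through the re-exported types, without seeing the service's internals.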
I use tokio-postgres for database access and deadpool-postgres for connection pooling. I tried sqlx with compiled SQL statements, but found it caused more problems than it solved for me. To me, a plain uncompiled SQL statement with good unit testing is the way to go.
## Workspace

Microservices have many dependencies in common, such as tonic, prost, tokio, serde, etc. This can lead to version drift, where microservice A depends on a different version of package X than microservice B. The solution is to put all microservices in a Cargo workspace and define the shared dependencies as workspace dependencies.
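For illustration, the root manifest of such a workspace might look like this (member paths and version numbers are placeholders, not the actual ones used here):

```toml
# Root Cargo.toml (member names and versions are illustrative)
[workspace]
members = ["services/gateway", "services/dummy"]
resolver = "2"

[workspace.dependencies]
tokio = { version = "1", features = ["full"] }
tonic = "0.11"
prost = "0.12"
serde = { version = "1", features = ["derive"] }

# Each member's Cargo.toml then inherits the pinned version:
#
# [dependencies]
# tokio = { workspace = true }
# serde = { workspace = true }
```

Bumping a version in one place updates every microservice at once, so the drift problem disappears.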
## Docker

The Dockerfile for each microservice is autogenerated by the scripts/docker-gen script. This ensures that each Dockerfile only includes the microservice itself and the code of the services or packages it depends on. This is critical for optimal caching, especially when using cargo-chef to separate dependency compilation from service code builds.
All microservices of the backend are deployed together with docker compose.
## cargo-chef

This setup uses cargo-chef to split the build into two steps:

1. build all dependencies (cached as their own Docker layer)
2. build the service code itself
This separation allows Docker to cache the dependency layer, so rebuilding is much faster when only your service code changes.
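Concretely, a generated Dockerfile could follow cargo-chef's documented multi-stage pattern. The image tags, binary name, and paths below are illustrative, not the actual output of scripts/docker-gen:

```dockerfile
# Stage 0: base image with cargo-chef installed
FROM lukemathwalker/cargo-chef:latest-rust-1 AS chef
WORKDIR /app

# Stage 1: compute the dependency "recipe" from the workspace
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

# Stage 2: build dependencies (cached unless the recipe changes),
# then build the service code itself
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release --bin dummy

# Stage 3: minimal runtime image
FROM debian:bookworm-slim AS runtime
COPY --from=builder /app/target/release/dummy /usr/local/bin/dummy
ENTRYPOINT ["/usr/local/bin/dummy"]
```

As long as `recipe.json` is unchanged, Docker reuses the `cargo chef cook` layer, and only the final `cargo build` runs on a service code change.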
I use a custom version of cargo-chef (not the main release), because of a fix I contributed (PR #324) that minimizes the recipe for workspaces. With this fix, a workspace member (microservice or package) will only rebuild if one of its dependencies changes, instead of rebuilding too often as before.
At the moment I build the binary within the Docker build process. For Rust images this can be very slow. I put a lot of effort into caching everything optimally to reduce this time, but if a central dependency changes it can still be a pain. An alternative would be to build the binary outside of Docker and copy it into a minimal image (e.g., scratch or alpine). If I'm honest, that sounds like the more scalable approach, but there is something more elegant about building everything within Docker.
This project currently does not have a CI/CD pipeline set up, but you definitely should add one. I've just not gotten around to doing it yet.
Authentication is hand-rolled using information from Lucia and implements OAuth login with Google and GitHub. This is not production-grade security, and I'm not a security expert. Really: do not use this for your super private production app!
Communication in the backend is done via gRPC. The proto files are compiled into both Rust and TypeScript code, so the backend can share request/response models with the frontend.
I use Traefik as a reverse proxy to route requests to the backend or the frontend. Setting it up was straightforward; at least I don't remember any major issues.
For unit tests I use rstest for table-driven testing. This makes it easy to cover multiple scenarios.
Database unit tests use testcontainers to spin up a real postgres database.
Integration tests also use testcontainers to spin up all required services. These tests are located in services/gateway/tests and check the interactions between microservices in a realistic environment.
I use OpenTelemetry to instrument and collect traces. The traces are sent to Jaeger by default, but this can be swapped with other OpenTelemetry compatible backends.
Traces are propagated between microservices, so a single request can be followed across service boundaries via its `trace_id`.

## Getting started

Copy `.env.example` to `.env` and adjust as needed. Then generate the code, build the services and deploy:

```sh
just generate
just build-services
just deploy
```
To run the frontend in dev mode (from the `app` directory):

```sh
npm run dev -- --open
```

Or build and deploy the app:

```sh
just build-app
just deploy-app
```
Do I promise this works flawlessly? No. There may be one or two steps you have to do manually. Feel free to let me know.
I use Go professionally, so I think I can give a bit of perspective. The TL;DR: for large software projects I'd still choose Go for the majority of services, but I'd definitely consider Rust for performance-critical parts (see this good read: https://engineering.grab.com/counter-service-how-we-rewrote-it-in-rust). So having a standard Rust setup in the toolkit is a win. For a hobby project like this one? I just prefer writing Rust. It's like solving puzzles for me.
What I love about Rust:

- Error handling with `anyhow` and `thiserror`. It just clicks more for me, even though I haven't fully found my groove.

The negatives:
A backend with a similar setup to this one powers my personal website for tracking running data: runaround.world (feel free to give it a try, but it's early stage; it only supports data from Polar and Strava at the moment). It works really well. Rust + Postgres delivers the performance you'd expect, and in practice there's no need to optimize beyond writing sane Rust code, so don't worry about a few clones here and there. I like the type safety Rust provides; there are rarely any issues I have to debug after it compiles, and when there are, tracing helps track them down quickly.
There are a few similar projects from which I drew inspiration, however there weren't as many as I expected. Here are some of them: