This project explores creating a standard setup for a microservice backend using Rust.
Prerequisites: This project uses Nix to manage development dependencies (Rust, Node.js, just, etc.) and direnv for automatic shell setup.
1. Copy `.env.example` to `.env` and adjust as needed.
2. Start the infrastructure (database, Traefik, Jaeger):

   ```sh
   just deploy-infrastructure
   ```

3. Build and deploy the backend services:

   ```sh
   just build-services
   just deploy-services
   ```

4. Run the frontend locally:

   ```sh
   cd app && npm run dev -- --open
   ```

5. Connect to the database:

   ```sh
   docker exec -it db-db-1 psql -U postgres -d user_db
   ```
The backend consists of Rust microservices. Client requests always reach the gateway service, where they are authenticated and forwarded to the respective microservice. The gateway exposes a RESTful HTTP server (axum). Within the backend, communication happens via gRPC (tonic). Each microservice has its own protobuf file defining its service and models.
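Since each microservice defines its service and models in its own protobuf file, a minimal definition for a hypothetical `dummy` service might look like this (a sketch; the service and message names are illustrative, not the project's actual definitions):

```proto
syntax = "proto3";

package dummy;

// Illustrative service: one CRUD-style read endpoint.
service Dummy {
  rpc GetEntity (GetEntityRequest) returns (GetEntityResponse);
}

message GetEntityRequest {
  uint32 id = 1;
}

message GetEntityResponse {
  uint32 id = 1;
  string name = 2;
}
```

tonic compiles this into a Rust server trait and client, and the same file can feed the TypeScript codegen mentioned later.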
Each microservice focuses on simple CRUD operations. The architecture decouples the database/repository layer from the service logic. If complexity grows, responsibilities can be further split (e.g., add a dedicated service layer for domain logic).
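The decoupling of the repository layer from service logic can be sketched as a trait boundary, roughly like this (a minimal stdlib-only illustration; the type and function names are hypothetical, and the real services use async traits over Postgres):

```rust
use std::collections::HashMap;

// Hypothetical entity; real services get these from proto-generated models.
#[derive(Clone, Debug, PartialEq)]
pub struct Entity {
    pub id: u32,
    pub name: String,
}

// The repository trait is the seam between service logic and storage.
pub trait EntityRepository {
    fn get(&self, id: u32) -> Option<Entity>;
    fn insert(&mut self, entity: Entity);
}

// In-memory implementation, handy for unit tests; the real one wraps Postgres.
#[derive(Default)]
pub struct InMemoryRepository {
    entities: HashMap<u32, Entity>,
}

impl EntityRepository for InMemoryRepository {
    fn get(&self, id: u32) -> Option<Entity> {
        self.entities.get(&id).cloned()
    }

    fn insert(&mut self, entity: Entity) {
        self.entities.insert(entity.id, entity);
    }
}

// Service logic depends only on the trait, not on a concrete database,
// so it can be unit-tested without infrastructure.
pub fn get_entity_name(repo: &impl EntityRepository, id: u32) -> Option<String> {
    repo.get(id).map(|e| e.name)
}
```

Because the handler only sees the trait, swapping Postgres for a mock (or splitting in a dedicated domain-service layer later) doesn't touch the endpoint code.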
A typical microservice (see dummy) has the following structure:
- `main.rs` – setup (environment variables, database connection) and service startup
- `lib.rs` – exposes service boundaries (such as `proto.rs`) for other microservices
- `handler.rs` – implements gRPC endpoints and service logic (e.g. `get_entity.rs`)
- `database/` – database/repository layer for CRUD operations
- `proto.rs` – generated code from protobuf definitions (not checked into git)
- `utils.rs` – shared methods between endpoints, models, etc.
- `error.rs` – error types for endpoints and database operations
- `client.rs` – gRPC client implementation + service mocks (auto-generated)

See also: Master hexagonal architecture in Rust
Service boundaries (`lib.rs`): Microservices need access to the API layer of other microservices—specifically the proto-generated client and request/response messages. This can be solved by:

1. creating a shared `proto` library and including it in each microservice, or
2. exposing the relevant proto modules through each service's `lib.rs`.

This setup uses the second approach. It avoids introducing a shared proto library, and each service can define which parts of the proto to expose. Note: `lib.rs` should only expose what's needed by other services—typically the full or partial `proto.rs`.
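Assuming the tonic-generated code is included via the standard `tonic::include_proto!` macro, a service's `lib.rs` can be as small as this (a sketch; the `dummy` package name is illustrative):

```rust
// lib.rs – expose only the generated proto module to other services.
pub mod proto {
    // Pulls in the code tonic-build generated from this service's protobuf file.
    tonic::include_proto!("dummy");
}
```

Other microservices then depend on this crate and use `dummy::proto::dummy_client::DummyClient` (or whatever the generated client is named) without a shared proto library.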
This project uses tokio-postgres for database access. sqlx with compiled SQL statements was tried but caused more problems than it solved. Plain SQL with good unit testing is sufficient. Connection pooling is handled by deadpool-postgres.
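A pool setup and query with `deadpool-postgres` looks roughly like this (a sketch under assumed crate versions; the connection parameters, table, and function names are illustrative):

```rust
use deadpool_postgres::{Config, Pool, Runtime};
use tokio_postgres::NoTls;

// Build a connection pool; real code would read these from the environment.
fn connect() -> Result<Pool, Box<dyn std::error::Error>> {
    let mut cfg = Config::new();
    cfg.host = Some("localhost".into());
    cfg.user = Some("postgres".into());
    cfg.dbname = Some("user_db".into());
    let pool = cfg.create_pool(Some(Runtime::Tokio1), NoTls)?;
    Ok(pool)
}

// Plain SQL with a parameterized statement instead of compile-time checked queries.
async fn fetch_name(pool: &Pool, id: i32) -> Result<String, Box<dyn std::error::Error>> {
    let client = pool.get().await?;
    let row = client
        .query_one("SELECT name FROM users WHERE id = $1", &[&id])
        .await?;
    Ok(row.get(0))
}
```

The trade-off versus sqlx is that SQL errors surface at runtime rather than compile time, which is what the database tests with testcontainers are there to catch.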
Microservices share many dependencies (tonic, prost, tokio, serde, etc.), which can lead to version drift between services. The solution is to put all microservices in a workspace and define shared dependencies as workspace dependencies.
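In practice that means declaring versions once at the workspace root and referencing them from each member, roughly like this (a sketch; the member paths and version numbers are illustrative):

```toml
# Root Cargo.toml: versions are defined exactly once.
[workspace]
members = ["services/gateway", "services/dummy"]

[workspace.dependencies]
tokio = { version = "1", features = ["full"] }
tonic = "0.11"
prost = "0.12"
serde = { version = "1", features = ["derive"] }

# services/dummy/Cargo.toml: members inherit, so versions cannot drift.
# [dependencies]
# tokio = { workspace = true }
# tonic = { workspace = true }
```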
The Dockerfile for each microservice is auto-generated using workspace-cache, a tool built specifically for this purpose. It analyzes workspace dependencies and generates optimal Dockerfiles that include only the microservice itself and its actual dependencies.
This approach uses a two-stage build: the workspace dependencies are compiled first in their own layer, and the service code itself is compiled in a second step on top of it.
This separation allows Docker to cache the dependency layer, making rebuilds much faster when only service code changes. Unlike cargo-chef, workspace-cache is designed specifically for workspaces and generates minimal, optimized Dockerfiles automatically.
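A generated Dockerfile follows a shape roughly like this (a hand-written sketch of the pattern, not workspace-cache's actual output; paths and image tags are illustrative):

```dockerfile
FROM rust:1.79 AS builder
WORKDIR /build

# Stage 1: copy only the manifests and build dependencies against a stub main,
# so this layer stays cached until a dependency actually changes.
COPY Cargo.toml Cargo.lock ./
COPY services/dummy/Cargo.toml services/dummy/
RUN mkdir -p services/dummy/src \
 && echo "fn main() {}" > services/dummy/src/main.rs \
 && cargo build --release -p dummy

# Stage 2: copy the real sources and rebuild only the service code.
COPY services/dummy/src services/dummy/src
RUN touch services/dummy/src/main.rs && cargo build --release -p dummy

FROM debian:bookworm-slim
COPY --from=builder /build/target/release/dummy /usr/local/bin/dummy
CMD ["dummy"]
```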
All backend microservices are deployed together with Docker Compose.
Currently, binaries are built within the Docker build process. For Rust images this can be slow. Significant effort has gone into optimal caching, but if a central dependency changes, it can still take a while to rebuild all images.
An alternative is building binaries outside Docker and copying them into a minimal image (e.g., scratch or alpine). This approach might be more scalable, but is not what is chosen for this setup.
Authentication is implemented following the lucia documentation and supports OAuth login with Google and GitHub.
⚠️ Do not use this in production without an audit!
Backend communication uses gRPC. Proto files are compiled into both Rust and TypeScript code, allowing the backend to share request/response models with the frontend.
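On the Rust side, compiling the proto files is typically a one-liner in each service's `build.rs` (a sketch; the proto path is illustrative, and the TypeScript output is generated by a separate tool from the same files):

```rust
// build.rs – generate Rust server/client code from the protobuf definitions.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Output lands in OUT_DIR and is pulled in via tonic::include_proto!,
    // which is why proto.rs is not checked into git.
    tonic_build::compile_protos("proto/dummy.proto")?;
    Ok(())
}
```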
Traefik serves as a reverse proxy to route requests to the backend or frontend.
Unit tests use rstest for table-driven testing, making it easy to cover multiple scenarios.
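A table-driven test with rstest looks roughly like this (a sketch; the function under test is hypothetical):

```rust
use rstest::rstest;

// Hypothetical function under test.
fn double(x: u32) -> u32 {
    x * 2
}

// Each #[case] row runs as its own named test case.
#[rstest]
#[case(0, 0)]
#[case(2, 4)]
#[case(21, 42)]
fn doubles_correctly(#[case] input: u32, #[case] expected: u32) {
    assert_eq!(double(input), expected);
}
```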
Database tests use testcontainers to spin up a real Postgres database.
Integration tests also use testcontainers to spin up all required services. These tests live in services/gateway/tests and verify interactions between microservices.
OpenTelemetry instruments and collects traces. Traces are sent to Jaeger by default, but this can be swapped with any OpenTelemetry-compatible backend.
Traces propagate between microservices:
- the gateway starts a new trace and generates a `trace_id`
- the `trace_id` is passed along to downstream services, so their spans join the same trace

TL;DR: For large software projects, Go is a great choice for the majority of services, but Rust is worth considering for performance-critical parts (see: How Grab rewrote their counter service in Rust).
- Error handling is pleasant with `anyhow` and `thiserror`, and the ecosystem is better too.
- Slow Docker builds are mitigated by `workspace-cache` and auto-generated Dockerfiles, but Go wins here.

A backend with a similar setup powers runaround.world, a personal website for tracking running data. Feel free to try it—but it's early stage and only supports Polar and Strava data at the moment.
It works really well. Rust + Postgres delivers the expected performance, and in practice there's no need to optimize beyond writing sane Rust code. Don't worry about a few .clone() calls here and there. The type safety Rust provides means issues after compilation are rare. When they do occur, tracing helps track them down quickly.
A few similar projects that provided inspiration: