High-performance financial data streaming system demonstrating production-grade Rust backend architecture, real-time web applications, and enterprise deployment patterns
Interactive market depth visualization with sub-50ms latency updates
This project showcases my ability to architect, implement, and deploy production-grade distributed systems from scratch. Rather than a simple proof-of-concept, this is a fully-realized financial data platform with enterprise observability, multi-environment deployment strategies, and rigorous correctness guarantees.
Systems Programming Excellence
Zero `unwrap()` calls: every failure path is explicitly handled
Concurrency & Performance
Modern Web Architecture
Production Infrastructure
Data Engineering
Backend (`mbo-backend/`)
The Rust backend is the heart of the system, built with Axum for HTTP routing and Tokio for the async runtime. Key architectural decisions:
```rust
// Custom book implementation with invariant enforcement
fn match_crossed_orders(&mut self) -> Result<()> {
    // Prevents negative spreads by simulating matching engine behavior
    // Maintains price-time priority across all operations
    // ... matching logic elided ...
    Ok(())
}
```
Server-Sent Events over HTTP rather than WebSockets: SSE provides one-way streaming with the browser's native `EventSource` reconnection and passes cleanly through standard HTTP proxies.
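For illustration, a minimal Axum SSE handler could look like the sketch below; the `/stream` route, `book_update` event name, and 50 ms cadence are placeholders rather than the project's actual values:

```rust
use std::{convert::Infallible, time::Duration};

use axum::{
    response::sse::{Event, KeepAlive, Sse},
    routing::get,
    Router,
};
use tokio_stream::{wrappers::IntervalStream, Stream, StreamExt};

// Emit a (placeholder) book update every 50 ms over a long-lived HTTP response.
async fn stream_book() -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    let ticks = IntervalStream::new(tokio::time::interval(Duration::from_millis(50)));
    let events = ticks.map(|_| Ok(Event::default().event("book_update").data("{}")));
    Sse::new(events).keep_alive(KeepAlive::default())
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let app = Router::new().route("/stream", get(stream_book));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
```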
```rust
use prometheus::{Counter, Histogram, IntGauge};

pub struct Metrics {
    pub messages_processed: Counter,
    pub http_request_duration: Histogram,
    pub active_connections: IntGauge,
    pub order_book_apply_duration: Histogram,
    // ... 10+ custom metrics
}
```
Every critical path is instrumented—P50/P99/P999 latencies, throughput, error rates, and resource utilization all exported to Prometheus.
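As a hedged sketch of what that instrumentation can look like with the `prometheus` crate (the metric name, the `register_apply_histogram` helper, and the call-site names are illustrative, not the project's exact code):

```rust
use prometheus::{Histogram, HistogramOpts, Registry};

// Hypothetical helper: register a latency histogram for the book-apply path.
fn register_apply_histogram(registry: &Registry) -> prometheus::Result<Histogram> {
    let hist = Histogram::with_opts(HistogramOpts::new(
        "order_book_apply_duration_seconds",
        "Time to apply one MBO message to the book",
    ))?;
    registry.register(Box::new(hist.clone()))?;
    Ok(hist)
}

// At the call site, a timer observes the critical section:
//   let timer = hist.start_timer();
//   book.apply(msg)?;
//   timer.observe_duration();
```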
Frontend (`mbo-frontend/`)
A TypeScript + Svelte application demonstrating modern frontend patterns.
Multi-Environment Strategy:
Observability Stack:
```yaml
scrape_configs:
  - job_name: 'mbo-backend'
    scrape_interval: 5s  # High-frequency metrics collection
    metrics_path: '/metrics'
```
Grafana dashboards visualize throughput, latencies, connection counts, and error rates.
Problem: MBO dataset starts mid-session, so early messages reference non-existent orders.
Solution: Defensive programming with explicit logging. Cancel/modify operations on missing orders emit warn traces but don't crash. In production, I'd instrument this to alert on anomalies.
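A minimal sketch of that defensive posture, with a plain `HashMap` standing in for the real book and `apply_cancel` as a hypothetical name:

```rust
use std::collections::HashMap;

use anyhow::Result;
use tracing::warn;

// A cancel for an order we never saw is logged and tolerated
// instead of aborting the whole stream.
fn apply_cancel(orders: &mut HashMap<u64, u64>, order_id: u64) -> Result<()> {
    if orders.remove(&order_id).is_none() {
        // The order predates our mid-session snapshot; warn, don't crash.
        warn!(order_id, "cancel for unknown order; likely pre-session state");
    }
    Ok(())
}
```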
Problem: Order book occasionally showed bid >= ask (negative spread).
Solution: Implemented a matching engine simulation that automatically executes crossed orders. This maintains market realism while preserving idempotency (requirement #16); the algorithm ensures no invalid state persists. A simplified sketch of the idea follows.
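The sketch below assumes plain price-keyed maps (price to total size) rather than the project's actual structures; per-order price-time priority within a level is omitted:

```rust
use std::collections::BTreeMap;

// Resolve a crossed book: while the best bid meets or exceeds the best ask,
// simulate the matching engine by filling the overlapping quantity.
fn uncross(bids: &mut BTreeMap<i64, u64>, asks: &mut BTreeMap<i64, u64>) {
    loop {
        let (Some((&bid, &bid_qty)), Some((&ask, &ask_qty))) =
            (bids.iter().next_back(), asks.iter().next())
        else {
            break; // one side of the book is empty
        };
        if bid < ask {
            break; // spread is positive again; the book is valid
        }
        let fill = bid_qty.min(ask_qty);
        if bid_qty == fill {
            bids.remove(&bid);
        } else if let Some(q) = bids.get_mut(&bid) {
            *q -= fill;
        }
        if ask_qty == fill {
            asks.remove(&ask);
        } else if let Some(q) = asks.get_mut(&ask) {
            *q -= fill;
        }
    }
}
```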
Challenge: Achieve 500K msg/sec with p99 <10ms.
Result: Sustained a 17M msg/sec burst with p99 consistently under 5ms after several rounds of optimization; the bottleneck became the test machine's connection limits, not the backend.
Implemented custom drop guards:
```rust
tokio::spawn(async move {
    let _ = rx.await;
    metrics_for_cleanup.active_connections.dec();
});
```
This guarantees metric cleanup even if the client abruptly disconnects, preventing resource leaks. The sketch below shows the surrounding pattern.
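For context, a hedged sketch of how that snippet fits together; the `track_connection` helper and the `Arc`-shared `Metrics` (the struct shown earlier) are my assumptions, not necessarily the project's exact code:

```rust
use std::sync::Arc;

use tokio::sync::oneshot;

// Increment the gauge on connect; hand back the sender so the caller can
// store it in the per-client SSE stream. When the stream is torn down
// (even on abrupt disconnect), `tx` drops, `rx` resolves, and the gauge
// is decremented.
fn track_connection(metrics: Arc<Metrics>) -> oneshot::Sender<()> {
    let (tx, rx) = oneshot::channel::<()>();
    metrics.active_connections.inc();
    let metrics_for_cleanup = metrics;
    tokio::spawn(async move {
        let _ = rx.await; // resolves with Err as soon as `tx` is dropped
        metrics_for_cleanup.active_connections.dec();
    });
    tx
}
```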
| Technology | Justification |
|---|---|
| Rust | Memory safety + zero-cost abstractions = predictable performance under load. No GC pauses. |
| Tokio/Axum | Industry-standard async runtime + ergonomic HTTP framework. Battle-tested at scale. |
| Svelte | Compile-time reactivity means smaller bundles and faster runtime than React/Vue. |
| Bun | TS/JS toolchain that "just works"; package installs run faster than the caching overhead they would replace. |
| SQLite | Embedded, zero-config persistence. Perfect for append-only time-series at this scale. |
| Prometheus | De-facto standard for metrics. Powerful query language (PromQL) and ecosystem. |
| Nix | Reproducible builds down to the compiler version. No "works on my machine" issues. |
| K8s | Enterprise deployment reality. Self-healing, declarative config, industry standard. |
| Metric | Target | Achieved | Notes |
|---|---|---|---|
| Throughput | 500K msg/sec | 17M msg/sec | 34x over spec (burst) |
| P99 Latency | <50ms | <5ms | 10x better than requirement |
| Concurrent Clients | 100+ | 130+ | Limited by test machine, not backend |
| P50 Latency | - | 300 μs | Microsecond response times |
| Uptime | - | 100% | Never crashed during stress testing |
Application: mbo.hiibolt.com
Real-time order book visualization with interactive playback controls
Monitoring: mbo-grafana.hiibolt.com
Prometheus metrics and Grafana dashboards showing system internals
Production metrics: throughput, latencies, connection counts, error rates
Prerequisites: Docker + Docker Compose (or Nix for local dev)
```bash
# Clone the repository
git clone https://github.com/hiibolt/mbo.git && cd mbo

# Set environment variables
cp .env.example .env
# Edit .env and add your DBN_KEY (or omit for demo data)

# Launch production stack
docker compose --profile prod up

# Access at http://localhost (frontend), http://localhost:9090 (metrics)
```
Local Development (requires Nix with flakes):
```bash
nix develop                  # Enter dev shell with all dependencies
cd mbo-backend && cargo run  # Start backend
cd mbo-frontend && bun dev   # Start frontend (separate terminal)
```
Backend Engineering:
Rust • Tokio • Axum • Anyhow • SQLite • Prometheus • Server-Sent Events
Frontend Development:
TypeScript • Svelte • SvelteKit • Bun • TailwindCSS • Skeleton UI
DevOps & Infrastructure:
Docker • Kubernetes • GitHub Actions • Nix • Prometheus • Grafana • Nginx
Software Engineering Practices:
CI/CD • Dependency Management • Security Scanning • Performance Testing • API Design • Distributed Systems • Observability • Documentation
The order book implementation enforces critical invariants: no negative spreads (best bid strictly below best ask), price-time priority across all operations, and idempotent message application.
Verified via the automated test suite and sustained stress testing; a representative invariant check is sketched below.
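As an illustration, a hypothetical invariant test built on the `uncross` sketch above:

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use std::collections::BTreeMap;

    // After resolving a crossed book, the best bid must sit strictly
    // below the best ask whenever both sides are non-empty.
    #[test]
    fn spread_never_negative() {
        let mut bids = BTreeMap::from([(101_i64, 10_u64)]);
        let mut asks = BTreeMap::from([(100_i64, 5_u64)]);
        uncross(&mut bids, &mut asks);
        if let (Some(&bid), Some(&ask)) = (bids.keys().next_back(), asks.keys().next()) {
            assert!(bid < ask, "book must never be crossed");
        }
    }
}
```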
This project represents the intersection of my interests in systems programming, financial technology, and production engineering. It's a demonstration that I don't just write code—I architect systems that work reliably at scale.
I used Claude Opus 4.1 for dense, difficult tasks requiring heavy verification, and Claude Sonnet 4.5 for less intense tasks such as test verification, line-by-line documentation, and rapid templating.
Sonnet 4.5 was used in the re-writing of this README.md file.
List of usage:
- Rewriting the databento code to use more verbose anyhow reporting
- Removing unwrap and expect calls
- Expanding tokio_tracing coverage
- Writing .dockerignore files, as it's an otherwise tedious task; I was careful to monitor for secret leaks and repo ballooning before confirming