Performance comparison tool for frontend frameworks (React, Angular, Next.js, Svelte, Flutter) and backend frameworks (NestJS, .NET Core, Go/Gin, FastAPI, Spring Boot).
| Metric | Description | Unit | Good | Needs Work | Poor |
|---|---|---|---|---|---|
| FCP | First Contentful Paint | seconds | < 1.8s | 1.8-3.0s | > 3.0s |
| LCP | Largest Contentful Paint | seconds | < 2.5s | 2.5-4.0s | > 4.0s |
| TTI | Time to Interactive | seconds | < 3.8s | 3.8-7.3s | > 7.3s |
| TBT | Total Blocking Time | ms | < 200ms | 200-600ms | > 600ms |
| CLS | Cumulative Layout Shift | score | < 0.1 | 0.1-0.25 | > 0.25 |
| SI | Speed Index | seconds | < 3.4s | 3.4-5.8s | > 5.8s |
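The thresholds in the table above can also be applied programmatically. A small helper (illustrative only, mirroring the table; the function and dict names are not part of the tool):

```python
# Lighthouse rating thresholds from the table above: (good_below, needs_work_below).
THRESHOLDS = {
    "FCP": (1.8, 3.0),   # seconds
    "LCP": (2.5, 4.0),   # seconds
    "TTI": (3.8, 7.3),   # seconds
    "TBT": (200, 600),   # milliseconds
    "CLS": (0.1, 0.25),  # unitless score
    "SI": (3.4, 5.8),    # seconds
}

def rate(metric, value):
    """Classify a measured value as Good / Needs Work / Poor."""
    good, needs_work = THRESHOLDS[metric]
    if value < good:
        return "Good"
    if value < needs_work:
        return "Needs Work"
    return "Poor"
```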
| Metric | Description | Unit |
|---|---|---|
| Bundle Size | Production build size | KB |
| Build Time | Production build time | seconds |
| Memory Usage | Browser memory consumption | MB |
Use the Python script to automatically extract Lighthouse metrics:
```bash
# 1. Run Lighthouse on your app
lighthouse http://localhost:3000 --only-categories=performance --output=json --output-path=./report/react_report.json

# 2. Update the framework JSON with extracted metrics
python scripts/update-frontend.py react report/react_report.json
```
The script extracts these metrics from the Lighthouse report:
```bash
# Install Lighthouse CLI
npm install -g lighthouse

# Run Lighthouse and save JSON report
lighthouse http://localhost:3000 --only-categories=performance --output=json --output-path=./report/react_report.json

# Update metrics JSON
python scripts/update-frontend.py react report/react_report.json
```
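As an illustrative sketch (not the actual `scripts/update-frontend.py`) of what extracting these metrics involves: Lighthouse exposes each metric as an audit with a `numericValue`, and the output keys below match the result schema used later in this document.

```python
import json

# Lighthouse's standard audit ids, mapped to the keys used in the result JSON.
AUDIT_IDS = {
    "fcpMs": "first-contentful-paint",
    "lcpMs": "largest-contentful-paint",
    "ttiMs": "interactive",
    "tbtMs": "total-blocking-time",
    "cls": "cumulative-layout-shift",
    "siMs": "speed-index",
}

def extract_lighthouse_metrics(report_path):
    """Pull the six performance metrics out of a Lighthouse JSON report."""
    with open(report_path) as f:
        audits = json.load(f)["audits"]
    out = {}
    for key, audit_id in AUDIT_IDS.items():
        value = audits[audit_id]["numericValue"]  # ms for timings, unitless for CLS
        out[key] = round(value, 3) if key == "cls" else round(value)
    return out
```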
Production bundle size after tree-shaking and minification.
```bash
# React
npm run build && du -sh build/

# Angular
ng build --configuration=production && du -sh dist/

# Next.js
npm run build && du -sh .next/

# Svelte
npm run build && du -sh build/

# Flutter Web
flutter build web --release && du -sh build/web/
```
```bash
# Use 'time' command to measure build duration
time npm run build
```
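The same measurement can be scripted; a minimal sketch (the helper name is illustrative, and the command is whatever your framework's build step is):

```python
import subprocess
import time

def measure_build_time(cmd):
    """Run a build command and return wall-clock seconds, like `time <cmd>`."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # raises if the build fails
    return time.perf_counter() - start
```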
Chrome DevTools:
1. Open DevTools (F12)
2. Go to the Memory tab
3. Take a heap snapshot
4. Note "Total Size" in MB
```bash
# React
npm run build && npx serve -s build

# Angular
ng build && npx serve dist/<project-name>/browser

# Next.js
npm run build && npm run start

# Svelte/SvelteKit
npm run build && npm run preview

# Flutter Web
flutter build web --release
python -m http.server 8080 --directory build/web
# or: npx serve build/web
```
Save frontend benchmark results to compare/frontend/<framework>.json:
```json
{
  "framework": "React",
  "version": "19.0.0",
  "measuredAt": "2026-01-28T23:07:05Z",
  "environment": {
    "node": "22.0.0",
    "browser": "Chrome 132",
    "os": "Windows 11"
  },
  "lighthouse": {
    "fcpMs": 2255,
    "lcpMs": 2405,
    "ttiMs": 2405,
    "tbtMs": 0,
    "cls": 0,
    "siMs": 2255
  },
  "build": {
    "bundleSizeKB": 287,
    "buildTimeSeconds": 8.6
  },
  "runtime": {
    "memoryUsageMB": 14.5
  }
}
```
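A quick sanity check for these result files can verify that every section and key from the schema above is present; a sketch (the helper name is illustrative, the keys are taken from the example):

```python
# Required sections and keys for compare/frontend/<framework>.json.
REQUIRED = {
    "lighthouse": {"fcpMs", "lcpMs", "ttiMs", "tbtMs", "cls", "siMs"},
    "build": {"bundleSizeKB", "buildTimeSeconds"},
    "runtime": {"memoryUsageMB"},
}

def validate_result(data):
    """Return a list of missing sections/keys; empty means the file is complete."""
    missing = []
    for section, keys in REQUIRED.items():
        if section not in data:
            missing.append(section)
            continue
        missing.extend(f"{section}.{k}" for k in keys - data[section].keys())
    return missing
```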
```
compare/frontend/
├── react.json
├── angular.json
├── nextjs.json
├── svelte.json
└── flutter.json
```
```bash
python scripts/update-frontend.py <framework> <lighthouse-report.json>

# Examples:
python scripts/update-frontend.py react report/react_report.json
python scripts/update-frontend.py angular report/angular_report.json
python scripts/update-frontend.py nextjs report/nextjs_report.json
python scripts/update-frontend.py svelte report/svelte_report.json
python scripts/update-frontend.py flutter report/flutter_report.json
```
The script will extract the Lighthouse metrics and update the framework's JSON file, preserving the existing build and runtime sections.

| Framework | FCP | LCP | TTI | TBT | CLS | SI |
|---|---|---|---|---|---|---|
| Next.js | 0.7s | 1.2s | 1.5s | 50ms | 0.02 | 0.85s |
| Svelte | 0.8s | 1.0s | 1.2s | 10ms | 0 | 0.85s |
| Angular | 1.1s | 1.8s | 2.4s | 180ms | 0.01 | 1.4s |
| React | 1.2s | 2.4s | 2.4s | 200ms | 0.05 | 2.3s |
| Flutter | 2.1s | 2.8s | 2.8s | 350ms | 0 | 2.4s |
| Framework | Bundle | Build | Memory |
|---|---|---|---|
| Svelte | 12 KB | 2.4s | 8 MB |
| React | 43 KB | 8.6s | 15 MB |
| Next.js | 68 KB | 12.2s | 17 MB |
| Angular | 96 KB | 4.8s | 14 MB |
| Flutter | 1.2 MB | 45.2s | 46 MB |
```bash
# 1. Start any backend on port 8080
# 2. Run the benchmark with the backend name
node benchmark.js gin         # Go/Gin
node benchmark.js fastapi     # Python/FastAPI
node benchmark.js nestjs      # NestJS
node benchmark.js netcore     # .NET Core
node benchmark.js springboot  # Spring Boot
```
Results are saved to compare/<backend>.json
The benchmark script automatically measures:
| Metric | Description | How It's Measured |
|---|---|---|
| Response Time | API latency (ms) | 100 sequential GET requests to /todos |
| Throughput | Requests/second | 5s burst test with 50 concurrent workers |
| Concurrency | Max connections | Tests 10, 50, 100, 200, 500 concurrent requests |
| Memory Usage | RAM under load (MB) | Fetched from the GET / health endpoint |
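The Response Time measurement above can be sketched in Python for illustration (benchmark.js itself is Node; the URL and request count here are assumptions, not taken from the script):

```python
import time
import urllib.request

def measure_response_time(url, requests=100):
    """Issue sequential GETs and return the mean latency in milliseconds."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000)
    return sum(latencies) / len(latencies)
```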
These metrics need to be measured separately and updated in health/<backend>.json:
Time from process start to first request served.
```bash
# Go (Gin)
# Terminal 1: start with timing
time ./gin-app
# Terminal 2: wait for "Listening on :8080", then immediately
curl http://localhost:8080/

# FastAPI: use uvicorn startup time
time uvicorn main:app --host 0.0.0.0 --port 8080 &
# Wait for "Uvicorn running" message
curl http://localhost:8080/

# .NET Core
time dotnet run &
# Wait for "Now listening on" message
curl http://localhost:8080/

# NestJS
time npm run start:prod &
# Wait for "Nest application successfully started"
curl http://localhost:8080/

# Spring Boot (startup time is shown in console output)
mvn spring-boot:run
# Look for "Started Application in X.XXX seconds"
```
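Polling manually with curl is imprecise for fast-starting servers; a small helper can poll a readiness check and report elapsed milliseconds. This is a sketch under assumptions: `check` would typically be an HTTP GET against http://localhost:8080/, and the helper name is illustrative.

```python
import time

def measure_cold_start(check, timeout_s=30.0, interval_s=0.01):
    """Call check() until it returns True; return elapsed time in ms."""
    start = time.perf_counter()
    deadline = start + timeout_s
    while time.perf_counter() < deadline:
        if check():
            return (time.perf_counter() - start) * 1000.0
        time.sleep(interval_s)
    raise TimeoutError("server did not become ready in time")
```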
Production build size including runtime dependencies.
```bash
# Build optimized binary (stripped)
go build -ldflags="-s -w" -o api ./cmd/api
# Check size
du -h api
```
Typical size: ~10-15 MB (single binary, no dependencies)
```bash
# Build Docker image
docker build -t python-todo-api .
# Check image size
docker images python-todo-api
```
Typical size: ~40-60 MB (with uvicorn, pydantic, etc.)
```bash
# Publish to ./publish folder
dotnet publish -c Release -o ./publish
# Check size
du -sh ./publish
```
Typical size: ~80-100 MB (self-contained) or ~5-10 MB (framework-dependent)
```bash
# Get node_modules size (production dependencies only)
npx cost-of-modules --no-install --production
# Build production and check dist folder size
npm run build
du -sh dist/
```
Total bundle size: node_modules size + dist folder size
Typical size: ~80-100 MB (mostly node_modules)
```bash
# Build fat JAR with Gradle
./gradlew bootJar
# Check JAR size
ls -lh build/libs/*.jar
```
Typical size: ~100-150 MB (fat JAR with embedded Tomcat)
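For a cross-platform alternative to `du -sh` (e.g. on Windows), the same number can be computed with a short script; a sketch with an illustrative helper name:

```python
import os

def dir_size_mb(path):
    """Sum all file sizes under `path`, like `du -s`, and return megabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total / (1024 * 1024)
```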
Time to create a production/release build.
```bash
time go build -o app
```
Typical time: ~2-5 seconds
```bash
# No build step required (interpreted)
# If using Docker:
time docker build -t fastapi-app .
```
Typical time: ~0 seconds (no compilation) or ~30-60s for Docker
```bash
time dotnet publish -c Release
```
Typical time: ~8-15 seconds
```bash
time npm run build
```
Typical time: ~5-10 seconds
```bash
time mvn clean package -DskipTests
# or with Gradle
time ./gradlew build
```
Typical time: ~30-60 seconds
Reference values based on typical production environments:
| Backend | Response Time | Throughput | Concurrent | Memory | Cold Start | Bundle Size | Build Time |
|---|---|---|---|---|---|---|---|
| Gin | 0.8 ms | 125K req/s | 50K | 28 MB | 12 ms | 12 MB | 3s |
| FastAPI | 1.2 ms | 42K req/s | 25K | 85 MB | 180 ms | 45 MB | 0s |
| .NET Core | 1.5 ms | 38K req/s | 22K | 145 MB | 420 ms | 95 MB | 12s |
| NestJS | 2.8 ms | 18K req/s | 15K | 120 MB | 850 ms | 85 MB | 8s |
| Spring Boot | 3.2 ms | 28K req/s | 12K | 320 MB | 2800 ms | 120 MB | 45s |
After measuring cold start and bundle size, update the compareData section in each health file:
```jsonc
// health/dotnet.json
{
  "compareData": {
    "memoryUsageMB": 95,
    "coldStartMs": 420,   // ← Update this
    "bundleSizeMB": 95,   // ← Update this
    "responseTimeMs": 1.5,
    "requestsPerSec": 38000,
    "concurrentConnections": 22000
  }
}
```
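Editing each health file by hand is error-prone; a small sketch of scripting the update (the function name is illustrative, the field names come from the example above):

```python
import json

def update_compare_data(path, cold_start_ms, bundle_size_mb):
    """Rewrite the coldStartMs/bundleSizeMB fields of a health JSON file in place."""
    with open(path) as f:
        data = json.load(f)
    data.setdefault("compareData", {})
    data["compareData"]["coldStartMs"] = cold_start_ms
    data["compareData"]["bundleSizeMB"] = bundle_size_mb
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```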
```bash
# 1. Start Go backend
cd go-backend && ./app &
node benchmark.js
pkill app

# 2. Start Python backend
cd python-backend && uvicorn main:app --port 8080 &
node benchmark.js
pkill uvicorn

# 3. Start .NET backend
cd dotnet-backend && dotnet run &
node benchmark.js
# Ctrl+C to stop

# 4. Start NestJS backend
cd nestjs-backend && npm run start:prod &
node benchmark.js
pkill node

# 5. Start Spring backend
cd spring-backend && mvn spring-boot:run &
node benchmark.js
# Ctrl+C to stop
```
Collect static metrics (dependencies, build size, LOC) without running the server:
```
┌─────────────┬──────────────┬────────────┬───────────────┐
│ Backend     │ Dependencies │ Build Size │ Lines of Code │
├─────────────┼──────────────┼────────────┼───────────────┤
│ .NET Core   │ 7            │ 51.01 MB   │ 1,287         │
│ Spring Boot │ 13           │ 44.7 MB    │ 604           │
│ Go          │ 57           │ 86.87 MB   │ 2,089         │
│ NestJS      │ 39           │ 0.31 MB    │ 697           │
│ FastAPI     │ 19           │ N/A        │ 1,222         │
└─────────────┴──────────────┴────────────┴───────────────┘
```
Usage:
```bash
python collect-metrics.py gin         # Run for Go (Gin)
python collect-metrics.py fastapi     # Run for FastAPI
python collect-metrics.py nestjs      # Run for NestJS
python collect-metrics.py netcore     # Run for .NET Core
python collect-metrics.py springboot  # Run for Spring Boot
```
The script reads each project's dependency manifest (.csproj, build.gradle.kts, go.mod, package.json, pyproject.toml) and writes the results to compare/<backend>.json:

```
compare/
├── gin.json          # Go benchmark results
├── fastapi.json      # Python benchmark results
├── netcore.json      # .NET benchmark results
├── nestjs.json       # NestJS benchmark results
└── springboot.json   # Spring benchmark results
```
Each backend must expose GET / returning:
```json
{
  "status": "healthy",
  "version": "1.0.0",
  "environment": "development",
  "server": {
    "name": ".NET Core",
    "language": "C#",
    "frameworkVersion": ".NET 10.0"
  },
  "memory": {
    "rssMB": 72.5
  },
  "compareData": {
    "coldStartMs": 420,
    "bundleSizeMB": 95
  }
}
```
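For reference, the GET / contract above can be served with nothing but the standard library; a minimal sketch (all values are placeholders, and a real backend would report its own framework details and live memory figures):

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Placeholder payload matching the required health-endpoint shape above.
HEALTH = {
    "status": "healthy",
    "version": "1.0.0",
    "environment": "development",
    "server": {"name": "Example", "language": "Python", "frameworkVersion": "stdlib"},
    "memory": {"rssMB": 0.0},
    "compareData": {"coldStartMs": 0, "bundleSizeMB": 0},
}

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(HEALTH).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep benchmark output clean

if __name__ == "__main__":
    ThreadingHTTPServer(("127.0.0.1", 8080), HealthHandler).serve_forever()
```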