8mb.local is a self‑hosted, fire‑and‑forget video compressor. Drop a file, choose a target size (e.g., 8MB, 25MB, 50MB, 100MB), and let GPU-accelerated encoding produce compact outputs with AV1/HEVC/H.264. Supports NVIDIA NVENC, Intel/AMD VAAPI (Linux), and CPU fallback. The stack includes a SvelteKit UI, FastAPI backend, Celery worker, Redis broker, and real‑time progress via Server‑Sent Events (SSE).
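The rate-control math behind a fixed target size is simple to sketch. The snippet below is illustrative only (8mb.local's actual rate-control code isn't shown here): it splits a size budget into a video bitrate after reserving audio bitrate and container overhead, both of which are assumed values.

```python
def video_bitrate_kbps(target_mb: float, duration_s: float,
                       audio_kbps: float = 128.0,
                       overhead: float = 0.02) -> float:
    """Split a file-size budget into a video bitrate.

    target_mb  - desired output size in megabytes (1 MB = 1024*1024 bytes)
    duration_s - clip length in seconds
    audio_kbps - bitrate reserved for the audio track (assumed default)
    overhead   - fraction reserved for container/mux overhead (assumed)
    """
    total_kbits = target_mb * 1024 * 1024 * 8 / 1000   # size -> kilobits
    usable_kbits = total_kbits * (1 - overhead)
    return usable_kbits / duration_s - audio_kbps

# A 60 s clip targeting 8 MB with 128 kbps audio leaves ~968 kbps for video.
print(round(video_bitrate_kbps(8, 60), 1))  # prints 968.1
```

This is also why very long clips at small targets come out blurry: the per-second bit budget shrinks linearly with duration.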
Screenshots: Main Interface | GPU Support List | Settings Panel | Live Queue | Compressing (Real-time Logs) | Encoder Validation Tests | Job History | Advanced Options
Job history is persisted to /app/history.json, and the running version is exposed via the /api/version endpoint.

Architecture:

```mermaid
flowchart LR
    A[Browser / SvelteKit UI] -- Upload / SSE --> B(FastAPI Backend)
    B -- Enqueue --> C[Redis]
    D[Celery Worker + FFmpeg NVENC] -- Progress / Logs --> C
    B -- Pub/Sub relay --> A
    D -- Files --> E[outputs/]
    A -- Download --> B
```
Note (Nov 2025): RTX 50-Series (Blackwell) Support

🎉 RTX 50-Series users (RTX 5090/5080/5070 Ti/etc.): verified working support with full NVENC hardware acceleration!

- Docker image: `jms1717/8mblocal:rtx50-working`
- Branch: `rtx50-blackwell`
- Complete setup guide: RTX50-WORKING.md

⚠️ CRITICAL for RTX 50-series: you must mount the WSL driver directory:

```
-v /usr/lib/wsl/drivers:/usr/lib/wsl/drivers:ro
```

(Already configured in docker-compose.yml on the rtx50-blackwell branch.)

Requirements:

- RTX 50-series GPU (Blackwell/SM_100)
- NVIDIA Driver 550.x+ (tested with 581.80)
- Windows 11 WSL2 or Linux with CUDA 13 support

Verified test results: ✅ all 6 encoders passing (h264_nvenc, hevc_nvenc, av1_nvenc, libx264, libx265, libaom-av1)
Other NVIDIA GPUs:
- Main branch: CUDA 12.2 + FFmpeg 6.1.1, driver 535.x+ (Turing/Ampere/Ada)
- CPU/VAAPI still work if NVENC is incompatible
Intel Arc (e.g., A140, A380) and recent Intel iGPUs are supported via VAAPI and QSV (libmfx) on Linux hosts.
Requirements:

- `--device /dev/dri` access for the container
- FFmpeg built with `--enable-vaapi` and `--enable-libmfx` (QSV)
- Host packages: `intel-media-driver`, `libmfx1`, `libva-utils`
- `LIBVA_DRIVER_NAME=iHD`
- docker-compose.yml: `devices: ["/dev/dri:/dev/dri"]`

Limitations:
Components
The Celery worker parses `ffmpeg -progress` output and publishes updates.

Data & files
- `uploads/` – incoming files
- `outputs/` – compressed results
- Cleanup is governed by `FILE_RETENTION_HOURS`.

Configuration:

- `WORKER_CONCURRENCY` – maximum concurrent compression jobs (default: 4, range: 1-20)
- `BACKEND_HOST` – backend bind address (default: 0.0.0.0)
- `BACKEND_PORT` – backend port (default: 8001)
- `PUBLIC_BACKEND_URL` – frontend API endpoint; leave unset to use same-origin (recommended)

Control which codecs appear in the UI via environment variables or the Settings page:
- `CODEC_H264_NVENC`, `CODEC_HEVC_NVENC`, `CODEC_AV1_NVENC` – NVIDIA encoders
- `CODEC_H264_QSV`, `CODEC_HEVC_QSV`, `CODEC_AV1_QSV` – Intel Quick Sync
- `CODEC_H264_VAAPI`, `CODEC_HEVC_VAAPI`, `CODEC_AV1_VAAPI` – AMD/Intel VAAPI (Linux)
- `CODEC_LIBX264`, `CODEC_LIBX265`, `CODEC_LIBAOM_AV1` – CPU encoders

All default to true. The system validates encoder availability at runtime and automatically falls back to CPU if hardware isn't available.
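The runtime fallback behavior can be pictured with a small selection routine. This is an illustrative sketch, not the project's actual code; the hardware-to-CPU mapping is an assumption inferred from the encoder families listed above, and the warning text is hypothetical:

```python
# Hypothetical hardware->CPU fallback map, inferred from the encoder
# families listed in the README (not taken from the project's source).
CPU_FALLBACK = {
    "h264_nvenc": "libx264", "h264_qsv": "libx264", "h264_vaapi": "libx264",
    "hevc_nvenc": "libx265", "hevc_qsv": "libx265", "hevc_vaapi": "libx265",
    "av1_nvenc": "libaom-av1", "av1_qsv": "libaom-av1", "av1_vaapi": "libaom-av1",
}

def pick_encoder(requested: str, available: set[str]) -> str:
    """Return the requested encoder if usable, else its CPU fallback."""
    if requested in available:
        return requested
    fallback = CPU_FALLBACK.get(requested, requested)
    print(f"Warning: {requested} unavailable, falling back to {fallback}")
    return fallback

# On a host without NVENC, hevc_nvenc degrades to libx265:
print(pick_encoder("hevc_nvenc", {"libx264", "libx265", "libaom-av1"}))
```

In the real system the `available` set would come from probing ffmpeg at startup rather than being passed in by hand.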
You can manage settings through the web interface at /settings:
Detected hardware and encoder status are shown at /gpu-support.

Example .env file:
```
AUTH_ENABLED=false
AUTH_USER=admin
AUTH_PASS=changeme
FILE_RETENTION_HOURS=1
WORKER_CONCURRENCY=4
REDIS_URL=redis://127.0.0.1:6379/0
BACKEND_HOST=0.0.0.0
BACKEND_PORT=8001
```
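Settings like these typically need defensive parsing. A sketch of how an integer setting might be read and validated (illustrative only, not the backend's actual code; the 1-20 clamp mirrors the documented `WORKER_CONCURRENCY` range):

```python
import os

def int_setting(name: str, default: int, lo: int, hi: int) -> int:
    """Read an integer env var, falling back to default and clamping to [lo, hi]."""
    try:
        value = int(os.environ.get(name, default))
    except ValueError:
        value = default  # non-numeric input -> use the default
    return max(lo, min(hi, value))

os.environ["WORKER_CONCURRENCY"] = "50"             # out of documented range
print(int_setting("WORKER_CONCURRENCY", 4, 1, 20))  # prints 20 (clamped)
```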
8mb.local supports running multiple compression jobs in parallel. Configure the maximum number of concurrent jobs via:
- Set `WORKER_CONCURRENCY` in your .env file (default: 4), or add e.g. `WORKER_CONCURRENCY=10` to the `environment:` section of docker-compose.yml.

Important: a container restart is required after changing the concurrency setting.
| GPU Model | Recommended Concurrency | Notes |
|---|---|---|
| Quadro RTX 4000 / RTX 3060+ | 6-10 jobs | Excellent NVENC throughput, handles high concurrency well |
| RTX 3090 / 4090 | 8-12 jobs | Top-tier NVENC performance, best for bulk processing |
| GTX 1660 / RTX 2060 | 3-5 jobs | Good NVENC performance for mid-range |
| GTX 1050 Ti / Entry-level | 2-3 jobs | Basic NVENC, limited parallel capacity |
| CPU-only (libx264/libx265) | 1-2 jobs per 4 cores | Very slow, high CPU usage |
| Intel/AMD VAAPI | 4-8 jobs | Depends on iGPU/dGPU capabilities |
The Queue page clearly shows which jobs are running simultaneously:
Start with 4 concurrent jobs and gradually increase while monitoring GPU utilization and job completion times.
Codec/container notes
MP4 outputs are muxed with `+faststart` for better web playback.

Performance tips
8mb.local automatically detects and uses available hardware acceleration:
NVIDIA GPU (NVENC): best support for AV1, HEVC, H.264

- Requires the `--gpus all` flag in the docker run command
- Verify: `docker exec 8mblocal bash -c "ffmpeg -hide_banner -encoders | grep -i nvenc"`

Intel GPU (Quick Sync Video – QSV): good support for H.264, HEVC, AV1 (Arc GPUs)

- Requires `/dev/dri` to be accessible in the container (`--device=/dev/dri:/dev/dri` flag in the docker run command)
- Verify: `docker exec 8mblocal bash -c "ffmpeg -hide_banner -encoders | grep -i qsv"`

AMD GPU (VAAPI – Linux only): support for H.264, HEVC, AV1

- Requires `/dev/dri` access (`--device=/dev/dri:/dev/dri` flag in the docker run command)
- Verify: `docker exec 8mblocal bash -c "ffmpeg -hide_banner -encoders | grep vaapi"`

CPU fallback: works on any system without a GPU
The system validates encoder availability at runtime and automatically falls back to CPU if hardware isn't available. You'll see log messages like:
The easiest way to run 8mb.local is with the pre-built Docker image. Choose the command for your system:
CPU only:

```shell
docker run -d --name 8mblocal -p 8001:8001 -v ./uploads:/app/uploads -v ./outputs:/app/outputs jms1717/8mblocal:latest
```
NVIDIA GPU (NVENC):

```shell
docker run -d --name 8mblocal --gpus all -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility -p 8001:8001 -v ./uploads:/app/uploads -v ./outputs:/app/outputs jms1717/8mblocal:latest
```
Note: the `-e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility` environment variable is required to enable NVENC support. It tells the NVIDIA Container Toolkit to mount video encoding libraries into the container.
Intel/AMD GPU (VAAPI/QSV):

```shell
docker run -d --name 8mblocal --device=/dev/dri:/dev/dri -p 8001:8001 -v ./uploads:/app/uploads -v ./outputs:/app/outputs jms1717/8mblocal:latest
```
NVIDIA + VAAPI combined:

```shell
docker run -d --name 8mblocal --gpus all -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility --device=/dev/dri:/dev/dri -p 8001:8001 -v ./uploads:/app/uploads -v ./outputs:/app/outputs jms1717/8mblocal:latest
```
Access the web UI at: http://localhost:8001
For easier management, use Docker Compose. Create a docker-compose.yml file:
CPU only:

```yaml
services:
  8mblocal:
    image: jms1717/8mblocal:latest
    container_name: 8mblocal
    ports:
      - "8001:8001"
    volumes:
      - ./uploads:/app/uploads
      - ./outputs:/app/outputs
      - ./.env:/app/.env  # Optional: for custom settings
    restart: unless-stopped
```
NVIDIA GPU (NVENC):

```yaml
services:
  8mblocal:
    image: jms1717/8mblocal:latest
    container_name: 8mblocal
    ports:
      - "8001:8001"
    volumes:
      - ./uploads:/app/uploads
      - ./outputs:/app/outputs
      - ./.env:/app/.env  # Optional: for custom settings
    restart: unless-stopped
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility  # Required for NVENC
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
Intel/AMD GPU (VAAPI):

```yaml
services:
  8mblocal:
    image: jms1717/8mblocal:latest
    container_name: 8mblocal
    ports:
      - "8001:8001"
    volumes:
      - ./uploads:/app/uploads
      - ./outputs:/app/outputs
      - ./.env:/app/.env  # Optional: for custom settings
    devices:
      - /dev/dri:/dev/dri
    restart: unless-stopped
```
NVIDIA + VAAPI combined:

```yaml
services:
  8mblocal:
    image: jms1717/8mblocal:latest
    container_name: 8mblocal
    ports:
      - "8001:8001"
    volumes:
      - ./uploads:/app/uploads
      - ./outputs:/app/outputs
      - ./.env:/app/.env  # Optional: for custom settings
    devices:
      - /dev/dri:/dev/dri
    restart: unless-stopped
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility  # Required for NVENC
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
Then run:
```shell
docker compose up -d
```
If you want to build the image yourself:
Clone the repository:
```shell
git clone https://github.com/JMS1717/8mb.local.git
cd 8mb.local
```
Build and run:
```shell
docker build -t 8mblocal:local .
docker run -d --name 8mblocal -p 8001:8001 -v ./uploads:/app/uploads -v ./outputs:/app/outputs 8mblocal:local
```
Or with Docker Compose:
```shell
docker compose up -d --build
```
Create a .env file in the same directory as your docker-compose.yml (optional):
```
# Authentication (can also be configured via Settings UI)
AUTH_ENABLED=false
AUTH_USER=admin
AUTH_PASS=changeme

# File retention
FILE_RETENTION_HOURS=1

# Codec visibility (all default to true)
CODEC_H264_NVENC=true
CODEC_HEVC_NVENC=true
CODEC_AV1_NVENC=true
CODEC_H264_QSV=true
CODEC_HEVC_QSV=true
CODEC_AV1_QSV=true
CODEC_H264_VAAPI=true
CODEC_HEVC_VAAPI=true
CODEC_AV1_VAAPI=true
CODEC_LIBX264=true
CODEC_LIBX265=true
CODEC_LIBAOM_AV1=true

# Redis (internal, usually no need to change)
REDIS_URL=redis://127.0.0.1:6379/0
BACKEND_HOST=0.0.0.0
BACKEND_PORT=8001
```
Mount it with -v ./.env:/app/.env in docker run, or add it to volumes in docker-compose.yml.
CRITICAL for Real-Time Progress: If using a reverse proxy (nginx, Nginx Proxy Manager, Traefik, etc.), SSE (Server-Sent Events) requires special configuration to prevent buffering:
Add to your proxy configuration for the /api/stream/ location:
```nginx
location /api/stream/ {
    proxy_pass http://backend:8001;
    proxy_buffering off;              # REQUIRED - Disables response buffering for SSE
    proxy_cache off;                  # Recommended - Disables caching
    proxy_set_header Connection '';   # Recommended - Removes connection header
    chunked_transfer_encoding on;     # Recommended - Enables chunked transfer
}
```
In Nginx Proxy Manager:
```nginx
location /api/stream/ {
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Connection '';
    chunked_transfer_encoding on;
}
```
Add labels to your docker-compose:
```yaml
labels:
  - "traefik.http.middlewares.no-buffer.buffering.maxRequestBodyBytes=0"
  - "traefik.http.middlewares.no-buffer.buffering.maxResponseBodyBytes=0"
  - "traefik.http.routers.8mblocal.middlewares=no-buffer"
```
Apache:

```apache
<Location /api/stream/>
    ProxyPass http://backend:8001/api/stream/
    ProxyPassReverse http://backend:8001/api/stream/
    SetEnv proxy-sendchunked 1
    SetEnv proxy-interim-response RFC
</Location>
```
Why this matters: Without proxy_buffering off, nginx buffers the entire SSE response and sends all progress events at once when the job completes, instead of streaming them in real-time. You'll see "progress stuck at 0%" until completion, then everything updates instantly.
Testing: After configuring, start a compression job. You should see:
- `SSE connection opened`
- `SSE event: progress`

If progress still doesn't update until completion, check your proxy logs and verify `proxy_buffering off;` is applied.
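The wire format makes the buffering problem easy to see: each SSE message is a few `field: value` lines terminated by a blank line, and the browser dispatches an event only once that terminator arrives. A minimal formatter (illustrative; the event name and payload fields here are hypothetical, not necessarily what 8mb.local emits):

```python
import json

def sse_frame(event: str, data: dict) -> str:
    """Serialize one Server-Sent Events message.

    The trailing blank line terminates the message; a proxy that buffers
    the response holds every frame back until the stream completes, which
    is exactly the "stuck at 0%" symptom described above.
    """
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# Hypothetical progress payload - the real event schema isn't shown here.
print(sse_frame("progress", {"percent": 42}), end="")
```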
Quick checks:

- Install Docker if needed: `curl -fsSL https://get.docker.com | sh`
- Verify `/dev/dri` exists and your user has access: `ls -l /dev/dri`
- Add your user to the docker group: `sudo usermod -aG docker $USER` (logout/login required)

Check the container is running:

```shell
docker ps | grep 8mblocal
```

Check available encoders:

```shell
docker exec 8mblocal bash -c "ffmpeg -hide_banner -encoders | grep -E 'nvenc|qsv|vaapi|264|265|av1'"
```

View logs:

```shell
docker logs 8mblocal
```
Access the UI at http://localhost:8001 and go to Settings → Available Codecs to see detected hardware.
Pull the latest image and restart:
```shell
docker pull jms1717/8mblocal:latest
docker stop 8mblocal
docker rm 8mblocal
# Then run your docker run command again, or:
docker compose pull
docker compose up -d
```
If you run with --gpus all (or have gpus: all in compose) on a host with no NVIDIA adapter, Docker's NVIDIA prestart hook will abort before our app starts:
```
nvidia-container-cli: initialization error: WSL environment detected but no adapters were found
```
Fix: remove --gpus all and any NVIDIA_* environment variables. The app will start in CPU mode automatically. For Intel/AMD on Linux, use /dev/dri mapping instead (see Intel/AMD sections above).
NVENC "Operation not permitted" or "Driver does not support required nvenc API version" error:
Most common cause: Missing NVIDIA_DRIVER_CAPABILITIES environment variable
Solution: Add -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility to your docker run command
Example:
```shell
docker run -d --name 8mblocal \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -p 8001:8001 \
  -v ./uploads:/app/uploads \
  -v ./outputs:/app/outputs \
  jms1717/8mblocal:latest
```
This tells the NVIDIA Container Toolkit to mount NVENC libraries into the container
Critical: Driver Version Mismatch - The #1 cause of NVENC failures:
If you see:

```
Driver does not support the required nvenc API version. Required: 13.0 Found: 12.1
```

run `nvidia-smi` and look at the driver version.

Solution: Upgrade the NVIDIA driver.
For Debian 12 systems:
```shell
# Debian backports has older drivers - use NVIDIA's official repo instead
wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install nvidia-driver
sudo reboot  # Required to load new driver
```
For Ubuntu systems:
```shell
# Add NVIDIA PPA for latest drivers
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-550  # or newer
sudo reboot
```
After reboot, verify: nvidia-smi should show driver 550+ and ffmpeg -encoders | grep nvenc inside container should work.
Why this happens: Newer ffmpeg builds use NVENC API 13.0 features for better quality/performance. Older drivers (535.x) only support up to API 12.1.
Verification steps:
- Check the driver: `nvidia-smi` (look for version 550+)
- Test NVENC inside the container: `docker exec 8mblocal ffmpeg -f lavfi -i nullsrc -c:v h264_nvenc -f null -`
- Confirm the `NVIDIA_DRIVER_CAPABILITIES` env var above is set
- On Linux: verify the NVIDIA Container Toolkit is installed
If a driver upgrade is not possible: the system will automatically fall back to CPU encoding.
Intel QSV not working:
- Verify `/dev/dri` exists: `ls -l /dev/dri`
- `groups` should show `video` or `render`
- Ensure the `--device=/dev/dri:/dev/dri` flag is used

AMD VAAPI issues:
- Check Mesa: `glxinfo | grep -i mesa` (install `mesa-utils`)
- Verify `/dev/dri` exists and is accessible
- Ensure the `--device=/dev/dri:/dev/dri` flag is used

This error occurs when the NVENC encoder fails to initialize. The system now automatically detects this and falls back to CPU, but if you want GPU acceleration:
Error Messages You May See:
"Cannot load libnvidia-encode.so.1"
- Cause: missing `NVIDIA_DRIVER_CAPABILITIES` environment variable
- Fix: add `-e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility` to the docker run command (see example in the NVENC section above)

"Driver does not support the required nvenc API version. Required: 13.0 Found: 12.1"
nvidia-smi and check driver version"Could not open encoder before EOF" or "Task finished with error code: -22 (Invalid argument)"
Common causes:
Quick Fix (Automatic): The system will detect the initialization failure and automatically use CPU encoding. You'll see:
```
Warning: hevc_nvenc failed initialization test (driver/library issue), falling back to CPU
Using encoder: libx265 (requested: hevc_nvenc)
```
Your video will still compress, just using CPU instead of GPU.
To Enable GPU (Optional):
Fix missing `NVIDIA_DRIVER_CAPABILITIES` (if you saw error #1 above):

- docker run: add `-e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility`
- docker-compose:

```yaml
environment:
  - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
```
Upgrade NVIDIA driver (if you saw error #2 above):
Missing NVIDIA Container Toolkit: Docker can't access GPU
Install it:

```shell
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list \
     | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
     | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
  && sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
```

Then restart Docker: `sudo systemctl restart docker`

Workaround if an upgrade is not possible: disable NVENC and use CPU or VAAPI.
Real-World Example: Debian 12 with Driver 535 On a fresh Debian 12 install with Quadro RTX 4000, the default driver (535.247.01) is too old:
```shell
# Initial state - fails with API 12.1 vs 13.0 error
nvidia-smi  # Shows driver 535.247.01

# Fix by upgrading to NVIDIA's official driver repository
wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install nvidia-driver  # Installs 550+ series
sudo reboot

# After reboot - NVENC works!
nvidia-smi  # Shows driver 580.95.05 or newer
```
This resolved the exact issue encountered on the powerhouse server.
Permission denied writing uploads/outputs:
- Fix: `chmod 777 uploads outputs` or `chown -R $USER:$USER uploads outputs`

Ports in use:
- Fix: map a different host port, e.g. `-p 8080:8001` instead of `-p 8001:8001`
- No need to change `PUBLIC_BACKEND_URL` (frontend uses same-origin)

Progress bar stuck at 0% until job completes:
- Fix: add `proxy_buffering off;` to your proxy config for the `/api/stream/` location (see Reverse Proxy Configuration section above)

Container won't start:
- Check `docker logs 8mblocal`
- Remove and recreate: `docker rm -f 8mblocal`

Files slightly over target size:
FFmpeg errors:
- Check `docker logs 8mblocal` for the exact error
- Verify GPU access: `docker exec 8mblocal nvidia-smi` (NVIDIA) or `docker exec 8mblocal ls -l /dev/dri` (VAAPI)
- List encoders: `docker exec 8mblocal bash -c "ffmpeg -hide_banner -encoders | grep -E 'nvenc|qsv|vaapi|264|265|av1'"`

Useful diagnostic commands:

```shell
# Check if container is running
docker ps | grep 8mblocal

# View container logs
docker logs 8mblocal

# Check NVIDIA GPU status (if using NVENC)
docker exec 8mblocal nvidia-smi

# Check available video devices (if using VAAPI)
docker exec 8mblocal ls -l /dev/dri

# List available encoders in container
docker exec 8mblocal bash -c "ffmpeg -hide_banner -encoders | grep -E 'nvenc|qsv|vaapi|264|265|av1'"

# Check NVIDIA libraries (if using NVENC)
docker exec 8mblocal bash -c "ls -l /usr/lib/x86_64-linux-gnu/libnvidia-encode.so*"

# Restart container
docker restart 8mblocal

# View real-time logs
docker logs -f 8mblocal
```
- Ensure `proxy_buffering off;` is set for `/api/stream/` (see Reverse Proxy Configuration section)

License: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
You are free to use, share, and adapt this project for non-commercial purposes as long as you provide appropriate attribution.
You are not allowed to use this project for commercial purposes under this license.
Looking to use this project commercially? I offer a separate commercial license that grants you the right to use this project in your commercial products.
Please contact me directly.
Pull requests welcome! Please ensure Docker builds succeed and test with your GPU hardware.
For issues, questions, or feature requests, please open an issue on GitHub.