
rhub — RunningHub Precision Control Center

Advanced image and video generation control center. Features AI-orchestrated prompt engineering from 300 photogenic locations, multi-model generation (FLUX.1-dev via RunningHub, Z-Image via RunPod Serverless, and FLUX.2-klein via RunPod Serverless), multi-LoRA additive blending, sequential batch queuing, image upscaling with LSB steganography support, image enhancement via fal.ai Phota and RunningHub workflows, video creation via fal.ai Seedance 2.0, and real-time polling.

rhub is a specialized SvelteKit-based dashboard that transforms simple subject descriptions into high-quality, LoRA-consistent imagery and video. It solves the "repetition problem" in AI generation by bridging expert prompt engineering (Gemini/Qwen) with multiple synthesis pipelines.

Key Features

  • Multi-Model Generation — Switch between FLUX.1-dev (RunningHub), Z-Image (RunPod Serverless), and FLUX.2-klein (RunPod Serverless) from the Generate tab. The same AI prompt engineering pipeline feeds all three models.
  • Multi-LoRA Blending — Both Z-Image and FLUX.2-klein support a dynamic LoRA Stack. Load multiple adapters in parallel with per-LoRA strength control and additive blending. A curated preset dropdown is populated from a build-time config file — pick a style or enter any custom URL.
  • Expert Orchestration — Google Gemini 3 Flash or RunPod Qwen 30B synthesizes detailed prompts via a 2-step process: location selection + AI composition. Each model has a specialized Prompt Director tuned for its strengths.
  • FLUX.2-klein Support — 9B-parameter undistilled flow-match transformer by Black Forest Labs, with quality presets, multi-LoRA support, optional detail refinement (2nd pass), and server-side upscaling.
  • Image Upscaling — Batch upscale images to 2K resolution using specialized RunningHub workflows. Handles intermediary storage via S3 (e.g., Backblaze B2) with automatic presigned URL generation.
  • Image Enhancement — Enhance images via the Enhance tab using fal.ai Phota or two RunningHub workflows (standard Enhance or Enhance+Detail). Accepts image file uploads or URLs. Any generated image in the queue can be sent directly to Enhance.
  • Video Creation — Create videos via the Create Video tab using fal.ai Seedance 2.0 (bytedance/seedance-2.0/reference-to-video). Supports up to 9 reference images (URL or local upload), up to 3 reference audio files (URL or local upload), resolution, duration, aspect ratio, and audio generation options. FAL jobs are polled asynchronously and videos are saved to the output volume. Any generated image in the queue can be sent directly to the video tab as a reference image.
  • LSB Steganography (TT-Decoder/Encoder) — Built-in TypeScript support for both extracting hidden data from generated images and embedding data into carrier images for secure upscale processing.
  • Persistent Sequential Queue — Bypasses RunningHub's single-task limitation with a robust client-side queue. Captures full form state (LoRA, model, output dir, API keys) per task, survives page refreshes, and processes jobs one-by-one.
  • Environment-backed API Keys — API keys can be pre-configured in .env and are automatically used as defaults. UI fields override them on a per-session basis.
  • Modern Tabbed UI — 4-tab segmented control interface (Generate, Upscale, Enhance, Create Video) with smooth sliding indicators and responsive mobile layout.
  • 300 Curated Locations — Module-level Fisher-Yates shuffled queue of 300 unique locations ensures zero repetition across large batches.
  • Flexible Dimensions — 9 aspect ratio presets for FLUX.1-dev and Z-Image (16px-aligned auto-calculation), plus 12 dedicated aspect ratio presets for FLUX.2-klein (landscape, square, and portrait — all ~1K resolution, 32px-aligned).
  • Containerized Deployment — Fully Dockerized with environment-based configuration for secrets and server limits.
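
The zero-repetition guarantee can be sketched as a module-level shuffled queue. This is a minimal illustration, not the project's actual code; `LOCATIONS` stands in for the 300-entry list in src/lib/locations.ts.

```typescript
// Illustrative stand-in for the 300-entry list in src/lib/locations.ts.
const LOCATIONS = ["Santorini cliffside", "Kyoto bamboo grove", "Icelandic black-sand beach"];

// Fisher-Yates shuffle on a copy, leaving the source list untouched.
function shuffled<T>(items: T[]): T[] {
  const a = items.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Module-level queue: every location is used once before any can repeat.
let queue = shuffled(LOCATIONS);
function nextLocation(): string {
  if (queue.length === 0) queue = shuffled(LOCATIONS); // reshuffle only when exhausted
  return queue.pop()!;
}
```

Because the queue is only refilled once drained, a batch smaller than the pool size can never see the same location twice.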

Architecture

The application runs as a single SvelteKit container. API routes handle server-side logic including AI prompt synthesis, S3 uploads, RunningHub interaction, direct RunPod Serverless calls, fal.ai enhancement, and fal.ai video generation. Images and videos are served directly from a Docker-mounted volume to prevent caching issues and ensure persistence.

Data Flow

Generation Flow — FLUX.1-dev (RunningHub)

  1. User selects FLUX.1-dev model, provides subject characteristics, LoRA URL, and API keys.
  2. Tasks are added to the Persistent Queue.
  3. Background processor picks the next task:
    • AI selects a location and generates a vivid composition.
    • AI synthesizes the final detailed FLUX.1-dev prompt.
  4. Dimensions are calculated and the task is submitted to RunningHub.
  5. The client polls /api/check for status, downloads the result, and optionally decodes hidden data.
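
Step 5 can be sketched as a generic polling helper. This is a simplified sketch, assuming a 5-second interval; `checkStatus` stands in for a POST to /api/check, and the status strings are illustrative.

```typescript
// Poll a status-check callback until the task completes or fails.
// `checkStatus` stands in for a fetch() against /api/check (illustrative shape).
async function pollUntilDone<T>(
  checkStatus: () => Promise<{ status: string; result?: T }>,
  intervalMs = 5000,
  maxAttempts = 120
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status, result } = await checkStatus();
    if (status === "completed" && result !== undefined) return result;
    if (status === "failed") throw new Error("task failed");
    await new Promise((r) => setTimeout(r, intervalMs)); // wait before the next poll
  }
  throw new Error("polling timed out");
}
```

The same pattern applies to the other polling endpoints (/api/zimage-check, /api/enhance-check, /api/video-check); only the URL and response shape differ.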

Generation Flow — Z-Image (RunPod Serverless)

  1. User selects Z-Image model and configures subject characteristics and an optional LoRA Stack.
  2. Tasks are added to the Persistent Queue with the same AI prompt engineering pipeline.
  3. Background processor picks the next task:
    • AI synthesizes the prompt (same Gemini/Qwen pipeline as FLUX.1-dev). Trigger words from all LoRAs in the stack are automatically aggregated.
  4. Job is submitted to the RunPod Z-Image Serverless endpoint (/run) using the new multi-LoRA loras array.
  5. The client polls /api/zimage-check until the job completes, then downloads the JPG from the S3 URL returned by RunPod.
  6. Optionally, a High-Res Refinement second pass is run server-side before delivery.
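
The payload assembly in steps 3–4 might look like the following. Field names here are an illustrative sketch, not the exact worker schema.

```typescript
interface LoraEntry {
  url: string;     // direct download URL of the LoRA file
  trigger: string; // trigger word to inject into the prompt
  scale: number;   // per-LoRA strength for additive blending
}

// Aggregate trigger words from the whole stack and build the multi-LoRA
// `loras` array for the RunPod /run request body (illustrative field names).
function buildZImageInput(prompt: string, stack: LoraEntry[], steps = 40, guidance = 4.5) {
  const triggers = stack.map((l) => l.trigger).filter(Boolean);
  return {
    prompt: [...triggers, prompt].join(", "),
    num_inference_steps: steps,
    guidance_scale: guidance,
    loras: stack.map(({ url, scale }) => ({ url, scale })),
  };
}
```
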

Generation Flow — FLUX.2-klein (RunPod Serverless)

  1. User selects FLUX.2-klein model, chooses a quality preset, configures a multi-LoRA stack, and optionally sets seed, prompt length limit, detail refinement, and upscaling options.
  2. Tasks are added to the Persistent Queue with the same AI prompt engineering pipeline.
  3. Background processor picks the next task:
    • AI synthesizes the prompt using the FLUX Prompt Director — a specialized system prompt tuned for FLUX.2 [klein] 9B (camera/film language, avoids SDXL boilerplate).
  4. Job is submitted to the RunPod FLUX.2-klein Serverless endpoint (/run).
  5. The client polls /api/zimage-check until the job completes, then downloads the JPEG from the S3 presigned URL returned by RunPod.

Enhance Flow

  1. User opens the Enhance tab and selects an engine: fal.ai Phota, RunningHub Enhance, or RunningHub Enhance+Detail.
  2. User provides an image (file upload or URL). Any result in the queue can be sent directly via Send to Enhance.
  3. For fal.ai Phota: image is submitted to fal-ai/phota/enhance synchronously; result is downloaded and saved.
  4. For RunningHub engines: task is submitted to the respective RunningHub app; client polls /api/enhance-check for completion and downloads the result.

Video Creation Flow

  1. User opens the Create Video tab and writes a prompt.
  2. Optionally adds up to 9 reference images (paste URL or upload from device) and up to 3 reference audio files (paste URL or upload from device). Any image result in the queue can be sent directly via Send to Video.
  3. User configures resolution (480p/720p), duration, aspect ratio, audio generation, and optional seed.
  4. Clicking Add Video to Queue submits the task to the queue.
  5. Background processor submits to POST https://queue.fal.run/bytedance/seedance-2.0/reference-to-video.
  6. Client polls /api/video-check every 5 seconds. When complete, the mp4 is downloaded and saved to the output volume.

Upscale Flow

  1. User uploads images via the Upscale tab.
  2. Background processor picks the next upscale task:
    • If TT-Decoder toggle is ON: The image is encoded into a new carrier PNG using TT-Encoder.
    • The image (original or encoded) is uploaded to S3 storage.
    • A temporary presigned URL is generated and sent to the RunningHub 2K Upscale workflow.
  3. The client polls status and downloads the upscaled result.
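
At its core, the TT-Encoder/Decoder step is least-significant-bit embedding. Below is a minimal sketch on raw pixel bytes; the real tt-encoder.ts works on PNG buffers via pngjs and includes framing the sketch omits.

```typescript
// Embed each payload bit into the LSB of one carrier byte.
function lsbEmbed(carrier: Uint8Array, payload: Uint8Array): Uint8Array {
  const out = carrier.slice();
  payload.forEach((byte, i) => {
    for (let bit = 0; bit < 8; bit++) {
      const pos = i * 8 + bit;                       // one carrier byte per payload bit
      out[pos] = (out[pos] & 0xfe) | ((byte >> bit) & 1);
    }
  });
  return out;
}

// Read `length` bytes back out of the carrier's least-significant bits.
function lsbExtract(carrier: Uint8Array, length: number): Uint8Array {
  const out = new Uint8Array(length);
  for (let i = 0; i < length; i++)
    for (let bit = 0; bit < 8; bit++)
      out[i] |= (carrier[i * 8 + bit] & 1) << bit;
  return out;
}
```

Since only the lowest bit of each byte changes, the carrier image is visually indistinguishable from the original, which is what lets the hidden data survive the round trip to the upscale workflow.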

Quick Start

Setup

# Clone the repository
git clone <repo-url>
cd rhub

# Configure environment variables
cp .env.example .env
# Edit .env with your API keys, S3 credentials, and RunPod endpoints

# Ensure the shared Docker network exists
docker network create shared_net 2>/dev/null || true

# Build and start
docker compose up -d --build

Configuration

API keys can be set in .env (recommended for persistent use) or entered directly in the Web UI. UI values override .env values.
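
The override rule amounts to a small helper like this (an illustrative sketch; names are not from the actual codebase):

```typescript
// A non-empty UI field wins; otherwise fall back to the .env default.
function resolveApiKey(uiValue: string | undefined, envValue: string | undefined): string {
  const trimmed = uiValue?.trim();
  return trimmed && trimmed.length > 0 ? trimmed : envValue ?? "";
}
```
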

Environment Variables

  • RUNNINGHUB_API_KEY (required for FLUX.1-dev, Upscaling, and RunningHub Enhance): RunningHub API key
  • GEMINI_API_KEY (required for the Gemini prompt provider): Google Gemini API key
  • RUNPOD_API_KEY (required for Z-Image, FLUX.2-klein, and the Qwen provider): RunPod API key
  • FAL_KEY (required for Enhance (Phota) and Create Video): fal.ai API key
  • RUNPOD_ZIMAGE_ENDPOINT (required for Z-Image): Full RunPod endpoint URL (e.g. https://api.runpod.ai/v2/<id>)
  • RUNPOD_FLUX_KLEIN_ENDPOINT (required for FLUX.2-klein): Full RunPod endpoint URL (e.g. https://api.runpod.ai/v2/<id>)
  • S3_ENDPOINT (required for Upscaling): S3 API endpoint URL (e.g. https://s3.us-west-004.backblazeb2.com)
  • S3_BUCKET (required for Upscaling): Name of the bucket for intermediary image storage
  • S3_ACCESS_KEY_ID (required for Upscaling): S3 access key ID
  • S3_SECRET_ACCESS_KEY (required for Upscaling): S3 secret access key
  • S3_REGION (required for Upscaling): S3 region (default: us-east-1)
  • BODY_SIZE_LIMIT (always required): Maximum upload size in bytes (e.g. 52428800 for 50 MB)

Web UI Settings

  • Generation Model: Choose FLUX.1-dev (RunningHub), Z-Image (RunPod Serverless), or FLUX.2-klein (RunPod Serverless)
  • AI Prompt Provider: Choose between Google Gemini and RunPod (Qwen 30B) for prompt engineering
  • RunningHub API Key: Overrides the RUNNINGHUB_API_KEY env var for this session
  • Gemini API Key: Overrides the GEMINI_API_KEY env var for this session
  • RunPod API Key: Overrides the RUNPOD_API_KEY env var for this session
  • fal.ai API Key: Overrides the FAL_KEY env var for this session (used by Enhance and Create Video)
  • Enable TT-Decoder: Toggles LSB steganography decoding/encoding (FLUX.1-dev only, persisted in localStorage)

Generation Parameters

Shared (all models)

  • Aspect Ratio (default: 1:1): 9 presets; dimensions are auto-calculated at 16px alignment (FLUX.1-dev and Z-Image only)
  • Output Sub-directory (default: generations): Sub-folder inside /mount where results are saved
  • Filename Prefix (default: image): Prefix applied to all saved filenames

Z-Image (RunPod Serverless)

  • Inference Steps (default: 40): Number of diffusion steps. The current upstream worker auto-tunes around 40 for Z-Image Base.
  • Guidance Scale (default: 4.5): CFG scale. 4.5 is the current photorealism default; higher values increase prompt adherence but can over-saturate.
  • Scheduler Shift (default: 3.0): FlowMatch scheduler shift. 3.0 is the photorealism sweet spot; use 5–7 for creative composition.
  • CFG Normalization (default: on): Enabled by default in the current Z-Image worker tuning for photorealism.
  • Beta Sigmas (default: off): Disabled by default in the current worker; can be enabled explicitly if a specific LoRA or endpoint revision needs it.
  • LoRA Stack (default: 1 empty row): Multiple LoRAs can be sent via loras[], each with URL, trigger word, and scale. Select from the curated preset dropdown (populated from loras-zimage.json) or enter any URL directly.
  • Seed (default: -1, random): Fixed seed for reproducibility.
  • Enable High-Res Refinement (default: on): RealPLKSR upscale + Z-Image img2img refinement pass for extra detail.
    ↳ Upscale Factor (default: 1.25): Scale multiplier for the refinement pass. 1.25× stays within 24 GB with LoRAs loaded.
    ↳ Denoising Strength (default: 0.42): Img2img denoising strength. The current upstream worker uses 0.42 to clean up the under-denoised base-pass look without changing composition too aggressively.
    ↳ Pass 2 Steps (default: 28): Inference steps for the refinement pass.
    ↳ Pass 2 Guidance (default: 4.5): CFG scale for the refinement pass.
    ↳ Pass 2 Prompt Limit (default: 512): Token limit for the pass-2 prompt encoder.

FLUX.2-klein (RunPod Serverless)

  • Quality Preset (default: realistic_character): Preset bundle of steps, guidance, shift, and resolution tuned for the use-case. Options: realistic_character, portrait_hd, cinematic_full, fast_preview, maximum_quality, character_portrait_best, character_portrait_vertical, character_cinematic
  • Aspect Ratio (default: 1:1): 12 presets (landscape, square, portrait), all ~1K resolution, 32px-aligned. Landscape: 21:9 (1024×448), 2:1 (1024×512), 16:9 (1024×576), 3:2 (1024×672), 4:3 (1024×768), 5:4 (1024×832). Square: 1:1 (1024×1024). Portrait: 4:5 (832×1024), 3:4 (768×1024), 2:3 (672×1024), 9:16 (576×1024), 1:2 (512×1024).
  • LoRA Stack (default: 1 empty row): Multiple LoRAs can be sent via loras[], each with URL, trigger word, and scale. Select from the curated preset dropdown (populated from loras-klein.json) or enter any URL directly.
  • Seed (default: -1, random): Fixed seed for reproducibility.
  • Prompt Length Limit (default: 512): Text encoder token cap (max_sequence_length).
  • LoRA Mix Method (default: absolute): Multi-LoRA scale interpretation: absolute (exact strengths) or normalized (auto-balance).
  • Enable Detail Refinement (default: off): Runs a second inference pass to sharpen fine details and textures.
    ↳ Refinement Strength (default: 0.2): Img2img denoising strength for the refinement pass.
    ↳ Refinement Steps (default: 12): Inference steps for the refinement pass.
    ↳ Refinement Guidance (default: 1.0): CFG scale for the refinement pass.
    ↳ Refinement LoRA Strength (default: 1.0): Scales LoRA influence during pass 2 (second_pass_lora_scale_multiplier).
  • Enable Upscaling (default: off): Upscales the generated image server-side before delivery.
    ↳ Upscale Factor (default: 2.0): Scale multiplier (0.25–4×).
    ↳ Upscale Blend (default: 0.35): Blending factor between original and upscaled features.
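
The aspect-ratio presets above can be captured as a lookup map (values taken directly from the table; every dimension is a multiple of 32 with the long edge at 1024):

```typescript
// FLUX.2-klein aspect-ratio presets: ratio → [width, height], 32px-aligned.
const KLEIN_PRESETS: Record<string, [number, number]> = {
  "21:9": [1024, 448],
  "2:1":  [1024, 512],
  "16:9": [1024, 576],
  "3:2":  [1024, 672],
  "4:3":  [1024, 768],
  "5:4":  [1024, 832],
  "1:1":  [1024, 1024],
  "4:5":  [832, 1024],
  "3:4":  [768, 1024],
  "2:3":  [672, 1024],
  "9:16": [576, 1024],
  "1:2":  [512, 1024],
};
```
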

Create Video (fal.ai Seedance 2.0)

  • Prompt: Text description of the video. Reference images can be cited as @Image1, audio as @Audio1.
  • Reference Images (up to 9): JPEG/PNG/WebP; paste a URL or upload from device. Sent as image_urls[].
  • Resolution (default: 720p): 480p (faster) or 720p (balanced).
  • Duration (default: auto): auto or 4–15 seconds.
  • Aspect Ratio (default: auto): auto, 21:9, 16:9, 4:3, 1:1, 3:4, 9:16.
  • Generate Audio (default: on): Synthesizes synchronized sound effects, ambient audio, and lip-sync.
  • Reference Audio (up to 3): MP3/WAV; paste a URL or upload from device (shown when Generate Audio is on).
  • Seed (default: -1, random): Fixed seed for reproducibility.
  • Output Directory (default: generations): Sub-folder inside /mount where the mp4 is saved.
  • Filename Prefix (default: video): Prefix applied to saved mp4 filenames.
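
A small validator for the @ImageN/@AudioN convention might look like the following (an illustrative sketch; the actual UI validation may differ):

```typescript
// Check that every @ImageN / @AudioN token in the prompt points at a provided
// reference, and that the hard limits (9 images, 3 audio files) are respected.
function validateReferences(prompt: string, imageCount: number, audioCount: number): string[] {
  const errors: string[] = [];
  if (imageCount > 9) errors.push("at most 9 reference images are allowed");
  if (audioCount > 3) errors.push("at most 3 reference audio files are allowed");
  for (const [, kind, n] of prompt.matchAll(/@(Image|Audio)(\d+)/g)) {
    const idx = Number(n);
    const limit = kind === "Image" ? imageCount : audioCount;
    if (idx < 1 || idx > limit) errors.push(`@${kind}${idx} has no matching reference`);
  }
  return errors;
}
```
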

API Reference

POST /api/generate

Submits an image generation job. Handles AI prompt engineering via Gemini or RunPod Qwen, then routes to the selected model backend.

  • FLUX.1-dev: Submits to RunningHub workflow. Returns { taskId, model: 'flux-dev', prompt }.
  • Z-Image: Submits to RunPod Z-Image Serverless endpoint with multi-LoRA loras array. Returns { jobId, model: 'z-image', prompt }.
  • FLUX.2-klein: Submits to RunPod FLUX.2-klein Serverless endpoint with preset, explicit width/height from the selected aspect ratio, multi-LoRA loras, lora_scale_mode, max_sequence_length, optional 2nd pass options, and optional upscale options. Returns { jobId, model: 'flux-klein', prompt }.

POST /api/zimage-check

Polls a RunPod job (Z-Image or FLUX.2-klein) for completion. When the job completes, downloads the image from the S3 presigned URL and saves it to the output directory. Returns { status, filename }.

POST /api/upscale

Handles multipart form uploads. Encodes images if requested, uploads to S3, and submits to specialized RunningHub upscaling workflows.

POST /api/check

Polls RunningHub task status and handles post-processing (image download + optional TT-Decode).

POST /api/enhance

Handles image enhancement. Accepts multipart form with engine (fal, runninghub, runninghub-detail), image file or URL. For fal.ai Phota: runs synchronously and returns { filename }. For RunningHub engines: submits task and returns { taskId } for polling.

POST /api/video

Submits a video generation job to the fal.ai queue (bytedance/seedance-2.0/reference-to-video). Returns { requestId, statusUrl, responseUrl, outputDir, prefix }.

POST /api/video-check

Polls a fal.ai video job for completion. When complete, downloads the mp4 and saves it to the output directory. Returns { status, filename }.

GET /api/images/[...path]

Serves generated/upscaled/enhanced images and videos from the mounted output volume.

Project Structure

rhub/
├── src/
│   ├── lib/
│   │   ├── loras-klein.json          # FLUX.2-klein LoRA preset list (generated by generate-loras.sh)
│   │   ├── loras-zimage.json         # Z-Image LoRA preset list (generated by generate-loras.sh)
│   │   ├── tt-decoder.ts             # LSB Steganography extraction
│   │   ├── tt-encoder.ts             # LSB Steganography embedding
│   │   ├── s3.ts                     # S3 Client & Presigned URL logic
│   │   └── locations.ts              # 300 photogenic locations
│   └── routes/
│       ├── +page.svelte              # Main UI (Runes) — Generate, Upscale, Enhance & Create Video tabs
│       ├── +page.server.ts           # Server load — passes env-backed API keys + LoRA lists to UI
│       └── api/
│           ├── generate/             # AI Synthesis + Multi-model Submission
│           ├── zimage-check/         # RunPod job polling + image download (Z-Image & FLUX.2-klein)
│           ├── upscale/              # Upload + Encoding + S3 Hosting
│           ├── check/                # RunningHub task polling + TT-Decode
│           ├── enhance/              # Image enhancement (fal.ai Phota / RunningHub workflows)
│           ├── video/                # fal.ai Seedance 2.0 video submission
│           ├── video-check/          # fal.ai video job polling + mp4 download
│           └── images/               # Static file serving from /mount
├── generate-loras.sh                 # Host script — regenerates LoRA JSON configs from local dirs
├── .env.example                      # Template for secrets and limits
├── .env                              # Local secrets (gitignored)
├── Dockerfile                        # Production build
└── docker-compose.yml                # Container orchestration

Updating the LoRA Presets

Run the host-side script before rebuilding the container whenever LoRA files are added or removed:

bash generate-loras.sh
docker compose up -d --build

The script scans these directories and writes the corresponding config files:

  • /mnt/backblaze/storage/LoRA/Civitai/flux.2-klein-9b → src/lib/loras-klein.json
  • /mnt/backblaze/storage/LoRA/Civitai/zImage → src/lib/loras-zimage.json

Tech Stack

  • Frontend: Svelte 5 (Runes), TypeScript, SvelteKit
  • Backend: Node.js, AWS SDK (S3), pngjs
  • AI: Gemini 3 Flash / RunPod Qwen 30B
  • Image Generation: FLUX.1-dev (RunningHub) / Z-Image (RunPod Serverless) / FLUX.2-klein (RunPod Serverless)
  • Image Enhancement: fal.ai Phota / RunningHub Enhance workflows
  • Video Generation: fal.ai Seedance 2.0 (bytedance/seedance-2.0/reference-to-video)
  • Infrastructure: Docker, Docker Compose

License

Private — All rights reserved.
