Not affiliated with the Homebox project. This is an unofficial third-party companion app.
AI-powered companion for Homebox inventory management.
Take a photo of your stuff, and let AI identify and catalog items directly into your Homebox instance. Perfect for quickly inventorying a room, shelf, or collection.
```mermaid
flowchart LR
    A[Login<br/>Homebox] --> B[Select<br/>Location]
    B --> C[Capture<br/>Photos]
    C --> D[Review &<br/>Edit Items]
    D --> E[Submit to<br/>Homebox]
    B -.-> B1[/Browse, search,<br/>or scan QR/]
    C -.-> C1[/AI analyzes with<br/>OpenAI GPT-5/]
    D -.-> D1[/Edit names,<br/>quantities, labels/]
```
*LiteLLM is a Python adapter library we use to call OpenAI directly; no local AI model is required (unless you want one), just your API key.
GPT-5 mini (default) offers the best accuracy. GPT-5 nano is 3x cheaper but may need more corrections. Typical cost: ~$0.30 per 100 items (mini) or ~$0.10 per 100 items (nano).
Prices as of 2025-12-10, using OpenAIβs published pricing for GPT-5 mini and GPT-5 nano.
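As a back-of-envelope check of those figures, the helper below estimates a session's cost; the function name and the linear-scaling assumption are ours, not part of the app:

```python
# Rough session-cost estimator. Assumes the per-100-item figures above
# scale linearly with item count (an approximation, not a guarantee).
COST_PER_100_ITEMS_USD = {"gpt-5-mini": 0.30, "gpt-5-nano": 0.10}

def estimate_cost_usd(model: str, n_items: int) -> float:
    """Estimated analysis cost in USD for n_items with the given model."""
    return COST_PER_100_ITEMS_USD[model] * n_items / 100
```

For a 250-item garage inventory, this puts `gpt-5-mini` around $0.75 and `gpt-5-nano` around $0.25.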
Before you start, you'll need:
Compatibility: Tested with Homebox v0.21+. Earlier versions may have different authentication behavior.
Want to try it out without setting up Homebox? Use the public demo server:
```bash
docker run -p 8000:8000 \
  -e HBC_LLM_API_KEY=sk-your-key \
  -e HBC_HOMEBOX_URL=https://demo.homebox.software \
  ghcr.io/duelion/homebox-companion:latest
```
Open http://localhost:8000 and log in with `[email protected]` / `demo`.
```yaml
# docker-compose.yml
services:
  homebox-companion:
    image: ghcr.io/duelion/homebox-companion:latest
    container_name: homebox-companion
    restart: always
    environment:
      - HBC_LLM_API_KEY=sk-your-api-key-here
      - HBC_HOMEBOX_URL=http://your-homebox-ip:7745
    ports:
      - 8000:8000
```

```bash
docker compose up -d
```
Open http://localhost:8000 in your browser.
Tip: If Homebox runs on the same machine but outside Docker, use `http://host.docker.internal:PORT` as the URL.
Homebox Companion uses LiteLLM as a Python library to call AI providers. You don't need to self-host anything; just get an OpenAI API key from platform.openai.com and you're ready to go. We officially support and test with OpenAI GPT models only.
You can try other LiteLLM-compatible providers at your own risk. The app checks if your chosen model supports the required capabilities using LiteLLM's API:
Required capabilities:

- `litellm.supports_vision(model)`
- `litellm.supports_response_schema(model)`

Finding model names:
Model names are passed directly to LiteLLM. Use the exact names from LiteLLM's documentation:
Common examples:

- OpenAI: `gpt-4o`, `gpt-4o-mini`, `gpt-5-mini`
- Anthropic: `claude-sonnet-4-5`, `claude-3-5-sonnet-20241022`

Note: Model names must exactly match LiteLLM's expected format. Typos or incorrect formats will cause errors. Check LiteLLM's provider documentation for the correct model names.
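The capability gate described above amounts to requiring both LiteLLM checks to pass. The sketch below is illustrative (the app's internals may differ); it takes the two predicates as plain callables so the logic is testable without network access:

```python
from typing import Callable

def model_is_usable(
    model: str,
    supports_vision: Callable[[str], bool],
    supports_response_schema: Callable[[str], bool],
) -> bool:
    """A model qualifies only if it can accept images AND return
    structured output. In practice the two predicates would be
    litellm.supports_vision and litellm.supports_response_schema."""
    return supports_vision(model) and supports_response_schema(model)

# With LiteLLM installed, you would call it as:
#   model_is_usable("gpt-4o", litellm.supports_vision,
#                   litellm.supports_response_schema)
```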
Running Local Models:
You can run models locally using tools like Ollama, LM Studio, or vLLM. See LiteLLM's Local Server documentation for setup instructions.
Once your local server is running, configure the app:
```bash
HBC_LLM_API_KEY=any-value-works-for-local   # Just needs to be non-empty
HBC_LLM_API_BASE=http://localhost:11434     # Your local server URL
HBC_LLM_MODEL=ollama/llava:34b              # Your local model name
HBC_LLM_ALLOW_UNSAFE_MODELS=true            # Required for most local models
```
Note: Local models must support vision (e.g., llava, bakllava, moondream). Performance and accuracy vary widely.
⚠️ Important: Other providers (Anthropic, Google, OpenRouter, local models, etc.) are not officially supported. If you encounter errors, we may not be able to help. Use at your own risk.
Full reference: See `.env.example` for all available environment variables with detailed explanations and examples.
For a quick setup, you only need to provide your OpenAI API key. All other settings have sensible defaults.
| Variable | Required | Description |
|---|---|---|
| `HBC_LLM_API_KEY` | Yes | Your OpenAI API key |
| `HBC_HOMEBOX_URL` | No | Your Homebox instance URL (defaults to demo server) |
| Variable | Default | Description |
|---|---|---|
| `HBC_LLM_MODEL` | `gpt-5-mini` | Model to use. Supported: `gpt-5-mini`, `gpt-5-nano`. |
| `HBC_LLM_API_BASE` | *(unset)* | Custom API base URL (for proxies or experimental providers) |
| `HBC_LLM_ALLOW_UNSAFE_MODELS` | `false` | Skip capability validation for unrecognized models |
| `HBC_LLM_TIMEOUT` | `120` | LLM request timeout in seconds |
| `HBC_IMAGE_QUALITY` | `medium` | Image quality for Homebox uploads: `raw`, `high`, `medium`, `low` |
Controls the compression applied to images uploaded to Homebox. Compression happens server-side during AI analysis to avoid slowing down mobile devices.
| Quality Level | Max Dimension | JPEG Quality | File Size | Use Case |
|---|---|---|---|---|
| `raw` | No limit | Original | Largest | Full quality originals |
| `high` | 2560px | 85% | Large | Best quality, moderate size |
| `medium` | 1920px | 75% | Moderate | Default (balanced) |
| `low` | 1280px | 60% | Smallest | Faster uploads, smaller storage |
Example:

```bash
HBC_IMAGE_QUALITY=high
```
Note: This setting only affects images uploaded to Homebox. AI analysis always uses optimized images regardless of this setting.
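The "Max Dimension" column above amounts to an aspect-preserving downscale of the longer side. A minimal sketch (preset values copied from the table; the function name is ours, not the app's):

```python
# (max long-side pixels, JPEG quality) per preset, per the table above.
QUALITY_PRESETS = {"high": (2560, 85), "medium": (1920, 75), "low": (1280, 60)}

def scaled_size(width: int, height: int, preset: str) -> tuple[int, int]:
    """Shrink so the longer side fits the preset's max dimension,
    preserving aspect ratio; never upscale."""
    max_dim, _jpeg_quality = QUALITY_PRESETS[preset]
    longest = max(width, height)
    if longest <= max_dim:
        return width, height          # already small enough
    scale = max_dim / longest
    return round(width * scale), round(height * scale)
```

For example, a 4000×3000 photo at `medium` comes out at 1920×1440, while an 800×600 image is left untouched.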
| Variable | Default | Description |
|---|---|---|
| `HBC_CAPTURE_MAX_IMAGES` | `30` | Maximum photos per capture session |
| `HBC_CAPTURE_MAX_FILE_SIZE_MB` | `10` | Maximum file size per image in MB |
Note: These are experimental settings. It's advisable to keep the default values to minimize data loss risk during capture sessions.
| Variable | Default | Description |
|---|---|---|
| `HBC_RATE_LIMIT_ENABLED` | `true` | Enable/disable API rate limiting |
| `HBC_RATE_LIMIT_RPM` | `400` | Requests per minute (80% of Tier 1 limit) |
| `HBC_RATE_LIMIT_TPM` | `400000` | Tokens per minute (80% of Tier 1 limit) |
| `HBC_RATE_LIMIT_BURST_MULTIPLIER` | `1.5` | Burst capacity multiplier |
Note: Default settings are conservative (80% of OpenAI Tier 1 limits). Only configure if you have a higher-tier account or need to adjust limits.
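One way to picture the RPM limit with burst headroom is a token bucket whose capacity is the steady per-second rate times the burst multiplier. This is an illustrative sketch only; the class name, clock injection, and capacity formula are our assumptions, not the app's actual code:

```python
import time

class RpmBucket:
    """Token-bucket sketch: refill at rpm/60 tokens per second; allow a
    burst of up to burst_multiplier seconds' worth of requests at once."""

    def __init__(self, rpm: float, burst_multiplier: float = 1.5,
                 clock=time.monotonic):
        self.rate = rpm / 60.0                        # tokens per second
        self.capacity = self.rate * burst_multiplier  # burst headroom
        self.tokens = self.capacity                   # start full
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

At the default `rpm=400` and `burst_multiplier=1.5`, the bucket holds about 10 tokens, so roughly 10 requests can fire back-to-back before the steady ~6.7 req/s refill rate takes over.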
Examples for different OpenAI tiers:
- `HBC_RATE_LIMIT_RPM=4000 HBC_RATE_LIMIT_TPM=1600000`
- `HBC_RATE_LIMIT_RPM=4000 HBC_RATE_LIMIT_TPM=3200000`

| Variable | Default | Description |
|---|---|---|
| `HBC_SERVER_HOST` | `0.0.0.0` | Server bind address |
| `HBC_SERVER_PORT` | `8000` | Server port |
| `HBC_LOG_LEVEL` | `INFO` | Logging level |
| `HBC_DISABLE_UPDATE_CHECK` | `false` | Disable update notifications |
| `HBC_MAX_UPLOAD_SIZE_MB` | `20` | Maximum file upload size in MB |
| `HBC_CORS_ORIGINS` | `*` | Allowed CORS origins (comma-separated or `*`) |
Customize how AI formats detected item fields. Set via environment variables or the Settings page (UI takes priority).
| Variable | Description |
|---|---|
| `HBC_AI_OUTPUT_LANGUAGE` | Language for AI output (default: English) |
| `HBC_AI_DEFAULT_LABEL_ID` | Label ID to auto-apply to all items |
| `HBC_AI_NAME` | Custom instructions for item naming |
| `HBC_AI_DESCRIPTION` | Custom instructions for descriptions |
| `HBC_AI_QUANTITY` | Custom instructions for quantity counting |
| `HBC_AI_MANUFACTURER` | Instructions for manufacturer extraction |
| `HBC_AI_MODEL_NUMBER` | Instructions for model number extraction |
| `HBC_AI_SERIAL_NUMBER` | Instructions for serial number extraction |
| `HBC_AI_PURCHASE_PRICE` | Instructions for price extraction |
| `HBC_AI_PURCHASE_FROM` | Instructions for retailer extraction |
| `HBC_AI_NOTES` | Custom instructions for notes |
| `HBC_AI_NAMING_EXAMPLES` | Example names to guide the AI |
Tip: The Settings page has an "Export as Environment Variables" button.
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.