Not affiliated with the Homebox project. This is an unofficial third-party companion app.
AI-powered companion for Homebox inventory management.
Take a photo of your stuff, and let AI identify and catalog items directly into your Homebox instance. Perfect for quickly inventorying a room, shelf, or collection. Use the AI Chat to manage your inventory, find locations, or update details just by asking.
```mermaid
flowchart LR
    A[Login<br/>Homebox] --> B[Select<br/>Location]
    B --> C[Capture<br/>Photos]
    C --> D[Review &<br/>Edit Items]
    D --> E[Submit to<br/>Homebox]
    B -.-> B1[/Browse, search,<br/>or scan QR/]
    C -.-> C1[/AI analyzes with<br/>OpenAI GPT-5/]
    D -.-> D1[/Edit names,<br/>quantities, tags/]
```
*LiteLLM is a Python adapter library we use to call OpenAI directly: no local AI model is required (unless you want one), just your API key.
GPT-5 mini (default) offers the best accuracy. GPT-5 nano is 3x cheaper but may need more corrections. Typical cost: ~$0.30 per 100 items (mini) or ~$0.10 per 100 items (nano).
Prices as of 2025-12-10, using OpenAI's published pricing for GPT-5 mini and GPT-5 nano.
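These per-100-item figures scale linearly, so you can estimate a batch up front. A minimal sketch (the cost table below just encodes the rough estimates quoted above, not official pricing):

```python
# Rough USD cost per 100 scanned items, from the estimates above
COST_PER_100 = {"gpt-5-mini": 0.30, "gpt-5-nano": 0.10}

def estimate_usd(model: str, item_count: int) -> float:
    """Linear scaling of the per-100-item estimate."""
    return COST_PER_100[model] * item_count / 100

print(f"500 items with gpt-5-mini: ~${estimate_usd('gpt-5-mini', 500):.2f}")
```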
Before you start, you'll need:
Compatibility: Tested with Homebox v0.21+. Earlier versions may have different authentication behavior.
Want to try it out without setting up Homebox? Use the public demo server:

```bash
docker run -p 8000:8000 \
  -e HBC_LLM_API_KEY=sk-your-key \
  -e HBC_HOMEBOX_URL=https://demo.homebox.software \
  ghcr.io/duelion/homebox-companion:latest
```

Open http://localhost:8000 and log in with [email protected] / demo
```yaml
# docker-compose.yml
services:
  homebox-companion:
    image: ghcr.io/duelion/homebox-companion:latest
    container_name: homebox-companion
    restart: always
    environment:
      - HBC_LLM_API_KEY=sk-your-api-key-here
      - HBC_HOMEBOX_URL=http://your-homebox-ip:7745
    ports:
      - "8000:8000"
```

```bash
docker compose up -d
```
Open http://localhost:8000 in your browser.
Tip: If Homebox runs on the same machine but outside Docker, use `http://host.docker.internal:PORT` as the URL.
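On Linux, `host.docker.internal` is not defined by default. One common workaround (assuming Docker Engine 20.10+) is mapping it to the host gateway in your compose file:

```yaml
services:
  homebox-companion:
    # ...existing configuration from above...
    extra_hosts:
      - "host.docker.internal:host-gateway"  # resolves to the Docker host on Linux
```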
ARM64/Raspberry Pi: Docker images are built for both `linux/amd64` and `linux/arm64` architectures.
The chat assistant has access to 24 tools for interacting with your Homebox inventory:
Read-Only (auto-execute):
| Tool | Description |
|------|-------------|
| list_locations | List all locations |
| get_location | Get location details with children |
| list_tags | List all tags |
| list_items | List items with filtering/pagination |
| search_items | Search items by text query |
| get_item | Get full item details |
| get_item_by_asset_id | Look up item by asset ID |
| get_item_path | Get item's full location path |
| get_location_tree | Get hierarchical location tree |
| get_statistics | Get inventory statistics |
| get_statistics_by_location | Item counts by location |
| get_statistics_by_tag | Item counts by tag |
| get_attachment | Get attachment content |
Write (requires approval):
| Tool | Description |
|------|-------------|
| create_item | Create a new item |
| update_item | Update item fields |
| create_location | Create a new location |
| update_location | Update location details |
| create_tag | Create a new tag |
| update_tag | Update tag details |
| upload_attachment | Upload attachment to item |
| ensure_asset_ids | Assign asset IDs to all items |
Destructive (requires approval):
| Tool | Description |
|------|-------------|
| delete_item | Delete an item |
| delete_location | Delete a location |
| delete_tag | Delete a tag |
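The auto-execute vs. approval split above is just a membership check over the three tool groups. A minimal sketch of that policy (illustrative only; the tool names are copied from the tables, but the code is not the app's actual implementation):

```python
# Tool groups copied from the tables above
READ_ONLY = {
    "list_locations", "get_location", "list_tags", "list_items", "search_items",
    "get_item", "get_item_by_asset_id", "get_item_path", "get_location_tree",
    "get_statistics", "get_statistics_by_location", "get_statistics_by_tag",
    "get_attachment",
}
WRITE = {
    "create_item", "update_item", "create_location", "update_location",
    "create_tag", "update_tag", "upload_attachment", "ensure_asset_ids",
}
DESTRUCTIVE = {"delete_item", "delete_location", "delete_tag"}

def needs_approval(tool: str) -> bool:
    # Read-only tools auto-execute; write and destructive tools wait for user approval
    return tool in WRITE or tool in DESTRUCTIVE
```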
Homebox Companion uses LiteLLM as a Python library to call AI providers. You don't need to self-host anything; just get an OpenAI API key from platform.openai.com and you're ready to go. We officially support and test with OpenAI GPT models only.
Fallback Support: You can configure a secondary LLM profile in Settings that automatically activates if your primary provider fails.
You can try other LiteLLM-compatible providers at your own risk. The app checks if your chosen model supports the required capabilities using LiteLLM's API:
Required capabilities (photo scanning):

- `litellm.supports_vision(model)`
- `litellm.supports_response_schema(model)`

Required for Chat assistant (in addition to the above):

- `litellm.supports_function_calling(model)`. Models without native tool calling (e.g., llava, moondream) will work for photo scanning but not for the Chat assistant, which relies on tool calls to query your inventory.

Finding model names:
Model names are passed directly to LiteLLM. Use the exact names from LiteLLM's documentation:
Common examples:

- OpenAI: `gpt-4o`, `gpt-4o-mini`, `gpt-5-mini`
- Anthropic: `claude-sonnet-4-5`, `claude-3-5-sonnet-20241022`

Note: Model names must exactly match LiteLLM's expected format. Typos or incorrect formats will cause errors. Check LiteLLM's provider documentation for the correct model names.
Running Local Models:
You can run models locally using tools like Ollama, LM Studio, or vLLM. See LiteLLM's Local Server documentation for setup instructions.
Once your local server is running, configure the app:
```bash
HBC_LLM_API_KEY=any-value-works-for-local  # Just needs to be non-empty
HBC_LLM_API_BASE=http://localhost:11434    # Your local server URL
HBC_LLM_MODEL=ollama/llava:34b             # Your local model name
HBC_LLM_ALLOW_UNSAFE_MODELS=true           # Required for most local models
```
Note: Local models must support vision for photo scanning (e.g., llava, bakllava, moondream). For the Chat assistant, the model must also support function calling; most vision-only models do not. Check your model's capabilities with `litellm.supports_function_calling("ollama/your-model")`. Performance and accuracy vary widely.
⚠️ Important: Other providers (Anthropic, Google, OpenRouter, local models, etc.) are not officially supported. If you encounter errors, we may not be able to help. Use at your own risk.
📖 Full reference: See `.env.example` for all available environment variables with detailed explanations and examples.
For a quick setup, you only need to provide your OpenAI API key. All other settings have sensible defaults.
| Variable | Required | Description |
|---|---|---|
| `HBC_LLM_API_KEY` | Yes | Your OpenAI API key |
| `HBC_HOMEBOX_URL` | No | Your Homebox instance URL (defaults to demo server) |
| `HBC_LINK_BASE_URL` | No | Public URL for Homebox links in chat (defaults to `HBC_HOMEBOX_URL`) |
| Variable | Default | Description |
|---|---|---|
| `HBC_LLM_MODEL` | `gpt-5-mini` | Model to use. Supported: `gpt-5-mini`, `gpt-5-nano`. |
| `HBC_LLM_API_BASE` | (none) | Custom API base URL (for proxies or experimental providers) |
| `HBC_LLM_ALLOW_UNSAFE_MODELS` | `false` | Skip capability validation for unrecognized models |
| `HBC_LLM_TIMEOUT` | `120` | LLM request timeout in seconds |
| `HBC_LLM_STREAM_TIMEOUT` | `300` | Streaming timeout for large responses (e.g., hierarchical views) |
| `HBC_IMAGE_QUALITY` | `medium` | Image quality for Homebox uploads: `raw`, `high`, `medium`, `low` |
Controls the compression applied to images uploaded to Homebox. Compression happens server-side during AI analysis to avoid slowing down mobile devices.
| Quality Level | Max Dimension | JPEG Quality | File Size | Use Case |
|---|---|---|---|---|
| `raw` | No limit | Original | Largest | Full-quality originals |
| `high` | 2560px | 85% | Large | Best quality, moderate size |
| `medium` | 1920px | 75% | Moderate | Default; balanced |
| `low` | 1280px | 60% | Smallest | Faster uploads, smaller storage |
Example:

```bash
HBC_IMAGE_QUALITY=high
```
Note: This setting only affects images uploaded to Homebox. AI analysis always uses optimized images regardless of this setting.
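The max-dimension column in the table amounts to a simple downscale rule: shrink the longer side to the limit and scale the other side proportionally. A rough sketch (preset values copied from the table; this is not the app's actual resizing code):

```python
# (max dimension in px, JPEG quality %) per level, from the table above
QUALITY_PRESETS = {
    "raw":    (None, None),
    "high":   (2560, 85),
    "medium": (1920, 75),
    "low":    (1280, 60),
}

def target_size(width: int, height: int, level: str) -> tuple[int, int]:
    """Return the dimensions an image would be downscaled to for a quality level."""
    max_dim, _jpeg_quality = QUALITY_PRESETS[level]
    if max_dim is None or max(width, height) <= max_dim:
        return width, height  # raw, or already within the limit
    scale = max_dim / max(width, height)
    return round(width * scale), round(height * scale)
```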
| Variable | Default | Description |
|---|---|---|
| `HBC_CAPTURE_MAX_IMAGES` | `30` | Maximum photos per capture session |
| `HBC_CAPTURE_MAX_FILE_SIZE_MB` | `10` | Maximum file size per image in MB |
Note: These are experimental settings. It's advisable to keep the default values to minimize data loss risk during capture sessions.
| Variable | Default | Description |
|---|---|---|
| `HBC_RATE_LIMIT_ENABLED` | `true` | Enable/disable API rate limiting |
| `HBC_RATE_LIMIT_RPM` | `400` | Requests per minute (80% of Tier 1 limit) |
| `HBC_RATE_LIMIT_TPM` | `400000` | Tokens per minute (80% of Tier 1 limit) |
| `HBC_RATE_LIMIT_BURST_MULTIPLIER` | `1.5` | Burst capacity multiplier |
Note: Default settings are conservative (80% of OpenAI Tier 1 limits). Only configure if you have a higher-tier account or need to adjust limits.
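To illustrate how `HBC_RATE_LIMIT_RPM` and `HBC_RATE_LIMIT_BURST_MULTIPLIER` interact, here is a minimal token-bucket sketch: the bucket holds `rpm * multiplier` tokens for short bursts, then refills at the steady per-minute rate. This is illustrative only, not the app's actual limiter:

```python
import time

class TokenBucket:
    """Minimal token-bucket sketch (not the app's actual implementation)."""

    def __init__(self, rpm: int, burst_multiplier: float = 1.5):
        self.capacity = rpm * burst_multiplier   # burst headroom
        self.tokens = self.capacity
        self.refill_per_sec = rpm / 60.0         # steady-state rate
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```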
Examples for different OpenAI tiers:

- `HBC_RATE_LIMIT_RPM=4000 HBC_RATE_LIMIT_TPM=1600000`
- `HBC_RATE_LIMIT_RPM=4000 HBC_RATE_LIMIT_TPM=3200000`

| Variable | Default | Description |
|---|---|---|
| `HBC_SERVER_HOST` | `0.0.0.0` | Server bind address |
| `HBC_SERVER_PORT` | `8000` | Server port |
| `HBC_LOG_LEVEL` | `INFO` | Logging level |
| `HBC_DISABLE_UPDATE_CHECK` | `false` | Disable update notifications |
| `HBC_MAX_UPLOAD_SIZE_MB` | `20` | Maximum file upload size in MB |
| `HBC_CORS_ORIGINS` | `*` | Allowed CORS origins (comma-separated or `*`) |
When deploying to production, review these security settings:
| Variable | Default | Production Recommendation |
|---|---|---|
| `HBC_CORS_ORIGINS` | `*` | Set to specific origins (e.g., `https://your-domain.com`) |
| `HBC_AUTH_RATE_LIMIT_RPM` | `10` | Login attempts per minute per IP (brute-force protection) |
| `HBC_CHAT_RATE_LIMIT_RPM` | `20` | Chat messages per minute per IP (LLM cost protection) |
CORS Example:

```bash
# Allow only your frontend domain
HBC_CORS_ORIGINS=https://inventory.example.com

# Multiple origins (comma-separated)
HBC_CORS_ORIGINS=https://inventory.example.com,https://admin.example.com
```
Note: The default `HBC_CORS_ORIGINS=*` allows requests from any origin, which is convenient for development but should be restricted in production environments exposed to the internet.
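Parsing the comma-separated `HBC_CORS_ORIGINS` value can be sketched like this (illustrative only, not the app's actual parsing code):

```python
def parse_origins(value: str) -> list[str]:
    """Split an HBC_CORS_ORIGINS-style value; '*' means any origin."""
    value = value.strip()
    if value == "*":
        return ["*"]
    return [origin.strip() for origin in value.split(",") if origin.strip()]
```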
| Variable | Default | Description |
|---|---|---|
| `HBC_PRINT_ENABLED` | `false` | Show a "Print Label" button after items are created |
When enabled, a print button appears on the post-creation screen for each item. Pressing it triggers Homebox's built-in labelmaker, which generates and prints a label via the command configured on your Homebox server.
Homebox server prerequisite: Set the `HBOX_LABEL_MAKER_PRINT_COMMAND` environment variable on your Homebox instance (e.g., `lp -d MyPrinter %s`). Without it, print requests will fail. See the Homebox documentation for details.
Customize how AI formats detected item fields. Set via environment variables or the Settings page (UI takes priority).
| Variable | Description |
|---|---|
| `HBC_AI_OUTPUT_LANGUAGE` | Language for AI output (default: English) |
| `HBC_AI_DEFAULT_TAG_ID` | Tag ID to auto-apply to all items |
| `HBC_AI_NAME` | Custom instructions for item naming |
| `HBC_AI_DESCRIPTION` | Custom instructions for descriptions |
| `HBC_AI_QUANTITY` | Custom instructions for quantity counting |
| `HBC_AI_MANUFACTURER` | Instructions for manufacturer extraction |
| `HBC_AI_MODEL_NUMBER` | Instructions for model number extraction |
| `HBC_AI_SERIAL_NUMBER` | Instructions for serial number extraction |
| `HBC_AI_PURCHASE_PRICE` | Instructions for price extraction |
| `HBC_AI_PURCHASE_FROM` | Instructions for retailer extraction |
| `HBC_AI_NOTES` | Custom instructions for notes |
| `HBC_AI_NAMING_EXAMPLES` | Example names to guide the AI |
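The "UI takes priority" rule above amounts to a simple merge of the two setting sources. A hypothetical sketch (the function name and dict shapes are assumptions for illustration, not the app's API):

```python
def effective_ai_settings(env_values: dict, ui_values: dict) -> dict:
    """Merge AI field instructions: UI values override environment variables."""
    merged = {key: value for key, value in env_values.items() if value}
    merged.update({key: value for key, value in ui_values.items() if value})
    return merged
```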
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.