AI-powered bird feeder classification for Frigate NVR
Docs • Quick Start • Live Demo
If YA-WAMF is useful to you, starring the repo helps more people discover it.
A bird classification system that integrates with Frigate NVR to automatically identify birds visiting your feeder using advanced AI models.
If you share your guest dashboard publicly, I would love to see it. Please open an issue or discussion and drop a link so I can take a look.
A personal project built with AI-assisted coding, inspired by the original WhosAtMyFeeder. When I noticed the original project wasn't being maintained, I saw an opportunity to learn and build something better.
Built with help from AI coding assistants as an experiment in what's possible with modern development tools. Feedback and contributions are welcome!
Please see the Security Policy for supported versions, reporting guidelines, and a summary of security features.
A public instance of YA-WAMF is available here (always on dev branch, may be broken!):
When Frigate detects a bird at your feeder, YA-WAMF:
Detailed feature behavior, edge cases, and integration notes are documented in the links below.
Use the full docs hub for setup, integrations, and troubleshooting:
Here's the flow from bird to identification:
```
┌─────────────┐     MQTT Event      ┌─────────────┐
│   Frigate   │ ──────────────────> │   YA-WAMF   │
│    (NVR)    │   "bird detected"   │   Backend   │
└─────────────┘                     └──────┬──────┘
                                           │
                                           v
                                   ┌───────────────┐
                                   │  Fast Path:   │
                                   │  Use Frigate  │
                                   │  Sublabels?   │
                                   └───────┬───────┘
                                           │
                                    (No)   v   (Yes)
                                   ┌───────────────┐
                                   │   AI Engine   │
                                   │ (TFLite/ONNX) │
                                   └───────┬───────┘
                                           │
                                           v
                                   ┌───────────────┐
                                   │ Save to DB &  │
                                   │  Notify User  │
                                   └───────┬───────┘
                                           │
                                           v
                                   ┌───────────────┐
                                   │  Auto Video   │
                                   │   Analysis    │
                                   │ (Background)  │
                                   └───────────────┘
```
For the full event lifecycle and architecture details, see the documentation links above.
(GPU note: map /dev/dri into the container and grant the host's actual /dev/dri device GIDs; these are often video/render, but the numeric IDs vary.)

1. Download configuration files:
mkdir ya-wamf && cd ya-wamf
curl -O https://raw.githubusercontent.com/Jellman86/YetAnother-WhosAtMyFeeder/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/Jellman86/YetAnother-WhosAtMyFeeder/main/.env.example
cp .env.example .env
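For orientation, here is a heavily trimmed sketch of the kind of layout the compose file defines. The service names, port mapping, and volume paths shown are illustrative assumptions; the downloaded docker-compose.yml is authoritative:

```yaml
# Trimmed, illustrative sketch; the downloaded docker-compose.yml is authoritative.
services:
  yawamf-backend:
    env_file: .env           # MQTT/Frigate settings from step 2
    volumes:
      - ./config:/config     # persisted settings (config.json)
      - ./data:/data         # database and downloaded models
  yawamf-frontend:
    ports:
      - "9852:9852"          # dashboard port; exact mapping may differ
networks:
  default:
    name: ${DOCKER_NETWORK}  # from .env; must already exist (external)
    external: true
```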
2. Configure your environment:
Edit .env with your settings:
# Docker network (check with: docker network ls)
DOCKER_NETWORK=frigate
# Frigate instance
FRIGATE_URL=http://frigate:5000
# MQTT broker (usually 'mosquitto' if running in Docker)
MQTT_SERVER=mosquitto
MQTT_PORT=1883
# MQTT authentication (if required)
MQTT_AUTH=true
MQTT_USERNAME=mqtt_user
MQTT_PASSWORD=secret_password
# Timezone
TZ=Europe/London
3. Verify Docker network:
Ensure the network specified in .env exists and matches your Frigate setup:
docker network ls
Intel iGPU (OpenVINO) note (optional):
- Map /dev/dri:/dev/dri into the backend service
- Add group_add entries matching your host's /dev/dri numeric group IDs (ls -ln /dev/dri)

4. Set permissions, create directories, and start:
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=$PUID" >> .env
echo "PGID=$PGID" >> .env
mkdir -p config data/models
sudo chown -R "$PUID:$PGID" config data
sudo chmod -R u+rwX,g+rwX config data
docker compose up -d
If you deploy via Portainer, set the same PUID/PGID values as stack environment variables, create a Stack from docker-compose.yml, and use "Pull and redeploy" for updates (after changing image tags or pulling the latest :latest/:dev images).
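The optional Intel iGPU passthrough described above would look roughly like this in compose form. The service name and group IDs are placeholders; use the numeric IDs reported by ls -ln /dev/dri on your host:

```yaml
services:
  yawamf-backend:          # placeholder service name; match your compose file
    devices:
      - /dev/dri:/dev/dri  # expose the Intel iGPU render nodes for OpenVINO
    group_add:
      - "44"               # example: host 'video' GID (verify with: ls -ln /dev/dri)
      - "109"              # example: host 'render' GID (numeric IDs vary per distro)
```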
5. Access the dashboard:
Open http://localhost:9852 (or http://YOUR_SERVER_IP:9852)
6. Download (or re-download) AI models:
In the web UI, go to Settings -> Detection -> Model Manager and download a model. Re-download is also supported with progress tracking and safe staged replace/rollback behavior. Models are saved to data/models/ and persist across updates.
Guest mode is read-only and rate-limited. Guests can view detections and any existing AI Naturalist analysis, but cannot change settings, delete items, or run new AI analysis. You can hide camera names and limit the public history window in Settings > Security.
Check logs to confirm everything is working:
docker compose ps # Check container status
docker compose logs yawamf-backend -f # Follow backend logs
# You should see:
# MQTT config: auth=True port=1883 server=mosquitto
# Connected to MQTT topic=frigate/events
| Issue | Solution |
|---|---|
| MQTT connection failed | Verify DOCKER_NETWORK matches Frigate's network; check MQTT hostname and credentials |
| Frontend not loading | Run docker compose ps to check health; view logs with docker compose logs yawamf-frontend |
| No detections | Confirm Frigate is detecting birds; check backend logs for events; verify a model was downloaded in Settings |
For detailed troubleshooting, see the Troubleshooting Guide.
Q: What model should I use? For most users, RoPE ViT-B14 (the default) is the best balance of accuracy and speed for wildlife-wide classification. If you prefer a larger alternative on standard CPU, try ConvNeXt Large. If you have GPU acceleration available and need maximum accuracy on rare or difficult species, try EVA-02 Large. For constrained hardware, MobileNet V2 (legacy) is the fastest option. See AI Models & Performance for a full comparison.
Q: My birds are classified as "Unknown Bird". How do I fix this?
Lower the Min Confidence Floor (e.g., from 0.4 to 0.2), or lower the Confidence Threshold (e.g., from 0.7 to 0.5). Note that wildlife-wide models (ConvNeXt, EVA-02) naturally produce lower per-class scores than birds-only models because they compete against ~8,500 non-bird classes; the recommended threshold shown in the Model Manager card already accounts for this. Enabling Deep Video Analysis also helps for difficult identifications.
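As a rough sketch of how these two settings interact (this is my reading of the descriptions above; YA-WAMF's actual internals may differ):

```python
def label_detection(species, score, min_floor=0.4, threshold=0.7):
    """Hypothetical sketch of the two-level confidence check described above.

    - Below the Min Confidence Floor: the detection is rejected outright.
    - Between the floor and the Classification Threshold: kept, but shown
      as "Unknown Bird" rather than a species name.
    - At or above the threshold: labelled with the predicted species.
    """
    if score < min_floor:
        return None  # rejected entirely
    if score < threshold:
        return "Unknown Bird"
    return species

# Lowering the threshold keeps more low-confidence identifications:
print(label_detection("Eurasian Blue Tit", 0.55))                 # falls in the "Unknown Bird" band at defaults
print(label_detection("Eurasian Blue Tit", 0.55, threshold=0.5))  # now labelled with the species
```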
Q: What is "Trust Frigate Sublabels" and should I enable it? When enabled, if Frigate has already identified a bird species (via its own classifier or Frigate+), YA-WAMF will trust that label instantly and skip local AI inference. This saves CPU and is useful if you've already tuned Frigate's detection. Disable it to always run YA-WAMF's own AI independently.
Q: How do I share my dashboard publicly? Enable Guest Mode in Settings > Security. Guests get a read-only view with rate limiting. You can optionally hide camera names and restrict how far back the public history goes. See Authentication & Access.
Q: What Frigate version is required? YA-WAMF works best with Frigate 0.17+. The recommended Frigate config in this project uses Frigate 0.17's tiered recording retention format. See the Frigate Configuration Guide.
Q: Can I run YA-WAMF without Frigate?
YA-WAMF is designed specifically as a Frigate companion and requires Frigate as its event source. It listens for frigate/events MQTT messages and fetches media from the Frigate HTTP API.
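To illustrate the event flow, here is a simplified sketch of the fast-path decision from the diagram above. The payload fields follow Frigate's documented MQTT event schema; the handler logic itself is an assumption for illustration, not YA-WAMF's actual code:

```python
import json

def handle_frigate_event(raw, trust_sublabels=True):
    """Simplified sketch: decide between the Frigate-sublabel fast path
    and local AI classification for a frigate/events payload."""
    event = json.loads(raw)
    after = event.get("after", {})
    if after.get("label") != "bird":
        return None  # not a bird event; ignore
    sub_label = after.get("sub_label")
    if trust_sublabels and sub_label:
        # Fast path: Frigate (or Frigate+) already identified the species.
        return {"event_id": after.get("id"), "species": sub_label, "source": "frigate"}
    # Otherwise the snapshot would be fetched from the Frigate HTTP API
    # and run through the local TFLite/ONNX classifier.
    return {"event_id": after.get("id"), "species": None, "source": "local-ai"}

sample = json.dumps({"type": "new",
                     "after": {"id": "1712345678.0-abcd", "label": "bird",
                               "sub_label": "Eurasian Blue Tit"}}).encode()
print(handle_frigate_event(sample))
```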
Q: How do I run the model accuracy tests? See Model Accuracy & Benchmarks for full instructions; it covers CPU accuracy benchmarks, Intel GPU validation, and NVIDIA GPU diagnostic probes.
Q: Why are my clips very short?
This is expected behaviour for birds. If a bird is only at the feeder for 2 seconds, the Frigate event is 2 seconds. Configure record.alerts.pre_capture and record.detections.pre_capture in your Frigate config to add context around each detection (e.g., pre_capture: 5, post_capture: 25). See the Frigate Configuration Guide.
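In Frigate 0.14+ config syntax, a fragment reflecting the keys and example values mentioned above looks like:

```yaml
record:
  alerts:
    pre_capture: 5     # seconds of context before the event
    post_capture: 25   # seconds of context after the event
  detections:
    pre_capture: 5
    post_capture: 25
```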
YA-WAMF also supports optional Full visit clips for longer playback from Frigate recordings. When enabled in Settings > Connection > Frigate, YA-WAMF requests a configurable camera-level window around the detection timestamp, persists that full-visit file locally, and automatically prefers it on the normal clip route once it has been generated. The default window is 30 seconds before plus 90 seconds after the detection.
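A sketch of the window arithmetic behind full visit clips. The recording-export endpoint shape shown is an assumption based on Frigate's HTTP API and may differ between versions; check your Frigate version's API docs:

```python
def full_visit_window(detection_ts, pre=30, post=90):
    """Compute the recording window around a detection (defaults from above)."""
    return detection_ts - pre, detection_ts + post

def recording_clip_url(frigate_url, camera, detection_ts):
    # NOTE: the endpoint shape here is an assumption; verify against your
    # Frigate version's HTTP API documentation.
    start, end = full_visit_window(detection_ts)
    return f"{frigate_url}/api/{camera}/start/{start:.0f}/end/{end:.0f}/clip.mp4"

print(recording_clip_url("http://frigate:5000", "feeder_cam", 1712345678.0))
```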
Q: How do I update YA-WAMF?
Run docker compose pull && docker compose up -d from your stack directory. Settings and history are preserved because they live in the persistent /config and /data volumes.
All settings are managed through the web UI under Settings. Configuration is persisted to config/config.json.
| Setting | Description | Default |
|---|---|---|
| Frigate URL | Frigate instance for fetching media | http://frigate:5000 |
| MQTT Server | MQTT broker hostname | mqtt |
| Classification Threshold | Minimum confidence for detections (0-1) | 0.7 |
| Min Confidence Floor | Reject detections below this score | 0.4 |
| Trust Frigate Sublabels | Use Frigate's labels instead of local AI | Enabled |
| Auto Video Analysis | Analyze full video clips for accuracy | Disabled |
| AI Model | RoPE ViT-B14 (default), ConvNeXt, EVA-02, and birds-only model options in Model Manager | RoPE ViT-B14 |
| BirdWeather Token | Upload detections to BirdWeather | (none) |
| BirdNET-Go Topic | MQTT topic for audio detections | birdnet/text |
Read the Full Authentication & Access Control Guide
If you are upgrading from an older version using YA_WAMF_API_KEY, your setup will continue to work, but this method is deprecated and will be removed in a future release.
For detailed upgrade instructions, see the Migration Guide.
YA-WAMF includes a robust video proxy that streams clips directly from Frigate. This supports:
YA-WAMF includes a custom component for Home Assistant to bring your bird sightings into your smart home.
Features:
Setup:
1. Copy the custom_components/yawamf folder to your Home Assistant custom_components directory.
2. Configure the integration with your YA-WAMF instance URL (e.g. http://192.168.1.50:9852).

This project is actively developed and your feedback is valuable!
How to contribute:
Feel free to open PRs if you have improvements to share. Just keep in mind this is a hobby project maintained in spare time.
This project is licensed under the MIT License - see the LICENSE file for details.