Specific use case: loads cardiac ultrasound clips (.mp4) so users can label them on specific metrics (image quality, view, etc.).
General use case: loads any video so users can apply any number of labels to it.
Output: each labeled video gets a corresponding file_name.json file containing its labels.
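The exact fields depend on the labels you configure in the form; a hypothetical label file for the cardiac use case might look like this (the field names and values below are illustrative, not fixed by the app):

```json
{
  "view": "A4C",
  "imageQuality": "good",
  "depth": 16
}
```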
The app will ask you to select your root folder. The root folder should contain storage, done, and data folders. All videos to be labeled go in the storage folder.
As videos are labeled, they are automatically moved to the done folder.
Their label files are automatically written to the data folder.
```
Root
│
└───storage
│       video_3.mp4
│       video_4.mp4
│       ...
│
└───image_recognition
│       saved_model
│
└───done
│       video_1.mp4
│       video_2.mp4
│
└───data
        video_1.json
        video_2.json
```
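As a rough sketch of that flow (not the app's actual code; the function and field names here are assumptions), labeling a clip boils down to writing its JSON into data and moving the video from storage to done:

```ts
// Hypothetical sketch of the labeling flow, assuming Node's fs module in the
// Electron main process; names are illustrative, not the app's real code.
import { promises as fs } from "fs";
import path from "path";

async function finishLabeling(root: string, videoFile: string, labels: Record<string, unknown>) {
  const base = path.parse(videoFile).name;

  // Labels end up in data/<video name>.json...
  await fs.writeFile(
    path.join(root, "data", `${base}.json`),
    JSON.stringify(labels, null, 2)
  );

  // ...and the labeled clip moves from storage/ to done/.
  await fs.rename(
    path.join(root, "storage", videoFile),
    path.join(root, "done", videoFile)
  );
}

// Example: finishLabeling("/path/to/Root", "video_3.mp4", { view: "A4C", imageQuality: "good" });
```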
- Svelte
- Electron
- TensorFlow 2 + OpenCV (used to determine clip depth)
To install dependencies, run the app, and package it:

```
npm install
npm start
npm run make
```

If you want to use the digit recognition we use to gauge clip depth, you'll need TensorFlow 2 and OpenCV installed on your system.
Note: the model is quite bad at classifying the digit 7 correctly.
If you don't want to use digit recognition, remove lines 10 and 33 from FormTemplate.components.svelte.
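For reference, the depth recognition roughly amounts to cropping the region of a frame that shows the depth readout and running it through the model in image_recognition/saved_model. Below is a minimal sketch written with @tensorflow/tfjs-node purely for illustration (the project itself lists TensorFlow 2 + OpenCV); the pre-cropped input, 28x28 grayscale size, and normalization are all assumptions, not the repo's actual pipeline:

```ts
// Hypothetical sketch: classify a depth digit from a pre-cropped frame region
// using the SavedModel in image_recognition/saved_model. Input size (28x28,
// grayscale) and 0-1 normalization are assumptions about the model.
import * as tf from "@tensorflow/tfjs-node";

async function predictDepthDigit(digitCrop: Buffer): Promise<number> {
  const model = await tf.node.loadSavedModel("image_recognition/saved_model");

  const input = tf.tidy(() => {
    const img = tf.node.decodeImage(digitCrop, 1);                       // decode as grayscale
    const resized = tf.image.resizeBilinear(img as tf.Tensor3D, [28, 28]);
    return resized.toFloat().div(255).expandDims(0);                     // add batch dimension
  });

  const logits = model.predict(input) as tf.Tensor;
  const digit = (await logits.argMax(-1).data())[0];                     // most likely digit class

  input.dispose();
  logits.dispose();
  return digit;
}
```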