SkinScan is a skin cancer detection web application that allows users to upload images of their skin conditions. Our system uses a multi-class ML model that analyses uploaded images, in conjunction with other features such as age and sex, to determine the probable type of condition and whether it is benign (noncancerous) or malignant (cancerous).
For legal and ethical reasons, the aim of this system is not to provide a medical diagnosis but rather a recommendation on whether the user should seek professional medical assistance. The goal is to minimise false negatives (i.e., optimise recall) while retaining acceptable overall model accuracy, so that users can trust the model's predictions for benign conditions. The model is deliberately biased slightly towards predicting malignant conditions to avoid missing true positives, as the risk of missing a malignant condition far outweighs the inconvenience of recommending medical advice for a benign one.
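As an illustration of this trade-off, the sketch below shows one common way to bias a classifier towards recall: weighting the malignant class more heavily during training. This is a minimal, hypothetical example rather than the project's actual training code; the input shape and class weights are placeholders.

```python
import tensorflow as tf

# Binary stand-in for the benign (0) vs. malignant (1) decision.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),          # placeholder feature vector size
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.Recall(name="recall"),
             tf.keras.metrics.BinaryAccuracy(name="accuracy")],
)

# Weighting malignant samples more heavily makes false negatives costlier,
# nudging the model towards higher recall at some cost in precision.
class_weight = {0: 1.0, 1: 3.0}           # illustrative values, not tuned
# model.fit(x_train, y_train, class_weight=class_weight, epochs=10)
```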
To enhance transparency and build user trust, the results include AI explainability measures. These display a percentage score for each feature's relative impact on the prediction, along with a heatmap overlay highlighting the areas of the input image that our ML model focused on during processing.
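Such heatmaps are commonly produced with a Grad-CAM-style approach; the sketch below assumes a Keras convolutional model and is purely illustrative. The project's actual explainability code, layer names, and preprocessing may differ.

```python
import tensorflow as tf

def gradcam_heatmap(model, image, last_conv_layer_name, class_index):
    """image: preprocessed array of shape (1, H, W, 3) matching the model's input."""
    # Model mapping the input to the last conv feature map and the predictions.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature map.
    grads = tape.gradient(class_score, conv_out)
    # Channel importance weights, then a weighted sum over channels.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    heatmap = tf.nn.relu(heatmap)
    heatmap = heatmap / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()  # values in [0, 1]; resize and overlay on the input image
```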
The system's web application includes an admin panel UI for administrator users. This panel provides access to system analytics and other functionality, such as managing the ML pipeline to train new models or replace the active model used for running inference on user data. Administrators can view previous model versions, review their hyperparameters, and compare performance metrics across versions using visual graphs. The admin panel also provides detailed insights into the usage and accuracy of the system, helping developers and healthcare professionals make informed improvements. This ensures the tool remains accurate, effective, and trustworthy.
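A hypothetical sketch of how such model versions could be tracked on the backend is shown below; the field names and structure are illustrative only and do not reflect the project's actual schema.

```python
from django.db import models

class ModelVersion(models.Model):
    version = models.CharField(max_length=32, unique=True)
    created_at = models.DateTimeField(auto_now_add=True)
    is_active = models.BooleanField(default=False)      # model currently used for inference
    hyperparameters = models.JSONField(default=dict)    # e.g. learning rate, epochs
    metrics = models.JSONField(default=dict)            # e.g. accuracy, recall per class
    weights_path = models.CharField(max_length=255)     # location of the trained weights

    def __str__(self):
        return f"Model {self.version} ({'active' if self.is_active else 'inactive'})"
```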
To run locally, refer to the instructions inside the Client directory README.
Make sure the .env file contains the following:
# Django secret key
SECRET_KEY = <KEY_VALUE>
# Use "False" for production
DEBUG = "True"
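For context, the sketch below shows how these values might be read by the Django settings module, assuming a python-dotenv-style loader; the project's actual settings code may differ. Note that DEBUG is stored as a string, so it should be compared against "True" rather than cast directly.

```python
import os
from dotenv import load_dotenv

load_dotenv()  # read the .env file into the process environment

SECRET_KEY = os.environ["SECRET_KEY"]
# DEBUG is stored as the string "True"/"False", so compare rather than cast:
DEBUG = os.environ.get("DEBUG", "False") == "True"
```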
Navigate to the Django project root folder:
cd server
Start the Django development server:
python3 manage.py runserver
If python3 is not available on your system, use:
python manage.py runserver
The development server will be available at http://127.0.0.1:8000
If any changes are made to the Django models (database schemas), the changes need to be migrated to the database(s). Execute the following commands from the Django project root folder:
python3 manage.py makemigrations
python3 manage.py migrate
python3 manage.py migrate --database=db_images
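The separate --database=db_images target suggests that image data lives in a second database. As a purely illustrative sketch (the app label and router name here are assumptions, not the project's actual code), a Django database router for such a setup could look like:

```python
class ImagesRouter:
    """Route models from a hypothetical 'images' app to the db_images database."""

    route_app_labels = {"images"}  # assumed app label, for illustration only

    def db_for_read(self, model, **hints):
        return "db_images" if model._meta.app_label in self.route_app_labels else None

    def db_for_write(self, model, **hints):
        return "db_images" if model._meta.app_label in self.route_app_labels else None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in self.route_app_labels:
            return db == "db_images"
        return db == "default"
```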
To run the Django unit tests, execute the following command from the Django project root folder:
python3 manage.py test
Once you are done, deactivate the Python virtual environment using:
deactivate
Navigate to the repository root folder in your terminal:
cd /path/to/repository
Create a Python virtual environment:
python3 -m venv venv
Activate the virtual environment:
source venv/bin/activate
Install the required dependencies:
On Linux:
pip install -r requirements.txt
On macOS:
pip install -r requirements-mac.txt
Note: this guide is written for WSL2 using Ubuntu 22.04 LTS (Jammy).
Python 3.11 is not included in the default Ubuntu repositories, so we need to add a PPA in order to install it. If you are using a different Ubuntu version, verify that Python 3.11 is provided by this PPA or use a different one.
Add the deadsnakes PPA to the system:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.11 python3-tk tk-dev
python3 --version
python3.11 --version
Install venv for Python 3.11:
sudo apt install python3.11-venv
cd /path/to/repository
python3.11 -m venv venv
source venv/bin/activate
Upgrade pip inside the virtual environment:
pip install --upgrade pip
pip install -r requirements.txt
In order to utilize the GPU for TensorFlow operations, additional setup is needed.
Note: verify that you have the hardware & system requirements needed: TensorFlow website
Ensure that you have the latest Nvidia GPU drivers installed. Most cards with updated drivers should support CUDA: Nvidia website
Download the CUDA Toolkit 12.3.2 installer for x86 from the Nvidia website.
Open WSL in a terminal and navigate to the directory where you saved the installer, then run the following commands:
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.3.2/local_installers/cuda-repo-wsl-ubuntu-12-3-local_12.3.2-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-12-3-local_12.3.2-1_amd64.deb
sudo cp /var/cuda-repo-wsl-ubuntu-12-3-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-3
Verify installation using the following command:
nvcc --version
If the last command doesn't work, you need to add the CUDA Toolkit to your environment variables:
Open ~/.bashrc with nano (or any other editor):
nano ~/.bashrc
Add the following lines:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
source ~/.bashrc
Verify that the nvcc command now works:
nvcc --version
Note: for this step you need to create an Nvidia developer account (for free) to download the library.
Download cuDNN v8.9.7 (December 5th, 2023) for CUDA 12.x for Ubuntu x86 from the Nvidia website.
Open WSL in a terminal and navigate to the directory where you saved the installer, then run the following commands:
sudo dpkg -i cudnn-local-repo-ubuntu2204-8.9.7.29_1.0-1_amd64.deb
Note: if you get a message about the keyring, copy the command from the output and run it in the terminal before proceeding with the next step.
sudo apt update
Install the cuDNN library:
sudo apt install -y libcudnn8
dpkg -l | grep libcudnn
Note: you should see output similar to:
ii  libcudnn8  8.9.7.29-1+cuda12.2  amd64  cuDNN runtime libraries
Finally, verify that TensorFlow can access the GPU by running the test script from the repository root:
cd /path/to/repository
source venv/bin/activate
python3.11 dev_utils/test_gpu.py
Note: TensorFlow will silently default to using the CPU. If you suspect that your GPU is not being utilized, you can enable explicit device logging by editing the script and changing the parameter in the following line to True:
tf.debugging.set_log_device_placement(False)
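For reference, a minimal GPU check along the lines of what dev_utils/test_gpu.py performs might look like the sketch below; the actual script's contents may differ.

```python
import tensorflow as tf

tf.debugging.set_log_device_placement(True)  # log which device each op runs on

gpus = tf.config.list_physical_devices("GPU")
print("GPUs detected:", gpus)

# A small matrix multiplication; with device logging enabled, the output
# shows whether it was placed on the GPU or fell back to the CPU.
a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
print("Result checksum:", float(tf.reduce_sum(tf.matmul(a, b))))
```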
The project has been developed over the course of 8 weeks by the following: