The face mask detection project was implemented using a deep learning technique built on the MobileNetV2 architecture. We divided the project into three phases:
a. Data Preprocessing.
b. Training face mask detector.
c. Implementing face mask detector.
First, we used a suitable algorithm to train the model on the mask and non-mask images. Once the model has been trained, it is passed to the mask-detector loading stage, which can detect and classify each face.
Data preprocessing is a method for converting unclean data into a clean dataset. It entails converting data from its available format into another format that is more usable, consistent, and meaningful.
Figure 3 shows the data preprocessing stage: we converted all the images in the “with mask” and “without mask” folders into arrays, and from those arrays we built the deep-learning model.
- Looping over the image paths in the “with mask” and “without mask” folders.
- Resizing all input images uniformly to 224 × 224 pixels.
- Every photograph in the dataset is labelled “with mask” or “without mask”. The data and label lists are initialized; since the labels are strings in alphabetical order, a label binarizer is used to convert them to numeric values (0, 1).
- Converting each image into an array with the img_to_array function, which comes from the keras.preprocessing.image module.
- Appending each pre-processed image to the data list, and finally converting the lists into NumPy arrays.
- Splitting the data into training and testing sets.
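The preprocessing steps above can be sketched as follows. This is a minimal NumPy-only stand-in: the real pipeline would use Keras' `load_img`/`img_to_array` and scikit-learn's `LabelBinarizer`, the random array below stands in for images actually loaded from the two folders, and the helper names (`binarize_labels`, `split_data`) are hypothetical.

```python
import numpy as np

IMG_SIZE = 224  # every image is resized to 224 x 224, as in the text

def binarize_labels(labels):
    # Mimics LabelBinarizer + one-hot encoding: classes are taken in
    # alphabetical order and each label becomes a 0/1 row.
    classes = sorted(set(labels))
    index = {c: i for i, c in enumerate(classes)}
    onehot = np.zeros((len(labels), len(classes)), dtype="float32")
    for row, lab in enumerate(labels):
        onehot[row, index[lab]] = 1.0
    return onehot, classes

# Stand-in for the arrays produced by img_to_array on real photos.
data = np.random.rand(10, IMG_SIZE, IMG_SIZE, 3).astype("float32")
labels = ["with mask"] * 5 + ["without mask"] * 5

onehot, classes = binarize_labels(labels)

def split_data(data, targets, test_frac=0.2, seed=42):
    # Shuffled train/test split, like sklearn's train_test_split.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    cut = int(len(data) * (1 - test_frac))
    return data[idx[:cut]], data[idx[cut:]], targets[idx[:cut]], targets[idx[cut:]]

trainX, testX, trainY, testY = split_data(data, onehot)
```

In the real code the split would be stratified on the labels so both classes appear in both sets.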
Training the Model
Building the model using the MobileNetV2 architecture:
After the input image is processed into an array, the data is sent into MobileNetV2; pooling is then applied to the resulting feature map, which is flattened to create a fully connected layer that gives the output.
MobileNetV2 is faster than a standard convolutional neural network and uses fewer parameters. The weights of each layer in the model are pre-trained on the ImageNet dataset; the padding, strides, kernel height, and input and output channels are all represented by these weights. MobileNetV2 was selected as the algorithm because it yields a model small enough to deploy on devices. On top of the MobileNetV2 base, a customized fully connected head with four sequential layers was created:
1. an average pooling layer with a 7 × 7 window;
2. a linear (dense) layer with the ReLU activation function;
3. a dropout layer;
4. a final softmax layer that outputs two probabilities, one for each of the “Mask” and “No Mask” classes.
An image data generator augments the dataset by creating many variants of a single image (changing its properties), which are later used for training these layers; we use the Adam optimizer to optimize the result. The generated images are fed in alongside the existing training data, and we then predict the output by evaluating the network on NumPy arrays.
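The data flow through the custom head can be sketched in NumPy, assuming the standard MobileNetV2 output shape of 7 × 7 × 1280 for a 224 × 224 input; the random tensors below stand in for the base network's features and for learned weights, and the 128-unit hidden size is a hypothetical choice, not from the text.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Stand-in for the MobileNetV2 base output on one 224x224 image:
# a 7 x 7 x 1280 feature map.
features = rng.standard_normal((1, 7, 7, 1280)).astype("float32")

# 1. Average pooling with a 7x7 window collapses the spatial grid.
pooled = features.mean(axis=(1, 2))          # shape (1, 1280)

# 2. Flatten (already flat here), then a dense layer with ReLU.
w1 = rng.standard_normal((1280, 128)) * 0.01
hidden = relu(pooled @ w1)                   # shape (1, 128)

# 3. Dropout is active only during training, so it is skipped here.

# 4. Final dense layer + softmax -> two class probabilities,
#    P("Mask") and P("No Mask").
w2 = rng.standard_normal((128, 2)) * 0.01
probs = softmax(hidden @ w2)                 # shape (1, 2)
```

The two softmax outputs always sum to 1, so the larger of the pair decides the class.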
We specified three hyperparameter constants: an initial learning rate of 1e-4, 20 training epochs, and a batch size of 32. The learning rate is kept this low to obtain better accuracy. Fine-tuning MobileNetV2 on the mask/no-mask dataset produced a classifier that is 99% accurate.
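The hyperparameters above can be written as constants. The decay formula below assumes the common convention (in legacy Keras) of setting Adam's decay to `INIT_LR / EPOCHS`, so the effective rate shrinks as lr / (1 + decay · iterations); the training-set size of 3,000 images is a hypothetical figure for illustration, not from the text.

```python
INIT_LR = 1e-4   # initial learning rate
EPOCHS = 20      # number of training epochs
BS = 32          # batch size

# Hypothetical training-set size, just to make the arithmetic concrete.
num_train = 3000
steps_per_epoch = num_train // BS            # batches processed per epoch

# Assumed legacy-Keras style learning-rate decay for Adam:
# lr_t = INIT_LR / (1 + decay * t), with decay = INIT_LR / EPOCHS.
decay = INIT_LR / EPOCHS
total_iters = EPOCHS * steps_per_epoch
final_lr = INIT_LR / (1.0 + decay * total_iters)
```

With these numbers each epoch runs 93 batches, and the learning rate ends only slightly below 1e-4, consistent with the deliberately low rate described above.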
Implementing the face mask detector
- Load the trained face mask model and the Caffe model used to detect faces in the video.
- Identify the face or faces with OpenCV by collecting real-time data through a webcam.
- Each real-time frame collected from the webcam is classified with the trained model to predict the output for that input.
- As output, each frame shows “Mask” with a green rectangle around the face of a person wearing a mask, and “No Mask” with a red rectangle around the face of each person not wearing one.
- If a particular person is not wearing a mask, the system automatically generates an e-mail to notify the administrator and also rings the alarm, to discourage the carelessness of not wearing a mask.
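The per-face decision described above can be sketched as a small helper, assuming the model returns two probabilities (mask, no mask). The colours are BGR triples as OpenCV expects; in the real loop the label and colour would be passed to `cv2.rectangle` and `cv2.putText`, and `on_no_mask` is a hypothetical hook standing in for the e-mail/alarm step.

```python
def annotate(mask_prob, no_mask_prob):
    """Pick the label and box colour (BGR) for one detected face."""
    if mask_prob > no_mask_prob:
        return "Mask", (0, 255, 0)     # green rectangle
    return "No Mask", (0, 0, 255)      # red rectangle

def on_no_mask():
    # Hypothetical hook: notify the administrator by e-mail
    # (e.g. via smtplib) and ring the alarm, as described above.
    pass

# For each face found by the Caffe detector in the current frame:
label, colour = annotate(0.92, 0.08)   # -> ("Mask", (0, 255, 0))
if label == "No Mask":
    on_no_mask()
```

The same pair of values is reused for both the rectangle and the text overlay, so the label and box colour can never disagree.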