The basic scheme shows the elements present in a standard wheelchair and the modifications required to install the designed control board. Our proposed board has a Raspberry Pi 4 Model B responsible for performing the tasks of all the subsystems. The motor control driver runs the motors and is responsible for moving the wheelchair. A provision in the central control unit allows the system to be operated manually or automatically. Here we have adopted a new Raspberry Pi-based approach in our proposed system, which is more cost-effective and simpler in architecture than all previously developed systems. The proposed system has the OpenCV library installed. With OpenCV, the Raspberry Pi records the live video stream and identifies any object or person. It also allows the system to detect up to 80 different objects within an image or video frame, and each object is individually identified using a bounding box. TensorFlow [23] can detect various objects in an image or video stream and provide information about their location within the image. For example, a screenshot of the system output is shown in Figures 5(a) and 5(b), where the system detects two separate objects.
TensorFlow uses pre-trained and customized models to identify hundreds of objects. A common use of TensorFlow is image classification, the task of predicting what an image represents. An image classification model helps to recognize the content of the original image, and users can customize any model in this application as needed.
Algorithm for TensorFlow Object Detection
Step 1: Install the required libraries, including supporting files, on the Raspberry Pi.
Step 2: For object detection, an .xml annotation file is created for each image.
Step 3: Images are divided into two parts, i.e., training and testing images, along with their .xml files.
Step 4: .csv files are created for the training and testing images.
Step 5: tf.record files are created for the training and testing images using the .csv files and the images folder.
Step 6: Pre-trained models are downloaded and configured.
Step 7: The custom images are trained using a pre-trained model.
Step 8: Models are created by exporting the path using the previously saved checkpoint.
Step 9: The frozen_inference_graph.pb file, the new checkpoint, and the saved-models directory are all created inside the model folder.
Step 10: A prediction is made using this newly created model.
Step 11: The system is ready to predict objects from live video streams or images.
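As a concrete illustration of Steps 10 and 11, the following minimal Python sketch loads an exported frozen_inference_graph.pb and runs it on a single image. The tensor names follow the TensorFlow Object Detection API convention for exported frozen graphs; the file paths are assumptions, since the text does not specify them.

import cv2
import numpy as np
import tensorflow as tf

MODEL_PATH = "model/frozen_inference_graph.pb"  # assumed location (Step 9)

# Load the frozen graph exported in Step 9.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(MODEL_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.compat.v1.Session(graph=graph) as sess:
    image = cv2.imread("test.jpg")  # assumed test image
    # The detection graph expects a batch of RGB images (uint8).
    batch = np.expand_dims(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), axis=0)
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": batch},
    )
    print(classes[0][:5], scores[0][:5])  # top class IDs and confidences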
To identify an object, the TensorFlow algorithm first generates a large number of bounding boxes in the image, from which visual features are extracted. Depending on the visual features, it categorizes which objects are present in the bounding boxes. Finally, all the boxes are extracted from the main image and verified against the pre-trained model for a match. Once a bounding box is matched, the system identifies the object and shows the object's name above that particular bounding box. More than a thousand objects are trained in this system, but we developed the algorithm to navigate only when the camera focuses on particular objects, using their predefined tags, as the sketch below illustrates.
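The following short sketch shows one way such tag filtering could look: only detections whose labels appear in a predefined tag list are kept. The label map, tag set, and threshold are illustrative assumptions, not values given in the text.

# Keep only detections whose labels are in the predefined navigation tags.
# The tag set, label map (COCO-style IDs), and threshold are assumptions.
NAVIGATION_TAGS = {"table", "chair"}
LABEL_MAP = {62: "chair", 67: "table"}

def navigable_detections(classes, scores, threshold=0.5):
    targets = []
    for cls, score in zip(classes, scores):
        label = LABEL_MAP.get(int(cls))
        if label in NAVIGATION_TAGS and score >= threshold:
            targets.append((label, float(score)))
    return targets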
Once the system detects the object, a sound wave is emitted from the ultrasonic sensor and reflected back by the obstacle. Using this technique, the proposed system can calculate the distance of any object coming in front of it, or the distance it has to travel to reach its destination. With this sensor's accurate distance measurement, the proposed wheelchair can reach its destination while maintaining a proper distance to avoid an accident. An MPU-6050 gyro-accelerometer is used to measure the proper angle for steering the wheels.
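The distance calculation itself is the echo round-trip time multiplied by the speed of sound and halved. A minimal sketch for a typical HC-SR04-style sensor on the Raspberry Pi follows; the sensor model and pin assignments are assumptions, since the text does not specify them.

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # assumed BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # A 10 us pulse on TRIG starts one measurement cycle.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    # ECHO stays high for the sound wave's round-trip time.
    start = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    end = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    # Speed of sound is ~34300 cm/s; halve for the one-way distance.
    return (end - start) * 34300 / 2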
The system is tested in an indoor environment using a Pi NoIR camera. The camera starts capturing video and extracts the frames, which are then passed to the classification models.
For example, the proposed system is trained to identify only five different types of objects. The objects are a medicine strip, a table, chairs, bottles, and glasses.
Subsequently, when an image of a trained model such as a chair is given as the system input, as shown in Figure 6, the proposed system outputs only the probabilities of the previously trained models shown in Table 1.
As a result, a chair has a 51% chance of being present in the input picture, as shown in Figure 7(a). Image classification can only report the probability that an image represents one or more of the previously trained classes. During training, each image is labeled, and each label is given a separate category name, which has the advantage of identifying the model. When an image is given as input to a model for estimation, the system returns an array of probabilities between 0 and 1, one for each label. Each output is compared with the previously trained labels, and the system returns the label that the model most likely represents. The sum of all the values in the array always equals 1 (i.e., 100%).
If an input closely matches a previously trained model, the system outputs that model's name as the highest-probability label.
Sl No | Label          | Probability
1     | Medicine Strip | 22%
2     | Table          | 25%
3     | Chair          | 51%
4     | Bottle         | 2%
5     | Glass          | 2%

Table 1: Image classification probabilities for the sample input image
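For illustration, the output in Table 1 can be interpreted with a few lines of Python: the classification array holds one probability per trained label, and the label with the largest value is reported. The label names and values below simply mirror Table 1.

# Interpreting the classification output array (values from Table 1).
labels = ["Medicine Strip", "Table", "Chair", "Bottle", "Glass"]
probs = [0.22, 0.25, 0.51, 0.02, 0.02]  # one probability per label

best = max(range(len(probs)), key=probs.__getitem__)
print(f"{labels[best]}: {probs[best]:.0%}")  # -> Chair: 51%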
An IR camera is interfaced with the Raspberry Pi and positioned to follow the user's focus point. When the user focuses on an object, the Raspberry Pi processes the live video stream and detects the object with the object detection algorithm. Once the object is detected, the Raspberry Pi sends an instruction to the motor driver to move the wheelchair to the proper place. We used OpenCV for programming functions that target real-time computer vision. Then we installed an open-source library called TensorFlow for dataflow programming across different tasks. It is a symbolic math library used for neural networks and other machine learning algorithms. TensorFlow provides a framework for object detection in OpenCV. It contains many trained models of everyday objects such as a TV, computer, person, cycle, board, laptop, glass, table, and chair. We can also include any model we want in this library.
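As a rough sketch of this live pipeline, the loop below grabs frames from the camera and hands each one to a detector; the camera index and the detect_objects() helper are assumptions used only for illustration (the helper could wrap the frozen-graph inference sketched earlier).

import cv2

cap = cv2.VideoCapture(0)  # assumed camera index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    detections = detect_objects(frame)  # hypothetical helper function
    # ...matching tags would be forwarded to the navigation logic here...
cap.release()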
Using this method, only the objects selected by the programming tag in this library produce an output on the GPIO pins of the Raspberry Pi when they appear in the live video stream. Using these GPIO pins, the Raspberry Pi sends the instruction to the motor driver circuit via an Arduino, and following that instruction, the motor driver circuit drives the wheelchair to the object. For example, a room may contain many objects such as a TV, refrigerator, table, chair, cupboard, bed, and computer, and our system can detect all of them. However, if the programmer has selected only the table and chair tags, our proposed system moves only toward these two objects when the user looks at them. The system will not move if the user looks at any of the remaining objects. An Arduino is connected to the Raspberry Pi so that the two boards can communicate with one another, and this whole task can be done very easily. The whole work could be done with the Raspberry Pi alone, in which case the Arduino and motor driver circuits would not be needed, but the main problem with this is the delay time. The processing power of the Raspberry Pi in this proposed system is limited, so completing many tasks at once can take a lot of processing time. To solve this problem, all the image-processing work is done by the Raspberry Pi, and the Arduino handles the remaining work: controlling the ultrasonic sensor and the motor driver to steer the wheels at the proper angle.
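One simple way to realize this Raspberry Pi-to-Arduino handoff is a serial link: the Pi sends a short command for each detected tag, and the Arduino translates it into motor driver signals. The sketch below assumes pyserial over USB and a one-character protocol of our own; neither is specified in the text.

import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed port/baud

COMMANDS = {"chair": b"C", "table": b"T"}  # illustrative command bytes

def drive_toward(label):
    # Forward the detected tag to the Arduino, which runs the motor
    # driver, steering, and ultrasonic distance checks.
    cmd = COMMANDS.get(label)
    if cmd:
        arduino.write(cmd)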
The whole system can also be run by a second person using an Android mobile application. Users can control the Raspberry Pi from an Android mobile using an open-source application called VNC Viewer, which is available in the Play Store. To implement this, the VNC Viewer application must be installed on the Android mobile, and the Raspberry Pi and the Android mobile must be connected to the same Wi-Fi network. It is then possible to control the Raspberry Pi from the mobile app by typing the Raspberry Pi's IP address into the VNC Viewer app. An IP scanner application, which shows the IP addresses of all devices on the same network, can be used to find the IP address of the Raspberry Pi.
For example, someone living far away from home at their workplace may not be able to take care of their elderly parents or patients in time. In such an environment, the proposed system can be beneficial: the caregiver can provide timely care using their mobile phone and extend help to these people by providing timely medicine or a glass of water.