Multiple Control Assistive Wheelchair for Lower Limb Disabilities & Elderly People

This article presents a multiple-control wheelchair (MCW) that helps individuals with lower-limb disabilities using advanced assistive technologies. A new control methodology for the MCW is given here. The main feature of this assistive device is that it can navigate by following the user's eyesight. The proposed MCW can also be controlled with a joystick, over the internet, or with a combination of these. The system uses multiple sensor networks to measure the terrain condition and the user's commands and translates them into control actions. The proposed control system runs on a Raspberry Pi, which can execute accurate turns and control forward and backward motion. The Raspberry Pi, camera module, and sensor networks are all connected to an Android mobile system using serial communication, shortening the distance between the disabled person's end and the supervisor's end. A multiple-layer system architecture with special control functions is built that uses Android mobile interfacing to realize automatic control and capture video of the surrounding environment remotely. It is shown that users can carry out all essential tasks with more control actions, improving on the quality of a traditional wheelchair.


Introduction
In recent years, researchers have become increasingly interested in the field of rehabilitation. Assistive technology is an emerging area of current study in which robotic devices can strengthen the ability of people with disabilities or older people to carry out their daily living activities. Intelligent wheelchairs have become a real option for these users. Today, the intelligent wheelchair is one of the most popular indoor navigation vehicles owing to its easy control, smooth mobility, and application-specific human-machine interface [1].
The recent trend of supportive wheelchairs has been widely studied in both academia and industry around the world. However, very few of them have handled customer requirements in real life and also achieved market success. In 1595, an unknown manufacturer designed a wheelchair for Philip II of Spain. In 1655, Stephan Farffler, a paraplegic watchmaker, built a three-wheeled self-propelled chair in imitation of this; it later became known as a wheelchair [2]. Wheelchair design has become increasingly important in the 21st century in combining emerging technologies.
Several studies have suggested that assistive wheelchairs can make life easier for people with lower-limb disabilities and older people. A wide range of wheelchairs, such as tilt-in-space [3], elevation, and stair-climbing [4][5] wheelchairs, are now available. With advancements in technology, wheelchairs are currently controlled with a joystick [6], head gestures [7], or other technologies. Some assistive wheelchairs can be controlled through a human-machine interface (HMI). Examples include voice-command control [8][9], following the movements of the eyes [10][11], or following the movements of the tongue [12][13]. In works such as [14], the authors address a system that controls chair movement based on voice commands, but the control solutions presented are extremely expensive, and the system must be trained to recognize the user's voice before this kind of technology can be used. In other research, such as [15], the authors proposed a system in which the user controls the wheelchair with head movements, recorded by a mounted camera. In another case, the authors describe a control interface [16] in which the wheelchair is operated by following the electrical activity (electromyography, EMG) of the user's muscles; however, using this kind of electromagnetic signal can affect user health. In [17][18], the authors let the user's brain drive the wheelchair by measuring electrical trends produced by electroencephalography (EEG), but such systems have plenty of limitations. Analyzing the state-of-the-art interfaces and control systems for assistive wheelchairs reveals a lack in the design of control devices that raises many difficulties.
This proposed work presents a multiple-control protocol and device hardware capable of being connected to actuators, sensors, and other peripherals on an electric wheelchair, working through image-processing techniques. Users can control this system using either a joystick or image processing. A caregiver can also control the proposed system from anywhere in the world using IoT. However, with such modern-technology wheelchairs there are concerns about safe, risk-free operation. Some surveys indicate an increasing number of accidents and risk factors, proportional to the number of devices, especially among older users [19][20]. A secure operation is therefore essential, in which the user remains in control when in danger. To overcome this kind of situation, we designed the architecture of this wheelchair to remain stable in all environments and to be easy to control.
The main contributions of this paper are: 1) development of a multiple-control assistive smart wheelchair and an object detection algorithm using a Raspberry Pi, an Arduino, a Pi NoIR camera, an ultrasonic sensor, and an MPU-6050 gyro-accelerometer; 2) automatic object detection according to user preference, navigating the wheelchair accordingly; and 3) experimental wheelchair navigation according to user preference to validate the algorithm, control functions, and device performance.
In the rest of this paper, Section 2 reviews the complete architecture of the proposed wheelchair, discussing all the hardware components and the software. The methodology is presented in Section 3, which describes the specifications and requirements of all the devices and how the proposed system works. Section 4 presents the results and discussion, including the system's performance and some of the desired output data, along with a comparison between a traditional power wheelchair and the proposed wheelchair. Finally, Section 5 concludes the article.

Architecture
This proposed work aims to avoid obstacles and to assist the user by identifying objects related to the user's preference. A Pi NoIR camera is fixed to the proposed system. When the user turns his or her head to focus on an object, the system automatically identifies that object and navigates towards it. The user's guided path (provided via the joystick or the object detection system) and the wheelchair's immediate environment (provided by the sensor outputs) are used to determine the control signal sent to the wheelchair, with direction and speed based on the object detection. The key idea of the proposed approach is to select the target object automatically.
The wheelchair design involves the motor-wheel assembly, a suspension tool, and a chassis with adequate weight-bearing capability. Figure 1 shows the different parts of the proposed wheelchair: Figure 1(a) shows a BLDC hub motor with a diameter of 17 cm and back wheels of 34 cm diameter; Figure 1(b) shows the steering wheel assembly with a diameter of 14 cm, which is driven through the ASMC-04 servo motor; Figure 1(c) shows the standard wheelchair; and Figure 1(d) shows the overall concept map of the proposed system. Several modules were implemented using a Raspberry Pi 4 single-board computer. This methodology separates computing-intensive tasks (handled by the Raspberry Pi) from control tasks (handled by the Arduino).
Some low-cost components, such as ultrasonic sensors, a Pi NoIR camera, a motor driver, a BLDC hub motor, and a servo motor, are also used in this system. Figure 2(a) shows the system architecture, where all devices are connected to the Raspberry Pi. The system flowchart is shown in Figure 2(b).

Hardware Description
The main hardware components of this system are discussed in more detail below.

Motor Driver Mechanism for steering wheel
A motor driver circuit, which works like a current amplifier, is used to run the motor. Its primary purpose is to take a low-current control signal and convert it to a high-current signal so that the motors run smoothly. A motor driver is needed because the motors cannot be connected to the Arduino directly; they would not get sufficient power. The L293D is a 16-pin IC with 4 input pins (2, 7, 10, and 15) and 4 output pins (3, 6, 11, and 14), used here to control the steering mechanism from the Arduino.
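The L293D's direction control reduces to a small truth table per channel. The sketch below encodes that logic in Python; the command names ("left"/"right"/"stop") are illustrative labels, not identifiers from the paper, and the mapping of logic levels to rotation direction depends on how the motor is wired.

```python
# Hedged sketch: truth table for one L293D channel (inputs on pins 2
# and 7, output on pins 3 and 6, per the pinout described above).
# Command names are assumptions for illustration only.

def l293d_channel_inputs(command: str) -> tuple:
    """Return the (IN1, IN2) logic levels for one L293D channel."""
    table = {
        "left":  (1, 0),  # motor turns one way
        "right": (0, 1),  # motor turns the other way
        "stop":  (0, 0),  # both inputs low: motor coasts
    }
    try:
        return table[command]
    except KeyError:
        raise ValueError(f"unknown command: {command!r}")
```

Setting both inputs high is also a valid "brake" state on the L293D; it is omitted here because the text only describes steering left, right, and stopping.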
When the system detects an object and wants to navigate to the desired location, the steering wheel mounted at the front of the wheelchair rotates per the instructions derived from the 3-axis accelerometer. The circuit diagram of the motor drive mechanism is shown in Figure 3(a), and the 24-volt servo motor (model ASMC-04) connected to the front wheel for steering is shown in Figure 3(b).

BLDC Hub Motor
Brushless DC (BLDC) motors have received increasing attention in motor control technology from many motor manufacturers due to their growing use in many applications. BLDC motors outperform brushed DC motors in variable-speed handling, efficiency, and heat dissipation [21]. They have also become very attractive for digital control because of their reliability and low price. A BLDC motor is a permanent-magnet synchronous electric motor driven by direct current (DC) power: an electronically controlled commutation structure switches the phase currents to generate rotation torque. The connection of the BLDC hub motor in the system is shown in Figure 4(a). The brake and throttle wires are also connected through the controller for manual use, as shown in Figure 4(b).

Accelerometer MPU 6050
The MPU-6050 is a sensor used to determine the angle and rotational velocity about the X, Y, and Z axes. It contains a 3-axis MEMS gyroscope, which can determine the orientation and direction of rotation. In the proposed system, it is attached to the Pi NoIR camera. When the user tries to track an object, the MPU-6050 measures the required angle of rotation and instructs the Arduino to rotate the steering wheel in the desired direction.
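As a minimal sketch of how an angle can be derived from the MPU-6050, the function below estimates roll and pitch from raw accelerometer counts using the standard gravity-vector trigonometry. The ±2 g scale factor (16384 LSB/g) is the sensor's default range; the paper does not state which range or fusion method it actually uses.

```python
import math

# Hedged sketch: tilt angles from raw MPU-6050 accelerometer readings.
# Assumes the default +-2 g full-scale range (16384 LSB per g).

LSB_PER_G = 16384.0

def tilt_angles(ax_raw: int, ay_raw: int, az_raw: int) -> tuple:
    """Return (roll, pitch) in degrees from raw accelerometer counts."""
    ax, ay, az = (v / LSB_PER_G for v in (ax_raw, ay_raw, az_raw))
    roll = math.degrees(math.atan2(ay, az))                 # rotation about X
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))  # rotation about Y
    return roll, pitch
```

In practice the gyroscope rates would be fused with these accelerometer angles (e.g. a complementary filter) to reject vibration while the wheelchair moves.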

Software Description
The controller of the proposed system is implemented in Python, an interpreted, object-oriented, high-level, general-purpose programming language. Its high-level data structures support numerous programming paradigms, including object-oriented and functional programming, and its interpreter is available for many operating systems. Python is open-source software, which makes it very effective for quick edit-test-debug cycles and reduces program maintenance costs.
OpenCV (Open Source Computer Vision Library) [22] is an open-source computer vision and machine learning software library. OpenCV is designed to provide a common infrastructure for computer vision applications and to accelerate machine perception in commercial products. As a BSD-licensed product, users can easily create or modify code based on it. More than two thousand optimized algorithms currently exist in the OpenCV library. These algorithms are used for tasks such as identifying human faces, identifying objects, categorizing human actions in video, tracking moving objects, and extracting 3D models of objects. The library is widely used by companies, research teams, and government agencies.

Methodology
This section presents the basic scheme of the elements in the chair and the modifications required to install the designed control board. The proposed board has a Raspberry Pi 4 Model B responsible for performing the tasks of all subsystems. The motor driver runs the motor and is responsible for moving the wheelchair. The central control unit provides a provision to operate the system manually or automatically. Our Raspberry Pi-based approach is more cost-effective and simpler in architecture than previously reported systems. The proposed system has OpenCV installed; with it, the Raspberry Pi processes live video streams and identifies objects or persons. The system can detect up to 80 different objects within an image or video frame, and each object is individually identified with a bounding box. TensorFlow [23] can detect various objects in an image or video stream and provide information about their location within the image. As an example, screenshots of the system output are shown in Figures 5(a) and 5(b), where the system detects two separate objects.
TensorFlow uses pre-trained and customized models to identify hundreds of object classes. A common TensorFlow task is image classification, the act of predicting what an image represents. An image classification model helps to recognize the content of an image, and users can customize any model in this application as needed.
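The navigation logic only needs a thin layer on top of the detector: keep detections that are confident enough and whose label is one of the user-preferred tags. The sketch below shows that filtering step; the `(label, score, box)` tuple shape mirrors the usual TensorFlow object detection output, but the tag list and the 0.5 threshold are assumptions, not values from the paper.

```python
# Hedged sketch: filtering detector output down to the object tags the
# wheelchair is allowed to navigate towards. Tag set and threshold are
# illustrative assumptions.

NAVIGATION_TAGS = {"chair", "table", "bottle", "glass", "medicine"}
SCORE_THRESHOLD = 0.5  # assumed confidence cut-off

def navigable_detections(detections):
    """Keep only confident detections whose label is a navigation tag.

    `detections` is an iterable of (label, score, box) tuples, where
    box is (ymin, xmin, ymax, xmax) in normalised image coordinates.
    """
    return [
        (label, score, box)
        for label, score, box in detections
        if label in NAVIGATION_TAGS and score >= SCORE_THRESHOLD
    ]
```

A detection of "tv" or a low-confidence "table" would be dropped here, so the wheelchair stays still even though the detector saw something.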

Algorithm for TensorFlow Object detection
Step 1: Install the libraries, including supporting files, on the Raspberry Pi.
Step 2: For object detection, an .xml annotation file is created for each image.
Step 3: The images, along with their .xml files, are divided into two parts: training and testing images.
Step 4: .csv files are created for the training and testing images.
Step 5: tf.record files are created for the training and testing images using the .csv files and the images folder.
Step 6: Pre-trained models are downloaded and configured.
Step 7: The custom images are trained using a pre-trained model.
Step 8: The model is exported using the previously saved checkpoint.
Step 9: frozen_inference_graph.pb, the new checkpoint, and the saved-model directory are all created inside the model folder.
Step 10: Predictions are made using this newly created model.
Step 11: The system is ready to predict from live video streams or images.
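Step 3 above can be sketched as a reproducible train/test split over the annotated image names. The 80/20 ratio and the fixed seed are assumptions for illustration; the paper does not state the split it used.

```python
import random

# Hedged sketch of Step 3: splitting the labelled images (each paired
# with its .xml annotation) into training and testing sets.
# The 80/20 ratio and seed are illustrative assumptions.

def split_train_test(image_names, train_fraction=0.8, seed=42):
    """Shuffle image names reproducibly and split into (train, test)."""
    names = list(image_names)
    random.Random(seed).shuffle(names)
    cut = int(len(names) * train_fraction)
    return names[:cut], names[cut:]
```

The matching .xml file for each image would follow it into the same split, so the .csv and tf.record files of Steps 4 and 5 stay consistent.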
To identify an object, the TensorFlow algorithm first generates a large number of boundary boxes over the image, from which visual features are extracted. Based on these visual features, it categorizes which objects are present in the boundary boxes. Finally, all boxes are extracted from the main image and verified against the pre-trained model for matching. Once a boundary box is matched, the system identifies the object and shows the object's name above that boundary box.
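Pruning the "large number of boundary boxes" mentioned above conventionally relies on intersection-over-union (IoU), the standard overlap measure between two candidate boxes. The function below is generic geometry offered as background, not code from the paper.

```python
# Hedged sketch: intersection-over-union between two boundary boxes.
# Boxes are (xmin, ymin, xmax, ymax) tuples.

def iou(a, b):
    """Return the overlap ratio (0..1) between boxes a and b."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

During non-maximum suppression, boxes whose IoU with a higher-scoring box exceeds a threshold are discarded, leaving one box per detected object.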
More than a thousand object classes are trained in this system, but we developed the algorithm to navigate only when the camera focuses on particular objects, using their predefined tags.
Once the system detects an object, a sound wave is emitted from the ultrasonic sensor and reflected back by the object. Using this technique, the proposed system calculates the distance to any object in front of it, i.e. how far it has to travel to reach its destination. Thanks to this accurate distance measurement, the proposed wheelchair can reach its destination while maintaining a proper distance to avoid accidents. The MPU-6050 gyro-accelerometer is used to measure the proper angle for steering the wheel.
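The distance computation for an ultrasonic ranger is a one-liner: the echo pulse width is multiplied by the speed of sound and halved, because the pulse travels to the obstacle and back. A minimal sketch, assuming an HC-SR04-style sensor and room-temperature air (~343 m/s, i.e. 0.0343 cm/µs):

```python
# Hedged sketch: converting an ultrasonic echo pulse width to distance.
# Assumes roughly 343 m/s speed of sound at room temperature.

SPEED_OF_SOUND_CM_PER_US = 0.0343

def distance_cm(echo_pulse_us: float) -> float:
    """Convert an echo pulse width (microseconds) to distance in cm."""
    # Divide by 2: the pulse covers the out-and-back path.
    return echo_pulse_us * SPEED_OF_SOUND_CM_PER_US / 2.0
```

On the wheelchair, this value would be compared against a stopping threshold (the paper reports stopping 1 to 2 feet before the target).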
The system was tested in an indoor environment using the Pi NOIR camera. The camera captures video and extracts frames, which are passed to the classifier.
For example, the proposed system was trained to identify only five different types of objects: a medicine strip, a table, chairs, bottles, and glasses.
Subsequently, when an image of a trained model such as a chair is given as the system input, as shown in Figure 6, the proposed system outputs only the matches among the previously trained models, as shown in Table 1.
As a result, a chair has a 51% chance of being present in the input picture, as shown in Figure 7(a). Image classification can only estimate the probability that an image represents one or more of the previously trained classes. Each class is given a separate label so that the model can be identified. When an image is given as input, the system outputs an array of probabilities between 0 and 1, one per class. Each output is compared with the previously trained labels, and the system returns the label the image most likely represents. The sum of all the probabilities in the array is always equal to 1.
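The label selection described above amounts to an argmax over the probability array. A minimal sketch, using the five classes the system was trained on (the label order in the array is an assumption):

```python
# Hedged sketch: picking the most likely label from a classifier's
# probability array. Probabilities lie in [0, 1] and sum to 1.
# The label order is assumed for illustration.

LABELS = ["medicine strip", "table", "chair", "bottle", "glass"]

def best_label(probabilities):
    """Return (label, probability) for the highest-scoring class."""
    assert abs(sum(probabilities) - 1.0) < 1e-6, "probabilities must sum to 1"
    i = max(range(len(probabilities)), key=lambda k: probabilities[k])
    return LABELS[i], probabilities[i]
```

With the 51% chair example from the text, `best_label([0.1, 0.2, 0.51, 0.1, 0.09])` would return the "chair" label.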
If an input closely matches a previously trained model, the system assigns the model's name with the higher probability label (Table 1). The IR camera interfaced with the Raspberry Pi is positioned to follow the user's focus point. When the user focuses on an object, the Raspberry Pi processes the live video stream and detects the object with the object detection algorithm. Once the object is detected, the Raspberry Pi sends instructions to the motor driver to move towards the object. We used OpenCV for programming functions that target real-time computer vision, and installed TensorFlow, an open-source library for dataflow programming across a range of tasks. It is a symbolic math library that is also used for machine learning applications such as neural networks. TensorFlow provides a framework for object detection with OpenCV. It contains many pre-trained models of everyday objects such as a TV, computer, person, cycle, board, laptop, glass, table, and chair, and we can also add any model we want to this library.
Using this method, only the objects selected in the program by tag produce an output on the GPIO pins of the Raspberry Pi when they appear in the live video stream. Through these GPIO pins, the Raspberry Pi sends instructions to the motor driver circuit via an Arduino, and the motor driver then drives the wheelchair towards the object. For example, a room may contain many objects such as a TV, refrigerator, table, chair, cupboard, bed, and computer, and our system can detect all of them. If the programmer has selected only the table and chair tags, the proposed system moves only towards those two objects when the user looks at them; it will not move if the user looks at the rest of the objects. The Arduino is connected to the Raspberry Pi so that the two boards can communicate, which makes the whole task straightforward. The whole work could be done with the Raspberry Pi alone, without the Arduino and the motor driver circuits, but the main problem is the delay: performing many tasks at once takes the Raspberry Pi's processor a lot of processing time. To solve this problem, all the image-processing work is done by the Raspberry Pi, while the Arduino handles the ultrasonic sensor and the motor driver to steer the wheel at the proper angle.
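The Raspberry Pi to Arduino handoff described above can be sketched as a small dispatch function: only objects whose tag was programmed for navigation produce a motor command, and everything else is ignored. The comma-separated command string is an illustrative encoding; the paper does not specify its actual serial protocol.

```python
# Hedged sketch of the Pi -> Arduino handoff. The command encoding is
# an assumption; only the tag-gating behaviour comes from the text.

PROGRAMMED_TAGS = {"table", "chair"}  # tags selected by the programmer

def motor_command(detected_tag, angle_deg, distance_cm):
    """Build a serial command string, or None if the tag is not enabled."""
    if detected_tag not in PROGRAMMED_TAGS:
        return None  # user looked at a non-navigable object: do not move
    return f"GO,{angle_deg:.1f},{distance_cm:.1f}\n"
```

On the real hardware, a non-`None` result would be written to the Arduino over the serial link (e.g. with pyserial), and the Arduino would translate it into steering and throttle actions.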
The whole system can also be run by a second person using an Android mobile application. Users can control the Raspberry Pi from an Android phone using an open-source application called VNC Viewer, available in the Play Store. The Raspberry Pi and the Android phone must be connected to the same network, for example via a Wi-Fi hotspot. Control of the Raspberry Pi is then obtained by typing its specific IP address into the VNC Viewer app. An IP scanner application, which shows the IP addresses of all devices on the same network, can be used to find the IP address of the Raspberry Pi.
For example, someone living far from home at a workplace may be unable to take care of elderly parents or patients in time. In such a situation, the proposed system can be beneficial: the caregiver can provide timely care using a mobile phone and extend help to these people, for instance by providing timely medicine or a glass of water.

Result & Discussion
In the laboratory, objects such as medicine, tables, chairs, and computers were placed in front of the system for testing. When the user focuses on an object, the Raspberry Pi camera detects the particular object tag in the live video stream and sends instructions to the motor driver circuit. At the same time, the accelerometer and ultrasonic sensor measure the required angle and distance to the object. According to the measured angle and distance, the wheelchair automatically reaches the destination following the defined algorithm. If any unknown obstacle appears in front of the system, it stops moving. Using the throttle, the system moves smoothly in manual mode. Using the VNC and PuTTY applications on an Android phone, the proposed system can be configured externally over an internet connection through Wi-Fi, and it can be controlled the same way. The characteristics noted during control of this assistive wheelchair are shown in Table 2.
Test attributes (Table 2): After testing the proposed system in our laboratory, it showed satisfactory results, detecting objects at a frame rate of 25 fps. To navigate towards an object, it needs a turning radius of at most 1.3 feet. The proposed wheelchair can travel at a maximum of 6 km/h, and it stops precisely 1 to 2 feet before the desired object.
The proposed system is compared with existing control approaches below; for each, the existing system and its limitation are contrasted with the corresponding feature of the proposed system.
1-2. Existing systems: the architecture is very complex and costly due to the use of various sensors, and the system navigates whenever motion is sensed. Proposed system: only an ultrasonic sensor is used to calculate distance and detect obstacles, and the system navigates only when the user focuses on a known object. [6]
3. Existing system: different sensor modules and controller units are connected to a computer to generate PWM signals for the left and right motors, and a voice control system is also implemented. Limitation: the architecture is complex and remains very costly because a computer is used as the central controller unit. Proposed system: a Raspberry Pi is used instead, which makes the assistive device cost-effective. [7]
4. Existing system: an Arduino and a Bluetooth device are used, and an Android phone passes voice commands to drive the motors accordingly. Limitation: users must go through a voice recognition process before use, and people who cannot talk cannot use the system. Proposed system: navigation is done by object detection based on the user's eyesight. [8]
5. Existing system: a head-mounted display with a Tobii 4C gaze tracker tracks the user's eye gaze, and the wheelchair navigates according to the gaze direction. Limitation: the user must go through training before using the device and may feel uneasy with the head-mounted device. Proposed system: the Pi NoIR camera automatically detects the user's eyes and tracks the eyeballs for navigation. [9]
6. Existing system: an ARM Cortex-A8 processor with 512 MB RAM is interfaced with a headset and a permanent magnet inside the user's mouth that generates a magnetic field; a serial peripheral interface transmits the data to drive the wheels, and a gyroscope sensor senses the position of the tongue. Limitation: the user has to wear a headset with two 3-axis magnetic sensor boards, batteries, and a transmitter circuit. Proposed system: only a Pi NoIR camera, a Raspberry Pi, a BLDC motor, and a 24-volt motor for the steering wheel are used. [11]
7. Existing system: an 89C51 microcontroller receives data from the transmitter end, which is connected to a permanent magnet; a Hall-effect sensor detects tongue movement and sends it to the receiver end to drive the motors. Limitation: the user has to carry a permanent magnet in the mouth, where the generated magnetic field can harm the user's health. Proposed system: every human being can use it, and no magnetic radiation that could harm the user's health is produced. [12]
8. Existing system: an HM2007 speech processor is interfaced with an Arduino to decode voice commands, and the motor driver drives the wheels accordingly. Limitation: the user always has to go through a speech recognition process before use, and a noisy environment can disturb voice recognition. Proposed system: eyeball movement can be tracked in every condition, and no electromagnetic waves are generated. [13][15]

Conclusion
The Raspberry Pi is an example of advanced technology in the modern world; with its upgraded operating system, it can be extremely fast and versatile. We implemented the proposed system based on object detection and tested it in an unknown environment, where the system identified various objects and navigated towards the predefined object. Using SSH and VNC, it is easily accessible through IoT, and it also functions well in manual mode using the throttle connected to the BLDC motor. The other peripherals, including microcontrollers, sensors, and cameras, were sourced from the local market at much lower cost, making the system much cheaper than previous wheelchairs. This technology can be a blessing to elderly people and people with disabilities in today's world. We are trying to increase the technology's accuracy as much as possible so that people with disabilities can restore their self-reliance. The proposed wheelchair could be a suitable replacement for commercially imported ones and become a resource for elderly, helpless, and disabled people worldwide. Future work will address mapping, localization, trajectory planning, and tracking, with improvements to the navigation process, so that a caregiver can visualize the wheelchair in animated form and see its real-time position.

Declarations
Conflict of Interest: I, Sudipta Chatterjee, as the leading author of this article, declare that I carried out this work in my own interest and to the best of my knowledge to improve the standard of living of people with lower-limb disabilities around the world, and that I have not taken any monetary or financial help from any other person or organization.
Figure 1: Architectural configuration.
Figure 6: The system detecting a chair from the live video stream.
Figure 7: (a) After detecting an object, the system verifies its probability level. (b) The system detects the object and navigates towards it.