Integrating an earthquake early warning system and a smart robot for post-earthquake automated inspection and emergency response

Natural hazards, mainly earthquakes, have caused substantial economic losses and loss of human life in many countries. Taiwan, which is located on the western Circum-Pacific seismic belt, has encountered this problem in the Meishan, Hsinchu-Taichung, and Chi-Chi earthquakes. In this study, the researchers propose a novel robot-event integrated system capable of automated inspection and emergency response following a significant earthquake. When the household's earthquake warning receiving device picks up an alert, its built-in wireless communications system sends a signal to the robot. The robot then commences inspection of the indoor area via real-time image recognition and tracking. Upon detecting fallen people, it approaches them, with its movements regulated via a robot operating system (ROS) monitoring interface. The robot is designed to operate in a house that remains standing with acceptable damage, in which furniture may have fallen and injured the occupants after an earthquake. An indoor experiment was conducted to verify the robot system and its operation under designed conditions, with fallen and non-fallen people as the detected objects. The robot was tested delivering food or medicine to fallen people while they waited for rescuers to arrive. The tests indicate that, with further research and development, the proposed smart robot has prospects for real-world application. A smart robot integrated with an earthquake early warning system is a promising approach to the temporary care of people affected by earthquakes.


For the last few decades, natural hazards such as earthquakes have caused huge economic losses and loss of human life. Taiwan is subject to NW-SE compression with a convergence rate of about 7 cm/year (Wu et al. 1999). These losses have pushed Taiwan's researchers to develop mitigation-based technology to reduce future earthquake hazards. The Central Weather Bureau (CWB) of Taiwan developed and implemented an earthquake early warning system (EWS). The three physical bases for earthquake EWS are as follows: a) strong ground movement due to an earthquake is caused by shear (S) waves and surface waves, b) typical crustal P-wave (primary wave) velocity is about 6-8 km/s, with S- and surface waves progressing at about half the P-waves' speed, and c) seismic waves move far more slowly than signals transmitted by devices such as the telegraph, telephone, or radio, which travel at about 300,000 km/s (Wu et al. 2007). The currently established earthquake EWS provides a ten-second alert in advance of an imminent tremor in Taiwan's eastern regions. The earthquake EWS involves the laying of a 620 km fibre-optic cable network along eastern Taiwan's coastal areas.
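To make the timing advantage concrete, consider an illustrative hypocentral distance of D = 100 km with v_P = 7 km/s and v_S = 3.5 km/s (values chosen within the ranges above for illustration, not figures from the paper), ignoring detection and telemetry latency:

```latex
% Back-of-envelope warning-time estimate (illustrative values).
\[
t_{\mathrm{warn}} \approx \frac{D}{v_S} - \frac{D}{v_P}
= \frac{100\ \mathrm{km}}{3.5\ \mathrm{km/s}} - \frac{100\ \mathrm{km}}{7\ \mathrm{km/s}}
\approx 28.6\ \mathrm{s} - 14.3\ \mathrm{s} \approx 14.3\ \mathrm{s}.
\]
```

In practice, detection, processing, and transmission consume part of this window, which is consistent with the ten-second alert figure quoted above.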

The earthquake EWS detected 207 earthquakes around Taiwan during 2006-2012. 196 of the 207 earthquakes were successfully detected by the virtual subnetwork (VSN) system, a 95% success rate; the remaining 5%, offshore events with magnitudes smaller than five, could not be detected (Wu et al. 2013). Thus, the current earthquake EWS plays a vital role in reducing and mitigating earthquake disaster impacts. Recent technology can improve not only one-way alerting devices but also multiple devices that warn people at one time. Therefore, it is essential to provide people at higher risk, especially older and disabled people living alone, with services such as earthquake alerts, post-earthquake inspections, supplies, and standard relief approaches. For instance, a compound earthquake early-warning system, taking advantage of the slower velocity of major seismic waves such as S waves and surface waves, could alert older people living alone tens of seconds before an earthquake is felt locally, via lights, sounds, or electronic bulletins, so that they can take early action and thus reduce their risk of injury.

Mirandola, in the Emilia-Romagna region of Northern Italy, an area with numerous historical buildings, experienced damage from two major earthquakes in May 2012 (Kruijff et al. 2012). The human-robot applications mentioned above reflect the great interest in, and substantial contribution of, human-robot operations for post-disaster inspection in the United States, East Asia, and Europe (Cubber et al. 2017). However, the robots mentioned above were designed to operate in outdoor conditions under a human-robot operation mechanism, and far too little attention has been paid to developing indoor service robots that monitor the condition of residents affected by natural catastrophes.

In response, researchers have developed a wheeled smart robot equipped with a five-DOF robot arm for indoor daily-service applications and found it capable of helping disabled and older people (Balaguer et al. 2005). That robot's design has a locking mechanism connecting the robot arm to a wheelchair, which carries electronic equipment on board to control its movement.

Meanwhile, other robots have successfully handled even more sophisticated household work. A robot with a seven-axis robotic arm on a mobile platform and visual capability could deliver drinks directly to people (Reiser et al. 2009), and using an RGB-D camera to track the position of a person's mouth makes robot-assisted feeding possible (Fang et al. 2018). Some existing mobile robots can also detect fallen people in real time via a system comprising a Kinect depth camera and point-cloud processing.

To the best of the authors' knowledge, most studies of robot-based fallen-people detection focus only on detecting fallen people and reporting them to family members or other caregivers. However, much uncertainty still exists about how a robot that detects fallen people should respond to the causes and effects of the fall. To date, robot-based applications that detect fallen people have not considered any trigger that can lead to people falling, nor provided particular treatment to the victim.

Therefore, the following research proposes a novel and robust robot-event integrated system to anticipate unconscious or 111 fallen people due to the earthquake hit. The robot is designed to automatically changes daily service mode into inspection and 112 emergency response mode through a specific trigger of an earthquake magnitude threshold value. The main purpose of this 113 study is to develop a smart robot capable of anticipating the resident experiencing unconscious or difficulty moving around 114 after an earthquake occurs. This research is the first research that develops a robot prototype integrated with an earthquake 115 EWS for indoor service operation to reduce earthquake risk and temporary care of people affected by earthquakes.  In practice, after receiving an earthquake warning, the system's earthquake warning receiving device will send a message 126 via its built-in wireless communications device to the robot. When it happens, after activating its buzzer and LED light, the 127 robot will open its simultaneous localization and mapping (SLAM) system for indoor fixed-point cruising, and conduct real-128 time image identification and tracking via a deep-learning-integrated RGB-D depth camera and Jetson TX2 AI computing 129 device. Upon detecting fallen people, it will approach them, which will be monitored with ROS interface. Since fallen people 130 may be still conscious, but unable to move freely, the robotic are will distribute food or medicine directly to fallen peoples' 131 faces for them, taking advantage of depth camera and pre-trained Mobilenet-SSD deep learning model to identify the exact 132 positions required, before rescuers arrive. This process is described in Figure 2. The system's earthquake early warning interface relies on WebSocket, via an application programing interface (API) 136 connection from compound earthquake early warning platform to the server. A master encryption key retrieved from the 137 WebSocket server will be used to produce a set of encryption keys for return transmission to the Web Client, simulated using 138 a PC in this case. The simulated Web Client then adds account and password information to each encryption key before sending 139 it to the WebSocket server, which will examine whether that key belongs to a legal user before sending a message regarding  The proposed system's chassis employs Arduino as a control system, and features two encoder-furnished DC motors. Its motion 157 is illustrated in Figure 4. The radius of the distance between the two wheels stands at d. The corresponding linear velocities of 158 the left and right wheels reach V L and V R , respectively, where V is the chassis' instant linear velocity, and R is its motion 159 radius. For circular motion, the angular velocity of the left and angle wheels is the same, and therefore is also the angular 160 velocity of chassis motion, i.e., where ω is its instant angle. Eq. (1) further implies that the chassis' motion mode has the following three states: traditional Monte Carlo localization. Unlike the robotic mapping method described above, in which sensor were used for data 196 collection and the map was derived from both sensor data and mapping data; then, the move-base package in the navigation 197 stack is used to obtain further data on the surroundings (such as scanning results) and to generate a global or regional cost map, 198 which helps the robot avoid obstacles and arrive at its designated position safely.

Path planning for the move base can be carried out via either a global or a regional approach, using an A* algorithm (Yao et al.).

Transfer learning involves, first, loading the weights of a model already trained in advance and modifying the last several layers of output of the neural network, before using that network to accelerate training on the existing data with the feature weights derived from previous training. This approach produces good identification results even without an extensive dataset.
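As an illustration of this recipe, the sketch below loads torchvision's pretrained MobileNetV2 and replaces its final layer. This is a simplified, classification-style stand-in (the paper fine-tunes a Mobilenet-SSD detector with its own pipeline), and the class count and hyperparameters are our assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: load pretrained weights, freeze the feature
# extractor, and replace the last layers with a task-specific head.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

for param in model.features.parameters():
    param.requires_grad = False  # keep pretrained feature weights

# New head for the three targets used in this study:
# fallen person, non-fallen person, mouth.
model.classifier[1] = nn.Linear(model.last_channel, 3)

# Only the new head is updated during fine-tuning.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # cross-entropy, as used in the paper
```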

The three general types of training targets (fallen people, non-fallen people, and people's mouths) were applied to the deep-learning model in the robot's navigation system, using 3D spatial coordinates derived from the point-cloud messages published by the depth camera. Thus, the robot could be directed to proceed to a particular point in the x-y plane of the 3D coordinate system.
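The study obtains these 3D coordinates directly from the depth camera's point-cloud messages. As a sketch of the equivalent computation, the pinhole back-projection below converts a detected pixel and its metric depth to a camera-frame 3D point; the intrinsic values are placeholders, not this camera's calibration:

```python
# Pinhole back-projection of a detected pixel (u, v) with depth z.
# Intrinsics below are placeholders for a 640x480 depth image.
FX, FY = 615.0, 615.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed)

def pixel_to_3d(u: float, v: float, depth_m: float):
    """Return the camera-frame (x, y, z) of a pixel at metric depth."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return x, y, depth_m

# e.g. the center of a "fallen person" bounding box seen at 2.4 m:
target_xyz = pixel_to_3d(352, 261, 2.4)
```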

Another deep-learning model was applied to the robotic arm, retrieving the 3D spatial coordinates of victims' mouths.

The integration between the earthquake EWS and the smart robot is discussed in the next section.

The right-hand side of the same figure presents a 3D schematic of the robot, hereafter referred to as the emulation model, expressed in the ROS's unified robot description format (URDF). Once the emulation model is complete and the robot is operating under its ROS, the host computer can monitor the operation of the actual robot and the emulation model simultaneously.

A web client (PC) was employed to emulate the reception device for an earthquake early warning system, as shown in Figure 11. Upon receiving an early warning of an earthquake larger than magnitude 4 on the Richter scale, the PC relays that message via LoRa, through the host computer, to the robot, whose LED changes from green to red, signalling its entry into rescue mode. Its buzzer also sounds for 15 seconds, alerting residents to prepare themselves for a strong earthquake. The threshold value used as the trigger can be changed to any value according to user needs.
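A minimal sketch of this trigger logic follows; the message fields and robot methods are hypothetical stand-ins for the actual LoRa relay and robot firmware described above:

```python
MAGNITUDE_THRESHOLD = 4.0  # user-configurable, per the paper

def on_earthquake_warning(warning: dict, robot) -> None:
    """Switch the robot into rescue mode when the warned magnitude
    reaches the configured threshold."""
    if warning.get("magnitude", 0.0) < MAGNITUDE_THRESHOLD:
        return                      # below threshold: stay in daily mode
    robot.set_led("red")            # green -> red: entering rescue mode
    robot.sound_buzzer(seconds=15)  # alert residents before the shaking
    robot.enter_rescue_mode()       # begin SLAM-based fixed-point cruise
```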

The robot is expected to complete its SLAM before receiving its first earthquake early warning, so that an indoor plane map will be available to aid its navigation when an earthquake hits. Real-time monitoring of the robot's mapping and navigation is carried out on the host computer.

The other approach to manual mapping, remote release of coordinates and use of a local costmap for partial obstacle avoidance, is shown in Figure 13. In it, the green arrow near the top represents manually released navigation coordinates, including the x, y, and z coordinates and the rotation angles roll, pitch, and yaw.

Following the release of coordinate points, the robot will automatically plan a navigable path (for example, the one shown in blue in the middle of the diagram) to its target position, based on the odometry ("mileage") messages, indicated by the red arrow, that are fed back repeatedly by the robot chassis. After mapping is completed, the map is stored for use in subsequent fixed-point navigation. Figure 14 shows the completed mapping of the experimental site.

The completed map of the experimental site (Fig. 14) was then applied to navigation. This process differed from the approach described above, which used only a local costmap for partial obstacle avoidance, in that a global costmap was used for global obstacle avoidance. Information on the inflated areas was added to the constructed map, enabling the robot to be aware of fixed obstacles and keep its distance from them to avoid collision, supplemented by a local costmap for partial obstacle avoidance. When a new obstacle appeared on the map, it was inflated for avoidance by the robot. Before navigation, however, positioning was carried out using the AMCL algorithm so that the robot could determine its location on the map (Fig. 15). To facilitate this positioning, the robot spreads pose (path-direction) arrows around the area before the user releases the coordinate points it needs for cruising. As indicated by the four round red spots in Figure 16, when the robot proceeds to point 1, the arrows converge around it, indicating successful positioning, before the robot proceeds to the other locations in sequence.
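In ROS 1, this kind of sequential fixed-point cruising is typically commanded through the move_base action interface. The sketch below shows the pattern with an illustrative waypoint list rather than the study's actual coordinates:

```python
import rospy
import actionlib
from geometry_msgs.msg import Quaternion
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

# Illustrative cruising points on the constructed map: (x, y, yaw).
WAYPOINTS = [(1.0, 0.5, 0.0), (2.5, 1.0, 1.57), (0.0, 0.0, 3.14)]

rospy.init_node("fixed_point_cruise")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

for x, y, yaw in WAYPOINTS:
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation = Quaternion(
        *quaternion_from_euler(0.0, 0.0, yaw))
    client.send_goal(goal)    # costmaps handle obstacle avoidance
    client.wait_for_result()  # then proceed to the next point
```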

In the relevant phase of this study's experiments, the robot was moved around effectively via remote keyboard control; conducted SLAM for automatic obstacle avoidance via manual remote release of coordinate points; and used the map it had constructed for indoor navigation, changing path automatically whenever new obstacles appeared between it and its target point. The robot proved unable to avoid obstacles less than 20 centimeters in height, as its LiDAR sensor was mounted 20 centimeters above the ground. However, this did not negatively impact its disaster-relief mission of inspecting fixed sites according to coordinate points set in advance.

This study divided real-time object tracking between two neural-network models. The first, applied in navigation, was aimed at identifying fallen people while the robot was moving between any two points, so that it could stop to help them before continuing to its original target location. The second model was used when the robot reached a victim, to identify his/her mouth and guide the robot arm to it to deliver relief supplies.

This study's base model was constructed using 1) a Mobilenet-SSD neural network and 2) the transfer-learning approach described above to accelerate training.

Applications of neural networks to identifying people are quite mature, as evidenced by their widespread use for various purposes. In our experiment, in which cross-entropy was used as the loss function, the robot was able to distinguish fallen from non-fallen people (Fig. 17). However, the final loss of 0.976 suggests that there is considerable room for improvement in the identification rate.

Having been trained to detect fallen and non-fallen people, the robot searches for the former with the assistance of its navigation system; it automatically releases the coordinates of the points to be inspected only after the navigation system has been activated. After detecting a fallen person, the robot plans a new path to proceed to his/her location, rather than continuing along its originally planned route. Because, in map terms, a fallen person represents a new obstacle, a certain distance is maintained between any such person and the robot via the latter's partial-avoidance function. After arriving at the fallen person's position, the robot then returns to its pre-set cruising coordinate points, as shown in Figure 18. The experiments demonstrated that the study's robot could reach every coordinate point required to search the experimental space effectively for fallen people. The robot was also able to carry out the required supply mission, using the 3D coordinates of the victim's mouth as retrieved by the depth camera. After activation of the supply function, the robotic arm first proceeds to its preset posture. Here, it is important to note that the arm was installed in front of the depth camera to maximize the latter's field of view, but this required that the arm's joints be set near the bottom of the platform, which limited its reach.

Upon detection of a human mouth, the depth camera converts its coordinate point to a 3D one, which the robotic arm then uses to determine the rotation angle for each of its joints via ROS MoveIt's inverse kinematics. The relevant information is sent to the Mbed board driving the robotic arm, enabling completion of the supply mission. The mouth-position data gradually disappear over approximately 10 seconds, after which the robotic arm returns to its original posture and prepares for its next supply mission, as shown in Figure 20.
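The pose-to-joint-angle step can be sketched with MoveIt's Python commander. The planning-group name "arm" and the target pose below are assumptions about this robot's configuration, not values from the paper:

```python
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

rospy.init_node("supply_mission")
moveit_commander.roscpp_initialize([])
arm = moveit_commander.MoveGroupCommander("arm")  # group name assumed

# Target: the mouth position retrieved from the depth camera,
# expressed in the arm's planning frame (values illustrative).
target = Pose()
target.position.x, target.position.y, target.position.z = 0.35, 0.0, 0.25
target.orientation.w = 1.0  # keep the end effector level (assumed)

arm.set_pose_target(target)  # MoveIt solves the joint angles via IK
arm.go(wait=True)            # resulting commands drive the Mbed board
arm.stop()
arm.clear_pose_targets()
```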

Having been created merely for research purposes, the study's robotic arm should not be expected to operate with the same degree of precision as its industrial counterparts. Likewise, the experimental results indicate that there may have been discrepancies in both the installation angle and the height of the depth camera, as compared with the ROS's 3D installation model. The robotic arm's actuator also had some difficulty reaching the center of the target object, due to discrepancies between the depth values output by the camera and the actual distances, as set forth in detail in Figure 19. The average errors along the x-, y-, and z-axes were 1.9 cm, 1.3 cm, and 2.2 cm, respectively. The z-axis, which corresponds to approaching the target object, had the largest error. This is presumably because the study model's inference image was converted to global coordinates via a plane transformation, resulting in rotation errors when the tested object (i.e., a human face) was not pointed directly at the camera lens.

Other constraints on the robot's ability to deliver supplies included the height of the tested object above the floor, due to the shortness of its arm. Also, robotic-arm supply attempts failed when fallen people's mouths pointed straight upwards, beyond the detection field of the depth camera, which cannot rotate automatically. These mechanical problems will be addressed in future studies.

The robot in this study successfully leveraged the navigation system described above to achieve both automatic obstacle avoidance and targeting of its robotic arm at people's faces. Upon detecting a fallen person's mouth, the robot conducted path planning and proceeded to that point, avoiding obstacles in its path automatically; conducted path planning for its robotic arm; and delivered supplies with that arm, maintaining a certain distance from the fallen person after arrival, since he/she is also treated as a kind of obstacle on the map.

In line with prior applications of robots for search and relief in recent years, this solution took the form of a smart robot integrating Internet of Things (IoT) and artificial intelligence (AI) technologies. Specifically, the proposed first-generation robot includes embedded Arduino and Mbed microcontrollers for driving its chassis and robotic arm, respectively, and a host-computer ROS enabling navigation, automatic obstacle avoidance, and real-time image identification. These features enable it to issue a timely warning when an earthquake hits, conduct fixed-point cruising to search for fallen people, track their mouth features, and deliver relief supplies directly to their mouths with its robotic arm, guided by inverse kinematics. This robot's strong test performance suggests that, with a few modifications, it could be used in various other types of disasters, such as fires in highway tunnels or, if coupled with ships' automatic identification systems, maritime accidents.

However, further experiments and training will be needed to improve the system's ability to distinguish fallen people from other types of objects. Specifically, subsequent augmentations to the system are expected to include the integration into the ROS of millimeter-wave sensing of the human heartbeat, to reduce misidentification of non-human objects as fallen people. Also, the existing robotic arm is a simplified type built for research purposes and is insufficient for real-world applications in terms of both its motor precision and its length. Nevertheless, it is hoped that this study's robot can serve as groundwork for the development of future generations of disaster-response service robots.

Data availability statement All relevant data are included in the paper or its supplementary information.

Conflict of interest None.