An intelligent manufacturing cell based on human–robot collaboration of frequent task learning for flexible manufacturing

Short-run production and personalized customization are becoming increasingly popular in the manufacturing industry, and the robots on these production lines must adjust tasks efficiently when learning new ones. This paper therefore develops an intelligent manufacturing cell based on human–robot collaboration (HRC-IMC), which enhances the learning ability of cobots by introducing human intelligence. The HRC-IMC is composed of four modules: the imitating learning module, the human–robot safety planning module, the task planning module and the visual inferring module. Together, the four modules provide a set of systematic and effective methods that improve the efficiency of task adjustment when cobots learn new tasks. The experimental results indicate that the efficiency of task adjustment can be increased by 42.8% when the HRC-IMC is employed, compared with MoveIt. In summary, this study improves the efficiency of new-task adjustment for cobots by imitating human manipulation experience through the combination of four algorithm modules.


Introduction
In recent years, manual operations in manufacturing production lines (such as the shoemaking industry in Fig. 1) have been widely replaced by robots or automatic lines. Since the trend of short-run production and personalized customization seems almost inevitable in flexible manufacturing such as shoemaking, robots should have the ability to learn new tasks to ensure the efficiency of task adjustment [1][2][3]. However, the current mathematical models for robot planning in production lines are designed for structured tasks. These models must be rebuilt when robots deal with new tasks, which harms the efficiency of task adjustment in short-run production, especially for dual-arm robots. Besides, the robustness of the available mathematical models in dynamic tasks is low [4,5]. Research has verified that introducing humans into the manipulation cell [6,7] can improve the intelligence of cobots in human-robot collaboration (HRC). Based on human-cyber-physical systems and related methods, this can be realized by digitizing the manipulation experience of humans, which has attracted increasing attention from scholars [8][9][10].
The human-cyber-physical system (HCPS) is an important branch of the intelligent manufacturing cell with human in the loop [11]. The HCPS was developed from the concept of human-in-the-loop cyber-physical systems [12]. Sowe et al. [13] developed cyber-physical-human systems (CPHS) by putting humans in the loop. Although they described human capabilities using a human service capability description model, few instances were conducted to verify the performance of the CPHS. Krugh and Mears [14] tried to introduce cyber-human systems into automotive assembly. Zhou et al. [15] first proposed the concept of HCPS and designed HCPS at different levels to meet the various demands of intelligent manufacturing. However, most of the systems above put more emphasis on the system architecture than on the realization of the function modules, and many technical details remain to be developed to make these HCPS-like systems work in manufacturing. The abbreviations used in this paper are summarized in Table 1.
The three key parameters that affect the efficiency of an intelligent manufacturing cell with human in the loop are the function modules of human-robot safety, task planning and task state understanding [16][17][18]. Firstly, since human and robot share the same task space in the intelligent manufacturing cell, the safety of the human must be ensured, which is the precondition for the human working in the loop [19]. One of the popular methods to ensure human-robot safety is setting an electronic fence in the space of human-robot coexistence [20,21]. Lotsaris et al. [22] labeled the safety fields in the human-robot coexistence space via AR, but theirs is still an electronic fence method. In electronic fence methods, the distance between human and robot is treated as a critical parameter for safety planning [23]. However, the distance shows no significant change in close-range collaborative tasks, which may result in setting numerous safety levels. Emanuele et al. [24] introduced a human-robot distance estimation cell that fused depth sensors and laser scanners in case of missing human visual data, but this distance-level safety strategy is mainly suited to contactless interaction conditions. In addition, some CPS-based safety planning methods for enabling safe human-robot collaboration [25,26] remain within the scope of the distance-level safety strategy, and these works did not involve state estimation with multi-sensor fusion, so their accuracy is limited by a single sensor. Michalos et al. [27] designed a safety strategy based on EU standards (ISO 10218-1, ISO 10218-2, ISO/TS 15066) to conduct an HRC assembly task; it still followed the scheduler planning scope of rigid tasks. Aivaliotis et al. [28] proposed a method for limiting the forces applied by an industrial robotic manipulator via an inverse dynamic robot model. This passive safety method would introduce more stops in close-proximity HRC tasks.
Furthermore, in tasks where humans and robots coexist at close range, occlusion of the human in visual sensors has become a constant trouble for safety planning. The prediction of human behavior should therefore be considered so that human safety can be taken into account in advance in robot planning. Since CNN-based human pose prediction can overcome the noise shortcomings of depth sensors [29], marker-less visual methods have been widely employed [30] to predict human pose for robot safety planning. Zhu et al. [31] integrated the spatiotemporal information of moving obstacles into robotic motion planning to avoid collision with the human. In these methods, both humans and robots were filtered through visual algorithms; the experimental scenario was, however, so simple that the occlusion condition was not involved. Secondly, the efficiency of task adjustment for the HRC-IMC is mainly subject to another key parameter: the robot's ability to learn new tasks. As the digitized experience of humans can be applied to improve the intelligence of robots, human-guided task learning can be more efficient than the mathematical modeling methods mentioned above. This kind of task adjustment strategy is thereby regarded as another important research branch to review for the HRC-IMC. In the research of Edmonds et al. [32], data gloves were used to record the keypoint sequences of human hands during the manipulation of a bottle. Similarly, objects fitted with piezoelectric force transducers were employed by Pham et al. [33] to track grasping points (GP) during human operation. The two methods above affect the tactile feeling of human hands, which may result in motion distortion [34]; the operation experience of the human would then be distorted before being learned by robots, introducing unpredictable effects on the subsequent imitating learning. In addition, some human grasping data sets have been built for robot grasp imitation, such as GRAB [35] and ContactPose [36]. Although these data sets were built in a visual mode, the objects in them are mainly household items rather than objects from industrial production, and human operating habits for these two kinds of objects may differ greatly.
Furthermore, since the cell works with many uncertainties such as humans and dynamic tasks, the third key parameter of the HRC-IMC is that the robot must understand the task process via visual or other sensor data [37][38][39][40]. Evangelou et al. [41] introduced a solution for scheduling and assigning assembly tasks to both human and robot resources. The task planning in their method was realized by searching a dataset with an artificial intelligence algorithm, but this offline method cannot meet the demands of industry, as the work cell is a dynamic environment with humans. Tsarouchi et al. [42] developed an HRC assembly cell based on gesture recognition and task planning, but their cell works mainly by depending on a scheduler service. The efficiency of task adjustment for their assembly cell was subject to the rebuilding efficiency of the scheduler service, similar to the mathematical modeling methods. Li et al. [43] combined task semantics and task planning by introducing the task semantic object matrix. In this way, robots can understand the task scenario and the task process, which enhances the robustness of the robot against dynamic change. Li et al. [44] also developed a force inferring method to estimate the fuzzy grasping force through visual deformation, which can adjust the closing degree of the gripper via visual data for force-sensor-less robots. These two methods enable the robot to understand the operation state, which is the precondition for robot adjustment in collaborative tasks.
To solve the problem of poor robot adjustment efficiency in short-run production lines, this paper develops the HRC-IMC based on the three key parameters above. Firstly, the HRC-IMC was established using the safety planning algorithm, the imitating learning algorithm, the force inferring algorithm [44] and the semantics-based task planning algorithm [43]. With the HRC-IMC, the operation experience of humans can be learned via the imitating learning module when dealing with new tasks; the robot can understand the task state in collaborative tasks so as to adopt human actions; and the robot can proactively ensure human safety. Compared with existing dynamics modeling methods for task adjustment, our method not only reduces the workload of rebuilding the model by learning experience from humans, but also has concrete algorithm support compared with other HCPS-like conceptual frameworks. All in all, the HRC-IMC provides a more efficient, safe and intelligent system for the manufacturing industry. Moreover, it not only meets the demands of short-run production and personalized customization in the consumer-goods manufacturing industry, but can also be applied to other manufacturing domains.
The remainder of this paper is organized as follows. Section 2 describes the modules of the HRC-IMC. Then experiments are set in Sect. 3. And the results of the experiments are discussed in Sect. 4. In the end, Sect. 5 outlines the conclusion.

The intelligent manufacturing cell based on human-robot collaboration
As shown in Fig. 2, the HRC-IMC system is composed of four parts: the cobot module, the operation layer, the HRC module and the human. In particular, the HRC module is the foremost one and is composed of four key algorithms. The HRC-IMC has four functions: (a) ensure the safety of the human in HRC tasks; (b) decode the operational experience of the human via the imitating learning module, so that the robot can realize efficient task adjustment by learning from the human; (c) conduct HRC tasks with estimation of the task state through the safety planning module and the visual inferring module; (d) realize dynamic adjustment with the task planning module and the visual inferring module. This is the main difference from other HCPS-mode systems.
Besides, the Baxter is used as the fundamental hardware of the cobot module. As is known, each arm of the Baxter robot has 7 degrees of freedom, and each joint is equipped with a series elastic actuator (SEA) to estimate the joint torque. The repeated positioning accuracy of the robot is 5 mm. In addition, the HRC-IMC integrates the task planning technology and the visual inferring algorithm of our previous work [43,44]. With these algorithms, the HRC-IMC can understand the task state and estimate the fuzzy grasping force via visual information. Furthermore, the imitating learning method is employed to make the robot learn from the operation experience of the human, which enhances the intelligence of the HRC-IMC by combining the intelligence of the human and the operational ability of the robot. Also, the safety planning module ensures the safety of the human in the human-robot coexistence space.
The prototype system framework of the HRC-IMC is shown in Fig. 2. The HRC-IMC is mainly composed of the cobot module, the human and the HRC module, and its functions are realized by the planning layer, the network layer, the hardware layer, the operation layer and the HRC module. The layers and modules of the HRC-IMC are described in detail in the following paragraphs.
Planning layer According to the operation task commands of cobots, subtasks can be carried out via the planning layer, such as grasping planning, motion planning, dynamic planning, self-collision avoidance planning, collision avoidance planning in Cartesian space and safety planning. This layer is an important module that decodes control commands into joint motions of the robot. All of the commands are published via the ROS controller, as is the feedback of sensor data. The motion planning results of the planning layer are also sent to the task simulation environment to verify the planning results before operation.
Network layer This layer is designed to realize the transmission between the control system and the hardware system of the robot, such as video feedback, control commands, image processing information, auxiliary information of robot and so on.
Robot hardware layer It mainly consists of the vision sensor, series elastic actuators (SEA), joint encoders and end-effectors. This layer executes operation commands; the original data of operation execution are obtained via the sensors and then sent to the planning layer.
Operation layer It includes workbench, all kinds of objects and tools. It can be regarded as a part of the automatic production line or a station.
HRC module It is composed of the task planning module, safety planning module, visual inferring module and imitating learning module. And the main function of the HRC module is to conduct the planning of human-robot cooperation tasks and the imitating learning of new tasks.
Human The main functions of the human in the HRC-IMC are shown in Fig. 3. Firstly, if it is not necessary for the human to directly participate in the task, the human acts as the task command publisher and supervisor to monitor the execution of the task, and should also carry out emergency management if emergencies occur. Secondly, the tasks involving humans can be divided into two groups, namely the HRC tasks and the imitating learning tasks. In the HRC tasks, the human is required to conduct cooperative tasks along with robots. Besides, for some new tasks, humans can share their operating experience with robots, and with the imitating learning module, human experience can be learned by robots through a deep learning network. A supervised learning model of human experience is then obtained and applied to robot planning when dealing with the same tasks or objects. In this way, robots can realize rapid task adjustment. Moreover, by digitizing human intelligence, the intelligence of the HRC-IMC can be improved, as can the manipulation performance of the robot.

Task simulation environment
The task simulation environment module is shown in Fig. 4. In order to establish a visual virtual operating environment, Gazebo is employed to model the three-dimensional task space of the robots and the operational scenario, and the robot controller is built on ROS to make the simulation environment faithful. The modeling process mainly involves simulation environment modeling and kinematics modeling. The 3D models of the robot and the objects are used to build the corresponding URDF models, and the simulation environment of the shoemaking scene shown in Fig. 4 is built according to the file format of the Gazebo physical simulation engine. Based on the robotic kinematic relationships defined in the URDF model, the Kinematics and Dynamics Library (KDL) is used to generate the kinematic chain. The kinematics model is carried out by referring to formulas (1)-(5).
For one joint chain, if the end coordinates matrix at time $t_0$ is $P_{t_0}$, there exist a joint angle matrix $\theta_{t_0}$ and a seed angle matrix $\theta_0$ which make formulas (1) and (2) true:

$$P_{t_0} = f(\theta_{t_0}) \tag{1}$$

$$P_{t_0} = f(\theta_0) + J(\theta_0)\,(\theta_{t_0} - \theta_0) + O\big((\theta_{t_0} - \theta_0)^2\big) \tag{2}$$

Formula (2) is the Taylor series expansion of formula (1), and $O\big((\theta_{t_0} - \theta_0)^2\big)$ denotes the higher-order terms of the expansion. In addition, $J$ stands for the Jacobian matrix, which can be expressed in detail via formula (3):

$$J(\theta) = \frac{\partial f(\theta)}{\partial \theta} \tag{3}$$

If the higher-order terms in formula (2) are ignored, $\theta_{t_0}$ can be calculated through formula (4):

$$\theta_{t_0} \approx \theta_0 + J^{-1}(\theta_0)\big(P_{t_0} - f(\theta_0)\big) \tag{4}$$

Formulas (2)-(4) form a classical numerical recursion algorithm, and the iteration termination condition is shown as formula (5):

$$\big\| P_{t_0} - f(\theta_k) \big\| < \varepsilon \tag{5}$$

where $\varepsilon$ represents the threshold value for the interpolation search.

Fig. 3 The role of human in the prototype system

Fig. 4 The Gazebo simulation environment for the prototype system
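The recursion of formulas (2)-(5) can be illustrated with a minimal sketch. For readability, a planar two-link arm stands in for the Baxter kinematic chain; the link lengths and the closed-form 2x2 Jacobian are illustrative assumptions, not the actual KDL solver.

```python
import math

# Illustrative link lengths (assumption; the real chain is defined in URDF).
L1, L2 = 0.4, 0.3

def fk(theta):
    """Forward kinematics P = f(theta) of a planar two-link arm (formula (1))."""
    t1, t2 = theta
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return (x, y)

def jacobian(theta):
    """Jacobian J = df/dtheta of the planar arm (formula (3))."""
    t1, t2 = theta
    return [[-L1 * math.sin(t1) - L2 * math.sin(t1 + t2), -L2 * math.sin(t1 + t2)],
            [ L1 * math.cos(t1) + L2 * math.cos(t1 + t2),  L2 * math.cos(t1 + t2)]]

def solve_ik(target, seed, eps=1e-6, max_iter=100):
    """Numerical recursion of formulas (2)-(5): linearize FK around the
    current estimate and apply the update theta += J^-1 * error until the
    position residual drops below the threshold eps."""
    theta = list(seed)
    for _ in range(max_iter):
        x, y = fk(theta)
        ex, ey = target[0] - x, target[1] - y
        if math.hypot(ex, ey) < eps:      # termination condition (5)
            return theta
        J = jacobian(theta)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Invert the 2x2 Jacobian explicitly (update step of formula (4)).
        theta[0] += ( J[1][1] * ex - J[0][1] * ey) / det
        theta[1] += (-J[1][0] * ex + J[0][0] * ey) / det
    return theta
```

Starting from a seed near the solution, the iteration typically converges in a handful of steps, which is why a good seed angle matrix matters in practice.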

Planning module
The planning module mainly realizes kinematic planning, including grasping planning, motion planning and safety planning. Grasping planning involves grasping point planning and grasping path planning; its process is shown in Fig. 5. As shown in Fig. 5, there are two modes of grasping point selection, namely the imitating grasping mode and the envelopment model mode. The envelopment model in this paper is realized by the pick-and-place module, which is mainly used for grasping point planning for simple geometric shapes such as cylinders and cuboids. The imitating grasping mode is the main grasping point planning method of the HRC-IMC. In this mode, the trained prior model of human grasping for the same object is applied to generate grasping points for the subsequent grasping path planning of the robot, with the pose and semantic information of the objects set as the input of the prior model. Besides, the path planning for grasping is conducted by the trajectory planning module in the planning layer, and trajectory planning is mainly realized by combining the KDL library and ROS. With the functions in the KDL library, path points are fitted into paths with trapezoidal velocity characteristics, which are then converted into the ROS path point format. Finally, trajectory tracking control is realized based on the ROS motion controller.
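The trapezoidal velocity fitting mentioned above can be sketched as follows. This is a simplified stand-in for the KDL trajectory utilities: it samples the velocity profile for a single path segment, with the limits `v_max` and `a_max` and the sampling step `dt` chosen as illustrative assumptions.

```python
def trapezoidal_profile(distance, v_max, a_max, dt=0.01):
    """Sample a trapezoidal velocity profile covering `distance`:
    accelerate at a_max, cruise at v_max, decelerate at a_max.
    Falls back to a triangular profile when v_max cannot be reached."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc > distance:                  # segment too short: triangular
        t_acc = (distance / a_max) ** 0.5
        v_peak = a_max * t_acc
        t_flat = 0.0
    else:
        v_peak = v_max
        t_flat = (distance - 2 * d_acc) / v_max
    total = 2 * t_acc + t_flat
    samples, t = [], 0.0
    while t <= total:
        if t < t_acc:                         # acceleration ramp
            v = a_max * t
        elif t < t_acc + t_flat:              # constant-velocity plateau
            v = v_peak
        else:                                 # deceleration ramp
            v = max(0.0, v_peak - a_max * (t - t_acc - t_flat))
        samples.append((t, v))
        t += dt
    return samples
```

Each `(time, velocity)` sample would then be mapped onto interpolated path points before conversion to the ROS trajectory message format.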

HRC module
The network module realizes the information communication between the control system and the robot, which is the basis for sending task commands to the robot and for the feedback of sensor data. In this paper, the data to be exchanged include real-time point cloud and image data, control command data and auxiliary information such as the joint angles and SEA force information of the robot. The socket communication protocol is used to establish the connection between the control system and the robot, which is not described in detail here.
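The socket link can be sketched with a minimal loopback example. This is not the system's actual protocol (the message format and the echo acknowledgement are assumptions for illustration); it only shows the connect/send/receive pattern between the control system and the robot side.

```python
import socket
import threading

def start_echo_server(host="127.0.0.1"):
    """Tiny loopback stand-in for the robot side of the link:
    it accepts one connection and echoes the command back as an ACK."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(b"ACK:" + data)
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return host, port

def send_command(host, port, cmd):
    """Control-system side: send one command string, return the reply."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))
    cli.sendall(cmd.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply

host, port = start_echo_server()
reply = send_command(host, port, "grasp sole")
```

In the real cell the payload would carry serialized point clouds, images and joint states rather than a plain string, but the connection lifecycle is the same.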
The HRC module is mainly composed of the task planning module, the safety planning module, the visual inferring module and the imitating learning module. The visual inferring module mainly estimates the fuzzy hardness according to visual information and was developed according to our previous work [44]. The function of the task planning module is mainly realized by the method proposed in ref. [43]. The safety planning module is mainly developed for the contact maintenance scenario of cobots; at present, it only provides human-robot safety planning based on the fusion of vision and force. Since the robot control system is developed under the ROS framework, the HRC module is developed in the form of a ROS package to facilitate information transfer between modules, and the data between modules are transmitted in the form of topics.
As shown in Fig. 6, a human-inspired collaboration strategy is employed in the HRC module, and the HRC strategy is realized through safety planning based on the human-robot distance. With this strategy, human actions are first extracted from the RGB image. After the corresponding depth information of the keypoints is extracted, the 3D keypoints of the human are estimated. The minimum distance between human and robot can then be obtained with the help of their bounding boxes. This minimum distance is converted into a virtual force through formula (6), which can be exerted on the robot's end-effector after adjoint transformation. In formulas (6) and (7), $F_t$ represents the force at the robot's end-effector; $d_t$ is the minimum distance between human and robot at time $t$; $F_{vt}$ stands for the virtual force at time $t$; and $k_a$, $k_b$ and $k_c$ stand for the adjustment parameters of the virtual force.
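Since the exact form of formula (6) is not reproduced here, the distance-to-force mapping can only be sketched under assumptions: the exponential repulsive form, the parameter values and the cut-off distance `d_safe` below are all hypothetical, chosen only to show the qualitative behavior (the closer the human, the larger the virtual force).

```python
import math

def virtual_force(d_t, k_a=20.0, k_b=5.0, k_c=0.0, d_safe=1.2):
    """Hypothetical distance-to-virtual-force mapping in the spirit of
    formula (6). Beyond d_safe no virtual force is applied; inside it,
    the force grows as the human-robot distance d_t shrinks.
    The exponential form and all parameter values are assumptions."""
    if d_t >= d_safe:
        return 0.0
    return k_a * math.exp(-k_b * d_t) + k_c
```

With any monotonically decreasing mapping of this kind, the resulting force can be transformed to the end-effector frame and fed into the impedance controller described below formula (12).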
Furthermore, the superposed effect on the end-effector, denoted $F_{co}$, is calculated directly according to the forward dynamics and the other input elements. $F_{co}$ and its duration $t$ are the two paramount parameters for the estimation of the safety state. Based on the estimated safety state $s$, the robot can conduct safety planning via inverse dynamics according to formulas (7)-(13), taking the interrupt position points $q_c$ and interrupt time points $t_i$ into consideration. In formula (7), $M(q)$ is the generalized mass matrix and $C(q,\dot q)$ represents the centrifugal and Coriolis force vector; in formula (8), $V(q)$ stands for the inertial matrix and $u(q,\dot q)$ has the same meaning as $C(q,\dot q)$. Furthermore, $G(q)$ and $p(q)$ are the gravity vectors in formulas (7) and (8), respectively. $x$ is the desired position of the end-effector, and $q$ is the desired joint angle vector.
Virtual displacements of each joint are set as $\delta q$, and the displacement of the robot's end-effector is set as $D$. The acceleration of the robot's end-effector and joints, which is subject to the force on the end-effector, can be calculated through formulas (9)-(13) according to the principle of virtual work, where $J$ is the Jacobian matrix of the robot and $J^{\dagger}$ stands for the pseudoinverse of $J$.

Fig. 6 The HRC framework based on safety planning
In addition, the position control quantity $x_0$ in the impedance controller can be calculated through formula (12) when the external force on the robot's end-effector is $F_t$ and the desired position is $x_d$ at time $t$.
Then the desired new joint positions $q_d$ and joint velocities $\dot q_d$ can be calculated based on $x_d$ and $\dot x_d$ and sent to the robot control module. In this way, the robot decreases its velocity until stopping as the human approaches. More importantly, the task continues if the state switches into collaborative-move. This slow-down-and-continue mode can guarantee both the safety of the human and the efficiency of execution.
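The impedance behavior around formula (12) can be sketched in one dimension. This is a generic mass-spring-damper impedance law, not the paper's exact controller; the gains $M$, $B$, $K$ and the time step are illustrative assumptions.

```python
def impedance_step(x, v, x_d, F_ext, M=1.0, B=8.0, K=40.0, dt=0.01):
    """One semi-implicit Euler step of the 1-D impedance law
    M*x'' + B*x' + K*(x - x_d) = F_ext, returning the commanded
    position (the analog of the position control quantity x_0)."""
    a = (F_ext - B * v - K * (x - x_d)) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# With no external force the commanded position settles at x_d;
# a sustained virtual force pushes the equilibrium to x_d + F/K,
# which is what slows the approach when the human is near.
x, v = 0.0, 0.0
for _ in range(2000):
    x, v = impedance_step(x, v, x_d=0.1, F_ext=0.0)
```

In the cell, the virtual force from the safety planner would enter as `F_ext`, and the commanded Cartesian position would then be mapped to $q_d$, $\dot q_d$ via inverse kinematics.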
The grasping planning of robots involves a lot of mathematical modeling work when dealing with new tasks or new targets, so the study of imitating learning in the HRC module focuses on the GP planning of robots. In this way, the robot can adjust quickly by learning from the human instead of through mathematical modeling. The framework of the imitating learning module is shown in Fig. 7. In Fig. 7, the control principle of the robot imitating human operation is divided into three parts according to their functions in the imitating process. The human-robot operation module involves the input of the human's operation extraction and the output of the robot's imitating operation. The humanoid GP information bridge is composed of hierarchical bounding boxes for tracking hand action positions, grasping point optimization, and GP error calculation for feedback. Firstly, in human-robot operation, keypoint extraction is conducted on the RGB image of the human's operation actions. These keypoints are converted to arrays and published to the humanoid GP information bridge via the ROS master, where they can be subscribed to by both the robots and the users of the control system. In addition, with hierarchical bounding boxes of the hands and objects built according to the hands' keypoints, the GP sequence is estimated through FCL [45] by closest-point intersection calculation and sent to the grasping point imitating module. Moreover, GP mapping is introduced in this module to overcome the problem resulting from the different structures of robot grippers and human hands. Finally, the optimized GP are applied to trajectory planning for the robotic grasping operation through the humanoid GP information bridge.

Fig. 7 The imitating learning framework in the HRC module
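The closest-point intersection idea can be sketched without the FCL library. The brute-force distance test below is a stand-in for FCL's hierarchical bounding-box query, and the contact threshold is an assumed parameter: object points that come within the threshold of any hand keypoint are taken as candidate grasping points.

```python
def closest_contact_points(hand_pts, obj_pts, contact_thresh=0.01):
    """Brute-force stand-in for the FCL closest-point query: return the
    object points lying within contact_thresh (meters, assumed) of any
    hand keypoint as candidate grasping points."""
    def dist2(a, b):
        # Squared Euclidean distance between two 3D points.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    gp = []
    for p in obj_pts:
        if any(dist2(p, h) <= contact_thresh ** 2 for h in hand_pts):
            gp.append(p)
    return gp
```

In the actual module, the resulting GP sequence would still pass through GP mapping and optimization before being used for robot trajectory planning, since a parallel gripper cannot reproduce every human hand contact.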

The fusion mechanisms of visual and force in the HRC-IMC
The fusion mechanism of vision and force sense in the prototype HRC-IMC system is shown in Fig. 8. As shown in Fig. 8, vision in this paper is mainly used in three domains: (a) estimation of object deformation information; (b) recognition of object semantics and pose; (c) recognition of human actions. Since the only force sensors used in this paper are the SEAs, the actual force perception information is limited to joint force information; other force data, such as the grasping force and the end-effector force, can only be obtained by calculation. The mechanism of force induced by vision is mainly embodied in the fuzzy hardness inference of the visual inferring module. According to ref. [44], the supervised learning mechanism built with an LSTM network can realize grasping force adjustment by transforming visual deformation into fuzzy hardness. In addition, after the object's semantic and pose information is recognized via visual information, the target semantic matrix generated by the task planning can be used to realize autonomous modeling of the target and obstacles, which helps the robot understand the task and adjust autonomously during the task process. Besides, when human actions are recognized, the robot can conduct imitating learning according to the semantic and pose information of the objects and thus quickly obtain the operating ability for new tasks. Moreover, the human actions can also be applied to the estimation of the safety state according to the visual and force information in the cooperative operation, so that the safety of the human can be ensured in cooperative tasks. The above are the fusion mechanisms of vision and force in this paper. As the intelligence requirements of cobots become higher and higher, it is difficult to realize intelligent control relying only on a single information channel. Therefore, the fusion mechanism of vision and force based on the human operation mode can meet the demands of different tasks, and it may be one of the effective ways to realize intelligent control of HRC.
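A minimal sketch of how the vision and force channels might be fused into a safety state: the rule below (stop on proximity or excessive contact force, slow down when the human is near, otherwise run) and its thresholds are hypothetical, intended only to illustrate combining the two channels rather than to reproduce the paper's estimator.

```python
def fuse_safety_state(min_distance, end_force, d_stop=0.3, f_max=15.0):
    """Hypothetical vision-force fusion rule for the safety state.
    min_distance: vision-estimated human-robot distance (m, assumed);
    end_force: computed end-effector contact force (N, assumed).
    Thresholds d_stop and f_max are illustrative assumptions."""
    if min_distance < d_stop or end_force > f_max:
        return "stop"     # imminent contact or excessive force
    if min_distance < 2 * d_stop:
        return "slow"     # human nearby: reduce velocity
    return "run"          # normal execution
```

A rule of this shape is one concrete way a single decision can draw on both channels, which is the point of the fusion mechanism: neither distance nor force alone triggers the right behavior in contact-maintenance tasks.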

The hardware and software environment for HRC-IMC
The structure of the HRC-IMC developed in this paper is shown in Fig. 9, and its hardware and software environment is shown in Table 2. The hardware of the system mainly includes: a Baxter cooperative robot from Rethink Robotics, a two-finger electric gripper, a Kinect V1 depth camera, a laptop computer, common tools and operating objects from a shoe production line, a test platform, and tooling and fixtures. The computer is configured with an Intel i7-7700 2.8 GHz CPU and 8 GB of RAM.
The system is built in the Ubuntu 16.04 environment with the ROS Kinetic architecture [46]. Qt is employed as the main development platform. The 3D models of the robot, targets and environment are modeled in SolidWorks 2016 and converted by the URDF plug-in. The network communication function is realized by socket programming. The Baxter SDK is used for robot control and joint information feedback. The OpenCV [47] and PCL [48] libraries are used for image and point cloud processing, and the installation paths of these visual processing libraries are added to the CMakeLists.txt file of the Qt project package.

Experiment
An imitating grasping experiment was conducted to evaluate the efficiency of task adjustment of the HRC-IMC described above. As shown in Fig. 10, the main equipment applied in the experiment was a Baxter robot with a parallel gripper and a Kinect V1 camera. Moreover, a laptop with a GTX 1060 graphics card and the Baxter SDK 1.2 environment was used for image processing and robot motion planning. In addition, a simplified sole model was used for the grasping operations. Since the grasping point selection for the sole changes when the task switches, the efficiency from modeling to grasping point selection was tested via imitating grasping and the Pick-and-Place module of MoveIt. In order to estimate the contact points between hands and objects according to geometric intersection, PCL [48], FCL [45] and TensorFlow [49] were also applied in this experiment.
Furthermore, two groups of experiments were conducted to verify the feasibility of task adjustment via the HRC-IMC. One-hand grasping experiments were used to test the feasibility of imitating grasping points from the human, and two-hands operation was applied to imitate a dual-arm operation scene.

One-hand grasping
One of the GP point clouds for the shoe sole is shown in Fig. 11, and Fig. 12 shows three grasping trajectories for three different GP with the shoe sole model remaining in the same position. As shown in Fig. 12b, the grasping RPY angles at the start position and the goal position are set to the same value so as to minimize the path planning failure rate of ROS resulting from different grasping RPY settings. The invariant obstacle position and the small differences among the three GP contribute to the nearly identical change of RPY along the trajectories in Fig. 12b. As shown in the figures, the GP imitating the human are generated as a point cloud (green) and combined with the objects' point clouds (blue). Moreover, the success rates over 60 repeated experiments and the modeling efficiency are listed in Table 3. For a one-hand grasping operation on a flat, long object like a shoe sole, the probability of the GP lying on the left, middle or right is almost the same and does not introduce a negative factor for successful human grasping. As shown in Table 3, the success rate for grasping the sole is 70%, which is 10% higher than that of the Pick-and-Place module. The two reasons for the decreased success rate on the sole model can be explained through Fig. 12a. For the Baxter robot, the kinematic chain in OMPL of ROS for trajectory planning runs from the torso link to the gripper link, and the origin of the gripper link coordinates is at the center of each gripper. The length of each gripper is 0.011 m, so the maximum tolerable error along the Y axis is 0.0055 m. If the error along the Y axis approaches or surpasses 0.0055 m, slippage may occur in the grasping operation on the shoe sole; this phenomenon is the main cause of grasping failure, owing to the relatively low positioning accuracy of the Baxter robot. The other cause of grasping failure is the trajectory planning problem of ROS in the grasping process, especially when obstacle avoidance is taken into consideration.
As mentioned above, the efficiency in Table 3 is the time consumed from geometric modeling to obtaining the grasping point results. As shown in Table 3, the time from setting up the simplified model to completing the grasping points is 35 minutes for the Pick-and-Place module and 20 minutes for the imitating learning module. The efficiency of adjustment can thus be increased by 42.9% when using the imitating learning module of the HRC-IMC, which demonstrates the superiority of the HRC-IMC.

Two-hands grasping
As mentioned above, the efficiency of task adjustment with the proposed imitating grasping can be increased by 42.8% compared with the Pick-and-Place module when robots conduct one-hand grasping. For objects with a size similar to the shoe sole, the grasping method of MoveIt does work. However, the key superiority of the proposed imitating grasping is that it can be applied to grasping point learning for objects like the shoe sole with dual-arm robots. Moreover, no mathematical model must be rebuilt when robots deal with new tasks, in contrast to the MoveIt grasping method, which is conducive to the efficiency of task adjustment for robots in short-run production.
As shown in Fig. 13a, dual-arm GP imitating is developed from the one-hand grasping in Fig. 12. The grasping force adjustment needed in the scene of Fig. 12 can be avoided via two-hands operation. Moreover, Fig. 13b, c indicate that the grasping process can be divided into independent operations of the left arm and the right arm before moving to the grasping position. However, in Fig. 13b, the trajectories of the two arms after reaching the actual GP are synchronous, and the bottleneck for increasing the dual-arm grasping success rate lies here. Although the experiments verify the feasibility of GP imitating human two-hands operation, the path planning for dual-arm motion with the end-effectors maintaining the grasping pose up to the target pose is still the key limiting factor for improving the grasping success rate.

Conclusion
This work aims to enable robots in the production line to realize quick adjustment in flexible manufacturing, meeting the demands of short-run production. The HRC-IMC was presented, which combines the operational experience of the human and the operational capacity of the robot rather than adjusting different mathematical models for different tasks. With the HRC-IMC, critical problems of robots involving the evaluation of the task state and safety planning in the task process were addressed. Moreover, the robot's understanding of the scene and task can promote state evaluation at the task level, which improves the efficiency of HRC. Besides, the experimental results indicate that the efficiency of task adjustment of the HRC-IMC can be increased by 42.8% compared with MoveIt. Therefore, in short-run production, the HRC-IMC can be used as a more intelligent and efficient substitute for the traditional automatic manufacturing cell.