Multi-Objective Optimized Immune Algorithm for the Computation Offloading Problem in Edge Computing Scenarios of the Internet of Vehicles

Abstract: With the development of 5G technology, the Internet of Vehicles (IoV) has received worldwide attention. IoV edge computing achieves low latency by offloading tasks to the Mobile Edge Computing Server (MECS). However, it remains a challenge to reduce the computing delay of mobile terminal devices while ensuring low energy consumption and load balancing of the servers. To solve this problem, this paper establishes a system model, a delay model, a load balancing model, an energy consumption model and an objective optimization model, and proposes a computation offloading scheme based on a multi-objective immune optimization algorithm. Finally, the scheme is compared with a benchmark scheme and a scheme from the literature. Simulation experiments show that the proposed scheme can effectively reduce the average offloading delay of users, balance the workload between servers, and effectively reduce energy consumption; its performance is better than that of the schemes in the literature.


1. Introduction
In recent years, with the rapid development of the economy, the number of vehicles has exploded and traffic problems appear frequently. With the emergence of 5G technology, terminal devices, especially vehicles, have increasingly urgent demands on resources, and edge computing for the Internet of Vehicles has gradually become a research hotspot [1]. In the Internet of Vehicles, vehicles perceive their internal status through sensing technology, obtain the dynamic information of all vehicles within a regional platform through the wireless communication network, and make effective use of it. The existing Internet of Vehicles reduces the probability of collisions by informing vehicles of the spacing between them, or improves the driving experience and the efficiency of traffic operation through real-time navigation. In addition, the number of on-board multimedia devices is increasing, and the addition of various entertainment facilities greatly increases the computing resources a vehicle requires, which in turn greatly increases the demand for task offloading. Cloud computing allows mobile terminals to send tasks to the cloud, effectively reducing the load and energy consumption of the terminals, but IoV tasks have very strict delay requirements, and the high delay of uploading tasks to the cloud is difficult to accept in IoV scenarios. Edge computing can be added to cloud computing as a small cloud computing network close to the user, and edge computing devices can better meet the needs of the Internet of Vehicles [2]. The main difference between edge computing and cloud computing is that edge computing transfers tasks to a nearby edge server, which can greatly reduce task delay, lighten the load on the cloud, and at the same time offer better security. In the offloading process, an unbalanced assignment of tasks may overload some servers.
When the load of a MEC server is unstable, server performance is affected and the server may even fail. On the other hand, the energy consumed during operation is also one of the measurement criteria [3]. Therefore, when allocating resources, it is necessary to optimize the load and energy consumption of the large number of deployed MEC servers to reduce the cost of edge equipment [4]. The design of a computing scheme that guarantees load balancing and low energy consumption on the premise of low delay is thus a problem worth studying. At present, the offloading decision problem in the IoV edge computing scenario has attracted extensive attention from scholars at home and abroad. Literature [3] designed a vehicle-to-vehicle offloading model for cross-regional task transmission, and used NSGA-III to optimize the load balancing rate, server energy consumption and task processing delay. Literature [4] proposes a blockchain-based computation offloading method to ensure data integrity, and adopts NSGA-III to generate the resource allocation strategy, addressing the problems of unbalanced server load and insecure information transmission. Literature [5] designed a three-layer architecture consisting of the vehicular cloud, the roadside cloud and the central cloud, which effectively distributes the large number of task requests on the road and makes reasonable use of the resources of the three clouds; combining the genetic algorithm and the particle swarm algorithm, it designs an adaptive particle swarm optimization algorithm that effectively reduces task response delay and energy consumption. Literature [6] studied the trade-off between energy consumption and task processing delay in the fog-cloud scenario, using the NSGA-II algorithm for its experiments.
Literature [7] solves the workload distribution problem in the Internet of Things-fog-cloud architecture and makes a trade-off between energy consumption and propagation delay. The NSGA-II algorithm is used to process the multi-objective model, and simulations are carried out in three scenarios (pure fog, pure cloud and fog-cloud), demonstrating significant differences among them.
Literature [8] adopts a hybrid task scheduling algorithm that optimizes the makespan and performs better than the single ant colony algorithm and genetic algorithm in a multiprocessor environment. Literature [9] studied resource allocation in a multi-task environment; to use resources effectively, a multi-task evolution scheme with dynamic resource allocation was designed, which adaptively allocates resources to each task according to its demands and has the ability of cross-regional allocation. Literature [10] introduces an adaptive offloading method based on the genetic algorithm to handle Internet of Things traffic effectively, which reduces unnecessary delay and complexity in the request process and improves the success rate of Internet of Things requests. In view of the lack of a clear interpretation of the weighting coefficients, Literature [11] proposed an MMD method with a rich geometric interpretation, which can generate the weighting coefficients systematically without prior preference. Literature [12] designed a new architecture to achieve efficient data scheduling in mixed V2I and V2V communication, formulated the problem of timely data uploading and publishing, and proposed an evolutionary multi-objective algorithm, MO-TDUD, to improve data quality and delivery rate. Literature [13] designed vehicular delay-tolerant networks (VDTNs) that transfer data to vehicles at the edge of the network to process and exchange the large data sets generated by vehicles. Literature [14] proposed a resource allocation method named MRAM in the Internet of Things scenario, combining multi-criteria decision making with the technique for order preference by similarity to ideal solution, which effectively satisfies the requirements of low delay, load balancing and low energy consumption.
Literature [15] aims at shortening communication and processing delay, and adopts a distributed alpha algorithm and the optimal signal-to-noise ratio to distribute Internet of Things traffic to appropriate workstations, so as to achieve traffic load balance between workstations and fog nodes. Literature [16] proposed a general framework for Internet of Things applications in which tasks are assigned by setting a threshold: if the response time is less than the threshold, the task is accepted by the fog node; otherwise, it is transmitted to other fog nodes or to the cloud.
Literature [17] proposed an optimization method based on stochastic gradient descent (SGD), designed an efficient distributed offloading model to solve the nonlinear multi-objective optimization problem, and found the optimal balance among energy consumption, delay and cost.
Our main contributions are as follows:
• We propose a multi-objective immune algorithm (MOIA) to realize multi-objective optimization of offloading delay, server energy consumption and load balancing.
• We design a vehicle-to-vehicle optimization model to solve the task offloading problem when a vehicle moves across regions, optimizing server energy consumption and load balancing on the premise of low delay.
• We conduct extensive experimental evaluations to prove the effectiveness of the proposed algorithm and model.
The remainder of this paper is organized as follows. Section 2 establishes the system model, the mathematical models and the problem model. Section 3 describes the design and improvement of the algorithm. Section 4 presents the experiments. Section 5 summarizes the paper and looks to future work.

System model
The system model designed in this paper is shown in Figure 1, without loss of generality. Suppose there are M vehicles in the system, denoted as {v1, v2, ..., vM}, each of which can handle multiple tasks at the same time.
The server allocates computing resources according to the number of virtual machine instances. To simplify the analysis and obtain valuable rules, this paper assumes the process is quasi-static; that is, the task set within a single time period is unchanged, but it varies between time periods. At the beginning of each time period, each vehicle submits to the server of its area the computing and storage resources required for its tasks, represented as the two-tuple <z_i, c_i>. After the server uploads this information to the cloud, the cloud assigns tasks to target servers according to the current server status. When the target server of a task is not in the vehicle's own area, the task is transmitted by vehicle-to-vehicle (V2V) technology: it is first passed to a vehicle in the next area, which then forwards it to the server or to a vehicle in the area after that. Similarly, when a vehicle crosses regions and has already left its original region, the offloading target information is sent by the original server to the nearest vehicle in the boundary area and transmitted onward by that vehicle. During task assignment, it must be ensured that the server has enough storage space to accept the tasks. Let the set of unfinished tasks already assigned to a server be given; then Equation (1) must be satisfied during the assignment process.
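As an illustration, the storage-feasibility check implied by Equation (1) can be sketched as follows. The names (task storage z_i, server capacity) are hypothetical, since the paper's symbols are not reproduced in the extracted text.

```python
# Hedged sketch of the feasibility check of Equation (1): a server can
# accept a new task only if the storage claimed by its unfinished tasks
# plus the new task's storage z_i stays within the server's capacity.

def can_accept(task_storage, assigned_storages, server_capacity):
    """Return True if the server can hold the new task's data."""
    return sum(assigned_storages) + task_storage <= server_capacity

# A server with 100 units of storage, already holding tasks of 30 and 40.
print(can_accept(25, [30, 40], 100))  # 95 <= 100 -> True
print(can_accept(40, [30, 40], 100))  # 110 > 100 -> False
```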

Workload balancing model
In order to better evaluate the load degree of this model, the usage of all MEC servers and the load assigned to a single server are discussed together. The usage rate u_j of the VM instances on the j-th MEC server is shown in Equation (2).
where V represents the number of virtual machine instances on the server, and the number of instances in use is given in Equation (3).
Then, the average utilization of virtual machine instances on all servers is represented by MSU as shown in Equation (4).
Then, the resource balance rate MSB of all servers is shown in Equation (5).
The load balancing rate L of the system can be obtained as shown in Equation (6).
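Since Equations (2)-(6) are not reproduced in the text, the following is a minimal Python sketch of the load metrics under stated assumptions: MSU is taken as the mean of the per-server VM utilizations, and MSB as their standard deviation; the paper's exact formulas may differ.

```python
import statistics

def vm_utilization(vms_in_use, vms_total):
    # Eq. (2), assumed form: fraction of a server's VM instances in use.
    return vms_in_use / vms_total

def load_metrics(utilizations):
    msu = statistics.mean(utilizations)    # Eq. (4): average utilization MSU
    msb = statistics.pstdev(utilizations)  # Eq. (5), assumed: dispersion MSB
    return msu, msb

# Three servers, each with 8 VM instances, using 4, 6 and 2 of them.
u = [vm_utilization(used, 8) for used in (4, 6, 2)]
msu, msb = load_metrics(u)
print(round(msu, 3), round(msb, 3))  # -> 0.5 0.204
```

A smaller MSB means the servers' utilizations are closer together, i.e. the system is better balanced.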

Time consumption model
Offloading delay is the most important criterion in this model, and its size directly affects the user experience. In this model, the offloading delay consists of four parts: the upload delay, the processing delay, the feedback delay and the vehicle-to-vehicle transmission delay. The upload delay of task i is expressed in Equation (7).
where r_V2E is the byte transfer rate from the vehicle to the ECD (V2E). The processing delay of a task is directly related to the number of virtual machine instances on the server and the computing resources required by the task. The processing delay of task i is expressed in Equation (8).
After the task processing is completed, the result needs to be sent back to the vehicle, so the feedback delay of task i is expressed in Equation (9).
Then, the offloading delay T1(i) of task i is expressed in Equation (10).
When a vehicle drives along the road, it may pass through two or more server areas, and determining the execution server of a task becomes the focus of the model. If a vehicle in the handover area needs to offload a task to the server of the next region, the task is transmitted by a vehicle in the next region. The vector group H = {h_1, h_2, ..., h_M} indicates whether each task will be offloaded to another server, where h_i is defined in Equation (11). The vehicle-to-vehicle transmission delay is given in Equation (12), where k is the number of vehicle-to-vehicle transmissions; the offloading delay T2(i) of task i is then shown in Equation (13).
Then, the total offloading delay T can be expressed as Equation (14).
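The four delay components above can be sketched as follows. Rates, sizes and the per-hop V2V cost are illustrative assumptions, not the paper's exact Equations (7)-(14).

```python
# Hedged sketch of the per-task offloading delay: upload (Eq. 7),
# processing (Eq. 8), feedback (Eq. 9) and, for cross-region tasks,
# k vehicle-to-vehicle relay hops (Eqs. 11-13). All names are illustrative.

def offload_delay(data_size, result_size, cycles_needed, vm_capacity,
                  r_v2e, r_v2v=None, v2v_hops=0):
    t_up = data_size / r_v2e               # upload to the edge server (V2E)
    t_proc = cycles_needed / vm_capacity   # execution on the assigned VMs
    t_back = result_size / r_v2e           # feedback of the result
    # Assumed per-hop cost: retransmitting the task data over each V2V hop.
    t_v2v = (data_size / r_v2v) * v2v_hops if v2v_hops else 0.0
    return t_up + t_proc + t_back + t_v2v

# The same task served locally vs. relayed across two regions.
print(offload_delay(8.0, 1.0, 20.0, 10.0, r_v2e=4.0))                        # 4.25
print(offload_delay(8.0, 1.0, 20.0, 10.0, r_v2e=4.0, r_v2v=2.0, v2v_hops=2))  # 12.25
```

The second call shows the trade-off the paper exploits: V2V relaying adds transmission delay, which is only worthwhile if the target server's shorter queue reduces the processing delay by more.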

Energy consumption model
The server energy consumption is divided into three parts: the base energy consumption of the server, the energy consumption of virtual machine instances in use, and the energy consumption of idle virtual machine instances. The power consumption of a server mainly depends on the power of the server and its virtual machines and on the task processing time, and is expressed in Equation (15).
Then the basic energy consumption E_base of all servers is expressed in Equation (16).
where P is the power of the server. The energy consumption of the virtual machine instances in use, S, is shown in Equation (17), and that of the idle instances in Equation (18), where P_idle is the power of an idle virtual machine instance. The total energy consumption E of all servers is then shown in Equation (19).
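The three-part energy model can be sketched as follows. The power values, times and the assumption that idle instances draw power for the whole server run time are illustrative; the paper's exact Equations (15)-(19) are not reproduced in the text.

```python
# Hedged sketch of the per-server energy model of Equations (15)-(19):
# total energy = base server energy + energy of in-use VM instances
# + energy of idle VM instances. All parameter names are illustrative.

def server_energy(run_time, p_base, busy_vm_times, p_vm, idle_vm_count, p_idle):
    e_base = p_base * run_time                  # Eq. (16): base consumption
    e_busy = p_vm * sum(busy_vm_times)          # Eq. (17): in-use instances
    e_idle = p_idle * idle_vm_count * run_time  # Eq. (18): idle instances
    return e_base + e_busy + e_idle             # Eq. (19): per-server total

# One server running 10 s with two busy VMs (8 s and 5 s of work) and 3 idle VMs.
print(server_energy(10.0, 50.0, [8.0, 5.0], 20.0, 3, 5.0))  # -> 910.0
```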

Optimization model
The model designed for the Internet of Vehicles scenario in this paper takes minimizing the delay as the main goal, and ensures the load balance of each server while reducing the total energy consumption of the servers. Therefore, the problem to be solved in this paper is defined as follows. Equations (21) and (22) indicate that a server shall not be overloaded during task assignment, and that the number of allocated tasks shall not exceed the number of idle VM instances on the server.
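Since Equations (20)-(22) are not reproduced in the extracted text, the following is a hedged sketch of the optimization model in terms of the quantities defined earlier: the total offloading delay T (Equation 14), the total server energy consumption E (Equation 19) and the load balancing rate L (Equation 6), all functions of the offloading decision X.

```latex
% Hedged reconstruction: symbols follow the earlier models; the exact
% form of Equations (20)-(22) in the paper may differ.
\min_{X}\ \bigl(\, T(X),\ E(X),\ L(X) \,\bigr)
\quad \text{s.t.} \quad
\text{Eq.\,(1): storage capacity,}\quad
\text{Eq.\,(21): no server overload,}\quad
\text{Eq.\,(22): allocated tasks} \le \text{idle VM instances.}
```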

Offloading decision scheme based on multi-objective immune algorithm
The immune algorithm is inspired by the biological immune system; it is a heuristic random search algorithm that combines deterministic and random selection. In the immune algorithm, the antigen represents the problem to be optimized, each antibody represents a feasible solution, and each gene represents the computation offloading strategy of one task. In this paper, decimal encoding is adopted, and the offloading decision is expressed as the vector group X = (x_1, x_2, ..., x_m), 1 <= x_i <= N, where each gene bit represents the offload target server of a task. The affinity function is used to evaluate the fitness of antibodies to the antigen, that is, the quality of the antibody's solution. The affinity function in this paper consists of three parts: delay, energy consumption and load balance, as shown in Equations (6), (14) and (19). Constraints (21) and (22) must be satisfied by every feasible antibody.
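The decimal encoding and one possible immune-style generation can be sketched as follows. The cloning and mutation operators and the toy single-objective affinity below are illustrative assumptions for readability; they are not the paper's actual MOIA operators, which optimize the three objectives jointly.

```python
import random

# Hedged sketch: an antibody X = (x_1, ..., x_m) assigns each of m tasks
# to one of N servers. The toy affinity rewards balanced assignments;
# the paper's affinity combines delay, energy and load balance instead.

def random_antibody(m, n, rng):
    return [rng.randrange(n) for _ in range(m)]

def mutate(antibody, n, rate, rng):
    # Reassign each gene (a task's target server) with probability `rate`.
    return [rng.randrange(n) if rng.random() < rate else g for g in antibody]

def evolve(pop, n, affinity, rng, clones=3, rate=0.2):
    # Clone the better half, mutate the clones, keep the best (elitism).
    ranked = sorted(pop, key=affinity)
    parents = ranked[:len(pop) // 2]
    offspring = [mutate(a, n, rate, rng) for a in parents for _ in range(clones)]
    return sorted(pop + offspring, key=affinity)[:len(pop)]

rng = random.Random(0)
n_servers = 4

def affinity(x):
    # Lower is better: spread between most- and least-loaded server.
    counts = [x.count(s) for s in range(n_servers)]
    return max(counts) - min(counts)

pop = [random_antibody(10, n_servers, rng) for _ in range(20)]
best0 = min(affinity(a) for a in pop)
for _ in range(30):
    pop = evolve(pop, n_servers, affinity, rng)
print(affinity(pop[0]) <= best0)  # elitist selection never worsens the best
```

Because each generation keeps the best antibodies of the combined parent and clone pool, the best affinity is non-increasing, which is the property the check at the end verifies.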

Performance evaluation
In order to verify that the proposed MOIA-based scheme can effectively reduce the offloading delay of mobile users, ensure server load balance, and effectively reduce server energy consumption, this paper carries out simulation experiments in MATLAB. The effectiveness of MOIA is first verified by comparing it with the traditional NSGA-III; the scheme using MOIA is then compared with the other schemes. The offloading schemes are described as follows: • Benchmark: all tasks are offloaded to the nearest server. If, during the offloading process, a task is found to require more resources than the nearest server has, it is offloaded to the next-nearest server, and the process is repeated until all tasks have been assigned.
• Computation offloading method based on cloud-edge computing (MOC): in literature [3], a solution algorithm based on SAW and MCDM is designed to reduce time and energy consumption and maintain system stability by managing the resources of the IoV-CEC system; NSGA-III is used to realize the multi-objective optimization.
In this paper, load balancing, time cost and energy consumption are taken as the main evaluation indexes, and MOIA, Benchmark and MOC are compared in detail under the same experimental settings. The detailed simulation parameters used in this experiment are shown in Table 1.

Table 1
Parameter settings (parameter descriptions and values; table content not recoverable from the extracted text).

Evaluation of algorithm effectiveness
In this section, to prove the effectiveness of the proposed algorithm, the MOIA, which improves NSGA-III, is compared with the traditional NSGA-III. The Pareto frontier was obtained by running the traditional NSGA-III 100 times, and a group of representative experimental results is presented here. Figures 4, 5 and 6 illustrate the non-dominated solutions. Figure 4 illustrates the relationship between task offloading delay and energy consumption: in all schemes the two are roughly inversely related, that is, energy consumption trends upward as delay is reduced. Figure 5 illustrates the relationship between load balancing and energy consumption: as the load balancing rate decreases, energy consumption also decreases. Figure 6 illustrates the relationship between task offloading delay and load balancing: as load balancing is optimized, delay also shows a decreasing trend. According to Figures 4, 5 and 6, the MOIA designed in this paper performs better in multi-peak search than the traditional NSGA-III.

Comparison of load balancing on the server
Load balancing is an important index of the server. This paper estimates the load balancing of the system by discussing the load of a single server and of all servers. Figure 5 shows the load of MOIA, Benchmark and MOC; the scheme using MOIA distributes tasks more evenly across the servers than Benchmark and MOC.
As the number of vehicles increases, the load balancing performance of the scheme using MOIA improves compared with the Benchmark and MOC schemes.

Comparison of server energy consumption
Energy consumption consists of three parts: the server base energy consumption, the energy consumption of virtual machine instances in use, and that of idle virtual machine instances. The energy consumption of the servers in these three aspects is shown in Figures 6a, 6b and 6c. In Figure 6a, it can be seen that, under the reasonable allocation of the algorithm, the total running time of each server is reduced and the base energy consumption decreases accordingly. In Figure 6b, because MOIA trades additional transmission delay for reduced task processing delay, the energy consumed by the virtual machines processing tasks decreases. In Figure 6c, when the number of tasks is small, the load on each server is small and the effect of MOIA is not obvious; as the number of tasks increases, the scheme using MOIA shows a clear advantage over the other schemes.

Comparison of task unload delays
The offloading delay of a task is the most important criterion. This paper divides the total offloading delay into four parts: upload delay, execution delay, return delay and V2V transmission delay, as shown in Figures 8a, 8b, 8c and 8d. The scheme proposed in this paper introduces vehicle-to-vehicle transmission and attempts to reduce the processing delay on the servers through vehicle-to-vehicle transmission while optimizing load balancing and energy consumption; it may therefore cost more transmission delay to transfer a task from a high-load server to a suitable MEC server. Figure 8a shows the processing time of tasks for different numbers of vehicles, and Figure 8d shows the corresponding transmission time. It can be seen that, compared with the other schemes, the scheme using MOIA has a higher transmission delay but a certain reduction in processing delay. Figure 9 shows the total offloading delay of each scheme: the scheme using MOIA has a slight increase in total delay compared with the other algorithms, but, considering energy consumption and load balancing together, it achieves significant optimization of both.

Conclusion
In this paper, the parallel offloading problem of multiple vehicles is discussed in the Internet of Vehicles scenario, and a MEC system model is designed. Based on this model, the goal is to minimize the mobile user delay and to reduce the total energy consumption of the servers as much as possible while maintaining server load balance. First, a multi-objective optimization function is constructed for the computation offloading problem; the idea of the immune algorithm is then used to improve the traditional NSGA-III, and extensive simulation experiments are carried out. The simulation results show that the proposed MOIA has better multi-peak optimization ability than the traditional NSGA-III, and that the designed offloading scheme can effectively reduce the offloading delay, allocate tasks reasonably to maintain the load balance of each server, and effectively reduce server energy consumption. In addition, compared with the benchmark scheme and the offloading scheme in literature [3], the MOIA-based offloading scheme proposed in this paper offloads tasks more efficiently, performs better, and has greater application value.
In future research, the proposed MOIA will be improved through reinforcement learning and applied to more complex models and scenarios.