An Application Deployment Approach for City IoT Applications in Resource-constrained Edge Computing Environments

Abstract: With the development and utilization of more and more city Internet of Things (IoT) applications with high resource requirements, how to reduce the consumption of energy, processor resources and bandwidth resources in resource-constrained edge clouds while guaranteeing the execution delay of these applications is an urgent problem. Therefore, an optimal energy-bandwidth tradeoff deployment approach for city IoT applications is proposed for resource-constrained edge clouds. In this approach, each city IoT application is first divided into multiple collaborative tasks that are offloaded to the edge clouds. Secondly, a joint optimization model covering energy consumption, resource wastage, resource load imbalance and bandwidth resource consumption is established for the task offloading scheme. Thirdly, the city IoT application deployment problem is optimized under resource and execution-delay constraints. Finally, a comprehensive simulation study analyzes the deployment approaches in terms of performance and effectiveness. The experimental results show that our deployment approach is superior to other related approaches.


Introduction
With the rapid development of mobile networks and the wide application of the Internet of Things (IoT) in intelligent manufacturing, smart homes, smart transportation and other fields, IoT devices such as smartphones, tablets, drones and wearable devices are becoming more and more popular. In 2018, Cisco predicted that the number of global IoT devices would grow from 8.6 billion in 2017 to 12.3 billion in 2022, among which more than 422 million IoT devices and connections would adopt 5G [1]. However, these IoT devices are often limited in computing power, storage space, energy capacity, environmental awareness and other resources, so complex computing tasks such as optical character recognition, face recognition and augmented reality are inefficient to execute locally [2]. In addition, the diversity of latency-sensitive and resource-intensive city IoT applications often results in a variety of different computing and communication costs [3].
To alleviate the limitations of mobile computing power, mobile edge computing has emerged as a new computing paradigm that utilizes resources near IoT devices to provide timely services in conjunction with cloud servers. In mobile edge computing, a resource-intensive and latency-sensitive city IoT application from the edge of the mobile network is usually broken down into a series of tasks, which can be independently designed, developed, deployed and operated [4]. These mutually coordinating tasks are offloaded onto the edge clouds or the remote cloud [5] and handed over to a cluster consisting of multiple virtual machines or containers for collaborative processing. In this way, IoT devices reduce their energy consumption and speed up computation, which also makes it possible to run emerging city IoT applications on IoT devices. In the process of computation offloading, the multiple tasks split from a city IoT application need to be assigned to the best computing nodes for collaborative processing by multiple virtual machines or containers. Meanwhile, given the consumption of energy, processor resources and bandwidth resources by these tasks under the required execution delay, which deployment approach should be adopted to optimally allocate these tasks to the edge clouds has become an urgent problem.
Compared with the resources in the remote cloud, the resources in the edge cloud are usually: (1) resource-constrained: computing resources are limited, because edge nodes have small processors and limited power budgets; (2) heterogeneous: processors have different architectures; (3) dynamic: workloads change and city IoT applications compete for the limited resources. These factors make various resources in the edge cloud (such as CPU, storage and bandwidth) scarcer and more expensive than those in the remote cloud [6]. Therefore, the cost of renting edge servers becomes a major challenge for application operators, who rent many edge servers and deploy massive numbers of tasks to provide a better user experience. According to RightScale's report [7], 26% of enterprises with more than 1,000 employees spend more than $6 million annually on public clouds, but 35% of this cloud spending is wasted, since users tend to overestimate resource consumption. In addition, application operators have specific requirements for their applications, namely maintaining certain key performance indicators [8], such as average response time. Therefore, it is necessary to study how to find the optimal deployment scheme for a city IoT application while ensuring its actual performance indicators, such as execution delay, energy consumption, processor resource consumption and bandwidth resource consumption.
To solve the above problem, this paper studies the optimal task deployment problem for city IoT applications deployed to edge clouds. By considering the impact of different task deployment schemes on the execution delay and the system utility of edge clouds, the joint optimization of energy consumption, processor resource consumption and bandwidth resource consumption under a specified execution delay is analyzed and discussed. The main contributions are as follows:
• The optimal deployment of city IoT applications is formulated as three sub-problems of energy consumption, processor resource consumption and bandwidth resource consumption, and solved using models of energy consumption, resource wastage, resource load imbalance and bandwidth resource consumption under resource and execution-delay constraints.
• On the basis of the above models, a joint optimization model is established to measure the consumption of energy, processor resources and bandwidth resources of a city IoT application deployment scheme. Then, a Differential Evolution [9] based Optimization Deployment of City IoT Application (DEODCA) approach is introduced to minimize the consumption of energy, processor resources and bandwidth resources in the edge clouds under resource and execution-delay constraints.
• We establish a city mobile edge computing system model to comprehensively evaluate the performance and effectiveness of our proposed deployment approach for city IoT applications.
The rest of this paper is organized as follows. Section 2 introduces the related work. Section 3 introduces the system model. Section 4 proposes the deployment model of city IoT applications. Section 5 presents the design and implementation of the city IoT application deployment approach. Section 6 provides the performance evaluation, including experimental parameter settings, comparison results and a parameter study. Finally, Section 7 concludes this paper with future work recommendations.

Related Work
In recent years, many scholars have carried out research on IoT application deployment in mobile edge computing and achieved many results. For example, Tong et al. [10] designed edge clouds as a tree hierarchy of regionally distributed edge servers to improve cloud resource utilization efficiency, and proposed a workload allocation algorithm to assign mobile applications to edge servers. Tan et al. [11] proposed a general model for distributing and scheduling jobs to minimize the weighted response time of all jobs, and then proposed an online distribution and scheduling algorithm with extensibility in the speed augmentation model. Meng et al. [12] jointly considered the management of network bandwidth and computing resources to maximize the number of deadlines met, and then proposed an online algorithm that greedily schedules newly arrived tasks to meet their deadlines. Wu et al. [13] proposed a heuristic algorithm for service selection in mobile edge computing systems to decide how to allocate the composite services of service requests to edge servers and cloud servers so as to reduce the execution delay. Chen et al. [14] proposed a data-intensive service edge deployment scheme based on a genetic algorithm to minimize the response time of data-intensive service deployment under storage constraints and load balancing. Bahreini et al. [15] proposed a mixed integer linear programming formulation to solve the multi-component application placement problem under dynamic user location and network capability, thus minimizing the cost incurred when running the application. Considering that multiple mobile user devices simultaneously offload their tasks to the mobile edge clouds, Chen et al. [16] solved the multi-user, multi-task computation offloading problem of green mobile edge computing by introducing centralized and distributed greedy maximal scheduling algorithms. Dai et al. [17] jointly optimized user association and computation offloading by developing an offloading algorithm that considers computing resource allocation and transmission power allocation to minimize the overall energy consumption. Chen et al. [18] proposed an energy-efficient dynamic offloading algorithm to minimize the energy consumption of task offloading while guaranteeing the average queue length. The above studies [10]-[18] basically take a single indicator as the optimization objective, so it is difficult for them to reduce the system utility and network resource consumption of the mobile edge computing environment while ensuring the execution delay.
At present, there is also considerable research on the multi-objective optimization deployment of IoT applications. For example, Chen et al. [19] proposed a distributed computation offloading algorithm based on game theory, which models the communication and computing costs during offloading, to solve the multi-user computation offloading problem under multi-channel wireless interference in mobile edge computing in a competitive environment. Chen et al. [20] formulated data-intensive application deployment strategies to minimize the delay of mobile devices and the monetary cost of application service providers under data transfer among mobile devices, edge servers and clouds, user movement, and changing load balancing conditions. Deng et al. [7] proposed an approach to generate an appropriate deployment scheme at minimum cost under the on-demand billing model, given the resource constraints of edge servers, the business logic of applications and the average response time of applications. Hu et al. [21] proposed an approximately optimal service allocation strategy that meets the constraints of edge server resources and bandwidth to find the tradeoff between average network delay and load balancing. Wu et al. [22] formalized the mixed task assignment problem of mobile edge computing as a multi-objective optimization problem, and then proposed an efficient offloading framework with intelligent decision-making ability to jointly minimize the system utility and bandwidth allocation of each mobile device. Pallewatta et al. [23] proposed an IoT application placement strategy that utilizes the independent deployability and scalability of microservices to minimize latency and network utilization. Zhang et al. [24] proposed an adaptive task offloading algorithm to optimize and balance the energy consumption of terminal devices and the overall task execution time. Goudarzi et al. [25] proposed a placement decision approach for parallel batches of IoT applications based on the MEME algorithm to minimize the execution time and energy consumption of IoT applications in a computing environment with multiple IoT devices, multiple fog/edge servers and cloud servers. Cheng et al. [26] studied task allocation algorithms in data-sharing mobile edge computing systems, and proposed three algorithms to reduce the delay and energy consumption needed to process global tasks and separable tasks, respectively. Zhang et al. [27] studied the multi-objective resource allocation problem of multi-user mobile edge computing systems and proposed a low-complexity solution scheme based on an improved Newton method and the concept of offloading priority to minimize task execution delay and device energy consumption. Tao et al. [28] proposed a task offloading algorithm to minimize energy consumption under resource capacity and delay constraints and meet mobile users' demands for low-energy, high-performance task execution. Guo et al. [29] studied the optimal energy-saving resource allocation for multi-user mobile edge computing systems with inelastic computing tasks and non-negligible task execution time, and proposed a low-complexity algorithm that obtains a suboptimal solution by casting the optimization problem as a three-stage pipeline scheduling problem and using the Johnson algorithm and convex optimization techniques. The above studies [7], [19]-[29] all solve their joint optimization problems through corresponding computation offloading algorithms. However, their drawback is that the designed cost does not take into account the bandwidth resource consumption between tasks deployed on different edge servers or the energy consumption of the edge clouds.
Based on the above analysis, this paper studies the optimal deployment of city IoT applications in resource-constrained edge clouds: when the multiple collaborative tasks that make up the city IoT applications are offloaded to the resource-constrained edge clouds, how to minimize the consumption of energy, processor resources and bandwidth resources in the edge clouds under resource and execution-delay constraints.

System Model
Mobile edge computing provides cloud services by pushing cloud resources (e.g., computing, network and storage) to the edge of the mobile network. Through the integration of wireless networks and applications, the traditional radio access network gains intelligence, application localization and close-to-user deployment, and provides high-bandwidth, low-latency transmission capacity. The mobile edge computing system model is shown in Fig. 1, and consists of: (1) IoT devices issuing application offloading requests; (2) wireless cellular base stations connecting IoT devices to the edge clouds; (3) edge clouds and a remote cloud providing cloud services; (4) edge servers and cloud servers accommodating the virtual machines (VMs) or containers (As); (5) tasks (Ts) assigned to the virtual machines or containers. When an IoT device makes an application offloading request, a virtual machine or container on an edge server or cloud server assists the device in handling the tasks offloaded onto it, and feeds the result back to the IoT device. Considering that factors such as high-rise buildings in the city strongly interfere with wireless signals, all edge clouds are connected by a fiber optic backhaul network based on a full network topology [30]. Furthermore, the propagation delay between edge clouds is load-independent. Each edge cloud is endowed with certain computing and storage capabilities by deploying heterogeneous edge servers interconnected by switches, and it can receive, process and forward the offloading requests from IoT devices via a wireless cellular base station. Since IoT devices appear randomly and generate city IoT applications in arbitrary order and at arbitrary times, the number of offloading requests for city IoT applications varies over time [31]. Meanwhile, each city IoT application usually has a certain deadline and is modeled as a directed acyclic graph reflecting the task dependencies. Therefore, application service providers can rent cellular base stations from communication facility providers to deploy and handle these collaborative tasks.

Deployment Model of City IoT Application
This section first introduces the resource wastage model, resource load imbalance model, energy consumption model and bandwidth resource consumption model of city IoT application deployment scheme; and then proposes a joint optimization objective function for the city IoT application deployment.

Resource wastage model
Since the edge cloud has scarce and expensive processing resources compared with the remote cloud, how to maximize the resource utilization ratio of the edge servers is a key concern when deploying city IoT applications to edge clouds. For each edge server, the utilization of a certain resource such as CPU, memory, disk or bandwidth is the ratio of the used amount to the total amount of that resource [32], as shown in formulation (1):

U_j = (Σ_{i=1}^{N} z_{ij} R_i) / C_j  (1)

The total resource utilization of the edge clouds can be defined as the average utilization of each resource type over all started edge servers. This mean value reflects the idle resources in the edge clouds: the larger the mean value, the fewer idle resources remain, and vice versa. Therefore, the idle resources (i.e., resource wastage) W in the edge clouds can be calculated by formulation (2):

W = (1 / M') Σ_{j=1}^{M'} (1 - U_j)  (2)

where M, M' and N respectively represent the total number of edge servers, the number of started edge servers, and the number of virtual machines or containers; U_j represents the utilization of one resource in the j-th edge server; R_i represents the demand of the i-th virtual machine or container for that resource; C_j represents the total amount of that resource owned by the j-th edge server; the binary variable z_{ij} indicates whether the i-th virtual machine or container is assigned to the j-th edge server, that is, z_{ij} = 1 if so, and z_{ij} = 0 otherwise.
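The utilization and wastage computations above can be sketched for a single resource type as follows. This is a minimal illustration; the function names and data layout are assumptions, not part of the paper.

```python
def utilization(assigned_vms, demands, capacity):
    # assigned_vms: indices of VMs/containers placed on this edge server
    # demands[i]: resource demand R_i of the i-th VM/container
    # capacity: total amount C_j of this resource on the server
    return sum(demands[i] for i in assigned_vms) / capacity

def resource_wastage(assignments, demands, capacities):
    # assignments[j]: VMs placed on the j-th *started* server (M' entries)
    utils = [utilization(vms, demands, cap)
             for vms, cap in zip(assignments, capacities)]
    # Wastage W: average idle fraction over the M' started servers
    return sum(1.0 - u for u in utils) / len(utils)
```

For multiple resource types the same computation would be repeated per type and averaged, matching the "average utilization of each resource type" definition in the text.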

Resource load imbalance model
While improving the utilization of the various resources of the edge servers, the load balancing degree across these edge servers should also be taken into account. The resource load imbalance level IB of the edge clouds can be obtained by averaging the load imbalance levels of the various resource types [32], as shown in formulation (3),
where Ω represents the set of resource types of the edge servers, such as CPU, bandwidth, memory and disk.
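A minimal sketch of how IB might be computed, assuming (as one common choice, not stated explicitly in the paper) that the imbalance level of a single resource type is the standard deviation of that resource's utilization across the started servers:

```python
from statistics import mean, pstdev

def load_imbalance(util_by_resource):
    # util_by_resource: dict mapping a resource type in the set Ω
    # (e.g. 'cpu', 'mem', 'disk', 'bw') to the list of that resource's
    # utilizations over the started edge servers.
    # Assumed per-type imbalance measure: population standard deviation;
    # IB averages it over all resource types in Ω.
    return mean(pstdev(utils) for utils in util_by_resource.values())
```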

Energy consumption model
Relevant studies [33], [34] show that the CPU is the dominant energy-consuming component, and propose that the energy consumption of an edge server is linearly, or piecewise linearly, related to its CPU utilization. However, other studies show that the energy consumption of edge servers depends on the combined utilization of CPU, memory, disk and network interface; the energy consumption models of different types of edge servers differ, and a simple linear model is not applicable to many new server models [35]. According to [36], the fitting error of a polynomial model is smaller than that of linear and piecewise linear models. Therefore, the quadratic polynomial energy consumption model adopted here is more suitable for actual edge servers, as shown in formulation (4):

P_j = P_j^{idle} + α_1 U_j + α_2 U_j^2  (4)

where α_1 and α_2 represent positive fitted polynomial coefficients, and P_j^{idle} represents the power consumed when the j-th edge server is started without any load.

Bandwidth resource consumption model
Since a city IoT application is split into multiple collaborative tasks that are offloaded onto the edge servers and handed over to a cluster composed of multiple virtual machines or containers for collaborative processing, there is communication between the virtual machines or containers in the same cluster. Therefore, the bandwidth resources and execution time required by the cluster to process a city IoT application are directly related to the placement of its virtual machines or containers in the edge clouds: when two virtual machines or containers of the same cluster reside on the same edge server, their communication consumes no bandwidth resources or transmission time; otherwise, it consumes a certain amount of both. Therefore, the bandwidth resources BW consumed by the edge servers in processing a batch of city IoT applications can be calculated by formulation (5).
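The idea behind formulation (5) can be sketched as follows: only traffic between virtual machines placed on different edge servers counts toward BW. The data layout and names are assumptions for illustration:

```python
def bandwidth_consumed(placement, traffic):
    # placement[i]: edge server hosting the i-th VM/container
    # traffic[(l, t)]: amount of data the l-th VM sends to the t-th VM
    # Pairs co-located on the same server communicate locally and
    # consume no backhaul bandwidth.
    return sum(bw for (l, t), bw in traffic.items()
               if placement[l] != placement[t])
```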

Joint optimization formulation
When a batch of city IoT applications is offloaded onto the edge clouds, the key problem is how to choose a city IoT application deployment approach that improves the system utility and bandwidth resource utilization of the edge clouds under a given execution delay. Since the heterogeneous virtual machines or containers that handle these city IoT applications are assigned to different heterogeneous edge servers [37], different deployment schemes affect the system utility and bandwidth resource consumption of the whole edge computing system differently. To reconcile these optimization objectives, an optimal deployment scheme must be found by the proposed city IoT application deployment approach, so that the resource wastage, resource load imbalance, energy consumption and bandwidth resource consumption of the edge clouds are minimized under resource and execution-delay constraints. The joint optimization objective function of the optimization phase can be expressed by formulation (6). Formulation (7) represents the execution-delay constraint, that is, the total execution time required by the edge servers to process a batch of city IoT applications should be less than the preset threshold L; data_{lt} represents the amount of data that the l-th virtual machine or container sends to the t-th virtual machine or container.
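A sketch of how the joint objective and delay constraint fit together. The weighted-sum form and the equal default weights are illustrative assumptions; the paper's exact formulation (6) combines the same four costs but its weighting is not reproduced here:

```python
def joint_objective(W, IB, P, BW, weights=(0.25, 0.25, 0.25, 0.25)):
    # Weighted sum of the four (normalized) costs: resource wastage W,
    # load imbalance IB, energy consumption P, bandwidth consumption BW.
    # Equal weights are an assumption for illustration.
    w1, w2, w3, w4 = weights
    return w1 * W + w2 * IB + w3 * P + w4 * BW

def feasible(exec_time, L):
    # Execution-delay constraint: the total execution time for a batch
    # of city IoT applications must not exceed the preset threshold L.
    return exec_time <= L
```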

Design of City IoT Application Deployment Approach
This section introduces the implementation details of the DEODCA approach. First, the DE algorithm is briefly introduced; then the improvements to the DE algorithm are proposed; finally, the detailed implementation of the DEODCA approach is given.

DE algorithm
The DE algorithm is a heuristic random search algorithm based on population differences, proposed by Storn and Price in 1995 to solve Chebyshev polynomials [9]. Unlike other common evolutionary algorithms, it uses an arithmetic operator on the difference between individuals to perturb the internal representation of an individual. The generated difference vector is then evaluated, and if its fitness value is better than that of the current individual, the newly generated vector replaces the current individual. Many different optimization strategies have been developed for the DE algorithm based on the number of disturbed individuals and weighted difference vectors [38]. To maintain population diversity, the DE/rand/1/bin strategy is adopted to randomly select the disturbed vectors; the three operations of mutation, crossover and selection are described below [9]. In the G-th generation, the population consists of NP D-dimensional parameter vectors x_{n,G} (n = 1, 2, ..., NP), and the population size remains unchanged during evolution.

1) Mutation operation
For each target vector x_{n,G}, three mutually different individuals x_{r1,G}, x_{r2,G} and x_{r3,G} (r1 ≠ r2 ≠ r3 ≠ n) are randomly selected, and the mutant vector is generated as shown in formulation (8):

v_{n,G+1} = x_{r1,G} + F (x_{r2,G} - x_{r3,G})  (8)

where F is the scaling factor.

2) Crossover operation
Binomial crossover mixes the mutant vector v_{n,G+1} with the target vector x_{n,G} to obtain the trial vector u_{n,G+1}, as shown in formulation (9):

u_{n,G+1}^d = v_{n,G+1}^d, if rand_d ≤ CR or d = d_rand; otherwise x_{n,G}^d  (9)

where rand_d is a uniform random number in [0, 1], CR is the crossover probability, and d_rand is a randomly chosen dimension that guarantees at least one component is inherited from the mutant vector.

3) Selection operation
Based on the first two operations, a greedy selection compares the fitness values f(·) of the trial vector and the corresponding parent, and the better one is saved into the (G+1)-th generation population, as shown in formulation (10):

x_{n,G+1} = u_{n,G+1}, if f(u_{n,G+1}) ≤ f(x_{n,G}); otherwise x_{n,G}  (10)
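The DE/rand/1/bin generation step described above can be sketched as follows (minimization assumed; a textbook sketch rather than the paper's Algorithm 1):

```python
import random

def de_step(pop, fitness, F=0.5, CR=0.9):
    # One generation of DE/rand/1/bin: mutation, binomial crossover,
    # and greedy selection on a population of real-valued vectors.
    NP, D = len(pop), len(pop[0])
    new_pop = []
    for n in range(NP):
        # mutation: pick three distinct individuals, all different from n
        r1, r2, r3 = random.sample([i for i in range(NP) if i != n], 3)
        v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
        # binomial crossover with a guaranteed dimension j_rand
        j_rand = random.randrange(D)
        u = [v[d] if (random.random() < CR or d == j_rand) else pop[n][d]
             for d in range(D)]
        # greedy selection: keep the better of trial u and parent x_n
        new_pop.append(u if fitness(u) < fitness(pop[n]) else pop[n])
    return new_pop
```

Because selection is greedy per individual, the best fitness in the population never worsens from one generation to the next.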

Improvement of DE algorithm
The essence of city IoT application deployment is to establish a reasonable mapping between the application task set and the edge server resources in the edge clouds under resource and execution-delay constraint rules, so as to minimize the joint optimization objective function. To this end, the mutation operation, crossover operation, selection operation and relevant control parameters of the DE algorithm need to be improved on the basis of a chromosome coding scheme, yielding an effective improved DE algorithm.

1) Chromosome coding scheme
According to the characteristics of city IoT application deployment, this paper adopts a concise and understandable real-number coding approach to encode chromosomes. In this encoding scheme, the chromosome length equals the total number of tasks in a batch of city IoT applications. Each gene position represents a task number, and its value represents the number of the edge server (in some edge cloud) to which the task is assigned; the resulting chromosome encoding scheme is shown in Fig. 2. As shown in Fig. 2, each edge cloud has Me heterogeneous edge servers; each city IoT application (APP) can consist of multiple tasks with different quantities and computing resource requirements, represented by a directed acyclic graph; when Q city IoT applications are deployed to a mobile edge computing environment consisting of E heterogeneous edge clouds, the edge servers in each edge cloud and the tasks in each city IoT application are both numbered from 1. Note that the total number of tasks, which is much larger than the number of edge servers in the edge clouds, determines the length of the chromosomes. The corresponding decoding can then be obtained from the encoding scheme in Fig. 2, as shown in formulation (11).
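Assuming (for illustration) that every edge cloud hosts the same number Me of consecutively numbered edge servers, the decoding of a chromosome into (task, edge cloud, local server) triples might look like this sketch of the idea behind formulation (11):

```python
def decode(chrom, servers_per_cloud):
    # chrom[k]: global edge-server number assigned to task k+1
    # (tasks and servers are numbered from 1, as in the paper)
    plan = []
    for task, s in enumerate(chrom, start=1):
        cloud = (s - 1) // servers_per_cloud + 1   # which edge cloud
        local = (s - 1) % servers_per_cloud + 1    # server index within it
        plan.append((task, cloud, local))
    return plan
```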

2) Improved algorithm
In order to solve the optimal deployment problem of city IoT applications, the standard DE algorithm must be improved via the chromosome coding scheme in Fig. 2. In the process of improvement, three control parameters (i.e., the population size NP, crossover probability CR and scaling factor F) need to be set. Considering that the setting strategies for F and CR include constant, random and adaptive ones, and that different strategies can strongly affect the convergence speed, diversity and search space of the algorithm [39], this paper exploits a variety of optimization strategies to improve the diversity and convergence speed of the algorithm and to avoid falling into local optima, so as to better solve the optimal deployment problem of city IoT applications.
Regarding the adaptive strategies for the control parameters F and CR, the value of F is positively related to the mutation search scope of the DE algorithm. F is gradually reduced as the algorithm runs, which preserves population diversity in the early stage and protects the optimal solutions in the later stage; the scaling factor F_n of individual x_n changes adaptively during the iteration, as shown in formulation (12). Given that a larger crossover probability CR increases the probability that individuals with low fitness enter the next generation, while a smaller CR benefits global search ability and population diversity, the crossover probability CR_n also changes adaptively, as shown in formulation (13), where f_max and f_min respectively represent the fitness of the worst and the best individuals in the current iteration population; f̄ represents the average fitness of the current population; f_n represents the fitness of individual x_n; CR_max and CR_min represent the maximum and minimum crossover probabilities; F_0 represents the initial scaling factor.
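One possible adaptive scheme matching the trends described above (F shrinks for fitter individuals, CR grows for weaker ones). The exact formulas (12) and (13) in the paper may differ; this sketch only illustrates the stated behavior:

```python
def adaptive_params(f_n, f_min, f_max, F0, CR_min, CR_max):
    # f_n: fitness of individual x_n (minimization: smaller is fitter)
    # f_min / f_max: best / worst fitness in the current population
    # Assumed scheme: linear interpolation on fitness rank.
    if f_max == f_min:
        return F0, CR_min
    ratio = (f_n - f_min) / (f_max - f_min)    # 0 = best, 1 = worst
    F_n = F0 * (0.5 + 0.5 * ratio)             # fitter: smaller step
    CR_n = CR_min + (CR_max - CR_min) * ratio  # weaker: more crossover
    return F_n, CR_n
```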
During the run of the algorithm, the mutation operation produces the mutant vector v_{n,G+1}, while the crossover and selection operations proceed as in the standard DE algorithm with the improved control parameters. Based on the above encoding scheme, the DEODCA approach is designed by applying these improvements to the standard DE algorithm to solve the optimal deployment problem of city IoT applications. The implementation of the DEODCA approach is shown in Algorithm 1.
As described in Algorithm 1, lines 1~2 initialize the system parameters such as the population size NP, chromosome length D, maximum number of iterations Gen_max, maximum crossover probability CR_max, minimum crossover probability CR_min, initial scaling factor F_0, the number of edge clouds E, and the number of edge servers Me in each edge cloud.

3) Complexity Analysis
In this section, we analyze the complexity of Algorithm 1, which mainly includes three parts, i.e., the mutation operation, crossover operation and selection operation. As before, the population size and the dimension of the individual vectors are set to NP and D, respectively. Each of the three operations takes O(NP·D) time per generation, so the overall complexity of Algorithm 1 is O(Gen_max·NP·D).

Performance Evaluation
This section first introduces the setting of the simulation experiment environment, then compares the DEODCA approach with some related approaches, and finally evaluates and analyzes these approaches from two aspects of performance and effectiveness.

Experimental setup
We set up a city mobile edge computing experimental environment with 25 edge clouds and 20 IoT devices by extending the CloudSim simulator [40], [41]. In this environment, each edge cloud includes a base station interconnected with the other base stations via the full-network-topology optical fiber backhaul network, a certain number of heterogeneous edge servers, and multiple IoT devices connected via wireless access. The configuration parameters of each edge server are randomly selected from the set {HP ProLiant G4, HP ProLiant G5} [42], and the servers are deployed across the edge clouds. When the 20 IoT devices appear at some point and produce a batch of city IoT applications, each city IoT application is split into three collaborative tasks, which are assigned to 60 heterogeneous virtual machines according to task size.
The bandwidth requirement of each virtual machine is randomly selected from the range [10, 50] Mbps; the amount of data sent by each virtual machine is randomly selected from the range [1, 2] Mb; the CPU and memory requirements of each virtual machine are randomly selected from the set {2000 MIPS and 3.75 GB, 500 MIPS and 0.6 GB, 1000 MIPS and 1.7 GB, 2500 MIPS and 0.85 GB} [42]; the disk requirement of each virtual machine is set to 1 GB. When started without any load, the power of the HP ProLiant G4 and HP ProLiant G5 edge servers is 86 W and 93.7 W, respectively [42]. To evaluate the performance and effectiveness of the DEODCA approach, we compare it with the following benchmark approaches.
• Random Deployment (RD): When multiple candidate edge servers meet the resource constraints, an edge server is randomly selected to run each virtual machine.
• First Fit Deployment (FFD): When several candidate edge servers meet the constraints, the first edge server that satisfies the resource requirements is selected to run each virtual machine.
• Particle Swarm Optimization (PSO): When a number of candidate edge servers meet the resource constraints, the edge server to run each virtual machine is selected via the particle swarm optimization algorithm [43].
• Multi-objective Grouping Genetic Algorithm (MGGA): When a number of candidate edge servers meet the resource constraints, the edge server to run each virtual machine is selected via the multi-objective grouping genetic algorithm [44].
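As a point of comparison, the FFD baseline described above can be sketched as a simple single-resource first-fit placement (the data layout is an illustrative assumption):

```python
def first_fit(vm_demands, server_caps):
    # FFD baseline: place each VM on the first server whose remaining
    # capacity satisfies its resource demand; return the placement list.
    remaining = list(server_caps)
    placement = []
    for d in vm_demands:
        for j, cap in enumerate(remaining):
            if d <= cap:
                remaining[j] -= d
                placement.append(j)
                break
        else:
            placement.append(None)  # no server can host this VM
    return placement
```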

Experimental results and evaluation
We first compared the resource wastage level, resource load imbalance level, energy consumption and bandwidth resource consumption of the edge clouds when processing a batch of city IoT applications under the specified execution delay, and then studied the relevant experimental parameters, including the number of base stations and the number of city IoT applications.

1) Performance analysis
As shown in Figs. 3 to 7, since the RD approach randomly deploys the virtual machines that host the tasks, its resource wastage level and resource load imbalance level are the highest among the compared approaches. The FFD approach uses first fit to deploy these virtual machines, that is, the next edge server is selected to host the remaining virtual machines only when the first suitable edge server cannot accommodate them, so the resources of a selected edge server can be fully utilized. MGGA and PSO both exploit intelligent optimization algorithms to deploy the virtual machines hosting these tasks, but they still show higher resource wastage and resource load imbalance levels than the DEODCA approach. For the same reasons, the energy consumption decreases in the order RD, FFD, MGGA, PSO, DEODCA. Given that the tasks deployed on a virtual machine need to communicate with other tasks in the same cluster, the bandwidth resources consumed by the deployment solutions of these five approaches and the execution time for processing a batch of city IoT applications also decrease in the same order. Since the execution time of the RD approach exceeds the execution-delay threshold, it is not recommended for the deployment of city IoT applications.

2) Parameter study
In view of the above approaches, we further analyzed the impact of the number of base stations and the number of city IoT applications on the resource wastage level, resource load imbalance level, energy consumption and bandwidth resource consumption of the edge clouds, as shown in Fig.8 and Fig.9.
(1) Effect of the number of base stations
Fig. 8 shows the impact of the number of base stations on all approaches. To make this effect clearer, the number of city IoT applications and virtual machines was set to 20 and 60, respectively; the number of base stations varied from 15 to 30 in steps of 5, and the number of edge servers changed accordingly to 76, 101, 127 and 152. The figures show that although the resource wastage level, resource load imbalance level, energy consumption and bandwidth resource consumption fluctuate as the number of base stations increases, the fluctuations are small enough that the number of base stations can be considered to have no significant effect. Among the five approaches, the DEODCA approach always has the lowest resource wastage level, resource load imbalance level, energy consumption and bandwidth resource consumption.
(2) Effect of the number of city IoT applications
Fig. 9 shows the impact of the number of city IoT applications on all approaches. To make this effect clearer, the number of base stations and edge servers was set to 25 and 127, respectively; the number of city IoT applications varied from 5 to 20 in steps of 5, and the number of virtual machines changed accordingly to 15, 30, 45 and 60. The figures show that although the resource wastage level fluctuates as the number of city IoT applications increases, this effect can be considered negligible. The resource load imbalance level, energy consumption and bandwidth resource consumption of all approaches increase with the number of city IoT applications, because more city IoT applications require more virtual machines to handle their tasks. Furthermore, among the five approaches, the DEODCA approach always has the lowest resource wastage level, resource load imbalance level, energy consumption and bandwidth resource consumption.

Conclusion and Future Work
With the advent of the 5G era, an increasing number of IoT devices have adopted 5G communication technology and produce a large number of city IoT applications with high resource demands and delay sensitivity. Since resource-limited IoT devices cannot meet the resource demands of these city IoT applications, they offload some tasks of the city IoT applications onto the edge clouds. To improve the system utility and processing efficiency of edge clouds, this paper presented an optimal deployment approach for city IoT applications in resource-constrained edge clouds to allocate their collaborative tasks. The comparative analysis of the experimental results shows that our proposed approach is superior to other related approaches in terms of effectiveness and efficiency.