Multimedia Computational Offloading for 5G Mobile Edge Computing

The tremendous growth in smartphone users, together with their heavy use of delay-sensitive applications requiring fast response, has created traffic demands that current research has not yet met. These ever-increasing computational demands are managed through computational offloading, which moves intensive workloads to small cells offering functionality similar to a resource-rich provider, namely an edge server. With an edge server, heavy computations are executed outside the mobile device, which minimizes mobile energy consumption and latency. Mobile edge computing is an emerging paradigm of commercial infrastructure for computational offloading that enhances the power of smart mobile devices. Generally, the edge provides cloud services and resources in the nearest proximity of users with radio access in fifth-generation (5G) networks for low latency, prompt response, and filtering. However, frequent network failures automatically degrade performance. Hence, in this paper we integrate the functionality of an Edge Server with Pico cells (ESP) for efficient traffic management, and we also propose a distributed Multi-hop Mesh Middle layer (MMM) architecture for seamless communication with high resilience, minimal latency, and reduced energy consumption. Generally, a pico cell is a distributed antenna system, an alternative to a repeater, used to extend wireless services to about 100 users. We develop an Ant Social based Vector (ASV) optimization algorithm for intelligent offload decisions, which paves the way to resolving backhaul routing conflicts. CloudSim simulation results show that offloading from tiny devices, integrated with significant additional computational capability, satisfies all demands from each network within a 15 ms delay.


The rest of this paper is organized as follows: Section II discusses related work, Section III covers the system architecture, Section III.1 presents ASV in the MMM architecture, Section IV describes the experimental analysis of the proposed MMM framework, and Section V specifies the conclusion.

The offloading decision making is an essential aspect, which is primarily based on the prediction of computing time.
So, in the early 2000s, the main focus was to develop algorithms for offloading decisions, i.e., to decide whether offloading would benefit mobile users or not [15]. Energy saving [16] is another crucial issue; complex and energy-constrained computations can be offloaded to the cloud, which performs operations faster while preserving mobile battery life [17], [18], but this leads to signaling overhead for the mobile user.

To overcome offloading problems, Lei et al. proposed a computation offloading mechanism in which an intensive application is executed completely at the remote side, i.e., outside the mobile device. The method proposed in [19] addresses a multi-client mobile environment in which a virtual machine can be migrated from a cloud datacenter to a mini-datacenter (edge) and managed through a cloud surrogate for better mobile energy efficiency, but the mechanism does not guarantee IP continuity. To overcome the IP continuity issue, Wang et al. proposed a centralized improved particle swarm optimization algorithm for a single-client application. The authors in [20] focus on minimizing mobile energy consumption and on optimal VM and CDN placement, but the mechanism specified in that paper lacks composite services and other topological support. The framework proposed in [21] deals with reducing both mobile energy consumption and response time, but it is suitable only for LTE with intermittent connectivity, leaving open the preservation of battery power through non-intermittent connectivity.

Here, C1 represents the actual waiting time constraint, where ∆i is the actual waiting time.

Besides, C2 gives the service delay constraint. C3 ensures the minimum utility of each task, and C4 restricts the minimum utility of every CN. C5 indicates that each CN can compute at most one task at a time, and C6 shows that each task can be allocated to at most one CN. Finally, C7 represents the binary constraint.

Contrary to the service-oriented problem, the CN-oriented problem focuses on maximizing the utility of the CNs, and its objective function is subject to the same constraints C1–C7. Obviously, the service-oriented and CN-oriented problems are coupled in the optimization process. However, the information exchange overhead grows with the size of the network.
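For concreteness, the formulation described in words above can be sketched as follows; the symbols (assignment variable $x_{ij}$, task/CN utilities $U^{s}_{ij}$ and $U^{c}_{ij}$, delays $\Delta_i$, $d_i$, and their bounds) are illustrative placeholders rather than the paper's exact notation:

```latex
\begin{align}
\max_{x}\quad & \sum_{i}\sum_{j} x_{ij}\, U^{c}_{ij} \\
\text{s.t.}\quad
& \text{C1: } \Delta_i \le \Delta^{\max}                               && \text{(actual waiting time)} \\
& \text{C2: } d_i \le d^{\max}                                          && \text{(service delay)} \\
& \text{C3: } \textstyle\sum_{j} x_{ij}\, U^{s}_{ij} \ge U^{s}_{\min}   && \text{(minimum utility of each task)} \\
& \text{C4: } \textstyle\sum_{i} x_{ij}\, U^{c}_{ij} \ge U^{c}_{\min}   && \text{(minimum utility of each CN)} \\
& \text{C5: } \textstyle\sum_{i} x_{ij} \le 1 \quad \forall j           && \text{(one task per CN at a time)} \\
& \text{C6: } \textstyle\sum_{j} x_{ij} \le 1 \quad \forall i           && \text{(one CN per task)} \\
& \text{C7: } x_{ij} \in \{0,1\}                                        && \text{(binary constraint)}
\end{align}
```

The service-oriented variant replaces the objective with the sum of task utilities $U^{s}_{ij}$ over the same constraint set.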

One effective way to solve this problem is a matching game, which can be performed in a distributed manner.

Steps performed by the ESP for image editing:

Generally, heuristic algorithms can be applied to a wide range of optimization problems to yield near-optimal solutions with minimal execution time. Thus, we develop an ASV algorithm for optimal path selection. Our proposed algorithm is inspired by the behavior of ants seeking food between a source location and their repository. Generally, a pheromone trail is a liquid laid down by ants as they travel, and it acts as a trail for subsequent ants to follow. From the trail values, it is easy to identify the shortest, optimal path. Since the trail value evaporates over time, only the shorter path attracts all the ants, which deposit more pheromone along that path.

This positive feedback from the ants is recorded in a tabu (target) list for optimal selection.

Finally, the shortest path with the highest pheromone value is considered optimal. When this artificial intelligence concept is applied to NP-hard problems, the term "trail value" or pheromone refers to the latency and transmission cost from the user's location to the destination, i.e., the CDN. The main idea is to spread the utility gained from finishing a computation over the processor time it requires, based on distance or latency.
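As a concrete illustration, the trail bookkeeping described above can be sketched as follows; the evaporation rate, the 1/latency deposit rule, and the node names are illustrative assumptions, not parameters specified in this paper:

```python
# Minimal pheromone-trail bookkeeping sketch for latency-based routing.
# rho (evaporation rate) and the 1/latency deposit rule are illustrative
# assumptions rather than values taken from the paper.

def evaporate(trails, rho=0.1):
    """Decay every trail value so stale paths lose attractiveness."""
    return {edge: (1 - rho) * tau for edge, tau in trails.items()}

def deposit(trails, path, latency_ms):
    """Ants on a low-latency path deposit more pheromone (1/latency)."""
    for edge in zip(path, path[1:]):
        trails[edge] = trails.get(edge, 0.0) + 1.0 / latency_ms
    return trails

trails = {}
trails = deposit(trails, ["user", "ESP1", "CDN"], latency_ms=10.0)
trails = deposit(trails, ["user", "ESP2", "ESP3", "CDN"], latency_ms=25.0)
trails = evaporate(trails)
# The shorter (10 ms) path now carries the higher trail value.
```

After evaporation, comparing `trails[("user", "ESP1")]` with `trails[("user", "ESP2")]` shows the low-latency route dominating, which is the positive-feedback effect the text describes.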

For instance, consider the scenario of a single user requesting social networking. The user is the source node and the CDN is the destination. Each ESP has a list that is updated by the ants with their respective latency values to the destination. Ants travel from source to destination, laying pheromone along their path, with the constraint that an ant never visits a node twice. The collected values in each ESP's list are then compared associatively against the latency, and the path with the least latency is chosen as optimal for processing the offload. Our proposed ASV algorithm is also applicable to the diverse application needs of multiple users. Moreover, the ESPs form a mesh network, which yields optimal routing. Generally, by offloading to ESPs at a shorter distance we achieve lower latency, power, and transmission or monetary cost in the user's proximity. Therefore, the ESP on the minimum-latency path is selected for offloading.

Any ESP can execute and share partial processing with neighbor ESPs of another region without transmission delay. Thus, high resilience with multi-sharing is readily achievable at minimal cost.

Assume a sample application, as in Figure 4, with a few clusters.

The analyzer will concurrently evaluate the energy consumed by the mobile device for the current task. The maximum value of the energy level is taken as the threshold, and the idle value when the mobile is inactive is 0.5. To take an effective decision, equations 1 and 6 evaluate the energy required to execute the current task on the device. Generally, handover is associatively managed and processed. If the evaluated value exceeds the threshold, the eNB offloads the task to the ESP nearest to the user's location, with the current value set to γi = 1.

The ASV algorithm initially makes the optimal path S an empty set. If the path has at least one node, it traverses for t computations until an optimal solution S is found. If there is no next node, the path to the current node is optimal. If there is more than one next node to process, the latency of each node is computed; finally, the latencies are compared and the minimum-latency node is chosen.
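The decision-and-traversal flow just described can be sketched as follows; the topology, latency values, and energy figures are illustrative placeholders (only the idle threshold of 0.5 and the no-revisit rule come from the text):

```python
# Sketch of the ASV flow described above: first an energy-based offload
# decision (gamma_i = 1 when offloading is beneficial), then a greedy
# latency-first traversal from the user to the CDN, never revisiting a
# node. Topology and numbers are illustrative, not from the paper.

IDLE_ENERGY = 0.5  # idle value assumed when the mobile is inactive

def should_offload(task_energy, threshold):
    """Offload (gamma_i = 1) when local execution would exceed the threshold."""
    return 1 if task_energy > threshold else 0

def asv_path(graph, latency, source, dest):
    """Build path S from the empty set, always taking the lowest-latency
    unvisited next hop; stop when the destination or a dead end is reached."""
    path, node, visited = [source], source, {source}
    while node != dest:
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:        # no next node: current path is final
            break
        node = min(candidates, key=lambda n: latency[(path[-1], n)])
        visited.add(node)
        path.append(node)
    return path

graph = {"user": ["ESP1", "ESP2"], "ESP1": ["CDN"], "ESP2": ["CDN"], "CDN": []}
latency = {("user", "ESP1"): 4, ("user", "ESP2"): 9,
           ("ESP1", "CDN"): 3, ("ESP2", "CDN"): 2}

gamma = should_offload(task_energy=0.8, threshold=IDLE_ENERGY)
route = asv_path(graph, latency, "user", "CDN") if gamma else ["user"]
```

Here the 0.8-unit task exceeds the 0.5 threshold, so γ = 1 and the traversal picks the 4 ms hop to ESP1 over the 9 ms hop to ESP2.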

In this paper, we consider the primary challenges of a generalized architecture and optimal offloading. Our results show that our scheme is a reliable and effective offloading alternative for both static and dynamic environments compared with other existing works.

First, there are diverse types of computing tasks, which depend on the computing environment.

Though some frequently used content is cacheable for reuse by other devices, data belonging to personal computing is not cacheable and so must often be processed in real time.

Second, it is not practical to build user patterns locally at each server; instead, learning with intelligent techniques and methods over extensive data sets can provide a broader view of user patterns.

The distributed deployment feature is prone to frequent site attacks and hence requires stringent security policies and the implementation of trust management systems. An adversary who gains access to the platform can derive information about user proximity and network analytics.

Besides, service providers would like to obtain user information to update their services; this poses a great challenge to the development of privacy protection mechanisms.

Our proposed ASV provides a faster response with improved Quality of Experience (QoE). We can see from Figure 9 that local computation drains the user's battery much faster. Besides, increasing the percentage of requests served at the mobile edge in turn reduces total power consumption, in contrast to scenarios that must be fulfilled by the 5G core.

However, application and content providers are challenged by network latency when connecting to the CDN, which must be resolved by today's emerging intelligent mechanisms. Thus, to obtain a tolerable latency, a combination of proximity and intelligence has been embedded in our proposed ESP. The ESP approach is unique in that it not only brings computation resources closer to mobile users but also decouples the problem of identifying user needs through the use of prediction and learning techniques. Hence, our proposed MMM architecture efficiently handles both offloading and handover. In addition, it efficiently sustains optimal routing through the intelligent ASV algorithm. In particular, ESP is an emerging mechanism best suited to emergency situations, battlefield surveillance, retail places, stadiums, shopping malls, high-speed mobile video access in transport, etc. Thus, our proposed offloading scheme is a self-healing network that can be intelligently managed through licensed reusable channels. As future work, we plan to create a test bed for the offloading scheme and then expand this concept to other areas such as trust management.