A greedy randomized adaptive search procedure for scheduling IoT tasks in virtualized fog–cloud computing

Virtualized fog–cloud computing (VFCC) has emerged as a powerful platform for processing the increasing number of emerging Internet of Things (IoT) applications. VFCC resources are provisioned to IoT applications in the form of virtual machines (VMs). Effectively utilizing VMs for diverse IoT tasks with varying requirements poses a significant challenge due to their heterogeneity in processing power, communication delay, and energy consumption. To address this challenge, in this article we propose a system model for scheduling IoT tasks in VFCCs, considering not only individual task deadlines but also the system's overall energy consumption. Subsequently, we employ a greedy randomized adaptive search procedure (GRASP) to determine the optimal assignment of IoT tasks among VMs. GRASP, a metaheuristic-based technique, offers appealing characteristics, including simplicity, ease of implementation, a limited number of tuning parameters, and the potential for parallel implementation. Our comprehensive experiments evaluate the effectiveness of the proposed method, comparing its performance with state-of-the-art algorithms. The results demonstrate that the proposed approach outperforms the existing methods in terms of deadline satisfaction ratio, average response time, energy consumption, and makespan.


INTRODUCTION
In recent years, the Internet of Things (IoT) has become an increasingly widespread technology, and as a result, IoT-enabled devices such as cameras, smart meters, sensors, wearable devices, smartphones, and smart vehicles generate enormous amounts of information that must be processed by IoT applications [1]. These applications are being extensively deployed in various domains, including smart healthcare systems, augmented and virtual reality, connected and autonomous cars, drones, real-time manufacturing, and fire detection [2,3]. Despite the potential of IoT applications to increase productivity and provide considerable value to our lives, IoT devices are resource-limited and cannot satisfy the quality of service (QoS) requirements of IoT applications. One common remedy is to offload computation to remote cloud servers [5,6]. Nevertheless, many IoT applications cannot tolerate the high connection latency between end devices and cloud servers. An alternative approach is to leverage fog computing, which offers resources such as storage, computation, and networking close to the IoT devices that generate the data [7,8]. Through fog computing, delay-sensitive IoT applications can achieve higher QoS through lower latency, better network utilization, and improved information security. Fog computing, however, poses several challenges due to the capacity constraints and widespread distribution of computing resources at this layer. Moreover, efficient energy consumption is a requisite for fog computing.
Task scheduling is a fundamental problem in fog–cloud computing frameworks [9-14], with significant implications for timely task completion and resource-utilization efficiency. Task scheduling involves mapping and managing how tasks are executed across a group of distributed resources. Consequently, an effective IoT task scheduling method that balances QoS demands with efficient use of system resources, such as energy, is paramount to utilizing a fog–cloud computing system effectively. Since task scheduling optimization is known to be NP-hard [16-18], numerous heuristic-, metaheuristic-, and machine learning-based approaches have been developed to solve it under different optimization goals. For example, in Reference 19, efficient heuristic rules and a priority-aware algorithm are respectively proposed for the local decision-making process and for scheduling offloaded tasks among edge cloud servers in mobile edge computing environments. A method for workflow scheduling in a fog computing environment was designed in Reference 20 by hybridizing the fireworks algorithm (FWA) and heterogeneous earliest finish time (HEFT) to improve the system's makespan and cost. In Reference 21, a double-deep Q-learning (DDQL)-based task scheduling scheme is proposed to optimize service delay and computation cost in fog-based IoT environments. Although different methods have been proposed to solve the task scheduling problem, more work is required for scheduling IoT tasks in fog–cloud environments. In particular, little attention has been given to optimizing the energy consumption of VMs while meeting the QoS requirements of IoT workloads in heterogeneous fog–cloud environments.
This article examines the optimal IoT task scheduling problem in a virtualized fog–cloud computing (VFCC) environment. Virtualization technology is a key element usually employed in fog–cloud computing systems to improve energy efficiency and resource utilization [22-25]. In the VFCC platform, resources are allocated to IoT tasks as a collection of heterogeneous VMs. The heterogeneity of VMs in terms of communication delay, processing power, and energy consumption significantly complicates task scheduling. Furthermore, IoT tasks intensify the scheduling problem through their diversity in instruction count, memory requirements, and deadlines. Therefore, it is necessary to solve this problem efficiently using intelligent techniques. This study addresses the scheduling of a set of independent IoT tasks on a VFCC with the goal of optimizing energy usage while satisfying the deadline requirements of the IoT tasks. To this end, we employ the greedy randomized adaptive search procedure (GRASP), a multistart metaheuristic that has been successfully applied to a wide range of combinatorial optimization problems [31-34]. GRASP combines the following advantages: (a) greediness to generate reasonably good initial solutions, (b) randomization to explore different solution spaces, (c) local search to find local optima, and (d) multistart to select the best local optimum. Furthermore, GRASP provides the following appealing characteristics, which make it a promising technique for the task scheduling problem. First, it is simple and can easily be implemented in real-world scenarios. Second, only a few parameters must be adjusted and fine-tuned to achieve good results. Finally, GRASP can be implemented in parallel because its iterations are independent of each other, so its running time can be drastically reduced. These appealing features convinced us to use this technique for solving the task scheduling problem in a fog–cloud computing environment. The simulation results verify the effectiveness of our GRASP approach.
The major contributions of this work are as follows:
• Presenting a system architecture and problem formulation for IoT task scheduling in a virtualized fog–cloud environment, simultaneously considering energy consumption and deadline requirements.
• Proposing a new GRASP-based IoT task scheduling method for optimally assigning tasks to virtualized fog–cloud computing resources.
• Conducting extensive experiments to validate the performance of the proposed approach compared with state-of-the-art methods in terms of deadline satisfaction ratio, average response time, energy consumption, and makespan.
The remainder of the article is structured as follows: Section 2 discusses the relevant research. Section 3 describes the system model, including the system design and problem formulation. Section 4 describes the specifics of the proposed algorithm. Section 5 evaluates our proposed method based on experimental results. Finally, Section 6 concludes the article and suggests directions for further research.

RELATED WORK
Task scheduling in fog and cloud environments has been studied extensively [36-39]. Since the problem is in the NP-hard class, many approximate approaches such as heuristics, metaheuristics, and machine learning techniques have been proposed with different optimization goals. Effective task scheduling is crucial in a fog–cloud computing environment, where the allocation of resources directly impacts system performance. Energy efficiency is paramount to minimizing the environmental footprint and operational costs, ensuring sustainability. Quality of service (QoS) guarantees user satisfaction by optimizing task execution against predefined criteria, such as response time and reliability. Load balancing enhances resource utilization, preventing the overloading of specific nodes and maximizing system throughput. Lastly, cost considerations are essential to optimize resource allocation and minimize expenses within budget constraints. Balancing these factors in task scheduling ensures a harmonious synergy between energy efficiency, user satisfaction, resource utilization, and cost-effectiveness in cloud–fog environments. In this section, we review key features of relevant studies recently introduced for fog and cloud computing environments.
An approach based on the bee swarm optimization algorithm was proposed in Reference 40 to schedule independent tasks in the fog computing environment. It incorporates a tradeoff between the memory required by the submitted tasks and their execution time. In Reference 41, the authors use a dynamic priority queue to solve the task scheduling problem in cloud computing. In this work, the differential evolution (DE) algorithm, combined with a multiple-criteria decision-making method, is first used to prioritize the submitted tasks. Then the available virtual machines are ranked based on their processing power. Finally, tasks are mapped to virtual machines with the aim of optimizing energy consumption and makespan under deadline constraints. In Reference 42, an energy-efficient algorithm for mapping a set of unconstrained tasks onto heterogeneous VMs was proposed. First, the resource utilization and completion time of a task are calculated. Then, the values are normalized to make an efficient scheduling decision. The primary goal of that study is to minimize the energy consumption and makespan of a cloud computing system.
A Q-learning-based framework to schedule user requests on a virtualized cloud data center was introduced in Reference 43. In the first phase of this framework, an M/M/S queuing model fairly distributes the incoming user requests among available servers. In the second phase, for each server, a scheduler assigns tasks to VMs based on Q-learning. The goal is to maximize the CPU utilization of each server while minimizing the task response time. A task scheduler for fog computing was proposed in Reference 44 using the particle swarm optimization (PSO) algorithm. A fuzzy logic-based fitness function was employed, taking both delay-tolerant and delay-sensitive IoT applications into account. The goals of the scheduler are to reduce network utilization and the loop delay of applications. In Reference 45, a combination of a greedy strategy and a modified genetic algorithm was proposed for task scheduling optimization in cloud environments. The experimental results showed that it achieved better QoS in terms of average response time, total execution time, and workload balancing.
Furthermore, to address the workflow scheduling problem in the fog computing environment, the authors of Reference 46 exploited a hybrid cultural evolutionary algorithm and invasive weed optimization to order the tasks of a workflow. To evaluate the fitness of each solution, the heterogeneous earliest finish time (HEFT) algorithm 47 allocates tasks to processors. In this work, the dynamic voltage and frequency scaling (DVFS) technique is also applied to each solution to save energy. Scheduling of IoT tasks on volunteer fog–cloud resources was proposed in Reference 48. First, the problem is formulated as a mixed-integer linear programming (MILP) model with the aim of optimizing the total cost of computation, communication, and delay violation. Then, two heuristic algorithms are proposed to solve the model efficiently. In Reference 49, a hyper-heuristic technique uses a honey bee-based strategy for cloud task scheduling. To achieve load balancing, some tasks are moved from an overloaded VM to an underloaded VM. The results showed that the presented method provided a better makespan and reduced the degree of imbalance. Sheng et al. 50 used a Markov decision process (MDP) to formulate the task scheduling problem in edge computing. In addition, a policy-based deep reinforcement learning (DRL) algorithm was leveraged to solve the problem. The main goal is to maximize the task satisfaction degree, which is the ratio of the expected delay to the response time.
A fog-based Industrial IoT (IIoT) task scheduler was proposed in Reference 51. The authors minimize the energy consumption and makespan of the system using Harris hawks optimization with a local search strategy (HHOLS). The results demonstrated that HHOLS outperforms various state-of-the-art algorithms. However, the deadline requirements of tasks are not considered in their model. This algorithm is one of the baselines in our experiments. A combination of HEFT and the fireworks algorithm for scheduling workflow applications in fog computing environments is presented by Yadav et al. 20 This method attempts to minimize the makespan and scheduling costs. The study in Reference 52 minimized the energy consumed by fog nodes and the deadline-violation time of tasks when scheduling IoT tasks in heterogeneous fog networks. The problem was modeled as a mixed-integer nonlinear program (MINLP) and solved by a semi-greedy approach. The authors of Reference 53 address the complex challenge of optimizing data-offloading strategies for users and computing-service pricing for multiaccess edge computing (MEC) servers in a dynamic edge computing environment. They propose a novel model that uniquely considers users' risk-aware decision-making in the context of potential MEC server overexploitation. By integrating prospect theory, the tragedy of the commons, and users' probability-weighting phenomenon, the model captures a multiuser, multiserver, multiaccess edge computing environment. Notable contributions include the consideration of MEC servers' probability of failure, the use of prospect-theoretic utility functions for user satisfaction, and a multileader, multifollower Stackelberg game addressing the association problem between users and MEC servers. The article introduces alternative decision-making mechanisms, employing both game-theoretic and reinforcement learning-based approaches to derive optimal computing-service pricing policies for MEC servers.
Najafizadeh et al. 54 focused on a multi-objective solution for the task scheduling problem in fog–cloud computing. They propose a simulated annealing-based algorithm to optimize cost and service execution time while respecting the deadline and access level of each task. To achieve a tradeoff between service cost and service time, a goal programming approach is used. Fizza et al. 55 studied the task scheduling problem in an embedded fog–cloud architecture. They considered three types of tasks: those with soft, firm, and hard deadlines. Hard-deadline tasks are scheduled on the local embedded system. If the deadlines of firm and soft tasks cannot be met on the embedded system, they are scheduled on fog and cloud nodes, respectively. Although this work is promising for deadline-aware task scheduling, the interaction between the cloud and fog is limited. Moreover, energy consumption is not considered in the model. This method is also one of the baselines in our experiments.
Table 1 summarizes the main features of the related works and compares them with our work in terms of four main categories: environment, application type, performance metrics, and utilized technique. Although the existing research is valuable and covers different aspects, to the best of our knowledge, this is the first study that employs the GRASP technique to address the task scheduling problem in virtualized fog–cloud computing environments so as to reduce the system's energy usage while still enabling users to meet their deadline requirements. Our motivation for considering these two objectives is as follows: First, the deadline is one of the most important factors in the scheduling of IoT tasks in VFCC systems. Second, optimizing the energy consumption of computing nodes is a significant challenge on the path to green computing.

SYSTEM MODEL
This section defines our system model. Section 3.1 describes the system architecture, and Section 3.2 presents the formulation of the task scheduling problem.

System architecture
We consider a typical three-layer IoT fog–cloud architecture with multiple heterogeneous fog and cloud nodes in a master–slave mode. Figure 1 provides an overview of the system. Several smart devices, such as wearable devices, cameras, smartphones, smart-home sensors and appliances, and industrial devices, comprise the IoT layer. These geographically distributed devices generate a massive volume of data, which must be processed and analyzed to extract valuable information. However, most of these devices suffer from limited memory, processing, and power resources, so IoT devices usually offload their requests to the fog–cloud resources through the available gateways. The fog and cloud layers consist of sets of slave nodes named fog nodes (FNs) and cloud nodes (CNs), respectively. These are heterogeneous, virtualized nodes, each of which runs several virtual machines (VMs). The fog layer also includes a master node called the resource management unit (RMU), which comprises three components. The first component is the request receiver, which uses gateway devices to receive the offloaded requests from the IoT devices. After receiving each request, this component first determines the request-related parameters, such as the number of tasks and the size, memory requirement, and deadline of each task. Then, it sends the tasks to the scheduler. The second component is the resource monitor module, which periodically captures the context and state of each VM in terms of CPU utilization, memory utilization, bandwidth, and disk space. In each period, the collected information is shared with the third component, the scheduler. The scheduler is the heart of the RMU, where the decision-making process takes place. In each time interval, the scheduler solves a scheduling problem and determines how to assign tasks to VMs using the data obtained from the other two components. It is worth mentioning that the proposed task scheduling algorithm is executed in this component.

Problem formulation
This subsection formulates the task scheduling problem for virtualized fog–cloud computing environments. The symbols used in this article are summarized in Table 2. Consider a set of n diverse tasks, T = {τ_1, τ_2, …, τ_n}, received by the system within a particular time interval. In this work, similar to many previous works, 40,44,51,55 the tasks are deemed to be independent of one another and can be executed in parallel. For each task τ_i, 1 ≤ i ≤ n, its main characteristics are its size (number of instructions), its memory requirement, and its deadline. We examine a virtualized fog–cloud computing environment comprising a set of p heterogeneous computing nodes, N = {N_1, N_2, …, N_p}, where node N_j hosts N_j^num VMs and V_{j,k} denotes the k-th VM of node N_j; the total number of VMs is m. Given the set of tasks T and the set of VMs V, we use A_{n×m} as a binary allocation matrix representing the mapping function f : T → V. Each element a_{i,(j,k)} ∈ A_{n×m} is set to one, that is, a_{i,(j,k)} = 1, if task τ_i is mapped to VM V_{j,k}, ∀i = 1, 2, …, n, ∀j = 1, 2, …, p, and ∀k = 1, 2, …, N_j^num; otherwise, a_{i,(j,k)} = 0.
In the following subsections, we present mathematical formulations for the delay and energy of a virtualized fog–cloud environment.

Delay
The response time of task τ_i consists of its communication delay, execution time, and waiting time, denoted T_i^comm, T_i^exec, and T_i^wait, respectively. The communication delay of a task is the time required for it to be transmitted from the RMU to the destination node and for the result to be received. Suppose task τ_i is assigned to VM V_{j,k}, that is, a_{i,(j,k)} = 1; then N_j is the destination node, and T_i^comm is the round-trip communication delay between the RMU and node N_j. The execution time of task τ_i is its size divided by the processing speed of the VM assigned to it: T_i^exec = τ_i^size / V_{j,k}^speed. When a task is mapped to a particular VM, it must wait in the queue until the VM becomes available, that is, until all previous tasks have been executed. The order of task execution is determined by the task scheduler component. Let T_{j,k} be the set of tasks allocated to VM V_{j,k}, that is, the tasks for which a_{i,(j,k)} = 1, ∀i = 1, 2, …, n. For each VM V_{j,k}, we use B_{j,k} as a prioritization vector that determines the execution order of the tasks assigned to that VM, in which the value of each element b_{i,(j,k)} lies between one and the size of T_{j,k}, that is, 1 ≤ b_{i,(j,k)} ≤ |T_{j,k}|. To compare the execution order of tasks τ_i, τ_l ∈ T_{j,k}, the binary variable c_{(i,l),(j,k)} is used, where c_{(i,l),(j,k)} = 1 means that task τ_i has higher priority than task τ_l, that is, b_{i,(j,k)} < b_{l,(j,k)}; otherwise c_{(i,l),(j,k)} = 0. Therefore, the waiting time of task τ_i is the total execution time of the tasks that precede it on its VM: T_i^wait = Σ_{τ_l ∈ T_{j,k}} c_{(l,i),(j,k)} × T_l^exec.
Now, the response time of task τ_i in the system is defined as T_i^resp = T_i^comm + T_i^wait + T_i^exec. The deadline requirement of task τ_i is met when T_i^resp ≤ τ_i^deadline, ∀i = 1, 2, …, n. Let x_i = 1 denote that the deadline of task τ_i is met; otherwise x_i = 0. Thus, the deadline satisfaction ratio is obtained as DSR = (Σ_{i=1}^{n} x_i) / n.
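The response-time and DSR definitions above can be sketched in a few lines of Python. This is an illustrative sketch only: the task and VM attribute names (`size`, `speed`, `comm_delay`, `deadline`) are hypothetical stand-ins for the paper's symbols, and the communication delay is modeled as a fixed round trip.

```python
# Sketch of the response-time and DSR computations (Section 3.2.1).
# Attribute names (size, speed, comm_delay, deadline) are illustrative
# assumptions, not the paper's exact data structures.

def response_time(task, vm, queued_exec_time):
    """T_i^resp = T_i^comm + T_i^wait + T_i^exec."""
    t_comm = 2 * vm["comm_delay"]          # round trip RMU <-> node
    t_wait = queued_exec_time              # exec time of higher-priority tasks
    t_exec = task["size"] / vm["speed"]    # instructions / processing speed
    return t_comm + t_wait + t_exec

def deadline_satisfaction_ratio(tasks, assignment, vms):
    """DSR = (number of tasks finishing by their deadline) / n."""
    met = 0
    busy = {vm_id: 0.0 for vm_id in vms}   # accumulated exec time per VM
    for task in tasks:                     # tasks visited in execution order
        vm_id = assignment[task["id"]]
        vm = vms[vm_id]
        r = response_time(task, vm, busy[vm_id])
        busy[vm_id] += task["size"] / vm["speed"]
        if r <= task["deadline"]:
            met += 1
    return met / len(tasks)

tasks = [{"id": 0, "size": 400, "deadline": 1.0},
         {"id": 1, "size": 900, "deadline": 1.0}]
vms = {"v0": {"speed": 1000.0, "comm_delay": 0.05}}
assignment = {0: "v0", 1: "v0"}
print(deadline_satisfaction_ratio(tasks, assignment, vms))  # 0.5
```

In this toy instance the second task misses its deadline because it must wait for the first one on the same VM, so only half the tasks are satisfied.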

Energy
Here, we present an estimation model for the energy consumed by a virtualized fog–cloud environment to process a group of n incoming tasks during a specific time interval. We focus on the energy consumption of the computing nodes, which encompasses the baseline energy consumption of the active nodes and the energy usage of their hosted VMs in both active and idle states. 56 Note that if even one VM is active on a node, the node is considered to be in the active state, and its baseline energy consumption is included in the calculations.
The energy consumed by a VM depends on its mode, that is, active or idle, and the amount of time it spends in that mode. The active time of VM V_{j,k} is obtained by summing the execution times of all its assigned tasks: T_{j,k}^active = Σ_{i=1}^{n} a_{i,(j,k)} × T_i^exec.
To determine the idle time of each VM, it is first necessary to obtain the makespan of the system, denoted M. In a virtualized computing system, the makespan is the maximum active time among all of the VMs: M = max_{j,k} T_{j,k}^active.
Based on Equations (6) and (7), the idle time of VM V_{j,k} is T_{j,k}^idle = M − T_{j,k}^active. Now, we can estimate the energy consumed by a VM from the time it spends in each mode and its power consumption profile. The energy consumed by VM V_{j,k} is expressed as E_{j,k} = P_{j,k}^active × T_{j,k}^active + P_{j,k}^idle × T_{j,k}^idle, where P_{j,k}^active and P_{j,k}^idle indicate the power consumption of VM V_{j,k} in the active and idle modes, respectively. Based on the preceding analysis, the energy consumption of a computing node N_j can be computed as E_j = N_j^pbase × M + Σ_{k=1}^{N_j^num} E_{j,k}, where N_j^pbase indicates the baseline power consumption of node N_j (counted only while the node is active). Finally, the total energy consumption of the virtualized fog–cloud system for processing the set of n tasks is E^total = Σ_{j=1}^{p} E_j.
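The energy model above can be illustrated with a short sketch. The power-profile field names (`p_active`, `p_idle`, `p_base`) are assumptions for illustration, not the paper's notation; the logic follows the active/idle split and the active-node baseline rule described in this subsection.

```python
# Sketch of the energy estimation (Section 3.2.2). Field names
# (p_active, p_idle, p_base) are illustrative assumptions.

def total_energy(exec_time, vms, nodes):
    """exec_time[vm_id]: summed execution time of the tasks on that VM
    (its active time); vms[vm_id] holds the VM's power profile and host
    node; nodes[node_id] holds the node's baseline power."""
    makespan = max(exec_time.values())      # longest VM active time
    energy = 0.0
    active_nodes = set()
    for vm_id, vm in vms.items():
        t_active = exec_time[vm_id]
        t_idle = makespan - t_active        # idle time of the VM
        energy += vm["p_active"] * t_active + vm["p_idle"] * t_idle
        if t_active > 0:                    # one active VM activates the node
            active_nodes.add(vm["node"])
    for node_id in active_nodes:            # baseline energy of active nodes
        energy += nodes[node_id]["p_base"] * makespan
    return energy

vms = {"v0": {"p_active": 20.0, "p_idle": 5.0, "node": "n0"},
       "v1": {"p_active": 30.0, "p_idle": 8.0, "node": "n0"}}
nodes = {"n0": {"p_base": 50.0}}
exec_time = {"v0": 2.0, "v1": 4.0}
print(total_energy(exec_time, vms, nodes))  # 370.0
```

Here the makespan is 4 time units, so v0 is idle for 2 of them, and the single active node contributes its baseline power for the whole makespan.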

Problem overview
This article presents the following formulation for the task scheduling problem in a virtualized fog-cloud environment.
Input:
T = {τ_1, τ_2, …, τ_n}: a set of n independent IoT tasks.
V: a set of m heterogeneous VMs.

Output:
A_{n×m}: a binary allocation matrix for the mapping function f : T → V. B_{j,k}: a prioritization vector determining the execution order of the tasks assigned to VM V_{j,k}, ∀j = 1, 2, …, p and ∀k = 1, 2, …, N_j^num.
Objective function: minimize the system's total energy consumption, E^total, subject to the following constraints. Constraint (13) pertains to memory and specifies that a task can only be assigned to VMs that possess adequate memory for it. Constraint (14) mandates that each task can be allocated to only a single VM. The inequality in constraint (15) ensures that the deadline of each task is satisfied. Constraints (16)-(18) define the domains of the variables. Since the task scheduling problem is an NP-hard optimization problem, there is practically no way to solve it optimally within a limited amount of time. Therefore, in the next section, we propose a GRASP-based technique to solve the problem efficiently.
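The constraints above can be checked programmatically. The following sketch validates a candidate allocation against the memory and deadline constraints; the field names (`mem_req`, `memory`) and the precomputed response-time map are illustrative assumptions, and the single-assignment constraint (14) holds by construction because a dictionary maps each task to exactly one VM.

```python
# Feasibility check for an allocation, mirroring constraints (13)-(15).
# Field names (mem_req, memory) are illustrative assumptions.

def is_feasible(tasks, vms, assignment, response):
    """assignment: task id -> VM id (constraint (14) holds by construction);
    response: task id -> precomputed response time T_i^resp."""
    for task in tasks:
        vm = vms[assignment[task["id"]]]
        if task["mem_req"] > vm["memory"]:           # constraint (13): memory
            return False
        if response[task["id"]] > task["deadline"]:  # constraint (15): deadline
            return False
    return True

tasks = [{"id": 0, "mem_req": 512, "deadline": 2.0}]
vms = {"v0": {"memory": 1024}}
print(is_feasible(tasks, vms, {0: "v0"}, {0: 1.5}))  # True
print(is_feasible(tasks, vms, {0: "v0"}, {0: 2.5}))  # False
```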

GRASP-BASED ALGORITHM
This section introduces a GRASP-based algorithm designed to solve the task scheduling problem in virtualized fog–cloud systems. The main objective of the proposed algorithm is to minimize the energy consumption of the system while simultaneously meeting the deadline requirements of the incoming tasks. GRASP is an iterative metaheuristic procedure that consists of two main phases per iteration. The first phase is the construction phase, where a feasible solution is generated using a greedy randomized approach. The second phase is the local search phase, which enhances the solution obtained in the first phase by exploring its neighborhood. This entire process is repeated until a stopping criterion is met, such as reaching an acceptable solution or exceeding the maximum number of iterations. The best solution found over all iterations is then reported as the final result. The details of each phase and the whole GRASP process for the considered problem are discussed in the following.

Construction phase
The purpose of the construction phase is to build an initial solution. This phase starts with an empty solution and constructs it incrementally using a greedy randomized strategy. At each step, one element is selected based on the greedy randomized approach and added to the partial solution. This process continues until the entire solution is built. The pseudocode of the proposed construction phase is presented in Algorithm 1. The procedure receives the set of tasks T, the set of VMs V, and the control parameter α ∈ [0, 1] as input, where α is a constant that controls the degree of greediness versus randomness. The procedure works as follows. First, the solution S is initialized (line 1), and the tasks are sorted in non-descending order of their deadline requirements (line 2). Then, in the main loop (lines 3-17), each task τ_i ∈ T is assigned to a VM using the following strategy. The removeFirst method takes the first task, say τ_i, from the set T and removes it from the set (line 4). For each task τ_i, an empty candidate list CL is created. In the second loop (lines 6-11), the memory constraint of task τ_i is first checked (line 7). Then, in line 8, the calculateResponseTime method obtains the response time of the task if it were allocated to VM V_{j,k}. This value can be calculated using Equation (4); see Section 3.2.1. After that, the VM is added to the candidate list CL in line 9.
Now, CL contains the VMs that have enough memory to process task τ_i, together with their response times for this task. Line 12 sorts CL in non-descending order of response time. Next, the buildRCL method takes the CL list and the parameter α as input and builds a restricted candidate list RCL (line 13). The RCL is composed of the first α × |CL| VMs of the CL list, that is, the VMs that provide the lowest response times. Then, the selectRandom method randomly selects a VM, say s, from the RCL (line 14), and task τ_i is assigned to it (line 15). Finally, the available time of VM s is updated by the updateVM method (line 16).
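The construction steps above can be sketched as follows. This is a sketch under stated assumptions: the task and VM field names are hypothetical, the response-time estimate is the simplified round-trip model from Section 3.2.1, and the code assumes every task fits on at least one VM.

```python
import math
import random

# Sketch of the greedy randomized construction (Algorithm 1).
# Task/VM fields (size, mem_req, deadline, speed, memory, comm_delay)
# are illustrative assumptions, not the paper's exact data structures.

def construct(tasks, vms, alpha, rng=random):
    schedule = {}                                   # task id -> VM id
    ready = {v: 0.0 for v in vms}                   # VM availability time
    for task in sorted(tasks, key=lambda t: t["deadline"]):
        cl = []                                     # candidate list
        for vm_id, vm in vms.items():
            if vm["memory"] < task["mem_req"]:      # memory constraint
                continue
            resp = (2 * vm["comm_delay"] + ready[vm_id]
                    + task["size"] / vm["speed"])   # estimated T_i^resp
            cl.append((resp, vm_id))
        cl.sort()                                   # non-descending response
        rcl = cl[:max(1, math.ceil(alpha * len(cl)))]
        _, chosen = rng.choice(rcl)                 # random pick from the RCL
        schedule[task["id"]] = chosen
        ready[chosen] += task["size"] / vms[chosen]["speed"]
    return schedule

tasks = [{"id": 0, "size": 500, "mem_req": 256, "deadline": 1.0},
         {"id": 1, "size": 800, "mem_req": 256, "deadline": 2.0}]
vms = {"v0": {"speed": 1000.0, "memory": 512, "comm_delay": 0.05},
       "v1": {"speed": 2000.0, "memory": 512, "comm_delay": 0.20}}
print(construct(tasks, vms, alpha=0.5))
```

With α = 0.5 and two candidate VMs, the RCL contains only the single best VM, so this particular run degenerates to the pure greedy choice; larger α values admit more candidates and hence more randomness.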

Local search phase
Although the construction phase generates a feasible solution, it is not guaranteed to be locally optimal. Therefore, a local search procedure can be applied to explore the solution's neighborhood and improve its quality. The local search procedure iteratively replaces the current solution with a better one from its neighborhood. This process continues until no further improvement can be achieved. Our proposed local search procedure is described as follows.
Algorithm 2 shows the pseudocode of the proposed local search phase. The main idea of the procedure is based on the first-improvement strategy. 28 The details are as follows. The procedure takes a solution S as input and attempts to improve its quality. The main loop (lines 1-14) runs until no more improvement is possible. The inner loop (lines 2-13) generates the neighbors of the current solution one by one and visits them until an improving solution is found. To this end, for every neighbor S′ ∈ N(S), the deadline satisfaction ratio and the total energy consumption are computed using the calculateDeadlineSatisfactionRatio and calculateTotalEnergy methods in lines 3 and 4, respectively. If the DSR of the neighbor S′ is better than that of the current solution S, the procedure moves to S′ and considers it the current solution (lines 5-8). However, if the DSRs of S and S′ are the same, S′ is selected as the next solution if it decreases the total energy consumption of the system (lines 9-12).
It is worth mentioning that we use a one-swap neighborhood strategy to produce the neighbors of a solution. The neighborhood is the set of all feasible solutions that can be obtained by swapping the assignments of any two tasks in the current solution. Therefore, a given scheduling solution for n tasks has exactly n × (n − 1)/2 neighbors.
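The first-improvement search over the one-swap neighborhood can be sketched as below. The `evaluate` callback is a hypothetical stand-in for the paper's (DSR, total energy) evaluation; the comparison rule matches Algorithm 2 (higher DSR wins, ties broken by lower energy).

```python
import itertools

# Sketch of the first-improvement local search (Algorithm 2) over the
# one-swap neighborhood: exchange the VM assignments of two tasks.

def better(a, b):
    """Does score a = (dsr, energy) improve on score b?"""
    return a[0] > b[0] or (a[0] == b[0] and a[1] < b[1])

def local_search(schedule, evaluate):
    best = evaluate(schedule)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(sorted(schedule), 2):
            neighbor = dict(schedule)
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]  # one swap
            score = evaluate(neighbor)
            if better(score, best):       # first improvement: move immediately
                schedule, best = neighbor, score
                improved = True
                break
    return schedule, best

# Toy evaluation: DSR is fixed; energy is lower when task 0 runs on "fast".
def toy_eval(s):
    return (1.0, 10.0 if s[0] == "fast" else 20.0)

print(local_search({0: "slow", 1: "fast"}, toy_eval))
```

The `break` after an improving move is what makes this first-improvement rather than best-improvement: the search restarts its neighborhood scan from the new incumbent instead of examining all n × (n − 1)/2 neighbors every pass.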

GRASP
GRASP is the hybridization of a greedy randomized approach with a local search technique, embedded in a multistart procedure. The fundamental concept behind the multistart procedure in GRASP is that by generating several initial solutions for a local search algorithm, the chances of finding a starting solution that leads to a global optimum are enhanced. 28
Algorithm 3. GRASP(T, V, α, N_itr)
Algorithm 3 outlines the GRASP approach proposed for addressing the task scheduling problem in virtualized fog–cloud systems. After initializing the values of DSR_best and E_best^total in lines 1 and 2, respectively, the algorithm's iterations are carried out in the for loop spanning lines 3-14. A solution is constructed at each iteration using the SOLUTION-CONSTRUCTION procedure in line 4 (see Algorithm 1). In line 5, the LOCAL-SEARCH procedure (Algorithm 2) is applied to the current solution to reach a local optimum. If the DSR of the current solution S is greater than the DSR of the best solution found so far, the value of DSR_best is updated and S is recorded (lines 6-9). If the DSR of the current solution S equals DSR_best, we compare their energy consumption; if the new solution S yields a smaller value, it is recorded (lines 10-13). After N_itr iterations, the algorithm returns the solution with the best DSR. If multiple solutions attain the same best DSR value, the one with the lowest overall energy consumption is chosen. Since the GRASP algorithm performs multiple randomized iterations, the local search stage effectively acts as a local search with several restarts, which is a promising strategy for avoiding poor-quality local minima. Also, because GRASP iterations are independent, the algorithm can be implemented in parallel using only three global variables, that is, DSR_best, E_best^total, and S*. Therefore, its running time can be remarkably improved.
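The multistart loop can be sketched generically as below. The construction and local-search steps here are trivial stand-ins for Algorithms 1 and 2, included only to make the sketch self-contained and runnable; scores are (DSR, energy) pairs compared with the same rule as Algorithm 3.

```python
import random

# Sketch of the GRASP multistart loop (Algorithm 3). construct/improve/
# evaluate are caller-supplied stand-ins for Algorithms 1 and 2 and the
# (DSR, total energy) evaluation of Section 3.2.

def grasp(n_iter, construct, improve, evaluate, rng=random):
    best_sol, best_score = None, (-1.0, float("inf"))
    for _ in range(n_iter):             # independent multistart iterations
        s = construct(rng)              # greedy randomized construction
        s = improve(s)                  # local search to a local optimum
        dsr, energy = evaluate(s)
        if (dsr > best_score[0] or
                (dsr == best_score[0] and energy < best_score[1])):
            best_sol, best_score = s, (dsr, energy)  # record the incumbent
    return best_sol, best_score

# Toy problem: pick x in {0..9}; DSR is always 1.0, energy equals x.
rng = random.Random(42)
result = grasp(
    n_iter=20,
    construct=lambda r: r.randrange(10),
    improve=lambda s: max(s - 1, 0),    # "local search": one step toward 0
    evaluate=lambda s: (1.0, float(s)),
    rng=rng,
)
print(result)
```

Because the iterations share no state beyond the incumbent, the loop parallelizes naturally: each worker can run construct-and-improve independently and only the final comparison against the best (DSR, energy) pair needs synchronization.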

Complexity analysis
In this section, we analyze the computational complexity of the proposed algorithm. As the GRASP algorithm comprises two separate phases, we calculate the complexity of each phase independently. The first phase, that is, Algorithm 1, constructs the initial solution using a semi-greedy method, which initially sorts the tasks based on their deadlines. Various sorting methods can be employed for this purpose, and the computational complexity of this step is O(n log n). For each task, the algorithm performs the following two steps: (1) creating the candidate list (CL), which requires O(m), and (2) sorting CL, which needs at most O(m log m). Since there are n tasks, the time complexities of these two steps are O(n × m) and O(n × m log m), respectively. The other steps can be done in constant time. Consequently, the time complexity of the construction phase is O(n × m log m).
The time complexity of the second phase, that is, Algorithm 2, can be analyzed as follows. The outer "while" loop iterates until S is locally optimal; the number of iterations depends on the convergence speed of the algorithm. Let M denote the maximum number of iterations. The "foreach" loop iterates over N(S), the neighborhood of S. As mentioned in Section 4.2, the size of the neighborhood of a given solution is n × (n − 1)∕2, so the first-improvement strategy examines at most O(n²) neighbors for each solution. The operations inside the foreach loop take constant time. Given these considerations, the worst-case time complexity of the second phase is O(M × n²). Finally, since the proposed GRASP algorithm, Algorithm 3, runs the first and second phases N itr times, its overall time complexity is O(N itr × (n × m log m + M × n²)).
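The local search whose complexity is analyzed above can be sketched as follows, assuming the swap neighborhood of size n(n − 1)/2 described in Section 4.2. The `evaluate` callback is a stand-in for the paper's DSR/energy objective (higher is better); the names are illustrative.

```python
from itertools import combinations

def local_search(schedule, evaluate):
    """Sketch of the first-improvement local search (Algorithm 2): the
    neighborhood of a schedule is every pairwise swap of two tasks' VM
    assignments; the first improving swap is kept and the scan restarts,
    until no swap improves the objective."""
    best_val = evaluate(schedule)
    improved = True
    while improved:                                   # outer loop: at most M passes
        improved = False
        for a, b in combinations(list(schedule), 2):  # O(n^2) neighbors
            schedule[a], schedule[b] = schedule[b], schedule[a]  # try the swap
            val = evaluate(schedule)
            if val > best_val:
                best_val, improved = val, True        # first improvement: keep it
                break
            schedule[a], schedule[b] = schedule[b], schedule[a]  # undo the swap
    return schedule
```

Each pass scans at most n(n − 1)/2 swaps, giving the O(M × n²) bound derived above.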

PERFORMANCE EVALUATION
This section carries out extensive simulation experiments to assess the effectiveness of the proposed algorithm across different dimensions. The experimental setup is outlined in Section 5.1, while Sections 5.2 and 5.3 respectively describe the evaluation metrics and comparison algorithms used. Finally, the findings of the experiments are reported and examined in Section 5.4.

Experimental settings
This section provides the simulation settings used to validate the proposed method's effectiveness. In order to cover different scenarios, a uniform random distribution is used to generate the synthetic dataset for the simulation experiments, similar to References 46,51,55. The characteristics of the tasks and virtual machines are established based on Tables 3 and 4, respectively. As in Reference 55, we assume two types of tasks are offloaded from IoT devices to the fog-cloud environment: Type-1 tasks are lightweight, while Type-2 tasks are heavy. The number of VMs hosted on each FN is set within the range of 1-5, and for each CN, it is set within the range of 2-10. To assess the efficacy of the proposed method under varying circumstances, four experiments were conducted to examine the impact of different parameters. Table 5 provides an overview of each experiment and its settings. The specific details of each experiment are outlined below:
Experiment one (Exp#1): The objective of this experiment is to analyze how the system's performance is affected by an increase in the number of tasks. We varied the number of tasks from 100 to 500 in increments of 100, while keeping the number of FNs and CNs constant at 24 and 4, respectively. Additionally, a delay of 200 ms between the fog and cloud environments is assumed.
Experiment two (Exp#2): This experiment studies the impact of varying the number of FNs. The number of FNs is increased from 4 to 32, with an increase of 4 nodes at each step. In this experiment, the number of tasks is fixed at 300 and the number of CNs is set to 4, while assuming a fog-to-cloud delay of 200 ms.
Experiment three (Exp#3): This experiment demonstrates how the system's performance is influenced by the number of available cloud nodes. We varied the number of cloud nodes from 2 to 16 in increments of 2, while keeping the number of tasks and fog nodes constant at 300 and 16, respectively. Additionally, a fog-to-cloud delay of 200 ms is assumed.
Experiment four (Exp#4): This experiment involves a problem with 300 tasks, 16 fog nodes, and 4 cloud nodes, with the fog-to-cloud delay varying from 40 to 320 ms in increments of 40 ms.
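A generator for such synthetic instances could look like the following sketch. The per-node VM counts (1-5 per FN, 2-10 per CN) follow the paper; the task fields are placeholders, not the actual parameter ranges of Tables 3-4, which are not reproduced in this excerpt.

```python
import random

def make_workload(n_tasks, n_fns, n_cns, rng):
    """Sketch of the uniform-random experimental setup: n_tasks tasks of
    Type-1 (lightweight) or Type-2 (heavy), plus the VM pool formed by
    1-5 VMs per fog node and 2-10 VMs per cloud node."""
    tasks = [{"id": i, "type": rng.choice([1, 2])} for i in range(n_tasks)]
    fog_vms = [f"FN{j}-VM{k}" for j in range(n_fns)
               for k in range(rng.randint(1, 5))]     # 1-5 VMs per FN
    cloud_vms = [f"CN{j}-VM{k}" for j in range(n_cns)
                 for k in range(rng.randint(2, 10))]  # 2-10 VMs per CN
    return tasks, fog_vms, cloud_vms
```

With the Exp#2/Exp#3 baseline (300 tasks, 16 FNs, 4 CNs), this yields between 24 and 120 VMs per instance.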
All experimental simulations were carried out on a PC running Windows 10 Pro, equipped with two Intel® Xeon® E7-4850 v4 processors running at 2.10 GHz and 8.00 GB of RAM. To ensure the validity of the results, each experiment was run 10 times, and the average, maximum, and minimum values obtained from these runs were reported. The source code for the article is available at Reference 57.

Performance metrics
The proposed algorithm's performance was evaluated against the other algorithms using the following criteria:
Deadline satisfaction ratio (DSR): This QoS metric measures the algorithm's ability to meet task deadlines. It is defined as the percentage of tasks that meet their required deadlines, as shown in Equation (5).
Average response time (ART): This metric indicates the average delay experienced by a submitted task in the fog-cloud environment. It is computed as the sum of the response times of the n tasks submitted to the system during a specific time interval, divided by n.
Energy consumption: This metric represents the total energy consumed by the virtualized fog-cloud system in processing a set of n tasks, see Equation (11).
Makespan: This metric measures the total processing time required for a set of tasks submitted to the system, that is, the duration from the start of task processing until the completion of the last task. It is defined in Equation (7).
Execution time (Runtime): This metric measures the overall time taken by each algorithm to schedule the tasks on the system.
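Given per-task finish times, deadlines, and response times, the first, second, and fourth metrics above can be computed as in this sketch. Energy consumption is omitted because it depends on the VM power model of Equation (11), which is not reproduced here.

```python
def compute_metrics(finish, deadline, response):
    """Compute DSR (percentage of tasks finishing by their deadline),
    ART (mean response time), and makespan (finish time of the last
    task) for n tasks, from parallel per-task lists."""
    n = len(finish)
    dsr = 100.0 * sum(f <= d for f, d in zip(finish, deadline)) / n
    art = sum(response) / n                  # average response time
    makespan = max(finish)                   # completion of the last task
    return dsr, art, makespan
```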

Compared algorithms
The performance of the proposed algorithm was assessed by comparing it to three other algorithms, briefly described as follows:
Earliest Deadline First (EDF) 58,59 : EDF is a scheduling algorithm that prioritizes tasks based on their deadlines and allocates them randomly to nodes, with memory constraints in place.
Harris Hawks optimization algorithm based on a local search strategy (HHOLS) 51 : This algorithm utilizes Harris Hawks Optimization 60 as a metaheuristic to minimize energy consumption in solving the task scheduling problem.
Self-contained fog-cloud (SFC) 55 : This heuristic algorithm schedules Type-1 tasks on the VMs of FNs and Type-2 tasks on the VMs of CNs, using the EDF strategy for the set of tasks assigned to each layer.
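The EDF baseline, as described above, can be sketched as follows. The dict-based task/VM shapes are illustrative assumptions; only the behavior (deadline ordering, random memory-feasible placement) comes from the description.

```python
import random

def edf_schedule(tasks, vms, rng):
    """Sketch of the EDF baseline: serve tasks in non-decreasing deadline
    order and place each on a randomly chosen VM that satisfies its
    memory requirement."""
    placement = {}
    for task in sorted(tasks, key=lambda t: t["deadline"]):
        feasible = [vm for vm in vms if vm["mem"] >= task["mem"]]
        if feasible:                                  # skip unplaceable tasks
            placement[task["id"]] = rng.choice(feasible)["id"]
    return placement
```

SFC can be seen as the same routine applied twice: once to the Type-1 tasks over fog VMs and once to the Type-2 tasks over cloud VMs.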

Experimental results
In this subsection, we report and analyze the outcomes of the conducted experiments to assess the performance and effectiveness of the proposed algorithm.

Deadline satisfaction ratio (DSR)
The experimental results for the proposed algorithm and the benchmark algorithms with respect to the deadline satisfaction ratio (DSR) are reported in Figure 2A-D. The results indicate that the proposed GRASP algorithm outperforms the benchmarks in all scenarios and significantly increases the percentage of tasks that meet their deadline requirements. This is unsurprising, as the proposed algorithm prioritizes maximizing the DSR of tasks. Figure 2A demonstrates that as the number of tasks submitted to the system increases, the DSR decreases for all algorithms, but the reduction is less significant for the proposed algorithm than for the benchmarks. Figure 2B,C reveals that as the number of FNs and CNs increases, the submitted tasks enjoy a higher DSR. Notably, for a scenario with 300 tasks, 8 FNs, and 4 CNs, the proposed algorithm achieves 95.3% DSR, while the values for EDF, HHOLS, and SFC are significantly lower at 48.8%, 41.2%, and 52.9%, respectively. The impact of the fog-to-cloud delay is shown in Figure 2D: as the delay between the fog and cloud environments increases, the DSR of the algorithms decreases, except for SFC. This can be attributed to the fact that GRASP, EDF, and HHOLS assign some Type-1 tasks to the cloud, while SFC only allocates Type-2 tasks to the cloud; as a result, when the fog-to-cloud delay increases, some Type-1 tasks may miss their deadlines. These results demonstrate that the proposed algorithm consistently outperforms the benchmark algorithms in terms of deadline satisfaction, highlighting its effectiveness in meeting task deadlines.

Average response time (ART)
Figure 3A-D illustrates the performance of the compared algorithms in terms of ART. Generally, our GRASP algorithm achieves the lowest values, while EDF exhibits the highest. In the first experiment, as shown in Figure 3A, the proposed algorithm performs much better than the others as the number of submitted tasks grows, demonstrating that our algorithm handles system load efficiently. It is evident from Figure 3B that increasing the number of FNs reduces the ART of all algorithms, particularly our GRASP, SFC, and HHOLS. Additionally, Figure 3C shows that the ART of all algorithms decreases significantly as more CNs are added to the system. This outcome is expected since CNs are more powerful than FNs, so the heavy Type-2 tasks experience lower execution times. Here, on average, the proposed algorithm improves the ART by about 11.6% compared to the second-best results. Finally, Figure 3D indicates a smooth increase in the ART of the algorithms as the fog-to-cloud delay increases.

FIGURE 3. Comparing the performance of algorithms in terms of average response time (ART) for different experiments.

Energy consumption
Figure 4A-D illustrates the simulation results for the energy consumption of the algorithms. It can be seen from the figures that, in most cases, the proposed algorithm yields the minimum energy consumption, reducing it by 2.2% to 42.4% compared with the best-performing baseline. In a few cases, the HHOLS algorithm slightly outperforms our algorithm; however, this superiority is not significant, and its values are comparable to those of the proposed algorithm. Throughout all our experiments, SFC yielded the best result in only one case, namely in Exp#4, with a difference of less than 1.2%. Consequently, our algorithm produces a more appropriate mapping of tasks onto fog and cloud nodes, which leads to a reduction in energy consumption.

Makespan
Experimental results for makespan are presented in Figure 5A-D, where the performance of the proposed GRASP algorithm is compared with the baseline algorithms. The results demonstrate that the proposed algorithm outperforms the baselines in terms of makespan. This is attributed to the ability of the GRASP algorithm to distribute tasks among the available fog and cloud nodes in a balanced manner, thereby achieving better load balancing. As can be seen from the figures, the proposed approach provides either the best or near-best results. Compared with the other methods, the proposed algorithm performs better owing to its solution construction phase, the semi-greedy strategy, and the local search with multiple restarts.

Runtime
To evaluate the performance of the algorithms in terms of runtime, we report the numerical results for Exp#1. Figure 6 gives an overview of the runtime of each algorithm for different numbers of tasks, showing the feasibility of implementing the proposed GRASP algorithm in a real-time or near-real-time context. While heuristic algorithms like EDF and SFC exhibit low runtime, a common trait among heuristics, our analysis in the previous subsections revealed their comparatively inferior task scheduling performance. In contrast, the GRASP algorithm not only maintains a competitive runtime but also delivers superior task scheduling efficiency. This dual advantage positions the GRASP algorithm as a promising candidate for deployment in dynamic environments that require both swift execution and optimized task scheduling, affirming its potential for real-time or near-real-time applications.

CONCLUSION AND FUTURE WORK
Virtualized fog-cloud computing (VFCC) constitutes a distributed system employed for processing an extensive array of tasks originating from diverse Internet of Things (IoT) devices. Scheduling these tasks on virtualized fog and cloud resources poses a significant challenge, impacting resource utilization and quality of service (QoS). This study introduces an approach based on a greedy randomized adaptive search procedure (GRASP) to minimize energy consumption while ensuring high QoS for IoT tasks. The effectiveness of the proposed approach is validated through extensive simulation experiments involving varying numbers of tasks, fog nodes, cloud nodes, and fog-to-cloud communication delays. The results illustrate a substantial improvement in the percentage of tasks meeting their deadline requirements while simultaneously minimizing system energy consumption. Several promising avenues for future research in this domain are identified. First, there is an intention to enhance the model's realism by incorporating features such as diverse task initiation times and accurately representing the varying CPU power scales of different devices. Second, this research can be extended by considering additional factors such as monetary cost, security and privacy, mobility of IoT devices, and resource failures. Furthermore, future investigations may explore ways to enhance the local search phase by examining alternative search mechanisms.

… physical nodes, consisting of |F| fog nodes (FNs) and |C| cloud nodes (CNs). Each node N_j, 1 ≤ j ≤ p, has the following characteristics:
N_j^num: number of VMs hosted on the node,
N_j^comm: communication delay to the RMU,
N_j^pbase: baseline power consumption.
Now, let V_j = {V_{j,1}, V_{j,2}, …, V_{j,N_j^num}} be the set of VMs hosted on node N_j, 1 ≤ j ≤ p. The characteristics of each VM V_{j,k} are defined as follows:
V_{j,k}^cpu: CPU processing speed (in Million Instructions Per Second, MIPS),
V_{j,k}^mem: memory capacity (in MegaBytes, MB),
V_{j,k}^pactive: power consumption in active mode (in Watts, W),
V_{j,k}^pidle: power consumption in idle mode (in Watts, W).
The union V = V_1 ∪ V_2 ∪ … ∪ V_p is used to represent the set of the system's VMs. Let m denote the number of VMs, that is, m = N_1^num + … + N_p^num.

FIGURE 2. Comparing the performance of algorithms in terms of deadline satisfaction ratio (DSR) for different experiments.

FIGURE 4. Comparing the performance of algorithms in terms of energy consumption for different experiments.

FIGURE 5. Comparing the performance of algorithms in terms of makespan for different experiments.
FIGURE 6. Comparing the performance of algorithms in terms of runtime for experiment one (Exp#1).
TABLE 2. Summary of notation and corresponding units.
TABLE 3. Specifications of tasks.
TABLE 4. Specifications of VMs.
TABLE 5. Experiments settings.