This research introduces the Large Energy-Aware Fog (LEAF) simulator, which aims to eliminate the shortcomings of existing models and thus enables comprehensive modeling of large energy-aware fog computing environments. The most evident deficiency of existing fog computing simulators is that, if at all, only the energy consumption of data centers and end devices is simulated. However, to obtain a holistic energy consumption model, the network must be considered too. Those simulators that do feature network modeling do so very thoroughly: they model every single network packet individually, sometimes even implementing entire communication protocols such as TCP. This degree of detail negatively impacts performance. Because LEAF demands scalability, it takes a different approach: network links are considered resources, similar to compute nodes, that have certain properties and constraints and a particular performance per watt. Data flows between tasks allocate bandwidth on these links. Modeling networks in this more abstract fashion is inspired by analytical approaches. It improves the performance of the discrete-event simulation (DES) because fewer events are created. Furthermore, analyzing the model's state at a particular time step becomes straightforward: the user can comprehend the current network topology, all data flows, and the allocated resources of running applications solely by observing the simulation state at any point in time. A disadvantage of this kind of modeling is the higher level of abstraction and, therefore, possibly less accurate results. For evaluation, this research uses the Vehicular OBU Capability (VOCC) dataset, which was acquired from the open-source platform Kaggle: https://www.kaggle.com/datasets/haris584/vehicular-obu-capability-dataset.
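The link-as-resource idea can be illustrated with a minimal sketch. The class and attribute names below are hypothetical, not the actual LEAF API: a link is a resource with a bandwidth constraint, and data flows between tasks allocate shares of it.

```python
# Hypothetical sketch of LEAF-style link modeling (not the real LEAF API):
# a network link is a constrained resource; flows allocate bandwidth on it.

class NetworkLink:
    def __init__(self, bandwidth_mbps: float):
        self.bandwidth_mbps = bandwidth_mbps  # capacity constraint
        self.flows = {}  # flow name -> allocated bandwidth in Mbit/s

    def allocate(self, flow: str, mbps: float) -> None:
        if self.used() + mbps > self.bandwidth_mbps:
            raise ValueError(f"link capacity exceeded by flow {flow!r}")
        self.flows[flow] = mbps

    def release(self, flow: str) -> None:
        self.flows.pop(flow, None)

    def used(self) -> float:
        return sum(self.flows.values())

wan = NetworkLink(bandwidth_mbps=100.0)
wan.allocate("sensor->processing", 1.0)   # 1 Mbit/s flow
wan.allocate("processing->sink", 0.2)     # 200 kbit/s flow
print(wan.used())  # 1.2
```

Because only allocations and releases generate events, the simulator never has to process individual packets, which is exactly the performance benefit described above.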
Fig. 4 depicts an exemplary infrastructure graph with a multitude of different compute nodes and network links. In the bottom left, several sensors form a Bluetooth Low Energy (BLE) mesh network, which connects to a fog computing layer. The fog nodes in this layer are interconnected via a passive optical network (PON) and can communicate with cloud nodes via WAN. The bottom right shows several smart meters that connect to public fog infrastructure via Ethernet. Furthermore, connected cars are linked to these fog nodes, as well as to each other for Vehicle-to-Vehicle (V2V) communication via Wi-Fi. The public fog nodes connect to the cloud via WAN with a 4G LTE access network. Lastly, in the top right, two devices are directly connected to the cloud via WAN using a Wi-Fi and a 5G access network. Note that the access network, for example, wireless routers or mobile base stations, is usually not explicitly modeled in LEAF. Instead, the user should determine the overall bandwidth and power usage of the entire WAN link. Two vehicles are on the map, connected to nearby traffic lights, and two traffic lights have fog nodes attached. Note that this is just an example; the number of vehicles and fog nodes varies in the experiments.
Although some components of LEAF are based on analytical approaches, at its core it remains a numerical model, to be exact a DES. The user can adjust the infrastructure's topology, the placement of applications, and the parameterization of resources over time. This flexibility has two decisive advantages over purely analytical models: First, fog computing environments can be represented in a much more realistic way, as LEAF allows for dynamically changing network topologies, mobility of end devices, and varying workloads. Second, LEAF enables online decision making during the simulation, enabling research on energy-aware task placement strategies, scheduling algorithms, and traffic routing policies.
The source task is running on a sensor and sends 1 Mbit/s to the processing task. The processing task requires 3000 MIPS and emits 200 kbit/s to the sink task, which, for example, stores the results in cloud storage. A task placement strategy can now decide on a placement of the processing task, having three options:
- Placing the task on the sensor: Since the sensor does not have the computational capacities to host a task that requires 3000 MIPS, this placement is not possible.
- Placing the task on the fog node: If the fog node still has enough remaining capacity, it can host the task. This placement effectively reduces the amount of data sent via WAN.
- Placing the task in the cloud: The cloud has unlimited processing capabilities in this example. Consequently, a placement is always possible.
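The three-way decision above boils down to a capacity check per candidate node. The following sketch is illustrative; apart from the 3000 MIPS requirement from the example, all names and numbers are assumptions, not the LEAF API:

```python
# Hypothetical placement sketch: pick the first node in the given order
# (sensor, fog, cloud) with enough free MIPS for the processing task.

def place_processing_task(nodes, mips_required=3000):
    for node in nodes:
        free = node["mips_capacity"] - node["mips_used"]
        if free >= mips_required:
            return node["name"]
    return None  # no feasible placement

infrastructure = [
    {"name": "sensor", "mips_capacity": 500, "mips_used": 100},
    {"name": "fog", "mips_capacity": 4000, "mips_used": 500},
    {"name": "cloud", "mips_capacity": float("inf"), "mips_used": 0},
]
print(place_processing_task(infrastructure))  # fog
```

The sensor is skipped because its free capacity is below 3000 MIPS; the cloud, with unlimited capacity, acts as a fallback that always succeeds.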
Hence, power models are not only suitable for periodic measurements and the subsequent analysis of power usage profiles but can also be called at runtime by other simulated hardware or software components. Examples of such simulated entities are batteries that update their state of charge or energy-aware task placement strategies. Thus, LEAF supports online decision making based on power usage and complies with the respective requirement.
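As an example of such a runtime consumer, a battery that updates its state of charge from a node's power draw could look as follows. This is a minimal sketch with hypothetical names, not part of LEAF:

```python
# Hypothetical battery model: each simulation time step, it queries the
# node's current power draw and subtracts the consumed energy.

class Battery:
    def __init__(self, capacity_wh: float):
        self.capacity_wh = capacity_wh
        self.charge_wh = capacity_wh  # start fully charged

    def update(self, power_w: float, dt_s: float) -> None:
        # Energy drawn during the time step, converted from Ws to Wh.
        self.charge_wh = max(0.0, self.charge_wh - power_w * dt_s / 3600.0)

battery = Battery(capacity_wh=10.0)
battery.update(power_w=2.0, dt_s=1800)  # 2 W for half an hour -> 1 Wh
print(battery.charge_wh)  # 9.0
```

A placement strategy could likewise poll power models at decision time, which is what makes the online decision making mentioned above possible.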
In theory, a power model can be any mathematical function and may depend on multiple input parameters besides the load. A simple linear power model can be defined as

P(t) = P_static + σ · C(t)    (1)

where P_static denotes the static power draw of the resource, C(t) its current load (e.g. in MIPS or bit/s), and σ the incremental power per unit of load.
Evidently, this model improves the accuracy of the simulation. Nonetheless, when simulating hundreds or thousands of nodes, more complex power models come at a cost: calculating the distance between that many interconnected devices at each time step of the simulation can become computationally expensive. If performance suffers too much, an alternative approach in this case could be to estimate an average distance and incorporate the energy dissipation directly into the slope σ of the linear model.
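The linear model of Eq. (1) takes only a few lines of Python. The parameter values below are illustrative assumptions, not figures from the text:

```python
# Sketch of the linear power model from Eq. (1); parameter values are
# illustrative, e.g. a Wi-Fi link with 100 mW static draw and an
# incremental cost of 300 nJ per transferred bit.

def linear_power(load: float, p_static: float, sigma: float) -> float:
    """P(t) = P_static + sigma * C(t), where C(t) is the current load
    (MIPS for compute nodes, bit/s for network links)."""
    return p_static + sigma * load

p = linear_power(load=1e6, p_static=0.1, sigma=300e-9)  # 1 Mbit/s
print(p)  # ~0.4 W
```

Folding an average-distance dissipation estimate into σ, as suggested above, keeps this function unchanged; only the constant passed as `sigma` grows.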
Much like for wired connections, the energy efficiency of wireless connections strongly depends on the hardware used. To obtain a precise estimate, one has to sum up the required energy per bit for receiving and transmitting data on both communicating devices. Further determining factors in wireless power consumption are the distance between devices and the characteristics of the environment. Over short distances and with a free line of sight, a transmitter needs far less energy to send packets than over larger distances or when obstacles absorb or reflect the radio waves. This is a general characteristic of wireless networks: sending packets is costlier than receiving them because the transmitter has to account for the energy dissipation.
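To illustrate the distance dependence described above, the following sketch uses the well-known first-order radio model, which the text does not prescribe; all constants are illustrative assumptions:

```python
# First-order radio model (assumption, not prescribed by the text):
# transmit energy grows with distance squared (free-space path loss),
# while receive energy is distance-independent.

def tx_energy_nj(bits: int, d_m: float,
                 e_elec_nj: float = 50.0,
                 eps_amp_pj_per_m2: float = 100.0) -> float:
    """Energy to transmit `bits` over distance d_m in nJ."""
    return bits * (e_elec_nj + eps_amp_pj_per_m2 * 1e-3 * d_m ** 2)

def rx_energy_nj(bits: int, e_elec_nj: float = 50.0) -> float:
    """Energy to receive `bits` in nJ."""
    return bits * e_elec_nj

# Sending is costlier than receiving, and doubling the distance
# quadruples the amplifier term:
assert tx_energy_nj(1, 100) > rx_energy_nj(1)
assert tx_energy_nj(1, 200) > tx_energy_nj(1, 100)
```

The total energy per bit for one hop is then the sum of the sender's transmit term and the receiver's receive term, exactly the per-device accounting described above.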
However, LEAF currently does not allow the modeling of asymmetrical bandwidth and power consumption characteristics. Since edges in the infrastructure graph are undirected, network connections are expected to provide the same bandwidth and consume the same power for uplink and downlink, although, in reality, this is rarely the case. To obtain valid results, the user must determine how the network is predominantly used in the simulated scenario and choose an adequate parameter. For example, if the mobile base stations in a simulation solely upload data to the cloud, the user should pick a σ that represents a realistic uplink consumption. A future version of LEAF could remedy this deficiency by basing the infrastructure model on a directed multigraph instead of an undirected graph. This way, uplink and downlink could be modeled separately. Moreover, it would become possible to have multiple different connections between two compute nodes. This would additionally enable energy-aware routing algorithms that, for example, could decide whether it is better to send data via Wi-Fi or Bluetooth.
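The proposed directed-multigraph remedy could be sketched with networkx; the node names and edge attributes below are hypothetical, not an existing LEAF feature:

```python
# Sketch of the proposed directed multigraph (hypothetical attributes):
# uplink and downlink get separate edges with their own bandwidth and
# sigma, and parallel edges model alternative technologies.

import networkx as nx

infra = nx.MultiDiGraph()
# Asymmetric 4G LTE connection between a car and a base station:
infra.add_edge("car", "base_station", key="lte_up",
               bandwidth_mbps=50, sigma_nj_per_bit=500)
infra.add_edge("base_station", "car", key="lte_down",
               bandwidth_mbps=300, sigma_nj_per_bit=100)
# A second, parallel uplink enables energy-aware routing decisions:
infra.add_edge("car", "base_station", key="wifi_up",
               bandwidth_mbps=20, sigma_nj_per_bit=300)

# Pick the most energy-efficient uplink:
uplinks = infra.get_edge_data("car", "base_station")
best = min(uplinks, key=lambda k: uplinks[k]["sigma_nj_per_bit"])
print(best)  # wifi_up
```

A routing policy could additionally weigh the remaining bandwidth of each parallel edge, trading energy per bit against throughput.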
Furthermore, for precise results, one would not only need to model the transmit power of base stations but also how it degrades over distance, since mobile devices usually connect to the base station with the strongest signal. To summarize, if no further information is available on the type and location of base stations, or if different kinds of base stations are deployed, it is difficult to provide reasonable estimates of their consumption. If possible, the parameterization should be based on measurement data collected in the actual environment. In entirely hypothetical scenarios, the consumption of base stations remains a significant source of uncertainty.
Table 1
Power consumption of networking equipment.

Type               | Model         | Max capacity | Max power | Energy per bit
Core router        | CRS-3         | 4480 Gbit/s  | 12300 W   | σcr = 2.7 nJ/bit
Metro router       | 7603          | 560 Gbit/s   | 4550 W    | σmr = 8.1 nJ/bit
BNG                | ASR 9010      | 320 Gbit/s   | 1890 W    | σbng = 5.9 nJ/bit
Ethernet switch    | Catalyst 6509 | 256 Gbit/s   | 1766 W    | σes = 6.9 nJ/bit
Data center switch | Nexus 9500    | 102.4 Tbit/s | 15954 W   | σdcs = 0.2 nJ/bit
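For illustration, the σ values from Table 1 can be summed along a WAN route to approximate the end-to-end energy per bit. The route composition below is an assumption for demonstration purposes, not a path given in the text:

```python
# Energy-per-bit values (nJ/bit) taken from Table 1.
SIGMA_NJ_PER_BIT = {
    "ethernet_switch": 6.9,   # Catalyst 6509
    "bng": 5.9,               # ASR 9010
    "metro_router": 8.1,      # 7603
    "core_router": 2.7,       # CRS-3
    "dc_switch": 0.2,         # Nexus 9500
}

# Hypothetical route from an access network into a data center: every
# traversed device adds its energy per bit to the total.
route = ["ethernet_switch", "bng", "metro_router", "core_router", "dc_switch"]
sigma_wan = sum(SIGMA_NJ_PER_BIT[device] for device in route)
print(round(sigma_wan, 1))  # 23.8 nJ/bit
```

Such an aggregate σ is exactly the kind of parameter a user would assign to a single abstract WAN link, as suggested in the discussion of Fig. 4.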
Incorporating these techniques into the fog environment simulation is of varying difficulty. Shutting down hosts within compute nodes can be modeled by tracking how long they have been idle and marking them as powered off after some time. For powered-off hosts, power models return P(t) = 0. It is significantly more complicated to switch off entire compute nodes or network links in the simulation because this changes the network's topology. Since the proposed model is relatively abstract and does not represent individual switches or routers, it is not suitable for modeling the dynamic shutdown of single network devices.
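The idle-tracking bookkeeping described above could look as follows; this is a minimal sketch with hypothetical names and an assumed shutdown threshold, not the LEAF API:

```python
# Hypothetical idle-shutdown model: a host is marked powered off after
# being idle longer than a threshold; its power model then returns 0.

class Host:
    def __init__(self, p_static: float, sigma: float,
                 idle_timeout_s: float = 300.0):
        self.p_static, self.sigma = p_static, sigma
        self.idle_timeout_s = idle_timeout_s
        self.idle_since = None  # simulation time the host became idle

    def power(self, load: float, now_s: float) -> float:
        if load > 0:
            self.idle_since = None
            return self.p_static + self.sigma * load  # Eq. (1)
        if self.idle_since is None:
            self.idle_since = now_s
        if now_s - self.idle_since >= self.idle_timeout_s:
            return 0.0  # host has been shut down
        return self.p_static  # idle but still powered on

host = Host(p_static=30.0, sigma=0.01)
print(host.power(load=0, now_s=0))    # 30.0 (idle, still on)
print(host.power(load=0, now_s=400))  # 0.0 (shut down after timeout)
```

Note that this only works for hosts inside compute nodes; removing whole nodes or links would change the topology and is, as stated above, outside the scope of the model.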
To make full use of energy-saving mechanisms, users should adapt their application placement strategies and routing policies accordingly. For example, workload can be consolidated on a minimal number of nodes to maximize the number of idle nodes that can be powered off. Besides the many established consolidation algorithms for cloud computing, there are already some approaches targeted at fog computing. A smart consolidation of correlated VMs can also greatly reduce network traffic and conserve energy.