An SDN-based energy-aware traffic management mechanism

Green computing is a central theme in many areas of computer science, including computer networks. Dynamic solutions that properly adjust network resources can prevent infrastructure over-provisioning and mitigate power consumption during low-demand periods. In this work, we propose DTM (Dynamic mechanism for Traffic Management), an energy-aware dynamic mechanism for traffic management built upon the SDN paradigm. DTM continuously monitors the use of network links to concentrate traffic and disconnect idle equipment without degrading the offered quality of service. Our simulations show that the mechanism can save up to 46% of energy, on average, in scenarios with homogeneous and heterogeneous link capacities. In scenarios with medium and high traffic demands, the mean energy savings are 36.72% and 17.86%, respectively. Compared to a well-known existing mechanism, our approach is up to 7% better for medium-demand scenarios and approximately 4% better for high-demand scenarios.

monitors network flows and uses as few routes as possible to transmit the existing end-to-end flows, which enables the disconnection of links and the turning off of idle network devices. Moreover, to avoid the recurrent connection/disconnection of devices (which may shorten equipment life), our mechanism uses a traffic aggregation policy. This policy only aggregates flows, allowing link disconnection, when the link utilization level does not indicate a future increase in demand.
We emphasize that existing works in the literature focus on topology optimization and on minimizing the number of links and switches. These solutions do not consider the dynamics of flows [5][6][7][8]. In fact, only a few works, such as [7] and [9], explore the seasonality of traffic.
We have evaluated DTM by emulating an SDN environment based on a typical university campus topology (Section 5). We consider different traffic levels as well as distinct overload scenarios. Our evaluation results evidence energy savings from 17 to 46%, depending on the emulated network demand. Compared with an existing solution, our results are, on average, 7% and 4% better for medium- and high-demand scenarios, respectively.
In sum, our contributions are the following:
- We propose an energy-aware dynamic mechanism for traffic management.
- The mechanism has been evaluated through the emulation of networks using Mininet, considering a realistic topology with homogeneous and heterogeneous link capacities.
- Our evaluation results show that our mechanism provided an average energy saving of approximately 46% in a scenario with low network demand, similar to a nighttime pattern.
- In scenarios with medium and high traffic demands, the mean energy savings are 36.72% and 17.86%, respectively.
- Compared to a well-known existing mechanism, our approach is up to 7% better for medium-demand scenarios and approximately 4% better for high-demand scenarios.
The remainder of this article is organized as follows. In Section 2, we discuss the related work. In Section 3, we introduce the common scenario we consider in this work. In Section 4 we describe the protocol we propose. Evaluation and results of our proposal are presented in Section 5. Finally, Section 6 concludes this work.

Related work
The topic of energy saving is worthy of investigation [1, 2]. Traditionally, energy-saving works on computer networks have focused on mathematical models of the topology and on minimizing attributes (e.g., number of links, switches, or CPU frequency) [5][6][7][8].
Fernández-Fernández et al. [10] formulate an optimization model that considers the routing requirements of both the data and control planes. The model minimizes the number of network assets and also considers the controller location, which, according to the authors, is a key issue for energy savings.
These problem formulations often involve heuristics because resource allocation optimization in this environment is an NP-Complete problem. In [6] and [8], the authors propose heuristics to reduce power consumption by disconnecting the network cards of aggregated links, which can be disabled independently. The authors do not turn off entire equipment or entire links because they believe this reduces network connectivity. Habibullah et al. [11] investigate the reduction of network infrastructure power consumption through the hibernation of switches, rearranging the network topology as needed. For this, they consider bandwidth, link load, and traffic matrices as input parameters for their decision algorithms.
Rather than rearranging the network topology or shutting down networking assets, the authors of [5] adjust the speed of underused links. Lower-priority traffic is redirected, while real-time traffic is kept on the minimum path to satisfy the desired QoS. Note that the reduction in power consumption is achieved only by reducing link speeds, resulting in marginal gains. Moreover, link speed reduction is not commonly supported on commercial devices.
Only a few works consider the dynamics of the network. Nowadays, network dynamics and infrastructure must be taken into account for many purposes, from energy saving to the coexistence of multiple high-performance distributed applications [12]. In this context, Heller et al. propose ElasticTree, a network power manager for data centers that monitors traffic and chooses the set of network elements that must remain active to meet performance and fault-tolerance goals. ElasticTree turns off unnecessary links and switches as much as possible. For this, a formal model, a greedy algorithm, a topology-aware heuristic, and forecasting methods were adopted. In the same way, the authors of [7] present a model and a greedy heuristic to find a minimal path with respect to the number of hops and the increase in energy consumption.
In this sense, Markiewicz et al. [7] consider that network elements can be turned off. The power consumption minimization problem is formulated as an IP model whose objective function jointly minimizes the power consumption of the links and the switches. The authors introduce four greedy heuristic algorithms to solve the problem: shortest path first, longest path first, smallest demand first, and highest demand first. The algorithms are evaluated over two sample topologies, a campus network and a mesh network, for low, medium, and high traffic. Experimental results show that longest path first achieves close-to-optimal energy savings of up to 35% and outperforms the other three algorithms in both topologies.
Recently, a number of works have used SDN to save energy in networks, especially in cloud-based environments. Son et al. [13] avoid overbooking of resources, and consequently save energy, through the use of SDN. In this case, the authors strategically allocate a more precise amount of resources to VMs and traffic (in SDN-based cloud data centers) according to the Quality of Service (QoS) demand. Despite the similar objective (energy saving), DTM focuses on network resources and topology, while Son et al. focus on the allocation of cloud data center resources. Differently, Xu et al. [14] schedule the flow order to save energy in an SDN-compliant network; in this way, they can optimize the selection of links, avoiding defragmentation. Finally, Jia et al. [15] also use an SDN approach to re-route flows, initially treating the problem as a min-cost problem, which is NP-hard. However, they do not dynamically define network link utilization levels, which may induce the network to shift between on-off link states in a ping-pong effect. Finally, we highlight that we provide a close-to-real emulation instead of simulation.
Table 1 presents a comparison of all related works. We highlight both Markiewicz et al. [7] and our proposal, since that work is directly compared with ours later on. Differently from DTM, most existing works do not consider energy-saving characteristics, or only present costly solutions that rely on specialized mechanisms. Moreover, these studies only reassess the proposed optimization/heuristic in the face of changes in flows; they do not consider a smooth interference with network assets or possible traffic aggregations. Thus, they only disconnect links and switches when these become naturally idle.

Considered scenario
In this article, we consider networks with an arbitrary topology as shown in Fig. 1. At the network edge, host nodes represent the elements that generate and consume data streams. Hosts are connected to access switches, which in turn are connected to routing switches. All switches are compatible with the OpenFlow protocol, which is the most widely used SDN platform today, both in development and research [3]. The switches are responsible for routing packets between links, according to the flow rules previously configured by one or more controllers, responsible for the centralized control logic of the network.
This network topology is modeled as a graph G = (N, L), where N = N_h ∪ N_c is the set of nodes and L the set of links. The subset N_h represents the hosts, and the subset N_c corresponds to the switches. Specifically, N_a denotes the access switches, where ∃ l_ci ∈ L connecting a switch c directly to a host i ∈ N_h; and N_e denotes the forwarding switches, where ∀ l_cj ∈ L connects a switch c directly to another switch j ∈ N_c (Fig. 1 shows the considered arbitrary network topology).

Table 1 Comparison of related works

Work                              Evaluation   Traffic              Topology                   Features
–                                 CPLEX        Real and Synthetic   Abilene and Hierarchical   X – –
Lin et al. [8]                    CPLEX        Real and Synthetic   Abilene and Hierarchical   X – –
Fernández-Fernández et al. [10]   Testbed      Real                 SNDLib                     X – X
Sasaki et al. [5]                 Simulation   Real and Synthetic   Random                     X – –
Son et al. [13]                   Simulation   Real and Synthetic   Mesh                       X – –
Xu et al. [14]                    Simulation   Synthetic            Fat-Tree and BCube         X X X
Oliveira et al. [12]              Simulation   Synthetic            Fat-Tree and BCube         X – –
Jia et al. [15]                   Simulation   Real                 Rocketfuel                 X X –
Our proposal                      Simulation   Real and Synthetic   Mesh                       X X X

DTM reduces power consumption by selectively shutting down (or hibernating) ports on switches whose links are idle. If all ports of a switch are turned off, the switch itself can also be turned off. To this end, the traffic management algorithms must concentrate flows on a minimum number of links without degrading the offered quality of service. This strategy is called resource consolidation [16].
For each switch c ∈ N_c, denote E_t(c) as the energy consumption of c at time t, following Eq. 1 [5]:

E_t(c) = Eb_t(c) + Ep_g(c) × Np_gt(c) + Ep_f(c) × Np_ft(c)    (1)

where Eb_t(c) is the base consumption of c at time t, necessary to keep the switch working (processor, cooler, etc.); Ep_g(c) and Ep_f(c) are the energy consumption of each 1-Gbps and 100-Mbps port of c, respectively; and Np_gt(c) and Np_ft(c) represent the number of 1-Gbps and 100-Mbps ports active in c at time t, respectively. Note that the energy consumption of each port changes according to its speed [7]. The total energy consumption of the network, E_t(G), at time t can then be estimated following Eq. 2:

E_t(G) = Σ_{c ∈ N_c} E_t(c)    (2)
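As an illustration, the energy model of Eqs. 1 and 2 can be computed as sketched below, using the parameter values later adopted in Section 5 (a base consumption of 146 W, 0.87 W per active 1-Gbps port, and 0.18 W per active 100-Mbps port); the function names are ours.

```python
# Sketch of the switch energy model (Eq. 1) and the network total (Eq. 2).
# Parameter values follow Section 5; function names are illustrative.

EB = 146.0    # base consumption Eb_t(c), in watts
EP_G = 0.87   # consumption per active 1-Gbps port Ep_g(c), in watts
EP_F = 0.18   # consumption per active 100-Mbps port Ep_f(c), in watts

def switch_energy(np_g, np_f, base=EB):
    """Eq. 1: E_t(c) = Eb_t(c) + Ep_g(c)*Np_gt(c) + Ep_f(c)*Np_ft(c)."""
    return base + EP_G * np_g + EP_F * np_f

def network_energy(active_switches):
    """Eq. 2: E_t(G) is the sum of E_t(c) over all powered-on switches."""
    return sum(switch_energy(np_g, np_f) for np_g, np_f in active_switches)

# Example: one switch with four active 1-Gbps ports plus one switch with
# two active 100-Mbps ports.
print(round(network_energy([(4, 0), (0, 2)]), 2))  # 295.84
```

Powered-off switches simply do not appear in the sum, which is what makes port and switch shutdown pay off in this model.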

DTM: a dynamic traffic management
In this article, we propose an SDN-based energy-aware traffic management mechanism. The DTM engine presents three main components: (i) active network monitoring, which maintains an up-to-date model of network resource usage; (ii) the new flow installation algorithm, which reactively identifies new traffic and allocates appropriate routes; and (iii) the active flow redirection algorithm, which aggregates flows in the least amount of links to shut down idle resources, while avoiding overloading the remaining links.
In what follows, we detail each one of these components.

Active network monitoring
DTM uses the OpenFlow controller to access network information. In this sense, the network controller maintains a detailed and up-to-date network view, including information on resource use and energy consumption of active elements.
Since DTM can shut down and reconnect links and switches over time, we denote as G_t ⊆ G the set of nodes and links that are on and available for use at an instant t. To keep the model up to date, the controller periodically sends OpenFlow messages to the switches requesting information and statistics. The interval t between requests can be adjusted to reach a good compromise between model freshness and network overhead. With the answers, it is possible to update G_t, the estimated power consumption E_t(G), the instantaneous throughput of the data streams, and the utilization rate of the network links. This information is later used by the new flow installation algorithm and the active flow redirection algorithm.

Let U_t(l) be the utilization rate of a link l ∈ L, calculated as the ratio between the bandwidth used at instant t and the transmission capacity of l. In this work, we use Table 2 as a reference to classify the link utilization, although any number of link states could be used. This categorization is used by the algorithms to prevent overloaded links from receiving new flows, avoiding packet loss. Besides, low-load links prove to be favorable candidates for shutting down switch ports. Thus, the mechanism aims to reallocate all flows of these links to alternative routes, making them idle and saving energy. We leave the investigation of the impact of varying the number of link states as future work.

Table 2 S_t(l) state as a function of U_t(l)

State S_t(l)   Link state   Utilization rate U_t(l)
s_0            Low load     U_t(l) < T_0→1
s_1            Normal       T_0→1 ≤ U_t(l) < T_1→2
s_2            High load    T_1→2 ≤ U_t(l) < T_2→3
s_3            Overload     U_t(l) ≥ T_2→3

Once the controller collects the information on the network elements (links) and updates the network model, it builds the set P_ij with the k minimum paths connecting host i to j, ∀ i, j ∈ N_h. The controller also calculates, ∀ c ∈ N_c, the indicator Q(c), representing the total number of paths in the P sets passing through switch c. The minimum paths and the Q(c) indicator make sense from an energy-saving perspective: the shorter and more concentrated the paths, the fewer network elements are used and the more switches can be turned off.
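Two of the monitoring-side computations described above can be sketched as follows: mapping U_t(l) to a state S_t(l), and deriving Q(c) from the precomputed P_ij sets. The threshold values are the ones later adopted in Section 5; the function names are ours.

```python
# Sketch of two monitoring helpers: classifying a link state S_t(l) from
# the utilization rate U_t(l) (Table 2), and computing the Q(c) indicator
# from the precomputed k-minimum-path sets P_ij. Threshold values follow
# Section 5; names are illustrative.
from collections import Counter

THRESHOLDS = (0.20, 0.60, 0.80)  # T_0->1, T_1->2, T_2->3

def link_state(utilization):
    """Map U_t(l) in [0, 1] to one of s0 (low load) .. s3 (overload)."""
    for state, threshold in zip(("s0", "s1", "s2"), THRESHOLDS):
        if utilization < threshold:
            return state
    return "s3"

def q_indicator(path_sets):
    """Q(c): total number of paths in all P_ij sets traversing switch c."""
    q = Counter()
    for paths in path_sets.values():
        for path in paths:
            q.update(path)
    return q

print(link_state(0.75))  # s2: high load
paths = {("h1", "h3"): [["s1", "s2", "s4"], ["s1", "s3", "s4"]]}
print(q_indicator(paths)["s1"])  # 2: both candidate paths traverse s1
```

A high Q(c) marks a switch shared by many candidate paths, i.e., a natural point for concentrating traffic.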

Flow installation
The installation of new flows by the controller is reactive. In other words, when a host "a" starts a new flow to a host "b," the network controller detects this new flow and routes it accordingly.
Let f_ab be a flow between hosts a, b ∈ N_h. When the first packet of f_ab reaches the access switch, the controller is notified and starts searching for the best path p_best_ab ∈ P_ab with all links in state S_t(l) ≤ s_max, as per Algorithm 1. The initial search is restricted to paths that do not use overloaded links (s_max = s_2). The algorithm prioritizes an optimal path p_i that can be accommodated in G_t and does not result in increased power consumption. In the case of several optimal paths, the shortest one |p_i| is prioritized, having as a tiebreaker the indicator Q(p_i) = Σ_{c ∈ p_i} Q(c). If there is no such p_i ∈ G_t, then the controller estimates the energy impact E_i(p), ∀ p ∈ P_ab, and chooses the path p_e that results in the smallest energy increment. In this case, G_t must be updated, enabling links or switches to accommodate the new flow.
If no path p_m_ab free of overloaded links exists, the controller relaxes the maximum link state constraint (s_max = s_3) and performs a new search for p_m_ab. Once the path is set for f_ab, flow rules are installed on all switches c ∈ p_m_ab.
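Algorithm 1 itself is not reproduced here, but the path-selection logic described above can be sketched as below. The data structures are ours, and the tiebreak toward a higher aggregate Q (i.e., toward switches that concentrate many candidate paths) is our reading of the concentration rationale in the text.

```python
# Sketch of the best-path search for a new flow f_ab: restrict the search
# to paths whose links are all in state <= s_max (initially s2, i.e., no
# overloaded link); prefer the shortest feasible path, breaking ties with
# Q(p) = sum of Q(c) over switches c in p; if nothing qualifies, relax
# the constraint to s_max = s3. Names are illustrative.

STATE_ORDER = {"s0": 0, "s1": 1, "s2": 2, "s3": 3}

def best_path(candidates, link_state, q, s_max="s2"):
    """candidates: paths as lists of switch ids; link_state maps a (u, v)
    link to its state; q maps a switch id to its Q(c) indicator."""
    feasible = [
        p for p in candidates
        if all(STATE_ORDER[link_state[(u, v)]] <= STATE_ORDER[s_max]
               for u, v in zip(p, p[1:]))
    ]
    if not feasible:
        if s_max == "s2":                     # relax the constraint once
            return best_path(candidates, link_state, q, s_max="s3")
        return None
    # Shortest path first; higher aggregate Q(p) as the tiebreaker.
    return min(feasible,
               key=lambda p: (len(p), -sum(q.get(c, 0) for c in p)))

states = {("s1", "s2"): "s1", ("s2", "s4"): "s3",
          ("s1", "s3"): "s0", ("s3", "s4"): "s0"}
q = {"s1": 2, "s2": 1, "s3": 1, "s4": 2}
print(best_path([["s1", "s2", "s4"], ["s1", "s3", "s4"]], states, q))
```

With the link S2-S4 overloaded, the sketch picks the alternative route S1-S3-S4, mirroring the behavior described in the text.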

Active flows redirection
The installation of new flows considers the instantaneous link utilization rate. However, the traffic behavior over time can lead to undesirable use of some links (i.e., link overloading). In this case, the active flow redirection algorithm redistributes traffic across the network. To do this, it evaluates the state S_t(l) that each link can assume, as shown in Fig. 2 (which reflects the states previously defined in Table 2).
In this work, we assume that flows allocated to a link in the overload state S_t(l) = s_3 may be penalized with longer delays and packet losses. In this case, the algorithm redirects, from this link to others, the smallest number of flows required for the utilization rate to decrease to some state S_t(l) < s_3. This strategy prevents links from remaining overloaded, reducing possible damage to the network QoS indicators.
On the other hand, we also consider that links in the low load state S_t(l) = s_0 are underutilized. In this case, the algorithm tries to redirect all of their flows to other links in the network and then disable the link, which in turn saves energy. When a link is completely idle, it is possible to shut down its ports on the adjacent switches; if all ports of a switch are off, the switch itself can also be turned off. If no path is able to receive any of the redirected flows, the process is aborted, with the understanding that it is not possible to vacate the link without damaging the QoS indicators of the active traffic.
Flow redirection is triggered whenever the active network monitoring classifies a link as low load or overloaded. The search for alternative paths is done by Algorithm 1, with the restriction s_max = s_1. This restriction requires new paths to use only links in the normal or low load states. It prevents links in the s_2 (high load) state from receiving more flows and being subsequently classified as overloaded, which would result in new redirections and a "ping-pong" effect between the s_2 and s_3 states.
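The overload-relief step above must pick the smallest number of flows to move. The paper does not specify the selection rule; moving the largest flows first is one simple way to minimize how many must be redirected, as sketched below under that assumption (names are ours).

```python
# Sketch of the overload-relief step: choose the smallest number of flows
# to move off an overloaded link so that its utilization drops below the
# overload threshold T_2->3 (80% in Section 5). The selection rule is an
# assumption: moving the largest flows first minimizes how many flows
# must be redirected. Names are illustrative.

def flows_to_redirect(flow_rates_mbps, capacity_mbps, t_overload=0.80):
    """Return the (largest-first) flows to move until U_t(l) < T_2->3."""
    load = sum(flow_rates_mbps)
    moved = []
    for rate in sorted(flow_rates_mbps, reverse=True):
        if load / capacity_mbps < t_overload:
            break
        moved.append(rate)
        load -= rate
    return moved

# A 1000-Mbps link carrying 900 Mbps (90% utilization): moving the single
# 400-Mbps flow already brings the link down to 50%.
print(flows_to_redirect([400, 300, 200], capacity_mbps=1000))  # [400]
```

Each selected flow would then go through the Algorithm 1 search with s_max = s_1, so that the relieved traffic cannot push another link into the high-load or overload states.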

Evaluation and results
In this section, we evaluate the DTM and the energy it saves. More precisely, we evaluate DTM in three distinct scenarios. First, we evaluate it in a static scenario (Section 5.1), where we illustrate the very basic functionality of DTM. Then, we evaluate DTM in a dynamic, yet synthetic, scenario (Section 5.2), where we show the performance of DTM under a controlled experiment. Finally, we evaluate DTM under a realistic scenario (Section 5.3), where we mimic a typical campus network and evaluate the performance of our proposal.

Scenario A: static evaluation of DTM
First, we statically evaluate DTM to check the accuracy of its internal mechanisms. This illustrative example guides readers through the basic DTM mechanisms. In a statically built scenario, we are able to test (i) the DTM network topology discovery process; (ii) the active network monitoring process; and (iii) the flow installation process. Figure 3 presents the static scenario we use to check DTM's basic functionality. The topology we evaluate presents three hosts (h1, h2, h3) and four switches (S1, S2, S3, S4).
First, the DTM controller performs the network topology discovery process, detecting all switches and links. Initially, only the access switches are kept turned on (S1, S3, and S4), and all remaining switches are turned off to save energy (S2). Note that, in Fig. 3a, S2 is red-colored to represent its off state.
After a few seconds, as shown in Fig. 3b, we intentionally start flows between hosts H1, H2, and H3. The network controller then detects these flows and starts the flow installation process. In this case, the controller must activate switch S2; otherwise, there would be no path between the communicating endpoints. The active nodes and links are colored green in Fig. 3b.
Flows cease after 10 s. In this case, hosts and links present no activity, as we depict in Fig. 3c. In this experiment, we set up a 2-min inactivity threshold that triggers the DTM network controller. The controller, in turn, turns off all unnecessary switches, saving as much energy as possible. By the end, Fig. 3d presents the same network state as the initial setup.

Scenario B: dynamic evaluation of DTM
Second, we dynamically evaluate DTM by changing the average network load in a controlled network environment. In this scenario, we evaluate the energy savings as well as the impact on end-to-end latency.

Figure 4 illustrates the network we evaluate. The switches are represented by circles and the hosts by squares. All links are Gigabit Ethernet. Initially, as shown in Fig. 4a, the end hosts are idle and, as a consequence, the switches are in the low load state we previously defined. The controller initiates the automatic network topology detection and then builds the minimum spanning tree that connects all hosts. In this case, note, in Fig. 4a, that switch S3 is off, which saves network energy.
During our experiments, host H3 acts as a server. Host H1 connects to H3, demanding on average 500 Mbps. The controller calculates the minimum spanning tree and, as shown in Fig. 4b, the suggested shortest path between H1 and H3 is S1-S2-S4. In this case, switches S1, S2, and S4 are low loaded and accept the flow. The controller then installs the route in these switches' tables.
In a third step, shown in Fig. 4c, the host H2 initiates communication with the server H3. H2 demands, in this case, 300 Mbps. In short, the controller identifies the path S2-S4 in the minimum spanning tree. These switches (i.e., S2 and S4) are in the normal load state and then, they accept the flow; the route is installed and hosts communicate with each other.
Any network link may become overloaded. For example, as shown in Fig. 4d, the link between S2 and S4 becomes overloaded, which forces the controller to redirect traffic according to the policies we previously defined.
In short, the controller performs the active flow redirection policy and notes that the flow between H1 and H3 has an alternative route through S1-S3-S4. Switch S3 can handle the overloaded traffic from the other switches. The controller then binds and installs this route, as shown in Fig. 4e. At this moment, there is no overloaded switch; however, all switches are turned on, which imposes the highest energy consumption on the network.
Suppose the flow between H1 and H3 ends. In this case, switch S3 (and its links) will turn to the low load state. The DTM controller may redirect all remaining flows from the S3 table to other network switches (which are in the normal or low load states). As a consequence, if the controller successfully transfers all S3 flows to other switches, it turns S3 off, as shown in Fig. 4f.

Figure 5 shows the percentage of energy saving, and the network load, during the experiment we previously depicted. As expected, the energy economy is closely related to the network load. In fact, the higher the network load, the higher the number of switches we expect to be turned on. Note that the flow redirection mechanism may impose a latency on energy savings. For example, during the period from 17 to 21 s in Fig. 5, the DTM controller turned on switches due to overloaded links, which impacted the energy savings.

Flow installation and flow redirections may harm network traffic. For example, during a flow redirection, end-to-end communication may experience higher latencies. Figure 6 presents the end-to-end latency between network endpoints during our experiments. This figure clearly corroborates our previous comment, especially during flow installations/redirections, as noticed around 15 to 28 s of the experiment. More precisely, during a flow installation, a switch must consult the controller for a forwarding rule (reactive mode). The controller answers the switch and, only after that, the switch installs the new rule and the flow can proceed. The second peak corresponds to a flow redirection due to an overloaded link; as the controller had proactively installed the rules, the overhead is considerably lower. The third peak is similar to the second: the controller observes a low load on the routing switch links, triggering an event that leads to a new flow redirection, intending to turn off the switch.

Scenario C: realistic evaluation of DTM
DTM can be used in arbitrary topologies as long as the nodes are SDN-compatible. In this paper, we consider a realistic scenario, represented in Fig. 7, which shows a campus network topology. This kind of topology has already been largely explored in similar works, such as [7, 12, 14, 15]. Over the past decade, data centers and large network topologies, such as campus networks, have remained roughly stable. We have not noticed any disruptive technology and, as a consequence, this kind of topology remains up to date, representing a wide variety of networks.
In this work, we consider a network topology with 95 links and 45 nodes. More specifically, the nodes include routing switches (nos. 1 through 4), access switches (nos. 5 through 18), and host nodes (nos. 19 to 45), the latter forming two distinct groups (the upper and the lower portions of the figure). Client/server application pairs for traffic generation and consumption are always positioned with one member in each host group.
We evaluate DTM in two distinct settings. First, we consider the network with homogeneous links. In this case, all links have the same negotiated speed of 250 Mbps, which is close to previous works [7]. This allows us to compare DTM to an existing approach to save energy in networks. Then, we evaluate DTM considering heterogeneous network links. In this case, the links interconnecting the routing switches have a speed of 1 Gbps, while the other links maintain the speed of 250 Mbps. Again, this setting closely follows previous existing work.

Evaluation methodology
In this article, we consider the same power consumption parameters (in watts) as [5] and [17]. More specifically, the base switch consumption is Eb_t(c) = 146, the 1-Gbps port consumption is Ep_g(c) = 0.87, and the 100-Mbps port consumption is Ep_f(c) = 0.18. The new flow arrival rate follows a Poisson process with an expected inter-arrival interval λ = 3 s. New flows have a fixed duration of 15 s and can assume one of the load levels described in Table 3. These values are equivalent to those used in [7] and take into account nighttime traffic behavior, average daytime traffic, and annual peak traffic. According to the authors, the annual peak traffic is five times higher than the nighttime traffic, and the average daytime traffic is three times higher than the nighttime traffic. The traffic was generated by the D-ITG (Distributed Internet Traffic Generator) [18] tool.
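For illustration, the arrival pattern of this workload can be reproduced as sketched below. The actual traffic in the experiments was generated with D-ITG; this stand-in only mimics the Poisson arrivals (3-s mean inter-arrival time) and the fixed 15-s flow durations.

```python
# Illustrative stand-in for the evaluation workload: flow arrivals follow
# a Poisson process (exponential inter-arrival times with mean 3 s) and
# each flow lasts a fixed 15 s. The experiments themselves use D-ITG;
# this sketch only reproduces the arrival pattern. Names are ours.
import random

def generate_flows(duration_s=120, mean_interarrival_s=3.0,
                   flow_duration_s=15.0, seed=42):
    random.seed(seed)
    flows, t = [], 0.0
    while True:
        t += random.expovariate(1.0 / mean_interarrival_s)
        if t >= duration_s:
            break
        flows.append((t, t + flow_duration_s))  # (start, end) in seconds
    return flows

flows = generate_flows()
print(len(flows))  # around 40 flows are expected in a 120-s run
```

Each generated flow would then be assigned one of the Table 3 load levels before being replayed against the emulated topology.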
The thresholds T_0→1, T_1→2, and T_2→3 used to classify the link states were set at 20%, 60%, and 80%, respectively. These values define a good compromise between the states we define in this work; we leave the further investigation of these parameters to future work. Moreover, we consider t = 1 s. In a real environment, this value must be fine-tuned so as not to overload the network with control messages. We build the P_ij sets with the k = 16 minimum paths connecting host i to j, ∀ i, j ∈ N_h. All minimum paths have five hops between source and destination in this topology.
Our evaluations present the percentage of power savings by comparing DTM to a network where all devices are turned on (i.e., where there is no energy-saving mechanism). We repeat each experiment 50 times, and each one runs for 120 s. Unless stated otherwise, we present mean values and confidence intervals for a 95% confidence level.

Evaluation
We evaluated DTM in the homogeneous and heterogeneous scenarios, considering the three traffic demand levels. Figure 8a and b show the energy-saving cumulative distribution functions for each evaluated configuration. Intuitively, the higher the demand, the lower the savings achieved, as more links are in use and there are fewer opportunities to shut down idle network elements. Under low traffic demand, both scenarios show similar behavior. In these cases, only the minimum links required to maintain network connectivity are active. As a result, DTM achieves significant average energy savings of over 46% (Fig. 8 presents the energy savings achieved in the evaluated scenarios and the comparison of DTM with the state of the art). Under medium demand, the energy savings differ. In the heterogeneous scenario, the average savings were ≈ 40%. In the homogeneous scenario, this value is near 36%, ≈ 4% lower than in the heterogeneous scenario. As the heterogeneous scenario has some higher-capacity links, it can absorb this medium demand without enabling new links. In the homogeneous scenario, however, it is necessary to enable more links and eventually more switches, which increases energy consumption. This is even more evident under high traffic demand, where more links and switches are required to support the full traffic, minimizing the opportunity to save energy. Energy savings fall considerably in both cases, but most evidently in the homogeneous scenario. In this situation, the homogeneous scenario consumed ≈ 7% more energy than the heterogeneous one, and the latter achieved no more than 25% of the total average energy savings.

Figure 9 presents the network map and illustrates which switches are turned on/off. In this figure, we present in green the switches typically turned on during our simulations. Figure 9(a) to (c) present situations of low, medium, and high network demand, respectively, for the homogeneous scenario. Figure 9(d) to (f) refer to the heterogeneous scenario.
As discussed earlier, in a high-demand situation, more switches are turned on in both scenarios. On the other hand, in a low-demand scenario, the flow through the network links is low, which eliminates the redirection of active flows and results in the highest energy savings. The heterogeneous scenario yields greater energy savings compared to the homogeneous one. This is quite evident when comparing Fig. 9(b) and (e), or Fig. 9(c) and (f). Again, heterogeneous scenarios have some links with larger capacity that are capable of supporting large volumes of flows without having to enable alternative paths for possible redirections.

Table 4 shows the throughput observed in the network during the simulations. The homogeneous and heterogeneous scenarios present equivalent behavior for low and medium demands. For high demand, the throughput observed in the heterogeneous scenario is higher. This is because the core links of the homogeneous scenario reach their load limit and become a bottleneck for the network. In contrast, the core links of the heterogeneous scenario support the generated flows, providing energy savings and higher throughput between switches.
Finally, we compare DTM with a state-of-the-art solution, specifically [7]. Figure 8c shows that both algorithms achieve the same energy-saving gain in low-traffic-demand scenarios. In fact, the minimum number of links required for network connectivity is capable of serving all flows under low demand. Thus, in these scenarios, both solutions are at their maximum level of economy. Under medium traffic demand, there is a clear difference between the algorithms for both homogeneous and heterogeneous links. DTM was, on average, 7% more economical than the proposal of [7] in the homogeneous scenario, and 6.5% more economical in the heterogeneous scenario. Higher traffic demand makes it more difficult for any mechanism to save energy, regardless of the energy-saving strategy. In this case, most of the available links must be turned on to accommodate all existing flows.
When the traffic demand is high, DTM was ≈ 4% and 5.79% better than the method of [7], considering the homogeneous and heterogeneous scenarios, respectively. In short, even when energy-saving opportunities are rare, DTM has the advantage. The gains are significant, especially considering the scale at which computer networks are used today.

Conclusions and future work
In this paper, we presented DTM, an energy-aware dynamic traffic management mechanism. DTM employs SDN to improve traffic routing between switches. The mechanism has been evaluated through the emulation of networks using Mininet, considering a realistic topology. Our evaluation results show that DTM provided an average energy saving of 46% in a scenario with low network demand, similar to a nighttime pattern. In this low-demand scenario, as expected, we achieved the highest energy savings. In scenarios with medium and high traffic demands, the mean energy savings are 36.72% and 17.86%, respectively.
Compared to a well-known existing mechanism, our approach is up to 7% better for medium-demand scenarios and approximately 4% better for high-demand scenarios. Future work includes the investigation of DTM's scalability in other topologies, varying the topology density, for example.