The dynamic bandwidth-aware system is designed to ease congestion on the LMS by expanding the campus network bandwidth and improving the end-user experience on the LMS. The system performs bandwidth reservation according to the data-flow context of the LMS. One event collects data about the packet-flow characteristics of the LMS, a second event defines rules to interpret those flow characteristics, and a third event performs customized actions on the packets.
The system first observes and logs the data-flow characteristics of the LMS, including port negotiations, port status, and bandwidth-requirement calculations for the packet flow. If a link is down, this is captured as the port status for that negotiator. The system then triggers the traffic-monitoring event, which infers the flow context and classifies the traffic situation as heavy, moderate, or low. The rule for the heavy-traffic bandwidth requirement was set at 80% of the requested link bandwidth. Based on the inferred flow context, the system provides bandwidth assurance that matches the traffic volume: the controller either adds links to the LAG and transmits the packets over the LACP links, or transmits traffic on the available physical links using round-robin load balancing.
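The heavy/moderate/low classification above can be sketched as a small Python function. This is an illustrative sketch, not the authors' code: the 80% heavy-traffic boundary comes from the text, while the moderate boundary (50% here) is an assumed value chosen only for the example.

```python
# Sketch of the traffic-context classifier. The 80% heavy-traffic rule is
# from the study; the 50% moderate boundary is an illustrative assumption.

def classify_traffic(required_bw_mbps: float, link_bw_mbps: float,
                     moderate_ratio: float = 0.5,
                     heavy_ratio: float = 0.8) -> str:
    """Classify a flow as 'heavy', 'moderate', or 'low' relative to link capacity."""
    ratio = required_bw_mbps / link_bw_mbps
    if ratio >= heavy_ratio:
        return "heavy"
    if ratio >= moderate_ratio:
        return "moderate"
    return "low"
```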
The self-hosted LMS server is expected to exhibit peak-time behaviour throughout the day: thousands of concurrent remote negotiations for network resources must be processed. The proposed algorithm therefore provides a traffic-shaping mechanism to ensure that all users have a generally favourable connection to their learning resources. For example, on a typical weekday, thousands of users compete for network access to real-time LMS services. The Ryu controller keeps a log of all network-resource negotiations and estimates the bandwidth requirement for each port-connection request it receives.
Based on the estimated bandwidth requirement, the controller sanctions traffic flow over either the LAG links or the physical links. It does so by calculating the bandwidth requirement of the received traffic and comparing the result with the 80% threshold. The required bandwidth was calculated using the formula:
B = T × N

where:
B = bandwidth needed
T = network traffic load at a moment in time
N = number of concurrent users
If the bandwidth needed is greater than or equal to the 80% threshold for a negotiated link bandwidth, the controller adds links to the LAG according to the bandwidth requirements, for efficient packet transmission over the transmission links. Conversely, if the traffic bandwidth falls below the 80% threshold for the requested link bandwidth, the controller removes link(s) from the LAG to match the traffic bandwidth requirement.
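The B = T × N calculation and the 80% decision rule can be combined into one sizing function. The sketch below is an assumption-laden illustration, not the authors' implementation: the per-link capacity argument and the choice to aggregate at least two links once the threshold is reached are assumed for the example, while the 80% threshold and the four-link cap come from the text.

```python
# Illustrative sketch of the 80% link-adjustment rule (not the authors' code).
import math

MAX_LINKS = 4          # maximum aggregated links, from the proposed algorithm
THRESHOLD = 0.8        # 80% weighted-bandwidth threshold

def links_required(traffic_load_mbps: float, n_users: int,
                   link_bw_mbps: float) -> int:
    """Return how many LAG members are needed for B = T * N traffic."""
    needed_bw = traffic_load_mbps * n_users            # B = T * N
    if needed_bw < THRESHOLD * link_bw_mbps:
        return 1                                       # single physical link suffices
    # Aggregate enough links to cover the demand (at least two), capped at four.
    return min(MAX_LINKS, max(2, math.ceil(needed_bw / link_bw_mbps)))
```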
Figure 1 presents a block diagram of how the algorithm logically processes packets.
Proposed Algorithm
Initialize:
    maximum_aggregated_links = 4
    bandwidth_threshold = 0.8    // 80% threshold for link aggregation
Function get_current_traffic(links):
    // Return the current traffic load on the specified link(s) and the number of concurrent users
Calculate the bandwidth required for the current traffic:
    bandwidth_required = traffic_load * number_of_users
Function add_links_to_LAG(links):
    total_weighted_link_bandwidth = 0.0
    If bandwidth_required >= bandwidth_threshold:
        aggregated_count = 0
        For each link in links:
            If link is aggregated:
                aggregated_count += 1
            If aggregated_count == maximum_aggregated_links:
                Exit loop
        Function transmit_data(links, LACP):
            // Transmit data over the LACP links
        Function synchronize_links(links):
            // Synchronize link status and data transmission
    Else:
        For each link in links:
            link_aggregated = False
        Function transmit_data(link, bandwidth):
            // Transmit data over the link with the specified bandwidth
        Function synchronize_links(links):
            // Synchronize link status and data transmission
Sleep for 10 seconds and restart the traffic-monitoring process.
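The pseudocode above can be rendered as a minimal, runnable Python sketch. All names here (the `Link` class, `adjust_lag`) are illustrative assumptions: the real system drives a Ryu controller and Open vSwitch rather than the in-memory objects used below.

```python
# Minimal runnable sketch of the proposed algorithm's link-adjustment step.
# Link and adjust_lag are illustrative stand-ins for the SDN machinery.
from dataclasses import dataclass, field
from typing import List

MAX_AGGREGATED_LINKS = 4
BANDWIDTH_THRESHOLD = 0.8      # 80% of the negotiated link bandwidth

@dataclass
class Link:
    name: str
    bandwidth_mbps: float
    aggregated: bool = False

def adjust_lag(links: List[Link], traffic_load_mbps: float, n_users: int) -> List[str]:
    """Apply the 80% rule: aggregate links under heavy demand, else release them."""
    bandwidth_required = traffic_load_mbps * n_users       # B = T * N
    negotiated_bw = links[0].bandwidth_mbps                # requested link's bandwidth
    if bandwidth_required >= BANDWIDTH_THRESHOLD * negotiated_bw:
        count = 0
        for link in links:                                 # add links up to the cap
            if count == MAX_AGGREGATED_LINKS:
                break
            link.aggregated = True
            count += 1
    else:
        for link in links:                                 # release all LAG members
            link.aggregated = False
    return [l.name for l in links if l.aggregated]
```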
3.2 Context Awareness
The algorithm uses the Python 3 decorator @set_ev_cls from ryu.controller.handler to register a handler for the port-modification event class. This handler keeps port-state information and calculates the throughput of requested links. In this study, the LACP system was configured to adjust links whenever the 80% weighted-traffic-bandwidth threshold was reached for a requested link at any moment in time. Every ten seconds, the system analyses the throughput in Mbps and then performs a bandwidth calculation on each port of the simulated switch. The results of these calculations help the Ryu controller make adaptations to the network and its links. The main network adaptation provided in this work is the decision of whether to add more links to the initialized one, which is subject to the calculated bandwidth requirement of the traffic flow.
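The per-port throughput arithmetic behind this monitoring step can be shown in isolation. In the real system the byte counters would come from OpenFlow port statistics inside a handler registered with @set_ev_cls; the plain-function form below is an assumption made so the calculation stands on its own.

```python
# Hedged sketch of the per-port throughput calculation performed every 10 s.
# In the deployed system these counters come from OpenFlow port-stats replies.

POLL_INTERVAL_S = 10  # monitoring interval used by the system

def throughput_mbps(prev_bytes: int, curr_bytes: int,
                    interval_s: float = POLL_INTERVAL_S) -> float:
    """Convert two successive byte counters into throughput in Mbps."""
    bits = (curr_bytes - prev_bytes) * 8
    return bits / (interval_s * 1_000_000)

def port_exceeds_threshold(prev_bytes: int, curr_bytes: int,
                           link_bw_mbps: float, threshold: float = 0.8) -> bool:
    """True when a port's measured throughput reaches 80% of its link bandwidth."""
    return throughput_mbps(prev_bytes, curr_bytes) >= threshold * link_bw_mbps
```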
3.3 Dynamic Bandwidth Assurance
The system is configured to function in both active and passive LACP modes. LACP packets are transmitted in active mode whenever the learned ports recorded in the table-miss entries of the switching hub are matched and packets are forwarded to the learned port. This is done by binding the LAG group dpid to the learned ports in the switching hub, keeping the LAG group actively open for monitoring at 10-second intervals. The Linux bonding configuration (mode 4, i.e., 802.3ad/LACP) on the Ubuntu system sets the logical EtherChannel (bond0) as the master channel; the physical channels h1-eth0 and h1-eth1 (for two-channel aggregation) or h1-eth0, h1-eth1, h1-eth2, and h1-eth3 (for four-channel aggregation) are set as slave channels, depending on the number of links the system aggregates at any moment in time. Whenever the 80% weighted bandwidth requirement is reached, a new physical link is added to the LAG, and traffic is transmitted over the links at the combined bandwidth of the logical interface created. Traffic that requires less than the 80% bandwidth threshold is transmitted using the Open vSwitch switching-hub algorithm. If the bandwidth requirement remains low for a continuous 5 minutes, the learned logical interfaces in the switching hub are disabled. This demonstrates how the algorithm monitors changes that occur in the slave state through the way the Ryu controller's events are executed.
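The 5-minute idle rule described above can be sketched as a small counter over the 10-second monitoring intervals. This is an assumption-laden illustration: the mapping of 5 minutes to 30 consecutive intervals follows from the stated timings, but the `LagIdleTracker` name and structure are invented for the example.

```python
# Sketch of the 5-minute idle rule: disable the learned logical interface
# after the bandwidth requirement stays below the 80% threshold for
# 30 consecutive 10-second monitoring intervals. Names are illustrative.

POLL_INTERVAL_S = 10
LOW_BW_DISABLE_S = 5 * 60                             # 5 minutes of low demand
DISABLE_AFTER = LOW_BW_DISABLE_S // POLL_INTERVAL_S   # 30 intervals

class LagIdleTracker:
    """Count consecutive low-bandwidth intervals and decide when to disable bond0."""
    def __init__(self):
        self.low_intervals = 0

    def observe(self, below_threshold: bool) -> bool:
        """Record one 10 s sample; return True when the LAG should be disabled."""
        self.low_intervals = self.low_intervals + 1 if below_threshold else 0
        return self.low_intervals >= DISABLE_AFTER
```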
In addition, if one of the links in the LAG fails or is disabled for any reason, the lacplib will direct traffic to the active link(s). In a LAG, the bonded physical channels therefore always serve as active backups for each other. The logical interface (bond0) retains its configured bandwidth, but the maximum bandwidth that can actually be utilized is the combined bandwidth of the remaining active link(s). For example, in the four-channel LAG, if h1-eth0 goes down for any reason, the LACP data unit will be transmitted over the remaining channels (h1-eth1, h1-eth2, and h1-eth3), provided they have been added to the LAG. The actual transmission bandwidth is then the sum of the weighted bandwidths of the links in use.
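The failover arithmetic in this example can be stated as a one-line sum over the surviving LAG members. The function below is an illustrative sketch under the assumption of four 1 Gbps members; only the rule itself (usable bandwidth equals the sum of the remaining links' bandwidths) comes from the text.

```python
# Sketch of the failover rule: when a LAG member goes down, the usable
# bandwidth is the sum of the bandwidths of the members still up.

def usable_bandwidth_mbps(link_bw: dict, down: set) -> float:
    """Sum the bandwidths of LAG members that are not in the down set."""
    return sum(bw for name, bw in link_bw.items() if name not in down)
```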
The algorithm observes when a slave channel's state change triggers an idle timeout for the flow entry that performs packet-in of the LACP data unit. A timeout may occur because of a link failure or because the physical interface is disabled. The system prints the change of state to disabled and then handles a FlowRemoved message from the switching hub. In this instance, the packet-in flow is redirected to the Ryu controller, which decides to transmit the packet over a backup link. Therefore, when the enable/disable state of a physical interface changes, the FlowRemoved event handler deletes all flow entries that use the physical interfaces included in the logical interface to which the disabled physical interface belongs. This makes LACP suitable for constructing highly available network links and provides quick switchover for fault tolerance.