In its first phase, the analysis and evaluation focus on the performance of the network nodes. In each scenario, 40 simulations are carried out, varying the seed of the random number generators. The results presented in these tests correspond to the average of each measurement across the simulations.
6.1 Traffic Generator Performance.
Before analyzing the results, the algorithmic model of the generator is validated. In the first instance, the packet generation processes are based on the standard generation processes offered by OPNET. The IPv6 encapsulation process receives the packets and encapsulates them with their respective IPv6 header. The generation and encapsulation processes establish the packet creation time, which is used as the reference for delay calculations.
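The actual model is built from OPNET process models; the following Python sketch only illustrates the timestamping convention described above. The names `Packet`, `encapsulate_ipv6`, and `end_to_end_delay` are hypothetical and do not correspond to identifiers in the model.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    size_bytes: int
    creation_time: float   # stamped by the generator, before encapsulation

def encapsulate_ipv6(pkt: Packet) -> Packet:
    # Add the fixed 40-byte IPv6 header; the creation time travels with
    # the packet so the sink can compute end-to-end delay against it.
    pkt.size_bytes += 40
    return pkt

def end_to_end_delay(pkt: Packet, arrival_time: float) -> float:
    # Delay is measured from the creation time set at generation.
    return arrival_time - pkt.creation_time
```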
The inter-arrival times for these processes, for both exponential and constant PDFs, are shown in Fig. 13.
Calculating the average inter-arrival time in Fig. 13(b) yields 0.02 seconds, so we can conclude that the generation and IPv6 encapsulation process meets the expected characteristics; the packet size values likewise remain constant, as configured.
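As a minimal check of this property, the sketch below draws exponential inter-arrival times with the 0.02 s mean observed in Fig. 13(b) and verifies that the sample mean converges to it; the seed parameter mirrors the 40-run seed variation described at the start of the section.

```python
import random

MEAN_INTERARRIVAL_S = 0.02   # mean observed in Fig. 13(b)

def interarrival_times(n: int, seed: int) -> list[float]:
    # expovariate takes the rate (1/mean) of the exponential distribution
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / MEAN_INTERARRIVAL_S) for _ in range(n)]

samples = interarrival_times(100_000, seed=1)
print(sum(samples) / len(samples))   # sample mean converges to ~0.02 s
```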
The delay results for each generator class were obtained by placing one traffic source per class with an average transmission rate equal to the maximum capacity of the queue; the average arrival rates follow an exponential probability distribution to emulate random behaviour in the generator queues. All packets were generated with a maximum size of 1500 bytes, the maximum transportable size in an Ethernet network. The results obtained are shown in Fig. 14.
For all queues, the delay remains below the limit specified in Eq. 2–2. For example, the average delay for the highest-priority traffic (class 4) queued in the generator is approximately 10.5 ms, compared to the 3-second limit given by Eq. 2–2.
To validate the WRED and DRR algorithms at the generator output, the traffic generation parameters are configured so that the queues enter congestion; additionally, the packet size is configured so that the throughput at the generator output is independent of it. The results are as follows:
As can be seen in Fig. 15, the throughput value for each queue is exactly as expected: 400 kbps for class 1, 500 kbps for class 2, 500 kbps for class 3, and 600 kbps for class 4. Regarding Fig. 16, it can be concluded that the delay remains below the limit of algorithms such as WFQ (approximately 3 seconds, Eq. 2–2) but at a lower computational cost. These results validate the behaviour of the DRR algorithm used by the nodes in the following scenarios.
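The per-class shares follow from DRR's quantum-proportional service. The sketch below shows one DRR service round under the assumption that the quanta are chosen proportional to the rates observed in Fig. 15 (400/500/500/600 kbps); the quanta actually used in the model are not given in the text.

```python
from collections import deque

# Quanta in bytes per round, proportional to the per-class shares in Fig. 15
QUANTUM = {1: 400, 2: 500, 3: 500, 4: 600}

def drr_round(queues: dict[int, deque],
              deficit: dict[int, int]) -> list[tuple[int, int]]:
    """One DRR service round: each backlogged queue earns its quantum and
    transmits head-of-line packets while its deficit covers their size."""
    sent = []
    for cls, q in queues.items():
        if not q:
            deficit[cls] = 0          # idle queues do not accumulate credit
            continue
        deficit[cls] += QUANTUM[cls]
        while q and q[0] <= deficit[cls]:   # queues hold packet sizes (bytes)
            size = q.popleft()
            deficit[cls] -= size
            sent.append((cls, size))
    return sent
```

Because a queue's long-run share is proportional to its quantum, the observed 400/500/500/600 kbps split is the expected outcome under sustained congestion.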
Figure 17 shows that the average queue sizes maintain a constant value during the simulation, which is the desired behaviour for the WRED algorithm.
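WRED stabilizes the average queue size by dropping early with a probability that grows with the exponentially weighted average occupancy. The sketch below shows the standard decision; the parameter names (`w`, `min_th`, `max_th`, `max_p`) are the usual RED parameters and their values per class are not stated in the text.

```python
import random

def wred_decision(avg: float, q_len: int, w: float,
                  min_th: float, max_th: float,
                  max_p: float) -> tuple[float, bool]:
    """One WRED arrival: update the EWMA of the queue size, then drop with
    a probability that grows linearly between min_th and max_th."""
    avg = (1.0 - w) * avg + w * q_len      # exponentially weighted average
    if avg < min_th:
        return avg, False                  # below threshold: never drop
    if avg >= max_th:
        return avg, True                   # above threshold: always drop
    p = max_p * (avg - min_th) / (max_th - min_th)
    return avg, random.random() < p        # linear early-drop region
```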
From the WRED and DRR results, the validity of the implementations can be concluded. The models described in the other scenarios contain the same implementation, so this validation extends to them.
6.4 Performance with Reuse.
6.4.1 Normal Router: During this test, 2 high-priority flows (class 4), 2 flows with exponentially distributed arrival rates in each of the other classes, and a test flow with a constant transfer rate are transmitted. The router classifies only by service differentiation, with a constant bandwidth for each class. The behaviour of the classes and the performance of each flow were evaluated at a utilization of 105% of the link capacity, and the number of packets lost per flow was counted. The results demonstrate the effect of congestion on the flows and of a discard algorithm such as RED: in this system, congestion and packet discard affect all classes equally, and the delay and jitter values remain similar since there is no traffic differentiation. In the following test, the traffic is increased to 125% of the link capacity. The increase accentuates the packet loss in every class, so even high-priority flows can suffer high losses. However, with service differentiation alone, the delay and jitter values remain constant for each class.
The absence of admission control congests the per-class traffic, increasing the packet loss in each priority queue [16]. The implemented admission control improves performance, as described in the following test.
6.4.2 Router with QoS manager agent: Repeating the same tests with a router equipped with the QoS manager, at 105% link capacity and the same throughput per class, gives the following results:
In this test, each flow is monitored and admission control is applied at the input. Applying a different throughput to each class generates unequal delay levels, more accentuated in the lower priorities. This behaviour is expected, since class 1 and class 2 flows are considered elastic, that is, tolerant of delays and variable bandwidth, such as file transfers or some Web-based protocols; in addition, this type of traffic is expected to run over reliable delivery protocols such as TCP, which tolerate packet loss.
For the highest-priority class, the QoS manager agent keeps the delay and jitter levels low and the packet loss due to buffer overflow at zero. This result validates the operation of the quality-of-service agent against the established design criteria and is the behaviour desired for time-sensitive traffic.
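The text does not specify how the agent polices each flow at the input; a common realization of this kind of per-class admission control is a token bucket, sketched below under that assumption. The class/user rates and burst sizes are placeholders, not values from the model.

```python
from dataclasses import dataclass

@dataclass
class TokenBucket:
    rate_bps: float     # rate admitted for this class/user (assumed)
    burst_bits: float   # maximum burst allowed (assumed)
    tokens: float       # start full: TokenBucket(r, b, tokens=b, last=0.0)
    last: float

    def admit(self, now: float, pkt_bits: int) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size;
        # admit only if enough tokens remain, otherwise police the packet.
        self.tokens = min(self.burst_bits,
                          self.tokens + (now - self.last) * self.rate_bps)
        self.last = now
        if pkt_bits <= self.tokens:
            self.tokens -= pkt_bits
            return True
        return False
```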
To analyze the behaviour against variations, a test flow is introduced. When the test flow is generated with a DS code in class 1 or 3, the packet loss of those flows increases slightly; if the test flow is generated as a class 4 flow, it is guaranteed the same behaviour as the other class 4 flows: low delay, low jitter, and no packet loss due to buffer overflow.
As the high-priority flows grow, they quickly reach the user limits designated in the QoS manager agent, so the available bandwidth is prioritized for high-priority traffic. This prioritization also increases the delays of the other classes, which can be counterproductive for applications at those priority levels. In a congestion scenario, the agent rejects the traffic that generates congestion, as detailed in the results of the following test.
If a flow exceeds the levels allowed for the user, the agent moves the traffic to a lower class so as not to degrade the delay, jitter, and packet loss of the class to which it belonged. This behaviour is described in Fig. 20, in which the same class 4 and class 3 flows are generated. In class 2, all flows are attributed to the same source IP so as to exceed the user limit; class 1 flows are stopped, and the test flow is added with the same IP used for the class 3 flows. When the user limit is exceeded, the test flow is reclassified to class 1.
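One plausible reading of this reclassification rule is an iterative demotion against per-user limits held in the agent's database, sketched below. The limit values and the `assign_class` helper are hypothetical; the text only states that the over-limit test flow ends up in class 1.

```python
# Hypothetical per-user limits per class (kbps); the actual values live
# in the agent's database and are not stated in the text.
USER_LIMIT_KBPS = {4: 600, 3: 500, 2: 500, 1: 400}

def assign_class(src_ip: str, requested: int,
                 usage_kbps: dict[tuple[str, int], float]) -> int:
    """Keep the flow in its requested class unless the source already
    exceeds its user limit there; otherwise demote it class by class."""
    cls = requested
    while cls > 1 and usage_kbps.get((src_ip, cls), 0.0) >= USER_LIMIT_KBPS[cls]:
        cls -= 1    # e.g. the test flow in Fig. 20 ends up in class 1
    return cls
```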
In conclusion, the agent improves the attention given to priority flows and increases their service rates, at the cost of heavy discards in the lower-priority queues. However, this scenario still lacks the congestion control that the QoS agent uses to optimize its results.
6.5 MPLS Scenario Performance: In this scenario, the behaviour of the topology with MPLS [15] nodes was evaluated. For this test, the traffic described above was generated at the first edge node (MPLS_Router). Under these conditions, the throughput values at the border router that receives this traffic are shown in Fig. 21a.
The results show that the flows at this node are not modified at any point during the simulation, even when they exceed the capacity of their corresponding tunnels. For example, the class 4 tunnel has a capacity of 1200 kbps, yet during this test it reaches 1500 kbps. As a consequence, flows can accumulate in the highest-priority categories and generate the packet loss seen in Fig. 21b. In this figure, the congestion effect begins at second 350 in the router that concentrates the flows (Router_MPLS2); the immediate effect is a reduction of throughput in the classes above their defined limits. The throughput, jitter, and delay measurements reported for this scenario correspond to the interval in which all flows are active (after 400 seconds of simulation).
The class 3 and class 1 flows maintain their packet rates; however, a large decrease is seen in class 4 throughput. From the above, it is concluded that the absence of per-class flow control generates congestion in the backbone routers, which results in higher latencies and packet losses.
6.6 Performance Scenario QoS Manager Agent: The QoS manager agent is evaluated in the same simulation scenario with the same traffic characteristics. The traffic generated by the first generator, with source IP 431, exceeds the maximum values for class 4; the QoS manager agent reorganizes the flows so that the excess flow is redirected to class 3, as seen in Fig. 22(a).
After 325 seconds, the transmission of the second node begins; before that time, the other border node (Agent_QoS1) had already begun its transmission (Fig. 6–12(b)), so the backbone router reports congestion. Figure 22 shows a peak in class 3 at 350 seconds, generated by the onset of backbone congestion; it corresponds to the transient reorganization of the database by the QoS manager agent. Likewise, Fig. 22(b) shows how throughput belonging to class 2 is transferred to class 1 (with lower priority).
As a result of the reorganization of the flows and the new QoS policies it triggers, the traffic volume at the backbone router is reduced until the congestion notification is removed. Figure 22(c) shows the behaviour of the throughput for each class.
The flow reduction ensures that the backbone routers keep the queues of the classes above class 1 out of congestion, thus reducing the possibility of packet loss in the higher-priority flows.
Under the same traffic parameters as the previous scenario, the results obtained for each class are:
According to the results, the QoS manager agent slightly improves delay, jitter, and throughput, since the class 4 delays remain similar despite the increase in throughput. The flow reorganization also increases the throughput of the other classes. From the per-flow results, it can be concluded that the loss percentage is lower for high-priority flows that remain in their respective priority; flows that exceed their capacities are demoted to lower categories, where they experience greater congestion and therefore increased delay. The congestion experienced in the backbone implies packet losses that, for high-priority flows, are almost negligible.
When congestion occurs, the QoS manager agent does not degrade the delay, jitter, and packet-loss parameters of the admitted traffic; instead, the flows generated after congestion are rejected. This behaviour allows us to conclude that the ECN notification is effective in interrupting incoming traffic: limiting the number of flows reduces the discard levels of the highest-priority flows and moves the flows of the other classes to lower priorities.
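The following sketch captures this gating behaviour under the stated interpretation: established flows keep their treatment, while flows first seen during an active congestion notification are refused. The `EcnGate` class and its method names are illustrative, not part of the model.

```python
class EcnGate:
    """Sketch of the agent's reaction to an ECN congestion notification:
    established flows keep their treatment, new flows are rejected while
    the notification is active."""

    def __init__(self) -> None:
        self.congested = False
        self.admitted: set[str] = set()

    def notify(self, congested: bool) -> None:
        # Raised and cleared by the feedback from the backbone routers.
        self.congested = congested

    def admit(self, flow_id: str) -> bool:
        if flow_id in self.admitted:
            return True                 # existing traffic is not affected
        if self.congested:
            return False                # flows arriving after congestion
        self.admitted.add(flow_id)
        return True
```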
An increase in traffic in the MPLS scenario leads directly to congestion in some of the AF classes, further increasing the packet loss; with the WRED algorithm, the losses grow as the traffic on the congested queue increases [14]. The QoS manager agent avoids congestion in the higher-priority classes by penalizing lower-priority flows; this scheme kept the packet loss below 3% of the total packets, fully complying with its design. If traffic increases further, the QoS manager agent blocks the reception of new flows while the backbone reports congestion, so that the existing traffic is not affected, a result that is not possible in MPLS, where the service degrades further.