Implementation of Extra Efficient Bandwidth Utilization Dynamic Bandwidth Allocation Algorithm to Support Differentiated Service Classes in XGPON

A Dynamic Bandwidth Allocation (DBA) algorithm is necessary for the efficient utilization of bandwidth in XG-PON. Most existing DBA algorithms do not share the unused bandwidth of one service-class queue with the queues of other service classes. The Efficient Bandwidth Utilization (EBU) algorithm uses a Borrow Refund (BR) method in its update operation to share unused bandwidth between traffic-class queues, but the BR method has some flaws: it reduces the bandwidth allocated to the next traffic class and increases its delay. This paper presents an Extra Efficient Bandwidth Utilization (EEBU) algorithm, which overcomes these limitations with a proper polling and scheduling mechanism. The theoretical and simulation results show that EEBU reduces the T-CONT 2 delay by 1% and 10%, the T-CONT 3 delay by 8% and 22%, and the T-CONT 4 delay by 6% and 4.5% compared to EBU and the Giga PON access network (GIANT) algorithm, respectively.


I. Introduction
A telecommunication network that establishes connections between users and their immediate service providers is called an access network, one of the main types of telecommunication networks. The optical access network is preferred over other access networks. Service providers use PON [1] as an access technology, which falls in the optical access network category.
The optical access network achieves benefits in terms of distance coverage, speed, cost, etc. PON is a point-to-multipoint system [2]. PON is found to be much more competent than the Active Optical Network (AON), as it uses only a passive splitter, a combiner, and optical fiber. PON is further classified into Asynchronous Transfer Mode (ATM) PON (APON), Broadband PON (BPON), Ethernet PON (EPON), GPON, and Wavelength Division Multiplexing PON (WDM-PON) [3].
GPON is preferred among the passive optical networks because of its data rate, split ratio, Quality of Service (QoS), and wide range of applications. Features like high efficiency, a rich user interface, high bandwidth, and wide coverage make it a suitable technology for broadband access network services. It operates in the downstream and upstream directions at wavelengths of 1480-1500 nm and 1260-1360 nm, respectively, with a scalable bandwidth from 155 Mbit/s to 2.5 Gbit/s. It provides a maximum downstream speed of 2.488 Gb/s and an upstream speed of 1.244 Gb/s. The physical reach is up to 60 km, and the split ratio is up to 128, based on the standards. GPON uses either Asynchronous Transfer Mode (ATM) or the GPON Encapsulation Method (GEM) to transport data; GEM is the most widely used. In XGPON, the main advantage is that the bit rates increase to 10 Gb/s downstream and 2.5 Gb/s upstream compared with GPON [7]. Both GPON and XGPON follow the DBA approach.
Every DBA algorithm has its own advantages. The DBA algorithm has the function of allocating and sharing the excess bandwidth collected from lightly loaded units with heavily loaded units [8]. XGPON's physical reach is about 60 km, and it achieves a split ratio of up to 256, almost double that of GPON [9]. Apart from these differences, there are many similarities between the GPON and XGPON standards, such as the upstream DBA scheme, the medium-sharing TDMA mechanism, and the QoS methodology. Based on these observations, XGPON could support many bandwidth-intensive applications like IPTV, video conferencing, chat, and video messaging. In cellular networks, XG-PON is used as the backhaul and fronthaul interface for multiple base stations (MBS) and provides faster network facilities for mobiles by integrating with various wireless networks [10].
The rest of the paper is organized as follows. Section II reviews the related work, Section III explains the polling and scheduling mechanisms of the proposed EEBU algorithm, and Section IV describes the results and a comparison with a few existing algorithms. Lastly, Section V concludes the paper and discusses future research.

II. Related Work
The first DBA algorithm for GPON, GIANT, was introduced in 2006 [11]. GIANT uses a down counter in allocating bandwidth to each of the service classes: its value is decremented by one every frame duration (FD), and on expiry, the bandwidth is allocated. The GIANT algorithm suffers from idle-time delay, and some bandwidth remains unallocated. To overcome the GIANT algorithm's idle-time delay, the Immediate Allocation with Colorless Grant (IACG) algorithm [12] was introduced. An available byte counter (VB) is used in each ONU for each service class, and the allocated bytes for that class are stored in it. The unused bandwidth is assigned to each ONU by using T-CONT5; still, unused bandwidth remains. To address this, the EBU algorithm [13] was introduced, in which unutilized bandwidth is used via the borrow and refund (BR) method in the update operation. The available byte counter becomes negative if the demand is greater than the VB size; the class then borrows bandwidth from the available byte counter of another traffic class and refunds it from the common counter in the update operation. If no bandwidth is available in the common counter, this leads to reduced bandwidth allocation for the remaining traffic classes. To address the shortcomings of the EBU algorithm, the Simple and Feasible DBA algorithm (SFDBA) [14] was developed. It has a single available byte counter for all traffic classes. However, one service class's unused bandwidth cannot be used by a different service class once that class's byte counter is depleted. To address these issues, the DBA with High Utilization (DBAHU) algorithm [15] was introduced as an updated version of SFDBA: it employs the BR concept of EBU in the SFDBA update operation.
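The down-counter behaviour described above can be sketched as follows; this is an illustrative model with variable names of our own choosing, not the GIANT authors' implementation:

```python
# Illustrative model of GIANT's down counter: each service class has a counter
# initialized to its service interval (in frame durations, FD); it decreases by
# one every frame, and bandwidth is allocated only when it expires (reaches 0).

class ServiceClass:
    def __init__(self, si_frames, allocation_bytes):
        self.si = si_frames
        self.down = si_frames        # down counter, decremented each FD
        self.ab = allocation_bytes   # bytes granted on expiry

def run_frame(classes):
    """Advance one frame duration; return bytes allocated this frame."""
    allocated = 0
    for c in classes:
        c.down -= 1
        if c.down <= 0:              # counter expired: allocate and recharge
            allocated += c.ab
            c.down = c.si
    return allocated

classes = [ServiceClass(si_frames=2, allocation_bytes=100),
           ServiceClass(si_frames=4, allocation_bytes=400)]
print([run_frame(classes) for _ in range(4)])   # [0, 100, 0, 500]
```

Between expiries no bandwidth is granted at all, which is the idle-time delay the IACG algorithm was designed to remove.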
Furthermore, when no unused bandwidth is available, the next cycle is affected. The X-GIANT algorithm [16] improves on the GIANT concept in the XGPON standard by allocating bandwidth regardless of the expiry of the down counter. To further improve utilization, the Improved Bandwidth Utilization (IBU) algorithm [17] was introduced. Since the requests R12 and R22 are zero, no grant is issued. R13 is granted, and VB3 becomes zero. However, R23 cannot be granted because no bytes remain in VB3, and because of EBU's limitation it cannot use VB2. The bytes in VB2 are unused and therefore wasted. In EEBU, the bytes allocated in VB2 are added to T, and grants are issued only from T. Since the requests R12 and R22 are zero, no grant is given. The bytes of VB3 are then added to T (200 + 50 = 250). R13 is granted, and T becomes 200. Next, R23 is granted, and T becomes 100. This example shows that the unused bandwidth is efficiently utilized in the EEBU algorithm.
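The worked example can be reproduced with a short sketch; the values follow the example in the text, while the function names are our own illustrative choices, not the authors' code:

```python
# Illustrative model of the worked example: per-class byte counters (EBU-style)
# versus a shared common counter T (EEBU-style). Values follow the paper:
# VB2 = 200, VB3 = 50; requests R13 = 50, R23 = 100 (T-CONT 3 only).

def ebu_grants(vb, requests):
    """Grant each request only from its own class counter (no sharing)."""
    grants = []
    for tcont, size in requests:
        g = min(size, vb[tcont])
        vb[tcont] -= g
        grants.append(g)
    return grants

def eebu_grants(vb, requests):
    """Pool all class counters into a common counter T and grant from T."""
    t = sum(vb.values())          # T = VB2 + VB3 = 250
    grants = []
    for _, size in requests:
        g = min(size, t)
        t -= g
        grants.append(g)
    return grants, t

requests = [(3, 50), (3, 100)]    # R13, R23
print(ebu_grants({2: 200, 3: 50}, requests))    # [50, 0]  -> R23 starves
print(eebu_grants({2: 200, 3: 50}, requests))   # ([50, 100], 100)
```

Under EBU-style isolation R23 receives nothing while 200 bytes sit unused in VB2; with the common counter both requests are served and 100 bytes of T remain.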

D. Simulation Model and Conditions
The simulation tool used in this paper is NS-3, a network simulator [21] that uses C++ and Python. The XG-PON module was developed for NS-3 [22]. In this paper, the simulation scenario consists of 16 ONUs (W) and 1 OLT.

IV. Result and Discussion
Delay is the time taken by a packet to reach the destination from the source; here, the source indicates the OLT and the destination indicates the ONU. The results measure delay in milliseconds. Figure 2 shows the average delay values of 16 ONUs under various network loads for traffic classes 2, 3, and 4 and for the total upstream.
In Figure 2(a), for T-CONT 2, the delay performance of EEBU is similar to EBU, since the ONU's bandwidth allocation process starts from the T-CONT 2 traffic class. However, at the middle load, when the traffic load is equal to 0.5, EEBU's delay performance improves by 1 percent due to the use of the total counter. Compared to the GIANT algorithm, at an offered load of 0.3, the delay performance improves by around 10%; when the traffic load is 0.5, it improves by 7.5%, and at the remaining traffic loads the improvement stays around 4%. The maximum and minimum T-CONT 2 delays in EEBU are reduced by 10% and 4% compared to GIANT.
T-CONT 3 gets higher priority since it carries video applications. From Fig. 2(c), the delay performance of EEBU is improved over EBU for the T-CONT 4 traffic class at middle and higher loads because of the removal of the borrow refund operation from the update operation. The T-CONT 4 delay in EEBU is reduced by 5.9% compared to EBU. The GIANT delay performance is similar to EEBU's; at middle loads, EEBU's delay performance improves by 4.5%.

V. Conclusion
In this paper, the fiber access network is preferred over other access networks. PON is chosen over other optical fiber technologies because of its shared architecture, and GPON is preferred over EPON due to its better features. XG-PON is adopted for this study as the updated version of GPON, and its details are presented in this report. A study of the DBA algorithms in XGPON showed that the available bandwidth was not used adequately. This paper proposed a new DBA algorithm for XG-PON. It improves the EBU algorithm's scheduling mechanism: the unused bandwidth is fully utilized through the total counter, so that there is no leftover. The delay of individual traffic classes and the overall upstream delay of EEBU are reduced compared to EBU and GIANT. Therefore, from the results and analysis, the EEBU algorithm can support triple-play services more efficiently than the existing DBA algorithms. This work will also be useful for evaluating futuristic technologies like fronthaul and backhaul in 5G.
GEM works on the IP protocol. Transmission Containers (T-CONTs) handle the upstream bandwidth allocation. Each T-CONT type has a separate queue, each queue has a different Alloc-ID (Allocation Identifier), and the queues are maintained by the ONUs. The five T-CONTs are T-CONT 1, T-CONT 2, T-CONT 3, T-CONT 4, and T-CONT 5 [6]. The bandwidth allocation works in two phases: the Guaranteed Phase Allocation (GPA) and the Surplus Phase Allocation (SPA). The SPA begins after the execution of the GPA. The GPA is executed for T-CONT2 and T-CONT3, since they require assured bandwidth; T-CONT3 requires non-guaranteed bandwidth as well. Parameters such as the Service Interval (SI) and Available Bandwidth (AB) are necessary for assigning bandwidth and vary for each T-CONT.

The T-CONTs of each ONU are T-CONT2, T-CONT3, and T-CONT4; each T-CONT has one queue with an Alloc-ID. The number of cycles depends upon the value of SI. The previous cycle's grant is allocated with the DBRu slot in C0 and reaches the ONU at C1. The request report collected from the ONUs during C1 is sent to the OLT during C2. At C3, all the reports reach the OLT. The OLT processes the requests and assigns the grant during C4; it reaches the ONU at C1 of the next cycle, i.e., C5 of the current cycle. But when the report reaches the OLT at C2, it is used in C3. By this time, the ONU has consumed the allocations G0, G1, G2, and G3 from the four cycles C0, C1, C2, and C3, so these allocations are subtracted from the report before processing it for a grant. Figure 1 shows the XGPON transmission diagram during the polling process. Each queue is assigned a polling flag by the OLT. When the polling flag PF_KJ is open, i.e., set to 0, the OLT allocates a DBRu slot to that queue during the downstream. The variable DBR_KJ denotes the DBRu slot assigned to the queue Q_KJ. The queue places its request in that DBRu slot and sends the slot upstream. Once a DBRu slot is allocated, the polling flag is closed, i.e., set to 1; it is opened again when the SI timer expires. To know the current requests of the queues, they would have to report every downstream cycle, but allocating a DBRu slot in every upstream cycle leads to bandwidth wastage, as the DBRu slot length is 4 bytes per queue. Each ONU has three queues, so 12 * W bytes would be wasted, where W is the number of ONUs. The proposed algorithm therefore allocates one DBRu slot per SI. PM_Y indicates the ONU at which polling begins for T-CONT type Y. The polling mechanism operates according to the SLA. DBR_XY denotes the DBRu slot for T-CONT type Y of ONU X; DBR_XY = 0 means the slot is not assigned. The variable PF_XY represents the DBRu flag for T-CONT type Y of ONU X.
PF_XY = 0 means the polling flag is open; DBR_XY = 1 means the slot is assigned. DS denotes the size of the DBRu slot. polling_end indicates the termination of the polling mechanism: polling_end = 1 means polling has ended.

B. EEBU Scheduling Mechanism
The scheduling mechanism in the proposed DBA algorithm follows the SLA. The request report obtained from the polling operation is used in scheduling for allocating bandwidth. The EEBU algorithm uses a common counter (T), which accumulates the values of the available byte counters of all T-CONT types (VB_Y). All the traffic queues use bandwidth from the common counter, so the unutilized bandwidth of one queue is shared with the other queues. In this way, the borrow refund operation is eliminated from the update operation. Granting starts when T and the Frame Bytes (FB) are greater than zero. Under the minimum condition, the OLT grants the minimum of the request of T-CONT type Y of ONU X (R_XY), T, and FB; the grant size is then subtracted from these quantities. Finally, the update operation starts when the down counter of traffic class Y (D_Y) expires: VB_Y is recharged to the allocated bandwidth for T-CONT type Y (AB_Y), D_Y is recharged to SI_Y, and PF_XY is opened. allocation_end = 1 means the allocation has ended. For XGPON, the frame size is 38880 bytes. AB_XY denotes the allocation bytes for T-CONT type Y of ONU X, and SI_XY denotes the service interval for T-CONT type Y of ONU X.

C. Theoretical Analysis
A theoretical analysis is made to show that EEBU outperforms EBU in utilizing unused bandwidth completely. Consider two ONUs in the system. Let R12 and R22 be the requests of ONU1 and ONU2 for T-CONT type 2, and assume there is no request from ONU1 and ONU2 for T-CONT type 2.
Let R13 and R23 be the requests of ONU1 and ONU2 for T-CONT type 3, and assume they are 50 and 100 bytes. Let VB2 and VB3 be the available byte counters for T-CONT type 2 and T-CONT type 3, and consider them to be 200 and 50 bytes. The process starts with ONU1 and then ONU2 of T-CONT type 2, followed by ONU1 and ONU2 of T-CONT type 3.
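The grant step of the scheduling mechanism, granting min(R_XY, T, FB) and subtracting the grant from each quantity, can be sketched as below; the function name and the sample request list are our own illustrative choices:

```python
# Minimal sketch of the EEBU grant step: the OLT grants min(R_XY, T, FB)
# and subtracts the grant from the common counter T and the frame bytes FB.
# FB = 38880 bytes is the XGPON frame size given in the text.

FB_XGPON = 38880

def schedule(requests, t, fb=FB_XGPON):
    """Grant each request from the common counter T while T and FB > 0."""
    grants = []
    for r_xy in requests:
        if t <= 0 or fb <= 0:
            grants.append(0)
            continue
        g = min(r_xy, t, fb)     # the minimum condition
        t -= g
        fb -= g
        grants.append(g)
    return grants, t, fb

# Two requests against a common counter of 150 bytes:
print(schedule([100, 100], t=150))   # ([100, 50], 0, 38730)
```

When T is exhausted before FB, the second request is only partially served, which is exactly the point at which the update operation later recharges VB_Y and hence T.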

Figure 2(b) shows that the T-CONT 3 delay performance of EEBU is improved by 8% at a traffic load of 0.3 compared to EBU, and it maintains a constant delay improvement of around 4% for the remaining traffic loads. Compared to GIANT, there is nearly a 22% delay improvement at the initial traffic loads of 0.1 to 0.3, and a 7% improvement is maintained for the remaining traffic loads. Compared to EBU, the maximum and minimum T-CONT 3 delays in EEBU are reduced by 8% and 4% by eliminating the borrow refund operation in the T-CONT 2 traffic class. Compared to GIANT, the maximum and minimum T-CONT 3 delays in EEBU are reduced by 22% and 7%.

Figure 2(d) shows the upstream delay, which is the average of the sum of the T-CONT 2, T-CONT 3, and T-CONT 4 class delays. The upstream delay of EEBU improves over EBU, as the delays of the T-CONT 2, T-CONT 3, and T-CONT 4 traffic classes in EEBU are less than in EBU and GIANT. The maximum and minimum delays in EEBU are reduced by 2% and 2.6% compared to EBU, and by 11% and 3% compared to GIANT.

The table shows the T-CONT types, their parameters, and applications.
GPON is subdivided into XGPON and Next Generation PON (NGPON); the difference between them is based on data rates. T_OLT consists of the T_DBA time to process the DBA algorithm and the GRANT time to allocate grants. Allocating grants to ONUs is called scheduling. Usually, SI is a multiple of 125 µs, and it is divided into SI_MAX and SI_MIN depending upon the service class. The allocation bytes depend upon the service rate (R) and the SI.
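One way to read this dependence is sketched below; note that the paper states only that the allocation bytes depend on R and SI, so the concrete formula AB = R x SI (converted to bytes) is our assumption, and the sample rate is a placeholder:

```python
# Hypothetical sketch of deriving Allocation Bytes (AB) from the service rate
# R and the service interval SI. The text says only that AB depends on R and
# SI; treating AB as the bytes arriving at rate R over one SI is an assumption.

def allocation_bytes(rate_bps, si_frames, frame_us=125):
    """Bytes allocatable per service interval of `si_frames` 125-us frames."""
    si_seconds = si_frames * frame_us * 1e-6
    return int(rate_bps * si_seconds / 8)   # bits -> bytes

# e.g. a (hypothetical) 100 Mb/s service rate with SI = 4 frames (500 us):
print(allocation_bytes(100_000_000, 4))   # 6250
```

A larger SI therefore trades a bigger per-grant allocation against a longer wait between grants, which is why SI_MAX and SI_MIN are assigned per service class.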
The IBU algorithm enhances the polling and scheduling mechanisms; however, it increases the T-CONT 4 delay by giving the highest priority to T-CONT2 and T-CONT3. The Comprehensive Bandwidth Utilization (CBU) algorithm [18] improves the IACG and EBU polling and scheduling mechanisms: it improves the T-CONT4 traffic delay but falls short for T-CONT2 and T-CONT3 traffic compared to EBU. A new polling and scheduling mechanism is used for the CBA algorithm [19]. Demand Forecasting (DF) DBA [20] is based on statistical modelling, where the OLT predicts the ONU traffic demands; thus, the time required to process the ONU requests and the DBA calculation is reduced. From the literature review, it is clear that XGPON's current DBA algorithms do not effectively and efficiently allocate the remaining bandwidth in the available byte counter. This paper introduces a new algorithm, named EEBU, to utilize the unused bandwidth completely and without added complexity.

III. EEBU Algorithm
A. EEBU Polling Mechanism
The polling and scheduling mechanisms follow the SLA and the SI. RTT/2 is the time taken by the OLT to send an allocation to an ONU and vice versa. Once the Dynamic Bandwidth Report upstream (DBRu) slot reaches the ONU, it takes T0 time plus TE time to process the request from the queues, where T0 is the processing delay and TE is the equalization delay. Processing the requests from the ONUs is called polling. The processed requests are sent to the OLT; after they reach the OLT, it takes T_OLT time to calculate the demand and provide grants according to the requests. Table 2 displays the parameters taken for the simulation and their values. Each ONU contains 3 T-CONTs (types 2, 3, and 4). The offered traffic load ranges from 0.1 to 1. The queue size for each T-CONT is 1 MB (megabyte). The upstream and downstream rates are set at 2.488 Gbps and 10 Gbps. The Optical Distribution Network reach is 20 km, so the RTT is 200 µs. The processing time of the ONU (T0) is 35 µs.
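The delay terms named above can be combined into a rough per-round budget; summing the terms into one round trip is our assumption, the composition T_OLT = T_DBA + GRANT time follows the text, and the TE, T_DBA, and GRANT values below are placeholders rather than values from the paper:

```python
# Rough budget for one polling/grant round using the delays named in the text:
# RTT/2 each way, T0 (ONU processing), TE (equalization), and T_OLT
# (= T_DBA + T_GRANT). Summing them into a single round is our assumption.

def polling_round_us(rtt_us, t0_us, te_us, t_dba_us, t_grant_us):
    t_olt = t_dba_us + t_grant_us
    return rtt_us / 2 + t0_us + te_us + t_olt + rtt_us / 2

# RTT = 200 us (20 km ODN) and T0 = 35 us are the simulation values from the
# text; TE, T_DBA, and T_GRANT here are placeholders for illustration only.
print(polling_round_us(200, 35, te_us=10, t_dba_us=20, t_grant_us=15))
```

Even with placeholder processing times, the RTT dominates the round, which is why the grant issued in cycle C0 is not consumed until several cycles later.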