Television broadcasting began in many countries in the early 1950s, with commercial programs transmitted and viewed at the user end. From analog transmission to Direct-to-Home (DTH) delivery, TV broadcasting has undergone many technological changes, and the future is expected to combine numerous applications over high-speed networks. High bandwidth and minimal delay are expected at the customer end; the network must therefore provide a high exchange speed to ensure good Quality of Service (QoS). Television accompanies users of all age groups, and live programs such as sports and news are watched worldwide. The transformation from analog transmission to DTH has improved digital delivery to individuals and upgraded the Quality of Experience (QoE). Internet Protocol TV (IPTV) is an application that integrates television service with high-speed networks and makes TV service available anywhere; it demands high transmission capacity and low latency. Early approaches [1–4] address the bandwidth constraint by reconstructing video signals using buffers that adjust delay at the client's front end. Several authors [7–9] proposed playback mechanisms to reduce the delay at the user end: low-resolution video signals are streamed, stored in a buffer, and integrated at the user's end on request, and the low resolution lasts until the user's end is connected with a high-speed server.
Daniel et al. (2013) [1] proposed a bandwidth-restriction algorithm. When a user requests a new video stream, low-quality I frames are inserted into the standard stream, reducing the quality; the synthesized video quality is low, on the expectation that the human visual system cannot differentiate it. The play-out buffer is then filled with the multicast stream. Mandal et al. (2008) [3] utilize a feedback mechanism in which clients are equipped with preview channels carrying low-resolution video signals. The low-resolution channels are prefetched at the service end to reduce the switching delay between channels. The channels are limited, and only popular channels are recommended. Hence, the authors' recommendation is to insert low-resolution I frames and stream the popular channels for the next viewing; the client's individual interest in the next view is not considered.
Bandodkar et al. (2008) [2] focused on reducing user-perceived latency, proposing a multicast-based approach that reduces the display latency while retaining a good-quality multicast stream. The method supports networks under high traffic and server load. Lee et al. (2007) [4] use IGMP proxy servers: the scheme involves a prejoining strategy based on the client's past visits, which are held by the IGMP proxy server. The constraint is that IGMP must support switching between adjacent channels. The authors combine the study of previous viewing history with multicast and IGMP proxy servers.
Kim et al. (2008) [5] address the switching delay using surfing behaviour. The STB forwards the data to a multicast proxy server, which prejoins the channels the user is expected to watch; when the user requests a new channel, the switch is completed with minimum delay. Ramos et al. (2013) [7] convey additional I frames so that the play-out mechanism reduces the latency at the user's front end. The inserted I frames allow the video to be clipped to the required size, as adapted to the buffer at the service provider. However, as more I frames are inserted, the complexity of the encoder design increases.
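The surfing-behaviour prejoining idea above can be illustrated with a small sketch (an interpretation of the general approach, not the authors' implementation): the proxy counts observed channel transitions per source channel and prejoins the most likely next channels, so an anticipated zap is already a member of the multicast group.

```python
from collections import defaultdict

class PrejoinPredictor:
    """Sketch of surfing-behaviour prejoining: the proxy counts observed
    channel transitions and prejoins the k most likely next channels.
    Class and method names are illustrative assumptions."""

    def __init__(self, k=2):
        self.k = k  # number of channels to prejoin
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe_switch(self, src, dst):
        # Record one observed zap from channel src to channel dst.
        self.transitions[src][dst] += 1

    def prejoin_candidates(self, current):
        # Return the k most frequently followed channels from `current`.
        nxt = self.transitions[current]
        return sorted(nxt, key=nxt.get, reverse=True)[:self.k]

p = PrejoinPredictor(k=2)
for src, dst in [(1, 5), (1, 5), (1, 7), (1, 9), (5, 1)]:
    p.observe_switch(src, dst)
print(p.prejoin_candidates(1))  # → [5, 7]
```

The proxy would issue IGMP joins for the returned candidates while the user is still on the current channel.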
Lee et al. (2008) [6] use the H.264 scalable video scheme, which provides a base layer and an enhancement layer. In preview mode, users are permitted to access only the base layer and can switch channels efficiently. When the user switches to a new channel (watching mode), both the base layer and the enhancement layer are served to provide good signal quality. Lee et al. (2014) [8] classify the accessible channels into hot and cold channels: channels that will probably be watched in the future are named hot channels, and the remaining channels are cold channels. The classification is derived from the viewing history of the IPTV users. For preview mode, the hot channels are prefetched as low-resolution signals. The data are then stored and made available at the nearby service provider.
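A minimal sketch of the hot/cold split can clarify the mechanism; the ranking rule and the `hot_fraction` cutoff below are assumptions for illustration, not parameters taken from [8].

```python
from collections import Counter

def classify_channels(view_history, hot_fraction=0.2):
    """Sketch of hot/cold classification: channels are ranked by how often
    they appear in the viewing history; the top fraction are 'hot' and are
    prefetched as low-resolution streams, the rest are 'cold'.
    `hot_fraction` is an assumed tuning parameter."""
    counts = Counter(view_history)
    ranked = [ch for ch, _ in counts.most_common()]
    n_hot = max(1, int(len(ranked) * hot_fraction))
    return set(ranked[:n_hot]), set(ranked[n_hot:])

# Each entry is one viewing event for a channel id.
history = [3, 3, 3, 7, 7, 1, 9, 3, 7, 2]
hot, cold = classify_channels(history, hot_fraction=0.4)
print(hot)   # → {3, 7}
```

Only the `hot` set would be streamed in low resolution for preview; `cold` channels are fetched on demand.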
Yang et al. (2015) [9] propose framework-based IPTV (FIPTV). The model uses a Backing United Stream (BUS) virtual channel that downloads samples of video segments and stores them in the client's local buffer. When the user demands the target channel, the playback mechanism is initiated, leading to zero switching delay. Manikandan (2016) [10] uses Grouped Frequency Interleaved Ordering (GFIO) with Pre-Fetching (PF) to reduce the delay in IPTV switching. The strategy reduces the channel seek time by placing closely associated channels nearer to one another.
Zare (2016) [11] proposes Program Driven Channel Switching (PDCS) and Program Driven with Weight (PDW). Here, the user chooses the desired program rather than switching through channels; channel switching thus becomes independent of the channel number. Zare (2018) [12] combines pre-buffering with the program-driven approach to reduce the number of channel switches. This reduces the holding time to deliver the first frame after the user selects a channel, compared with earlier methods such as Frequency Interleaved Ordering (FIO), Frequency Circular Ordering (FCO), and Program Driven with Weight (PDW). Li (2020) [19] identifies the user's zapping pattern and recommends the channel to view at the next zapping, reducing the user's confusion in choosing the desired channel. A neural recommender system is used for this purpose. The model involves two modules: a Recommender System (RS) attention module and a channel attention module; the former captures the user's interest, and the latter captures the user's channel-switching behaviour.
Gupta (2015) [13] proposed a configurable arbiter for n users, using round-robin techniques for high-speed SoC design. The arbiter is parameterized and can be configured dynamically, which increases flexibility for better user access. Oveis-Gharan (2015) [14] proposed an index-based round-robin arbiter that actuates the input port of the router. The arbiter consumes less power, occupies a small chip area, and delivers high performance; the technique is simple, fast, and has low hardware overhead.
Khanam, R. (2015) [15] proposed a dynamic bus arbiter for SoC. The arbiter distributes priority among the masters accessing the shared bus, removing bus starvation and contention. The author shows that the technique is superior to the previously proposed "Dynamic Lottery based Algorithm". Kamal, R. (2016) [16] compares three arbitration algorithms: fixed priority, round robin, and matrix arbiters. The author finds that the fixed-priority and matrix arbiters are slower and less efficient, and recommends the round-robin arbiter for high-speed switching.
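The round-robin policy recommended above can be modelled in a few lines. This is a generic software sketch of the arbitration rule, not the hardware designs of [13] or [16]: the grant pointer rotates so that the master granted last has the lowest priority on the next cycle, which is what prevents starvation.

```python
class RoundRobinArbiter:
    """Minimal software model of a round-robin arbiter: the grant rotates
    so the master granted last has the lowest priority next cycle."""

    def __init__(self, n_masters):
        self.n = n_masters
        self.last = self.n - 1  # so master 0 has priority on the first cycle

    def grant(self, requests):
        # `requests` is a list of booleans, one per master.
        for offset in range(1, self.n + 1):
            m = (self.last + offset) % self.n
            if requests[m]:
                self.last = m
                return m
        return None  # no master is requesting

arb = RoundRobinArbiter(4)
print(arb.grant([True, False, True, True]))  # → 0
print(arb.grant([True, False, True, True]))  # → 2
print(arb.grant([True, False, True, True]))  # → 3
print(arb.grant([True, False, True, True]))  # → 0 (rotation wraps around)
```

With the same request vector held steady, every requesting master is granted in turn, so no master starves.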
Sievers (2017) [17] targets streaming applications such as signal and video processing, introducing a tightly coupled shared data memory for CPU clusters that lowers the access latency to the shared memory. Wittig (2019) [18] uses shared memory for multiple processing elements; the conflict in accessing the shared memory is reduced by an access-interval prediction method that minimizes collisions between processors accessing the memory.
The available work highlights the role of the playback mechanism and shows that it is indispensable for reducing the switching delay. Many authors have proposed and contributed to this mechanism; hence, the playback methodology is adopted in this work. In earlier research, the user's interest is estimated at the service-provider end (i.e., the router): the low-resolution data are stored at the router and streamed back when the user switches between channels, with proxy servers holding the user's interest information to supply it on demand. As reported by Ramos et al. [7], the video buffering delay is between 1 and 2 minutes. Moreover, the information stored in the router grows with the number of users. With the advent of IoT, many devices collect different kinds of information that must be stored; as a result, a vast memory requirement is expected at the router end (the access information must be retained in the router for at least some time, as mandated by the respective countries).
Hence, it is proposed to deploy local memory in the head-end device of the home, with the STB serving as that device. Memory components are added to the STB, and the user's interests are stored there in low resolution. The user's interest is computed at the router end, and the information is made available to the channels of the STB. Channel 0 of the STB serves the user's interest on the front screen, while the other channels of the STB are allotted channels correlated with the interest currently playing on the front-end display device. The channel on view is streamed in high quality according to the user's privileges, whereas the signals arriving at the remaining STB channels are streamed in low quality and stored in the STB's local memory. The work thereby aims to reduce the buffering delay, since the information is made available at the client's head-end device.
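The proposed buffering behaviour can be sketched as follows; the class, names, and data are illustrative assumptions used to show the control flow, not the actual STB implementation of this work.

```python
class STBLocalBuffer:
    """Sketch of the proposed scheme: correlated channels are streamed in
    low resolution into the STB's local memory, so a zap to one of them is
    served locally instead of waiting on the router."""

    def __init__(self, correlated_channels):
        self.correlated = correlated_channels
        self.buffers = {}  # channel id -> buffered low-resolution segment

    def ingest(self, channel, low_res_segment):
        # Low-quality streams of correlated channels fill the local memory.
        if channel in self.correlated:
            self.buffers[channel] = low_res_segment

    def switch_to(self, channel):
        # On a zap, play from local memory if available (low delay);
        # otherwise fall back to fetching from the router (high delay).
        if channel in self.buffers:
            return ("local", self.buffers[channel])
        return ("router", None)

stb = STBLocalBuffer(correlated_channels={5, 7, 9})
stb.ingest(5, "lowres-5")
stb.ingest(7, "lowres-7")
print(stb.switch_to(5))   # → ('local', 'lowres-5')
print(stb.switch_to(42))  # → ('router', None)
```

The low-resolution copy bridges the gap until the high-quality stream of the newly selected channel arrives from the router.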
The rest of the paper is organized as follows. The proposed model is discussed in Sect. 2, and the proposed algorithm in Sect. 3. Section 4 presents the obtained results and the outcomes of the proposed architecture, and Sect. 5 concludes the work.