Today's powerful commodity hardware and the introduction of novel processing devices such as GPUs have made deep learning feasible. To aid in the development and testing of the proposed data models, the scientific community has released a number of large open-source datasets. In parallel, 5G carries significant market value because it aims to provide new, innovative capabilities that no previous communication system has offered. The widespread use of smartphones, wearables, and Internet of Things (IoT) devices has further heightened the significance of 5G.
The principal aim of this study is to answer the following research questions:
2.1 Research Question 1: What are the primary issues in deep learning for smart healthcare?
Deep learning and 5G are two technologies that have recently attracted considerable interest. A summary of the included papers is provided in this subsection. The critical problems can be classified into three main aspect levels: the structural level, the communication level, and the technological level. Figure 2 illustrates these aspect levels and the issues in deep learning and 5G methods at each.
2.1.1 Structural Level
At the structural level, the surveyed research targets detection techniques and deep learning models for predicting mobile-communication behaviour; this level can broadly be characterised as resource-distribution research. Most of the surveyed papers address issues of modulation-scheme definition, radio-frequency classification, human interaction, fault diagnosis, equipment forecasting, debugging strategy, and channel-information estimation at the structural level of the reference model.
Zhou et al. [19] investigated traffic-flow prediction in ultra-dense networks, a challenging setting owing to the modulation schemes and massive MIMO solutions involved. A deep learning model was employed to anticipate congestion, allowing potential congestion to be detected and subsequent decisions to be made to prevent or diminish it. For high-density D2D mmWave settings, Abdelreheem et al. [20] suggested a deep learning model for modulation-based intelligent communication networks. To improve overall system capacity, the model chooses the appropriate routing protocol while taking into account a variety of reliability indicators together with an indication of signal strength. Maksymyuk et al. [21] used supervised learning to offer an adaptive modulation scheme based on MIMO technology. The concept essentially creates a system to compute each directional antenna's phase shift and magnitude. Depending on how many users are in a given region, the supervised learning algorithm can modify the signal intensity: if many users are congregated in a constrained space, the method can create a more customised response for that region; conversely, if users are distributed across a large area, a wide-coverage signal is transmitted to serve the entire area.
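As a rough illustration of this kind of supervised modulation selection, a minimal sketch follows. The features, labels, thresholds, and synthetic data are assumptions of this sketch, not the actual pipeline of [21]:

```python
# Supervised selection of a modulation scheme from regional conditions (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per region: [active users, mean SNR (dB), cell radius (km)]
X = rng.uniform([1, 0, 0.1], [200, 30, 2.0], size=(1000, 3))
# Hypothetical labels: 0 = QPSK (wide coverage), 1 = 16-QAM, 2 = 64-QAM (dense, high SNR)
y = np.digitize(X[:, 1] - 0.02 * X[:, 0], bins=[8, 18])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[150, 25, 0.3]]))  # many users, high SNR -> dense constellation
```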
A frequent issue in wireless communication systems is channel-information estimation. These characteristics summarise how information will travel from the source to the destination. To optimise overall communication, the transmission can be adapted according to the channel information to account for the current channel capacity.
Traditional channel-information estimation techniques usually demand powerful computing resources [22]. To avoid explicit Doppler-frequency estimation in MIMO systems, Mehrabi et al. [23] employed deep learning-based decision-making for channel estimation. The investigators considered mobile channels, where the Doppler rate changes from one transmission to the next, making it challenging to estimate the channel capacity. The MIMO fading channels were therefore trained and predicted over various Doppler rates using the deep learning model. Jiang et al. [24] published evaluations of deep learning models for channel estimation in two use cases: (a) a dynamic channel-information estimation methodology based on deep learning, and (b) a MIMO system with a number of co-users in which the angular power spectrum (APS) data is computed and evaluated using deep learning techniques.
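The basic idea of a learned channel estimator can be sketched as follows: a network maps received pilot symbols to channel coefficients across varying fading conditions. The flat-fade channel model, pilot length, and network sizes below are simplifying assumptions, not the setup of [23]:

```python
# Neural channel estimation from pilot symbols (toy sketch).
import torch
import torch.nn as nn

N_PILOT = 16  # pilot symbols per frame (assumed)

est = nn.Sequential(
    nn.Linear(2 * N_PILOT, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2),  # real and imaginary parts of the channel tap
)
opt = torch.optim.Adam(est.parameters(), lr=1e-3)

pilots = torch.randn(N_PILOT, dtype=torch.cfloat)  # known pilot sequence
for step in range(2000):
    h = torch.randn((), dtype=torch.cfloat)           # random flat fade per frame
    rx = h * pilots + 0.1 * torch.randn(N_PILOT, dtype=torch.cfloat)
    x = torch.cat([rx.real, rx.imag])                 # stack I/Q as real-valued input
    loss = nn.functional.mse_loss(est(x), torch.stack([h.real, h.imag]))
    opt.zero_grad(); loss.backward(); opt.step()
```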
Fault-detection systems are crucial to achieving ultra-reliable low-latency communication: faults must be found quickly for dependable communication and reduced latency, since broken hardware lengthens transmission times. Because 5G networks use a wide variety of devices, finding errors is a difficult operation that cannot be performed manually and demands advanced automated procedures. Yu et al. [25] examined high-bandwidth network issues and used their model for single-fault localisation in a 5G photonic fronthaul network. Considering single-link connections, the suggested model proved able to spot faults and false alarms in the warning output. Chen et al. [26] put forth a deep learning scheme to identify and pinpoint antenna problems in mmWave communications. The system first finds the faults with a low-cost neural network before pinpointing their exact location. Because of the large number of antennas involved in a mmWave setup, the second phase is the more difficult task, hence a more complex neural network was recommended.
The encoding and decoding procedures involve the creation of information at the source and the reconstruction of that information at the receiver. Meanwhile, data corruption can occur as a result of channel disruptions and interference, owing to the dynamic nature of the channels [27]. To reduce the total mean square error of the user signals, Kang et al. [28] designed a deep learning model to train the coding and decoding process of a MIMO-NOMA system. Kim et al. [29] developed an innovative peak-to-average power ratio (PAPR) minimisation approach using deep learning for OFDM systems. Large peak-to-average values are undesirable for battery performance, since a high dynamic range tends to consume considerable energy, particularly in remote devices. The suggested model incorporates part of the OFDM error-rate computation while the peak-to-average ratio is reduced.
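The general idea behind such learned coding and decoding schemes is an end-to-end autoencoder trained through a channel model. The following minimal sketch uses an AWGN channel stand-in and small block sizes; the cited works use far richer MIMO-NOMA and OFDM setups:

```python
# End-to-end learned encoder/decoder over a noisy channel (sketch).
import torch
import torch.nn as nn

K, N = 4, 8  # information bits per block, channel uses (assumed)

encoder = nn.Sequential(nn.Linear(K, 32), nn.ReLU(), nn.Linear(32, N))
decoder = nn.Sequential(nn.Linear(N, 32), nn.ReLU(), nn.Linear(32, K))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for step in range(3000):
    bits = torch.randint(0, 2, (256, K)).float()
    tx = encoder(bits)
    tx = tx / tx.norm(dim=1, keepdim=True)       # average power constraint
    rx = tx + 0.3 * torch.randn_like(tx)         # AWGN channel (toy stand-in)
    loss = nn.functional.mse_loss(decoder(rx), bits)
    opt.zero_grad(); loss.backward(); opt.step()
```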
Application areas for equipment-location forecasting include position-based services, wireless network management, digital mobile quality-of-service (QoS) delivery, strategic planning for mobile processing, and mobile caching [30]. Wang et al. [31] suggested a deep learning approach to forecast equipment position in ultra-dense networks. The deployment of tiny cells unavoidably results in more frequent handovers, rendering the handover process more difficult, so predicting the location of a device in this setting is crucial. Their framework was employed to plan handovers in advance and evaluate future movement: if a handover was anticipated, the deep learning model could determine the appropriate base station to acquire the user. Gante et al. [32] estimated the user's location with a deep learning algorithm applied to the received signal patterns, which carry implicit information about the respective positions.
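A minimal sketch of such mobility forecasting is a sequence model that predicts the next serving cell from a user's recent cell history. The number of cells, history length, and toy trajectory generator below are assumptions for illustration, not the framework of [31]:

```python
# Next-cell prediction from a short cell-ID history (sketch).
import torch
import torch.nn as nn

N_CELLS, SEQ_LEN = 50, 10  # assumed number of small cells and history length

class NextCell(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(N_CELLS, 16)
        self.gru = nn.GRU(16, 32, batch_first=True)
        self.head = nn.Linear(32, N_CELLS)

    def forward(self, cells):                 # cells: (batch, SEQ_LEN) integer IDs
        h, _ = self.gru(self.emb(cells))
        return self.head(h[:, -1])            # logits over the next cell

model = NextCell()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy trajectories: a user drifts through adjacent cell IDs.
for step in range(1000):
    start = torch.randint(0, N_CELLS - SEQ_LEN - 1, (64, 1))
    seq = start + torch.arange(SEQ_LEN + 1)
    loss = nn.functional.cross_entropy(model(seq[:, :-1]), seq[:, -1])
    opt.zero_grad(); loss.backward(); opt.step()
```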
2.1.2 Communication Level
Prospective 5G networks may set the standard for wireless communication systems, with a variety of devices operating at faster data rates, reduced transmission delay, and reduced power demand. The communication level is segmented into abnormal-behaviour detection systems, congestion forecasting, memory management, and resource planning.
Maimo et al. [33] suggested a deep learning technique to evaluate network traffic by collecting features from network flows. Additionally, the configuration of the cyber-defence architecture is tuned to detect abnormal traffic fluctuations, with the aim of both making the best use of the computing resources available at any given time and fine-tuning the operation of the monitoring and identification procedures. Newaz et al. [34] define HealthGuard, a security framework that uses machine learning to identify harmful activity. HealthGuard uses four machine learning-based anomaly-detection methods to find suspicious activity: an artificial neural network, a decision tree, a random forest, and k-nearest neighbours.
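The multi-detector idea can be sketched as follows: the same labelled features feed the four classifier families named above. The synthetic, imbalanced dataset is an assumption of this sketch; the cited work uses real device data:

```python
# Four anomaly detectors trained on the same labelled features (sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=12, weights=[0.9, 0.1],
                           random_state=0)  # 10% "malicious" events (assumed)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

detectors = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, det in detectors.items():
    det.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {det.score(X_te, y_te):.3f}")
```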
Internet traffic has increased ten-fold over the last decade, and this growth is a key driver of the design of the new era of mobile networks [35]. Anticipating traffic for the following day, hour, or even minute allows system resources to be monitored and controlled proactively: power consumption can be minimised, scheduling can be performed in advance, and structural problems can be averted.
Guo et al. [36] proposed a deep learning-based method for anticipating traffic in network-virtualisation mechanisms. 5G is expected to rely on network slicing to support various services and customers while essentially separating them, and a proactive access-network method built on deep learning concepts was utilised to estimate the traffic accurately. An approach with a comparable goal, DeepCog, was presented by Schiliro et al. [37]: it estimates the capacity needed to absorb growing traffic demands in distributed systems while reducing service-level violations and resource-provisioning costs.
The intricacy of management systems and coordination rises along with the number of participants, applications, and assets. Optimised resource use can both yield efficiency gains and prevent over- or under-provisioning of resources. Fortunately, recent advances in machine learning that draw on contextual information can offer an economical way to resolve these challenges in such a highly complex and volatile real-time network.
Celebi et al. [38] presented an on/off switching (OOS) method that exploits the load variable both centrally and decentrally. Load-based OOS techniques reduce energy consumption by 50% while maintaining average small-cell network (SCN) throughput; additionally, load-based approaches benefit SCN traffic and situations where delays are intolerable. Zhang et al. [39] suggested a deep learning algorithm to flexibly assign carriers to multi-carrier power amplifiers (MCPAs) while considering the energy infrastructure. The major objective was to determine the best carrier-to-MCPA allocation while limiting overall power requirements, and a combination of Gaussian relaxation and deep learning was used to solve this problem.
User-generated media content has driven cellular data consumption for the past several years. This has created new difficulties in sending large amounts of data at higher rates and lower latency from content producers to end users. The downlink links experience heavy traffic overload, especially in 5G deployments where many smaller network elements are dispersed [40]. To alleviate this problem, the most frequently used content can be cached near the network edge, for example at base stations [40]. Choosing the right cache-placement approach is difficult, though: both the choice of the most suitable content to cache and the accuracy of the position selected for storing it affect how well the cache-management strategy functions.
Shuja et al. [41] considered extremely dense virtual networks in which small base stations serve as caching nodes. To enhance the entire cache-filling process, power usage and latency must be limited. The best caching strategy was discovered using a deep learning algorithm rather than linear-programming procedures; by significantly reducing overhead, this model can optimise in real time.
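One simple way to realise learned cache placement is to score each content item with a small network and cache the top-ranked items. The per-item features, the demand target, and the cache size below are assumptions of this sketch, not the formulation of [41]:

```python
# Learned content scoring for cache placement (sketch).
import torch
import torch.nn as nn

CACHE_SLOTS = 10  # cache capacity of a small base station (assumed)

scorer = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
# Hypothetical per-item features: [recent request rate, item size, mean user distance]
items = torch.rand(100, 3)
demand = 2 * items[:, 0] + 0.1 * torch.randn(100)  # toy next-interval demand

opt = torch.optim.Adam(scorer.parameters(), lr=1e-2)
for step in range(500):
    loss = nn.functional.mse_loss(scorer(items).squeeze(1), demand)
    opt.zero_grad(); loss.backward(); opt.step()

cached = torch.topk(scorer(items).squeeze(1), CACHE_SLOTS).indices  # items to cache
```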
2.1.3 Technological Level
Self-organising networks (SONs) are a mechanism used in cellular networks to make the development, construction, management, and upgrading of radio access networks easy, quick, and systematic. Owing to its potential to decrease capital expenditure (CAPEX) and operational expenditure (OPEX), SON is a crucial technology for future mobile networks. Beyond cost, SON also encompasses QoS and system performance: improved network-resource planning can lead to better quality of service and rising revenue. Moysen and Giupponi [40] presented APP-SON, a framework for self-optimisation in 5G networks. It was created to enhance specific network performance measures depending on mobile-app characteristics, locating similar functionalities and forming groupings using Hungarian Algorithm Assisted Clustering. Deep learning was used to establish the relationship between each case, the system characteristics, and the network indicators. As an illustration, multimedia-app indicators can be used to determine that video traffic should be given precedence because it makes up over 90% of all traffic.
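The Hungarian algorithm at the core of such clustering steps solves a one-to-one, cost-minimising assignment. A minimal sketch with SciPy follows; the app profiles, KPI features, and centroids are illustrative assumptions, not APP-SON's actual data:

```python
# Optimal one-to-one assignment of app profiles to cluster centroids (sketch).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
apps = rng.random((5, 4))       # 5 app traffic profiles, 4 KPI features (assumed)
centroids = rng.random((5, 4))  # 5 candidate cluster centres

cost = cdist(apps, centroids)                  # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)       # Hungarian-style optimal matching
print(list(zip(rows, cols)), cost[rows, cols].sum())
```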
2.2 Research Question 2: Which learning modality (supervised, unsupervised, or reinforcement learning) is most commonly used to address 5G issues?
The Internet generates a tremendous amount of data, produced not only by people but also by software, cell phones, and other technology. A developer will decide how to train an algorithm with a particular learning model depending on the type of data available and the problem at hand. The learning techniques discussed below are illustrated in Fig. 3.
The majority of the publications employed the supervised learning method, even though labelled datasets are difficult to come by in 5G settings. This strategy greatly benefits both regression and classification tasks. A model can learn from supervised input, which comes in the form of a labelled dataset, and as a result solve problems more quickly. The approach aids in predicting categorical values: the input data can be assigned to a specific class or group [40].
Unsupervised learning stands in direct contrast to supervised learning: in essence, it does not rely on a comprehensive, well-labelled dataset. Unsupervised learning is self-directed; its primary goal is to investigate hidden patterns and predict outcomes. The machine is given data and instructed to search for hidden traits and group the data in a logical manner.
Komisarek et al. [35] presented a blended technique using both supervised and unsupervised learning to train the model and approximate the best combined resource-provisioning and power-consumption approach. Kim et al. [29] used unsupervised learning to construct a deep learning framework that maps and de-maps the signal constellation on each sub-carrier of an OFDM system while reducing the BER. Le et al. [43] presented an unsupervised deep learning model using a MU-SIMO subsystem representation; its primary goal was to reduce the disparity between the transmitted and received signals.
Reinforcement learning, the third modality, relies on neither supervised nor unsupervised learning as its foundation: its algorithms learn how to respond to their context on their own. The field is expanding quickly and producing a wide range of learning strategies, from which domains such as automation and graphics can benefit.
Jiang et al. [24] employed reinforcement learning for resource allocation to improve the power consumption and latency of URLLC services. Zhao et al. [44] also considered a URLLC service, but focused on using reinforcement learning to enhance the packet scheduling of a mmWave protocol. In Ahad et al. [5], reinforcement learning was applied to choose radio parameters and maximise various metrics for the scenario under consideration.
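The mechanics common to these works can be sketched with a tabular Q-learning loop. The states, actions, and reward model below (queue-length buckets mapped to resource-block grants) are toy assumptions for illustration, not the formulations of [24, 44]:

```python
# Tabular Q-learning for a toy resource-allocation task (sketch).
import numpy as np

N_STATES, N_ACTIONS = 10, 4        # queue-length buckets, resource-block grants
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

state = 0
for step in range(20000):
    action = rng.integers(N_ACTIONS) if rng.random() < eps else Q[state].argmax()
    arrivals = rng.integers(0, 3)                       # toy traffic arrivals
    next_state = int(np.clip(state + arrivals - action, 0, N_STATES - 1))
    reward = -next_state - 0.5 * action                 # penalise backlog and spend
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state
```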
2.3 Research Question 3: Which deep learning methods are most commonly used in 5G environments?
The typical deep learning approaches used to resolve 5G issues in the literature are depicted in Fig. 4. The most common is the traditional neural network with fully connected layers, followed by long short-term memory (LSTM) networks and convolutional neural networks (CNNs). Before describing the studies that applied each deep learning architecture to 5G wireless mobile networks, the fundamentals of each architecture are presented.
A recurrent neural network (RNN) can handle sequential data, including time series, speech, and language, because of its ability to retain knowledge about previous elements when processing the current element of a sequence. Some works used the plain RNN [45] to deal with sequential data, while several others used RNN variants.
Panse et al. [46] developed a deep learning-based digital cancellation method to remove linear and non-linear interference. After the model receives a signal, a custom loss function represents the residual interference between the actual and estimated self-interference signals. This model was based on an RNN with a bespoke memory unit. Zhao et al. [44] proposed an LSTM model for processing throughput-volume data; the model forecasts real-time downlink congestion in order to provide relevant data and enhance the reliability of the planned offloading program. In [47], Sun et al. used a biLSTM model to capture the memory component of a power amplifier (PA): the biLSTM's ability to take into consideration both the forward and reverse temporal dynamics of the input matches the memory behaviour of the PA.
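An LSTM throughput forecaster of the kind used in [44] can be sketched as follows: the model predicts the next traffic sample from a sliding window of history. The synthetic diurnal trace and window length are assumptions of this sketch:

```python
# LSTM forecasting of downlink traffic volume (sketch).
import torch
import torch.nn as nn

WINDOW = 24  # hours of history per prediction (assumed)

class Forecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(1, 32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):              # x: (batch, WINDOW, 1)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])     # next-step traffic volume

t = torch.arange(2000.0)
trace = torch.sin(2 * torch.pi * t / 24) + 0.1 * torch.randn(2000)  # toy diurnal load

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    i = torch.randint(0, 2000 - WINDOW - 1, (64,))
    x = torch.stack([trace[int(j):int(j) + WINDOW] for j in i]).unsqueeze(-1)
    y = trace[i + WINDOW].unsqueeze(-1)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```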
CNN models were designed to process data from numerous matrices or multidimensional matrices and identify sets of features from them. The convolution layer is thus used to handle data of various dimensionalities: 1D for signals and sequences, 2D for images or sound spectrograms, and 3D for video [10]. To take advantage of the feature convolutions performed by the CNN layers, Zhang et al. [48] supply the required information to the CNN models in the form of images. Both temporal and spatial aspects were considered in the works presented: the key planning parameters behave differently depending on the time of day and the location of the base station. The research therefore uses CNNs to examine spatial and temporal characteristics simultaneously and extract the pertinent coupled features.
The works discussed by Zhou et al. [19] used CNN architectures with a variety of variables that affect channel state information as model inputs, such as frequency band, position, time, temperature, humidity, and weather. The authors considered 1D and 2D convolutions in order to extract a periodic indicative tensor from the data.
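The spatio-temporal feature extraction described above can be sketched with a 2D CNN that scans a grid of per-cell measurements (space) stacked over time (channels). The tensor sizes below are illustrative assumptions:

```python
# 2D CNN over a spatial grid with time steps as channels (sketch).
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(in_channels=6, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),   # e.g. predicted load for the central cell
)

frames = torch.randn(8, 6, 20, 20)  # 8 samples, 6 past time steps, 20x20 cells
print(cnn(frames).shape)            # torch.Size([8, 1])
```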
Deep belief networks (DBNs) are appealing for problems with a small number of labelled data points and a sizeable number of unlabelled ones. This rests on the idea that unlabelled data is used to pre-train the model, while labelled data is used to fine-tune the network during training [49]. As a result, unsupervised and supervised learning are merged throughout the training phase of this deep learning approach.
Using such a hybrid technique, Yu et al. [25] trained a similar DBN model for fault detection on optical fronthauls. The dataset, made up of connection-malfunction occurrences, was acquired from an actual network operator's processing system.
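The pre-train-then-fine-tune recipe behind DBNs can be sketched as follows, with an autoencoder standing in for the RBM stack (a deliberate simplification): plentiful unlabelled data shapes the encoder, after which a few labels tune the classifier head. The data shapes and label counts are assumptions of this sketch:

```python
# Unsupervised pre-training followed by supervised fine-tuning (sketch).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16), nn.ReLU())
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 20))
head = nn.Linear(16, 2)  # e.g. fault / no-fault, as in the setting of [25]

# Phase 1: unsupervised reconstruction on plentiful unlabelled samples.
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
unlabelled = torch.randn(5000, 20)
for step in range(500):
    x = unlabelled[torch.randint(0, 5000, (128,))]
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: supervised fine-tuning on the small labelled set.
labelled_x, labelled_y = torch.randn(200, 20), torch.randint(0, 2, (200,))
opt = torch.optim.Adam([*encoder.parameters(), *head.parameters()], lr=1e-4)
for step in range(200):
    loss = nn.functional.cross_entropy(head(encoder(labelled_x)), labelled_y)
    opt.zero_grad(); loss.backward(); opt.step()
```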
2.4 Research Question 4: What are the biggest unsolved research problems in the 5G and deep learning fields?
The most important open research concerns can be identified from the publications' descriptions of future work. Many projects intend to test their approach on real system components or data. The absence of real-world datasets and trials is frequently cited by authors as a fundamental weakness of their current work, leading them to rely on datasets produced via simulation. The use of simulated data may restrict the breadth of the conclusions, even if it is not in and of itself a mistake. Indeed, deep learning techniques are all the more appealing here because it is hard to derive analytical equations in this context. Deep learning algorithms can handle a wide range of data types and can take in multimodal model parameters, but the solutions proposed are frequently simplified to meet lower computing requirements.
The intricacy of the situations to be resolved rises significantly with the incorporation of these parameters, and the effectiveness of the system is significantly affected by the existence of other factors [11]. This is a problem when working with real-world systems like 5G. It is also crucial to ascertain the degree of complexity required to understand an issue. Reinforcement learning is an exploratory method that can be implemented in 5G networks; it is used for adapting and responding quickly to the present condition. The available actions and the context must be described in this model so that the agent can learn to carry out the actions that maximise the reward, and no training data is required for the learning model. The issue is that one may not be able to afford letting an agent make poor choices in an effort to learn, because those choices may be costly to the network's ability to function. This type of issue is also prevalent in other critical fields, such as clinical applications, where exploratory deep learning is not an option because of the potential harm to living beings.
Computational power and consistency can occasionally be an issue for deep learning, particularly when a large number of devices are involved. Naturally, several publications identified improving the efficiency of their methods as an open issue and an area for further investigation; indeed, the efficiency of the implemented deep learning system has a direct bearing on the efficiency of the overall system. Some efforts aim to tweak the proposed method, others want to prune their existing networks, and still others are considering entirely different deep learning models with more appropriate kinds of layers.
Finally, researchers think it is critical to emphasise how well 5G networks and the Internet of Things integrate. Future IoT applications introduce new performance requirements, including massive connectivity, confidentiality, dependability, wide communication range, ultra-low latency, efficiency, and ultra-reliability [23]. This is no coincidence: the majority of these needs are covered by the anticipated 5G services.