Simulating Solar Radio Bursts Using Generative Adversarial Networks

Solar flares are one of the most extreme drivers of space weather in our solar system. The impulsive solar radio emission associated with a solar flare is known as a solar radio burst (SRB). SRBs are generally studied in dynamic spectra and are classified into five major spectral classes, Type I to Type V, based on their shape, frequency range, and time duration. Because of this intricate characterisation, generating a training set for object-detection and classification models of such phenomena is a significant challenge in machine learning. Current algorithms implement parametric modelling, in which the quantity, grouping, intensity, drift rate, heterogeneity, start–end frequency, and start–end time of Type-III and Type-II radio bursts are all randomised. However, this model does not capture the true shape or general features seen in real dynamic-spectrum observations of the Sun, which can be crucial when training classification or object-detection algorithms. In this research, we introduce a methodology based on Generative Adversarial Networks (GANs) for generating realistic SRB simulations. By using real examples of Type-III and Type-II SRB data, we train GANs to generate images almost comparable to real observed data. Furthermore, we evaluate the generated model using human perception, then compare and contrast the results using a metric known as the Fréchet Inception Distance.


Introduction
Solar flares are the most powerful explosive occurrences in the solar system, causing particles to accelerate to near-light speed (Lin, 2011). From gamma rays to radio waves, the accelerated particles emit light over the entire electromagnetic spectrum. Solar radio bursts (SRBs) are characterised by high-intensity radio emission that appears as complex patterns in dynamic spectra (Pick and Vilmer, 2009). SRBs are classified into five types, Type I to Type V, based on their shape in the dynamic spectra. Because they can happen hundreds of times per day (particularly Type-III bursts), detecting them and determining their spectral properties is a computational challenge. Classifying these radio bursts, however, is a major difficulty due to their complicated characterisation. New technology, such as the LOw Frequency ARray (LOFAR: Van Haarlem et al., 2013), has made this task more difficult in recent years by generating high-volume data streams (up to 3 Gb/s at a single station) of radio-burst observations that must be identified quickly and accurately. The development of LOFAR for Space Weather (LOFAR4SW) (Carley et al., 2020a), a system upgrade that aims to autonomously monitor solar radio activity, has brought to light the need for automated data pipelines for solar radio bursts. In the near future, such a system will require software pipelines to automatically detect SRBs. With recent research into classification algorithms such as Support Vector Machines (SVMs) (Evgeniou and Pontil, 2001) and Random Forest (RF) (Louppe, 2014), and object-detection algorithms such as You Only Look Once (YOLO) (Redmon and Farhadi, 2017), for detecting Type-III and Type-II SRBs, it has become clear that a high-quality simulated training set is required to make these algorithms more resilient and robust, with increased accuracy and precision. This paper establishes the role of Generative Adversarial Networks (GANs) in generating images for such training sets.
Traditional methods of creating training sets for SRB classification often involve the tedious task of sifting through gigabytes of data archives and collecting relevant training examples of the desired radio burst. The idea of generating simulated data eliminates this tedious task as it allows us to generate data that not only look like SRBs but also produce it in volume and in a short space of time. In previous attempts, parametric modelling allowed us to create SRB-like images in huge quantities along with grouping, intensity, drift rate, heterogeneity, start-end frequency and start-end time, all characteristics of a Type III (Carley et al., 2020b). However, simulated data produced from simple parametric models do not produce realistic simulated data when compared to real observed data.
The introduction of Generative Adversarial Networks in 2014 was a game-changer for computer vision (Goodfellow et al., 2020). The idea of pitting two neural networks against each other to generate new, realistic images was unprecedented. The architecture of a GAN is composed of two convolutional neural networks (CNNs) (O'Shea and Nash, 2015), one of which is inverted. The original study showed that GANs could produce images almost identical, albeit small (32 × 32 pixels), to those in the Modified National Institute of Standards and Technology (MNIST) handwritten-digit dataset (Deng, 2012). This essentially introduced the idea of dataset creation using neural networks, and it has been widely adopted ever since.
The follow-up development of Deep Convolutional GANs (DCGANs) in 2015 (Radford, Metz, and Chintala, 2016) provided architectural structure and stability to the two CNNs within the GAN network, in particular the upsampling generator network. This research introduced techniques such as batch normalisation, convolutional strides, removal of fully connected layers, the ReLU and Leaky ReLU activation functions, and the Tanh activation function for the generator. These changes to the GAN architecture provided a much smoother training process, which is very beneficial when training on custom datasets. In this research, we use these improvements to generate simulated SRBs. Salimans et al. (2016) illustrated a significant increase in resolution, unprecedented for a generative network. Improved training stabilisation, and the introduction of an evaluation metric, allowed the researchers to produce 128 × 128 pixel images. This was significant, as GANs became more "training friendly" on custom datasets while producing good-quality images, albeit at a very high computational cost.
Generative deep-learning models such as DCGANs are also playing a crucial role in classifying SRBs (Zhang et al., 2021), where the generative network of a GAN is converted into a classification technique. This algorithm was applied to Culgoora Observatory and Learmonth Observatory data at 25 to 180 MHz, similar to LOFAR metric wavelengths, and achieved between 89 and 92% accuracy at classifying Type-III SRBs. In contrast, the model in our research is used to generate simulated SRB data for training object-detection and classification algorithms. We also calculate similarities between real observed data and simulated data produced by GANs.
Recent research has adapted the GANs concept and improved it further to create High-resolution Deep Convolutional Generative Adversarial Networks (HDCGANs) (Curtó et al., 2017). HDCGANs are capable of producing 512 × 512 pixel images of realistic simulated human faces, further showing the potential for GANs to create datasets.
The Search for Extra-terrestrial Intelligence (SETI) is another area where machine learning is being used in conjunction with radio-telescope observations (Zhang et al., 2018). SETI uses machine learning to look for Fast Radio Bursts (FRBs) in planetary systems using its Allen telescope array. SETI creates FRBs using simulation methods due to the scarcity of the phenomena. This allows for the generation of large datasets to train an algorithm known as ResNet to identify FRBs. This highlights the significance of radio-burst simulation techniques.
Previously, we used the parametric modelling method to train YOLO to detect Type-III SRBs (Scully et al., 2021). Using parametric modelling as a source for generating simulated data for YOLO to train on, we obtained an accuracy score of 82.63% when detecting Type IIIs. We noted some key areas where improvements could be made, most notably the simulated training set.
In this research, we implement GANs to generate both Type-III and Type-II SRB dynamic spectra. We gather real SRB data from the I-LOFAR data archive, which contains examples of real Type IIIs and real Type IIs. We then train the GAN on these two separate training sets, from which it produces realistic SRB images. In Section 1 of this paper, we discuss the I-LOFAR telescope and how it obtains its data, the target SRB phenomena, and the idea of simulating SRBs. In Section 2, we describe the GAN architecture and the difficulties in training such a network. In Section 3, we discuss the training of the network. We then evaluate the GAN model using human perception and a metric known as the Fréchet Inception Distance (FID) score in Section 4.

The LOw-Frequency ARray (LOFAR)
The LOFAR radio interferometer, which includes Ireland's own I-LOFAR station (see Figure 1), was built in the north of the Netherlands and across Europe and produced the data for this research. In the largely unexplored low-frequency region between 10 and 250 MHz, LOFAR offers a unique set of observational modes for observing the radio universe. Because LOFAR employs digital beam-forming techniques, it can observe several different radio-astronomical objects simultaneously. The system can operate as a very-long-baseline interferometer, employing multiple stations simultaneously, or each station can operate as a stand-alone telescope. The essential functions of LOFAR antenna stations are the same as those of traditional interferometric radio dishes. These stations, like traditional radio dishes, have a large collecting area and high sensitivity, as well as pointing and tracking capabilities. LOFAR stations differ from traditional radio dishes in that they do not physically move. LOFAR uses a combination of analogue and digital beam-forming techniques to combine signals from individual antennas into a phased array, making the system more flexible and agile.
Station-level beamforming allows for rapid telescope re-pointing and simultaneous observations from a single station. The digitised, beam-formed data from the stations can then be transferred to a central processing facility, where they can be correlated to produce visibilities for imaging and observation analysis. Single-station beam-formed data are typically assembled into a dynamic spectrum with a time resolution of 5 microseconds and a frequency resolution of 195 kHz. The dynamic spectra are composed of 488 frequency channels, and these data can be recorded at a rate of several terabytes (TB) per hour. The I-LOFAR team has recently built a high-performance computing system for the purpose of processing and recording raw beam-formed data (Murphy et al., 2021). Due to the high volume of data, automated algorithms are required to sort and classify any phenomena of interest. In our case, the classification and detection of SRBs is the primary goal. In this research, we use the processed observations generated by I-LOFAR to create an unlabelled training set on which GANs can be evaluated.
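A back-of-the-envelope check shows how the quoted time and frequency resolution translate into the data rates mentioned above. The byte depth per sample is an assumption for illustration, not a figure from the text:

```python
# Rough dynamic-spectrum data rate from the figures quoted in the text,
# assuming 32-bit (4-byte) samples -- the byte depth is an assumption.
channels = 488            # frequency channels per dynamic spectrum
spectra_per_s = 1 / 5e-6  # 5-microsecond time resolution
bytes_per_sample = 4      # assumed sample size

rate_tb_per_hour = channels * spectra_per_s * bytes_per_sample * 3600 / 1e12
# about 1.4 TB/hour, consistent with the several-TB-per-hour figure
```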

Solar Radio Bursts
Solar radio bursts are frequently observed in dynamic frequency-versus-time spectra. Type-III radio bursts are the most prevalent; they are short lived and manifest as a vertical strip, as shown in Figure 2. Simulating such phenomena is made more complicated by the many different forms a Type III might take, such as being smooth or patchy, faint or intense, superimposed on other radio bursts, free-standing or in groups, or embedded in strong radio-frequency interference (RFI). Low-frequency counterparts can frequently turn fan-like and last for several minutes. Here, we examine Type IIIs in the frequency range of 10 – 100 MHz, where they generally occur as a vertical stripe.

Figure 2
Examples of real Type-III solar radio bursts as seen in a dynamic spectrum in the frequency range of 20 – 90 MHz.
Type-II radio bursts are produced by eruption-driven shockwaves in the solar atmosphere and can last anywhere from a few seconds to several minutes, drift in dynamic spectra at tens of MHz per second, and have several emission lanes. Successful detection and classification is made more difficult by the wide range of shapes that a Type II can take. They can be inhomogeneous, brief or long lasting, embedded in intense RFI, and overlaid on other bursts, just as Type IIIs can. Figure 3 shows how challenging Type-II classification is. Unlike Type IIIs, their shape is unaffected by the observation frequency, i.e. they have the same shape from high to low frequency, albeit with a quicker drift rate at high frequency.

SRB Simulation
Simulating SRBs has been crucial to creating large datasets for training classifier algorithms. It eliminates the tedious task of searching through large data archives and selecting suitable images appropriate for a training set. Simulating data also eliminates the need for cleaning any interference within this archive data. Our previous SRB simulation method involved using parametric models, in which polynomials were used to create the Type-III shape in dynamic spectra, while skewed Gaussians were used for their temporal intensity profile at each frequency. This method produced Type-III radio bursts that were random in number, grouping, intensity, drift rate, heterogeneity, start–end frequency, and start–end time. We embedded the bursts in a background of simulated and random RFI channels, an example of which can be seen in Figure 4. We performed similar operations for the generation of Type-II radio bursts, where their drift rate, positions in the spectrogram, heterogeneity, and length in time and frequency were randomised. This technique offered many benefits for creating a good training set, including high-volume data generation and automatic labelling of the images produced. However, these parametric-modelled simulations lacked realistic characteristics of actual Type-III and Type-II SRBs, such as SRB shape variety, grouping, intensity, drift rate, and background intensity. The introduction of GANs enabled us to create realistic Type-III and Type-II SRB simulations comparable to actual I-LOFAR observations.
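To make the parametric approach concrete, the drift-plus-intensity-profile idea can be sketched in a few lines. This is a toy model with a linear frequency drift and plain (rather than skewed) Gaussians; the function name and all parameter values are illustrative, not those of the actual pipeline:

```python
import numpy as np

def parametric_type_iii(n_freq=128, n_time=128, t0=40.0, drift=-0.6, width=3.0):
    """Toy parametric Type-III burst: the burst arrival time drifts
    linearly with frequency channel, and each channel carries a
    Gaussian temporal intensity profile. Illustrative values only."""
    spec = np.zeros((n_freq, n_time))
    times = np.arange(n_time)
    # Arrival time per frequency channel (linear term of a drift polynomial).
    arrival = t0 + drift * np.arange(n_freq)
    for i in range(n_freq):
        spec[i] = np.exp(-0.5 * ((times - arrival[i]) / width) ** 2)
    return spec

spec = parametric_type_iii()  # a 128 x 128 dynamic spectrum with one burst
```

In the full method, randomising the burst count, grouping, drift, and intensity, and adding simulated RFI channels, would build on this same skeleton.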

Generative Adversarial Network
GANs are a form of generative modelling that employs deep-learning CNNs. In machine learning, generative modelling is a type of unsupervised learning that automatically detects and learns regularities or patterns in input data, so that the resulting model may be used to produce new examples that could plausibly have been drawn from the original dataset. GANs are a clever way of training a generative model by framing the problem as a semi-supervised learning problem with two sub-models: the generator, an inverted CNN that we train to generate new examples, and the discriminator, which tries to classify examples as real (from the training set) or fake (produced by the generator).

Figure 5
Example of a GAN architecture for generating simulated Type-III and Type-II SRB data. The task of the generator is to take random input values (noise) and create an image using a deconvolutional neural network. We use five transpose-convolutional layers with a mixture of 2 × 2 and 1 × 1 strides to upsample the data. After each transpose layer we apply batch normalisation and Leaky ReLU activation, which modifies the ReLU function to allow small negative values when the input is less than zero. At the final layer we use the Tanh activation function, because the generated images are typically normalised to the range [0, 1] or [−1, 1]. A batch of real training-set images or fake images (depending on the training stage) is then input to the discriminator, where it passes through five convolutional layers with a mixture of 2 × 2 and 1 × 1 strides to downsample the data. After each convolutional layer we again apply batch normalisation, followed by ReLU activation, which converts all negative values to 0. At the final layer we use a sigmoid activation function, which normalises the output to the range [0, 1].
The two models are trained in an adversarial zero-sum game until the discriminator model is unable to distinguish images that are fake or real, indicating that the generator model is producing believable examples, see Figure 5 for the architecture. To put it simply, GANs generate realistic data based on existing available data.
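As a sanity check on the upsampling path described for Figure 5, the standard transpose-convolution output-size formula shows how a small noise seed can reach 128 × 128. This sketch assumes a 4 × 4 seed, 4 × 4 kernels, stride 2, and padding 1 throughout, a common DCGAN choice rather than the exact layer configuration used here (which mixes 2 × 2 and 1 × 1 strides):

```python
def conv_t_out(size, kernel=4, stride=2, pad=1):
    # Output size of a square 2-D transpose convolution:
    # out = (in - 1) * stride - 2 * pad + kernel
    return (size - 1) * stride - 2 * pad + kernel

size = 4  # spatial size of the reshaped noise input (assumed)
for _ in range(5):
    size = conv_t_out(size)
# five stride-2 layers double the size each time: 4, 8, 16, 32, 64, 128
```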

Discriminator
Discriminators are simple binary image classifiers that accept an image as input and output whether the picture is real (output = 1) or fake (output = 0). The goal of the discriminator is to be the best at distinguishing fake images from real ones. As a result, we must consider two cases when determining the amount of error that the discriminator makes during the training phase:
• When the discriminator is fed real images, the error produced is called a real error or positive error.
• When the discriminator is fed fake images created by the generator, the error produced is called a fake error or negative error.
During the training phase, the objective function of the discriminator is the sum of the positive and negative errors to be optimised:

\[ \max_{D} \; \frac{1}{m} \sum_{i=1}^{m} \left[ \log D(x_i) + \log\left(1 - D(G(z_i))\right) \right], \]

where log D(x_i) refers to the probability that the discriminator correctly classifies the ith image of the mini-batch m of real images, and log(1 − D(G(z_i))) is the probability of correctly labelling the ith fake image that comes from the generator. We update the GAN network by inputting a batch of 128 × 128 × 3 real images to the discriminator and using the output to update the discriminator (or rather, to update the weights of the discriminator with each epoch to minimise the loss). The output vector has values ranging from 0 to 1. We then compare these predicted values to their true labels, i.e. real images are labelled 1 and fake images are labelled 0. After calculating the loss of the discriminator across the real images, we determine the gradient, i.e. the derivative of the loss function with respect to the weights of the model. The generator is then used to generate a batch of fake images from a random noise input. These images are fed into the discriminator, which gives predictions for the fakes (values between 0 and 1). We compute the loss, or how far the predicted labels are from the true labels, by comparing the predicted values to their true labels, which in this case are 0. After calculating the loss across the fake images, the derivative of the loss function is again used to calculate gradients, just as with the real images. Finally, to minimise the overall discriminator loss, the weights W_n are adjusted, where n is the iteration index:

\[ W_{n+1} = W_{n} - \eta \, \frac{\partial L}{\partial W_{n}}, \]

where η is the learning rate. The learning rate describes how the weights of the network are adjusted in relation to the gradient of the loss; it determines how quickly or slowly we approach the optimal weights. The gradient-descent algorithm minimises the cost function at each step to estimate the model weights over a number of iterations.
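The positive-plus-negative error described above is a standard binary cross-entropy; a minimal numpy sketch (the function name is ours, for illustration) shows how the two terms combine:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy form of the discriminator objective:
    the positive (real) error plus the negative (fake) error,
    averaged over the mini-batch. d_real and d_fake are the
    discriminator's sigmoid outputs in (0, 1)."""
    d_real = np.asarray(d_real)
    d_fake = np.asarray(d_fake)
    real_err = -np.mean(np.log(d_real + eps))        # wants D(x) near 1
    fake_err = -np.mean(np.log(1.0 - d_fake + eps))  # wants D(G(z)) near 0
    return real_err + fake_err

# A confident, correct discriminator has a low loss; a fooled one does not.
good = discriminator_loss([0.9, 0.95], [0.05, 0.1])
bad = discriminator_loss([0.5, 0.5], [0.5, 0.5])
```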

Generator
The most difficult task for the generator is to create an image that is realistic enough to mislead the discriminator. Using the discriminator, the amount of error from the generator during the training phase can be easily calculated. The objective function of the generator is given by:

\[ \min_{G} \; \frac{1}{m} \sum_{i=1}^{m} \log\left(1 - D(G(z_i))\right). \]

Generators are more complicated than discriminators. For each image, they take an array of "random numbers", identical in size to the discriminator input (128 × 128 × 3), to simulate a noisy input. It should be noted that these "random numbers" are initially influenced by the output of the discriminator. This input is fed into the CNN of the generator, whose output is an image that is further refined by multiple passes through the CNN. When updating the generator, a procedure similar to that for the discriminator is followed. We begin by passing a batch of fake images (produced during discriminator training) to the discriminator. The reason for this is that the discriminator is updated before we begin updating the generator, necessitating a forward pass of the fake batch. The loss is then determined using the output of the discriminator and the label of the images. It is worth noting that, although these are fake images, we set their label to 1 throughout the loss calculation: the generator sets the labels to 1 because it wants the discriminator to believe it is producing real images. Ideally, when the fake images are given as input, the generator wants the discriminator to output 1, or as close to 1 as possible. The loss function is thus used to reduce the difference between the output of the discriminator for fake images and its output for real images. Finally, based on the gradient, the weights are modified to minimise the total generator loss, and the process repeats.
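The label trick described above (fake images scored against a "real" label of 1) can be sketched as a cross-entropy against a target of 1; again the function name is illustrative:

```python
import numpy as np

def generator_loss(d_fake, eps=1e-12):
    """Generator objective sketch: fake images are labelled 1, so the
    loss is the cross-entropy between the discriminator's output on
    fakes and 1. Minimising it pushes D(G(z)) towards the 'real' label."""
    d_fake = np.asarray(d_fake)
    return -np.mean(np.log(d_fake + eps))

# The loss falls as the discriminator is increasingly fooled.
losses = [generator_loss([p]) for p in (0.1, 0.5, 0.9)]
```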

Convergence Failure and Mode Collapse
Because the GAN training process is volatile (Goodfellow, 2016), as shown in Figures 6 and 7, and because there is no objective assessment of model performance (Borji, 2019), we have no way of knowing when the training process should end and when a final model should be saved for subsequent use. As a result, it is common to keep the current states of the generator and the model, creating a large number of fake images during training. One indicator that the GAN network is producing fake generated images comparable to real ones is the training loss of the network. When the training loss of the generator spikes, it is a good indication that the generator is producing noisy or bad SRB images during that step; this is known as convergence failure (Goodfellow, 2016). It manifests as a significant divergence between the training losses of the discriminator and the generator that typically lasts for more than 10 iterations. This is evident in Figures 6A and C.
Another key issue in training GANs is mode collapse (Goodfellow, 2016), in which the generator gets stuck in a setting where it always produces the same output. This happens because the main aim of the generator is to fool the discriminator, not to generate diverse output. In other words, the discriminator gets stuck in a local minimum and cannot find the best strategy for rejecting the repeated outputs of the generator, and thus the generator keeps producing that one type of image. In a line plot of training loss, mode collapse shows up as oscillations in the loss over time, most notably in the loss of the generator. Another way to identify mode collapse is to inspect the images produced by the GAN after training: during mode collapse, a GAN will produce roughly the same type of image over and over again. This enables post-epoch evaluation of each saved generator model based on its generated images. Models can be saved systematically across training epochs, such as once every one, five, ten, or more epochs; in our case we save the model, and the images it generates, after every epoch.
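A simple way to flag candidate convergence failures of the kind described above is to scan the two loss curves for sustained divergence. In this sketch the gap threshold is illustrative; only the more-than-10-iterations criterion is taken from the text:

```python
import numpy as np

def divergence_runs(d_loss, g_loss, gap=2.0, min_len=10):
    """Return (start, end) index pairs where the generator and
    discriminator losses diverge by more than `gap` for longer than
    `min_len` iterations. Threshold values are illustrative."""
    diverged = np.abs(np.asarray(g_loss) - np.asarray(d_loss)) > gap
    runs, start = [], None
    for i, flag in enumerate(diverged):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start > min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(diverged) - start > min_len:
        runs.append((start, len(diverged)))
    return runs

# Synthetic example: stable training, then a 20-step generator loss spike.
d = np.full(100, 0.7)
g = np.full(100, 1.0)
g[40:60] = 5.0
spikes = divergence_runs(d, g)  # one run covering iterations 40-59
```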

Type-III Generation
The GAN network was first trained to generate simulated Type-III SRBs. The training set consists of 2763 real Type-III images, containing just over 33,000 real observed Type-III examples, gathered by combining different observation days from I-LOFAR. We focused on the 20 – 100 MHz range, as this is where we see the Type-III vertical strip shape. These observation days vary between busy and quiet in terms of solar activity and also contain a variety of different Type IIIs, such as inverted-U bursts and Type-N bursts. The data were then cleaned, as we want the GAN to generate images that do not contain the interference usually seen in observations, such as embedded RFI. As a GAN is a semi-supervised algorithm, we do not need to label the dataset, but we do have to classify it as a whole, meaning the collection of training images falls under the same Type-III label. The images can therefore be fed straight into the algorithm for training.

Figure 6
The loss-error battle between the discriminator and the generator when generating Type IIIs, illustrating the GAN's learning pattern. Convergence failure is visible in two training instances, panels A and C, where the generator training loss spikes.
This research was performed on a machine comprising two SLI-interconnected Nvidia GeForce RTX 2080 Ti GPUs, running Ubuntu 20.04.2 LTS on an AMD Ryzen Threadripper 1950X with 32 GB of RAM. For the training configuration, we used 90% of GPU capacity for a variety of different epoch counts at a batch size of 32, as seen in Figure 6. The GANs were trained numerous times, which allowed us to build a collection of over 5000 generated simulated Type-III SRBs. However, the generated images were small (128 × 128 pixels), so we bulk-rescaled them from 128 × 128 to 256 × 256. Once this was done, we had a new simulated training set that we can use for classification and object-detection algorithms such as YOLO.
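The bulk 128 × 128 to 256 × 256 rescaling step can be sketched with a nearest-neighbour pixel repeat; in practice an image library with proper interpolation would likely be used, so this is only a minimal stand-in:

```python
import numpy as np

def upscale_2x(img):
    """Nearest-neighbour 2x rescale (e.g. 128x128 -> 256x256) by
    repeating each pixel along both axes."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

img = np.arange(128 * 128).reshape(128, 128)
big = upscale_2x(img)  # each source pixel becomes a 2x2 block
```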

Type-II Generation
When training GANs to generate Type-II images, the task became somewhat more complex. One constraint is the lack of observed data in the I-LOFAR data archive, due to the Sun's 11-year solar cycle: since 2017, when I-LOFAR was commissioned, the Sun has been in solar minimum, so activity has been much lower. Consequently, we could only gather five observed Type-II SRBs. The five observed Type IIs in question spanned a couple of hours, so by dividing them into 10-minute segments we could create a training set totalling 214 images for the GAN to train on. This amount of data is not enough to train a GAN, as it creates issues such as over-fitting and mode collapse during training. To combat this, we implement data augmentation, taking copies of the images and editing them slightly using techniques such as blurring, cropping, and feature removal to give variety within the dataset. We were thus able to increase the training set to 1527 images. Another issue with Type-II SRBs is their shape; their structure in an image has a degree of randomness. We decided to take all images into account, even those with radio interference within the data, as we needed the maximum number of images for the GAN to train on.

Figure 7
The loss-error battle between the discriminator and the generator when generating Type IIs, again showing the learning pattern of the GAN on Type-II data. The number of epochs is increased due to the lack of data in the training set, but the algorithm trains well up to the 1500-epoch mark. Convergence failure is again visible in panels A, B, and C, where it actually corrects itself during training.
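Two of the augmentation operations mentioned above, blurring and feature removal, can be sketched as follows. The 3 × 3 box blur and the 16 × 16 erase patch are illustrative sizes, not the values used in our pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def box_blur(img):
    """3x3 box blur via shifted sums; edges handled by edge padding."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def random_erase(img, size=16):
    """'Feature removal': zero out a random square patch of the image."""
    out = img.astype(float).copy()
    y = rng.integers(0, img.shape[0] - size)
    x = rng.integers(0, img.shape[1] - size)
    out[y:y + size, x:x + size] = 0.0
    return out

img = rng.random((128, 128))
augmented = [box_blur(img), random_erase(img)]  # two extra training variants
```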
The Type IIs were trained on the same machine and training setup as the Type IIIs; the only difference is the increased number of epochs (see Figure 7) to allow the GAN to work with this complex yet very small amount of data. We again trained the GAN numerous times to create a collection of over 5000 generated simulated Type-II examples. The generated data were resized to 256 × 256. However, due to the above constraints, the generated Type IIs contained a lot of noise. This noise contained RGB colouring that distorted the images and, in some cases, disrupted the shape of the Type IIs. To combat this, the generated simulated Type IIs were converted to greyscale, which eliminated the RGB colour noise produced by the GAN in training.
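The greyscale conversion can be sketched as a standard luminance weighting; we assume the common BT.601 weights here, and the actual conversion tool used may differ:

```python
import numpy as np

def to_greyscale(rgb):
    """Luminance-weighted RGB -> greyscale conversion, one way to strip
    RGB colour noise from generated images. rgb has shape (H, W, 3)."""
    weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights
    return rgb @ weights

rgb = np.ones((256, 256, 3))   # stand-in for a generated Type-II image
grey = to_greyscale(rgb)       # collapses the colour axis to one channel
```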

GANs: Evaluation and Results
We trained multiple GANs on the Type-III and Type-II datasets for a variety of different numbers of epochs. During training, eight images were generated after each epoch. We then manually filtered out noise-generated images corresponding to significant spikes in loss error and removed repeated images produced by mode collapse. The collection of images was then bulk-resized to 256 × 256 and converted to greyscale. Two methods were then used to evaluate the GANs: human perception and the Fréchet Inception Distance (FID).

Figure 8
The comparison of Type-III SRBs (10-minute segments in the frequency range of 20 – 90 MHz) generated by parametric models and GANs to real Type IIIs observed by I-LOFAR.

Human Perception
Manual assessment, or human interpretation of the images produced by a generator model, is the most common way to evaluate GANs (Borji, 2019). The generator is used to create a batch of fake images, and the quality and diversity of the images are then evaluated in relation to the target domain, in this case SRBs. To accomplish this, we train the GANs over a number of epochs several times. Once a training instance is complete, we compare the generated images to both parametric and real observed data.
When visually comparing parametric modelling, GANs, and real Type IIIs, we note how the parametric-modelling method fails to replicate the consistent brightness intensity seen in the real observed data; the parametric model has an abrupt change in intensity about halfway through the burst. The GAN images, on the other hand, show a consistent intensity drift from start to end, with no abrupt changes in shape or intensity. While parametric modelling captures the tadpole-like shape of Type IIIs, it does not model the gradual top and bottom seen in the GAN images and in observed real data, i.e. parametric models have flat tops and bottoms not seen in real data. These traits are evident in the comparison between parametric modelling, GANs, and real data in Figure 8. When comparing Type IIs, parametric models have the general shape of the observed data but, in appearance, look like a real Type II that has been blurred. This blurring obscures crucial structure within the Type IIs. Furthermore, parametric Type IIs do not show the inherent RFI or embedded Type-III SRBs seen in real observations. A comparison between parametric modelling, GANs, and real data is shown in Figure 9. Human perception can be used to evaluate GANs; however, it is especially subject to observer bias. SRBs are not like cars, people, or animals, but are abstract objects that can easily be misclassified. To address this issue, we introduce a metric known as the Fréchet Inception Distance, which quantifies the similarity between the datasets produced by parametric modelling and GANs and real observed data.

Figure 9
The comparison of Type-II SRBs (10-minute segments in the frequency range of 20 and 90 MHz) generated by parametric models and GANs to a real Type II generated by I-LOFAR.

Figure 10
An example of FID comparing the disturbance level of the same image. Note how when the image is distorted the FID score rises. The lower the FID score the better the quality of the generated SRB image compared to a real SRB image.

Fréchet Inception Distance
The Fréchet Inception Distance (FID) is a metric for assessing the quality of generated images and was created primarily to assess the performance of GANs without human-perception biases (Heusel et al., 2017). The FID score compares the statistics of a collection of fake generated images to the statistics of a collection of real images from the target domain, in this case SRBs, to evaluate how well the fake generated images compare to the real observed data. A lower FID suggests higher-quality images, a higher score indicates lower-quality images, and the relationship may be linear, as shown in Figure 10. The FID score utilises Google's Inception-v3 CNN model (Szegedy et al., 2015); specifically, the last pooling layer prior to the output classification is used to capture the features of an input image, without the classification scores. The Fréchet distance is then used to calculate the similarity between these two feature distributions (real and fake). The FID score is determined by:

\[ d^2 = \left\| \mu_1 - \mu_2 \right\|^2 + \mathrm{Tr}\left( C_1 + C_2 - 2 \sqrt{C_1 C_2} \right), \]

where the FID score is referred to as d², showing that it is a distance with squared units; μ₁ and μ₂ are the feature-wise means of the real and generated images, respectively; C₁ and C₂ are the covariance matrices of the real and generated feature vectors (known as sigma); ‖μ₁ − μ₂‖² is the sum squared difference between the two mean vectors; and Tr is the matrix trace. Using this method, we compare the FID scores of both parametric and GAN-generated Type IIIs and Type IIs to real observed Type IIIs and Type IIs, as seen in Table 1.
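The FID formula can be implemented directly on feature vectors. In this sketch, random vectors stand in for the Inception-v3 pooling features, and we assume scipy is available for the matrix square root:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors
    (rows = images, columns = features):
    d^2 = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * sqrt(C1 @ C2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(1)
a = rng.normal(size=(500, 8))  # stand-in features; identical sets give ~0
```

Shifting one set by a constant raises only the mean term, illustrating how the score grows with the distance between the two feature distributions.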

Conclusion
In this article, we have shown that GANs are very good at generating SRB simulations almost comparable to actual SRBs observed by I-LOFAR. This particular configuration of GANs can generate quite large images at 128 × 128; with bulk resizing and greyscaling, we can then produce images of 256 × 256. GAN-generated images can provide classification and detection algorithms with an appropriately sized corpus of realistic training examples, obviating the need to source real observations, a bonus given the relative rarity of Type-II bursts.

Code Availability
The implemented code for this research can be found at https://github.com/jeremiahscully/GANs.git.

Declarations
Competing interests The authors declare no competing interests.