Importance Sampling Over Monte Carlo Technique to Estimate Wireless Infrared On-Chip Communication

Optical code division multiple access (OCDMA) represents a major development in optical communication. Its advantages, such as asynchronous multiuser transmission, flexibility, ample bandwidth, and scalability, make it a strong candidate for the next-generation network on chip. In this paper, the performance of an on-chip OCDMA communication system is analyzed. Analytical techniques for estimating the performance of optical communication systems are difficult and complicated, and no complete analytical treatment of the problem has been obtained before. Simulation techniques such as Monte Carlo (MC) are used to obtain reasonable estimates of system performance. For optical communications, however, Monte Carlo requires excessively large sample sizes, which makes it cumbersome in terms of physical resources and impractical for estimating very low error probabilities. A modified Monte Carlo method, importance sampling (IS), is used to reduce the number of simulation trials needed to estimate error probabilities for optical communication systems by modifying the probability density function of the on-chip noise process. Simulation results are presented and assessed in terms of bit error rate (BER), showing the contribution of IS relative to MC.


Introduction
Photonic and wireless on-chip communications play major roles in defining the performance of an entire system on chip (SoC). With a wide variety of heterogeneous on-chip intellectual property (IP) blocks, network-on-chip (NoC) architectures have been proposed to meet scalability and high-bandwidth requirements. Furthermore, the NoC topology has a significant effect on the overall network performance. An OCDMA topology reduces the number of message/packet hops and can support all the wireless links of a single node working simultaneously. Moreover, the OCDMA communication system is expected to have a low BER, achieved by the different receiver blocks responsible for extracting the signals of a desired user. The photodetector in an optical communication system plays an important role in the overall functioning of the system. However, with PIN photodiodes the receiver becomes thermal-noise-limited, since the biasing voltage applied to the PIN diode determines the generated current and its resistivity. Avalanche photodiodes (APDs) are used to increase the gain of the detected output signal, thanks to the process of impact ionization, in order to obtain acceptable receiver performance. The drawback of APDs is the excess non-Gaussian noise generated at their output, and the statistical nature of these photodiodes makes performance analysis quite cumbersome. For these reasons, in this work we opt for a PIN photodiode in the receiver. Determining the system performance nevertheless remains a complicated task. To reduce the complexity of performance estimation for optical NoCs, in which the number of on-chip IPs keeps scaling up, different simulation techniques have been studied and proposed; see Refs. [1][2][3] for more details.
The BER performance of the optical CDMA system can be estimated by several techniques, such as the Gaussian approximation (GA) [4], Monte Carlo simulation [5,6], and modified Monte Carlo simulation (importance sampling) [7,8], which is quasi-analytical.
For on-chip optical wireless communication, called wireless infrared communication, the link distance between a well-aligned transmitter and receiver is negligible, so we can neglect the propagation loss.
Analyzing the noise sources of an optical on-chip system makes performance estimation quite complicated and difficult. In general, the optimal BER performance of an ONoC is obtained by applying the Gaussian approximation, as detailed in [9]. Nevertheless, the Gaussian approximation (GA) overestimates the real system performance and may lead to wrong extrapolation results [9]. Overcoming the analysis of these complex systems is necessary, since optical receivers operating at low power levels are susceptible to corruption by noise sources. Indeed, the signal converted from optical to electronic form is sampled with the on-chip recovered clock, corrupted by noise, and then fed to a decision circuit, which introduces performance-prediction errors. Assuming perfect knowledge of the optimum threshold at the receiver, the decision circuit compares the samples, taken at an arbitrary sampling phase, with the threshold.
To obtain acceptably accurate receiver performance estimates without a cumbersome analysis, and to avoid the GA's drawback, the Monte Carlo (MC) technique is proposed in Refs. [5,6]. The MC approach can produce very accurate system performance estimates given a large number of simulation trials. MC is a statistical technique that models a digital communication system in order to measure its performance [10]. Unfortunately, the conventional Monte Carlo method is impractical for simulating optical detection systems, since achieving a meaningful estimate of system performance places a tremendous computational burden on the simulation [11]. Hence, the main drawback of the Monte Carlo method is the large sample size required for estimating low error probabilities. To reduce the number of simulation trials, a technique known as importance sampling (IS) was developed [12,13]. Within certain hypotheses, this modified Monte Carlo technique can be shown to reduce the complexity of the simulation by reducing the sample-size requirements.
The remainder of this paper is structured as follows. In Section II, we describe the theoretical BER formulation, specify the noise sources affecting optical receiver performance, and present the concepts of the Monte Carlo and importance sampling techniques.
Numerical results assessing the MC and IS techniques are then provided and discussed in Section III. Finally, Section IV gives some concluding remarks.

Problem Formulation
In this work we consider a homogeneous optical communication system using the on-off keying (OOK) signal set, and we adopt an intensity-modulation/direct-detection (IM/DD) receiver. Once detected, the optical signal is converted into an electrical signal, which is then synchronously sampled. The analog samples are compared to a threshold value in order to make the decision.
The photoelectron rate emanating from the photodetector, proportional to the incident light intensity, is given by Eq. (1) [15]:

λ = ηP_s/(hν) + λ_d    (1)

where η is the probability that a photon is converted into an electron, P_s is the received optical power corresponding to the useful signal, h is Planck's constant, ν is the optical frequency, and λ_d denotes the rate of the dark noise. We notice from Eq. (1) that noise is added to the useful signal at the receiver, which limits the performance of the system and makes it more complicated to estimate [14,15]. The major NoC noise sources fall into five groups: (a) shot noise, which occurs in photon counting in the photodetector; (b) dark current, which contributes as a source of shot noise and can be measured when the incident optical power is zero; (c) relative intensity noise (RIN), often associated with the transmission of optical data; (d) thermal noise, known as white noise, which produces fluctuations about some average voltage that are Gaussian distributed in the time domain; and (e) multi-access interference (MAI) noise, since we consider a NoC based on the optical CDMA technique. The mean and the variance of the received signal depend on the arrival rate of the light intensity; the same holds for the shot and RIN noise, unlike thermal noise and dark-current noise. Without loss of generality we consider constant photocurrent-rate values, namely λ_0 for off keying and λ_1 for on keying.
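Equation (1) can be sketched numerically as follows; the parameter values used in the example call are purely illustrative and are not those of Table 1:

```python
# Sketch of Eq. (1): photoelectron arrival rate at the PIN photodetector.
H = 6.626e-34          # Planck's constant (J*s)
C = 3e8                # speed of light in vacuum (m/s)

def arrival_rate(eta, p_signal, wavelength, rate_dark):
    """Photoelectron rate lambda = eta*Ps/(h*nu) + lambda_d."""
    nu = C / wavelength                      # optical frequency (Hz)
    return eta * p_signal / (H * nu) + rate_dark

# Example: eta = 0.8, 1 nW received power at 850 nm, dark rate 1e6 /s
rate = arrival_rate(0.8, 1e-9, 850e-9, 1e6)
```

With zero incident power the rate reduces to the dark-noise rate alone, matching the definition of dark current in group (b) above.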
An error occurs when 0 is sent but the received signal Z_i + Õ (where Z_i is the noise and Õ is the signal level corresponding to logic 0) exceeds the threshold T and is therefore interpreted as 1. Conversely, an error occurs when 1 is sent but the received signal Z_i + Î (where Î is the signal level corresponding to logic 1) falls below the threshold and is therefore interpreted as 0. The theoretical average error probability Pe tends to equal the measured BER value. Pe can be written as the sum of two error probabilities, both functions of the constant threshold. The average probability of error can be expressed as in Refs. [6,7] by:

Pe = P(0 sent) P(1|0) + P(1 sent) P(0|1)    (2)

where P(0 sent) and P(1 sent) are the probabilities of transmitting "0" and "1", respectively. In the following we assume that the likelihood of transmitting a "1" or a "0" is 1/2. The probability density functions (pdf) f_0 and f_1 are the distributions of X when a "zero" or a "one" was sent, respectively. We can rewrite Eq. (2) as [6,7]:

Pe = (1/2) ∫_T^∞ f_0(x) dx + (1/2) ∫_{−∞}^T f_1(x) dx    (3)

A simple and natural unbiased estimator of the BER for a communication system is the sample mean, which can be defined from [11] as:

P̂e = (1/N) Σ_{j=1}^{N} I(r_j)    (4)

where r_j denotes the j-th random sample, N is the number of samples, and I(r_j) indicates an erroneous decision. P(1|0) and P(0|1) are the conditional probabilities that govern decision making under the binary statistical hypotheses H_0 and H_1, respectively: H_0 is the photon intensity when a space is transmitted (signal absent); H_1 is the photon intensity when a mark is transmitted (signal present).
Since the probability density depends on the arrival rates λ_0 and λ_1, the probability of error given H_0 differs from the probability of error given H_1 [16] (Eq. 5). Here r = ηPT/(hν) denotes the photoelectron count in a sampling period T of the photodetector.
D_i is the decision region for hypothesis H_i. The photoelectron count in a sampling period is compared to the threshold γ to decide as follows: if r ≥ γ, the received bit is decided to be 1; otherwise it is decided to be 0. The threshold with full knowledge of the noise, as given in Eq. (6), depends on the received power: OOK receivers require an adaptable threshold in order to ensure an optimal setting, whose value belongs to the interval

λ_0 + λ_d ≤ γ ≤ λ_1 + λ_d    (6)

λ_0, λ_1 and λ_d are approximated in Eq. (7) according to the Gaussian model [17]. The different noise sources affecting signal detection in the on-chip optical system, and thereby limiting its performance, are now analyzed. The general noise sources affecting the receiver are: photocurrent shot noise in the photodetector, due to the randomness of electron generation [18]; dark current, the constant current that exists when no light is incident on the photodetector; thermal noise, arising from thermal fluctuations in the electron density within a conductor; transmitter noise, known as pink noise and sometimes referred to as 1/f noise, which describes the instability in the power level of a laser; and finally multiple-access interference (MAI), caused by multiple users transmitting simultaneously in a NoC based on the optical CDMA technique. Details on the noise sources can be found in [15,18,19]. Each noise source is characterized by its variance: the shot-noise variance, the MAI variance, which depends on the number of simultaneous users N and on σ_c², the average of the cross-correlations between different code pairs, and the RIN variance σ²_RIN = RIN·I_s²·B.

Monte Carlo Method
In situations where a mathematical approach is intractable, Monte Carlo (MC) simulation can be a key methodology thanks to its ability to analyze and evaluate the performance of complex digital communication systems. In Monte Carlo simulation [7], the sequence of sent digits is s = {s_1, s_2, ..., s_n}, with each s_i ∈ {0, 1}, and the random noise is Z = {Z_1, Z_2, ..., Z_n}, where Z_i follows a normal distribution with mean μ and variance σ² combining the contributions of all noise sources, namely shot noise, dark-current noise, thermal noise, transmitter noise, and the multi-access interference noise of the simultaneous transmitters. The received signal is X = {X_1, X_2, ..., X_n}, where X = s + Z. Each X_i is compared to the threshold γ given in Eq. (6) to obtain the output signal Y = {Y_1, Y_2, ..., Y_n}, where Y_i = 1 if X_i > γ and Y_i = 0 otherwise.
To estimate the error probability of the optical communication system, we compare, through the above mathematical model, the output sequence to the input one and count the errors. The bit error ratio (BER) is thus given in Eq. (8), as defined in [16]:

BER_MC = (1/M_MC) Σ_{j=1}^{M_MC} I(r_j)    (8)

where I(r_j) = 1 when the j-th bit is in error and I(r_j) = 0 otherwise, and M_MC is the number of samples. The variance of the MC estimator is:

σ²_MC = Pe(1 − Pe)/M_MC    (9)

From Eq. (9), we conclude that for a given variance the number of samples M_MC increases considerably for very small values of Pe. Thus, MC simulation takes excessively long to compute small BER values. To estimate a low BER, the MC technique must process a colossal number of samples, which risks making the SoC processing more expensive. This can be very time consuming, especially for digital lightwave systems, where BERs below 10⁻¹² are encountered.
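The MC procedure above (send equiprobable OOK bits, add Gaussian noise, threshold, count errors per Eq. (8)) can be sketched as follows; the noise standard deviation and threshold used in the example call are illustrative, not the paper's values:

```python
import random

def monte_carlo_ber(num_bits, sigma, gamma, seed=1):
    """Plain Monte Carlo BER estimate for OOK with additive Gaussian noise.

    Draws equiprobable bits s in {0, 1}, forms X = s + Z with
    Z ~ N(0, sigma^2), decides with threshold gamma, and counts
    decision errors as in Eq. (8).
    """
    rng = random.Random(seed)
    errors = 0
    for _ in range(num_bits):
        s = rng.randint(0, 1)              # transmitted bit
        x = s + rng.gauss(0.0, sigma)      # received sample X = s + Z
        y = 1 if x > gamma else 0          # threshold decision
        errors += (y != s)
    return errors / num_bits

ber = monte_carlo_ber(200_000, sigma=0.25, gamma=0.5)
```

With sigma = 0.25 the true error probability is Q(2) ≈ 2.3 × 10⁻²; at much smaller sigma the true BER drops far below 1/num_bits and the estimator returns essentially no errors, which is exactly the sample-size problem that Eq. (9) describes.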
To overcome this problem, approaches called variance-reduction (VR) techniques have been proposed. Reducing the variance of the BER estimator makes it possible to achieve a given precision with a smaller number of trials. Two classical variance-reduction techniques for the Monte Carlo estimator are antithetic variates (AV) and control variates (CV).
The Antithetic variates method [20] considers correlated, rather than independent, samples. However, designing an efficient correlation structure is difficult, thus limiting the practical use of these methods.
The control variates approach [21] is useful when simulating the expected value of a random variable. A second random variable, whose expected value is known, is introduced. The correlation between the two random variables should be maximized so that the variance of the estimate is reduced, resulting in a more accurate simulation.
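The control-variate idea can be illustrated on a generic expectation (not the paper's receiver model): estimating E[exp(U)] for U ~ Uniform(0, 1), using U itself as the control since its mean 1/2 is known exactly. The example and its parameters are purely illustrative:

```python
import math
import random

def control_variate_demo(n, seed=7):
    """Estimate E[exp(U)], U ~ Uniform(0,1), with U as a control variate.

    The control's mean is known (E[U] = 1/2); subtracting the estimated
    optimal multiple of (U - 1/2) lowers the estimator's variance
    without introducing bias.
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    y = [math.exp(v) for v in u]

    naive = sum(y) / n                       # plain MC estimate

    mean_u = sum(u) / n
    cov = sum((a - mean_u) * (b - naive) for a, b in zip(u, y)) / n
    var_u = sum((a - mean_u) ** 2 for a in u) / n
    c = cov / var_u                          # estimated optimal coefficient

    cv = sum(b - c * (a - 0.5) for a, b in zip(u, y)) / n
    return naive, cv

naive, cv = control_variate_demo(20_000)
# True value is e - 1; the CV estimate is typically much closer than naive.
```

Because U and exp(U) are strongly correlated, the control-variate estimate concentrates far more tightly around e − 1 than the plain sample mean does for the same number of trials.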
The Importance Sampling (IS) technique aims to reduce the variance of a given simulation estimator. This involves reducing the variance of the bit error rate (BER) estimator in communication systems. By reducing the variance, IS estimators can achieve a given precision from shorter simulations.

Importance Sampling Technique
Importance sampling (IS) is a variance-reduction technique used in the modified Monte Carlo method; it makes it possible to estimate a low BER efficiently for optical transmissions [7,22]. Importance sampling consists in transforming the received signal X into X* = {X*_1, X*_2, X*_3, ..., X*_n}, where X*_i follows a modified Gaussian distribution described by a biasing probability density function (pdf) f_X*(x) [22]. Biasing is defined as a modification of the density of the output: the biasing pdf increases the relative likelihood that the generated random samples come from the regions of the sample space that cause more errors. Each error is weighted by a so-called adjustment or weight factor ω. For a given optical system, the factor ω allows the variance of the estimate to be reduced for a fixed number of trials. It is important to understand how to determine the optimal biasing pdf f_X*(x), and hence how to transform X into X*. Several articles discuss this issue, such as [6,22,23]. In the next section we review a response to this question.
The average probability of error is

Pe = ∫_D f_X(x) dx    (10)

which can be rewritten as

Pe = ∫_D [f_X(x)/f_X*(x)] f_X*(x) dx = ∫_D ω(x) f_X*(x) dx    (11)

Thus, BER_IS is estimated by counting the errors, each weighted by the unbiasing weight ω:

BER_IS = (1/M_IS) Σ_{j=1}^{M_IS} ω(r_j) I(r_j)    (12)

To bias the input distribution, assume that the useful signal S and the added noise Z are independent. In this case the weight factor can be written as

ω(z) = f_Z(z)/f_Z*(z)

Thus, the biasing in the modified Monte Carlo technique is the ratio of the noise densities. The biasing is an artificial distortion that deliberately generates many errors, so that the importance-sampling variance is widely reduced. The IS variance of the error-probability estimator is

σ²_IS = (1/M_IS) [ ∫_D ω(x) f_X(x) dx − Pe² ]    (13)

Comparing the variances of the Monte Carlo and importance-sampling estimators, the possibility of a theoretical variance reduction is evident. Indeed, the IS variance can be reduced to zero if ω(x) = Pe. In the case where M_MC = M_IS, any weighting factor satisfying ω(x) − Pe < 1 − Pe will reduce the value of the integral in Eq. (13), and thus the IS estimator variance. Otherwise, for a fixed variance (the same for the MC and IS estimators), the sample count M_IS can be made smaller than M_MC. In this case, the usefulness of the importance-sampling technique is quantified by the gain

Γ = M_MC/M_IS = σ²_MC/σ²_IS    (16)

From the gain expression in Eq. (16), we can conclude that Pe² < ϖ ≤ Pe, where ϖ = ∫_D ω(x) f_X(x) dx is the average weight over the error region. The value of the gain Γ increases as the system error probability increases. We conclude that it is possible to estimate Pe very accurately with a small sample size M_IS.
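The weighted counting of Eq. (12), with the noise-density ratio as the weight, can be sketched for a single Gaussian tail probability. This is a minimal illustration, not the paper's full receiver model: the biasing pdf is a mean-shifted Gaussian (a common choice), and the numerical parameters are illustrative:

```python
import math
import random

def is_ber(num_trials, sigma, gamma, shift, seed=3):
    """Importance-sampling estimate of the '0 sent' error P(Z > gamma).

    Noise is drawn from the biased density N(shift, sigma^2) instead of
    N(0, sigma^2); each error event is weighted by the likelihood ratio
    w(z) = f_Z(z) / f_Z*(z), keeping the estimate unbiased (Eq. (12)).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_trials):
        z = rng.gauss(shift, sigma)        # sample from the biased pdf
        if z > gamma:                      # error event for a transmitted 0
            # w(z) = exp(-z^2/2s^2) / exp(-(z-shift)^2/2s^2)
            w = math.exp((-z * z + (z - shift) ** 2) / (2 * sigma * sigma))
            total += w
    return total / num_trials

# Tail probability Q(gamma/sigma): with gamma = 0.5, sigma = 0.1 the true
# value is Q(5) ~ 2.9e-7, far too small for plain MC at this sample size.
p_hat = is_ber(50_000, sigma=0.1, gamma=0.5, shift=0.5)
```

Shifting the noise mean to the threshold makes roughly half the samples land in the error region, and the weights ω(r_j) scale the count back down to the true tiny probability; this is exactly the mechanism by which Γ = σ²_MC/σ²_IS in Eq. (16) becomes large for small Pe.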

Simulation and Numerical Results
We estimated the BER by means of numerical simulation, considering parameters extracted from Ref. [24]. We considered an OOK signal with the different noise sources and extracted the BER with the Monte Carlo and importance-sampling simulation approaches described in the previous section (Fig. 1).
To simulate the MC and IS techniques, we take the noise sources into account. We recall that the problem of OCDMA multiple-user access is dealt with in [19], leading to user-data extraction without the presence of MAI. However, as the on-chip system power is considered weak, it is more realistic to take all the different noise sources into account to get an accurate estimate of system performance.
To calculate the MC and IS modified noise variances, we use a parameter α taking values in the interval [0, 1). We assume that all noise sources generated in the receiver, represented by their Gaussian variances, are independent of each other. The receiver BER can be expressed as in Eq. (17). The average SNRs are SNR_MC = I_S²/σ²_MC and SNR_IS = I_S²/σ²_IS, with I_S = ηeP/(hν), where P is the received optical power. For the optical NoC, we consider a system with the following specifications: OCDMA access technique, VCSEL laser diode wavelength λ = 850 nm, carrier frequency F_c = 3.5 GHz, bit rate of 5 Gb/s, and electronic bandwidth B_0 = 100 MHz. The simulation parameters are listed in Table 1. Figure 2 depicts the variation of the BER versus α for a coherent on-chip OCDMA system. We note an improvement in the BER performance for the IS method, but a degradation in the performance of the MC method for α > 0.5. This result confirms that the IS technique achieves almost the same performance as MC with a small number of samples, for values of α less than 0.4.
We investigated the performance of an on-chip OCDMA system by means of the MC and IS methods. The results in Figs. 3 and 4 show sequences of BER curves for Monte Carlo and importance-sampling simulations, for which the α factor was set to 0.3 and then to 0.7.
The BER obtained is plotted in Fig. 3. First, the poor BER results from the susceptibility of the receiver to noise. Second, the BER improves as the average received power increases, because the SNR improves. But when the power drops toward 10⁻⁸ W, the BER saturates at its poorer limit value. In this case the noise sources dominate the main signal and limit the estimated system performance; in fact, the signal outputs of the on-chip OCDMA receiver are corrupted by severe noise. Therefore, a large sample size with reduced variance is required to improve the BER estimate, which motivates the use of the IS method. Even in the absence of great accuracy, the comparative evaluation of the MC and IS methods is useful for assessing the sensitivity of the performance estimates.
The BER plotted in Fig. 4 illustrates that for power levels higher than 10⁻⁶ W, the BER performance is almost the same for both the MC and IS methods. In this case the thermal noise and MAI are the major contributors; the MAI is eliminated by the detection technique, and the thermal noise is too low to destabilize the system. But at weak power the noise sources are much more influential (Fig. 3), which reveals the significance of the variance reduction brought by the IS method.
Increasing the parameter ϖ reduces Γ. For the simulation results in Figs. 5 and 6, we chose the average error probability Pe = 0.2, which gives 0.04 < ϖ ≤ 0.2 (i.e., Pe² < ϖ ≤ Pe). We notice that, under a low σ²_IS/σ²_MC ratio condition, the conventional Monte Carlo sample size agrees with the importance-sampling sample size. The estimated gain Γ, depicted in Fig. 5, indicates that for α higher than 0.4 the ratio Γ increases exponentially, which yields a saving in sample size. Figure 6 shows that Pe can be estimated more accurately with a small sample size; moreover, larger savings in sample size occur at lower values of ϖ. The smaller sample size required makes the importance-sampling technique better than the Monte Carlo method for estimating low bit-error probabilities in communication systems.

Conclusion
In this paper, two techniques that yield reliable BER estimates for an optical NoC were described. The importance-sampling estimator is proposed to improve on Monte Carlo BER estimation, which is validated by simulations. The results indicate that Monte Carlo performs well for low received power P ≤ 10⁻⁵ W with an α factor less than 0.3. However, as the power continues to decrease, even a weak noise level can mislead the system; in this case a tremendous sample size of the received signal is needed to obtain the most accurate performance estimate, at the cost of making the simulation computationally expensive. MC-based BER estimation does not perform well under this constraint, whereas the modified MC (IS) yields good BER estimates even for weak received power. The simulation results depicted in Figs. 3 and 4 illustrate that the IS sample size needed for simulation is reduced considerably compared to the MC one. However, the implementation of the IS method presents a major inconvenience in that it depends on the on-chip architecture, which is the challenge for future research work.
Funding The authors did not receive support from any organization for the submitted work.

Data availability Data was extracted from [28].
Code availability MATLAB custom code.