Securing information using a proposed reliable chaos-based stream cipher: with real-time FPGA-based wireless connection implementation

In this paper, a robust chaos-based stream cipher (CBSC) is proposed. The novelty of this work is that it addresses all challenges confronting chaos-based cryptography. The PCBSC (proposed CBSC) has a robust synchronization circuit that mitigates the effect of channel noise, a perturbation block that overcomes dynamical degradation, a robust encryption scheme, and an efficient control-parameter generator that produces strong keys. According to the complexity evaluation, the improved chaotic map provides good statistical properties. This is confirmed by the high values obtained for the statistical metrics (largest Lyapunov exponent, approximate entropy, permutation entropy, and sample entropy) used for the evaluation. According to the security analysis, the PCBSC has good security features and provides strong keys that ensure the confusion property, as well as a keyspace large enough to withstand brute-force attacks. On the other hand, the proposed encryption scheme proves its efficiency; the result of the differential attack clearly shows that the diffusion property is guaranteed. Additionally, the original images' statistical properties are completely dispersed in the encrypted images. The performance obtained over noisy channels proves the synchronization circuit's efficiency. When compared to other proposals, the PCBSC provides the best results. In addition, the PCBSC is implemented on an FPGA and evaluated in real-time over a wireless link.


Introduction
Today, the internet reaches all corners of the globe, and anyone can connect to it. It is a place where anyone (whether businesses or individuals) may provide and request products, services, assistance, news, and so on. As a result, there is easy access to digital content, as well as higher demand for data sharing. On the other hand, wireless communication is a vital part of the telecommunications system's backbone, and its security has become a main priority. Concurrently, private content has become vulnerable to illegal access, and so protecting private content from unauthorized users has become one of the most significant challenges confronting owners. Over the years, numerous traditional cryptographic algorithms have shown their efficiency in protecting private digital content. However, as technology evolves, so does the demand for data exchange (military databases, banking transactions, medical imaging systems, paid TV streaming, and so on, with image and video accounting for the majority of it). This has necessitated the development of more sophisticated and reliable cryptographic algorithms to meet the growing needs of data security.
Recently, there has been a surge of interest in chaos-based cryptography. This is due to the fact that it is capable of meeting the growing demand for information security. Chaotic systems exhibit random behavior, are extremely sensitive to initial conditions, and exhibit a continuous broad-band power spectrum. The evolution of two chaotic signals generated from very similar initial conditions differs dramatically. In addition, the random-like behavior of such systems is given by simple and deterministic dynamical systems. Since the 1990s, numerous researchers have observed an intriguing link between chaos and cryptography: numerous features of chaotic systems have analogs in conventional cryptosystems [1,2].
Due to chaotic systems' extreme sensitivity to their initial conditions, having two chaotic systems evolve in synchrony may appear odd. However, with the finding of Pecora and Carroll in 1990, the prospect of self-synchronization of chaotic oscillations became possible [3]. This discovery was a watershed moment in the use of chaos in cryptography. Since that time, considerable effort has been made in this field of research.
Using chaotic systems in cryptography has a variety of implications. According to [1], two ways to construct chaos-based cryptosystems exist: analog and digital. Analog chaos-based cryptosystems are often built on synchronized setups; depending on the configuration, the message to be transmitted is either included in the analog chaotic signal or used to control multiple chaotic outputs. Some known examples of analog chaos-based cryptosystems include (but are not limited to): chaotic masking [4][5][6][7][8], chaotic switching [9,10], and chaotic modulation [11].
In fact, analog chaos-based cryptography has not received much attention compared to the digital approach. This is because, on the one hand, of the domination of digital computers and what they can offer as advantages, and on the other hand, of the vulnerability of analog cryptosystems [12][13][14][15][16].
Digital chaos-based cryptosystems (a.k.a. digital chaotic ciphers) are developed for digital computers and encrypt plaintext in a variety of ways using one or more chaotic maps implemented with finite computational precision [1]. The majority of research efforts have been directed toward developing fast and trustworthy cryptosystems based on digital chaos. Due to the widespread availability of computers and digital devices, and the ease with which binary data can be handled (digital filtering, detection and correction of data, etc.), digital chaos-based cryptosystems are more efficient than analog ones. Studies on digital chaos generators have been reported to have more diversity compared to their analog counterparts [17]. Using chaotic systems as a source of randomness for digital cryptosystems is the most intuitive application of chaotic systems.
It is worth noting that the preceding lines barely scratch the surface of the proposed chaos-based cryptosystems that exist; due to their vast number and diversity, classifying them is difficult. On the other hand, and in accordance with [1], it is difficult to assess the security and effectiveness of many digital chaos-based cryptosystems in a systematic manner, leaving them widely exposed to attacks. However, a variety of security evaluation measures may be utilized to analyze the security of any proposed chaos-based cryptosystem and, to some extent, establish its reliability.
Despite the huge number of chaos-based cryptosystems proposed recently, many of them have failed to meet established design and security requirements (see Sect. 2). Some proposals focus on certain problems while turning a blind eye to others, and we rarely come across a fully functional chaos-based cryptosystem. As a result, each proposed cryptosystem's reliability and robustness are dependent on taking into consideration all known design and security requirements.
The main goal of this work is to develop a robust chaos-based stream cipher that takes into account as many of the design and security requirements as possible. The proposed chaos-based stream cipher (PCBSC) is made up of four main parts: a perturbation block designed to reduce the impact of the digitization process on the chaotic behavior of the digitized chaotic map; a synchronization circuit that increases the performance of the PCBSC over noisy channels; a control parameter generator that keeps the chaotic map in a chaotic regime for every given secret key; and an encryption block, which adds another degree of security to the PCBSC and ensures the diffusion property as well as protection against various cryptographic attacks.
The novelty of this work is that it addresses many of the challenges facing the application of chaos to cryptography in a single chaos-based cryptosystem. When looking at many existing works, we find that they address some issues and ignore others. For example, many proposed schemes for enhancing the properties of digitized chaotic maps have proved their efficacy but did not provide a mechanism for synchronization over a real noisy channel, or avoided mentioning the security issues in their designs.
The PCBSC is then implemented on an FPGA under real wireless channel constraints to show what performance can be provided in real-time applications. Due to their increasing performance and decreasing cost, FPGAs are becoming the first choice for implementing digital systems. Recently, the majority of chaos-based cryptosystems have been implemented on FPGAs [40][41][42][43], despite the fact that there are other cheap and powerful computing devices that are useful for cryptographic applications and allow lightweight applications [44][45][46]. The use of a field-programmable gate array (FPGA) has shown advantages, such as the verification and fast prototyping of dynamical systems [40]. The computational power of an FPGA lies in its execution strategy; as a parallel chip by nature, it can run many tasks at the same time instead of one task at a time as microprocessors and microcontrollers do. This paper is organized as follows: Sect. 2 discusses the most known design and security requirements for any proposed chaos-based cryptosystem. Section 3 introduces the proposed chaos-based stream cipher (PCBSC) and provides sufficient details on its operational basis. In Sect. 4, the evaluation of the improved chaotic map in terms of complexity is presented, in which some statistical and mathematical tools are used for this purpose. Section 5 discusses the PCBSC's security analysis, in which the key strength, sensitivity, and space are all carefully analyzed. Additionally, the proposed encryption block is evaluated by examining the encrypted image's statistical properties and performing the differential attack on it to assess its strength. The performance of the PCBSC under a noisy channel is reported in Sect. 6, where the proposed synchronization circuit is evaluated to confirm its efficiency. Section 7 discusses the implementation steps of the PCBSC on an FPGA and its real-time evaluation through wireless transmission.
A comparison of the obtained results with other proposals is the subject of Sect. 8. The paper closes with a conclusion about the achieved results in Sect. 9.

Chaos-based cryptography challenges
Designers of any chaos-based cryptosystem should emphasize many critical design challenges. Indeed, despite its effectiveness, applying chaos to cryptography is a tough process. A number of important issues must be addressed while constructing a chaos-based cryptosystem. Chaos-based encryption has encountered a number of issues that detract from its utility in cryptography. The following subsections discuss the important issues related to chaos-based cryptography, which should be considered by every designer.

Synchronization
In chaotic systems, synchronization means that the trajectories of the receiver (response or slave) system can track those of the emitter (master) system starting from arbitrary initial conditions. For chaos-based stream ciphers in particular, the first issue that should be taken into account is synchronization. That is, the emitter and the receiver should provide the same dynamics to easily recover the original plaintext.
Even though the possibility of synchronization in chaotic systems has been discovered [3], the high sensitivity to initial conditions that characterizes such systems seems to preclude synchronization, because even two identical systems cannot evolve in the same manner if their initial conditions differ, however close they may be [21]. Additionally, the conventional synchronization method (continuously driving the response system) is very susceptible to channel noise. Thus, any proposed chaos-based stream cipher should consider synchronization over noisy channels.

Digitization effect on the dynamical properties of chaotic systems
Encryption algorithms that are based on the generation of random numbers should include strong PRNGs that provide good randomness quality, with good distribution and high unpredictability. Indeed, when chaotic systems are implemented on digital computers with finite computing precision, their dynamical properties deteriorate. A digitized chaotic system is qualitatively different from its analog counterpart and produces a predictable output (periodic) with poor statistical properties. This significant degradation diminishes the effectiveness of chaotic systems in cryptography considerably.
Many studies suggest that the cycle length of a given digitized chaotic system's output gets longer as the arithmetic precision size gets larger. Digitized chaotic systems are ultimately periodic, and their longest orbit can never exceed 2^L states (L represents the arithmetic precision size); more information about this can be found in [47].
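As a toy illustration of this bound (not the paper's construction), the cycle of a finite-precision map can be found exhaustively, since the orbit can visit at most 2^L distinct states. The snippet below uses a quantized logistic map as a stand-in for a digitized chaotic system; all names and the choice of L are illustrative:

```python
def find_cycle(step, x0):
    """Return (tail_length, period) of the orbit of x0 under `step`,
    where states are exact finite-precision integers."""
    seen = {}                      # state -> iteration of first visit
    x, k = x0, 0
    while x not in seen:
        seen[x] = k
        x = step(x)
        k += 1
    return seen[x], k - seen[x]

L = 12                             # toy arithmetic precision: 12-bit states
SCALE = 1 << L

def logistic_fixed(x):
    # logistic map 4x(1-x) quantized to L fractional bits
    return (4 * x * (SCALE - x) // SCALE) & (SCALE - 1)

tail, period = find_cycle(logistic_fixed, SCALE // 3)
# the orbit visits at most 2^L distinct states, so tail + period <= 2^L
assert tail + period <= SCALE
```

Because every state fits in L bits, the pigeonhole argument guarantees the loop terminates and the bound holds for any seed.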
Numerous remedies have been suggested to mitigate the dynamical degradation found in digitized chaotic systems; they can be broadly classified into three categories: (1) using high arithmetic precision [48], which can significantly lengthen the cycle and improve statistical properties, (2) cascading multiple chaotic systems [49] and combining their outputs to obtain the final chaotic output, and (3) using an external random or pseudo-random source to perturb the chaotic system's orbit [50]. However, the first two remedies, despite the enhancements they can provide, are costly in terms of performance (implementation cost). The last remedy is more efficient and is the most widely supported in the chaos literature.
Thus, any chaos-based cryptosystem designer should take into account the dynamical degradation found in digitized chaotic systems and propose a solution to overcome this problem.

Design complexity and performance
Many applications do not suit cryptosystems based on complex designs. Today, resource-constrained networks make up a significant portion of communication networks (military surveillance, medical sensing networks, wearable devices, etc.). These networks are characterized by their limited computation capability, limited storage space, and strict power utilization management [51]. Securing data over such networks is crucial. Consequently, complicated cryptosystems are not deemed ideal for securing data over resource-constrained networks.
Design complexity, on the other hand, entails increased hardware resource utilization, processing time (due to significant signal propagation delays), and heat dissipation. These difficulties have a direct impact on the design's performance, implementation costs, and energy usage.
When compared to their low-complexity equivalents, cryptosystems with complicated structures are more secure, reliable, and harder to break. As a result, any proposed chaos-based cryptosystem should strike a balance between security and complexity.

Security
Any encryption algorithm's reliability is determined by its resistance to different known attacks. A strong encryption algorithm should withstand any serious attack, regardless of whether the intruder has whole or partial knowledge of the encryption algorithm's structure.
Immunity from cryptographic attacks is intrinsically linked to the secret key. The key is by far the most important element of any encryption algorithm; security should be solely dependent on the key. No matter how robust and well designed the encryption algorithm is, if the key is chosen incorrectly or the keyspace is too small, the cryptosystem will be easily broken [1].
There are three main issues related to the key, namely, the key construction, the key length, and the key strength. For chaos-based cryptography, it is obvious that the key is made using the control parameters. Depending on the control parameters' intervals, a given chaotic system exhibits a range of behaviors: fixed points, periodic, quasi-periodic, and chaotic. What we are interested in is constructing the key from the intervals where the control parameters assure chaotic behavior. Bifurcation diagrams can aid us in defining the intervals in which the control parameters result in chaotic behavior.
Another important issue concerning the key is the equality of strength among keys. That is, each infinitesimal change in the key should result in the same ciphertext distribution; flipping one bit in any position in the key should result in a significant change in the ciphertext. In general, the avalanche effect, as it is known in cryptography, can be used to evaluate the key's strength. It is one of the most important properties characterizing the efficiency of encryption algorithms, and good encryption algorithms exhibit a high avalanche effect.
The avalanche effect can be measured by comparing the bit error rate (BER) of the plaintext with that of the recovered one as a function of the attempted keys. The BER should be around or more than 50%, regardless of how close or far we are to the correct key. The BER for weak encryption algorithms decreases as we come closer to the correct keys (low avalanche effect).
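This BER measurement can be sketched in a few lines. The snippet below is a stand-in, not the PCBSC: Python's Mersenne Twister plays the role of the keystream generator, and the key values are arbitrary. It checks that the correct key yields BER = 0 while any single-bit key change pushes the BER to around 50%:

```python
import random

def keystream(key, n):
    rng = random.Random(key)        # stand-in PRNG, not the PCBSC generator
    return [rng.getrandbits(1) for _ in range(n)]

def xor_cipher(bits, key):
    # stream cipher: XOR each plaintext bit with one keystream bit
    return [p ^ k for p, k in zip(bits, keystream(key, len(bits)))]

def ber(a, b):
    # bit error rate between two equal-length bit sequences
    return sum(x != y for x, y in zip(a, b)) / len(a)

rng0 = random.Random(0)
plain = [rng0.getrandbits(1) for _ in range(4096)]
cipher = xor_cipher(plain, key=0xC0FFEE)

# correct key: BER = 0; any wrong key should give a BER around 50%
assert ber(plain, xor_cipher(cipher, 0xC0FFEE)) == 0.0
for flipped_bit in range(8):
    wrong = 0xC0FFEE ^ (1 << flipped_bit)
    assert 0.4 < ber(plain, xor_cipher(cipher, wrong)) < 0.6
```

A weak generator would instead show the BER dropping as the attempted key approaches the correct one.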
Confusion and diffusion are two more crucial properties that any encryption algorithm should provide, as outlined by Claude Shannon in his report [52]. The main objective of these properties is to make the relationship between the encryption algorithm's input and output as complicated as possible, so that any minor change in the input is dispersed in the statistical properties of the output. Confusion indicates that a little change in the key (flipping one bit) should result in large changes in the ciphertext; at least 50% of the bits in the ciphertext should be affected. The same holds for the diffusion property: a minor change in the plaintext (flipping one bit) should lead to large changes in the ciphertext (a different ciphertext). Thus, any proposed chaos-based cryptosystem should ensure these two main properties.

The proposed Chaos-based stream cipher (PCBSC)
The PCBSC's basic scheme (for both emitter and receiver) is depicted in Fig. 1. On the emitter side, the system is made up of five main blocks: the first is the chaotic generator; the second is the control parameter generator; the third is the perturbation block; the fourth is the encryption block; and the fifth is the BitBasher block. On the receiver side, the system is identical to the emitter, with the exception of the decryption and synchronization blocks. The PCBSC is a stream cipher cryptosystem in which the plaintext is encrypted bit by bit using the chaotic sequence (using the XOR logical operator) and transferred across the transmission channel with the synchronization signal. When the receiver system synchronizes with the emitter, the original plaintext can be easily recovered on the receiver side. Each block's functionality is described in detail in the subsections that follow.

The chaotic generator
In our case, we have used the Hénon chaotic map because of its simplicity. The Hénon map is a 2D chaotic map given by the following recursive system:

x_{k+1} = 1 - a x_k^2 + y_k,   y_{k+1} = b x_k   (1)

The Hénon map has been studied in depth for a = 1.4 and b = 0.3, where numerical evidence of chaotic behavior was found. Figure 6a presents the Hénon attractor for a = 1.4 and b = 0.3.
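The classic Hénon recursion for a = 1.4, b = 0.3 can be iterated directly as a minimal numerical sketch; the bounds checked below simply reflect the well-known extent of the classic attractor:

```python
def henon_orbit(x, y, a=1.4, b=0.3, n=10000, transient=100):
    """Iterate the Hénon map and return the orbit after a transient."""
    pts = []
    for k in range(n):
        x, y = 1 - a * x * x + y, b * x   # the Hénon recursion
        if k > transient:                 # discard the initial transient
            pts.append((x, y))
    return pts

orbit = henon_orbit(0.1, 0.1)
# all points of the classic attractor stay in a bounded region
assert all(abs(x) < 1.5 and abs(y) < 0.5 for x, y in orbit)
```

Plotting `orbit` as a scatter reproduces the attractor shown in Fig. 6a.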

The control parameters generator
The control parameters generator is a vital part of the PCBSC. In cryptography, the chaotic generator's strength is determined by its chaotic behavior. This chaotic behavior is only possible within restricted ranges of control parameter values. Bifurcation diagrams can help to define the ranges within which the system is chaotic or not (see the PCBSC security evaluation section). Because secret keys are generated using control parameters, this block's internal structure allows it to retain the chaotic behavior of the chaotic generator for any given key.
Besides the control parameters a and b in the Hénon chaotic map, we add two more control parameters, c and d, and treat the constant 1 in system (1) as a new variable, e. This is done to make the key size larger. Thereby, system (1) can be rewritten to include the new parameters; the resulting system is referred to as system (2). Later, we will look at how to set appropriate ranges for the newly added control parameters in order to attain chaotic behavior. To facilitate comprehension, we defer discussing the block's functionality to the PCBSC security evaluation section. This is because we need visual evidence from the resulting bifurcation diagrams in order to identify the proper control parameter ranges.

The perturbation and BitBasher blocks
The perturbation block's basic scheme is presented in Fig. 2. It consists of a simple but efficient circuit. The main advantage of this block is that it requires no external perturbation source; it provides a self-perturbation mechanism. As shown in Fig. 1, the perturbation is performed on the feedback signal x_k, so system (2) can be rewritten with the perturbed signal P(x_k) replacing x_k in the feedback; the resulting system is referred to as system (3). The perturbation is performed in the following manner: initially, the chaotic signal x_k is routed through two blocks, Slice 1 and Slice 2. The Slice 2 block extracts the LSB (least significant bit) from the binary vector of the chaotic signal x_k and serially inserts it into a shift register of L internal registers (as mentioned previously, L represents the arithmetic precision size, which here also equals the number of internal registers contained in the shift register). The concatenate block collects the outputs of all internal registers to create the perturbation signal P_k. The perturbation process is done by feeding the perturbation signal P_k back to the chaotic system. The perturbation is not continuous but occurs at random intervals. The process is as follows: the Slice 1 block extracts the m lowest bits from the chaotic signal x_k, where m is defined by the maximum period that the original chaotic map can provide with the arithmetic precision size L. For example (see Table 1), the longest period that the original Hénon map may give is 280 for L = 16. As a result, m = round(log_2(280)) = 8.
The output signal max is an m-bit integer. During the counting process, one value of this signal is stored in the register. The counter is incremented until this value is reached. When the counter hits this maximum, the comparison block generates a value of 1 to reset it, controls the multiplexer to output the perturbation signal instead of the signal x_k, and enables the register to load a new value of the signal max. Thus, as previously stated, the perturbation occurs when the counter reaches its maximum value. Because the maximum value is variable, the period of perturbation is not constant but random. This is a pivotal feature that characterizes the proposed perturbation circuit. In Sect. 4, we will show that the proposed perturbation circuit is capable of effectively increasing the cycle length of the digitized chaotic map.
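The mechanism above can be sketched in software. The snippet below is a deliberately simplified model: a quantized logistic map stands in for the fixed-point Hénon map, and the counter/limit handling is an assumed simplification of Fig. 2, not the exact circuit. It shows LSBs filling a shift register and being XORed back into the state at data-dependent intervals:

```python
L = 12
SCALE = 1 << L

def step(x):
    # quantized chaotic map (stand-in for the fixed-point Hénon map)
    return (4 * x * (SCALE - x) // SCALE) & (SCALE - 1)

def perturbed_orbit(x0, n):
    """Self-perturbation sketch: LSBs of x fill an L-bit shift register;
    when a counter driven by the low bits of x expires, the register
    contents are XORed into the state (illustrative, not Fig. 2 exactly)."""
    x, shreg, counter, limit, out = x0, 0, 0, 7, []
    for _ in range(n):
        x = step(x)
        shreg = ((shreg << 1) | (x & 1)) & (SCALE - 1)  # collect LSBs
        counter += 1
        if counter >= limit:
            x ^= shreg              # apply the perturbation signal P_k
            limit = (x & 0xF) + 1   # next (data-dependent) perturbation delay
            counter = 0
        out.append(x)
    return out

seq = perturbed_orbit(SCALE // 3, 2000)
plain, x = [], SCALE // 3
for _ in range(2000):
    x = step(x)
    plain.append(x)
assert seq != plain                 # the perturbation changes the orbit
assert all(0 <= v < SCALE for v in seq)
```

Running `find_cycle`-style analysis on `seq` versus `plain` shows the perturbed orbit cycling far later, which is the point of the block.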
The BitBasher is another straightforward but efficient block in the PCBSC. This block is included because it serves the same objective as the previous one, which is to enhance the chaotic map's statistical properties. The BitBasher block is responsible for reversing the bit order of the chaotic signal input. Following a series of experiments, we concluded that the BitBasher block should be placed as illustrated in Fig. 1. We shall see how such a simple block can significantly increase the randomness of chaotic systems and expand the range of control parameters yielding chaotic behavior.
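Bit-order reversal itself is straightforward; a minimal software model of the operation on an L = 16 word (the function name is ours, not the paper's):

```python
L = 16

def bitbasher(x, width=L):
    """Reverse the bit order of a width-bit word (the BitBasher operation)."""
    out = 0
    for _ in range(width):
        out = (out << 1) | (x & 1)   # take the current LSB...
        x >>= 1                      # ...and push it to the front
    return out

assert bitbasher(0b0000000000000001) == 0b1000000000000000
assert bitbasher(bitbasher(0xBEEF)) == 0xBEEF   # involution on 16-bit words
```

In hardware this costs nothing but wiring, which is why such a cheap block is attractive: it moves the slowly varying low-order bits into the most significant positions of the output.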

The encryption/decryption blocks
Keep in mind that the PCBSC is a symmetric-key stream cipher: encryption is done bit by bit (one bit at a time), and the same encryption key used on the emitter side must be regenerated and used on the receiver side.
As previously stated, two important properties of any cryptosystem should be assured, namely confusion and diffusion. Because the control parameters are used as keys in the PCBSC, the confusion property may be guaranteed (as we will see next) due to the chaotic system's high sensitivity to the initial parameters. To assure the diffusion property and to complicate and obfuscate the relationship between the plaintext and the ciphertext, we propose an efficient encryption/decryption mechanism. The ciphertext C_k is obtained as follows:

C_k = O(P_k ⊕ R_k ⊕ C_{k-1})

where P_k, R_k, and C_{k-1} represent, respectively, the plaintext, the chaotic sequence, and the previous ciphertext. O denotes the process by which the final ciphertext is obtained; it comprises a sequence of blocks (numbered one to five) through which the ciphertext is subject to many changes, as shown in Fig. 3. Each of these five blocks shuffles the bit positions of its input in a distinctive manner. There are two possible scenarios for the final output of each block, depending on the signal that controls the corresponding multiplexer: the unchanged input signal or the shuffled input signal. The LSBs of the secret keys key_a, key_b, key_c, key_d, and key_e (the keys representing each control parameter) are the signals that control the multiplexers.
For the decryption process, the order of the blocks is inverted so that the original plaintext can be easily recovered, as shown in Fig. 3. The recovered plaintext rP_k is obtained as follows:

rP_k = O^{-1}(C_k) ⊕ R_k ⊕ C_{k-1}

where O^{-1} denotes the process of inverting the order of the blocks. The proposed encryption scheme ensures that a little change in the plaintext or in the multiplexers' inputs results in significant changes in the ciphertext. From a cryptographic viewpoint, the plaintext's statistical properties will be completely dispersed in the ciphertext (more details can be found in Sect. 5).
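A software sketch of this scheme illustrates why inverting the stage order recovers the plaintext. Bit rotations stand in for the five (undisclosed) bit-shuffling blocks, 8-bit words are used for brevity, and the key bits and keystream values are arbitrary stand-ins:

```python
W = 8                                   # word width for this toy sketch
MASK = (1 << W) - 1

def rotl(x, r):                         # one example of a bit-position shuffle
    return ((x << r) | (x >> (W - r))) & MASK

def O(x, key_bits):
    """Apply the shuffle stages in order; stage i is enabled by key bit i."""
    for i, enabled in enumerate(key_bits):
        if enabled:
            x = rotl(x, i + 1)
    return x

def O_inv(x, key_bits):
    """Undo the stages in reverse order (rotl by W-r inverts rotl by r)."""
    for i, enabled in reversed(list(enumerate(key_bits))):
        if enabled:
            x = rotl(x, W - (i + 1))
    return x

def encrypt(plain, stream, key_bits):
    out, prev = [], 0
    for p, r in zip(plain, stream):
        c = O(p ^ r ^ prev, key_bits)   # C_k = O(P_k xor R_k xor C_{k-1})
        out.append(c)
        prev = c
    return out

def decrypt(cipher, stream, key_bits):
    out, prev = [], 0
    for c, r in zip(cipher, stream):
        out.append(O_inv(c, key_bits) ^ r ^ prev)
        prev = c                        # chain on the received ciphertext
    return out

key_bits = [1, 0, 1, 1, 0]              # LSBs of key_a ... key_e (hypothetical)
stream = [37, 201, 14, 255, 96, 3]      # stand-in chaotic keystream
plain = [10, 20, 30, 40, 50, 60]
assert decrypt(encrypt(plain, stream, key_bits), stream, key_bits) == plain
```

Note how the C_{k-1} feedback makes every ciphertext word depend on all preceding plaintext words, which is what propagates a single-bit plaintext change through the rest of the ciphertext.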

Synchronization block
The synchronization block is one of the most critical components of our PCBSC. The original plaintext can never be recovered without synchronization, even with the correct keys. As the driving signal, we used the signal x_k^2. Experiments revealed that when using this signal, synchronization between the emitter (master) and receiver (response or slave) systems happens promptly, after only a few iterations. Given that the master system is indicated in (3), the response system is obtained by replacing its state variables with their hatted counterparts, where the hat notation denotes the signals generated by the response system, to distinguish them from those generated by the master system. Starting from different initial conditions (x_0 = 0.1, y_0 = 0.1, x̂_0 = 0.5, ŷ_0 = 0.9), the evolution of the synchronization error (Fig. 4) demonstrates that after a few iterations, the response system synchronizes with the master, and the receiver follows the emitter's dynamics.
Experiments have shown that in noisy channels, continuous driving of the response system is not the right approach. That is, the existence of noise in the picture adds another challenge due to the chaotic systems' high sensitivity to the initial settings. To protect the PCBSC from channel noise as much as possible, we propose a simple solution that consists of driving the response system just during specified periods rather than continuously.
In fact, the noise power fluctuates continuously between high and low values. The proposed synchronization circuit (Fig. 5) tries to track the moments when the noise power is low. Practically, it compares the value of the driver signal x_k^2 with the corresponding signal x̂_k^2 generated by the response system. When the two signals are equal, it means that the two systems are synchronized and have the same outputs. To be sure that this equality is not accidental, each signal is passed through three registers, and each register output of the first signal (x_k^2) is compared to the corresponding register output of the second signal (x̂_k^2). The comparator outputs feed a logical AND gate, so its output signal S_1 takes the value 1 only if the two signals are equal during three successive periods.
The S_1 signal is fed into a logical OR gate, which then controls a counter and a multiplexer. When S_1 = 1 (E = 1), the counter starts counting, and the multiplexer outputs x̂_k^2 rather than x_k^2. In this case, the response system works independently of the master system and is protected from the channel noise. The signal S_2, on the other hand, remains active until the counter reaches its maximum value. The counter can reach a maximum of 2^m, where m is the counter's size in bits. The process is restarted when the counter reaches its maximum.
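The gating logic described above (three-sample comparison, then free-running until an m-bit counter wraps) can be modeled as follows; the class and signal handling are an illustrative simplification of Fig. 5, not the exact circuit:

```python
class SyncGate:
    """Gating sketch: S1 goes high after three successive matches between
    the received driver signal and the locally generated one; the response
    then free-runs until an m-bit counter wraps, and the process restarts."""
    def __init__(self, m=4):
        self.max_count = 1 << m      # the counter wraps at 2^m
        self.rx3, self.loc3 = [], [] # last three samples of each signal
        self.count = 0
        self.free_running = False

    def tick(self, received, local):
        self.rx3 = (self.rx3 + [received])[-3:]
        self.loc3 = (self.loc3 + [local])[-3:]
        if self.free_running:
            self.count += 1
            if self.count >= self.max_count:   # counter wrap: restart
                self.free_running, self.count = False, 0
        elif len(self.rx3) == 3 and self.rx3 == self.loc3:
            self.free_running = True           # S1 = 1: stop driving
        return self.free_running

gate = SyncGate(m=4)
states = [gate.tick(5, 5) for _ in range(20)]   # perfectly matching signals
assert states[:2] == [False, False]   # three matches are needed first
assert states[2] is True              # then the response free-runs
assert states[2 + 16] is False        # counter wrap (2^4 ticks) resumes driving
```

While `free_running` is high, the response system is decoupled from the channel, which is exactly when channel noise cannot disturb it.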
The evaluation of the performance of the proposed synchronization circuit under a noisy channel is given in Sect. 6.

Evaluation of the improved Hénon map
Given that the PCBSC's strength is related to the complexity of the chaotic generator, this section will focus on the evaluation of the improved Hénon map using the proposed perturbation block. The resulting chaotic sequence is evaluated based on two key factors: its complexity and its statistical properties. Some mathematical and statistical tools are used to perform the evaluation.

Complexity evaluation
This section deals with the evaluation of the improved Hénon map in terms of its complexity, where some mathematical tools are used for this purpose.

Trajectory and phase space analysis
The improved Hénon map's complexity can be visually assessed by visualizing the resulting phase space. The obtained result is given in Fig. 6. The phase space analysis clearly shows the good distribution of the improved Hénon map's output over the whole space, in contrast to the original map, where the output is confined to a specific form. These results confirm that the complexity of the modified Hénon map has been improved.

Bifurcation diagram analysis
A bifurcation diagram is an extremely valuable tool for visualizing the values that are visited or approached asymptotically in phase space when the control parameter evolves. It simplifies the visual distinction between regular and chaotic zones [53].
By fixing L to 16, the bifurcation diagrams of both the original and improved Hénon maps are shown in Fig. 7. As can be seen, the improved Hénon map provides far better results. The ranges of control parameters within which the system exhibits chaotic behavior are greatly expanded in comparison to the original Hénon map. Additionally, the improved map's output has a well-defined distribution throughout the interval [0, 1].
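Generating the raw data behind such a diagram is simple; the sketch below uses the one-parameter logistic map for brevity (the paper's diagrams sweep the Hénon parameters a through e, which would work the same way):

```python
def bifurcation_points(step, param_values, n_transient=300, n_keep=100, x0=0.3):
    """Collect asymptotic orbit values per control-parameter value:
    the raw (parameter, state) data behind a bifurcation diagram."""
    diagram = []
    for p in param_values:
        x = x0
        for _ in range(n_transient):   # discard the transient
            x = step(x, p)
        for _ in range(n_keep):        # record the asymptotic orbit
            x = step(x, p)
            diagram.append((p, x))
    return diagram

logistic = lambda x, r: r * x * (1 - x)
params = [2.5 + 1.5 * i / 200 for i in range(201)]   # sweep r over [2.5, 4.0]
pts = bifurcation_points(logistic, params)
assert len(pts) == 201 * 100

# at r = 2.5 the orbit settles on the fixed point 1 - 1/r = 0.6,
# so the diagram shows a single branch there (a regular zone)
fixed = [x for p, x in pts if p == 2.5]
assert all(abs(x - 0.6) < 1e-6 for x in fixed)
```

Plotting `pts` as a scatter of x against p gives the diagram: a single curve in regular zones, and a densely filled band in chaotic zones.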

Autocorrelation and period length analysis
In this section, the complexity of the improved Hénon map is evaluated by measuring the autocorrelation and the obtained period length. Autocorrelation is an important mathematical tool that can help to measure the relationship between samples of the same signal for the purpose of detecting similarities. It should be noted that a truly random process has very low autocorrelation coefficients, so the autocorrelation effectively evaluates the complexity of a given chaotic system. Autocorrelation can also help to determine the period of a chaotic system by identifying repeated patterns within the chaotic sequence.
The autocorrelation test was performed on both the original and improved Hénon maps for L = 16. The results are shown in Fig. 8. The improved Hénon map has low correlation between its output samples and a noise-like autocorrelation result. The original Hénon map, on the other hand, falls swiftly into a cycle and has a high correlation between its output samples.
The comparison of the autocorrelation coefficient values obtained from the improved Hénon map with those of the original map is further proof of the improvement made to the Hénon map. Figure 8c shows the comparison of the autocorrelation coefficients obtained from the improved Hénon map and the original map, while Fig. 8d shows the comparison of the autocorrelation coefficients obtained from the improved Hénon map and the Matlab rand function.
The improved Hénon map exhibits the lowest autocorrelation coefficients when compared to the original map, as shown in Fig. 8c. This means that the Hénon map's complexity has increased dramatically.
The Matlab rand function is used to generate pseudorandom sequences with uniformly distributed elements in the interval (0,1); it behaves like a true random process because it is carried out with high arithmetic precision (64 bits). When the complexity analysis findings of the improved Hénon map are compared to the rand function, it is possible to get a good idea of the improved map's complexity quality. Although the improved Hénon map was carried out using low arithmetic precision (16 bits), it yielded better results than the rand function, which was carried out using high arithmetic precision. This is evidenced by the improved Hénon map's low autocorrelation coefficients compared to the rand function (Fig. 8d).
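The autocorrelation measurement used in this comparison can be reproduced with a few lines; the sequences below are stand-ins (Python's uniform generator in place of Matlab rand, and a synthetic periodic signal in place of a short-cycle map):

```python
import random

def autocorrelation(seq, max_lag):
    """Normalized autocorrelation coefficients r(1) ... r(max_lag)."""
    n = len(seq)
    mean = sum(seq) / n
    var = sum((x - mean) ** 2 for x in seq)
    return [sum((seq[i] - mean) * (seq[i + lag] - mean)
                for i in range(n - lag)) / var
            for lag in range(1, max_lag + 1)]

rng = random.Random(1)
noise = [rng.random() for _ in range(5000)]   # stand-in random-like sequence
coeffs = autocorrelation(noise, 20)
assert all(abs(c) < 0.1 for c in coeffs)      # near zero at every lag

periodic = [i % 4 for i in range(5000)]       # strongly periodic signal
assert autocorrelation(periodic, 8)[3] > 0.9  # peak at the period (lag 4)
```

A short-cycle digitized map behaves like the periodic example: its autocorrelation shows sharp peaks at multiples of the cycle length, which is how the period can be read off the plot.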
The period length of the improved Hénon map has been effectively extended. The computed period length for both the original and improved Hénon maps is shown in Table 1. It should be noted that for the improved Hénon map, the process was carried out with a set of control parameters that were chosen randomly. On the one hand, the results reveal that any control parameter for the improved map yields the same result (in which the period length is extended effectively). On the other hand, the period for L = 16 has been prolonged by 16569 times compared to the original map, while the periods for L = 24 and L = 32 cannot be computed with our available computing platform.
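The period measurement can be reproduced with a standard cycle-detection routine. The sketch below uses Brent's algorithm on a toy modular map; in the actual measurement, `f` would be one iteration of the fixed-point Hénon map at the chosen precision L (this stand-in map is ours, for illustration only):

```python
def cycle_length(f, x0):
    # Brent's cycle-detection algorithm: returns the period of the
    # eventually periodic orbit x0, f(x0), f(f(x0)), ... using O(1)
    # memory, which matters when periods reach millions of states.
    power = lam = 1
    tortoise, hare = x0, f(x0)
    while tortoise != hare:
        if power == lam:          # start a new power-of-two search window
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1
    return lam
```

For L = 24 and L = 32 the orbit must be stored or iterated far longer, which is why the paper reports those periods as uncomputable on the available platform.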

Largest Lyapunov exponent analysis
The complexity of a given dynamical chaotic system refers also to its high sensitivity to the initial conditions; for chaotic systems, two trajectories started from very close points (initial conditions) diverge quickly. The largest LE (Lyapunov Exponent) is the mathematical tool that helps measure the rate of the divergence between the two trajectories. A chaotic system with a positive LE will have completely diverged trajectories after a certain number of iterations, while the largest LE value is an indicator of higher unpredictability and sensitivity. On the other hand, a negative or zero value indicates periodic behavior [54].
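For the original (real-valued) Hénon map, the largest LE can also be estimated directly from the map's Jacobian, as in the Benettin-style sketch below. This is an alternative to the Rosenstein time-series method used in the paper; the parameter values are the classic a = 1.4, b = 0.3, for which the largest LE is known to be about 0.42 (in nats per iteration):

```python
import math

def henon_lle(a=1.4, b=0.3, n=50000):
    # Benettin-style estimate of the largest Lyapunov exponent of the
    # classic Henon map x' = 1 - a*x^2 + y, y' = b*x: evolve a tangent
    # vector with the Jacobian [[-2*a*x, 1], [b, 0]] and average the
    # logarithm of its growth rate, renormalizing at every step.
    x, y = 0.1, 0.1
    for _ in range(1000):                      # discard the transient
        x, y = 1 - a * x * x + y, b * x
    vx, vy = 1.0, 0.0
    total = 0.0
    for _ in range(n):
        jx, jy = -2 * a * x * vx + vy, b * vx  # tangent update at (x, y)
        x, y = 1 - a * x * x + y, b * x
        norm = math.hypot(jx, jy)
        total += math.log(norm)
        vx, vy = jx / norm, jy / norm
    return total / n
```

A positive result confirms chaotic divergence of nearby trajectories; a negative or zero value would indicate periodic behavior, as stated above.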
The largest LE analysis has been performed using Rosenstein's algorithm implemented in Matlab. The original Hénon map and the improved one were subjected to the largest LE analysis as a function of the control parameters. The largest LE analysis was also undertaken on the Matlab rand function to confirm the efficiency of the improvement made to the Hénon map. Because the rand function generates a different state each time it is run, the largest LE was applied to the generated sequence as a function of the round number (the rounds here replace the control parameters of the Hénon maps). The obtained results are shown in Fig. 9. As we can see:
- For the original Hénon map, the largest LE takes negative values for a < 1 and b < 0.07; moreover, the maximum of the largest LE cannot exceed 0.48 in the first case (Fig. 9(1)) and 0.7 in the second case (Fig. 9(2)).
- In the case of the improved Hénon map, the obtained results are considerably better: according to Fig. 9(4) onward, the largest LE remains positive and reaches higher values over the entire intervals of the control parameters.
Based on what has been mentioned thus far, we may conclude that the improved Hénon map has the highest largest LE value and thus more unpredictability and sensitivity to the initial conditions. These findings are also significantly better than those obtained using the rand function (Fig. 9(3)), despite the fact that this function is efficient and implemented with high arithmetic precision.

Statistical properties analysis
In this section, the improved Hénon map is evaluated in terms of the statistical properties of its output. For this purpose, some statistical metrics are used, such as the approximate entropy, the sample entropy, and the permutation entropy. The well-known NIST statistical test set is also used to evaluate the randomness of the improved Hénon map's output. Matlab also has an efficient function for evaluating the complexity of a given random sequence, which is used to evaluate the output of the improved Hénon map.

The approximate entropy analysis
The Approximate Entropy (ApEn) is a mathematical tool used to evaluate the irregularity and unpredictability of time series data. This test was first proposed by Steve M. Pincus [55] for the purpose of detecting similar patterns in time series. A comprehensive tutorial on the ApEn with theoretical background can be found in [56]. Roughly speaking, high values of ApEn mean the signal is more random and unpredictable, whereas low values mean the signal is more regular and easy to predict.
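A direct O(n²) sketch of the ApEn computation is given below (the function name is ours; r is an absolute tolerance here, whereas in practice it is often taken as 0.2 times the series' standard deviation):

```python
import math
import random

def apen(U, m=2, r=0.2):
    # Approximate entropy (Pincus): compare how often templates of
    # length m recur within tolerance r versus templates of length m+1.
    # A regular signal keeps recurring at both lengths (ApEn near 0);
    # an unpredictable signal loses matches when extended (ApEn high).
    def phi(mm):
        n = len(U) - mm + 1
        templ = [U[i:i + mm] for i in range(n)]
        total = 0.0
        for t in templ:
            # Chebyshev distance; self-match guarantees c >= 1
            c = sum(1 for s in templ
                    if max(abs(a - b) for a, b in zip(t, s)) <= r)
            total += math.log(c / n)
        return total / n
    return phi(m) - phi(m + 1)
```

On a noisy sequence the result is well above that of a strictly alternating one, which mirrors the comparison performed in the paper between the improved map and the original one.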
The ApEn is performed on the original and improved Hénon maps' outputs for a set of control parameters. The ApEn is also performed on the Matlab rand function as a function of the number of rounds, for the purpose of comparing the results obtained from the improved map with those of the Matlab rand function. The obtained results are presented in Fig. 10.
Two conclusions may be deduced from the obtained results: first, the high ApEn values corroborate the improved Hénon map's high unpredictability and randomness; second, the high values are preserved over the entire intervals of the control parameters. This also confirms that, for the improved Hénon map, the control parameter intervals in which the system is chaotic are extended.

The permutation entropy analysis
The permutation entropy (PE) is an efficient tool that effectively quantifies the complexity of a dynamic system. It is based on the principle of capturing the order relations between values of a time series and extracting a probability distribution of the ordinal patterns. This tool is characterized by its conceptual simplicity and computational speed, and provides many advantages [57,58]. The smaller the PE is, the more regular and deterministic the time series is. Conversely, the closer the PE is to 1, the more noisy and random the time series is. For an in-depth understanding of how the PE test is performed, a comprehensive example is given in [57].
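The ordinal-pattern counting behind the PE can be sketched as follows (a minimal Python version, normalized to [0, 1]; the function name is ours):

```python
import math
import random
from collections import Counter

def perm_entropy(x, order=3):
    # Normalized permutation entropy: build the distribution of ordinal
    # patterns (argsort of each window of length `order`) and take the
    # Shannon entropy, normalized by log2(order!). A monotone series
    # uses a single pattern (PE = 0); noise uses all patterns (PE -> 1).
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: x[i + k]))
        for i in range(len(x) - order + 1))
    total = sum(patterns.values())
    h = -sum((c / total) * math.log2(c / total)
             for c in patterns.values())
    return h / math.log2(math.factorial(order))
```

This matches the interpretation in the text: values close to 1 indicate a noisy, random series, while small values indicate a regular, deterministic one.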
The PE test is carried out on both the original and improved Hénon maps as a function of a set of control parameters. The PE test is also performed on the Matlab rand function, as previously done, and the results are shown in Fig. 11. For all control parameters of the improved Hénon map, the PE ranges from 0.991231 to 0.999931, and, in contrast to the original map, these results are preserved over the entire intervals of the control parameters. The PE values obtained are quite close to 1 and are consistent with the results obtained using the Matlab rand function. This confirms that the improved Hénon map has better randomization quality than the original one.

The sample entropy analysis
Sample entropy (SampEn) has the same concept as the ApEn (it is a modified version of the ApEn). It was introduced by the authors in [59] and has some advantages over the ApEn, such as accuracy, data length independence, and implementation simplicity. More details about the SampEn can be found in [56]. High SampEn values imply high complexity of the evaluated system's output.
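The SampEn computation can be sketched as follows (a minimal Python version; the function name is ours, and, unlike ApEn, self-matches are excluded):

```python
import math
import random

def sampen(U, m=2, r=0.2):
    # Sample entropy: -log(A/B), where B counts template pairs of length
    # m within Chebyshev tolerance r, and A counts the same pairs when
    # the templates are extended to length m+1. Self-matches (i == j)
    # are excluded, which removes the bias present in ApEn.
    n = len(U)
    def pairs(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(U[i + k] - U[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    B, A = pairs(m), pairs(m + 1)
    return -math.log(A / B)
```

As in the paper's evaluation, a noisy sequence scores well above a strictly alternating one.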
The SampEn test was performed on the original Hénon map, the improved one, and the Matlab rand function for comparison. The obtained results, shown in Fig. 12, reveal that the SampEn has high values for the improved Hénon map when compared to the original one; these values are approximately equivalent to those obtained for the Matlab rand function, although the latter is implemented using high arithmetic precision (64 bits). This confirms the improvements made to the Hénon map in terms of complexity and control parameter interval extension.

Matlab runstest function analysis
Matlab has a convenient function for determining the randomness of a given time series: the runstest function, which is based on the number of consecutive runs of values above or below the mean of the variable (the signal to be evaluated). If the test rejects the null hypothesis at the 5% significance level, the result is 1; otherwise it is 0. The test additionally provides a numerical p value (significance level), which must be greater than or equal to 0.05.
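The principle behind runstest can be sketched as a Wald-Wolfowitz runs test (the function name is ours; Matlab's implementation differs in details such as tie handling):

```python
import math

def runs_test_pvalue(x):
    # Wald-Wolfowitz runs test against the sample mean: count maximal
    # runs of values above/below the mean, compare the count to its
    # expectation under randomness, and return a two-sided p value.
    # Assumes both categories (above and below the mean) occur.
    mu = sum(x) / len(x)
    s = [v > mu for v in x if v != mu]     # drop values equal to the mean
    n1 = sum(s)
    n2 = len(s) - n1
    runs = 1 + sum(s[i] != s[i - 1] for i in range(1, len(s)))
    exp = 2.0 * n1 * n2 / (n1 + n2) + 1
    var = (2.0 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - exp) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2))
```

A p value of at least 0.05 means the null hypothesis of randomness is not rejected, matching the acceptance criterion used in the paper.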
The runstest results are shown in Fig. 13. It is worth noting that all of the runs test results are presented in a single figure to facilitate comparison; the abscissa indicates the position of the control parameter being employed within its interval. As illustrated in Fig. 13, the improved Hénon map has better randomness properties than the original Hénon map. The improved Hénon map also obtains better results than the Matlab rand function: the rand function has a success rate of 94.79% (despite being implemented using high arithmetic precision), which is less than that of the improved Hénon map in all cases. Thus, the Matlab runstest results show that the Hénon map has been improved.

The NIST test suite analysis
The NIST (National Institute of Standards and Technology) Test Suite is a statistical package comprising 15 tests designed to check the randomness of (arbitrarily long) binary sequences generated by hardware- or software-based cryptographic random or pseudorandom number generators [60]. Each test generates a p value, which should be equal to or greater than 0.01 to indicate that the test was passed successfully.
The binary output sequences of both the original and improved Hénon maps are evaluated using the NIST statistical tests. The obtained results, together with the success rates, are shown in Table 2. It is obvious that the improved Hénon map has high randomness properties: more than 92.19% of the tests are passed successfully, whereas only 18.75% of the tests are passed successfully in the case of the original Hénon map (and this is the best case).

Security evaluation of the PCBSC
This section deals with the evaluation of the PCBSC in terms of its security and reliability. It is the most significant part of this work because it determines the PCBSC's security level. We begin by analyzing the key strength and space, then the confusion and diffusion properties, and finally the statistical properties of the encrypted image to demonstrate how the plaintext's statistical properties are dispersed throughout the ciphertext. The PCBSC was also subjected to a number of well-known cryptographic attacks in order to assess its reliability.
It is worth noting that the implementation precision has been increased (L = 32 in this case), which enlarges the system's key while also raising the chaotic generator's complexity. The improved Hénon map already achieved excellent results when L = 16 was used; as a result, using L = 32 further increases the chaotic generator's complexity and improves its statistical properties.

The system's keys
In the encryption algorithm, the key is the most crucial component. The security of the system is directly dependent on this key: how it is made, how long it is, and how strong it is. The length of the key, which can be expressed in bits, determines the level of security given by an encryption algorithm. The maximum number of operations required for decryption is defined by the length of the key. The control parameters are commonly used to generate keys in chaos-based cryptography. The system only exhibits chaotic behavior for certain and restricted ranges of control parameters, which poses a significant difficulty. Selecting keys outside of these specific ranges is pointless because the system will behave in a predictable manner rather than chaotically. This almost surely results in a vulnerable encryption system. As a result, the keys should be chosen from the ranges in which the system behaves chaotically.

The keys construction
Bifurcation diagrams can be used to identify the control parameter ranges in which the system exhibits chaotic behavior. Referring to the bifurcation diagrams (Fig. 7), we can see that the ranges of control parameters where the improved map exhibits chaotic behavior are extended. The control parameter a can be set to any value in [0, 2], while the control parameters b, c, d, and e can take any value over the interval [0, 1]. As a result, it should be evident that the keys can be chosen from any of these intervals, but there are some critical considerations to make:
- It would be preferable to avoid very small control parameter values. Experiments have shown that setting all control parameters to small values (< 0.05) at the same time leads to quasi-regular behavior. As a result, the control parameter intervals should start from values greater than 0.05.
- Experiments have also shown that setting the control parameter b to a value greater than 0.55 lengthens the time it takes for the systems (emitter and receiver) to synchronize. Because it is preferable for the systems to synchronize quickly, the appropriate interval for the control parameter b is (0.05, 0.55).
Because the system is implemented using fixed-point arithmetic, we can simply manipulate the binary words of the control parameters. We therefore proposed the control parameters generator (Fig. 1) to enforce the new control parameter boundaries described above. Figure 14 depicts its internal basic scheme.
The Key_a (an integer of 31 bits used to construct the parameter a) is passed through a Slice block to extract the three MSBs (signal S_a1) and the remaining 28 bits (S_a2). These two signals are combined with a 1-bit constant S_a3 = "1" in binary to create a new binary vector in the order S_a1 & S_a3 & S_a2 (a new integer of 32 bits). The newly obtained integer is then reinterpreted as 32Q31; in other words, the 32-bit integer is converted to a fractional number with a 31-bit fractional part and a 1-bit integer part. The obtained number corresponds to the control parameter a and has the following boundaries:
- Key_a = 2^31 - 1 corresponds to a = 1.999999999534339 (in binary 11111111111111111111111111111111).
- Key_a = 0 corresponds to a = 0.125 (in binary 00010000000000000000000000000000).
In the binary data, the fourth bit from the left is the constant S_a3. It is obvious that we always have a ∈ [0.125, 1.999999999534339] for any given value of Key_a, and this is exactly what we want, since 0.125 > 0.05.
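For illustration, the Key_a slicing and reinterpretation described above can be modeled in software (a Python sketch of the hardware Slice/concatenate path; the function name is ours, and the real circuit operates on bit vectors rather than Python integers):

```python
def build_param_a(key_a):
    # Model of the Key_a path of the control-parameter generator:
    # take the 3 MSBs and the low 28 bits of the 31-bit key, insert a
    # constant '1' bit between them, and read the resulting 32-bit word
    # as unsigned fixed point 32Q31 (1 integer bit, 31 fractional bits).
    assert 0 <= key_a < 2 ** 31
    s_a1 = key_a >> 28                 # 3 MSBs
    s_a2 = key_a & ((1 << 28) - 1)     # remaining 28 bits
    word = (s_a1 << 29) | (1 << 28) | s_a2   # S_a1 & '1' & S_a2
    return word / 2 ** 31              # value of the 32Q31 word
```

The forced constant bit guarantees the word is at least 2^28, so the resulting parameter never drops below 0.125, matching the boundary analysis above.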
The Key_b (the key constructing the parameter b), an integer (S_b2) of 30 bits, is combined with a 2-bit constant (S_b1 = "01" in binary) to form a new binary vector in the order S_b1 & S_b2 (a new integer of 32 bits). The newly obtained integer is then reinterpreted as 32Q32, a fractional number with 32 bits for the fractional part and 0 bits for the integer part. The obtained number represents the control parameter b, which has the following boundary:
- Key_b = 2^30 - 1 corresponds to b = 0.499999999767169 (in binary 01111111111111111111111111111111).
The process is the same for the keys Key_c, Key_d, and Key_e (the keys that construct the control parameters c, d, and e). Each key is coded in 31 bits and passed through a Slice block to extract the two upper bits (S_c1, S_d1, and S_e1) and the remaining 29 bits (S_c2, S_d2, and S_e2). Each of these signal pairs is combined with a 1-bit constant ("1" in binary; S_c3, S_d3, and S_e3) to form new binary vectors in the orders S_c1 & S_c3 & S_c2, S_d1 & S_d3 & S_d2, and S_e1 & S_e3 & S_e2 (new integers of 32 bits). The newly obtained integers are then reinterpreted as 32Q32 fractional numbers (32 bits for the fractional part and 0 bits for the integer part). The boundaries of each new control parameter (c, d, and e) are:
- Key_c, Key_d, Key_e = 2^31 - 1 corresponds to c, d, e = 0.999999999767169 (in binary 11111111111111111111111111111111).
- Key_c, Key_d, Key_e = 0 corresponds to c, d, e = 0.125 (in binary 00100000000000000000000000000000).
In the binary data, the third bit from the left is the constant added to each key. It is obvious that we always have c, d, e ∈ [0.125, 0.999999999767169] for any given values of these keys, and this is exactly what we want, since 0.125 > 0.05.
"Key strength" refers to the encryption system's sensitivity to minor changes in the key. Generally, the smallest amount of change is flipping one bit in the key. A good encryption system should provide highly sensitive keys. This property is also linked to the confusion property explained before, which means that the relationship between the key and the corresponding ciphertext is not simple to deduce. Knowing full or partial information about the ciphertext should not reveal any information about the used key, even if the structure of the encryption system is well known.
When an intruder tries to recover the plaintext using a key that is extremely close to the original key used for encryption (a difference of one bit), no partial or complete information about the plaintext should be revealed. Two identical plaintexts encrypted with two very close keys, on the other hand, should produce completely different ciphertexts. This concept is linked to the modern notion of the "avalanche effect": flipping one bit in the key must change each output bit with a probability of 50% (strict avalanche criterion). Figure 15 presents the encryption and decryption processes of an RGB image. The results show the difference between two ciphertexts encrypted with very close keys (a difference of one bit between each tried key and its corresponding real key). We can see that the two obtained ciphertexts are completely different in each case, despite the fact that the difference in each key is only one bit (the LSB). This confirms that the PCBSC has highly sensitive keys.
There is also another important issue related to the key; the equality of strength. This property (as mentioned previously) characterizes the avalanche effect. The aim of this test is to evaluate if the keys are equally strong or not. In a good encryption scheme, the BER between the plaintext and the recovered one should be zero for the real keys and around 0.5 for the other keys whenever we are close to or far from the real keys.
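The BER metric used throughout this evaluation is simply the fraction of differing bits between the plaintext and the recovered text; a minimal sketch over byte strings (the function name is ours):

```python
def ber(a, b):
    # Bit error rate between two equal-length byte sequences:
    # XOR each byte pair and count the set bits of the result.
    assert len(a) == len(b)
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return diff / (8 * len(a))
```

A BER of 0 means perfect recovery (the real key), while a value pinned near 0.5 for every wrong key is exactly the flat profile expected from a strong cipher.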
The results of the BER evolution as a function of key pairs are shown in Fig. 16. The BER between the plaintext and the recovered text is computed with a one-bit change of the key at each step. According to the obtained results, the PCBSC has strong keys: the evolution of the BER is flat around 0.5 for all tried keys, except for the real keys used for encryption, where the BER is equal to zero. We can confidently state that the avalanche effect is guaranteed.

The encryption block sub-key
As previously stated, the encryption block shown in Fig. 3 adds another level of complexity to the PCBSC, hence increasing its security. Given that this part is about the system's key, it is reasonable to evaluate the impact of the sub-keys that control this block. Each of these sub-keys (key_a(0), key_b(0), key_c(0), key_d(0), and key_e(0)) represents the LSB of the corresponding system key (key_a, key_b, key_c, key_d, and key_e). Because of the control parameters' high sensitivity (an evaluation of which has been performed), any change to these sub-keys results in a different state.
The findings of the evaluation of the sensitivity of the sub-keys are shown in Fig. 17, where it is obvious that the encryption block is extremely sensitive to the sub-key. Thus, the encryption block adds an extra layer of protection to the PCBSC.

The key space
The key space is the set of all possible (valid) keys of a cryptographic algorithm. A symmetric cryptographic algorithm (such as in our case) should provide a key space large enough to resist brute-force attacks. A 128-bit symmetric key is computationally secure against brute-force attack [61]. In our case, the total key length is 31 bits (Key_a) + 30 bits (Key_b) + 31 bits (Key_c) + 31 bits (Key_d) + 31 bits (Key_e) = 154 bits. Therefore, the key space is 2^154 ≈ 2.2836 × 10^46 possible keys.

Image statistical analysis
Images have some special characteristics compared to other data types, and the relationship between pixels and grayscale levels can reveal information to an intruder. Hence, a good cryptographic algorithm should disperse any statistical information about the original image into the encrypted one. The following statistical analysis has been performed on the PCBSC to evaluate its immunity against statistical attacks.

Image histogram
An image histogram represents the number of pixels for each grayscale level in the image. This analysis has been performed on the PCBSC. Figure 18 presents the histograms of the original image and the encrypted one. We can see clearly that the encrypted image has a histogram that is quite uniform compared to the original one. This indicates that the statistical properties of the original image are completely dispersed in the encrypted image.

Correlation between adjacent pixels
For an ordinary image with meaningful visual perception, the correlation between adjacent pixels is always high, as their pixel values are close to each other [62]. A good cryptographic algorithm produces an encrypted image with a very low correlation between adjacent pixels. To perform this test on the original image and the encrypted one, we randomly selected 1500 pixels from each image over the three directions (horizontal, vertical, and diagonal). For each direction, we computed the correlation coefficient for each pixel pair using the following formulas:

r_xy = cov(x, y) / (sqrt(D(x)) sqrt(D(y))),

with

E(x) = (1/N) Σ_{i=1}^{N} x_i, D(x) = (1/N) Σ_{i=1}^{N} (x_i − E(x))², cov(x, y) = (1/N) Σ_{i=1}^{N} (x_i − E(x))(y_i − E(y)),

where D(x) and D(y) represent, respectively, the variances of x and y, and cov(x, y) represents the covariance between x and y; E(y) and D(y) can be computed in the same manner.
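These statistics translate directly into code (a minimal Python sketch; in practice x and y would hold the values of the 1500 sampled adjacent-pixel pairs):

```python
import math

def corr_coeff(x, y):
    # Correlation coefficient r_xy = cov(x, y) / sqrt(D(x) * D(y)),
    # where D is the variance and cov the covariance of the samples.
    n = len(x)
    ex, ey = sum(x) / n, sum(y) / n
    cov = sum((a - ex) * (b - ey) for a, b in zip(x, y)) / n
    dx = sum((a - ex) ** 2 for a in x) / n
    dy = sum((b - ey) ** 2 for b in y) / n
    return cov / math.sqrt(dx * dy)
```

Values near ±1 indicate the strong pixel dependence of a natural image; values near 0 indicate the dispersion expected from a good cipher.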
The results of the correlation analysis between adjacent pixels are shown in Fig. 19, which depicts the correlation distribution between two horizontally adjacent pixel pairs for both the original and encrypted images. The original image has a high correlation: the grayscale levels are concentrated around the line x = y, whereas the encrypted image has a low correlation, with the grayscale levels dispersed across the whole 2D plane. Table 3 contains numerical values of the correlation coefficients over the three directions for both the original and the encrypted images.

Information entropy
Entropy is another efficient metric that assesses the randomness of a particular encrypted image. It represents the average information generated by all pixels. The entropy H is defined as [62]:

H = − Σ_{n=0}^{L−1} P(n) log2 P(n),

where P(n) is the probability of occurrence of gray level n, and L is the number of gray levels; each pixel in a grayscale image is coded in 8 bits, resulting in L = 2^8 = 256. Images that provide a uniform distribution of the gray levels have good randomness quality and thus result in high information entropy (H approaching the value of 8). The results of the information entropy analysis performed on the PCBSC are given in Table 4.
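The entropy computation itself is straightforward; a minimal sketch over a flattened list of gray levels (the function name is ours), where a fully uniform 8-bit histogram reaches the maximum H = 8:

```python
import math
from collections import Counter

def info_entropy(pixels):
    # Shannon entropy of a gray-level sequence, in bits per pixel.
    # Levels that do not occur contribute nothing to the sum.
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(pixels).values())
```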
The analysis is carried out on both the original Lena RGB image and its corresponding encrypted image.
The achieved information entropy results are quite good for the encrypted image. The obtained entropy values are very close to the value of 8 for each color component. This is a clear indication of the PCBSC's efficiency and reliability.

The differential attack analysis
The differential attack's goal is to determine how a small change in the system's input affects the output; it is a form of chosen-plaintext attack. Biham and Shamir were the first to propose it [63], and Chen later proposed a more efficient modified version [64]. The diffusion property of an encryption system can also be characterized using the differential attack.
To carry out this attack on the PCBSC, we encrypt two plain images that are identical except for a single pixel. The differences between the corresponding encrypted images, denoted C_1 and C_2, are then determined. The PCBSC is immune to these types of attacks if the difference between C_1 and C_2 is significant.
According to [64], two measures are used to carry out the differential attack: the Number of Pixels Change Rate (NPCR) and the Unified Average Changing Intensity (UACI). For a good encryption system, the NPCR should be greater than 99%, whereas the UACI should be greater than 33%. The NPCR and UACI are, respectively, given by:

NPCR = (Σ_{i,j} D(i, j) / (W × H)) × 100%,
UACI = (1 / (W × H)) Σ_{i,j} (|C_1(i, j) − C_2(i, j)| / 255) × 100%,

where D(i, j) = 0 if C_1(i, j) = C_2(i, j) and D(i, j) = 1 otherwise, and W and H represent, respectively, the width and the height of the image. The differential attack analysis has been carried out on the PCBSC. We randomly chose the position of one pixel in the grayscale Lena original image and changed it. The original Lena image and the modified one are then encrypted and decrypted for several rounds, and the corresponding encrypted images of each round are used to compute the NPCR and UACI. The obtained results are presented in Fig. 20: the NPCR varies from 99.6022% to 99.8432%, whilst the UACI varies from 33.381326% to 33.561924%. Thus, the results show that the PCBSC is immune to the differential attack and that the diffusion property is guaranteed.
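Both measures are simple to compute from two equal-size ciphertexts; a minimal sketch over flattened 8-bit pixel lists (the function name is ours):

```python
def npcr_uaci(c1, c2):
    # NPCR: percentage of pixel positions that differ between the two
    # ciphertexts. UACI: mean absolute pixel difference, normalized by
    # the maximum intensity 255, expressed as a percentage.
    assert len(c1) == len(c2)
    n = len(c1)
    npcr = 100.0 * sum(1 for a, b in zip(c1, c2) if a != b) / n
    uaci = 100.0 * sum(abs(a - b) for a, b in zip(c1, c2)) / (255.0 * n)
    return npcr, uaci
```

Identical ciphertexts give (0, 0); a cipher with good diffusion drives the NPCR above 99% and the UACI above 33%, as required by the thresholds above.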

Performance under noisy channel
This section aims to evaluate the PCBSC under noisy channel conditions, i.e., to evaluate the proposed synchronization circuit. As previously stated, the proposed synchronization circuit is used to protect the system from the effects of noise as much as possible. The existence of noise is an important concern, especially considering that chaotic systems are extremely sensitive to their initial conditions. Traditional synchronization methods (driving the response system continuously) are substantially more vulnerable to channel noise; in the case of the proposed synchronization circuit, the synchronization is done only at specific moments in time. For a fair comparison, the evaluation is done as follows: we use two configurations, the first containing the PCBSC (emitter and receiver) with the traditional synchronization method (continuous driving of the response system), and the second containing the PCBSC with the proposed synchronization circuit. In each case, the driving signal is passed through a QPSK modulator and then a noisy channel with additive white Gaussian noise (AWGN). The received signal is demodulated, and the synchronization takes place.
As described previously, the proposed synchronization circuit includes a counter that serves as a timer for the response system to be disconnected from the driver signal and act independently. The longer the response system operates independently, the more protected it is from channel noise.
To evaluate the performance of the proposed synchronization scheme, we compute the BER between the original Lina image and the recovered one as a function of the SNR (Signal to Noise Ratio). The PCBSC is evaluated in two scenarios: one with the proposed synchronization circuit and the other without. In the first scenario, the BER is computed for various internal counter sizes (m). This is to show how the time it takes for the response system to operate independently affects the system's performance when running on a noisy channel. Figure 22 depicts the final result. This result clearly reveals that the PCBSC is more efficient against channel noise when the proposed synchronization circuit is used. On the other hand, the longer the response system is allowed to run independently, the more protected it is against channel noise.
Using m = 16 bits as an example, the BER between the original image and the recovered one is 0 at an SNR of 3.5 dB, whereas the BER for the PCBSC without the proposed synchronization circuit is 0.354. Figures 21 and 23 present the recovered images as a function of SNR for both configurations; it is obvious that the proposed synchronization circuit outperforms the usual synchronization method in terms of immunity against channel noise.

Real-time FPGA-based implementation over a wireless connection

We used two Basys 3 boards, each including a Xilinx Artix-7 FPGA (XC7A35T series), with two 2.4 GHz radio modules based on the Nordic Semiconductor nRF24L01+ chip. The Vitis Model Composer tool was used to design the PCBSC so that we could proceed to the synthesis and hardware implementation phases as quickly as possible. Model Composer is a model-based design tool that enables rapid design exploration within the MathWorks Simulink® environment and accelerates the path to production for Xilinx® programmable devices through automatic code generation [65].
An RGB image is encrypted and recovered using the FPGA-based PCBSC. The real-time evaluation of the PCBSC over the wireless connection is done as follows (Fig. 24):
- The microcontroller, which controls the nRF24L01+ module, receives the incoming serial data (SR) and forwards it to the nRF24L01+ module over the SPI interface.
- The data is wirelessly sent to the receiver PCBSC through the nRF24L01+ module. The data is subsequently received by the nRF24L01+ receiving module, which serially delivers it to the microcontroller and then to the FPGA-based receiver PCBSC through the RS-232 port.
- The received serial data is then rearranged to form the driver signal and the encrypted image. The driver signal is sent to the proposed synchronization circuit and the encrypted image to the receiver encryption block. When the synchronization takes place, the original image is recovered and sent to the VGA component, in conjunction with the encrypted image, to display them on the screen.
Figure 25 presents the FPGA-based hardware configuration of the PCBSC, in which one Basys 3 board implements the emitter PCBSC and the other the receiver system. The first screen displays the original image and the encrypted one, whereas the second screen displays the received encrypted image and the recovered one. It is clear that the original image is well recovered.

Hardware-based performance and implementation cost
The FPGA resource utilization report is a useful tool for determining the performance of the designed system as well as the cost of its implementation. The resource utilization report provided by Vivado is shown in Fig. 26.

The PCBSC versus other proposals
As previously stated, what distinguishes our work is that it addresses all of the challenges facing the use of chaos in cryptography, as opposed to many other works that only handle some of them and turn a blind eye to the others. This section summarizes the results of the comparison of the PCBSC with other proposals from several angles. Owing to the diversity and sheer number of works in this context, the PCBSC's performance is only compared with a few randomly picked ones. The key aspects of the comparison are summarized in Table 5, where (×) denotes that the authors did not address an issue or that the test was not conducted, whereas (✓) denotes that the issue is addressed.
Generally, we can deduce from this table that the PCBSC provides the best differential attack results. The correlation coefficients between adjacent pixels of the encrypted image also show that the PCBSC provides the best results, especially over the horizontal and vertical directions.
In comparison to the other proposals, the obtained entropy value indicates that the PCBSC produces encrypted images with a high level of complexity.
The PCBSC has a key space of 2^154, which is larger than those of the proposals in [66,69]. Larger key spaces, such as in [67,68], are obtained due to the high arithmetic precision (64 bits) used. However, employing high arithmetic precision, in addition to the complicated structures of these proposed encryption schemes, has a direct impact on the implementation cost.
Another significant issue concerning the key is that, despite the fact that the keys are large enough, the authors in [66-68] have ignored an important issue, namely weak keys. That is, because the keys are created from the initial conditions and the control parameters, the authors did not indicate the intervals in which the control parameters lead to weak keys in order to avoid them, but instead used the entire intervals. This is a major problem that undermines the security of the proposed encryption schemes.
Concerning synchronization and the dynamical degradation of digitized chaotic systems, none of these proposals addresses this important point, with the exception of [66], whose authors used a simple technique to enhance the dynamical properties. However, their technique applies only to the chaotic system's output, leaving the main properties of the original chaotic system unchanged (including the intervals of the control parameters for which no chaotic behavior occurs).

Conclusion
Much effort has been expended in this work to develop a robust and efficient chaos-based stream cipher. Indeed, despite the large number of recently proposed chaos-based encryption systems, to our knowledge none of them addresses all of the challenges associated with chaos-based cryptography; some proposals address certain problems while ignoring others. In our PCBSC, practically all of these issues are addressed in a single cryptosystem. The dynamical degradation inherent in digitized chaotic systems is minimized by an effective self-perturbation circuit that enhances the statistical properties of the digitized chaotic map, extends the intervals of its control parameters over which it exhibits chaotic behavior, and significantly increases its cycle length. This was confirmed by the good results obtained from the statistical and mathematical evaluation tools used for this purpose, which demonstrated that the statistical and randomness properties of the original chaotic map had been greatly enhanced.
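The largest Lyapunov exponent used in the evaluation is a standard indicator of chaotic behavior. As an illustration only, using the classic logistic map as a stand-in for the paper's (unspecified) improved chaotic map, the exponent can be estimated by averaging log|f'(x)| along an orbit:

```python
import math

def logistic_lle(r, x0=0.3, n=5000, burn=500):
    """Estimate the largest Lyapunov exponent of the logistic map
    x_{n+1} = r * x_n * (1 - x_n) by averaging log|f'(x_n)|.

    A positive exponent indicates sensitive dependence on initial
    conditions, i.e. chaos; a negative exponent indicates a stable
    (non-chaotic) regime.
    """
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)) + 1e-300)
        x = r * x * (1 - x)
    return acc / n
```

For r = 4 the estimate approaches ln 2 (the known analytic value), while for r = 2.5 the orbit settles on a fixed point and the exponent is negative; a design goal such as "extending the chaotic intervals of the control parameters" is verified by checking that the exponent stays positive across the whole parameter range.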
The PCBSC has an efficient control parameters' generator that produces appropriate parameters, resulting in strong secret keys with a large space (2^154). The PCBSC also provides an efficient encryption scheme which, in combination with the system's keys, ensures the confusion and diffusion properties. The security analysis confirmed the efficiency of the proposed encryption scheme as well as the strength of the keys.
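Structurally, a stream cipher of this kind XORs the plaintext with a keystream derived from the secret key, and decryption is the same operation. The sketch below uses SHA-256 in counter mode as a stand-in keystream generator (the paper's chaotic keystream generator is not reproduced here); only the XOR-based stream-cipher structure is the point:

```python
import hashlib

def keystream_bytes(key: bytes, n: int) -> bytes:
    """Illustrative keystream generator: SHA-256 in counter mode.

    Placeholder for the paper's chaotic keystream generator; it merely
    supplies n pseudorandom bytes deterministically from the key.
    """
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypts or decrypts: XOR with keystream is its own inverse."""
    ks = keystream_bytes(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

Because XOR is its own inverse, the receiver recovers the plaintext by regenerating the identical keystream, which is precisely why keystream synchronization between transmitter and receiver is critical over noisy channels.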
The PCBSC takes into account the effect of transmission channel noise on synchronization performance. It provides an efficient synchronization circuit that minimizes the effects of channel noise on the system as much as possible. The synchronization circuit evaluation results showed that the performance of the PCBSC under noisy channels is significantly improved when compared to a configuration employing a traditional synchronization approach.
The PCBSC has been evaluated in real time on an FPGA over a wireless transmission link, in which an RGB image was encrypted and recovered. According to the resource utilization report, the PCBSC has a simple structure and its implementation cost is low.
In general, when designing the PCBSC, we considered almost all of the challenges associated with the use of chaos in cryptography, including the influence of digitization on the dynamical features of chaotic systems, security, synchronization under noisy channels, complexity, and performance. This is a feature that is seldom present in a single design in the most recently proposed chaos-based encryption schemes.
The insertion of the PCBSC into real network applications will be the subject of our future work. The PCBSC will be implemented either as software or hardware depending on the application: video streaming encryption, voice encryption, or general data encryption.
Data availability Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.

Declarations

Conflict of interest
The authors declare that they have no conflict of interest.