Compression of images with a mathematical approach based on sine and cosine equations and vector quantization (VQ)

Compressing an image reduces the memory needed to store it and increases its transmission speed over the network. Vector quantization (VQ) is one of the image compression methods. The challenge of the vector quantization method is the non-optimality of the codebooks: codebook optimization increases the quality of compressed images and reduces their size. Various swarm intelligence and meta-heuristic methods have been used to improve the vector quantization algorithm, but meta-heuristic methods based on mathematical sciences have a shorter history. This paper uses an improved version of the sine cosine algorithm (SCA) to optimize the vector quantization algorithm and reduce the compression error. The reason for using the SCA algorithm in image compression is the balance between exploration and exploitation provided by the sine and cosine functions, which makes it less likely to get caught in local optima. To reduce the calculation error of the SCA algorithm, the proposed method uses spiral trigonometric functions and a new mathematical helix. The proposed method searches for optimal solutions with spiral and snail searches, increasing the chance of finding more optimal solutions. The aim is to find a more optimal codebook with the improved version of SCA in the VQ compression algorithm; the advantage of the proposed method is finding optimal codebooks and increasing the quality of compressed images. The proposed method was implemented in MATLAB, and experiments showed that it improves the PSNR index of the VQ algorithm by 13.73%. Evaluations show that the PSNR index of images compressed by the proposed method is higher than that of PBM, CS-LBG, FA-LBG, BA-LBG, HBMO-LBG, QPSO-LBG, and PSO-LBG. The results also show that the proposed method (ISCA-LBG) has lower time complexity than the HHO and WOA compression algorithms.


Introduction
Images are one form in which information is stored. If data is stored as images, our understanding of the data increases; numerical descriptions of data provide less visual insight than images. Data is sent over networks and satellites and stored using image formats (Nan et al. 2022). Today, images are used in various applications, such as social networking websites and the Internet (Wang et al. 2022). Images typically require relatively large amounts of memory for storage, and sending images over computer networks requires sufficient bandwidth. Sending an image from a source to a destination typically uses network lines to transmit packets. One challenge in storing images and sending them over the network is image size. Compression is used to reduce image size and save storage costs. Image compression reduces the size of images for storage and transmission on computer networks; in other words, by representing an image with the fewest possible bits, it plays a role in image transmission, image storage, and machine vision programs. In image compression, the compression coefficients are quantized, and the quantized coefficients are then encoded (Jamil and Piran 2022). A decoding process is implemented to decompress the image; compression and decompression are performed at the source and destination, respectively. Various methods and standards, such as PNG, JPEG, and JPEG 2000, are used to compress images. These compression methods use hand-designed transform and quantization steps and typically damage the edges and textures of the image (Lu et al. 2019).
Images seen by the human eye vary in intensity; they have continuous values of light intensity. Very large memory is required to store images, so images need to be compressed. Advanced cameras on the market today compress images: they have a digitizer that samples and compresses the image. One popular method for image compression is vector quantization (VQ). Vector quantization is used in many areas, one of which is the compression of data (text, audio, images). In this process, the encoder forms an address of the codeword that is most similar to the input image block; this address is then used by the decoder to retrieve the image (Mali et al. 2022).
Codebooks are a vital factor in the image compression process. Vector quantization (VQ) is a block coding technique used to compress an image; its simple encoding and decoding processes enable high-level compression (Bilal et al. 2021). Codebook generation is a vital factor in VQ compression that directly influences the computational cost and the reconstructed image's quality. The Linde-Buzo-Gray (LBG) algorithm is a well-established VQ compression technique that uses k-means clustering to design codebooks (Othman et al. 2021). A challenge for optimal compression is the search for the optimal codebook: by finding it, both the compression level and the image quality increase. Image compression is therefore an optimization problem, and some studies have used meta-heuristic algorithms to enhance compression quality (Alapatt et al. 2021). In these methods, the goal is to find the best codebook to optimize the compression of the LBG algorithm. Image compression is essential because it reduces storage space and speeds up image transmission over the network. This paper aims to present an image compression method based on the LBG algorithm (Satish Kumar et al. 2021). Optimization methods such as WOA (Rahebi 2022), the bat algorithm (Guo et al. 2021), and the firefly algorithm (Rani et al. 2021) have been used to compress images.
In these studies, modeling the behavior of living things was used to optimize image compression. In this paper, a method based on the mathematical behavior of sine and cosine functions is used instead. The challenge of compression methods is that choosing the optimal codebook is an NP-hard problem and requires an algorithm that performs both global and local searches. In this paper, an improved version of the sine cosine algorithm (Mirjalili 2016) is used to improve the compression of the LBG algorithm. The advantage of the SCA algorithm is that it searches the problem space intelligently using mathematical relations. Unlike most studies that use the behavior of living things to solve optimization problems, this paper uses mathematical modeling to compress images.
There are various motivations for presenting a new image compression method. One motivation is to reduce image size and the memory required to store images. Reducing image size is one of the main goals, but maintaining the quality of compressed images is another goal of any efficient compression algorithm; the authors aim to provide a compression algorithm that reduces the size of images while degrading their quality as little as possible. Many studies use the LBG algorithm, based on VQ, for image compression, and one of the authors' main motivations is to compute the optimal codebook. So far, meta-heuristic methods such as GA and PSO have been used to find the optimal codebook, but these methods are prone to getting caught in local optima, and a locally optimal codebook reduces the compression quality. It is therefore necessary to present an intelligent meta-heuristic algorithm to find the optimal codebook. The SCA algorithm is a mathematical meta-heuristic; despite its balance between local and global search, it explores the space around optimal solutions with only simple sine and cosine equations. The authors' motivation is to improve the search in the SCA algorithm with more advanced search strategies, such as snail (spiral) movements, and to use the improved version of SCA to increase the compression rate and the quality of compressed images.
The innovation presented in this paper is the use of mathematical equations to model and solve optimization problems such as image compression. The contributions of this paper are as follows:

• An improved version of LBG compression.
• Image compression using mathematical methods.
• Improved sine and cosine algorithm accuracy for better image compression.
• Evaluation of the proposed method with practical indicators in the field of image compression.
This paper is organized in several sections. In Sect. 2, the background of image compression is introduced. Section 3 presents the proposed method for compressing images and improving the sine and cosine algorithm. In Sect. 4, the proposed method is implemented and analyzed. Finally, Sect. 5 presents the conclusions of the study and future works.

Related work
Image compression saves storage space and enables efficient, fast transfer of compressed images over the network. The compression of images plays a crucial role in digital image processing and is used in medical sciences, television transmission, military applications, satellite communications, and mobile applications. Mathematical optimization and coding algorithms are widely used for the compression of images and video (Jamil and Piran 2022). Traditional methods such as JPEG and JPEG 2000 have been proposed for image compression; they mainly consist of transform, quantization, and entropy encoding stages. In methods such as JPEG and JPEG 2000, DCT or wavelet transforms are applied to an input image and the coefficients are scalar-quantized to obtain compressed representations (Stoykova et al. 2022).
Entropy encoding schemes such as Huffman coding and adaptive arithmetic encoders are used to encode the quantized coefficients in JPEG and JPEG 2000. Because the DCT and wavelet transforms are designed for piecewise stationary signals, they cannot preserve the quality of the edges and textures of the image (Dimililer 2022). Directional wavelet transforms (Brahimi et al. 2021) were developed to fix standard wavelet transform problems and significantly improve JPEG 2000 compression performance. Coding within the high-efficiency video coding (HEVC) standard (Hajihashemi et al. 2021) is also a method of image compression. In these methods, the transform, quantization, and entropy coding are designed manually rather than optimally, which poses a challenge for image compression. Several techniques, such as predictive coding (Sharma et al. 2021), transform-based coding (Shi et al. 2021), and VQ, have been proposed for image compression.
VQ-based image compression techniques are popular for high-level, low-distortion compression compared with other methods. The advantage of the VQ method is its fast compression mechanism; it is suitable for applications that decompress images many times, such as multimedia and websites. Optimal codebook generation is one of the vital steps of VQ compression and can be optimized using various techniques, such as the genetic algorithm (Thilagam and Arunesh 2021), adaptive vector quantization (Darwish and Almajtomi 2021), and various others. The VQ compression method starts by splitting the image into non-overlapping blocks called input vectors. The number of input vectors depends on the size of the input image and the block size and can be calculated by dividing the image size by the block size. For compression, the input vectors are quantized to lessen the number of vectors needed to display the image; the selected vectors (codewords), along with their indexes, are transferred as the compressed image. Several algorithms have been proposed to find the optimal codewords for generating codebooks, the most famous of which is the LBG algorithm. The LBG algorithm iteratively generates an N-sized codebook: the input vectors are divided into N clusters, and the codebook is corrected in each iteration as long as the distortion stays within acceptable limits. However, the LBG algorithm is prone to being caught in local optima (Dong et al. 2018), and various optimization techniques have been proposed to help LBG select its optimal codebook.
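The block-splitting arithmetic described above (number of input vectors = image size divided by block size) can be sketched in Python; the function name and the 4x4 block size are illustrative choices, not taken from the paper:

```python
import numpy as np

def split_into_blocks(image, block_size):
    """Split a square grayscale image into non-overlapping
    block_size x block_size blocks, one flattened input vector per block."""
    h, w = image.shape
    blocks = []
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            blocks.append(image[r:r + block_size, c:c + block_size].ravel())
    return np.array(blocks)

# A 512 x 512 image with 4 x 4 blocks yields (512*512)/(4*4) = 16384 vectors.
image = np.zeros((512, 512), dtype=np.uint8)
vectors = split_into_blocks(image, 4)
print(vectors.shape)  # (16384, 16)
```

Each row of the result is one input vector; the set of all rows is what the codebook is trained on.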
In Zhu et al. (2022), a unified multivariate Gaussian mixture method for image compression is proposed: a multivariate Gaussian mixture whose means and covariances are estimated for compression. Their method uses a new probabilistic vector quantization to approximate the means effectively. Experiments show that their model reduced image distortion and increased compression speed by a factor of 3.18.
In Kumari et al. (2021), an image compression algorithm with k-means clustering and the flower pollination algorithm (FPA) is proposed. Magnetic resonance imaging of the human body creates large images that must be compressed, but medicine requires high image quality to diagnose diseases effectively. That research presents vector quantization based on the FPA algorithm for better image compression.
In Xu et al. (2021), an image compression algorithm based on vector quantization and linear regression prediction is presented. Their design compresses the image relying on the prediction outcome of linear regression, thus raising the compression ratio remarkably.
In Geetha et al. (2021), an image compression technique based on the lion optimization algorithm is introduced for biomedical applications, using a bio-inspired algorithm to optimize image compression and build a codebook. That research proposes an approach to constructing a VQ codebook, named the L2-LBG method, using the lion optimization algorithm and the Lempel-Ziv-Markov chain algorithm (LZMA). In this method, the lion optimization algorithm is used when building the codebook, and LZMA is used to compress the index table and boost compression performance. The L2-LBG method achieves higher compression than the CS-LBG, FA-LBG, and JPEG 2000 methods.
In Sabbavarapu et al. (2021), a medical image compression method based on the discrete wavelet transform and recurrent neural networks is offered for MRI and CT images. That research uses compression techniques based on the discrete wavelet transform and recurrent neural networks on brain images to achieve a better compression rate. In this method, region growing and Otsu thresholding are used to separate the ROI and non-ROI parts of the image. The gravitational search algorithm and particle swarm optimization increase the accuracy of the RNN prediction. The method's performance is measured using the signal-to-noise ratio, the mean square error, the compression ratio, and the percentage of space saving. The proposed compression method preserves the quality of the compressed images better than quasi-fractal, oscillation-based, and Burrows-Wheeler transform methods.
In Minu and Canessane (2021), a vector quantization compression method based on the squirrel search algorithm is introduced. UAVs mostly fly at low altitudes to capture images at high resolution; because short flights and high-resolution cameras produce large volumes of images, image compression is essential. That study designed a new squirrel search algorithm with an LBG-based image compression technique, called SSA-LBG, for drone images, where SSA is used to build the codebooks for VQ. SSA-LBG leads to effective compression with low computation time and a high PSNR ratio.
In Khan (2021), an improved version of vector quantization using a genetic algorithm is presented for image compression. That research implements vector quantization using GA to produce the optimal codebook, and a neural network is used to reconstruct the image quality.
In Chavan et al. (2020), a model of image compression with adaptive vector quantization and a modified rider optimization algorithm is presented. Vector quantization with the Linde-Buzo-Gray algorithm is used to compress the images, and the rider optimization algorithm optimizes the codebook to improve the compression effect. Codebooks are optimized so that the total compression ratio and the error between the reconstructed and original images are minimized as the objective function.
In Althobaiti (2023), a vector quantization method based on the crow search algorithm is presented for image compression in the Internet of Things and 6G networks. The technique combines the Linde-Buzo-Gray (LBG) vector quantization (VQ) technique with the crow search algorithm (CSA) for optimal codebook selection. The method achieves a higher compression rate than previous meta-heuristic and LBG methods.
In Tamboli et al. (2023), a medical image compression method based on honey bee optimization is presented. In their method, the modified marriage in honey bees optimization (MMBO) model and the ACM weighting factor are used in compression. The PSNR of their model is higher than that of the MFO, LA, MBO, and JCF-LA compression methods.
In El-Nouby et al. (2022), an image compression model based on masked quantized image modeling is presented, in which the vanilla vector quantizer is replaced with a product quantizer.
In Zerva et al. (2023), a medical image compression method based on the wavelet method is presented. Their method extends the standard wavelet difference reduction (WDR) method using the average pixel difference in color images, and 31 colorectal cancer slides are used for evaluation. The results show that their method improves the PSNR index by about 22.65 dB compared to JPEG 2000 and improves compression compared to the discrete wavelet transform (DWT). The method can be used to compress and transmit microscopic medical images in real time.
Recently presented meta-heuristic algorithms have exciting and intelligent behaviors for finding optimal solutions. Among them are the intelligent clonal optimizer (ICO) (Sahargahi et al. 2022), the sheep flock optimization algorithm (SFOA) (Kivi and Majidnezhad 2022), and heat transfer relation-based optimization algorithms (HTOA) (Asef et al. 2021). These algorithms try to solve optimization problems based on swarm intelligence behaviors; their advantage is the parallel navigation of the problem space to find the optimal solution. These methods are highly accurate and can be used in optimization problems. Unlike these algorithms, some meta-heuristic algorithms have been presented based on mathematical modeling. Table 1 summarizes the reviewed image compression methods along with their advantages and disadvantages.
Evaluations show that most studies have used machine learning and deep learning in compression to improve image quality; applying deep learning methods requires a lot of training time. Some studies have used meta-heuristic and evolutionary algorithms, such as genetic algorithms, but the genetic algorithm is prone to getting caught in local optima, which prevents the compression from being optimized. Some studies have combined meta-heuristic and machine-learning methods, which are time-consuming and complex. Others have used swarm intelligence to optimize compression, but these methods have many parameters, and if the parameters are not selected optimally, their accuracy suffers. Some meta-heuristic algorithms, such as HHO, are highly complex and increase compression time. The sine cosine algorithm is a low-complexity algorithm and one of the few optimization algorithms with a mathematical approach. The SCA algorithm has the advantage of performing a balanced local and global search and can therefore increase the compression quality of the images.

The proposed method
For the optimal compression of images by the VQ method, finding the optimal codebook plays an influential role: an optimized codebook makes the VQ method increase both the compression level and the quality of compressed images. The original problem is an optimization problem in which each solution is a codebook. The objective function can be PSNR or MSE; the goal is to find the codebook that minimizes MSE or maximizes PSNR. This section describes the proposed method for compressing images using the improved version of the SCA algorithm, named ISCA in this paper.

Vector quantization
In the vector quantization (VQ) method, two parts, encoding and decoding, are used to compress the image. In this method, an image is separated into blocks and then encoded; the decoding process is then used to create the reconstructed image. Figure 1 shows image compression in the VQ method (Chavan et al. 2020).
In the vector quantization algorithm, the image is partitioned into blocks. Each block is represented by a codebook entry, so each codebook has an index item and a codeword. The codebook is available at the destination to reconstruct the image during decoding. A codebook consists of a set of codewords, each the size of a non-overlapping block. After the codebook has been generated, each vector is indexed with an index number from the index table, and these index numbers are sent to the recipient. The received index numbers are decoded using the receiver's codebook table, which is identical to the sender's codebook. Using optimal codebooks to achieve a higher level of image compression is one of the difficulties of the vector quantization algorithm: finding the optimal codebook is an NP-hard optimization problem, and one practical way to solve it is to use meta-heuristic algorithms.
The VQ algorithm is a block coding technique in which the generation of a codebook is an essential part. VQ encoding and decoding are shown in Fig. 1. This algorithm divides an image of dimensions N × N into smaller blocks of size n × n; the number of blocks is N_b. These blocks are training vectors, described as X_i (i = 1, 2, 3, ..., N_b). The selected vectors are the codebook's codewords, represented as C_i (i = 1, 2, 3, ..., N_c), where N_c is the number of codewords in the codebook. Image vectors are approximated using the codeword index, which is estimated by finding the minimum Euclidean distance between the image vectors and the codewords. The codebook is passed to the receiver, and the image is reconstructed from the corresponding codewords and indexes. The distortion between the codebook and the training vectors is calculated as Eq. (1):

D(C) = (1/N_b) Σ_{j=1}^{N_c} Σ_{i=1}^{N_b} u_ij · ||X_i − C_j||²   (1)

In this function, the constraints of Eqs. (2) and (3) are considered:

Σ_{j=1}^{N_c} u_ij = 1, for all i = 1, 2, ..., N_b   (2)

u_ij ∈ {0, 1}, where u_ij = 1 if X_i belongs to cluster j and 0 otherwise   (3)

The partition R_j, j = 1, 2, 3, ..., N_c, must meet the criterion of Eq. (4):

R_j = {X : ||X − C_j||² ≤ ||X − C_k||², for all k ≠ j}   (4)

The codewords C_j should be defined as the centroid of R_j, as in Eq. (5):

C_j = (Σ_{i=1}^{N_b} u_ij · X_i) / (Σ_{i=1}^{N_b} u_ij)   (5)

Fig. 1 Vector quantization method for image compression (Chavan et al. 2020)
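The minimum-Euclidean-distance encoding and index-based decoding described above can be sketched as follows. This is a minimal Python illustration; the function names and the toy two-codeword codebook are assumptions, not the paper's implementation:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Return, for each input vector, the index of the codeword with the
    minimum Euclidean distance (the index table that gets transmitted)."""
    # Pairwise squared distances between all vectors and all codewords.
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct the vectors at the receiver from the index table."""
    return codebook[indices]

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])   # toy 2-codeword codebook
vectors = np.array([[1.0, 1.0], [9.0, 9.0], [0.5, 0.0]])
idx = vq_encode(vectors, codebook)
print(idx.tolist())  # [0, 1, 0]
```

Only the index table (plus the shared codebook) needs to cross the channel, which is where the compression comes from.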

LBG algorithm
The most common version of the VQ algorithm is called the LBG algorithm.One of the advantages of the LBG algorithm is the reduction of image distortion due to compression.The generalized Lloyd algorithm (GLA), or the LBG algorithm, is a pioneering algorithm for VQ execution.The flowchart of the LBG algorithm is shown in Fig. 2.
LBG is a k-means-based clustering algorithm. The optimal solution is found using the closest-matching function, which attempts to ensure the distortion does not increase from one iteration to the next. However, this approach is prone to being trapped in local optima because of the random initialization of the initial codebook.
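A minimal sketch of the LBG/GLA iteration (random initial codebook, nearest-codeword partitioning, centroid update) might look like this in Python; the initialization and stopping criteria are simplified assumptions, not the paper's exact procedure:

```python
import numpy as np

def lbg(vectors, n_codewords, iters=20, seed=0):
    """Minimal LBG/GLA sketch: pick a random initial codebook, then
    alternate nearest-codeword partitioning and centroid updates."""
    rng = np.random.default_rng(seed)
    init = rng.choice(len(vectors), n_codewords, replace=False)
    codebook = vectors[init].astype(float)
    for _ in range(iters):
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(n_codewords):
            members = vectors[labels == j]
            if len(members):              # empty cells keep their old codeword
                codebook[j] = members.mean(axis=0)
    return codebook

# Two well-separated clusters: LBG should place one codeword at each center.
data = np.concatenate([np.zeros((50, 2)), np.full((50, 2), 10.0)])
cb = lbg(data, 2)
print(np.sort(cb[:, 0]))  # approximately [0. 10.]
```

The random initialization in the first line of the loop is exactly the weakness the meta-heuristic variants target: a bad draw can leave the centroid updates stuck in a poor local optimum.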

Sine and cosine algorithm
The sine and cosine algorithm is modeled with sine and cosine trigonometric relations to search the problem space for the optimal solution. Figure 3 shows that the search switches between local and global modes based on a random coefficient. If the coefficient's magnitude is less than one, the region inside the circle around the optimal solution is searched; if it is greater than one, the area outside the optimal solution is searched as a global search. The algorithm alternates between these local and global behaviors to cover the problem space well. Accordingly, when the coefficient r1 becomes greater or less than one, a global or local search around the current optimal solution is performed, respectively (Mirjalili et al. 2020). If the range of this coefficient is [1, 2] or [−2, −1], then, as in Fig. 3, the area outside the optimum is searched as a global search; if its value is between −1 and +1, the neighborhood of the optimal solution is searched as a local search (Mirjalili et al. 2020; Sun et al. 2022).

Fig. 2 Image compression with LBG algorithm
In the SCA algorithm, the value of the r1 parameter is gradually reduced over the iterations to shift the search from global to local, so that in the final iterations the neighborhood of the optimal solution is searched more. Two equations, (6) and (7), based on the sine and cosine trigonometric functions, are used to update the solutions (Mirjalili 2016):

X_i^(t+1) = X_i^t + r1 · sin(r2) · |r3 · P* − X_i^t|   (6)

X_i^(t+1) = X_i^t + r1 · cos(r2) · |r3 · P* − X_i^t|   (7)

In these equations, X_i^t and X_i^(t+1) are the position of solution i in the current iteration t and in the next iteration t + 1, respectively. The parameters r1, r2, and r3 are the search parameters of the SCA algorithm, and P* is the best solution found up to the current iteration. To find the optimal solution, each of the above relations is executed with a probability of 50% on a solution X_i^t to calculate its new position X_i^(t+1), as in Eq. (8) (Mirjalili 2016):

X_i^(t+1) = X_i^t + r1 · sin(r2) · |r3 · P* − X_i^t|, if rand() < 0.5
X_i^(t+1) = X_i^t + r1 · cos(r2) · |r3 · P* − X_i^t|, otherwise   (8)

In this equation, rand() is a random number in the range [0, 1]. If it is less than 0.5, a sine-based search is performed; otherwise, a cosine-based search is performed.
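The update of Eqs. (6)-(8) can be sketched as a single SCA iteration step. The linear decay of r1 and the ranges of r2 and r3 follow the standard SCA description; all names here are illustrative:

```python
import numpy as np

def sca_step(pop, best, t, max_iter, a=2.0, rng=None):
    """One SCA update (Eqs. (6)-(8)): r1 decays linearly from a to 0,
    r2 is drawn from [0, 2*pi], r3 from [0, 2], and a coin flip (rand())
    selects the sine or cosine form for each solution."""
    rng = rng or np.random.default_rng()
    r1 = a - t * a / max_iter              # global search early, local late
    new_pop = np.empty_like(pop)
    for i in range(len(pop)):
        r2 = rng.uniform(0.0, 2.0 * np.pi, pop.shape[1])
        r3 = rng.uniform(0.0, 2.0, pop.shape[1])
        step = np.abs(r3 * best - pop[i])
        trig = np.sin(r2) if rng.random() < 0.5 else np.cos(r2)
        new_pop[i] = pop[i] + r1 * trig * step
    return new_pop

rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, (10, 3))      # 10 candidate solutions in 3-D
best = np.zeros(3)                         # assume the optimum is at the origin
for t in range(50):
    pop = sca_step(pop, best, t, 50, rng=rng)
print(pop.shape)  # (10, 3)
```

In the codebook setting, each row of `pop` would be a flattened codebook and `best` the best codebook found so far.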

Improvement of sine and cosine algorithms (ISCA)
Although the SCA algorithm is a robust algorithm based on mathematical modeling, it has the following challenges:

• The search between the current and optimal solutions is based only on simple sine and cosine behavior; most of the search happens between the current solution and the optimal solution, reducing the algorithm's search ability.
• The sine and cosine functions are used in a simple form; a more complex use of trigonometric functions leads to a more efficient search of the problem space.
• The SCA algorithm does not use spiral movements, which search the space around the optimal solution more thoroughly.

A more advanced version of the algorithm (ISCA) is provided to address these challenges. In this version, according to Eq. (9), a search model with the spiral behavior of sine and cosine functions is added to the algorithm; in this case, as in Fig. 4, the search spirals through the space around the optimal solution. In Eq. (9), Sgn is the sign function, a is a parameter, and θ is the angle between the solution and the horizontal coordinate axis, varying from 0 to 360 degrees. Eq. (10) is proposed to simulate the angle and increase it alternately; in it, Iter is the current iteration number of the algorithm, MaxIter is the maximum number of iterations, and rand() is a random number between zero and one.

Fig. 3 Search based on the sine and cosine function amplitude (Sun et al. 2022)
To improve the SCA algorithm, a rotational search, Eq. (11), can also be suggested. Combining Eqs. (8) and (11) produces Eq. (12), which merges the proposed search modeling with the simple sine and cosine modeling.
In this equation, the parameter a can be proposed as in Eq. (13), where X^t_Rand is a random solution in the problem space. The purpose of this more sophisticated modeling in the proposed method is to search the problem space more efficiently to find the optimal codebooks.
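Since the paper's exact Eqs. (9)-(13) are not reproduced here, the following sketch only illustrates a spiral search of the general kind described, using the logarithmic-spiral move popularized by WOA as a stand-in; it is not the authors' formulation, and every name in it is an assumption:

```python
import numpy as np

def spiral_step(x, best, a=1.0, rng=None):
    """Hypothetical spiral move around the best solution, modeled on the
    logarithmic spiral used in WOA: D * exp(a*l) * cos(2*pi*l) + best.
    Illustrative stand-in only, not the paper's Eqs. (9)-(13)."""
    rng = rng or np.random.default_rng()
    l = rng.uniform(-1.0, 1.0)             # random point along the spiral
    D = np.abs(best - x)                   # distance to the best solution
    return D * np.exp(a * l) * np.cos(2.0 * np.pi * l) + best

rng = np.random.default_rng(0)
x = np.array([4.0, -3.0])
best = np.zeros(2)
y = spiral_step(x, best, rng=rng)
print(y.shape)  # (2,)
```

The key property such a move shares with the ISCA idea is that candidate positions wind around the current optimum at varying radii, instead of only stepping along the line between the current and best solutions.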

Optimal compression with the proposed algorithm
In the proposed method for image compression, a codebook is encoded as a member of the ISCA population. The blocks of the image are defined by a set X = {x_i, i = 1, 2, ..., N_b}. In the proposed method, a codebook has N_c codewords; if a codebook is defined as C, then C = {C_1, C_2, C_3, ..., C_{N_c}}. The parameter n is the population size of the proposed algorithm. Each codebook is evaluated using the evaluation function in Eq. (14), and any codebook (any member of the ISCA population) that minimizes the objective function is considered the optimal codebook. For image compression, the optimal codebook is found with the ISCA algorithm according to the pseudocode of Fig. 6: several random codebooks are created as members of the initial population, each codebook is evaluated by the objective function, the codebooks are updated using the sine and cosine relations, and the optimal codebook is determined by iterating the proposed algorithm. Figure 7 shows the flowchart of the proposed method for image compression. In the following sections, the proposed method is implemented, and the implementation results are compared with similar methods.

Implementation parameters
In the implementation, the maximum number of iterations of the proposed algorithm is set to 20 and the population size to 30. Each experiment was performed 50 times, and the average of the indicators was used for evaluation. The light intensity of the images is between 0 and 255, and the size of the images is 512 by 512 pixels. In the implementations, the codebook sizes are set to 8, 16, 32, 64, 128, 256, 512, and 1024.

Evaluation criteria
To evaluate the proposed method, the bit rate per pixel (bpp), mean square error (MSE), and peak signal-to-noise ratio (PSNR) are used. These indicators are formulated in Eqs. (14), (15), and (16). In these equations, k is the size of the blocks and N_c is the number of codewords; M and N are the dimensions of the images, and i and j are the row and column numbers of each pixel. f is the original (uncompressed) image and f̂ is the reconstructed (compressed) image.
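Under the usual definitions (bpp = log2(N_c)/k bits per pixel for a k-pixel block, and PSNR computed from the MSE with an 8-bit peak of 255), these criteria can be sketched as follows; the function names and the toy images are illustrative:

```python
import numpy as np

def bpp(n_codewords, block_pixels):
    """Bits per pixel: log2(Nc) bits per transmitted index, spread over
    the k pixels of one block."""
    return np.log2(n_codewords) / block_pixels

def mse(original, reconstructed):
    return np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB for 8-bit images (peak = 255)."""
    return 10.0 * np.log10(255.0 ** 2 / mse(original, reconstructed))

f = np.full((512, 512), 100, dtype=np.uint8)
g = f.copy()
g[0, 0] = 110                              # one pixel off by 10 intensity levels
print(bpp(256, 16))                        # 8 bits over 16 pixels = 0.5 bpp
print(round(psnr(f, g), 2))                # roughly 82 dB
```

For example, a 256-codeword codebook on 4x4 blocks transmits 8 bits of index per 16 pixels, i.e. 0.5 bpp, a 16:1 reduction from the original 8 bpp.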

Comparison and discussion
One way to evaluate the proposed method is to measure the quality of the reconstructed images; the PSNR index is used for this assessment. In the PSNR experiments, five sample images of size 512 by 512 were used to measure the quality of the compressed images, with codebook sizes of 8, 16, 32, 64, 128, 256, 512, and 1024. Tables 2, 3, 4, 5, and 6 show the PSNR index at different bitrates for the five images (Bilal et al. 2021).
Evaluations show that the proposed method achieves a higher PSNR in reconstructing compressed images and maintains image quality better as the bitrate increases. In these experiments, the image quality of the PBM, CS-LBG, FA-LBG, BA-LBG, HBMO-LBG, QPSO-LBG, PSO-LBG, and LBG compression methods was compared with the proposed method. Experiments showed that the proposed method was more successful than the other meta-heuristic methods in maintaining the quality of the reconstructed images.
Figure 9 compares the PSNR index of the compression methods at the average bitrate. According to the comparisons, the PSNR index of the proposed method is 27.24, while the PSNR indexes of the LBG, PBM, FA-LBG, BA-LBG, HBMO-LBG, QPSO-LBG, and PSO-LBG methods are 23.95, 26.19, 26.24, 26.35, 26.36, 26.69, and 26.17, respectively. Among the compared methods, the highest PSNR belongs to the proposed method and the lowest to the LBG method; the best PSNR index after the proposed method belongs to the QPSO-LBG method. The proposed method is an improved version of the LBG method and improves its PSNR index by about 13.73%. In other words, the proposed method was better than the other methods at increasing the quality of the reconstructed images. In Eq. (18), the objective (fitness) function is defined, and any algorithm that minimizes the value of this objective function is a more appropriate method for compressing images. In Fig. 10, the value of the objective function in the proposed method is compared with the other methods used in Lu et al. (2019).
Figure 10 compares the cost-function index of the proposed method with the FA-LBG, BA-LBG, DE-LBG, IPSO-LBG, IDE-LBG, and WOA-LBG methods. The value of the cost function in the proposed image compression method is 841. The value of the cost function in the FA-LBG, BA-LBG, DE-LBG, IPSO-LBG, IDE-LBG, and WOA-LBG methods is 1200, 1060, 860, 950, 855, and 846, respectively. Evaluations show that the proposed method minimizes the cost function more than the other methods.
Another evaluation index for compression algorithms is the structural similarity index measure (SSIM). Equation (18) is used to formulate the SSIM index.
The functions L(X, Y), C(X, Y), and S(X, Y) are the luminance, contrast, and structure components of the image, respectively. X and Y are the original image and the reconstructed image. The exponents a, b, and c are set to one in this study. A higher SSIM(X, Y) indicates better image quality. In the diagram of Fig. 11, the SSIM index of the proposed method is compared with the HHO-LBG, WOA-LBG, BA-LBG, FA-LBG, and PSO-LBG methods at different bitrates.
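For reference, the standard SSIM composition that the text describes, with the luminance, contrast, and structure components weighted by the exponents a, b, and c, is:

```latex
\mathrm{SSIM}(X, Y) \;=\; \left[L(X, Y)\right]^{a} \cdot \left[C(X, Y)\right]^{b} \cdot \left[S(X, Y)\right]^{c}
```

With a = b = c = 1, as in this study, the index reduces to the plain product of the three components.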
Analysis of the SSIM diagrams shows that, as the bitrate increases, the proposed method raises the SSIM index more than the other methods. As the bitrate increases, the SSIM index of all algorithms increases, but its value in the proposed method remains the highest. The average SSIM index for different bitrate values is shown in Fig. 12.
Experiments show that the proposed method has an SSIM index of 80.92%. The SSIM index of compression methods such as HHO-LBG, WOA-LBG, BA-LBG, FA-LBG, and PSO-LBG equals 70.38%, 72.42%, 74.64%, 78.95%, and 79.36%, respectively. Among the compared methods, the proposed method causes the least damage to the test images (LENA, BABOON, PEPPERS, BARB, and GOLDHILL) due to compression. The proposed method has the best performance in image compression, with the HHO and WOA algorithms in second and third place. The PSO algorithm damaged the images the most during compression. The proposed image compression method is based on random, meta-heuristic behavior, and one of the evaluation criteria for meta-heuristic algorithms is Friedman's test. The Friedman test determines which meta-heuristic algorithm ranks best in calculating the optimal solution. Equation (19) calculates the rank of a meta-heuristic algorithm in image compression in the Friedman test.
k is the number of meta-heuristic algorithms in image compression, and n is the number of trials used to rank them. The value of R_j is the average rank of algorithm number j in image compression. The Friedman ranks of the white shark optimizer (WSO) (Braik et al. 2022), aquila optimizer (AO) (Abualigah et al. 2021a), arithmetic optimization algorithm (AOA) (Abualigah et al. 2021b), gorilla troops optimizer (GTO) (Abdollahzadeh et al. 2021), gravitational search algorithm (GSA) (Rashedi et al. 2009), inclined planes system optimization, and SCA are 4.1268, 4.2314, 2.4735, 2.8694, 2.5692, 2.4854, and 3.6842, respectively. The Friedman rank of the proposed method in image compression is 2.2364, lower than the WSO, AO, AOA, GTO, and SCA methods. A lower value in the Friedman test indicates that the proposed algorithm obtained a better rank in the compression tests for reducing the MSE of the compressed images. Experiments show that the proposed method improves the Friedman rank by 39.29% compared to the SCA algorithm. Among the meta-heuristic methods, the AOA method is the closest competitor of the proposed algorithm and holds second rank in compression.
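The average-rank computation behind this comparison can be sketched as below: each of the n trials ranks the k algorithms by their MSE (rank 1 = lowest error), and R_j is algorithm j's mean rank over the trials. This is a generic sketch, not the paper's code; ties are broken arbitrarily here.

```python
import numpy as np

def friedman_ranks(errors):
    """Average Friedman rank R_j per algorithm.

    errors: (n_trials, k_algorithms) array of MSE values; lower is better.
    Returns the mean rank of each algorithm across the n trials.
    """
    errors = np.asarray(errors, dtype=float)
    # Double argsort converts values to within-row ranks (0-based),
    # then +1 makes rank 1 the best (lowest-error) algorithm.
    ranks = errors.argsort(axis=1).argsort(axis=1) + 1
    return ranks.mean(axis=0)
```

The full Friedman test then compares these mean ranks against their expected value under the null hypothesis that all algorithms perform equally.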
Figures 14, 15, and 16 compare the compression time of the proposed method with the LBG, PSO-LBG, QPSO-LBG, and HBM-LBG methods for bit rate = 0.3125, bit rate = 0.375, and bit rate = 0.4375, respectively. In these tests, the codebook sizes are set to 32, 64, and 128, respectively (Horng and Jiang 2011).
The decimal values are rounded because they are insignificant in the comparison. Experiments show that at bit rate = 0.3125 and codebook size = 32, the compression times of the LBG, PSO-LBG, QPSO-LBG, and HBM-LBG methods equal 3, 484, 298, and 323. The compression time of the proposed method is 317, which is lower than that of the PSO-LBG and HBM-LBG methods. The PSO algorithm, and its QPSO version, takes more time because it must compute a velocity vector in addition to the position vector.
For bit rate = 0.375 and codebook size = 64, the compression times of the LBG, PSO-LBG, QPSO-LBG, and HBM-LBG methods equal 6, 1834, 1113, and 1269, and the time of the proposed method in this case is 1182. Thus, at bit rate = 0.375 and codebook size = 64, the proposed method takes less time than PSO-LBG and HBM-LBG in compression.
For bit rate = 0.4375 and codebook size = 128, the compression time of the proposed method is less than that of PSO-LBG and HBM-LBG, but more than that of the QPSO-LBG method. In general, the LBG method has a shorter execution time than the metaheuristic methods combined with LBG, but it has a weak PSNR index in compression.
One application of compression is medical imaging. The proposed method was run on a set of CT and MRI images and compared with the results of several studies in Rani et al. (2021). The diagram of Fig. 17 compares the PSNR index of the proposed method with three compression studies.
Another essential factor for evaluating compression algorithms is the MSE. The diagram of Fig. 18 compares the MSE index of the proposed method with the L2-LBG, CS-LBG, FF-LBG, and JPEG 2000 methods for compressing 8 different medical image datasets (Geetha et al. 2021).

Qualitative comparison
In Table 7, the proposed method is qualitatively compared with a number of related works in image compression based on PSNR, complexity, and speed.
The qualitative comparison with recent studies shows that the proposed method achieves a high PSNR in compression, indicating that the quality of the images it compresses is high. The complexity of the proposed algorithm is moderate because it is based on the simple SCA algorithm. This low complexity gives the proposed method a high compression speed.

Conclusion
Image compression makes images take up less space in the memory of computer systems. Sending compressed images over the network is faster and wastes less network bandwidth. The LBG algorithm is a practical image compression algorithm and a version of the VQ compression method. It generates codebooks for image compression, but it is prone to getting caught in local optima. In most cases, the LBG compression method finds only relatively optimal codebooks because it becomes trapped in local optima. The lack of optimal codebook selection in the LBG method reduces the compression quality of the algorithm, and the reconstructed image is of lower quality. This paper uses the improved sine-cosine algorithm to select the codebook in the LBG algorithm optimally.
In the proposed method, spiral searches with sine and cosine functions are added to the SCA algorithm to increase its search ability in the problem space. The proposed method introduces a combined version of the LBG algorithm and the improved SCA algorithm for image compression. The PSNR index of the proposed method improves by about 13.73% compared to the LBG algorithm. In reconstructing compressed images, the proposed method restores image quality better than the PBM, CS-LBG, FA-LBG, BA-LBG, HBMO-LBG, QPSO-LBG, and PSO-LBG methods. The proposed method minimizes the cost function more than the FA-LBG, BA-LBG, DE-LBG, IPSO-LBG, IDE-LBG, and WOA-LBG methods. The proposed compression method causes only minor damage to the compressed image. The SSIM index of the proposed method is higher than that of methods such as HHO-LBG, WOA-LBG, BA-LBG, FA-LBG, and PSO-LBG, which indicates the better quality of the proposed compression.
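A minimal sketch of the search mechanics described above: the standard SCA position update (Mirjalili's sine-cosine rule), plus a logarithmic-spiral move standing in for the paper's spiral/snail search, whose exact equations are not reproduced here. All names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sca_step(X, best, t, T, a=2.0):
    """One iteration of the basic sine-cosine algorithm (SCA).

    X    : (n_agents, dim) candidate solutions (e.g. flattened codebooks)
    best : (dim,) best solution found so far
    t, T : current iteration and total number of iterations
    """
    r1 = a - t * (a / T)                 # shrinks: global -> local search
    for i in range(len(X)):
        r2 = rng.uniform(0.0, 2.0 * np.pi)
        r3 = rng.uniform(0.0, 2.0)
        r4 = rng.uniform()
        d = np.abs(r3 * best - X[i])     # distance to the destination
        if r4 < 0.5:
            X[i] = X[i] + r1 * np.sin(r2) * d
        else:
            X[i] = X[i] + r1 * np.cos(r2) * d
    return X

def spiral_step(x_i, best, b=1.0):
    """Hypothetical spiral move around the best solution (a stand-in for
    the paper's proposed snail/spiral search; the exact equations differ)."""
    l = rng.uniform(-1.0, 1.0)
    d = np.abs(best - x_i)
    return d * np.exp(b * l) * np.cos(2.0 * np.pi * l) + best
```

In an ISCA-LBG-style hybrid, each agent would encode a codebook, the fitness would be the VQ distortion, and a move like `spiral_step` would tighten the search around promising codebooks in later iterations.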
Unlike the HHO and WOA compression algorithms, the proposed method has lower time and space complexity and, therefore, a shorter execution time than these compression algorithms. The proposed method is more complex than LBG due to its use of a meta-heuristic, and its execution time is longer than that of LBG. A PSNR index higher than existing meta-heuristic methods, reduced image damage due to compression, and a more efficient version of the SCA algorithm are the advantages of the proposed method. Longer compression time than the LBG method and the inability to learn are limitations of the proposed method.
In future work, an accelerated, parallel version of the proposed algorithm on the graphics processing unit (GPU) with the CUDA parallelization framework is proposed to overcome these challenges. Another line of future work is to apply deep learning to improve compression quality. A further direction is to improve meta-heuristic algorithms such as GSA and IPO for image compression.

Figure 5
Figure 5 shows three codebooks as three members of a meta-heuristic algorithm. Vector quantization (VQ) is an image compression technique based on block coding. Generating a codebook is an essential step of the VQ technique. Assume that the image Y = {y_ij} has a size of M × M pixels. Each image is divided into blocks of size n × n. The number of blocks in an image equals N_b = (M × M)/(n × n).
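The block decomposition described above can be sketched as follows (a generic implementation, not the paper's code): an M × M image is cut into non-overlapping n × n blocks, each flattened into a training vector for codebook design.

```python
import numpy as np

def image_to_blocks(img, n):
    """Split an M x M image into non-overlapping n x n blocks,
    each flattened into a training vector for VQ codebook design."""
    M = img.shape[0]
    assert img.shape == (M, M) and M % n == 0
    blocks = (img.reshape(M // n, n, M // n, n)
                 .swapaxes(1, 2)        # group the two block axes together
                 .reshape(-1, n * n))   # one flattened vector per block
    return blocks  # shape: (N_b, n*n) with N_b = (M/n)**2
```

For the 512 × 512 test images with 4 × 4 blocks, for example, this yields N_b = 16,384 training vectors of dimension 16.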

Fig. 4
Fig. 4 Circular and spiral search (Pal and Saraswat 2019)
For the software and hardware, MATLAB version 2021 has been used for the implementation. Windows 10 Professional with an Intel five-core processor and 4 GB of DDR3 RAM is used for the implementations. The images used are grayscale with dimensions of 512 × 512.

Fig. 8
Fig. 8 Standard images for compression in experiments

Fig. 9 Comparison of the PSNR index of the proposed method with other methods in image compression
Fig. 10 Comparison of the objective-function value of the proposed method with other methods
Fig. 11 Comparison of the SSIM index of the proposed method with other methods in image compression

Fig. 18
Fig. 18 Comparison of the MSE index in the proposed method and compression methods

• Provide an improved version of the sine and cosine algorithm by smartening the search in the problem space.
• Provide a discrete version of the improved sine and cosine algorithm for image compression.
• Encode the image compression problem in the form of the sine and cosine algorithm.
• Apply mathematical spiral equations in real applications.
• Model and formulate spiral and snail searches around optimal solutions with new proposed equations.
• Achieve a better balance between local and global search with spiral equations in the proposed algorithm.
• Dynamically change the global search strategy to local search according to the iteration of the optimized algorithm.

Table 1
Table 1 Advantages and disadvantages of image compression methods (Rani et al. 2021)
Fig. 17 Average PSNR index of the proposed method compared with three studies on medical images