Evaluation of soft computing in methodology for calculating information protection from parameters of its distribution in social networks

Security of data has always been a major problem in information technology. Because data are stored in a variety of locations, including all over the world, this problem becomes even more pressing in the context of cloud computing. Concerns about cloud technology stem primarily from users' worries regarding data security and privacy. The heterogeneity of cloud resources and the numerous shared applications they serve can benefit from effective scheduling, which cuts cost and energy use while maintaining the quality of service provided to users. The goal of this study is to improve resource allocation and data protection in cloud soft computing using a secure channel model and a machine learning architecture combined with distributed social networks. Data protection in the proposed cloud architecture is accomplished by developing the channel model using hierarchical lightweight cryptography analysis; Q-Bayes propagation quantum networks are then used to allocate resources. Memory capacity, data protection analysis, throughput, end-to-end delay, and processing time are used in the experimental analysis. The proposed technique attained a memory capacity of 73%, data protection analysis of 69%, throughput of 95%, end-to-end delay of 69%, and processing time of 49%.

The term "cloud computing" refers to a collection of computers that work together to carry out a variety of calculations and tasks. Cloud computing is one of the most significant IT paradigms of recent times. Reduced time and expense to market are two of this technology's primary advantages for businesses [1]. Companies and organizations can use shared computing and storage resources through cloud computing, which is superior to developing and operating one's own infrastructure. Additionally, organizations and businesses can take advantage of cloud computing's cost-effective, secure, and adaptable IT infrastructure. It can be compared to national electric grids, which enable businesses and homes to connect to a centralized, cost-effective, and efficient energy source. Cloud computing users are particularly concerned about data security [2]. To eliminate user concerns, this approach requires appropriate security principles as well as mechanisms. The majority of cloud service users are concerned that their private data may be transmitted to other cloud service providers or used for other purposes. There are four parts of user data that must be protected: i) usage data, i.e., information gathered from computers; ii) sensitive data such as bank account or health information; iii) personally identifiable information, i.e., information that could be used to locate a person; and iv) distinct device identities, i.e., information that could be traced uniquely, such as hardware identifiers or IP addresses [3].
Computational resources such as CPU, memory, disk, and communication bandwidth in server clusters or data centers are made shareable by virtualization technology in the cloud and are distributed by allocating virtual machines (VMs) on demand. Effective use of VMs in data centers is one of the key enablers of large-scale cloud computing (CC). A reliable and effective plan for allocating and managing VMs is essential in order to support this feature [4]. Either an optimal algorithm, such as integer programming or dynamic programming, or an approximate algorithm, such as a PTAS or a heuristic, is used to solve the resource allocation problem [5]. Optimal algorithms can only be used on small-scale data sets because the resource allocation problem is NP-hard; in contrast to the optimal solution, approximate or heuristic algorithms suffer from low allocation accuracy and low computational efficiency [6].

Literature Review:
In order to understand existing CC resource allocation methods, a large survey was carried out. The scope for enhancing cloud resource allocation can be summarized by evaluating the advantages and disadvantages of each method. Work [7] discussed the modified round-robin method, which is one of the best models for allocating resources in cloud computing because it reduces application requirements. The cloud is made up of users and a number of data centers. The resources can be accessed by users, but only certain users have access to them; access is managed by service providers, and restrictions can be changed. The round-robin method meets user needs and shortens wait times. Another compromise model that lessens VM resource allocation issues is the hierarchical framework [8].

Data protection in cloud-based social networks:
Cloud application service security is defined in the existing literature [9] as the protection of SaaS applications and cloud operational services from threats and vulnerabilities. An agent-oriented modeling framework for evaluating security requirements was proposed in work [10]. The author of [11] provides a comprehensive definition and explanation of a number of cloud privacy and security issues; however, the security requirements do not provide a clear framework. Work [12] further divides cloud application security engineering and implementation into two main categories: systems and software development security, and software acquisition security. The author of [13] suggests a cloud storage-specific security model. The only difference between the two is that the proposal from [14] goes into greater detail and describes the theories as well as the users associated with their proof of concept. However, neither of the proposals [15] has any experiments, simulations, or empirical data to support the efficacy or robustness of its fine-grained security model.

Resource allocation using machine learning techniques:
Regarding task assignment for maximum utilization of cloud resources, it was suggested in [16] that the cluster scheduler handle multi-resource packing; to enable the spare cluster to carry out as many jobs as possible, it prefers jobs with shorter durations. An intuitionistic fuzzy-based HRRN scheduler was proposed in [17] on the basis of intuitionistic fuzzy set theory; this method can adjust schedules based on continuous feedback and deal with the imprecision of the response ratio. Using a fault-tolerant mechanism, a dynamic task assignment scheduling method known as EFDTS [18] was developed to maximize resource utilization. Work [19] suggested three bio-inspired algorithms for better task assignment: modified particle swarm optimization (MPSO), modified cat swarm optimization (MCSO), and a hybrid (MPSO+MCSO). Regarding Q-learning-based solutions, [20] suggested a Q-learning-based dynamic consolidation strategy for allocating incoming requests, in which the agent draws on previous knowledge to determine whether a host should be awake or asleep. To reduce latency, work [21] suggested a new joint task assignment and resource allocation strategy known as QL-Joint Task Assignment and Resource Allocation (QL-JTAR).

Proposed Cloud network model:
This section presents the design of the proposed cloud network model, which incorporates distributed social networks and machine learning for improved resource allocation and network security in a cloud soft computing environment. Fog computing is a technology that enables the integration of IoT with the cloud, since the cloud alone is unable to handle the huge volume of data created by IoT devices given their limited storage and processing capabilities. Fog computing has therefore existed for a few years as a means of overcoming cloud computing's issues and difficulties. It proposes establishing a computational layer that functions as an intermediary between the IoT and the cloud. More specifically, this computational layer is located close to end devices to reduce latency. As a result, the use of the fog layer yields a three-layered architecture that requires autonomous computing. The fog layer is made up of self-adaptive and autonomous components, resulting in an energy- and processing-efficient layered architecture. The proposed framework incorporates a control loop into the fog layer, given the presence of computationally efficient devices in that layer. The fog layer likewise has two kinds of components as computational resources: a fog master and fog slaves, where the fog master is computationally more powerful than the fog slaves. The proposed work is illustrated in Fig. 1.

Secure channel model with hierarchical lightweight cryptography analysis:
The structure of our suggested model is shown in Figure 2. It includes users, a scheduling manager, a trust and reputation manager, and resource centres containing a number of resource blocks. The procedure as a whole is as follows: users use the scheduling manager to assign a task in the resource center in order to access a resource block. The scheduling manager ensures that the path to the relevant resource center is provided by the resource block in which it is located. After accessing the resource block, the user displays the values for the attributes TF and RF.

The uplink (UL) signal received from the $K$ users at BS $j$ is given by eq. (1):
$$\mathbf{y}_j = \sqrt{p}\,\sum_{l=1}^{L}\sum_{k=1}^{K} \mathbf{h}_{lk}\, s_{lk} + \mathbf{n}_j \qquad (1)$$
where $\mathbf{n}_j \sim \mathcal{CN}(\mathbf{0}, \sigma^2 \mathbf{I})$ denotes additive receiver noise with zero mean and variance $\sigma^2$. The UL signal $s_{lk} \in \mathbb{C}$ in cell $l$ has power given by eq. (2), where $p > 0$ is the uplink SNR:
$$p_{lk} = \mathbb{E}\{|s_{lk}|^2\} \qquad (2)$$
The dedicated BS in cell $j$ chooses a receive combining vector $\mathbf{v}_{jk} \in \mathbb{C}^{M}$ and detects the symbol as eq. (3):
$$\hat{s}_{jk} = \mathbf{v}_{jk}^{H}\,\mathbf{y}_j \qquad (3)$$
The detected signal then consists of the desired signal together with intra-cell signals and inter-cell interference $\sqrt{p}\,\sum_{l=1,\,l\neq j}^{L}\sum_{k=1}^{K}\mathbf{v}_{jk}^{H}\mathbf{h}_{lk}\,s_{lk}$. BS $l$ transmits the downlink (DL) signal in cell $l$, written as eq. (4):
$$\mathbf{x}_l = \sum_{k=1}^{K} \mathbf{w}_{lk}\,\varsigma_{lk} \qquad (4)$$
where $\mathbf{w}_{lk} \in \mathbb{C}^{M}$ is the assigned transmit precoding vector. The received signal $z_{jk} \in \mathbb{C}$ is then modelled as eq. (5):
$$z_{jk} = \sqrt{\rho}\,\sum_{l=1}^{L}\mathbf{h}_{lk}^{H}\,\mathbf{x}_l + n_{jk} \qquad (5)$$
where $\varsigma_{lk}$ is the symbol vector, $n_{jk}$ is additive receiver noise, and $\sqrt{\rho} > 0$ is the DL SNR. The precoding vectors $\mathbf{w}_{lk}$ are then given by eqs. (6) and (7).
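As an illustrative sketch of the structure of eqs. (1) and (3), the following toy example (our own illustration, not the paper's code; single cell, single user, unit SNR, noiseless channel) builds a received uplink vector and recovers the symbol with maximum-ratio combining, where the combining vector is the channel itself:

```python
import random

def uplink_receive(h, s, noise):
    """Received vector y = h*s + n for one user (eq. (1) with p = 1)."""
    return [hv * s + nv for hv, nv in zip(h, noise)]

def mrc_combine(h, y):
    """Maximum-ratio combining v = h: s_hat = v^H y / ||h||^2 (eq. (3))."""
    num = sum(hv.conjugate() * yv for hv, yv in zip(h, y))
    den = sum(abs(hv) ** 2 for hv in h)
    return num / den

random.seed(0)
M = 4                                            # BS antennas
h = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(M)]
s = complex(1, -1)                               # transmitted symbol
y = uplink_receive(h, s, [0j] * M)               # noiseless case
s_hat = mrc_combine(h, y)                        # recovers s
```

In the noiseless case the combining step cancels the channel exactly, which is why MRC is a natural baseline choice of $\mathbf{v}_{jk}$.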

Data encryption model:
An attack on network security can intercept data on any link or end-to-end connection. Link encryption and end-to-end encryption are the two distinct encryption strategies typically used to ensure the safety of communication. Figure 3 depicts the method for data encryption. Plaintext $X$ is sent to the receiver as ciphertext $C = E_{K_e}(X)$ using encryption method $E$ and encryption key $K_e$; decryption method $D$ and decryption key $K_d$ recover the plaintext $X = D_{K_d}(C)$. The data link layer is the second layer of the OSI architecture; through the data link layer protocol, this layer transmits arbitrary bitstreams over the physical link. In link-encrypted networks, encryption is implemented independently on each communication link, and each link usually uses a different encryption key. This ensures that the information on other links will not be compromised in the event of a single link's failure.
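To make the link-encryption idea concrete, here is a minimal toy sketch (our own illustration; the XOR-stream construction and SHA-256 key derivation are stand-ins for a real cipher, not the paper's scheme) in which each link uses a distinct key and every intermediate node decrypts with its inbound key and re-encrypts with its outbound key:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a per-link key (illustrative only)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher C = E_Ke(X): XOR the data with a key stream.
    XOR is involutive, so the same call also decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def send_over_links(plaintext: bytes, link_keys: list) -> bytes:
    """Link encryption: the sender encrypts for link 1; each intermediate
    node decrypts with its inbound key and re-encrypts with its outbound
    key (plaintext is briefly exposed inside the node); the receiver
    decrypts with the final link key."""
    cipher = xor_cipher(plaintext, link_keys[0])
    for k_in, k_out in zip(link_keys, link_keys[1:]):
        cipher = xor_cipher(xor_cipher(cipher, k_in), k_out)
    return xor_cipher(cipher, link_keys[-1])
```

A failure or compromise of one link's key affects only that hop, which matches the failure-isolation property described above; the plaintext exposure inside each node is exactly what end-to-end encryption avoids.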

Figure-3 cloud data encryption model
The link encryption-based network security authentication data encryption technique works as follows. Assuming $P$ is any value in the plaintext space and $C$ is the ciphertext produced by its data encryption, the specific formula is eq. (8):
$$C = M(P) \qquad (8)$$
The data encryption method is constrained by the following formula, which solves the encryption function $M(P)$ and balances the distribution of $M(P)$, eq. (9).
Here $Z$ is a range parameter of $M(P)$ and $Z/2$ is a function of the data encryption. To achieve key-based data encryption protection of the computer communication network, the encryption function is solved in accordance with the data encryption process constraint; the specific formula is eq. (10). In theory the system cannot be broken if the key is a true random number, but this requirement is difficult to meet in practice, so the key sequence is frequently a pseudorandom sequence. The heterogeneous fusion network data is mathematically modeled and analyzed using the data transmission security method [25-27]. The resource management model based on information security transmission is eq. (11):
$$R^{\min}_k \le R_k \le B \log_2(1 + \gamma_k), \quad \forall k \in V \qquad (11)$$
where $B$ is the channel bandwidth, $\gamma_k$ the SINR of user $k$, and $R^{\min}_k \ge 0$ is the $k$th user's minimal transmission security threshold, set to meet the needs of different users for data transmission security. Because different users have varied requirements for data transmission security, the data security value exceeding the threshold of a variable-security user, $R_k - R^{\min}_k$, should be dispersed while assuring the security of each user's data transmission. As a result, $R_k - R^{\min}_k$ is the objective function corresponding to a variable-security user. For a variable-security user $k \in V$, the security indicator given to the user should keep the data transmission security greater than or equal to the minimum transmission security threshold $R^{\min}_k$; the transmission security given to a fixed-security user $k \in F$ has a fixed value, $R_k = R^{\min}_k$.
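A minimal sketch of the constraint structure around eq. (11) (our own illustration with an assumed even-split policy, not the paper's algorithm): fixed-security users $k \in F$ receive exactly their threshold $R^{\min}_k$, while any remaining security budget is spread over the variable-security users $k \in V$ on top of theirs:

```python
def allocate_security(r_min, fixed, budget):
    """Allocate security values so every user meets its threshold:
    fixed users get R_k = Rmin_k exactly; the leftover budget is split
    evenly among variable users on top of their Rmin_k (assumed policy)."""
    alloc, spent = {}, 0.0
    variable = [k for k in r_min if k not in fixed]
    for k in r_min:                       # everyone starts at the threshold
        alloc[k] = r_min[k]
        spent += r_min[k]
    surplus = max(budget - spent, 0.0)
    for k in variable:                    # surplus goes to variable users only
        alloc[k] += surplus / len(variable)
    return alloc

# Example: user "b" has fixed security; "a" and "c" share the surplus.
alloc = allocate_security({"a": 1.0, "b": 2.0, "c": 1.0},
                          fixed={"b"}, budget=6.0)
```

This keeps every $R_k \ge R^{\min}_k$ while $R_k - R^{\min}_k > 0$ only for variable-security users, mirroring the objective described above.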
The tolerance threshold for cross-layer co-channel interference set by security users to meet their resource management requirements follows the heterogeneous integrated network [28]. In other words, real-time services should have a lower cross-layer co-channel interference threshold than non-real-time services. This completes the heterogeneous integrated network resource management method based on data security transmission, in preparation for the next stage of the resource management algorithm's derivation.
The set of all convex combinations of a family of $m$ vectors $\{u_j\}$ ($j = 1, \ldots, m$; $u_j \in \mathbb{R}^n$) is known as the convex hull, eq. (13):
$$\mathrm{conv}\{u_1, \ldots, u_m\} = \Big\{\sum_{j=1}^{m} \lambda_j u_j \;:\; \lambda_j \ge 0,\; \sum_{j=1}^{m} \lambda_j = 1\Big\} \qquad (13)$$
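The definition in eq. (13) can be checked numerically; this small helper (our own illustration) forms a convex combination while enforcing the coefficient constraints:

```python
def convex_combination(points, lambdas):
    """Return sum_j lambda_j * u_j, enforcing lambda_j >= 0 and sum = 1
    as required by the convex-hull definition (eq. 13)."""
    assert all(l >= 0 for l in lambdas), "coefficients must be non-negative"
    assert abs(sum(lambdas) - 1.0) < 1e-9, "coefficients must sum to one"
    dim = len(points[0])
    return [sum(l * p[i] for l, p in zip(lambdas, points)) for i in range(dim)]

# The midpoint of two vertices lies in their convex hull.
mid = convex_combination([(0.0, 0.0), (2.0, 4.0)], [0.5, 0.5])
```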
The definition of a Markov process comes first. A series of states is Markov if and only if the probability of transitioning to state $S_{t+1}$ depends only on the current state $S_t$ and not on the earlier states $S_1, S_2, \ldots, S_{t-1}$. In other words, eq. (14):
$$\Pr[S_{t+1} \mid S_t] = \Pr[S_{t+1} \mid S_1, \ldots, S_t] \qquad (14)$$
In RL we frequently refer to a time-homogeneous Markov chain, where the transition probability is independent of the time $t$, eq. (15):
$$\Pr[S_{t+1} = s' \mid S_t = s] = \Pr[S_t = s' \mid S_{t-1} = s] \qquad (15)$$
To maximise the expected return, i.e., to select the best course of action, RL must make decisions over time; we therefore define the return and the policy. The return $G_t$ is the total discounted reward from time-step $t$, eq. (16):
$$G_t = R_{t+1} + \gamma R_{t+2} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \qquad (16)$$
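Eq. (16) can be computed with a single backward pass over a finite reward sequence, since $G_t = R_{t+1} + \gamma\,G_{t+1}$; a short sketch (our own illustration):

```python
def discounted_return(rewards, gamma):
    """Return G_t = R_{t+1} + gamma*R_{t+2} + ... (eq. 16), iterating
    backwards over the recursion G_t = R_{t+1} + gamma * G_{t+1}."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For instance, `discounted_return([1, 1, 1], 0.5)` evaluates $1 + 0.5(1 + 0.5 \cdot 1) = 1.75$.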
The long-term value of a state $s$ when adhering to a policy $\pi$ is provided by the state-value function $v_\pi(s)$.
Let the variable $X_i \subseteq \mathbb{R}$ represent each feature. An $n$-dimensional vector $x = (x_1, \ldots, x_n) \in X \subseteq \mathbb{R}^n$ is then used to fully define each consumer.
The fundamental formula for an element's activation value in relation to the elements connected to it with strength $w_{ji}$ is a function of their weighted sum, eq. (17):
$$\mathrm{Net}_j = \sum_i w_{ji}\,\mathrm{out}_i \qquad (17)$$
where $\mathrm{Net}_j$ is the net input to the $j$th unit. The threshold of the unit, also known as the tendency of the unit to activate or inhibit itself, must be added to this, eq. (18). If $\delta_k = (t_k - u_k) \cdot u_k \cdot (1 - u_k)$, then $\Delta w$ is given by eq. (19):
$$\Delta w_{jk} = r\,\delta_k\,\mathrm{out}_j \qquad (19)$$
where $E$ is the error, $p$ the pattern, $t_k$ the target, and $u_k$ the output. Then, by eq. (20), the value $\mathrm{out}_j$ determines the amount to be added to or subtracted from the weight $w_{ji}$, in relation to the activation state of the unit $u_i$ to which $u_j$ is connected through the weight $w_{ji}$, and in relation to the coefficient $r$. This coefficient is the adjustment rate one should use, eq. (21).
Both negative and positive values of $\Delta w_{ji}$ are possible; it represents the "quantum" that will be added to or subtracted from the weight's prior value $w_{ji}$, eq. (22). According to this equation, however, each unit that a weight arrives at must have an actual value comparable to an ideal value towards which it should tend. This assumption is only met by the weights that connect a hidden unit layer to the output unit layer; to determine the updates for the remaining weights, we apply the chain rule, eq. (23).
Calculus makes it simple to show that the derivative of the sigmoid function $g(z)$ is $g(z)\,(1 - g(z))$. The partial derivative of $J$ with respect to the bias is given by eq. (24); for this network, equations (25) and (26) can then be rewritten, and since we know the right-hand sides of equations (27), we can calculate the necessary gradients. These are the fundamental mathematical principles of backpropagation; having understood them, we can employ this strategy to backpropagate a simple linear NN, and the fundamental concept remains the same. Backpropagation simply makes use of the chain rule of derivatives to determine how the network's parameters change the cost function $J$.
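The delta rule of eqs. (17)-(21) can be sketched for a single sigmoid unit (our own illustration, not the paper's network; two inputs, one output, learning rate $r$), using $g'(z) = g(z)(1 - g(z))$:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, t, r):
    """One delta-rule update for a single sigmoid unit (eqs. 17-21):
    net = sum_i w_i x_i + b, u = g(net), delta = (t - u) * u * (1 - u),
    then w_i += r * delta * x_i and b += r * delta."""
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    u = sigmoid(net)
    delta = (t - u) * u * (1.0 - u)
    w = [wi + r * delta * xi for wi, xi in zip(w, x)]
    b = b + r * delta
    return w, b, u

# Repeated updates push the unit's output toward the target.
w, b = [0.0, 0.0], 0.0
x, target = [1.0, 1.0], 1.0
outputs = []
for _ in range(200):
    w, b, u = train_step(w, b, x, target, r=1.0)
    outputs.append(u)
```

For a multi-layer network, the hidden-layer deltas would be obtained via the chain rule of eq. (23) instead of directly from the target.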
Utilizing the resource management method, the resource parameters $a$, $b$, $c$, and $d$ in the heterogeneous integrated network are updated, and the scheduling opportunity of heterogeneous integrated network data is produced by streamlining the evaluation stages. The function's expression is eq. (28), where $a, b, c, d$ denote the data resource parameters' scheduling opportunity and $W(t)$ represents the first-group waiting time of the packet queue of user $i$'s $j$-type service. The two queues assumed to store packet resources produced by the service sources $S_Q$ and $S_P$ are referred to as $Q(t)$ and $W(t)$, respectively. The paper's resource management method, based on data security transmission, is then applied. The maximum allowed packet transmission security requirement is $T_Q > T_P$, where $j$ denotes the service queue.

Experimental analysis:
Simulation tool and its description: Our simulations are carried out on a MATLAB-based simulator on a computer running Windows 10 Professional 64-bit with an Intel Core i7-4770 CPU at 3.4 GHz.
We consider a 30-MU MEC system. Each MU's CPU computational capacity is selected at random from the set {0.4, 0.5, 1.0} GHz to account for the diverse computing capabilities of MUs. The MEC server's CPU has a fixed computational speed of 10 GHz.
Consider the face recognition application in [32] as an example of a sophisticated application, with a data size of 5000 kB for computation offloading and a total requirement of 1000 megacycles of CPU time to complete the task. Each MU is assigned a decision weight for the execution time, $w^t_i$, from the set {0, 0.2, 0.5, 0.8, 1.0}; the energy consumption decision weight is calculated as $w^e_i = 1 - w^t_i$. In addition, 100 megacycles of CPU time are assumed to be needed to encrypt and decrypt transmitted data, and in our simulation the security decision is made at random. Table 1 lists additional communication and computation settings. We run our simulations for fifty runs and average the results.
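The MU setup above can be sketched as follows (our own illustration of the stated parameters; the `make_mu` function and dictionary fields are hypothetical names, not the paper's code):

```python
import random

random.seed(42)
W_T = [0.0, 0.2, 0.5, 0.8, 1.0]     # execution-time decision weights
CAPS_GHZ = [0.4, 0.5, 1.0]          # per-MU CPU capacities from the setup

def make_mu():
    """One simulated mobile user: a random capacity and time/energy
    weights, with w_e = 1 - w_t as stated in the simulation setup."""
    w_t = random.choice(W_T)
    return {"cap_ghz": random.choice(CAPS_GHZ), "w_t": w_t, "w_e": 1.0 - w_t}

mus = [make_mu() for _ in range(30)]   # the 30-MU MEC system
```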

Table 1: Simulation parameters
CPU cycles to accomplish a task, $\beta_i$: 1000 megacycles
Data size for computation tasks: 5000 kB
MEC server capability, $F$: 10 GHz
For instance, if $w^t_i = 1$ and $w^e_i = 0$, the user is only interested in the execution time; if $w^t_i = 0.5$, the MU is equally concerned about execution time and energy use. Figure 6 shows the comparative analysis for the size of the task per user between the proposed and existing techniques. The proposed technique attained a memory capacity of 73%, data protection analysis of 69%, throughput of 95%, end-to-end delay of 69%, and processing time of 49%; the existing EFDTS attained a memory capacity of 68%, data protection analysis of 62%, throughput of 89%, end-to-end delay of 62%, and processing time of 39%; and MPSO+MCSO attained a memory capacity of 72%, data protection analysis of 65%, throughput of 93%, end-to-end delay of 66%, and processing time of 45%.

Comparative analysis:
The system divides the jobs to be scheduled into 25 sets, with 5 to 50 jobs in each set. We assume that all jobs arrive at a mean rate λ (Poisson arrivals); such a model has been demonstrated to be useful for modeling job arrivals in data centers. The average load can be varied from 10% to 190% of the cluster's capacity by adjusting the parameter λ. The cluster resources are CPU and memory, each with capacity 1r, where r = 10.
The resources required for each job are as follows: one resource is chosen at random as the primary resource and the other as the auxiliary resource. The demand for the primary resource is random in the range 0.5r to 1r, while the demand for the auxiliary resource is random in the range 0.1r to 0.2r. In terms of duration, 80 percent of jobs are short jobs of one to three time slots, while the remaining jobs are long jobs of 10 to 15 time slots.
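A synthetic job of this workload can be generated as follows (our own illustration of the stated distributions; `make_job` and the dictionary layout are hypothetical, not the paper's code):

```python
import random

random.seed(1)
R = 10  # the parameter r from the cluster setup

def make_job():
    """One synthetic job per the setup: a random primary/auxiliary resource,
    primary demand in [0.5r, 1r], auxiliary demand in [0.1r, 0.2r], and
    80% short jobs (1-3 slots) vs 20% long jobs (10-15 slots)."""
    primary = random.choice(["cpu", "mem"])
    auxiliary = "mem" if primary == "cpu" else "cpu"
    demand = {primary: random.uniform(0.5 * R, 1.0 * R),
              auxiliary: random.uniform(0.1 * R, 0.2 * R)}
    duration = (random.randint(1, 3) if random.random() < 0.8
                else random.randint(10, 15))
    return {"demand": demand, "duration": duration}

jobs = [make_job() for _ in range(50)]   # one job set of maximum size
```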
The value of the exploration parameter was chosen to facilitate adequate exploration of the environment, and the parametric settings for the Q-learning experimental analysis are otherwise identical to those above.
The data center's convergence time prior to optimal placement is taken as a function of the request count, with requests ranging from 50 to 500. The proposed algorithm's performance is observed to be superior in terms of convergence time, even when requests exceed 300 VMs; it takes less time to converge, which is acceptable given the simulation's operating range. Performance is checked using seven different loads. In most cases, the amount of energy used increases with the number of VMs, but with the proposed method the energy used is lower than in a typical process. Because the service providers may not grant full access to all users, waiting time is also taken into consideration; depending on the type of resource allocation request made by users, waiting time may increase. It has been observed that the proposed method maintained its average response time for all requests despite increasing response times overall. With the greedy policy, the maximum waiting time is 140 seconds, and FIFO takes longer than the other two. The average response time is the time needed to respond to a user's request for a resource, defined as the ratio of total service time to total task waiting time.
We use two insecure and two secure resource centers to verify the performance. According to user feedback, the first and second resource centers are insecure, whereas the third and fourth resource centers are secure, which allows us to observe how our system operates. Our system's performance is verified with varying numbers of users. In this section, a "High" representation of a resource center on the graph indicates that it is secured, while a "Low" representation indicates that it is not. A performance model distribution is created by discretizing the observed benchmark results into time steps; this lets us model variable performance at every time step. A uniform random performance sample is generated by taking the lowest and highest observed values for each time step. When performance variability increases across all VMs instantiated in a region, we designate a specific peak time.

Conclusion:
This study proposes a novel method for cloud soft computing resource management that also improves data security. The proposed model develops a cloud channel model with Q-Bayes propagation quantum network-based resource management and hierarchical lightweight cryptography analysis. To reduce cloud server energy consumption and task execution delay, an optimization problem is formulated that takes into account resource allocation, the computation offloading decision, and data security. An efficient solution to this problem is the creation of an optimal computation offloading algorithm and specific procedures for selecting the best cloud server offloading option. The proposed technique attained a memory capacity of 73%, data protection analysis of 69%, throughput of 95%, end-to-end delay of 69%, and processing time of 49%. In future work, we will use a compression layer to shrink the offloaded computation task's data under low-bandwidth conditions, which will improve the system's overall performance.

Figure-2 proposed cloud network model

Table - 2
Analysis of various network parameters.
Figure-4: Analysis over time steps in terms of memory capacity, data protection analysis, throughput, end-to-end delay, and processing time. Figure 4 shows the comparative analysis over time steps. The proposed technique attained a memory capacity of 65%, data protection analysis of 55%, throughput of 88%, end-to-end delay of 62%, and processing time of 36%; the existing EFDTS attained a memory capacity of 62%, data protection analysis of 52%, throughput of 83%, end-to-end delay of 55%, and processing time of 32%; and MPSO+MCSO attained a memory capacity of 64%, data protection analysis of 53%, throughput of 85%, end-to-end delay of 59%, and processing time of 35%.
Figure-5: Analysis of the number of tasks in terms of memory capacity, data protection analysis, throughput, end-to-end delay, and processing time. Figure 5 shows the analysis for the number of tasks. The proposed technique attained a memory capacity of 69%, data protection analysis of 62%, throughput of 92%, end-to-end delay of 69%, and processing time of 42%; the existing EFDTS attained a memory capacity of 63%, data protection analysis of 55%, throughput of 86%, end-to-end delay of 58%, and processing time of 35%; and MPSO+MCSO attained a memory capacity of 66%, data protection analysis of 59%, throughput of 89%, end-to-end delay of 65%, and processing time of 38%.
Figure-6: Analysis of task size per user in terms of memory capacity, data protection analysis, throughput, end-to-end delay, and processing time.
The workload simulator generates the user request models and simulates user requests in open-loop mode. 1. Poisson requests are generated in the open-loop mode, and the mean arrival rate can be adjusted from 10 to 150 reqs/sec. 2. The maximum response time per request is governed by an SLA of 250 milliseconds; any request that exceeds this amount is considered an SLA violation and incurs a penalty Pc, whose value directly influences the policy's distance from the SLA. 3.
The parametric settings listed below are used to start Q-learning: a higher learning rate backs up more of the estimate's error, and the discount factor discounts the value of future states.
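The tabular Q-learning backup that these settings parameterize can be sketched as follows (our own illustration; the awake/asleep actions follow the host-consolidation decision mentioned in the literature review, and all names are hypothetical):

```python
ACTIONS = ("sleep", "awake")

def q_update(q, state, action, reward, next_state, alpha, gamma):
    """One tabular Q-learning backup:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)),
    where alpha is the learning rate and gamma the discount factor."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q

# One backup from an empty table: Q(idle, sleep) moves alpha of the way
# toward the observed reward (future value is still zero).
q = q_update({}, "idle", "sleep", 1.0, "idle", alpha=0.5, gamma=0.9)
```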