In many other RL systems that use an ANN for dimensionality reduction, the state vectors are fed into the ANN as input. Every cloud storage service has been vulnerable to disruptions at some point in time, so a strategy was required that could adapt to variations in the number of available cloud storage services. Figure 2 illustrates the new strategy for an ANN with reinforcement learning proposed in this paper.
There are three cloud storage services, each with its own storage space, and the purpose of this new design is to allow the RL agent to treat each cloud individually and yield a separate transfer fraction for each. The MPSA's state features are fed into the ANN, which produces the output values; each output corresponds to a separate cloud. These features describe the current situation of the file in each cloud. Each output value is then turned into a concrete allocation action, taking the values of the other outputs into account, as shown in Fig. 2.
$(a_i)_t = \frac{(y_i)_t}{\sum_{\forall p} (y_p)_t}$
- where i is the index of each cloud service provider, (y_i)_t is the output (state) value produced for cloud i, and the sum runs over the output values of all available clouds p.
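The normalization above can be sketched in a few lines. This is an illustrative helper (the function name is an assumption, not from the paper): it divides each ANN output by the sum of all outputs, so the fractions sum to 1 and the cloud with the highest output value receives the largest share of the file.

```python
import numpy as np

def allocation_fractions(outputs):
    """Turn raw ANN output values (y_i)_t into allocation actions (a_i)_t.

    Each action is cloud i's output divided by the sum over all clouds,
    so the resulting fractions sum to 1.
    """
    y = np.asarray(outputs, dtype=float)
    return y / y.sum()

# Example: three private clouds with raw output values 2, 1, 1
# -> the first cloud receives half of the file
fractions = allocation_fractions([2.0, 1.0, 1.0])
```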
This model enables the RL agent to assign a larger share of each file to the cloud with the highest state value. After all actions are performed, the RL server receives a distinct reward from each cloud, and the loss values are computed from these rewards. Designing the reward (objective) function was one of the most challenging issues in this work; Algorithm 2 describes the method. Cost and latency are two unbounded parameters that influence the total score. To normalize latency, the ratio of the current latency to the maximum latency observed so far was computed, and the result was multiplied by -1 so that reducing latency increases the reward. The same approach, however, does not work for the cost value. Cloud storage costs are progressive, meaning they accumulate over the billing period, so evaluating the storage cost after each individual action is difficult. The cost was therefore first estimated independently of the quantity of data consumed, as shown in the procedure below.
Algorithm 1: The AI distribution mechanism

Initialize the ANN weight matrices C_ij and C_jk
for event = 1 to Q do
    for t = 1 to N do
        pc_count ← number of private cloud storage services currently available
        Adjust the output layer so that its number of nodes equals pc_count
        Feed the file volume and file access-pattern features to the ANN
        for i = 1 to pc_count do
            (y_i)_t ← output value of private cloud node i
        for i = 1 to pc_count do
            Produce action (a_i)_t = (y_i)_t / Σ_∀p (y_p)_t
        Perform all actions (a)_t
        Observe rewards (r_i)_t from all private clouds
        Calculate the error for all private clouds
        Update the network weights
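The loop structure of the distribution mechanism can be sketched as follows. This is only a minimal stand-in: the tiny linear "ANN", the simulated rewards, and the update rule are all placeholder assumptions, not the paper's actual network or error calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions: two state features (file volume, access pattern)
# and three private clouds.
n_features, n_clouds = 2, 3
W = rng.normal(size=(n_features, n_clouds)) * 0.1  # stand-in for C_ij
lr = 0.01

def forward(state):
    # One positive output value (y_i)_t per available private cloud
    return np.exp(state @ W)

for episode in range(100):                 # for event = 1..Q
    state = rng.random(n_features)         # file volume + access-pattern features
    y = forward(state)
    actions = y / y.sum()                  # (a_i)_t = (y_i)_t / sum_p (y_p)_t
    rewards = -rng.random(n_clouds)        # simulated per-cloud rewards (r_i)_t
    # Placeholder update: nudge the weights toward higher-reward clouds.
    grad = np.outer(state, rewards * actions)
    W += lr * grad
```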
where O_cost is the total cost of data storage in the cloud and O_utilized is the total quantity of data stored in cloud storage. S denotes the total system cost, and S_utilized is the quantity of data transferred into and out of the cloud storage services. AO_cost represents the total cost of all operations in the cloud, i is the index of cloud provider i, and n is the number of cloud services available. The goal of this calculation is to estimate the cost increase in proportion to the amount of cost consumed.
In most cases, this calculation produces a result less than 1, so computing the cost ratio with the same method used for latency does not work; a different model was therefore used. To promote learning convergence, the following condition was added: whenever a value of the above equation exceeded 1, it was stored as the reference maximum and used to compute the ratio in all subsequent cycles, until a new value exceeding 1 replaced it. This allowed the algorithm to learn to reduce cost fully and quickly. Finally, distinct weights for cost and latency were introduced into the reward function so that the architecture optimizes both at the same time. These weights were derived from the file's importance, which was generated from the file's access patterns. If a file is expected to be particularly active in the near future, the architecture places less emphasis on cost and instead aims to reduce latency; conversely, for idle files the importance of latency is lowered and cost is minimized. Algorithm 2 lays out the whole reward function.
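The running-reference trick for the cost term can be sketched as below. The function name and signature are assumptions; the logic follows the condition described above: a cost above 1 becomes the new reference maximum, and every cost is scored as the negative ratio to that reference, keeping the reward bounded.

```python
def cost_reward(cost, max_cost):
    """Sketch of the cost-normalization condition described in the text.

    Any cost value above 1 replaces the stored reference maximum; the
    reward is the negative ratio of the current cost to that reference,
    so it stays bounded and learning can converge.
    """
    if cost > 1.0:
        max_cost = cost  # new reference until the next value exceeding 1
    return -cost / max_cost, max_cost
```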
Algorithm 2: The reward function, based on the overall cost of distributing each file among several private cloud storage providers and on the latency time

Input: the number of private cloud storage services available (nu.privateCloud)
Input: the parameters of the most important file, and the maximum latency and cost for each private cloud (max.Weight, max.LatencyArray, max.CostArray)
Input: three variables (latencyReward, costReward, cost) used to calculate the total reward

pa ← the file's importance based on the predicted access pattern (nu.Read + nu.Write + lifeTime)
if pa > max.Weight then
    max.Weight ← pa                        ▷ track the most important file so far
pa ← pa / max.Weight                       ▷ normalized importance, used as the weight w below
for i = 1 to nu.privateCloud do
    if fileLatencyTime[i] > max.LatencyArray[i] then
        max.LatencyArray[i] ← fileLatencyTime[i]          ▷ track the slowest file
    latencyReward[i] ← −1 × (fileLatencyTime[i] / max.LatencyArray[i])   ▷ ratio of the present latency to the slowest latency so far, for each private cloud
    if cost[i] > 1 then
        max.CostArray[i] ← cost[i]
    costReward[i] ← −1 × (cost[i] / max.CostArray[i])     ▷ ratio of the present cost to the highest cost so far, for each private cloud
    sumReward[i] ← (1 − w) × costReward[i] + w × latencyReward[i]   ▷ total reward for allocating each file
return sumReward
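The whole reward calculation can be sketched as one function. Variable names are assumptions mapped from the pseudocode (file_latency for fileLatencyTime, max_latency for max.LatencyArray, and so on); the importance weight w shifts emphasis toward latency for active files and toward cost for idle ones.

```python
def total_reward(file_latency, cost, max_latency, max_cost,
                 n_read, n_write, life_time, max_weight):
    """Sketch of the reward function for one file (names are assumptions).

    Combines per-cloud latency and cost penalties, weighted by the
    file's predicted importance w.
    """
    pa = n_read + n_write + life_time          # predicted access pattern
    if pa > max_weight:
        max_weight = pa                        # track the most important file
    w = pa / max_weight                        # normalized importance in (0, 1]

    sum_reward = []
    for i in range(len(file_latency)):
        if file_latency[i] > max_latency[i]:
            max_latency[i] = file_latency[i]   # slowest file so far
        latency_reward = -file_latency[i] / max_latency[i]
        if cost[i] > 1.0:
            max_cost[i] = cost[i]              # new cost reference
        cost_reward = -cost[i] / max_cost[i]
        sum_reward.append((1 - w) * cost_reward + w * latency_reward)
    return sum_reward, max_weight
```

For a maximally important file (w = 1) the reward reduces to the pure latency penalty; for a file far below the maximum weight it is dominated by the cost penalty.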