TTLA: two-way trust between clients and fog servers using Bayesian learning automata

Fog computing is a promising paradigm that offers an efficient architecture for Internet-of-Things applications. Its advantages include proximity, low latency, adaptable resource capacity, and a distributed structure. The considerable amount of generated data and its need for real-time processing cause fog nodes to offload some of their tasks to other nodes, which raises trust issues. In this paper, each client prefers to offload a task to a trusted server, and each server prefers to serve trusted clients. Establishing such trust may take time, especially when energy consumption must also be reduced. This paper proposes a Bayesian learning automaton-based two-way trust management strategy to address this issue. The proposed method outperforms current state-of-the-art methods in terms of energy consumption, network usage, latency, response time, and trust value.


Introduction
In the past decade, the pervasiveness of the internet and the increasing number of internet-connected smart physical devices have given rise to a new concept known as the Internet of Things (IoT). IoT devices such as sensors, objects, actuators, and intelligent nodes are associated with challenges of mobility, resource constraints, scalability, and heterogeneity [1]. IoT devices also generate large volumes of data, contributing to big data. For instance, an electronic patient health tracking system includes a variety of wearable devices and sensors that produce a vast array of data types [2]. Cloud computing is an efficient method for processing and storing this significant amount of data due to its high computing power and storage capacity [3]. However, some IoT applications require very short response times and rapid processing, such as intelligent transportation systems [4], smart grids [5], smart healthcare [6][7][8][9][10], emergency responses, and other delay-sensitive programs. On the other hand, some decisions can be made locally without having to transfer data to the cloud. Moreover, even if some decisions must be made in the cloud, sending all the data to the cloud and classifying it is unnecessary, because not all information is useful for analysis and decision-making. In other words, these challenges, driven by the rapid growth of the IoT and associated with network bandwidth, latency, reliability, and security, cannot be adequately addressed by a centralized cloud model. Therefore, cloud computing is unsuitable for real-time analysis and decision-making [2]. Fog computing (FC) is a distributed paradigm that can overcome these challenges and limitations by providing computing, storage, communication, and network services at the network's edge, between end devices and cloud data centers, to reduce latency and extend bandwidth and reliability [11].
Therefore, it can handle instantaneous data, and it is unnecessary to immediately transfer all data from edge devices to the cloud [12]. Integrating edge equipment and cloud resources also prevents resource challenges. Cloud computing is in fact complemented by FC, which supports mobility, dynamism, geographic distribution, location awareness, heterogeneity, and interoperability [2]; FC helps cloud computing perform its role more efficiently. Considering these features, new IoT applications and services such as traffic safety [13,14], electronic health [6], web content delivery [15], augmented reality [16], virtual reality [17], and big data analysis [18] are well suited to FC. At the same time, fog's characteristics, such as its heterogeneous environment, adaptability, dynamics, and geographical distribution, can place it in a precarious and uncertain position. Fog nodes can be mobile and, consequently, become unavailable, so fog services may be jeopardized at any time by unpredictable conditions. For instance, a moving vehicle used as a service provider in a fog environment may be involved in an accident, or a drone performing edge services may suffer a technical defect. Predicting performance indicators such as power, availability, and dependability is therefore difficult. Because the fog environment is more complex and challenging than the cloud, it is hard to predict and trust fog nodes [19,20], and to resist security and privacy threats given the features and flexibility of the fog model [2,12,21].
The security and privacy solutions available in cloud computing cannot be used directly for FC because the architectures of the two computational platforms are quite different [21,22]. Because of its centralized components, cloud security is relatively simple compared to distributed architectures such as FC. Therefore, security, privacy, and trust issues in FC must be addressed to accelerate the development and adoption of FC in academia and industry. Trust management is one of the challenges associated with the FC environment and is highly related to security and privacy. Lack of redundancy, dynamism, high mobility support, and low node processing power [23] are other features of FC that add to the complexity of trust management [24]. Fog servers therefore pose a risk to fog clients and to other fog servers, and the same holds for fog clients. Although authentication is a very useful cryptographic method for establishing the initial connection between IoT devices and fog nodes, it is not sufficient because, over time, devices can malfunction and be compromised [25]. Moreover, existing cryptographic solutions are incapable of defending against internal attacks, such as an attack by a rogue node that has been authenticated and integrated into the system [1].
Trust plays an essential role in the relationship between fog nodes and end devices. Fog nodes are considered the most important component of the IoT-based network [26], as they ensure the privacy and anonymity of end users [8]. In addition, this component must be trusted in order to take on the responsibility of an agent: it must ensure that the node applies the encryption process to the data it publishes and launches only non-destructive activities. This requires that all nodes that are part of the fog network have a certain level of trust in each other [12].
Trust management provides the mechanism for deciding whether to trust an associated entity. A node's ability to predict other nodes' behavior facilitates this decision-making process and also helps diagnose damaged or misbehaving nodes [27,28]. Trust is essential for interaction in an uncertain environment and ensures data security and user privacy. A device may encounter heterogeneous devices in the network, which must be handled with care due to the unpredictability and instability of their behavior. The purpose of trust management systems in FC is to identify and prevent the use of invalid servers.
In addition to establishing trust, the trust management mechanism should not negatively impact other network parameters, for example by increasing energy consumption. Fog servers should therefore be empowered to reduce their energy consumption, which increases their chances of being chosen to serve fog clients and improves their reliability. Several algorithms and structures have been proposed to reduce energy consumption in FC, each aiming to reduce energy consumption and optimize resource utilization. Energy consumption is investigated in numerous areas, including processing, storage, and transmission. Considering the characteristics of the fog network, the most important of which is the reduction in latency required by the time-sensitive and mission-critical applications running on this network, it is necessary to find a method for the fog network that reduces latency without compromising security [29]. In the current research, while presenting a model of mutual trust management between clients and fog servers, we seek an intelligent and secure routing scheme that can select the most dependable and energy-efficient fog server. Numerous routing algorithms exist. Nevertheless, because the fog network serves real-time and latency-sensitive applications, we must employ a lightweight and efficient routing algorithm that attempts to select secure nodes with lower power consumption. Furthermore, the algorithm must help increase energy efficiency and maximize the fog network's lifespan.

BLA motivation
In the trust management system proposed in this research, the client sends its request to the most trusted server, which can also be the server with the lowest energy consumption. In this system, a Bayesian learning automaton (BLA) is used to learn the states and actions in the network in order to reach the best and most reliable node (with the lowest energy consumption). BLA was selected after studying the following classes of routing algorithms. Heuristic algorithms: like the greedy algorithm, heuristics seek a reasonable and acceptable solution to a complex problem, which is not necessarily the best solution, and offer no guarantee of finding one. Meta-heuristic algorithms: algorithms used to solve optimization problems that, when combined or when some of their steps are modified, can escape local optima and reach the global optimum, such as the genetic algorithm (GA), ant colony optimization, artificial bee colony, simulated annealing, tabu search, and the imperialist competitive algorithm [30]. Machine learning (ML) algorithms: algorithms that can process significant amounts of data and detect patterns or make predictions based on this information. A predictive model, such as deep learning [31], a neural network, or reinforcement learning (RL) [32], becomes more accurate over time as the program adjusts itself.
Due to the nature and characteristics of FC, it has been placed alongside the cloud as a complement to speed up processing and reduce delays in delay-sensitive applications such as intelligent transportation, smart healthcare, and emergency response. The proposed trust management system provides a method that, while increasing the accuracy of trust calculations compared to other trust management systems, uses lightweight calculations to reduce delay, the most crucial factor for delay-sensitive applications. The use of probability in learning algorithms can increase an algorithm's speed, and the calculations of the learning automata algorithm are simple and lightweight.
BLA is a type of RL algorithm well-suited to real-time applications for several reasons. First, BLA is computationally efficient because it uses a simple probability distribution to estimate the rewards of different actions and updates this distribution with each trial. This means that BLA can quickly adapt to changing environments and learn the optimal actions in real time. Second, BLA is robust to noisy and uncertain environments, where it incorporates uncertainty into its probability distribution, allowing it to make informed decisions even when the rewards for different actions are unclear or fluctuate over time. Third, BLA can handle a large number of possible actions. It uses a set of probability distributions to estimate the rewards of various actions, enabling it to deal with a large number of possible actions without becoming overwhelmed by the complexity of the decision-making process. Finally, BLA can learn from feedback in real time. It updates its probability distribution with each trial, allowing it to quickly learn from feedback and adjust its actions accordingly [33]. In addition, BLA can escape local optima by incorporating uncertainty into its estimates of rewards for different actions. This uncertainty allows BLA to explore different actions even if they have not yielded high rewards in the past, potentially leading to the discovery of better, globally optimal solutions.
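The update cycle described above can be sketched as a Beta-Bernoulli Bayesian learning automaton in the spirit of Thompson sampling. This is an illustrative sketch, not the paper's exact formulation: the class name, the Beta(1, 1) priors, and the binary reward model are assumptions.

```python
import random

class BayesianLearningAutomaton:
    """Beta-Bernoulli BLA: one (alpha, beta) posterior per action.

    Each trial samples from every action's Beta posterior and plays
    the arg-max sample; the binary environment feedback then updates
    only the chosen action's posterior.
    """

    def __init__(self, n_actions):
        self.alpha = [1.0] * n_actions  # successes + 1 (uniform prior)
        self.beta = [1.0] * n_actions   # failures + 1

    def select_action(self):
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, action, reward):
        # reward: 1 for a successful (trusted) interaction, else 0
        if reward:
            self.alpha[action] += 1
        else:
            self.beta[action] += 1
```

Because each posterior retains some residual uncertainty, low-reward actions are still sampled occasionally, which is the mechanism behind the escape from local optima mentioned above.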

Complex algorithms demand computing power, battery capacity, and ample memory [34], requirements that conflict with the limitations of IoT devices; thus, a combination of learning automata and Bayesian inference based on probability is used to reach the best answer in the shortest time.
Our key contributions in this paper are as follows. 1. We provide an intelligent two-way trust management system based on subjective logic and the BLA algorithm. This system allows clients to determine whether a server can provide a reliable and secure service; on the other hand, it allows the server to determine the reliability of clients. 2. System evaluation shows that the proposed method, using a lightweight learning algorithm, is smart in selecting service providers, is more comprehensive than the two-way trust management system for fog systems [2] and the fuzzy trust management system [35], and achieves better efficiency in energy consumption, latency, cost, trust, and network usage.
The remainder of this paper is structured as follows. In Sect. 2, a summary of related works is provided. In Sect. 3, the system model and network architecture are presented. In Sect. 4, we describe in detail our intelligent trust management algorithms. In Sect. 5, the evaluation results of the proposed algorithms are compared to other methods and presented. Finally, Sect. 6 discusses the conclusion and makes recommendations for future research.

Related work
Research on trust management in cloud, fog, and IoT systems has been conducted in recent years. We reviewed these works within the context of trust management. Rathee et al. [36] proposed a secure routed forwarding mechanism to prevent attacks by analyzing the trust level and rank of each IoT device and fog node based on their communication behavior. A trust manager is created between the fog and IoT layers, which records all fog nodes in a lookup table and identifies malicious IoT and fog nodes. In addition, the fog nodes compute the services requested by the IoT layer and route the services in the most secure manner possible. This method employs a third party as a trust manager between the fog and IoT layers. Unlike cryptographic techniques, trust-based techniques increase security without adding communication overhead. Al-Khafajiy et al. [37] proposed a trust management approach for FC (COMITMENT) based on trust recommendations according to quality of service (QoS) and quality of previous history (QoP) criteria derived from direct interactions and experiences. Indirect experiences from previous nodes are used to evaluate and manage the trust level of nodes in the FC environment. Nodes with a positive track record raise the overall trust score, and nodes with a negative track record lower it. This model uses a Bayesian network to evaluate satisfactory direct experiences based on direct interactions between fog nodes. Hussain et al. [38] proposed a text-aware trust assessment model to assess user reliability in a fog-based IoT (FIoT) environment and identify malicious nodes. The proposed method employs a multi-resource-aware, text-based trust and reputation evaluation system to determine a user's reliability.
Text-aware feedback and feedback detector systems have also been utilized to establish an unbiased trust assessment. In addition, a monitoring mode for unreliable and destructive users is used to track user behavior and trust.
Alemneh et al. [2] introduced a peer-to-peer, subjective logic-based two-way trust management system for FC. The system operates such that the fog user, or service requester, verifies the legitimacy of the fog servers and whether they can provide accurate, reliable, and secure services; conversely, the fog servers review and verify the legitimacy of the fog users before delivering the service. This system is distributed and event-driven, using service quality and social trust criteria to determine trust. In addition, the final trust value is computed by dynamically combining direct and indirect observation data (recommendations of neighboring nodes).
Awan et al. [39] proposed NeuroTrust, a trust management mechanism for the IoT that uses a multilayer perceptron neural network trained with supervised learning to predict malicious and compromised nodes for safe transmission. An input layer, two hidden layers, and an output layer with a threshold value are used to calculate the binary output. The IoT devices include patient health sensors attached to patients that can transmit data to health care providers, such as hospitals. The proposed method uses trust parameters, including reliability, compatibility, and packet delivery rate, to assess the degree of trust, and uses a lightweight encryption mechanism to secure published data. In the proposed NeuroTrust mechanism, destructive node behavior is predicted, and data transfer only occurs if the receiver is secure and the source node can be identified using trust parameters. This method employs a smart home with a dedicated server to perform trust calculations and monitor malicious nodes. Trust is calculated both directly and indirectly (through recommendations).
El-Sayed et al. [40] proposed a machine learning (ML)-based trust framework using a decision tree classifier and artificial neural networks for vehicular networks. The network nodes are heterogeneous and typically consist of vehicles, roadside units (RSUs), and data centers that use wireless networks to exchange data. Information is exchanged between nodes to monitor and control a large number of vehicles; security in this critical network, and trust between nodes when exchanging messages, are crucial. This model evaluates trust using distance-based criteria, such as Euclidean distance. The proposed trust-based model employs a direct and indirect (recommendation-based) trust assessment strategy to quantify the trust value. The RSU classifies vehicles based on the behavior of the nodes in the vehicular environment, and the neural network concepts are implemented in the RSU. Nodes with a minimum distance from the RSU and a good track record of trust values are selected by the RSU as recommender nodes. One component acts as a controller that stores various computational results and information about the vehicle nodes and the RSU.
Farahbakhsh et al. [33] used the BLA for context-aware computation offloading in multi-user mobile edge computing (MEC). The Bayesian automaton learns the network's states and actions and helps improve the offloading algorithm. All offloading processes collect context using independent management as a monitoring, analysis, planning, and execution loop. Simulation results indicate that this method is superior in terms of energy consumption, implementation cost, network utilization, and delay compared to local computation and offloading without context-aware algorithms.
Wang et al. [41] focused on optimizing energy consumption and latency when using mobile fog devices to collect data from wireless sensor networks (WSNs) for the cloud. The outcomes demonstrate an increase in response time and communication expenses, but the routing performance is superior to that of conventional solutions; however, they disregarded network threats, data privacy, and integrity during data transmission. Ilyas et al. [42] presented a three-layer cluster-based WSN routing protocol to increase network lifetime, improve throughput, reduce latency and packet loss, and continue working in the presence of malicious nodes. The proposed approach is superior to unequal clustering based on fuzzy logic, hybrid routing based on ant colony optimization, the artificial bee colony algorithm, and energy-aware multi-hop routing in terms of network lifetime, throughput, average power consumption, and packet latency.
Subramanian et al. [43] proposed a lightweight trust method for the routing protocol for low-power and lossy networks (RPL) in IoT, based on the recommendations of neighboring nodes. RPL, despite its communication security features, is frequently vulnerable to routing attacks. In this study, the future behavior of nodes is predicted, and the level of trust between fog nodes is increased without impacting network speed or performance.
Haseeb et al. [44] proposed a lightweight and secure fog-based routing protocol (SEFR) to minimize data latency and improve energy management. This method uses multidimensional service quality criteria to select the next hop and facilitates time-sensitive applications at the network edge. The proposed protocol also protects real-time data with two levels of cryptographic security: the first level provides a lightweight confidential data scheme between cluster heads and fog nodes, and the second level provides a high-performance asymmetric cryptographic scheme between the fog and cloud layers. Simulation-based experiments demonstrated that the proposed protocol outperforms existing routing, security, and network management solutions.
The authors in [35] proposed a broker-based trust assessment framework for fog service allocation, focusing on identifying a reliable fog node to fulfill user requests. They utilized fuzzy logic as the basis for the evaluation and designed a fuzzy-based filtering algorithm to match the user's request to one of the predefined sets created and managed by the server. Only QoS trust criteria were considered in this scheme, which offers only one-way trust.
In the current research, we aimed to model and solve the trust management system in fog. To our knowledge, this is the first paper to consider a two-way trust strategy in FC architecture using an ML algorithm. Table 1 classifies the research summary on trusted management by platforms, properties, limitations, and trust criteria.


System model and architecture
The architecture and system model are presented in this section. As depicted in Fig. 1, the FC environment's architecture consists of fog clients, fog servers, and device owners. In this architecture, all fog nodes are able to offload their tasks to other trusted devices; the problem is identifying the most trusted fog nodes as offloading destinations. Figure 2 shows the system model of our proposed system. A fog server can communicate with neighboring fog nodes (i.e., fog servers and fog clients) up to one hop away. A client may be a user-portable device, such as a smartphone, laptop, or computer, or a non-user device, such as a smart lamp, smart washing machine, or CCTV camera. In this study, we refer to user-portable devices as the first group of devices. Fog clients move in a predetermined direction and are able to communicate with fog servers in close proximity. Each node has an owner, and an individual may own multiple nodes.
To receive a service, the fog client requests a connection to a fog server. Using the BLA algorithm, the client learns from the environment which fog server is the most trusted and best suited, and connects to it. The fog server must then verify that it is connected to a trusted (non-malicious) client. The server calculates the client's trust level by consulting neighboring servers and by direct observation to determine the client's legitimacy; this value is also sent as a recommendation to other servers. The same applies to the client. The client wants assurance that the fog server is reliable and capable of delivering the required service (a malicious fog server may provide the wrong service). The fog client consults neighboring fog servers, and the result is combined with direct observation to determine the server's final trust status. Like fog servers, fog clients share their experience with nearby servers. Nodes only store and exchange trust information about neighboring nodes to optimize computational performance. In summary, negotiation and interaction proceed as follows: using BLA, the client connects to the most trusted server; the server then evaluates the client's trust, and if the trust threshold is met, the connection between the server and the client is completed; otherwise, the server rejects the connection.
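The negotiation just described can be sketched as follows; the class, method names, fixed weight, and threshold are illustrative assumptions rather than the paper's API.

```python
class FogServer:
    """Scores a prospective client by mixing the server's own
    observation with 1-hop neighbour recommendations."""

    def __init__(self, direct_obs, neighbour_recs):
        self.direct_obs = direct_obs          # server's own trust estimate of the client
        self.neighbour_recs = neighbour_recs  # recommendations from nearby servers

    def evaluate_client(self, w_direct=0.6):
        # hypothetical fixed weighting of direct vs. indirect evidence
        indirect = sum(self.neighbour_recs) / len(self.neighbour_recs)
        return w_direct * self.direct_obs + (1 - w_direct) * indirect

def negotiate(servers, pick_server, trust_threshold=0.5):
    """Two-way handshake: the learner recommends a server to the
    client, then that server accepts the client only if the client's
    trust clears the threshold."""
    idx = pick_server(servers)          # e.g. the BLA's recommendation
    if servers[idx].evaluate_client() >= trust_threshold:
        return idx                      # connection established
    return None                         # server rejects the client
```

Passing the selection policy in as `pick_server` keeps the handshake logic independent of the learning algorithm, so the BLA can be swapped for any other selector.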
Due to the characteristics of fog, such as its heterogeneous environment, dynamism, high mobility support, geographical distribution, lack of redundancy, and wireless connectivity, security issues in fog are of critical importance. Therefore, security is one of the main components of the architecture. In addition to trust management, this component manages all security concerns, including encryption, privacy, authentication, and intrusion detection and prevention. Trust management systems enable reliable communication between nodes in the fog environment, and trust management algorithms can be used to identify trusted clients and fog servers. The proposed trust management algorithm can run under applications that require reliable communication. Once trusted nodes are found, fog clients can send their data to servers under secure conditions. Servers likewise retain clients recognized as trusted by the trust management algorithm. The trust management system thus helps the nodes check each other before a service is provided. The symbols used in this paper are defined in Table 2.

Trust criteria
A trust criterion is a piece of information required to calculate the trust level of a node. Usually, trust management systems use more than one feature to assess the trust of clients and fog servers. One measure of trust is service quality, which evaluates the server's ability to successfully perform a request or service from various perspectives. QoS is the most important criterion because the client wants to select a server that can provide the appropriate level of service. On the other hand, social relationships heavily influence the servers' selection of clients. In the proposed two-way trust management system, fog servers assess fog client reliability based on criteria such as friendship, honesty, ownership, and energy consumption, while fog clients use criteria such as delay and packet delivery rate. Some criteria for assessing a server's trust value are presented below, including reliability, response time, ownership, and power consumption.

Fog clients and servers ownership criteria
Each fog node has an owner. This criterion assumes that devices belonging to the same individual trust each other [45]. If a node encounters another node belonging to the same person, the value of this criterion is one; otherwise, it is zero: Ownership(i, j) = 1 if the owners of nodes i and j are the same person, and 0 otherwise.

QoS criteria for assessing the trustworthiness of fog servers
• Energy consumption: The energy consumption of fog servers and the cloud, with respect to the power of all hosts, can be calculated using Eq. (2).
where E_c denotes the energy consumption of the current state, T_n represents the current time, T_lu is the time of the last utilization update, and P_h is the host's power at the last utilization [33]. • Latency: The application's latency can be expressed by Eq. (3), involving the system clock (T1) and the tuple end time (T2).
where T_st represents the start time of a tuple and N1 is the number of executed tuple types. In addition, the execution time can be calculated as (T1 − T_st) [33].
The application latency [33] can be determined through Eq. (4), where N2 is the number of received tuple types and T_s is the transfer time between two modules. • Packet delivery ratio (PDR): the ratio of packets successfully received by the destination node to packets sent by the source node. Alternatively, the PDR can be modeled using the well-known Gilbert-Elliott model (a two-state Markov model) [46], where p is the probability of transitioning from the good state to the bad state and r is the probability of transitioning from the bad state to the good state.
Model parameters are obtained by observing packet loss on the links in the good and bad states. The probabilities of transitioning from good to bad and from bad to good are thus determined from the PDR observed in the FC environment.
• Reliability: Reliability is the ability of a service to operate without failure over time and under specific conditions. Reliability is therefore based on the mean time to failure promised by the service provider and on previous failures encountered by users. This value can be expressed through Eq. (8), where numFailure represents the number of users who experienced a crash or failure in less time than the service provider promised, n expresses the total number of users, and Pmttf is the mean time to failure that the service provider has committed to. • Response time: This indicates the time required for the service provider to fulfill a request. The response time can be determined through Eq. (9), where ResponseTime denotes the average response time, T_i is the time between when user i requested the service and when the service was actually available, and n is the total number of service requests sent to a server.
• End-to-end packet forwarding ratio (EPFR): This criterion is defined as the ratio between the number of packets received by the application of the destination node and the number of packets sent by the source node. Equation (10) can be used to calculate this ratio, where k indicates the number of successfully received packets and n represents the total number of packets sent.
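Since the equations themselves are not reproduced above, the following is a hedged sketch of the QoS measures as described in the text; the function names are ours, and the energy update and steady-state PDR are readings of the prose, not the paper's verbatim equations.

```python
def update_energy(e_current, t_now, t_last_update, host_power):
    # Reading of Eq. (2): accumulate the host's power over the
    # interval since the last utilization update
    return e_current + (t_now - t_last_update) * host_power

def gilbert_elliott_pdr(p, r):
    # Two-state Markov (Gilbert-Elliott) model: the long-run fraction
    # of time in the good state is r / (p + r); assuming packets are
    # delivered only in the good state, this approximates the PDR
    return r / (p + r)

def response_time(wait_times):
    # Eq. (9): mean gap between a service request and availability
    return sum(wait_times) / len(wait_times)

def epfr(received, sent):
    # Eq. (10): end-to-end packet forwarding ratio k / n
    return received / sent
```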

Social criteria for assessing the trust of fog clients
• Energy consumption: This parameter can be used to determine a client's energy consumption from the data size, transmission distance, computational energy of the data sent, and the requested number of MIPS. It can be expressed by Eq. (2). • Friendship: This parameter indicates the degree of proximity of one node to other nodes. Friendship can be calculated from the history of interaction [47], such that a history of more positive interactions between two nodes indicates greater trust and confidence between them. Friendship is calculated as the ratio of a client's successful connection requests (SCR_i) to the maximum of all connection requests (CR). A connection request is considered successful if the server accepts it, which happens when the client's trust level exceeds the minimum required by the specific application software.
• Honesty: Based on direct observations of other nodes over time, honesty evaluates the belief that a node is reliable. This criterion is computed by tracking the number of suspected fraudulent experiences of the trustee observed by the trustor over a period of time, using a set of anomaly detection rules such as significant differences in recommendations, as well as interruptions, re-transmissions, repetitions, and delays [48,49]. According to Eq. (12), the ratio of valid trust proofs (VTP_i) to realized connection requests (RCR) can be used to measure honesty. Trust between the trustor and the trustee is established when connection requests are fulfilled. Exaggerated client recommendations are perceived as unreliable reports, and connection requests from nodes with low trust levels are rejected.
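The two social ratios can be sketched directly; the argument names are ours, and the honesty formula is a reading of the description of Eq. (12) rather than the verbatim equation.

```python
def friendship(successful_requests, max_requests):
    # SCR_i / CR: a client's accepted connection requests relative
    # to the maximum of all connection requests
    return successful_requests / max_requests

def honesty(valid_trust_proofs, realized_requests):
    # Reading of Eq. (12): validated trust over realized connection
    # requests; both counts come from anomaly-detection bookkeeping
    return valid_trust_proofs / realized_requests
```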

The proposed approach
In the proposed two-way trust management system, in addition to the direct trust obtained through self-monitoring by servers and customers, indirect trust derived from the recommendations of neighboring nodes is used; the evaluation method must therefore account for the uncertainties and inaccuracies in those recommendations. For this category of problems, fuzzy logic and subjective logic-based solutions have been proposed; these methods allow trust-related conclusions to be drawn from insufficient evidence. The proposed system is based on a particular version of belief theory known as subjective logic [50]. It is distributed and event-oriented, and it uses social trust, QoS, and energy consumption information with multiple trust characteristics to calculate the trust values of fog nodes, against which the trustor and the trustee evaluate each other to establish a trusted relationship. The final trust value is determined by the direct trust gained from self-observation and the indirect trust gained from the recommendations of neighboring nodes.

Subjective logic
Subjective logic is a type of probabilistic logic and a special form of belief theory that creates a relationship between uncertainty and belief. It is suitable for modeling and analyzing situations or propositions that involve uncertainty and relatively unreliable sources. A proposition is expressed as a probability in the range of 0 to 1 [50]; trust is one such proposition. This research proposes a system that employs subjective logic to gather recommendations about neighboring fog servers where uncertainty exists. Subjective logic is predicated on the notion that trust, as a claim expressing a belief about an entity, is subjective and experienced differently by each individual; it is neither universal nor objective. A participant may be unable to meet all the criteria for assessing the level of trust. This means that trust is calculated with insufficient evidence and that each node in the FC environment calculates its own trust value for each node it encounters.
In subjective logic, uncertain probabilities are presented as opinions. The opinion, or degree of trust, in node x is denoted by W_x and is defined as [50]:

W_x = (b_x, d_x, u_x, a_x), with b_x + d_x + u_x = 1

where b_x denotes belief in the reliability of node x, d_x represents disbelief in the reliability of node x, u_x is the uncertainty about the conclusion on the node's reliability, and a_x is the atomicity (base rate), which determines how much of the uncertainty contributes to the trust value. An atomic value of 0.5 means an opinion has an equal chance of a true or false outcome. The degree of trust expressed as this quadruple can be converted into a single trust value via the expected value:

E(W_x) = b_x + a_x * u_x

To determine the degree of trust between nodes in a fog network, the values of belief (b_x), disbelief (d_x), and uncertainty (u_x) of node x are obtained from its positive and negative experiences [27, 50]. Fog nodes record the good and bad experiences of the nodes they encounter and send these values as a subjective opinion when asked for a recommendation. Some recommenders may not tell the truth, so to increase the accuracy of the trust value, the recommendations must be weighted and then combined. In subjective logic, this combination is performed using two operators, discounting and consensus [51].
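These definitions can be sketched as follows. The mapping from positive/negative experience counts to an opinion uses the standard subjective-logic convention (u_x = 2/(r + s + 2)); the paper's exact equations may differ in detail:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float          # belief in the node's reliability
    d: float          # disbelief
    u: float          # uncertainty
    a: float = 0.5    # atomicity (base rate)

    def expected(self) -> float:
        # collapse the quadruple into a single trust value: E = b + a*u
        return self.b + self.a * self.u

def from_experiences(r: int, s: int) -> Opinion:
    """Build an opinion from r positive and s negative experiences; b + d + u = 1."""
    total = r + s + 2
    return Opinion(b=r / total, d=s / total, u=2 / total)
```

A node with no evidence at all (r = s = 0) gets the maximally uncertain opinion (0, 0, 1, 0.5), whose expected trust is the neutral value 0.5.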

Discounting operator
Using the discounting operator (represented by ⊗), the node calculating trust in other nodes weights each recommendation it has received by the amount of trust it has in the recommender (Fig. 3). Thus, the recommendations of reputable recommenders carry more weight than those of less trustworthy ones. This is crucial for defending against trust-based attacks.
Suppose that the trusting node i holds subjective trust T_{i,k} in the recommending node k and that the recommending node k holds subjective trust T_{k,j} in the trusted node j; these opinions are given in Eq. (15). The indirect trust assessment of node i toward node j is then calculated by discounting the trust of k in j by the trust of i in k as follows:
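A sketch of the discounting operator under the standard subjective-logic definition, with opinions represented as (b, d, u, a) tuples (Eq. (15)'s exact notation is not reproduced in this excerpt):

```python
def discount(t_ik, t_kj):
    """⊗: node i's indirect opinion about j, obtained via recommender k."""
    b1, d1, u1, _a1 = t_ik   # i's opinion about recommender k
    b2, d2, u2, a2 = t_kj    # k's opinion about target j
    return (b1 * b2,                # belief in j survives only through belief in k
            b1 * d2,                # likewise for disbelief
            d1 + u1 + b1 * u2,      # distrust/uncertainty in k becomes uncertainty about j
            a2)
```

Note that a fully distrusted recommender (b1 = 0) yields pure uncertainty about j, which is exactly the defensive property described above.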

Consensus operator
The recommendations of several recommenders are combined using consensus (indicated by the operator ⊕). This operator treats each recommendation equally (Fig. 4).
Assume that nodes i and k hold the opinions about node j given in Eq. (16). The recommendation for j obtained from combining the opinions of i and k is then calculated via:
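A sketch of the consensus operator under the standard subjective-logic definition, on the same (b, d, u, a) tuple representation (Eq. (16)'s exact notation is not reproduced in this excerpt):

```python
def consensus(t1, t2):
    """⊕: fuse two independent opinions about the same node."""
    b1, d1, u1, a1 = t1
    b2, d2, u2, _a2 = t2
    k = u1 + u2 - u1 * u2
    if k == 0:
        # both opinions are dogmatic (u = 0); fall back to a simple average
        return ((b1 + b2) / 2, (d1 + d2) / 2, 0.0, a1)
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k,
            a1)
```

Fusing two opinions reduces the resulting uncertainty (u1·u2/k is smaller than either input uncertainty), reflecting that more independent evidence yields a firmer conclusion.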

Trust calculation
Indirect trust is computed as a subjective opinion by applying the discounting and consensus operators to the recommendations obtained from neighboring nodes. To be more resistant to trust-based attacks, a threshold-based filter can be applied so that only recommendations from reputable recommenders are used [52]. If the number of neighboring nodes is limited, all recommendations can be considered, and trust-based attacks can still be mitigated during the confidence-building phase through the discounting and consensus operators.
Suppose node i trusts the recommenders r1, r2, …, rk at time t with values T_{i,r1}(t), T_{i,r2}(t), …, T_{i,rk}(t), and the recommenders trust node j at the same time t with values T_{r1,j}(t), T_{r2,j}(t), …, T_{rk,j}(t). The cumulative and final indirect trust in node j, evaluated by node i, is then computed using the discounting and consensus operators, as shown in Eq. (17).
In addition to the recommendations, the trust management system also depends on the calculated trust value of a node from direct observation. The amount of direct trust in node j, evaluated by node i at time t, is calculated using Eq. (18).
where x, y, z, and w are the individual trust criteria, each scaled by its associated weight factor. For a fog server, x, y, z, and w can be latency, PDR, ownership, and energy consumption; for a fog customer, they can be friendship, honesty, ownership, and energy consumption, respectively. The weight factors balance the QoS criteria, energy consumption, and social trust according to the node owner's level of trustworthiness and reputation. Here, a node's reputation indicates the extent to which it has been trusted in previous interactions.
If the same person owns the trustor and trustee, or if the trustee's reputation exceeds the trust threshold, the average value of the trust criteria is used. Otherwise, the ownership criterion is assigned half the weight of the other trust criteria. This simple weighting method rewards well-credited nodes and penalizes poorly credited nodes, and it is suitable for resource-constrained customers. Equation (19) is used to determine the final trust value of a node. The weighting factor determines the shares of direct and indirect trust in the final trust value; the contribution of indirect trust to total trust is determined by the trust value of the trustee derived from the previous trust calculation, T_{i,j}(t − Δt).
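Equations (18) and (19) are not reproduced in this excerpt; the sketch below assumes a weighted sum of the four criteria and a convex combination of direct and indirect trust, with gamma the indirect share determined by Eq. (20):

```python
def direct_trust(criteria, weights):
    """Eq.-(18)-style weighted sum of the four trust criteria (x, y, z, w).

    `weights` are the per-criterion weight factors and should sum to 1.
    """
    return sum(c * w for c, w in zip(criteria, weights))

def final_trust(t_direct, t_indirect, gamma):
    """Eq.-(19)-style combination; gamma (the indirect share) never exceeds 0.5 per Eq. (20)."""
    assert 0.0 <= gamma <= 0.5
    return (1 - gamma) * t_direct + gamma * t_indirect
```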
The number of recommenders is denoted by n(rec) and the maximum possible number of recommenders, fixed in the simulation settings, by max(rec); the indirect trust share is calculated through Eq. (20). The indirect trust weight factors in both past experience and the current number of recommenders, normalized to the maximum possible number of recommenders per node. This formula ensures that the share of indirect trust does not exceed half of the total trust and that the increase in share is not proportional to the number of recommenders.

Bayesian learning automata
A learning automaton, a form of reinforcement learning, can be viewed as a single agent with a finite set of actions. Based on a probability vector, it applies an action from this set to a random environment. The environment receives the action as input and generates a response signal, which is drawn from an unknown, constant probability distribution and indicates whether the action was beneficial or detrimental. If the response is favorable, the probability of the selected action is increased (reward); otherwise, it is decreased (penalty). The automaton thus uses the environment's response to select its next action, as determined by the learning algorithm. Action selection significantly impacts search performance; during this process, the automaton learns the optimal action to take. Figure 2 shows, in a general sense, the relationship between a learning automaton and a fog environment [53, 54]. BLA is Bayesian by definition and is based on calculating rewards or punishments and on random sampling from the beta distribution [33]. Bayesian learning is a probabilistic method of inference that weighs the evidence for hypotheses; its purpose is to find optimal solutions to problems. In BLA, we use the beta distribution with two positive shape parameters, α and β. Its probability density function (PDF) is given by Eq. (21) [33, 55]:

f(x; α, β) = x^(α−1) (1 − x)^(β−1) / B(α, β)

where B(α, β) is the beta function.
This is the PDF of the beta distribution for x ranging from 0 to 1, with shape parameters α, β > 0. It is a power function of the variable x and of its reflection (1 − x), and the distribution's support is the interval [0, 1].
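A minimal BLA sketch using Thompson sampling over Beta(α, β) posteriors, one arm per candidate action; the paper's exact reward/penalty update rule may differ:

```python
import random

class BLA:
    """Bayesian learning automaton: sample each arm's Beta posterior, pick the maximum."""
    def __init__(self, n_actions: int):
        self.alpha = [1.0] * n_actions  # reward counts + 1 (uniform prior)
        self.beta = [1.0] * n_actions   # penalty counts + 1

    def select(self) -> int:
        # draw one sample per arm from Beta(alpha, beta) and take the argmax
        draws = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return max(range(len(draws)), key=draws.__getitem__)

    def update(self, action: int, rewarded: bool) -> None:
        # favorable response increments alpha; unfavorable increments beta
        if rewarded:
            self.alpha[action] += 1.0
        else:
            self.beta[action] += 1.0
```

As evidence accumulates, the posterior of the most rewarding arm concentrates near 1, so that arm is sampled increasingly often while the others are still occasionally explored.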

TTLA
This study uses BLA to learn from all network movements and actions in order to identify the most dependable node with the lowest energy consumption.

The steps of the two-way trust calculation algorithm are described in greater depth below. The final trust value lies in the range [0, 1], where 0 represents complete distrust and 1 represents complete trust [48].
The ignorance level, or threshold, at which trust and distrust are distinguished varies by application. This value is set to 0.5 for most applications but is higher for health- and safety-sensitive applications.
The initial parameters are set first per Algorithm 1, and BLA then conducts the server selection procedure. Direct trust in the server is determined from direct observations and the calculated criteria. Indirect trust in the server is evaluated using subjective logic to aggregate the recommendations of neighboring nodes. Next, the direct and indirect trusts are combined based on their respective weights and compared against the threshold (these values are derived from multiple experimental runs). According to the BLA algorithm, the α and β (reward and penalty) parameters are then updated. In the following step, the client sends its request to the best server chosen by BLA. Direct trust in the customer is measured using direct observations and the predetermined criteria, and indirect trust in the customer is evaluated using subjective logic to aggregate the recommendations of neighboring nodes. Direct and indirect trusts are again combined based on their respective weights and compared against the threshold. If the customer is trusted, mutual trust is established and the service connection is made; otherwise, the server rejects the request because the customer is untrustworthy.
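The steps above can be sketched as follows. Trust values are supplied as precomputed (direct, indirect) pairs, and `combine` and the inline BLA are simplified placeholders for Eqs. (17)–(20) and Algorithm 1; all names here are illustrative assumptions, not the paper's code:

```python
import random

THRESHOLD = 0.5  # application-dependent; higher for safety-critical applications

class BLA:
    """Minimal Thompson-sampling learning automaton over candidate servers."""
    def __init__(self, n):
        self.alpha, self.beta = [1.0] * n, [1.0] * n
    def select(self):
        draws = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return max(range(len(draws)), key=draws.__getitem__)
    def update(self, i, rewarded):
        if rewarded:
            self.alpha[i] += 1.0
        else:
            self.beta[i] += 1.0

def combine(direct, indirect, gamma=0.3):
    # placeholder for the weighted combination of direct and indirect trust
    return (1 - gamma) * direct + gamma * indirect

def two_way_handshake(client_view, server_views, bla):
    """client_view[s]: the client's (direct, indirect) trust in server s;
    server_views[s]: server s's (direct, indirect) trust in the client."""
    s = bla.select()                          # BLA picks a candidate server
    t_server = combine(*client_view[s])
    bla.update(s, rewarded=t_server >= THRESHOLD)
    if t_server < THRESHOLD:
        return None                           # client distrusts the chosen server
    if combine(*server_views[s]) < THRESHOLD:
        return None                           # server rejects the untrusted client
    return s                                  # mutual trust: connection established
```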

Evaluation
This section evaluates the proposed methodology. The environment and method are simulated using the iFogSim library [56]. The proposed method is then contrasted with two other approaches: a two-way trust management system for fog systems (MTTA) [2] and a fuzzy-based trust evaluation framework (Fuzzy) [35].

Simulation configuration
A multi-module application with a predefined workflow is employed to evaluate the proposed approach. This workflow consists of two loops: motion detector → object detector → object tracker, and object tracker → PTZ (pan/tilt/zoom camera) control. These modules form an FC-based intelligent surveillance application over a micro data center network [56]. All device and application module configurations are provided in this section. We evaluated four areas with various cameras and FDs. Tables 3 and 4 show the edge configuration and FD properties [33]; the edges correspond to the connections between the modules mentioned above, and the software and hardware properties of each FD in the network are also presented. Tables 5 and 6 show the module configurations; the host differs from an FD, and its software and hardware properties are presented accordingly. Since each application module is a software component, it requires several resources, e.g., RAM, MIPS, bandwidth, and size, to be executed. All compared methods are evaluated in the same environment and configuration to ensure a fair comparison.

Energy consumption
Energy consumption is an essential metric in this research. According to Fig. 5, the proposed trust-based approach, TTLA, yields superior results to the Fuzzy and MTTA methods. Indeed, utilizing the BLA algorithm produced a more effective and trusted strategy than the other methods, which affects the FDs' energy consumption. Figure 6 depicts the total cost of system execution; as the figure indicates, the cost increases when the number of FDs exceeds 30.

Total execution cost
As is evident, the TTLA method is the most cost-effective solution. A further conclusion drawn from the figure is the benefit of using intelligent methods to optimize trust in this setting.

Network usage
Network usage analysis enables us to determine which method is more resource-efficient. Methods that keep resources employed and reduce their idle time are preferred.
According to Fig. 7, the Fuzzy method yields the lowest network utilization, while TTLA is the most efficient.

Delay
Delay or latency is measured between the start and end times of a tuple (here, the execution waiting time of each tuple is considered). Figure 8 demonstrates that TTLA can establish a secure connection between FDs with minimal latency, which helps the network support real-time applications. The figure also illustrates a gradual increase in delay as the number of FDs increases.

Response time
Response time refers to the time it takes for a request to travel from one fog node to another fog node or to the cloud and back, including processing and communication delays. Real-time applications necessitate rapid responses, so response time is a suitable metric in this regard. As shown in Fig. 9, TTLA's response time ranges between 1.28 × 10^3 and 1.78 × 10^3, which is superior to the other methods, whose response times range between 1.28 × 10^3 and 2.82 × 10^3. This indicates that TTLA can find a trusted FD to offload a task to more quickly than the other methods. Figure 10 shows the trust value when clients request servers in order to locate more trusted nodes; since both TTLA and MTTA are based on a trust value, these two are compared in the figure.

Trust value
TTLA's higher trust value indicates that it is a more secure method than MTTA. The TTLA method exhibits the highest trust value when evaluated with 26 and 38 FDs.

Discussion
This paper focuses on the BLA-based strategy for managing trust between fog clients and servers. While trusted communication in FC has many advantages (improving efficiency, reliability, and resource utilization, reducing contention, and detecting and responding to security threats in real time), there are also several potential drawbacks and limitations to consider:
• Security risks: Since FC involves distributed resources and data, there is a higher risk of security breaches or cyberattacks, especially in industries like healthcare or finance, because data and processing may be exposed to more potential vulnerabilities than in a centralized system.
• Scalability: As fog nodes are deployed closer to the network's edge, they may have limited processing power and storage capacity, reducing the system's scalability.
• Cost: Deployment and maintenance of a trusted communication system in FC can be expensive, especially if specialized hardware or software is required.

Conclusion
This paper proposes a BLA-based two-way trust management strategy for FC-based applications. In each communication, several fog nodes serve as clients and others as servers. Each fog node tends to communicate with nodes yielding the highest level of trust. In addition, energy efficiency was a potential challenge investigated in this work. A two-way trust algorithm facilitates the management of client-server security. Clearly, when the number of nodes increases, an ML algorithm is required instead of heuristics. The results indicated that after applying the BLA, the proposed method was superior to the Fuzzy and MTTA approaches. Furthermore, the proposed method outperformed the alternatives by 10% in energy consumption, 5% in network utilization, 9% in latency, 28% in response time, and 3% in trust value. Future work will attempt to extend the proposed strategy to a fully distributed method. Moreover, all network nodes must be aware of the network without transmitting sensitive or secure information. In this regard, developing a distributed trust management algorithm may be beneficial.