FSCN: a novel forwarding method based on Shannon entropy and COPRAS decision process in named data networking

Selecting the appropriate output interface (forwarding) has recently emerged as a key challenge affecting the performance of Named Data Networking (NDN). Regarding this matter, a novel strategy called FSCN is presented, based on the COPRAS (Complex Proportional Assessment) decision-making process with the dynamic weighting of Shannon entropy, for transmitting request packets through the optimal output interface. The essential parameters identified in the suggested approach, namely bandwidth, delay, and the number of hops, are dynamically weighted according to network conditions using the Shannon entropy technique. The interfaces are then scored using the COPRAS method, and the suitable output interface is ascertained. The advantages of the proposed method include considering influential criteria that characterize the path's performance and the dynamic weighting of those criteria. Simulation results in ndnSIM demonstrate enhancements in critical parameters, including Interest throughput, satisfaction ratio, packet drop, and delivery time, when compared to similar approaches.


Introduction
Numerous researchers have proposed the Named Data Networking (NDN) architecture to transform the configuration of networks based on the Internet Protocol (IP). Instead of a host-centric approach, this network adopts a content-centric approach [1]. In other words, applications and clients are solely concerned with the content; in contrast, the current Internet protocol suite (TCP/IP) concentrates on identifying and locating the device, which complicates and restricts management. Moreover, NDN facilitates effective content dissemination and safeguards the content itself, rather than securing data channels [2]. One of the benefits of NDN is the ability to authenticate and identify any content without the involvement of the service provider. Additionally, by storing content in the vicinity of the client, bandwidth wastage is prevented [3]. Location-independent naming, name-based routing, and higher security compared to the current structure of the Internet are other advantages of NDN [4]. The data structures of an NDN node include a Pending Interest Table (PIT), a Forwarding Information Base (FIB), and a Content Store (CS) that performs in-network caching, i.e., it caches passing Data packets to fulfill forthcoming Interests. The PIT maintains all the forwarded upstream Interests that need to be satisfied by Data packets. Although the FIB resembles its corresponding IP forwarding table, it indexes the table by content name prefixes rather than IP prefixes and allows for multi-path forwarding by applying different output interfaces for individual prefixes. By employing the CS, the PIT and FIB tables, and in-network storage, information-centric networks can enhance network performance. Ideally, NDN prefers data to be stored in the router nearest to the client to minimize access delay [5].
In these types of networks, despite the aforementioned advantages, there are several challenges such as naming [6], routing [7], security [8], mobility [9], and caching [10] that need to be addressed. Routers are required to perform tasks beyond routing; for instance, tasks like information storage and cache searching demand faster speeds than those supported by current routers. Routers must also store more tables, which substantially amplifies processing overhead [11]. Requested data are copied into the temporary memory of every node along the path to the client, which leads to severe congestion and increases the consumption of network resources such as bandwidth and internal memory [4]. Among these challenges, forwarding in name-based networks has recently been investigated in various fields such as the Internet of Things [12], smart healthcare [13], smart cities [14], wireless networks [15], vehicle-to-vehicle communication [16], video streams [17], and others.
In NDN, the data delivery process starts by initiating a request for the desired content, which leads to the discovery of the required information. In NDN, the request is made hop by hop, rather than end to end [18]. As a result, routers have to choose suitable interfaces during the request process based on network conditions by making forwarding decisions [19]. The ability to make forwarding decisions dynamically and adaptively is a crucial function for NDN. As the forwarding strategy plays a significant role in data retrieval, it is crucial to design a low-complexity strategy while ensuring the efficiency of the forwarding process [20]. Inappropriate routing can result in packet collisions, congestion, and increased delay [21]. The majority of current designs employ one or two criteria to choose the output interface, which can lead to inappropriate forwarding [22]. Also, criteria weighting is not done dynamically, which can lead to network imbalance and suboptimal resource utilization. Performance decreases as network traffic rises because routers have fewer resources available for subsequent requests, increasing the likelihood of blockage and performance degradation [20].
This article presents a forwarding strategy for NDN, referred to as FSCN (Forwarding method based on Shannon entropy and COPRAS decision process in NDN), which considers crucial parameters such as delay, bandwidth, and the number of hops and adaptively weights them using Shannon entropy. One advantage of the proposed method is that the weights can be adjusted according to the conditions at each stage. Subsequently, each interface is scored with the COPRAS method based on the parameter weights and the path conditions, and the optimal interface for transmission is determined from the resulting scores. The proposed approach prioritizes optimizing network resource utilization, reducing delay, enhancing throughput, improving load balancing, and minimizing packet drops. The innovations of the proposed method can therefore be summarized as follows:
• The method incorporates significant factors, namely bandwidth, link delay, and the number of hops, which influence the output interface selection according to their respective weights. The COPRAS decision-making technique is used to determine the output interface while considering bandwidth, link delay, and the number of hops simultaneously; its benefits include simple calculation, comprehensive option ranking, and the ability to consider both positive and negative criteria.
• The method employs Shannon entropy for the dynamic weighting of the bandwidth, delay, and hop-count criteria. Shannon entropy determines the weight of each factor based on the extent of variation in the indices, so the weights adapt to changes in network conditions.
• The decreased delay is attributed to incorporating link delay, the number of hops, and bandwidth in the selection of the output interface, with the weights of these factors adjusted according to network conditions.
• The proposed method significantly enhances load balancing because bandwidth is one of the factors in selecting the output interface, which optimizes the distribution of traffic across the network and the use of network resources.
• The dynamic weighting of the bandwidth criterion leads to a reduction in packet drops, and accounting for the dynamic nature of network conditions yields a suitable improvement in link delay.
• An effort is made to preserve the network's performance and efficiency as the number of flows increases, making the network more scalable.
This article is organized as follows. The second and third sections cover the research's fundamental concepts and background, respectively. The fourth section presents the problem statement in detail. Section 5 explains the proposed FSCN method and the procedure for deriving the desired output formula. In Sect. 6, we analyze and discuss the results, and the article concludes in Sect. 7.

Preliminary
In this section, first, an overview of NDN is presented; then, the fundamental procedures used for selecting the subsequent router with the help of the COPRAS method [23] and Shannon entropy weighting [24] are examined.

Overview of NDN
Hierarchical names are utilized by the Named Data Networking (NDN) to retrieve content. Each NDN router has Content Store (CS), Pending Interest Table (PIT), and Forwarding Information Base (FIB) tables. The sending process of an Interest Packet (I_pkt) and its corresponding Data Packet (D_pkt) in an NDN router network is demonstrated in Fig. 1.
When the Interest packet arrives at a router, the router first checks the CS. If the requested information is available in the CS, it is provided to the client. If not, the router checks the PIT; if the desired name is already present in the PIT, the router awaits a reply. If the requested information is not in the PIT either, the router refers to the FIB table to find a route for retrieving the content and delivers the Data packet to the client on the way back [25].
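The CS-then-PIT-then-FIB lookup described above can be sketched as follows. This is a minimal illustration, not part of any real NDN code base: the `router` object with dict-like `cs`, `pit`, and `fib` attributes, the `score_face` ranking hook, and the packet dictionaries are all assumed names.

```python
def on_interest(router, interest):
    """Sketch of NDN Interest processing: CS -> PIT -> FIB.

    `router` is assumed to expose dict-like `cs`, `pit`, and `fib`
    attributes keyed by content name; these names are illustrative.
    """
    name = interest["name"]
    # 1. Content Store: an exact-match hit returns cached Data immediately.
    if name in router.cs:
        return router.cs[name]
    # 2. PIT: an existing entry means this Interest was already forwarded;
    #    record the new incoming face and wait for the Data to come back.
    if name in router.pit:
        router.pit[name].append(interest["in_face"])
        return None
    # 3. FIB: create a PIT entry and forward on the best-ranked face
    #    (FSCN would rank faces with entropy-weighted COPRAS scoring).
    router.pit[name] = [interest["in_face"]]
    best_face = max(router.fib[name], key=router.score_face)
    best_face.send(interest)
    return None
```

A second Interest for the same name is aggregated in the PIT rather than forwarded again, which is the behavior the paragraph above describes.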

COPRAS method
COPRAS (Complex Proportional Assessment) is a decision-making method that considers multiple criteria to determine the optimal choice. Its benefits include straightforward computation, a comprehensive ranking of options, and the ability to take both positive and negative criteria into account. Since this research uses the three criteria of bandwidth, delay, and hop count to select the output interface, and negative criteria are present alongside positive ones, the COPRAS technique is used to identify the optimal choice. Multi-criteria decision-making models involve goal setting, weighting of criteria, and/or ranking of options. In this study, the COPRAS method is used to rank the options, while the Shannon entropy method is applied to weigh the criteria dynamically. The COPRAS method takes the following steps to obtain the best option. The first step is to construct a decision matrix: a grid in which each option is evaluated against a set of criteria. The decision matrix is represented by X, with each element designated x_ij in Eq. 1.
In all multi-criteria decision-making methods that rely on a decision matrix, the second stage involves normalization, i.e., conversion to a secondary scale. In the third step of the COPRAS method, the normalized decision matrix must be weighted. To achieve this, the weight of each criterion is multiplied by all elements of that criterion, as shown in Eq. 2 [23]; r_ij represents the elements of the criterion of interest, and w_j is the weight of that criterion.
There are multiple methods available to obtain the criteria weights. The Shannon entropy technique is employed in this study to derive the criteria weights. One of the benefits of using the Shannon entropy method is that the weights of the criteria can be altered based on the circumstances.

Weighting using the Shannon entropy method
In most multi-criteria decision-making problems, knowledge of the criterion weights is crucial, and the Shannon entropy technique is one of the approaches used to determine them. It derives the weight of each criterion by assessing the degree of variability in its values: criteria with higher variability are considered more significant, and their influence on choosing the optimal option is greater. The normalized matrix is denoted by N, with each element indicated by n_ij, and normalization is performed linearly. The first step in this technique is to construct a decision matrix according to Eq. 1; then the degree of deviation is calculated, and finally weights are assigned to each of the criteria.
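The steps above, from the linearly normalized matrix to the deviation-based weights, can be sketched in Python. This is a generic implementation of the standard Shannon entropy weighting procedure, not the paper's own code; the function name is an assumption.

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy weights for the columns (criteria) of a decision matrix.

    Criteria whose values vary more across the options receive a larger
    weight, which is what makes the weighting adapt to network conditions.
    """
    m = len(matrix)                      # number of options (interfaces)
    cols = list(zip(*matrix))            # one tuple per criterion
    k = 1.0 / math.log(m)                # Shannon entropy constant
    deviations = []
    for col in cols:
        total = sum(col)
        n = [x / total for x in col]     # linear (sum) normalization
        e = -k * sum(v * math.log(v) for v in n if v > 0)
        deviations.append(1.0 - e)       # degree of deviation d_j = 1 - E_j
    s = sum(deviations) or 1.0
    return [d / s for d in deviations]   # weights sum to 1
```

A criterion that is identical across all options has entropy 1 and deviation 0, so it receives (near-)zero weight, matching the statement that only varying indices influence the choice.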

Related work
There have been numerous proposals by researchers to tackle forwarding problems in NDN. Based on Fig. 2, NDN's forwarding strategies can generally be classified into four main categories. Among them, adaptive forwarding seeks the best possible path for retrieving the Data packet [26], while blind forwarding uses a flooding mechanism in which Interest and Data packets are sent throughout the network without any consideration of their destination [21].
Yi et al. [32] proposed a preliminary plan for an NDN transmission strategy in which interfaces are classified by a color-coding scheme and ranked according to their response delay. Requests are always sent to the highest-ranked interface, which is marked green. In this strategy, predictions made from network conditions resulted in long delays. In the research conducted by Li et al. [27], a greedy ant colony optimization algorithm (GSCF) is proposed, which models the NDN forwarding process as a multi-objective distributed random distribution problem; the probability distribution over interfaces is calculated by the ant colony optimization algorithm. Although this study effectively reduces content delivery time, it does not take network load balancing into account.
According to a study by Udugama et al. [33], a multi-path forwarding strategy based on demand (OMP-IF) has been introduced, in which multiple heterogeneous paths are identified from the client to the content resources and are simultaneously used to send requests. Multi-path transmission by clients is performed using a round-robin and weighted mechanism based on delay measurements. In this method, network resources are not optimally utilized.
Gong et al. [34] proposed a probabilistic binary tree forwarding (PBTF) strategy, which depicts the NDN forwarding process as selecting a path from a probabilistic binary tree and uses an online machine learning method to adjust the tree's weights. Although PBTF successfully uses machine learning to improve request-forwarding efficiency in NDN, it has limitations in acquiring that knowledge. The strategy proposed by Bastos et al. [35] relies on network dynamics for learning: it uses distributed copies to obtain knowledge and find the relationship with the least retrieval time, and then uses the acquired knowledge to send requests to the best available interface.
The method proposed by Carofiglio et al. [36] is based on the PIT (Pending Interest Table), with interfaces weighted according to the number of pending Interests (PI) associated with each incoming face and FIB interface. Every attempt to send an Interest packet increases the score of the corresponding interface, while removing a PIT entry or receiving a Data packet decreases the PI count. This study considered the number of pending Interests and improved the load-balancing and congestion-control parameters.
This method reduces network costs and balances the load; however, it does not examine the network conditions or the current state of the interfaces. In a study conducted by Posch et al. [37], a random adaptive forwarding strategy is proposed, inspired by a self-regulating water pipeline system. In this approach, request packets are distributed and guided intelligently, thus preventing bottlenecks and link failures. When there is too much pressure, congested nodes become active to reduce it and maximize the satisfaction-to-benefit ratio, while an implicit feedback mechanism ensures a reduction in the share of traffic transmitted through congested nodes. A strength of this work is that it considers throughput; however, it only improves network traffic.
In another study, Muralidharan et al. [38] proposed an MDP-IoT model to guide heterogeneous Internet of Things traffic to the best interface based on the type of traffic. Traffic is allocated to three classes in this strategy: event-based traffic with low latency, query-based traffic with medium priority, and periodic traffic with zero delay. The MDP algorithm is used to decrease RTT values and thus fulfill the delay requirements of delay-intolerant applications in the IoT-NDN environment. In the study conducted by Yao et al. [18], sending requests in the NDN network is modeled as a semi-Markov decision problem (SMDP). An adaptive forwarding strategy is then developed based on SMDP theory and the unpredictability of network requests. This strategy can be managed using Q-learning integrated into an artificial neural network. One of its advantages is considering the number of unsatisfied Interests as a criterion. However, the per-link delay, bandwidth, and hop-count criteria were not considered.
In another study by Afanasyev et al. [39], several standard forwarding strategies (i.e., Broadcast, BestRoute, and NCC) are employed in the NDN simulator, where requests are sent to all interfaces and decisions are made based on the number of hops. The NCC strategy sends requests to the interface with the lowest latency. Nonetheless, updating the routing information is a time-consuming and expensive process.
Ren et al. [40] proposed a dynamic multi-path forwarding (DMF) mechanism for information-centric networks that uses the RTT parameter in the initial phase and the available bandwidth in the subsequent phase. The saturation stage of this protocol is unlikely to cope with cases where in-network caches do not contain all the required data pieces and network scenarios are more complex. In a study by Zhang et al. [26], a new adaptive forwarding strategy based on Q-learning, AFSNDN, is proposed for minimizing delivery time. AFSNDN is divided into two stages: exploration and exploitation. The goal of the exploration stage is to collect information, while the exploitation stage aims to send dynamic Interest packets.
Shi et al. [22] proposed a multi-path forwarding strategy called PIBW, taking into account criteria such as pending requests, available bandwidth, and a threshold limit on pending requests. PIBW uses the pending requests and their threshold limit (PI_Th) to control available capacity, and it performs better than baseline solutions in terms of throughput, link bandwidth utilization, and convergence time. In research conducted by Zhang et al. [41], a diversity-based strategy (CPVF) is proposed, which defines the selection of the transmission path as a multi-attribute decision-making (MADM) problem and considers interface status, the number of pending Interests, the pheromone concentration of the Data packet, and RTT delay as decision features. Md et al. [42] proposed a resource-based transfer strategy in which content resources respond to Interest requests in a distributed manner. Content requests are directed to all resources, but resources are likely to respond in a way that minimizes overhead and content retrieval delays; this strategy tends to favor resources that are closer in proximity.
Djama et al. [43] have proposed a learning-based adaptive forwarding strategy called LAFS for IoT environments that reduces resource usage and improves network performance. This strategy is based on a learning process that provides the necessary knowledge and allows network nodes to collaborate intelligently. The scheme fares well in energy efficiency, overhead, and content retrieval.
Akther et al. [44] proposed a reinforcement learning strategy using Thompson sampling that is based on the recipient resource for optimizing the transmission of, and response to, Interests. This method introduces a 'Beam' concept along with scoped flooding to optimize the sending of Interests, and it formulates the Interest-answering strategy of a content source as a Two-Armed Beta-Bernoulli Bandit model, denoted SRAB. da Silva et al. [45] proposed a protocol for managing incoming PIT entries, motivated by the diverse requirements of environments such as VANETs, the importance of the PIT table in NDN, and the fact that PIT overflow prevents new Interest messages from being stored in the PIT.
Abdi et al. [20] proposed a strategy for sending request packets to the best interfaces to improve overall network performance metrics such as network load balancing, average Interest satisfaction ratio, and average throughput. This method uses a Markov Decision Process (MDP) and a Learning Automata (LA) algorithm to rank the interfaces based on past and present information. The ranking for each interface is determined by three parameters drawn from past experience: interface bandwidth, the number of rejected requests from each interface, and waiting time. The proposed strategy (LA-MDPF) is a single-path method in which the router, as the decision-making agent, evaluates previous decisions and responds with punishment or reward, which influences similar decisions in the future depending on whether the choice made leads to a positive or negative result. Initially, all interface weights are considered equal, but in subsequent stages the weight of the chosen interface is prioritized: when a selection leads to locating the desired content, the likelihood of selecting that interface in future attempts is increased. Although this research considered the number of unsatisfied Interests, RTT, and bandwidth, link delay is not considered in the forwarding strategy.
Delvadia et al. [46] introduced a Q-learning-driven forwarding strategy for Interest packets in ICN. It aims to learn from historical events and selects the best mechanism for forwarding Interests. The performance parameters considered are data retrieval delay, server hit rate, network overhead, network throughput, and network load. The key focus of the forwarding mechanism is to select the best next node for forwarding the request packet so that the desired content can be fetched at minimal cost. The proposed approach exploits a modified Q-learning approach in the forwarding of Interest packets (the IPQ-Learning mechanism) as well as Data packets (the DPQ-Learning mechanism); the IPQ-Learning mechanism incurs additional overhead. Table 1 compares the most recent important works.

Problem statement
In NDN networks, forwarding is of special importance. In the forwarding stage, the router must use the information available in the FIB, which contains additional information besides a list of available interfaces. In NDN, an Interest packet can be transmitted through multiple accessible output interfaces, and the forwarding strategy is responsible for selecting the appropriate interface for transmission [41]. For this reason, selecting the appropriate output interface with respect to the current network conditions has a significant impact on network performance. A poor choice can lead to increased delays, high congestion, and reduced operational capacity of the entire network, which means a significant loss of performance [40]. Therefore, the following characteristics should be considered when selecting the appropriate interface:
• The use of influential criteria such as bandwidth, the delay of each link, and the number of hops. Changes in network status must be continuously monitored to prevent a decline in network efficiency, a rise in latency, and a reduction in load balancing.
• The dynamic weighting of criteria, so that criterion weights can change with link status. These variations need to be continuously monitored during network operation to ensure the most current network conditions are taken into account when selecting the output interface.
• The use of a decision-making method that can decide on the output interface while simultaneously considering the target metrics of Interest throughput, satisfaction ratio, packet drop, delivery time, and load balancing, aiming at their concurrent enhancement.
• The network must be scalable, meaning that as the number of nodes increases, the mentioned metrics remain unaffected and network performance does not suffer a significant drop.
Up until now, the methods introduced for choosing the output interface have relied on one or two parameters; bandwidth, link delay, and the number of hops have not been taken into account simultaneously. Assigning equal, constant weights to the parameters cannot be a suitable solution for improving the efficiency of an NDN network, given the changes that occur in the network.
This research proposes a new strategy called FSCN for selecting the output interface in an NDN router by simultaneously considering the important parameters of bandwidth, delay, and the number of hops, using the COPRAS decision-making process along with the dynamic weighting of Shannon entropy. The features of the proposed method are discussed in detail in the following sections.

The proposed FSCN method
In this section, the steps to obtain a relationship that can select the optimal output interface are examined, given the importance of selecting the output interface. As mentioned earlier, bandwidth, delay, and the number of hops are taken into account in selecting the output interface, and they are dynamically weighted. It is possible to change the weight of parameters based on network conditions. After performing the mentioned steps, a relationship has been achieved that can determine the score of output interfaces. All steps including normalization, weighting with Shannon entropy, and scoring using the COPRAS method have been taken into account in this relationship.

Details of the proposed method
Considering the goals of this article, bandwidth, the delay of each link, and the number of hops have been taken into account in order to continuously monitor the network status.

Receive interest packet
The I_pkt field includes the "Name", which may contain the prefix of the D_pkt or its exact name. Upon receiving an I_pkt at any router, if checking the CS and PIT tables finds no match for the desired name, the bandwidth, delay of each link, and number of hops are updated based on changes in network traffic.

CS matching
The NDN router enhances content sharing and minimizes content retrieval time by retaining in the CS table a duplicate of each D_pkt that traverses it; when the table is full, entries are evicted using the Least Recently Used (LRU) cache policy. If an I_pkt reaches a router, the CS is checked first, and if the requested information is available, it is provided to the client. Exact name matching (i.e., character-by-character matching) is used for the CS search.
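The capacity-bounded CS with LRU eviction described above can be sketched as follows. This is a generic illustration of the described policy, with assumed class and method names, not the paper's implementation.

```python
from collections import OrderedDict

class ContentStore:
    """Minimal LRU Content Store sketch: exact-match lookup, bounded capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()   # insertion order tracks recency

    def insert(self, name, data):
        if name in self._store:
            self._store.move_to_end(name)
        self._store[name] = data
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)   # evict least recently used

    def lookup(self, name):
        # Exact (character-by-character) name matching, as in the text.
        if name not in self._store:
            return None
        self._store.move_to_end(name)         # a hit refreshes recency
        return self._store[name]
```

A cache hit moves the entry to the most-recently-used position, so frequently requested content survives eviction longer.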

PIT matching
The PIT generates an entry for every new I_pkt; the entry is kept until the corresponding D_pkt arrives or its timer expires, and it is used to deliver the D_pkt back to the consumer. If the requested content is not in the CS, the PIT is checked, and if the requested information already exists in the pending interest table (PIT), the router waits for the response. Exact name matching is used for the PIT search.

FIB matching
FIB stores the next hops and other information for each destination name prefix and is used to send I_pkt to the producer. At this stage, the interfaces are scored using the COPRAS method, and requests are sent through the interface that has the highest score. FIB search is done using exact name matching.

Mathematical expressions and determining the score for each of the output interfaces through calculation
To calculate a relationship that can determine the score of each interface, a decision matrix must first be created based on bandwidth, delay, and the number of hops. After normalization, the dynamic weighting of each index is applied, and the scoring is done. The symbols utilized in this article are presented in Table 2.  Table 3 displays the decision matrix table for the proposed method.

Creating the decision matrix and normalization of metrics
In the proposed method, since the criteria have varying measurement scales, the metrics must first be normalized to a dimensionless scale. According to Eq. 3 [23], x_ij is each element of the matrix X in Eq. 1, and n_ij is its normalized value.
Based on Eq. 3, the normalization of each of the bandwidth, delay and hop count parameters will be performed using Eqs. 4, 5 and 6.
B_i^norm, T_i^norm, and h_i^norm are the normalized bandwidth, delay, and hop count, respectively, for interface i. Their inputs come from the decision matrix, and their outputs are used as inputs to Eq. 8, the degree-of-variability metric.
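Since the equation bodies are not reproduced here, the following is a reconstruction of Eqs. 3-6 under the assumption of the linear (sum) normalization that is standard with Shannon entropy weighting, where m is the number of candidate output interfaces of the router:

```latex
% Eq. 3: linear (sum) normalization of the decision matrix,
% specialized to the three criteria in Eqs. 4-6.
n_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}, \qquad
B_i^{\mathrm{norm}} = \frac{B_i}{\sum_{i=1}^{m} B_i}, \quad
T_i^{\mathrm{norm}} = \frac{T_i}{\sum_{i=1}^{m} T_i}, \quad
h_i^{\mathrm{norm}} = \frac{h_i}{\sum_{i=1}^{m} h_i}
```

With this normalization each criterion column sums to 1, which makes the entries directly usable in the entropy computation that follows.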

Dynamic weighting of metrics
In the proposed FSCN method, since the weighting of parameters is not constant, the importance coefficient of each parameter is determined by its degree of deviation. This can be expressed using Eqs. 9, 10, and 11 according to Eq. 7 [24]. The degree of deviation used for dynamic weighting is obtained from the deviation of the metrics. Here, d_j is the deviation degree and E_j is the dispersion level of each metric, obtained from Eq. 8.
K is Shannon's entropy constant and n_ij is the normalized metric.
In Eqs. 9, 10, and 11, h_i, T_i, and B_i represent the number of hops, the delay, and the bandwidth of interface i; m is the number of output interfaces of each router; and d_h, d_T, and d_B are the hop-count, delay, and bandwidth deviation degrees, respectively, which are used for the dynamic weighting of the metrics in Eq. 13.
Finally, each parameter is assigned a weight using Eqs. 12 and 13. Since the deviation degree is used to obtain the metric weights, and the delay, bandwidth, and hop-count metrics are updated at each router upon receiving Interest packets and whenever network traffic changes, the weighting is dynamic.
Based on Eqs. 7 and 12, after simplification we arrive at Eq. 13 for weighting each parameter, which is used in Eq. 23 to calculate the score of each interface. W_j represents the weight of each parameter, where j ∈ {h, T, B}, and x_ij is an element of the decision matrix. FSCN uses Eqs. 14, 15, and 16 to determine the weights of the bandwidth, delay, and hop-count criteria.
W_h, W_T, and W_B represent the weights for hop count, delay, and bandwidth (h_i, T_i, and B_i), respectively, for interface i. These equations take the deviation degree into account, so when the dispersion of a criterion changes, its weight is adjusted accordingly; because the degree of deviation enters Eqs. 14, 15, and 16, the weighting is dynamic.
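The entropy, deviation-degree, and weight relations referenced above can be reconstructed in their standard form as follows; this is an assumption based on the usual Shannon entropy weighting formulas, with the constant K = 1/ln(m) so that the entropy lies in [0, 1]:

```latex
% Eq. 8: Shannon entropy of criterion j over the m interfaces;
% Eq. 7: deviation degree; Eqs. 12-16: dynamic weights.
E_j = -\frac{1}{\ln m}\sum_{i=1}^{m} n_{ij}\,\ln n_{ij}, \qquad
d_j = 1 - E_j, \qquad
W_j = \frac{d_j}{\sum_{j \in \{h,\, T,\, B\}} d_j}
```

A criterion whose values are identical across all interfaces has E_j = 1 and thus d_j = 0, so it contributes no weight, which is exactly the adaptivity the text describes.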

Rating each of the interfaces
According to Eqs. 17 and 18 [23], the sets of negative and positive metric values for each option are calculated separately, and then Eq. 23 is applied to calculate the score of each interface.
In the proposed method, bandwidth is considered a positive criterion, while delay and hop count are considered negative criteria. Based on Eqs. 17 and 18, Eqs. 19 and 20 are used to calculate the sums of the positive and negative indicators.
S_i^+ represents the sum of the positive indicators, and S_i^- the sum of the negative indicators. Finally, Eq. 21 [23] is used to select the optimal option in the COPRAS method.
Since a profitability percentage is not needed for ranking, Eq. 21 can be simplified: after normalizing the negative indicators with Eq. 22, where S_i^-NORM denotes the normalized negative indicators, Eq. 21 is converted to Eq. 23.
Q_i represents the score of interface i, and h_i, T_i, and B_i refer to its hop count, delay, and bandwidth, respectively. Finally, the interface with the highest Q_i is selected. The Interest forwarding process is shown in Algorithm 1.
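Since Eqs. 17-23 are not reproduced here, the ranking step can be sketched with the standard COPRAS formulation: a weighted, sum-normalized decision matrix, benefit/cost sums S_i^+ and S_i^-, and the simplified score Q_i. The function name and the exact normalization are this sketch's assumptions; the paper's Eq. 23 additionally folds the dynamic weights into the score.

```python
def copras_scores(matrix, weights, benefit):
    """Score options with standard COPRAS.

    matrix  : rows = interfaces, columns = criteria (e.g. hops, delay, bandwidth)
    weights : one weight per criterion (e.g. from entropy-based weighting)
    benefit : benefit[j] is True for benefit criteria (bandwidth), False for
              cost criteria (delay, hop count)
    """
    m = len(matrix)                   # number of interfaces (options)
    cols = list(zip(*matrix))         # one tuple per criterion
    # Weighted, sum-normalized decision matrix: d_ij = w_j * x_ij / sum_i x_ij
    norm = [[w * x / sum(col) for x in col] for col, w in zip(cols, weights)]
    s_plus = [sum(norm[j][i] for j in range(len(cols)) if benefit[j])
              for i in range(m)]
    s_minus = [sum(norm[j][i] for j in range(len(cols)) if not benefit[j])
               for i in range(m)]
    total_minus = sum(s_minus)
    inv_sum = sum(1.0 / s for s in s_minus)
    # Q_i = S+_i + (sum_k S-_k) / (S-_i * sum_k 1/S-_k)   (standard COPRAS)
    return [sp + total_minus / (sm * inv_sum)
            for sp, sm in zip(s_plus, s_minus)]
```

With three interfaces described by (hops, delay, bandwidth), the interface with the fewest hops, lowest delay, and highest bandwidth receives the highest Q_i and is selected for forwarding.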

A case study
To solve a sample problem, consider a router with three output interfaces as shown in Fig. 4.
If a content request is received by router R, then after checking the CS and PIT, the Interest packet must be sent through one of R's output interfaces. The bandwidth and delay of each link and the number of hops are shown in Table 4 for different scenarios (Fig. 4). Table 5 gives the weight of each criterion and the score of each output interface based on the entries in Table 4.
In example 1, the weight of the delay criterion is higher than that of the other two criteria, owing to the influence of the deviation degrees of Eqs. 9, 10 and 11 on the criterion weights of Eqs. 14-16. To illustrate the deviation degree, the normalization of each criterion is calculated with Eq. 3, and the values for example 1 are presented in Table 6.
Since the values are normalized and there are three output interfaces in this example, the mean of each normalized column in Table 6 is 0.33. Examining the values of T_norm shows that their deviation from this mean is larger than that of the other two parameters; the delay criterion therefore has a higher deviation degree and, as a result, a higher weight. Substituting the values of Table 4 into the derived Eq. 23 yields the score of each interface. Because Eq. 23 already incorporates the deviation degrees, and hence the dynamic weights, the scores do not have to be calculated separately. In example 1, Q_1 has the highest score (Table 5), so the Interest is transmitted from the corresponding interface. The entries of Table 4 are used to analyze different network conditions in examples 2 to 4, and the results are displayed in Table 5. Changes in network conditions result in changes to the criterion weights.
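The 0.33-mean observation can be checked numerically. The interface values below are hypothetical (Tables 4-6 are not reproduced here); the point is that the criterion with the largest spread of normalized values, here the delay, has the largest deviation from the mean 1/3 and therefore gets the largest weight.

```python
# Hypothetical values for three output interfaces of one router.
delay = [5, 50, 20]      # T: widely spread
hops  = [3, 4, 3]        # h: nearly equal
bw    = [100, 90, 110]   # B: nearly equal

def norm(col):
    """Sum-normalize a criterion column (Eq. 3 style)."""
    s = sum(col)
    return [v / s for v in col]

MEAN = 1 / 3  # with three interfaces, every normalized column averages 1/3

def dev(col):
    """Total absolute deviation of the normalized column from the mean."""
    return sum(abs(v - MEAN) for v in norm(col))

print(dev(delay), dev(hops), dev(bw))  # delay deviates most -> largest weight
```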

Simulation results and discussion
This section compares the performance of the proposed strategy with the BR [38], SAF [36], RFA [35], and LA [20] methods using the ndnSIM module on the NS-3 simulation platform.

Simulation parameters and configuration
The ndnSIM simulation environment creates a structure of NDN nodes, including CS, PIT, FIB, and other components. The method's performance is evaluated in an Abilene topology according to Fig. 5, considering factors such as Average Throughput, Average Interest Satisfaction Ratio, Average Packet Drop, Average Delivery Time, and Load Balancing. It is tested under different conditions, including interest packet sending rate, cache size, link bandwidth, and various numbers of nodes, following the scenarios presented in Table 7.
The simulation was run twenty times on 1-20 consumer/producer pairs: simulation j, 1 ≤ j ≤ 20, randomly selected j consumer/producer pairs, and each consumer requested a 4 MB file from its corresponding producer. The clients (consumers) requested content following a Zipf [18] distribution at 2000-4000 Interests/s. Each server was configured with a repository that permanently stored a non-overlapping subset of the system's content catalog. The cache of a router was 1-60% of the content catalog in size.
The catalog comprised 10^4 distinct content items of the same size, each consisting of 100 independent Data packets. The Least Recently Used (LRU) replacement policy was employed to manage the cache contents. Each simulation scenario was run for 300 s. The simulation parameters are listed in Table 8.
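The Zipf-distributed request workload described above can be sketched as follows. The skew exponent alpha is an assumption (the paper does not state it), and the catalog size and request count are taken from the setup above.

```python
import random

def zipf_weights(n, alpha=1.0):
    """Unnormalized Zipf popularity: rank k gets weight 1/k**alpha."""
    return [1.0 / (k ** alpha) for k in range(1, n + 1)]

def draw_requests(n_contents, n_requests, alpha=1.0, seed=42):
    """Draw content ranks (1 = most popular) for consumer Interests."""
    rng = random.Random(seed)
    ranks = list(range(1, n_contents + 1))
    return rng.choices(ranks, weights=zipf_weights(n_contents, alpha),
                       k=n_requests)

# Catalog of 10^4 items; 2000 Interests, matching the lower request rate above.
reqs = draw_requests(10_000, 2000)
```

Under this distribution the most popular item attracts roughly 10% of all requests, which is what makes in-network caching effective in the scenarios that follow.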

Examining the average throughput under various conditions
According to its definition, average throughput is the average number of packets that successfully reach their destination per unit of time [47]. Figure 6a shows the results for Interest rates varying between 2000 and 4000 packets per second. The proposed method achieves a higher average throughput than SAF, and as the rate increases and routers become saturated, the gap widens. This is because the proposed strategy accounts for the conditions of each interface (Sect. 5.2.5.2) and for variable network conditions, resulting in more efficient use of network resources: changes that occur in the network affect the selection of the output interface through Eq. 23. The proposed method uses three parameters simultaneously, namely bandwidth, access delay, and minimum distance, and takes into account not only the status of the relevant interface but also the other factors affecting transmission. By weighting the parameters dynamically and continuously, it uses network resources more efficiently: the parameter weights change with the network so that all conditions are considered. Given that network resources are limited, dynamic parameter weighting improves network performance, and the busier the network becomes, the better the proposed method performs relative to the alternatives. The proposed strategy can therefore make good forwarding decisions under unstable NDN network conditions. However, once the traffic reaches the level at which a router saturates, the throughput curve flattens, because a saturated router has no additional resources for subsequent requests, increasing the likelihood of blocking and performance degradation.

Fig. 6 Average throughput: a Scenarios 1-5; b Scenarios 6-10; c Scenarios 11-13; d Scenarios 14-16; e Scenarios 17-19.
The proposed method's throughput advantage over RFA comes from the fact that, instead of enforcing a uniform load distribution across all available paths, it focuses on the paths that provide better performance while still trying to exploit the capacity of all paths effectively.
As shown in Fig. 6a, the proposed method provides an acceptable throughput under different network traffic levels. As the request rate increases, it achieves better throughput than the other strategies owing to the faster identification of suitable interfaces through Eq. 23 and the continuous measurement of the various network parameters. As expected, the probabilistic strategies obtain a higher average throughput than the BR strategy by selecting better paths based on network conditions, allowing for a more rational use of network resources: they employ routes that are not necessarily the shortest but lead to higher data packet rates. Hence, BR cannot provide the same throughput as the other strategies under the same network conditions. Figure 6b illustrates the impact of cache size on the average throughput of each strategy as the cache size increases from 1 to 60%.
While all strategies predictably perform better with larger caches, it is more important that a strategy performs well under small caches and resource constraints. Figure 6b shows that the proposed method already has a higher average throughput than the other strategies at a cache size of 1%. It outperforms them at all cache sizes, and the average throughput of the network grows considerably as the cache size increases: a larger cache holds more temporary content copies, which helps the proposed strategy make better use of network resources through the parameter weighting of Sect. 5.2.5.2. Figure 6c, d, and e shows the average throughput for scenarios with different numbers of nodes. In all cases, the proposed method performs better than the other algorithms, which can be attributed to its use of better paths.
In the proposed approach, the parameters of bandwidth, delay, and number of hops allow requests to be sent to the best interface according to Eq. 23, significantly improving network performance. The lower throughput of RFA compared to the proposed method stems from its exclusive focus on balancing the load. Figure 6c, d, and e also shows that the proposed method's advantage over SAF grows with the number of nodes and with network congestion. BR has the lowest throughput in all topologies because the probabilistic strategies use different paths, not just the shortest one, and take other network factors and conditions into account. While LA-MDPF, RFA, and SAF all support adaptive forwarding, the proposed method performs better than these strategies thanks to its ability to provide service under unstable and variable network conditions. Table 9 shows the average percentage improvement of the proposed method in average throughput.

Examining the Average Interest Satisfaction ratio under different conditions
According to [15], the ratio of received Data packets to requested packets over all consumers is referred to as the Interest Satisfaction Ratio (ISR). Figure 7a plots the ISR for different traffic loads and shows that the proposed method outperforms the other strategies. This is because the proposed strategy treats bandwidth as a crucial decision-making factor and weights it dynamically based on Eq. 14, so more requests are satisfied. As demonstrated in Fig. 7a, our strategy yields a significantly better ISR than SAF. The average ISR decreases for all methods as the network becomes congested: most Interest packets are answered when the network is not busy, but the packet drop rate increases under congestion, reducing the ISR. Nevertheless, the proposed strategy maintains a higher ISR than the other methods as the traffic load increases, indicating that bandwidth and network resources are used more effectively based on Eq. 9.
The difference between SAF's ISR and the proposed method's at higher packet rates is attributable to the fact that SAF is a multi-path forwarding strategy designed to maximize throughput at each node, whereas the proposed method pursues different objectives. As shown in Fig. 7a, RFA has a lower ISR than LA-MDPF, SAF, and the proposed method, and its performance deteriorates as the traffic load increases; its lack of long-term storage of interface information degrades it under heavy traffic loads. The BR strategy has the worst ISR, since it always uses the smallest number of hops from consumers to servers, causing congestion through the overuse of a single path. Figure 7b shows the average ISR as a function of cache size. The proposed method has the highest ISR, which gradually increases with the cache size; even at small cache sizes, where all strategies lose performance, it retains a higher ISR than the other strategies. Increasing the bandwidth improves the performance of all algorithms, but the proposed method still leads thanks to its better path selection and its consideration of several parameters when selecting the interface based on Eq. 23. SAF has a lower ISR than the proposed method in all scenarios. RFA has a relatively low cache hit rate and ISR because of its load-balancing mechanism, which distributes requests evenly across all paths. For BR, the emphasis on the shortest paths from consumers to content providers concentrates caching on those paths, resulting in a lower ISR than the other methods. Table 10 shows the average percentage improvement of the proposed method in the average Interest Satisfaction Ratio.

Fig. 7 Average interest satisfaction ratio per node: a Scenarios 1-5; b Scenarios 6-10; c Scenarios 11-13; d Scenarios 14-16; e Scenarios 17-19.

Investigating the average packet drop under different conditions
The term "Average Packet Drop" refers to the average number of packets dropped across the network during a specific period [48]. According to Fig. 8a, the proposed method has the lowest drop rate of all tested strategies. This advantage comes from the dynamic weighting of criteria (Sect. 5.2.5.2), which keeps the drop rate low as network conditions change. The primary factor distinguishing the proposed method from the other strategies is the number and type of criteria employed to select the interface with the lowest drop rate: SAF, RFA, and BR rely on a single criterion, whereas the proposed method takes three into account, namely bandwidth, hop count, and delay. By considering the available bandwidth, delay, and number of hops through Eqs. 9, 10 and 11, the suggested approach minimizes congestion resulting from forwarding decisions, optimizes the use of the in-network cache, and improves network performance under high traffic loads.
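The evaluation metrics compared in this section can be computed from simple per-run counters. The sketch below is illustrative; the RunStats fields are hypothetical names, not ndnSIM APIs, and the simulator would supply the actual counts.

```python
from dataclasses import dataclass

@dataclass
class RunStats:
    """Per-run counters collected from the simulator (field names illustrative)."""
    interests_sent: int       # Interests issued by all consumers
    data_received: int        # Data packets that reached their consumers
    packets_dropped: int      # packets dropped anywhere in the network
    total_delivery_s: float   # summed delivery time of satisfied Interests
    duration_s: float         # simulated time of the run

def throughput(s: RunStats) -> float:
    """Average packets successfully delivered per second."""
    return s.data_received / s.duration_s

def satisfaction_ratio(s: RunStats) -> float:
    """ISR: Data packets received over Interests sent."""
    return s.data_received / s.interests_sent if s.interests_sent else 0.0

def drop_rate(s: RunStats) -> float:
    """Average packets dropped per second over the run."""
    return s.packets_dropped / s.duration_s

def avg_delivery_time(s: RunStats) -> float:
    """Mean time for requested data to reach its destination."""
    return s.total_delivery_s / s.data_received if s.data_received else 0.0
```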
The average packet drop rate for scenarios with different numbers of nodes is presented in Fig. 8c, d, and e; the proposed method exhibits a lower packet drop rate in all three cases thanks to its selection of appropriate interfaces based on Eq. 23. Among the tested algorithms, BR has the highest packet drop rate and the least ability to determine the best interfaces. Table 11 provides the average percentage improvement of the proposed method in average packet drop.

Investigating the average delivery under different conditions
The term "average delivery time" as defined by [26] is the average duration for the desired data to reach its intended destination. Figure 9a plots the average delivery time against the traffic load. The BR strategy transmits new content requests via the shortest route; as seen in the figure, its delivery time nonetheless increases over time, because network congestion gradually escalates while it keeps sending requests along the same path. In contrast, the proposed method outperforms the other strategies across the traffic load levels by considering the number of hops and dynamically weighting the interface selection according to Eq. 16. While RFA operates on the number of pending requests, the proposed method selects interfaces based on their available bandwidth, number of hops, and delay through Eq. 23. The use of these features enables the proposed strategy to evaluate interfaces more accurately and comprehensively, resulting in higher throughput and lower delay; it also makes better use of the potential benefits of in-network caching. SAF, on the other hand, incurs a higher delay than the proposed method, mainly because it uses alternative paths to increase throughput and maximize request satisfaction. Figure 9b varies the cache size from 1 to 60%. In-network caching can shorten the distance between clients and content copies, decreasing the average delay, and a larger cache allows the probabilistic strategies, in particular the proposed one, to perform better.

Fig. 9 Average delivery time: a Scenarios 1-5; b Scenarios 6-10; c Scenarios 11-13; d Scenarios 14-16; e Scenarios 17-19.
Figure 9c, d, and e demonstrates the proposed algorithm's shorter delivery time compared with the other evaluated strategies in networks with varying numbers of nodes. The proposed approach is dynamic and considers the delay metric of each link according to Eq. 15, allowing it to retrieve a large portion of the requested content from intermediate router caches. The proposed method distributes the traffic load evenly among all available interfaces and boosts the rate of received packets, even for packets that travel long distances to reach the repository. It is also worth noting that the tested probabilistic strategies show shorter delays than the BR strategy, as they exploit the rich connectivity of NDN instead of relying on a single path. The average percentage improvement of the proposed method in average delivery time is presented in Table 12.

Investigating load balancing under different conditions
Load balancing refers to the average load ratio during a specified period relative to the total bandwidth capacity of the router, as defined in [20]. The load balancing factors obtained for the compared forwarding strategies are displayed in Fig. 10a. BR performs worst in this respect, since it prioritizes the shortest path from consumer to content provider. RFA, on the other hand, performs best: its mechanism based on the number of pending Interests ensures a uniform distribution of Interests across all paths present in the FIB. Equations 9, 10 and 11 enable the proposed method to balance the load by favoring the paths with better capacity while still exploiting the capacity of all links when sending Interest packets; in contrast to RFA, it leaves no link capacity unused. The proposed method also retains the benefits of in-network caching, even with a limited cache size, and maintains high performance.
The distribution of requests over the network links under the five tested forwarding strategies is shown in Fig. 10c, d, and e. The proposed strategy considers the status of all interfaces and dynamically maintains a balanced distribution of requests over the links (Sect. 5.2.5.2) while increasing the packet retrieval rate. The load balancing factors of LA-MDPF, SAF, and BR differ from that of the proposed strategy, which balances requests effectively while improving the packet retrieval rate. The results indicate that, as the number of nodes and the traffic load increase, the proposed strategy utilizes network resources, balances the load, and manages congestion more effectively than the other methods, because it can find the best interface using Eq. 23. BR, by contrast, cannot identify the optimal path and keeps using a suboptimal one, resulting in worse load balancing than the other strategies.

Fig. 10 Load balancing: a Scenarios 1-5; b Scenarios 6-10; c Scenarios 11-13; d Scenarios 14-16; e Scenarios 17-19.
SAF has a lower load balancing coefficient than the other methods because it distributes requests among heterogeneous interfaces. In contrast, the proposed method not only discovers potential paths but also distributes requests among interfaces more rationally, resulting in better load balancing. This is supported by the average percentage improvement of the proposed method in load balancing, shown in Table 13.

Conclusion and future works
The forwarding strategy is a crucial component of NDN networks: it significantly affects the network's overall performance by reducing delay and optimizing bandwidth utilization. Previous research has shown that the selection of the output interface plays a major role in improving performance metrics such as Interest throughput, satisfaction ratio, packet drop, delivery time, and load balancing. This article evaluated the effectiveness of FSCN in finding the most suitable output interface in NDN networks. The approach dynamically weights influential factors, namely bandwidth, link delay, and the number of hops, and chooses a suitable output interface with the COPRAS method, thereby improving Interest throughput, satisfaction ratio, packet drop, delivery time, and load balancing simultaneously. It also continuously monitors changes in network conditions and adjusts the criterion weights with the Shannon entropy method when necessary. Simulation results in ndnSIM demonstrate enhancements in these key parameters in comparison with methods such as BR [39], SAF [37], RFA [36], and LA [20] under different circumstances. In future work, we plan to design and investigate cache placement and replacement algorithms using machine learning and metaheuristic techniques. Moreover, a novel caching strategy will also be integrated with the proposed method (FSCN) to further increase the forwarding strategy's performance.
Author contributions MS contributed to method and literature review, writing the manuscript, collecting data, simulation, reviewing the manuscript, and data and result analysis. BB (Corresponding Author) contributed to literature review, writing the manuscript, collecting data, simulation, reviewing the manuscript, English writing, and data and result analysis. FH contributed to method and literature review, and reviewing the manuscript. ZB contributed to method and literature review, and reviewing the manuscript.