A joint optimization scheme of task caching and offloading for smart factories

In the smart factory application scenario, the computing tasks of smart devices are intensive and delay sensitive, so the devices need to offload their tasks to edge servers for execution. However, the computing resources of edge servers are limited, and workload imbalances tend to arise among multiple edge servers. To solve these problems, we propose a joint optimization scheme of task caching and offloading based on task scenario awareness, which decouples the joint optimization problem into two subproblems: task caching and task offloading. These two subproblems are solved using a task-scenario-aware bilateral auction algorithm, which enables an edge server to adapt its task caching strategy to the task scenarios of the smart devices. Experimental results show that the proposed scheme reduces the task execution time by up to 21.5% compared with existing schemes and achieves better load balancing.


Introduction
Smart factory technology has undergone significant development in the past few years. Via collaboration between smart robots and 5G communication technology, the production efficiency of smart factories has been greatly improved [1]. Recently, scholars have proposed the idea of the 'smart factory of the future.' To this end, industrial smart devices (ISDs) in factories need to be equipped with numerous sensors and controllers to enable precision instrument control, automated production, and emergency response [2]. Thus, the computing equipment in factories must have the ability to process large volumes of real-time data and to coordinate various complex network hardware systems [3].
ISDs usually cannot perform computationally intensive tasks themselves due to their limited computing resources. Therefore, scholars have attempted to introduce cloud computing into the intelligent industrial internet. However, the traditional cloud computing model has certain shortcomings, such as long data transmission distances and unstable network delays. As a result, it is difficult for traditional technologies to provide solutions for automated production in smart factories [4]. To overcome the shortcomings of cloud computing, Cisco has proposed a three-layer fog computing architecture for industrial control systems. This structure extends computing power from the data center to the edge of the network [5]. 3GPP has specified the key technical points of automated factories based on 5G networks. For example, the automation of control processes requires smart factories to have many sensors and monitors. Closed-loop control applications are used for connection, and the end-to-end (E2E) delay requirement for precise control is 1 ms [6]. The Clear5G project initiated by the European Union is aimed at solving technical difficulties in future factory environments and future wireless networks. This project has clarified that various applications in smart factories impose high requirements on communication technology (in terms of factors such as network delay, reliability, and large-scale connectivity). These requirements cannot be satisfied by any known wired or wireless network communication technology [7]. The concept of the smart machine box (SMB) has been proposed for mechanical engineering in smart factories. An SMB can be connected to a machine as an edge device to provide a variety of smart functions, such as machine configuration, data collection, fault detection and device communication.
Nevertheless, SMBs may encounter situations in which tasks cannot be completed within the time constraints due to their own limited computing resources. For example, suppose that an SMB is executing a set of smart functions and is suddenly requested to execute a task with a high computational load. At this time, the SMB is unable to execute the newly requested task because its computing resources are already occupied by the smart functions it is currently executing. If the SMB were to perform the newly requested task, the smart functions being executed would be affected [8]. To address such situations, developing suitable technology for the dynamic allocation of computing resources and the offloading of tasks in edge networks is a critical and challenging problem for smart factories.

Related work
For the task offloading problem in the smart factory environment, researchers can learn from the computing offloading methods used in the Internet of Vehicles and in general mobile computing. However, the smart factory environment differs somewhat from these two cases. In a smart factory environment, fewer types of tasks are offloaded for computing, and the smart devices move slowly. At the same time, the movement trajectories of the smart devices are uncertain, the reliability of the algorithms must be high, and the devices are sensitive to energy consumption and security concerns. Therefore, new methods for task offloading must be introduced to address these challenges.
At present, researchers have proposed many schemes for task offloading in edge networks. Early research focused on task offloading for a single device. In [9], a centerless three-layer network architecture was designed in which mobile users can offload tasks to edge servers or cloud servers without cooperation. In [10], a task offloading scheduling and power allocation method for a single-user mobile edge computing (MEC) system was proposed. The method ensures a reasonable allocation of equipment transmission power and minimizes a weighted sum of the task execution delay and energy consumption.
However, the computing resources available in practical applications often cannot meet the service requirements of all devices. Therefore, the latest research has focused on methods of task offloading to multiple edge servers acting in collaboration. In [11], the problem of computing resource management for large-scale mobile terminals was investigated. Peer-to-peer entity perception technology and computing offloading technology were introduced into 5 G networks. In [12], the joint optimization problem of terminal equipment offloading decision-making and network resource allocation was analyzed, and the offloading efficiency for delay-sensitive tasks was effectively improved. In [13], a multihop computing offloading method for edge networks was proposed. Joint multitask computing offloading and network traffic scheduling algorithms were designed to minimize the average execution time of tasks. In [14], a two-layer MEC system was proposed. The researchers designed a distributed collaborative caching and offloading algorithm that can minimize the network cost on the user side while satisfying the deadline constraints of the offloaded tasks. In [15], a model describing workload offloading for mobile users was proposed that uses the Lyapunov optimization framework to balance task offloading efficiency with queue backlogs.
In addition, in [16-18], the use of deep learning methods was proposed to solve the problem of offloading in edge computing. The proposed approaches are designed to meet the personalized needs of users while increasing the long-term benefits of all users (in terms of considerations such as computing delay and terminal energy consumption). In [19-24], game theory was used to solve the problem of offloading distributed tasks. While reducing the energy consumption of the terminal equipment, these methods also improve the efficiency of distributed task offloading. As seen from the above analysis of the existing research, the majority of current studies optimize only task offloading in edge networks. These studies do not consider optimization strategies for the other stages of the process from task generation to the end of task computation.
In this paper, we propose a joint optimization algorithm for ISD task caching and offloading based on task scenario awareness. Our contributions are as follows:
• The problem of ISD task caching and offloading in the smart factory scenario is modeled as a nonlinear programming problem. A scheme for the joint optimization of task caching and offloading based on task scenario awareness (JOCOA) is designed to solve this problem. Through the joint optimization of task caching and task offloading, the goal of minimizing the task execution delay is achieved.
• The original problem is decoupled into two subproblems: the task caching subproblem and the task offloading subproblem. We propose a task-scenario-aware bilateral auction task caching algorithm. The auction transactions are dynamically adjusted in accordance with the classification results of task scenario awareness, thereby dynamically updating the task caching strategy.
• For the task offloading subproblem, we perform task offloading on the basis of the results of the task caching algorithm, the average execution delay of similar tasks in the task scenario, and a device cooperation preference value to reduce the task execution delay.

System model
The main notations used in this paper are shown in Table 1.

Smart factory network model
As shown in Fig. 1, it is assumed that there are M ISDs in the future smart factory network under consideration. Each ISD is equipped with one or more cameras to collect real-time image information. The ISDs are also equipped with a variety of sensors to enable them to respond appropriately to changes in the environment. The raw video or sensor data require further computation and processing before they can be used [25]. It is assumed that, due to hardware cost and volume constraints, it is impossible to equip each ISD with computing and data storage modules. Therefore, their computation tasks need to be offloaded to other nodes in the network for execution. The network contains multiple 5G New Radio small cells (5G NR SCs). Each 5G NR SC is equipped with an MEC server to provide computing and storage capabilities for local devices. This server is connected to the core network and to remote cloud servers to provide stronger computing capabilities. Edge n represents the n-th 5G NR SC, where n ∈ N, N = {1, 2, ⋯, N}, and m represents the m-th ISD, where m ∈ M, M = {1, 2, ⋯, M}. In the smart factory scenario, the edge computing devices are densely deployed; consequently, the signal coverage areas of the 5G NR SCs overlap. An ISD located in an area of overlapping coverage associates with the edge server offering the best channel conditions and the most sufficient computing resources. The set of ISDs associated with edge n is M_n; accordingly, if ISD m is associated with edge n, this is expressed as m ∈ M_n. In this network scenario, S types of task caching methods are available, where s ∈ S, S = {1, 2, ⋯, S}.
As shown in Fig. 2, we divide time into consecutive time slots, and the task caching policy is updated at the beginning of each time slot. When an ISD moves, it offloads tasks to the nearest edge server. If this edge server is running low on computational resources, it forwards the tasks to a collaborating edge server. At the beginning of a given time slot, ISD m generates a task I_m = (b, c, d, s), where b represents the data volume of task I_m, c represents the computing resource requirement of task I_m, d represents the execution delay limit of task I_m, and s represents the task caching strategy for task I_m.
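As a concrete illustration, the task tuple can be sketched as a small data structure. The field names below (data volume, computing requirement, delay limit, caching strategy index) follow the description above; the concrete values are invented for the example, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Task I_m generated by an ISD at the beginning of a time slot."""
    data_mb: float       # data volume of the task
    cycles: float        # computing resource requirement (e.g., CPU cycles)
    deadline_s: float    # execution delay limit d
    cache_strategy: int  # index s of the associated task caching strategy

# A delay-sensitive task: 5 MB of sensor data, 50 ms deadline
task = Task(data_mb=5.0, cycles=2e9, deadline_s=0.05, cache_strategy=1)
print(task.deadline_s < 0.1)  # True: well under a 100 ms budget
```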

Communication model
We design the ISDs to communicate with the edge devices using orthogonal frequency-division multiple access (OFDMA), meaning that each 5G NR SC allocates an orthogonal channel to each directly associated ISD. Therefore, no channel interference arises in communication with multiple ISDs [26]. According to Shannon's formula, the data transmission rate between ISD m and edge n is

R_mn = (B_n / |M_n|) log2(1 + P_m h_mn / σ²),

where B_n denotes the channel bandwidth of edge n, |M_n| denotes the number of associated ISDs within the signal coverage area of edge n, h_mn denotes the channel gain between ISD m and edge n, P_m denotes the signal transmit power of ISD m, and σ² denotes the noise power. In addition, the transmission delay of ISD m offloading task I_m to edge n is

t^tran_mn = b / R_mn.
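A minimal sketch of this computation, assuming the transmit and noise powers are given in watts, the bandwidth in Hz, and the data volume in bits; the numeric values are illustrative only:

```python
import math

def tx_rate_bps(B_n, num_isds, h_mn, P_m, noise_w):
    """Shannon rate for ISD m -> edge n with equal bandwidth split
    among the |M_n| directly associated ISDs."""
    snr = P_m * h_mn / noise_w
    return (B_n / num_isds) * math.log2(1 + snr)

def upload_delay_s(data_bits, rate_bps):
    """Transmission delay of offloading a task of `data_bits` bits."""
    return data_bits / rate_bps

# 20 MHz cell shared by 4 ISDs -> 5 MHz each; SNR = 0.2 * 1e-7 / 1e-13 = 2e5
rate = tx_rate_bps(B_n=20e6, num_isds=4, h_mn=1e-7, P_m=0.2, noise_w=1e-13)
delay = upload_delay_s(8e6, rate)  # a 1 MB (8e6-bit) task
print(round(rate / 1e6, 1), round(delay, 3))  # roughly 88 Mbit/s, ~0.09 s
```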

Fig. 2 Task offloading during ISD movement
It is assumed that the 5G communication transmission rate between edge n and edge y has a fixed value, denoted by R_ny. When edge n forwards task I_m to edge y, the transmission delay between the two 5G NR SCs is

t^trs_ny = b / R_ny.

Task caching model
Different from the multiuser scenario in general edge computing, in the smart factory environment, the types of ISD tasks are limited. ISDs of the same kind are likely to repeatedly perform the same task within a short period of time. In this case, task caching technology can be employed to improve the service efficiency of the edge computing devices [27]. At the beginning of slot t, the set of task caching strategies of all edge servers is denoted by C^t = {C^t_n | n ∈ N}, where C^t_n = {c^t_n,s | s ∈ S} represents the caching strategy of edge n in slot t. The task caching strategy of edge n is limited by the maximum caching resource capacity of the device, as expressed by the following constraint condition:

Σ_{s∈S} c^t_n,s b_s ≤ K_n,

where b_s denotes the caching resources occupied by the tasks of strategy s and K_n denotes the maximum caching resource capacity of edge n.
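The capacity constraint can be sketched as a simple feasibility check; the 0/1 strategy vector and the sizes here are invented for illustration:

```python
def cache_feasible(strategy, sizes_mb, capacity_mb):
    """Check the caching-capacity constraint: the total size of the tasks
    cached under the 0/1 strategy vector must not exceed the edge's
    caching capacity K_n."""
    used = sum(size for cached, size in zip(strategy, sizes_mb) if cached)
    return used <= capacity_mb

# The edge caches strategies 0 and 2 (10 + 15 = 25 MB <= 30 MB capacity)
print(cache_feasible([1, 0, 1], [10, 20, 15], capacity_mb=30))  # True
# Caching all three strategies (45 MB) violates the constraint
print(cache_feasible([1, 1, 1], [10, 20, 15], capacity_mb=30))  # False
```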

Task offloading model
An ISD considers offloading tasks to the nearest edge server with sufficient computing resources. If the computing resources of the edge server are insufficient, the tasks will be sent to a collaborative edge server or a remote cloud server.
At the beginning of slot t, the set of computing resource allocation strategies of the 5G NR SCs is denoted by F^t = {f^t_mn | n ∈ N, I_m ∈ I^exe_n}, where f^t_mn represents the amount of resources allocated to task I_m in slot t and I^exe_n represents the set of tasks performed at edge n. The constraint condition that edge n must satisfy when allocating computing resources to task set I^exe_n is

Σ_{I_m ∈ I^exe_n} f^t_mn ≤ f^max_n.

At the beginning of slot t, the offloading location selection strategy for the tasks is also determined; a_{n,I_m} = 1 means that edge n forwards task I_m to edge y or the remote cloud server. The constraint condition on the offloading location of task I_m is that each task is assigned to exactly one execution location. If ISD m offloads task I_m to its directly associated edge n at the beginning of slot t, then the task offloading delay is divided into three components: (1) the link delay of ISD m when uploading data, (2) the computation delay of executing the task at edge n, and (3) the link delay of transmitting the result back to ISD m after the computation is completed. Therefore, the delay required for offloading a task from an ISD to the directly associated edge server is

t_mn = t^tran.up_mn + t^comp_mn + t^tran.down_mn,

where t^tran.up_mn and t^tran.down_mn are the uplink and downlink transmission delays, respectively, of ISD m offloading task I_m to edge n, which can be calculated in accordance with Eq. (2).
The edge server directly associated with ISD m may have insufficient computing resources. Therefore, using multiple edge servers to collaboratively perform tasks may allow the computing resources of the edge servers to be more fully utilized. Compared with offloading to the directly associated edge server, the process of offloading to a cooperative edge server must additionally account for the forwarding transmission delay between edge n and edge y. The delay of forwarding task I_m of ISD m from edge n to edge y is

t_mny = t^tran.up_mn + t^trs_ny + t^comp_ym + t^tran.down_mn.

The powerful computing capabilities of the remote cloud server provide an emergency guarantee for ISD task offloading. It is assumed that the computing resources of the remote cloud server are sufficient and that its computing delay can therefore be disregarded. Accordingly, the delay of offloading a task to the remote cloud server mainly comprises the link delay of the ISD uploading data to the directly associated edge n and the link delay of edge n forwarding task I_m to the remote cloud server. Since the calculation time of the cloud server can be ignored, we calculate only the upload and download times when offloading a task to the cloud server.
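The three offloading paths described above differ only in which delay components they accumulate. A minimal sketch comparing them (all delay values are invented for the example):

```python
def direct_delay(up, comp, down):
    """Offload to the directly associated edge: uplink + compute + downlink."""
    return up + comp + down

def collab_delay(up, fwd, comp, down):
    """Offload via edge n to collaborative edge y: adds the forwarding delay."""
    return up + fwd + comp + down

def cloud_delay(up, fwd_cloud, down):
    """Cloud offloading: compute time disregarded (abundant cloud resources)."""
    return up + fwd_cloud + down

options = {
    "direct": direct_delay(0.02, 0.10, 0.01),   # busy local edge: 0.13 s
    "collab": collab_delay(0.02, 0.01, 0.05, 0.01),  # lighter edge y: 0.09 s
    "cloud":  cloud_delay(0.02, 0.15, 0.01),    # long backhaul: 0.18 s
}
best = min(options, key=options.get)
print(best)  # collab
```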

Problem formulation
ISD m offloads task I_m to the directly associated edge server edge n, the collaborative edge server edge y, or the remote cloud server; the total delay of ISD m when executing task I_m is the delay of the selected offloading path. The optimization target is to minimize the overall delay of all ISDs when executing their tasks. Accordingly, the optimization problem is modeled as follows:

P1: min_{C, F, W} Σ_{m∈M} T_{I_m},

subject to the caching capacity, computing resource, and offloading location constraints introduced above.

Joint optimization scheme of task caching and offloading

Bilateral auction algorithm for task caching based on task scenario awareness

Task scenario information space
At the beginning of slot t, edge n collects the task scenario information of its directly associated ISDs. x^t_m,n represents the task scenario information of ISD m directly associated with edge n, with x^t_m,n ∈ [0, 1]^Q; that is, each dimension of the task scenario information takes values in [0, 1], and Q denotes the dimensionality of the task scenario information.
As shown in Fig. 3, edge n collects the task scenario information of its ISDs and, for each ISD, either identifies the task scenario information subspace corresponding to the ISD or sends the task scenario information to collaborative edge server y. The input data, such as the computing resource requirements, caching resource requirements and task delay limit, are normalized before being input into the model. The task scenario information space is a Q-dimensional unit cube and is divided into h^Q subspaces, where h denotes the number of divisions along each dimension. The task scenario information subspace in which ISD m is located is denoted by p ∈ P, where P is the overall task scenario information space. The task scenarios and running statuses of multiple ISDs are similar when they are in the same task scenario information subspace p.
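The subspace lookup amounts to discretizing each normalized feature into one of h bins and combining the bin indices. A minimal sketch under that interpretation (the indexing scheme is ours, chosen for illustration):

```python
def subspace_index(x, h):
    """Map a normalized task-scenario vector x in [0,1]^Q to one of the
    h^Q cells of the task scenario information space."""
    idx = 0
    for value in x:
        cell = min(int(value * h), h - 1)  # clamp value == 1.0 into the top bin
        idx = idx * h + cell
    return idx

# Two ISDs with similar scenarios land in the same subspace (h = 4, Q = 3):
print(subspace_index([0.10, 0.55, 0.90], h=4))  # cells (0, 2, 3) -> 11
print(subspace_index([0.12, 0.60, 0.95], h=4))  # same cells -> 11
```

ISDs that fall into the same cell are treated as having similar task scenarios, which is what allows the caching strategy to be shared among them.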

Formulation of the task caching problem
The benefit of edge n caching task I_m is related to the task execution delay and the task delay limit. d^t.lim_Im is the delay limit of task I_m of ISD m, i.e., the maximum delay that the ISD can accept for task I_m. The lower the task execution delay T is, the greater the gap with respect to the delay limit d^t.lim_Im, and the greater the benefit of edge n caching the task. Accordingly, when task caching strategy s is employed, the revenue r_s(x^t_n,m) generated by caching task I_m is defined as

r_s(x^t_n,m) = d^t.lim_Im − T(x^t_n,m), I_m ∈ s_m,

where s_m is a task caching strategy that includes task I_m and T(x^t_n,m) represents the task execution delay of ISD m. The total system revenue is the sum of the task revenues generated by all ISDs. Therefore, the problem of maximizing the total system revenue can be formulated as

P2: max_C Σ_{m∈M} r_s(x^t_n,m).

Task caching strategy and algorithm description
This section introduces the task caching algorithm, which includes four stages: the stage of determining the set of tasks to participate in the auction, the stage of determining the ISD type, the ISD bidding stage and the edge server auction stage. The algorithm flowchart is shown in Fig. 4.
(1) Determining the set of tasks to participate in the auction. When edge n is in task scenario information subspace p in slot t, the update for task caching strategy s is expressed as

L^n_p,s(t) = L^n_p,s(t − 1) + 1,    (14)

where L^n_p,s(t) represents the number of times that edge n has selected task caching strategy s. The number of auction rounds in which edge n has participated in slot t is denoted by D(t). If L^n_p,s(t) > D(t), the auction phase has not yet been completed for this task caching strategy. F^ua.num_n(t) is the set of task caching strategies for which the auction phase is unfinished. This set, given in Eq. (15), is the union over the associated ISDs of the subsets F^ua.num_n,m,p(t), where F^ua.num_n,m,p(t) represents the set of task caching strategies for which the auction process is unfinished for ISD m in subspace p.
(2) Determining the ISD type. Before the task auction, it is necessary to obtain the task delay limit and computing resource requirements in accordance with the task scenario of the ISD to evaluate its type. At the beginning of slot t, ISD m in task scenario subspace p generates task I_m, whose computing resource requirement is denoted by c, and the number of task requests is L^n.req_p,s. At this time, edge n updates the computing resource requirement of task caching strategy s accordingly. In accordance with historical experience, the execution delay of task I_m is limited in advance, and the delay limit is d^t.lim. At the same time, edge n collects all task scenario information of its directly associated ISDs. Then, the system estimates the task execution delay d^t.est of the ISD using the delay prediction method proposed in [28]. Edge n then obtains the average execution delay d^t.ave of the tasks in task scenario subspace p in accordance with its computing power (Eq. (17)). If d^t.est > d^t.lim, then the device is a high-priority ISD. If d^t.ave ≤ d^t.est ≤ d^t.lim, then the device is a medium-priority ISD; this type of device can perform task caching when the task caching resources are sufficient. If d^t.est < d^t.ave, then the device is a low-priority ISD; its task execution delay already meets the computing requirements of the task scenario, so its task caching priority is low.
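The three-way classification can be sketched directly from the threshold comparisons above; the interpretation of the middle band (estimated delay between the subspace average and the limit) follows the reading given in the text, and the values are illustrative:

```python
def isd_priority(d_est, d_lim, d_ave):
    """Classify an ISD by comparing its estimated task execution delay
    with the task delay limit and the subspace-average delay."""
    if d_est > d_lim:
        return "high"    # estimated delay exceeds the limit: cache urgently
    if d_est < d_ave:
        return "low"     # already faster than the scenario average
    return "medium"      # meets the limit; cache when resources allow

print(isd_priority(d_est=0.12, d_lim=0.10, d_ave=0.08))  # high
print(isd_priority(d_est=0.09, d_lim=0.10, d_ave=0.08))  # medium
print(isd_priority(d_est=0.05, d_lim=0.10, d_ave=0.08))  # low
```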
(3) ISD bidding stage. r^m_p,s represents the revenue value of task caching strategy s when ISD m is in task scenario p. ISD m can update its revenue value r^m_p,s by combining the estimated task revenue r_m with the number of executions L^m.exe_p,s of the task; in slot t, the updated revenue r^m_p,s obtained by ISD m in task scenario p under task caching strategy s is a running average over these executions. The total revenue r^n.sum_p,s of all cached tasks of ISD m in task scenario p is the sum of these per-task revenues. The bid of an ISD in a task auction is related to the number of auction rounds, the total task revenue, and the task caching priority. The auction bidding function of ISD m combines these terms, where D^sum represents the upper limit on the number of auction rounds and D(t) represents the current number of auction rounds. A parameter in [0, 1] is used to adjust the bid difference between ISDs of the same priority, and N^m_p,s represents the task caching priorities of the three types of ISDs. To save time in the auction process, it is not necessary for every ISD to participate in the auction. In accordance with the number of tasks with unfinished auctions in F^ua.num_n(t) and the task caching capacity K_n, the number of ISDs participating in the auction can be dynamically adjusted. If |F^ua.num_n(t)| ≥ K_n, then edge n selects K_n ISDs to participate in the auction. If |F^ua.num_n(t)| < K_n, then the |F^ua.num_n(t)| ISDs with the highest total revenue are selected to skip the auction and directly cache their tasks.
(4) Edge server auction stage. In addition to task caching for directly associated ISDs, edge n may also perform task caching for cooperative edge servers. Therefore, the auction price of edge n should take into account the task caching requirements of each collaborative edge server y through a collaboration preference value v^y_n,p. The collaboration preference value v^y_n,p is related to the number of collaborations L^n,y_p between the two edge servers and the number of collaborative devices N_y from which edge y can select.

The larger the number of historical collaborations between edge n and edge y is, and the fewer collaborative edge servers there are among which edge y can choose, the higher the collaboration preference value v^y_n,p of edge n for edge y. In the resulting revenue expressions, r^n.lo_p,s represents the estimated revenue of edge n in task scenario p for its directly associated ISDs, and r^ny.co_p,s represents the estimated revenue of edge n as a collaborative device in task scenario p. The total revenue of edge n under caching strategy s in task scenario p is the sum of these two terms, and the task auction bidding function of edge n is designed accordingly. The steps of the task caching algorithm are shown in Algorithm 1. First, each edge n obtains the task scenario information from its ISDs. The set of tasks with unfinished auctions is calculated according to Eq. (15), the task execution delay and the delay limit are calculated according to Eq. (17), and the device type is determined by comparing d^t.est with d^t.lim and d^t.ave. Then, when F^ua_n(t) ≠ ∅, the collaboration preference value of edge n and the total revenue are calculated according to Eqs. (22) and (23). Finally, the number of ISDs entering the auction queue is determined based on the relationship between |F^ua.num_n| and K_n. In each round of the auction, the task caching strategy is determined based on the quoted revenue. Tasks whose auctions are not completed enter the set F^ua_n to wait for the next round.
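A single auction round can be sketched as follows. This is a simplified model of the stage described above: capacity is counted in whole tasks rather than caching resources, and the bid values are invented for the example.

```python
def auction_round(bids, capacity):
    """One round of the bilateral task-caching auction: the edge accepts
    the highest bids until its caching capacity K_n is exhausted; the
    remaining ISDs re-enter the unfinished-auction set for the next round."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    cached, unfinished = [], []
    for isd, bid in ranked:
        if capacity > 0:
            cached.append(isd)
            capacity -= 1
        else:
            unfinished.append(isd)
    return cached, unfinished

bids = {"isd1": 0.8, "isd2": 0.5, "isd3": 0.9, "isd4": 0.3}
cached, waiting = auction_round(bids, capacity=2)
print(cached)   # ['isd3', 'isd1']
print(waiting)  # ['isd2', 'isd4']
```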

Formulation of the task offloading problem
On the basis of task scenario awareness and ISD classification, a node selection strategy W for task offloading can be designed, and the delay for each ISD to upload data to the edge server can be determined. Thus, the problem of finding the task offloading node selection strategy W can be transformed into P3, which minimizes the total task execution delay under the caching strategy obtained above.

Task offloading node selection strategy
Task I_m generated by ISD m can be offloaded to the directly associated edge server n, a collaborative edge server y, or a remote cloud server. If task I_m hits the cache of edge n, it is added to the local cache-hit task set I^hit_n; other tasks are added to the set of missed tasks, I^no_n. The number of tasks in the cache-hit task set I^hit_n determines the load on edge n. The tasks in I^hit_n can be executed directly and locally at edge n. The task offloading strategy for task set I^no_n is as follows: a small number of tasks in I^no_n can also be executed locally, while the remaining tasks are performed by a collaborative edge server y or the cloud server. These tasks are added to the off-site task set I^off_n for offloading. For the tasks in I^off_n, the offloading locations are determined based on the estimated offloading delay and the collaboration preferences of the edge nodes.

(1) Classification of tasks according to the estimated offloading delay
Under the assumption that computing resources are shared equally among all tasks offloaded to collaborative edge server y, the average amount of computing resources allocated to each task is

f^ave_ym = f^max_y / (num_y + 1),    (26)

where num_y represents the number of tasks already offloaded to edge y and f^ave_ym is the average amount of resources available to task I_m. The execution time of task I_m when offloaded to collaborative edge server y includes three components: the delay of ISD m uploading task I_m to edge n, the delay of transmission from edge n to edge y, and the computation delay at edge y. When task I_m is offloaded to the cloud server, the computation delay can be disregarded, and the estimated delay d_cl of offloading task I_m to the cloud server consists only of the transmission delays. If d_cl > d^t.est, then the delay of offloading task I_m to the cloud server cannot meet the application requirements. In this case, task I_m can only be executed on collaborative edge server y, and a_{n,I_m} = 1. In accordance with the above analysis, the off-site task set I^off_n is traversed, and the tasks for which d_cl > d^t.est are added to the set I^exe_y.
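The equal-sharing allocation of Eq. (26) and the cloud-versus-edge routing test can be sketched as follows; the resource and delay values are invented for the example:

```python
def avg_allocated(f_max_y, num_y):
    """Equal sharing of edge y's computing resources (Eq. 26): an arriving
    task receives f_max_y / (num_y + 1), where num_y tasks are already there."""
    return f_max_y / (num_y + 1)

def route_task(d_cloud, d_est):
    """A task whose estimated cloud-offloading delay exceeds the estimated
    delay requirement must run on a collaborative edge server instead."""
    return "edge" if d_cloud > d_est else "cloud"

# An 8 GHz edge server already running 3 tasks offers 2 GHz to a new one
print(avg_allocated(f_max_y=8e9, num_y=3))    # 2000000000.0
print(route_task(d_cloud=0.20, d_est=0.10))   # edge
print(route_task(d_cloud=0.05, d_est=0.10))   # cloud
```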

(2) Selection of collaborative nodes in accordance with the collaboration preferences between two edge servers
For the tasks in I^exe_y, collaborative nodes can be selected in accordance with the collaboration preference values of edge n. The collaboration preference value between two edge servers is related to the edge network topology and the estimated offloading delay. Edge n preferentially selects the collaborative node with the largest preference value for task offloading. If this collaborative node has computing resources remaining, the offloading of task I_m is accepted. If the computational load on this collaborative node is too high, the offloading of task I_m is rejected, and collaborative node selection is repeated in the next time slot. The steps of the task offloading algorithm are shown in Algorithm 2.
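The preference-driven selection with a capacity check can be sketched as follows; for simplicity this sketch falls through to the next-best node immediately rather than waiting for the next time slot, and the preference values are invented:

```python
def select_collaborator(preferences, remaining_capacity):
    """Pick the collaborative edge with the highest preference value that
    still has computing resources; return None if all candidates are
    overloaded (selection would then be retried in a later time slot)."""
    for edge in sorted(preferences, key=preferences.get, reverse=True):
        if remaining_capacity[edge] > 0:
            return edge
    return None

prefs = {"edge2": 0.7, "edge3": 0.9, "edge4": 0.4}
caps  = {"edge2": 1, "edge3": 0, "edge4": 2}   # edge3 is overloaded
print(select_collaborator(prefs, caps))  # edge2 (edge3 preferred but full)
```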

Algorithm complexity analysis
The proposed JOCOA algorithm is a distributed strategy among multiple edge servers. Therefore, its complexity is analyzed from the perspective of a single edge server, in two parts: task caching and task offloading. If N1 denotes the number of edge servers participating in the task caching auction and M1 denotes the number of ISDs participating in the auction, then the complexity of the auction algorithm is O(N1 × M1). If N2 represents the number of tasks for which offloading requests are accepted and M2 represents the number of tasks that edge n is waiting to offload, then the complexity of the task offloading algorithm is O(M2 × (N2 log N2 + N2)). Therefore, the overall complexity of the JOCOA algorithm is O(N1 × M1 + M2 × (N2 log N2 + N2)).

Experimental setup and introduction of algorithms considered for comparison
In this paper, the Simu5G simulation tool is used to establish an MEC network.
To verify the effectiveness of the JOCOA solution in the smart factory environment, we use real data collected by AWS for experiments [29]. This dataset simulates the tasks of ISDs, including video analysis, environmental monitoring and equipment control. These tasks have some common characteristics, such as computing resource requirements, memory resource requirements and task delay limits. The parameter settings are shown in Table 2.
The proposed joint optimization algorithm is compared with five other task offloading algorithms, which are introduced as follows:
(1) JOCOA. The scheme proposed in this paper. ISDs are classified based on task scenario awareness, and task caching and task offloading are jointly optimized.
(2) Collaborative data caching and computing offloading (CDCCO). A collaborative algorithm for distributed task caching and computing offloading [14].
(3) Random service caching (Random). Each edge server randomly performs task caching. Tasks are then offloaded based on the edge servers' collaboration preferences.
(4) Noncooperative (Non-Co). A task caching strategy is set for the experiment. The task offloading strategy is to execute offloaded tasks at the directly associated edge server; no cooperative offloading strategy is used.
(5) Cloud. No task caching strategy is set. All tasks are sent directly to the cloud server for execution.
(6) Brute-force. Task offloading uses the brute-force method: when a task is generated, all edge servers calculate their auction quotations and participate in the bidding, and the ISD selects the best quotation for the transaction.

(1) Effect of the number of collaborative edge devices
The task offloading delay is related to the number of collaborative edge devices. Six networks with different numbers of collaborative edge devices are designed for experimentation. The average task execution delays in the six networks are shown in Fig. 5. Initially, an increase in the number of collaborative edge devices significantly reduces the average task execution latency. For a network with five collaborative edge devices, the average task execution delay is 61.3% of that without collaborative edge devices. The experimental results also reveal that the marginal reduction in the task execution delay diminishes as the number of collaborative edge devices further increases. The task execution latency of a network with one collaborative edge device is 14.5% lower than that of a network without any collaborative edge devices, whereas the task execution latency of a network with five collaborative edge devices is only 2.5% lower than that of a network with four. As the number of collaborative edge servers increases, edge servers with heavier loads can choose among more locations for task offloading, which makes the computing load of the edge network more balanced and reduces the task execution delay.

(2) Effect of the number of tasks
The number of tasks generated in a given period of time affects the average task execution delay in the edge network. For this group of experiments, three task sequences are designed with different numbers of consecutive tasks: 100, 300, and 500. The average task delays under the five algorithms are compared, and the results are shown in Fig. 6.
As the total number of tasks increases, the average execution delays under the Random and Non-Co algorithms increase significantly. The Random algorithm caches tasks at random rather than according to an optimized strategy, and the Non-Co algorithm performs no cooperative offloading among edge devices; when the total number of tasks is large, tasks therefore queue at edge servers with heavier loads. Under the Cloud algorithm, all tasks are offloaded to the cloud server for execution, and the longer transmission delay leads to a longer average task execution delay. When the number of tasks is 500, the task execution delay under the JOCOA algorithm is 16.9% lower than that under the CDCCO algorithm. Because the JOCOA algorithm additionally includes a task caching strategy based on task scenario awareness, it improves the task caching hit rate and allows more tasks to be offloaded to the directly associated edge servers for execution, thereby reducing the task execution latency.

(3) Effect of the number of ISDs
The number of ISDs affects the average execution delay of tasks in the edge network. As shown in Fig. 7, the average task execution delay under the Cloud algorithm increases only slightly as the number of ISDs increases. In an environment with 5 to 20 ISDs, the performance of the other four algorithms differs little. When the number of ISDs exceeds 20, however, the performance of the Random and Non-Co algorithms begins to drop significantly: these two algorithms do not optimize the task caching or computation offloading strategy, so they perform poorly when computing resources are tight. When the number of ISDs is 50, the average execution delay under the JOCOA algorithm is 21.5% lower than that under the CDCCO algorithm. The JOCOA algorithm thus has a clear time performance advantage over the other algorithms when the number of ISDs is large.
(4) Effect of task caching capacity
As the task caching capacity increases, more tasks can be cached at an edge server, and the average task execution delay is therefore reduced. As shown in Fig. 8, no task caching strategy is set in the Cloud algorithm, so the task caching capacity has no effect on it. The Random and Non-Co algorithms yield almost the same task execution delay when the caching capacity is 4; however, as the caching capacity increases, the Non-Co algorithm begins to outperform the Random algorithm. The Random algorithm uses a random task caching strategy, whose caching hit rate is not high, so it is less efficient than the other three algorithms. It should be noted that algorithm performance depends mainly on the amount of computing resources available at the edge servers; the task caching capacity is a secondary factor with a relatively small impact on the task execution delay. When the caching capacity is less than 5, the task execution delay under the JOCOA algorithm is higher than that under the CDCCO algorithm, because the JOCOA algorithm must implement a task caching strategy based on task scenario awareness; when the available task caching capacity is insufficient, the cached tasks at an edge server are frequently replaced, which reduces the execution efficiency of the algorithm. Nevertheless, when the caching capacity is 5, the execution delays of the two algorithms are similar, and as the task caching capacity further increases, the task execution delay under the JOCOA algorithm continues to decrease.
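The replacement effect discussed above can be demonstrated with a toy cache simulation. The LRU eviction policy and the repeating access pattern are illustrative assumptions, not the paper's scenario-aware caching strategy; the point is only that an undersized cache thrashes and its hit rate collapses.

```python
from collections import OrderedDict

# Toy LRU task cache to illustrate the capacity effect: with too little
# capacity, cached tasks are replaced before they are reused, and the
# hit rate drops sharply. (LRU and the access pattern are assumptions.)

def hit_rate(capacity, accesses):
    """Fraction of task requests served from the cache."""
    cache = OrderedDict()
    hits = 0
    for task in accesses:
        if task in cache:
            hits += 1
            cache.move_to_end(task)          # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict least recently used
            cache[task] = True
    return hits / len(accesses)

accesses = [1, 2, 3, 1, 2, 3, 1, 2, 3, 4]    # repeating task scenario
```

With this access pattern, a capacity of 3 holds the whole working set (hit rate 0.6), while a capacity of 2 causes every request to miss: each cached task is evicted just before it would be reused.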
When the task caching capacity is 9, the average execution delay under the JOCOA algorithm is 13.3% lower than that under the CDCCO algorithm.

(5) Locations of task execution
This experiment analyses the proportions of the locations at which the computation tasks are executed under each of the five compared algorithms. As shown in Fig. 9, the Cloud algorithm sends all tasks directly to the cloud server, so the proportion of tasks performed by the cloud server is 100%. The Non-Co algorithm does not consider a cooperative device offloading strategy, so the proportion of tasks performed by cooperative devices is 0. Among the remaining three algorithms, the JOCOA algorithm executes the largest proportion of tasks at directly associated edge servers, which shows that its task caching strategy can effectively improve the task execution efficiency of the edge servers. The JOCOA algorithm allows more tasks to be served simultaneously, and owing to the joint optimization of task caching and offloading, the proportion of tasks sent to the cloud for execution is greatly reduced; the delay incurred in link transmission is therefore also reduced.

(6) Effect of the number of auction rounds
The auction algorithm determines whether a task is cached. If a transaction between the two auction parties is established, the task is cached; otherwise, the task is placed in the next round of auctions. If no transaction is established before the maximum auction round limit is reached, the task is not cached. This experiment verifies the impact of the number of task auction rounds on algorithm performance. As shown in Fig. 10a, the cache hit rate of the tasks increases significantly with the number of auction rounds; however, the gain in the cache hit rate begins to diminish after five rounds, and after nine rounds the cache hit rate stabilizes at approximately 92%. As shown in Fig. 10b, the average task execution delay trends downward over the first four rounds of auctions but begins to increase again from the fifth round. In the first four rounds, the cache hit rate of the tasks increases greatly, so the edge servers can execute more tasks, improving the computing efficiency. With a further increase in the number of auction rounds, however, many tasks for which no transaction has yet been established must participate in subsequent auctions, increasing the number of tasks involved in each auction. Consequently, the complexity of the auction algorithm increases, along with the task execution delay.
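The round-limited caching decision described above can be sketched as a simple loop. The reserve-price model and bid sequence are assumptions for illustration; the paper's bilateral auction sets bids by task scenario awareness rather than by the toy schedule used here.

```python
# Illustrative sketch of the round-limited auction that decides caching:
# a task is re-auctioned until some server's bid meets the buyer's
# reserve price, or the round limit is hit (then the task is not cached).
# The reserve price and bid schedule are hypothetical.

def auction_task(reserve_price, bids_per_round, max_rounds):
    """Return (winning_round, winning_bid), or None if no transaction is
    established within max_rounds (the task is then not cached)."""
    for rnd in range(1, max_rounds + 1):
        bids = bids_per_round(rnd)
        best = min(bids) if bids else None
        if best is not None and best <= reserve_price:
            return rnd, best      # transaction established: cache the task
    return None                   # round limit reached: task not cached

# Hypothetical schedule: sellers lower their bids by 1 each round.
bids = lambda rnd: [10 - rnd, 12 - rnd]
```

With a reserve price of 7, the deal closes in round 3 at a bid of 7; capping the auction at two rounds leaves the task uncached, mirroring the hit-rate/round trade-off in Fig. 10.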
(7) Comparison of the JOCOA algorithm with the Brute-force algorithm
To demonstrate the advantage of the JOCOA algorithm in terms of time complexity, this group of experiments compares it with the Brute-force algorithm. As can be seen in Fig. 11, the average task execution time of the Brute-force algorithm rises significantly as the number of tasks increases. At 500 tasks, the JOCOA algorithm requires 33.7% less time than the Brute-force algorithm.

Conclusion
This paper studies the problem of task caching and offloading in the edge computing environment of a smart factory and proposes the JOCOA scheme to solve it. Based on task scenario awareness, the proposed scheme jointly optimizes the two processes of task caching and task offloading. Experimental results show that the JOCOA scheme achieves better time performance than existing algorithms and better load balancing among edge servers. However, these experiments focus only on the factors that affect the offloading delay of edge devices and do not consider the impact of the algorithm on other aspects of performance. In future research, device energy consumption will also be considered, and a task offloading algorithm that accounts for both time efficiency and energy savings will be designed.