Intelligent Situation Awareness Based on the Fractal Dimension and Multidimensional Reconstruction

In order to realize autonomy and intelligence in situation awareness, this paper proposes an intelligent situation awareness model based on fractal dimension information mining and multidimensional information reconstruction. First, several new concepts are proposed: spatial situation perception is established by 3D reconstruction of the fused input information, 4D reconstruction completes the situation comprehension, and 5D reconstruction performs the situation prediction. The three‐level situation estimation model is generalized into a more robust flexible model. Combined with the database system, the reasoning learning mechanism, and the diversified human‐machine interface concept, a basic framework of intelligent situation awareness is constructed. Second, the basic process of intelligent situation awareness is provided. Third, based on the flexible configuration method of the situation awareness system and the concept of unmanned weight determination for the situation awareness agents, an efficiency measurement model for the intelligent situation awareness model is constructed, and an evaluation method for the consistency of multinode intelligent situation awareness is provided. Fourth, a typical electromagnetic situation estimation example for a drilling platform is given to explain and validate the concepts. Finally, several suggestions are put forward for the future construction of an intelligent situation awareness system.


Introduction
In order to estimate the precise location, identity, and real-time situation of a target, information fusion of data from single or multiple sources is required. [1] As a typical link in the field of information fusion, situation awareness (SA) has produced many new structures, techniques, and methods in recent years. However, as the scale and complexity of such systems expand, some studies have combined the attention mechanism in psychology with SA theory and put forward the concept of an attention mechanism in battlefield SA.
Taken together, these studies show that a single technology is rarely sufficient for SA applications; better results are achieved through the overlapping and complementary use of various methods and techniques. However, the following problems still exist. In complex SA applications, the previous SA level and the decision makers' behavioral responses affect the degree and level of subsequent SA. Traditional situation awareness based on multisource information fusion models the situation as an open loop: it lacks feedback to control the input information and lacks means to coordinate and optimize resources, so it cannot respond to a complex and varying environment in time. Moreover, most theoretical frameworks remain dependent on humans; the human attention mechanism, decision-making level, and system adaptability are indispensable elements in the whole SA loop.
With the expansion of the SA space, the complexity of the environment, the diversity of perceived targets, the variety of sensor types, and the explosive growth of information volume, the required awareness strength keeps increasing. However, human attention, reaction speed, and judgment are limited. The traditional SA model, which relies on human subjective experience or keeps human cognition in the loop, cannot cope with more complex systems and more rapidly changing information. Previous SA methods are inseparable from human analysis and judgment: subjective experience, the attention mechanism, and the operational decision-making level determine the quality of SA. The advantage of the human brain is that it can work in various complex environments, but its disadvantages are slow response and the inability to maintain attention for a long time; [9] in particular, slow reasoning may fail to produce a rapid and correct situation judgment.
At present, although some mature algorithms alleviate the human workload, they lack adaptivity, self-learning, self-evolution, and other attributes with respect to different objectives, environments, relationships, tasks, and even specific parameters, so they cannot meet the SA needs of nonhuman agents. Therefore, combining artificial intelligence technology with SA to go beyond human intelligence and physical strength in recognizing and predicting complex situations under massive data, and ultimately to provide a strategic reference for the final decision, is the next important research direction for SA.
In this paper, the Endsley SA framework is extended to be self-organizing, self-adaptive, and self-learning. On the basis of incorporating the concept of autonomy, reconstruction from low-dimensional information to high-dimensional information is continuously carried out to parse more complex information. Based on database learning and an algorithm organization strategy, an intelligent SA (ISA) framework is established, which can free people from the traditional SA loop. As a monitor, the human sits above the SA loop, only judging the rationality of the final perception and decision-making results and intervening when necessary. On the one hand, this reduces workload and subjective misjudgment to the greatest extent; on the other hand, it allows more situation information to be processed quickly, improving the effectiveness of SA.
The rest of the paper is organized as follows. Section 2 proposes the concepts of 3D reconstruction of the "discrete state", 4D reconstruction of the "continuous state", 5D reconstruction of the "future trend", fractal reconstruction of the situation, and a new type of ISA. Section 3 gives the basic structure of the new ISA framework and focuses on the main characteristics of the ISA flexible model. Section 4 constructs a basic implementation flow for the new ISA model. Section 5 discusses the unmanned measurement method for ISA agents, presents a typical ISA effectiveness measurement model, constructs a multinode situation fusion framework, and explores the basic model of consistency evaluation for ISA. Section 6 takes the electromagnetic SA of a drilling platform as an example, demonstrating situation perception based on 3D reconstruction in the space domain, situation comprehension based on 4D reconstruction in the time domain, and 5D reconstruction based on uncertainty together with fractal reconstruction. Finally, Section 7 gives some concluding remarks.

3D Reconstruction of "Discrete State"
In computer vision, 3D reconstruction refers to the process of recovering the 3D surface information of an object from images. The "discrete state" 3D reconstruction proposed in this paper refers to creating a specific 3D space at a certain moment and integrating various types of data into this 3D space based on the principle of spatiotemporal registration, so as to achieve a stereoscopic description of environments, targets, and events at discrete times. For example, for multiview visible-light image data of a battlefield space, a 3D reconstruction operation can reconstruct a 3D model of the target and the environment; position information is then integrated to complete the absolute orientation in the 3D environment, and radar information is integrated in real time to complete the positioning and association of targets, thereby completing situational perception of the target in 3D space.
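As a minimal illustration of the registration-and-association step described above, the following Python sketch fuses sensor-local observations into a common 3D frame and then associates a radar fix with the nearest reconstructed target. All coordinates, sensor offsets, target names, and the gating distance are invented for illustration.

```python
import math

def register_to_common_frame(obs, sensor_pos):
    """Translate a sensor-local (x, y, z) observation into the common 3D frame."""
    return tuple(o + s for o, s in zip(obs, sensor_pos))

def associate(targets, radar_fix, gate=5.0):
    """Return the nearest reconstructed target within the gate distance, else None."""
    best, best_d = None, gate
    for name, pos in targets.items():
        d = math.dist(pos, radar_fix)
        if d < best_d:
            best, best_d = name, d
    return best

# Two optical observations registered into the common frame at one instant
# (the "discrete state"), then a real-time radar fix is associated.
targets = {
    "ship_A": register_to_common_frame((1.0, 2.0, 0.0), (100.0, 50.0, 0.0)),
    "ship_B": register_to_common_frame((-3.0, 0.5, 0.0), (100.0, 50.0, 0.0)),
}
print(associate(targets, (101.5, 52.0, 0.0)))  # → ship_A
```

A real registration step would also handle rotation and time alignment between sensors; only translation is shown here.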

4D Reconstruction of "Continuous State"
In computer vision and computer graphics, 4D reconstruction is the process of capturing the shape and appearance of real objects along a temporal dimension. This process can be accomplished either by active or passive methods and is also referred to as nonrigid or spatio-temporal reconstruction. [10] Some studies on 4D reconstruction are the following. Grau's paper [11] discusses free-view video technology for special effects and sports post-analysis, capturing human motion as 3D surface data and representing it as 4D data over time. Jankó and Chetverikov's paper [12] introduces a 4D reconstruction studio established by the Institute of Computer and Automation of the Hungarian Academy of Sciences, which uses high-resolution cameras to output dynamic 3D models of moving actors. Doulamis and Ioannides's paper [13] seeks to reconstruct 3D models of historical monuments from thousands of images on the network and visualize 4D landscapes through metadata management. Mustafa and Kim's paper [14] proposes a method of reconstructing 4D temporally coherent models of complex dynamic scenes, which requires no prior knowledge of the scene structure or camera calibration, allowing reconstruction from multiple mobile cameras; a complete 4D dynamic model is obtained by combining sparse-to-dense temporal correspondence with joint multi-view segmentation and reconstruction. Rodríguez-Gonzálvez and Muñoz-Nieto's paper [15] describes, from the view of multisource data fusion and data management, the time-varying representation of different environments and the evaluation of data sources in cultural heritage. Paper [16] uses a 3D scanner to capture flower models at different periods, reconstructs a series of point clouds based on coherent spatiotemporal sequences, and can accurately infer the shape of the occluded petals. These studies essentially describe the motion state of objects.
According to Einstein's 4D spacetime theory, the four-dimensional description vector is [x, y, z, t], where t is time; the essence of the spatiotemporal dimension is to describe the motion of objects. The 4D reconstruction of the "continuous state" proposed in this paper takes the detection and position fusion results as input. For a specific target or region, the discrete 3D information of the history and the current stage is reconstructed along the time axis into a dynamic, continuous 3D scene that represents the evolution of the historical state of that target or region.
The space formed by the 4D reconstruction of the "state" is built on the 3D space of the initial "state"; it records the change of events and can be visualized in the discrete 3D space. It is not a standard Euclidean space, but one that projects to 3D properties at specific times. Through the 4D reconstruction of a target or region, the computer can understand the positional relationships of targets or the changing process of regions, which is the purpose of situation comprehension.
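The idea of a 4D track that projects to 3D properties at specific times can be sketched as a time-ordered sequence of discrete 3D snapshots. The linear interpolation below is an assumed simplification for illustration, not a method from the paper.

```python
import bisect

class Track4D:
    """A "continuous state" built from discrete 3D snapshots along the time axis."""
    def __init__(self):
        self.times, self.states = [], []

    def add(self, t, xyz):
        """Insert a discrete 3D snapshot, keeping the time axis sorted."""
        i = bisect.bisect(self.times, t)
        self.times.insert(i, t)
        self.states.insert(i, xyz)

    def state_at(self, t):
        """Project the 4D track to a 3D position at time t (linear interpolation)."""
        if t <= self.times[0]:
            return self.states[0]
        if t >= self.times[-1]:
            return self.states[-1]
        i = bisect.bisect(self.times, t)
        t0, t1 = self.times[i - 1], self.times[i]
        a = (t - t0) / (t1 - t0)
        return tuple(p + a * (q - p) for p, q in zip(self.states[i - 1], self.states[i]))

track = Track4D()
track.add(0.0, (0.0, 0.0, 0.0))
track.add(10.0, (100.0, 0.0, 0.0))
print(track.state_at(5.0))  # → (50.0, 0.0, 0.0)
```

Querying `state_at` at any instant recovers a 3D "projection" of the track, mirroring how the 4D space described above is visualized in discrete 3D space.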

5D Reconstruction of "Future Trend"
There are not many papers that apply the 5D reconstruction concept, and the openly available literature uses it for seismic data recovery. For example, Poole and Veritas's paper [17] uses the antileakage Fourier transform to reconstruct 5D seismic data, Stanton and Sacchi's paper [18] uses the convex set projection algorithm to reconstruct 5D seismic data, and another paper [19] compares three 5D seismic data reconstruction methods.
The 5D space of the so-called "future trend" in this paper is also not a standard Euclidean space; its dimensions are [x, y, z, t, p], where p represents possibility. The 5D reconstruction of the "future trend" predicts specific events on the premise that the system has understood the target or regional state. However, with the evolution of the input values and variation of the prediction algorithm, the prediction has many possible outcomes; that is, different 4D spatiotemporal dimensions yield different deduction results.
The 5D reconstruction takes specific events in the current space and time as input; through the extraction of target elements, event comprehension has already been completed in the 4D space. Therefore, the uncertainty derivation process based on event information is the core of the 5D reconstruction of the "future trend". In the reconstructed 5D space, the projection at the largest possibility measure is taken, and the 4D space under that projection is taken as the "future trend", thereby completing the situation prediction.
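The max-possibility projection can be sketched as follows: each candidate motion model yields a 5D point [x, y, z, t, p], and the point with the largest p is kept as the "future trend". The motion models and possibility values are invented assumptions, not data from the paper.

```python
def predict_5d(track, horizon, models):
    """Each model yields a candidate [x, y, z, t, p] point; keep the most possible one."""
    hypotheses = []
    for name, (step, p) in models.items():
        x, y, z, t = track
        hypotheses.append((name, (x + step[0] * horizon, y + step[1] * horizon,
                                  z + step[2] * horizon, t + horizon, p)))
    # Projection at the largest possibility measure gives the "future trend".
    return max(hypotheses, key=lambda h: h[1][4])

# Hypothetical per-unit-time velocities and possibility measures.
models = {
    "keep_course": ((10.0, 0.0, 0.0), 0.6),
    "turn_north":  ((0.0, 10.0, 0.0), 0.3),
    "loiter":      ((0.0, 0.0, 0.0), 0.1),
}
name, point = predict_5d((100.0, 50.0, 0.0, 12.0), 5.0, models)
print(name, point)  # → keep_course (150.0, 50.0, 0.0, 17.0, 0.6)
```

In practice the possibility values would come from an uncertainty reasoning algorithm rather than being fixed constants.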

Fractal Reconstruction of "Situation"
Dimension is an important feature quantity characterizing a geometric object; it represents the number of independent coordinates required to describe the position of a point in the object. Objects in Euclidean geometry are described by integer dimensions, called topological dimensions. However, fractal theory holds that the dimension need not be an integer but can be fractional, that is, a fractal dimension. [20] The fractal dimension contains the integer dimension and describes the correlation of local system behavior composed of small and fragmentary local features in natural phenomena. [21,22] The integer dimension can only describe the static features of a geometry, while the fractal dimension describes its dynamic change.
The concepts mentioned above for 3D, 4D, and 5D reconstruction are all topological-dimension reconstructions. Resolving the higher dimensions of the world is a tool for human understanding. The basic feature of topological-dimension reconstruction is to input low-dimensional, integer-dimensional data and reconstruct more understandable high-dimensional data. For example, the internal image of an object is reconstructed from CT scan data, [23] and the internal or surface model of an object is reconstructed from image data [24,25] to achieve a more intuitive and in-depth understanding of the world.
Few papers explicitly propose the concept of fractal reconstruction. Paper [26] uses fractal structure reconstruction to study the relationship between the electronic properties of BaTiO3 ceramics and the fractal properties of their microstructures. Paper [27] uses fractal geometry theory to handle two problems with unstructured 3D measurement data: fractal feature extraction and fractal surface (geochemical landscape) reconstruction. Rex and Pilger's paper [28] discusses information theory and fractal analysis as a new standard of fit. Paper [29] uses the fractal interpolation function (FIF) to perform a 3D reconstruction of tumor perfusion; although the concept of fractal reconstruction is clearly proposed there, the fractal function is still used to reconstruct a topological dimension. Paper [30] uses a fractal geometry method to reconstruct a dense natural terrain surface from sparse data while maintaining its roughness and estimating the uncertainty of each reconstructed point.
Therefore, in this paper, for the dynamic-change nature described by the fractional dimension, the concept of fractal reconstruction of the "situation" is proposed: let R be the set of real numbers and n, N ∈ R; if the input information dimension is a positive integer n and the output information dimension is any real number N > n, the process is called fractal reconstruction.
Fractal reconstruction can maximize the mining and utilization of incomplete information and enrich the situational elements, which is more conducive to the system's autonomous fulfilment of situation perception, situation comprehension, and situation prediction. In particular, it can bridge information-dimension faults when associating data of different dimensions, and can more comprehensively reflect the situation between the topological information dimensions. For example, suppose a single remote sensing satellite image and digital elevation model (DEM) data are given for a region, and 3D surface information for that region is required. Because continuous, high-resolution oblique photographic data from multiple views is lacking (e.g., side information of buildings is missing), the image and the DEM data can be merged to form a 2.5D model that contains target heights but no side information.
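The fractal dimension underlying this section can be estimated from data with the standard box-counting method: count the occupied boxes N(s) at several box sizes s and fit the slope of log N(s) against log(1/s). This generic estimator is an illustration of the fractal dimension concept, not the paper's specific mining procedure.

```python
import math

def box_counting_dimension(points, scales):
    """Estimate the fractal (box-counting) dimension of a 2D point set.

    Counts occupied boxes N(s) at each box size s, then fits the slope of
    log N(s) versus log(1/s) by least squares.
    """
    logs, logN = [], []
    for s in scales:
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append(math.log(1.0 / s))
        logN.append(math.log(len(boxes)))
    n = len(scales)
    mx, my = sum(logs) / n, sum(logN) / n
    return (sum((a - mx) * (b - my) for a, b in zip(logs, logN))
            / sum((a - mx) ** 2 for a in logs))

# A densely sampled line segment should have dimension close to 1.
pts = [(i / 1000.0, i / 1000.0) for i in range(1000)]
print(round(box_counting_dimension(pts, [0.5, 0.25, 0.125, 0.0625]), 2))  # → 1.0
```

A genuinely fractal point set (e.g. a coastline sample) would yield a non-integer value between 1 and 2, which is the kind of fractional output dimension N the definition above allows.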

Intelligent Situation Awareness (ISA)
Humans are in the SA loop, [31] and their subjective decisions and behaviors may influence the perception, comprehension, and prediction of the situation. Therefore, it is necessary to strengthen the weight of the machine in SA, reduce the impact of human factors on SA, and conduct autonomous SA research for unmanned systems.
We therefore propose the concept of intelligent SA: in a manned or unmanned system, relying on a continuously updated database and a reasoning learning mechanism, the system autonomously completes situation assessment and threat assessment, performs historical data association on the generated situation information, and completes self-perception, self-comprehension, and self-prediction of the situation.

Intelligent Situation Awareness (ISA) Model
In this section, an ISA method based on fractal dimension information mining and multidimensional reconstruction is proposed. On the basis of detection fusion and location fusion processing, the input information is classified by dimension and combined with the target database, environment database, event template database, algorithm database, and reasoning learning mechanisms. Oriented to the current state or events of single or multiple targets, and combined with real-time environmental information, the model quantitatively describes the targets' overall situation and events. Through 3D and 4D reconstruction, it accomplishes the aggregate organization of targets and environment, the behavioral interpretation of events and activities, time-based contextualization, and so on; for example, it combs the attribute, state, and cause-and-effect relations among objects, as well as environmental information relations such as topography, astronomy, and climate.
The organization forms of various elements such as the activities, events, times, and locations of the target and environment are generated automatically, and the outcomes of events are predicted under the framework of 5D reconstruction. Furthermore, the enemy threat factor or intention is inferred based on impact-factor quantization and weighting. The SA results are then visualized, and a decision plan derived from the analysis results and judgment conclusions is presented for humans to make the final strategy selection and decision, so that the action can finally be executed. The entire process of perception and decision is recorded in the database through the learning mechanism to provide a reference template for subsequent behavior, see Figure 1.
Since humans are no longer deeply embedded in the above framework but play the role of the final decision-making commander, the visualization of targets, environment, events, situation, and intention enables humans to analyze the situation information comprehensively at each stage.
When the visual situation and decision-making are provided to the human-machine interface, SA is not a single static process; rather, the existing awareness results are re-applied to the environment and the target through the situational linkage based on 4D reconstruction. Through new information input at each new time, the objective situation is perceived continuously. That is, all kinds of situation information on the human-machine interface are updated dynamically in real time.
At the same time, all acquired situation information is accumulated and stored, and the various reasoning algorithms and optimal human decisions are learned and stored in real time, forming new object data, relationship data, and method data. These iterations give the system's awareness autonomy, continuity, timeliness, and accumulation. In addition, with the database system and the reasoning learning mechanism as data and logic support, the system is openly scalable, flexible, and distributable. Separating the two from ISA at the hardware level makes the system robust and error-resistant.
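A highly simplified sketch of one ISA cycle as described above: perception, comprehension, and prediction run in sequence, and the result is recorded in the database for later cycles. The stage behaviors, record format, and the toy threat factor are placeholders, not the paper's implementation.

```python
class ISALoop:
    """Minimal sketch of the ISA cycle of Figure 1 (stage behaviors assumed)."""
    def __init__(self):
        self.database = []  # experience records feeding subsequent cycles

    def perceive(self, fused_input):
        return {"elements": fused_input}            # 3D reconstruction stage

    def comprehend(self, perception):
        return {**perception, "state": "tracked"}   # 4D reconstruction stage

    def predict(self, comprehension):
        return {**comprehension, "intent": "transit", "p": 0.7}  # 5D stage

    def run_cycle(self, fused_input):
        situation = self.predict(self.comprehend(self.perceive(fused_input)))
        threat = situation["p"]                     # toy threat factor
        self.database.append((situation, threat))   # learning-mechanism record
        return threat

loop = ISALoop()
print(loop.run_cycle(["radar_track_17"]))  # → 0.7
print(len(loop.database))                  # → 1
```

The growing `database` list stands in for the accumulation and iteration property described above: each cycle leaves a record that later cycles could consult.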

Awareness Object Mode Determination
Since different situation elements need to be extracted for different modes, object mode determination is the basis of SA. The weight assignment of the situation elements affects which type of rules the database system invokes and which algorithm the reasoning learning mechanism chooses. To ensure good SA and consistency, the principle of object mode selection is to describe specific regions, targets, and events accurately and in detail, avoiding macroscopic and fuzzy descriptions. For example, for the electromagnetic SA mode of surface ships, factors such as the number of platforms, task attributes, and radar models must be determined.

Awareness Spatial Unitization
An SA system does not require all the information in the space. Therefore, the spatial information is screened selectively, and the space is unitized according to specific rules. For each subunit, the distributed situation elements are extracted, and the situation elements of all units are then combined to generate a comprehensive SA result, which makes the awareness of a target more specific. The unitization can take the form of a digital grid, or the space can be divided flexibly according to target grouping or event grouping. In addition, introducing distributed computing and parallel computing techniques can accelerate the perception rate in the discretized space.
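Digital-grid unitization can be sketched as follows: situation elements are binned into grid cells so each cell can be processed independently (e.g. in parallel). The cell size and element names are arbitrary illustrations.

```python
from collections import defaultdict

def unitize(elements, cell=10.0):
    """Partition situation elements into grid cells (digital-grid unitization)."""
    grid = defaultdict(list)
    for name, (x, y) in elements:
        grid[(int(x // cell), int(y // cell))].append(name)
    return dict(grid)

elements = [("radar_1", (3.0, 4.0)),
            ("ship_A", (12.0, 4.0)),
            ("ship_B", (13.5, 6.0))]
grid = unitize(elements)
print(grid)  # ship_A and ship_B fall in the same cell (1, 0)
```

Each cell's element list would then feed a per-unit extraction step, and the per-cell results would be merged into the comprehensive SA result described above.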

Information Element Classification in Dimension
According to the dimension of the source's output information, sources can generally be divided into 1D, 2D, 3D, and 4D sources. In general, the higher the source dimension, the better the target can be detected and identified. In the discrete time dimension, sources can be summarized into one to three spatial dimensions. From the perspective of fractal dimension information, horizontally integrating information of different dimensions into 3D space helps complete the omnidirectional positioning and recognition of the target, thereby achieving the purpose of situation perception.

ISA System's Flexible Configuration
From the perspective of process control, the SA process has the following characteristics: 1) The SA process can be regarded as a collection of subprocesses. For certain targets, each subprocess has a specific reference model, but in the face of ever-changing perceptual tasks and data volumes, achieving ISA requires adding, removing, or adjusting the existing reference models according to the actual situation.
2) SA requires automatic execution and interrelated performance across modules; in particular, the awareness task elements and specific methods must be adaptive. 3) There are data sharing and data dependencies among multiple awareness subprocesses. During the process, it is necessary to continuously select and templatize the situation elements and to continuously match the corresponding algorithms.
In order to adapt to changes in the future environment and in user needs, the ISA system should have a flexible architecture that can accommodate the evolution of the system environment and its functions with little or no modification.
The application system in each module communicates with other modules only through interfaces. When the system requirements change, only the relevant part of a module needs to be modified, or a support module reconfigured. In general, such a reusable architecture is bound to have both commonalities and differences among modules; the principle of flexible configuration is to abstract the commonalities and localize the differences. In the ISA process, the internal information flow must have the potential to adapt to change: according to the different awareness tasks, the data management module and the learning module can flexibly select data and algorithm resources.
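The interface-only communication principle can be illustrated with a small registry: callers know only the interface name, so swapping the module behind it requires no change elsewhere. The module names and behaviors are hypothetical.

```python
class ModuleRegistry:
    """Flexible configuration sketch: modules are reached only through a
    named interface, so a module can be replaced without touching callers."""
    def __init__(self):
        self._modules = {}

    def register(self, interface, fn):
        self._modules[interface] = fn

    def call(self, interface, *args):
        return self._modules[interface](*args)

registry = ModuleRegistry()
registry.register("perception", lambda data: f"elements<{data}>")
print(registry.call("perception", "track_17"))   # → elements<track_17>

# Reconfiguring replaces only the one module behind the interface.
registry.register("perception", lambda data: f"elements_v2<{data}>")
print(registry.call("perception", "track_17"))   # → elements_v2<track_17>
```

This is the abstract-commonality/localize-differences idea in miniature: the interface name is the commonality, and the registered implementation carries the differences.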

ISA Flexible Model
According to the situation elements in a specific space-time domain, in Endsley's dynamic decision-making SA model, [32] situation estimation is a three-level process of perception, comprehension, and prediction. This process runs smoothly with human participation. However, for the ISA of unmanned systems, humans may not be in the loop, and the system relies on existing data and algorithms for autonomous SA. A more general situation must then be considered. For example, when the state and attribute estimation of the input target is incomplete, the database models and template data are insufficient, or the models and algorithms in the reasoning learning mechanism are imperfect, the perception, comprehension, and prediction of the situation may not lie on the same line, and Endsley's three-level model no longer applies. That is to say, in the more general situation, the processing of situation information under an imperfect intelligent mechanism is more likely to be interrupted, become singular, or err at any time, forming situation debris.
As shown in Figure 2, in order to maximize the exploitation of existing information and visualize all the awareness information, this section generalizes Endsley's three-level situation assessment model into a flexible model for the above concept of ISA. Situation perception, situation comprehension, and situation prediction are three mutually independent functional modules. Not only can the situation prediction module's output be used as the threat assessment's input, but the output of any module can be used as input to the situation linkage and threat assessment modules. This gives the system an on-site sensing (environment adaptation) capability, which effectively improves the fault tolerance and robustness of SA. The specific workflow is as follows. For the multisource input information that has been detected and fused, the situation perception module obtains the basic situation elements after analyzing the target attribute and behavior data, and determines the element templates for different targets. At the same time, it matches, compares, and supplements the historical data in the database and retains valuable data. The grouping of situation elements is completed by temporal or spatial association. Furthermore, the spatial information is fused into a specific 3D space to complete the discrete-state 3D reconstruction. Then the target or event description is passed to the threat assessment module, and the status information of the target or event is provided to the situation linkage module for visual processing.
If the system cannot proceed to the situation comprehension step, the basic situation element description and warning information are provided for humans to formulate countermeasures. In the situation comprehension module, the continuous-state 4D reconstruction is carried out based on the historical discrete 3D spatial data, completing the dynamic understanding of target actions or event changes. After generating and verifying the current situation, the situation comprehension module can directly provide the situation description to the threat assessment module, provide the current-cycle situation information to the situation linkage module, and let the situation linkage module perform the historical situation association. If the system cannot continue to the situation prediction step, the current situation information is provided for human reference and a response strategy is formulated manually. Based on the understanding of the continuous-state elements, the target intent can be predicted by a corresponding uncertainty reasoning algorithm; that is, the future-trend 5D reconstruction is completed. After the deduction is completed, the situation prediction module provides the target intent information to the threat estimation module and to the autonomous decision-making module on one hand, and on the other hand provides it to the situation linkage module to perform same-intent association, giving vertical support to threat assessment and machine decision-making.
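The fallback behavior of the flexible model, where an interrupted stage's partial output can still feed threat assessment or be handed to the human, might be sketched like this. The stage implementations are placeholders; returning `None` stands for a stage that cannot proceed (e.g. insufficient template data).

```python
def flexible_sa(fused_input, stages):
    """Run SA stages in order; if one cannot proceed, return the last good
    result with a flag so threat assessment or the human can still use it."""
    result = fused_input
    for name, stage in stages:
        out = stage(result)
        if out is None:                      # stage interrupted: situation debris
            return {"level": name, "partial": result, "needs_human": True}
        result = out
    return {"level": "prediction", "partial": result, "needs_human": False}

stages = [
    ("perception",    lambda x: x + ["elements"]),
    ("comprehension", lambda x: None),       # e.g. template data insufficient
    ("prediction",    lambda x: x + ["intent"]),
]
print(flexible_sa(["tracks"], stages))
```

Unlike a strict three-level pipeline, the partial result is preserved and flagged rather than discarded, which is the fault-tolerance property claimed for the flexible model.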

Threat Assessment and Decision Support Module
The threat assessment module quantitatively assesses and analyzes the enemy's threat level. According to the result of the situation assessment, it first extracts the threat elements, including the target's intention and capability. Second, the action intention is matched according to the specific target's event template, and the behavioral capability is extracted from the target database. Then the threat time level is calculated to determine the threat membership degrees and weight distribution, and according to the script library the threat levels are calculated and sorted. The decision support module can formulate strategies quickly and accurately in rapidly changing situations, avoiding the risks in manual decision-making that not all factors are considered or that a correct decision cannot be made in time in an emergency. However, the decision support system does not replace humans in making the final decision. It provides a series of decision-making references, sorted and displayed in order of priority with a concise explanation for each strategy, so that in the next execution step people can quickly select an appropriate strategy, or develop one based on the information provided by the decision system.
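A toy version of the membership-and-weight calculation described above: each threat element contributes a membership degree in [0, 1], a weighted sum gives the threat level, and targets are sorted by it. The element names, membership values, and weights are all invented for illustration.

```python
def threat_level(memberships, weights):
    """Weighted sum of threat-element membership degrees, normalized by weight."""
    total = sum(weights.values())
    return sum(memberships[k] * w for k, w in weights.items()) / total

# Hypothetical membership degrees per target and assumed element weights.
targets = {
    "ship_A": {"intent": 0.9, "capability": 0.6, "proximity": 0.8},
    "ship_B": {"intent": 0.2, "capability": 0.9, "proximity": 0.3},
}
weights = {"intent": 0.5, "capability": 0.3, "proximity": 0.2}

ranked = sorted(targets, key=lambda t: threat_level(targets[t], weights), reverse=True)
print(ranked)  # ship_A ranks above ship_B
```

The ranked list with per-target levels corresponds to the sorted threat display the decision support module would present to the human.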

Database System
The database supporting the ISA system mainly includes four parts: experience, elements, relationships, and methods, see Figure 3. Data storage can be static and aggregated, to store a priori information, or dynamic and dispersed, with data updated in real time; both local storage and online cloud storage can be used. 1) The experience database mainly includes the destination library and the result library. The destination library pre-stores the purpose, target, or event expectation value of each task; the result library updates the stored results of each perception and decision link in real time. 2) The element database mainly includes an entity library, environment library, symbol library, and material library. The entity library stores the attribute parameters of targets in the awareness environment, such as basic, spatial, and performance attributes; the environment library stores environmental data such as the geographical environment of the task area, the astronomical and hydrological environment, and the relationships between environmental elements; the symbol library stores the primitive glyphs of different dimensions to assist manual interpretation in the visualization link; the material library stores other related information such as target preferences, salient features, and historical rules.
3) The relationship database mainly includes a rule library, script library, template library, and relationship library. The rule library quantifies and stores various rules and regulations as the basis for inferential judgment; the script library consists of the occurrence sequences of specific events, with script data covering initial conditions, target elements, platform vectors, scene settings, behavioral outcomes, and so on; the template library is based on reason, condition, and time derivation, and includes information about different types of situation poles and their relative values as evidence for situation reasoning; the relationship library stores the relationship data of each target or event, such as behavior, task, and membership relationships. 4) The method database mainly includes the algorithm library and the parameter library. The algorithm library covers classical algorithms in the fields of search, reasoning, planning, decision-making, and learning, such as evidence theory, the Bayesian method, fuzzy reasoning, rough set theory, knowledge agents, interval grey relation, game information fusion, expert systems, the blackboard model, template matching, quality factor, fuzzy Petri nets, genetic algorithms, graphic and plan recognition, granular computing, and deep learning; the parameter library provides, for each algorithm's operating mechanism, a corresponding parameter selection dictionary or an adaptive parameter selection algorithm so that the algorithm achieves its best performance.
In addition, to realize robust and fast reading of all kinds of data, there is also a database-oriented resource management mechanism, which implements the functions of data classification, search, read, and write. On the one hand, it can schedule existing data resources; on the other hand, the empirical knowledge output by the learning mechanism can be classified and stored.
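As an illustration only, the classify/search/read/write functions of such a resource manager might be sketched as follows (the class and method names are our own assumptions, not part of the paper):

```python
from collections import defaultdict

class ResourceManager:
    """Minimal sketch of the database-oriented resource manager:
    it classifies incoming records into the four libraries and
    supports keyword search, read, and write."""

    LIBRARIES = ("experience", "element", "relationship", "method")

    def __init__(self):
        self.store = defaultdict(dict)  # library name -> {key: record}

    def classify(self, record):
        # Trivial classification rule for the sketch: the record
        # itself declares which library it belongs to.
        lib = record.get("library", "element")
        return lib if lib in self.LIBRARIES else "element"

    def write(self, key, record):
        self.store[self.classify(record)][key] = record

    def read(self, library, key):
        return self.store[library].get(key)

    def search(self, keyword):
        # Return (library, key) pairs whose record mentions the keyword.
        hits = []
        for lib, records in self.store.items():
            for key, rec in records.items():
                if keyword in str(rec):
                    hits.append((lib, key))
        return hits

rm = ResourceManager()
rm.write("radar_eq", {"library": "method", "type": "algorithm", "name": "radar equation"})
rm.write("supply_ship", {"library": "element", "type": "entity"})
print(rm.search("radar"))  # [('method', 'radar_eq')]
```

In a real system the classification rule would of course be learned or rule-driven rather than declared by the record itself.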

Reasoning Learning Mechanism
In the whole process of SA, the higher the level of abstraction, the more subjective the situation cognition becomes and the lower the level of situation quantification. The more subjective emotion or perceptual knowledge is integrated, the harder it is to analyze objectively and reason logically. Therefore, an ISA system must classify the existing information elements and uncertain elements on the basis of the various types of data support, select appropriate reasoning algorithms from the algorithm library, and find the correspondences among the elements, so as to find the situation hypothesis with the maximum probability.
As shown in Figure 3, the search mechanism, reasoning mechanism, and planning mechanism support situation perception, situation comprehension, situation prediction, and threat assessment. For different task requirements and scenarios, the system matches the corresponding algorithm independently and adjusts the algorithm parameters adaptively to complete all-direction reasoning.
Whether or not a human is in the loop, the learning mechanism continuously monitors the above search, reasoning, planning, decision-making, and execution processes, and relies on the bias of manual selection strategies to continuously discover hidden information in the various data streams. The cumulative effect formed in a particular direction then provides empirical data for subsequent iterations or new tasks. In addition, the system introduces big data, cloud computing, [33] and deep learning methods to support the database system and the reasoning learning mechanism, making it flexible, modular, and dynamically general.
It should be noted that, in the whole process of reasoning, decision-making, and learning, more than one reasoning method may be applied in the whole chain or in a certain link; various methods are applied comprehensively through resource management. [34] On the one hand, the resource management system can integrate and schedule the relevant data, relationships, and algorithms for specific problems; on the other hand, through the learning mechanism it can transform the final decisions executed by humans, and the visualized situation data of both process and results, into new database resources.

Situation Linkage and its Visualization Unit
The output of situation perception, situation comprehension, and situation prediction is the situation of a certain period. It cannot express all situations over historical time, and the current SA still has a certain delay. Therefore, the situation assessment output in one cycle is actually the SA of one or a few discrete moments. To achieve a comprehensive understanding of the situation, it is necessary to integrate historical, non-failed, and pending perception, comprehension, and prediction information into a continuous and dynamic situation; this is situation linkage. The situation linkage module can not only perform vertical historical situation linkage within a node but also horizontally correlate the SA results of other nodes to perform multi-node situation fusion.
The situation visualization module visualizes complex and invisible situations, allowing people to quickly understand them in a real-time, shareable manner. The module separately extracts the key elements in situation perception, the key event information in situation comprehension, and the key intention information in situation prediction, and expresses them visually with graphics, text rules, or other related forms, so that people can quickly interpret the current situation. For example, the parallel coordinate method, the contour method, and the threat degree function method can be used for visualization.
To display the current and future situations with multi-dimensional visualization and linkage, and to express the state evolution of entities by combining environmental data, target data, symbol data, and relational data, 3D scene reconstruction and display can be achieved with tools such as GIS, OGRE, OSG, Vega, Skyline, and STK. With 3D plotting symbols, the system can provide 2D planar projection display, 3D omnidirectional display, 2D/3D linkage display, and 4D dynamic display.
In addition, based on augmented reality or mixed reality technology, the reconstructed virtual scene is compared with the actually input target environment, erroneous or redundant data are corrected in real time by target image matching, and predicted alarm data are projected onto the real environment. On the one hand, this improves the reliability of the 3D situation display; on the other hand, by comparing the reconstructed scene with the real one, the performance and consistency of the SA can be evaluated visually.

Human Machine Interface System
The man-machine interface has three working modes: autonomous, semi-autonomous, and manual. In the autonomous mode, under manual monitoring, the system is allowed to go directly to the execution step according to the optimal strategy provided by the decision support system; meanwhile, people can enter the system and shut down the strategy execution at any time. In the semi-autonomous mode, the system gives a series of decisions in a specific order; referring to the visualized situation, the operator manually selects or modifies the appropriate strategy and decides whether to execute it. The manual mode applies when the SA result is incompatible with the decision, that is, when the self-determination has an obvious deviation; a new strategy is then formed by people according to the SA result.
In this way, the command or control personnel sit at the top of the entire operation chain, and the human-machine interface serves as a monitoring system for optimal decision deployment and task assignment, enabling people to cope with multiple sensing systems simultaneously, which can greatly improve execution efficiency while reducing workload.
In implementing the human-machine interface system, in addition to traditional devices such as display, mouse, and keyboard, the visualization of the situation can be extended with sensor fusion, virtual reality, and augmented reality technology. For example, augmented reality helmets allow commanders to immerse themselves in battlefield scenes, and voice recognition control, eye-tracking recognition control, and manual control can diversify the operation of the human-machine interface. Furthermore, special attention should be paid to the design of the human-computer interaction interface: it should be simple, flexible, and intelligent, easy to interact with naturally, facilitate collaboration, and perceive effectively.
In the above intelligent situation awareness and human-computer interaction model, since the situation is visualized in 3D space, feedback of the SA results can be realized by means of augmented reality and manual decision-making, so the awareness framework is closed-loop. Situation awareness is no longer a simple information push mode, but a reality-driven mode grounded in comparison with the real scene. Therefore, the above model is an iterative process.

The Intelligent Situation Awareness Process
This section illustrates the specific implementation process of the above ISA method, as shown in Figure 4.
1) Set the awareness space: set the awareness space according to the target location information input from the first-level fusion result. For the drilling platform, the perceived space is the entire entity's electromagnetic coverage. 2) Set the awareness target: set the targets of interest. For example, for the drilling platform, the water-surface and air targets can be set as the sensing targets, such as a supply ship, a stand-by ship, and an ocean monitoring aircraft. 3) Set the awareness mode: set the specific combat style to be perceived. For example, for the above perceived targets, the awareness mode can be set to marine-area monitoring SA. 4) Environment element extraction: extract the terrain, weather, sea state, electromagnetic environment, and other information from the database, and visualize the environment in 3D.

5) Target element extraction:
Using the Bayesian network method, extract and recognize the target elements, forming a quantitative description of the target force composition, and set the situational element information as follows: P_i is the state set of the ith target unit, in the format P_i = {time, batch number, type, attribute, position, state, heading, range of radiation source, …}. For the drilling platform, the batch number can be the identification number of the monitoring batch; the type can be supply ship, stand-by ship, ocean monitoring aircraft, etc.; the attribute refers to enemy, own, or friendly; the position is expressed by the latitude and longitude of the target; and the state includes the target's speed and height.
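A minimal sketch of such a situation element record (the field names follow the paper's P_i format; the class itself and the sample values are our own illustration):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SituationElement:
    """One target unit state set P_i in the paper's format:
    {time, batch number, type, attribute, position, state, heading, ...}."""
    time: float                     # perception time, h
    batch: str                      # monitoring batch identification number
    type: str                       # e.g. "supply ship", "ocean monitoring aircraft"
    attribute: str                  # "enemy", "own", or "friend"
    position: Tuple[float, float]   # (latitude, longitude), degrees
    speed: float                    # km/h (part of the "state" field)
    height: float                   # m (part of the "state" field)
    heading: float                  # degrees
    radiation_range: float = 0.0    # range of radiation source, km

p1 = SituationElement(time=0.0, batch="B001", type="supply ship",
                      attribute="enemy", position=(18.5, 110.2),
                      speed=25.0, height=0.0, heading=0.0)
print(p1.type, p1.speed)  # supply ship 25.0
```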
6) Target element matching: match and classify the target performance data in the database to determine the various attributes of the target, for example, monitoring capability, supply capability, defensive capability, maneuverability, contact intensity, etc. 7) Space unitization: set the target battlefield area M and divide it into k layers by a grid of cubes, each layer containing m × n units. If the grid size is d³, then, according to the sampling theorem, a target cannot be described when its orthographic projection area is smaller than 0.25d². In addition, multi-scale processing of the area can be implemented with grids of different sizes.
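The space unitization step might be sketched as follows (the 0.25d² resolvability check follows the rule above; the function names and the example extents are our own):

```python
import math

def unitize_space(extent_xyz, d):
    """Divide a battlefield box (x, y, z extents, km) into cubic
    grid cells of side d km; returns (k layers, m, n) cell counts."""
    x, y, z = extent_xyz
    m = math.ceil(x / d)
    n = math.ceil(y / d)
    k = math.ceil(z / d)
    return k, m, n

def resolvable(projection_area, d):
    """Per the sampling rule: a target whose orthographic projection
    area is below 0.25*d**2 cannot be described at grid size d."""
    return projection_area >= 0.25 * d * d

k, m, n = unitize_space((2000.0, 2000.0, 20.0), 10.0)
print(k, m, n)                  # 2 200 200
print(resolvable(30.0, 10.0))   # True:  30 >= 25
print(resolvable(20.0, 10.0))   # False: 20 < 25
```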
8) Target detection: based on the databases and the symbol library, detect the position and attributes of the targets in each spatial box. 9) Target grouping: group or cluster elements according to platform and space based on specific rules; nearest-neighbor clustering is commonly used for spatial groups, and functional groups and interaction groups are clustered likewise. 10) Situation comprehension input judgment: if the input information required for the situation comprehension process is available, continue with the event correlation operation; otherwise, mark the targets within the same group with the same color or symbol, and proceed to step 28). 11) Event detection: compare the targets' position changes, speed, equipment switching on/off, and other factors at different times.
With target event detection and behavior analysis, establish an event relationship view, covering, for example, trajectory, task, and radar switch-on time. 12) Event correlation: associate the historical events of the same target or group in the previous cycle with the current event. 13) Event relationship judgment: search for keywords in the template library, extract event templates based on the event feature descriptions, filter them through the learning mechanism, and determine whether a similar template exists in the database. If so, continue; if not, classify and store the template data, and proceed to step 28). 14) Target consolidation: simplify the situation information and combine data according to the target type P_i. 15) Extraction of relationship elements: according to the data, establish the relationships between targets, including spatial relationships, time relationships, communication relationships, and dependencies; this also includes information on the opposite side's policies, personnel tasks, and adverse factors. 16) Relationship matching: match the template events and historical events to the corresponding template in the database, such as the take-off moment of the aircraft and the radar switch-on event. If the match is successful, proceed to the next step; otherwise, save the existing element relationships and proceed to step 28). 17) Situation generation: comprehend the specific properties of the entities and form situation sets. 18) Situation template extraction: through the learning mechanism, determine whether a similar template exists in the database; if not, classify and store the template data.
The sign of I indicates whether our side has absolute initiative in the space; I = 0 means the two sides are evenly matched. F(R_i) indicates whether the opposite side can prevent our ith task; if the opposite side can block it, F(R_i) supplies the corresponding battlefield initiative contribution factor for the ith task.
24) Intention correlation: use relevant algorithms for data mining and knowledge discovery, correlate the operational intents of all targets to form overall prediction information, and, referring to Section 5.2, calculate the unmanned weight R of the perception agents. If the association is successful, continue; otherwise, proceed to step 27). 25) Threat synthesis: determine the opposite side's intention estimate from the target's maneuvering intention, radar switch-on time, etc.; synthesize the various parameter performances and situation trends to determine the opposite side's monitoring target and area, and complete the probability calculation for each threat element. 26) Threat level calculation: calculate the threat level according to the following formula, where w_1 is the threat weight of the target's task capability, E is the task capability index, w_2 is the threat weight of the task intention, G is the target's task intention, and K is the threat correction coefficient.
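The printed formula itself did not survive extraction; assuming the common weighted form W = K(w_1·E + w_2·G) built from the quantities just defined, a sketch would be:

```python
def threat_level(E, G, w1, w2, K=1.0):
    """Hedged sketch: threat level as a correction-scaled weighted sum
    of the task capability index E and the task intention G.
    The combination W = K*(w1*E + w2*G) is our assumption, not
    necessarily the paper's exact formula."""
    assert abs(w1 + w2 - 1.0) < 1e-9, "weights are assumed to sum to 1"
    return K * (w1 * E + w2 * G)

print(round(threat_level(E=0.8, G=0.6, w1=0.5, w2=0.5, K=1.2), 2))  # 0.84
```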

27) Decision support:
Based on the threat level, generate the optimal decision and retain the decision result; the SA of this cycle then ends. 28) Situation linkage: the situation factors of the previous cycle are fused with the SA results of each stage of this cycle. Useful situations are grouped to determine the relationships among the situation elements, including spatial association and temporal association: spatial association relates the spatial proximity of the target groups; temporal association links the situation elements in the same spatial group according to temporal inheritance. Finally, a unified comprehensive situation with a historical evolution display function is formed.
29) 3D situation visualization: visualize, in 3D space, the various situation information available during the current SA cycle. 30) Integrated situation visualization: visualize the important areas, targets, and monitoring coverage in the situation comprehension window. 31) Manual intervention in decision-making: through the human-machine interface, manual intervention executes the autonomous or non-autonomous decision-making mode, generates decision information to drive the action execution, saves the decision result, and ends this SA cycle.

Intelligent Situation Awareness Performance Metric
ISA's performance metric measures whether the system can autonomously transform perceived information into comprehension information within a specified time limit, and its ability to predict the situation based on experience. Unlike previous performance metrics, this paper does not consider human education, training, personality, experience, or other factors, but determines the performance metric rules and builds the model directly from the ISA framework.

Metric Criteria
In this section, based on the above ISA model and combined with the completeness, accuracy, and timeliness of user requirements, we propose that the performance metric of the ISA model must follow the autonomy principle.
1) Autonomy: based on the various databases and reasoning learning algorithms, the system must have autonomous SA capability, including automatic classification and screening of input data, automatic matching with the corresponding algorithms and data, and construction of an autonomous perception space for event comprehension. The higher the degree of autonomy, the more accurate the awareness results, the more detailed the prediction results, and the higher the intelligence of the system. Therefore, autonomy is an important indicator of whether a system can be called intelligent. 2) Completeness: in a specific awareness cycle, the number of perceived targets is compared with the number of real entities.
This includes the preprocessing links of event matching, unitized target screening, dimension classification, etc., and is mainly measured by the completeness of the input data, the database, and the algorithm library: first, whether the input data can support the algorithm's operation; second, the degree of perception of the target situation; and third, whether the record of the learning algorithm is complete. 3) Accuracy: the main purpose is to measure the degree of agreement between the perceived results and objective reality based on the awareness model, that is, whether the system executes correct database matching and algorithm scheduling in the autonomous state, including the situation perception metric in 3D space, the event comprehension metric in 4D space, and the possibility metric of the 5D reconstruction.

4) Timeliness:
The shorter the SA period, the higher the result update frequency and the better the timeliness. The timeliness indicator presupposes a certain accuracy. The main metrics are the time complexity and space complexity of the various algorithms, for example, the query count of the database search algorithm, the reasoning algorithm's efficiency, and the learning algorithm's classification efficiency. In addition, after an algorithm is optimized, it is also necessary to consider whether the hardware can support massive data and the running of iterative algorithms.

The Unmanned Weighting of ISA's Agents
The ultimate goal of the ISA framework is to provide autonomous awareness services for unmanned systems. In the resource distribution between humans and machines, there is a trade-off over whether a human is in the loop. Therefore, the degree of initiative of the unmanned system in processing SA information directly determines the ISA quality.
Considering the unmanned weighting of the agents, there are several aspects: is it conducive to correct and real-time SA? Is it conducive to correct and real-time decision-making and execution? Does it hinder the human subjective will from intervening in any link? Hence, a calculation method for the SA agents' unmanned weight is given as R = Σ_i λ_i H_i, where R is the unmanned weight: if R > 0, the ISA system has relative autonomy; if R < 0, the human has relative autonomy; if R = 0, the human and the intelligent awareness system cooperate with each other in a balanced state. λ_i ≥ 0 is the contribution coefficient of the unmanned weight for the ith SA in the task cycle based on the SA result x, reflecting the contribution of unmanned-system autonomy to SA; H_i is the ith artificial intervention flag in the task cycle: if humans can stop the unmanned system's work process in real time, H_i = −1; if humans do not intervene in the autonomous perception process, H_i = 1; and when they cooperate with each other, H_i = 0.
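Reading the definitions above as R = Σ_i λ_i H_i (a reconstruction from the stated sign conventions, since the printed formula did not survive extraction), the weight can be computed as:

```python
def unmanned_weight(lambdas, interventions):
    """Unmanned weight R = sum(lambda_i * H_i), reconstructed from the
    sign conventions in the text.
    lambda_i >= 0: contribution coefficient of the ith SA step.
    H_i in {-1, 0, 1}: human stops the system (-1), cooperates (0),
    or does not intervene (+1)."""
    assert all(l >= 0 for l in lambdas)
    assert all(h in (-1, 0, 1) for h in interventions)
    return sum(l * h for l, h in zip(lambdas, interventions))

# Three SA steps: system runs alone, humans cooperate, humans override.
R = unmanned_weight([0.5, 0.3, 0.2], [1, 0, -1])
print(R)  # 0.3 -> R > 0, so the ISA system retains relative autonomy
```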

A Typical Metric Model
SA performance is proportional to the product of the information acquisition and processing efficiencies: U = U_acq · U_awa, where U is the SA performance, U_acq is the information acquisition efficiency, U_awa is the information processing efficiency, and all are continuous functions of time.
For the ISA model proposed in this paper, the above performance metric method can neither judge the system's autonomy nor measure the performance of the internal modules. Following the metric principles above, this paper provides a typical efficiency metric model for similar tasks, specifically for the ISA model in Figure 1, where U is the overall performance of the ISA model proposed before. U_pre is the information preprocessing model performance, represented by the weighted effectiveness of three elements: target object pattern determination, spatial unitization, and information element dimension classification. U_awa is the situation perception model performance, U_aud is the situation comprehension model performance, and U_apr is the situation prediction model performance, with weights α + β + γ = 1 used to measure the flexible model performance in Figure 2: when the situation perception module cannot provide effective input for the situation comprehension module, α = 1 and β = γ = 0; when the situation comprehension module cannot provide effective input for the situation prediction module, γ = 0. U_dat is the database performance, U_std is the reasoning learning model performance, U_lin is the situation linkage model performance, and U_thr is the threat assessment model performance. Their value relationship depends on the unmanned weight R of the SA agents: R ≤ 0 indicates that the human has autonomy over the SA system, which then provides awareness and decision support for the human; R > 0 indicates that the SA system can autonomously complete the whole work in an unmanned state.
The performance of the information preprocessing module is evaluated by U_pre = aU_obj + bU_spa + cU_fra (9), where U_obj is the effectiveness of object mode determination, measured by P(e_i|E), the matching probability of the current perceptual object pattern e_i against the event model set E, and the matching time t; U_spa is the spatial unitization effectiveness, measured from ∑o_i, the number of cells containing a target, and O, the number of cells after unitization; U_fra is the information element dimension classification efficiency, with M the total number of sources and ∑Ψ(P_i ∩ P_j) the number of sources with similar information elements; and a + b + c = 1 are the corresponding weight coefficients.
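Taking the sub-metrics as simple ratios (the exact per-term formulas are not fully reproduced in the text, so U_spa = Σo_i/O and U_fra = ΣΨ/M are our assumptions), Equation (9) can be sketched as:

```python
def u_pre(u_obj, occupied_cells, total_cells, similar_sources, total_sources,
          a=0.4, b=0.3, c=0.3):
    """Sketch of Eq. (9): U_pre = a*U_obj + b*U_spa + c*U_fra.
    U_spa and U_fra are taken as simple ratios (our assumption)."""
    assert abs(a + b + c - 1.0) < 1e-9, "weights must sum to 1"
    u_spa = occupied_cells / total_cells      # cells containing targets / all cells
    u_fra = similar_sources / total_sources   # sources with similar elements / all sources
    return a * u_obj + b * u_spa + c * u_fra

val = u_pre(u_obj=0.9, occupied_cells=40, total_cells=100,
            similar_sources=3, total_sources=6)
print(round(val, 3))  # 0.63
```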
U_awa can be measured by the ratio of input information location elements integrated under the discrete-state 3D reconstruction, where i indexes the situation elements integrated into the 3D space. U_aud can be measured by the proportion of event information under the continuous-state 4D reconstruction, where f is the space-time fitting function.
U_apr can be measured by the least possibility of the uncertainty-based 5D reconstruction. U_lin can be measured by the situation fusion ratio, where ∑(S_t ∩ S_{t+1}) is the number of situations fused at adjacent times and ∑S_t is the total number of discrete states.
U_thr can be measured by the final assigned threat weight. U_dat is mainly measured by the delay time t and the information matching ratio, where ∑P_i is the number of matched information items and ∑s_i is the number of information queries. U_std can be measured by the time complexity T(n) and space complexity S(n).
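Under the ratio readings above (our interpretation of the partially garbled symbols; the delay-discount form in `u_dat` is likewise our assumption), the linkage and database metrics can be sketched as:

```python
def u_lin(states_t, states_t1):
    """Situation fusion ratio: situations shared by adjacent times
    over the total discrete states at time t (our reading of
    sum(S_t & S_{t+1}) / sum(S_t))."""
    fused = len(set(states_t) & set(states_t1))
    return fused / len(states_t)

def u_dat(matched, queried, delay_s, max_delay_s=1.0):
    """Database performance from the information matching ratio,
    discounted by the delay time (the discount form is assumed)."""
    ratio = matched / queried
    return ratio * max(0.0, 1.0 - delay_s / max_delay_s)

print(u_lin(["s1", "s2", "s3", "s4"], ["s2", "s3", "s5"]))  # 0.5
print(round(u_dat(matched=8, queried=10, delay_s=0.2), 2))  # 0.64
```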

Multi Node ISA Consistency Assessment
In order to accomplish the same task, when multiple nodes simultaneously perform collaborative SA on the same target, each node is required to maintain the same perception, comprehension, and prediction of the situation elements on the basis of different input information; this is the essence of multi-node collaborative planning and action. Similar to team situation awareness (TSA) or shared situation awareness (SSA), the SA results are shared. In contrast, the consistency of multi-node ISA in this paper has two differences: on the one hand, the object of sharing is not only humans but also unmanned systems; on the other hand, not only are the SA results shared, but the situations of different nodes must also be fused. In an ISA system, each node performs sensing on different input data, generates its own SA results, and fuses the situation information of the different nodes through the situation fusion module to form a complete, shared comprehensive situation. Similar to the concepts of absolute versus relative consistency and general versus narrow consistency, we believe that ISA consistency also includes two aspects: first, the consistency between the SA results and reality; second, the SA consistency for the same target across nodes. The two are both relative and intersecting. Therefore, the consistency assessment elements can also be described in terms of completeness, accuracy, continuity, timeliness, and commonality. However, as ISA consistency emphasizes the autonomous sharing and fusing of situations among multiple nodes, a multi-node ISA consistency assessment method based on a spatial benchmark is presented below.
Both the overall situation and the segmented situations within nodes have 4D space-time characteristics. Therefore, once the time reference and the spatial reference are determined and the time and space errors of the specific situation elements are judged, the consistency measurement can be performed. Let the spatial reference be P, the situation element dimension be l, and the task cycle be T. For the 4D space situation, projecting on time t, the discrete state P_i(t), i = 1, …, N, t ∈ [0, T] of each of the N nodes is obtained in 3D space at time t, and the situation consistency of the ith node can then be expressed as:

Case of Typical Drilling Platform Electromagnetic Situation Awareness
This section illustrates the ISA method.

Basic Assumptions
The electromagnetic situation of a monitoring alarm system is to be perceived in free space. The opposite side includes one drilling ship, one supply ship, and one stand-by ship; we have one ocean monitoring aircraft to monitor their exploitation activities. At the beginning of the sensing cycle, taking the drilling platform as the center, the stand-by ship is located 50 km to the northwest, the supply ship 350 km to the south, and the ocean monitoring aircraft 400 km to the west. The initial state is shown in Figure 5.
During the sensing cycle, the drilling platform remains relatively stationary. The stand-by ship sails northwest at 30 km h−1 on a search mission; the supply ship approaches the drilling platform at 25 km h−1 on a support mission; the ocean monitoring aircraft flies from the west side to the east with a reconnaissance radius of 400 km at 800 km h−1; and the perception cycle is 2.7 h.
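As a quick check on the scenario (a flat-earth sketch with the platform at the origin; the coordinate convention and the straight-line cardinal-direction motion model are our simplifications), the entity positions over the 2.7 h cycle can be computed as:

```python
# Unit vectors for cardinal directions, x east / y north, km.
DIRS = {"north": (0.0, 1.0), "south": (0.0, -1.0),
        "east": (1.0, 0.0), "west": (-1.0, 0.0)}

def position(start_xy, direction, speed_kmh, t_h):
    """Straight-line position after t_h hours along a cardinal
    direction (flat-earth simplification for the sketch)."""
    ux, uy = DIRS[direction]
    d = speed_kmh * t_h
    return (start_xy[0] + d * ux, start_xy[1] + d * uy)

T = 2.7  # perception cycle, h

# Supply ship starts 350 km south of the platform, heads north at 25 km/h.
supply_end = position((0.0, -350.0), "north", 25.0, T)
print(supply_end)    # closes 67.5 km: (0.0, -282.5)

# Ocean monitoring aircraft starts 400 km west, flies east at 800 km/h.
aircraft_end = position((-400.0, 0.0), "east", 800.0, T)
print(aircraft_end)  # 2160 km flown: (1760.0, 0.0)
```

The aircraft's 2160 km leg far exceeds the 1000 km sensing radius used later, which is consistent with its 400 km reconnaissance radius sweeping across the whole awareness space during the cycle.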
Based on the above models and scenarios, it is required to perceive the electromagnetic situation of each target in an all-around way, to define the radar threat range at different moments in 3D space, and to generate a comprehensive situation map.

The Main Data Processing Flow
1) After the primary information fusion, the input information of the ISA system is shown in Table 1, where the array data[*] stores the entities' coordinate information at different times.
Search for the above entity information in the entity library; if the model information exists, the information is stored supplementally; otherwise, a new entity is stored in the entity library. 2) Set the SA task to "electromagnetic situation estimation", search for the relevant keywords in the template library, find similar event templates, and determine that the main purpose of the event is to estimate the trajectories of the perception targets and the threat range of the radar. If the algorithm library associated with the target library contains algorithms for trajectory prediction and radar threat range estimation, and the parameter library contains attribute feature information related to the perception targets, the system recognizes that an event template exists, clarifies the perception style, and continues; otherwise, it prompts for the missing information and switches to manual operation.
3) According to the database, the detection range of the Y-model ocean monitoring aircraft is generally above 600 km, and its flight altitude is about 10 000 m. Therefore, the sensing free space is the entire entity area and its electromagnetic coverage, set as a region centered on the drilling platform with a radius of 1000 km and an altitude of 20 000 m. The perceived targets are the individual entities. According to the initial state information, the 3D situation of the awareness space at the initial moment is shown in Figure 6. 4) As the electromagnetic situation estimation is carried out in free space, the terrain is set to sea level and the sea condition is good. According to the input information, the target elements are extracted, matched, and classified against the data in the database, together with the various performance attributes, generating a collection of situation element information P_i = {time, model, batch, location, speed, heading} at different time points within 2.7 hours; two groups are obtained. 5) The information in the cells unrelated to the task is emptied, and the 2D and 3D images are shown in Figure 7, respectively. 6) At this point, the physical space situation has been observed.
The system determines whether the entity has fittable track information in the perception cycle. If not, the input data only support the discrete-state 3D reconstruction and cannot support situation comprehension; the current information is then visualized and handed over to manual decision. If track information exists in the perception cycle, continue. 7) According to the above grouping, within the comprehension cycle, the target states of all discrete moments are merged and fitted, and the tracks are detected separately according to the relationship elements among the targets. The situation is correlated with the tracks, forming a discrete situation set sorted by time, that is, realizing the continuous-state 4D reconstruction. The motion trajectories of the ocean monitoring aircraft, the stand-by ship, and the supply ship are drawn and saved, a comprehension of the target movement situation is achieved, and finally a comprehensive situation map is generated, as shown in Figure 8. 8) Determine whether the situation estimation operation can be performed. If the basic parameters of the detection radars carried by the entities do not exist in the database, the situation estimation cannot be performed; the comprehensive situation map is visualized and stored, and the manual decision phase starts. If the related parameters exist, continue. 9) Assuming that each entity has turned on its radar, there are many possibilities for parameter selection. For example, the electromagnetic potential differs according to the parameter selection for the drilling platform alone, as shown in Figure 9.
Therefore, in order to minimize the threats detected by the radar, the radar equation [35] is called from the algorithm library and the parameters are maximized, with the horizontal direction function of the antenna taking its value range according to the template. It should be noted that the above simulation of the electromagnetic situation only gives the maximum threat space of the radar; in the actual situation, there is great uncertainty about the number.

Conclusion and Remark
The paper gives some basic concepts and a theoretical framework for ISA, and integrates existing methods to form a basic ISA process. New concepts such as "discrete state" 3D reconstruction, "continuous state" 4D reconstruction, "future trend" 5D reconstruction, and "situation" fractal reconstruction are given as support for intelligent realization. Through the proof of ISA convergence, the description of the system's flexible configuration, and the simulation case analysis, the ISA framework is shown to be feasible. For further construction and implementation, the following suggestions are given. To strengthen multidimensional data awareness construction: SA emphasizes a deeper understanding of the input data, so it is necessary to achieve a more rounded comprehension in higher dimensions by continuously increasing the dimensionality of the data. Especially for battlefield SA, the comprehensive situation map is generated at an objective level, that is, including the land, sea, air, space, electromagnetic, network, and other dimensions. By reconstructing 1D information into 2D, image understanding can be obtained; reconstructing 2D into 3D yields depth information; reconstructing 3D into 4D yields dynamic event information; and reconstructing 4D into 5D yields prediction information about unknown events. The dimensions are not independent: the fractal dimension between related dimensions contains more complex information, and mining it can effectively make up for the deficiency of topological-dimension perception.
To strengthen shared database construction: In the ISA framework, the shared database is the basis for system operation. Like human knowledge and experience, the shared database is also a manifestation of teamwork and complementary collaboration. Therefore, the construction of shared databases and big data should follow a common data framework for standardization. For battlefield SA in particular, especially the long-term construction of battlefield geospatial data, reconnaissance intelligence, operational planning, and force deployment, the various operational rules, event templates, estimation algorithms, and classification methods must be implemented under this framework. In addition, data scheduling and read/write speed should be optimized from both the hardware and the software sides. It can be said that the more complete the information in the database, the higher the system's degree of autonomy.
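A common data framework of the kind suggested here could wrap every library entry in one standard envelope so that nodes can exchange and validate records uniformly. The field names and example entries below are hypothetical, intended only to show the idea of one envelope shared by rule, template, and algorithm libraries.

```python
import json

def make_entry(library, entry_id, payload, version="1.0"):
    """Hypothetical common envelope: every shared-database record, whether
    an event template, operational rule, or algorithm descriptor, carries
    the same metadata so any ISA node can parse and validate it."""
    return {"library": library, "id": entry_id, "version": version,
            "payload": payload}

# Illustrative entries for two different libraries, same envelope:
template = make_entry("event_templates", "radar_lock_on",
                      {"trigger": "emitter_mode == 'track'",
                       "threat_level": "high"})
rule = make_entry("rule_library", "ew_rule_07",
                  {"if": ["jamming_detected"], "then": "raise_alert"})

# A standardized envelope serializes identically on every node:
record = json.dumps(template, sort_keys=True)
```

Deterministic serialization (here via `sort_keys=True`) is one small example of the standardization that makes multinode sharing and consistency checks practical.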
To strengthen modularization construction: At present, the theories and algorithms of the various aspects of SA are diverse and mature, and the related technologies scattered across the fields of artificial intelligence can basically solve the main problems in their own domains. What is most lacking is the integration of these mature technologies into a systemic ISA solution. Through standardized data interfaces, modularizing and componentizing the various algorithms and shared databases to achieve adaptive system customization for different SA tasks will be the next important research direction.
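The componentization idea can be sketched as a uniform module interface plus a composable pipeline: each algorithm becomes a pluggable unit, and customizing the system for a different SA task amounts to choosing a different module list. Class and field names below are hypothetical illustrations, not part of the paper's framework.

```python
from abc import ABC, abstractmethod

class SAModule(ABC):
    """Hypothetical standardized interface: every SA algorithm is wrapped
    as a component with a uniform input/output contract."""

    @abstractmethod
    def process(self, data: dict) -> dict: ...

class ThreatEstimator(SAModule):
    def process(self, data: dict) -> dict:
        # Toy rule, illustrative only: threat rises with emitter power.
        data["threat"] = "high" if data.get("power_w", 0) > 1e5 else "low"
        return data

def run_pipeline(modules, data):
    """Adaptive customization = selecting and ordering the module list."""
    for m in modules:
        data = m.process(data)
    return data

out = run_pipeline([ThreatEstimator()], {"power_w": 1.5e6})
```

Because every module speaks the same dict-in/dict-out contract, swapping an estimation algorithm or inserting a new fusion stage requires no change to the surrounding system.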

Figure 1. ISA and human-computer interaction model.

Figure 3. Collaboration between the database system and the learning inference mechanism.

19) Situation prediction input judgment: Determine whether the input information required for the situation prediction process is available; if yes, go to the target location estimation operation; otherwise, visualize the comprehensive information, save the situation understanding result, and end the work.
20) Target location estimation: Predict the targets' locations and activity ranges.
21) Event prediction: According to the template library, rule library, and script library, refer to the event template and situation template as well as the target's customary tactics, and use the corresponding reasoning method for event detection and path planning.
22) Scenario generation: Generate a set of scenario hypotheses based on the symbol library.
23) Calculate the initiative:
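The control flow of steps 19)–22) can be sketched as a small dispatcher. The callables passed in are hypothetical stand-ins for the paper's library-driven operations (location estimation, event prediction, scenario generation) and for the visualize/save fallback of step 19).

```python
def situation_prediction(inputs, predict_location, predict_events,
                         generate_scenarios, visualize, save):
    """Control-flow sketch of steps 19)-22): if the prediction inputs are
    available, run location estimation, event prediction, and scenario
    generation; otherwise visualize and save the situation-understanding
    result and end the work."""
    if not inputs:                          # 19) input judgment
        visualize()
        save()
        return None
    locations = predict_location(inputs)    # 20) target location estimation
    events = predict_events(locations)      # 21) event prediction
    return generate_scenarios(events)       # 22) scenario generation

# Toy stand-ins for the library operations, illustration only:
result = situation_prediction(
    inputs={"track": [(0, 0), (1, 1)]},
    predict_location=lambda d: {"t+1": (2, 2)},
    predict_events=lambda loc: ["approach"],
    generate_scenarios=lambda ev: [{"hypothesis": e} for e in ev],
    visualize=lambda: None,
    save=lambda: None)
```

Keeping the branch of step 19) explicit matches the process description: prediction is attempted only when its inputs exist, and the understanding result is preserved either way.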

Figure 6. 3D situation space at the initial moment.
Under the maximum threat level, the horizontal direction function of the antenna takes the value range [0 ∼ 2] and the vertical direction function takes the value range [−0.5 ∼ 0.5], to achieve the 5D reconstruction. The electromagnetic situation estimation for group one is shown in Figure 10: a) is the 2D situation map and b) is the 3D electromagnetic situation map. For the electromagnetic situation estimation of group two, the initial- and final-moment states are shown in Figure 11. 10) Generate a situation hypothesis and merge the cluster situation assessment results to form a comprehensive situation assessment, with the initial and final moments as shown in Figure 12.

Figure 9. 3D electromagnetic situation of the drilling platform with different parameters.

Figure 10. Electromagnetic situation map of group one.

Figure 11. 2D/3D electromagnetic situation map of group two.

Table 1. Target information table.