Multi-sensor data and temporal image fusion cross-validation technique for an agricultural yield monitoring system

In recent years, the versatility of multi-sensor data fusion technology and its diverse applications across disciplines have drawn considerable research interest. Remote sensing involves the measurement and recording of data from a scene; remote sensing systems are therefore known to be a powerful tool, as they help monitor the earth's atmosphere and surface at different scales. Remote sensing faces a serious challenge because the data captured by multiple sensors are heterogeneous, which affects both the efficiency of processing and the effectiveness of the sensed data. Increasing diversity in the data likewise increases the ancillary datasets. These multimodal datasets are used jointly to improve processing performance as the application requires. Initially, the fusion of temporal data with backscattered/temporal data is possible from the data retrieved by remote sensing. Many researchers have studied the fusion of multi-temporal and multimodal data and have proposed different approaches. This paper presents a cross-validation technique for monitoring yield. The monitoring system is developed by fusing multi-sensor data and temporal images. This fusion is performed, and the performance of the yield monitoring system is analyzed from the results obtained. Using the cross-validation technique, the efficiency of the system is found to improve.


Introduction
Remote sensing is a methodology used in a wide range of applications such as civilian applications, military, and surveillance (C. Elachi, 1987). The remote sensing process involves the measurement and recording of data from a scene. Remote sensing systems are therefore known to be a powerful tool, as they help monitor the earth's atmosphere and surface at different scales, varying from global to local. These monitoring systems can provide vital mapping and coverage, including land cover classifications such as forest, soil, water bodies, and vegetation. The grade of accuracy greatly depends on the following: 1. the knowledge of the researchers, 2. the area cover types, and 3. the image quality. The relation among forest, soil, and drainage is obtained when a correlation is plotted among the superficial deposits, drainage, and features of the topography. These details are vital for the management of land in use and the classification of the land topography. Airborne and spaceborne sensors are used in the remote sensing process. These sensors can acquire data in varying spectral bands, on an operating-frequency basis, at numerous resolutions. The aspects related to data fusion are shown in Fig. 1.
There are numerous data available across a wide spectrum from the sensors for the same observed scene or site. A single sensor's data becomes inconsistent, incomplete, and even imprecise for many applications (P.K. Varshney, 1997; D.L. Hall et al., 1997; C. Pohl et al., 1998). The uncertainty and errors due to single-sensor information may be decreased with the help of multi-sensor data, which provide compatible data used in the fusion to give a better understanding of the observed site (A. Farina et al., 1996; V. Clement et al., 1993). The remotely sensed images are provided either by radar or by heterogeneous sensors. Thus, multi-temporal images of the same site are needed to determine the changes that have occurred in the considered site. Site segment classification needs the multi-dimensional images of radar and non-radar sensors, as segment separation increases because of the multi-polarization and multi-spectral data (A. Farina et al., 1984; F.W. Leberl et al., 1990). The vital information in the interpretation of the site scene is the contextual data.
When it comes to labeling, a pixel in isolation provides incomplete information about the desired characteristics. Contextual information is mainly dependent on three domains: 1. time, 2. frequency, and 3. space. The different electromagnetic bands of the spectrum are known as the spectral dimensions. These dimensions are obtained either by a single sensor or by numerous sensors operating at multiple frequencies. Single-band images offer insufficient resolution for the separation of various ground cover classes. The complementary data of the same observed site are available for image fusion as follows: 1. multi-sensor image fusion, 2. multi-temporal image fusion, 3. multi-frequency image fusion, 4. multi-polarization image fusion, and 5. multi-resolution image fusion. The remaining sections of the paper are organized as follows. Section 2 describes the detailed study related to our work. Section 3 describes the components used in the yield monitoring system. Section 4 presents the methodology used for the data fusion of the temporal images, its data processing, and the classification algorithms; it also discusses the difficulties involved in classifying the data. Section 5 contains the experimental results and their discussion.

Related Work
The remotely sensed data from satellite platforms are of different bandwidths and varying spatiotemporal resolutions; thus, the data needed from the satellites are not always available. A huge amount of spatiotemporally adjacent data, or historic data, is needed for pixel removal and lost-data reconstruction (A. Chatterjee et al., 2010). A similar design across two distinct satellite platforms can be used in the data merging process, which helps partially in the retrieval of the lost data. The spatial, temporal, and spectral images can be enhanced using the well-known technique called data fusion (J. Dong et al., 2009). Data fusion tends to extract data from asynchronous time-series of satellite data, whereas data merging algorithms use synchronous time-series data from satellites with varying temporal, spectral, and spatial resolutions. The data fusion technique enhances the spectral, spatial, and temporal properties and converts them into a synthetic image. The vital factors that affect the development of data fusion or data merging algorithms, irrespective of the land surface or the top-of-atmosphere environment, are: 1. the characteristics of the illumination, 2. the satellite platform's spatial and spectral properties, 3. contamination by cloud, and 4. the angle of view (C. Song et al., 2003). The different parameters of the land surfaces are robustly measured with the help of the newly designed approach, which provides multi-sensor data along with the data fusion. These approaches are capable of providing accurate, complementary, and robust output from the remotely sensed data. There are three sensor fusion techniques: remote with proximal sensor fusion, proximal with proximal sensor fusion, and remote with remote sensor fusion. Data fusion integrates the data from multiple sensors, and the accuracy of the output is more precise than the output from a single sensor.
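As a minimal illustration of how a data fusion step can combine two synchronous images of different spatial resolutions into one synthetic image, the following sketch blends a fine-resolution band with an upsampled coarse-resolution band. The nearest-neighbour resampling and the simple weighted average are illustrative assumptions, not the specific algorithms cited above.

```python
import numpy as np

def upsample_nearest(coarse, factor):
    """Resample a coarse-resolution band to a finer grid by pixel replication."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def fuse_bands(fine, coarse, factor, weight=0.5):
    """Blend a fine-resolution band with an upsampled coarse band
    into one synthetic image (simple weighted average)."""
    return weight * fine + (1.0 - weight) * upsample_nearest(coarse, factor)

# Toy example: a 4x4 fine band fused with a 2x2 coarse band of the same scene.
fine = np.ones((4, 4))
coarse = np.zeros((2, 2))
fused = fuse_bands(fine, coarse, factor=2)
```

In practice the weight and the resampling kernel would be chosen per band and per sensor pair; the point here is only the resample-then-blend structure.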
The technique chosen for the measurement greatly influences the measurement accuracy. Choosing the sensor set to be integrated is greatly dependent on the following: 1. practical information, 2. fusibility of the sensor, 3. objective parameters, and 4. the requirement and situation of the application. There are two types of sensors: one type is readily available for blending, whereas the other type needs higher-end calibration and regular monitoring of the data interpretation and processing.

Proposed System Model
The proposed system model is based mainly on volumetric flow measurement. Various sensors are engaged to enhance the yield monitor. The system model includes a GPS, a junction box, an optical sensor, and a field computer. The flowchart of the multi-sensor data and temporal image fusion is shown in Fig. 2. At the top of the clean grain elevator, an optical sensor is mounted to measure the grain yield. The elevator housing includes a fixed hinged mounting bracket that encloses the transmitter and receiver together with the lens and lens holders. The operational state of each sensor is indicated by a light-emitting diode. The infrared light beam is transmitted from one end of the elevator paddles to the other. Any interruption of the light beam is detected by the receiver. When a paddle passes the sensor, the beam is broken; the beam remains broken for a period that depends on the amount of grain on the paddle.
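Since the beam-break time depends on the amount of grain on each paddle, the grain mass can be sketched with a single calibration constant relating break time to mass. The function name and the constant `cal_g_per_s` are hypothetical; a real yield monitor would be calibrated against weighed loads.

```python
def estimate_grain_mass(break_durations_s, cal_g_per_s):
    """Estimate total grain mass (g) carried past the optical sensor.

    Each entry of break_durations_s is one paddle's beam-break time in
    seconds; cal_g_per_s converts beam-break time to grain mass and must
    be obtained by calibration against known loads.
    """
    return cal_g_per_s * sum(break_durations_s)

# Three paddles pass the sensor; the calibration constant is hypothetical.
mass_g = estimate_grain_mass([0.10, 0.15, 0.12], cal_g_per_s=500.0)
```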
The Global Positioning System (GPS) is part of the sensor-based yield monitoring system. Satellite signals are utilized in the proposed monitoring system. The main purpose of the sensor-based GPS is to identify the desired location and to display the combine speed via satellite images. The main benefit of GPS mapping technology is its sub-meter accuracy. The Global Positioning System associates ground-based segments with segments in space. The GPS is usually mounted at a high central point above ground level.
The field computer is usually positioned opposite the driver's seat. The output signals of the sensors are stored and displayed by the field computer, where these signals are treated as data. The field computer includes the GPS interface, the graphical user interface, and external data storage devices, and it controls the communication among the various devices.
The field computer can perform various functions such as sensor integration and calibration, and it reports moisture content, online speed, area covered, and yield per hectare. The header data from the yield sensor, GPS receiver, and moisture sensor are collected in the junction box. The junction box also records the header height, cut width, and GPS receiver data. The correction factor, crop type, field name, and calibration number are filed in the junction box. The junction box is placed inside the cab mainly for safety and protection rather than for weatherproofing. The sensor cables are routed through bulkheads, depending on the mounting position of the junction box.
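The yield-per-hectare figure reported by the field computer can be sketched from the quantities listed above (mass flow, ground speed, cut width, and moisture content). The standard-moisture correction shown is a common convention assumed here, not a formula taken from the paper.

```python
def yield_t_per_ha(mass_flow_kg_s, speed_m_s, cut_width_m,
                   moisture_frac, std_moisture=0.14):
    """Instantaneous yield in t/ha, corrected to a standard moisture content."""
    area_ha_per_s = speed_m_s * cut_width_m / 10_000.0       # m^2/s -> ha/s
    moisture_adjust = (1.0 - moisture_frac) / (1.0 - std_moisture)
    return (mass_flow_kg_s / 1000.0) * moisture_adjust / area_ha_per_s

# 10 kg/s at 2 m/s over a 5 m cut width, already at standard moisture.
y = yield_t_per_ha(10.0, 2.0, 5.0, moisture_frac=0.14)
```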
The combine header encloses the spring-loaded on/off switch. The yield monitor switch differentiates among the many harvesters engaged in the fieldwork. When the header is in the raised position, data logging is suspended. The junction box starts to count the area when the switch is in the ON condition. The switch is ON when the combine header is in the working position.
Temporal phenomena are analyzed in remote sensing data through the time variable. The satellite constellations Sentinel-1 and Sentinel-2 offer features such as fine-resolution images and short revisit times. These characteristics allow satellite remote sensing images to capture the semantic content of a scene. The time variable extends the dimensionality in space and time from 3D to 4D. The time variable, combined with image pairs, short time-series, or long time-series of active SAR images, is exploited to improve performance. In structural multi-temporal sensor fusion, time-series multi-sensor images are fused. Various data analysis techniques are obtained by fusing temporal information with image spatial, image backscattering, and image spectral information.
Multi-temporal data includes the analysis of image time-series. Fine spatial resolution images are acquired by associating spectral, spatial, and temporal information over dense time-series. Sentinel-1, Sentinel-2, and Landsat-8 are a few dense time-series sources that produce fine-resolution images which can be further improved. These data are used to analyze field vegetation in agricultural farms and to improve contemporaneous applications. Fusing individual multi-sensor time-series obtained from various satellites also supports precision farming. The analysis of temporal information varies depending on the problem and its extent. Initially, the problem is classified with respect to the multi-temporal data, and the solution to the respective problem is obtained later. Challenges related to labeled training data are encountered in multi-temporal classification.
At least a pair of multi-temporal images is obtained from the same geographical area at various times.
Temporal data are classified into various types based on the objective of the data analysis. A recent land cover map may be attained from the time-series, a set of multi-temporal land cover maps may be attained from each item of the time-series, or every pixel in the images may be represented as the temporal information of a seasonal land cover map. Depending on the representation, multi-temporal classification is approached in three different ways. In an application-oriented system, the identification of the problem is very difficult to implement.
In supervised direct multi-date classification, the multi-temporal data are represented as a stacked vector that is given as input to the classifier. The image stacking vector is acquired from pixels characterized at several times. The newly available images produce the land cover map according to the trained classification. The image acquisition dates are measured on the assumption that the land cover remains unaltered, and the data distribution is modeled according to the various methodologies. The statistical Bayesian method is utilized for evaluation at later times. Each attribute of the time-series land cover is categorized using the multi-date direct classification method; therefore, the land cover map for the current acquisition time is obtained. Thus, explicit land cover transitions can be determined, and removing the assumption between the selected data does not affect the progress. The class probabilities, combined with adequate training data, provide the information for the framework of the proposed model. The proposed model is applicable in contemporaneous applications. To overcome the drawbacks of multi-date direct classification, various methodologies are adopted.
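The stacked-vector representation described above can be sketched as follows: co-registered images acquired at different dates are flattened and concatenated per pixel, so any ordinary supervised classifier can then be applied to the multi-date features. The helper name and shapes are illustrative assumptions.

```python
import numpy as np

def stack_dates(images):
    """Stack co-registered multi-date images into per-pixel feature vectors.

    images: list of arrays of shape (H, W, bands), one per acquisition date.
    Returns an array of shape (H * W, n_dates * bands) ready for a classifier.
    """
    flat = [im.reshape(-1, im.shape[-1]) for im in images]
    return np.concatenate(flat, axis=1)

# Two 2x2 images with 3 bands each -> 4 pixels, 6 features per pixel.
t1 = np.zeros((2, 2, 3))
t2 = np.ones((2, 2, 3))
X = stack_dates([t1, t2])
```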
Cascade classification of image pairs is involved in classifying the multi-temporal data. The block scheme of the temporal classification is shown in Fig. 3. The temporal correlation at different periods is utilized to link the class probabilities of a single image from a Bayesian perspective. The proposed model estimates the class distribution for each individual date to determine the temporal correlation between the images. The proposed multi-temporal and multi-sensor model includes a neural network classifier in association with a Bayesian decision framework. The proposed structure is free from explicit distribution estimation, as it is acquired from the multispectral and SAR multi-temporal images.
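A minimal sketch of the Bayesian cascade step, under the usual assumption that the class posterior at the first date is propagated through a class-transition matrix and combined with the observation likelihood at the second date. The transition matrix and likelihood values below are illustrative, not estimates from the paper.

```python
import numpy as np

def cascade_update(post_t1, transition, likelihood_t2):
    """Cascade classification: propagate class posteriors from date t1 to t2.

    post_t1:       (C,) class posterior at t1.
    transition:    (C, C) matrix, transition[i, j] = P(class j at t2 | class i at t1).
    likelihood_t2: (C,) observation likelihood P(x2 | class) at t2.
    Returns the normalized class posterior at t2.
    """
    prior_t2 = transition.T @ post_t1      # temporal prediction
    post = likelihood_t2 * prior_t2        # Bayes update with the t2 observation
    return post / post.sum()

# Two classes: an identity transition keeps the t1 posterior unchanged
# when the t2 observation is uninformative.
post_t2 = cascade_update(np.array([0.8, 0.2]), np.eye(2), np.array([0.5, 0.5]))
```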
The fusion model is usually preferred in the classification scheme. Kernel methods, multiple classifier systems, and several neural models, namely radial basis function networks and multilayer perceptron neural networks, are used in the fusion model. The challenges in classifying multi-temporal data using deep learning architectures, namely convolutional neural networks, are minimal.
Thus, the proposed model framework apprehends the spatio-temporal arrangement. The precision over the land-cover-transition maps is high.
Achieving multi-temporal image modeling through a deep learning framework at affordable computational cost remains a significant drawback. A deep learning framework associated with the 4D data structure is more complex, and hence a large amount of training data is required for effective performance. The remote sensing data is categorized depending on the availability of labeled samples with time information sources. These data are used to train supervised learning algorithms.
Temporal-based modeling includes multi-temporal classes, time-series-based linkage between the various classes, and distinct spatial regions with high temporal value. Semi-supervised classification methods are achieved by combining labeled training data with the recorded images to develop the proposed model. The spatial-temporal properties include the estimation of the time-series. The remote sensing scheme makes use of the expectation-maximization algorithm for the land-cover map. This scheme is applicable to the classification of cascade, compound, and bi-temporal images. Thus, the proposed system model consists of linked multispectral, SAR multi-temporal, and multi-sensor images. The proposed system utilizes an active-learning structural compound classification to improve the training data at low collection cost.
The high multi-temporal label classes in the images are used to acquire ad-hoc training data samples. The proposed transfer learning scheme propagates the data of a given image to the training sets of other images within the constrained time-series. The class labels of the images are transmitted within the constrained time-series to the remaining pixels that have remained unaltered. The multi-temporal classifier is provided with increased supervision due to the changes found by unsupervised detection. For the classification of multi-temporal data, a partially supervised 4D data framework along with a deep learning architecture is provided. Finally, the network is decoupled with respect to the spatial-temporal pattern without affecting the extraction capability.

Result And Discussion
The growing availability of data captured by various sensors, connected with computational tools and methodological approaches, makes the fusion of complementary, diversified datasets attractive. This helps increase the efficiency and capacity of the data processing system for remotely sensed data with respect to the problems at hand. From this detailed interpretation, it is clear that the data fusion technique is gaining importance and popularity. Few applications of the data fusion technique will fail to gain importance in the future. The reports and documents generated by humans will act as a source for future technology. Powerful frameworks will be developed with the capacity to process a wide variety of data.
Fig. 4 shows the simulation result, namely the root mean square error (RMSE) distribution. This result is obtained when a trait is predicted, rather than replaced, using random and blocked cross-validation with different values of k. The experiment produced an RMSE within the 5-95% range of the true value of the real model.
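The blocked cross-validation used here can be sketched as follows: the folds are contiguous blocks rather than random samples, which respects the spatio-temporal ordering of the data. The fold construction and the toy mean predictor are illustrative assumptions, not the exact protocol of the experiment.

```python
import numpy as np

def blocked_cv_rmse(X, y, k, fit, predict):
    """RMSE from k-fold cross-validation with contiguous (blocked) folds."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)   # contiguous blocks, not shuffled
    sq_errs = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        model = fit(X[train_idx], y[train_idx])
        sq_errs.append((predict(model, X[test_idx]) - y[test_idx]) ** 2)
    return float(np.sqrt(np.concatenate(sq_errs).mean()))

# Sanity check with a constant target and a mean predictor: RMSE is zero.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.ones(10)
rmse = blocked_cv_rmse(X, y, k=5,
                       fit=lambda X, y: y.mean(),
                       predict=lambda m, X: np.full(len(X), m))
```

Repeating this over a range of block sizes (values of k) yields the distribution of RMSE values plotted against block size.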
Fig. 5 provides the forecast estimate by plotting the repeatedly performed cross-validation with varying block size. A performance threshold is drawn on this result. For a given performance prediction threshold, the projection requirement is varied; thus, a range of values is held by the forecast horizon. The forecast horizon is marked at the position beyond which the projection becomes too unreliable to be useful.

Conclusion
This paper presented multi-sensor data and temporal image fusion, cross-validation, and classification techniques and algorithms. Data fusion is a state-of-the-art methodology. Data fusion has a wide range of applications and is known to be a multi-disciplinary field of research. We have developed a data-centric arrangement for the data fusion methodology. This system is designed to explore the theoretical framework and its challenging associated aspects. This paper also introduced algorithms for the existing categories. From the exploration of fusion technologies in this paper, it is evident that data fusion systems are becoming a commonplace application technology. There is an increased demand for large-scale data fusion technology involving the web and sensor networks. Thus, highly scalable data fusion algorithms are drawing increased interest on distributed architectures. As a result, a trend has developed that will increase the adoption of scalable data fusion algorithms in the future. This trend also motivates researchers to further investigate the performance of data fusion systems in real-time applications for security and reliability purposes.

Declarations
Conflict of interest: The authors declare that they have no conflict of interest.

Funding : Not Applicable
Availability of data and material: Not Applicable

Code availability: Not Applicable

Figure 1
Characteristics of multi-temporal data related fusion.

Figure 2
Flowchart for the multi-sensor data and temporal image fusion.

Figure 3
Block scheme of the temporal classification.

Figure 4
Root mean square error distribution (RMSE).

Figure 5
Performance evaluation for various block sizes.