Artificial Neural Network coupled Condition Monitoring for advanced Fault Diagnosis of Engine

Abstract This paper reflects on the use of the Artificial Neural Network (ANN) approach to diagnose and interpret engine failure behaviour. The research focuses on the analysis of quantitative wear trend patterns through Condition Monitoring (CM) and soft computing approaches. Oil analysis has been carried out to observe the engine failure trend. An ANN model using a Nonlinear Autoregressive with Exogenous Input (NARX) architecture has been employed to predict quantitative outputs, namely Wear Particle Concentration (WPC), Wear Severity Index (WSI), Severity Index (SI) and Percentage of Large Particles (PLP), from the input variables engine running hours, RPM and oil temperature. The correlation coefficient and error measures are evaluated statistically to establish the model's robustness and to map the input-output sequence effectively. The resulting ANN model demonstrates the capability for advance diagnosis and improved prediction of engine performance.


Introduction
Engine failure is undesirable because the engine must deliver a sustained power output and is an essential energy source for both the automotive and industrial sectors. Failure analysis is therefore a significant area of study. In this context, researchers have predicted failure through lubricating oil analysis using ferrography techniques, which help to guide the maintenance process before a failure occurs in engines run on alternative fuel. A root cause analysis of the outer ring fracture of a four-row cylindrical roller bearing has been reported, with emphasis on visual inspection of the failed rolling surfaces [1]. Failure by overheating of the exhaust valve of a heavy-duty CNG engine has also been investigated, with the valve observed to fail well before its predicted life span [2]. The present research also focuses on a heavy-duty CNG engine, but from a different angle: the aim is to increase availability by flagging the wear tendency rate so as to prevent failure by locating worn-out equipment, thereby minimising repair costs and sustaining performance.
In this respect, maintenance contributes significantly to the life of technological capital assets. It comprises the combination of all administrative, technical, and supervisory actions intended to retain equipment and other physical assets in, or restore them to, an appropriate working condition [3]. Maintenance aims to keep the plant from being shut down by an uncontrolled event [4-5]. The health of rotating equipment should be sustained through appropriate maintenance methods, such as condition-based maintenance, preventive maintenance, and breakdown maintenance. Each unit must be operated adequately to keep it secure even under unplanned operating conditions [6][7]. One study was based on 50 oil samples tested over 250 continuous engine hours; it found that maintenance intervals can be extended, although maintenance costs increase at the same time [8]. Various wear atlases have since been issued, some of them distributed online. The correct sampling frequency must be specified in the vehicle's maintenance log. When potential problems are identified, the sampling frequency should be raised until the condition and operation of the machine are determined. To measure each lubrication parameter, a constant operating range of the test engine/lubricant should be set over time [9]. Lubricant monitoring is useful for all vital rotating machinery, as the maintenance manager can collect essential data about the equipment's working condition from the lubricant test. While the industry still relies primarily on a sequential, preventive maintenance strategy, growing uncertainty, rising demands and competitive criteria regarding equipment availability and reliability, and the effect of the data revolution on vessel operations favour a better-organised Condition Based Maintenance (CBM) system [10].
CBM focuses not only on identifying and assessing device faults, but also on investigating, predicting, and monitoring faults, in contrast to breakdown and preventive maintenance [11]. Predictive maintenance, a form of condition-based maintenance, relies on the results of analysis of critical degradation parameters [12]. It is the routine monitoring of the existing condition of rotating equipment, its operational productivity and various other parameters, providing evidence to determine the optimum time between interventions so as to minimise costs and reduce the number of unplanned delays [13]. Increasing equipment availability involves not only reducing the amount of damage but also the time needed for repair and inspection: insufficiently reliable equipment cannot reach high capability levels, and the speed of repair, maintenance and inspection is equally significant. To increase the performance of machinery and industrial well-being, all recognised failures with disastrous consequences must be avoided [14].
The core of CBM is condition monitoring, which aims to collect data on equipment state by measuring and tracking various parameters with different instruments. It is an important element of predictive maintenance. Condition monitoring techniques make it possible to identify the root source of failure and take precautionary measures before an error occurs. The information collected can include vibration, acoustic, thermal, oil and lubricant, and current signal measurements. Condition monitoring emerged in the 1970s and 1980s as a new approach to scheduled maintenance, based on assessing the actual state of the equipment [15].
The basic requirements, i.e., human resources, tools, skills, and knowledge, are needed to implement a condition monitoring technique. The technique allows data to be researched, recorded, and monitored using computer tools and error trend curves [16]. When the condition is checked and a reading exceeds the predetermined value, the monitored equipment is declared faulty and a maintenance intervention is triggered. However, little thought has been given, in practice or in theory, to how critical levels and monitoring intervals are determined [17]. The purpose of general maintenance and machine condition monitoring is to predict the degradation trend of equipment performance, which combines reliability and availability, directly and indirectly, at minimal cost [18-22]. Recently, new approaches have been explored to improve equipment reliability, availability, and maintainability [23].
Preventive maintenance is a preferred choice for machinery operators and is usefully complemented by predictive maintenance. Corrective maintenance of structures and equipment is thereby avoided, increasing reliability and overall availability. In addition, the transition from scheduled repairs to data-driven scheduling leads to more effective maintenance, enhancing cost reduction, equipment utilisation, and safety. The industry is therefore looking for reliable, time-efficient operation at the utmost performance, as well as safe and stable operation in unfavourable environments.
The oil analysis technique is a type of failure analysis covering a wide range of topics, including oil degradation analysis and physical, chemical and contamination detection. The analysis is performed by various techniques such as ferrography, magnetic plug analysis, Fourier transform infrared spectroscopy, infrared spectroscopy, the Plasma Spectrometer Test (PST) and more [24-27]. Ferrography is an example of a successful oil analysis method capable of monitoring the wear of engineering systems. It is a technique for separating particles on glass based on the interaction between magnetic forces and the flow of suspended particles in an external magnetic field [28-29]. The method was developed in the 1970s to study the occurrence of wear particles in lubricated dynamic components. The unique advantages of ferrographic analysis have opened new areas of research and practice in wear monitoring of mechanical equipment. Since most mechanical components are made of iron and steel, the idea of using magnets to trap wear particles in lubricants can be applied directly. Much information has been revealed by repeated experiments and research into the development of ferrography as a technique of oil analysis [30-34].
The ANN is capable of solving analytical modelling problems, including non-traditional ones involving physical, dynamic and energy transfer processes. It can therefore support approaches that help decision-makers choose the suitable maintenance strategy for their equipment. The need for smart technology as an adjunct to existing condition monitoring programmes in experiments and applications is emerging, and ANN is one of the most successful technologies in this field [35]. ANN provides diagnostic tools to understand and interpret the external state of complex systems [36]. Data-based methods for diagnosing faults and predicting remaining useful life, based on the state of the system and its degradation parameters, are increasingly applied [37]. Unlike the classical model-based approach, ANN is a data-driven, adaptive approach that requires little prior knowledge of the model under study [38]. First, ANNs learn from preceding examples and derive otherwise ambiguous functional relationships between data. Secondly, ANNs generalise well: after training on the presented data, they can handle data they have never seen before correctly. Thirdly, ANN is a more general and flexible mathematical framework than conventional computational and statistical approaches.
ANN has been applied to monitor the status of accelerating alarms, to ease the decision-making load regarding the accuracy of systems and equipment, and to classify faults [39]. ANN has been used to diagnose marine diesel engine failures associated with engine clearance and cylinder load fluctuations; the research suggests that ANN is highly accurate in predicting the faults of a ship engine and can improve engine reliability [40]. A neural network has also been used to diagnose faults in marine systems by feeding input and prediction data into the network [41]. ANN has been applied to CBM of a medium-speed diesel engine in a case study of a fishing boat, analysing real-time monitored data to determine the fault status of the machine [42].
Elsewhere, the performance of a marine diesel engine was predicted with an ANN model from input data, i.e. engine load and speed, mapped to output data, i.e. braking power, brake-specific fuel consumption and exhaust gas temperature. The results suggest that the prediction error of the ANN models is smaller than that of the experimental models [43].
Self-organising map (SOM) neural networks have been applied for clustering and monitoring data on marine diesel engines [44]. The performance of a data-driven model using CBM in a marine propulsion system has also been scrutinised; the results established that it is possible to combine CBM and machine learning techniques, with ANN typically delivering the best outcomes [45]. Several analyses have been conducted using a multi-layer neural network to predict oil production from the Gulf of Mexico; choosing the dimensions of serial data is difficult and time-consuming and requires further study [46]. Validation of nonlinear systems remains an active area of research, and collections of papers based on sets of nonlinear system identification benchmark problems have been published [47][48][49][50][51].
Hence, quantitative analysis of wear particles is a compelling way of deriving the key parameters required to diagnose and predict the failure of rotating equipment. It can be gainfully applied to failure prediction for an engine operated on unconventional fuel, in order to prescribe an effective maintenance strategy that may differ from the standard practice for engines operated on conventional fossil fuel.
This paper therefore combines Condition Monitoring as a diagnostic approach with an ANN model based on the NARX architecture. After testing a large number of lubricant oil samples, an ANN diagnostic system has been designed at an optimal parameter setting to predict faults at an early stage. The predictive model developed from the quantitative wear data sets can monitor engine performance, support preventive maintenance, and warn of any condition that calls for replacement or repair.

Significance and novelty of work
After deriving a holistic view from the literature survey, the present work has been aimed at the following objectives:
a) To incorporate condition-based maintenance so that any interruption of power generation due to regular inspection and overhauling can be minimised.
b) To predict the failure of components in advance so that the necessary repair or replacement work can be planned.
c) To integrate experimental and soft computational study, which is a novelty in itself.

Methodology
The current work is split into two parts, experimental and soft computing. Both approaches are briefly described in the following subsections.

Experimental Procedure
The experimental procedure consists of collecting a series of lube oil samples, followed by quantitative analysis of the wear debris in the samples by ferrographic techniques.
These are briefly described in the following sections.

Oil analysis
Oil testing is a quick way to measure the particles present in the oil, which indicates engine health, much as a medical blood test can diagnose disease from a blood sample. In recent years, demand for engine lubricants has been increasing, particularly in the energy generation sector. This has driven the production of synthetic lubricants with a low risk of reacting at high temperatures. Synthetic oils are made using sophisticated processing and modern formulations.
These are derived from PAO-based synthetic compounds (polyalphaolefin, polyester, polyglycol), non-PAO synthetics, esters, alkylated naphthalene, and alkylated benzene. It is becoming increasingly important to use synthetic oils where mineral oils do not meet the requirements. Improper combustion produces oxides and harmful particulates in the environment. Consequently, makers of equipment and lubricants are developing products with longer service life, which can reduce oil discharges during operation of the equipment.
An essential characteristic of lubricants is their behaviour as temperature increases: they are rarely used at room temperature, and operating temperature and pressure usually rise in service.
To enhance the base-oil quality, additives are used to impart specific properties to the oil, mainly when the lubricant works under extreme conditions. Degradation of lubricants, the weakening of their physical properties through time, corrosion and repeated use over their life, is due to oxidation, viscosity change, contamination, and depletion of additives (anti-corrosion, anti-wear, dispersing agents, etc.).
The present oil analysis was carried out using a quantitative ferrographic method.

Sampling point
Samples for the present case study were collected from a four-stroke, twelve-cylinder vertical CNG engine in the oil and natural gas industry; see Table 1 and Table 2 below. The lubricant offers: • Robust stability of the lubricating film, which preserves its properties even under extreme pressure and temperature environments.
• Improved detergent/dispersant capability, guaranteeing flawless engine cleanliness by preventing deposit formation.
• High consistency of alkaline reserve over the lifespan of the lubricant.
Source: lubricant specification sheet, lubricant supplier.

Preparation of oil sample for testing
Magnetic settling of wear particles in lubricating oil begins as soon as the sample is left standing. The particles must therefore be uniformly distributed in order to draw a representative test sample from the bulk sample. The following technique is recommended for producing a homogeneous mixture: • To allow observation of the oil and any significant particles, the oil should be in a clean vessel. Ensure that the vessel is about two-thirds full so that agitation can blend the particles thoroughly into the oil, giving a homogeneous sample.
• Heat the oil to 55 °C (approximately 131 °F). This keeps the particles suspended for as long as possible and removes moisture, in line with ASTM standard practice.
• Remove it from the heat source and shake the bottle vigorously. • Mix 1 or 2 ml of tetrachloroethylene with 1 ml of oil in the sample tube; the viscosity of the oil determines the quantity of tetrachloroethylene applied.
For high-viscosity fluids, add 2 ml of tetrachloroethylene to reduce the viscosity; this causes the viscous oil to flow along the precipitator tube at a rate comparable to lower-viscosity fluids. For low-viscosity fluids, 1 ml of tetrachloroethylene is enough to enable flow into the precipitator tube. It does not matter whether 1 or 2 ml of tetrachloroethylene is used, as long as the required 1 ml of oil is used for each test.
Nevertheless, highly viscous samples that flow too slowly will affect particle deposition by increasing the volume of material accumulated at the DL position relative to the DS position.
• Analysing a sample sooner gives better results. Allowing the sample to settle in the test vial lets particles accumulate at the bottom of the vial, and they then concentrate in the precipitator tube around the DL sensor. To prevent this, use a freshly prepared sample or remix the sample before testing so that the wear particles are adequately dispersed.
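The dilution rule above can be captured in a small helper. This is only a sketch: the paper distinguishes "high-viscosity" from "low-viscosity" oils without giving a numeric cutoff, so the threshold below is a hypothetical placeholder.

```python
# Hypothetical cutoff: the text gives no numeric viscosity threshold,
# only a high- vs low-viscosity distinction.
HIGH_VISCOSITY_CST = 100.0  # assumed cutoff, centistokes

def solvent_volume_ml(viscosity_cst: float) -> float:
    """Millilitres of tetrachloroethylene to add per 1 ml of oil:
    2 ml for high-viscosity oils, 1 ml otherwise."""
    return 2.0 if viscosity_cst >= HIGH_VISCOSITY_CST else 1.0
```

Either volume is acceptable as long as exactly 1 ml of oil is used per test; the helper only standardises the choice.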

Quantitative Ferrographic technique
Quantitative ferrography is useful for analysing the nature, magnitude and growth trend of the wear rate from the particle size distribution of wear debris, as shown in Fig. 1(a); Fig. 1(b) shows the schematic diagram of the DR-V ferrograph. This characterises and distinguishes different wear situations. An oil sample, together with the solvent tetrachloroethylene (C2Cl4), is shaken in a test tube to reduce the viscosity of the oil. The mixture is then made to flow through a precipitator tube under siphonic action, with a magnet placed beneath the glass tube.
The magnetic attraction arrests the ferrous particles: large particles (DL, about 5 microns) are deposited at the entry, while small particles (DS, 1-2 microns) settle away from the entry. The magnetic force is proportional to the volume of a particle, whereas the viscous force resisting motion is proportional to its area, so the distance a particle travels down the glass tube depends on its effective diameter. Two light beams pass through the precipitator tube, one at the DL position and one at the DS position; the attenuation of each beam gives the optical density of the deposited large and small particles, respectively.
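From the DL and DS optical-density readings, the quantitative wear parameters can be derived. As a sketch, the textbook DR-ferrography definitions are shown below; the paper does not reproduce its exact formulas here, so these standard definitions are an assumption and may differ in detail from the authors'.

```python
def wear_indices(dl: float, ds: float) -> dict:
    """Standard DR-ferrography wear indices from the large-particle (DL)
    and small-particle (DS) optical-density readings.
    NOTE: textbook definitions, assumed rather than taken from the paper."""
    wpc = dl + ds                          # Wear Particle Concentration
    plp = 100.0 * (dl - ds) / (dl + ds)    # Percentage of Large Particles
    severity = (dl + ds) * (dl - ds)       # severity index (= DL^2 - DS^2)
    return {"WPC": wpc, "PLP": plp, "severity": severity}
```

A rising severity index signals that large particles are growing faster than small ones, the classic marker of abnormal wear onset.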

Soft computing technique
In-depth wear particle analysis is highly relevant for obtaining the wear trend parameters needed to predict failure with a soft computational model. In the following sections, the soft computing approach used for the current research, ANN with NARX architecture, is briefly discussed.

Construction of NARX model
Data-driven predictive diagnostics are effective in CNG engine prognostic applications because of the simplicity of data acquisition and their consistency on complex processes. They are also of particular importance because of their ability to integrate innovative and conventional approaches, generating inclusive diagnostic methods over wide-ranging data series. One such technique for multi-step prediction is NARX. The NARX model is

z(t) = f(z(t-1), z(t-2), ..., z(t-n_z), r(t-1), r(t-2), ..., r(t-n_r))

where r(t) is the observation of an exogenous input at time t and z(t) is the output, regressed on past values of both signals.
In addition, NARX simplifies the handling of time series data by allowing the network to be switched between open-loop and closed-loop configurations of the feedback, input, and layer states, so the original time series can be configured in the network easily and quickly. Tapped delay lines are used to store the past values of r(t) and z(t). The graphical outline of the NARX model is shown in Fig. 2.
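The role of the tapped delay lines can be illustrated by showing how they turn the two series into open-loop supervised training pairs. This is a minimal sketch for a single exogenous input (the paper uses three); the function name is ours, not the paper's.

```python
import numpy as np

def narx_regressors(z, r, delay):
    """Build open-loop NARX training pairs: each target z[t] is paired
    with the previous `delay` outputs z[t-delay:t] and exogenous
    inputs r[t-delay:t] (a tapped delay line on both signals)."""
    X, T = [], []
    for t in range(delay, len(z)):
        X.append(np.concatenate([z[t - delay:t], r[t - delay:t]]))
        T.append(z[t])
    return np.array(X), np.array(T)
```

With the paper's 10-step delay, each training example carries the last 10 values of every input and output signal.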

Determining input and output data
The normalised data were used directly for training. Normalisation maps each variable onto the operating range of the neuron activation function, so that no single input dominates the others.
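A minimal min-max normalisation sketch, assuming the common choice of mapping each variable to [-1, 1] to match the tansig output range (the paper states that data were normalised but not the exact mapping):

```python
import numpy as np

def minmax_normalise(x, lo=-1.0, hi=1.0):
    """Linearly map x into [lo, hi]; with the defaults this matches the
    tansig neuron's output range, a common pre-training convention."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())
```

The same scaling factors must be stored and applied to any new data fed to the trained network, and inverted to recover physical units from the predictions.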

Decision of Dataset
The literature shows that the NARX model can be utilised with the data collection partitioned into different sections for training and testing. In this study, 75 per cent of the total data set was chosen to train the model, together with a 10-timestep tapped delay line. Cross-validation and network testing were carried out on the remaining subsets of the data; in particular, 20 per cent of the data was reserved for cross-validation, a key measure against over-fitting of the neural network.
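The partitioning can be sketched as follows. The 75 and 20 per cent figures are from the paper; the sequential (unshuffled) ordering, appropriate for time series, and the use of the remaining 5 per cent for testing are our assumptions.

```python
def split_series(n, train=0.75, val=0.20):
    """Sequential split of n time-ordered samples: first 75% for
    training, next 20% for validation (over-fitting guard), the
    remainder for testing. Ordering is assumed sequential."""
    n_train = int(n * train)
    n_val = int(n * val)
    return (range(0, n_train),
            range(n_train, n_train + n_val),
            range(n_train + n_val, n))
```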

Determination of activation function
The transfer function primarily serves to shape the activation of the neuron nodes in the NARX model. The activation function also provides the non-linearity between neuron layers needed to build a correct relationship between the input and output layers through the weights and biases. A single hidden layer with the tangent-sigmoid activation function (tansig), given by formula (3), was used to train the network, as it is differentiable, continuous and non-linear and therefore well suited to prediction.
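For reference, tansig in its usual MATLAB-toolbox form is 2/(1 + e^(-2n)) - 1, which is mathematically identical to tanh(n); a one-line sketch:

```python
import math

def tansig(n: float) -> float:
    """Tangent-sigmoid activation: 2/(1+exp(-2n)) - 1.
    Algebraically equal to tanh(n); this form is the one the
    MATLAB Neural Network Toolbox documents."""
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0
```

Its output lies in (-1, 1), which is why inputs and targets are typically normalised to that range before training.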

Determination of training algorithm
The first NARX prototype was designed as a feed-forward network trained by error backpropagation. During training, several candidate algorithms were evaluated to select the one giving the best predictive performance.

Statistical estimation of output variables
Statistical estimation is conducted using multiple statistical parameters to assess predictive performance on the data sets. The output of the ANN model was evaluated in the current study with several statistical parameters, whose significance and range of approximation are given in Table 3 below: the Mean Square Error (MSE), which measures the difference between the measured and predicted values; the Mean Absolute Percentage Error (MAPE), which expresses the error as a percentage of the predicted values; and the Mean Square Relative Error (MSRE), which computes the relative error between experimental and predicted values.
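A sketch of the four statistical measures used later for validation (MSE, MAPE, MSRE and the correlation coefficient R), using their common definitions; the paper's Table 3 formulas are not reproduced here, so these standard forms are an assumption.

```python
import numpy as np

def error_metrics(actual, predicted):
    """Common definitions of the validation statistics: MSE, MAPE (%),
    MSRE, and the Pearson correlation coefficient R."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    mse = np.mean((a - p) ** 2)
    mape = 100.0 * np.mean(np.abs((a - p) / a))
    msre = np.mean(((a - p) / a) ** 2)
    r = np.corrcoef(a, p)[0, 1]
    return {"MSE": mse, "MAPE": mape, "MSRE": msre, "R": r}
```

MSE and MSRE approach zero and R approaches one as predictions converge on the measurements, which is how the model's robustness is judged below.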

ANN (NARX architecture) modelling
The purpose of this analysis was to build the predictive model with as few test variables as possible and to compare its outputs with experiment. The ANN model was constructed using the NARX architecture, which resembles a time series neural prediction system. In this study there are three inputs concerning the engine parameters; Fig. 3 shows the framework of the model and Fig. 4 its flowchart. There are four output sets (WPC, SI, WSI, PLP), each containing 30 data points. The unit of each output is distinct from the others, so the outputs are not directly comparable; for this purpose the data are normalised onto a common scale.

Fig.3. General ANN model configuration.
In the next step, a multi-layer perceptron model was formed with the NARX architecture, using feed-forward error backpropagation and tapped delay lines to capture the hidden relationship between input and output. The data were trained by relating the engine parameters to the quantitative parameters derived from the experimental investigation. Fig. 5 shows the designed structure of the NARX model.
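A minimal, untrained sketch of the 3-10-4 forward pass with a 10-step delay, as described above. The weights are random placeholders for illustration only; training (Levenberg-Marquardt backpropagation in the paper) is omitted, and tansig is computed via tanh.

```python
import numpy as np

rng = np.random.default_rng(0)

DELAY, N_IN, N_HID, N_OUT = 10, 3, 10, 4  # 3-10-4 topology, 10-step delay

# Regressor length: past DELAY steps of the 3 inputs and 4 outputs.
n_reg = DELAY * (N_IN + N_OUT)

# Placeholder weights; a trained model would obtain these by
# Levenberg-Marquardt backpropagation.
W1 = 0.1 * rng.standard_normal((N_HID, n_reg))
b1 = np.zeros(N_HID)
W2 = 0.1 * rng.standard_normal((N_OUT, N_HID))
b2 = np.zeros(N_OUT)

def narx_step(past_inputs, past_outputs):
    """One open-loop prediction: tansig hidden layer, linear output
    giving the four normalised wear parameters (WPC, SI, WSI, PLP)."""
    x = np.concatenate([past_inputs.ravel(), past_outputs.ravel()])
    h = np.tanh(W1 @ x + b1)   # tansig == tanh
    return W2 @ h + b2

y = narx_step(rng.standard_normal((DELAY, N_IN)),
              rng.standard_normal((DELAY, N_OUT)))
```

In closed-loop (multi-step) operation the predicted outputs would be fed back into `past_outputs` in place of measurements.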

Optimization of neuron topology
The NARX model was evaluated with six different training algorithms, including quasi-Newton methods; the results are summarised in Table 4. It is evident that the (3-10-4) topology with trainlm was the optimal network. Table 5 shows the optimal configuration of the proposed model, and Table 6 analyses the effect of the number of neurons with trainlm. The overall mean value of R is 0.99459 (as seen in Fig. 7), while it is 0.99645 for training, 0.98237 for testing, and 0.98411 for validation of the current ANN model. Fig. 8 shows the autocorrelation error plot within the 95% confidence limit, which indicates the minimal deviation of the NARX predicted results from the experimental outcomes.

NARX model validation
The objective of designing the NARX failure predictive model is to verify the output responses against the experimental outcomes, so the quantitative parameters for the CNG units are validated against the NARX outcomes. Figs. 9-13 compare the predicted results and the actual machine results for each test case of the developed model (trainlm), showing consistent, accurate agreement between the predicted and actual output in every test case of the network. Fig. 9 (a and b) and Fig. 10 (a and b) present these comparisons for the first outputs. The predicted WSI values correlate with the experimentally measured WSI values (Fig. 12), with the corresponding correlation coefficient R shown in Fig. 13. Similarly, the experimentally measured and ANN-predicted PLP are shown in Fig. 12 (a and b). The comparison of error calculations (Fig. 13) shows that MSE and MSRE scored 0.000213 and 0.00035 for PLP, with 3.39% for MAPE and 0.99999 for R. This is a strong indication that the developed ANN (NARX) model is a robust tool for predicting quantitative equipment wear in real time.