Adaptive unscented Kalman filter for neuronal state and parameter estimation

Data assimilation techniques for state and parameter estimation are frequently applied in the context of computational neuroscience. In this work, we show how an adaptive variant of the unscented Kalman filter (UKF) performs in tracking a conductance-based neuron model. Unlike standard recursive filter implementations, the robust adaptive unscented Kalman filter (RAUKF) jointly estimates the states and parameters of the neuronal model while adjusting noise covariance matrices online based on innovation and residual information. We benchmark the adaptive filter’s performance against existing nonlinear Kalman filters and explore the sensitivity of the filter parameters to the system being modelled. To evaluate the robustness of the proposed solution, we simulate practical settings that challenge tracking performance, such as a model mismatch and measurement faults. Compared to standard variants of the Kalman filter, the adaptive variant implemented here is more accurate and more robust to faults.


Introduction
The application of data assimilation (or state estimation) techniques to single neuron dynamics was greatly popularized by Schiff (2011), based on the work of Voss et al. (2004) on the FitzHugh-Nagumo model. The latter have shown that recursive Bayesian state estimators such as the unscented Kalman filter (UKF) (Julier & Uhlmann, 1997) could be used to track the nonlinear dynamics of neuronal models and identify relevant model parameters based on the observation of a measurable, albeit noisy, membrane voltage trace. The effective combination of tracking and system identification garnered a lot of interest from researchers working at the intersection of computational neuroscience and control theory. While indispensable for attitude estimation in aerospace and localization in robotics (Barfoot, 2017), state estimation is still emerging in the neuroscience and biomedical fields.
While often assumed to be static, changes in neuron model parameters can lead to significantly different excitability characteristics. In the absence of direct measurements of such parameters, recursive state estimators allow for observed states, unobserved states, and parameters to be tracked more accurately. Moye and Diekman (2018) explore the robustness of data assimilation against poor initialization of neuronal parameters by comparing the performance of the UKF, a sequential approach, to variational methods which are also commonly found in the literature (Bano-Otalora et al., 2021; Toth et al., 2011). The UKF has been shown to be robust against mismatches between the model known a priori and the model from which the observed data originates, even in the presence of significant model inaccuracies (e.g., steady-state constants replacing transient dynamics (Ullah & Schiff, 2009)). As knowledge of these underlying models may be lacking (particularly in neuroscience), adaptive techniques that simultaneously identify missing parameters are highly desirable. This challenge is common to many disciplines and has led to the concept of the robust adaptive unscented Kalman filter (RAUKF) (Hajiyev & Soken, 2014; Zheng et al., 2018). Few applications of adaptive filters exist so far (Hamilton et al., 2018), and to our knowledge none have been applied with a fault detection scheme.
The Kalman filter is known to be the optimal recursive state estimator in the context of linear dynamics and Gaussian distributed noise (Kalman, 1960). However, neuronal dynamics are typically highly nonlinear and warrant the use of nonlinear estimators such as the extended Kalman filter (EKF), unscented Kalman filter (UKF) (Julier et al., 2000) or particle filter (PF). While linearization of neuronal dynamics about the most recent state estimate (such as in the EKF) can be shown to perform well in certain situations, it is prone to instability and divergence (Lankarany et al., 2014). Deterministic sampling alternatives, such as the sigma-point transform around which the UKF is built, are generally more suited to the dynamics under study here (Schiff, 2011). In this case, the analytical derivatives of the dynamics and observation models used in the EKF are no longer required. Instead, the sampling-based UKF allows for the models to be treated as black boxes, which could suit practical biomedical applications.
Despite the undoubted capability of these techniques in inferring hidden dynamics in nonlinear systems, some critical challenges related to the robustness of the UKF and other families of KFs remain when it comes to real-time inference in biological models. These challenges mainly stem from a lack of a priori information to inform the initialization of state variables and noise covariance matrices. When modelling neurons, one must take into account the behaviour of specific ion channels, their conductances, and kinetics, information which may not be present in the dataset to provide suitable estimates for the initial state, especially as conductances and dynamics vary greatly across neurons, even within the same region (Golowasch, 2014). The inability of the model to observe the full state, as well as abrupt changes that can occur in recordings (e.g., sharp discontinuities in membrane potential traces), can lead to drastic changes in covariance estimates and, in turn, instability. While ad hoc adjustments of covariance matrices such as covariance inflation have been proposed in the past (Schiff, 2009), a state estimation method capable of handling biological systems is still lacking. The present work endeavours to address this gap.
To address the challenges of applying the KF to neuron models, we consider a few modifications based on modern adaptations of the UKF. For the initialization problem, we employ a RAUKF which, through online fault detection, adjusts the supplied covariances to more appropriate values. We tune this fault detection by performing a grid search over the relevant parameters across a range of trials. The optimal parameters are model specific; however, the trends observed can guide parameter selection in other settings, depending on which quantities need to be estimated. For the concern of the model being incomplete, and thus unable to fully represent or reproduce the desired behaviour, we generate data from a more detailed model and track it with a RAUKF that uses a less complete version of that model. In doing so, we monitor performance over time, especially when the less complete model is unable to match spike times, hyperpolarization curves, or other features due to its incompleteness. Abrupt changes in recorded data, be they from random noise or recording artifacts, are addressed by the robust and adaptive response to state changes in the RAUKF.
In this paper, we develop a RAUKF for neuronal state and parameter estimation based on the work of Hajiyev and Soken (2014) and Zheng et al. (2018). We use two variants of the conductance-based Morris-Lecar neuron model (Prescott et al., 2008) to showcase the filter's adaptability against noise and lacking model information. This filter includes a fault detection schema for the adjustment of covariance estimates, which allows for increased computational efficiency compared to previous works that adjust covariance estimates upon each sample (Hamilton et al., 2018). This filter also has parameters for weighting the variance updates to tune the covariance inflation to the specific model, as shown in section 3.1.
The following section 2.1 introduces the 2-dimensional conductance-based model, which is subsequently used in the state estimation framework (section 2.2) as the dynamics model. The core implementation of the UKF is reviewed in section 2.3 before introducing the extensions that support adaptation (section 2.4) and correction (section 2.5) of uncertain parameters and noise covariance matrices.
A numerical exploration of the RAUKF parameter space is provided in section 3.1 to identify the set of conditions best suited for this application. The performance of the RAUKF for neuronal state estimation is compared to existing algorithms in section 3.2, followed by simulations mimicking measurement faults (section 3.3) and model discrepancies (section 3.4). The significance of the results for neuronal dynamics identification and future extensions are discussed in section 4.
Overall, the RAUKF outperforms a standard UKF implementation in the case of poor initialization of covariance matrices and neuronal model parameters. The bulk of RAUKF adaptation steps are taken at the onset of simulation, where it adjusts its parameters based on the most recently observed data. The advantages of adaptation are particularly noticeable in the presence of measurement faults and model mismatch where the performance of a standard implementation quickly deteriorates. Demonstrations of robustness against experimental faults are of particular importance to validate the use of algorithms like RAUKF in practical settings.

Neuron model
As the simplest possible biophysical representation of a neuron, conductance-based models are commonly used in computational neuroscience (Skinner, 2006). To demonstrate applications of this filtering technique, we use variants of the Morris-Lecar model as provided by Prescott et al. (2008) (see Appendix A). As our developed method may be applied to any conductance-based neuron model, we here refer to a generic conductance-based neuron model described by

C dV/dt = I_stim − ḡ_fast m (V − E_Na) − ḡ_slow w (V − E_K) − ḡ_leak (V − E_leak)
dm/dt = (m_∞(V) − m) / τ_m(V)
dw/dt = (w_∞(V) − w) / τ_w(V)    (1)

where V denotes the membrane voltage, I_stim an external current stimulation, m and w are arbitrary gating variables with associated time constants τ_m and τ_w, the ḡ's are maximal conductances and the E's are reversal potentials. By considering a separation of timescales, the quasi-steady-state approximation m = m_∞(V) will be used in the following sections (thus reducing (1) to a two-dimensional conductance-based model). Table 1 provides ranges for each variable.
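For illustration, one right-hand-side evaluation of a model of the form (1), with the quasi-steady-state substitution m = m_∞(V), might look as follows. The tanh gating functions and all parameter values here are placeholder assumptions in the spirit of Prescott et al. (2008), not the values of Table 1:

```python
import numpy as np

# Illustrative 2-D conductance-based model in the form of (1), with the
# fast gate at quasi-steady-state. Gating forms and parameter values are
# placeholder assumptions; substitute values appropriate to the cell.
P = dict(C=2.0, g_fast=20.0, g_slow=20.0, g_leak=2.0,
         E_Na=50.0, E_K=-100.0, E_leak=-70.0,
         beta_m=-1.2, gamma_m=18.0, beta_w=0.0, gamma_w=10.0, phi_w=0.15)

def m_inf(V, p=P):
    return 0.5 * (1.0 + np.tanh((V - p["beta_m"]) / p["gamma_m"]))

def w_inf(V, p=P):
    return 0.5 * (1.0 + np.tanh((V - p["beta_w"]) / p["gamma_w"]))

def tau_w(V, p=P):
    return 1.0 / np.cosh((V - p["beta_w"]) / (2.0 * p["gamma_w"]))

def dxdt(x, I_stim, p=P):
    """Right-hand side of the reduced model, x = [V, w]."""
    V, w = x
    I_ion = (p["g_fast"] * m_inf(V, p) * (V - p["E_Na"])
             + p["g_slow"] * w * (V - p["E_K"])
             + p["g_leak"] * (V - p["E_leak"]))
    dV = (I_stim - I_ion) / p["C"]
    dw = p["phi_w"] * (w_inf(V, p) - w) / tau_w(V, p)
    return np.array([dV, dw])
```

A forward-Euler step V ← V + dt·dxdt(x, I) then yields the discretized dynamics used by the filter.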

Neuronal state and parameter estimation
In this context, the main goal of data assimilation consists in estimating V and w based on noisy measurements of the membrane voltage and predictions V̂ and ŵ afforded by the dynamics model (1). Given that the incoming measurements represent the newest source of information about the biological system, the recursive updates are performed at a rate equal to the measurement sampling period T. To accommodate this discrete process, the dynamics are discretized, with state x_k = [V_k, w_k]^T and input current u_k = I_stim,k:

x_k = f(x_{k−1}, u_{k−1}) + m_{k−1}    (2)

where m_{k−1} models uncertainty inherent to the dynamics. Given a neuron model such as (1), attributing the notion of internal state to the membrane voltage V and recovery variable w to comply with the state-space representation of control theory (2) can be ambiguous. As alluded to in section 1, the geometrical space of a model describing a biological system relies on a set of parameters which are not constant in reality. The parameters reflect biophysical processes that fluctuate as a result of noisy interactions. While we can initialize parameters based on ranges obtained from experimental studies (see Table 1), parameter estimates more consistent with the latest observations are desired. The conductances of the ionic channels in (1) are a prime example of biologically relevant parameters that naturally vary at a much slower rate of change than the state variables. Consequently, it may be beneficial to directly estimate these parameters from data alongside the state x_k.
In joint state and parameter estimation, the state variable is augmented to account for l parameters θ_k ∈ ℝ^l of interest (Stengel, 1994). It is assumed that the rate of change of the parameters is much slower than that of the main variables x_k. As such, the parameters are assigned artificial stochastic dynamics

θ_k = θ_{k−1} + m_θ,k−1    (3)

where m_θ,k−1 denotes additive white Gaussian noise. The dynamics of the augmented state X_k = [x_k^T, θ_k^T]^T can then be expressed as

X_k = f(X_{k−1}, u_{k−1}) + M_{k−1}    (5)

where M_{k−1} ∼ N(0, Q). The observation model used to characterize noisy membrane voltage measurements is described by

y_k = V_k + n_V,k    (6)

where n_V,k denotes measurement noise (in the general case, y_k = g(X_k) + n_k, n_k ∼ N(0, R)). Here, the direct observation of the membrane voltage y_k mimics experimental recording techniques (e.g., current-clamp methods). With only a subset of X_k being measurable, the method presented in this study allows hidden states to be estimated.
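The augmentation above can be sketched as follows; the placeholder one-step dynamics `f_state` and the state layout [V, w, ḡ_fast, ḡ_slow, ḡ_leak] are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

# Sketch of the augmented-state construction: l parameters are appended
# to the state and, per (3), follow a random walk (noise-free in the
# deterministic propagation; process noise enters through Q).
def f_state(x, theta, u, dt):
    # Placeholder one-step dynamics; NOT the neuron model itself.
    return x + dt * (u - theta[0] * x)

def f_augmented(X, u, dt, n=2):
    """Propagate X = [x; theta]: states evolve under the dynamics model,
    parameters are held constant (random-walk mean)."""
    x, theta = X[:n], X[n:]
    return np.concatenate([f_state(x, theta, u, dt), theta])

# Assumed layout: [V, w, g_fast, g_slow, g_leak]
X = np.array([-70.0, 0.1, 20.0, 20.0, 2.0])
X_next = f_augmented(X, u=50.0, dt=0.1)
```

Because the deterministic part of (3) is the identity, the parameter block of the augmented dynamics simply copies θ forward; all parameter motion comes from the corresponding entries of Q.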

Unscented Kalman filter
Following a Bayesian inference approach and assuming Gaussian beliefs, the state X_k ∈ ℝ^{n+l} of the neuronal system is a random variable tracking the mean of a normal probability density function, while a covariance P^xx_k ∈ ℝ^{(n+l)×(n+l)} tracks its spread. State estimation consists in estimating the current state X_k given knowledge of its initial conditions X_0, inputs u_{k−1} and observations of its behaviour over time y_k. The inputs used in the simulation are the same as those used in the filter. The same input current was used in order to simulate in vitro stimulation studies, where the manipulation of the cell is well known. The filter aims to maximize the probability of observing y_k given a belief about X_k afforded by a model of the system dynamics and an observation model, (5) and (6) respectively. This relationship can be reversed via Bayes' rule to solve for the a posteriori conditional distribution p(X_k | y_k, u_{k−1}), which represents the true state probability given measurements.
We start by assuming the following Gaussian priors for the prediction step:

X_{k−1} ∼ N(X̂_{k−1|k−1}, P^xx_{k−1|k−1})

where X̂_{n|m} denotes an estimate of X at time n based on observations up to and including time m ≤ n, with corresponding estimated covariance matrix P^xx_{n|m}. Then, the predicted belief N(X̂_{k|k−1}, P^xx_{k|k−1}) is approximated as follows:

1. A set of 2N + 1 sigma-points X_{k−1|k−1} = {X^0_{k−1|k−1}, …, X^{2N}_{k−1|k−1}} is sampled from the prior:

X^0_{k−1|k−1} = X̂_{k−1|k−1}
X^j_{k−1|k−1} = X̂_{k−1|k−1} + √(N + λ) col_j L, j = 1, …, N
X^{j+N}_{k−1|k−1} = X̂_{k−1|k−1} − √(N + λ) col_j L, j = 1, …, N

where col_j L is the jth column of the matrix L obtained by the Cholesky decomposition L L^T = P^xx_{k−1|k−1}; λ is a user-definable parameter, often selected according to the heuristic N + λ = 3 to best capture higher order moments (Julier et al., 2000).
2. The set of sigma-points X_{k−1|k−1} are passed through the nonlinear dynamics model (5).
3. The transformed sigma-points are combined into the predicted mean X̂_{k|k−1} and predicted covariance P^xx_{k|k−1}, with the weights w_i summing to 1:

X̂_{k|k−1} = Σ_i w_i X^i_{k|k−1}
P^xx_{k|k−1} = Σ_i w_i (X^i_{k|k−1} − X̂_{k|k−1})(X^i_{k|k−1} − X̂_{k|k−1})^T + Q

Second, the predicted belief is revised against the most recent observations y_k according to the following steps:

1. A set of 2N + 1 sigma-points X_{k|k−1} is sampled from the predicted belief N(X̂_{k|k−1}, P^xx_{k|k−1}), following the same procedure as before.
2. The sigma-points X_{k|k−1} are passed through the measurement model (6), yielding Y^i_{k|k−1} = g(X^i_{k|k−1}).
3. The innovation vector v_{k|k−1} and associated covariance matrix S^yy_{k|k−1} are defined as

v_{k|k−1} = y_k − ŷ_{k|k−1},  ŷ_{k|k−1} = Σ_i w_i Y^i_{k|k−1}
S^yy_{k|k−1} = P^yy_{k|k−1} + R

4. The transformed sigma-points are combined into the predicted innovation covariance matrix P^yy_{k|k−1} and cross-covariance matrix P^xy_{k|k−1}:

P^yy_{k|k−1} = Σ_i w_i (Y^i_{k|k−1} − ŷ_{k|k−1})(Y^i_{k|k−1} − ŷ_{k|k−1})^T
P^xy_{k|k−1} = Σ_i w_i (X^i_{k|k−1} − X̂_{k|k−1})(Y^i_{k|k−1} − ŷ_{k|k−1})^T

5. The calculation of the Kalman gain K_k and the corrected belief N(X̂_{k|k}, P^xx_{k|k}) follow from the standard equations below (Barfoot, 2017):

K_k = P^xy_{k|k−1} (S^yy_{k|k−1})^{−1}
X̂_{k|k} = X̂_{k|k−1} + K_k v_{k|k−1}
P^xx_{k|k} = P^xx_{k|k−1} − K_k S^yy_{k|k−1} K_k^T

6. Finally, the residual vector v_{k|k} and associated covariance matrix S^yy_{k|k} are defined as

v_{k|k} = y_k − g(X̂_{k|k})
S^yy_{k|k} = P^yy_{k|k} + R

7. Following the same procedure as before, a set of 2N + 1 sigma-points X_{k|k} = {X^0_{k|k}, …, X^{2N}_{k|k}} is sampled from the corrected belief N(X̂_{k|k}, P^xx_{k|k}), passed through the measurement model (6) and recombined into the predicted residual covariance matrix P^yy_{k|k}.
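The predict/update cycle above can be sketched minimally as follows; the generic models f and g, the additive-noise form, and the κ (playing the role of λ) weighting scheme are illustrative assumptions:

```python
import numpy as np

# Minimal one-cycle UKF sketch following the steps above, for additive
# process/measurement noise. f and g stand in for the dynamics (5) and
# observation (6) models; weights follow the classic kappa scheme.
def sigma_points(mean, cov, kappa):
    N = mean.size
    L = np.linalg.cholesky(cov)
    pts = [mean]
    for j in range(N):
        col = np.sqrt(N + kappa) * L[:, j]
        pts += [mean + col, mean - col]
    w = np.full(2 * N + 1, 1.0 / (2.0 * (N + kappa)))
    w[0] = kappa / (N + kappa)          # weights sum to 1
    return np.array(pts), w

def ukf_step(X, Pxx, y, u, f, g, Q, R, kappa=1.0):
    # -- prediction --
    pts, w = sigma_points(X, Pxx, kappa)
    Fp = np.array([f(p, u) for p in pts])
    X_pred = w @ Fp
    P_pred = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, Fp - X_pred))
    # -- update --
    pts, w = sigma_points(X_pred, P_pred, kappa)
    Yp = np.array([g(p) for p in pts])
    y_pred = w @ Yp
    v = y - y_pred                      # innovation
    Pyy = sum(wi * np.outer(d, d) for wi, d in zip(w, Yp - y_pred))
    Syy = Pyy + R                       # innovation covariance
    Pxy = sum(wi * np.outer(dx, dy)
              for wi, dx, dy in zip(w, pts - X_pred, Yp - y_pred))
    K = Pxy @ np.linalg.inv(Syy)        # Kalman gain
    X_new = X_pred + K @ v
    P_new = P_pred - K @ Syy @ K.T
    return X_new, P_new, v, Syy, K
```

Note that the models are used only through function evaluations, which is the black-box property highlighted in section 1.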

Adaptive filter
Despite having no guarantee of convergence, the UKF performs well with nonlinear systems, both in tracking state variables and in identifying system parameters. A strong condition for successful estimation is the initialization of the covariance matrices Q and R based on a priori information about the system (e.g., measurement noise can be estimated from preexisting data and sensor characteristics). However, in many real-time scenarios, particularly in neuroscience, such information might be lacking, resulting in incomplete initial estimates and, in turn, suboptimal filtering (Stengel, 1994). If Q and R are too large, the solution might end up biased; if too small, divergence could occur (this is particularly true for slow states, given that they depend on the noise evolution defined by the process noise covariance). Adaptive techniques have been devised to render the state estimation algorithm more robust against poor estimates of covariance matrices (Schiff, 2009). While methods such as covariance inflation aim to adjust the covariance of the state, and thus improve the stability of the filter, few works have looked at addressing inaccurate noise covariance matrices in this field. Yet, innovation- and residual-based approaches (Stengel, 1994) allow, respectively, Q_{k−1} and R_k to be updated recursively alongside states and parameters based on information readily available in the current UKF implementation.
From (2), an innovation-based estimate of the process noise covariance can be obtained over a moving window of N samples (Mohamed & Schwarz, 1999; Stengel, 1994):

Q̂_{k−1} = K_k ( (1/N) Σ_{j=j0}^{k} v_{j|j−1} v_{j|j−1}^T ) K_k^T

where j0 = k − N + 1. Alternatively, Zheng et al. (2018) proposed a weighted update rule based on the direct approximation of the covariance from the most recent innovation only (a window of 1, effectively):

Q̂_{k−1} = K_k v_{k|k−1} v_{k|k−1}^T K_k^T

At each adjustment, Q_{k−1} and Q̂_{k−1} are combined based on a weighting factor λ ∈ (0, 1):

Q_{k−1} ← (1 − λ) Q_{k−1} + λ Q̂_{k−1}    (33)

A similar approach is used to estimate R_k, this time based on the residual v_{k|k}. From (6),

R̂_k = v_{k|k} v_{k|k}^T + P^yy_{k|k}

Then,

R_k ← (1 − δ) R_k + δ R̂_k    (40)

where, similar to (20), P^yy_{k|k} is the covariance of the residual v_{k|k} approximated by the set of 2N + 1 sigma-points, and δ ∈ (0, 1) is the weighting factor analogous to λ.
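The weighted updates (33) and (40) can be sketched as follows, using the single-sample innovation/residual estimates of Zheng et al. (2018); the function names are ours:

```python
import numpy as np

# Sketch of the weighted noise-covariance updates (33) and (40).
def adapt_Q(Q, K, v_innov, lam):
    """Q <- (1 - lam) * Q + lam * K v v^T K^T, lam in (0, 1)."""
    Qhat = K @ np.outer(v_innov, v_innov) @ K.T
    return (1.0 - lam) * Q + lam * Qhat

def adapt_R(R, v_resid, Pyy, delta):
    """R <- (1 - delta) * R + delta * (v v^T + Pyy), delta in (0, 1)."""
    Rhat = np.outer(v_resid, v_resid) + Pyy
    return (1.0 - delta) * R + delta * Rhat
```

Both updates are convex combinations of symmetric positive semi-definite terms, so symmetry of Q and R is preserved at each adjustment.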

Fault detection
Naturally, the addition of adaptation constitutes a trade-off between computational cost and tracking accuracy. Once sufficiently corrected, additional updates of the noise covariance matrices Q k−1 and R k will lead to marginal improvements in performance. For this reason, adaptation may be considered as a response to fault detection. Provided with a means of identifying faults in the system (process-or measurement-related), adaptation may be used selectively to return the estimation system into normal operating conditions.
A simple fault detection rule follows from innovation-based methods and the statistical function

φ_k = v_{k|k−1}^T (S^yy_{k|k−1})^{−1} v_{k|k−1}

which has a χ² distribution with s = 1 degrees of freedom since v_{k|k−1} ∈ ℝ (Hajiyev & Caliskan, 2003; Zheng et al., 2018). Under normal operating conditions, the innovation vector is normally distributed (H_0). Any deviation from this nominal behaviour could indicate a system fault, such as a damaged sensor, and trigger a recovery mechanism as a result (H_1). To determine which hypothesis should be accepted, a chi-squared test is performed: a fault is declared when φ_k exceeds the threshold χ²_{α,s} defined by

P(χ² > χ²_{α,s}) = α

where α is the significance level and χ²_{α,s} denotes the threshold to be exceeded for a fault to be detected (Hajiyev & Soken, 2014). The updates (33) and (40) are performed as a result of rejecting H_0. If a windowing method is followed, the selection of N effectively tunes the detection system's sensitivity to faults (a large N may smooth the effects of a potential fault, whereas a small N may lead to false alarms) (Hajiyev & Caliskan, 2003). Alternatively, Zheng et al. (2018) show how the fault threshold could be used in the selection of appropriate weighting factors λ and δ. Application-dependent tuning parameters a > 0 and b > 0 are introduced to control how sensitive the noise covariance update rules should be to the innovation statistic φ_k.
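The test above amounts to a one-line threshold check; a minimal sketch follows (the function name is ours, and the hard-coded threshold 3.841 is the s = 1 chi-squared critical value for α = 0.05):

```python
import numpy as np

# Sketch of the chi-squared fault test: declare a fault when
# phi_k = v^T (S^yy)^-1 v exceeds the critical value chi2_{alpha,s}.
# 3.841 is the s = 1 threshold for alpha = 0.05.
def fault_detected(v_innov, Syy, threshold=3.841):
    phi = float(v_innov @ np.linalg.solve(Syy, v_innov))
    return phi > threshold
```

For instance, with S^yy = [[2.0]], an innovation of 1.0 gives φ = 0.5 (nominal), whereas an innovation of 5.0 gives φ = 12.5 and would trigger the updates (33) and (40).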

Results
Given the dynamics model (2) and observation model (6), the state x_k may be recursively estimated for k = 1, …, N using the RAUKF algorithm despite significant amounts of noise in the measurements, poor parameter initialization, as well as poor initial estimates of Q_0 and R_1. The noisy measurements y_{1:N,meas} are obtained beforehand by evolving (2), sampling the membrane voltage at a period T = 0.1 ms and adding white Gaussian noise with covariance cov(n_V) = 3 mV². Figure 1 illustrates these measurements as well as the noisy input stimulation, generated according to an Ornstein-Uhlenbeck process with time constant τ_noise = 5 ms, simulating synaptic currents seen in vivo (Destexhe et al., 2001):

I_stim,k+1 = I_stim,k + (I_avg − I_stim,k) Δt/τ_noise + σ_noise √(2Δt/τ_noise) ξ_k

where ξ_k ∼ N(0, 1), I_avg = 50 μA/cm² and σ_noise = 25 μA/cm². All simulations use the Euler method (Moye & Diekman, 2018) to integrate the dynamics (2) for 1500 ms with a timestep of 0.01 ms.
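The stimulation current can be generated with a few lines; this Euler-discretized sketch uses the parameters quoted above, though the exact discretization used for the figures may differ:

```python
import numpy as np

# Euler-discretized Ornstein-Uhlenbeck stimulation current with
# tau = 5 ms, I_avg = 50 uA/cm^2, sigma = 25 uA/cm^2 (stationary
# standard deviation equals sigma under this scaling).
def ou_current(T_total=1500.0, dt=0.01, tau=5.0, I_avg=50.0, sigma=25.0,
               seed=0):
    rng = np.random.default_rng(seed)
    n = int(T_total / dt)
    I = np.empty(n)
    I[0] = I_avg
    for k in range(n - 1):
        xi = rng.standard_normal()
        I[k + 1] = (I[k] + (I_avg - I[k]) * dt / tau
                    + sigma * np.sqrt(2.0 * dt / tau) * xi)
    return I
```

Over a long trace, the sample mean relaxes toward I_avg while successive values remain correlated over roughly τ_noise.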
In addition to qualitative assessments of tracking performance, errors are measured quantitatively using the root-mean-square error (RMSE)

RMSE = √( (1/N) Σ_{k=1}^{N} (X̂_k − X_k)² )

where X_k represents ground-truth data.
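This metric can be computed directly; the `start` argument below reflects the practice, described in the results, of evaluating from a later sample onward to exclude transients (normalization by each variable's range is then a simple division):

```python
import numpy as np

# RMSE between an estimate trace and ground truth, optionally evaluated
# only from sample index `start` onward to exclude transients.
def rmse(estimate, truth, start=None):
    e = np.asarray(estimate)[start:] - np.asarray(truth)[start:]
    return np.sqrt(np.mean(e ** 2))
```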

Exploring the filter parameter space
In performing joint state estimation and parameter identification for a conductance-based neuron model, the accuracy of the tracking and convergence to the true states are most important. Figures 2 and 3 show the root-mean-square tracking error (RMSE) resulting from sweeps over the parameters λ0, δ0 and a, b from Eqs. (44) and (45), respectively. The RMSE is computed from the simulation half-point onward to allow enough time for transients to subside, and is subsequently divided by the range of each variable (see Table 1) to facilitate the comparison of tracking errors.
According to Fig. 2, the combination of a small λ0 and a small δ0 seems to be the most effective. For larger λ0, δ0 values, the filter becomes too sensitive to adaptation, leading to more frequent failures. RMSE also increases when λ0 and δ0 are dissimilar, i.e., when one is much larger than the other.
The higher a and b, the higher the probability that λ ← λ0 and δ ← δ0, respectively. While this can be seen above for a ≥ 5, the selection of b does not seem particularly sensitive. Overall, λ0 = δ0 = 0.2, a ≥ 5.0 and b ∈ [3, 10] seem appropriate for this system.
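A sweep of this kind can be structured as a plain grid search; in this skeleton, `run_trial` is a placeholder standing in for a full RAUKF simulation returning a normalized RMSE, and both the grids and the surrogate objective (with a minimum near λ0 = δ0 = 0.3 on this grid) are illustrative assumptions:

```python
import numpy as np

# Skeleton of the parameter sweep. run_trial is a surrogate objective;
# in practice it would run the filter and return the normalized RMSE.
def run_trial(lam0, delta0, a, b):
    return (lam0 - 0.25) ** 2 + (delta0 - 0.25) ** 2 + 0.01 / a + 0.001 * b

def grid_search():
    lam0s = delta0s = np.linspace(0.1, 0.9, 5)
    a_vals = b_vals = np.array([1.0, 3.0, 5.0, 10.0])
    best = min(((run_trial(l, d, a, b), (l, d, a, b))
                for l in lam0s for d in delta0s
                for a in a_vals for b in b_vals),
               key=lambda t: t[0])
    return best[1]
```

Since each trial is independent, the grid is trivially parallelizable across simulations.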

Joint state and parameter estimation
In the absence of adaptation, poor initialization of the noise covariance matrices Q and R significantly impacts the stability of a recursive filter. Unless specified otherwise, the following parameterization is used throughout this section to illustrate this point: X̂_0 = [−100, 0.5, 10, 80, 140]^T, P_0 = diag([0.0001, 0.0001, 0.0001, 0.0001]), Q_0 = diag([10, 0.001, 10, 10, 10]), R_1 = 0.3. Figure 4 compares the performance of the UKF algorithm and that of the RAUKF on a tracking task where the initialization of the filters is identical in X̂_0, P_0, Q_0 and R_1. While both filters successfully maximize the probability of observing the measurements (top panel), the UKF struggles to estimate the unobserved state (bottom panel). This shortcoming of the standard filter is even more pronounced during the identification of conductance parameters, as shown in Fig. 5.
In both cases, the RAUKF leverages the fault detection process to adapt the unknown covariance matrices, resulting in clear tracking and identification improvements. Transient effects of adaptation are noticeable early on in the simulation, when a high number of corrections are made (i.e., when the innovation vector is the least normally distributed). Quantitative performance of the filters is compared based on RMSEs, evaluated from t = 750 ms onward to minimize the impact of transients. A comparison of the corresponding covariance estimates P_k can be found in the supplementary material, where uncertainty envelopes up to three standard deviations (±3σ) are provided for each tracking task. In general, the RAUKF results in far narrower envelopes owing to adaptation, which is lacking in the standard UKF implementation.

Performance against measurement faults
The fault detection test introduced in section 2.5 was originally developed as a response to potential actuator or sensor malfunction (Hajiyev & Soken, 2014). To emulate a faulty sensor, we consider a scenario where the measurement noise profile of the membrane voltage observations changes mid-simulation. Between t = 375 ms and t = 1125 ms, the measurement noise profile is set to ñ_V,t = 5 n_V,t. In such a scenario, standard recursive estimation techniques (e.g., UKF) tend to fail owing to the unexpected change in noise covariance properties and the lack of adaptation (see the red line at t = 640 ms in Fig. 6). On the other hand, as the faulty measurements momentarily alter the distribution of the innovation vector, the RAUKF triggers a correction of the measurement noise covariance matrix R_k. Figures 6 and 7 illustrate the changing measurement noise profile and its effect on state tracking and parameter identification.
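The faulty-sensor scenario can be emulated by scaling the measurement noise inside the fault window; this sketch follows the timing and the fivefold scaling described above, with a baseline noise variance of 3 as in the earlier simulations (the function name is ours):

```python
import numpy as np

# Emulate a faulty sensor: measurement noise is scaled by fault_gain
# between 375 ms and 1125 ms, matching the scenario described above.
def noisy_measurements(V_true, dt=0.1, sigma=np.sqrt(3.0),
                       fault_window=(375.0, 1125.0), fault_gain=5.0,
                       seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(V_true.size) * dt
    noise = sigma * rng.standard_normal(V_true.size)
    in_fault = (t >= fault_window[0]) & (t < fault_window[1])
    noise[in_fault] *= fault_gain
    return V_true + noise
```

Inside the window, the noise variance is 25 times its nominal value, which is what drives the innovation statistic past the fault threshold.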

Performance against model inadequacies
The final simulation tests the RAUKF tracking performance when using incomplete models. A 3-dimensional Morris-Lecar-like model (see (49), with parameter values in Table 3) is used to generate noisy observations, while the filter's dynamics model is described by a 2-dimensional Morris-Lecar-like model (48). As a result, the source of the measurements and the tracking model no longer match. This setup aims to simulate a practical setting where measurements of membrane voltage are far more expressive than what could be reproduced with a low-dimensional model.
The 3-dimensional model (49) splits the slow current I_slow = ḡ_slow w (V − E_K) into a K⁺ rectifier current I_K,dr and a subthreshold (outward) current I_sub (Prescott et al., 2008). Since (49) includes all the parameters from (48), the estimation objectives remain the same as in the previous simulations, with only the measurements being different.
From Figs. 8 and 9, it is clear that the UKF predominantly tracks the observations of the noisy membrane voltage to the detriment of the other unobserved states, whose estimation relies solely on the prediction model. Given the lack of adaptation, when a significant mismatch occurs between the observation and the prediction, the UKF diverges substantially and may even fail. On the other hand, the RAUKF can adapt the noise covariance matrices to compensate for the unexpected observation, thus preventing the filter from diverging and/or becoming unstable. The reason the RAUKF estimation of V appears to be slightly off initially can be attributed to its effort to compensate for the error in unobserved states.
While it comes as no surprise that the filter now favours observations over its internal estimate of the membrane voltage, the successful tracking of the unobserved state and parameters reflects the robustness of the filter. Compared to previous simulations, the estimation incurs a larger overall tracking error, yet the accuracy remains satisfactory (compare RMSEs between Figs. 4,5 and 8,9). The ability to accurately determine the true channel conductances from more realistic measurements is of particular importance for practical applications of these estimation methods.

Discussion
The application of state estimation in neuroscience and biomedical fields is burgeoning. In this work, we detail the efficacy of a robust and adaptive unscented Kalman filter (RAUKF) when applied to neuronal state and parameter estimation. RAUKF is capable of estimating neuronal dynamics accurately in simulation despite initial poor estimates. Further, RAUKF maintains tracking performance when subject to measurement faults and model mismatch.
Recursive filters such as the (RA)UKF are well suited to applications where measurement data may be streaming over time, a scenario we tried to emulate in our study. In addition, filters allow us to quantify the uncertainty of the tracking (through the simultaneous estimation of the variance) and even adapt it in response to changes in the measurement process. However, sequential methods such as these lack the ability to easily handle constraints. While not considered in this work, modifying the filter to constrain the parameter space would allow for better parameter fitting (Simon, 2010). Constraining parameters based on their physical/ biological interpretation may allow for better tracking, e.g., constraining the gating parameter w between [0, 1] as the value represents how open a given channel is. This type of constraint behaviour may also make the filter more robust as the parameter space being constrained would be able to prevent numerical instabilities that unconstrained estimates may cause. In general, the constrained optimization of matching states and parameters to a batch of measurements is better handled by variational approaches (Bano-Otalora et al., 2021;Kadakia, 2022;Toth et al., 2011), which can leverage standard optimization packages, albeit at a potentially higher computational cost. However, as discussed in (Moye & Diekman, 2018), optimization approaches will not scale as well as sequential approaches if states are to be estimated from large amounts of data. Overall, the relative ease with which a sequential filter can be set up to estimate states and parameters in an online manner warrant its use over variational approaches for the applications considered in this study.
Tuning the parameters of the RAUKF must be done in consideration of the system being modelled. While some generalizations about parameter selection can be made, most are specific to the context in which the filter is applied. The choice of parameters a and b depends on the model being used. In previous work (Zheng et al., 2018), a and b were increased in tandem, resulting in an increase of performance that plateaued, as can be seen in Fig. 3. However, performance increases with an increase in a and degrades with an increase in b when a ≠ b. The performance trend observed with an increasing a is likely due to the filter's poor initialization of estimates for states, parameters, and process noise covariances, over which a has a direct influence through the adaptation of (44). In contrast, the filter's observation comes from the noisy membrane potential, a source known to have white Gaussian noise; increasing b makes the filter over-tune the estimate of the observation noise, detracting from the filter's capacity to estimate the hidden states of the system. This difference is illustrated in the heatmap of the RMSE for the w gating parameter, where an increase in b does not lead to as drastic a decrease in performance compared to the states that are estimated by a random walk. The values used for λ0 and δ0 may be adjusted in a similar fashion to a and b; as the model has less certainty about the process dynamics than about the observation, there is a bias to the value of λ0 over δ0.
Fault detection is useful when modelling mechanisms that are prone to discontinuous or abrupt changes, such as spike firing in neurons, especially when being acted upon by some external activity. Action potentials and their related spike times are often of particular relevance when replicating a neuron's behaviour in a model. A fault detection mechanism ensures that spikes that are not reflected in the model's prediction will likely be detected as a fault, resulting in a non-normal distribution such that the state estimation, more so the variance of the estimation, will change more rapidly.
Compared to existing covariance matrix adaptation techniques (Berry & Sauer, 2013;Hamilton et al., 2018), the use of fault detection allows for a more computationally efficient adaptation by reducing the number of updates required. Further, though not explored in detail in this work, the parameters of the fault detection may be used to adapt the rate of covariance inflation, preventing it from inflating too greatly when faults are not detected.
Prior applications of filters (Hamilton et al., 2018) have used the methodology of Berry and Sauer (2013), in which the covariance matrices Q and R are estimated by a moving average. A key difference in the method of Zheng et al. (2018) is the use of a sample of 1, with weights to tune the change of covariance over time, as described in (33) and (40). This allows for savings in computational efficiency, though at a cost to the accuracy of the estimates of Q and R. Future works should investigate how filters applied to neuronal dynamics benefit from these differences in state estimation.
The application of the RAUKF to incomplete models relates to possible future research that may extend this work. When the state estimation process does not match the model used to generate the observed state (Fig. 8), the filter still tracks the state estimates well, even when the mismatch between the observation and process is apparent. Future work may address this by introducing some of the constrained methods previously mentioned (Simon, 2010) and incorporating additional states following random walk dynamics that may be representative of hidden dynamics not encapsulated by the process model. In addition, building upon previous work conducted on simpler neuronal models (e.g., leaky integrate-and-fire) (Lankarany et al., 2013, 2016), robust and adaptive filtering techniques may be used to identify heterogeneous populations of neurons and possible differential properties or mechanisms (such as excitatory/inhibitory afferents or conductances) that contribute to this heterogeneity.
Another possible extension would be to determine channel conductance properties of specific cell types in different locations, such as hyperpolarization-activated cation (h-) channels in oriens lacunosum-moleculare (OLM) cells of the hippocampus. H-channels in OLM cells are known to vary in a location-dependent fashion (Hilscher et al., 2019), and OLM cells are known to be important contributors to theta rhythms that facilitate spatial memory processes (Klausberger & Somogyi, 2008). We have developed detailed multi-compartment OLM cell models to understand how they contribute to circuit function (Sekulić et al., 2020). Since we have demonstrated that the RAUKF technique can deal with incomplete models, it may be possible to determine varying h-channel conductances using reduced (and thus incomplete) OLM cell models (Sun et al., 2022) with experimental OLM cell recordings from different locations.
Finally, it is important to note that the adaptation of noise covariance matrices presented in this work applies to a wide range of recursive state estimation algorithms. While the UKF generally performs better than other filters in its class, its sampling-based approach and higher computational cost may hinder its adoption in embedded biomedical applications. Alternatives based on linearization include a robust adaptive EKF or, if the hardware supports it, a robust adaptive iterated EKF. Demonstrations of these alternatives for the joint state and parameter estimation of various neuronal models can be found in the supplementary material.
The assumption of linearizable dynamics may prove unrealistic for certain neuronal models, especially as higher-order dynamics are considered. However, extending early works on single model tracking and control (Ullah & Schiff, 2009), we believe estimation algorithms such as the RAUKF could be effectively combined with low-order representations of high-dimensional neuronal dynamics.