We determine the regularized design for the commonly known class of exponential regressions. This class of model functions is widely used in practice, for example, in the life sciences. It describes typical growth dynamics, such as exponential, S-shaped, spiking, and other object behavior. Classical model functions are chosen because their features are well studied. Therefore, the effectiveness of the proposed design approach is expressed most clearly, and the mathematical features of the resulting solutions are revealed to define further research directions.
Importantly for the theory, the model functions under study are explicit solutions to Eq. (2.1). This eliminates from the design construction both the effect of the direct-problem error and the possible effects of the solution discretization.
This Section aims to reveal what occurs in the observational design when the effect of noise is accounted for. From a theoretical viewpoint, the following essential issues are investigated: (i) the character of the noise influence on the design error distribution, (ii) the factors that can reduce the noise effect, and (iii) the peculiarities of the optimal design that are generated by the noise. From a practical viewpoint, the focus is on how the 2m options of the observation fitting (Fig. 1) are applied in the optimal design. Throughout what follows, the sample of minimum size (m = p) is considered.
5.1 One-parameter model
Consider the case of a nonlinear model function for which the optimal observation and its design error can be determined analytically. Such a study exhibits each step of the regularized solution to the design problem and reveals its main features in detail.
Let the received signal be described by the following exponential function:
\({\left.y\left(x\right)\right|}_{\theta }={Y}_{0}\,\text{exp}\left(\theta x\right)\), \(-\infty <x<+\infty\), (5.1)
with a known value \({\left.y\right|}_{x=0}={Y}_{0}\ne 0\) and an unknown parameter \(\theta \ne 0\) that is to be estimated. The optimal one-point design \({{\Xi }}^{\left(M\right)}=\left\{{x}^{\left(opt\right)}\right\}\) is sought in design space R1 using observations that satisfy (2.2) – (2.4). The desired solution is constructed in four steps (Section 4).
Step I is simple since the model function (5.1) is a solution to a well-known differential equation. Accordingly, the signals \({\left.\bar{y}\right|}_{\bar{\theta }}\) and \({\left.y\right|}_{{\theta }^{\left(\nu \right)}}\) with the parameters \(\bar{\theta }\) and \({\theta }^{\left(\nu \right)}=\bar{\theta }-\nu\), which are needed to obtain the consistency equations, are expressed by the function (5.1) directly. Their explicit forms eliminate the influence of the prototype simulation errors on the design.
Step II, using the functions \({\left.\bar{y}\right|}_{\bar{\theta }}\) and \({\left.y\right|}_{{\theta }^{\left(\nu \right)}}\), gives the following consistency equations (4.2) relative to the design error \(\nu =\bar{\theta }-{\theta }^{\left(\nu \right)}\):
$${Y}_{0}\,\text{exp}\left(\bar{\theta }x\right)\left[1-\text{exp}\left(-\nu x\right)\right]=\pm 2\delta .$$
(5.2)
Eq. (5.2) has two nonzero solutions. Relative to the error \(\mu =\nu /\bar{\theta }\), the elements of the matching subset are the following roots:
$${\tilde{\mu }}_{1}=1-\frac{1}{\bar{\theta }x}\text{ln}\left[\text{exp}\left(\bar{\theta }x\right)+2\delta /{Y}_{0}\right],\quad -\infty <x<+\infty ,$$
(5.3)
$${\tilde{\mu }}_{2}=1-\frac{1}{\bar{\theta }x}\text{ln}\left[\text{exp}\left(\bar{\theta }x\right)-2\delta /{Y}_{0}\right],\quad -\infty <x<+\infty .$$
(5.4)
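As a quick numerical sanity check, the roots (5.3) and (5.4) can be substituted back into the consistency equation (5.2). The sketch below is illustrative only: the helper names and the parameter values (\(\bar{\theta }=-1\), \({Y}_{0}=1\), \(\delta =0.05\), x = 0.7) are not taken from the text.

```python
import math

def matching_roots(theta_bar, x, delta, y0):
    """Roots (5.3) and (5.4) of the consistency equation, relative to mu = nu/theta_bar."""
    mu1 = 1.0 - math.log(math.exp(theta_bar * x) + 2.0 * delta / y0) / (theta_bar * x)
    mu2 = 1.0 - math.log(math.exp(theta_bar * x) - 2.0 * delta / y0) / (theta_bar * x)
    return mu1, mu2

def corridor_lhs(theta_bar, x, y0, mu):
    """Left-hand side of (5.2) for nu = mu*theta_bar; it must equal -2*delta or +2*delta."""
    nu = mu * theta_bar
    return y0 * math.exp(theta_bar * x) * (1.0 - math.exp(-nu * x))

theta_bar, y0, delta, x = -1.0, 1.0, 0.05, 0.7   # illustrative values
mu1, mu2 = matching_roots(theta_bar, x, delta, y0)
r1 = corridor_lhs(theta_bar, x, y0, mu1)  # hits the corridor bound -2*delta
r2 = corridor_lhs(theta_bar, x, y0, mu2)  # hits the corridor bound +2*delta
```

The two branches hit the opposite sides of the consistency corridor, as expected.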
Step III requires choosing the guaranteed design error \(\widehat{\mu }\) in design space R1 between the two solutions, \(\tilde{\mu }={\tilde{\mu }}_{1}\left(x\right)\) and \(\tilde{\mu }={\tilde{\mu }}_{2}\left(x\right)\), so as to minimize the stabilizer \({\Omega }\left[\tilde{\theta }\right]={\bar{\theta }}^{2}{\left[1-\tilde{\mu }\left(x\right)\right]}^{2}\). Accordingly, if \(\bar{\theta }<0\), then for x < 0 the stabilizer Ω attains its minimum on the branch \(\widehat{\mu }={\tilde{\mu }}_{2}\left(x\right)\), and for x > 0 on the branch \(\widehat{\mu }={\tilde{\mu }}_{1}\left(x\right)\). If \(\bar{\theta }>0\), then \(\widehat{\mu }={\tilde{\mu }}_{1}\left(x\right)\) for x < 0, and \(\widehat{\mu }={\tilde{\mu }}_{2}\left(x\right)\) for x > 0.
Step IV requires determining the global minimum of the relative design error, \({\mu }^{\left(M\right)}=\underset{x}{\text{min}}\left|\widehat{\mu }\left(x\right)\right|\). In design space R1, the global minimum is determined by the asymptotic decay of the functions (5.3) and (5.4). From this, it follows that for \(\bar{\theta }<0\) the global minimum is \({\left.{\mu }^{\left(M\right)}\right|}_{{x}^{\left(opt\right)}\to -\infty }=0\), and for \(\bar{\theta }>0\) it is \({\left.{\mu }^{\left(M\right)}\right|}_{{x}^{\left(opt\right)}\to +\infty }=0\). The sought optimal design is
$${\left.{{\Xi }}^{\left(M\right)}\right|}_{x\in {\text{R}}^{1}}=\left\{\begin{array}{ll}{x}^{\left(opt\right)}\to -\infty , & \bar{\theta }<0\\ {x}^{\left(opt\right)}\to +\infty , & \bar{\theta }>0\end{array}\right\}$$
In addition to the global minimum, it is necessary to consider the existence of a local minimum. The function \({\tilde{\mu }}_{1}\left(x\right)\) has a local minimum \(\widehat{\mu }\), whose value is determined below. The position \({x}^{*}=0\) is unidentifiable due to the violation of the one-to-one correspondence.
From (5.3), the desired position \({x}^{\left(opt\right)}\) in R1 is defined for every \(\bar{\theta }\ne 0\) by the expression
$${x}^{\left(opt\right)}=\frac{1}{\bar{\theta }\widehat{\mu }}\text{ln}\left[1-\widehat{\mu }\right],$$
(5.5)
where the value \(\widehat{\mu }\) satisfies the equation
$$\widehat{\mu }{\left[1-\widehat{\mu }\right]}^{\frac{1-\widehat{\mu }}{\widehat{\mu }}}=2\delta /\left|{Y}_{0}\right|.$$
(5.6)
If \(\bar{\theta }<0\), then the local minimum becomes global, and \({\mu }^{\left(M\right)}=\widehat{\mu }\). If \(\bar{\theta }>0\), then the global minimum in the design region \(\mathfrak{D}=\{0<x\le \mathcal{l}<\infty \}\) is located at the end of the observation interval, \({x}^{\left(opt\right)}=\mathcal{l}\). Thus, the guaranteed design error is given by
$${\mu }^{\left(M\right)}=\left\{\begin{array}{ll}\widehat{\mu }, & \bar{\theta }<0,\ \mathcal{l}\ge {x}^{\left(opt\right)};\\ 1-\frac{1}{\bar{\theta }\mathcal{l}}\text{ln}\left[\text{exp}\left(\bar{\theta }\mathcal{l}\right)-2\delta /{Y}_{0}\right], & \bar{\theta }<0,\ \mathcal{l}<{x}^{\left(opt\right)}\ \text{or}\ \bar{\theta }>0,\ \mathcal{l}>0,\end{array}\right.$$
which corresponds to the optimal design
$${\left.{{\Xi }}^{\left(M\right)}\right|}_{x\in [0,\mathcal{l}]}=\left\{\begin{array}{ll}{x}^{\left(opt\right)}, & \bar{\theta }<0,\ \mathcal{l}\ge {x}^{\left(opt\right)}\\ \mathcal{l}, & \bar{\theta }<0,\ \mathcal{l}<{x}^{\left(opt\right)}\ \text{or}\ \bar{\theta }>0,\ \mathcal{l}>0\end{array}\right\}$$
Let us describe the features of the obtained solution.
Expressions (5.5) and (5.6) demonstrate that the optimal design depends not only on the sought parameter \(\bar{\theta }\) but also on the noise scatter band δ. Notably, the optimal solutions (5.5) and (5.6) for a given \(\delta\) are constructed under the bilevel scheme \(\delta \Rightarrow \widehat{\mu }\Rightarrow {x}^{\left(opt\right)}\). Additionally, Eqs. (5.5) and (5.6) reveal a factor that determines the behavior of the design accuracy and the existence of a solution. The value \(\mathcal{R}=2\delta /\left|{Y}_{0}\right|\) is such that Eq. (5.6) has a solution only for \(\mathcal{R}<1\). Accordingly, the value \({\delta }^{*}=\left|{Y}_{0}\right|/2\) determines the upper bound of the noise, and when \(\delta \ge {\delta }^{*}\), the desired parameter θ cannot be reconstructed. Since such experimental conditions exist, it is useful to introduce a factor that expresses the conditions under which no estimates exist and indicates the model characteristics that reduce noise effects. The above-determined factor \(\mathcal{R}\) describes the model properties that guarantee the existence of a solution for a given \(\delta\); therefore, it can be called a reduction factor of noise effects.
Furthermore, from (5.5) and (5.6), it follows that the values \({x}^{*}=0\) and \(\left|{x}^{*}\right|=1/\left|\bar{\theta }\right|\) determine the segment [0, \(1/\left|\bar{\theta }\right|\)] for \(\bar{\theta }<0\) and [−\(1/\bar{\theta }\), 0] for \(\bar{\theta }>0\), where it is not recommended to observe the signal y(x), because the samples will give rise to inadmissible design errors, µ ≫ 1. For any δ ≠ 0, the optimal observation is displaced to a position \(\left|{x}^{\left(opt\right)}\right|>1/\left|\bar{\theta }\right|\). If \(\delta \to 0\), then \(\left|{x}^{\left(opt\right)}\right|\to 1/\left|\bar{\theta }\right|+0\). The offset is determined by the value δ as well as by the choice of the design space. The obtained result demonstrates that identifying the best and worst regions in the observation space is an essential feature of an experimental design.
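The bilevel scheme \(\delta \Rightarrow \widehat{\mu }\Rightarrow {x}^{\left(opt\right)}\) can be sketched numerically. In the example below, `guaranteed_error` and `optimal_position` are hypothetical helper names, and the parameter values are illustrative; Eq. (5.6) is solved by simple bisection under the solvability condition R = 2δ/|Y0| < 1, after which the offset property |x^(opt)| > 1/|θ̄| is visible directly.

```python
import math

def guaranteed_error(delta, y0):
    """Solve Eq. (5.6), mu*(1-mu)**((1-mu)/mu) = 2*delta/|Y0|, by bisection.
    A solution in (0, 1) exists only when R = 2*delta/|Y0| < 1."""
    r = 2.0 * delta / abs(y0)
    if r >= 1.0:
        raise ValueError("noise above the threshold delta* = |Y0|/2: no solution")
    f = lambda mu: mu * (1.0 - mu) ** ((1.0 - mu) / mu) - r
    lo, hi = 1e-12, 1.0 - 1e-12   # f(lo) < 0 < f(hi); f is increasing on (0, 1)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def optimal_position(theta_bar, delta, y0):
    """Optimal one-point observation (5.5): x_opt = ln(1 - mu)/(theta_bar*mu)."""
    mu = guaranteed_error(delta, y0)
    return math.log(1.0 - mu) / (theta_bar * mu), mu

# Illustrative values: theta_bar = -1, Y0 = 1, delta = 0.05.
x_opt, mu_hat = optimal_position(-1.0, 0.05, 1.0)
# x_opt lies beyond 1/|theta_bar| = 1, and x_opt -> 1/|theta_bar| + 0 as delta -> 0.
```

Shrinking δ moves the optimal point toward the boundary 1/|θ̄| of the unsatisfactory zone, in line with the discussion above.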
Let us compare the results obtained with the investigations of the input-output model (5.1) that have been conducted by many authors using the FIM paradigm.
The commonly known design \({{\Xi }}_{1}=\{-1/\bar{\theta }\}\) was determined in [7]. As shown above, a similar design is valid in the case of a sufficiently small noise level. From (5.5), it is easy to define the corresponding relative design error \({\mu }^{\left(m\right)}\). Then from (5.6), it follows that the upper boundary of the noise should be δ < 0.02Y0. The latter condition describes the boundary of correct application of the design \({{\Xi }}_{1}\). As seen, the interval is very narrow. Outside this interval, the estimation will give significant errors.
Another well-known design \({{\Xi }}_{2}=\{-1.6/\bar{\theta }\}\) was obtained in [9, 13, 16, 33]. This design is more precise in terms of noise impact than the previous design Ξ1. The design \({{\Xi }}_{2}\) expresses the approximately optimal position for δ < 0.18Y0. This condition determines the boundary of correct application of the design \({{\Xi }}_{2}\). For such a case, the best relative design error is \({\mu }^{\left(m\right)}\) < 0.639. As seen, this design is an approximate solution to the optimal design problem for small noise dispersion.
As demonstrated above, it is essential that for δ ≠ 0, the position \(\left|{x}^{*}\right|=1/\left|\bar{\theta }\right|\) defines not an optimal sensor placement but the boundary of the unsatisfactory estimation zone. In this regard, the optimal observation for the case δ ≠ 0 is correctly positioned with a positive offset from the point \(\left|{x}^{*}\right|=1/\left|\bar{\theta }\right|\).
It is often claimed that for the signal (5.1) there always exists a locally optimal one-point design. In particular, the designs \({{\Xi }}_{\text{1,2}}\) do not express any constraints on the noise level, and they are formally valid for every δ. The existence of the value δ* demonstrates that this claim is not correct.
Thus, the solutions obtained under the regularized paradigm describe all the factors that determine the value and behavior of the design error. The noise significantly affects the design, and the reduction factor reflects the condition for reducing the noise impact. Substantial distortions in the optimal solution can occur if the deviation between the actual signal and its finite sample is not considered. The solution to the optimal design problem does not exist for all noisy data. There exists a noise threshold beyond which the solution to the design problem is absent at every point of the observation space. The new designs obtained comprise the previously known optimal solutions as particular cases. The boundaries of applicability of the known optimal designs are determined.
5.2 Two-parameter model
An essential part of the optimization of signal observations is determining the design error map with respect to the sensor positions. For this purpose, let us show how the regularized solution (4.1) – (4.4) can express the best and worst observation regions. This option is not available with other known design approaches.
Consider the following exponential function:
\({\left.y\left(x\right)\right|}_{{\theta }_{\text{1,2}}}={\theta }_{1}\,\text{exp}\left({\theta }_{2}x\right)\), \(-\infty <x<+\infty\), (5.7)
describing the observed signal. Here, the unknown parameters \({\theta }_{\text{1,2}}\ne 0\) in R2 are to be estimated. The optimal design Ξ(opt) = \(\{{x}_{k}^{\left(opt\right)}{\}}_{k=\overline{\text{1,2}}}\) is sought in the design space R2, using the sample (2.2) – (2.4).
Step I consists in using the explicitly known prototype (5.7) to define the functions \({\left.\bar{y}\right|}_{{\bar{\theta }}_{\text{1,2}}}\) and \({\left.y\right|}_{{\theta }_{\text{1,2}}^{\left(\nu \right)}}\) with the parameters \({\bar{\theta }}_{\text{1,2}}\) and \({\theta }_{\text{1,2}}^{\left(\nu \right)}={\bar{\theta }}_{\text{1,2}}-{\nu }_{\text{1,2}}\). Based on these functions, the consistency equations are constructed relative to the design errors \({\nu }_{\text{1,2}}={\bar{\theta }}_{\text{1,2}}-{\theta }_{\text{1,2}}^{\left(\nu \right)}\) for the two-point design as the system of equations:
$${\bar{\theta }}_{1}\,\text{exp}\left({\bar{\theta }}_{2}{x}_{i}\right)-\left[{\bar{\theta }}_{1}-{\nu }_{1,l}\right]\text{exp}\left[\left({\bar{\theta }}_{2}-{\nu }_{2,l}\right){x}_{i}\right]={{\Delta }}_{i,l},\quad i=\text{1,2},\ l=\overline{\text{1,4}}.$$
(5.8)
The following consistency corridor matrix describes the worst observations:
$${\Delta }=\left(\begin{array}{cccc}+2\delta & +2\delta & -2\delta & -2\delta \\ +2\delta & -2\delta & +2\delta & -2\delta \end{array}\right).$$
Step II gives the solutions to Eqs. (5.8) for the relative errors \({\mu }_{k,l}={\nu }_{k,l}/{\bar{\theta }}_{k},\ k=\text{1,2},\ l=\overline{\text{1,4}}\), as follows:
$${\tilde{\mu }}_{1,l}=1-\left[1-\frac{{{\Delta }}_{1,l}}{{\bar{\theta }}_{1}}\text{exp}\left(-{\bar{\theta }}_{2}{x}_{1}\right)\right]{\left[\frac{1-{{\Delta }}_{1,l}\,{\bar{\theta }}_{1}^{-1}\text{exp}\left(-{\bar{\theta }}_{2}{x}_{1}\right)}{1-{{\Delta }}_{2,l}\,{\bar{\theta }}_{1}^{-1}\text{exp}\left(-{\bar{\theta }}_{2}{x}_{2}\right)}\right]}^{\frac{{x}_{1}}{{x}_{2}-{x}_{1}}},\quad l=\overline{\text{1,4}},$$
(5.9)
$${\tilde{\mu }}_{2,l}=\frac{1}{{\bar{\theta }}_{2}\left({x}_{2}-{x}_{1}\right)}\text{ln}\left[\frac{1-{{\Delta }}_{1,l}\,{\bar{\theta }}_{1}^{-1}\text{exp}\left(-{\bar{\theta }}_{2}{x}_{1}\right)}{1-{{\Delta }}_{2,l}\,{\bar{\theta }}_{1}^{-1}\text{exp}\left(-{\bar{\theta }}_{2}{x}_{2}\right)}\right],\quad l=\overline{\text{1,4}}.$$
(5.10)
Step III requires determining, among the four options \({\left\{{\tilde{\mu }}_{k,l}\right\}}_{k=\text{1,2}}^{l=\overline{\text{1,4}}}\), the best one, which is named the guaranteed design error \(\widehat{\mu }={\left\{{\widehat{\mu }}_{k}\right\}}_{k=\text{1,2}}\). The latter is defined for each \({\Xi }=\left\{{x}_{\text{1,2}}\right\}\) by minimizing the stabilizer (4.3):
$$\widehat{\mu }\left({x}_{\text{1,2}}\right)=\text{Arg}\underset{1\le l\le 4}{\text{min}}{\sum }_{k=\text{1,2}}{\left\{{\bar{\theta }}_{k}\left[1-{\tilde{\mu }}_{k,l}\left({x}_{\text{1,2}}\right)\right]\right\}}^{2}.$$
The optimal design \({{\Xi }}^{\left(R\right)}\) is determined by minimizing the root-mean-square norm of the guaranteed design error:
$${{\Xi }}^{\left(R\right)}=Arg \underset{{x}_{\text{1,2}}}{\text{m}\text{i}\text{n}}{ \mu }_{rms}\left({x}_{\text{1,2}}\right),$$
where \({\mu }_{rms}\left({x}_{\text{1,2}}\right)={\left[\left({\widehat{\mu }}_{1}^{2}+{\widehat{\mu }}_{2}^{2}\right)/2\right]}^{1/2}\).
The realization of Step IV depends on the different cases of the parameters \({\bar{\theta }}_{\text{1,2}}\). Omitting the cumbersome but straightforward calculations, let us describe the main functional features of the design errors (5.9) and (5.10).
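Steps II and III above can be sketched in code. The helper names below are hypothetical, and the root-mean-square form of the criterion is an assumption stated in the lead-in; the functions implement (5.9), (5.10) and the stabilizer-based selection, and reproduce the asymptotic values (5.11) numerically (the parameter values are illustrative).

```python
import math

# The four columns of the consistency-corridor matrix: signs of (Delta_1, Delta_2).
SIGNS = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]

def mu_pair(x1, x2, th1, th2, delta, l):
    """Relative design errors (5.9), (5.10) for the l-th noise-sign option."""
    s1, s2 = SIGNS[l]
    a1 = 1.0 - s1 * 2.0 * delta / th1 * math.exp(-th2 * x1)
    a2 = 1.0 - s2 * 2.0 * delta / th1 * math.exp(-th2 * x2)
    mu1 = 1.0 - a1 * (a1 / a2) ** (x1 / (x2 - x1))
    mu2 = math.log(a1 / a2) / (th2 * (x2 - x1))
    return mu1, mu2

def guaranteed_mu(x1, x2, th1, th2, delta):
    """Step III: among the four options, pick the one minimizing the stabilizer
    sum_k (th_k*(1 - mu_k))**2."""
    def stab(l):
        m1, m2 = mu_pair(x1, x2, th1, th2, delta, l)
        return (th1 * (1.0 - m1)) ** 2 + (th2 * (1.0 - m2)) ** 2
    return mu_pair(x1, x2, th1, th2, delta, min(range(4), key=stab))

def mu_rms(x1, x2, th1, th2, delta):
    """R-criterion, assumed here to be the root-mean-square of the two errors."""
    m1, m2 = guaranteed_mu(x1, x2, th1, th2, delta)
    return math.sqrt(0.5 * (m1 * m1 + m2 * m2))
```

For x1 = 0 and large x2 the first error tends to 2δ/θ̄1 and the second to zero, matching (5.11); for both points large and positive, μ_rms decays toward zero.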
5.2.1 The case \({\bar{\theta }}_{\text{1,2}}>0\)
If the design region has no constraints, then the global minimum is defined by the asymptotic properties of the functions \({\widehat{\mu }}_{\text{1,2}}\left({x}_{\text{1,2}}\right)\).
For the positive value \({\bar{\theta }}_{2}>0\), the asymptotic behavior is expressed as follows. If \({x}_{\text{1,2}}\to -\infty\), then \({\widehat{\mu }}_{1}=1\pm 2\delta /{\bar{\theta }}_{1}\), \({\widehat{\mu }}_{2}=1\). If \({x}_{\text{1,2}}\to +\infty\), then \({\widehat{\mu }}_{\text{1,2}}=0\). For the case \({x}_{1}=0\) and \({x}_{2}\to +\infty\), the design errors have the asymptotic values
$${\left.{\widehat{\mu }}_{1}\right|}_{{x}_{1}=0,{x}_{2}\to +\infty }=2\delta /{\bar{\theta }}_{1},\quad {\left.{\widehat{\mu }}_{2}\right|}_{{x}_{1}=0,{x}_{2}\to +\infty }=0.$$
(5.11)
As a result, in the absence of any constraints on design space R2, the global minimum of the function \({\mu }_{rms}\left({x}_{\text{1,2}}\right)\) belongs to the first quadrant (\({x}_{\text{1,2}}>0\)) and \({\left.{\mu }^{\left(R\right)}\right|}_{{x}_{\text{1,2}}\to +\infty }=0\). The third quadrant (\({x}_{\text{1,2}}<0\)) is the worst area from the viewpoint of asymptotic behaviour. In the second (\({x}_{1}<0,{x}_{2}>0\)) and fourth (\({x}_{1}>0,{x}_{2}<0\)) quadrants, the best measurements exist only on a limited interval of \({x}_{2}\). From (5.9), (5.10), it follows that if \({x}_{2}>0\) and \({x}_{1}\to -\infty\) or \({x}_{2}<0\) and \({x}_{1}\to +\infty\), then \({\widehat{\mu }}_{1}\to -\infty\).
Restricting the further study to the first quadrant, we determine the local minima of the functions (5.9), (5.10) for the case \({x}_{2}>{x}_{1}\). The option \({x}_{1}>{x}_{2}\) is symmetric, which follows from the condition \({\mu }_{rms}\left({x}_{\text{1,2}}\right)={\mu }_{rms}\left({x}_{\text{2,1}}\right)\).
For \({x}_{\text{1,2}}\ge 0\) the largest values of the function \({\mu }_{rms}\left({x}_{\text{1,2}}\right)\) are allocated on the curve
$$\chi \left({x}_{1}\right)=\text{Arg}\underset{0<{x}_{2}<\infty }{\text{max}}\,{\mu }_{rms}\left({x}_{\text{1,2}}\right).$$
(5.12)
We omit the cumbersome description of the general form of \(\chi \left({x}_{1}\right)\) for arbitrary \({x}_{1}\ge 0\); however, we note that this curve passes through the point (0, \({\chi }_{0}\)). The value \({\chi }_{0}\) is the solution to the equation
$$\left[\text{exp}\left({\bar{\theta }}_{2}{\chi }_{0}\right)+2\delta /{\bar{\theta }}_{1}\right]\left[\text{exp}\left({\bar{\theta }}_{2}{\chi }_{0}\right)-2\delta /{\bar{\theta }}_{1}\right]={\left(1+2\delta /{\bar{\theta }}_{1}\right)}^{2}$$
(5.13)
for δ < \({\overline{\theta }}_{1}\)/2 and \({\overline{\theta }}_{1}<{\overline{\theta }}_{2}\), or for δ > \({\overline{\theta }}_{1}\)/2 and any \({\overline{\theta }}_{\text{1,2}}\). For the case δ < \({\overline{\theta }}_{1}\)/2 and \({\overline{\theta }}_{1}\ge {\overline{\theta }}_{2}\) the value \({\chi }_{0}\) is determined by
$$\text{exp}\left(4\delta {\chi }_{0}\right){\left[\text{exp}\left({\bar{\theta }}_{2}{\chi }_{0}\right)-2\delta /{\bar{\theta }}_{1}\right]}^{2}=\left(1+2\delta /{\bar{\theta }}_{1}\right)\left(1-2\delta /{\bar{\theta }}_{1}\right).$$
(5.14)
Due to the existence of the curve \(\chi \left({x}_{1}\right)\), the observation area is divided into two zones, \(0\le {x}_{2}<\chi \left({x}_{1}\right)\) and \({x}_{2}\ge \chi \left({x}_{1}\right)\). In each zone, µ1,2 → 0 with unlimited growth of both \({x}_{1}\) and \({x}_{2}\). In this case, the partition line \(\chi \left({x}_{1}\right)\) tends to the line \({x}_{2}={x}_{1}\) as \({x}_{1}\to \infty\). This condition means that the size of the first zone is reduced, and the best observations in this zone should be very close to each other. Expressions (5.10) and (5.11) facilitate determining the maximum width of the first zone.
The position of the global minimum in these areas is determined by the constraints that are imposed on the design region. If the constraints are given as the condition \(0\le {x}_{\text{1,2}}\le \mathcal{l},\) then, due to the asymptotic properties of the function \({\mu }_{rms}\left({x}_{\text{1,2}}\right)\) described earlier, the neighbourhood of the point \({x}_{\text{1,2}}=\mathcal{l}\) is optimal for any \(\mathcal{l}\):
$${\left.{{\Xi }}_{3}^{\left(R\right)}\right|}_{{\bar{\theta }}_{\text{1,2}}>0,\ {x}_{\text{1,2}}\in \left[0,\mathcal{l}\right]}=\{\mathcal{l}-0,\ \mathcal{l}\}.$$
(5.15)
The latter expression demonstrates that the first optimal measurement should be performed on the left side of the border neighborhood, and the second at the border itself.
If the value \(\mathcal{l}\) is increased, then the R-optimal design error decreases, and the optimal design in \({\text{R}}^{2}\) is expressed as
$${\left.{{\Xi }}_{4}^{\left(R\right)}\right|}_{{\bar{\theta }}_{\text{1,2}}>0,\ {x}_{\text{1,2}}\in {\text{R}}^{2}}=\{{x}_{\text{1,2}}^{\left(opt\right)}\to +\infty \}.$$
(5.16)
In connection with this optimal solution, it is well known that the value \(\mathcal{l}\) should be as large as possible.
If the designs (5.15) and (5.16) are not admissible from a practical viewpoint, then the case of the local minima at the cross-section \({x}_{2}=\mathcal{l}\) can be considered. Here, the principal significance is provided by the topology of the surface \({\mu }_{rms}\left({x}_{\text{1,2}}\right)\). For arbitrary \({\bar{\theta }}_{\text{1,2}}\) and δ, the cross-sections of the function \({\mu }_{rms}\left({x}_{\text{1,2}}\right)\) at \({x}_{2}=\mathcal{l}\) do not have a single specific functional character; there are regions of monotonicity, unimodality, and multimodality.
Let us compare the obtained results with the known solutions. The design
$${{\Xi }}_{5}=\{\mathcal{l}-1/{\bar{\theta }}_{2},\ \mathcal{l}\}$$
(5.17)
has been found in [7] and also by many other authors. It holds for \(\mathcal{l}\gg \text{max}(1/{\bar{\theta }}_{2},1)\) and correlates with the design (5.15) for large \(\mathcal{l}\). However, for the case \(\mathcal{l}\le 1/{\bar{\theta }}_{2}\), the design (5.17), even in its refined form Ξ6 = {max(0, \(\mathcal{l}-1/{\bar{\theta }}_{2}\)), \(\mathcal{l}\)}, turns out to be a rough approximation of the position of the minimal design error, due to the above-described variants of the multimodality of the optimal solution. This result means that if the value \(\mathcal{l}\) is not large, it is always possible to determine an optimal solution better than the design (5.17).
For example, for the signal (5.7) with \({\bar{\theta }}_{1}=1.15\), \({\bar{\theta }}_{2}=1.28\), \(\mathcal{l}=1\), δ = 0.5, the design Ξ7 = {0.054, 1} has been determined in [22] by the over-approximation of the set of guaranteed parameter estimates. For this design, it is easy to obtain from (5.9), (5.10) that \({\left.{\mu }_{rms}\right|}_{{{\Xi }}_{7}}=0.8168\). The design (5.17) yields \({\left.{\mu }_{rms}\right|}_{{{\Xi }}_{5}=\left\{\text{0.219, 1}\right\}}=0.4983\). For the design (5.15), the error is \({\left.{\mu }_{rms}\right|}_{{{\Xi }}_{3}=\left\{\text{0.9986, 1}\right\}}=0.4170\). The latter is the best solution among those mentioned above. The regularized designing under the scheme (4.1) – (4.4) additionally determines the local minimum at Ξ8 = {0.419662, 0.419664}, for which \({\left.{\mu }_{rms}\right|}_{{{\Xi }}_{8}}=0.6186\), and at Ξ9 = {0, 1}, for which \({\left.{\mu }_{rms}\right|}_{{{\Xi }}_{9}}=0.7916\). As can be seen, the regularized paradigm depicts the functional properties of the design more widely.
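Part of this comparison can be reproduced directly from (5.9), (5.10). The sketch below assumes that μ_rms is the root-mean-square of the two guaranteed errors selected by the stabilizer of Step III; with the stated parameters it recovers the reported values for Ξ7 and Ξ3 (the function name is a hypothetical helper).

```python
import math

SIGNS = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]  # noise-sign options, l = 1..4

def mu_rms(design, th1, th2, delta):
    """R-criterion of a two-point design: evaluate (5.9), (5.10) for the four
    noise-sign options, keep the option with the minimal stabilizer, and return
    the root-mean-square of the two guaranteed errors (an assumed definition)."""
    x1, x2 = design
    best = None
    for s1, s2 in SIGNS:
        a1 = 1.0 - s1 * 2.0 * delta / th1 * math.exp(-th2 * x1)
        a2 = 1.0 - s2 * 2.0 * delta / th1 * math.exp(-th2 * x2)
        m1 = 1.0 - a1 * (a1 / a2) ** (x1 / (x2 - x1))
        m2 = math.log(a1 / a2) / (th2 * (x2 - x1))
        stab = (th1 * (1.0 - m1)) ** 2 + (th2 * (1.0 - m2)) ** 2
        if best is None or stab < best[0]:
            best = (stab, m1, m2)
    return math.sqrt(0.5 * (best[1] ** 2 + best[2] ** 2))

# Parameters of the example: theta1 = 1.15, theta2 = 1.28, delta = 0.5.
th1, th2, delta = 1.15, 1.28, 0.5
err7 = mu_rms((0.054, 1.0), th1, th2, delta)   # design Xi_7: approx. 0.8168
err3 = mu_rms((0.9986, 1.0), th1, th2, delta)  # design Xi_3: approx. 0.4170
```

The computed values agree with the figures quoted for Ξ7 and Ξ3, and confirm that the boundary design (5.15) outperforms the FIM-based placement for this noise level.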
5.2.2 The case \({\bar{\theta }}_{1}>0\), \({\bar{\theta }}_{2}<0\)
For the negative parameter \({\bar{\theta }}_{2}<0\), the asymptotic behavior is asymmetric to the case \({\bar{\theta }}_{2}>0\). The global minimum of the optimality criterion belongs to the first quadrant for \({\bar{\theta }}_{2}>0\) and to the third quadrant for \({\bar{\theta }}_{2}<0\). The second and fourth quadrants are the worst observation areas.
Asymptotic properties, as in the previous case, play a decisive role in the existence of both the global and the local minima. If x1,2 → −∞, then \({\widehat{\mu }}_{\text{1,2}}=0\), and if x1,2 → +∞, then
$${\widehat{\mu }}_{1}=1\pm 2\delta /{\bar{\theta }}_{1},\quad {\widehat{\mu }}_{2}=1.$$
(5.18)
For \({x}_{1}=0\) and \({x}_{2}\to +\infty\), the design errors have the asymptotic values (5.11). Therefore, if constraints on the design space are absent, \(\mathfrak{D}\) ⊆ R2, then the global minimum of the design error is attained in the third quadrant (\({x}_{\text{1,2}}\le 0\)):
$${\left.{\mu }^{\left(R\right)}\right|}_{{x}_{\text{1,2}}\to -\infty }=0.$$
Therefore, the optimal design has the form \({\left.{{\Xi }}^{\left(R\right)}\right|}_{{\bar{\theta }}_{1}>0,{\bar{\theta }}_{2}<0,{x}_{\text{1,2}}\in {R}^{2}}=\{{x}_{\text{1,2}}^{\left(opt\right)}\to -\infty \}\). The reduction factor of the model (5.7) is \(\mathcal{R}=2\delta /{\bar{\theta }}_{1}\). Consequently, if \(\mathcal{R}\ge 1\), then \({\widehat{\mu }}_{\text{1,2}}>1\).
As in the previous case \({\bar{\theta }}_{\text{1,2}}>0\), there is a partition curve \(\chi \left({x}_{1}\right)\) expressing the maximal design error (5.12) at a given point \({x}_{1}\) among all \({x}_{2}\). It passes through the point (0, \({\chi }_{0}\)), where the value \({\chi }_{0}\) satisfies the equation
$$\text{exp}\left(4\delta {\chi }_{0}\right){\left[\text{exp}\left({\bar{\theta }}_{2}{\chi }_{0}\right)+\frac{2\delta }{{\bar{\theta }}_{1}}\right]}^{2}=\left(1+\frac{2\delta }{{\bar{\theta }}_{1}}\right)\left(1-\frac{2\delta }{{\bar{\theta }}_{1}}\right)$$
for \(\delta <{\overline{\theta }}_{1}/2\), \({\overline{\theta }}_{1}\ge {\overline{\theta }}_{2}\) and the equation
$$\left[\text{exp}\left({\bar{\theta }}_{2}{\chi }_{0}\right)+\frac{2\delta }{{\bar{\theta }}_{1}}\right]\left[\text{exp}\left({\bar{\theta }}_{2}{\chi }_{0}\right)-\frac{2\delta }{{\bar{\theta }}_{1}}\right]={\left(1-\frac{2\delta }{{\bar{\theta }}_{1}}\right)}^{2}$$
for \(\delta <{\overline{\theta }}_{1}/2\), \({\overline{\theta }}_{1}<{\overline{\theta }}_{2}\).
Taking into account the existence of the partition curve χ and using the asymptotic values (5.18), the global minimum of the function \({\mu }_{rms}\) is determined within the interval \(0<{x}_{2}<{\left.\chi \right|}_{{x}_{1}=0}\). For this zone, the R-optimal design is Ξ(R) = \(\{0,{x}_{2}^{\left(opt\right)}\}\), where \({x}_{2}^{\left(opt\right)}\) is defined by the equation
$$\text{ln}\frac{\text{exp}\left[{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}\right]-\frac{2\delta }{{\bar{\theta }}_{1}}}{1-\frac{2\delta }{{\bar{\theta }}_{1}}}=\frac{{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}}{1-\frac{2\delta }{{\bar{\theta }}_{1}}\text{exp}\left[{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}\right]}$$
for \({\overline{\theta }}_{1}>\left{\overline{\theta }}_{2}\right\), \(\delta <{\overline{\theta }}_{1}/2\), and the equation
$$\text{ln}\frac{\text{exp}\left[{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}\right]+\frac{2\delta }{{\bar{\theta }}_{1}}}{1+\frac{2\delta }{{\bar{\theta }}_{1}}}=\frac{{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}}{1+\frac{2\delta }{{\bar{\theta }}_{1}}\text{exp}\left[{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}\right]}$$
for \({\overline{\theta }}_{1}\le \left{\overline{\theta }}_{2}\right\), \(\delta <{\overline{\theta }}_{1}/2\).
A feature of the first zone is its strong flatness: variations of the optimal measurement within the interval \(0<{x}_{2}<\chi \left({x}_{1}\right)\) lead to only slight changes in \({\mu }_{rms}\). Hence, the value of the global minimum can be majorized by the values \({\mu }_{\text{1,2}}\) at the point \({x}_{1}=0,\ {x}_{2}\to 0\). As a result, if \({\bar{\theta }}_{1}>\left|{\bar{\theta }}_{2}\right|\) under the condition \(\delta <{\bar{\theta }}_{1}/2\), then \({\widehat{\mu }}_{1}=2\delta /{\bar{\theta }}_{1}\), \({\widehat{\mu }}_{2}=2\delta /(2\delta +{\bar{\theta }}_{1})\); and if \({\bar{\theta }}_{1}\le \left|{\bar{\theta }}_{2}\right|\) under the condition \(\delta <{\bar{\theta }}_{1}/2\), then \({\widehat{\mu }}_{1}=2\delta /{\bar{\theta }}_{1}\), \({\widehat{\mu }}_{2}=2\delta /(2\delta -{\bar{\theta }}_{1})\).
For the case \({x}_{2}>\chi \left({x}_{1}\right)\), the local minimum exists at the position \(\{0,{x}_{2}^{\left(opt\right)}\}\), where \({x}_{2}^{\left(opt\right)}\) satisfies the equation
$$\text{ln}\frac{\text{exp}\left[{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}\right]+\frac{2\delta }{{\bar{\theta }}_{1}}}{1-\frac{2\delta }{{\bar{\theta }}_{1}}}=\frac{{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}}{1+\frac{2\delta }{{\bar{\theta }}_{1}}\text{exp}\left[{\bar{\theta }}_{2}{x}_{2}^{\left(opt\right)}\right]}$$
for any \({\bar{\theta }}_{\text{1,2}}\ne 0\). From this equation, it follows that \({x}_{2}^{\left(opt\right)}\to \infty\) for small δ and large \({\bar{\theta }}_{1}\). Because of this, the function \({\mu }_{rms}\) decreases asymptotically. The relationship between the values \(\mathcal{l}\) and \({x}_{2}^{\left(opt\right)}\) establishes the position of the optimal observation in this region.
As in the previous case \({\bar{\theta }}_{\text{1,2}}>0\), the noise is of crucial importance: its scatter band \(2\delta\) determines the position of the optimal observation. There is a limit value \({\delta }^{*}={\bar{\theta }}_{1}/2\), beyond which \({\left.{\mu }^{\left(R\right)}\right|}_{\delta \ge {\delta }^{*}}>1\) for any \({x}_{\text{1,2}}>0\) and \({\bar{\theta }}_{\text{1,2}}\ne 0\). This condition means that the signal (5.7) is not identifiable beginning from the threshold value \({\delta }^{*}\). It is also noteworthy that for any \(\delta \ne 0\), the optimal design depends on both \({\bar{\theta }}_{1}\) and \({\bar{\theta }}_{2}\). Only for \(\delta =0\) does the optimal solution not depend on the value \({\bar{\theta }}_{1}\).
5.2.3 The case \({\bar{\theta }}_{\text{1,2}}<0\)
To complete the study of the signal (5.7), we briefly describe the case of negative \({\bar{\theta }}_{\text{1,2}}\).
If \({\bar{\theta }}_{\text{1,2}}<0\), then the position of the global minimum µ(R) is not confined to a neighborhood of the point \((0,{x}_{2}\to +0)\). Moreover, the global minimum can shift significantly from this point along the line \({x}_{2}={x}_{1}\). This shift occurs for fixed \({\bar{\theta }}_{2}\) and δ as the value \({\bar{\theta }}_{1}\) increases. The local minimum lies on the line \({x}_{1}=0\).
In summary, the investigation of the functional properties of the signal in question yields a map of the best and worst estimation zones for various δ and \({\bar{\theta }}_{\text{1,2}}\). A multimodal character is typical for the optimal design. The determination of alternative observational areas extends the regions of the best measurements and appears to be an essential part of the design. Accordingly, the design problem can be formulated not as seeking strictly defined sensor positions in the observation region but as seeking areas with the best measurements that cover the global and local minima of the design criterion. The transition to the regularized paradigm clarifies the scope of the commonly known FIM designs for the signal (5.7) and brings new features of the optimal solution to light. As it turns out, the well-known FIM inferences are mathematically valid only for exactly known data.
6.3 Three-parameter model
Previously, the commonly known dependence of the optimal design on the desired parameters was confirmed for a nonlinear signal – a solution to the direct problem (2.1). Mathematically, this dependence indicates that the design problem is solved locally, making it impossible to extend the resulting solution \({{\Xi }}^{\left(opt\right)}\) in a global sense and generalize it to a broad class of desired quantities. Accordingly, the theoretically important question is: what are the best design solutions, in a global sense, for nonlinear cases?
As demonstrated above, this complex situation is manageable if we can determine the optimal design structure for various sought quantities and construct the correspondence ‘set of quantities – best observation areas’. As a result, design optimization can be considered globally [39] to determine a Pareto efficient solution [11].
Let us deal separately with the design dependence on the model parameters and demonstrate how investigating the consistency equations helps reveal the structure of the optimal solution. It is shown that in the nonlinear case, the regularized paradigm can define the optimal solution structure independently of the model parameters and the bounded perturbations. In addition, as in the above-studied cases, the change in the design paradigm reveals new features of the classical model.
Consider the Verhulst equation
$$\frac{dy}{dx}={\theta }_{1}y-{\theta }_{2}{y}^{2},\quad x>0,$$
(5.19)
$${\left.y\right|}_{x=0}={\theta }_{0}$$
(5.20)
that is well known in mathematical biology [32]. Here, y expresses the typical dynamics of population spread, and the parameters \({\theta }_{\text{0,1},2}\) are the unknown quantities to be estimated. The following four steps determine the optimal design.
Step I. The solution to Eqs. (5.19), (5.20) is commonly known as an S-shaped curve. For \({\theta }_{\text{1,2}}=const\), it is expressed as the exponential regression
$$y=\frac{{\theta }_{0}{\theta }_{1}\text{exp}\left({\theta }_{1}x\right)}{{\theta }_{1}+{\theta }_{0}{\theta }_{2}\left[\text{exp}\left({\theta }_{1}x\right)-1\right]}.$$
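For numerical reference, the closed-form solution can be checked against the differential equation directly. The following sketch verifies that the expression above satisfies Eq. (5.19) and the initial condition (5.20); the parameter values are illustrative assumptions, not taken from the case study:

```python
import math

def verhulst(x, th0, th1, th2):
    """Closed-form solution of dy/dx = th1*y - th2*y**2, y(0) = th0."""
    e = math.exp(th1 * x)
    return th0 * th1 * e / (th1 + th0 * th2 * (e - 1.0))

# Illustrative parameter values (not the paper's case study).
th0, th1, th2 = 0.1, 1.0, 1.0

# Initial condition (5.20) holds exactly at x = 0.
assert verhulst(0.0, th0, th1, th2) == th0

# ODE (5.19) holds up to finite-difference accuracy at an interior point.
x, h = 2.0, 1e-5
y = verhulst(x, th0, th1, th2)
dy = (verhulst(x + h, th0, th1, th2) - verhulst(x - h, th0, th1, th2)) / (2 * h)
residual = dy - (th1 * y - th2 * y * y)
```

The same check also confirms the carrying-capacity asymptote \(y\to {\theta }_{1}/{\theta }_{2}\) as \(x\to \infty\), which is used below in Step II.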
Regarding the number of sought parameters of the latter function, it is theoretically important how the optimal design expresses the sensitivity of the observed signal with respect to the initial state \({\theta }_{0}\). It is known that the state function of lumped and distributed systems is often locally insensitive to variations in the initial conditions [43]. Therefore, the determination of the best observations with the minimum volume m = 3 is considered to reconstruct all the parameters \({\theta }_{\text{0,1},2}\) of the model (5.19), (5.20).
Using the analytical solution to Eqs. (5.19) and (5.20) for the two sets of model parameters \({\overline{\theta }}_{\text{0,1},2}\) and \({\theta }_{\text{0,1},2}^{\left(\nu \right)}={\overline{\theta }}_{\text{0,1},2}-{\nu }_{\text{0,1},2}\), the consistency equations (4.2) are obtained as the expression
\(\frac{{\overline{\theta }}_{0}{\overline{\theta }}_{1}\text{exp}\left({\overline{\theta }}_{1}{x}_{i}\right)}{{\overline{\theta }}_{1}+{\overline{\theta }}_{0}{\overline{\theta }}_{2}\left[\text{exp}\left({\overline{\theta }}_{1}{x}_{i}\right)-1\right]}-\frac{\left({\overline{\theta }}_{0}-{\nu }_{0,l}\right)\left({\overline{\theta }}_{1}-{\nu }_{1,l}\right)\text{exp}\left[\left({\overline{\theta }}_{1}-{\nu }_{1,l}\right){x}_{i}\right]}{\left({\overline{\theta }}_{1}-{\nu }_{1,l}\right)+\left({\overline{\theta }}_{0}-{\nu }_{0,l}\right)\left({\overline{\theta }}_{2}-{\nu }_{2,l}\right)\left\{\text{exp}\left[\left({\overline{\theta }}_{1}-{\nu }_{1,l}\right){x}_{i}\right]-1\right\}}={{\Delta }}_{i,l},\quad i=\overline{\text{1,3}},\ l=\overline{\text{1,8}},\) (5.21)
where \({\nu }_{k,l}\) denotes the absolute design error of the kth parameter for the lth matching option, described by the consistency corridor matrix
$${\Delta }=\left(\begin{array}{cccccccc}+2\delta &-2\delta &+2\delta &+2\delta &+2\delta &-2\delta &-2\delta &-2\delta \\ +2\delta &+2\delta &-2\delta &+2\delta &-2\delta &+2\delta &-2\delta &-2\delta \\ +2\delta &+2\delta &+2\delta &-2\delta &-2\delta &-2\delta &+2\delta &-2\delta \end{array}\right)$$
(5.22)
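The eight columns of (5.22) are simply all sign combinations of \(\pm 2\delta\) over the three observation points. A minimal sketch that enumerates them (the column ordering here is illustrative and need not match the matrix above):

```python
from itertools import product

delta = 0.05  # illustrative noise half-band

# All 2**3 = 8 matching options: one +/-2*delta sign per observation point.
corridor_columns = [tuple(s * 2.0 * delta for s in signs)
                    for signs in product((1.0, -1.0), repeat=3)]
```

For p parameters the same enumeration yields the 2^p matching options of Fig. 1, which is why the design error below is indexed by \(l=\overline{\text{1,8}}\).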
Step II. Taking into account the separability of the matching equation terms, the design error ν in Eqs. (5.21) decreases when the expressions with exponential terms tend to unity or infinity. This indicates that at the optimum observation positions, the conditions \(\text{exp}\left[\left({\overline{\theta }}_{1}-{\nu }_{1,l}\right){x}_{i}\right]\to 1\) and \(\text{exp}\left[\left({\overline{\theta }}_{1}-{\nu }_{1,l}\right){x}_{i}\right]\to \infty\) hold for any θ. It follows that \({x}_{1}^{\left(opt\right)}=0\) and \({x}_{3}^{\left(opt\right)}\to \infty\). As a result, we have
$${\tilde{\nu }}_{0,l}={{\Delta }}_{1,l},\quad l=\overline{\text{1,8}},$$
(5.23)
$${\tilde{\nu }}_{1,l}={\overline{\theta }}_{1}-\left(\frac{{\overline{\theta }}_{1}}{{\overline{\theta }}_{2}}-{{\Delta }}_{3,l}\right)\left({\overline{\theta }}_{2}-{\tilde{\nu }}_{2,l}\right),\quad l=\overline{\text{1,8}}$$
(5.24)
By substituting expressions (5.23) and (5.24) into (5.21), the following design error is finally determined
$${\tilde{\nu }}_{2,l}={\overline{\theta }}_{2}+\frac{1}{\left({\overline{\theta }}_{1}/{\overline{\theta }}_{2}-{{\Delta }}_{3,l}\right){x}_{2}}\,\text{ln}\left\{\frac{{\overline{\theta }}_{0}-{{\Delta }}_{1,l}}{\left({\overline{\theta }}_{0}-{{\Delta }}_{1,l}\right)-\left({\overline{\theta }}_{1}/{\overline{\theta }}_{2}-{{\Delta }}_{3,l}\right)}\left[1-\frac{{\overline{\theta }}_{1}/{\overline{\theta }}_{2}-{{\Delta }}_{3,l}}{\dfrac{{\overline{\theta }}_{0}{\overline{\theta }}_{1}\text{exp}\left({\overline{\theta }}_{1}{x}_{2}\right)}{{\overline{\theta }}_{1}+{\overline{\theta }}_{0}{\overline{\theta }}_{2}\left[\text{exp}\left({\overline{\theta }}_{1}{x}_{2}\right)-1\right]}-{{\Delta }}_{2,l}}\right]\right\},\quad l=\overline{\text{1,8}}$$
(5.25)
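As a cross-check of Step II, the matching equation at \({x}_{2}\) can also be solved numerically for \({\tilde{\nu }}_{2,l}\) alone, with \({\tilde{\nu }}_{0,l}\) and \({\tilde{\nu }}_{1,l}\) eliminated via (5.23) and (5.24). The sketch below does this by bisection for one matching option; the reference parameters, noise level, and bracketing interval are illustrative assumptions, not the paper's case study:

```python
import math

def y_logistic(x, th0, th1, th2):
    # Closed-form S-shaped solution of the Verhulst model (5.19), (5.20).
    e = math.exp(th1 * x)
    return th0 * th1 * e / (th1 + th0 * th2 * (e - 1.0))

# Illustrative reference parameters, noise half-band, and observation point.
th0, th1, th2 = 0.1, 1.0, 1.0
delta = 0.01
d1, d2, d3 = 2 * delta, 2 * delta, 2 * delta  # one corridor option l
x2 = 3.0

def matching_residual(nu2):
    # nu0 = Delta_{1,l} by (5.23); nu1 follows from nu2 by (5.24).
    nu1 = th1 - (th1 / th2 - d3) * (th2 - nu2)
    y_pert = y_logistic(x2, th0 - d1, th1 - nu1, th2 - nu2)
    return y_logistic(x2, th0, th1, th2) - y_pert - d2

lo, hi = -0.5, 0.5  # assumed bracket for the root
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if matching_residual(lo) * matching_residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
nu2_hat = 0.5 * (lo + hi)
```

For \(\delta \to 0\) the corridor collapses and the computed \({\tilde{\nu }}_{2,l}\) tends to zero, consistent with \(\nu =0\) solving the consistency equations for exact data.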
Step III. The guaranteed design errors are defined by minimizing the stabilizer (4.3),
$$\widehat{\nu }\left({x}_{2}\right)=\text{Arg}\underset{1\le l\le 8}{\text{min}}\sum _{k=0}^{2}{\left[{\overline{\theta }}_{k}-{\tilde{\nu }}_{k,l}\left({x}_{2}\right)\right]}^{2}$$
over all eight cases (5.23) – (5.25), \(l=\overline{\text{1,8}}\), which express the worst approximation.
Step IV. The optimal position \({x}_{2}^{\left(opt\right)}\) is defined by minimizing criterion (4.4). This gives the following structure of the optimal solution. Three measurement positions determine the minimum volume of the sample. Two of the three optimal sensor placements do not depend on the sought model parameters for any \({\overline{\theta }}_{\text{0,1},2}\) and \(\delta <{\delta }^{*}\): the best placements are the initial position and a position as far along the observation axis as possible, \({x}_{1}^{\left(opt\right)}=0,\ {x}_{3}^{\left(opt\right)}\to \infty\). The sensitivity of the signal y to the initial condition \({\theta }_{0}\) does not bring the design error outside the interval \(\pm 2\delta\).
The position of the second optimum observation \({x}_{2}^{\left(opt\right)}\) is always higher than the position \({x}_{infl}\) at which the growth function y has an inflection point, \({\left.{d}^{2}y/d{x}^{2}\right|}_{x={x}_{infl}}=0\). This point determines the basic properties of the S-shaped curve [32]. For the case study (Table 1), the best second observational region is determined by the interval \(1.5{x}_{infl}<{x}_{2}^{\left(opt\right)}<2{x}_{infl}\).
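Since \({x}_{infl}\) anchors the recommended interval for \({x}_{2}^{\left(opt\right)}\), it is convenient that the logistic curve's inflection point is available in closed form: \({d}^{2}y/d{x}^{2}=0\) where \(y={\theta }_{1}/\left(2{\theta }_{2}\right)\), which gives \({x}_{infl}={\theta }_{1}^{-1}\text{ln}\left[{\theta }_{1}/\left({\theta }_{0}{\theta }_{2}\right)-1\right]\). A short numerical check with illustrative parameter values:

```python
import math

th0, th1, th2 = 0.1, 1.0, 1.0  # illustrative values

def y(x):
    # Closed-form solution of the Verhulst model (5.19), (5.20).
    e = math.exp(th1 * x)
    return th0 * th1 * e / (th1 + th0 * th2 * (e - 1.0))

# Closed-form inflection position: solve y(x) = th1/(2*th2) for x.
x_infl = math.log(th1 / (th0 * th2) - 1.0) / th1

def d2y(x, h=1e-4):
    # Central second difference approximating d^2 y / dx^2.
    return (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)
```

The curvature changes sign across \({x}_{infl}\), so the interval \(1.5{x}_{infl}<{x}_{2}<2{x}_{infl}\) quoted for the case study can be mapped out numerically once \({\theta }_{\text{0,1},2}\) are fixed.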
The proposed regularized minimal-residual design for the S-shaped curve facilitates determining features that express the global character of the obtained solution.
First, the obtained optimal solution exists only for \(\delta <{\delta }^{*}\). From the existence of the design error (5.25), it follows that the threshold noise level has the value \({\delta }^{*}={\overline{\theta }}_{1}/\left(2{\overline{\theta }}_{2}\right)\). If \(\delta \ge {\delta }^{*}\), then the solution to the design problem does not exist. Table 1 expresses the effect of these conditions. The detected design properties demonstrate how the initial condition impacts the estimation of the growth process parameters.
The values of the design errors \({\nu }_{\text{0,1},2}\) can be defined explicitly. If \(\delta >{\overline{\theta }}_{1}/\left(2{\overline{\theta }}_{2}\right)\), then \({\nu }_{1}>{\overline{\theta }}_{1}\). The value of the absolute design error of the parameter \({\overline{\theta }}_{0}\) does not depend on its value and is determined only by the noise.
For small \({\overline{\theta }}_{\text{1,2}}\), the main design error appears in reconstructing the initial condition \({\overline{\theta }}_{0}\). Taking into account the existence of the threshold value \({\delta }^{*}={\overline{\theta }}_{0}/2\), it is better to define the initial condition by direct measurements. In practice, the following inference is significant: a shift of the observations away from the starting point of the growth process leads to an increase in the estimation error.
As the variable x changes monotonically, the stabilizer’s minimum value can be achieved at different variants of the matching conditions. This leads to a jump in the error \({\nu }_{2}\) and, consequently, to the appearance of local minima of the total design error. In general, the design is not unimodal.
The numerical analysis of the estimation behavior at observation positions deviating from the optimal ones shows a significant increase in the design errors, which can exceed the minimum design errors by an order of magnitude.
Let us compare the obtained results with the inferences of previously conducted studies of the model (5.19), (5.20).
Bayesian optimal designs for the logistic model with two unknown parameters were found in [8]. It has been shown that the optimum positions should be close to the boundaries of the observation interval. In [24], it was demonstrated that minimax Doptimal designs could be quite efficient under a Bayesian setup.
In [11], the design was regularized by Tikhonov’s algorithm with a regularization parameter. The optimal solution was referred to as Pareto efficient. However, that formulation does not restrict the solution domain and does not take into account the existence of the noise scatter band, because the noise is assumed to have a zero mean value (see Section 2).
The previously determined optimal position \({x}_{2}^{\left(opt\right)}\) was specified near the inflection point \({x}_{infl}\). The design (4.1) – (4.4) demonstrates that such an estimate can only be considered a lower bound of the optimal position. In [5], the best observations were obtained by the Monte Carlo method. Formulation (4.1) – (4.4) does not require numerical modelling of the sample; because of this, more general specifications of the problem formulation were studied. The sufficiency of three measurement positions is proven under the more general conditions of the problem in question.
The above-described conditions for the existence of the optimal design are determined for the first time. These conditions express the boundaries of the admissible signal observations.
6.4 Numerical analysis of an observational design structure
Consider a case with no analytical solutions to the consistency equations. For such formulations, it is necessary to construct a correspondence between the desired quantities and observation areas by introducing some reference parameters. In what follows, the idea of determining the best and worst observation areas is examined in detail numerically. It is demonstrated that the notion of best observation areas, covering the global and local minima, can extend the design solution.
The following exponential regression is given:
\({\left.y\left(x\right)\right|}_{{\theta }_{\text{1,2},3}}=\frac{{\theta }_{1}}{{\theta }_{1}+{\theta }_{2}-{\theta }_{3}}\left\{\text{exp}\left(-{\theta }_{3}x\right)-\text{exp}\left[-\left({\theta }_{1}+{\theta }_{2}\right)x\right]\right\}\) , \(0\le x<+{\infty }\). (5.26)
This function is considered in life sciences [2, 23] to describe the spiking growth dynamics.
For the directly defined prototype (5.26), it is desired to estimate \({\theta }_{\text{1,2},3}=const\ne 0\). The optimal design Ξ(R) = \(\{{x}_{i}^{\left(opt\right)}{\}}_{i=\overline{\text{1,3}}}\) is sought in the region \(\mathfrak{D}\subseteq\) R3[0,∞) using the observations (2.2) – (2.4). The design steps are executed as follows.
Step I. The required signal representations \({\left.\overline{y}\right|}_{{\overline{\theta }}_{\text{1,2},3}}\) and \({\left.y\right|}_{{\theta }_{\text{1,2},3}^{\left(\nu \right)}}\) with the parameters \({\overline{\theta }}_{\text{1,2},3}\) and \({\theta }_{\text{1,2},3}^{\left(\nu \right)}={\overline{\theta }}_{\text{1,2},3}-{\nu }_{\text{1,2},3}\) are constructed analytically from the function (5.26). The consistency equations (4.2) relative to the design errors \({\nu }_{\text{1,2},3}\) for the three-point design (m = p = 3) take the form
$$\frac{{\overline{\theta }}_{1}}{{\overline{\theta }}_{1}+{\overline{\theta }}_{2}-{\overline{\theta }}_{3}}\left\{\text{exp}\left(-{\overline{\theta }}_{3}{x}_{i}\right)-\text{exp}\left[-\left({\overline{\theta }}_{1}+{\overline{\theta }}_{2}\right){x}_{i}\right]\right\}-\frac{{\overline{\theta }}_{1}-{\nu }_{1,l}}{{\overline{\theta }}_{1}+{\overline{\theta }}_{2}-{\overline{\theta }}_{3}-{\nu }_{1,l}-{\nu }_{2,l}+{\nu }_{3,l}}\left\{\text{exp}\left[-\left({\overline{\theta }}_{3}-{\nu }_{3,l}\right){x}_{i}\right]-\text{exp}\left[-\left({\overline{\theta }}_{1}+{\overline{\theta }}_{2}-{\nu }_{1,l}-{\nu }_{2,l}\right){x}_{i}\right]\right\}={{\Delta }}_{i,l},\quad i=\overline{\text{1,3}},\ l=\overline{\text{1,8}},$$
(5.27)
where the worst observations are described by the consistency corridor matrix (5.22).
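Although Eqs. (5.27) admit no analytical solution, their residuals are easy to evaluate, which is all that the numerical search in Step II requires. The sketch below codes the left-hand side of (5.27) at a single point; the placement of the minus signs in the exponents and in the denominator is an assumption of this sketch. A basic sanity check: for exact data (δ = 0), the zero design error must satisfy the equations.

```python
import math

def spike(x, th1, th2, th3):
    # Signal (5.26); the negative exponents are an assumption of this sketch.
    c = th1 / (th1 + th2 - th3)
    return c * (math.exp(-th3 * x) - math.exp(-(th1 + th2) * x))

th_bar = (6.85, 1.70, 0.55)  # reference option theta^(1) from the text

def residual(x, nu, corridor):
    """Left-hand side of (5.27) minus its corridor value at one point x."""
    n1, n2, n3 = nu
    perturbed = spike(x, th_bar[0] - n1, th_bar[1] - n2, th_bar[2] - n3)
    return spike(x, *th_bar) - perturbed - corridor

# For delta = 0 the corridor collapses and nu = (0, 0, 0) solves (5.27).
zero_residuals = [residual(x, (0.0, 0.0, 0.0), 0.0) for x in (0.1, 0.5, 2.0)]
```

With δ > 0, a root finder applied to the three coupled residuals (one per observation point) yields the eight candidate triples \({\nu }_{\text{1,2},3}\), one per corridor column of (5.22).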
Step II. The simultaneous equations (5.27) cannot be solved analytically, so their solution for given \(\overline{\theta }\), δ and \(\{{x}_{k}{\}}_{k=\overline{\text{1,3}}}\) is sought numerically. For such a case, it is necessary to introduce reference parameters \(\overline{\theta }\) against which the optimal solution is sought. Similarly to [23], the first option is selected as \({\overline{\theta }}^{\left(1\right)}\) = (6.85, 1.70, 0.55)T. The second option, \({\overline{\theta }}^{\left(2\right)}\) = (92.41, − 88.115, 0.059)T, is introduced as in [2].
The selected reference parameters \({\overline{\theta }}^{\left(1\right)}\) and \({\overline{\theta }}^{\left(2\right)}\) determine different behaviours of the signal (5.26). Due to the limited scope of the publication, we do not present the results of the sensitivity analysis of the signal (5.26) relative to its parameters. It should only be noted that parameter variations around the option \({\overline{\theta }}^{\left(2\right)}\) are sharply limited due to the absence of a solution \(\left\{{\nu }_{\text{1,2},3}\right\}\) to Eqs. (5.27). This solvability violation of the consistency equations undoubtedly affects the design behavior. The option \({\overline{\theta }}^{\left(1\right)}\) has no limitation in a similar sense. The design dependence on the parameters \({\overline{\theta }}_{\text{1,2},3}\) is considered such that the parameter variations cover a wide range of the functional properties of the signal (5.26).
Step III. Among the eight solutions \(\tilde{\nu }\) to Eqs. (5.27), the one that minimizes the stabilizer (4.3) is determined:
$$\widehat{\nu }\left({x}_{\text{1,2},3}\right)=\text{Arg}\underset{1\le l\le 8}{\text{min}}\sum _{k=1}^{3}{\left[{\overline{\theta }}_{k}-{\tilde{\nu }}_{k,l}\left({x}_{\text{1,2},3}\right)\right]}^{2}$$
The latter expresses the guaranteed design error at each point of the design region.
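Once the eight candidate error triples are in hand, Step III reduces to an argmin over the option index l. A minimal sketch of this selection, with hypothetical candidate values and assuming the stabilizer is the squared norm of \(\overline{\theta }-\tilde{\nu }\):

```python
theta_bar = (6.85, 1.70, 0.55)  # reference parameters

# Hypothetical candidate design errors, one triple per matching option l
# (illustrative values only; real candidates come from solving (5.27)).
candidates = [
    (0.10, -0.05, 0.01), (0.90, 0.40, -0.20), (-0.30, 0.25, 0.05),
    (0.05, 0.02, -0.01), (1.20, -0.80, 0.30), (-0.15, 0.10, 0.02),
    (0.40, -0.30, 0.10), (0.02, 0.01, 0.00),
]

def stabilizer(nu):
    # Squared norm of the perturbed parameter vector theta_bar - nu
    # (the stabilizer form assumed in this sketch).
    return sum((t - n) ** 2 for t, n in zip(theta_bar, nu))

l_hat = min(range(len(candidates)), key=lambda l: stabilizer(candidates[l]))
nu_hat = candidates[l_hat]
```

Repeating this selection over a grid of \(\{{x}_{k}\}\) gives the guaranteed design error surface whose minima are sought in Step IV.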
Step IV. The optimal design Ξ(R) is determined as the minimum of criterion (4.4), where \({\widehat{\mu }}_{k}={\widehat{\nu }}_{k}/{\overline{\theta }}_{k},\ k=\overline{\text{1,3}},\) denotes the relative design errors.
The obtained results reveal the following properties of the optimal design:
- Tables 2 and 3 express the noise level effect on the optimal design,
- Table 4 shows the difference between the global and local minima, and
- Table 5 depicts the dependence of the optimal solution on the sought-for quantities.
The features of the optimal observational design are then as follows.
First, the noise effect on the optimal design can be broken down into four grades (Tables 2 and 3). The asymptotics of the obtained solutions demonstrate the tendency \({\nu }_{\text{1,2},3}\to 0\) for \(\delta \to 0\).
Second, the solutions reveal a strictly defined structure of the optimal design. This structure reflects the best observation areas. For the signal (5.26), the structure comprises three areas: (i) the ascending branch, (ii) the region of the specific point of the state function, and (iii) the descending branch (Tables 2 and 3). The variations of the sought parameters (Tables 4 and 5) express the structure boundaries and the conditions under which the structure changes. The deviations of the optimal solutions within the structure are not significant. The comparison of the optimal solutions for the cases \({\overline{\theta }}^{\left(1\right)}\) and \({\overline{\theta }}^{\left(2\right)}\) shows the dependence of the structure character on the functional properties of the model in question. What matters here is determined by the differences noted above between the options \({\overline{\theta }}^{\left(1\right)}\) and \({\overline{\theta }}^{\left(2\right)}\).
Third, a characteristic feature of the optimal observation is its multimodal nature. For the signal (5.26), the differences between the global and local minima of the design errors are not substantial and are often small (Table 4). The design structure changes from one optimal area to another as the noise scatter band increases.
The existence of the optimal solution structure indicates that it is possible to specify the strongly determined areas of the best observations. Summarizing the results in Tables 2–5, it is found that for the signal (5.26) with a range of parameters such as \({\stackrel{}{\theta }}^{\left(1\right)}\pm 30\%\) and for any noise with \(0<\delta <{\delta }^{*}\), the best observations belong to the intervals 0.07 < x < 0.19, 0.48 < x < 0.73, 1.4 < x < 1.74 and 2.8 < x < 3.5.
The last recommendation demonstrates how the dependence of the design on the initial data {\(\overline{\theta },\delta\)} can be reduced. Recommendations of this kind remove the need to specify accurate initial guesses of the unknowns. It is theoretically important that the developed approach can define the correspondence between a set of desired quantities and a specific set of the best observation areas. Therefore, the search for the optimal solution at certain points in the observation space can be replaced by determining the best region.
Additionally, this outcome indicates a direction for further investigation of the signal (5.26): the study of the design formulation with overdetermined measurements. Examining the consistency equations ensures the determination of the regions of optimal multipoint measurements.
Fourth, the existence of significant errors in the optimal solution, \({\mu }^{\left(R\right)}>0.1\), is associated, first of all, with the poor scalability of the sought quantities \({\overline{\theta }}_{\text{1,2},3}\): their values differ from each other by more than an order of magnitude (compare the results in Tables 4 and 5). The relationship between the value δ and the maximum value of the observed signal, \({y}_{max}=\underset{x\ge 0}{\text{max}}\,y\left(x\right)\), is also relevant (Tables 4 and 5). The sensitivity of the signal y(x) doubtless influences the design error \({\mu }^{\left(R\right)}\). The case \({\overline{\theta }}_{1}=7,\ {\overline{\theta }}_{2}=5\) demonstrates that the reconstruction accuracy improves with increasing \({\overline{\theta }}_{3}\), although the value \({y}_{max}\) decreases (Table 5). Here, the rate of the signal change becomes the main factor. Notably, there exists a case of the sought parameters in which the optimal design errors are lower than the noise level, \({\mu }^{\left(R\right)}<\delta\) (Table 3).
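The scale mismatch between the two reference options is easy to quantify numerically. In the sketch below (the negative exponents in the signal are again an assumption of this sketch), a grid search for \({y}_{max}\) shows that the peak signal values of the two options differ by more than an order of magnitude:

```python
import math

def spike(x, th1, th2, th3):
    # Signal (5.26); the negative exponents are an assumption of this sketch.
    return th1 / (th1 + th2 - th3) * (
        math.exp(-th3 * x) - math.exp(-(th1 + th2) * x))

theta_1 = (6.85, 1.70, 0.55)       # reference option theta^(1)
theta_2 = (92.41, -88.115, 0.059)  # reference option theta^(2)

def y_max(th, x_hi=50.0, n=20000):
    # Grid search for max_{x >= 0} y(x); the peak lies well inside [0, x_hi].
    return max(spike(i * x_hi / n, *th) for i in range(n + 1))
```

The disparity in signal scale, together with the fixed noise half-band δ, is one source of the relative-error differences reported in Tables 4 and 5.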
Based on the solvability of the design problem, the following property of the model function (5.26) should be highlighted: for a given \(\overline{\theta }\), the solvability is determined by the observation positions \(\{{x}_{k}{\}}_{k=\overline{\text{1,3}}}\in \mathfrak{D}\) and the value δ > 0. There may exist \({\left\{{x}_{k}^{*}\right\}}_{k=\overline{\text{1,3}}}\) and δ* for which a solution to the matching equations is absent. Because of this, there are points \({\left\{{x}_{k}^{*}\right\}}_{k=\overline{\text{1,3}}}\) that do not allow signal reconstruction, even for a small δ. It follows that only minor variations in the sought parameters are acceptable, and hence the design behavior does not change significantly when the noise level varies. This property is fully manifested in the case of \({\overline{\theta }}^{\left(2\right)}\), where the solvability violation of the consistency equations occurs over a wide range of the desired parameters. As a result, the structure of the optimal solution for option \({\overline{\theta }}^{\left(2\right)}\) is less varied than that for the reference parameter \({\overline{\theta }}^{\left(1\right)}\).
Let us compare the obtained results with the known solutions under the FIM paradigm. For \({\overline{\theta }}^{\left(1\right)}\), the design Ξ10 = {0.1, 0.5, 2.0} was determined in [23], and for the case of \({\overline{\theta }}^{\left(2\right)}\), the design Ξ11 = {0.2288, 1.3886, 18.417} was found in [2].
The error of the design Ξ10 significantly exceeds the error of the regularized design (see Table 2). The FIM yields an optimal design close to the regularized one only for δ → 0. Even for a small δ, the reconstruction accuracy of the FIM optimum design is worse than that of the regularized design, and the difference increases with the growth of δ. The optimal design essentially depends on the noise.
The design Ξ11 does not provide a solution to Eqs. (5.27) for all eight options (5.22). For this reason, the results of the estimation with design Ξ11 are not included in Table 3.
For a high-noise scatter band, the FIM designs Ξ10 and Ξ11 manifest unacceptable design errors, \({\mu }_{rms}>1\). At the same time, the regularization ensures the solvability of the design problem even for large δ and, in fact, accomplishes the required minimization of the noise influence.