Expected and Predicted Curves of Markov Stochastic Processes and a New Interpretation of Deterministic Models


 Two curves characterizing the set of realizations of a Markov stochastic process are analysed: the expected curve and the predicted curve. Although the definition of the predicted curve is new, curves with the same formulas are known in science as so-called deterministic models. This can be seen in the best-known models in biology: the logistic, Lotka-Volterra and Hardy-Weinberg models. Attention is paid to the efficiency of calculating both of these curves and to their interpretation as central tendency characteristics of Markov stochastic processes.


Introduction
All numerical random variables can be characterized by a "central tendency": the central or typical value of their probability distribution. The most popular such characteristic is the expected value, despite the fact that it does not always exist ([8], [20]). Numerical stochastic processes can be characterized in a similar way. Expected values of stochastic processes are routinely calculated in applied articles ([11], [18], [3]).
In this paper, multidimensional stochastic processes are considered in which the states are elements of X = N^n × R^m. However, it is convenient to discuss discrete-valued (states in N^n) and continuous-valued (states in R^m) processes separately. Therefore, four types of stochastic processes are considered: discrete-time discrete-valued (stochastic chains), discrete-time continuous-valued, continuous-time discrete-valued and continuous-time continuous-valued.
Homogeneous processes are most often considered in mathematics. Meanwhile, non-homogeneous processes have greater application in biology due to the diurnal and seasonal rhythms of many natural phenomena. In addition, theoretical considerations of Markov stochastic processes are much easier for non-homogeneous processes than for the special case of homogeneous processes. This is due to the fact that, for homogeneous processes, the integrals in the formulas for the transition probabilities do have antiderivatives, but with extremely complex formulas, as described in [22].
All known stochastic models that play a major role in biology have the Markov property. This assumption is satisfied in models which recreate the course of events that occur in reality, where the probabilities of the events in the interval [t, t + 1) depend on the state of the system at time t ([1], [15], [7]). This holds for all object-oriented models ([10], [12], [5]), because in these models everything that happens in the time step [t, t + 1) depends on the state of the program at time t. It is impossible to program such models in a different way. Past events can be taken into account only by programming some carrier of information about these events which exists at time t. Sometimes, a numerical variable without the Markov property is computed by an object-oriented model, but then it is usually a sum of variables that together form a multidimensional Markovian model.
Every stochastic process (ξ_t)_{t∈Υ} with the Markov property yields a set of realizations, which can be characterized by the expected realization. It is the function Υ ∋ t → Eξ_t ∈ X, referred to here as the expected curve. The expected curve is difficult to calculate even for simple stochastic processes. However, the set of realizations of a stochastic process has a second useful characteristic curve, which forms a recursive sequence or an easily computable differential equation. It will be referred to as the predicted curve. Its formal definition is the first purpose of this paper; the second aim is to compare it with the expected curve.
In this paper, it is shown that for the best-known models used in ecology and evolution, the formulas for the predicted curves turn out to be the same as those of the so-called deterministic population models. This fact completely changes their interpretation. In addition, the formulas derived in this paper allow the correct matching of stochastic and deterministic models of the same phenomenon.

Basic definitions
Stochastic processes formed by object-oriented models illustrate how desirable it is to generalize the theory of Markovian stochastic processes.
A stochastic process is defined as a collection of random variables (ξ_t)_{t∈Υ} indexed by a subset Υ of [0, ∞) and having values in X (the set of states). A sigma-algebra σ(X) with a measure µ is defined on subsets of X; for X = R^d this is the Lebesgue measure, and for X = N^d the counting measure. Moreover, there exists a probability space (Ω, σ(Ω), P), which allows the calculation of the probabilities of some subsets of the set Ω (the set of the process realizations).
All Markov processes considered here can be defined by highly generalized counterparts of the stochastic or intensity (state-transition) matrix or kernel. It is a multidimensional object containing time-dependent functions whose values are probabilities or instantaneous probability rates; the set of such functions is denoted Ψ. Let X be a space of states with numbers: X = N^n for discrete-valued processes and X = R^n for continuous-valued processes. Let X_0 ⊆ X. Most often, the set X_0 is assumed to be bounded.
The generalizations of a stochastic matrix, a stochastic kernel, an intensity (state-transition) matrix and an intensity kernel to a multidimensional Markov stochastic process have the following form:
Definition 2.1. Let (ξ_t)_{t∈Υ} be a discrete-time, discrete-valued Markov stochastic process. A stochastic matrix is the set of numbers formed by a function p : X_0 × X → Ψ with p(x, y)(t, t + 1) = P{ξ_{t+1} = y | ξ_t = x}. This function has the property: for all x ∈ X_0, Σ_{y∈X} p(x, y)(t, t + 1) = 1. Conversely, each function with this property defines a discrete-time, discrete-valued Markov stochastic process.
Definition 2.2. Let (ξ_t)_{t∈Υ} be a discrete-time, continuous-valued Markov stochastic process. A stochastic kernel is the set of numbers formed by a function p : X_0 × X → Ψ with ∫_U p(x, y)(t, t + 1) dy = P{ξ_{t+1} ∈ U − {x} | ξ_t = x} for every measurable subset U of X.
This function has the property: for all x ∈ X_0, ∫_X p(x, y)(t, t + 1) dy ≤ 1. Conversely, each function with this property defines a discrete-time, continuous-valued Markov stochastic process.
Definition 2.3. Let (ξ_t)_{t∈Υ} be a continuous-time, discrete-valued Markov stochastic process. An intensity matrix is the set of numbers formed by a function q : X_0 × X → Ψ with q(x, y)(t) = lim_{∆→0⁺} p(x, y)(t, t + ∆)/∆ for y ≠ x. This function has the properties: q(x, y)(t) ≥ 0 for all x ∈ X_0 and y ∈ X with y ≠ x; q(x, x)(t) ≤ 0 for all x ∈ X_0; and Σ_{y∈X} q(x, y)(t) = 0 for all x ∈ X_0. Conversely, each integrable function with these properties defines a continuous-time, discrete-valued Markov stochastic process.
Continuous-time continuous-valued stochastic processes are rarely modelled, and are therefore not named in this terminology. In textbooks they are constructed using the Wiener process ([6], [2]). However, the natural generalization of the Markov processes described earlier is quite different. It should still be a process in which a finite number of random events can occur per unit of time; yet these events can be vectors of real numbers, and the moments of these events can be any real numbers. Its realizations are not continuous: they are step functions with arbitrarily long and high steps. These processes have an intensity kernel, which is an integrable and bounded function q : X_0 × X → Ψ, and each integrable and bounded function q : X_0 × X → Ψ defines a continuous-time continuous-valued Markov stochastic process. Because no such process is as well known as the processes described in this paper, their theory is omitted.
Definitions 2.1-2.3 show that stochastic/intensity matrices/kernels need not be square. Every good programmer who writes object-oriented programs restricts the set of events for which probabilities or probability rates can be calculated. Otherwise, the program may hang or, in the worst case, produce erroneous results. Most often, when a state outside of X_0 is reached during the simulation, the simulation stops before the assumed completion time T_max.
At first, the above generalizations, motivated by technical limitations, may seem irrelevant. However, a past study proved that for each Markovian stochastic process with a probability or intensity matrix or kernel (corresponding to the above definitions), there exists a probability space, fully consistent with Kolmogorov's system of axioms, which evaluates the frequency of occurrence of particular realizations in the simulated process ([21], [23]). Moreover, the formulas for these probabilities were derived there and are used in this paper.
The class of Markov stochastic processes having a stochastic matrix or kernel, or an intensity matrix or kernel, is very broad. Since each finite square stochastic matrix or kernel can be extended to a non-square one by setting p(x, y)(t, t + 1) = 0 for y ∈ X − X_0, and each square intensity matrix or kernel can be extended to a non-square one by setting q(x, y)(t) = 0 for y ∈ X − X_0, this class of Markov processes includes all the well-known processes discussed in textbooks.
Sometimes calculating a stochastic or intensity matrix or kernel for an object-oriented model is not easy, but it is worth doing. It allows the calculation of the expected curve Eξ : Υ ∋ t → Eξ_t ∈ R^n and of a similar concept, referred to here as the predicted curve. Both curves make it possible to predict the behaviour of the model during simulation and to restrict the values of the model parameters to suit one's own needs.

Expected and predicted curves in discrete-time discrete-valued processes
In this section, the concepts of expected and predicted curves are illustrated with a birth-and-death process in discrete time; however, they are defined in a general way. A two-dimensional stochastic matrix in independent-dependent orientation (put into a Cartesian coordinate system) often has a characteristic structure. The probabilities with values slightly greater than 0 in each column are concentrated in a relatively small area. If the points of this matrix are coloured with different shades of gray depending on the probability, then a dark stripe is formed in the area of such a matrix. This allows predicting the shapes of the realizations of the stochastic process started from the same state. For the discrete-time birth-and-death process, the dark stripe runs above the diagonal for small values and below it for large values of the population size (Fig. 1).
[Fig. 1: A. The stochastic matrix; higher probabilities are drawn in darker colours. B. The idea of how a realization of the process given by the stochastic matrix is formed. C. Seven sample realizations of this process. The process has a non-square stochastic matrix with positive probabilities in a 200 × 400 area, and the set X_0 = {0, 1, ..., 200} cannot be extended.]
The expected value at time t can only be calculated over the realizations existing at time t or longer. First, a square submatrix containing the probabilities of transition from state k to state n, for k, n ∈ X_0, is defined.
Definition 2.4. A conditional square submatrix of the stochastic matrix p is its restriction to X_0 × X_0. For a one-dimensional stochastic process, it is the well-known square stochastic matrix. The s-fold product of conditional square stochastic matrices is the matrix of the probabilities that a state x transits to y in the time interval [t, t + s); it will be denoted p^s(t, t + s). This construction allows calculating the expected value of a stochastic process (ξ_t)_{t∈Υ} with a non-square stochastic matrix.
The definition of the expected curve has the standard form:
Definition 2.5. The expected curve of a discrete-time stochastic process (ξ_t)_{t∈Υ} is the function F_exp : Υ ∋ t → Eξ_t ∈ R^n.
Calculating the expected curve is quite laborious, and sometimes the only way to predict the expected value for large t is through ergodic properties. However, there is another curve that characterizes the set of realizations of the stochastic process. There are many reasons to call this characteristic the predicted curve. The idea of calculating this curve is presented in Fig. 2.
[Fig. 2: A. The stochastic matrix of Fig. 1, where each column has a conditional expected value (given by the formula k + p_r(k)·k − p_s(k)·k for any k). The idea of forming the predicted curve is that the ends of the vertical segments lie exactly on this function. B. Predicted and expected curves for this process. The asymptotic value of the predicted curve equals 100. The expected curve tends to 0 (if 0 ∈ X_0, although it takes an extremely long time to drop to 0) or to 99.4974.]
The sequence F(k) : X ∋ k → E(ξ_t | ξ_{t−1} = k) ∈ R^n can be extended to a continuous function in many ways. However, if the formula for the conditional expected value is also that of a real function F(x), then this extension is the most natural one. It occurs in all models where the probabilities of certain events are estimated by regressions, since all known regressions are chosen from families of continuous real functions.
The definition of the predicted curve has the following form:
Definition 2.6. Let F : X ∋ k → E(ξ_{t+1} | ξ_t = k), treated as a real function if possible. Then the predicted curve of a discrete-time stochastic process (ξ_t)_{t∈Υ} is a sequence F_pr : Υ ∋ t → X such that F_pr(0) = x_0 and F_pr(t + 1) = F(F_pr(t)).
Both curves often seem very similar, but this depends on the stochastic matrix. Predicted curves sometimes show non-fading fluctuations. Then the process is realized with rapid fluctuations, but such fluctuations are not regular. The fluctuations of the expected curve always decrease to zero, due to the ergodic theorem (Fig. 3). The expected curve characterizes the probability distribution of states at a given time, while the predicted curve characterizes a typical course of the realizations of the stochastic process.
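As a minimal illustration of Definition 2.6, the sketch below uses hypothetical linear functions p_r and p_s (not taken from the paper, but chosen so that the predicted curve levels off at 100, as in Fig. 2). It computes the predicted curve by the recursion F_pr(t + 1) = F(F_pr(t)) and estimates the expected curve by a Monte Carlo mean over realizations:

```python
import random

def p_r(n):
    """Assumed per-capita birth probability (hypothetical parameters)."""
    return max(0.0, 0.05 - 0.0002 * n)

def p_s(n):
    """Assumed per-capita death probability (hypothetical parameters)."""
    return 0.03

def predicted_curve(x0, steps):
    """F_pr(t+1) = F(F_pr(t)), where F(k) = k + p_r(k)*k - p_s(k)*k."""
    curve = [float(x0)]
    for _ in range(steps):
        k = curve[-1]
        curve.append(k + p_r(k) * k - p_s(k) * k)
    return curve

def simulate(x0, steps, rng):
    """One realization: each of the n individuals may give birth or die."""
    n = x0
    path = [n]
    for _ in range(steps):
        births = sum(rng.random() < p_r(n) for _ in range(n))
        deaths = sum(rng.random() < p_s(n) for _ in range(n))
        n = max(0, n + births - deaths)
        path.append(n)
    return path

def expected_curve(x0, steps, n_runs=2000, seed=1):
    """Monte Carlo estimate of the expected curve t -> E(xi_t)."""
    rng = random.Random(seed)
    sums = [0.0] * (steps + 1)
    for _ in range(n_runs):
        for t, n in enumerate(simulate(x0, steps, rng)):
            sums[t] += n
    return [s / n_runs for s in sums]
```

For these parameters the fixed point of the recursion is 100 (where p_r(n) = p_s(n)), and the Monte Carlo expected curve stays close to the predicted curve.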

Expected and predicted curves in discrete-time continuous-valued processes
Continuous-valued processes require some explanation. A stochastic kernel is defined in textbooks as a collection of densities of continuous distributions p_x : X_0 → [0, ∞). This is generalized in this paper. Definition 2.2 covers models in which the process tends not to change its state at many time steps. Then the function p_x equal to y → p(x, y)(t, t + 1) need not be a density, because ∫_X p(x, y)(t, t + 1) dy ≤ 1. The number 1 − ∫_X p(x, y)(t, t + 1) dy is the probability that the process does not change the state x during [t, t + 1). Let x ∈ X_0 be the state at time t. Then ∫_{X_0} p(x, y)(t, t + 1) dy is the probability that the state at time t + 1 is different from x and belongs to X_0.
∫_{X−X_0} p(x, y)(t, t + 1) dy is the probability that the state at time t + 1 does not belong to X_0 (so it is not equal to x). Because X_0 ⊂ X, the sum ∫_{X_0} p(x, y)(t, t + 1) dy + 1 − ∫_X p(x, y)(t, t + 1) dy is the probability that the state at time t + 1 belongs to X_0.
Since the expected value at time t for the process (ξ_t)_{t∈Υ} can be calculated only over the realizations with duration t or more, a square subkernel must be defined.
In analogy to the discrete-valued process, we can calculate kernels for the transition from time t to times t + 2, t + 3, ..., t + s by defining a product of stochastic kernels. In these calculations, the transitions x → y → z, x → y → y and x → x → y, for different x, y and z, must be considered separately.
Definition 2.8. The s-product of stochastic subkernels is the square kernel of the functions p̃^s(x, z). These functions are calculated as sums of integrals over the intermediate states, with the no-change transitions treated separately as above. The expected value of the process at time t is then computed from this s-product. Now, we can define an expected curve for the discrete-time continuous-valued stochastic process.
Definition 2.9. The expected curve of a discrete-time stochastic process (ξ_t)_{t∈Υ} is the function F_exp : Υ ∋ t → Eξ_t ∈ X.
The calculation of this curve is not simple, even though it may involve only a small time horizon T and allow a good algorithmic calculation of the integrals. Estimation of its asymptotic values can be done using the ergodic theorem. However, the predicted curve is easy to define and very often easy to calculate.
Definition 2.10. The predicted curve of a discrete-time stochastic process (ξ_t)_{t∈Υ} is the recursive sequence F_pr(0) = x_0, F_pr(t + 1) = E(ξ_{t+1} | ξ_t = F_pr(t)).
The kernel of many discrete-time continuous-valued stochastic processes is formed by densities of the normal distribution: p(x, y)(t, t + 1) = (1/(σ(x)√(2π))) exp(−(y − µ(x))²/(2σ(x)²)), where µ(x) and σ(x) are given functions. This formula does not simplify the calculation of the expected curve, but the predicted curve has the formulas F_pr(0) = x_0 and F_pr(t + 1) = µ(F_pr(t)).
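A minimal sketch of Definition 2.10 for the Gaussian kernel, with a hypothetical linear conditional mean µ(x) = 0.5x + 3 and constant σ (both assumptions, not from the paper). For a linear µ the expected and predicted curves coincide, which a Monte Carlo estimate confirms:

```python
import random

def mu(x):
    """Assumed conditional mean of the next state (hypothetical choice)."""
    return 0.5 * x + 3.0

SIGMA = 1.0  # assumed constant conditional standard deviation

def predicted_curve(x0, steps):
    """F_pr(0) = x0, F_pr(t+1) = mu(F_pr(t))."""
    curve = [x0]
    for _ in range(steps):
        curve.append(mu(curve[-1]))
    return curve

def expected_curve(x0, steps, n_runs=2000, seed=2):
    """Monte Carlo estimate of E(xi_t) for the Gaussian-kernel process."""
    rng = random.Random(seed)
    sums = [0.0] * (steps + 1)
    for _ in range(n_runs):
        x = x0
        sums[0] += x
        for t in range(1, steps + 1):
            x = rng.gauss(mu(x), SIGMA)  # draw the next continuous state
            sums[t] += x
    return [s / n_runs for s in sums]
```

Both curves converge to the fixed point of µ, here x = 6; for a non-linear µ the two curves would generally differ.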

Expected and predicted curves in continuous-time discrete-valued processes
An intensity matrix consists of the instantaneous probability rates of changing a state x to y at time t. Let p(x, y)(t_1, t_2) be the probability of the set of all realizations that have state x at time t_1 and state y at time t_2. It is expressed by a very complex formula derived in [21]: a sum over the number r of intermediate jumps, with integrals over the jump moments τ_1 < ... < τ_r and sums over the intermediate states m_1, ..., m_{r−1}, where τ_0 = t_1, τ_{r+1} = t_2, m_0 = x and m_r = y (this convention shortens the notation). The expected value of the random variable ξ_t is computed from these probabilities, and the expected curve is the function F_exp : [0, T) ∋ t → Eξ_t ∈ X. The calculation of this curve using the presented formulas is not effective even on supercomputers: the number r above which the terms of the sum become negligibly small is usually very large. The presented formulas can be used for proving some theorems about the expected curves, but for calculations it is better to run many simulations (10000 or more) and calculate the means of the population sizes at chosen times t (Fig. 4).
The items of the intensity matrix are the derivatives at the point 0⁺ of the functions [0, T) ∋ ∆ → p(x, y)(t, t + ∆) ∈ R, where p(x, y)(t, t + ∆) is the probability of changing state x to state y during [t, t + ∆). These items are the rates of change of the probabilities that a process with state x at time t changes to state y in the next instant. For each state x ∈ X_0 we can calculate Σ_{y∈X} y·q(x, y)(t). If the state at time t equals x, then this is the expected instantaneous change of the state of the process after time t. We can plot such expected changes for different x on a chart (Fig. 4). X = N^d is a discrete set, so the sums Σ_{y∈X} y·q(x, y)(t) are defined only for integer vectors x and y. However, the formula x → Σ_{y∈X} y·q(x, y)(t) often has a form which is also a function of real vectors. Then we can consider the differential equation dF/dt = Σ_{y∈X} y·q(F(t), y)(t). This is the idea of the predicted curve, which is a solution of this equation.
Definition 2.11. For a continuous-time discrete-valued stochastic process with an intensity matrix such that the function X_0 ∋ x → Σ_{y∈X} y·q(x, y)(t) is continuous and differentiable, the predicted curve is the solution of the differential equation dF_pr/dt = Σ_{y∈X} y·q(F_pr(t), y)(t) with F_pr(0) = x_0.
For any two-dimensional intensity matrix, the expected and predicted curves are very similar (Fig. 4). Greater differences arise for four-dimensional matrices, and only for specific parameters.
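Definition 2.11 can be illustrated with a continuous-time birth-and-death process. The sketch below uses hypothetical rate functions µ(n) = 0.5n and λ(n) = 0.3n + 0.002n² (chosen so the predicted curve levels off at 100); for such a process the sum in the definition reduces to µ(F) − λ(F), which is integrated by the Euler method and compared with Gillespie-style simulated realizations:

```python
import random

def birth_rate(n):
    """Assumed birth rate mu(n) (hypothetical parameters)."""
    return 0.5 * n

def death_rate(n):
    """Assumed death rate lambda(n) (hypothetical parameters)."""
    return 0.3 * n + 0.002 * n * n

def predicted_curve(x0, t_max, dt=0.001):
    """Euler scheme for dF/dt = mu(F) - lambda(F), F(0) = x0."""
    f, t = float(x0), 0.0
    while t < t_max:
        f += (birth_rate(f) - death_rate(f)) * dt
        t += dt
    return f

def gillespie(x0, t_max, rng):
    """One realization of the continuous-time birth-and-death process."""
    n, t = x0, 0.0
    while n > 0:
        total = birth_rate(n) + death_rate(n)
        t += rng.expovariate(total)        # waiting time to the next event
        if t >= t_max:
            break
        n += 1 if rng.random() * total < birth_rate(n) else -1
    return n
```

Averaging many Gillespie realizations at a fixed time approximates the expected curve, which for these parameters stays close to the predicted curve.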

The predicted curve and deterministic models
Although the definition of the predicted curve for Markov stochastic processes is new, these functions are already well known in the fields of applied mathematics, physics, theoretical biology, etc.
Examples of stochastic models and predicted curves are shown in Figs. 1 and 2. Both figures show the same population model, in which the probability that an individual gives birth to one descendant during [t, t + 1) depends on the population size n and is equal to p_r(n), while the probability of the death of an individual is equal to p_s(n). The stochastic matrix of the population dynamics (the function p(x, y)) is shown in Fig. 1. According to Definition 2.6, the predicted curve of this process is the recursive sequence given by the formula F_pr(t + 1) = F_pr(t)·(1 + p_r(F_pr(t)) − p_s(F_pr(t))). For the linear functions p_r(n) = a_r·n + b_r and p_s(n) = a_s·n + b_s, this formula takes the form F_pr(t + 1) = F_pr(t)·(1 + a·F_pr(t) + b), where a = a_r − a_s and b = b_r − b_s. Let X_t = −a/(b + 1)·F_pr(t). Then X_{t+1} = (b + 1)·X_t·(1 − X_t). In theoretical biology, a recursive sequence of the form X_{t+1} = κ·X_t·(1 − X_t) (for 1 < κ < 4) is called a "discrete-time logistic population model".

It was widely disseminated in biology by Robert McCredie May ([17]).
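The substitution above can be checked numerically. The sketch below (with hypothetical coefficients a and b) iterates both recursions in parallel and verifies that X_t = −a/(b + 1)·F_pr(t) indeed follows the logistic map with κ = b + 1:

```python
def f_pr_step(f, a, b):
    """Predicted-curve recursion F_pr(t+1) = F_pr(t) * (1 + a*F_pr(t) + b)."""
    return f * (1.0 + a * f + b)

def logistic_step(x, kappa):
    """Discrete-time logistic map X_{t+1} = kappa * X_t * (1 - X_t)."""
    return kappa * x * (1.0 - x)

def check_substitution(a, b, f0, steps):
    """Return the discrepancy between -a/(b+1)*F_pr(t) and the logistic map."""
    kappa = b + 1.0
    f = f0
    x = -a / (b + 1.0) * f0   # X_0 from the substitution
    for _ in range(steps):
        f = f_pr_step(f, a, b)
        x = logistic_step(x, kappa)
    return abs(-a / (b + 1.0) * f - x)
```

The discrepancy stays at the level of floating-point rounding, confirming the algebraic identity.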
After the Poisson process, "birth-and-death" is the best-known continuous-time Markovian stochastic process. Its intensity matrix is defined by q(n, n + 1)(t) = µ(n) (the birth rate), q(n, n − 1)(t) = λ(n) (the death rate), q(n, n)(t) = −(µ(n) + λ(n)), and q(n, y)(t) = 0 otherwise. The predicted curve satisfies the differential equation dF_pr/dt = µ(F_pr) − λ(F_pr). If the probability rate of the birth of an individual depends on the population size according to the linear function q_r(n) = a_r·n + b_r and the probability rate of the death of an individual according to q_s(n) = a_s·n + b_s, then the probability rate of the change of the population size from n to n + 1 is µ(n) = n·(a_r·n + b_r) and the probability rate of the change from n to n − 1 is λ(n) = n·(a_s·n + b_s). Therefore dF_pr/dt = F_pr·(a·F_pr + b), where a = a_r − a_s and b = b_r − b_s. The last formula is known as the logistic model of a population. It was developed by the Belgian mathematician Pierre Verhulst in 1838 ([24]) and propagated by Raymond Pearl and Lowell Reed in 1920 ([19]).
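The logistic predicted-curve equation dF/dt = F(aF + b) has the classical Verhulst closed-form solution, which a simple numerical integration reproduces (the parameter values below are hypothetical):

```python
import math

def verhulst(t, x0, a, b):
    """Closed-form logistic solution of dF/dt = F*(a*F + b), with a < 0 < b."""
    K = -b / a  # carrying capacity (the positive equilibrium)
    return K / (1.0 + (K - x0) / x0 * math.exp(-b * t))

def euler(t_max, x0, a, b, dt=1e-4):
    """Numerical (Euler) solution of the predicted-curve equation."""
    f, t = x0, 0.0
    while t < t_max:
        f += f * (a * f + b) * dt
        t += dt
    return f
```

Both solutions approach the carrying capacity K = −b/a as t grows.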
The Lotka-Volterra equations (also known as the predator-prey model) are well known. They were described by Alfred J. Lotka in 1925 ([14]) as an alternative version of the autocatalytic chemical reaction model published in 1910 ([13]). In 1927, Vito Volterra described the same set of equations ([25]), having published it earlier in Italian.
The numbers b_{r,1}, b_{s,1}, d_{r,2}, d_{s,2} must have values such that all probability rates are non-negative within a large interval of the sizes of both populations. The system of differential equations for the predicted curves takes the following form:
dF_pr,1/dt = F_pr,1·(F_pr,2·(c_{r,1} − c_{s,1}) + (b_{r,1} − b_{s,1}))
dF_pr,2/dt = F_pr,2·(F_pr,1·(c_{r,2} − c_{s,2}) + (d_{r,2} − d_{s,2}))
It can be written as:
dF_pr,1/dt = F_pr,1·(−a·F_pr,2 + b)
dF_pr,2/dt = F_pr,2·(c·F_pr,1 − d)
The graph of hare and lynx dynamics in this publication is the most frequently reprinted graph in science. It appears in many articles, manuals and textbooks around the world.
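The predicted curves of the Lotka-Volterra system can be sketched numerically. The code below (with hypothetical parameter values) integrates the two equations with a Runge-Kutta step and checks the classical conserved quantity of the predator-prey orbits, V = c·F_1 − d·ln F_1 + a·F_2 − b·ln F_2:

```python
import math

def lv_rhs(x, y, a, b, c, d):
    """Right-hand side of the Lotka-Volterra predicted-curve equations."""
    return x * (-a * y + b), y * (c * x - d)

def rk4_step(x, y, dt, a, b, c, d):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lv_rhs(x, y, a, b, c, d)
    k2 = lv_rhs(x + dt / 2 * k1[0], y + dt / 2 * k1[1], a, b, c, d)
    k3 = lv_rhs(x + dt / 2 * k2[0], y + dt / 2 * k2[1], a, b, c, d)
    k4 = lv_rhs(x + dt * k3[0], y + dt * k3[1], a, b, c, d)
    return (x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def invariant(x, y, a, b, c, d):
    """V = c*x - d*ln(x) + a*y - b*ln(y), conserved along LV orbits."""
    return c * x - d * math.log(x) + a * y - b * math.log(y)
```

Along a trajectory, dV/dt = (c − d/x)x' + (a − b/y)y' = 0, so the orbits of the predicted curves are closed cycles around the equilibrium (d/c, b/a).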
A well-known example of a Markov stochastic chain is the Wright-Fisher model of genetic drift [27]. Let each individual in the population have a pair of genes with alleles A or B and produce many reproductive cells, each carrying one gene, A or B. The fraction of cells with allele A equals n_A/(n_A + n_B), where n_A is the number of genes with allele A over all individuals and n_B is the number of genes with allele B over all individuals. Each progeny is formed by the random union of two reproductive cells. Genes cannot mutate. Generations do not overlap, so genes from generation t cannot be combined with genes from the next generation t + 1. The initial generation (at t = 0) consists of N_AA individuals of genotype AA, N_AB individuals of genotype AB and N_BB individuals of genotype BB. What will be the proportion of individuals with the AA, AB and BB genotypes in each generation? What is the predicted curve of this process?
The described model can be reduced to multiple draws of the letter pairs AA, AB and BB, where the probability of drawing A is p = (2·N_AA + N_AB) / (2·(N_AA + N_AB + N_BB)) and the probability of drawing B is 1 − p. So the probability of obtaining AA is p², the probability of obtaining AB is 2p(1 − p), and the probability of obtaining BB is (1 − p)².
The space of states of this model is (N ∪ {0})³. The stochastic process of the numbers of individuals of genotypes AA, AB and BB can be written as (ξ_AA,t, ξ_AB,t, ξ_BB,t); its stochastic "matrix" is formed by the corresponding multinomial distributions with parameters p², 2p(1 − p) and (1 − p)². The conditional expected value of this process is equal to:
E(ξ_AA,t+1, ξ_AB,t+1, ξ_BB,t+1 | ξ_AA,t = N_AA, ξ_AB,t = N_AB, ξ_BB,t = N_BB) = N_{t+1}·(p², 2p(1 − p), (1 − p)²),
where N_{t+1} is the size of generation t + 1 and p = (2·N_AA + N_AB)/(2·(N_AA + N_AB + N_BB)). This is a real function. After several transformations of the formula in Definition 2.6, the predicted curve of this process has the following form:
F_pr(t) = N_t·(p², 2p(1 − p), (1 − p)²),
where N_t is the population size at time t and p = (2·N_AA,0 + N_AB,0)/(2·N_0). The last two formulas are well known as the "Hardy-Weinberg law". In the original papers on genetic drift models, other stochastic processes were considered, with other variables: the number of genes with allele A and the number of genes with allele B. This process will be written as (ξ_A,t, ξ_B,t)_{t∈N}; its space of states is (N ∪ {0})². The probability of the transition from the t-th generation (n_A, n_B) to the next generation (k_A, k_B) is given by the binomial distribution over the κ_{t+1} = k_A + k_B gene copies of generation t + 1, with success probability n_A/(n_A + n_B). To simulate this process, information about the population dynamics (the function t → N_t) is required; the assumption that this dynamics is constant is made very often. Genetic drift can then be observed in populations of different sizes (Fig. 5). The conditional expected value of this process is equal to:
E(ξ_A,t+1, ξ_B,t+1 | ξ_A,t = n_A, ξ_B,t = n_B) = (κ_{t+1}·n_A/(n_A + n_B), κ_{t+1}·n_B/(n_A + n_B)).
According to Definition 2.6, the predicted curve of this process is equal to:
F_pr(t + 1) = (κ_{t+1}·n_A,t/(n_A,t + n_B,t), κ_{t+1}·n_B,t/(n_A,t + n_B,t)) = (κ_{t+1}·n_A,t−1/(n_A,t−1 + n_B,t−1), κ_{t+1}·n_B,t−1/(n_A,t−1 + n_B,t−1)) = ...
The predicted fraction of allele A is constant and the same as in the initial population. In biology, this is a conclusion from the Hardy-Weinberg law.
The Hardy-Weinberg equation was formed in 1908 by Godfrey H. Hardy and Wilhelm Weinberg ([9], [26]) as the limit of the fraction of allele A as the size of the population tends to infinity. The interpretation of the same equation as the predicted value is different: it is a central tendency characteristic of a stochastic process, which means that the mean fraction at each time is constant despite the drift of the fraction in different directions. No direction of the drift-model realizations is discriminated against or favoured.
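The constancy of the predicted allele fraction can be illustrated by simulating Wright-Fisher drift directly (a minimal sketch assuming a constant population size, in line with the remark above): individual realizations drift in both directions, but their mean fraction stays at the initial value.

```python
import random

def wright_fisher(p0, pop_size, generations, rng):
    """One drift realization: 2*pop_size gene copies are resampled binomially
    each generation; returns the final fraction of allele A."""
    copies = 2 * pop_size
    n_a = round(p0 * copies)
    for _ in range(generations):
        p = n_a / copies
        n_a = sum(rng.random() < p for _ in range(copies))  # binomial draw
    return n_a / copies

def mean_final_fraction(p0, pop_size, generations, n_runs=2000, seed=5):
    """Monte Carlo mean of the final allele-A fraction over many realizations."""
    rng = random.Random(seed)
    return sum(wright_fisher(p0, pop_size, generations, rng)
               for _ in range(n_runs)) / n_runs
```

Although the variance of the fraction grows with time (single realizations may even fix at 0 or 1), the mean over realizations remains close to the initial fraction p0.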

Similarity of deterministic models and predicted curves
The equivalence of the formulas of some deterministic models to the predicted curves of stochastic processes with the same assumptions is not unusual. This applies to models that consider the size of a community. The rate of change of this size is a function, of unknown formula, of the community size. To establish the correct differential equation, this rate is divided by a value proportional to the community size, and the equation is considered as the community size tends to infinity. The integer argument of the function is changed to a real one. This quotient is called "density", which confuses ecologists, who calculate population density in a completely different way. Moreover, the non-linear functions of the community size are changed to linear formulas, on the grounds that all analytic functions can be expanded as Taylor series.
If such a model corresponds to a stochastic model in which the changes in the population are independent events with the same probability, then the stochastic matrix of this model is formed by binomial or multinomial distributions. As the community size increases to infinity, the probabilities in this matrix concentrate ever more tightly around the conditional expected size, in accordance with Jacob Bernoulli's law of large numbers. In an infinite community, the stochastic process is realized with the expected and predicted curves overlapping. However, they coincide with the solutions of the deterministic equations only if the functions of the probability or probability rate of the change of the community size are linear. For non-linear functions, the differences between the deterministic model and the predicted curves can be quite large, although only for specific parameters.
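The concentration argument can be illustrated numerically: for a binomial one-step change, the relative spread of the next community size shrinks like 1/√n as the community grows (a minimal sketch with a hypothetical per-capita birth probability):

```python
import random

def next_size_fraction(n, growth_p, rng):
    """Relative one-step change: (n + binomial births) / n."""
    births = sum(rng.random() < growth_p for _ in range(n))
    return (n + births) / n

def spread_of_relative_change(n, growth_p, n_runs=1000, seed=7):
    """Sample standard deviation of the relative one-step change."""
    rng = random.Random(seed)
    xs = [next_size_fraction(n, growth_p, rng) for _ in range(n_runs)]
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
```

The spread around the conditional expectation 1 + growth_p decreases as sqrt(growth_p*(1 - growth_p)/n), so in the limit of an infinite community the realizations follow the predicted curve.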
The interpretation of the predicted curves may be quite different from that of the equations created as deterministic models. This can be seen in the example of the Wright-Fisher genetic drift model and the Hardy-Weinberg law. In stochastic theory, they are complementary to each other. Nevertheless, some scientists believe that the Hardy-Weinberg law and the genetic drift model are mutually exclusive ([4]).
The similarity of some real processes to the solutions of some deterministic models is treated as proof that the world is deterministic. This article shows why the stochastic world is sometimes similar to the solutions of differential equations. In other words, this article can be treated as a probabilistic explanation of where the apparent determinism of our world comes from.

Advantages of predicted curves
The predicted curve has many advantages. It forms recursive sequences or differential equations in which all parameters have a "biological" ("chemical", "physical", etc.) interpretation. They are values of measured variables, or numbers allowing the calculation of some relationships obtained by regression.
If the probabilities in the stochastic matrices are calculated from experimental or field studies, then the time units of their calculation are known. Probabilities are then mean fractions of events (transitions) over a specified time period (small for a short period, increasing as the period grows). This time period is expressed in units of time (s, min, day, year, etc.), which can be displayed on the time axis of the predicted curve. Probability rates are quantities derived with respect to time measured in the specified units; their units are 1/s, 1/min, 1/day, 1/year, etc. For instance, an instantaneous probability rate of 0.24/day is equal to 0.01/h. A stochastic model using such parameters has a time axis calibrated in known units, and the time axes of its realizations are expressed in the same units as the time axis of the predicted curve. This increases the applicability of predicted curves compared to deterministic models.
The predicted curve characterizes the central tendency of almost every stochastic process (excluding only processes without conditional expected values). It is often possible to derive its formula before specifying the values of the parameters. It allows one to perform mathematical analysis and to predict possible courses of stochastic process realizations. It can be used to calibrate the parameters of stochastic models, because the predictions using them are always very good, even for very small communities.
The predicted curves, as recursive sequences or differential equations, can be characterized by stability or instability. This allows the use of these terms for stochastic models: a model is stable if its predicted curve is stable. This is consistent with ecology, where the terms 'stability' and 'instability' refer to ecosystems, which are always stochastic. Such a definition of the stability of stochastic systems allows a theoretical study of the relationship between the stability and the durability of ecological systems.

Methods
Statistical analysis of various existing Markov stochastic processes used in biology was performed.

Author contributions statement
This paper was written and prepared for publication by Miłosława Sokół: 100% of contribution.

Additional information
Funding: Not applicable. Conflicts of interest/Competing interests: Not applicable. Availability of data and material: Not applicable. Code availability: Not applicable. Ethics approval: Not applicable. Consent to participate: Not applicable. Consent for publication: I consent to publication.