Upper and lower bounds for the first exit time of Brownian motion from minimum and maximum parabolic domains with variable dimension, with an application to modeling the variation of biological species


Consider a Brownian motion with variable dimension starting at an interior point of the minimum or maximum parabolic domain D_t^min or D_t^max in R^{d(t)+2}, where d(t) ≥ 1 is an integer-valued function that is increasing in t and d(t) → ∞ as t → ∞, and let τ_{D_t^max} and τ_{D_t^min} denote the first times the Brownian motion exits from D_t^max and D_t^min, respectively. Upper and lower bounds with exact constants for the asymptotics of log P(τ_{D_t^max} > t) and log P(τ_{D_t^min} > t) are given as t → ∞, depending on the shapes of the domains D_t^max and D_t^min. The proofs are based on Gordon's inequality and on earlier work of Li, Lifshits and Shi in the case of a single general parabolic domain.


Introduction and main result
The first exit time of Brownian motion has attracted growing attention from mathematicians and has been widely applied in mathematics, physics, biology and other fields. Motivated by changes of species caused by environmental change and migration within an ecological chain, in this paper the first exit time of Brownian motion is used as a model: the Brownian motion describes the fluctuation of the population size of each species, and the variable dimension of the Brownian motion corresponds to the number of species, which changes over time in nature. We therefore study the first exit time of a variable-dimensional Brownian motion from the minimum and maximum parabolic domains, which can serve as an early-warning indicator for species change in an ecological chain.
Throughout the paper, let {B(s) = (B_1(s), B_2(s), ..., B_{d(t)}(s)) ∈ R^{d(t)}, 0 ≤ s ≤ t} be a standard d(t)-dimensional Brownian motion whose dimension changes over time t, where B_i(s), 1 ≤ i ≤ d(t), are independent one-dimensional Brownian motions starting at 0. Consider the first exit times τ_{D_t^max} and τ_{D_t^min} of the (d(t)+2)-dimensional Brownian motion from the minimum and maximum parabolic domains

D_t^min = {(x, y_1, y_2) ∈ R^{d(t)+2} : ||x|| < min{(y_1 + 1)^{1/p_1}, (y_2 + 1)^{1/p_2}}, x ∈ R^{d(t)}}

and

D_t^max = {(x, y_1, y_2) ∈ R^{d(t)+2} : ||x|| < max{(y_1 + 1)^{1/p_1}, (y_2 + 1)^{1/p_2}}, x ∈ R^{d(t)}},

where ||x|| := [Σ_{i=1}^{d(t)} x_i^2]^{1/2} is the Euclidean norm of x := (x_1, ..., x_{d(t)}) ∈ R^{d(t)}. The exit time is a stopping time, and it plays a key role in the probabilistic solution of the Dirichlet problem. Before presenting our results, let us recall some well-known results on the first exit time. Firstly, Li [4] considered the first exit time of a Brownian motion from an unbounded convex domain generated by a function f, where f(x) is convex and the domain is symmetric with respect to the set of all orthogonal transformations of R^d; here B(s), 0 ≤ s ≤ t, is a standard d-dimensional Brownian motion and the dimension d is a positive constant. Here and throughout the paper, W(s) is a standard one-dimensional Brownian motion starting at 0, independent of B(s). Furthermore, he assumed that f(x) is a non-decreasing lower semi-continuous convex function on [0, ∞) with f(0) finite, as in [11]. Under these conditions, very general estimates for the asymptotics of the exit probability were obtained by a Gaussian technique and Slepian's inequality. However, in view of the generality of f(x), the lower and upper estimates were not asymptotically equivalent. Lifshits and Shi [5] imposed the further restriction f(t) = t^p with p > 1. They obtained lower and upper estimates of the corresponding exit probability and proved that these estimates are asymptotically equivalent, thus improving Li's estimates in this case.
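Purely as an illustration of the quantity studied here (not part of the paper's argument), the probability that the path stays in D_t^min up to time t can be approximated by Monte Carlo for a fixed small dimension. All numerical choices below (d = 3, p_1 = 4, p_2 = 2, path and step counts) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def stay_prob_min(t=1.0, d=3, p1=4.0, p2=2.0, n_paths=4000, n_steps=800):
    """Monte Carlo estimate of P(tau_{D_t^min} > t): a d-dimensional Brownian
    motion B must satisfy ||B(s)|| <= min{(1+W1(s))^{1/p1}, (1+W2(s))^{1/p2}}
    for all s <= t, where W1, W2 are independent one-dimensional Brownian
    motions. Following the paper's convention, the event fails as soon as
    some W_j(s) < -1 (the boundary value is then non-positive)."""
    dt = t / n_steps
    count = 0
    for _ in range(n_paths):
        B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_steps, d)), axis=0)
        W1 = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
        W2 = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
        if W1.min() < -1.0 or W2.min() < -1.0:
            continue  # boundary (1 + W_j)^{1/p_j} undefined: count as an exit
        bound = np.minimum((1.0 + W1) ** (1.0 / p1), (1.0 + W2) ** (1.0 / p2))
        if np.all(np.linalg.norm(B, axis=1) <= bound):
            count += 1
    return count / n_paths
```

Since P(τ_{D_t^min} > t) is non-increasing in t, the estimate over a short horizon should dominate the one over a long horizon, which gives a simple sanity check on the sketch.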
On the basis of the two results above and Gordon's inequality, Lu [6] considered the exit probabilities from the minimum and maximum parabolic domains in the constant-dimension case, namely the analogues of (1.9) and (1.10) below with fixed dimension d, where p_j > 1, j = 1, 2, and W_j, j = 1, 2, are standard one-dimensional Brownian motions, independent of each other and of {B(s) ∈ R^d, s ≥ 0}.
In the last few years a large number of mathematicians have studied the first exit time of Brownian motion from various domains, and it has been widely applied in mathematics and physics.
It is these works that motivate our study. Up to now, to the best of our knowledge, all of the researchers above consider only the case where the dimension d is a constant; little is known about variable dimension. Therefore, in this paper we mainly consider the exit probabilities of d(t)-dimensional Brownian motion, where the dimension changes over time t and is no longer a fixed constant d as in previous studies. In practice the problem is even more interesting when the dimension is random; however, that problem rests on the case where d(t) is a non-random function, which is the case we treat here. We consider the exit probabilities from the minimum and maximum parabolic domains, namely

P(τ_{D_t^min} > t) = P(||B(s)|| < min{(1 + W_1(s))^{1/p_1}, (1 + W_2(s))^{1/p_2}} for all 0 ≤ s ≤ t),   (1.9)

P(τ_{D_t^max} > t) = P(||B(s)|| < max{(1 + W_1(s))^{1/p_1}, (1 + W_2(s))^{1/p_2}} for all 0 ≤ s ≤ t),   (1.10)

where W_1 and W_2 are standard one-dimensional Brownian motions, independent of each other and of B. Note that if W_j(s) < −1, j = 1, 2, we take the probabilities (1.9) and (1.10) to be 0.
In this paper, we provide asymptotic estimates with exact constants for (1.9) and (1.10). The following simple fact is the key step in our upper estimate of the exit probability (1.10). It is based on a powerful Gaussian technique, Gordon's inequality [3].
where ξ is a standard normal random variable, independent of {B(s) ∈ R^d, s ≥ 0} and of W_2.
Gordon's inequality and its variants provide a very useful tool in the theory of Gaussian processes and probability in Banach spaces. The simplest form of Gordon's theorem applies to centered Gaussian variables X_ij and Y_ij whose covariances satisfy E X_ij X_ik ≥ E Y_ij Y_ik for all i, j, k, E X_ij X_lk ≤ E Y_ij Y_lk for any i ≠ l and all j, k, and E X_ij^2 = E Y_ij^2; the conclusion then holds for all real scalars λ_ij. Using Proposition 1.1, we give the main result of this paper as follows: in the minimum parabolic domain we have the estimate (1.14), in which the constants for j = 1, 2 are expressed through the usual gamma function Γ(·), and c_1 and c_2 are strictly positive constants independent of p and t.
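For the reader's convenience, the standard centered form of Gordon's theorem invoked here (cf. [3]) can be written out as follows; this is a restatement of the classical result, which the paper then extends to the case of index-dependent means:

```latex
\[
\mathbb{E}X_{ij}^{2}=\mathbb{E}Y_{ij}^{2},\qquad
\mathbb{E}X_{ij}X_{ik}\ \ge\ \mathbb{E}Y_{ij}Y_{ik},\qquad
\mathbb{E}X_{ij}X_{lk}\ \le\ \mathbb{E}Y_{ij}Y_{lk}\quad (i\ne l),
\]
\[
\Longrightarrow\qquad
\mathbb{P}\Bigl(\bigcap_{i=1}^{n}\bigcup_{j=1}^{m}\{X_{ij}\ge\lambda_{ij}\}\Bigr)
\ \le\
\mathbb{P}\Bigl(\bigcap_{i=1}^{n}\bigcup_{j=1}^{m}\{Y_{ij}\ge\lambda_{ij}\}\Bigr)
\quad\text{for all real }\lambda_{ij}.
\]
```

For n = 1 this reduces to Slepian's inequality, which is the comparison used in Li's earlier estimates.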
Remark: Now assume that B(s) = (B_1(s), B_2(s), ...) is an infinite-dimensional Brownian motion, and consider the projection onto the first d(s) coordinates. The general problem is to estimate the probability (1.15) that the projected process stays below the boundary up to time t. On the one hand, it is easy to see that the probability (1.15) is decreasing in d(s), which gives the lower bound (1.16). On the other hand, for a finite partition t_0 = 0 < t_1 < t_2 < ... < t_n = t, we can obtain an upper bound for the probability (1.15): using conditional expectation, then Anderson's inequality for Gaussian measures together with the scaling property of Brownian motion, and finally induction, we arrive at (1.17). Combining (1.16) and (1.17), for the purpose of estimating (1.15) we need only consider the simpler case (1.18). Thus, in this paper we work with the terminal dimension d(t) instead of the time-dependent dimension d(s). The rest of the paper is organized as follows. In Section 2, we present several estimates of exit probabilities with moving boundaries for Brownian motion and for the Bessel process; they are needed in the proof of Theorem 1.1. In Section 3, we give the proofs of Proposition 1.1 and of the upper estimates in (1.13) and (1.11). The proofs of the lower estimates in (1.12) and (1.14) are presented in Section 4.
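The monotonicity claim above (that the probability (1.15) is decreasing in the dimension) follows from the fact that dropping coordinates can only shrink the Euclidean norm. For a generic boundary function f (a placeholder for the boundaries used in this paper), and since d(s) ≤ d(t) for s ≤ t, one may sketch it as:

```latex
\[
\Bigl(\sum_{i=1}^{d(s)}B_i^{2}(s)\Bigr)^{1/2}
\ \le\
\Bigl(\sum_{i=1}^{d(t)}B_i^{2}(s)\Bigr)^{1/2},
\qquad 0\le s\le t,
\]
\[
\text{hence}\qquad
\mathbb{P}\Bigl(\|(B_1(s),\dots,B_{d(s)}(s))\|\le f(s)\ \ \forall\, s\le t\Bigr)
\ \ge\
\mathbb{P}\Bigl(\|(B_1(s),\dots,B_{d(t)}(s))\|\le f(s)\ \ \forall\, s\le t\Bigr).
\]
```

The right-hand probability involves only the fixed dimension d(t), which is exactly the reduction (1.16) exploits.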

Exit probabilities with moving boundary
To prove Theorem 1.1, in this section we need the following results: the small ball probability estimates for Gaussian fields due to Shao and Wang [9], the result of Li [4] on the first exit time of Brownian motion from an unbounded convex domain, and the result of Lifshits and Shi [5] on the first exit time of Brownian motion from a parabolic domain.
If the dimension is a constant d, we consider the probability P(sup_{0≤s≤t} ||B(s)|| ≤ x). From Li [4] it is easy to see that, for any x > 0, the two-sided estimate holds with a positive constant K (which may vary from line to line), where j_ν is the smallest positive zero of the Bessel function J_ν. However, up to now no such result is known for a variable dimension d(t). So, in this section, we give two lemmas yielding upper and lower bounds for this probability. The first applies to a centered Gaussian process whose increments are controlled for all s, t ≥ 0, and asserts that there exists 0 < c_1 < ∞, depending only on α and d, such that the corresponding upper small ball estimate holds for any 0 < x < 1.
The argument of Lemma 2.1 was given in Shao and Wang [9], who also gave a lower bound for the small ball probability of Gaussian fields. Lemma 2.2. Let X = {X(t); t ∈ [0, 1]^d} be a Gaussian field with mean zero. Assume that there exists a non-decreasing function σ(x) on [0, 1] controlling the increments, that σ(x)/x^α is non-decreasing on [0, 1] for some α > 0, and that the stated condition holds for every 0 < h ≤ 1 and every integer k with 0 ≤ k ≤ 1/h. Then there exists a positive constant c_2 = c_2(α, d) such that the lower small ball estimate holds for any 0 < x < 1. In fact, Shao [8] obtained an estimate of the probability P(sup_{0≤s≤t} B(s) ≤ x) with exact constants; however, we do not need precise coefficients in this paper, so the above lemmas suffice. By Lemmas 2.1 and 2.2, we obtain the following propositions.
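In the one-dimensional case the small ball probability discussed above has a classical closed-form series, which makes a convenient sanity check for the cruder two-sided bounds of Lemmas 2.1 and 2.2. The following Monte Carlo sketch (purely illustrative; the discretization parameters are arbitrary) compares a path estimate of P(sup_{0≤s≤1} |W(s)| ≤ x) with the leading term (4/π) exp(−π²/(8x²)) of that series.

```python
import numpy as np

rng = np.random.default_rng(0)

def small_ball_mc(x, n_paths=5000, n_steps=1000):
    """Monte Carlo estimate of P(sup_{0<=s<=1} |W(s)| <= x) for a standard
    one-dimensional Brownian motion W, via a random-walk discretization."""
    dt = 1.0 / n_steps
    paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    return np.mean(np.abs(paths).max(axis=1) <= x)

def small_ball_leading(x):
    """Leading term of the classical expansion
    P(sup_{0<=s<=1}|W(s)| <= x)
      = (4/pi) * sum_k ((-1)^k / (2k+1)) * exp(-(2k+1)^2 pi^2 / (8 x^2))."""
    return (4.0 / np.pi) * np.exp(-np.pi ** 2 / (8.0 * x ** 2))

for x in (0.6, 0.8, 1.0):
    print(x, round(small_ball_mc(x), 3), round(small_ball_leading(x), 3))
```

The exponential decay rate π²/(8x²) plays the same role here that the Bessel zero j_{(d−2)/2}²/(2x²) plays for the d-dimensional norm process in the text.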
Proof: First, we use the independence of the coordinates B_i to obtain (2.4). By Lemma 2.1, taking d = 1, α = 1/2 and Z(t) = B_1(t), the bound (2.1) can be rewritten in the required form. Next, we consider the lower bound of the probability.
With this substitution, the inequality (2.7) can be written in the required form. Proposition 2.2. Let g(t) be a continuous strictly positive function such that g′(t) ≤ 0 is continuous and g″(t) ≥ 0. Then the corresponding bound holds, where c_2 is a sufficiently large constant.
In particular, under the additional condition √t g′(t) → 0 as t → ∞, the estimate holds for any δ > 0 and t large.

The proof of Proposition 2.2 follows the proof of Theorem 2.2 in [4]; we only replace (2.18) in Li [4] by the following inequality, which we obtain from the Brownian scaling property and (2.3):
The rest of the proof is similar to that of Theorem 2.2 in Li [4].
Lemma 2.3. For any p > 1, the stated estimate holds, where ġ denotes the Radon–Nikodym derivative of g and Φ(x) is the distribution function of a standard normal random variable. Lemma 2.3 is given in [7] and follows from well-known theorems concerning the equivalence of measures of Gaussian processes.
Before giving more lemmas, let us introduce some notation. Here and in the following, A↑_{θ,p} denotes the set of all non-decreasing functions in the set A_{θ,p}. Using this notation and applying the classical Schilder large deviation theorem, Lifshits and Shi provided a useful upper estimate in [5].
Using Lemma 2.4, Song [10] obtained upper and lower estimates of the exit probability (1.6) of Brownian motion from a parabolic domain with variable dimension d(t), where Γ(·) denotes the usual gamma function, and c_1 and c_2 are strictly positive constants independent of p and t.
Using Lemmas 2.1–2.5, we give the proof of Theorem 1.1 in the next two sections.

Upper estimates
Our upper-bound argument modifies the one appearing in Lifshits and Shi [5]; we must pay special attention to the variable dimension d(t) and to inequality (2.3). It is easy to obtain the upper estimate of (1.13). By (3.1), we only need an upper estimate for the probability on the right-hand side of (3.1), and by (3.2) this probability can be split further. For the probability on the right-hand side of (3.2), we use Lemma 2.5 to find upper bounds for the two probabilities and compare them to get the upper estimate in (1.13) as follows:

(3.3)
Before giving the upper estimate in (1.11), we need to prove Proposition 1.1. We have (3.4), where the inequality follows from Gordon's theorem [3] by conditioning on B(s), 0 ≤ s ≤ t. To justify it, we simply check the covariance comparison, where δ_jk = 1 if j = k and δ_jk = 0 otherwise. Thus, Proposition 1.1 follows from Gordon's inequality. Note that in most papers and books Gordon's inequality is proved and used for mean-zero Gaussian random vectors; here we in fact use a form in which the common mean depends on the index parameter. The standard proof can be modified to cover this case.
Next, using (3.4), we give the proof of the upper bound estimate in (1.11). Our argument is a modification of the one from [5]. To obtain a rigorous upper bound, let 0 = t_0 < t_1 < t_2 < ... < t_M ≤ t and observe (3.5), where S(t) := sup_{0≤s≤t} √s ξ and S_2(t) := sup_{0≤s≤t} W_2(s). Since B(s) ∈ R^d has independent increments, we have (3.6) for a_i > 0, i ≤ M. By Anderson's inequality and the fact that B has stationary increments, we have (3.7). Plugging (3.7) into (3.6) and using induction, we obtain (3.8). By Proposition 2.1, we know (3.9). Using (3.8) and (3.9), we obtain (3.10), where c(ε) = (1 − ε) c_1 d(t). By conditioning on the Brownian motion W_2 and the standard normal variable ξ, and using (3.5) and (3.10), we obtain (3.11). We split the expectation in (3.11) into two parts, I and II. Since S_2(t) ≥ 0 for t ≥ 0, in the case ξ ≤ 0 we have (1 + S(t_i))^{2/p_1} ≤ (1 + S_2(t_i))^{2/p_2} for 1 ≤ i ≤ M. Using the independence of W_2 and ξ, the bound for I follows easily. On the other hand, for II, in the case ξ > 0, the scaling property of W_2 and the monotonicity of S_2(t) give (3.13); plugging (3.13) into II yields (3.14). Similarly, from (3.12) we also obtain the analogous bound (3.15). Next, we deal with II. We split the expectation in (3.14) into two parts, III and IV, according to an event Q. In the case ξ > 0 on the event Q, in view of the monotonicity of S_2(t), the integral in III satisfies, for M large, the bound (3.16). Using the independence of ξ and S_2(t), plugging (3.16) into III yields (3.17). On the other hand, in the case {ξ > 0, Q^c} = {(1 + √t ξ)^{2/p_1} > (1 + √t S_2(2τ_1))^{2/p_2}}, in view of the monotonicity of S_2(t), the integral in IV satisfies (3.18). Using (3.15) and (3.17), we obtain (3.20). Next we look for upper estimates of I + III and of IV, respectively. We split the expectation in (3.20) into two parts as in (3.21). At this stage, applying Lemma 2.4 to the first term on the right-hand side of (3.21), we get (3.22). Next we deal with IV. For the sake of brevity, we introduce shorthand notation and split the expectation on the right-hand side of (3.19) into three parts.
We denote these three parts by V, VI and VII, and deal with them in turn. Firstly, we bound V as in (3.23). Secondly, we bound VI as in (3.24); note that the infimum in (3.24) is attained at u = k t^{(p_1−1)/(2(p_1+1))}. Finally, we bound VII as in (3.25); the second inequality in (3.25) follows easily from the Mills ratio bound (see [6]). Comparing (3.22)–(3.25), it is easy to see that (3.22) is the dominant term; combining all of the above inequalities, we obtain the lim sup estimate (3.26).
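Anderson's inequality, used in step (3.7) above, says that for a centered Gaussian vector Z and a symmetric convex set C (here a Euclidean ball), P(Z + μ ∈ C) ≤ P(Z ∈ C) for any shift μ. A small numerical illustration (not part of the proof; the dimension, radius and shift are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def ball_prob(mu, r=1.5, n=200_000):
    """Monte Carlo estimate of P(||Z + mu|| <= r) for Z standard Gaussian in R^3."""
    z = rng.standard_normal((n, 3)) + mu
    return np.mean(np.linalg.norm(z, axis=1) <= r)

p_centered = ball_prob(np.zeros(3))
p_shifted = ball_prob(np.array([0.8, 0.0, 0.0]))
# Anderson's inequality: shifting the Gaussian away from the center of the
# symmetric convex set can only decrease the probability of landing in it.
print(p_centered, p_shifted, p_centered >= p_shifted)
```

In the proof, the shift μ is the increment of B accumulated before each partition point, which is why stationarity of the increments combines with Anderson's inequality to yield (3.7).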

Lower estimates
It is easy to obtain the lower estimate of (1.12). On the right-hand side of (4.1), we use Lemma 2.5 to find lower bounds for the two probabilities and compare them, which gives the lower estimate in (1.12) as in (4.2). Next, we give the proof of the lower bound in (1.14). Our argument is a modification of the one from [5]. Take a function h ∈ A↑_{0,p_1} solving the variational problem (4.3); that is, let h ∈ A↑_{0,p_1} be such that (4.4) holds. From (4.4) it is easy to verify that h(x) = o(x^{1/2}) and x^{p_1/2} = o(h(x)) as x → 0^+. Since p_1 > p_2 > 1, there exists r with 2 > r > 1 such that p_1 > p_2 > r > 1.
For the first probability on the right-hand side of (4.7), by the scaling property of B we obtain (4.8). Note that the inequality 1 ≥ h(ε)^{−1} ν^{−p_1 p_2/((p_1+1)(p_2+1))} forces s < ε for s ∈ V^c; this is used in formula (4.11). Then, splitting the integral in (4.10) into two parts, we get the required bound.

Authors' contributions
The first author (Chao Liu) conceived the innovation of this paper and provided the relevant lemmas used to prove the theory. The second author (Wenbin Che) and the third author (Jingjun Zhang) gave a detailed proof of the main conclusions of the paper.