Dynamic Programming of the Stochastic Burgers Equation Driven by Lévy Noise

Abstract. In this work, we study the optimal control of the stochastic Burgers equation perturbed by Gaussian and Lévy type noises, with a distributed control process acting on the state equation. We use the dynamic programming approach for the second order Hamilton-Jacobi-Bellman (HJB) equation, consisting of an integro-differential operator with Lévy measure, associated with the stochastic control problem. Using the regularizing properties of the transition semigroup corresponding to the stochastic Burgers equation and compactness arguments, we solve the HJB equation and the resulting feedback control problem.


Introduction
Optimal control theory of fluid mechanics has been one of the important subjects in applied mathematics, with several engineering applications. An interesting problem in this direction is the rigorous study of the feedback synthesis of optimal control problems for the stochastic Burgers equation forced by random noise, using the infinite-dimensional Hamilton-Jacobi-Bellman (HJB) equation associated with the problem. The noise term enters the physical system as a forcing due to structural vibration and other environmental effects, and can be incorporated either as a random boundary forcing or as a random distributed forcing in the state equation. The optimal control of deterministic Navier-Stokes equations has been studied in [24], where it is shown that the value function, the minimum of an objective functional, is the viscosity solution of the associated HJB equation; the authors of [13] extended this study to the optimal control of 2D stochastic Navier-Stokes equations with Gaussian noise. In [6], the authors considered the optimal control of the stochastic Burgers equation with Gaussian noise. They introduced a specific control problem in which the covariance operator also acts on the control process and, using the Hopf transformation, solved the problem by the dynamic programming approach. They removed this restriction in [7] and solved the control problem by again associating it with an HJB equation. In the latter case, they considered the mild form of the HJB equation and used the smoothing properties of the transition semigroup to obtain a smooth solution to the HJB equation in a weighted space. The same strategy has also been used to study the optimal control of 2D stochastic Navier-Stokes equations with Gaussian noise in [8]. For various aspects of the optimal control of deterministic and stochastic fluid dynamic models, one may refer to [23]; for other general systems, one can consult the books [27,3].
Unlike the case of optimal control of stochastic Navier-Stokes/Burgers equations with Gaussian noise (see [23,25,13,6,7,8]), there are only very few works on the control of stochastic PDEs with Lévy noise (see [1,17]). Moreover, the recent book [4] carries out a phenomenological study of fully developed turbulence and intermittency, and proposes that the experimental observations of these physical characteristics can be modeled by stochastic Navier-Stokes equations with Lévy noise. In fact, other qualitative properties, like ergodicity and invariant measures for stochastic Navier-Stokes/Burgers equations with Lévy noise, have already been studied in the literature (see, for example, [10,11,18]).
In this paper, we consider the optimal control of the stochastic Burgers equation perturbed by Gaussian and Lévy type noises, with a distributed stochastic control force acting on the equation. Following the methodology developed in [7], we transform the HJB equation, of partial integro-differential type with Lévy measure, into mild form using the transition semigroup associated with the stochastic Burgers equation. The regularity of solutions of the mild form of the HJB equation depends directly on the smoothing properties of the semigroup, and this is achieved via the Bismut-Elworthy-Li type formula [12] derived for the Burgers equation with Lévy noise. However, the boundedness of the derivative of the transition semigroup demands the finiteness of exponential moments of the stochastic Burgers equation. We state this exponential estimate, which will be proved in future work. The solution of the HJB equation is then obtained, by compactness arguments, in a space of smooth functions with weighted exponential growth. This justifies the required smoothness of solutions for the feedback control formula, and therefore, by standard arguments, we prove the existence of an optimal pair for the control problem.
This paper is organized as follows. In Section 2, we state the problem and introduce the function spaces used in the rest of the paper. In Section 3, we state the HJB equation and write its Galerkin approximation. Section 4 is devoted to establishing the crucial moment estimates for the Burgers equation with Lévy noise. Using these estimates, in Section 5 we prove a-priori estimates related to the smoothing properties of the semigroup. Finally, in Section 6, we establish the main results of this paper.
Since the level of turbulence in a fluid flow can be characterized by the time-averaged enstrophy, it seems appropriate to consider the minimization (over all controls from a suitable admissible set) of a cost functional penalizing the enstrophy of the state.

2.1. Functional Setting. We define the function spaces and notation frequently used in the sequel. Let $H = L^2(0,1)$ be endowed with the inner product $(\cdot,\cdot)$ and the norm $\|\cdot\|$. Let $V = H^1_0(0,1)$ be endowed with the norm $\|\cdot\|_1$; this notation is chosen to be consistent with the fractional powers introduced below. The induced duality, for instance between the space $V$ and its dual $V' = H^{-1}(0,1)$, will be denoted by $((\cdot,\cdot))$.
Let us define the positive self-adjoint operator $Au = -D^2_\xi u$ for $u \in D(A) = H^2(0,1) \cap V$. Then $A^{-1}$ is a compact self-adjoint operator, and hence, by spectral theory, there exists an orthonormal basis $\{e_k\}_{k=1}^\infty$ of $H$ and a sequence of eigenvalues $\{\sigma_k\}$ accumulating at zero such that $A^{-1} e_k = \sigma_k e_k$. In this paper, we also use the fractional powers of $A$: for $u \in H$ and $\alpha > 0$, we define
$$A^\alpha u = \sum_{k=1}^\infty \sigma_k^{-\alpha} (u, e_k)\, e_k, \qquad D(A^\alpha) = \Big\{ u \in H : \sum_{k=1}^\infty \sigma_k^{-2\alpha} (u, e_k)^2 < \infty \Big\}. \qquad (2.3)$$
Here $D(A^\alpha)$ is equipped with the norm $\|A^\alpha u\|$, and $D(A^{1/2}) = V$; in general, we set $V_\alpha = D(A^{\alpha/2})$ with $\|u\|_\alpha = \|A^{\alpha/2} u\|$. For any $s_1 < s_2$, the embedding $D(A^{s_2}) \subset D(A^{s_1})$ is compact. Indeed, it is known that $A$ has an orthonormal basis of eigenvectors $\{e_k\}$ given by $e_k(\xi) = \sqrt{2}\sin(k\pi\xi)$, with eigenvalues $\lambda_k = k^2\pi^2$. Applying the Hölder inequality to the expression (2.3), one obtains the following interpolation estimate: for any real $s_1 \le s \le s_2$ and $\theta \in [0,1]$ given by $s = \theta s_1 + (1-\theta)s_2$,
$$\|u\|_s \le \|u\|_{s_1}^{\theta}\, \|u\|_{s_2}^{1-\theta}. \qquad (2.4)$$
We define the nonlinear operator $B(\cdot)$ and the associated trilinear form $b(\cdot,\cdot,\cdot)$ corresponding to the Burgers nonlinearity; an integration by parts yields the orthogonality property $(B(u), u) = 0$ for $u \in V$ (see Remark 3.1).

2.2. Lévy Noise and Hypothesis. Let $(\Omega, \mathcal{F}, P)$ be a complete probability space equipped with an increasing family of sub-sigma-fields $\{\mathcal{F}_t\}_{0 \le t \le T}$ of $\mathcal{F}$ satisfying the usual conditions. Since $Q$ is a positive, symmetric, trace class operator on $H$, there exists an orthonormal basis $\{f_k\}_{k=1}^\infty$ of $H$ diagonalizing $Q$. Here $\varrho_k$ is the eigenvalue corresponding to $f_k$, which is real and positive and satisfies $\sum_{k=1}^\infty \varrho_k < \infty$. The stochastic process $\{W(t) : 0 \le t \le T\}$ is an $H$-valued cylindrical Wiener process on $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ if and only if, for arbitrary $t$, the process $W(t)$ can be expressed as $W(t) = \sum_{k=1}^\infty \sqrt{\varrho_k}\, \beta_k(t)\, f_k$, where $\{\beta_k\}$ are mutually independent real-valued Brownian motions. Let $(Z, |\cdot|)$ be a separable Banach space and $(L_t)_{t \ge 0}$ a $Z$-valued Lévy process. For every $\omega \in \Omega$, $L_t(\omega)$ has at most a countable number of jumps in any bounded interval, and the jump at time $t$ is $\Delta L_t = L_t - L_{t-}$. The measure $N(\cdot,\cdot)$ is the Poisson random measure (or jump measure) with respect to $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{0 \le t \le T}, P)$ associated with the Lévy process $(L_t)_{t \ge 0}$. Here $\mathcal{B}(Z_0)$ is the Borel $\sigma$-field of $Z_0 := Z \setminus \{0\}$, and $\mu$ is the $\sigma$-finite intensity measure defined on $(Z_0, \mathcal{B}(Z_0))$. The intensity measure $\mu(\cdot)$ on $Z$ satisfies $\mu(\{0\}) = 0$. The compensated Poisson random measure is defined by $\widetilde{N}(dt, \Gamma) = N(dt, \Gamma) - \mu(\Gamma)\,dt$, where $\mu(\Gamma)\,dt$ is the compensator of the Lévy process $(L_t)_{t \ge 0}$ and $dt$ is the Lebesgue measure.
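The interpolation estimate (2.4) of Section 2.1 can be recovered from the spectral representation; here is a short sketch, assuming (as the definitions above suggest) the expansion $\|u\|_s^2 = \sum_k \lambda_k^{s} u_k^2$ with $u_k = (u, e_k)$ and $s = \theta s_1 + (1-\theta) s_2$:

```latex
% Interpolation via Hölder with exponents 1/\theta and 1/(1-\theta):
\|u\|_s^2
  = \sum_{k} \lambda_k^{\theta s_1 + (1-\theta)s_2}\, u_k^2
  = \sum_{k} \big(\lambda_k^{s_1} u_k^2\big)^{\theta}
             \big(\lambda_k^{s_2} u_k^2\big)^{1-\theta}
  \le \Big(\sum_k \lambda_k^{s_1} u_k^2\Big)^{\theta}
      \Big(\sum_k \lambda_k^{s_2} u_k^2\Big)^{1-\theta}
  = \|u\|_{s_1}^{2\theta}\, \|u\|_{s_2}^{2(1-\theta)}.
```

Taking square roots gives (2.4).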
Let $G : [0,1] \times [0,T] \times Z \to H$ be a measurable, $\mathcal{F}_t$-adapted process satisfying suitable integrability conditions, where $G(t,z) := G(\xi, t, z)$. The integral $M(t) := \int_0^t \int_Z G(s,z)\, \widetilde{N}(ds, dz)$ is an $H$-valued martingale, and there exist increasing càdlàg processes, the so-called quadratic variation process $[M]_t$ and the Meyer process $\langle M \rangle_t$, such that $[M]_t - \langle M \rangle_t$ is a martingale (see [22]). Moreover, we have the following Itô isometry:
$$\mathbb{E}\|M(t)\|^2 = \mathbb{E}\int_0^t \int_Z \|G(s,z)\|^2\, \mu(dz)\, ds.$$
For more details on Lévy processes, one may refer to [2] and [21]. The following are the basic assumptions on $Q$, $G(\cdot,\cdot)$ and $\mu(\cdot)$ used in the sequel; other necessary assumptions on the noise coefficient are stated in the relevant sections of the paper.
Assumption 2.1. (A₁) For any $\kappa \in (1/2, 1)$, the operator $Q$ satisfies the stated condition. (ii)′ Assumption (ii) and the Poincaré inequality give the corresponding bound for all $p \ge 2$. Also, we fix measurable subsets $Z_m$ of $Z$ such that $\mu(Z_m) < +\infty$ and $Z_m \uparrow Z$ as $m \to \infty$.
In the rest of the paper, C will denote a generic positive constant depending on the given arguments.
Example 2.1. As an example of $Q$, one may take an operator diagonal in the basis $\{e_k\}$ with suitably decaying eigenvalues; any $\alpha \in (1/2, 1)$ then satisfies Assumption (A₁).

2.3. Function Spaces. To obtain the smoothing property of the transition semigroup, we need to work with spaces of functions having exponential weights. Let us define $B_R := \{x \in H : \|x\| \le R\}$ and, for $k \in \mathbb{N}$ and $\alpha \in [0,1)$, the space $C^{k+\alpha}(B_R)$. The space $C^{k+\alpha}(B_R)$ is a Banach space with the norm $\|\cdot\|_{k+\alpha}$. For $\varepsilon > 0$, we define the weighted space $C^{k+\alpha}_\varepsilon(B_R)$, together with the corresponding norms; for $\alpha = 0$, we obtain the space $C^k_\varepsilon$, and for $\varepsilon = 0$, we use $C^{k+\alpha}_0(B_R)$ with $\|\cdot\|_{k+\alpha,0} = \|\cdot\|_{k+\alpha}$. We point out that one can equally well work with function spaces restricted to a ball of radius $R_0$ in the Hilbert space $H$.

The Hamilton-Jacobi-Bellman Equation
The system is controlled through the process $U : [0,1] \times [0,T] \times \Omega \to H$, which is adapted to the filtration $\{\mathcal{F}_t\}_{t \ge 0}$. For a fixed constant $\rho > 0$, we define the set of all admissible controls as
$$\mathcal{U}^{0,T}_\rho = \big\{ U \in L^2(\Omega, L^2(0,T; H)) : \|U\| \le \rho \ P\text{-a.s., and } U \text{ is adapted to } \{\mathcal{F}_t\}_{t \ge 0} \big\}. \qquad (3.1)$$
The abstract form of the controlled stochastic Burgers equation is given by (3.2). The optimal control problem associated with (3.2) consists of the minimization, over all controls $U \in \mathcal{U}^{0,T}_\rho$, of the cost functional $\mathcal{J}(0,T; x, U)$ defined in (3.3). We aim to find a $\widetilde{U} \in \mathcal{U}^{0,T}_\rho$ such that $\mathcal{J}(0,T; x, \widetilde{U}) = \inf_{U \in \mathcal{U}^{0,T}_\rho} \mathcal{J}(0,T; x, U)$. First, we state the existence and uniqueness of solutions of equation (3.2).
We use the dynamic programming approach to solve the optimal control problem, which involves the study of the value function defined as
$$V(t,x) := \inf_{U \in \mathcal{U}^{0,t}_\rho} \mathcal{J}(0,t; x, U),$$
which is formally a solution of the Hamilton-Jacobi-Bellman equation (3.4), where $\phi(x) = \|D_\xi x\|^2$ and $\mathcal{L}_x v$ is the integro-differential operator given below. Note that the Hamiltonian function can be evaluated explicitly: it is quadratic in $\|p\|$ for $\|p\| \le \rho$ and affine in $\|p\|$ for $\|p\| > \rho$ (3.6). The HJB equation (3.4) can now be written as (3.7). Moreover, if $v$ is a smooth solution of the HJB equation (3.7), the optimal control is given by the feedback formula
$$\widetilde{U}(t) = \mathcal{G}(D_x v(T-t, \widetilde{X}(t))), \qquad (3.8)$$
where $\widetilde{X}(t)$ is the optimal solution of the closed-loop equation (3.9). The pair $(\widetilde{X}, \widetilde{U})$ is the optimal pair of the control problem. In order to obtain such a smooth solution to (3.7), we use the transition semigroup $(S_t)_{t \ge 0}$, defined on $(\Omega, \mathcal{F}, \{\mathcal{F}_s\}_{s \ge t}, P)$, of the uncontrolled stochastic Burgers equation associated with (3.2); the semigroup is given by $(S_t f)(x) = \mathbb{E}[f(Y(t; x))]$, where $Y(\cdot\,; x)$ solves the uncontrolled equation (3.10). Under this transition semigroup, the value function $v$ solves equation (3.11) in mild form. As we require smoothness of the transition semigroup $(S_t)_{t \ge 0}$, we appeal to the Bismut-Elworthy-Li (BEL) formula for the càdlàg process, which in turn requires the differential of $Y(\cdot)$, corresponding to the system (3.10), with respect to the initial data; the second differential of $Y(\cdot)$ is defined analogously. All the equations and formulas stated above will be justified rigorously using approximation techniques.
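The explicit form of the Hamiltonian survives only in garbled form above. The following sketch verifies numerically the closed-form candidate obtained by minimizing a quadratic control cost $\frac{1}{2}\|U\|^2 + (U,p)$ over the ball $\|U\| \le \rho$; the quadratic cost is our assumption, patterned after the setting of [7], not a quotation of the paper's formula.

```python
import numpy as np

def hamiltonian_closed(p_norm, rho):
    """Candidate closed form for H(p) = inf over ||U|| <= rho of 0.5*||U||^2 + (U, p)."""
    if p_norm <= rho:
        return -0.5 * p_norm**2          # unconstrained minimizer U = -p is feasible
    return 0.5 * rho**2 - rho * p_norm   # minimizer on the boundary: U = -rho * p/||p||

def hamiltonian_grid(p_norm, rho, n=1_000_001):
    # For fixed ||U|| = t, the inner product (U, p) is minimized by U = -t * p/||p||,
    # so H(p) = min over t in [0, rho] of 0.5*t**2 - t*p_norm; check it on a fine grid.
    t = np.linspace(0.0, rho, n)
    return float(np.min(0.5 * t**2 - t * p_norm))
```

The corresponding minimizer is the feedback map $\mathcal{G}(p) = -\min(\|p\|, \rho)\, p / \|p\|$, which is consistent with the bound $\|\mathcal{G}(p)\| \le \rho$ quoted later in the proof of Theorem 6.2.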
3.1. Finite Dimensional Approximation. We introduce the approximation of the controlled Burgers equation (3.2). Let $\{e_1, \ldots, e_m\}$ be the first $m$ eigenvectors of $A$ and let $P_m$ be the orthogonal projection of $H$ onto the space spanned by these eigenvectors. For a fixed $m \in \mathbb{N}$, we define the approximation $B_m(\cdot)$ of the nonlinearity in the state equation, and the approximation of the nonlinearity in the cost functional (see Section 3 of [7]), in terms of the truncation $g_m(\cdot)$ defined in (3.14). For any $u \in H^1_0(0,1)$, an integration by parts yields the analogues of the identities above, where $\phi_m(\cdot)$ and $f_m(\cdot)$ are defined in (3.15). The approximated optimal control problem is the minimization, over all $U_m \in \mathcal{U}^{0,T}_\rho$, of the corresponding cost functional. The finite dimensional equation associated with (3.10) is given in (3.18), and we have the approximated HJB equation (3.21), where $\mathcal{L}_x v_m$ is the corresponding finite dimensional operator. The mild form of the HJB equation (3.21) is (3.23). The approximation $\eta^h_m(\cdot)$ of (3.12) in the direction $h$ satisfies (3.24), and the approximation $\zeta^h_m(\cdot)$ of the second differential of $Y_m(\cdot)$ satisfies (3.25). We now state the following BEL formula involving the Itô-Lévy process $Y_m(\cdot)$ of the system (3.18), whose proof is given in [18].

Proposition 3.1 (Bismut-Elworthy-Li Formula). For each $f \in C_b(P_m H)$, the semigroup $(S^m_t)_{t \ge 0}$ is Gâteaux differentiable, and its derivative in any direction $h \in P_m H$ is given by (3.26); moreover, the second differential is given by (3.27). We also use the interpolation result (3.28) frequently.
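To make the Galerkin construction concrete, here is a minimal numerical sketch for the deterministic, unforced counterpart of (3.18): project the viscous Burgers equation onto the first $m$ sine eigenfunctions of $A$ and step the resulting ODE system explicitly. The viscosity, mode count, grid, and time step are illustrative choices, not values from the paper, and the scheme omits the noise and control terms entirely.

```python
import numpy as np

def galerkin_burgers(m=16, N=64, nu=0.1, dt=1e-4, steps=1000):
    """Spectral Galerkin scheme for u_t = nu*u_xx - u*u_x on (0,1), u(0)=u(1)=0.

    State: coefficients a_k of u = sum_k a_k e_k with e_k(x) = sqrt(2)*sin(k*pi*x),
    the eigenfunctions of A = -d^2/dx^2 with Dirichlet boundary conditions.
    Returns the energy history 0.5*||u||^2, which should decay (no forcing).
    """
    k = np.arange(1, m + 1)
    lam = (k * np.pi) ** 2                                     # eigenvalues of A
    xj = np.arange(1, N) / N                                   # interior quadrature nodes
    E = np.sqrt(2) * np.sin(np.outer(xj, k) * np.pi)           # e_k(x_j)
    dE = np.sqrt(2) * (k * np.pi) * np.cos(np.outer(xj, k) * np.pi)  # e_k'(x_j)

    a = np.zeros(m)
    a[0] = 1.0 / np.sqrt(2)                                    # u0(x) = sin(pi*x)
    energies = [0.5 * np.sum(a**2)]
    for _ in range(steps):
        u = E @ a                                              # u on the grid
        ux = dE @ a                                            # u_x on the grid
        # Galerkin projection of the nonlinearity: (u*u_x, e_k) by exact sine quadrature
        nonlin = (E.T @ (u * ux)) / N
        a = a + dt * (-nu * lam * a - nonlin)                  # explicit Euler step
        energies.append(0.5 * np.sum(a**2))
    return np.array(energies)

energies = galerkin_burgers()
```

Since $(B_m(u), u) = 0$ for the Galerkin-projected nonlinearity (Remark 3.1), the energy decay here comes entirely from the viscous term, which the computed history reflects.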

A-priori Estimates of the Stochastic Burgers Equation
In this section, we derive various moment estimates, including energy estimates and exponential moment estimates, for the stochastic Burgers equation with Lévy noise. In particular, to estimate the first and second variations $\eta^h_m(\cdot)$ and $\zeta^h_m(\cdot)$ appearing in (3.26) and (3.27) (see Lemma 4.3), we need the following estimates.
Let $K(t) := I(t) + J(t)$, where $I(\cdot)$ and $J(\cdot)$ are the stochastic integrals driven by the Wiener and compensated Poisson terms, respectively. It is easy to show that $I(\cdot)$ and $J(\cdot)$ satisfy corresponding stochastic differential equations (see [9,18]).

Lemma 4.1. For any $T \ge 0$ and $p \ge 1$, the processes $I(\cdot)$ and $J(\cdot)$ satisfy the stated moment estimates.

Lemma 4.2. Under Assumption (A₂), for any $x \in H$ and $p \ge 1$, the solution of (3.20) satisfies the stated a-priori estimate.

Proof. We multiply (4.4) by $Z_m(\cdot)$ to get (4.5). Using the Gagliardo-Nirenberg inequality, we obtain (4.7). Substituting (4.7) into (4.5), we arrive at (4.8). By the embeddings $H^1(0,1) \subset\subset L^\infty(0,1) \hookrightarrow L^4(0,1) \hookrightarrow L^2(0,1)$, we also have (4.9). Integrating the resulting inequality from $0$ to $t$, an application of Grönwall's inequality yields (4.11) for all $t \in [0,T]$. Taking expectation in (4.11) and applying Lemma 4.1, one can complete the proof of the lemma.
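The Gagliardo-Nirenberg step used in the proof is, in one space dimension, a consequence of an elementary Agmon-type bound; here is a sketch for $u \in H^1_0(0,1)$ (the specific form (4.7) used in the paper may differ by constants):

```latex
% For u in H_0^1(0,1):  u(\xi)^2 = 2\int_0^{\xi} u(\eta)\,u'(\eta)\,d\eta
% \le 2\,\|u\|\,\|u\|_1, hence
\|u\|_{L^\infty(0,1)} \le \sqrt{2}\,\|u\|^{1/2}\,\|u\|_1^{1/2},
\qquad
\|u\|_{L^4(0,1)}^4 \le \|u\|_{L^\infty}^{2}\,\|u\|^{2} \le 2\,\|u\|^{3}\,\|u\|_1 .
```

The second bound is the form typically used to absorb the Burgers nonlinearity into the viscous term via Young's inequality.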
Proposition 4.1. Suppose that Assumption (A₂)-(ii)′ is satisfied. Then, for any $p \ge 2$, there exists a constant $C > 0$ such that the stated estimate holds for any $x \in H$.

Proof. For a given $m \in \mathbb{N}$ and for all $l > 0$, we define a sequence of stopping times. The Itô formula applied to $\|Y_m(\cdot)\|^p$ leads to (4.15), where we used the fact that $(B_m(Y_m(s)), Y_m(s)) = 0$ (see Remark 3.1). For $x, y \in P_m H$ and any $p \ge 2$, one obtains from the Taylor formula the inequality (4.16). Making use of this inequality, together with the Davis inequality and Young's inequality, we estimate $J_2$ as in (4.17). Applying the Burkholder-Davis-Gundy type inequalities (see [14]), with Young's inequality in the last step, we get (4.18); the remaining terms are estimated similarly. Moreover, it is easy to see that the trace term is bounded. Substituting (4.17)-(4.20) back into (4.15), we arrive at (4.21). Let $\Omega_l := \{\omega \in \Omega : \|Y_m(t)\| < l\}$.
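The Taylor-formula inequality invoked for the jump terms is, for $F(x) = \|x\|^p$, the standard second-order estimate; we record it here as a sketch, since the paper's display (4.16) is lost in extraction:

```latex
% Second-order Taylor bound for F(x) = \|x\|^p, p \ge 2, and x, y \in P_m H:
\big|\, \|x+y\|^p - \|x\|^p - p\,\|x\|^{p-2}(x, y) \,\big|
  \le C_p \big( \|x\|^{p-2}\|y\|^2 + \|y\|^p \big).
```

Applied with $y$ a jump increment and integrated against the Poisson random measure, this separates the compensated martingale part from the terms controlled by the intensity measure.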
Proposition 4.2. Let $0 < \varepsilon < \varepsilon_0$ be small enough and let Assumption 2.1 hold. Then, for any $x \in H$ and $t \in [0,T]$, the stated exponential moment estimate holds. The proof follows easily for the case of $x$ restricted to a ball of radius $R_0$ in $H$, by an application of the Itô formula to exponential functions. This exponential estimate also holds up to a stopping time without restricting the initial data $x$ to a ball in $H$; these details are addressed in a future paper.
Next we derive the estimates concerning the differentials (3.24) and (3.25) using the exponential moment estimate stated in Proposition 4.2.
Lemma 4.3. Suppose the conditions of Proposition 4.2 hold. For any $p \ge 2$ and $x, h \in P_m H$, there exists a constant $C > 0$ such that the stated estimates hold for any $t \in [0,T]$.

Proof. We multiply (3.24) by $p\,\|\eta^h_m(t)\|^{p-2}\, \eta^h_m(t)$ to obtain (4.27). Integrating by parts twice, noting that $g''_m(x) \le 2$ for any $x \in \mathbb{R}$, and using the Gagliardo-Nirenberg inequality, we get (4.28). The identity (4.27) can then be estimated by the Grönwall inequality as in (4.29). For $\varepsilon > 0$, Young's inequality gives (4.30), and thus (4.31) is immediate. Integrating by parts and again using $g''_m(x) \le 2$, $x \in \mathbb{R}$, we obtain a bound in terms of the $L^4(0,1)$-norm. Note that by (4.29) (with $p = 6$ and $p = 2$) and (4.30) (with weight $\varepsilon/8$), we get (4.32).

A calculation similar to (4.28) (with $p = 2$), together with (4.33) and an application of the Grönwall inequality, yields the stated bounds. By the definition of $g_m(x)$ given in (3.14), and using (4.43) in (4.42), we estimate the first integral with the help of the corresponding cubic inequality, the Sobolev embedding $H^{1/3}(0,1) \subset L^6(0,1)$, and the interpolation inequality (2.4); the resulting term tends to $0$ as $m \to \infty$.
Therefore, for any fixed $\varepsilon > 0$, we obtain the required smallness by the Chebyshev inequality and the Markov inequality. Then there exists a subsequence of $Y_m$ (still denoted by $Y_m$ for notational simplicity; see Theorem 17.3 of [15]) such that $Y_m \to Y$ almost surely in $D([0,T]; H) \cap L^2(0,T; V)$. Since the solution $Y(\cdot)$ is unique, the entire sequence $Y_m(\cdot)$ converges almost surely. This completes the proof.

Smoothing Properties of the Transition Semigroup
Next, we prove an estimate concerning the regularity of the semigroup $S^m_t f$; the proof is similar to [7]. To this end, we need the following estimate concerning the stochastic integral in the BEL formula. For notational simplicity, we suppress the dependence on $Q$, $T$, $p$ of the constants appearing in the rest of the calculations.

Lemma 5.1. Suppose the operator $Q$ satisfies Assumption (A₁). Then there exists a constant $C > 0$ such that the stated estimate holds.

Proof. Applying the Hölder inequality and the Burkholder-Davis-Gundy inequality (see Theorem 3.49 of [21]), we get the first bound. Since $Q$ satisfies (A₁), we obtain, again by the interpolation inequality (2.4) (with $\theta = 1 - \kappa$, $s_1 = 0$ and $s_2 = \frac{1}{2}$), Hölder's inequality and Lemma 4.3, the required bound. This completes the proof.
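The Burkholder-Davis-Gundy type bound for compensated Poisson integrals used here takes, in its standard form for $p \ge 2$ (see [14] and Theorem 3.49 of [21]), the following shape; the exact constants and form in the paper are lost to extraction, so this is recorded as the standard statement:

```latex
% Standard BDG-type bound for M(t) = \int_0^t\!\int_Z G(s,z)\,\widetilde N(ds,dz),\ p \ge 2:
\mathbb{E}\Big[\sup_{0 \le t \le T} \|M(t)\|^p\Big]
  \le C_p \bigg\{ \mathbb{E}\Big[\int_0^T\!\!\int_Z \|G(s,z)\|^2\,\mu(dz)\,ds\Big]^{p/2}
  + \mathbb{E}\int_0^T\!\!\int_Z \|G(s,z)\|^p\,\mu(dz)\,ds \bigg\}.
```

The first term on the right is the continuous-martingale-type contribution from the predictable quadratic variation; the second accounts for large jumps.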
In order to estimate the mild form of the HJB equation (3.23), we need the following estimate.

Lemma 5.2. The estimate (5.7) holds.

Proof. For $x, h \in P_m V$, we obtain the representation (5.8). Thus, by Hölder's inequality, Lemma 4.2, Proposition 4.1 and Lemma 4.3, we get the result for $\alpha = 0$. For the case $\alpha = 1$, a standard computation similar to (5.8) leads to the corresponding bound; $I_2$ is estimated by a computation similar to that for $I_1$. The remaining cases $\alpha \in (0,1)$ can be handled, as argued in Proposition 5.1, by using the interpolation result of Proposition 3.2.
Using the smoothing property of the transition semigroup proved in Proposition 5.1 and Lemma 5.2, we prove the regularity of solutions of the mild form (3.23) of the HJB equation (3.21).

Proposition 5.2. Let $\alpha \in (0,1)$ be fixed such that $(1+\alpha)(1+\kappa) < 2$ and let $\gamma < \varepsilon_0$. Suppose Assumptions (A₁)-(A₂) hold. Then, for any $\varepsilon > 0$, there exists a constant $C > 0$ such that (5.9) and (5.10) hold.

Proof. Note that the solution $v_m(t,x)$ of the HJB equation in mild form (3.23) is in fact the value function of the approximate control problem on $[0,T]$; the last estimate follows from Proposition 4.1. Hence we get (5.11). Moreover, considering the control problem on $[0,t]$, for any $t \in [0,T]$, one obtains the analogous bound. From (3.23), Proposition 5.1 and Lemma 5.2, we obtain (5.12). The final term in (5.12) can be estimated using the interpolation result of Proposition 3.2, Proposition 5.1 and (5.11), arguing as in Proposition 4.4 of [7], for any $\varepsilon < (\varepsilon_0 - \gamma)(1+\alpha)$. The estimate (5.10) follows from Proposition 4.5 of [7].
In order to prove the convergence of $v_m(\cdot,\cdot)$ in the mild form (3.23), we need more spatial regularity on $v_m(\cdot,\cdot)$; proving such regularity directly for the integral term, however, requires additional care. This leads to the following regularity result.
Proposition 5.3. Suppose the conditions of Proposition 5.2 are satisfied. Let $0 < \delta < \frac{1}{4}$ and $\gamma < \varepsilon_0$. Then there exist a constant $C > 0$ and $k(\delta)$ such that the stated supremum bound holds.

Proof. Using the exponential estimate of Proposition 4.2 and arguing as in Proposition 4.6 of [7], one can complete the proof.

Solvability of Optimal Control Problem
Now we prove the existence of a mild solution of the HJB equation (3.7) using the a-priori estimates of the semigroup and the energy estimates derived in the previous sections, the Arzelà-Ascoli theorem, and the compactness of $D(A^\delta)$ in $H$ for $0 < \delta < \frac{1}{4}$. In order to prove the boundedness of each term in the HJB equation, we need to bound the integro-differential operator with Lévy measure on a compact set. This demands the following assumption on the noise coefficient.

Assumption 6.1. (A₃) For any $x \in H$ and $0 < \theta < 1$, there exists a constant $C > 0$ such that the jump noise coefficient $G(\cdot,\cdot)$ satisfies the stated bound.

We are now ready to prove the existence of a mild solution of the HJB equation (3.7) in the space $C^1_\gamma(H)$ of smooth functions with exponential weights.

Theorem 6.1. Assume that Assumptions (A₁)-(A₃) hold. Then there exists a mild solution $v(t,\cdot) \in C^1_\gamma(H)$ of the HJB equation (3.7), for some $\gamma \in [0, \widetilde{\varepsilon}_0]$, where $\widetilde{\varepsilon}_0 = \frac{\varepsilon_0}{2}$.

Proof. We first prove the convergence of $u_m(\cdot,\cdot)$ defined in (5.13). By the definition of $u_m(\cdot,\cdot)$, we rewrite the HJB equation as (6.1), for any $x \in P_m H$ and $t \in [0,T]$. In order to apply the Arzelà-Ascoli theorem, we introduce compact sets $K_r$, for $r \in \mathbb{N}$, and the functions $V^r_m(\cdot,\cdot)$. We now prove the equicontinuity and boundedness of $V^r_m(\cdot,\cdot)$. From Proposition 5.2, we obtain (6.2); from (6.1), we obtain the corresponding uniform bound. Using the definition of $F(\cdot)$ and, for any $x \in K_r$, Assumption (A₃), the final term in (6.3) can be estimated using Taylor's theorem, so that $|D_t u^r_m(x,t)| \le C(\gamma, \varepsilon, r)$ (6.9). Using (6.2) and (6.9), and arguing as in Theorem 2.2 of [7], one can prove the equicontinuity of $V^r_m(\cdot,\cdot)$.
Moreover, by Proposition 5.3 and the Arzelà-Ascoli theorem, there exists a continuous function $V^r$ from $K_r \times [\tau_r, T]$ to $P_r H$ such that $V^r_m \to V^r$ along a subsequence. By the compactness of $D(A^\delta)$ in $H$ and Proposition 5.3, one obtains the pointwise strong limit $V(\cdot,\cdot)$, defined on $\bigcup_{r \in \mathbb{N}} P_r H \times (0,T]$, of the sequence $V^r(\cdot,\cdot)$. Moreover, $V(\cdot,\cdot)$ is continuous in the same topology and can be extended to $H \times (0,T]$. Using this strong convergence $V^r \to V$, the estimate (6.2), and Proposition 5.3, we conclude that $u(\cdot,\cdot)$ is differentiable with $D_x u(x,t) = V(x,t)$.
In order to prove the convergence of $v_m(\cdot,\cdot)$, we further need the following convergence result, whose proof is postponed to the end of this section.
since $Y_m(\cdot)$ converges to $Y(\cdot)$ strongly in $L^2(\Omega; L^2(0,T; V))$ and the second integral is bounded by the energy estimate in Proposition 4.1. This proves (6.10). In order to prove (6.11), we first write out the corresponding difference; the integral $I_4$ can be further estimated using Hölder's inequality. In these calculations, we also used Lemma 4.
An application of the convergence (4.26) and the energy estimates leads to the convergence of $\mathbb{E}\int_0^t |I_3(s)|\,ds$ to $0$ as $m \to \infty$. This completes the proof.

Now, using the solution $v_m \in C^1_\gamma(H)$, we justify the feedback control formula $\widetilde{U}_m(t) = \mathcal{G}(D_x v_m(T-t, \widetilde{X}_m(t)))$ and prove the convergence of $\widetilde{X}_m(\cdot)$ to $\widetilde{X}(\cdot)$. Moreover, we establish an identity satisfied by the cost functional $\mathcal{J}(0,T; x, U)$.

Theorem 6.2. Suppose the conditions of Theorem 6.1 are satisfied. Then, for any control $U \in \mathcal{U}^{0,T}_\rho$, the identity (6.12) holds, where $\mathcal{J}(0,T; x, U)$ is defined in (3.3), the function $\chi$ satisfies $\chi(a) = 0$ for $a \le 0$ and $\chi(a) = a^2$ for $a \ge 0$, and $X(\cdot)$ is the solution of (3.2). Moreover, the closed-loop equation (3.9) has an optimal pair $(\widetilde{X}, \widetilde{U})$.

In order to prove Theorem 6.2, we need the following results on the approximated cost functional.

Lemma 6.2. For $m \in \mathbb{N}$, let $X_m(\cdot)$ be the solution of (3.18) and $U_m = P_m U \in L^2(\Omega; L^2(0,T; P_m H))$.
For any $x \in P_m H$, the following identity holds for the approximated cost functional, where $\chi(\cdot)$ is the function defined in Theorem 6.2.
Proof. Let us apply the finite dimensional Itô formula to the process $v_m(t-s, X_m(s))$. Setting $t = T$, integrating from $0$ to $T$, and taking expectation, we get (6.14). Using the definition of the approximated cost functional $\mathcal{J}_m$ and rearranging the terms in (6.14), we arrive at (6.15). Using the definition of $F_m(\cdot)$, the final integral in (6.15) can be rewritten accordingly. This completes the proof.
Proof. It is clear from (4.35) and the remaining arguments in Proposition 4.3 that, in order to prove the almost sure convergence of $X_m(\cdot)$ in the given topology, it suffices to prove (6.16). Since $X_m \rightharpoonup X$ in $L^2(\Omega; L^2(0,T; V))$ and $U_m \to U$ in $L^2(\Omega; L^2(0,T; H))$, the convergence (6.16) follows.
Proof of Theorem 6.2. It can easily be shown that the approximated closed-loop equation has a unique solution $\widetilde{X}_m(\cdot)$. From the definition of $\mathcal{G}$ in (3.8), we know that $\|\mathcal{G}(p)\| \le \rho$ for all $p \in H$.
Since $\widetilde{X}(\cdot)$ is the solution corresponding to the control $\widetilde{U}(t) = \mathcal{G}(D_x v(T-t, \widetilde{X}(t)))$, from (6.12) we also have $v(T,x) = \mathcal{J}(0,T; x, \widetilde{U})$. From the above inequality, it is clear that $\widetilde{U}(\cdot)$ is an optimal control and $(\widetilde{X}, \widetilde{U})$ is an optimal pair. For more details in the case of continuous diffusions, one may refer to [27] (and also [13]); those arguments can be modified to the present case. Hence the dynamic programming principle takes the formal form
$$V(t,x) = \inf_{U} \mathbb{E}\Big[\int_0^{\tau} L(X_1(s), U_1(s))\,ds + V(\tau, X_1(\tau))\Big],$$
and the HJB equation can be formally obtained by applying the Itô formula to $V(\tau, X(\tau)) - V(t,x)$.

Again using (4.30) and then Proposition 4.2, together with (4.34), we can conclude the proof of the second part. The convergence (4.26) can be established by arguments similar to Lemma 3.3 of [7]. Finally, we prove the almost sure convergence of the solutions of (3.20).

Proposition 4.3. Suppose that Assumption (A₂)-(ii) is satisfied. Let $\{x_m\}_{m=1}^\infty$ be such that $x_m \to x$ strongly in $H$. Then $Y_m(\cdot)$, the solution of (3.20), converges to the unique solution $Y(\cdot)$ of (3.10) in $L^2(\Omega; L^\infty([0,T]; H)) \cap L^2(\Omega; L^2(0,T; V))$ having càdlàg paths, and almost surely in $D([0,T]; H) \cap L^2(0,T; V)$.

Proof. The existence and uniqueness of the solution of equation (3.20) is given in Theorem 3.1. We only need to prove the strong convergence of $Y_m(\cdot)$ to $Y(\cdot)$ in the given topology. Using the Itô formula and taking expectation, we get the required estimate.

Lemma 6.1. For any $x \in H$ and $t \in [0,T]$, we have
$$\int_0^t S^m_{t-s}\, \phi_m(P_m x)\,ds \to \int_0^t (S_{t-s}\phi)(x)\,ds, \qquad (6.10)$$
together with the corresponding convergence of the derivatives,
$$D_x \int_0^t S^m_{t-s}\, \phi_m(P_m x)\,ds \to D_x \int_0^t (S_{t-s}\phi)(x)\,ds. \qquad (6.11)$$
By the convergence of the sequence $u_m(\cdot,\cdot)$ together with Lemma 6.1, we obtain that $v_{m_k}(t,x) \to v(t,x)$ and
$$v(t,x) = u(t,x) + \int_0^t (S_{t-s}\phi)(x)\,ds,$$
which is continuous on $[0,T] \times H$ with $v(0,x) = f(x)$. Moreover, $D_x v_{m_k}(t,x) \to D_x v(t,x)$ for any $(t,x) \in (0,T] \times H$. The proof that $v(\cdot,\cdot)$ is indeed the mild solution of (3.7) satisfying the integral identity (3.11) can be established by arguing as in Theorem 2.2 of [7].

Proof of Lemma 6.1. Let $x \in H$ and $t \in [0,T]$; then we consider the difference $\int_0^t S^m_{t-s}\, \phi_m(P_m x)\,ds - \int_0^t (S_{t-s}\phi)(x)\,ds$.