Open Access

The existence and uniqueness of the solution for nonlinear elliptic equations in Hilbert spaces

Journal of Inequalities and Applications 2015, 2015:250

https://doi.org/10.1186/s13660-015-0764-7

Received: 22 September 2014

Accepted: 20 July 2015

Published: 14 August 2015

Abstract

The existence and uniqueness of the mild solution for semilinear elliptic partial differential equations associated with stochastic delay evolution equations are obtained by means of infinite horizon backward stochastic differential equations. Applications to optimal control for an infinite horizon are also given.

Keywords

Kolmogorov equations; stochastic delay evolution equations; infinite horizon backward stochastic differential equations; optimal control for an infinite horizon

MSC

93E20 60H30 60H15

1 Introduction

In this article we consider infinite horizon stochastic delay evolution equations of the form
$$ \textstyle\begin{cases} dX(s) = AX(s)\, ds + F(X_{s})\, ds +G(X_{s})\, dW(s), \quad s\geq0, \\ X_{0}=x, \end{cases} $$
(1.1)
where
$$X_{s}(\theta)=X(s+\theta), \quad \theta\in[-\tau,0]\quad \mbox{and} \quad x\in C\bigl([-\tau,0],H\bigr). $$
W is a cylindrical Wiener process in a Hilbert space Ξ. A is the generator of a \(C_{0}\) semigroup in another Hilbert space H, and the coefficients F and G are assumed to satisfy Lipschitz conditions with respect to the appropriate norms.
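Although the paper works in an infinite-dimensional Hilbert space, the structure of (1.1) can be illustrated by a scalar toy model: a minimal Euler-Maruyama sketch, assuming \(H=R\), \(A=a<0\), and hypothetical Lipschitz coefficients F, G acting on the segment \(X_{s}\). None of these concrete choices come from the paper.

```python
import math, random

random.seed(0)

# Toy scalar delay SDE: dX(s) = a X(s) ds + F(X_s) ds + G(X_s) dW(s),
# where the segment X_s(theta) = X(s + theta), theta in [-tau, 0].
# All concrete values below (a, tau, F, G) are illustrative assumptions.
a, tau, dt, T = -1.0, 0.5, 0.01, 5.0
n_delay = int(tau / dt)          # grid points spanning one delay length

def F(segment):                   # Lipschitz drift acting on the segment
    return 0.5 * segment[0]       # uses the delayed value X(s - tau)

def G(segment):                   # Lipschitz diffusion acting on the segment
    return 0.2 * math.sin(segment[-1])  # uses the current value X(s)

# initial datum x in C([-tau, 0], R): constant segment equal to 1
path = [1.0] * (n_delay + 1)

steps = int(T / dt)
for _ in range(steps):
    seg = path[-(n_delay + 1):]   # current segment X_s on the grid
    dW = random.gauss(0.0, math.sqrt(dt))
    x_new = path[-1] + (a * path[-1] + F(seg)) * dt + G(seg) * dW
    path.append(x_new)

print(len(path), path[-1])
```

The segment mechanism (passing the last \(n_{\mathrm{delay}}+1\) grid values to F and G) is the discrete counterpart of evaluating the coefficients at \(X_{s}\) rather than at \(X(s)\).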
Our approach to optimal control problems for stochastic delay evolution equations is based on backward stochastic differential equations (BSDEs), which were first introduced by Pardoux and Peng [1]; see [2–4] for general references. To be more precise, we consider the following forward-backward system:
$$ \textstyle\begin{cases} dX(s) = AX(s)\, ds + F(X_{s})\, ds +G(X_{s})\, dW(s), \quad s\geq0, \\ X_{0}=x, \\ dY(s) =\lambda Y(s)\, ds-\psi(X_{s},Y(s),Z(s))\, ds+Z(s)\, dW(s),\quad s\geq0. \end{cases} $$
(1.2)
If we assume that ψ satisfies suitable conditions, then there exists a unique continuous adapted solution of (1.2) for λ large enough, denoted by \((X(\cdot ,x),Y(\cdot,x),Z(\cdot,x))\) in \(H\times R\times \Xi\). We define a deterministic function \(v: {\mathcal{C}}\rightarrow R\) by \(v(x)=Y(0,x)\), where \({\mathcal{C}}\) denotes the space of continuous functions from \([-\tau,0]\) to H, and it turns out that v is the unique mild solution of the following generalized nonlinear elliptic partial differential equation:
$$ {\mathcal{L}}\bigl[v(\cdot)\bigr](x)= \lambda v(x)+\psi\bigl(x,v(x),\overline { \nabla_{0}v(x)}G(x)\bigr),\quad x\in{\mathcal{C}}, x(0)\in D(A), $$
(1.3)
where
$$\begin{aligned} {\mathcal{L}}[\phi](x) =&{\mathcal {S}}(\phi) (x)+\bigl\langle Ax(0)1_{0}+F(x)1_{0},\overline{\nabla_{x}\phi(x)} \bigr\rangle \\ &{}+\frac{1}{2}\sum^{\infty}_{i=1} \overline{\nabla_{x}^{2}\phi(x)} \bigl(G(x)e_{i}1_{0}, G(x)e_{i}1_{0}\bigr), \end{aligned}$$
(1.4)
\(\{e_{i}\}_{i=1}^{\infty}\) denotes a basis of Ξ, \(1_{0}\) denotes the characteristic function of \(\{0\}\), and \(\overline {\nabla_{x}\phi(x)}\), \(\overline{\nabla_{x}^{2}\phi(x)}\) denote the extensions of \(\nabla_{x}\phi(x)\), \(\nabla_{x}^{2}\phi(x)\), respectively (see Lemma 2.1 and Lemma 2.2). \(\overline{\nabla_{0}v(x)}\) is defined by \(\overline{\nabla_{0}v(x)}G(x)=\overline{\nabla_{x}v(x)}(G(x)1_{0})\). In this paper, we call (1.3) the nonlinear stationary Kolmogorov equation.
We consider a controlled equation of the form
$$ \textstyle\begin{cases} dX^{u}(s)=AX^{u}(s)\, ds + F(X^{u}_{s})\, ds +G(X^{u}_{s})R(X^{u}_{s},u(s))\, ds+G(X^{u}_{s})\, dW(s), \quad s\geq0, \\ X^{u}_{0}=x. \end{cases} $$
(1.5)
The control process u takes values in a measurable space \((U,\mathcal{U})\). The aim is to minimize an infinite horizon cost functional of the form
$$ J(u)=E\int_{0}^{\infty}e^{-\lambda s}g \bigl(X^{u}_{s},u(s)\bigr)\, ds, $$
(1.6)
over all admissible controls, where g is a given real function on \({\mathcal{C}}\times U\). We define the Hamiltonian function relative to the above problem: for all \(x\in\mathcal{C}\), \(z\in \Xi\),
$$ \psi(x,z)=\inf\bigl\{ g(x,u)+zR(x,u): u\in U\bigr\} . $$
(1.7)
Then, under suitable conditions, we eventually show that v is the value function of the control problem.
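When the control set U is finite, the infimum in (1.7) can be evaluated pointwise by direct enumeration; the minimizing u is what later supplies the feedback law. The functions g and R and the set U below are hypothetical, chosen only for illustration.

```python
# Pointwise evaluation of the Hamiltonian (1.7) for a finite control set:
#   psi(x, z) = min over u in U of g(x, u) + z * R(x, u).
# g, R, and U are illustrative assumptions, not taken from the paper.

def hamiltonian(x, z, U, g, R):
    return min(g(x, u) + z * R(x, u) for u in U)

# toy data: quadratic running cost and linear control action
U = [-1.0, 0.0, 1.0]
g = lambda x, u: x**2 + u**2
R = lambda x, u: u

print(hamiltonian(1.0, 2.0, U, g, R))   # prints 0.0 (attained at u = -1)
```

For general U the same infimum would be computed by a numerical minimizer; only the enumeration step changes.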

Stochastic optimal control problems have been studied by many authors. In [5–9], the authors proved the existence of a direct (classical or mild) solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation, from which the optimal feedback law is obtained. Gozzi [8, 9] showed that there exists a unique mild solution of the associated HJB equation when the diffusion term satisfies only weak nondegeneracy conditions.

Viscosity solution methods have been successfully applied to stochastic optimal control problems (see [10–15] and the references therein). Lions [13] proved that there exists a unique viscosity solution for a general class of fully nonlinear second order equations in an infinite dimensional Hilbert space. In [15], the existence and uniqueness of the viscosity solution for general unbounded second order partial differential equations were shown. Optimal control problems for stochastic differential equations with delay are considered in [10, 14, 16, 17]. Chang et al. [17] found that the value function for an optimal control problem of a general stochastic differential equation with bounded memory is the unique viscosity solution of the HJB equation.

BSDEs are known to be useful tools in the study of stochastic optimal control problems; see, for example, [18, 19]. Optimal control problems for stochastic systems in infinite dimensions have been considered in [20–25]. Using Malliavin calculus and BSDEs, Fuhrman and Tessitore [22] showed that there exists a unique mild solution of nonlinear Kolmogorov equations and found that the mild solution coincides with the value function of the control problem. In Fuhrman and Tessitore [23], the existence and uniqueness of the mild solution for semilinear elliptic differential equations in Hilbert spaces were obtained by means of infinite horizon BSDEs in Hilbert spaces and Malliavin calculus; moreover, the existence of an optimal control was proved via the feedback law. Fuhrman et al. [21] considered optimal control problems for stochastic differential equations with delay and the associated Kolmogorov equations; the existence and uniqueness of the mild solution for the Kolmogorov equations was proved, and the existence of an optimal control was obtained. In Fuhrman et al. [20], the optimal ergodic control of a Banach-space-valued stochastic evolution equation was studied, and the optimal ergodic control was obtained by means of ergodic BSDEs.

The main result of this paper is the proof of existence and uniqueness of the mild solution of (1.3) and (1.4). Several authors have considered the Kolmogorov equations associated with stochastic evolution equations (see [22, 23]) and with stochastic delay differential equations (see [21]). However, as far as we know, few authors have concentrated on (1.3) and (1.4); see, for example, [26] and [27] for the corresponding nonlinear parabolic partial differential equations. In this paper we extend the results of [23] to stochastic delay evolution equations in Hilbert spaces. Thanks to Lemma 2.1 and Lemma 2.2, we can consider the optimal control problem for (1.1) and the associated nonlinear Kolmogorov equations (1.3) and (1.4).

The plan of the paper is as follows. In the next section we introduce the basic notation and two basic lemmas. Section 3 is devoted to proving the regularity of the mild solution of the infinite horizon stochastic delay evolution equation. The forward-backward system is considered and (4.10) is proved in Section 4. In Section 5, the mild solution of the Kolmogorov equation (1.3) is studied. Finally, applications to optimal control for an infinite horizon are presented in Section 6.

2 Preliminaries

We list some notations that are used in this article. We use the symbols Ξ, K, and H to denote real separable Hilbert spaces, with scalar products \((\cdot,\cdot)_{\Xi}\), \((\cdot,\cdot)_{K}\), and \((\cdot,\cdot)_{H}\), respectively. Let \(|\cdot|\) denote the norm in various spaces, with a subscript if necessary. Let \({\mathcal {C}}=C([-\tau,0],H)\) denote the space of continuous functions from \([-\tau,0]\) to H, endowed with the usual norm \(|f|_{C}=\sup_{{\theta\in[-\tau,0]}}|f(\theta)|_{H}\). \(L(\Xi,H)\) denotes the space of all bounded linear operators from Ξ into H; the subspace of Hilbert-Schmidt operators, with the Hilbert-Schmidt norm, is denoted by \(L_{2}(\Xi,H)\).

Let \((\Omega,{\mathcal{F}},P)\) be a complete probability space with a filtration \(\{{\mathcal{F}}_{t}\}_{t\geq0}\) satisfying the usual conditions, i.e., \(\{{\mathcal{F}}_{t}\}_{t\geq0}\) is a right continuous increasing family of sub-σ-algebras of \(\mathcal{F}\), and \({\mathcal{F}}_{0}\) contains all P-null sets of \(\mathcal{F}\). A cylindrical Wiener process defined on \((\Omega,{\mathcal{F}},P)\), with values in a Hilbert space Ξ, is a family \(\{W(t),t\geq0\}\) of linear mappings \(\Xi\rightarrow L^{2}(\Omega)\) such that for every \(\xi, \eta\in\Xi\), \(\{W(t)\xi,t\geq0\}\) is a real Wiener process and \({E}(W(t)\xi\cdot W(t)\eta)=(\xi,\eta)_{\Xi}t\). In the following, \(\{W(t),t\geq0\}\) is a cylindrical Wiener process adapted to the filtration \(\{{\mathcal{F}}_{t}\}_{t\geq0}\).

Now let us define several classes of stochastic processes with values in a Banach space F:
  • \(L^{p}_{{\mathcal{P}}}(\Omega; L^{q}_{\beta}(F))\), defined for \(\beta\in R\) and \(p,q\in[1,\infty)\), denotes the space of equivalence classes of processes \(\{Y(s), s\geq 0\}\), with values in F, such that the norm
    $$|Y|^{p}=E\biggl(\int^{\infty}_{0}e^{q\beta s} \bigl\vert Y(s)\bigr\vert ^{q}\, ds\biggr)^{\frac{p}{q}} $$
    is finite and Y admits a predictable version.
  • \({\mathcal{K}}^{p}_{\beta}\) denotes the space \(L^{p}_{{\mathcal {P}}}(\Omega; L^{2}_{\beta}(F))\times L^{p}_{{\mathcal{P}}}(\Omega; L^{2}_{\beta}(L_{2}(\Xi,F)))\). The norm of an element \((Y,Z)\in{\mathcal{K}}^{p}_{\beta}\) is \(|(Y,Z)|=|Y|+|Z|\).

  • \(L^{p}_{{\mathcal{P}}}(\Omega;C([0,T];F))\), defined for \(T>0\) and \(p\in[1,\infty)\), denotes the space of predictable processes \(\{Y(s), s\in[0,T]\}\) with continuous paths in F, such that the norm
    $$|Y|^{p}=E\sup_{s\in[0,T]}\bigl\vert Y(s)\bigr\vert ^{p} $$
    is finite. Elements of \(L^{p}_{{\mathcal {P}}}(\Omega;C([0,T];F))\) are identified up to indistinguishability.
  • \(L^{q}_{{\mathcal{P}}}(\Omega;C_{\eta}(F))\), defined for \(\eta\in R\) and \(q\in[1,\infty)\), denotes the space of predictable processes \(\{Y(s), s\geq 0\}\) with continuous paths in F, such that the norm
    $$|Y|^{q}=E\sup_{s\geq0}e^{\eta qs}\bigl\vert Y(s)\bigr\vert ^{q} $$
    is finite. Elements of \(L^{q}_{{\mathcal {P}}}(\Omega;C_{\eta}(F))\) are identified up to indistinguishability.
  • Finally, for \(\eta\in R\) and \(q\in[1,\infty)\), we define \({\mathcal{H}}^{q}_{\eta}\) as the space \(L^{q}_{{\mathcal{P}}}(\Omega; L^{q}_{\eta}(F))\cap L^{q}_{{\mathcal {P}}}(\Omega;C_{\eta}(F))\), endowed with the norm
    $$|Y|_{{\mathcal{H}}^{q}_{\eta}}=|Y|_{L^{q}_{{\mathcal{P}}}(\Omega; L^{q}_{\eta}(F))}+|Y|_{L^{q}_{{\mathcal {P}}}(\Omega;C_{\eta}(F))}. $$
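For a single deterministic path, so that the expectation E is trivial, the two weighted norms entering \({\mathcal{H}}^{q}_{\eta}\) can be approximated on a grid. The path \(Y(s)=e^{-s}\) and the parameters \(q=2\), \(\eta=-0.5\) below are illustrative assumptions.

```python
import math

# Discretized weighted norms for the deterministic path Y(s) = e^{-s}.
# The grid, horizon, and the choices q = 2, eta = -0.5 are illustrative.
q, eta, dt, T = 2, -0.5, 0.001, 40.0
grid = [i * dt for i in range(int(T / dt) + 1)]
Y = [math.exp(-s) for s in grid]

# |Y|^q in L^q_eta: integral of e^{q*eta*s} |Y(s)|^q ds (left Riemann sum)
lq_norm_q = sum(math.exp(q * eta * s) * abs(y)**q * dt for s, y in zip(grid, Y))

# |Y|^q in C_eta: sup of e^{q*eta*s} |Y(s)|^q
c_norm_q = max(math.exp(q * eta * s) * abs(y)**q for s, y in zip(grid, Y))

# closed form of the integral: int_0^inf e^{(q*eta - q)s} ds = 1/(q - q*eta) = 1/3
print(lq_norm_q, c_norm_q)
```

The finiteness of both quantities for this decaying path mirrors the role of the negative weight η chosen later in Theorem 3.4.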

We say \(f\in C^{1}({\mathcal{C}};H)\) if f is continuous, Fréchet differentiable and \(\nabla_{x}f:{\mathcal{C}}\rightarrow L({\mathcal{C}},H)\) is continuous.

We say \(g\in{\mathcal{G}}^{1}({\mathcal{C}};H)\) if g is continuous, Gâteaux differentiable with respect to x on \({\mathcal{C}}\) and \(\nabla_{x}g:{\mathcal{C}}\rightarrow L({\mathcal{C}},H)\) is strongly continuous.

Let \(F_{c}\) be the vector space of all functions of the form \(z1_{[a,c)}\), where \(z\in H\), \(c\in(a,b]\), and \(1_{[a,c)}:[a,b]\rightarrow R\) is the characteristic function of \([a,c)\). It is clear that \(C([a,b],H)\cap F_{c}=\{0\}\). Form the direct sum \(C([a,b],H)\oplus F_{c}\) and endow it with the complete norm
$$|y+z1_{[a,c)}|=\sup_{s\in[a,b]}\bigl\vert y(s)\bigr\vert +|z|, \quad y\in C\bigl([a,b],H\bigr), z\in H. $$

Now let us give two basic lemmas that will be used in the following sections (see Lemma 2.1 and Lemma 2.2 in [26]).

We say that \({f}:C([a,b],H)\oplus F_{c}\rightarrow K\) satisfies (V) if the following holds: whenever \(\{x^{n}\}_{n\geq1}\) is a bounded sequence in \(C([a,b],H)\) and \(y+z1_{[a,c)}\in C([a,b],H)\oplus F_{c}\) are such that \(x^{n}(s)\rightarrow y(s)+z1_{[a,c)}(s)\) as \(n\rightarrow \infty\) for all \(s\in[a,b]\), and \(\sup_{s\in[a,b]}|(x^{n}(s)-y(s),h_{i})|\leq |(z,h_{i})|\) for all \(i\in N\), where \(\{h_{i}\}_{i\geq1}\) is a basis of H, then \(f(x^{n})\rightarrow f(y+z1_{[a,c)})\) as \(n\rightarrow\infty\).

Lemma 2.1

Let \(f\in L(C([a,b];H);K)\). Then, for every \(c\in (a,b]\), f has a unique continuous linear extension \(\overline{f}:C([a,b],H)\oplus F_{c}\rightarrow K\) satisfying (V). Moreover, the extension map \(e:L(C([a,b],H),K)\rightarrow L(C([a,b],H)\oplus F_{c},K)\), \(f\rightarrow\overline{f}\), is a linear isometry.

We say that \({f}:[C([a,b],H)\oplus F_{c}]\times[C([a,b],H)\oplus F_{c}]\rightarrow R\) satisfies (W) if the following holds: whenever \(\{x^{n}\}_{n\geq1}\), \(\{y^{n}\}_{n\geq1}\) are bounded sequences in \(C([a,b],H)\) and \(x+z_{1}1_{[a,c)}, y+z_{2}1_{[a,c)}\in C([a,b],H)\oplus F_{c}\) are such that \(x^{n}(s)\rightarrow x(s)+z_{1}1_{[a,c)}(s)\), \(y^{n}(s)\rightarrow y(s)+z_{2}1_{[a,c)}(s)\) as \(n\rightarrow \infty\) for all \(s\in[a,b]\), and \(\sup_{s\in[a,b]}|(x^{n}(s)-x(s),h_{i})|\leq |(z_{1},h_{i})|\), \(\sup_{s\in[a,b]}|(y^{n}(s)-y(s),h_{i})|\leq |(z_{2},h_{i})|\) for all \(i\in N\), then \(f(x^{n},y^{n})\rightarrow f(x+z_{1}1_{[a,c)},y+z_{2}1_{[a,c)})\) as \(n\rightarrow\infty\).

Lemma 2.2

Let \(\beta:C([a,b],H)\times C([a,b];H)\rightarrow R\) be a continuous bilinear map. Then, for every \(c\in(a,b]\), β has a unique continuous bilinear extension \(\overline{\beta}:[C([a,b],H)\oplus F_{c}]\times [C([a,b],H)\oplus F_{c}]\rightarrow R\) satisfying (W).

3 The forward equation

In this section we consider the system of stochastic delay evolution equations:
$$ \textstyle\begin{cases} dX(s)= AX(s)\, ds + F(X_{s})\, ds +G(X_{s})\, dW(s),\quad s\in [0,\infty), \\ X_{0}=x\in{\mathcal{C}}. \end{cases} $$
(3.1)

We make the following assumptions.

Hypothesis 3.1

  1. (i)

    The operator A is the generator of a strongly continuous semigroup \(\{e^{tA}, t\geq0\}\) of bounded linear operators in the Hilbert space H. We denote by M and ω two constants such that \(|e^{tA}|\leq Me^{\omega t}\), for \(t\geq0\).

     
  2. (ii)
    The mapping \(F: {\mathcal{C}}\rightarrow H\) is measurable and satisfies, for some constant \(L>0\),
    $$ \bigl\vert F(x)\bigr\vert \leq L \bigl(1 + \vert x\vert _{C} \bigr), \qquad \bigl\vert F(x)-F(y)\bigr\vert \leq L|x-y|_{C}, \quad x,y\in \mathcal{C}. $$
     
  3. (iii)
    The mapping \(G: {\mathcal{C}}\rightarrow L_{2}(\Xi,H)\) is measurable and satisfies, for every \(x,y\in\mathcal{C}\),
    $$ \bigl\vert G(x)\bigr\vert _{L_{2}(\Xi,H)}\leq L\bigl(1+\vert x\vert _{C}\bigr),\qquad \bigl\vert G(x)-G(y)\bigr\vert _{L_{2}(\Xi,H)} \leq L|x-y|_{C} $$
    (3.2)
    for some constant \(L>0\).
     
  4. (iv)

    \(F(\cdot)\in C^{1}({\mathcal{C}},H)\), \(G(\cdot)\in C^{1}({\mathcal{C}},L_{2}(\Xi,H))\).

     
We say that X is a mild solution of equation (3.1) if it is a continuous, \(\{{\mathcal{F}}_{t}\}_{t\geq0}\)-predictable process with values in H, and it satisfies, P-a.s.,
$$ \textstyle\begin{cases} X(s)=e^{sA}x(0)+\int_{0}^{s}{e^{(s-\sigma)A}}F(X_{\sigma})\, d\sigma+\int_{0}^{s}{e^{(s-\sigma)A}}G(X_{\sigma})\, dW(\sigma), \quad s\in[0,\infty), \\ X_{0}=x\in{\mathcal{C}}. \end{cases} $$
(3.3)
We define \(G^{N}:{\mathcal{C}}\rightarrow L_{2}(\Xi;H)\) by
$$G^{N}(x)=\sum^{N}_{i=1} \bigl(G(x),e_{i}\bigr)\otimes e_{i}, \quad x\in{ \mathcal{C}}, $$
where \(\{e_{i}\}_{i\geq1}\) denotes a basis of Ξ. We denote by \(X^{N}(\cdot,x)\) the solution of (3.3) with respect to \(G^{N}\).
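In coordinates, the truncation \(G^{N}\) simply keeps the components of \(G(x)\) along the first N basis vectors of Ξ: if \(G(x)\) is represented by a matrix whose i-th column is \(G(x)e_{i}\) in a basis of H, the truncation zeroes every column past the N-th. The concrete matrix below is a hypothetical example.

```python
# Coordinate picture of G^N(x) = sum_{i <= N} (G(x), e_i) ⊗ e_i:
# keep the first N columns of the matrix of G(x), zero the rest.
# The matrix G_mat is an illustrative assumption.

def truncate(G_matrix, N):
    rows = len(G_matrix)
    cols = len(G_matrix[0])
    return [[G_matrix[r][c] if c < N else 0.0 for c in range(cols)]
            for r in range(rows)]

G_mat = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0]]
print(truncate(G_mat, 2))   # prints [[1.0, 2.0, 0.0], [4.0, 5.0, 0.0]]
```

For a Hilbert-Schmidt operator the discarded columns have square-summable norms, which is why \(G^{N}\rightarrow G\) in \(L_{2}(\Xi;H)\) as \(N\rightarrow\infty\).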

We first recall two well-known results for (3.1) on a bounded interval.

Theorem 3.2

(Theorem 3.2 in [26])

Assume that Hypothesis 3.1 holds. Then for all \(q\in[2,\infty)\) and \(T>0\) there exists a unique process \(X\in L^{q}_{\mathcal{P}}(\Omega,C([0,T];H))\), a mild solution of (3.1). Moreover,
$$ E\sup_{s\in[0,T]}\bigl\vert X(s)\bigr\vert ^{q}\leq C \bigl(1+\vert x\vert _{C}\bigr)^{q} $$
(3.4)
for some constant C depending only on q, T, τ, L, ω, and M.

Theorem 3.3

(Theorem 4.2 in [26])

Assume that Hypothesis 3.1 holds and let \(u: {\mathcal{C}}\rightarrow R\) be a Borel measurable function such that \(u(\cdot)\in{C}^{1}({\mathcal{C}},R)\) and
$$\bigl\vert u(x)\bigr\vert +\bigl\vert \nabla_{x}u(x)\bigr\vert \leq C\bigl(1+\vert x\vert \bigr)^{m} $$
for some \(C>0\), \(m\geq0\), and for every \(x\in{\mathcal{C}}\). Then the joint quadratic variation on \([0,T']\) is given by
$$\bigl\langle u\bigl(X^{N}_{\cdot}\bigr), We_{i} \bigr\rangle _{[0,T']}=\int^{T'}_{0} \overline{\nabla_{0}u\bigl(X^{N}_{s}\bigr)}G \bigl(X^{N}_{s}\bigr)e_{i}\, ds $$
for every \(x\in{\mathcal{C}}\), \(i=1,2,\ldots,N\), and \(0\leq T'< T\).

By Theorem 3.2 and the arbitrariness of T in its statement, the solution is defined for every \(s\geq 0\). To stress the dependence on the initial datum, we denote the solution by \(X(\cdot,x)\). We have the following result.

Theorem 3.4

Assume that Hypothesis 3.1(i)-(iii) holds and let the process \(X(\cdot,x)\) be the mild solution of (3.1) with initial value \(x\in {\mathcal{C}}\). Then, for every \(q\in[1,\infty)\), there exists a constant \(\eta(q)\) such that the process \(X_{\cdot}(x)\in{\mathcal {H}}^{q}_{\eta(q)}\). Moreover, for a suitable constant \(C>0\), we have
$$ E\sup_{s\geq 0}e^{\eta(q) qs}|X_{s}|_{C}^{q}+E \int_{0}^{\infty}e^{\eta(q) qs}|X_{s}|_{C}^{q} \, ds \leq C\bigl(1+\vert x\vert _{C}\bigr)^{q}, $$
(3.5)
where the constants \(\eta(q)\) and C depend only on q, τ, L, ω, and M.
If we assume that Hypothesis 3.1(iv) also holds true, then the map \(x\rightarrow X_{\cdot}(x)\) belongs to \(C ^{1}( {\mathcal{C}},{\mathcal {H}}^{q}_{\eta(q)})\) and for every \(h\in{\mathcal{C}}\), the process \(\nabla_{x}X_{s}(x)h\), \(s\in[0,+\infty)\) solves, P-a.s., the following equation:
$$ \textstyle\begin{cases} \nabla_{x}X_{s}(x)h(\theta)=e^{(s+\theta)A}h(0)+\int_{0}^{s+\theta }{e^{(s+\theta-\sigma)A}}\nabla_{x} F(X_{\sigma}(x))\nabla_{x}X_{\sigma}(x)h\, d\sigma \\ \hphantom{\nabla_{x}X_{s}(x)h(\theta)={}}{}+\int_{0}^{s+\theta}{e^{(s+\theta-\sigma)A}}\nabla_{x} G(X_{\sigma}(x))\nabla_{x}X_{\sigma}(x)h\, dW(\sigma) , \quad s+\theta\geq0, \\ \nabla_{x}X_{s}(x)h(\theta)=h((s+\theta)\vee-\tau), \quad s+\theta \in[-\tau,0). \end{cases} $$
(3.6)
Moreover, \(|\nabla_{x}X_{s}(x)h|_{{\mathcal{H}}^{q}_{\eta(q)}}\leq C|h|_{C}\) for a suitable constant \(C>0\).

Proof

We define a mapping Φ from \({\mathcal{H}}^{q}_{\eta}\times \mathcal{C}\) to \({\mathcal{H}}^{q}_{\eta}\) by the formula
$$\begin{aligned}& \Phi(X_{\cdot},x)_{s}(l)=e^{(s+l)A}x(0)+\int _{0}^{s+l}{e^{(s+l-\sigma )A}}F(X_{\sigma})\, d\sigma \\& \hphantom{\Phi(X_{\cdot},x)_{s}(l)={}}{}+\int_{0}^{s+l}{e^{(s+l-\sigma)A}}G(X_{\sigma}) \, dW(\sigma) , \quad s\in[0,\infty), l\in[-\tau,0], s+l\geq0, \\& \Phi(X_{\cdot},x)_{s}(l)=x(s+l), \quad s\in[0,\infty), l \in[-\tau ,0], s+l< 0. \end{aligned}$$
We will prove that, provided η is suitably chosen, \(\Phi(\cdot ,x)\) is a contraction in \({\mathcal{H}}^{q}_{\eta}\), uniformly in x, or in other words, there exists \(c<1\) such that for every \(x\in\mathcal{C}\),
$$ \bigl\vert \Phi\bigl(X^{1}_{\cdot},x\bigr)-\Phi \bigl(X^{2}_{\cdot},x\bigr)\bigr\vert _{{\mathcal {H}}^{q}_{\eta}}\leq c \bigl\vert X^{1}_{\cdot}-X^{2}_{\cdot}\bigr\vert _{{\mathcal {H}}^{q}_{\eta}}, \quad X^{1}_{\cdot}, X^{2}_{\cdot}\in{{\mathcal {H}}^{q}_{\eta}}. $$
(3.7)
For simplicity, we treat only the case \(F=0\), the general case being handled in a similar way. We use the so-called factorization method; see [28], Theorem 5.2.5. Let us take \(q>1\) and \(\alpha\in(0,1)\) such that
$$\frac{1}{q}< \alpha< \frac{1}{2} \quad \mbox{and let}\quad c^{-1}_{\alpha}=\int_{\sigma}^{s}(s-r)^{\alpha-1}(r- \sigma )^{-\alpha}\, dr, $$
by the stochastic Fubini theorem,
$$\begin{aligned}& \Phi(X_{\cdot},x)_{s}(l) = e^{(s+l)A}x(0) \\& \hphantom{\Phi(X_{\cdot},x)_{s}(l) ={}}{}+c_{\alpha}\int_{0}^{s+l} \int_{\sigma}^{s+l}(s+l-r)^{\alpha-1}(r- \sigma)^{-\alpha} {e^{(s+l-r)A}} {e^{(r-\sigma)A}}\, dr\, G(X_{\sigma})\, dW(\sigma) \\& \hphantom{\Phi(X_{\cdot},x)_{s}(l)}= e^{(s+l)A}x(0)+ \Phi'(X_{\cdot})_{s}(l), \quad s\in[0,\infty), l\in[-\tau,0], s+l\geq0, \\& \Phi(X_{\cdot},x)_{s}(l) = x(s+l), \quad s\in[0,\infty), l \in[-\tau,0], s+l< 0, \end{aligned}$$
where
$$\begin{aligned}& \Phi'(X_{\cdot})_{s}(l) = c_{\alpha} \int_{0}^{s+l}(s+l-r)^{\alpha -1}{e^{(s+l-r)A}}Y(r) \, dr, \\& Y(r) = \int_{0}^{r}(r-\sigma)^{-\alpha}{e^{(r-\sigma)A}}G(X_{\sigma}) \, dW(\sigma). \end{aligned}$$
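The normalizing constant \(c_{\alpha}\) of the factorization method does not depend on σ or s: substituting \(u=(r-\sigma)/(s-\sigma)\) turns the integral into the Beta value \(\int_{0}^{1}(1-u)^{\alpha-1}u^{-\alpha}\,du=B(1-\alpha,\alpha)=\pi/\sin(\pi\alpha)\). The sketch below checks this identity numerically for one illustrative value of α.

```python
import math

# c_alpha^{-1} = int_sigma^s (s-r)^{alpha-1} (r-sigma)^{-alpha} dr
#              = B(1-alpha, alpha) = pi / sin(pi*alpha),
# independent of sigma and s. The choice alpha = 0.45 (consistent with
# 1/q < alpha < 1/2 for q >= 3) and the quadrature resolution are
# illustrative assumptions.
alpha = 0.45
exact = math.pi / math.sin(math.pi * alpha)   # = Gamma(alpha) * Gamma(1-alpha)

# midpoint quadrature of int_0^1 (1-u)^{alpha-1} u^{-alpha} du
# (midpoints avoid the integrable singularities at u = 0 and u = 1)
n = 200_000
h = 1.0 / n
approx = sum((1 - (i + 0.5) * h) ** (alpha - 1) * ((i + 0.5) * h) ** (-alpha) * h
             for i in range(n))

print(exact, approx)
```

The agreement confirms that \(c_{\alpha}\) is a fixed finite constant once α is chosen in \((1/q, 1/2)\).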
Since \(\sup_{-\tau\leq l\leq0}|e^{(s+l)A}x(0)|\leq Me^{\omega s}|x|_{C}\), the process \(e^{(s+\cdot)A}x(0)\), \(s\geq0\), belongs to \({\mathcal{H}}^{q}_{\eta}\) provided \(\omega+\eta<0\). Next let us estimate \(\Phi'(X_{\cdot})\). Since
$$ \bigl\vert \Phi'(X_{\cdot})_{s}(l)\bigr\vert \leq c_{\alpha}\int_{0}^{s+l}(s+l-r)^{\alpha -1}M{e^{(s+l-r)\omega}} \bigl\vert Y(r)\bigr\vert \, dr, $$
(3.8)
setting \(q'=\frac{q}{q-1}\), we have
$$\begin{aligned}& e^{q\eta s}\bigl\vert \Phi'(X_{\cdot})_{s} \bigr\vert ^{q} \\& \quad \leq c^{q}_{\alpha}M^{q}\sup _{-\tau\leq l\leq0}e^{q\eta s} \biggl(\int_{0}^{s+l}(s+l-r)^{\alpha-1}e^{\omega (s+l-r)} \bigl\vert Y(r)\bigr\vert \, dr\biggr)^{q} \\& \quad \leq c^{q}_{\alpha}M^{q}\sup _{-\tau\leq l\leq 0}\biggl(\int_{0}^{s+l}(s+l-r)^{\alpha-1}e^{\frac{(\omega+\eta)}{q'} (s+l-r)}e^{\frac{(\omega+\eta)}{q} (s-r)}e^{\eta r} \bigl\vert Y(r)\bigr\vert \, dr\biggr)^{q} \\& \quad \leq c^{q}_{\alpha}M^{q}\sup _{-\tau\leq l\leq0}\biggl(\int_{0}^{s+l}e^{(\eta+\omega)(s+l-r)}(s+l-r)^{(\alpha-1)q'} \, dr \biggr)^{\frac{q}{q'}} \int_{0}^{s+l}e^{(\eta+\omega)(s-r)}e^{q\eta r}{ \bigl\vert Y(r)\bigr\vert }^{q}\, dr \\& \quad \leq c^{q}_{\alpha}M^{q}\biggl(\int _{0}^{s}e^{(\eta+\omega) r}r^{q'(\alpha-1)}\, dr \biggr)^{\frac{q}{q'}}\int_{0}^{s}e^{(\eta +\omega)(s-r) }e^{q\eta r}{ \bigl\vert Y(r)\bigr\vert }^{q}\, dr. \end{aligned}$$
Applying the Young inequality for convolutions, we have
$$\begin{aligned}& \int^{\infty}_{0}e^{q\eta s}\bigl\vert \Phi'(X_{\cdot})_{s}\bigr\vert ^{q} \,ds \\& \quad \leq c^{q}_{\alpha }M^{q}\biggl(\int _{0}^{\infty}e^{(\eta+\omega) s}s^{q'(\alpha-1)}\,ds \biggr)^{\frac{q}{q'}}\int_{0}^{\infty}e^{(\eta +\omega)s } \,ds\int_{0}^{\infty} e^{q\eta s}{\bigl\vert Y(s)\bigr\vert }^{q}\,ds, \end{aligned}$$
and we obtain
$$\begin{aligned}& \bigl\vert \Phi'(X_{\cdot})\bigr\vert _{L^{q}_{\mathcal {P}}(\Omega;L^{q}_{\eta}({\mathcal{C}}))} \\& \quad \leq c_{\alpha}M|Y|_{L^{q}_{\mathcal {P}}(\Omega;L^{q}_{\eta}(H))}\biggl(\int_{0}^{\infty}e^{(\eta +\omega) s}s^{q'(\alpha-1)} \,ds\biggr)^{\frac{1}{q'}}\biggl(\int_{0}^{\infty }e^{(\eta+\omega)s } \,ds\biggr)^{\frac{1}{q}}. \end{aligned}$$
(3.9)
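The convolution estimate above rests on Young's inequality, \(\|f\ast g\|_{q}\leq\|f\|_{1}\|g\|_{q}\). The sketch below checks the discrete analogue for randomly generated nonnegative sequences, a stand-in for the exponential kernels \(e^{(\eta+\omega)s}s^{q'(\alpha-1)}\) appearing in the proof; the data and the choice \(q=3\) are illustrative assumptions.

```python
import random

# Discrete Young inequality for convolutions: ||f * g||_q <= ||f||_1 ||g||_q.
# Sample sequences and q = 3 are illustrative assumptions.
random.seed(1)
q = 3
f = [random.random() for _ in range(50)]
g = [random.random() for _ in range(80)]

conv = [sum(f[j] * g[k - j] for j in range(len(f)) if 0 <= k - j < len(g))
        for k in range(len(f) + len(g) - 1)]

lhs = sum(abs(c)**q for c in conv) ** (1.0 / q)
rhs = sum(abs(x) for x in f) * sum(abs(y)**q for y in g) ** (1.0 / q)
print(lhs <= rhs)   # prints True
```

In the proof, the \(L^{1}\) factor is the kernel integral, which is finite exactly when \(\eta+\omega<0\), giving the constants in (3.9).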
If we start again from (3.8) and apply the Hölder inequality, we obtain, for \(\eta<0\),
$$\bigl\vert e^{\eta(s+l)}\Phi'(X_{\cdot})_{s}(l) \bigr\vert \leq c_{\alpha}M\biggl(\int_{0}^{s+l}r^{(\alpha-1)q'}{e^{(\omega+\eta )rq'}} \, dr\biggr)^{\frac{1}{q'}} \biggl(\int_{0}^{s+l}{e^{\eta rq}} \bigl\vert Y(r)\bigr\vert ^{q}\,dr\biggr)^{\frac{1}{q}} $$
and
$$\bigl\vert e^{\eta s}\Phi'(X_{\cdot})_{s} \bigr\vert \leq c_{\alpha}M\biggl(\int_{0}^{s}r^{(\alpha-1)q'}{e^{(\omega+\eta )rq'}} \, dr\biggr)^{\frac{1}{q'}} \biggl(\int_{0}^{s}{e^{\eta rq}} \bigl\vert Y(r)\bigr\vert ^{q}\,dr\biggr)^{\frac{1}{q}}. $$
So we conclude that
$$ \bigl\vert \Phi'(X_{\cdot})\bigr\vert _{L^{q}_{\mathcal {P}}(\Omega;C_{\eta}({\mathcal{C}}))} \leq c_{\alpha}M|Y|_{L^{q}_{\mathcal {P}}(\Omega;L^{q}_{\eta}(H))}\biggl(\int_{0}^{\infty}r^{(\alpha -1)q'}{e^{(\omega+\eta)rq'}} \, dr\biggr)^{\frac{1}{q'}}. $$
(3.10)
On the other hand, by the Burkholder-Davis-Gundy inequalities, for some constant \(c_{q}\) depending only on q, we have
$$\begin{aligned} {E}\bigl\vert Y(r)\bigr\vert ^{q} \leq& c_{q}{E}\biggl( \int_{0}^{r}(r-\sigma)^{-2\alpha }\bigl\vert {e^{(r-\sigma)A}}G(X_{\sigma})\bigr\vert ^{2}_{L_{2}(\Xi,H)} \,d\sigma\biggr)^{\frac{q}{2}} \\ \leq&L^{q}c_{q} {E}\biggl(\int_{0}^{r}(r- \sigma)^{-2\alpha }e^{2\omega(r-\sigma)}\bigl(1+|X_{\sigma}|^{2}_{C} \bigr)\,d\sigma\biggr)^{\frac {q}{2}}, \end{aligned}$$
which implies
$$ \bigl[E\bigl\vert Y(r)\bigr\vert ^{q}\bigr]^{\frac{2}{q}}\leq L^{2}c_{q}^{\frac{2}{q}}\int_{0}^{r}(r- \sigma)^{-2\alpha}e^{2\omega(r-\sigma)} \bigl[E\bigl(1+|X_{\sigma}|_{C}\bigr)^{q} \bigr]^{\frac{2}{q}}\,d\sigma, $$
so that
$$\begin{aligned} e^{2\eta r}\bigl[E\bigl\vert Y(r)\bigr\vert ^{q} \bigr]^{\frac{2}{q}} \leq& C_{1}\int_{0}^{r}(r- \sigma )^{-2\alpha}e^{2(\omega+\eta)(r-\sigma)} e^{2\eta\sigma}\,d\sigma \\ &{}+C_{2}\int_{0}^{r}(r- \sigma)^{-2\alpha}e^{2(\omega+\eta)(r-\sigma)} e^{2\eta\sigma}\bigl[E|X_{\sigma}|_{C}^{q} \bigr]^{\frac{2}{q}}\,d\sigma \end{aligned}$$
for suitable constants \(C_{1}\), \(C_{2}\). Applying the Young inequality for convolutions, we obtain
$$\begin{aligned} \int_{0}^{\infty}e^{q\eta r}E{\bigl\vert Y(r) \bigr\vert }^{q}\,ds \leq& C_{1}\biggl(\int _{0}^{\infty}s^{-2\alpha}e^{2(\omega+\eta )s}\,ds \biggr)^{\frac{q}{2}} \int_{0}^{\infty}e^{q\eta s} \,ds \\ &{}+C_{2}\biggl(\int_{0}^{\infty}s^{-2\alpha}e^{2(\omega+\eta )s} \,ds\biggr)^{\frac{q}{2}} \int_{0}^{\infty}e^{q\eta s}E|X_{s}|^{q}_{C} \, ds. \end{aligned}$$
This shows that \(|Y|_{L^{q}_{\mathcal {P}}(\Omega;L^{q}_{\eta}(H))}\) is finite provided \(\eta<0\) and \(\omega+\eta<0\), so the map Φ is well defined.
If \(X_{\cdot}^{1}\), \(X_{\cdot}^{2}\) are processes belonging to \({\mathcal{H}}^{q}_{\eta}\) and \(Y^{1}\), \(Y^{2}\) are defined accordingly, similarly we find that
$$ \bigl\vert Y^{1}-Y^{2}\bigr\vert _{L^{q}_{\mathcal {P}}(\Omega;L^{q}_{\eta}(H))}\leq Lc^{\frac{1}{q}}_{\alpha}\bigl\vert X^{1}_{\cdot}-X^{2}_{\cdot}\bigr\vert _{L^{q}_{\mathcal {P}}(\Omega;L^{q}_{\eta}(\mathcal{C}))}\biggl(\int_{0}^{\infty }s^{-2\alpha}e^{2(\omega+\eta)s} \,ds\biggr)^{\frac{1}{2}}. $$
By the inequalities (3.9) and (3.10), we obtain an explicit expression for the constant c in (3.7), and it is immediate that \(c<1\) provided \(\eta<0\) is chosen sufficiently small. We fix such a value \(\eta(q)\). The first assertion is then a consequence of the contraction principle. The estimate (3.5) also follows from the contraction property of \(\Phi(\cdot,x)\).
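The contraction principle invoked here yields both a unique fixed point and geometric convergence of the Picard iterates. The scalar map below, with contraction constant 0.5, is a hypothetical stand-in for \(\Phi(\cdot,x)\); it is only meant to make the mechanism concrete.

```python
# Banach fixed-point sketch: for a contraction phi with constant c < 1,
# Picard iterates x_{n+1} = phi(x_n) converge geometrically to the unique
# fixed point. The map phi below (c = 0.5, fixed point 2) is illustrative.

def picard(phi, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

phi = lambda x: 0.5 * x + 1.0      # contraction; fixed point solves x = 0.5x + 1
print(picard(phi, 0.0))
```

In the proof, the same iteration runs in \({\mathcal{H}}^{q}_{\eta(q)}\), and the uniform-in-x contraction constant is what later allows differentiating the fixed point with respect to the initial datum.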
Now let us study the regular dependence of the solution on the initial datum. Firstly, we show that the map \(x\rightarrow X_{\cdot}(x)\) belongs to \({\mathcal{G}}^{1}({\mathcal{C}},{\mathcal {H}}^{q}_{\eta(q)})\). By the parameter depending contraction principle (see [29]), it suffices to prove that
$$\Phi\in{\mathcal{G}}^{1,1}\bigl({\mathcal{H}}^{q}_{\eta(q)} \times{\mathcal {C}};{\mathcal{H}}^{q}_{\eta(q)}\bigr). $$
We split the proof into several steps.

Step 1. It is clear that Φ is continuous.

Step 2. The directional derivative \(\nabla_{X}\Phi(X,x)\) in the direction \(N\in{\mathcal{H}}^{q}_{\eta (q)}\) satisfies
$$\begin{aligned}& \nabla_{X}\Phi (X,x;N)_{s}(\theta)=\int ^{s+\theta}_{0} e^{(s+\theta-\sigma)A}\nabla_{x}F(X_{\sigma})N_{\sigma} \, d{\sigma} \\ & \hphantom{\nabla_{X}\Phi (X,x;N)_{s}(\theta)={}}{}+ \int^{s+\theta}_{0} e^{(s+\theta-\sigma)A}\nabla_{x}G(X_{\sigma})N_{\sigma}\,dW( \sigma), \quad s+\theta\geq0, \\ & \nabla_{X}\Phi (X,x;N)_{s}(\theta)=0, \quad s+\theta< 0. \end{aligned}$$
Moreover, the mappings \((X,x)\rightarrow \nabla_{X}\Phi(X,x;N)\) and \(N\rightarrow \nabla_{X}\Phi(X,x;N)\) are continuous. We only prove this claim in the special case \(F=0\). For fixed \(x\in{\mathcal{C}}\), for all \(s\geq0\), we define
$$\begin{aligned} I^{\varepsilon}_{s}(\theta) :=& \frac{1}{\varepsilon}\Phi(X+ \varepsilon N,x)_{s}(\theta)-\frac{1}{\varepsilon}\Phi(X,x)_{s}( \theta) \\ &{}- \int^{s+\theta}_{0}e^{(s+\theta-\sigma)A} \nabla_{x}G(X_{\sigma })N_{\sigma}\,dW(\sigma) \\ =&\int^{s+\theta}_{0}\biggl(\int^{1}_{0} \bigl(e^{(s+\theta-\sigma )A}\nabla_{x}\bigl(G(X_{\sigma}+\zeta \varepsilon N_{\sigma})\bigr)N_{\sigma}\\ &{}-e^{(s+\theta-\sigma)A} \nabla_{x} G(X_{\sigma })N_{\sigma}\bigr)\,d \zeta\biggr)\,dW(\sigma). \end{aligned}$$
By a similar procedure, we show that, for \(\frac{1}{q}<\alpha<\frac{1}{2} \) and a suitable constant \(c_{q}\),
$$ \bigl\vert I^{\varepsilon}\bigr\vert ^{q}_{{\mathcal{H}}^{q}_{\eta(q)}}\leq c_{q} E\int^{\infty}_{0}e^{r\eta(q)q} \bigl\vert Y^{\varepsilon}(r)\bigr\vert ^{q}\,dr, $$
where
$$\begin{aligned} Y^{\varepsilon}(r) =&\int^{r}_{0}(r- \sigma)^{-\alpha}\biggl(\int^{1}_{0} \bigl(e^{(r-\sigma)A}\nabla_{x}\bigl(G(X_{\sigma}+\zeta \varepsilon N_{\sigma})\bigr)N_{\sigma}\\ &{}-e^{(r-\sigma)A} \nabla_{x}G(X_{\sigma})N_{\sigma }\bigr)\,d\zeta\biggr) \,dW(\sigma). \end{aligned}$$
Thus, for a suitable constant c,
$$\begin{aligned} E\bigl\vert Y^{\varepsilon}(r)\bigr\vert ^{q} \leq& cE\biggl( \int^{r}_{0}(r-\sigma )^{-2\alpha}\biggl\vert \int^{1}_{0}\bigl(e^{(r-\sigma)A} \nabla_{x}\bigl(G(X_{\sigma}+\zeta\varepsilon N_{\sigma})\bigr)N_{\sigma}\\ &{}-e^{(r-\sigma)A}\nabla_{x} G(X_{\sigma})N_{\sigma} \bigr)\,d\zeta\biggr\vert ^{2}_{L_{2}(\Xi,H)}\,d\sigma \biggr)^{\frac{q}{2}}, \end{aligned}$$
and setting
$$ f^{\varepsilon}(\sigma,r,\zeta)=e^{r\eta(q)}(r-\sigma)^{-\alpha}\bigl\vert e^{(r-\sigma)A}\nabla_{x}\bigl(G(X_{\sigma}+\zeta \varepsilon N_{\sigma})\bigr)N_{\sigma}-e^{(r-\sigma)A} \nabla_{x}\bigl(G(X_{\sigma})N_{\sigma }\bigr)\bigr\vert _{L_{2}(\Xi,H)}, $$
we find that
$$ E\int^{\infty}_{0}e^{r\eta(q)q}\bigl\vert Y^{\varepsilon}(r)\bigr\vert ^{q}\,dr\leq c\int ^{\infty}_{0}E\biggl(\int^{r}_{0} \biggl\vert \int^{1}_{0}f^{\varepsilon}( \sigma,r,\zeta )\,d\zeta\biggr\vert ^{2}\,d\sigma\biggr)^{\frac{q}{2}} \,dr. $$
From Hypothesis 3.1(iii) and (iv) it follows that \(|\nabla_{x}G(x)h|\leq L|h|\), which implies
$$\bigl\vert f^{\varepsilon}(\sigma,r,\zeta)\bigr\vert \leq 2Le^{r\eta(q)}(r-\sigma)^{-\alpha}e^{\omega(r-\sigma)}|N_{\sigma}|. $$
Moreover,
$$\begin{aligned}& \int^{\infty}_{0}E\biggl(\int^{r}_{0} \bigl\vert e^{r\eta(q)}(r-\sigma )^{-\alpha}e^{\omega(r-\sigma)}|N_{\sigma}| \bigr\vert ^{2}\,d\sigma\biggr)^{\frac{q}{2}}\,dr \\& \quad \leq \int^{\infty}_{0}\biggl(\int ^{r}_{0}e^{2(\omega+\eta (q))(r-\sigma)} (r-\sigma)^{-2\alpha}e^{2\eta(q)\sigma} \bigl[E|N_{\sigma}|^{q}\bigr]^{\frac {2}{q}}\,d\sigma \biggr)^{\frac{q}{2}}\,dr \\& \quad \leq \biggl(\int^{\infty}_{0}r^{-2\alpha}e^{2(\omega+\eta (q))r} \,dr\biggr)^{\frac{q}{2}}\int^{\infty}_{0}e^{q\eta (q)r}E|N_{r}|^{q} \,dr \\& \quad \leq c|N|^{q}_{{\mathcal{H}}^{q}_{\eta(q)}}< \infty. \end{aligned}$$
Since \(e^{rA}\nabla_{x}G(x)h\) is continuous in x, by the dominated convergence theorem it follows that \(E\int^{\infty}_{0}e^{r\eta(q)q}|Y^{\varepsilon}(r)|^{q}\,dr\rightarrow0\) as \(\varepsilon\rightarrow0\).

In a similar way we find that the mappings \((X,x)\rightarrow \nabla_{X}\Phi(X,x;N)\) and \(N\rightarrow \nabla_{X}\Phi(X,x;N)\) are continuous.

Step 3. It is easy to show that the directional derivative \(\nabla_{x}\Phi(X,x;h)\) in direction \(h\in{\mathcal{C}}\) is the process given by
$$\nabla_{x}\Phi(X,x;h)_{s}(\theta)= \textstyle\begin{cases} e^{(s+\theta)A}h(0),& s+\theta\geq0, \\ h( s+\theta),& s+\theta\in[-\tau,0), \end{cases} $$
and the mappings \((X,x)\rightarrow \nabla_{x}\Phi(X,x;h)\) and \(h\rightarrow \nabla_{x}\Phi(X,x;h)\) are continuous.

From the parameter depending contraction principle (see [29]), it follows that (3.6) holds true. The final estimate is a trivial consequence of (3.6).

Now we have to prove that the map \(x\rightarrow X_{\cdot}(x)\) belongs to \(C^{1}({\mathcal{C}},{\mathcal{H}}^{q}_{\eta (q)})\). For simplicity, we set \(F=0\). For every \(x,y,h\in{\mathcal{C}}\), by (3.6) we have
$$\begin{aligned}& \nabla_{x}\bigl(X_{s}(x)-X_{s}(y)\bigr)h( \theta) \\& \quad = \int_{0}^{s+\theta}{e^{(s+\theta-\sigma)A}}\bigl( \nabla_{x} G\bigl(X_{\sigma}(x)\bigr)\nabla_{x}X_{\sigma}(x)- \nabla_{x} G\bigl(X_{\sigma}(y)\bigr)\nabla_{x}X_{\sigma}(y) \bigr)h\, dW(\sigma). \end{aligned}$$
By a similar procedure, we find that, for some constant \(c_{q}\) which may vary from line to line,
$$\begin{aligned}& \bigl\vert \nabla_{x}X_{\cdot}(x)-\nabla_{x}X_{\cdot}(y) \bigr\vert ^{q}_{L({\mathcal{C}};{\mathcal {H}}^{q}_{\eta{(q)}})} \\& \quad =\sup_{|h|_{\mathcal{C}}=1}\bigl\vert \nabla_{x}X_{\cdot}(x)h- \nabla_{x}X_{\cdot}(y)h\bigr\vert ^{q}_{{\mathcal{H}}^{q}_{\eta{(q)}}} \\& \quad \leq \sup_{|h|=1}c_{q}\biggl(\int _{0}^{\infty}s^{-2\alpha}e^{2(\omega +\eta(q))s}\,ds \biggr)^{\frac{q}{2}} \biggl(\int_{0}^{\infty}e^{(\eta(q)+\omega) s}s^{q'(\alpha-1)} \,ds\biggr)^{\frac{q}{q'}}\biggl(\int_{0}^{\infty }e^{(\eta(q)+\omega)s } \,ds+1\biggr) \\& \qquad {}\times E\int^{\infty}_{0}e^{q\eta(q)\sigma} \bigl\vert \bigl(\nabla_{x} G\bigl(X_{\sigma}(x)\bigr) \nabla_{x}X_{\sigma}(x)-\nabla_{x} G \bigl(X_{\sigma}(y)\bigr)\nabla_{x}X_{\sigma}(y)\bigr)h \bigr\vert ^{q}\, d\sigma \\& \quad \leq \sup_{|h|=1}c_{q}\biggl(\int _{0}^{\infty}s^{-2\alpha }e^{2(\omega+\eta(q))s}\,ds \biggr)^{\frac{q}{2}} \biggl(\int_{0}^{\infty}e^{(\eta(q)+\omega) s}s^{q'(\alpha-1)} \,ds\biggr)^{\frac{q}{q'}}\biggl(\int_{0}^{\infty }e^{(\eta(q)+\omega)s } \,ds+1\biggr) \\& \qquad {}\times E\int^{\infty}_{0}e^{q\eta(q)\sigma} \bigl[\bigl\vert \bigl(\nabla_{x}X_{\sigma}(x)- \nabla_{x}X_{\sigma}(y)\bigr)h\bigr\vert ^{q} \\& \qquad {}+ \bigl\vert \bigl(\nabla_{x}G\bigl(X_{\sigma}(x)\bigr)- \nabla_{x} G\bigl(X_{\sigma}(y)\bigr)\bigr) \nabla_{x}X_{\sigma}(y)h\bigr\vert ^{q}\bigr]\,d \sigma. \end{aligned}$$
Letting \(\eta(q)\) be sufficiently small such that \(c_{q}(\int_{0}^{\infty}s^{-2\alpha}e^{2(\omega+\eta (q))s}\,ds)^{\frac{q}{2}} (\int_{0}^{\infty}e^{(\eta(q)+\omega) s}\times s^{q'(\alpha-1)}\,ds)^{\frac{q}{q'}} (\int_{0}^{\infty}e^{(\eta(q)+\omega)s }\,ds+1)< 1\), we find that
$$\begin{aligned}& \bigl\vert \nabla_{x}X_{\cdot}(x)-\nabla_{x}X_{\cdot}(y) \bigr\vert ^{q}_{L({\mathcal{C}};{\mathcal{H}}^{q}_{\eta{(q)}})} \\& \quad \leq c_{q}\sup_{|h|=1} E\int ^{\infty}_{0}e^{q\eta(q)\sigma}\bigl\vert \bigl( \nabla_{x}G\bigl(X_{\sigma}(x)\bigr)-\nabla_{x} G \bigl(X_{\sigma}(y)\bigr)\bigr)\nabla_{x}X_{\sigma}(y)h \bigr\vert ^{q}\, d\sigma \\& \quad \leq \frac{c_{q}}{\lambda}E\int_{0}^{\infty}e^{q\eta(q)\sigma} \bigl\vert \nabla _{x}G\bigl(X_{\sigma}(x)\bigr)- \nabla_{x} G\bigl(X_{\sigma}(y)\bigr)\bigr\vert _{L({\mathcal{C}};L_{2}(\Xi;H))}^{2q}\,d\sigma \\& \qquad {} +c_{q}\lambda\sup_{|h|_{\mathcal{C}}=1} E\int ^{\infty}_{0}e^{q\eta(q)\sigma}\bigl\vert \nabla_{x}X_{\sigma}(y)h\bigr\vert ^{2q}_{C} \, d\sigma \\& \quad \leq \frac{c_{q}}{\lambda}E\int_{0}^{\infty}e^{q\eta(q)\sigma} \bigl\vert \nabla _{x}G\bigl(X_{\sigma}(x)\bigr)- \nabla_{x} G\bigl(X_{\sigma}(y)\bigr)\bigr\vert _{L({\mathcal{C}};L_{2}(\Xi;H))}^{2q}\,d\sigma+c_{q}\lambda. \end{aligned}$$
Since \(x\rightarrow X_{\cdot}(x)\) is continuous from \({\mathcal{C}}\) to \({\mathcal{H}}^{q}_{\eta(q)}\), for any sequence \(x_{n}\rightarrow y\) there exists a subsequence, still denoted by \(\{x_{n}\}\), such that \(X_{\cdot}(x_{n})\rightarrow X_{\cdot}(y)\), P-a.s. As \(\nabla_{x}G(x)\) is continuous in x and \(|\nabla_{x}G(X_{\sigma}(x_{n}))-\nabla_{x} G(X_{\sigma}(y))|^{2q}_{L({\mathcal{C}};L_{2}(\Xi,H))}\leq(2L)^{2q}\), by the dominated convergence theorem we have
$$\lim_{n\rightarrow\infty}\bigl\vert \nabla_{x}X_{\cdot}(x_{n})- \nabla_{x}X_{\cdot}(y)\bigr\vert ^{q}_{L({\mathcal{C}};{\mathcal{H}}^{q}_{\eta{(q)}})}=0. $$
Suppose, to the contrary, that there exists a sequence \(\{x_{m}\}\) with \(x_{m}\rightarrow y\) such that
$$\lim_{m\rightarrow\infty}\bigl\vert \nabla_{x}X_{\cdot}(x_{m})- \nabla_{x}X_{\cdot}(y)\bigr\vert ^{q}_{L({\mathcal{C}};{\mathcal{H}}^{q}_{\eta{(q)}})} \neq0. $$
Then there exist a subsequence \(\{x_{m_{k}}\}\) of \(\{x_{m}\}\) and a constant \(\varepsilon>0\) such that
$$\bigl\vert \nabla_{x}X_{\cdot}(x_{m_{k}})- \nabla_{x}X_{\cdot}(y)\bigr\vert ^{q}_{L({\mathcal{C}};{\mathcal{H}}^{q}_{\eta{(q)}})}> \varepsilon. $$
On the other hand, by the same argument as above, there exists a subsequence \(\{x_{m_{k_{l}}}\}\) of \(\{x_{m_{k}}\}\) such that
$$\lim_{l\rightarrow\infty}\bigl\vert \nabla_{x}X_{\cdot}(x_{m_{k_{l}}})- \nabla _{x}X_{\cdot}(y)\bigr\vert ^{q}_{L({\mathcal{C}};{\mathcal{H}}^{q}_{\eta{(q)}})}=0. $$
This is a contradiction. Thus we obtain
$$\lim_{x\rightarrow y}\bigl\vert \nabla_{x}X_{\cdot}(x)- \nabla_{x}X_{\cdot}(y)\bigr\vert ^{q}_{L({\mathcal{C}};{\mathcal{H}}^{q}_{\eta{(q)}})}=0. $$
The proof is finished. □

4 The backward-forward system

In this section we consider the following backward-forward system of stochastic differential equations, P-a.s.:
$$ \textstyle\begin{cases} X(s)=e^{sA}x(0)+\int_{0}^{s}{e^{(s-\sigma)A}}F(X_{\sigma})\,d\sigma +\int_{0}^{s}{e^{(s-\sigma)A}}G(X_{\sigma})\,dW(\sigma), \quad s\in[0,\infty), \\ X_{0}=x\in{\mathcal{C}}, \\ Y(s)-Y(T)+\int^{T}_{s}Z(\sigma)\,dW(\sigma)+\lambda\int^{T}_{s}Y(\sigma)\,d\sigma \\ \quad =\int^{T}_{s}\psi(X_{\sigma},Y(\sigma ),Z(\sigma))\,d\sigma,\quad 0\leq s\leq T< \infty. \end{cases} $$
(4.1)

We make the following assumptions.

Hypothesis 4.1

  1. (i)
    The mapping \(\psi: {\mathcal{C}}\times K\times L_{2}(\Xi,K)\rightarrow K\) is continuous and, for some \(L>0\), \(\mu\in R\), and \(m\geq1\),
    $$\begin{aligned}& \bigl\vert \psi(x,y_{1},z_{1})-\psi(x,y_{2},z_{2}) \bigr\vert \leq L|y_{1}-y_{2}|+L|z_{1}-z_{2}|, \\& \bigl\vert \psi(x,y,z)\bigr\vert \leq L\bigl(1+|x|_{C}^{m}+|y|+|z| \bigr), \\& \bigl\langle \psi(x,y_{1},z)-\psi(x,y_{2},z),y_{1}-y_{2} \bigr\rangle _{K}\geq\mu|y_{1}-y_{2}|^{2} \end{aligned}$$
    for every \(x\in{\mathcal{C}}\), \(y,y_{1},y_{2}\in K\), \(z,z_{1},z_{2}\in L_{2}(\Xi,K)\).
     
  2. (ii)
    The map \(\psi(\cdot,\cdot,\cdot)\in{C}^{1}({\mathcal {C}}\times R\times L_{2}(\Xi,R); R)\) and there exist \(L>0\) and \(m\geq0\) such that
    $$\bigl\vert \nabla_{x}\psi(x,y,z)h\bigr\vert \leq L|h|_{C}\bigl(1+\vert x\vert _{C}+|y| \bigr)^{m}\bigl(1+\vert z\vert \bigr) $$
    for every \(x,h\in{\mathcal{C}}\), \(y\in R\), and \(z\in L_{2}(\Xi,R)\).
     

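To fix ideas, here is one concrete nonlinearity (our own illustration, not taken from the paper) satisfying Hypothesis 4.1 with \(K=R\) and \(\Xi=R\):

```latex
% An illustrative nonlinearity satisfying Hypothesis 4.1 with K = R:
\[
  \psi(x,y,z) \;=\; L\bigl(1+\bigl|x(0)\bigr|^{2}\bigr)^{1/2} + L\sin y + L\sin z .
\]
% (i) holds: the y- and z-parts are L-Lipschitz,
%     |\psi(x,y,z)| \le L(1+|x|_C+|y|+|z|) (so m = 1 works), and
%     (\sin y_1-\sin y_2)(y_1-y_2) \ge -(y_1-y_2)^2 gives the monotonicity
%     inequality with \mu = -L.
% (ii) holds: \psi is C^1 and
%     |\nabla_x\psi(x,y,z)h| = L\,|\langle x(0),h(0)\rangle|(1+|x(0)|^2)^{-1/2}
%     \le L|h|_C, i.e. the gradient bound with m = 0.
```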
For the backward-forward system (4.1) we have the following basic result (see Proposition 3.11 and Proposition 5.1 in [23]).

Theorem 4.2

Assume that Hypotheses 3.1 and 4.1 hold. There exists a constant \(\eta(q)\) such that, for every \(p\in(2,+\infty)\) and for β and q satisfying
$$ q\geq p(m+1) (m+2), \qquad \beta< \eta(q) (m+1) (m+2), \qquad \beta< 0, \qquad \eta(q)< 0, $$
(4.2)
and for every \(\lambda>\hat{\lambda}=-(\beta+\mu-L^{2}/2)\), the following hold:
  1. (i)

    For every \(x\in{\mathcal{C}}\), there exists a unique solution in \({\mathcal{H}}^{q}_{\eta(q)}\times{\mathcal{K}}^{p}_{\beta}\) of (4.1), which will be denoted by \((X(\cdot,x),Y(\cdot,x),Z(\cdot,x))\). Moreover, \(Y(x)\in L^{p}_{\mathcal{P}}(\Omega;C_{\beta}(R))\).

     
  2. (ii)
    The maps \(x\rightarrow X(x)\), \(x\rightarrow(Y(x),Z(x))\), \(x\rightarrow Y(x)\) belong to \({\mathcal{G}}^{1}({\mathcal{C}};{\mathcal {H}}^{q}_{\eta(q)})\), \({\mathcal{G}}^{1}({\mathcal{C}};{\mathcal {K}}_{\beta}^{p})\), and \({\mathcal{G}}^{1}({\mathcal{C}};L^{p}_{\mathcal{P}}(\Omega;C_{\beta}(R)))\), respectively. Moreover, for every \(h\in {\mathcal{C}}\), \((\nabla_{x}Y(s,x)h,\nabla_{x}Z(s,x)h)\) solves the equation, P-a.s.,
    $$\begin{aligned}& \nabla_{x}Y(s,x)h-\nabla_{x}Y(T,x)h+ \lambda\int^{T}_{s}\nabla_{x}Y( \sigma,x)h\, d\sigma +\int^{T}_{s} \nabla_{x}Z(\sigma,x)h\, dW(\sigma) \\& \quad = -\int^{T}_{s}\nabla_{x}\psi \bigl( X_{\sigma}(x),Y(\sigma,x),Z(\sigma,x)\bigr)\nabla_{x}X_{\sigma}(x)h \, d\sigma \\& \qquad {}-\int^{T}_{s}\nabla_{y} \psi\bigl( X_{\sigma}(x),Y(\sigma,x),Z(\sigma,x)\bigr)\nabla_{x}Y( \sigma,x)h\, d\sigma \\& \qquad {}-\int^{T}_{s}\nabla_{z} \psi\bigl( X_{\sigma}(x),Y(\sigma,x),Z(\sigma,x)\bigr)\nabla_{x}Z( \sigma,x)h\, d\sigma, \quad s\in[0,+\infty). \end{aligned}$$
    (4.3)
     
  3. (iii)
    For every \(x,h\in\mathcal{C}\), there exists a constant \(c>0\), independent of x and h, such that
    $$\begin{aligned}& E\sup_{s\geq0}e^{p\beta s}\bigl\vert \nabla_{x} Y(s,x)h\bigr\vert ^{p} +E\biggl(\int ^{+\infty}_{0}e^{2\beta\sigma}\bigl\vert \nabla_{x} Y(\sigma ,x)h\bigr\vert ^{2}\,d\sigma \biggr)^{\frac{p}{2}} \\& \qquad {}+ E\biggl(\int^{+\infty}_{0}e^{2\beta\sigma} \bigl\vert \nabla_{x} Z(\sigma ,x)h\bigr\vert ^{2}\,d\sigma \biggr)^{\frac{p}{2}} \\& \quad \leq c|h|^{p}_{C}\bigl(1+|x|_{C}^{m^{2}} \bigr)^{p}. \end{aligned}$$
    (4.4)
     

Theorem 4.3

Under the assumptions of Theorem  4.2, we have
$$ \lim_{x^{1}\rightarrow x}\sup_{|h|=1}E\sup _{s\geq0}e^{-p\lambda s}\bigl\vert \nabla_{x}Y(s,x)h- \nabla _{x}Y\bigl(s,x^{1}\bigr)h\bigr\vert ^{p} = 0. $$
(4.5)

Proof

Setting \(\nabla_{x}Y^{\lambda}(s)=e^{-\lambda s}\nabla_{x}Y(s)\) and \(\nabla_{x}Z^{\lambda}(s)=e^{-\lambda s}\nabla_{x}Z(s)\), by the Itô formula, (4.3) is equivalent to
$$\begin{aligned}& \nabla_{x}Y^{\lambda}(s,x)h-\nabla_{x}Y^{\lambda}(T,x)h +\int^{T}_{s}\nabla_{x}Z^{\lambda}( \sigma,x)h\, dW(\sigma) \\& \quad = -\int^{T}_{s}\nabla_{x}\psi \bigl( X_{\sigma}(x),Y(\sigma,x),Z(\sigma,x)\bigr)e^{-\lambda\sigma} \nabla _{x}X_{\sigma}(x)h\, d\sigma \\& \qquad {}-\int^{T}_{s}\nabla_{y} \psi\bigl( X_{\sigma}(x),Y(\sigma,x),Z(\sigma,x) \bigr)e^{-\lambda\sigma}\nabla _{x}Y(\sigma,x)h\, d\sigma \\& \qquad {}-\int^{T}_{s}\nabla_{z} \psi\bigl( X_{\sigma}(x),Y(\sigma,x),Z(\sigma,x) \bigr)e^{-\lambda\sigma}\nabla _{x}Z(\sigma,x)h\, d\sigma, \quad 0\leq s\leq T< +\infty. \end{aligned}$$
For every \(T>0\), we can show that, for a suitable constant \(c_{p}>0\) depending only on p,
$$\begin{aligned}& \lim_{x^{1}\rightarrow x}\sup_{|h|=1}E\sup _{s\geq0}e^{-p\lambda s}\bigl\vert \nabla_{x}Y(s,x)h- \nabla _{x}Y\bigl(s,x^{1}\bigr)h\bigr\vert ^{p} \\& \quad \leq c_{p}\lim_{x^{1}\rightarrow x}\sup_{|h|=1}E \bigl\vert \nabla_{x}Y^{\lambda}(T,x)h-\nabla_{x}Y^{\lambda}\bigl(T,x^{1}\bigr)h\bigr\vert ^{p}. \end{aligned}$$
On the other hand, for every \(\varepsilon>0\), letting T be large enough, by estimate (4.4) we see that
$$\begin{aligned}& \sup_{|h|=1}E\bigl\vert \nabla_{x}Y^{\lambda}(T,x)h- \nabla_{x}Y^{\lambda}\bigl(T,x^{1}\bigr)h\bigr\vert ^{p} \\& \quad =\sup_{|h|=1}Ee^{-p\lambda T}\bigl\vert \nabla_{x}Y(T,x)h-\nabla _{x}Y\bigl(T,x^{1} \bigr)h\bigr\vert ^{p}< \frac{\varepsilon}{2c_{p}}. \end{aligned}$$
Therefore,
$$ \lim_{x^{1}\rightarrow x}\sup_{|h|=1}E\sup _{s\geq0}e^{-p\lambda s}\bigl\vert \nabla_{x}Y(s,x)h- \nabla _{x}Y\bigl(s,x^{1}\bigr)h\bigr\vert ^{p} =0. $$
The proof is finished. □

Corollary 4.4

Assume that Hypotheses 3.1 and 4.1 hold. Then the function \(v^{N}(x)=Y^{N}(0,x)\) belongs to \({C}^{1}({\mathcal{C}};R)\) and there exists a constant \(C>0\) independent of N such that \(|\nabla_{x} v^{N}(x)h|\leq C|h|_{C}(1+|x|_{C}^{m^{2}})\) for all \(x,h\in{\mathcal{C}}\).

Moreover, for every \(x\in {\mathcal{C}}\), we have
$$ \begin{aligned} &Y^{N}(s,x)=v^{N} \bigl(X^{N}_{s}(x)\bigr),\quad P\textit{-a.s. for all } s \geq0, \\ &Z^{N}(s,x)=\overline{\nabla_{0}v^{N} \bigl(X^{N}_{s}(x)\bigr)}G^{N} \bigl(X^{N}_{s}(x)\bigr), \quad P\textit{-a.s. for a.e. } s\geq0. \end{aligned} $$
(4.6)

Proof

By Theorem 4.2 and Theorem 4.3, it follows that \(v^{N}\in{C}^{1}({\mathcal{C}};R)\), and the gradient estimate is a direct consequence of Theorem 4.2. By a procedure similar to that of Theorem 6.1 in [23], we obtain \(Y^{N}(s,x)=v^{N}(X^{N}_{s}(x))\), P-a.s. for all \(s\geq0\).

For every \(T>0\), we consider the joint quadratic variation of \(Y^{N}(\cdot,x)\) and the Wiener process \(We_{i}\) on an interval \([0,T]\). By the backward stochastic differential equation we get
$$\bigl\langle Y^{N}(\cdot,x),We_{i}\bigr\rangle _{[0,T]}=\int^{T}_{0}Z^{N}(s,x)e_{i} \, ds. $$
From Theorem 3.3 it follows that
$$\bigl\langle v^{N}\bigl(X^{N}_{\cdot}(x) \bigr), We_{i}\bigr\rangle _{[0,T]}=\int^{T}_{0} \overline{\nabla _{0}v^{N}\bigl(X^{N}_{s}(x) \bigr)}G^{N}\bigl(X^{N}_{s}(x) \bigr)e_{i}\, ds. $$
By the arbitrariness of T, we obtain
$$Z^{N}(s,x)=\overline{\nabla_{0}v^{N} \bigl(X^{N}_{s}(x)\bigr)}G^{N} \bigl(X^{N}_{s}(x)\bigr),\quad P\mbox{-a.s. for a.e. } s \geq0. $$
 □

Theorem 4.5

Let us assume that Hypotheses 3.1 and 4.1 hold true. In addition, we assume that
$$ \lim_{N\rightarrow\infty}\sup_{|v|=1}\bigl\vert \nabla_{x}G(x)v-\nabla _{x}G^{N}(x)v\bigr\vert _{L_{2}(\Xi;H)}=0. $$
(4.7)
Then, for every \(p>2\), we have
$$ \lim_{N\rightarrow\infty}\sup_{|h|=1}E\sup _{s\geq0}e^{-p\lambda s}\bigl\vert \nabla_{x}Y(s,x)h- \nabla_{x}Y^{N}(s,x)h\bigr\vert ^{p} = 0, $$
(4.8)
in particular, we find that
$$ \lim_{N\rightarrow\infty}\bigl\vert \nabla_{x}Y(0,x)- \nabla_{x}Y^{N}(0,x)\bigr\vert ^{p} =0. $$
(4.9)

Proof

The proof is very similar to that of Theorem 4.3, so we omit it. □

Now we are in a position to prove the main result of this section.

Theorem 4.6

Assume that Hypotheses 3.1 and 4.1 hold, and let (4.7) hold true. Then the function \(v(x)=Y(0,x)\) belongs to \(C^{1}({\mathcal{C}};R)\) and there exists a constant \(C>0\) such that \(|\nabla_{x} v(x)h|\leq C|h|_{C}(1+|x|_{C}^{m^{2}})\) for all \(x\in{\mathcal{C}}\) and \(h\in {\mathcal{C}}\). Moreover, for every \(x\in {\mathcal{C}}\), we have
$$ \begin{aligned} &Y(s,x)=v\bigl(X_{s}(x)\bigr), \quad P \textit{-a.s. for all } s\geq0, \\ &Z(s,x)=\overline{\nabla_{0}v\bigl(X_{s}(x)\bigr)}G \bigl(X_{s}(x)\bigr), \quad P\textit{-a.s. for a.e. } s\geq0. \end{aligned} $$
(4.10)

Proof

By a procedure similar to that of Corollary 4.4, we can prove all the statements except (4.10).

It follows from (4.5) and (4.9) that
$$\lim_{N\rightarrow\infty} \bigl\vert \nabla_{x}v^{N}(x)- \nabla_{x}v(x)\bigr\vert =0,\qquad \lim_{x^{1}\rightarrow x} \bigl\vert \nabla_{x}v(x)-\nabla_{x}v \bigl(x^{1}\bigr)\bigr\vert =0. $$
Consequently,
$$\begin{aligned}& \lim_{N\rightarrow\infty}\bigl\vert \overline{\nabla _{0}v\bigl(X_{s}(x)\bigr)}G\bigl(X_{s}(x) \bigr)-\overline{\nabla_{0}v^{N}\bigl(X^{N}_{s}(x) \bigr)}G^{N}\bigl(X^{N}_{s}(x)\bigr)\bigr\vert \\& \quad \leq \lim_{N\rightarrow\infty }c\bigl(1+\vert x\vert _{C}^{m^{2}}\bigr)\bigl\vert G\bigl(X_{s}(x) \bigr)-G^{N}\bigl(X^{N}_{s}(x)\bigr)\bigr\vert \\& \qquad {} +\lim_{N\rightarrow\infty} \bigl\vert G\bigl(X_{s}(x) \bigr)\bigr\vert \bigl\vert \nabla_{x}v\bigl(X_{s}(x) \bigr)-\nabla_{x}v^{N}\bigl(X^{N}_{s}(x) \bigr)\bigr\vert \\& \quad = 0,\quad P\mbox{-a.s.} \end{aligned}$$
(4.11)
Since
$$\lim_{N\rightarrow\infty}\bigl\vert Z(\cdot,x)-Z^{N}(\cdot,x) \bigr\vert _{L^{p}_{\mathcal {P}}(\Omega;L^{2}_{\beta}(L_{2}(\Xi,R)))}=0, $$
we see that, for every \(T>0\), there exists a subsequence \(\{N_{k}\}\) such that
$$\lim_{k\rightarrow\infty}Z^{N_{k}}(s,x)=Z(s,x),\quad P\mbox{-a.s. for a.e. } s\in[0,T]. $$
By (4.6) and (4.11) we deduce that
$$\begin{aligned}& \overline{\nabla_{0}v\bigl(X_{s}(x)\bigr)}G \bigl(X_{s}(x)\bigr) \\& \quad = \lim_{k\rightarrow\infty}\overline{\nabla _{0}v^{N_{k}} \bigl(X^{N_{k}}_{s}(x)\bigr)}G^{N_{k}} \bigl(X^{N_{k}}_{s}(x)\bigr) \\& \quad = \lim_{k\rightarrow\infty}Z^{N_{k}}(s,x)=Z(s,x),\quad P \mbox{-a.s. for a.e. } s\in[0,T]. \end{aligned}$$
By the arbitrariness of T, we have
$$ Z(s,x)=\overline{\nabla_{0}v\bigl(X_{s}(x)\bigr)}G \bigl(X_{s}(x)\bigr),\quad P\mbox{-a.s. for a.e. } s\geq0. $$
The proof is finished. □

5 Mild solution of the Kolmogorov equation

Let \(X(\cdot,x)\) denote the unique solution of (3.3). We denote by \({\mathcal {B}}(\mathcal{C})\) the set of measurable functions \(\phi:{\mathcal{C}}\rightarrow R\) with polynomial growth. The transition semigroup \(P_{s}\) is defined for arbitrary \(\phi\in{\mathcal {B}}(\mathcal{C})\) by the formula
$$P_{s}[\phi](x)=E\phi\bigl(X_{s}(x)\bigr), \quad x\in{ \mathcal{C}}. $$
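For intuition, \(P_{s}[\phi](x)=E\phi(X_{s}(x))\) can be approximated by Monte Carlo simulation. The sketch below is purely illustrative and rests on assumptions not made in the paper: it takes \(H=\Xi=R\), \(A=-a\), \(F(x_{s})=b\,x_{s}(-\tau)\), a constant diffusion \(G\equiv g\), and a φ that depends only on the endpoint \(X(s)\). It combines an Euler-Maruyama scheme for the delay equation with sample averaging:

```python
import math
import random

def simulate_delay_sde(x_hist, a, b, g, tau, s, dt, rng):
    """One Euler-Maruyama path of dX = (-a X(t) + b X(t - tau)) dt + g dW,
    started from the initial segment x_hist on [-tau, 0]; returns X(s)."""
    d = int(round(tau / dt))                       # delay expressed in steps
    path = [x_hist(-tau + k * dt) for k in range(d + 1)]
    for _ in range(int(round(s / dt))):
        drift = -a * path[-1] + b * path[-1 - d]   # path[-1 - d] is X(t - tau)
        path.append(path[-1] + drift * dt + g * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return path[-1]

def transition_semigroup(phi, x_hist, a, b, g, tau, s, dt, n_paths=2000, seed=0):
    """Monte Carlo estimate of P_s[phi](x) = E phi(X_s(x))."""
    rng = random.Random(seed)
    total = sum(phi(simulate_delay_sde(x_hist, a, b, g, tau, s, dt, rng))
                for _ in range(n_paths))
    return total / n_paths
```

With g = 0 the scheme is deterministic, which gives a cheap sanity check of the estimator against the explicit Euler recursion.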
We study a generalization of the Kolmogorov equation of the following form:
$$ {\mathcal{L}}v(x)-\lambda v(x)= \psi\bigl(x,v(x),\overline{\nabla _{0}v(x)}G(x)\bigr), \quad x\in{\mathcal{C}}, x(0)\in D(A), $$
(5.1)
where
$${\mathcal{L}}[\phi](x)={\mathcal {S}}(\phi) (x)+\bigl\langle Ax(0)1_{0}+F(x)1_{0},\overline{\nabla_{x}\phi(x)} \bigr\rangle +\frac{1}{2}\sum^{\infty}_{i=1} \overline{\nabla_{x}^{2}\phi(x)} \bigl(G(x)e_{i}1_{0}, G(x)e_{i}1_{0}\bigr), $$
and \(\{e_{i}\}_{i=1}^{\infty}\) denotes an orthonormal basis of Ξ.

Definition 5.1

We say that a function \(v: {\mathcal{C}}\rightarrow R\), with \(v\in{\mathcal{G}}^{1}( {\mathcal{C}};R)\), is a mild solution of the nonlinear Kolmogorov equation (5.1) if there exist constants \(C>0\) and \(q\geq0\) such that
$$ \bigl\vert v(x)\bigr\vert \leq C\bigl(1+\vert x\vert \bigr)^{q}, \qquad \bigl\vert \nabla_{x}v(x)h\bigr\vert \leq C|h|\bigl(1+\vert x\vert \bigr)^{q}, \quad x,h\in{ \mathcal{C}}, $$
(5.2)
and the following equality holds true, for every \(x\in{\mathcal{C}}\) and \(T\geq0\):
$$ v(x)=\int^{T}_{0}e^{-\lambda s}P_{s} \bigl[\psi\bigl(\cdot,v(\cdot),\overline {\nabla_{0}v(\cdot)}G(\cdot) \bigr)\bigr](x)\,ds +e^{-\lambda T}P_{T}[v](x). $$
(5.3)
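It may help to note how (5.3) determines v. Assuming λ is large enough that \(e^{-\lambda T}P_{T}[|v|](x)\rightarrow0\) as \(T\rightarrow\infty\) (this follows from the polynomial growth of v together with the moment bounds on \(X_{s}(x)\), and is an assumption on λ of the same kind as in Theorem 4.2), letting \(T\rightarrow\infty\) in (5.3) yields the fixed-point form

```latex
\[
  v(x)=\int^{\infty}_{0}e^{-\lambda s}
  P_{s}\bigl[\psi\bigl(\cdot,v(\cdot),\overline{\nabla_{0}v(\cdot)}G(\cdot)\bigr)\bigr](x)\,ds ,
  \quad x\in{\mathcal{C}} .
\]
```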

Theorem 5.2

Assume that Hypotheses 3.1 and 4.1 hold, and let (4.7) hold true. Then there exists \(\hat{\lambda}\in R\) such that, for every \(\lambda>\hat{\lambda}\), the nonlinear stationary Kolmogorov equation (5.1) has a unique mild solution v. The function v coincides with the one introduced in Theorem  4.6.

Proof

(Existence) Let v be the function defined in Theorem 4.6. Then v has the regularity properties stated in Definition 5.1. It remains to prove that equality (5.3) holds true. By Theorem 4.6 we have
$$\begin{aligned}& P_{s}\bigl[\psi\bigl(\cdot,v(\cdot),\overline{\nabla_{0}v(\cdot )}G(\cdot)\bigr)\bigr](x) \\& \quad = E\bigl[\psi\bigl(X_{s}(x),v\bigl(X_{s}(x)\bigr), \overline{\nabla_{0}v\bigl(X_{s}(x)\bigr)}G \bigl(X_{s}(x)\bigr)\bigr)\bigr] \\& \quad = E\psi\bigl(X_{s}(x),Y(s,x),Z(s,x)\bigr). \end{aligned}$$
Thus
$$ \int^{T}_{0}e^{-\lambda s}P_{s}\bigl[\psi\bigl( \cdot,v(\cdot),\overline{\nabla_{0}v(\cdot )}G(\cdot)\bigr)\bigr](x) \,ds= E\int^{T}_{0}e^{-\lambda s}\psi\bigl(X_{s}(x),Y(s,x),Z(s,x) \bigr)\,ds. $$
(5.4)
Applying the Itô formula to the backward equation in (4.1) gives
$$\begin{aligned}& Y(0,x)-e^{-\lambda T}Y(T,x)+\int^{T}_{0}e^{-\lambda\sigma}Z( \sigma ,x)\,dW(\sigma) \\& \quad =\int^{T}_{0}e^{-\lambda\sigma} \psi\bigl(X_{\sigma}(x),Y(\sigma,x),Z(\sigma,x)\bigr)\,d\sigma. \end{aligned}$$
Taking the expectation and applying (5.4) we obtain the equality (5.3).
(Uniqueness) Let v be a mild solution of (5.1). By (5.3), for every \(x\in{\mathcal{C}}\) and \(0\leq s\leq T\), we have
$$v(x)=e^{-\lambda(T-s)}P_{T-s}[v](x)+\int^{T-s}_{0}e^{-\lambda\sigma }P_{\sigma}\bigl[\psi\bigl(\cdot,v(\cdot),\overline{\nabla_{0} v(\cdot )}G(\cdot) \bigr)\bigr](x)\,d\sigma. $$
By the Markov property of X, we obtain
$$\begin{aligned} v\bigl(X_{s}(x)\bigr) =&e^{-\lambda(T-s)}E\bigl[v \bigl(X_{T}(x)\bigr)|{{\mathcal{F}}_{s}}\bigr] \\ &{}+\int^{T-s}_{0}e^{-\lambda\sigma}E\bigl[\psi \bigl(X_{\sigma +s}(x),v\bigl(X_{\sigma+s}(x)\bigr), \overline{ \nabla_{0}v\bigl(X_{\sigma+s}(x)\bigr)}G\bigl(X_{\sigma+s}(x) \bigr)\bigr)|{{\mathcal {F}}_{s}}\bigr]\,d\sigma, \end{aligned}$$
then by a change of variable, we have
$$\begin{aligned} e^{-\lambda s}v\bigl(X_{s}(x)\bigr) =& e^{-\lambda T}E\bigl[v \bigl(X_{T}(x)\bigr)|{{\mathcal{F}}_{s}}\bigr] \\ &{}+\int^{T}_{s}e^{-\lambda \sigma}E\bigl[\psi \bigl(X_{\sigma}(x),v\bigl(X_{\sigma}(x)\bigr), \overline{ \nabla_{0}v\bigl(X_{\sigma}(x)\bigr)}G\bigl(X_{\sigma}(x) \bigr)\bigr)|{{\mathcal {F}}_{s}}\bigr]\,d\sigma \\ =& E[\xi|{\mathcal{F}}_{s}]-\int^{s}_{0}e^{-\lambda\sigma} \psi \bigl(X_{\sigma}(x),v\bigl(X_{\sigma}(x)\bigr), \overline{ \nabla_{0}v\bigl(X_{\sigma}(x)\bigr)}G\bigl(X_{\sigma}(x) \bigr)\bigr)\,d\sigma, \end{aligned}$$
where
$$\xi=e^{-\lambda T}v\bigl(X_{T}(x)\bigr)+\int^{T}_{0}e^{-\lambda\sigma} \psi \bigl(X_{\sigma}(x),v\bigl(X_{\sigma}(x)\bigr), \overline{ \nabla_{0}v\bigl(X_{\sigma}(x)\bigr)}G\bigl(X_{\sigma}(x) \bigr)\bigr)\,d\sigma. $$
Now let \(T>0\) be fixed. By the representation theorem (see Proposition 4.1 in [22]), there exists \(\widetilde{Z}\in L^{2}_{\mathcal {P}}(\Omega\times[0,T],\Xi)\) such that \(E[\xi|{\mathcal{F}}_{s}]=v(x)+\int^{s}_{0}{\widetilde{Z}}(\sigma )\,dW(\sigma)\), \(s\in[0,T]\). Therefore, by the Itô formula, we find that
$$\begin{aligned} v\bigl(X_{s}(x)\bigr) =&v(x)+\int ^{s}_{0}e^{\lambda\sigma}{\widetilde{Z}}(\sigma ) \,dW(\sigma)+\lambda\int^{s}_{0}v \bigl(X_{\sigma}(x)\bigr)\,d\sigma \\ &{}- \int^{s}_{0}\psi\bigl(X_{\sigma}(x),v \bigl(X_{\sigma}(x)\bigr), \overline{\nabla_{0}v \bigl(X_{\sigma}(x)\bigr)}G\bigl(X_{\sigma}(x)\bigr)\bigr)\,d\sigma. \end{aligned}$$
(5.5)
By Theorem 3.3 and Theorem 4.6 we have \(\langle v(X_{\cdot}(x)), We_{i}\rangle_{[0,T']}=\int^{T'}_{0}\overline{\nabla_{0}v(X_{\sigma}(x))}G(X_{\sigma}(x)) e_{i}\, d\sigma\) for every \(T'\in[0,T)\). Hence, \(e^{\lambda\sigma}{\widetilde{Z}}(\sigma)=\overline{\nabla _{0}v(X_{\sigma}(x))}G(X_{\sigma}(x))\), P-a.s., and equality (5.5) can be rewritten as
$$\begin{aligned} v\bigl(X_{s}(x)\bigr) =&v(x)+\int^{s}_{0} \overline{\nabla_{0}v\bigl(X_{\sigma}(x)\bigr)}G \bigl(X_{\sigma}(x)\bigr)\,dW(\sigma)+\lambda\int^{s}_{0}v \bigl(X_{\sigma}(x)\bigr)\,d\sigma \\ &{}- \int^{s}_{0}\psi\bigl(X_{\sigma}(x),v \bigl(X_{\sigma}(x)\bigr), \overline{\nabla_{0}v \bigl(X_{\sigma}(x)\bigr)}G\bigl(X_{\sigma}(x)\bigr)\bigr)\,d\sigma. \end{aligned}$$
By the arbitrariness of T, we see that the pairs \((Y(s,x),Z(s,x))\) and \((v(X_{s}(x)), \overline{\nabla_{0}v(X_{s}(x))}G(X_{s}(x)))\), \(s\geq0\), solve the same backward stochastic differential equation in (4.1). By uniqueness, we have \(Y(s,x)=v(X_{s}(x))\), \(s\geq0\). Setting \(s=0\), we obtain \(Y(0,x)=v(x)\). The proof is finished. □

6 Application to optimal control

In this section we study the controlled state equation:
$$ \textstyle\begin{cases} dX^{u}(s)=AX^{u}(s)\,ds + F(X^{u}_{s})\,ds +G(X^{u}_{s})R(X^{u}_{s},u(s))\,ds+G(X^{u}_{s})\,dW(s), \quad s\geq0, \\ X^{u}_{0}=x. \end{cases} $$
(6.1)
The solution of the above equation will be denoted by \(X^{u}(s,x)\) or simply by \(X^{u}(s)\). Our aim is to minimize the cost functional
$$ J(u)=E\int_{0}^{+\infty}e^{-\lambda\sigma}g \bigl(X^{u}_{\sigma},u(\sigma )\bigr)\,d\sigma, $$
(6.2)
over all admissible control systems.

We formulate the optimal control problem in the weak sense, following the approach of [30]. By an admissible control system we mean \((\Omega,{\mathcal {F}},\{{{\mathcal{F}}}_{t}\}_{t\geq0},P,W,u,X^{u})\), where \((\Omega,{\mathcal{F}},\{{{\mathcal {F}}}_{t}\}_{t\geq0},P)\) is a filtered probability space satisfying the usual conditions; W is a cylindrical P-Wiener process with values in Ξ, adapted to the filtration \(\{{{\mathcal{F}}}_{t}\}_{t\geq0}\); u is an \({{\mathcal{F}}}_{t}\)-predictable process with values in U; and \(X^{u}\) is a mild solution of (6.1). An admissible control system will be denoted briefly by \((W,u,X^{u})\) in what follows.

We define in a classical way the Hamiltonian function relative to the optimal control problem: for every \(x\in\mathcal{C}\), \(z\in \Xi\),
$$ \psi(x,z)=\inf\bigl\{ g(x,u)+zR(x,u): u\in U\bigr\} , $$
(6.3)
and the corresponding, possibly empty, set of minimizers
$$\Gamma(x,z)=\bigl\{ u\in U: g(x,u)+zR(x,u)=\psi(x,z)\bigr\} . $$
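When the control set U is finite, the Hamiltonian (6.3) and a selection \(\Gamma_{0}\) can be computed by direct enumeration. The following sketch is only an illustration (the finite U and the concrete g, R in the usage below are our assumptions; in the paper U is a general measurable space):

```python
def hamiltonian(g, R, U, x, z):
    """psi(x, z) = inf_{u in U} [ g(x, u) + z * R(x, u) ] over a finite set U."""
    return min(g(x, u) + z * R(x, u) for u in U)

def gamma0(g, R, U, x, z):
    """A selection Gamma_0(x, z): one u attaining the infimum in psi(x, z)."""
    return min(U, key=lambda u: g(x, u) + z * R(x, u))
```

For example, with \(U=\{-1,0,1\}\), \(g(x,u)=u^{2}\), \(R(x,u)=u\), and \(z=-3\), the candidate costs are 4, 0, and -2, so \(\psi(x,-3)=-2\) with minimizer \(u=1\).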

We are now ready to formulate the assumptions we need.

Hypothesis 6.1

  1. (i)

    A, F, and G verify Hypothesis 3.1, moreover, G satisfies (4.7).

     
  2. (ii)

    \((U,\mathcal{U})\) is a measurable space. The map \(g: {\mathcal {C}\times} U\rightarrow R\) is continuous and satisfies \(|g(x,u)|\leq K_{g}(1+|x|_{C}^{m_{g}})\) for suitable constants \(K_{g}>0\), \(m_{g}>0\), and every \(x\in \mathcal{C}\), \(u\in U\). The map \(R: {\mathcal{C}\times} U\rightarrow\Xi\) is measurable and \(|R(x,u)|\leq L_{R}\) for a suitable constant \(L_{R}>0\) and every \(x\in \mathcal{C}\), \(u\in U\).

     
  3. (iii)

    The Hamiltonian ψ defined in (6.3) satisfies the requirements of Hypothesis 4.1 (with \(K=R\)).

     
  4. (iv)

    We fix \(p>2\), q, and \(\beta<0\) satisfying (4.2), and such that \(q>m_{g}\).

     

We are in a position to prove the main result of this section.

Theorem 6.2

We assume that Hypothesis 6.1 holds true and λ verifies
$$ \lambda> \biggl(-\delta-\mu+\frac{L_{R}^{2}}{2}\biggr)\vee\biggl(-\delta + \frac{L^{2}_{R}}{2(p-1)}\biggr) \vee\biggl(\frac{L^{2}_{R}m_{g}}{2(q-m_{g})}-\eta(q) m_{g} \biggr), $$
(6.4)
and suppose that the set-valued map Γ has non-empty values and admits a measurable selection \(\Gamma_{0}: {\mathcal{C}}\times\Xi\rightarrow U\). Let v denote the function in the statement of Theorem  4.6. Then for every admissible control system we have \(J(u)\geq v(x)\), and equality holds if and only if
$$ u(s)=\Gamma_{0}\bigl(X_{s}^{u},\overline{ \nabla_{0}v\bigl(X^{u}_{s}\bigr)}G\bigl(X^{u}_{s}\bigr)\bigr), \quad P \textit{-a.s. for almost every } s\geq0. $$
(6.5)
Moreover, the closed loop equation
$$ \textstyle\begin{cases} dX(s)=AX(s)\,ds + F(X_{s})\,ds \\ \hphantom{dX(s)={}}{}+G(X_{s})(R(X_{s},\Gamma_{0}(X_{s},\overline{\nabla _{0}v(X_{s})}G(X_{s})))\,ds+\,dW(s)), \quad s\geq0, \\ X_{0}=x, \end{cases} $$
(6.6)
admits a weak solution \((\Omega,{\mathcal{F}},\{{{\mathcal {F}}}_{t}\}_{t\geq0},P,W,X)\) which is unique in law and setting
$$ u(s)=\Gamma_{0}\bigl(X_{s},\overline{\nabla_{0}v(X_{s})}G(X_{s}) \bigr), $$
we get an optimal admissible control system \((W,u,X)\).

Proof

We consider (6.1) in the probability space \((\Omega,{\mathcal{F}}, P)\) with filtration \(\{{\mathcal{F}}_{t}\}_{t\geq0}\) and with an \(\{{\mathcal{F}}_{t}\}_{t\geq0}\)-cylindrical Wiener process \(\{W(t), t\geq0\}\). Let us define
$$ W^{u}(s)=W(s)+\int^{s}_{0}R\bigl( X_{\sigma}^{u},u(\sigma)\bigr)\,d\sigma, \quad s\in[0, \infty) $$
and
$$ \rho(T)=\exp\biggl(\int^{T}_{0}-R^{\ast}\bigl(X^{u}_{s},u(s)\bigr)\,dW(s)-\frac {1}{2}\int ^{T}_{0}\bigl\vert R\bigl(X^{u}_{s},u(s) \bigr)\bigr\vert ^{2}\,ds\biggr). $$
Let \(P^{u}\) be the unique probability on \({\mathcal {F}}_{[0,\infty)}\) such that
$$P^{u}|_{{\mathcal{F}}_{T}}=\rho(T)P|_{{\mathcal {F}}_{T}}. $$
We note that under \(P^{u}\), the process \(W^{u}\) is a Wiener process. Let us denote by \(\{{\mathcal {F}}^{u}_{t}\}_{t\geq0}\) the filtration generated by \(W^{u}\) and completed in the usual way. Relative to \(W^{u}\), (6.1) can be rewritten as
$$ \textstyle\begin{cases} dX^{u}(s)= AX^{u}(s)\,ds + F(X^{u}_{s})\,ds +G(X^{u}_{s})\,dW^{u}(s),\quad s\in[0,\infty), \\ X^{u}_{0}=x. \end{cases} $$
In the space \((\Omega,{\mathcal{F}}_{[0,\infty)},\{{\mathcal {F}}^{u}_{t}\}_{t\geq0},P^{u})\), we consider the system of forward-backward equations
$$ \textstyle\begin{cases} X^{u}(s)=e^{sA}x(0)+\int_{0}^{s}{e^{(s-\sigma )A}}F(X^{u}_{\sigma})\,d\sigma +\int_{0}^{s}{e^{(s-\sigma)A}}G(X^{u}_{\sigma})\,dW^{u}(\sigma), \quad s\in[0,\infty), \\ X^{u}_{0}=x\in{\mathcal{C}}, \\ Y^{u}(s)-Y^{u}(T)+\int^{T}_{s}Z^{u}(\sigma)\,dW^{u}(\sigma)+\lambda\int^{T}_{s}Y^{u}(\sigma)\,d\sigma \\ \quad =\int^{T}_{s}\psi(X^{u}_{\sigma},Z^{u}(\sigma ))\,d\sigma, \quad 0\leq s\leq T. \end{cases} $$
(6.7)
Applying the Itô formula to \(e^{-\lambda s}Y^{u}(s)\) and writing the backward equation in (6.7) with respect to the process W we get
$$\begin{aligned}& Y^{u}(0)+\int^{T}_{0}e^{-\lambda\sigma}Z^{u}( \sigma)\,dW(\sigma) \\& \quad =\int^{T}_{0}e^{-\lambda\sigma} \bigl[\psi\bigl(X^{u}_{\sigma},Z^{u}(\sigma)\bigr)- Z^{u}(\sigma)R\bigl(X^{u}_{\sigma},u(\sigma)\bigr) \bigr]\,d\sigma+e^{-\lambda T}Y^{u}(T). \end{aligned}$$
(6.8)
Recalling that R is bounded, we have, for all \(r\geq1\) and some constant C,
$$\begin{aligned} E^{u}\bigl[\rho(T)^{-r}\bigr] =&E^{u}\biggl[\exp r\biggl(\int^{T}_{0}R^{\ast}\bigl(X^{u}_{s},u(s)\bigr)\,dW^{u}(s)- \frac{1}{2}\int^{T}_{0}\bigl\vert R \bigl(X^{u}_{s},u(s)\bigr)\bigr\vert ^{2}\,ds \biggr)\biggr] \\ =&E^{u}\biggl[\exp \biggl(\int^{T}_{0}rR^{\ast}\bigl(X^{u}_{s},u(s)\bigr)\,dW^{u}(s)- \frac{1}{2}\int^{T}_{0}r^{2} \bigl\vert R\bigl(X^{u}_{s},u(s)\bigr)\bigr\vert ^{2}\,ds\biggr) \\ & {}\times \exp\frac{r(r-1)}{2}\int^{T}_{0} \bigl\vert R\bigl(X^{u}_{s},u(s)\bigr)\bigr\vert ^{2}\,ds\biggr] \\ \leq&e^{\frac{1}{2}r(r-1)TL_{R}^{2}}E^{u}\exp \biggl(\int^{T}_{0}rR^{\ast}\bigl(X^{u}_{s},u(s)\bigr)\,dW^{u}(s)- \frac{1}{2}\int^{T}_{0}r^{2}\bigl\vert R \bigl(X^{u}_{s},u(s)\bigr)\bigr\vert ^{2}\,ds \biggr) \\ =&e^{\frac{1}{2}r(r-1)TL_{R}^{2}}. \end{aligned}$$
It follows that
$$\begin{aligned} \begin{aligned} E\biggl(\int^{T}_{0}\bigl\vert e^{-\lambda s}Z^{u}(s)\bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}}&= E^{u}\biggl[\biggl(\int^{T}_{0} \bigl\vert e^{-\lambda s}Z^{u}(s)\bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}}\rho(T)^{-1}\biggr] \\ &\leq \biggl(E^{u}\int^{T}_{0}\bigl\vert e^{-\lambda s}Z^{u}(s)\bigr\vert ^{2}\,ds \biggr)^{\frac {1}{2}}\bigl(E^{u}\rho(T)^{-2} \bigr)^{\frac{1}{2}} \\ &< \infty. \end{aligned} \end{aligned}$$
We see that the stochastic integral in (6.8) has zero expectation. If we take the expectation with respect to P in (6.8), we obtain
$$ e^{-\lambda T}EY^{u}(T)-Y^{u}(0)=E\int ^{T}_{0}e^{-\lambda\sigma}\bigl[-\psi \bigl(X^{u}_{\sigma},Z^{u}(\sigma)\bigr)+ Z^{u}(\sigma)R\bigl(X^{u}_{\sigma},u(\sigma)\bigr) \bigr]\,d\sigma. $$
By Theorem 4.2, \(Y^{u}(\cdot,x)\in L^{p}_{\mathcal {P}}(\Omega;C_{\beta}(R))\), so that
$$E^{u}\bigl\vert Y^{u}(T,x)\bigr\vert ^{p}\leq C\exp(-p\beta T). $$
By the Hölder inequality we see that, for a suitable constant \(C>0\),
$$\begin{aligned} E\bigl\vert Y^{u}(T,x)\bigr\vert =&E^{u}\bigl( \rho^{-1}(T)\bigl\vert Y^{u}(T,x)\bigr\vert \bigr) \\ \leq& \bigl[E^{u}\bigl(\rho^{-\frac{p}{p-1}}\bigr)\bigr]^{\frac{p-1}{p}} \bigl[E^{u}\bigl(\bigl\vert Y^{u}(T,x)\bigr\vert ^{p}\bigr)\bigr]^{\frac{1}{p}} \\ \leq& Ce^{(\frac{L^{2}_{R}}{2(p-1)}-\beta)T}. \end{aligned}$$
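The exponent in the last display can be checked directly: taking \(r=\frac{p}{p-1}\) in the moment bound \(E^{u}[\rho(T)^{-r}]\leq e^{\frac{1}{2}r(r-1)TL_{R}^{2}}\) obtained above gives

```latex
\[
  \bigl[E^{u}\bigl(\rho^{-\frac{p}{p-1}}\bigr)\bigr]^{\frac{p-1}{p}}
  \le \exp\Bigl(\tfrac{p-1}{p}\cdot\tfrac{1}{2}\cdot\tfrac{p}{(p-1)^{2}}\,L_{R}^{2}T\Bigr)
  = e^{\frac{L_{R}^{2}}{2(p-1)}T},
\]
```

which, combined with \([E^{u}|Y^{u}(T,x)|^{p}]^{1/p}\leq Ce^{-\beta T}\), yields the rate \(Ce^{(\frac{L^{2}_{R}}{2(p-1)}-\beta)T}\).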
By Theorem 3.4 we obtain \(E^{u}\sup_{s\geq 0}e^{\eta(q) qs}|X^{u}_{s}|^{q}<\infty\). By a similar argument, we find that
$$E\bigl\vert X^{u}_{T}\bigr\vert ^{m_{g}}\leq Ce^{(L_{R}^{2}m_{g}(2q-2m_{g})^{-1}-\eta(q) m_{g})T} $$
for a suitable constant \(C>0\), and
$$E\int^{\infty}_{0}e^{-\lambda \sigma}\bigl\vert g \bigl(X^{u}_{\sigma},u(\sigma)\bigr)\bigr\vert \,d\sigma< \infty. $$
Since \(Y^{u}(0,x)=v(x)\) and \(Z^{u}(s,x)=\overline{\nabla _{0}v(X_{s}^{u}(x))}G(X^{u}_{s}(x))\), P-a.s. for a.e. \(s\in[0,\infty)\), we have
$$\begin{aligned} e^{-\lambda T}EY^{u}(T)-v(x) =& E\int^{T}_{0}e^{-\lambda\sigma}\bigl[- \psi\bigl(X^{u}_{\sigma},\overline {\nabla_{0}v \bigl(X_{\sigma}^{u}(x)\bigr)}G\bigl(X^{u}_{\sigma}(x) \bigr)\bigr) \\ & {}+ \overline{\nabla_{0}v\bigl(X_{\sigma}^{u}(x) \bigr)}G\bigl(X^{u}_{\sigma}(x)\bigr)R\bigl(X^{u}_{\sigma},u(\sigma)\bigr)\bigr]\,d\sigma. \end{aligned}$$
Thus adding and subtracting \(E\int^{\infty}_{0}e^{-\lambda \sigma}g(X^{u}_{\sigma},u(\sigma))\,d\sigma\) and letting \(T\rightarrow\infty\), we conclude that
$$\begin{aligned} J(u) =&v(x)+E\int^{\infty}_{0}e^{-\lambda\sigma} \bigl[-\psi \bigl(X^{u}_{\sigma},\overline{\nabla_{0}v \bigl(X_{\sigma}^{u}(x)\bigr)}G\bigl(X^{u}_{\sigma}(x)\bigr)\bigr) \\ &{}+\overline{\nabla_{0}v\bigl(X_{\sigma}^{u}(x) \bigr)}G\bigl(X^{u}_{\sigma}(x)\bigr)R\bigl(X^{u}_{\sigma},u(\sigma)\bigr)+g\bigl(X_{\sigma}^{u},u(\sigma)\bigr)\bigr]\,d \sigma. \end{aligned}$$
This implies that \(J(u)\geq v(x)\), and that equality holds if and only if (6.5) holds true.
(Uniqueness) Let X be a weak solution of (6.6) in an admissible set-up \((\Omega,{\mathcal{F}},\{{\mathcal {F}}_{t}\}_{t\geq0}, P,W)\). We define
$$\begin{aligned} \rho(T) =&\exp\biggl(\int^{T}_{0}-R^{\ast}\bigl(X_{\sigma},\Gamma _{0}\bigl(X_{\sigma},\overline{ \nabla_{0}v(X_{\sigma})}G(X_{\sigma})\bigr)\bigr)\,dW( \sigma) \\ &{}- \frac{1}{2}\int^{T}_{0}\bigl\vert R\bigl(X_{\sigma},\Gamma_{0}\bigl(X_{\sigma}, \overline {\nabla_{0}v(X_{\sigma})}G(X_{\sigma})\bigr) \bigr)\bigr\vert ^{2}\,d\sigma\biggr). \end{aligned}$$
Since R is bounded, the Girsanov theorem ensures that there exists a probability measure \(P^{0}\) such that the process
$$ W^{0}(s)=W(s)+\int^{s}_{0}R \bigl(X_{\sigma},\Gamma_{0}\bigl(X_{\sigma},\overline { \nabla_{0}v(X_{\sigma})}G(X_{\sigma})\bigr)\bigr)\,d \sigma, \quad s\in[0,\infty), $$
is a \(P^{0}\)-Wiener process and
$$P^{0}|_{{\mathcal{F}}_{T}}=\rho(T)P|_{{\mathcal {F}}_{T}}. $$
We denote by \(\{{\mathcal {F}}^{0}_{t}\}_{t\geq0}\) the filtration generated by \(W^{0}\) and completed in the usual way. In \((\Omega,{\mathcal{F}}_{[0,\infty)},\{{\mathcal{F}}^{0}_{t}\} _{t\geq0},P^{0})\), X is a mild solution of
$$\textstyle\begin{cases} dX(s)= AX(s)\,ds + F(X_{s})\,ds +G(X_{s})\,dW^{0}(s), \quad s\in [0,\infty), \\ X_{0}=x \end{cases} $$
and
$$\begin{aligned} \rho(T) =&\exp\biggl(\int^{T}_{0}- R^{\ast}\bigl(X_{\sigma},\Gamma_{0} \bigl(X_{\sigma},\overline{\nabla_{0}v(X_{\sigma})}G(X_{\sigma})\bigr)\bigr)\,dW^{0}(\sigma) \\ &{}+ \frac{1}{2}\int^{T}_{0}\bigl\vert R\bigl(X_{\sigma},\Gamma_{0}\bigl(X_{\sigma}, \overline {\nabla_{0}v(X_{\sigma})}G(X_{\sigma})\bigr) \bigr)\bigr\vert ^{2}\,d\sigma\biggr). \end{aligned}$$
Note that the joint law of X and \(W^{0}\) is uniquely determined by A, F, G, and x. Taking the above formula into account, we conclude that the joint law of X and \(\rho(T)\) under \(P^{0}\) is also uniquely determined, and consequently so is the law of X under P. This completes the proof of the uniqueness part.
(Existence) Let \((\Omega,{\mathcal{F}}, P)\) be a given complete probability space, let \(\{W(t),t\geq0\}\) be a cylindrical Wiener process on \((\Omega,{\mathcal{F}}, P)\) with values in Ξ, and let \(\{{\mathcal{F}}_{t}\}_{t\geq0}\) be the natural filtration of \(\{W(t), t\geq0\}\), augmented with the family of P-null sets. Let \(X(\cdot)\) be the mild solution of
$$ \textstyle\begin{cases} dX(s)= AX(s)\,ds +F(X_{s})\,ds+G(X_{s})\,dW(s), \quad s\in[0,\infty), \\ X_{0}=x, \end{cases} $$
(6.9)
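Solutions of delay equations of the form (6.9) can be approximated by a standard Euler-Maruyama scheme once the initial segment on \([-\tau,0]\) is stored. The sketch below is a purely illustrative scalar analogue: \(H=\mathbb{R}\), the segment map \(F(X_{s})\) is reduced to a point delay \(c\,X(s-\tau)\), and G is a constant σ; the coefficients `a`, `c`, `sigma` are hypothetical and do not come from the paper.

```python
import numpy as np

# Minimal scalar sketch of simulating a delay SDE of the form (6.9),
#   dX(s) = A X(s) ds + F(X_s) ds + G(X_s) dW(s),  X_0 = x,
# by Euler-Maruyama.  Illustrative assumptions: the drift F(X_s) is a
# point delay c * X(s - tau) and G is the constant sigma.

rng = np.random.default_rng(1)

a, c, sigma = -1.0, 0.5, 0.2   # hypothetical coefficients
tau, T, dt = 1.0, 5.0, 0.01
n_delay = int(tau / dt)
n_steps = int(T / dt)

# Initial segment x on [-tau, 0]: here the constant function 1.
X = np.empty(n_delay + n_steps + 1)
X[: n_delay + 1] = 1.0

for k in range(n_steps):
    i = n_delay + k                        # index of time s = k*dt
    drift = a * X[i] + c * X[i - n_delay]  # A X(s) + "F(X_s)"
    X[i + 1] = X[i] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

print(X[-1])  # approximate state at time T
```

The only structural difference from a non-delay Euler scheme is that the path history over the last \(\tau/\Delta t\) steps must be retained so the delayed term can be read off; in the paper's setting this history is the segment \(X_{s}\in{\mathcal{C}}\).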
and, by the Girsanov theorem, let \(P^{1}\) be the probability measure on Ω under which
$$ W^{1}(s)=W(s)-\int^{s}_{0}R \bigl(X_{\sigma},\Gamma_{0}\bigl(X_{\sigma},\overline { \nabla_{0}v(X_{\sigma})}G(X_{\sigma})\bigr)\bigr)\,d\sigma $$
is a Wiener process (note that R is bounded). Then X is a weak solution of (6.6) relative to the probability measure \(P^{1}\) and the Wiener process \(W^{1}\). The proof is complete. □

Declarations

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (Grant No. 11401474), Shaanxi Natural Science Foundation (Grant No. 2014JQ1035), and the Fundamental Research Funds for the Central Universities (Grant No. 2452015087).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Science, Northwest A&F University

References

1. Pardoux, E, Peng, S: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55-61 (1990)
2. Briand, P, Hu, Y: Stability of BSDEs with random terminal time and homogenization of semilinear elliptic PDEs. J. Funct. Anal. 155, 455-494 (1998)
3. El Karoui, N, Mazliak, L (eds.): Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series, vol. 364. Longman, Harlow (1997)
4. Pardoux, E: BSDEs, weak convergence and homogenization of semilinear PDEs. In: Clarke, FH, Stern, RJ (eds.) Nonlinear Analysis, Differential Equations and Control, pp. 503-549. Kluwer Academic, Dordrecht (1999)
5. Barbu, V, Da Prato, G: Hamilton-Jacobi Equations in Hilbert Spaces. Pitman Research Notes in Mathematics, vol. 86. Pitman, London (1983)
6. Cannarsa, P, Da Prato, G: Second-order Hamilton-Jacobi equations in infinite dimensions. SIAM J. Control Optim. 29(2), 474-492 (1991)
7. Cannarsa, P, Da Prato, G: Direct solution of a second-order Hamilton-Jacobi equation in Hilbert spaces. In: Da Prato, G, Tubaro, L (eds.) Stochastic Partial Differential Equations and Applications. Pitman Research Notes in Mathematics, vol. 268, pp. 72-85. Pitman, London (1992)
8. Gozzi, F: Regularity of solutions of second order Hamilton-Jacobi equations and application to a control problem. Commun. Partial Differ. Equ. 20, 775-826 (1995)
9. Gozzi, F: Global regular solutions of second order Hamilton-Jacobi equations in Hilbert spaces with locally Lipschitz nonlinearities. J. Math. Anal. Appl. 198, 399-443 (1996)
10. Elsanousi, I: Stochastic control for system with memory. Dr. Scient. thesis, University of Oslo (2000)
11. Lions, PL: Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. Part I. The case of bounded stochastic evolutions. Acta Math. 161(3-4), 243-278 (1988)
12. Lions, PL: Viscosity solutions of fully nonlinear second order equations and optimal stochastic control in infinite dimensions. Part II. Optimal control of Zakai's equation. In: Da Prato, G, Tubaro, L (eds.) Stochastic Partial Differential Equations and Applications II. Lecture Notes in Mathematics, vol. 1390, pp. 147-170. Springer, Berlin (1989)
13. Lions, PL: Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. III. Uniqueness of viscosity solutions for general second-order equations. J. Funct. Anal. 86(1), 1-18 (1989)
14. Øksendal, B, Sulem, A: A maximum principle for optimal control of stochastic systems with delay, with applications to finance. In: Menaldi, JM, Rofman, E, Sulem, A (eds.) Optimal Control and Partial Differential Equations - Innovations and Applications, pp. 1-16. IOS Press, Amsterdam (2000)
15. Świȩch, A: 'Unbounded' second order partial differential equations in infinite dimensional Hilbert spaces. Commun. Partial Differ. Equ. 19(11-12), 1999-2036 (1994)
16. Chang, MH, Pang, T, Pemy, M: Optimal control of stochastic functional differential equations with a bounded memory. Preprint (2006)
17. Chang, MH, Pang, T, Pemy, M: Stochastic optimal control problems with a bounded memory. In: Zhang, X, Liu, D, Wu, L (eds.) Operations Research and Its Applications. Papers from the Sixth International Symposium, ISORA '06, 8-12 August, Xinjiang, China, pp. 82-94. World Publishing Corporation, Beijing (2006)
18. El Karoui, N, Peng, S, Quenez, MC: Backward stochastic differential equations in finance. Math. Finance 7(1), 1-71 (1997)
19. Peng, S: A generalized dynamic programming principle and Hamilton-Jacobi-Bellman equation. Stoch. Stoch. Rep. 38, 119-134 (1992)
20. Fuhrman, M, Hu, Y, Tessitore, G: Ergodic BSDEs and optimal ergodic control in Banach spaces. SIAM J. Control Optim. 48(3), 1542-1566 (2009)
21. Fuhrman, M, Masiero, F, Tessitore, G: Stochastic equations with delay: optimal control via BSDEs and regular solutions of Hamilton-Jacobi-Bellman equations. SIAM J. Control Optim. 48(7), 4624-4651 (2010)
22. Fuhrman, M, Tessitore, G: Nonlinear Kolmogorov equations in infinite dimensional spaces: the backward stochastic differential equations approach and applications to optimal control. Ann. Probab. 30(3), 1397-1465 (2002)
23. Fuhrman, M, Tessitore, G: Infinite horizon backward stochastic differential equations and elliptic equations in Hilbert spaces. Ann. Probab. 32, 607-660 (2004)
24. Masiero, F: Infinite horizon stochastic optimal control problems with degenerate noise and elliptic equations in Hilbert spaces. Appl. Math. Optim. 55, 285-326 (2007)
25. Zhou, J, Liu, B: Optimal control problem for stochastic evolution equations in Hilbert spaces. Int. J. Control 83(9), 1771-1784 (2010)
26. Zhou, J, Liu, B: The existence and uniqueness of the solution for nonlinear Kolmogorov equations. J. Differ. Equ. 253, 2873-2915 (2012)
27. Zhou, J, Zhang, Z: Optimal control problems for stochastic delay evolution equations in Banach spaces. Int. J. Control 84(8), 1295-1309 (2011)
28. Da Prato, G, Zabczyk, J: Ergodicity for Infinite-Dimensional Systems. Cambridge University Press, Cambridge (1996)
29. Zabczyk, J: Parabolic equations on Hilbert spaces. In: Stochastic PDEs and Kolmogorov Equations in Infinite Dimensions. Lecture Notes in Math., vol. 1715, pp. 117-213. Springer, Berlin (1999)
30. Fleming, WH, Soner, HM: Controlled Markov Processes and Viscosity Solutions. Applications of Mathematics, vol. 25. Springer, New York (1993)

Copyright

© Zhou 2015