We present our main result in this part, namely a maximum principle for the optimal control of a stochastic time-delayed system with a terminal state constraint. It should be pointed out that the time-delayed state makes the analysis of the controlled system different from the case without delay.
3.1 Problem formulation
Let
$$\mathcal{U}_{\mathrm{ad}} \equiv \bigl\{ u(\cdot)\vert u(\cdot)\in L_{\mathbb{F}}^{2} \bigl(0,T; \mathbb{R}^{n\times d} \bigr) \bigr\} $$
be the set of admissible controls. For every given \(u(\cdot)\), we consider the past-dependent state \(X(\cdot)\) of the control system, described by
$$ \textstyle\begin{cases} dX(t) = b(t,X(t),X(t-\delta),u(t))\,dt + \sigma(t,X(t),X(t-\delta),u(t))\,dW(t), & 0\leq t\leq T; \\ X(t) = \eta(t), & -\delta\leq t\leq0, \end{cases} $$
(3.1)
where η is a given continuous function, \(b:[0,T]\times \mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d} \rightarrow\mathbb{R}^{n}\) and \(\sigma:[0,T]\times\mathbb{R}^{n} \times\mathbb{R}^{n}\times\mathbb{R}^{n\times d}\rightarrow \mathbb{R}^{n\times d}\) are given measurable functions. Define the cost function as follows:
$$J \bigl(u(\cdot) \bigr)=E \biggl[ \int_{0}^{T} \widetilde{l} \bigl(t,X(t),u(t) \bigr) \,dt + \phi \bigl(X(T) \bigr) \biggr] , $$
where \(\widetilde{l}:[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n \times d}\rightarrow\mathbb{R}\) and \(\phi:\mathbb {R}^{n}\rightarrow \mathbb{R}\) are given measurable functions. We impose the following assumptions:
 (H3.1):

The functions b, σ, l̃, ϕ are all continuously differentiable in the arguments \((x,x',u)\), and their derivatives are all bounded.
 (H3.2):

The derivatives of l̃ in its arguments \((x,u)\) are bounded by \(C(1 + \vert x\vert + \vert u\vert )\), and the derivative of ϕ in its argument x is bounded by \(C(1 + \vert x\vert )\).
Then, for every given \(u(\cdot)\in\mathcal{U}_{\mathrm{ad}}\), under assumptions (H3.1) and (H3.2), Eq. (3.1) admits a unique adapted solution \(X(\cdot)\in L_{\mathbb{F}}^{2}(-\delta,T;\mathbb{R}^{n})\).
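Under (H3.1) the coefficients are Lipschitz, so the solution of Eq. (3.1) can be approximated by an Euler–Maruyama scheme that stores the whole past path and reads the delayed state \(X(t-\delta)\) from it. A minimal sketch, with \(n=d=1\) and toy coefficients, delay and constant control chosen only for illustration (they are assumptions of this sketch, not the paper's data):

```python
import numpy as np

# Euler-Maruyama for the delayed SDE (3.1) with illustrative choices:
#   b(t, x, x_d, u) = -x + 0.5 * x_d + u,  sigma(t, x, x_d, u) = 0.2 + u,
#   eta(t) = 1 on [-delta, 0], constant control u(t) = 0.1.
rng = np.random.default_rng(0)
T, delta, N = 1.0, 0.25, 400
dt = T / N
lag = int(delta / dt)          # number of grid points in one delay length

X = np.empty(N + lag + 1)      # stores the path on [-delta, T]
X[:lag + 1] = 1.0              # eta(t) = 1 for -delta <= t <= 0
u = 0.1
for k in range(lag, lag + N):  # k indexes the time t_k = (k - lag) * dt
    x, x_d = X[k], X[k - lag]  # current state and delayed state X(t - delta)
    dW = rng.normal(0.0, np.sqrt(dt))
    X[k + 1] = x + (-x + 0.5 * x_d + u) * dt + (0.2 + u) * dW

print(X[-1])                   # one sample of X(T)
```

The only delay-specific point is that the scheme keeps the path on the whole window \([-\delta,T]\), so the lagged value is a lookup `X[k - lag]` rather than extra state.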
Let \(K\subset\mathbb{R}^{n}\) be a given nonempty convex subset. The goal of our control problem is to solve
$$\mbox{Problem A:}\quad \textstyle\begin{cases} \mbox{Minimize } J(u(\cdot)) \\ \mbox{subject to }u(\cdot)\in\mathcal{U}_{\mathrm{ad}};\qquad X(T)\in K. \end{cases} $$
3.2 Time-delayed backward formulation
We now present an equivalent backward system for Problem A. To do this, one additional assumption is needed:
 (H3.3):

There exists \(\alpha>0\) such that, for each \(t\in[0,T]\), \(x,x'\in\mathbb{R}^{n}\) and \(u_{1},u_{2}\in\mathbb{R}^{n\times d}\),
$$\bigl\vert \sigma \bigl(t,x,x',u_{1} \bigr) - \sigma \bigl(t,x,x',u_{2} \bigr) \bigr\vert \geq \alpha \vert u_{1}-u_{2}\vert . $$
Note that (H3.1) and (H3.3) imply, for every \((t,x,x')\in[0,T]\times \mathbb{R}^{n} \times\mathbb{R}^{n}\), that the following function
$$u \rightarrow\sigma \bigl(t,x,x',u \bigr) $$
is a bijection on \(\mathbb{R}^{n\times d}\). Hence, letting \(q\equiv\sigma(t,x,x',u)\), there exists an inverse \(\sigma^{-1}\) satisfying \(u=\sigma^{-1}(t,x,x',q)\). Then we can rewrite (3.1) as
$$\textstyle\begin{cases} dX(t) = f(t,X(t),X(t-\delta),q(t))\,dt + q(t)\,dW(t), & 0\leq t\leq T; \\ X(t) = \eta(t), & -\delta\leq t\leq0, \end{cases} $$
where \(f(t,x,x',q)=b(t,x,x',\sigma^{-1}(t,x,x',q))\).
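For a concrete feel of this inversion, here is a hypothetical one-dimensional example (the functions σ and b below are assumptions of this sketch, not the paper's coefficients). The diffusion is affine in u, so (H3.3) holds with \(\alpha=2\) and \(\sigma^{-1}\) is explicit; f is then computed exactly as \(b\circ\sigma^{-1}\):

```python
import numpy as np

# Hypothetical coefficients for the backward formulation (n = d = 1).
def sigma(t, x, xd, u):
    # satisfies (H3.3): |sigma(u1) - sigma(u2)| = 2 |u1 - u2|
    return 2.0 * u + np.sin(x) + xd

def sigma_inv(t, x, xd, q):
    # explicit inverse of u -> sigma(t, x, xd, u)
    return (q - np.sin(x) - xd) / 2.0

def b(t, x, xd, u):
    # a toy drift, assumed only for this sketch
    return -x + 0.5 * xd + u

def f(t, x, xd, q):
    # f(t, x, x', q) = b(t, x, x', sigma^{-1}(t, x, x', q))
    return b(t, x, xd, sigma_inv(t, x, xd, q))

# Round trip: u -> q = sigma(u) -> sigma_inv(q) recovers u,
# and f evaluated at q agrees with b evaluated at u.
t, x, xd, u = 0.3, 0.7, -0.2, 1.5
q = sigma(t, x, xd, u)
assert abs(sigma_inv(t, x, xd, q) - u) < 1e-12
assert abs(f(t, x, xd, q) - b(t, x, xd, u)) < 1e-12
```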
Since \(u \rightarrow\sigma(t,x,x',u)\) is a bijection, \(q(\cdot)\) can be regarded as the control; this is the crucial observation behind the present approach to Problem A. Furthermore, by the theory of BSDEs, choosing the terminal state \(X(T)\) amounts to choosing \(q(\cdot)\). Therefore we introduce the following ‘controlled’ system, which is essentially a time-delayed BSDE:
$$ \textstyle\begin{cases} dX(t) = f(t,X(t),X(t-\delta),q(t))\,dt + q(t)\,dW(t), & 0\leq t\leq T; \\ X(T) = \xi, \qquad X(t)=\eta(t), & -\delta\leq t< 0, \end{cases} $$
(3.2)
where now ξ becomes the ‘control’ and belongs to the following set:
$$U= \bigl\{ \xi \ \mathcal{F}_{T}\mbox{-measurable} \vert E\vert \xi\vert^{2}< \infty, \xi\in K, \mbox{a.s.} \bigr\} . $$
Moreover, here the equivalent cost function is
$$J(\xi):= E \biggl[ \int_{0}^{T} l \bigl(t,X(t),q(t) \bigr)\,dt + \phi( \xi) \biggr] , $$
with \(l(t,x,q)=\widetilde{l}(t,x,\sigma^{-1}(t,x,x',q))\).
Hence, the original Problem A is equivalent to the following Problem B:
$$ \mbox{Problem B:} \quad \textstyle\begin{cases} \mbox{Minimize } J(\xi) \\ \mbox{subject to } \xi\in U;\qquad X^{\xi}(0)=a, \end{cases} $$
(3.3)
where \(X^{\xi}(\cdot)\) denotes the solution to Eq. (3.2) under ξ, evaluated here at the initial time 0, and \(a=\eta(0)\) (this notation is kept in the following for simplicity).
In control theory, it is well known that a control constraint is easier to handle than a state constraint. Since Problem A is equivalent to Problem B, from now on we concentrate on Problem B. The benefit is that, with ξ now playing the role of the control variable, the state constraint in Problem A is replaced by a control constraint in Problem B.
Definition 3.1
For \(\xi\in U\) and \(a\in\mathbb{R}^{n}\), if the solution to (3.2) satisfies \(X^{\xi}(0)=a\), then the random variable ξ is called feasible. For any given a, the collection of all feasible ξ is denoted by \(\mathcal{N}(a)\). Moreover, if \(\xi^{*}\in U\) attains the minimum of \(J(\xi)\) over \(\mathcal{N}(a)\), we call \(\xi^{*}\) optimal.
3.3 Variational equation
In the following subsections, for simplicity, we use the notation
$$\begin{aligned}& f(t) = f \bigl(t,X(t),X(t-\delta),q(t) \bigr),\qquad f^{\rho}(t) = f \bigl(t,X^{\rho}(t),X^{\rho}(t-\delta),q^{\rho}(t) \bigr), \\& f^{*}(t) = f \bigl(t,X^{*}(t),X^{*}(t-\delta),q^{*}(t) \bigr),\qquad f^{*}_{\varphi}(t) = f_{\varphi} \bigl(t,X^{*}(t),X^{*}(t-\delta),q^{*}(t) \bigr), \end{aligned}$$
where \(f_{\varphi}\) denotes the partial derivative of f with respect to φ, for \(\varphi=x,x_{\delta},q\), respectively. For \(\xi^{1},\xi^{2}\in U\), define a metric on U by
$$d \bigl(\xi^{1},\xi^{2} \bigr):= \bigl(E \bigl\vert \xi^{1}-\xi^{2} \bigr\vert ^{2} \bigr)^{\frac{1}{2}}. $$
Clearly, \((U,d(\cdot,\cdot))\) is a complete metric space. Suppose \(\xi^{*}\) is optimal, and let \((X^{*}(\cdot),q^{*}(\cdot))\) be the corresponding state processes of Eq. (3.2). Since U is convex, for every \(\xi\in U\) the following variational control \(\xi^{\rho}\) also belongs to U:
$$\xi^{\rho}:= \xi^{*} + \rho \bigl(\xi-\xi^{*} \bigr),\quad 0\leq\rho\leq1. $$
Denote by \((X^{\rho}(\cdot),q^{\rho}(\cdot))\) the solution to Eq. (3.2) associated with \(\xi=\xi^{\rho}\), and by \((\widehat{X}(\cdot),\widehat{q}(\cdot))\) the solution of the following variational equation:
$$ \textstyle\begin{cases} d\widehat{X}(t) = [f^{*}_{x}(t)\widehat{X}(t) + f^{*}_{x_{\delta}}(t) \widehat{X}(t-\delta) + f^{*}_{q}(t)\widehat{q}(t)]\,dt + \widehat{q}(t)\,dW(t),& 0\leq t\leq T; \\ \widehat{X}(T) = \xi-\xi^{*},\qquad \widehat{X}(t)=0, & -\delta\leq t< 0. \end{cases} $$
(3.4)
Remark 3.2
Equation (3.4) is a linear time-delayed BSDE. By Proposition 2.1, under conditions (H3.1)–(H3.3), Eq. (3.4) admits a unique adapted solution in \(L_{\mathbb{F}}^{2}(-\delta,T;\mathbb{R}^{n})\times L_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n\times d})\).
Lemma 3.3
Under assumptions (H3.1)–(H3.3), one has
$$\begin{aligned}& \lim_{\rho\rightarrow0}\sup_{0\leq t\leq T} E \bigl\vert \widetilde{X}^{\rho}(t) \bigr\vert ^{2} = 0, \\& \lim_{\rho\rightarrow0} E \int_{0}^{T} \bigl\vert \widetilde{q}^{\rho }(t) \bigr\vert ^{2}\,dt = 0, \end{aligned}$$
where
$$\widetilde{X}^{\rho}(t) = \frac{X^{\rho}(t) - X^{*}(t)}{\rho} - \widehat{X}(t),\qquad \widetilde{q}^{\rho}(t) = \frac{q^{\rho}(t) - q^{*}(t)}{\rho} - \widehat{q}(t). $$
Remark 3.4
Since the proof of Lemma 3.3 is the same as that of Lemma 3.1 in Chen and Huang [10], for simplicity of presentation we only state the result and omit the details. In fact, Lemma 3.3 follows in a straightforward way from Proposition 2.1, a Taylor expansion and the Lebesgue dominated convergence theorem.
3.4 Variational inequality
In this subsection we deal with the initial constraint \(X^{\xi}(0)=a\) and derive a variational inequality.
Given the optimal \(\xi^{*}\), for a constant \(\varepsilon>0\), define \(F_{\varepsilon}(\cdot):U\rightarrow\mathbb{R}\) as follows:
$$F_{\varepsilon}(\xi)= \bigl\{ \bigl\vert X^{\xi}(0)-a \bigr\vert ^{2} + \bigl(\max \bigl(0, E \bigl[\phi(\xi) - \phi \bigl( \xi^{*} \bigr) \bigr] + \varepsilon \bigr) \bigr)^{2} \bigr\} ^{ \frac{1}{2}}. $$
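For intuition about \(F_{\varepsilon}\), consider the driver-free special case \(f\equiv0\) (an assumption of this sketch only): then (3.2) gives \(X(t)=E[\xi\mid\mathcal{F}_{t}]\), so \(X^{\xi}(0)=E[\xi]\), and \(F_{\varepsilon}\) can be evaluated by Monte Carlo:

```python
import numpy as np

# Toy evaluation of F_eps in the driver-free case f = 0, where X^xi(0)
# = E[xi]; phi, K and the candidate xi* below are illustrative choices.
rng = np.random.default_rng(1)
eps = 0.05
phi = lambda x: x ** 2                      # terminal cost, toy choice
xi_star = np.abs(rng.normal(size=100_000))  # candidate optimum, in K = [0, inf) a.s.
a = xi_star.mean()                          # eta(0) chosen so that xi_star is feasible

def F_eps(xi):
    init_gap = (xi.mean() - a) ** 2                                  # |X^xi(0) - a|^2
    cost_gap = max(0.0, phi(xi).mean() - phi(xi_star).mean() + eps) ** 2
    return np.sqrt(init_gap + cost_gap)

print(F_eps(xi_star))                       # equals eps by construction
xi = np.abs(rng.normal(loc=0.3, size=100_000))
print(F_eps(xi) > 0.0)                      # F_eps is positive on U
```

The two printed facts are exactly the properties \(F_{\varepsilon}(\xi^{*})=\varepsilon\) and \(F_{\varepsilon}>0\) used at the start of the proof of Theorem 3.6.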
Remark 3.5
One can check that \(\vert X^{\xi}(0)-a\vert ^{2}\) and \(E[\phi(\xi)]\) are both continuous in the argument ξ with respect to the metric d. Hence \(F_{\varepsilon}\), defined on U, is also a continuous function of ξ.
Theorem 3.6
Under assumptions (H3.1)–(H3.3), suppose that
\(\xi^{*}\)
is an optimal solution to Problem B. Then there exist
\(h_{0}\in\mathbb{R}^{+}\)
and
\(h_{1}\in\mathbb{R}^{n}\)
satisfying
\(\vert h_{0}\vert +\vert h_{1}\vert \neq0\), such that for every
\(\xi\in U\) we have the following variational inequality:
$$ \bigl\langle h_{1},\widehat{X}(0) \bigr\rangle + h_{0}E \bigl\langle \phi_{x} \bigl(\xi^{*} \bigr), \xi-\xi^{*} \bigr\rangle \geq0, $$
(3.5)
where
\(\widehat{X}(0)\)
is the value at time 0 of the solution to Eq. (3.4).
Proof
We can check the following properties by the definition:
$$\begin{aligned}& F_{\varepsilon} \bigl(\xi^{*} \bigr)= \varepsilon; \\& F_{\varepsilon}(\xi)> 0,\quad \forall\xi\in U; \\& F_{\varepsilon} \bigl(\xi^{*} \bigr)\leq\inf_{\xi\in U} F_{\varepsilon }(\xi) +\varepsilon. \end{aligned}$$
Therefore, from Proposition 2.3 (Ekeland’s variational principle), there exists \(\xi^{\varepsilon}\in U\) satisfying:

(i)
\(F_{\varepsilon} (\xi^{\varepsilon} ) \leq F_{\varepsilon} (\xi ^{*} )\);

(ii)
\(d (\xi^{\varepsilon},\xi^{*} )\leq\sqrt{ \varepsilon}\);

(iii)
\(F_{\varepsilon}(\xi)+\sqrt{\varepsilon}d (\xi, \xi^{\varepsilon} )\geq F_{\varepsilon} (\xi^{\varepsilon} )\), \(\forall\xi\in U\).
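Items (i)–(iii) can be checked by brute force in a finite toy model, which may help fix ideas. The grid, the function F and the metric below are assumptions of this sketch standing in for \((U,d)\) and \(F_{\varepsilon}\); they are not the objects of the proof:

```python
import numpy as np

# Brute-force illustration of Ekeland's variational principle (Proposition 2.3)
# on a finite grid: starting from an eps-approximate minimiser i0, find a point
# j satisfying properties (i)-(iii).
eps = 0.04
grid = np.linspace(-1.0, 1.0, 401)
F = (grid ** 2 - 0.5) ** 2 + 0.1 * grid          # bounded below, two wells
d = np.abs(grid[:, None] - grid[None, :])        # metric on the grid

# Any eps-approximate minimiser may serve as the starting point.
i0 = int(np.max(np.where(F <= F.min() + eps)[0]))

def ekeland_point(F, d, i0, eps):
    """Find j with (i) F[j] <= F[i0], (ii) d(j, i0) <= sqrt(eps), and
    (iii) F[x] + sqrt(eps) * d(x, j) >= F[j] for all x."""
    r = np.sqrt(eps)
    for j in np.argsort(F):                      # small values of F first
        if F[j] <= F[i0] and d[j, i0] <= r and np.all(F + r * d[:, j] >= F[j]):
            return int(j)
    raise RuntimeError("unreachable: Ekeland guarantees existence")

j = ekeland_point(F, d, i0, eps)
assert F[j] <= F[i0] and d[j, i0] <= np.sqrt(eps)
```

Property (iii) is the one exploited below: around the point found, F can only decrease at the price of a penalty \(\sqrt{\varepsilon}\,d(\cdot,\cdot)\), which is what produces the multipliers \((h_{0}^{\varepsilon},h_{1}^{\varepsilon})\).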
For every \(\xi\in U\), denote \(\xi_{\rho}^{\varepsilon}:= \xi^{\varepsilon}+ \rho(\xi-\xi^{\varepsilon})\), \(0\leq\rho \leq1\). Let \((X_{\rho}^{\varepsilon}(\cdot),q_{\rho}^{\varepsilon }(\cdot))\) (resp. \((X^{\varepsilon}(\cdot), q^{\varepsilon}(\cdot))\)) be the solution to (3.2) under \(\xi_{\rho}^{\varepsilon}\) (resp. \(\xi^{\varepsilon}\)), and let \((\widehat{X}^{\varepsilon}(\cdot), \widehat{q}^{\varepsilon}(\cdot))\) be the solution to (3.4) with \(\xi^{*}\) replaced by \(\xi^{\varepsilon}\). Hence, applying item (iii) above to \(\xi_{\rho}^{\varepsilon}\), one obtains
$$ F_{\varepsilon} \bigl(\xi_{\rho}^{\varepsilon} \bigr) - F_{\varepsilon} \bigl( \xi^{\varepsilon} \bigr) +\sqrt{\varepsilon}\,d \bigl( \xi_{\rho}^{\varepsilon },\xi^{\varepsilon} \bigr)\geq0. $$
(3.6)
On the other hand, similarly to Lemma 3.3, one concludes
$$\lim_{\rho\rightarrow0}\sup_{0\leq t\leq T} E \bigl\vert \rho^{-1} \bigl( X_{\rho}^{\varepsilon}(t) - X^{\varepsilon}(t) \bigr) - \widehat{X}^{\varepsilon}(t) \bigr\vert ^{2} = 0. $$
Thus
$$X_{\rho}^{\varepsilon}(0) - X^{\varepsilon}(0)= \rho\widehat{X} ^{\varepsilon}(0) + o(\rho), $$
which leads to the following expansion:
$$\bigl\vert X_{\rho}^{\varepsilon}(0)-a \bigr\vert ^{2} - \bigl\vert X^{\varepsilon }(0)-a \bigr\vert ^{2} = 2\rho \bigl\langle X^{\varepsilon}(0)-a, \widehat{X}^{\varepsilon}(0) \bigr\rangle + o( \rho). $$
Moreover,
$$\begin{aligned}& \bigl( E \bigl[\phi \bigl(\xi_{\rho}^{\varepsilon} \bigr) - \phi \bigl( \xi^{*} \bigr) \bigr] + \varepsilon \bigr)^{2} - \bigl( E \bigl[\phi \bigl(\xi^{\varepsilon} \bigr) - \phi \bigl(\xi^{*} \bigr) \bigr] + \varepsilon \bigr)^{2} \\& \quad = 2 \rho \bigl( E \bigl[\phi \bigl( \xi^{\varepsilon} \bigr) - \phi \bigl( \xi^{*} \bigr) \bigr] + \varepsilon \bigr) E \bigl\langle \phi_{x} \bigl(\xi^{\varepsilon} \bigr),\xi- \xi^{\varepsilon} \bigr\rangle + o( \rho). \end{aligned}$$
Next, we study the following two cases for a given \(\varepsilon>0\).
Case 1: There is \(\rho_{0}>0\) such that, for every \(\rho\in(0,\rho_{0})\),
$$E \bigl[\phi \bigl(\xi_{\rho}^{\varepsilon} \bigr) - \phi \bigl( \xi^{*} \bigr) \bigr] + \varepsilon \geq0. $$
We see that
$$\begin{aligned}& \lim_{\rho\rightarrow0}\frac{F_{\varepsilon}(\xi_{\rho }^{\varepsilon }) - F_{\varepsilon}(\xi^{\varepsilon})}{\rho} \\& \quad = \lim_{\rho\rightarrow0}\frac{1}{F_{\varepsilon}(\xi_{\rho}^{ \varepsilon}) + F_{\varepsilon}(\xi^{\varepsilon})}\, \frac{F_{\varepsilon }^{2}(\xi_{\rho}^{\varepsilon}) - F_{\varepsilon}^{2}( \xi^{\varepsilon})}{\rho} \\& \quad = \frac{1}{F_{\varepsilon}(\xi^{\varepsilon})} \bigl\{ \bigl\langle X^{\varepsilon}(0)-a, \widehat{X}^{\varepsilon}(0) \bigr\rangle + \bigl( E \bigl[\phi \bigl( \xi^{\varepsilon} \bigr) - \phi \bigl(\xi^{*} \bigr) \bigr] + \varepsilon \bigr) E \bigl\langle \phi_{x} \bigl(\xi^{\varepsilon} \bigr),\xi-\xi^{\varepsilon} \bigr\rangle \bigr\} . \end{aligned}$$
Now, dividing (3.6) by ρ and letting \(\rho\rightarrow0\), one has
$$ h_{0}^{\varepsilon} E \bigl\langle \phi_{x} \bigl(\xi^{\varepsilon} \bigr),\xi- \xi^{\varepsilon} \bigr\rangle + \bigl\langle h_{1}^{\varepsilon},\widehat{X}^{\varepsilon}(0) \bigr\rangle \geq -\sqrt{\varepsilon} \bigl[E \bigl\vert \xi- \xi^{\varepsilon} \bigr\vert ^{2} \bigr]^{\frac{1}{2}}, $$
(3.7)
where
$$\begin{aligned}& h_{0}^{\varepsilon} = \frac{1}{F_{\varepsilon}(\xi^{\varepsilon})} \bigl( E \bigl[\phi \bigl( \xi^{\varepsilon} \bigr) - \phi \bigl(\xi^{*} \bigr) \bigr] + \varepsilon \bigr) \geq0, \\& h_{1}^{\varepsilon} = \frac{1}{F_{\varepsilon}(\xi^{\varepsilon})} \bigl( X^{\varepsilon}(0)-a \bigr) . \end{aligned}$$
Case 2: There is a positive sequence \(\{\rho_{n}\}\) with \(\rho_{n}\rightarrow0\) such that
$$E \bigl[\phi \bigl(\xi_{\rho_{n}}^{\varepsilon} \bigr) - \phi \bigl( \xi^{*} \bigr) \bigr] + \varepsilon \leq0. $$
From the definition of \(F_{\varepsilon}\), for large n, \(F_{\varepsilon }(\xi_{\rho_{n}}^{\varepsilon})= \{ \vert X_{\rho _{n}}^{\varepsilon}(0)-a\vert ^{2} \} ^{\frac{1}{2}}\). Owing to the continuity of \(F_{\varepsilon}(\cdot)\), one has \(F_{\varepsilon}(\xi^{\varepsilon })= \{ \vert X^{\varepsilon}(0)-a\vert ^{2} \} ^{\frac{1}{2}}\).
Moreover,
$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{F_{\varepsilon}(\xi_{\rho_{n}}^{\varepsilon }) - F_{\varepsilon}(\xi^{\varepsilon})}{\rho_{n}} &= \lim_{n\rightarrow \infty} \frac{1}{F_{\varepsilon}(\xi_{\rho_{n}}^{\varepsilon}) + F_{ \varepsilon}(\xi^{\varepsilon})}\, \frac{F_{\varepsilon}^{2}( \xi_{\rho_{n}}^{\varepsilon}) - F_{\varepsilon}^{2}(\xi^{\varepsilon })}{\rho_{n}} \\ &= \frac{ \langle X^{\varepsilon}(0)-a,\widehat{X}^{\varepsilon}(0) \rangle }{F_{\varepsilon}(\xi^{\varepsilon})}. \end{aligned}$$
From (3.6), arguing as in Case 1,
$$ \bigl\langle h_{1}^{\varepsilon}, \widehat{X}^{\varepsilon}(0) \bigr\rangle \geq -\sqrt{ \varepsilon} \bigl[E \bigl\vert \xi- \xi^{\varepsilon} \bigr\vert ^{2} \bigr]^{\frac{1}{2}}, $$
(3.8)
where
$$h_{0}^{\varepsilon} = 0,\qquad h_{1}^{\varepsilon} = \frac{1}{F_{\varepsilon}(\xi^{\varepsilon})} \bigl( X^{\varepsilon}(0)-a \bigr) . $$
In both cases, from the definition of \(F_{\varepsilon}( \cdot)\), one has \(h_{0}^{\varepsilon}\geq0\) and
$$\bigl\vert h_{0}^{\varepsilon} \bigr\vert ^{2} + \bigl\vert h_{1}^{\varepsilon } \bigr\vert ^{2} =1. $$
Therefore, there exists a subsequence of \((h_{0}^{\varepsilon },h_{1}^{\varepsilon})\), convergent as \(\varepsilon\rightarrow0\), whose limit is denoted by \((h_{0},h_{1})\); by the normalization above, \(\vert h_{0}\vert +\vert h_{1}\vert \neq0\).
Since \(d(\xi^{\varepsilon},\xi^{*})\leq\sqrt{\varepsilon}\), we have \(\xi^{\varepsilon}\rightarrow\xi^{*}\) as \(\varepsilon \rightarrow0\). Then, from the estimate of Proposition 2.1, we see that \(\widehat{X}^{\varepsilon}(0)\rightarrow\widehat{X}(0)\) as \(\varepsilon\rightarrow0\). Passing to the limit in (3.7) and (3.8) yields (3.5). The desired result is proved. □
By a similar analysis, when \(l(t,x,q)\not\equiv0\), the following variational inequality can be obtained.
Theorem 3.7
Let (H3.1)–(H3.3) hold. Suppose that
\(\xi^{*}\)
is an optimal solution of Problem B. Then there exist
\(h_{0}\in\mathbb{R}^{+}\), \(h_{1}\in \mathbb{R}^{n}\)
satisfying
\(\vert h_{0}\vert +\vert h_{1}\vert \neq0\), such that for every
\(\xi\in U\) we have the variational inequality:
$$ \bigl\langle h_{1},\widehat{X}(0) \bigr\rangle + h_{0}E \bigl\langle \phi_{x} \bigl(\xi^{*} \bigr), \xi-\xi^{*} \bigr\rangle + h_{0}E \int_{0}^{T} \bigl\langle l_{x}^{*}(t), \widehat{X}(t) \bigr\rangle \, dt + h_{0}E \int_{0}^{T} \bigl\langle l_{q}^{*}(t), \widehat{q}(t) \bigr\rangle \,dt \geq0, $$
(3.9)
where
\(l_{\varphi}^{*}(t)=l_{\varphi}(t,X^{*}(t),q^{*}(t))\)
denotes the partial derivative of l with respect to φ, evaluated along \((X^{*}(\cdot),q^{*}(\cdot))\), for
\(\varphi= x,q\), respectively, and
\((\widehat{X}(\cdot),\widehat{q}(\cdot))\)
is the solution to the variational equation (3.4).
3.5 Maximum principle
In order to establish the maximum principle, in this part we introduce the following equation as the dual equation of Eq. (3.4):
$$ \textstyle\begin{cases} dm(t) = - \{ f^{*}_{x}(t)^{T}m(t) + E^{\mathcal{F}_{t}} [(f^{*} _{x_{\delta}}\vert_{t+\delta})^{T}m(t+\delta) ] - h_{0}l^{*}_{x}(t) \}\,dt \\ \hspace{33pt} {} - [f^{*}_{q}(t)^{T}m(t) - h_{0}l^{*}_{q}(t)]\,dW(t), \quad 0\leq t\leq T; \\ m(0) = h_{1}, \qquad m(t)=0, \quad T< t\leq T+\delta. \end{cases} $$
(3.10)
Remark 3.8
In Eq. (3.10), \(f^{*}_{x_{\delta}}\vert_{t+\delta}\) represents the value of \(f^{*}_{x_{\delta}}\) when t is replaced by \(t+\delta\).
Remark 3.9
It is easy to see that (3.10) is a linear time-advanced SDE. By Proposition 2.2, under conditions (H3.1)–(H3.3), Eq. (3.10) admits a unique adapted solution in \(L_{\mathbb{F}}^{2}(0, T+\delta;\mathbb{R}^{n})\).
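A quick sanity check (our observation, not from the source): when the delay dominates the horizon, \(\delta>T\), we have \(t+\delta>T\) for every \(t\in[0,T]\); hence, by the condition \(m(t)=0\) on \((T,T+\delta]\),
$$E^{\mathcal{F}_{t}} \bigl[ \bigl(f^{*}_{x_{\delta}}\vert_{t+\delta} \bigr)^{T}m(t+\delta) \bigr] = 0,\quad 0\leq t\leq T, $$
so the anticipated term disappears and Eq. (3.10) reduces to an ordinary linear SDE, as in the non-delayed backward formulation.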
Theorem 3.10
Let (H3.1)–(H3.3) hold. If
\(\xi^{*}\)
is optimal for Problem B, with
\((X^{*}(\cdot),q^{*}(\cdot))\)
being the corresponding state of Eq. (3.2), then there exist
\(h_{0}\in\mathbb{R}^{+}\)
and
\(h_{1}\in \mathbb{R}^{n}\)
satisfying
\(\vert h_{0}\vert +\vert h_{1}\vert \neq0\), such that for every
\(\eta\in U\),
$$ \bigl\langle m(T) + h_{0}\phi_{x} \bigl(\xi^{*} \bigr), \eta-\xi^{*} \bigr\rangle \geq0,\quad \textit{a.s.}, $$
(3.11)
where
\(m(\cdot)\)
is the solution of Eq. (3.10).
Proof
Applying Itô’s formula to \(\langle m(t),\widehat{X}(t) \rangle \), we obtain
$$\begin{aligned} d \bigl\langle m(t),\widehat{X}(t) \bigr\rangle = {}& \bigl\langle m(t), f^{*}_{x}(t)\widehat{X}(t) + f ^{*}_{x_{\delta}}(t)\widehat{X}(t-\delta) + f^{*}_{q}(t) \widehat{q}(t) \bigr\rangle \,dt \\ &{}- \bigl\langle \widehat{X}(t), f^{*}_{x}(t)^{T}m(t) + E^{\mathcal{F}_{t}} \bigl[ \bigl(f^{*}_{x_{\delta}} \vert_{t+\delta} \bigr)^{T}m(t+\delta) \bigr] - h_{0}l ^{*}_{x}(t) \bigr\rangle \,dt \\ &{}- \bigl\langle \widehat{q}(t), f^{*}_{q}(t)^{T}m(t) - h_{0}l^{*}_{q}(t) \bigr\rangle \,dt + \{ \cdots \} \,dW(t) \\ = {}& \bigl\{ \bigl\langle f^{*}_{x_{\delta}}(t)^{T}m(t), \widehat{X}(t-\delta) \bigr\rangle - \bigl\langle E^{\mathcal{F}_{t}} \bigl[ \bigl(f^{*}_{x_{\delta}} \vert_{t+\delta} \bigr)^{T}m(t+ \delta) \bigr], \widehat{X}(t) \bigr\rangle \bigr\} \,dt \\ &{}+h_{0} \bigl[ \bigl\langle l^{*}_{x}(t),\widehat{X}(t) \bigr\rangle + \bigl\langle l^{*}_{q}(t),\widehat{q}(t) \bigr\rangle \bigr]\,dt + \{\cdots\} \,dW(t). \end{aligned}$$
Therefore,
$$E \bigl[ \bigl\langle m(T),\widehat{X}(T) \bigr\rangle - \bigl\langle m(0), \widehat{X}(0) \bigr\rangle \bigr] = \Delta_{1} + \Delta_{2}, $$
with
$$\begin{aligned}& \Delta_{1}=E \int_{0}^{T} \bigl[ \bigl\langle f^{*}_{x_{\delta}}(t)^{T}m(t), \widehat{X}(t-\delta) \bigr\rangle - \bigl\langle \bigl(f^{*}_{x_{\delta}} \vert_{t+\delta} \bigr)^{T}m(t+ \delta), \widehat{X}(t) \bigr\rangle \bigr]\,dt, \\& \Delta_{2}=h_{0}E \int_{0}^{T} \bigl[ \bigl\langle l^{*}_{x}(t), \widehat{X}(t) \bigr\rangle + \bigl\langle l ^{*}_{q}(t),\widehat{q}(t) \bigr\rangle \bigr]\,dt. \end{aligned}$$
Taking into account the initial condition of \(\widehat{X}\) and the terminal condition of m, and changing the variable \(t\mapsto t+\delta\) in the second integral, one derives
$$\begin{aligned} \Delta_{1} = &E \int_{0}^{T} \bigl\langle f^{*}_{x_{\delta}}(t)^{T}m(t), \widehat{X}(t-\delta) \bigr\rangle \,dt - E \int_{\delta}^{T+\delta} \bigl\langle f^{*}_{x_{\delta}}(t)^{T}m(t), \widehat{X}(t-\delta) \bigr\rangle \,dt \\ = &E \int_{0}^{\delta} \bigl\langle f^{*}_{x_{\delta}}(t)^{T}m(t), \widehat{X}(t-\delta) \bigr\rangle \,dt - E \int_{T}^{T+\delta} \bigl\langle f^{*}_{x_{\delta}}(t)^{T}m(t), \widehat{X}(t-\delta) \bigr\rangle \,dt \\ = &0, \end{aligned}$$
since \(\widehat{X}(t-\delta)=0\) for \(t\in[0,\delta)\) and \(m(t)=0\) for \(t\in(T,T+\delta]\).
Hence, by (3.9),
$$\begin{aligned}& E \bigl[ \bigl\langle m(T)+h_{0}\phi_{x} \bigl( \xi^{*} \bigr),\xi-\xi^{*} \bigr\rangle \bigr] \\& \quad = E \biggl[ \bigl\langle h_{1},\widehat{X}(0) \bigr\rangle + h_{0} \bigl\langle \phi_{x} \bigl(\xi ^{*} \bigr), \xi-\xi^{*} \bigr\rangle + h_{0} \int_{0}^{T} \bigl\langle l_{x}^{*}(t), \widehat{X}(t) \bigr\rangle \,dt + h_{0} \int_{0}^{T} \bigl\langle l_{q}^{*}(t), \widehat{q}(t) \bigr\rangle \,dt \biggr] \\& \quad \geq0. \end{aligned}$$
From the arbitrariness of \(\xi\in U\) (for any \(\eta\in U\) and \(A\in\mathcal{F}_{T}\), one may take \(\xi=\eta\mathbf{1}_{A}+\xi^{*}\mathbf{1}_{A^{c}}\in U\)), for every \(\eta\in U\) we have
$$\bigl\langle m(T) + h_{0}\phi_{x} \bigl(\xi^{*} \bigr), \eta-\xi^{*} \bigr\rangle \geq0, \quad \mbox{a.s.} $$
□
Now we let ∂K represent the boundary of K and denote
$$\Omega_{0}:= \bigl\{ \omega\in\Omega\vert \xi^{*}(\omega)\in \partial K \bigr\} . $$
According to Theorem 3.10, we directly deduce the following result.
Corollary 3.11
Let the assumptions of Theorem
3.10
hold. Then, for each
\(\eta\in K\), we have
$$\begin{aligned}& \bigl\langle m(T) + h_{0} \phi_{x} \bigl( \xi^{*} \bigr), \eta- \xi^{*} \bigr\rangle \geq0,\quad \textit{a.s. on }\Omega_{0}; \\& m(T) + h_{0}\phi_{x} \bigl(\xi^{*} \bigr) = 0, \quad \textit{a.s. on }\Omega_{0}^{c}. \end{aligned}$$
Remark 3.12
By the above analysis, we have obtained a necessary condition for the optimal terminal control \(\xi^{*}\). Note that the transformation of Sect. 3.2 together with (H3.3) allows us to invert it: \(u^{*}(t)=\sigma^{-1}(t,X^{*}(t),X^{*}(t-\delta),q^{*}(t))\). Therefore the characterization of the optimal control process \(u^{*}(\cdot)\) of Problem A can be derived from the stochastic maximum principle obtained for \(\xi^{*}\).