Open Access

Maximum principle for a stochastic delayed system involving terminal state constraints

Journal of Inequalities and Applications 2017, 2017:103

https://doi.org/10.1186/s13660-017-1378-z

Received: 24 January 2017

Accepted: 21 April 2017

Published: 5 May 2017

Abstract

We investigate a stochastic optimal control problem in which the controlled system is described by a stochastic differential delayed equation and, at the terminal time, the state is constrained to lie in a convex set. We first introduce an equivalent backward delayed system described by a time-delayed backward stochastic differential equation. A stochastic maximum principle is then obtained by means of Ekeland’s variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and to a production-consumption choice problem are studied to illustrate the main result.

Keywords

stochastic differential delayed equation; state constraints; maximum principle

MSC

93E20; 60H10

1 Introduction

In 1990, the nonlinear backward stochastic differential equation (BSDE for short) was introduced by Pardoux and Peng [1]. Since then, it has found applications in many fields, such as partial differential equations (see [2]), stochastic control (see [3, 4]) and mathematical finance (see [5]). Meanwhile, BSDEs themselves have developed into many different branches, such as BSDEs with jumps (see [6–8]), mean-field BSDEs (see [9]), time-delayed BSDEs (see [10–12]), anticipated BSDEs (see [13, 14]) and so on. Many works have treated control problems for such BSDEs; far fewer have addressed control problems for stochastic delayed systems.

For a stochastic delayed system, Chen and Wu [15] obtained a stochastic maximum principle by virtue of a duality between stochastic differential delayed equations (SDDEs for short) and anticipated BSDEs. Øksendal, Sulem and Zhang [16] studied optimal control problems for SDDEs with jumps. Yu [17] obtained a maximum principle for SDDEs with random coefficients. A maximum principle for the optimal control of SDDEs on an infinite horizon was proved by Agram, Haadem and Øksendal [18]. Some other recent developments on stochastic delayed systems can be found in Huang, Li and Shi [19], Meng and Shen [20], etc.

To the authors’ knowledge, there has so far been no result on the control problem of a stochastic delayed system with state constraints. Yet state constraints on stochastic delayed systems do arise in practice. In this paper, we study the stochastic control problem of a forward delayed system with a terminal state constraint. The controlled system is described by the following SDDE:
$$ \textstyle\begin{cases} dX(t) = b(t,X(t),X(t-\delta),u(t))\,dt + \sigma(t,X(t),X(t-\delta),u(t))\,dW(t),\\ \quad 0\leq t\leq T; \\ X(t) = \eta(t),\\ \quad -\delta\leq t\leq0, \end{cases} $$
(1.1)
where \(X(T)\in K\), a.s., and \(K\subseteq\mathbb{R}^{n}\) is a convex set. There are two main difficulties in this study. The first is that the control system (1.1) is a delayed system which, as noted in [15], is more complex than the classical case. The second is the terminal state constraint, which is a sample-wise constraint. As explained in Ji and Zhou [21], stochastic control problems involving sample-wise state constraints cannot be resolved by the classical theory.

Some recently developed results on state constraints (see [3, 21–26]), as well as the duality relation between time-advanced stochastic differential equations (SDEs for short) and time-delayed BSDEs (see [10]), help us overcome the above-mentioned difficulties. First, an equivalent backward formulation of the stochastic delayed system (1.1) is introduced, in which \(X(T)\) is regarded as a control variable. Meanwhile, the state constraint becomes a control constraint. This treatment brings both an advantage and a disadvantage. The advantage is that, in classical control theory, a control constraint is easier to handle than a state constraint. The disadvantage is that the initial condition (\(X(0)=\eta(0)\)) now turns into an additional constraint. To deal with this additional initial constraint, Ekeland’s variational principle is used.

Note that the equivalent backward delayed system is described by a time-delayed BSDE, so the adjoint equation obtained from the duality relation is an anticipated SDE. Therefore, both the delayed system and the anticipated system are needed in our study. As usual, the variational procedure is carried out first. Then, by virtue of Ekeland’s variational principle, the variational inequality is obtained. Finally, the necessary condition is derived by applying the duality relation between the backward delayed controlled system and the anticipated forward adjoint system. Fortunately, the theory of BSDEs and our assumptions allow us to invert the transformation, so the optimal control process can be recovered from the obtained optimal terminal control. To make our conclusions concrete, we also study two applications. One is a stochastic delayed linear-quadratic (LQ for short) control model. The other is a production and consumption choice optimization problem (see [27]) adapted to our setting.

This article is organized as follows. Some preliminary results on time-delayed BSDEs and anticipated SDEs are presented in Section 2. In Section 3, the original control problem of a forward delayed controlled system with terminal state constraint is formulated; an equivalent transformation then yields a backward delayed controlled system, and a stochastic maximum principle is derived, giving a necessary condition for the optimal terminal control. In Section 4, two applications are given.

2 Preliminaries

Let \((\Omega,\mathcal{F},\mathbb{F},P)\) be a probability space such that \(\mathcal{F}_{0}\) contains all P-null sets of \(\mathcal{F}\), and assume the filtration \(\mathbb{F}=\{\mathcal{F}_{t},t\geq0\}\) is generated by a d-dimensional standard Brownian motion \(W=\{W(t), t \geq0\}\). Let \(T>0\) be a fixed time horizon and \(\delta> 0\) a given finite time delay. We introduce the following notation:
  • \(L^{2}(\mathcal{F}_{t};\mathbb{R}^{n})=\{\xi: \Omega\rightarrow\mathbb{R}^{n}\vert\xi\mbox{ is } \mathcal{F}_{t}\mbox{-measurable}, E\vert \xi \vert ^{2}< \infty\}\);

  • \(L_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n})=\{\psi: \Omega\times[0,T]\rightarrow\mathbb{R}^{n}\vert\psi(\cdot)\mbox{ is } \mathbb{F}\mbox{-measurable process}, E\int_{0}^{T} \vert \psi(t)\vert ^{2}\,dt< \infty\}\).

Similarly, we can define \(L_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n\times d})\), \(L_{\mathbb{F}}^{2}(-\delta,T;\mathbb{R}^{n})\) and \(L_{ \mathbb{F}}^{2}(0,T+\delta;\mathbb{R}^{n})\).
Now we recall some useful results for the study of the following sections. Consider the following SDDE:
$$ \textstyle\begin{cases} dX(t) = b(t,X(t),X(t-\delta))\,dt + \sigma(t,X(t),X(t-\delta))\,dW(t), & 0\leq t\leq T; \\ X(t) = \eta(t), & -\delta\leq t\leq0, \end{cases} $$
(2.1)
where η is a given continuous function, which represents the initial path of X, and \(b:[0,T]\times\mathbb{R}^{n}\times \mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) and \(\sigma:[0,T]\times \mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times d}\) are given measurable functions satisfying the following:
(H2.1): 
There exists a constant \(D>0\) such that for all \(t\in[0,T]\), \(x,x',y,y'\in\mathbb{R}^{n}\),
$$\begin{aligned}& \bigl\vert b(t,x,y) - b \bigl(t,x',y' \bigr) \bigr\vert + \bigl\vert \sigma(t,x,y) - \sigma \bigl(t,x',y' \bigr) \bigr\vert \\& \quad \leq D \bigl( \bigl\vert x-x' \bigr\vert + \bigl\vert y-y' \bigr\vert \bigr); \\& \sup_{0\leq t\leq T} \bigl( \bigl\vert b(t,0,0)\bigr\vert + \bigl\vert \sigma(t,0,0)\bigr\vert \bigr)< +\infty. \end{aligned}$$

Then, by Theorem 2.2 in [15], under (H2.1), SDDE (2.1) has a unique adapted solution \(X(\cdot)\in L_{\mathbb{F}}^{2}(-\delta,T;\mathbb{R}^{n})\).
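Although no numerical scheme is used in this paper, the structure of SDDE (2.1) can be illustrated by a standard Euler-Maruyama discretization of a scalar SDDE; the functions `b`, `sigma`, `eta` below are user-supplied placeholders, and this is only a minimal sketch, not a method from the paper:

```python
import math
import random

def simulate_sdde(b, sigma, eta, T, delta, n_steps, seed=0):
    """Euler-Maruyama scheme for the scalar SDDE
       dX(t) = b(t, X(t), X(t-delta)) dt + sigma(t, X(t), X(t-delta)) dW(t),
       with initial path X(t) = eta(t) on [-delta, 0].
       Assumes delta is an integer multiple of the step size h = T / n_steps."""
    rng = random.Random(seed)
    h = T / n_steps
    lag = round(delta / h)                 # number of grid steps in one delay length
    # grid values of X on -delta, ..., 0 come from the initial path eta
    xs = [eta(-delta + k * h) for k in range(lag + 1)]
    for i in range(n_steps):
        t = i * h
        x, x_d = xs[-1], xs[-1 - lag]      # X(t) and the delayed state X(t - delta)
        dw = rng.gauss(0.0, math.sqrt(h))  # Brownian increment over [t, t + h]
        xs.append(x + b(t, x, x_d) * h + sigma(t, x, x_d) * dw)
    return xs                              # grid values of X on [-delta, T]
```

As a quick sanity check, with zero noise, \(b(t,x,x_{\delta})=x_{\delta}\) and \(\eta\equiv1\), the exact solution on \([0,\delta]\) is \(X(t)=1+t\), which the scheme reproduces.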

For the time-delayed BSDE, we need the following assumption.
(H2.2): 
Assume that \(f:\Omega\times[0,T]\times \mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d} \rightarrow\mathbb{R}^{n}\) is \(\mathbb{F}\)-adapted and for every \(y,y_{\delta},y',y_{\delta}'\in\mathbb{R}^{n}\), \(z,z'\in\mathbb{R} ^{n\times d}\),
$$\bigl\vert f(t,y,y_{\delta},z)-f \bigl(t,y',y'_{\delta},z' \bigr) \bigr\vert ^{2}\leq C \bigl( \bigl\vert y-y' \bigr\vert ^{2} + \bigl\vert y_{\delta}-y_{\delta}' \bigr\vert ^{2} + \bigl\vert z-z' \bigr\vert ^{2} \bigr), $$
where \(C>0\) is a constant. Moreover, \(E\int_{0}^{T} \vert f(t,0,0,0)\vert ^{2}\,dt < +\infty\).

The following proposition gives the well-posedness of the time-delayed BSDE.

Proposition 2.1

Suppose \(\xi\in L^{2}(\mathcal{F}_{T};\mathbb{R}^{n})\) and \(\varphi(\cdot)\) is a given continuous function. Then, under (H2.2), for sufficiently small \(\delta>0\), the following time-delayed BSDE
$$ \textstyle\begin{cases} -dY(t) = f(t,Y(t),Y(t-\delta), Z(t))\,dt - Z(t)\,dW(t), & 0\leq t\leq T; \\ Y(T) = \xi,\qquad Y(t) = \varphi(t), & -\delta\leq t< 0, \end{cases} $$
(2.2)
has a unique adapted solution \((Y(\cdot),Z(\cdot))\in L_{ \mathbb{F}}^{2}(-\delta,T;\mathbb{R}^{n})\times L_{\mathbb{F}}^{2}(0,T; \mathbb{R}^{n\times d})\), which satisfies the estimate
$$E \biggl[ \sup_{0\leq t\leq T} \bigl\vert Y(t) \bigr\vert ^{2} + \int_{0}^{T} \bigl\vert Z(t) \bigr\vert ^{2}\,dt \biggr] \leq C E \biggl[ \vert \xi \vert ^{2} + \int_{0}^{T} \bigl\vert f(t,0,0,0) \bigr\vert ^{2}\,dt \biggr] , $$
with \(C>0\). Furthermore, if \((Y'(\cdot),Z'(\cdot))\) is the solution to (2.2) with ξ replaced by \(\xi'\), then
$$E \biggl[ \sup_{0\leq t\leq T} \bigl\vert Y(t)-Y'(t) \bigr\vert ^{2} + \frac {1}{2} \int_{0}^{T} \bigl\vert Z(t)-Z'(t) \bigr\vert ^{2}\,dt \biggr] \leq C E \bigl[ \bigl\vert \xi-\xi ' \bigr\vert ^{2} \bigr]. $$
As stated in the Introduction, the anticipated SDE is also needed in our study. The following is the assumption for the anticipated SDE.
(H2.3): 
Suppose for each \(t\in[0,T]\), \(r\in[t,T+ \delta]\), \(b:\Omega\times\mathbb{R}^{n}\times L^{2}(\mathcal{F} _{r};\mathbb{R}^{n}) \rightarrow L^{2}(\mathcal{F}_{t};\mathbb{R}^{n})\), \(\sigma:\Omega\times\mathbb{R}^{n}\times L^{2}(\mathcal{F}_{r}; \mathbb{R}^{n}) \rightarrow L^{2}(\mathcal{F}_{t};\mathbb{R}^{n\times d})\) with
$$\begin{aligned}& \bigl\vert b(t,x, \varsigma_{t}) - b \bigl(t,x', \varsigma'_{t} \bigr) \bigr\vert + \bigl\vert \sigma(t,x, \varsigma_{t}) - \sigma \bigl(t,x',\varsigma '_{t} \bigr) \bigr\vert \\& \quad \leq C \bigl( \bigl\vert x-x' \bigr\vert + E ^{\mathcal{F}_{t}} \bigl[ \bigl\vert \varsigma_{t}-\varsigma'_{t} \bigr\vert \bigr] \bigr), \end{aligned}$$
for every \(t\in[0,T]\), \(x,x'\in\mathbb{R}^{n}\), \(\varsigma(\cdot), \varsigma'(\cdot)\in L_{\mathbb{F}}^{2}(t,T+\delta;\mathbb{R}^{n})\), \(r\in[t,T+\delta]\) with \(C>0\). Moreover, \(\sup_{0\leq t \leq T} (\vert b(t,0,0)\vert + \vert \sigma(t,0,0)\vert )< +\infty\).

Proposition 2.2

Suppose \(x_{0}\in\mathbb{R}^{n}\) and \(\lambda(\cdot)\in L_{ \mathbb{F}}^{2}(T,T+\delta;\mathbb{R}^{n})\) is a given \(\mathbb{F}\)-adapted process, and assume (H2.3) holds. Then, for sufficiently small δ, the anticipated SDE
$$ \textstyle\begin{cases} dX(t) = b(t,X(t),X(t+\delta))\,dt + \sigma(t,X(t),X(t+\delta))\,dW(t), & 0\leq t\leq T; \\ X(0) = x_{0},\qquad X(t) = \lambda(t), & T< t\leq T + \delta, \end{cases} $$
(2.3)
has a unique adapted solution \(X(\cdot)\in L_{\mathbb{F}}^{2}(0,T+ \delta;\mathbb{R}^{n})\).

The above results can be found in Delong and Imkeller [11] and Chen and Huang [10]. The following is the well-known Ekeland variational principle.

Proposition 2.3

Suppose \((U,d(\cdot,\cdot))\) is a complete metric space and \(F(\cdot):U\rightarrow\mathbb{R}\) is a proper lower semi-continuous function bounded from below. Then, for every \(v\in U\) and \(\epsilon>0\) such that \(F(v)\leq\inf_{u\in U} F(u) +\epsilon\), there is \(v_{\epsilon }\in U\) such that
  1. (i)

    \(F(v_{\epsilon})\leq F(v)\),

     
  2. (ii)

    \(d(v,v_{\epsilon})\leq\sqrt{\epsilon}\),

     
  3. (iii)

    \(F(u) + \sqrt{\epsilon} d(u,v_{\epsilon})\geq F(v_{\epsilon })\), \(\forall u\in U\).

     
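For intuition, on a toy example the point \(v_{\epsilon}\) can be produced by minimizing the perturbed functional \(u \mapsto F(u) + \sqrt{\epsilon}\,d(u,v)\): any minimizer satisfies (iii) by definition, (i) because \(F(v_{\epsilon})+\sqrt{\epsilon}\,d(v_{\epsilon},v)\leq F(v)\), and (ii) because \(\sqrt{\epsilon}\,d(v,v_{\epsilon})\leq F(v)-F(v_{\epsilon})\leq\epsilon\). A minimal numerical sketch with illustrative choices \(U=[-2,2]\), \(F(u)=u^{2}\) (not taken from the paper):

```python
import math

def ekeland_point(F, d, grid, v, eps):
    """Return a minimizer of u -> F(u) + sqrt(eps) * d(u, v) over a finite grid;
       such a point satisfies properties (i)-(iii) of Ekeland's principle."""
    return min(grid, key=lambda u: F(u) + math.sqrt(eps) * d(u, v))

F = lambda u: u * u                 # proper, lower semi-continuous, bounded below
d = lambda u, w: abs(u - w)         # the metric on U
grid = [i / 1000.0 for i in range(-2000, 2001)]   # discretization of U = [-2, 2]
eps = 0.01
v = 0.05                            # F(v) = 0.0025 <= inf F + eps

v_eps = ekeland_point(F, d, grid, v, eps)

assert F(v_eps) <= F(v)                                               # (i)
assert d(v, v_eps) <= math.sqrt(eps)                                  # (ii)
assert all(F(u) + math.sqrt(eps) * d(u, v_eps) >= F(v_eps) - 1e-12
           for u in grid)                                             # (iii)
```

Here the perturbed functional is already minimized at \(v\) itself, so \(v_{\epsilon}=v\); in general \(v_{\epsilon}\) moves toward low values of F, but by at most \(\sqrt{\epsilon}\).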

3 Main result

We establish our main result in this section, namely a maximum principle for the optimal control of a stochastic delayed system involving a terminal state constraint. It should be pointed out that the presence of the time-delayed state makes the controlled system different from the classical case without delay.

3.1 Problem formulation

Let
$$\mathcal{U}_{\mathrm{ad}} \equiv \bigl\{ u(\cdot)\vert u(\cdot)\in L_{\mathbb{F}}^{2} \bigl(0,T; \mathbb{R}^{n\times d} \bigr) \bigr\} $$
be the set of admissible controls. For each given \(u(\cdot)\), we consider the past-dependent state \(X(\cdot)\) of the control system, described by
$$ \textstyle\begin{cases} dX(t) = b(t,X(t),X(t-\delta),u(t))\,dt + \sigma(t,X(t),X(t-\delta),u(t))\,dW(t), \\\quad 0\leq t\leq T; \\ X(t) = \eta(t), \\\quad -\delta\leq t\leq0, \end{cases} $$
(3.1)
where η is a given continuous function, \(b:[0,T]\times \mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d} \rightarrow\mathbb{R}^{n}\) and \(\sigma:[0,T]\times\mathbb{R}^{n} \times\mathbb{R}^{n}\times\mathbb{R}^{n\times d}\rightarrow \mathbb{R}^{n\times d}\) are given measurable functions. Define the cost function as follows:
$$J \bigl(u(\cdot) \bigr)=E \biggl[ \int_{0}^{T} \widetilde{l} \bigl(t,X(t),u(t) \bigr) \,dt + \phi \bigl(X(T) \bigr) \biggr] , $$
where \(\widetilde{l}:[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n \times d}\rightarrow\mathbb{R}\) and \(\phi:\mathbb {R}^{n}\rightarrow \mathbb{R}\) are given measurable functions. We impose the following assumptions:
(H3.1): 

The functions b, σ, \(\widetilde{l}\), ϕ are all continuously differentiable in the arguments \((x,x',u)\), and their derivatives are all bounded.

(H3.2): 

The derivatives of \(\widetilde{l}\) in its arguments \((x,u)\) and of ϕ in its argument x are bounded by \(C(1 + \vert x\vert + \vert u\vert )\) and \(C(1 + \vert x\vert )\), respectively.

Therefore, for every \(u(\cdot)\in\mathcal{U}_{\mathrm{ad}}\), under assumptions (H3.1) and (H3.2), Eq. (3.1) admits a unique adapted solution \(X(\cdot)\in L_{\mathbb{F}}^{2}(-\delta,T;\mathbb{R} ^{n})\).
Let \(K\subseteq\mathbb{R}^{n}\) be a given nonempty convex subset. The goal of our control problem is to solve
$$\mbox{Problem A:}\quad \textstyle\begin{cases} \mbox{Minimize } J(u(\cdot)) \\ \mbox{subject to }u(\cdot)\in\mathcal{U}_{\mathrm{ad}};\qquad X(T)\in K. \end{cases} $$

3.2 Time-delayed backward formulation

We now present an equivalent backward system for Problem A. To do this, one additional assumption is needed:
(H3.3): 
There exists \(\alpha>0\) such that for each \(t\in[0,T]\), \(x,x'\in\mathbb{R}^{n}\) and \(u_{1},u_{2}\in\mathbb{R}^{n\times d}\),
$$\bigl\vert \sigma \bigl(t,x,x',u_{1} \bigr) - \sigma \bigl(t,x,x',u_{2} \bigr) \bigr\vert \geq \alpha \vert u_{1}-u_{2}\vert . $$
Note that (H3.1) and (H3.3) imply, for every \((t,x,x')\in[0,T]\times \mathbb{R}^{n} \times\mathbb{R}^{n}\), that the following function
$$u \rightarrow\sigma \bigl(t,x,x',u \bigr) $$
is a bijection on \(\mathbb{R}^{n\times d}\). Hence, setting \(q\equiv\sigma(t,x,x',u)\), there exists an inverse function \(\sigma^{-1}\) satisfying \(u=\sigma^{-1}(t,x,x',q)\). We can then rewrite (3.1) as
$$\textstyle\begin{cases} -dX(t) = f(t,X(t),X(t-\delta),q(t))\,dt - q(t)\,dW(t), & 0\leq t\leq T; \\ X(t) = \eta(t), & -\delta\leq t\leq0, \end{cases} $$
where \(f(t,x,x',q)=-b(t,x,x',\sigma^{-1}(t,x,x',q))\).
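To illustrate the inversion in the simplest scalar case (an illustrative example, not the general setting of the paper), take \(n=d=1\) and a diffusion coefficient that is affine in the control,
$$\sigma \bigl(t,x,x',u \bigr)=\sigma_{0} \bigl(t,x,x' \bigr)+\alpha u, \quad \alpha\neq0, $$
which clearly satisfies (H3.3) with this α. Then
$$u=\sigma^{-1} \bigl(t,x,x',q \bigr)=\frac{q-\sigma_{0}(t,x,x')}{\alpha},\qquad f \bigl(t,x,x',q \bigr)=-b \biggl(t,x,x',\frac{q-\sigma_{0}(t,x,x')}{\alpha} \biggr), $$
so the drift of the backward formulation is obtained explicitly from b and \(\sigma_{0}\).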
Note that \(u \rightarrow\sigma(t,x,x',u)\) is a bijection, so \(q(\cdot)\) can be regarded as the control; this is the crucial observation motivating our approach to Problem A. Furthermore, by the theory of BSDEs, choosing the terminal state \(X(T)\) is equivalent to choosing \(q(\cdot)\). We therefore introduce the following ‘controlled’ system, which is essentially a time-delayed BSDE:
$$ \textstyle\begin{cases} -dX(t) = f(t,X(t),X(t-\delta),q(t))\,dt - q(t)\,dW(t), & 0\leq t\leq T; \\ X(T) = \xi, \qquad X(t)=\eta(t), & -\delta\leq t< 0, \end{cases} $$
(3.2)
where now ξ becomes the ‘control’ and belongs to the following set:
$$U= \bigl\{ \xi \vert E\vert \xi\vert^{2}< \infty, \xi\in K, \mbox{a.s.} \bigr\} . $$
Moreover, here the equivalent cost function is
$$J(\xi):= E \biggl[ \int_{0}^{T} l \bigl(t,X(t),q(t) \bigr)\,dt + \phi( \xi) \biggr] , $$
with \(l(t,x,q)=\widetilde{l}(t,x,\sigma^{-1}(t,x,x',q))\), where the delayed argument \(x'\) (evaluated at \(X(t-\delta)\)) is suppressed in the notation \(l(t,x,q)\).
Hence, the original Problem A is equivalent to the following Problem B:
$$ \mbox{Problem B:} \quad \textstyle\begin{cases} \mbox{Minimize } J(\xi) \\ \mbox{subject to } \xi\in U;\qquad X^{\xi}(0)=a, \end{cases} $$
(3.3)
where \(X^{\xi}(0)=a\) (we denote \(a=\eta(0)\) in the following for simplicity) is the solution to Eq. (3.2) at the initial time 0 under ξ.

In control theory, it is well known that a control constraint is easier to handle than a state constraint. From now on, since Problem A is equivalent to Problem B, we concentrate on Problem B. The benefit is that, with ξ now acting as a control variable, the state constraint in Problem A is replaced by a control constraint in Problem B.

Definition 3.1

For \(\xi\in U\) and \(a\in\mathbb{R}^{n}\), if the solution to (3.2) satisfies \(X^{\xi}(0)=a\), then the random variable ξ is called feasible. For any given a, the collection of all feasible ξ is denoted by \(\mathcal{N}(a)\). Moreover, if \(\xi^{*}\in U\) attains the minimum of \(J(\xi)\) over \(\mathcal{N}(a)\), we call \(\xi^{*}\) optimal.

3.3 Variational equation

In the following subsections, we use the shorthand notation:
$$\begin{aligned}& f(t) = f \bigl(t,X(t),X(t-\delta),q(t) \bigr),\qquad f^{\rho}(t) = f \bigl(t,X^{\rho}(t),X^{\rho}(t- \delta),q^{\rho}(t) \bigr), \\& f^{*}(t) = f \bigl(t,X^{*}(t),X^{*}(t- \delta),q^{*}(t) \bigr),\qquad f^{*}_{\varphi}(t) = f_{\varphi} \bigl(t,X^{*}(t),X^{*}(t- \delta),q^{*}(t) \bigr), \end{aligned}$$
where \(f_{\varphi}\) denotes the partial derivative of f at φ with \(\varphi=x,x_{\delta},q\), respectively. In U, for \(\xi^{1},\xi^{2}\in U\), define a metric by
$$d \bigl(\xi^{1},\xi^{2} \bigr):= \bigl(E \bigl\vert \xi^{1}-\xi^{2} \bigr\vert ^{2} \bigr)^{\frac{1}{2}}. $$
Clearly, \((U,d(\cdot,\cdot))\) is a complete metric space. Suppose \(\xi^{*}\) is optimal and that \((X^{*}(\cdot),q^{*}(\cdot))\) is the corresponding pair of state processes of Eq. (3.2). Because U is convex, for every ξ, the following variational control \(\xi^{\rho}\) is also in U:
$$\xi^{\rho}:= \xi^{*} + \rho \bigl(\xi- \xi^{*} \bigr),\quad 0\leq\rho\leq1. $$
Denote the solution to Eq. (3.2) associated with \(\xi=\xi^{ \rho}\) by \((X^{\rho}(\cdot),q^{\rho}(\cdot))\), and denote by \((\widehat{X}(\cdot),\widehat{q}(\cdot))\) the solution of the following variational equation:
$$ \textstyle\begin{cases} -d\widehat{X}(t) = [f^{*}_{x}(t)\widehat{X}(t) + f^{*}_{x_{\delta}}(t) \widehat{X}(t-\delta) + f^{*}_{q}(t)\widehat{q}(t)]\,dt - \widehat{q}(t)\,dW(t),& 0\leq t\leq T; \\ \widehat{X}(T) = \xi-\xi^{*},\qquad \widehat{X}(t)=0, & -\delta\leq t< 0. \end{cases} $$
(3.4)

Remark 3.2

Equation (3.4) is a linear time-delayed BSDE. By Proposition 2.1, under conditions (H3.1)-(H3.3), it has a unique adapted solution in \(L_{\mathbb{F}}^{2}(-\delta,T; \mathbb{R}^{n})\times L_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n\times d})\).

Lemma 3.3

Under assumptions (H3.1)-(H3.3), one has
$$\begin{aligned}& \lim_{\rho\rightarrow0}\sup_{0\leq t\leq T} E \bigl\vert \widetilde{X}^{\rho}(t) \bigr\vert ^{2} = 0, \\& \lim_{\rho\rightarrow0} E \int_{0}^{T} \bigl\vert \widetilde{q}^{\rho }(t) \bigr\vert ^{2}\,dt = 0, \end{aligned}$$
where
$$\widetilde{X}^{\rho}(t) = \frac{X^{\rho}(t) - X^{*}(t)}{\rho} - \widehat{X}(t),\qquad \widetilde{q}^{\rho}(t) = \frac{q^{\rho}(t) - q^{*}(t)}{\rho} - \widehat{q}(t). $$

Remark 3.4

Since the proof of Lemma 3.3 is the same as that of Lemma 3.1 in Chen and Huang [10], we only state the result and omit the details. In fact, Lemma 3.3 follows by applying Proposition 2.1, a Taylor expansion and the Lebesgue dominated convergence theorem.

3.4 Variational inequality

In this subsection we deal with the initial constraint \(X^{\xi}(0)=a\) and obtain a variational inequality.

Given the optimal \(\xi^{*}\), for a constant \(\varepsilon>0\), define \(F_{\varepsilon}(\cdot):U\rightarrow\mathbb{R}\) as follows:
$$F_{\varepsilon}(\xi)= \bigl\{ \bigl\vert X^{\xi}(0)-a \bigr\vert ^{2} + \bigl(\max \bigl(0, \phi(\xi) - \phi \bigl( \xi^{*} \bigr) + \varepsilon \bigr) \bigr)^{2} \bigr\} ^{ \frac{1}{2}}. $$
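Numerically, the definition can be sanity-checked on scalar placeholder values: if ξ is the optimum itself, then \(X^{\xi^{*}}(0)=a\) and the max term equals ε, so \(F_{\varepsilon}(\xi^{*})=\varepsilon\). A minimal sketch feeding the two gaps \(\vert X^{\xi}(0)-a\vert\) and \(\phi(\xi)-\phi(\xi^{*})\) directly as numbers (illustrative only, not an actual BSDE solve):

```python
import math

def F_eps(x0_gap, phi_gap, eps):
    """F_eps(xi) = sqrt( |X^xi(0) - a|^2 + (max(0, phi(xi) - phi(xi*) + eps))^2 ),
       evaluated from the two scalar gaps rather than from the model."""
    return math.sqrt(x0_gap ** 2 + max(0.0, phi_gap + eps) ** 2)

eps = 0.1
# at xi = xi*: the initial gap is 0 (xi* is feasible) and the phi-gap is 0
assert abs(F_eps(0.0, 0.0, eps) - eps) < 1e-12   # F_eps(xi*) = eps
# away from the optimum the initial gap keeps F_eps positive
assert abs(F_eps(0.3, -0.5, eps) - 0.3) < 1e-9
```

The second assertion illustrates the case where the max term vanishes, so \(F_{\varepsilon}\) reduces to the initial-constraint gap alone, as used in Case 2 of the proof below.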

Remark 3.5

One can verify that the functions \(\vert X^{\xi}(0)-a\vert ^{2}\) and \(\phi( \xi)\) are both continuous in their argument ξ. Hence \(F_{\varepsilon}\), defined on U, is also continuous in ξ.

Theorem 3.6

Under assumptions (H3.1)-(H3.3), suppose that \(\xi^{*}\) is an optimal solution to Problem B. Then there exist \(h_{0}\in\mathbb{R}^{+}\) and \(h_{1}\in\mathbb{R}^{n}\) satisfying \(\vert h_{0}\vert +\vert h_{1}\vert \neq0\) such that, for every \(\xi\in U\), the following variational inequality holds:
$$ \bigl\langle h_{1},\widehat{X}(0) \bigr\rangle + h_{0} \bigl\langle \phi_{x} \bigl(\xi^{*} \bigr), \xi- \xi^{*} \bigr\rangle \geq0, $$
(3.5)
where \(\widehat{X}(0)\) is the solution to Eq. (3.4) at time 0.

Proof

We can check the following properties by the definition:
$$\begin{aligned}& F_{\varepsilon} \bigl(\xi^{*} \bigr)= \varepsilon; \\& F_{\varepsilon}(\xi)> 0,\quad \forall\xi\in U; \\& F_{\varepsilon} \bigl(\xi^{*} \bigr)\leq\inf_{\xi\in U} F_{\varepsilon }(\xi) +\varepsilon. \end{aligned}$$
Therefore, from Proposition 2.3 (Ekeland’s variational principle), there exists \(\xi^{\varepsilon}\in U\) satisfying:
  1. (i)

    \(F_{\varepsilon} (\xi^{\varepsilon} ) \leq F_{\varepsilon} (\xi ^{*} )\);

     
  2. (ii)

    \(d (\xi^{\varepsilon},\xi^{*} )\leq\sqrt{ \varepsilon}\);

     
  3. (iii)

    \(F_{\varepsilon}(\xi)+\sqrt{\varepsilon}d (\xi, \xi^{\varepsilon} )\geq F_{\varepsilon} (\xi^{\varepsilon} )\), \(\forall\xi\in U\).

     
For every \(\xi\in U\), denote \(\xi_{\rho}^{\varepsilon}:= \xi^{\varepsilon}+ \rho(\xi- \xi^{\varepsilon})\), \(0\leq\rho \leq1\). Let \((X_{\rho}^{\varepsilon}(\cdot),q_{\rho}^{\varepsilon }(\cdot))\) (resp. \((X^{\varepsilon}(\cdot), q^{\varepsilon}(\cdot))\)) be the solution to (3.2) under \(\xi_{\rho}^{\varepsilon}\) (resp. \(\xi^{\varepsilon}\)), and let \((\widehat{X}^{\varepsilon}(\cdot), \widehat{q}^{\varepsilon}(\cdot))\) be the solution to (3.4) with \(\xi^{*}\) replaced by \(\xi^{\varepsilon}\). Hence, applying item (iii) above, one obtains
$$ F_{\varepsilon} \bigl(\xi_{\rho}^{\varepsilon} \bigr) - F_{\varepsilon} \bigl( \xi^{\varepsilon} \bigr) +\sqrt{\varepsilon}d \bigl( \xi_{\rho}^{\varepsilon },\xi^{\varepsilon} \bigr)\geq0. $$
(3.6)
On the other hand, similar to Lemma 3.3, one concludes
$$\lim_{\rho\rightarrow0}\sup_{0\leq t\leq T} E \bigl\vert \rho^{-1} \bigl( X_{\rho}^{\varepsilon}(t) - X^{\varepsilon}(t) \bigr) - \widehat{X}^{\varepsilon}(t) \bigr\vert ^{2} = 0. $$
Thus
$$X_{\rho}^{\varepsilon}(0) - X^{\varepsilon}(0)= \rho\widehat{X} ^{\varepsilon}(0) + o(\rho), $$
which leads to the following expansion:
$$\bigl\vert X_{\rho}^{\varepsilon}(0)-a \bigr\vert ^{2} - \bigl\vert X^{\varepsilon }(0)-a \bigr\vert ^{2} = 2\rho \bigl\langle X^{\varepsilon}(0)-a, \widehat{X}^{\varepsilon}(0) \bigr\rangle + o( \rho). $$
Moreover,
$$\begin{aligned}& \bigl\vert \phi \bigl(\xi_{\rho}^{\varepsilon} \bigr) - \phi \bigl( \xi^{*} \bigr) + \varepsilon \bigr\vert ^{2} - \bigl\vert \phi \bigl(\xi^{\varepsilon} \bigr) - \phi \bigl(\xi^{*} \bigr) + \varepsilon \bigr\vert ^{2} \\& \quad = 2 \rho \bigl[\phi \bigl( \xi^{\varepsilon} \bigr) - \phi \bigl( \xi^{*} \bigr) + \varepsilon \bigr] \bigl\langle \phi_{x} \bigl(\xi^{\varepsilon} \bigr),\xi- \xi^{\varepsilon} \bigr\rangle + o( \rho). \end{aligned}$$
Next, we study the following two cases for given \(\varepsilon>0\).
Case 1: There is \(\rho_{0}>0\) so that for every \(\rho\in(0,\rho_{0})\),
$$\phi \bigl(\xi_{\rho}^{\varepsilon} \bigr) - \phi \bigl( \xi^{*} \bigr) + \varepsilon \geq0. $$
We see that
$$\begin{aligned}& \lim_{\rho\rightarrow0}\frac{F_{\varepsilon}(\xi_{\rho }^{\varepsilon }) - F_{\varepsilon}(\xi^{\varepsilon})}{\rho} \\& \quad = \lim_{\rho\rightarrow0}\frac{1}{F_{\varepsilon}(\xi_{\rho}^{ \varepsilon}) + F_{\varepsilon}(\xi^{\varepsilon})}\frac{F_{\varepsilon }^{2}(\xi_{\rho}^{\varepsilon}) - F_{\varepsilon}^{2}( \xi^{\varepsilon})}{\rho} \\& \quad = \frac{1}{F_{\varepsilon}(\xi^{\varepsilon})} \bigl\{ \bigl\langle X^{\varepsilon}(0)-a, \widehat{X}^{\varepsilon}(0) \bigr\rangle + \bigl[\phi \bigl( \xi^{\varepsilon} \bigr) - \phi \bigl(\xi^{*} \bigr) + \varepsilon \bigr] \cdot \bigl\langle \phi_{x} \bigl(\xi^{\varepsilon} \bigr),\xi-\xi^{\varepsilon} \bigr\rangle \bigr\} . \end{aligned}$$
Now, dividing (3.6) by ρ and letting ρ tend to 0, one has
$$ h_{0}^{\varepsilon} \bigl\langle \phi_{x} \bigl(\xi^{\varepsilon} \bigr),\xi- \xi^{\varepsilon} \bigr\rangle + \bigl\langle h_{1}^{\varepsilon},\widehat{X}^{\varepsilon}(0) \bigr\rangle \geq-\sqrt{\varepsilon} \bigl[E \bigl\vert \xi- \xi^{\varepsilon} \bigr\vert ^{2} \bigr]^{\frac{1}{2}}, $$
(3.7)
where
$$\begin{aligned}& h_{0}^{\varepsilon} = \frac{1}{F_{\varepsilon}(\xi^{\varepsilon})} \cdot \bigl[\phi \bigl( \xi^{\varepsilon} \bigr) - \phi \bigl(\xi^{*} \bigr) + \varepsilon \bigr] \geq0, \\& h_{1}^{\varepsilon} = \frac{1}{F_{\varepsilon}(\xi^{\varepsilon})} \bigl( X^{\varepsilon}(0)-a \bigr) . \end{aligned}$$
Case 2: There is a positive series \(\{\rho_{n}\}\) satisfying \(\rho_{n}\rightarrow0\), so that
$$\phi \bigl(\xi_{\rho_{n}}^{\varepsilon} \bigr) - \phi \bigl( \xi^{*} \bigr) + \varepsilon \leq0. $$
From the definition of \(F_{\varepsilon}\), for large n, \(F_{\varepsilon }(\xi_{\rho_{n}}^{\varepsilon})= \{ \vert X_{\rho _{n}}^{\varepsilon}(0)-a\vert ^{2} \} ^{\frac{1}{2}}\). Owing to the continuity of \(F_{\varepsilon}(\cdot)\), one has \(F_{\varepsilon}(\xi^{\varepsilon })= \{ \vert X^{\varepsilon}(0)-a\vert ^{2} \} ^{\frac{1}{2}}\).
Moreover,
$$\begin{aligned} \lim_{n\rightarrow\infty}\frac{F_{\varepsilon}(\xi_{\rho_{n}}^{\varepsilon }) - F_{\varepsilon}(\xi^{\varepsilon})}{\rho_{n}} &= \lim_{n\rightarrow \infty} \frac{1}{F_{\varepsilon}(\xi_{\rho_{n}}^{\varepsilon}) + F_{ \varepsilon}(\xi^{\varepsilon})} \frac{F_{\varepsilon}^{2}( \xi_{\rho_{n}}^{\varepsilon}) - F_{\varepsilon}^{2}(\xi^{\varepsilon })}{\rho_{n}} \\ &= \frac{ \langle X^{\varepsilon}(0)-a,\widehat{X}^{\varepsilon}(0) \rangle }{F_{\varepsilon}(\xi^{\varepsilon})}. \end{aligned}$$
From (3.6), the same as in Case 1,
$$ \bigl\langle h_{1}^{\varepsilon}, \widehat{X}^{\varepsilon}(0) \bigr\rangle \geq -\sqrt{ \varepsilon} \bigl[E \bigl\vert \xi- \xi^{\varepsilon} \bigr\vert ^{2} \bigr]^{\frac{1}{2}}, $$
(3.8)
where
$$h_{0}^{\varepsilon} = 0,\qquad h_{1}^{\varepsilon} = \frac{1}{F_{\varepsilon}(\xi^{\varepsilon})} \bigl( X^{\varepsilon}(0)-a \bigr) . $$
In both cases, since \(F_{\varepsilon}(\xi^{\varepsilon})^{2} = \vert X^{\varepsilon}(0)-a \vert ^{2} + (\max(0,\phi(\xi^{\varepsilon})-\phi(\xi^{*})+\varepsilon))^{2}\), one has \(h_{0}^{\varepsilon}\geq0\) and
$$\bigl\vert h_{0}^{\varepsilon} \bigr\vert ^{2} + \bigl\vert h_{1}^{\varepsilon } \bigr\vert ^{2} =1. $$
Therefore, there exists a convergent subsequence of \((h_{0}^{\varepsilon },h_{1}^{\varepsilon})\) whose limit is denoted by \((h_{0},h_{1})\).

Due to \(d(\xi^{\varepsilon},\xi^{*})\leq\sqrt{\varepsilon}\), we have \(\xi^{\varepsilon}\rightarrow\xi^{*}\) as \(\varepsilon \rightarrow0\). Then, by the estimate in Proposition 2.1, \(\widehat{X}^{\varepsilon}(0)\rightarrow\widehat{X}(0)\) as \(\varepsilon\rightarrow0\). Letting \(\varepsilon\rightarrow0\) in (3.7) and (3.8), we see that (3.5) holds. This completes the proof. □

By a similar analysis, when the running cost \(l\) is not identically zero, the following variational inequality can be obtained.

Theorem 3.7

Let (H3.1)-(H3.3) hold and suppose that \(\xi^{*}\) is an optimal solution of Problem B. Then there exist \(h_{0}\in\mathbb{R}^{+}\) and \(h_{1}\in \mathbb{R}^{n}\) satisfying \(\vert h_{0}\vert +\vert h_{1}\vert \neq0\) such that, for every \(\xi\in U\), the following variational inequality holds:
$$ \bigl\langle h_{1},\widehat{X}(0) \bigr\rangle + h_{0} \bigl\langle \phi_{x} \bigl(\xi^{*} \bigr), \xi- \xi^{*} \bigr\rangle + h_{0} \int_{0}^{T} \bigl\langle l_{x}^{*}(t), \widehat{X}(t) \bigr\rangle \, dt + h_{0} \int_{0}^{T} \bigl\langle l_{q}^{*}(t), \widehat{q}(t) \bigr\rangle \,dt \geq0, $$
(3.9)
where \(l_{\varphi}^{*}(t)=l_{\varphi}(t,X^{*}(t),q^{*}(t))\) denotes the partial derivative of l at φ, \(\varphi= x,q\), evaluated along the optimal pair, and \((\widehat{X}(\cdot),\widehat{q}(\cdot))\) is the solution to the variational equation (3.4).

3.5 Maximum principle

To establish the maximum principle, we introduce in this part the following equation, which serves as the dual equation of Eq. (3.4):
$$ \textstyle\begin{cases} dm(t) = \{ f^{*}_{x}(t)^{T}m(t) + E^{\mathcal{F}_{t}} [(f^{*} _{x_{\delta}}\vert_{t+\delta})^{T}m(t+\delta) ] + h_{0}l^{*}_{x}(t) \}\,dt \\ \hspace{33pt} {} + [f^{*}_{q}(t)^{T}m(t) + h_{0}l^{*}_{q}(t)]\,dW(t), \quad 0\leq t\leq T; \\ m(0) =h_{1}, \qquad m(t)=0, \quad T< t\leq T+\delta. \end{cases} $$
(3.10)

Remark 3.8

In Eq. (3.10), \(f^{*}_{x_{\delta}}\vert_{t+\delta}\) represents the value of \(f^{*}_{x_{\delta}}\) when t is replaced by \(t+\delta\).

Remark 3.9

Equation (3.10) is a linear time-advanced SDE. By Proposition 2.2, under conditions (H3.1)-(H3.3), it admits a unique adapted solution in \(L_{\mathbb{F}}^{2}(0, T+\delta;\mathbb{R}^{n})\).

Theorem 3.10

Let (H3.1)-(H3.3) hold. If \(\xi^{*}\) is optimal for Problem B with \((X^{*}(\cdot),q^{*}(\cdot))\) the corresponding state of Eq. (3.2), then there exist \(h_{0}\in\mathbb{R}^{+}\) and \(h_{1}\in \mathbb{R}^{n}\) satisfying \(\vert h_{0}\vert +\vert h_{1}\vert \neq0\) such that, for every \(\eta\in U\),
$$ \bigl\langle m(T) + h_{0}\phi_{x} \bigl(\xi^{*} \bigr), \eta-\xi^{*} \bigr\rangle \geq0,\quad \textit{a.s.}, $$
(3.11)
where \(m(\cdot)\) is the solution of Eq. (3.10).

Proof

Applying Itô’s formula to \(\langle m(t),\widehat{X}(t) \rangle \), we obtain
$$\begin{aligned} d \bigl\langle m(t),\widehat{X}(t) \bigr\rangle = &-m(t) \bigl[f^{*}_{x}(t) \widehat{X}(t) + f ^{*}_{x_{\delta}}(t)\widehat{X}(t-\delta) + f^{*}_{q}(t) \widehat{q}(t) \bigr]\,dt \\ &{}+\widehat{X}(t) \bigl\{ f^{*}_{x}(t)^{T}m(t) + E^{\mathcal{F}_{t}} \bigl[ \bigl(f^{*}_{x_{\delta}} \vert_{t+\delta} \bigr)^{T}m(t+\delta) \bigr] + h_{0}l ^{*}_{x}(t) \bigr\} \,dt \\ &{}+\widehat{q}(t) \bigl[f^{*}_{q}(t)^{T}m(t) + h_{0}l^{*}_{q}(t) \bigr]\,dt + \{ \cdots \} \,dW(t) \\ = & \bigl\{ E^{\mathcal{F}_{t}} \bigl[ \bigl(f^{*}_{x_{\delta}} \vert_{t+\delta} \bigr)^{T}m(t+ \delta) \bigr]\widehat{X}(t) -f^{*}_{x_{\delta}}(t)m(t)\widehat{X}(t- \delta) \bigr\} \,dt \\ &{}+h_{0} \bigl[l^{*}_{x}(t)\widehat{X}(t) + l^{*}_{q}(t)\widehat{q}(t) \bigr]\,dt + \{\cdots\} \,dW(t). \end{aligned}$$
Therefore,
$$E \bigl[ \bigl\langle m(T),\widehat{X}(T) \bigr\rangle - \bigl\langle m(0), \widehat{X}(0) \bigr\rangle \bigr] = \Delta_{1} + \Delta_{2}, $$
with
$$\begin{aligned}& \Delta_{1}=E \int_{0}^{T} \bigl[ \bigl(f^{*}_{x_{\delta}} \vert_{t+\delta} \bigr)^{T}m(t+ \delta)\widehat{X}(t) -f^{*}_{x_{\delta}}(t)m(t)\widehat{X}(t- \delta) \bigr]\,dt, \\& \Delta_{2}=h_{0}E \int_{0}^{T} \bigl[l^{*}_{x}(t) \widehat{X}(t) + l ^{*}_{q}(t)\widehat{q}(t) \bigr]\,dt. \end{aligned}$$
Taking the terminal and initial conditions into account, one derives
$$\begin{aligned} \Delta_{1} = &E \int_{0}^{T} \bigl(f^{*}_{x_{\delta}} \vert_{t+\delta} \bigr)^{T}m(t+ \delta)\widehat{X}(t)\,dt - E \int_{0}^{T} f^{*}_{x_{\delta}}(t)m(t) \widehat{X}(t-\delta)\,dt \\ = &E \int_{T}^{T+\delta} f^{*}_{x_{\delta}}(t)^{T}m(t) \widehat{X}(t- \delta)\,dt-E \int_{0}^{\delta}f^{*}_{x_{\delta}}(t)m(t) \widehat{X}(t- \delta)\,dt \\ = &0. \end{aligned}$$
Hence
$$\begin{aligned}& E \bigl[ \bigl\langle m(T)+h_{0}\phi_{x} \bigl( \xi^{*} \bigr),\xi- \xi^{*} \bigr\rangle \bigr] \\& \quad = E \biggl[ \bigl\langle h_{1},\widehat{X}(0) \bigr\rangle + h_{0} \bigl\langle \phi_{x} \bigl(\xi ^{*} \bigr), \xi- \xi^{*} \bigr\rangle + h_{0} \int_{0}^{T} \bigl\langle l_{x}^{*}(t), \widehat{X}(t) \bigr\rangle \,dt + h_{0} \int_{0}^{T} \bigl\langle l_{q}^{*}(t), \widehat{q}(t) \bigr\rangle \,dt \biggr] \\& \quad \geq0. \end{aligned}$$
From the arbitrariness of \(\xi\in U\), for every \(\eta\in U\), we have
$$\bigl\langle m(T) + h_{0}\phi_{x} \bigl(\xi^{*} \bigr), \eta-\xi^{*} \bigr\rangle \geq0, \quad \mbox{a.s.} $$
 □
Now, we let ∂K represent the boundary of K and denote
$$\Omega_{0}:= \bigl\{ \omega\in\Omega\vert \xi^{*}(\omega)\in \partial K \bigr\} . $$
According to Theorem 3.10, we directly deduce the following result.

Corollary 3.11

Assume that the assumptions in Theorem  3.10 hold. Then, for each \(\eta\in K\), we have
$$\begin{aligned}& \bigl\langle m(T) + h_{0} \phi_{x} \bigl( \xi^{*} \bigr), \eta- \xi^{*} \bigr\rangle \geq0,\quad \textit{a.s. on }\Omega_{0}; \\& m(T) + h_{0}\phi_{x} \bigl(\xi^{*} \bigr) = 0, \quad \textit{a.s. on }\Omega_{0}^{c}. \end{aligned}$$

Remark 3.12

The above analysis yields a necessary condition for the optimal terminal control \(\xi^{*}\). Note that the previous transformation process together with (H3.3) allows us to perform the inverse transformation. Therefore, the characterization of the optimal control process \(u^{*}(\cdot)\) can be derived from the stochastic maximum principle obtained for the optimal terminal control \(\xi^{*}\).

4 Applications of the main result

As stated in the Introduction, in this section we study two applications of the main result established above.

4.1 Stochastic delayed LQ control involving terminal state constraints

A stochastic delayed LQ control problem involving terminal state constraints is considered in this subsection. To simplify the presentation, we focus on the case \(d=n=1\); the higher-dimensional case can be handled by the same method without substantial difficulty.

Consider the following state equation:
$$ \textstyle\begin{cases} dX(t) = [A_{1}X(t)+A_{2}X(t-\delta)+A_{3}u(t) ]\,dt \\ \hspace{32pt} {}+ [B_{1}X(t)+B_{2}X(t-\delta)+B_{3}u(t) ]\,dW(t), & 0\leq t\leq T; \\ X(t) = \eta(t), & -\delta\leq t\leq0, \end{cases} $$
(4.1)
with \(A_{i},B_{i}\in\mathbb{R}\), \(i=1,2,3\).
Next, without loss of generality, we consider a cost functional without a running cost. Subject to \(u(\cdot)\in \mathcal{U}_{\mathrm{ad}}\) and \(X(T)\in\mathbb{R}^{+}\), a.s., the goal is to minimize the following cost functional:
$$ J \bigl(u(\cdot) \bigr)=\frac{1}{2} E \bigl[X(T)^{2} \bigr]. $$
(4.2)
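Before passing to the backward formulation, the cost of a given control can also be approximated numerically. The following sketch simulates the delayed state equation (4.1) by an Euler-Maruyama scheme, reading the delayed argument off the stored path, and estimates \(J(u)=\frac{1}{2}E[X(T)^{2}]\) by Monte Carlo. All coefficient values, the constant control and the constant initial path \(\eta\) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Monte Carlo estimate of J(u) = 0.5*E[X(T)^2] for the delayed state
# equation (4.1), simulated by an Euler-Maruyama scheme.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(0)

A1, A2, A3 = 0.1, 0.05, 1.0   # drift coefficients (assumed)
B1, B2, B3 = 0.2, 0.1, 0.3    # diffusion coefficients (assumed)
T, delta, N = 1.0, 0.1, 1000  # horizon, delay, number of time steps
dt = T / N
lag = int(round(delta / dt))  # the delay expressed in grid steps

def simulate_cost(u, eta=1.0, n_paths=20000):
    """Estimate J(u) = 0.5*E[X(T)^2] for a constant control u and a
    constant initial path eta(t) = eta on [-delta, 0]."""
    # Column j of X holds X at time (j - lag)*dt, so the first lag+1
    # columns store the initial path on [-delta, 0].
    X = np.full((n_paths, N + lag + 1), eta)
    for k in range(lag, lag + N):
        x, x_del = X[:, k], X[:, k - lag]       # X(t) and X(t - delta)
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        drift = A1 * x + A2 * x_del + A3 * u
        diff = B1 * x + B2 * x_del + B3 * u
        X[:, k + 1] = x + drift * dt + diff * dW
    return 0.5 * np.mean(X[:, -1] ** 2)

print(simulate_cost(u=0.0))   # cost of the zero control
print(simulate_cost(u=-0.2))  # cost of another constant control
```

Comparing the printed costs for a few constant controls gives a quick sanity check of the model before any optimality analysis.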
Clearly, problem (4.2) is a special case of Problem A with
$$\begin{aligned}& b \bigl(t,x,x',u \bigr)=A_{1}x+A_{2}x'+A_{3}u, \\& \sigma \bigl(t,x,x',u \bigr)=B_{1}x+B_{2}x'+B_{3}u. \end{aligned}$$
Now we give the backward formulation of problem (4.2). Denote
$$\overline{A}_{1}=A_{3}B_{1}B_{3}^{-1} - A_{1},\qquad \overline{A}_{2}=A_{3}B_{2}B_{3}^{-1}-A_{2}, \qquad \overline{A}_{3}=-A_{3}B_{3}^{-1}. $$
Then Eq. (4.1) becomes
$$ \textstyle\begin{cases} -dX(t) = (\overline{A}_{1}X(t)+\overline{A}_{2}X(t-\delta)+ \overline{A}_{3}u(t))\,dt - q(t)\,dW(t),& 0\leq t\leq T; \\ X(T) = \xi, \qquad X(t)=\eta(t), & -\delta\leq t< 0, \end{cases} $$
(4.3)
and we can rewrite problem (4.2) as follows:
$$ \textstyle\begin{cases} \mbox{Minimize } J(\xi) \\ \mbox{subject to } \xi\in U;\qquad X^{\xi}(0)=a, \end{cases} $$
(4.4)
where
$$U= \bigl\{ \xi \vert E\vert \xi\vert^{2}< \infty, \xi\in\mathbb {R}^{+}, \mbox{a.s.} \bigr\} . $$
By Theorem 3.10, if \(\xi^{*}\) is optimal, then there exist \(h_{0}\in \mathbb{R}^{+}\) and \(h_{1}\in\mathbb{R}\) satisfying \(\vert h_{0}\vert +\vert h_{1}\vert \neq0\) such that, for every \(\eta\in U\),
$$ \bigl\langle m(T) + h_{0}\xi^{*}, \eta- \xi^{*} \bigr\rangle \geq0,\quad \mbox{a.s.}, $$
(4.5)
in which \(m(\cdot)\) is the solution to the following adjoint equation:
$$ \textstyle\begin{cases} dm(t) = ( \overline{A}_{1}m(t) + \overline{A}_{2}E^{\mathcal{F} _{t}} [m(t+\delta) ] )\,dt + \overline{A}_{3}m(t)\,dW(t), & 0\leq t\leq T; \\ m(0) =h_{1}, \qquad m(t)=0, & T< t\leq T+\delta. \end{cases} $$
(4.6)
Denote \(\Omega_{0}:=\{ \omega\in\Omega\vert \xi^{*}(\omega)=0 \}\). Owing to the arbitrariness of η, the following necessary condition can be deduced:
$$\begin{aligned}& m(T) + h_{0}\xi^{*}\geq0,\quad \mbox{a.s. on } \Omega_{0}; \\& m(T) + h_{0}\xi^{*} = 0,\quad \mbox{a.s. on } \Omega_{0}^{c}, \end{aligned}$$
where \(m(\cdot)\) is the solution of Eq. (4.6).

4.2 Production-consumption choice optimization problem

By applying the maximum principle established above, we investigate a production-consumption choice optimization problem in this subsection. The model, as in [15], originates from Ivanov and Swishchuk [27]. For the sake of completeness, let us present the model in detail.

We assume that an investor invests his money in producing goods, from which he obtains profits. We denote the investor’s capital, the labor and the consumption rate at time t by \(X(t)\), \(A(t)\) and \(c(t)\geq0\), respectively. Based on the assumption that the production output is a function of the capital and the labor, Ramsey [28] introduced the following model to describe this system:
$$ \frac{dX(t)}{dt}= f \bigl(X(t),A(t) \bigr) - c(t). $$
(4.7)
Since, in reality, the investment process involves both risk and delay, Chen and Wu [15] generalized model (4.7) to the following case:
$$ \textstyle\begin{cases} dX(t) = [f(X(t-\delta),A(t))-c(t)]\,dt + \sigma(X(t-\delta))\,dW(t),\quad 0\leq t\leq T; \\ X(t) = \eta(t),\quad -\delta\leq t\leq0, \end{cases} $$
(4.8)
where η is a given continuous function.

However, the rationality of this model may be questioned, since no constraint is imposed on the terminal capital \(X(T)\). In fact, in real situations the investor often sets a goal (constraint) for the terminal capital \(X(T)\) of the investment. Hence we believe that a consistent model of production and consumption should include a constraint on the terminal capital \(X(T)\), i.e., \(X(T)\in Q\), where \(Q\subseteq\mathbb{R}^{n}\).

Let us consider the following model, which modifies (4.7) and (4.8):
$$ \textstyle\begin{cases} dX(t) = [f(X(t-\delta),A(t))-c(t)]\,dt + \sigma(X(t-\delta),c(t))\,dW(t), & 0\leq t\leq T; \\ X(t) = \eta(t), & -\delta\leq t\leq0. \end{cases} $$
For simplicity, let \(n=d=1\). Consider the following hypotheses:
  1. (1)

    The production function \(f(X(t-\delta),A(t))=KX^{\alpha}(t-\delta)A ^{\beta}(t)\), where K, α, β are suitable constants. Moreover, we take \(\alpha=\beta=1\) and let \(A(t)\equiv y\) be a constant.

     
  2. (2)

    The terminal constraint set \(Q\subseteq\mathbb{R}\) is a given convex set.

     
Under the above assumptions, we can rewrite our model as follows:
$$ \textstyle\begin{cases} dX(t) = [KyX(t-\delta)-c(t)]\,dt + \sigma(X(t-\delta),c(t))\,dW(t), & 0\leq t\leq T; \\ X(t) = \eta(t), & -\delta\leq t\leq0. \end{cases} $$
(4.9)
By choosing the consumption rate \(c(t)\geq0\) under the terminal constraint \(X(T)\in Q\), the goal is to maximize the following objective functional:
$$ J \bigl(c(\cdot) \bigr)=E \biggl[ \int_{0}^{T} e^{-rt}\frac{c^{\gamma}(t)}{\gamma }\,dt + X(T) \biggr] , $$
(4.10)
where r represents the bond rate, \(\gamma\in(0,1)\), and \(1-\gamma\) represents the investor’s risk aversion.
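For intuition about the trade-off encoded in (4.10), the objective can be estimated by simulation. The sketch below simulates model (4.9) with an Euler-Maruyama scheme and evaluates (4.10) by Monte Carlo for a constant consumption rate. The linear diffusion \(\sigma(x',c)=s_{0}x'+s_{1}c\), together with all numerical values, is an illustrative assumption: the paper leaves σ general, requiring only invertibility in c.

```python
import numpy as np

# Euler-Maruyama simulation of the production model (4.9) and a Monte
# Carlo estimate of the objective (4.10) for a constant consumption
# rate c. The diffusion sigma(x', c) = s0*x' + s1*c and all parameter
# values are illustrative assumptions.

rng = np.random.default_rng(1)

K, y = 0.5, 1.0               # production coefficient, constant labor
s0, s1 = 0.2, 0.1             # assumed diffusion coefficients
r, gamma = 0.05, 0.5          # bond rate, utility exponent in (0, 1)
T, delta, N = 1.0, 0.1, 1000  # horizon, delay, number of time steps
dt = T / N
lag = int(round(delta / dt))  # the delay expressed in grid steps

def objective(c, eta=1.0, n_paths=20000):
    """Estimate J(c) = E[int_0^T e^{-rt} c^gamma/gamma dt + X(T)]
    for a constant consumption rate c >= 0 and constant initial
    path eta(t) = eta on [-delta, 0]."""
    X = np.full((n_paths, N + lag + 1), eta)
    for k in range(lag, lag + N):
        x_del = X[:, k - lag]                    # X(t - delta)
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X[:, k + 1] = (X[:, k] + (K * y * x_del - c) * dt
                       + (s0 * x_del + s1 * c) * dW)
    # Discounted utility of consumption is deterministic for constant c.
    t = np.arange(N) * dt
    utility = np.sum(np.exp(-r * t) * c**gamma / gamma) * dt
    return utility + np.mean(X[:, -1])

print(objective(c=0.5))
print(objective(c=1.0))
```

Scanning the printed values over a grid of constant rates c illustrates the tension between immediate utility of consumption and terminal capital.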
Clearly, this is a special case of Problem A when
$$\mathcal{U}_{\mathrm{ad}} \equiv \bigl\{ c(\cdot)\vert c(\cdot)\in L_{\mathbb{F}}^{2} \bigl(0,T; \mathbb{R}^{+} \bigr) \bigr\} $$
with
$$\begin{aligned}& b \bigl(t,x,x',c \bigr)=Ky\cdot x' - c,\qquad \sigma \bigl(t,x,x',c \bigr)=\sigma \bigl(x',c \bigr), \\& \widetilde{l} \bigl(t,x,x',c \bigr)=-e^{-rt} \frac{c^{\gamma}}{\gamma},\qquad \phi(x)=-x. \end{aligned}$$
Now, let \(q\equiv\sigma(x',c)\) and \(f(x',q)=-Ky\cdot x' + \widetilde{\sigma}(x',q)\), where σ̃ is the inverse function of σ with respect to c, i.e., \(c=\widetilde{\sigma}(x',q)\). Then one can rewrite (4.9) as follows:
$$ \textstyle\begin{cases} -dX(t) = [-KyX(t-\delta)+ \widetilde{\sigma}(X(t-\delta),q(t))]\,dt - q(t)\,dW(t), & 0\leq t\leq T; \\ X(t) = \eta(t), & -\delta\leq t< 0. \end{cases} $$
(4.11)
Define \(U=\{ \xi \vert E\vert \xi\vert^{2}<\infty, \xi\in Q, \mbox{a.s.} \}\) and consider the following performance function:
$$ J(\xi)=-E \biggl[ \int_{0}^{T} e^{-rt}\frac{\widetilde{\sigma}^{ \gamma}(t)}{\gamma}\,dt + \xi \biggr] . $$
(4.12)
Then problem (4.10) is equivalent to the following problem:
$$ \textstyle\begin{cases} \mbox{Minimize } J(\xi) \\ \mbox{subject to } \xi\in U;\qquad X^{\xi}(0)=a. \end{cases} $$
(4.13)
Consider the adjoint equation
$$ \textstyle\begin{cases} dm(t) = E^{\mathcal{F}_{t}} [ (-Ky + (\widetilde{\sigma}_{x _{\delta}}\vert_{t+\delta}))m(t+\delta) ]\,dt \\ \hspace{30pt}{}+ [ \widetilde{\sigma}_{q}(t)m(t)-h_{0}e^{-rt}\cdot \widetilde{\sigma}^{\gamma-1}(t) ]\,dW(t), &0\leq t\leq T; \\ m(0) =h_{1},\qquad m(t)=0,& T< t\leq T+\delta, \end{cases} $$
(4.14)
where \(h_{1}\in\mathbb{R}\) is a parameter. Denote \(\Omega_{0}:=\{ \omega\in\Omega\vert \xi^{*}(\omega)\in\partial Q \}\). Then, by Theorem 3.10, one obtains the following result.

Theorem 4.1

Suppose \((X^{*}(\cdot),c^{*}(\cdot))\) is an optimal pair for problem (4.10). Then there exist \(h_{0}\in\mathbb{R}^{+}\) and \(h_{1}\in \mathbb{R}\) satisfying \(\vert h_{0}\vert +\vert h_{1}\vert \neq 0\) such that, with \(\xi^{*}\equiv X^{*}(T)\), we have
$$\begin{aligned}& m(T) + h_{0}\xi^{*}\geq0,\quad \textit{a.s. on } \Omega_{0}; \\& m(T) + h_{0}\xi^{*} = 0,\quad \textit{a.s. on } \Omega_{0}^{c}, \end{aligned}$$
where \(m(\cdot)\) is the solution to Eq. (4.14) with parameter \(h_{1}\).

5 Conclusions

In this paper, we studied a stochastic optimal control problem for a stochastic differential delayed equation with a terminal state constraint (at the terminal time, the state is constrained in a convex set). The corresponding control problem with a non-convex terminal state constraint is still open; we will focus on it in future work.

Declarations

Acknowledgements

The first author would like to thank the Department of Mathematics of the University of Central Florida, USA, for its hospitality, and to express gratitude to Prof. Qingmeng Wei for her careful reading of this paper and helpful comments.

The first and second authors are supported by NNSF of China (Grant Nos. 11371226, 11071145, 11526205, 11626247 and 11231005), the Foundation for Innovative Research Groups of National Natural Science Foundation of China (Grant No. 11221061) and the 111 Project (Grant No. B12023).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Institute for Financial Studies and School of Mathematics, Shandong University
(2)
School of Statistics, Shandong University of Finance and Economics

References

  1. Pardoux, E, Peng, S: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55-61 (1990)
  2. Peng, S: Probabilistic interpretations for systems of quasilinear parabolic partial differential equation. Stoch. Stoch. Rep. 37, 61-74 (1991)
  3. El Karoui, N, Peng, S, Quenez, MC: A dynamic maximum principle for the optimization of recursive utilities under constraints. Ann. Appl. Probab. 11, 664-693 (2001)
  4. Yong, J, Zhou, X: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York (1999)
  5. El Karoui, N, Peng, S, Quenez, MC: Backward stochastic differential equations in finance. Math. Finance 7(1), 1-71 (1997)
  6. Barles, G, Buckdahn, R, Pardoux, E: Backward stochastic differential equations and integral-partial differential equations. Stoch. Stoch. Rep. 60, 57-83 (1997)
  7. Li, J, Wei, Q: \(L^{p}\) estimates for fully coupled FBSDEs with jumps. Stoch. Process. Appl. 124(4), 1582-1611 (2014)
  8. Li, J, Wei, Q: Stochastic differential games for fully coupled FBSDEs with jumps. Appl. Math. Optim. 71, 411-448 (2015)
  9. Buckdahn, R, Li, J, Peng, S: Mean-field backward stochastic differential equations and related partial differential equations. Stoch. Process. Appl. 119(10), 3133-3154 (2009)
  10. Chen, L, Huang, J: Stochastic maximum principle for controlled backward delayed system via advanced stochastic differential equation. J. Optim. Theory Appl. 167(3), 1112-1135 (2015)
  11. Delong, Ł, Imkeller, P: Backward stochastic differential equations with time delayed generators - results and counterexamples. Ann. Appl. Probab. 20, 1512-1536 (2010)
  12. Delong, Ł, Imkeller, P: On Malliavin’s differentiability of BSDE with time delayed generators driven by Brownian motions and Poisson random measures. Stoch. Process. Appl. 120, 1748-1775 (2010)
  13. Peng, S, Yang, Z: Anticipated backward stochastic differential equations. Ann. Probab. 37, 877-902 (2009)
  14. Wen, J, Shi, Y: Anticipative backward stochastic differential equations driven by fractional Brownian motion. Stat. Probab. Lett. 122, 118-127 (2017)
  15. Chen, L, Wu, Z: Maximum principle for the stochastic optimal control problem with delay and application. Automatica 46, 1074-1080 (2010)
  16. Øksendal, B, Sulem, A, Zhang, T: Optimal control of stochastic delay equations and time-advanced backward stochastic differential equations. Adv. Appl. Probab. 43(2), 572-596 (2011)
  17. Yu, Z: The stochastic maximum principle for optimal control problems of delay systems involving continuous and impulse controls. Automatica 48(10), 2420-2432 (2012)
  18. Agram, N, Haadem, S, Øksendal, B, Proske, F: A maximum principle for infinite horizon delay equations. SIAM J. Math. Anal. 45(4), 2499-2522 (2013)
  19. Huang, J, Li, X, Shi, J: Forward-backward linear quadratic stochastic optimal control problem with delay. Syst. Control Lett. 61(5), 623-630 (2012)
  20. Meng, Q, Shen, Y: Optimal control of mean-field jump-diffusion systems with delay: a stochastic maximum principle approach. J. Comput. Appl. Math. 279, 13-30 (2015)
  21. Ji, S, Zhou, X: A maximum principle for stochastic optimal control with terminal state constraints and its applications. Commun. Inf. Syst. 6, 321-338 (2006). A special issue dedicated to Tyrone Duncan on the occasion of his 65th birthday
  22. Ji, S, Peng, S: Terminal perturbation method for the backward approach to continuous time mean-variance portfolio selection. Stoch. Process. Appl. 118(6), 952-967 (2008)
  23. Ji, S, Zhou, X: A generalized Neyman-Pearson lemma under g-probabilities. Probab. Theory Relat. Fields 148, 645-669 (2010)
  24. Ji, S, Wei, Q: A maximum principle for fully coupled forward-backward stochastic control systems with terminal state constraints. J. Math. Anal. Appl. 407, 200-210 (2013)
  25. Aghayeva, C: Stochastic linear quadratic control problem of switching systems with constraints. J. Inequal. Appl. 2016, 100 (2016)
  26. Wei, Q: Stochastic maximum principle for mean-field forward-backward stochastic control system with terminal state constraints. Sci. China Math. 59(4), 809-822 (2016)
  27. Ivanov, A, Swishchuk, A: Optimal control of stochastic differential delay equations with application in economics. In: Conference on Stochastic Modelling of Complex Systems, Australia (2005)
  28. Ramsey, F: A mathematical theory of savings. Econ. J. 38, 543-559 (1928)

Copyright

© The Author(s) 2017