Optimal control of stochastic singular affine systems with Markovian jumps
Journal of Inequalities and Applications volume 2022, Article number: 64 (2022)
Abstract
We consider an optimal control problem for a class of stochastic singular affine systems with Markovian jumps. We establish the existence and uniqueness of the solution to stochastic singular affine systems with Markovian jumps for the first time. Via square completion technique and the generalized Itô’s formula, we derive new kinds of generalized differential Riccati equations (GDREs) and generalized backward differential equations (GBDEs), which give sufficient conditions for the well-posedness of the optimal control problem, and present an explicit representation of optimal control. Also, we discuss the solvability of the GDREs in two cases. As an application, we present a leader-follower differential game to demonstrate the practicability of our results.
1 Introduction
Optimal control theory, a core part of modern control theory, is the science of finding an optimal solution among all admissible control schemes. Its essence is the design of an optimal control law or control strategy under given performance objectives and constraints. It is worth mentioning that the ideas underlying control theory can be traced back to the book [19] by Wiener, which laid the foundation of the field.
Stochastic optimal control problems arise in a wide variety of physical, biological, and electronic systems, to mention just a few. For stochastic differential equations, even in the nonlinear case, the theory is relatively mature [3, 9, 10, 16, 20]. In [15] the authors studied linear quadratic (LQ) control problems for stochastic affine systems and characterized their open-loop and closed-loop solvabilities; for additional details, we refer the readers to the book [21] and the references therein. Recently, since they describe physical systems better than regular ones, singular systems have attracted much attention. However, the research on stochastic singular control systems is still in its infancy, and only a few papers are available so far. Zhang and Xing [23] studied the optimal control and stability of stochastic singular systems with state- and control-dependent multiplicative noise. They established a kind of GDREs whose solvability is harder to establish because of the symmetry requirement. In [24] the LQ problem for stochastic singular systems with state-dependent multiplicative noise is discussed; the authors introduced a new kind of Lyapunov functional, which made the resulting GDREs easier to solve. In [17] we discussed stochastic singular optimal control systems with state- and control-dependent multiplicative noise and, for our new kinds of GDREs, established solvability in the definite, singular, and indefinite cases.
On the other hand, Markovian jump parameter models provide a convenient mathematical framework for describing systems that undergo frequent unpredictable parameter changes. Research on stochastic linear jump systems dates back at least to the work of Krasovskii and Lidskii [6]. During the last decades, jump parameter LQ control systems have been extensively investigated (see, e.g., [4, 5, 11, 12, 22] and the references therein). For stochastic Markov jump differential equations with state- and control-dependent multiplicative noise, Li et al. [8] discussed the infinite-time horizon control problem with indefinite state and control cost weighting matrices, whereas Li and Zhou [7] investigated the same control problem on a finite time horizon. However, the papers mentioned above all concern stochastic Markov jump differential equations; for the optimal control problem of stochastic singular Markov jump systems, to the best of our knowledge, there is no existing literature. Meanwhile, the development of leader-follower differential games makes it necessary to investigate affine systems. Therefore the optimal control problem of stochastic singular affine systems with Markovian jumps arises naturally and is of particular mathematical interest.
Directly inspired by the works mentioned above, the purpose of this paper is to investigate the optimal control of stochastic singular affine systems with Markovian jumps. The main contributions are as follows. We study stochastic singular affine LQ control systems with Markovian jumps for the first time, generalizing the result in [17]. To obtain well-posedness, we introduce new kinds of GDREs and GBDEs; this is quite different from [17] because of the affine character of our new system. Moreover, the solvability of the GDREs is established. As a direct application, the results of this paper also enrich the theory of leader-follower games, one of the most important classes of differential games.
This paper comprises several sections. Preliminaries are provided in Sect. 2. In Sect. 3, we derive sufficient conditions for the well-posedness of the LQ control problem in finite- and infinite-time horizons. Section 4 focuses on the solvability of GDREs via applying matrix decomposition. In Sect. 5, as an application, we consider a leader-follower differential game. Finally, the conclusions are given in Sect. 6.
2 Preliminaries
Let \((\Omega , \mathcal {F}, \{\mathcal {F}_{t}\}_{t\geq 0}, \mathcal {P})\) be a filtered probability space carrying a one-dimensional standard Wiener process \(\{w(t)\}_{t\geq 0}\) and a right-continuous homogeneous Markov chain \(\{r_{t}\}_{t\geq 0}\) with state space \(\psi =\{1, 2, \ldots , l\}\), where \(\mathcal {F}_{t}\) is the smallest σ-algebra generated by the processes \(w(s)\) and \(r_{s}\), \(0\leq s\leq t\), i.e., \(\mathcal {F}_{t}=\sigma \{w(s), r_{s}\mid 0\leq s\leq t\}\). We assume that \(\{r_{t}\}\) is independent of \(\{w(t)\}\) and has the transition probabilities
$$ \mathcal{P}(r_{t+\Delta}=j\mid r_{t}=i)= \textstyle\begin{cases} \pi _{ij}\Delta +o(\Delta ), & i\neq j, \\ 1+\pi _{ii}\Delta +o(\Delta ), & i=j, \end{cases} $$
where \(\pi _{ij}\geq 0\) for \(i\neq j\), and \(\pi _{ii}=-\sum_{j=1, j\neq i}^{l}\pi _{ij}\).
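The transition mechanism above can be illustrated numerically. The following sketch (illustrative, not part of the paper; the function name and generator values are our own, and states are 0-indexed) simulates a right-continuous homogeneous Markov chain from its generator matrix \(\Pi =(\pi _{ij})\) by alternating exponential holding times and jumps:

```python
import numpy as np

def simulate_markov_chain(Pi, r0, T, rng=None):
    """Simulate a continuous-time homogeneous Markov chain on {0, ..., l-1}
    with generator Pi (pi_ij >= 0 for i != j, rows summing to zero).
    Returns the jump times and visited states on [0, T]."""
    rng = np.random.default_rng() if rng is None else rng
    t, i = 0.0, r0
    times, states = [0.0], [r0]
    while True:
        rate = -Pi[i, i]                 # total jump rate out of state i
        if rate <= 0:                    # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate) # holding time ~ Exp(rate)
        if t >= T:
            break
        probs = Pi[i].copy()
        probs[i] = 0.0
        probs /= rate                    # jump distribution pi_ij / rate
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        states.append(i)
    return times, states

# toy two-state generator (our own illustrative values)
Pi = np.array([[-1.0, 1.0], [2.0, -2.0]])
times, states = simulate_markov_chain(Pi, r0=0, T=10.0,
                                      rng=np.random.default_rng(0))
```

Each row of the generator sums to zero, matching \(\pi _{ii}=-\sum_{j\neq i}\pi _{ij}\).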
Consider the stochastic singular system with Markovian jump
$$ \textstyle\begin{cases} E\,dx(t)=[A(t, r_{t})x(t)+f(t)]\,dt+[C(t, r_{t})x(t)+g(t)]\,dw(t), \\ x(0)=x_{0}, \end{cases} \quad (2.1) $$
where \(x(t)\in R^{n}\) is the state variable, \(x_{0}\in R^{n}\) is a given initial value, \(f(t)\) and \(g(t)\) are inhomogeneous terms, \(E\in R^{n\times n}\) is a known singular matrix with \(\operatorname{rank}(E)=r\leq n\), \(A(t, r_{t})=A_{i}(t)\) and \(C(t, r_{t})=C_{i}(t)\) when \(r_{t}=i\), and \(A_{i}(t)\), \(C_{i}(t)\), \(i\in \psi \), are specified matrices of suitable sizes. For each \(i\in \psi \), \(A_{i}, C_{i}\in L^{\infty}(0, T; R^{n\times n})\) and \(f, g\in L^{2}(0, T; R^{n})\). Here the Lebesgue space \(L^{p}(0, T; X)\) consists of measurable functions \(\phi : [0, T]\rightarrow X\) such that \(\|\phi \|_{L^{p}(0, T; X)}<\infty \), where
$$ \|\phi \|_{L^{p}(0, T; X)}= \textstyle\begin{cases} (\int _{0}^{T}\|\phi (t)\|_{X}^{p}\,dt)^{1/p}, & 1\leq p< \infty , \\ \operatorname{ess\,sup}_{t\in [0, T]}\|\phi (t)\|_{X}, & p=\infty , \end{cases} $$
and X is the real Banach space \(R^{n}\) or \(R^{n\times n}\). We consider only a one-dimensional Wiener process, since the multidimensional noise case can be easily generalized.
For the existence and uniqueness of an impulse-free solution to system (2.1), we make the following assumptions for every \(i\in \psi \):
- (\(\mathcal{H}.2.1\)):
-
;
- (\(\mathcal{H}.2.2\)):
-
\(\operatorname{rank}([E\enskip C_{i}(t)\enskip g(t)])=\operatorname{rank}(E)\).
Remark 2.1
(\(\mathcal{H}.2.1\)) is a basic assumption to help us eliminate the impulse phenomenon in system (2.1).
Theorem 2.1
If both assumptions (\(\mathcal{H}.2.1\)) and (\(\mathcal{H}.2.2\)) hold, then the singular system (2.1) has a unique impulse-free solution on \([0, T]\).
Proof
Since \(\operatorname{rank}(E)=r\), we may consider a matrix decomposition. Under assumption (\(\mathcal{H}.2.2\)), there are nonsingular matrices \(M_{i}\) and \(N_{i}\) such that for every \(i\in \psi \),
$$ M_{i}EN_{i}= \begin{pmatrix} I_{r} & 0 \\ 0 & 0 \end{pmatrix},\qquad M_{i}C_{i}(t)N_{i}= \begin{pmatrix} C_{i, 1}(t) & C_{i, 2}(t) \\ 0 & 0 \end{pmatrix},\qquad M_{i}g(t)= \begin{pmatrix} g_{i, 1}(t) \\ 0 \end{pmatrix}, $$
where \(C_{i, 1}(t)\in R^{r\times r}\), \(C_{i, 2}(t)\in R^{r\times (n-r)}\), and \(g_{i, 1}(t)\in R^{r}\). Accordingly, define
$$ M_{i}A_{i}(t)N_{i}= \begin{pmatrix} A_{i, 11}(t) & A_{i, 12}(t) \\ A_{i, 21}(t) & A_{i, 22}(t) \end{pmatrix},\qquad M_{i}f(t)= \begin{pmatrix} f_{i, 1}(t) \\ f_{i, 2}(t) \end{pmatrix}, $$
where \(A_{i, 11}(t)\), \(A_{i, 12}(t)\), \(A_{i, 21}(t)\), \(A_{i, 22}(t)\), \(f_{i, 1}(t)\), and \(f_{i, 2}(t)\) are of appropriate dimensions. Let \(\xi (t)=N_{i}^{-1}x(t)=[\xi _{1}^{T}(t)\enskip \xi _{2}^{T}(t)]^{T}\), where \(\xi _{1}(t)\in R^{r}\), \(\xi _{2}(t)\in R^{n-r}\). Via the above transformations, system (2.1) can be rewritten as
$$ \textstyle\begin{cases} d\xi _{1}(t)=[A_{i, 11}(t)\xi _{1}(t)+A_{i, 12}(t)\xi _{2}(t)+f_{i, 1}(t)]\,dt +[C_{i, 1}(t)\xi _{1}(t)+C_{i, 2}(t)\xi _{2}(t)+g_{i, 1}(t)]\,dw(t), \\ 0=A_{i, 21}(t)\xi _{1}(t)+A_{i, 22}(t)\xi _{2}(t)+f_{i, 2}(t). \end{cases} \quad (2.2) $$
On the other side, by assumption (\(\mathcal{H}.2.1\)) we have the rank relation
that is, the matrix \(A_{i, 22}(t)\) has full row rank. Then there is a nonsingular matrix \(F_{i}(t)\) such that for every \(i\in \psi \), \(A_{i, 22}(t) F_{i}(t)=I_{n-r}\). Let \(A_{i, 12}(t)F_{i}(t)\triangleq \bar{A}_{i, 12}(t)\) and \(C_{i, 2}(t)F_{i}(t)\triangleq \bar{C}_{i, 2}(t)\). Then we can transform system (2.2) into
where \(F_{i}(t)\bar{\xi}_{2}(t)=\xi _{2}(t)\).
For every \(i\in \psi \), the first equation in system (2.3) is a stochastic ordinary differential equation. According to Theorem 6.14 in [21], it has a unique solution \(\xi _{1}(t)\) on \([0, T]\), and so does (2.3). This completes the proof. □
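The matrix decomposition used in the proof, which brings E into the form \(\operatorname{diag}(I_{r}, 0)\) via nonsingular M and N, can be computed numerically from the SVD of E. A minimal sketch (our own construction; the function name and example matrix are illustrative):

```python
import numpy as np

def singular_decomposition(E, tol=1e-10):
    """Find nonsingular M, N with M @ E @ N = diag(I_r, 0), r = rank(E).
    Uses the SVD E = U S V^T: take M = diag(1/s_1, ..., 1/s_r, 1, ..., 1) U^T
    and N = V, so that M E N = Sinv S = diag(I_r, 0)."""
    U, s, Vt = np.linalg.svd(E)
    r = int(np.sum(s > tol))
    n = E.shape[0]
    Sinv = np.eye(n)
    Sinv[:r, :r] = np.diag(1.0 / s[:r])  # invert only the nonzero singular values
    M = Sinv @ U.T
    N = Vt.T
    return M, N, r

# illustrative singular matrix of rank 2
E = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
M, N, r = singular_decomposition(E)
```

Both M and N are products of orthogonal and diagonal nonsingular factors, hence nonsingular, as the decomposition requires.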
3 Optimal control problem
In this section, we consider the LQ optimal control problem for stochastic singular systems with Markovian jumps in finite- and infinite-time horizons.
3.1 Finite-time horizon case
Consider the following stochastic singular affine equation with Markovian jumps:
$$ \textstyle\begin{cases} E\,dx(t)=[A(t, r_{t})x(t)+B(t, r_{t})u(t)+f(t)]\,dt +[C(t, r_{t})x(t)+D(t, r_{t})u(t)+g(t)]\,dw(t), \\ x(0)=x_{0}, \end{cases} \quad (3.1) $$
where \(x(t)\in R^{n}\) and \(u(t)\in R^{m}\) represent the state and control vectors, respectively. An admissible control u is an \(\mathcal {F}_{t}\)-adapted \(R^{m}\)-valued measurable process on \([0, T]\). The set of all admissible controls is denoted by \(\mathcal {U}_{\mathrm{ad}}\equiv L_{\mathcal {F}}^{2}(0, T; R^{m})\), and the others are defined similarly as before.
As in Sect. 2, we impose the following basic assumptions for each \(i\in \psi \), which just correspond to the existence and uniqueness of the solution to system (3.1):
- (\(\mathcal{H}.3.1\)):
-
;
- (\(\mathcal{H}.3.2\)):
-
\(\operatorname{rank}(E\enskip C_{i}(t) \enskip D_{i}(t)\enskip g(t))=\operatorname{rank}(E)\).
For all \((0, x_{0}, i)\) and \(u\in \mathcal{U}_{\mathrm{ad}}\), the related cost functional is defined as
and the objective of this control problem is to minimize the cost functional \(J(0, x_{0}, i; u(\cdot ))\) over \(u\in \mathcal {U}_{\mathrm{ad}}\) for a given \((0, x_{0})\in [0, T)\times R^{n}\). The value function is defined by
The following assumption will go into effect throughout this section:
- (\(\mathcal{H}.3.3\)):
-
For every \(i\in \psi \), the data arising in LQ problem (3.1)–(3.2) satisfy
$$ \textstyle\begin{cases} A_{i}, C_{i}\in L^{\infty}(0, T; R^{n\times n}),\qquad B_{i}, D_{i}\in L^{\infty}(0, T; R^{n\times m}), \\ Q_{i}\in L^{\infty}(0, T; \mathcal {S}^{n}), \qquad R_{i}\in L^{\infty}(0, T; \mathcal{S}^{m}), \qquad H_{i}\in \mathcal{S}^{n}, \\ f, g\in L^{2}(0, T; R^{n}). \end{cases} $$
where \(\mathcal{S}^{m}\) denotes the set of all \(m\times m\) symmetric matrices.
Definition 3.1
The optimization control problem (3.1)–(3.2) is called well-posed if
A well-posed problem is called attainable with respect to \((0, x_{0}, i)\) if there exists a control \(u^{*}\in \mathcal {U}_{\mathrm{ad}}\) that achieves \(\bar{V}(0, x_{0}, i)\). In such a case, the control \(u^{*}\) is called optimal with respect to \((0, x_{0}, i)\).
Lemma 3.1
([14])
For a symmetric matrix S, we have
where “†” denotes the Moore–Penrose pseudoinverse of a matrix.
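Lemma 3.1 concerns the Moore–Penrose pseudoinverse of a symmetric matrix. Its standard properties, symmetry of \(S^{\dagger}\), commutation of S with \(S^{\dagger}\), and the defining Penrose identities, can be checked numerically with NumPy's `pinv`; the following is an illustrative sketch (our own example, not reproduced from [14]):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = A + A.T                 # a symmetric matrix
S[:, 3] = 0.0               # zero out a row/column pair to force
S[3, :] = 0.0               # singularity (so pinv differs from inv)
Sd = np.linalg.pinv(S)      # Moore-Penrose pseudoinverse S^dagger
```

For a symmetric S, the pseudoinverse is again symmetric and commutes with S, which is what the square-completion arguments below rely on.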
Lemma 3.2
([13])
Given matrices L, M, and N of appropriate sizes, the matrix equation
has a solution X if and only if
In addition, any solution to (3.3) can be written as
where S is a matrix of appropriate size.
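Lemma 3.2 matches the classical solvability criterion for the matrix equation \(LXM=N\): a solution exists if and only if \(LL^{\dagger}NM^{\dagger}M=N\), in which case every solution has the form \(X=L^{\dagger}NM^{\dagger}+S-L^{\dagger}LSMM^{\dagger}\) with S arbitrary. A numerical check of this (our own construction of a consistent example; we build N from a known \(X_{0}\) so the equation is solvable by design):

```python
import numpy as np

rng = np.random.default_rng(2)
# rank-deficient L (3x4) and M (5x3), built as low-rank products
L = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 4))
M = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))
X0 = rng.standard_normal((4, 5))
N = L @ X0 @ M                                   # consistent by construction

Ld, Md = np.linalg.pinv(L), np.linalg.pinv(M)
solvable = np.allclose(L @ Ld @ N @ Md @ M, N)   # solvability criterion

S = rng.standard_normal((4, 5))                  # arbitrary free parameter
X = Ld @ N @ Md + S - Ld @ L @ S @ M @ Md        # general solution formula
```

Expanding \(LXM\) and using \(LL^{\dagger}L=L\), \(MM^{\dagger}M=M\) shows the S-dependent terms cancel, so X solves the equation for every choice of S.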
Theorem 3.1
The finite-time horizon LQ stochastic control problem (3.1)–(3.2) is well-posed if GDREs (3.4) and GBDEs (3.5) admit a solution pair \((\bar{P}(t), \bar{\phi}(t))\), where \(\bar{P}(t)=(P_{1}(t), \ldots , P_{l}(t))\in C(0, T; (R^{n\times n})^{l})\), \(\bar{\phi}(t)=(\phi _{1}(t), \ldots , \phi _{l}(t))\in C(0, T; (R^{n})^{l})\).
Moreover, the set of all optimal controls with respect to the initial \((0, x_{0}, i)\in [0, T)\times R^{n}\times \psi \) is determined by (parameterized by \(Y_{i}\), \(z_{i}\))
Furthermore, the value function is presented by
where \(C(0, T; (R^{n\times n})^{l})\) is the set of all the \((R^{n\times n})^{l}\)-valued continuous functions on \([0, T]\).
Proof
We prove this result by using the completion of squares. First, define a Lyapunov–Krasovskii functional with \(P_{i}^{T}(t)E=E^{T}P_{i}(t)\), \(i\in \psi \), on the interval \([0, T]\) as follows:
Then applying generalized Itô’s formula [1] to \(V_{1}(t, x, i)\) and \(V_{2}(t, x, i)\), after some manipulations, we have
where
and
with
Hence by equations (3.8)–(3.9) and the cost function (3.2) we get
Now let \(Y_{i}\in L_{\mathcal{F}}^{2}(0, T; R^{m\times n})\) and \(z_{i}\in L_{\mathcal {F}}^{2}(0, T; R^{m})\) be given for every \(i\in \psi \). Set
Applying Lemma 3.1 and the above discussion, for \(k=1, 2\), we have
Hence
Thus we can represent the cost functional as follows:
Hence \(J(0, x_{0}, i; u(\cdot ))\) is minimized by the control (3.6) with value function (3.7).
On the other hand, we show that any optimal control can be expressed by (3.6) for some \(Y_{i}\) and \(z_{i}\). Let \(u^{*}\) be an optimal control; thus we know that the integrand on the right-hand side of (3.11) must be zero almost everywhere in t. This gives
Applying Lemma 3.2 to solve the above equation in \(u^{*}\), we get (3.6). This completes the proof. □
Corollary 3.1
Optimal controls can be obtained in the following cases:
-
(i)
If \(K_{i}(t)\equiv 0\) for a.e. \(t\in [0, T]\) and all \(i\in \psi \), then any admissible control is optimal;
-
(ii)
If \(K_{i}(t)>0\) for a.e. \(t\in [0, T]\) and all \(i\in \psi \), then the optimal control can be expressed by
$$ u^{*}(t)=-\sum_{i=1}^{l} \bigl\{ K_{i}^{-1}(t)\bigl[L_{i}(t)x(t)+h_{i}(t) \bigr]\bigr\} \chi _{\{r_{t}=i\}}(t). $$
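The regime-switching feedback in case (ii) is straightforward to implement: at each time only the active regime \(r_{t}=i\) contributes, so one evaluates \(u^{*}(t)=-K_{i}^{-1}(t)[L_{i}(t)x(t)+h_{i}(t)]\). A sketch (function name and toy data are our own, not the paper's notation):

```python
import numpy as np

def optimal_feedback(t, x, regime, K, L, h):
    """Evaluate the regime-switching feedback
    u*(t) = -K_i(t)^{-1} [L_i(t) x(t) + h_i(t)]  when r_t = i.
    K, L, h are callables mapping (regime, t) to the coefficient
    matrices; solving the linear system avoids forming K_i^{-1}."""
    Ki, Li, hi = K(regime, t), L(regime, t), h(regime, t)
    return -np.linalg.solve(Ki, Li @ x + hi)

# toy, time-invariant data for two regimes (hypothetical values)
Ks = [np.array([[2.0]]), np.array([[4.0]])]
Ls = [np.array([[1.0, 0.0]]), np.array([[0.0, 2.0]])]
hs = [np.array([1.0]), np.array([0.0])]
u = optimal_feedback(0.0, np.array([1.0, 1.0]), 0,
                     lambda i, t: Ks[i], lambda i, t: Ls[i],
                     lambda i, t: hs[i])
```

Positive definiteness of each \(K_{i}(t)\), as required in case (ii), guarantees the linear solve is well defined.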
In particular, if the coefficient matrices in (3.1)–(3.2) are of the form \(A(t, r_{t})=A(t)\), \(B(t, r_{t})=B(t)\), etc., then we can obtain a sufficient condition for the corresponding optimal control problem, as well as an explicit expression of the optimal control.
Corollary 3.2
Consider the following optimal control problem:
with the state equation
Suppose that there are \(P\in C(0, T; R^{n\times n})\) and \(\phi \in C(0, T; R^{n})\) satisfying the GDREs (3.14) and the GBDEs (3.15):
Then the optimal control problem (3.12)–(3.13) is well-posed, and the optimal control can be represented as
and the optimal value of the cost functional can be represented by
where
3.2 Infinite-time horizon case
Consider the following stochastic singular system with Markovian jumps:
Definition 3.2
System (3.16) is called mean-square stabilizable if there is a feedback control
where \(K_{1}, \ldots , K_{l}\) are given matrices, that stabilizes the system with respect to any initial state \((x_{0}, i)\).
For a given \((x_{0}, i)\in R^{n} \times \psi \), we define the set of admissible controls
where
The control problem in this subsection is to find a control that minimizes the following quadratic cost associated with (3.16):
The value function is similarly defined as
- (\(\mathcal{H}.3.4\)):
-
The data arising in the LQ control problem (3.16)–(3.17) satisfy, for each \(i\in \psi \),
$$ \textstyle\begin{cases} A_{i}, C_{i}\in R^{n\times n}, \qquad B_{i}, D_{i}\in R^{n\times m}, \\ Q_{i}\in \mathcal{S}^{n}, \qquad R_{i}\in \mathcal{S}^{m}. \end{cases} $$ - (\(\mathcal{H}.3.5\)):
-
System (3.16) is mean-square stabilizable.
Theorem 3.2
Under hypotheses (\(\mathcal{H}.3.4\))–(\(\mathcal{H}.3.5\)), the infinite-time horizon stochastic LQ control problem (3.16)–(3.17) is well-posed if the generalized algebraic Riccati equations (GAREs) (3.18) have a solution \(P=(P_{1}, \ldots , P_{l})\in (R^{n\times n})^{l}\).
where \(L_{i}=B^{T}_{i}P_{i}+D_{i}^{T}(E^{\dagger})^{T}P_{i}^{T}EE^{\dagger}C_{i}\).
In such a case the optimal control can be expressed as
and the value function is presented by
Proof
Suppose that there exists a solution P satisfying equation (3.18). Set the Lyapunov–Krasovskii functional
Applying generalized Itô’s formula to system (3.16), we derive
where
From assumption (\(\mathcal{H}.3.5\)) we have \(\mathbb{E}[V(\infty )]=0\). Then extending the integral interval to \([0, \infty )\) for equation (3.19) and combining with (3.17), we eventually obtain
From (3.20) we can obviously obtain the optimal control and the value function. This completes the proof. □
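For intuition about solving coupled Riccati equations of Markov jump LQ problems, here is a fixed-point sketch for a much simpler special case, \(E=I\) with no multiplicative noise (\(C_{i}=D_{i}=0\)). It is NOT the paper's GARE (3.18), only an illustration of how the coupling term \(\sum_{j}\pi _{ij}P_{j}\) can be handled: the diagonal part \(\pi _{ii}P_{i}\) is absorbed into the drift, the off-diagonal part into the state weight, and each regime then solves an ordinary algebraic Riccati equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def coupled_care(A, B, Q, R, Pi, iters=50):
    """Fixed-point iteration for the coupled AREs
        A_i^T P_i + P_i A_i - P_i B_i R_i^{-1} B_i^T P_i
          + Q_i + sum_j pi_ij P_j = 0,  i = 1..l,
    in the simplified setting E = I, C_i = D_i = 0 (an illustration,
    not the paper's GARE).  At each sweep, pi_ii P_i is folded into
    the drift and the off-diagonal coupling into Q_i."""
    l, n = len(A), A[0].shape[0]
    P = [np.zeros((n, n)) for _ in range(l)]
    for _ in range(iters):
        P = [solve_continuous_are(
                A[i] + 0.5 * Pi[i, i] * np.eye(n),   # absorb pi_ii P_i
                B[i],
                Q[i] + sum(Pi[i, j] * P[j] for j in range(l) if j != i),
                R[i])
             for i in range(l)]
    return P

# toy scalar two-regime data (hypothetical values)
A = [np.array([[-1.0]]), np.array([[-2.0]])]
B = [np.array([[1.0]]), np.array([[1.0]])]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
Pi = np.array([[-0.5, 0.5], [0.5, -0.5]])
P = coupled_care(A, B, Q, R, Pi)
```

For this stable toy data the sweep is a contraction, so the iterates converge to the coupled solution; convergence in general depends on stabilizability/detectability conditions such as (\(\mathcal{H}.3.4\))–(\(\mathcal{H}.3.5\)).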
4 The solvability of GDREs
In this section, we give some conditions under which the GDREs are solvable. Due to space limitations, we deal only with the following case:
First, we declare some transformations that will be used later:
where \(P_{i, 11}(t)\in R^{r\times r}\), \(P_{i, 12}(t)\in R^{r\times (n-r)}\), \(P_{i, 21}(t)\in R^{(n-r)\times r}\), \(P_{i, 22}(t)\in R^{(n-r)\times (n-r)}\).
For discussing the solvability conditions for GDREs (4.1), we make a necessary assumption.
- (\(\mathcal{H}.4.1\)):
-
\(Q_{i}(t)\geq 0\) and \(H_{i}\geq 0\) for every \(i\in \psi \).
Under this assumption, \(\hat{Q}_{i}\geq 0\) and \(H_{i, 11}\geq 0\) for each \(i\in \psi \), and thus there exist two matrices \(F_{i, 1}\in R^{n\times r}\) and \(F_{i, 2}\in R^{n\times (n-r)}\) such that
Next, we can directly use the above transformations to GDREs (4.1), which can be divided into
with the boundary condition \(P_{i, 11}(T)=H_{i, 11}\).
Next, from the equality \(E^{T}P_{i}(t)=P_{i}^{T}(t)E\) we have \(P_{i, 11}(t)=P_{i, 11}^{T}(t)\) and \(P_{i, 12}(t)=0\). Substituting them into (4.2), after some manipulations, we obtain the following equations:
Case 1
- (\(\mathcal{H}.4.2\)):
-
\(C_{i, 2}=0\) and the triple \((A_{i, 22}\enskip B_{i, 2}\enskip F_{i, 2})\) is completely controllable and observable for every \(i\in \psi \).
In this case, equation (4.3) can be transformed as
Equation (4.4c) is an algebraic Riccati equation, and under assumption (\(\mathcal{H}.4.2\)), it has a unique positive definite solution \(P_{i, 22}(t)\) for any given \(i\in \psi \) [24]. Substituting \(P_{i, 22}(t)\) into (4.4b), we get
where
Moreover, substituting \(P_{i, 21}(t)\) into equation (4.4a), after some calculations, we have
where
Similarly to the proof of Theorem 1 in [18], we can prove that there exist \(C_{i, 0}\) and \(D_{i, 0}\) such that \(Q_{i, 0}=C_{i, 0}^{T}C_{i, 0}\) and \(R_{i}=D_{i, 0}^{T}D_{i, 0}\).
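The factorizations \(Q_{i, 0}=C_{i, 0}^{T}C_{i, 0}\) and \(R_{i}=D_{i, 0}^{T}D_{i, 0}\) of positive semidefinite matrices can be computed via the spectral decomposition; Cholesky would require strict positive definiteness. A sketch (the helper name is ours):

```python
import numpy as np

def psd_factor(Q, tol=1e-12):
    """Factor a positive semidefinite Q as Q = C^T C, analogous to
    Q_{i,0} = C_{i,0}^T C_{i,0} and R_i = D_{i,0}^T D_{i,0}.
    Returns the symmetric square root C = V diag(sqrt(w)) V^T."""
    w, V = np.linalg.eigh(Q)
    w = np.clip(w, 0.0, None)        # discard tiny negative round-off
    return (V * np.sqrt(w)) @ V.T

Q = np.array([[2.0, 1.0], [1.0, 2.0]])
C = psd_factor(Q)
```

Since C is symmetric here, \(C^{T}C=C^{2}=Q\); the construction also works for rank-deficient Q, where Cholesky fails.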
- (\(\mathcal{H}.4.3\)):
-
-
(i)
There exists \(\rho >0\) such that \(D_{i, 0}^{T}D_{i, 0}\geq \rho I_{m}\) for all \(i\in \psi \).
-
(ii)
The triple \((C_{i, 0}\enskip A_{i, 0}\enskip C_{i, 1}; \bar{Q})\) is detectable for every \(i\in \psi \), where \(\bar{Q}=(q_{ij})_{i, j\in \psi}\).
-
(iii)
The elements of the matrix Q̄ satisfy \(q_{ij}\geq 0\) for \(j\neq i\), \(i, j\in \psi \).
-
(iv)
The triple \((A_{i, 0}\enskip C_{i, 1}\enskip B_{i, 0}; \bar{Q})\) is stabilizable for each \(i\in \psi \).
Under these assumptions, by Theorem 5.6.15 in [2] there exists a unique positive semidefinite and bounded solution \(P_{i, 11}(t)\), which is also stabilizing.
Case 2
- (\(\mathcal{H}.4.2^{\prime }\)):
-
\(A_{i, 21}=0\), \(B_{i, 2}=0\), and \(\lambda _{i}+\lambda _{j}\neq 0\) for all eigenvalues \(\lambda _{i}\), \(\lambda _{j}\) of \(A_{i, 22}\), \(i\in \psi \).
In this case, equation (4.3) can be transformed into
Under assumption (\(\mathcal{H}.4.2^{\prime }\)), equation (4.5c) has a unique solution \(P_{i, 2}(t)\) for any given \(i\in \psi \) [24]. Then, similarly to Case 1, we get a unique positive semidefinite and bounded solution \(P_{i, 11}(t)\) of equation (4.5a). Substituting \(P_{i, 11}(t)\) into (4.5b), we get
Then the solution \(P_{i}(t)\) can also be obtained.
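The condition \(\lambda _{i}+\lambda _{j}\neq 0\) in (\(\mathcal{H}.4.2^{\prime }\)) is exactly the condition under which a Lyapunov-type equation \(A^{T}X+XA=C\) has a unique solution: the Kronecker-form coefficient matrix \(I\otimes A^{T}+A^{T}\otimes I\) has eigenvalues \(\lambda _{i}+\lambda _{j}\), so it is nonsingular precisely when no pair of eigenvalues sums to zero. A numerical sketch (our own helper, written under the assumption that (4.5c) is of this Lyapunov type):

```python
import numpy as np

def solve_lyapunov_like(A, C):
    """Solve A^T X + X A = C by vectorization:
    (I kron A^T + A^T kron I) vec(X) = vec(C), with column-major vec.
    The coefficient matrix is nonsingular iff lambda_i + lambda_j != 0
    for every pair of eigenvalues of A."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    x = np.linalg.solve(K, C.reshape(-1, order="F"))
    return x.reshape((n, n), order="F")

# eigenvalues -1, -2: no pair sums to zero, so the solution is unique
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
X0 = np.array([[1.0, 2.0], [3.0, 4.0]])
X = solve_lyapunov_like(A, A.T @ X0 + X0 @ A)   # should recover X0
```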
Remark 4.1
For GBDEs (3.15), using the results for linear BDEs, we can deal with them by a similar method; we omit the details for brevity.
5 Application
In this section, we study a stochastic LQ leader-follower differential game, where the state equation is an Itô-type linear stochastic singular equation with Markovian jumps, and the cost functionals are quadratic.
Consider the following stochastic singular system with Markovian jumps:
where \(u_{k}(t)\in R^{m_{k}}\) is the control process of player k, for which we denote the admissible control set by \(\mathcal{U}_{k}[0, T]\triangleq L_{\mathcal{F}}^{2}(0, T; R^{m_{k}}), k=1, 2\). For player k, the cost functional is defined by
where \(Q_{i, k}(t)=Q_{i, k}^{T}(t)\), \(H_{i, k}=H_{i, k}^{T}\), \(R_{i, k}(t)=R^{T}_{i, k}(t)\), \(k=1,2\), \(i\in \psi \). We give an assumption imposed throughout this section.
- (\(\mathcal{H}.5.1\)):
-
For every \(i\in \psi \), \(D_{1, i}=0\) or \(D_{2, i}=0\).
In the leader-follower game, player 2 is the leader, and player 1 is the follower. For a fixed initial state \(x_{0}\in R^{n}\) and any choice \(u_{2}\in \mathcal{U}_{2}[0, T]\) of player 2, player 1 seeks \(\bar{u}_{1}\in \mathcal{U}_{1}[0, T]\) such that \(J_{1}(0, x_{0}, i; \bar{u}_{1}(\cdot ), u_{2}(\cdot ))\) is the minimum of \(J_{1}(0, x_{0}, i; u_{1}(\cdot ), u_{2}(\cdot ))\) over \(u_{1}\in \mathcal{U}_{1}[0, T]\) (denoted Problem (LQ)1). Knowing that the follower takes such an optimal control \(\bar{u}_{1}\), player 2 then selects some \(\bar{u}_{2}\in \mathcal{U}_{2}[0, T]\) minimizing \(J_{2}(0, x_{0}, i; \bar{u}_{1}(\cdot ), u_{2}(\cdot ))\) over \(u_{2}\in \mathcal{U}_{2}[0, T]\) (denoted Problem (LQ)2).
To summarize, we need to solve two LQ optimal control problems: find the optimal control \(\bar{u}_{1}\) for the first and the optimal control \(\bar{u}_{2}\) for the second. However, once the other player's control is fixed, the corresponding optimal control problem is precisely the one treated in Sect. 3, so we can directly apply the conclusions developed there.
(i) LQ problem for the follower:
Theorem 5.1
Suppose the following equations (5.1)–(5.2) admit a solution pair \((P_{1}(t), \phi _{1}(t))\), where \(P_{1}(t)\in C(0, T; (R^{n\times n})^{l})\), \(\phi _{1}(t)\in C(0, T; (R^{n})^{l})\):
Then Problem (LQ)1 is well-posed, and the optimal control with respect to the initial \((0, x_{0}, i)\) is given by
(ii) LQ problem for the leader:
Theorem 5.2
Suppose the following equations (5.3)–(5.4) admit a solution pair \((P_{2}(t), \phi _{2}(t))\), where \(P_{2}(t)\in C(0, T; (R^{n\times n})^{l})\), \(\phi _{2}(t)\in C(0, T; (R^{n})^{l})\):
Then Problem (LQ)2 is well-posed, and the optimal control with respect to the initial \((0, x_{0}, i)\) is given by
Remark 5.1
It is worth pointing out that assumption (\(\mathcal{H}.5.1\)) is necessary for the main conclusions of Theorems 5.1 and 5.2: it allows us to obtain explicit expressions of the controls \(\bar{u}_{1}\) and \(\bar{u}_{2}\) simultaneously. On the other hand, under this assumption our results on the leader-follower differential game have certain limitations, and we will pay more attention to this problem in the future.
6 Conclusion
We have studied an optimal control problem for a class of stochastic singular affine systems with Markovian jumps, separately in finite- and infinite-time horizons. By the generalized Itô formula and the square completion technique we established a sufficient condition for the well-posedness of the control problem. In particular, we also obtained the solvability of the GDREs by applying matrix decompositions. As a typical application, we discussed a leader-follower differential game.
Due to the considerable application potential of this class of stochastic singular affine systems, the topic deserves further research attention. It is also necessary to compute explicit representations of the solutions to the GDREs and GBDEs. We leave these issues for future research.
Availability of data and materials
Not applicable.
Abbreviations
- GDREs:
-
generalized differential Riccati equations
- GBDEs:
-
generalized backward differential equations
- LQ:
-
linear quadratic
References
Björk, T.: Finite dimensional optimal filters for a class of Itô-processes with jumping parameters. Stochastics 4(2), 167–183 (1980)
Dragan, V., Morozan, T., Stoica, A.M.: Mathematical Methods in Robust Control of Linear Stochastic Systems. Springer, Berlin (2006)
Golmankhaneh, A.K., Tunc, C.: Stochastic differential equations on fractal sets. Stoch. Int. J. Probab. Stoch. Process. 92(8), 1244–1260 (2020)
Ji, Y., Chizeck, H.J.: Controllability, stabilizability, and continuous-time Markovian jump linear-quadratic control. IEEE Trans. Autom. Control 35(7), 777–788 (1990)
Ji, Y., Chizeck, H.J.: Jump linear quadratic Gaussian control in continuous time. IEEE Trans. Autom. Control 37(12), 1884–1892 (1991)
Krasovskii, N.N., Lidskii, E.A.: Analytical design of controllers in systems with random attributes I, II, III. Autom. Remote Control 22, 1021–1025, 1141–1146, 1289–1294 (1961)
Li, X., Zhou, X.Y.: Indefinite stochastic LQ controls with Markovian jumps in a finite time horizon. Commun. Inf. Syst. 2(3), 265–282 (2002)
Li, X., Zhou, X.Y., Ait Rami, M.: Indefinite stochastic linear quadratic control with Markovian jumps in infinite time horizon. J. Glob. Optim. 27(2–3), 149–175 (2003)
Mahmoud, A.M., Tunc, C.: Asymptotic stability of solutions for a kind of third-order stochastic differential equations with delays. Miskolc Math. Notes 20(1), 381–393 (2019)
Mao, X.R.: Stochastic Differential Equations and Applications, 2nd edn. Horwood, Chichester (2008)
Mariton, M.: Jump Linear Systems in Automatic Control. Dekker, New York (1990)
Rami, M.A., Ghaoui, L.E.: LMI optimization for nonstandard Riccati equations arising in stochastic control. IEEE Trans. Autom. Control 41(11), 1666–1671 (1996)
Rami, M.A., Moore, J.B., Zhou, X.Y.: Indefinite stochastic linear quadratic control and generalized differential Riccati equation. SIAM J. Control Optim. 40(4), 1296–1311 (2006)
Rami, M.A., Zhou, X.Y.: Linear matrix inequalities, Riccati equations, and indefinite stochastic linear quadratic controls. IEEE Trans. Autom. Control 45(6), 1131–1143 (2000)
Sun, J.R., Li, X., Yong, J.M.: Open-loop and closed-loop solvabilities for stochastic linear quadratic optimal control problems. SIAM J. Control Optim. 54(5), 2274–2308 (2015)
Tunc, O., Tunc, C.: On the asymptotic stability of solutions of stochastic differential delay equations of second order. J. Taibah Univ. Sci. 13(1), 875–882 (2019)
Wang, X., Liu, B.: Optimal control of stochastic singular systems and the generalized differential Riccati equations. Unpublished
Wang, Y.Y., Shi, S.J., Zhang, Z.J.: A descriptor-system approach to singular perturbation of linear regulators. IEEE Trans. Autom. Control 33(4), 370–373 (1988)
Wiener, N.: Cybernetics: Control and Communication in the Animal and the Machine. MIT Press, Cambridge (1948)
Yao, J.C.: Qualitative analyses of differential systems with time-varying delays via Lyapunov–Krasovski approach. Mathematics 9(11), 1196 (2021)
Yong, J.M., Zhou, X.Y.: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York (1999)
Zhang, Q., Yin, G.G.: On nearly optimal controls of hybrid LQG problems. IEEE Trans. Autom. Control 44(12), 2271–2282 (1999)
Zhang, Q.L., Xing, S.Y.: Stability analysis and optimal control of stochastic singular systems. Optim. Lett. 8(6), 1905–1920 (2014)
Zhang, W.H., Lin, Y.N., Xue, L.R.: Linear quadratic Pareto optimal control problem of stochastic singular systems. J. Franklin Inst. 354(2), 1220–1238 (2016)
Acknowledgements
The authors would like to express their gratitude to the editor and the reviewers for valuable advice that helped to improve the manuscript.
Funding
The work was partly supported by NNSF of China (Grant No. 11901329) and the NNSF of Shandong Province (Grant No. ZR2020QA008, ZR2019BA022).
Author information
Contributions
XW, LW and YL studied and prepared the manuscript. LW and YL analyzed the results and made necessary improvements. XW is the major contributor in writing the paper. All authors studied the results together, and they read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wang, X., Wang, L. & Liu, Y. Optimal control of stochastic singular affine systems with Markovian jumps. J Inequal Appl 2022, 64 (2022). https://doi.org/10.1186/s13660-022-02804-1
DOI: https://doi.org/10.1186/s13660-022-02804-1