# Explicit solutions of conjugate, periodic, time-varying Sylvester equations

## Abstract

Solutions of a group of conjugate, time-varying matrix equations are discussed in this paper. Through mathematical derivation, solving this group of equations is shown to be equivalent to solving a class of conjugate, time-invariant matrix equations. Further, the related solvability conditions are obtained and the general explicit solutions are represented by using quasicontrollability and quasiobservability matrices. A detailed algorithm is presented to make the calculation process clear, and its effectiveness is verified by a concrete example. The proposed algorithm provides complete solutions to the considered equation in explicit parametric form, and its main computation consists of solving an ordinary linear algebraic equation and some matrix multiplications.

## 1 Introduction

Because solving matrix equations has extensive applications in mathematics, control, computers, physics, and other fields, it has been widely studied by scholars in both control and mathematics fields. A complex matrix equation refers to a matrix equation whose coefficient matrices and unknown matrices are taken from the complex field. It is widely used in stability analysis and system synthesis in the control field and has attracted extensive attention [1–5]. In [1], a complex Riccati equation is studied and an iterative algorithm based on an HSS-like method is proposed, whose convergence is proven. In [2], time-invariant equations possessing the form of $$AXB+CXD=E$$ are investigated in a complex field and the minimal norm solutions are derived in the sense of least squares. The complex equations applied to a regulation problem are considered and the solutions are provided in an explicit and parametric formulation in [3]. The solutions of a class of complex Sylvester equations are presented in [4] in closed form. In [5], the solutions to an anti-Riccati matrix equation are utilized to deal with the regulation controller design of antilinear systems in a discrete case.

A periodic system is a kind of time-varying system, which has been extensively applied to many engineering fields. For example, a multirate sampling digital system is usually modeled as a cyclic system. Many dynamic systems including periodic properties can be naturally represented as periodic systems. For instance, the dynamic behavior of a pendulum, a helicopter, satellite attitude, and other objects has been modeled as periodic systems in the existing literature [6]. Periodic equations including periodic Lyapunov, Riccati, and Sylvester equations are widely used in performance analysis and robust control of periodic systems [7, 8]. Therefore, a large number of research achievements have been made. For example, in [9], by weighting the estimation results of the current step and the previous step, an iterative algorithm is constructed to approximate the unknown matrices in periodic Lyapunov equations. For the same equation, [10] proposes another iterative algorithm, and its convergence rate depends on a tunable parameter. Solving complex Sylvester periodic, time-varying equations via a numerical algorithm is investigated in [11]. Gradient neural-network models with various activation functions are used to approximate the solutions of the periodic, time-varying Sylvester equations in [12]. Numerical iterative solutions to periodic, coupled equations and periodic, bimatrix equations are investigated in [13] and [14], respectively. Periodic regulation equations are discussed in [15], and the parametric solutions with sufficient degrees of freedom are derived.

Matrix multiplication, as one of the basic operations of matrices, plays an important role in solving matrix equations ([16]). In control theory, complex matrix equations can describe the behavior of systems, helping us to understand and design control systems. In signal processing, complex matrix equations can represent the coefficients of filters, helping us to design and apply various filters. Therefore, the solutions of complex matrix equations have been widely studied. In [17], time-derivative information of complex matrix coefficients is utilized to build finite-time convergent neural dynamics for solving a class of linear complex matrix equations, and the upper bound of the convergence time is presented. Two algorithms based on the Moore–Penrose inverse are provided to solve the least-squares Hermitian problem of the complex matrix equations $$AXB+CXD=E$$ and $$AXB=E$$ in [18]. In [19], a modified Hermitian and skew-Hermitian splitting method is proposed to solve a large, sparse Sylvester equation with non-Hermitian and complex symmetric positive-definite/semidefinite matrices, which is a four-parameter iteration procedure where the iterative sequence is unconditionally convergent to the unique solution of the Sylvester equation. For more research on similar topics, one can refer to the references in the mentioned literature.

The complex-conjugate, periodic Sylvester matrix equation considered here is $$A(j)X(j) - \overline {X (j + 1)}F(j) = B(j)Y(j)$$, whose coefficient matrices and unknown matrices are all ω-periodic and complex. This type of equation usually originates from the analysis and design of complex systems and isotropic systems. When the coefficient matrices are real, this equation becomes a periodic Sylvester matrix equation. When the coefficient matrices take constant values, this equation becomes a complex-valued Sylvester matrix equation. It becomes a common Sylvester matrix equation when the coefficient matrices of this equation are both constant and real, which often appears in linear systems. In summary, the complex-conjugate, periodic Sylvester matrix equation covers many kinds of periodic, time-varying or time-invariant matrix equations as its special forms, and each of them is essential in specific fields of control theory and engineering practice. Therefore, the study of this equation has important scientific significance and engineering value.

In this paper, we aim to provide a group of solutions to a complex-conjugate, periodic, time-varying Sylvester matrix equation in explicit parametric form. The conditions of the existence of the solutions will also be considered. The related conditions of solvability are obtained and the general explicit solutions are represented by using quasicontrollability and quasiobservability matrices. The arrangement of the article is as follows. Some notations and operations of complex matrices are provided in Sect. 2. Section 3 provides the solving condition and derives the parametric solutions to the considered conjugate, time-varying matrix equations. Section 4 checks the validity of the proposed algorithm by a numerical example and the last section summarizes the article.

In the remainder of this section, some simple notations are listed. $$[ a ]$$ denotes the integer part of a real number a, satisfying $$a= [ a ] +q$$, where $$0\leq q<1$$. $$\overline{A}$$ represents the conjugate of matrix A. $$\lambda (A)$$ indicates the set of all the eigenvalues of matrix A. Denote $$A^{\mathrm{{T}}}$$ as the transpose of matrix A. In addition, $$\overline{j,k}$$ is used to stand for $$\{j,j+1,\ldots ,k\}$$ when $$j\leq k$$.

## 2 Notations and operations of complex matrices

Given an arbitrary complex matrix $$A\in \mathbb{C}^{m\times n}$$, by rewriting A as $$A=A_{1}+A_{2}i$$, we can obtain the real transformation of A as

\begin{aligned} A_{\sigma }= \begin{bmatrix} A_{1} & A_{2} \\ A_{2} & -A_{1}\end{bmatrix} \in \mathbb{R}^{2m\times 2n}, \end{aligned}
(1)

where $$i=\sqrt{-1}$$, and $$A_{1}$$, $$A_{2}$$ are both real matrices. The real transformations of complex matrices have some useful properties that are often utilized in seeking the solutions to complex matrix equations; one can refer to the literature [20].
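As a quick illustration, the real transformation (1) can be assembled blockwise with NumPy; the helper name `real_representation` below is ours, not from the paper.

```python
import numpy as np

def real_representation(A: np.ndarray) -> np.ndarray:
    """Real transformation A_sigma from (1) for A = A1 + A2*i."""
    A1, A2 = A.real, A.imag
    return np.block([[A1, A2],
                     [A2, -A1]])

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, -2 + 0j]])
A_sigma = real_representation(A)  # a 4x4 real matrix
```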

Next, we will give the definition of the complex-conjugate operation that plays an important role in the deduction of this paper. Given a matrix $$A\in \mathbb{C}^{m\times n}$$ and an integer $$l\geq 0$$, the complex-conjugate operation $$A^{\ast l}$$ is defined as

\begin{aligned} A^{\ast l}=\textstyle\begin{cases} A&\text{if } l \text{ is even}, \\ \overline{A}&\text{if } l \text{ is odd}. \end{cases}\displaystyle \end{aligned}

From the above definition, we can easily obtain that $$A^{\ast l}=\overline{A^{\ast (l-1)}}$$ for $$l\geq 1$$ and $$A^{\ast 0}=A$$.

For convenience of description, we present several complex field operators.

### Definition 1

[21] Let $$A\in \mathbb{C}^{n\times n}$$ and $$k>0$$ be an integer. Define the complex operations $$A^{\overleftarrow{k}}$$ and $$A^{\overrightarrow{k}}$$ as

\begin{aligned} A^{\overrightarrow{k}} &\triangleq ( A\overline{A} ) ^{ [ \frac{k}{2} ] }A^{k-2 [ \frac{k}{2} ] }, \\ A^{\overleftarrow{k}} &\triangleq A^{k-2 [ \frac{k}{2} ] } ( \overline{A}A ) ^{ [ \frac{k}{2} ] }\text{.} \end{aligned}

From the notation of $$[a]$$, we can see that

\begin{aligned} A^{\overrightarrow{1}}=A,\qquad A^{\overrightarrow{2}}=A\overline{A},\qquad A^{ \overrightarrow{3}}=A\overline{A}A \end{aligned}

and

\begin{aligned} A^{\overleftarrow{1}}=A, \qquad A^{\overleftarrow{2}}=\overline{A}A,\qquad A^{ \overleftarrow{3}}=A \overline{A}A. \end{aligned}

For more useful properties of the two operations, one can refer to reference [21].
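These operations are mechanical enough to script. Below is a minimal NumPy sketch; the function names `conj_power`, `right_arrow`, and `left_arrow` are our own conventions for $$A^{\ast l}$$, $$A^{\overrightarrow{k}}$$, and $$A^{\overleftarrow{k}}$$.

```python
import numpy as np

def conj_power(A, l):
    """A^{*l}: A when l is even, the entrywise conjugate of A when l is odd."""
    return A.conj() if l % 2 == 1 else A

def right_arrow(A, k):
    """A^{->k} = (A conj(A))^{[k/2]} A^{k-2[k/2]} from Definition 1."""
    out = np.linalg.matrix_power(A @ A.conj(), k // 2)
    return out @ A if k % 2 == 1 else out

def left_arrow(A, k):
    """A^{<-k} = A^{k-2[k/2]} (conj(A) A)^{[k/2]} from Definition 1."""
    out = np.linalg.matrix_power(A.conj() @ A, k // 2)
    return A @ out if k % 2 == 1 else out
```

For instance, `right_arrow(A, 3)` reproduces $$A\overline{A}A$$, matching the identities displayed above.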

Given matrices $$A\in \mathbb{C}^{n\times n}$$, $$B\in \mathbb{C}^{n\times r}$$, and $$C\in \mathbb{C}^{p\times n}$$, define the quasicontrollability and quasiobservability matrices as

\begin{aligned} &\overrightarrow{\mathrm{Ctr}}_{t}(A,B) = \begin{bmatrix} B & A\overline{B} & \cdots & A^{\overrightarrow{t-1}}B^{\ast (t-1)}\end{bmatrix} \text{,} \end{aligned}
(2)
\begin{aligned} &\overrightarrow{\mathrm{Ctr}}(A,B)=\overrightarrow{\mathrm{Ctr}}_{n}(A,B), \\ &\overleftarrow{\mathrm{Obs}}_{t} ( A,C ) = \begin{bmatrix} C \\ \overline{C}A \\ \vdots \\ C^{\ast (t-1)}A^{\overleftarrow{t-1}}\end{bmatrix}, \\ &\overleftarrow{\mathrm{Obs}} ( A,C ) = \overleftarrow{\mathrm{Obs}}_{n} ( A,C ). \end{aligned}
(3)

It can be easily seen that when A, B, C are all real matrices, then the quasicontrollability matrix $$\overrightarrow{\mathrm{Ctr}}(A,B)$$ becomes a so-called controllability matrix in linear system theory and the quasiobservability matrix $$\overleftarrow{\mathrm{Obs}} ( A,C )$$ becomes a so-called observability matrix.
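The two block matrices can be assembled directly from Definition 1. The sketch below (our own helper names) mirrors (2) and (3); as the text notes, for real A, B, C it collapses to the ordinary controllability and observability matrices, which gives a convenient sanity check.

```python
import numpy as np

def conj_power(M, l):
    return M.conj() if l % 2 == 1 else M

def right_arrow(M, k):
    out = np.linalg.matrix_power(M @ M.conj(), k // 2)
    return out @ M if k % 2 == 1 else out

def left_arrow(M, k):
    out = np.linalg.matrix_power(M.conj() @ M, k // 2)
    return M @ out if k % 2 == 1 else out

def quasi_ctr(A, B, t):
    """Ctr_t(A, B) = [B, A conj(B), ..., A^{->(t-1)} B^{*(t-1)}] from (2)."""
    return np.hstack([right_arrow(A, k) @ conj_power(B, k) for k in range(t)])

def quasi_obs(A, C, t):
    """Obs_t(A, C) = [C; conj(C) A; ...; C^{*(t-1)} A^{<-(t-1)}] from (3)."""
    return np.vstack([conj_power(C, k) @ left_arrow(A, k) for k in range(t)])
```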

Finally, for a group of matrices $$D_{i}\in \mathbb{C}^{r\times r}, i\in \overline{0,t}$$, denote

\begin{aligned} S_{t}(D)= \begin{bmatrix} \overline{D_{1}} & \overline{D_{2}} & \overline{D_{3}} & \overline{D_{4}} & \cdots & \overline{D_{t}} \\ D_{2} & D_{3} & D_{4} & \cdots & D_{t} & \\ \overline{D_{3}} & \overline{D_{4}} & \cdots & \overline{D_{t}} & & \\ \cdots & \cdots & \cdots & & & \\ D_{t-1}^{\ast (t-1)} & D_{t}^{\ast (t-1)} & & & & \\ D_{t}^{\ast t} & & & & & \end{bmatrix}. \end{aligned}
(4)

## 3 An explicit method

This section focuses on the explicit solutions to the complex, periodic, time-varying Sylvester equation derived in parametric form and a detailed algorithm will be presented.

Consider the complex-conjugate, periodic Sylvester matrix equation

\begin{aligned} A(j)X(j) - B(j)Y(j) = \overline {X (j + 1)}F(j), \end{aligned}
(5)

where $$j \in \mathbb{Z}$$, $$A(j) \in {\mathbb{C}^{n \times n}}$$, $$B(j) \in {\mathbb{C}^{n \times r}}$$, $$F(j) \in {\mathbb{C}^{m \times m}}$$ are known ω-periodic matrices such that $$A(j+\omega )=A(j), B(j+\omega )=B(j), F(j+\omega )=F(j)$$, while $$X(j)\in \mathbb{C}^{n\times m}$$, $$Y(j)\in \mathbb{C}^{r\times m}$$ are matrices to be solved.

Define

\begin{aligned} {\Phi _{\mathrm{A}}}(k,j) \triangleq A(k - 1)\overline{A(k - 2)}\cdots {A^{*(k - j - 1)}}(j),\quad\forall k > j. \end{aligned}
(6)

Furthermore, let

\begin{aligned} {\Phi _{\mathrm{A}}}(j,j) = {I_{n}}. \end{aligned}
(7)
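In code, $${\Phi _{\mathrm{A}}}(k,j)$$ is a product of the periodic coefficients in which every second factor, counted from $$A(k-1)$$, is conjugated. The sketch below uses our own helper name `phi` and takes the periodic coefficient as a callable; applied to the data of Example 1 in Sect. 4, it returns the $${\Phi _{\mathrm{A}}}(3,0)$$ and $${\Phi _{\mathrm{F}}}(3,0)$$ displayed there.

```python
import numpy as np

def phi(A_seq, k, j):
    """Phi_A(k, j) = A(k-1) conj(A(k-2)) ... A^{*(k-j-1)}(j), with Phi(j, j) = I,
    as in (6)-(7).  A_seq(m) returns the omega-periodic coefficient A(m)."""
    n = A_seq(0).shape[0]
    out = np.eye(n, dtype=complex)
    for m in range(k - 1, j - 1, -1):      # factors A(k-1), A(k-2), ..., A(j)
        power = (k - 1) - m                # 0, 1, ..., k-j-1 conjugations
        out = out @ (A_seq(m).conj() if power % 2 == 1 else A_seq(m))
    return out
```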

The matrix $${\Phi _{\mathrm{F}}}(k,j)$$ is defined from F(j) in the same way. We will first discuss the relationship between periodic, time-varying matrix equations and a special class of time-invariant matrix equations.

### Theorem 1

Let $$F(j),j \in \overline {0,\omega - 2}$$ be nonsingular. The conjugate periodic equation (5) is equivalent to

\begin{aligned} {\Phi _{\mathrm{A}}}(\omega ,0){X^{*(\omega - 1)}}(0) -\tilde{B} \tilde{Y} = \overline{X(0)} {\Phi _{\mathrm{F}}}(\omega ,0), \end{aligned}
(8)

where

\begin{aligned} \tilde{B}= \begin{bmatrix} {\Phi _{\mathrm{A}}}(\omega ,1)B^{*(\omega -1)}(0) & {\Phi _{\mathrm{A}}}(\omega ,2)B^{*(\omega -2)}(1) & \cdots & {\Phi _{\mathrm{A}}}(\omega ,\omega )B(\omega -1) \end{bmatrix}, \end{aligned}
(9)
\begin{aligned} \tilde{Y}= \begin{bmatrix} \tilde{Y}^{\mathrm{T}}(0) & \tilde{Y}^{\mathrm{T}}(1) & \cdots & \tilde{Y}^{\mathrm{T}}(i) & \cdots & \tilde{Y}^{\mathrm{T}}(\omega -1) \end{bmatrix}^{\mathrm{T}}, \end{aligned}
(10)

and

\begin{aligned} \tilde{Y}(i)=Y^{*(\omega -i-1)}(i)\Phi _{\mathrm{F}}^{*(\omega -i)}(i,0),\quad i\in \overline{0,\omega -1}. \end{aligned}
(11)

### Proof

Expanding equation (5) at each time in a period gives

\begin{aligned} & A(0)X(0)- B(0)Y(0) =\overline{X(1)}F(0) \end{aligned}
(12)
\begin{aligned} &\vdots \\ & A(\omega - 3)X(\omega - 3) - B(\omega - 3)Y(\omega - 3) = \overline{X(\omega - 2)}F(\omega - 3) \end{aligned}
(13)
\begin{aligned} & A(\omega - 2)X(\omega - 2) - B(\omega - 2)Y(\omega - 2) = \overline{X(\omega - 1)}F(\omega - 2) \end{aligned}
(14)
\begin{aligned} & A(\omega - 1)X(\omega - 1) - B(\omega - 1)Y(\omega - 1) = \overline{X(0)}F(\omega - 1). \end{aligned}
(15)

Postmultiplying the two sides of equation (15) with $$\overline {F(\omega - 2)}$$, one has

\begin{aligned} A(\omega - {\mathrm{{1}}})X(\omega - {\mathrm{{1}}})\overline{F(\omega - 2)} - B( \omega - {\mathrm{{1}}})Y(\omega - {\mathrm{{1}}})\overline{F(\omega - 2)} = \overline{X ({\mathrm{{0}}})}F(\omega - {\mathrm{{1}}})\overline{F(\omega - 2)}. \end{aligned}
(16)

Substituting the conjugate of equation (14) into the above formula, we can obtain

\begin{aligned} \begin{aligned} &A(\omega - {\mathrm{{1}}})\overline {A(\omega - { \mathrm{{2}}})} \overline {X(\omega - 2)} - B(\omega - {\mathrm{{1}}})Y(\omega - { \mathrm{{1}}}) \overline{F(\omega - 2)} \\ &\quad = \overline {X({\mathrm{{0}}})}F(\omega - {\mathrm{{1}}}) \overline{F(\omega - 2)} + A(\omega - 1)\overline{B(\omega - 2)}\overline{Y(\omega - 2)}. \end{aligned} \end{aligned}
(17)

Postmultiplying this formula with $$F(\omega - 3)$$ and substituting equation (13), one has

\begin{aligned} \begin{aligned}&{\Phi _{\mathrm{A}}}(\omega ,\omega - 3)X(\omega - 3) -B(\omega - 1)Y( \omega - 1)\overline{F(\omega - 2)}F(\omega - 3) \\ &\quad = \overline{X (0)} { \Phi _{\mathrm{F}}}(\omega ,\omega - 3) \\ &\qquad{}+ {\Phi _{\mathrm{A}}}(\omega ,\omega - 1)\overline{B(\omega - 2)} \overline{Y( \omega - 2)}F(\omega - {\mathrm{{3}}}) + {\Phi _{\mathrm{A}}}( \omega ,\omega - 2)B(\omega - 3)Y(\omega - 3). \end{aligned} \end{aligned}
(18)

Continuing the same operations until equation (12) is incorporated, we have

\begin{aligned} {\Phi _{\mathrm{A}}}(\omega ,{\mathrm{{0}}}){X^{*(\omega - 1)}}({\mathrm{{0}}}) - \overline{ X (0)} {\Phi _{\mathrm{F}}}(\omega ,{\mathrm{{0}}}) = \sum _{i = 0}^{\omega - 1} {{\Phi _{\mathrm{A}}}(\omega ,i + 1){B^{*(\omega - i - 1)}}(i){Y^{*( \omega - i - 1)}}(i)\Phi _{\mathrm{F}}^{*(\omega - i)}(i,0)}. \end{aligned}

Noting formulas (9), (10), and (11), one can find the above equation is exactly equation (8).

Since the above derivation process is reversible, the two equations (5) and (8) have the same set of solutions. We thus complete the proof. □

Based on the above theorem, according to Lemma 5 of [21], we have the following conclusion.

### Proposition 1

For matrices $$A(j) \in {\mathbb{C}^{n \times n}}$$ and $$F(j) \in {\mathbb{C}^{m \times m}}$$, matrix equation (8) has a unique solution if and only if $$\lambda ({\Phi _{\mathrm{A}}}(\omega ,0)\overline {{\Phi _{\mathrm{A}}}(\omega ,0)} ) \cap \lambda ({\Phi _{\mathrm{F}}}(\omega ,0)\overline {{\Phi _{\mathrm{F}}}(\omega ,0)} ) = \emptyset$$.

In the following, we will give an expression of solutions for the complex, time-invariant Sylvester equation (8).

### Proposition 2

Given matrices $$A(j) \in {\mathbb{C}^{n \times n}}$$, $$B(j) \in {\mathbb{C}^{n \times r}}$$, $$F(j) \in {\mathbb{C}^{m \times m}}$$, suppose that $$\lambda ({\Phi _{\mathrm{A}}}(\omega ,0)\overline {{\Phi _{\mathrm{A}}}(\omega ,0)} ) \cap \lambda ({\Phi _{\mathrm{F}}}(\omega ,0)\overline {{\Phi _{\mathrm{F}}}(\omega ,0)} ) = \emptyset$$. If there exist two sets of matrices $${N_{i}} \in {\mathbb{C}^{n \times r}}$$, $${D_{i}} \in {\mathbb{C}^{r \times r}}$$, $$i \in \overline{0,t}$$, with $${N_{t}} = 0$$ such that

\begin{aligned} \textstyle\begin{cases} {\Phi _{\mathrm{A}}}(\omega ,0){N_{0}} = \tilde{B}{D_{0}} \\ \overline {{N_{i - 1}}} - {\Phi _{\mathrm{A}}}(\omega ,0){N_{i}} = - \tilde{B}{D_{i}},\quad i \in \overline{ 1,t-1}, \\ \overline {{N_{t - 1}}} = - \tilde{B}{D_{t}} \end{cases}\displaystyle \end{aligned}
(19)

then the solutions of the considered equation (8) have the following form:

\begin{aligned} \textstyle\begin{cases} X(0) = \begin{bmatrix} {{N_{0}}}&{{N_{1}}}& \cdots &{{N_{t - 1}}} \end{bmatrix}\overleftarrow {\mathrm{Obs}_{t}} ({\Phi _{\mathrm{F}}}(\omega ,0),Z), \\ \tilde{Y} = \begin{bmatrix} {{D_{0}}}&{{D_{1}}}& \cdots &{{D_{t}}} \end{bmatrix}\overleftarrow {\mathrm{Obs}_{t + 1}} ({\Phi _{\mathrm{F}}}(\omega ,0),Z), \end{cases}\displaystyle \end{aligned}
(20)

where $$Z \in {\mathbb{C}^{r \times m}}$$ is a parameter matrix that can be chosen arbitrarily.

The proof of the above proposition is omitted since it can be taken as a corollary of Theorem 1 in [21]. It is worth noting that the above proposition gives all possible solutions of equation (8) in parametric form.

From the above conclusion, one can see that in order to solve equation (8), it is necessary to find $${N_{i}} \in {\mathbb{C}^{n \times r}}$$, $${D_{i}} \in {\mathbb{C}^{r \times r}}$$, $$i \in \overline{0,t}$$, by solving equation (19). This is not so simple and intuitive. Therefore, it is necessary to convert equation (19) into a simpler form.

### Lemma 1

For matrices $${\Phi _{\mathrm{A}}}(\omega ,0) \in {\mathbb{C}^{n \times n}}$$ and $$\tilde{B} \in {\mathbb{C}^{n \times r}}$$, matrices $${N_{i}} \in {\mathbb{C}^{n \times r}}$$, $${D_{i}} \in {\mathbb{C}^{r \times r}}$$, $$i \in \overline{0,t}$$, with $${N_{t}} = 0$$ satisfy (19) when and only when $${D_{i}} \in {\mathbb{C}^{r \times r}}$$, $$i \in \overline{0,t}$$, meet

\begin{aligned} \sum_{i = 0}^{t} {\Phi _{\mathrm{A}}}(\omega ,0)^{\overrightarrow{i}}(\tilde{B}{D_{i}})^{*i} = 0 \end{aligned}
(21)

and $${N_{i}} \in {\mathbb{C}^{n \times r}}, i \in \overline{ 0,t-1}$$ meet

\begin{aligned} \begin{bmatrix} {{N_{0}}}&{{N_{1}}}& \cdots &{{N_{t - 1}}} \end{bmatrix} = - \overline {\overrightarrow {{ \mathrm{Ctr}}_{t}} \bigl({\Phi _{\mathrm{A}}}(\omega ,0),\tilde{B}\bigr)} {S_{t}}(D). \end{aligned}
(22)

### Proof

Let us first prove the necessity. From the recursive expression of (19), it is easily obtained by simple computation that

\begin{aligned} N_{i}= - \overline{\sum^{t}_{j=i+1}{\Phi _{\mathrm{A}}}(\omega ,0)^{\overrightarrow{j-i-1}}(\tilde{B}D_{j})^{*(j-i-1)}},\quad i\in \overline{0,t-1}, \end{aligned}
(23)

and

\begin{aligned} \sum^{t}_{i=0}{\Phi _{\mathrm{A}}}(\omega ,0)^{\overrightarrow{i}}(\tilde{B}D_{i})^{*i}=0. \end{aligned}
(24)

Noting the formulation of (2) and (4), formula (23) can be rewritten as

\begin{aligned} \begin{bmatrix} {{N_{0}}}&{{N_{1}}}& \cdots &{{N_{t - 1}}} \end{bmatrix} = - \overline {\overrightarrow {\mathrm{{Ctr}_{t}}} \bigl({\Phi _{\mathrm{A}}}(\omega ,0),\tilde{B}\bigr)} {S_{t}}(D). \end{aligned}

Next, we prove the sufficiency. From (22) and (21), one has

\begin{aligned} {\Phi _{\mathrm{A}}}(\omega ,0)N_{0} = -\sum^{t}_{j=1}{\Phi _{\mathrm{A}}}(\omega ,0)^{\overrightarrow{j}}(\tilde{B}D_{j})^{*j} = \tilde{B}D_{0}. \end{aligned}

This is the first formula of (19). Then by (22), we can obtain

\begin{aligned} N_{i}= - \overline{\sum^{t}_{j=i+1}{ \Phi _{\mathrm{A}}}(\omega ,0)^{\overrightarrow{j-i-1}}(\tilde {B}D_{j})^{*(j-i-1)}},\quad i\in \overline{1,t-1}. \end{aligned}

Direct computation gives

\begin{aligned} \begin{aligned} {\Phi _{\mathrm{A}}}(\omega ,0)N_{i}- \tilde {B}D_{i}&=-\tilde {B}D_{i}-{ \Phi _{\mathrm{A}}}( \omega ,0) \overline{\sum^{t}_{j=i+1}{\Phi _{\mathrm{A}}}(\omega ,0)^{\overrightarrow{j-i-1}}(\tilde {B}D_{j})^{*(j-i-1)}} \\ &=-\tilde {B}D_{i}-\sum^{t}_{j=i+1}{ \Phi _{\mathrm{A}}}(\omega ,0)^{ \overrightarrow{j-i}}(\tilde {B}D_{j})^{*(j-i)} \\ &=-\sum^{t}_{j=i}{\Phi _{\mathrm{A}}}( \omega ,0)^{\overrightarrow{j-i}}( \tilde {B}D_{j})^{*(j-i)} \\ &=\overline{N_{i-1}}. \end{aligned} \end{aligned}
(25)

Hence, the second formula of (19) holds. The other two formulas of (19) are straightforward. Based on the necessity and the sufficiency, the conclusion holds true. □

Denote

\begin{aligned} L = \begin{bmatrix} D_{0}^{\mathrm{T}} & \overline{D_{1}}^{\mathrm{T}} & D_{2}^{\mathrm{T}} & \cdots & \bigl(D_{t}^{*t}\bigr)^{\mathrm{T}} \end{bmatrix}^{\mathrm{T}}. \end{aligned}

Equation (21) is equivalent to

\begin{aligned} \overrightarrow {\mathrm {Ctr}_{t + 1}} \bigl({\Phi _{\mathrm{A}}}(\omega ,0), \tilde{B}\bigr)L = 0. \end{aligned}
(26)

On the basis of the above preparations, the conclusion on the solutions to equation (5) can be drawn as follows.

### Theorem 2

Consider ω-periodic matrices $$A(j) \in {\mathbb{C}^{n \times n}}$$, $$B(j) \in {\mathbb{C}^{n \times r}}$$, and $$F(j) \in {\mathbb{C}^{m \times m}}$$, and suppose that $$\lambda ({\Phi _{\mathrm{A}}}(\omega ,0) \overline {{\Phi _{\mathrm{A}}}(\omega ,0)} ) \cap \lambda ({\Phi _{\mathrm{F}}}( \omega ,0)\overline {{\Phi _{\mathrm{F}}}(\omega ,0)} ) = \emptyset$$. If there exist matrices $$D_{i}, i\in \overline{0,t}$$, satisfying equation (21), then the following solutions

\begin{aligned} \textstyle\begin{cases} X(0) = \begin{bmatrix} {{N_{0}}}&{{N_{1}}}& \cdots &{{N_{t - 1}}} \end{bmatrix}\overleftarrow {\mathrm{{Obs}_{t}}} ({\Phi _{\mathrm{F}}}(\omega ,0),Z) \\ \phantom{X(0)}= - \overline {\overrightarrow {\mathrm{{Ctr}_{t}}} ({\Phi _{\mathrm{A}}}(\omega ,0),\tilde{B})} S_{t}(D)\overleftarrow {\mathrm{{Obs}_{t}}} ({\Phi _{\mathrm{F}}}(\omega ,0),Z) \\ {\tilde{Y} = \begin{bmatrix} {{D_{0}}}&{{D_{1}}}& \cdots &{{D_{t}}} \end{bmatrix}\overleftarrow {\mathrm{{Obs}_{t + 1}}} ({\Phi _{\mathrm{F}}}(\omega ,0),Z)} \end{cases}\displaystyle \end{aligned}
(27)

and

\begin{aligned} \textstyle\begin{cases} \tilde{Y} = \begin{bmatrix} \tilde{Y}^{\mathrm{T}}(0)&\tilde{Y}^{\mathrm{T}}(1)&\cdots &\tilde{Y}^{\mathrm{T}}(\omega - 1) \end{bmatrix}^{\mathrm{T}}, \\ Y(j) = \bigl({\tilde {Y}(j)}\bigl(\Phi _{\mathrm{F}}^{*(\omega - j)}(j,0)\bigr)^{-1}\bigr)^{*( \omega - j - 1)},\quad {j \in \overline {0,\omega - 1} }, \\ X(j+1)=\overline{[A(j)X(j)-B(j)Y(j)]F^{-1}(j)}, \quad j \in \overline {0,\omega - 2}, \end{cases}\displaystyle \end{aligned}
(28)

where Z is an arbitrary parameter matrix with compatible dimensions, satisfy the complex-conjugate, periodic Sylvester equation (5).

### Proof

According to Proposition 2 and Lemma 1, matrix $$X(0)$$ in (20) can be rewritten as in the first two lines of (27), and matrix $$\tilde{Y}$$ in (20) is exactly the third line of (27).

According to equations (10) and (11), the first two items of formula (28) can be easily obtained. Further, the third item of formula (28) can be obtained by simply transforming equation (5).

Based on Theorem 1, we complete the proof. □
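The third formula of (28) is a one-line recursion in code. The hedged sketch below (the helper name `step_X` is ours) shows that the produced $$X(j+1)$$ satisfies equation (5) at time j by construction, provided F(j) is nonsingular.

```python
import numpy as np

def step_X(A_j, B_j, F_j, X_j, Y_j):
    """X(j+1) = conj( (A(j) X(j) - B(j) Y(j)) F(j)^{-1} ), the last line of (28).
    Requires F(j) to be nonsingular."""
    return np.conj((A_j @ X_j - B_j @ Y_j) @ np.linalg.inv(F_j))
```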

For convenience, we summarize a stepwise computational procedure for solving conjugate, periodic, time-varying equations as follows.

### Algorithm 1

(An explicit algorithm on solving a complex-conjugate, periodic Sylvester matrix equation.)

1. For given matrices $$A(j), B(j), F(j), j\in \overline{0,\omega -1}$$, compute $${\Phi _{\mathrm{A}}}(\omega ,0)$$, $${\Phi _{\mathrm{F}}}(\omega ,0)$$, and $$\tilde{B}$$.

2. Determine whether the condition $$\lambda ({\Phi _{\mathrm{A}}}(\omega ,0)\overline {{\Phi _{\mathrm{A}}}(\omega ,0)} ) \cap \lambda ({\Phi _{\mathrm{F}}}(\omega ,0)\overline {{\Phi _{\mathrm{F}}}(\omega ,0)} ) = \emptyset$$ holds. If it holds, proceed to the next step; otherwise, the algorithm fails.

3. According to the conjugate linear equation (26), find the unknown matrices $$D_{i}, i\in \overline{0,t}$$.

4. According to (22), compute matrices $$N_{i}, i\in \overline{0,t-1}$$.

5. Assign the free parameter matrix Z and compute matrices $$X(0), \tilde{Y}$$ according to (27).

6. According to (28), compute $$X(j),j\in \overline{1,\omega -1}$$, and $$Y(j), j\in \overline{0,\omega -1}$$.
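Step 3 asks for the null space of the conjugate linear map in (26). Because the unknowns $$D_{i}$$ enter either directly or conjugated, the map is real-linear rather than complex-linear; one workable approach, sketched below under our own naming (`solve_D_columns`), splits every unknown into real and imaginary parts and extracts a real null-space basis by SVD. Each basis vector yields one admissible column of the stacked blocks $$D_{0},\ldots ,D_{t}$$.

```python
import numpy as np

def conj_power(M, l):
    return M.conj() if l % 2 == 1 else M

def right_arrow(M, k):
    out = np.linalg.matrix_power(M @ M.conj(), k // 2)
    return out @ M if k % 2 == 1 else out

def solve_D_columns(Phi, Btil, t, tol=1e-10):
    """Sketch of Step 3: solve sum_i Phi^{->i} (Btil D_i)^{*i} = 0 column by
    column.  Each returned column stacks one admissible column of D_0,...,D_t."""
    n, r = Btil.shape
    hats = [right_arrow(Phi, i) @ conj_power(Btil, i) for i in range(t + 1)]
    signs = [1 if i % 2 == 0 else -1 for i in range(t + 1)]  # conjugation flips Im
    # Real-linear system M [p; q] = 0 for the unknown columns x_i = p_i + 1j q_i.
    re_rows = np.hstack([h.real for h in hats] + [-s * h.imag for h, s in zip(hats, signs)])
    im_rows = np.hstack([h.imag for h in hats] + [s * h.real for h, s in zip(hats, signs)])
    M = np.vstack([re_rows, im_rows])
    _, svals, Vt = np.linalg.svd(M)
    rank = int(np.sum(svals > tol))
    basis = Vt[rank:].T                      # real null-space basis, one column each
    half = (t + 1) * r
    return basis[:half] + 1j * basis[half:]  # stacked columns of D_0, ..., D_t
```

Picking r of the returned columns for each block yields matrices $$D_{i}$$ satisfying (21), after which Steps 4 to 6 proceed with matrix products only.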

## 4 A validation example

### Example 1

Assume a conjugate, periodic Sylvester matrix equation (5) whose parameter matrices are as follows

\begin{aligned} &A(0) = \begin{bmatrix} 1&{2 - i}&i \\ 0&{ - 2i}&0 \\ {2i}&0&0 \end{bmatrix},\qquad A(1) = \begin{bmatrix} 0&{ - 2i}&0 \\ 0&1&{ - i} \\ { - 1 + i}&0&{ - 2i} \end{bmatrix} ,\qquad A(2) = \begin{bmatrix} i&1&0 \\ 0&{ - i}&1 \\ 0&i&0 \end{bmatrix},\\ &B(0) = \begin{bmatrix} { \textstyle\begin{array}{*{20}{c}} i \\ { - i} \\ { - 2} \end{array}\displaystyle }&{ \textstyle\begin{array}{*{20}{c}} 1 \\ 0 \\ i \end{array}\displaystyle } \end{bmatrix},\qquad B(1) = \begin{bmatrix} { \textstyle\begin{array}{*{20}{c}} 2 \\ i \\ 0 \end{array}\displaystyle }&{ \textstyle\begin{array}{*{20}{c}} 1 \\ 1 \\ { - i} \end{array}\displaystyle } \end{bmatrix},\qquad B(2) = \begin{bmatrix} { \textstyle\begin{array}{*{20}{c}} 0 \\ 1 \\ { - i} \end{array}\displaystyle }&{ \textstyle\begin{array}{*{20}{c}} i \\ {2i} \\ 1 \end{array}\displaystyle } \end{bmatrix},\\ &F(0) = \begin{bmatrix} 1&0 \\ 0&{ - i} \end{bmatrix},\qquad F(1) = \begin{bmatrix} 1&0 \\ {2i}&{1 - i} \end{bmatrix},\qquad F(2) = \begin{bmatrix} {1 - i}&0 \\ 0&{1 + i} \end{bmatrix}. \end{aligned}

For this matrix equation, it is easy to obtain

\begin{aligned} &{\Phi _{\mathrm{A}}}(3,0) = A(2)\overline {A(1)}A(0) = \begin{bmatrix} { - 2}&{2i}&0 \\ { - 5 + i}&{ - 5 - i}&{1 - i} \\ { - 2i}&2&0 \end{bmatrix}, \\ &{\Phi _{\mathrm{F}}}(3,0) = F(2)\overline{ F(1)}F(0) = \begin{bmatrix} {1 - i}&0 \\ {2 - 2i}&2 \end{bmatrix}, \\ &\begin{aligned} \tilde{B} &= \begin{bmatrix} {{\Phi _{\mathrm{A}}}(3,1)B(0)}&{{\Phi _{\mathrm{A}}}(3,2)\overline{B(1)}}&{{\Phi _{\mathrm{A}}}(3,3)B(2)} \end{bmatrix} \\ &= \begin{bmatrix} {A(2)\overline{A(1)}B(0)}&{A(2)\overline{B(1)}}&{B(2)} \end{bmatrix} \\ &= \begin{bmatrix} { - i}&{ - 1}&i&{1 + i}&0&i \\ {-2 - 5i}&{ - 3}&{ - 1}&0&1&{2i} \\ 3&{ - i}&1&i&{ - i}&1 \end{bmatrix}. \end{aligned} \end{aligned}

Define the quasicontrollability matrix

\begin{aligned} \overrightarrow {\mathrm{Ctr}_{3}} \bigl({\Phi _{\mathrm{A}}}(3,0), \tilde{B}\bigr) = \begin{bmatrix} {\tilde{B}}&{{\Phi _{\mathrm{A}}}(3,0)\overline {\tilde{B}} }&{{\Phi _{\mathrm{A}}}(3,0) \overline {{\Phi _{\mathrm{A}}} (3,0)}\tilde{B}} \end{bmatrix} \end{aligned}

and let

\begin{aligned} L = {[ \textstyle\begin{array}{*{20}{c}} {D^{\mathrm{T}}(0)}&{\overline {D^{\mathrm{T}}(1)}}&{D^{\mathrm{T}}(2)} \end{array}\displaystyle ]^{\mathrm{{T}}}}. \end{aligned}

Solving the following simple linear equation

\begin{aligned} \tilde{B}D(0) + {\Phi _{\mathrm{A}}}(3,0)\overline {\tilde{B}D(1)} + {\Phi _{ \mathrm{A}}}(3,0)\overline {{\Phi _{\mathrm{A}}} (3,0)}\tilde{B}D(2) = 0, \end{aligned}

i.e., $$\overrightarrow {\mathrm{Ctr}_{3}} ({\Phi _{\mathrm{A}}}(3,0),\tilde{B})L = 0$$, we can find that

\begin{aligned} &D(0) = \begin{bmatrix} {14 - i}&{ - 2 + 2i}&{8 + 8i} \\ { - 2 - 30i}&{4 + 6i}&{16 - 12i} \\ { - 10 - 9i}&{2 + 2i}&{ - 8i} \\ 0&0&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix},\qquad D(1) = \begin{bmatrix} 1&0&0 \\ 0&0&0 \\ 0&0&0 \\ 0&0&0 \\ 0&0&0 \\ 0&1&0 \end{bmatrix},\\ & D(2) = \begin{bmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&0 \\ 0&0&0 \\ 0&0&1 \\ 0&0&0 \end{bmatrix}. \end{aligned}

Furthermore, by formula (22), we can obtain that

\begin{aligned} N(0) = \begin{bmatrix} { - i}&i&{2i} \\ {2 - 5i}&{2i}&4 \\ { - 3}&{ - 1}&{ - 2} \end{bmatrix},\qquad N(1) = \begin{bmatrix} 0&0&0 \\ 0&0&{ - 1} \\ 0&0&{ - i} \end{bmatrix}. \end{aligned}

Take the free parametric matrix Z as

\begin{aligned} Z = \begin{bmatrix} {1 + i}&{6 - 3i} \\ 4&{1 + 2i} \\ 0&{ - i} \end{bmatrix}. \end{aligned}

Then, by formulas (27) and (28), a group of periodic solutions can be found as

\begin{aligned} &X(0) = \begin{bmatrix} { \textstyle\begin{array}{*{20}{c}} {1 + 3i} \\ {5 + 3i} \\ { - 5 - 5i} \end{array}\displaystyle }&{ \textstyle\begin{array}{*{20}{c}} { - 3 - 5i} \\ { - 7 - 40i} \\ { - 17 + 9i} \end{array}\displaystyle } \end{bmatrix},\qquad X(1) = \begin{bmatrix} { \textstyle\begin{array}{*{20}{c}} { - 12 + 18i} \\ { - 7 - 15i} \\ {36 + 16i} \end{array}\displaystyle }&{ \textstyle\begin{array}{*{20}{c}} {14 - 4i} \\ { - 109 + 28i} \\ { - 12 - 24i} \end{array}\displaystyle } \end{bmatrix},\\ &X(2) = \begin{bmatrix} { \textstyle\begin{array}{*{20}{c}} {4i} \\ 6 \\ {10 + 2i} \end{array}\displaystyle }&{ \textstyle\begin{array}{*{20}{c}} { - 2 - 22i} \\ { - 29 + 7i} \\ { - 50 + 8i} \end{array}\displaystyle } \end{bmatrix}, \qquad Y(0) = \begin{bmatrix} {25 + 13i}&{95 - 52i} \\ {44 - 8i}&{ - 122 - 176i} \end{bmatrix},\\ &Y(1) = \begin{bmatrix} {7 + 11i}&{18 + 97i} \\ 0&0 \end{bmatrix},\qquad Y(2) = \begin{bmatrix} { - 4i}&{2 - 2i} \\ {4 - 4i}&{3 - i} \end{bmatrix}. \end{aligned}

Simple verification shows that the above solution satisfies equation (5). Furthermore, from the arbitrariness of the choice of parameter matrix Z, it can be seen that Algorithm 1 can provide numerous groups of solutions to the considered conjugate, periodic, time-varying matrix equation.

## 5 Summary

A set of conjugate, periodic, time-varying matrix equations is investigated in this paper. It is first verified that the considered equation has the same solution set as a class of conjugate, time-invariant matrix equations. Then, by some mathematical transformations, the solvability condition of the conjugate, periodic, time-varying matrix equations is derived and the calculation steps are given in the proposed algorithm. Finally, a numerical test on a conjugate, time-varying equation is carried out, which shows that the solutions generated by the provided algorithm satisfy the complex, time-varying equation.


## References

1. Dehghan, M., Shirilord, A.: On the Hermitian and skew-Hermitian splitting-like iteration approach for solving complex continuous-time algebraic Riccati matrix equation. Appl. Numer. Math. 170, 109–127 (2021)

2. Zhang, F., Wei, M., Li, Y., et al.: The minimal norm least squares Hermitian solution of the complex matrix equation $$AXB+ CXD= E$$. J. Franklin Inst. 355(3), 1296–1310 (2018)

3. Zhang, L., Zhu, A., Wu, A., et al.: Parametric solutions to the regulator-conjugate matrix equations. J. Ind. Manag. Optim. 13(2), 623–631 (2017)

4. Wu, A.G., Zhang, E., Liu, F.: On closed-form solutions to the generalized Sylvester-conjugate matrix equation. Appl. Math. Comput. 218(19), 9730–9741 (2012)

5. Wu, A.G., Qian, Y.Y., Liu, W., et al.: Linear quadratic regulation for discrete-time antilinear systems: an anti-Riccati matrix equation approach. J. Franklin Inst. 353(5), 1041–1060 (2016)

6. Lv, L., Duan, G., Zhou, B.: Parametric pole assignment and robust pole assignment for discrete-time linear periodic systems. SIAM J. Control Optim. 48(6), 3975–3996 (2010)

7. Lv, L.L., Zhang, Z., Zhang, L.: A parametric poles assignment algorithm for second-order linear periodic systems. J. Franklin Inst. 354(18), 8057–8071 (2017)

8. Lv, L., Zhang, L.: On the periodic Sylvester equations and their applications in periodic Luenberger observers design. J. Franklin Inst. 353(5), 1005–1018 (2016)

9. Wu, A.G., Chang, M.F.: Current-estimation-based iterative algorithms for solving periodic Lyapunov matrix equations. IET Control Theory Appl. 10(15), 1928–1936 (2016)

10. Wu, A.G., Zhang, W.X., Zhang, Y.: An iterative algorithm for discrete periodic Lyapunov matrix equations. Automatica 87, 395–403 (2018)

11. Zhang, L., Li, P., Han, M., et al.: On the solutions to Sylvester-conjugate periodic matrix equations via iteration. IET Control Theory Appl. 17(3), 307–317 (2023)

12. Lv, L., Chen, J., Zhang, L., Zhang, F.: Gradient-based neural networks for solving periodic Sylvester matrix equations. J. Franklin Inst. 359(18), 10849–10866 (2022)

13. Lv, L., Chen, J., Zhang, Z., Wang, B., Zhang, L.: A numerical solution of a class of periodic coupled matrix equations. J. Franklin Inst. 358(3), 2039–2059 (2021)

14. Zhang, L., Tang, S., Lv, L.: A finite iterative algorithm for solving periodic Sylvester bimatrix equations. J. Franklin Inst. 357(15), 10757–10772 (2020)

15. Lv, L.-L., Zhang, L.: Parametric solutions to the discrete periodic regulator equations. J. Franklin Inst. 353(5), 1089–1101 (2016)

16. Respondek, J.: Matrix black box algorithms - a survey. Bull. Pol. Acad. Sci., Tech. Sci. 70(2), e140535 (2022)

17. Xiao, L.: A finite-time convergent neural dynamics for online solution of time-varying linear complex matrix equation. Neurocomputing 167, 254–259 (2015)

18. Yuan, S., Liao, A.: Least squares Hermitian solution of the complex matrix equation $$AXB+CXD=E$$ with the least norm. J. Franklin Inst. 351(11), 4978–4997 (2014)

19. Dehghan, M., Shirilord, A.: A generalized modified Hermitian and skew-Hermitian splitting (GMHSS) method for solving complex Sylvester matrix equation. Appl. Math. Comput. 348, 632–651 (2019)

20. Jiang, T., Wei, M.: On solutions of the matrix equations $$X-AXB=C$$ and $$X-A\overline{X}B=C$$. Linear Algebra Appl. 367, 225–233 (2003)

21. Wu, A.G., Lv, L., Duan, G.R., et al.: Parametric solutions to Sylvester-conjugate matrix equations. Comput. Math. Appl. 62(9), 3317–3325 (2011)

## Acknowledgements

The authors would like to thank the reviewer for his/her valuable comments on the manuscript.

## Funding

This work was supported in part by the Excellent Youth Fund of Henan Natural Science Foundation (No. 212300410058), and in part by the Science and Technology Innovation Team Funding of Colleges and Universities in Henan Province (No. 22IRTSTHN011).

## Author information

Authors

### Contributions

Li Ma and Rui Chang: Conceptualization, Investigation, Writing - Original Draft. Mengqi Han: Data Curation, Validation; Yongmei Jiao (Corresponding Author): Conceptualization, Supervision, Writing - Review & Editing.

### Corresponding author

Correspondence to Yongmei Jiao.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Reprints and permissions

Ma, L., Chang, R., Han, M. et al. Explicit solutions of conjugate, periodic, time-varying Sylvester equations. J Inequal Appl 2023, 133 (2023). https://doi.org/10.1186/s13660-023-03048-3