# An iterative algorithm for the generalized reflexive solutions of the general coupled matrix equations

## Abstract

The general coupled matrix equations

$\sum_{j=1}^{q} A_{ij} X_j B_{ij} = M_i, \quad i = 1, 2, \ldots, p$

(including the generalized coupled Sylvester matrix equations as special cases) have numerous applications in control and system theory. In this paper, an iterative algorithm is constructed to solve the general coupled matrix equations and their optimal approximation problem over a generalized reflexive matrix solution $(X_1, X_2, \ldots, X_q)$. When the general coupled matrix equations are consistent over generalized reflexive matrices, the generalized reflexive solution is obtained automatically by the iterative algorithm within a finite number of iteration steps in the absence of round-off errors. The least Frobenius norm generalized reflexive solution of the general coupled matrix equations can be derived when an appropriate initial matrix group is chosen. Furthermore, the unique optimal approximation generalized reflexive solution $(\hat{X}_1, \hat{X}_2, \ldots, \hat{X}_q)$ to a given matrix group $(X_1^0, X_2^0, \ldots, X_q^0)$ in the Frobenius norm can be derived by finding the least-norm generalized reflexive solution $(\tilde{X}_1^*, \tilde{X}_2^*, \ldots, \tilde{X}_q^*)$ of the corresponding general coupled matrix equations $\sum_{j=1}^{q} A_{ij} \tilde{X}_j B_{ij} = \tilde{M}_i$, $i = 1, 2, \ldots, p$, where $\tilde{X}_j = X_j - X_j^0$ and $\tilde{M}_i = M_i - \sum_{j=1}^{q} A_{ij} X_j^0 B_{ij}$. A numerical example is given to illustrate the effectiveness of the proposed iterative algorithm.

MSC: 15A18, 15A57, 65F15, 65F20.

## 1 Introduction

Let $P \in \mathcal{R}^{m \times m}$ and $Q \in \mathcal{R}^{n \times n}$ be two real generalized reflection matrices, i.e., $P^T = P$, $P^2 = I_m$, $Q^T = Q$, $Q^2 = I_n$, where $I_n$ denotes the identity matrix of order $n$. A matrix $A \in \mathcal{R}^{m \times n}$ is called a generalized reflexive matrix with respect to the matrix pair $(P, Q)$ if $PAQ = A$. The set of all $m \times n$ real generalized reflexive matrices with respect to a matrix pair $(P, Q)$ is denoted by $\mathcal{R}_r^{m \times n}(P, Q)$. The superscript $T$ denotes the transpose of a matrix. In the matrix space $\mathcal{R}^{m \times n}$, we define an inner product by $\langle A, B \rangle = \operatorname{tr}(B^T A)$ for all $A, B \in \mathcal{R}^{m \times n}$; $\|A\|_F$ denotes the Frobenius norm of $A$, $\mathcal{R}(A)$ denotes the column space of $A$, and $\operatorname{vec}(\cdot)$ denotes the vectorization operator, i.e., $\operatorname{vec}(A) = (a_1^T, a_2^T, \ldots, a_n^T)^T \in \mathcal{R}^{mn}$ for the matrix $A = (a_1, a_2, \ldots, a_n) \in \mathcal{R}^{m \times n}$, $a_i \in \mathcal{R}^m$, $i = 1, 2, \ldots, n$. Finally, $A \otimes B$ stands for the Kronecker product of the matrices $A$ and $B$.
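To make the definition concrete, the following small sketch (our own illustration; the function names are not from the paper) checks the generalized reflection property numerically and projects an arbitrary matrix onto $\mathcal{R}_r^{m \times n}(P, Q)$ via $A \mapsto \frac{1}{2}(A + PAQ)$, which is well defined whenever $P^2 = I$ and $Q^2 = I$:

```python
import numpy as np

def is_reflection(P, tol=1e-12):
    """Check that P is a generalized reflection matrix: P^T = P and P^2 = I."""
    return (np.allclose(P, P.T, atol=tol)
            and np.allclose(P @ P, np.eye(P.shape[0]), atol=tol))

def reflexive_project(A, P, Q):
    """Project A onto the (P, Q)-generalized reflexive matrices.
    P((A + PAQ)/2)Q = (PAQ + A)/2, so the result satisfies P(.)Q = (.)."""
    return 0.5 * (A + P @ A @ Q)

# Example with signed diagonal reflections
P = np.diag([1.0, -1.0, 1.0])
Q = np.diag([-1.0, 1.0])
assert is_reflection(P) and is_reflection(Q)
A = reflexive_project(np.arange(6.0).reshape(3, 2), P, Q)
assert np.allclose(P @ A @ Q, A)  # A is generalized reflexive w.r.t. (P, Q)
```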

Least-squares-based iterative algorithms are very important in system identification, parameter estimation, and signal processing; they include the recursive least squares (RLS) and iterative least squares (ILS) methods for solving matrix equations such as the Lyapunov matrix equation, Sylvester matrix equations, and coupled matrix equations. For example, the gradient-based iterative (GI) methods [1–5] and the least-squares-based iterative methods [3, 4, 6], which solve (coupled) matrix equations with high computational efficiency, are built on the hierarchical identification principle: the unknown matrix is regarded as the system parameter matrix to be identified, and the resulting algorithms have good stability properties. In this paper, we consider the following two problems.

Problem I Let $P_j \in \mathcal{R}^{m_j \times m_j}$ and $Q_j \in \mathcal{R}^{n_j \times n_j}$ be generalized reflection matrices. For given matrices $A_{ij} \in \mathcal{R}^{r_i \times m_j}$, $B_{ij} \in \mathcal{R}^{n_j \times s_i}$, and $M_i \in \mathcal{R}^{r_i \times s_i}$, find a generalized reflexive matrix solution group $(X_1, X_2, \ldots, X_q)$ with $X_j \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j)$ such that

$\sum_{j=1}^{q} A_{ij} X_j B_{ij} = M_i, \quad i = 1, 2, \ldots, p.$
(1)

Problem II When Problem I is consistent, let ${S}_{E}$ denote the set of the generalized reflexive solution group of Problem I, i.e.,

$S_E = \Bigl\{ (X_1, X_2, \ldots, X_q) \Bigm| \sum_{j=1}^{q} A_{ij} X_j B_{ij} = M_i,\ i = 1, 2, \ldots, p,\ X_j \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j) \Bigr\}.$

For a given generalized reflexive matrix group

$(X_1^0, X_2^0, \ldots, X_q^0) \in \mathcal{R}_r^{m_1 \times n_1}(P_1, Q_1) \times \mathcal{R}_r^{m_2 \times n_2}(P_2, Q_2) \times \cdots \times \mathcal{R}_r^{m_q \times n_q}(P_q, Q_q),$

find $(\hat{X}_1, \hat{X}_2, \ldots, \hat{X}_q) \in S_E$ such that

$\sum_{j=1}^{q} \|\hat{X}_j - X_j^0\|^2 = \min_{(X_1, X_2, \ldots, X_q) \in S_E} \Bigl\{ \sum_{j=1}^{q} \|X_j - X_j^0\|^2 \Bigr\}.$
(2)

The general coupled matrix equations (1) (including the generalized coupled Sylvester matrix equations as special cases) arise in many areas of control and system theory. Problem II occurs frequently in experiment design; see, for instance, [7].

Many theoretical and numerical results on Eq. (1) and some of its special cases have been obtained. Ding and Chen [1] presented gradient-based iterative algorithms for Eq. (1) with $q = p$ by applying the gradient search principle and the hierarchical identification principle. Wu et al. [8, 9] gave finite iterative solutions to coupled Sylvester-conjugate matrix equations. Wu et al. [10] gave finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns. Jonsson and Kågström [11, 12] proposed recursive block algorithms for solving the coupled Sylvester matrix equations and the generalized Sylvester and Lyapunov matrix equations. By extending the idea of the conjugate gradient method, Dehghan and Hajarian [13] constructed an iterative algorithm to solve Eq. (1) with $q = p$ over generalized bisymmetric matrices. Very recently, Huang et al. [14] presented a finite iterative algorithm for the one-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions. Yin et al. [15] presented a finite iterative algorithm for the two-sided and generalized coupled Sylvester matrix equations over reflexive solutions. Zhou et al. [16] gave gradient-based iterative solutions for the coupled matrix equations (1) and a more general case over general solutions. Li et al. [17] presented a numerical solution to the linear matrix equation $\sum_{j=1}^{p} A_j X B_j = C$ by a finite-step iteration. For more results, we refer to [7, 18–36]. However, to our knowledge, the generalized reflexive solution to the more general coupled matrix equations (1) and the optimal approximation generalized reflexive solution have not been derived. In this paper, we consider the generalized reflexive solution of Eq. (1) and the optimal approximation generalized reflexive solution.

This paper is organized as follows. In Section 2, we solve Problem I by constructing an iterative algorithm. The convergence of the proposed algorithm is proved. For an arbitrary initial matrix group, we can obtain a generalized reflexive solution group of Problem I within finite iteration steps in the absence of round-off errors. Furthermore, for a special initial matrix group, we can obtain the least Frobenius norm solutions of Problem I. Then in Section 3, we give the optimal approximate solution group of Problem II by finding the least Frobenius norm generalized reflexive solution group of the corresponding general coupled matrix equations. In Section 4, a numerical example is given to illustrate the effectiveness of our method. Finally, some conclusions are drawn in Section 5.

## 2 An iterative algorithm for solving Problem I

In this section, we first introduce an iterative algorithm to solve Problem I, then we prove its convergence. In addition, we give the least-norm generalized reflexive solutions of Problem I when an appropriate initial iterative matrix group is chosen.

Algorithm 2.1 Step 1: Input matrices $A_{ij} \in \mathcal{R}^{r_i \times m_j}$, $B_{ij} \in \mathcal{R}^{n_j \times s_i}$, $M_i \in \mathcal{R}^{r_i \times s_i}$, and generalized reflection matrices $P_j \in \mathcal{R}^{m_j \times m_j}$ and $Q_j \in \mathcal{R}^{n_j \times n_j}$, $i = 1, \ldots, p$, $j = 1, \ldots, q$;

Step 2: Choose an arbitrary matrix group

$(X_1(1), X_2(1), \ldots, X_q(1)) \in \mathcal{R}_r^{m_1 \times n_1}(P_1, Q_1) \times \mathcal{R}_r^{m_2 \times n_2}(P_2, Q_2) \times \cdots \times \mathcal{R}_r^{m_q \times n_q}(P_q, Q_q).$

Compute

$\begin{aligned} R(1) &= \operatorname{diag}\Bigl( M_1 - \sum_{l=1}^{q} A_{1l} X_l(1) B_{1l},\ M_2 - \sum_{l=1}^{q} A_{2l} X_l(1) B_{2l},\ \ldots,\ M_p - \sum_{l=1}^{q} A_{pl} X_l(1) B_{pl} \Bigr), \\ S_j(1) &= \frac{1}{2} \Bigl[ \sum_{i=1}^{p} A_{ij}^T \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(1) B_{il} \Bigr) B_{ij}^T + \sum_{i=1}^{p} P_j A_{ij}^T \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(1) B_{il} \Bigr) B_{ij}^T Q_j \Bigr], \quad j = 1, \ldots, q, \\ k &:= 1; \end{aligned}$

Step 3: If $R(k) = 0$, then stop and $(X_1(k), X_2(k), \ldots, X_q(k))$ is the solution group of Eq. (1); else if $R(k) \neq 0$ but $S_j(k) = 0$, $j = 1, \ldots, q$, then stop and Eq. (1) is not consistent over a generalized reflexive matrix group; else set $k := k + 1$;

Step 4: Compute

$\begin{aligned} X_j(k) &= X_j(k-1) + \frac{\|R(k-1)\|_F^2}{\sum_{l=1}^{q} \|S_l(k-1)\|_F^2} S_j(k-1), \quad j = 1, \ldots, q, \\ R(k) &= \operatorname{diag}\Bigl( M_1 - \sum_{l=1}^{q} A_{1l} X_l(k) B_{1l},\ M_2 - \sum_{l=1}^{q} A_{2l} X_l(k) B_{2l},\ \ldots,\ M_p - \sum_{l=1}^{q} A_{pl} X_l(k) B_{pl} \Bigr) \\ &= R(k-1) - \frac{\|R(k-1)\|_F^2}{\sum_{l=1}^{q} \|S_l(k-1)\|_F^2} \operatorname{diag}\Bigl( \sum_{l=1}^{q} A_{1l} S_l(k-1) B_{1l},\ \sum_{l=1}^{q} A_{2l} S_l(k-1) B_{2l},\ \ldots,\ \sum_{l=1}^{q} A_{pl} S_l(k-1) B_{pl} \Bigr), \\ S_j(k) &= \frac{1}{2} \Bigl[ \sum_{i=1}^{p} A_{ij}^T \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(k) B_{il} \Bigr) B_{ij}^T + \sum_{i=1}^{p} P_j A_{ij}^T \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(k) B_{il} \Bigr) B_{ij}^T Q_j \Bigr] + \frac{\|R(k)\|_F^2}{\|R(k-1)\|_F^2} S_j(k-1), \quad j = 1, \ldots, q; \end{aligned}$

Step 5: Go to Step 3.
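The steps above can be sketched in NumPy as follows. This is a minimal implementation of our own (the function name, interface, and the tolerance-based stopping rule are assumptions, not from the paper); the block-diagonal residual $R(k)$ is represented by the list of its diagonal blocks, so $\|R(k)\|_F^2$ is the sum of the squared block norms:

```python
import numpy as np

def coupled_reflexive_solve(A, B, M, P, Q, X0, tol=1e-8, max_iter=2000):
    """Sketch of Algorithm 2.1 for sum_j A[i][j] X[j] B[i][j] = M[i].

    A, B: p-by-q nested lists of matrices; M: list of p right-hand sides;
    P, Q: lists of q generalized reflection matrices; X0: initial group,
    each X0[j] generalized reflexive with respect to (P[j], Q[j])."""
    p, q = len(M), len(X0)
    X = [x.copy() for x in X0]

    def residual_blocks(X):
        # Diagonal blocks of R(k): M_i - sum_l A_il X_l B_il.
        return [M[i] - sum(A[i][j] @ X[j] @ B[i][j] for j in range(q))
                for i in range(p)]

    def search_dirs(res):
        # S_j(k) without the beta-term: the gradient projected onto the
        # generalized reflexive subspace via Z -> (Z + P_j Z Q_j) / 2.
        S = []
        for j in range(q):
            G = sum(A[i][j].T @ res[i] @ B[i][j].T for i in range(p))
            S.append(0.5 * (G + P[j] @ G @ Q[j]))
        return S

    res = residual_blocks(X)
    r2 = sum(np.linalg.norm(r, 'fro') ** 2 for r in res)
    S = search_dirs(res)
    for _ in range(max_iter):
        if np.sqrt(r2) < tol:                 # R(k) ~ 0: X is a solution group
            break
        s2 = sum(np.linalg.norm(s, 'fro') ** 2 for s in S)
        if s2 == 0.0:                         # R != 0 but all S_j = 0 (Step 3)
            raise ValueError("inconsistent over generalized reflexive matrices")
        alpha = r2 / s2                       # step length in Step 4
        X = [X[j] + alpha * S[j] for j in range(q)]
        res = residual_blocks(X)
        r2_new = sum(np.linalg.norm(r, 'fro') ** 2 for r in res)
        S_new = search_dirs(res)
        S = [S_new[j] + (r2_new / r2) * S[j] for j in range(q)]
        r2 = r2_new
    return X
```

Starting from the zero group (which is trivially generalized reflexive), all iterates stay in the reflexive subspace, since each $S_j(k)$ does; in floating point the finite-termination property of the algorithm becomes convergence to a small residual tolerance.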

Obviously, it can be seen that $X_j(k), S_j(k) \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j)$ for all $j = 1, \ldots, q$ and $k = 1, 2, \ldots$.

Lemma 2.1 For the sequences $\{R(k)\}$, $\{S_j(k)\}$ ($j = 1, 2, \ldots, q$) generated by Algorithm 2.1, and $m \geq 2$, we have

$\operatorname{tr}\bigl( R(i)^T R(k) \bigr) = 0, \qquad \sum_{l=1}^{q} \operatorname{tr}\bigl( S_l(i)^T S_l(k) \bigr) = 0, \qquad i, k = 1, 2, \ldots, m,\ i \neq k.$

(3)

The proof of Lemma 2.1 is similar to that of Lemma 2.2 in [21] and is omitted.

Lemma 2.2 Suppose $(X_1^*, X_2^*, \ldots, X_q^*)$ is an arbitrary generalized reflexive solution group of Problem I. Then, for any initial generalized reflexive matrix group $(X_1(1), X_2(1), \ldots, X_q(1))$, we have

$\sum_{j=1}^{q} \operatorname{tr}\bigl( (X_j^* - X_j(k))^T S_j(k) \bigr) = \|R(k)\|_F^2, \quad k = 1, 2, \ldots,$
(4)

where the sequences $\left\{{X}_{j}\left(k\right)\right\}$, $\left\{{S}_{j}\left(k\right)\right\}$ and $\left\{R\left(k\right)\right\}$ are generated by Algorithm 2.1.

Proof We prove the conclusion by induction for the positive integer k.

For $k=1$, we have that

$\begin{aligned} & \sum_{j=1}^{q} \operatorname{tr}\bigl( (X_j^* - X_j(1))^T S_j(1) \bigr) \\ &\quad = \sum_{j=1}^{q} \operatorname{tr}\Bigl( (X_j^* - X_j(1))^T \frac{1}{2} \Bigl[ \sum_{i=1}^{p} A_{ij}^T \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(1) B_{il} \Bigr) B_{ij}^T + \sum_{i=1}^{p} P_j A_{ij}^T \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(1) B_{il} \Bigr) B_{ij}^T Q_j \Bigr] \Bigr) \\ &\quad = \sum_{j=1}^{q} \operatorname{tr}\Bigl( (X_j^* - X_j(1))^T \sum_{i=1}^{p} A_{ij}^T \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(1) B_{il} \Bigr) B_{ij}^T \Bigr) \\ &\quad = \sum_{i=1}^{p} \operatorname{tr}\Bigl( \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(1) B_{il} \Bigr)^T \sum_{j=1}^{q} A_{ij} (X_j^* - X_j(1)) B_{ij} \Bigr) \\ &\quad = \sum_{i=1}^{p} \operatorname{tr}\Bigl( \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(1) B_{il} \Bigr)^T \Bigl( M_i - \sum_{j=1}^{q} A_{ij} X_j(1) B_{ij} \Bigr) \Bigr) \\ &\quad = \|R(1)\|_F^2, \end{aligned}$

where the second equality uses $X_j^* - X_j(1) \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j)$ together with the invariance of the trace under the map $Z \mapsto P_j Z Q_j$, and the fourth equality uses $\sum_{j=1}^{q} A_{ij} X_j^* B_{ij} = M_i$.

Assume that (4) holds for $k=m$. When $k=m+1$, by Algorithm 2.1, we have that

$\begin{aligned} & \sum_{j=1}^{q} \operatorname{tr}\bigl( (X_j^* - X_j(m+1))^T S_j(m+1) \bigr) \\ &\quad = \sum_{j=1}^{q} \operatorname{tr}\Bigl( (X_j^* - X_j(m+1))^T \sum_{i=1}^{p} A_{ij}^T \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(m+1) B_{il} \Bigr) B_{ij}^T \Bigr) + \frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2} \sum_{j=1}^{q} \operatorname{tr}\bigl( (X_j^* - X_j(m+1))^T S_j(m) \bigr) \\ &\quad = \sum_{i=1}^{p} \operatorname{tr}\Bigl( \Bigl( M_i - \sum_{l=1}^{q} A_{il} X_l(m+1) B_{il} \Bigr)^T \sum_{j=1}^{q} A_{ij} (X_j^* - X_j(m+1)) B_{ij} \Bigr) \\ &\qquad + \frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2} \sum_{j=1}^{q} \operatorname{tr}\bigl( (X_j^* - X_j(m))^T S_j(m) \bigr) - \frac{\|R(m+1)\|_F^2}{\sum_{j=1}^{q} \|S_j(m)\|_F^2} \sum_{j=1}^{q} \operatorname{tr}\bigl( S_j(m)^T S_j(m) \bigr) \\ &\quad = \|R(m+1)\|_F^2 + \frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2} \|R(m)\|_F^2 - \frac{\|R(m+1)\|_F^2}{\sum_{j=1}^{q} \|S_j(m)\|_F^2} \sum_{j=1}^{q} \|S_j(m)\|_F^2 \\ &\quad = \|R(m+1)\|_F^2 + \|R(m+1)\|_F^2 - \|R(m+1)\|_F^2 \\ &\quad = \|R(m+1)\|_F^2, \end{aligned}$

where the first equality uses the reflexivity of $X_j^* - X_j(m+1)$ as in the case $k = 1$, the second uses $X_j(m+1) = X_j(m) + \frac{\|R(m)\|_F^2}{\sum_{l=1}^{q} \|S_l(m)\|_F^2} S_j(m)$, and the third uses $\sum_{j=1}^{q} A_{ij} X_j^* B_{ij} = M_i$ together with the induction hypothesis.

Therefore, (4) holds for $k = m + 1$. Thus (4) holds by the principle of induction. This completes the proof. □

Remark 2.1 If there exists a positive integer $k$ such that $S_j(k) = 0$, $j = 1, 2, \ldots, q$, but $R(k) \neq 0$, then by Lemma 2.2 we have that Eq. (1) is not consistent over generalized reflexive matrices.

Theorem 2.1 Suppose that Problem I is consistent. Then, for an arbitrary initial matrix group $(X_1(1), X_2(1), \ldots, X_q(1))$ with $X_j(1) \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j)$, a generalized reflexive solution group of Problem I can be obtained within a finite number of iteration steps in the absence of round-off errors.

Proof Suppose $R(k) \neq 0$ for $k = 1, 2, \ldots, m$, where $m = \sum_{i=1}^{p} r_i s_i$. By Lemma 2.2 and Remark 2.1, we have $S_j(k) \neq 0$ for at least one $j \in \{1, 2, \ldots, q\}$ for each $k = 1, 2, \ldots, m$; hence we can compute $R(m+1)$ and $(X_1(m+1), X_2(m+1), \ldots, X_q(m+1))$ by Algorithm 2.1.

By Lemma 2.1, we have

$\operatorname{tr}\bigl( R(m+1)^T R(k) \bigr) = 0, \quad k = 1, 2, \ldots, m,$

and by Lemma 2.1

$\operatorname{tr}\bigl( R(i)^T R(k) \bigr) = 0, \quad i, k = 1, 2, \ldots, m,\ i \neq k.$

It can be seen that $R(1), R(2), \ldots, R(m)$ are $m$ nonzero, mutually orthogonal elements of, and hence an orthogonal basis of, the $m$-dimensional matrix subspace

$S = \bigl\{ L \mid L = \operatorname{diag}(L_1, L_2, \ldots, L_p),\ L_i \in \mathcal{R}^{r_i \times s_i},\ i = 1, 2, \ldots, p \bigr\},$

which implies that $R(m+1) = 0$, i.e., $(X_1(m+1), X_2(m+1), \ldots, X_q(m+1))$ with $X_j(m+1) \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j)$ is a generalized reflexive solution group of Problem I. This completes the proof. □

To show the least Frobenius norm generalized reflexive solution of Problem I, we first introduce the following result.

Lemma 2.3 (see [[23], Lemma 2.4])

Suppose that the consistent system of linear equations $Ax = b$ has a solution $x^* \in \mathcal{R}(A^T)$; then $x^*$ is the unique least-norm solution of the system of linear equations.

By Lemma 2.3, the following result can be obtained.

Theorem 2.2 Suppose that Problem I is consistent. If we choose the initial iterative matrices $X_j(1) = \sum_{i=1}^{p} A_{ij}^T K_i B_{ij}^T + \sum_{i=1}^{p} P_j A_{ij}^T K_i B_{ij}^T Q_j$, $j = 1, 2, \ldots, q$, where $K_i \in \mathcal{R}^{r_i \times s_i}$, $i = 1, 2, \ldots, p$, are arbitrary matrices (in particular, $X_j(1) = 0 \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j)$), then the solution group $(X_1^*, X_2^*, \ldots, X_q^*)$ generated by Algorithm 2.1 is the unique least Frobenius norm generalized reflexive solution group of Problem I.

Proof The solvability of Eq. (1) over generalized reflexive matrices is equivalent to that of the following system of matrix equations:

$\begin{cases} \sum_{j=1}^{q} A_{ij} X_j B_{ij} = M_i, & i = 1, 2, \ldots, p, \\ \sum_{j=1}^{q} A_{ij} P_j X_j Q_j B_{ij} = M_i, & i = 1, 2, \ldots, p. \end{cases}$
(5)

Then the system of matrix equations (5) is equivalent to

$\begin{pmatrix} B_{11}^T \otimes A_{11} & \cdots & B_{1q}^T \otimes A_{1q} \\ \vdots & & \vdots \\ B_{p1}^T \otimes A_{p1} & \cdots & B_{pq}^T \otimes A_{pq} \\ B_{11}^T Q_1 \otimes A_{11} P_1 & \cdots & B_{1q}^T Q_q \otimes A_{1q} P_q \\ \vdots & & \vdots \\ B_{p1}^T Q_1 \otimes A_{p1} P_1 & \cdots & B_{pq}^T Q_q \otimes A_{pq} P_q \end{pmatrix} \begin{pmatrix} \operatorname{vec}(X_1) \\ \vdots \\ \operatorname{vec}(X_q) \end{pmatrix} = \begin{pmatrix} \operatorname{vec}(M_1) \\ \vdots \\ \operatorname{vec}(M_p) \\ \operatorname{vec}(M_1) \\ \vdots \\ \operatorname{vec}(M_p) \end{pmatrix}.$
(6)
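The passage from (5) to (6) rests on the identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$ with column-stacking vectorization. A quick numerical check of this identity (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

# vec stacks columns (Fortran order); kron(B^T, A) then acts on vec(X)
# exactly as the map X -> A X B acts on X.
vec = lambda M: M.reshape(-1, order="F")
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```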

Let $X_j(1) = \sum_{i=1}^{p} A_{ij}^T K_i B_{ij}^T + \sum_{i=1}^{p} P_j A_{ij}^T K_i B_{ij}^T Q_j$, $j = 1, 2, \ldots, q$, where $K_i \in \mathcal{R}^{r_i \times s_i}$ are arbitrary matrices. Then

$\begin{aligned} \begin{pmatrix} \operatorname{vec}(X_1(1)) \\ \vdots \\ \operatorname{vec}(X_q(1)) \end{pmatrix} &= \begin{pmatrix} \operatorname{vec}\bigl( \sum_{i=1}^{p} A_{i1}^T K_i B_{i1}^T + \sum_{i=1}^{p} P_1 A_{i1}^T K_i B_{i1}^T Q_1 \bigr) \\ \vdots \\ \operatorname{vec}\bigl( \sum_{i=1}^{p} A_{iq}^T K_i B_{iq}^T + \sum_{i=1}^{p} P_q A_{iq}^T K_i B_{iq}^T Q_q \bigr) \end{pmatrix} \\ &= \begin{pmatrix} B_{11} \otimes A_{11}^T & \cdots & B_{p1} \otimes A_{p1}^T & Q_1 B_{11} \otimes P_1 A_{11}^T & \cdots & Q_1 B_{p1} \otimes P_1 A_{p1}^T \\ \vdots & & \vdots & \vdots & & \vdots \\ B_{1q} \otimes A_{1q}^T & \cdots & B_{pq} \otimes A_{pq}^T & Q_q B_{1q} \otimes P_q A_{1q}^T & \cdots & Q_q B_{pq} \otimes P_q A_{pq}^T \end{pmatrix} \begin{pmatrix} \operatorname{vec}(K_1) \\ \vdots \\ \operatorname{vec}(K_p) \\ \operatorname{vec}(K_1) \\ \vdots \\ \operatorname{vec}(K_p) \end{pmatrix} \\ &= \begin{pmatrix} B_{11}^T \otimes A_{11} & \cdots & B_{1q}^T \otimes A_{1q} \\ \vdots & & \vdots \\ B_{p1}^T \otimes A_{p1} & \cdots & B_{pq}^T \otimes A_{pq} \\ B_{11}^T Q_1 \otimes A_{11} P_1 & \cdots & B_{1q}^T Q_q \otimes A_{1q} P_q \\ \vdots & & \vdots \\ B_{p1}^T Q_1 \otimes A_{p1} P_1 & \cdots & B_{pq}^T Q_q \otimes A_{pq} P_q \end{pmatrix}^T \begin{pmatrix} \operatorname{vec}(K_1) \\ \vdots \\ \operatorname{vec}(K_p) \\ \operatorname{vec}(K_1) \\ \vdots \\ \operatorname{vec}(K_p) \end{pmatrix} \\ &\in \mathcal{R}\left( \begin{pmatrix} B_{11}^T \otimes A_{11} & \cdots & B_{1q}^T \otimes A_{1q} \\ \vdots & & \vdots \\ B_{p1}^T \otimes A_{p1} & \cdots & B_{pq}^T \otimes A_{pq} \\ B_{11}^T Q_1 \otimes A_{11} P_1 & \cdots & B_{1q}^T Q_q \otimes A_{1q} P_q \\ \vdots & & \vdots \\ B_{p1}^T Q_1 \otimes A_{p1} P_1 & \cdots & B_{pq}^T Q_q \otimes A_{pq} P_q \end{pmatrix}^T \right). \end{aligned}$

Furthermore, it can be verified that all the generalized reflexive matrix solution groups $(X_1(k), X_2(k), \ldots, X_q(k))$ generated by Algorithm 2.1 satisfy

$\begin{pmatrix} \operatorname{vec}(X_1(k)) \\ \vdots \\ \operatorname{vec}(X_q(k)) \end{pmatrix} \in \mathcal{R}\left( \begin{pmatrix} B_{11}^T \otimes A_{11} & \cdots & B_{1q}^T \otimes A_{1q} \\ \vdots & & \vdots \\ B_{p1}^T \otimes A_{p1} & \cdots & B_{pq}^T \otimes A_{pq} \\ B_{11}^T Q_1 \otimes A_{11} P_1 & \cdots & B_{1q}^T Q_q \otimes A_{1q} P_q \\ \vdots & & \vdots \\ B_{p1}^T Q_1 \otimes A_{p1} P_1 & \cdots & B_{pq}^T Q_q \otimes A_{pq} P_q \end{pmatrix}^T \right).$

By Lemma 2.3, we know that $(X_1^*, X_2^*, \ldots, X_q^*)$ is the least Frobenius norm generalized reflexive solution group of the system of linear equations (6). Since the vec operator is an isomorphism, $(X_1^*, X_2^*, \ldots, X_q^*)$ is the unique least Frobenius norm generalized reflexive solution group of the system of matrix equations (5). Thus $(X_1^*, X_2^*, \ldots, X_q^*)$ is the unique least Frobenius norm generalized reflexive solution group of Problem I. □
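The structure of the initial matrices in Theorem 2.2 can be checked numerically: since $P_j^2 = I$ and $Q_j^2 = I$, the matrix $X_j(1) = \sum_i A_{ij}^T K_i B_{ij}^T + \sum_i P_j A_{ij}^T K_i B_{ij}^T Q_j$ is always generalized reflexive. A small sketch of our own (dimensions and names chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 2
P = np.diag([1.0, -1.0, 1.0])          # generalized reflection pair (P_1, Q_1)
Q = np.diag([-1.0, 1.0, 1.0, -1.0])
A = [rng.standard_normal((2, 3)) for _ in range(p)]   # A_i1 in R^{r_i x m_1}
B = [rng.standard_normal((4, 5)) for _ in range(p)]   # B_i1 in R^{n_1 x s_i}
K = [rng.standard_normal((2, 5)) for _ in range(p)]   # arbitrary K_i

# X_1(1) = sum_i A_i1^T K_i B_i1^T + sum_i P_1 A_i1^T K_i B_i1^T Q_1
X1 = sum(A[i].T @ K[i] @ B[i].T for i in range(p))
X1 = X1 + sum(P @ A[i].T @ K[i] @ B[i].T @ Q for i in range(p))

# P X_1(1) Q swaps the two sums (using P^2 = I, Q^2 = I), leaving X_1(1) fixed.
assert np.allclose(P @ X1 @ Q, X1)
```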

## 3 The solution of Problem II

In this section, we show that the optimal approximate generalized reflexive solution group of Problem II to a given generalized reflexive matrix group can be derived by finding the least Frobenius norm generalized reflexive solution group of the corresponding general coupled matrix equations.

When Problem I is consistent, the set $S_E$ of generalized reflexive solution groups of Problem I is not empty. For a given matrix group $(X_1^0, X_2^0, \ldots, X_q^0)$ with $X_j^0 \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j)$, $j = 1, 2, \ldots, q$, we have

$\sum_{j=1}^{q} A_{ij} X_j B_{ij} = M_i \quad \Longleftrightarrow \quad \sum_{j=1}^{q} A_{ij} (X_j - X_j^0) B_{ij} = M_i - \sum_{j=1}^{q} A_{ij} X_j^0 B_{ij}, \quad i = 1, 2, \ldots, p.$
(7)

Set $\tilde{X}_j = X_j - X_j^0$ and $\tilde{M}_i = M_i - \sum_{j=1}^{q} A_{ij} X_j^0 B_{ij}$. Then solving Problem II is equivalent to finding the least Frobenius norm generalized reflexive solution group $(\tilde{X}_1^*, \tilde{X}_2^*, \ldots, \tilde{X}_q^*)$ of the corresponding general coupled matrix equations

$\sum_{j=1}^{q} A_{ij} \tilde{X}_j B_{ij} = \tilde{M}_i, \quad i = 1, 2, \ldots, p.$
(8)

Let initial iteration matrices be

$\tilde{X}_j(1) = \sum_{i=1}^{p} A_{ij}^T K_i B_{ij}^T + \sum_{i=1}^{p} P_j A_{ij}^T K_i B_{ij}^T Q_j, \quad j = 1, 2, \ldots, q,$

where $K_i \in \mathcal{R}^{r_i \times s_i}$, $i = 1, 2, \ldots, p$, are arbitrary matrices (in particular, $\tilde{X}_j(1) = 0 \in \mathcal{R}_r^{m_j \times n_j}(P_j, Q_j)$, $j = 1, 2, \ldots, q$). By using Algorithm 2.1, we can get the least Frobenius norm generalized reflexive solution group $(\tilde{X}_1^*, \tilde{X}_2^*, \ldots, \tilde{X}_q^*)$ of (8). Thus the generalized reflexive solution group of Problem II can be represented as

$(\hat{X}_1, \hat{X}_2, \ldots, \hat{X}_q) = (\tilde{X}_1^* + X_1^0,\ \tilde{X}_2^* + X_2^0,\ \ldots,\ \tilde{X}_q^* + X_q^0).$
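This shift-and-solve reduction can be illustrated self-containedly for the case $p = q = 1$. The sketch below (our own construction; it solves the shifted system (8) directly through its Kronecker form (6) with a least-squares solver in place of Algorithm 2.1, relying on the fact that `lstsq` returns the least-norm solution of a consistent system) recovers an optimal approximation $\hat{X} = \tilde{X}^* + X^0$ that is generalized reflexive and solves the original equation:

```python
import numpy as np

rng = np.random.default_rng(2)
m = n = 3
P = np.diag([1.0, -1.0, 1.0])            # generalized reflection matrices
Q = np.diag([-1.0, 1.0, 1.0])
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))

Xr = rng.standard_normal((m, n)); Xr = 0.5 * (Xr + P @ Xr @ Q)  # reflexive target
M = A @ Xr @ B                            # consistent right-hand side
X0 = rng.standard_normal((m, n)); X0 = 0.5 * (X0 + P @ X0 @ Q)  # given X^0

# Shifted system (8): A Xt B = M - A X0 B, in the doubled Kronecker form (6).
Mt = M - A @ X0 @ B
H = np.vstack([np.kron(B.T, A), np.kron(B.T @ Q, A @ P)])
rhs = np.concatenate([Mt.reshape(-1, order="F")] * 2)
xt, *_ = np.linalg.lstsq(H, rhs, rcond=None)   # least-norm solution of (6)
Xt = xt.reshape(m, n, order="F")

Xhat = Xt + X0                            # optimal approximation to X0
assert np.allclose(A @ Xhat @ B, M)       # solves the original equation
assert np.allclose(P @ Xhat @ Q, Xhat)    # and is generalized reflexive
```

The reflexivity of the least-norm solution falls out of the doubled system: if $\operatorname{vec}(X)$ solves (6), so does $\operatorname{vec}(PXQ)$ with the same norm, and uniqueness of the least-norm solution forces $PXQ = X$.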

## 4 A numerical example

In this section, we give a numerical example to illustrate the proposed iterative method. All the tests are performed in MATLAB 7.8.

Example 4.1 Consider the generalized reflexive solution of the general coupled matrix equations

$\begin{cases} A_{11} X_1 B_{11} + A_{12} X_2 B_{12} = M_1, \\ A_{21} X_1 B_{21} + A_{22} X_2 B_{22} = M_2, \end{cases}$
(9)

where

$A_{11} = \begin{pmatrix} 1 & 3 & -5 & 7 & -9 \\ 2 & 0 & 4 & 6 & -1 \\ 0 & -2 & 9 & 6 & -8 \\ 3 & 6 & 2 & 2 & -3 \\ -5 & 5 & -22 & -1 & -11 \\ 8 & 4 & -6 & -9 & -9 \end{pmatrix}, \qquad B_{11} = \begin{pmatrix} 4 & 8 & -5 & 4 \\ -1 & 5 & -2 & 3 \\ 3 & 9 & 2 & -6 \\ -2 & 7 & -8 & 1 \end{pmatrix},$

$A_{12} = \begin{pmatrix} 6 & -5 & 7 & -9 \\ 2 & 4 & 6 & -11 \\ 9 & -12 & 3 & -8 \\ 13 & 6 & 4 & -15 \\ -5 & 15 & -13 & -11 \\ 2 & 9 & -6 & -9 \end{pmatrix}, \qquad B_{12} = \begin{pmatrix} 7 & 1 & 8 & -6 \\ -4 & 5 & -2 & 3 \\ 3 & -12 & 0 & 8 \\ 1 & 6 & 9 & 4 \\ -5 & 8 & -2 & 9 \end{pmatrix},$

$A_{21} = \begin{pmatrix} 14 & 5 & -1 & 7 & 1 \\ -2 & 3 & -2 & 5 & 4 \\ 13 & 4 & 2 & -3 & 6 \\ -8 & 1 & -5 & 4 & 8 \end{pmatrix}, \qquad B_{21} = \begin{pmatrix} 1 & 3 & -5 & 8 & 2 \\ -11 & 5 & -6 & 2 & 5 \\ 13 & 2 & 7 & -9 & 7 \\ -9 & 6 & -5 & 12 & 1 \end{pmatrix},$

$A_{22} = \begin{pmatrix} 1 & 2 & -5 & 8 \\ -5 & 5 & -7 & 3 \\ 2 & 4 & 9 & -6 \\ -3 & 7 & -12 & 11 \end{pmatrix}, \qquad B_{22} = \begin{pmatrix} 2 & 4 & 8 & -5 & 4 \\ 7 & -1 & 5 & -2 & 3 \\ 6 & 3 & 9 & 2 & -6 \\ 5 & -2 & 7 & -8 & 1 \\ 1 & 4 & -3 & -2 & 6 \end{pmatrix},$

$M_1 = \begin{pmatrix} 941 & -615 & -299 & -2420 \\ 701 & 153 & -1088 & -1350 \\ 746 & -1846 & -1931 & -1352 \\ 1298 & 1133 & 305 & -2594 \\ 1953 & -4224 & 2945 & -3630 \\ 698 & -1980 & 1488 & -2192 \end{pmatrix},$

$M_2 = \begin{pmatrix} 1242 & 1986 & 1365 & -1150 & 2727 \\ 1569 & 477 & 1215 & -439 & 384 \\ -2845 & 1447 & -1795 & 818 & 1722 \\ 3924 & 687 & 3806 & -2488 & 1229 \end{pmatrix}.$

Let

$\begin{array}{c}{P}_{1}=\left(\begin{array}{ccccc}0& 0& 0& 0& −1\\ 0& 0& 0& 1& 0\\ 0& 0& −1& 0& 0\\ 0& 1& 0& 0& 0\\ −1& 0& 0& 0& 0\end{array}\right),\phantom{\rule{2em}{0ex}}{Q}_{1}=\left(\begin{array}{cccc}0& 0& 1& 0\\ 0& 0& 0& −1\\ 1& 0& 0& 0\\ 0& −1& 0& 0\end{array}\right),\hfill \\ {P}_{2}=\left(\begin{array}{cccc}0& 0& 0& 1\\ 0& 1& 0& 0\\ 0& 0& −1& 0\\ 1& 0& 0& 0\end{array}\right),\phantom{\rule{2em}{0ex}}{Q}_{2}=\left(\begin{array}{ccccc}0& 0& 0& 1& 0\\ 0& 0& 0& 0& 1\\ 0& 0& −1& 0& 0\\ 1& 0& 0& 0& 0\\ 0& 1& 0& 0& 0\end{array}\right)\hfill \end{array}$

be generalized reflection matrices.

We will find the generalized reflexive solution of Eq. (9) by using Algorithm 2.1. It can be verified that Eq. (9) is consistent over generalized reflexive matrices and the solutions are

${X}_{1}^{∗}=\left(\begin{array}{cccc}−2& 9& 2& 5\\ 3& 1& 11& −1\\ 7& 3& −7& 3\\ 11& 1& 3& −1\\ −2& 5& 2& 9\end{array}\right),\phantom{\rule{2em}{0ex}}{X}_{2}^{∗}=\left(\begin{array}{ccccc}14& 16& −1& 3& 4\\ 9& 7& 0& 9& 7\\ −3& −8& −8& 3& 8\\ 3& 4& 1& 14& 16\end{array}\right).$
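The claimed solution group can be checked directly against the definition: a matrix X is generalized reflexive with respect to $\left(P,Q\right)$ exactly when $PXQ=X$. The following NumPy snippet, using the data above, confirms that ${P}_{1}$, ${Q}_{1}$, ${P}_{2}$, ${Q}_{2}$ are generalized reflection matrices and that $\left({X}_{1}^{∗},{X}_{2}^{∗}\right)$ is generalized reflexive:

```python
import numpy as np

# the generalized reflection matrices of the example
P1 = np.array([[0,0,0,0,-1],[0,0,0,1,0],[0,0,-1,0,0],[0,1,0,0,0],[-1,0,0,0,0]])
Q1 = np.array([[0,0,1,0],[0,0,0,-1],[1,0,0,0],[0,-1,0,0]])
P2 = np.array([[0,0,0,1],[0,1,0,0],[0,0,-1,0],[1,0,0,0]])
Q2 = np.array([[0,0,0,1,0],[0,0,0,0,1],[0,0,-1,0,0],[1,0,0,0,0],[0,1,0,0,0]])

# the exact solution group (X1*, X2*)
X1 = np.array([[-2,9,2,5],[3,1,11,-1],[7,3,-7,3],[11,1,3,-1],[-2,5,2,9]])
X2 = np.array([[14,16,-1,3,4],[9,7,0,9,7],[-3,-8,-8,3,8],[3,4,1,14,16]])

# each P, Q is a symmetric involution, i.e. a generalized reflection matrix
for P, Q in [(P1, Q1), (P2, Q2)]:
    assert np.array_equal(P, P.T) and np.array_equal(P @ P, np.eye(len(P)))
    assert np.array_equal(Q, Q.T) and np.array_equal(Q @ Q, np.eye(len(Q)))

# P X Q = X holds for both solution matrices
assert np.array_equal(P1 @ X1 @ Q1, X1)
assert np.array_equal(P2 @ X2 @ Q2, X2)
```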

Because of round-off errors, the residual $R\left(k\right)$ is usually nonzero during the iteration, where $k=1,2,…$ . For any chosen positive number ε, however small, e.g., $\mathrm{ε}=1.0000\text{e-}010$, the iteration stops whenever $∥R\left(k\right)∥<\mathrm{ε}$, and ${X}_{1}\left(k\right)$ and ${X}_{2}\left(k\right)$ are regarded as generalized reflexive solutions of Eq. (9). Let the initial iteration matrices be as follows:

${X}_{1}\left(1\right)=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),\phantom{\rule{2em}{0ex}}{X}_{2}\left(1\right)=\left(\begin{array}{ccccc}0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0\end{array}\right).$

By Algorithm 2.1, we have

$\begin{array}{c}{X}_{1}^{∗}={X}_{1}\left(30\right)=\left(\begin{array}{cccc}−2.0000& 9.0000& 2.0000& 5.0000\\ 3.0000& 1.0000& 11.0000& −1.0000\\ 7.0000& 3.0000& −7.0000& 3.0000\\ 11.0000& 1.0000& 3.0000& −1.0000\\ −2.0000& 5.0000& 2.0000& 9.0000\end{array}\right),\hfill \\ {X}_{2}^{∗}={X}_{2}\left(30\right)=\left(\begin{array}{ccccc}14.0000& 16.0000& −1.0000& 3.0000& 4.0000\\ 9.0000& 7.0000& 0& 9.0000& 7.0000\\ −3.0000& −8.0000& −8.0000& 3.0000& 8.0000\\ 3.0000& 4.0000& 1.0000& 14.0000& 16.0000\end{array}\right),\hfill \\ ∥R\left(30\right)∥=6.4815\text{e-}012<\mathrm{ε}.\hfill \end{array}$

Thus we obtain the generalized reflexive solution of Eq. (9). The relative error and the residual of the solution are shown in Figure 1, where the relative error $REk=\frac{∥{X}_{1}\left(k\right)−{X}_{1}^{∗}∥+∥{X}_{2}\left(k\right)−{X}_{2}^{∗}∥}{∥{X}_{1}^{∗}∥+∥{X}_{2}^{∗}∥}$ and the residual $Rk=∥R\left(k\right)∥$.
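Algorithm 2.1 is stated earlier in the paper and is not reproduced here. As a rough illustration of how conjugate-gradient-type iterations of this family proceed (residual evaluation, search directions projected onto the generalized reflexive subspace, and a step length built from Frobenius norms), the following NumPy sketch may be helpful. The function name, the zero starting matrices, and the stopping guard are illustrative assumptions, not the paper's exact pseudocode:

```python
import numpy as np

def coupled_reflexive_cg(A, B, M, P, Q, tol=1e-10, max_iter=1000):
    """Sketch of a CG-type iteration for sum_j A[i][j] X_j B[i][j] = M[i]
    over generalized reflexive X_j (P[j] X_j Q[j] = X_j).
    A, B: p-by-q nested lists of arrays; M: length-p list; P, Q: length-q lists."""
    p, q = len(M), len(P)
    # the zero start lies in the reflexive subspace (and targets the least-norm solution)
    X = [np.zeros((A[0][j].shape[1], B[0][j].shape[0])) for j in range(q)]

    def residuals(X):
        return [M[i] - sum(A[i][j] @ X[j] @ B[i][j] for j in range(q))
                for i in range(p)]

    def project(Z, j):
        # orthogonal projection onto {X : P[j] X Q[j] = X} (P, Q symmetric involutions)
        return 0.5 * (Z + P[j] @ Z @ Q[j])

    R = residuals(X)
    S = [project(sum(A[i][j].T @ R[i] @ B[i][j].T for i in range(p)), j)
         for j in range(q)]
    r2 = sum(np.sum(Ri * Ri) for Ri in R)
    for _ in range(max_iter):
        s2 = sum(np.sum(Sj * Sj) for Sj in S)
        if np.sqrt(r2) < tol or s2 == 0.0:
            break
        alpha = r2 / s2                      # step length
        X = [X[j] + alpha * S[j] for j in range(q)]
        R = residuals(X)
        r2_new = sum(np.sum(Ri * Ri) for Ri in R)
        beta = r2_new / r2                   # conjugation coefficient
        S = [project(sum(A[i][j].T @ R[i] @ B[i][j].T for i in range(p)), j)
             + beta * S[j] for j in range(q)]
        r2 = r2_new
    return X, np.sqrt(r2)
```

On small consistent test systems this sketch drives the residual to near machine precision in a handful of steps, mirroring the finite-step convergence behaviour reported for Algorithm 2.1.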

Let ${S}_{E}$ denote the set of all generalized reflexive solution groups of Eq. (9). For two given generalized reflexive matrices

${X}_{1}^{0}=\left(\begin{array}{cccc}3& −1& 2& 2\\ 3& −2& 0& 0\\ 1& −3& −1& −3\\ 0& 0& 3& 2\\ −2& 2& −3& −1\end{array}\right),\phantom{\rule{2em}{0ex}}{X}_{2}^{0}=\left(\begin{array}{ccccc}2& 4& −2& 2& 0\\ 1& 3& 0& 1& 3\\ 5& −2& 2& −5& 2\\ 2& 0& 2& 2& 4\end{array}\right),$

we will find the optimal approximate generalized reflexive solution group to the given matrix group $\left({X}_{1}^{0},{X}_{2}^{0}\right)$ in ${S}_{E}$ in Frobenius norm, i.e., find $\left({\stackrel{ˆ}{X}}_{1},{\stackrel{ˆ}{X}}_{2}\right)∈{S}_{E}$ such that

$∥{\stackrel{ˆ}{X}}_{1}−{X}_{1}^{0}∥+∥{\stackrel{ˆ}{X}}_{2}−{X}_{2}^{0}∥=\underset{\left({X}_{1},{X}_{2}\right)∈{S}_{E}}{min}\left(∥{X}_{1}−{X}_{1}^{0}∥+∥{X}_{2}−{X}_{2}^{0}∥\right).$

Let ${\stackrel{˜}{X}}_{1}={X}_{1}−{X}_{1}^{0}$, ${\stackrel{˜}{X}}_{2}={X}_{2}−{X}_{2}^{0}$, ${\stackrel{˜}{M}}_{1}={M}_{1}−{A}_{11}{X}_{1}^{0}{B}_{11}−{A}_{12}{X}_{2}^{0}{B}_{12}$, ${\stackrel{˜}{M}}_{2}={M}_{2}−{A}_{21}{X}_{1}^{0}{B}_{21}−{A}_{22}{X}_{2}^{0}{B}_{22}$. By the method described in Section 3, we can obtain the least-norm generalized reflexive solution group $\left({\stackrel{˜}{X}}_{1}^{∗},{\stackrel{˜}{X}}_{2}^{∗}\right)$ of the matrix equations ${A}_{11}{\stackrel{˜}{X}}_{1}{B}_{11}+{A}_{12}{\stackrel{˜}{X}}_{2}{B}_{12}={\stackrel{˜}{M}}_{1}$, ${A}_{21}{\stackrel{˜}{X}}_{1}{B}_{21}+{A}_{22}{\stackrel{˜}{X}}_{2}{B}_{22}={\stackrel{˜}{M}}_{2}$ by choosing the initial iteration matrices ${\stackrel{˜}{X}}_{1}\left(1\right)=0$ and ${\stackrel{˜}{X}}_{2}\left(1\right)=0$. Then by Algorithm 2.1, we have that
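This reduction rests on a simple identity: shifting the unknowns by $\left({X}_{1}^{0},{X}_{2}^{0}\right)$ shifts the right-hand sides by ${A}_{i1}{X}_{1}^{0}{B}_{i1}+{A}_{i2}{X}_{2}^{0}{B}_{i2}$, so the original and the shifted systems have equal residuals at corresponding iterates. A small NumPy check of the identity for the first equation, on random stand-in matrices (all data here are placeholders, not the example's matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
randm = lambda *s: rng.standard_normal(s)
# random stand-ins for A11, B11, A12, B12 and for X1, X2, X1^0, X2^0
A11, A12 = randm(3, 4), randm(3, 5)
B11, B12 = randm(4, 6), randm(5, 6)
X1, X10 = randm(4, 4), randm(4, 4)
X2, X20 = randm(5, 5), randm(5, 5)
M1 = randm(3, 6)                       # arbitrary right-hand side

# shifted data: X~_j = X_j - X_j^0,  M~_1 = M_1 - A11 X1^0 B11 - A12 X2^0 B12
M1t = M1 - A11 @ X10 @ B11 - A12 @ X20 @ B12

orig_residual = M1 - (A11 @ X1 @ B11 + A12 @ X2 @ B12)
shifted_residual = M1t - (A11 @ (X1 - X10) @ B11 + A12 @ (X2 - X20) @ B12)
assert np.allclose(orig_residual, shifted_residual)
```

Hence solving the shifted system from the zero start yields the least-norm $\left({\stackrel{˜}{X}}_{1}^{∗},{\stackrel{˜}{X}}_{2}^{∗}\right)$, and adding back $\left({X}_{1}^{0},{X}_{2}^{0}\right)$ gives the optimal approximation.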

$\begin{array}{c}{\stackrel{˜}{X}}_{1}^{∗}={\stackrel{˜}{X}}_{1}\left(29\right)=\left(\begin{array}{cccc}−5.0000& 10.0000& 0.0000& 3.0000\\ −0.0000& 3.0000& 11.0000& −1.0000\\ 6.0000& 6.0000& −6.0000& 6.0000\\ 11.0000& 1.0000& −0.0000& −3.0000\\ −0.0000& 3.0000& 5.0000& 10.0000\end{array}\right),\hfill \\ {\stackrel{˜}{X}}_{2}^{∗}={\stackrel{˜}{X}}_{2}\left(29\right)=\left(\begin{array}{ccccc}12.0000& 12.0000& 1.0000& 1.0000& 4.0000\\ 8.0000& 4.0000& 0& 8.0000& 4.0000\\ −8.0000& −6.0000& −10.0000& 8.0000& 6.0000\\ 1.0000& 4.0000& −1.0000& 12.0000& 12.0000\end{array}\right),\hfill \\ ∥R\left(29\right)∥=1.4095\text{e-}011<\mathrm{ε}=1.0000\text{e-}010,\hfill \end{array}$

and the optimal approximate generalized reflexive solution to the matrix group $\left({X}_{1}^{0},{X}_{2}^{0}\right)$ in Frobenius norm is

$\begin{array}{c}{\stackrel{ˆ}{X}}_{1}={\stackrel{˜}{X}}_{1}^{∗}+{X}_{1}^{0}=\left(\begin{array}{cccc}−2.0000& 9.0000& 2.0000& 5.0000\\ 3.0000& 1.0000& 11.0000& −1.0000\\ 7.0000& 3.0000& −7.0000& 3.0000\\ 11.0000& 1.0000& 3.0000& −1.0000\\ −2.0000& 5.0000& 2.0000& 9.0000\end{array}\right),\hfill \\ {\stackrel{ˆ}{X}}_{2}={\stackrel{˜}{X}}_{2}^{∗}+{X}_{2}^{0}=\left(\begin{array}{ccccc}14.0000& 16.0000& −1.0000& 3.0000& 4.0000\\ 9.0000& 7.0000& 0& 9.0000& 7.0000\\ −3.0000& −8.0000& −8.0000& 3.0000& 8.0000\\ 3.0000& 4.0000& 1.0000& 14.0000& 16.0000\end{array}\right).\hfill \end{array}$

The relative error and the residual of the solution are shown in Figure 2, where the relative error $REk=\frac{∥{\stackrel{˜}{X}}_{1}\left(k\right)+{X}_{1}^{0}−{X}_{1}^{∗}∥+∥{\stackrel{˜}{X}}_{2}\left(k\right)+{X}_{2}^{0}−{X}_{2}^{∗}∥}{∥{X}_{1}^{∗}∥+∥{X}_{2}^{∗}∥}$ and the residual $Rk=∥R\left(k\right)∥$.

From Figures 1 and 2, we can see that the iterative solutions generated by Algorithm 2.1 converge quickly to the exact generalized reflexive solution of Eq. (9). This illustrates that our iterative algorithm is quite effective.

## 5 Conclusions

In this paper, an iterative algorithm is presented to solve the general coupled matrix equations ${∑}_{j=1}^{q}{A}_{ij}{X}_{j}{B}_{ij}={M}_{i}$ ($i=1,2,…,p$) and their optimal approximation problem over generalized reflexive matrices. When the general coupled matrix equations are consistent over generalized reflexive matrices, for any initial generalized reflexive matrix group, the generalized reflexive solution group can be obtained by the iterative algorithm within finitely many iterative steps in the absence of round-off errors. When a special kind of initial iteration matrix group is chosen, the unique least-norm generalized reflexive solution of the general coupled matrix equations can be derived. Furthermore, the optimal approximate generalized reflexive solution of the general coupled matrix equations to a given generalized reflexive matrix group can be derived by finding the least-norm generalized reflexive solution of the corresponding new general coupled matrix equations. Finally, a numerical example is given to show that our iterative algorithm is quite effective.

## References

1. Ding F, Chen T: On iterative solutions of general coupled matrix equations. SIAM J. Control Optim. 2006, 44(6):2269–2284. 10.1137/S0363012904441350

2. Ding F, Chen T: Gradient based iterative algorithms for solving a class of matrix equations. IEEE Trans. Autom. Control 2005, 50(8):1216–1221.

3. Ding F, Liu PX, Ding J: Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle. Appl. Math. Comput. 2008, 197(1):41–50. 10.1016/j.amc.2007.07.040

4. Ding J, Liu Y, Ding F: Iterative solutions to matrix equations of the form ${A}_{i}X{B}_{i}={F}_{i}$. Comput. Math. Appl. 2010, 59(11):3500–3507.

5. Xie L, Ding J, Ding F: Gradient based iterative solutions for general linear matrix equations. Comput. Math. Appl. 2009, 58(7):1441–1448.

6. Ding F, Chen T: Iterative least-squares solutions of coupled Sylvester matrix equations. Syst. Control Lett. 2005, 54(2):95–107. 10.1016/j.sysconle.2004.06.008

7. Wang QW: Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations. Comput. Math. Appl. 2005, 49(5–6):641–650.

8. Wu AG, Feng G, Duan GR, Wu WJ: Iterative solutions to coupled Sylvester-conjugate matrix equations. Comput. Math. Appl. 2010, 60(1):54–66.

9. Wu AG, Li B, Zhang Y, Duan GR: Finite iterative solutions to coupled Sylvester-conjugate matrix equations. Appl. Math. Model. 2011, 35(3):1065–1080. 10.1016/j.apm.2010.07.053

10. Wu AG, Feng G, Duan GR, Wu WJ: Finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns. Math. Comput. Model. 2010, 52(9–10):1463–1478. 10.1016/j.mcm.2010.06.010

11. Jonsson I, Kågström B: Recursive blocked algorithms for solving triangular systems - part I: one-sided and coupled Sylvester-type matrix equations. ACM Trans. Math. Softw. 2002, 28(4):392–415. 10.1145/592843.592845

12. Jonsson I, Kågström B: Recursive blocked algorithms for solving triangular systems - part II: two-sided and generalized Sylvester and Lyapunov matrix equations. ACM Trans. Math. Softw. 2002, 28(4):416–435. 10.1145/592843.592846

13. Dehghan M, Hajarian M: The general coupled matrix equations over generalized bisymmetric matrices. Linear Algebra Appl. 2010, 432(6):1531–1552. 10.1016/j.laa.2009.11.014

14. Huang GX, Wu N, Yin F, Zhou ZL, Guo K: Finite iterative algorithms for solving generalized coupled Sylvester systems - part I: one-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions. Appl. Math. Model. 2012, 36(4):1589–1603. 10.1016/j.apm.2011.09.027

15. Yin F, Huang GX, Chen DQ: Finite iterative algorithms for solving generalized coupled Sylvester systems - part II: two-sided and generalized coupled Sylvester matrix equations over reflexive solutions. Appl. Math. Model. 2012, 36(4):1604–1614. 10.1016/j.apm.2011.09.025

16. Zhou B, Duan GR, Li ZY: Gradient based iterative algorithm for solving coupled matrix equations. Syst. Control Lett. 2009, 58(5):327–333. 10.1016/j.sysconle.2008.12.004

17. Li ZY, Zhou B, Wang Y, Duan GR: Numerical solution to linear matrix equation by finite steps iteration. IET Control Theory Appl. 2010, 4(7):1245–1253. 10.1049/iet-cta.2009.0015

18. Cai J, Chen GX: An iterative algorithm for the least squares bisymmetric solutions of the matrix equations ${A}_{1}X{B}_{1}={C}_{1}$, ${A}_{2}X{B}_{2}={C}_{2}$. Math. Comput. Model. 2009, 50(7–8):1237–1244. 10.1016/j.mcm.2009.07.004

19. Chen D, Yin F, Huang G: An iterative algorithm for the generalized reflexive solution of the matrix equations $AXB=E$, $CXD=F$. J. Appl. Math. 2012. doi:10.1155/2012/492951

20. Ding F, Chen T: Hierarchical gradient-based identification of multivariable discrete-time systems. Automatica 2005, 41(2):315–325. 10.1016/j.automatica.2004.10.010

21. Ding F, Chen T: Hierarchical least squares identification methods for multivariable systems. IEEE Trans. Autom. Control 2005, 50(3):397–402.

22. Ding F, Chen T: Hierarchical identification of lifted state-space models for general dual-rate systems. IEEE Trans. Circuits Syst. I, Regul. Pap. 2005, 52(6):1179–1187.

23. Huang GX, Yin F, Guo K: An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation $AXB=C$. J. Comput. Appl. Math. 2008, 212(2):231–244. 10.1016/j.cam.2006.12.005

24. Liao AP, Lei Y: Least-squares solution with the minimum-norm for the matrix equation $\left(AXB,GXH\right)=\left(C,D\right)$. Comput. Math. Appl. 2005, 50(3–4):539–549.

25. Peng ZH, Hu XY, Zhang L: An efficient algorithm for the least-squares reflexive solution of the matrix equation ${A}_{1}X{B}_{1}={C}_{1}$, ${A}_{2}X{B}_{2}={C}_{2}$. Appl. Math. Comput. 2006, 181(2):988–999. 10.1016/j.amc.2006.01.071

26. Wang QW: The general solution to a system of real quaternion matrix equations. Comput. Math. Appl. 2005, 49(5–6):665–675.

27. Wang QW, Li CK: Ranks and the least-norm of the general solution to a system of quaternion matrix equations. Linear Algebra Appl. 2009, 430(5–6):1626–1640. 10.1016/j.laa.2008.05.031

28. Yin F, Huang GX: An iterative algorithm for the least squares generalized reflexive solutions of the matrix equations $AXB=E$, $CXD=F$. Abstr. Appl. Anal. 2012, 2012: 1–18.

29. Yin F, Huang G: An iterative algorithm for the generalized reflexive solutions of the generalized coupled Sylvester matrix equations. J. Appl. Math. 2012, 2012: 1–28.

30. Zhou B, Li ZY, Duan GR, Wang Y: Weighted least squares solutions to general coupled Sylvester matrix equations. J. Comput. Appl. Math. 2009, 224(2):759–776. 10.1016/j.cam.2008.06.014

31. Cheng G, Tan Q, Wang Z: Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2013, 2013: Article ID 65. doi:10.1186/1029-242X-2013-65

32. Liu Y, Sheng J, Ding R: Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems. Comput. Math. Appl. 2010, 59(8):2615–2627.

33. Liu Y, Xiao Y, Zhao X: Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model. Appl. Math. Comput. 2009, 215(4):1477–1483. 10.1016/j.amc.2009.07.012

34. Ding F, Liu Y, Bao G: Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems. Proc. Inst. Mech. Eng., Part I, J. Syst. Control Eng. 2012, 226(1):43–55. 10.1177/0959651811409491

35. Ding F, Liu G, Liu XP: Parameter estimation with scarce measurements. Automatica 2011, 47(8):1646–1655. 10.1016/j.automatica.2011.05.007

36. Xie L, Liu Y, Yang H: Gradient based and least squares based iterative algorithms for matrix equations $AXB+C{X}^{T}D=F$. Appl. Math. Comput. 2010, 217(5):2191–2199. 10.1016/j.amc.2010.07.019

## Acknowledgements

The authors are grateful to the anonymous referee for the constructive and helpful comments and Professor R Agarwal for all the communications. This work was partially supported by National Natural Science Fund (41272363), Open Fund of Geomathematics Key Laboratory of Sichuan Province (scsxdz2012001, scsxdz2012002), Key Natural Science Foundation of Sichuan Education Department (12ZA008, 12ZB289) and the young scientific research backbone teachers of CDUT.

## Author information

Authors

### Corresponding author

Correspondence to Feng Yin.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors contributed equally to this paper. All authors read and approved the final manuscript.



Yin, F., Guo, K. & Huang, GX. An iterative algorithm for the generalized reflexive solutions of the general coupled matrix equations. J Inequal Appl 2013, 280 (2013). https://doi.org/10.1186/1029-242X-2013-280