
An iterative algorithm for the generalized reflexive solutions of the general coupled matrix equations

Abstract

The general coupled matrix equations

$$\sum_{j=1}^{q}A_{ij}X_jB_{ij}=M_i,\qquad i=1,2,\ldots,p$$

(including the generalized coupled Sylvester matrix equations as special cases) have numerous applications in control and system theory. In this paper, an iterative algorithm is constructed to solve the general coupled matrix equations and their optimal approximation problem over a generalized reflexive matrix solution group $(X_1,X_2,\ldots,X_q)$. When the general coupled matrix equations are consistent over generalized reflexive matrices, the generalized reflexive solution can be determined by the iterative algorithm within finitely many iteration steps in the absence of round-off errors. The least Frobenius norm generalized reflexive solution of the general coupled matrix equations can be derived when an appropriate initial matrix group is chosen. Furthermore, the unique optimal approximation generalized reflexive solution $(\hat X_1,\hat X_2,\ldots,\hat X_q)$ to a given matrix group $(X_1^0,X_2^0,\ldots,X_q^0)$ in the Frobenius norm can be obtained by finding the least-norm generalized reflexive solution $(\tilde X_1^*,\tilde X_2^*,\ldots,\tilde X_q^*)$ of the corresponding general coupled matrix equations $\sum_{j=1}^{q}A_{ij}\tilde X_jB_{ij}=\tilde M_i$, $i=1,2,\ldots,p$, where $\tilde X_j=X_j-X_j^0$ and $\tilde M_i=M_i-\sum_{j=1}^{q}A_{ij}X_j^0B_{ij}$. A numerical example is given to illustrate the effectiveness of the proposed iterative algorithm.

MSC: 15A18, 15A57, 65F15, 65F20.

1 Introduction

Let $P\in\mathbb{R}^{m\times m}$ and $Q\in\mathbb{R}^{n\times n}$ be two real generalized reflection matrices, i.e., $P^{T}=P$, $P^{2}=I_m$, $Q^{T}=Q$, $Q^{2}=I_n$, where $I_n$ denotes the identity matrix of order $n$. A matrix $A\in\mathbb{R}^{m\times n}$ is called a generalized reflexive matrix with respect to the matrix pair $(P,Q)$ if $PAQ=A$. The set of all $m\times n$ real generalized reflexive matrices with respect to a matrix pair $(P,Q)$ is denoted by $\mathbb{R}_r^{m\times n}(P,Q)$. We denote by the superscript $T$ the transpose of a matrix. In the matrix space $\mathbb{R}^{m\times n}$, we define the inner product $\langle A,B\rangle=\operatorname{tr}(B^{T}A)$ for all $A,B\in\mathbb{R}^{m\times n}$; $\|A\|_F$ represents the Frobenius norm of $A$; $R(A)$ represents the column space of $A$; $\operatorname{vec}(\cdot)$ represents the vec operator, i.e., $\operatorname{vec}(A)=(a_1^{T},a_2^{T},\ldots,a_n^{T})^{T}\in\mathbb{R}^{mn}$ for the matrix $A=(a_1,a_2,\ldots,a_n)\in\mathbb{R}^{m\times n}$, $a_i\in\mathbb{R}^{m}$, $i=1,2,\ldots,n$; and $A\otimes B$ stands for the Kronecker product of the matrices $A$ and $B$.
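The defining identity $PAQ=A$ is easy to check numerically. The following is a minimal NumPy sketch (the $\pm1$ diagonal reflection matrices and the helper name `reflexive_part` are our own illustrative choices, not from the paper) that projects an arbitrary matrix onto $\mathbb{R}_r^{m\times n}(P,Q)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small generalized reflection matrices: symmetric and involutory.
P = np.diag([1.0, -1.0, 1.0])           # P^T = P, P @ P = I_3
Q = np.diag([1.0, 1.0, -1.0, -1.0])     # Q^T = Q, Q @ Q = I_4

def reflexive_part(A, P, Q):
    """Project A onto the generalized reflexive matrices w.r.t. (P, Q),
    i.e., the matrices X satisfying P @ X @ Q = X."""
    return 0.5 * (A + P @ A @ Q)

A = rng.standard_normal((3, 4))
X = reflexive_part(A, P, Q)
assert np.allclose(P @ X @ Q, X)                 # X is generalized reflexive
assert np.allclose(reflexive_part(X, P, Q), X)   # the projection is idempotent
```

Because $P^2=I_m$ and $Q^2=I_n$, any matrix of the form $\tfrac12(A+PAQ)$ satisfies $PXQ=X$; this symmetrization is exactly the device used later in the construction of $S_j(k)$ in Algorithm 2.1.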

Least-squares-based iterative algorithms are very important in system identification, parameter estimation, and signal processing; they include the recursive least squares (RLS) and iterative least squares (ILS) methods for solving matrix equations such as the Lyapunov matrix equation, Sylvester matrix equations, and coupled matrix equations. For example, gradient-based iterative (GI) methods [1–5] and least-squares-based iterative methods [3, 4, 6], which have high computational efficiency for solving (coupled) matrix equations, are built on the hierarchical identification principle, which regards the unknown matrix as the system parameter matrix to be identified, and exhibit good stability. In this paper, we consider the following two problems.

Problem I Let $P_j\in\mathbb{R}^{m_j\times m_j}$ and $Q_j\in\mathbb{R}^{n_j\times n_j}$ be generalized reflection matrices. For given matrices $A_{ij}\in\mathbb{R}^{r_i\times m_j}$, $B_{ij}\in\mathbb{R}^{n_j\times s_i}$, and $M_i\in\mathbb{R}^{r_i\times s_i}$, find a generalized reflexive matrix solution group $(X_1,X_2,\ldots,X_q)$ with $X_j\in\mathbb{R}_r^{m_j\times n_j}(P_j,Q_j)$ such that

$$\sum_{j=1}^{q}A_{ij}X_jB_{ij}=M_i,\qquad i=1,2,\ldots,p.$$
(1)

Problem II When Problem I is consistent, let $S_E$ denote the set of generalized reflexive solution groups of Problem I, i.e.,

$$S_E=\Bigl\{(X_1,X_2,\ldots,X_q)\Bigm|\sum_{j=1}^{q}A_{ij}X_jB_{ij}=M_i,\ i=1,2,\ldots,p,\ X_j\in\mathbb{R}_r^{m_j\times n_j}(P_j,Q_j)\Bigr\}.$$

For a given generalized reflexive matrix group

$$(X_1^0,X_2^0,\ldots,X_q^0)\in\mathbb{R}_r^{m_1\times n_1}(P_1,Q_1)\times\mathbb{R}_r^{m_2\times n_2}(P_2,Q_2)\times\cdots\times\mathbb{R}_r^{m_q\times n_q}(P_q,Q_q),$$

find $(\hat X_1,\hat X_2,\ldots,\hat X_q)\in S_E$ such that

$$\sum_{j=1}^{q}\|\hat X_j-X_j^0\|^2=\min_{(X_1,X_2,\ldots,X_q)\in S_E}\Bigl\{\sum_{j=1}^{q}\|X_j-X_j^0\|^2\Bigr\}.$$
(2)

The general coupled matrix equations (1) (including the generalized coupled Sylvester matrix equations as special cases) arise in many areas of control and system theory. Problem II occurs frequently in experiment design; see, for instance, [7].

Many theoretical and numerical results on Eq. (1) and some of its special cases have been obtained. Ding and Chen [1] presented gradient-based iterative algorithms for Eq. (1) with $q=p$ by applying the gradient search principle and the hierarchical identification principle. Wu et al. [8, 9] gave finite iterative solutions to coupled Sylvester-conjugate matrix equations. Wu et al. [10] gave finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns. Jonsson and Kågström [11, 12] proposed recursive blocked algorithms for solving the coupled Sylvester matrix equations and the generalized Sylvester and Lyapunov matrix equations. By extending the idea of the conjugate gradient method, Dehghan and Hajarian [13] constructed an iterative algorithm to solve Eq. (1) with $q=p$ over generalized bisymmetric matrices. Very recently, Huang et al. [14] presented a finite iterative algorithm for the one-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions. Yin et al. [15] presented a finite iterative algorithm for the two-sided and generalized coupled Sylvester matrix equations over reflexive solutions. Zhou et al. [16] gave gradient-based iterative solutions for the coupled matrix equations (1), and a more general case, over general solutions. Li et al. [17] presented a numerical solution of the linear matrix equation $\sum_{j=1}^{p}A_jXB_j=C$ by finite-step iteration. For more results, we refer to [7, 18–36]. However, to our knowledge, the generalized reflexive solution of the more general coupled matrix equations (1) and the optimal approximation generalized reflexive solution have not been derived. In this paper, we consider the generalized reflexive solution of Eq. (1) and the optimal approximation generalized reflexive solution.

This paper is organized as follows. In Section 2, we solve Problem I by constructing an iterative algorithm. The convergence of the proposed algorithm is proved. For an arbitrary initial matrix group, we can obtain a generalized reflexive solution group of Problem I within finite iteration steps in the absence of round-off errors. Furthermore, for a special initial matrix group, we can obtain the least Frobenius norm solutions of Problem I. Then in Section 3, we give the optimal approximate solution group of Problem II by finding the least Frobenius norm generalized reflexive solution group of the corresponding general coupled matrix equations. In Section 4, a numerical example is given to illustrate the effectiveness of our method. Finally, some conclusions are drawn in Section 5.

2 An iterative algorithm for solving Problem I

In this section, we first introduce an iterative algorithm to solve Problem I, then we prove its convergence. In addition, we give the least-norm generalized reflexive solutions of Problem I when an appropriate initial iterative matrix group is chosen.

Algorithm 2.1 Step 1: Input matrices $A_{ij}\in\mathbb{R}^{r_i\times m_j}$, $B_{ij}\in\mathbb{R}^{n_j\times s_i}$, $M_i\in\mathbb{R}^{r_i\times s_i}$, and generalized reflection matrices $P_j\in\mathbb{R}^{m_j\times m_j}$ and $Q_j\in\mathbb{R}^{n_j\times n_j}$, $i=1,\ldots,p$, $j=1,\ldots,q$;

Step 2: Choose an arbitrary matrix group

$$(X_1(1),X_2(1),\ldots,X_q(1))\in\mathbb{R}_r^{m_1\times n_1}(P_1,Q_1)\times\mathbb{R}_r^{m_2\times n_2}(P_2,Q_2)\times\cdots\times\mathbb{R}_r^{m_q\times n_q}(P_q,Q_q).$$

Compute

$$R(1)=\operatorname{diag}\Bigl(M_1-\sum_{l=1}^{q}A_{1l}X_l(1)B_{1l},\,M_2-\sum_{l=1}^{q}A_{2l}X_l(1)B_{2l},\,\ldots,\,M_p-\sum_{l=1}^{q}A_{pl}X_l(1)B_{pl}\Bigr),$$

$$S_j(1)=\frac{1}{2}\Bigl[\sum_{i=1}^{p}A_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(1)B_{il}\Bigr)B_{ij}^{T}+\sum_{i=1}^{p}P_jA_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(1)B_{il}\Bigr)B_{ij}^{T}Q_j\Bigr],\quad j=1,\ldots,q,$$

$$k:=1;$$

Step 3: If $R(k)=0$, then stop: $(X_1(k),X_2(k),\ldots,X_q(k))$ is a solution group of Eq. (1). If $R(k)\neq 0$ but $S_j(k)=0$ for all $j=1,\ldots,q$, then stop: Eq. (1) is not consistent over generalized reflexive matrix groups. Otherwise set $k:=k+1$;

Step 4: Compute

$$X_j(k)=X_j(k-1)+\frac{\|R(k-1)\|_F^2}{\sum_{l=1}^{q}\|S_l(k-1)\|_F^2}S_j(k-1),\qquad j=1,\ldots,q,$$

$$\begin{aligned}R(k)&=\operatorname{diag}\Bigl(M_1-\sum_{l=1}^{q}A_{1l}X_l(k)B_{1l},\,\ldots,\,M_p-\sum_{l=1}^{q}A_{pl}X_l(k)B_{pl}\Bigr)\\&=R(k-1)-\frac{\|R(k-1)\|_F^2}{\sum_{l=1}^{q}\|S_l(k-1)\|_F^2}\operatorname{diag}\Bigl(\sum_{l=1}^{q}A_{1l}S_l(k-1)B_{1l},\,\ldots,\,\sum_{l=1}^{q}A_{pl}S_l(k-1)B_{pl}\Bigr),\end{aligned}$$

$$S_j(k)=\frac{1}{2}\Bigl[\sum_{i=1}^{p}A_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(k)B_{il}\Bigr)B_{ij}^{T}+\sum_{i=1}^{p}P_jA_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(k)B_{il}\Bigr)B_{ij}^{T}Q_j\Bigr]+\frac{\|R(k)\|_F^2}{\|R(k-1)\|_F^2}S_j(k-1),\qquad j=1,\ldots,q;$$

Step 5: Go to Step 3.

It is easy to verify that $X_j(k),S_j(k)\in\mathbb{R}_r^{m_j\times n_j}(P_j,Q_j)$ for all $j=1,\ldots,q$ and $k=1,2,\ldots$.
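For readers who wish to experiment with the method, the following is a minimal NumPy sketch of Algorithm 2.1 starting from the zero initial group (the function name `cg_coupled_reflexive`, the stopping rule, and the random test data are our own choices, not the authors' code):

```python
import numpy as np

def cg_coupled_reflexive(A, B, M, P, Q, tol=1e-12, max_iter=500):
    """Sketch of Algorithm 2.1: solve sum_j A[i][j] X_j B[i][j] = M[i]
    over generalized reflexive X_j (P[j] X_j Q[j] = X_j), X_j(1) = 0."""
    p, q = len(M), len(P)
    X = [np.zeros((A[0][j].shape[1], B[0][j].shape[0])) for j in range(q)]

    def residuals(X):
        return [M[i] - sum(A[i][j] @ X[j] @ B[i][j] for j in range(q))
                for i in range(p)]

    def search_dirs(R):
        # S_j = (1/2)[ G_j + P_j G_j Q_j ],  G_j = sum_i A_ij^T R_i B_ij^T
        S = []
        for j in range(q):
            G = sum(A[i][j].T @ R[i] @ B[i][j].T for i in range(p))
            S.append(0.5 * (G + P[j] @ G @ Q[j]))
        return S

    R = residuals(X)
    rn2 = sum(np.linalg.norm(Ri) ** 2 for Ri in R)
    S = search_dirs(R)
    for _ in range(max_iter):
        if np.sqrt(rn2) < tol:
            break                      # R(k) = 0: X is a solution group
        sn2 = sum(np.linalg.norm(Sj) ** 2 for Sj in S)
        if sn2 == 0.0:
            break                      # S_j(k) = 0, R(k) != 0: inconsistent
        alpha = rn2 / sn2              # ||R(k-1)||^2 / sum_l ||S_l(k-1)||^2
        X = [X[j] + alpha * S[j] for j in range(q)]
        R = residuals(X)
        rn2_new = sum(np.linalg.norm(Ri) ** 2 for Ri in R)
        G = search_dirs(R)
        beta = rn2_new / rn2           # ||R(k)||^2 / ||R(k-1)||^2
        S = [G[j] + beta * S[j] for j in range(q)]
        rn2 = rn2_new
    return X, np.sqrt(rn2)

# Consistency check on random data built from a known reflexive solution.
rng = np.random.default_rng(1)
P = [np.diag([1.0, -1.0, 1.0]), np.diag([1.0, -1.0])]
Q = [np.diag([-1.0, 1.0, 1.0]), np.diag([1.0, 1.0, -1.0, -1.0])]
m, n, r, s = [3, 2], [3, 4], [3, 2], [3, 2]
Xtrue = []
for j in range(2):
    Z = rng.standard_normal((m[j], n[j]))
    Xtrue.append(0.5 * (Z + P[j] @ Z @ Q[j]))   # reflexive by construction
A = [[rng.standard_normal((r[i], m[j])) for j in range(2)] for i in range(2)]
B = [[rng.standard_normal((n[j], s[i])) for j in range(2)] for i in range(2)]
M = [sum(A[i][j] @ Xtrue[j] @ B[i][j] for j in range(2)) for i in range(2)]
X, res = cg_coupled_reflexive(A, B, M, P, Q)
```

Since the zero initial group is of the special form required by Theorem 2.2 below (take $K_i=0$), the computed group is, up to round-off, the least Frobenius norm generalized reflexive solution group.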

Lemma 2.1 For the sequences $\{R(k)\}$ and $\{S_j(k)\}$ ($j=1,2,\ldots,q$) generated by Algorithm 2.1 and for $m\ge 2$, we have

$$\operatorname{tr}\bigl((R(s))^{T}R(t)\bigr)=0,\qquad \sum_{j=1}^{q}\operatorname{tr}\bigl((S_j(s))^{T}S_j(t)\bigr)=0,\qquad s,t=1,2,\ldots,m,\ s\neq t.$$
(3)

The proof of Lemma 2.1 is similar to that of Lemma 2.2 in [21] and is omitted.

Lemma 2.2 Suppose $(X_1^*,X_2^*,\ldots,X_q^*)$ is an arbitrary generalized reflexive solution group of Problem I. Then, for any initial generalized reflexive matrix group $(X_1(1),X_2(1),\ldots,X_q(1))$, we have

$$\sum_{j=1}^{q}\operatorname{tr}\bigl((X_j^*-X_j(k))^{T}S_j(k)\bigr)=\|R(k)\|_F^2,\qquad k=1,2,\ldots,$$
(4)

where the sequences $\{X_j(k)\}$, $\{S_j(k)\}$, and $\{R(k)\}$ are generated by Algorithm 2.1.

Proof We prove the conclusion by induction on the positive integer $k$.

For k=1, we have that

$$\begin{aligned}
\sum_{j=1}^{q}\operatorname{tr}\bigl((X_j^{*}-X_j(1))^{T}S_j(1)\bigr)
&=\sum_{j=1}^{q}\operatorname{tr}\Bigl((X_j^{*}-X_j(1))^{T}\cdot\frac{1}{2}\Bigl[\sum_{i=1}^{p}A_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(1)B_{il}\Bigr)B_{ij}^{T}+\sum_{i=1}^{p}P_jA_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(1)B_{il}\Bigr)B_{ij}^{T}Q_j\Bigr]\Bigr)\\
&=\sum_{j=1}^{q}\operatorname{tr}\Bigl((X_j^{*}-X_j(1))^{T}\sum_{i=1}^{p}A_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(1)B_{il}\Bigr)B_{ij}^{T}\Bigr)\\
&=\sum_{i=1}^{p}\operatorname{tr}\Bigl(\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(1)B_{il}\Bigr)^{T}\sum_{j=1}^{q}A_{ij}\bigl(X_j^{*}-X_j(1)\bigr)B_{ij}\Bigr)\\
&=\operatorname{tr}\Bigl(\operatorname{diag}\Bigl(\Bigl(M_1-\sum_{l=1}^{q}A_{1l}X_l(1)B_{1l}\Bigr)^{T},\ldots,\Bigl(M_p-\sum_{l=1}^{q}A_{pl}X_l(1)B_{pl}\Bigr)^{T}\Bigr)\times\operatorname{diag}\Bigl(\sum_{j=1}^{q}A_{1j}\bigl(X_j^{*}-X_j(1)\bigr)B_{1j},\ldots,\sum_{j=1}^{q}A_{pj}\bigl(X_j^{*}-X_j(1)\bigr)B_{pj}\Bigr)\Bigr)\\
&=\operatorname{tr}\Bigl(\operatorname{diag}\Bigl(\Bigl(M_1-\sum_{l=1}^{q}A_{1l}X_l(1)B_{1l}\Bigr)^{T},\ldots,\Bigl(M_p-\sum_{l=1}^{q}A_{pl}X_l(1)B_{pl}\Bigr)^{T}\Bigr)\times\operatorname{diag}\Bigl(M_1-\sum_{j=1}^{q}A_{1j}X_j(1)B_{1j},\ldots,M_p-\sum_{j=1}^{q}A_{pj}X_j(1)B_{pj}\Bigr)\Bigr)\\
&=\|R(1)\|_F^2.
\end{aligned}$$

(The second equality uses $P_jX_j^{*}Q_j=X_j^{*}$ and $P_jX_j(1)Q_j=X_j(1)$; the fifth uses $\sum_{j=1}^{q}A_{ij}X_j^{*}B_{ij}=M_i$.)

Assume that (4) holds for k=m. When k=m+1, by Algorithm 2.1, we have that

$$\begin{aligned}
&\sum_{j=1}^{q}\operatorname{tr}\bigl((X_j^{*}-X_j(m+1))^{T}S_j(m+1)\bigr)\\
&\quad=\sum_{j=1}^{q}\operatorname{tr}\Bigl((X_j^{*}-X_j(m+1))^{T}\Bigl[\frac{1}{2}\Bigl(\sum_{i=1}^{p}A_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(m+1)B_{il}\Bigr)B_{ij}^{T}+\sum_{i=1}^{p}P_jA_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(m+1)B_{il}\Bigr)B_{ij}^{T}Q_j\Bigr)+\frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2}S_j(m)\Bigr]\Bigr)\\
&\quad=\sum_{j=1}^{q}\operatorname{tr}\Bigl((X_j^{*}-X_j(m+1))^{T}\sum_{i=1}^{p}A_{ij}^{T}\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(m+1)B_{il}\Bigr)B_{ij}^{T}\Bigr)+\frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2}\sum_{j=1}^{q}\operatorname{tr}\bigl((X_j^{*}-X_j(m+1))^{T}S_j(m)\bigr)\\
&\quad=\sum_{i=1}^{p}\operatorname{tr}\Bigl(\Bigl(M_i-\sum_{l=1}^{q}A_{il}X_l(m+1)B_{il}\Bigr)^{T}\sum_{j=1}^{q}A_{ij}\bigl(X_j^{*}-X_j(m+1)\bigr)B_{ij}\Bigr)+\frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2}\sum_{j=1}^{q}\operatorname{tr}\bigl((X_j^{*}-X_j(m))^{T}S_j(m)\bigr)-\frac{\|R(m+1)\|_F^2}{\sum_{j=1}^{q}\|S_j(m)\|_F^2}\sum_{j=1}^{q}\operatorname{tr}\bigl((S_j(m))^{T}S_j(m)\bigr)\\
&\quad=\operatorname{tr}\Bigl(\operatorname{diag}\Bigl(\Bigl(M_1-\sum_{l=1}^{q}A_{1l}X_l(m+1)B_{1l}\Bigr)^{T},\ldots,\Bigl(M_p-\sum_{l=1}^{q}A_{pl}X_l(m+1)B_{pl}\Bigr)^{T}\Bigr)\times\operatorname{diag}\Bigl(M_1-\sum_{j=1}^{q}A_{1j}X_j(m+1)B_{1j},\ldots,M_p-\sum_{j=1}^{q}A_{pj}X_j(m+1)B_{pj}\Bigr)\Bigr)\\
&\qquad\quad+\frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2}\|R(m)\|_F^2-\frac{\|R(m+1)\|_F^2}{\sum_{j=1}^{q}\|S_j(m)\|_F^2}\sum_{j=1}^{q}\|S_j(m)\|_F^2\\
&\quad=\|R(m+1)\|_F^2+\|R(m+1)\|_F^2-\|R(m+1)\|_F^2\\
&\quad=\|R(m+1)\|_F^2.
\end{aligned}$$

(Here the third equality uses $X_j(m+1)=X_j(m)+\frac{\|R(m)\|_F^2}{\sum_{l=1}^{q}\|S_l(m)\|_F^2}S_j(m)$, the fourth uses the induction hypothesis, and the final steps use $\sum_{j=1}^{q}A_{ij}X_j^{*}B_{ij}=M_i$.)

Therefore, (4) holds for k=m+1. Thus (4) holds by the principle of induction. This completes the proof. □
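The base case of identity (4) can be checked numerically. The sketch below (random data of our own choosing, with the zero initial group, so $R_i(1)=M_i$) verifies $\sum_j\operatorname{tr}((X_j^*-X_j(1))^TS_j(1))=\|R(1)\|_F^2$:

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 2, 2
P = [np.diag([1.0, -1.0]), np.diag([-1.0, 1.0, 1.0])]
Q = [np.diag([1.0, 1.0, -1.0]), np.diag([-1.0, 1.0])]
m, n, r, s = [2, 3], [3, 2], [2, 3], [2, 2]

# A reflexive solution group and consistent right-hand sides.
Xstar = []
for j in range(q):
    Z = rng.standard_normal((m[j], n[j]))
    Xstar.append(0.5 * (Z + P[j] @ Z @ Q[j]))
A = [[rng.standard_normal((r[i], m[j])) for j in range(q)] for i in range(p)]
B = [[rng.standard_normal((n[j], s[i])) for j in range(q)] for i in range(p)]
M = [sum(A[i][j] @ Xstar[j] @ B[i][j] for j in range(q)) for i in range(p)]

# With X_j(1) = 0 we have R_i(1) = M_i; form S_j(1) as in Algorithm 2.1.
R1 = M
S1 = []
for j in range(q):
    G = sum(A[i][j].T @ R1[i] @ B[i][j].T for i in range(p))
    S1.append(0.5 * (G + P[j] @ G @ Q[j]))

lhs = sum(np.trace(Xstar[j].T @ S1[j]) for j in range(q))  # sum_j tr((X_j^*-X_j(1))^T S_j(1))
rhs = sum(np.linalg.norm(Ri) ** 2 for Ri in R1)            # ||R(1)||_F^2
assert np.isclose(lhs, rhs)
```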

Remark 2.1 If there exists a positive integer $k$ such that $S_j(k)=0$, $j=1,2,\ldots,q$, but $R(k)\neq 0$, then by Lemma 2.2 Eq. (1) is not consistent over generalized reflexive matrices.

Theorem 2.1 Suppose that Problem I is consistent. Then, for an arbitrary initial matrix group $(X_1(1),X_2(1),\ldots,X_q(1))$ with $X_j(1)\in\mathbb{R}_r^{m_j\times n_j}(P_j,Q_j)$, a generalized reflexive solution group of Problem I can be obtained within finitely many iteration steps in the absence of round-off errors.

Proof Suppose $R(k)\neq 0$ for $k=1,2,\ldots,m$, where $m=\sum_{i=1}^{p}r_is_i$. By Lemma 2.2 and Remark 2.1 we then have $S_j(k)\neq 0$ for all $j=1,2,\ldots,q$ and $k=1,2,\ldots,m$, so we can compute $R(m+1)$ and $(X_1(m+1),X_2(m+1),\ldots,X_q(m+1))$ by Algorithm 2.1.

By Lemma 2.1, we have

$$\operatorname{tr}\bigl((R(m+1))^{T}R(k)\bigr)=0,\qquad k=1,2,\ldots,m,$$

and

$$\operatorname{tr}\bigl((R(k))^{T}R(l)\bigr)=0,\qquad k,l=1,2,\ldots,m,\ k\neq l.$$

It follows that the matrices $R(1),R(2),\ldots,R(m)$ form an orthogonal basis of the matrix subspace

$$S=\bigl\{L\mid L=\operatorname{diag}(L_1,L_2,\ldots,L_p),\ L_i\in\mathbb{R}^{r_i\times s_i},\ i=1,2,\ldots,p\bigr\},$$

whose dimension is exactly $\sum_{i=1}^{p}r_is_i=m$,

which implies that $R(m+1)=0$, i.e., $(X_1(m+1),X_2(m+1),\ldots,X_q(m+1))$ with $X_j(m+1)\in\mathbb{R}_r^{m_j\times n_j}(P_j,Q_j)$ is a generalized reflexive solution group of Problem I. This completes the proof. □

To show the least Frobenius norm generalized reflexive solution of Problem I, we first introduce the following result.

Lemma 2.3 (see [[23], Lemma 2.4])

Suppose that the consistent system of linear equations $Ax=b$ has a solution $x^*\in R(A^{T})$. Then $x^*$ is the unique least-norm solution of the system of linear equations.
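Lemma 2.3 is easy to illustrate numerically: for a consistent system, the solution lying in $R(A^T)$ coincides with the minimum-norm solution returned by an SVD-based least-squares solver. A sketch with random data (full row rank is assumed so that $A^T(AA^T)^{-1}b$ can be formed explicitly):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))         # full row rank almost surely
b = A @ rng.standard_normal(5)          # consistent by construction

x_star = A.T @ np.linalg.solve(A @ A.T, b)    # the solution lying in R(A^T)
x_ln, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm solution (SVD)

assert np.allclose(A @ x_star, b)       # x_star solves the system
assert np.allclose(x_star, x_ln)        # and it is the least-norm solution
```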

By Lemma 2.3, the following result can be obtained.

Theorem 2.2 Suppose that Problem I is consistent. If we choose the initial iteration matrices $X_j(1)=\sum_{i=1}^{p}A_{ij}^{T}K_iB_{ij}^{T}+\sum_{i=1}^{p}P_jA_{ij}^{T}K_iB_{ij}^{T}Q_j$, $j=1,2,\ldots,q$, where $K_i\in\mathbb{R}^{r_i\times s_i}$, $i=1,2,\ldots,p$, are arbitrary matrices (in particular, $X_j(1)=0\in\mathbb{R}_r^{m_j\times n_j}(P_j,Q_j)$), then the solution group $(X_1^*,X_2^*,\ldots,X_q^*)$ generated by Algorithm 2.1 is the unique least Frobenius norm generalized reflexive solution group of Problem I.

Proof The solvability of Eq. (1) over generalized reflexive matrices is equivalent to that of the following system of matrix equations:

$$\begin{cases}\sum_{j=1}^{q}A_{ij}X_jB_{ij}=M_i,& i=1,2,\ldots,p,\\ \sum_{j=1}^{q}A_{ij}P_jX_jQ_jB_{ij}=M_i,& i=1,2,\ldots,p.\end{cases}$$
(5)

Then the system of matrix equations (5) is equivalent to

$$\begin{pmatrix}B_{11}^{T}\otimes A_{11}&\cdots&B_{1q}^{T}\otimes A_{1q}\\ \vdots&&\vdots\\ B_{p1}^{T}\otimes A_{p1}&\cdots&B_{pq}^{T}\otimes A_{pq}\\ B_{11}^{T}Q_1\otimes A_{11}P_1&\cdots&B_{1q}^{T}Q_q\otimes A_{1q}P_q\\ \vdots&&\vdots\\ B_{p1}^{T}Q_1\otimes A_{p1}P_1&\cdots&B_{pq}^{T}Q_q\otimes A_{pq}P_q\end{pmatrix}\begin{pmatrix}\operatorname{vec}(X_1)\\ \vdots\\ \operatorname{vec}(X_q)\end{pmatrix}=\begin{pmatrix}\operatorname{vec}(M_1)\\ \vdots\\ \operatorname{vec}(M_p)\\ \operatorname{vec}(M_1)\\ \vdots\\ \operatorname{vec}(M_p)\end{pmatrix}.$$
(6)
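The equivalence between (5) and (6) rests on the identity $\operatorname{vec}(AXB)=(B^{T}\otimes A)\operatorname{vec}(X)$ for the column-stacking vec operator, which is easy to confirm on random data (the sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

lhs = (A @ X @ B).flatten(order="F")            # vec(A X B), column-stacking
rhs = np.kron(B.T, A) @ X.flatten(order="F")    # (B^T (x) A) vec(X)
assert np.allclose(lhs, rhs)
```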

Let $X_j(1)=\sum_{i=1}^{p}A_{ij}^{T}K_iB_{ij}^{T}+\sum_{i=1}^{p}P_jA_{ij}^{T}K_iB_{ij}^{T}Q_j$, $j=1,2,\ldots,q$, where $K_i\in\mathbb{R}^{r_i\times s_i}$ are arbitrary matrices. Then, since $\operatorname{vec}(A^{T}KB^{T})=(B\otimes A^{T})\operatorname{vec}(K)$,

$$\begin{pmatrix}\operatorname{vec}(X_1(1))\\ \vdots\\ \operatorname{vec}(X_q(1))\end{pmatrix}=\begin{pmatrix}B_{11}\otimes A_{11}^{T}&\cdots&B_{p1}\otimes A_{p1}^{T}&Q_1B_{11}\otimes P_1A_{11}^{T}&\cdots&Q_1B_{p1}\otimes P_1A_{p1}^{T}\\ \vdots&&\vdots&\vdots&&\vdots\\ B_{1q}\otimes A_{1q}^{T}&\cdots&B_{pq}\otimes A_{pq}^{T}&Q_qB_{1q}\otimes P_qA_{1q}^{T}&\cdots&Q_qB_{pq}\otimes P_qA_{pq}^{T}\end{pmatrix}\begin{pmatrix}\operatorname{vec}(K_1)\\ \vdots\\ \operatorname{vec}(K_p)\\ \operatorname{vec}(K_1)\\ \vdots\\ \operatorname{vec}(K_p)\end{pmatrix},$$

where, since $P_j^{T}=P_j$ and $Q_j^{T}=Q_j$, the block matrix above is exactly the transpose of the coefficient matrix of (6). Hence $(\operatorname{vec}(X_1(1))^{T},\ldots,\operatorname{vec}(X_q(1))^{T})^{T}$ lies in the column space of the transpose of the coefficient matrix of (6).

Furthermore, it can be seen that every generalized reflexive iterate $(X_1(k),X_2(k),\ldots,X_q(k))$ generated by Algorithm 2.1 from such an initial group satisfies

$$\begin{pmatrix}\operatorname{vec}(X_1(k))\\ \vdots\\ \operatorname{vec}(X_q(k))\end{pmatrix}\in R(T^{T}),$$

where $T$ denotes the coefficient matrix of (6).

By Lemma 2.3, $(X_1^*,X_2^*,\ldots,X_q^*)$ is the least Frobenius norm solution group of the system of linear equations (6). Since the vec operator is an isomorphism, $(X_1^*,X_2^*,\ldots,X_q^*)$ is the unique least Frobenius norm generalized reflexive solution group of the system of matrix equations (5), and hence of Problem I. □

3 The solution of Problem II

In this section, we show that the optimal approximate generalized reflexive solution group of Problem II to a given generalized reflexive matrix group can be derived by finding the least Frobenius norm generalized reflexive solution group of the corresponding general coupled matrix equations.

When Problem I is consistent, the set $S_E$ of generalized reflexive solution groups of Problem I is nonempty. For a given matrix group $(X_1^0,X_2^0,\ldots,X_q^0)$ with $X_j^0\in\mathbb{R}_r^{m_j\times n_j}(P_j,Q_j)$, $j=1,2,\ldots,q$, we have

$$\sum_{j=1}^{q}A_{ij}X_jB_{ij}=M_i\quad\Longleftrightarrow\quad\sum_{j=1}^{q}A_{ij}\bigl(X_j-X_j^0\bigr)B_{ij}=M_i-\sum_{j=1}^{q}A_{ij}X_j^0B_{ij},\qquad i=1,2,\ldots,p.$$
(7)

Set $\tilde X_j=X_j-X_j^0$ and $\tilde M_i=M_i-\sum_{j=1}^{q}A_{ij}X_j^0B_{ij}$. Then solving Problem II is equivalent to finding the least Frobenius norm generalized reflexive solution group $(\tilde X_1^*,\tilde X_2^*,\ldots,\tilde X_q^*)$ of the corresponding general coupled matrix equations

$$\sum_{j=1}^{q}A_{ij}\tilde X_jB_{ij}=\tilde M_i,\qquad i=1,2,\ldots,p.$$
(8)

Let the initial iteration matrices be

$$\tilde X_j(1)=\sum_{i=1}^{p}A_{ij}^{T}K_iB_{ij}^{T}+\sum_{i=1}^{p}P_jA_{ij}^{T}K_iB_{ij}^{T}Q_j,\qquad j=1,2,\ldots,q,$$

where $K_i\in\mathbb{R}^{r_i\times s_i}$, $i=1,2,\ldots,p$, are arbitrary matrices; in particular, one may take $\tilde X_j(1)=0\in\mathbb{R}_r^{m_j\times n_j}(P_j,Q_j)$, $j=1,2,\ldots,q$. By Algorithm 2.1, we then obtain the least Frobenius norm generalized reflexive solution group $(\tilde X_1^*,\tilde X_2^*,\ldots,\tilde X_q^*)$ of (8). Thus the generalized reflexive solution group of Problem II can be represented as

$$(\hat X_1,\hat X_2,\ldots,\hat X_q)=\bigl(\tilde X_1^*+X_1^0,\ \tilde X_2^*+X_2^0,\ \ldots,\ \tilde X_q^*+X_q^0\bigr).$$
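In the unconstrained case (no reflexivity requirement), this shift-and-solve strategy can be mimicked with a minimum-norm least-squares solve on the vectorized system. The sketch below (a single equation $AXB=M$ with random data of our own choosing) recovers the solution closest to a given $X^0$ in the Frobenius norm:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((4, 2))
Xs = rng.standard_normal((4, 4))
M = A @ Xs @ B                       # consistent: Xs is one solution
X0 = rng.standard_normal((4, 4))     # the matrix group to be approximated

# Shift: M~ = M - A X0 B; solve K vec(X~) = vec(M~) with minimum norm;
# then shift back: X^ = X~ + X0.
K = np.kron(B.T, A)
Mt = M - A @ X0 @ B
xt, *_ = np.linalg.lstsq(K, Mt.flatten(order="F"), rcond=None)
Xhat = xt.reshape(4, 4, order="F") + X0

assert np.allclose(A @ Xhat @ B, M)   # Xhat solves A X B = M
# Xhat is at least as close to X0 as the solution Xs we started from:
assert np.linalg.norm(Xhat - X0) <= np.linalg.norm(Xs - X0) + 1e-8
```

In the paper's setting, Algorithm 2.1 plays the role of `lstsq` here: run on (8) from the special initial group of Theorem 2.2, it returns the least-norm generalized reflexive solution of the shifted system.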

4 A numerical example

In this section, we give a numerical example to illustrate the proposed iterative method. All tests were performed in MATLAB 7.8.

Example 4.1 Consider the generalized reflexive solution of the general coupled matrix equations

$$\begin{cases}A_{11}X_1B_{11}+A_{12}X_2B_{12}=M_1,\\ A_{21}X_1B_{21}+A_{22}X_2B_{22}=M_2,\end{cases}$$
(9)

where

$$A_{11}=\begin{pmatrix}1&3&5&7&9\\2&0&4&6&1\\0&2&9&6&8\\3&6&2&2&3\\5&5&22&1&11\\8&4&6&9&9\end{pmatrix},\qquad B_{11}=\begin{pmatrix}4&8&5&4\\1&5&2&3\\3&9&2&6\\2&7&8&1\end{pmatrix},$$

$$A_{12}=\begin{pmatrix}6&5&7&9\\2&4&6&11\\9&12&3&8\\13&6&4&15\\5&15&13&11\\2&9&6&9\end{pmatrix},\qquad B_{12}=\begin{pmatrix}7&1&8&6\\4&5&2&3\\3&12&0&8\\1&6&9&4\\5&8&2&9\end{pmatrix},$$

$$A_{21}=\begin{pmatrix}14&5&1&7&1\\2&3&2&5&4\\13&4&2&3&6\\8&1&5&4&8\end{pmatrix},\qquad B_{21}=\begin{pmatrix}1&3&5&8&2\\11&5&6&2&5\\13&2&7&9&7\\9&6&5&12&1\end{pmatrix},$$

$$A_{22}=\begin{pmatrix}1&2&5&8\\5&5&7&3\\2&4&9&6\\3&7&12&11\end{pmatrix},\qquad B_{22}=\begin{pmatrix}2&4&8&5&4\\7&1&5&2&3\\6&3&9&2&6\\5&2&7&8&1\\1&4&3&2&6\end{pmatrix},$$

$$M_1=\begin{pmatrix}941&615&299&2420\\701&153&1088&1350\\746&1846&1931&1352\\1298&1133&305&2594\\1953&4224&2945&3630\\698&1980&1488&2192\end{pmatrix},\qquad M_2=\begin{pmatrix}1242&1986&1365&1150&2727\\1569&477&1215&439&384\\2845&1447&1795&818&1722\\3924&687&3806&2488&1229\end{pmatrix}.$$

Let

$$P_1=\begin{pmatrix}0&0&0&0&1\\0&0&0&1&0\\0&0&1&0&0\\0&1&0&0&0\\1&0&0&0&0\end{pmatrix},\qquad Q_1=\begin{pmatrix}0&0&1&0\\0&0&0&1\\1&0&0&0\\0&1&0&0\end{pmatrix},$$

$$P_2=\begin{pmatrix}0&0&0&1\\0&1&0&0\\0&0&1&0\\1&0&0&0\end{pmatrix},\qquad Q_2=\begin{pmatrix}0&0&0&1&0\\0&0&0&0&1\\0&0&1&0&0\\1&0&0&0&0\\0&1&0&0&0\end{pmatrix}$$

be generalized reflection matrices.

We will find the generalized reflexive solution of Eq. (9) by using Algorithm 2.1. It can be verified that Eq. (9) is consistent over generalized reflexive matrices and the solutions are

$$X_1=\begin{pmatrix}2&9&2&5\\3&1&11&1\\7&3&7&3\\11&1&3&1\\2&5&2&9\end{pmatrix},\qquad X_2=\begin{pmatrix}14&16&1&3&4\\9&7&0&9&7\\3&8&8&3&8\\3&4&1&14&16\end{pmatrix}.$$

Because of round-off errors, the residual $R(k)$ is in general nonzero during the iteration, $k=1,2,\ldots$. For any chosen small tolerance $\varepsilon>0$, e.g., $\varepsilon=1.0000\mathrm{e}{-}010$, we stop the iteration as soon as $\|R(k)\|<\varepsilon$ and regard $X_1(k)$ and $X_2(k)$ as generalized reflexive solutions of Eq. (9). Let the initial iteration matrices be as follows:

$$X_1(1)=0\in\mathbb{R}^{5\times 4},\qquad X_2(1)=0\in\mathbb{R}^{4\times 5}.$$

By Algorithm 2.1, we have

$$X_1=X_1(30)=\begin{pmatrix}2.0000&9.0000&2.0000&5.0000\\3.0000&1.0000&11.0000&1.0000\\7.0000&3.0000&7.0000&3.0000\\11.0000&1.0000&3.0000&1.0000\\2.0000&5.0000&2.0000&9.0000\end{pmatrix},$$

$$X_2=X_2(30)=\begin{pmatrix}14.0000&16.0000&1.0000&3.0000&4.0000\\9.0000&7.0000&0&9.0000&7.0000\\3.0000&8.0000&8.0000&3.0000&8.0000\\3.0000&4.0000&1.0000&14.0000&16.0000\end{pmatrix},$$

$$\|R(30)\|=6.4815\mathrm{e}{-}012<\varepsilon.$$

Thus we obtain the generalized reflexive solution of Eq. (9). The relative error and the residual of the solution are shown in Figure 1, where the relative error and the residual are defined by

$$\mathrm{RE}_k=\frac{\|X_1(k)-X_1\|+\|X_2(k)-X_2\|}{\|X_1\|+\|X_2\|},\qquad R_k=\|R(k)\|.$$

Figure 1 The relative error of the solutions and the residual for Example 4.1 with $X_1(1)=0$, $X_2(1)=0$.

Let $S_E$ denote the set of all generalized reflexive solution groups of Eq. (9). For two given generalized reflexive matrices

$$X_1^0=\begin{pmatrix}3&1&2&2\\3&2&0&0\\1&3&1&3\\0&0&3&2\\2&2&3&1\end{pmatrix},\qquad X_2^0=\begin{pmatrix}2&4&2&2&0\\1&3&0&1&3\\5&2&2&5&2\\2&0&2&2&4\end{pmatrix},$$

we will find the optimal approximate generalized reflexive solution group to the given matrix group $(X_1^0,X_2^0)$ in $S_E$ in the Frobenius norm, i.e., find $(\hat X_1,\hat X_2)\in S_E$ such that

$$\|\hat X_1-X_1^0\|+\|\hat X_2-X_2^0\|=\min_{(X_1,X_2)\in S_E}\bigl(\|X_1-X_1^0\|+\|X_2-X_2^0\|\bigr).$$

Let $\tilde X_1=X_1-X_1^0$, $\tilde X_2=X_2-X_2^0$, $\tilde M_1=M_1-A_{11}X_1^0B_{11}-A_{12}X_2^0B_{12}$, and $\tilde M_2=M_2-A_{21}X_1^0B_{21}-A_{22}X_2^0B_{22}$. By the method of Section 3, we can obtain the least-norm generalized reflexive solution group $(\tilde X_1^*,\tilde X_2^*)$ of the matrix equations $A_{11}\tilde X_1B_{11}+A_{12}\tilde X_2B_{12}=\tilde M_1$, $A_{21}\tilde X_1B_{21}+A_{22}\tilde X_2B_{22}=\tilde M_2$ by choosing the initial iteration matrices $\tilde X_1(1)=0$ and $\tilde X_2(1)=0$. Then, by Algorithm 2.1, we have

$$\tilde X_1^*=\tilde X_1(29)=\begin{pmatrix}5.0000&10.0000&0.0000&3.0000\\0.0000&3.0000&11.0000&1.0000\\6.0000&6.0000&6.0000&6.0000\\11.0000&1.0000&0.0000&3.0000\\0.0000&3.0000&5.0000&10.0000\end{pmatrix},$$

$$\tilde X_2^*=\tilde X_2(29)=\begin{pmatrix}12.0000&12.0000&1.0000&1.0000&4.0000\\8.0000&4.0000&0&8.0000&4.0000\\8.0000&6.0000&10.0000&8.0000&6.0000\\1.0000&4.0000&1.0000&12.0000&12.0000\end{pmatrix},$$

$$\|R(29)\|=1.4095\mathrm{e}{-}011<\varepsilon=1.0000\mathrm{e}{-}010,$$

and the optimal approximate generalized reflexive solution to the matrix group ( X 1 0 , X 2 0 ) in Frobenius norm is

$$\hat X_1=\tilde X_1^*+X_1^0=\begin{pmatrix}2.0000&9.0000&2.0000&5.0000\\3.0000&1.0000&11.0000&1.0000\\7.0000&3.0000&7.0000&3.0000\\11.0000&1.0000&3.0000&1.0000\\2.0000&5.0000&2.0000&9.0000\end{pmatrix},$$

$$\hat X_2=\tilde X_2^*+X_2^0=\begin{pmatrix}14.0000&16.0000&1.0000&3.0000&4.0000\\9.0000&7.0000&0&9.0000&7.0000\\3.0000&8.0000&8.0000&3.0000&8.0000\\3.0000&4.0000&1.0000&14.0000&16.0000\end{pmatrix}.$$

The relative error and the residual of the solution are shown in Figure 2, where

$$\mathrm{RE}_k=\frac{\|\tilde X_1(k)+X_1^0-X_1\|+\|\tilde X_2(k)+X_2^0-X_2\|}{\|X_1\|+\|X_2\|},\qquad R_k=\|R(k)\|.$$

Figure 2 The relative error of the solutions and the residual for Example 4.1.

From Figures 1 and 2, we can see that the iterative solutions generated by Algorithm 2.1 can quickly converge to the exact generalized reflexive solution of Eq. (9). This illustrates that our iterative algorithm is quite effective.

5 Conclusions

In this paper, an iterative algorithm is presented to solve the general coupled matrix equations $\sum_{j=1}^{q}A_{ij}X_jB_{ij}=M_i$ ($i=1,2,\ldots,p$) and their optimal approximation problem over generalized reflexive matrices. When the general coupled matrix equations are consistent over generalized reflexive matrices, for any initial generalized reflexive matrix group, the generalized reflexive solution group can be obtained by the iterative algorithm within finitely many iteration steps in the absence of round-off errors. When a special kind of initial iteration matrix group is chosen, the unique least-norm generalized reflexive solution of the general coupled matrix equations can be derived. Furthermore, the optimal approximate generalized reflexive solution of the general coupled matrix equations to a given generalized reflexive matrix group can be derived by finding the least-norm generalized reflexive solution of the corresponding new general coupled matrix equations. Finally, a numerical example is given to show that our iterative algorithm is quite effective.

References

  1. Ding F, Chen T: On iterative solutions of general coupled matrix equations. SIAM J. Control Optim. 2006, 44(6):2269–2284. 10.1137/S0363012904441350


  2. Ding F, Chen T: Gradient based iterative algorithms for solving a class of matrix equations. IEEE Trans. Autom. Control 2005, 50(8):1216–1221.


  3. Ding F, Liu PX, Ding J: Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle. Appl. Math. Comput. 2008, 197(1):41–50. 10.1016/j.amc.2007.07.040


  4. Ding J, Liu Y, Ding F: Iterative solutions to matrix equations of the form $A_iXB_i=F_i$. Comput. Math. Appl. 2010, 59(11):3500–3507.


  5. Xie L, Ding J, Ding F: Gradient based iterative solutions for general linear matrix equations. Comput. Math. Appl. 2009, 58(7):1441–1448.


  6. Ding F, Chen T: Iterative least-squares solutions of coupled Sylvester matrix equations. Syst. Control Lett. 2005, 54(2):95–107. 10.1016/j.sysconle.2004.06.008


  7. Wang QW: Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations. Comput. Math. Appl. 2005, 49(5–6):641–650.


  8. Wu AG, Feng G, Duan GR, Wu WJ: Iterative solutions to coupled Sylvester-conjugate matrix equations. Comput. Math. Appl. 2010, 60(1):54–66.


  9. Wu AG, Li B, Zhang Y, Duan GR: Finite iterative solutions to coupled Sylvester-conjugate matrix equations. Appl. Math. Model. 2011, 35(3):1065–1080. 10.1016/j.apm.2010.07.053


  10. Wu AG, Feng G, Duan GR, Wu WJ: Finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns. Math. Comput. Model. 2010, 52(9–10):1463–1478. 10.1016/j.mcm.2010.06.010


  11. Jonsson I, Kågström B: Recursive blocked algorithms for solving triangular systems - part I: one-sided and coupled Sylvester-type matrix equations. ACM Trans. Math. Softw. 2002, 28(4):392–415. 10.1145/592843.592845


  12. Jonsson I, Kågström B: Recursive blocked algorithms for solving triangular systems - part II: two-sided and generalized Sylvester and Lyapunov matrix equations. ACM Trans. Math. Softw. 2002, 28(4):416–435. 10.1145/592843.592846


  13. Dehghan M, Hajarian M: The general coupled matrix equations over generalized bisymmetric matrices. Linear Algebra Appl. 2010, 432(6):1531–1552. 10.1016/j.laa.2009.11.014


  14. Huang GX, Wu N, Yin F, Zhou ZL, Guo K: Finite iterative algorithms for solving generalized coupled Sylvester systems - part I: one-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions. Appl. Math. Model. 2012, 36(4):1589–1603. 10.1016/j.apm.2011.09.027


  15. Yin F, Huang GX, Chen DQ: Finite iterative algorithms for solving generalized coupled Sylvester systems - part II: two-sided and generalized coupled Sylvester matrix equations over reflexive solutions. Appl. Math. Model. 2012, 36(4):1604–1614. 10.1016/j.apm.2011.09.025


  16. Zhou B, Duan GR, Li ZY: Gradient based iterative algorithm for solving coupled matrix equations. Syst. Control Lett. 2009, 58(5):327–333. 10.1016/j.sysconle.2008.12.004


  17. Li ZY, Zhou B, Wang Y, Duan GR: Numerical solution to linear matrix equation by finite steps iteration. IET Control Theory Appl. 2010, 4(7):1245–1253. 10.1049/iet-cta.2009.0015


  18. Cai J, Chen GX: An iterative algorithm for the least squares bisymmetric solutions of the matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$. Math. Comput. Model. 2009, 50(7–8):1237–1244. 10.1016/j.mcm.2009.07.004


  19. Chen D, Yin F, Huang G: An iterative algorithm for the generalized reflexive solution of the matrix equations $AXB=E$, $CXD=F$. J. Appl. Math. 2012. doi:10.1155/2012/492951


  20. Ding F, Chen T: Hierarchical gradient-based identification of multivariable discrete-time systems. Automatica 2005, 41(2):315–325. 10.1016/j.automatica.2004.10.010


  21. Ding F, Chen T: Hierarchical least squares identification methods for multivariable systems. IEEE Trans. Autom. Control 2005, 50(3):397–402.


  22. Ding F, Chen T: Hierarchical identification of lifted state-space models for general dual-rate systems. IEEE Trans. Circuits Syst. I, Regul. Pap. 2005, 52(6):1179–1187.


  23. Huang GX, Yin F, Guo K: An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation $AXB=C$. J. Comput. Appl. Math. 2008, 212(2):231–244. 10.1016/j.cam.2006.12.005


  24. Liao AP, Lei Y: Least-squares solution with the minimum-norm for the matrix equation $(AXB,GXH)=(C,D)$. Comput. Math. Appl. 2005, 50(3–4):539–549.


  25. Peng ZH, Hu XY, Zhang L: An efficient algorithm for the least-squares reflexive solution of the matrix equation $A_1XB_1=C_1$, $A_2XB_2=C_2$. Appl. Math. Comput. 2006, 181(2):988–999. 10.1016/j.amc.2006.01.071


  26. Wang QW: The general solution to a system of real quaternion matrix equations. Comput. Math. Appl. 2005, 49(5–6):665–675.


  27. Wang QW, Li CK: Ranks and the least-norm of the general solution to a system of quaternion matrix equations. Linear Algebra Appl. 2009, 430(5–6):1626–1640. 10.1016/j.laa.2008.05.031


  28. Yin F, Huang GX: An iterative algorithm for the least squares generalized reflexive solutions of the matrix equations $AXB=E$, $CXD=F$. Abstr. Appl. Anal. 2012, 2012:1–18.


  29. Yin F, Huang G: An iterative algorithm for the generalized reflexive solutions of the generalized coupled Sylvester matrix equations. J. Appl. Math. 2012, 2012: 1–28.


  30. Zhou B, Li ZY, Duan GR, Wang Y: Weighted least squares solutions to general coupled Sylvester matrix equations. J. Comput. Appl. Math. 2009, 224(2):759–776. 10.1016/j.cam.2008.06.014


  31. Cheng G, Tan Q, Wang Z: Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2013, 2013: Article ID 65. doi:10.1186/1029-242X-2013-65


  32. Liu Y, Sheng J, Ding R: Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems. Comput. Math. Appl. 2010, 59(8):2615–2627.


  33. Liu Y, Xiao Y, Zhao X: Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model. Appl. Math. Comput. 2009, 215(4):1477–1483. 10.1016/j.amc.2009.07.012


  34. Ding F, Liu Y, Bao G: Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems. Proc. Inst. Mech. Eng., Part I, J. Syst. Control Eng. 2012, 226(1):43–55. 10.1177/0959651811409491


  35. Ding F, Liu G, Liu XP: Parameter estimation with scarce measurements. Automatica 2011, 47(8):1646–1655. 10.1016/j.automatica.2011.05.007


  36. Xie L, Liu Y, Yang H: Gradient based and least squares based iterative algorithms for matrix equations $AXB+CX^{T}D=F$. Appl. Math. Comput. 2010, 217(5):2191–2199. 10.1016/j.amc.2010.07.019



Acknowledgements

The authors are grateful to the anonymous referee for the constructive and helpful comments and Professor R Agarwal for all the communications. This work was partially supported by National Natural Science Fund (41272363), Open Fund of Geomathematics Key Laboratory of Sichuan Province (scsxdz2012001, scsxdz2012002), Key Natural Science Foundation of Sichuan Education Department (12ZA008, 12ZB289) and the young scientific research backbone teachers of CDUT.

Author information


Corresponding author

Correspondence to Feng Yin.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors have the same contributions on this paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Yin, F., Guo, K. & Huang, GX. An iterative algorithm for the generalized reflexive solutions of the general coupled matrix equations. J Inequal Appl 2013, 280 (2013). https://doi.org/10.1186/1029-242X-2013-280
