
Perturbation analysis of the stochastic algebraic Riccati equation

Abstract

In this paper we study a general class of stochastic algebraic Riccati equations (SARE) arising from the indefinite linear quadratic control and stochastic H ∞ problems. Using the Brouwer fixed point theorem, we provide sufficient conditions for the existence of a stabilizing solution of the perturbed SARE. We obtain a theoretical perturbation bound that accurately measures the relative error in the exact stabilizing solution of the SARE. Moreover, we slightly modify the condition theory developed by Rice and provide explicit expressions of the condition number with respect to the stabilizing solution of the SARE. A numerical example illustrates the sharpness of the perturbation bound and its correspondence with the condition number.

MSC: Primary 15A24; 65F35; secondary 47H10, 47H14.

1 Introduction

In this paper we consider a general class of continuous-time stochastic algebraic Riccati equations

A ⊤ X + X A + C ⊤ X C − ( X B + C ⊤ X D + S ) ( R + D ⊤ X D ) − 1 ( B ⊤ X + D ⊤ X C + S ⊤ ) + H = 0 ,
(1a)
R+ D ⊤ XD≻0,
(1b)

where A ∈ R^{n×n}, C ∈ R^{n×n}, B ∈ R^{n×m}, D ∈ R^{n×m} and S ∈ R^{n×m}. Moreover, H ∈ R^{n×n} and R ∈ R^{m×m} are symmetric matrices. Here we denote M≻0 (respectively, M⪰0) if M is symmetric positive definite (respectively, positive semidefinite). The unknown X ∈ R^{n×n} is a symmetric solution to SARE (1a)-(1b). Let S^n be the set of all symmetric n×n real matrices. For any X,Y ∈ S^n, we write X⪰Y if X−Y⪰0.

In essence, SARE (1a)-(1b) is a rational Riccati-type matrix equation associated with the operator R:domR→ S n

R(X)=P(X)−S(X)Q ( X ) − 1 S ( X ) ⊤ ,

where the affine linear operators P: S n → S n , Q: S n → S m , S: S n → R n × m , and domR are defined by

P ( X ) = A ⊤ X + X A + C ⊤ X C + H , Q ( X ) = R + D ⊤ X D , S ( X ) = X B + C ⊤ X D + S , dom R = { X ∈ S n ∣ Q ( X ) ≻ 0 } .
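For concreteness, these operators are straightforward to evaluate numerically. The following minimal NumPy sketch (the function name sare_residual and its argument ordering are illustrative, not from the original paper) forms P(X), Q(X) and S(X), checks membership in dom R, and returns R(X) together with the feedback matrix F = −Q(X)^{−1}S(X)⊤ that appears throughout the paper.

```python
import numpy as np

def sare_residual(X, A, B, C, D, S, R, H):
    """Evaluate R(X) = P(X) - S(X) Q(X)^{-1} S(X)^T for symmetric X in dom(R)."""
    P = A.T @ X + X @ A + C.T @ X @ C + H            # P(X)
    Q = R + D.T @ X @ D                               # Q(X), must be positive definite
    Sx = X @ B + C.T @ X @ D + S                      # S(X)
    # membership in dom(R): Q(X) > 0
    if np.min(np.linalg.eigvalsh((Q + Q.T) / 2)) <= 0:
        raise ValueError("X is not in dom(R): Q(X) is not positive definite")
    F = -np.linalg.solve(Q, Sx.T)                     # feedback F = -Q(X)^{-1} S(X)^T
    return P + Sx @ F, F                              # R(X) = P(X) + S(X) F, and F
```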

We say that X is the maximal solution (or the greatest solution) of SARE (1a)-(1b) if it satisfies (1a)-(1b) and X⪰Y for any Y ∈ S^n satisfying R(Y)⪰0 and (1b), i.e., X is the maximal solution of R(X)⪰0 with the constraint (1b). Furthermore, it is easily seen that SARE (1a)-(1b) also contains the continuous-time algebraic Riccati equation (CARE)

A ⊤ X+XA−XB R − 1 B ⊤ X+H=0
(2)

with R≻0, C=0, D=0 and S=0, and the discrete-time algebraic Riccati equation (DARE)

X− C ⊤ XC+ ( C ⊤ X D + S ) ( R + D ⊤ X D ) − 1 ( D ⊤ X C + S ⊤ ) −H=0
(3)

with A = −(1/2)I and B = 0, as special cases.

Matrix equations of the type (1a)-(1b) are encountered in the indefinite linear quadratic (LQ) control problem [1] and in the disturbance attenuation problem, which in the deterministic case corresponds to H ∞ control theory, for linear stochastic systems with both state- and input-dependent white noise. For example, see [2–4]. For simplicity, we only consider a one-dimensional Wiener process of white noise in this paper; it is straightforward but tedious to extend all perturbation results presented in this paper to the multi-dimensional case. In the aforementioned applications of linear stochastic systems, a symmetric solution X, called a stabilizing solution, to SARE (1a)-(1b) ought to be determined for the design of optimal controllers. This stabilizing solution plays a very important role in many applications of linear system control theory. The definition of a stabilizing solution to SARE (1a)-(1b) is given as follows. (See also [[3], Definition 5.2].)

Definition 1.1 Let X∈ S n be a solution to SARE (1a)-(1b), Φ=A+BF and Ψ=C+DF, where F=−Q ( X ) − 1 S ( X ) ⊤ . The matrix X is called a stabilizing solution for ℛ if the spectrum of the associated operator L c with respect to X defined by

L c (W)= Φ ⊤ W+WΦ+ Ψ ⊤ WΨ,W∈ S n ,
(4)

is contained in the open left half plane, i.e., σ( L c )⊂ C − .
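Definition 1.1 can be tested numerically by forming a Kronecker-product matrix representation of L_c (made explicit in Section 5) and checking that all of its eigenvalues lie in the open left half plane. The sketch below is illustrative only; the helper name is_stabilizing is not from the paper.

```python
import numpy as np

def is_stabilizing(X, A, B, C, D, S, R):
    """Check Definition 1.1: sigma(L_c) lies in the open left half plane."""
    Q = R + D.T @ X @ D
    Sx = X @ B + C.T @ X @ D + S
    F = -np.linalg.solve(Q, Sx.T)                     # F = -Q(X)^{-1} S(X)^T
    Phi, Psi = A + B @ F, C + D @ F
    n = A.shape[0]
    I = np.eye(n)
    # matrix representation of L_c(W) = Phi^T W + W Phi + Psi^T W Psi acting on vec(W)
    Lc = np.kron(I, Phi.T) + np.kron(Phi.T, I) + np.kron(Psi.T, Psi.T)
    return np.all(np.linalg.eigvals(Lc).real < 0)
```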

Note that if C=D=0 in (1a)-(1b), then it is easily seen from Definition 1.1 that the matrix X∈ S n is a stabilizing solution to SARE (1a)-(1b) or, equivalently, CARE (2) if and only if σ(Φ)⊂ C − . Therefore, Definition 1.1 is a natural generalization of the definition of a stabilizing solution to CARE (2) in classical linear control theory. Moreover, a necessary and sufficient condition for the existence of the stabilizing solution to a more general SARE is derived in Theorem 7.2 of [3]. See also [[1], Theorem 10]. In this case, it is also shown that if SARE (1a)-(1b) has a stabilizing solution X∈dom(R), then it is necessarily a maximal solution and thus unique [1, 3].

The standard CARE (2) and DARE (3) are widely studied and play very important roles in both classical LQ and H ∞ control problems for deterministic linear systems [5–7]. Over the past four decades, an extensive collection of numerical methods has been studied and developed for solving the CARE and DARE (see [8–10] and the references therein). There are two major methodologies among these numerical methods. One is the so-called Schur method or invariant subspace method, which was first proposed by Laub [11]. In this methodology, the unique non-negative definite stabilizing solution of the CARE (or DARE) is obtained by computing the stable invariant subspace (or deflating subspace) of the associated Hamiltonian matrix (or symplectic matrix pencil). Some variants of the invariant subspace method, which preserve the structure of the Hamiltonian matrix (or symplectic matrix pencil) by special orthogonal transformations throughout the computational process, are considered by Mehrmann and his coauthors [12–18]. The other methodology comprises iterative methods, for example, Newton's method [6], the matrix sign function method [19], the disk function method [20], and structured doubling algorithms [21, 22] and references therein. So far there have been no results on applying invariant subspace methods to SARE (1a)-(1b), since the structures of an associated Hamiltonian matrix or symplectic matrix pencil are not available. Only iterative methods, e.g., Newton's method [3] and the interior-point algorithm presented in [1], can be applied to compute numerical solutions of SARE (1a)-(1b). Recently, normwise residual bounds were proposed for assessing the accuracy of a computed solution to SARE (1a)-(1b) [23].

Due to the effect of roundoff errors or the measurement errors of experimental data, small perturbations are often incorporated in the coefficient matrices of SARE (1a)-(1b), and hence we obtain the perturbed SARE

A ˜ ⊤ X ˜ + X ˜ A ˜ + C ˜ ⊤ X ˜ C ˜ − ( X ˜ B ˜ + C ˜ ⊤ X ˜ D ˜ + S ˜ ) ( R ˜ + D ˜ ⊤ X ˜ D ˜ ) − 1 ( B ˜ ⊤ X ˜ + D ˜ ⊤ X ˜ C ˜ + S ˜ ⊤ ) + H ˜ = 0 ,
(5a)
R ˜ + D ˜ ⊤ X ˜ D ˜ ≻0,
(5b)

where Ã, B̃, C̃, D̃, H̃, R̃ and S̃ are perturbed coefficient matrices of compatible sizes. The main question is under what conditions perturbed SARE (5a)-(5b) still has a stabilizing solution X̃ ∈ S^n. Moreover, how sensitive is the stabilizing solution X ∈ dom(R) of original SARE (1a)-(1b) with respect to small changes in the coefficient matrices? This is related to the conditioning of SARE (1a)-(1b). We will try to answer these questions for SARE (1a)-(1b) in this paper. For CARE (2) and DARE (3), normwise non-local and local perturbation bounds have been widely studied in the literature; see, e.g., [24–26]. Also, computable residual bounds were derived for measuring the accuracy of a computed solution to CARE (2) and DARE (3), respectively [27, 28]. To the best of our knowledge, these issues have not been addressed for constrained SARE (1a)-(1b) in the literature.

To facilitate our discussion, we use ∥⋅∥_F to denote the Frobenius norm and ∥⋅∥ to denote the operator norm induced by the Frobenius norm. For A = (A_1, …, A_n) = (a_{ij}) ∈ R^{m×n} and B ∈ R^{p×q}, the Kronecker product of A and B is defined by A⊗B = (a_{ij}B) ∈ R^{mp×nq}, and the vec operator is defined by vec(A) = (A_1⊤, …, A_n⊤)⊤. It is known that

vec(ABC)= ( C ⊤ ⊗ A ) vec(B),vec ( A ⊤ ) = P n , m vec(A),

where A ∈ R^{n×m}, B ∈ R^{m×ℓ}, C ∈ R^{ℓ×k}, and P_{n,m} is the Kronecker permutation matrix which maps vec(A) into vec(A⊤) for a rectangular matrix A, i.e.,

P_{n,m} = Σ_{i=1}^{n} Σ_{j=1}^{m} E_{i,j,n×m} ⊗ E_{j,i,m×n},

where the n×m matrix E i , j , n × m has 1 as its (i,j) entry and 0’s elsewhere.
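These identities are easy to verify numerically. The following small NumPy check is illustrative (the helper kron_perm follows the double sum above, and the column-stacking convention for vec is assumed).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, l, k = 3, 4, 2, 5
A, B, C = rng.standard_normal((n, m)), rng.standard_normal((m, l)), rng.standard_normal((l, k))

vec = lambda M: M.reshape(-1, 1, order="F")           # column-stacking vec(.)

# vec(ABC) = (C^T kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

def kron_perm(n, m):
    """P_{n,m} with P_{n,m} @ vec(A) = vec(A^T) for A in R^{n x m}."""
    P = np.zeros((n * m, n * m))
    for i in range(n):
        for j in range(m):
            Eij = np.zeros((n, m)); Eij[i, j] = 1.0
            Eji = np.zeros((m, n)); Eji[j, i] = 1.0
            P += np.kron(Eij, Eji)
    return P

assert np.allclose(kron_perm(n, m) @ vec(A), vec(A.T))
```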

This paper is organized as follows. In Section 2, a perturbation equation is derived from SAREs (1a)-(1b) and (5a)-(5b) without dropping any higher-order terms. By using the Brouwer fixed point theorem, we obtain a perturbation bound for the stabilizing solution of SARE (5a)-(5b) in Section 3. In order to guarantee the existence of the stabilizing solution of perturbed SARE (5a)-(5b), some stability analysis of the operator L_c is established in Section 4. A theoretical formula for the normwise condition number of the stabilizing solution to SARE (1a)-(1b) is derived in Section 5. Finally, in Section 6, a numerical example is given to illustrate the sharpness and tightness of our perturbation bounds, and Section 7 concludes the paper.

2 Perturbation equation

Assume that X∈ S n is the unique stabilizing solution to SARE (1a)-(1b) and X ˜ ∈ S n is a symmetric solution of perturbed SARE (5a)-(5b), that is,

R(X):= A ⊤ X+XA+ C ⊤ XC−Ξ(X)+H=0,
(6)
R ˜ ( X ˜ ):= A ˜ ⊤ X ˜ + X ˜ A ˜ + C ˜ ⊤ X ˜ C ˜ − Ξ ˜ ( X ˜ )+ H ˜ =0,
(7)

where the two operators Ξ: S^n → S^n and Ξ̃: S^n → S^n are given by

Ξ ( X ) = S ( X ) Q ( X ) − 1 S ( X ) ⊤ , Ξ ˜ ( X ˜ ) = S ˜ ( X ˜ ) Q ˜ ( X ˜ ) − 1 S ˜ ( X ˜ ) ⊤ ,
(8)

and the two affine linear operators S̃: S^n → R^{n×m} and Q̃: S^n → S^m are defined by

S ˜ ( X ˜ ) = X ˜ B ˜ + C ˜ ⊤ X ˜ D ˜ + S ˜ , Q ˜ ( X ˜ ) = R ˜ + D ˜ ⊤ X ˜ D ˜

for all X ˜ ∈ S n . Let

ΔX= X ˜ −X.

The purpose of this section is to derive a perturbation equation of ΔX from SAREs (1a)-(1b) and (5a)-(5b). For the sake of perturbation analysis, we adopt the following notations:

Δ A = A ˜ − A , Δ B = B ˜ − B , Δ C = C ˜ − C , Δ D = D ˜ − D , Δ S = S ˜ − S , Δ R = R ˜ − R , Δ H = H ˜ − H
(9)

and

δ Q = Δ R + D ⊤ X Δ D + Δ D ⊤ X D + Δ D ⊤ X Δ D , δ S = Δ S + C ⊤ X Δ D + Δ C ⊤ X D + Δ C ⊤ X Δ D + X Δ B .
(10)

Moreover, let

F = − Q ( X ) − 1 S ( X ) ⊤ , F ˜ = − Q ˜ ( X ) − 1 S ˜ ( X ) ⊤ , Ψ = C + D F , Ψ ˜ = C ˜ + D ˜ F ˜ , Φ = A + B F , Φ ˜ = A ˜ + B ˜ F ˜ ,
(11)

and by the definition of Ψ, we define

K:=XΨ.
(12)

Note that S ˜ (X)=S(X)+δS and Q ˜ (X)=Q(X)+δQ. Substituting (11) into (8), we observe that

Ξ(X) = −S(X)F,
Ξ̃(X̃) = (S̃(X) + ΔX B̃ + C̃⊤ ΔX D̃)(Q̃(X) + D̃⊤ ΔX D̃)^{−1} (S̃(X) + ΔX B̃ + C̃⊤ ΔX D̃)⊤.
(13)

Thus far, we have not specified the relation between R(X) and R̃(X̃). This relation can be worked out by repeatedly applying the matrix identities [29]

( I + U ) − 1 =I−U ( I + U ) − 1 ,V ( I + U V ) − 1 = ( I + V U ) − 1 V.
(14)

To begin with, assume that ΔR and ΔD are sufficiently small so that Q ˜ (X) is invertible. We see that the product

( S ˜ ( X ) + Δ X B ˜ + C ˜ ⊤ Δ X D ˜ ) ( Q ˜ ( X ) + D ˜ ⊤ Δ X D ˜ ) − 1 = ( S ˜ ( X ) + Δ X B ˜ + C ˜ ⊤ Δ X D ˜ ) × [ I − Q ˜ ( X ) − 1 D ˜ ⊤ Δ X D ˜ ( I + Q ˜ ( X ) − 1 D ˜ ⊤ Δ X D ˜ ) − 1 ] Q ˜ ( X ) − 1 = − F ˜ ⊤ + Δ X B ˜ ( I + Q ˜ ( X ) − 1 D ˜ ⊤ Δ X D ˜ ) − 1 Q ˜ ( X ) − 1 + Ψ ˜ ⊤ Δ X D ˜ ( I + Q ˜ ( X ) − 1 D ˜ ⊤ Δ X D ˜ ) − 1 Q ˜ ( X ) − 1 .

It follows that

Ξ ˜ ( X ˜ ) = − S ˜ ( X ) F ˜ − F ˜ ⊤ B ˜ ⊤ Δ X − F ˜ ⊤ D ˜ ⊤ Δ X C ˜ − Ψ ˜ ⊤ Δ X D ˜ F ˜ − Δ X B ˜ F ˜ + ( Ψ ˜ ⊤ Δ X D ˜ + Δ X B ˜ ) ( I + Q ˜ ( X ) − 1 D ˜ ⊤ Δ X D ˜ ) − 1 Q ˜ ( X ) − 1 × ( D ˜ ⊤ Δ X Ψ ˜ + B ˜ ⊤ Δ X )

since F ˜ ⊤ S ˜ ( X ) ⊤ = S ˜ (X) F ˜ . Next, from (11) we can see that

Φ ˜ ⊤ Δ X + Δ X Φ ˜ = A ˜ ⊤ Δ X + Δ X A ˜ + F ˜ ⊤ B ˜ ⊤ Δ X + Δ X B ˜ F ˜ , Ψ ˜ ⊤ Δ X Ψ ˜ = C ˜ ⊤ Δ X C ˜ + F ˜ ⊤ D ˜ ⊤ Δ X C ˜ + Ψ ˜ ⊤ Δ X D ˜ F ˜ .
(15)

Applying (15), we obtain the linear equation

R ˜ ( X ˜ )−R(X)= Φ ˜ ⊤ ΔX+ΔX Φ ˜ + Ψ ˜ ⊤ ΔX Ψ ˜ −E− h 2 (ΔX)=0,
(16)

where

E := −(ΔA⊤ X + X ΔA + C̃⊤ X C̃ − C⊤ X C + S̃(X)F̃ − S(X)F + ΔH),
h₂(ΔX) := Ψ̃⊤ ΔX D̃ (I + Q̃(X)^{−1} D̃⊤ ΔX D̃)^{−1} Q̃(X)^{−1} D̃⊤ ΔX Ψ̃
  + ΔX B̃ (I + Q̃(X)^{−1} D̃⊤ ΔX D̃)^{−1} Q̃(X)^{−1} B̃⊤ ΔX
  + ΔX B̃ (I + Q̃(X)^{−1} D̃⊤ ΔX D̃)^{−1} Q̃(X)^{−1} D̃⊤ ΔX Ψ̃
  + Ψ̃⊤ ΔX D̃ (I + Q̃(X)^{−1} D̃⊤ ΔX D̃)^{−1} Q̃(X)^{−1} B̃⊤ ΔX.

It follows from (16) that

Φ ˜ ⊤ ΔX+ΔX Φ ˜ + Ψ ˜ ⊤ ΔX Ψ ˜ =E+ h 2 (ΔX).
(17)

Equipped with this fact, we now are going to derive a perturbation equation in terms of ΔX by using ΔA, ΔB, ΔC, ΔD, ΔS, ΔR, δS, and δQ. It should be noted that

Ψ ˜ = ( Δ C + C ) − ( Δ D + D ) ( Q ( X ) + δ Q ) − 1 ( S ( X ) + δ S ) ⊤ = Ψ + Δ Ψ ,

with

Δ Ψ : = Δ C − Δ D Q ( X ) − 1 S ( X ) ⊤ − Δ D Q ( X ) − 1 δ S ⊤ − D Q ( X ) − 1 δ S ⊤ + ( Δ D + D ) Q ( X ) − 1 δ Q Q ( X ) − 1 ( I + Q ( X ) − 1 δ Q ) − 1 ( S ( X ) ⊤ + δ S ⊤ )
(18)

and

Φ ˜ = ( Δ A + A ) − ( Δ B + B ) ( Q ( X ) + δ Q ) − 1 ( S ( X ) + δ S ) ⊤ = Φ + Δ Φ ,

with

Δ Φ : = Δ A − Δ B Q ( X ) − 1 S ( X ) ⊤ − Δ B Q ( X ) − 1 δ S ⊤ − B Q ( X ) − 1 δ S ⊤ + ( Δ B + B ) Q ( X ) − 1 δ Q Q ( X ) − 1 ( I + Q ( X ) − 1 δ Q ) − 1 ( S ( X ) ⊤ + δ S ⊤ ) .
(19)

It then is natural to express the left-hand side of (17) by ΔΦ and ΔΨ such that

Φ ˜ ⊤ ΔX+ΔX Φ ˜ + Ψ ˜ ⊤ ΔX Ψ ˜ = Φ ⊤ ΔX+ΔXΦ+ Ψ ⊤ ΔXΨ− h 1 (ΔX),

with

h 1 (ΔX):=− ( Δ Φ ⊤ Δ X + Δ X Δ Φ + Ψ ⊤ Δ X Δ Ψ + Δ Ψ ⊤ Δ X Ψ + Δ Ψ ⊤ Δ X Δ Ψ ) .

Observe further that

C̃⊤ X C̃ − C⊤ X C = C⊤ X ΔC + ΔC⊤ X C + ΔC⊤ X ΔC,
S̃(X)F̃ − S(X)F = −(S(X) + δS)(Q(X) + δQ)^{−1}(S(X) + δS)⊤ + S(X) Q(X)^{−1} S(X)⊤
  = F⊤ δS⊤ + δS F − δS (I + Q(X)^{−1} δQ)^{−1} Q(X)^{−1} δS⊤
  − F⊤ δQ (I + Q(X)^{−1} δQ)^{−1} Q(X)^{−1} δS⊤
  − δS Q(X)^{−1} δQ (I + Q(X)^{−1} δQ)^{−1} F
  + F⊤ δQ F − F⊤ δQ (I + Q(X)^{−1} δQ)^{−1} Q(X)^{−1} δQ F.

Upon substituting (10) into δSF and F ⊤ δQF, we have

δ S F = Δ S F + C ⊤ X Δ D F + Δ C ⊤ X D F + Δ C ⊤ X Δ D F + X Δ B F , F ⊤ δ Q F = F ⊤ Δ R F + F ⊤ D ⊤ X Δ D F + F ⊤ Δ D ⊤ X D F + F ⊤ Δ D ⊤ X Δ D F ,

so that the structure of E in (17) can be partitioned into linear equations

E₁ := −(K⊤ ΔD F + F⊤ ΔD⊤ K + K⊤ ΔC + ΔC⊤ K + F⊤ ΔR F + F⊤ ΔS⊤ + ΔS F + ΔH),
E₂ := −[ΔA⊤ X + X ΔA + ΔC⊤ X ΔC + F⊤ ΔB⊤ X + X ΔB F
  + F⊤ ΔD⊤ X ΔC + ΔC⊤ X ΔD F + F⊤ ΔD⊤ X ΔD F
  − (F⊤ δQ + δS)(I + Q(X)^{−1} δQ)^{−1} Q(X)^{−1} (δQ F + δS⊤)],

that is, E= E 1 + E 2 .

Lemma 2.1 Let X be the stabilizing solution of SARE (1a)-(1b) and X ˜ be a symmetric solution of perturbed SARE (5a)-(5b). If ΔX= X ˜ −X, then ΔX satisfies the equation

Φ ⊤ ΔX+ΔXΦ+ Ψ ⊤ ΔXΨ= E 1 + E 2 + h 1 (ΔX)+ h 2 (ΔX),
(20)

where

E₁ = −(K⊤ ΔD F + F⊤ ΔD⊤ K + K⊤ ΔC + ΔC⊤ K + F⊤ ΔR F + F⊤ ΔS⊤ + ΔS F + ΔH),
(21a)
E₂ = −[ΔA⊤ X + X ΔA + ΔC⊤ X ΔC + F⊤ ΔB⊤ X + X ΔB F + F⊤ ΔD⊤ X ΔC + ΔC⊤ X ΔD F + F⊤ ΔD⊤ X ΔD F − (F⊤ δQ + δS)(I + Q(X)^{−1} δQ)^{−1} Q(X)^{−1} (δQ F + δS⊤)],
h₁(ΔX) = −(ΔΦ⊤ ΔX + ΔX ΔΦ + Ψ⊤ ΔX ΔΨ + ΔΨ⊤ ΔX Ψ + ΔΨ⊤ ΔX ΔΨ),
(21b)
h₂(ΔX) = Ψ̃⊤ ΔX D̃ Ω D̃⊤ ΔX Ψ̃ + ΔX B̃ Ω B̃⊤ ΔX + ΔX B̃ Ω D̃⊤ ΔX Ψ̃ + Ψ̃⊤ ΔX D̃ Ω B̃⊤ ΔX,
(21c)

where Ω = (I + Q̃(X)^{−1} D̃⊤ ΔX D̃)^{−1} Q̃(X)^{−1}, and the matrices ΔA, ΔB, and so on are given by (9)-(12).

Note that E₁ and E₂ do not depend on ΔX, h₁(ΔX) is linear in ΔX, and h₂(ΔX) collects terms that are at least quadratic in ΔX. Assume that the linear operator L_c of (4) is invertible. It is easy to see that perturbed equation (20) holds if and only if

ΔX= L c − 1 E 1 + L c − 1 E 2 + L c − 1 h 1 (ΔX)+ L c − 1 h 2 (ΔX).
(22)

Thus far, we have not specified the condition for the existence of the solution ΔX in (22). In the subsequent discussion, we shall limit our attention to identifying conditions for the existence of a fixed point of (23), that is, to determining an upper bound on the size of ΔX.

3 Perturbation bounds

Let f: S n → S n be a continuous mapping defined by

f(Y)= L c − 1 E 1 + L c − 1 E 2 + L c − 1 h 1 (Y)+ L c − 1 h 2 (Y)for Y∈ S n .
(23)

We see that any fixed point of the mapping f is a solution to perturbed equation (22). Our approach in this section is to establish conditions under which f has a fixed point ΔX and to bound its size. We start by observing that the mapping f given by (23) satisfies

∥ f ( Δ X ) ∥ F ≤ ∥ L c − 1 E 1 ∥ F + ∥ L c − 1 E 2 ∥ F + ∥ L c − 1 h 1 ( Δ X ) ∥ F + ∥ L c − 1 h 2 ( Δ X ) ∥ F .

Define linear operators M: R n × n → S n , N: R n × m → S n , T: S m → S n and H: R n × m → S n by

MΔC= L c − 1 ( K ⊤ Δ C + Δ C ⊤ K ) ,
(24a)
NΔD= L c − 1 ( K ⊤ Δ D F + F ⊤ Δ D ⊤ K ) ,
(24b)
TΔR= L c − 1 ( F ⊤ Δ R F ) ,
(24c)
HΔS= L c − 1 ( F ⊤ Δ S ⊤ + Δ S F ) ,
(24d)

and the scalars ω, μ, ν, τ, η by

ω= ∥ L c − 1 ∥ ,μ=∥M∥,ν=∥N∥,τ=∥T∥,η=∥H∥.
(25)
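The scalars in (25) can be evaluated numerically from Kronecker matrix representations of L_c^{−1} and of the operators (24a)-(24d) (cf. the representations M_c, N_c, T_c, H_c in Section 5). The sketch below is illustrative only: the helper names are not from the paper, and the returned spectral norms are the Frobenius-induced operator norms on the full matrix space, which upper-bound the norms restricted to S^n.

```python
import numpy as np

def perm(p, q):
    """P_{p,q} with P_{p,q} @ vec(A) = vec(A^T) for A in R^{p x q} (column-stacking vec)."""
    P = np.zeros((p * q, p * q))
    for i in range(p):
        for j in range(q):
            Eij = np.zeros((p, q)); Eij[i, j] = 1.0
            Eji = np.zeros((q, p)); Eji[j, i] = 1.0
            P += np.kron(Eij, Eji)
    return P

def perturbation_scalars(X, A, B, C, D, S, R):
    """Return (omega, mu, nu, tau, eta) of (25) via Kronecker matrix representations."""
    n, m = B.shape
    Q = R + D.T @ X @ D
    F = -np.linalg.solve(Q, (X @ B + C.T @ X @ D + S).T)
    Phi, Psi = A + B @ F, C + D @ F
    K = X @ Psi
    I = np.eye(n)
    Lc_inv = np.linalg.inv(np.kron(I, Phi.T) + np.kron(Phi.T, I) + np.kron(Psi.T, Psi.T))
    # matrix representations of the operators in (24a)-(24d)
    M = Lc_inv @ (np.kron(I, K.T) + np.kron(K.T, I) @ perm(n, n))
    N = Lc_inv @ (np.kron(F.T, K.T) + np.kron(K.T, F.T) @ perm(n, m))
    T = Lc_inv @ np.kron(F.T, F.T)
    Hop = Lc_inv @ (np.kron(F.T, I) + np.kron(I, F.T) @ perm(n, m))
    norm2 = lambda Z: np.linalg.norm(Z, 2)
    return norm2(Lc_inv), norm2(M), norm2(N), norm2(T), norm2(Hop)
```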

From (21a) we then have

∥L_c^{−1} E₁∥_F ≤ μ ∥ΔC∥_F + ν ∥ΔD∥_F + τ ∥ΔR∥_F + η ∥ΔS∥_F + ω ∥ΔH∥_F ≡ ε₁.
(26)

We now turn to the fixed point of the continuous mapping f. Before doing so, we recall an important property of the norm of a product of two matrices, which will be used repeatedly in the following discussion. For the proof, the reader is referred to [[30], Theorem 3.9].

Lemma 3.1 Let A and B be two matrices in R n × n . Then ∥ A B ∥ F ≤ ∥ A ∥ 2 ∥ B ∥ F and ∥ A B ∥ F ≤ ∥ A ∥ F ∥ B ∥ 2 .

It immediately follows that the matrices δQ and δS, defined by (10), satisfy

∥δQ∥_F ≤ ∥ΔR∥_F + 2 ∥XD∥₂ ∥ΔD∥_F + ∥X∥₂ ∥ΔD∥_F² ≡ δ_r,
∥δS∥_F ≤ ∥ΔS∥_F + ∥XC∥₂ ∥ΔD∥_F + ∥XD∥₂ ∥ΔC∥_F + ∥X∥₂ ∥ΔD∥_F ∥ΔC∥_F + ∥X∥₂ ∥ΔB∥_F ≡ δ_s.
(27)

Assume that the scalar δ r satisfies

1− ∥ Q ( X ) − 1 ∥ 2 δ r >0.
(28)

Then ∥ L c − 1 E 2 ∥ F is bounded by

∥L_c^{−1} E₂∥_F ≤ 2 ω ∥X∥₂ (∥ΔA∥_F + ∥F∥₂ ∥ΔB∥_F) + ω ∥X∥₂ (∥ΔC∥_F + ∥F∥₂ ∥ΔD∥_F)² + ω ∥Q(X)^{−1}∥₂ (∥F∥₂ δ_r + δ_s)² / (1 − ∥Q(X)^{−1}∥₂ δ_r) ≡ ε₂.
(29)

From (21b) we see that

∥ h 1 ( Δ X ) ∥ F ≤ ( 2 ∥ Δ Φ ∥ F + 2 ∥ Ψ ∥ 2 ∥ Δ Ψ ∥ F + ∥ Δ Ψ ∥ F 2 ) ∥ Δ X ∥ F ,

and also from (18) and (19) we have

∥ΔΦ∥_F ≤ ∥ΔA∥_F + ∥F∥₂ ∥ΔB∥_F + (∥B Q(X)^{−1}∥₂ + ∥Q(X)^{−1}∥₂ ∥ΔB∥_F) δ_s + [∥Q(X)^{−1}∥₂ (∥B∥₂ + ∥ΔB∥_F)(∥F∥₂ + ∥Q(X)^{−1}∥₂ δ_s) δ_r] / (1 − ∥Q(X)^{−1}∥₂ δ_r) ≡ δ_Φ,
(30)
∥ΔΨ∥_F ≤ ∥ΔC∥_F + ∥F∥₂ ∥ΔD∥_F + (∥D Q(X)^{−1}∥₂ + ∥Q(X)^{−1}∥₂ ∥ΔD∥_F) δ_s + [∥Q(X)^{−1}∥₂ (∥D∥₂ + ∥ΔD∥_F)(∥F∥₂ + ∥Q(X)^{−1}∥₂ δ_s) δ_r] / (1 − ∥Q(X)^{−1}∥₂ δ_r) ≡ δ_Ψ.
(31)

It follows that

∥ L c − 1 h 1 ( Δ X ) ∥ F ≤ωδ ∥ Δ X ∥ F ,
(32)

where the positive scalar δ is defined by

δ = 2 δ_Φ + 2 ψ δ_Ψ + δ_Ψ²,  where ψ = ∥Ψ∥₂.
(33)

Also, from (28) and Lemma 3.1 we know that Q̃(X) = Q(X) + δQ = Q(X)(I + Q(X)^{−1} δQ) and ∥Q(X)^{−1} δQ∥_F ≤ ∥Q(X)^{−1}∥₂ δ_r < 1. This implies that Q̃(X) is nonsingular,

∥B̃∥₂² ∥Q̃(X)^{−1}∥₂ = ∥B + ΔB∥₂² ∥(Q(X) + δQ)^{−1}∥₂ = ∥B + ΔB∥₂² ∥(I + Q(X)^{−1} δQ)^{−1} Q(X)^{−1}∥₂ ≤ ∥Q(X)^{−1}∥₂ (∥B∥₂ + ∥ΔB∥_F)² / (1 − ∥Q(X)^{−1}∥₂ δ_r) ≡ γ_B.
(34)

Similarly, we have

∥D̃∥₂² ∥Q̃(X)^{−1}∥₂ ≤ ∥Q(X)^{−1}∥₂ (∥D∥₂ + ∥ΔD∥_F)² / (1 − ∥Q(X)^{−1}∥₂ δ_r) ≡ γ_D.
(35)

Assume that

1− γ D ∥ Δ X ∥ F >0.
(36)

It then follows from Lemma 3.1 and (21c) that

∥h₂(ΔX)∥_F ≤ (∥Ψ̃∥₂ ∥D̃∥₂ + ∥B̃∥₂)² ∥Q̃(X)^{−1}∥₂ ∥ΔX∥_F² / (1 − ∥D̃∥₂² ∥Q̃(X)^{−1}∥₂ ∥ΔX∥_F),
(37)

and from (9), (11) and (31) that

∥ B ˜ ∥ 2 ≤ ∥ B ∥ 2 + ∥ Δ B ∥ 2 ≡ α B , ∥ D ˜ ∥ 2 ≤ ∥ D ∥ 2 + ∥ Δ D ∥ 2 ≡ α D , ∥ Ψ ˜ ∥ 2 ≤ ∥ Ψ ∥ 2 + ∥ Δ Ψ ∥ F ≤ ∥ Ψ ∥ 2 + δ Ψ ≡ ψ ˜ .
(38)

Upon substituting (34), (35) and (38) into (37), we see that

∥L_c^{−1} h₂(ΔX)∥_F ≤ ω (ψ̃² γ_D + 2 ψ̃ α_B α_D + γ_B) ∥ΔX∥_F² / (1 − γ_D ∥ΔX∥_F).

Finally, by (26), (29) and (32), we arrive at the statement

∥f(ΔX)∥_F ≤ ε + ω δ ∥ΔX∥_F + ω α ∥ΔX∥_F² / (1 − γ_D ∥ΔX∥_F),
(39)

where

α ≡ ψ̃² γ_D + 2 ψ̃ α_B α_D + γ_B,  ε ≡ ε₁ + ε₂.
(40)

Consider the quadratic equation

(γ_D − ω δ γ_D + ω α) ξ² − (1 − ω δ + ε γ_D) ξ + ε = 0.
(41)

It is true that if

δ< 1 ω ,
(42a)
ε ≤ (1 − ω δ)² / [γ_D − ω δ γ_D + 2 ω α + √((γ_D − ω δ γ_D + 2 ω α)² − γ_D² (1 − ω δ)²)],
(42b)

then the positive scalar ξ ∗ denoted by

ξ* = 2ε / [(1 − ω δ + ε γ_D) + √((1 − ω δ + ε γ_D)² − 4 (γ_D − ω δ γ_D + ω α) ε)]
(43)

is a solution to (41). Let S ξ ∗ n be a compact subset of S n given by

S ξ ∗ n = { Δ X ∈ S n : ∥ Δ X ∥ F ≤ ξ ∗ } .

It can be seen that in (39)

∥f(ΔX)∥_F ≤ ξ*  if ΔX ∈ S^n_{ξ*}.

It then follows from the Brouwer fixed-point theorem (see [31]) that the continuous mapping f has a fixed point ΔX* ∈ S^n_{ξ*}; that is, the perturbation equation (22) has a solution ΔX* with ∥ΔX*∥_F ≤ ξ*.

Observe also that if ΔX∈ S ξ ∗ n , then

1 − γ_D ∥ΔX∥_F ≥ 1 − γ_D ξ*
  ≥ 1 − 2 ε γ_D / (1 − ω δ + ε γ_D) = (1 − ω δ − ε γ_D) / (1 − ω δ + ε γ_D)   (by (43))
  ≥ [1 − ω δ − (1 − ω δ)² γ_D / (γ_D − ω δ γ_D + 2 ω α)] / (1 − ω δ + ε γ_D)   (by (42b))
  = 2 (1 − ω δ) ω α / [(1 − ω δ + ε γ_D)(γ_D − ω δ γ_D + 2 ω α)] ≥ 0   (by (42a)).

This implies that assumption (36) holds whenever assumptions (42a)-(42b) hold.

4 Stability analysis

We have shown that the mapping f given by (23) has a symmetric fixed point ΔX*. This further implies that perturbed SARE (5a)-(5b) has a symmetric solution X̃ = X + ΔX*. In this section, we want to discuss the stability of the solution X̃, i.e., to show that X̃ is the unique stabilizing (and hence maximal) solution to perturbed SARE (5a)-(5b). Let ϒ and Π be two operators defined by

ϒ(W)= Φ ⊤ W+WΦ,Π(W)= Ψ ⊤ WΨ,W∈ S n ,

with the notations Φ and Ψ given in Definition 1.1. It follows that the operator L c defined by (4) can also be written as

L c (W)=ϒ(W)+Π(W),W∈ S n .
(44)

We then have the following important result addressing the condition for a linear operator to be stable. To see a few necessary and sufficient conditions on the stability, we refer to the results and proofs given in [3].

Theorem 4.1 The linear operator L c =ϒ+Π given by (44) is stable, i.e., σ( L c )⊂ C − , if and only if σ(Φ)⊂ C − and det(ϒ+τΠ)≠0 for all τ∈[0,1].

When small perturbations Z 1 , Z 2 ∈ R n × n are taken into consideration, the perturbed operator of L c can be expressed by

L ˜ c (W)= ϒ ˜ (W)+ Π ˜ (W),
(45)

where ϒ ˜ (W)= ( Φ + Z 1 ) ⊤ W+W(Φ+ Z 1 ) and Π ˜ (W)= ( Ψ + Z 2 ) ⊤ W(Ψ+ Z 2 ) for all W∈ S n . Define the quantity

ℓ(θ)= ∥ ( ϒ + θ Π ) − 1 ∥

for θ∈[0,1] and

β( L c )= min ( Z 1 , Z 2 ) ∈ Z max { ∥ Z 1 ∥ , ∥ Z 2 ∥ } ,

where the set Z = {(Z₁, Z₂) ∈ R^{n×n} × R^{n×n} ∣ det(ϒ̃ + θ Π̃) = 0 for some θ ∈ [0,1]}. It should be noted that if σ(Φ) ⊂ C⁻, Ψ = 0 and Z₂ = 0, then

β( L c )≤β(Φ),
(46)

where the value β(Φ) is defined by [27]

β(Φ)=min { ∥ Z 1 ∥ | max 1 ≤ j ≤ n Re λ j ( Φ + Z 1 ) = 0 , Z 1 ∈ R n × n } .
(47)

Here, λ j (Φ+ Z 1 ) (j=1,…,n) denote the eigenvalues of Φ+ Z 1 .

The connection between β( L c ) and the maximum of the scalar function ℓ(θ) on [0,1] can be established in the following form.

Theorem 4.2 [23]

Suppose that the linear operator L c given by (44) is stable, and let

ℓ c = max θ ∈ [ 0 , 1 ] ℓ(θ),ψ=∥Ψ∥.
(48)

Then

β(L_c) ≥ ℓ_c^{−1} / [(ψ + 1) + √((ψ + 1)² + ℓ_c^{−1})].

We now apply Theorem 4.2 to (46) and obtain that

β(Φ) ≥ β(L_c) ≥ ℓ_c^{−1} / [(ψ + 1) + √((ψ + 1)² + ℓ_c^{−1})].

Hence, if a perturbation matrix Z 1 ∈ R n × n satisfies

∥Z₁∥ < ℓ_c^{−1} / [(ψ + 1) + √((ψ + 1)² + ℓ_c^{−1})],

then (47) implies that the matrix Φ+ Z 1 must be c-stable.

We now turn to a key stability test for the operator L_c, which is the main tool of our stability analysis.

Theorem 4.3 [23]

Suppose that the linear operator L c is stable, and let the scalars ℓ c and ψ be defined as in (48). If the perturbation matrices Z 1 , Z 2 ∈ R n × n satisfy

max{∥Z₁∥, ∥Z₂∥} < ℓ_c^{−1} / [(ψ + 1) + √((ψ + 1)² + ℓ_c^{−1})],
(49)

then Φ+ Z 1 is c-stable and the perturbed linear operator L ˜ c defined by (45) is also stable, i.e., σ( L ˜ c )⊂ C − .
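In practice, ℓ_c in (48) can only be estimated, for instance by sampling θ on [0,1] and taking the largest norm of the matrix representation of (ϒ + θΠ)^{−1}; the threshold Θ of (54) then follows. The NumPy sketch below is a rough, uncertified estimate, not part of the paper: the sampled maximum approximates ℓ_c from below, the norms are taken on the full matrix space, ψ is taken as ∥Ψ∥₂ consistently with (33), and L_c is assumed stable so that the inverses exist.

```python
import numpy as np

def theta_bound(Phi, Psi, num_samples=200):
    """Estimate ell_c = max_{theta in [0,1]} ||(Upsilon + theta*Pi)^{-1}|| by sampling,
    then return the lower bound Theta of (49)/(54)."""
    n = Phi.shape[0]
    I = np.eye(n)
    Ups = np.kron(I, Phi.T) + np.kron(Phi.T, I)       # matrix representation of Upsilon
    Pi = np.kron(Psi.T, Psi.T)                         # matrix representation of Pi
    ell_c = max(np.linalg.norm(np.linalg.inv(Ups + t * Pi), 2)
                for t in np.linspace(0.0, 1.0, num_samples))
    psi = np.linalg.norm(Psi, 2)
    # Theta = ell_c^{-1} / ((psi + 1) + sqrt((psi + 1)^2 + ell_c^{-1}))
    return (1.0 / ell_c) / ((psi + 1) + np.sqrt((psi + 1) ** 2 + 1.0 / ell_c))
```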

Upon substituting X ˜ for X in S ˜ (X) and Q ˜ (X) of (11), we shall have

Q ˜ ( X ˜ )= R ˜ + D ˜ ⊤ X ˜ D ˜ =Q(X)+δQ+ D ˜ ⊤ ΔX D ˜ ≡Q(X)+ΔR,
(50)
S̃(X̃) = S̃ + C̃⊤ X̃ D̃ + X̃ B̃ = S(X) + δS + ΔX B̃ + C̃⊤ ΔX D̃ ≡ S(X) + ΔS.
(51)

Also, corresponding to X ˜ , the perturbed Ψ X ˜ and Φ X ˜ of Ψ and Φ, respectively, can be expressed in terms of the formulae

Φ_X̃ = (ΔA + A) − (ΔB + B)(Q(X) + ΔR)^{−1}(S(X) + ΔS)⊤ := Φ + ΔΦ,
Ψ_X̃ = (ΔC + C) − (ΔD + D)(Q(X) + ΔR)^{−1}(S(X) + ΔS)⊤ := Ψ + ΔΨ,
(52)

with

ΔΦ := ΔA − ΔB Q(X)^{−1} S(X)⊤ − ΔB Q(X)^{−1} ΔS⊤ − B Q(X)^{−1} ΔS⊤ + (ΔB + B) Q(X)^{−1} ΔR Q(X)^{−1} (I + Q(X)^{−1} ΔR)^{−1} (S(X)⊤ + ΔS⊤),
ΔΨ := ΔC − ΔD Q(X)^{−1} S(X)⊤ − ΔD Q(X)^{−1} ΔS⊤ − D Q(X)^{−1} ΔS⊤ + (ΔD + D) Q(X)^{−1} ΔR Q(X)^{−1} (I + Q(X)^{−1} ΔR)^{−1} (S(X)⊤ + ΔS⊤).

Let α_C := ∥C∥₂ + ∥ΔC∥_F. Since ∥ΔX∥_F ≤ ξ*, it follows from (38), (50) and (51) that

∥ΔR∥_F ≤ δ_r + α_D² ξ* := c_r,  ∥ΔS∥_F ≤ δ_s + (α_B + α_C α_D) ξ* := c_s.

Thus ∥ Δ Φ ∥ F and ∥ Δ Ψ ∥ F are bounded by the inequalities

∥ΔΦ∥_F ≤ ∥ΔA∥_F + ∥F∥₂ ∥ΔB∥_F + α_B ∥Q(X)^{−1}∥₂ (c_s + c_r ∥F∥₂) / (1 − c_r ∥Q(X)^{−1}∥₂),
∥ΔΨ∥_F ≤ ∥ΔC∥_F + ∥F∥₂ ∥ΔD∥_F + α_D ∥Q(X)^{−1}∥₂ (c_s + c_r ∥F∥₂) / (1 − c_r ∥Q(X)^{−1}∥₂).

Here, the above upper bounds are obtained by simplifying those given by (30) and (31). Let

f = ∥F∥₂,  γ = ∥Q(X)^{−1}∥₂,  α_C = ∥C∥₂ + ∥ΔC∥_F,  ζ₁ = max{∥ΔA∥_F, ∥ΔC∥_F},  ζ₂ = max{∥ΔB∥_F, ∥ΔD∥_F},  ζ₃ = max{α_B, α_D},
(53)

where α B and α D are defined by (38) and Θ is defined to be the right-hand side of (49), that is,

Θ = ℓ_c^{−1} / [(ψ + 1) + √((ψ + 1)² + ℓ_c^{−1})].
(54)

We then have

max{∥ΔΦ∥_F, ∥ΔΨ∥_F} ≤ ζ₁ + f ζ₂ + [γ ζ₃ (δ_s + δ_r f) + γ ζ₃ (α_B + α_C α_D + α_D² f) ξ*] / (1 − δ_r γ − α_D² γ ξ*).

It follows that if the condition

ζ₁ + f ζ₂ + [γ ζ₃ (δ_s + δ_r f) + γ ζ₃ (α_B + α_C α_D + α_D² f) ξ*] / (1 − δ_r γ − α_D² γ ξ*) < Θ

or, equivalently,

ξ* < [(Θ − ζ₁ − f ζ₂)(1 − δ_r γ) − γ ζ₃ (δ_s + δ_r f)] / [(Θ − ζ₁ − f ζ₂) α_D² γ + γ ζ₃ (α_B + α_C α_D + α_D² f)]

holds, then by Theorem 4.3 the perturbed linear operator L̃_c with respect to X̃ is stable. In other words, the matrix X̃ ∈ S^n must be the unique stabilizing (and maximal) solution to perturbed SARE (5a)-(5b).

We now have all the materials needed for the existence of a stabilizing solution of (5a)-(5b).

Theorem 4.4 (Perturbation bound)

Let X be the stabilizing solution of (1a)-(1b). Let ω, δ r , δ s , δ, γ D , α B , α D , α, ε, f, γ, α C , ζ 1 , ζ 2 , ζ 3 , Θ be the scalars defined by (25), (27), (33), (35), (38), (40), (53) and (54), respectively. Define

ξ* = 2ε / [(1 − ω δ + ε γ_D) + √((1 − ω δ + ε γ_D)² − 4 (γ_D − ω δ γ_D + ω α) ε)].

If the perturbed quantities of the coefficients of (5a)-(5b) are sufficiently small, for example, ε≪1, such that

1 − ∥Q(X)^{−1}∥₂ δ_r > 0,  1 − ω δ > 0,
(1 − ω δ)² / [γ_D − ω δ γ_D + 2 ω α + √((γ_D − ω δ γ_D + 2 ω α)² − γ_D² (1 − ω δ)²)] − ε ≥ 0,
[(Θ − ζ₁ − f ζ₂)(1 − δ_r γ) − γ ζ₃ (δ_s + δ_r f)] / [(Θ − ζ₁ − f ζ₂) α_D² γ + γ ζ₃ (α_B + α_C α_D + α_D² f)] − ξ* > 0,

then perturbed SARE (5a)-(5b) has the unique stabilizing solution X ˜ , and

∥X̃ − X∥_F / ∥X∥_F ≤ ξ* / ∥X∥_F.
(55)
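Once the scalars appearing in Theorem 4.4 have been assembled (for instance as in the earlier sketches), ξ* and the bound (55) follow directly from (41)-(43). A minimal sketch is given below; the function name and the error handling are illustrative assumptions, not part of the paper.

```python
import numpy as np

def xi_star(eps, omega, delta, gamma_D, alpha):
    """Perturbation bound xi* of (43)/Theorem 4.4 from the scalars of (25), (33), (35), (40)."""
    if 1.0 - omega * delta <= 0.0:
        raise ValueError("condition (42a) fails: omega*delta >= 1")
    a = gamma_D - omega * delta * gamma_D + omega * alpha   # quadratic coefficient in (41)
    b = 1.0 - omega * delta + eps * gamma_D
    disc = b ** 2 - 4.0 * a * eps
    if disc < 0.0:
        raise ValueError("condition (42b) fails: (41) has no real root")
    return 2.0 * eps / (b + np.sqrt(disc))

# relative bound (55): ||Xtilde - X||_F / ||X||_F <= xi_star(...) / ||X||_F
```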

5 Condition number of the SARE

In the study of a computational problem, a fundamental issue is its condition number, defined as the ratio of the relative change in the solution to the relative change in the data. Applying the theory of condition numbers given by Rice [32], we define the condition number c(X) of the stabilizing solution X of SARE (1a)-(1b) by

c(X) = lim_{δ→0⁺} sup_{Ω_δ} ∥ΔX∥_F / (κ δ),
(56)

where the set of perturbed matrices Ω δ is defined by

Ω_δ = Ω_δ(κ_A, κ_B, κ_C, κ_D, κ_S, κ_R, κ_H) = {(ΔA, ΔB, ΔC, ΔD, ΔS) ∈ R^{n×n} × R^{n×m} × R^{n×n} × R^{n×m} × R^{n×m}, (ΔR, ΔH) ∈ S^m × S^n ∣ 0 < δ_p ≤ δ},
(57)

with

δ_p = ∥(ΔA/κ_A, ΔB/κ_B, ΔC/κ_C, ΔD/κ_D, ΔS/κ_S, ΔR/κ_R, ΔH/κ_H)∥_F,

and κ A , κ B , κ C , κ D , κ S , κ R , κ H , κ are positive parameters. Then (56) gives the absolute condition number c abs (X) if

( κ A , κ B , κ C , κ D , κ S , κ R , κ H ,κ)=(1,1,1,1,1,1,1,1)

and gives the relative condition number c rel (X) if

( κ A , κ B , κ C , κ D , κ S , κ R , κ H ,κ)= ( ∥ A ∥ F , ∥ B ∥ F , ∥ C ∥ F , ∥ D ∥ F , ∥ S ∥ F , ∥ R ∥ F , ∥ H ∥ F , ∥ X ∥ F ) .

It follows from (22) and (24a)-(24d) that

ΔX=PΔA+QΔB+MΔC+NΔD+HΔS+TΔR+ L c − 1 ΔH+O ( δ p 2 ) ,

where the linear operators P: R n × n → S n and Q: R n × m → S n are defined by

PΔA= L c − 1 ( X Δ A + Δ A ⊤ X ) ,
(58a)
QΔB= L c − 1 ( X Δ B F + F ⊤ Δ B ⊤ X ) .
(58b)

In order to derive an explicit expression for the condition number c(X) of the stabilizing solution X of (1a)-(1b), we require a theorem concerning the form of the optimal solution. This theorem can be regarded as a theoretical extension of the results discussed in [25, 33], where similar statements were established with much heavier machinery. Since this theorem is central to our analysis, we briefly outline a direct proof with ideas from [34] to make the presentation more self-contained.

Theorem 5.1 Let L: R n × n × R m × m → R k × k be a linear operator and

L ( Z 1 , Z 2 ) ⊤ =L ( Z 1 , Z 2 ⊤ )
(59)

for all Z 1 ∈ R n × n and Z 2 ∈ R m × m . Then the optimal solution ( Z 1 ⋆ , Z 2 ⋆ ) to the problem

max ∥ ( Z 1 , Z 2 ) ∥ F = 1 ∥ L ( Z 1 , Z 2 ) ∥ F
(60)

exists for some Z₁⋆ ∈ R^{n×n} and Z₂⋆ = ±Z₂⋆⊤ ∈ R^{m×m}. Furthermore, suppose that the linear operator L(0, Z₂) is a positive operator with respect to any Z₂ ∈ C^{m×m}, that is, for all Z₂ ∈ C^{m×m} we have

L(0, Z 2 )⪰0if  Z 2 ⪰0.

Then there exists an optimal solution ( Z 1 ⋆ , Z 2 ⋆ )∈ R n × n × R m × m to problem (60) such that Z 2 ⋆ is symmetric.

Proof Since ℒ is a linear operator on a finite dimensional space, it is clear that the optimal solution of (60) exists. Assume that ( Z 1 ⋆ , Z 2 ⋆ ) solves this optimization problem. Let σ max = ∥ L ( Z 1 ⋆ , Z 2 ⋆ ) ∥ F and L∈ R k 2 × ( n 2 + m 2 ) be the matrix representation of the operator ℒ such that

vec(L(Z₁, Z₂)) = L [vec(Z₁); vec(Z₂)].
(61)

By (59) and (61), we have

[vec(Z₁⋆); vec(Z₂⋆)]⊤ L⊤L [vec(Z₁⋆); vec(Z₂⋆)] = [vec(Z₁⋆); vec(Z₂⋆⊤)]⊤ L⊤L [vec(Z₁⋆); vec(Z₂⋆⊤)] = σ_max².
(62)

Note that

ℓ := ∥(Z₁⋆, (1/2)(Z₂⋆ + Z₂⋆⊤))∥_F = (∥Z₁⋆∥_F² + ∥(1/2)(Z₂⋆ + Z₂⋆⊤)∥_F²)^{1/2} = 0 only if Z₁⋆ = 0 and Z₂⋆ = −Z₂⋆⊤.

It follows that if Z₂⋆⊤ ≠ −Z₂⋆, then, by (62), (1/ℓ)(Z₁⋆, (1/2)(Z₂⋆ + Z₂⋆⊤)) is another optimal solution of (60). This proves the first part of the theorem.

For the second part, if there exists a symmetric optimal solution, then the proof is complete. Otherwise, from the first part, we know that there exists an optimal solution (Z₁⋆, Z₂⋆) to (60) with Z₁⋆ = 0, Z₂⋆ = −Z₂⋆⊤ ∈ R^{m×m} and ∥(Z₁⋆, Z₂⋆)∥_F = 1. Let i = √−1. We have the following matrix decomposition:

Z₂⋆ = Q⊤ diag([0, −ω₁; ω₁, 0], …, [0, −ω_k; ω_k, 0], 0_r) Q,

where 0 r is a zero matrix with size r×r, Q is an m×m orthogonal matrix, and ω j >0 for 1≤j≤k. Let

Z̃₂⋆ := Q⊤ diag([ω₁, 0; 0, ω₁], …, [ω_k, 0; 0, ω_k], 0_r) Q

be a real symmetric matrix. Since − Z ˜ 2 ⋆ ⪯i Z 2 ⋆ ⪯ Z ˜ 2 ⋆ , it is true that

L(0, Z ˜ 2 ⋆ )⪰L(0,i Z 2 ⋆ )⪰L(0,− Z ˜ 2 ⋆ )=−L(0, Z ˜ 2 ⋆ ).

Using the fact that ∥ i Z 2 ⋆ ∥ F = ∥ Z ˜ 2 ⋆ ∥ F , we see that ∥ ( 0 , Z ˜ 2 ⋆ ) ∥ F = ∥ ( 0 , i Z 2 ⋆ ) ∥ F = ∥ ( 0 , Z 2 ⋆ ) ∥ F =1 and

∥ L ( 0 , Z ˜ 2 ⋆ ) ∥ F ≥ ∥ L ( 0 , i Z 2 ⋆ ) ∥ F = ∥ L ( 0 , Z 2 ⋆ ) ∥ F .

If W 1 ⪰ W 2 ⪰− W 1 , then ∥ W 1 ∥ F ≥ ∥ W 2 ∥ F , which implies that (0, Z ˜ 2 ⋆ ) is a symmetric optimal solution to (60) (see [[25], Lemma A.1]). This completes the proof. □

With the existence theory established above, it is interesting to note that the condition number c(X) defined by (56) can be written as

c(X) = (1/κ) lim_{δ→0⁺} sup_{Ω_δ} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ = (1/κ) max_{δ_p>0} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ_p.
(63)

Note that the second equality in (63) is a consequence of the linearity of the operators involved (for the proof, see Lemma A.1 in the Appendix). Observe further that the inverse operator L_c^{−1} of (4) satisfies

[ L c − 1 ( W ) ] ⊤ = L c − 1 ( W ⊤ )

since [ L c ( W ) ] ⊤ = L c ( W ⊤ ) for all W∈ C n × n . It follows that

[ T Δ R ] ⊤ = T Δ R ⊤ , [ P Δ A ] ⊤ = P Δ A , [ Q Δ B ] ⊤ = Q Δ B , [ M Δ C ] ⊤ = M Δ C , [ N Δ D ] ⊤ = N Δ D , [ H Δ S ] ⊤ = H Δ S .

Also, it is known that the inverse operator L c − 1 is positive [3, Corollary 3.8]. It follows that T is also a positive operator. Now, applying Theorem 5.1 to the operator PΔA+QΔB+MΔC+NΔD+HΔS+TΔR+ L c − 1 ΔH in (63), we obtain the equality

c(X) = (1/κ) max_{Ω̃} ∥κ_A P ΔA + κ_B Q ΔB + κ_C M ΔC + κ_D N ΔD + κ_S H ΔS + κ_R T ΔR + κ_H L_c^{−1} ΔH∥_F / ∥(ΔA, ΔB, ΔC, ΔD, ΔS, ΔR, ΔH)∥_F,

where the extended set Ω ˜ is defined by

Ω̃ = {(ΔA, ΔB, ΔC, ΔD, ΔS, ΔR, ΔH) ∈ R^{n×n} × R^{n×m} × R^{n×n} × R^{n×m} × R^{n×m} × S^m × S^n ∣ ∥(ΔA, ΔB, ΔC, ΔD, ΔS, ΔR, ΔH)∥_F > 0}.

On the other hand, observe that the matrix representation of the operator L_c in (4) can be written as L_c = I ⊗ Φ⊤ + Φ⊤ ⊗ I + Ψ⊤ ⊗ Ψ⊤. Corresponding to (24a)-(24d) and (58a)-(58b), we let

M c = L c − 1 ( I ⊗ K ⊤ + ( K ⊤ ⊗ I ) P n , n ) , N c = L c − 1 ( F ⊤ ⊗ K ⊤ + ( K ⊤ ⊗ F ⊤ ) P m , n ) , T c = L c − 1 ( F ⊤ ⊗ F ⊤ ) , H c = L c − 1 ( F ⊤ ⊗ I + ( I ⊗ F ⊤ ) P m , n ) , P c = L c − 1 ( I ⊗ X + ( X ⊗ I ) P n , n ) , Q c = L c − 1 ( F ⊤ ⊗ X + ( X ⊗ F ⊤ ) P m , n )

and

U= ( κ A P c , κ B Q c , κ C M c , κ D N c , κ S H c , κ R T c , κ H L c − 1 ) .

It follows that

c(X) = (1/κ) max_{V ∈ Ω̃} ∥U vec(V)∥₂ / ∥vec(V)∥₂ = ∥U∥₂ / κ.

Based on the above discussion, we have the following result.

Theorem 5.2 The condition number c(X) given by (56) has the explicit expression c(X) = ∥U∥₂ / κ. In particular, we have the relative condition number

c_rel(X) = ∥(∥A∥_F P_c, ∥B∥_F Q_c, ∥C∥_F M_c, ∥D∥_F N_c, ∥S∥_F H_c, ∥R∥_F T_c, ∥H∥_F L_c^{−1})∥₂ / ∥X∥_F.
(64)
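Formula (64) can be assembled directly from the Kronecker representations of Section 5. The following NumPy sketch is illustrative (the function and helper names are not from the paper); it uses the column-stacking vec convention and builds the permutation matrix P_{p,q} so that P_{p,q} vec(A) = vec(A⊤) for A ∈ R^{p×q}.

```python
import numpy as np

def relative_condition_number(X, A, B, C, D, S, R, H):
    """Relative condition number (64) via the representations P_c, Q_c, M_c, N_c, H_c, T_c."""
    n, m = B.shape
    Q = R + D.T @ X @ D
    F = -np.linalg.solve(Q, (X @ B + C.T @ X @ D + S).T)
    Phi, Psi = A + B @ F, C + D @ F
    K = X @ Psi
    I = np.eye(n)

    def perm(p, q):                    # P_{p,q} vec(A) = vec(A^T) for A in R^{p x q}
        P = np.zeros((p * q, p * q))
        for i in range(p):
            for j in range(q):
                Eij = np.zeros((p, q)); Eij[i, j] = 1.0
                Eji = np.zeros((q, p)); Eji[j, i] = 1.0
                P += np.kron(Eij, Eji)
        return P

    Lc_inv = np.linalg.inv(np.kron(I, Phi.T) + np.kron(Phi.T, I) + np.kron(Psi.T, Psi.T))
    Pc = Lc_inv @ (np.kron(I, X) + np.kron(X, I) @ perm(n, n))
    Qc = Lc_inv @ (np.kron(F.T, X) + np.kron(X, F.T) @ perm(n, m))
    Mc = Lc_inv @ (np.kron(I, K.T) + np.kron(K.T, I) @ perm(n, n))
    Nc = Lc_inv @ (np.kron(F.T, K.T) + np.kron(K.T, F.T) @ perm(n, m))
    Hc = Lc_inv @ (np.kron(F.T, I) + np.kron(I, F.T) @ perm(n, m))
    Tc = Lc_inv @ np.kron(F.T, F.T)
    nf = np.linalg.norm               # Frobenius norm by default for matrices
    U = np.hstack([nf(A) * Pc, nf(B) * Qc, nf(C) * Mc, nf(D) * Nc,
                   nf(S) * Hc, nf(R) * Tc, nf(H) * Lc_inv])
    return np.linalg.norm(U, 2) / nf(X)
```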

6 Numerical experiment

In this section we demonstrate the sharpness of perturbation bound (55) and its relationship with the relative condition number (64). Based on Newton's iteration [3], a numerical example with 2×2 coefficient matrices is presented. The numerical procedure is described in Algorithm 1. The iteration is stopped when the value of the normalized residual (NRes)

NRes = ∥P̃(X̃) − S̃(X̃) Q̃(X̃)^{−1} S̃(X̃)⊤∥ / (∥P̃(X̃)∥ + ∥S̃(X̃)∥ ∥Q̃(X̃)^{−1}∥ ∥S̃(X̃)⊤∥)

is less than or equal to a prescribed tolerance.
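A direct implementation of this stopping criterion might look as follows; the choice of the Frobenius norm for ∥⋅∥ in NRes and the function name are assumptions made for illustration.

```python
import numpy as np

def normalized_residual(Xt, At, Bt, Ct, Dt, St, Rt, Ht):
    """Normalized residual NRes used as the stopping criterion (Frobenius norm assumed)."""
    Pt = At.T @ Xt + Xt @ At + Ct.T @ Xt @ Ct + Ht
    Qt = Rt + Dt.T @ Xt @ Dt
    Sx = Xt @ Bt + Ct.T @ Xt @ Dt + St
    Qinv = np.linalg.inv(Qt)
    num = np.linalg.norm(Pt - Sx @ Qinv @ Sx.T)
    den = np.linalg.norm(Pt) + np.linalg.norm(Sx) * np.linalg.norm(Qinv) * np.linalg.norm(Sx.T)
    return num / den
```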

Algorithm 1 (Newton's method for the SARE): [X̃] = SARE(Ã, B̃, C̃, D̃, R̃, S̃, H̃). (The algorithm is given in the original figure.)
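Since the figure containing Algorithm 1 is not reproduced here, the following sketch is only a generic stand-in in the spirit of Newton's iteration for the SARE [3], not the authors' Algorithm 1: at each step it solves the linearized stochastic Lyapunov equation Φ_k⊤X_{k+1} + X_{k+1}Φ_k + Ψ_k⊤X_{k+1}Ψ_k + H + S F_k + F_k⊤S⊤ + F_k⊤R F_k = 0 by Kronecker vectorization (suitable only for small n), and it would be applied to the perturbed coefficients (Ã, B̃, …) in the experiment. The iteration generally requires a stabilizing initial guess; X0 = 0 below is only a placeholder, and the relative-change stopping test stands in for the NRes criterion above.

```python
import numpy as np

def sare_newton(A, B, C, D, S, R, H, X0=None, tol=1e-12, maxit=50):
    """Generic Newton iteration for the SARE (1a)-(1b); illustrative sketch only."""
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros((n, n)) if X0 is None else X0.copy()
    for _ in range(maxit):
        Q = R + D.T @ X @ D
        F = -np.linalg.solve(Q, (X @ B + C.T @ X @ D + S).T)
        Phi, Psi = A + B @ F, C + D @ F
        G = H + S @ F + F.T @ S.T + F.T @ R @ F        # constant term of the Newton step
        Lc = np.kron(I, Phi.T) + np.kron(Phi.T, I) + np.kron(Psi.T, Psi.T)
        Xnew = np.linalg.solve(Lc, -G.reshape(-1, 1, order="F")).reshape(n, n, order="F")
        Xnew = (Xnew + Xnew.T) / 2                      # symmetrize against roundoff
        if np.linalg.norm(Xnew - X) <= tol * max(1.0, np.linalg.norm(Xnew)):
            return Xnew
        X = Xnew
    return X
```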

Example 1 Given a parameter r = 10^{−m} for some m > 0, let the matrices A, B, C, D be defined by

A = [−4, 0; 0, −1],  B = [1, 0; 0, −r],  C = I₂,  D = [1, 0; 0, 0],

and the matrices S, R, H be defined by

S = [1, 0; 0, 0],  R = [3, 0; 0, r],  H = [−13/2, 0; 0, 1].

It is easily seen that the unique stabilizing and maximal solution is

X = [−1, 0; 0, (√5 − 1)/2].

Let the perturbation matrices ΔA, ΔB, ΔC, ΔD, ΔS, ΔR and ΔH be generated using the MATLAB command randn with the weight 10^{−j}; that is, each of ΔA, ΔB, ΔC, ΔD, ΔS, ΔR and ΔH is generated in the form randn(2) × 10^{−j}. Since ΔR and ΔH are required to be symmetric, we symmetrize them by redefining ΔR and ΔH as ΔR + ΔR⊤ and ΔH + ΔH⊤, respectively. Now, let (Ã, B̃, C̃, D̃, S̃, R̃, H̃) = (A + ΔA, B + ΔB, C + ΔC, D + ΔD, S + ΔS, R + ΔR, H + ΔH), which are the coefficient matrices of SARE (5a)-(5b).
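The data of Example 1 and the random perturbations can be set up as follows. NumPy's standard_normal is used here as a stand-in for the MATLAB randn(2) construction, and the seed together with the particular values of m and j are illustrative choices, not from the paper.

```python
import numpy as np

# Example 1 data for r = 10^{-m}; the perturbations follow the construction in the text.
m, j = 2, 8
r = 10.0 ** (-m)
A = np.array([[-4.0, 0.0], [0.0, -1.0]])
B = np.array([[1.0, 0.0], [0.0, -r]])
C = np.eye(2)
D = np.array([[1.0, 0.0], [0.0, 0.0]])
S = np.array([[1.0, 0.0], [0.0, 0.0]])
R = np.array([[3.0, 0.0], [0.0, r]])
H = np.array([[-13.0 / 2, 0.0], [0.0, 1.0]])

rng = np.random.default_rng(0)
dA, dB, dC, dD, dS = (rng.standard_normal((2, 2)) * 10.0 ** (-j) for _ in range(5))
dR = rng.standard_normal((2, 2)) * 10.0 ** (-j); dR = dR + dR.T   # keep R + dR symmetric
dH = rng.standard_normal((2, 2)) * 10.0 ** (-j); dH = dH + dH.T   # keep H + dH symmetric

At, Bt, Ct, Dt, St, Rt, Ht = A + dA, B + dB, C + dC, D + dD, S + dS, R + dR, H + dH
```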

Firstly, we evaluate the accuracy of the perturbation bound with the fixed parameter r = 10^{−2}, i.e., m = 2, and different weights 10^{−j} for j = 5, …, 9. It can be seen from Table 1 that the relative errors are closely bounded by the perturbation bounds of (55). In other words, (55) does provide a sharp upper bound for the relative errors of the stabilizing solution X.

Table 1 Relative errors and perturbation bounds

Secondly, we investigate how ill-conditioning affects the perturbation bounds. Here the weight is fixed at 10^{−15}, i.e., j = 15. The relationships among the relative errors, perturbation bounds and relative condition numbers are shown in Table 2. As r decreases, the matrix R approaches singularity, and the accuracy of the perturbation bound is strongly affected by this near-singularity. Nevertheless, as m increases, the perturbation bound remains tight with respect to the relative error. It can also be seen that the number of accurate digits of the perturbation bound decreases roughly in proportion to the growth of the relative condition number; the number of accurate digits of the perturbation bound plus the number of digits of the relative condition number is almost equal to 16. (In IEEE double precision, the machine precision is about 2.2 × 10^{−16}.) This indicates that the derived perturbation bound of (55) is fairly sharp.

Table 2 Relative errors, perturbation bounds and relative condition numbers

7 Conclusion

In numerical computation, it is important in practice to have accurate estimates of the relative error and the condition number of a given problem. In this paper, we have provided a tight perturbation bound for the stabilizing solution of SARE (1a)-(1b) under small changes in the coefficient matrices, together with sufficient conditions for the existence of the stabilizing solution to the perturbed SARE. The corresponding condition number of the stabilizing solution has also been derived. We have highlighted and compared the practical performance of the derived perturbation bound and condition number through a numerical example. The numerical results show that the sharpness of the perturbation bound is closely tied to the condition number of the stabilizing solution. Consequently, the bound and the condition number provide good measurement tools for the sensitivity analysis of SARE (1a)-(1b).

Appendix

We provide here a proof of the condition given by (63).

Lemma A.1 Let P, Q, ℳ, N, ℋ, T, L c − 1 be the operators defined by (58a)-(58b), (24a)-(24d) and (4), and let Ω δ , δ p be defined by (57). Then the following equality holds:

lim_{δ→0⁺} sup_{Ω_δ} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ = max_{δ_p>0} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ_p.
(65)

Proof For any δ>0, 0< δ p ≤δ, we see that

∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ = (δ_p/δ) ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ_p ≤ max_{δ̃_p>0} ∥P ΔÃ + Q ΔB̃ + M ΔC̃ + N ΔD̃ + H ΔS̃ + T ΔR̃ + L_c^{−1} ΔH̃∥_F / δ̃_p,

where δ̃_p = ∥(ΔÃ/κ_A, ΔB̃/κ_B, ΔC̃/κ_C, ΔD̃/κ_D, ΔS̃/κ_S, ΔR̃/κ_R, ΔH̃/κ_H)∥_F. It follows that

lim_{δ→0⁺} sup_{Ω_δ} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ ≤ max_{δ_p>0} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ_p.
(66)

On the other hand, for any fixed δ>0, choose any perturbation matrices

(Δ A 1 ,Δ B 1 ,Δ C 1 ,Δ D 1 ,Δ S 1 ,Δ R 1 ,Δ H 1 )∈ Ω δ

and therefore

∥(ΔA₁/κ_A, ΔB₁/κ_B, ΔC₁/κ_C, ΔD₁/κ_D, ΔS₁/κ_S, ΔR₁/κ_R, ΔH₁/κ_H)∥_F = δ_{p1} ≤ δ.

It is true that ((δ/δ_{p1}) ΔA₁, (δ/δ_{p1}) ΔB₁, (δ/δ_{p1}) ΔC₁, (δ/δ_{p1}) ΔD₁, (δ/δ_{p1}) ΔS₁, (δ/δ_{p1}) ΔR₁, (δ/δ_{p1}) ΔH₁) ∈ Ω_δ, and this gives the fact that

lim_{δ→0⁺} sup_{Ω_δ} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ ≥ ∥P ((δ/δ_{p1}) ΔA₁) + Q ((δ/δ_{p1}) ΔB₁) + M ((δ/δ_{p1}) ΔC₁) + N ((δ/δ_{p1}) ΔD₁) + H ((δ/δ_{p1}) ΔS₁) + T ((δ/δ_{p1}) ΔR₁) + L_c^{−1} ((δ/δ_{p1}) ΔH₁)∥_F / δ = ∥P ΔA₁ + Q ΔB₁ + M ΔC₁ + N ΔD₁ + H ΔS₁ + T ΔR₁ + L_c^{−1} ΔH₁∥_F / δ_{p1}.

Hence

lim_{δ→0⁺} sup_{Ω_δ} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ ≥ max_{δ_p>0} ∥P ΔA + Q ΔB + M ΔC + N ΔD + H ΔS + T ΔR + L_c^{−1} ΔH∥_F / δ_p.
(67)

Comparison of (66) and (67) gives (65). □

References

  1. Rami MA, Zhou XY: Linear matrix inequalities, Riccati equations, and indefinite stochastic linear quadratic controls. IEEE Trans. Autom. Control 2000, 45(6):1131-1143.

  2. El Bouhtouri A, Hinrichsen D, Pritchard AJ: On the disturbance attenuation problem for a wide class of time invariant linear stochastic systems. Stoch. Stoch. Rep. 1999, 65(3-4):255-297.

  3. Damm T, Hinrichsen D: Newton's method for a rational matrix equation occurring in stochastic control. Linear Algebra Appl. 2001, 332-334:81-109 (Proceedings of the Eighth Conference of the International Linear Algebra Society).

  4. Hinrichsen D, Pritchard AJ: Stochastic H ∞. SIAM J. Control Optim. 1998, 36:1504-1538.

  5. Lancaster P, Rodman L: Algebraic Riccati Equations. Oxford Science Publications. Clarendon, New York; 1995.

  6. Mehrmann VL: The Autonomous Linear Quadratic Control Problem: Theory and Numerical Solution. Lecture Notes in Control and Information Sciences 163. Springer, Berlin; 1991.

  7. Zhou K, Doyle JC, Glover K: Robust and Optimal Control. Prentice Hall, Upper Saddle River; 1996.

  8. Benner P, Laub AJ, Mehrmann V: A collection of benchmark examples for the numerical solution of algebraic Riccati equations I: continuous-time case. Technical Report SPC 95_22, Fakultät für Mathematik, TU Chemnitz-Zwickau, Chemnitz; 1995. http://www.tu-chemnitz.de/sfb393/spc95pr.html

  9. Benner P, Laub AJ, Mehrmann V: A collection of benchmark examples for the numerical solution of algebraic Riccati equations II: discrete-time case. Technical Report SPC 95_23, Fakultät für Mathematik, TU Chemnitz-Zwickau, Chemnitz; 1995. http://www.tu-chemnitz.de/sfb393/spc95pr.html

  10. Sima V: Algorithms for Linear-Quadratic Optimization. Monographs and Textbooks in Pure and Applied Mathematics 200. Dekker, New York; 1996.

  11. Laub AJ: A Schur method for solving algebraic Riccati equations. IEEE Trans. Autom. Control 1979, 24(6):913-921.

  12. Ammar G, Benner P, Mehrmann V: A multishift algorithm for the numerical solution of algebraic Riccati equations. Electron. Trans. Numer. Anal. 1993, 1:33-48.

  13. Ammar G, Mehrmann V: On Hamiltonian and symplectic Hessenberg forms. Linear Algebra Appl. 1991, 149:55-72.

  14. Benner P, Mehrmann V, Xu H: A new method for computing the stable invariant subspace of a real Hamiltonian matrix. J. Comput. Appl. Math. 1997, 86:17-43.

  15. Benner P, Mehrmann V, Xu H: A numerically stable, structure preserving method for computing the eigenvalues of real Hamiltonian or symplectic pencils. Numer. Math. 1998, 78(3):329-358.

  16. Bunse-Gerstner A, Byers R, Mehrmann V: A chart of numerical methods for structured eigenvalue problems. SIAM J. Matrix Anal. Appl. 1992, 13:419-453.

  17. Bunse-Gerstner A, Mehrmann V: A symplectic QR like algorithm for the solution of the real algebraic Riccati equation. IEEE Trans. Autom. Control 1986, 31(12):1104-1113.

  18. Mehrmann V: A step toward a unified treatment of continuous and discrete time control problems. Linear Algebra Appl. 1996, 241-243:749-779.

  19. Byers R: Solving the algebraic Riccati equation with the matrix sign function. Linear Algebra Appl. 1987, 85:267-279.

  20. Benner P: Contributions to the numerical solutions of algebraic Riccati equations and related eigenvalue problems. PhD thesis, Fakultät für Mathematik, TU Chemnitz-Zwickau, Chemnitz; 1997.

  21. Chu EK-W, Fan H-Y, Lin W-W: A structure-preserving doubling algorithm for continuous-time algebraic Riccati equations. Linear Algebra Appl. 2005, 396:55-80.

  22. Chu EK-W, Fan H-Y, Lin W-W, Wang C-S: Structure-preserving algorithms for periodic discrete-time algebraic Riccati equations. Int. J. Control 2004, 77(8):767-788.

  23. Chiang C-Y, Fan H-Y: Residual bounds of the stochastic algebraic Riccati equation. Appl. Numer. Math. 2013, 63:78-87.

  24. Konstantinov M, Gu D-W, Mehrmann V, Petkov P: Perturbation Theory for Matrix Equations. Studies in Computational Mathematics 9. North-Holland, Amsterdam; 2003.

  25. Sun J-G: Perturbation theory for algebraic Riccati equations. SIAM J. Matrix Anal. Appl. 1998, 19(1):39-65.

  26. Sun J-G: Sensitivity analysis of the discrete-time algebraic Riccati equation. Linear Algebra Appl. 1998, 275-276:595-615 (Proceedings of the Sixth Conference of the International Linear Algebra Society, Chemnitz, 1996).

  27. Sun J-G: Residual bounds of approximate solutions of the algebraic Riccati equation. Numer. Math. 1997, 76(2):249-263.

  28. Sun J-G: Residual bounds of approximate solutions of the discrete-time algebraic Riccati equation. Numer. Math. 1998, 78(3):463-478.

  29. Riedel KS: A Sherman-Morrison-Woodbury identity for rank augmenting matrices with application to centering. SIAM J. Matrix Anal. Appl. 1992, 13(2):659-662.

  30. Stewart GW, Sun JG: Matrix Perturbation Theory. Computer Science and Scientific Computing. Academic Press, Boston; 1990.

  31. Ortega JM, Rheinboldt WC: Iterative Solution of Nonlinear Equations in Several Variables. Classics in Applied Mathematics 30. SIAM, Philadelphia; 2000. Reprint of the 1970 original.

  32. Rice JR: A theory of condition. SIAM J. Numer. Anal. 1966, 3:287-310.

  33. Sun J-G: Condition numbers of algebraic Riccati equations in the Frobenius norm. Linear Algebra Appl. 2002, 350:237-261.

  34. Xu S: Matrix Computation in Control Theory. Higher Education Press, Beijing; 2010 (in Chinese).


Acknowledgements

The authors wish to thank the editor and two anonymous referees for many interesting and valuable suggestions on the manuscript. This research work is partially supported by the National Science Council and the National Center for Theoretical Sciences in Taiwan. The first author was supported by the National Science Council of Taiwan under Grant NSC 102-2115-M-150-002. The second author was supported by the National Science Council of Taiwan under Grant NSC 102-2115-M-003-009. The third author was supported by the National Science Council of Taiwan under Grant NSC 101-2115-M-194-007-MY3.

Author information

Correspondence to Matthew M Lin.

Additional information

Competing interests

The authors declare that there is no conflict of interests regarding the publication of this article.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Chiang, CY., Fan, HY., Lin, M.M. et al. Perturbation analysis of the stochastic algebraic Riccati equation. J Inequal Appl 2013, 580 (2013). https://doi.org/10.1186/1029-242X-2013-580
