Common diagonal solutions to the Lyapunov inequalities for interval systems
Journal of Inequalities and Applications, volume 2016, Article number: 255 (2016)
Abstract
In this paper for interval systems we consider the problem of existence and evaluation of common diagonal solutions to the Lyapunov inequalities. For second order systems, we give necessary and sufficient conditions and exact solutions, that is, complete theoretical solutions. For third order systems, an algorithm for the evaluation of common solutions in the case of existence is given. In the general case a sufficient condition is obtained for a common diagonal solution in terms of the center and upper bound matrices of an interval family.
Introduction
Consider state equation
where \(x=x(t) \in\mathbb{R}^{n}\) and \(A=(a_{ij})\) (\(i,j=1,2,\ldots,n\)) is an \(n \times n\) matrix. In many control system applications, each entry \(a_{ij}\) can vary independently within some interval. Such systems are called interval systems. In other words, \(a^{-}_{ij} \leq a_{ij} \leq a^{+}_{ij}\), where \(a^{-}_{ij}\), \(a^{+}_{ij}\) are given. Denote the obtained interval family by \(\mathcal{A}\), i.e.
Interval matrices have many engineering applications. Due to their natural tie with robust control system analysis and design, several approaches to the stability analysis of interval matrices have evolved (see [1–3] and the references therein).
Consider an \(n \times n\) real matrix \(A=(a_{ij})\). If all eigenvalues of A lie in the open left half plane (open unit disc) then A is said to be Hurwitz (Schur) stable. A necessary and sufficient condition for Hurwitz (Schur) stability is the existence of a symmetric positive definite matrix P (i.e. \(P>0\)) such that
where \(B<0\) means \(-B>0\) and the symbol ‘T’ stands for the transpose. If in (2) the matrix P can be chosen to be positive diagonal then A is called diagonally stable.
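The test in (2) can be carried out numerically. Below is a minimal sketch (the example matrix and the helper name are ours): it solves the Lyapunov equation \(A^{T}P+PA=-I\) through its Kronecker form and checks \(P>0\).

```python
import numpy as np

def lyap_P(A):
    """Solve the Lyapunov equation A^T P + P A = -I via its Kronecker
    (vectorized) linear form; P > 0 if and only if A is Hurwitz stable."""
    n = A.shape[0]
    K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
    return np.linalg.solve(K, -np.eye(n).reshape(-1)).reshape(n, n)

A = np.array([[-2.0, 1.0], [0.0, -1.0]])    # example matrix (ours); Hurwitz
P = lyap_P(A)
print(np.all(np.linalg.eigvalsh(P) > 0))    # True: A is Hurwitz stable
```

Here P is symmetric but not diagonal in general; diagonal stability additionally asks for a positive diagonal P.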
Diagonal stability problems have many applications (see [4–10]). The existence of diagonal type solutions is considered in [3, 5, 8, 11–14] and the references therein.
We are looking for the existence and evaluation of a common diagonal Lyapunov function that guarantees the diagonal stability of interval systems. In other words, we investigate the problem of existence of a diagonal matrix \(D=\operatorname{diag}(x_{1},x_{2},\ldots,x_{n})\) with \(x_{i}>0\) and with the property
For the case \(n=2\), we solve the common Schur stability problem as well. The necessary condition for the existence of a common diagonal solution is the robust diagonal stability, that is, diagonal stability of each matrix from the family \(\mathcal{A}\). Robust diagonal stability means that for every \(A \in\mathcal{A}\) there exists a positive diagonal D such that \(A^{T}D+DA<0\) (\(A^{T}DA-D<0\)).
Common diagonal stability problems arise, for example, in the study of large-scale dynamic systems (see [10]).
This manuscript addresses the following points:

(1)
Full theoretical solutions of the common diagonal matrix problems for the case \(n=2\) (Section 2). Note that for second order systems the existence and evaluation of common nondiagonal matrix solutions to the Lyapunov inequalities have been considered in [15, 16].

(2)
A numerical algorithm for the case \(n=3\) where the proposed algorithm gives almost all common diagonal solutions (Section 3).

(3)
Sufficient condition for a common diagonal solution in the general case (Section 4).
For a matrix polytope \(\mathcal{A}\), it is well known that the problem of a common solution D is equivalent to the following system of linear matrix inequalities (LMIs):
which in lower dimensions can be solved numerically by the LMI method (here \(A_{i}\) are the extreme matrices). Note that the LMI method is an implicit numerical method which gives only one solution, whereas in some problems (see, for example, [10]) an infinite number of solutions are required. Note also that in the case of, for example, a \(5 \times 5\) interval matrix family, the LMI method fails to give an answer due to the higher dimension (see Example 5).
Full solutions for second order systems
Hurwitz case
For a single \(2 \times2\) real matrix
an algebraic characterization of diagonal stability is the following.
Theorem 1
The matrix (4) is Hurwitz diagonally stable if and only if \(a_{1}<0\), \(a_{4}<0\), and \(a_{1}a_{4}-a_{2}a_{3}>0\).
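Theorem 1 is a simple sign test on the entries. A small sketch (the function name and example matrix are ours) that also cross-checks the criterion against the Lyapunov inequality with \(D=I\), which happens to work for this example:

```python
import numpy as np

def hurwitz_diag_stable_2x2(A):
    """Theorem 1: [[a1, a2], [a3, a4]] is Hurwitz diagonally stable
    iff a1 < 0, a4 < 0 and a1*a4 - a2*a3 > 0."""
    (a1, a2), (a3, a4) = A
    return a1 < 0 and a4 < 0 and a1 * a4 - a2 * a3 > 0

A = np.array([[-2.0, 1.0], [1.0, -1.0]])         # example matrix (ours)
print(hurwitz_diag_stable_2x2(A))                # True
# Cross-check with D = I: A^T D + D A = A^T + A must be negative definite.
print(np.all(np.linalg.eigvalsh(A.T + A) < 0))   # True
```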
Consider a \(2 \times2\) interval family
Define the following 4-dimensional box:
Without loss of generality, all \(2 \times2\) positive diagonal matrices may be normalized to have the form \(D=\operatorname{diag}(t,1)\) where \(t>0\).
As noted above a necessary condition for the existence of a common diagonal solution is the robust diagonal stability of (5). From Theorem 1, we have the following.
Proposition 1
The family (5) is robust diagonally stable if and only if
where the maximum is calculated over the extreme points \(a^{-}_{2}\), \(a^{+}_{2}\), \(a^{-}_{3}\), and \(a^{+}_{3}\).
Now we proceed to the necessary and sufficient condition for the existence of a common diagonal solution, i.e. the existence of \(D=\operatorname{diag}(t_{*},1)\) with \(t_{*}>0\) such that
for all \(a=(a_{1},a_{2},a_{3},a_{4})\in Q=[a_{1}^{-},a_{1}^{+}]\times\cdots\times [a_{4}^{-},a_{4}^{+}]\). A necessary condition for the existence of a common diagonal solution is the robust diagonal stability.
Assume that the family (5) is robust diagonally stable, that is, (6) is satisfied and we are looking for conditions of the existence of a common diagonal solution.
The existence of a common \(D=\operatorname{diag}(t_{*},1)\) (\(t_{*}>0\)) means that
or
for all \(a=(a_{1},a_{2},a_{3},a_{4}) \in Q\). The first condition of (7) is satisfied automatically since by (6), \(a_{1}^{+}<0\). The second condition is equivalent to the following:
or
or
or
Consider the function
which corresponds to the first condition in (8). Note that \(f(0) \geq0\) and the coefficient of x in \(f(x)\) is negative by (6). If \(a^{-}_{2}=0\) the solution set of \(f(x)<0\) is an interval \((\alpha,\infty)\). If \(a^{-}_{2} \neq0\) the discriminant is positive by (6). Indeed
In both cases, the solution set of \(f(x)<0\) is a positive interval \((\alpha_{1},\alpha_{2})\). For example, if \(a_{2}^{-} \neq 0\) then
Analogously, there exists an open interval \((\beta_{1},\beta_{2})\) which is the solution set of the second condition in (8) (the discriminant is positive by (6)).
Now we give the main result of this section.
Theorem 2
Let the family (5) be given. There exists a common diagonal solution to the Lyapunov inequalities if and only if (6) is satisfied and the intervals \((\alpha_{1},\alpha_{2})\) and \((\beta_{1},\beta_{2})\) have a nonempty intersection, i.e.
In this case, for every \(t \in(\alpha_{1},\alpha_{2}) \cap(\beta_{1},\beta _{2})\) the matrix \(D=\operatorname{diag}(t,1)\) is a common solution.
Example 1
Consider the family
The family (9) is robust Hurwitz diagonally stable by Proposition 1, since \(a_{1}^{+}=-2<0\), \(a_{4}^{+}=-1<0\), \(a_{1}^{+}a_{4}^{+}-\max\{a_{2}a_{3}\}=(-2)\cdot(-1)-(-4)=6>0\). The inequalities corresponding to (8) are
and \(\alpha_{1}=9-2\sqrt{14}\), \(\alpha_{2}=9+2\sqrt{14}\), \(\beta_{1}=3-\sqrt {5}\), and \(\beta_{2}=3+\sqrt{5}\). Therefore, \((\alpha_{1},\alpha_{2}) \cap (\beta_{1},\beta_{2})=(9-2\sqrt{14},3+\sqrt{5})\). For every \(t \in(9-2\sqrt {14},3+\sqrt{5})\) the matrix \(D=\operatorname{diag}(t,1)\) is a common diagonal solution.
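The intersection in Example 1 can be verified numerically. Since the displayed inequalities are not reproduced above, the sketch below reconstructs the two monic quadratics from their stated roots \(9\pm2\sqrt{14}\) and \(3\pm\sqrt{5}\); this reconstruction is our assumption:

```python
import math

# Monic quadratics reconstructed from the stated roots (our assumption);
# each is negative strictly between its roots.
f1 = lambda x: x**2 - 18*x + 25    # roots 9 +/- 2*sqrt(14)
f2 = lambda x: x**2 - 6*x + 4      # roots 3 +/- sqrt(5)

lo = max(9 - 2*math.sqrt(14), 3 - math.sqrt(5))   # = 9 - 2*sqrt(14)
hi = min(9 + 2*math.sqrt(14), 3 + math.sqrt(5))   # = 3 + sqrt(5)
t = (lo + hi) / 2                                 # any t in the intersection
print(lo < hi, f1(t) < 0, f2(t) < 0)              # True True True
```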
Example 2
Consider the family
The family is robust Hurwitz diagonally stable by Proposition 1, since \(a_{1}^{+}=-1<0\), \(a_{4}^{+}=-2<0\), \(a_{1}^{+}a_{4}^{+}-\max\{ a_{2}a_{3}\}=(-1)\cdot(-2)-0=2>0\). The inequalities corresponding to (8) are \(-8x+1<0\) and \(x^{2}-10x+1<0\) with common solution interval \((1/8,5+\sqrt{24})\); every t from this interval gives a common diagonal solution.
Schur case
Here we give a necessary and sufficient condition for the existence of a common diagonal solution in the Schur case, i.e. the existence of \(D=\operatorname{diag}(\lambda_{*},1)\) with \(\lambda_{*}>0\) such that
for all \(a\in Q=[a_{1}^{-},a_{1}^{+}]\times\cdots\times[a_{4}^{-},a_{4}^{+}]\).
To have a common diagonal solution a family must be robust diagonally stable.
Proposition 2
Let the family (5) be given. The family (5) is robust Schur diagonally stable, i.e. every member is Schur diagonally stable if and only if the following six conditions are satisfied:
for all \((a_{1},a_{2},a_{3},a_{4}) \in Q\).
These conditions can easily be checked through the extremal points of Q. Again, the existence of a common \(D=\operatorname{diag}(\lambda_{*},1)\) (\(\lambda_{*}>0\)) means that
or
for all \((a_{1},a_{2},a_{3},a_{4}) \in Q\). From the robust Schur diagonal stability, it follows that \(|a_{1}|<1\) ([8], p.78). Therefore, the first condition of (11) gives \(\lambda_{*} \cdot \min(1-a_{1}^{2})>\max(a_{3}^{2})\), which in turn gives \(\lambda_{*}>\alpha=(\max a_{3}^{2})/(1-\max(a_{1}^{2}))\). The second condition gives
Consider the function
Denote the two root functions of \(g(x)\) by \(r_{1}(a_{1},a_{2},a_{3},a_{4})\) and \(r_{2}(a_{1},a_{2},a_{3},a_{4})\). The function \(r_{1}\) is continuous on Q, whereas if \(0 \in[a_{2}^{-},a_{2}^{+}]\) the function \(r_{2}\) is improper for \(a_{2}=0\), that is, \(r_{2}(a_{1},0,a_{3},a_{4})=\infty\). The function \(r_{2}\) is continuous except at \(a_{2}=0\) if \(0 \in[a_{2}^{-},a_{2}^{+}]\).
Denote
We state another main result of this section.
Theorem 3
Let the family (5) be given. There exists a common diagonal solution D to the inequality \(A^{T}DA-D<0\) if and only if

(i)
(10) is satisfied,

(ii)
\(\gamma_{1} < \gamma_{2}\),

(iii)
\((\alpha,\infty) \cap(\gamma_{1},\gamma_{2}) \neq \emptyset\).
In this case, for every \(\lambda\in(\alpha,\infty) \cap(\gamma _{1},\gamma_{2})\) the matrix \(D=\operatorname{diag}(\lambda,1)\) is a common solution.
Example 3
Consider the following interval family:
This family is robust Schur diagonally stable by Proposition 2:
where \(Q= [0,\frac{1}{2} ] \times [\frac{1}{3},\frac {1}{2} ] \times [-\frac{1}{10},\frac{1}{10} ] \times [\frac{1}{3},\frac{1}{2} ]\). The maximization of the left root function \(r_{1}(a_{1},a_{2},a_{3},a_{4})\) of \(g(x)\) over Q gives \(\gamma _{1}\approx0.02\), and the minimization of the right root function \(r_{2}(a_{1},a_{2},a_{3},a_{4})\) over Q gives \(\gamma_{2}\approx2.14\). Since \(\alpha=1/75\), for every \(\lambda\in(0.02,2.14)\) the matrix \(D=\operatorname{diag}(\lambda,1)\) is a common diagonal solution.
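A numerical spot-check of Example 3 (ours, not part of the original): sample Q on a grid and test \(A^{T}DA-D<0\) at each sample. This suggests, but does not prove, the claim; the bounds below follow our reading of Q, with the third interval taken as \([-1/10,1/10]\).

```python
import itertools
import numpy as np

lam = 1.0                      # any lambda in (0.02, 2.14); take 1
D = np.diag([lam, 1.0])
# Our reading of the box Q (signs where the print is ambiguous are assumed):
Q = [(0.0, 0.5), (1/3, 0.5), (-0.1, 0.1), (1/3, 0.5)]
grids = [np.linspace(lo, hi, 5) for lo, hi in Q]
ok = True
for a1, a2, a3, a4 in itertools.product(*grids):
    A = np.array([[a1, a2], [a3, a4]])
    M = A.T @ D @ A - D        # Schur-Lyapunov inequality: must be < 0
    ok = ok and bool(np.all(np.linalg.eigvalsh(M) < 0))
print(ok)                      # True
```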
Solution algorithm for third order systems
In this section, for the \(3 \times3\) interval family, we give a necessary and sufficient condition for the existence of a common diagonal solution and the corresponding solution algorithm in the Hurwitz case.
Consider \(3 \times3\) interval family
Without loss of generality, all \(3 \times3\) positive diagonal matrices \(\operatorname{diag}(x_{1},x_{2},x_{3})\) with \(x_{i}>0\) (\(i=1,2,3\)) may be normalized to have the form
with \(t>0\), \(s>0\). Is there \(D=\operatorname{diag}(t,1,s)\) with \(t>0\), \(s>0\) such that
for all \(a_{i} \in[a_{i}^{-},a_{i}^{+}]\) (\(i=1,2,\ldots,9\))?
We write
The matrix inequality (13), i.e. the negative definiteness of \(A^{T}D+DA\) is equivalent to the following:

(i)
\(a_{1}<0\),

(ii)
\((a_{2}t+a_{4})^{2}-4a_{1}a_{5}t<0\),

(iii)
\(d_{0}(t,a_{1},\ldots,a_{9})+d_{1}(t,a_{1},\ldots ,a_{9})s+ d_{2}(t,a_{1},\ldots,a_{9})s^{2}<0\).
The functions \(d_{i}\) (\(i=0,1,2\)) are low order polynomials and can be explicitly evaluated.
(i) is satisfied for all \(a_{1} \in[a_{1}^{-},a_{1}^{+}]\) if and only if \(a_{1}^{+}<0\). The problem of the existence of a common t satisfying (ii) for all \((a_{1},a_{2},a_{4},a_{5})\) is equivalent to the existence of a common diagonal solution for the \(2 \times2\) family \(\bigl [ {\scriptsize\begin{matrix}{} a_{1} & a_{2} \cr a_{4} & a_{5}\end{matrix}} \bigr ]\) and has been investigated in Section 2. There, the whole interval of common t has been calculated. If this interval is empty then there is no common \(D=\operatorname{diag}(t,1,s)\) satisfying (13). Assume that this interval \((\alpha,\beta)\) of common t is nonempty. Then the existence of a common \(D=\operatorname{diag}(t,1,s)\) means that there exist \(t \in(\alpha,\beta)\) and \(s>0\) such that (iii) is satisfied for all \(a_{i} \in[a_{i}^{-},a_{i}^{+}]\) (\(i=1,\ldots,9\)). This problem is a minimax problem. Indeed, denote the left-hand side of (iii) by \(f(t,s,a_{1},\ldots,a_{9})\). Then (iii) is equivalent to the following minimax inequality:
Solving the game problem (14) is difficult in general (see Example 4).
We suggest the following approach to check (14) numerically. This approach is based on the openness property of the solution set of (13) [1]. In other words, the following proposition is true.
Proposition 3
If there exists a common \(D=\operatorname{diag}(t_{*},1,s_{*})\) then there exist intervals \([t_{1},t_{2}]\) and \([s_{1},s_{2}]\) which contain \(t_{*}\) and \(s_{*}\), respectively, such that the matrix \(D=\operatorname{diag}(t,1,s)\) is a common solution for all \(t \in[t_{1},t_{2}]\), \(s \in[s_{1},s_{2}]\).
Due to this proposition, we suggest the following algorithm for a common diagonal solution.
Algorithm 1
Let the interval family (12) be given.

(s1)
Using the results on \(2 \times2\) interval systems from Section 2 calculate the interval \((\alpha,\beta)\) for t.

(s2)
Determine an upper bound s̅ for the variable s from the positive definiteness condition of a suitable submatrix of \((A^{T}D+DA)\).

(s3)
Divide the interval \([\alpha, \beta]\) into k equal parts \([\alpha_{i}, \beta_{i}]\) and the interval \([0, \overline{s}]\) into m equal parts \([s_{j}^{-},s_{j}^{+}]\).

(s4)
On each box
$$[\alpha_{i},\beta_{i}] \times\bigl[s_{j}^{-},s_{j}^{+} \bigr] \times\bigl[a_{1}^{-},a_{1}^{+}\bigr] \times \cdots\times \bigl[a_{9}^{-},a_{9}^{+}\bigr] $$consider the maximization of the polynomial function \(f(t,s,a_{1},\ldots ,a_{9})\). If there exist indices \(i_{*}\) and \(j_{*}\) such that the maximum is negative then stop. The whole interval \([\alpha_{i_{*}},\beta_{i_{*}}] \times[s_{j_{*}}^{-},s_{j_{*}}^{+}]\) defines a family of common diagonal solutions.
As can be seen, the above game problem (14) is reduced to a finite number of maximization problems in which low order multivariable polynomials are maximized over boxes. Instead of the single problem (14) we consider a sequence of solvable problems from step (s4), where low order multivariable polynomial functions are maximized over 11-dimensional boxes \([\alpha_{i},\beta_{i}] \times[s_{j}^{-},s_{j}^{+}] \times[a_{1}^{-},a_{1}^{+}] \times\cdots\times[a_{9}^{-},a_{9}^{+}]\). These optimizations can be carried out by the MAPLE program or by the Bernstein expansion. The Bernstein expansion is an effective method for testing the positivity or negativity of a multivariable polynomial over a box. The following example shows that Algorithm 1 is sufficiently effective.
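The cell-by-cell search of steps (s3)-(s4) can be sketched as follows; here grid sampling stands in for a rigorous maximizer such as the Bernstein expansion, so the result is indicative rather than certified. The function name and toy objective are ours.

```python
import itertools
import numpy as np

def common_boxes(f, t_int, s_bar, box, k=10, m=10, samples=3):
    """Sketch of steps (s3)-(s4): split [alpha, beta] into k cells and
    [0, s_bar] into m cells; accept a (t, s)-cell when the maximum of f
    over a sample grid of cell x parameter box is negative.  Grid sampling
    only estimates the maximum (a rigorous run would use, e.g., the
    Bernstein expansion)."""
    t_edges = np.linspace(t_int[0], t_int[1], k + 1)
    s_edges = np.linspace(0.0, s_bar, m + 1)
    accepted = []
    for i in range(k):
        for j in range(m):
            cell = [(t_edges[i], t_edges[i + 1]),
                    (s_edges[j], s_edges[j + 1])] + list(box)
            grids = [np.linspace(lo, hi, samples) for lo, hi in cell]
            if max(f(*p) for p in itertools.product(*grids)) < 0:
                accepted.append((i, j))
    return accepted

# Toy objective (ours): negative on every cell, so all k*m cells pass.
boxes = common_boxes(lambda t, s, q: q - t - s, (1.0, 2.0), 2.0,
                     [(0.0, 0.5)], k=4, m=4)
print(len(boxes))   # 16
```

Every accepted cell yields a whole rectangle of \((t,s)\) candidates, matching the requirement of an infinite family of solutions noted in the introduction.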
Example 4
Consider the interval family
where \(q_{1} \in[2,3]\), \(q_{2} \in[1,2]\), and \(q_{3} \in[1,2]\). We obtain
Positivity of the \(2 \times2\) leading principal minor gives
Hence \(64t-(q_{1}t+1)^{2}>0\) for all \(t \in(0.0173,6.427)\), \(q_{1} \in[2,3]\).
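The endpoints 0.0173 and 6.427 can be recovered directly: for \(t>0\) the worst case of \(64t-(q_{1}t+1)^{2}\) over \(q_{1}\in[2,3]\) is at \(q_{1}=3\), which gives \(9t^{2}-58t+1<0\):

```python
import math

# Worst case over q1 in [2, 3]:  64t > (3t + 1)^2  <=>  9t^2 - 58t + 1 < 0
disc = 58**2 - 4 * 9 * 1
t_lo = (58 - math.sqrt(disc)) / 18
t_hi = (58 + math.sqrt(disc)) / 18
print(round(t_lo, 4), round(t_hi, 3))   # 0.0173 6.427
```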
The positive definiteness of the submatrix
gives \(80s-(s+q_{2})^{2}>0\) or \(\max_{q_{2}} (s+q_{2})^{2}<80s\) or \((s+2)^{2}<80s\) or \(s^{2}-76s+4<0\), and the upper bound \(\overline{s}=80\) is suitable. The function from (iii), that is, the determinant function is
The corresponding game problem (14) is difficult, since the solution of (14) requires the following steps:

(a)
Fix \((t,s)\).

(b)
Solve parametric maximization of f with respect to \((q_{1},q_{2},q_{3})\). Denote \({\phi(t,s)=\max_{q_{1},q_{2},q_{3}} f}\).

(c)
Solve minimization of \(\phi(t,s)\) with respect to \((t,s)\).
We divide the intervals \([0.0173,6.427]\) and \([0,80]\) into 20 and 200 equal parts, respectively, and solve the corresponding maximization problems. Figure 1 shows the resulting family of rectangles for this example; every \((t,s)\) from each rectangle gives a common solution.
It should be noted that the sufficient condition from [3], Theorem 1, is not satisfied for this example, since for the matrix U from [3], Theorem 1, the maximum real eigenvalue is positive.
Sufficient condition for the general case
In this section, we give a sufficient condition for a common diagonal solution in the general case. This condition is expressed in terms of the center and upper bound matrices of the interval family.
Consider the \(n \times n\) interval family
Define the center and upper bound matrices \(A_{c}\), \(A_{d}\) by
Given \(A=(a_{ij})\), \(B=(b_{ij})\), the symbol \(A \preccurlyeq B\) means that \(a_{ij} \leq b_{ij}\) for all i, j. Also \(|A|= (|a_{ij}| )\).
Theorem 4
Let an interval family (15) be given. Assume that there exists a positive diagonal matrix D such that
Then the matrix D is a common diagonal solution for the family (15).
Proof
Every matrix \(A \in\mathcal{A}\) can be written as
where \(|X|\preccurlyeq A_{p}\). By (17), the inequality \(A^{T}D+DA<0\) for all \(A \in \mathcal{A}\) can be written as
By the known property of \(\lambda_{\max}\) the inequality (18) is satisfied if
which is equivalent to
Let us prove that
Since \(|X^{T}D+DX| \preccurlyeq A^{T}_{p}D+DA_{p}\) for all \(|X|\preccurlyeq A_{p}\), by [18], p.491,
(the last equality follows from the fact that the spectral radius of a nonnegative matrix is an eigenvalue, [18], p.503). On the other hand, for a symmetric matrix C, obviously \(\rho(C) \geq\lambda _{\max}(C)\). Using this fact, from (21) we get
for all \(|X|\preccurlyeq A_{p}\), which proves (20). From (19) and (20) the inequality (16) follows. □
The left-hand side of (16) is a convex function of the entries of the diagonal matrix D. For \(D=\operatorname{diag}(x_{1},x_{2},\ldots ,x_{n})\) denote the left-hand side by \(f(x_{1},x_{2},\ldots,x_{n})\). Then from Theorem 4 it follows that, by minimizing the function \(f(x_{1},x_{2},\ldots,x_{n})\) over the unit rectangle, we can arrive at a common solution.
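Reading (16), as the proof suggests, as \(\lambda_{\max}(A_{c}^{T}D+DA_{c})+\lambda_{\max}(A_{p}^{T}D+DA_{p})<0\), with \(A_{p}\) the radius matrix (the displayed formulas are not reproduced above, so this form and the example matrices below are our assumptions), the function f can be evaluated as follows:

```python
import numpy as np

def f(D, Ac, Ap):
    """Assumed left-hand side of (16):
    lambda_max(Ac^T D + D Ac) + lambda_max(Ap^T D + D Ap)."""
    lmax = lambda M: np.linalg.eigvalsh(M).max()
    return lmax(Ac.T @ D + D @ Ac) + lmax(Ap.T @ D + D @ Ap)

Ac = np.array([[-3.0, 0.5], [0.5, -3.0]])   # illustrative center matrix
Ap = np.array([[0.2, 0.2], [0.2, 0.2]])     # illustrative radius matrix
D = np.diag([1.0, 1.0])
print(f(D, Ac, Ap) < 0)   # True: by Theorem 4, D is a common solution
```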
Example 5
Consider the \(5 \times5\) interval family
The center and upper bound matrices are
respectively.
The minimization of the convex function \(f(x_{1},\ldots,x_{5})\) by the Kelley cutting-plane method, starting with the values \(x_{1}=\cdots=x_{5}=1\), after 5 steps gives the negative value −0.088 for the function f. This value is attained for \(x_{1}=1\), \(x_{2}=0.247\), \(x_{3}=0.161\), \(x_{4}=0.219\), \(x_{5}=0.093\). Therefore, by Theorem 4, the matrix \(D=\operatorname{diag}(1,0.247,0.161,0.219,0.093)\) is a common diagonal solution. Note again that the LMI method cannot give a solution: the number of vertices equals \(2^{25}\).
Conclusion
In this paper, the problem of diagonal stability of interval matrices is considered. This problem is investigated in the framework of the existence of common diagonal Lyapunov functions. For second and third order systems necessary and sufficient conditions for the existence of common diagonal solutions are given. In the general case, a sufficient condition for the existence of a common diagonal solution is given.
References
 1.
Bhattacharyya, SP, Chapellat, H, Keel, LH: Robust Control: The Parametric Approach. Prentice Hall, Upper Saddle River (1995)
 2.
Rohn, J: Positive definiteness and stability of interval matrices. SIAM J. Matrix Anal. Appl. 15, 175-184 (1994)
 3.
Pastravanu, O, Matcovschi, M: Sufficient conditions for Schur and Hurwitz diagonal stability of complex interval matrices. Linear Algebra Appl. 467, 149-173 (2015)
 4.
Arcak, M, Sontag, E: Diagonal stability of a class of cyclic systems and its connection with the secant criterion. Automatica 42, 1531-1537 (2006)
 5.
Cross, GW: Three types of matrix stability. Linear Algebra Appl. 20, 253-263 (1978)
 6.
Deng, M, Iwai, Z, Mizumoto, I: Robust parallel compensator design for output feedback stabilization of plants with structured uncertainty. Syst. Control Lett. 36, 193-198 (1999)
 7.
Johnson, CR: Sufficient condition for D-stability. J. Econ. Theory 9, 53-62 (1974)
 8.
Kaszkurewicz, E, Bhaya, A: Matrix Diagonal Stability in Systems and Computation. Birkhäuser, Boston (2000)
 9.
Ziolko, M: Application of Lyapunov functionals to studying stability of linear hyperbolic systems. IEEE Trans. Autom. Control 35, 1173-1176 (1990)
 10.
Berman, A, King, C, Shorten, R: A characterisation of common diagonal stability over cones. Linear Multilinear Algebra 60, 1117-1123 (2012)
 11.
Büyükköroğlu, T: Common diagonal Lyapunov function for third order linear switched system. J. Comput. Appl. Math. 236, 3647-3653 (2012)
 12.
Khalil, HK: On the existence of positive diagonal P such that \(PA + A^{T} P < 0\). IEEE Trans. Autom. Control 27, 181-184 (1982)
 13.
Mason, O, Shorten, R: On the simultaneous diagonal stability of a pair of positive linear systems. Linear Algebra Appl. 413, 13-23 (2006)
 14.
Oleng, NO, Narendra, KS: On the existence of diagonal solutions to the Lyapunov equation for a third order system. In: Proceedings of the American Control Conference, vol. 3, pp. 2761-2766 (2003)
 15.
Shorten, RN, Narendra, KS: Necessary and sufficient conditions for the existence of a common quadratic Lyapunov function for a finite number of stable second order linear time-invariant systems. Int. J. Adapt. Control Signal Process. 16, 709-728 (2002)
 16.
Büyükköroğlu, T, Esen, Ö, Dzhafarov, V: Common Lyapunov functions for some special classes of stable systems. IEEE Trans. Autom. Control 56, 1963-1967 (2011)
 17.
Mills, W, Mullis, CT, Roberts, RA: Digital filter realizations without overflow oscillations. IEEE Trans. Acoust. Speech Signal Process. 26, 334-338 (1978)
 18.
Horn, RA, Johnson, CR: Matrix Analysis. Cambridge University Press, Cambridge (1985)
Acknowledgements
The authors thank the reviewer for his valuable comments. This work is supported by the Anadolu University Research Fund under Contract 1605F464.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords
 interval matrices
 Lyapunov inequality
 diagonal stability
 common diagonal solution
 minimax problem