
  • Research
  • Open Access

Common diagonal solutions to the Lyapunov inequalities for interval systems

Journal of Inequalities and Applications 2016, 2016:255

https://doi.org/10.1186/s13660-016-1194-x

  • Received: 9 June 2016
  • Accepted: 3 October 2016

Abstract

In this paper we consider the problem of existence and evaluation of common diagonal solutions to the Lyapunov inequalities for interval systems. For second order systems, we give necessary and sufficient conditions and exact solutions, that is, complete theoretical solutions. For third order systems, an algorithm for the evaluation of common solutions, in the case of existence, is given. In the general case, a sufficient condition for a common diagonal solution is obtained in terms of the center and upper bound matrices of an interval family.

Keywords

  • interval matrices
  • Lyapunov inequality
  • diagonal stability
  • common diagonal solution
  • minimax problem

1 Introduction

Consider the state equation
$$\dot{x}=Ax, $$
where \(x=x(t) \in\mathbb{R}^{n}\) and \(A=(a_{ij})\) (\(i,j=1,2,\ldots,n\)) is an \(n \times n\) matrix. In many control system applications, each entry \(a_{ij}\) can vary independently within some interval, say \(a^{-}_{ij} \leq a_{ij} \leq a^{+}_{ij}\), where \(a^{-}_{ij}\) and \(a^{+}_{ij}\) are given. Such systems are called interval systems. Denote the resulting interval family by \(\mathcal{A}\), i.e.
$$ \mathcal{A}=\bigl\{ A=(a_{ij}): a^{-}_{ij} \leq a_{ij} \leq a^{+}_{ij}\ (i,j=1,2,\ldots,n) \bigr\} . $$
(1)
Interval matrices have many engineering applications. Due to their natural tie with robust control system analysis and design, several approaches have been developed for the stability analysis of interval matrices (see [1-3] and the references therein).
Consider an \(n \times n\) real matrix \(A=(a_{ij})\). If all eigenvalues of A lie in the open left half plane (open unit disc) then A is said to be Hurwitz (Schur) stable. A necessary and sufficient condition for Hurwitz (Schur) stability is the existence of a symmetric positive definite matrix P (i.e. \(P>0\)) such that
$$ A^{T}P+PA< 0\qquad \bigl(A^{T}PA-P< 0\bigr), $$
(2)
where \(B<0\) means \(-B>0\) and the symbol ‘T’ stands for the transpose. If in (2) the matrix P can be chosen to be positive diagonal then A is called diagonally stable.
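Condition (2) is straightforward to test numerically for a candidate matrix. The following Python sketch (our own illustration; the matrix and candidate are made up, not taken from the paper) checks the Hurwitz case for a diagonal candidate D, using Sylvester's criterion for negative definiteness.

```python
# Check A^T D + D A < 0 for a candidate diagonal D (Hurwitz case).
# Negative definiteness of the symmetric matrix S is tested via
# Sylvester's criterion: (-1)^k det(S_k) > 0 for every leading minor S_k.

def lyap_form(A, d):
    """S = A^T D + D A for D = diag(d); S[i][j] = d_i A[i][j] + d_j A[j][i]."""
    n = len(A)
    return [[d[i] * A[i][j] + d[j] * A[j][i] for j in range(n)]
            for i in range(n)]

def det(M):
    """Determinant by Laplace expansion (fine for small n)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_negative_definite(S):
    n = len(S)
    return all((-1) ** k * det([row[:k] for row in S[:k]]) > 0
               for k in range(1, n + 1))

A = [[-1.0, 2.0], [-0.5, -2.0]]          # an illustrative stable matrix
print(is_negative_definite(lyap_form(A, [1.0, 1.0])))   # → True
```

Here \(D = I\) already works for this particular matrix; in general the diagonal entries must be searched for, which is the subject of the following sections.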

Diagonal stability problems have many applications (see [4-10]). The existence of diagonal type solutions is considered in [3, 5, 8, 11-14] and the references therein.

We are looking for the existence and evaluation of a common diagonal Lyapunov function that guarantees the diagonal stability of interval systems. In other words, we investigate the problem of existence of a diagonal matrix \(D=\operatorname{diag}(x_{1},x_{2},\ldots,x_{n})\) with \(x_{i}>0\) and with the property
$$ A^{T}D+DA< 0 \quad \mbox{for all } A \in\mathcal{A}. $$
(3)
For the case \(n=2\), we solve the common Schur stability problem as well. A necessary condition for the existence of a common diagonal solution is robust diagonal stability, that is, diagonal stability of each matrix from the family \(\mathcal{A}\): for every \(A \in\mathcal{A}\) there exists a positive diagonal D such that \(A^{T}D+DA<0\) (\(A^{T}DA-D<0\) in the Schur case).

Common diagonal stability problems arise, for example, in the study of large-scale dynamic systems (see [10]).

This manuscript addresses the following points:
  1. Full theoretical solutions of the common diagonal matrix problems for the case \(n=2\) (Section 2). Note that for second order systems the existence and evaluation of common nondiagonal matrix solutions to the Lyapunov inequalities have been considered in [15, 16].
  2. A numerical algorithm for the case \(n=3\), where the proposed algorithm gives almost all common diagonal solutions (Section 3).
  3. A sufficient condition for a common diagonal solution in the general case (Section 4).
For a matrix polytope \(\mathcal{A}\), it is well known that the problem of finding a common solution D is equivalent to the following system of linear matrix inequalities (LMIs):
$$A_{i}^{T}D+DA_{i}< 0, $$
where the \(A_{i}\) are the extreme matrices; in lower dimensions this system can be solved numerically by LMI methods. Note that the LMI method is an implicit numerical method which gives only one solution, whereas some problems (see, for example, [10]) require an infinite number of solutions. Note also that for, say, a \(5 \times5\) interval matrix family, the LMI method fails to give an answer due to the high dimension (see Example 5).

2 Full solutions for second order systems

2.1 Hurwitz case

For a single \(2 \times2\) real matrix
$$ A=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} a_{1} & a_{2} \\ a_{3} & a_{4} \end{array}\displaystyle \right ] $$
(4)
an algebraic characterization of diagonal stability is the following.

Theorem 1

[5, 8]

The matrix (4) is Hurwitz diagonally stable if and only if \(a_{1}<0\), \(a_{4}<0\), and \(a_{1}a_{4}-a_{2}a_{3}>0\).

Consider the \(2 \times2\) interval family
$$ \mathcal{A}=\left \{ \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} a_{1} & a_{2} \\ a_{3} & a_{4} \end{array}\displaystyle \right ]: a_{i} \in\bigl[a_{i}^{-},a_{i}^{+} \bigr], i=1,2,3,4 \right \}. $$
(5)
Define the following 4-dimensional box:
$$ Q=\bigl[a_{1}^{-},a_{1}^{+}\bigr]\times\cdots\times \bigl[a_{4}^{-},a_{4}^{+}\bigr]. $$
Without loss of generality, all \(2 \times2\) positive diagonal matrices may be normalized to have the form \(D=\operatorname{diag}(t,1)\) where \(t>0\).

As noted above, a necessary condition for the existence of a common diagonal solution is the robust diagonal stability of (5). From Theorem 1, we have the following.

Proposition 1

The family (5) is robust diagonally stable if and only if
$$ a^{+}_{1}< 0,\qquad a^{+}_{4}< 0 \quad \textit{and} \quad a^{+}_{1}a^{+}_{4}-\max\{a_{2} a_{3} \}>0, $$
(6)
where the maximum is calculated over the extreme points \(a^{-}_{2}\), \(a^{+}_{2}\), \(a^{-}_{3}\), and \(a^{+}_{3}\).
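Condition (6) can be checked mechanically. A small Python sketch of ours (interval bounds are passed as (lower, upper) pairs, an assumption of this illustration), tried on the family (9) that appears in Example 1 below:

```python
# Robust Hurwitz diagonal stability test (6) for the 2x2 interval family (5).
# Each argument is an interval (lower, upper) for the corresponding entry.

def robust_diag_stable(a1, a2, a3, a4):
    # max{a2*a3} is attained at the extreme points of the two intervals
    max_a2a3 = max(x * y for x in a2 for y in a3)
    return a1[1] < 0 and a4[1] < 0 and a1[1] * a4[1] - max_a2a3 > 0

# Family (9): a1 in [-3,-2], a2 in [1,2], a3 in [-5,-4], a4 = -1
print(robust_diag_stable((-3, -2), (1, 2), (-5, -4), (-1, -1)))  # → True
```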
Now we proceed to the necessary and sufficient condition for the existence of a common diagonal solution, i.e. the existence of \(D=\operatorname{diag}(t_{*},1)\) with \(t_{*}>0\) such that
$$A^{T}D+DA< 0 $$
for all \(a=(a_{1},a_{2},a_{3},a_{4})\in Q=[a_{1}^{-},a_{1}^{+}]\times\cdots\times [a_{4}^{-},a_{4}^{+}]\).

Assume that the family (5) is robust diagonally stable, that is, (6) is satisfied and we are looking for conditions of the existence of a common diagonal solution.

The existence of a common \(D=\operatorname{diag}(t_{*},1)\) (\(t_{*}>0\)) means that
$$A^{T}D+DA=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 2a_{1} t_{*} & a_{2} t_{*}+a_{3} \\ a_{2} t_{*}+a_{3} & 2a_{4} \end{array}\displaystyle \right ]< 0 $$
or
$$ 2a_{1} t_{*}< 0,\qquad 4a_{1}a_{4} t_{*}>(a_{2} t_{*}+a_{3})^{2} $$
(7)
for all \(a=(a_{1},a_{2},a_{3},a_{4}) \in Q\). The first condition of (7) is satisfied automatically since by (6), \(a_{1}^{+}<0\). The second condition is equivalent to the following:
$$\min_{(a_{1},a_{4})} (4a_{1}a_{4})t_{*} > \max _{(a_{2},a_{3})} (a_{2} t_{*}+a_{3})^{2} $$
or
$$\bigl(4a_{1}^{+}a_{4}^{+}\bigr)t_{*} > \max \bigl\{ \bigl(a_{2}^{-} t_{*}+a_{3}^{-}\bigr)^{2}, \bigl(a_{2}^{+} t_{*}+a_{3}^{+}\bigr)^{2} \bigr\} $$
or
$$\begin{aligned}& \bigl(4a_{1}^{+}a_{4}^{+}\bigr)t_{*} > \bigl(a_{2}^{-} t_{*}+a_{3}^{-}\bigr)^{2}, \\& \bigl(4a_{1}^{+}a_{4}^{+}\bigr)t_{*} > \bigl(a_{2}^{+} t_{*}+a_{3}^{+}\bigr)^{2} \end{aligned}$$
or
$$ \begin{aligned} &\bigl(a_{2}^{-} \bigr)^{2}t_{*}^{2}+\bigl(2a_{2}^{-}a_{3}^{-}-4a_{1}^{+}a_{4}^{+} \bigr)t_{*}+\bigl(a_{3}^{-}\bigr)^{2}< 0, \\ &\bigl(a_{2}^{+}\bigr)^{2}t_{*}^{2}+ \bigl(2a_{2}^{+}a_{3}^{+}-4a_{1}^{+}a_{4}^{+} \bigr)t_{*}+\bigl(a_{3}^{+}\bigr)^{2}< 0. \end{aligned} $$
(8)
Consider the function
$$f(x)=\bigl(a_{2}^{-}\bigr)^{2}x^{2}+ \bigl(2a_{2}^{-}a_{3}^{-}-4a_{1}^{+}a_{4}^{+} \bigr)x+\bigl(a_{3}^{-}\bigr)^{2}\quad (x \geq0) $$
which corresponds to the first condition in (8). Note that \(f(0) \geq0\) and the coefficient of x in \(f(x)\) is negative by (6). If \(a^{-}_{2}=0\) the solution set of \(f(x)<0\) is an interval \((\alpha,\infty)\). If \(a^{-}_{2} \neq0\) the discriminant is positive by (6). Indeed
$$\bigl(a^{-}_{2}a^{-}_{3}-2a^{+}_{1}a^{+}_{4} \bigr)^{2}-\bigl(a^{-}_{2}a^{-}_{3}\bigr)^{2}=4a^{+}_{1}a^{+}_{4} \bigl(a^{+}_{1}a^{+}_{4}-a^{-}_{2}a^{-}_{3} \bigr)>0. $$
In both cases, the solution set of \(f(x)<0\) is a positive interval \((\alpha_{1},\alpha_{2})\). For example, if \(a_{2}^{-} \neq 0\) then
$$\begin{aligned}& \alpha_{1} = \frac{(2a_{1}^{+}a_{4}^{+}-a_{2}^{-}a_{3}^{-})-\sqrt {(a_{2}^{-}a_{3}^{-}-2a_{1}^{+}a_{4}^{+})^{2}-(a_{2}^{-}a_{3}^{-})^{2}}}{(a_{2}^{-})^{2}}, \\& \alpha_{2} = \frac{(2a_{1}^{+}a_{4}^{+}-a_{2}^{-}a_{3}^{-})+\sqrt {(a_{2}^{-}a_{3}^{-}-2a_{1}^{+}a_{4}^{+})^{2}-(a_{2}^{-}a_{3}^{-})^{2}}}{(a_{2}^{-})^{2}}. \end{aligned}$$

Analogously, there exists an open interval \((\beta_{1},\beta_{2})\) which is the solution set of the second condition in (8) (the discriminant is positive by (6)).

Now we give the main result of this section.

Theorem 2

Let the family (5) be given. There exists a common diagonal solution to the Lyapunov inequalities if and only if (6) is satisfied and the intervals \((\alpha_{1},\alpha_{2})\) and \((\beta_{1},\beta_{2})\) have a nonempty intersection, i.e.
$$ \max \{\alpha_{1}, \beta_{1} \} < \min \{ \alpha_{2}, \beta_{2} \}. $$
In this case, for every \(t \in(\alpha_{1},\alpha_{2}) \cap(\beta_{1},\beta _{2})\) the matrix \(D=\operatorname{diag}(t,1)\) is a common solution.

Example 1

Consider the family
$$ \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} {[-3,-2 ]} & [1,2 ] \\ {[-5,-4 ]} & -1 \end{array}\displaystyle \right ]. $$
(9)
The family (9) is robust Hurwitz diagonally stable by Proposition 1, since \(a_{1}^{+}=-2<0\), \(a_{4}^{+}=-1<0\), \(a_{1}^{+}a_{4}^{+}-\max\{a_{2}a_{3}\}=(-2)\cdot(-1)-(-4)=6>0\). The inequalities corresponding to (8) are
$$x^{2}-18x+25< 0,\qquad 4x^{2}-24x+16< 0, $$
and \(\alpha_{1}=9-2\sqrt{14}\), \(\alpha_{2}=9+2\sqrt{14}\), \(\beta_{1}=3-\sqrt {5}\), and \(\beta_{2}=3+\sqrt{5}\). Therefore, \((\alpha_{1},\alpha_{2}) \cap (\beta_{1},\beta_{2})=(9-2\sqrt{14},3+\sqrt{5})\). For every \(t \in(9-2\sqrt {14},3+\sqrt{5})\) the matrix \(D=\operatorname{diag}(t,1)\) is a common diagonal solution.
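The computation of Theorem 2 is easy to mechanize: solve the two quadratics (8) and intersect their solution intervals. A Python sketch of ours (interval bounds are passed as (lower, upper) pairs; \(a_{2}^{-}, a_{2}^{+}\neq0\) is assumed so both conditions in (8) stay quadratic), reproducing the interval of Example 1:

```python
import math

# Common-solution interval for the 2x2 Hurwitz case (Theorem 2).

def quad_interval(a2, a3, p):           # p = a1^+ * a4^+
    # solution set of (a2^2) t^2 + (2 a2 a3 - 4p) t + a3^2 < 0, cf. (8)
    a, b, c = a2 * a2, 2 * a2 * a3 - 4 * p, a3 * a3
    disc = b * b - 4 * a * c
    if disc <= 0:
        return None                     # no open solution interval
    r = math.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

def common_t_interval(a1, a2, a3, a4):
    p = a1[1] * a4[1]
    lo = quad_interval(a2[0], a3[0], p)     # first condition in (8)
    hi = quad_interval(a2[1], a3[1], p)     # second condition in (8)
    if lo is None or hi is None:
        return None
    left, right = max(lo[0], hi[0]), min(lo[1], hi[1])
    return (left, right) if left < right else None

# Example 1: the intersection should be (9 - 2*sqrt(14), 3 + sqrt(5))
print(common_t_interval((-3, -2), (1, 2), (-5, -4), (-1, -1)))
```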

Example 2

Consider the family
$$\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} [-2,-1 ] & [0,1 ] \\ -1 & [-3,-2 ] \end{array}\displaystyle \right ]. $$
The family is robust Hurwitz diagonally stable by Proposition 1, since \(a_{1}^{+}=-1<0\), \(a_{4}^{+}=-2<0\), \(a_{1}^{+}a_{4}^{+}-\max\{ a_{2}a_{3}\}=(-1)\cdot(-2)-0=2>0\). Inequalities corresponding to (8) are \(-8x+1<0\) and \(x^{2}-10x+1<0\) with common solution interval \((1/8,5+\sqrt{24})\), every t from this interval gives a common diagonal solution.

2.2 Schur case

Here we give a necessary and sufficient condition for the existence of a common diagonal solution in the Schur case, i.e. the existence of \(D=\operatorname{diag}(\lambda_{*},1)\) with \(\lambda_{*}>0\) such that
$$A^{T}DA-D< 0 $$
for all \(a\in Q=[a_{1}^{-},a_{1}^{+}]\times\cdots\times[a_{4}^{-},a_{4}^{+}]\).

To have a common diagonal solution a family must be robust diagonally stable.

Proposition 2

[8, 17]

Let the family (5) be given. The family (5) is robust Schur diagonally stable, i.e. every member is Schur diagonally stable if and only if the following six conditions are satisfied:
$$ \begin{aligned} &1+a_{2}a_{3}-a_{1}a_{4}>0, \\ &1+a_{1}a_{4}-a_{2}a_{3}>0, \\ &1+a_{1}a_{4}-a_{1}-a_{4}-a_{2}a_{3}>0, \\ &1+a_{1}+a_{4}+a_{1}a_{4}-a_{2}a_{3}>0, \\ &1+a_{4}+a_{2}a_{3}-a_{1}-a_{1}a_{4}>0, \\ &1+a_{1}+a_{2}a_{3}-a_{4}-a_{1}a_{4}>0 \end{aligned} $$
(10)
for all \((a_{1},a_{2},a_{3},a_{4}) \in Q\).
These conditions can easily be checked through the extremal points of Q. Again, the existence of a common \(D=\operatorname{diag}(\lambda_{*},1)\) (\(\lambda_{*}>0\)) means that
$$A^{T}DA-D= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \lambda_{*}(a_{1}^{2}-1)+a_{3}^{2} & \lambda_{*}a_{1}a_{2}+a_{3}a_{4} \\ \lambda_{*}a_{1}a_{2}+a_{3}a_{4} & \lambda_{*}a_{2}^{2}+a_{4}^{2}-1 \end{array}\displaystyle \right ]< 0 $$
or
$$ \begin{aligned} &\lambda_{*}\bigl(a_{1}^{2}-1 \bigr)+a_{3}^{2}< 0, \\ &\bigl[\lambda_{*}\bigl(a_{1}^{2}-1\bigr)+a_{3}^{2} \bigr] \bigl[\lambda_{*}a_{2}^{2}+a_{4}^{2}-1 \bigr]-(\lambda_{*}a_{1}a_{2}+a_{3}a_{4})^{2}>0 \end{aligned} $$
(11)
for all \((a_{1},a_{2},a_{3},a_{4}) \in Q\). From the robust Schur diagonal stability, it follows that \(|a_{1}|<1\) ([8], p.78). Therefore, the first condition of (11) gives \(\lambda_{*} \cdot \min(1-a_{1}^{2})>\max(a_{3}^{2})\), which in turn gives \(\lambda_{*}>\alpha=(\max a_{3}^{2})/(1-\max(a_{1}^{2}))\). The second condition gives
$$\bigl(a_{2}^{2}\bigr)\lambda_{*}^{2}- \bigl[a_{3}^{2}a_{2}^{2}+ \bigl(a_{4}^{2}-1\bigr) \bigl(a_{1}^{2}-1 \bigr)- 2a_{1}a_{2}a_{3}a_{4} \bigr] \lambda_{*}+a_{3}^{2}< 0. $$
Consider the function
$$g(x)=\bigl(a_{2}^{2}\bigr)x^{2}- \bigl[a_{2}^{2}a_{3}^{2}+ \bigl(a_{4}^{2}-1\bigr) \bigl(a_{1}^{2}-1 \bigr)- 2a_{1}a_{2}a_{3}a_{4} \bigr]x+a_{3}^{2}\quad (x \geq0). $$
Denote the two root functions of \(g(x)\) by \(r_{1}(a_{1},a_{2},a_{3},a_{4})\) and \(r_{2}(a_{1},a_{2},a_{3},a_{4})\). The function \(r_{1}\) is continuous on Q, whereas \(r_{2}\) is improper at \(a_{2}=0\), that is, \(r_{2}(a_{1},0,a_{3},a_{4})=\infty\); hence if \(0 \in[a_{2}^{-},a_{2}^{+}]\), the function \(r_{2}\) is continuous on Q except at \(a_{2}=0\).
Denote
$$\begin{aligned}& \gamma_{1}=\max_{(a_{1},a_{2},a_{3},a_{4}) \in Q} r_{1}(a_{1},a_{2},a_{3},a_{4}), \\& \gamma _{2}=\min_{(a_{1},a_{2},a_{3},a_{4}) \in Q} r_{2}(a_{1},a_{2},a_{3},a_{4}). \end{aligned}$$
We state another main result of this section.

Theorem 3

Let the family (5) be given. There exists a common diagonal solution D to the inequality \(A^{T}DA-D<0\) if and only if
  (i) (10) is satisfied,
  (ii) \(\gamma_{1} < \gamma_{2}\),
  (iii) \((\alpha,\infty) \cap(\gamma_{1},\gamma_{2}) \neq \emptyset\).
In this case, for every \(\lambda\in(\alpha,\infty) \cap(\gamma _{1},\gamma_{2})\) the matrix \(D=\operatorname{diag}(\lambda,1)\) is a common solution.

Example 3

Consider the following interval family:
$$\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} {[0,\frac{1}{2} ]} & [-\frac{1}{3},\frac{1}{2} ] \\ {[-\frac{1}{10},\frac{1}{10} ]} & [\frac{1}{3},\frac {1}{2} ] \end{array}\displaystyle \right ]. $$
This family is robust Schur diagonally stable by Proposition 2:
$$\begin{aligned}& \min_{a\in Q}(1+a_{2}a_{3}-a_{1}a_{4})=7/10, \\& \min_{a\in Q}(1+a_{1}a_{4}-a_{2}a_{3})=19/20, \\& \min_{a\in Q}(1+a_{1}a_{4}-a_{1}-a_{4}-a_{2}a_{3})=1/5, \\& \min_{a\in Q}(1+a_{1}+a_{4}+a_{1}a_{4}-a_{2}a_{3})=77/60, \\& \min_{a\in Q}(1+a_{4}+a_{2}a_{3}-a_{1}-a_{1}a_{4})=37/60, \\& \min_{a\in Q}(1+a_{1}+a_{2}a_{3}-a_{4}-a_{1}a_{4})=9/20, \end{aligned}$$
where \(Q= [0,\frac{1}{2} ] \times [-\frac{1}{3},\frac {1}{2} ] \times [-\frac{1}{10},\frac{1}{10} ] \times [\frac{1}{3},\frac{1}{2} ]\). The maximization of the left root function \(r_{1}(a_{1},a_{2},a_{3},a_{4})\) of \(g(x)\) over Q gives \(\gamma _{1}\approx0.02\), and the minimization of the right root function \(r_{2}(a_{1},a_{2},a_{3},a_{4})\) over Q gives \(\gamma_{2}\approx2.14\). Since \(\alpha=1/75\), for every \(\lambda\in(0.02,2.14)\) the matrix \(D=\operatorname{diag}(\lambda,1)\) is a common diagonal solution.
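As an independent spot check of ours (not part of the paper's argument), one can take \(\lambda=1\), which lies in \((0.02, 2.14)\), and verify condition (11) for \(D=\operatorname{diag}(1,1)\) on a grid of sample points of Q; this is a sanity check, not a proof (the proof is Theorem 3):

```python
# Spot check for Example 3: verify A^T D A - D < 0 with lambda = 1
# at grid points of Q, using condition (11) directly.

def schur_form_neg_def(a1, a2, a3, a4, lam):
    s11 = lam * (a1 * a1 - 1) + a3 * a3
    s12 = lam * a1 * a2 + a3 * a4
    s22 = lam * a2 * a2 + a4 * a4 - 1
    return s11 < 0 and s11 * s22 - s12 * s12 > 0     # condition (11)

def grid(lo, hi, k=5):
    return [lo + (hi - lo) * i / (k - 1) for i in range(k)]

ok = all(schur_form_neg_def(a1, a2, a3, a4, 1.0)
         for a1 in grid(0, 0.5)
         for a2 in grid(-1 / 3, 0.5)
         for a3 in grid(-0.1, 0.1)
         for a4 in grid(1 / 3, 0.5))
print(ok)   # → True
```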

3 Solution algorithm for third order systems

In this section, for the \(3 \times3\) interval family, we give a necessary and sufficient condition for the existence of a common diagonal solution and a corresponding solution algorithm in the Hurwitz case.

Consider the \(3 \times3\) interval family
$$ \mathcal{A}= \left \{ A= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} a_{1} & a_{2} & a_{3}\\ a_{4} & a_{5} & a_{6}\\ a_{7} & a_{8} & a_{9} \end{array}\displaystyle \right ] : a_{i} \in\bigl[a_{i}^{-},a_{i}^{+} \bigr]\ (i=1,2,\ldots,9) \right \}. $$
(12)
Without loss of generality, all \(3 \times3\) positive diagonal matrices \(\operatorname{diag}(x_{1},x_{2},x_{3})\) with \(x_{i}>0\) (\(i=1,2,3\)) may be normalized to have the form
$$D=\operatorname{diag}(t,1,s)= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} t & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & s \end{array}\displaystyle \right ] $$
with \(t>0\), \(s>0\). Is there \(D=\operatorname{diag}(t,1,s)\) with \(t>0\), \(s>0\) such that
$$ A^{T}D+DA< 0 $$
(13)
for all \(a_{i} \in[a_{i}^{-},a_{i}^{+}]\) (\(i=1,2,\ldots,9\))?
We write
$$A^{T}D+DA= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 2ta_{1} & ta_{2}+a_{4} & ta_{3}+sa_{7}\\ ta_{2}+a_{4} & 2a_{5} & sa_{8}+a_{6}\\ ta_{3}+sa_{7} & sa_{8}+a_{6} & 2sa_{9} \end{array}\displaystyle \right ]. $$
The matrix inequality (13), i.e. the negative definiteness of \(A^{T}D+DA\) is equivalent to the following:
  (i) \(a_{1}<0\),
  (ii) \((a_{2}t+a_{4})^{2}-4a_{1}a_{5}t<0\),
  (iii) \(d_{0}(t,a_{1},\ldots,a_{9})+d_{1}(t,a_{1},\ldots ,a_{9})s+ d_{2}(t,a_{1},\ldots,a_{9})s^{2}<0\).
The functions \(d_{i}\) (\(i=0,1,2\)) are low-order polynomials and can be explicitly evaluated.
(i) is satisfied for all \(a_{1} \in[a_{1}^{-},a_{1}^{+}]\) if and only if \(a_{1}^{+}<0\). The problem of existence of a common t satisfying (ii) for all \((a_{1},a_{2},a_{4},a_{5})\) is equivalent to the existence of a common diagonal solution for \(2 \times2\) family \(\bigl [ {\scriptsize\begin{matrix}{} a_{1} & a_{2} \cr a_{4} & a_{5}\end{matrix}} \bigr ]\) and has been investigated in Section 2. There, the whole interval of common t has been calculated. If this interval is empty then there is no common \(D=\operatorname{diag}(t,1,s)\) satisfying (13). Assume that this interval \((\alpha,\beta)\) of common t is nonempty. Then the existence of a common \(D=\operatorname{diag}(t,1,s)\) means that there exist \(t \in(\alpha,\beta)\) and \(s>0\) such that (iii) is satisfied for all \(a_{i} \in[a_{i}^{-},a_{i}^{+}]\) (\(i=1,\ldots,9\)). This problem is a minimax problem. Indeed, denote the left-hand side of (iii) by \(f(t,s,a_{1},\ldots,a_{9})\). Then (iii) is equivalent to the following minimax inequality:
$$ \inf_{t \in(\alpha, \beta), s>0 } \max_{(a_{1},\ldots,a_{9})} f(t,s,a_{1},\ldots,a_{9})< 0. $$
(14)
Solving the game problem (14) is difficult in general (see Example 4).

We suggest the following approach to check (14) numerically. This approach is based on the openness property of the solution set of (13) [1]. In other words, the following proposition is true.

Proposition 3

If there exists a common \(D=\operatorname{diag}(t_{*},1,s_{*})\) then there exist intervals \([t_{1},t_{2}]\) and \([s_{1},s_{2}]\) which contain \(t_{*}\) and \(s_{*}\), respectively, such that the matrix \(D=\operatorname{diag}(t,1,s)\) is a common solution for all \(t \in[t_{1},t_{2}]\), \(s \in[s_{1},s_{2}]\).

Due to this proposition, we suggest the following algorithm for a common diagonal solution.

Algorithm 1

Let the interval family (12) be given.
  (s1) Using the results on \(2 \times2\) interval systems from Section 2, calculate the interval \((\alpha,\beta)\) for t.
  (s2) Determine an upper bound \(\overline{s}\) for the variable s from the positive definiteness condition of a suitable submatrix of \(-(A^{T}D+DA)\).
  (s3) Divide the interval \([\alpha, \beta]\) into k equal parts \([\alpha_{i}, \beta_{i}]\) and the interval \([0, \overline{s}]\) into m equal parts \([s_{j}^{-},s_{j}^{+}]\).
  (s4) On each box
$$[\alpha_{i},\beta_{i}] \times\bigl[s_{j}^{-},s_{j}^{+} \bigr] \times\bigl[a_{1}^{-},a_{1}^{+}\bigr] \times \cdots\times \bigl[a_{9}^{-},a_{9}^{+}\bigr] $$
consider the maximization of the polynomial function \(f(t,s,a_{1},\ldots ,a_{9})\). If there exist indices \(i_{*}\) and \(j_{*}\) such that the maximum is negative then stop: the whole box \([\alpha_{i_{*}},\beta_{i_{*}}] \times[s_{j_{*}}^{-},s_{j_{*}}^{+}]\) defines a family of common diagonal solutions.

As can be seen, the above game problem (14) is reduced to a finite number of maximization problems in which low-order multivariable polynomials are maximized over boxes. Instead of the single problem (14) we consider a sequence of solvable problems from step (s4), where low-order multivariable polynomial functions are maximized over 11-dimensional boxes \([\alpha_{i},\beta_{i}] \times[s_{j}^{-},s_{j}^{+}] \times[a_{1}^{-},a_{1}^{+}] \times\cdots\times[a_{9}^{-},a_{9}^{+}]\). These optimizations can be carried out with MAPLE or by the Bernstein expansion, an effective method for testing the positivity or negativity of a multivariable polynomial over a box. The following example shows that Algorithm 1 is effective.

Example 4

Consider the interval family
$$\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -4 & q_{1} & 1 \\ 1 & -4 & q_{2} \\ q_{3} & 1 & -5 \end{array}\displaystyle \right ], $$
where \(q_{1} \in[2,3]\), \(q_{2} \in[1,2]\), and \(q_{3} \in[1,2]\). We obtain
$$A^{T}D+DA= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -8t & q_{1}t+1 & t+q_{3}s\\ q_{1}t+1 & -8 & s+q_{2}\\ t+q_{3}s & s+q_{2} & -10s \end{array}\displaystyle \right ]. $$
Positivity of the \(2 \times2\) leading principal minor gives
$$\begin{aligned}& 64t-(q_{1}t+1)^{2}>0\quad \Rightarrow\quad 64t > (q_{1}t+1)^{2}, \\& {\max_{q_{1}\in[2,3]}} (q_{1}t+1)^{2}=(3t+1)^{2}< 64t, \\& 9t^{2}-58t+1< 0\quad \Rightarrow\quad t \in(0.0173,6.427). \end{aligned}$$
Hence \(64t-(q_{1}t+1)^{2}>0\) for all \(t \in(0.0173,6.427)\), \(q_{1} \in[2,3]\).
The positive definiteness of the submatrix
$$\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 8 & -(s+q_{2}) \\ -(s+q_{2}) & 10s \end{array}\displaystyle \right ] $$
gives \(80s-(s+q_{2})^{2}>0\) or \(\max_{q_{2}} (s+q_{2})^{2}<80s\) or \((s+2)^{2}<80s\) or \(s^{2}-76s+4<0\) and the upper bound \(\overline{s}=80\) is suitable. The function from (iii), that is, the determinant function is
$$\begin{aligned} f(t,s,q_{1},q_{2},q_{3}) =&2s^{2}tq_{1}q_{3}+10st^{2}q_{1}^{2}+2stq_{1}q_{2}q_{3}+8s^{2}q_{3}^{2}+2st^{2}q_{1} \\ &{}+2t^{2}q_{1}q_{2}+8s^{2}t+2s^{2}q_{3}+20stq_{1}+16stq_{2}+16stq_{3} \\ &{}+2sq_{2}q_{3}+8tq_{2}^{2}-638st+8t^{2}+2tq_{2}+10s. \end{aligned}$$
The corresponding game problem (14) is difficult, since the solution of (14) requires the following steps:
  (a) Fix \((t,s)\).
  (b) Solve the parametric maximization of f with respect to \((q_{1},q_{2},q_{3})\); denote \({\phi(t,s)=\max_{q_{1},q_{2},q_{3}} f}\).
  (c) Minimize \(\phi(t,s)\) with respect to \((t,s)\).
We divide the intervals \([0.0173,6.427]\) and \([0,80]\) into 20 and 200 equal parts, respectively, and solve the corresponding maximization problems. Figure 1 shows the resulting family of rectangles; each \((t,s)\) from any of these rectangles gives a common solution.
Figure 1: Each \((t,s)\) from each rectangle gives a common diagonal solution \(D=\operatorname{diag}(t,1,s)\).

It should be noted that the sufficient condition from [3], Theorem 1, is not satisfied for this example, since for the matrix U from [3], Theorem 1, the maximum real eigenvalue is positive.
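For this particular family, step (s4) can be carried out with elementary code: for fixed \(t,s>0\) the determinant function f is convex in each \(q_{i}\) separately (the coefficients of \(q_{1}^{2}\), \(q_{2}^{2}\), \(q_{3}^{2}\) in f are \(10st^{2}\), \(8t\), \(8s^{2} \geq0\)), so its maximum over the q-box is attained at a vertex. A minimal Python sketch of ours (not the authors' implementation):

```python
from itertools import product

# Step (s4) for the family of Example 4: maximize f -- the determinant of
# A^T D + D A, which must be negative by (iii) -- over the q-box by
# evaluating it at the 8 vertices (valid here by coordinatewise convexity).

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def max_f_over_vertices(t, s):
    worst = float("-inf")
    for q1, q2, q3 in product([2, 3], [1, 2], [1, 2]):
        M = [[-8 * t, q1 * t + 1, t + q3 * s],
             [q1 * t + 1, -8, s + q2],
             [t + q3 * s, s + q2, -10 * s]]
        worst = max(worst, det3(M))
    return worst

print(max_f_over_vertices(1.0, 1.0))    # → -264.0
```

At \((t,s)=(1,1)\) the vertex maximum is \(-264<0\), and conditions (i)-(ii) also hold there (\(64t-(3t+1)^{2}=48>0\) at \(t=1\)), so \(D=\operatorname{diag}(1,1,1)\) is a common diagonal solution, consistent with Figure 1.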

4 Sufficient condition for the general case

In this section, we give a sufficient condition for a common diagonal solution in the general case. This condition is expressed in terms of the center and upper bound matrices of the interval family.

Consider the \(n \times n\) interval family
$$ \mathcal{A}=\bigl\{ A=(a_{ij}): a^{-}_{ij} \leq a_{ij} \leq a^{+}_{ij}\bigr\} . $$
(15)
Define the center and upper bound matrices \(A_{c}\) and \(A_{p}\) by
$$A_{c}= \biggl(\frac{a^{-}_{ij}+a^{+}_{ij}}{2} \biggr),\qquad A_{p}= \biggl(\frac {a^{+}_{ij}-a^{-}_{ij}}{2} \biggr). $$
Given \(A=(a_{ij})\) and \(B=(b_{ij})\), the symbol \(A \preccurlyeq B\) means that \(a_{ij} \leq b_{ij}\) for all i, j. Also \(|A|= (|a_{ij}| )\).

Theorem 4

Let an interval family (15) be given. Assume that there exists a positive diagonal matrix D such that
$$ \lambda_{\max} \bigl(A^{T}_{c}D+DA_{c} \bigr)+\lambda_{\max} \bigl(A^{T}_{p}D+DA_{p} \bigr)< 0. $$
(16)
Then the matrix D is a common diagonal solution for the family (15).

Proof

Every matrix \(A \in\mathcal{A}\) can be written as
$$ A=A_{c}+X, $$
(17)
where \(|X|\preccurlyeq A_{p}\). By (17), the inequality \(A^{T}D+DA<0\) for all \(A \in \mathcal{A}\) can be written as
$$ \lambda_{\max} \bigl(A^{T}_{c}D+DA_{c}+X^{T}D+DX \bigr)< 0 \quad \mbox{for all } |X|\preccurlyeq A_{p}. $$
(18)
By the subadditivity of \(\lambda_{\max}\) on symmetric matrices (\(\lambda_{\max}(B+C) \leq\lambda_{\max}(B)+\lambda_{\max}(C)\)), the inequality (18) is satisfied if
$$\lambda_{\max} \bigl(A^{T}_{c}D+DA_{c} \bigr)+\lambda_{\max} \bigl(X^{T}D+DX \bigr)< 0\quad \mbox{for all } |X|\preccurlyeq A_{p}, $$
which is equivalent to
$$ \max_{|X|\preccurlyeq A_{p}}\lambda_{\max} \bigl(X^{T}D+DX \bigr)< -\lambda _{\max} \bigl(A^{T}_{c}D+DA_{c} \bigr). $$
(19)
Let us prove that
$$ \max_{|X|\preccurlyeq A_{p}}\lambda_{\max} \bigl(X^{T}D+DX \bigr)=\lambda _{\max} \bigl(A^{T}_{p}D+DA_{p} \bigr). $$
(20)
Since \(|X^{T}D+DX| \preccurlyeq A^{T}_{p}D+DA_{p}\) for all \(|X|\preccurlyeq A_{p}\), by [18], p.491,
$$ \rho \bigl(X^{T}D+DX \bigr) \leq\rho \bigl(A^{T}_{p}D+DA_{p} \bigr)=\lambda _{\max} \bigl(A^{T}_{p}D+DA_{p} \bigr) $$
(21)
(the last equality follows from the fact that the spectral radius of nonnegative matrices is an eigenvalue, [18], p.503). On the other hand for a symmetric matrix C, obviously \(\rho(C) \geq\lambda _{\max}(C)\). Using this fact, from (21) we get
$$ \lambda_{\max} \bigl(X^{T}D+DX \bigr) \leq \lambda_{\max} \bigl(A^{T}_{p}D+DA_{p} \bigr) $$
(22)
for all \(|X|\preccurlyeq A_{p}\), which proves (20). By (20), condition (19) coincides with the inequality (16), which completes the proof. □

The left-hand side of (16) is a convex function of the entries of the diagonal matrix D. For \(D=\operatorname{diag}(x_{1},x_{2},\ldots ,x_{n})\), denote the left-hand side by \(f(x_{1},x_{2},\ldots,x_{n})\). Then, by Theorem 4, minimizing the function \(f(x_{1},x_{2},\ldots,x_{n})\) over the unit box, we can arrive at a common solution.
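A small numerical illustration of condition (16), on a made-up \(2\times2\) family of ours (not from the paper), using the closed-form \(\lambda_{\max}\) of a symmetric \(2\times2\) matrix:

```python
# Toy check of (16): A_c = diag(-2, -2), A_p with all entries 0.1, D = I.
# lambda_max of a symmetric 2x2 matrix is (tr + sqrt(tr^2 - 4 det)) / 2.

def sym2_lmax(S):
    tr = S[0][0] + S[1][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return (tr + (tr * tr - 4 * det) ** 0.5) / 2

def lyap2(A, d):                         # A^T D + D A, D = diag(d)
    return [[d[i] * A[i][j] + d[j] * A[j][i] for j in range(2)]
            for i in range(2)]

Ac = [[-2.0, 0.0], [0.0, -2.0]]
Ap = [[0.1, 0.1], [0.1, 0.1]]
d = [1.0, 1.0]
val = sym2_lmax(lyap2(Ac, d)) + sym2_lmax(lyap2(Ap, d))
print(val)   # -4 + 0.4 = -3.6 < 0, so D = I is a common solution here
```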

Example 5

Consider the \(5 \times5\) interval family
$$\mathcal{A}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {[-33,-31 ]} & [-3,3 ] & [-12,-10 ] & {[-1,3 ]} & [-9,-7 ]\\ {[-47,-45 ]} & [-34,-30 ] & [-12,-10 ] & {[-6,-4 ]} & [-2,2 ]\\ {[-25,-23 ]} & [-3,-1 ] & [-59,-57 ] & {[0,4 ]} & [-6,-4 ]\\ {[-38,-36 ]} & [-9,-5 ] & [-1,1 ] & {[-36,-34 ]} & [-13,-11 ]\\ {[-17,-15 ]} & [-1,1 ] & [-23,-19 ] & {[-18,-16 ]} & [-55,-53 ] \end{array}\displaystyle \right ]. $$
The center and upper bound matrices are
$$A_{c}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} -32 & 0 & -11 & 1 & -8\\ -46 & -32 & -11 & -5 & 0\\ -24 & -2 & -58 & 2 & -5\\ -37 & -7 & 0 & -35 & -12\\ -16 & 0 & -21 & -17 & -54 \end{array}\displaystyle \right ]\quad \mbox{and}\quad A_{p}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1 & 3 & 1 & 2 & 1\\ 1 & 2 & 1 & 1 & 2\\ 1 & 1 & 1 & 2 & 1\\ 1 & 2 & 1 & 1 & 1\\ 1 & 1 & 2 & 1 & 1 \end{array}\displaystyle \right ], $$
respectively.

Minimization of the convex function \(f(x_{1},\ldots,x_{5})\) by the Kelley cutting-plane method, starting from \(x_{1}=\cdots=x_{5}=1\), gives after 5 steps the negative value −0.088, attained at \(x_{1}=1\), \(x_{2}=0.247\), \(x_{3}=0.161\), \(x_{4}=0.219\), \(x_{5}=0.093\). Therefore, by Theorem 4, the matrix \(D=\operatorname{diag}(1,0.247,0.161,0.219,0.093)\) is a common diagonal solution. Note again that the LMI method cannot give a solution here: the number of vertex matrices equals \(2^{25}\).
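The reported solution can be re-checked numerically. The sketch below (our own verification code, not the authors') estimates \(\lambda_{\max}\) of each symmetric matrix by power iteration on a Gershgorin-shifted copy; the Rayleigh quotient is a lower bound for \(\lambda_{\max}\), converging to it as the iteration proceeds:

```python
# Re-check condition (16) for Example 5 with the reported D.
# S + cI with c a Gershgorin bound is positive semidefinite, so its
# dominant eigenvalue is lambda_max(S) + c, reachable by power iteration.

def lyap_sym(A, d):
    n = len(A)
    return [[d[i] * A[i][j] + d[j] * A[j][i] for j in range(n)]
            for i in range(n)]

def lmax_sym(S, iters=2000):
    n = len(S)
    c = max(sum(abs(x) for x in row) for row in S)     # Gershgorin bound
    T = [[S[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    ray = sum(v[i] * sum(T[i][j] * v[j] for j in range(n)) for i in range(n))
    return ray / sum(x * x for x in v) - c             # Rayleigh quotient

Ac = [[-32, 0, -11, 1, -8], [-46, -32, -11, -5, 0], [-24, -2, -58, 2, -5],
      [-37, -7, 0, -35, -12], [-16, 0, -21, -17, -54]]
Ap = [[1, 3, 1, 2, 1], [1, 2, 1, 1, 2], [1, 1, 1, 2, 1],
      [1, 2, 1, 1, 1], [1, 1, 2, 1, 1]]
d = [1, 0.247, 0.161, 0.219, 0.093]
total = lmax_sym(lyap_sym(Ac, d)) + lmax_sym(lyap_sym(Ap, d))
print(total)   # negative (about -0.09 per the paper), so (16) holds
```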

5 Conclusion

In this paper, the problem of diagonal stability of interval matrices is considered. This problem is investigated in the framework of the existence of common diagonal Lyapunov functions. For second and third order systems necessary and sufficient conditions for the existence of common diagonal solutions are given. In the general case, a sufficient condition for the existence of a common diagonal solution is given.

Declarations

Acknowledgements

The authors thank the reviewer for his valuable comments. This work is supported by the Anadolu University Research Fund under Contract 1605F464.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, Bilecik Seyh Edebali University, Bilecik, Turkey
(2)
Department of Mathematics, Anadolu University, Eskisehir, Turkey

References

  1. Bhattacharyya, SP, Chapellat, H, Keel, LH: Robust Control: The Parametric Approach. Prentice Hall, Upper Saddle River (1995)
  2. Rohn, J: Positive definiteness and stability of interval matrices. SIAM J. Matrix Anal. Appl. 15, 175-184 (1994)
  3. Pastravanu, O, Matcovschi, M: Sufficient conditions for Schur and Hurwitz diagonal stability of complex interval matrices. Linear Algebra Appl. 467, 149-173 (2015)
  4. Arcat, M, Sontag, E: Diagonal stability of a class of cyclic systems and its connection with the secant criterion. Automatica 42, 1531-1537 (2006)
  5. Cross, GW: Three types of matrix stability. Linear Algebra Appl. 20, 253-263 (1978)
  6. Deng, M, Iwai, Z, Mizumoto, I: Robust parallel compensator design for output feedback stabilization of plants with structured uncertainty. Syst. Control Lett. 36, 193-198 (1999)
  7. Johnson, CR: Sufficient condition for D-stability. J. Econ. Theory 9, 53-62 (1974)
  8. Kaszkurewicz, E, Bhaya, A: Matrix Diagonal Stability in Systems and Computation. Birkhäuser, Boston (2000)
  9. Ziolko, M: Application of Lyapunov functionals to studying stability of linear hyperbolic systems. IEEE Trans. Autom. Control 35, 1173-1176 (1990)
  10. Berman, A, King, C, Shorten, R: A characterisation of common diagonal stability over cones. Linear Multilinear Algebra 60, 1117-1123 (2012)
  11. Büyükköroğlu, T: Common diagonal Lyapunov function for third order linear switched system. J. Comput. Appl. Math. 236, 3647-3653 (2012)
  12. Khalil, HK: On the existence of positive diagonal P such that \(PA + A^{T} P < 0\). IEEE Trans. Autom. Control 27, 181-184 (1982)
  13. Mason, O, Shorten, R: On the simultaneous diagonal stability of a pair of positive linear systems. Linear Algebra Appl. 413, 13-23 (2006)
  14. Oleng, NO, Narendra, KS: On the existence of diagonal solutions to the Lyapunov equation for a third order system. In: Proceedings of the American Control Conference, vol. 3, pp. 2761-2766 (2003)
  15. Shorten, RN, Narendra, KS: Necessary and sufficient conditions for the existence of a common quadratic Lyapunov function for a finite number of stable second order linear time-invariant systems. Int. J. Adapt. Control Signal Process. 16, 709-728 (2002)
  16. Büyükköroğlu, T, Esen, Ö, Dzhafarov, V: Common Lyapunov functions for some special classes of stable systems. IEEE Trans. Autom. Control 56, 1963-1967 (2011)
  17. Mills, W, Mullis, CT, Roberts, RA: Digital filter realizations without overflow oscillations. IEEE Trans. Acoust. Speech Signal Process. 26, 334-338 (1978)
  18. Horn, RA, Johnson, CR: Matrix Analysis. Cambridge University Press, Cambridge (1985)

Copyright

© Yıldız et al. 2016
