
Some new two-sided bounds for determinants of diagonally dominant matrices

Abstract

In this article, we present some new two-sided bounds for the determinant of some diagonally dominant matrices. In particular, the idea of the preconditioning technique is applied to obtain the new bounds.

MS Classification: 65F10; 15A15.

1 Introduction

By $\mathbb{C}^{n\times n}$ ($\mathbb{R}^{n\times n}$) we denote the set of all $n\times n$ complex (real) matrices. A matrix $A=(a_{ij})\in\mathbb{C}^{n\times n}$ is called a Z-matrix if $a_{ij}\le 0$ for any $i\ne j$, and a nonsingular M-matrix if $A$ is a Z-matrix and $A^{-1}$ is nonnegative, i.e., $A^{-1}\ge 0$. The comparison matrix $\langle A\rangle=(\tilde a_{ij})$ of $A$ is defined by

$$\tilde a_{ii}=|a_{ii}|,\qquad \tilde a_{ij}=-|a_{ij}|,\quad i\ne j,\ i,j\in\langle n\rangle,$$

where $\langle n\rangle\equiv\{1,2,\dots,n\}$.

Throughout this article we always assume that $A=D-L-U$, where $D$, $-L$ and $-U$ are the nonsingular diagonal, strictly lower triangular and strictly upper triangular parts of $A$, respectively. It is noted that $\langle A\rangle=|D|-|L|-|U|$, where $|C|=(|c_{ij}|)$ for $C=(c_{ij})$.

Let $B=(b_{ij})\in\mathbb{C}^{n\times m}$ and $\Lambda_i(B)=\sum_{k\ne i}|b_{ik}|$. Then it is easy to see that $\langle A\rangle e=(|a_{11}|-\Lambda_1(A),\dots,|a_{nn}|-\Lambda_n(A))^T$, where $e=(1,\dots,1)^T$ with appropriate dimension. Let $l_i=\sum_{k=1}^{i-1}|a_{ik}|$ and $u_i=\sum_{k=i+1}^{n}|a_{ik}|$. Then $|L|e=(l_1,\dots,l_n)^T$, $|U|e=(u_1,\dots,u_n)^T$, and $\Lambda_i(A)=l_i+u_i$.
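To fix the notation numerically, the following short sketch (Python with NumPy; the helper names are ours and not part of the original article) computes $l_i$, $u_i$, $\Lambda_i(A)$ and the comparison matrix $\langle A\rangle$ for the matrix of Example 2.1 below.

```python
import numpy as np

def row_parts(A):
    """Return (l, u, Lambda): absolute row sums of the strictly lower part,
    the strictly upper part, and their sum Lambda_i(A) = l_i + u_i."""
    B = np.abs(np.asarray(A, dtype=float))
    l = np.tril(B, -1).sum(axis=1)      # l_i = sum_{k < i} |a_ik|
    u = np.triu(B, 1).sum(axis=1)       # u_i = sum_{k > i} |a_ik|
    return l, u, l + u

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal and -|a_ij| off the diagonal."""
    B = np.abs(np.asarray(A, dtype=float))
    return 2.0 * np.diag(np.diag(B)) - B

A = np.array([[1.0, 0.0, -0.6], [-0.8, 1.0, -0.1], [-0.3, -0.4, 1.0]])
print(row_parts(A))         # l = [0, 0.8, 0.7], u = [0.6, 0.1, 0]
print(comparison_matrix(A))
```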

Definition 1.1 Let $A=(a_{ij})\in\mathbb{C}^{n\times n}$. Then $A$ is said to be

  1. a diagonally dominant matrix (d.d.) if $|a_{ii}|\ge\Lambda_i(A)$ for each $i\in\langle n\rangle$;

  2. a strictly diagonally dominant matrix (s.d.d.) if $|a_{ii}|>\Lambda_i(A)$ for each $i\in\langle n\rangle$;

  3. a weakly chained diagonally dominant matrix (c.d.d.) (e.g., see [1, 2]) if $A$ is a d.d. matrix with $\beta(A)\ne\emptyset$, where $\beta(A)=\{j:|a_{jj}|>\sum_{k\ne j}|a_{jk}|\}$, and for every $i\in\langle n\rangle$ with $i\notin\beta(A)$ there exist indices $i_1,\dots,i_k$ in $\langle n\rangle$ with $a_{i_r,i_{r+1}}\ne 0$, $0\le r\le k-1$, where $i_0=i$ and $i_k\in\beta(A)$;

  4. a generalized diagonally dominant matrix (g.d.d.) if there is a positive diagonal matrix $D$ such that $AD$ is an s.d.d. matrix.

It is noted that the comparison matrix of a g.d.d. matrix is a nonsingular M-matrix (e.g., see [1, Lemma 3.2]).
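The chain condition in Definition 1.1(3) is a reachability requirement on the directed graph whose edges are the nonzero off-diagonal entries. The sketch below (Python/NumPy; an illustration under our own naming, not taken from the original article) classifies a matrix accordingly; the breadth-first search marks every row that can reach a strictly dominant row.

```python
import numpy as np
from collections import deque

def dd_type(A, tol=0.0):
    """Classify A as 'sdd', 'wcdd', 'dd' or 'none' following Definition 1.1."""
    B = np.abs(np.asarray(A, dtype=float))
    n = B.shape[0]
    lam = B.sum(axis=1) - np.diag(B)           # Lambda_i(A)
    diag = np.diag(B)
    if np.any(diag < lam - tol):
        return 'none'                          # not even diagonally dominant
    strict = diag > lam + tol                  # beta(A): strictly dominant rows
    if strict.all():
        return 'sdd'
    # chain condition: every row must reach some row of beta(A) along
    # nonzero off-diagonal entries a_{i_r, i_{r+1}} != 0
    reach = strict.copy()
    queue = deque(np.flatnonzero(strict))
    while queue:                               # BFS on the reversed graph
        j = queue.popleft()
        for i in range(n):
            if not reach[i] and i != j and B[i, j] != 0:
                reach[i] = True
                queue.append(i)
    return 'wcdd' if strict.any() and reach.all() else 'dd'
```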

The classical bound for the determinant of an s.d.d. matrix $A$ is Ostrowski's inequality [3], i.e.,

$$\det A\ge\prod_{k=1}^{n}\bigl(|a_{kk}|-\Lambda_k(A)\bigr),$$

which was improved by Price as follows [4]

$$\prod_{k=1}^{n}\bigl(|a_{kk}|-u_k\bigr)\le\det A\le\prod_{k=1}^{n}\bigl(|a_{kk}|+u_k\bigr).$$
(1.1)

The bound (1.1) was further improved by Ostrowski [5] and Yong [6]. In [6] the author obtained the following two-sided bounds for s.d.d. matrices (see [6, Theorem 2.2]):

$$\prod_{k=1}^{n}\left(|a_{kk}|-\max_{k+1\le s\le n}\frac{|a_{sk}|}{|a_{ss}|-u_s}\,u_k\right)\le\det A\le\prod_{k=1}^{n}\left(|a_{kk}|+\max_{k+1\le s\le n}\frac{|a_{sk}|}{|a_{ss}|-u_s}\,u_k\right).$$
(1.2)
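As a numerical aside (a Python/NumPy sketch with our own function names, not part of the original article), the bounds (1.1) and (1.2) are straightforward to evaluate; on the matrix of Example 2.1 the second call reproduces the interval quoted in Remark 2.3.

```python
import numpy as np

def price_bound(A):                     # bound (1.1)
    B = np.abs(np.asarray(A, dtype=float))
    d, u = np.diag(B), np.triu(B, 1).sum(axis=1)
    return np.prod(d - u), np.prod(d + u)

def yong_bound(A):                      # bound (1.2)
    B = np.abs(np.asarray(A, dtype=float))
    n = B.shape[0]
    d, u = np.diag(B), np.triu(B, 1).sum(axis=1)
    lo = hi = 1.0
    for k in range(n):
        # max_{k+1 <= s <= n} |a_sk| / (|a_ss| - u_s); empty for the last row
        m = max((B[s, k] / (d[s] - u[s]) for s in range(k + 1, n)), default=0.0)
        lo *= d[k] - m * u[k]
        hi *= d[k] + m * u[k]
    return lo, hi

A = np.array([[1.0, 0.0, -0.6], [-0.8, 1.0, -0.1], [-0.3, -0.4, 1.0]])
print(yong_bound(A))    # approximately (0.448, 1.5947)
```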

Inequalities for the determinant can be applied, for instance, to estimate the spectrum of a matrix and to determine its nonsingularity, which is useful in numerical analysis. Some numerical examples show that the bound in (1.2) is not optimal. Motivated by this, in this article we aim to give bounds that are sharper than the ones in (1.1) and (1.2). The rest of this article is organized as follows. In Section 2, we use the classical technique to obtain new two-sided bounds; see Theorems 2.5 and 2.5'. In Section 3, we apply the idea of the preconditioning technique to give a new bound for the M-matrix case; see Theorem 3.2. A conclusion is given in the final section.

2 The classical technique

Let $\alpha_1$ and $\alpha_2$ be two subsets of $\langle n\rangle$ such that $\langle n\rangle=\alpha_1\cup\alpha_2$ and $\alpha_1\cap\alpha_2=\emptyset$. Let $A=(a_{ij})\in\mathbb{C}^{n\times n}$. By $A_{ij}=A[\alpha_i|\alpha_j]$ we denote the submatrix of $A$ whose rows are indexed by $\alpha_i$ and whose columns are indexed by $\alpha_j$. For simplicity, we write $A[\alpha_i]$ instead of $A[\alpha_i|\alpha_i]$. If $A[\alpha_1]$ is nonsingular, the Schur complement of $A[\alpha_1]$ in $A$ is denoted by $S_{\alpha_1}$, i.e., $S_{\alpha_1}=A[\alpha_2]-A[\alpha_2|\alpha_1]A[\alpha_1]^{-1}A[\alpha_1|\alpha_2]$. By $A^{(k)}$ we denote $A^{(k)}=A[\alpha^{(k)}]$, where $\alpha^{(k)}=\{k+1,\dots,n\}$.

We define s k (A) as follows:

$$s_n(A)=\Lambda_n(A),\qquad s_k(A)=\sum_{i=1}^{k-1}|a_{ki}|+\sum_{i=k+1}^{n}\frac{|a_{ki}|\,s_i(A)}{|a_{ii}|}=l_k+\sum_{i=k+1}^{n}\frac{|a_{ki}|\,s_i(A)}{|a_{ii}|},\quad k=n-1,\dots,1.$$
(2.1)

Alternatively, the recursion (2.1) can be evaluated via the following lemma, which can be deduced by an argument similar to that in [7].

Lemma 2.1 Let $A=(a_{ij})\in\mathbb{C}^{n\times n}$. Then

$$|D|\bigl(|D|-|U|\bigr)^{-1}|L|\,e=\bigl(s_1(A),\dots,s_n(A)\bigr)^T.$$
(2.2)
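The next sketch (Python/NumPy, illustrative only; the function names are ours) computes $s_k(A)$ both by the backward recursion (2.1) and by the triangular solve suggested by (2.2); the two results agree up to rounding.

```python
import numpy as np

def s_recursive(A):
    """s_k(A) from the backward recursion (2.1)."""
    B = np.abs(np.asarray(A, dtype=float))
    n = B.shape[0]
    s = np.zeros(n)
    s[n - 1] = B[n - 1, :n - 1].sum()                      # s_n = Lambda_n(A)
    for k in range(n - 2, -1, -1):
        s[k] = B[k, :k].sum() + sum(B[k, i] * s[i] / B[i, i] for i in range(k + 1, n))
    return s

def s_by_solve(A):
    """s(A) = |D| (|D| - |U|)^{-1} |L| e, formula (2.2)."""
    B = np.abs(np.asarray(A, dtype=float))
    D, U, L = np.diag(np.diag(B)), np.triu(B, 1), np.tril(B, -1)
    e = np.ones(B.shape[0])
    return D @ np.linalg.solve(D - U, L @ e)

A = np.array([[1.0, 0.0, -0.6], [-0.8, 1.0, -0.1], [-0.3, -0.4, 1.0]])
print(s_recursive(A), s_by_solve(A))    # both give [0.42, 0.87, 0.7]
```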

The following lemma is well-known, e.g., see [1].

Lemma 2.2 Let A be a c.d.d. matrix. Then A is g.d.d., and hence is nonsingular.

Now we partition A into the following block form:

$$A=\begin{pmatrix} a_{11} & x^T\\ y & A^{(1)}\end{pmatrix}.$$
(2.3)

Then it is easy to check that

$$A^{-1}=\begin{pmatrix} S_1^{-1} & -S_1^{-1}x^T\bigl(A^{(1)}\bigr)^{-1}\\[2pt] -S_1^{-1}\bigl(A^{(1)}\bigr)^{-1}y & \bigl(A^{(1)}\bigr)^{-1}+S_1^{-1}\bigl(A^{(1)}\bigr)^{-1}yx^T\bigl(A^{(1)}\bigr)^{-1}\end{pmatrix},$$
(2.4)

where $S_1=a_{11}-x^T\bigl(A^{(1)}\bigr)^{-1}y$.

The following lemma can be found in [8].

Lemma 2.3 Let $A=(a_{ij})$ be a nonsingular d.d. M-matrix, and let $A^{-1}=(a'_{ij})$. Then

$$\frac{1}{a_{11}+s_1(A)}\le a'_{11}\le\frac{1}{a_{11}-s_1(A)}.$$
(2.5)

Lemma 2.4 Let A be a c.d.d. matrix. Then

$$\prod_{k=1}^{n}\bigl[|a_{kk}|-s_1\bigl(A^{(k-1)}\bigr)\bigr]\le\det A\le\prod_{k=1}^{n}\bigl[|a_{kk}|+s_1\bigl(A^{(k-1)}\bigr)\bigr],$$
(2.6)

where we define $A^{(0)}=A$.

Proof. It follows from Lemma 2.2 that $A$ is nonsingular. Let $A$ be partitioned as in (2.3) and let $A^{-1}=(a'_{ij})$. Then we have

$$\det A=\bigl(a_{11}-x^T\bigl(A^{(1)}\bigr)^{-1}y\bigr)\det A^{(1)}.$$
(2.7)

By (2.4) we have

$$a'_{11}=\bigl(a_{11}-x^T\bigl(A^{(1)}\bigr)^{-1}y\bigr)^{-1},$$

which together with (2.7) gives that

$$\det A=\frac{\det A^{(1)}}{a'_{11}}.$$
(2.8)

It follows from (2.5) and (2.8) that

$$\bigl(a_{11}-s_1(A)\bigr)\det A^{(1)}\le\det A\le\bigl(a_{11}+s_1(A)\bigr)\det A^{(1)}.$$
(2.9)

Because $A$ is c.d.d., $\langle A\rangle$ is a nonsingular M-matrix, and so is $\langle A^{(1)}\rangle$, which implies that $A^{(1)}$ is also a c.d.d. matrix (see [1, Theorem 3.3]). Applying induction on $k$ to (2.9), one may deduce the desired inequality (2.6).

Remark 2.1 It is expensive to compute the bound (2.6) directly, because one needs to compute all entries $s_i(A^{(k-1)})$ for each $k=1,\dots,n$. However, we may replace $s_1(A^{(k-1)})$ by a quantity expressed through $s_i(A)$ alone, which leads to the following theorem.
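To make Remark 2.1 concrete, the sketch below (Python/NumPy, with hypothetical helper names of our own) evaluates (2.6) directly by recomputing the whole s-vector of every trailing principal submatrix $A^{(k-1)}$, which is precisely the repeated work the remark refers to.

```python
import numpy as np

def s_vector(M):
    """Backward recursion (2.1) applied to the matrix M."""
    B = np.abs(np.asarray(M, dtype=float))
    m = B.shape[0]
    s = np.zeros(m)
    s[m - 1] = B[m - 1, :m - 1].sum()
    for k in range(m - 2, -1, -1):
        s[k] = B[k, :k].sum() + sum(B[k, i] * s[i] / B[i, i] for i in range(k + 1, m))
    return s

def bound_2_6(A):
    """Two-sided bound (2.6): prod_k [ |a_kk| -/+ s_1(A^{(k-1)}) ]."""
    B = np.abs(np.asarray(A, dtype=float))
    n = B.shape[0]
    lo = hi = 1.0
    for k in range(n):
        s1 = s_vector(B[k:, k:])[0]     # s_1 of the trailing submatrix
        lo *= B[k, k] - s1
        hi *= B[k, k] + s1
    return lo, hi
```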

Theorem 2.5 Let A be a c.d.d. matrix. Then

$$\prod_{k=1}^{n}\left[|a_{kk}|-\sum_{i=k+1}^{n}\frac{|a_{ki}|\,s_i(A)}{|a_{ii}|}+\sum_{i=k+1}^{n}\frac{|a_{ki}|\sum_{s=1}^{k-1}|a_{is}|}{|a_{ii}|}\right]\le\det A\le\prod_{k=1}^{n}\left[|a_{kk}|+\sum_{i=k+1}^{n}\frac{|a_{ki}|\,s_i(A)}{|a_{ii}|}-\sum_{i=k+1}^{n}\frac{|a_{ki}|\sum_{s=1}^{k-1}|a_{is}|}{|a_{ii}|}\right].$$
(2.10)

Proof. By (2.1) we have

$$s_{n-k}\bigl(A^{(k)}\bigr)=\sum_{i=k+1}^{n-1}|a_{ni}|=\Lambda_n(A)-\sum_{s=1}^{k}|a_{ns}|=s_n(A)-\sum_{s=1}^{k}|a_{ns}|,$$

and hence

$$\begin{aligned}
s_{n-k-1}\bigl(A^{(k)}\bigr)&=\sum_{i=k+1}^{n-2}|a_{n-1,i}|+\frac{|a_{n-1,n}|\,s_{n-k}\bigl(A^{(k)}\bigr)}{|a_{nn}|}
=\sum_{i=1}^{n-2}|a_{n-1,i}|+\frac{|a_{n-1,n}|\bigl(s_n(A)-\sum_{i=1}^{k}|a_{ni}|\bigr)}{|a_{nn}|}-\sum_{s=1}^{k}|a_{n-1,s}|\\
&=s_{n-1}(A)-\frac{|a_{n-1,n}|\sum_{i=1}^{k}|a_{ni}|}{|a_{nn}|}-\sum_{s=1}^{k}|a_{n-1,s}|
\le s_{n-1}(A)-\sum_{s=1}^{k}|a_{n-1,s}|,\\
&\ \ \vdots\\
s_{2}\bigl(A^{(k)}\bigr)&\le s_{k+2}(A)-\sum_{s=1}^{k}|a_{k+2,s}|.
\end{aligned}$$

Therefore, we have

$$s_1\bigl(A^{(k)}\bigr)=\sum_{i=k+2}^{n}\frac{|a_{k+1,i}|\,s_{i-k}\bigl(A^{(k)}\bigr)}{|a_{ii}|}\le\sum_{i=k+2}^{n}\frac{|a_{k+1,i}|\bigl(s_i(A)-\sum_{s=1}^{k}|a_{is}|\bigr)}{|a_{ii}|}=\sum_{i=k+2}^{n}\frac{|a_{k+1,i}|\,s_i(A)}{|a_{ii}|}-\sum_{i=k+2}^{n}\frac{|a_{k+1,i}|\sum_{s=1}^{k}|a_{is}|}{|a_{ii}|},\quad k=1,\dots,n-1,$$

which together with (2.6) gives the bound (2.10).
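In contrast to (2.6), the bound (2.10) needs the vector $s(A)$ only once. A direct implementation (Python/NumPy sketch with our own naming, not from the original article) reads as follows; on the matrix of Example 2.1 below it returns approximately $(0.5568,\,1.4768)$.

```python
import numpy as np

def bound_2_10(A):
    """Two-sided bound (2.10), built from the single vector s(A) of (2.1)."""
    B = np.abs(np.asarray(A, dtype=float))
    n = B.shape[0]
    s = np.zeros(n)
    s[n - 1] = B[n - 1, :n - 1].sum()
    for k in range(n - 2, -1, -1):                         # recursion (2.1)
        s[k] = B[k, :k].sum() + sum(B[k, i] * s[i] / B[i, i] for i in range(k + 1, n))
    lo = hi = 1.0
    for k in range(n):
        x = sum(B[k, i] * s[i] / B[i, i] for i in range(k + 1, n))
        y = sum(B[k, i] * B[i, :k].sum() / B[i, i] for i in range(k + 1, n))
        lo *= B[k, k] - x + y
        hi *= B[k, k] + x - y
    return lo, hi

A = np.array([[1.0, 0.0, -0.6], [-0.8, 1.0, -0.1], [-0.3, -0.4, 1.0]])
print(bound_2_10(A))    # approximately (0.5568, 1.4768)
```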

Let $A=(a_{ij})\in\mathbb{C}^{n\times n}$. Then $R_k(A)$ is given by (e.g., see [9] or [1])

$$R_1(A)=\Lambda_1(A),\qquad R_k(A)=\sum_{i=k+1}^{n}|a_{ki}|+\sum_{i=1}^{k-1}\frac{|a_{ki}|\,R_i(A)}{|a_{ii}|},\quad k=2,\dots,n.$$
(2.11)

A matrix A is called a Nekrasov matrix ([9] or [1]) if |a kk | > R k (A) for k n〉. A Nekrasov matrix is a g.d.d. matrix (e.g., see [9]). The bound for the determinant of a Nekrasov matrix is given below (see [10, 11]):

$$|a_{11}|\prod_{k=2}^{n}\left(|a_{kk}|+\frac{|a_{k1}|}{|a_{11}|}u_k-\frac{\sum_{i=1}^{k-1}|a_{ki}|\,R_i(A)}{|a_{kk}|}\right)\le\det A\le|a_{11}|\prod_{k=2}^{n}\left(|a_{kk}|-\frac{|a_{k1}|}{|a_{11}|}u_k+\frac{\sum_{i=1}^{k-1}|a_{ki}|\,R_i(A)}{|a_{kk}|}\right).$$
(2.12)

However, there is a typo in this bound; a counter-example was given in [12]. In the following theorem we obtain an estimate for the determinant of $A$ by using $R_i(A)$; the proof is analogous to that of Theorem 2.5.

Theorem 2.5' Let A be a c.d.d. matrix. Then

$$\prod_{k=1}^{n}\left[|a_{kk}|-\sum_{i=1}^{k-1}\frac{|a_{ki}|\,R_i(A)}{|a_{ii}|}+\sum_{i=1}^{k-1}\frac{|a_{ki}|\sum_{s=k+1}^{n}|a_{is}|}{|a_{ii}|}\right]\le\det A\le\prod_{k=1}^{n}\left[|a_{kk}|+\sum_{i=1}^{k-1}\frac{|a_{ki}|\,R_i(A)}{|a_{ii}|}-\sum_{i=1}^{k-1}\frac{|a_{ki}|\sum_{s=k+1}^{n}|a_{is}|}{|a_{ii}|}\right].$$
(2.13)

Remark 2.2 Let $A=D-L-U$. Then the recursions (2.1) and (2.11) for $s_k(A)$ and $R_k(A)$ can be evaluated by (2.2) and the following formula (see [7]),

$$|D|\bigl(|D|-|L|\bigr)^{-1}|U|\,e=\bigl(R_1(A),\dots,R_n(A)\bigr)^T,$$
(2.14)

respectively. Hence the two bounds (2.10) and (2.13) are based on the different splittings $A=(D-U)-L$ and $A=(D-L)-U$. The following two examples illustrate that neither of these two bounds is uniformly better than the other.
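Analogously, the bound (2.13) only needs the vector $R(A)$ from the forward recursion (2.11) (equivalently, the triangular solve (2.14)). The following Python/NumPy sketch (our own illustration, not part of the original article) reproduces the interval quoted for (2.13) in Example 2.1.

```python
import numpy as np

def bound_2_13(A):
    """Two-sided bound (2.13), built from the vector R(A) of (2.11)."""
    B = np.abs(np.asarray(A, dtype=float))
    n = B.shape[0]
    R = np.zeros(n)
    for k in range(n):                                     # recursion (2.11)
        R[k] = B[k, k + 1:].sum() + sum(B[k, i] * R[i] / B[i, i] for i in range(k))
    lo = hi = 1.0
    for k in range(n):
        x = sum(B[k, i] * R[i] / B[i, i] for i in range(k))
        y = sum(B[k, i] * B[i, k + 1:].sum() / B[i, i] for i in range(k))
        lo *= B[k, k] - x + y
        hi *= B[k, k] + x - y
    return lo, hi

A = np.array([[1.0, 0.0, -0.6], [-0.8, 1.0, -0.1], [-0.3, -0.4, 1.0]])
print(bound_2_13(A))    # approximately (0.588, 1.412)
```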

Example 2.1 Let

$$A=\begin{pmatrix} 1 & 0 & -0.6\\ -0.8 & 1 & -0.1\\ -0.3 & -0.4 & 1\end{pmatrix}.$$

Then A is an s.d.d. matrix. Applying the bounds (2.10) and (2.13) to this matrix yields

$$0.5568\le\det A\le 1.4768$$

and

$$0.588\le\det A\le 1.412,$$

respectively, which shows that the bound in (2.13) is better.

Example 2.2 Let

$$A=\begin{pmatrix} 1 & -0.2 & 0\\ -0.1 & 1 & -0.3\\ -0.3 & -0.1 & 1\end{pmatrix}.$$

Then A is s.d.d. By (2.10) and (2.13), we have

$$0.92732\le\det A\le 1.0753$$

and

$$0.83104\le\det A\le 1.175,$$

respectively. Hence the bound (2.10) is better.

Remark 2.3 It is noted that the bound in (2.10) (or (2.13)) only provides an alternative estimate for the determinant; it does not improve (1.2) in general. However, in Example 2.1 the bound in (2.10) is better. In fact, by (1.2) we obtain

$$0.448\le\det A\le 1.5947.$$

The following example shows that the upper bound in (1.2) can be better than the one in (2.10).

Example 2.3 Let

$$A=\begin{pmatrix} 1 & 0 & 0.6\\ 0.8 & 1 & -0.1\\ -0.5 & -0.4 & 1\end{pmatrix}.$$

Then by (2.10) and (1.2) we have

$$0.4608\le\det A\le 1.6016$$

and

$$0.448\le\det A\le 1.5947,$$

respectively.

3 The preconditioning technique

It is well known that the preconditioning technique plays an increasingly important role in solving linear systems (e.g., see [13]). In this section we improve the bound (1.2) based on the idea of preconditioning.

Without loss of generality, we may assume in this section that all diagonal entries of $A$ are equal to 1; otherwise we consider the matrix $D^{-1}A$, where $D=\mathrm{diag}(a_{11},\dots,a_{nn})$, and then $\det(D^{-1}A)=\det D^{-1}\det A$. Hence, we assume that

$$A=I-L-U,$$

where $L$ and $U$ are strictly lower triangular and strictly upper triangular matrices, respectively.

Let

$$P=I+S=\begin{pmatrix} 1 & -a_{12} & 0 & \cdots & 0\\ 0 & 1 & -a_{23} & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 & -a_{n-1,n}\\ 0 & 0 & \cdots & 0 & 1\end{pmatrix},$$
(3.1)

which was first introduced in [14] for solving linear systems and was further studied by many authors (e.g., see [15–18]). Usually, $P$ is called a preconditioner for the linear system $Ax=b$.

Let B = PA. Then det B = det A and

$$B=I-L-SL-(U-S+SU)=\tilde L-\tilde U,$$

where $\tilde L\equiv I-L-SL$ and $\tilde U\equiv U-S+SU$ are lower triangular and strictly upper triangular matrices, respectively. The $i$th diagonal entry of $B$ is given by

$$b_{ii}=\begin{cases} 1-a_{i,i+1}a_{i+1,i}, & i<n,\\ 1, & i=n.\end{cases}$$
(3.2)
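The preconditioner (3.1) and the product $B=PA$ are easy to form explicitly. The sketch below (Python/NumPy, illustrative only; the function name is ours) builds $P$ for a matrix with unit diagonal and confirms the diagonal formula (3.2) as well as $\det B=\det A$ (since $\det P=1$).

```python
import numpy as np

def precondition(A):
    """Return P = I + S from (3.1) and B = P A for a matrix with unit diagonal."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    P = np.eye(n)
    for i in range(n - 1):
        P[i, i + 1] = -A[i, i + 1]      # superdiagonal of S is -a_{i,i+1}
    return P, P @ A

A = np.array([[1.0, -0.3, 0.0], [-0.3, 1.0, -0.3], [0.0, -0.3, 1.0]])
P, B = precondition(A)
print(np.diag(B))                       # [0.91, 0.91, 1.0], cf. (3.2)
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))   # True: det B = det A
```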

If A is an s.d.d. M-matrix, so is B (see [16]). Let A have the block form (2.3). We partition I + S into the following block form

$$I+S=\begin{pmatrix} 1 & \alpha\\ 0 & I+S^{(1)}\end{pmatrix},$$

where $\alpha=(-a_{12},0,\dots,0)$ is a row vector of dimension $n-1$ and $S^{(1)}\in\mathbb{R}^{(n-1)\times(n-1)}$. A simple calculation yields that

$$B=\begin{pmatrix} b_{11} & \tilde x^T\\ \tilde y & B^{(1)}\end{pmatrix},$$

where

$$\tilde y=\bigl(I+S^{(1)}\bigr)y,\qquad \tilde x^T=x^T+\alpha A^{(1)},\qquad B^{(1)}=\bigl(I+S^{(1)}\bigr)A^{(1)}.$$
(3.3)

Then

$$\det B=\bigl(b_{11}-\tilde x^T\bigl(B^{(1)}\bigr)^{-1}\tilde y\bigr)\det B^{(1)}.$$
(3.4)

It is easy to see that

$$b_{11}-\bigl\|\bigl(B^{(1)}\bigr)^{-1}\tilde y\bigr\|_\infty\|\tilde x\|_1\le b_{11}-\tilde x^T\bigl(B^{(1)}\bigr)^{-1}\tilde y\le b_{11}+\bigl\|\bigl(B^{(1)}\bigr)^{-1}\tilde y\bigr\|_\infty\|\tilde x\|_1.$$
(3.5)

By (3.3) we have

$$\bigl(B^{(1)}\bigr)^{-1}\tilde y=\bigl(A^{(1)}\bigr)^{-1}y,$$

and hence from [19] (also see [6]) it follows that

$$\bigl\|\bigl(B^{(1)}\bigr)^{-1}\tilde y\bigr\|_\infty=\bigl\|\bigl(A^{(1)}\bigr)^{-1}y\bigr\|_\infty\le\max_{2\le s\le n}\frac{|a_{s1}|}{1-u_s}.$$
(3.6)

Notice that $\|\tilde x\|_1=\|x^T+\alpha A^{(1)}\|_1\le u_1-|a_{12}|+|a_{12}|u_2$, which together with (3.4), (3.5), (3.6), and (3.2) gives

$$\left[1-|a_{12}||a_{21}|-\max_{2\le s\le n}\frac{|a_{s1}|}{1-u_s}\bigl(u_1-|a_{12}|+|a_{12}|u_2\bigr)\right]\det B^{(1)}\le\det A\le\left[1-|a_{12}||a_{21}|+\max_{2\le s\le n}\frac{|a_{s1}|}{1-u_s}\bigl(u_1-|a_{12}|+|a_{12}|u_2\bigr)\right]\det B^{(1)}.$$
(3.7)

By (3.3), B(1) is also the preconditioned matrix of A(1) with the preconditioner I + S(1). In this case, B(1) is also an s.d.d. matrix. So we may proceed by induction with (3.7), and then one may easily deduce the following lemma.

Lemma 3.1 Let A be an s.d.d. M-matrix with unit diagonal entries. Then

$$\prod_{k=1}^{n-1}\left[1-|a_{k,k+1}||a_{k+1,k}|-\max_{k+1\le s\le n}\frac{|a_{sk}|}{1-u_s}\bigl(u_k-|a_{k,k+1}|(1-u_{k+1})\bigr)\right]\le\det A\le\prod_{k=1}^{n-1}\left[1-|a_{k,k+1}||a_{k+1,k}|+\max_{k+1\le s\le n}\frac{|a_{sk}|}{1-u_s}\bigl(u_k-|a_{k,k+1}|(1-u_{k+1})\bigr)\right].$$
(3.8)

By the above argument, we may deduce the following result without the assumption that A has unit diagonal entries as in Lemma 3.1.

Theorem 3.2 Let A be an s.d.d. M-matrix. Then

$$a_{nn}\prod_{k=1}^{n-1}\left[a_{kk}-\frac{|a_{k,k+1}||a_{k+1,k}|}{a_{k+1,k+1}}-\max_{k+1\le s\le n}\frac{|a_{sk}|}{a_{ss}-u_s}\left(u_k-\frac{|a_{k,k+1}|}{a_{k+1,k+1}}\bigl(a_{k+1,k+1}-u_{k+1}\bigr)\right)\right]\le\det A\le a_{nn}\prod_{k=1}^{n-1}\left[a_{kk}-\frac{|a_{k,k+1}||a_{k+1,k}|}{a_{k+1,k+1}}+\max_{k+1\le s\le n}\frac{|a_{sk}|}{a_{ss}-u_s}\left(u_k-\frac{|a_{k,k+1}|}{a_{k+1,k+1}}\bigl(a_{k+1,k+1}-u_{k+1}\bigr)\right)\right].$$
(3.9)
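A sketch of the bound (3.9) in Python/NumPy (again an illustration with our own naming, not part of the original article); applied to the matrix of Example 3.1 below, it reproduces the interval reported there.

```python
import numpy as np

def bound_3_9(A):
    """Two-sided bound (3.9) for an s.d.d. M-matrix A."""
    A = np.asarray(A, dtype=float)
    B = np.abs(A)
    n = A.shape[0]
    d = np.diag(A)
    u = np.triu(B, 1).sum(axis=1)
    lo = hi = d[n - 1]                  # trailing factor a_nn
    for k in range(n - 1):
        m = max(B[s, k] / (d[s] - u[s]) for s in range(k + 1, n))
        base = d[k] - B[k, k + 1] * B[k + 1, k] / d[k + 1]
        spread = m * (u[k] - B[k, k + 1] * (d[k + 1] - u[k + 1]) / d[k + 1])
        lo *= base - spread
        hi *= base + spread
    return lo, hi

A = np.array([[1.0, -0.3, 0.0], [-0.3, 1.0, -0.3], [0.0, -0.3, 1.0]])
print(bound_3_9(A))                     # approximately (0.793, 0.8632)
print(np.linalg.det(A))                 # 0.82, inside the interval
```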

Remark 3.1 It is noted that the bound (3.9) is always sharper than the one in (1.2). In fact, since $u_i<a_{ii}$ for each $i$, we have $u_k-\frac{|a_{k,k+1}|}{a_{k+1,k+1}}(a_{k+1,k+1}-u_{k+1})\le u_k$, and hence the upper bound in (3.9) is better than the one in (1.2). For the lower bound, since

$$\max_{k+1\le s\le n}\frac{|a_{sk}|}{a_{ss}-u_s}\cdot\frac{|a_{k,k+1}|}{a_{k+1,k+1}}\bigl(a_{k+1,k+1}-u_{k+1}\bigr)\ge\frac{|a_{k+1,k}|}{a_{k+1,k+1}-u_{k+1}}\cdot\frac{|a_{k,k+1}|}{a_{k+1,k+1}}\bigl(a_{k+1,k+1}-u_{k+1}\bigr)=\frac{|a_{k,k+1}||a_{k+1,k}|}{a_{k+1,k+1}},$$

the lower bound in (3.9) is better than the one in (1.2), which proves our assertion.

Remark 3.2 Neither of the two bounds (3.9) and (2.10) is uniformly better than the other. However, the following example illustrates that the upper bound in (3.9) can be better.

Example 3.1 Let

$$A=\begin{pmatrix} 1 & -0.3 & 0\\ -0.3 & 1 & -0.3\\ 0 & -0.3 & 1\end{pmatrix}.$$

Applying (3.9), (1.2), and (2.10) to estimate the determinant of A, respectively we have

$$0.793\le\det A\le 0.8632,\qquad 0.793\le\det A\le 1.2301$$

and

$$0.80353\le\det A\le 1.2175.$$

4 Conclusion

In Sections 2 and 3, we have provided some two-sided bounds for the determinant of a d.d. matrix via both the classical technique and a preconditioning technique. Although neither of the bounds (1.2) and (2.10) is uniformly better than the other in general, the condition required for (2.10) is weaker than the one required for (1.2).

When the preconditioning technique is applied to estimate the determinant of an s.d.d. M-matrix, we may obtain a tighter bound. Here we only present the bound (3.9) for the special preconditioner (3.1) and prove that it is sharper than the bound (1.2), which shows that a good preconditioning technique is a powerful tool not only for solving linear systems but also for estimates such as determinant bounds.

References

  1. Li W: On the Nekrasov matrix. Linear Algebra Appl 1998, 281: 87–96. 10.1016/S0024-3795(98)10031-9

  2. Shivakumar PN, Chew KH: A sufficient condition for nonvanishing of determinants. Proc Am Math Soc 1974, 43: 63–66. 10.1090/S0002-9939-1974-0332820-0

  3. Ostrowski AM: Sur la determination des bornes inferieures pour une classe des determinants. Bull Sci Math 1937, 61(2):19–32.

  4. Price GB: Bounds for determinants with dominant principal diagonal. Proc Am Math Soc 1951, 2: 497–502. 10.1090/S0002-9939-1951-0041093-2

  5. Ostrowski AM: Note on bounds for determinants with dominant principal diagonal. Proc Am Math Soc 1952, 3: 26–30. 10.1090/S0002-9939-1952-0052380-7

  6. Yong XR: Two properties of diagonally dominant matrices. Numer Linear Algebra Appl 1996, 3(2):173–177. 10.1002/(SICI)1099-1506(199603/04)3:2<173::AID-NLA69>3.0.CO;2-C

  7. Robert F: Blocs-H-matrices et convergence des methodes iteratives classiques par blocs. Linear Algebra Appl 1969, 2: 223–265. 10.1016/0024-3795(69)90029-9

  8. Li W: The infinity norm bound for the inverse of nonsingular diagonal dominant matrices. Appl Math Lett 2008, 21: 258–263. 10.1016/j.aml.2007.03.018

  9. Szulc T: Some remarks on a theorem of Gudkov. Linear Algebra Appl 1995, 225: 221–235.

  10. Bayley DW, Crabtree DE: Bounds for determinants. Linear Algebra Appl 1969, 2: 303–309. 10.1016/0024-3795(69)90032-9

  11. Szulc T: On bound for certain determinants. Z Angew Math Mech 1992, 72: 637–640.

  12. Huang TZ, Xu CX: A note on the bound for the Bayley-Crabtree determinant of Nekrasov matrices. J Xi'an Jiaotong Univ 2002, 36: 1320. (In Chinese)

  13. Axelsson O: Iterative Solution Methods. Cambridge University Press, Cambridge; 1994.

  14. Gunawardena AD, Jain SK, Snyder L: Modified iterative methods for consistent linear systems. Linear Algebra Appl 1991, 154/156: 123–143.

  15. Hadjidimos A, Noutsos D, Tzoumas M: More on modifications and improvements of classical iterative schemes for M-matrices. Linear Algebra Appl 2003, 364: 253–279.

  16. Li W, Sun W: Modified Gauss-Seidel type methods and Jacobi type methods. Linear Algebra Appl 2000, 317: 227–247. 10.1016/S0024-3795(00)00140-3

  17. Li W: The convergence of the modified Gauss-Seidel method for consistent linear systems. J Comput Appl Math 2003, 154: 97–105. 10.1016/S0377-0427(02)00812-9

  18. Sun LY: Some extensions of the improved modified Gauss-Seidel iterative method for H -matrices. Numer Linear Algebra Appl 2006, 13: 869–876. 10.1002/nla.498

  19. Hu JG: Estimates for B-1A . J Comput Math 1984, 2: 122–149.


Acknowledgements

The work was supported in part by National Natural Science Foundation of China (No. 10971075), Research Fund for the Doctoral Program of Higher Education of China (No. 20104407110002) and Guangdong Provincial Natural Science Foundation (No. 9151063101000021), P.R. China.

Author information

Corresponding author

Correspondence to Wen Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Both authors participated in the research reported in this article, and read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
