# Some new two-sided bounds for determinants of diagonally dominant matrices

## Abstract

In this article, we present some new two-sided bounds for the determinant of some diagonally dominant matrices. In particular, the idea of the preconditioning technique is applied to obtain the new bounds.

MS Classification: 65F10; 15A15.

## 1 Introduction

By ${\mathcal{C}}^{n×n}\left({ℛ}^{n×n}\right)$ we denote the set of all n × n complex (real) matrices. A matrix $A=\left({a}_{ij}\right)\in {\mathcal{C}}^{n×n}$ is called a Z-matrix if ${a}_{ij}\le 0$ for all $i\ne j$, and a nonsingular M-matrix if A is a Z-matrix whose inverse is nonnegative, i.e., ${A}^{-1}\ge 0$. The comparison matrix 〈A〉 = (ã<sub>ij</sub>) of A is defined by

${\stackrel{˜}{a}}_{ii}=|{a}_{ii}|,{\stackrel{˜}{a}}_{ij}=-|{a}_{ij}|,i\ne j,i,j\in 〈n〉,$

where 〈n〉 ≡ {1, 2,..., n}.

Throughout this article, we always assume that A = D - L - U, where D, -L, and -U are the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively, with D nonsingular. Note that 〈A〉 = |D| - |L| - |U|, where |C| = (|c<sub>ij</sub>|) for C = (c<sub>ij</sub>).

Let $B=\left({b}_{ij}\right)\in {\mathcal{C}}^{n×m}$ and ${\Lambda }_{i}\left(B\right)={\sum }_{k\ne i}\left|{b}_{ik}\right|$. Then it is easy to see that $〈A〉e={\left(\left|{a}_{11}\right|-{\Lambda }_{1}\left(A\right),...,\left|{a}_{nn}\right|-{\Lambda }_{n}\left(A\right)\right)}^{T}$, where $e={\left(1,...,1\right)}^{T}$ of appropriate dimension. Let ${l}_{i}={\sum }_{k=1}^{i-1}\left|{a}_{ik}\right|$ and ${u}_{i}={\sum }_{k=i+1}^{n}\left|{a}_{ik}\right|$. Then $\left|L\right|e={\left({l}_{1},...,{l}_{n}\right)}^{T}$, $\left|U\right|e={\left({u}_{1},...,{u}_{n}\right)}^{T}$, and ${\Lambda }_{i}\left(A\right)={l}_{i}+{u}_{i}$.
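These row quantities are straightforward to compute; the following minimal Python sketch (the helper name `row_sums` and the test matrix are ours, for illustration only) evaluates them directly from the definitions:

```python
# Compute l_i, u_i and Lambda_i(A) = l_i + u_i for each row of A
# (0-based indices in code, 1-based in the text).
def row_sums(A):
    n = len(A)
    l = [sum(abs(A[i][k]) for k in range(i)) for i in range(n)]
    u = [sum(abs(A[i][k]) for k in range(i + 1, n)) for i in range(n)]
    return l, u, [l[i] + u[i] for i in range(n)]

A = [[4, -1, -2],
     [-1, 5, -1],
     [0, -2, 3]]
l, u, Lam = row_sums(A)  # l = [0, 1, 2], u = [3, 1, 0], Lam = [3, 2, 2]
```

Here A is diagonally dominant, since |a<sub>ii</sub>| ≥ Λ<sub>i</sub>(A) holds in every row.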

Definition 1.1 Let $A=\left({a}_{ij}\right)\in {\mathcal{C}}^{n×n}$. Then A is said to be

1. (1)

a diagonally dominant matrix (d.d.), if $\left|{a}_{ii}\right|\ge {\Lambda }_{i}\left(A\right)$ for each $i\in 〈n〉$;

2. (2)

a strictly diagonally dominant matrix (s.d.d.), if $\left|{a}_{ii}\right|>{\Lambda }_{i}\left(A\right)$ for each $i\in 〈n〉$;

3. (3)

a weakly chained diagonally dominant matrix (c.d.d.) (e.g., see [1, 2]), if A is a d.d. matrix with $\beta \left(A\right)\ne \varnothing$, where $\beta \left(A\right)=\left\{j:\left|{a}_{jj}\right|>{\sum }_{k\ne j}\left|{a}_{jk}\right|\right\}$, and for each $i\in 〈n〉$ with $i\notin \beta \left(A\right)$ there exist indices ${i}_{1},...,{i}_{k}$ in 〈n〉 with ${a}_{{i}_{r},{i}_{r+1}}\ne 0$ for $0\le r\le k-1$, where ${i}_{0}=i$ and ${i}_{k}\in \beta \left(A\right)$;

4. (4)

a generalized diagonally dominant matrix (g.d.d.), if there is a positive diagonal matrix D such that AD is an s.d.d. matrix.

It is noted that the comparison matrix of a g.d.d. matrix is a nonsingular M-matrix (e.g., see [, Lemma 3.2]).

The classical bound for the determinant of an s.d.d. matrix A is Ostrowski's inequality, i.e.,

$\left|\text{det}\phantom{\rule{0.3em}{0ex}}A\right|\ge \prod _{k=1}^{n}\left(\left|{a}_{kk}\right|-{\Lambda }_{k}\left(A\right)\right),$

which was improved by Price as follows 

$\prod _{k=1}^{n}\left(\left|{a}_{kk}\right|-{u}_{k}\right)\le \left|\text{det}\phantom{\rule{2.77695pt}{0ex}}A\right|\le \prod _{k=1}^{n}\left(\left|{a}_{kk}\right|+{u}_{k}\right).$
(1.1)

The bound (1.1) was further improved by Ostrowski and Yong. There, the author obtained the following two-sided bounds for s.d.d. matrices (see [, Theorem 2.2]):

$\prod _{k=1}^{n}\left[\left|{a}_{kk}\right|-\left(\underset{k+1\le s\le n}{\text{max}}\frac{\left|{a}_{sk}\right|}{\left|{a}_{ss}\right|-{u}_{s}}\right){u}_{k}\right]\le \left|\text{det}\phantom{\rule{2.77695pt}{0ex}}A\right|\le \prod _{k=1}^{n}\left[\left|{a}_{kk}\right|+\left(\underset{k+1\le s\le n}{\text{max}}\frac{\left|{a}_{sk}\right|}{\left|{a}_{ss}\right|-{u}_{s}}\right){u}_{k}\right].$
(1.2)
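Both (1.1) and (1.2) are inexpensive to evaluate. The following Python sketch (function names are ours, written for this note) computes both bounds for a small s.d.d. test matrix and compares them with the exact determinant:

```python
# Price's bound (1.1) and the refined bound (1.2) for an s.d.d. matrix,
# compared with the exact determinant (3x3 case for simplicity).
def price_bounds(A):                       # bound (1.1)
    n = len(A)
    lo = hi = 1.0
    for k in range(n):
        u_k = sum(abs(A[k][j]) for j in range(k + 1, n))
        lo *= abs(A[k][k]) - u_k
        hi *= abs(A[k][k]) + u_k
    return lo, hi

def refined_bounds(A):                     # bound (1.2)
    n = len(A)
    u = [sum(abs(A[i][j]) for j in range(i + 1, n)) for i in range(n)]
    lo = hi = 1.0
    for k in range(n):
        m = max((abs(A[s][k]) / (abs(A[s][s]) - u[s])
                 for s in range(k + 1, n)), default=0.0)
        lo *= abs(A[k][k]) - m * u[k]
        hi *= abs(A[k][k]) + m * u[k]
    return lo, hi

def det3(A):                               # cofactor expansion, 3x3 only
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

A = [[1, 0, -0.6], [-0.8, 1, -0.1], [-0.3, -0.4, 1]]
lo1, hi1 = price_bounds(A)      # roughly [0.36, 1.76]
lo2, hi2 = refined_bounds(A)    # roughly [0.448, 1.5947], a tighter interval
```

Both intervals contain |det A| = 0.588 for this matrix, and (1.2) is visibly tighter than (1.1).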

Inequalities for the determinant can be applied to estimate the spectrum of a matrix, to determine its nonsingularity, etc., and are therefore useful in numerical analysis. Numerical examples show that the bound in (1.2) is not optimal. Motivated by this, in this article we give some bounds sharper than the ones in (1.1) and (1.2). The rest of this article is organized as follows. In Section 2, we use the classical technique to obtain new two-sided bounds; see Theorems 2.5 and 2.5'. In Section 3, we apply the idea of the preconditioning technique to give a new bound for the M-matrix case; see Theorem 3.2. A conclusion is given in the final section.

## 2 The classical technique

Let α<sub>1</sub> and α<sub>2</sub> be two subsets of 〈n〉 such that 〈n〉 = α<sub>1</sub> ∪ α<sub>2</sub> and ${\alpha }_{1}\bigcap {\alpha }_{2}=\varnothing$. Let $A=\left({a}_{ij}\right)\in {\mathcal{C}}^{n×n}$. By A<sub>ij</sub> = A[α<sub>i</sub>|α<sub>j</sub>] we denote the submatrix of A whose rows are indexed by α<sub>i</sub> and whose columns are indexed by α<sub>j</sub>. For simplicity, we write A[α<sub>i</sub>] instead of A[α<sub>i</sub>|α<sub>i</sub>]. If A[α<sub>1</sub>] is nonsingular, the Schur complement of A[α<sub>1</sub>] in A is denoted by ${S}_{{\alpha }_{1}}$, i.e., ${S}_{{\alpha }_{1}}=A\left[{\alpha }_{2}\right]-A\left[{\alpha }_{2}|{\alpha }_{1}\right]A{\left[{\alpha }_{1}\right]}^{-1}A\left[{\alpha }_{1}|{\alpha }_{2}\right]$. By A<sub>(k)</sub> we denote A<sub>(k)</sub> = A[α<sup>(k)</sup>], where α<sup>(k)</sup> = {k + 1,..., n}.

We define s k (A) as follows:

$\begin{array}{ll}\hfill {s}_{n}\left(A\right)& ={\Lambda }_{n}\left(A\right),\phantom{\rule{2em}{0ex}}\\ \hfill {s}_{k}\left(A\right)& =\sum _{i=1}^{k-1}\left|{a}_{ki}\right|+\sum _{i=k+1}^{n}\left|{a}_{ki}\right|\frac{{s}_{i}\left(A\right)}{\left|{a}_{ii}\right|}\phantom{\rule{2em}{0ex}}\\ ={l}_{k}+\sum _{i=k+1}^{n}\left|{a}_{ki}\right|\frac{{s}_{i}\left(A\right)}{\left|{a}_{ii}\right|},\phantom{\rule{2.77695pt}{0ex}}k=n-1,...,1.\phantom{\rule{2em}{0ex}}\end{array}$
(2.1)

Alternatively, the recursion (2.1) can be evaluated by the following lemma, which can be proved in the same way as the corresponding results in .

Lemma 2.1 Let $A=\left({a}_{ij}\right)\in {\mathcal{C}}^{n×n}$. Then

$\left|D\right|{\left(\left|D\right|-\left|U\right|\right)}^{-1}\left|L\right|e={\left({s}_{1}\left(A\right),...,{s}_{n}\left(A\right)\right)}^{T}.$
(2.2)
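The recursion (2.1) and the closed form (2.2) can be cross-checked numerically. In the sketch below (ours), (2.2) is evaluated by back substitution on the upper-triangular system (|D| - |U|)z = |L|e, followed by s = |D|z:

```python
# s_k(A) from the recursion (2.1), cross-checked against the closed form
# (2.2): solve (|D| - |U|) z = |L| e by back substitution, then s = |D| z.
def s_recursive(A):
    n = len(A)
    s = [0.0] * n
    s[n - 1] = sum(abs(A[n - 1][k]) for k in range(n - 1))  # s_n = Lambda_n
    for k in range(n - 2, -1, -1):
        s[k] = (sum(abs(A[k][i]) for i in range(k))          # l_k
                + sum(abs(A[k][i]) * s[i] / abs(A[i][i])
                      for i in range(k + 1, n)))
    return s

def s_matrix_form(A):
    n = len(A)
    b = [sum(abs(A[i][k]) for k in range(i)) for i in range(n)]  # |L| e
    z = [0.0] * n
    for i in range(n - 1, -1, -1):                               # back subst.
        z[i] = (b[i] + sum(abs(A[i][j]) * z[j]
                           for j in range(i + 1, n))) / abs(A[i][i])
    return [abs(A[i][i]) * z[i] for i in range(n)]

A = [[1, 0, -0.6], [-0.8, 1, -0.1], [-0.3, -0.4, 1]]
# Both routes give s(A) = [0.42, 0.87, 0.7] for this matrix.
```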

The following lemma is well-known, e.g., see .

Lemma 2.2 Let A be a c.d.d. matrix. Then A is g.d.d., and hence is nonsingular.

Now we partition A into the following block form:

$A=\left(\begin{array}{cc}\hfill {a}_{11}\hfill & \hfill {x}^{T}\hfill \\ \hfill y\hfill & \hfill {A}_{\left(1\right)}\hfill \end{array}\right).$
(2.3)

Then it is easy to check that

${A}^{-1}=\left(\begin{array}{cc}{S}_{1}^{-1}& -{S}_{1}^{-1}{x}^{T}{A}_{\left(1\right)}^{-1}\\ -{S}_{1}^{-1}{A}_{\left(1\right)}^{-1}y& {A}_{\left(1\right)}^{-1}+{S}_{1}^{-1}{A}_{\left(1\right)}^{-1}y{x}^{T}{A}_{\left(1\right)}^{-1}\end{array}\right),$
(2.4)

where ${S}_{1}={a}_{11}-{x}^{T}{A}_{\left(1\right)}^{-1}y$.

The following lemma can be found in .

Lemma 2.3 Let A = (a ij ) be a nonsingular d.d. M-matrix, and let ${A}^{-1}=\left({a}_{ij}^{\prime }\right)$. Then

$\frac{1}{\left|{a}_{11}\right|+{s}_{1}\left(A\right)}\le \left|{{a}^{\prime }}_{11}\right|\le \frac{1}{\left|{a}_{11}\right|-{s}_{1}\left(A\right)}.$
(2.5)

Lemma 2.4 Let A be a c.d.d. matrix. Then

$\prod _{k=1}^{n}\left[\left|{a}_{kk}\right|-{s}_{1}\left({A}_{\left(k-1\right)}\right)\right]\le \left|\text{det}A\right|\le \prod _{k=1}^{n}\left[\left|{a}_{kk}\right|+{s}_{1}\left({A}_{\left(k-1\right)}\right)\right],$
(2.6)

where we define A(0) = A.

Proof. It follows from Lemma 2.2 that A is nonsingular. Let A be as in (2.3) and ${A}^{-1}=\left({a}_{ij}^{\prime }\right)$. Then we have

$\left|\text{det}A\right|=\left|\left({a}_{11}-{x}^{T}{A}_{\left(1\right)}^{-1}y\right)\text{det}{A}_{\left(1\right)}\right|.$
(2.7)

By (2.4) we have

${a}_{11}^{\prime }={\left({a}_{11}-{x}^{T}{A}_{\left(1\right)}^{-1}y\right)}^{-1},$

which together with (2.7) gives that

$\left|\text{det}A\right|=\frac{\left|\text{det}{A}_{\left(1\right)}\right|}{\left|{{a}^{\prime }}_{11}\right|}.$
(2.8)

It follows from (2.5) and (2.8) that

$\left(\left|{a}_{11}\right|-{s}_{1}\left(A\right)\right)\left|\text{det}{A}_{\left(1\right)}\right|\le \left|\text{det}A\right|\le \left(\left|{a}_{11}\right|+{s}_{1}\left(A\right)\right)\left|\text{det}{A}_{\left(1\right)}\right|.$
(2.9)

Because A is c.d.d., 〈A〉 is a nonsingular M-matrix, and so is 〈A<sub>(1)</sub>〉, which implies that A<sub>(1)</sub> is also a c.d.d. matrix (see [1, Theorem 3.3]). Applying induction on k to (2.9), one deduces the desired inequality (2.6).

Remark 2.1 The bound (2.6) is expensive to compute because one needs all s<sub>i</sub>(A<sub>(k-1)</sub>), i = n,...,k, for each k = 1,...,n. However, s<sub>1</sub>(A<sub>(k-1)</sub>) can be estimated in terms of the quantities s<sub>i</sub>(A) of the full matrix, which leads to the following theorem.
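For small matrices the cost is nevertheless acceptable; a Python sketch (ours) that evaluates (2.6) by running the recursion (2.1) on each trailing submatrix:

```python
# Evaluate (2.6) directly: for each k, run the recursion (2.1) on the
# trailing submatrix A_(k-1) to get s_1(A_(k-1)).
def s1_of(A):
    n = len(A)
    s = [0.0] * n
    s[n - 1] = sum(abs(A[n - 1][k]) for k in range(n - 1))
    for k in range(n - 2, -1, -1):
        s[k] = (sum(abs(A[k][i]) for i in range(k))
                + sum(abs(A[k][i]) * s[i] / abs(A[i][i])
                      for i in range(k + 1, n)))
    return s[0]

def bounds_2_6(A):
    n = len(A)
    lo = hi = 1.0
    for k in range(n):
        t = s1_of([row[k:] for row in A[k:]])  # s_1 of A_(k-1) (0-based: k)
        lo *= abs(A[k][k]) - t
        hi *= abs(A[k][k]) + t
    return lo, hi

A = [[1, 0, -0.6], [-0.8, 1, -0.1], [-0.3, -0.4, 1]]
lo, hi = bounds_2_6(A)    # about 0.5568 <= |det A| <= 1.4768 here
```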

Theorem 2.5 Let A be a c.d.d. matrix. Then

$\begin{array}{l}\phantom{\rule{1em}{0ex}}\prod _{k=1}^{n}\left(\left|{a}_{kk}\right|-\sum _{i=k+1}^{n}\left|{a}_{ki}\right|\frac{{s}_{i}\left(A\right)}{\left|{a}_{ii}\right|}+\sum _{i=k+1}^{n}\left|{a}_{ki}\right|\frac{{\sum }_{s=1}^{k-1}\left|{a}_{is}\right|}{\left|{a}_{ii}\right|}\right)\phantom{\rule{2em}{0ex}}\\ \le \left|\text{det}A\right|\le \prod _{k=1}^{n}\left(\left|{a}_{kk}\right|+\sum _{i=k+1}^{n}\left|{a}_{ki}\right|\frac{{s}_{i}\left(A\right)}{\left|{a}_{ii}\right|}-\sum _{i=k+1}^{n}\left|{a}_{ki}\right|\frac{{\sum }_{s=1}^{k-1}\left|{a}_{is}\right|}{\left|{a}_{ii}\right|}\right).\phantom{\rule{2em}{0ex}}\end{array}$
(2.10)

Proof. By (2.1) we have

${s}_{n-k}\left({A}_{\left(k\right)}\right)=\sum _{i=k+1}^{n-1}\left|{a}_{ni}\right|={\Lambda }_{n}\left(A\right)-\sum _{i=1}^{k}\left|{a}_{ni}\right|={s}_{n}\left(A\right)-\sum _{i=1}^{k}\left|{a}_{ni}\right|,$

and hence

$\begin{array}{ll}{s}_{n-k-1}\left({A}_{\left(k\right)}\right)&=\sum _{i=k+1}^{n-2}\left|{a}_{n-1,i}\right|+\left|{a}_{n-1,n}\right|\frac{{s}_{n-k}\left({A}_{\left(k\right)}\right)}{\left|{a}_{nn}\right|}\\ &=\sum _{i=1}^{n-2}\left|{a}_{n-1,i}\right|+\left|{a}_{n-1,n}\right|\frac{{s}_{n}\left(A\right)-{\sum }_{i=1}^{k}\left|{a}_{ni}\right|}{\left|{a}_{nn}\right|}-\sum _{s=1}^{k}\left|{a}_{n-1,s}\right|\\ &={s}_{n-1}\left(A\right)-\left|{a}_{n-1,n}\right|\frac{{\sum }_{i=1}^{k}\left|{a}_{ni}\right|}{\left|{a}_{nn}\right|}-\sum _{s=1}^{k}\left|{a}_{n-1,s}\right|\\ &\le {s}_{n-1}\left(A\right)-\sum _{s=1}^{k}\left|{a}_{n-1,s}\right|,\\ &\phantom{\rule{1em}{0ex}}\vdots \\ {s}_{2}\left({A}_{\left(k\right)}\right)&\le {s}_{k+2}\left(A\right)-\sum _{s=1}^{k}\left|{a}_{k+2,s}\right|.\end{array}$

Therefore, we have

$\begin{array}{ll}\hfill {s}_{1}\left({A}_{\left(k\right)}\right)& =\sum _{i=k+2}^{n}\left|{a}_{k+1,i}\right|\frac{{s}_{i-k}\left({A}_{\left(k\right)}\right)}{\left|{a}_{ii}\right|}\phantom{\rule{2em}{0ex}}\\ \le \sum _{i=k+2}^{n}\left|{a}_{k+1,i}\right|\frac{{s}_{i}\left(A\right)-{\sum }_{s=1}^{k}\left|{a}_{is}\right|}{\left|{a}_{ii}\right|}\phantom{\rule{2em}{0ex}}\\ =\sum _{i=k+2}^{n}\left|{a}_{k+1,i}\right|\frac{{s}_{i}\left(A\right)}{\left|{a}_{ii}\right|}-\sum _{i=k+2}^{n}\left|{a}_{k+1,i}\right|\frac{{\sum }_{s=1}^{k}\left|{a}_{is}\right|}{\left|{a}_{ii}\right|},\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}k=1,\dots ,n-1,\phantom{\rule{2em}{0ex}}\end{array}$

which together with (2.6) gives the bound (2.10).
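A Python sketch (ours) of the bound (2.10); applied to the matrix of Example 2.1 below, it reproduces the stated values 0.5568 and 1.4768:

```python
# The bound (2.10): only the quantities s_i(A) of the full matrix are needed.
def bounds_2_10(A):
    n = len(A)
    s = [0.0] * n
    s[n - 1] = sum(abs(A[n - 1][k]) for k in range(n - 1))
    for k in range(n - 2, -1, -1):
        s[k] = (sum(abs(A[k][i]) for i in range(k))
                + sum(abs(A[k][i]) * s[i] / abs(A[i][i])
                      for i in range(k + 1, n)))
    lo = hi = 1.0
    for k in range(n):
        t1 = sum(abs(A[k][i]) * s[i] / abs(A[i][i]) for i in range(k + 1, n))
        t2 = sum(abs(A[k][i]) * sum(abs(A[i][r]) for r in range(k))
                 / abs(A[i][i]) for i in range(k + 1, n))
        lo *= abs(A[k][k]) - t1 + t2
        hi *= abs(A[k][k]) + t1 - t2
    return lo, hi

A = [[1, 0, -0.6], [-0.8, 1, -0.1], [-0.3, -0.4, 1]]
lo, hi = bounds_2_10(A)   # reproduces Example 2.1: [0.5568, 1.4768]
```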

Let $A=\left({a}_{ij}\right)\in {\mathcal{C}}^{n×n}$. Then the quantities R<sub>k</sub>(A) are defined by (e.g., see  or )

$\begin{array}{ll}{R}_{1}\left(A\right)&={\Lambda }_{1}\left(A\right),\\ {R}_{k}\left(A\right)&=\sum _{i=k+1}^{n}\left|{a}_{ki}\right|+\sum _{i=1}^{k-1}\left|{a}_{ki}\right|\frac{{R}_{i}\left(A\right)}{\left|{a}_{ii}\right|},\phantom{\rule{1em}{0ex}}k=2,\dots ,n.\end{array}$
(2.11)
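A Python sketch (ours) of this recursion, with the second sum taken over i = 1, ..., k - 1, matching the Nekrasov condition:

```python
# The quantities R_k(A), computed top-down: R_1 = Lambda_1, and for k >= 2
# R_k uses the previously computed R_1, ..., R_{k-1}.
def R_values(A):
    n = len(A)
    R = [0.0] * n
    R[0] = sum(abs(A[0][k]) for k in range(1, n))       # R_1 = Lambda_1
    for k in range(1, n):
        R[k] = (sum(abs(A[k][i]) for i in range(k + 1, n))
                + sum(abs(A[k][i]) * R[i] / abs(A[i][i]) for i in range(k)))
    return R

A = [[1, 0, -0.6], [-0.8, 1, -0.1], [-0.3, -0.4, 1]]
R = R_values(A)        # [0.6, 0.58, 0.412] for this matrix
```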

A matrix A is called a Nekrasov matrix ( or ) if |a<sub>kk</sub>| > R<sub>k</sub>(A) for each k ∈ 〈n〉. A Nekrasov matrix is a g.d.d. matrix (e.g., see ). A bound for the determinant of a Nekrasov matrix is given below (see [10, 11]):

$\begin{array}{l}\left|{a}_{11}\right|\prod _{k=2}^{n}\left(\left|{a}_{kk}\right|+\frac{\left|{a}_{k1}\right|}{\left|{a}_{11}\right|}{u}_{k}-\sum _{i=1}^{k-1}\left|{a}_{ki}\right|\frac{{R}_{i}\left(A\right)}{\left|{a}_{kk}\right|}\right)\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{1em}{0ex}}\le \left|\text{det}A\right|\le \left|{a}_{11}\right|\prod _{k=2}^{n}\left(\left|{a}_{kk}\right|-\frac{\left|{a}_{k1}\right|}{\left|{a}_{11}\right|}{u}_{k}+\sum _{i=1}^{k-1}\left|{a}_{ki}\right|\frac{{R}_{i}\left(A\right)}{\left|{a}_{kk}\right|}\right).\phantom{\rule{2em}{0ex}}\end{array}$
(2.12)

However, this bound contains a typo; a counter-example was given in . In the following theorem, we estimate the determinant of A by using R<sub>i</sub>(A); the proof is analogous to that of Theorem 2.5.

Theorem 2.5' Let A be a c.d.d. matrix. Then

$\begin{array}{l}{\prod }_{k=1}^{n}\left(\left|{a}_{kk}\right|-{\sum }_{i=1}^{k-1}\left|{a}_{ki}\right|\frac{{R}_{i}\left(A\right)}{\left|{a}_{ii}\right|}+{\sum }_{i=1}^{k-1}\left|{a}_{ki}\right|\frac{{\sum }_{s=k+1}^{n}\left|{a}_{is}\right|}{\left|{a}_{ii}\right|}\right)\le \left|\text{det}A\right|\phantom{\rule{2em}{0ex}}\\ \le {\prod }_{k=1}^{n}\left(\left|{a}_{kk}\right|+{\sum }_{i=1}^{k-1}\left|{a}_{ki}\right|\frac{{R}_{i}\left(A\right)}{\left|{a}_{ii}\right|}-{\sum }_{i=1}^{k-1}\left|{a}_{ki}\right|\frac{{\sum }_{s=k+1}^{n}\left|{a}_{is}\right|}{\left|{a}_{ii}\right|}\right).\phantom{\rule{2em}{0ex}}\end{array}$
(2.13)
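A Python sketch (ours) of the bound (2.13); applied to the matrix of Example 2.1 below, it reproduces the stated values 0.588 and 1.412:

```python
# The bound (2.13), built on the quantities R_i(A).
def bounds_2_13(A):
    n = len(A)
    R = [0.0] * n
    R[0] = sum(abs(A[0][k]) for k in range(1, n))       # R_1 = Lambda_1
    for k in range(1, n):
        R[k] = (sum(abs(A[k][i]) for i in range(k + 1, n))
                + sum(abs(A[k][i]) * R[i] / abs(A[i][i]) for i in range(k)))
    lo = hi = 1.0
    for k in range(n):
        t1 = sum(abs(A[k][i]) * R[i] / abs(A[i][i]) for i in range(k))
        t2 = sum(abs(A[k][i]) * sum(abs(A[i][r]) for r in range(k + 1, n))
                 / abs(A[i][i]) for i in range(k))
        lo *= abs(A[k][k]) - t1 + t2
        hi *= abs(A[k][k]) + t1 - t2
    return lo, hi

A = [[1, 0, -0.6], [-0.8, 1, -0.1], [-0.3, -0.4, 1]]
lo, hi = bounds_2_13(A)   # reproduces Example 2.1: [0.588, 1.412]
```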

Remark 2.2 Let A = D - L - U. Then the recursions (2.1) and (2.11) for s<sub>k</sub>(A) and R<sub>k</sub>(A) can be computed by (2.2) and the following formula (see )

$\left|D\right|{\left(\left|D\right|-\left|L\right|\right)}^{-1}\left|U\right|e={\left({R}_{1}\left(A\right),\dots ,{R}_{n}\left(A\right)\right)}^{T},$
(2.14)

respectively. Hence the two bounds (2.10) and (2.13) are based on the different splittings A = (D - U) - L = (D - L) - U. The following two examples illustrate that neither of the two bounds is better than the other in general.

Example 2.1 Let

$A=\left(\begin{array}{ccc}\hfill 1\hfill & \hfill 0\hfill & \hfill -0.6\hfill \\ \hfill -0.8\hfill & \hfill 1\hfill & \hfill -0.1\hfill \\ \hfill -0.3\hfill & \hfill -0.4\hfill & \hfill 1\hfill \end{array}\right).$

Then A is an s.d.d. matrix. Applying the bounds (2.10) and (2.13) to this matrix yields

$0.5568\le \left|\text{det}A\right|\le 1.4768$

and

$0.588\le \left|\text{det}A\right|\le 1.412$

respectively, which shows that the bound in (2.13) is better.

Example 2.2 Let

$A=\left(\begin{array}{ccc}\hfill 1\hfill & \hfill -0.2\hfill & \hfill 0\hfill \\ \hfill -0.1\hfill & \hfill 1\hfill & \hfill -0.3\hfill \\ \hfill -0.3\hfill & \hfill -0.1\hfill & \hfill 1\hfill \end{array}\right).$

Then A is s.d.d. By (2.10) and (2.13), we have

$0.92732\le \left|\text{det}A\right|\le 1.0753$

and

$0.83104\le \left|\text{det}A\right|\le 1.175,$

respectively. Hence the bound (2.10) is better.

Remark 2.3 Note that the bound in (2.10) (or (2.13)) only provides an alternative estimate for the determinant; it does not improve (1.2) in general. However, Example 2.1 illustrates that the bound in (2.10) can be better. In fact, by (1.2) we obtain

$0.448\le \left|\text{det}A\right|\le 1.5947.$

The following example shows that the upper bound in (1.2) can be better than the one in (2.10).

Example 2.3 Let

$A=\left(\begin{array}{ccc}\hfill 1\hfill & \hfill 0\hfill & \hfill 0.6\hfill \\ \hfill 0.8\hfill & \hfill 1\hfill & \hfill -0.1\hfill \\ \hfill -0.5\hfill & \hfill -0.4\hfill & \hfill 1\hfill \end{array}\right).$

Then by (2.10) and (1.2) we have

$0.4608\le \left|\text{det}A\right|\le 1.6016$

and

$0.448\le \left|\text{det}A\right|\le 1.5947,$

respectively.

## 3 The preconditioning technique

It is well known that the preconditioning technique plays an increasingly important role in solving linear systems (e.g., see ). In this section, we improve the bound (1.2) based on the idea of preconditioning.

Without loss of generality, we may assume in this section that all diagonal entries of A equal 1. Otherwise, we consider the matrix D<sup>-1</sup>A, where D = diag(a<sub>11</sub>,..., a<sub>nn</sub>), and note that det(D<sup>-1</sup>A) = det D<sup>-1</sup> det A. Hence, we assume that

$A=I-L-U,$

where L and U are strictly lower triangular and strictly upper triangular matrices, respectively.

Let

$P=I+S=\left(\begin{array}{ccccc}\hfill 1\hfill & \hfill \left|{a}_{12}\right|\hfill & \hfill 0\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 1\hfill & \hfill \left|{a}_{23}\right|\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill 0\hfill \\ \hfill ⋮\hfill & \hfill ⋮\hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill ⋮\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill \left|{a}_{n-1,n}\right|\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill 1\hfill \end{array}\right),$
(3.1)

which was first introduced in  for solving linear systems and further studied by many authors (e.g., see ). Usually, P is called a preconditioner for the linear system Ax = b.

Let B = PA. Then det B = det A and

$\begin{array}{ll}\hfill B& =I-L-SL-\left(U-S+SU\right)\phantom{\rule{2em}{0ex}}\\ =\stackrel{̃}{L}-\stackrel{̃}{U},\phantom{\rule{2em}{0ex}}\end{array}$

where $\stackrel{̃}{L}\equiv I-L-SL$ and $\stackrel{̃}{U}\equiv U-S+SU$ are a lower triangular and a strictly upper triangular matrix, respectively. The i-th diagonal entry of B is given by

${b}_{ii}=\left\{\begin{array}{ll}1-\left|{a}_{i,i+1}\right|\left|{a}_{i+1,i}\right|,& i<n,\\ 1,& i=n.\end{array}\right.$
(3.2)
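The preconditioning step is easy to check numerically; the sketch below (ours) forms P from (3.1), multiplies out B = PA, and confirms the diagonal formula (3.2) on a small s.d.d. M-matrix with unit diagonal:

```python
# Form P = I + S from (3.1), multiply out B = P A, and inspect diag(B).
def precondition(A):
    n = len(A)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n - 1):
        P[i][i + 1] = abs(A[i][i + 1])   # superdiagonal of S
    return [[sum(P[i][k] * A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, -0.3, 0], [-0.3, 1, -0.3], [0, -0.3, 1]]
B = precondition(A)
# diag(B) = [0.91, 0.91, 1.0], i.e. b_ii = 1 - |a_{i,i+1}||a_{i+1,i}| for
# i < n and b_nn = 1; also b_12 = 0, the entry annihilated by P.
```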

If A is an s.d.d. M-matrix, so is B (see ). Let A have the block form (2.3). We partition I + S into the following block form

$I+S=\left(\begin{array}{cc}1& \alpha \\ 0& I+{S}_{\left(1\right)}\end{array}\right),$

where $\alpha =\left(\left|{a}_{12}\right|,0,\dots ,0\right)\in {ℛ}^{n-1}$ and ${S}_{\left(1\right)}\in {ℛ}^{\left(n-1\right)×\left(n-1\right)}$. A simple calculation yields that

$B=\left(\begin{array}{cc}\hfill {b}_{11}\hfill & \hfill {\stackrel{̃}{x}}^{T}\hfill \\ \hfill ỹ\hfill & \hfill {B}_{\left(1\right)}\hfill \end{array}\right),$

where

$ỹ=\left(I+{S}_{\left(1\right)}\right)y,\phantom{\rule{1em}{0ex}}{\stackrel{̃}{x}}^{T}={x}^{T}+\alpha {A}_{\left(1\right)},\phantom{\rule{1em}{0ex}}{B}_{\left(1\right)}=\left(I+{S}_{\left(1\right)}\right){A}_{\left(1\right)}.$
(3.3)

Then

$\left|\text{det}B\right|=\left|\left({b}_{11}-{\stackrel{̃}{x}}^{T}{B}_{\left(1\right)}^{-1}ỹ\right)\text{det}{B}_{\left(1\right)}\right|.$
(3.4)

It is easy to see that

$\left|{b}_{11}\right|-{∥{B}_{\left(1\right)}^{-1}ỹ∥}_{\infty }{∥\stackrel{̃}{x}∥}_{1}\le \left|\left({b}_{11}-{\stackrel{̃}{x}}^{T}{B}_{\left(1\right)}^{-1}ỹ\right)\right|\le \left|{b}_{11}\right|+{∥{B}_{\left(1\right)}^{-1}ỹ∥}_{\infty }{∥\stackrel{̃}{x}∥}_{1}.$
(3.5)

By (3.3) we have

${B}_{\left(1\right)}^{-1}ỹ={A}_{\left(1\right)}^{-1}y,$

and hence from  (also see ) it follows that

${∥{B}_{\left(1\right)}^{-1}ỹ∥}_{\infty }={∥{A}_{\left(1\right)}^{-1}y∥}_{\infty }\le \underset{2\le s\le n}{\text{max}}\frac{\left|{a}_{s1}\right|}{1-{u}_{s}}.$
(3.6)

Notice that ${∥\stackrel{̃}{x}∥}_{1}={∥{x}^{T}+\alpha {A}_{\left(1\right)}∥}_{1}={u}_{1}+\left|{a}_{12}\right|{u}_{2}-\left|{a}_{12}\right|$, which together with (3.4), (3.5), (3.6), and (3.2) gives

$\begin{array}{l}\left[1-\left|{a}_{1,2}\right|\left|{a}_{2,1}\right|-{\text{max}}_{2\le s\le n}\frac{\left|{a}_{s1}\right|}{1-{u}_{s}}\left({u}_{1}-\left|{a}_{12}\right|+\left|{a}_{12}\right|{u}_{2}\right)\right]\left|\text{det}{B}_{\left(1\right)}\right|\le \text{det}A\phantom{\rule{2em}{0ex}}\\ \le \left[1-\left|{a}_{1,2}\right|\left|{a}_{2,1}\right|+{\text{max}}_{2\le s\le n}\frac{\left|{a}_{s1}\right|}{1-{u}_{s}}\left({u}_{1}-\left|{a}_{12}\right|+\left|{a}_{12}\right|{u}_{2}\right)\right]\left|\text{det}{B}_{\left(1\right)}\right|.\phantom{\rule{2em}{0ex}}\end{array}$
(3.7)

By (3.3), B<sub>(1)</sub> is the preconditioned matrix of A<sub>(1)</sub> with preconditioner I + S<sub>(1)</sub>, and B<sub>(1)</sub> is again an s.d.d. matrix. Proceeding by induction with (3.7), one easily deduces the following lemma.

Lemma 3.1 Let A be an s.d.d. M-matrix with unit diagonal entries. Then

$\begin{array}{l}\prod _{k=1}^{n-1}\left[1-\left|{a}_{k,k+1}\right|\left|{a}_{k+1,k}\right|-\underset{k+1\le s\le n}{\mathrm{max}}\frac{\left|{a}_{sk}\right|}{1-{u}_{s}}\left({u}_{k}-\left|{a}_{k,k+1}\right|\left(1-{u}_{k+1}\right)\right)\right]\\ \le \mathrm{det}A\le \prod _{k=1}^{n-1}\left[1-\left|{a}_{k,k+1}\right|\left|{a}_{k+1,k}\right|+\underset{k+1\le s\le n}{\mathrm{max}}\frac{\left|{a}_{sk}\right|}{1-{u}_{s}}\left({u}_{k}-\left|{a}_{k,k+1}\right|\left(1-{u}_{k+1}\right)\right)\right].\end{array}$
(3.8)

By the above argument, we may deduce the following result without the assumption that A has unit diagonal entries as in Lemma 3.1.

Theorem 3.2 Let A be an s.d.d. M-matrix. Then

$\begin{array}{l}\left|{a}_{nn}\right|\prod _{k=1}^{n-1}\left[\left|{a}_{kk}\right|-\frac{\left|{a}_{k+1,k}\right|}{\left|{a}_{k+1,k+1}\right|}\left|{a}_{k,k+1}\right|-\underset{k+1\le s\le n}{\text{max}}\frac{\left|{a}_{sk}\right|}{\left|{a}_{ss}\right|-{u}_{s}}\left({u}_{k}-\frac{\left|{a}_{k,k+1}\right|}{\left|{a}_{k+1,k+1}\right|}\left(\left|{a}_{k+1,k+1}\right|-{u}_{k+1}\right)\right)\right]\\ \le \text{det}A\le \left|{a}_{nn}\right|\prod _{k=1}^{n-1}\left[\left|{a}_{kk}\right|-\frac{\left|{a}_{k+1,k}\right|}{\left|{a}_{k+1,k+1}\right|}\left|{a}_{k,k+1}\right|+\underset{k+1\le s\le n}{\text{max}}\frac{\left|{a}_{sk}\right|}{\left|{a}_{ss}\right|-{u}_{s}}\left({u}_{k}-\frac{\left|{a}_{k,k+1}\right|}{\left|{a}_{k+1,k+1}\right|}\left(\left|{a}_{k+1,k+1}\right|-{u}_{k+1}\right)\right)\right].\end{array}$
(3.9)
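A Python sketch (ours) of the bound (3.9), reading the product as running over k = 1, ..., n - 1 with a final factor |a<sub>nn</sub>| (as in Lemma 3.1 after rescaling by the diagonal); applied to the matrix of Example 3.1 below, it reproduces the stated values 0.793 and 0.8632:

```python
# The bound (3.9): product over k = 1, ..., n-1, times the factor |a_nn|.
def bounds_3_9(A):
    n = len(A)
    u = [sum(abs(A[i][j]) for j in range(i + 1, n)) for i in range(n)]
    lo = hi = abs(A[n - 1][n - 1])                    # final factor |a_nn|
    for k in range(n - 1):
        d = abs(A[k][k + 1]) / abs(A[k + 1][k + 1])
        base = abs(A[k][k]) - d * abs(A[k + 1][k])
        m = max(abs(A[s][k]) / (abs(A[s][s]) - u[s])
                for s in range(k + 1, n))
        t = m * (u[k] - d * (abs(A[k + 1][k + 1]) - u[k + 1]))
        lo *= base - t
        hi *= base + t
    return lo, hi

A = [[1, -0.3, 0], [-0.3, 1, -0.3], [0, -0.3, 1]]
lo, hi = bounds_3_9(A)    # reproduces Example 3.1: [0.793, 0.8632]
```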

Remark 3.1 Note that the bound (3.9) is always sharper than the one in (1.2). In fact, since u<sub>i</sub> < |a<sub>ii</sub>| for each i, we have ${u}_{k}-\frac{\left|{a}_{k,k+1}\right|}{\left|{a}_{k+1,k+1}\right|}\left(\left|{a}_{k+1,k+1}\right|-{u}_{k+1}\right)\le {u}_{k}$, and hence the upper bound is better than the one in (1.2). For the lower bound, since

$\begin{array}{ll}\hfill \left(\underset{k+1\le s\le n}{\text{max}}\frac{\left|{a}_{sk}\right|}{\left|{a}_{ss}\right|-{u}_{s}}\right)\frac{\left|{a}_{k,k+1}\right|}{\left|{a}_{k+1,k+1}\right|}\left(\left|{a}_{k+1,k+1}\right|-{u}_{k+1}\right)& \ge \frac{\left|{a}_{k+1,k}\right|}{\left|{a}_{k+1,k+1}\right|-{u}_{k+1}}\frac{\left|{a}_{k,k+1}\right|}{\left|{a}_{k+1,k+1}\right|}\left(\left|{a}_{k+1,k+1}\right|-{u}_{k+1}\right)\phantom{\rule{2em}{0ex}}\\ =\frac{\left|{a}_{k,k+1}\right|\left|{a}_{k+1,k}\right|}{\left|{a}_{k+1,k+1}\right|},\phantom{\rule{2em}{0ex}}\end{array}$

the lower bound in (3.9) is better than the one in (1.2), which proves our assertion.

Remark 3.2 Neither of the two bounds (3.9) and (2.10) is uniformly better than the other. However, the following example illustrates that the upper bound in (3.9) can be much better.

Example 3.1 Let

$A=\left(\begin{array}{ccc}\hfill 1\hfill & \hfill -0.3\hfill & \hfill 0\hfill \\ \hfill -0.3\hfill & \hfill 1\hfill & \hfill -0.3\hfill \\ \hfill 0\hfill & \hfill -0.3\hfill & \hfill 1\hfill \end{array}\right).$

Applying (3.9), (1.2), and (2.10), respectively, to estimate the determinant of A, we have

$\begin{array}{ll}\hfill 0.793& \le \text{det}A\le 0.8632,\phantom{\rule{2em}{0ex}}\\ \hfill 0.793& \le \text{det}A\le 1.2301\phantom{\rule{2em}{0ex}}\end{array}$

and

$0.80353\le \text{det}A\le 1.2175,$

respectively.

## 4 Conclusion

In Sections 2 and 3, we have provided some two-sided bounds for the determinant of a d.d. matrix via both classical and preconditioning techniques. Although neither of the bounds (1.2) and (2.10) is uniformly better than the other in general, the condition required for (2.10) is weaker than the one for (1.2).

When the preconditioning technique is applied to estimate the determinant of an s.d.d. M-matrix, we obtain a tighter bound. Here, we only present the bound (3.9) for the special preconditioner (3.1) and prove that it is sharper than the bound (1.2), which shows that a good preconditioning technique is a powerful tool not only for solving linear systems but also for estimates of quantities such as determinants.

## References

1. Li W: On the Nekrasov matrix. Linear Algebra Appl 1998, 281: 87–96. 10.1016/S0024-3795(98)10031-9

2. Shivakumar PN, Chew KH: A sufficient condition for nonvanishing of determinants. Proc Am Math Soc 1974, 43: 63–66. 10.1090/S0002-9939-1974-0332820-0

3. Ostrowski AM: Sur la determination des bones inferieures pour une classe des determinants. Bull Sci Math 1937, 61(2):19–32.

4. Price GB: Bounds for determinants with dominant principal diagonal. Proc Am Math Soc 1951, 2: 497–502. 10.1090/S0002-9939-1951-0041093-2

5. Ostrowski AM: Note on bounds for determinants with dominant principal diagonal. Proc Am Math Soc 1952, 3: 26–30. 10.1090/S0002-9939-1952-0052380-7

6. Yong XR: Two properties of diagonally dominant matrices. Numer Linear Algebra Appl 1996, 3(2):173–177. 10.1002/(SICI)1099-1506(199603/04)3:2<173::AID-NLA69>3.0.CO;2-C

7. Robert F: Blocs-H-matrices et convergence des methodes iteratives classiques par blocs. Linear Algebra Appl 1969, 2: 223–265. 10.1016/0024-3795(69)90029-9

8. Li W: The infinity norm bound for the inverse of nonsingular diagonal dominant matrices. Appl Math Lett 2008, 21: 258–263. 10.1016/j.aml.2007.03.018

9. Szulc T: Some remarks on a theorem of Gudkov. Linear Algebra Appl 1995, 225: 221–235.

10. Bayley DW, Crabtree DE: Bounds for determinants. Linear Algebra Appl 1969, 2: 303–309. 10.1016/0024-3795(69)90032-9

11. Szulc T: On bound for certain determinants. Z Angew Math Mech 1992, 72: 637–640.

12. Huang TZ, Xu CX: A note on the bound for the Bayley-Crabtree determinant of Nekrasov matrices. J Xi'an Jiaotong Univ 2002, 36: 1320. (In Chinese)

13. Axelsson O: Iterative Solution Methods. Cambridge University Press, Cambridge; 1994.

14. Gunawardena AD, Jain SK, Snyder L: Modified iterative methods for consistent linear systems. Linear Algebra Appl 1991, 154/156: 123–143.

15. Hadjidimos A, Noutsos D, Tzoumas M: More on modifications and improvements of classical iterative schemes for M-matrices. Linear Algebra Appl 2003, 364: 253–279.

16. Li W, Sun W: Modified Gauss-Seidel type methods and Jacobi type methods. Linear Algebra Appl 2000, 317: 227–247. 10.1016/S0024-3795(00)00140-3

17. Li W: The convergence of the modified Gauss-Seidel method for consistent linear systems. J Comput Appl Math 2003, 154: 97–105. 10.1016/S0377-0427(02)00812-9

18. Sun LY: Some extensions of the improved modified Gauss-Seidel iterative method for H -matrices. Numer Linear Algebra Appl 2006, 13: 869–876. 10.1002/nla.498

19. Hu JG: Estimates for B-1A . J Comput Math 1984, 2: 122–149.

## Acknowledgements

The work was supported in part by National Natural Science Foundation of China (No. 10971075), Research Fund for the Doctoral Program of Higher Education of China (No. 20104407110002) and Guangdong Provincial Natural Science Foundation (No. 9151063101000021), P.R. China.

## Author information


### Corresponding author

Correspondence to Wen Li.

### Competing interests

The authors declare that they have no competing interests.


Li, W., Chen, Y. Some new two-sided bounds for determinants of diagonally dominant matrices. J Inequal Appl 2012, 61 (2012). https://doi.org/10.1186/1029-242X-2012-61


### Keywords

• diagonally dominant matrix
• determinant
• M-matrix
• bound 