• Research
• Open Access

# Inequalities for M-tensors

Journal of Inequalities and Applications 2014, 2014:114

https://doi.org/10.1186/1029-242X-2014-114

• Received: 3 January 2014
• Accepted: 27 February 2014

## Abstract

In this paper, we establish some important properties of M-tensors. We derive upper and lower bounds for the minimum eigenvalue of M-tensors and also present bounds for the eigenvalues of M-tensors other than the minimum eigenvalue; finally, we give the Ky Fan theorem for M-tensors.

MSC: 15A18, 15A69, 65F15, 65F10.

## Keywords

• M-tensors
• nonnegative tensor
• eigenvalues

## 1 Introduction

Eigenvalue problems of higher-order tensors have become an important topic of study in a new applied mathematics branch, numerical multilinear algebra, and they have a wide range of practical applications.

If there are a complex number λ and a nonzero complex vector x that are solutions of the following homogeneous polynomial equations:
$\mathcal{A}{x}^{m-1}=\lambda {x}^{\left[m-1\right]},$
then λ is called an eigenvalue of $\mathcal{A}$ and x an eigenvector of $\mathcal{A}$ associated with λ, where $\mathcal{A}{x}^{m-1}$ and ${x}^{\left[m-1\right]}$ are vectors, whose i th components are
$\mathcal{A}{x}^{m-1}:={\left(\sum _{{i}_{2},\dots ,{i}_{m}=1}^{n}{a}_{i{i}_{2}\cdots {i}_{m}}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}}\right)}_{1\le i\le n},\qquad {x}^{\left[m-1\right]}:={\left({x}_{i}^{m-1}\right)}_{1\le i\le n}.$
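The two maps in this definition can be evaluated directly; below is a minimal numpy sketch (the function names are ours, not from the paper), checked on the order-3 all-ones tensor of dimension 2, for which λ = 4 and x = e form an eigenpair:

```python
import numpy as np

def tensor_apply(A, x):
    """Compute A x^{m-1}: component i is the sum over i2..im of a_{i i2..im} x_{i2}...x_{im}."""
    v = A
    for _ in range(A.ndim - 1):
        v = v @ x          # contract the last index with x, m-1 times
    return v

def power_vec(x, k):
    """Compute x^{[k]} = (x_i^k)_{1<=i<=n}."""
    return x ** k

# order-3, dimension-2 all-ones tensor: (A x^2)_i = (x1 + x2)^2,
# so x = e gives A e^2 = 4 e^[2], i.e., lambda = 4 is an eigenvalue
A = np.ones((2, 2, 2))
x = np.ones(2)
lhs = tensor_apply(A, x)
rhs = 4 * power_vec(x, 2)
```

Here `lhs` and `rhs` agree componentwise, confirming the eigenpair.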

This definition was introduced by Qi and Lim [8, 9], where they supposed that $\mathcal{A}$ is an order m dimension n symmetric tensor and m is even. First, we introduce some results on nonnegative tensors, which are generalized from nonnegative matrices.

Definition 1.1 The tensor $\mathcal{A}$ is called reducible if there exists a nonempty proper index subset $\mathbb{J}\subset \left\{1,2,\dots ,n\right\}$ such that ${a}_{{i}_{1},{i}_{2},\dots ,{i}_{m}}=0$, $\mathrm{\forall }{i}_{1}\in \mathbb{J}$, $\mathrm{\forall }{i}_{2},\dots ,{i}_{m}\notin \mathbb{J}$. If $\mathcal{A}$ is not reducible, then we call $\mathcal{A}$ irreducible.

Let $\rho \left(\mathcal{A}\right)=max\left\{|\lambda |:\lambda \in \sigma \left(\mathcal{A}\right)\right\}$, where $|\lambda |$ denotes the modulus of λ. We call $\rho \left(\mathcal{A}\right)$ the spectral radius of the tensor $\mathcal{A}$.

Theorem 1.2 If $\mathcal{A}$ is irreducible and nonnegative, then there exists a number $\rho \left(\mathcal{A}\right)>0$ and a vector ${x}_{0}>0$ such that $\mathcal{A}{x}_{0}^{m-1}=\rho \left(\mathcal{A}\right){x}_{0}^{\left[m-1\right]}$. Moreover, if λ is an eigenvalue with a nonnegative eigenvector, then $\lambda =\rho \left(\mathcal{A}\right)$. If λ is an eigenvalue of $\mathcal{A}$, then $|\lambda |\le \rho \left(\mathcal{A}\right)$.
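Theorem 1.2 underlies a Perron-type power iteration for computing ρ(A). The sketch below follows the Ng–Qi–Zhou (NQZ) iteration for irreducible nonnegative tensors; the algorithm is not part of this paper and is included only as an illustration:

```python
import numpy as np

def tensor_apply(A, x):
    # A x^{m-1}: contract the last index with x, m-1 times
    v = A
    for _ in range(A.ndim - 1):
        v = v @ x
    return v

def spectral_radius(A, tol=1e-10, max_iter=2000):
    """NQZ-style power iteration for rho(A), A an irreducible nonnegative
    tensor: iterate y = (A x^{m-1})^{[1/(m-1)]} and bracket rho by the
    min/max of the componentwise ratios (A y^{m-1})_i / y_i^{m-1}."""
    m, n = A.ndim, A.shape[0]
    x = np.ones(n)
    lo = hi = 0.0
    for _ in range(max_iter):
        y = tensor_apply(A, x) ** (1.0 / (m - 1))
        ratios = tensor_apply(A, y) / y ** (m - 1)
        lo, hi = ratios.min(), ratios.max()
        x = y / np.linalg.norm(y)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# all-ones order-3 tensor of dimension 2: rho = n^{m-1} = 4, eigenvector e
rho = spectral_radius(np.ones((2, 2, 2)))
```

Convergence is guaranteed for primitive tensors; for merely irreducible ones a diagonal shift is the usual remedy.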

The authors in [13, 14] extended the notion of M-matrices to higher-order tensors and introduced the definition of an M-tensor.

Definition 1.3 Let $\mathcal{A}$ be an m-order and n-dimensional tensor. $\mathcal{A}$ is called an M-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a real number $c>\rho \left(\mathcal{B}\right)$, where $\rho \left(\mathcal{B}\right)$ is the spectral radius of $\mathcal{B}$, such that
$\mathcal{A}=c\mathcal{I}-\mathcal{B}.$

Theorem 1.4 Let $\mathcal{A}$ be an M-tensor and denote by $\tau \left(\mathcal{A}\right)$ the minimal value of the real part of all eigenvalues of $\mathcal{A}$. Then $\tau \left(\mathcal{A}\right)>0$ is an eigenvalue of $\mathcal{A}$ with a nonnegative eigenvector. Moreover, there exist a nonnegative tensor $\mathcal{B}$ and a real number $c>\rho \left(\mathcal{B}\right)$ such that $\mathcal{A}=c\mathcal{I}-\mathcal{B}$. If $\mathcal{A}$ is irreducible, then $\tau \left(\mathcal{A}\right)$ is the unique eigenvalue with a positive eigenvector.

In this paper, let $N=\left\{1,2,\dots ,n\right\}$. We define the i th row sum of $\mathcal{A}$ as ${R}_{i}\left(\mathcal{A}\right)={\sum }_{{i}_{2},\dots ,{i}_{m}=1}^{n}{a}_{i{i}_{2}\cdots {i}_{m}}$, and denote the largest and the smallest row sums of $\mathcal{A}$ by
${R}_{max}\left(\mathcal{A}\right)=\underset{i=1,\dots ,n}{max}{R}_{i}\left(\mathcal{A}\right),\phantom{\rule{2em}{0ex}}{R}_{min}\left(\mathcal{A}\right)=\underset{i=1,\dots ,n}{min}{R}_{i}\left(\mathcal{A}\right).$
Furthermore, a real tensor $\mathcal{I}$ of order m dimension n is called the unit tensor if its entries are ${\delta }_{{i}_{1}\cdots {i}_{m}}$ for ${i}_{1},\dots ,{i}_{m}\in N$, where ${\delta }_{{i}_{1}\cdots {i}_{m}}=1$ if ${i}_{1}=\cdots ={i}_{m}$ and ${\delta }_{{i}_{1}\cdots {i}_{m}}=0$ otherwise.
And we define $\sigma \left(\mathcal{A}\right)$ as the set of all the eigenvalues of $\mathcal{A}$ and
${r}_{i}\left(\mathcal{A}\right)=\sum _{{\delta }_{i{i}_{2}\cdots {i}_{m}}=0}|{a}_{i{i}_{2}\cdots {i}_{m}}|,\qquad {r}_{i}^{j}\left(\mathcal{A}\right)=\sum _{{\delta }_{i{i}_{2}\cdots {i}_{m}}=0,\ {\delta }_{j{i}_{2}\cdots {i}_{m}}=0}|{a}_{i{i}_{2}\cdots {i}_{m}}|={r}_{i}\left(\mathcal{A}\right)-|{a}_{ij\cdots j}|.$
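The row sums and the quantities $r_i(\mathcal{A})$ and $r_i^j(\mathcal{A})$ are simple to tabulate; a numpy sketch (with our own naming), applied to the order-4 dimension-2 example used later in Section 2:

```python
import numpy as np

def row_sums(A):
    # R_i(A) = sum over i2..im of a_{i i2..im}
    n = A.shape[0]
    return A.reshape(n, -1).sum(axis=1)

def r(A, i):
    # r_i(A): sum of |a_{i i2..im}| over off-diagonal positions of row i
    m = A.ndim
    return np.abs(A[i]).sum() - abs(A[(i,) * m])

def r_ij(A, i, j):
    # r_i^j(A) = r_i(A) - |a_{i j...j}|
    m = A.ndim
    return r(A, i) - abs(A[(i,) + (j,) * (m - 1)])

# the order-4 dimension-2 example from Section 2 (0-based indices)
A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = 3.0;  A[0, 1, 1, 1] = -1.0
A[1, 0, 0, 0] = -2.0; A[1, 1, 1, 1] = 2.0
```

For this tensor, `row_sums(A)` gives (2, 0), `r(A, 0) = 1`, `r(A, 1) = 2`, and both `r_ij(A, 0, 1)` and `r_ij(A, 1, 0)` vanish.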

In this paper, we continue this research on the eigenvalue problems for tensors. In Section 2, some bounds for the minimum eigenvalue of M-tensors are obtained and proved to be tighter than those of Theorem 1.1 in . In Section 3, some bounds for the eigenvalues of M-tensors other than the minimum eigenvalue are given. Finally, the Ky Fan theorem for M-tensors is presented in Section 4.

## 2 Bounds for the minimum eigenvalue of M-tensors

Theorem 2.1 Let $\mathcal{A}$ be an irreducible M-tensor. Then
$\tau \left(\mathcal{A}\right)\le min\left\{{a}_{i\cdots i}\right\},$
(1)
${R}_{min}\left(\mathcal{A}\right)\le \tau \left(\mathcal{A}\right)\le {R}_{max}\left(\mathcal{A}\right).$
(2)
Proof Let $x>0$ be an eigenvector of $\mathcal{A}$ corresponding to $\tau \left(\mathcal{A}\right)$, i.e., $\mathcal{A}{x}^{m-1}=\tau \left(\mathcal{A}\right){x}^{\left[m-1\right]}$. For each $i\in N$, we can get
$\left({a}_{i\cdots i}-\tau \left(\mathcal{A}\right)\right){x}_{i}^{m-1}=-\sum _{{\delta }_{i{i}_{2}\cdots {i}_{m}}=0}{a}_{i{i}_{2}\cdots {i}_{m}}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}}\ge 0,$
then
$\tau \left(\mathcal{A}\right)\le min\left\{{a}_{i\cdots i}\right\}.$
Assume that ${x}_{s}$ is the smallest component of x; then
$\left({a}_{s\cdots s}-\tau \left(\mathcal{A}\right)\right){x}_{s}^{m-1}=-\sum _{{\delta }_{s{i}_{2}\cdots {i}_{m}}=0}{a}_{s{i}_{2}\cdots {i}_{m}}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}}\ge 0.$
That is,
$\tau \left(\mathcal{A}\right)\le \sum _{{\delta }_{s{i}_{2}\cdots {i}_{m}}=0}{a}_{s{i}_{2}\cdots {i}_{m}}+{a}_{s\cdots s},$
so
$\tau \left(\mathcal{A}\right)\le {R}_{max}\left(\mathcal{A}\right).$
Similarly, if we assume that ${x}_{t}={max}_{i\in N}{x}_{i}$, then we can get
$\tau \left(\mathcal{A}\right)\ge \sum _{{\delta }_{t{i}_{2}\cdots {i}_{m}}=0}{a}_{t{i}_{2}\cdots {i}_{m}}+{a}_{t\cdots t}\ge {R}_{min}\left(\mathcal{A}\right).$

Thus, we complete the proof. □
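As a numerical illustration (ours, not part of the proof), the two bounds of Theorem 2.1 can be evaluated for the order-4 dimension-2 example discussed later in this section, for which $\tau(\mathcal{A})=1$:

```python
import numpy as np

def theorem21_bounds(A):
    # Theorem 2.1: R_min(A) <= tau(A) <= min{ min_i a_{i..i}, R_max(A) }
    n, m = A.shape[0], A.ndim
    R = A.reshape(n, -1).sum(axis=1)                  # row sums R_i(A)
    diag = np.array([A[(i,) * m] for i in range(n)])  # diagonal entries
    return R.min(), min(diag.min(), R.max())

# the Section 2 example (0-based indices)
A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = 3.0;  A[0, 1, 1, 1] = -1.0
A[1, 0, 0, 0] = -2.0; A[1, 1, 1, 1] = 2.0

lo, hi = theorem21_bounds(A)   # gives 0 <= tau(A) <= 2, and tau(A) = 1
```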

Theorem 2.2 Let be an irreducible M-tensor. Then
$\begin{array}{r}\underset{i,j\in N,j\ne i}{min}\frac{1}{2}\left\{{a}_{i\cdots i}+{a}_{j\cdots j}-{r}_{i}^{j}\left(\mathcal{A}\right)-{\mathrm{\Delta }}_{i,j}^{\frac{1}{2}}\left(\mathcal{A}\right)\right\}\\ \phantom{\rule{1em}{0ex}}\le \tau \left(\mathcal{A}\right)\le \underset{i,j\in N,j\ne i}{max}\frac{1}{2}\left\{{a}_{i\cdots i}+{a}_{j\cdots j}-{r}_{i}^{j}\left(\mathcal{A}\right)-{\mathrm{\Delta }}_{i,j}^{\frac{1}{2}}\left(\mathcal{A}\right)\right\},\end{array}$
(3)
where
${\mathrm{\Delta }}_{i,j}\left(\mathcal{A}\right)={\left({a}_{i\cdots i}-{a}_{j\cdots j}+{r}_{i}^{j}\left(\mathcal{A}\right)\right)}^{2}-4{a}_{ij\cdots j}{r}_{i}\left(\mathcal{A}\right).$
Proof Because $\tau \left(\mathcal{A}\right)$ is an eigenvalue of $\mathcal{A}$, from Theorem 2.1 in , there are $i,j\in N$, $j\ne i$, such that
$\left(|\tau \left(\mathcal{A}\right)-{a}_{i\cdots i}|-{r}_{i}^{j}\left(\mathcal{A}\right)\right)|\tau \left(\mathcal{A}\right)-{a}_{j\cdots j}|\le |{a}_{ij\cdots j}|{r}_{j}\left(\mathcal{A}\right).$
From Theorem 2.1, $\tau \left(\mathcal{A}\right)\le {a}_{i\cdots i}$ for every $i\in N$, so, dropping the absolute values, we can get
$\left({a}_{i\cdots i}-\tau \left(\mathcal{A}\right)-{r}_{i}^{j}\left(\mathcal{A}\right)\right)\left({a}_{j\cdots j}-\tau \left(\mathcal{A}\right)\right)\le -{a}_{ij\cdots j}{r}_{j}\left(\mathcal{A}\right),$
equivalently,
$\tau {\left(\mathcal{A}\right)}^{2}-\left({a}_{i\cdots i}+{a}_{j\cdots j}-{r}_{i}^{j}\left(\mathcal{A}\right)\right)\tau \left(\mathcal{A}\right)+{a}_{j\cdots j}\left({a}_{i\cdots i}-{r}_{i}^{j}\left(\mathcal{A}\right)\right)+{a}_{ij\cdots j}{r}_{j}\left(\mathcal{A}\right)\le 0.$
Then, solving for $\tau \left(\mathcal{A}\right)$,
$\tau \left(\mathcal{A}\right)\ge \frac{1}{2}\left\{{a}_{i\cdots i}+{a}_{j\cdots j}-{r}_{i}^{j}\left(\mathcal{A}\right)-{\mathrm{\Delta }}_{i,j}^{\frac{1}{2}}\left(\mathcal{A}\right)\right\}\ge \underset{i,j\in N,j\ne i}{min}\frac{1}{2}\left\{{a}_{i\cdots i}+{a}_{j\cdots j}-{r}_{i}^{j}\left(\mathcal{A}\right)-{\mathrm{\Delta }}_{i,j}^{\frac{1}{2}}\left(\mathcal{A}\right)\right\}.$
Let $x>0$ be an eigenvector of $\mathcal{A}$ corresponding to $\tau \left(\mathcal{A}\right)$, i.e., $\mathcal{A}{x}^{m-1}=\tau \left(\mathcal{A}\right){x}^{\left[m-1\right]}$, and let ${x}_{s}$ be the smallest component of x. For each $t\in N$, $t\ne s$, we can get
$\left({a}_{t\cdots t}-\tau \left(\mathcal{A}\right)\right){x}_{t}^{m-1}=-\sum _{{\delta }_{t{i}_{2}\cdots {i}_{m}}=0}{a}_{t{i}_{2}\cdots {i}_{m}}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}}\ge {r}_{t}\left(\mathcal{A}\right){x}_{s}^{m-1},$
(4)
$\begin{array}{c}\left({a}_{s\cdots s}-\tau \left(\mathcal{A}\right)\right){x}_{s}^{m-1}=-\sum _{{\delta }_{s{i}_{2}\cdots {i}_{m}}=0,\ {\delta }_{t{i}_{2}\cdots {i}_{m}}=0}{a}_{s{i}_{2}\cdots {i}_{m}}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}}-{a}_{st\cdots t}{x}_{t}^{m-1}\ge {r}_{s}^{t}\left(\mathcal{A}\right){x}_{s}^{m-1}-{a}_{st\cdots t}{x}_{t}^{m-1},\hfill \\ \left({a}_{s\cdots s}-\tau \left(\mathcal{A}\right)-{r}_{s}^{t}\left(\mathcal{A}\right)\right){x}_{s}^{m-1}\ge -{a}_{st\cdots t}{x}_{t}^{m-1}.\hfill \end{array}$
(5)
Multiplying equations (4) and (5), we get
$\left({a}_{t\cdots t}-\tau \left(\mathcal{A}\right)\right)\left({a}_{s\cdots s}-\tau \left(\mathcal{A}\right)-{r}_{s}^{t}\left(\mathcal{A}\right)\right)\ge -{a}_{st\cdots t}{r}_{t}\left(\mathcal{A}\right).$
Then, solving for $\tau \left(\mathcal{A}\right)$,
$\tau \left(\mathcal{A}\right)\le \frac{1}{2}\left\{{a}_{t\cdots t}+{a}_{s\cdots s}-{r}_{s}^{t}\left(\mathcal{A}\right)-{\mathrm{\Delta }}_{s,t}^{\frac{1}{2}}\left(\mathcal{A}\right)\right\}\le \underset{i,j\in N,j\ne i}{max}\frac{1}{2}\left\{{a}_{i\cdots i}+{a}_{j\cdots j}-{r}_{i}^{j}\left(\mathcal{A}\right)-{\mathrm{\Delta }}_{i,j}^{\frac{1}{2}}\left(\mathcal{A}\right)\right\}.$

Thus, we complete the proof. □

We now show by the following example that the bounds in Theorem 2.2 are tight and sharper than those of Theorem 1.1 in . Consider the M-tensor $\mathcal{A}=\left({a}_{ijkl}\right)$ of order 4 dimension 2 with entries defined as follows:
$\begin{array}{c}{a}_{1111}=3,\phantom{\rule{2em}{0ex}}{a}_{1222}=-1,\hfill \\ {a}_{2111}=-2,\phantom{\rule{2em}{0ex}}{a}_{2222}=2,\hfill \end{array}$
and all other ${a}_{ijkl}=0$. By Theorem 1.1 in , we have
$-2\le \tau \left(\mathcal{A}\right)\le 4.$
By Theorem 2.1, we have
$0\le \tau \left(\mathcal{A}\right)\le 2.$
By Theorem 2.2, we have
$\frac{1}{2}\left(5-\sqrt{17}\right)\le \tau \left(\mathcal{A}\right)\le \frac{1}{2}\left(5-\sqrt{5}\right).$

In fact, $\tau \left(\mathcal{A}\right)=1$. Hence, the bounds in Theorem 2.2 are tight and sharper than those in Theorem 1.1 in .
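The value $\tau(\mathcal{A})=1$ can be confirmed directly: $x=(1,{2}^{1/3})$ is a positive eigenvector for λ = 1 (this verification and the eigenvector are ours; the paper states only the eigenvalue):

```python
import numpy as np

def tensor_apply(A, x):
    # A x^{m-1}: contract the last index with x, m-1 times
    v = A
    for _ in range(A.ndim - 1):
        v = v @ x
    return v

A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = 3.0;  A[0, 1, 1, 1] = -1.0
A[1, 0, 0, 0] = -2.0; A[1, 1, 1, 1] = 2.0

x = np.array([1.0, 2.0 ** (1 / 3)])   # candidate positive eigenvector
lhs = tensor_apply(A, x)              # A x^3 = (3 - 2, -2 + 4) = (1, 2)
rhs = 1.0 * x ** 3                    # tau(A) * x^[3] with tau(A) = 1
```

The two sides agree, so τ(A) = 1 with the positive eigenvector x, consistent with the bounds above.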

## 3 Bounds for eigenvalues of M-tensors except the minimum eigenvalue

In this section, we introduce the stochastic M-tensor, which is a generalization of the nonnegative stochastic tensor.

Definition 3.1 An M-tensor $\mathcal{A}$ of order m dimension n is called stochastic provided
${R}_{i}\left(\mathcal{A}\right)=\sum _{{i}_{2},\dots ,{i}_{m}=1}^{n}{a}_{i{i}_{2}\cdots {i}_{m}}\equiv 1,\phantom{\rule{1em}{0ex}}i=1,\dots ,n.$

Obviously, when $\mathcal{A}$ is a stochastic M-tensor, 1 is the minimum eigenvalue of $\mathcal{A}$ and e is an eigenvector corresponding to 1, where e is the all-ones vector.
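A small order-3 instance (a toy example of ours) illustrates Definition 3.1: every row sums to 1, so $\mathcal{A}{e}^{m-1}=1\cdot {e}^{\left[m-1\right]}$ and 1 is an eigenvalue with eigenvector e:

```python
import numpy as np

# an order-3 stochastic M-tensor (our toy example):
# A = 2I - B with B nonnegative and rho(B) <= max row sum of B = 1 < 2
A = np.zeros((2, 2, 2))
A[0, 0, 0] = 2.0;  A[0, 1, 1] = -1.0   # row 1: 2 - 1 = 1
A[1, 1, 1] = 1.5;  A[1, 0, 0] = -0.5   # row 2: 1.5 - 0.5 = 1

n = A.shape[0]
rows = A.reshape(n, -1).sum(axis=1)    # all row sums equal 1

e = np.ones(n)
Ae = (A @ e) @ e                       # A e^{m-1} = 1 * e^[m-1]
```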

Theorem 3.2 Let $\mathcal{A}$ be an order m dimension n irreducible M-tensor. Then there exists a diagonal matrix D with positive main diagonal entries such that
$\tau \left(\mathcal{A}\right)\cdot \mathcal{B}=\mathcal{A}\cdot {D}^{\left(1-m\right)}\cdot \underset{m-1}{\underbrace{D\cdot \dots \cdot D}},$

where $\mathcal{B}$ is a stochastic irreducible M-tensor. Furthermore, $\mathcal{B}$ is unique, and the diagonal entries of D are exactly the components of the unique positive eigenvector corresponding to $\tau \left(\mathcal{A}\right)$.

Proof Let x be the unique positive eigenvector corresponding to $\tau \left(\mathcal{A}\right)$, i.e.,
$\mathcal{A}{x}^{m-1}=\tau \left(\mathcal{A}\right){x}^{\left[m-1\right]}.$
Let D be the diagonal matrix whose diagonal entries are the components of x, and consider the tensor $\mathcal{C}=\mathcal{A}\cdot {D}^{\left(1-m\right)}\cdot \underset{m-1}{\underbrace{D\cdot \dots \cdot D}}$. It is clear that for $i=1,2,\dots ,n$,
$\sum _{{i}_{2},\dots ,{i}_{m}=1}^{n}{\mathcal{C}}_{i{i}_{2}\cdots {i}_{m}}={\left(\mathcal{C}{e}^{m-1}\right)}_{i}={\left(\mathcal{A}\cdot {D}^{\left(1-m\right)}\cdot \underset{m-1}{\underbrace{D\cdot \dots \cdot D}}\phantom{\rule{0.2em}{0ex}}{e}^{m-1}\right)}_{i}=\tau \left(\mathcal{A}\right).$

Hence $\mathcal{B}=\mathcal{C}/\tau \left(\mathcal{A}\right)$ is the desired stochastic M-tensor. Since the positive eigenvector is unique, $\mathcal{B}$ is unique, and the diagonal entries of D are exactly the components of the unique positive eigenvector corresponding to $\tau \left(\mathcal{A}\right)$. □
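The scaling in Theorem 3.2 is easy to carry out numerically. For the Section 2 example, τ(A) = 1 with positive eigenvector $x=(1,{2}^{1/3})$ (our illustration); the scaled tensor with entries ${a}_{i{i}_{2}\cdots {i}_{m}}{x}_{i}^{1-m}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}}/\tau(\mathcal{A})$ is then row-stochastic:

```python
import numpy as np

A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = 3.0;  A[0, 1, 1, 1] = -1.0
A[1, 0, 0, 0] = -2.0; A[1, 1, 1, 1] = 2.0

tau = 1.0
x = np.array([1.0, 2.0 ** (1 / 3)])    # positive eigenvector for tau
m = A.ndim

# entries of B: b_{i i2 i3 i4} = a_{i i2 i3 i4} * x_i^{1-m} * x_{i2} x_{i3} x_{i4} / tau
B = A * np.einsum('i,j,k,l->ijkl', x ** (1 - m), x, x, x) / tau

rows = B.reshape(B.shape[0], -1).sum(axis=1)   # every row sums to 1
```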

Theorem 3.3 Let $\mathcal{A}$ be an order m dimension n stochastic irreducible nonnegative tensor, $\omega =min{a}_{i\cdots i}$, $\lambda \in \sigma \left(\mathcal{A}\right)$. Then
$|\lambda -\omega |\le 1-\omega .$
Proof Let λ be an eigenvalue of the stochastic irreducible nonnegative tensor $\mathcal{A}$, and let x be an eigenvector corresponding to λ, i.e.,
$\mathcal{A}{x}^{m-1}=\lambda {x}^{\left[m-1\right]}.$
Assume that $0<|{x}_{s}|={max}_{i}|{x}_{i}|$; then we can get
$\left(\lambda -{a}_{s\cdots s}\right){x}_{s}^{m-1}=\sum _{{\delta }_{s{i}_{2}\cdots {i}_{m}}=0}{a}_{s{i}_{2}\cdots {i}_{m}}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}}.$
Then
$|\lambda -{a}_{s\cdots s}|\le \sum _{{\delta }_{s{i}_{2}\cdots {i}_{m}}=0}{a}_{s{i}_{2}\cdots {i}_{m}}={r}_{s}\left(\mathcal{A}\right)=1-{a}_{s\cdots s},$
and therefore,
$\begin{array}{rl}|\lambda -\omega |& =|\lambda -{a}_{s\cdots s}+{a}_{s\cdots s}-\omega |\\ \le |\lambda -{a}_{s\cdots s}|+|{a}_{s\cdots s}-\omega |\\ \le \left(1-{a}_{s\cdots s}\right)+\left({a}_{s\cdots s}-\omega \right)\\ =1-\omega .\end{array}$
(6)

Thus, we complete the proof. □

Theorem 3.4 Let $\mathcal{A}$ be an order m dimension n irreducible M-tensor, $\mathrm{\Omega }=max{a}_{i\cdots i}$, $\lambda \in \sigma \left(\mathcal{A}\right)$. Then
$|\mathrm{\Omega }-\lambda |\le \mathrm{\Omega }-\tau \left(\mathcal{A}\right).$
Proof From Theorem 3.2, we may evidently take $\tau \left(\mathcal{A}\right)=1$, and after performing a similarity transformation with a positive diagonal matrix, we may assume that $\mathcal{A}$ is stochastic. Then, for sufficiently small $\theta \in \left(0,1\right)$, the tensor $\mathcal{A}\left(\theta \right)=\left(1+\theta \right)\mathcal{I}-\theta \mathcal{A}$ is irreducible, nonnegative, and stochastic; by Theorem 3.3, if $\lambda \left(\theta \right)\in \sigma \left(\mathcal{A}\left(\theta \right)\right)$ and $\omega \left(\theta \right)=min{a}_{i\cdots i}\left(\theta \right)$, we can get
$|\lambda \left(\theta \right)-\omega \left(\theta \right)|\le 1-\omega \left(\theta \right).$
That is,
$|1+\theta -\theta \lambda -\left(1+\theta -\theta max{a}_{i\cdots i}\right)|\le 1-\left(1+\theta -\theta max{a}_{i\cdots i}\right).$
Then
$|\mathrm{\Omega }-\lambda |\le \mathrm{\Omega }-1.$
Transforming back to $\mathcal{A}$, we get
$|\mathrm{\Omega }-\lambda |\le \mathrm{\Omega }-\tau \left(\mathcal{A}\right).$

Thus, we complete the proof. □
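For the order-4 example of Section 2, Theorem 3.4 can be checked by hand: for eigenpairs with both components nonzero, dividing the two eigenvalue equations by ${x}_{1}^{3}$ and ${x}_{2}^{3}$ and eliminating λ gives a quadratic in $t={x}_{2}^{3}/{x}_{1}^{3}$ (this reduction is ours, not from the paper):

```python
import numpy as np

# Section 2 example: the rows of A x^3 = lambda x^[3] read
#   3 x1^3 -   x2^3 = lambda x1^3
#  -2 x1^3 + 2 x2^3 = lambda x2^3
# For x1, x2 != 0 set t = x2^3 / x1^3: lambda = 3 - t and lambda = 2 - 2/t,
# hence t^2 - t - 2 = 0 (our reduction for this small example)
t_roots = np.roots([1, -1, -2])       # t = 2 and t = -1
eigs = 3 - t_roots                    # lambda = 1 and lambda = 4

Omega, tau = 3.0, 1.0                 # Omega = max a_{i..i}, tau(A) = 1
disk_ok = all(abs(Omega - lam) <= Omega - tau for lam in eigs)
```

Both eigenvalues lie in the disk $|\Omega - \lambda| \le \Omega - \tau(\mathcal{A}) = 2$, with equality at λ = 1, so the bound is tight here.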

## 4 Ky Fan theorem for M-tensors

In this section we give the Ky Fan theorem for M-tensors. Denote by $\mathbb{Z}$ the set of m-order and n-dimensional real tensors whose off-diagonal entries are nonpositive.

Theorem 4.1 Let $\mathcal{A},\mathcal{B}\in \mathbb{Z}$, assume that $\mathcal{A}$ is an M-tensor and $\mathcal{B}\ge \mathcal{A}$. Then $\mathcal{B}$ is an M-tensor, and
$\tau \left(\mathcal{A}\right)\le \tau \left(\mathcal{B}\right).$
Proof Since $\mathcal{A}$ is an M-tensor, for any $x>0$, by condition (D4) in , we know
$\mathcal{A}{x}^{m-1}>0.$
Because $\mathcal{B}\ge \mathcal{A}$, we can get
$\mathcal{B}{x}^{m-1}\ge \mathcal{A}{x}^{m-1}>0,$

then $\mathcal{B}$ is an M-tensor.

Let $a={max}_{1\le i\le n}{b}_{i\cdots i}$; from Theorem 3.1 and Corollary 3.2 in , we may write
$\mathcal{B}=a\mathcal{I}-{\mathcal{C}}_{\mathcal{B}},\qquad \mathcal{A}=a\mathcal{I}-{\mathcal{C}}_{\mathcal{A}},$

where ${\mathcal{C}}_{\mathcal{A}}$, ${\mathcal{C}}_{\mathcal{B}}$ are nonnegative tensors.

Because $\mathcal{A},\mathcal{B}\in \mathbb{Z}$ and $\mathcal{B}\ge \mathcal{A}$, we can get
${\mathcal{C}}_{\mathcal{A}}\ge {\mathcal{C}}_{\mathcal{B}}.$
From Lemma 3.5 in , we can get
$\rho \left({\mathcal{C}}_{\mathcal{A}}\right)\ge \rho \left({\mathcal{C}}_{\mathcal{B}}\right).$
Therefore,
$\tau \left(\mathcal{A}\right)=a-\rho \left({\mathcal{C}}_{\mathcal{A}}\right)\le a-\rho \left({\mathcal{C}}_{\mathcal{B}}\right)=\tau \left(\mathcal{B}\right).$

Thus, we complete the proof. □
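Theorem 4.1 can be probed numerically via $\tau(\mathcal{A})=a-\rho(a\mathcal{I}-\mathcal{A})$ (the representation of Theorem 1.4) together with an NQZ-style power iteration for the spectral radius of the nonnegative part; the iteration scheme and the comparison tensor B below are our own assumptions, included only as a sanity check:

```python
import numpy as np

def tensor_apply(A, x):
    v = A
    for _ in range(A.ndim - 1):
        v = v @ x
    return v

def unit_tensor(shape):
    m, n = len(shape), shape[0]
    unit = np.zeros(shape)
    for i in range(n):
        unit[(i,) * m] = 1.0
    return unit

def rho_nonneg(C, tol=1e-10, iters=5000):
    """NQZ-style power iteration for rho(C), C irreducible nonnegative.
    The identity shift keeps the iteration convergent: rho(C) = rho(C+I) - 1."""
    m, n = C.ndim, C.shape[0]
    D = C + unit_tensor(C.shape)
    x = np.ones(n)
    lo = hi = 0.0
    for _ in range(iters):
        y = tensor_apply(D, x) ** (1.0 / (m - 1))
        ratios = tensor_apply(D, y) / y ** (m - 1)
        lo, hi = ratios.min(), ratios.max()
        x = y / y.max()
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi) - 1.0

def tau(A):
    # tau(A) = a - rho(aI - A) with a = max_i a_{i..i}
    m, n = A.ndim, A.shape[0]
    a = max(A[(i,) * m] for i in range(n))
    return a - rho_nonneg(a * unit_tensor(A.shape) - A)

# A: the Section 2 example; B >= A entrywise with the same Z-pattern
A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = 3.0;  A[0, 1, 1, 1] = -1.0
A[1, 0, 0, 0] = -2.0; A[1, 1, 1, 1] = 2.0

B = A.copy()
B[0, 1, 1, 1] = -0.5
B[1, 0, 0, 0] = -1.0

tauA, tauB = tau(A), tau(B)   # Theorem 4.1 predicts tauA <= tauB
```

Here τ(A) = 1 and τ(B) ≈ 1.634, consistent with the monotonicity asserted by the theorem.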

Theorem 4.2 Let $\mathcal{A}$, $\mathcal{B}$ be of order m dimension n, suppose that $\mathcal{B}$ is an M-tensor and $|{b}_{{i}_{1}\cdots {i}_{m}}|\ge |{a}_{{i}_{1}\cdots {i}_{m}}|$ for all off-diagonal entries, i.e., whenever ${\delta }_{{i}_{1}\cdots {i}_{m}}=0$. Then, for any eigenvalue λ of $\mathcal{A}$, there exists $i\in \left\{1,\dots ,n\right\}$ such that $|\lambda -{a}_{i\cdots i}|\le {b}_{i\cdots i}-\tau \left(\mathcal{B}\right)$.

Proof Since $\mathcal{B}$ is an M-tensor, $\tau \left(\mathcal{B}\right)$ is an eigenvalue of $\mathcal{B}$ with a positive corresponding eigenvector v. Denote
$W=diag\left({v}_{1},\dots ,{v}_{n}\right),$
where ${v}_{i}$ is the i th component of v. Let
$\mathcal{C}=\mathcal{A}\cdot {W}^{1-m}\cdot \underset{m-1}{\underbrace{W\cdot \dots \cdot W}}$
and let λ be an eigenvalue of $\mathcal{A}$ with a corresponding eigenvector x, i.e., $\mathcal{A}{x}^{m-1}=\lambda {x}^{\left[m-1\right]}$. Then, as in the proof of Theorem 4.1 in , we have
$\mathcal{C}{\left({W}^{-1}x\right)}^{m-1}=\lambda {\left({W}^{-1}x\right)}^{\left[m-1\right]}.$
By the definition of $\mathcal{C}$, we have ${c}_{i\cdots i}={a}_{i\cdots i}$, $i=1,\dots ,n$. Applying the first conclusion of Theorem 6 of , we can get, for some $i\in N$,
$\begin{array}{rl}|\lambda -{c}_{i\cdots i}|& \le \sum _{{\delta }_{i{i}_{2}\cdots {i}_{m}}=0}|{c}_{i{i}_{2}\cdots {i}_{m}}|\\ ={v}_{i}^{1-m}\sum _{{\delta }_{i{i}_{2}\cdots {i}_{m}}=0}|{a}_{i{i}_{2}\cdots {i}_{m}}|{v}_{{i}_{2}}\cdots {v}_{{i}_{m}}\\ \le {v}_{i}^{1-m}\sum _{{\delta }_{i{i}_{2}\cdots {i}_{m}}=0}|{b}_{i{i}_{2}\cdots {i}_{m}}|{v}_{{i}_{2}}\cdots {v}_{{i}_{m}}\\ ={v}_{i}^{1-m}\left({b}_{i\cdots i}{v}_{i}^{m-1}-\sum _{{i}_{2},\dots ,{i}_{m}=1}^{n}{b}_{i{i}_{2}\cdots {i}_{m}}{v}_{{i}_{2}}\cdots {v}_{{i}_{m}}\right)\\ ={b}_{i\cdots i}-\tau \left(\mathcal{B}\right).\end{array}$
(7)

Thus, we complete the proof. □

## Declarations

### Acknowledgements

This research is supported by NSFC (61170311), the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), and the Sichuan Province Sci. & Tech. Research Project (12ZC1802). The first author is supported by the Fundamental Research Funds for the Central Universities.

## Authors’ Affiliations

(1) School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, P.R. China

## References 