Open Access

The rank inequality for diagonally magic matrices

Journal of Inequalities and Applications 2015, 2015:318

https://doi.org/10.1186/s13660-015-0840-z

Received: 18 August 2015

Accepted: 24 September 2015

Published: 6 October 2015

Abstract

We study a new class of matrices called diagonally magic matrices. We prove that such a matrix has rank at most 2 and that any square submatrix of a diagonally magic matrix is diagonally magic.

Keywords

rank; diagonally magic matrices; eigenvalue; linear equations

MSC

15A03; 15A06

1 Introduction

For a positive integer n, let \(S_{n}\) be the set of all n! permutations of \(\{1, 2, \ldots, n\}\). We denote by \(\mathbb{C}^{n\times n}\) and \(\mathbb{R}^{n\times n}\) the set of \(n \times n\) complex matrices and the set of \(n \times n\) real matrices, respectively. If \(A=(a_{i,j}) \in\mathbb{C}^{n\times n}\) and \(\sigma\in S_{n}\), then the sequence \(a_{1,\sigma(1)}, a_{2,\sigma(2)}, \ldots, a_{n,\sigma (n)}\) is called a transversal of A [1]. In 2012, Professor Xingzhi Zhan defined the following new concept at a seminar and suggested studying its properties.

Definition 1.1

A matrix \(A=(a_{i,j})\in\mathbb{C}^{n\times n}\) is called diagonally magic if
$$ \sum_{i=1}^{n} a_{i,\sigma(i)}=\sum_{i=1}^{n} a_{i,\pi(i)} $$
(1)
for all \(\sigma, \pi\in S_{n}\).
Obviously, the zero matrix \(0_{n\times n}\) and \(J=[1]_{n\times n}\), the matrix of all ones, are diagonally magic matrices. Denote
$$ B_{n}= \begin{pmatrix} 1&2 & \cdots& n\\ n+1&n+2 & \cdots& 2n\\ \vdots&\vdots& \ddots& \vdots\\ (n-1)n+1&(n-1)n+2 & \cdots& n^{2} \end{pmatrix} $$
(2)
and
$$ C_{n}= \begin{pmatrix} 1&2 & \cdots& n\\ 2&3 & \cdots& n+1\\ \vdots&\vdots& \ddots& \vdots\\ n&n+1 & \cdots& 2n-1 \end{pmatrix} . $$
(3)
We will show that \(B_{n}\) and \(C_{n}\) are diagonally magic matrices, so diagonally magic matrices are plentiful. Note that \(C_{n}\) is a Hankel matrix, and that \(B_{n}\) and \(C_{n}\) are nonnegative matrices, a class that has been an active research area [2, 3].
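The diagonally magic property of \(B_{n}\) and \(C_{n}\) can be verified directly from Definition 1.1 by enumerating all n! transversals. The following brute-force sketch (the helper names `make_B`, `make_C`, `transversal_sums`, and `is_diagonally_magic` are ours, not the paper's) checks small cases:

```python
# Brute-force check of Definition 1.1: a matrix is diagonally magic
# iff all of its n! transversal sums coincide.  We test the example
# matrices B_n from (2) and C_n from (3) for small n.
from itertools import permutations

def make_B(n):
    # B_n: row i holds the consecutive integers (i-1)n+1, ..., in
    return [[(i - 1) * n + j for j in range(1, n + 1)] for i in range(1, n + 1)]

def make_C(n):
    # C_n: Hankel matrix whose (i, j) entry is i + j - 1
    return [[i + j - 1 for j in range(1, n + 1)] for i in range(1, n + 1)]

def transversal_sums(A):
    # the set of all transversal sums a_{1,sigma(1)} + ... + a_{n,sigma(n)}
    n = len(A)
    return {sum(A[i][s[i]] for i in range(n)) for s in permutations(range(n))}

def is_diagonally_magic(A):
    return len(transversal_sums(A)) == 1

checks = [is_diagonally_magic(make_B(n)) and is_diagonally_magic(make_C(n))
          for n in range(2, 6)]
```

For \(C_{3}\), for instance, every transversal sums to \(\sum i + \sum\sigma(i) - 3 = 9\), matching what the enumeration reports.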
For a matrix \(D=(d_{ij})\in\mathbb{C}^{m\times n}\) with columns \(d_{1},d_{2},\ldots,d_{n}\), the vector \(\operatorname{vec}(D)\) is defined by \(\operatorname{vec}(D)=(d^{T}_{1},d^{T}_{2},\ldots,d^{T}_{n})^{T}\), where the superscript T denotes the transpose. The matrix \(E_{i,j}^{n}\) denotes the Type 1 elementary matrix [4], p.8: the identity matrix \(I_{n}\) of order n with the i, i and j, j entries replaced by 0 and the i, j and j, i entries replaced by 1. Given two matrices A and B, their direct sum is written as \(A\oplus B\). Given a sequence of matrices \(A_{i}\), for \(i=1, \ldots, k\), one may write their direct sum as
$$A=\bigoplus_{i=1}^{k}A_{i}= \operatorname{diag} (A_{1}, \ldots, A_{k}). $$
Each \(A_{i}\) is called a direct summand of A. Let \(e_{n}=(\underbrace{1, \ldots, 1} _{n})^{T}\) and \(\widehat{e}_{n}=(\underbrace{0, \ldots, 0} _{n-1}, 1)^{T}\).

2 Main results

Let \(A=(a_{i,j})\in\mathbb{C}^{n\times n}\) be a diagonally magic matrix with \(n\geq2\). Assume that the sum of every transversal is c. From the definition of diagonally magic matrices (1), we have a system of linear equations
$$ \widetilde{A}_{n} \operatorname{vec} \bigl(A^{T} \bigr)=ce_{n!}, $$
(4)
where \(\widetilde{A}_{n}=(\widetilde{a}_{1}, \widetilde{a}_{2}, \ldots, \widetilde{a}_{n^{2}}) \in\mathbb{R}^{n!\times n^{2}}\) is the coefficient matrix. If \(n=2\), from the definition of diagonally magic matrices and (4), the coefficient matrix \(\widetilde{A}_{2}\) can be chosen to be the \(2\times 4\) matrix
$$\widetilde{A}_{2}= \begin{pmatrix} 1 & 0 & 0 & 1\\ 0 & 1 & 1 & 0 \end{pmatrix} =(\widetilde{a}_{1}, \widetilde{a}_{2}, \widetilde{a}_{3}, \widetilde{a}_{4}), $$
and the augmented matrix is
$$ \widehat{A}_{2}= \left ( \textstyle\begin{array}{c@{\hspace{3pt}}c@{\quad}c@{\quad}c|@{\quad}c} 1 & 0 & 0 & 1& c\\ 0 & 1 & 1 & 0& c \end{array}\displaystyle \right ). $$
(5)
This augmented matrix is already in row-reduced echelon form.
Suppose \(n\geq2\) and that, for \(n = 2, \ldots, k\), we have constructed the coefficient matrix
$$\widetilde{A}_{k}=(\widetilde{a}_{1}, \widetilde{a}_{2}, \ldots, \widetilde{a}_{k^{2}}). $$
Let \(n=k+1\). We use the following method to construct the coefficient matrix \(\widetilde{A}_{k+1}\). Firstly, let
$$\begin{aligned}& C_{1,1}^{k+1}= (e_{k!}, 0_{k!\times k} ), \\& C_{1, m+1}^{k+1}= (0_{k!\times1},\widetilde{a}_{(m-1)k+1}, \widetilde{a}_{(m-1)k+2}, \ldots, \widetilde{a}_{mk} ) \end{aligned}$$
for \(1\leq m \leq k\). Secondly, construct
$$C_{i,j}^{k+1}=C_{i-1,j}^{k+1}E_{i-1,i}^{k+1} $$
for \(i=2,3,\ldots,k+1\), \(j=1,2,3,\ldots,k+1\), where \(E_{i,j}^{n}\) is the Type 1 elementary matrix defined in Section 1. Then we get the coefficient matrix
$$ \widetilde{A}_{k+1}= \begin{pmatrix} C_{1,1}^{k+1}& C_{1,2}^{k+1}& \cdots& C_{1,k+1}^{k+1}\\ C_{2,1}^{k+1}& C_{2,2}^{k+1}& \cdots& C_{2,k+1}^{k+1}\\ \vdots& \vdots& \ddots& \vdots\\ C_{k+1,1}^{k+1}& C_{k+1,2}^{k+1}& \cdots& C_{k+1,k+1}^{k+1} \end{pmatrix}. $$
For example, if \(n=3\), applying this construction to \(\widetilde{A}_{2}\), we have
$$\begin{aligned} \widetilde{A}_{3}&= \begin{pmatrix} C_{1,1}^{3}& C_{1,2}^{3}& C_{1,3}^{3}\\ C_{2,1}^{3}& C_{2,2}^{3}& C_{2,3}^{3}\\ C_{3,1}^{3}& C_{3,2}^{3}& C_{3,3}^{3} \end{pmatrix} \\ &=\left ( \textstyle\begin{array}{ccc|ccc|ccc} 1& 0& 0 &0&1&0 &0&0&1\\ 1& 0& 0 &0&0&1 &0&1&0\\ \hline 0&1& 0 &1&0&0 &0&0&1\\ 0&1& 0 &0&0&1 &1&0&0\\ \hline 0& 0&1 &1&0&0 &0&1&0\\ 0& 0&1 &0&1&0 &1&0&0 \end{array}\displaystyle \right ) \\ &=(\widetilde{a}_{1}, \widetilde{a}_{2}, \widetilde{a}_{3}, \widetilde {a}_{4},\widetilde{a}_{5}, \widetilde{a}_{6},\widetilde{a}_{7}, \widetilde {a}_{8}, \widetilde{a}_{9}). \end{aligned}$$
Assume
$$C_{j}= \bigl(C_{j,1}^{k+1}, C_{j,2}^{k+1}, \ldots, C_{j,k+1}^{k+1}, ce_{k!} \bigr) $$
for \(j=1, 2, \ldots, k+1\). Consequently, the augmented matrix of \(\widetilde{A}_{k+1}\) is
$$\bigl(C_{1}^{T}, C_{2}^{T}, \ldots, C_{k+1}^{T} \bigr)^{T}. $$
Let
$$\begin{aligned}& D_{n}= \begin{pmatrix} 0 & \cdots& 0 & 1\\ \vdots& \ddots& \vdots& \vdots\\ 0 & \cdots& 0 & 1\\ 0 & \cdots& 0 & 1 \end{pmatrix} \in \mathbb{R}^{n\times n}, \\& F_{(n-1)\times n}= (\textstyle\begin{array}{@{}c@{\quad}c@{}} I_{n-1} & -e_{n-1} \end{array}\displaystyle ) \in\mathbb{R}^{(n-1)\times n}, \\& G_{n}= \begin{pmatrix} 0 & 1& \cdots& 1 & 3-n\\ 1 & 0& \cdots& 1 & 3-n\\ \vdots&\vdots& \ddots&\vdots&\vdots\\ 1 & 1& \cdots& 0 & 3-n\\ 1 & 1& \cdots& 1 & 2-n \end{pmatrix} \in \mathbb{R}^{n\times n}. \end{aligned}$$
We claim that the row-reduced echelon form of the augmented matrix in the system of linear equations (4) has the following form:
$$ \textstyle\begin{array}{@{}c@{}} \textstyle\begin{array}{@{\hspace{-30pt}}l} \overbrace{\hphantom{\hspace{43mm}}}^{(n-2)n} \end{array}\displaystyle \\ \widehat{A}_{n}=\left ( \textstyle\begin{array}{cccccc|c} I_{n}&D_{n} &D_{n} &\cdots&D_{n}&G_{n}&ce_{n}\\ & F_{(n-1)\times n} & 0 &\cdots& 0& -F_{(n-1)\times n}&0 \\ & &F_{(n-1)\times n}& \cdots&0&-F_{(n-1)\times n}&0 \\ & & & \ddots&\vdots&\vdots&\vdots\\ & & & & F_{(n-1)\times n}&-F_{(n-1)\times n}&0\\ &&{\Huge0}&& &0&0 \end{array}\displaystyle \right ). \end{array} $$
(6)
We prove this by induction on n. For example, \((\widetilde{A}_{2}, ce_{2!})\) is row-equivalent to
$$ \widehat{A}_{2}=\left ( \textstyle\begin{array}{cccc|c} 1 & 0 & 0 & 1& c\\ 0 & 1 & 1 & 0& c \end{array}\displaystyle \right ). $$
The row-reduced echelon form of the augmented matrix \((\widetilde{A}_{3}, ce_{3!})\) has the following form:
$$\begin{aligned} \widehat{A}_{3}&=\left ( \textstyle\begin{array}{ccc|c} I_{3}& D_{3}& G_{3}&ce_{3}\\ 0_{2\times3}& F_{2\times3}& -F_{2\times3}&0_{2\times1}\\ 0_{1\times3}& 0_{1\times3}& 0_{1\times3}&0 \end{array}\displaystyle \right ) \\ &=\left ( \textstyle\begin{array}{ccc|ccc|ccc|c} 1& 0& 0 &0&0&1 &0&1&0 &1\\ 0& 1& 0 &0&0&1 &1&0&0 &1\\ 0& 0 &1 &0&0&1 &1&1&-1 &1\\ \hline 0&0&0 &1&0& -1 &-1&0&1 &0\\ 0&0&0 &0& 1&-1 &0&-1&1 &0\\ \hline 0& 0&0 &0&0&0 &0&0&0 &0 \end{array}\displaystyle \right ). \end{aligned}$$
Obviously, if \(n=2\), \(\widehat{A}_{2}\) in (5) has the form (6). Suppose \(n\geq2\) and that the assertion has been proved for the \(n!\times n^{2}\) matrices \(\widetilde{A}_{n}\) with \(n = 2, \ldots, k\). Let \(n=k+1\),
$$\begin{aligned}& H_{k\times(k+1)}= (e_{k}, 0_{k\times k} ), \qquad J_{k\times (k+1)}= (0_{k\times1}, I_{k} ), \\& \widetilde{D}_{k\times(k+1)}= (0_{k\times1}, D_{k} ),\qquad \widetilde{G}_{k\times(k+1)}= (0_{k\times1}, G_{k} ), \\& \widetilde{F}_{(k-1)\times(k+1)}= (0_{(k-1)\times1}, F_{(k-1)\times k} ). \end{aligned}$$
By the inductive hypothesis, we can obtain the following matrix after a sequence of elementary operations for \(C_{1}\):
$$ {{\begin{aligned} P_{1} &=\left ( \textstyle\begin{array}{ccccccc|c} H_{k\times(k+1)}&J_{k\times(k+1)}&\widetilde{D}_{k\times(k+1)} &\widetilde{D}_{k\times(k+1)} &\cdots&\widetilde{D}_{k\times (k+1)}&\widetilde{G}_{k\times(k+1)}&ce_{k}\\ && \widetilde{F}_{(k-1)\times(k+1)} & 0 &\cdots& 0& -\widetilde {F}_{(k-1)\times(k+1)}&0 \\ && &\widetilde{F}_{(k-1)\times(k+1)}& \cdots&0&-\widetilde {F}_{(k-1)\times(k+1)}&0 \\ && & & \ddots&\vdots&\vdots&\vdots\\ & & & & & \widetilde{F}_{(k-1)\times(k+1)}&-\widetilde{F}_{(k-1)\times (k+1)}&0\\ &&\Huge0&&& &0&0 \end{array}\displaystyle \right ) \\ &\equiv \begin{pmatrix} P_{1,1} \\ P_{1,2} \\ P_{1,3} \\ \vdots\\ P_{1,k-1} \\ 0 \end{pmatrix} , \end{aligned}}} $$
where
$$\begin{aligned}& P_{1,1}=\left ( \textstyle\begin{array}{ccccccc|c} H_{k\times(k+1)}&J_{k\times(k+1)}&\widetilde{D}_{k\times(k+1)} &\widetilde{D}_{k\times(k+1)} &\cdots&\widetilde{D}_{k\times (k+1)}&\widetilde{G}_{k\times(k+1)}&ce_{k} \end{array}\displaystyle \right ),\\& \textstyle\begin{array}{@{}c@{}} \textstyle\begin{array}{@{\hspace{-30pt}}c} \overbrace{\hphantom{\hspace{15mm}}}^{(k-i-1)(k+1)\ \mathrm{columns}} \end{array}\displaystyle \\ P_{1,i}=\left ( \textstyle\begin{array}{cccccccc|c} 0&\cdots& 0&\widetilde{F}_{(k-1)\times(k+1)} & 0 &\cdots& 0& -\widetilde{F}_{(k-1)\times(k+1)}&0 \end{array}\displaystyle \right )\in \mathbb{R}^{(k-1)\times((k+1)^{2}+1)} \end{array}\displaystyle \end{aligned}$$
for \(i=2, 3, \ldots, k-1\). \(C_{j}\) is row-equivalent to
$$P_{j}=P_{j-1} \Biggl( \Biggl(\bigoplus ^{k+1}_{i=1}E_{j-1,j}^{k+1} \Biggr) \oplus1 \Biggr)= \bigl(P_{j,1}^{T}, \ldots, P_{j,k-1}^{T}, 0^{T} \bigr)^{T} $$
for \(j=2, \ldots, k+1\). Here \(P_{j,1}\in\mathbb{R}^{k\times((k+1)^{2}+1)}\) consists of the first k rows of \(P_{j}\), and \(P_{j,i}\in\mathbb{R}^{(k-1)\times((k+1)^{2}+1)}\) consists of rows \(k+(i-2)(k-1)+1\) through \(k+(i-1)(k-1)\) of \(P_{j}\), for \(i=2, \ldots, k-1\). In \(P_{1,1}\), multiplying row k by the scalar −1 and adding it to row j, for \(j=1, \ldots, k-1\), shows that \(P_{1,1}\) is row-equivalent to
$$\widehat{P}_{1,1}= (\widehat{H}_{k\times(k+1)}, \widehat {J}_{k\times(k+1)}, \widehat{\widetilde{D}}_{k\times(k+1)}, \widehat { \widetilde{D}}_{k\times(k+1)}, \ldots, \widehat{\widetilde {D}}_{k\times(k+1)}, \widehat{\widetilde{G}}_{k\times(k+1)}, c\widehat {e}_{k} ), $$
where
$$\begin{aligned}& \widehat{H}_{k\times(k+1)}= \begin{pmatrix} 0&0_{(k-1)\times k}\\ 1&0 \end{pmatrix} ,\qquad \widehat{J}_{k\times(k+1)}= \begin{pmatrix} 0& I_{k-1}&-e_{k-1}\\ 0&0&1 \end{pmatrix} , \\& \widehat{\widetilde{D}}_{k\times(k+1)}= \begin{pmatrix} 0_{(k-1)\times k}&0\\ 0&1 \end{pmatrix} ,\qquad \widehat{\widetilde{G}}_{k\times(k+1)}= \begin{pmatrix} 0 & -1& 0&\cdots& 0 & 1\\ 0 & 0& -1&\cdots& 0 & 1\\ \vdots&\vdots& \vdots&\ddots&\vdots&\vdots\\ 0 & 0& 0&\cdots& -1 & 1\\ 0 & 1&1& \cdots& 1 & 2-k \end{pmatrix} . \end{aligned}$$
Applying this method to \(P_{j,1}\), \(j=2, \ldots, k+1\), then \(P_{j,1}\) is row-equivalent to
$$\widehat{P}_{j,1}=\widehat{P}_{j-1,1} \Biggl( \Biggl(\bigoplus ^{k+1}_{i=1}E_{j-1,j}^{k+1} \Biggr)\oplus1 \Biggr). $$
Multiplying row \(k-1\) of \(\widehat{P}_{1,1}\) by the scalar −1 and adding it to row k of \(\widehat{P}_{k+1,1}\), and likewise multiplying row \(k-1\) of \(P_{1,i}\) by the scalar −1 and adding it to row k of \(\widehat{P}_{k+1,1}\) for \(i=2, 3, \ldots, k-1\), changes row k of \(\widehat{P}_{k+1,1}\) to
$$ \bigl(\underbrace{\widehat{e}_{k+1}^{T}, \ldots, \widehat{e}_{k+1}^{T}} _{k(k+1)}, \underbrace{1, \ldots, 1} _{k}, 1-k, c \bigr). $$
(7)
Picking row k of \(\widehat{P}_{j,1}\), \(j=1, 2, \ldots, k\), and (7), we have
$$ (I_{k+1}, \underbrace{D_{k+1}, D_{k+1}, \ldots, D_{k+1}} _{(k-1)(k+1)}, G_{k+1}, ce_{k+1} ). $$
(8)
Combining row 1 of \(\widehat{P}_{2,1}\) and row i of \(\widehat {P}_{1,1}\), for \(i=1, \ldots, k-1\), we have
$$ (0_{k\times(k+1)}, F_{k\times(k+1)}, \underbrace{0_{k\times (k+1)}, \ldots, 0_{k\times(k+1)}} _{(k-3)(k+1)}, -F_{k\times(k+1)}, 0 ). $$
(9)
Combining row 1 of \(P_{2,i}\) and row j of \(P_{1,i}\), for \(i, j=1, 2, \ldots, k-1\), we get
$$ \left ( \textstyle\begin{array}{ccccccc|c} 0_{k\times(k+1)}& 0_{k\times(k+1)}&F_{k\times(k+1)}&0_{k\times (k+1)}& \cdots&0_{k\times(k+1)}&-F_{k\times(k+1)}&0 \\ 0_{k\times(k+1)}& 0_{k\times(k+1)}&0_{k\times(k+1)}&F_{k\times (k+1)}& \cdots&0_{k\times(k+1)}&-F_{k\times(k+1)}&0 \\ & & & &\ddots&\vdots&\vdots&\vdots\\ &&\Huge0&& & F_{k\times (k+1)}&-F_{k\times(k+1)}&0 \end{array}\displaystyle \right ). $$
(10)
Combining (8), (9) and (10), we have
$$ \left ( \textstyle\begin{array}{cccccc|c} I_{k+1}& D_{k+1}& D_{k+1}& \cdots& D_{k+1}& G_{k+1}& ce_{k+1}\\ & F_{k\times(k+1)} & 0 &\cdots& 0& -F_{k\times(k+1)}&0 \\ & &F_{k\times(k+1)}& \cdots&0&-F_{k\times(k+1)}&0 \\ & & & \ddots&\vdots&\vdots&\vdots\\ &&\Huge0& & F_{k\times (k+1)}&-F_{k\times(k+1)}&0 \end{array}\displaystyle \right ). $$
The remaining rows depend linearly on the rows of the above matrix. From the row-reduced echelon form (6), we get \(\operatorname{rank}(\widetilde{A}_{n})=n^{2}-2n+2\). The entries \(a_{j,n}\) and \(a_{n,i}\), for \(2\leq j\leq n\), \(1\leq i\leq n-1\), are free variables; the other entries of \(A=(a_{i,j})\in\mathbb{C}^{n\times n}\) are pivot variables, completely determined by the free variables.
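The rank count \(\operatorname{rank}(\widetilde{A}_{n})=n^{2}-2n+2\) can be checked numerically. The sketch below (our own illustration, assuming the indexing of (4): the row for a permutation σ places a 1 in the column of \(\operatorname{vec}(A^{T})\) corresponding to \(a_{i,\sigma(i)}\), i.e. column \((i-1)n+\sigma(i)\)) builds the coefficient matrix by enumerating permutations and compares its rank with the closed form:

```python
# Numerical check of rank(A~_n) = n^2 - 2n + 2 for small n.
from itertools import permutations
import numpy as np

def coefficient_matrix(n):
    # one row per permutation sigma; vec(A^T) stacks the rows of A,
    # so the coefficient of a_{i+1, sigma[i]+1} sits at index i*n + sigma[i]
    rows = []
    for sigma in permutations(range(n)):
        r = np.zeros(n * n)
        for i in range(n):
            r[i * n + sigma[i]] = 1.0
        rows.append(r)
    return np.array(rows)

ranks = {n: int(np.linalg.matrix_rank(coefficient_matrix(n)))
         for n in range(2, 6)}
matches = all(r == n * n - 2 * n + 2 for n, r in ranks.items())
```

For \(n=2\) this reproduces \(\widetilde{A}_{2}\) above with rank 2, and for \(n=3\) it agrees with the five nonzero rows of \(\widehat{A}_{3}\).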

Theorem 2.1

Let \(A \in\mathbb{C}^{n\times n}\) be a diagonally magic matrix. Then \(\operatorname{rank}(A)\leq2\).

Proof

Let \(A=(a_{i,j})\in\mathbb{C}^{n\times n}\) be a diagonally magic matrix. If \(n=1\), the conclusion is trivial, so assume \(n\geq2\). Take the unknowns \(a_{j,n}\), \(a_{n,i}\), for \(2\leq j\leq n\), \(1\leq i\leq n-1\), as free variables. According to (6), we have
$$a_{1,i}=-\sum_{j=2}^{n-1}a_{j,n}- \sum_{j\neq i}^{n-1}a_{n,j}+(n-3)a_{n,n}+c $$
for \(i=1, \ldots, n-1\), and
$$a_{1,n}=-\sum_{j=2}^{n-1}a_{j,n}- \sum_{j= 1}^{n-1}a_{n,j}+(n-2)a_{n,n}+c. $$
We also have
$$a_{i,j}=a_{i,n}+a_{n,j}-a_{n,n} $$
for \(2\leq i, j \leq n-1\). That is,
$$ {{A= \begin{pmatrix} -\sum_{j=2}^{n-1}a_{j,n}-\sum_{j\neq 1}^{n-1}a_{n,j}+(n-3)a_{n,n}+c &\cdots &-\sum_{j=2}^{n-1}a_{j,n}-\sum_{j= 1}^{n-1}a_{n,j}+(n-2)a_{n,n}+c\\ \vdots&\ddots&\vdots\\ a_{n,1}&\cdots&a_{n,n} \end{pmatrix}.}} $$
(11)
Using row elementary operations, A is row-equivalent to
$$ \begin{pmatrix} c-\sum_{j=1}^{n}a_{n,j}&c-\sum_{j=1}^{n}a_{n,j}&\cdots &c-\sum_{j=1}^{n}a_{n,j}&c-\sum_{j=1}^{n}a_{n,j}\\ a_{2,n}-a_{n,n}&a_{2,n}-a_{n,n}&\cdots&a_{2,n}-a_{n,n}&a_{2,n}-a_{n,n}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ a_{n-1,n}-a_{n,n}&a_{n-1,n}-a_{n,n}&\cdots &a_{n-1,n}-a_{n,n}&a_{n-1,n}-a_{n,n}\\ a_{n,1}&a_{n,2}&\cdots&a_{n,n-1}&a_{n,n} \end{pmatrix}. $$
(12)
From (12), we can easily get
$$\operatorname{rank}(A)\leq2. $$
This completes the proof. □
According to (11), we know that the matrices \(B_{n}\) in (2) and \(C_{n}\) in (3) are diagonally magic matrices. It is easy to verify that \(B_{n}\) is row-equivalent to
$$ B_{n}\longrightarrow \begin{pmatrix} 1&2 &3& \cdots& n\\ 0&1 &2& \cdots& n-1\\ 0&0 &0& \cdots& 0\\ \vdots&\vdots& \vdots&\ddots& \vdots\\ 0&0 &0& \cdots& 0 \end{pmatrix} . $$
\(C_{n}\) is row-equivalent to
$$ C_{n}\longrightarrow \begin{pmatrix} 1&2 &3& \cdots& n\\ 0&1 &2& \cdots& n-1\\ 0&0 &0& \cdots& 0\\ \vdots&\vdots& \vdots&\ddots& \vdots\\ 0&0 &0& \cdots& 0 \end{pmatrix} . $$
Now it is clear that diagonally magic matrices of each of the ranks 0, 1, and 2 exist. Indeed, \(\operatorname{rank} (0_{n\times n} )=0\), \(\operatorname{rank} ([1]_{n\times n} )=1\), and \(\operatorname{rank}(B_{n})=\operatorname{rank}(C_{n})=2\).
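Theorem 2.1 can be exercised on a whole family of examples at once: any matrix with entries \(a_{i,j}=r_{i}+c_{j}\) is diagonally magic (every transversal sums to \(\sum r_{i}+\sum c_{j}\)), and by a dimension count this family matches the \(2n-1\) degrees of freedom found above. The parametrization by \(r_{i}+c_{j}\) is our illustration, not the paper's; the sketch checks both the magic property and the rank bound on random instances:

```python
# Random matrices of the form a_{i,j} = r_i + c_j: verify they are
# diagonally magic and that their rank never exceeds 2 (Theorem 2.1).
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
results = []
for n in range(2, 6):
    r = rng.standard_normal(n)
    c = rng.standard_normal(n)
    A = r[:, None] + c[None, :]          # a_{i,j} = r_i + c_j
    sums = [sum(A[i, s[i]] for i in range(n)) for s in permutations(range(n))]
    magic = max(sums) - min(sums) < 1e-9  # all transversal sums agree
    results.append(magic and np.linalg.matrix_rank(A) <= 2)
```

Note that \(B_{n}\) and \(C_{n}\) are both of this form: \(B_{n}\) with \(r_{i}=(i-1)n\), \(c_{j}=j\), and \(C_{n}\) with \(r_{i}=i\), \(c_{j}=j-1\).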

Theorem 2.2

If the diagonally magic matrix \(A\in\mathbb{C}^{n\times n}\) has a form (11), then the characteristic polynomial of A is
$$ \begin{aligned} p_{A}(\lambda)= \lambda^{n-2} \bigl(\lambda^{2}-c\lambda+d \bigr), \end{aligned} $$
(13)
where \(d= (\sum_{j=1}^{n}a_{n,j}-c )\sum_{j=2}^{n} (a_{n,1}-a_{n,j} )-n\sum_{j=2}^{n-1} (a_{n,n}-a_{j,n} ) (a_{n,1}-a_{n,j} )\).

From (13), we can see that the algebraic multiplicity of the eigenvalue 0 of the diagonally magic matrix A is at least \(n-2\).
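This spectral consequence is easy to probe numerically. Using the same \(r_{i}+c_{j}\) test matrices as before (our own construction, diagonally magic with transversal constant \(c=\sum r_{i}+\sum c_{j}\)), the sketch below confirms that at least \(n-2\) eigenvalues vanish and that, per (13), the eigenvalues sum to c:

```python
# Spot-check of Theorem 2.2: the characteristic polynomial
# lambda^{n-2}(lambda^2 - c*lambda + d) forces 0 to be an eigenvalue of
# algebraic multiplicity >= n-2, and the trace (sum of eigenvalues)
# equals the transversal constant c.
import numpy as np

rng = np.random.default_rng(1)
ok = []
for n in range(3, 8):
    r, col = rng.standard_normal(n), rng.standard_normal(n)
    A = r[:, None] + col[None, :]        # diagonally magic test matrix
    c = r.sum() + col.sum()              # its transversal constant
    eig = np.linalg.eigvals(A)
    zeros = int(np.sum(np.abs(eig) < 1e-6))
    ok.append(zeros >= n - 2 and np.isclose(eig.sum(), c))
```

Here 0 is a semisimple eigenvalue (its geometric multiplicity already equals \(n-2\) by Theorem 2.1), so the numerical eigenvalues are well behaved.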

Theorem 2.3

Let A and B be diagonally magic matrices of the same order. Then \(A\pm B\), kA, \(PAQ\) and \(A^{*}\) are diagonally magic matrices, where k is a constant, P and Q are the square matrices every row and every column of which has at most one nonzero entry, and \(A^{*}\) denotes the conjugate transpose of A.

Proof

This can be easily checked from the definition. □
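These closure properties can be confirmed on random examples. The sketch below (helper names are ours; for P and Q we test only the permutation-matrix special case of the hypothesis) feeds diagonally magic inputs of the form \(r_{i}+c_{j}\) through each operation:

```python
# Numerical check of Theorem 2.3's closure properties on random
# diagonally magic matrices, with P and Q taken to be permutation
# matrices (a special case of the theorem's hypothesis on P and Q).
from itertools import permutations
import numpy as np

def diag_magic(A, tol=1e-9):
    n = A.shape[0]
    sums = [sum(A[i, s[i]] for i in range(n)) for s in permutations(range(n))]
    return max(abs(x - sums[0]) for x in sums) < tol

rng = np.random.default_rng(2)
n = 4
# r_i + c_j matrices are diagonally magic; use them as test inputs
A = rng.standard_normal(n)[:, None] + rng.standard_normal(n)[None, :]
B = rng.standard_normal(n)[:, None] + rng.standard_normal(n)[None, :]
P = np.eye(n)[rng.permutation(n)]        # permutation of the rows of I_n
Q = np.eye(n)[:, rng.permutation(n)]     # permutation of the columns of I_n
closure = all(diag_magic(M)
              for M in (A + B, A - B, 3.0 * A, P @ A @ Q, A.conj().T))
```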

Let \(A\in\mathbb{C}^{n\times n}\), \(1\leq i_{1} < i_{2}< \cdots< i_{k}\leq n\), \(1\leq j_{1} < j_{2}< \cdots< j_{s}\leq n\). We denote by \(A[i_{1}, i_{2}, \ldots, i_{k}|j_{1}, j_{2}, \ldots, j_{s}]\) the \(k\times s\) submatrix of A that lies in the rows \(i_{1}, i_{2}, \ldots, i_{k}\) and columns \(j_{1}, j_{2}, \ldots, j_{s}\). Denote by \(A(i_{1}, i_{2}, \ldots, i_{k}|j_{1}, j_{2}, \ldots, j_{s})\) the \((n-k)\times(n-s)\) submatrix of A obtained by deleting the rows \(i_{1}, i_{2}, \ldots, i_{k}\) and columns \(j_{1}, j_{2}, \ldots, j_{s}\).

Theorem 2.4

Any square submatrix of a diagonally magic matrix is diagonally magic.

Proof

Let B be a \(k\times k\) submatrix of a diagonally magic matrix \(A=(a_{i,j})\). Then there are row and column indices \(\alpha=(i_{1},i_{2},\ldots,i_{k})\) and \(\beta=(j_{1},j_{2},\ldots,j_{k})\) such that \(B=A[\alpha|\beta]\). Note that the union of a transversal of B and a transversal of \(A(\alpha|\beta)\) is a transversal of A. Choose an arbitrary but fixed transversal T of the square matrix \(A(\alpha|\beta)\). For any \(\sigma,\pi\in S_{k}\), \(a_{i_{1},j_{\sigma(1)}},\ldots ,a_{i_{k},j_{\sigma(k)}}\) and the entries in T constitute a transversal of A, while \(a_{i_{1},j_{\pi(1)}},\ldots,a_{i_{k},j_{\pi(k)}}\) and the entries in T also constitute a transversal of A. Let b be the sum of the entries in T. Since A is diagonally magic, we have
$$\sum_{t=1}^{k} a_{i_{t},j_{\sigma(t)}}+b=\sum _{t=1}^{k} a_{i_{t},j_{\pi(t)}}+b, $$
which yields
$$\sum_{t=1}^{k} a_{i_{t},j_{\sigma(t)}}=\sum _{t=1}^{k} a_{i_{t},j_{\pi(t)}}. $$
This shows that B is diagonally magic. □
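The heredity of Theorem 2.4 can be verified exhaustively on a small example. The sketch below (our own illustration; the input is again a diagonally magic matrix of the form \(r_{i}+c_{j}\)) checks every \(3\times3\) submatrix \(A[\alpha|\beta]\) of a random \(5\times5\) diagonally magic matrix:

```python
# Theorem 2.4, checked numerically: every square submatrix of a
# diagonally magic matrix is again diagonally magic.
from itertools import combinations, permutations
import numpy as np

def diag_magic(A, tol=1e-9):
    n = A.shape[0]
    sums = [sum(A[i, s[i]] for i in range(n)) for s in permutations(range(n))]
    return max(abs(x - sums[0]) for x in sums) < tol

rng = np.random.default_rng(3)
A = rng.standard_normal(5)[:, None] + rng.standard_normal(5)[None, :]
# enumerate all choices of 3 rows (alpha) and 3 columns (beta)
sub_ok = all(diag_magic(A[np.ix_(rows, cols)])
             for rows in combinations(range(5), 3)
             for cols in combinations(range(5), 3))
```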

Declarations

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 11501126, No. 11471122, No. 61502107), the Youth Natural Science Foundation of Jiangxi Province (No. 20151BAB211011, No. 20151BAB211014), the Education Department of Jiangxi Province (No. 15YB106), the Supporting the Development of Local Colleges and Universities Foundation of China (Applied Mathematics Innovative Team Building), the Graduate Student Innovation Foundation of Gannan Normal University (No. YCX14B002, No. YCX14B003), and the Natural Science Foundation of Anhui Province (No. 1508085MA12).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Mathematics and Computer Science, Gannan Normal University
(2)
Department of Mathematics, East China Normal University
(3)
Library, Gannan Normal University

References

  1. Zhan, X: Matrix Theory. Am. Math. Soc., Providence (2013)
  2. Ding, J, Rhee, N: When a matrix and its inverse are nonnegative. Mo. J. Math. Sci. 26, 98-103 (2014)
  3. Zhan, X: Extremal numbers of positive entries of imprimitive nonnegative matrices. Linear Algebra Appl. 424, 132-138 (2007)
  4. Horn, R, Johnson, C: Matrix Analysis. Cambridge University Press, Cambridge (1985)

Copyright

© Zhou et al. 2015