Open Access

Symmetric stochastic inverse eigenvalue problem

Journal of Inequalities and Applications 2015, 2015:180

https://doi.org/10.1186/s13660-015-0698-0

Received: 9 October 2014

Accepted: 18 May 2015

Published: 5 June 2015

Abstract

Firstly we study necessary and sufficient conditions for the constant row sums symmetric inverse eigenvalue problem to have a solution, and sufficient conditions for the symmetric stochastic inverse eigenvalue problem to have a solution. Then we introduce the concept of general solutions for the symmetric stochastic inverse eigenvalue problem and the concept of totally general solutions for the \(3\times3\) symmetric stochastic inverse eigenvalue problem. Finally we give necessary and sufficient conditions for the symmetric stochastic inverse eigenvalue problem of order 3 to have a general solution, and for it to have a totally general solution.

Keywords

inverse eigenvalue problem; symmetric stochastic inverse eigenvalue problem; typical orthogonal matrix

MSC

15A51; 15A18

1 Introduction

For a square matrix A, let \(\sigma (A)\) denote the spectrum of A. Given an n-tuple \(\Lambda =(\lambda_{1},\ldots,\lambda_{n})\) of numbers, the problem of deciding the existence of an \(n\times n\) matrix A (of a certain class) with \(\sigma (A)=\Lambda \) is called the inverse eigenvalue problem (of a certain class); it has long been one of the central problems in the theory of matrices.

Sufficient conditions for the existence of an entrywise nonnegative matrix A with \(\sigma (A)=\Lambda \) have been investigated by several authors, such as Borobia [1], Fiedler [2], Kellogg [3], Marijuan et al. [4] and Salzmann [5]. Soto and Rojo [6] investigated the existence of certain entrywise nonnegative matrices with a prescribed spectrum under certain conditions. Since stochastic matrices are important nonnegative matrices, it is natural to investigate the existence of stochastic matrices with a prescribed spectrum. It is well known [7] that the nonnegative inverse eigenvalue problem and the stochastic inverse eigenvalue problem are equivalent. Hwang et al. [8, 9] and [10] gave some interesting results for the symmetric stochastic inverse eigenvalue problem. In this work we mainly study the symmetric stochastic inverse eigenvalue problem.

For simplicity, we use SIEP for the Symmetric Inverse Eigenvalue Problem, SSIEP for the Symmetric Stochastic Inverse Eigenvalue Problem, CRS for Constant Row Sums, and SCRSIEP for the SIEP with Constant Row Sums.

Since the spectrum of an \(n\times n\) real symmetric matrix is a set of n real numbers, we use Λ to denote a real n-tuple \((\lambda _{1},\ldots,\lambda_{n})\), whose entries are not necessarily distinct, for the given possible spectrum of the \(n\times n\) SSIEP. For convenience, we may arrange Λ in non-increasing order, i.e., \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\). An n-tuple Λ is SS (resp. SCRS) realizable if there exists an \(n\times n\) symmetric stochastic matrix (resp. symmetric matrix with CRS) A with \(\sigma (A)=\Lambda \). In this case, A is called a SS (resp. SCRS) realization of Λ. For any two matrices A and B of order n, we say that B is permutationally similar to A if \(B=PAP^{T}\) for some permutation matrix P of order n.

Throughout the paper, \(e\in R^{n}\) denotes the all-ones column vector of dimension n, \(u_{n}\) denotes the unit vector \(\frac {1}{\sqrt{n}}e\), and \(I_{n}\) denotes the identity matrix of order n.

The paper is organized as follows. In Section 2, we give necessary and sufficient conditions for the SCRSIEP. In Section 3, we give sufficient conditions for the SSIEP. In Section 4, we introduce the concepts of the general solution for the SSIEP and of the totally general solution for the \(3\times3\) SSIEP; we give necessary and sufficient conditions for the \(2\times2\) and \(3\times3\) SSIEP to have a general solution, and for the \(3\times3\) SSIEP to have a totally general solution. Three examples are presented there.

2 Necessary and sufficient conditions for the SCRSIEP

The following result shows that any given set of real numbers is SCRS realizable.

Theorem 2.1

Let \(\Lambda =(\lambda_{1},\ldots,\lambda_{n})\) be a real n-tuple.
(1) An \(n\times n\) symmetric matrix A is a SCRS realization of Λ with CRS \(\lambda_{k}\) if and only if A can be written as
$$ A=S\operatorname{diag}(\lambda_{1},\ldots,\lambda_{n})S^{T}=\lambda_{1}\xi_{1}\xi_{1}^{T}+\lambda_{2}\xi_{2}\xi_{2}^{T}+\cdots+\lambda_{n}\xi_{n}\xi_{n}^{T}, $$
(2.1)
where \(S=(\xi_{1},\ldots,\xi_{n})\in R^{n\times n}\) is an orthogonal matrix (in column block form) with \(\xi_{k}=u_{n}\).

(2) The SCRSIEP for Λ always has a solution A given by (2.1).

Proof

(1) For the necessity, let A be a SCRS realization of Λ with CRS \(\lambda_{k}\); then \(\sigma (A)=\Lambda \) and \(Ae=\lambda_{k} e\), which means that \(u_{n}\) is a unit eigenvector of A associated with the eigenvalue \(\lambda_{k}\). The symmetry of A implies that A has orthonormal eigenvectors \(\xi_{1},\ldots,\xi_{n}\) such that \(A\xi_{j}=\lambda_{j}\xi_{j}\), \(j=1,2,\ldots,n\). Let \(S=(\xi_{1},\ldots,\xi_{n})\), \(D_{\Lambda }=\operatorname{diag}(\lambda_{1},\ldots,\lambda_{n})\); then S is an orthogonal matrix with \(\xi_{k}=u_{n}\) satisfying \(AS=SD_{\Lambda }\), and hence \(A=SD_{\Lambda }S^{T}\) is expressed as (2.1). For the sufficiency, let A be the \(n\times n\) symmetric matrix given in (2.1). Then \(AS=SD_{\Lambda }\). It follows that \(\xi_{j}\neq0\), \(A\xi_{j}=\lambda_{j}\xi _{j}\) for each \(j=1,2,\ldots,n\) and \(Ae=A\sqrt{n}u_{n}=\sqrt{n}A\xi_{k}=\sqrt{n}\lambda_{k}\xi_{k}=\sqrt {n}\lambda_{k}u_{n}=\lambda_{k}e\). Therefore, A is a SCRS realization of Λ with CRS \(\lambda_{k}\).

(2) It suffices to prove that there always exists an \(n\times n\) orthogonal matrix \(S=(\xi_{1},\ldots,\xi_{n})\) with \(\xi_{k}=u_{n}\) for any given \(k\in\{1,2,\ldots,n\}\), since the matrix A given by (2.1) is then a SCRS realization of Λ with CRS \(\lambda_{k}\) by (1). Assume, without loss of generality, that \(\xi_{1}=u_{n}\). Extending the vector \(\xi_{1}\) to an orthonormal basis of \(R^{n}\), we obtain the desired orthogonal matrix \(S=(\xi_{1},\ldots,\xi_{n})\). □
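The construction in this proof is easy to carry out numerically. The sketch below (our own illustration; the spectrum is a hypothetical choice) extends \(u_{n}\) to an orthonormal basis by Gram-Schmidt and forms A as in (2.1) with \(k=1\):

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

n = 4
lam = [3.0, 1.0, 0.5, -0.5]               # hypothetical spectrum (k = 1)
u = [1.0 / math.sqrt(n)] * n              # the vector u_n
# extend u_n to an orthonormal basis xi_1 = u_n, xi_2, ..., xi_n
seed = [u] + [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n - 1)]
xi = gram_schmidt(seed)
# A = sum_j lambda_j xi_j xi_j^T, formula (2.1)
A = [[sum(lam[k] * xi[k][p] * xi[k][q] for k in range(n)) for q in range(n)]
     for p in range(n)]
# A is symmetric with constant row sums lambda_1 ...
assert all(abs(sum(row) - lam[0]) < 1e-9 for row in A)
assert all(abs(A[p][q] - A[q][p]) < 1e-9 for p in range(n) for q in range(n))
# ... and each xi_j is an eigenvector of A for lambda_j
for k in range(n):
    Ax = [sum(A[p][q] * xi[k][q] for q in range(n)) for p in range(n)]
    assert all(abs(Ax[p] - lam[k] * xi[k][p]) < 1e-9 for p in range(n))
```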

It is obvious that if a SCRS realization A of a real n-tuple \(\Lambda =(\lambda_{1},\ldots,\lambda_{n})\) has CRS λ, then \(\lambda =\lambda_{k}\) for some \(k\in\{1,2,\ldots,n\}\). From the proof of Theorem 2.1 we can see that there are many (in fact infinitely many for \(n>2\)) SCRS realizations of Λ, each corresponding to a construction of S.

Definition 2.1

An \(n\times n\) typical orthogonal matrix is an orthogonal matrix with one column equal to \(\pm u_{n}\).

Remark 2.1

An \(n\times n\) matrix S is typical orthogonal if and only if \(PSQD\) is typical orthogonal for any permutation matrices P and Q and any diagonal matrix D with \(D^{2}=I_{n}\) (such a D is called a diagonal unipotent matrix).
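Definition 2.1 and Remark 2.1 can be checked mechanically. In the sketch below, the helper `is_typical_orthogonal` and the \(2\times2\) example are our own:

```python
import math

def is_typical_orthogonal(S, tol=1e-10):
    """True iff S is orthogonal and some column equals +u_n or -u_n."""
    n = len(S)
    for p in range(n):
        for q in range(n):
            dot = sum(S[i][p] * S[i][q] for i in range(n))
            if abs(dot - (1.0 if p == q else 0.0)) > tol:
                return False
    u = 1.0 / math.sqrt(n)
    return any(all(abs(S[i][j] - u) < tol for i in range(n)) or
               all(abs(S[i][j] + u) < tol for i in range(n))
               for j in range(n))

r = 1.0 / math.sqrt(2)
S2 = [[r, r], [r, -r]]
assert is_typical_orthogonal(S2)
# Remark 2.1: PSQD stays typical orthogonal.  Here P swaps the two rows,
# Q = I and D = diag(1, -1) flips the sign of the second column.
T = [[S2[1][0], -S2[1][1]], [S2[0][0], -S2[0][1]]]
assert is_typical_orthogonal(T)
# the identity matrix is orthogonal but not typical orthogonal
assert not is_typical_orthogonal([[1.0, 0.0], [0.0, 1.0]])
```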

Definition 2.2

Two \(n\times n\) typical orthogonal matrices M and H are equivalent if \(H=PMQD\) for some permutation matrices P and Q and some diagonal unipotent matrix D.

Theorem 2.2

Let \(\Lambda =(\lambda_{1},\ldots,\lambda_{n})\) (\(n>1\)) be a given real n-tuple, \(S=(\xi_{1},\ldots,\xi_{n})\), \(S'=(\xi_{1}',\ldots,\xi_{n}')\) be typical orthogonal matrices, and \(A=S\operatorname{diag}(\lambda_{1},\ldots,\lambda_{n})S^{T}\), \(A'=S'\operatorname{diag}(\lambda _{1},\ldots, \lambda_{n})S^{\prime\top}\). We have:
(1) If S and \(S'\) are equivalent, then \(A'\) is permutationally similar to A.

(2) If \(A'\) is permutationally similar to A and \(\lambda_{i}\neq\lambda_{j}\) for any \(i\neq j\), then S and \(S'\) are equivalent.

Proof

(1) If S, \(S'\) are equivalent, then \(S'=PSQD\) for some permutation matrices P, Q and diagonal unipotent matrix D. Denoting \(\operatorname{diag}(\lambda_{1},\ldots,\lambda_{n})\) by \(D_{\Lambda }\) we have
$$A'=S'D_{\Lambda }S^{\prime\top}=PSQDD_{\Lambda }DQ^{T}S^{T}P^{T}=PSD_{\Lambda }S^{T}P^{T}=PAP^{T} $$
and hence \(A'\) is permutationally similar to A.
(2) If \(A'\) is permutationally similar to A, then there exists a permutation matrix P such that \(S'D_{\Lambda }S^{\prime\top}=A'=PAP^{T}=PSD_{\Lambda }S^{T}P^{T} \) from which it follows that
$$D_{\Lambda }S^{T}P^{T}S'=S^{T}P^{T}S'D_{\Lambda }. $$
Let \(S^{T}P^{T}S'=D=(d_{ij})\), then the foregoing expression produces
$$ \lambda_{i}d_{ij}=d_{ij} \lambda_{j},\quad i,j=1,\ldots,n. $$
(2.2)
Now if \(\lambda_{i}\neq\lambda_{j}\) for any \(i\neq j\), then (2.2) yields \(d_{ij}=0\) for any \(i\neq j\). Therefore, \(S^{T}P^{T}S'=D\) is a diagonal orthogonal matrix with each diagonal entry equal to ±1, i.e. \(D^{2}=I_{n}\), which means D is a diagonal unipotent matrix. Finally, we have \(S'=PSD=PSI_{n}D\) (i.e., \(Q=I_{n}\)) and conclude that S and \(S'\) are equivalent. □

Remark 2.2

The symmetric matrix \(A=S\operatorname{diag}(\lambda _{1},\ldots,\lambda_{n})S^{T}\) is called a solution of SCRSIEP for \(\Lambda =(\lambda_{1},\ldots,\lambda_{n})\) associated with the typical orthogonal matrix S. Then we can say, by Theorem 2.2, that two solutions of the SCRSIEP for Λ associated with equivalent typical orthogonal matrices are permutationally similar to each other.

Lemma 2.1

If S is a typical orthogonal matrix of order \(n>1\), then S is indecomposable, i.e., S has no \(p\times q\) zero submatrix for positive integers p, q with \(p+q=n\).

Proof

We prove it by contradiction and assume that S has the block form:
$$S= \begin{pmatrix} B&0\\ D&C \end{pmatrix}, $$
where 0 is a \(p\times q\) zero matrix with \(p+q = n\). Since the column \(\pm u_{n}\) of S has no zero entry, it lies among the first \(n-q\) columns; its lower part (a column of D) together with the q columns of C then forms a system of \(q+1\) linearly independent vectors of dimension q, which is impossible. □

The next two lemmas present two nonequivalent \(n\times n\) typical orthogonal matrices \(S_{n}\), \(S_{n}^{*}\) for any \(n>1\):

Lemma 2.2

Let \(\beta =(\sqrt{n}-1)/(n-1)\), \(n>1\). Then the symmetric matrix
$$ S_{n}=(s_{ij})=\frac{1}{\sqrt{n}} \begin{pmatrix} 1&1&1&\cdots&1\\ 1&(2-n)\beta -1&\beta &\cdots&\beta \\ 1&\beta &(2-n)\beta -1&\cdots&\beta \\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&\beta &\beta &\cdots&(2-n)\beta -1 \end{pmatrix} $$
(2.3)
is a typical orthogonal matrix.

Proof

Noticing that β is the positive root of \((n-1)\beta ^{2}+2\beta -1=0\), we can verify
$$\xi_{j}^{T}\xi_{i}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1,&i=j,\\ 0,&i\neq j . \end{array}\displaystyle \right . $$
 □
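Lemma 2.2 can be confirmed numerically by building \(S_{n}\) from (2.3) and checking that its columns are orthonormal (a sketch; the function name is ours):

```python
import math

def S_matrix(n):
    """The symmetric typical orthogonal matrix S_n of (2.3)."""
    beta = (math.sqrt(n) - 1.0) / (n - 1.0)
    r = 1.0 / math.sqrt(n)
    S = [[r] * n for _ in range(n)]          # first row and column: 1/sqrt(n)
    for i in range(1, n):
        for j in range(1, n):
            S[i][j] = r * ((2 - n) * beta - 1.0) if i == j else r * beta
    return S

for n in range(2, 8):
    S = S_matrix(n)
    # columns are orthonormal
    for p in range(n):
        for q in range(n):
            dot = sum(S[i][p] * S[i][q] for i in range(n))
            assert abs(dot - (1.0 if p == q else 0.0)) < 1e-10
    # the first column is u_n and S is symmetric
    assert all(abs(S[i][0] - 1.0 / math.sqrt(n)) < 1e-10 for i in range(n))
    assert all(abs(S[i][j] - S[j][i]) < 1e-10
               for i in range(n) for j in range(n))
```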

A simple verification provides the next result.

Lemma 2.3

Let
$$ S^{*}_{n}= \bigl(s_{ij}^{*} \bigr)= \frac{1}{\sqrt{n}} \begin{pmatrix} 1&-\sqrt{\frac{n(n-1)}{n}}&0&0&\cdots&0\\ 1&\sqrt{\frac{n}{n(n-1)}}&-\sqrt{\frac{n(n-2)}{n-1}}&\ddots&0&0\\ 1&\sqrt{\frac{n}{n(n-1)}}&\sqrt{\frac{n}{(n-1)(n-2)}}&\ddots &\ddots&\vdots\\ \vdots&\vdots&\vdots&\ddots&-\sqrt{\frac {2n}{3}}&0\\ 1&\sqrt{\frac{n}{n(n-1)}}&\sqrt{\frac{n}{(n-1)(n-2)}}&\cdots &\sqrt{\frac{n}{2\cdot3}}&-\sqrt{\frac{n}{2}}\\ 1&\sqrt{\frac{n}{n(n-1)}}&\sqrt{\frac{n}{(n-1)(n-2)}}&\cdots &\sqrt{\frac{n}{2\cdot3}}&\sqrt{\frac{n}{2}} \end{pmatrix}, $$
(2.4)
i.e.,
$$s_{ij}^{*} =\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{1}{\sqrt{n}},& j=1; \\ ((n-j+1)(n-j+2))^{-1/2},& i\geq j>1;\\ -(\frac{n-i}{n-i+1})^{1/2}, & j=i+1;\\ 0, & \textit{otherwise}. \end{array}\displaystyle \right . $$
Then \(S^{*}_{n}\) is a non-symmetric typical orthogonal matrix for any integer \(n>1\).
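The entrywise formula for \(s_{ij}^{*}\) can be checked directly; in the sketch below (the function name is ours) the matrix is built from that formula with 1-based indices, and column orthonormality is verified for several n:

```python
import math

def S_star(n):
    """S*_n of (2.4), built from the entrywise formula (1-based i, j)."""
    S = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if j == 1:
                S[i - 1][j - 1] = 1.0 / math.sqrt(n)
            elif j == i + 1:
                S[i - 1][j - 1] = -math.sqrt((n - i) / (n - i + 1.0))
            elif j <= i:
                S[i - 1][j - 1] = 1.0 / math.sqrt((n - j + 1.0) * (n - j + 2.0))
            # otherwise the entry stays 0
    return S

for n in range(2, 8):
    S = S_star(n)
    for p in range(n):
        for q in range(n):
            dot = sum(S[i][p] * S[i][q] for i in range(n))
            assert abs(dot - (1.0 if p == q else 0.0)) < 1e-10
    # the first column is u_n
    assert all(abs(S[i][0] - 1.0 / math.sqrt(n)) < 1e-10 for i in range(n))
```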

Theorem 2.1, together with Lemma 2.2 and Lemma 2.3, produces the following result for the SCRSIEP.

Theorem 2.3

Let \(\Lambda =(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) (\(n>1\), \(\lambda_{1}\geq \lambda_{2}\geq\cdots\geq\lambda_{n}\)), \(D_{\Lambda }=\operatorname{diag}(\lambda _{1},\ldots,\lambda_{n})\), and let \(S_{n}\), \(S^{*}_{n}\) be the typical orthogonal matrices given by (2.3), (2.4), respectively. Then both \(A=S_{n}D_{\Lambda }S_{n}^{T}\) and \(A^{*}=S^{*}_{n}D_{\Lambda }S_{n}^{*T}\) are symmetric matrices with CRS equal to \(\lambda _{1}\) and \(\sigma (A)=\sigma (A^{*})=\Lambda \).

Up to equivalence, there is only one \(2\times2\) typical orthogonal matrix with the first column equal to \(u_{2}\), i.e.
$$S_{2}= \begin{pmatrix} \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}} \end{pmatrix}. $$
But for any \(n\geq3\) there are infinitely many \(n\times n\) typical orthogonal matrices with the first column equal to \(u_{n}\), as shown in the next lemma.

Lemma 2.4

Any \(3\times3\) typical orthogonal matrix having no zero entry must be equivalent to
$$ S_{3}(x)= \begin{pmatrix} \frac{1}{\sqrt{3}}&\frac{1}{\sqrt {2(x^{2}+x+1)}}&\frac{2x+1}{\sqrt{6(x^{2}+x+1)}}\\ \frac{1}{\sqrt{3}}&-\frac{x+1}{\sqrt{2(x^{2}+x+1)}}&\frac{1-x}{\sqrt {6(x^{2}+x+1)}}\\\frac{1}{\sqrt{3}}&\frac{x}{\sqrt {2(x^{2}+x+1)}}&-\frac{x+2}{\sqrt{6(x^{2}+x+1)}} \end{pmatrix} $$
(2.5)
for some \(x\in(0,1)\).

Proof

Let S be a \(3\times3\) typical orthogonal matrix with no zero entry. There is no loss of generality in assuming that S is equivalent to
$$S_{3}(u,v,x,y)= \begin{pmatrix} \frac{1}{\sqrt{3}}&u&v\\ \frac{1}{\sqrt{3}}&-(x+1)u&yv\\\frac{1}{\sqrt{3}}&xu&-(y+1)v \end{pmatrix} , $$
where u, v, x, y are positive parameters. Orthonormality of the columns of \(S_{3}(u,v,x,y)\) implies
$$u=\frac{1}{\sqrt{2(x^{2}+x+1)}},\qquad v=\frac{1}{\sqrt{2(y^{2}+y+1)}}, $$
and
$$ x(1+y)+y(1+x)=1. $$
(2.6)
It follows from (2.6) that \(y=(1-x)/(2x+1)>0\), which yields \(x<1\) and
$$y^{2}+y+1=3 \bigl(x^{2}+x+1 \bigr)/(2x+1)^{2}. $$
Therefore, \(S_{3}(u,v,x,y)=S_{3}(x)\), \(0< x<1\), obeys the expression as shown in (2.5). □
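For a quick numerical confirmation of (2.5), the sketch below (our own illustration) checks orthonormality of \(S_{3}(x)\) on a grid of \(x\in[0,1]\):

```python
import math

def S3(x):
    """The typical orthogonal matrix S_3(x) of (2.5)."""
    a = math.sqrt(2.0 * (x * x + x + 1.0))
    b = math.sqrt(6.0 * (x * x + x + 1.0))
    r = 1.0 / math.sqrt(3.0)
    return [[r, 1.0 / a, (2.0 * x + 1.0) / b],
            [r, -(x + 1.0) / a, (1.0 - x) / b],
            [r, x / a, -(x + 2.0) / b]]

for k in range(11):                      # x = 0, 0.1, ..., 1
    x = k / 10.0
    S = S3(x)
    for p in range(3):
        for q in range(3):
            dot = sum(S[i][p] * S[i][q] for i in range(3))
            assert abs(dot - (1.0 if p == q else 0.0)) < 1e-10
```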

Theorem 2.4

Any \(3\times3\) typical orthogonal matrix must be equivalent to the matrix \(S_{3}(x)\) given by (2.5) for some \(x\in[0,1]\). Any solution of the \(3\times3\) SCRSIEP for \(\Lambda =(\lambda_{1},\lambda _{2},\lambda_{3})\) must be permutationally similar to
$$ A(x)=S_{3}(x)\operatorname{diag}(\lambda_{1}, \lambda _{2},\lambda_{3})S_{3}^{T}(x) $$
(2.7)
for some \(x\in[0,1]\).

Proof

Let S be a \(3\times3\) typical orthogonal matrix. Then S cannot have two zero entries in the same row or column by Lemma 2.1. If S has two zero entries in different rows and columns, then the inner product of the two columns containing them is surely nonzero, a contradiction to the orthogonality of S. Therefore, S has at most one zero entry. If S has no zero entry, then the first assertion holds by Lemma 2.4. If S has exactly one zero entry, then S must be equivalent to both \(S'=(u_{3},\beta ,\zeta)\) and \(S''=(u_{3},\zeta, \beta )\) with \(\beta =(1/\sqrt{2},-1/\sqrt {2},0)^{\top}\). Now the orthogonality of S produces \(\zeta=(1/\sqrt {6},1/\sqrt{6},-\sqrt{2/3})^{\top}\), and hence \(S'=S_{3}(0)\), \(S''=S_{3}(1)\). Therefore, the first assertion holds.

The second assertion holds by Theorem 2.1 and the first assertion. □

Remark 2.3

It is interesting to note that the \(3\times3\) typical orthogonal matrix \(S^{*}_{3}\) given by (2.4) is equivalent to both \(S_{3}(0)\) and \(S_{3}(1)\), and \(S_{3}\) given by (2.3) is equivalent to \(S_{3}(\frac{\sqrt{3}-1}{2})\).

The following are some important \(4\times4\) indecomposable typical orthogonal matrices:
$$\begin{aligned}& S_{4}=\frac{1}{2} \begin{pmatrix} 1&1&1&1\\ 1&-\frac{5}{3}&\frac{1}{3}&\frac{1}{3}\\ 1&\frac{1}{3}&-\frac{5}{3}&\frac{1}{3}\\ 1&\frac{1}{3}&\frac {1}{3}&-\frac{5}{3} \end{pmatrix},\qquad S_{4}^{*}=\frac{1}{2} \begin{pmatrix} 1&-\sqrt{3}&0&0\\ 1&\frac{1}{\sqrt{3}}&-\frac{4}{\sqrt{6}}&0\\ 1&\frac{1}{\sqrt{3}}&\frac{2}{\sqrt{6}}&\sqrt{2}\\ 1&\frac{1}{\sqrt{3}}&\frac{2}{\sqrt{6}}&-\sqrt{2} \end{pmatrix}, \\& S'_{4}(x)= \begin{pmatrix} \frac{1}{2}&\frac{-\sqrt{3}}{2}&0&0\\ \frac{1}{2}&\frac{1}{2\sqrt{3}}&\frac{1}{\sqrt{2(x^{2}+x+1)}}&\frac {2x+1}{\sqrt{6(x^{2}+x+1)}} \\ \frac{1}{2}&\frac{1}{2\sqrt{3}}&\frac{-(x+1)}{\sqrt {2(x^{2}+x+1)}}&\frac{1-x}{\sqrt{6(x^{2}+x+1)}}\\ \frac{1}{2}&\frac {1}{2\sqrt{3}}&\frac{x}{\sqrt{2(x^{2}+x+1)}}&\frac{-x-2}{\sqrt {6(x^{2}+x+1)}} \end{pmatrix},\quad 0 \leq x\leq1, \end{aligned}$$
(2.8)
$$\begin{aligned}& S''_{4}=\frac{1}{2} \begin{pmatrix} 1&1&\sqrt{2}&0\\ 1&1&-\sqrt{2}&0\\ 1&-1&0&\sqrt{2}\\ 1&-1&0&-\sqrt{2} \end{pmatrix}, \end{aligned}$$
(2.9)
$$\begin{aligned}& S_{4}(x)= \begin{pmatrix} \frac{1}{2}&\frac{3x+1}{\sqrt {18x^{4}+24x^{3}+32x^{2}+16x+6}}&\frac{1}{\sqrt{6x^{2}+4x+2}}&\frac {3x+1}{\sqrt{12x^{2}+8x+12}}\\ \frac{1}{2}& \frac{-3x^{2}-3x-2}{\sqrt{18x^{4}+24x^{3}+32x^{2}+16x+6}}&\frac{x}{\sqrt {6x^{2}+4x+2}}&\frac{1-x}{\sqrt{12x^{2}+8x+12}}\\ \frac{1}{2}&\frac {1-x}{\sqrt{18x^{4}+24x^{3}+32x^{2}+16x+6}}&\frac{-(2x+1)}{\sqrt {6x^{2}+4x+2}}&\frac{1-x}{\sqrt{12x^{2}+8x+12}}\\ \frac{1}{2}&\frac {3x^{2}+x}{\sqrt{18x^{4}+24x^{3}+32x^{2}+16x+6}}&\frac{x}{\sqrt {6x^{2}+4x+2}}&\frac{-x-3}{\sqrt{12x^{2}+8x+12}} \end{pmatrix}. \end{aligned}$$
(2.10)
Note that \(S^{*}_{4}\) is equivalent to both \(S_{4}(0)\) and \(S_{4}(1)\); \(S_{4}\) is equivalent to \(S_{4}(\frac{1}{3})\) and \(S''_{4}\) cannot be equivalent to any \(S_{4}(x)\), \(x\in[0,1]\).
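These claims can be spot-checked numerically. The sketch below (our own illustration) verifies that \(S''_{4}\) is typical orthogonal with first column \(u_{4}\), and that \(S_{4}\) is the \(n=4\) instance of (2.3):

```python
import math

s = math.sqrt(2.0) / 2.0
S4pp = [[0.5, 0.5, s, 0.0],
        [0.5, 0.5, -s, 0.0],
        [0.5, -0.5, 0.0, s],
        [0.5, -0.5, 0.0, -s]]          # S''_4 of (2.9)

def check_orthogonal(S, tol=1e-10):
    """Columns of S are orthonormal."""
    n = len(S)
    return all(abs(sum(S[i][p] * S[i][q] for i in range(n))
                   - (1.0 if p == q else 0.0)) < tol
               for p in range(n) for q in range(n))

assert check_orthogonal(S4pp)
assert all(abs(S4pp[i][0] - 0.5) < 1e-10 for i in range(4))  # first column u_4

# S_4 is (2.3) with n = 4: beta = 1/3 and (2 - n)*beta - 1 = -5/3
beta = (math.sqrt(4) - 1.0) / 3.0
assert abs(beta - 1.0 / 3.0) < 1e-10
S4 = [[0.5 * (1.0 if 0 in (i, j) else
              ((2 - 4) * beta - 1.0 if i == j else beta))
       for j in range(4)] for i in range(4)]
assert check_orthogonal(S4)
```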

3 Sufficient conditions for the SSIEP

In this section we give two sets of sufficient conditions for the SSIEP. It is obvious that A is a solution of the SSIEP if and only if A is a solution of the SCRSIEP with \(\rho(A)=1\) and A is (entrywise) nonnegative. It is well known that
$$ 1=\lambda_{1}\geq\lambda_{2}\geq\cdots\geq \lambda_{n} \geq-1 $$
(3.1)
and
$$ 1+\lambda_{2}+\cdots+\lambda_{n}\geq0, $$
(3.2)
are necessary for the non-increasing n-tuple Λ to be SS realizable.

Theorem 3.1

Let \(\Lambda =(\lambda_{1}=1,\lambda_{2},\ldots ,\lambda_{n})\) (\(n>1\)) satisfy (3.1) and
$$ \frac{1}{n}+\frac{\lambda_{2}}{n(n-1)}+\frac{\lambda _{3}}{(n-1)(n-2)}+\cdots+ \frac{\lambda_{n}}{2\cdot1}\geq0. $$
(3.3)
Then Λ is SS realizable. Moreover, the realizing matrix is (entrywise) positive if \(1>\lambda_{2}\) and (3.3) is a strict inequality.

Proof

Let \(D=\operatorname{diag}(1,\lambda_{2},\ldots,\lambda_{n})\) and \(A=S^{*}_{n} DS_{n}^{*T}\), where \(S^{*}_{n}=(s_{ij})\) is the \(n\times n\) typical orthogonal matrix given in (2.4). Then A is symmetric with CRS equal to 1 and \(\sigma (A)=\Lambda \) by Theorem 2.1.

In order to prove the first assertion we only need to verify the nonnegativity of A when (3.1) and (3.3) hold. For any \(i=1,2,\ldots,n-1\), we have
$$\begin{aligned}& \begin{aligned}[b] a_{ii}&= e^{T}_{i}S^{*}_{n}DS_{n}^{*T}e_{i}=s^{2}_{i1}+ \lambda _{2}s^{2}_{i2}+\cdots+\lambda_{i+1} s^{2}_{i,i+1} \\ &= \frac{1}{n}+\frac{\lambda_{2}}{n(n-1)}+\cdots+\frac{\lambda _{i}}{(n-i+2)(n-i+1)}+ \frac{(n-i)\lambda_{i+1}}{(n-i+1)}; \end{aligned} \\& a_{nn}=\frac{1}{n}+\frac{\lambda_{2}}{n(n-1)}+\frac{\lambda _{3}}{(n-1)(n-2)}+ \cdots+\frac{\lambda_{n-1}}{3\cdot2}+\frac{\lambda _{n}}{2\cdot1}=a_{n-1,n-1}. \end{aligned}$$
Since \(1=\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\), we obtain by simple calculation
$$a_{i-1,i-1}-a_{ii}=\frac{n-i}{n-i+1}(\lambda_{i}- \lambda_{i+1})\geq0 $$
for any \(i=2,\ldots,n-1\). Therefore, condition (3.3) guarantees that
$$a_{11}\geq\cdots\geq a_{nn}\geq0. $$
Now let us prove that \(a_{ij}\geq0\) for all \(1\leq i< j\leq n\). In fact, since
$$\begin{aligned} a_{ij}&=e^{T}_{i}S^{*}_{n}DS_{n}^{*T}e_{j}=s_{i1}s_{j1}+ \lambda _{2}s_{i2}s_{j2}+\cdots+ \lambda_{n}s_{in}s_{jn} \\ &=\frac{1}{n}+\frac{\lambda_{2}}{n(n-1)}+\cdots+\frac{\lambda _{i}}{(n-i+2)(n-i+1)}- \frac{\lambda_{i+1}}{n-i+1}, \end{aligned}$$
we have (by (3.1))
$$\begin{aligned} a_{ij} \geq& \biggl(\frac{1}{n}+\frac{\lambda_{2}}{n(n-1)}+\cdots+ \frac {\lambda_{i-1}}{(n-i+3)(n-i+2)} \biggr)+\frac{\lambda _{i}}{(n-i+2)(n-i+1)}-\frac{\lambda_{i}}{n-i+1} \\ =& \biggl(\frac{1}{n}+\frac{\lambda_{2}}{n(n-1)}+\cdots+\frac{\lambda _{i-1}}{(n-i+3)(n-i+2)} \biggr)-\frac{\lambda_{i}}{n-i+2} \\ \geq& \biggl(\frac{1}{n}+\frac{\lambda_{2}}{n(n-1)}+\cdots+\frac{\lambda _{i-2}}{(n-i+4)(n-i+3)} \biggr)-\frac{\lambda_{i-1}}{n-i+3} \\ \cdots& \\ \geq& \frac{1}{n}-\frac{1}{n}=0 \end{aligned}$$
for \(1\leq i< j\leq n\). Hence A is nonnegative, and the first assertion is proved.

It is easy to see that if \(1> \lambda_{2}\) and (3.3) is a strict inequality, then \(a_{ij}>0\) for \(1\leq i\leq j\leq n\). The second assertion is proved. □
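Theorem 3.1 can be illustrated numerically. In the sketch below, the spectrum is a hypothetical example satisfying (3.1) with \(1>\lambda_{2}\) and strict inequality in (3.3), so the realization built from \(S^{*}_{n}\) should be entrywise positive:

```python
import math

lam = [1.0, 0.5, 0.0, -0.3]            # hypothetical spectrum, non-increasing
n = len(lam)
# condition (3.3): 1/n + lam_2/(n(n-1)) + ... + lam_n/(2*1)
cond = 1.0 / n + sum(lam[j] / ((n - j + 1.0) * (n - j)) for j in range(1, n))
assert cond > 0 and lam[1] < 1.0       # strict, so A should be positive

def S_star(n):
    """S*_n of (2.4), from the entrywise formula (1-based i, j)."""
    S = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if j == 1:
                S[i - 1][j - 1] = 1.0 / math.sqrt(n)
            elif j == i + 1:
                S[i - 1][j - 1] = -math.sqrt((n - i) / (n - i + 1.0))
            elif j <= i:
                S[i - 1][j - 1] = 1.0 / math.sqrt((n - j + 1.0) * (n - j + 2.0))
    return S

S = S_star(n)
A = [[sum(lam[k] * S[p][k] * S[q][k] for k in range(n)) for q in range(n)]
     for p in range(n)]
assert min(min(row) for row in A) > 0                   # entrywise positive
assert all(abs(sum(row) - 1.0) < 1e-10 for row in A)    # stochastic
```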

Remark 3.1

The sufficient condition (3.3) for the SSIEP was first given in Theorem 8 of [11]. The first assertion of Theorem 3.1 is the same as the first assertion of Theorem 4 of [8], but the proof presented here is simpler. Meanwhile, Fang [12] disproved the second assertion made in Theorem 4 of [8], which says that Λ is SS realizable by a positive matrix A if \(1>\lambda_{2}\), together with (3.1) and (3.3). Our Theorem 3.1 shows that this assertion does hold if \(1> \lambda_{2}\) and (3.3) is a strict inequality.

Theorem 3.2

Let \(\Lambda =(\lambda_{1}=1,\lambda_{2},\ldots ,\lambda_{n})\) (\(n>1\)) satisfy (3.1), (3.2), and \(\beta =\frac{\sqrt {n}-1}{n-1}\). If
$$\begin{aligned}& 1+\beta ^{2}(\lambda_{2}+\cdots+ \lambda_{n-1})+ \bigl((n-2)\beta +1 \bigr)^{2} \lambda_{n} \geq0, \end{aligned}$$
(3.4)
$$\begin{aligned}& 1- \bigl((n-2)\beta +1 \bigr)\lambda_{2}+\beta ( \lambda_{3}+\cdots+\lambda_{n})\geq0, \end{aligned}$$
(3.5)
$$\begin{aligned}& 1- \bigl((n-2)\beta +1 \bigr) (\lambda_{2}+ \lambda_{3})+\beta (\lambda_{4}+\cdots+\lambda _{n}) \geq0, \end{aligned}$$
(3.6)
then Λ is SS realizable. Moreover, the realizing matrix is positive if (3.2), (3.4), (3.5), and (3.6) are all strict inequalities.

Proof

Let \(D=\operatorname{diag}(1,\lambda_{2},\ldots,\lambda_{n})\) and \(A=S_{n}DS_{n}^{T}\), where \(S_{n}=(s_{ij})\) is the typical orthogonal matrix given in (2.3). Then A is a symmetric matrix with CRS equal to 1 and \(\sigma (A)=\Lambda \) by Theorem 2.1. In order to prove the first assertion we only need to verify that A is nonnegative under the conditions (3.2), (3.4), (3.5), and (3.6).

In fact (3.2) yields \(a_{11}=1+\lambda_{2}+\cdots+\lambda_{n}\geq0\); (3.1) and (3.4) yield
$$\begin{aligned} a_{ii}&=s^{2}_{i1}+\lambda_{2}s^{2}_{i2}+ \cdots+\lambda_{n} s^{2}_{in} \\ &=\frac{1}{n}\biggl(1+ \bigl((n-2)\beta +1 \bigr)^{2} \lambda_{i}+ \beta ^{2}\sum_{k\notin\{1,i\}}{ \lambda_{k}}\biggr) \\ &\geq\frac{1}{n} \Biggl(1+ \bigl((n-2)\beta +1 \bigr)^{2} \lambda_{n}+\beta ^{2}\sum_{k=2}^{n-1}{ \lambda_{k} } \Biggr)\geq0 \end{aligned}$$
for \(i = 2,\ldots,n\); (3.1) and (3.5) yield
$$\begin{aligned}[b] a_{1j}&=s_{11}s_{j1}+\cdots+\lambda_{n}s_{1n}s_{jn} \\ &=\frac{1}{n} \biggl(1- \bigl((n-2)\beta +1 \bigr)\lambda_{j}+\beta \sum_{k\notin\{ 1,j\}}{\lambda_{k} } \biggr) \\ &\geq\frac{1}{n} \Biggl(1- \bigl((n-2)\beta +1 \bigr)\lambda_{2}+ \beta \sum_{k=3}^{n}{\lambda _{k} } \Biggr)\geq0 \end{aligned} $$
for \(j=2,\ldots,n\); and finally (3.1) and (3.6) yield
$$\begin{aligned} a_{ij}&=s_{i1}s_{j1}+\cdots+\lambda_{n}s_{in}s_{jn} \\ &=\frac{1}{n} \biggl(1- \bigl((n-2)\beta +1 \bigr) (\lambda_{i}+ \lambda_{j})+\beta \sum_{k\notin\{1,i,j\}}{ \lambda_{k} } \biggr) \\ &\geq\frac{1}{n} \Biggl(1- \bigl((n-2)\beta +1 \bigr) (\lambda_{2}+ \lambda_{3})+\beta \sum_{k=4}^{n}{ \lambda_{k} } \Biggr) \geq0 \end{aligned}$$
for \(2\leq i<j\leq n\). So A is nonnegative, and the first assertion is proved. The proof of the second assertion is trivial. □
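Similarly, Theorem 3.2 can be illustrated with a hypothetical spectrum making (3.2) and (3.4)-(3.6) strict, so the realization built from \(S_{n}\) should be positive:

```python
import math

lam = [1.0, 0.2, 0.1, -0.2]            # hypothetical spectrum
n = len(lam)
beta = (math.sqrt(n) - 1.0) / (n - 1.0)
c = (n - 2) * beta + 1.0
# conditions (3.2), (3.4), (3.5), (3.6), all strict here
assert sum(lam) > 0
assert 1.0 + beta ** 2 * sum(lam[1:n - 1]) + c ** 2 * lam[n - 1] > 0
assert 1.0 - c * lam[1] + beta * sum(lam[2:]) > 0
assert 1.0 - c * (lam[1] + lam[2]) + beta * sum(lam[3:]) > 0

r = 1.0 / math.sqrt(n)
S = [[r] * n for _ in range(n)]        # S_n of (2.3)
for i in range(1, n):
    for j in range(1, n):
        S[i][j] = r * ((2 - n) * beta - 1.0) if i == j else r * beta

A = [[sum(lam[k] * S[p][k] * S[q][k] for k in range(n)) for q in range(n)]
     for p in range(n)]
assert min(min(row) for row in A) > 0                   # entrywise positive
assert all(abs(sum(row) - 1.0) < 1e-10 for row in A)    # stochastic
```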

4 The general solution and the totally general solution for the \(3\times3\) SSIEP

Definition 4.1

The general solution of the SSIEP for \(\Lambda =(\lambda _{1},\ldots,\lambda_{n})\) is a set Ω of \(n\times n\) symmetric stochastic matrices such that each element of Ω is a solution of the SSIEP for Λ and each solution of the SSIEP for Λ is permutationally similar to one element of Ω.

Since any possible \(2\times2\) typical orthogonal matrix is equivalent to
$$S=\frac{1}{\sqrt{2}} \begin{pmatrix} 1&-1\\ 1&1 \end{pmatrix}, $$
any \(2\times2\) symmetric stochastic matrix A with \(\sigma (A)=\Lambda =(1,\lambda_{2})\) (\(|\lambda_{2}|\leq1\)) must be permutationally similar to
$$ S\operatorname{diag}(1,\lambda_{2})S^{T} = \frac{1}{2} \begin{pmatrix} 1+\lambda_{2}&1-\lambda_{2}\\ 1-\lambda_{2}&1+\lambda_{2} \end{pmatrix} $$
(4.1)
by Theorem 2.1 and Theorem 2.2. Therefore, the general solution of the \(2\times2\) SSIEP for a given \(\Lambda =(1,\lambda_{2})\) exists if and only if \(|\lambda_{2}|\leq1\), and it consists of the single matrix given in (4.1).

For \(n=3\) we have proved (Theorem 2.4) that any solution of the \(3\times3\) SIEP with unit row sums must be permutationally similar to \(A(x)=S_{3}(x)\operatorname{diag}(\lambda _{1},\lambda_{2},\lambda_{3})S_{3}^{T}(x)\) for some \(x\in[0,1]\), where \(S_{3}(x)\) is given in (2.5). This motivates the following definition for the special case \(n=3\).

Definition 4.2

If \(A(x)=S_{3}(x)\operatorname{diag}(1,\lambda_{2},\lambda_{3})S_{3}^{T}(x)\) is a solution of the SSIEP for \(\Lambda =(1,\lambda_{2},\lambda_{3})\) for each \(x\in[0,1]\), then the matrix family \(\{A(x),0\leq x\leq1\} \) is called the totally general solution of the SSIEP for \(\Lambda =(1,\lambda_{2},\lambda_{3})\).

Note that the general solution of the SSIEP for any \(\Lambda =(1,\lambda _{2},\lambda_{3})\) is a subset of \(\{A(x),0\leq x\leq1\}\), and the two need not be equal. In other words, for the SSIEP a totally general solution is always a general solution, but the converse is not always true (see Example 4.2).

Lemma 4.1

Let \(\Lambda =(1,\lambda_{2},\lambda_{3})\) satisfy (3.1) and (3.2); \(S_{3}(x)= (u_{3},\xi_{2},\xi_{3} )\) (\(0\leq x\leq1\)) be the \(3\times3\) typical orthogonal matrix given by (2.5), and \(A(x)=S_{3}(x)\operatorname{diag}(1,\lambda _{2},\lambda_{3})S_{3}(x)^{T}\). Then \(A(x)\) is nonnegative if and only if \(1+2\lambda_{3}\geq0\) and \(2-3\lambda_{2}+\lambda_{3}\geq0\).

Proof

We have
$$\begin{aligned} A(x)={}& \bigl(a_{ij}(x) \bigr)=u_{3}u_{3}^{T}+ \lambda_{2}\xi_{2}\xi _{2}^{T}+ \lambda_{3}\xi_{3}\xi_{3}^{T} \\ ={}&\frac{1}{3} \begin{pmatrix} 1&1&1\\ 1&1&1\\ 1&1&1 \end{pmatrix} + \frac{\lambda_{2}}{\alpha } \begin{pmatrix} 1&-1-x&x\\ -1-x&(1+x)^{2}&-x-x^{2}\\ x&-x-x^{2}&x^{2} \end{pmatrix} \\ &{}+\frac{\lambda_{3}}{3\alpha } \begin{pmatrix} (1+2x)^{2}&1+x-2x^{2}&-2-5x-2x^{2}\\ 1+x-2x^{2}&(1-x)^{2}&-2+x+x^{2}\\ -2-5x-2x^{2}&-2+x+x^{2}&(2+x)^{2} \end{pmatrix}, \end{aligned}$$
(4.2)
where \(\alpha =2x^{2}+2x+2>0\) for all \(x\in[0,1]\). Then
$$\begin{aligned}& 3\alpha a_{11}(x)=(2+4\lambda_{3})x^{2}+(2+4\lambda _{3})x+2+3\lambda_{2}+\lambda_{3}, \\& 3\alpha a_{22}(x)=(2+3\lambda_{2}+\lambda _{3})x^{2}+(2+6 \lambda_{2}-2\lambda_{3})x+2+3\lambda_{2}+ \lambda_{3}, \\& 3\alpha a_{33}(x)=(2+3\lambda_{2}+\lambda _{3})x^{2}+(2+4 \lambda_{3})x+2+4\lambda_{3}, \\& 3\alpha a_{12}(x)=(2-2\lambda_{3})x^{2}+(2-3\lambda _{2}+\lambda_{3})x+2-3\lambda_{2}+ \lambda_{3}, \\& 3\alpha a_{13}(x)=(2-2\lambda_{3})x^{2}+(2+3\lambda _{2}-5\lambda_{3})x+2-2\lambda_{3}, \\& 3\alpha a_{23}(x)=(2-3\lambda_{2}+\lambda _{3})x^{2}+(2-3 \lambda_{2}+\lambda_{3})x+2-2\lambda_{3}. \end{aligned}$$
Since \(3\alpha (a_{11}(x)-a_{33}(x))=3(1-x^{2})\lambda_{2}-3(1-x^{2})\lambda _{3}=3(1-x^{2})(\lambda_{2}-\lambda_{3})\geq0\), we conclude \(a_{11}(x)\geq a_{33}(x)\), \(0\leq x\leq1\). Then if \(2+3\lambda_{2}+\lambda_{3}>0\) we have
$$\begin{aligned} \min_{0\leq x\leq1}3\alpha a_{33}(x)&\geq\min _{x\in\textit{R}}3\alpha a_{33}(x) \\ &=2+4\lambda_{3}-\frac{(2+4\lambda_{3})^{2}}{4(2+3\lambda_{2}+\lambda _{3})}=\frac{3(1+2\lambda_{2})(1+2\lambda_{3})}{2+3\lambda_{2}+\lambda_{3}} \end{aligned}$$
and if \(2+3\lambda_{2}+\lambda_{3}\leq0\), we have
$$\min_{0\leq x\leq1}3\alpha a_{33}(x)=\min \bigl\{ 2(1+2 \lambda_{3}),3(2+\lambda _{2}+3\lambda_{3}) \bigr\} . $$
Note that \(\lambda_{2}\geq\lambda_{3}\) leads to \(2+3\lambda_{2}+\lambda _{3}\geq2+\lambda_{2}+3\lambda_{3}\geq2(1+2\lambda_{3})\) and \(1+2\lambda _{2}\geq1+2\lambda_{3}\). If \(1+2\lambda_{3}\geq0\), then
$$\min_{0\leq x\leq1}3\alpha a_{33}(x)\geq\min \biggl\{ 2(1+2 \lambda_{3}),\frac {3(1+2\lambda_{2})(1+2\lambda_{3})}{2+3\lambda_{2}+\lambda_{3}} \biggr\} \geq0. $$
On the other hand, if \(a_{33}(x)\geq0\) for all \(x\in[0,1]\), then
$$1+2\lambda_{3} = \frac{1}{2}3\alpha a_{33}(0)\geq \frac{1}{2} \min_{0\leq x\leq1}3\alpha a_{33}(x)\geq0. $$
Therefore, \(a_{33}(x)\geq0 \Leftrightarrow 1+2\lambda_{3}\geq0\).
Since \(2-2\lambda_{3}\geq0\) and \(2+3\lambda_{2}+\lambda_{3}\geq 2(1+\lambda_{2}+\lambda_{3})\geq0\) by (3.2) we have
$$\min_{0\leq x\leq1}a_{22}(x)=\min_{0\leq x\leq1} \frac{1}{3\alpha } \bigl((2+3\lambda_{2}+\lambda_{3})x^{2}+(2+6 \lambda _{2}-2\lambda_{3})x+2+3\lambda_{2}+ \lambda_{3} \bigr)\geq 0. $$
Now \(2+3\lambda_{2}-5\lambda_{3}\geq2-2\lambda_{3}\geq2-3\lambda _{2}+\lambda_{3}\) provides \(a_{13}(x)\geq a_{23}(x)\), \(0\leq x\leq1\). Moreover, \(3\alpha (a_{23}(x)-a_{12}(x))=3(1-x^{2})(\lambda_{2}-\lambda _{3})\geq0\) yields
$$\min \bigl\{ a_{12}(x),a_{13}(x),a_{23}(x) \bigr\} =a_{12}(x),\quad 0\leq x\leq1. $$
Now if \(2-2\lambda_{3}>0\), then
$$\begin{aligned} \min_{0\leq x\leq1} 3\alpha a_{12}(x)&\geq \min _{x\in R} 3\alpha a_{12}(x) \\ &=2-3\lambda_{2}+\lambda_{3}-\frac{(2-3\lambda_{2}+\lambda _{3})^{2}}{4(2-2\lambda_{3})} \\ &=\frac{3(2+\lambda_{2}-3\lambda_{3})(2-3\lambda_{2}+\lambda _{3})}{4(2-2\lambda_{3})}. \end{aligned}$$
If \(2-2\lambda_{3}=0\), then
$$3\alpha a_{12}(x)=(2-3\lambda_{2}+\lambda_{3}) (1+x). $$
Since \(2+\lambda_{2}-3\lambda_{3}\geq0\), \(1+x>0\), we have
$$a_{12}(x)\geq0 \quad\Leftrightarrow\quad 2-3\lambda_{2}+ \lambda_{3}\geq0. $$
Therefore, \(A(x)\) is nonnegative if and only if \(1+2\lambda_{3}\geq0\) and \(2-3\lambda_{2}+\lambda_{3}\geq0\). □
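A grid check of Lemma 4.1, with two hypothetical spectra (our own examples), one satisfying and one violating the condition \(2-3\lambda_{2}+\lambda_{3}\geq0\):

```python
import math

def S3(x):
    """The typical orthogonal matrix S_3(x) of (2.5)."""
    a = math.sqrt(2.0 * (x * x + x + 1.0))
    b = math.sqrt(6.0 * (x * x + x + 1.0))
    r = 1.0 / math.sqrt(3.0)
    return [[r, 1.0 / a, (2.0 * x + 1.0) / b],
            [r, -(x + 1.0) / a, (1.0 - x) / b],
            [r, x / a, -(x + 2.0) / b]]

def min_entry(lam2, lam3):
    """Smallest entry of A(x) over a grid of x in [0, 1]."""
    m = float("inf")
    for k in range(101):
        x = k / 100.0
        S = S3(x)
        lam = [1.0, lam2, lam3]
        A = [[sum(lam[t] * S[p][t] * S[q][t] for t in range(3))
              for q in range(3)] for p in range(3)]
        m = min(m, min(min(row) for row in A))
    return m

# (0.4, -0.4): 1 + 2*lam3 = 0.2 >= 0 and 2 - 3*lam2 + lam3 = 0.4 >= 0,
# so every A(x) should be nonnegative
assert min_entry(0.4, -0.4) > -1e-9
# (0.9, -0.3): 2 - 3*lam2 + lam3 = -1 < 0, so some A(x) has a negative entry
assert min_entry(0.9, -0.3) < 0
```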

Since \(1+2\lambda_{3}\geq0 \Rightarrow 1+\lambda_{2}+\lambda _{3}\geq0\), Theorem 2.4 and Lemma 4.1 produce the following result.

Theorem 4.1

The totally general solution of the SSIEP for \(\Lambda =(1,\lambda _{2},\lambda_{3})\) (\(1\geq\lambda_{2}\geq\lambda_{3}\geq-1\)) exists if and only if
$$ 1+2\lambda_{3}\geq0 \quad\textit{and}\quad 2-3 \lambda_{2}+\lambda_{3} \geq0, $$
(4.3)
and the totally general solution is given by (4.2).
As we know, a quadratic real function \(f(x)=ax^{2}+bx+c\) with \(a\neq0\) has two different real zeros if and only if its discriminant \(\Delta =b^{2}-4ac>0\), and the two zeros are \(x_{\pm}=\frac{-b\pm\sqrt{\Delta }}{2a}\). If \(\Delta>0\) and \(a>0\) (<0) then \(f(x)\geq0\) on \([-\infty ,x_{-}]\cup[x_{+},+\infty] ([x_{-},x_{+}])\). The quadratic function \(3\alpha a_{33}(x)=(2+3\lambda_{2}+\lambda_{3})x^{2}+(2+4\lambda_{3})x+2+4\lambda_{3}\) has two real different zeros:
$$ u_{\pm}=\frac{-1-2\lambda_{3}\pm\sqrt {-3(1+2\lambda_{2})(1+2\lambda_{3})}}{2+3\lambda_{2}+\lambda_{3}} $$
(4.4)
if and only if \(1+2\lambda_{3}<0<1+2\lambda_{2}\); and \(3\alpha a_{12}(x)=(2-2\lambda_{3})x^{2}+(2-3\lambda_{2}+\lambda_{3})x+2-3\lambda _{2}+\lambda_{3}\) has two distinct real zeros:
$$ v_{\pm}=\frac{-2+3\lambda_{2}-\lambda _{3}\pm\sqrt{-3(2-3\lambda_{2}+\lambda_{3})(2+\lambda_{2}-3\lambda _{3})}}{4(1-\lambda_{3})} $$
(4.5)
if and only if \(2-3\lambda_{2}+\lambda_{3}<0<2+\lambda_{2}-3\lambda_{3}\).
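The zero formulas (4.4) and (4.5) can be verified against the quadratics they come from, for hypothetical \(\lambda_{2},\lambda_{3}\) in the stated ranges (our own numerical sketch):

```python
import math

# case 1 + 2*lam3 < 0 < 1 + 2*lam2: check u_+ of (4.4)
lam2, lam3 = 0.5, -0.8
a = 2.0 + 3.0 * lam2 + lam3            # coefficients of 3*alpha*a_33(x)
b = c = 2.0 + 4.0 * lam3
u_plus = (-1.0 - 2.0 * lam3
          + math.sqrt(-3.0 * (1 + 2 * lam2) * (1 + 2 * lam3))) / a
assert abs(a * u_plus ** 2 + b * u_plus + c) < 1e-9   # u_+ is a zero
assert 0 < u_plus < 1

# case 2 - 3*lam2 + lam3 < 0 < 2 + lam2 - 3*lam3: check v_+ of (4.5)
lam2, lam3 = 0.9, 0.1
a = 2.0 - 2.0 * lam3                   # coefficients of 3*alpha*a_12(x)
b = c = 2.0 - 3.0 * lam2 + lam3
v_plus = (-2.0 + 3.0 * lam2 - lam3 + math.sqrt(
    -3.0 * (2 - 3 * lam2 + lam3) * (2 + lam2 - 3 * lam3))) / (4.0 * (1.0 - lam3))
assert abs(a * v_plus ** 2 + b * v_plus + c) < 1e-9   # v_+ is a zero
```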

Theorem 4.2

Let \(\Lambda =(1,\lambda_{2},\lambda_{3})\) satisfy (3.1) and (3.2). Then the general solution of SSIEP for Λ exists if and only if
$$ 2+\lambda_{2}+3\lambda_{3}\geq0, $$
(4.6)
and the general solution is given by (4.2) with \(0\leq x\leq1\) when \(1+2\lambda_{3}\geq0\) and \(2-3\lambda_{2}+\lambda_{3}\geq0\); with \(v_{+}\leq x\leq1\) when \(1+2\lambda_{3}\geq0\) and \(2-3\lambda_{2}+\lambda _{3}< 0\); with \(u_{+}\leq x\leq1\) when \(1+2\lambda_{3}<0\) and \(2-3\lambda _{2}+\lambda_{3}\geq0\); with \(\max\{u_{+},v_{+}\}\leq x\leq1\) when \(1+2\lambda_{3}<0\) and \(2-3\lambda_{2}+\lambda_{3}< 0\).

Proof

Let \(A(x)=(a_{ij}(x))=S_{3}(x)\operatorname{diag}(1,\lambda_{2},\lambda _{3})S_{3}(x)^{T}\), where \(S_{3}(x)\) is given by (2.5). For \(1\leq i,j\leq 3\), let \(N_{ij}\subseteq[0,1]\) denote the ‘nonnegativity interval’ of \(a_{ij}(x)\), i.e., \(a_{ij}(x)\geq0\) if and only if \(x\in N_{ij}\). Then \(A(x)\) is a solution of the SSIEP for Λ for some \(x\in[0,1]\) if and only if \(\Omega= \bigcap_{i,j=1,2,3}N_{ij}\neq\emptyset\) by Theorem 2.1 and Theorem 2.4. Since \(N_{33}\subseteq N_{11}\), \(N_{12}\subseteq N_{23}\subseteq N_{13}\) and \(N_{22}=[0, 1]\) by the proof of Lemma 4.1, \(\Omega=N_{33}\cap N_{12}\).

If (4.6) does not hold, then \(2+4\lambda_{3}\leq2+\lambda_{2}+3\lambda _{3}<0\) since \(\lambda_{2}\geq\lambda_{3}\). If moreover \(2+3\lambda_{2}+\lambda_{3}<0\), then all three coefficients of \(3\alpha a_{33}(x)\) are negative, so \(3\alpha a_{33}(x)<0\) for all \(x\in[0,1]\). If instead \(2+3\lambda_{2}+\lambda_{3}\geq 0>2+\lambda_{2}+3\lambda_{3}\), then \(x^{2}\leq x\) on \([0,1]\) implies
$$\begin{aligned} 3\alpha a_{33}(x)&\leq(2+3\lambda_{2}+\lambda _{3})x+(2+4\lambda_{3})x+2+4\lambda_{3} \\ &=(4+3\lambda_{2}+5\lambda_{3})x+2+4\lambda_{3}, \end{aligned}$$
a linear function of x whose values at the endpoints are \(2+4\lambda_{3}<0\) (at \(x=0\)) and \(3(2+\lambda_{2}+3\lambda_{3})<0\) (at \(x=1\)), hence negative on all of \([0,1]\). Therefore, \(N_{33}(x)=\emptyset\), which yields \(\Omega=N_{33}(x)\cap N_{12}(x)=\emptyset\). This proves the necessity.

To prove the sufficiency, it suffices to show that (4.6) implies \(\Omega\neq\emptyset\). If \(1+2\lambda_{3}\geq0\), then \(N_{33}(x)=[0,1]\) by the proof of Lemma 4.1. If \(1+2\lambda_{3}<0\), then \(1+2\lambda_{3}<0\leq\frac{1}{2}(2+\lambda_{2}+3\lambda_{3})<1+2\lambda _{2}\), which produces \(u_{-}<0<u_{+}\) and hence \(N_{33}(x)=[0,1]\cap ((-\infty,u_{-}]\cup[u_{+},+\infty))=[u_{+},1]\). Similarly, if \(2-3\lambda _{2}+\lambda_{3}\geq0\), then \(N_{12}(x)=[0,1]\) by the proof of Lemma 4.1. If \(2-3\lambda_{2}+\lambda_{3}<0\), then \(\lambda_{2}>\lambda_{3}\) and \(2-3\lambda_{2}+\lambda_{3}<0\leq2-2\lambda_{3}<2+\lambda_{2}-3\lambda_{3}\), which produces \(v_{-}<0<v_{+}\) and hence \(N_{12}(x)=[0,1]\cap((-\infty,v_{-}]\cup[v_{+},+\infty ))=[v_{+},1]\). So \(\Omega=N_{33}(x)\cap N_{12}(x)=[u_{+},1]\cap[v_{+},1]\).

Now we prove, by contradiction, that condition (4.6) implies \(\max\{ u_{+}, v_{+}\}\leq1\), which produces \([u_{+},1]\cap[v_{+},1]=[\max\{ u_{+},v_{+}\},1]\neq\emptyset\), and then the proof of sufficiency is completed. As a matter of fact, if \(u_{+}>1\), then \(2+3\lambda_{2}+\lambda_{3}<-1-2\lambda_{3}+\sqrt{-3(1+2\lambda _{2})(1+2\lambda_{3})}\) by (4.4). So \(u_{+}>1\) is equivalent to
$$3(1+\lambda_{2}+\lambda_{3})< \sqrt{-3(1+2\lambda _{2}) (1+2\lambda_{3})}, $$
or
$$3\lambda_{2}^{2}+10\lambda_{2} \lambda_{3}+8\lambda _{2}+3\lambda_{3}^{2}+8 \lambda_{3}+4< 0. $$
But this is impossible because the left side of the last inequality is equal to \((2+3\lambda_{2}+\lambda_{3})(2+\lambda_{2}+3\lambda_{3})\geq (2+\lambda_{2}+3\lambda_{3})^{2}\geq0\) by (4.6). Similarly, if \(v_{+}>1\), then \(3(2-\lambda_{2}-\lambda_{3})<\sqrt{-3(2-3\lambda_{2}+\lambda _{3})(2+\lambda_{2}-3\lambda_{3})}\) by (4.5). So \(v_{+}>1\) is equivalent to
$$16(\lambda_{2}\lambda_{3}-\lambda_{2}- \lambda_{3}+1)< 0. $$
But this is impossible because the left side of the last inequality is equal to \(16(1-\lambda_{2})(1-\lambda_{3})\), which cannot be negative. □
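The case analysis in the proof can be probed numerically with the entry formulas displayed before (4.4) and (4.5). A sketch, using the hypothetical spectrum \((1,0.9,-0.6)\), which falls in the fourth case of Theorem 4.2 (both \(1+2\lambda_{3}<0\) and \(2-3\lambda_{2}+\lambda_{3}<0\)) while satisfying (4.6):

```python
import numpy as np

# Hypothetical sample: 1+2*l3 < 0 and 2-3*l2+l3 < 0, yet (4.6) holds,
# so Omega = [max(u+, v+), 1] according to Theorem 4.2.
l2, l3 = 0.9, -0.6
assert 2 + l2 + 3*l3 >= 0                       # condition (4.6)

u_plus = (-1 - 2*l3 + np.sqrt(-3*(1 + 2*l2)*(1 + 2*l3))) / (2 + 3*l2 + l3)
v_plus = (-2 + 3*l2 - l3 + np.sqrt(-3*(2 - 3*l2 + l3)*(2 + l2 - 3*l3))) / (4*(1 - l3))

xs = np.linspace(0, 1, 10001)
a33 = np.polyval([2 + 3*l2 + l3, 2 + 4*l3, 2 + 4*l3], xs)       # 3*alpha*a33(x)
a12 = np.polyval([2 - 2*l3, 2 - 3*l2 + l3, 2 - 3*l2 + l3], xs)  # 3*alpha*a12(x)

# the set where both binding entries are nonnegative inside [0,1]
omega = xs[(a33 >= 0) & (a12 >= 0)]
```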

Remark 4.1

Theorem 14 of [11] shows that the real triple \(\Lambda =\{1,\lambda_{2},\lambda_{3}\}\) with \(|\lambda_{i}|\leq1\), \(i=2,3\) is SS realizable if and only if
$$ 2+3\lambda_{2}+\lambda_{3}\geq0 \quad\mbox{and}\quad 2+\lambda_{2}+3\lambda_{3}\geq0. $$
(4.7)
It is clear that condition (4.7) together with \(|\lambda_{i}|\leq1\), \(i=2,3\), is equivalent to conditions (3.1), (3.2), and (3.6).

Example 4.1

Since \(\Lambda _{1}=(1,0,-0.25)\) satisfies condition (4.3) of Theorem 4.1, the totally general solution of the SSIEP for \(\Lambda _{1}\) is
$$\biggl\{ A(x)=\frac{1}{3\alpha }M(x) \biggr\} ,\quad 0\leq x\leq 1, $$
where
$$M(x)= \begin{pmatrix} \alpha -0.25(1+2x)^{2}&\alpha -0.25(1+x-2x^{2})&\alpha +0.25(2+5x+2x^{2})\\ {*}&\alpha -0.25(1-x)^{2}&\alpha +0.25(2-x-x^{2})\\ {*} &*&\alpha -0.25(2+x)^{2} \end{pmatrix}, $$
\(\alpha =2x^{2}+2x+2>0\), \(0\leq x\leq1\), and the entries marked by ‘∗’ are determined by the symmetry property of \(M(x)\).
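The displayed family can be checked numerically. The following NumPy sketch reconstructs \(M(x)\) entrywise from the matrix above and verifies that \(A(x)=\frac{1}{3\alpha}M(x)\) is symmetric stochastic with spectrum \(\Lambda_{1}=(1,0,-0.25)\) at a sample value of x:

```python
import numpy as np

def A(x):
    """A(x) = M(x)/(3*alpha) for Lambda_1 = (1, 0, -0.25)."""
    alpha = 2*x**2 + 2*x + 2
    m = np.array([
        [alpha - 0.25*(1 + 2*x)**2, alpha - 0.25*(1 + x - 2*x**2), alpha + 0.25*(2 + 5*x + 2*x**2)],
        [0.0,                       alpha - 0.25*(1 - x)**2,       alpha + 0.25*(2 - x - x**2)],
        [0.0,                       0.0,                           alpha - 0.25*(2 + x)**2],
    ])
    m = np.triu(m) + np.triu(m, 1).T   # the '*' entries, filled in by symmetry
    return m / (3*alpha)

spectrum = np.linalg.eigvalsh(A(0.5))  # eigenvalues in ascending order
```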
Moreover, since \(\Lambda _{1}\) satisfies condition (3.3), Theorem 3.1 provides a solution of this SSIEP:
$$A^{*}=S_{3} \begin{pmatrix} 1&0&0\\ 0&0&0\\ 0&0&-0.25 \end{pmatrix} S_{3}^{T}= \frac{1}{6} \begin{pmatrix} 7/4&7/4&10/4\\ 7/4&7/4&10/4\\ 10/4&10/4&1 \end{pmatrix}. $$
Note that since \(\Lambda _{1}\) also satisfies the condition of Theorem 4 of [7], a symmetric stochastic matrix A with \(\sigma (A)=\Lambda _{1}\) can be found by the method given in [7] to be
$$ \begin{pmatrix} 1/3&1/3&1/3\\ 1/3&625/3{,}000&1{,}375/3{,}000\\ 1/3&1{,}375/3{,}000&625/3{,}000 \end{pmatrix}, $$
which is not equivalent to \(A^{*}\). So our Theorem 3.1 and Theorem 4 of [7] are genuinely different.
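Both realizations can be confirmed numerically, and comparing their entry multisets shows that they cannot be permutationally similar (a sketch; the matrices are copied from the displays above):

```python
import numpy as np

A_star = np.array([[7, 7, 10],
                   [7, 7, 10],
                   [10, 10, 4]]) / 24.0          # = (1/6) * the displayed matrix
A_ref7 = np.array([[1/3, 1/3, 1/3],
                   [1/3, 625/3000, 1375/3000],
                   [1/3, 1375/3000, 625/3000]])  # the matrix obtained via [7]

spec_star = np.linalg.eigvalsh(A_star)           # both should be (-0.25, 0, 1)
spec_ref7 = np.linalg.eigvalsh(A_ref7)

# different entry multisets rule out permutation similarity
same_entries = np.allclose(np.sort(A_star.ravel()), np.sort(A_ref7.ravel()))
```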

Remark 4.2

If we find the general solution of the SSIEP, then we find all possible solutions (differing only by a permutation similarity) and we can choose solutions of interest from among them. For instance, for Example 4.1 it is interesting to know: among all possible SS realizations of \(\Lambda _{1}\), which one has the greatest (smallest) entry, and what is the value of this greatest (smallest) entry?

Using the expression of the general solution \(A(x)=\frac{1}{3\alpha }M(x)\), \(0\leq x\leq1\), we can easily find that the \((1,3)\) entry of \(A(1)\), equal to \(\frac{11}{24}\), is the greatest entry, and the \((3,3)\) entry of \(A(0)\), equal to \(\frac{1}{6}>0\), is the smallest entry among all the entries of all the possible SS realizations of \(\Lambda _{1}\). Therefore, any possible SS realization of \(\Lambda _{1}\) is always a positive matrix whose entries belong to the interval \([\frac{1}{6}, \frac{11}{24}]\).
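These extreme entries can be cross-checked by scanning the six entry functions of \(A(x)=\frac{1}{3\alpha}M(x)\) over a fine grid of \(x\in[0,1]\) (a numerical sketch using the entries of \(M(x)\) displayed in Example 4.1):

```python
import numpy as np

x = np.linspace(0, 1, 10001)
alpha = 2*x**2 + 2*x + 2
entries = np.stack([                       # the six distinct entries of M(x)
    alpha - 0.25*(1 + 2*x)**2,             # (1,1)
    alpha - 0.25*(1 + x - 2*x**2),         # (1,2)
    alpha + 0.25*(2 + 5*x + 2*x**2),       # (1,3)
    alpha - 0.25*(1 - x)**2,               # (2,2)
    alpha + 0.25*(2 - x - x**2),           # (2,3)
    alpha - 0.25*(2 + x)**2,               # (3,3)
]) / (3*alpha)                             # divide to obtain entries of A(x)

greatest, smallest = entries.max(), entries.min()
```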

Example 4.2

Since \(\Lambda _{2}=(1,0,-0.6)\) satisfies the conditions of neither Theorem 4.1 nor Theorem 3.2, the totally general solution of the SSIEP for \(\Lambda _{2}\) does not exist, and \(A(\frac{\sqrt {3}-1}{2})\) is not a solution. But \(\Lambda _{2}\) does satisfy conditions (3.1) and (3.3), and this SSIEP has a solution
$$A^{*}= \begin{pmatrix} 1/3&1/3&1/3\\ 1/3&1/30&19/30\\ 1/3&19/30&1/30 \end{pmatrix} $$
by Theorem 3.1.
Now we use Theorem 4.2 to find the general solution of this SSIEP as follows. Since \(1+2\lambda_{3}<0\leq2+3\lambda_{2}+\lambda_{3}\) and \(0< u_{+}=\frac{-1-2\lambda_{3}+\sqrt{-3(1+2\lambda_{2})(1+2\lambda _{3})}}{2+3\lambda_{2}+\lambda_{3}}=\frac{0.2+\sqrt{0.6}}{1.4}\approx0.69614<1\), we have \(N_{33}(x)=[u_{+},1]\neq\emptyset\); since \(2-3\lambda_{2}+\lambda _{3}\geq0\), we have \(N_{12}(x)=[0,1]\) and \(\Omega=N_{33}(x)=[u_{+},1]\). Finally the general solution is
$$\left\{A(x)=\frac{1}{3\alpha } \begin{pmatrix} \alpha -0.6(1+2x)^{2}&\alpha -0.6(1+x-2x^{2})&\alpha +0.6(2+5x+2x^{2})\\ {*}&\alpha -0.6(1-x)^{2}&\alpha +0.6(2-x-x^{2})\\ {*} &*&\alpha -0.6(2+x)^{2} \end{pmatrix} \right\} $$
with \(x\in[u_{+},1]\), \(u_{+}\approx0.69614\). It is interesting to note that the preceding solution \(A^{*}\) is permutationally similar to \(A(1)\).
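A short numerical check of this example (a sketch; \(A(1)\) is assembled from the displayed family at \(x=1\), where \(\alpha=6\)): the endpoint \(u_{+}\) is recomputed from (4.4), and the transposition of indices 1 and 2 carries \(A(1)\) to \(A^{*}\):

```python
import numpy as np

l2, l3 = 0.0, -0.6
u_plus = (-1 - 2*l3 + np.sqrt(-3*(1 + 2*l2)*(1 + 2*l3))) / (2 + 3*l2 + l3)

alpha = 6.0                                   # alpha(1) = 2 + 2 + 2
A1 = np.array([[alpha - 0.6*9, alpha,         alpha + 0.6*9],
               [alpha,         alpha,         alpha],
               [alpha + 0.6*9, alpha,         alpha - 0.6*9]]) / (3*alpha)

A_star = np.array([[10, 10, 10],
                   [10,  1, 19],
                   [10, 19,  1]]) / 30.0      # the solution from Theorem 3.1

P = np.array([[0, 1, 0],                      # permutation matrix of (1 2)
              [1, 0, 0],
              [0, 0, 1]])
```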

Example 4.3

Since \(\Lambda _{4}=(1,\lambda_{2},\lambda _{3},\lambda_{4})=(1,0,0,-0.3)\) satisfies the conditions of both Theorem 3.1 and Theorem 3.2, the SSIEP for \(\Lambda _{4}\) has two solutions which are not permutationally similar to each other:
$$A= \begin{pmatrix} 0.25&0.25&0.25&0.25\\ 0.25&0.25&0.25&0.25\\ 0.25&0.25&0.1&0.4\\ 0.25&0.25&0.4&0.1 \end{pmatrix},\qquad A^{*}= \begin{pmatrix} 0.175&0.225&0.225&0.375\\ 0.225&0.2417&0.2417&0.2917\\ 0.225&0.2417&0.2417&0.2917\\ 0.375&0.2917&0.2917&0.0417 \end{pmatrix}. $$

This shows that if the condition of Theorem 2.2(2) is not satisfied, i.e. Λ contains two equal elements, then for two typical orthogonal matrices S, \(S'\) of order n, the fact that \(S\operatorname{diag}(\lambda_{1},\ldots,\lambda_{n})S^{T}\) is permutationally similar to \(S'\operatorname{diag}(\lambda_{1},\ldots,\lambda_{n})S^{\prime T}\) no longer guarantees that \(S'\) is equivalent to S.
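The two displayed \(4\times4\) realizations can be verified numerically (a sketch; the entries of \(A^{*}\) are rounded to four decimals in the display, so its spectrum is only checked approximately):

```python
import numpy as np

A = np.array([[0.25, 0.25, 0.25, 0.25],
              [0.25, 0.25, 0.25, 0.25],
              [0.25, 0.25, 0.10, 0.40],
              [0.25, 0.25, 0.40, 0.10]])
A_star = np.array([[0.175, 0.225,  0.225,  0.375],
                   [0.225, 0.2417, 0.2417, 0.2917],
                   [0.225, 0.2417, 0.2417, 0.2917],
                   [0.375, 0.2917, 0.2917, 0.0417]])

spec = np.linalg.eigvalsh(A)
spec_star = np.linalg.eigvalsh(A_star)   # approximate: entries are rounded
```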

Using the typical orthogonal matrix \(S''_{4}\) given in (2.9) we have the following corollary.

Corollary 4.1

If
$$ 1+\lambda_{2}+2\lambda_{4}\geq0, $$
(4.8)
then
$$\begin{aligned} A&=S''_{4}\operatorname{diag}(1, \lambda_{2},\lambda_{3},\lambda _{4})S^{\prime\prime\top}_{4} \\ &=\frac{1}{4} \begin{pmatrix} 1+\lambda_{2}+2\lambda_{3}&1+\lambda_{2}-2\lambda _{3}&1-\lambda_{2}&1-\lambda_{2}\\ 1+\lambda_{2}-2\lambda_{3}&1+\lambda _{2}+2\lambda_{3}&1-\lambda_{2}&1-\lambda_{2}\\ 1-\lambda_{2}&1-\lambda_{2}&1+\lambda_{2}+2\lambda_{4}&1+\lambda _{2}-2\lambda_{4}\\ 1-\lambda_{2}&1-\lambda_{2}&1+\lambda_{2}-2\lambda _{4}&1+\lambda_{2}+2\lambda_{4} \end{pmatrix} \end{aligned}$$
is a solution of the \(4\times4\) SSIEP for \(\Lambda =(1,\lambda_{2},\lambda _{3},\lambda_{4}) \) (\(1\geq\lambda_{2}\geq\lambda_{3}\geq\lambda_{4}\geq-1\)).
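The explicit matrix of Corollary 4.1 can be checked directly; a NumPy sketch, using the hypothetical triple \((\lambda_{2},\lambda_{3},\lambda_{4})=(0.5,0.2,-0.7)\), which satisfies (4.8) and the ordering:

```python
import numpy as np

def corollary41(l2, l3, l4):
    # the explicit solution displayed in Corollary 4.1
    return np.array([
        [1 + l2 + 2*l3, 1 + l2 - 2*l3, 1 - l2,        1 - l2],
        [1 + l2 - 2*l3, 1 + l2 + 2*l3, 1 - l2,        1 - l2],
        [1 - l2,        1 - l2,        1 + l2 + 2*l4, 1 + l2 - 2*l4],
        [1 - l2,        1 - l2,        1 + l2 - 2*l4, 1 + l2 + 2*l4],
    ]) / 4.0

l2, l3, l4 = 0.5, 0.2, -0.7        # hypothetical sample: 1 + l2 + 2*l4 = 0.1 >= 0
A = corollary41(l2, l3, l4)
```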

Corollary 4.2

If
$$ 1+3\lambda_{2}\geq0,\qquad 4+\lambda _{2}+8 \lambda_{4}\geq0 , \qquad 2+\lambda_{2}-12 \lambda_{3}+4\lambda_{4}\geq0 , $$
(4.9)
then the \(4\times4\) SSIEP for \(\Lambda =(1,\lambda_{2},\lambda_{3},\lambda _{4}) \) (\(1\geq\lambda_{2}\geq\lambda_{3}\geq\lambda_{4}\geq-1\)) has solutions with one parameter x:
$$A(x)=S'_{4}(x)\operatorname{diag}(1,\lambda_{2}, \lambda_{3},\lambda_{4})S^{\prime T}_{4}(x), \quad 0\leq x\leq1. $$

Proof

Let \(A(x)=S'_{4}(x)\operatorname{diag}(1,\lambda_{2},\lambda_{3},\lambda _{4})S^{\prime T}_{4}(x)\), \(0\leq x\leq1\) with \(S'_{4}(x)\) given by (2.8). Then for \(0\leq x\leq1\), \(A(x)\) is a \(4\times4\) symmetric matrix with unit row sums realizing Λ by Theorem 2.1 and
$$A(x)= \begin{pmatrix} \frac{1+3\lambda_{2}}{4}&\frac{1-\lambda _{2}}{4}&\frac{1-\lambda_{2}}{4}&\frac{1-\lambda_{2}}{4}\\ \frac{1-\lambda_{2}}{4}&&&\\ \frac{1-\lambda_{2}}{4}&&A'(x)&\\ \frac{1-\lambda_{2}}{4}&&& \end{pmatrix}, $$
where \(A'(x)=(1,1,1)^{T}(1,1,1)+S'_{3}(x)\operatorname{diag}(\lambda_{2},\lambda_{3},\lambda _{4})S^{\prime T}_{3}(x)\) with \(S'_{3}(x)=S_{3}(x)\operatorname{diag}(\frac{1}{2},1,1) S^{T}_{3}(x)\). Let \(S'_{3}(x)\operatorname{diag}(\lambda_{2},\lambda_{3},\lambda_{4})S^{\prime T}_{3}(x)=(a'_{ij}(x))\), then a direct calculation produces
$$\begin{aligned}& 3\alpha a'_{11}(x)= \biggl(\frac{\lambda_{2}}{2}+4\lambda _{4} \biggr)x^{2}+ \biggl(\frac{\lambda_{2}}{2}+4 \lambda_{4} \biggr)x+\frac{\lambda _{2}}{2}+3\lambda_{3}+ \lambda_{4}, \\& 3\alpha a'_{22}(x)= \biggl(\frac{\lambda_{2}}{2}+3\lambda _{3}+\lambda_{4} \biggr)x^{2}+ \biggl( \frac{\lambda_{2}}{2}-2\lambda_{4} \biggr)x+\frac{\lambda _{2}}{2}+3 \lambda_{3}+\lambda_{4}, \\& 3\alpha a'_{33}(x)= \biggl(\frac{\lambda_{2}}{2}+3\lambda _{3}+\lambda_{4} \biggr)x^{2}+ \biggl( \frac{\lambda_{2}}{2}+4\lambda_{4} \biggr)x+\frac{\lambda _{2}}{2}+4 \lambda_{4}, \\& 3\alpha a'_{12}(x)= \biggl(\frac{\lambda_{2}}{2}-2\lambda _{4} \biggr)x^{2}+ \biggl(\frac{\lambda_{2}}{2}-3 \lambda_{3}+\lambda_{4} \biggr)x+\frac{\lambda _{2}}{2}-3 \lambda_{3}+\lambda_{4}, \\& 3\alpha a'_{13}(x)= \biggl(\frac{\lambda_{2}}{2}-2\lambda _{4} \biggr)x^{2}+ \biggl(\frac{\lambda_{2}}{2}+3 \lambda_{3}-5\lambda_{4} \biggr)x+\frac{\lambda _{2}}{2}-2 \lambda_{4}, \\& 3\alpha a'_{23}(x)= \biggl(\frac{\lambda_{2}}{2}-3\lambda _{3}+\lambda_{4} \biggr)x^{2}+ \biggl( \frac{\lambda_{2}}{2}-3\lambda_{3}+\lambda_{4} \biggr)x+ \frac {\lambda_{2}}{2}-2\lambda_{4}. \end{aligned}$$
By the analysis used in the proof of Lemma 4.1 we conclude that for \(x\in[0,1]\)
$$\begin{aligned} &\operatorname{Min}_{x\in[0,1]}S'_{3}(x) \operatorname{diag}(\lambda_{2},\lambda _{3}, \lambda_{4})S^{\prime T}_{3}(x) \\ &\quad=\operatorname{Min}_{1\leq i\leq j\leq 3} a'_{ij}(x) \geq \operatorname{Min} \biggl\{ \frac {\lambda_{2}}{4}+2\lambda_{4}, \frac{\lambda_{2}}{2}-6\lambda_{3}+2\lambda _{4} \biggr\} . \end{aligned}$$
(4.10)
Now condition (4.9) implies \(\operatorname{Min} \{\frac{\lambda_{2}}{4}+2\lambda _{4},\frac{\lambda_{2}}{2}-6\lambda_{3}+2\lambda_{4} \}\geq-1\), and hence \(A'(x)=(1,1,1)^{T}(1,1,1)+S'_{3}(x)\operatorname{diag}(\lambda_{2},\lambda_{3},\lambda _{4})S^{\prime T}_{3}(x)\) is a nonnegative matrix. Finally, since \(1-\lambda_{2}\geq 0\) and \(1+3\lambda_{2}\geq0\) by (4.9) we see that \(A(x)\) is also a nonnegative matrix, and hence a symmetric stochastic matrix with \(\sigma(A)=\Lambda \). □
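The bound (4.10) can be probed numerically from the six displayed polynomials. A sketch: it assumes \(\alpha=2x^{2}+2x+2\), i.e. \(3\alpha=6(x^{2}+x+1)\), as in the \(3\times3\) examples, and uses the hypothetical spectrum \((1,0.2,0,-0.4)\), which satisfies (4.9):

```python
import numpy as np

l2, l3, l4 = 0.2, 0.0, -0.4        # hypothetical sample satisfying (4.9)
assert 1 + 3*l2 >= 0 and 4 + l2 + 8*l4 >= 0 and 2 + l2 - 12*l3 + 4*l4 >= 0

x = np.linspace(0, 1, 2001)
alpha3 = 6*(x**2 + x + 1)          # 3*alpha, assuming alpha = 2x^2 + 2x + 2
h = l2/2

polys = [                          # the six displayed polynomials 3*alpha*a'_ij(x)
    (h + 4*l4)*x**2 + (h + 4*l4)*x + h + 3*l3 + l4,       # a'_11
    (h + 3*l3 + l4)*x**2 + (h - 2*l4)*x + h + 3*l3 + l4,  # a'_22
    (h + 3*l3 + l4)*x**2 + (h + 4*l4)*x + h + 4*l4,       # a'_33
    (h - 2*l4)*x**2 + (h - 3*l3 + l4)*x + h - 3*l3 + l4,  # a'_12
    (h - 2*l4)*x**2 + (h + 3*l3 - 5*l4)*x + h - 2*l4,     # a'_13
    (h - 3*l3 + l4)*x**2 + (h - 3*l3 + l4)*x + h - 2*l4,  # a'_23
]
min_entry = min((p / alpha3).min() for p in polys)
bound = min(l2/4 + 2*l4, l2/2 - 6*l3 + 2*l4)
```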

Open problems

What are the necessary and sufficient conditions for the \(4\times4\) SSIEP to have a solution? And what is the general solution of this inverse eigenvalue problem when a solution exists?

Declarations

Acknowledgements

Q Zhang is supported by the National Natural Science Foundation of China (Nos. 61301296, 61377006, 61201396) and the National Natural Science Foundation of China - Guangdong Joint Fund (No. U1201255). C Xu is supported by the China State Natural Science Foundation Monumental Project (No. 6119010) and the China National Natural Science Foundation Key Project (No. 61190114). Their financial support is gratefully acknowledged.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Key Laboratory of Intelligent Computing & Signal Processing, Ministry of Education, Anhui University
(2)
Department of Applied Mathematics, Suzhou University of Science and Technology
(3)
School of Mathematical Sciences, Anhui University

References

  1. Borobia, A: On the nonnegative eigenvalue problem. Linear Algebra Appl. 223(224), 131-140 (1995)
  2. Fiedler, M: Eigenvalues of nonnegative symmetric matrices. Linear Algebra Appl. 9, 119-142 (1974)
  3. Kellogg, RB: Matrices similar to a positive or essentially positive matrix. Linear Algebra Appl. 4, 191-204 (1971)
  4. Marijuan, C, Pisonero, M, Soto, RL: A map of sufficient conditions for the real nonnegative inverse eigenvalue problem. Linear Algebra Appl. 426, 690-705 (2007)
  5. Salzmann, FL: A note on eigenvalues of nonnegative matrices. Linear Algebra Appl. 5, 329-338 (1972)
  6. Soto, R, Rojo, O: Applications of a Brauer theorem in the nonnegative inverse eigenvalue problem. Linear Algebra Appl. 416, 844-856 (2006)
  7. Johnson, CR: Row stochastic matrices similar to doubly stochastic matrices. Linear Multilinear Algebra 10, 113-130 (1981)
  8. Hwang, SG, Pyo, SS: The inverse eigenvalue problem for symmetric doubly stochastic matrices. Linear Algebra Appl. 379, 77-83 (2004)
  9. Lei, Y, Xu, W, Lu, Y, Niu, Y, Gu, X: On the symmetric doubly stochastic inverse eigenvalue problem. Linear Algebra Appl. 445, 181-205 (2014)
  10. Zhang, Q, Xu, C, Yang, S: On the inverse eigenvalue problem for irreducible doubly stochastic matrices of small orders. Abstr. Appl. Anal. 2014, Article ID 902383 (2014). doi:10.1155/2014/902383
  11. Perfect, H, Mirsky, L: Spectral properties of doubly stochastic matrices. Monatsh. Math. 69, 35-57 (1965)
  12. Fang, M: A note on the inverse eigenvalue problem for symmetric doubly stochastic matrices. Linear Algebra Appl. 432, 2925-2927 (2010)

Copyright

© Zhang et al. 2015