
Positive eigenvector of nonlinear eigenvalue problem with a singular M-matrix and Newton-SOR iterative solution

Abstract

Some sufficient conditions are proposed in this paper such that the nonlinear eigenvalue problem with an irreducible singular M-matrix has a unique positive eigenvector. Under these conditions, the Newton-SOR iterative method is proposed for computing this positive eigenvector numerically, and some convergence results on this iterative method are established for nonlinear eigenvalue problems with an irreducible singular M-matrix, a nonsingular M-matrix, and a general M-matrix, respectively. Finally, a numerical example is given to illustrate that the Newton-SOR iterative method is superior to the Newton iterative method.

1 Introduction

In physics, Bose-Einstein condensation of atoms near absolute zero temperature is modeled by the nonlinear Gross-Pitaevskii equation (see [1, 2]), i.e.,

$$\begin{aligned}& -\triangle u+V(x,y,z)u+ku^{3}=\lambda u, \end{aligned}$$
(1)
$$\begin{aligned}& \lim_{|(x,y,z)|\rightarrow\infty}u=0,\qquad \int^{\infty}_{-\infty} \int ^{\infty}_{-\infty} \int^{\infty}_{-\infty}u(x,y,z)^{2}\,dx\,dy \,dz=1, \end{aligned}$$
(2)

where V is a potential function. The discretization of this equation usually leads to the following nonlinear eigenvalue problem:

$$ Ax+F(x)=\lambda x, $$
(3)

where \(A\in\mathbb{R}^{n\times n}\) is an irreducible M-matrix, the function \(F(x)\) is diagonal, that is,

$$F(x)= \left [ \textstyle\begin{array}{@{}c@{}} f_{1}(x_{1}) \\ f_{2}(x_{2}) \\ \vdots \\ f_{n}(x_{n}) \end{array}\displaystyle \right ], $$

with the conditions that \(x_{i}>0\) and \(f_{i}(x_{i})>0\) for \(i=1,2,\ldots,n\).

In [3-7], several authors studied conditions under which the nonlinear eigenvalue problem (3) with an irreducible nonsingular M-matrix has a unique positive eigenvector, applied the Newton iterative method to solve this problem numerically, and established some significant theoretical and numerical results. The main contributions of [3-7] to the nonlinear eigenvalue problem are as follows: (i) any number greater than the smallest positive eigenvalue of the nonsingular M-matrix is an eigenvalue of the nonlinear eigenvalue problem (3); (ii) the corresponding positive eigenvector is unique; and (iii) the Newton iterative method converges when used to compute the positive eigenvector numerically.

However, not all nonlinear eigenvalue problems arising from the discretization of Gross-Pitaevskii equations involve a nonsingular M-matrix; in some cases the matrix is a singular M-matrix or merely a Z-matrix. In this paper, we mainly study the theory and the computation of the positive eigenvector of the nonlinear eigenvalue problem with a singular M-matrix. Some sufficient conditions will be proposed such that the nonlinear eigenvalue problem with an irreducible singular M-matrix has a unique positive eigenvector. Meanwhile, the Newton-SOR iterative method will be proposed under these conditions for computing this positive eigenvector numerically, and some convergence results on this iterative method will be established for nonlinear eigenvalue problems with an irreducible singular M-matrix, a nonsingular M-matrix, and a general M-matrix, respectively.

The paper is organized as follows. Some notations and preliminary results as regards M-matrices are given in Section 2. The existence and uniqueness of a positive eigenvector are studied in Section 3 for the nonlinear eigenvalue problem (3) with an irreducible singular M-matrix. The Newton-SOR iterative method is proposed in Section 4 for numerically solving such a positive eigenvector and some convergence results on this iterative method are established for the nonlinear eigenvalue problems with an irreducible singular M-matrix, a nonsingular M-matrix and a general M-matrix, respectively. In Section 5, a numerical example is given to demonstrate the effectiveness of the Newton-SOR iterative method. Conclusions are given in Section 6.

2 Preliminaries

The following definitions and lemmas concerning M-matrices are useful and can be found in the literature (see [3, 6, 8-10]); we present them here to make the paper self-contained.

Definition 1

A matrix \(A=(a_{ij})\in\mathbb{R}^{n\times n}\) is called nonnegative if \(a_{ij}\geq0\) for all \(i,j\in\langle n\rangle=\{ 1,2,\ldots,n\}\).

We write \(A\geq0\) if A is nonnegative. For \(A,B\in\mathbb{R}^{n\times n}\), we write \(A\geq B\) if \(A-B\geq0\).

Definition 2

A matrix \(A=(a_{ij})\in\mathbb{R}^{n\times n}\) is called a Z-matrix if \(a_{ij}\leq0\) for all \(i\neq j\).

We will use \(Z_{n}\) to denote the set of all \(n\times n\) Z-matrices.

Definition 3

A matrix \(A=(a_{ij})\in Z_{n}\) is called an M-matrix if A can be expressed in the form \(A=sI-B\), where \(B\geq0\), and \(s\geq\rho(B)\), the spectral radius of B. If \(s>\rho(B)\), A is called a nonsingular M-matrix; if \(s=\rho(B)\), A is called a singular M-matrix.

\(M_{n}\), \(M_{n}^{\bullet}\), and \(M_{n}^{0}\) will be used to denote the set of all \(n\times n\) M-matrices, the set of all \(n\times n\) nonsingular M-matrices, and the set of all \(n\times n\) singular M-matrices, respectively. It is easy to see that

$$ M_{n}=M_{n}^{\bullet}\cup M_{n}^{0} \quad \mbox{and} \quad M_{n}^{\bullet } \cap M_{n}^{0}=\emptyset. $$
(4)
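The classification in Definition 3 can be checked numerically for a given Z-matrix. The following is a minimal sketch in Python with NumPy (the paper itself uses Matlab; the helper name classify_z_matrix and the test matrices are our own illustrations): it writes \(A=sI-B\) with \(s=\max_{i}a_{ii}\), so that \(B\geq0\), and compares s with \(\rho(B)\). The \(3\times3\) example has zero row sums and is a singular M-matrix; adding a positive diagonal matrix makes it nonsingular, in line with Lemma 1 below.

```python
import numpy as np

def classify_z_matrix(A, tol=1e-10):
    """Classify a Z-matrix A following Definition 3.

    Write A = s*I - B with s = max_i a_ii, so that B >= 0, and
    compare s with the spectral radius rho(B).
    """
    A = np.asarray(A, dtype=float)
    s = A.diagonal().max()
    B = s * np.eye(A.shape[0]) - A            # B >= 0 because A is a Z-matrix
    rho = max(abs(np.linalg.eigvals(B)))
    if rho < s - tol:
        return "nonsingular M-matrix"
    if abs(rho - s) <= tol:
        return "singular M-matrix"
    return "Z-matrix, but not an M-matrix"

# A 3x3 irreducible Z-matrix with zero row sums: a singular M-matrix.
A_sing = np.array([[ 1., -1.,  0.],
                   [-1.,  2., -1.],
                   [ 0., -1.,  1.]])
print(classify_z_matrix(A_sing))              # singular M-matrix
print(classify_z_matrix(A_sing + np.eye(3)))  # nonsingular M-matrix
```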

Lemma 1

(see [8])

If \(A\in M_{n}^{0}\), then \(A+D\in M_{n}^{\bullet}\) for each positive diagonal matrix D.

Lemma 2

(see [6])

If \(A=(a_{ij})\in M_{n}^{\bullet}\), and \(B=(b_{ij})\in Z_{n}\) satisfies \(a_{ij}\leq b_{ij}\), \(i,j=1,\ldots,n\), then \(B\in M_{n}^{\bullet}\), and hence, \(B^{-1}\leq A^{-1}\) and \(\mu _{A}\leq\mu_{B}\), where \(\mu_{A}\) and \(\mu_{B}\) are the smallest eigenvalues of A and B, respectively. In addition, if A is irreducible and \(A\neq B\), then \(B^{-1}< A^{-1}\) and \(\mu_{A}<\mu_{B}\).

Lemma 3

(see [6])

Let \(A\in M_{n}^{\bullet}\) and μ be the smallest positive eigenvalue of A. Then, for any \(\nu\leq\mu\), \(A-\nu I\in M_{n}\).

Lemma 4

(see [3])

Let \(S:R^{n}\rightarrow R^{n}\) be defined and continuous on an interval \([y,z]\) and let

  1. (i)

    \(y< S(y)< z\);

  2. (ii)

    \(y< S(z)< z\);

  3. (iii)

    \(y\leq x_{1}< x_{2}\leq z\) implies \(y< S(x_{1})< S(x_{2})< z\).

Then

  1. (a)

    the fixed point iteration \(x^{(k)}=S(x^{(k-1)})\) with \(x^{(0)}=y\) is monotone increasing and converges: \(x^{(k)} \rightarrow x_{*}\), \(S(x_{*})=x_{*}\), \(y< x_{*}< z\);

  2. (b)

    the fixed point iteration \(x^{(k)}=S(x^{(k-1)})\) with \(x^{(0)}=z\) is monotone decreasing and converges: \(x^{(k)} \rightarrow x^{*}\), \(S(x^{*})=x^{*}\), \(y< x^{*}< z\);

  3. (c)

    if x is a fixed point of S in \([y,z]\) then \(x_{*}\leq x\leq x^{*}\);

  4. (d)

    S has a unique fixed point in \([y,z]\) if and only if \(x_{*}=x^{*}\).
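Lemma 4 can be illustrated by a brief sketch. The example below (Python/NumPy; a toy construction of our own rather than anything from [3]) applies the fixed point iteration to a scalar isotone map S on an interval \([y,z]\) satisfying (i)-(iii): the iterate started at y increases, the iterate started at z decreases, and here they meet, so the fixed point is unique by part (d).

```python
import numpy as np

def monotone_fixed_point(S, x0, max_iter=200, tol=1e-12):
    """Fixed point iteration x^{(k)} = S(x^{(k-1)}) as in Lemma 4."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = S(x)
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# A toy isotone map on [y, z] = [0.1, 4.0]: S(x) = sqrt(x + 1) satisfies
# y < S(y) < z and y < S(z) < z, and it is increasing, so Lemma 4 applies.
S = lambda x: np.sqrt(x + 1.0)
y, z = np.array([0.1]), np.array([4.0])
x_lo = monotone_fixed_point(S, y)   # monotone increasing iterates
x_hi = monotone_fixed_point(S, z)   # monotone decreasing iterates
print(x_lo, x_hi)                   # both tend to the unique fixed point ~1.618
```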

Definition 4

(see [10])

A splitting \(A=M-N\) of \(A\in\mathbb{R}^{n\times n}\) is called a regular splitting of the matrix A if M is nonsingular with \(M^{-1}\geq0\) and \(N\geq0\).

Lemma 5

(see [10])

If \(A=M-N\) is a regular splitting of the matrix \(A\in\mathbb {R}^{n\times n}\) with \(A^{-1}\geq0\), then

$$\rho\bigl(M^{-1}N\bigr)=\frac{\rho(A^{-1}N)}{1+\rho(A^{-1}N)}< 1. $$

Thus, the matrix \(M^{-1}N\) is convergent, and the iteration \(Mx^{m+1}=Nx^{m}+k\), \(m\geq0\), converges for any initial vector \(x^{0}\).
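The identity in Lemma 5 is easy to check numerically. The sketch below (Python/NumPy; the matrix is an arbitrary example of ours) uses the Jacobi splitting \(M=D\), \(N=D-A\) of a nonsingular M-matrix, which is a regular splitting, and compares \(\rho(M^{-1}N)\) with \(\rho(A^{-1}N)/(1+\rho(A^{-1}N))\).

```python
import numpy as np

rho = lambda T: max(abs(np.linalg.eigvals(T)))

# A strictly diagonally dominant Z-matrix: a nonsingular M-matrix, so A^{-1} >= 0.
A = np.array([[ 3., -1.,  0.],
              [-1.,  3., -1.],
              [ 0., -1.,  3.]])
M = np.diag(np.diag(A))        # Jacobi splitting: M is diagonal, M^{-1} >= 0
N = M - A                      # N >= 0, so A = M - N is a regular splitting

lhs = rho(np.linalg.solve(M, N))
r = rho(np.linalg.solve(A, N))
print(lhs, r / (1.0 + r))      # the two values agree and are both < 1
```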

3 Existence and uniqueness of positive eigenvector

In this section, we study the existence and uniqueness of positive eigenvector of nonlinear eigenvalue problem (3) with an irreducible singular M-matrix.

Theorem 1

Let \(A\in M_{n}^{0}\) be irreducible. Then, for any \(\lambda\leq0\), the nonlinear eigenvalue problem (3) has no positive solution.

Proof

In (3), we let \(F(x)=D_{1}^{(x)}x\), where

$$ D_{1}^{(x)}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \frac{f_{1}(x_{1})}{x_{1}}&0&\cdots&0 \\ 0&\frac{f_{2}(x_{2})}{x_{2}}&\ddots&\vdots \\ \vdots&\ddots&\ddots&0 \\ 0&\cdots&0&\frac{f_{n}(x_{n})}{x_{n}} \end{array}\displaystyle \right ] $$
(5)

is a positive diagonal matrix for \(x_{i}>0\), \(f_{i}(x_{i})>0\), \(i=1,\ldots ,n\). We reformulate (3) equivalently as follows:

$$ \bigl(A+D_{1}^{(x)}-\lambda I\bigr)x=0. $$
(6)

Since \(\lambda\leq0\) and \(D_{1}^{(x)}\) is a positive diagonal matrix, it follows from Lemma 1 that \(A+D_{1}^{(x)}-\lambda I\) is an irreducible nonsingular M-matrix. Consequently, (6) has only the zero solution. Thus, equation (3) has no positive solution for \(\lambda\leq0\). □

In what follows we study the solution of (3) when \(\lambda>0\).

Theorem 2

Let \(A\in M_{n}^{0}\) be irreducible. If \(\lambda>0\), and \(f_{i}(\cdot):(0,\infty)\rightarrow(0,\infty)\) is a \(C^{1}\) function satisfying the conditions:

$$ \lim_{t\rightarrow0}\frac{f_{i}(t)}{t}=0,\qquad \lim _{t\rightarrow\infty }\frac{f_{i}(t)}{t}=\infty $$
(7)

for \(i=1,\ldots,n\), then the nonlinear eigenvalue problem (3) has a positive solution. In addition, if

$$ f'_{i}(t)>\frac{f_{i}(t)}{t},\quad i=1, \ldots,n $$
(8)

for any \(t>0\), then the positive solution is unique.

Proof

Since A is an irreducible singular M-matrix, \(\mu=0\) is the smallest eigenvalue of A and there exists a positive vector \(p=(p_{1},p_{2},\ldots,p_{n})^{T}\) such that \(Ap=0\). According to (7), take \(\beta_{1}\) small enough that \(\lambda(\beta_{1}p_{i})>f_{i}(\beta_{1}p_{i})\), \(i=1,\ldots,n\), and take \(\beta_{2}>\beta_{1}\) large enough that \(\lambda(\beta_{2}p_{i})\leq f_{i}(\beta_{2}p_{i})\), \(i=1,\ldots,n\); this is possible in view of (7) since \(\lambda\in(0,\infty)\). Take a positive number c such that

$$ c>\max_{1\leq i\leq n}\Bigl(\sup_{\beta_{1}p_{i}\leq t\leq\beta _{2}p_{i}} \bigl\vert f'_{i}(t)\bigr\vert \Bigr)-\lambda, $$
(9)

and let

$$S(x)=(cI+A)^{-1}\cdot\bigl[(c+\lambda)x-F(x)\bigr]. $$

To prove existence, we show that \(S(x)\) satisfies the conditions of Lemma 4 with \(y=\beta_{1}p\) and \(z=\beta_{2}p\). For condition (i) of Lemma 4, since

$$S(y)=(cI+A)^{-1}\cdot\bigl[(c+\lambda) (\beta_{1}p)-F( \beta_{1}p)\bigr], $$

in combination with (9), we can show that

$$ (cI+A)^{-1}\cdot c(\beta_{1}p) \leq(cI+A)^{-1}\cdot\bigl[(c+\lambda ) (\beta_{1}p)-F( \beta_{1}p)\bigr]\leq(cI+A)^{-1}\cdot c(\beta_{2}p). $$
(10)

Since \(Ap=0\), we have \((cI+A)^{-1}\cdot c(\beta_{j}p)=\beta_{j}p\) for \(j=1,2\); together with the positivity of \((cI+A)^{-1}\), (10) therefore gives

$$\beta_{1}p< (cI+A)^{-1}\cdot\bigl[(c+\lambda) ( \beta_{1}p)-F(\beta _{1}p)\bigr]< \beta_{2}p. $$

This follows immediately from the choices of \(\beta_{1}\) and \(\beta_{2}\). The conditions (ii) and (iii) can be verified in a similar way using the condition in (9) on c.

Now let \(x^{*}\) and \(x_{*}\) be two fixed points of S with \(0< x_{*}\leq x^{*}\). That means

$$\begin{aligned}& x^{*}=(cI+A)^{-1}\cdot\bigl[(c+\lambda)x^{*}-F \bigl(x^{*}\bigr)\bigr], \\& x_{*}=(cI+A)^{-1}\cdot\bigl[(c+\lambda)x_{*}-F(x_{*}) \bigr], \end{aligned}$$

or equivalently that

$$\begin{aligned}& Ax^{*}+F\bigl(x^{*}\bigr)=\lambda x^{*}, \end{aligned}$$
(11)
$$\begin{aligned}& Ax_{*}+F(x_{*})=\lambda x_{*}. \end{aligned}$$
(12)

Let us show that in this case \(x_{*}=x^{*}\). Pre-multiplying (11) and (12) by \(x_{*}^{T}\) and \(x^{*T}\), respectively, and subtracting we get

$$x_{*}^{T}F\bigl(x^{*}\bigr)=x^{*T}F(x_{*}), $$

or equivalently

$$ \sum_{i=1}^{n} \bigl(f_{i}\bigl(x^{*}_{i}\bigr)x_{*i}-f_{i}(x_{*i})x^{*}_{i} \bigr)= \sum_{i=1}^{n}\biggl(x_{*i}x^{*}_{i} \biggl(\frac {f_{i}(x^{*}_{i})}{x^{*}_{i}}-\frac{f_{i}(x_{*i})}{x_{*i}}\biggr)\biggr)=0. $$
(13)

Since \(f_{i}(t)\) satisfies the condition (8),

$$\biggl(\frac{f_{i}(t)}{t}\biggr)'=\frac{tf'_{i}(t)-f_{i}(t)}{t^{2}}>0 $$

for any \(i=1,2,\ldots,n\). Thus, the function \(\frac{f_{i}(t)}{t}\) is monotone increasing, and consequently, for any \(i=1,2,\ldots,n\),

$$\frac{f_{i}(s)}{s}< \frac{f_{i}(t)}{t},\quad 0< s< t. $$

Since \(0< x_{*}\leq x^{*}\), all terms in (13) are nonnegative, and the sum is zero if and only if

$$x_{*}=x^{*}. $$

This implies the uniqueness of the positive solution of (3), which completes the proof. □

Remark 1

It is easy to see that the function \(F(x)=(f_{1}(x_{1}),f_{2}(x_{2}),\ldots,f_{n}(x_{n}))^{T}\) arising from the Gross-Pitaevskii equation, with \(f_{i}(t)=t^{3}\) for \(i=1,2,\ldots,n\), satisfies all conditions of Theorem 2. Thus, the corresponding nonlinear eigenvalue problem (3) has a unique positive solution.
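As a concrete illustration of the construction used in the proof of Theorem 2, the following sketch (Python/NumPy; the \(3\times3\) matrix, the value of λ, and the constant c are our illustrative choices) iterates the map \(S(x)=(cI+A)^{-1}[(c+\lambda)x-F(x)]\) with \(f_{i}(t)=t^{3}\), starting from a small multiple of the Perron vector p of the singular M-matrix A; by Lemma 4 the iterates increase monotonically toward the positive eigenvector.

```python
import numpy as np

# A small irreducible singular M-matrix (zero row sums) and f_i(t) = t^3.
A = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
lam = 2.0                                # some lambda > 0, as in Theorem 2
F = lambda x: x**3

# Perron vector of A (here A p = 0 for p = (1, 1, 1)^T) and a small scale
# beta_1 = 0.1, so that lam * x_i > f_i(x_i) at the starting point.
p = np.ones(3)
x = 0.1 * p

c = 20.0                                 # larger than max |f_i'| - lam on the bracket, cf. (9)
M = c * np.eye(3) + A                    # cI + A is a nonsingular M-matrix

for _ in range(1000):                    # fixed point iteration x <- S(x)
    x_new = np.linalg.solve(M, (c + lam) * x - F(x))
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

print(x)                                        # the positive eigenvector
print(np.linalg.norm(A @ x + F(x) - lam * x))   # residual of (3), close to zero
```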

4 Newton-SOR iteration of positive eigenvector

The Newton iterative method was applied by Choi, Koltracht, and McKenna in [3] to solve numerically the nonlinear eigenvalue problem (3) with Stiltjes matrices. Later, Choi et al. in [4, 5] and Li et al. in [6, 7] used this iterative method to compute this problem with nonsingular M-matrices, and they established some significant theoretical and numerical results. However, each step of the Newton iterative method requires solving the linear system

$$ R'\bigl(x^{k}\bigr)x=R' \bigl(x^{k}\bigr)x^{k}-R\bigl(x^{k}\bigr),\quad k=1,2,\ldots, $$
(14)

where \(R'(x^{k})=A+F'(x^{k})-\lambda I\) is nonsingular. If A is singular or ill-conditioned, the Newton iterative method can perform very badly. Thus, it is necessary to improve this method.

On the other hand, many iterative methods have been proposed for linear systems; see [11-13]. Among these methods, the SOR iterative method is a particularly effective one. In the following, the Newton iterative method will be improved to obtain a new iterative method, the Newton-SOR iterative method. First, the SOR iterative method is recalled, since it is used to construct the new method.

For the linear equations

$$ Ax=b, $$
(15)

let \(A=D-L-U\), where \(D=\operatorname{diag}(a_{11},a_{22},\ldots,a_{nn})\), and L and U are, respectively, strictly lower triangular and strictly upper triangular. We may then write the SOR iteration in the form

$$ x^{k+1}=(D-\omega L)^{-1}\bigl[(1-\omega)D+ \omega U\bigr]x^{k}+\omega(D-\omega L)^{-1}b,\quad k=0,1, \ldots. $$
(16)

The quantity ω is called the relaxation factor, and here \(0<\omega\leq1\). Clearly, (16) reduces to the Gauss-Seidel iteration when \(\omega=1\). For the equation

$$R(x)=Ax+F(x)-\lambda x, $$

\([R'(x)]^{-1}\) exists if A, \(F(x)\), and λ satisfy the conditions of Theorem 2, and the fixed point function of the Newton iteration scheme has the following form:

$$ x=x-\bigl[R'(x)\bigr]^{-1}R(x). $$
(17)

From (17)

$$ R'(x)x=R'(x)x-R(x), $$
(18)

where

$$R'(x)=A+D_{2}^{(x)}-\lambda I $$

can be decomposed as

$$R'(x)=\bigl(D+D_{2}^{(x)}-\lambda I\bigr)-L-U, $$

where

$$D_{2}^{(x)}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} f_{1}'(x_{1})&0&\cdots&0 \\ 0&f_{2}'(x_{2})&\ddots&\vdots \\ \vdots&\ddots&\ddots&0 \\ 0&\cdots&0&f_{n}'(x_{n}) \end{array}\displaystyle \right ]. $$

Let \(D_{x}=D+D_{2}^{(x)}-\lambda I\). Then

$$R'(x)=D_{x}-L-U, $$

where \(D_{x}\), L, and U are diagonal, strictly lower triangular, and strictly upper triangular matrices, respectively. It follows from the SOR iteration scheme (16) and the fixed point function (17) of the Newton iteration scheme that the fixed point function of the Newton-SOR iteration scheme is as follows:

$$ x=(D_{x}-\omega L)^{-1}\bigl[(1- \omega)D_{x}+\omega U\bigr]x+\omega(D_{x}-\omega L)^{-1}\bigl[R'(x)x-R(x)\bigr]. $$
(19)

Assume that \(x^{k}\) has been determined. Then the Newton-SOR iteration scheme is given by

$$\begin{aligned} x^{k+1} =&(D_{x^{k}}-\omega L)^{-1} \bigl[(1-\omega)D_{x^{k}}+\omega U\bigr]x^{k} \\ &{}+ \omega(D_{x^{k}}-\omega L)^{-1}\bigl[R' \bigl(x^{k}\bigr)x^{k}-R\bigl(x^{k}\bigr)\bigr], \quad k=1,2,\ldots. \end{aligned}$$
(20)
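The scheme (20) is straightforward to implement: at each outer step k one SOR sweep is applied to the Newton system (18), with \(x^{k}\) as the starting vector of the sweep; note that \(R'(x^{k})x^{k}-R(x^{k})=D_{2}^{(x^{k})}x^{k}-F(x^{k})\), which is the right-hand side assembled below. The sketch is a minimal Python/NumPy implementation (the paper's experiments use Matlab; the function name newton_sor and the small test problem are ours).

```python
import numpy as np

def newton_sor(A, F, dF, lam, x0, omega=1.0, tol=1e-10, max_iter=1000):
    """Newton-SOR iteration (20) for A x + F(x) = lam * x.

    A  : n x n M-matrix
    F  : x -> (f_1(x_1), ..., f_n(x_n))
    dF : x -> (f_1'(x_1), ..., f_n'(x_n))
    """
    L = -np.tril(A, -1)                   # A = D - L - U
    U = -np.triu(A, 1)
    d = np.diag(A)
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        Dx = np.diag(d + dF(x) - lam)     # D_x = D + D_2^{(x)} - lam * I
        b = dF(x) * x - F(x)              # R'(x)x - R(x) = D_2^{(x)} x - F(x)
        M = Dx - omega * L
        N = (1.0 - omega) * Dx + omega * U
        x = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(A @ x + F(x) - lam * x) < tol:
            break
    return x

# Example: a 3 x 3 irreducible singular M-matrix, f_i(t) = t^3, lambda = 2,
# started from a multiple of the Perron vector p = (1, 1, 1)^T of A.
A = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
x = newton_sor(A, lambda x: x**3, lambda x: 3.0 * x**2, lam=2.0,
               x0=1.5 * np.ones(3), omega=1.0)
print(x, np.linalg.norm(A @ x + x**3 - 2.0 * x))
```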

It follows from Theorem 5.1 in [8] that the SOR iterative method converges to the unique solution of (15) for any choice of the initial guess \(x^{(0)}\) if and only if \(\rho(H_{\omega})<1\), where \(H_{\omega}\) is the SOR iterative matrix.

In the remainder of this section, the convergence result of the Newton-SOR iterative method will be established for the nonlinear eigenvalue problem (3).

Theorem 3

If A, λ, and \(F(x)=[f_{1}(x_{1}),f_{2}(x_{2}),\ldots,f_{n}(x_{n})]^{T}\) in (3) satisfy all conditions of Theorem  2, then \(\rho(H^{k}_{\omega})<1\), where \(H^{k}_{\omega}=(D_{x^{k}}-\omega L)^{-1}[(1-\omega)D_{x^{k}}+\omega U]\) is the Newton-SOR iterative matrix, i.e., the sequence \(\{x^{k}\}\) generated by the Newton-SOR iterative scheme (20) converges to the unique solution of (3) for any choice of the initial guess \(x^{(0)}\).

Proof

We only prove \(\rho(H^{k}_{\omega})<1\). According to the Newton-SOR iterative matrix

$$H^{k}_{\omega}=(D_{x^{k}}-\omega L)^{-1} \bigl[(1-\omega)D_{x^{k}}+\omega U\bigr], $$

let \(M_{k}=D_{x^{k}}-\omega L\) and \(N_{k}=(1-\omega)D_{x^{k}}+\omega U\). Then \(H^{k}_{\omega}=M_{k}^{-1}N_{k}\) and

$$\begin{aligned} M_{k}-N_{k} =&(D_{x^{k}}-\omega L)- \bigl[(1-\omega)D_{x^{k}}+\omega U\bigr] \\ =&\omega(D_{x^{k}}-L-U) \\ =&\omega\bigl(A+D_{2}^{(x^{k})}-\lambda I\bigr) \\ =&\omega R'\bigl(x^{k}\bigr). \end{aligned}$$
(21)

It follows from Lemma 1 that \(A+D_{1}^{(x)}\) is an irreducible nonsingular M-matrix, where \(D_{1}^{(x)}\) is defined in (5). From (8), \(D_{2}^{(x)}>D_{1}^{(x)}\), and hence \(A+D_{2}^{(x)}>A+D_{1}^{(x)}\). Lemma 2 then shows that the smallest eigenvalue of \(A+D_{2}^{(x)}\) is larger than that of \(A+D_{1}^{(x)}\). Since the positive solution x of (3) satisfies \((A+D_{1}^{(x)})x=\lambda x\), the Perron-Frobenius theorem (see Theorem 2.7 in [10]) indicates that λ is the smallest eigenvalue of \(A+D_{1}^{(x)}\). Lemma 3 then shows that \(A+D_{2}^{(x)}-\lambda I\) is an irreducible nonsingular M-matrix. So

$$R'\bigl(x^{k}\bigr)=A+D_{2}^{(x^{k})}- \lambda I $$

is an irreducible nonsingular M-matrix, and so is \(\omega R'(x^{k})\). It is easy to see that \(D_{x^{k}}-\omega L\) is also an irreducible nonsingular M-matrix, so \((D_{x^{k}}-\omega L)^{-1}\geq0\). Moreover, \((1-\omega)D_{x^{k}}+\omega U\geq0\), and hence

$$\omega R'\bigl(x^{k}\bigr)=M_{k}-N_{k} $$

is a regular splitting of the matrix \(\omega R'(x^{k})\). It follows from Lemma 5 that \(\rho(M_{k}^{-1}N_{k})=\rho(H_{\omega}^{k})<1\), i.e., the sequence \(\{x^{k}\}\) generated by the Newton-SOR iterative scheme (20) converges to the unique solution of (3) for any choice of the initial guess \(x^{(0)}\). This completes the proof. □

Theorem 4

Let \(A\in M_{n}^{\bullet}\). If λ and \(F(x)=[f_{1}(x_{1}),f_{2}(x_{2}),\ldots,f_{n}(x_{n})]^{T}\) in (3) satisfy all conditions of Theorem  2, then \(\rho(H^{k}_{\omega })<1\), where \(H^{k}_{\omega}=(D_{x^{k}}-\omega L)^{-1}[(1-\omega )D_{x^{k}}+\omega U]\) is the Newton-SOR iterative matrix, i.e., the sequence \(\{x^{k}\}\) generated by the Newton-SOR iterative scheme (20) converges to the unique solution of (3) for any choice of the initial guess  \(x^{(0)}\).

Proof

Similar to the proof of Theorem 3, the conclusion of this theorem is obtained immediately from Lemma 1, Lemma 2, Lemma 3, and Lemma 5. □

Theorem 5

Let \(A\in M_{n}\). If λ and \(F(x)=[f_{1}(x_{1}),f_{2}(x_{2}),\ldots ,f_{n}(x_{n})]^{T}\) in (3) satisfy all conditions of Theorem  2, then \(\rho(H^{k}_{\omega})<1\), where \(H^{k}_{\omega }=(D_{x^{k}}-\omega L)^{-1}[(1-\omega)D_{x^{k}}+\omega U]\) is the Newton-SOR iterative matrix, i.e., the sequence \(\{x^{k}\}\) generated by the Newton-SOR iterative scheme (20) converges to the unique solution of (3) for any choice of the initial guess  \(x^{(0)}\).

Proof

According to (4), the conclusion of this theorem is obtained immediately from Theorem 3 and Theorem 4. □

5 Numerical experiment

Now we verify the convergence of the Newton-SOR iteration by a numerical example. We start with the one-dimensional prototype of the Gross-Pitaevskii equation,

$$\begin{aligned}& -x''(t)+\nu(t)x'(t)+kx^{3}(t)= \lambda x(t), \quad -\infty< t< \infty, \\& x(\pm\infty)=0,\qquad \int_{-\infty}^{\infty}x^{2}(t)\, dt=1, \end{aligned}$$

where \(\nu(t)=t^{2}\). The finite difference method is applied to this equation, which leads to the matrix of the form

$$A=\frac{1}{h^{2}} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1&-1+\frac{\nu_{1}h}{2}&0&\cdots&0&0 \\ -1+\frac{\nu_{2}h}{2}&2&-1+\frac{\nu_{2}h}{2}&\cdots&0&0 \\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots \\ 0&0&0&\cdots&2&-1+\frac{\nu_{n-1}h}{2} \\ 0&0&0&\cdots&-1+\frac{\nu_{n}h}{2}&1 \end{array}\displaystyle \right ], $$

where the \(\nu_{i}\) are the values of \(\nu(t)\) at the mesh points and h is the discretization step-size. When h is small enough, A is a singular M-matrix.
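For reference, the matrix A can be assembled as in the following sketch (Python/NumPy; the helper name build_A is ours, and the mesh, taken here as the interval \([-0.5,0.5]\) with step \(h=1/(n+1)\), is only an illustrative reading of the setup described in the next paragraph).

```python
import numpy as np

def build_A(n, h, t):
    """Assemble the n x n finite difference matrix A described above."""
    nu = t**2                                   # nu(t) = t^2 at the mesh points
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0 + nu[i] * h / 2.0
        if i < n - 1:
            A[i, i + 1] = -1.0 + nu[i] * h / 2.0
    A[0, 0] = A[-1, -1] = 1.0                   # first and last diagonal entries are 1
    return A / h**2

n = 100
h = 1.0 / (n + 1)                               # illustrative step size on [-0.5, 0.5]
t = np.linspace(-0.5 + h, 0.5 - h, n)           # interior mesh points (assumed)
A = build_A(n, h, t)

offdiag = A - np.diag(np.diag(A))
print((offdiag <= 0).all())                     # True: A is a Z-matrix for this mesh
```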

For simplicity, we truncate t to the interval \([-0.5,0.5]\). Let \(n=100\), \(h=\frac{10}{n+1}\), and \(t_{i}=1+ih\), \(i=1,\ldots,n+1\). We chose the parameter values \(k=1\), \(\lambda=30\), and \(\alpha=15\). We begin the iteration with \(x^{(0)}=\alpha p\), where p is a positive eigenvector of A. Let \(e_{1}^{(k)}=\frac{\|x^{k}-x^{k-1}\|_{2}}{\|x^{k}\|_{2}}\) and \(e_{2}^{(k)}=\|Ax^{k}+F(x^{k})-\lambda x^{k}\|_{2}\). The iteration stops when \(e_{1}^{(k)}+e_{2}^{(k)}<\varepsilon\) with \(\varepsilon=10^{-5}\). We apply the Newton-SOR iterative method and the Newton iterative method to the equation above on a PC using Matlab 7.0. The results of the numerical experiment are given in Tables 1 and 2.

Table 1 Iteration number, CPU time, and values of \(\pmb{e_{1}^{(k)}}\) and \(\pmb{e_{2}^{(k)}}\) for different values of ω
Table 2 Iteration number, CPU time, and values of \(\pmb{e_{1}^{(k)}}\) and \(\pmb{e_{2}^{(k)}}\) of the Newton method

The CPU time, iteration number, and values of \(e_{1}^{(k)}\) and \(e_{2}^{(k)}\) of the Newton-SOR method for different values of ω and of the Newton method are given in Table 1 and Table 2, respectively. This experiment shows that the Newton-SOR method has the shortest CPU time, the fewest iterations, and the smallest error when \(\omega=2.1\). Comparing Table 1 and Table 2, we find that the Newton-SOR method requires much less CPU time and far fewer iterations, and attains a much smaller error, than the Newton iterative method. This clearly illustrates that the Newton-SOR iterative method is superior to the Newton iterative method.

6 Conclusions

This article mainly studies the nonlinear eigenvalue problem with an irreducible singular M-matrix and proposes some sufficient conditions under which a positive eigenvector of this problem exists and is unique. Under these conditions, we combine the Newton iteration with the SOR iteration to construct the Newton-SOR iterative method for computing this positive eigenvector numerically, and we establish some convergence results on this iterative method for nonlinear eigenvalue problems with an irreducible singular M-matrix, a nonsingular M-matrix, and a general M-matrix, respectively. Finally, we give a numerical example to illustrate that the Newton-SOR iterative method is superior to the Newton iterative method.

References

  1. Parkins, AS, Walls, DF: The physics of trapped dilute-gas Bose-Einstein condensates. Phys. Rep. 303(1), 1-80 (1998)

  2. Chang, SM, Lin, CS, Lin, TC, Lin, WW: Segregated nodal domains of two-dimensional multispecies Bose-Einstein condensates. Phys. D, Nonlinear Phenom. 196(3), 341-361 (2004)

  3. Choi, YS, Koltracht, I, McKenna, PJ: A generalization of the Perron-Frobenius theorem for non-linear perturbations of Stiltjes matrices. Contemp. Math. 281, 325-330 (2001)

  4. Choi, YS, Koltracht, I, McKenna, PJ, Savytska, N: Global monotone convergence of Newton iteration for a nonlinear eigen-problem. Linear Algebra Appl. 357(1), 217-228 (2002)

  5. Choi, YS, Koltracht, I, McKenna, PJ: On eigen-structure of a nonlinear map in \(R^{n}\). Linear Algebra Appl. 399, 141-155 (2005)

  6. Li, YT, Wu, S, Liu, D: Positive eigenvector of nonlinear perturbations of nonsymmetric M-matrix and Newton iterative solution. Appl. Math. Comput. 200(1), 308-320 (2008)

  7. Li, YT: A generalization of the Perron-Frobenius theorem for nonlinear perturbations of M-matrix. In: Proceedings of the 14th Conference of the International Linear Algebra Society, July, pp. 159-162. World Academic Union, Edgbaston (2007)

  8. Berman, A, Plemmons, RJ: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)

  9. Ortega, JM, Rheinboldt, WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)

  10. Varga, RS: Matrix Iterative Analysis, 2nd edn. Springer, Berlin (2000)

  11. Mathews, JH, Fink, KD: Numerical Methods Using MATLAB, 4th edn. Publishing House of Electronics Industry, Beijing (2002)

  12. He, JH: A coupling method of a homotopy technique and a perturbation technique for non-linear problems. Int. J. Non-Linear Mech. 35(1), 37-43 (2000)

  13. Wang, XH, Guo, XP: On the unified determination for the convergence of Newton's method and its deformations. Numer. Math. J. Chinese Univ. 4, 363-368 (1999)


Acknowledgements

The work was supported by the National Natural Science Foundations of China (11601409, 11201362, and 11271297), the Natural Science Foundations of Shaanxi Province of China (2016JM1009) and the Science Foundation of the Education Department of Shaanxi Province of China (14JK1305).

Author information

Correspondence to Cheng-yi Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Zhang, Cy., Song, Yy. & Luo, S. Positive eigenvector of nonlinear eigenvalue problem with a singular M-matrix and Newton-SOR iterative solution. J Inequal Appl 2016, 225 (2016). https://doi.org/10.1186/s13660-016-1169-y
