
A splitting method for shifted skew-Hermitian linear system

Abstract

In this paper, we present a splitting method for solving the shifted skew-Hermitian linear system, which we call the α-SSS for short. Some convergence results are established, and numerical experiments show that the splitting method is feasible for solving this class of linear systems.

1 Introduction

In this paper, we consider the shifted skew-Hermitian linear system of the form

$$ Ax=b,\quad A=(a_{ij})\in\mathbb{C}^{n\times n}\mbox{ nonsingular, and } b,x\in\mathbb{C}^{n}, $$
(1)

where A is a shifted skew-Hermitian matrix, i.e.

$$ A=\alpha{I}+S, \qquad S^{H}=-S, $$
(2)

α is some positive constant and I is the identity matrix. It is obvious that the matrix \(\alpha{I}+S\) is a non-Hermitian positive definite matrix [1–8]. In the literature [1, 3], the Hermitian and skew-Hermitian splitting (HSS) method and the positive-definite and skew-Hermitian splitting (PSS) method are presented for solving non-Hermitian systems. These methods split a non-Hermitian system into a Hermitian positive definite system and a (shifted) skew-Hermitian system, and there are many good strategies for solving the Hermitian positive definite system. For the skew-Hermitian system, however, the previous methods can be more difficult than solving the original linear system [4], and no method is available that deals with this system both efficiently and cheaply. In this paper we consider a splitting method for solving the shifted skew-Hermitian system, and we present a corresponding convergence theorem for our method.

The rest of the paper is organized as follows. Some notions and preliminary results used in this paper are given in Section 2. A splitting method for solving the shifted skew-Hermitian linear system is proposed in Section 3. In Section 4, the convergence theorem for the splitting method is established. In Section 5, we study the properties of the spectral radius and establish an internal connection between the best choices of β, γ, and s. In Section 6, the power method is used to estimate the largest modulus eigenvalue of the Hermitian matrix \(L+L^{H}\). Numerical experiments on the shifted skew-Hermitian system are reported in Section 7. Concluding remarks are presented in Section 8.

2 Preliminaries

In this section we give some notions and preliminary results that are used in this paper. \(\mathbb{C}^{n\times n}\) (\(\mathbb{R}^{n\times n}\)) denotes the set of all \(n\times n\) complex (real) matrices. For \(D=(d_{ij})\in\mathbb{C}^{n\times n}\), we denote by \(\sigma(D)\) the spectrum of D, namely the set of all eigenvalues of D. The spectral radius of D is defined by \(\rho(D) = \max\{|\lambda| : \lambda\in\sigma(D)\}\), and the transpose of D is denoted by \(D^{T}\).

Since the eigenvalues of the Hermitian matrix \(B\in\mathbb {C}^{n\times n}\) are real, we shall adopt the convention that they are labeled according to increasing (non-decreasing) size:

$$ \lambda_{\mathrm{min}}=\lambda_{1}\leq \lambda_{2}\leq\cdots\leq\lambda _{n-1}\leq \lambda_{n}=\lambda_{\mathrm{max}}. $$
(3)

The smallest and largest eigenvalues are easily characterized as the solutions of constrained minimization and maximization problems.

Lemma 1

(Rayleigh-Ritz [9, 10])

Let \(B\in\mathbb{C}^{n\times n}\) be Hermitian and let the eigenvalues of matrix B be ordered as in (3). Then

$$\begin{aligned}& \lambda_{1}x^{H}x\leq x^{H}Bx\leq \lambda_{n}x^{H}x \quad \textit{for all }x\in \mathbb{C}^{n}, \\& \lambda_{\mathrm{max}}=\lambda_{n}=\max_{x\neq0} \frac{x^{H}Bx}{x^{H}x}=\max_{x^{H}x=1}x^{H}Bx, \\& \lambda_{\mathrm{min}}=\lambda_{1}=\min_{x\neq0} \frac{x^{H}Bx}{x^{H}x}=\min_{x^{H}x=1}x^{H}Bx. \end{aligned}$$

Corollary 1

Let \(L\in\mathbb{C}^{n\times n}\) be a given matrix, and let \(\frac{x^{H}Lx}{x^{H}x}=s+ti\) for a nonzero vector \(x\in\mathbb{C}^{n}\). Then \(\frac{x^{H}L^{H}x}{x^{H}x}=s-ti\), \(s= \frac{1}{2}\frac{x^{H}(L+L^{H})x}{x^{H}x}\), and \(s\in \frac{1}{2}[\lambda_{\mathrm{min}}(L+L^{H}), \lambda_{\mathrm{max}}(L+L^{H})]\).
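As a quick numerical illustration of Corollary 1 (a sketch of our own, not part of the original argument; NumPy is assumed), one can draw a random lower triangular L and a random nonzero x and check the inclusion:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# A random lower triangular complex L and a random nonzero vector x.
L = np.tril(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

s = ((x.conj() @ L @ x) / (x.conj() @ x)).real    # s = Re(x^H L x / x^H x)
lams = np.linalg.eigvalsh(L + L.conj().T)         # real eigenvalues, ascending
assert 0.5 * lams[0] <= s <= 0.5 * lams[-1]       # s in (1/2)[lambda_min, lambda_max]
```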

3 α-SSS for shifted skew-Hermitian system

In this section, we consider a splitting method to solve the shifted skew-Hermitian linear system; we call this method the α-SSS for short. We split the matrix \(\alpha{I}+S\) into the difference of a lower triangular matrix and an upper triangular matrix, i.e.

$$ \alpha{I}+S=(\beta-\gamma){I}+\bigl(L-L^{H}\bigr)=( \beta{I}+L)-\bigl(\gamma{I}+L^{H}\bigr), $$
(4)

where \(S=L-L^{H}\), L is lower triangular, \(L^{H}\), the conjugate transpose of L, is upper triangular, and α, β, and γ are positive constants with \(\beta> \gamma\) and \(\beta-\gamma= \alpha\). So, we can write (1) as

$$ (\beta{I}+L)x=\bigl(\gamma{I}+L^{H}\bigr)x+b. $$
(5)
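The diagonal entries of a skew-Hermitian S are purely imaginary, so one concrete way to realize the splitting \(S=L-L^{H}\) (a sketch of our own; the paper only requires L to be lower triangular) is to take the strict lower triangle of S plus half of its diagonal:

```python
import numpy as np

def skew_split(S):
    """Return a lower triangular L with S = L - L^H.

    The diagonal of a skew-Hermitian S is purely imaginary, so splitting
    it evenly between L and -L^H reproduces it exactly.
    """
    return np.tril(S, -1) + 0.5 * np.diag(np.diag(S))

# Quick check on a random skew-Hermitian matrix S = M - M^H.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
S = M - M.conj().T
L = skew_split(S)
assert np.allclose(S, L - L.conj().T)
```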

Since the diagonal entries of \(\beta{I}+L\) are nonzero, we carry out the following iterative method derived from (5):

$$ (\beta{I}+L)x^{k+1}=\bigl(\gamma{I}+L^{H} \bigr)x^{k}+b,\quad k=0,1,2,\ldots, $$
(6)

where \(x^{0}\) is an initial estimate of the unique solution x of (1). Because \(\beta{I}+L\) is a nonsingular matrix, the α-SSS can be written as

$$ x^{k+1}=(\beta{I}+L)^{-1}\bigl( \gamma{I}+L^{H}\bigr)x^{k}+(\beta{I}+L)^{-1}b,\quad k=0,1,2,\ldots. $$
(7)

The matrix \(T\triangleq(\beta{I}+L)^{-1}(\gamma{I}+L^{H})\) is called the iteration matrix of the α-SSS. It is well known that the α-SSS converges for any given \(x^{0}\) if and only if \(\rho(T)<1\), where \(\rho(T)\) is the spectral radius of the matrix T. Thus, to establish convergence results for the α-SSS, we need to study the spectral radius of the iteration matrix in (7).
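A minimal sketch of one way to implement (6)-(7), mirroring the skew_split construction above and exploiting the triangular structure of \(\beta{I}+L\) through forward substitution rather than forming an inverse (the function names and stopping rule are illustrative choices, not prescribed by the paper):

```python
import numpy as np
from scipy.linalg import solve_triangular

def alpha_sss(S, b, alpha, beta, x0=None, tol=1e-5, max_iter=50000):
    """alpha-SSS: iterate (beta*I + L) x_{k+1} = (gamma*I + L^H) x_k + b."""
    n = S.shape[0]
    gamma = beta - alpha                            # so that beta - gamma = alpha
    L = np.tril(S, -1) + 0.5 * np.diag(np.diag(S))  # S = L - L^H, as above
    M = beta * np.eye(n) + L                        # lower triangular
    N = gamma * np.eye(n) + L.conj().T              # upper triangular
    x = np.zeros(n, dtype=complex) if x0 is None else np.asarray(x0, dtype=complex)
    for k in range(1, max_iter + 1):
        x_new = solve_triangular(M, N @ x + b, lower=True)  # one sweep of (6)
        if np.linalg.norm(x_new - x) <= tol:        # ||x_{k+1} - x_k||_2 <= tol
            return x_new, k
        x = x_new
    return x, max_iter
```

For a nonsingular shifted skew-Hermitian \(A=\alpha{I}+S\), the returned x approximates the solution of (1) whenever the parameters satisfy the conditions of Theorem 1 in the next section.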

4 Convergence analysis for α-SSS

In this section, we mainly study the convergence of the α-SSS for a shifted skew-Hermitian linear system.

Theorem 1

Let \(\alpha{I}+S\) be the shifted skew-Hermitian matrix, where S is a skew-Hermitian matrix and α is a given positive constant. If \(2s+\beta+\gamma>0\) and \(\beta>\gamma\geq0\) hold for all \(s\in\frac{1}{2}[\lambda_{\mathrm{min}}(L+L^{H}),\lambda_{\mathrm{max}}(L+L^{H})]\), then \(\rho((\beta{I}+L)^{-1}(\gamma{I}+L^{H}))<1\), and thus the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\).

Proof

It is well known that the iterative sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\) if and only if the spectral radius of the iteration matrix is less than 1. So, in order to guarantee the convergence of the α-SSS, we just need

$$ \rho\bigl((\beta{I}+L)^{-1}\bigl(\gamma{I}+L^{H} \bigr)\bigr)< 1. $$
(8)

Assume that λ is an eigenvalue of the iteration matrix \((\beta{I}+L)^{-1}(\gamma{I}+L^{H})\) with associated eigenvector x, i.e.

$$ \bigl((\beta{I}+L)^{-1}\bigl(\gamma{I}+L^{H} \bigr)\bigr) x=\lambda x, $$
(9)

we will prove that the modulus of every eigenvalue of the matrix \((\beta{I}+L)^{-1}(\gamma{I}+L^{H})\) is less than 1. It follows from Corollary 1 that, for this eigenvector x, we may write \(\frac{x^{H}Lx}{x^{H}x}=s+ti\) and \(\frac{x^{H}L^{H}x}{x^{H}x}=s-ti\) with \(s\in\frac{1}{2}[\lambda_{\mathrm{min}}(L+L^{H}),\lambda_{\mathrm{max}}(L+L^{H})]\). Since \(2s+\beta+\gamma>0\) and \(\beta>\gamma\geq0\), we have \(|\beta+s+ti|^{2}-|\gamma+s-ti|^{2}=(\beta-\gamma)(2s+\beta+\gamma)>0\), and hence

$$ |\gamma+s-ti|< |\beta+s+ti|. $$
(10)

By \(\frac{x^{H}Lx}{x^{H}x}=s+ti\), and \(\frac {x^{H}L^{H}x}{x^{H}x}=s-ti\), the inequality (10) could be rewritten as

$$\biggl\vert \gamma+\frac{x^{H}L^{H}x}{x^{H}x}\biggr\vert < \biggl\vert \beta+ \frac{x^{H}Lx}{x^{H}x}\biggr\vert . $$

Furthermore, if we set \(x^{H}x=\langle x,x\rangle=1\), this leads to

$$ \bigl\vert x^{H}\bigl(\gamma{I}+L^{H}\bigr)x \bigr\vert < \bigl\vert x^{H}(\beta{I}+L)x \bigr\vert . $$
(11)

Premultiplying (9) by \(\beta{I}+L\) gives \((\gamma{I}+L^{H})x=\lambda(\beta{I}+L)x\), and taking the inner product with x yields

$$\lambda=\frac{x^{H}(\gamma{I}+L^{H})x}{x^{H}(\beta{I}+L)x}. $$

Thus, according to Lemma 1 and (11), we have

$$ \vert \lambda \vert =\biggl\vert \frac{x^{H}(\gamma{I}+L^{H})x}{x^{H}(\beta{I}+L)x}\biggr\vert \leq\max_{x^{H}x=1}\biggl\vert \frac{x^{H}(\gamma{I}+L^{H})x}{x^{H}(\beta{I}+L)x}\biggr\vert < 1 $$
(12)

for any eigenvalue of the matrix \((\beta{I}+L)^{-1}(\gamma{I}+L^{H})\).

As a result,

$$ \rho\bigl((\beta{I}+L)^{-1}\bigl(\gamma{I}+L^{H} \bigr)\bigr)=\max\bigl(\vert \lambda \vert \bigr)< 1. $$
(13)

 □

From Theorem 1, if we set \(s= \frac{1}{2}\lambda _{\mathrm{min}}(L+L^{H})\), we have the following corollary.

Corollary 2

Let \(\beta-\gamma=\alpha\) and \(\beta>\gamma\geq0\). If \(\lambda_{\mathrm{min}}(L+L^{H})+\beta+\gamma>0\) for positive constants β and γ, then the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\).

Furthermore, when the positive constants β and γ are set large enough, the conditions \(2s+\beta+\gamma>0\) and \(\beta>\gamma\geq0\) in Theorem 1 hold naturally, and thus we have the following corollary.

Corollary 3

If \(-|\lambda(L+L^{H})|_{\mathrm{max}}+\beta+\gamma>0\) for positive constants β and γ with \(\beta-\gamma=\alpha\) and \(\beta>\gamma\geq0\), then the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\).

Proof

We take

$$\gamma=\frac{1}{2}\bigl\vert \lambda\bigl(L+L^{H}\bigr)\bigr\vert _{\mathrm{max}},\qquad \beta=\frac {1}{2}\bigl\vert \lambda \bigl(L+L^{H}\bigr)\bigr\vert _{\mathrm{max}}+\alpha, $$

and hence

$$\beta-\gamma=\alpha,\qquad \beta>\gamma\geq0. $$

If \(-|\lambda(L+L^{H})|_{\mathrm{max}}+\beta+\gamma>0\), we have \(2s+\beta +\gamma>0\) for any \(s\in\frac{1}{2}[\lambda_{\mathrm{min}}(L+L^{H}),\lambda _{\mathrm{max}}(L+L^{H})]\). By Theorem 1, we can conclude that the sequence \(\{x^{k}\}\) generated by α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\). □

Theorem 1 establishes the convergence of the α-SSS. By the conditions \(2s+\beta+\gamma>0\) and \(\beta>\gamma\geq0\) in Theorem 1, an important observation is that the choice of the positive constants β and γ is governed by the real number s. In other words, the choice of the real number s plays an important role in the properties (such as stability and rate of convergence) of the α-SSS.

5 Best choice of the constants β and γ for α-SSS

In this section, we study the best choice of the constants β and γ for the α-SSS. Furthermore, we present the internal connection between the best choices of the constants β and γ and the number s.

From the proof of Theorem 1, it is very easy to see that

$$\begin{aligned} |\lambda|^{2} =& \frac{(s+\gamma)^{2}+t^{2}}{(s+\gamma+\alpha)^{2}+t^{2}} \\ =& 1-\frac{(s+\gamma+\alpha)^{2}+t^{2}-[(s+\gamma)^{2}+t^{2}]}{(s+\gamma+\alpha)^{2}+t^{2}} \\ =& 1-\frac{2\alpha(s+\gamma+\frac{1}{2}\alpha)}{(s+\gamma+\alpha)^{2}+t^{2}} \\ =& 1-\frac{2\alpha{y}}{(y+\frac{1}{2}\alpha)^{2}+t^{2}} \quad \biggl(y=s+\gamma+\frac{1}{2}\alpha\biggr) \\ =& 1-\frac{2\alpha{y}}{y^{2}+\alpha{y}+\frac{1}{4}\alpha^{2}+t^{2}} \\ =& 1-\frac{2\alpha}{y+\frac{\frac{1}{4}\alpha^{2}+t^{2}}{y}+\alpha}. \end{aligned}$$
(14)

By the arithmetic-geometric mean inequality, when \(y=\frac{\frac{1}{4}\alpha^{2}+t^{2}}{y}\), that is, \(y^{2}= \frac{1}{4}\alpha^{2}+t^{2}\), the quantity \(|\lambda|^{2}\) attains its smallest value,

$$ |\lambda|^{2}=1-\frac{2\alpha}{2\sqrt{\frac{1}{4}\alpha ^{2}+t^{2}}+\alpha}. $$
(15)

That is to say, when \(\gamma=\sqrt{\frac{1}{4}\alpha^{2}+t^{2}}-\frac{1}{2}\alpha-s\), \(|\lambda|\) attains its smallest value

$$ \sqrt{1-\frac{2\alpha}{2\sqrt{\frac{1}{4}\alpha^{2}+t^{2}}+\alpha}}. $$
(16)

Combining (13) and (16) with Theorem 1, we get

$$ \sqrt{1-\frac{2\alpha}{2\sqrt{\frac{1}{4}\alpha^{2}+t^{2}}+\alpha }}\leq \rho\bigl((\beta{I}+L)^{-1} \bigl(\gamma{I}+L^{H}\bigr)\bigr)< 1. $$
(17)

(i) If \(t=0\), then \(\sqrt{1-\frac{2\alpha}{2\sqrt{\frac{1}{4}\alpha^{2}+t^{2}}+\alpha}}\) attains its smallest value 0. In other words, when \(\gamma=-s\), we have

$$\sqrt{1-\frac{2\alpha}{2\sqrt{\frac{1}{4}\alpha^{2}+t^{2}}+\alpha}}=0. $$

From the condition \(\gamma=-s\) for the smallest value of \(|\lambda|\) and the conditions in Theorem 1, we get the best choice of the constants β and γ, namely

$$\gamma_{\mathrm{best}}=-s,\qquad \beta_{\mathrm{best}}=-s+\alpha\quad \mbox{and} \quad s\leq0. $$

(ii) If \(t\neq0\), similarly, from the condition \(\gamma=\sqrt{\frac{1}{4}\alpha^{2}+t^{2}}-\frac{1}{2}\alpha-s\) for the smallest \(|\lambda|\) and the conditions in Theorem 1, we get the best choice of the constants β and γ, namely

$$\begin{aligned}& \gamma_{\mathrm{best}}=\sqrt{\frac{1}{4}\alpha^{2}+t^{2}}- \frac{1}{2}\alpha -s, \\& \beta_{\mathrm{best}}=\sqrt{\frac{1}{4} \alpha^{2}+t^{2}}+\frac {1}{2}\alpha-s\quad \mbox{and} \\& t^{2}>\max\bigl\{ s^{2}+\alpha s, s^{2}-\alpha s \bigr\} . \end{aligned}$$

According to the above analysis, the conditions \(2s+\beta+\gamma>0\), \(\beta-\gamma=\alpha\), and \(\beta>\gamma\geq0\) in Theorem 1 hold for any \(s\leq0\) and \(t\in\mathbb{R}\).

Case (ii) shows that the best choice of the constants β and γ is determined by the value of the constant t. Unfortunately, we do not know the value of t at all, and finding it is as difficult as solving the original problem. So, in this paper we only consider the first case, letting \(\gamma_{\mathrm{best}}=-s\), \(\beta_{\mathrm{best}}=-s+\alpha\), and \(s\leq0\). In fact, when the positive constants β and γ are set large enough, the second case and the conditions in Theorem 1 hold naturally.

Remark 1

By the conclusion \(s\in\frac{1}{2}[\lambda_{\mathrm{min}}(L+L^{H}),\lambda_{\mathrm{max}}(L+L^{H})]\) of Corollary 1, to make the conditions \(2s+\beta+\gamma>0\), \(\beta-\gamma=\alpha\), and \(\beta>\gamma\geq0\) of Theorem 1 hold, we may simply let

$$s=\min\biggl\{ \frac{1}{2}\lambda_{\mathrm{min}}\bigl(L+L^{H} \bigr), 0\biggr\} ,\qquad \gamma=-s\quad \mbox{and}\quad \beta-\gamma=\alpha $$

or

$$\gamma=\frac{1}{2}\bigl\vert \lambda\bigl(L+L^{H}\bigr)\bigr\vert _{\mathrm{max}}\quad \mbox{and}\quad \beta-\gamma =\alpha. $$

This is consistent with Corollary 2 or Corollary 3.
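As a small numerical check of this choice (our own sketch, not from the paper), one can form the iteration matrix T explicitly for a modest n and verify that its spectral radius stays below 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 100, 0.05
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = M - M.conj().T                                  # skew-Hermitian
L = np.tril(S, -1) + 0.5 * np.diag(np.diag(S))      # S = L - L^H

s = min(0.5 * np.linalg.eigvalsh(L + L.conj().T)[0], 0.0)  # as in Remark 1
gamma = -s
beta = gamma + alpha                                # beta - gamma = alpha

T = np.linalg.solve(beta * np.eye(n) + L, gamma * np.eye(n) + L.conj().T)
print(max(abs(np.linalg.eigvals(T))))              # spectral radius, < 1 by Theorem 1
```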

By Remark 1, it seems that we have obtained the best choice of the positive constants β and γ. However, for a large scale matrix it is very difficult to compute \(\lambda_{\mathrm{min}}(L+L^{H})\). A natural idea is to use an approximate estimate in place of \(\lambda_{\mathrm{min}}(L+L^{H})\) that still guarantees the convergence of the α-SSS. A direct way is to replace \(\lambda_{\mathrm{min}}(L+L^{H})\) by \(-|\lambda(L+L^{H})|_{\mathrm{max}}\), which is consistent with Corollary 3 and the second choice in Remark 1.

6 Estimate the largest modulus eigenvalue for \(L+L^{H}\)

In this section, we turn to estimating the largest modulus eigenvalue of the Hermitian matrix \(L+L^{H}\). One of the powerful techniques for solving eigenvalue problems is the so-called power method [11]. Simply described, this method consists of generating the sequence of vectors \(B^{k}v_{0}\), where \(v_{0}\) is some nonzero initial vector. This sequence of vectors, when normalized appropriately, converges under reasonably mild conditions to a dominant eigenvector, i.e., an eigenvector associated with the eigenvalue of largest modulus. The most commonly used normalization is to ensure that the largest component of the current iterate is equal to one. This yields the following algorithm.

Algorithm

(The power method for \(|\lambda(L+L^{H})|_{\mathrm{max}}\))

$$\begin{aligned}& \textit{Initial guess}\mbox{: } v_{0},\ k=1 \\& \mathbf{While} \textit{ not converged } \mathbf{do} \\& \quad v_{k}= \frac{1}{\alpha_{k}}\bigl(L+L^{H}\bigr)v_{k-1} \\& \quad k\leftarrow k+1 \\& \mathbf{End\ while} \end{aligned}$$

where \(\alpha_{k}\) is a component of the vector \((L+L^{H})v_{k-1}\) of maximum modulus. It is also noted in [11] that if the dominant eigenvalue is multiple but semi-simple, the algorithm provides only that eigenvalue and one corresponding eigenvector. So, the power method can be used to estimate the largest modulus eigenvalue \(|\lambda(L+L^{H})|_{\mathrm{max}}\) of the Hermitian matrix \(L+L^{H}\).
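A sketch of this power iteration in code (the relative convergence test and iteration cap are illustrative choices):

```python
import numpy as np

def power_method(B, tol=1e-8, max_iter=10000, seed=0):
    """Estimate the largest-modulus eigenvalue of the Hermitian B = L + L^H."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(B.shape[0]).astype(complex)
    lam = 0.0
    for _ in range(max_iter):
        w = B @ v
        alpha_k = w[np.argmax(np.abs(w))]  # component of maximum modulus
        v = w / alpha_k                    # largest component normalized to one
        if abs(alpha_k - lam) <= tol * abs(alpha_k):
            break
        lam = alpha_k
    return alpha_k, v                      # alpha_k -> eigenvalue of largest modulus
```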

Remark 2

In order to quickly get an approximation of the largest modulus eigenvalue of the Hermitian matrix \(L+L^{H}\), by the Geršgorin theorem [11], an upper bound on the eigenvalues of the Hermitian matrix \(L+L^{H}\) can be obtained quickly, and we conclude that

$$\begin{aligned} \vert \lambda_{i}\vert \leq& \bigl\vert L_{i,i}+L_{i,i}^{H}\bigr\vert +\sum_{j=1, j\neq i}^{n}\bigl\vert L_{i,j}+L_{i,j}^{H}\bigr\vert \\ \leq& \max_{i}\sum_{j=1}^{n}\bigl\vert L_{i,j}+L_{i,j}^{H}\bigr\vert \\ =& \bigl\Vert L+L^{H}\bigr\Vert _{\infty}. \end{aligned}$$
(18)

So, \(-|\lambda(L+L^{H})|_{\mathrm{max}}\) can be replaced by the lower bound \(-\|L+L^{H}\|_{\infty}\), and then we take

$$ \gamma= \frac{1}{2}\bigl\| L+L^{H}\bigr\| _{\infty}, \qquad \beta= \frac{1}{2}\bigl\| L+L^{H}\bigr\| _{\infty}+\alpha. $$
(19)

It is obvious that the conditions that \(2s+\beta+\gamma>0\), \(\beta -\gamma=\alpha\), and \(\beta>\gamma\geq0\) in Theorem 1 hold, and thus the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\).
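In code, this bound-based choice of parameters might look as follows (a sketch under the same half-diagonal splitting used above):

```python
import numpy as np

def parameters_from_bound(S, alpha):
    """Choice (19): gamma = ||L + L^H||_inf / 2 and beta = gamma + alpha."""
    L = np.tril(S, -1) + 0.5 * np.diag(np.diag(S))   # S = L - L^H
    gamma = 0.5 * np.linalg.norm(L + L.conj().T, ord=np.inf)
    return gamma + alpha, gamma                      # (beta, gamma)
```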

7 Numerical examples

In this section, some numerical examples are given to demonstrate the convergence of the α-SSS for the shifted skew-Hermitian linear system.

Example 1

Consider the Korteweg-de Vries partial differential equation,

$$ u_{t}=-uu_{x}-\delta^{2}u_{xxx}, $$
(20)

with the periodic boundary conditions \(u(0,t)=u(l,t)\), where l is the period and δ is a small parameter. As shown in [12], appropriate methods of space discretization lead to a set of ODEs of the form

$$ y'=H(y)y, \qquad y(0)=y_{0}. $$
(21)

The evolution is on the sphere of radius \(\|y_{0}\|\), where \(y(t)=(u_{0}(t), u_{1}(t), \ldots,u_{n-1}(t))^{T}\), \(u_{i}(t)\approx u(i\Delta x, t)\) for \(i=0, 1,\ldots, n-1\), \(n=1\text{,}000\), and where \(\Delta x=\frac{2}{n}\) is the spatial step on \([0,2]\). For instance, if we consider the space discretization method in [13], we have

$$ H(y)=-\frac{1}{6\Delta x}g(y)-\frac{\delta^{2}}{2\Delta x^{3}}P, $$
(22)

where both \(g(y)\) and P are two \(n\times n\) skew-symmetric matrices given by

$${\bigl[g(y)\bigr]}_{i,j}=\left \{ \textstyle\begin{array}{l@{\quad}l} u_{i-1}+u_{i}, & j=i+1, \\ -(u_{0}+u_{n-1}), & i=1,j=n, \\ -(u_{j-1}+u_{j}), & i=j+1, \\ u_{0}+u_{n-1}, & i=n,j=1, \\ 0, & \mbox{otherwise}, \end{array}\displaystyle \right . $$

and

$$P=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 0 & -2 & 1 & 0 & \cdots& 0 & -1 & 2 \\ 2 & 0 & -2 & 1 & \ddots& \ddots& 0 & -1 \\ -1 & 2 & 0 & -2 & \ddots& \ddots& \ddots& 0 \\ 0 & -1 & 2 & 0 & \ddots& \ddots& \ddots& \vdots\\ \vdots& \ddots& \ddots& \ddots& \ddots& \ddots& 1 & 0 \\ 0 & \ddots& \ddots& \ddots& 2 & 0 & -2 & 1 \\ 1 & 0 & \ddots& \ddots& -1 & 2 & 0 & -2 \\ -2 & 1 & 0 & \cdots& 0 & -1 & 2 & 0 \end{array}\displaystyle \right ]. $$

In Table 1, we consider five instances of the four-diagonal skew-Hermitian linear systems, corresponding to \(\alpha=2\times10^{1}\), \(\alpha=2\times10^{0}\), \(\alpha=5\times10^{-2}\), \(\alpha=2\times10^{-2}\), and \(\alpha=1\times10^{-2}\), respectively. Similarly, in Table 2 we use the same values of α as in Table 1 for the seven-diagonal skew-Hermitian linear systems considered. In both experiments, we set \(\gamma= \frac{1}{2}\|L+L^{H}\|_{\infty}\) and \(\beta= \frac{1}{2}\|L+L^{H}\|_{\infty}+\alpha\) throughout, and the stopping rule is \(\|x_{k+1}-x_{k}\|_{2}\leq\epsilon\) with \(\epsilon=10^{-5}\).

Table 1 The comparison of iteration steps k with different α for Example 1
Table 2 The comparison of iteration steps k with different α for Example 2

Tables 1 and 2 show that: (i) the number of iterations, starting from a zero initial guess, increases as α decreases; (ii) for a given α, the α-SSS converges more slowly for the seven-diagonal skew-Hermitian systems than for the four-diagonal ones. See Figures 1 and 2.

Figure 1

The convergent iteration steps with different α for Example 1. Each curve represents the convergent iteration steps for one value of α, with \(\gamma= \frac{1}{2}\|L+L^{H}\|_{\infty}\) and \(\beta= \frac{1}{2}\|L+L^{H}\|_{\infty}+\alpha\). The yellow curve represents \(\alpha=5\times10^{-2}\); the red curve, \(\alpha=2\times10^{-2}\); and the blue curve, \(\alpha=1\times10^{-2}\).

Figure 2

The convergent iteration steps with different α for Example 2. Each curve represents the convergent iteration steps for one value of α, with \(\gamma= \frac{1}{2}\|L+L^{H}\|_{\infty}\) and \(\beta= \frac{1}{2}\|L+L^{H}\|_{\infty}+\alpha\). The yellow curve represents \(\alpha=5\times10^{-2}\); the red curve, \(\alpha=2\times10^{-2}\); and the blue curve, \(\alpha=1\times10^{-2}\).
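To give the flavor of this setup, the following sketch (an illustrative reconstruction, not the authors' code) assembles the skew-symmetric matrix P from Example 1, drops the state-dependent \(g(y)\) term for simplicity, and applies the alpha_sss and parameters_from_bound helpers sketched above; the reduced size n, the value of δ, and the right-hand side are assumptions:

```python
import numpy as np

n, alpha, delta = 100, 2e1, 0.022       # alpha = 2x10^1 as in Table 1; n reduced from 1000
dx = 2.0 / n                            # spatial step on [0, 2]

# Skew-symmetric P: -2 on the first and 1 on the second superdiagonal,
# negated below the diagonal, with periodic wrap-around corners.
P = np.zeros((n, n))
for k, val in [(1, -2.0), (2, 1.0)]:
    P += val * (np.eye(n, k=k) - np.eye(n, k=-k))
    P += val * (np.eye(n, k=-(n - k)) - np.eye(n, k=n - k))

S = -(delta**2) / (2 * dx**3) * P       # the linear part of H(y) in (22)
b = np.ones(n, dtype=complex)           # illustrative right-hand side

beta, gamma = parameters_from_bound(S, alpha)        # choice (19)
x, steps = alpha_sss(S.astype(complex), b, alpha, beta)
print(steps, np.linalg.norm((alpha * np.eye(n) + S) @ x - b))
```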

Example 2

Furthermore, we consider a general skew-Hermitian matrix S, taken as a \(10^{3}\times10^{3}\) seven-diagonal matrix over the complex field, for the shifted skew-Hermitian linear system, and simulations are provided to show the computational cost of the α-SSS. We have

$$S= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 0 & i & 0.2i & 1 & 0.3i & i & -2i & 0 & \cdots& 0 & 0 & 0 & 0 & 0 & 0 & 0 & i \\ i & 0 & i & 0.2i & 1 & 0.3i & i & -2i & \ddots& \ddots& 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.2i & i & 0 & i & 0.2i & 1 & 0.3i & i & \ddots& \ddots& \ddots& 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0.2i & i & 0 & i & 0.2i & 1 & 0.3i & \ddots& \ddots& \ddots& \ddots& 0 & 0 & 0 & 0 & 0 \\ 0.3i & -1 & 0.2i & i & 0 & i & 0.2i & 1 & \ddots& \ddots& \ddots& \ddots& \ddots& 0 & 0 & 0 & 0 \\ i & 0.3i & -1 & 0.2i & i & 0 & i & 0.2i & \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& 0 & 0 & 0 \\ -2i & i & 0.3i & -1 & 0.2i & i & 0 & i & \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& 0 & 0 \\ 0 & -2i & -i & 0.3i & -1 & 0.2i & i & 0 & \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& 0 \\ \vdots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \vdots\\ 0 & \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& 0 & i & 0.2i & 1 & 0.3i & i & 2i & 0 \\ 0 & 0 & \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& i & 0 & i & 0.2i & 1 & 0.3i & i & 2i \\ 0 & 0 & 0 & \ddots& \ddots& \ddots& \ddots& \ddots& \ddots& 0.2i & i & 0 & i & 0.2i & 1 & 0.3i & i \\ 0 & 0 & 0 & 0 & \ddots& \ddots& \ddots& \ddots& \ddots& -1 & 0.2i & i & 0 & i & 0.2i & 1 & 0.3i \\ 0 & 0 & 0 & 0 & 0 & \ddots& \ddots& \ddots& \ddots& 0.3i & -1 & 0.2i & i & 0 & i & 0.2i & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & \ddots& \ddots& \ddots& i & 0.3i & -1 & 0.2i & i & 0 & i & 0.2i \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ddots& \ddots& -2i & i & 0.3i & -1 & 0.2i & i & 0 & i \\ i & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots& 0 & -2i & i & 0.3i & -1 & 0.2i & i & 0 \end{array}\displaystyle \right ]. $$

8 Conclusions

In this paper we present the α-SSS for the shifted skew-Hermitian linear system, and we study the convergence of the α-SSS. Our results illustrate that the α-SSS is feasible for solving the shifted skew-Hermitian linear system. However, the α-SSS converges slowly when α is very small, and how to improve the convergence rate for smaller α is left for future work.

References

  1. Bai, ZZ, Golub, GH, Ng, MK: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24(3), 603-626 (2003)

  2. Bai, ZZ, Golub, GH: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 27, 1-23 (2007)

  3. Bai, ZZ, Golub, GH, Lu, LZ, Yin, JF: Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. 26, 844-863 (2005)

  4. Benzi, M: A generalization of the Hermitian and skew-Hermitian splitting iteration. SIAM J. Matrix Anal. Appl. 31, 360-374 (2009)

  5. Li, L, Huang, TZ, Liu, XP: Modified Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems. Numer. Linear Algebra Appl. 14, 217-235 (2007)

  6. Benzi, M, Gander, M, Golub, GH: Optimization of the Hermitian and skew-Hermitian splitting iteration for saddle-point problems. BIT Numer. Math. 43, 881-900 (2003)

  7. Benzi, M, Golub, GH: A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 26, 20-41 (2004)

  8. Bertaccini, D, Golub, GH, Capizzano, SS, Possio, CT: Preconditioned HSS methods for the solution of non-Hermitian positive definite linear systems and applications to the discrete convection-diffusion equation. Numer. Math. 99, 441-484 (2005)

  9. Horn, RA, Johnson, CR: Matrix Analysis. Cambridge University Press, Cambridge (1987)

  10. Ortega, JM: Numerical Analysis, 2nd edn. SIAM, Philadelphia (1990)

  11. Saad, Y: Numerical Methods for Large Eigenvalue Problems. Manchester University Press, Manchester (1991)

  12. Eisenstat, SC: A note on the generalized conjugate gradient method. SIAM J. Numer. Anal. 20, 358-361 (1983)

  13. Golub, GH, Vanderstraeten, D: On the preconditioning of matrices with skew-symmetric splittings. Numer. Algorithms 25, 223-239 (2000)


Acknowledgements

This work was supported by the National Natural Science Foundation of China (11271297, 11201362) and the Natural Science Foundation of Shaanxi Province of China (2015JM1012, 2016JM1009).

Author information

Correspondence to Haiyang Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to this manuscript. They read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Cui, A., Li, H. & Zhang, C. A splitting method for shifted skew-Hermitian linear system. J Inequal Appl 2016, 160 (2016). https://doi.org/10.1186/s13660-016-1105-1