A splitting method for shifted skew-Hermitian linear system
- Angang Cui^{1},
- Haiyang Li^{1} and
- Chengyi Zhang^{1}
DOI: 10.1186/s13660-016-1105-1
© Cui et al. 2016
Received: 26 April 2016
Accepted: 7 June 2016
Published: 17 June 2016
Abstract
In this paper, we present a splitting method for solving the shifted skew-Hermitian linear system, which we call the α-SSS for short. Convergence results are established, and numerical experiments show that the splitting method is feasible for solving this class of linear systems.
Keywords
shifted skew-Hermitian linear system; shifted skew-Hermitian splitting method; convergence

MSC
15A06; 15A15; 15A48

1 Introduction
The rest of the paper is organized as follows. Some notions and preliminary results used in this paper are given in Section 2. A splitting method for solving the shifted skew-Hermitian linear system is proposed in Section 3. In Section 4, the convergence theorem for the splitting method is established. In Section 5, we study the properties of the spectral radius and derive an internal connection between the best choices of β and γ and the value of s. In Section 6, the power method is used to estimate the largest modulus eigenvalue of the Hermitian matrix \(L+L^{H}\). Numerical experiments for solving the shifted skew-Hermitian system are reported in Section 7. Concluding remarks are presented in Section 8.
2 Preliminaries
In this section we give some notions and preliminary results that are used in this paper. \(C^{n\times n} (R^{n\times n})\) denotes the set of all \(n\times n\) complex (real) matrices. For \(D=(d_{ij})\in C^{n\times n}\), we denote the spectrum of D by \(\sigma(D)\), namely the set of all eigenvalues of D. The spectral radius of D, \(\rho(D)\), is defined by \(\rho(D) = \max\{|\lambda| : \lambda\in\sigma(D)\}\), and the transpose of D is denoted by \(D^{T}\).
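These definitions translate directly into code. The following minimal NumPy sketch computes the spectrum and spectral radius of an arbitrary illustrative matrix (the matrix itself is ours, not from the paper):

```python
import numpy as np

# sigma(D): the set of eigenvalues; rho(D) = max{|lambda| : lambda in sigma(D)}.
# D below is an arbitrary illustration, not a matrix from the paper.
D = np.array([[0.0, 1.0],
              [-2.0, 0.0]])
spectrum = np.linalg.eigvals(D)        # sigma(D) = {i*sqrt(2), -i*sqrt(2)}
rho = float(np.max(np.abs(spectrum)))  # rho(D) = sqrt(2)
```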
The smallest and largest eigenvalues are easily characterized as the solutions of constrained minimization and maximization problems.
Lemma 1
Corollary 1
Let \(L\in\mathbb{C}^{n\times n}\) be a given matrix, and \(\frac {x^{H}Lx}{x^{H}x}=s+ti\) for any nonzero vector \(x\in\mathbb{C}^{n}\). Then \(\frac {x^{H}L^{H}x}{x^{H}x}=s-ti\), \(s= \frac{1}{2}\frac{x^{H}(L+L^{H})x}{x^{H}x}\), and \(s\in \frac {1}{2}[\lambda_{\mathrm{min}}(L+L^{H}), \lambda_{\mathrm{max}}(L+L^{H})]\).
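The identities in Corollary 1 can be checked numerically. The sketch below draws a random complex matrix and vector (our own illustration) and verifies that the Rayleigh quotients of \(L\) and \(L^{H}\) are complex conjugates, that \(s\) equals half the Rayleigh quotient of \(L+L^{H}\), and that \(s\) lies in half the spectral interval of \(L+L^{H}\):

```python
import numpy as np

# Numerical check of Corollary 1 on random data (L and x are arbitrary).
rng = np.random.default_rng(0)
n = 5
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

q = (x.conj() @ L @ x) / (x.conj() @ x)            # x^H L x / x^H x = s + t*i
qH = (x.conj() @ L.conj().T @ x) / (x.conj() @ x)  # should equal s - t*i
s = q.real

# s = (1/2) x^H (L + L^H) x / x^H x, with L + L^H Hermitian.
B = L + L.conj().T
s_half = 0.5 * ((x.conj() @ B @ x) / (x.conj() @ x)).real
lam = np.linalg.eigvalsh(B)   # real eigenvalues of the Hermitian part
```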
3 α-SSS for shifted skew-Hermitian system
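One plausible form of the α-SSS, consistent with the iteration matrix \((\beta{I}+L)^{-1}(\gamma{I}+L^{H})\) used in the convergence analysis of Section 4, can be sketched as follows. We read the splitting as \(\alpha{I}+S=(\beta{I}+L)-(\gamma{I}+L^{H})\) with \(S=L-L^{H}\) and \(\beta-\gamma=\alpha\); the function name, stopping rule, and defaults are ours, not the paper's:

```python
import numpy as np

def alpha_sss(L, b, alpha, gamma, x0=None, tol=1e-10, maxiter=100_000):
    """Sketch of the alpha-SSS iteration (a reconstruction, not verbatim):

        (beta*I + L) x^{k+1} = (gamma*I + L^H) x^k + b,   beta = alpha + gamma,

    so that (beta*I + L) - (gamma*I + L^H) = alpha*I + (L - L^H) = alpha*I + S.
    Returns the final iterate and the number of iterations performed.
    """
    n = L.shape[0]
    beta = alpha + gamma
    I = np.eye(n)
    A = alpha * I + (L - L.conj().T)   # the shifted skew-Hermitian matrix
    M = beta * I + L
    N = gamma * I + L.conj().T
    x = np.zeros(n, dtype=complex) if x0 is None else np.asarray(x0, dtype=complex)
    for k in range(1, maxiter + 1):
        x = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, maxiter
```

With \(\beta=\alpha+\gamma\) and γ chosen at least \(\frac{1}{2}|\lambda(L+L^{H})|_{\mathrm{max}}\), the hypotheses of Theorem 1 below are satisfied and the relative residual falls below `tol`.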
4 Convergence analysis for α-SSS
In this section, we mainly study the convergence of the α-SSS for a shifted skew-Hermitian linear system.
Theorem 1
Let \(\alpha{I}+S\) be a shifted skew-Hermitian matrix, where S is a skew-Hermitian matrix and α is a given positive constant. If \(2s+\beta+\gamma>0\) and \(\beta>\gamma\geq0\) both hold for every \(s\in\frac{1}{2}[\lambda_{\mathrm{min}}(L+L^{H}),\lambda_{\mathrm{max}}(L+L^{H})]\), then \(\rho((\beta{I}+L)^{-1}(\gamma{I}+L^{H}))<1\), and thus the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\).
Proof
From Theorem 1, if we set \(s= \frac{1}{2}\lambda _{\mathrm{min}}(L+L^{H})\), we have the following corollary.
Corollary 2
Let \(\beta-\gamma=\alpha\) and \(\beta>\gamma\geq0\). If the positive constants β and γ satisfy \(\lambda_{\mathrm{min}}(L+L^{H})+\beta+\gamma>0\), then the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\).
Furthermore, when the positive constants β and γ are chosen large enough, the conditions \(2s+\beta+\gamma>0\) and \(\beta>\gamma\geq0\) of Theorem 1 hold automatically, and thus we have the following corollary.
Corollary 3
If \(-|\lambda(L+L^{H})|_{\mathrm{max}}+\beta+\gamma>0\) for any positive constants β and γ with \(\beta-\gamma=\alpha\) and \(\beta>\gamma\geq0\), then the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\).
Proof
If \(-|\lambda(L+L^{H})|_{\mathrm{max}}+\beta+\gamma>0\), then \(2s+\beta+\gamma>0\) for any \(s\in\frac{1}{2}[\lambda_{\mathrm{min}}(L+L^{H}),\lambda_{\mathrm{max}}(L+L^{H})]\), since \(2s\geq\lambda_{\mathrm{min}}(L+L^{H})\geq-|\lambda(L+L^{H})|_{\mathrm{max}}\). By Theorem 1, the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\). □
Theorem 1 establishes the convergence of the α-SSS. The conditions \(2s+\beta+\gamma>0\) and \(\beta>\gamma\geq0\) in Theorem 1 show that the choice of the positive constants β and γ is governed by the real number s. In other words, the choice of s plays an important role in the properties (such as stability and rate of convergence) of the α-SSS.
5 Best choice of the constants β and γ for α-SSS
In this section, we study the best choice of the constants β and γ for the α-SSS. Furthermore, we present the internal connection between the best choices of β and γ and the value of s.
According to the above analysis, the conditions \(2s+\beta+\gamma>0\), \(\beta-\gamma=\alpha\), and \(\beta>\gamma\geq0\) in Theorem 1 hold for any \(s\leq0\) and \(t\in R\).
In circumstance (ii), the best choice of the constants β and γ is determined by the value of the constant t. Unfortunately, t is not known at all, and finding its value is as difficult as solving the original problem. So, in this paper we consider only the first circumstance, letting \(\gamma_{\mathrm{best}}=-s\), \(\beta_{\mathrm{best}}=-s+\alpha\), with \(s\leq0\). In fact, when the positive constants β and γ are set large enough, the second circumstance and the conditions in Theorem 1 hold naturally.
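The rule \(\gamma_{\mathrm{best}}=-s\), \(\beta_{\mathrm{best}}=-s+\alpha\) can be sketched directly, taking \(s=\frac{1}{2}\lambda_{\mathrm{min}}(L+L^{H})\) as in Corollary 2 (the function name is ours; here the eigenvalue is computed exactly, which Section 6 replaces by an estimate for large problems):

```python
import numpy as np

def best_constants(L, alpha):
    """gamma_best = -s and beta_best = -s + alpha, where
    s = (1/2) * lambda_min(L + L^H); the rule assumes s <= 0."""
    s = 0.5 * float(np.linalg.eigvalsh(L + L.conj().T).min())
    if s > 0:
        raise ValueError("the rule gamma_best = -s assumes s <= 0")
    return -s + alpha, -s   # (beta_best, gamma_best)
```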
Remark 1
Remark 1 appears to give the best choice of the positive constants β and γ. However, for a large-scale matrix it is very difficult to compute \(\lambda_{\mathrm{min}}(L+L^{H})\). A natural idea is to replace \(\lambda_{\mathrm{min}}(L+L^{H})\) by an approximate value that still guarantees convergence of the α-SSS. A direct choice is to use \(-|\lambda(L+L^{H})|_{\mathrm{max}}\) in place of \(\lambda_{\mathrm{min}}(L+L^{H})\), which is consistent with Corollary 3 and the second circumstance in Remark 1.
6 Estimate the largest modulus eigenvalue for \(L+L^{H}\)
In this section, we turn to estimating the largest modulus eigenvalue of the Hermitian matrix \(L+L^{H}\). One of the powerful techniques for solving eigenvalue problems is the so-called power method [11]. Simply described, this method generates the sequence of vectors \(B^{k}v_{0}\), where \(v_{0}\) is some nonzero initial vector. This sequence of vectors, when normalized appropriately and under reasonably mild conditions, converges to a dominant eigenvector, i.e., an eigenvector associated with the eigenvalue of largest modulus. The most commonly used normalization is to ensure that the largest component of the current iterate is equal to one. This yields the following algorithm.
Algorithm
(The power method for \(|\lambda(L+L^{H})|_{\mathrm{max}}\))
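A standard power-method sketch for \(|\lambda(L+L^{H})|_{\mathrm{max}}\), normalizing each iterate so that its largest-magnitude component equals one as described above, is as follows (the stopping rule and defaults are ours):

```python
import numpy as np

def power_method(B, v0, tol=1e-10, maxiter=10_000):
    """Estimate the largest-modulus eigenvalue of the Hermitian matrix
    B = L + L^H by the power method.  Each iterate is rescaled so that
    its largest-magnitude component equals one."""
    v = np.asarray(v0, dtype=complex)
    v = v / v[np.argmax(np.abs(v))]
    lam = 0.0
    for _ in range(maxiter):
        w = B @ v
        lam_new = w[np.argmax(np.abs(w))]  # current eigenvalue estimate
        v = w / lam_new                    # rescale: largest component -> 1
        if abs(lam_new - lam) <= tol * abs(lam_new):
            return lam_new, v
        lam = lam_new
    return lam, v
```

Since B is Hermitian, the dominant eigenvalue is real, and \(|\lambda(L+L^{H})|_{\mathrm{max}}\) is the absolute value of the returned estimate.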
Remark 2
It is obvious that the conditions \(2s+\beta+\gamma>0\), \(\beta-\gamma=\alpha\), and \(\beta>\gamma\geq0\) in Theorem 1 then hold, and thus the sequence \(\{x^{k}\}\) generated by the α-SSS converges to the unique solution of (1) for any choice of the initial guess \(x^{0}\).
7 Numerical examples
In this section, some numerical examples are given to demonstrate the convergence of the α-SSS for the shifted skew-Hermitian linear system.
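The coefficient matrices of Examples 1 and 2 are not reproduced here, so the script below uses a random skew-Hermitian system of our own devising to illustrate the qualitative trend reported in the tables: the iteration count grows sharply as α shrinks.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = L - L.conj().T   # skew-Hermitian by construction
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# gamma large enough that Theorem 1's conditions hold for beta = alpha + gamma.
gamma = float(np.abs(np.linalg.eigvalsh(L + L.conj().T)).max())

def iterations(alpha, tol=1e-8, maxiter=200_000):
    """Iteration count of the alpha-SSS sketch on (alpha*I + S) x = b."""
    beta = alpha + gamma
    I = np.eye(n)
    A = alpha * I + S
    M, N = beta * I + L, gamma * I + L.conj().T
    x = np.zeros(n, dtype=complex)
    for k in range(1, maxiter + 1):
        x = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return k
    return maxiter

counts = {alpha: iterations(alpha) for alpha in (20.0, 2.0, 0.5)}
```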
Example 1
The comparison of iteration steps k with different α for Example 1
| The values of α | 2 × 10^{1} | 2 × 10^{0} | 5 × 10^{−2} | 2 × 10^{−2} | 1 × 10^{−2} |
|---|---|---|---|---|---|
| Iteration steps k | 8 | 34 | 1,106 | 2,778 | 5,598 |
The comparison of iteration steps k with different α for Example 2
| The values of α | 2 × 10^{1} | 2 × 10^{0} | 5 × 10^{−2} | 2 × 10^{−2} | 1 × 10^{−2} |
|---|---|---|---|---|---|
| Iteration steps k | 10 | 48 | 1,604 | 3,945 | 7,848 |
Example 2
8 Conclusions
In this paper we present the α-SSS for the shifted skew-Hermitian linear system and study its convergence. Our results illustrate that the α-SSS is feasible for solving the shifted skew-Hermitian linear system. However, the α-SSS converges slowly when α is very small, and how to improve the convergence rate for smaller α is left for further work.
Declarations
Acknowledgements
The work was supported by the National Natural Science Foundation of China (11271297, 11201362) and the Natural Science Foundation of Shaanxi Province of China (2015JM1012, 2016JM1009).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Bai, ZZ, Golub, GH, Ng, MK: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24(3), 603-626 (2003)
- Bai, ZZ, Golub, GH: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 27, 1-23 (2007)
- Bai, ZZ, Golub, GH, Lu, LZ, Yin, JF: Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. 26, 844-863 (2005)
- Benzi, M: A generalization of the Hermitian and skew-Hermitian splitting iteration. SIAM J. Matrix Anal. Appl. 31, 360-374 (2009)
- Li, L, Huang, TZ, Liu, XP: Modified Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems. Numer. Linear Algebra Appl. 14, 217-235 (2007)
- Benzi, M, Gander, M, Golub, GH: Optimization of the Hermitian and skew-Hermitian splitting iteration for saddle-point problems. BIT Numer. Math. 43, 881-900 (2003)
- Benzi, M, Golub, GH: A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 26, 20-41 (2004)
- Bertaccini, D, Golub, GH, Capizzano, SS, Possio, CT: Preconditioned HSS method for the solution of non-Hermitian positive definite linear systems and applications to the discrete convection diffusion equation. Numer. Math. 99, 441-484 (2005)
- Horn, RA, Johnson, CR: Matrix Analysis. Cambridge University Press, Cambridge (1987)
- Ortega, JM: Numerical Analysis, 2nd edn. SIAM, Philadelphia (1990)
- Saad, Y: Numerical Methods for Large Eigenvalue Problems. Manchester University Press, Manchester (1991)
- Eisenstat, SC: A note on the generalized conjugate gradient method. SIAM J. Numer. Anal. 20, 358-361 (1983)
- Golub, GH, Vanderstraeten, D: On the preconditioning of matrices with skew-symmetric splittings. Numer. Algorithms 25, 223-239 (2000)