Some generalizations of the new SOR-like method for solving symmetric saddle-point problems
Journal of Inequalities and Applications volume 2018, Article number: 145 (2018)
Abstract
Saddle-point problems arise in many areas of scientific computing and engineering applications, and efficient numerical methods for these problems have attracted considerable attention in recent years. In this paper, we propose several generalizations based on the new SOR-like method. Convergence of these methods is established under suitable restrictions on the iteration parameters. Numerical experiments show that these methods are effective and efficient.
1 Introduction
Consider the solution of the following symmetric saddle-point linear system with block 2-by-2 structure:
where \(A\in\mathbb{R}^{m\times m}\) is a symmetric positive definite matrix, \(B\in\mathbb{R}^{m\times n}\) is a matrix of full column rank (\(m \gg n\)), \(B^{T}\in\mathbb{R}^{n\times m}\) is the transpose of the matrix B, and \(p\in\mathbb{R}^{m}\) and \(q\in\mathbb{R}^{n}\) are given vectors. Under such conditions, system (1.1) has a unique solution [8]. Problems of this type arise in many areas of scientific computing and engineering applications, such as computational fluid dynamics, constrained and weighted least squares estimation, constrained optimization, image reconstruction, computer graphics, and so on. For background and a comprehensive survey, we refer to [3, 5, 8, 12].
Since the matrices A and B are usually very large and sparse in applications, direct methods are not well suited for solving system (1.1). Therefore, much attention has been paid to iterative methods for problem (1.1). Many iterative methods have been developed for system (1.1): Uzawa-type methods [1], matrix splitting methods, Krylov subspace methods, and so on; see [2, 3, 5, 8, 10, 11, 14–17, 19–23, 25–28] for more details. One of the most effective iterative methods is the SOR-like method, introduced by Golub et al. [13], which includes the Uzawa-type methods as special cases. In recent years, many researchers have generalized or modified the SOR-like method and studied the convergence properties of the resulting methods for problem (1.1) from different points of view. For instance, Bai et al. [5] proposed a generalized SOR-like method, which has two parameters and is more effective than the SOR-like method; Shao et al. [19] extended this method and proposed a modified SOR-like method, and Guo et al. [15] presented another modified SOR-like method; Zheng and Ma [27] also discussed a new SOR-like method based on a different splitting of the coefficient matrix; Darvishi and Hessari [10], Wu et al. [23], and Saberi Najafi and Edalatpanah [18] considered the SSOR method. We refer to [2–28] and the references therein.
Recently, Guan et al. [14] proposed a new SOR-like method (the NSOR-like method) for solving the following system (1.2), which is equivalent to problem (1.1):
The coefficient matrix can be split as follows:
where
with a symmetric positive definite matrix \(Q\in\mathbb{R}^{n\times n}\) and parameters \(\alpha>0\), \(\beta\geq0\), and \(0<\omega<2\). The following iteration scheme was introduced for solving (1.2):
or, equivalently,
where
and \(\mathcal{N}=\omega(\mathcal{D}-\omega\mathcal{L})^{-1}\).
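Since the display formulas defining \(\mathcal{D}\), \(\mathcal{L}\), \(\mathcal{U}\), and \(\mathcal{M}\) are not reproduced above, the following short Python sketch only illustrates the generic fixed-point form \(x^{k+1}=\mathcal{M}x^{k}+\mathcal{N}b\) with \(\mathcal{N}=\omega(\mathcal{D}-\omega\mathcal{L})^{-1}\); the expression used for \(\mathcal{M}\), namely \((\mathcal{D}-\omega\mathcal{L})^{-1}[(1-\omega)\mathcal{D}+\omega\mathcal{U}]\), is the standard SOR choice and is our assumption, not necessarily the exact formula of [14].

```python
import numpy as np

def nsor_like_iteration(D, L, U, b, omega, x0, tol=1e-6, max_it=1000):
    """Generic fixed-point iteration x^{k+1} = M x^k + N b with
    N = omega*(D - omega*L)^{-1}; M is taken as the standard SOR choice
    (D - omega*L)^{-1} [(1-omega)*D + omega*U] (an assumption here).
    D, L, U are the (block) splitting matrices, A = D - L - U."""
    A = D - L - U
    inv = np.linalg.inv(D - omega * L)   # dense sketch; use a linear solver in practice
    M = inv @ ((1.0 - omega) * D + omega * U)
    N = omega * inv
    x = x0.copy()
    for k in range(1, max_it + 1):
        x = M @ x + N @ b
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_it
```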
In this paper, we generalize this method to several variants for solving problem (1.1) or (1.2). These methods include some of the above-mentioned methods as particular cases. We discuss the convergence of these methods under suitable restrictions on iteration parameters, which are very easy to use in computations. Numerical experiments are given to show that these methods are effective and efficient.
The rest of the paper is organized as follows. In Sect. 2, we propose several generalizations based on the new SOR-like method for solving problem (1.1) or (1.2). In Sect. 3, we give the convergence analysis of these methods. We use some numerical examples to show the effectiveness of them in Sect. 4. A concluding remark is drawn in Sect. 5.
2 Methods
In this section, we derive several generalizations of the new SOR-like method introduced by Guan et al. [14] for solving system (1.1) or (1.2).
2.1 The new SSOR-like method (NSSOR-like)
By combining the NSOR-like method with its backward version, the NSSOR-like method for solving problem (1.2) can be obtained easily. The backward iteration scheme of the NSOR-like method is as follows:
or, equivalently,
where
and \(\mathcal{N}_{1}=\omega(\mathcal{D}-\omega\mathcal{U})^{-1}\). Then the NSSOR-like method can be written as
where \(\mathcal{T}_{1}=\mathcal{M}_{1} \mathcal{M}\) and \(\mathcal {C}_{1}=\mathcal{M}_{1} \mathcal{N}+\mathcal{N}_{1}\).
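As a small illustration (not part of the original derivation), one NSSOR-like step can be realized as a forward NSOR-like sweep followed by the backward sweep; composing the two sweeps reproduces \(\mathcal{T}_{1}=\mathcal{M}_{1}\mathcal{M}\) and \(\mathcal{C}_{1}=\mathcal{M}_{1}\mathcal{N}+\mathcal{N}_{1}\):

```python
import numpy as np

def nssor_like_step(M, N, M1, N1, x, b):
    """One NSSOR-like step: a forward NSOR-like sweep followed by the backward
    sweep; this composition gives x^{k+1} = T1 x^k + C1 b with
    T1 = M1 @ M and C1 = M1 @ N + N1."""
    x_half = M @ x + N @ b       # forward NSOR-like sweep
    return M1 @ x_half + N1 @ b  # backward sweep
```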
2.2 The generalized NSOR-like method (GNSOR-like)
By introducing a diagonal matrix \(\Omega=\operatorname{diag}(\omega I_{m}, \tau I_{n})\), where \(I_{m}\) and \(I_{n}\) (and, in what follows, I) denote identity matrices of appropriate orders, we can obtain a generalization of the NSOR-like method:
More precisely, we have the following algorithmic description of the GNSOR-like method.
Remark
This method, in spirit, is analogous to the GSOR method [5]. It uses a relaxation matrix Ω for the NSOR-like method instead of a single relaxation parameter. Obviously, when \(\omega=\tau\), this method reduces to the NSOR-like method mentioned in the previous section.
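As an illustrative sketch (assuming, as the remark indicates, that the GNSOR-like scheme is the NSOR-like scheme with the scalar ω replaced by the block relaxation matrix Ω), the matrix Ω can be assembled as follows; for \(\omega=\tau\) it collapses to \(\omega I\), recovering the NSOR-like method:

```python
import numpy as np

def relaxation_matrix(omega, tau, m, n):
    """Block-diagonal relaxation matrix Omega = diag(omega*I_m, tau*I_n)
    of the GNSOR-like method; for omega == tau it equals omega*I_{m+n},
    so the method reduces to the NSOR-like method."""
    return np.diag(np.concatenate([np.full(m, omega), np.full(n, tau)]))
```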
2.3 The generalized NSSOR-like method (GNSSOR-like)
Similarly, the GNSSOR-like method can be derived by applying the symmetric technique introduced in [25] to the GNSOR-like method.
Remark
It can be seen easily that, when \(\omega=\tau\), this method reduces to the NSSOR-like method mentioned in the previous subsection.
With different choices of parameters, the GNSSOR-like method covers several SSOR methods as follows:
3 Convergence analysis
In this section, we give a convergence analysis of these methods. We need the following lemmas.
Lemma 3.1
(see [24])
For the real quadratic equation \(x^{2}-bx+c=0\), both roots are less than one in modulus if and only if \(\vert c \vert <1\) and \(\vert b \vert <1+c\).
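The criterion of Lemma 3.1 can be checked numerically; the following small Python experiment (an illustration only) samples random coefficients and compares the root moduli with the condition \(\vert c\vert<1\), \(\vert b\vert<1+c\):

```python
import numpy as np

# Sample random coefficients (b, c) and compare the moduli of the roots of
# x^2 - b*x + c = 0 with the criterion |c| < 1 and |b| < 1 + c of Lemma 3.1.
rng = np.random.default_rng(0)
mismatches = 0
for _ in range(10_000):
    b, c = rng.uniform(-3.0, 3.0, size=2)
    roots_inside = bool(np.all(np.abs(np.roots([1.0, -b, c])) < 1.0))
    criterion = (abs(c) < 1.0) and (abs(b) < 1.0 + c)
    mismatches += roots_inside != criterion
print("mismatches:", mismatches)   # expected output: mismatches: 0
```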
Lemma 3.2
Let \(\mathcal{R}\) be the iteration matrix of the GNSOR-like method, and let λ be an eigenvalue of the matrix \(\mathcal{R}\). Then \(\lambda \neq1\).
Proof
Suppose on the contrary that \(\lambda=1\) is an eigenvalue of the iteration matrix \(\mathcal{R}\) and that the corresponding eigenvector is \((y^{T},z^{T})^{T}\). Then we have
or
From this equation we can deduce that
Thus \(y=-A^{-1}Bz\) and \(B^{T}A^{-1}Bz=0\). Since the matrix \(B^{T}A^{-1}B\) is symmetric positive definite, we have \(z=0\). Hence we get \(y=0\), a contradiction! □
Theorem 3.3
Let the matrix A be symmetric positive definite for the saddle-point problem (1.2), and let the matrix B be of full column rank. Let \(\mathcal{R}\) be the iteration matrix of the GNSOR-like method. Then the GNSOR-like method is convergent if the parameters α, β, ω, and τ satisfy
where ρ denotes the spectral radius of the matrix \(Q^{-1}B^{T}A^{-1}B\).
Proof
Evidently, we can see that the eigenvalues of the matrix \(Q^{-1}B^{T}A^{-1}B\) are all real and positive. Let λ be a nonzero eigenvalue of the iteration matrix \(\mathcal{R}\), and let \(\bigl( {\scriptsize\begin{matrix}{}y \cr z \end{matrix}} \bigr)\) be the corresponding eigenvector. Then we have
or, equivalently,
Computations show that
We get \((1-\lambda-\alpha\omega)y=\alpha\omega A^{-1}Bz\) from the first equality in (3.1), and hence \(\lambda\tau(1-\lambda-\alpha\omega)B^{T}y=\lambda\tau \alpha\omega B^{T}A^{-1}Bz\) when \(\lambda\neq1-\alpha\omega\). From this equality and the second equality in (3.1) it follows that
If \(\lambda=1-\alpha\omega\neq0\), then \(Bz=0\) and \(-\alpha\omega (1-\tau\beta)Qz=\lambda\tau B^{T}y\). It then follows that \(z=0\) and \(y\in\operatorname{null}(B^{T})\), where \(\operatorname{null}(B^{T})\) is the null space of the matrix \(B^{T}\). Hence \(\lambda=1-\alpha\omega\) is an eigenvalue of the matrix \(\mathcal{R}\) with the corresponding eigenvector \((y^{T},0)^{T}\) with \(y\in\operatorname{null}(B^{T})\).
Therefore, except for \(\lambda=1-\alpha\omega\), the rest of the eigenvalues λ of the matrix \(\mathcal{R}\) and all the eigenvalues μ of the matrix \(Q^{-1}B^{T}A^{-1}B\) are of the functional relationship
that is, λ satisfies the quadratic equation
By Lemma 3.1 we know that \(\lambda=1-\alpha\omega\) and both roots λ of Eq. (3.2) satisfy \(\vert \lambda \vert <1\) if and only if
Thus we can deduce that
where ρ denotes the spectral radius of the matrix \(Q^{-1}B^{T}A^{-1}B\).
This completes the proof of the theorem. □
Lemma 3.4
Let \(\mathcal{T}\) be the iteration matrix of the GNSSOR-like method. Then we have:
(i) \(\lambda=\frac{(1-\omega+\omega\alpha)-\omega\alpha(2-\omega)}{1-\omega+\omega\alpha}\) is an eigenvalue of the matrix \(\mathcal{T}\) with multiplicity at least \(m-n\).
(ii) A real number \(\lambda\neq\frac{(1-\omega+\omega\alpha)-\omega\alpha(2-\omega)}{1-\omega+\omega\alpha}\) is an eigenvalue of the matrix \(\mathcal{T}\) if and only if there exists a real eigenvalue μ of the matrix \(Q^{-1}B^{T}A^{-1}B\) such that λ is a zero of
$$ \begin{aligned}[b] &(\lambda-1) (1-\tau+\beta\tau) (1-\beta\tau)\biggl[\biggl(\frac{1}{\alpha}-\omega\biggr) (1-\omega)-\lambda\biggl(\frac{1}{\alpha}+\omega-\frac{\omega}{\alpha}\biggr)\biggr]\\ &\quad{}-\lambda\omega\tau(2-\omega) (2-\tau)\mu. \end{aligned} $$ (3.3)
Proof
Let \(\lambda\neq0\) be an eigenvalue of the matrix \(\mathcal{T}\), and let \(u=\bigl( {\scriptsize\begin{matrix}{} y \cr z \end{matrix}} \bigr)\) be the corresponding eigenvector. Then we have
or, equivalently,
Computations show that
and we obtain that
Let \(\lambda=\frac{(1-\omega+\omega\alpha)-\omega\alpha(2-\omega )}{1-\omega+\omega\alpha}\). From (3.4) we have \(A^{-1}Bz=0\).
We get \(B^{T}y=0\) and \(z=0\) since the matrix A is symmetric positive definite and the matrix B is of full column rank. Note that \(\operatorname{rank}(B^{T}) = n\), so \(B^{T}y=0\) has \(m-n\) linearly independent nonzero solutions; that is, \(\lambda=\frac{(1-\omega+\omega\alpha)-\omega\alpha(2-\omega)}{1-\omega+\omega\alpha}\) has \(m-n\) corresponding linearly independent eigenvectors.
When \(\lambda\neq\frac{(1-\omega+\omega\alpha)-\omega\alpha (2-\omega)}{1-\omega+\omega\alpha}\), from (3.4) we have
Substituting y into (3.5), we get
or, equivalently,
Since μ is an eigenvalue of the matrix \(Q^{-1}B^{T}A^{-1}B\), we have
simply,
Conversely, the reverse implication can be proved by tracing the above argument backwards. □
Theorem 3.5
Let the matrix A be symmetric positive definite, and let the matrix B be of full column rank in Eq. (1.2). Assume that α, β, and ω satisfy \((1-\beta\tau)(1-\tau+\beta\tau)(1-\omega+\omega\alpha)\neq0\). We choose a nonsingular matrix Q such that all eigenvalues of the matrix \(Q^{-1}B^{T}A^{-1}B\) are real. Let \(\mu_{\max}\), \(\mu_{\min}\) be the largest and the smallest eigenvalues of the matrix \(Q^{-1}B^{T}A^{-1}B\), respectively. Then:
(i) If \(\mu_{\min}>0\), then the GNSSOR-like method is convergent if and only if
$$\begin{aligned}& 0< \omega< 2;\qquad 0< \tau< 2 ; \\& \left \{ \textstyle\begin{array}{l@{\quad}l} \frac{1}{\alpha}>\frac{\omega^{2}}{2(\omega-1)},& \textit{if } 0< \omega< 1,\\ \frac{1}{\alpha}< \frac{\omega^{2}}{2(\omega-1)}, &\textit{if } 1< \omega< 2; \end{array}\displaystyle \right . \\& (1-\tau+\beta\tau) (1-\beta\tau) > 0 ; \\& \frac{\alpha\omega\tau(2-\omega)(2-\tau)\mu_{\max}}{(1-\tau+\beta\tau)(1-\beta\tau)(1-\omega+\omega\alpha)} < 2 \biggl[1+\frac{(1-\alpha\omega)(1-\omega)}{1+\alpha\omega-\omega} \biggr]. \end{aligned}$$
(ii) If \(\mu_{\max}<0\), then the GNSSOR-like method is convergent if and only if
$$\begin{aligned}& 0< \omega< 2 ; \qquad 0< \tau< 2 ; \\& \left \{ \textstyle\begin{array}{l@{\quad}l} \frac{1}{\alpha}>\frac{\omega^{2}}{2(\omega-1)},& \textit{if } 0< \omega< 1,\\ \frac{1}{\alpha}< \frac{\omega^{2}}{2(\omega-1)}, &\textit{if } 1< \omega< 2; \end{array}\displaystyle \right . \\& (1-\tau+\beta\tau) (1-\beta\tau) < 0 ; \\& \frac{\alpha\omega\tau(2-\omega)(2-\tau)\mu_{\min}}{(1-\tau+\beta\tau)(1-\beta\tau)(1-\omega+\omega\alpha)} < 2 \biggl[1+\frac{(1-\alpha\omega)(1-\omega)}{1+\alpha\omega-\omega} \biggr]. \end{aligned}$$
Proof
From (3.3) we get that
By Lemma 3.1, \(\vert \lambda \vert <1\) if and only if
and
Computations show that
if \(\mu_{\min}>0\), then we have
if \(\mu_{\max}<0\), then we have
From (3.6) we have
Considering that \(1+\alpha\omega-\omega>0\), we obtain
□
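For practical use, the conditions of Theorem 3.5(i) can be packaged into a simple parameter check. The following Python sketch transcribes the displayed inequalities for the case \(\mu_{\min}>0\) under the theorem's nondegeneracy assumption \((1-\beta\tau)(1-\tau+\beta\tau)(1-\omega+\omega\alpha)\neq0\); it returns False for \(\omega=1\), where the displayed bound on \(1/\alpha\) is not defined.

```python
def gnssor_converges_case_i(alpha, beta, omega, tau, mu_max):
    """Check the conditions of Theorem 3.5(i) (the case mu_min > 0) for the
    GNSSOR-like method; returns True if all displayed inequalities hold.
    omega == 1 is rejected because neither branch of the condition on 1/alpha
    covers it (the bound omega^2 / (2*(omega-1)) is undefined there)."""
    if not (0.0 < omega < 2.0 and 0.0 < tau < 2.0) or omega == 1.0:
        return False
    bound = omega ** 2 / (2.0 * (omega - 1.0))
    if omega < 1.0 and not 1.0 / alpha > bound:
        return False
    if omega > 1.0 and not 1.0 / alpha < bound:
        return False
    if not (1.0 - tau + beta * tau) * (1.0 - beta * tau) > 0.0:
        return False
    lhs = (alpha * omega * tau * (2.0 - omega) * (2.0 - tau) * mu_max
           / ((1.0 - tau + beta * tau) * (1.0 - beta * tau)
              * (1.0 - omega + omega * alpha)))
    rhs = 2.0 * (1.0 + (1.0 - alpha * omega) * (1.0 - omega)
                 / (1.0 + alpha * omega - omega))
    return lhs < rhs
```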
4 Numerical experiments
In this section, we present several numerical experiments to show the effectiveness of the GNSOR-like method and compare it with the SOR-like method in [13], the MSOR-like method in [19], and the MSOR-like method in [15]. We report computational results in terms of the number of iterations (denoted by IT) and the computing time (denoted by CPU). We denote the choices of the parameters α, β, ω, τ by \(\alpha_{\exp}\), \(\beta_{\exp}\), \(\omega_{\exp}\), \(\tau_{\exp}\) in our tests, respectively. In our implementation, all iterations are run in MATLAB R2015a on a personal computer and are terminated when the current iterate satisfies \(\mathit{RES}<10^{-6}\) or the number of iterations exceeds 1000. In our experiments, the residual is defined as
the right-hand-side vector \((p^{T},q^{T})^{T}=\mathcal{A}e\) with \(e=(1,1,\ldots,1)^{T}\), and the initial vectors are set to be \(y_{0}=(0,0,\ldots,0)^{T}\) and \(z_{0}=(0,0,\ldots,0)^{T}\).
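Since the displayed definition of RES is not reproduced above, the following sketch assumes the usual relative residual \(\Vert b-\mathcal{A}x_{k}\Vert_{2}/\Vert b\Vert_{2}\); this is a common default and should be checked against the original definition.

```python
import numpy as np

def relative_residual(A_mat, x, b):
    """Assumed definition of RES: the relative residual ||b - A x||_2 / ||b||_2.
    The displayed definition is not reproduced here, so this is only a
    common default, not necessarily the exact formula used in the paper."""
    return np.linalg.norm(b - A_mat @ x) / np.linalg.norm(b)

# Stopping rule described in the text: iterate until
# relative_residual(A_mat, x, b) < 1e-6 or 1000 iterations are reached.
```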
Example 4.1
([4])
Consider the saddle-point problem (1.1) in which
and
where \(h=\frac{1}{l+1}\) is the mesh size, and ⊗ denotes the Kronecker product.
In the test, we set \(m=2l^{2}\) and \(n=l^{2}\). The choices of the matrix Q are listed in Table 1 for Example 4.1. We have the following computational results summarized in Table 2.
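For readers who want to reproduce the experiment, the following Python sketch builds test matrices of the form used in [4], with \(T=\frac{1}{h^{2}}\operatorname{tridiag}(-1,2,-1)\), \(F=\frac{1}{h}\operatorname{tridiag}(-1,1,0)\), \(A=\operatorname{blkdiag}(I\otimes T+T\otimes I,\ I\otimes T+T\otimes I)\), and \(B=[I\otimes F;\ F\otimes I]\). Since the displayed definitions are omitted above, this reconstruction is an assumption and should be checked against [4].

```python
import numpy as np

def build_example_4_1(l):
    """Assumed reconstruction of the Example 4.1 test matrices following [4]:
    h = 1/(l+1), T = (1/h^2)*tridiag(-1, 2, -1), F = (1/h)*tridiag(-1, 1, 0),
    A = blkdiag(I (x) T + T (x) I, I (x) T + T (x) I)  (order m = 2*l^2),
    B = [I (x) F; F (x) I]                             (n = l^2 columns).
    The displayed definitions are omitted above, so verify against [4]."""
    h = 1.0 / (l + 1)
    I = np.eye(l)
    T = (2.0 * np.eye(l)
         - np.diag(np.ones(l - 1), 1)
         - np.diag(np.ones(l - 1), -1)) / h ** 2
    F = (np.eye(l) - np.diag(np.ones(l - 1), -1)) / h
    block = np.kron(I, T) + np.kron(T, I)
    Z = np.zeros((l ** 2, l ** 2))
    A = np.block([[block, Z], [Z, block]])
    B = np.vstack([np.kron(I, F), np.kron(F, I)])
    return A, B
```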
From Table 2 we can see that the GNSOR-like method requires far fewer iterations than the NSOR-like [14], SOR-like [13], and MSOR-like [15] methods, and consequently it costs less CPU time than the others. Hence, the GNSOR-like method is effective for solving the saddle-point problem (1.1).
5 Concluding remark
In this paper, we first presented several iteration methods for solving the saddle-point problem (1.1) and then gave their convergence results. The GNSOR-like method is faster and requires far fewer iteration steps than the other methods mentioned in the paper.
References
Arrow, K., Hurwicz, L., Uzawa, H.: Studies in Nonlinear Programming. Stanford University Press, Stanford (1958)
Bai, Z.Z.: Optimal parameters in the HSS-like methods for saddle point problems. Numer. Linear Algebra Appl. 16, 447–479 (2009)
Bai, Z.Z., Golub, G.H.: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 27, 1–23 (2007)
Bai, Z.Z., Golub, G.H., Pan, J.Y.: Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer. Math. 98, 1–32 (2004)
Bai, Z.Z., Parlett, B.N., Wang, Z.Q.: On generalized successive overrelaxation methods for augmented linear systems. Numer. Math. 102, 1–38 (2005)
Bai, Z.Z., Wang, Z.Q.: On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl. 428, 2900–2932 (2008)
Benzi, M., Golub, G.H.: A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 26, 20–41 (2004)
Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1–137 (2005)
Benzi, M., Simoncini, V.: On the eigenvalues of a class of saddle point matrices. Numer. Math. 103, 173–196 (2006)
Darvishi, M.T., Hessari, P.: Symmetric SOR method for augmented systems. J. Comput. Appl. Math. 183, 409–415 (2006)
Elman, H.C., Golub, G.H.: Inexact and preconditioned Uzawa algorithms for saddle point problems. SIAM J. Numer. Anal. 31, 1645–1661 (1994)
Elman, H.C., Silvester, D.: Fast nonsymmetric iterations and preconditioning for Navier–Stokes equations. SIAM J. Sci. Comput. 17, 33–46 (1996)
Golub, G.H., Wu, X., Yuan, J.Y.: SOR-like methods for augmented systems. BIT Numer. Math. 41, 71–85 (2001)
Guan, J.R., Ren, F.J., Feng, Y.H.: New SOR-like iteration method for saddle point problems. Math. Appl. (accepted, in press)
Guo, P., Li, C.X., Wu, S.L.: A modified SOR-like method for the augmented systems. J. Comput. Appl. Math. 274, 58–69 (2015)
Li, C.J., Li, Z., Nie, Y.Y., Evans, D.J.: Generalized AOR method for the augmented systems. Int. J. Comput. Math. 81, 495–504 (2004)
Li, J., Zhang, N.M.: A triple-parameter modified SSOR method for solving singular saddle point problems. BIT Numer. Math. 56, 501–521 (2016)
Saberi Najafi, H., Edalatpanah, S.A.: On the modified symmetric successive over-relaxation method for augmented systems. J. Comput. Appl. Math. 34, 607–617 (2015)
Shao, X.H., Li, Z., Li, C.J.: Modified SOR-like methods for the augmented systems. Int. J. Comput. Math. 84, 1653–1662 (2007)
Wang, H.D., Huang, Z.D.: On a new SSOR-like method with four parameters for the augmented systems. East Asian J. Appl. Math. 7, 82–100 (2017)
Wen, R.P., Li, S.D., Meng, G.Y.: SOR-like methods with optimization model for augmented linear systems. East Asian J. Appl. Math. 7, 101–115 (2017)
Wen, R.P., Wang, C.L., Yan, X.H.: Generalizations of nonstationary multisplitting iterative method for symmetric positive definite linear systems. Appl. Math. Comput. 216, 1707–1714 (2010)
Wu, S.L., Huang, T.Z., Zhao, X.L.: A modified SSOR iterative method for augmented systems. J. Comput. Appl. Math. 228, 424–433 (2009)
Young, D.M.: Iterative Solution of Large Linear Systems. Academic Press, New York (1971)
Zhang, G.F., Lu, Q.H.: On generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 219, 51–58 (2008)
Zheng, B., Wang, K., Wu, Y.J.: SSOR-like methods for saddle point problem. Int. J. Comput. Math. 86, 1405–1423 (2009)
Zheng, Q.Q., Ma, C.F.: A new SOR-like method for the saddle point problems. Appl. Math. Comput. 233, 421–429 (2014)
Zheng, Q.Q., Ma, C.F.: A class of triangular splitting methods for saddle point problems. J. Comput. Appl. Math. 298, 13–23 (2016)
Acknowledgements
The authors are very much indebted to the anonymous referees for their helpful comments and suggestions, which greatly improved the original manuscript.
Funding
This work is supported by grants from the NSF of Shanxi Province (201601D011004).
Author information
Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.