Some generalizations of the new SOR-like method for solving symmetric saddle-point problems

Saddle-point problems arise in many areas of scientific computing and engineering applications, and research on efficient numerical methods for these problems has become a very active topic in recent years. In this paper, we propose some generalizations of the new SOR-like method. Convergence of these methods is established under suitable restrictions on the iteration parameters. Numerical experiments show that these methods are effective and efficient.


Introduction
Consider the solution of the following symmetric saddle-point linear system with block 2-by-2 structure:

$$\begin{pmatrix} A & B \\ B^{T} & 0 \end{pmatrix} \begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} p \\ q \end{pmatrix}, \qquad (1.1)$$

where A ∈ R^{m×m} is a symmetric positive definite matrix, B ∈ R^{m×n} is a matrix of full column rank (m ≥ n), B^T ∈ R^{n×m} is the transpose of B, and p ∈ R^m and q ∈ R^n are given vectors. Under such conditions, system (1.1) has a unique solution [8]. Problems of this type arise in many areas of scientific computing and engineering applications, such as computational fluid dynamics, constrained and weighted least squares estimation, constrained optimization, image reconstruction, and computer graphics. For background and a comprehensive survey, we refer to [3, 5, 8, 12]. Since the matrices A and B are usually very large and sparse in applications, direct methods are not suitable for solving system (1.1), so more attention has been paid to iteration methods. Many iteration methods have been proposed for system (1.1): Uzawa-type methods [1], matrix splitting methods, Krylov subspace methods, and so on; see [2, 3, 5, 8, 10, 11, 14-17, 19-23, 25-28] for more details. One of the best iteration methods is the SOR-like method introduced by Golub et al. [13], which includes the Uzawa-type methods as special cases. In recent years, many researchers have generalized or modified the SOR-like method and studied its convergence for solving problem (1.1) from different points of view. For instance, Bai et al. [5] proposed a generalized SOR-like method, which has two parameters and is more effective than the SOR-like method; Shao et al. [19] extended this method and proposed a modified SOR-like method, and Guo et al. [15] presented another modified SOR-like method; Zheng et al. [27] also discussed a new SOR-like method based on a different splitting of the coefficient matrix; Darvishi and Hessari [10], Wu et al. [23], and Najafi et al. [18] considered the SSOR method. We refer to these works and the references therein.
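As a concrete illustration of system (1.1), the block coefficient matrix can be assembled and the system solved directly for a small instance; the random data below are purely illustrative, not a test problem from this paper.

```python
import numpy as np

# Assemble a small symmetric saddle-point system [A B; B^T 0] [y; z] = [p; q]
# and solve it directly (illustrative data only).
rng = np.random.default_rng(0)
m, n = 6, 3                      # m >= n, B of full column rank
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)      # symmetric positive definite
B = rng.standard_normal((m, n))  # full column rank (generic random matrix)

K = np.block([[A, B], [B.T, np.zeros((n, n))]])
p, q = rng.standard_normal(m), rng.standard_normal(n)

sol = np.linalg.solve(K, np.concatenate([p, q]))
y, z = sol[:m], sol[m:]
# The unique solution satisfies A y + B z = p and B^T y = q.
assert np.allclose(A @ y + B @ z, p) and np.allclose(B.T @ y, q)
```

For the large sparse problems targeted by the paper, such a direct solve is exactly what one wants to avoid, which motivates the iteration methods below.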
In this paper, we generalize the new SOR-like (NSOR-like) method of Guan et al. [14] to several variants for solving problem (1.1) or (1.2). These variants include some of the above-mentioned methods as particular cases. We discuss their convergence under suitable restrictions on the iteration parameters, which are easy to use in computations. Numerical experiments show that these methods are effective and efficient.
The rest of the paper is organized as follows. In Sect. 2, we propose several generalizations of the new SOR-like method for solving problem (1.1) or (1.2). In Sect. 3, we give the convergence analysis of these methods. In Sect. 4, we use numerical examples to show their effectiveness. Concluding remarks are drawn in Sect. 5.

Methods
In this section, we derive some generalizations of the new SOR-like (NSOR-like) method introduced by Guan et al. [14] for solving system (1.1) or (1.2).

The new SSOR-like method (NSSOR-like)
By combining the NSOR-like method with its backward version, the NSSOR-like method can easily be obtained for solving problem (1.2). The backward iteration scheme of the NSOR-like method is as follows: or, equivalently, with N_1 = ω(D − ωU)^{−1}. Then the NSSOR-like method can be written as follows.
The NSSOR-like method. Given two initial guesses y_0 ∈ R^m and z_0 ∈ R^n, for k = 0, 1, . . . , until y_k and z_k converge, compute y_{k+1} and z_{k+1} from y_k and z_k by the following procedure, where α > 0, β ≥ 0, and ω > 0 are given parameters.

The generalized NSOR-like method (GNSOR-like)
By introducing a diagonal relaxation matrix Ω = diag(ωI_m, τI_n), where I_m and I_n are the identity matrices of orders m and n, we can obtain a generalization of the NSOR-like method. More precisely, we have the following algorithmic description of the GNSOR-like method.
Remark This method is, in spirit, analogous to the GSOR method [5]: it uses a relaxation matrix for the NSOR-like method instead of a single relaxation parameter. Obviously, when ω = τ, this method reduces to the NSOR-like method mentioned in the previous section.
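To make the role of the two relaxation parameters ω and τ concrete, here is a minimal sketch in the spirit of the GSOR iteration of Bai et al. [5], which the GNSOR-like method resembles. The choice of Q, the omission of the parameters α and β, and all data below are simplifying assumptions for illustration, not the paper's exact update.

```python
import numpy as np

def gsor_like(A, B, p, q, omega, tau, Q=None, tol=1e-6, maxit=1000):
    """Two-parameter (omega, tau) relaxation in the spirit of GSOR [5].

    Sketch only: the paper's GNSOR-like update also involves alpha and beta;
    here we keep just the relaxation matrix diag(omega*I_m, tau*I_n)."""
    m, n = B.shape
    Q = np.eye(n) if Q is None else Q    # symmetric positive definite preconditioner
    y, z = np.zeros(m), np.zeros(n)
    for k in range(maxit):
        # forward sweep in y with relaxation omega, then update z with tau
        y = (1 - omega) * y + omega * np.linalg.solve(A, p - B @ z)
        z = z + tau * np.linalg.solve(Q, B.T @ y - q)
        res = np.linalg.norm(A @ y + B @ z - p) + np.linalg.norm(B.T @ y - q)
        if res < tol:
            return y, z, k + 1
    return y, z, maxit
```

With ω = τ the two sweeps use a single relaxation parameter, mirroring the reduction to the NSOR-like method noted in the remark above.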

The generalized NSSOR-like method (GNSSOR-like)
Similarly, the GNSSOR-like method can be derived from the GNSOR-like method by the symmetric technique introduced in [25].
Remark It can be seen easily that, when ω = τ , this method reduces to the NSSOR-like method mentioned in the previous subsection.
With different choices of the parameters, the GNSSOR-like method covers several SSOR methods as follows: (i) When α = 1 and β = 0, we get the SSOR method in [10].
The SSOR method 1. Given initial guesses y_0 ∈ R^m and z_0 ∈ R^n, for k = 0, 1, . . . , until y_k and z_k converge, compute y_{k+1} and z_{k+1} from y_k and z_k by the following procedure, where ω > 0 and τ > 0 are given parameters.

Convergence analysis
In this section, we give a convergence analysis of these methods. We need the following lemmas.
Proof Suppose, on the contrary, that λ = 1 is an eigenvalue of the iteration matrix R and that the corresponding eigenvector is (y^T, z^T)^T. Then we have where ρ denotes the spectral radius of the matrix Q^{−1}B^T A^{−1}B.
Proof Evidently, the eigenvalues of the matrix Q^{−1}B^T A^{−1}B are all real and positive. Let λ be a nonzero eigenvalue of the iteration matrix R, and let (y^T, z^T)^T be the corresponding eigenvector. Then we have Computations show that We get (1 − λ − αω)y = αωA^{−1}Bz from the first equality in (3.1), and hence λτ(1 − λ − αω)B^T y = λταωB^T A^{−1}Bz when λ ≠ 1 − αω. From this equality and the second equality in (3.1) it follows that If λ = 1 − αω ≠ 0, then Bz = 0 and −αω(1 − τβ)Qz = λτB^T y. It then follows that z = 0 and y ∈ null(B^T), where null(B^T) denotes the null space of the matrix B^T. Hence λ = 1 − αω is an eigenvalue of the matrix R with corresponding eigenvector (y^T, 0)^T, y ∈ null(B^T). Therefore, except for λ = 1 − αω, the remaining eigenvalues λ of the matrix R and all the eigenvalues μ of the matrix Q^{−1}B^T A^{−1}B satisfy the functional relationship λαωτμ = (λ − 1)(1 − τβ)(1 − λ − αω); that is, λ satisfies the quadratic equation By Lemma 3.1 we know that λ = 1 − αω and both roots λ of Eq. (3.2) satisfy |λ| < 1 if and only if Thus we can deduce that where ρ denotes the spectral radius of the matrix Q^{−1}B^T A^{−1}B.
This completes the proof of the theorem.
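Since the convergence conditions above are stated in terms of the spectral radius ρ of Q^{−1}B^T A^{−1}B, ρ is easy to estimate numerically for a small problem. The matrices below are illustrative assumptions, not the paper's test data.

```python
import numpy as np

# Estimate the spectral radius rho of Q^{-1} B^T A^{-1} B numerically.
rng = np.random.default_rng(1)
m, n = 8, 4
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)          # symmetric positive definite
B = rng.standard_normal((m, n))      # full column rank (generic)
Q = np.eye(n)                        # a simple illustrative choice of Q

S = np.linalg.solve(Q, B.T @ np.linalg.solve(A, B))  # Q^{-1} B^T A^{-1} B
mu = np.linalg.eigvals(S)
rho = max(abs(mu))
# With this symmetric positive definite Q, the eigenvalues mu are real
# and positive, as used in the proof above.
assert np.allclose(mu.imag, 0) and (mu.real > 0).all()
```

In practice one would use a sparse eigenvalue routine rather than dense `eigvals`, but the quantity being bounded is the same.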

Lemma 3.4 Let T be the iteration matrix of the GNSSOR-like method. Then we have:
is an eigenvalue of the matrix T with multiplicity at least m − n. From (3.4) we obtain that

λ = 1 − ω + ωα is an eigenvalue of the matrix T if and only if there exists a real eigenvalue μ of the matrix Q^{−1}B^T A^{−1}B such that λ is a zero of
Substituting y into (3.5), we get Since μ is an eigenvalue of the matrix Q^{−1}B^T A^{−1}B, we have or, simply, Conversely, the reverse implication can be proved similarly. (ii) If μ_max < 0, then the GNSSOR-like method is convergent if and only if By Lemma 3.1, |λ| < 1 if and only if and Computations show that if μ_min > 0, then we have and if μ_max < 0, then we have if 0 < ω < 1, and if 1 < ω < 2.

Numerical experiments
In this section, we present several numerical experiments to show the effectiveness of the GNSOR-like method and compare it with the SOR-like method in [13], the MSOR-like method in [19], and the MSOR-like method in [15]. We report the computational results in terms of the number of iterations (denoted by IT) and the computing time (denoted by CPU). We denote the choices of the parameters α, β, ω, τ in our tests by α_exp, β_exp, ω_exp, τ_exp, respectively. In our implementations, all iterations are run in MATLAB R2015a on a personal computer and are terminated when the current iterate satisfies RES < 10^{−6} or when the number of iterations exceeds 1000. Here the residual is defined as RES := ‖Ay_k + Bz_k − p‖_2 + ‖B^T y_k − q‖_2, the right-hand-side vector (p^T, q^T)^T is taken as the coefficient matrix of (1.1) times e = (1, 1, . . . , 1)^T, and the initial vectors are set to y_0 = (0, 0, . . . , 0)^T and z_0 = (0, 0, . . . , 0)^T.
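The experimental setup described above can be sketched as follows; the small random matrices below stand in for the actual test problem and are purely illustrative.

```python
import numpy as np

# Sketch of the setup: right-hand side (p^T, q^T)^T = K e with
# e = (1, ..., 1)^T, zero initial guesses, and the residual RES
# used as the stopping test.
rng = np.random.default_rng(2)
m, n = 6, 3
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)      # symmetric positive definite
B = rng.standard_normal((m, n))
K = np.block([[A, B], [B.T, np.zeros((n, n))]])

e = np.ones(m + n)
rhs = K @ e                      # the exact solution is then e itself
p, q = rhs[:m], rhs[m:]

y, z = np.zeros(m), np.zeros(n)  # initial guesses y_0 = 0, z_0 = 0
RES = np.linalg.norm(A @ y + B @ z - p) + np.linalg.norm(B.T @ y - q)
converged = RES < 1e-6           # False at the initial guess
```

Constructing the right-hand side this way makes the exact solution known, so the reported RES directly measures how far each iterate is from satisfying the system.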
where h = 1/(l + 1) is the mesh size and ⊗ denotes the Kronecker product.
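For reference, a Stokes-type construction with m = 2l^2 and n = l^2 of the kind commonly used for such tests in the saddle-point literature (e.g. [5]) can be generated as follows; this is an assumed reconstruction for illustration, not necessarily the paper's exact Example 4.1.

```python
import numpy as np

def stokes_matrices(l):
    """Stokes-type test matrices with m = 2 l^2, n = l^2 (a standard
    construction from the saddle-point literature; assumed here)."""
    h = 1.0 / (l + 1)                      # mesh size
    I = np.eye(l)
    # T = (1/h^2) * tridiag(-1, 2, -1), F = (1/h) * bidiag(-1, 1)
    T = (1.0 / h**2) * (2 * np.eye(l) - np.eye(l, k=1) - np.eye(l, k=-1))
    F = (1.0 / h) * (np.eye(l) - np.eye(l, k=-1))
    A = np.block([[np.kron(I, T) + np.kron(T, I), np.zeros((l*l, l*l))],
                  [np.zeros((l*l, l*l)), np.kron(I, T) + np.kron(T, I)]])
    B = np.vstack([np.kron(I, F), np.kron(F, I)])
    return A, B

A, B = stokes_matrices(4)
assert A.shape == (32, 32) and B.shape == (32, 16)
# A is symmetric positive definite and B has full column rank,
# so system (1.1) has a unique solution.
assert np.allclose(A, A.T) and np.linalg.matrix_rank(B) == 16
```

Dense arrays are used here for clarity; for the problem sizes in the tables, sparse storage would be essential.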
In the test, we set m = 2l^2 and n = l^2. The choices of the matrix Q for Example 4.1 are listed in Table 1, and the computational results are summarized in Table 2. From Table 2 we can see that the GNSOR-like method requires far fewer iterations than the NSOR-like [14], SOR-like [13], and MSOR-like [15] methods, and hence costs less CPU time than the others. So the GNSOR-like method is effective for solving the saddle-point problem (1.1).

Concluding remark
In this paper, we first presented several iteration methods for solving the saddle-point problem (1.1) and then gave their convergence results. Numerical experiments show that the GNSOR-like method is faster and requires far fewer iteration steps than the other methods mentioned in the paper.