- Research
- Open Access

# The alternate direction iterative methods for generalized saddle point systems

*Journal of Inequalities and Applications*
**volume 2019**, Article number: 285 (2019)

## Abstract

The paper studies two splitting forms of the generalized saddle point matrix to derive two alternate direction iterative schemes for generalized saddle point systems. Some convergence results are established for these two alternate direction iterative methods. Meanwhile, a numerical example is given to show that the proposed alternate direction iterative methods are much more effective and efficient than the existing one.

## Introduction

Consider the generalized saddle-point problem

where \(A\in R^{n\times n}\), \(B\in R^{m\times n}\), \(C\in R^{m\times m}\), \(f\in R^{n}\), \(g\in R^{m}\) and \(m\leq n\).

This class of linear systems arises in many scientific and engineering applications such as a mixed finite element approximation of elliptic partial differential equations, optimization, optimal control, structural analysis and electrical networks; see [1,2,3,4,5,6,7,8,9,10,11].

Recently, Benzi et al. [12, 13] studied the linear systems of the form (1) whose coefficient matrix

satisfies all of the following assumptions:

- \(A=\left[\begin{array}{cc}A_{1}& 0\\ 0& A_{2}\end{array}\right]\), \(B=[B_{1}, B_{2}]\), \(A_{i}\in R^{n_{i}\times n_{i}}\) for \(i=1,2\) with \(n_{1}+n_{2}=n\), and \(B_{i}\in R^{m\times n_{i}}\) for \(i=1,2\);
- \(A_{i}\) is positive definite (i.e., its symmetric part \(H_{i}=(A_{i}+A^{T}_{i})/2\) is positive definite) for \(i=1,2\);
- \(\operatorname{rank}(B)=m\).

They [12] split the coefficient matrix \(\mathscr{A}\) as

where

which is called dimensional splitting of \(\mathscr{A}\), and proposed the following alternate direction iterative method:

which was proved to converge unconditionally for any \(\alpha >0\). Meanwhile, based on the dimensional splitting of \(\mathscr{A}\), they [12, 13] proposed the dimensional splitting preconditioner for the linear system (1), applied a Krylov subspace method such as restarted GMRES to the preconditioned linear system, and obtained some good results.
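This alternate direction iteration can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' code: it takes the block form of \(\mathscr{A}_{1}\) and \(\mathscr{A}_{2}\) used in [12] with \(C=0\) and the \(-B\) sign convention, which makes the symmetric part of each \(\mathscr{A}_{i}\) positive semidefinite; the function names are ours.

```python
import numpy as np

def dimensional_splitting(A1, A2, B1, B2):
    """Dimensional splitting of the saddle point matrix (a sketch of (3)).

    Assumes the block form with C = 0 and the -B sign convention:
        A_cal_1 = [[A1, 0, B1^T], [0, 0, 0], [-B1, 0, 0]]
        A_cal_2 = [[0, 0, 0], [0, A2, B2^T], [0, -B2, 0]]
    """
    n1, n2, m = A1.shape[0], A2.shape[0], B1.shape[0]
    Z = np.zeros
    A_cal_1 = np.block([
        [A1,          Z((n1, n2)), B1.T],
        [Z((n2, n1)), Z((n2, n2)), Z((n2, m))],
        [-B1,         Z((m, n2)),  Z((m, m))],
    ])
    A_cal_2 = np.block([
        [Z((n1, n1)), Z((n1, n2)), Z((n1, m))],
        [Z((n2, n1)), A2,          B2.T],
        [Z((m, n1)),  -B2,         Z((m, m))],
    ])
    return A_cal_1, A_cal_2

def adi_solve(A_cal_1, A_cal_2, b, alpha=1.0, tol=1e-10, maxit=500):
    """One ADI sweep per iteration: a half step with each part, i.e.
    (alpha*I + A_cal_1) x_half = (alpha*I - A_cal_2) x + b, then
    (alpha*I + A_cal_2) x_new  = (alpha*I - A_cal_1) x_half + b."""
    N = b.size
    I = np.eye(N)
    A_cal = A_cal_1 + A_cal_2
    x = np.zeros(N)
    for _ in range(maxit):
        x_half = np.linalg.solve(alpha * I + A_cal_1,
                                 (alpha * I - A_cal_2) @ x + b)
        x = np.linalg.solve(alpha * I + A_cal_2,
                            (alpha * I - A_cal_1) @ x_half + b)
        if np.linalg.norm(b - A_cal @ x) <= tol * np.linalg.norm(b):
            break
    return x
```

For data satisfying the assumptions above, the spectral radius of the induced iteration matrix \((\alpha I+\mathscr{A}_{2})^{-1}(\alpha I-\mathscr{A}_{1})(\alpha I+\mathscr{A}_{1})^{-1}(\alpha I-\mathscr{A}_{2})\) should come out below 1, in line with the unconditional convergence just quoted.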

In this paper, we propose two types of alternate direction iterative methods. In the first, based on the dimensional splitting (3), the matrix *αI* is replaced by two nonnegative diagonal matrices \(\mathscr{D}_{1}\) and \(\mathscr{D}_{2}\) to form a new alternate direction iterative scheme. In the second, we propose a new splitting of \(\mathscr{A}\), i.e.,

where

and apply the two nonnegative diagonal matrices \(\mathscr{D}_{1}\) and \(\mathscr{D}_{2}\) to this new splitting to obtain another alternate direction iterative scheme. Some convergence results are then established for the two alternate direction iterative schemes, and a numerical example is given to show that the proposed ADI methods are much more effective and efficient than the existing one.

The paper is organized as follows. Two alternate direction iterative schemes are proposed in Sect. 2. The main convergence results for these two schemes are given in Sect. 3. In Sect. 4, a numerical example is presented to demonstrate that the proposed methods are effective and efficient. A conclusion is given in Sect. 5.

## The ADI methods

In this section, two alternate direction iterative schemes are proposed based on the previous two splittings (3) and (6). Let

where \(\alpha >0\) and \(I_{m}\) is the \(m\times m\) identity matrix. Then for the two splittings (3) and (6) one has

which form the following two alternate direction iterative schemes.

*Given an initial guess* \(x^{(0)}\), *for* \(k=0,1,2,\ldots \) , *until* \(\{x^{(k)}\}\) *converges*, *compute*

*where* \(\mathscr{D}_{1}\) *and* \(\mathscr{D}_{2}\) *are defined in* (8).

Eliminating \(x^{(k+\frac{1}{2})}\) in iterations (10) and (11), we obtain the stationary schemes

where

and

are the iteration matrices of the ADI iterations (12) and (13), respectively. It is easy to see that (14) and (15), respectively, are similar to the matrices

and

As is shown in [8], the iteration matrix \(\mathscr{L}\) is induced by the unique splitting \(\mathscr{A}=\mathscr{P}-\mathscr{Q}\) with \(\mathscr{P}\) nonsingular, i.e., \(\mathscr{L}=\mathscr{P}^{-1} \mathscr{Q}=I-\mathscr{P}^{-1}\mathscr{A}\). Furthermore, \(f= \mathscr{P}^{-1}b\). The matrices \(\mathscr{P}\) and \(\mathscr{Q}\) are given by

Also, the iteration matrix \(\mathscr{T}\) is induced by the unique splitting \(\mathscr{A}=\mathscr{M}-\mathscr{N}\) with

i.e., \(\mathscr{T}=\mathscr{M}^{-1}\mathscr{N}=I-\mathscr{M}^{-1} \mathscr{A}\). Furthermore, \(g=\mathscr{M}^{-1}b\). We often refer to \(\mathscr{P}\) or \(\mathscr{M}\) as the preconditioner.

## The convergence of the ADI methods

In this section, some convergence results on the ADI methods will be established. First, we state the lemmas that will be used in this section.

### Lemma 1

*Let* \(A=M-N\in C^{n\times n}\) *with* *A* *and* *M* *nonsingular*, *and let* \(T=NM^{-1}\). *Then* \(A-TAT^{*}=(I-T)(AA^{-*}M^{*}+N)(I-T^{*})\).

The proof is similar to the proof of Lemma 5.30 in [1].

### Lemma 2

*Let* \(A\in R^{n\times n}\) *be symmetric and positive definite*. *If* \(A=M-N\) *with* *M* *nonsingular is a splitting such that* \(M+N\) *has a nonnegative definite symmetric part*, *then* \(\|T\|_{A}=\|A^{-1/2}TA^{1/2}\|_{2}\leq 1\), *where* \(T=NM^{-1}\).

### Proof

It follows from Lemma 1 that

Since

it follows from (20) that \(A-TAT^{T}\succeq 0\) and thus

From (22), we have \(I\succeq (A^{-1/2}TA^{1/2})(A^{-1/2}TA ^{1/2})^{T}\succeq 0\). Therefore,

This completes the proof. □
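Lemma 2 lends itself to a quick numerical sanity check. The matrices below are an arbitrary choice of ours satisfying the hypotheses (a sketch, not part of the paper's experiments): *A* is symmetric positive definite, *M* is *A* plus a skew-symmetric perturbation, and \(M+N\) then has symmetric part equal to *A*.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])  # skew-symmetric
M = A + S                    # nonsingular
N = S                        # so that A = M - N
# sym(M + N) = sym(A + 2S) = A, which is positive definite, hence >= 0
T = N @ np.linalg.inv(M)

# A^{1/2} and A^{-1/2} via the spectral decomposition of A
w, V = np.linalg.eigh(A)
A_half = V @ np.diag(np.sqrt(w)) @ V.T
A_half_inv = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

norm_T_A = np.linalg.norm(A_half_inv @ T @ A_half, 2)
print(norm_T_A)  # approximately 0.5, which is <= 1 as Lemma 2 guarantees
```

For this particular choice the *A*-norm of \(T=NM^{-1}\) works out to exactly \(1/2\), comfortably within the bound of the lemma.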

### Lemma 3

*Let*
\(\mathscr{A}_{i}\), \(\mathscr{B}_{i}\)*and*
\(\mathscr{D}_{i}\)*be defined in* (4) *and* (8) *for*
\(i=1,2\). *If*
\({A}_{i}\)*has positive definite symmetric part*
\(H_{i}\)*and*
\(0<\alpha \leq 2\lambda _{\mathrm{min}}(H_{i})\)*with*
\(\lambda _{\mathrm{min}}(H _{i})\)*the smallest eigenvalue of*
\(H_{i}\), *then*

*where* \(j=2\) *if* \(i=1\) *and* \(j=1\) *if* \(i=2\).

### Proof

We only prove the former inequality in (23) and the same method can yield the latter one. Let \(M_{i}=\mathscr{D}_{i}+\mathscr{A}_{i}\) and \(N_{i}=-\mathscr{D}_{j}+\mathscr{A}_{i}\). Then we have

where *I* is the \((n_{1}+n_{2}+m)\times (n_{1}+n_{2}+m)\) identity matrix, and

When \(i=1\) and \(j=2\),

Noting that \(0<\alpha \leq 2\lambda _{\mathrm{min}}(H_{i})\), we have \(2H_{i}-\alpha I_{n_{i}}=(A^{T}_{i}+A_{i})-\alpha I_{n_{i}}\succeq 0\). Thus

which shows that \(M_{1}+N_{1}\) has a nonnegative definite symmetric part. Similarly, \(M_{2}+N_{2}\) also has a nonnegative definite symmetric part. Thus, \(M_{i}+N_{i}\) has a nonnegative definite symmetric part for \(i=1,2\). Let \(T_{i}=N_{i}M_{i}^{-1}\). Then it follows from Lemma 2 that

Consequently, \(\|T_{i}\|_{2}=\|N_{i}M_{i}^{-1}\|_{2}=\|(\mathscr{D} _{j}-\mathscr{A}_{i})(\mathscr{D}_{i}+\mathscr{A}_{i})^{-1}\|_{2} \leq 1\) for \(i=1,2\). This completes the proof. □

### Theorem 1

*Consider problem* (1) *and assume that* \(\mathscr{A}\) *satisfies the assumptions above*. *Then* \(\mathscr{A}\) *is nonsingular*. *Further*, *if* \(0<\alpha \leq 2\delta \) *with* \(\delta =\min \{ \lambda _{\mathrm{min}}(H_{1}),\lambda _{\mathrm{min}}(H_{2})\}\), *then* \(\|\hat{\mathscr{L}}\|_{2}\leq 1\) *and* \(\|\hat{\mathscr{T}}\|_{2}\leq 1\).

### Proof

The proof of the nonsingularity of \(\mathscr{A}\) can be found in [10]. Since \(0<\alpha \leq 2\delta =2\min \{\lambda _{\mathrm{min}}(H_{1}),\lambda _{\mathrm{min}}(H_{2})\}\), Lemma 3 shows that (23) holds for \(i=1\), \(j=2\) and for \(i=2\), \(j=1\). As a result,

This completes the proof. □

### Theorem 2

*Consider problem* (1) *and assume that* \(\mathscr{A}\) *satisfies the assumptions above*. *If* \(0<\alpha \leq 2\delta \) *with* \(\delta =\min \{\lambda _{\mathrm{min}}(H_{1}),\lambda _{\mathrm{min}}(H_{2})\}\), *then the iterations* (10) *and* (11) *are convergent*; *that is*, \(\rho (\mathscr{L})<1\) *and* \(\rho (\mathscr{T})<1\).

### Proof

Firstly, we prove \(\rho (\mathscr{L})<1\). Since \(\mathscr{L}(\alpha )\) is similar to \(\hat{\mathscr{L}}\), we have \(\rho (\mathscr{L})=\rho (\hat{\mathscr{L}})\). Let *λ* be an eigenvalue of \(\hat{\mathscr{L}}(\alpha )\) satisfying \(|\lambda |=\rho (\hat{\mathscr{L}})\) and let *x* be the corresponding eigenvector with \(\|x\|_{2}=1\) (note that necessarily \(x\neq 0\)). Then \(\hat{\mathscr{L}}x=\lambda x\) and consequently,

where \(u=(\mathscr{D}_{1}+\mathscr{A}^{*}_{1})^{-1}(\mathscr{D}_{2}- \mathscr{A}^{*}_{1})x\) and \(v=(\mathscr{D}_{1}-\mathscr{A}_{2})( \mathscr{D}_{2}+\mathscr{A}_{2})^{-1}x\). Using the Cauchy–Schwarz inequality,

The equality in (26) holds if and only if \(u=kv\), where \(k\in \mathbb {C}\). Also, Lemma 3 yields

As a result, if \(u\neq kv\), then it follows from (26) and (27) that

if \(u=kv\) and \(u^{*}u\cdot v^{*}v<1\), then

In what follows we will prove by contradiction that \(u=kv\) and \(u^{*}u\cdot v^{*}v=1\) do not hold simultaneously.

Assume that \(u=kv\) and \(u^{*}u\cdot v^{*}v=1\). Since \(u^{*}u\leq 1\) and \(v^{*}v\leq 1\), \(|k|=u^{*}u=v^{*}v=1\). Then it follows from (27) that

Noting \(\|x\|_{2}=1\), (30) implies that *x* is an eigenvector of both \((\mathscr{D}_{2}-\mathscr{A}_{1})(\mathscr{D}_{1}+\mathscr{A}_{1})^{-1}( \mathscr{D}_{1}+\mathscr{A}^{*}_{1})^{-1}(\mathscr{D}_{2}-\mathscr{A} ^{*}_{1})\) and \((\mathscr{D}_{2}+\mathscr{A}^{*}_{2})^{-1}( \mathscr{D}_{1}-\mathscr{A}^{*}_{2})(\mathscr{D}_{1}-\mathscr{A}_{2})( \mathscr{D}_{2}+\mathscr{A}_{2})^{-1}\), corresponding to the common eigenvalue 1, i.e.,

Since

where *E*, *F* and *G* denote nonzero matrices, the former equation in (31) can be written as

which indicates \(x_{2}=0\). Therefore, \(x=[x^{*}_{1},0,x_{3}^{*}]^{*}\). Let \(y=(\mathscr{D}_{2}+\mathscr{A}_{2})^{-1}x\). Then \(x=(\mathscr{D} _{2}+\mathscr{A}_{2})y\). The latter equation in (31) becomes

and consequently

that is,

which indicates \(y_{1}=0\). Therefore, \(y=[0,y_{2}^{*},y_{3}^{*}]^{*}\). Also, \(x=(\mathscr{D}_{2}+\mathscr{A}_{2})y\). Then

which shows that

Since \(u=kv\),

which can be written as

for \(x=(\mathscr{D}_{2}+\mathscr{A}_{2})y\). Further, (39) becomes

i.e.,

i.e.,

Here, we assert \(k\neq 1\). Suppose, on the contrary, that \(k=1\). Then (25) shows \(\lambda =u^{*}v=v^{*}v=1\) since \(u=kv\) and \(v^{*}v=1\). Note that *λ* is an eigenvalue of \(\hat{\mathscr{L}}\), which is similar to \(\mathscr{L}(\alpha )\). Thus \(\hat{\mathscr{L}}\) and \(\mathscr{L}=[( \mathscr{D}_{1}+\mathscr{A}_{1})(\mathscr{D}_{2}+\mathscr{A}_{2})]^{-1}[( \mathscr{D}_{2}-\mathscr{A}_{1})(\mathscr{D}_{1} -\mathscr{A}_{2})]\) have the same eigenvalue, 1. Let *w* be the eigenvector of \(\mathscr{L}\) corresponding to the eigenvalue 1 (note that necessarily \(w\neq 0\)). One has

and consequently

Since \(\mathscr{A}\) is nonsingular, (44) yields \(w=0\), which contradicts the fact that *w* is an eigenvector of \(\mathscr{L}(\alpha )\). Thus, \(k\neq 1\) and \(1-k\neq 0\). From the third equation in (42), one has

with \(\kappa :=\frac{2(1+k)}{\alpha (1-k)}\). Then it follows from the second equation in (37) that

where \(\mathscr{J}=A_{2}+\kappa B_{1}^{T}B_{1}\). Note \(|k|=1\) and \(k\neq 1\). Let \(k=\cos \theta +i\sin \theta \), where \(i=\sqrt{-1}\), \(\theta \in R\) and \(\theta \neq 2t\pi \) for any integer *t*. Then

is either purely imaginary or zero. As a result, \(\mathscr{J}^{*}+ \mathscr{J}=A_{2}^{T}+A_{2}\succ 0\) since \(A_{2}\) is positive definite. Thus, \(\mathscr{J}\) is positive definite and hence nonsingular. Equation (46) indicates \(y_{2}=0\), and thus (45) shows that \(y_{3}=0\). Then it follows from the third equation in (37) that \(x_{3}=0\). Therefore, \(x=[0,0,0]^{*}\), which contradicts the fact that *x* is an eigenvector of \(\hat{\mathscr{L}}(\alpha )\) with \(\|x\|_{2}=1\). The argument above shows that \(u=kv\) and \(u^{*}u\cdot v^{*}v=1\) cannot hold simultaneously. Therefore, \(\rho [\mathscr{L}(\alpha )]=| \lambda |<1\) and consequently the iteration (10) converges.

By the same method, we can obtain \(\rho (\mathscr{T})<1\). Therefore, iterations (10) and (11) are both convergent. This completes the proof. □

## A numerical example

A numerical example is given in this section to show that the proposed alternate direction iterative methods are very effective.

### Example 1

Consider problem (1) and assume that \(\mathscr{A}\) is as shown in (2), where \(A_{1}=A_{2}=\operatorname{tri}(1,1,-1)\in R^{n \times n}\), \(B_{1}=B_{2}=I_{n}\in R^{n\times n}\) is the \(n\times n\) identity matrix, and \(b=(1,1,\ldots ,1)^{T}\in R^{2n}\).
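The data of this example can be generated as follows. This is our sketch: reading \(\operatorname{tri}(1,1,-1)\) as the tridiagonal matrix with subdiagonal 1, diagonal 1 and superdiagonal −1 is an assumption, but it makes the symmetric part of each \(A_{i}\) exactly the identity, so the assumptions of Sect. 1 hold.

```python
import numpy as np

n = 6  # a small instance; the paper reports n = 500, 1000, 1500

# tri(1,1,-1): tridiagonal with subdiagonal 1, diagonal 1, superdiagonal -1
A1 = np.eye(n) + np.diag(np.ones(n - 1), -1) - np.diag(np.ones(n - 1), 1)
A2 = A1.copy()
B1 = np.eye(n)
B2 = np.eye(n)
B = np.hstack([B1, B2])

# Check the assumptions of Sect. 1
H1 = (A1 + A1.T) / 2
assert np.allclose(H1, np.eye(n))     # H1 = H2 = I_n, the identity
assert np.linalg.matrix_rank(B) == n  # rank(B) = m = n

delta = np.linalg.eigvalsh(H1).min()
print(delta)  # 1.0
```

Since \(H_{1}=H_{2}=I_{n}\), we get \(\delta =\min \{\lambda _{\mathrm{min}}(H_{1}),\lambda _{\mathrm{min}}(H_{2})\}=1\), so the condition of Theorem 2 reduces to \(0<\alpha \leq 2\).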

We conduct numerical experiments to compare the performance of the three alternate direction iterative schemes (5), (10) and (11) for problem (1). The former scheme (5), written as Algorithm 1 (A1), was proposed by Benzi et al. in [12, 13], while the latter schemes (10) and (11), written as Algorithm 2 (A2) and Algorithm 3 (A3), are proposed in this paper. These three algorithms were coded in Matlab, and all computations were performed on an HP dx7408 PC (Intel Core E4500 CPU, 2.2 GHz, 1 GB RAM) with Matlab 7.9 (R2009b).

The stopping criterion is defined as

Numerical results are presented in Table 1. In particular, we report in Fig. 1 the change of RE of A1, A2 and A3 when \(n=1000\) with the iteration number increasing.
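The stopping test can be sketched as follows, assuming (as the quantity RE reported in Fig. 1 suggests) that the criterion is the usual relative residual \(\mathrm{RE}=\|b-\mathscr{A}x^{(k)}\|_{2}/\|b\|_{2}\) falling below a prescribed tolerance; the function name is ours.

```python
import numpy as np

def relative_residual(A_cal, b, x):
    """RE = ||b - A*x||_2 / ||b||_2 -- our assumption for the reported
    quantity RE; the iteration stops once RE falls below a tolerance."""
    return np.linalg.norm(b - A_cal @ x) / np.linalg.norm(b)
```

For example, RE is 0 at the exact solution and 1 at the zero initial guess \(x^{(0)}=0\).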

From Table 1, we can make the following observations: (i) A2 (i.e., Algorithm 2) generally requires far fewer iterations than A1 and A3 (Algorithm 1 and Algorithm 3) when \(n=500\), \(n=1000\) and \(n=1500\); (ii) A3 requires much less computing time than A2 and A1. Thus, both A2 and A3 are generally superior to A1 in terms of iteration number and computing time, so the proposed methods are more effective and efficient than the existing method.

Figure 1 shows that the RE generated by A2 converges to 0 most quickly as the iteration number increases when \(n=1000\). Therefore, A2 is superior to A1 and A3 in terms of iteration number.

## Conclusions

In this paper we propose two alternate direction iterative methods for generalized saddle-point systems based on two splitting forms of generalized saddle-point matrix, and then establish some convergence theorems for these two iterative methods. Finally, we present a numerical example to demonstrate that the proposed alternate direction iterative methods are superior to the existing one.

## References

- 1. Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)
- 2. Bai, Z.-Z., Golub, G.H., Ng, M.K.: On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations. Numer. Linear Algebra Appl. **17**, 319–335 (2007)
- 3. Bai, Z.-Z., Golub, G.H.: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. **27**, 1–23 (2007)
- 4. Bai, Z.-Z., Golub, G.H., Lu, L.-Z., Yin, J.-F.: Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. **26**, 844–863 (2005)
- 5. Bai, Z.-Z., Golub, G.H., Pan, J.-Y.: Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer. Math. **98**, 1–32 (2004)
- 6. Bai, Z.-Z., Golub, G.H., Ng, M.K.: On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. Linear Algebra Appl. **428**, 413–440 (2008)
- 7. Li, L., Huang, T.-Z., Liu, X.-P.: Modified Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems. Numer. Linear Algebra Appl. **14**, 217–235 (2007)
- 8. Benzi, M., Szyld, D.B.: Existence and uniqueness of splittings for stationary iterative methods with applications to alternating methods. Numer. Math. **76**, 309–321 (1997)
- 9. Benzi, M., Gander, M., Golub, G.H.: Optimization of the Hermitian and skew-Hermitian splitting iteration for saddle-point problems. BIT Numer. Math. **43**, 881–900 (2003)
- 10. Benzi, M., Golub, G.H.: A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. **26**, 20–41 (2004)
- 11. Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. **14**, 1–137 (2005)
- 12. Benzi, M., Ng, M., Niu, Q., Wang, Z.: A relaxed dimensional factorization preconditioner for the incompressible Navier–Stokes equations. J. Comput. Phys. **230**, 6185–6202 (2011)
- 13. Benzi, M., Guo, X.-P.: A dimensional split preconditioner for Stokes and linearized Navier–Stokes equations. Appl. Numer. Math. **61**, 66–76 (2011)

### Acknowledgements

The authors would like to thank the anonymous referees for their valuable comments and suggestions, which actually stimulated this work.

### Availability of data and materials

Not applicable.

### Funding

The work was supported by the National Natural Science Foundation of China (11601409, 11201362), the Natural Science Foundation of Shaanxi Province of China (2016JM1009), the Natural Science Foundation of Department of Shaanxi Province of China (2017JK0344), the Key Projects of Social Science Planning of Gansu Province (ZD007) and 2018 Strategic Research Projects of the Scientific Research Projects of Institutions of Higher Learning of Gansu Province (2018f-20).

## Author information

### Affiliations

### Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Cheng-yi Zhang.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

## Additional information

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Luo, S., Cui, A. & Zhang, C. The alternate direction iterative methods for generalized saddle point systems.
*J Inequal Appl* **2019**, 285 (2019). https://doi.org/10.1186/s13660-019-2235-z


### MSC

- 65F10
- 15A15
- 15F10

### Keywords

- Alternate direction iterative method
- Generalized saddle point system
- Convergence