
# The Picard-HSS-SOR iteration method for absolute value equations

*Journal of Inequalities and Applications*
**Volume 2020**, Article number: 258 (2020)

## Abstract

In this paper, we present the Picard-HSS-SOR iteration method for finding the solution of the absolute value equation (AVE), which is more efficient than the Picard-HSS iteration method for AVE. The convergence results of the Picard-HSS-SOR iteration method are proved under certain assumptions imposed on the involved parameter. Numerical experiments demonstrate that the Picard-HSS-SOR iteration method for solving absolute value equations is feasible and effective.

## 1 Introduction

Let \(A \in R^{n\times n}\), \(b \in R^{n}\). We consider the following absolute value equation (AVE):

$$ Ax - \vert x \vert = b, \tag{1.1} $$

where \(|x|\) denotes the vector in \(R^{n}\) whose components are the absolute values of the corresponding components of *x*. The AVE (1.1) is a special case of the generalized system of absolute value equations of the form

$$ Ax + B\vert x \vert = b, \tag{1.2} $$

where \(B\in R^{n\times n}\). The system of absolute value equations (1.2) was introduced in [1] and investigated in a more general context in [2].

The importance of the AVE (1.1) arises from the fact that linear programs, bimatrix games and other important problems in optimization can all be reduced to a system of absolute value equations. In recent years, the problem of finding a solution of the AVE has attracted much attention and has been studied in the literature [3–18]. For the numerical solution of the AVE (1.1), there exist many efficient methods, such as the SOR-like iteration method [12], the relaxed nonlinear PHSS-like iteration method [15], the Levenberg–Marquardt method [16], the generalized Newton method [17], the Gauss–Seidel iteration method [19], and so on.

Recently, Salkuyeh [18] presented the Picard-HSS iteration method for solving the AVE and established its convergence theory under suitable conditions. Numerical experiments in [18] showed that the Picard-HSS iteration method is more efficient than the Picard iteration method and the generalized Newton method.

In this paper, we present a new iteration method, the Picard-HSS-SOR method, for finding the solution of the absolute value equation (AVE); it is more efficient than the Picard-HSS iteration method for the AVE. The convergence of the Picard-HSS-SOR iteration method is proved under certain assumptions on the involved parameters. Numerical experiments demonstrate that the Picard-HSS-SOR iteration method for solving absolute value equations is feasible and effective.

This article is organized as follows. In Sect. 2, we recall the Picard-HSS iteration method and some results that will be used in the following analysis. The Picard-HSS-SOR iteration method and its convergence analysis are proposed in Sect. 3. Experimental results and conclusions are given in Sects. 4 and 5, respectively.

## 2 Preliminaries

First, we present some notation and auxiliary results.

The symbol \(I_{n}\) denotes the \(n\times n\) identity matrix. \(\|A\|\) denotes the spectral norm defined by \(\|A\|:=\max \{\|Ax\|: x\in \mathbf{R}^{n}, \|x\|=1 \}\), where \(\|x\|\) is the 2-norm. For \(x\in \mathbf{R}^{n}\), \(\operatorname{sign}(x)\) denotes the vector with components equal to −1, 0 or 1 depending on whether the corresponding component of *x* is negative, zero or positive. In addition, \(\operatorname{diag}(\operatorname{sign}(x))\) is a diagonal matrix whose diagonal elements are the components of \(\operatorname{sign}(x)\). A matrix \(A= (a_{ij} )\in R^{m\times n}\) is said to be nonnegative (positive) if its entries satisfy \(a_{ij}\geq 0\) (\(a_{ij}>0\)) for all \(1\leq i\leq m\) and \(1\leq j\leq n\).

### Proposition 2.1

([2])

*Suppose that* \(A \in \mathbf{R}^{n\times n}\) *is invertible*. *If* \(\|A^{-1}\|< 1\), *then the AVE in* (1.1) *has a unique solution for any* \(b\in \mathbf{R}^{n}\).
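
For a concrete instance, the condition of Proposition 2.1 can be checked numerically. The pure-Python sketch below estimates \(\|A^{-1}\|\) by power iteration on \(M^{T}M\) with \(M=A^{-1}\); the 2×2 matrix is our own illustrative choice, not taken from the paper.

```python
import math

# illustrative 2x2 matrix; A^{-1} is formed explicitly via the adjugate
A = [[4.0, -1.0], [1.0, 4.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
M = [[A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det, A[0][0] / det]]            # M = A^{-1}

# power iteration on M^T M converges to ||M||^2 (largest squared singular value)
v = [1.0, 1.0]
for _ in range(100):
    w = [M[0][0] * v[0] + M[0][1] * v[1],
         M[1][0] * v[0] + M[1][1] * v[1]]        # w = M v
    v = [M[0][0] * w[0] + M[1][0] * w[1],
         M[0][1] * w[0] + M[1][1] * w[1]]        # v = M^T w
    s = math.sqrt(v[0] ** 2 + v[1] ** 2)         # s -> ||M||^2
    v = [t / s for t in v]                       # normalize

norm_A_inv = math.sqrt(s)
assert norm_A_inv < 1   # Proposition 2.1: the AVE then has a unique solution
```

For this matrix \(\|A^{-1}\| = 1/\sqrt{17}\approx 0.2425 < 1\), so the AVE \(Ax-|x|=b\) is uniquely solvable for every *b*.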

### Lemma 2.1

([20])

*For any vectors* \(x=(x_{1},x_{2},\ldots,x_{n})^{T} \in \mathbf{R}^{n}\) *and* \(y=(y_{1},y_{2},\ldots,y_{n})^{T} \in \mathbf{R}^{n}\), *the following results hold*:

(I) \(\||x|-|y|\| \leq \|x-y\|\); (II) *if* \(0 \leq x \leq y\), *then* \(\|x\| \leq \|y\|\);

(III) *if* \(x \leq y\) *and* *P* *is a nonnegative matrix*, *then* \(P x \leq P y\), *where* \(x\leq y\) *denotes* \(x_{i}\leq y_{i}, 1\leq i\leq n\).
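
These inequalities are easy to check numerically; a small pure-Python sketch verifying (I) and (II) on sample vectors (the vectors are arbitrary illustrative choices):

```python
import math

def norm2(v):
    # Euclidean (2-)norm of a vector given as a list
    return math.sqrt(sum(t * t for t in v))

x = [3.0, -1.0, 2.0]
y = [-2.0, 4.0, 2.5]

# (I): || |x| - |y| || <= || x - y ||
lhs = norm2([abs(a) - abs(b) for a, b in zip(x, y)])
rhs = norm2([a - b for a, b in zip(x, y)])
assert lhs <= rhs

# (II): 0 <= x <= y componentwise implies ||x|| <= ||y||
u = [0.5, 1.0, 0.0]
v = [1.0, 1.5, 2.0]
assert all(0 <= a <= b for a, b in zip(u, v))
assert norm2(u) <= norm2(v)
```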

*Let* \(A\in \mathbf{R}^{n\times n}\) *be a non*-*Hermitian positive definite matrix*. *Then the matrix* *A* *possesses a Hermitian*/*skew*-*Hermitian* (*HSS*) *splitting*

$$ A = H + S, $$

*where*

$$ H=\frac{1}{2} \bigl(A+A^{T} \bigr), \qquad S=\frac{1}{2} \bigl(A-A^{T} \bigr). $$

### Algorithm 2.1

(The Picard-HSS iteration method [18])

*Given an initial guess* \(x^{(0)}\in \mathbf{R}^{n}\) *and a sequence* \(\{ l_{k} \} ^{\infty }_{k=0}\) *of positive integers*, *compute* \(x^{(k+1)}\) *for* \(k = 0, 1, 2, \ldots \), *using the following iteration scheme until* \(\{ x^{(k)} \} \) *satisfies the stopping criterion*:

(a) *Set* \(x^{(k,0)}: = x^{(k)}\);

(b) *for* \(l=0, 1,\ldots, l_{k}-1\), *solve the following linear systems to obtain* \(x^{(k,l+1)}\):

$$ \textstyle\begin{cases} (\alpha I+H )x^{(k,l+\frac{1}{2})}= (\alpha I-S )x^{(k,l)}+ \vert x^{(k)} \vert +b, \\ (\alpha I+S )x^{(k,l+1)}= (\alpha I-H )x^{(k,l+\frac{1}{2})}+ \vert x^{(k)} \vert +b, \end{cases} $$

*where* *α* *is a given positive constant*;

(c) *set* \(x^{(k+1)}:=x^{(k,l_{k})}\).

*The* \((k+1)\)*th iterate of the Picard*-*HSS method can be written as*

$$ x^{(k+1)}=T(\alpha )^{l_{k}}x^{(k)}+\sum_{j=0}^{l_{k}-1}T(\alpha )^{j}G(\alpha ) \bigl( \bigl\vert x^{(k)} \bigr\vert +b \bigr), \quad k=0,1,2,\ldots, $$

*where*

$$ T(\alpha )=(\alpha I+S)^{-1}(\alpha I-H) (\alpha I+H)^{-1}(\alpha I-S) $$

*and*

$$ G(\alpha )=2\alpha (\alpha I+S)^{-1}(\alpha I+H)^{-1}. $$
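
To make the scheme concrete, the following pure-Python sketch implements Algorithm 2.1 on a small illustrative problem. The Gaussian-elimination helper `solve`, the test matrix, and all parameter values (`alpha`, the inner/outer counts, the tolerance) are our own illustrative choices, not taken from the paper.

```python
import math

def solve(M, rhs):
    # solve M z = rhs by Gaussian elimination with partial pivoting
    n = len(M)
    a = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n + 1):
                a[r][k] -= f * a[c][k]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (a[r][n] - sum(a[r][k] * z[k] for k in range(r + 1, n))) / a[r][r]
    return z

def picard_hss(A, b, alpha=1.0, inner=5, outer=100, tol=1e-10):
    n = len(A)
    H = [[0.5 * (A[i][j] + A[j][i]) for j in range(n)] for i in range(n)]
    S = [[0.5 * (A[i][j] - A[j][i]) for j in range(n)] for i in range(n)]
    aH = [[alpha * (i == j) + H[i][j] for j in range(n)] for i in range(n)]  # alpha*I + H
    aS = [[alpha * (i == j) + S[i][j] for j in range(n)] for i in range(n)]  # alpha*I + S
    x = [0.0] * n
    for _ in range(outer):
        c = [abs(t) for t in x]            # |x^(k)| is frozen during the inner sweeps
        xi = x[:]
        for _ in range(inner):             # l_k HSS sweeps on A x = |x^(k)| + b
            # (alpha*I + H) x' = (alpha*I - S) x + |x^(k)| + b
            r = [sum((alpha * (i == j) - S[i][j]) * xi[j] for j in range(n)) + c[i] + b[i]
                 for i in range(n)]
            xh = solve(aH, r)
            # (alpha*I + S) x'' = (alpha*I - H) x' + |x^(k)| + b
            r = [sum((alpha * (i == j) - H[i][j]) * xh[j] for j in range(n)) + c[i] + b[i]
                 for i in range(n)]
            xi = solve(aS, r)
        x = xi
        res = [sum(A[i][j] * x[j] for j in range(n)) - abs(x[i]) - b[i] for i in range(n)]
        if math.sqrt(sum(t * t for t in res)) < tol:   # residual of Ax - |x| - b
            break
    return x

A = [[4.0, -1.0], [1.0, 4.0]]   # nonsymmetric positive definite, ||A^{-1}|| = 1/sqrt(17) < 1
b = [4.0, -4.0]                 # chosen so that x* = (1, -1) solves Ax - |x| = b
x = picard_hss(A, b)
```

With these choices the iterates converge to the exact solution \((1,-1)^{T}\), in line with Theorem 2.1, since here \(v=\|A^{-1}\|=1/\sqrt{17}<1\).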

### Theorem 2.1

([18])

*Let* \(A\in \mathbf{R}^{n\times n}\) *be a positive definite matrix*. *If* \(v=\|A^{-1}\|<1\), *then the AVE* (1.1) *has a unique solution* \(x^{*}\), *and for any initial guess* \(x^{(0)}\in \mathbf{R}^{n}\) *and any sequence of positive integers* \(l_{k}\), \(k=0,1,2,\ldots \), *the iteration sequence* \(\{ x^{(k)} \} \) *generated by the Picard*-*HSS iteration method converges to* \(x^{*}\), *provided that* \(\widetilde{l}=\liminf_{k\to +\infty }l_{k}\geq N\), *where* *N* *is a natural number satisfying*

$$ \bigl\Vert T(\alpha )^{s} \bigr\Vert < \frac{1-v}{1+v}, \quad \forall s\geq N. \tag{2.3} $$

## 3 The Picard-HSS-SOR iteration method

In this section, we introduce the Picard-HSS-SOR iteration method and prove the convergence of the proposed method.

Recently, Ke and Ma presented the SOR-like method for solving (1.1) in [13]. Letting \(y=|x|\), the AVE in (1.1) is equivalent to

$$ \textstyle\begin{cases} Ax-y=b, \\ -\vert x \vert +y=0, \end{cases} \tag{3.1} $$

that is,

$$ \begin{pmatrix} A & -I_{n} \\ -D(x) & I_{n} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}, \tag{3.2} $$

where \(D(x):=\operatorname{diag}(\operatorname{sign}(x))\), \(x\in \mathbf{R}^{n}\), so that \(|x|=D(x)x\).
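
The identity \(|x|=D(x)x\) with \(D(x)=\operatorname{diag}(\operatorname{sign}(x))\), which underlies the block form above, is immediate to verify componentwise; a tiny Python sketch with an arbitrary illustrative vector:

```python
def sign(t):
    # componentwise sign: -1, 0 or 1
    return (t > 0) - (t < 0)

x = [3.0, -2.0, 0.0, 5.0]
Dx = [sign(t) for t in x]        # diagonal entries of D(x)

# D(x) x = |x| holds componentwise
assert [d * t for d, t in zip(Dx, x)] == [abs(t) for t in x]
```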

Based on Eq. (3.2), we present the Picard-HSS-SOR iteration method for solving AVE (3.1) as follows.

### Algorithm 3.1

(The Picard-HSS-SOR iteration method)

*Let* \(A\in \mathbf{R}^{n\times n}\) *be a positive definite matrix with* \(H=\frac{1}{2} (A+A^{T} )\) *and* \(S=\frac{1}{2} (A-A^{T} )\) *being the Hermitian and skew*-*Hermitian parts of the matrix* *A*, *respectively*. *Given initial guesses* \(x^{(0)}\in \mathbf{R}^{n}\) *and* \(y^{(0)}\in \mathbf{R}^{n}\), *compute* \(\{ (x^{(k+1)}, y^{(k+1)} ) \} \) *for* \(k = 0, 1, 2, \ldots \), *using the following iteration scheme until* \(\{ (x^{(k)}, y^{(k)} ) \} \) *satisfies the stopping criterion*:

(i) *Set* \(x^{(k,0)}: = x^{(k)}\);

(ii) *for* \(l=0, 1,\ldots, l_{k}-1\), *solve the following linear systems to obtain* \(x^{(k,l+1)}\):

$$ \textstyle\begin{cases} (\alpha I+H )x^{(k,l+\frac{1}{2})}= (\alpha I-S )x^{(k,l)}+y^{(k)}+b, \\ (\alpha I+S )x^{(k,l+1)}= (\alpha I-H )x^{(k,l+\frac{1}{2})}+y^{(k)}+b; \end{cases} $$

(iii) *set*

$$ x^{(k+1)}:=x^{(k,l_{k})}, \qquad y^{(k+1)}:=(1-\tau )y^{(k)}+\tau \bigl\vert x^{(k+1)} \bigr\vert , $$

*where* \(\alpha > 0\) *and* \(0 < \tau < 2\).
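
The only new ingredient relative to Algorithm 2.1 is the SOR-like relaxation of *y* in step (iii). The sketch below isolates that outer update on a tiny illustrative problem; for brevity, the inner HSS sweeps are idealized as an exact solve of \(Ax=y^{(k)}+b\) (their limit as \(l_{k}\to \infty \)), and the matrix and the value of `tau` are our own illustrative choices.

```python
def phssr_outer(A, b, tau, outer=200, tol=1e-10):
    # Picard-HSS-SOR outer iteration on a 2x2 problem; the inner HSS
    # sweeps are idealized here as an exact solve of A x = y^(k) + b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x, y = [0.0, 0.0], [0.0, 0.0]
    for _ in range(outer):
        r0, r1 = y[0] + b[0], y[1] + b[1]
        x = [(A[1][1] * r0 - A[0][1] * r1) / det,   # x^(k+1) = A^{-1}(y^(k) + b)
             (A[0][0] * r1 - A[1][0] * r0) / det]
        # SOR-like step: y^(k+1) = (1 - tau) y^(k) + tau |x^(k+1)|
        y_new = [(1 - tau) * y[i] + tau * abs(x[i]) for i in range(2)]
        done = max(abs(y_new[i] - y[i]) for i in range(2)) < tol
        y = y_new
        if done:
            break
    return x, y

A = [[4.0, -1.0], [1.0, 4.0]]
b = [4.0, -4.0]                    # x* = (1, -1), so y* = |x*| = (1, 1)
x, y = phssr_outer(A, b, tau=1.2)  # any 0 < tau < 2
```

With \(\tau =1\) the update reduces to the plain Picard step \(y^{(k+1)}=|x^{(k+1)}|\); values \(\tau \neq 1\) relax it, which is the degree of freedom tuned in the experiments of Sect. 4.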

*Let* \((x^{*}, y^{*})\) *be the solution pair of the nonlinear equation* (3.1) *and* \((x^{(k)}, y^{(k)})\) *be produced by Algorithm* 3.1. *Define the iteration errors*

$$ e_{k}^{x}=x^{*}-x^{(k)}, \qquad e_{k}^{y}=y^{*}-y^{(k)}, \quad k=0,1,2,\ldots. $$

Next, we will prove the main result of this paper.

### Theorem 3.1

*Let* \(v=\|A^{-1}\|\), \(\beta =|1-\tau |\) *and* \(\widetilde{l}=\lim \inf_{k\to +\infty }l_{k}\geq N\), *where* *N* *is a natural number satisfying* (2.3). *If* \(0<\tau <2\) *and*

*then the inequality*

*holds for* \(k=0,1,2,\ldots \).

### Proof

The \((k+1)\)th iterate of the Picard-HSS-SOR iteration method can be written as

where

Since \((x^{*}, y^{*})\) is the solution pair of the nonlinear equation (3.1), from (3.7), we can obtain

From Lemma 2.1 and (3.9), we can obtain

From Theorem 2.1 and (3.8), we have

Therefore, from (3.10) and (3.11), we have

Let

$$ P= \begin{pmatrix} 1 & 0 \\ \tau & 1 \end{pmatrix}. $$

Then *P* is nonnegative, i.e. \(P\geq 0\).

According to Lemma 2.1, multiplying (3.12) from the left by the nonnegative matrix *P*, we can obtain

Let

thus, we get

Next, we consider the choice of the parameter *τ* such that \(\|W\|<1\), so that the inequality (3.6) holds. Since

we can obtain

and

Suppose *λ* is an eigenvalue of the matrix \(W^{T}W\) with \(\lambda \geq 0\). Then *λ* satisfies

Thus, we can obtain the following relations:

where \(\lambda _{1}\) and \(\lambda _{2}\) are eigenvalues of the matrix \(W^{T}W\).

If \(0<\tau <2\), we have \(\det (W^{T}W )<1\), that is, \(0\leq \lambda _{1}\lambda _{2}<1\).

From (3.5), we have \(\lambda _{1}+\lambda _{2}<1+\lambda _{1}\lambda _{2}\), that is, \((\lambda _{1}-1)(\lambda _{2}-1)>0\). Hence, we can obtain

Therefore \(\|W\|<1\). The proof is completed. □

## 4 Numerical results

To illustrate the implementation and efficiency of the Picard-HSS-SOR iteration method, we test it on the following problems. All experiments are performed in MATLAB R2019a on a personal computer with a 2.4 GHz central processing unit (Intel(R) Core(TM) i5-3210M) and 8 GB memory. We use the zero vector as the initial guess, and all experiments are terminated when the current iterate satisfies

or when the prescribed maximal number of iteration steps \(k_{\max }=500\) is exceeded. In addition, the stopping criterion for the inner iterations is

where \(b^{(k)}=|x^{(k)}|+b-Ax^{(k,l_{k})}\), \(s^{(k,l_{k})}=x^{(k,l_{k})}-x^{(k,l_{k}-1)}\), \(l_{k}\) is the number of inner iteration steps, and the maximum number of inner iterations is 10.

Next, we consider the two-dimensional convection–diffusion equation

where \(\Omega = (0,1)\times (0,1)\), \(\partial \Omega \) is its boundary, *q* is a positive constant that measures the magnitude of the convective terms, and *p* is a real number. We apply the five-point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms. Let \(h=1/(m+ 1)\) and \(\operatorname{Re}=(qh)/2\) denote the equidistant step size and the mesh Reynolds number, respectively. Then we get a system of linear equations \(Bx = d\), where *B* is a matrix of order \(n=m^{2}\) of the form

with

where \(t_{1} = 4\), \(t_{2}= -1-\operatorname{Re} \), \(t_{3}=-1+\operatorname{Re}\), \(I_{m}\) and \(I_{n}\) are the identity matrices of order *m* and *n*, respectively, and ⊗ denotes the Kronecker product.

For our numerical experiments, we set \(A=B+\frac{1}{2}(L-L^{T})\), where *L* is the strictly lower triangular part of *B*, and the right-hand side vector *b* of the AVE (1.1) is taken in such a way that the vector \(x=(x_{1},x_{2},\ldots,x_{n})^{T}\) with \(x_{k}=(-1)^{k}\), \(k=1,2,\ldots,n\), is the exact solution. It is easy to see that the matrix *A* is nonsymmetric positive definite.
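
Constructing *b* from a prescribed exact solution follows directly from (1.1): \(b=Ax^{*}-|x^{*}|\). A small Python sketch of the recipe, using a simple tridiagonal stand-in for the convection-diffusion matrix (only the construction of *b* matters here, not the particular matrix):

```python
def rhs_for_exact_solution(A, x_star):
    # For the AVE  Ax - |x| = b, prescribing the exact solution x*
    # fixes the right-hand side:  b = A x* - |x*|.
    n = len(A)
    return [sum(A[i][j] * x_star[j] for j in range(n)) - abs(x_star[i])
            for i in range(n)]

n = 4
x_star = [(-1) ** (k + 1) for k in range(n)]   # x_k = (-1)^k for k = 1, ..., n
# illustrative tridiagonal stand-in for the paper's test matrix
A = [[4.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
b = rhs_for_exact_solution(A, x_star)

# sanity check: the residual of x* in Ax - |x| - b is exactly zero
res = [sum(A[i][j] * x_star[j] for j in range(n)) - abs(x_star[i]) - b[i]
       for i in range(n)]
assert max(abs(t) for t in res) == 0.0
```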

The optimal parameters are often problem-dependent and generally difficult to determine. The parameters *α* and *τ* employed in each method are determined experimentally so as to yield the least number of iterations.

In Tables 1 and 2, we present the numerical results with respect to the Picard-HSS (PHSS) and the Picard-HSS-SOR (PHSSR) iterations. We give the elapsed CPU time in seconds for the convergence (denoted CPU), the norm of absolute residual vectors (denoted RES), and the number of iteration steps (denoted IT).

From Tables 1 and 2, we can see that the Picard-HSS-SOR (PHSSR) iteration method requires fewer iteration steps and less CPU time than the Picard-HSS iteration method. This shows that the PHSSR iteration method for solving absolute value equations is feasible and effective.

## 5 Conclusions

In this paper, the Picard-HSS-SOR iteration method was presented to solve the absolute value equation; it is more efficient than the Picard-HSS iteration method. We proved convergence of the Picard-HSS-SOR iteration method under certain assumptions. Finally, numerical experiments were carried out to verify the effectiveness of the proposed method.

## Availability of data and materials

Not applicable.

## References

1. Rohn, J.: A theorem of the alternatives for the equation \(Ax+B|x|=b\). Linear Multilinear Algebra **52**, 421–426 (2004)
2. Mangasarian, O.L., Meyer, R.R.: Absolute value equations. Linear Algebra Appl. **419**, 359–367 (2006)
3. Rohn, J.: An algorithm for computing all solutions of an absolute value equation. Optim. Lett. **6**, 851–856 (2012)
4. Rohn, J., Hooshyarbakhsh, V., Farhadsefat, R.: An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim. Lett. **8**, 35–44 (2014)
5. Noor, M.A., Iqbal, J., Noor, K.I.: On an iterative method for solving absolute value equations. Optim. Lett. **6**, 1027–1033 (2012)
6. Zainali, N., Lotfi, T.: On developing a stable and quadratic convergent method for solving absolute value equation. J. Comput. Appl. Math. **330**, 742–747 (2018)
7. Mangasarian, O.L.: Sufficient conditions for the unsolvability and solvability of the absolute value equation. Optim. Lett. **11**, 1–7 (2017)
8. Wu, S.L., Li, C.X.: The unique solution of the absolute value equations. Appl. Math. Lett. **76**, 195–200 (2018)
9. Haghani, F.K.: On generalized Traub's method for absolute value equations. J. Optim. Theory Appl. **166**, 619–625 (2015)
10. Li, C.X.: A modified generalized Newton method for absolute value equations. J. Optim. Theory Appl. **170**, 1055–1059 (2016)
11. Tang, J.Y., Zhou, J.C.: A quadratically convergent descent method for the absolute value equation \(Ax+B|x|=b\). Oper. Res. Lett. **47**, 229–234 (2019)
12. Guo, P., Wu, S.L., Li, C.X.: On the SOR-like iteration method for solving absolute value equations. Appl. Math. Lett. **97**, 107–113 (2019)
13. Ke, Y.F., Ma, C.F.: SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. **311**, 195–202 (2017)
14. Ke, Y.F.: The new iteration algorithm for absolute value equation. Appl. Math. Lett. **99**, 105990 (2020)
15. Zhang, J.J.: The relaxed nonlinear PHSS-like iteration method for absolute value equations. Appl. Math. Comput. **265**, 266–274 (2015)
16. Iqbal, J., Iqbal, A., Arif, M.: Levenberg–Marquardt method for solving systems of absolute value equations. J. Comput. Appl. Math. **282**, 134–138 (2015)
17. Mangasarian, O.L.: A generalized Newton method for absolute value equations. Optim. Lett. **3**, 101–108 (2009)
18. Salkuyeh, D.K.: The Picard-HSS iteration method for absolute value equations. Optim. Lett. **8**, 2191–2202 (2014)
19. Edalatpour, V., Hezari, D., Salkuyeh, D.K.: A generalization of the Gauss–Seidel iteration method for solving absolute value equations. Appl. Math. Comput. **293**, 156–167 (2017)
20. Berman, A., Plemmons, R.: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)

## Funding

This work was supported by the Natural Science Foundation of the Education Bureau of Anhui Province (KJ2020A0017, KJ2017A432).

## Author information

### Authors and Affiliations

### Contributions

The author carried out the work, and read and approved the current version of the manuscript.

### Corresponding author

## Ethics declarations

### Competing interests

The author declares that there are no competing interests.

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Zheng, L. The Picard-HSS-SOR iteration method for absolute value equations.
*J Inequal Appl* **2020**, 258 (2020). https://doi.org/10.1186/s13660-020-02525-3
