The Picard-HSS-SOR iteration method for absolute value equations

In this paper, we present the Picard-HSS-SOR iteration method for finding the solution of the absolute value equation (AVE), which is more efficient than the Picard-HSS iteration method for AVE. The convergence results of the Picard-HSS-SOR iteration method are proved under certain assumptions imposed on the involved parameter. Numerical experiments demonstrate that the Picard-HSS-SOR iteration method for solving absolute value equations is feasible and effective.


Introduction
Let A ∈ R^{n×n}, b ∈ R^n. We consider the following absolute value equation (AVE):

Ax − |x| = b,   (1.1)

where |x| denotes the componentwise absolute value of x. Recently, Salkuyeh [18] presented the Picard-HSS iteration method for solving the AVE and established its convergence theory under suitable conditions. Numerical experiments in [18] showed that the Picard-HSS iteration method is more efficient than the Picard iteration method and the generalized Newton method.
In this paper, we present a new iteration method, the Picard-HSS-SOR iteration method, for finding the solution of the absolute value equation (AVE); it is more efficient than the Picard-HSS iteration method for the AVE. The convergence results of the Picard-HSS-SOR iteration method are proved under certain assumptions imposed on the involved parameter. Numerical experiments demonstrate that the Picard-HSS-SOR iteration method for solving absolute value equations is feasible and effective.
This article is arranged as follows. In Sect. 2, we recall the Picard-HSS iteration method and some results that will be used in the following analysis. The Picard-HSS-SOR iteration method and its convergence analysis are proposed in Sect. 3. Experimental results and conclusions are given in Sects. 4 and 5, respectively.

Preliminaries
Firstly, we present some notations and auxiliary results.
The symbol I_n denotes the n × n identity matrix. ‖A‖ denotes the spectral norm defined by ‖A‖ := max{‖Ax‖ : x ∈ R^n, ‖x‖ = 1}, where ‖x‖ is the 2-norm. For x ∈ R^n, sign(x) denotes a vector with components equal to −1, 0 or 1, depending on whether the corresponding component of x is negative, zero or positive. In addition, diag(sign(x)) is a diagonal matrix whose diagonal entries are the components of sign(x). A matrix A = (a_ij) ∈ R^{m×n} is said to be nonnegative (positive) if its entries satisfy a_ij ≥ 0 (a_ij > 0) for all 1 ≤ i ≤ m and 1 ≤ j ≤ n.

Proposition 2.1 ([2]) Suppose that A ∈ R^{n×n} is invertible. If ‖A^{−1}‖ < 1, then the AVE in (1.1) has a unique solution for any b ∈ R^n.

Lemma 2.1 ([20]) For any vectors x = (x_1, x_2, . . . , x_n)^T ∈ R^n and y = (y_1, y_2, . . . , y_n)^T ∈ R^n, the following results hold: (a) ‖|x| − |y|‖ ≤ ‖x − y‖; (b) |x| = D(x)x, where D(x) := diag(sign(x)).

The Picard-HSS iteration method ([18]) Let A = H + S, where H = (A + A^T)/2 and S = (A − A^T)/2 are the symmetric and skew-symmetric parts of A, respectively. Given an initial guess x^(0) ∈ R^n and a sequence {l_k}_{k=0}^∞ of positive integers, compute x^(k+1) for k = 0, 1, 2, . . . , using the following iteration scheme until {x^(k)} satisfies the prescribed stopping criterion:
(a) Set x^(k,0) := x^(k);
(b) for l = 0, 1, . . . , l_k − 1, solve the following linear systems to obtain x^(k,l+1):

(αI + H) x^(k,l+1/2) = (αI − S) x^(k,l) + |x^(k)| + b,
(αI + S) x^(k,l+1) = (αI − H) x^(k,l+1/2) + |x^(k)| + b,

where α is a given positive constant;
(c) set x^(k+1) := x^(k,l_k).

The (k + 1)th iterate of the Picard-HSS method can be written as

x^(k+1) = T(α)^{l_k} x^(k) + Σ_{j=0}^{l_k−1} T(α)^j G(α)(|x^(k)| + b),

where T(α) := (αI + S)^{−1}(αI − H)(αI + H)^{−1}(αI − S) and G(α) := 2α(αI + S)^{−1}(αI + H)^{−1}.

Theorem 2.1 ([18]) Suppose that A ∈ R^{n×n} is positive definite and ν := ‖A^{−1}‖ < 1. Then the AVE (1.1) has a unique solution x*, and for any initial guess x^(0) ∈ R^n and any sequence of positive integers l_k, k = 0, 1, 2, . . . , the iteration sequence {x^(k)} generated by the Picard-HSS iteration method converges to x* provided that l = lim inf_{k→+∞} l_k ≥ N, where N is a natural number satisfying ‖T(α)^s‖ < (1 − ν)/(1 + ν) for all s ≥ N.
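The Picard-HSS scheme recalled above can be sketched in Python/NumPy as follows. This is a minimal dense illustration, not the paper's implementation: the parameter defaults are illustrative, and `np.linalg.solve` stands in for the shifted-system factorizations one would compute once and reuse in practice.

```python
import numpy as np

def picard_hss(A, b, alpha=1.0, inner_steps=10, tol=1e-8, max_iter=500):
    """Sketch of the Picard-HSS iteration for Ax - |x| = b.

    Each outer (Picard) step freezes |x^(k)| on the right-hand side and
    solves A x = |x^(k)| + b approximately by `inner_steps` HSS iterations
    based on the splitting A = H + S.
    """
    n = A.shape[0]
    x = np.zeros(n)
    H = (A + A.T) / 2          # symmetric part
    S = (A - A.T) / 2          # skew-symmetric part
    I = np.eye(n)
    M1, N1 = alpha * I + H, alpha * I - S
    M2, N2 = alpha * I + S, alpha * I - H
    for k in range(max_iter):
        rhs = np.abs(x) + b    # frozen Picard right-hand side
        y = x.copy()
        for _ in range(inner_steps):
            half = np.linalg.solve(M1, N1 @ y + rhs)  # first HSS half-step
            y = np.linalg.solve(M2, N2 @ half + rhs)  # second HSS half-step
        x = y
        # stop once the AVE residual is small relative to ||b||
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```

For a well-conditioned test matrix with ‖A^{−1}‖ < 1 (so Proposition 2.1 guarantees a unique solution), the iteration converges to the constructed exact solution.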

The Picard-HSS-SOR iteration method
In this section, we introduce the Picard-HSS-SOR iteration method and prove the convergence of the proposed method.
Recently, Ke et al. presented the SOR-like method for solving (1.1) in [13]. Let y = |x|; then the AVE in (1.1) is equivalent to the nonlinear system

Ax − y = b,   −|x| + y = 0,   (3.1)

that is,

[ A    −I_n ] [ x ]   [ b ]
[ −D(x)  I_n ] [ y ] = [ 0 ],   (3.2)

where D(x) := diag(sign(x)), x ∈ R^n. Based on Eq. (3.2), we present the Picard-HSS-SOR iteration method for solving the AVE (3.1) as follows.

Algorithm 3.1 (The Picard-HSS-SOR iteration method) Given initial guesses x^(0), y^(0) ∈ R^n, a relaxation parameter τ > 0 and a sequence {l_k}_{k=0}^∞ of positive integers, for k = 0, 1, 2, . . . until convergence:
(a) approximate the solution of Ax = y^(k) + b by l_k HSS inner steps (as in the Picard-HSS method) started from x^(k), yielding x̃^(k);
(b) set x^(k+1) := (1 − τ)x^(k) + τ x̃^(k) and y^(k+1) := (1 − τ)y^(k) + τ |x^(k+1)|.
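One plausible realization of this combination can be sketched in Python/NumPy. This is a hedged illustration, assuming the SOR-like relaxation of the pair (x, y) with inexact HSS inner solves; the parameter defaults (`tau`, `alpha`, `inner_steps`) are illustrative choices, not values from the text.

```python
import numpy as np

def picard_hss_sor(A, b, tau=0.9, alpha=1.0, inner_steps=10,
                   tol=1e-8, max_iter=500):
    """Illustrative sketch of a Picard-HSS-SOR step for Ax - |x| = b.

    Outer SOR-like updates with relaxation parameter tau, in which the
    linear system A z = y^(k) + b is solved inexactly by a few HSS
    inner iterations (warm-started at the current x).
    """
    n = A.shape[0]
    x, y = np.zeros(n), np.zeros(n)
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    I = np.eye(n)
    M1, N1 = alpha * I + H, alpha * I - S
    M2, N2 = alpha * I + S, alpha * I - H
    for k in range(max_iter):
        rhs = y + b
        z = x.copy()
        for _ in range(inner_steps):            # HSS inner solve of A z = y + b
            half = np.linalg.solve(M1, N1 @ z + rhs)
            z = np.linalg.solve(M2, N2 @ half + rhs)
        x = (1 - tau) * x + tau * z             # SOR-like relaxation of x
        y = (1 - tau) * y + tau * np.abs(x)     # SOR-like relaxation of y = |x|
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```

With τ = 1 and exact inner solves this reduces to the Picard iteration; the relaxation parameter τ gives the extra degree of freedom exploited in the convergence analysis below.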
Let (x*, y*) be the solution pair of the nonlinear equation (3.1) and let (x^(k), y^(k)) be produced by Algorithm 3.1. Define the iteration errors e_x^(k) := x* − x^(k) and e_y^(k) := y* − y^(k). Next, we prove the main result of this paper.

Proof The (k + 1)th iterate of the Picard-HSS-SOR iteration method can be written as an error recurrence (e_x^(k+1); e_y^(k+1)) = W (e_x^(k); e_y^(k)), where W denotes the corresponding iteration matrix.
Therefore ‖W‖ < 1, so the iteration errors converge to zero. The proof is completed.

Numerical results
To illustrate the implementation and efficiency of the Picard-HSS-SOR iteration method, we consider the following test problems. All tests are performed in MATLAB R2019a on a personal computer with a 2.4 GHz central processing unit (Intel(R) Core(TM) i5-3210M) and 8 GB of memory. We use the zero vector as the initial guess, and all experiments are terminated once the residual stopping criterion is satisfied or the prescribed maximum number of iteration steps k_max = 500 is exceeded. In addition, an analogous stopping criterion is used for the inner iterations, where l_k is the number of inner iteration steps, with a maximum of 10 inner iterations.

Next, we consider the two-dimensional convection-diffusion equation

−(u_xx + u_yy) + q(u_x + u_y) + p u = f(x, y),  (x, y) ∈ Ω,
u(x, y) = 0,  (x, y) ∈ ∂Ω,

where Ω = (0, 1) × (0, 1), ∂Ω is its boundary, q is a positive constant used to measure the magnitude of the convective term, and p is a real number. We apply the five-point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms. Let h = 1/(m + 1) and Re = (qh)/2 denote the equidistant step size and the mesh Reynolds number, respectively. Then we obtain a system of linear equations Bx = d, where B is a matrix of order n = m^2 of the form

B = T_x ⊗ I_m + I_m ⊗ T_y,

with T_x = tridiag(t_2, t_1, t_3)_{m×m} and T_y = tridiag(t_2, 0, t_3)_{m×m}, where t_1 = 4, t_2 = −1 − Re, t_3 = −1 + Re. Here I_m and I_n are the identity matrices of order m and n, respectively, and ⊗ denotes the Kronecker product. For our numerical experiments, we set A = B + (1/2)(L − L^T), where L is the strictly lower triangular part of B, and the right-hand side vector b of the AVE (1.1) is taken in such a way that the vector x = (x_1, x_2, . . . , x_n)^T with x_k = (−1)^k, k = 1, 2, . . . , n, is the exact solution. It is easy to see that the matrix A is non-symmetric positive definite.
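The construction of this test problem can be sketched in Python/NumPy as follows. This is a sketch under stated assumptions: the Kronecker-sum form B = T_x ⊗ I_m + I_m ⊗ T_y is the standard one for this discretization, and the p-term of the equation is omitted here (i.e. p = 0), consistent with t_1 = 4.

```python
import numpy as np

def build_ave_problem(m, q):
    """Sketch of the convection-diffusion test problem (assumed forms).

    Builds B = T_x kron I_m + I_m kron T_y from the given tridiagonal
    blocks, sets A = B + (L - L^T)/2 with L the strictly lower part of B,
    and chooses b so that x_k = (-1)^k is the exact solution of
    A x - |x| = b.
    """
    h = 1.0 / (m + 1)
    Re = q * h / 2                      # mesh Reynolds number
    t1, t2, t3 = 4.0, -1.0 - Re, -1.0 + Re
    Im = np.eye(m)
    # tridiag(t2, t1, t3): t2 on the sub-, t1 on the main, t3 on the superdiagonal
    Tx = (np.diag(np.full(m, t1)) + np.diag(np.full(m - 1, t2), -1)
          + np.diag(np.full(m - 1, t3), 1))
    Ty = np.diag(np.full(m - 1, t2), -1) + np.diag(np.full(m - 1, t3), 1)
    B = np.kron(Tx, Im) + np.kron(Im, Ty)
    L = np.tril(B, -1)                  # strictly lower triangular part of B
    A = B + (L - L.T) / 2               # non-symmetric positive definite
    n = m * m
    x_exact = np.array([(-1.0) ** k for k in range(1, n + 1)])
    b = A @ x_exact - np.abs(x_exact)   # right-hand side for exact solution
    return A, b, x_exact
```

By construction the exact solution has zero AVE residual, and the symmetric part of A equals the symmetric part of B, which is positive definite.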
The computation of the optimal parameters is often problem-dependent and generally difficult. The optimal parameters α and τ employed in each method are determined experimentally so as to yield the smallest number of iterations.
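Such an experimental parameter search can be sketched as follows. This is purely illustrative: a small 2×2 AVE with exact inner solves, scanning τ over a coarse grid; the helper `sor_like_iters` and the grid bounds are hypothetical choices, not from the experiments above.

```python
import numpy as np

def sor_like_iters(A, b, tau, tol=1e-10, max_iter=500):
    """Count iterations of the SOR-like scheme for a given tau
    (exact inner solves; used only to illustrate the parameter sweep)."""
    n = len(b)
    x, y = np.zeros(n), np.zeros(n)
    for k in range(max_iter):
        x = (1 - tau) * x + tau * np.linalg.solve(A, y + b)
        y = (1 - tau) * y + tau * np.abs(x)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol:
            return k + 1
    return max_iter  # did not converge within the cap

# small test AVE with known exact solution
A = np.array([[4.0, -1.0], [1.0, 4.0]])
x_star = np.array([1.0, -1.0])
b = A @ x_star - np.abs(x_star)

# pick the tau with the fewest iterations on a coarse grid
best_tau = min(np.arange(0.5, 1.51, 0.1), key=lambda t: sor_like_iters(A, b, t))
```

Diverging values of τ are harmlessly penalized by the iteration cap, so the grid search always returns the experimentally best candidate.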
In Tables 1 and 2, we present the numerical results for the Picard-HSS (PHSS) and the Picard-HSS-SOR (PHSSR) iteration methods. We report the elapsed CPU time in seconds (denoted CPU), the norm of the absolute residual vectors (denoted RES), and the number of iteration steps (denoted IT).
From Tables 1 and 2, we can see that the Picard-HSS-SOR (PHSSR) iteration method requires fewer iteration steps and less CPU time than the Picard-HSS iteration method. This shows that the PHSSR iteration method for solving absolute value equations is feasible and effective.