Open Access

On the modified Hermitian and skew-Hermitian splitting iteration methods for a class of weakly absolute value equations

Journal of Inequalities and Applications 2016, 2016:260

DOI: 10.1186/s13660-016-1202-1

Received: 12 January 2016

Accepted: 11 October 2016

Published: 21 October 2016

Abstract

In this paper, based on the modified Hermitian and skew-Hermitian splitting (MHSS) iteration method, the nonlinear MHSS-like iteration method is presented to solve a class of weakly absolute value equations (AVE). By using a smoothing approximate function, the convergence properties of the nonlinear MHSS-like iteration method are established. Numerical experiments are reported to illustrate the feasibility, robustness, and effectiveness of the proposed method.

Keywords

absolute value equation; MHSS iteration; convergence; smoothing approximate function

MSC

90C05; 90C30; 65F10

1 Introduction

Consider the following weakly absolute value equations (AVE):
$$ Ax-\vert x\vert =b, $$
(1.1)
where \(b \in\mathbb{R}^{n}\), \(\vert \cdot \vert \) denotes the componentwise absolute value, \(A=W+iT\) where \(W\in\mathbb{R}^{n\times n}\) is a symmetric positive definite matrix and \(T\in\mathbb{R}^{n\times n}\) is a symmetric positive semidefinite matrix, and \(i=\sqrt{-1}\) denotes the imaginary unit. A general form of the AVE (1.1),
$$ Ax+B\vert x\vert =b, $$
(1.2)
was first introduced in [1] and investigated in [2]. The AVE (1.1) is an important class of nonlinear systems, and it often arises because linear programs, quadratic programs, and bimatrix games can all be reduced to a linear complementarity problem (LCP) [3–5]. This fact implies that the AVE (1.1) is equivalent to the LCP [5] and turns out to be NP-hard; see [2].

In recent years, some efficient methods have been proposed to solve the AVE (1.1), such as the smoothing Newton method [6], the generalized Newton method [7–11], and the sign accord method [12]. For other iteration methods, see [13–15] for more details.

When the involved matrix A in (1.1) is a non-Hermitian positive definite matrix, the Picard-HSS iteration method, based on the Hermitian and skew-Hermitian splitting (HSS) iteration method [16], has been proposed in [17] for solving the AVE (1.1). Numerical results show that the Picard-HSS iteration method outperforms the Picard and generalized Newton methods under certain conditions. Although the Picard-HSS iteration method is efficient and competitive, the numbers of inner HSS iteration steps are often problem-dependent and difficult to determine in actual computations. To overcome this disadvantage and improve the convergence of the Picard-HSS iteration method, the nonlinear HSS-like iteration method was presented in [18] and its convergence conditions were established. Numerical experiments demonstrate that the nonlinear HSS-like iteration method is feasible and robust.

When the involved matrix A in (1.1) is \(A = W + iT\), the convergence rate of the aforementioned Picard-HSS and nonlinear HSS-like methods may deteriorate. The reason is that each step of the Picard-HSS and HSS-like iterations needs to solve two linear subsystems, one with the symmetric positive definite matrix \(\alpha I +W\) and one with the shifted skew-Hermitian matrix \(\alpha I + iT\). It is well known that the solution of the linear system with the coefficient matrix \(\alpha I + iT\) is not easy to obtain [19]. To overcome this defect, based on the MHSS iteration method [20], we will establish the nonlinear MHSS-like iteration method to solve the AVE (1.1). Compared with the nonlinear HSS-like iteration method, the potential advantage of the nonlinear MHSS-like iteration method is that only two linear subsystems with coefficient matrices \(\alpha I +W\) and \(\alpha I +T\), both real and symmetric positive definite, need to be solved at each step. Thus the nonlinear MHSS-like iteration method avoids the shifted skew-Hermitian linear subsystem with coefficient matrix \(\alpha I + iT\). Therefore, in this case, these two linear subsystems can be solved either exactly by a sparse Cholesky factorization or inexactly by the conjugate gradient method. The convergence conditions of the nonlinear MHSS-like iteration method are obtained by using a smoothing approximate function.

The remainder of the paper is organized as follows. In Section 2, the MHSS iteration method is briefly reviewed. The nonlinear MHSS-like iteration method is discussed in Section 3. Numerical experiments are reported in Section 4. Finally, in Section 5 we draw some conclusions.

2 The MHSS iteration method

To establish the nonlinear MHSS-like iteration method for solving the AVE (1.1), a brief review of MHSS iteration is needed.

When B is the zero matrix, the generalized AVE (1.2) reduces to the system of linear equations
$$ Ax=b, $$
(2.1)
where \(A=W+iT\) with symmetric positive definite matrix \(W\in\mathbb{R}^{n\times n}\) and symmetric positive semidefinite matrix \(T\in\mathbb{R}^{n\times n}\). In fact, the matrices W and iT are the Hermitian and skew-Hermitian parts of the complex symmetric matrix A, respectively.
Obviously, the linear system (2.1) can be rewritten in the following two equivalent forms:
$$\begin{aligned}& (\alpha I + W)x = (\alpha I -i T )x + b, \end{aligned}$$
(2.2)
$$\begin{aligned}& (\alpha I + T)x = (\alpha I +i W )x- ib. \end{aligned}$$
(2.3)
Combining (2.2) with (2.3), Bai et al. [20] designed the following modified HSS (MHSS) method to solve the complex symmetric linear system (2.1).
The MHSS method. Let \(x^{(0)}\in\mathbb{C}^{n}\) be an arbitrary initial point. For \(k=0,1,2,\ldots\) , compute the next iterate \(x^{(k+1)}\) according to the following procedure:
$$ \left\{ \textstyle\begin{array}{l} (\alpha I+ W)x^{(k+\frac{1}{2})}=(\alpha I-iT)x^{(k)}+b, \\ (\alpha I+T)x^{(k+1)}=(\alpha I+iW)x^{(k+\frac{1}{2})}-ib, \end{array}\displaystyle \right.$$
(2.4)
where α is a given positive constant and I is the identity matrix.
In matrix-vector form, the MHSS iteration (2.4) can be equivalently rewritten as
$$x^{(k+1)}=M_{\alpha}x^{(k)}+G_{\alpha}b=M_{\alpha}^{k+1}x^{(0)}+ \sum_{j=0}^{k}M_{\alpha}^{j}G_{\alpha}b,\quad k=0,1,\ldots, $$
where
$$\begin{aligned}& M_{\alpha} =(\alpha I+T)^{-1}(\alpha I+iW) (\alpha I+W)^{-1}(\alpha I-iT), \\& G_{\alpha}=(1-i)\alpha(\alpha I+T)^{-1}(\alpha I+ W)^{-1}. \end{aligned}$$
Here, \(M_{\alpha}\) is the iteration matrix of the MHSS method.

Theoretical analysis in [20] shows that the MHSS method converges unconditionally to the unique solution of the complex symmetric linear system (2.1) when \(W\in\mathbb{R}^{n\times n}\) is symmetric positive definite and \(T\in\mathbb{R}^{n\times n}\) is symmetric positive semidefinite.
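For illustration, the two half-steps of (2.4) can be sketched as follows. This is a minimal dense NumPy sketch under our own naming; a practical implementation would factor the two real symmetric positive definite coefficient matrices once (e.g. by Cholesky) instead of calling a dense solver in every sweep.

```python
import numpy as np

def mhss(W, T, b, alpha, tol=1e-10, maxit=1000):
    """MHSS iteration (2.4) for (W + iT) x = b.

    W: real symmetric positive definite, T: real symmetric
    positive semidefinite, alpha: positive iteration parameter.
    """
    n = W.shape[0]
    I = np.eye(n)
    A = W + 1j * T
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        # first half-step: (alpha I + W) x_half = (alpha I - i T) x + b
        x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
        # second half-step: (alpha I + T) x_new = (alpha I + i W) x_half - i b
        x = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```

Note that both coefficient matrices, \(\alpha I+W\) and \(\alpha I+T\), are real, which is precisely the structural advantage of MHSS over HSS for \(A=W+iT\).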

3 The nonlinear MHSS-like iteration method

In this section, we will establish the nonlinear MHSS-like iteration method for solving the AVE (1.1). Using the techniques in [21], the AVE (1.1) can be rewritten as the systems of nonlinear fixed-point equations
$$\begin{aligned}& (\alpha I + W)x = (\alpha I -i T )x +\vert x\vert + b, \end{aligned}$$
(3.1)
$$\begin{aligned}& (\alpha I + T)x = (\alpha I +i W )x-i\vert x\vert - ib. \end{aligned}$$
(3.2)
We now establish the nonlinear MHSS-like iteration method for solving the AVE (1.1) by alternately iterating between the two systems of nonlinear fixed-point equations (3.1) and (3.2).
The nonlinear MHSS-like method. Let \(x^{(0)}\in \mathbb{C}^{n}\) be an arbitrary initial point. For \(k=0,1,2,\ldots\) , compute the next iterate \(x^{(k+1)}\) according to the following procedure:
$$ \left\{ \textstyle\begin{array}{l} (\alpha I+ W)x^{(k+\frac{1}{2})}=(\alpha I-iT)x^{(k)}+\vert x^{(k)}\vert +b, \\ (\alpha I+T)x^{(k+1)}=(\alpha I+iW)x^{(k+\frac{1}{2})}-i\vert x^{(k+\frac{1}{2})}\vert -ib, \end{array}\displaystyle \right.$$
(3.3)
where α is a given positive constant and I is the identity matrix.

Evidently, each step of the nonlinear MHSS-like iteration alternates between the two real symmetric positive definite matrices \(\alpha I+W\) and \(\alpha I+T\). Hence, the two linear subsystems involved in each step of the nonlinear MHSS-like iteration can be solved exactly and effectively by the Cholesky factorization. In contrast, in the Picard-HSS and nonlinear HSS-like methods, a shifted skew-Hermitian linear subsystem with coefficient matrix \(\alpha I+iT\) needs to be solved at every iterative step.
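The two half-steps of (3.3) can be sketched directly (a dense sketch with our own function name; `np.abs` gives the componentwise modulus, matching the meaning of \(\vert \cdot \vert \) for complex iterates, and the stopping rule anticipates the residual criterion used in the experiments of Section 4):

```python
import numpy as np

def nonlinear_mhss_like(W, T, b, alpha, tol=1e-6, maxit=500):
    """Nonlinear MHSS-like iteration (3.3) for Ax - |x| = b, A = W + iT."""
    n = W.shape[0]
    I = np.eye(n)
    A = W + 1j * T
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        # (alpha I + W) x_half = (alpha I - i T) x + |x| + b
        x_half = np.linalg.solve(alpha * I + W,
                                 (alpha * I - 1j * T) @ x + np.abs(x) + b)
        # (alpha I + T) x_new = (alpha I + i W) x_half - i |x_half| - i b
        x = np.linalg.solve(alpha * I + T,
                            (alpha * I + 1j * W) @ x_half
                            - 1j * np.abs(x_half) - 1j * b)
        # stopping rule: || b + |x| - A x || / ||b|| <= tol
        if np.linalg.norm(b + np.abs(x) - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```

As in the linear MHSS sketch, only the two real symmetric positive definite matrices \(\alpha I+W\) and \(\alpha I+T\) appear as coefficient matrices, so sparse Cholesky factorizations computed once could replace the dense solves.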

Let
$$ \left\{ \textstyle\begin{array}{l} U(x)=(\alpha I+ W)^{-1}((\alpha I-iT)x+\vert x\vert +b), \\ V(x)=(\alpha I+T)^{-1}((\alpha I+iW)x-i\vert x\vert -ib), \end{array}\displaystyle \right.$$
(3.4)
and
$$ \psi(x)=V\circ U(x)=V\bigl(U(x)\bigr). $$
(3.5)
Then, from (3.4) and (3.5), the nonlinear MHSS-like iteration scheme can be equivalently rewritten as
$$ x^{(k+1)}=\psi\bigl(x^{(k)}\bigr), \quad k=0,1,2,\ldots. $$
(3.6)

Definition 3.1

[22]

Let \(G:\mathbb{D}\subset \mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\). Then \(x^{\ast}\) is a point of attraction of the iteration
$$x^{k+1}=Gx^{k}, \quad k=0,1,2,\ldots, $$
if there is an open neighborhood S of \(x^{\ast}\) such that \(S\subset \mathbb{D}\) and, for any \(x^{0}\in S\), the iterates \(\{x^{k}\}\) all lie in \(\mathbb{D}\) and converge to \(x^{\ast}\).

Lemma 3.1

Ostrowski theorem [22]

Suppose that \(G:\mathbb{D}\subset \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) has a fixed point \(x^{\ast}\in \operatorname {int}(\mathbb{D})\) and is F-differentiable at \(x^{\ast}\). If the spectral radius of \(G'(x^{\ast})\) is less than 1, then \(x^{\ast}\) is a point of attraction of the iteration \(x^{k+1}=Gx^{k}\), \(k=0,1,2,\ldots\) .

We note that we cannot directly use the Ostrowski theorem (Theorem 10.1.3 in [22]) to give a local convergence theory for the iteration (3.6). The reason is that the nonlinear term \(\vert x\vert \) is non-differentiable. To overcome this defect, based on the smoothing approximate function introduced in [6], we can establish the following local convergence theory for the nonlinear MHSS-like iteration method.

Define \(\phi:\mathbb{D}\subset\mathbb{C}^{n}\rightarrow\mathbb{C}^{n}\) by
$$\phi(x)=\sqrt{x^{2}+\epsilon^{2}}=\Bigl(\sqrt {x_{1}^{2}+\epsilon^{2}},\ldots ,\sqrt {x_{n}^{2}+\epsilon^{2}} \Bigr)^{T}, \quad \epsilon>0, x\in\mathbb{D}. $$
It is easy to see that \(\phi(x)\) is a smoothing function of \(\vert x\vert \). Based on the results in [6], the following properties of \(\phi(x)\) can easily be given.

Lemma 3.2

\(\phi(x)\) is a uniformly smoothing approximation function of \(\vert x\vert \), i.e.,
$$\bigl\Vert \phi(x)-\vert x\vert \bigr\Vert \leq\sqrt{n}\epsilon. $$

Lemma 3.3

For any \(\epsilon>0\), the Jacobian of \(\phi(x)\) at \(x\in\mathbb{C}^{n}\) is
$$ D=\phi'(x)=\operatorname {diag}\biggl(\frac{x_{i}}{\sqrt{x_{i}^{2}+\epsilon ^{2}}},i=1,2, \ldots,n \biggr). $$
(3.7)
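Both Lemma 3.2 and the Jacobian formula (3.7) are immediate to verify numerically for real arguments (a small sketch; the function names are ours). Since \(0\leq\sqrt{x_{i}^{2}+\epsilon^{2}}-\vert x_{i}\vert \leq\epsilon\) in every component, the 2-norm bound \(\sqrt{n}\epsilon\) follows, and every diagonal entry of (3.7) lies strictly inside \((-1,1)\).

```python
import numpy as np

def phi(x, eps):
    # componentwise smoothing of |x|: phi_i(x) = sqrt(x_i^2 + eps^2)
    return np.sqrt(x ** 2 + eps ** 2)

def phi_jacobian(x, eps):
    # diagonal Jacobian (3.7): d phi_i / d x_i = x_i / sqrt(x_i^2 + eps^2)
    return np.diag(x / np.sqrt(x ** 2 + eps ** 2))
```

In particular \(\Vert \phi'(x)\Vert _{2}<1\) for every real x, which is the first estimate used in (3.12) below.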
Using the smoothing approximate function \(\phi(x)\) instead of \(\vert x\vert \) in (3.4), we have
$$ \left\{ \textstyle\begin{array}{l} \hat{U}(x)=(\alpha I+ W)^{-1}((\alpha I-iT)x+\phi(x)+b), \\ \hat{V}(x)=(\alpha I+T)^{-1}((\alpha I+iW)x-i\phi(x)-ib), \end{array}\displaystyle \right.$$
(3.8)
and
$$ \hat{\psi}(x)=\hat{V}\circ\hat{U}(x)=\hat{V}\bigl(\hat{U}(x) \bigr). $$
(3.9)
Then, from (3.8) and (3.9), the smoothing nonlinear MHSS-like iteration scheme can be equivalently rewritten as
$$ \hat{x}^{(k+1)}=\hat{\psi}\bigl(x^{(k)}\bigr),\quad k=0,1,2,\ldots. $$
(3.10)

Theorem 3.1

If the AVE (1.1) is weakly absolute, then the AVE (1.1) has a unique solution.

Proof

In fact, if the AVE (1.1) is weakly absolute, then the linear term Ax is strongly dominant over the nonlinear term \(\vert x\vert \) in a certain consistent norm [21]. By the equivalence of norms, we may take the 2-norm and have
$$\Vert Ax\Vert _{2}>\bigl\Vert \vert x\vert \bigr\Vert _{2}\Leftrightarrow x^{H}A^{H}Ax>x^{H}x \Leftrightarrow x^{H}\bigl(A^{H}A-I\bigr)x>0. $$
This shows that, when the AVE (1.1) is weakly absolute, the matrix \(A^{H}A-I\) is Hermitian positive definite. Based on Theorem 4.1 in [23], the result of Theorem 3.1 holds. □
To obtain the convergence conditions of the nonlinear MHSS-like method, we define a matrix norm \(\Vert X\Vert =\Vert (\alpha I +T)X(\alpha I +T)^{-1}\Vert _{2}\) for any \(X\in\mathbb{C}^{n\times n}\). Let \(\theta(\alpha)= \Vert M_{\alpha} \Vert \). For \(\alpha>0\), we have
$$\begin{aligned} \theta(\alpha)&= \Vert M_{\alpha} \Vert =\bigl\Vert (\alpha I+iW) ( \alpha I+W)^{-1}(\alpha I-iT) (\alpha I+T)^{-1}\bigr\Vert _{2} \\ &\leq\bigl\Vert (\alpha I+iW) (\alpha I+W)^{-1} \bigr\Vert _{2}\bigl\Vert (\alpha I-iT) (\alpha I+T)^{-1}\bigr\Vert _{2} \\ &\leq\bigl\Vert (\alpha I+iW) (\alpha I+W)^{-1}\bigr\Vert _{2}< 1; \end{aligned}$$
see [20] for details.

Theorem 3.2

Let \(\phi(x)=\sqrt{x^{2}+\epsilon^{2}}\) (\(\epsilon>0\)) be F-differentiable at a point \(x^{\ast}\in\mathbb{D}\) with \(Ax^{\ast}=\phi(x^{\ast})+b\), and \(A=W+iT\) where W is symmetric positive definite and T is symmetric positive semidefinite. Let
$$\begin{aligned}& M_{\alpha,x^{\ast}}=(\alpha I +T)^{-1}\bigl(\alpha I +iW-i \phi'\bigl(x^{\ast}\bigr)\bigr) (\alpha I+ W)^{-1} \bigl(\alpha I-iT+\phi'\bigl(x^{\ast}\bigr)\bigr), \\& \delta=\max\bigl\{ \bigl\Vert (\alpha I +W)^{-1}\bigr\Vert _{2}, \bigl\Vert (\alpha I +T)^{-1}\bigr\Vert _{2}\bigr\} , \quad\quad \theta(\alpha)= \Vert M_{\alpha} \Vert . \end{aligned}$$
If
$$ \delta< \sqrt{2- \theta(\alpha)}-1, $$
(3.11)
then \(\rho(M_{\alpha,x^{\ast}})<1\), which implies that \(x^{\ast}\in \mathbb{D}\) is a point of attraction of the smoothing nonlinear MHSS-like iteration (3.10).

Proof

Since
$$\hat{U}'\bigl(x^{\ast}\bigr)=(\alpha I+ W)^{-1} \bigl((\alpha I-iT)+\phi'\bigl(x^{\ast}\bigr)\bigr) \quad \mbox{and} \quad \hat{V}'\bigl(x^{\ast}\bigr)=(\alpha I+T)^{-1} \bigl((\alpha I+iW)-i\phi'\bigl(x^{\ast}\bigr)\bigr), $$
it is easy to obtain \(\hat{\psi}'(x^{\ast})=M_{\alpha,x^{\ast}}\).
By direct computations, we have
$$ \begin{aligned} &\bigl\Vert \phi'\bigl(x^{\ast}\bigr)\bigr\Vert _{2}=\Vert D\Vert _{2}< 1,\quad\quad \bigl\Vert (\alpha I-iT) ( \alpha I +T)^{-1}\bigr\Vert _{2}\leq1, \\ & \bigl\Vert (\alpha I +iW) (\alpha I+W)^{-1}\bigr\Vert _{2}< 1. \end{aligned} $$
(3.12)
Then from (3.12), we have
$$\begin{aligned} \Vert M_{\alpha,x^{\ast}} \Vert &=\bigl\Vert (\alpha I +T)M_{\alpha,x^{\ast}}( \alpha I +T)^{-1}\bigr\Vert _{2} \\ &\leq\bigl\Vert (\alpha I+T)M_{\alpha}(\alpha I+T)^{-1}\bigr\Vert _{2}+\bigl\Vert (\alpha I +iW) (\alpha I+W)^{-1} \phi'\bigl(x^{\ast}\bigr) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert \phi'\bigl(x^{\ast}\bigr) (\alpha I+W)^{-1}(\alpha I-iT) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert \phi'\bigl(x^{\ast}\bigr) (\alpha I+W)^{-1}\phi'\bigl(x^{\ast}\bigr) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\leq \Vert M_{\alpha} \Vert +\bigl\Vert (\alpha I +iW) (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert \phi' \bigl(x^{\ast}\bigr) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert \phi'\bigl(x^{\ast}\bigr) (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert (\alpha I-iT) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert \phi'\bigl(x^{\ast}\bigr) (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert \phi' \bigl(x^{\ast}\bigr) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\leq \Vert M_{\alpha} \Vert +\bigl\Vert (\alpha I +iW) (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert (\alpha I-iT) (\alpha I +T)^{-1}\bigr\Vert _{2}+ \bigl\Vert (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert ( \alpha I +T)^{-1}\bigr\Vert _{2} \\ &\leq\theta(\alpha)+2\delta+\delta^{2}. \end{aligned}$$
Obviously, under the condition (3.11), we have \(\rho(M_{\alpha,x^{\ast}})\leq \Vert M_{\alpha,x^{\ast}} \Vert <1\). Based on Lemma 3.1 (the Ostrowski theorem), the result of Theorem 3.2 holds. □

Observe that the matrices W and T are strongly dominant over the matrix \(\phi'(x^{\ast})\) in a certain norm (i.e., \(\Vert W\Vert \gg \Vert \phi'(x^{\ast})\Vert \) and \(\Vert T\Vert \gg \Vert \phi'(x^{\ast})\Vert \) for a certain matrix norm). Therefore, the matrix A, with W symmetric positive definite and T symmetric positive semidefinite, typically satisfies the condition (3.11). In fact, in this case, the matrix \(M_{\alpha,x^{\ast}}\) can be approximated by the matrix \(M_{\alpha}\), and the condition (3.11) reduces to \(\rho(M_{\alpha})<1\). Clearly, \(\rho(M_{\alpha})\leq\theta(\alpha)<1\).

Theorem 3.3

Let the condition of Theorem  3.2 be satisfied. For any initial point \(x^{(0)}\in\mathbb{D}\subset\mathbb{C}^{n}\), the iteration sequence \(\{x^{(k)}\}_{k=0}^{\infty}\) generated by the nonlinear MHSS-like iteration (3.6) can be approximated by the sequence produced by its smoothed scheme (3.10), i.e.,
$$\bigl\Vert \psi\bigl(x^{(k)}\bigr)-\hat{\psi}\bigl(x^{(k)} \bigr)\bigr\Vert < \varepsilon, \quad\textit{for any } \varepsilon>0, $$
provided
$$ \epsilon< \frac{\varepsilon}{\sqrt{n}\Vert (\alpha I +T)^{-1}\Vert (2+\Vert (\alpha I+ W)^{-1}\Vert )}. $$
(3.13)

Proof

Based on (3.6) and (3.10), combining (3.12) with Lemma 3.2, we have
$$\begin{aligned} \bigl\Vert x^{(k+1)}-\hat{x}^{(k+1)}\bigr\Vert &=\bigl\Vert \psi\bigl(x^{(k)}\bigr)-\hat{\psi}\bigl(x^{(k)}\bigr)\bigr\Vert \\ &\leq\bigl\Vert (\alpha I +T)^{-1}(\alpha I +iW) (\alpha I+ W)^{-1}\bigl(\phi\bigl(x^{k}\bigr)-\bigl\vert x^{k}\bigr\vert \bigr)\bigr\Vert \\ &\quad{} +\bigl\Vert (\alpha I +T)^{-1}\bigl(\phi\bigl(\hat{U} \bigl(x^{k}\bigr)\bigr)-\bigl\vert U\bigl(x^{k}\bigr)\bigr\vert \bigr)\bigr\Vert \\ &\leq\bigl\Vert (\alpha I +T)^{-1}(\alpha I +iW) (\alpha I+ W)^{-1}\bigl(\phi\bigl(x^{k}\bigr)-\bigl\vert x^{k}\bigr\vert \bigr)\bigr\Vert \\ &\quad{} +\bigl\Vert (\alpha I +T)^{-1}\bigl(\phi\bigl(\hat{U} \bigl(x^{k}\bigr)\bigr)-\bigl\vert \hat{U}\bigl(x^{k}\bigr) \bigr\vert +\bigl\vert \hat {U}\bigl(x^{k}\bigr)\bigr\vert -\bigl\vert U\bigl(x^{k}\bigr)\bigr\vert \bigr)\bigr\Vert \\ &\leq\bigl\Vert (\alpha I +T)^{-1}(\alpha I +iW) (\alpha I+ W)^{-1}\bigr\Vert \cdot\bigl\Vert \bigl(\phi\bigl(x^{k} \bigr)-\bigl\vert x^{k}\bigr\vert \bigr)\bigr\Vert \\ &\quad{} +\bigl\Vert (\alpha I +T)^{-1}\bigr\Vert \cdot\bigl\Vert \phi \bigl(\hat{U}\bigl(x^{k}\bigr)\bigr)-\bigl\vert \hat{U} \bigl(x^{k}\bigr)\bigr\vert \bigr\Vert \\ &\quad{} +\bigl\Vert (\alpha I +T)^{-1}\bigr\Vert \cdot\bigl\Vert ( \alpha I+ W)^{-1}\bigr\Vert \cdot\bigl\Vert \phi \bigl(x^{k}\bigr)-\bigl\vert x^{k}\bigr\vert \bigr\Vert \\ &\leq\sqrt{n}\epsilon\bigl\Vert (\alpha I +T)^{-1}\bigr\Vert \bigl(2+\bigl\Vert (\alpha I+ W)^{-1}\bigr\Vert \bigr). \end{aligned}$$
For any \(\varepsilon>0\), under the condition (3.13), \(\Vert x^{(k+1)}-\hat{x}^{(k+1)}\Vert <\varepsilon\). This implies that the result of Theorem 3.3 holds. □
Under the condition of Theorem 3.2,
$$\begin{aligned} \bigl\Vert x^{(k+1)}-\hat{x}^{(k+1)}\bigr\Vert \leq&\sqrt{n} \epsilon\delta(2+\delta)\leq\sqrt{n}\epsilon. \end{aligned}$$
This implies that, if we choose ϵ such that \(\sqrt{n}\epsilon<\varepsilon\), the result of Theorem 3.3 holds as well.

Theorem 3.4

Suppose that the conditions of Theorems  3.2 and 3.3 are satisfied. Let
$$\begin{aligned}& \delta=\max\bigl\{ \bigl\Vert (\alpha I+W)^{-1}\bigr\Vert _{2},\bigl\Vert (\alpha I+T)^{-1}\bigr\Vert _{2}\bigr\} ,\quad\quad \theta(\alpha)= \Vert M_{\alpha} \Vert , \\& M_{\alpha,x^{\ast}}=(\alpha I +T)^{-1}(\alpha I +iW-iD) (\alpha I+ W)^{-1}(\alpha I-iT+D), \end{aligned}$$
where D is the Jacobian of \(\phi(x)\) at \(x^{\ast}\in \mathbb{N}\subset\mathbb{D}\subset\mathbb{C}^{n}\) defined in (3.7), and \(\mathbb{N}(x^{\ast})\) is an open neighborhood of \(x^{\ast}\). Then \(\rho(M_{\alpha,x^{\ast}})<1\). Furthermore, if
$$ \delta< \sqrt{2- \theta(\alpha)}-1, $$
(3.14)
then, for any initial point \(x^{(0)}\in\mathbb{D}\subset \mathbb{C}^{n}\), the iteration sequence \(\{x^{(k)}\}_{ k=0}^{\infty}\) generated by the nonlinear MHSS-like iteration method (3.6) converges to \(x^{\ast}\).

Proof

Since
$$\bigl\Vert x^{(k+1)}-x^{\ast}\bigr\Vert \leq\bigl\Vert x^{(k+1)}-\hat{x}^{(k+1)}\bigr\Vert +\bigl\Vert \hat {x}^{(k+1)}-x^{\ast}\bigr\Vert =\bigl\Vert \psi \bigl(x^{(k)}\bigr)-\hat{\psi}\bigl(x^{(k)}\bigr)\bigr\Vert + \bigl\Vert \hat{\psi}\bigl(x^{(k)}\bigr)-x^{\ast}\bigr\Vert , $$
it suffices, for any \(\varepsilon>0\), to prove
$$ \bigl\Vert \psi\bigl(x^{(k)}\bigr)-\hat{\psi} \bigl(x^{(k)}\bigr)\bigr\Vert +\bigl\Vert \hat{\psi} \bigl(x^{(k)}\bigr)-x^{\ast}\bigr\Vert < \varepsilon. $$
(3.15)
To obtain (3.15), it suffices to show that
$$\bigl\Vert \psi\bigl(x^{(k)}\bigr)-\hat{\psi}\bigl(x^{(k)} \bigr)\bigr\Vert < \frac{\varepsilon}{2} \quad\mbox{and}\quad \bigl\Vert \hat{\psi} \bigl(x^{(k)}\bigr)-x^{\ast}\bigr\Vert < \frac{\varepsilon}{2}. $$

From Theorems 3.2 and 3.3, obviously, \(\Vert x^{(k+1)}-x^{\ast} \Vert <\varepsilon\). This completes the proof. □

In fact, under the condition of Theorem 3.4, the solution of \(Ax=\vert x\vert +b\) can be approximated by the point \(x^{\ast}\) satisfying \(Ax^{\ast}=\phi(x^{\ast})+b\), since
$$\bigl\Vert \phi\bigl(x^{\ast}\bigr)-\bigl\vert x^{\ast}\bigr\vert \bigr\Vert \leq\sqrt{n}\epsilon, \quad\mbox{for any } \epsilon>0. $$

The convergence speed of the nonlinear MHSS-like method (3.3) may depend on two factors: (1) the nonlinear term \(\vert x\vert \); (2) finding an optimal parameter to guarantee that the spectral radius \(\rho(M_{\alpha})\) of the iteration matrix \(M_{\alpha}\) is less than 1. Since the former is fixed by the problem itself, only the latter can be addressed. Based on Corollary 2.1 in [20], the optimal parameter \(\alpha^{\ast}=\sqrt{\lambda_{\max}\lambda_{\min}}\) minimizes the upper bound on the spectral radius \(\rho(M_{\alpha})\) of the MHSS iteration matrix \(M_{\alpha}\), where \(\lambda_{\max}\) and \(\lambda_{\min}\) are the largest and smallest eigenvalues of the symmetric positive definite matrix W, respectively. It is noted that, although this \(\alpha^{\ast}\) does not in general minimize the spectral radius \(\rho(M_{\alpha})\) itself, it is still helpful for choosing an effective parameter for the nonlinear MHSS-like method.
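The parameter \(\alpha^{\ast}=\sqrt{\lambda_{\max}\lambda_{\min}}\) and the upper bound \(\max_{\lambda\in\operatorname{sp}(W)}\sqrt{\alpha^{2}+\lambda^{2}}/(\alpha+\lambda)\) on \(\rho(M_{\alpha})\) from [20] are cheap to evaluate for small test problems (a sketch; the function names are ours):

```python
import numpy as np

def mhss_optimal_alpha(W):
    """alpha* = sqrt(lambda_max * lambda_min) for symmetric positive definite W."""
    lam = np.linalg.eigvalsh(W)          # eigenvalues in ascending order
    return np.sqrt(lam[0] * lam[-1])

def mhss_bound(W, alpha):
    """Upper bound max_i sqrt(alpha^2 + lam_i^2) / (alpha + lam_i) on rho(M_alpha)."""
    lam = np.linalg.eigvalsh(W)
    return np.max(np.sqrt(alpha ** 2 + lam ** 2) / (alpha + lam))
```

At \(\alpha^{\ast}\) the bound is attained with equal value at \(\lambda_{\min}\) and \(\lambda_{\max}\), the usual equalization property of such min-max parameter choices.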

4 Numerical experiments

In this section, we give some numerical experiments to demonstrate the performance of the nonlinear MHSS-like method for solving the AVE (1.1). Since the numerical results in [18] show that the nonlinear HSS-like method outperforms the Picard and Picard-HSS methods under certain conditions, here we compare the nonlinear MHSS-like method with the nonlinear HSS-like method [18] from the point of view of the number of iterations (denoted IT) and CPU times (denoted CPU) to show the advantage of the nonlinear MHSS-like method. All the tests are performed in MATLAB 7.0.

Example 1

We consider the following AVE:
$$ Ax-\vert x\vert =b, $$
(4.1)
with \(A=W+iT\),
$$T=I \otimes V+V\otimes I \quad\mbox{and} \quad W = 10(I\otimes V_{c}+V_{c}\otimes I ) + 9\bigl(e_{1}e^{T}_{m }+e_{m }e_{1}^{T} \bigr)\otimes I, $$
where \(V=\operatorname {tridag}(-1,2,-1)\in\mathbb{R}^{m\times m}\), \(V_{c}=V-e_{1}e^{T}_{m }-e_{m }e_{1}^{T} \in\mathbb{R}^{m\times m}\), \(e_{1}\) and \(e_{m}\) are the first and the last unit vectors in \(\mathbb{R}^{m}\), respectively. Here T and W correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and periodic boundary conditions, respectively, on a uniform mesh in the unit square \([0, 1]\times[0, 1]\) with the mesh-size \(h = \frac{1}{m+1 }\). The right-hand side vector b is taken to be \(b = (A-I)\textbf{1}\), with 1 being the vector of all entries equal to 1.
In our implementations, the initial point is chosen to be the zero vector and the stopping criterion for the nonlinear MHSS-like and HSS-like methods is
$$\frac{\Vert b+\vert x^{(k)}\vert -Ax^{(k)}\Vert _{2}}{\Vert b\Vert _{2}}\leq10^{-6}. $$
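The test matrices of Example 1 can be assembled with Kronecker products; the following dense sketch (variable names ours, practical only for small m) follows the definitions above.

```python
import numpy as np

def example1(m):
    """Build W, T, b of Example 1 on an m x m grid (n = m^2 unknowns)."""
    I = np.eye(m)
    # V = tridiag(-1, 2, -1)
    V = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    e1 = np.zeros(m); e1[0] = 1.0
    em = np.zeros(m); em[-1] = 1.0
    C = np.outer(e1, em) + np.outer(em, e1)   # e1 em^T + em e1^T
    Vc = V - C
    T = np.kron(I, V) + np.kron(V, I)
    W = 10 * (np.kron(I, Vc) + np.kron(Vc, I)) + 9 * np.kron(C, I)
    # right-hand side b = (A - I) 1 with A = W + iT
    b = (W + 1j * T - np.eye(m * m)) @ np.ones(m * m)
    return W, T, b
```

For large grids one would of course build W and T in sparse format (e.g. `scipy.sparse.kron`) rather than densely.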
In the implementations of the nonlinear MHSS-like and HSS-like methods, the optimal parameters have been obtained experimentally to yield the least CPU times and iteration numbers for both iteration methods. These parameters are listed in Table 1.
Table 1 The optimal parameters α for the MHSS-like and HSS-like methods

m            8 × 8    16 × 16    32 × 32    48 × 48
MHSS-like     3.66       2.09       1.22       0.87
HSS-like      9.55       5.32       3.01       2.21

In Table 2, we list the iteration numbers and CPU times for the nonlinear MHSS-like and HSS-like methods by using the optimal parameters in Table 1.
Table 2 IT and CPU for the MHSS-like and HSS-like methods

m                  8 × 8    16 × 16    32 × 32    48 × 48
MHSS-like   IT       155         55         66        165
            CPU    0.062      0.078      0.563      3.875
HSS-like    IT       353        133        154        231
            CPU    0.203      0.313      1.969      7.953

Table 2 shows that the iteration numbers and CPU times of the nonlinear MHSS-like method for solving the AVE (1.1) are smaller than those of the nonlinear HSS-like method. In all cases the nonlinear MHSS-like method is superior to the nonlinear HSS-like method in terms of iteration numbers and CPU times. Hence, compared with the nonlinear HSS-like method, the nonlinear MHSS-like method for solving the AVE (1.1) may be given priority under certain conditions.

5 Conclusions

In this paper, the nonlinear MHSS-like method has been established for solving the weakly absolute value equations (AVE). In the proposed method, two real linear subsystems with symmetric positive definite matrices \(\alpha I + W\) and \(\alpha I + T\) are solved at each step. In contrast, in the nonlinear HSS-like method a shifted skew-Hermitian linear subsystem with the matrix \(\alpha I + iT\) is solved at each iteration. By using a smoothing approximate function, the local convergence of the proposed method has been analyzed. Numerical experiments have shown that the nonlinear MHSS-like method is feasible, robust, and efficient.

Declarations

Acknowledgements

This research was supported by NSFC (No. 11301009), 17HASTIT012, the Natural Science Foundation of Henan Province (No. 15A110077), and the Project of Young Core Instructors of Universities in Henan Province (No. 2015GGJS-003). The author thanks the anonymous referees for their constructive suggestions and helpful comments, which led to significant improvement of the original manuscript.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics and Statistics, Anyang Normal University

References

1. Rohn, J: A theorem of the alternatives for the equation \(Ax+B\vert x\vert =b\). Linear Multilinear Algebra 52, 421-426 (2004)
2. Mangasarian, OL: Absolute value programming. Comput. Optim. Appl. 36, 43-53 (2007)
3. Cottle, RW, Dantzig, GB: Complementary pivot theory of mathematical programming. Linear Algebra Appl. 1, 103-125 (1968)
4. Cottle, RW, Pang, J-S, Stone, RE: The Linear Complementarity Problem. Academic Press, San Diego (1992)
5. Mangasarian, OL, Meyer, RR: Absolute value equations. Linear Algebra Appl. 419, 359-367 (2006)
6. Caccetta, L, Qu, B, Zhou, G-L: A globally and quadratically convergent method for absolute value equations. Comput. Optim. Appl. 48, 45-58 (2011)
7. Mangasarian, OL: A generalized Newton method for absolute value equations. Optim. Lett. 3, 101-108 (2009)
8. Hu, S-L, Huang, Z-H, Zhang, Q: A generalized Newton method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 235, 1490-1501 (2011)
9. Ketabchi, S, Moosaei, H: An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side. Comput. Math. Appl. 64, 1882-1885 (2012)
10. Zhang, C, Wei, Q-J: Global and finite convergence of a generalized Newton method for absolute value equations. J. Optim. Theory Appl. 143, 391-403 (2009)
11. Bello Cruz, JY, Ferreira, OP, Prudente, LF: On the global convergence of the inexact semi-smooth Newton method for absolute value equation. Comput. Optim. Appl. 65, 93-108 (2016)
12. Rohn, J: An algorithm for solving the absolute value equations. Electron. J. Linear Algebra 18, 589-599 (2009)
13. Rohn, J, Hooshyarbakhsh, V, Farhadsefat, R: An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim. Lett. 8, 35-44 (2014)
14. Noor, MA, Iqbal, J, Al-Said, E: Residual iterative method for solving absolute value equations. Abstr. Appl. Anal. 2012, Article ID 406232 (2012)
15. Yong, L-Q: Particle swarm optimization for absolute value equations. J. Comput. Inf. Syst. 6(7), 2359-2366 (2010)
16. Bai, Z-Z, Golub, GH, Ng, MK: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603-626 (2003)
17. Salkuyeh, DK: The Picard-HSS iteration method for absolute value equations. Optim. Lett. 8, 2191-2202 (2014)
18. Zhu, M-Z, Zhang, G-F, Liang, Z-Z: The nonlinear HSS-like iteration method for absolute value equations. Preprint, arXiv:1403.7013 (2014)
19. Li, C-X, Wu, S-L: A modified GHSS method for non-Hermitian positive definite linear systems. Jpn. J. Ind. Appl. Math. 29, 253-268 (2012)
20. Bai, Z-Z, Benzi, M, Chen, F: Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87, 93-111 (2010)
21. Bai, Z-Z, Yang, X: On HSS-based iteration methods for weakly nonlinear systems. Appl. Numer. Math. 59(12), 2923-2936 (2009)
22. Ortega, JM, Rheinboldt, WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)
23. Lotfi, T, Veiseh, H: A note on unique solvability of the absolute value equation. J. Linear Topol. Algebra 2, 77-78 (2013)

Copyright

© Li 2016