
Convergence results of a matrix splitting algorithm for solving weakly nonlinear complementarity problems

Abstract

In this paper, we consider a class of weakly nonlinear complementarity problems (WNCP) with large sparse system matrices. We present an accelerated modulus-based matrix splitting algorithm by reformulating the WNCP as an implicit fixed point equation based on two splittings of the system matrix. We show that, if the system matrix is a P-matrix, then under some mild conditions the sequence generated by the algorithm converges to the solution of the WNCP.

1 Introduction

Consider the following weakly nonlinear complementarity problem, which is to find \(z\in R^{n}\) such that

$$\begin{aligned} z\geq0,\quad Az+\Psi(z)\geq0, \quad z^{T}\bigl(Az+\Psi(z) \bigr)=0, \end{aligned}$$
(1)

abbreviated as WNCP, where \(A=(a_{ij})\in R^{n\times n}\) is a given large, sparse, real matrix, \(\Psi(z): R^{n}\rightarrow R^{n}\) is a Lipschitz continuous nonlinear function, \(z\geq0\) means \(z_{i}\geq0\), \(i=1,2,\ldots,n\), and the superscript T denotes transposition.

As is well known, the classical nonlinear complementarity problems (NCP) are important and fundamental topics in optimization theory, and they have developed into a well-established and fruitful field. See [1–3] for details of the basic theory, effective algorithms and important applications of NCP. Problem (1) is a special case of NCP and an extension of the linear complementarity problem: when \(\Psi(z)=q\) is a constant vector, problem (1) reduces to a linear complementarity problem. Recently, many researchers [4–8] have paid close attention to feasible and efficient methods for solving linear complementarity problems. In particular, by reformulating the linear complementarity problem as an implicit fixed point equation, Van Bokhoven [9] proposed a modulus iteration method, in which each iterate is defined as the solution of a system of linear equations. In 2010, Bai [10] presented a modulus-based matrix splitting iteration method and showed its convergence when the system matrix is an \(H_{+}\)-matrix. Subsequently, Zhang [11] proposed two-step modulus-based matrix splitting iteration methods and established the corresponding convergence theory for \(H_{+}\)-matrices. Building on this work, Sun and Zeng [12] proposed a modified semismooth Newton method for solving problem (1) with A being an M-matrix and \(\Psi(z)\) being a continuously differentiable monotone diagonal function on \(R^{n}\).

In this paper, motivated by the requirements of applications (more details can be found in [13–16]), we present an accelerated modulus-based matrix splitting algorithm for WNCP. The paper is organized as follows. Some necessary notation and definitions are introduced in Section 2. In Section 3, we establish a class of accelerated modulus-based matrix splitting iteration algorithms. In Section 4, the convergence conditions are analyzed.

2 Preliminaries

Some necessary notation, definitions and lemmas used in the sequel are introduced in this section. For \(B\in R^{n\times n}\), we write \(B^{-1}\), \(B^{T}\), \(\rho(B)\) for the inverse, the transpose and the spectral radius of the matrix B, respectively. For \(x\in R^{n}\), \(\Vert x\Vert \) denotes a norm of the vector x and \(\vert x\vert =(\vert x_{1}\vert ,\ldots, \vert x_{n}\vert )^{T}\) its componentwise absolute value. \(\Vert A\Vert \) denotes any consistent norm of the matrix A; in particular, \(\Vert \cdot \Vert _{2}\) denotes the spectral norm. \(\lambda\in\lambda(A)\) denotes an eigenvalue of the matrix A, where \(\lambda(A)\) is the set of all eigenvalues of A.

Definition 1

[17]

If for any \(x:=(x_{1},x_{2},\ldots,x_{n})\neq0\) there exists an index k such that \(x_{k}(Ax)_{k}=x_{k}(a_{k1}x_{1}+\cdots+a_{kn}x_{n})>0\), then the matrix A is called a P-matrix.

Definition 2

For a function \(f(x): R^{n}\rightarrow R^{n}\), if there exists a constant L such that, for any x, \(y\in R^{n}\),

$$\bigl\Vert f(x)-f(y)\bigr\Vert \leq L\Vert x-y\Vert , $$

then we call f a Lipschitz continuous function on \(R^{n}\), and L is called a Lipschitz constant.

Lemma 3

[18]

Let \(A=(a_{ij})\in R^{n\times n}\) be a P-matrix. Then, for any nonnegative diagonal matrix Ω, the matrix \(A+\Omega\) is nonsingular.

3 Algorithm

Theorem 4

Let \(M_{1}-N_{1}=M_{2}-N_{2}=A\) be two splittings of the matrix \(A\in R^{n\times n}\), let Ω, Γ be \(n\times n\) positive diagonal matrices, and let \(\Omega_{1}\), \(\Omega_{2}\) be \(n\times n\) nonnegative diagonal matrices such that \(\Omega=\Omega_{1}+\Omega_{2}\). Then the following statements hold.

(i)

    If z is a solution of problem (1), then \(x=\frac {1}{2} (\Gamma^{-1}z-\Omega^{-1}(Az+\Psi(z)) )\) satisfies the implicit fixed point equation

    $$ (M_{1}\Gamma+\Omega_{1})x =(N_{1}\Gamma-\Omega_{2})x+(\Omega-M_{2} \Gamma)\vert x\vert +N_{2}\Gamma \vert x\vert -\Psi \bigl(\Gamma \bigl(\vert x\vert +x\bigr) \bigr). $$
    (2)
(ii)

    If x satisfies the implicit fixed point equation (2), then

    $$z=\Gamma\bigl(\vert x\vert +x\bigr) $$

    is a solution of problem (1).

Proof

First we prove part (i). Since z is a solution of problem (1), we have \(z\geq0\). So there exists \(x\in R^{n}\) such that

$$z=\Gamma\bigl(\vert x\vert +x\bigr). $$

Define another nonnegative vector,

$$\upsilon=\Omega\bigl(\vert x\vert -x\bigr). $$

It is easy to see that \(\upsilon\geq0\), \(z^{T}\upsilon=0\), and \(\upsilon =Az+\Psi(z)\) if and only if

$$\Omega\bigl(\vert x\vert -x\bigr)=A\Gamma\bigl(\vert x\vert +x\bigr)+ \Psi \bigl(\Gamma\bigl(\vert x\vert +x\bigr) \bigr). $$

Replacing A with \(M_{1}-N_{1}\), we have

$$(M_{1}\Gamma+\Omega)x=N_{1}\Gamma x+ \bigl( \Omega-(M_{1}-N_{1})\Gamma \bigr)\vert x\vert -\Psi \bigl(\Gamma\bigl(\vert x\vert +x\bigr) \bigr). $$

Taking \(M_{1}-N_{1}=M_{2}-N_{2}\) and \(\Omega=\Omega_{1}+\Omega_{2}\) into account, it follows that

$$(M_{1}\Gamma+\Omega_{1})x=(N_{1}\Gamma- \Omega_{2}) x+(\Omega-M_{2}\Gamma )\vert x\vert +N_{2}\Gamma \vert x\vert -\Psi \bigl(\Gamma\bigl(\vert x\vert +x\bigr) \bigr). $$

This shows that equation (2) holds.

We now turn to prove part (ii). By some simple calculations, the implicit fixed point equation (2) is equivalent to

$$A\Gamma\bigl(\vert x\vert +x\bigr)+\Psi \bigl(\Gamma\bigl(\vert x\vert +x\bigr) \bigr)=\Omega\bigl(\vert x\vert -x\bigr). $$

Set \(z=\Gamma(\vert x\vert +x)\) and \(\upsilon=\Omega(\vert x\vert -x)\). Evidently, \(z\geq0\), \(\upsilon\geq0\), \(z^{T}\upsilon=0\), and \(\upsilon =Az+\Psi(z)\), which means that z is a solution of problem (1). This completes the proof. □
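The complementarity in this construction is exact componentwise: \(z_{i}\upsilon_{i}\) is proportional to \((\vert x_{i}\vert +x_{i})(\vert x_{i}\vert -x_{i})=\vert x_{i}\vert ^{2}-x_{i}^{2}=0\). The following minimal numerical check (ours, not part of the original development; Python with NumPy) illustrates the decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal(n)
Gamma = np.diag(rng.uniform(0.5, 2.0, size=n))  # positive diagonal matrices
Omega = np.diag(rng.uniform(0.5, 2.0, size=n))

z = Gamma @ (np.abs(x) + x)   # candidate solution, z >= 0
v = Omega @ (np.abs(x) - x)   # candidate slack,    v >= 0

assert (z >= 0).all() and (v >= 0).all()
# z_i * v_i is proportional to (|x_i| + x_i)(|x_i| - x_i) = 0 componentwise
assert abs(z @ v) < 1e-12
```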

Note that the implicit fixed point equation (2) involves many parameters, which are complicated to determine in computation. To address this, we give a simpler formulation as follows; subsequently, a matrix splitting iteration algorithm will be given for solving WNCP. Let \(\Omega_{1}=\Omega\), \(\Omega_{2}=0, \Gamma=\frac{1}{\gamma}I\), where \(\gamma> 0\) is a real number; then the implicit fixed point equation reduces to

$$(M_{1}+\gamma\Omega)x=N_{1} x+(\gamma\Omega-M_{2}) \vert x\vert +N_{2}\vert x\vert -\gamma \Psi \biggl( \frac{(\vert x\vert +x)}{\gamma} \biggr). $$

In fact, γΩ in the above equation denotes a positive diagonal parameter matrix, which can be replaced by Ω for simplicity. That is, the above equation is essentially equivalent to

$$\begin{aligned} (M_{1}+\Omega)x=N_{1} x+( \Omega-M_{2})\vert x\vert +N_{2}\vert x\vert -\gamma \Psi \biggl(\frac {(\vert x\vert +x)}{\gamma} \biggr). \end{aligned}$$
(3)

In the rest of the paper, we will use the fixed point equation above to give the algorithm and convergence analysis for solving WNCP.

Algorithm 5

Step 1. Choose two splittings of the matrix \(A\in R^{n\times n}\) satisfying \(A=M_{1}-N_{1}=M_{2}-N_{2}\).

Step 2. Set \(k=0\). Give an initial vector \(x^{0}\in R^{n}\) and compute \(z^{0}=\frac{1}{\gamma}(\vert x^{0}\vert +x^{0})\).

Step 3. Compute \(x^{k+1}\) satisfying

$$\begin{aligned} (M_{1}+\Omega)x^{k+1}=N_{1}x^{k}+( \Omega -M_{2})\bigl\vert x^{k}\bigr\vert +N_{2}\bigl\vert x^{k+1}\bigr\vert -\gamma\Psi \bigl(z^{k}\bigr), \end{aligned}$$
(4)

and set

$$z^{k+1}=\frac{1}{\gamma}\bigl(\bigl\vert x^{k+1}\bigr\vert +x^{k+1}\bigr),\quad k=k+1. $$

Here, \(\Omega\in R^{n\times n}\) is a positive diagonal matrix and γ is a positive constant.

Step 4. If the sequence \(\{z^{k}\}_{0}^{+\infty}\) has converged (e.g., \(\Vert z^{k+1}-z^{k}\Vert \) is below a prescribed tolerance), stop. Otherwise, go to Step 3.
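To make the iteration concrete, the following sketch implements Algorithm 5 in Python/NumPy under stated assumptions: the function and parameter names are ours, the defaults are illustrative, and the inner fixed-point loop used to resolve the implicit term \(N_{2}\vert x^{k+1}\vert \) in (4) is one possible choice that the algorithm statement leaves open.

```python
import numpy as np

def algorithm5(A, Psi, M1, N1, M2, N2, Omega, gamma,
               tol=1e-8, max_outer=500, max_inner=100):
    """Sketch of Algorithm 5: A = M1 - N1 = M2 - N2, Omega positive diagonal,
    gamma > 0, Psi: R^n -> R^n Lipschitz continuous."""
    n = A.shape[0]
    x = np.zeros(n)                            # Step 2: initial vector x^0
    z = (np.abs(x) + x) / gamma                # z^0
    M1_Om = M1 + Omega                         # nonsingular by Lemma 3
    for _ in range(max_outer):
        # Step 3: part of the right-hand side of (4) depending only on x^k, z^k
        b = N1 @ x + (Omega - M2) @ np.abs(x) - gamma * Psi(z)
        # (4) is implicit in x^{k+1} through N2*|x^{k+1}|; resolve it with a
        # simple inner fixed-point loop (one of several possible choices)
        x_new = x
        for _ in range(max_inner):
            x_next = np.linalg.solve(M1_Om, b + N2 @ np.abs(x_new))
            converged = np.linalg.norm(x_next - x_new) <= tol
            x_new = x_next
            if converged:
                break
        z_new = (np.abs(x_new) + x_new) / gamma
        if np.linalg.norm(z_new - z) <= tol:   # Step 4: stop once {z^k} settles
            return z_new
        x, z = x_new, z_new
    return z                                   # last iterate if not yet converged
```

For instance, with \(\Psi(z)\equiv q\) a constant vector, this sketch reduces to a modulus-based splitting iteration for the linear complementarity problem.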

4 Convergence theorems

In this section, we will consider the conditions that ensure the convergence of \(\{z^{k}\}_{0}^{+\infty}\) obtained by Algorithm 5.

Theorem 6

Let \(A\in R^{n\times n}\) be a P-matrix, \(M_{1}-N_{1}=M_{2}-N_{2}=A\) be two splittings of the matrix A with \(M_{1}\in R^{n\times n}\) being a P-matrix. Assume that \(\Omega\in R ^{n\times n}\) is a positive diagonal matrix, γ is a positive constant and \(\Psi(z)\): \(R^{n}\rightarrow R^{n}\) is a Lipschitz continuous function with the Lipschitz constant L. Set

$$\rho(\Omega)=2g(\Omega)+2q(\Omega)+f(\Omega), $$

where \(g(\Omega)=\Vert (M_{1}+\Omega)^{-1}N_{2}\Vert \), \(q(\Omega)=\Vert (M_{1}+\Omega)^{-1}N_{1}\Vert +L\Vert (M_{1}+\Omega)^{-1}\Vert \), \(f(\Omega)=\Vert (M_{1}+\Omega)^{-1}(\Omega-M_{1})\Vert \). If the matrix Ω satisfies \(\rho(\Omega)<1\), then for any initial vector \(x^{0}\in R^{n}\), the iteration sequence \(\{z^{k}\}^{+\infty }_{k=0}\) generated by Algorithm 5 converges to a solution \(z^{*}\in R^{n}_{+}\) of problem (1).

Proof

Suppose that \(z^{*}\in R^{n}_{+}\) is a solution of problem (1). Note that \(\Gamma=\frac{1}{\gamma}I\); by Theorem 4 and (3), we see that \(x^{*}=\frac{1}{2} (\gamma z^{*}-\Omega^{-1} (Az^{*}+\Psi(z^{*}) ) )\) is a solution of the equation

$$\begin{aligned} (M_{1}+\Omega)x^{*}=N_{1}x^{*}+( \Omega-M_{2})\bigl\vert x^{*}\bigr\vert +N_{2}\bigl\vert x^{*}\bigr\vert -\gamma \Psi \bigl(z^{*}\bigr) \end{aligned}$$
(5)

with \(z^{*}=\frac{1}{\gamma}(\vert x^{*}\vert +x^{*})\). Subtracting (5) from (4), we have

$$\begin{aligned} &(M_{1}+\Omega) \bigl(x^{k+1}-x^{*}\bigr) \\ ={}&N_{1}\bigl(x^{k}-x^{*}\bigr)+( \Omega-M_{2}) \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr) \\ &{}+N_{2}\bigl(\bigl\vert x^{k+1}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr)-\gamma\bigl(\Psi\bigl(z^{k} \bigr)-\Psi\bigl(z^{*}\bigr)\bigr). \end{aligned}$$

Since \(M_{1}-N_{1}=M_{2}-N_{2}\), we have

$$\begin{aligned} &{}(\Omega-M_{2}) \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr) \\ ={}&(\Omega-M_{2}+N_{2}-N_{2}) \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr) \\ ={}&(\Omega-M_{1}+N_{1}-N_{2}) \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr) \\ ={}&(\Omega-M_{1}) \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr)+N_{1} \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr)-N_{2} \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr). \end{aligned}$$
(6)

Since \(M_{1}\) is a P-matrix and Ω is a positive diagonal matrix, it follows from Lemma 3 that \(M_{1}+\Omega\) is a nonsingular matrix. Hence, combining the above equality with (6), we have

$$\begin{aligned} &{}x^{k+1}-x^{*} \\ ={}&(M_{1}+\Omega)^{-1}N_{1} \bigl(x^{k}-x^{*}\bigr)+(M_{1}+ \Omega)^{-1}(\Omega -M_{1}) \bigl(\bigl\vert x^{k} \bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr) \\ &{}+(M_{1}+\Omega)^{-1}N_{1} \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr)-(M_{1}+\Omega )^{-1}N_{2} \bigl(\bigl\vert x^{k}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr) \\ &{}+(M_{1}+\Omega)^{-1}N_{2} \bigl(\bigl\vert x^{k+1}\bigr\vert -\bigl\vert x^{*}\bigr\vert \bigr)-(M_{1}+\Omega )^{-1}\gamma \bigl(\Psi \bigl(z^{k}\bigr)-\Psi\bigl(z^{*}\bigr) \bigr). \end{aligned}$$
(7)

Note that

$$\begin{aligned} \bigl\Vert z^{k}-z^{*}\bigr\Vert =& \biggl\Vert \frac{\vert x^{k}\vert +x^{k}}{\gamma}-\frac {\vert x^{*}\vert +x^{*}}{\gamma} \biggr\Vert \leq \frac{2}{\gamma}\bigl\Vert x^{k}-x^{*}\bigr\Vert , \end{aligned}$$
(8)

Since \(\Psi(z)\) is a Lipschitz continuous function with constant L, it follows from (8) that

$$\begin{aligned} \bigl\Vert \Psi\bigl(z^{k}\bigr)-\Psi \bigl(z^{*}\bigr)\bigr\Vert \leq L\bigl\Vert z^{k}-z^{*} \bigr\Vert \leq\frac{2L}{\gamma}\bigl\Vert x^{k}-x^{*} \bigr\Vert . \end{aligned}$$
(9)

Therefore, taking norms in (7) and using \(\Vert \vert u\vert -\vert v\vert \Vert \leq\Vert u-v\Vert \) together with (9), we derive

$$\begin{aligned} &{} \bigl(1- \bigl\Vert (M_{1}+\Omega)^{-1}N_{2} \bigr\Vert \bigr)\bigl\Vert x^{k+1}-x^{*}\bigr\Vert \\ \leq{}& \bigl(2 \bigl(\bigl\Vert (M_{1}+\Omega)^{-1}N_{1} \bigr\Vert +L\bigl\Vert (M_{1}+\Omega)^{-1}\bigr\Vert \bigr) +\bigl\Vert (M_{1}+\Omega)^{-1}(\Omega-M_{1}) \bigr\Vert \\ &{}+\bigl\Vert (M_{1}+\Omega)^{-1}N_{2} \bigr\Vert \bigr)\bigl\Vert x^{k}-x^{*}\bigr\Vert , \end{aligned}$$

which is equivalent to

$$\bigl\Vert x^{k+1}-x^{*}\bigr\Vert \leq\frac{2q(\Omega)+f(\Omega)+g(\Omega)}{1-g(\Omega )} \bigl\Vert x^{k}-x^{*}\bigr\Vert $$

with \(g(\Omega)<1\). The condition

$$\frac{2q(\Omega)+f(\Omega)+g(\Omega)}{1-g(\Omega)}< 1 $$

with \(g(\Omega)<1\) is equivalent to \(\rho(\Omega)=2g(\Omega)+2q(\Omega)+f(\Omega)<1\) (note that \(\rho(\Omega)<1\) already forces \(g(\Omega)<1\)), and it ensures that \(\lim_{k\rightarrow+\infty}x^{k}=x^{*}\); hence, by (8), \(\lim_{k\rightarrow+\infty}z^{k}=z^{*}\). This completes the proof. □
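In practice, the sufficient condition of Theorem 6 can be checked numerically for a candidate splitting before running Algorithm 5. A minimal sketch (ours; it fixes the spectral norm as the consistent norm, which the theorem permits):

```python
import numpy as np

def rho_omega(M1, N1, N2, Omega, L):
    """Evaluate rho(Omega) = 2*g + 2*q + f from Theorem 6 in the spectral norm."""
    T = np.linalg.inv(M1 + Omega)   # (M1 + Omega)^{-1}, nonsingular by Lemma 3
    g = np.linalg.norm(T @ N2, 2)
    q = np.linalg.norm(T @ N1, 2) + L * np.linalg.norm(T, 2)
    f = np.linalg.norm(T @ (Omega - M1), 2)
    return 2 * g + 2 * q + f        # Algorithm 5 converges if this is < 1
```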

Theorem 7

Let \(A\in R^{n\times n}\) be a P-matrix and \(M_{1}-N_{1}=M_{2}-N_{2}=A\) be two splittings of the matrix A with \(M_{1}\in R^{n\times n}\) being a symmetric P-matrix. Suppose that \(\Omega=\omega I\in R^{n\times n}\) is a positive scalar matrix, where ω is a positive constant, and that \(\Psi(z): R^{n}\rightarrow R^{n}\) is a Lipschitz continuous function with Lipschitz constant L. Let \(\lambda_{\max}\) and \(\lambda_{\min}\) denote the largest and smallest eigenvalues of the matrix \(M_{1}\), respectively, and let \(\tau_{1}=\Vert M_{1}^{-1}N_{1}\Vert _{2}\) and \(\tau_{2}=\Vert M_{1}^{-1}N_{2}\Vert _{2}\) satisfy \(\tau_{1}+\tau_{2}<1\). If \(\lambda_{\min}>L\) and the parameters ω, \(M_{1}, N_{1}, M_{2}, N_{2}\) satisfy one of the following conditions, then the iteration sequence \(\{z^{k}\}_{k=0}^{+\infty}\subset R^{n}_{+}\) generated by Algorithm 5 converges to the unique solution \(z^{*}\in R^{n}_{+}\) of WNCP for any initial vector \(x^{0}\in R^{n}\).

(i)

    When \(0<\tau_{1}+\tau_{2}<\frac{\lambda _{\min}-L}{\lambda_{\max}}\),

    $$\omega=\sqrt{\lambda_{\max}\lambda_{\min}}. $$
(ii)

    When \(\frac{\lambda_{\min}-L}{\lambda_{\max}} <\tau _{1}+\tau_{2}<\frac{\lambda_{\min}-L}{\sqrt{\lambda_{\max}\lambda_{\min}}}\),

    $$\sqrt{\lambda_{\max}\lambda_{\min}}\leq\omega< \frac{[1-(\tau_{1}+\tau _{2})]\lambda_{\max}\lambda_{\min}-L\lambda_{\max}}{(\tau_{1}+\tau _{2})\lambda_{\max}+L-\lambda_{\min}}. $$
(iii)

    When \(\tau_{1}+\tau_{2}=\frac{\lambda _{\min}-L}{\lambda_{\max}}\),

    $$\omega\geq\sqrt{\lambda_{\max}\lambda_{\min}}. $$

Proof

We first give some formulations that will be used in the proof. Since \(M_{1}\) is a symmetric P-matrix (hence symmetric positive definite, so its eigenvalues are positive) and \(\tau_{1}+\tau_{2}<1\), by the definition of the spectral norm, we have

$$\begin{aligned} \bigl\Vert (M_{1}+\Omega)^{-1}N_{1} \bigr\Vert _{2} =&\bigl\Vert (M_{1}+\omega I)^{-1}M_{1}M_{1}^{-1}N_{1} \bigr\Vert _{2} \\ \leq&\bigl\Vert (M_{1}+\omega I)^{-1}M_{1} \bigr\Vert _{2}\bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert _{2} \\ =&\max_{\lambda\in\lambda(M_{1})}\frac{\lambda\tau_{1}}{\omega+\lambda } \\ =&\frac{\lambda_{\max}\tau_{1}}{\omega+\lambda_{\max}}. \end{aligned}$$
(10)

Similarly, we have

$$\begin{aligned} \bigl\Vert (M_{1}+\Omega)^{-1}N_{2} \bigr\Vert _{2} =&\bigl\Vert (M_{1}+\omega I)^{-1}M_{1}M_{1}^{-1}N_{2} \bigr\Vert _{2} \\ \leq&\frac{\lambda_{\max}\tau_{2}}{\omega+\lambda_{\max}} \end{aligned}$$
(11)

and

$$\begin{aligned} \bigl\Vert (M_{1}+\Omega)^{-1}\bigr\Vert _{2} =&\bigl\Vert (M_{1}+\omega I)^{-1}\bigr\Vert _{2} \\ =&\max_{\lambda\in\lambda(M_{1})}\frac{1}{\omega+\lambda} \\ =&\frac{1}{\omega+\lambda_{\min}}. \end{aligned}$$
(12)

In addition, from a simple calculating process, we have

$$\begin{aligned} \bigl\Vert (M_{1}+\Omega)^{-1}( \Omega-M_{1})\bigr\Vert _{2} =&\bigl\Vert (M_{1}+\omega I)^{-1}(\omega I-M_{1})\bigr\Vert _{2} \\ =&\max_{\lambda\in\lambda(M_{1})}\frac{\vert \omega-\lambda \vert }{\omega+\lambda } =\max\biggl\{ \frac{\vert \omega-\lambda_{\max} \vert }{\omega+\lambda_{\max}},\frac {\vert \omega-\lambda_{\min} \vert }{\omega+\lambda_{\min}}\biggr\} \\ =&\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{\lambda_{\max}-\omega}{\lambda_{\max}+\omega}, & \omega \leq\sqrt{\lambda_{\max}\lambda_{\min}},\\ \frac{\omega-\lambda_{\min}}{\omega+\lambda_{\min}}, & \omega \geq\sqrt{\lambda_{\max}\lambda_{\min}}. \end{array}\displaystyle \right . \end{aligned}$$
(13)

As follows from (10)-(13), we have

$$\begin{aligned} \rho(\Omega) =&2g(\Omega)+2q(\Omega)+f(\Omega) \\ =&2 \bigl\Vert (M_{1}+\Omega)^{-1}N_{2} \bigr\Vert _{2}+2 \bigl(\bigl\Vert (M_{1}+\Omega )^{-1}N_{1}\bigr\Vert _{2} \\ &{}+L\bigl\Vert (M_{1}+\Omega)^{-1}\bigr\Vert _{2} \bigr)+\bigl\Vert (M_{1}+\Omega)^{-1}( \Omega -M_{1})\bigr\Vert _{2} \\ \leq&2\frac{\lambda_{\max}\tau_{2}}{\omega+\lambda_{\max}}+2\biggl(\frac{\lambda _{\max}\tau_{1}}{\omega+\lambda_{\max}} +\frac{L}{\omega+\lambda_{\min}}\biggr)+\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{\lambda_{\max}-\omega}{\omega+\lambda_{\max}}, & \omega \leq\sqrt{\lambda_{\max}\lambda_{\min}},\\ \frac{\omega-\lambda_{\min}}{\omega+\lambda_{\min}}, & \omega \geq\sqrt{\lambda_{\max}\lambda_{\min}}. \end{array}\displaystyle \right . \end{aligned}$$
(14)

We then consider two cases.

(a) When \(\omega\leq\sqrt{\lambda_{\max}\lambda_{\min}}\), a simple calculation on (14) shows that ω, \(\tau_{1}\), and \(\tau_{2}\) guarantee \(\rho(\Omega)<1\) provided that

$$\omega^{2}- \bigl((\tau_{1}+\tau_{2}) \lambda_{\max}+L-\lambda_{\min} \bigr)\omega- \bigl(( \tau_{1}+\tau_{2})\lambda_{\max} \lambda_{\min}+L\lambda _{\max} \bigr)>0. $$

Since \(\omega>0\) and \(\omega\leq\sqrt{\lambda_{\max}\lambda_{\min}}\), the solution of the above inequality is

$$\theta(\tau_{1},\tau_{2})< \omega\leq\sqrt{ \lambda_{\max}\lambda_{\min}}, $$

where

$$\begin{aligned} &{}\theta(\tau_{1},\tau_{2}) \\ =&\frac{(\tau_{1}+\tau_{2})\lambda_{\max}+L-\lambda_{\min}}{2} \\ &{}+\frac{\sqrt{ ((\tau_{1}+\tau_{2})\lambda_{\max}+L-\lambda_{\min} )^{2}+4 ((\tau_{1}+\tau_{2})\lambda_{\max}\lambda_{\min}+L\lambda_{\max} )}}{2}. \end{aligned}$$

For the interval above to be nonempty, we require

$$\begin{aligned} \theta(\tau_{1},\tau_{2})< \sqrt{ \lambda_{\max}\lambda_{\min}}. \end{aligned}$$
(15)

Since \(\lambda_{\min}>L\), by the definitions of \(\tau_{1}\), \(\tau_{2}\) and solving (15), we get

$$0< \tau_{1}+\tau_{2}< \frac{\lambda_{\min}-L}{\sqrt{\lambda_{\max}\lambda_{\min}}}. $$

(b) When \(\omega\geq\sqrt{\lambda_{\max}\lambda_{\min}}\), in the same way as in (a), ω, \(\tau_{1}\), and \(\tau_{2}\) guarantee \(\rho(\Omega)<1\) provided that

$$\begin{aligned} \bigl[(\tau_{1}+\tau_{2}) \lambda_{\max}+L-\lambda_{\min} \bigr]\omega+(\tau _{1}+\tau_{2}-1)\lambda_{\max}\lambda_{\min}+L \lambda_{\max}< 0. \end{aligned}$$
(16)

If \((\tau_{1}+\tau_{2})\lambda_{\max}+L-\lambda_{\min}>0\), that is, \(\tau _{1}+\tau_{2}>\frac{\lambda_{\min}-L}{\lambda_{\max}}\), then

$$\omega< \frac{[1-(\tau_{1}+\tau_{2})]\lambda_{\max}\lambda_{\min}-L\lambda _{\max}}{(\tau_{1}+\tau_{2})\lambda_{\max}+L-\lambda_{\min}}. $$

Combined with \(\omega\geq\sqrt{\lambda_{\max}\lambda_{\min}}\), we get

$$\sqrt{\lambda_{\max}\lambda_{\min}}\leq\omega< \frac{[1-(\tau_{1}+\tau _{2})]\lambda_{\max}\lambda_{\min}-L\lambda_{\max}}{(\tau_{1}+\tau _{2})\lambda_{\max}+L-\lambda_{\min}}. $$

Naturally, we have

$$\sqrt{\lambda_{\max}\lambda_{\min}}< \frac{[1-(\tau_{1}+\tau_{2})]\lambda _{\max}\lambda_{\min}-L\lambda_{\max}}{(\tau_{1}+\tau_{2})\lambda _{\max}+L-\lambda_{\min}}. $$

That is,

$$\tau_{1}+\tau_{2}< \frac{\lambda_{\min}-L}{\sqrt{\lambda_{\max}\lambda_{\min}}}. $$

This, together with \(\tau_{1}+\tau_{2}>\frac{\lambda_{\min}-L}{\lambda _{\max}}\), shows that we have

$$\frac{\lambda_{\min}-L}{\lambda_{\max}}< \tau_{1}+\tau_{2}< \frac{\lambda _{\min}-L}{\sqrt{\lambda_{\max}\lambda_{\min}}}. $$

If \((\tau_{1}+\tau_{2})\lambda_{\max}+L-\lambda_{\min}\leq0\), that is, \(\tau_{1}+\tau_{2}\leq\frac{\lambda_{\min}-L}{\lambda_{\max}}\), then (16) holds for any \(\omega>0\). So

$$\omega\geq{\sqrt{\lambda_{\max}\lambda_{\min}}}. $$

Hence, from (a) and (b), we see that when \(0<\tau_{1}+\tau_{2}<\frac {\lambda_{\min}-L}{\lambda_{\max}}, \omega=\sqrt{\lambda_{\max}\lambda _{\min}}\); when \(\frac{\lambda_{\min}-L}{\lambda_{\max}} <\tau_{1}+\tau _{2}<\frac{\lambda_{\min}-L}{\sqrt{\lambda_{\max}\lambda_{\min}}}\), \(\sqrt{\lambda_{\max}\lambda_{\min}}\leq\omega<\frac{[1-(\tau_{1}+\tau _{2})]\lambda_{\max}\lambda_{\min}-L\lambda_{\max}}{(\tau_{1}+\tau _{2})\lambda_{\max}+L-\lambda_{\min}}\); when \(\tau_{1}+\tau_{2}=\frac{\lambda_{\min}-L}{\lambda_{\max}}\), \(\omega \geq\sqrt{\lambda_{\max}\lambda_{\min}}\). The proof is completed. □
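The case analysis in Theorem 7 can be packaged as a concrete parameter selection rule. The following sketch is our own illustrative summary of cases (i)-(iii); in case (ii) any ω in the admissible interval works, and the midpoint is an arbitrary choice:

```python
import numpy as np

def choose_omega(M1, N1, N2, L):
    """Select the scalar omega (Omega = omega*I) per cases (i)-(iii) of Theorem 7."""
    lam = np.linalg.eigvalsh(M1)               # eigenvalues of symmetric M1, ascending
    lam_min, lam_max = lam[0], lam[-1]
    if lam_min <= L:
        raise ValueError("Theorem 7 requires lambda_min > L")
    tau = (np.linalg.norm(np.linalg.solve(M1, N1), 2)
           + np.linalg.norm(np.linalg.solve(M1, N2), 2))  # tau_1 + tau_2
    geo = np.sqrt(lam_max * lam_min)
    if tau <= (lam_min - L) / lam_max:         # cases (i) and (iii): geo is admissible
        return geo
    if tau < (lam_min - L) / geo:              # case (ii): omega in [geo, upper)
        upper = (((1 - tau) * lam_max * lam_min - L * lam_max)
                 / (tau * lam_max + L - lam_min))
        return 0.5 * (geo + upper)             # midpoint of the admissible interval
    raise ValueError("tau_1 + tau_2 too large: no admissible omega under Theorem 7")
```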

5 Results and discussion

This study focused on weakly nonlinear complementarity problems with a large sparse system matrix. We proposed an algorithm that is not only computationally more convenient to use but also faster than existing modulus-based matrix splitting iteration methods, and we presented convergence conditions for the case where the system matrix is a P-matrix.

Some scholars have already studied accelerated modulus-based matrix splitting iteration methods for linear complementarity problems, assuming the system matrix to be either a positive definite matrix or an \(H_{+}\)-matrix. In contrast, we only require the system matrix to be a P-matrix; this assumption is more general, though it remains a limitation. Notwithstanding this limitation, the study suggests that WNCP can be solved faster.

6 Conclusions

In this paper, by reformulating the complementarity problem (1) as an implicit fixed point equation based on splittings of the system matrix A, we establish an accelerated modulus-based matrix splitting iteration algorithm and show the convergence analysis when the involved matrix of the WNCP is a P-matrix.

References

  1. Lin, GH, Fukushima, M: New reformulations for stochastic nonlinear complementarity problems. Optim. Methods Softw. 21, 551-564 (2006)


  2. Tseng, P: Growth behavior of a class of merit functions for the nonlinear complementarity problem. J. Optim. Theory Appl. 89, 17-37 (1996)


  3. Zhang, C, Chen, XJ: Stochastic nonlinear complementarity problem and applications to traffic equilibrium under uncertainty. J. Optim. Theory Appl. 137, 277-295 (2008)


  4. Chen, XJ, Zhang, C, Fukushima, M: Robust solution of monotone stochastic linear complementarity problems. Math. Program. 117, 51-80 (2009)


  5. Fang, H, Chen, X, Fukushima, M: Stochastic \(R_{0}\) matrix linear complementarity problems. SIAM J. Optim. 18, 482-506 (2007)


  6. Fukushima, M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 137, 99-110 (1992)


  7. Zhang, C, Chen, XJ: Smoothing projected gradient method and its application to stochastic linear complementarity problems. SIAM J. Optim. 20, 627-649 (2009)


  8. Zhou, G, Caccetta, L: Feasible semismooth Newton method for a class of stochastic linear complementarity problems. J. Optim. Theory Appl. 139, 379-392 (2008)


  9. Van Bokhoven, WM: A class of linear complementarity problems is solvable in polynomial time. Technical report, Department of Electrical Engineering, Technical University, Eindhoven (1980)

  10. Bai, Z: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 6, 917-933 (2010)


  11. Zhang, L: Two-step modulus based matrix splitting iteration for linear complementarity problems. Numer. Algorithms 57, 83-99 (2011)


  12. Sun, Z, Zeng, J: A monotone semismooth Newton type method for a class of complementarity problems. J. Comput. Appl. Math. 235, 1261-1274 (2011)


  13. Chung, K: Equivalent Differentiable Optimization Problems and Descent Methods for Asymmetric Variational Inequality Problems. Academic Press, New York (1992)


  14. Fukushima, M: Merit functions for variational inequality and complementarity problems. In: Nonlinear Optimization and Applications, vol. 137, pp. 155-170 (1996)


  15. Lin, GH, Chen, XJ, Fukushima, M: New restricted NCP function and their applications to stochastic NCP and stochastic MPEC. Optimization 56, 641-753 (2007)


  16. Ling, C, Qi, L, Zhou, G: The \(SC^{1}\) property of an expected residual function arising from stochastic complementarity problems. Oper. Res. Lett. 36, 456-460 (2008)


  17. Schafer, U: A linear complementarity problem with a P-matrix. SIAM Rev. 46, 189-201 (2004)


  18. Berman, A, Plemmons, R: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)



Acknowledgements

This work was supported in part by NSFC grant No. 11501275 and the Scientific Research Fund of Liaoning Provincial Education Department No. L2015199.

Author information


Corresponding author

Correspondence to Mei-Ju Luo.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Luo, MJ., Wang, YY. & Liu, HL. Convergence results of a matrix splitting algorithm for solving weakly nonlinear complementarity problems. J Inequal Appl 2016, 203 (2016). https://doi.org/10.1186/s13660-016-1139-4

