Extragradient thresholding methods for sparse solutions of cocoercive NCPs
Journal of Inequalities and Applications volume 2015, Article number: 34 (2015)
Abstract
In this paper, we aim to find sparse solutions of cocoercive nonlinear complementarity problems (NCPs). Mathematically, the underlying model is NP-hard in general. Thus an \(\ell_{1}\) regularized projection minimization model is proposed as a relaxation, and an extragradient thresholding algorithm (ETA) is then designed for this regularized model. Furthermore, we analyze the convergence of this algorithm and show that any cluster point of the sequence generated by ETA is a solution of the NCP. Numerical results demonstrate that ETA can effectively solve the \(\ell_{1}\) regularized model and output very sparse solutions of cocoercive NCPs with high quality.
Introduction
The nonlinear complementarity problem, denoted by the \(\operatorname{NCP}(F)\), is to find a vector \(x\in\mathbb{R}^{n}\) such that
$$ x\geq0, \qquad F(x)\geq0, \qquad x^{\top}F(x)=0, $$
where F is a mapping from \(\mathbb{R}^{n}\) into itself. The set of solutions to this problem is denoted by \(\operatorname{SOL}(F)\). Throughout this paper, we assume \(\operatorname{SOL}(F)\ne\emptyset\).
NCPs have various important applications in economics and engineering, such as Nash equilibrium problems, traffic equilibrium problems, contact mechanics problems, and option pricing. Extensive studies of NCPs have been done; see [1–3] and the references therein. Numerical methods for solving NCPs, such as the filter method, continuation method, nonsmooth Newton's method, smoothing Newton methods, Levenberg-Marquardt method, projection method, descent method, and interior-point method, have been extensively investigated in the literature. However, there seems to have been little study of sparse solutions for NCPs. In fact, in real applications, it is often necessary to investigate the sparse solutions of NCPs, for example in bimatrix games [1] and portfolio selection [4]. For more details, see [5].
In this paper, we try to compute a sparse solution of the \(\operatorname{NCP}(F)\), that is, a solution with the smallest number of nonzero entries. To be specific, we seek a vector \(x\in\mathbb{R}^{n}\) by solving the \(\ell_{0}\)-norm minimization problem
$$ \min\ \|x\|_{0} \quad\text{s.t.}\quad x\in\operatorname{SOL}(F), $$ (1)
where \(\|x\|_{0}\) stands for the number of nonzero components of x. A solution of (1) is called a sparse solution of \(\operatorname{NCP}(F)\).
The above minimization problem (1) is in fact a sparse optimization problem [6–9] with equilibrium constraints. From the viewpoint of the objective function, it is an \(\ell_{0}\)-norm minimization problem, which is combinatorial and generally NP-hard. From the point of view of the constraints, it is a mathematical program with equilibrium constraints (MPEC) [10–13]. It is not easy to obtain solutions due to the equilibrium constraints, even for a continuous objective function.
To overcome the difficulty caused by the \(\ell_{0}\)-norm, many researchers have suggested relaxing the \(\ell_{0}\) norm and, instead, considering the \(\ell_{1}\) norm; see [8, 9, 14–17]. Motivated by this outstanding work, we consider applying \(\ell_{1}\) norm minimization to find a sparse solution of NCPs, and we obtain the following minimization problem to approximate (1):
$$ \min\ \|x\|_{1} \quad\text{s.t.}\quad x\in\operatorname{SOL}(F), $$ (2)
where \(\|x\|_{1}=\sum_{i=1}^{n}|x_{i}|\).
To overcome the difficulty caused by the complementarity constraint, we make use of the C-function \(\mathbf{F}_{\min}(x)\) to construct a penalty for violating the complementarity constraints. The C-function \(\mathbf{F}_{\min}\) associated with the ‘min’ function is given by
$$ \mathbf{F}_{\min}(x)=\min \bigl\{x, F(x) \bigr\}=x-\Pi_{\mathbb{R}_{+}^{n}} \bigl(x-F(x) \bigr), $$ (3)
where F is a mapping from \(\mathbb{R}^{n}\) into itself, and \(\Pi_{\mathbb{R}_{+}^{n}}\) is the Euclidean metric projector onto the nonnegative orthant.
It is well known [2] that solving \(\operatorname{NCP}(F)\) is equivalent to solving the fixed point equation \(\mathbf{F}_{\min}(x)=0\), that is,
$$ x= \bigl[x-F(x) \bigr]_{+}, $$ (4)
where \([\cdot]_{+}\) is the Euclidean metric projector onto the nonnegative orthant.
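As a quick numerical sanity check of this equivalence, one can evaluate the residual \(x-[x-F(x)]_{+}\) directly. The sketch below uses an illustrative affine mapping that is not from the paper:

```python
import numpy as np

def is_ncp_solution(x, F, tol=1e-10):
    """Check the fixed-point form x = [x - F(x)]_+ of NCP(F)."""
    residual = x - np.maximum(x - F(x), 0.0)  # equals min(x, F(x)) componentwise
    return np.linalg.norm(residual) <= tol

# Illustrative example: F(v) = Mv + q, with solution x* = (1, 0).
M = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([-2.0, 1.0])
F = lambda v: M @ v + q

print(is_ncp_solution(np.array([1.0, 0.0]), F))  # True
print(is_ncp_solution(np.array([0.0, 0.0]), F))  # False
```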
Combining (2) and (4), by introducing a new variable \(z\in\mathbb{R}^{n}\), we obtain the following regularized minimization problem:
$$ \min_{x,z\in\mathbb{R}^{n}}\ f_{\lambda}(x,z):=\|x-z\|^{2}+\lambda\|x\|_{1} \quad\text{s.t.}\quad z= \bigl[x-F(x) \bigr]_{+}, $$ (5)
where \(\lambda>0\) is a regularization parameter and \(\|\cdot\|\) denotes the Euclidean norm. We call (5) the \(\ell_{1}\) regularized projection minimization problem.
This paper is organized as follows. In Section 2, we approximate (1) by the \(\ell_{1}\) regularized projection minimization problem (5), and we show theoretically that (5) is a good approximation. In Section 3, we propose an extragradient thresholding algorithm (ETA) for (5) and analyze its convergence. Numerical results are presented in Section 4 to show that (5) is promising for providing sparse solutions of cocoercive NCPs.
The \(\ell_{1}\) regularized approximation
In this section, we study the relation between the solutions of model (5) and those of model (2).
Theorem 2.1
For any fixed \(\lambda>0\), the solution set of (5) is nonempty and bounded. For each k, let \((\widehat{x}_{\lambda_{k}},\widehat{z}_{\lambda_{k}})\) be a solution of (5) with \(\lambda=\lambda_{k}\), where \(\{\lambda_{k}\}\) is any positive sequence converging to 0. If \(\operatorname{SOL}(F)\ne\emptyset\), then \(\{(\widehat{x}_{\lambda_{k}},\widehat{z}_{\lambda_{k}})\}\) has at least one accumulation point, and any accumulation point \(x^{*}\) of \(\{\widehat{x}_{\lambda_{k}}\}\) is a solution of (2).
Proof
For any fixed \(\lambda>0\), it is easy to show the coercivity of \(f_{\lambda}(x,z)\) in (5), namely
$$ f_{\lambda}(x,z)\rightarrow+\infty \quad\text{as } \bigl\|(x,z)\bigr\|\rightarrow\infty. $$ (6)
We also note that \(f_{\lambda}(x,z)\geq0\) for any \(x, z\in\mathbb{R}^{n}\). This together with (6) implies that the level set
$$ L= \bigl\{(x,z) : z= \bigl[x-F(x) \bigr]_{+},\ f_{\lambda}(x,z)\leq f_{\lambda}(x_{0},z_{0}) \bigr\} $$ (7)
is nonempty and compact, where \(x_{0}\in\mathbb{R}^{n}\) and \(z_{0}=[x_{0}-F(x_{0})]_{+}\) are given points. The solution set of (5) is nonempty and bounded since \(f_{\lambda}(x,z)\) is continuous on L.
Now we show the second part of the theorem. Let \(\widehat{x}\in\operatorname{SOL}(F)\) and \(\widehat{z}=[\widehat{x}-F(\widehat{x})]_{+}\). From (5), we have \(\widehat{x}=\widehat{z}\). Since \((\widehat{x}_{\lambda_{k}}, \widehat{z}_{\lambda_{k}})\) is a solution of (5) with \(\lambda=\lambda_{k}\), where \(\widehat{z}_{\lambda_{k}}=[\widehat{x}_{\lambda_{k}}-F(\widehat{x}_{\lambda_{k}})]_{+}\), it follows that
$$ \|\widehat{x}_{\lambda_{k}}-\widehat{z}_{\lambda_{k}}\|^{2}+\lambda_{k}\|\widehat{x}_{\lambda_{k}}\|_{1}=f_{\lambda_{k}}(\widehat{x}_{\lambda_{k}},\widehat{z}_{\lambda_{k}})\leq f_{\lambda_{k}}(\widehat{x},\widehat{z})=\lambda_{k}\|\widehat{x}\|_{1}. $$
From the above inequality, we derive that, for any \(\lambda_{k}>0\),
$$ \|\widehat{x}_{\lambda_{k}}\|_{1}\leq\|\widehat{x}\|_{1} \quad\text{and}\quad \|\widehat{x}_{\lambda_{k}}-\widehat{z}_{\lambda_{k}}\|^{2}\leq\lambda_{k}\|\widehat{x}\|_{1}. $$ (8)
Hence the sequence \(\{\widehat{x}_{\lambda_{k}}\}\) is bounded and has at least one cluster point. Note that the sequence \(\{\widehat{z}_{\lambda_{k}}\}\) is also bounded because \(\|\widehat{x}_{\lambda_{k}}-\widehat{z}_{\lambda_{k}}\|^{2}\leq\lambda_{k}\|\widehat{x}\|_{1}\leq\lambda_{0}\|\widehat{x}\|_{1}\) (\(\lambda_{k}\to0\)).
Let \(x^{*}\) and \(z^{*}\) be any cluster points of \(\{\widehat{x}_{\lambda_{k}}\}\) and \(\{\widehat{z}_{\lambda_{k}}\}\), respectively. Then there exists a subsequence of \(\{\lambda_{k}\}\), say \(\{\lambda_{k_{j}}\}\), such that
$$ \widehat{x}_{\lambda_{k_{j}}}\rightarrow x^{*}, \qquad \widehat{z}_{\lambda_{k_{j}}}\rightarrow z^{*} \quad (j\rightarrow\infty). $$
We can obtain \(z^{*}=[x^{*}-F(x^{*})]_{+}\) by letting \(k_{j}\) tend to ∞ in \(\widehat{z}_{\lambda_{k_{j}}}=[\widehat{x}_{\lambda_{k_{j}}}-F(\widehat{x}_{\lambda_{k_{j}}})]_{+}\). Letting \(\lambda_{k_{j}}\) tend to 0 in
$$ \|\widehat{x}_{\lambda_{k_{j}}}-\widehat{z}_{\lambda_{k_{j}}}\|^{2}\leq\lambda_{k_{j}}\|\widehat{x}\|_{1} $$
yields \(x^{*}=z^{*}\). Consequently, \(x^{*}=[x^{*}-F(x^{*})]_{+}\) follows, which manifests \(x^{*}\in\operatorname{SOL}(F)\). From (8), namely \(\|\widehat{x}_{\lambda_{k_{j}}}\|_{1}\le\|\widehat{x}\|_{1}\), letting \(k_{j}\) tend to ∞, we get \(\|x^{*}\|_{1}\le\|\widehat{x}\|_{1}\). Then by the arbitrariness of \(\widehat{x}\in\operatorname{SOL}(F)\), we know \(x^{*}\) is a solution of problem (2). This completes the proof. □
Algorithm and convergence
In this section, we suggest the extragradient thresholding algorithm (ETA) to solve the \(\ell_{1}\) regularized projection minimization problem (5) and give the convergence analysis of ETA.
First we state some basic concepts of monotonicity and some properties of the projection operator. Let \(P_{K}(\cdot)\) denote the projection operator from \(\mathbb{R}^{n}\) onto K, a nonempty closed convex subset of \(\mathbb{R}^{n}\). From the definition of the projection operator, it follows that
$$ \bigl\langle x-P_{K}(x), y-P_{K}(x) \bigr\rangle\leq0, \quad \forall y\in K. $$ (9)
Consequently, we have
$$ \bigl\|P_{K}(x)-P_{K}(y) \bigr\|\leq\|x-y\|, \quad \forall x, y\in\mathbb{R}^{n}. $$ (10)
Lemma 3.1
[18]
Define the residual function
$$ e(x,\alpha)=x- \bigl[x-\alpha F(x) \bigr]_{+}. $$
The following statements are valid.

(a)
\(\forall\alpha>0\), \(F(x)^{\top}e(x,\alpha)\geq\frac{\|e(x,\alpha)\|^{2}}{\alpha}\);

(b)
for any \(\alpha>0\), \(\frac{\|e(x,\alpha)\|}{\alpha}\) is nonincreasing in α;

(c)
for any \(\alpha\geq0\), \(\|e(x,\alpha)\|\) is nondecreasing in α.
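These monotonicity properties are easy to probe numerically. A small sketch, using an illustrative monotone mapping that is not from the paper:

```python
import numpy as np

def e_res(x, alpha, F):
    """Projection residual e(x, alpha) = x - [x - alpha*F(x)]_+."""
    return x - np.maximum(x - alpha * F(x), 0.0)

F = lambda v: v + 1.0                      # illustrative monotone mapping
x = np.array([0.5, 2.0])
alphas = [0.1, 0.5, 1.0, 2.0]
norms = [np.linalg.norm(e_res(x, a, F)) for a in alphas]
ratios = [nrm / a for nrm, a in zip(norms, alphas)]

# (c): ||e(x, alpha)|| is nondecreasing in alpha
assert all(u <= v + 1e-12 for u, v in zip(norms, norms[1:]))
# (b): ||e(x, alpha)|| / alpha is nonincreasing in alpha
assert all(u >= v - 1e-12 for u, v in zip(ratios, ratios[1:]))
```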
In this paper, we suppose the mapping \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is cocoercive on a subset K of \(\mathbb{R}^{n}\). That is, there exists a constant \(c>0\) such that
$$ \bigl\langle F(x)-F(y), x-y \bigr\rangle\geq c \bigl\|F(x)-F(y) \bigr\|^{2}, \quad \forall x, y\in K. $$ (11)
It is clear that a cocoercive mapping is monotone, namely,
$$ \bigl\langle F(x)-F(y), x-y \bigr\rangle\geq0, \quad \forall x, y\in K, $$
but not necessarily strongly monotone, i.e., such that there is a constant \(c>0\) with
$$ \bigl\langle F(x)-F(y), x-y \bigr\rangle\geq c\|x-y\|^{2}, \quad \forall x, y\in K. $$
Remark 3.1
Every affine monotone function which is also symmetric must be cocoercive (on \(\mathbb{R}^{n}\)). The Euclidean projector \(P_{K}\) and \(I-P_{K}\) are both cocoercive functions [2, 19].
Lemma 3.2
Suppose that \(F(\cdot)\) is cocoercive on K with modulus \(c>0\). Then for any given positive real number α with \(c>\alpha/2\), the operator \(I-\alpha F\) is nonexpansive, that is, for any \(x, y \in K\),
$$ \bigl\|(I-\alpha F) (x)-(I-\alpha F) (y) \bigr\|\leq\|x-y\|. $$ (12)
Proof
For any \(x, y \in K\), when \(c>\alpha/2\), using the cocoercivity of F, it follows that
$$\begin{aligned} \bigl\|(I-\alpha F) (x)-(I-\alpha F) (y) \bigr\|^{2} &=\|x-y\|^{2}-2\alpha \bigl\langle x-y, F(x)-F(y) \bigr\rangle+\alpha^{2} \bigl\|F(x)-F(y) \bigr\|^{2} \\ &\leq\|x-y\|^{2}-\alpha(2c-\alpha) \bigl\|F(x)-F(y) \bigr\|^{2}\leq\|x-y\|^{2}, \end{aligned}$$
which shows \(I-\alpha F\) is nonexpansive. □
For given \(z^{k}\in\mathbb{R}^{n}_{+}\) and \(\lambda_{k}>0\), we consider the unconstrained minimization subproblem
$$ \min_{x\in\mathbb{R}^{n}}\ \bigl\|x-z^{k} \bigr\|^{2}+\lambda_{k}\|x\|_{1}. $$ (13)
Evidently, the minimizer \(x^{s}\) of the model (13) must satisfy the corresponding optimality condition
$$ x^{s}=S_{\lambda_{k}} \bigl(z^{k} \bigr), $$ (14)
where the shrinkage operator \(S_{\lambda}\) is defined by
$$ \bigl(S_{\lambda}(z) \bigr)_{i}=\operatorname{sign}(z_{i})\max \biggl\{ |z_{i}|-\frac{\lambda}{2}, 0 \biggr\}, \quad i=1,\ldots,n. $$ (15)
Evidently, the shrinkage operator \(S_{\lambda}\) acts componentwise, i.e., \((S_{\lambda}(z))_{i}=S_{\lambda}(z_{i})\). Moreover, it is nonexpansive, i.e., \(\|S_{\lambda}(x)-S_{\lambda}(y)\|\leq\|x-y\|\) for any \(x, y\in\mathbb{R}_{+}^{n}\); see [20]. This demonstrates that the solution \(x\in\mathbb{R}^{n}\) of the subproblem (13) can be expressed analytically by (14).
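In code, the shrinkage step is one line. The sketch below assumes the usual soft-thresholding form with threshold \(\lambda/2\) for the objective \(\|x-z\|^{2}+\lambda\|x\|_{1}\):

```python
import numpy as np

def shrink(z, lam):
    """Soft-thresholding operator S_lambda (assumed form): componentwise
    minimizer of ||x - z||^2 + lam * ||x||_1, threshold lam/2."""
    return np.sign(z) * np.maximum(np.abs(z) - lam / 2.0, 0.0)

z = np.array([1.0, 0.2, -0.6])
print(shrink(z, 1.0))  # components: 0.5, 0.0, -0.1
```

Nonexpansiveness can be spot-checked on random pairs of vectors.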
By the solution representation, we construct the following extragradient thresholding algorithm (ETA) to solve the \(\ell_{1}\) regularized projection minimization problem (5).

Input: c, the cocoercivity modulus of F.

Step 0: Choose \(0\ne z^{0}\in\mathbb{R}_{+}^{n}\), \(\lambda_{0},\beta>0\), \(\tau,\gamma,\mu\in(0,1)\), \(\beta\gamma<2c\), \(\epsilon>0\) and integers \(n_{\max}>K_{0}>0\). Set \(k=0\).

Step 1: Compute
$$\begin{aligned}[b] &x^{k}=S_{\lambda_{k}} \bigl(z^{k} \bigr), \\ &y^{k}= \bigl[x^{k}-\alpha_{k}F \bigl(x^{k} \bigr) \bigr]_{+}, \end{aligned} $$where \(\alpha_{k}=\beta\gamma^{m_{k}}\) with \(m_{k}\) being the smallest nonnegative integer satisfying
$$ \bigl\|F \bigl(x^{k} \bigr)-F \bigl(y^{k} \bigr) \bigr\| \leq\mu\frac{\|x^{k}-y^{k}\|}{\alpha_{k}}. $$(16) 
Step 2: If \(\|x^{k}-z^{k}\|\le\epsilon\) or the number of iterations is greater than \(n_{\max}\), then return \(z^{k}\), \(x^{k}\), \(y^{k}\) and stop. Otherwise, compute
$$\begin{aligned} z^{k+1}= \bigl[x^{k}-\alpha_{k}F \bigl(y^{k} \bigr) \bigr]_{+} \end{aligned}$$and update \(\lambda_{k+1}\) by
$$\begin{aligned} \lambda_{k+1}=\left \{ \begin{array}{l@{\quad}l} \tau\lambda_{k}, & \mbox{if } k+1 \mbox{ is a multiple of } K_{0}, \\ \lambda_{k}, & \mbox{otherwise}, \end{array} \right . \end{aligned}$$then set \(k=k+1\) and go to Step 1.
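The steps above can be sketched as follows. This is a minimal illustration, not the authors' MATLAB code; it assumes the soft-thresholding form of \(S_{\lambda}\) with threshold \(\lambda_{k}/2\):

```python
import numpy as np

def shrink(z, lam):
    # Assumed soft-thresholding form of the shrinkage operator S_lambda.
    return np.sign(z) * np.maximum(np.abs(z) - lam / 2.0, 0.0)

def eta(F, n, c, lam0=0.2, tau=0.75, gamma=0.1, eps=1e-6, n_max=2000, K0=5):
    """Sketch of the extragradient thresholding algorithm (ETA).
    F is the NCP mapping, c its cocoercivity modulus."""
    beta, mu = 2.0 * c, 1.0 / c       # parameter choices used in the experiments
    z, lam = np.ones(n), lam0         # z^0 = e
    x = shrink(z, lam)
    for k in range(n_max):
        x = shrink(z, lam)                            # Step 1: thresholding
        alpha = beta                                  # alpha_k = beta * gamma^{m_k}
        y = np.maximum(x - alpha * F(x), 0.0)
        # backtrack until the line search rule (16) holds
        while alpha * np.linalg.norm(F(x) - F(y)) > mu * np.linalg.norm(x - y) \
                and alpha > 1e-16:
            alpha *= gamma
            y = np.maximum(x - alpha * F(x), 0.0)
        if np.linalg.norm(x - z) <= eps:              # Step 2: stopping test
            break
        z = np.maximum(x - alpha * F(y), 0.0)         # extragradient update
        if (k + 1) % K0 == 0:
            lam *= tau                                # shrink the regularization
    return x
```

For instance, with the illustrative LCP mapping \(F(v)=v+q\), \(q=(-1,1)^{\top}\) (cocoercive with \(c=1\)), the returned iterate approaches the sparse solution \((1,0)^{\top}\).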
Before analyzing the convergence of ETA, we first present a key lemma as regards cocoercive mapping.
Lemma 3.3
Suppose that the mapping F is cocoercive and \(\operatorname{SOL}(F)\neq\emptyset\). If \(x^{k}\) generated by ETA is not a solution of \(\operatorname{NCP}(F)\), then for any \(\widehat{x}\in\operatorname{SOL}(F)\), we have
$$ \bigl\|z^{k+1}-\widehat{x} \bigr\|^{2}\leq \bigl\|x^{k}-\widehat{x} \bigr\|^{2}- \bigl(1-\mu^{2} \bigr) \bigl\|x^{k}-y^{k} \bigr\|^{2}. $$ (18)
Proof
For any \(\widehat{x}\in\operatorname{SOL}(F)\), we have \(F(\widehat{x})^{\top}\widehat{x}=0\). Since \(y^{k}\in\mathbb{R}^{n}_{+}\), it follows that \(\langle F(\widehat{x}), y^{k}-\widehat{x}\rangle\geq0\). It is clear that a cocoercive mapping is pseudomonotone, that is,
$$ \bigl\langle F(x), y-x \bigr\rangle\geq0 \quad\Longrightarrow\quad \bigl\langle F(y), y-x \bigr\rangle\geq0. $$ (17)
By the definition of pseudomonotonicity, it follows that \(\langle F(y^{k}), y^{k}-\widehat{x}\rangle\geq0\). Hence, combining this with the projection property (9), Lemma 3.1, and the line search rule (16), the desired inequality (18) follows. □
We now begin to analyze the convergence of the proposed ETA.
Theorem 3.1
Suppose that the mapping F is cocoercive with modulus \(c>\beta\gamma /2\) and \(\operatorname{SOL}(F)\neq\emptyset\). Let \(\{(z^{k}, x^{k}, y^{k})\}\) and \(\{\lambda_{k}\}\) be sequences generated by ETA, then

(i)
the sequences \(\{z^{k}\}\), \(\{x^{k}\}\), and \(\{y^{k}\}\) are all bounded;

(ii)
any cluster point of the sequence \(\{x^{k}\}\) is a solution of \(\operatorname{NCP}(F)\).
Proof
(i) Let \(\widehat{x}\in\operatorname{SOL}(F)\). By the definition (15) of the operator \(S_{\lambda}\) and its nonexpansiveness, we have
$$ \bigl\|x^{k}-\widehat{x} \bigr\|\leq \bigl\|z^{k}-\widehat{x} \bigr\|+ \bigl\|S_{\lambda_{k}}(\widehat{x})-\widehat{x} \bigr\|\leq \bigl\|z^{k}-\widehat{x} \bigr\|+\frac{\sqrt{n}}{2}\lambda_{k}. $$ (19)
In view of \(\widehat{x}\in\operatorname{SOL}(F)\), we have \(\widehat{x}=[\widehat{x}-\alpha_{k}F(\widehat{x})]_{+}\). Since \(c>\beta\gamma/2>\alpha_{k}/2\), by Lemma 3.2, we see that \(I-\alpha_{k}F\) is nonexpansive. Together with the nonexpansive property of the projection operator, it follows that
By \(y^{k}=[x^{k}-\alpha_{k}F(x^{k})]_{+}\) and (9), it follows that
Substituting (21) into (20), by (16) and (18), we deduce
Hence, by definition of \(\lambda_{k}\), it follows that
which shows \(\{z^{k}\}\) is bounded. Together with (18) and (19), we see that \(\{x^{k}\}\) and \(\{y^{k}\}\) are both bounded.
(ii) Now we prove \(\lim_{k\rightarrow\infty}\|x^{k}-y^{k}\|=0\). By (22) and (18), it follows that
which leads to the following inequality:
where the third inequality holds from (23), and thus we have \(\lim_{k\rightarrow\infty}\|x^{k}-y^{k}\|=0\).
Since \(\{x^{k}\}\) is bounded, \(\{x^{k}\}\) has at least one cluster point. Let \(x^{*}\) be a cluster point of \(\{x^{k}\}\) and a subsequence \(\{ x^{k_{j}}\}\) converge to \(x^{*}\). Next we will show \(x^{*}\in\operatorname{SOL}(F)\).
If there is a positive lower bound \(\alpha_{\min}\) such that \(\alpha_{k_{j}}\geq\alpha_{\min}>0\), from Lemma 3.1(b) and (c), we get
where \(e(x,\alpha)=x-[x-\alpha F(x)]_{+}\). Together with the continuity of \(e(x,\alpha)\) in x and \(\lim_{k\rightarrow\infty}\|x^{k}-y^{k}\|=0\), we have
If \(\lim_{j\rightarrow\infty}\alpha_{k_{j}}=0\), then for sufficiently large \(k_{j}\), by Lemma 3.1(b) and (16), we get
where \(\overline{y}^{k_{j}}=[x^{k_{j}}-\frac{1}{\beta}\alpha_{k_{j}}F(x^{k_{j}})]_{+}\). Taking the limit in the above inequality, we have
It means \(x^{*}=[x^{*}-F(x^{*})]_{+}\). Hence we get \(x^{*}\in\operatorname{SOL}(F)\). The proof is thus complete. □
Numerical experiments
In this section, we present some numerical experiments to demonstrate the effectiveness of the ETA algorithm. All the numerical experiments were performed on a laptop (2.50 GHz, 6.00 GB of RAM) using MATLAB R2013a.
We implement the ETA algorithm on three examples. Each is run 100 times for different dimensions, and the average results are recorded. In each experiment, we set \(z^{0}=e\), \(\beta=2c\), \(\gamma=0.1\), \(\mu=1/c\), \(n_{\max}=2{,}000\).
Test for LCPs with Z-matrix [5]
The test involves a Z-matrix, which has an important property: there is a unique sparse solution of the LCP when M is a certain kind of Z-matrix [5]. Let us consider \(\operatorname{LCP}(q,M)\), where
Here \(I_{n}\) is the identity matrix of order n and \(e=(1,1,\ldots,1)^{\top}\in\mathbb{R}^{n}\). Such a matrix M is widely used in statistics. It is clear that M is a positive semidefinite Z-matrix. For any scalar \(a\ge0\), the vector \(x=a e+e_{1}\) is a solution to \(\operatorname{LCP}(q,M)\), since it satisfies the complementarity conditions.
Among all the solutions, the vector \(\hat{x}=e_{1}=(1,0,\ldots,0)^{\top}\) is the unique sparse solution.
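The displayed definition of (q, M) did not survive reproduction here, so the sketch below instantiates a hypothetical pair consistent with every stated property (built from \(I_{n}\) and e, positive semidefinite, a Z-matrix, common in statistics): the centering matrix \(M=I_{n}-ee^{\top}/n\) with \(q=e/n-e_{1}\). Under that assumption, every \(x=ae+e_{1}\), \(a\ge0\), solves the LCP, and \(e_{1}\) is the sparsest:

```python
import numpy as np

n = 5
e = np.ones(n)
M = np.eye(n) - np.outer(e, e) / n      # hypothetical M: centering matrix (PSD Z-matrix)
e1 = np.zeros(n); e1[0] = 1.0
q = e / n - e1                          # hypothetical q chosen to match this M

def lcp_residual(x):
    # ||min(x, Mx + q)||: zero iff x solves LCP(q, M)
    return np.linalg.norm(np.minimum(x, M @ x + q))

for a in [0.0, 1.0, 3.7]:               # a family of solutions x = a*e + e1 ...
    assert lcp_residual(a * e + e1) < 1e-12
print(np.count_nonzero(e1))             # ... but only e1 has a single nonzero: 1
```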
We choose \(z^{0}=e\), \(c=1\), \(\lambda_{0}=0.2\), \(\beta=2c\), \(\tau=0.75\), \(\gamma=0.1\), \(\mu=1/c\), \(\epsilon=10^{-6}\), \(n_{\max}=2{,}000\), \(K_{0}=5\). We use the recovery error \(\|x-\hat{x}\|\) to evaluate our algorithm. In addition, the average CPU time (in seconds), the average number of iterations, and the residual \(\|x-z\|\) are also taken into consideration in judging the performance of the method.
As indicated in Table 1, the ETA algorithm is very robust: the average number of iterations is identically equal to 205, and the recovery error \(\|x-\hat{x}\|\) and residual \(\|x-z\|\) are basically similar across dimensions. In addition, the sparsity \(\|x\|_{0}\) of the recovered solution x equals 1 in all cases, which means the recovery is successful. Most importantly, the ETA algorithm is exceptionally fast: only 36.70 seconds are needed to solve the problem with dimension \(n=10{,}000\).
In order to illustrate the effectiveness of the proposed ETA algorithm, we compare it with another method for tackling such LCPs. In [5], the authors established an \(\ell_{p}\) (\(0< p<1\)) regularized minimization model:
and designed a sequential smoothing gradient (SSG) method to solve the \(l_{p}\) regularized model and get a sparse solution of \(\operatorname{LCP}(q,M)\). The results are displayed in Table 2.
As can be discerned from Table 2, where an empty entry denotes that the method is invalid, although the sparsity \(\|x\|_{0}\) of the recovered solution equals 1 in all cases and the recovery errors \(\|x-\hat{x}\|\) are quite small, the average CPU time increases dramatically with the matrix dimension n. This indicates that the SSG method for LCPs is appropriate only for small-dimensional data sets and thus will not be appealing when n is relatively large. Compared with the SSG method, the ETA algorithm is clearly superior in CPU time and in the size of the solvable problems.
Test for LCPs with positive semidefinite matrices
In this subsection, we test ETA on randomly generated LCPs with positive semidefinite matrices. First, we state the way of constructing the LCPs and their solutions. A matrix \(Z\in\mathbb{R}^{n\times r}\) (\(r< n\)) is generated with the standard normal distribution and \(M=ZZ^{\top}\). The sparse vector \(\hat{x}\) is produced by randomly choosing \(s=0.01 n\) nonzero components whose values are also randomly generated from a standard normal distribution. After the matrix M and the sparse vector \(\hat{x}\) have been generated, a vector \(q\in\mathbb{R}^{n}\) is constructed such that \(\hat{x}\) is a solution of the \(\operatorname{LCP}(q,M)\). Then \(\hat{x}\) can be regarded as a sparse solution of the \(\operatorname{LCP}(q,M)\).
To be more specific, if \(\hat{x}_{i}>0\) we choose \(q_{i}=-(M\hat{x})_{i}\), and if \(\hat{x}_{i}=0\) we choose \(q_{i}=|(M\hat{x})_{i}|-(M\hat{x})_{i}\). Let M and q be the input to our ETA algorithm and take \(z^{0}=e\), \(c=\max(\operatorname{svd}(M))\), \(\lambda_{0}=0.02\), \(\beta=2c\), \(\tau=0.75\), \(\gamma=0.1\), \(\mu=1/c\), \(\epsilon=10^{-10}\), \(n_{\max}=2{,}000\), \(K_{0}=\max(2, \operatorname{floor}(10{,}000/n))\). Then ETA outputs a solution x. As before, the recovery error \(\|x-\hat{x}\|\), the average CPU time (in seconds), the average number of iterations, and the residual \(\|x-z\|\) are taken into consideration in evaluating our ETA algorithm.
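The construction just described can be sketched as follows (the sign pattern for q is the one given above; taking absolute values for the nonzero entries of \(\hat{x}\) is our assumption, since a solution must be nonnegative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 20
Z = rng.standard_normal((n, r))
M = Z @ Z.T                                       # positive semidefinite, rank <= r

s = max(1, int(0.01 * n))                         # s = 0.01*n nonzero components
x_hat = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_hat[support] = np.abs(rng.standard_normal(s))   # keep x_hat >= 0 (assumption)

# q_i = -(M x_hat)_i                 if x_hat_i > 0  -> (Mx+q)_i = 0
# q_i = |(M x_hat)_i| - (M x_hat)_i  if x_hat_i = 0  -> (Mx+q)_i >= 0
Mx = M @ x_hat
q = np.where(x_hat > 0, -Mx, np.abs(Mx) - Mx)

# x_hat is a (sparse) solution of LCP(q, M)
assert np.linalg.norm(np.minimum(x_hat, M @ x_hat + q)) < 1e-10
```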
As shown in Table 3, the ETA algorithm performs quite efficiently. Furthermore, the sparsity \(\|x\|_{0}\) of the recovered solution x is in all cases equal to the sparsity \(\|\hat{x}\|_{0}\), which means the recovery is exact. Likewise, the ETA algorithm is exceptionally fast in this example: only 46.67 seconds are needed to find the sparse solution of the LCP when the dimension \(n=7{,}000\).
Test for cocoercive nonlinear complementarity problem
We now consider a cocoercive nonlinear complementarity problem (NCP) with
$$ F(x)=D(x)+Mx+q, $$
where \(D(x)\) and \(Mx+q\) are the nonlinear part and the linear part of \(F(x)\), respectively. We form \(F(x)\) similarly as in [21, 22]. The matrix \(M=A^{\top}A+B\), where A is an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5, 5)\), and a skew-symmetric matrix B is generated in the same way. In \(D(x)\), the nonlinear part of \(F(x)\), the components are
and \(a_{j}\) is a random variable in \((0,1)\). Then the subsequent part, generating the sparse vector \(\hat{x}\) and the vector \(q\in\mathbb{R}^{n}\) such that
is similar to the procedure of Section 4.2. Let M and q be the input to our ETA algorithm and take \(z^{0}=e\), \(c=150\log(n)\), \(\lambda_{0}=0.2\), \(\beta=2c\), \(\tau=0.75\), \(\gamma=0.1\), \(\mu=1/c\), \(\epsilon=10^{-6}\), \(n_{\max}=2{,}000\), \(K_{0}=\max(2, \operatorname{floor}(10{,}000/n))\), and \(a=\operatorname{rand}(n,1)\). Then ETA outputs a solution x. As before, the average number of iterations, the average residual \(\|x-z\|\), the average sparsity \(\|x\|_{0}\) of x, and the average CPU time (in seconds) are taken into consideration in evaluating our ETA algorithm.
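The displayed formula for the components of \(D(x)\) did not survive reproduction here; a common choice in the test problems of [21, 22] is \(D_{j}(x)=a_{j}\arctan(x_{j})\), and the sketch below adopts it purely as an assumption, reusing the q-construction of Section 4.2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
A = rng.uniform(-5, 5, (n, n))
C = rng.uniform(-5, 5, (n, n))
B = C - C.T                                  # skew-symmetric matrix
M = A.T @ A + B                              # linear part: PSD plus skew-symmetric
a = rng.uniform(0, 1, n)                     # a = rand(n,1)

def F(x, q):
    # Hypothetical nonlinear part D_j(x) = a_j * arctan(x_j) (our assumption).
    return a * np.arctan(x) + M @ x + q

# Sparse x_hat and matching q, as in Section 4.2.
x_hat = np.zeros(n)
x_hat[rng.choice(n, size=1)] = 1.0
G = a * np.arctan(x_hat) + M @ x_hat
q = np.where(x_hat > 0, -G, np.abs(G) - G)

assert np.linalg.norm(np.minimum(x_hat, F(x_hat, q))) < 1e-10
```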
It is not difficult to see from Table 4 that the ETA algorithm also performs quite efficiently on such nonlinear complementarity problems. The sparsity \(\|x\|_{0}\) of the recovered solution x is in all cases equal to the sparsity \(\|\hat{x}\|_{0}\), that is, the recovery is exact. What is also striking is that the ETA algorithm is exceptionally fast in this example as well, with only 144.99 seconds being needed to solve the sparse NCP when the dimension \(n=10{,}000\).
Conclusions
In this paper, we concentrate on finding sparse solutions of cocoercive nonlinear complementarity problems (NCPs). An \(\ell_{1}\) regularized projection minimization model is proposed as a relaxation, and an extragradient thresholding algorithm (ETA) is then designed for this regularized model. Furthermore, we analyze the convergence of this algorithm and show that any cluster point of the sequence generated by ETA is a solution of the NCP. Preliminary numerical results indicate that the \(\ell_{1}\) regularized model and the ETA are promising for finding sparse solutions of NCPs.
References
Cottle, RW, Pang, JS, Stone, RE: The Linear Complementarity Problem. Academic Press, Boston (1992)
Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research, vols. I and II. Springer, New York (2003)
Ferris, MC, Mangasarian, OL, Pang, JS: Complementarity: Applications, Algorithms and Extensions. Kluwer Academic, Dordrecht (2001)
Xie, J, He, S, Zhang, S: Randomized portfolio selection with constraints. Pac. J. Optim. 4, 87-112 (2008)
Shang, M, Zhang, C, Xiu, N: Minimal zero norm solutions of linear complementarity problems. J. Optim. Theory Appl. 163, 795-814 (2014)
Candès, EJ, Randall, PA: Highly robust error correction by convex programming. IEEE Trans. Inf. Theory 54, 2829-2840 (2006)
Candès, EJ, Recht, B: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717-772 (2008)
Candès, EJ, Romberg, J, Tao, T: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207-1223 (2006)
Donoho, DL: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289-1306 (2006)
Fukushima, M, Pang, JS: Some feasibility issues in mathematical programs with equilibrium constraints. SIAM J. Optim. 8, 673-681 (1998)
Fukushima, M, Tseng, P: An implementable active-set algorithm for computing a B-stationary point of a mathematical program with linear complementarity constraints. SIAM J. Optim. 12, 724-739 (2002)
Lin, G, Fukushima, M: New reformulations for stochastic nonlinear complementarity problems. Optim. Methods Softw. 21, 551-564 (2006)
Luo, ZQ, Pang, JS, Ralph, D: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)
Figueiredo, MAT, Nowak, RD: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 12, 906-916 (2003)
Starck, JL, Donoho, DL, Candès, EJ: Astronomical image representation by the curvelet transform. Astron. Astrophys. 398, 785-800 (2003)
Daubechies, I, Defrise, M, De Mol, C: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413-1457 (2004)
Figueiredo, MAT, Nowak, RD, Wright, SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1, 586-597 (2007)
Zhu, T, Yu, ZG: A simple proof for some important properties of the projection mappings. Math. Inequal. Appl. 7, 453-456 (2004)
Zhu, DL, Marcotte, P: Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities. SIAM J. Optim. 6, 714-726 (1996)
Hale, ET, Yin, W, Zhang, Y: Fixed-point continuation for \(\ell_{1}\)-minimization: methodology and convergence. SIAM J. Optim. 19, 1107-1130 (2008)
He, BS, Liao, LZ: Improvements of some projection methods for monotone nonlinear variational inequalities. J. Optim. Theory Appl. 112(1), 111-128 (2002)
Yan, X, Han, D, Sun, W: A modified projection method with a new direction for solving variational inequalities. Appl. Math. Comput. 211, 118-129 (2009)
Acknowledgements
We would like to thank the two referees for their valuable comments. This research was supported by the National Natural Science Foundation of China (71271021, 11001011, 11431002), the Fundamental Research Funds for the Central Universities of China (2013JBZ005) and the Scientific Research Fund of Hebei Provincial Education Department (No. QN20132030).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors completed this paper together. All authors read and approved the final manuscript.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Cite this article
Shang, M., Zhou, S. & Xiu, N. Extragradient thresholding methods for sparse solutions of co-coercive NCPs. J Inequal Appl 2015, 34 (2015). https://doi.org/10.1186/s13660-015-0551-5
MSC
 90C33
 90C26
 90C90
Keywords
 nonlinear complementarity problems
 sparse solution
 cocoercive
 \(\ell_{1}\) relaxation
 extragradient
 convergence