 Research
 Open Access
Extragradient thresholding methods for sparse solutions of cocoercive NCPs
 Meijuan Shang^{1, 2},
 Shenglong Zhou^{3} and
 Naihua Xiu^{1}
https://doi.org/10.1186/s13660-015-0551-5
© Shang et al.; licensee Springer. 2015
 Received: 28 September 2014
 Accepted: 6 January 2015
 Published: 28 January 2015
Abstract
In this paper, we aim to find sparse solutions of cocoercive nonlinear complementarity problems (NCPs). Mathematically, the underlying model is NP-hard in general. Thus an \(\ell_{1}\) regularized projection minimization model is proposed as a relaxation, and an extragradient thresholding algorithm (ETA) is then designed for this regularized model. Furthermore, we analyze the convergence of this algorithm and show that any cluster point of the sequence generated by ETA is a solution of the NCP. Numerical results demonstrate that ETA can effectively solve the \(\ell_{1}\) regularized model and output very sparse solutions of cocoercive NCPs with high quality.
Keywords
 nonlinear complementarity problems
 sparse solution
 cocoercive
 \(\ell_{1}\) relaxation
 extragradient
 convergence
MSC
 90C33
 90C26
 90C90
1 Introduction
NCPs have various important applications in economics and engineering, such as Nash equilibrium problems, traffic equilibrium problems, contact mechanics problems, and option pricing. NCPs have been studied extensively; see [1–3] and the references therein. Numerical methods for solving NCPs, such as the filter method, continuation method, nonsmooth Newton method, smoothing Newton methods, Levenberg-Marquardt method, projection method, descent method, and interior-point method, have been investigated at length in the literature. However, sparse solutions of NCPs seem to have received little attention, even though in many applications it is important to find them, for example in bimatrix games [1] and portfolio selection [4]. For more details, see [5].
The above minimization problem (1) is in fact a sparse optimization problem [6–9] with equilibrium constraints. From the viewpoint of the objective function, it is an \(\ell_{0}\)-norm minimization problem, which is combinatorial and NP-hard in general. From the viewpoint of the constraints, it is a mathematical program with equilibrium constraints (MPEC) [10–13]. Such problems are hard to solve because of the equilibrium constraints, even when the objective function is continuous.
This paper is organized as follows. In Section 2, we approximate (1) by the \(\ell_{1}\) regularization projection minimization problem (5), and we show theoretically that (5) is a good approximation. In Section 3, we propose an extragradient thresholding algorithm (ETA) for (5) and also analyze the convergence of this algorithm. Numerical results are demonstrated in Section 4 to show that (5) is promising in providing a sparse solution of cocoercive NCPs.
2 The \(\ell_{1}\) regularized approximation
In this section, we study the relation between the solutions of model (5) and those of model (2).
Theorem 2.1
For any fixed \(\lambda>0\), the solution set of (5) is nonempty and bounded. Let \((\widehat{x}_{\lambda_{k}},\widehat{z}_{\lambda_{k}})\) be a solution of (5) with \(\lambda=\lambda_{k}\), where \(\{\lambda_{k}\}\) is any positive sequence converging to 0. If \(\operatorname{SOL}(F)\ne\emptyset\), then \(\{(\widehat{x}_{\lambda_{k}},\widehat{z}_{\lambda_{k}})\}\) has at least one accumulation point, and any accumulation point \(x^{*}\) of \(\{\widehat{x}_{\lambda_{k}}\}\) is a solution of (2).
Proof
3 Algorithm and convergence
In this section, we propose the extragradient thresholding algorithm (ETA) for solving the \(\ell_{1}\) regularized projection minimization problem (5) and give its convergence analysis.
Lemma 3.1
[18]
 (a)
\(\forall\alpha>0\), \(F(x)^{\top}e(x,\alpha)\geq\frac{\|e(x,\alpha)\|^{2}}{\alpha}\);
 (b)
for any \(\alpha>0\), \(\frac{\|e(x,\alpha)\|}{\alpha}\) is nonincreasing in \(\alpha\);
 (c)
for any \(\alpha\geq0\), \(\|e(x,\alpha)\|\) is nondecreasing in \(\alpha\).
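These properties can be spot-checked numerically. The sketch below is an illustration only: it assumes \(e(x,\alpha)=x-[x-\alpha F(x)]_{+}\) is the natural residual of \(\operatorname{NCP}(F)\), takes \(x\in\mathbb{R}^{n}_{+}\), and uses a monotone affine map \(F\) of our own choosing.

```python
import numpy as np

def residual(F, x, alpha):
    """Natural residual e(x, alpha) = x - [x - alpha*F(x)]_+ for NCP(F)."""
    return x - np.maximum(x - alpha * F(x), 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T                          # positive semidefinite, so F is monotone
q = rng.standard_normal(5)
F = lambda v: M @ v + q

x = np.abs(rng.standard_normal(5))   # property (a) is stated for x in R^n_+
alphas = np.linspace(0.1, 5.0, 50)
norms = np.array([np.linalg.norm(residual(F, x, a)) for a in alphas])

# (b) ||e(x, alpha)|| / alpha is nonincreasing in alpha
assert np.all(np.diff(norms / alphas) <= 1e-10)
# (c) ||e(x, alpha)|| is nondecreasing in alpha
assert np.all(np.diff(norms) >= -1e-10)
# (a) F(x)^T e(x, alpha) >= ||e(x, alpha)||^2 / alpha
for a in alphas:
    e = residual(F, x, a)
    assert F(x) @ e >= np.linalg.norm(e) ** 2 / a - 1e-10
```

All three inequalities hold to rounding error for every sampled \(\alpha\), as the lemma predicts.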
Remark 3.1
Every monotone affine function that is also symmetric is cocoercive (on \(\mathbb{R}^{n}\)). The Euclidean projector \(P_{K}\) and \(I-P_{K}\) are both cocoercive functions [2, 19].
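The cocoercivity of \(P_{K}\) and \(I-P_{K}\) (with modulus 1, i.e. firm nonexpansiveness) can also be verified numerically; a minimal sketch, taking \(K=\mathbb{R}^{n}_{+}\) for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
P = lambda x: np.maximum(x, 0.0)       # Euclidean projection onto R^n_+

# Cocoercivity with modulus 1: <G(x) - G(y), x - y> >= ||G(x) - G(y)||^2
for G in (P, lambda x: x - P(x)):      # G = P_K and G = I - P_K
    for _ in range(1000):
        x, y = rng.standard_normal(6), rng.standard_normal(6)
        d = G(x) - G(y)
        assert d @ (x - y) >= d @ d - 1e-12
```

The inequality holds at every sampled pair, consistent with the remark.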
Lemma 3.2
Proof

Input: c, the cocoercive modulus of F.

Step 0: Choose \(0\ne z^{0}\in\mathbb{R}_{+}^{n}\), \(\lambda_{0},\beta>0\), \(\tau,\gamma,\mu\in(0,1)\) with \(\beta\gamma<2c\), \(\epsilon>0\), and integers \(n_{\max}>K_{0}>0\). Set \(k=0\).

Step 1: Compute
$$\begin{aligned} &x^{k}=S_{\lambda_{k}}\bigl(z^{k}\bigr), \\ &y^{k}=\bigl[x^{k}-\alpha_{k}F\bigl(x^{k}\bigr)\bigr]_{+}, \end{aligned}$$
where \(\alpha_{k}=\beta\gamma^{m_{k}}\) with \(m_{k}\) being the smallest nonnegative integer satisfying
$$ \bigl\|F\bigl(x^{k}\bigr)-F\bigl(y^{k}\bigr)\bigr\| \leq\mu\frac{\|x^{k}-y^{k}\|}{\alpha_{k}}. $$
(16)

Step 2: If \(\|x^{k}-z^{k}\|\le\epsilon\) or the number of iterations is greater than \(n_{\max}\), then return \(z^{k}\), \(x^{k}\), \(y^{k}\) and stop. Otherwise, compute
$$ z^{k+1}=\bigl[x^{k}-\alpha_{k}F\bigl(y^{k}\bigr)\bigr]_{+}, $$
update \(\lambda_{k+1}\) by
$$ \lambda_{k+1}=\left\{ \begin{array}{l@{\quad}l} \tau\lambda_{k}, & \mbox{if } k+1 \mbox{ is a multiple of } K_{0}, \\ \lambda_{k}, & \mbox{otherwise}, \end{array} \right. $$
set \(k=k+1\), and go to Step 1.
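The steps above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes \(S_{\lambda}\) is the nonnegative soft-thresholding operator \(S_{\lambda}(z)=\max(z-\lambda,0)\) applied componentwise (a loud assumption), takes \(\mu=0.9\) so that \(\mu\in(0,1)\), and is demonstrated on a toy map \(F(x)=x-a\), which is cocoercive with modulus \(c=1\).

```python
import numpy as np

def eta(F, c, z0, lam0=0.2, tau=0.75, gamma=0.1, mu=0.9,
        eps=1e-6, n_max=2000, K0=5):
    """Sketch of the extragradient thresholding algorithm (ETA)."""
    beta = 2.0 * c                       # ensures beta*gamma < 2c for gamma < 1
    z, lam = z0.astype(float), lam0
    for k in range(n_max):
        x = np.maximum(z - lam, 0.0)     # x^k = S_lam(z^k), assumed shrinkage
        # Armijo-type search: alpha_k = beta*gamma^m, smallest m satisfying (16)
        m = 0
        while True:
            alpha = beta * gamma ** m
            y = np.maximum(x - alpha * F(x), 0.0)
            if np.linalg.norm(F(x) - F(y)) <= mu * np.linalg.norm(x - y) / alpha:
                break
            m += 1
        if np.linalg.norm(x - z) <= eps: # Step 2 stopping rule
            return x
        z = np.maximum(x - alpha * F(y), 0.0)
        if (k + 1) % K0 == 0:            # shrink lambda every K0 iterations
            lam *= tau
    return x

# Toy NCP: F(x) = x - a; its sparse solution is max(a, 0) componentwise.
a = np.array([1.0, -1.0, 0.5, -2.0])
x = eta(lambda v: v - a, c=1.0, z0=np.ones(4))
```

As \(\lambda_{k}\to0\) the thresholded iterates track the extragradient fixed point, and the returned `x` lands close to the sparse solution \(\max(a,0)\).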
Before analyzing the convergence of ETA, we first present a key lemma on cocoercive mappings.
Lemma 3.3
Proof
We now begin to analyze the convergence of the proposed ETA.
Theorem 3.1
 (i)
the sequences \(\{z^{k}\}\), \(\{x^{k}\}\), and \(\{y^{k}\}\) are all bounded;
 (ii)
any cluster point of the sequence \(\{x^{k}\}\) is a solution of \(\operatorname{NCP}(F)\).
Proof
Since \(\{x^{k}\}\) is bounded, \(\{x^{k}\}\) has at least one cluster point. Let \(x^{*}\) be a cluster point of \(\{x^{k}\}\) and a subsequence \(\{ x^{k_{j}}\}\) converge to \(x^{*}\). Next we will show \(x^{*}\in\operatorname{SOL}(F)\).
4 Numerical experiments
In this section, we present numerical experiments to demonstrate the effectiveness of the ETA algorithm. All experiments were performed in MATLAB R2013a on a laptop (2.50 GHz CPU, 6.00 GB of RAM).
We simulate three examples with the ETA algorithm. Each is run 100 times for different dimensions, and the average results are recorded. In each experiment, we set \(z^{0}=e\), \(\beta=2c\), \(\gamma=0.1\), \(\mu=1/c\), \(n_{\max}=2{,}000\).
4.1 Test for LCPs with Z-matrix [5]
We choose \(z^{0}=e\), \(c=1\), \(\lambda_{0}=0.2\), \(\beta=2c\), \(\tau=0.75\), \(\gamma=0.1\), \(\mu=1/c\), \(\epsilon=10^{-6}\), \(n_{\max}=2{,}000\), \(K_{0}=5\). We use the recovery error \(\|x-\hat{x}\|\) to evaluate the algorithm. In addition, the average cpu time (in seconds), the average number of iterations, and the residual \(\|x-z\|\) are also taken into account when judging the performance of the method.
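The exact instances of [5] are not reproduced here; the sketch below shows one generic way (our construction for illustration, not necessarily the paper's) to build an LCP with a Z-matrix and a planted sparse solution \(\hat{x}\), by choosing the slack vector to satisfy complementarity on the support.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# A Z-matrix has nonpositive off-diagonal entries; making it strictly
# diagonally dominant with positive diagonal also makes it an M-matrix.
B = -np.abs(rng.standard_normal((n, n)))
np.fill_diagonal(B, 0.0)
M = B + np.diag(np.abs(B).sum(axis=1) + 1.0)

# Plant a sparse solution x_hat, then pick q so that x_hat solves
# LCP(M, q): x >= 0, Mx + q >= 0, x^T (Mx + q) = 0.
x_hat = np.zeros(n)
support = rng.choice(n, size=3, replace=False)
x_hat[support] = rng.uniform(0.5, 1.5, size=3)
s = rng.uniform(0.1, 1.0, size=n)    # positive slacks off the support
s[support] = 0.0                     # complementarity on the support
q = s - M @ x_hat

w = M @ x_hat + q                    # equals s up to rounding
assert np.all(x_hat >= 0) and np.all(w >= -1e-9)
assert abs(x_hat @ w) < 1e-9         # complementarity holds
```

Instances built this way have a known sparse solution \(\hat{x}\) against which the recovery error \(\|x-\hat{x}\|\) can be measured.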
ETA’s computational results on LCPs with Z-matrices.
n  Iter  \(\boldsymbol{\|x-\hat{x}\|}\)  \(\boldsymbol{\|x-z\|}\)  \(\boldsymbol{\|\hat{x}\|_{0}}\)  \(\boldsymbol{\|x\|_{0}}\)  Time (sec.)

3,000  205  7.7007E−06  7.5424E−07  1  1  2.90
5,000  205  7.6995E−06  7.5424E−07  1  1  7.93
10,000  205  7.6986E−06  7.5424E−07  1  1  36.70
15,000  205  7.6983E−06  7.5424E−07  1  1  78.46
20,000  205  7.6981E−06  7.5424E−07  1  1  148.50
25,000  205  7.6980E−06  7.5424E−07  1  1  232.87
SSG’s computational results on LCPs with Z-matrices.
n  Iter  \(\boldsymbol{\|x-\hat{x}\|}\)  \(\boldsymbol{\|\hat{x}\|_{0}}\)  \(\boldsymbol{\|x\|_{0}}\)  Time (sec.)

100  1,012  1.86E−05  1  1  2.46
200  972  1.70E−05  1  1  5.11
500  118  2.67E−06  1  1  3.88
1,000  118  1.58E−06  1  1  23.04
2,000  117  1.05E−06  1  1  139.15
3,000  117  8.69E−07  1  1  401.33
5,000  −  −  −  −  −
The SSG results can be discerned from Table 2, where ‘−’ denotes that the method failed. Although the sparsity \(\|x\|_{0}\) of the recovered solution equals 1 in all cases and the recovery errors \(\|x-\hat{x}\|\) are quite small, the average cpu time grows dramatically with the matrix dimension n, which shows that the SSG method is suitable only for small-scale LCPs and becomes unattractive when n is relatively large. Compared with SSG, the ETA algorithm is clearly superior in cpu time and in the size of the problems it can solve.
4.2 Test for LCPs with positive semidefinite matrices
Results on randomly created LCPs with positive semidefinite matrices.
n  Iter  \(\boldsymbol{\|x-z\|}\)  \(\boldsymbol{\|\hat{x}\|_{0}}\)  \(\boldsymbol{\|x\|_{0}}\)  Time (sec.)

2,000  350  8.0316E−11  20  20  18.36 
3,000  210  9.8366E−11  30  30  12.02 
4,000  142  8.5188E−11  40  40  29.97 
5,000  142  9.5243E−11  50  50  22.29 
7,000  144  8.4519E−11  70  70  46.67 
4.3 Test for cocoercive nonlinear complementarity problem
Results on cocoercive nonlinear complementarity problems.
n  Iter  \(\boldsymbol{\|x-z\|}\)  \(\boldsymbol{\|\hat{x}\|_{0}}\)  \(\boldsymbol{\|x\|_{0}}\)  Time (sec.)

1,000  450  7.5467E−07  10  10  3.28 
3,000  138  9.8034E−07  30  30  7.86 
5,000  94  9.4921E−07  50  50  14.85 
7,000  96  8.4234E−07  70  70  30.24 
10,000  98  7.5510E−07  100  100  143.99 
5 Conclusions
In this paper, we concentrate on finding sparse solutions of cocoercive nonlinear complementarity problems (NCPs). An \(\ell_{1}\) regularized projection minimization model is proposed as a relaxation, and an extragradient thresholding algorithm (ETA) is then designed for this regularized model. Furthermore, we analyze the convergence of this algorithm and show that any cluster point of the sequence generated by ETA is a solution of the NCP. Preliminary numerical results indicate that the \(\ell_{1}\) regularized model and the ETA are promising for finding sparse solutions of NCPs.
Declarations
Acknowledgements
We would like to thank the two referees for their valuable comments. This research was supported by the National Natural Science Foundation of China (71271021, 11001011, 11431002), the Fundamental Research Funds for the Central Universities of China (2013JBZ005) and the Scientific Research Fund of Hebei Provincial Education Department (No. QN20132030).
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
References
 Cottle, RW, Pang, JS, Stone, RE: The Linear Complementarity Problem. Academic Press, Boston (1992)
 Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research, vols. I and II. Springer, New York (2003)
 Ferris, MC, Mangasarian, OL, Pang, JS: Complementarity: Applications, Algorithms and Extensions. Kluwer Academic, Dordrecht (2001)
 Xie, J, He, S, Zhang, S: Randomized portfolio selection with constraints. Pac. J. Optim. 4, 87-112 (2008)
 Shang, M, Zhang, C, Xiu, N: Minimal zero norm solutions of linear complementarity problems. J. Optim. Theory Appl. 163, 795-814 (2014)
 Candès, EJ, Randall, PA: Highly robust error correction by convex programming. IEEE Trans. Inf. Theory 54, 2829-2840 (2006)
 Candès, EJ, Recht, B: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717-772 (2008)
 Candès, EJ, Romberg, J, Tao, T: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207-1223 (2006)
 Donoho, DL: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289-1306 (2006)
 Fukushima, M, Pang, JS: Some feasibility issues in mathematical programs with equilibrium constraints. SIAM J. Optim. 8, 673-681 (1998)
 Fukushima, M, Tseng, P: An implementable active-set algorithm for computing a B-stationary point of a mathematical program with linear complementarity constraints. SIAM J. Optim. 12, 724-739 (2002)
 Lin, G, Fukushima, M: New reformulations for stochastic nonlinear complementarity problems. Optim. Methods Softw. 21, 551-564 (2006)
 Luo, ZQ, Pang, JS, Ralph, D: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)
 Figueiredo, MAT, Nowak, RD: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 12, 906-916 (2003)
 Starck, JL, Donoho, DL, Candès, EJ: Astronomical image representation by the curvelet transform. Astron. Astrophys. 398, 785-800 (2003)
 Daubechies, I, Defrise, M, De Mol, C: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413-1457 (2004)
 Figueiredo, MAT, Nowak, RD, Wright, SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1, 586-597 (2007)
 Zhu, T, Yu, ZG: A simple proof for some important properties of the projection mappings. Math. Inequal. Appl. 7, 453-456 (2004)
 Zhu, DL, Marcotte, P: Cocoercivity and its role in the convergence of iterative schemes for solving variational inequalities. SIAM J. Optim. 6, 714-726 (1996)
 Hale, ET, Yin, W, Zhang, Y: Fixed-point continuation for \(\ell_{1}\) minimization: methodology and convergence. SIAM J. Optim. 19, 1107-1130 (2008)
 He, BS, Liao, LZ: Improvements of some projection methods for monotone nonlinear variational inequalities. J. Optim. Theory Appl. 112(1), 111-128 (2002)
 Yan, X, Han, D, Sun, W: A modified projection method with a new direction for solving variational inequalities. Appl. Math. Comput. 211, 118-129 (2009)