# A shrinkage-thresholding projection method for sparsest solutions of LCPs

- Meijuan Shang^{1, 2} (Email author)
- Cuiping Nie^{2}

*Journal of Inequalities and Applications* **2014**:51

https://doi.org/10.1186/1029-242X-2014-51

© Shang and Nie; licensee Springer. 2014

**Received: **2 October 2013

**Accepted: **10 January 2014

**Published: **31 January 2014

## Abstract

In this paper, we study the sparsest solutions of linear complementarity problems (LCPs), which have many applications, such as bimatrix games and portfolio selection. Mathematically, the underlying model is NP-hard in general. By transforming the complementarity constraints into a fixed point equation of projection type, we propose an ${l}_{1}$ regularization projection minimization model as a relaxation. By developing a thresholding representation of solutions for a key subproblem of this regularization model, we design a shrinkage-thresholding projection (STP) algorithm to solve the model and analyze its convergence. Numerical results demonstrate that the STP method can efficiently solve this regularized model and obtain a sparsest solution of the LCP with high quality.

**MSC:**90C33, 90C26, 90C90.


## 1 Introduction

Given a matrix $M\in {R}^{n\times n}$ and a vector $q\in {R}^{n}$, the linear complementarity problem, denoted by $LCP(q,M)$, is to find a vector $x\in {R}^{n}$ such that $x\ge 0$, $Mx+q\ge 0$, and ${x}^{T}(Mx+q)=0$. The set of solutions to this problem is denoted by $SOL(q,M)$. Throughout the paper, we always suppose $SOL(q,M)\ne \mathrm{\varnothing}$.

Many real-world phenomena in engineering, physics, mechanics, and economics are governed by linear complementarity problems. Extensive studies of LCPs have been done; see the books [1–3] and the references therein. Numerical methods for solving LCPs, such as the Newton method, the interior point method, and the nonsmooth equation method, have been extensively investigated in the literature. However, there seems to be no study of the sparsest solutions of LCPs. In fact, in real applications it is often necessary to investigate the sparsest solution of an LCP; examples include bimatrix games [1] and portfolio selection [4]. For more details, see [5].

To find a sparsest solution, we consider the ${l}_{0}$ minimization problem

$$\min_{x} {\parallel x\parallel}_{0}\quad \text{s.t. } x\in SOL(q,M), \qquad (1)$$

where ${\parallel x\parallel}_{0}$ stands for the number of nonzero components of *x*. A solution of (1) is called the sparsest solution of the LCP.

The minimization problem (1) is in fact a sparse optimization problem with equilibrium constraints. It is hard to solve, owing both to the equilibrium constraints and to the discontinuous objective function.

A popular way to handle the equilibrium constraints is to use *C*-functions to penalize their violation. Recall that a function $\psi :{R}^{2}\to R$ is called a *C*-function, where *C* stands for complementarity, if for any pair $(a,b)\in {R}^{2}$,

$$\psi (a,b)=0 \quad \Longleftrightarrow \quad a\ge 0,\ b\ge 0,\ ab=0.$$

The *C*-function ${\mathbf{F}}_{min}$, the extension of the ‘min’ function, is defined componentwise by ${({\mathbf{F}}_{min}(x))}_{i}=\min \{{x}_{i},{F}_{i}(x)\}$, where $F(x)=Mx+q$ and ${\parallel x\parallel}_{1}={\sum}_{i=1}^{n}|{x}_{i}|$ denotes the ${l}_{1}$ norm. With this notation, *x* solves $LCP(q,M)$ if and only if it satisfies the projection fixed point equation $x={[x-F(x)]}_{+}$, and the ${l}_{1}$ relaxation of (1) reads

$$\min_{x} {\parallel x\parallel}_{1}\quad \text{s.t. } x={[x-F(x)]}_{+}. \qquad (5)$$
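As a quick numerical illustration (a sketch of ours, not from the paper's code), both the componentwise ‘min’ *C*-function and the projection fixed-point residual $x-{[x-F(x)]}_{+}$ vanish exactly at solutions of the LCP. The 2×2 instance below is chosen for illustration only:

```python
import numpy as np

def fmin_residual(x, M, q):
    """Componentwise 'min' C-function: zero iff x solves LCP(q, M)."""
    return np.minimum(x, M @ x + q)

def projection_residual(x, M, q):
    """Fixed-point residual x - [x - F(x)]_+ with F(x) = Mx + q."""
    return x - np.maximum(x - (M @ x + q), 0.0)

# Illustrative 2x2 instance (our own choice): x* = (1, 0) solves it,
# since Mx* + q = (0, 1), so x* and F(x*) are nonnegative and complementary.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-2.0, 0.0])
x_star = np.array([1.0, 0.0])

print(np.allclose(fmin_residual(x_star, M, q), 0.0))        # True
print(np.allclose(projection_residual(x_star, M, q), 0.0))  # True
```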

By introducing an auxiliary variable *z* for the projection ${[x-F(x)]}_{+}$ and penalizing the ${l}_{1}$ norm of *x*, we obtain a regularized model, denoted (6), where $\lambda \in (0,\mathrm{\infty})$ is a given regularization parameter and $\parallel \cdot \parallel $ refers to the Euclidean norm. We call (6) the ${l}_{1}$ regularization projection minimization problem.

This paper is organized as follows. In Section 2, we approximate (1) by the ${l}_{1}$ regularization projection minimization problem (6), and show theoretically that (6) is a good approximation. In Section 3, we develop a shrinkage-thresholding representation theory for the subproblem of (6) and propose a shrinkage-thresholding projection (STP) algorithm for (6). The convergence of the STP algorithm is proved. Numerical results are presented in Section 4 to show that (6) is a promising way to obtain a sparsest solution of the LCP.

## 2 The ${l}_{1}$ regularized approximation

In this section, we study the relation between the ${l}_{1}$ regularization projection model (6) and the original model (5), which indicates that the regularized model is a good approximation.

**Theorem 2.1** *For any fixed* $\lambda >0$, *the solution set of* (6) *is nonempty and bounded*. *Let* $\{({x}_{{\lambda}_{k}},{z}_{{\lambda}_{k}})\}$ *be a solution sequence of* (6), *and* $\{{\lambda}_{k}\}$ *be any positive sequence converging to* 0. *If* $SOL(q,M)\ne \mathrm{\varnothing}$, *then* $\{{x}_{{\lambda}_{k}}\}$ *has at least one accumulation point*, *and any accumulation point* ${x}^{\ast}$ *of* $\{{x}_{{\lambda}_{k}}\}$ *is a solution of* (5).

*Proof* For any fixed $\lambda >0$, it is easy to show the coercivity of the objective function ${f}_{\lambda}(x,z)$ in (6), i.e., ${f}_{\lambda}(x,z)\to \mathrm{\infty}$ as $\parallel (x,z)\parallel \to \mathrm{\infty}$. Hence the level set

$$\mathcal{L}=\{(x,z)\in {R}^{n}\times {R}_{+}^{n}:{f}_{\lambda}(x,z)\le {f}_{\lambda}({x}_{0},{z}_{0})\}$$

is nonempty and compact, where ${x}_{0}\in {R}^{n}$ and ${z}_{0}={[{x}_{0}-F({x}_{0})]}_{+}$ are given points. The solution set of (6) is nonempty and bounded since ${f}_{\lambda}(x,z)$ is continuous on ℒ.

we get ${x}^{\ast}={z}^{\ast}$, which implies ${x}^{\ast}={[{x}^{\ast}-F({x}^{\ast})]}_{+}$, that is, ${x}^{\ast}\in SOL(q,M)$. From ${\parallel {x}_{{\lambda}_{{k}_{j}}}\parallel}_{1}\le {\parallel \hat{x}\parallel}_{1}$ with ${k}_{j}$ tending to ∞, we get ${\parallel {x}^{\ast}\parallel}_{1}\le {\parallel \hat{x}\parallel}_{1}$. Then by the arbitrariness of $\hat{x}\in SOL(q,M)$, we know that ${x}^{\ast}$ is a solution of problem (5). This completes the proof. □

## 3 Solution representation, algorithm, and convergence

For any fixed *z* and *λ*, a minimizer ${x}^{\ast}$ of the convex function in (10) must satisfy the corresponding optimality conditions. This demonstrates that a solution ${x}^{\ast}\in {R}^{n}$ of the subproblem (10) can be analytically expressed by (11).
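The closed-form representation of the ${l}_{1}$ subproblem's solution is the familiar soft-thresholding (shrinkage) operator; a minimal sketch (the operator name ${S}_{\lambda}$ follows the paper, the code itself is ours):

```python
import numpy as np

def soft_threshold(z, lam):
    """Componentwise shrinkage S_lam(z) = sign(z) * max(|z| - lam, 0),
    the unique minimizer of lam*||x||_1 + 0.5*||x - z||^2."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([3.0, -0.5, 1.5, 0.0])
# Each entry is pulled toward zero by lam; entries with |z_i| <= lam vanish.
print(soft_threshold(z, 1.0))
```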

Now we will show that the STP algorithm is well defined, that is, (13) is implementable. Before doing this, we need the following lemmas.

**Lemma 3.1** [12] *Let* ${P}_{\mathrm{\Omega}}$ *be the metric projection operator onto a nonempty closed convex set* $\mathrm{\Omega}\subseteq {R}^{n}$. *Given* $x\in {R}^{n}$ *and* $d\in {R}^{n}$, *define* $H(\alpha )={P}_{\mathrm{\Omega}}(x+\alpha d)$; *then* $\parallel H(\alpha )-x\parallel $ *is nondecreasing with respect to* *α*.
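Lemma 3.1 can be checked numerically for $\mathrm{\Omega}={R}_{+}^{n}$, where the projection is just the componentwise positive part; a small sketch (random data of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.abs(rng.standard_normal(5))   # a point of Omega = R^n_+
d = rng.standard_normal(5)           # an arbitrary direction

def H(alpha):
    # Metric projection of x + alpha*d onto the nonnegative orthant.
    return np.maximum(x + alpha * d, 0.0)

alphas = np.linspace(0.0, 5.0, 50)
norms = [np.linalg.norm(H(a) - x) for a in alphas]
# By Lemma 3.1, ||H(alpha) - x|| is nondecreasing in alpha.
print(all(b >= a - 1e-12 for a, b in zip(norms, norms[1:])))  # True
```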

**Lemma 3.2** *The step size* ${\alpha}_{k+1}$ *in* (13) *must exist*.

*Proof* From Lemma 3.1, we see that $\parallel {x}^{k+1}-{[{x}^{k+1}-\alpha F({x}^{k+1})]}_{+}\parallel $ is nondecreasing with respect to *α*. It is obvious that $\alpha ({\parallel {x}^{k+1}-{x}^{k}\parallel}^{2}+{\parallel {x}^{k}-{z}^{k}\parallel}^{2})$ is strictly increasing with respect to *α*, since ${\parallel {x}^{k+1}-{x}^{k}\parallel}^{2}+{\parallel {x}^{k}-{z}^{k}\parallel}^{2}>0$ before the iterations stop. It follows that $g(\alpha )$ is strictly decreasing with respect to *α* before the iterations stop. Thus $g(\beta {\gamma}^{m})$ is strictly decreasing with respect to the nonnegative integer *m* before the iterations stop. Note that ${x}^{k+1}={S}_{{\lambda}_{k}}({z}^{k})\in {R}_{+}^{n}$ and ${[{x}^{k+1}]}_{+}={x}^{k+1}$; then (13) holds for all sufficiently small $\alpha >0$, and hence there exists a finite nonnegative integer *m* such that $\alpha =\beta {\gamma}^{m}$ satisfies (13); the first such *m* is just what we seek. All in all, the step size ${\alpha}_{k+1}$ in (13) must exist. □

We now begin to analyze the convergence of the proposed STP algorithm.

**Theorem 3.1** *Let* $\{({x}^{k},{z}^{k})\}$ *be a sequence generated by the STP algorithm*. *Then*:

- (i) $\{{f}_{{\lambda}_{k}}({x}^{k},{z}^{k})\}$ *is monotonically decreasing and converges to a constant* ${C}^{\ast}$;
- (ii) $\{{x}^{k}\}$ *and* $\{{z}^{k}\}$ *are bounded*; *moreover*, *if* ${inf}_{k}{\alpha}_{k}=\alpha >0$, *then* $\{{x}^{k}\}$ *and* $\{{z}^{k}\}$ *are both asymptotically regular*, *i*.*e*., $\underset{k\to \mathrm{\infty}}{lim}\parallel {x}^{k+1}-{x}^{k}\parallel =0$ *and* $\underset{k\to \mathrm{\infty}}{lim}\parallel {z}^{k+1}-{z}^{k}\parallel =0$.

*Moreover*, *any accumulation point of* $\{{x}^{k}\}$ *is a solution of* $LCP(q,M)$.

*Proof*(i) From (10) and (11), we have

which shows that $\{{f}_{{\lambda}_{k}}({x}^{k},{z}^{k})\}$ is monotonically decreasing. Since $\{{f}_{{\lambda}_{k}}({x}^{k},{z}^{k})\}$ is bounded from below, $\{{f}_{{\lambda}_{k}}({x}^{k},{z}^{k})\}$ converges to a constant ${C}^{\ast}$. This verifies (i) of Theorem 3.1.

(ii) Since $({x}^{k},{z}^{k},{\lambda}_{k})$ lies in the set $\{(x,z,\lambda )\in {R}^{n}\times {R}_{+}^{n}\times {R}_{+}:{f}_{\lambda}(x,z)\le {f}_{{\lambda}_{0}}({x}^{0},{z}^{0})\}$, which is bounded, we see that $\{{x}^{k}\}$ and $\{{z}^{k}\}$ are both bounded.

Combining (20) with (19), we get ${x}^{\ast}={z}^{\ast}$, which gives ${x}^{\ast}={[{x}^{\ast}-\overline{\alpha}F({x}^{\ast})]}_{+}$ and this means ${x}^{\ast}\in SOL(q,M)$. The proof is thus complete. □

## 4 Numerical experiments

In this section, we present some numerical experiments to demonstrate the effectiveness of our STP algorithm. All the numerical experiments were performed on a DELL computer (2.99 GHz, 2 GB of RAM), using MATLAB 7.10.

In the STP algorithm, the maximum number of iterations is set to 500. We terminate the STP algorithm if $\parallel {x}^{k}-{z}^{k}\parallel <1.0\text{E}-5$ or if it reaches the maximum number of iterations. We set ${\lambda}_{0}=10$, $K=10$, $\tau =1/7$, and take ${x}^{0}$ and ${z}^{0}$ to be zero vectors as the initial points.
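The core of the iteration can be reproduced in spirit with a simplified sketch: shrink, then project. The sketch below uses a fixed step size `alpha` and a fixed small `lam` instead of the paper's backtracking rule (13) and continuation schedule for *λ*, so it is an illustrative approximation of STP, not the exact algorithm:

```python
import numpy as np

def stp_sketch(M, q, lam=1e-6, alpha=0.5, tol=1e-5, max_iter=500):
    """Simplified shrinkage-thresholding projection loop (illustrative only):
    x^{k+1} = S_lam(z^k),  z^{k+1} = [x^{k+1} - alpha * F(x^{k+1})]_+."""
    n = len(q)
    x = np.zeros(n)
    z = np.zeros(n)
    for k in range(max_iter):
        x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # shrinkage step
        z = np.maximum(x - alpha * (M @ x + q), 0.0)       # projection step
        if np.linalg.norm(x - z) < tol:                    # stopping rule
            break
    return x, k + 1

# Tiny illustrative instance (our own, not from the paper): x* = (1, 0).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-2.0, 0.0])
x, iters = stp_sketch(M, q)
print(np.round(x, 4), iters)
```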

### Test 1: *Z*-matrix LCPs [5]

Here ${I}_{n}$ is the identity matrix of order *n* and $e={(1,1,\dots ,1)}^{T}\in {R}^{n}$.

This matrix *M* is widely used in statistics. It is clear that *M* is a *Z*-matrix as well as a positive semi-definite matrix. For any scalar $a\ge 0$, the vectors $x=ae+{e}_{1}$ are solutions to $LCP(q,M)$, because they satisfy the complementarity conditions.

Among all the solutions, the vector ${x}^{\ast}={e}_{1}={(1,0,\dots ,0)}^{T}$ is the unique sparsest solution.
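The display defining this test problem did not survive extraction. One natural reading, which is an assumption on our part (consistent with "widely used in statistics", the *Z*-matrix and PSD properties, and the stated solution family), is the centering matrix $M={I}_{n}-\frac{1}{n}e{e}^{T}$ with $q=\frac{1}{n}e-{e}_{1}$. Under that reading, every $x=ae+{e}_{1}$ with $a\ge 0$ can be verified to solve the LCP:

```python
import numpy as np

n = 6
e = np.ones(n)
e1 = np.zeros(n); e1[0] = 1.0

# Assumed test data (our reconstruction, not verbatim from the paper):
M = np.eye(n) - np.outer(e, e) / n   # centering matrix: a PSD Z-matrix
q = e / n - e1

for a in [0.0, 1.0, 2.5]:
    x = a * e + e1
    w = M @ x + q                     # F(x) = Mx + q
    # LCP conditions: x >= 0, F(x) >= 0, x^T F(x) = 0.
    assert (x >= 0).all() and (w >= -1e-12).all() and abs(x @ w) < 1e-12
print("all candidate solutions verified")
```

The key fact is $Me=0$, so $Mx+q=M{e}_{1}+q=0$ for every $x=ae+{e}_{1}$, and complementarity holds trivially.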

**Table 1: Computational results on LCPs with *Z*-matrices**

| n | 100 | 500 | 1,000 | 3,000 | 5,000 | 7,000 |
|---|---|---|---|---|---|---|
| ${D}_{xz}$ | 4.43E − 7 | 1.03E − 7 | 6.02E − 8 | 3.19E − 8 | 2.62E − 8 | 2.38E − 8 |
| gap | 4.47E − 7 | 1.03E − 7 | 6.02E − 8 | 3.19E − 8 | 2.62E − 8 | 2.38E − 8 |
| Spar | 1 | 1 | 1 | 1 | 1 | 1 |
| time | 6.58E − 3 | 3.28E − 2 | 4.12E − 1 | 3.93E + 0 | 1.09E + 1 | 2.14E + 1 |

In Table 1, ‘${D}_{xz}$’ denotes the Euclidean distance between ${x}^{k}$ and ${z}^{k}$, which is in fact the value of the merit function at ${x}^{k}$, ‘gap’ denotes the Euclidean distance between ${x}^{k}$ and the true sparsest solution ${x}^{\ast}$, ‘Spar’ denotes the number of the entries of ${x}^{k}$ such that ${x}_{i}^{k}>1.0\text{E}-5$, and ‘time’ denotes the computational time in seconds.

From Table 1, we can see that the STP algorithm is effective in finding the sparse solutions of LCPs. The sparsity of our solution ${x}^{k}$ is the same as that of the true sparse solution ${x}^{\ast}$. Moreover, ${D}_{xz}$ and the ‘gap’ decrease as the dimension of the matrix *M* increases, which indicates that the larger the problem, the more effective the algorithm is.

### Test 2: Randomly created LCPs with positive semidefinite matrices

In this subsection, we test STP for randomly created LCPs with positive semidefinite matrices.

First, we state the way of constructing LCPs and their solutions. Let a matrix $Z\in {R}^{n\times r}$ ($r<n$) be generated from the standard normal distribution and let $M=Z{Z}^{T}$. Let the sparse vector $\overline{x}$ be generated from the standard normal distribution, with its sparsity set as follows: $n/20$ if $n\le 1,000$; $n/100$ if $1,000<n\le 5,000$; $n/200$ if $n>5,000$. After the matrix *M* and the sparse vector $\overline{x}$ have been generated, a vector $q\in {R}^{n}$ can be constructed such that $\overline{x}$ is a solution of $LCP(q,M)$. Then $\overline{x}$ can be regarded as a sparse solution of $LCP(q,M)$. Let *M* and *q* be the input to our STP algorithm; STP then outputs a solution ${x}^{k}$. We must emphasize that the sparsity of ${x}^{k}$ may be smaller than that of $\overline{x}$, since $\overline{x}$ may not be the sparsest solution of $LCP(q,M)$. In this case, ${x}^{k}$ is sparser than $\overline{x}$.
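The construction just described can be sketched as follows (the function name and the specific way of choosing the complementary slack are ours: the slack $s=M\overline{x}+q$ is taken nonnegative and zero on the support of $\overline{x}$, which guarantees that $\overline{x}$ solves $LCP(q,M)$):

```python
import numpy as np

def make_random_lcp(n, r, sparsity, seed=0):
    """Build a PSD LCP(q, M) with a known sparse solution x_bar (sketch)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, r))
    M = Z @ Z.T                          # positive semidefinite by construction
    x_bar = np.zeros(n)
    support = rng.choice(n, size=sparsity, replace=False)
    x_bar[support] = np.abs(rng.standard_normal(sparsity))  # x_bar >= 0
    s = np.abs(rng.standard_normal(n))   # slack s = M x_bar + q >= 0
    s[support] = 0.0                     # complementarity: x_bar_i * s_i = 0
    q = s - M @ x_bar                    # q chosen so x_bar solves LCP(q, M)
    return M, q, x_bar

M, q, x_bar = make_random_lcp(n=100, r=20, sparsity=5)
w = M @ x_bar + q
assert (x_bar >= 0).all() and (w >= -1e-8).all() and abs(x_bar @ w) < 1e-8
```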

In this set of experiments, ‘iter’ denotes the number of iterations for outputting ${x}^{k}$, ‘Spar-i’ denotes the number of entries of $\overline{x}$ satisfying ${\overline{x}}_{i}>1.0\text{E}-5$, and ‘Spar-o’ denotes the number of entries of ${x}^{k}$ satisfying ${x}_{i}^{k}>1.0\text{E}-5$. We set $\beta =0.9$ and $\gamma =0.5$.

**Table 2: Results on randomly created LCPs with positive semidefinite matrices**

| n | 100 | 300 | 500 | 1,000 | 5,000 | 8,000 |
|---|---|---|---|---|---|---|
| iter | 175 | 79 | 52 | 43 | 43 | 44 |
| Spar-i | 4 | 14 | 22 | 47 | 47 | 38 |
| Spar-o | 3 | 8 | 12 | 24 | 24 | 21 |
| ${D}_{xz}$ | 9.37E − 6 | 8.99E − 6 | 7.63E − 6 | 6.20E − 6 | 5.49E − 6 | 6.07E − 6 |
| time | 2.81E − 2 | 3.07E − 2 | 4.35E − 2 | 4.44E − 1 | 1.13E + 1 | 2.97E + 1 |

From Table 2, we can see that the STP algorithm works very fast. Even for $n=8,000$, it takes only 29.7 seconds to yield a very sparse solution to the LCP. The values of ${D}_{xz}$ are all less than $1.0\text{E}-5$, which indicates that the output points are solutions of $LCP(q,M)$. Moreover, the output solution ${x}^{k}$ is sparser than $\overline{x}$. When the dimension of *M* increases, the accuracy does not decrease but increases, and the time cost of STP grows slowly. These observations show that STP is very robust. We conclude that STP is very efficient for finding the sparsest solution of LCPs.

**Remark** The continuation method for the regularization parameter *λ* plays an important role in STP for finding sparsest solutions of high quality. Moreover, a large number of numerical experiments indicate that STP is very robust for $\lambda =1,5,10,20$.

## 5 Conclusions

In this paper, we concentrate on finding the sparsest solutions of LCPs. We propose an ${l}_{1}$ regularized projection minimization model. We then develop a thresholding representation theory for the subproblem of the ${l}_{1}$ regularized projection minimization problem, and design a shrinkage-thresholding projection (STP) algorithm to solve the regularized model. The convergence of the STP algorithm is proved. Preliminary numerical results indicate that the ${l}_{1}$ regularized model and the STP method are promising for finding sparsest solutions of LCPs.

## Declarations

### Acknowledgements

We would like to thank the two referees for their valuable comments. This research was supported by the National Basic Research Program of China (2010CB732501), the National Natural Science Foundation of China (71271021), the Fundamental Research Funds for the Central Universities of China (2011YJS075), STRD plan of Shijiazhuang (135790075A) and the Scientific Research Fund of Hebei Provincial Education Department (QN20132030).

## References

1. Cottle RW, Pang JS, Stone RE: *The Linear Complementarity Problem*. Academic Press, Boston; 1992.
2. Facchinei F, Pang JS: *Finite-Dimensional Variational Inequalities and Complementarity Problems*, vols. I, II. Springer Series in Operations Research. Springer, New York; 2003.
3. Ferris MC, Mangasarian OL, Pang JS: *Complementarity: Applications, Algorithms and Extensions*. Kluwer Academic, Dordrecht; 2001.
4. Xie J, He S, Zhang S: Randomized portfolio selection with constraints. *Pac. J. Optim.* 2008, **4**:87-112.
5. Shang M, Zhang C, Xiu N: Minimal zero norm solutions of linear complementarity problems. *J. Optim. Theory Appl.* (submitted)
6. Figueiredo MAT, Nowak RD: An EM algorithm for wavelet-based image restoration. *IEEE Trans. Image Process.* 2003, **12**:906-916. 10.1109/TIP.2003.814255
7. Starck JL, Donoho DL, Candès EJ: Astronomical image representation by the curvelet transform. *Astron. Astrophys.* 2003, **398**:785-800. 10.1051/0004-6361:20021571
8. Daubechies I, Defrise M, De Mol C: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. *Commun. Pure Appl. Math.* 2004, **57**:1413-1457. 10.1002/cpa.20042
9. Figueiredo MAT, Nowak RD, Wright SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. *IEEE J. Sel. Top. Signal Process.* 2007, **1**:586-597.
10. Candès EJ, Romberg J, Tao T: Stable signal recovery from incomplete and inaccurate measurements. *Commun. Pure Appl. Math.* 2006, **59**:1207-1223. 10.1002/cpa.20124
11. Donoho DL: Compressed sensing. *IEEE Trans. Inf. Theory* 2006, **52**:1289-1306.
12. Toint PhL: Global convergence of a class of trust region methods for nonconvex minimization in Hilbert space. *IMA J. Numer. Anal.* 1988, **8**:231-252. 10.1093/imanum/8.2.231

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.