
A new smoothing modified three-term conjugate gradient method for \(l_{1}\)-norm minimization problem

Abstract

We consider a class of nonsmooth optimization problems with \(l_{1}\)-norm minimization, which arises in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this nonsmooth problem can be transformed into a general unconstrained optimization problem, which is then solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak–Ribière–Polyak conjugate gradient method, which has good numerical properties; the proposed method possesses the sufficient descent property without any line search and is proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.

1 Introduction

In this paper, we consider the following nonsmooth optimization problem with \(l_{1}\)-norm minimization:

$$ \min_{x \in R^{n}}\frac{1}{2}\|Ax - b\|_{2}^{2} + \tau \|x\|_{1}, $$
(1)

where \(x \in R^{n}\), \(A \in R^{m \times n}\) (\(m \ll n\)), \(b \in R^{m}\), \(\tau> 0\), \(\Vert v \Vert _{2}\) denotes the Euclidean norm of v, and \(\|v\|_{1} = \sum_{i = 1}^{n} |v_{i}|\) is the \(l_{1}\)-norm of v. This problem is widely used in compressed sensing, signal reconstruction, analog-to-information conversion, and many related mathematical problems [1–16]. Problem (1) is also a typical compressed sensing formulation, which reconstructs a length-n sparse signal from m observations. From the Bayesian perspective, problem (1) can also be seen as a maximum a posteriori criterion for estimating x from observations \(b = Ax + \omega\), where ω is Gaussian noise with variance \(\sigma^{2}\). Many researchers have studied numerical algorithms for solving problem (1) with large-scale data, such as the fixed point method [1], the gradient projection method for sparse reconstruction [2], interior-point continuation methods [3, 4], iterative shrinkage-thresholding algorithms [5, 6], the linearized Bregman method [7, 8], alternating direction algorithms [9], a nonsmooth equations-based method [10], and some related methods [11, 12]. Recently, smoothing gradient-type methods have been given for solving problem (1) based on a transformation into absolute value equations [14, 15]. The transformation rests on the equivalence between a linear complementarity problem and an absolute value equation problem [17, 18]. The complementarity problem, the absolute value equation problem, and the related constrained optimization problem are three important classes of optimization problems [19–23]. On the other hand, nonlinear conjugate gradient methods and smoothing methods are widely used to solve large-scale optimization problems [24, 25], total variation image restoration [26], monotone nonlinear equations with convex constraints [27], and nonsmooth optimization problems, such as nonsmooth nonconvex problems [28], minimax problems [29], and P0 nonlinear complementarity problems [30]. In particular, three-term conjugate gradient methods based on the Hager–Zhang and Polak–Ribière–Polyak conjugate gradient methods [31–33] have been widely used and studied, and they attain different numerical outcomes. Based on the above analysis, in this paper we propose a new smoothing modified three-term conjugate gradient method to solve problem (1). The global convergence analysis of the proposed method is also presented.

The remainder of this paper is organized as follows. In Sect. 2, we give the transformation of problem (1), which uses the transformation of a linear complementarity problem into an absolute value equation problem. In Sect. 3, we present the smoothing modified three-term conjugate gradient method and give its convergence analysis. Finally, we give some numerical results which show the effectiveness of the given method.

2 Results: the transformation of the problem

In this section, as in [9, 10, 14, 15], we set

$${x} = {u} - {v},\quad{u}\geq0,{v}\geq0, $$

where \({u}_{{i}} = ({x}_{{i}})_{ +} \) and \({v}_{{i}} = ( - {x}_{{i}})_{ +} \) for all \({i} = 1,2, \ldots, {n}\), with \(({x}_{{i}})_{ +} = \max\{ x_{i},0\}\). We also have \(\Vert {x} \Vert _{1} = 1_{n}^{T}u + 1_{n}^{T}v\), where \(1_{n} = [1,1, \ldots,1]^{T}\) is the n-dimensional vector of ones. Thus, problem (1) can be transformed into the following problem:

$$\min_{z = (u,v)^{T} \ge0}\frac{1}{2}\|b - Az\|_{2}^{2} + \tau 1_{n}^{T}u + \tau1_{n}^{T}v, $$

i.e.,

$$ \min_{z \ge0}\frac{1}{2}z^{T}Hz + c^{T}z, $$
(2)

where

$$z = (u,v)^{T},\qquad c = \tau1_{2n} + \left ( \textstyle\begin{array}{c} - \bar{c} \\ \bar{c} \end{array}\displaystyle \right ),\qquad \bar{c} = A^{T}b,\qquad H = \left ( \textstyle\begin{array}{c@{\quad}c} A^{T}A & - A^{T}A \\ - A^{T}A & A^{T}A \end{array}\displaystyle \right ). $$

Since H is a positive semi-definite matrix, problem (2) can be translated into a linear variational inequality problem, which is to find \(z \in R^{2n}\) such that

$$ \langle Hz + c,\tilde{z} - z \rangle\ge0, \quad\forall\tilde{z} \ge0. $$
(3)

By the structure of the feasible region of z, problem (3) is equivalent to the linear complementarity problem, which is to find \(z \in R^{2n}\) such that

$$ z \ge0,\qquad Hz + c \ge0,\qquad z^{T}(Hz + c) = 0. $$
(4)

Due to the equivalence between linear complementarity problems and absolute value equation problems [17, 18], problem (4) can be transformed into the following absolute value equation problem:

$$(H + I)z + c = \big|(H - I)z + c\big|. $$
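To see why this reformulation is valid, write \(w = Hz + c\); a short verification (added here for completeness) gives

$$(H + I)z + c = w + z,\qquad (H - I)z + c = w - z, $$

so the absolute value equation states that \(w_{i} + z_{i} = |w_{i} - z_{i}|\) for every i, which holds if and only if \(w_{i} \ge 0\), \(z_{i} \ge 0\), and \(w_{i}z_{i} = 0\), i.e., exactly the conditions in (4).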

Then problem (1) can be transformed into the following problem:

$$ \min_{z \in R^{2n}}f(z) = \frac{1}{2}\big\| (H + I)z + c - \big|(H - I)z + c\big|\big\| ^{2}. $$
(5)
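For concreteness, the data of problems (2) and (5) can be assembled directly from A, b, and τ. The following Python sketch is our own illustration (the helper names `build_problem` and `f_ave` are not from the paper); it forms H and c and evaluates \(f(z)\):

```python
import numpy as np

def build_problem(A, b, tau):
    """Assemble H and c of problem (2) from A, b, and tau (illustrative sketch)."""
    AtA = A.T @ A
    Atb = A.T @ b
    n = A.shape[1]
    H = np.block([[AtA, -AtA], [-AtA, AtA]])                # 2n x 2n, positive semi-definite
    c = tau * np.ones(2 * n) + np.concatenate([-Atb, Atb])  # c = tau*1_{2n} + (-A^T b; A^T b)
    return H, c

def f_ave(z, H, c):
    """Objective (5): 0.5 * || (H + I)z + c - |(H - I)z + c| ||^2."""
    r = (H @ z + z + c) - np.abs(H @ z - z + c)
    return 0.5 * r @ r
```

A minimizer \(z^{*} = (u^{*},v^{*})^{T}\) of (5) then recovers a solution of (1) via \(x^{*} = u^{*} - v^{*}\).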

3 Main results and discussions

In this section, we present the smoothing modified three-term conjugate gradient method to solve problem (1). Firstly, we give the definition of a smoothing function and the smoothing approximation of the absolute value function [14, 15, 29].

Definition 1

Let \(f:R^{n} \to R\) be a locally Lipschitz continuous function. We call \(\tilde{f}:R^{n} \times R_{ +} \to R\) a smoothing function of f if \(\tilde{f} ( \cdot,\mu)\) is continuously differentiable in \(R^{n}\) for any fixed \(\mu> 0\) and

$$\lim_{\mu\to0}\tilde{f} (x,\mu) = f(x). $$

The smoothing function of the absolute value function is defined by

$$ \Phi_{i\mu} (z) = \sqrt{\bigl((H - I)z + c\bigr)_{i}^{2} + \mu^{2}} ,\quad\mu\in R_{ +} , i = 1,2, \ldots,2n, $$
(6)

and satisfies

$$\lim_{\mu\to0}\Phi_{i\mu} (z) = \big|\bigl((H - I)z + c \bigr)_{i}\big|,\quad i = 1,2, \ldots,2n. $$

Based on (6), we obtain the following unconstrained optimization problem:

$$\min_{z \in R^{2n}}\bar{f}_{\mu} (z) = \frac{1}{2}\sum _{i = 1}^{2n} \bar{f}_{i\mu}^{2} (z), $$

where \(\bar{f}_{i\mu}(z) = ((H + I)z + c)_{i} - \Phi_{i\mu} (z)\) for \(i = 1,2, \ldots,2n\), so that \(\bar{f}_{\mu} (z)\) is a smoothing function of \(f(z)\) in (5).
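Continuing the sketch above (again with our own helper names), the smoothed objective \(\bar{f}_{\mu}\) and its gradient can be evaluated via the chain rule \(\nabla_{z}\bar{f}_{\mu} (z) = J(z)^{T}r(z)\), where \(r_{i}(z) = \bar{f}_{i\mu} (z)\) and \(J(z) = (H + I) - D(z)(H - I)\) with \(D(z) = \operatorname{diag}(((H - I)z + c)_{i}/\Phi_{i\mu} (z))\):

```python
import numpy as np

def f_smooth(z, H, c, mu):
    """Smoothed objective: 0.5 * sum_i ( ((H+I)z+c)_i - Phi_{i,mu}(z) )^2, cf. (6)."""
    w = H @ z - z + c                       # (H - I)z + c
    phi = np.sqrt(w**2 + mu**2)             # smoothing of |w_i|
    r = (H @ z + z + c) - phi
    return 0.5 * r @ r

def grad_f_smooth(z, H, c, mu):
    """Gradient of the smoothed objective via the chain rule (illustrative sketch)."""
    w = H @ z - z + c
    phi = np.sqrt(w**2 + mu**2)
    r = (H @ z + z + c) - phi
    I2n = np.eye(H.shape[0])
    J = (H + I2n) - (w / phi)[:, None] * (H - I2n)   # Jacobian of the residual r(z)
    return J.T @ r
```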

Now, we give the smoothing modified three-term conjugate gradient method.

Algorithm 1

(Smoothing modified three-term conjugate gradient method)

  1. Step 0.

    Choose \(0 < \sigma< 1\), \(0 < \rho< 1\), \(r > 0\), \(\mu= 2\), \(\eta= 1\), \(\varepsilon> 0\), \(\mu_{0} > 1\), and an initial point \(z_{0} \in R^{2n}\). Let \(d_{0} = - \tilde{g}_{0} \), where \(\tilde{g}_{0} = \nabla_{z}\tilde{f} (z_{0},\mu_{0})\).

  2. Step 1.

    If \(\|\nabla_{z}\tilde{f} (z_{k},\mu_{k})\| \le \varepsilon\), stop; otherwise, go to Step 2.

  3. Step 2.

    Compute search direction by using \(\tilde{\beta}_{k}^{\mathrm {BZAU}}\) and \(\tilde{\theta}_{k}^{\mathrm{BZAU}}\), which are defined by

    $$\begin{aligned}& \tilde{\beta}_{k}^{\mathrm{BZAU}} = \frac{\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}(\nabla_{Z}\tilde{f}_{\mu} (z_{k}) - \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1}))}{ - \eta \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1})^{T}d_{k - 1} + \mu |\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k - 1}|}, \end{aligned}$$
    (7)
    $$\begin{aligned}& \tilde{\theta}_{k}^{\mathrm{BZAU}} = \frac{\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k - 1}}{ - \eta\nabla_{Z}\tilde{f}_{\mu} (z_{k - 1})^{T}d_{k - 1} + \mu|\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k - 1}|}, \\& d_{k} = \left \{ \textstyle\begin{array}{l@{\quad}l} - \nabla_{Z}\tilde{f}_{\mu} (z_{k}) & \text{if } k = 0, \\ - \nabla_{Z}\tilde{f}_{\mu} (z_{k}) + \tilde{\beta}_{k}^{\mathrm{BZAU}}d_{k - 1} - \tilde{\theta}_{k}^{\mathrm {BZAU}}y_{k - 1} & \text{if } k \ge1, \end{array}\displaystyle \right . \end{aligned}$$
    (8)

    where \(y_{k - 1} = \nabla_{Z}\tilde{f}_{\mu} (z_{k}) - \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1})\).

  4. Step 3.

    Compute \(\alpha_{k}\) by the Armijo line search, where \(\alpha _{k} = \max \{ \rho^{0},\rho^{1},\rho^{2}, \ldots \}\) and \(\rho^{i}\) satisfies

    $$ \tilde{f} \bigl(z_{k} + \rho^{i}d_{k}, \mu_{k}\bigr) \le \tilde{f} (z_{k},\mu_{k}) + \sigma \rho^{i}\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k}. $$
    (9)
  5. Step 4.

    Compute \(z_{k + 1} = z_{k} + \alpha_{k}d_{k}\). If \(\|\nabla_{z}\bar{f}(z_{k + 1},\mu_{k})\| \ge r\mu_{k}\), set \(\mu_{k + 1} = \mu_{k}\); otherwise, let \(\mu_{k + 1} = \sigma \mu_{k}\).

  6. Step 5.

    Set \(k: = k + 1\) and go to Step 1.
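The steps above can be read as the following minimal Python sketch (our own code, not the authors' MATLAB implementation; it assumes the `f_smooth` and `grad_f_smooth` helpers sketched earlier and adds a maximum iteration count and a step-size floor as practical safeguards):

```python
import numpy as np

def smoothing_mttcg(f, grad, z0, sigma=0.2, rho=0.4, r=1.0, mu_bar=5.0,
                    eta=2.0, eps=1e-6, mu0=2.0, max_iter=1000):
    """Algorithm 1 (sketch): f(z, mu) and grad(z, mu) evaluate the smoothed
    objective and its gradient; mu_bar and eta are the constants in (7)-(8)."""
    z, mu = z0.astype(float), mu0
    g = grad(z, mu)
    d = -g                                               # Step 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                     # Step 1
            break
        alpha = 1.0                                      # Step 3: Armijo line search (9)
        while alpha > 1e-16 and f(z + alpha * d, mu) > f(z, mu) + sigma * alpha * (g @ d):
            alpha *= rho
        z_new = z + alpha * d                            # Step 4
        if np.linalg.norm(grad(z_new, mu)) < r * mu:
            mu *= sigma                                  # shrink the smoothing parameter
        g_new = grad(z_new, mu)
        y = g_new - g                                    # Step 2: three-term direction (7)-(8)
        denom = -eta * (g @ d) + mu_bar * abs(g_new @ d)
        beta = (g_new @ y) / denom
        theta = (g_new @ d) / denom
        d = -g_new + beta * d - theta * y
        z, g = z_new, g_new                              # Step 5
    return z
```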

Now, we give the convergence analysis of Algorithm 1. In order to obtain the global convergence of Algorithm 1, we need the following assumption.

Assumption 1

  1. (i)

    The level set \(\Omega= \{ z \in R^{2n}|\tilde{f}_{\mu} (z) \le \tilde{f}_{\mu} (z_{0}) \} \) is bounded.

  2. (ii)

    There exists a constant \(L > 0\) such that \(\nabla_{Z}\tilde{f}_{\mu}\) is Lipschitz continuous on an open convex set B containing Ω, i.e., for any \(z_{1},z_{2} \in B\),

    $$\big\| \nabla_{Z}\tilde{f}_{\mu} (z_{1}) - \nabla_{Z}\tilde{f}_{\mu} (z_{2})\big\| \le L \big\| z_{1} - z_{2}\big\| . $$
  3. (iii)

    There exists a positive constant m such that

    $$m \Vert d \Vert ^{2} \le d^{T} \nabla_{z}^{2}\tilde{f}_{\mu} (z)d, \quad\forall z,d \in R^{2n}, $$

    where \(\nabla_{z}^{2}\tilde{f}_{\mu} (z)\) is the Hessian matrix of \(\tilde{f}_{\mu}\).

By Assumption 1, we can see that there exist positive constants γ and b such that

$$\big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k})\big\| \le\gamma,\quad \forall z_{k} \in \Omega $$

and

$$\|z_{1} - z_{2}\| \le b,\quad\forall z_{1},z_{2} \in \Omega. $$

Lemma 1

Suppose \(\{ {z}_{{k}} \}\) and \(\{ {d}_{{k}} \}\) are generated by Algorithm 1, then

$$\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k} = - \bigl\Vert \nabla_{Z}\tilde{f}_{\mu} (z_{k}) \bigr\Vert ^{2} $$

and

$$\bigl\Vert \nabla_{Z}\tilde{f}_{\mu} (z_{k}) \bigr\Vert \le \Vert d_{k} \Vert . $$

Proof

By Algorithm 1, we have

$$d_{k} = - \nabla_{Z}\tilde{f}_{\mu} (z_{k}) + \tilde{\beta}_{k}^{\mathrm{BZAU}}d_{k - 1} - \tilde{\theta}_{k}^{\mathrm {BZAU}}y_{k - 1}. $$

Multiplying both sides of the above equation by \(\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}\), we obtain

$$\begin{aligned} \nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k} ={}& {-} \big\| \nabla_{Z} \tilde{f}_{\mu} (z_{k})\big\| ^{2} + \frac{\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}(\nabla_{Z}\tilde{f}_{\mu} (z_{k}) - \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1}))(\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k - 1})}{ - \eta\nabla_{Z}\tilde{f}_{\mu} (z_{k - 1})^{T}d_{k - 1} + \mu|\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k - 1}|} \\ &- \frac{(\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k - 1})\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}(\nabla_{Z}\tilde{f}_{\mu} (z_{k}) - \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1}))}{ - \eta \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1})^{T}d_{k - 1} + \mu |\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k - 1}|} ,\end{aligned} $$

i.e.,

$$\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k} = - \big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k}) \big\| ^{2}. $$

Now, we have

$$\big|\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k}\big| = \big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k}) \big\| ^{2} $$

and

$$\big|\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k}\big| \le \big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k})\big\| \Vert d_{k} \Vert . $$

By

$$\big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k}) \big\| ^{2} \le \big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k})\big\| \Vert d_{k} \Vert $$

we have

$$\big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k})\big\| \le \Vert d_{k} \Vert . $$

Hence, the proof is complete. □

Lemma 2

Suppose Assumption 1 holds and \(\{ {z}_{{k}} \}\) and \(\{ {d}_{{k}} \}\) are generated by Algorithm 1, then

$$\sum_{k = 0}^{\infty} \frac{(\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k})^{2}}{\|d_{k}\|^{2}} < + \infty $$

and

$$\sum_{k = 0}^{\infty} \frac{\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|^{4}}{\|d_{k}\|^{2}} < + \infty. $$

Proof

Using techniques similar to those of the lemmas in [31–33], we can obtain this lemma; the proof is omitted. □

Lemma 3

Suppose Assumption 1 holds and \(\{ {z}_{{k}} \}\) and \(\{ {d}_{{k}} \}\) are generated by Algorithm 1, then

$$ a_{1}\alpha_{k}\|d_{k}\|^{2} \le- \nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k}, $$
(10)

where \(a_{1} = (1 - \sigma)^{ - 1}(m/2)\), m is a positive constant and \(0 < \sigma< 1\).

Proof

By using Taylor’s expansion, we have

$$ \tilde{f} (z_{k + 1}) = \tilde{f} (z_{k}) + \nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}s_{k} + \frac{1}{2}s_{k}^{T}G_{k}s_{k}, $$
(11)

where \(s_{k} = z_{k + 1} - z_{k} = \alpha_{k}d_{k}\) and

$$G_{k} = \int_{0}^{1} \nabla_{z}^{2} \tilde{f}_{\mu} (z_{k} + \tau s_{k})\,d\tau. $$

By Armijo line search, we know that

$$ \tilde{f} (z_{k + 1}) \le\tilde{f} (z_{k}) + \sigma \nabla_{z}\tilde{f}_{\mu} (z_{k})^{T}s_{k}. $$
(12)

By (11) and (12), we have

$$\frac{1}{2}s_{k}^{T}G_{k}s_{k} \le(1 - \sigma) \bigl( - \nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}s_{k}\bigr), $$

By Assumption 1(iii), \(m\|s_{k}\|^{2} \le s_{k}^{T}G_{k}s_{k}\); substituting this and \(s_{k} = \alpha_{k}d_{k}\) into the above inequality and dividing by \(\alpha_{k}(1 - \sigma)\), we obtain

$$\frac{1}{2}(1 - \sigma)^{ - 1}m\alpha_{k} \|d_{k}\|^{2} \le- \nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k}. $$

Denoting \(a_{1} = (1 - \sigma)^{ - 1}(m/2)\), we get (10). This completes the proof. □

By Lemmas 1, 2, and 3, we can get global convergence of the given method, i.e., the following theorem.

Theorem 1

Suppose Assumption 1 holds, then

$$\lim_{k \to\infty} \big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k})\big\| = 0. $$

Proof

From Assumption 1, (7), and (10), we have

$$\begin{aligned}& \big|\tilde{\beta}_{k}^{\mathrm{BZAU}}\big| \le \biggl\vert \frac{\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}(\nabla_{Z}\tilde{f}_{\mu} (z_{k}) - \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1}))}{\eta( - \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1})^{T}d_{k - 1})} \biggr\vert \le\frac{\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|L\alpha_{k - 1}\|d_{k - 1}\|}{\eta(a_{1}\alpha_{k - 1}\|d_{k - 1}\|^{2})}, \\& \big|\tilde{\beta}_{k}^{\mathrm{BZAU}}\big|\|d_{k - 1}\| \le \biggl( \frac{L\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|}{\eta (a_{1}\|d_{k - 1}\|)}\biggr)\|d_{k - 1}\|=\frac{L\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|}{\eta a_{1}}, \end{aligned}$$
(13)

Similarly,

$$\big|\tilde{\theta}_{k}^{\mathrm{BZAU}}\big|\|y_{k - 1}\| \le \bigg| \frac{\nabla_{Z}\tilde{f}_{\mu} (z_{k})^{T}d_{k - 1}}{\eta( - \nabla_{Z}\tilde{f}_{\mu} (z_{k - 1})^{T}d_{k - 1})} \bigg|\| y_{k - 1}\|. $$

From Assumption 1, (8), and (10), we have

$$ \big|\tilde{\theta}_{k}^{\mathrm{BZAU}}\big|\|y_{k - 1}\| \le \biggl( \frac{\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|L\|z_{k} - z_{k - 1}\|}{\eta(a_{1}\alpha_{k - 1}\|d_{k - 1}\|^{2})}\biggr)\|d_{k - 1}\| = \frac{L\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|}{\eta a_{1}}. $$
(14)

Combining (13), (14), and \(d_{k}\) generated in Algorithm 1, we obtain

$$\begin{aligned} \|d_{k}\| &\le\big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k})\big\| + \big|\tilde{\beta}_{k}^{\mathrm{BZAU}}\big| \|d_{k - 1}\| + \big|\tilde{\theta}_{k}^{\mathrm{BZAU}}\big| \|y_{k - 1}\| \\ &\le\big\| \nabla_{Z}\tilde{f}_{\mu} (z_{k})\big\| + \frac{L\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|}{\eta a_{1}}+\frac{L\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|}{\eta a_{1}} \\ &=\biggl(1 + \frac{2L}{\eta a_{1}}\biggr)\big\| \nabla_{Z} \tilde{f}_{\mu} (z_{k})\big\| .\end{aligned} $$

Denoting \(\sqrt{B} = 1 + \frac{2L}{\eta a_{1}}\), we have \(\|d_{k}\|^{2} \le B\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|^{2}\), i.e.,

$$\frac{1}{\|d_{k}\|^{2}} \ge \frac{1}{B\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|^{2}} $$

and

$$\frac{B\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|^{4}}{\|d_{k}\|^{2}} \ge \frac{\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|^{4}}{\|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\|^{2}} =\big\| \nabla_{Z} \tilde{f}_{\mu} (z_{k})\big\| ^{2}. $$

By Lemma 2, we then have

$$\sum_{k = 0}^{\infty} \big\| \nabla_{Z} \tilde{f}_{\mu} (z_{k})\big\| ^{2} < + \infty, $$

which implies \(\lim_{k \to\infty} \|\nabla_{Z}\tilde{f}_{\mu} (z_{k})\| = 0\). This completes the proof. □

4 Numerical experiments

In this section, we give some numerical experiments for Algorithm 1, which are also considered in [2, 9, 10, 14, 15]. We compare Algorithm 1 with the smoothing gradient method, the GPSR method, and the debiased and minimum norm methods proposed in [2, 9, 10, 14], respectively. The numerical results of all the examples show that Algorithm 1 is effective. All codes were run in MATLAB 8.0. For Examples 1 and 2, the parameters used in Algorithm 1 are chosen as \(\sigma= 0.2\), \(\mu= 5\), \(\eta= 2\), \(\gamma= 0.5\), \(\varepsilon= 10^{ - 6}\), \(\rho= 0.4\).

Example 1

Consider

$$A = \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 3 & 5 & 8 & 4 & 1 & 5 \\ 2 & 9 & 6 & 5 & 7 & 4 \\ 3 & 4 & 7 & 2 & 1 & 6 \\ 8 & 9 & 6 & 5 & 7 & 4 \end{array}\displaystyle \right ),\qquad b = ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 2 & 4 & 1 & 7 \end{array}\displaystyle )^{T}, $$

and \(\tau= 5\).

From [14], we know that this example has a solution \({x}^{*} = (0.3461,0.0850,0,0,0.3719,0)^{{T}}\). The solution computed by Algorithm 1 is \({x}^{*} = ( 0.3459, 0.0850,0.0001, 0.0009,0.3717, - 0.0001)^{{T}}\). In Figs. 1 and 2, we plot the evolution of the objective function versus the number of iterations when solving Example 1 with Algorithm 1 and the smoothing gradient method, respectively.
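Assuming the sketches given above (`build_problem`, `f_smooth`, `grad_f_smooth`, `smoothing_mttcg`, all of them our own illustrations rather than the authors' code), Example 1 can be reproduced in outline as follows; the reference value of \(x^{*}\) is the one reported above.

```python
import numpy as np

A = np.array([[3., 5., 8., 4., 1., 5.],
              [2., 9., 6., 5., 7., 4.],
              [3., 4., 7., 2., 1., 6.],
              [8., 9., 6., 5., 7., 4.]])
b = np.array([2., 4., 1., 7.])
tau = 5.0

H, c = build_problem(A, b, tau)
n = A.shape[1]
z_star = smoothing_mttcg(f=lambda z, mu: f_smooth(z, H, c, mu),
                         grad=lambda z, mu: grad_f_smooth(z, H, c, mu),
                         z0=np.zeros(2 * n))
x = z_star[:n] - z_star[n:]   # recover x = u - v
print(x)                      # expected to be close to the x* reported above
```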

Figure 1

Numerical results for solving Example 1 with Algorithm 1

Figure 2

Numerical results for solving Example 1 with smoothing gradient method

Example 2

Consider

$$\begin{gathered} A = \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 1 & 1 & 1 & 0 & 0 & 0 & \cdots& 0 & 0 & 0 & 1 & \cdots& 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & \cdots& 0 & 0 & 0 & 1 & \cdots& 1 \\ \vdots& \vdots& \vdots& \vdots& \vdots& \vdots& \cdots& \vdots& \vdots& \vdots& \vdots& \cdots& \vdots\\ 0 & 0 & 0 & 0 & 0 & 0 & \cdots& 1 & 1 & 1 & 1 & \cdots& 1 \end{array}\displaystyle \right )_{m \times n},\\b = ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 1 & 1 & \cdots& 1 \end{array}\displaystyle )^{T},\end{gathered} $$

and \(\tau= 2\). In this example, we choose \(m = 30\), \(n = 100\). The numerical results are given in Figs. 3 and 4.

Figure 3

Numerical results for solving Example 2 with Algorithm 1

Figure 4

Numerical results for solving Example 2 with smoothing gradient method

Example 3

Consider

$$A = \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 1 & 0 & \cdots& 0 & 1 & 1 & \cdots& 1 \\ 0 & 1 & \cdots& 1 & 0 & 1 & \cdots& 1 \\ \vdots& \vdots& \ddots& \vdots& \vdots& \vdots& \cdots& \vdots\\ 0 & 1 & \cdots& 1 & 0 & 1 & \cdots& 1 \\ 1 & 0 & \cdots& 0 & 1 & 1 & \cdots& 1 \end{array}\displaystyle \right )_{m \times n},\qquad b = ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 1 & 1 & \cdots& 1 \end{array}\displaystyle )^{T}, $$

and \(\tau= 6\). In this example, we choose \(m = 100\), \(n = 110\). The numerical results are given in Figs. 5 and 6.

Figure 5

Numerical results for solving Example 3 with Algorithm 1

Figure 6

Numerical results for solving Example 3 with smoothing gradient method

Example 4

Consider

$$A = \left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 4 & - 1 & 0 & \cdots& 0 & 1 & \cdots& 1 \\ - 1 & 4 & - 1 & \cdots& \vdots& \vdots& \cdots& \vdots\\ 0 & - 1 & 4 & \ddots& \vdots& \vdots& \cdots& \vdots\\ \vdots& \ddots& \ddots& \ddots& - 1 & \vdots& \cdots& \vdots\\ 0 & \cdots& 0 & - 1 & 4 & 1 & \cdots& 1 \end{array}\displaystyle \right )_{m \times n},\qquad b = ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 1 & 1 & \cdots& 1 \end{array}\displaystyle )^{T}, $$

and \(\tau= 10\). In this example, we choose \(m = 200\), \(n = 210\). The numerical results are given in Figs. 7 and 8.

Figure 7

Numerical results for solving Example 4 with Algorithm 1

Figure 8

Numerical results for solving Example 4 with smoothing gradient method

Example 5

In this example, we consider a typical compressed sensing problem, which is also considered in [9, 10, 14, 15]. We choose \(m = 2^{4}\), \(n = 2^{6}\), \(\sigma= 0.5\), \(\rho= 0.4\), \(\gamma= 0.5\), \(\varepsilon= 10^{ - 6}\), \(\mu= 5\), \(\eta= 2\). The original signal contains 520 randomly generated ±1 spikes. Further, the \(m \times n\) matrix A is obtained by first filling it with independent samples of a standard Gaussian distribution and then orthogonalizing its rows. We choose \(\sigma^{2} = 10^{ - 4}\) and \(\tau= 0.1 \Vert A^{T}y \Vert _{\infty} \), as suggested in [14]. The numerical results are shown in Fig. 9.
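A test instance of this type might be generated as follows (an illustrative sketch only; the number of spikes is left as a parameter, and the random seed, helper names, and the use of a QR factorization for the row orthogonalization are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2**4, 2**6
n_spikes = 4                                  # illustrative number of +-1 spikes

# Sparse spike signal
x_true = np.zeros(n)
support = rng.choice(n, size=n_spikes, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=n_spikes)

# Gaussian matrix with orthogonalized rows
G = rng.standard_normal((m, n))
Q, _ = np.linalg.qr(G.T)                      # columns of Q are orthonormal
A = Q.T                                       # so the rows of A are orthonormal

# Noisy observations and regularization parameter as in [14]
y = A @ x_true + np.sqrt(1e-4) * rng.standard_normal(m)
tau = 0.1 * np.linalg.norm(A.T @ y, np.inf)
```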

Figure 9

Numerical results for solving Example 5 with Algorithm 1

5 Conclusion

In this paper, we have proposed a new smoothing modified three-term conjugate gradient method for solving \(l_{1}\)-norm nonsmooth problems. Compared with the smoothing gradient method, the GPSR method, and the other methods proposed in [2, 9, 10, 14], the smoothing modified three-term conjugate gradient method is simple and needs little storage. Compared with the smoothing gradient method proposed in [14], the smoothing modified three-term conjugate gradient method is significantly faster, especially in solving large-scale problems.

References

  1. Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for \(l_{1}\)-minimization: methodology and convergence. SIAM J. Optim. 19(3), 1107–1130 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  2. Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction, application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007)

    Article  Google Scholar 

  3. Kim, S.J., Koh, K., Lustig, M., Boyd, S., Gorinevsky, D.: An interior-point method for large-scale \(l_{1}\)-regularized least squares. IEEE J. Sel. Top. Signal Process. 1(4), 606–617 (2007)

    Article  Google Scholar 

  4. Turlach, B.A., Venables, W.N., Wright, S.J.: Simultaneous variable selection. Technometrics 47(3), 349–363 (2005)

    Article  MathSciNet  Google Scholar 

  5. Daubechies, I., Defrise, M., Mol, C.D.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57(11), 1413–1457 (2004)

    Article  MathSciNet  MATH  Google Scholar 

  6. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  7. Osher, S., Mao, Y., Dong, B., Yin, W.: Fast linearized Bregman iteration for compressive sensing and sparse denoising. Commun. Math. Sci. 8(1), 93–111 (2010)

    MathSciNet  MATH  Google Scholar 

  8. Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithm for \(l_{1}\) minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  9. Yang, J., Zhang, Y.: Alternating direction algorithms for \(l_{1}\)-problems in compressive sensing. SIAM J. Sci. Comput. 33, 250–278 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  10. Xiao, Y.H., Wang, Q.Y., Hu, Q.J.: Non-smooth equations based method for \(l_{1}\)-norm problems with applications to compressed sensing. Nonlinear Anal. 74(11), 3570–3577 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  11. Wakin, M.B., Laska, J.N., Duarte, M.F., Baron, D., Sarvotham, S., Takhar, D., Kelly, K.F., Baraniuk, R.G.: An architecture for compressive imaging. In: 2006 International Conference on Image Processing, pp. 1273–1276 (2006)

    Chapter  Google Scholar 

  12. Lustig, M., Donoho, D., Pauly, J.M.: Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 58(6), 1182–1195 (2007)

    Article  Google Scholar 

  13. Farajzadeh, A.P., Plubtieng, S., Ungchittrakool, K.: On best proximity point theorems without ordering. Abstr. Appl. Anal. 2014, Article ID 130439 (2014)

    MathSciNet  Google Scholar 

  14. Chen, Y.Y., Gao, Y., Liu, Z.M., Du, S.Q.: The smoothing gradient method for a kind of special optimization problem. Oper. Res. Trans. 21(2), 119–125 (2017)

    MATH  Google Scholar 

  15. Chen, M., Du, S.Q.: The smoothing FR conjugate gradient method for solving a kind of nonsmooth optimization problem with \(l_{1}\)-norm. Math. Probl. Eng. 2018, Article ID 5817931 (2018)

    Google Scholar 

  16. Wang, Y.J., Zhou, G.L., Caccetta, L., Liu, W.Q.: An alternative Lagrange-dual based algorithm for sparse signal reconstruction. IEEE Trans. Signal Process. 59(4), 1895–1901 (2011)

    Article  Google Scholar 

  17. Mangasarian, O.L.: Absolute value programming. Comput. Optim. Appl. 36(1), 43–53 (2007)

    Article  MathSciNet  MATH  Google Scholar 

  18. Mangasarian, O.L.: A generalized Newton method for absolute value equations. Optim. Lett. 3(1), 101–108 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  19. Caccetta, L., Qu, B., Zhou, G.L.: A globally and quadratically convergent method for absolute value equations. Comput. Optim. Appl. 48(1), 45–58 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  20. Mangasarian, O.L.: Absolute value equation solution via concave minimization. Optim. Lett. 1(1), 3–8 (2007)

    Article  MathSciNet  MATH  Google Scholar 

  21. Chen, H.B., Wang, Y.J., Zhao, H.G.: Finite convergence of a projected proximal point algorithm for generalized variational inequalities. Oper. Res. Lett. 40(4), 303–305 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  22. Chen, Y.Y., Gao, Y.: Two kinds of new Levenberg–Marquardt method for nonsmooth nonlinear complementarity problem. ScienceAsia 40, 89–93 (2014)

    Article  Google Scholar 

  23. Qu, B., Xiu, N.H.: A relaxed extragradient-like method for a class of constrained optimization problem. J. Ind. Manag. Optim. 3(4), 645–654 (2007)

    Article  MathSciNet  MATH  Google Scholar 

  24. Fletcher, R., Reeves, C.M.: Function minimization by conjugate gradients. Comput. J. 7(2), 149–154 (1964)

    Article  MathSciNet  MATH  Google Scholar 

  25. Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177–182 (1999)

    Article  MathSciNet  MATH  Google Scholar 

  26. Yu, G.H., Qi, L.Q., Dai, Y.H.: On nonmonotone Chambolle gradient projection algorithms for total variation image restoration. J. Math. Imaging Vis. 35(2), 143–154 (2009)

    Article  MathSciNet  Google Scholar 

  27. Yu, Z.S., Lin, J., Sun, J., Xiao, Y.H., Lin, L.Y., Li, Z.H.: Spectral gradient projection method for monotone nonlinear equations with convex constraints. Appl. Numer. Math. 59(10), 2416–2423 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  28. Chen, X.J.: Smoothing methods for nonsmooth, nonconvex minimization. Math. Program. 134, 71–99 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  29. Pang, D.Y., Du, S.Q., Ju, J.J.: The smoothing Fletcher–Reeves conjugate gradient method for solving finite minimax problems. ScienceAsia 42, 40–45 (2016)

    Article  Google Scholar 

  30. Zhang, L., Wu, S.Y., Gao, T.: Improved smoothing Newton methods for \(P_{0}\) nonlinear complementarity problems. Appl. Math. Comput. 215(1), 324–332 (2009)

    MathSciNet  MATH  Google Scholar 

  31. Zhang, L., Zhou, W.: On the global convergence of the Hager–Zhang conjugate gradient method with Armijo line search. Acta Math. Sci. 28(5), 840–845 (2008)

    MathSciNet  MATH  Google Scholar 

  32. Sun, M., Liu, J.: Three modified Polak–Ribière–Polyak conjugate gradient methods with sufficient descent property. J. Inequal. Appl. 2015, Article ID 125 (2015)

    Article  MATH  Google Scholar 

  33. Baluch, B., Salleh, Z., Alhawarat, A., Roslan, U.: A new modified three-term conjugate gradient method with sufficient descent property and its global convergence. J. Math. 2017, Article ID 2715854 (2017)

    MathSciNet  Google Scholar 


Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11671220) and the Natural Science Foundation of Shandong Province (No. ZR2016AM29).

Author information


Contributions

All authors contributed equally. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Shouqiang Du.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Du, S., Chen, M. A new smoothing modified three-term conjugate gradient method for \(l_{1}\)-norm minimization problem. J Inequal Appl 2018, 105 (2018). https://doi.org/10.1186/s13660-018-1696-9

