- Research
- Open Access

# A new reweighted ${l}_{1}$ minimization algorithm for image deblurring

- Tiantian Qiao^{1, 2} (Email author), Boying Wu^{2}, Weiguo Li^{1} and Alun Dong^{1}

*Journal of Inequalities and Applications* **2014**:238

https://doi.org/10.1186/1029-242X-2014-238

© Qiao et al.; licensee Springer 2014

**Received:** 22 February 2013 **Accepted:** 16 March 2014 **Published:** 16 June 2014

## Abstract

In this paper, a new reweighted ${l}_{1}$ minimization algorithm for image deblurring is proposed. The algorithm is based on a generalized inverse iteration and a linearized Bregman iteration, and is used for the weighted ${l}_{1}$ minimization problem ${min}_{u\in {\mathbb{R}}^{n}}\{{\parallel u\parallel}_{\omega}:Au=f\}$. In the computing process, effective use of the signal information can recover detailed features of the image that may otherwise be lost in the deblurring process. Numerical experiments confirm that the new reweighted algorithm for image restoration is effective and competitive with recent state-of-the-art algorithms.

## Keywords

- reweighted ${l}_{1}$ minimization
- generalized inverse
- linearized Bregman iteration
- image deblurring

## 1 Introduction

Image deblurring is a fundamental problem in image processing, since many real-life problems can be modeled as deblurring problems [1]. In this paper, a new reweighted ${l}_{1}$ minimization algorithm for image deblurring is proposed. The algorithm is obtained based on a generalized inverse iteration and a linearized Bregman iteration.

The blurring process is commonly modeled as $f=Au+\eta $ (1.1), where $\eta \in {\mathbb{R}}^{n}$ is an additive noise and $A\in {\mathbb{R}}^{m\times n}$ is a linear blurring operator. This problem is ill-posed due to the large condition number of the matrix *A*: any small perturbation of the observed blurred image *f* may cause the direct solution ${A}^{-1}f$ to deviate wildly from the original image *u* [2]. This is a widely studied subject and many approaches have been developed; one of them is to minimize a cost functional [1]. The simplest such method is Tikhonov regularization, which minimizes an energy consisting of a data fidelity term and an ${l}_{2}$-norm regularization term. When *A* is a convolution, the problem can be solved in the Fourier domain; in this case the method is called a Wiener filter [3]. This is a linear method, and the edges of the restored image are usually smeared. To overcome this, a total variation (TV)-based regularization was proposed by Rudin *et al.* in [4], known as the ROF model. Due to its virtue of preserving edges, it is widely used in image processing, such as blind deconvolution, inpainting, and superresolution; see [1]. However, since TV yields staircasing [5, 6], TV-based methods do not preserve fine structures, details, and textures. To avoid these drawbacks, nonlocal methods were proposed for denoising [7, 8] and then extended to deblurring [9]. Also, the Bregman iteration, introduced to image science in [10], was shown to improve TV-based blind deconvolution [11–13]. Recently, a nonlocal TV regularization was derived from graph theory [14] and applied to image deblurring [15]. Another approach to deblurring is the wavelet-based method [16].
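To make the ill-posedness concrete, here is a small 1-D sketch of the degradation model $f=Au+\eta $; the sizes, kernel width, and noise level are illustrative choices, not values from the paper:

```python
import numpy as np

# 1-D analogue of the degradation model f = A u + eta: A is a circulant
# Gaussian blur matrix. Sizes, kernel width, and noise level are
# illustrative choices, not values from the paper.
n, sigma = 64, 3.0
offsets = np.arange(-6, 7)
kernel = np.exp(-offsets**2 / (2 * sigma**2))
kernel /= kernel.sum()

A = np.zeros((n, n))
for i in range(n):
    for kern_val, off in zip(kernel, offsets):
        A[i, (i + off) % n] += kern_val

rng = np.random.default_rng(0)
u = rng.random(n)                    # stand-in for the original image
eta = 1e-3 * rng.standard_normal(n)  # small additive noise
f = A @ u + eta                      # observed blurred, noisy data

# Ill-posedness: A has a large condition number, so direct inversion
# amplifies the noise instead of recovering u.
u_naive = np.linalg.solve(A, f)
print(np.linalg.cond(A))
print(np.linalg.norm(u_naive - u), np.linalg.norm(eta))
```

Even with noise three orders of magnitude below the signal, the naive inverse magnifies it, which is why regularized formulations are needed.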

A general variational approach solves the constrained minimization problem ${min}_{u\in {\mathbb{R}}^{n}}\{J(u):Au=f\}$ (1.2), where $J(u)$ is a continuous convex function; when $J(u)$ is strictly or strongly convex, the solution of (1.2) is unique.

The constrained optimization problem (1.2) arises in many applications, such as image compression, reconstruction, inpainting, segmentation, and compressed sensing. In many cases the problem (1.2) can be transformed into a linear programming problem and then solved by a conventional linear programming solver. Recently, the fixed-point continuation method [17] and the Bregman iteration [18] have become very popular. In particular, Bregman iterative regularization was proposed by Osher *et al.* [10]. In the past few years a series of new methods have been developed; among them, the linearized Bregman method [19–22] and the split Bregman method [23–26] have received the most attention.

Taking $J(u)={\parallel u\parallel}_{1}$ in (1.2) gives the ${l}_{1}$-norm minimization problem ${min}_{u\in {\mathbb{R}}^{n}}\{{\parallel u\parallel}_{1}:Au=f\}$ (1.3). Many practical problems related to the sparsity of the solution, for example in signal processing and compressive sensing, have kept the problem (1.3) in focus for years [18, 19]. Like the problem (1.2), the problem (1.3) can also be transformed into a linear program and solved by conventional linear programming solvers. However, such solvers are not tailored to a matrix *A* that is large-scale and completely dense. Fortunately, the problem (1.3) can be solved very effectively by the linearized Bregman method [19–22, 27]. Its simplified form with a soft-threshold operator is even faster [19, 21, 22], and the corresponding convergence analysis was given in [20].

Consider the spaces *X* and *Y*. We seek sparse solutions in an orthogonal basis ${\{{\psi}_{j}\}}_{j\in N}$. The standard approach is a weighted version of the ${\ell}_{1}$ minimization (1.3), namely ${min}_{u\in {\mathbb{R}}^{n}}\{{\parallel u\parallel}_{\omega}:Au=f\}$ (1.5), where ${\parallel u\parallel}_{\omega}={\sum}_{j}{\omega}_{j}|{u}_{j}|$ with weights ${\omega}_{j}>0$.

On this basis, the authors propose a new reweighted ${l}_{1}$ minimization method to solve the problem (1.5) and illustrate its performance by numerical experiments.

The rest of the paper is organized as follows. In Section 2, we summarize the existing methods for solving the constrained problem (1.3). In Section 3, the generalized shrinkage operator is proposed. The new algorithm is proposed in Section 4. Numerical results are shown in Section 5. Finally, we draw some conclusions in Section 6.

## 2 Preliminaries

### 2.1 Generalized inverse

We are interested in the iterative formula for the generalized inverse because it is used in our new algorithm. Before the detailed discussion, we first recall some definitions and lemmas.

**Definition 2.1** [29] A matrix *X* is called the pseudoinverse of *A*, denoted by ${A}^{\dagger}$, if *X* satisfies the following properties, *i.e.*, the Moore-Penrose conditions:

$AXA=A$, $XAX=X$, ${(AX)}^{\ast}=AX$, ${(XA)}^{\ast}=XA$.

**Remark 2.1** The inner inverse is not unique. In general, the set of the inner inverses of the matrix *A* is denoted ${A}^{-}$.

**Definition 2.2** [29]

is called the range of $(A,B)$.

**Lemma 2.1** [30] *Let* $A\in {\mathbb{C}}^{m\times n}$, $A\ne 0$, *and let the initial matrix* ${V}_{0}=\alpha {A}^{\ast}$ *satisfy* $0<\alpha <\frac{2}{{\parallel A\parallel}^{2}}$, *where* ${A}^{\ast}$ *is the conjugate transpose of the matrix* *A*. *Then the sequence* ${\{{V}_{q}\}}_{q\in \mathbb{N}}$ *generated by*

${V}_{q+1}={V}_{q}(2I-A{V}_{q})$, $q=0,1,2,\dots$ (2.5)

*where* *I* *is an identity matrix of the appropriate dimension, is convergent to* ${A}^{\dagger}$.
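As a quick check of Lemma 2.1, the sketch below runs the classical Ben-Israel-Cohen scheme ${V}_{q+1}={V}_{q}(2I-A{V}_{q})$ with ${V}_{0}=\alpha {A}^{\ast}$ (the initialization used later in the paper) and compares the limit with a library pseudoinverse; the matrix and its size are arbitrary:

```python
import numpy as np

# Iterative computation of the pseudoinverse, assuming the classical
# Ben-Israel-Cohen scheme V_{q+1} = V_q (2I - A V_q), V_0 = alpha * A^*.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))            # any nonzero matrix (illustrative)

alpha = 1.0 / np.linalg.norm(A, 2) ** 2    # satisfies 0 < alpha < 2/||A||^2
V = alpha * A.T                            # V_0 = alpha * A^* (A is real here)
I = np.eye(A.shape[0])                     # identity matching the product A V

for _ in range(60):
    V = V @ (2 * I - A @ V)

# The iterates converge (quadratically, once close) to the Moore-Penrose
# pseudoinverse A^dagger.
print(np.max(np.abs(V - np.linalg.pinv(A))))
```

Only matrix products appear in the loop, which is what makes the scheme attractive inside the algorithms of Section 4.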

### 2.2 Linearized Bregman iteration

The Bregman distance based on the convex function *J*, between the points *u* and *v*, is defined by

${D}_{J}^{p}(u,v)=J(u)-J(v)-\langle p,u-v\rangle $,

where $p\in \partial J(v)$ is an element of the subgradient set of *J* at the point *v*. In general ${D}_{J}^{p}(u,v)\ne {D}_{J}^{p}(v,u)$ and the triangle inequality is not satisfied, so ${D}_{J}^{p}(u,v)$ is not a distance in the usual sense. For details, see [31].
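A small numerical example of the Bregman distance for $J(u)={\parallel u\parallel}_{1}$, illustrating its nonnegativity and asymmetry; the vectors are arbitrary illustrative choices:

```python
import numpy as np

# Bregman distance D_J^p(u, v) = J(u) - J(v) - <p, u - v>, with
# J(u) = ||u||_1 and the subgradient p = sign(v) (valid when v has no
# zero entries). The vectors are arbitrary illustrative choices.
def bregman_l1(u, v):
    p = np.sign(v)                # p in the subdifferential of ||.||_1 at v
    return np.sum(np.abs(u)) - np.sum(np.abs(v)) - p @ (u - v)

u = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 1.0, -1.0])

# Nonnegative in both directions, but not symmetric: D(u,v) != D(v,u).
print(bregman_l1(u, v), bregman_l1(v, u))
```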

For solving the problem (1.3), the *linearized Bregman iteration* is generated by

${u}^{k+1}=arg{min}_{u\in {\mathbb{R}}^{n}}\{{D}_{J}^{{p}^{k}}(u,{u}^{k})+\frac{1}{2\delta}{\parallel u-{u}^{k}+\delta {A}^{T}(A{u}^{k}-f)\parallel}^{2}\}$, ${p}^{k+1}={p}^{k}-\frac{1}{\delta}({u}^{k+1}-{u}^{k})-{A}^{T}(A{u}^{k}-f)$, (2.8)

where *δ* is a constant and ${p}^{0}={u}^{0}=0$. Hereafter, we use $\parallel \cdot \parallel ={\parallel \cdot \parallel}_{2}$ to denote the ${l}_{2}$ norm. Since only ${A}^{T}$ appears in the iteration, the algorithm (2.8) is called an ${A}^{T}$ *linearized Bregman iteration*.

When *A* is an arbitrary matrix, the constraint condition $Au=f$ of the problem (1.3) may not be satisfiable. The condition is therefore relaxed to the least-squares problem ${min}_{u\in {\mathbb{R}}^{n}}{\parallel Au-f\parallel}^{2}$, and the algorithm becomes the following ${A}^{\dagger}$ *linearized Bregman iteration* [22], obtained by replacing ${A}^{T}$ with ${A}^{\dagger}$ in (2.8):

${u}^{k+1}=arg{min}_{u\in {\mathbb{R}}^{n}}\{{D}_{J}^{{p}^{k}}(u,{u}^{k})+\frac{1}{2\delta}{\parallel u-{u}^{k}+\delta {A}^{\dagger}(A{u}^{k}-f)\parallel}^{2}\}$, ${p}^{k+1}={p}^{k}-\frac{1}{\delta}({u}^{k+1}-{u}^{k})-{A}^{\dagger}(A{u}^{k}-f)$, (2.11)

where ${A}^{\dagger}$ is the generalized inverse of the matrix *A*.
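As an illustration, the sketch below runs the widely used two-line soft-thresholding form of the ${A}^{T}$ linearized Bregman iteration, ${v}^{k+1}={v}^{k}+{A}^{T}(f-A{u}^{k})$, ${u}^{k+1}=\delta {T}_{\mu}({v}^{k+1})$, on a small synthetic sparse-recovery problem; the matrix, sparsity pattern, and parameter values are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Two-line soft-thresholding form of the A^T linearized Bregman iteration
# (an assumed simplification of (2.8) for J(u) = mu*||u||_1):
#   v^{k+1} = v^k + A^T (f - A u^k),   u^{k+1} = delta * T_mu(v^{k+1})
def soft_threshold(v, mu):
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, 2)               # normalize so the iteration is stable
x_true = np.zeros(50)
x_true[[3, 17, 31]] = [2.0, -1.5, 1.0]  # sparse ground truth (illustrative)
f = A @ x_true

mu, delta = 1.0, 1.0
u = np.zeros(50)
v = np.zeros(50)
for _ in range(20000):
    v = v + A.T @ (f - A @ u)
    u = delta * soft_threshold(v, mu)

print(np.linalg.norm(A @ u - f) / np.linalg.norm(f))  # relative residual
```

The residual $\parallel Au-f\parallel $ is driven toward zero while the soft threshold keeps the iterate sparse.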

## 3 The generalized shrinkage operator

**Theorem 3.1** *Let* ${T}_{\mu}$ *be the shrinkage (soft-thresholding) operator* [32], *defined component-wise by* ${T}_{\mu}{(v)}_{i}=sign({v}_{i})max\{|{v}_{i}|-\mu ,0\}$. *Then* ${T}_{\mu}(v)=arg{min}_{u\in {\mathbb{R}}^{n}}\{\mu {\parallel u\parallel}_{1}+\frac{1}{2}{\parallel u-v\parallel}^{2}\}$.

*Proof*Let $f(u)=\mu {\parallel u\parallel}_{1}+\frac{1}{2}{\parallel u-{v}^{k}\parallel}^{2}=\mu {\sum}_{i=1}^{n}|{u}_{i}|+\frac{1}{2}{\sum}_{i=1}^{n}{({v}_{i}^{k}-{u}_{i})}^{2}$, then we have

**Case 1**: ${v}_{i}^{k}>\mu $.

- (1) If ${u}_{i}>0$, then setting $\frac{\partial f(u)}{\partial {u}_{i}}={u}_{i}-{v}_{i}^{k}+\mu =0$ gives ${u}_{i}={v}_{i}^{k}-\mu >0$; in this case $f(u)$ attains its minimum at ${u}_{i}={v}_{i}^{k}-\mu $ along the direction ${e}_{i}$, and the minimum is $f(u){|}_{{u}_{i}={v}_{i}^{k}-\mu}=\mu ({v}_{i}^{k}-\mu )+\frac{1}{2}{\mu}^{2}+{\delta}_{1}\phantom{\rule{0.25em}{0ex}}(>0)={\mathrm{\Delta}}_{1}+{\delta}_{1}$. (3.2)
- (2) If ${u}_{i}<0$, then $\frac{\partial f(u)}{\partial {u}_{i}}={u}_{i}-{v}_{i}^{k}-\mu <0$, so $f(u)$ decreases along the direction ${e}_{i}$ and its infimum over this half-line is attained at the boundary ${u}_{i}=0$: $f(u){|}_{{u}_{i}=0}=\frac{1}{2}{({v}_{i}^{k})}^{2}+{\delta}_{1}\phantom{\rule{0.25em}{0ex}}(>0)={\mathrm{\Delta}}_{2}+{\delta}_{1}$. (3.3)

Since ${\mathrm{\Delta}}_{2}-{\mathrm{\Delta}}_{1}=\frac{1}{2}{({v}_{i}^{k})}^{2}-(\mu {v}_{i}^{k}-\frac{1}{2}{\mu}^{2})=\frac{1}{2}{({v}_{i}^{k}-\mu )}^{2}>0$, along the direction ${e}_{i}$ the minimizer of $f(u)$ is ${u}_{i}={v}_{i}^{k}-\mu $.

**Case 2**: ${v}_{i}^{k}<-\mu $.

- (1) If ${u}_{i}>0$, then since $\frac{\partial f(u)}{\partial {u}_{i}}={u}_{i}-{v}_{i}^{k}+\mu >0$, $f(u)$ increases along the direction ${e}_{i}$, so its infimum over this half-line is attained at the boundary ${u}_{i}=0$: $f(u){|}_{{u}_{i}=0}=\frac{1}{2}{({v}_{i}^{k})}^{2}+{\delta}_{3}={\mathrm{\Delta}}_{3}+{\delta}_{3}$. (3.4)
- (2) If ${u}_{i}<0$, then setting $\frac{\partial f(u)}{\partial {u}_{i}}={u}_{i}-{v}_{i}^{k}-\mu =0$ gives ${u}_{i}={v}_{i}^{k}+\mu <0$; the minimizer of $f(u)$ along the direction ${e}_{i}$ is ${u}_{i}={v}_{i}^{k}+\mu $, and the corresponding minimum is $f(u){|}_{{u}_{i}={v}_{i}^{k}+\mu}=-\mu ({v}_{i}^{k}+\mu )+\frac{1}{2}{\mu}^{2}+{\delta}_{3}={\mathrm{\Delta}}_{4}+{\delta}_{3}$. (3.5)

Since ${\mathrm{\Delta}}_{3}-{\mathrm{\Delta}}_{4}=\frac{1}{2}{({v}_{i}^{k})}^{2}+\mu ({v}_{i}^{k}+\mu )-\frac{1}{2}{\mu}^{2}=\frac{1}{2}{({v}_{i}^{k}+\mu )}^{2}>0$, the minimizer of $f(u)$ along the direction ${e}_{i}$ is ${u}_{i}={v}_{i}^{k}+\mu $.

**Case 3**: $|{v}_{i}^{k}|\le \mu $.

- (1) If ${u}_{i}>0$, then $\frac{\partial f(u)}{\partial {u}_{i}}={u}_{i}-{v}_{i}^{k}+\mu >0$, so $f(u)$ increases along the direction ${e}_{i}$: $f(u){|}_{{u}_{i}=0}=\frac{1}{2}{({v}_{i}^{k})}^{2}+\delta $. (3.6)
- (2) If ${u}_{i}<0$, then $\frac{\partial f(u)}{\partial {u}_{i}}={u}_{i}-{v}_{i}^{k}-\mu <0$, so $f(u)$ decreases along the direction ${e}_{i}$ toward ${u}_{i}=0$: $f(u){|}_{{u}_{i}=0}=\frac{1}{2}{({v}_{i}^{k})}^{2}+\delta $. (3.7)

Hence the minimizer along the direction ${e}_{i}$ is ${u}_{i}=0$, with minimum $f(u)=\frac{1}{2}{({v}_{i}^{k})}^{2}+\delta $.

□

The minimization is component-wise separable in *u* in the problem above, so the one-dimensional analysis yields the full minimizer ${T}_{\mu}(v)$.

The generalized shrinkage operator leads to a sparse solution and removes noise. Hence, the algorithm with the generalized shrinkage operator converges to a sparse solution and is robust to noise.
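Theorem 3.1 can be checked numerically: the shrinkage output should beat any perturbed candidate on the objective $\mu {\parallel u\parallel}_{1}+\frac{1}{2}{\parallel u-v\parallel}^{2}$. The test vector and *μ* are arbitrary:

```python
import numpy as np

# Numerical check of Theorem 3.1: T_mu(v)_i = sign(v_i)*max(|v_i|-mu, 0)
# should minimize mu*||u||_1 + 0.5*||u - v||^2. Test vector and mu are
# arbitrary illustrative choices.
def shrink(v, mu):
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

mu = 0.7
v = np.array([2.0, -1.3, 0.4, 0.0])
u_star = shrink(v, mu)

def objective(u):
    return mu * np.sum(np.abs(u)) + 0.5 * np.sum((u - v) ** 2)

# No random perturbation of u_star should achieve a smaller objective,
# since the objective is strongly convex with unique minimizer u_star.
rng = np.random.default_rng(3)
base = objective(u_star)
best_perturbed = min(objective(u_star + 0.5 * rng.standard_normal(4))
                     for _ in range(1000))
print(u_star, base <= best_perturbed)
```

Note how the entries with $|{v}_{i}|\le \mu $ are sent to exactly zero, which is the mechanism producing sparsity.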

## 4 The new reweighted ${l}_{1}$ minimization algorithm

The ${A}^{\dagger}$ *linearized Bregman iteration* converges to an optimal solution of the problem (1.3). However, the computation of the generalized inverse ${A}^{\dagger}$ is time consuming; to overcome this, a method called the *chaotic iterative algorithm* is proposed by combining it with the iteration (2.5). In this algorithm we need only matrix-vector multiplications, so the action of the generalized inverse ${A}^{\dagger}$ can be computed efficiently. For a better understanding, we give a brief description of this method as follows:

where ${y}^{0}={V}_{0}{f}^{0}$, ${V}_{0}=\alpha {A}^{\ast}$ and $0<\alpha <\frac{2}{{\parallel A\parallel}^{2}}$. The corresponding sequence $\{{u}^{k}\}$ also converges to an optimal solution of the problem (1.3).

When the matrix *A* is underdetermined, a lack of robustness was noticed for standard least-squares regression, in which ${\parallel r\parallel}_{2}$ is minimized, where $r=Ax-b$ is the residual vector. To overcome this lack of robustness, iteratively reweighted least squares (IRLS) was proposed as an iterative method to minimize ${\sum}_{i}\rho ({r}_{i})$,

where $\rho (\cdot )$ is a penalty function such as the ${\ell}_{1}$ norm. This minimization can be accomplished by solving a sequence of weighted least-squares problems in which the weights $\{{w}_{i}\}$ depend on the previous residuals, ${w}_{i}={\rho}^{\prime}({r}_{i})/{r}_{i}$. The typical choice of *ρ* makes the weight inversely proportional to the residual, so that large residuals are penalized less in subsequent iterations. An IRLS involving an iteratively reweighted ${\ell}_{2}$ norm can thus approximate an ${\ell}_{1}$-like criterion. Inspired by this idea, and in order to better approximate an ${\ell}_{0}$-like criterion [34], our algorithm involves an iteratively reweighted ${\ell}_{1}$ norm.
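The IRLS idea can be sketched on a small robust-regression problem: with $\rho (r)=|r|$ the weights are ${w}_{i}=1/|{r}_{i}|$ (capped for stability), and a few weighted least-squares solves approximate the ${\ell}_{1}$ fit. The data and outlier pattern are illustrative assumptions:

```python
import numpy as np

# IRLS sketch for rho(r) = |r|: weights w_i = rho'(r_i)/r_i = 1/|r_i|
# (capped to avoid division by zero) turn a sequence of weighted
# least-squares solves into an l1-like robust fit. Data and outliers
# are illustrative assumptions.
rng = np.random.default_rng(4)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[:5] += 20.0                              # gross outliers in 5 equations

x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain l2 fit: pulled by outliers
for _ in range(50):
    r = A @ x - b
    w = 1.0 / np.maximum(np.abs(r), 1e-8)  # l1 weights, capped for stability
    sw = np.sqrt(w)
    x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]

print(x)  # close to x_true: large residuals are penalized less and less
```

The ${\ell}_{2}$ fit is thrown off by the outliers, while the reweighted fit recovers the clean coefficients, which is exactly the robustness IRLS was designed for.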

Combining the chaotic iteration with this reweighting strategy, we obtain the new *reweighted* ${l}_{1}$ *minimization algorithm* (4.4) as follows:

where ${y}^{0}={V}_{0}{f}^{0}$, ${V}_{0}=\alpha {A}^{\ast}$, and $0<\alpha <\frac{2}{{\parallel A\parallel}^{2}}$.
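The reweighting ingredient of the algorithm, in the spirit of [28] and [34], can be sketched as follows; the weight rule ${w}_{i}=1/(|{u}_{i}|+\epsilon )$ and the parameter values are illustrative assumptions, not necessarily the exact choices in (4.4):

```python
import numpy as np

# Reweighting in the spirit of [28], [34]: w_i = 1/(|u_i| + eps), so the
# weighted l1 norm sum_i w_i*|u_i| behaves like an l0-like count of the
# significant entries. The iterate, mu, and eps are illustrative.
def weights(u, eps=1e-2):
    return 1.0 / (np.abs(u) + eps)

def weighted_shrink(v, mu, w):
    # component-wise soft thresholding with per-entry thresholds mu * w_i
    return np.sign(v) * np.maximum(np.abs(v) - mu * w, 0.0)

u = np.array([2.0, 0.05, -1.5, 0.001])   # current iterate (illustrative)
w = weights(u)
u_next = weighted_shrink(u, 0.01, w)

print(w)       # small entries receive large weights ...
print(u_next)  # ... and are thresholded to exactly 0
```

Large entries are barely penalized while near-zero entries are pushed to exactly zero, which is how the reweighted ${\ell}_{1}$ norm mimics an ${\ell}_{0}$-like criterion.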

## 5 Numerical experiments

The quality of restoration is measured by the signal-to-noise ratio $SNR=10{log}_{10}\frac{{\parallel {u}^{0}-mean({u}^{0})\parallel}^{2}}{{\parallel {u}^{0}-{u}^{\ast}\parallel}^{2}}$, where ${u}^{\ast}$, ${u}^{0}$, and $mean(\cdot )$ are the restored image, the original image, and the average operator, respectively.

Our code is written in MATLAB and run on a Windows PC with an Intel(R) Core(TM) 2 Duo CPU T8100 @ 2.10 GHz and 1.5 GB of memory. The MATLAB version is 7.1.

Reweighted ${l}_{1}$ minimization algorithm:

- Step 1. Set ${u}^{0}=0$, ${f}^{0}=0$, ${y}^{0}={V}_{0}{f}^{0}$, ${V}_{0}=\alpha {A}^{T}$, $0<\alpha <\frac{2}{{\parallel A\parallel}_{2}^{2}}$, $0<\delta <1$, and the parameter *μ*.
- Step 2. Generate the sequence ${\{{u}^{k}\}}_{k\in \mathbb{N}}$ by (4.4).
- Step 3. Stop when $\frac{\parallel {u}^{k+1}-{u}^{k}\parallel}{\parallel {u}^{k}\parallel}<\epsilon $.

We demonstrate the performance of the reweighted ${l}_{1}$ minimization algorithm, the chaotic iterative algorithm, the ${A}^{T}$ Bregman iteration, and the ${A}^{\dagger}$ Bregman iteration, where ${A}^{\dagger}$ is computed with $pinv(A)$ in MATLAB.

In fact, a complexity analysis also supports the comparative results of the several methods. Suppose each method runs the same number of loops *K*. The workload of the ${A}^{\dagger}$ algorithm (2.11) consists of two parts: computing ${A}^{\dagger}$ and running the loop of (2.11). Computing ${A}^{\dagger}$ costs $O({n}^{3})$ when $m<n$, because the singular value decomposition $A=US{V}^{T}$, from which ${A}^{\dagger}=V{S}^{\dagger}{U}^{T}$, involves matrix-matrix multiplications and eigenvalue computations. The loop of (2.11) costs $O(mnK)$, because it contains only matrix-vector multiplications. Therefore, the total workload of the ${A}^{\dagger}$ algorithm (2.11) is $O({n}^{3})+O(mnK)$. The workloads of the chaotic iteration (4.1), the reweighted ${l}_{1}$ minimization algorithm (4.4), and the ${A}^{T}$ Bregman iteration (2.8) are each $O(mnK)$. Since $K<m\ll n$, the workload of the ${A}^{\dagger}$ algorithm (2.11) is larger than that of the other three algorithms.
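The operation counts can be made concrete with illustrative sizes; the values of *m*, *n*, and *K* below are assumptions chosen only to satisfy $K<m\ll n$:

```python
# Concrete instance of the workload comparison. The sizes m, n, K are
# assumptions chosen only to satisfy K < m << n.
m, n, K = 4096, 65536, 100

pinv_cost = n ** 3        # dominant cost of the SVD-based pseudoinverse
loop_cost = m * n * K     # K loops of matrix-vector products

# The A-dagger algorithm pays pinv_cost + loop_cost; the other three
# algorithms pay only loop_cost.
print(pinv_cost, loop_cost, pinv_cost // loop_cost)
```

For these sizes the pseudoinverse computation alone dwarfs the whole matrix-vector loop by several orders of magnitude.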

**The comparison of different algorithms**

| Algorithm | Image scale | Blur kernel | Time (s) | SNR |
|---|---|---|---|---|
| ${A}^{T}$ Bregman iteration | 256 × 256 | 15 × 15 'disk' | 98.580627 | 2.285 |
| ${A}^{\dagger}$ Bregman iteration | 256 × 256 | 15 × 15 'disk' | 116.845442 | 9.4495 |
| Chaotic iteration | 256 × 256 | 15 × 15 'disk' | 114.648685 | 8.8303 |
| ${A}^{T}$ Bregman iteration | 256 × 256 | 7 × 15 'Gaussian' | 51.934199 | 5.6389 |
| ${A}^{\dagger}$ Bregman iteration | 256 × 256 | 7 × 15 'Gaussian' | 63.003234 | 15.1254 |
| Chaotic iteration | 256 × 256 | 7 × 15 'Gaussian' | 63.68804 | 13.2921 |
| ${A}^{T}$ Bregman iteration | 64 × 80 | 3 × 5 'motion' | 0.521046 | 26.5770 |
| ${A}^{\dagger}$ Bregman iteration | 64 × 80 | 3 × 5 'motion' | 17.214257 | 47.7984 |
| Chaotic iteration | 64 × 80 | 3 × 5 'motion' | 0.617180 | 62.4899 |
| Reweighted ${\ell}_{1}$ algorithm | 64 × 80 | 3 × 5 'motion' | 0.631006 | 63.5906 |

## 6 Conclusion

In this paper, we have proposed the reweighted ${l}_{1}$ minimization algorithm for image deblurring. The improvement in the recovered images is evident; in particular, the algorithm is stable and effective in cases of severe blurring where details are difficult to recover. In addition, the efficiency of the reweighted ${l}_{1}$ minimization algorithm can be improved by combining it with the 'kicking' technique. Because of the scale factor and the cost of the ${A}^{\dagger}$ algorithm, the new method proposed in this paper can also be parallelized to obtain a better algorithm.

## Declarations

### Acknowledgements

This research was partly supported by the Fund of Oceanic Telemetry Engineering and Technology Research Center, State Oceanic Administration (grant no. 2012003), the NSFC (grant nos. 60971132, 61101208) and the Fundamental Research Funds for the Central Universities (grant no. 13CX02086A).

## Authors’ Affiliations

## References

- Chan TF, Shen J: *Image Processing and Analysis*. SIAM, Philadelphia; 2005.
- Aubert G, Kornprobst P: *Mathematical Problems in Image Processing*. 2nd edition. Appl. Math. Sci. 147. Springer, New York; 2006.
- Andrews HC, Hunt BR: *Digital Image Restoration*. Prentice Hall, Englewood Cliffs; 1977.
- Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. *Physica D* 1992, **60**:259–268. 10.1016/0167-2789(92)90242-F
- Dobson DC, Santosa F: Recovery of blocky images from noise and blurred data. *SIAM J. Appl. Math.* 1996, **56**:1181–1198. 10.1137/S003613999427560X
- Nikolova M: Local strong homogeneity of a regularized estimator. *SIAM J. Appl. Math.* 2000, **61**:633–658. 10.1137/S0036139997327794
- Buades A, Coll B, Morel JM: A review of image denoising algorithms, with a new one. *Multiscale Model. Simul.* 2005, **4**:490–530. 10.1137/040616024
- Tomasi C, Manduchi R: Bilateral filtering for gray and color images. In *Proceedings of the 1998 IEEE International Conference on Computer Vision*, Bombay, India; 1998.
- Buades A, Coll B, Morel JM: Image enhancement by non-local reverse heat equation. CMLA Tech. Rep. 22; 2006.
- Osher S, Burger M, Goldfarb D, Xu J, Yin W: An iterative regularization method for total variation-based image restoration. *Multiscale Model. Simul.* 2005, **4**:460–489. 10.1137/040605412
- He L, Marquina A, Osher S: Blind deconvolution using TV regularization and Bregman iteration. *Int. J. Imaging Syst. Technol.* 2005, **15**:74–83. 10.1002/ima.20040
- Marquina A: Inverse scale space methods for blind deconvolution. UCLA-CAM-Report 06-36; 2006.
- Marquina A, Osher S: Image super-resolution by TV-regularization and Bregman iteration. *J. Sci. Comput.* 2008, **37**:367–382. 10.1007/s10915-008-9214-8
- Gilboa G, Osher S: Nonlocal linear image regularization and supervised segmentation. *Multiscale Model. Simul.* 2007, **6**:595–630. 10.1137/060669358
- Lou Y, Zhang X, Osher S, Bertozzi A: Image recovery via nonlocal operators. UCLA-CAM-Report 08-35; 2008.
- Coifman RR, Donoho DL: Translation-invariant de-noising. In *Wavelets and Statistics*. Lecture Notes in Statistics 103. Edited by Antoniadis A, Oppenheim G. Springer, New York; 1995.
- Hale E, Yin W, Zhang Y: A fixed-point continuation method for ${l}_{1}$-regularization with application to compressed sensing. CAAM TR07-07; 2007.
- Yin W, Osher S, Goldfarb D, Darbon J: Bregman iterative algorithms for ${l}_{1}$-minimization with applications to compressed sensing. *SIAM J. Imaging Sci.* 2008, **1**:143–168. 10.1137/070703983
- Cai J, Osher S, Shen Z: Linearized Bregman iterations for compressed sensing. *Math. Comput.* 2009, **78**(267):1515–1536. 10.1090/S0025-5718-08-02189-3
- Cai J, Osher S, Shen Z: Convergence of the linearized Bregman iteration for ${l}_{1}$-norm minimization. *Math. Comput.* 2009, **78**(268):2127–2136. 10.1090/S0025-5718-09-02242-X
- Osher S, Mao Y, Dong B, Yin W: Fast linearized Bregman iteration for compressive sensing and sparse denoising. UCLA-CAM-Report 08-37; 2008.
- Cai J, Osher S, Shen Z: Linearized Bregman iterations for frame-based image deblurring. *SIAM J. Imaging Sci.* 2009, **2**(1):226–252. 10.1137/080733371
- Goldstein T, Osher S: The split Bregman method for ${L}_{1}$-regularized problems. *SIAM J. Imaging Sci.* 2009, **2**(2):323–343. 10.1137/080725891
- Cai J, Osher S, Shen Z: Split Bregman method and frame based image restoration. *Multiscale Model. Simul.* 2009, **8**(2):337–369.
- Wu C, Tai X: Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. *SIAM J. Imaging Sci.* 2010, **3**(3):300–339. 10.1137/090767558
- Yang Y, Möller M, Osher S: A dual split Bregman method for fast ${l}_{1}$ minimization. UCLA-CAM-Report 11-57; 2011.
- Zhang H, Cheng L: ${A}^{-}$ linearized Bregman iteration algorithm. *Math. Numer. Sin.* 2010, **32**:97–104. (in Chinese)
- Candès EJ, Wakin MB, Boyd SP: Enhancing sparsity by reweighted ${\ell}_{1}$ minimization. *J. Fourier Anal. Appl.* 2008, **14**(5):877–905.
- Wang G, Wei Y, Qiao S: *Generalized Inverses: Theory and Computations*. Science Press, Beijing; 2004.
- Wang S, Yang Z: *Generalized Inverse Matrix and Its Applications*. Beijing University of Technology Press, Beijing; 1996.
- Bregman L: The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. *USSR Comput. Math. Math. Phys.* 1967, **7**(3):200–217. 10.1016/0041-5553(67)90040-7
- Donoho D: De-noising by soft-thresholding. *IEEE Trans. Inf. Theory* 1995, **41**:613–627. 10.1109/18.382009
- Schlossmacher EJ: An iterative technique for absolute deviations curve fitting. *J. Am. Stat. Assoc.* 1973, **68**:857–859. 10.1080/01621459.1973.10481436
- Zhao YB, Li D: Reweighted ${\ell}_{1}$-minimization for sparse solutions to underdetermined linear systems. *SIAM J. Optim.* 2012, **22**(3):1065–1088. 10.1137/110847445

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.