An accelerated proximal augmented Lagrangian method and its application in compressive sensing

As a first-order method, the augmented Lagrangian method (ALM) is a benchmark solver for linearly constrained convex programming, and in practice some positive semidefinite proximal terms are often added to its primal subproblem to make it more implementable. In this paper, we propose an accelerated PALM with indefinite proximal regularization (PALM-IPR) for convex programming with linear constraints, which generalizes the proximal terms from positive semidefinite to indefinite. Under mild assumptions, we establish the worst-case O(1/t^2) convergence rate of PALM-IPR in a non-ergodic sense. Finally, numerical results show that our new method is feasible and efficient for solving compressive sensing problems.


Introduction
Let R denote the set of all real numbers and R^n the Euclidean space of all real vectors with n coordinates. In this paper, we are going to solve the following linearly constrained convex program: min { f(x) : Ax = b, x ∈ R^n }, where f(x) : R^n → R is a closed proper convex function, A ∈ R^{m×n}, and b ∈ R^m. Throughout, we assume that the solution set of Problem () is nonempty. By choosing different objective functions f(x), a variety of problems encountered in compressive sensing, machine learning and statistics can be cast as Problem () (see [-] and the references therein).
The following are two concrete examples of Problem (): • The compressive sensing (CS) problem: min_x μ‖x‖_1 + (1/2)‖Ax − b‖_2^2, where μ > 0, A ∈ R^{m×n} (m ≪ n) is the sensing matrix, b ∈ R^m is the observed signal, and the ℓ1-norm and ℓ2-norm of the vector x are defined by ‖x‖_1 = Σ_{i=1}^n |x_i| and ‖x‖_2 = (Σ_{i=1}^n x_i^2)^{1/2}, respectively. • The wavelet-based image processing problem: min_x μ‖x‖_1 + (1/2)‖BWx − b‖_2^2, where B ∈ R^{m×l} is a diagonal matrix whose diagonal elements are either 0 (missing pixels) or 1 (known pixels), and W ∈ R^{l×n} is a wavelet dictionary.
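For concreteness, the two norms defined above can be computed in a few lines. The following pure-Python sketch is ours (the example vector is arbitrary), not from the paper:

```python
def l1_norm(x):
    # ||x||_1 = sum of absolute values of the coordinates
    return sum(abs(xi) for xi in x)

def l2_norm(x):
    # ||x||_2 = square root of the sum of squared coordinates
    return sum(xi * xi for xi in x) ** 0.5

x = [3.0, -4.0]
assert l1_norm(x) == 7.0   # |3| + |-4|
assert l2_norm(x) == 5.0   # sqrt(9 + 16)
```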
Problem () can be converted into the following strongly convex programming: where the constant β >  is a penalty parameter. Introducing the Lagrange multiplier λ ∈ R m to the linear constraints Ax = b, we get the Lagrangian function associated with Problem (): which is also the augmented Lagrangian function associated with Problem (). Then the dual function is denoted by and the dual problem of () is Due to the strong convexity of the objective function of Problem (), G(λ) is continuously differentiable at any λ ∈ R m , and ). Solving the above dual problem by the gradient ascent method, we get a benchmark solver for Problem (): the augmented Lagrangian method (ALM) [, ], which first minimizes the Lagrangian function of Problem () with respect to x by fixing λ = λ k to get x(λ k ), and set x k+ = x(λ k ); then it updates the Lagrange multiplier λ. Specifically, for given λ k , the kth iteration of PALM for Problem () reads where γ ∈ (, ) is a relaxation factor. Though ALM plays a fundamental role in the algorithmic development of Problem (), the cost of solving its first subproblem is often high for general f (·) and A. To address this issue, many proximal ALMs [, -] are developed by adding the proximal term   xx k  G to the x-related subproblem, where G ∈ R n×n is a semi-definite matrix. By setting G = τ I n -βA A with τ > β A A , the x-related subproblem reduces to the following form: The above subproblem is often simple enough to have a closed-form solution or can be easily solved up to a high precision. The proximal ALM is so instructive, and along this philosophy, a lot of efficient proximal ALM-type methods [, -] have been proposed. 
However, a new difficulty arises for the proximal ALM-type methods: how to determine the optimal value of the proximal parameter. In this paper, based on the study of [], we further study the augmented Lagrangian method and develop a new fast proximal ALM-type method with indefinite proximal regularization, whose worst-case convergence rate is O(1/t^2) in a non-ergodic sense. Furthermore, a relaxation factor γ ∈ (0, 2) is attached to the updating formula of our new method, which is often beneficial for speeding up convergence in practice.
The rest of this paper is organized as follows. In Section 2, we list some necessary notations. We then give the proximal ALM with indefinite proximal regularization (PALM-IPR) and show its worst-case O(1/t^2) convergence rate in Section 3. In Section 4, numerical experiments are conducted to illustrate the efficiency of PALM-IPR. Finally, some conclusions are drawn in Section 5.

Preliminaries
In this section, we give some notations used in the subsequent analysis and present two criteria to measure the worst-case O(1/t^2) convergence rate of PALM-type methods. At the end of this section, we prove that the inequality L(x*, λ*) − L(x^k, λ^k) ≥ 0 holds for the method in [].
Throughout, we use the following standard notations. For any two vectors x, y ∈ R^n, ⟨x, y⟩ or x^T y denotes their inner product. The symbols ‖·‖_1 and ‖·‖ represent the ℓ1-norm and the ℓ2-norm for vector variables, respectively. I_n denotes the n-dimensional identity matrix. If the matrix G ∈ R^{n×n} is symmetric, we use the symbol ‖x‖^2_G to denote x^T Gx even if G is indefinite; G ≻ 0 (resp., G ⪰ 0) denotes that the matrix G is positive definite (resp., positive semidefinite).
The following identity will be used in the subsequent analysis: Note that the two conditions of () correspond to the dual feasibility and the primal feasibility of Problem (), respectively. The solution set of the KKT system (), denoted by W*, is nonempty since the solution set of () is nonempty. By () and the properties of the convex function f(·), for any (x*, λ*) ∈ W*, we have Based on () and Ax* = b, we have the following proposition.
where C > . The second inequality of () implies that there must exist at least one (x * , λ * ) ∈ W * with λ * = .
where c, C > . Obviously, inequality () is motivated by equality (). Compared with (), the criterion () is more reasonable. Therefore, we shall use () to measure the O(/t  ) convergence rate of our new method. Now, we prove that the inequality L(x * , λ * ) -L(x k ,λ k ) ≥  holds for the iteration method proposed in [].
Proof Since x k+ is generated by the following subproblem: i.e., This and the convexity of the function f (·) yield So, it holds that This completes the proof.

PALM-IPR and its convergence rate
In this section, we first present the proximal ALM with indefinite proximal regularization (PALM-IPR) for Problem () and then prove its convergence rate step by step.
To prove the global convergence of PALM-IPR, we need to impose some restrictions on the proximal matrix G_k, which are stated as follows.
Remark . The proximal matrix G k maybe indefinite. For example, if we set D k = , then G k = -(α)θ  k A A, which is indefinite when A is full-column rank.
Using the first-order optimality condition of the subproblem of PALM-IPR, we can deduce the following one-iteration result. Lemma . Let {(x^k, y^k, z^k, λ^k)}_{k≥0} be the sequence generated by PALM-IPR. For any x ∈ R^n, it holds that () Proof From the first-order optimality condition for the z-related subproblem (), we have Then, from the convexity of f(·) and (), we have From () and the convexity of f(·) again, we get where the second inequality follows from (). Then, by rearranging terms of the above inequality, we arrive at where the second inequality uses () and the third inequality comes from identity (). Dividing both sides of the above inequality by θ_k^2, we get From (), we have Substituting this into the above inequality and using () leads to (). This completes the proof.
Lemma . Let {(x k , y k , z k , λ k )} k≥ be the sequence generated by PALM-IPR. For any (x, λ) ∈ R m+n with Ax = b, it holds that Proof Adding the term  θ k λ, Az k+b to both sides of (), we get Substituting the above equality into () and using (), we get () immediately. This completes the proof.
Let us further deal with the term involving z^k. Proof By Assumption ., we have Using the inequality ξη ≤ (1/2)ξ^2 + (1/2)η^2 with ξ = ‖Az^k − b‖ and η = ‖Az^{k+1} − b‖, we get Substituting this inequality into the right-hand side of the above equality, we obtain assertion () immediately. This completes the proof.
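The scalar inequality used in this step is the elementary Young inequality; for completeness, it follows from expanding a square:

```latex
% Young's inequality for scalars, as used with
% xi = ||A z^k - b|| and eta = ||A z^{k+1} - b||:
0 \le \tfrac{1}{2}(\xi - \eta)^2
  = \tfrac{1}{2}\xi^2 - \xi\eta + \tfrac{1}{2}\eta^2
\quad\Longrightarrow\quad
\xi\eta \le \tfrac{1}{2}\xi^2 + \tfrac{1}{2}\eta^2 .
```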
Then, from () and (), we have where the second inequality comes from () and the fact β k ≥ . Based on (), we are going to prove the worst-case O(/t  ) convergence rate of PALM-IPR in an ergodic sense.
Theorem . Let {(x k , y k , z k , λ k )} k≥ be the sequence generated by PALM-IPR. Then Proof Setting x = x * and λ = λ * in (), we get

Numerical results
In this section, we apply PALM-IPR to some practical applications and report the numerical results. All the codes were written in Matlab and run on a ThinkPad notebook.
Problem . (Quadratic programming) Firstly, let us test PALM-IPR on equality constrained quadratic programming (ECQP) [] to validate its stability: We set the problem size to m = , n =  and generate A ∈ R m×n , b, c and Q ∈ R n×n according to the standard Gaussian distribution. We compare PALM-IPR with the classical ALM with β = . For PALM-IPR, we set G k =  for simplicity. We have tested the experiment sixty times, and the numerical results are listed in In this case, we have to solve the subproblem inexactly, which is often time-consuming. Therefore, we choose the proximal matrix G k in () as G k = θ  k (D k -(α)A A), in which D k =  β k θ  k P k + (α)A A, P k = τ k I nβ k A A and τ k > β k A A , and subproblem () can be written as which is equivalent to and has a closed-form solution as follows: Note that all computations are component-wise.  In this experiment, we set m = floor(a × n) and k = floor(b × m) with n ∈ {, ,, ,, ,, ,}, where k is the number of random nonzero elements contained in the original signal. The sensing matrix A is generated by the following Matlab scripts:Ā = randn(m, n), [Q, R] = qr(Ā , ); A = Q , and the nonzero entries of the true signal x * , whose values are sampled from the standard Gaussian: x * = zeros(n, ); p = randperm(n); x * (p( : k)) = randn(k, ), are selected at random. The observed signal b is generated by b = R \ Ax * . In addition, we set μ = , γ = .. In this experiment, we set the proximal parameter τ k = β k A A . Furthermore, the stopping criterion is or the number of iterations exceeds   , where x k is the iterate generated by PALM-IPR. Furthermore, all initial points are set as x  = A b, λ  = . 
For comparison, we also give the numerical results of PALM-SDPR [] and the proximal ALM with positive-indefinite proximal regularization (PALM-PIPR) []; the proximal matrix is set to G = τI_n − βA^T A in PALM-SDPR and G = .τI_n − βA^T A in PALM-PIPR, with τ = .β‖A^T A‖ and β =  mean(abs(b)). Furthermore, we set γ =  in PALM-PIPR. Since the computational load per iteration of the three methods is almost the same, we only list the number of iterations ('Iter') and the relative error ('RelErr') at which the three methods achieve the stopping criterion. The numerical results are listed in Table , and for statistical reliability, all the results are averages over  runs for each triple (n, a, b). The numerical results in Table  indicate that: () all methods succeed in solving Problem () in all scenarios; () the new method PALM-IPR outperforms PALM-SDPR and PALM-PIPR, requiring fewer iterations to converge except for (n, a, b) = (,, ., .).

Conclusions
In this paper, an accelerated proximal augmented Lagrangian method with indefinite proximal regularization (PALM-IPR) for linearly constrained convex programming is proposed. Under mild conditions, we have established the worst-case O(1/t^2) convergence rate of PALM-IPR in a non-ergodic sense. Two sets of numerical results are given, which illustrate that PALM-IPR performs better than some state-of-the-art solvers. Similar to our proposed method, the methods in [, ] also enjoy a worst-case O(1/t^2) convergence rate in a non-ergodic sense, but they often have to solve a difficult subproblem at each iteration, so some inner iterations have to be executed. A prominent characteristic of the methods in [, ] is that the parameter β can be any positive constant, whereas the parameter β in PALM-IPR changes with the iteration counter k and can actually go to infinity as k → ∞. In practice, we often observe that a larger β usually leads to slower convergence. Therefore, a method with a proximal term, a faster convergence rate and a constant parameter β deserves further research.