- Research
- Open Access
A primal-dual algorithm framework for convex saddle-point optimization
- Benxin Zhang^{1} and
- Zhibin Zhu^{2}
https://doi.org/10.1186/s13660-017-1548-z
© The Author(s) 2017
- Received: 28 June 2017
- Accepted: 17 October 2017
- Published: 25 October 2017
Abstract
In this study, we introduce a primal-dual prediction-correction algorithm framework for convex optimization problems with known saddle-point structure. The unified framework augments the iteration with a proximal term weighted by a positive definite matrix. Moreover, different choices of the proximal parameters recover several existing well-known algorithms and yield a class of new primal-dual schemes. We prove convergence of the proposed framework from the perspective of proximal point algorithm-like contraction methods and the variational inequality approach. The convergence rate \(O(1/t)\) in both the ergodic and nonergodic senses is also established, where t denotes the iteration number.
Keywords
- primal-dual method
- proximal point algorithm
- convex optimization
- variational inequalities
1 Introduction
As analyzed in [2, 3], the saddle-point problem (2) can be regarded as a primal-dual formulation, and many primal-dual algorithms have been proposed for it. Zhu and Chan [4] proposed the well-known primal-dual hybrid gradient (PDHG) algorithm with adaptive stepsizes. Although the algorithm is quite fast, its convergence was not proved. He, You, and Yuan [5] showed that PDHG with constant step sizes is indeed convergent if one of the functions of the saddle-point problem is strongly convex. Chambolle and Pock [2] gave a primal-dual algorithm with convergence rate \(O(1/k)\) for the complete class of these problems. They further showed accelerations of the proposed algorithm that yield improved rates on problems with some degree of smoothness. In particular, the algorithm achieves \(O(1/k^{2})\) convergence on problems where the primal or the dual objective is uniformly convex, and it converges linearly, that is, at rate \(O(\varsigma^{k})\) for some \(\varsigma\in (0,1 )\), on smooth problems. Bonettini and Ruggiero [6] established the convergence of a general primal-dual method for nonsmooth convex optimization problems and showed that, when the steplength parameters are a priori selected sequences, the scheme can be viewed as an ϵ-subgradient method on the primal formulation of the variational problem. He and Yuan [7] studied these primal-dual algorithms from a contraction perspective; their analysis simplified the existing convergence proofs. Cai, Han, and Xu [8] proposed a new correction strategy for some first-order primal-dual algorithms. Later, He, Desai, and Wang [9] introduced another primal-dual prediction-correction algorithm for saddle-point optimization, which serves as a bridge between the algorithms proposed in [7] and [8].
Recently, Zhang, Zhu, and Wang [10] proposed a simple primal-dual method for total-variation image restoration problems and showed that their iterative scheme has an \(O(1/k)\) convergence rate in the ergodic sense. After completing this paper, we became aware of the algorithm proposed in [11], whose convergence analysis is similar to that of our framework. However, the algorithm proposed in [11] is in fact a particular case of our unified framework obtained by fixing the preconditioning matrix.
In some imaging applications, for example, partially parallel magnetic resonance imaging [13], the primal subproblem in (3) may not be easy to solve. Because of this difficulty, it is advisable to use inner iterations to obtain approximate solutions of the subproblems. In recent work, several completely decoupled schemes have been proposed to avoid inner subproblem solving, such as the primal-dual fixed point algorithms [14–16] and the Uzawa method [17]. Hence, motivated by the works [7, 10, 17], we reconsider the popular iterative scheme (3) and give a primal-dual algorithm framework that can be readily adapted to different imaging applications.
The organization of this paper is as follows. In Section 2, we propose the primal-dual-based contraction algorithm framework in prediction-correction fashion. In Section 3, we present the convergence analysis. The iteration complexity in the ergodic and nonergodic senses is established in Sections 4 and 5, respectively. In Section 6, connections with well-known methods and some new schemes are discussed. Finally, a conclusion is given.
2 Proposed frame
3 Convergence analysis
In this section, we show the convergence of the proposed framework. The convergence results follow readily from proximal point algorithm-like contraction methods [7] and the variational inequality (VI) approach [19].
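As a reminder of the VI formulation used in this style of analysis (the concrete operator below assumes the common bilinear model \(\min_{x}\max_{y}\,\theta_{1}(x)+y^{\top}Ax-\theta_{2}(y)\); the paper's problem (2) is taken to be of this form), a saddle point \(w^{*}=(x^{*},y^{*})\) is characterized by a monotone variational inequality:

```latex
\theta(u) - \theta(u^{*}) + (w - w^{*})^{\top} F(w^{*}) \ge 0
  \quad \forall\, w \in \Omega,
\qquad
w = \begin{pmatrix} x \\ y \end{pmatrix},
\quad
F(w) = \begin{pmatrix} A^{\top} y \\ -A x \end{pmatrix},
\quad
\theta(u) = \theta_{1}(x) + \theta_{2}(y).
```

Since F is affine with a skew-symmetric coefficient matrix, \((w_{1}-w_{2})^{\top}(F(w_{1})-F(w_{2}))=0\), so F is monotone; this is what makes the PPA-like contraction machinery of [7, 19] applicable.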
Lemma 1
Proof
In the following, we give an important inequality for the output of the scheme (7)-(8).
Lemma 2
Proof
Lemma 3
Proof
The following theorem states that the proposed iterative scheme converges to an optimal primal-dual solution.
Theorem 1
If Q in (10) is positive definite, then any sequence generated by the scheme (7)-(8) converges to a solution of the minimax problem (2).
Proof
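Schematically, the argument behind Theorem 1 follows the standard PPA-like contraction pattern; the constant \(c>0\), the predictor \(\tilde{w}^{k}\), and the matrix Q below are assumed to be those of the scheme (7)-(8) and definition (10):

```latex
\|w^{k+1} - w^{*}\|_{Q}^{2}
  \le \|w^{k} - w^{*}\|_{Q}^{2} - c\,\|w^{k} - \tilde{w}^{k}\|_{Q}^{2}
  \quad \text{for every solution } w^{*}.
```

Hence \(\{\|w^{k}-w^{*}\|_{Q}\}\) is nonincreasing (Fejér monotonicity) and \(\|w^{k}-\tilde{w}^{k}\|_{Q}\to 0\), which forces every limit point of the bounded sequence to solve the VI and yields convergence of the whole sequence.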
4 Convergence rate in an ergodic sense
In the following, using proximal point algorithm-like contraction methods for convex optimization [19], we derive the convergence rate in the ergodic and nonergodic senses. First, we prove a lemma that forms the basis of the proof of the convergence rate in the ergodic sense.
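In analyses of this type, the ergodic rate is stated for the average of the predictors. Schematically, with \(\tilde{w}^{k}\) and Q assumed to be as in (7) and (10):

```latex
\tilde{w}_{t} := \frac{1}{t} \sum_{k=1}^{t} \tilde{w}^{k},
\qquad
\theta(\tilde{u}_{t}) - \theta(u) + (\tilde{w}_{t} - w)^{\top} F(w)
  \le \frac{1}{2t}\, \|w - w^{1}\|_{Q}^{2}
  \quad \forall\, w \in \Omega.
```

That is, after t iterations the averaged point satisfies the VI characterization of a solution up to an \(O(1/t)\) error.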
Lemma 4
Proof
Theorem 2
Proof
5 Convergence rate in a nonergodic sense
In this section, we show that a worst-case \(O(1/t)\) convergence rate in a nonergodic sense can also be established for the proposed algorithm framework. We first prove the following lemma.
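The usual route to the nonergodic rate, sketched here under the same assumptions on Q and the iterates (and up to constant factors), combines summability of the successive differences with their monotonicity in the Q-norm:

```latex
\sum_{k=1}^{\infty} \|w^{k} - w^{k+1}\|_{Q}^{2} \le \|w^{1} - w^{*}\|_{Q}^{2},
\qquad
\|w^{k+1} - w^{k+2}\|_{Q} \le \|w^{k} - w^{k+1}\|_{Q}.
```

Together these give \(\|w^{t}-w^{t+1}\|_{Q}^{2} \le \frac{1}{t}\sum_{k=1}^{t}\|w^{k}-w^{k+1}\|_{Q}^{2} \le \frac{1}{t}\|w^{1}-w^{*}\|_{Q}^{2}\), which is the claimed worst-case \(O(1/t)\) bound on the residual of a single iterate.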
Lemma 5
Proof
Next, we are ready to prove the key inequality of this section.
Lemma 6
Proof
Now, we establish a worst-case \(O(1/t)\) convergence rate in a nonergodic sense.
Theorem 3
Proof
6 Connections with existing methods
In the following, we establish connections of the proposed framework to well-known methods for solving (33). There are other types of methods designed to solve problem (33). Among them, the split Bregman method proposed by Goldstein and Osher [21] is very popular in imaging applications. This method has been proved to be equivalent to the alternating direction method of multipliers. In [17], based on proximal forward-backward splitting and Bregman iteration, a split inexact Uzawa (SIU) method is proposed to maximally decouple the iterations, so that each iteration of the algorithm is explicit. The same authors also gave an algorithm based on Bregman operator splitting (BOS) for the case where A is not diagonalizable. Recently, Tian and Yuan [11] proposed a linearized primal-dual method for linear inverse problems with total-variation regularization and showed that this variant yields significant computational benefits. Next, we show that different choices of P in (7) induce the following well-known methods: the linearized primal-dual method, SIU, BOS, and the split Bregman method, as well as some new primal-dual algorithms with the correction step (8).
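As a concrete reference point for the methods discussed in this section, the following is a minimal sketch of the basic primal-dual (PDHG/Chambolle-Pock [2]) iteration, specialized to the ridge-regularized least-squares saddle point \(\min_{x}\max_{y}\,\langle Ax,y\rangle + \frac{\lambda}{2}\|x\|^{2} - (\frac{1}{2}\|y\|^{2} + \langle b,y\rangle)\). The problem instance, function names, and step-size choice are illustrative and not taken from the paper:

```python
import numpy as np

def pdhg(A, b, lam, n_iters=5000):
    """Basic primal-dual iteration for
    min_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2,
    via the saddle point  min_x max_y <A x, y> + g(x) - f*(y)
    with f*(y) = 0.5*||y||^2 + <b, y> and g(x) = 0.5*lam*||x||^2."""
    m, n = A.shape
    L = np.linalg.norm(A, 2)       # spectral norm of A
    tau = sigma = 0.9 / L          # step sizes with tau*sigma*L^2 < 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(n_iters):
        # dual step: prox of sigma*f* is y -> (y - sigma*b)/(1 + sigma)
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: prox of tau*g is x -> x/(1 + tau*lam)
        x_new = (x - tau * (A.T @ y)) / (1.0 + tau * lam)
        # extrapolation (theta = 1)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# Demo on a small random instance; compare with the closed-form ridge solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x_pd = pdhg(A, b, lam=0.1)
x_star = np.linalg.solve(A.T @ A + 0.1 * np.eye(10), A.T @ b)
print(np.max(np.abs(x_pd - x_star)))
```

The correction step (8) and the preconditioning matrix P of the proposed framework are omitted here; the sketch only illustrates the prediction-type update that the special cases below modify.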
6.1 Linearized primal-dual method
6.2 Split inexact Uzawa method
6.3 Bregman operator splitting
6.4 Split Bregman
7 Conclusions
We proposed a primal-dual-based contraction framework in the prediction-correction fashion and established its convergence and convergence rate. Some well-known algorithms, for example, the linearized primal-dual method, the SIU method, the Bregman operator splitting method, and the split Bregman method, can be viewed as particular cases of our algorithm framework, and some new primal-dual schemes such as (48), (52), (55), and (58) are induced. Finally, how to choose the parameter ρ adaptively is an interesting problem, which will be discussed in forthcoming work.
Declarations
Acknowledgements
This work is supported by the National Natural Science Foundation of China (11361018, 11461015), Guangxi Natural Science Foundation (2014GXNSFFA118001), Guangxi Key Laboratory of Cryptography and Information Security (GCIS201624), and Innovation Project of Guangxi Graduate Education.
Authors’ contributions
Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Rockafellar, T: Convex Analysis. Princeton University Press, Princeton (1970)
- Chambolle, A, Pock, T: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120-145 (2011)
- Esser, E, Zhang, X, Chan, T: A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 3, 1015-1046 (2010)
- Zhu, M, Chan, T: An efficient primal-dual hybrid gradient algorithm for total variation image restoration. CAM report, 08-34 (2008)
- He, B, You, Y, Yuan, X: On the convergence of primal-dual hybrid gradient algorithm. SIAM J. Imaging Sci. 7, 2526-2537 (2014)
- Bonettini, S, Ruggiero, V: On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration. J. Math. Imaging Vis. 44, 236-253 (2012)
- He, B, Yuan, X: Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective. SIAM J. Imaging Sci. 5, 119-149 (2012)
- Cai, X, Han, D, Xu, L: An improved first-order primal-dual algorithm with a new correction step. J. Glob. Optim. 57(4), 1419-1428 (2013)
- He, H, Desai, J, Wang, K: A primal-dual prediction-correction algorithm for saddle point optimization. J. Glob. Optim. 66(3), 573-583 (2016)
- Zhang, B, Zhu, Z, Wang, S: A simple primal-dual method for total variation image restoration. J. Vis. Commun. Image Represent. 38, 814-823 (2016)
- Tian, WY, Yuan, XM: Linearized primal-dual methods for linear inverse problems with total variation regularization and finite element discretization. Inverse Probl. 32(11), 115011 (2016)
- Komodakis, N, Pesquet, J-C: Playing with duality: an overview of recent primal-dual approaches for solving large-scale optimization problems. IEEE Signal Process. Mag. 32(6), 31-54 (2015)
- Chen, Y, Hager, WW, Yashtini, M, Ye, X, Zhang, H: Bregman operator splitting with variable stepsize for total variation image reconstruction. Comput. Optim. Appl. 54, 317-342 (2013)
- Chen, P, Huang, J, Zhang, X: A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29(2), 025011 (2013)
- Combettes, PL, Condat, L, Pesquet, J-C, Vu, BC: A forward-backward view of some primal-dual optimization methods in image recovery. In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 4141-4145. IEEE Press, New York (2014)
- Li, Q, Shen, L, Xu, Y: Multi-step fixed-point proximity algorithms for solving a class of optimization problems arising from image processing. Adv. Comput. Math. 41(2), 387-422 (2015)
- Zhang, X, Burger, M, Osher, S: A unified primal-dual algorithm framework based on Bregman iteration. J. Sci. Comput. 46, 20-46 (2011)
- Rockafellar, T: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
- He, B: PPA-like contraction methods for convex optimization: a framework using variational inequality approach. J. Oper. Res. Soc. China 3(4), 391-420 (2015)
- Rudin, L, Osher, S, Fatemi, E: Nonlinear total variation based noise removal algorithms. Physica D 60, 259-268 (1992)
- Goldstein, T, Osher, S: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2, 323-343 (2009)
- Shefi, R, Teboulle, M: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization. SIAM J. Optim. 24(1), 269-297 (2014)
- Xu, M: Proximal alternating directions method for structured variational inequalities. J. Optim. Theory Appl. 134, 107-117 (2007)
- Chambolle, A, Pock, T: On the ergodic convergence rates of a first-order primal-dual algorithm. Math. Program. 159(1), 253-287 (2016)
- Condat, L: A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 158(2), 460-479 (2013)
- Combettes, PL, Wajs, VR: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 168-200 (2005)
- Wang, Y, Yang, J, Yin, W, Zhang, Y: A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 1(3), 248-272 (2008)
- Micchelli, CA, Shen, L, Xu, Y: Proximity algorithms for image models: denoising. Inverse Probl. 27(4), 045009 (2011)