Convergence analysis on a modified generalized alternating direction method of multipliers
- Sha Lu^{1, 2} and
- Zengxin Wei^{3}
https://doi.org/10.1186/s13660-018-1721-z
© The Author(s) 2018
- Received: 14 April 2018
- Accepted: 5 June 2018
- Published: 8 June 2018
Abstract
The alternating direction method of multipliers (ADMM) is one of the most powerful and successful methods for solving convex composite minimization problems. The generalized ADMM relaxes both the variables and the multipliers with a common relaxation factor in \((0,2)\), which has the potential of enhancing the performance of the classic ADMM. Very recently, two different variants of the semi-proximal generalized ADMM have been proposed. Both allow the weighting matrices in the proximal terms to be positive semidefinite, which makes the subproblems relatively easy to solve. One of the two variants has been analyzed theoretically, but no convergence result is known for the other. This paper aims to remedy this deficiency and establishes its convergence under mild conditions, with the relaxation factor again restricted to \((0,2)\).
Keywords
- Convex optimization
- Augmented Lagrangian function
- Alternating direction method of multipliers
- Semi-proximal terms
1 Introduction
The classic ADMM algorithm originated with Glowinski and Marroco [1] and Gabay and Mercier [2] in the mid-1970s. In the early 1980s, Gabay [3] showed that the classic ADMM with \(\tau=1\) is a special case of the Douglas–Rachford splitting method for monotone operators. Later, Eckstein and Bertsekas [4] showed that the Douglas–Rachford splitting method is itself a special case of the proximal point algorithm. The proximal ADMM variant was proposed by Eckstein [5]; by introducing an additional proximal term, it ensures that each subproblem has a unique solution. This technique improves the behavior of the objective functions in the iteration subproblems and thus the convergence properties of the whole algorithm. He et al. [6] in turn showed that the proximal term can be chosen differently at each iteration. Furthermore, Fazel et al. [7] gave a deeper investigation and proved that the proximal term can be chosen to be positive semidefinite, which allows more flexible applications. One may refer to [8] for a note on the historical development of the ADMM; further research on ADMM can be found in [9, 10], among others.
Another contribution of Eckstein and Bertsekas [4] is the design of a generalized ADMM based on a generalized proximal point algorithm. Very recently, combining the idea of semi-proximal terms, Xiao et al. [11] proposed a semi-proximal generalized ADMM for convex composite conic programming and illustrated numerically that the method is very promising for solving doubly nonnegative semidefinite programming problems. The method of Xiao et al. [11] relaxes all the variables with a common factor in \((0,2)\), which has the potential of enhancing the performance of the classic ADMM. In [11], Xiao et al. also developed another variant of the semi-proximal generalized ADMM with different semi-proximal terms, but its convergence has not been investigated so far. This paper aims to prove the global convergence of this semi-proximal generalized ADMM under mild conditions, which may provide a theoretical foundation for potential practical applications.
The rest of this paper is organized as follows. In Sect. 2, we present some preliminary results and review some variants of ADMM. In Sect. 3, we establish the global convergence of the semi-proximal generalized ADMM. In Sect. 4, we conclude the paper with some remarks.
2 Preliminaries
In this section, we provide some basic concepts and give a quick review of some variants of generalized ADMMs which will be used in the subsequent developments.
2.1 Basic concepts
2.2 Generalized ADMM
2.3 Proximal ADMM
3 Global convergence
Algorithm sPGADM
Set \(\rho\in(0,2)\) and \(\sigma>0\). Choose \({\mathcal {S}}:{\mathcal {X}}\to {\mathcal {X}}\) and \({\mathcal {T}}:{\mathcal {Y}}\to {\mathcal {Y}}\) such that \(\Sigma_{f}+{\mathcal {S}}+A^{*}A\succ0\) and \(\Sigma_{g}+{\mathcal {T}}+B^{*}B\succ0\). Input an initial point \(\tilde{\omega}_{0}=(\tilde{x}_{0},\tilde{y}_{0};\tilde{\lambda}_{0})\in {\mathcal {X}}\times {\mathcal {Y}}\times {\mathcal {Z}}\). For \(k=1,2,\ldots\) ,
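To illustrate the role of the relaxation factor \(\rho\in(0,2)\), the following is a minimal sketch of the plain generalized ADMM of Eckstein and Bertsekas on a toy quadratic problem. It is an illustration only, not the paper's sPGADM: the semi-proximal operators \({\mathcal {S}}\), \({\mathcal {T}}\) are taken to be zero here, and the toy problem, variable names, and parameter values are our own assumptions.

```python
# Generalized ADMM sketch (relaxation factor rho in (0,2)) on the toy problem
#   min 0.5*(x-a)^2 + 0.5*(y-b)^2   s.t.   x + y = c,
# whose exact solution is x* = (a-b+c)/2, y* = (b-a+c)/2, lambda* = (a+b-c)/2
# (Lagrangian convention L = f + g + <lam, x + y - c>).

def generalized_admm(a, b, c, rho=1.5, sigma=1.0, iters=100):
    x = y = lam = 0.0
    for _ in range(iters):
        # x-subproblem: argmin_x f(x) + (sigma/2)*(x + y - c + lam/sigma)^2
        x = (a + sigma * (c - y) - lam) / (1.0 + sigma)
        # relaxation step: replace x by rho*x + (1 - rho)*(c - y)
        t = rho * x + (1.0 - rho) * (c - y)
        # y-subproblem, with the relaxed quantity t in place of x
        y = (b + sigma * (c - t) - lam) / (1.0 + sigma)
        # multiplier update on the relaxed residual
        lam = lam + sigma * (t + y - c)
    return x, y, lam

# With a=1, b=3, c=2 the iterates approach x* = 0, y* = 2, lambda* = 1.
x, y, lam = generalized_admm(a=1.0, b=3.0, c=2.0)
print(x, y, lam)
```

For this strongly convex toy problem the iteration converges linearly for any \(\rho\in(0,2)\); \(\rho=1\) recovers the classic ADMM, while values above 1 are the over-relaxation that often accelerates convergence in practice.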
Before deducing the convergence properties of sPGADM, we make some preparations to facilitate the later analysis. Firstly, we make the following assumption.
Assumption A
There exists at least one vector \((\bar{x},\bar{y};\bar{\lambda })\in {\mathcal {X}}\times {\mathcal {Y}}\times {\mathcal {Z}}\) such that the KKT system (3) is satisfied.
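The KKT system (3) referenced above belongs to the preliminaries and is not reproduced in this section. For a separable problem of the form \(\min f(x)+g(y)\) subject to \(Ax+By=c\), one standard form of the KKT conditions reads as follows; the sign convention (Lagrangian \(f(x)+g(y)+\langle\lambda, Ax+By-c\rangle\)) is our assumption:

```latex
% KKT system for  min f(x) + g(y)  s.t.  Ax + By = c
\begin{aligned}
0 &\in \partial f(\bar{x}) + A^{*}\bar{\lambda}, \\
0 &\in \partial g(\bar{y}) + B^{*}\bar{\lambda}, \\
A\bar{x} + B\bar{y} &= c.
\end{aligned}
```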
We now let \(\{(x_{k},y_{k};\lambda_{k})\}\) be the sequence generated by sPGADM and \((\bar{x},\bar{y};\bar{\lambda})\) be a solution of the KKT system (3). For a more convenient discussion, we denote \(x_{k}^{e}=x_{k}-\bar{x}\), \(y_{k}^{e}=y_{k}-\bar{y}\), and \(\lambda_{k}^{e}=\lambda_{k}-\bar{\lambda}\).
The following two lemmas play a fundamental role in our convergence analysis.
Lemma 3.1
Proof
Lemma 3.2
Proof
The following theorem shows that the sequence \(\{\phi_{k}\}_{k>0}\) is monotonically decreasing and the algorithm sPGADM is globally convergent.
Theorem 3.3
Proof
4 Conclusions
The generalized ADMM, an important variant of ADMM, is derived from the generalized proximal point algorithm applied to inclusion problems involving the sum of maximal monotone operators. Recently, it was shown that the generalized ADMM is also equivalent to the unit step-length ADMM with an additional relaxation step governed by a factor in \((0,2)\). Combining the idea of semi-proximal terms, Xiao et al. [11] proposed a semi-proximal generalized ADMM and illustrated numerically that their method is very promising for semidefinite programming. Xiao et al. [11] also introduced another variant of the semi-proximal generalized ADMM with different semi-proximal terms, whose convergence had not been investigated. This study aimed to remedy that deficiency and established a convergence result under mild conditions, with the relaxation factor again restricted to \((0,2)\). More precisely, if \(\rho\in(0,2)\), the theoretical analysis shows that the proposed algorithm converges globally, provided that the set of optimal solutions is nonempty and the operators \(\Sigma_{f}+{\mathcal {S}}+A^{*}A\) and \(\Sigma_{g}+{\mathcal {T}}+B^{*}B\) are both positive definite. This result is consistent with that for the standard semi-proximal ADMM [11]. The paper has focused on analyzing the semi-proximal generalized ADMM for solving separable convex minimization; the algorithm has not yet been tested with different values of the factor ρ for performance comparison, which we leave as a task for future work.
Declarations
Funding
This work was supported by the Academy Science Research Foundation of Guangxi Educational Committee (Grant No. KY2015YB190) and the Key Laboratory Foundation of Guangxi Province (Grant No. GXMMSL201406).
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Glowinski, R., Marroco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité, d’une classe de problèmes de Dirichlet non linéaires. Rev. Fr. Autom. Inform. Rech. Opér., Anal. Numér. 9, 41–76 (1975)
- Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2, 17–40 (1976)
- Gabay, D.: Applications of the method of multipliers to variational inequalities. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems. Studies in Mathematics and Its Applications, vol. 15, pp. 299–331. Elsevier, Amsterdam (1983)
- Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
- Eckstein, J.: Some saddle-function splitting methods for convex programming. Optim. Methods Softw. 4, 75–83 (1994)
- He, B.S., Liao, L.Z., Han, D.R., Yang, H.: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92, 103–118 (2002)
- Fazel, M., Pong, T.K., Sun, D.F., Tseng, P.: Hankel matrix rank minimization with applications to system identification and realization. SIAM J. Matrix Anal. Appl. 34, 946–977 (2013)
- Glowinski, R.: On alternating direction methods of multipliers: a historical perspective. In: Fitzgibbon, W., Kuznetsov, Y.A., Neittaanmaki, P., Pironneau, O. (eds.) Modeling, Simulation and Optimization for Science and Technology, pp. 59–82. Springer, Berlin (2014)
- He, B.S., Yuan, X.M.: On the \(O(1/n)\) convergence rate of the alternating direction method. SIAM J. Numer. Anal. 50, 700–709 (2012)
- He, B.S., Yuan, X.M.: Block-wise alternating direction method of multipliers for multiple-block convex programming and beyond. SMAI J. Comput. Math. 1, 145–174 (2015)
- Xiao, Y., Chen, L., Li, D.: A generalized alternating direction method of multipliers with semi-proximal terms for convex composite conic programming. Math. Program. Comput. (2018, in press). https://doi.org/10.1007/s12532-018-0134-9
- Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
- Chen, C.H.: Numerical algorithms for a class of matrix norm approximation problems. PhD thesis, Nanjing University, Department of Mathematics (2012)
- Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, New York (1998)