The modified proximal point algorithm in Hadamard spaces
Shih-sen Chang^{1, 2},
Lin Wang^{2},
Ching-Feng Wen^{3} and
Jian Qiang Zhang^{2}
https://doi.org/10.1186/s13660-018-1713-z
© The Author(s) 2018
Received: 10 December 2017
Accepted: 10 May 2018
Published: 24 May 2018
Abstract
The purpose of this paper is to propose a modified proximal point algorithm for solving minimization problems in Hadamard spaces. We then prove that the sequence generated by the algorithm converges strongly (convergence in metric) to a minimizer of convex objective functions. The results extend several results in Hilbert spaces, Hadamard manifolds and nonpositive curvature metric spaces.
1 Introduction
Recently, many convergence results obtained via the PPA for solving optimization problems have been extended from classical linear spaces, such as Euclidean, Hilbert and Banach spaces, to the setting of manifolds [6–9]. The minimizers of convex objective functionals in spaces with nonlinearity play a crucial role in analysis and geometry.
In 2015, Cholamjiak [12] presented a modified PPA based on the Halpern iteration and proved a strong convergence theorem in the framework of \(\operatorname{CAT}(0)\) spaces.
Very recently, Khatibzadeh et al. [13] presented a Halpern-type regularization of the proximal point algorithm; under suitable conditions, they proved that the sequence generated by the algorithm converges strongly to a minimizer of the convex function in Hadamard spaces.
In this work we therefore continue along these lines and, by using viscosity implicit rules, introduce a modified PPA in Hadamard spaces for solving minimization problems. We prove that the sequence generated by the algorithm converges strongly to a minimizer of convex objective functions. The results presented in this paper extend and improve the main results of Martinet [1], Rockafellar [2], Bačák [10], Cholamjiak [12], Xu [4], Kamimura and Takahashi [5], and Khatibzadeh et al. [13, Theorem 4.4].
2 Preliminaries and lemmas
In order to prove the main results, the following notions, lemmas and conclusions will be needed.
Let \((X, d)\) be a metric space and let \(x, y \in X\). A geodesic path joining x to y is an isometry \(c : [0, d(x, y)] \to X\) such that \(c(0) = x\) and \(c(d(x, y)) = y\). The image of a geodesic path joining x to y is called a geodesic segment between x and y. The metric space \((X, d)\) is said to be a geodesic space if every two points of X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic joining x and y for each \(x, y \in X\).
It is well known that any complete and simply connected Riemannian manifold having nonpositive sectional curvature is a \(\operatorname{CAT}(0)\) space. Other examples of \(\operatorname{CAT}(0)\) spaces include pre-Hilbert spaces [15], \(\mathbb{R}\)-trees, and Euclidean buildings [16]. A complete \(\operatorname{CAT}(0)\) space is often called a Hadamard space. We write \((1-t)x \oplus ty\) for the unique point z in the geodesic segment joining x to y such that \(d(x, z) = t d(x, y)\) and \(d(y, z) = (1-t) d(x, y)\). We also denote by \([x, y]\) the geodesic segment joining x to y, that is, \([x, y] = \{(1-t) x \oplus ty: 0 \le t \le 1 \}\). A subset C of a \(\operatorname{CAT}(0)\) space is convex if \([x, y] \subset C\) for all \(x, y \in C\).
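In a Hilbert space, which is a model Hadamard space, the geodesic point \((1-t)x \oplus ty\) is simply the affine combination \((1-t)x + ty\). The following minimal sketch (not part of the original paper) illustrates numerically the two metric identities \(d(x,z) = t\,d(x,y)\) and \(d(y,z) = (1-t)\,d(x,y)\) in the Euclidean plane:

```python
import math

def combo(x, y, t):
    """Geodesic point (1-t)x (+) ty; in Euclidean space, the affine combination."""
    return [(1 - t) * xi + t * yi for xi, yi in zip(x, y)]

def dist(x, y):
    """Euclidean distance between two points given as lists."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

x, y, t = [0.0, 0.0], [3.0, 4.0], 0.25
z = combo(x, y, t)
# z lies on the segment [x, y] and splits it in ratio t : (1 - t)
assert abs(dist(x, z) - t * dist(x, y)) < 1e-12
assert abs(dist(y, z) - (1 - t) * dist(x, y)) < 1e-12
```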
For a thorough discussion of \(\operatorname{CAT}(0)\) spaces, some fundamental geometric properties and important conclusions, we refer to Bridson and Haefliger [15, 16].
The following lemmas play an important role in proving our main results.
Lemma 2.1
([17])
 (1)
\(d(t x\oplus(1-t)y, z)\leq t d(x,z) +(1-t) d(y,z)\);
 (2)
\(d(t x\oplus(1-t)y, s x\oplus(1-s)y)= \vert t-s\vert d(x,y)\);
 (3)
\(d(t x\oplus(1-t)y, t u\oplus(1-t)w)\leq t d(x, u) +(1-t) d(y, w)\).
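The three properties of Lemma 2.1 can be sanity-checked numerically in the Euclidean plane, where \(tx \oplus (1-t)y\) is the affine combination; this is only an illustration under that model assumption, not a proof:

```python
import math
import random

def comb(t, x, y):
    """t x (+) (1-t) y in Euclidean space: the affine combination."""
    return [t * a + (1 - t) * b for a, b in zip(x, y)]

def d(x, y):
    return math.dist(x, y)

random.seed(0)
for _ in range(100):
    x, y, z, u, w = ([random.uniform(-5, 5) for _ in range(2)] for _ in range(5))
    t, s = random.random(), random.random()
    # (1) convexity of the distance to a fixed point z
    assert d(comb(t, x, y), z) <= t * d(x, z) + (1 - t) * d(y, z) + 1e-9
    # (2) exact distance between two points on the same geodesic
    assert abs(d(comb(t, x, y), comb(s, x, y)) - abs(t - s) * d(x, y)) < 1e-9
    # (3) joint convexity in both endpoints
    assert d(comb(t, x, y), comb(t, u, w)) <= t * d(x, u) + (1 - t) * d(y, w) + 1e-9
```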

Denote a pair \((a,b)\in X\times X\) by \(\overrightarrow{ab}\) and call it a vector. Quasilinearization in a \(\operatorname{CAT}(0)\) space X is defined as a mapping \(\langle\cdot,\cdot\rangle: (X\times X)\times(X\times X)\to\mathbb{R}\) such that
$$ \langle\overrightarrow{ab},\overrightarrow{cd}\rangle=\frac{1}{2} \bigl(d^{2}(a,d)+d^{2}(b,c)-d^{2}(a,c)-d^{2}(b,d) \bigr) $$(2.2)
for all \(a,b,c,d\in X\).

We say that X satisfies the Cauchy–Schwarz inequality if
$$ \langle\overrightarrow{ab}, \overrightarrow{cd} \rangle\le d(a, b) d(c, d),\quad \forall a, b, c, d \in X. $$(2.3)
It is well known [18, Corollary 3] that a geodesically connected metric space is a \(\operatorname{CAT}(0)\) space if and only if it satisfies the Cauchy–Schwarz inequality.
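In a Hilbert space, a direct expansion of (2.2) shows that \(\langle\overrightarrow{ab},\overrightarrow{cd}\rangle\) reduces to the inner product \(\langle b-a, d-c\rangle\), so (2.3) is the classical Cauchy–Schwarz inequality. A hedged numerical check of both facts (an illustration added here, not from the original paper):

```python
import math
import random

def d2(a, b):
    """Squared Euclidean distance."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def quasi(a, b, c, e):
    # Formula (2.2): <ab, ce> = (d^2(a,e) + d^2(b,c) - d^2(a,c) - d^2(b,e)) / 2
    return 0.5 * (d2(a, e) + d2(b, c) - d2(a, c) - d2(b, e))

def inner(a, b, c, e):
    """Hilbert-space reduction: <b - a, e - c>."""
    return sum((bi - ai) * (ei - ci) for ai, bi, ci, ei in zip(a, b, c, e))

random.seed(1)
for _ in range(50):
    a, b, c, e = ([random.uniform(-3, 3) for _ in range(3)] for _ in range(4))
    assert abs(quasi(a, b, c, e) - inner(a, b, c, e)) < 1e-9          # (2.2)
    assert quasi(a, b, c, e) <= math.dist(a, b) * math.dist(c, e) + 1e-9  # (2.3)
```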

By using quasilinearization, Ahmadi Kakavandi [19] proved that \(\{x_{n}\}\) Δ-converges to \(x \in X\) if and only if
$$ \limsup_{n \to\infty}\langle\overrightarrow{xx_{n}}, \overrightarrow{xy}\rangle\le0,\quad \forall y \in X. $$(2.4)

Let C be a nonempty closed convex subset of a complete \(\operatorname{CAT}(0)\) space X (i.e., a Hadamard space). The metric projection \(P_{C}: X\to C\) is defined by
$$ u=P_{C}(x) \quad \Longleftrightarrow \quad d(u,x)=\inf\bigl\{ d(y,x):y \in C\bigr\} ,\quad x\in X. $$(2.5)
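For a concrete (and hypothetical) instance of (2.5): in Euclidean space, the metric projection onto a coordinate box, a closed convex set, has the closed form of componentwise clipping. A minimal sketch, added here only for illustration:

```python
def project_box(x, lo, hi):
    """Metric projection P_C(x) onto the box C = [lo_1, hi_1] x ... x [lo_n, hi_n],
    a closed convex subset of Euclidean space (a Hadamard space).
    The nearest point is obtained by clipping each coordinate."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

# A point outside the unit square projects onto its boundary.
u = project_box([2.5, -1.0], [0.0, 0.0], [1.0, 1.0])
assert u == [1.0, 0.0]
```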
Lemma 2.2
([18])
Lemma 2.3
([11])
 (1)
the resolvent \(J_{r}\) is firmly nonexpansive, that is,
$$d(J_{r} x, J_{r} y) \le d\bigl((1-\lambda)x \oplus \lambda J_{r} x, (1-\lambda)y \oplus\lambda J_{r} y\bigr) $$
for all \(x, y \in X\) and for all \(\lambda\in(0, 1)\);
 (2)
the set \(\operatorname{Fix}(J_{r})\) of fixed points of the resolvent \(J_{r}\) associated with f coincides with the set \(\operatorname {argmin}_{y\in X} f(y)\) of minimizers of f.
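The identity \(\operatorname{Fix}(J_{r}) = \operatorname{argmin}_{y\in X} f(y)\) can be illustrated on the real line, where the resolvent \(J_{r}x = \operatorname{argmin}_{y}[f(y) + \frac{1}{2r}d^{2}(x,y)]\) has a closed form for a quadratic f. This sketch (with an illustrative choice \(f(y) = (y-a)^2\), not taken from the paper) shows the unregularized PPA, i.e. iterating \(J_r\), converging to the minimizer:

```python
def J(r, x, a=2.0):
    """Resolvent of f(y) = (y - a)^2 on the real line:
    J_r x = argmin_y [ f(y) + (1/(2r)) (y - x)^2 ].
    Setting the derivative 2(y - a) + (y - x)/r to zero gives the closed form."""
    return (x + 2 * r * a) / (2 * r + 1)

r = 0.7
x = 10.0
for _ in range(200):      # proximal point iteration x_{n+1} = J_r(x_n)
    x = J(r, x)

assert abs(x - 2.0) < 1e-6           # converges to argmin f
assert abs(J(r, 2.0) - 2.0) < 1e-12  # the minimizer is a fixed point of J_r
```

Here the map \(x \mapsto J_r x\) is a contraction with factor \(1/(2r+1)\), so convergence is geometric; in general Hadamard spaces \(J_r\) is only (firmly) nonexpansive, which is why regularized schemes such as Halpern or viscosity iterations are used.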
Remark 2.4
Every firmly nonexpansive mapping is nonexpansive. Hence \(J_{r}\) is a nonexpansive mapping.
Lemma 2.5
([21])
Lemma 2.6
([22])
 (a)
\(\sum_{n=1}^{\infty}\gamma_{n} = \infty\);
 (b)
\(\limsup_{n \to\infty} \frac{\delta_{n}}{\gamma_{n}}\leq0\) or \(\sum_{n=1}^{\infty}\delta_{n} < \infty\).
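A quick numerical illustration of this lemma (a sketch with illustrative sequences, not from the paper): for a recursion \(a_{n+1} \le (1-\gamma_n)a_n + \delta_n\) with \(\gamma_n = 1/n\) (so \(\sum \gamma_n = \infty\)) and \(\delta_n = 1/n^2\) (so \(\delta_n/\gamma_n = 1/n \to 0\)), the sequence \(a_n\) decays to 0:

```python
# Run the recursion a_{n+1} = (1 - gamma_n) a_n + delta_n from a_1 = 1.
# Conditions (a) and (b) hold for gamma_n = 1/n, delta_n = 1/n^2,
# so the lemma predicts a_n -> 0 (here roughly like (log n)/n).
a = 1.0
for n in range(1, 200001):
    gamma, delta = 1.0 / n, 1.0 / n ** 2
    a = (1 - gamma) * a + delta
assert a < 1e-3
```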
3 The main results
Now, we are in a position to give the main results in this paper.
Theorem 3.1
 (a)
\(\lim_{n\to\infty}\alpha_{n}=0\);
 (b)
\(\sum_{n=0}^{\infty} \alpha_{n}=\infty\);
 (c)
\(\frac{\vert \alpha_{n}-\alpha_{n-1}\vert }{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).
Proof
We divide the proof into four steps.
Step 2. Next, we prove that \(\{x_{n}\}\) is bounded.
Step 3. Next, we prove that the sequence \(\{x_{n}\}\) converges strongly to some point in \(\operatorname{Fix}(J_{r})\).
This completes the proof. □
Remark
A simple example of a sequence \(\{\alpha_{n}\}\) satisfying conditions (a)–(c) is given by \(\alpha_{n} = 1/n^{\sigma}\), where \(0 < \sigma < 1\).
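The three conditions can be verified directly for this example: \(\alpha_n \to 0\), the partial sums \(\sum 1/n^{\sigma}\) diverge for \(\sigma < 1\), and \(|\alpha_n - \alpha_{n-1}|/\alpha_n^2 \approx \sigma n^{\sigma - 1} \to 0\). A hedged numerical spot-check, added for illustration:

```python
# Check conditions (a)-(c) for alpha_n = 1/n^sigma with sigma = 1/2.
sigma = 0.5
alpha = lambda n: 1.0 / n ** sigma

assert alpha(10**6) < 1e-2                            # (a) alpha_n -> 0
assert sum(alpha(n) for n in range(1, 10001)) > 100   # (b) partial sums keep growing
ratio = lambda n: abs(alpha(n) - alpha(n - 1)) / alpha(n) ** 2
assert ratio(10**6) < ratio(100) < ratio(10)          # (c) the ratio decreases toward 0
```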
An inspection of the proof of Theorem 3.1 shows that we use only the fact that the resolvent operator \(J_{r}\) is nonexpansive. If we replace \(J_{r}\) with a nonexpansive mapping \(T : C \to C\) in Theorem 3.1, then we obtain the following.
Theorem 3.2
 (a)
\(\lim_{n\to\infty}\alpha_{n}=0\);
 (b)
\(\sum_{n=0}^{\infty} \alpha_{n}=\infty\);
 (c)
\(\frac{\vert \alpha_{n}-\alpha_{n-1}\vert }{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).
Since every Hilbert space is a Hadamard space, the following result can be obtained from Theorem 3.1 immediately.
Theorem 3.3
4 Applications
In this section, we utilize the results presented in this paper to study a class of inclusion problems in Hilbert spaces.
We note that, for all \(r > 0\), the resolvent mapping \(J_{r}^{\partial f}\) is a single-valued nonexpansive mapping.
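For a concrete instance: taking \(f(x) = |x|\) on the real line, the resolvent \(J_r^{\partial f} = (I + r\,\partial f)^{-1}\) is the well-known soft-thresholding operator, which is single-valued and nonexpansive. A minimal sketch (an illustrative example, not from the paper):

```python
def prox_abs(r, x):
    """Resolvent J_r^{∂f} = (I + r ∂f)^{-1} for f(x) = |x| on the real line:
    the soft-thresholding operator. Points in [-r, r] map to the minimizer 0."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

r = 0.5
pts = [-2.0, -0.3, 0.0, 0.4, 1.7]
for x in pts:
    for y in pts:
        # nonexpansiveness: |J x - J y| <= |x - y|
        assert abs(prox_abs(r, x) - prox_abs(r, y)) <= abs(x - y) + 1e-12
assert prox_abs(r, 0.2) == 0.0
```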
Therefore the following result can be obtained from Theorem 3.2 immediately.
Theorem 4.1
Theorem 4.2
Declarations
Acknowledgements
The authors would like to express their thanks to the referees and the editors for their helpful comments and advice. The first author was supported by the Natural Science Foundation of China Medical University, Taiwan, and the second author was supported by the National Natural Science Foundation of China (Grant No. 11361070).
Authors’ contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Competing interests
None of the authors have any competing interests in the manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 4, 154–158 (1970)
 Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
 Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)
 Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
 Kamimura, S., Takahashi, W.: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 106, 226–240 (2000)
 Ferreira, O.P., Oliveira, P.R.: Proximal point algorithm on Riemannian manifolds. Optimization 51, 257–270 (2002)
 Li, C., López, G., Martín-Márquez, V.: Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 79, 663–683 (2009)
 Papa Quiroz, E.A., Oliveira, P.R.: Proximal point methods for quasiconvex and convex functions with Bregman distances on Hadamard manifolds. J. Convex Anal. 16, 49–69 (2009)
 Wang, J.H., López, G.: Modified proximal point algorithms on Hadamard manifolds. Optimization 60, 697–708 (2011)
 Bačák, M.: The proximal point algorithm in metric spaces. Isr. J. Math. 194, 689–701 (2013)
 Ariza-Ruiz, D., Leustean, L., López, G.: Firmly nonexpansive mappings in classes of geodesic spaces. Trans. Am. Math. Soc. 366, 4299–4322 (2014)
 Cholamjiak, P.: The modified proximal point algorithm in \(\operatorname{CAT}(0)\) spaces. Optim. Lett. 9, 1401–1410 (2015)
 Khatibzadeh, H., Mohebbi, V., Ranjbar, S.: New results on the proximal point algorithm in nonpositive curvature metric spaces. Optimization 66(7), 1191–1199 (2017)
 Bruhat, M., Tits, J.: Groupes réductifs sur un corps local. I. Données radicielles valuées. Publ. Math. Inst. Hautes Études Sci. 41, 5–251 (1972)
 Bridson, M.R., Haefliger, A.: Metric Spaces of Non-positive Curvature. Grundlehren der Mathematischen Wissenschaften, vol. 319. Springer, Berlin (1999)
 Brown, K.S.: Buildings. Springer, New York (1989)
 Dhompongsa, S., Panyanak, B.: On Δ-convergence theorems in \(\operatorname{CAT}(0)\) spaces. Comput. Math. Appl. 56, 2572–2579 (2008)
 Berg, I.D., Nikolaev, I.G.: Quasilinearization and curvature of Alexandrov spaces. Geom. Dedic. 133, 195–218 (2008)
 Ahmadi, P., Khatibzadeh, H.: On the convergence of inexact proximal point algorithm on Hadamard manifolds. Taiwan. J. Math. 18, 419–433 (2014)
 Jost, J.: Nonpositive Curvature: Geometric and Analytic Aspects. Lectures in Mathematics ETH Zürich. Birkhäuser, Basel (1997)
 Wangkeeree, R., Preechasilp, P.: Viscosity approximation methods for nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces. J. Inequal. Appl. 2013, Article ID 93 (2013)
 Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)