Open Access

Halpern type iteration with multiple anchor points in a Hadamard space

Journal of Inequalities and Applications 2015, 2015:182

https://doi.org/10.1186/s13660-015-0693-5

Received: 28 October 2014

Accepted: 17 May 2015

Published: 6 June 2015

Abstract

In this paper, we consider an approximation sequence for a common fixed point generated by a Halpern type iteration with a finite family of nonexpansive mappings in a Hadamard space. We propose a variant of the Halpern type iteration with multiple anchor points and prove that it converges strongly to a common fixed point.

Keywords

fixed point; approximation theorem; nonexpansive mapping; geodesic space; \(\operatorname{CAT}(0)\)

MSC

47H09

1 Introduction

The problem of finding a fixed point of nonexpansive mappings is one of the most important problems in nonlinear analysis, and it has been investigated by many researchers with various approaches. In 1992, Wittmann [1] proved that a Halpern type iteration with a nonexpansive mapping converges strongly to a fixed point in a Hilbert space. Later, Shimizu and Takahashi [2] showed that a Halpern type iteration with two nonexpansive mappings converges strongly to a common fixed point in a Hilbert space. Moreover, Kimura et al. [3] proved an approximation theorem for common fixed points of a finite family of nonexpansive mappings in a uniformly convex Banach space whose norm is Gâteaux differentiable.

On the other hand, in 2010, Saejung [4] introduced a Halpern type iteration with a nonexpansive mapping approximating a fixed point in a Hadamard space, and also proved the following theorem.

Theorem 1.1

Let X be a Hadamard space. Let \(T_{1}, T_{2}, \ldots, T_{N} : X \to X\) be nonexpansive mappings with \(\bigcap_{i = 1}^{N} F(T_{i}) \neq\emptyset\), and let \(u, x_{1} \in X\) be arbitrarily chosen. Define an iterative sequence \(\{x_{n}\}\) by
$$x_{n + 1} = \alpha_{n}u \oplus(1 - \alpha_{n})T_{(n \bmod N) + 1}x_{n} $$
for all \(n \in\mathbb{N}\), where \(\{\alpha_{n}\}\) is a sequence in \(] 0, 1 [ \) such that \(\lim_{n \to\infty} \alpha_{n} = 0\), \(\sum^{\infty}_{n = 1} \alpha_{n} = \infty\), and \(\sum^{\infty }_{n = 1} |\alpha_{n + 1} - \alpha_{n}| < \infty\). Suppose, in addition, that
$$\bigcap_{i = 1}^{N} F(T_{i}) = F(T_{N} \circ T_{N - 1} \circ\cdots\circ T_{1}). $$
Then \(\{x_{n}\}\) converges to \(z \in\bigcap_{i = 1}^{N} F(T_{i})\) which is nearest to u.
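As a numerical illustration of Theorem 1.1 (not part of the original paper), consider the Euclidean plane, a Hadamard space in which \(\oplus\) is the usual convex combination, and take as the nonexpansive mappings the metric projections onto two half-planes; the sets, the anchor u, the starting point, and \(\alpha_{n} = 1/(n+1)\) are our own choices.

```python
# Theorem 1.1 in the Euclidean plane: T_1, T_2 are the metric projections
# onto the half-planes {x <= 1} and {y <= 1}, so that
# F(T_1) intersect F(T_2) = {(x, y): x <= 1, y <= 1}.
def T1(p):
    return (min(p[0], 1.0), p[1])

def T2(p):
    return (p[0], min(p[1], 1.0))

T = [T1, T2]
u = (3.0, 0.0)        # anchor point
x = (0.0, 5.0)        # starting point x_1
N = len(T)

for n in range(1, 200001):
    alpha = 1.0 / (n + 1)     # satisfies the conditions on {alpha_n}
    Tx = T[n % N](x)          # cyclic choice T_{(n mod N) + 1}
    x = (alpha * u[0] + (1 - alpha) * Tx[0],
         alpha * u[1] + (1 - alpha) * Tx[1])

# The point of F nearest to u = (3, 0) is (1, 0); x should approach it.
```

The iterates indeed approach the point of \(F\) nearest to u, as the theorem predicts.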
In this paper, we establish an approximation theorem for common fixed points of nonexpansive mappings in a Hadamard space. We generate the iterative sequence \(\{x_{n}\}\) as follows. Let \(u_{1}, u_{2}, \ldots, u_{r}\), \(x_{1}\) be arbitrary points in a Hadamard space, and let \(\{x_{n}\}\) be iteratively generated by
$$\left \{ \textstyle\begin{array}{l} t^{i}_{n} = \alpha_{n}u_{i} \oplus(1 - \alpha_{n})T_{i}x_{n}, \quad i = 1, 2, \ldots, r, \\ y^{1}_{n} = t^{1}_{n}, \\ y^{j}_{n} = \beta^{j - 1}_{n}t^{j}_{n} \oplus(1 - \beta^{j - 1}_{n})y^{j - 1}_{n}, \quad j = 2, 3, \ldots, r, \\ x_{n + 1} = y^{r}_{n} \end{array}\displaystyle \right . $$
for all \(n \in\mathbb{N}\), where \(\{\alpha_{n}\}\) is a sequence satisfying the same conditions as in Theorem 1.1, and, for all \(k = 1, 2, \ldots, r - 1\), \(\{\beta^{k}_{n}\}\) are sequences in \([a, b] \subset \, ] 0, 1 [ \). This iterative sequence uses a different type of convex combination from that of Theorem 1.1. Furthermore, whereas known Halpern type iterations use a single anchor point, our iterative sequence has multiple anchor points. We then show that \(\{x_{n}\}\) converges strongly to a common fixed point. In a Hilbert space, \(\{x_{n}\}\) converges to the nearest point to \(\sum_{i = 1}^{r} \gamma^{i} u_{i}\) in the set of common fixed points of \(\{T_{i}\}\), where \(\gamma^{i} \in\, ] 0, 1 [ \) for all \(i = 1, 2, \ldots, r\) and \(\sum_{i = 1}^{r} \gamma^{i} = 1\); see [3]. However, this is not always true in a Hadamard space.

2 Preliminaries

Let \((X, d)\) be a metric space. For \(x, y \in X\), a mapping \(c: [0, l] \to X\) is called a geodesic with endpoints x, y if c satisfies \(c(0) = x\), \(c(l) = y\) and \(d(c(u), c(v)) = |u - v|\) for \(u, v \in[0, l]\). If a geodesic with endpoints x, y exists for any \(x, y \in X\), then we call X a geodesic metric space. Moreover, if a geodesic exists uniquely for each \(x, y \in X\), then we call X a uniquely geodesic space. A Hadamard space, which is defined below, is a uniquely geodesic space.

Let X be a uniquely geodesic space. For \(x, y \in X\), the image of a geodesic c with endpoints x, y is called a geodesic segment joining x and y, and is denoted by \([x, y]\). A geodesic triangle \(\bigtriangleup(x_{1}, x_{2}, x_{3})\) with vertices \(x_{1}\), \(x_{2}\), \(x_{3}\) in X is the union of geodesic segments joining each pair of vertices. A comparison triangle \(\overline{\bigtriangleup}(\bar{x}_{1},\bar {x}_{2},\bar{x}_{3})\) in \(\mathbb{R}^{2}\) for \(\bigtriangleup(x_{1}, x_{2}, x_{3})\) is a triangle such that \(d(x_{i}, x_{j}) = \|\bar{x}_{i} - \bar{x}_{j}\| \) for all \(i, j = 1, 2, 3\). If, for any \(p, q \in\bigtriangleup(x_{1}, x_{2}, x_{3})\) and their comparison points \(\bar{p}, \bar{q} \in \overline{\bigtriangleup}(\bar{x}_{1},\bar{x}_{2},\bar{x}_{3})\), the inequality
$$d(p,q) \leq\|\bar{p}-\bar{q}\| $$
is satisfied for every geodesic triangle in X, then X is called a \(\operatorname{CAT}(0)\) space, and this inequality is called the \(\operatorname{CAT}(0)\) inequality. A Hadamard space is defined as a complete \(\operatorname{CAT}(0)\) space.

Let X be a Hadamard space. For \(t \in[0, 1]\) and \(x, y \in X\), there exists a unique \(z \in[x, y]\) such that \(d(x, z) = (1 - t)d(x, y)\) and \(d(z, y) = td(x, y)\). We denote z by \(tx \oplus(1 - t)y\). From the \(\operatorname{CAT}(0)\) inequality, we obtain the following lemma. This lemma plays an important role in this paper.

Lemma 2.1

Let X be a Hadamard space. Then, for any \(x, y, z \in X\) and \(t \in\, ] 0, 1 [ \), it follows that
$$d\bigl(x, ty \oplus(1 - t)z\bigr)^{2} \leq td(x, y)^{2} + (1 - t)d(x, z)^{2} - t(1 - t)d(y, z)^{2}. $$
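In a Hilbert space this inequality holds with equality. The following sketch (our own illustration, not from the paper) checks the identity \(d(x, ty \oplus(1 - t)z)^{2} = td(x, y)^{2} + (1 - t)d(x, z)^{2} - t(1 - t)d(y, z)^{2}\) at randomly chosen points of the Euclidean plane, where \(\oplus\) is the ordinary convex combination.

```python
import random

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

random.seed(0)
for _ in range(1000):
    x, y, z = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    t = random.uniform(0.01, 0.99)
    m = (t * y[0] + (1 - t) * z[0], t * y[1] + (1 - t) * z[1])  # ty (+) (1-t)z
    lhs = dist(x, m) ** 2
    rhs = (t * dist(x, y) ** 2 + (1 - t) * dist(x, z) ** 2
           - t * (1 - t) * dist(y, z) ** 2)
    assert abs(lhs - rhs) < 1e-9   # equality holds in the Euclidean plane
```

A general \(\operatorname{CAT}(0)\) space only satisfies the inequality, since its triangles are "thinner" than their Euclidean comparison triangles.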

By Lemma 2.1, we easily obtain the following result.

Lemma 2.2

Let \(\{x_{n}\}\), \(\{y_{n}\}\) be bounded sequences in a Hadamard space X. For \(\{\alpha_{n}\} \subset \, ] 0, 1 [ \), define a sequence \(\{z_{n}\}\) by \(z_{n} = \alpha_{n}x_{n} \oplus(1 - \alpha_{n})y_{n}\). Then \(\{z_{n}\}\) is bounded.

For more details on Hadamard spaces, see [5].

Let T be a mapping from X into itself. T is called a nonexpansive mapping if the inequality \(d(Tx, Ty) \leq d(x, y)\) is satisfied for any \(x,y \in X\). A point \(z \in X\) is called a fixed point of T if \(Tz = z\) holds. We denote the set of all fixed points of T by \(F(T)\). A subset \(C \subset X\) is said to be convex if, for any \(x, y \in C\), \([x, y]\) is included in C. We know that \(F(T)\) is a closed convex subset of X if T is nonexpansive.

Let \(\{x_{n}\}\) be a bounded sequence in a metric space X. For any \(x \in X\), we put
$$r\bigl(x, \{x_{n}\}\bigr) = \limsup_{n \to\infty}d(x, x_{n}),\qquad r\bigl(\{x_{n}\}\bigr) = \inf _{x \in X}r\bigl(x, \{x_{n}\}\bigr). $$
Then, if there exists \(x \in X\) such that \(r(x, \{x_{n}\}) = r(\{x_{n}\})\), we call x an asymptotic center of \(\{x_{n}\}\). Moreover, if every subsequence of \(\{x_{n}\}\) has the same unique asymptotic center x, we say that \(\{x_{n}\}\) is Δ-convergent to x. We know that any bounded sequence \(\{x_{n}\}\) in a Hadamard space has a Δ-convergent subsequence; see [6, 7].

3 Halpern type iteration with multiple anchor points

In this section, we introduce some lemmas and show the main theorem.

Lemma 3.1

(Aoyama-Kimura-Takahashi-Toyoda [8], Xu [9])

Let \(\{s_{n}\}\) be a sequence of nonnegative real numbers, \(\{\alpha_{n}\} \) be a sequence in \([0, 1]\) with \(\sum_{n = 1}^{\infty}\alpha_{n} = \infty\), \(\{u_{n}\}\) be a sequence of nonnegative real numbers with \(\sum_{n = 1}^{\infty} u_{n} < \infty\), and \(\{t_{n}\}\) be a sequence of real numbers with \(\limsup_{n \to\infty} t_{n} \leq0\). Suppose that
$$s_{n + 1} \leq(1 - \alpha_{n})s_{n} + \alpha_{n} t_{n} + u_{n} \quad \textit {for all } n \in\mathbb{N}. $$
Then \(\lim_{n \to\infty}s_{n} = 0\).
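The following quick numerical check (our own illustration; the particular sequences are arbitrary choices satisfying the hypotheses) runs one concrete instance of the recursion in Lemma 3.1 with equality in place of the inequality.

```python
# One instance of the recursion of Lemma 3.1:
#   alpha_n = 1/(n+1)  (sum diverges),
#   t_n     = 1/n      (limsup t_n <= 0),
#   u_n     = 1/n^2    (summable).
s = 10.0
for n in range(1, 300001):
    alpha = 1.0 / (n + 1)
    t_n = 1.0 / n
    u_n = 1.0 / n ** 2
    s = (1 - alpha) * s + alpha * t_n + u_n

# Lemma 3.1 predicts s_n -> 0; here s_n decays roughly like (log n)/n.
```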

Lemma 3.2

Let \(\{a_{n}\}\) be a sequence of real numbers with \(\sum_{n = 1}^{\infty}\lvert a_{n + 1} - a_{n} \rvert < \infty\). Then \(\{a_{n}\}\) is convergent.

Lemma 3.3

(Saejung [4])

Let X be a Hadamard space and \(T, S : X \to X\) be nonexpansive mappings with \(F(T) \cap F(S) \neq\emptyset\). For any \(\beta\in\, ] 0, 1 [ \), define a mapping U by \(Ux = \beta Tx \oplus(1 - \beta)Sx\) for all \(x \in X\). Then U is a nonexpansive mapping such that \(F(U) = F(T) \cap F(S)\).
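A small illustration of Lemma 3.3 in the Euclidean plane (our own example; the two half-planes and β are arbitrary choices): T and S are metric projections, and the Picard iterates of U locate a common fixed point.

```python
# Lemma 3.3 in the Euclidean plane: T, S are the metric projections onto
# the half-planes A = {x <= 1} and B = {y <= 0}.
def T(p):
    return (min(p[0], 1.0), p[1])

def S(p):
    return (p[0], min(p[1], 0.0))

beta = 0.3
def U(p):           # Up = beta*Tp (+) (1 - beta)*Sp
    a, b = T(p), S(p)
    return (beta * a[0] + (1 - beta) * b[0],
            beta * a[1] + (1 - beta) * b[1])

# Metric projections are averaged mappings, so the Picard iterates of U
# converge; by the lemma the limit must lie in F(T) intersect F(S) = A ∩ B.
p = (4.0, 3.0)
for _ in range(500):
    p = U(p)
```

Starting from (4, 3), the iterates settle at a point that is fixed by U and belongs to both half-planes.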

Lemma 3.4

(He-Fang-López-Li [10])

Let X be a Hadamard space and \(\{x_{n}\}\) be a bounded sequence of X. If \(\{x_{n}\}\) is Δ-convergent to \(x \in X\), then
$$d(u, x)^{2} \leq\liminf_{n \to\infty}d(u, x_{n})^{2} $$
for all \(u \in X\).

Lemma 3.5

(Kirk-Panyanak [7])

Let X be a Hadamard space and \(T : X \to X\) be a nonexpansive mapping. Suppose \(\{x_{n}\} \subset X\) is Δ-convergent to \(x \in X\). If \(d(x_{n}, Tx_{n}) \to0\), then x is an element of \(F(T)\).

Lemma 3.6

(Mayer [11])

Let X be a Hadamard space and \(g : X \to\mathbb{R}\cup\{+\infty\}\). If g is convex and lower semicontinuous, then g is bounded from below on bounded subsets of X. Furthermore, g attains its infimum on nonempty bounded convex closed subsets of X. The resulting minimizer is unique if g is strictly convex.

Using Lemma 3.6, we get the following result.

Corollary 3.7

Let X be a Hadamard space. For any \(u_{1}, u_{2}, \ldots, u_{n} \in X\) and \(\beta^{1}, \beta^{2}, \ldots, \beta^{n} \in\, ] 0, 1 [ \) with \(\sum_{i = 1}^{n} \beta^{i} = 1\), define a function \(g : X \to\mathbb{R}\) by
$$g(x) = \sum_{i = 1}^{n}\beta^{i}d(u_{i}, x)^{2} $$
for all \(x \in X\). Then g attains its infimum on a nonempty closed convex subset C of X, and its minimizer is unique.

Proof

Let p be an element of X. Since \(g(x) \to\infty\) as \(d(x, p) \to \infty\), minimizing g over C reduces to minimizing it over the intersection of C with a sufficiently large closed ball; that is, there exists a nonempty bounded closed convex set D such that the minimizers of g on C and on D coincide.

For \(x, y \in X\) with \(x \neq y\) and \(t \in\, ] 0, 1 [ \), we have
$$\begin{aligned} g\bigl(tx \oplus(1 - t)y\bigr) &= \sum_{i = 1}^{n} \beta^{i}d\bigl(u_{i}, tx \oplus(1 - t)y\bigr)^{2} \\ &\leq\sum_{i = 1}^{n}\beta^{i} \bigl(td(u_{i}, x)^{2} + (1 - t)d(u_{i}, y)^{2} - t(1 - t)d(x, y)^{2}\bigr) \\ &= tg(x) + (1 - t)g(y) - t(1 - t)d(x, y)^{2} \\ &< tg(x) + (1 - t)g(y). \end{aligned}$$
Thus g is strictly convex, and by Lemma 3.6 we get the desired result. □
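In a Hilbert space, \(g(x) = \sum_{i}\beta^{i}d(u_{i}, x)^{2} = \|x - m\|^{2} + \mathrm{const}\) with \(m = \sum_{i}\beta^{i}u_{i}\), so the minimizer of g on a closed convex set C is the metric projection of m onto C. The sketch below (our own illustration; the points, weights, and the box C are arbitrary) verifies this in the plane.

```python
import random

# Minimizer of g(x) = sum_i beta^i |u_i - x|^2 on C = [0, 1]^2:
# the coordinatewise projection of the weighted mean m onto the box.
us = [(2.0, -1.0), (0.5, 3.0), (-1.0, 0.5)]
betas = [0.2, 0.5, 0.3]          # positive weights summing to 1

def g(x):
    return sum(b * ((u[0] - x[0]) ** 2 + (u[1] - x[1]) ** 2)
               for b, u in zip(betas, us))

m = tuple(sum(b * u[k] for b, u in zip(betas, us)) for k in range(2))
x0 = tuple(min(max(c, 0.0), 1.0) for c in m)   # projection of m onto the box

random.seed(1)
for _ in range(2000):
    x = (random.uniform(0, 1), random.uniform(0, 1))
    assert g(x0) <= g(x) + 1e-12   # no sampled point of C beats x0
```

In a general Hadamard space no such closed-form reduction is available, which is why Corollary 3.7 is proved via convexity and Lemma 3.6.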

Now we can obtain the main theorem for a finite family of nonexpansive mappings with multiple anchor points.

Theorem 3.8

Let X be a Hadamard space and \(T_{1}, T_{2}, \ldots, T_{r} : X \to X\) be nonexpansive mappings with \(F = \bigcap_{i = 1}^{r} F(T_{i}) \neq \emptyset\). Let \(u_{1}, u_{2}, \ldots, u_{r}\), \(x_{1}\) be arbitrary points in X and let \(\{x_{n}\}\) be iteratively generated by
$$\left \{ \textstyle\begin{array}{l} t^{i}_{n} = \alpha_{n}u_{i} \oplus(1 - \alpha_{n})T_{i}x_{n}, \quad i = 1, 2, \ldots, r, \\ y^{1}_{n} = t^{1}_{n}, \\ y^{j}_{n} = \beta^{j - 1}_{n}t^{j}_{n} \oplus(1 - \beta^{j - 1}_{n})y^{j - 1}_{n}, \quad j = 2, 3, \ldots, r, \\ x_{n + 1} = y^{r}_{n} \end{array}\displaystyle \right . $$
for all \(n \in\mathbb{N}\), where \(\{\alpha_{n}\}\) is a sequence in \(] 0, 1 [ \) such that
$$\begin{aligned} \begin{aligned} (\mathrm{i})&\quad \lim_{n \to\infty} \alpha_{n} = 0, \\ (\mathrm{ii})&\quad \sum^{\infty}_{n = 1} \alpha_{n} = \infty, \\ (\mathrm{iii})&\quad\sum^{\infty}_{n = 1} | \alpha_{n + 1} - \alpha _{n}| < \infty, \end{aligned} \end{aligned}$$
and, for all \(k = 1, 2, \ldots, r - 1\), \(\{\beta^{k}_{n}\}\) are sequences in \([a, b] \subset \, ] 0, 1 [ \) such that
$$\begin{aligned} (\mathrm{iv})&\quad \sum_{n = 1}^{\infty}\bigl| \beta^{k}_{n + 1} - \beta ^{k}_{n}\bigr| < \infty. \end{aligned}$$
Then \(\{x_{n}\}\) converges to \(x_{0} \in F\), which is the unique minimizer of \(g(x) = \sum_{i = 1}^{r} \gamma_{i}d(u_{i}, x)^{2}\) on F, where \(\gamma_{k} = \beta^{k - 1}\prod_{j = k}^{r - 1} (1 - \beta^{j})\) for \(k = 1, 2, \ldots, r - 1\) and \(\gamma_{r} = \beta^{r - 1}\), with \(\beta^{0} = 1\) and \(\beta^{i} = \lim_{n \to\infty}\beta^{i}_{n}\) for \(i = 1, 2, \ldots, r - 1\).

For the sake of simplicity, we prove only the case of three mappings, that is, the following theorem. The proof of Theorem 3.8 is omitted, as it can be obtained by similar arguments.

Theorem 3.9

Let X be a Hadamard space and \(R, S, T : X \to X\) be nonexpansive mappings with \(F = F(R) \cap F(S) \cap F(T) \neq\emptyset\). Let u, v, w, \(x_{1}\) be arbitrary points in X and let \(\{x_{n}\}\) be iteratively generated by
$$\left \{ \textstyle\begin{array}{l} r_{n} = \alpha_{n} u \oplus(1 - \alpha_{n})Rx_{n}, \\ s_{n} = \alpha_{n} v \oplus(1 - \alpha_{n})Sx_{n}, \\ t_{n} = \alpha_{n} w \oplus(1 - \alpha_{n})Tx_{n}, \\ x_{n + 1} = \beta_{n}r_{n} \oplus(1 - \beta_{n})(\gamma_{n}s_{n} \oplus(1 - \gamma_{n})t_{n}) \end{array}\displaystyle \right . $$
for all \(n \in\mathbb{N}\), where \(\{\alpha_{n}\}\) is a sequence in \(] 0, 1 [ \) such that
$$\begin{aligned} (\mathrm{i})&\quad \lim_{n \to\infty} \alpha_{n} = 0, \\ (\mathrm{ii})&\quad \sum^{\infty}_{n = 1} \alpha_{n} = \infty, \\ (\mathrm{iii})&\quad \sum^{\infty}_{n = 1} | \alpha_{n + 1} - \alpha _{n}| < \infty, \end{aligned}$$
and \(\{\beta_{n}\}\), \(\{\gamma_{n}\}\) are sequences in \([a, b] \subset \, ] 0, 1 [ \) such that
$$\begin{aligned} (\mathrm{iv})&\quad \sum_{n = 1}^{\infty}| \beta_{n + 1} - \beta_{n}| < \infty, \\ (\mathrm{v})& \quad \sum_{n = 1}^{\infty}|\gamma_{n + 1} - \gamma_{n}| < \infty. \end{aligned}$$
Then \(\{x_{n}\}\) converges to \(x_{0} \in F\) which is the unique minimizer of \(g(x) = \beta d(u, x)^{2} + (1 - \beta)(\gamma d(v, x)^{2} + (1 - \gamma)d(w, x)^{2})\) on F, where \(\beta= \lim_{n \to\infty}\beta_{n}\) and \(\gamma= \lim_{n \to\infty}\gamma_{n}\).

Proof

Let \(y_{n} = \gamma_{n}s_{n} \oplus(1 - \gamma_{n})t_{n}\) for all \(n \in \mathbb{N}\). We first show \(\{x_{n}\}\) is bounded. Let \(p \in F\). Then
$$\begin{aligned}& d(x_{n + 1}, p)^{2} \\& \quad = d\bigl(\beta_{n} r_{n} \oplus(1 - \beta_{n})y_{n}, p\bigr)^{2} \\& \quad \leq\beta_{n}d(r_{n}, p)^{2} + (1 - \beta_{n})d(y_{n}, p)^{2} \\& \quad \leq\beta_{n}\bigl(\alpha_{n}d(u, p)^{2} + (1 - \alpha_{n})d(Rx_{n}, p)^{2}\bigr) + (1 - \beta_{n}) \bigl(\gamma_{n}d(s_{n}, p)^{2} + (1 - \gamma_{n})d(t_{n}, p)^{2}\bigr) \\& \quad \leq\beta_{n}\bigl(\alpha_{n}d(u, p)^{2} + (1 - \alpha_{n})d(x_{n}, p)^{2}\bigr) \\& \qquad {} + (1 - \beta_{n})\gamma_{n}\bigl( \alpha_{n}d(v, p)^{2} + (1 - \alpha _{n})d(x_{n}, p)^{2}\bigr) \\& \qquad {} + (1 - \beta_{n}) (1 - \gamma_{n}) \bigl( \alpha_{n}d(w, p)^{2} + (1 - \alpha _{n})d(x_{n}, p)^{2}\bigr) \\& \quad = \alpha_{n}\bigl(\beta_{n}d(u, p)^{2} + (1 - \beta_{n}) \bigl(\gamma_{n}d(v, p)^{2} + (1 - \gamma_{n})d(w, p)^{2}\bigr)\bigr) + (1 - \alpha_{n})d(x_{n}, p)^{2}. \end{aligned}$$
Putting \(M = \max\{d(u, p)^{2}, d(v, p)^{2}, d(w, p)^{2}\}\), we have
$$d(x_{n + 1}, p)^{2} \leq\max\bigl\{ M, d(x_{n}, p)^{2}\bigr\} . $$
By induction, we get
$$d(x_{n + 1}, p)^{2} \leq\max\bigl\{ M, d(x_{1}, p)^{2}\bigr\} , $$
and hence \(\{x_{n}\}\) is bounded. Since R, S, and T are all nonexpansive, the sequences \(\{Rx_{n}\}\), \(\{Sx_{n}\}\), \(\{Tx_{n}\}\) are bounded. Moreover, by Lemma 2.2, \(\{r_{n}\}\), \(\{s_{n}\}\), \(\{t_{n}\}\), and \(\{y_{n}\}\) are also bounded.
Next, we show that \(d(x_{n + 1}, x_{n}) \to0\). Using the \(\operatorname{CAT}(0)\) inequality, we obtain
$$\begin{aligned} d(r_{n}, r_{n - 1}) =& d\bigl(\alpha_{n}u \oplus(1 - \alpha_{n})Rx_{n}, \alpha _{n - 1}u \oplus(1 - \alpha_{n - 1})Rx_{n - 1}\bigr) \\ \leq& d\bigl(\alpha_{n}u \oplus(1 - \alpha_{n})Rx_{n}, \alpha_{n}u \oplus(1 - \alpha_{n})Rx_{n - 1}\bigr) \\ &{} + d\bigl(\alpha_{n}u \oplus(1 - \alpha_{n})Rx_{n - 1}, \alpha_{n - 1}u \oplus(1 - \alpha_{n - 1})Rx_{n - 1}\bigr) \\ \leq&(1 - \alpha_{n})d(Rx_{n}, Rx_{n - 1}) + \lvert \alpha_{n} - \alpha_{n - 1} \rvert d(u, Rx_{n - 1}) \\ \leq&(1 - \alpha_{n})d(x_{n}, x_{n - 1}) + \lvert \alpha_{n} - \alpha_{n - 1} \rvert d(u, Rx_{n - 1}). \end{aligned}$$
From this result, we also get
$$\begin{aligned} \begin{aligned} &d(y_{n}, y_{n - 1}) \\ &\quad = d\bigl(\gamma_{n}s_{n} \oplus(1 - \gamma_{n})t_{n}, \gamma_{n - 1}s_{n - 1} \oplus(1 - \gamma_{n - 1})t_{n - 1}\bigr) \\ &\quad \leq d\bigl(\gamma_{n}s_{n} \oplus(1 - \gamma_{n})t_{n}, \gamma_{n}s_{n - 1} \oplus(1 - \gamma_{n})t_{n}\bigr) \\ &\qquad {} + d\bigl(\gamma_{n}s_{n - 1} \oplus(1 - \gamma_{n})t_{n}, \gamma_{n}s_{n - 1} \oplus(1 - \gamma_{n})t_{n - 1}\bigr) \\ &\qquad {} + d\bigl(\gamma_{n}s_{n - 1} \oplus(1 - \gamma_{n})t_{n - 1}, \gamma _{n - 1}s_{n - 1} \oplus(1 - \gamma_{n - 1})t_{n - 1}\bigr) \\ &\quad \leq\gamma_{n}d(s_{n}, s_{n - 1}) + (1 - \gamma_{n})d(t_{n}, t_{n - 1}) + \lvert \gamma_{n} - \gamma_{n - 1} \rvert d(s_{n - 1}, t_{n - 1}) \\ &\quad \leq\gamma_{n}\bigl((1 - \alpha_{n})d(x_{n}, x_{n - 1}) + \lvert\alpha _{n} - \alpha_{n - 1} \rvert d(v, Sx_{n - 1})\bigr) \\ &\qquad {} + (1 - \gamma_{n}) \bigl((1 - \alpha_{n})d(x_{n}, x_{n - 1}) + \lvert\alpha_{n} - \alpha_{n - 1} \rvert d(w, Tx_{n - 1})\bigr) \\ &\qquad {}+ \lvert\gamma_{n} - \gamma_{n - 1} \rvert d(s_{n - 1}, t_{n - 1}) \\ &\quad \leq(1 - \alpha_{n})d(x_{n}, x_{n - 1}) \\ &\qquad {} + \lvert\alpha_{n} - \alpha_{n - 1} \rvert\bigl(d(v, Sx_{n - 1}) + d(w, Tx_{n - 1})\bigr) + \lvert\gamma_{n} - \gamma_{n - 1} \rvert d(s_{n - 1}, t_{n - 1}). \end{aligned} \end{aligned}$$
Therefore, we get
$$\begin{aligned} d(x_{n + 1}, x_{n}) =& d\bigl(\beta_{n}r_{n} \oplus(1 - \beta_{n})y_{n}, \beta_{n - 1}r_{n - 1} \oplus(1 - \beta_{n - 1})y_{n - 1}\bigr) \\ \leq& d\bigl(\beta_{n}r_{n} \oplus(1 - \beta_{n})y_{n}, \beta_{n}r_{n - 1} \oplus (1 - \beta_{n})y_{n}\bigr) \\ &{} + d\bigl(\beta_{n}r_{n - 1} \oplus(1 - \beta_{n})y_{n}, \beta_{n}r_{n - 1} \oplus(1 - \beta_{n})y_{n - 1}\bigr) \\ &{} + d\bigl(\beta_{n}r_{n - 1} \oplus(1 - \beta_{n})y_{n - 1}, \beta_{n - 1}r_{n - 1} \oplus(1 - \beta_{n - 1})y_{n - 1}\bigr) \\ \leq&\beta_{n}d(r_{n}, r_{n - 1}) + (1 - \beta_{n})d(y_{n}, y_{n - 1}) + \lvert \beta_{n} - \beta_{n - 1} \rvert d(r_{n - 1}, y_{n - 1}) \\ \leq&(1 - \alpha_{n})d(x_{n}, x_{n - 1}) \\ &{} + \lvert\alpha_{n} - \alpha_{n - 1} \rvert\bigl(d(u, Rx_{n - 1}) + d(v, Sx_{n - 1}) + d(w, Tx_{n - 1})\bigr) \\ &{} + \lvert\gamma_{n} - \gamma_{n - 1} \rvert d(s_{n - 1}, t_{n - 1}) + \lvert\beta_{n} - \beta_{n - 1} \rvert d(r_{n - 1}, y_{n - 1}). \end{aligned}$$
Using conditions (ii), (iii), (iv), (v), and Lemma 3.1, we have
$$d(x_{n + 1}, x_{n}) \to0. $$

From conditions (iv), (v) and Lemma 3.2, there exist \(\beta, \gamma\in\, ] 0, 1 [ \) such that \(\beta_{n} \to\beta\) and \(\gamma _{n} \to\gamma\). We put \(Ux = \beta Rx \oplus(1 - \beta)Qx\) for all \(x \in X\), where \(Qx = \gamma Sx \oplus(1 - \gamma)Tx\). From Lemma 3.3, we have that the mapping Q is nonexpansive with \(F(Q) = F(S) \cap F(T)\). Similarly, we have that U is nonexpansive with \(F(U) = F(R) \cap F(Q) = F\).

We show that \(d(Ux_{n}, x_{n}) \to0\). Let \(q_{n} = \alpha_{n}u \oplus(1 - \alpha_{n})Qx_{n}\). Then, using the \(\operatorname{CAT}(0)\) inequality, we have
$$\begin{aligned}& d\bigl(Ux_{n}, \beta_{n}r_{n} \oplus(1 - \beta_{n})q_{n}\bigr) \\& \quad = d\bigl(\beta Rx_{n} \oplus(1 - \beta)Qx_{n}, \beta_{n}\bigl(\alpha_{n}u \oplus(1 - \alpha_{n})Rx_{n} \bigr) \oplus(1 - \beta_{n}) \bigl(\alpha_{n}u \oplus(1 - \alpha _{n})Qx_{n}\bigr)\bigr) \\& \quad \leq\alpha_{n}\beta_{n}d(Rx_{n}, u) + \alpha_{n}(1 - \beta_{n})d(Qx_{n}, u) + \lvert\beta- \beta_{n} \rvert d(Rx_{n}, Qx_{n}). \end{aligned}$$
Since \(\{\beta_{n}\}\) converges to β, by condition (i), we get
$$d\bigl(Ux_{n}, \beta_{n}r_{n} \oplus(1 - \beta_{n})q_{n}\bigr) \to0. $$
Put \(t'_{n} = \alpha_{n}v \oplus(1 - \alpha_{n})Tx_{n}\). Then, using this result, we have
$$\begin{aligned} d(Qx_{n}, y_{n}) =& d\bigl(\gamma Sx_{n} \oplus(1 - \gamma)Tx_{n}, \gamma_{n}s_{n} \oplus(1 - \gamma_{n})t_{n}\bigr) \\ \leq& d\bigl(\gamma Sx_{n} \oplus(1 - \gamma)Tx_{n}, \gamma_{n}s_{n} \oplus(1 - \gamma_{n})t'_{n} \bigr) \\ &{} + d\bigl(\gamma_{n}s_{n} \oplus(1 - \gamma_{n})t'_{n}, \gamma_{n}s_{n} \oplus (1 - \gamma_{n})t_{n}\bigr) \\ \leq&\alpha_{n}\gamma_{n}d(Sx_{n}, v) + \alpha_{n}(1 - \gamma_{n})d(Tx_{n}, v) \\ &{} + \lvert\gamma- \gamma_{n} \rvert d(Sx_{n}, Tx_{n}) + (1 - \gamma _{n})d\bigl(t'_{n}, t_{n}\bigr). \end{aligned}$$
By the \(\operatorname{CAT}(0)\) inequality, we get
$$\begin{aligned} \begin{aligned} d\bigl(t'_{n}, t_{n}\bigr) &= d\bigl( \alpha_{n}v \oplus(1 - \alpha_{n})Tx_{n}, \alpha_{n}w \oplus(1 - \alpha_{n})Tx_{n}\bigr) \\ &\leq\alpha_{n} d(v, w) \to0. \end{aligned} \end{aligned}$$
Since \(\gamma_{n} \to\gamma\), we have
$$d(Qx_{n}, y_{n}) \to0. $$
Therefore, by condition (i), we have
$$\begin{aligned} d\bigl(\beta_{n}r_{n} \oplus(1 - \beta_{n})q_{n}, x_{n + 1}\bigr) &= d\bigl(\beta_{n}r_{n} \oplus(1 - \beta_{n})q_{n}, \beta_{n}r_{n} \oplus(1 - \beta_{n})y_{n}\bigr) \\ &\leq(1 - \beta_{n})d(q_{n}, y_{n}) \\ &\leq d\bigl(\alpha_{n}u \oplus(1 - \alpha_{n})Qx_{n}, Qx_{n}\bigr) + d(Qx_{n}, y_{n}) \\ &= \alpha_{n}d(u, Qx_{n}) + d(Qx_{n}, y_{n}) \\ &\to0. \end{aligned}$$
Consequently, we get
$$\begin{aligned} d(Ux_{n}, x_{n}) &\leq d\bigl(Ux_{n}, \beta_{n}r_{n} \oplus(1 - \beta_{n})q_{n} \bigr) + d\bigl(\beta_{n}r_{n} \oplus(1 - \beta_{n})q_{n}, x_{n + 1}\bigr) + d(x_{n + 1}, x_{n}) \\ &\to0. \end{aligned}$$
Suppose p is an element of F. Then we get
$$\begin{aligned} d(p, Ux_{n})^{2} &= d\bigl(p, \beta Rx_{n} \oplus(1 - \beta)Qx_{n}\bigr)^{2} \\ &\leq\beta d(p, Rx_{n})^{2} + (1 - \beta)d(p, Qx_{n})^{2} - \beta(1 - \beta )d(Rx_{n}, Qx_{n})^{2} \\ &\leq d(p, x_{n})^{2} - \beta(1 - \beta)d(Rx_{n}, Qx_{n})^{2}. \end{aligned}$$
Thus, we have
$$\begin{aligned} \beta(1 - \beta)d(Rx_{n}, Qx_{n})^{2} &\leq d(p, x_{n})^{2} - d(p, Ux_{n})^{2} \\ &\leq\bigl(d(p, x_{n}) + d(p, Ux_{n})\bigr)d(x_{n}, Ux_{n}) \\ & \to0, \end{aligned}$$
and hence we get
$$d(Rx_{n}, Qx_{n}) \to0. $$
Furthermore, we obtain that
$$\begin{aligned} d(x_{n}, Qx_{n}) &\leq d(x_{n}, Ux_{n}) + d(Ux_{n}, Qx_{n}) \\ &= d(x_{n}, Ux_{n}) + \beta d(Rx_{n}, Qx_{n}) \\ &\to0. \end{aligned}$$
By the same procedure, it follows that
$$d(p, Qx_{n})^{2} \leq d(p, x_{n})^{2} - \gamma(1 - \gamma)d(Sx_{n}, Tx_{n})^{2}, $$
and hence we get
$$d(Sx_{n}, Tx_{n}) \to0. $$
Therefore, we have that
$$\begin{aligned}& d(Rx_{n}, x_{n}) \leq d(Rx_{n}, Ux_{n}) + d(Ux_{n}, x_{n}) = (1 - \beta)d(Rx_{n}, Qx_{n}) + d(Ux_{n}, x_{n}) \to0, \\& d(Sx_{n}, x_{n}) \leq d(Sx_{n}, Qx_{n}) + d(Qx_{n}, x_{n}) = (1 - \gamma)d(Sx_{n}, Tx_{n}) + d(Qx_{n}, x_{n}) \to0, \\& d(Tx_{n}, x_{n}) \leq d(Tx_{n}, Qx_{n}) + d(Qx_{n}, x_{n}) = \gamma d(Sx_{n}, Tx_{n}) + d(Qx_{n}, x_{n}) \to0. \end{aligned}$$
Define a function g on X by \(g(x) = \beta d(u, x)^{2} + (1 - \beta )h(x)\) for all \(x \in X\), where \(h(x) = \gamma d(v, x)^{2} + (1 - \gamma )d(w, x)^{2}\). From Corollary 3.7, there exists \(x_{0} \in F\) which is the unique minimizer of g on F. Then we have
$$\begin{aligned}& d(x_{n + 1}, x_{0})^{2} \\& \quad = d\bigl(\beta_{n}r_{n} \oplus(1 - \beta_{n})y_{n}, x_{0}\bigr)^{2} \\& \quad \leq\beta_{n}d(r_{n}, x_{0})^{2} + (1 - \beta_{n}) \bigl(\gamma_{n}d(s_{n}, x_{0})^{2} + (1 - \gamma_{n})d(t_{n}, x_{0})^{2}\bigr) \\& \quad \leq\beta_{n}\bigl(\alpha_{n}d(u, x_{0})^{2} + (1 - \alpha_{n})d(Rx_{n}, x_{0})^{2} - \alpha_{n}(1 - \alpha_{n})d(u, Rx_{n})^{2} \bigr) \\& \qquad {} + (1 - \beta_{n})\gamma_{n}\bigl( \alpha_{n}d(v, x_{0})^{2} + (1 - \alpha _{n})d(Sx_{n}, x_{0})^{2} - \alpha_{n}(1 - \alpha_{n})d(v, Sx_{n})^{2} \bigr) \\& \qquad {} + (1 - \beta_{n}) (1 - \gamma_{n}) \bigl( \alpha_{n}d(w, x_{0})^{2} + (1 - \alpha_{n})d(Tx_{n}, x_{0})^{2} - \alpha_{n}(1 - \alpha_{n})d(w, Tx_{n})^{2} \bigr) \\& \quad \leq(1 - \alpha_{n})d(x_{n}, x_{0})^{2} \\& \qquad {} + \alpha_{n}\bigl(\beta_{n}d(u, x_{0})^{2} + (1 - \beta_{n}) \bigl( \gamma_{n}d(v, x_{0})^{2} + (1 - \gamma_{n})d(w, x_{0})^{2}\bigr)\bigr) \\& \qquad {} - \alpha_{n}(1 - \alpha_{n}) \bigl( \beta_{n}d(u, Rx_{n})^{2} + (1 - \beta _{n}) \bigl(\gamma_{n}d(v, Sx_{n})^{2} + (1 - \gamma_{n})d(w, Tx_{n})^{2}\bigr)\bigr). \end{aligned}$$
Put
$$\begin{aligned} c_{n} = &\beta_{n}d(u, x_{0})^{2} + (1 - \beta_{n}) \bigl(\gamma_{n}d(v, x_{0})^{2} + (1 - \gamma_{n})d(w, x_{0})^{2}\bigr) \\ &{} - (1 - \alpha_{n}) \bigl(\beta_{n}d(u, Rx_{n})^{2} + (1 - \beta_{n}) \bigl(\gamma _{n}d(v, Sx_{n})^{2} + (1 - \gamma_{n})d(w, Tx_{n})^{2}\bigr)\bigr). \end{aligned}$$
Since \(\beta_{n} \to\beta\) and \(\gamma_{n} \to\gamma\), we get
$$\bigl\lvert \beta_{n}d(u, x_{0})^{2} + (1 - \beta_{n}) \bigl(\gamma_{n}d(v, x_{0})^{2} + (1 - \gamma_{n})d(w, x_{0})^{2}\bigr) - g(x_{0}) \bigr\rvert \to0. $$
Moreover, since \(d(Rx_{n}, x_{n})\), \(d(Sx_{n}, x_{n})\), and \(d(Tx_{n}, x_{n})\) converge to 0, we also get
$$\bigl\lvert \beta_{n}d(u, Rx_{n})^{2} + (1 - \beta_{n}) \bigl(\gamma_{n}d(v, Sx_{n})^{2} + (1 - \gamma_{n})d(w, Tx_{n})^{2}\bigr) - g(x_{n}) \bigr\rvert \to0. $$
Therefore, we obtain that
$$\begin{aligned}& \bigl\lvert c_{n} - \bigl(g(x_{0}) - g(x_{n}) \bigr) \bigr\rvert \\& \quad \leq \bigl\lvert \beta_{n}d(u, x_{0})^{2} + (1 - \beta_{n}) \bigl(\gamma_{n}d(v, x_{0})^{2} + (1 - \gamma_{n})d(w, x_{0})^{2}\bigr) - g(x_{0}) \bigr\rvert \\& \qquad {} + \bigl\lvert \beta_{n}d(u, Rx_{n})^{2} + (1 - \beta_{n}) \bigl(\gamma _{n}d(v, Sx_{n})^{2} + (1 - \gamma_{n})d(w, Tx_{n})^{2}\bigr) - g(x_{n}) \bigr\rvert \\& \qquad {} + \alpha_{n} \bigl\lvert \beta_{n}d(u, Rx_{n})^{2} + (1 - \beta _{n}) \bigl(\gamma _{n}d(v, Sx_{n})^{2} + (1 - \gamma_{n})d(w, Tx_{n})^{2}\bigr) \bigr\rvert \\& \quad \to0, \end{aligned}$$
and hence
$$\limsup_{n \to\infty}c_{n} = \limsup_{n \to\infty} \bigl(g(x_{0}) - g(x_{n})\bigr). $$
Since \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) such that
$$\limsup_{n \to\infty}\bigl(g(x_{0}) - g(x_{n}) \bigr) = \lim_{i \to\infty }\bigl(g(x_{0}) - g(x_{n_{i}})\bigr), $$
and \(\{x_{n_{i}}\}\) is Δ-convergent to some \(x \in X\). From Lemma 3.4, we have that
$$\begin{aligned}& \lim_{i \to\infty}\bigl(g(x_{0}) - g(x_{n_{i}})\bigr) \\& \quad = g(x_{0}) \\& \qquad {}- \Bigl(\beta\liminf_{i \to\infty}d(u, x_{n_{i}})^{2} + (1 - \beta ) \Bigl(\gamma\liminf _{i \to\infty}d(v, x_{n_{i}})^{2} + (1 - \gamma ) \liminf_{i \to\infty}d(w, x_{n_{i}})^{2}\Bigr)\Bigr) \\& \quad \leq g(x_{0}) - \bigl(\beta d(u, x)^{2} + (1 - \beta) \bigl(\gamma d(v, x)^{2} + (1 - \gamma)d(w, x)^{2}\bigr)\bigr) \\& \quad = g(x_{0}) - g(x). \end{aligned}$$
Since \(d(Ux_{n}, x_{n}) \to0\), x is an element of F by Lemma 3.5. Moreover, since \(x_{0}\) is a minimizer of g on F, we have that
$$\limsup_{n \to\infty}c_{n} \leq g(x_{0}) - g(x) \leq0. $$
Hence, by Lemma 3.1, \(\{x_{n}\}\) converges to \(x_{0}\) in F. □
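As a numerical illustration of Theorem 3.9 (our own, not part of the paper), take the Euclidean plane with R, S the projections onto two half-planes, T the projection onto the closed unit disk, constant \(\beta_{n}\), \(\gamma_{n}\) (which trivially satisfy (iv) and (v)), and \(\alpha_{n} = 1/(n+1)\); the anchors u, v, w are arbitrary choices.

```python
# Theorem 3.9 in the Euclidean plane: F = {x <= 0} ∩ {y <= 0} ∩ (unit disk).
def R(p):
    return (min(p[0], 0.0), p[1])

def S(p):
    return (p[0], min(p[1], 0.0))

def T(p):
    norm = (p[0] ** 2 + p[1] ** 2) ** 0.5
    return p if norm <= 1.0 else (p[0] / norm, p[1] / norm)

def comb(t, a, b):      # t*a (+) (1 - t)*b
    return (t * a[0] + (1 - t) * b[0], t * a[1] + (1 - t) * b[1])

u, v, w = (-2.0, 1.0), (1.0, -3.0), (1.0, 1.0)   # three anchor points
beta, gamma = 0.5, 0.5                            # constant beta_n, gamma_n
x = (3.0, 2.0)                                    # starting point x_1
for n in range(1, 200001):
    alpha = 1.0 / (n + 1)
    r_n = comb(alpha, u, R(x))
    s_n = comb(alpha, v, S(x))
    t_n = comb(alpha, w, T(x))
    x = comb(beta, r_n, comb(gamma, s_n, t_n))

# Here g has weighted mean m = 0.5*u + 0.25*v + 0.25*w = (-0.5, 0.0), which
# already lies in F; so the minimizer of g on F is m itself, and the
# iterates should approach (-0.5, 0.0).
```

The observed limit agrees with the minimizer of g on F asserted by the theorem.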

For any point in a Hadamard space and any nonempty closed convex subset, there exists a unique point of the subset nearest to the given point. Thus, we obtain the following corollary.

Corollary 3.10

Let X be a Hadamard space and \(T_{1}, T_{2}, \ldots, T_{r} : X \to X\) be nonexpansive mappings with \(F = \bigcap_{i = 1}^{r} F(T_{i}) \neq \emptyset\). Suppose u, \(x_{1}\) are arbitrary points in X and \(\{x_{n}\}\) is iteratively generated by
$$\left \{ \textstyle\begin{array}{l} t^{i}_{n} = \alpha_{n}u \oplus(1 - \alpha_{n})T_{i}x_{n}, \quad i = 1, 2, \ldots , r, \\ y^{1}_{n} = t^{1}_{n}, \\ y^{j}_{n} = \beta^{j - 1}_{n}t^{j}_{n} \oplus(1 - \beta^{j - 1}_{n})y^{j - 1}_{n}, \quad j = 2, 3, \ldots, r, \\ x_{n + 1} = y^{r}_{n} \end{array}\displaystyle \right . $$
for all \(n \in\mathbb{N}\), where \(\{\alpha_{n}\}\) is a sequence in \(] 0, 1 [ \) such that
$$\begin{aligned} (\mathrm{i})&\quad \lim_{n \to\infty} \alpha_{n} = 0, \\ (\mathrm{ii})&\quad\sum^{\infty}_{n = 1} \alpha_{n} = \infty, \\ (\mathrm{iii})&\quad\sum^{\infty}_{n = 1} | \alpha_{n + 1} - \alpha _{n}| < \infty, \end{aligned}$$
and, for all \(k = 1, 2, \ldots, r - 1\), \(\{\beta^{k}_{n}\}\) are sequences in \([a, b] \subset \, ] 0, 1 [ \) such that
$$\begin{aligned} (\mathrm{iv})&\quad \sum_{n = 1}^{\infty}\bigl\vert \beta^{k}_{n + 1} - \beta ^{k}_{n} \bigr\vert < \infty. \end{aligned}$$
Then \(\{x_{n}\}\) converges to \(x_{0}\) in F which is the nearest point of F to u.

Proof

Let \(x_{0} \in F\) be the nearest point to u. Then we have that
$$d(u, x_{0})^{2} = \inf_{x \in F}d(u, x)^{2}. $$
From Theorem 3.8, \(\{x_{n}\}\) converges to the minimizer of \(g(x) = d(u, x)^{2}\) on F. Therefore, \(\{x_{n}\}\) is convergent to \(x_{0}\). □

Remark

It will be interesting to consider similar results for an amenable semigroup of nonexpansive mappings using asymptotic invariant nets as in [12] for Hadamard spaces.

Declarations

Acknowledgements

The authors would like to thank the anonymous referees for their valuable comments and suggestions.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1) Department of Information Science, Toho University

References

  1. Wittmann, R: Approximation of fixed points of nonexpansive mappings. Arch. Math. (Basel) 58, 486-491 (1992)
  2. Shimizu, T, Takahashi, W: Strong convergence to common fixed points of families of nonexpansive mappings. J. Math. Anal. Appl. 211(1), 71-83 (1997)
  3. Kimura, Y, Takahashi, W, Toyoda, M: Convergence to common fixed points of a finite family of nonexpansive mappings. Arch. Math. (Basel) 84, 350-363 (2005)
  4. Saejung, S: Halpern’s iteration in \(\operatorname{CAT}(0)\) spaces. Fixed Point Theory Appl. 2010, Article ID 471781 (2010)
  5. Bridson, MR, Haefliger, A: Metric Spaces of Non-positive Curvature. Grundlehren der Mathematischen Wissenschaften, vol. 319. Springer, Berlin (1999)
  6. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
  7. Kirk, WA, Panyanak, B: A concept of convergence in geodesic spaces. Nonlinear Anal., Theory Methods Appl. 68(12), 3689-3696 (2008)
  8. Aoyama, K, Kimura, Y, Takahashi, W, Toyoda, M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 67, 2350-2360 (2007)
  9. Xu, H-K: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659-678 (2003)
  10. He, JR, Fang, DH, López, G, Li, C: Mann’s algorithm for nonexpansive mappings in \(\operatorname{CAT}(\kappa)\) spaces. Nonlinear Anal. 75(2), 445-452 (2012)
  11. Mayer, UF: Gradient flows on nonpositively curved metric spaces and harmonic maps. Commun. Anal. Geom. 6(2), 199-253 (1998)
  12. Lau, AT, Shioji, N, Takahashi, W: Existence of nonexpansive retractions for amenable semigroups of nonexpansive mappings and nonlinear ergodic theorems in Banach spaces. J. Funct. Anal. 161, 62-75 (1999)

Copyright

© Kimura and Wada 2015