Open Access

Strong convergence theorem for strict pseudo-contractions in Hilbert spaces

Journal of Inequalities and Applications 2016, 2016:134

https://doi.org/10.1186/s13660-016-1072-6

Received: 12 April 2016

Accepted: 19 April 2016

Published: 6 May 2016

Abstract

In this paper, inspired by Hussain et al. (Fixed Point Theory Appl. 2015:17, 2015), we study a modified Mann method to strongly approximate fixed points of strict pseudo-contractive mappings. In (Hussain et al. in Fixed Point Theory Appl. 2015:17, 2015) it is shown that the same algorithm converges strongly to a fixed point of a nonexpansive mapping under suitable hypotheses on the coefficients. Here the assumptions on the coefficients are different, as are the techniques of proof.

Keywords

Mann’s iterations; strict pseudo-contractive mappings; variational inequalities; Maingé’s lemma

MSC

47J20; 47J25; 49J40; 65J15

1 Introduction

Let H be a real Hilbert space with the inner product \(\langle\cdot ,\cdot\rangle\), which induces the norm \(\|\cdot\|\).

Let C be a nonempty, closed, and convex subset of H. Let T be a nonlinear mapping of C into itself; we denote with \(\operatorname{Fix}(T)\) the set of fixed points of T, that is, \(\operatorname{Fix}(T)=\{z\in C: Tz=z\}\).

We recall that a mapping \(T:C\rightarrow C\) is said to be k-strict pseudo-contractive (in the sense of Browder-Petryshyn) if there exists \(k\in[0,1)\) such that
$$ \Vert Tx-Ty\Vert ^{2}\leq \Vert x-y\Vert ^{2}+k\bigl\Vert (I-T)x-(I-T)y\bigr\Vert ^{2}, \quad \forall x,y \in C. $$
(1.1)
Note that the class of strict pseudo-contractions includes the class of nonexpansive mappings, which are mappings T on C such that
$$ \|Tx-Ty\|\leq\|x-y\|, \quad \forall x,y \in C. $$

The problem of finding fixed points of nonexpansive mappings via Mann’s algorithm [2] has been widely investigated in the literature (see e.g. [3]).

Mann’s algorithm generates, on initializing with an arbitrary \(x_{1}\in C\), a sequence according to the recursive formula
$$ x_{1}\in C, \quad x_{n+1}=\alpha_{n} x_{n}+(1-\alpha_{n})Tx_{n},\quad \forall n\geq1, $$
(1.2)
where \((\alpha_{n})_{n\in\mathbb{N}}\subset(0,1)\).
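The recursion (1.2) is straightforward to implement. The sketch below is a minimal numerical illustration in \(\mathbb{R}\); the map \(Tx=x/2+1\) is a hypothetical nonexpansive example (with \(\operatorname{Fix}(T)=\{2\}\)) chosen here for illustration, and the constant choice \(\alpha_{n}\equiv\frac{1}{2}\) satisfies \(\sum\alpha_{n}(1-\alpha_{n})=\infty\):

```python
def mann(T, x1, alphas):
    """Mann iteration (1.2): x_{n+1} = a_n * x_n + (1 - a_n) * T(x_n)."""
    x = x1
    for a in alphas:
        x = a * x + (1 - a) * T(x)
    return x

# Hypothetical nonexpansive map on R: |Tx - Ty| = |x - y| / 2, Fix(T) = {2}.
T = lambda x: 0.5 * x + 1.0
x = mann(T, x1=10.0, alphas=[0.5] * 200)   # x ends up very close to 2
```

In this one-dimensional toy case the iteration contracts the error by the factor \(3/4\) at each step, so convergence is in fact strong; the weak-only behavior discussed next appears only in infinite dimensions.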

If \(T:C\rightarrow C\) is a nonexpansive mapping with a fixed point in a closed and convex subset of a uniformly convex Banach space with a Fréchet differentiable norm, and if the control sequence \((\alpha _{n})_{n\in\mathbb{N}}\) is chosen so that \(\sum_{n=1}^{\infty}\alpha _{n}(1-\alpha _{n})=\infty\), then the sequence \((x_{n})\) generated by Mann’s algorithm converges weakly to a fixed point of T [3]. However, this convergence is in general not strong (see the counterexample in [4]).

On the other hand, iterative algorithms for strict pseudo-contractions are still less developed than those for nonexpansive mappings, despite the pioneering work of Browder and Petryshyn [5] dating from 1967. However, strict pseudo-contractions have many applications, due to their ties with inverse strongly monotone operators. Indeed, if A is an inverse strongly monotone operator, then \(T=I-A\) is a strict pseudo-contraction (for a suitable constant), and so we can recast the problem of finding zeros of A as a fixed point problem for T, and vice versa (see e.g. [6, 7]).

The Mann algorithm converges weakly also in the broader setting of strict pseudo-contractive mappings, a class which contains the nonexpansive mappings.

Theorem 1.1

(Marino and Xu [8], 2007, Mann’s method)

Let C be a closed and convex subset of a Hilbert space H. Let \(T:C\rightarrow C\) be a k-strict pseudo-contraction for some \(0\leq k<1\). Assume that T admits a fixed point in C. Let \((x_{n})\) be the sequence generated by \(x_{0}\in C\) and the Mann algorithm
$$ x_{n+1}=\alpha_{n}x_{n}+ (1-\alpha_{n} )Tx_{n}. $$
Assume that the control sequence \((\alpha_{n})\) is chosen so that \(k<\alpha_{n}<1\) for all n and
$$ \sum_{n=0}^{+\infty} (\alpha_{n}-k ) (1-\alpha _{n} )=+\infty. $$
Then \((x_{n})\) converges weakly to a fixed point of T.
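In \(\mathbb{R}\) weak and strong convergence coincide, so Theorem 1.1 is easy to illustrate numerically. The sketch below is an assumed toy instance: it uses the \(\frac{1}{3}\)-strict pseudo-contraction \(Tx=-2x\) (cf. Example 3.2 below) and the admissible constant choice \(\alpha_{n}\equiv 0.6\in(k,1)\), for which \(\sum(\alpha_{n}-k)(1-\alpha_{n})=+\infty\):

```python
T = lambda x: -2.0 * x          # 1/3-strict pseudo-contraction on R, Fix(T) = {0}
k = 1.0 / 3.0

x = 5.0                         # arbitrary starting point
for _ in range(100):
    a = 0.6                     # constant alpha_n in (k, 1)
    x = a * x + (1 - a) * T(x)  # each step multiplies x by 0.6 - 0.8 = -0.2
```

Note that T itself is Lipschitz with constant 2, so the plain Picard iteration \(x_{n+1}=Tx_{n}\) would diverge; the averaging with \(\alpha_{n}>k\) is what produces convergence here.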

It is not possible, in general, to obtain strong convergence, in view of the celebrated counterexample of Genel and Lindenstrauss [4].

So, to obtain strong convergence, one can try to modify the Mann algorithm and strengthen the hypotheses on the mapping.

We recall here some of the known results.

Theorem 1.2

(Li et al. [9], 2013, modified Halpern’s method)

Let C be a closed and convex subset of a real Hilbert space H, \(T:C\rightarrow C\) be a k-strict pseudo-contraction such that \(\operatorname{Fix}(T)\neq\emptyset\). For an arbitrary initial value \(x_{0}\in C\) and fixed anchor \(u\in C\), define iteratively a sequence \((x_{n})\) as follows:
$$ x_{n+1}=\alpha_{n}u+\beta_{n}x_{n}+ \gamma_{n}Tx_{n}, $$
where \((\alpha_{n})\), \((\beta_{n})\), \((\gamma_{n})\) are three real sequences in \((0,1)\) satisfying \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\) and \(0< k<\frac {\beta_{n}}{\beta_{n}+\gamma_{n}}\). Suppose that \((\alpha_{n})\) satisfies the conditions:
$$\lim_{n\to\infty}\alpha_{n}=0 , \qquad \sum _{n=1}^{+\infty}\alpha _{n}=+\infty. $$
Then \((x_{n})\) converges strongly to \(x^{*}=P_{\operatorname{Fix}(T)}u\), where \(P_{\operatorname{Fix}(T)}\) is the metric projection from H onto \(\operatorname{Fix}(T)\).
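A toy run of the modified Halpern scheme of Theorem 1.2 on \(\mathbb{R}\) can be sketched as follows. The anchor \(u\), the starting point, and the 0.6/0.4 split of \(1-\alpha_{n}\) into \(\beta_{n}\) and \(\gamma_{n}\) are assumptions made here for illustration; they satisfy \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\) and \(\frac{\beta_{n}}{\beta_{n}+\gamma_{n}}=0.6>k=\frac{1}{3}\):

```python
T = lambda x: -2.0 * x                 # k = 1/3, Fix(T) = {0}
u, x = 1.0, 5.0                        # anchor and starting point (arbitrary)
for n in range(1, 5001):
    a = 1.0 / (n + 1)                  # alpha_n -> 0, sum alpha_n = +inf
    b = 0.6 * (1 - a)                  # beta_n
    g = 0.4 * (1 - a)                  # gamma_n; beta_n / (beta_n + gamma_n) = 0.6
    x = a * u + b * x + g * T(x)
# x approximates P_{Fix(T)} u = 0
```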

Theorem 1.3

(Marino and Xu [8], 2007, CQ method)

Let C be a closed convex subset of a Hilbert space H. Let \(T:C\rightarrow C\) be a k-strict pseudo-contraction for some \(0\leq k<1\) and assume that \(\operatorname{Fix}(T)\neq\emptyset\). Let \((x_{n})\) be the sequence generated by the following (CQ) algorithm:
$$\left \{ \textstyle\begin{array}{l} x_{0}\in C, \\ y_{n}=\alpha_{n}x_{n}+ (1-\alpha_{n} )Tx_{n}, \\ C_{n}= \{z\in C: \Vert y_{n}-z\Vert ^{2}\leq \Vert x_{n}-z\Vert ^{2}+ (1-\alpha_{n} ) (k-\alpha_{n} )\Vert x_{n}-Tx_{n}\Vert ^{2} \}, \\ Q_{n}= \{z\in C: \langle x_{n}-z, x_{0}-x_{n} \rangle \geq 0 \}, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}. \end{array}\displaystyle \right . $$
Assume that the control sequence \((\alpha_{n})\) is chosen so that \(\alpha_{n}<1\) for all n. Then \((x_{n})\) converges strongly to \(P_{\operatorname{Fix}(T)}x_{0}\).
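On the real line the sets \(C_{n}\) and \(Q_{n}\) are half-lines (the quadratic terms in the inequality defining \(C_{n}\) cancel, leaving a linear condition in z), so the projection step reduces to clamping \(x_{0}\) to an interval. The sketch below is an assumed one-dimensional illustration of the CQ scheme, again with the toy map \(Tx=-2x\):

```python
import math

T = lambda x: -2.0 * x                     # k = 1/3, Fix(T) = {0}
k, alpha = 1.0 / 3.0, 0.5
x0 = 5.0                                   # every projection is of this same x0
x = x0
for _ in range(20):
    y = alpha * x + (1 - alpha) * T(x)
    c = (1 - alpha) * (k - alpha) * (x - T(x)) ** 2
    lo, hi = -math.inf, math.inf
    # C_n reduces to 2*(x - y)*z <= c + x*x - y*y, a half-line in R.
    if x != y:
        bound = (c + x * x - y * y) / (2 * (x - y))
        if x > y:
            hi = min(hi, bound)
        else:
            lo = max(lo, bound)
    # Q_n: (x - z)*(x0 - x) >= 0, another half-line.
    if x0 > x:
        hi = min(hi, x)
    elif x0 < x:
        lo = max(lo, x)
    x = min(max(x0, lo), hi)               # metric projection of x0 onto [lo, hi]
```

For this particular one-dimensional instance the first projection already lands on the fixed point 0, which illustrates why the CQ method pays for its strong convergence with a much more expensive step (a projection onto an intersection of sets) than plain Mann averaging.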

Theorem 1.4

(Shang [10], 2007, viscosity method)

Let C be a closed convex subset of a Hilbert space H and let \(T:C\rightarrow C\) be a k-strict pseudo-contraction with \(\operatorname{Fix}(T)\neq \emptyset\). Let \(f:C\rightarrow C\) be a contraction. The initial value \(x_{0}\in C\) is chosen arbitrarily, and we have sequences \((\alpha_{n})\) and \((\beta_{n})\) satisfying the following conditions:
  1. (1)

    \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n=1}^{+\infty }\alpha _{n}=+\infty\);

     
  2. (2)

    \(0< a<\beta_{n}<\gamma\) for some \(a\in(0,\gamma]\) and \(\gamma =\min \{1,2k \}\);

     
  3. (3)

    \(\sum_{n=1}^{+\infty} \vert \alpha_{n+1}-\alpha_{n} \vert <+\infty \) and \(\sum_{n=1}^{+\infty} \vert \beta_{n+1}-\beta_{n} \vert <+\infty\).

     
Let \((x_{n})\) be the composite process defined by
$$\left \{ \textstyle\begin{array}{l} y_{n}= (1-\beta_{n} )x_{n}+\beta_{n}Tx_{n}, \\ x_{n+1}=\alpha_{n}f(x_{n})+ (1-\alpha_{n} )y_{n}. \end{array}\displaystyle \right . $$
Then \((x_{n})\) converges strongly to a fixed point \(p\in \operatorname{Fix}(T)\).
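The viscosity scheme of Theorem 1.4 can be sketched on the same toy map. The contraction \(f(x)=x/2\), the constant \(\beta_{n}\equiv 0.3\in(0,\gamma)\) with \(\gamma=\min\{1,2k\}=\frac{2}{3}\) for \(k=\frac{1}{3}\), and \(\alpha_{n}=1/n\) (which satisfies conditions (1) and (3)) are assumed choices for illustration:

```python
T = lambda x: -2.0 * x        # k = 1/3, so gamma = min{1, 2k} = 2/3
f = lambda x: 0.5 * x         # a contraction on R
beta = 0.3                    # constant beta_n in (0, 2/3); condition (3) is trivial

x = 5.0
for n in range(1, 201):
    a = 1.0 / n               # alpha_n -> 0, sum = +inf, differences summable
    y = (1 - beta) * x + beta * T(x)
    x = a * f(x) + (1 - a) * y
```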

Theorem 1.5

(Osilike and Udomene [11], 2001, Ishikawa type method)

Let H be a Hilbert space. Let C be a nonempty, closed, and convex subset of H, \(T:C\rightarrow C\) a demicompact k-strict pseudo-contraction with \(\operatorname{Fix}(T)\neq\emptyset\). Let \((\alpha_{n})\) and \((\beta_{n})\) be real sequences in \([0,1]\) satisfying the following conditions:
  1. (1)

    \(0< a<\alpha_{n}\leq b< (1-k ) (1-\beta _{n} )\), \(\forall n\geq1\) and for some constants \(a,b\in(0,1)\);

     
  2. (2)

    \(\sum_{n=1}^{+\infty}\beta_{n}<+\infty\).

     
Then the sequence \((x_{n})\) generated from an arbitrary \(x_{1}\in C\) by the Ishikawa iteration method
$$\left \{ \textstyle\begin{array}{l} y_{n}= (1-\beta_{n} )x_{n}+\beta_{n}Tx_{n}, \\ x_{n+1}= (1-\alpha_{n} )x_{n}+\alpha_{n}Ty_{n}, \quad n\geq1, \end{array}\displaystyle \right . $$
converges strongly to a fixed point of T.
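In the Ishikawa scheme the second step applies T to the auxiliary point \(y_{n}\). A minimal one-dimensional sketch, with assumed parameters satisfying conditions (1) and (2) for \(k=\frac{1}{3}\) (the map \(Tx=-2x\) is demicompact on \(\mathbb{R}\)), is the following:

```python
T = lambda x: -2.0 * x            # demicompact 1/3-strict pseudo-contraction on R
alpha = 0.3                       # 0 < a < alpha <= b = 0.3 < (1-k)(1-beta_n)

x = 5.0
for n in range(1, 101):
    beta = 1.0 / (n + 1) ** 2     # sum beta_n < +inf
    y = (1 - beta) * x + beta * T(x)
    x = (1 - alpha) * x + alpha * T(y)
```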

The results just mentioned are probably neither the most general nor the most recent, but they represent very well some of the different modifications of the original Mann approximation method designed to obtain strong convergence.

We would like to point out that the differences from the original method are remarkable. So it is quite surprising that, recently, a strong convergence method for nonexpansive mappings was obtained in [1] that is ‘almost’ the Mann method (the difference being only a smaller and smaller perturbation term). In [1] the convergence of this method was proved only for nonexpansive mappings.

Theorem 1.6

(Hussain, Marino et al. [1], 2015)

Let H be a Hilbert space and \(T:H\rightarrow H\) a nonexpansive mapping. Let \((\alpha_{n})\), \((\mu_{n})\) be sequences in \((0,1]\) such that
  • \(\lim_{n\to\infty}\alpha_{n}=0\);

  • \(\sum_{n=1}^{+\infty}\alpha_{n}\mu_{n}=+\infty\);

  • \(\vert \mu_{n+1}-\mu_{n}\vert =o(\mu_{n})\);

  • \(\vert \alpha_{n+1}-\alpha_{n}\vert =o(\alpha_{n}\mu_{n})\).

Then the sequence \((x_{n})\) generated by
$$ x_{n+1}=\alpha_{n}x_{n}+ (1-\alpha_{n} )Tx_{n}-\alpha _{n}\mu_{n}x_{n} $$
strongly converges to a point \(x^{*}\in \operatorname{Fix}(T)\) with minimum norm
$$ \bigl\Vert x^{*}\bigr\Vert =\min_{x\in \operatorname{Fix}(T)}\Vert x \Vert . $$
We would like to emphasize that:
  1. (1)
    In general, the mapping T cannot be taken to be defined on an arbitrary closed convex subset C of H, since \(x_{n+1}\) is not a convex combination of two elements of C. However, since we can write
    $$ x_{n+1}=\alpha_{n} (1-\mu_{n} )x_{n}+ (1-\alpha _{n} )Tx_{n}, $$
    then \(x_{n+1}\) is meaningful if \(T:C\rightarrow C\) is a self-mapping defined on a cone C, that is, a particular convex set, closed under linear combinations with nonnegative coefficients.
     
  2. (2)

    The proof of Theorem 1.6 is easy, using the properties of nonexpansive mappings, and cannot be adapted to strict pseudo-contractive mappings. The purpose of the present paper is to show that the result holds also for strict pseudo-contractions. The proof uses completely different techniques, as do the assumptions on the coefficients. To the best of our knowledge, this is the algorithm most similar to the original Mann iterative method (and the easiest to implement) that provides strong convergence.

     
  3. (3)

    Our techniques can also be used to clarify the proofs of main results in [12] and [13].

     

2 Preliminaries

We need some tools in a real Hilbert space H, and some facts about k-strict pseudo-contractive mappings which are listed in the following auxiliary lemmas.

The first result is very well known and easy to prove.

Lemma 2.1

Let H be a Hilbert space, then:
  1. (i)

    \(\|tx+(1-t)y\|^{2}=t\|x\|^{2}+(1-t)\|y\|^{2}-t(1-t)\|x-y\|^{2}\), for all \(x,y\in H\) and for all \(t\in[0,1]\);

     
  2. (ii)

    \(\|x+y\|^{2}\leq\|x\|^{2}+2\langle y, x+y \rangle\), for all \(x,y\in H\).

     

A pertinent tool for us is the following well-known lemma of Xu.

Lemma 2.2

[14]

Let \((a_{n})_{n\in\mathbb{N}}\) be a sequence of nonnegative real numbers satisfying the following relation:
$$a_{n+1}\leq(1-\alpha_{n})a_{n}+ \alpha_{n}\sigma_{n}+\gamma_{n}, \quad n\geq0, $$
where:
  • \((\alpha_{n})_{n\in\mathbb{N}}\subset[0,1]\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\);

  • \(\limsup_{n\rightarrow\infty}\sigma_{n}\leq0\);

  • \(\gamma_{n}\geq0\), \(\sum_{n=1}^{\infty}\gamma _{n}<\infty\).

Then we have
$$\lim_{n\rightarrow\infty}a_{n}=0. $$
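A quick numerical sanity check of Lemma 2.2 (with the inequality taken as an equality, and the hypothetical choices \(\alpha_{n}=1/n\), \(\sigma_{n}=1/n\), \(\gamma_{n}=1/n^{2}\), which satisfy all three conditions) can be sketched as:

```python
a = 1.0
for n in range(1, 100001):
    alpha = 1.0 / n           # in [0, 1], sum alpha_n = +inf
    sigma = 1.0 / n           # limsup sigma_n <= 0 (indeed sigma_n -> 0)
    gamma = 1.0 / n ** 2      # gamma_n >= 0, summable
    a = (1 - alpha) * a + alpha * sigma + gamma
# a_n -> 0, here at roughly a (2 log n)/n rate
```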

Lemma 2.3

Let C be a nonempty, closed, and convex subset of H, let T be a mapping from C into itself such that \(I-T\) is demiclosed at 0, and let \((y_{n})\subset C\) be a bounded sequence.

If \(\Vert y_{n}-Ty_{n}\Vert \rightarrow0\), then
$$\limsup_{n} \langle-\bar{p},y_{n}-\bar{p} \rangle \leq0, $$
where \(\bar{p}=P_{\operatorname{Fix}(T)}(0)\) is the unique point in \(\operatorname{Fix}(T)\) that satisfies the variational inequality
$$ \langle-\bar{p},x-\bar{p} \rangle\leq0,\quad \forall x\in \operatorname{Fix}(T). $$
(2.1)

Proof

Let \(\bar{p}\) satisfy (2.1). Let \((y_{n_{k}})\) be a subsequence of \((y_{n})\) for which
$$\limsup_{n} \langle-\bar{p}, y_{n}-\bar{p} \rangle=\lim_{k} \langle-\bar{p}, y_{n_{k}}-\bar{p} \rangle. $$
Select a subsequence \((y_{n_{k_{j}}})\) of \((y_{n_{k}})\) such that \(y_{n_{k_{j}}}\rightharpoonup v\) (this is possible by the boundedness of \((y_{n})\)). By the hypothesis \(\Vert y_{n}-Ty_{n}\Vert \rightarrow 0\), and by demiclosedness of \(I-T\), we have \(v\in \operatorname{Fix}(T)\), and
$$\limsup_{n} \langle-\bar{p}, y_{n}-\bar{p} \rangle=\lim_{j} \langle-\bar{p}, y_{n_{k_{j}}}-\bar{p} \rangle= \langle-\bar{p},v-\bar{p} \rangle. $$
So the claim follows by (2.1). □

Finally, a crucial tool for our results is the following lemma, proved by Maingé.

Lemma 2.4

[15]

Let \((\gamma_{n})_{n\in\mathbb{N}}\) be a sequence of real numbers such that there exists a subsequence \((\gamma_{n_{j}})_{j\in\mathbb{N}}\) of \((\gamma _{n})_{n\in\mathbb{N}}\) such that \(\gamma_{n_{j}}<\gamma_{n_{j}+1}\), for all \(j\in\mathbb{N}\). Then there exists a nondecreasing sequence \((m_{k})_{k\in\mathbb{N}}\) of \(\mathbb {N}\) such that \(\lim_{k\to\infty}m_{k}=\infty\) and the following properties are satisfied by all (sufficiently large) numbers \(k\in \mathbb{N}\):
$$\gamma_{m_{k}}\leq\gamma_{m_{k}+1} \quad \textit{and} \quad \gamma _{k}\leq \gamma_{m_{k}+1}. $$
In fact, \(m_{k}\) is the largest number n in the set \(\{1,\ldots,k\}\) such that the condition \(\gamma_{n}<\gamma_{n+1}\) holds.
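The index sequence \(m_{k}\) of Lemma 2.4 is easy to compute explicitly. The sketch below uses the assumed toy sequence \(\gamma_{n}=(-1)^{n}/n\), which increases at every odd index, and checks both claimed properties (for this particular sequence they hold for every k at which \(m_{k}\) is defined):

```python
# gamma_n = (-1)^n / n, stored 1-based in g[1..N+1].
N = 200
g = [None] + [(-1) ** n / n for n in range(1, N + 2)]

ok, m = True, None
for k in range(1, N + 1):
    if g[k] < g[k + 1]:
        m = k                          # m_k: largest n <= k with gamma_n < gamma_{n+1}
    if m is not None:
        ok = ok and g[m] <= g[m + 1] and g[k] <= g[m + 1]
```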

Before proving our convergence result for strict pseudo-contractions, we recall some properties of these mappings.

Lemma 2.5

[8]

Assume C is a closed convex subset of a Hilbert space H and let \(T:C\rightarrow C\) be a self-mapping of C. If T is a k-strict pseudo-contraction, then:
  1. (1)
    T satisfies the Lipschitz condition:
    $$\Vert Tx-Ty\Vert \leq\frac{1+k}{1-k}\Vert x-y\Vert ; $$
     
  2. (2)

    the mapping \(I-T\) is demiclosed at 0; that is, if \((x_{n})\) is a sequence in C such that \(x_{n}\rightharpoonup\hat{x}\) and \((I-T)x_{n}\rightarrow0\), then \(T\hat{x}=\hat{x}\);

     
  3. (3)

    the set \(\operatorname{Fix}(T)=\{x\in C: Tx=x\}\) is closed and convex, so that the projection \(P_{\operatorname{Fix}(T)}\) is well defined.

     

Moreover, we have the following auxiliary result.

Lemma 2.6

Let \(T:C\rightarrow C\) be a k-strict pseudo-contractive self-mapping of a closed and convex subset C of a Hilbert space H, and suppose that \(\operatorname{Fix}(T)\neq\emptyset\); then
$$ (1-k )\Vert Tx-x\Vert ^{2}\leq2 \langle x-p,x-Tx \rangle,\quad \forall p\in \operatorname{Fix}(T), \forall x\in C. $$
(2.2)

Proof

Let \(p\in \operatorname{Fix}(T)\). Putting \(y=p\) in the definition of T, we get
$$ \Vert Tx-p\Vert ^{2}\leq \Vert x-p\Vert ^{2}+k\Vert x-Tx\Vert ^{2} $$
so
$$\begin{aligned}& \langle Tx-p,Tx-p \rangle\leq \langle x-p,x-Tx \rangle+ \langle x-p,Tx-p \rangle+k\Vert x-Tx\Vert ^{2} \\& \quad \Rightarrow\quad \langle Tx-p,Tx-x \rangle\leq \langle x-p,x-Tx \rangle+k \Vert x-Tx\Vert ^{2} \\& \quad \Rightarrow\quad \langle Tx-x,Tx-x \rangle+ \langle x-p,Tx-x \rangle\leq \langle x-p,x-Tx \rangle +k\Vert x-Tx\Vert ^{2}, \end{aligned}$$
from which we get (2.2). □
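Inequality (2.2) can be spot-checked numerically. For the toy map \(Tx=-2x\) with \(k=\frac{1}{3}\) and \(p=0\), both sides equal \(6x^{2}\), so (2.2) holds with equality:

```python
T = lambda x: -2.0 * x                # 1/3-strict pseudo-contraction, Fix(T) = {0}
k, p = 1.0 / 3.0, 0.0

def holds(x):
    lhs = (1 - k) * (T(x) - x) ** 2   # (1-k) ||Tx - x||^2  (= 6 x^2 here)
    rhs = 2 * (x - p) * (x - T(x))    # 2 <x - p, x - Tx>   (= 6 x^2 here)
    return lhs <= rhs + 1e-9          # tolerance for floating-point round-off

ok = all(holds(x) for x in [-3.0, -0.5, 0.0, 0.7, 2.0, 10.0])
```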

3 The main result

Now we can prove our theorem. We use the notation \(\omega_{l}(x_{n})\) to denote the set of weak limit points of \((x_{n})\).

Theorem 3.1

Let H be a Hilbert space and let C be a nonempty closed cone of H. Let \(T:C \to C\) be a k-strict pseudo-contractive mapping such that \(\operatorname{Fix}(T)\neq\emptyset\). Suppose that \((\alpha_{n})_{n\in\mathbb {N}}\) and \((\mu _{n})_{n\in\mathbb{N}}\) are real sequences, respectively, in \((k,1)\) and in \((0,1)\) satisfying the conditions:
  1. (1)

    \(k<\liminf_{n\rightarrow\infty}\alpha_{n}\leq \limsup_{n\rightarrow\infty}\alpha_{n}<1\);

     
  2. (2)

    \(\lim_{n\rightarrow\infty}\mu_{n}=0\);

     
  3. (3)

    \(\sum_{n=1}^{\infty}\mu_{n}=\infty\).

     
Let us define a sequence \((x_{n})_{n\in\mathbb{N}}\) as follows:
$$ x_{1} \in C, \quad x_{n+1}=\alpha_{n} (1- \mu_{n} ) x_{n}+ (1-\alpha_{n} )Tx_{n},\quad n \in\mathbb{N}. $$
(3.1)
Then \((x_{n})_{n\in\mathbb{N}}\) converges strongly to \(\bar{x}\in \operatorname{Fix}(T)\), the unique solution of the variational inequality
$$ \langle-\bar{x},y-\bar{x}\rangle\leq0,\quad \forall y\in \operatorname{Fix}(T). $$

Proof

We begin by proving that \((x_{n})_{n\in\mathbb{N}}\) is bounded.

First of all, observe that from the conditions \(\mu_{n}\rightarrow0\) and \(k<\liminf\alpha_{n}\leq\limsup\alpha_{n}<1\), it follows that there exists an integer \(n_{0}\in\mathbb{N}\) such that
$$\mu_{n}\leq1-\frac{k}{\alpha_{n}},\quad \forall n\geq n_{0}, $$
i.e.
$$ k-\alpha_{n} (1-\mu_{n} )\leq0. $$
(3.2)
Let \(p\in \operatorname{Fix}(T)\) and put \(r=\max \{\Vert x_{n_{0}}-p\Vert , \Vert p\Vert \}\). We have
$$\begin{aligned} x_{n+1}-p&=\alpha_{n} \bigl[ (1-\mu_{n} )x_{n}-p \bigr]+ (1-\alpha _{n} ) [Tx_{n}-p ] \\ &= \alpha_{n} \bigl[ (1-\mu_{n} ) (x_{n}-p )+ \mu _{n} (-p ) \bigr]+ (1-\alpha_{n} ) [Tx_{n}-p ]. \end{aligned}$$
Regarding Lemma 2.1(ii), we derive that
$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2} =& \alpha_{n}\bigl\Vert (1-\mu_{n} ) (x_{n}-p )+\mu_{n} (-p ) \bigr\Vert ^{2}+ (1-\alpha _{n} )\Vert Tx_{n}-p\Vert ^{2} \\ &{}-\alpha_{n} (1-\alpha_{n} )\bigl\Vert (1- \mu_{n} )x_{n}-Tx_{n}\bigr\Vert ^{2} \\ \leq&\alpha_{n} \bigl[ (1-\mu_{n} )\Vert x_{n}-p\Vert ^{2}+\mu_{n}\Vert p\Vert ^{2}-\mu_{n} (1-\mu_{n} )\Vert x_{n} \Vert ^{2} \bigr] \\ &{}+ (1-\alpha_{n} ) \bigl[\Vert x_{n}-p\Vert ^{2}+k\Vert x_{n}-Tx_{n}\Vert ^{2} \bigr] \\ &{}-\alpha_{n} (1-\alpha_{n} )\bigl\Vert (1- \mu_{n} ) (x_{n}-Tx_{n} )+\mu_{n} (-Tx_{n} )\bigr\Vert ^{2} \\ =&\alpha_{n} \bigl[ (1-\mu_{n} )\Vert x_{n}-p\Vert ^{2}+\mu _{n}\Vert p\Vert ^{2}-\mu_{n} (1-\mu_{n} )\Vert x_{n} \Vert ^{2} \bigr] \\ &{}+ (1-\alpha_{n} ) \bigl[\Vert x_{n}-p\Vert ^{2}+k\Vert x_{n}-Tx_{n}\Vert ^{2} \bigr] \\ &{}-\alpha_{n} (1-\alpha_{n} ) \bigl[ (1- \mu_{n} )\Vert x_{n}-Tx_{n}\Vert ^{2}+\mu_{n}\Vert Tx_{n}\Vert ^{2}- \mu_{n} (1-\mu _{n} )\Vert x_{n}\Vert ^{2} \bigr] \\ \leq&\alpha_{n} (1-\mu_{n} )\Vert x_{n}-p \Vert ^{2}+\alpha _{n}\mu _{n}\Vert p\Vert ^{2}+ (1-\alpha_{n} )\Vert x_{n}-p\Vert ^{2} \\ &{}+ (1-\alpha_{n} )k\Vert x_{n}-Tx_{n} \Vert ^{2}-\alpha_{n} (1-\alpha_{n} ) (1- \mu_{n} )\Vert x_{n}-Tx_{n}\Vert ^{2} \\ =& (1-\alpha_{n}\mu_{n} )\Vert x_{n}-p \Vert ^{2}+\alpha_{n}\mu _{n}\Vert p\Vert ^{2}+ (1-\alpha_{n} ) \bigl[k-\alpha_{n} (1-\mu _{n} ) \bigr]\Vert x_{n}-Tx_{n}\Vert ^{2} \\ \mbox{(from (3.2))} \leq& (1-\alpha_{n} \mu_{n} )\Vert x_{n}-p\Vert ^{2}+ \alpha_{n}\mu_{n}\Vert p\Vert ^{2} \\ \leq&\max \bigl\{ \Vert x_{n}-p\Vert ^{2},\Vert p \Vert ^{2} \bigr\} \leq \max \bigl\{ \Vert x_{n_{0}}-p\Vert ^{2},\Vert p\Vert ^{2} \bigr\} = r^{2}. \end{aligned}$$
Thus, we conclude that the sequence \((x_{n})\) is bounded.
Now we shall prove that, for \(p\in \operatorname{Fix}(T)\),
$$\begin{aligned} (1-\alpha_{n} ) (\alpha_{n}-k )\Vert x_{n}-Tx_{n}\Vert ^{2} \leq& \bigl(\Vert x_{n}-p\Vert ^{2}-\Vert x_{n+1}-p\Vert ^{2} \bigr) \\ &{}-2\alpha_{n}\mu_{n} \langle x_{n},x_{n+1}-p \rangle. \end{aligned}$$
(3.3)
Regarding (3.1), we easily observe that
$$\begin{aligned} x_{n+1}-p&=\alpha_{n} (1-\mu_{n} )x_{n}+ (1-\alpha_{n} )Tx_{n}-p \\ &= \bigl[1- \bigl(1-\alpha_{n} (1-\mu_{n} ) \bigr) \bigr]x_{n}+ (1-\alpha_{n} )Tx_{n}-p \\ &= (x_{n}-p )- (1-\alpha_{n} ) (x_{n}-Tx_{n} )-\alpha _{n}\mu_{n} x_{n}, \end{aligned}$$
and so
$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2} \leq&\bigl\Vert (x_{n}-p )- (1-\alpha _{n} ) (x_{n}-Tx_{n} )\bigr\Vert ^{2}-2\alpha_{n}\mu_{n} \langle x_{n},x_{n+1}-p \rangle \\ =&\Vert x_{n}-p\Vert ^{2}-2 (1-\alpha_{n} ) \langle x_{n}-Tx_{n},x_{n}-p \rangle \\ &{}+ (1-\alpha_{n} )^{2}\Vert x_{n}-Tx_{n} \Vert ^{2}-2\alpha_{n}\mu _{n} \langle x_{n},x_{n+1}-p \rangle \\ \mbox{(from (2.2))} \leq&\Vert x_{n}-p\Vert ^{2}+ (1-\alpha _{n} ) (k-\alpha_{n} )\Vert x_{n}-Tx_{n}\Vert ^{2}-2\alpha_{n} \mu _{n} \langle x_{n},x_{n+1}-p \rangle, \end{aligned}$$
so (3.3) is proved. Moreover, since \(\alpha_{n}\in(k,1)\),
$$ \Vert x_{n+1}-p\Vert ^{2}\leq \Vert x_{n}-p \Vert ^{2}-2\alpha_{n}\mu _{n}\langle x_{n},x_{n+1}-p\rangle. $$

Now we prove the strong convergence of \((x_{n})\) by distinguishing two cases.

Case 1. Suppose that \(\Vert x_{n}-p\Vert \) is monotone nonincreasing. Then \(\Vert x_{n}-p\Vert \) converges and hence
$$ \lim_{n\to\infty} \bigl(\Vert x_{n+1}-p\Vert ^{2}- \Vert x_{n}-p\Vert ^{2}\bigr)=0. $$
From this and from the assumptions \(\lim_{n}\mu_{n}=0\), and \(k<\liminf_{n}\alpha_{n}\leq\limsup_{n}\alpha_{n}<1\), by (3.3) we get
$$ \lim_{n\to\infty} \Vert x_{n}-Tx_{n}\Vert =0; $$
from this and boundedness of \((x_{n})\), thanks to demiclosedness of \(I-T\) we deduce \(\omega_{l}(x_{n})\subseteq \operatorname{Fix}(T)\).
Now we put
$$z_{n}=\alpha_{n} x_{n}+ (1- \alpha_{n} )Tx_{n}= \bigl(1- (1-\alpha _{n} ) \bigr)x_{n}+ (1-\alpha_{n} )Tx_{n}, $$
from which we have
$$ z_{n}-x_{n}= (1-\alpha_{n} ) (Tx_{n}-x_{n} ). $$
(3.4)
Hence, we find that
$$\begin{aligned} x_{n+1}&=z_{n}-\alpha_{n} \mu_{n} x_{n} \\ &= (1-\alpha_{n}\mu_{n} )z_{n}+ \alpha_{n}\mu_{n} (z_{n}-x_{n} ) \\ \mbox{(from (3.4))}&= (1-\alpha_{n} \mu_{n} )z_{n}+\alpha _{n}\mu _{n} (1- \alpha_{n} ) (Tx_{n}-x_{n} ). \end{aligned}$$
(3.5)
Let \(\bar{x}=P_{\operatorname{Fix}(T)}(0)\in \operatorname{Fix}(T)\) be the unique solution of the variational inequality
$$ \langle-\bar{x},y-\bar{x} \rangle\leq0,\quad \forall y\in \operatorname{Fix}(T). $$
(3.6)
From the definition of \(z_{n}\),
$$\begin{aligned} \Vert z_{n}-\bar{x}\Vert ^{2}&=\bigl\Vert x_{n}-\bar{x}- (1-\alpha_{n} ) (x_{n}-Tx_{n} )\bigr\Vert ^{2} \\ &=\Vert x_{n}-\bar{x}\Vert ^{2}-2 (1- \alpha_{n} ) \langle x_{n}-Tx_{n},x_{n}- \bar{x} \rangle+ (1-\alpha_{n} )^{2}\Vert x_{n}-Tx_{n} \Vert ^{2} \\ \mbox{(from (2.2))} &\leq \Vert x_{n}- \bar{x}\Vert ^{2}- (1-\alpha_{n} ) \bigl[ (1-k )- (1- \alpha_{n} ) \bigr]\Vert x_{n}-Tx_{n}\Vert ^{2} \\ &\leq \Vert x_{n}-\bar{x}\Vert ^{2}. \end{aligned}$$
(3.7)
So,
$$\begin{aligned} \Vert x_{n+1}-\bar{x}\Vert ^{2} =&\mbox{(from (3.5))}=\bigl\Vert (1-\alpha_{n} \mu_{n} )z_{n}+\alpha_{n}\mu _{n} (1- \alpha_{n} ) (Tx_{n}-x_{n} )-\bar{x}\bigr\Vert ^{2} \\ =&\bigl\Vert (1-\alpha_{n}\mu_{n} ) (z_{n}-\bar {x} )+\alpha_{n}\mu_{n} \bigl[ (1- \alpha_{n} ) (Tx_{n}-x_{n} )-\bar {x} \bigr]\bigr\Vert ^{2} \\ \mbox{(from Lemma 2.1)} \leq& (1-\alpha_{n}\mu _{n} )^{2}\Vert z_{n}-\bar{x}\Vert ^{2}+2\alpha_{n}\mu_{n} \bigl\langle (1-\alpha _{n} ) (Tx_{n}-x_{n} ),x_{n+1}-\bar{x} \bigr\rangle \\ &{}+2\alpha_{n}\mu_{n} \langle- \bar{x},x_{n+1}-\bar {x} \rangle \\ \mbox{(from (3.7))} \leq& (1- \alpha_{n}\mu _{n} )\Vert x_{n}-\bar{x}\Vert ^{2} \\ &{}+2\alpha_{n}\mu_{n} \bigl( (1-\alpha_{n} ) \langle Tx_{n}-x_{n},x_{n+1}-\bar{x} \rangle+ \langle-\bar {x},x_{n+1}-\bar {x} \rangle \bigr). \end{aligned}$$
(3.8)
Now, since \((x_{n})\) is bounded and \(\omega_{l}(x_{n})\subseteq \operatorname{Fix}(T)\), there exists an appropriate subsequence \(x_{n_{k}}\rightharpoonup p_{0}\in \operatorname{Fix}(T)\) such that
$$ \limsup_{n} \langle-\bar{x},x_{n+1}-\bar{x} \rangle =\lim_{k} \langle-\bar{x},x_{n_{k}}-\bar{x} \rangle= \langle -\bar{x},p_{0}-\bar{x} \rangle\leq0. $$
(3.9)
From this, it follows that all the hypotheses of Lemma 2.2 are satisfied and finally by (3.8) we can conclude
$$\lim_{n\to\infty} \Vert x_{n}-\bar{x}\Vert =0. $$

Let now \(\bar{x}\in \operatorname{Fix}(T)\) be defined by the variational inequality (3.6).

Case 2. If \(\Vert x_{n}-\bar{x}\Vert \) is not monotone nonincreasing, there exists a subsequence \((x_{n_{k}})\) such that \(\Vert x_{n_{k}}-\bar{x}\Vert <\Vert x_{n_{k}+1}-\bar{x}\Vert \), \(\forall k\in\mathbb{N}\). So by Lemma 2.4, \(\exists\tau(n)\uparrow +\infty\) such that
  1. (1)

    \(\Vert x_{\tau(n)}-\bar{x}\Vert <\Vert x_{\tau (n)+1}-\bar {x}\Vert \);

     
  2. (2)

    \(\Vert x_{n}-\bar{x}\Vert <\Vert x_{\tau(n)+1}-\bar {x}\Vert \).

     
Now, we have
$$\begin{aligned} 0&\leq\liminf_{n} \bigl(\Vert x_{\tau(n)+1}-\bar{x} \Vert - \Vert x_{\tau (n)}-\bar{x}\Vert \bigr) \\ &\leq\limsup_{n} \bigl(\Vert x_{\tau(n)+1}-\bar{x} \Vert -\Vert x_{\tau (n)}-\bar{x}\Vert \bigr) \\ &\leq\limsup_{n} \bigl(\Vert x_{n+1}-\bar{x} \Vert -\Vert x_{n}-\bar {x}\Vert \bigr) \\ &\leq\limsup_{n} \bigl(\Vert x_{n}-\bar{x} \Vert +\sqrt{\mu _{n}}M-\Vert x_{n}-\bar{x}\Vert \bigr)=0. \end{aligned}$$
Thus, we derive that
$$\Vert x_{\tau(n)+1}-\bar{x}\Vert ^{2}-\Vert x_{\tau(n)}- \bar {x}\Vert ^{2}\longrightarrow0, $$
from which
$$ \Vert x_{\tau(n)}-Tx_{\tau(n)}\Vert \longrightarrow0. $$
(3.10)
Now, from (3.8), we get
$$\begin{aligned} \Vert x_{\tau(n)+1}-\bar{x}\Vert ^{2} \leq& (1-\alpha _{\tau(n)}\mu_{\tau(n)} )\Vert x_{\tau(n)}-\bar{x}\Vert ^{2} \\ &{}+2\alpha_{\tau(n)}\mu_{\tau(n)} (1-\alpha_{\tau (n)} ) \langle Tx_{\tau(n)}-x_{\tau(n)},x_{\tau(n)+1}-\bar{x} \rangle \\ &{}+2\alpha_{\tau(n)}\mu_{\tau(n)} \langle-\bar {x},x_{\tau (n)+1}-\bar{x} \rangle \\ =&\Vert x_{\tau(n)}-\bar{x}\Vert ^{2}+2 \alpha_{\tau (n)}\mu _{\tau(n)} (1-\alpha_{\tau(n)} ) \langle Tx_{\tau (n)}-x_{\tau(n)},x_{\tau(n)+1}-\bar{x} \rangle \\ &{}+2\alpha_{\tau(n)}\mu_{\tau(n)} \langle-\bar {x},x_{\tau (n)+1}-\bar{x} \rangle \\ &{}-2\alpha_{\tau(n)}\mu_{\tau(n)} \biggl(\frac{\Vert x_{\tau (n)}-\bar {x}\Vert ^{2}}{2} \biggr). \end{aligned}$$
(3.11)
Putting in (3.11)
$$\begin{aligned} A_{\tau(n)} =& (1-\alpha_{\tau(n)} ) \langle Tx_{\tau (n)}-x_{\tau(n)},x_{\tau(n)+1}- \bar{x} \rangle \\ &{}+ \langle-\bar{x},x_{\tau(n)+1}-\bar{x} \rangle-\frac {\Vert x_{\tau(n)}-\bar{x}\Vert ^{2}}{2}, \end{aligned}$$
we have
$$ \Vert x_{\tau(n)+1}-\bar{x}\Vert ^{2}\leq \Vert x_{\tau (n)}-\bar {x}\Vert ^{2}+2\alpha_{\tau(n)} \mu_{\tau(n)}A_{\tau(n)}. $$
(3.12)
Notice that we cannot use Lemma 2.2 as in Case 1 (or as in [12, 13]), since we cannot guarantee that \(\sum_{n=1}^{+\infty}\mu _{\tau(n)}=+\infty\). So we proceed as follows. Assume, by contradiction, that \(\Vert x_{\tau(n)}-\bar{x}\Vert \) does not converge to 0. Then there exist a subsequence \((n_{j})\) and an \(\epsilon>0\) such that
$$ \Vert x_{\tau(n_{j})}-\bar{x}\Vert ^{2}\geq2\epsilon. $$
(3.13)
By (3.9) and (3.10) we know that there exist \(n_{0},n_{1}\in \mathbb{N}\) such that
$$ (1-\alpha_{\tau(n)} ) \langle Tx_{\tau(n)}-x_{\tau (n)},x_{\tau(n)+1}- \bar{x} \rangle< \frac{\epsilon}{3},\quad \forall n\geq n_{0} $$
(3.14)
and
$$ \langle-\bar{x},x_{\tau(n)+1}-\bar{x} \rangle< \frac {\epsilon }{3},\quad \forall n\geq n_{1} . $$
(3.15)
Hence, for every index \(n_{j}\geq\max \{n_{0},n_{1} \}\), the definition of \(A_{\tau(n)}\) together with (3.13) yields
$$ A_{\tau(n_{j})}< \frac{\epsilon}{3}+\frac{\epsilon}{3}-\epsilon=- \frac {\epsilon}{3}< 0. $$
So, by (3.12), we get \(\Vert x_{\tau(n_{j})+1}-\bar{x}\Vert ^{2}\leq \Vert x_{\tau(n_{j})}-\bar{x}\Vert ^{2}\), which contradicts \(\Vert x_{\tau(n)}-\bar{x}\Vert <\Vert x_{\tau(n)+1}-\bar{x}\Vert \), \(\forall n\in\mathbb{N}\). This implies that
$$\Vert x_{\tau(n)}-\bar{x}\Vert \longrightarrow0, $$
and so, using \(\Vert x_{n}-\bar{x}\Vert <\Vert x_{\tau (n)+1}-\bar {x}\Vert \), we finally obtain
$$\Vert x_{n}-\bar{x}\Vert \longrightarrow0. $$
 □

Example 3.2

The mapping \(T:\mathbb{R}\rightarrow\mathbb{R}\) defined by \(Tx=-2x\) is \(\frac {1}{3}\)-strict pseudo-contractive. Taking \(\alpha_{n}=\frac{1}{2}\), \(\mu _{n}=\frac{1}{n}\), our algorithm becomes
$$x_{n+1}=-\frac{1}{2}\frac{n+1}{n}x_{n} , $$
which converges to 0, the unique fixed point of T, oscillating around it.
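Example 3.2 is easy to reproduce numerically; the sketch below runs scheme (3.1) with \(\alpha_{n}=\frac{1}{2}\), \(\mu_{n}=\frac{1}{n}\) from the assumed starting point \(x_{1}=1\), and confirms that the iterates alternate in sign while \(|x_{n}|\to0\):

```python
T = lambda x: -2.0 * x                 # the 1/3-strict pseudo-contraction of Example 3.2

xs = [1.0]                             # x_1 (arbitrary starting point)
for n in range(1, 201):
    a, mu = 0.5, 1.0 / n               # alpha_n = 1/2, mu_n = 1/n
    xs.append(a * (1 - mu) * xs[-1] + (1 - a) * T(xs[-1]))
# Each step multiplies the iterate by -(n + 1)/(2 n): signs alternate, |x_n| -> 0.
```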

Open questions

  1. (1)

    Does the result hold in Banach spaces?

     
  2. (2)

    Does the result hold for families of strict pseudo-contractive mappings?

     
  3. (3)

    Does the result hold for Lipschitzian pseudo-contractive mappings?

     

Declarations

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics and Computer Sciences, University of Calabria, Rende, Italy
(2)
Department of Mathematics, King Abdulaziz University, Jeddah, Saudi Arabia
(3)
Department of Mathematics, Atilim University, Ankara, Turkey

References

  1. Hussain, N, Marino, G, Muglia, L, Alamri, BAS: On some Mann’s type iterative algorithms. Fixed Point Theory Appl. 2015, 17 (2015)
  2. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
  3. Reich, S: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67, 274-276 (1979)
  4. Genel, A, Lindenstrauss, J: An example concerning fixed points. Isr. J. Math. 22, 81-86 (1975)
  5. Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 20, 197-228 (1967)
  6. Chen, R, Yao, Y: Strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Appl. Math. Comput. 32, 69-82 (2010)
  7. Shahzad, N, Zegeye, H: Approximating a common point of fixed points of a pseudo-contractive mapping and zeros of sum of monotone mappings. Fixed Point Theory Appl. 2014, 85 (2014)
  8. Marino, G, Xu, HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336-346 (2007)
  9. Li, L, Li, S, Zhang, L, He, X: Strong convergence of modified Halpern’s iterations for a k-strictly pseudo-contractive mapping. J. Inequal. Appl. 2013, 98 (2013)
  10. Shang, M, Ye, G: Strong convergence theorems for strictly pseudo-contractive mappings by viscosity approximation methods. Mod. Appl. Sci. 1, 19-23 (2007)
  11. Osilike, MO, Udomene, A: Demiclosedness principle and convergence theorems for strictly pseudo-contractive mappings of Browder-Petryshyn type. J. Math. Anal. Appl. 256, 431-445 (2001)
  12. Cianciaruso, F, Marino, G, Rugiano, A, Scardamaglia, B: On strong convergence of Halpern’s method using averaged type mappings. J. Appl. Math. 2014, Article ID 473243 (2014)
  13. Cianciaruso, F, Marino, G, Rugiano, A, Scardamaglia, B: On strong convergence of viscosity type method using averaged type mappings. J. Nonlinear Convex Anal. 16(8), 1619-1640 (2015)
  14. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. (2) 66, 240-256 (2002)
  15. Maingé, PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899-912 (2008)

Copyright

© Marino et al. 2016