# Some results on Rockafellar-type iterative algorithms for zeros of accretive operators

## Abstract

We provide a new proof technique to obtain strong convergence of the sequences generated by viscosity iterative methods for a Rockafellar-type iterative algorithm and a Halpern-type iterative algorithm to a zero of an accretive operator in Banach spaces. By using a method different from previous ones, the main results improve and develop recent well-known results in this area.

MSC: 47H06, 47H09, 47H10, 47J25, 49M05, 65J15.

## 1 Introduction

Let $E$ be a real Banach space with norm $\|\cdot\|$ and dual space $E^*$. The value of $x^* \in E^*$ at $y \in E$ is denoted by $\langle y, x^*\rangle$, and the normalized duality mapping $J$ from $E$ into $2^{E^*}$ is defined by

$J(x) = \{x^* \in E^* : \langle x, x^*\rangle = \|x\|\,\|x^*\|,\ \|x\| = \|x^*\|\}, \quad \forall x \in E.$

Recall that a (possibly multivalued) operator $A : D(A) \subset E \to 2^E$ with domain $D(A)$ and range $R(A)$ in $E$ is accretive if, for each $x_i \in D(A)$ and $y_i \in Ax_i$ ($i = 1, 2$), there exists $j \in J(x_1 - x_2)$ such that $\langle y_1 - y_2, j\rangle \ge 0$. (Here $J$ is the normalized duality mapping.) In a Hilbert space, an accretive operator is also called a monotone operator. The set of zeros of $A$ is denoted by $A^{-1}0$, that is,

$A^{-1}0 := \{z \in D(A) : 0 \in Az\}.$

If $A^{-1}0 \neq \emptyset$, then the inclusion $0 \in Ax$ is solvable.

Iterative methods have been studied extensively over the last forty years for constructing zeros of accretive operators. In particular, in order to find a zero of a monotone operator, Rockafellar introduced a powerful and successful algorithm in a Hilbert space $H$, now known as the Rockafellar proximal point algorithm: for any initial point $x_0 \in H$, a sequence $\{x_n\}$ is generated by

$x_{n+1} = J_{r_n}(x_n + e_n), \quad \forall n \ge 0,$

where $J_r = (I + rA)^{-1}$ for $r > 0$ is the resolvent of $A$ and $\{e_n\}$ is an error sequence in $H$. Bruck proposed the following in a Hilbert space $H$: for a fixed point $u \in H$,

$x_{n+1} = J_{r_n}(u), \quad \forall n \ge 0.$
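For a concrete feel of the proximal point scheme, the iteration can be sketched numerically. The operator $A(x) = x$ on the real line, the constant resolvent parameter and the summable error sequence below are illustrative assumptions, not part of the results above; for this $A$, the resolvent is $J_r(x) = x/(1+r)$ and the unique zero is $0$.

```python
# Proximal point iteration x_{n+1} = J_{r_n}(x_n + e_n) for the toy
# monotone operator A(x) = x on the real line (an assumption for
# illustration), whose resolvent is J_r(x) = x / (1 + r).

def resolvent(x: float, r: float) -> float:
    """Resolvent J_r = (I + rA)^{-1} of A(x) = x: solve y + r*y = x."""
    return x / (1.0 + r)

def proximal_point(x0, r_seq, e_seq):
    x = x0
    for r, e in zip(r_seq, e_seq):
        x = resolvent(x + e, r)  # x_{n+1} = J_{r_n}(x_n + e_n)
    return x

# Summable errors e_n = 2^{-n}; the iterates approach the zero 0 of A.
x = proximal_point(5.0, [1.0] * 50, [2.0 ** -n for n in range(50)])
```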

In 1991, Güler gave an example showing that Rockafellar’s proximal point algorithm does not, in general, converge strongly. In 2000, Solodov and Svaiter proposed a modified proximal point algorithm which converges strongly to a solution of the equation $0 \in Ax$ by using the projection method. In the same year, Kamimura and Takahashi introduced the following iterative algorithms of Halpern type and Mann type in Hilbert spaces and Banach spaces: for any initial point $x_0$,

$x_{n+1} = \alpha_n x_0 + (1 - \alpha_n) J_{r_n} x_n, \qquad x_{n+1} = \alpha_n x_0 + (1 - \alpha_n) J_{r_n} x_n + e_n, \quad \forall n \ge 0$

and

$x_{n+1} = \alpha_n x_n + (1 - \alpha_n) J_{r_n} x_n, \qquad x_{n+1} = \alpha_n x_n + (1 - \alpha_n) J_{r_n} x_n + e_n, \quad \forall n \ge 0,$

where $\{\alpha_n\} \subset (0,1)$, $\{r_n\} \subset (0,\infty)$, and $\{e_n\}$ is an error sequence, and they obtained strong and weak convergence of the sequences generated by these algorithms.

Xu in 2006 and Song and Yang in 2009 obtained strong convergence of the regularization method for Rockafellar’s proximal point algorithm in a Hilbert space $H$: for any initial point $x_0 \in H$,

$x_{n+1} = J_{r_n}(\alpha_n u + (1 - \alpha_n) x_n + e_n), \quad \forall n \ge 0,$

where $\{\alpha_n\} \subset (0,1)$, $\{e_n\} \subset H$ and $\{r_n\} \subset (0,\infty)$.
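The regularized step can be sketched for the same toy operator $A(x) = x$ as before (resolvent $J_r(x) = x/(1+r)$, zero at $0$). The anchor point $u$, the choice $\alpha_n = 1/(n+1)$, $r_n = 1$ and the summable errors are all assumptions made only for this demonstration.

```python
# Regularized proximal point step x_{n+1} = J_{r_n}(a_n*u + (1 - a_n)*x_n + e_n)
# for the illustrative operator A(x) = x with resolvent J_r(x) = x / (1 + r).
# The anchor u, step sizes a_n and errors e_n below are demo assumptions.

def regularized_ppa(x0: float, u: float, n_iter: int) -> float:
    x = x0
    for n in range(n_iter):
        a = 1.0 / (n + 1)      # a_n -> 0, sum of a_n diverges
        e = 2.0 ** -n          # summable error sequence
        x = (a * u + (1 - a) * x + e) / (1.0 + 1.0)  # apply J_1
    return x

x = regularized_ppa(x0=5.0, u=3.0, n_iter=2000)  # approaches the zero 0 of A
```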

In 2012, Zhang and Song considered the following Rockafellar-type iterative algorithm (1.1) and Halpern-type iterative algorithm (1.2) for finding a zero of an accretive operator $A$ in a uniformly convex Banach space $E$ with a weakly continuous duality mapping $J_\varphi$ with gauge function $\varphi$, or with a uniformly Gâteaux differentiable norm: for any initial point $x_0 \in E$ and a fixed point $u \in E$,

$x_{n+1} = \beta_n x_n + (1 - \beta_n) J_{r_n}(\alpha_n u + (1 - \alpha_n) x_n), \quad \forall n \ge 0$
(1.1)

and

$x_{n+1} = \alpha_n u + \beta_n x_n + (1 - \alpha_n - \beta_n) J_{r_n} x_n, \quad \forall n \ge 0,$
(1.2)

where the sequences $\{\alpha_n\}, \{\beta_n\} \subset (0,1)$ and $\{r_n\} \subset (0,\infty)$ satisfy the conditions: (i) $\lim_{n\to\infty} \alpha_n = 0$; (ii) $\sum_{n=0}^{\infty} \alpha_n = \infty$; (iii) $\limsup_{n\to\infty} \beta_n < 1$; and (iv) $\liminf_{n\to\infty} r_n > 0$. In particular, in order to obtain strong convergence of the sequence generated by (1.2) to a zero of an accretive operator $A$, they utilized a well-known inequality in uniformly convex Banach spaces (see Xu). The results of Zhang and Song in a Banach space with a uniformly Gâteaux differentiable norm and the corresponding results of Song are mutually complementary, since Zhang and Song assumed uniform convexity of the space instead of the reflexivity assumed by Song, and relaxed Song’s conditions $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$ and $\liminf_{n\to\infty} r_n > 0$, $\lim_{n\to\infty} \frac{r_n}{r_{n+1}} = 1$ on the sequences $\{\beta_n\}$ and $\{r_n\}$. Yu filled the gaps in the result of Zhang and Song for the Halpern-type iterative algorithm (1.2) by utilizing a result on sequences of real numbers which is of fundamental importance for the techniques of analysis. Also, Zhang and Song studied the Rockafellar-type iterative algorithm (1.1) in a uniformly convex Banach space with a weakly continuous normalized duality mapping $J$ or with a uniformly Gâteaux differentiable norm.

In this paper, motivated by the results mentioned above, we consider viscosity iterative methods for the Rockafellar-type iterative algorithm (1.1) and the Halpern-type iterative algorithm (1.2). By using a new method, different from the ones in [18, 20], which recovers the gaps mentioned above, we establish strong convergence of the sequences generated by the proposed iterative methods to a zero of an accretive operator $A$, which solves a certain variational inequality, in a uniformly convex Banach space having a weakly continuous duality mapping $J_\varphi$ with gauge function $\varphi$ or having a uniformly Gâteaux differentiable norm. Our results improve, develop and complement the corresponding results of Song, Zhang and Song, Yu, and Song et al., as well as many existing ones.

## 2 Preliminaries and lemmas

Let $E$ be a real Banach space with norm $\|\cdot\|$ and let $E^*$ be its dual. When $\{x_n\}$ is a sequence in $E$, $x_n \to x$ (resp., $x_n \rightharpoonup x$) denotes strong (resp., weak) convergence of the sequence $\{x_n\}$ to $x$.

Recall that a mapping $f : E \to E$ is said to be contractive on $E$ if there exists a constant $k \in (0,1)$ such that $\|f(x) - f(y)\| \le k\|x - y\|$ for all $x, y \in E$. An accretive operator $A$ is said to satisfy the range condition if $\overline{D(A)} \subset R(I + rA)$ for all $r > 0$, where $I$ is the identity operator of $E$ and $\overline{D(A)}$ denotes the closure of the domain $D(A)$ of $A$. An accretive operator $A$ is called $m$-accretive if $R(I + rA) = E$ for each $r > 0$. If $A$ is an accretive operator satisfying the range condition, then for each $r > 0$ we can define a mapping $J_r : R(I + rA) \to D(A)$ by $J_r = (I + rA)^{-1}$, called the resolvent of $A$. We know that $J_r$ is nonexpansive (i.e., $\|J_r x - J_r y\| \le \|x - y\|$ for all $x, y \in R(I + rA)$) and that $A^{-1}0 = F(J_r) = \{x \in D(J_r) : J_r x = x\}$ for all $r > 0$. Moreover, for $r > 0$, $t > 0$ and $x \in E$,

$J_r x = J_t\left(\frac{t}{r} x + \left(1 - \frac{t}{r}\right) J_r x\right),$
(2.1)

which is referred to as the resolvent identity (see [23, 24], where more details on accretive operators can be found).
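The resolvent identity (2.1) can be checked numerically for a concrete operator. The choice $A(x) = x$ on the real line is an illustrative assumption; for it, $J_r(x) = x/(1+r)$, and a short computation shows both sides of (2.1) equal $x/(1+r)$.

```python
# Numerical check of the resolvent identity (2.1),
#   J_r x = J_t((t/r) x + (1 - t/r) J_r x),
# for the illustrative operator A(x) = x with resolvent J_r(x) = x / (1 + r).

def J(r: float, x: float) -> float:
    """Resolvent (I + rA)^{-1} for A(x) = x."""
    return x / (1.0 + r)

def identity_gap(x: float, r: float, t: float) -> float:
    """|J_r x - J_t((t/r) x + (1 - t/r) J_r x)|; should vanish."""
    lhs = J(r, x)
    rhs = J(t, (t / r) * x + (1.0 - t / r) * J(r, x))
    return abs(lhs - rhs)

gap = max(identity_gap(x, r, t)
          for x in (-3.0, 0.5, 7.0) for r in (0.5, 2.0) for t in (0.1, 1.0, 4.0))
```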

The norm of $E$ is said to be Gâteaux differentiable if

$\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t}$

exists for each $x, y$ in its unit sphere $S = \{x \in E : \|x\| = 1\}$. Such an $E$ is called a smooth Banach space. The norm is said to be uniformly Gâteaux differentiable if, for each $y \in S$, the limit is attained uniformly for $x \in S$.

A Banach space $E$ is said to be uniformly convex if, for every $\varepsilon \in (0, 2]$, there exists $\delta_\varepsilon > 0$ such that $\|x\| \le 1$, $\|y\| \le 1$ and $\|x - y\| \ge \varepsilon$ imply $\left\|\frac{x + y}{2}\right\| \le 1 - \delta_\varepsilon$.

Let $l > 1$ and $M > 0$ be two fixed real numbers. Then a Banach space $E$ is uniformly convex if and only if there exists a continuous, strictly increasing, convex function $g : [0, \infty) \to [0, \infty)$ with $g(0) = 0$ such that

$\|\lambda x + (1 - \lambda) y\|^l \le \lambda \|x\|^l + (1 - \lambda) \|y\|^l - \omega_l(\lambda)\, g(\|x - y\|)$
(2.2)

for all $x, y \in B_M(0) = \{x \in E : \|x\| \le M\}$ and $\lambda \in [0, 1]$, where $\omega_l(\lambda) = \lambda^l (1 - \lambda) + \lambda (1 - \lambda)^l$. For more detail, see Xu.
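In the Hilbert-space case with $l = 2$ one has $\omega_2(\lambda) = \lambda(1-\lambda)$, and (2.2) holds with equality for $g(t) = t^2$ by the parallelogram-type identity $\|\lambda x + (1-\lambda)y\|^2 = \lambda\|x\|^2 + (1-\lambda)\|y\|^2 - \lambda(1-\lambda)\|x-y\|^2$. The following sketch, using $\mathbb{R}^2$ and a few sample points (demo assumptions), verifies this identity numerically.

```python
# In a Hilbert space (here R^2) with l = 2, omega_2(lam) = lam*(1 - lam),
# and (2.2) becomes an identity with g(t) = t^2:
#   ||lam*x + (1-lam)*y||^2
#     = lam*||x||^2 + (1-lam)*||y||^2 - lam*(1-lam)*||x - y||^2.

def norm2(v):
    """Squared Euclidean norm."""
    return sum(c * c for c in v)

def identity_error(x, y, lam):
    combo = [lam * a + (1 - lam) * b for a, b in zip(x, y)]
    lhs = norm2(combo)
    rhs = (lam * norm2(x) + (1 - lam) * norm2(y)
           - lam * (1 - lam) * norm2([a - b for a, b in zip(x, y)]))
    return abs(lhs - rhs)

err = max(identity_error((1.0, -2.0), (0.5, 3.0), lam) for lam in (0.1, 0.5, 0.9))
```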

By a gauge function we mean a continuous strictly increasing function $\varphi$ defined on $\mathbb{R}^+ := [0, \infty)$ such that $\varphi(0) = 0$ and $\lim_{r \to \infty} \varphi(r) = \infty$. The mapping $J_\varphi : E \to 2^{E^*}$ defined by

$J_\varphi(x) = \{f \in E^* : \langle x, f\rangle = \|x\|\,\|f\|,\ \|f\| = \varphi(\|x\|)\}, \quad \forall x \in E,$

is called the duality mapping with gauge function $\varphi$. In particular, the duality mapping with gauge function $\varphi(t) = t$, denoted by $J$, is referred to as the normalized duality mapping. The following property of the duality mapping is well known:

$J_\varphi(\lambda x) = \operatorname{sign}\lambda \left(\frac{\varphi(|\lambda| \cdot \|x\|)}{\|x\|}\right) J(x), \quad \forall x \in E \setminus \{0\},\ \lambda \in \mathbb{R},$
(2.3)

where $\mathbb{R}$ is the set of all real numbers; in particular, $J(-x) = -J(x)$ for all $x \in E$. It is well known that $E$ is smooth if and only if the normalized duality mapping $J$ is single-valued, and that in a Hilbert space $H$ the normalized duality mapping $J$ is the identity.

We say that a Banach space $E$ has a weakly continuous duality mapping if there exists a gauge function $\varphi$ such that the duality mapping $J_\varphi$ is single-valued and continuous from the weak topology to the weak∗ topology; that is, for any $\{x_n\} \subset E$ with $x_n \rightharpoonup x$, the sequence $\{J_\varphi(x_n)\}$ converges weakly∗ to $J_\varphi(x)$. For example, every $l^p$ space ($1 < p < \infty$) has a weakly continuous duality mapping with gauge function $\varphi(t) = t^{p-1}$.

Let LIM be a continuous linear functional on $l^\infty$ and let $(a_1, a_2, \ldots) \in l^\infty$. We write $\mathrm{LIM}_n(a_n)$ instead of $\mathrm{LIM}((a_1, a_2, \ldots))$. LIM is said to be a Banach limit if LIM satisfies $\|\mathrm{LIM}\| = \mathrm{LIM}_n(1) = 1$ and $\mathrm{LIM}_n(a_{n+1}) = \mathrm{LIM}_n(a_n)$ for all $(a_1, a_2, \ldots) \in l^\infty$. If LIM is a Banach limit, the following are well known:

1. (i) for all $n \ge 1$, $a_n \le c_n$ implies $\mathrm{LIM}_n(a_n) \le \mathrm{LIM}_n(c_n)$;

2. (ii) $\mathrm{LIM}_n(a_{n+N}) = \mathrm{LIM}_n(a_n)$ for any fixed positive integer $N$;

3. (iii) $\liminf_{n\to\infty} a_n \le \mathrm{LIM}_n(a_n) \le \limsup_{n\to\infty} a_n$ for all $(a_1, a_2, \ldots) \in l^\infty$.

We need the following lemmas for the proofs of our main results.

Lemma 2.1 [23, 25]

Let $E$ be a real Banach space and let $\varphi$ be a continuous strictly increasing function on $\mathbb{R}^+$ such that $\varphi(0) = 0$ and $\lim_{r \to \infty} \varphi(r) = \infty$. Define

$\Phi(t) = \int_0^t \varphi(\tau)\, d\tau, \quad \forall t \in \mathbb{R}^+.$

Then the following inequalities hold:

$\Phi(kt) \le k \Phi(t), \quad 0 < k < 1, \qquad \Phi(\|x + y\|) \le \Phi(\|x\|) + \langle y, j_\varphi(x + y)\rangle, \quad \forall x, y \in E,$

where $j_\varphi(x + y) \in J_\varphi(x + y)$. In particular, if $E$ is smooth, then one has

$\|x + y\|^2 \le \|x\|^2 + 2\langle y, J(x + y)\rangle, \quad \forall x, y \in E.$

Lemma 2.2

Let $a \in \mathbb{R}$ and let a sequence $\{a_n\} \in l^\infty$ satisfy the condition $\mathrm{LIM}_n(a_n) \le a$ for all Banach limits LIM. If $\limsup_{n\to\infty}(a_{n+1} - a_n) \le 0$, then $\limsup_{n\to\infty} a_n \le a$.

Lemma 2.3

Let $\{s_n\}$ be a sequence of non-negative real numbers satisfying

$s_{n+1} \le (1 - \lambda_n) s_n + \lambda_n \delta_n, \quad \forall n \ge 0,$

where $\{\lambda_n\}$ and $\{\delta_n\}$ satisfy the following conditions:

1. (i) $\{\lambda_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty} \lambda_n = \infty$;

2. (ii) $\limsup_{n\to\infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} \lambda_n \delta_n < \infty$.

Then $\lim_{n\to\infty} s_n = 0$.
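The mechanism of Lemma 2.3 can be illustrated numerically. The particular choices $\lambda_n = 1/(n+2)$ (so that $\sum \lambda_n = \infty$) and $\delta_n = 1/(n+1) \to 0$ below are demo assumptions satisfying (i) and (ii).

```python
# Numerical illustration of Lemma 2.3: with lam_n = 1/(n+2) and
# delta_n = 1/(n+1) -> 0, the recursion
#   s_{n+1} = (1 - lam_n) * s_n + lam_n * delta_n
# drives s_n to 0, even though lam_n is not summable.

def run(s0: float, n_iter: int) -> float:
    s = s0
    for n in range(n_iter):
        lam = 1.0 / (n + 2)
        delta = 1.0 / (n + 1)
        s = (1 - lam) * s + lam * delta
    return s

s = run(1.0, 100_000)  # small after many iterations
```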

Also, we will use the next lemma which is of fundamental importance for our proof.

Lemma 2.4

Let $\{s_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{s_{n_i}\}$ of $\{s_n\}$ such that $s_{n_i} < s_{n_i + 1}$ for all $i \ge 0$. For every $n \ge n_0$ (with $n_0$ large enough that the set below is nonempty), define the sequence of integers $\{\tau(n)\}$ by

$\tau(n) := \max\{k \le n : s_k < s_{k+1}\}.$

Then $\{\tau(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying

$\lim_{n\to\infty} \tau(n) = \infty$

and, for all $n \ge n_0$, the following two estimates hold:

$s_{\tau(n)} \le s_{\tau(n)+1}, \qquad s_n \le s_{\tau(n)+1}.$
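The construction of $\tau(n)$ is easy to compute for a finite sample. The short non-monotone sequence below is an illustrative assumption; the sketch computes $\tau(n)$ by a direct scan and checks both estimates of Lemma 2.4.

```python
# tau(n) = max{k <= n : s_k < s_{k+1}} from Lemma 2.4, computed for a
# sample non-monotone sequence (an illustrative assumption), together
# with checks of s_{tau(n)} <= s_{tau(n)+1} and s_n <= s_{tau(n)+1}.

s = [5.0, 1.0, 4.0, 2.0, 3.0, 0.0, 6.0]

def tau(n: int) -> int:
    """Largest k <= n (with k+1 in range) such that s_k < s_{k+1}."""
    return max(k for k in range(min(n, len(s) - 2) + 1) if s[k] < s[k + 1])

n0 = 1  # first index at which the defining set is nonempty
taus = [tau(n) for n in range(n0, len(s))]
ok_nondecreasing = all(a <= b for a, b in zip(taus, taus[1:]))
ok_estimates = all(s[tau(n)] <= s[tau(n) + 1] and s[n] <= s[tau(n) + 1]
                   for n in range(n0, len(s)))
```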

## 3 Main results

In this section, we study the convergence of the following two iterative algorithms: for an initial value $x_0 \in C$,

$x_{n+1} = \beta_n x_n + (1 - \beta_n) J_{r_n}(\alpha_n f(x_n) + (1 - \alpha_n) x_n), \quad \forall n \ge 0$
(3.1)

and

$x_{n+1} = \alpha_n f(x_n) + \beta_n x_n + (1 - \alpha_n - \beta_n) J_{r_n} x_n, \quad \forall n \ge 0.$
(3.2)

Throughout this section, it is assumed that $A : D(A) \subset E \to 2^E$ is an accretive operator satisfying the range condition with $A^{-1}0 \neq \emptyset$; $C$ is a nonempty closed convex subset of $E$ such that $\overline{D(A)} \subset C \subset \bigcap_{r>0} R(I + rA)$; $f : C \to C$ is a contractive mapping with constant $k \in (0,1)$; and $\{\alpha_n\}, \{\beta_n\} \subset (0,1)$ and $\{r_n\} \subset (0,\infty)$ are sequences satisfying the conditions:

(C1) $\lim_{n\to\infty} \alpha_n = 0$;

(C2) $\sum_{n=0}^{\infty} \alpha_n = \infty$;

(C3) $0 \le \beta_n \le a < 1$ for some $a$;

(C4) $\liminf_{n\to\infty} r_n > 0$.
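The viscosity iteration (3.1) under (C1)-(C4) can be sketched for a concrete instance. All the data below are illustrative assumptions: $A(x) = x$ on the real line (so $J_r(x) = x/(1+r)$ and $A^{-1}0 = \{0\}$), the contraction $f(x) = x/2$ (constant $k = 1/2$), and the sequences $\alpha_n = 1/(n+1)$, $\beta_n = 1/2$, $r_n = 1$.

```python
# Viscosity iteration (3.1),
#   x_{n+1} = b_n*x_n + (1 - b_n)*J_{r_n}(a_n*f(x_n) + (1 - a_n)*x_n),
# for the demo data A(x) = x (J_r(x) = x/(1+r), zero at 0), f(x) = x/2,
# a_n = 1/(n+1), b_n = 0.5, r_n = 1, which satisfy (C1)-(C4).

def viscosity_iteration(x0: float, n_iter: int) -> float:
    x = x0
    for n in range(n_iter):
        a, b, r = 1.0 / (n + 1), 0.5, 1.0
        y = a * (x / 2.0) + (1 - a) * x      # y_n = a_n f(x_n) + (1-a_n) x_n
        x = b * x + (1 - b) * y / (1.0 + r)  # x_{n+1}, with J_1(y) = y/2
    return x

x = viscosity_iteration(4.0, 200)  # converges to the zero q = 0
```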

We need the following result for the existence of solutions of a certain variational inequality.

Theorem J [28, 29]

Let $E$ be a reflexive Banach space with a weakly continuous duality mapping $J_\varphi$ with gauge function $\varphi$. Let $C$ be a nonempty closed convex subset of $E$, let $T : C \to C$ be a nonexpansive mapping with $F(T) \neq \emptyset$, and let $f : C \to C$ be a contractive mapping with constant $k \in (0,1)$. For $t \in (0,1)$, let $\{x_t\}$ be the unique solution in $C$ of the equation $x_t = t f(x_t) + (1 - t) T x_t$. Then $\{x_t\}$ converges as $t \to 0^+$ strongly to a point $q$ in $F(T)$, which solves the variational inequality

$\langle (I - f)(q), J_\varphi(q - p)\rangle \le 0, \quad \forall p \in F(T).$

Using Theorem J, we have the following result.

Theorem 3.1 Let $E$ be a reflexive Banach space having a weakly continuous duality mapping $J_\varphi$ with gauge function $\varphi$. Let $\{x_n\}$ be a sequence generated by (3.1) and let $y_n = \alpha_n f(x_n) + (1 - \alpha_n) x_n$ for all $n \ge 0$. Let LIM be a Banach limit. If $\lim_{n\to\infty} \|y_n - J_{r_n} y_n\| = 0$, then

$\mathrm{LIM}_n(\langle (I - f)(q), J_\varphi(q - x_n)\rangle) \le 0,$

where $q := \lim_{t \to 0^+} x_t$ with $x_t$ being defined by $x_t = t f(x_t) + (1 - t) J_r x_t$ for each $r > 0$.

Proof Let $x_t$ be defined by $x_t = t f(x_t) + (1 - t) J_r x_t$ for $0 < t < 1$ and $r > 0$. Then, since $A^{-1}0 \neq \emptyset$, $\{x_t\}$ is bounded. In fact, for $p \in A^{-1}0 = F(J_r)$ with $r > 0$, we have

$\|x_t - p\| \le t \|f(x_t) - p\| + (1 - t) \|J_r x_t - J_r p\| \le t \|f(x_t) - p\| + (1 - t) \|x_t - p\|.$

This gives

$\|x_t - p\| \le \|f(x_t) - p\| \le \|f(x_t) - f(p)\| + \|f(p) - p\| \le k \|x_t - p\| + \|f(p) - p\|.$

Thus

$\|x_t - p\| \le \frac{1}{1 - k} \|f(p) - p\|, \quad t \in (0, 1),$

and hence $\{x_t\}$ is bounded. Also, by Theorem J, $\{x_t\}$ converges as $t \to 0^+$ strongly to a point in $F(J_r) = A^{-1}0$, which we denote by $q := \lim_{t \to 0^+} x_t$.

First we show that $\{x_n\}$ and $\{y_n\}$ are bounded. Since $A^{-1}0 \neq \emptyset$, we take $p \in A^{-1}0 = F(J_r)$ for all $r > 0$. From (3.1) and the nonexpansivity of $J_{r_n}$ for all $n$, we have

$\begin{aligned} \|x_{n+1} - p\| &\le \beta_n \|x_n - p\| + (1 - \beta_n) \|J_{r_n} y_n - p\| \\ &\le \beta_n \|x_n - p\| + (1 - \beta_n) \|\alpha_n f(x_n) + (1 - \alpha_n) x_n - p\| \\ &\le \beta_n \|x_n - p\| + (1 - \beta_n) [\alpha_n \|f(x_n) - f(p)\| + (1 - \alpha_n) \|x_n - p\| + \alpha_n \|f(p) - p\|] \\ &\le \beta_n \|x_n - p\| + (1 - \beta_n) [\alpha_n k \|x_n - p\| + (1 - \alpha_n) \|x_n - p\| + \alpha_n \|f(p) - p\|] \\ &= (1 - (1 - \beta_n)(1 - k)\alpha_n) \|x_n - p\| + (1 - \beta_n)(1 - k)\alpha_n \frac{\|f(p) - p\|}{1 - k} \\ &\le \max\left\{\|x_n - p\|, \frac{\|f(p) - p\|}{1 - k}\right\} \le \cdots \le \max\left\{\|x_0 - p\|, \frac{\|f(p) - p\|}{1 - k}\right\}. \end{aligned}$

Hence $\{x_n\}$ is bounded. Also, for $p \in A^{-1}0$, we get

$\begin{aligned} \|y_n - p\| &\le \alpha_n \|f(x_n) - f(p)\| + (1 - \alpha_n) \|x_n - p\| + \alpha_n \|f(p) - p\| \\ &\le \alpha_n k \|x_n - p\| + (1 - \alpha_n) \|x_n - p\| + \alpha_n \|f(p) - p\| \\ &= (1 - (1 - k)\alpha_n) \|x_n - p\| + (1 - k)\alpha_n \frac{\|f(p) - p\|}{1 - k} \le \max\left\{\|x_n - p\|, \frac{\|f(p) - p\|}{1 - k}\right\}, \end{aligned}$

and so $\{y_n\}$ is bounded. Moreover, since $\|J_{r_n} y_n - p\| \le \|y_n - p\|$, it follows that $\{J_{r_n} y_n\}$ is bounded. Also, $\{f(x_n)\}$ is bounded. As a consequence, with the control condition (C1), we get

$\|y_n - x_n\| = \alpha_n \|f(x_n) - x_n\| \le \alpha_n (\|f(x_n)\| + \|x_n\|) \to 0 \quad (n \to \infty).$
(3.3)

Since $\lim_{n\to\infty} \|y_n - J_{r_n} y_n\| = 0$, by (3.1) and (3.3) we obtain

$\lim_{n\to\infty} \|x_n - J_{r_n} y_n\| \le \lim_{n\to\infty} (\|x_n - y_n\| + \|y_n - J_{r_n} y_n\|) = 0$
(3.4)

and

$\lim_{n\to\infty} \|x_{n+1} - J_{r_n} y_n\| = \lim_{n\to\infty} \beta_n \|x_n - J_{r_n} y_n\| \le \lim_{n\to\infty} a \|x_n - J_{r_n} y_n\| = 0.$
(3.5)

Now, we show that $\mathrm{LIM}_n(\langle (I - f)(q), J_\varphi(q - x_n)\rangle) \le 0$, where $q = \lim_{t \to 0^+} x_t$ with $x_t$ being defined by $x_t = t f(x_t) + (1 - t) J_r x_t$ for each $r > 0$. Indeed, it follows that

$x_t - x_{n+1} = (1 - t)(J_r x_t - x_{n+1}) + t(f(x_t) - x_{n+1}).$

Applying Lemma 2.1, we have

$\Phi(\|x_t - x_{n+1}\|) \le \Phi((1 - t)\|J_r x_t - x_{n+1}\|) + t \langle f(x_t) - x_{n+1}, J_\varphi(x_t - x_{n+1})\rangle.$
(3.6)

Using the resolvent identity (2.1) and noting

$\begin{aligned} \|J_r y_n - J_{r_n} y_n\| &= \left\|J_r\left(\frac{r}{r_n} y_n + \left(1 - \frac{r}{r_n}\right) J_{r_n} y_n\right) - J_r y_n\right\| \\ &\le \left|1 - \frac{r}{r_n}\right| \|y_n - J_{r_n} y_n\| \le \left|1 - \frac{r}{r_n}\right| (\|y_n - x_n\| + \|x_n - J_{r_n} y_n\|), \end{aligned}$

we observe also that

$\begin{aligned} \|J_r x_t - x_{n+1}\| &\le \|J_r x_t - J_r x_n\| + \|J_r x_n - J_r y_n\| + \|J_r y_n - J_{r_n} y_n\| + \|J_{r_n} y_n - x_{n+1}\| \\ &\le \|x_t - x_n\| + \|x_n - y_n\| + \left|1 - \frac{r}{r_n}\right| (\|y_n - x_n\| + \|x_n - J_{r_n} y_n\|) + \|J_{r_n} y_n - x_{n+1}\| \\ &= \|x_t - x_n\| + \varepsilon_n, \end{aligned}$

where $\varepsilon_n = (1 + |1 - \frac{r}{r_n}|)\|x_n - y_n\| + |1 - \frac{r}{r_n}|\,\|x_n - J_{r_n} y_n\| + \|x_{n+1} - J_{r_n} y_n\| \to 0$ as $n \to \infty$ (by (3.3), (3.4) and (3.5)), and

$\langle f(x_t) - x_{n+1}, J_\varphi(x_t - x_{n+1})\rangle = \langle f(x_t) - x_t, J_\varphi(x_t - x_{n+1})\rangle + \|x_t - x_{n+1}\|\, \varphi(\|x_t - x_{n+1}\|).$

Thus it follows from (3.6) that

$\begin{aligned} \Phi(\|x_t - x_{n+1}\|) \le{}& \Phi((1 - t)\|x_t - x_n\|) + (1 - t)\varepsilon_n \varphi(\|x_t - x_{n+1}\|) \\ &+ t\big(\langle f(x_t) - x_t, J_\varphi(x_t - x_{n+1})\rangle + \|x_t - x_{n+1}\|\, \varphi(\|x_t - x_{n+1}\|)\big). \end{aligned}$
(3.7)

Applying the Banach limit LIM to (3.7), we have

$\begin{aligned} \mathrm{LIM}_n(\Phi(\|x_t - x_{n+1}\|)) \le{}& \mathrm{LIM}_n(\Phi((1 - t)\|x_t - x_n\|)) + (1 - t)\,\mathrm{LIM}_n(\varepsilon_n \varphi(\|x_t - x_{n+1}\|)) \\ &+ t\,\mathrm{LIM}_n(\langle f(x_t) - x_t, J_\varphi(x_t - x_{n+1})\rangle) + t\,\mathrm{LIM}_n(\|x_t - x_{n+1}\|\, \varphi(\|x_t - x_{n+1}\|)). \end{aligned}$
(3.8)

Hence, noting $\lim_{n\to\infty} \varepsilon_n = 0$ and applying the property $\mathrm{LIM}_n(a_{n+1}) = \mathrm{LIM}_n(a_n)$ of the Banach limit to (3.8), we obtain

$\begin{aligned} \mathrm{LIM}_n(\langle x_t - f(x_t), J_\varphi(x_t - x_n)\rangle) &\le \frac{1}{t} \mathrm{LIM}_n\big(\Phi((1 - t)\|x_t - x_n\|) - \Phi(\|x_t - x_n\|)\big) + \mathrm{LIM}_n(\|x_t - x_n\|\, \varphi(\|x_t - x_n\|)) \\ &= -\frac{1}{t} \mathrm{LIM}_n\left(\int_{(1-t)\|x_t - x_n\|}^{\|x_t - x_n\|} \varphi(\tau)\, d\tau\right) + \mathrm{LIM}_n(\|x_t - x_n\|\, \varphi(\|x_t - x_n\|)) \\ &= \mathrm{LIM}_n\big(\|x_t - x_n\|(\varphi(\|x_t - x_n\|) - \varphi(\theta_n))\big) \end{aligned}$
(3.9)

for some $\theta_n$ satisfying $(1 - t)\|x_t - x_n\| \le \theta_n \le \|x_t - x_n\|$ (by the mean value theorem applied to the integral). Since $\varphi$ is uniformly continuous on compact intervals of $\mathbb{R}^+$ and

$\|x_t - x_n\| - \theta_n \le t\|x_t - x_n\| \le t\left(\frac{2}{1 - k}\|f(p) - p\| + \|x_0 - p\|\right) \to 0 \quad (t \to 0),$

we conclude from (3.9) and $q = \lim_{t \to 0^+} x_t$ that

$\begin{aligned} \mathrm{LIM}_n(\langle (I - f)(q), J_\varphi(q - x_n)\rangle) &\le \limsup_{t \to 0} \mathrm{LIM}_n(\langle x_t - f(x_t), J_\varphi(x_t - x_n)\rangle) \\ &\le \limsup_{t \to 0} \mathrm{LIM}_n\big(\|x_t - x_n\|(\varphi(\|x_t - x_n\|) - \varphi(\theta_n))\big) \le 0. \end{aligned}$

This completes the proof. □

By using Theorem 3.1, we establish the strong convergence of the Rockafellar-type iterative algorithm (3.1).

Theorem 3.2 Let $E$ be a uniformly convex Banach space having a weakly continuous duality mapping $J_\varphi$ with gauge function $\varphi$. Then the sequence $\{x_n\}$ generated by (3.1) converges strongly to $q \in A^{-1}0$, where $q$ is the unique solution of the variational inequality

$\langle (I - f)(q), J_\varphi(q - p)\rangle \le 0, \quad \forall p \in A^{-1}0.$
(3.10)

Proof First, we note that by Theorem J there exists a solution $q$ of the variational inequality

$\langle (I - f)(q), J_\varphi(q - p)\rangle \le 0, \quad \forall p \in A^{-1}0,$

where $q = \lim_{t \to 0^+} x_t \in A^{-1}0$ with $x_t$ being defined by $x_t = t f(x_t) + (1 - t) J_r x_t$ for each $r > 0$ and $0 < t < 1$. From now on, we put $y_n = \alpha_n f(x_n) + (1 - \alpha_n) x_n$ for all $n \ge 0$.

We know that $\|x_n - p\| \le \max\{\|x_0 - p\|, \frac{1}{1-k}\|f(p) - p\|\}$ for all $n \ge 0$ and all $p \in A^{-1}0$, and that $\{x_n\}$, $\{y_n\}$, $\{f(x_n)\}$ and $\{J_{r_n} y_n\}$ are bounded, by the proof of Theorem 3.1.

First, by using similar arguments with $u = f(x_n)$ and inequality (2.2) (with $l = 2$, $\lambda = \frac{1}{2}$), we have

$\|x_{n+1} - q\|^2 \le \|x_n - q\|^2 + \alpha_n \|f(x_n) - q\|^2 - (1 - \beta_n)\frac{1}{4}\, g(\|y_n - J_{r_n} y_n\|),$

where $g : [0, \infty) \to [0, \infty)$ is the continuous strictly increasing convex function in (2.2). From condition (C3), it follows that $1 - \beta_n \ge 1 - a > 0$ for all $n \ge 0$ and

$(1 - a)\frac{1}{4}\, g(\|y_n - J_{r_n} y_n\|) - \alpha_n \|f(x_n) - q\|^2 \le \|x_n - q\|^2 - \|x_{n+1} - q\|^2.$
(3.11)

In order to prove that $\lim_{n\to\infty} \|x_n - q\| = 0$, we consider two possible cases, as in the proof of Yu.

Case 1. Assume that $\{\|x_n - q\|\}$ is a monotone sequence. In other words, for $n_0$ large enough, $\{\|x_n - q\|\}$ is either nondecreasing or nonincreasing. Hence $\{\|x_n - q\|\}$ converges (since it is bounded). Thus, by (3.11), we obtain

$\lim_{n\to\infty} g(\|y_n - J_{r_n} y_n\|) = 0.$

Thus, from the property of the function $g$ in (2.2), it follows that

$\lim_{n\to\infty} \|y_n - J_{r_n} y_n\| = 0.$

Now, we proceed with the following steps.

Step 1. We know from (3.4) and (3.5) that $\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0$.

Step 2. We show that $\limsup_{n\to\infty} \langle (I - f)(q), J_\varphi(q - x_n)\rangle \le 0$. To this end, put

$a_n := \langle (I - f)(q), J_\varphi(q - x_n)\rangle, \quad n \ge 0.$

Then Theorem 3.1 implies that $\mathrm{LIM}_n(a_n) \le 0$ for any Banach limit LIM. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that

$\limsup_{n\to\infty} (a_{n+1} - a_n) = \lim_{j\to\infty} (a_{n_j+1} - a_{n_j})$

and $x_{n_j} \rightharpoonup z \in E$. This implies that $x_{n_j+1} \rightharpoonup z$, since $\{x_n\}$ is weakly asymptotically regular by Step 1. From the weak continuity of the duality mapping $J_\varphi$, we have

$w\text{-}\lim_{j\to\infty} J_\varphi(q - x_{n_j+1}) = w\text{-}\lim_{j\to\infty} J_\varphi(q - x_{n_j}) = J_\varphi(q - z)$

and so

$\limsup_{n\to\infty} (a_{n+1} - a_n) = \lim_{j\to\infty} \langle (I - f)(q), J_\varphi(q - x_{n_j+1}) - J_\varphi(q - x_{n_j})\rangle = 0.$

Then Lemma 2.2 implies that $\limsup_{n\to\infty} a_n \le 0$, that is,

$\limsup_{n\to\infty} \langle (I - f)(q), J_\varphi(q - x_n)\rangle \le 0.$

Step 3. We show that $\limsup_{n\to\infty} \langle (I - f)(q), J_\varphi(q - y_n)\rangle \le 0$. In fact, let $\{y_{n_i}\}$ be a subsequence of $\{y_n\}$ such that $y_{n_i} \rightharpoonup v \in E$ and

$\limsup_{n\to\infty} \langle (I - f)(q), J_\varphi(q - y_n)\rangle = \lim_{i\to\infty} \langle (I - f)(q), J_\varphi(q - y_{n_i})\rangle.$

Since $\lim_{n\to\infty} \|x_n - y_n\| = 0$ by (3.3) in the proof of Theorem 3.1, we also have $x_{n_i} \rightharpoonup v$. From the weak continuity of $J_\varphi$, it follows that

$w\text{-}\lim_{i\to\infty} J_\varphi(q - y_{n_i}) = w\text{-}\lim_{i\to\infty} J_\varphi(q - x_{n_i}) = J_\varphi(q - v).$

Hence, by Step 2, we have

$\begin{aligned} \limsup_{n\to\infty} \langle (I - f)(q), J_\varphi(q - y_n)\rangle &= \lim_{i\to\infty} \langle (I - f)(q), J_\varphi(q - y_{n_i}) - J_\varphi(q - x_{n_i})\rangle + \lim_{i\to\infty} \langle (I - f)(q), J_\varphi(q - x_{n_i})\rangle \\ &= \lim_{i\to\infty} \langle (I - f)(q), J_\varphi(q - x_{n_i})\rangle \le \limsup_{n\to\infty} \langle (I - f)(q), J_\varphi(q - x_n)\rangle \le 0. \end{aligned}$

Step 4. We show that $\lim_{n\to\infty} \|x_n - q\| = 0$. Indeed, by using (3.1), we obtain

$\begin{aligned} \|x_{n+1} - q\|\, \varphi(\|x_{n+1} - q\|) &= \langle \beta_n (x_n - q) + (1 - \beta_n)(J_{r_n} y_n - q), J_\varphi(x_{n+1} - q)\rangle \\ &\le \beta_n \|x_n - q\|\, \varphi(\|x_{n+1} - q\|) + (1 - \beta_n) \|y_n - q\|\, \varphi(\|x_{n+1} - q\|) \end{aligned}$

and so

$\|x_{n+1} - q\| \le \beta_n \|x_n - q\| + (1 - \beta_n) \|y_n - q\|.$
(3.12)

Since

$y_n - q = \alpha_n (f(x_n) - f(q)) + \alpha_n (f(q) - q) + (1 - \alpha_n)(x_n - q),$

by Lemma 2.1 we also get

$\begin{aligned} \Phi(\|y_n - q\|) &\le \Phi(\alpha_n \|f(x_n) - f(q)\| + (1 - \alpha_n) \|x_n - q\|) + \alpha_n \langle f(q) - q, J_\varphi(y_n - q)\rangle \\ &\le \Phi(\alpha_n k \|x_n - q\| + (1 - \alpha_n) \|x_n - q\|) + \alpha_n \langle f(q) - q, J_\varphi(y_n - q)\rangle \\ &\le (1 - (1 - k)\alpha_n)\, \Phi(\|x_n - q\|) + \alpha_n \langle f(q) - q, J_\varphi(y_n - q)\rangle. \end{aligned}$
(3.13)

As a consequence, since $\Phi$ in Lemma 2.1 is an increasing convex function with $\Phi(0) = 0$, by (3.12) and (3.13) we have

$\begin{aligned} \Phi(\|x_{n+1} - q\|) &\le \Phi(\beta_n \|x_n - q\| + (1 - \beta_n) \|y_n - q\|) \le \beta_n \Phi(\|x_n - q\|) + (1 - \beta_n) \Phi(\|y_n - q\|) \\ &\le \beta_n \Phi(\|x_n - q\|) + (1 - \beta_n)(1 - (1 - k)\alpha_n)\, \Phi(\|x_n - q\|) + (1 - \beta_n)\alpha_n \langle f(q) - q, J_\varphi(y_n - q)\rangle \\ &= (1 - (1 - \beta_n)(1 - k)\alpha_n)\, \Phi(\|x_n - q\|) + (1 - \beta_n)\alpha_n \langle f(q) - q, J_\varphi(y_n - q)\rangle. \end{aligned}$
(3.14)

Put

$\lambda_n = (1 - \beta_n)(1 - k)\alpha_n \quad \text{and} \quad \delta_n = \frac{1}{1 - k} \langle (I - f)(q), J_\varphi(q - y_n)\rangle.$

From conditions (C1)-(C3) and Step 3, it follows that $\lambda_n \to 0$, $\sum_{n=0}^{\infty} \lambda_n = \infty$ and $\limsup_{n\to\infty} \delta_n \le 0$. Since (3.14) reduces to

$\Phi(\|x_{n+1} - q\|) \le (1 - \lambda_n)\, \Phi(\|x_n - q\|) + \lambda_n \delta_n,$

from Lemma 2.3 we conclude that $\lim_{n\to\infty} \Phi(\|x_n - q\|) = 0$, and hence $\lim_{n\to\infty} \|x_n - q\| = 0$.

Case 2. Assume that $\{\|x_n - q\|\}$ is not a monotone sequence. Then we can define a sequence of integers $\{\tau(n)\}$ for all $n \ge n_0$ (for some $n_0$ large enough) by

$\tau(n) := \max\{k \in \mathbb{N} : k \le n,\ \|x_k - q\| < \|x_{k+1} - q\|\}.$

Clearly, $\{\tau(n)\}$ is a nondecreasing sequence such that $\tau(n) \to \infty$ as $n \to \infty$ and

$\|x_{\tau(n)} - q\| \le \|x_{\tau(n)+1} - q\|$

for all $n \ge n_0$. In this case, we derive from (3.11) that

$\lim_{n\to\infty} g(\|y_{\tau(n)} - J_{r_{\tau(n)}} y_{\tau(n)}\|) = 0.$

So, by the property of the function $g$ in (2.2), we have

$\lim_{n\to\infty} \|y_{\tau(n)} - J_{r_{\tau(n)}} y_{\tau(n)}\| = 0.$

From (3.3), (3.4) and (3.5), we also have

$\lim_{n\to\infty} \|x_{\tau(n)} - y_{\tau(n)}\| = 0, \qquad \lim_{n\to\infty} \|x_{\tau(n)} - J_{r_{\tau(n)}} y_{\tau(n)}\| = 0$

and

$\lim_{n\to\infty} \|x_{\tau(n)+1} - J_{r_{\tau(n)}} y_{\tau(n)}\| = 0.$

By using the same argument as in Theorem 3.1 with $\{x_{\tau(n)}\}$, $\{y_{\tau(n)}\}$ and $\{J_{r_{\tau(n)}} y_{\tau(n)}\}$, we obtain

$\mathrm{LIM}_n(\langle (I - f)(q), J_\varphi(q - x_{\tau(n)})\rangle) \le 0.$

Moreover, by using the same argument as in Steps 1-4 of Case 1 with $\{x_{\tau(n)}\}$, $\{y_{\tau(n)}\}$ and $\{J_{r_{\tau(n)}} y_{\tau(n)}\}$, we obtain the following:

Step 1′. $\lim_{n\to\infty} \|x_{\tau(n)+1} - x_{\tau(n)}\| = 0$;

Step 2′. $\limsup_{n\to\infty} \langle (I - f)(q), J_\varphi(q - x_{\tau(n)})\rangle \le 0$;

Step 3′. $\limsup_{n\to\infty} \langle (I - f)(q), J_\varphi(q - y_{\tau(n)})\rangle \le 0$;

Step 4′. $\lim_{n\to\infty} \Phi(\|x_{\tau(n)} - q\|) = 0$ and $\lim_{n\to\infty} \Phi(\|x_{\tau(n)+1} - q\|) = 0$. Hence

$\lim_{n\to\infty} \|x_{\tau(n)} - q\| = 0 \quad \text{and} \quad \lim_{n\to\infty} \|x_{\tau(n)+1} - q\| = 0.$

From Lemma 2.4, we have

$\|x_n - q\| \le \|x_{\tau(n)+1} - q\|.$

Therefore, $\lim_{n\to\infty} \|x_n - q\| = 0$. This completes the proof. □

By taking $\beta_n = 0$ in Theorem 3.2, we obtain the following result, which extends Corollary 3.4 of Zhang and Song to the viscosity iteration method.

Corollary 3.1 Let $E$ be a uniformly convex Banach space having a weakly continuous duality mapping $J_\varphi$ with gauge function $\varphi$. Let $\{x_n\}$ be a sequence generated by

$x_{n+1} = J_{r_n}(\alpha_n f(x_n) + (1 - \alpha_n) x_n), \quad \forall n \ge 0.$

Then $\{x_n\}$ converges strongly to $q \in A^{-1}0$, where $q$ is the unique solution of the variational inequality (3.10).

Theorem 3.3 Let $E$ be a uniformly convex Banach space having a uniformly Gâteaux differentiable norm. Then the sequence $\{x_n\}$ generated by (3.1) converges strongly to $q \in A^{-1}0$, where $q$ is the unique solution of the variational inequality

$\langle (I - f)(q), J(q - p)\rangle \le 0, \quad \forall p \in A^{-1}0.$
(3.15)

Proof We also note that there exists a solution $q$ of the variational inequality

$\langle (I - f)(q), J(q - p)\rangle \le 0, \quad \forall p \in A^{-1}0,$

where $q = \lim_{t \to 0^+} x_t \in A^{-1}0$ with $x_t$ being defined by $x_t = t f(x_t) + (1 - t) J_r x_t$ for each $r > 0$ and $0 < t < 1$. From now on, we put $y_n = \alpha_n f(x_n) + (1 - \alpha_n) x_n$ for $n \ge 0$.

We also know that $\{x_n\}$, $\{y_n\}$, $\{J_{r_n} y_n\}$ and $\{f(x_n)\}$ are bounded, by the proof of Theorem 3.1.

As in the proof of Theorem 3.2, we divide the proof into several steps. We only include the differences.

Step 1. By considering two cases as in the proof of Theorem 3.2, we have $\lim_{n\to\infty} \|y_n - J_{r_n} y_n\| = 0$ and $\lim_{n\to\infty} \|y_{\tau(n)} - J_{r_{\tau(n)}} y_{\tau(n)}\| = 0$, where $\tau(n)$ is as in Case 2 of the proof of Theorem 3.2.

Step 2. (1) In the case where $\lim_{n\to\infty} \|y_n - J_{r_n} y_n\| = 0$, we show that

$\limsup_{n\to\infty} \langle (I - f)(q), J(q - y_n)\rangle \le 0.$

To prove this, let $\{y_{n_j}\}$ be a subsequence of $\{y_n\}$ such that

$\limsup_{n\to\infty} \langle (I - f)(q), J(q - y_n)\rangle = \lim_{j\to\infty} \langle (I - f)(q), J(q - y_{n_j})\rangle$

and $y_{n_j} \rightharpoonup z$ for some $z \in E$. Since

$x_t - y_n = (1 - t)(J_r x_t - y_n) + t(f(x_t) - y_n),$

by Lemma 2.1 we have

$\|x_t - y_n\|^2 \le (1 - t)^2 \|J_r x_t - y_n\|^2 + 2t \langle f(x_t) - y_n, J(x_t - y_n)\rangle.$

Using the resolvent identity (2.1) and noting

$\|J_r y_n - J_{r_n} y_n\| = \left\|J_r\left(\frac{r}{r_n} y_n + \left(1 - \frac{r}{r_n}\right) J_{r_n} y_n\right) - J_r y_n\right\| \le \left|1 - \frac{r}{r_n}\right| \|y_n - J_{r_n} y_n\|,$

we observe also that

$\begin{aligned} \|J_r x_t - y_n\| &\le \|J_r x_t - J_r y_n\| + \|J_r y_n - J_{r_n} y_n\| + \|J_{r_n} y_n - y_n\| \\ &\le \|x_t - y_n\| + \left(1 + \left|1 - \frac{r}{r_n}\right|\right) \|y_n - J_{r_n} y_n\| = \|x_t - y_n\| + \varepsilon_n, \end{aligned}$

where $\varepsilon_n = (1 + |1 - \frac{r}{r_n}|)\|y_n - J_{r_n} y_n\| \to 0$ as $n \to \infty$ (by Step 1 and condition (C4)). Putting

$a_j(t) = (1 - t)^2 \varepsilon_{n_j}(2\|x_t - y_{n_j}\| + \varepsilon_{n_j}) \to 0 \quad (j \to \infty)$

and using Lemma 2.1, we obtain

$\begin{aligned} \|x_t - y_{n_j}\|^2 &\le (1 - t)^2 \|J_r x_t - y_{n_j}\|^2 + 2t \langle f(x_t) - y_{n_j}, J(x_t - y_{n_j})\rangle \\ &\le (1 - t)^2 (\|x_t - y_{n_j}\| + \varepsilon_{n_j})^2 + 2t \langle f(x_t) - x_t, J(x_t - y_{n_j})\rangle + 2t \|x_t - y_{n_j}\|^2 \\ &\le (1 - t)^2 \|x_t - y_{n_j}\|^2 + a_j(t) + 2t \langle f(x_t) - x_t, J(x_t - y_{n_j})\rangle + 2t \|x_t - y_{n_j}\|^2. \end{aligned}$

The last inequality implies

$\langle x_t - f(x_t), J(x_t - y_{n_j})\rangle \le \frac{t}{2} \|x_t - y_{n_j}\|^2 + \frac{1}{2t} a_j(t).$

It follows that

$\lim_{j\to\infty} \langle x_t - f(x_t), J(x_t - y_{n_j})\rangle \le \frac{t}{2} M,$
(3.16)

where $M > 0$ is a constant such that $M \ge \|x_t - y_n\|^2$ for all $n \ge 0$ and $t \in (0, 1)$. Taking the limit as $t \to 0$ in (3.16), and noticing that the two limits are interchangeable because $J$ is uniformly continuous on bounded subsets of $E$ from the strong topology of $E$ to the weak∗ topology of $E^*$, we have

$\limsup_{n\to\infty} \langle (I - f)(q), J(q - y_n)\rangle = \lim_{j\to\infty} \langle (I - f)(q), J(q - y_{n_j})\rangle \le 0.$

(2) In the case where $\lim_{n\to\infty} \|y_{\tau(n)} - J_{r_{\tau(n)}} y_{\tau(n)}\| = 0$, by using the same argument with $\{y_{\tau(n)}\}$ and $\{J_{r_{\tau(n)}} y_{\tau(n)}\}$, we also have $\limsup_{n\to\infty} \langle (I - f)(q), J(q - y_{\tau(n)})\rangle \le 0$.

Step 3. (1) In the case where $\lim_{n\to\infty} \|y_n - J_{r_n} y_n\| = 0$, we conclude that $\lim_{n\to\infty} \|x_n - q\| = 0$. Indeed, by using (3.1) and applying Lemma 2.1, we obtain

$\begin{aligned} \|x_{n+1} - q\|^2 &= \langle \beta_n (x_n - q) + (1 - \beta_n)(J_{r_n} y_n - q), J(x_{n+1} - q)\rangle \\ &\le \beta_n \|x_n - q\|\, \|x_{n+1} - q\| + (1 - \beta_n) \|y_n - q\|\, \|x_{n+1} - q\| \\ &\le \beta_n \frac{\|x_n - q\|^2 + \|x_{n+1} - q\|^2}{2} + (1 - \beta_n) \frac{\|y_n - q\|^2 + \|x_{n+1} - q\|^2}{2} \end{aligned}$

and

$\begin{aligned} \|y_n - q\|^2 &= \langle \alpha_n f(x_n) + (1 - \alpha_n) x_n - q, J(y_n - q)\rangle \\ &= \langle \alpha_n (f(x_n) - f(q)) + (1 - \alpha_n)(x_n - q) + \alpha_n (f(q) - q), J(y_n - q)\rangle \\ &\le (\alpha_n k \|x_n - q\| + (1 - \alpha_n) \|x_n - q\|)\, \|y_n - q\| + \alpha_n \langle f(q) - q, J(y_n - q)\rangle \\ &\le (1 - (1 - k)\alpha_n) \frac{\|x_n - q\|^2 + \|y_n - q\|^2}{2} + \alpha_n \langle f(q) - q, J(y_n - q)\rangle. \end{aligned}$

Thus

$\|x_{n+1} - q\|^2 \le \beta_n \|x_n - q\|^2 + (1 - \beta_n) \|y_n - q\|^2$
(3.17)

and

$\|y_n - q\|^2 \le \left(1 - \frac{2(1 - k)\alpha_n}{1 + (1 - k)\alpha_n}\right) \|x_n - q\|^2 + \frac{2\alpha_n}{1 + (1 - k)\alpha_n} \langle f(q) - q, J(y_n - q)\rangle.$
(3.18)

Combining (3.17) and (3.18) yields

$\begin{aligned} \|x_{n+1} - q\|^2 &\le \beta_n \|x_n - q\|^2 + (1 - \beta_n)\left(1 - \frac{2(1 - k)\alpha_n}{1 + (1 - k)\alpha_n}\right) \|x_n - q\|^2 + \frac{2(1 - \beta_n)\alpha_n}{1 + (1 - k)\alpha_n} \langle f(q) - q, J(y_n - q)\rangle \\ &= \left(1 - (1 - \beta_n) \frac{2(1 - k)\alpha_n}{1 + (1 - k)\alpha_n}\right) \|x_n - q\|^2 + (1 - \beta_n) \frac{2(1 - k)\alpha_n}{(1 + (1 - k)\alpha_n)(1 - k)} \langle f(q) - q, J(y_n - q)\rangle. \end{aligned}$
(3.19)

Put

$\lambda_n = (1 - \beta_n) \frac{2(1 - k)\alpha_n}{1 + (1 - k)\alpha_n} \quad \text{and} \quad \delta_n = \frac{1}{1 - k} \langle (I - f)(q), J(q - y_n)\rangle.$

From conditions (C1)-(C3) and (1) of Step 2, it follows that $\lambda_n \to 0$, $\sum_{n=0}^{\infty} \lambda_n = \infty$ and $\limsup_{n\to\infty} \delta_n \le 0$. Since (3.19) reduces to

$\|x_{n+1} - q\|^2 \le (1 - \lambda_n) \|x_n - q\|^2 + \lambda_n \delta_n,$

from Lemma 2.3 we conclude that $\lim_{n\to\infty} \|x_n - q\| = 0$.

(2) In the case where $\lim_{n\to\infty} \|y_{\tau(n)} - J_{r_{\tau(n)}} y_{\tau(n)}\| = 0$, by using the same argument with $\{x_{\tau(n)}\}$, $\{y_{\tau(n)}\}$ and $\{J_{r_{\tau(n)}} y_{\tau(n)}\}$ and (2) of Step 2, we can obtain

$\lim_{n\to\infty} \|x_{\tau(n)} - q\| = 0 \quad \text{and} \quad \lim_{n\to\infty} \|x_{\tau(n)+1} - q\| = 0.$

From Lemma 2.4, we have

$\|x_n - q\| \le \|x_{\tau(n)+1} - q\|.$

Therefore, $\lim_{n\to\infty} \|x_n - q\| = 0$. This completes the proof. □

By taking $β n =0$, we also have the following.

Corollary 3.2 Let E be a uniformly convex Banach space having a uniformly Gâteaux differentiable norm. Let ${ x n }$ be a sequence generated by

$x n + 1 = J r n ( α n f ( x n ) + ( 1 − α n ) x n ) ,∀n≥0.$

Then ${ x n }$ converges strongly to $q∈ A − 1 0$, where q is the unique solution of the variational inequality (3.15).

Corollary 3.3 Let H be a Hilbert space. Assume that $A : D(A) \subset H \to 2^H$ is a monotone operator satisfying the range condition with $A^{-1}0 \ne \emptyset$ and that C is a nonempty closed convex subset of H such that $\overline{D(A)} \subset C \subset \bigcap_{r>0} R(I + rA)$. Let $\{x_n\}$ be a sequence generated by (3.1). Then $\{x_n\}$ converges strongly to $q \in A^{-1}0$, where q is the unique solution of the variational inequality

$\langle (I-f)(q), q - p \rangle \le 0, \quad \forall p \in A^{-1}0.$
(3.20)

By taking $\beta_n = 0$ in Corollary 3.3, we also have the following.

Corollary 3.4 Let H be a Hilbert space. Assume that $A : D(A) \subset H \to 2^H$ is a maximal monotone operator with $A^{-1}0 \ne \emptyset$. Let $\{x_n\}$ be a sequence generated by

$x_{n+1} = J_{r_n}\big( \alpha_n f(x_n) + (1 - \alpha_n) x_n \big), \quad \forall n \ge 0.$

Then $\{x_n\}$ converges strongly to $q \in A^{-1}0$, where q is the unique solution of the variational inequality (3.20).

Proof Since A is maximal monotone, A is monotone and satisfies the range condition $\overline{D(A)} \subset H = R(I + rA)$ for all $r > 0$. Putting $C = H$ in Corollary 3.3, we obtain the desired result. □

By using arguments similar to those in the proofs of Theorems 3.1, 3.2 and 3.3 and , we can obtain the following theorems for the Halpern-type iterative algorithm (3.2).

Theorem 3.4 Let E be a reflexive Banach space having a weakly continuous duality mapping $J_\varphi$ with gauge function φ. Let $\{x_n\}$ be a sequence generated by (3.2) and let LIM be a Banach limit. If $\lim_{n \to \infty} \| x_n - J_{r_n} x_n \| = 0$, then

$\mathrm{LIM}_n \big( \langle (I-f)(q), J_\varphi(q - x_n) \rangle \big) \le 0,$

where $q := \lim_{t \to 0^+} x_t$ with $x_t$ defined by $x_t = t f(x_t) + (1-t) J_r x_t$ for each $r > 0$.
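The net $x_t = t f(x_t) + (1-t) J_r x_t$ appearing in Theorem 3.4 can be computed by fixed-point iteration, since $x \mapsto t f(x) + (1-t) J_r x$ is a contraction for a contractive f. A minimal sketch on the real line, under the illustrative assumptions $A = I$ (so $J_r x = x/(1+r)$), $f(x) = x/2 + 1$ and $r = 1$, for which the map reduces to $x \mapsto x/2 + t$ with fixed point $x_t = 2t$:

```python
# Solve x_t = t f(x_t) + (1 - t) J_r x_t by fixed-point iteration on the real line.
# Assumed example data: A = I (resolvent J_r(x) = x / (1 + r)), f(x) = x/2 + 1, r = 1.
# For these choices the map is x -> x/2 + t, so x_t = 2t, and x_t -> 0
# (the unique zero of A) as t -> 0+, matching q = lim_{t -> 0+} x_t.

def x_t(t, r=1.0, iters=200):
    x = 0.0
    for _ in range(iters):
        x = t * (0.5 * x + 1.0) + (1 - t) * x / (1.0 + r)
    return x

print(x_t(0.01))    # approximately 0.02 = 2t
print(x_t(0.0001))  # approaches 0 as t -> 0+
```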

Theorem 3.5 Let E be a uniformly convex Banach space having a weakly continuous duality mapping $J_\varphi$ with gauge function φ. Then the sequence $\{x_n\}$ generated by (3.2) converges strongly to $q \in A^{-1}0$, where q is the unique solution of the variational inequality (3.10).

Theorem 3.6 Let E be a uniformly convex Banach space having a uniformly Gâteaux differentiable norm. Then the sequence $\{x_n\}$ generated by (3.2) converges strongly to $q \in A^{-1}0$, where q is the unique solution of the variational inequality (3.15).

Corollary 3.5 Let H be a Hilbert space. Assume that $A : D(A) \subset H \to 2^H$ is a maximal monotone operator with $A^{-1}0 \ne \emptyset$. Let $\{x_n\}$ be a sequence generated by (3.2). Then $\{x_n\}$ converges strongly to $q \in A^{-1}0$, where q is the unique solution of the variational inequality (3.20).

Remark 3.1

(1) Theorem 3.2 improves and develops Theorem 3.7 of Zhang and Song  in the following aspects.

(a)

The following gaps, which the authors of  overlooked, are corrected: there exist two subsequences $\{z_{n_i}\}$ and $\{z_{n_j}\}$ of $\{z_n\}$ satisfying

$\frac{1}{4}(1 - \beta_{n_i}) g\big( \| z_{n_i} - J_{r_{n_i}} z_{n_i} \| \big) \le \alpha_{n_i} \| u - p \|^2, \quad \forall i \ge 0$

and

$\frac{1}{4}(1 - \beta_{n_j}) g\big( \| z_{n_j} - J_{r_{n_j}} z_{n_j} \| \big) > \alpha_{n_j} \| u - p \|^2, \quad \forall j \ge 0,$

where $x_{n+1} = \beta_n x_n + (1 - \beta_n) J_{r_n} z_n$, $z_n = \alpha_n u + (1 - \alpha_n) x_n$ and $p \in A^{-1}0$.

(b)

The iterative scheme $z_n = \alpha_n u + (1 - \alpha_n) x_n$ in [, Theorem 3.7] is extended to the viscosity iterative scheme $y_n = \alpha_n f(x_n) + (1 - \alpha_n) x_n$, where $f : C \to C$ is a contractive mapping with a constant $k \in (0,1)$.

(c)

We utilize the weakly continuous duality mapping $J_\varphi$ with gauge function φ instead of the weakly continuous normalized duality mapping $J$ in [, Theorem 3.7].

(2)

Theorem 3.3 extends Theorem 3.8 of Zhang and Song  to the viscosity iterative method; our proof also corrects the gap in the proof of .

(3)

Theorem 3.2 and Theorem 3.3 improve Theorem 3.3 and Theorem 3.4 of Yu , which were stated without proofs, to the case of the viscosity iterative method, together with our proofs. Theorem 3.3 also develops and complements Theorem 4.2 of Song . In particular, the limit point $q \in A^{-1}0$ of the sequence $\{x_n\}$ in Theorem 3.3 is the unique solution of the variational inequality (3.15), in comparison with [, Theorem 4.2].

(4)

Theorem 3.5 and Theorem 3.6 extend Theorems 3.1 and 3.2 of Zhang and Song  and Theorem 3.1 and Theorem 3.2 of Yu  to the viscosity iterative method.

(5)

Corollaries 3.1 and 3.2 improve the corresponding results of Zhang and Song  and Song et al. . Corollary 3.4 also develops the corresponding results of Xu  and Song and Yang .

(6)

As in [22, 31, 32], the contractive mapping f in our algorithms can be replaced by a weakly contractive mapping g (recall that a mapping $g : C \to C$ is said to be weakly contractive  if $\| g(x) - g(y) \| \le \| x - y \| - \psi(\| x - y \|)$ for all $x, y \in C$, where $\psi : [0, +\infty) \to [0, +\infty)$ is a continuous and strictly increasing function such that ψ is positive on $(0, \infty)$ and $\psi(0) = 0$).
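A concrete weakly contractive mapping, offered here only as an illustration (it does not appear in the paper), is $g(x) = x - x^2/2$ on $C = [0,1]$ with gauge $\psi(t) = t^2/2$: for $x \ge y \ge 0$ one has $|g(x) - g(y)| = (x-y)\big(1 - (x+y)/2\big) \le (x-y)\big(1 - (x-y)/2\big) = |x-y| - \psi(|x-y|)$. The inequality can be verified on a grid:

```python
# Check the weak contraction inequality |g(x) - g(y)| <= |x - y| - psi(|x - y|)
# for the illustrative example g(x) = x - x^2/2 on [0, 1], psi(t) = t^2/2.

def g(x):
    return x - 0.5 * x * x

def psi(t):
    # continuous, strictly increasing, positive on (0, inf), psi(0) = 0
    return 0.5 * t * t

# verify the inequality on a grid of points in [0, 1]
ok = all(
    abs(g(i / 100) - g(j / 100))
    <= abs(i / 100 - j / 100) - psi(abs(i / 100 - j / 100)) + 1e-12
    for i in range(101)
    for j in range(101)
)
print(ok)  # True
```

Note that every weakly contractive mapping is nonexpansive, while a contraction with constant $k$ is weakly contractive with $\psi(t) = (1-k)t$, so the replacement strictly enlarges the admissible class.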

## References

1. Benavides TD, Acedo GL, Xu HK: Iterative solutions for zeros of accretive operators. Math. Nachr. 2003, 248–249: 62–71. 10.1002/mana.200310003
2. Bruck RE: A strongly convergent iterative method for the solution of $0 \in Ux$ for a maximal monotone operator U in Hilbert space. J. Math. Anal. Appl. 1974, 48: 114–126. 10.1016/0022-247X(74)90219-4
3. Brézis H, Lions PL: Produits infinis de résolvantes. Isr. J. Math. 1978, 29: 329–345. 10.1007/BF02761171
4. Jung JS, Takahashi W: Dual convergence theorems for the infinite products of resolvents in Banach spaces. Kodai Math. J. 1991, 14: 358–365. 10.2996/kmj/1138039461
5. Jung JS, Takahashi W: On the asymptotic behavior of infinite products of resolvents in Banach spaces. Nonlinear Anal. 1993, 20: 469–479. 10.1016/0362-546X(93)90034-P
6. Reich S: On infinite products of resolvents. Atti Accad. Naz. Lincei, Rend. Cl. Sci. Fis. Mat. Nat. 1977, 63: 338–340.
7. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056
8. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022
9. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program., Ser. A 2000, 87: 189–202.
10. Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. 10.1006/jath.2000.3493
11. Kamimura S, Takahashi W: Iterative schemes for approximating solutions of accretive operators in Banach spaces. Sci. Math. 2000, 3: 107–115.
12. Kamimura S, Takahashi W: Weak and strong convergence of solutions of accretive operator inclusions and applications. Set-Valued Anal. 2000, 8: 361–374. 10.1023/A:1026592623460
13. Halpern B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73: 957–961. 10.1090/S0002-9904-1967-11864-0
14. Mann WR: Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4: 506–510. 10.1090/S0002-9939-1953-0054846-3
15. Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. 10.1007/s10898-006-9002-7
16. Song Y, Yang C: A note on a paper ‘A regularization method for the proximal point algorithm’. J. Glob. Optim. 2009, 43: 115–125.
17. Song Y: New iterative algorithms for zeros of accretive operators. J. Korean Math. Soc. 2009, 46: 83–97. 10.4134/JKMS.2009.46.1.083
18. Zhang Q, Song Y: Halpern type proximal point algorithm of accretive operators. Nonlinear Anal. 2012, 75: 1859–1868. 10.1016/j.na.2011.09.036
19. Xu HK: Inequality in Banach spaces with applications. Nonlinear Anal. 1991, 16: 1127–1138. 10.1016/0362-546X(91)90200-K
20. Yu Y: Convergence analysis of a Halpern type algorithm for accretive operators. Nonlinear Anal. 2012, 75: 5027–5031. 10.1016/j.na.2012.04.017
21. Maingé P-E: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z
22. Song Y, Kang JI, Cho YJ: On iteration methods for zeros of accretive operators in Banach spaces. Appl. Math. Comput. 2010, 216: 1007–1017. 10.1016/j.amc.2010.01.124
23. Cioranescu I: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Kluwer Academic, Dordrecht; 1990.
24. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Leiden; 1976.
25. Agarwal RP, O’Regan D, Sahu DR: Fixed Point Theory for Lipschitzian-Type Mappings with Applications. Springer, New York; 2009.
26. Shioji N, Takahashi W: Strong convergence of approximated sequences for nonexpansive mappings in Banach spaces. Proc. Am. Math. Soc. 1997, 125: 3641–3645. 10.1090/S0002-9939-97-04033-1
27. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332
28. Chen R, Zhu Z: Viscosity approximation fixed points for nonexpansive and m-accretive operators. Fixed Point Theory Appl. 2006., 2006: Article ID 81325
29. Jung JS: Convergence theorems of iterative algorithms for a family of finite nonexpansive mappings in Banach space. Taiwan. J. Math. 2007, 11: 883–902.
30. Jung JS: Viscosity approximation methods for a family of finite nonexpansive mappings in Banach spaces. Nonlinear Anal. 2006, 64: 2536–2552. 10.1016/j.na.2005.08.032
31. Jung JS: Convergence of composite iterative methods for finding zeros of accretive operators. Nonlinear Anal. 2009, 71: 1736–1746. 10.1016/j.na.2009.01.010
32. Jung JS: Strong convergence of iterative schemes for zeros of accretive operators in reflexive Banach spaces. Fixed Point Theory Appl. 2010., 2010: Article ID 103465
33. Alber YI, Guerre-Delabriere S, Zelenko L: Principle of weakly contractive maps in metric spaces. Commun. Appl. Nonlinear Anal. 1998, 5: 45–68.

## Acknowledgements

The author would like to thank the anonymous referees for their valuable comments and suggestions, which improved the presentation of this manuscript. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012000895).

## Author information


### Corresponding author

Correspondence to Jong Soo Jung.

### Competing interests

The author declares that they have no competing interests.

## Rights and permissions


Jung, J.S. Some results on Rockafellar-type iterative algorithms for zeros of accretive operators. J Inequal Appl 2013, 255 (2013). https://doi.org/10.1186/1029-242X-2013-255


### Keywords

• Rockafellar proximal point algorithm
• accretive operator
• resolvent
• zeros
• nonexpansive mapping
• contractive mapping
• fixed points
• variational inequalities
• weakly continuous duality mapping 