
An algorithm with general errors for the zero point of monotone mappings in Banach spaces

Abstract

In this paper, we introduce a proximal point iterative algorithm with general errors for monotone mappings in Banach spaces. We prove that the proposed algorithm converges strongly to a zero point of a monotone mapping. The theorems in this paper improve and unify most of the results that have been proposed for this important class of nonlinear mappings.

MSC: 47H09, 47H10, 47L25.

1 Introduction

Let $E$ be a real Banach space with dual $E^*$. The normalized duality mapping $J : E \to 2^{E^*}$ is defined by

$$Jx = \big\{ f \in E^* : \langle x, f \rangle = \|x\|^2 = \|f\|^2 \big\},$$

where $\langle \cdot , \cdot \rangle$ denotes the generalized duality pairing between $E$ and $E^*$. It is well known that $E$ is smooth if and only if $J$ is single-valued, and that if $E$ is uniformly smooth then $J$ is uniformly continuous on bounded subsets of $E$. Moreover, if $E$ is a reflexive and strictly convex Banach space with a strictly convex dual, then $J^{-1}$ is single-valued, one-to-one, surjective, uniformly continuous on bounded subsets, and coincides with the duality mapping from $E^*$ into $E$; here $J J^{-1} = I_{E^*}$ and $J^{-1} J = I_E$. $D(A)$ denotes the domain of a mapping $A$.

A mapping $A : D(A) \subset E \to E^*$ is said to be monotone if for each $x, y \in D(A)$, the following inequality holds:

$$\langle x - y, Ax - Ay \rangle \ge 0.$$

The class of monotone mappings is one of the most important classes of nonlinear mappings. The mapping $A$ is said to be maximal monotone if its graph is not properly contained in the graph of any other monotone operator. Over the past several decades, many authors have studied the existence of zeros of maximal monotone mappings and the convergence of algorithms for finding them. A variety of problems, for example, convex optimization, linear programming, monotone inclusions, and elliptic differential equations, can be formulated as finding a zero of a maximal monotone operator. The proximal point algorithm is recognized as a powerful and successful algorithm for finding such zeros. In Hilbert spaces, many authors have studied monotone mappings by viscosity approximation methods and obtained a series of good results; see [1–20] and the references therein.

Let $C$ be a closed convex subset of a Banach space $E$. A mapping $A : C \to E$ is called accretive if there exists $j(x - y) \in J(x - y)$ such that

$$\langle j(x - y), Ax - Ay \rangle \ge 0.$$

The mapping $A$ is called $m$-accretive if it is accretive and $R(I + rA)$, the range of $I + rA$, is all of $E$ for every $r > 0$; an accretive mapping $A$ is said to satisfy the range condition if

$$D(A) \subset C \subset \bigcap_{r > 0} R(I + rA),$$
(1.1)

for some nonempty, closed, and convex subset $C$ of a real Banach space $E$ (see [11, 18, 21]). Denote the zero set of $A$ by

$$A^{-1}(0) = \{ z \in D(A) : 0 \in Az \}.$$

Accretive mappings and monotone mappings have different natures in Banach spaces, which are more general than Hilbert spaces. For the original problem of finding a solution to the inclusion $0 \in Az$, Rockafellar [22] introduced the following algorithm:

$$z_k + e_k \in z_{k+1} + c_k A z_{k+1},$$
(1.2)

where $\{e_k\}$ is a sequence of errors and $\{c_k\}$ is a sequence of positive reals. Rockafellar obtained the weak convergence of algorithm (1.2).
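For intuition, step (1.2) can be rewritten as $z_{k+1} = (I + c_k A)^{-1}(z_k + e_k)$ and sketched numerically in the simplest Hilbert setting. The mapping $A(z) = z$, the constant step $c_k = 1$, and the summable error sequence below are illustrative assumptions, not data from the paper:

```python
# Sketch of Rockafellar's inexact proximal point iteration (1.2),
# z_{k+1} = (I + c_k A)^{-1}(z_k + e_k), in the Hilbert case E = R.
# Assumed toy data: A(z) = z (unique zero 0), c_k = 1, errors e_k = 1/(k+1)^2.

z = 8.0
for k in range(200):
    e_k = 1.0 / (k + 1) ** 2
    z = (z + e_k) / (1.0 + 1.0)   # resolvent of A = identity with c_k = 1
assert abs(z) < 1e-3              # z_k approaches the zero point 0
```

Each step halves the distance to the zero point while the errors are summable, which is why the iterates still converge despite the perturbations.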

When $A$ is a maximal monotone mapping in a Hilbert space, Xu [23] proposed the following regularization of the proximal algorithm:

$$z_{k+1} = J_{c_k}^{A}\big(\lambda_k u + (1 - \lambda_k) z_k + e_k\big),$$

where $J_{c_k}^{A} = (I + c_k A)^{-1}$ is the resolvent of $A$, and proved that the generated sequence converges strongly to a point in $A^{-1}(0)$.

Recently, Yao and Shahzad [24] proved that the sequences generated by the resolvent method

$$z_{k+1} = \gamma_k z_k + \delta_k J_{c_k}^{A}(z_k) + \lambda_k e_k,$$

where $J_{c_k}^{A} = (I + c_k A)^{-1}$ is the resolvent of $A$, converge strongly to a point in $A^{-1}(0)$.

On the other hand, for a finite family of $m$-accretive mappings, Zegeye and Shahzad [21] proved that, under appropriate conditions in Hilbert spaces, the Halpern-type iterative process defined by

$$x_{n+1} = \alpha_n u + (1 - \alpha_n) S_{r_n} x_n, \quad n \ge 0,$$

where $S_r = a_0 I + a_1 J_r^{1} + a_2 J_r^{2} + \cdots + a_N J_r^{N}$ with $J_r^{i} = (I + r A_i)^{-1}$, $a_i \in (0,1)$ for $i = 0, 1, 2, \ldots, N$, and $\sum_{i=0}^{N} a_i = 1$, for accretive mappings $\{A_i, i = 1, 2, \ldots, N\}$, converges strongly to a point in $\bigcap_{i=1}^{N} A_i^{-1}(0)$. Very recently, Kimura and Takahashi [25] defined the following mapping in a Banach space:

$$y_i = J^{-1}\big(\alpha_i J x_i + (1 - \alpha_i) J S_i x_i\big),$$

where $\{S_i\}$ is a sequence of nonexpansive mappings of $C$ into itself. Then $\{J y_i\}$ converges weakly, and $\{J x_i - J S_i x_i\}$ converges strongly to $0$, provided $\varphi(x_0, y_i) \le \varphi(x_0, x_i)$ for all $i \in \mathbb{N}$.

Motivated and inspired by the above results, our concern is the following: is it possible to construct a new sequence with general errors in Banach spaces that converges strongly to a zero of a monotone operator? Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$, and let $A : C \to E^*$ be a continuous monotone mapping satisfying the range condition (2.1) with $A^{-1}(0) \ne \emptyset$. It is our purpose in this paper to introduce a two-step iterative scheme which converges strongly to a zero of a monotone operator in Banach spaces. The theorems presented in this paper improve and extend the corresponding results of Yao and Shahzad [24] and Zegeye and Shahzad [21], as well as some other results in this direction.

2 Preliminaries

Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ with dual $E^*$. A monotone mapping $A$ is said to satisfy the range condition if

$$D(A) \subset C \subset \bigcap_{r > 0} J^{-1} R(J + rA)$$
(2.1)

for some nonempty, closed, and convex subset $C$ of a smooth, strictly convex, and reflexive Banach space $E$ (see [11, 18, 26]). In the sequel, the resolvent of a monotone mapping $A : C \to E^*$ is denoted by $S_r^A = (J + rA)^{-1} J$ for $r > 0$. It is well known that the fixed points of the operator $S_r^A$ are the zeros of the monotone mapping $A$; denote $F := A^{-1}(0)$. We recall that a Banach space $E$ has the Kadec-Klee property if for any sequence $\{x_n\} \subset E$ and $x \in E$ with $x_n \rightharpoonup x$ and $\|x_n\| \to \|x\|$, one has $x_n \to x$ as $n \to \infty$. Every uniformly convex Banach space enjoys the Kadec-Klee property.

Lemma 2.1 [26]

Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ with dual $E^*$, and let $A \subset E \times E^*$ be a monotone mapping satisfying (2.1). Let $S_r^A$ be the resolvent of $A$ for $r \in (0, \infty)$. If $\{x_n\}$ is a bounded sequence in $C$ such that $S_r^A x_n \rightharpoonup z$, then $z \in A^{-1}(0)$.

Lemma 2.2 For $\lambda, \mu > 0$, the resolvent identity

$$S_\lambda^A x = S_\mu^A \Big( J^{-1}\Big( \frac{\mu}{\lambda} J x + \Big(1 - \frac{\mu}{\lambda}\Big) J S_\lambda^A x \Big) \Big)$$

holds.

Proof Since $S_\lambda^A x = (J + \lambda A)^{-1} J x$, we have $Jx \in (J + \lambda A) S_\lambda^A x$, that is, $\frac{1}{\lambda}(Jx - J S_\lambda^A x) \in A S_\lambda^A x$. Hence $\frac{\mu}{\lambda} Jx + (1 - \frac{\mu}{\lambda}) J S_\lambda^A x \in (J + \mu A) S_\lambda^A x$, which is exactly the stated identity. □
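In the Hilbert-space specialization, $J = I$ and $S_r^A = (I + rA)^{-1}$, so the identity can be checked numerically. The toy mapping $A(x) = x$ on $\mathbb{R}$, whose resolvent is $x/(1+r)$, is an illustrative assumption:

```python
# Numerical check of the resolvent identity of Lemma 2.2 in the Hilbert
# case, where J = I and S_r = (I + r*A)^{-1}.  Assumed toy mapping:
# A(x) = x on R, so S_r(x) = x / (1 + r).

def S(r, x):
    return x / (1.0 + r)          # (I + r*A)^{-1} x  for  A = identity

lam, mu, x = 0.7, 2.3, 5.0
lhs = S(lam, x)
rhs = S(mu, (mu / lam) * x + (1.0 - mu / lam) * S(lam, x))
assert abs(lhs - rhs) < 1e-9      # both sides equal S_lambda(x)
```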

Let $E$ be a smooth Banach space with dual $E^*$. The Lyapunov function $\varphi : E \times E \to \mathbb{R}$, introduced by Alber [11], is defined by

$$\varphi(y, x) = \|y\|^2 - 2\langle y, Jx \rangle + \|x\|^2, \quad x, y \in E.$$

Obviously, by the definition of the mapping $J$, $\varphi(\cdot, \cdot) \ge 0$.

Lemma 2.3 [12]

Let $E$ be a smooth and strictly convex Banach space, and let $C$ be a nonempty, closed, and convex subset of $E$. Let $A \subset E \times E^*$ be a monotone mapping satisfying (2.1) with $A^{-1}(0)$ nonempty, and let $S_r^A$ be the resolvent of $A$ for some $r > 0$. Then, for each $r > 0$,

$$\varphi\big(p, S_r^A x\big) + \varphi\big(S_r^A x, x\big) \le \varphi(p, x), \quad \forall p \in A^{-1}(0),\ x \in C.$$

Let $E$ be a reflexive, strictly convex, and smooth Banach space, and let $C$ be a nonempty, closed, and convex subset of $E$. The generalized projection, introduced by Alber [11], is the mapping $\Pi_C : E \to C$ that assigns to an arbitrary point $x \in E$ the minimizer $\bar{x} = \Pi_C x$ of $\varphi(\cdot, x)$ over $C$, that is, the solution of the minimization problem

$$\varphi(\bar{x}, x) = \min\{ \varphi(y, x) : y \in C \}.$$

Lemma 2.4 [11]

Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real reflexive Banach space $E$, and let $x \in E$. Then, for all $y \in C$,

$$\varphi(y, \Pi_C x) + \varphi(\Pi_C x, x) \le \varphi(y, x).$$
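In a Hilbert space, $\varphi(y, x) = \|y - x\|^2$ and $\Pi_C$ is the metric projection, so Lemma 2.4 becomes the familiar obtuse-angle inequality and can be checked directly. The interval $C = [0, 1]$ below is an illustrative assumption:

```python
# Hilbert-space check of Lemma 2.4, where phi(y, x) = |y - x|^2 and the
# generalized projection Pi_C reduces to the metric projection P_C.
# Assumed set: C = [0, 1] on the real line, so P_C is clipping.

def P_C(x):
    return min(max(x, 0.0), 1.0)

def phi(y, x):
    return (y - x) ** 2

ok = all(
    phi(y, P_C(x)) + phi(P_C(x), x) <= phi(y, x) + 1e-12
    for x in (-2.0, 0.3, 1.7, 5.0)
    for y in (0.0, 0.25, 0.9, 1.0)   # y ranges over points of C
)
assert ok
```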

Lemma 2.5 [11]

Let $C$ be a nonempty, closed, and convex subset of a real smooth Banach space $E$, and let $\Pi_C : E \to C$ be the generalized projection. Let $x \in E$. Then $x_0 = \Pi_C x$ if and only if

$$\langle z - x_0, Jx - Jx_0 \rangle \le 0, \quad \forall z \in C.$$

We make use of the function $V : E \times E^* \to \mathbb{R}$ defined by $V(x, x^*) = \varphi(x, J^{-1} x^*)$; that is, $V(x, x^*) = \|x\|^2 - 2\langle x, x^* \rangle + \|x^*\|^2$ for all $x \in E$ and $x^* \in E^*$.

Lemma 2.6 [11]

Let $E$ be a reflexive, strictly convex, and smooth real Banach space with dual $E^*$. Then

$$V(x, x^*) + 2\langle J^{-1} x^* - x, y^* \rangle \le V(x, x^* + y^*)$$

for all $x \in E$ and $x^*, y^* \in E^*$.

Lemma 2.7 [13]

Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the relation

$$a_{n+1} \le (1 - \theta_n) a_n + \sigma_n, \quad n \ge 0,$$

where $\{\theta_n\}$ is a sequence in $(0,1)$ and $\{\sigma_n\}$ is a real sequence such that

  1. (i)

    $\sum_{n=0}^{\infty} \theta_n = \infty$;

  2. (ii)

    $\limsup_{n\to\infty} \sigma_n / \theta_n \le 0$ or $\sum_{n=0}^{\infty} |\sigma_n| < \infty$.

Then $\lim_{n\to\infty} a_n = 0$.
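Lemma 2.7 is the workhorse behind the final convergence step in Section 3, and its mechanism is easy to see numerically. The parameter choice below ($\theta_n = 1/(n+2)$, $\sigma_n = \theta_n^2$) is one illustrative admissible instance of conditions (i)-(ii):

```python
# Numerical illustration of Lemma 2.7.  Assumed admissible parameters:
# theta_n = 1/(n+2), so sum(theta_n) diverges, and sigma_n = theta_n**2,
# so sigma_n/theta_n -> 0; the recursion then drives a_n to 0.

a = 10.0
for n in range(100_000):
    theta = 1.0 / (n + 2)
    sigma = theta ** 2
    a = (1.0 - theta) * a + sigma
assert a < 0.01
```

The divergent sum of $\theta_n$ is what kills the initial value $a_0$; the condition on $\sigma_n/\theta_n$ keeps the accumulated perturbations negligible.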

Lemma 2.8 [14]

Let $E$ be a uniformly convex and smooth real Banach space with dual $E^*$, and let $\{x_n\}$ and $\{y_n\}$ be two sequences in $E$. If either $\{x_n\}$ or $\{y_n\}$ is bounded and $\varphi(x_n, y_n) \to 0$ as $n \to \infty$, then $\|x_n - y_n\| \to 0$ as $n \to \infty$.

Lemma 2.9 [15]

Let $E$ be a uniformly convex and smooth real Banach space, and let $B_R(0)$ be a closed ball of $E$. Then there exists a continuous, strictly increasing, convex function $g : [0, \infty) \to [0, \infty)$ with $g(0) = 0$ such that

$$\|\alpha_0 x_0 + \alpha_1 x_1 + \cdots + \alpha_N x_N\|^2 \le \sum_{i=0}^{N} \alpha_i \|x_i\|^2 - \alpha_i \alpha_j g(\|x_i - x_j\|), \quad i, j \in \{0, 1, \ldots, N\},$$

for $\alpha_i \in (0,1)$ with $\sum_{i=0}^{N} \alpha_i = 1$ and $x_i \in B_R(0) := \{x \in E : \|x\| \le R\}$ for some $R > 0$.

3 Main results

In this section, we introduce our algorithm and state our main result.

Theorem 3.1 Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ which also enjoys the Kadec-Klee property. Let $A : C \to E^*$ be a continuous monotone mapping satisfying (2.1). Assume that $F := A^{-1}(0) \ne \emptyset$. For arbitrary $x_0, u \in C$, let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} y_n = \Pi_C\big(J^{-1}(\lambda_n J x_n + (1 - \lambda_n) J S_c^A x_n)\big), \\ x_{n+1} = J^{-1}\big(\alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A y_n + \varepsilon_n J e_n\big), \end{cases}$$
(3.1)

where $c \in (0, \infty)$, $\{e_n\} \subset E$ is an error sequence, $\lambda_n \in [0,1]$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$; $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  2. (ii)

    $\limsup_{n\to\infty} \lambda_n = 1$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$, $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence generated by (3.1) converges strongly to $\bar{x} = \Pi_F u$.

Proof From Lemma 2.3 and Kamimura et al. [12], we see that $A^{-1}(0)$ is closed and convex, so $\Pi_F u$ is well defined. First we prove that $\{x_n\}$ is bounded. Take $p \in F := A^{-1}(0)$. From (3.1), Lemmas 2.3 and 2.4, and the properties of $\varphi$, we have

$$\begin{aligned} \varphi(p, y_n) &= \varphi\big(p, \Pi_C(J^{-1}(\lambda_n J x_n + (1 - \lambda_n) J S_c^A x_n))\big) \\ &\le \varphi\big(p, J^{-1}(\lambda_n J x_n + (1 - \lambda_n) J S_c^A x_n)\big) \\ &= \|p\|^2 - 2\big\langle p, \lambda_n J x_n + (1 - \lambda_n) J S_c^A x_n \big\rangle + \big\|\lambda_n J x_n + (1 - \lambda_n) J S_c^A x_n\big\|^2 \\ &\le \|p\|^2 - 2\lambda_n \langle p, J x_n \rangle - 2(1 - \lambda_n)\big\langle p, J S_c^A x_n \big\rangle + \lambda_n \|x_n\|^2 + (1 - \lambda_n)\big\|S_c^A x_n\big\|^2 \\ &= \lambda_n \varphi(p, x_n) + (1 - \lambda_n)\varphi\big(p, S_c^A x_n\big) \\ &\le \lambda_n \varphi(p, x_n) + (1 - \lambda_n)\varphi(p, x_n) = \varphi(p, x_n). \end{aligned}$$
(3.2)

For $n \ge 0$, using the definition of $\varphi$, Lemma 2.3, and (3.2), we have

$$\begin{aligned} \varphi(p, x_{n+1}) &= \varphi\big(p, J^{-1}(\alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A y_n + \varepsilon_n J e_n)\big) \\ &= \|p\|^2 - 2\big\langle p, \alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A y_n + \varepsilon_n J e_n \big\rangle + \big\|\alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A y_n + \varepsilon_n J e_n\big\|^2 \\ &\le \|p\|^2 - 2\alpha_n \langle p, Ju \rangle - 2\beta_n \langle p, J x_n \rangle - 2\gamma_n \big\langle p, J S_c^A y_n \big\rangle - 2\varepsilon_n \langle p, J e_n \rangle \\ &\quad + \alpha_n \|u\|^2 + \beta_n \|x_n\|^2 + \gamma_n \big\|S_c^A y_n\big\|^2 + \varepsilon_n \|e_n\|^2 \\ &= \alpha_n \varphi(p, u) + \beta_n \varphi(p, x_n) + \gamma_n \varphi\big(p, S_c^A y_n\big) + \varepsilon_n \varphi(p, e_n) \\ &\le \alpha_n \varphi(p, u) + (\beta_n + \gamma_n)\varphi(p, x_n) + \varepsilon_n \varphi(p, e_n). \end{aligned}$$
(3.3)

Since $\limsup_{n\to\infty} \|e_n\| = 0$, we may assume that $\sup_n \varphi(p, e_n) \le M$ for some nonnegative constant $M$. Then, by induction,

$$\varphi(p, x_{n+1}) \le \max\big\{ \varphi(p, u), \varphi(p, x_0), M \big\}, \quad \forall n \ge 0,$$

which implies that $\{x_n\}$ is bounded. Consequently, $\{S_c^A x_n\}$, $\{S_c^A y_n\}$, and $\{y_n\}$ are bounded. Next we show that $\varphi(x_n, x_{n+1}) \to 0$.

Because the sequence $\{x_n\}$ is bounded and $E$ is reflexive, there exist a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ and $p_1 \in C$ such that $x_{n_j} \rightharpoonup p_1$. Denote $x^* = \Pi_C u$; then $x^* \in C$ is the unique element satisfying $\inf_{x \in C} \varphi(x, u) = \varphi(x^*, u)$. By the weak lower semicontinuity of the norm on $E$, we get

$$\varphi(x^*, u) \le \varphi(p_1, u) \le \liminf_{j\to\infty} \varphi(x_{n_j}, u) \le \limsup_{j\to\infty} \varphi(x_{n_j}, u) \le \inf_{x\in C} \varphi(x, u) = \varphi(x^*, u),$$
(3.4)

which implies that

$$\lim_{j\to\infty} \varphi(x_{n_j}, u) = \varphi(x^*, u) = \varphi(p_1, u) = \inf_{x\in C} \varphi(x, u).$$
(3.5)

Thus, from (3.5) and Lemma 2.5, we have

$$\langle z - x^*, J x^* - J u \rangle \ge 0, \quad \forall z \in C,$$
(3.6)
$$\langle z - p_1, J p_1 - J u \rangle \ge 0, \quad \forall z \in C.$$
(3.7)

Putting $z := p_1$ in (3.6) and $z := x^*$ in (3.7), we get

$$\langle p_1 - x^*, J x^* - J u \rangle \ge 0,$$
(3.8)
$$\langle x^* - p_1, J p_1 - J u \rangle \ge 0.$$
(3.9)

Adding (3.8) and (3.9), we have $\langle x^* - p_1, J p_1 - J x^* \rangle \ge 0$, i.e., $\langle x^* - p_1, J x^* - J p_1 \rangle \le 0$. Since $J$ is monotone, it follows that $\langle x^* - p_1, J x^* - J p_1 \rangle = 0$. Since the strict convexity of $E$ implies that $J$ is strictly monotone, we have $p_1 = x^*$. Consequently, $x_{n_j} \rightharpoonup x^*$.

Furthermore, by the definition of $\varphi$ and (3.5), we have

$$\lim_{n\to\infty}\big(\|x_n\|^2 - 2\langle x_n, Ju \rangle + \|u\|^2\big) = \|x^*\|^2 - 2\langle x^*, Ju \rangle + \|u\|^2,$$

which shows that $\lim_{n\to\infty} \|x_n\| = \|x^*\|$. In view of the Kadec-Klee property of $E$, we have $\|x_n - x^*\| \to 0$. Thus $\varphi(x_n, x_{n+1}) \to 0$.

Now we show that $x^* \in A^{-1}(0)$. Using the definition of $\varphi$, Lemma 2.3, and Lemma 2.9, we get

$$\begin{aligned} \varphi(p, x_{n+1}) &= \varphi\big(p, J^{-1}(\alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A y_n + \varepsilon_n J e_n)\big) \\ &\le \|p\|^2 - 2\big\langle p, \alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A y_n + \varepsilon_n J e_n \big\rangle \\ &\quad + \alpha_n \|u\|^2 + \beta_n \|x_n\|^2 + \gamma_n \big\|S_c^A y_n\big\|^2 + \varepsilon_n \|e_n\|^2 - \beta_n \gamma_n g\big(\big\|J x_n - J S_c^A y_n\big\|\big) \\ &= \alpha_n \varphi(p, u) + \beta_n \varphi(p, x_n) + \gamma_n \varphi\big(p, S_c^A y_n\big) + \varepsilon_n \varphi(p, e_n) - \beta_n \gamma_n g\big(\big\|J x_n - J S_c^A y_n\big\|\big) \\ &\le \alpha_n \varphi(p, u) + (1 - \alpha_n - \varepsilon_n)\varphi(p, x_n) + \varepsilon_n \varphi(p, e_n) - \beta_n \gamma_n g\big(\big\|J x_n - J S_c^A y_n\big\|\big). \end{aligned}$$

Because $x_n \to x^*$, the sequence $\{\varphi(p, x_n)\}$ is convergent; in view of conditions (i), (ii), and (iii), we get

$$g\big(\big\|J x_n - J S_c^A y_n\big\|\big) \to 0.$$
(3.10)

The property of the function $g$ implies that

$$\big\|J x_n - J S_c^A y_n\big\| \to 0, \quad n \to \infty.$$
(3.11)

Thus, since $J^{-1}$ is uniformly continuous on bounded sets, we obtain

$$\big\|x_n - S_c^A y_n\big\| \to 0, \quad n \to \infty.$$
(3.12)

From Lemma 2.4, the properties of $\varphi$, and condition (ii), we have

$$\begin{aligned} \varphi(x_n, y_n) &= \varphi\big(x_n, \Pi_C(J^{-1}(\lambda_n J x_n + (1 - \lambda_n) J S_c^A x_n))\big) \\ &\le \varphi\big(x_n, J^{-1}(\lambda_n J x_n + (1 - \lambda_n) J S_c^A x_n)\big) \\ &\le (1 - \lambda_n)\varphi\big(x_n, S_c^A x_n\big) + \lambda_n \varphi(x_n, x_n) \to 0. \end{aligned}$$
(3.13)

Using Lemma 2.8, we get $\|x_n - y_n\| \to 0$, and hence $y_n \to x^*$; from (3.12) we get $\|y_n - S_c^A y_n\| \to 0$, and from Lemma 2.1 we obtain $x^* \in F = A^{-1}(0)$.

Now take $\bar{x} = \Pi_F u$; then $\bar{x} \in F$, and by Lemma 2.5 and Lemma 2.6 we have

$$\begin{aligned} \varphi(\bar{x}, u) &= V(\bar{x}, Ju) \\ &\le V\big(\bar{x}, Ju - (Ju - J\bar{x})\big) - 2\big\langle u - \bar{x}, -(Ju - J\bar{x}) \big\rangle \\ &= \varphi(\bar{x}, \bar{x}) + 2\langle u - \bar{x}, Ju - J\bar{x} \rangle \\ &= 2\langle u - \bar{x}, Ju - J\bar{x} \rangle \le 0. \end{aligned}$$
(3.14)

Since $\varphi(\cdot, \cdot) \ge 0$, we have $\varphi(\bar{x}, u) = 0$.

Furthermore, from (3.3) we have

$$\begin{aligned} \varphi(\bar{x}, x_{n+1}) &\le (\beta_n + \gamma_n)\varphi(\bar{x}, x_n) + \alpha_n \varphi(\bar{x}, u) + \varepsilon_n \varphi(\bar{x}, e_n) \\ &\le (1 - \alpha_n - \varepsilon_n)\varphi(\bar{x}, x_n) + (\alpha_n + \varepsilon_n)\big(\varphi(\bar{x}, u) + \varphi(\bar{x}, e_n)\big). \end{aligned}$$
(3.15)

In view of conditions (i) and (iii), applying Lemma 2.7 to (3.15) gives $\varphi(\bar{x}, x_{n+1}) \to 0$ as $n \to \infty$. Consequently, from Lemma 2.8, $x_n \to \bar{x}$; in fact, $x^* = \bar{x}$. The proof is complete. □
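In the Hilbert-space specialization ($J = I$, $\Pi_C = P_C$), scheme (3.1) can be run numerically. The set $C = [0, 10]$, the mapping $A(x) = x - 4$, and the parameter sequences below are illustrative assumptions forming one admissible instance of conditions (i)-(iii):

```python
import math

# Hilbert-case numerical sketch of scheme (3.1), where J = I and
# Pi_C = P_C.  Assumed data: C = [0, 10], monotone A(x) = x - 4 with
# zero point 4, resolvent S_c(x) = (x + 4c)/(1 + c).

c, u, x = 1.0, 0.0, 9.0

def P_C(t):                        # projection onto C = [0, 10]
    return min(max(t, 0.0), 10.0)

def S(t):                          # resolvent (I + cA)^{-1}
    return (t + 4.0 * c) / (1.0 + c)

for n in range(1, 5000):
    lam = 1.0 - 1.0 / (n + 1)      # lambda_n -> 1            (condition (ii))
    alpha = 1.0 / (n + 1)          # alpha_n -> 0, divergent sum  (cond. (i))
    eps = 1.0 / (n + 1) ** 2
    beta = gamma = (1.0 - alpha - eps) / 2.0
    e_n = math.sin(n) / n          # error with |e_n| -> 0    (condition (iii))
    y = P_C(lam * x + (1.0 - lam) * S(x))
    x = alpha * u + beta * x + gamma * S(y) + eps * e_n

assert abs(x - 4.0) < 0.1          # x_n approaches the zero point of A
```

Since $F = \{4\}$ here, the limit $\Pi_F u = 4$ is independent of the anchor $u$; the anchor only shapes the transient behavior.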

Theorem 3.2 Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ which also enjoys the Kadec-Klee property. Let $A : C \to E^*$ be a continuous monotone mapping satisfying (2.1), and let $u \in C$ be fixed. Assume that $F := A^{-1}(0) \ne \emptyset$, and let $\{x_n\}$ be the sequence generated by $x_0 \in C$ and

$$x_{n+1} = J^{-1}\big(\alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A x_n + \varepsilon_n J e_n\big),$$
(3.16)

where $\{e_n\} \subset E$ is an error sequence and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$;

  2. (ii)

    $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$, $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence converges strongly to $\Pi_F u$.

Proof Putting λ n =1 in Theorem 3.1, we obtain the result. □

We note that the method of proof of Theorem 3.1 provides a convergence theorem for a finite family of continuous monotone mappings. In fact, we have the following theorem.

Theorem 3.3 Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ which also enjoys the Kadec-Klee property. Let $A_i : C \to E^*$, $i = 1, 2, \ldots, N$, be continuous monotone mappings satisfying (2.1). Assume that $F := \bigcap_{i=1}^{N} A_i^{-1}(0) \ne \emptyset$. For arbitrary $x_0, u \in C$, let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} y_n = \Pi_C\Big(J^{-1}\Big(\lambda_n J x_n + (1 - \lambda_n)\sum_{i=1}^{N} \mu_i J S_c^{A_i} x_n\Big)\Big), \\ x_{n+1} = J^{-1}\Big(\alpha_n J u + \beta_n J x_n + \gamma_n \sum_{i=1}^{N} \sigma_i J S_c^{A_i} y_n + \varepsilon_n J e_n\Big), \end{cases}$$
(3.17)

where $c \in (0, \infty)$, $\{e_n\} \subset E$ is an error sequence, $\lambda_n \in [0,1]$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$; $\sum_{i=1}^{N} \sigma_i = 1$, $\sum_{i=1}^{N} \mu_i = 1$, with $\mu_i \ge 0$, $\sigma_i \ge 0$;

  2. (ii)

    $\limsup_{n\to\infty} \lambda_n = 1$; $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$, $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence converges strongly to $\bar{x} = \Pi_F u$.

If in Theorem 3.1 and Theorem 3.2 we put $u \equiv 0$, we have the following corollaries.

Corollary 3.4 Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ which also enjoys the Kadec-Klee property. Let $A : C \to E^*$ be a continuous monotone mapping satisfying (2.1). Assume that $F := A^{-1}(0) \ne \emptyset$. For arbitrary $x_0 \in C$, let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} y_n = \Pi_C\big(J^{-1}(\lambda_n J x_n + (1 - \lambda_n) J S_c^A x_n)\big), \\ x_{n+1} = J^{-1}\big(\beta_n J x_n + \gamma_n J S_c^A y_n + \varepsilon_n J e_n\big), \end{cases}$$
(3.18)

where $c \in (0, \infty)$, $\{e_n\} \subset E$ is an error sequence, $\lambda_n \in [0,1]$, and $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$;

  2. (ii)

    $\limsup_{n\to\infty} \lambda_n = 1$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$, $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence converges strongly to $\bar{x} = \Pi_F(0)$.

Corollary 3.5 Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ which also enjoys the Kadec-Klee property. Let $A : C \to E^*$ be a continuous monotone mapping satisfying (2.1). Assume that $F := A^{-1}(0) \ne \emptyset$, and let $\{x_n\}$ be the sequence generated by

$$x_{n+1} = J^{-1}\big(\beta_n J x_n + \gamma_n J S_c^A x_n + \varepsilon_n J e_n\big),$$
(3.19)

with $x_0 \in C$, where $c > 0$, $\{e_n\} \subset E$ is an error sequence, and $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$;

  2. (ii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$, $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence converges strongly to $\Pi_F(0)$.

If $E$ is a Hilbert space, then $E$ is a uniformly convex and smooth real Banach space. In this case $J = I$ and $\varphi(x, y) = \|x - y\|^2$, and $\Pi_C = P_C$, the metric projection from $E$ onto $C$. It is well known that $P_C$ and $S_c^A$ are nonexpansive. Furthermore, (2.1) reduces to (1.1). Thus the following theorems hold.
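The nonexpansiveness of the resolvent can be illustrated on the real line. The mapping $A(x) = x^3$ below is an assumed example (monotone on $\mathbb{R}$); its resolvent has no closed form, so it is computed by bisection:

```python
# Illustration in E = R: the resolvent S_c = (I + cA)^{-1} of a monotone
# mapping is nonexpansive.  Assumed toy mapping: A(x) = x**3, monotone on
# R; S_c solves z + c*z**3 = x by bisection (the map is strictly increasing).

def resolvent(x, c=1.0, tol=1e-12):
    lo, hi = -abs(x) - 1.0, abs(x) + 1.0   # the root lies in this bracket
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mid + c * mid ** 3 < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

pairs = [(-2.0, 3.0), (0.5, 0.7), (-1.0, -4.0)]
ok = all(abs(resolvent(x) - resolvent(y)) <= abs(x - y) + 1e-9
         for x, y in pairs)
assert ok
```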

Theorem 3.6 Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $E$. Let $A : C \to E$ be a continuous monotone mapping satisfying (1.1), and let $f : C \to C$ be a contraction with contraction coefficient $\rho \in (0,1)$. Assume that $F := A^{-1}(0) \ne \emptyset$, and let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} y_n = P_C\big(\lambda_n x_n + (1 - \lambda_n) S_c^A x_n\big), \\ x_{n+1} = \alpha_n f(x_n) + \beta_n x_n + \gamma_n S_c^A y_n + \varepsilon_n e_n, \end{cases}$$
(3.20)

where $c > 0$, $\{e_n\} \subset E$ is an error sequence, $S_c^A = (I + cA)^{-1}$, $\lambda_n \in [0,1]$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$; $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  2. (ii)

    $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 1$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$ and $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence converges strongly to $\bar{x} = P_F f(\bar{x})$.

Theorem 3.7 Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $E$. Let $A : C \to E$ be a continuous monotone mapping satisfying (1.1), and let $f : C \to C$ be a contraction with contraction coefficient $\rho \in (0,1)$. Assume that $F := A^{-1}(0) \ne \emptyset$, and let $\{x_n\}$ be the sequence generated by $x_0 \in C$ and

$$x_{n+1} = \alpha_n f(x_n) + \beta_n x_n + \gamma_n S_c^A x_n + \varepsilon_n e_n,$$
(3.21)

where $c > 0$, $\{e_n\} \subset E$ is an error sequence, $S_c^A = (I + cA)^{-1}$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$; $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  2. (ii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$ and $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence converges strongly to $\bar{x} = P_F f(\bar{x})$.

If in Theorem 3.6 and Theorem 3.7 we take $f \equiv u$, a constant mapping, we have the following results.

Corollary 3.8 Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $E$. Let $A : C \to E$ be a continuous monotone mapping satisfying (1.1), and let $u \in C$ be fixed. Assume that $F := A^{-1}(0) \ne \emptyset$, and let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} y_n = P_C\big(\lambda_n x_n + (1 - \lambda_n) S_c^A x_n\big), \\ x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n S_c^A y_n + \varepsilon_n e_n, \end{cases}$$
(3.22)

where $c > 0$, $\{e_n\} \subset E$ is an error sequence, $S_c^A = (I + cA)^{-1}$, $\lambda_n \in [0,1]$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$; $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  2. (ii)

    $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 1$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$ and $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence converges strongly to $\bar{x} = P_F u$.

Theorem 3.9 Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $E$. Let $A : C \to E$ be a continuous monotone mapping satisfying (1.1), and let $u \in C$ be fixed. Assume that $F := A^{-1}(0) \ne \emptyset$, and let $\{x_n\}$ be the sequence generated by $x_0 \in C$ and

$$x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n S_c^A x_n + \varepsilon_n e_n,$$
(3.23)

where $c > 0$, $\{e_n\} \subset E$ is an error sequence, $S_c^A = (I + cA)^{-1}$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$; $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  2. (ii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$ and $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence converges strongly to $\bar{x} = P_F u$.
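Scheme (3.23) is directly computable. The monotone mapping $A(x) = x - a$ on $\mathbb{R}^2$ and the parameter sequences below are illustrative assumptions forming one admissible instance of conditions (i)-(ii):

```python
import math

# Hilbert-case numerical sketch of scheme (3.23) in E = R^2.  Assumed
# monotone mapping: A(x) = x - a with zero set {a}, whose resolvent is
# S_c(x) = (x + c*a)/(1 + c).

a = (1.0, 2.0)                     # the unique zero point of A
u = (0.0, 0.0)                     # anchor point
c = 1.0
x = (5.0, -5.0)

def S(v):                          # resolvent (I + cA)^{-1}
    return tuple((vi + c * ai) / (1.0 + c) for vi, ai in zip(v, a))

for n in range(1, 5000):
    alpha = 1.0 / (n + 1)          # alpha_n -> 0, divergent sum
    eps = 1.0 / (n + 1) ** 2       # limsup eps_n = 0
    beta = gamma = (1.0 - alpha - eps) / 2.0
    e = (math.sin(n) / n, math.cos(n) / n)   # |e_n| -> 0
    Sx = S(x)
    x = tuple(alpha * ui + beta * xi + gamma * si + eps * ei
              for ui, xi, si, ei in zip(u, x, Sx, e))

dist = math.dist(x, a)
assert dist < 0.1                  # x_n approaches P_F(u) = a
```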

Remark 3.10 In fact, if A is a maximal monotone mapping, then all the above results hold.

4 Applications

In this section, we study the problem of finding a minimizer of a continuously Fréchet differentiable convex functional in Banach spaces. Let $g$ be a continuously Fréchet differentiable convex functional whose gradient $\nabla g$ is continuous and monotone on $C$. Denote $B := \nabla g$; then the following theorem holds.

Theorem 4.1 Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ which also enjoys the Kadec-Klee property. Let $g$ be a continuously Fréchet differentiable convex functional such that its gradient $B := \nabla g$ is continuous and monotone, and assume that $F := \arg\min_{y \in C} g(y) := \{z \in C : g(z) = \min_{y \in C} g(y)\} \ne \emptyset$. For arbitrary $x_0, u \in C$, let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} y_n = \Pi_C\big(J^{-1}(\lambda_n J x_n + (1 - \lambda_n) J S_c^B x_n)\big), \\ x_{n+1} = J^{-1}\big(\alpha_n J u + \beta_n J x_n + \gamma_n J S_c^B y_n + \varepsilon_n J e_n\big), \end{cases}$$
(4.1)

where $c \in (0, \infty)$, $\{e_n\} \subset E$ is an error sequence, $\lambda_n \in [0,1]$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$; $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  2. (ii)

    $\limsup_{n\to\infty} \lambda_n = 1$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$, $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequence $\{x_n\}$ converges strongly to $\bar{x} = \Pi_F u$.
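The mechanism behind Theorem 4.1 is that the fixed points of the resolvent of $B = \nabla g$ are exactly the minimizers of $g$. In the Hilbert case this is the classical proximal point method, sketched below for the assumed toy functional $g(x) = (x-3)^2$:

```python
# Hilbert-case sketch of the idea behind Theorem 4.1: fixed points of the
# resolvent of B = grad g are the minimizers of g.  Assumed toy
# functional: g(x) = (x - 3)**2, so B(x) = 2*(x - 3) and
# (I + cB)^{-1}(x) = (x + 6*c) / (1 + 2*c).

c = 0.5
z = 10.0
for _ in range(200):
    z = (z + 6.0 * c) / (1.0 + 2.0 * c)   # proximal step z <- S_c(z)
assert abs(z - 3.0) < 1e-6                # z converges to argmin g = 3
```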

In addition, we can extend this to the equilibrium problem for a bifunction $\phi : C \times C \to \mathbb{R}$, where $C$ is a nonempty, closed, and convex subset of a smooth, strictly convex, and reflexive real Banach space $E$ with dual $E^*$. The problem is to find $x \in C$ such that $\phi(x, y) \ge 0$ for all $y \in C$; the set of solutions is denoted by $EP(\phi)$. The mapping $T_{r_n} : E \to C$ is defined as follows: for $x \in E$,

$$T_{r_n}(x) := \Big\{ z \in C : \phi(z, y) + \frac{1}{r_n}\big\langle j(y - z), z - x \big\rangle \ge 0,\ \forall y \in C \Big\}.$$

It is proved in [27] that $\{T_{r_n}\}$ is single-valued and nonexpansive. Furthermore, $F(T_{r_n}) = EP(\phi)$ is closed and convex if the bifunction $\phi$ satisfies: (A1) $\phi(x, x) = 0$, $\forall x \in C$; (A2) $\phi(x, y) + \phi(y, x) \le 0$, $\forall x, y \in C$; (A3) for each $x, y, z \in C$, $\limsup_{t \downarrow 0} \phi(tz + (1 - t)x, y) \le \phi(x, y)$; and (A4) for each $x \in C$, $y \mapsto \phi(x, y)$ is convex and lower semicontinuous.

The following theorems are connected with the problem of obtaining a common element of the sets of zeros of a monotone operator and an equilibrium problem.

Theorem 4.2 Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ which also enjoys the Kadec-Klee property. Let $\phi$ be a bifunction from $C \times C$ to $\mathbb{R}$ satisfying (A1)-(A4), and let $A : C \to E^*$ be a continuous monotone mapping satisfying (2.1). Assume that $F := A^{-1}(0) \cap EP(\phi) \ne \emptyset$. For arbitrary $x_0, u \in C$, let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} \phi(u_n, y) + \frac{1}{r_n}\big\langle j(y - u_n), u_n - x_n \big\rangle \ge 0, \quad \forall y \in C, \\ y_n = \Pi_C\big(J^{-1}(\lambda_n J u_n + (1 - \lambda_n) J S_c^A u_n)\big), \\ x_{n+1} = J^{-1}\big(\alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A y_n + \varepsilon_n J e_n\big), \end{cases}$$
(4.2)

where $c \in (0, \infty)$, $\{e_n\} \subset E$ is an error sequence, $\lambda_n \in [0,1]$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$; $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  2. (ii)

    $\limsup_{n\to\infty} \lambda_n = 1$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$, $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequences $\{x_n\}$ and $\{u_n\}$ converge strongly to $\bar{x} = \Pi_F u$.

Theorem 4.3 Let $C$ be a nonempty, closed, and convex subset of a uniformly smooth, strictly convex real Banach space $E$ which also enjoys the Kadec-Klee property. Let $\phi$ be a bifunction from $C \times C$ to $\mathbb{R}$ satisfying (A1)-(A4), and let $A : C \to E^*$ be a continuous monotone mapping satisfying (2.1). Assume that $F := A^{-1}(0) \cap EP(\phi) \ne \emptyset$. For arbitrary $x_0, u \in C$, let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} \phi(u_n, y) + \frac{1}{r_n}\big\langle j(y - u_n), u_n - x_n \big\rangle \ge 0, \quad \forall y \in C, \\ x_{n+1} = J^{-1}\big(\alpha_n J u + \beta_n J x_n + \gamma_n J S_c^A u_n + \varepsilon_n J e_n\big), \end{cases}$$
(4.3)

where $c \in (0, \infty)$, $\{e_n\} \subset E$ is an error sequence, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\varepsilon_n\}$ are sequences of nonnegative real numbers in $[0,1]$ such that

  1. (i)

    $\alpha_n + \beta_n + \gamma_n + \varepsilon_n = 1$, $\forall n \ge 0$;

  2. (ii)

    $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$;

  3. (iii)

    $\limsup_{n\to\infty} \varepsilon_n = 0$, $\limsup_{n\to\infty} \|e_n\| = 0$;

then the sequences $\{x_n\}$ and $\{u_n\}$ converge strongly to $\bar{x} = \Pi_F u$.

Remark 4.4 Our results extend and unify most of the results that have been proved for this important class of nonlinear operators. In particular, our theorems provide a convergence scheme to a zero point of a monotone mapping, which extends the results of Yao and Shahzad [24] to Banach spaces, more general than Hilbert spaces. Our results also complement the results of Zegeye and Shahzad [21], which are convergence results for accretive mappings. At the same time, Theorem 3.1 extends the results of Tang [9] and of Zegeye and Shahzad [18], which do not involve errors.

References

  1. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560


  2. Takahashi W, Ueda Y: On Reich’s strong convergence theorems for resolvents of accretive operators. J. Math. Anal. Appl. 1984, 104: 546–553. 10.1016/0022-247X(84)90019-2


  3. Song Y, Chen R: Strong convergence theorems on an iterative method for a family of finite non-expansive mappings. Appl. Math. Comput. 2006, 180: 275–287. 10.1016/j.amc.2005.12.013


  4. Zegeye H, Shahzad N: Strong convergence of an iterative method for pseudo-contractive and monotone mappings. J. Glob. Optim. 2012, 54: 173–184. 10.1007/s10898-011-9755-5


  5. Takahashi W: Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000. (in Japanese)


  6. Guler O: On the convergence of the proximal point algorithm for convex optimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022


  7. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002,66(2):240–256.


  8. Boikanyo OA, Morosanu G: A proximal point algorithm converging strongly for general errors. Optim. Lett. 2010, 4: 635–641. 10.1007/s11590-010-0176-z


  9. Tang Y: Strong convergence of viscosity approximation methods for the fixed-point of pseudo-contractive and monotone mappings. Fixed Point Theory Appl. 2013. 10.1186/1687-1812-2013-273


  10. Xu HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116: 659–678. 10.1023/A:1023073621589


  11. Alber Y: Metric and generalized projection operators in Banach spaces: properties and applications. Lecture Notes in Pure and Appl. Math. 178. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Edited by: Kartsatos A. Dekker, New York; 1996:15–50.


  12. Kamimura S, Kohsaka F, Takahashi W: Weak and strong convergence theorems for maximal monotone operators in a Banach spaces. Set-Valued Anal. 2004, 12: 417–429. 10.1007/s11228-004-8196-4


  13. Xu HK: Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65: 109–113. 10.1017/S0004972700020116


  14. Kamimura S, Takahashi W: Strong convergence of proximal-type algorithm in a Banach space. SIAM J. Optim. 2002, 13: 938–945. 10.1137/S105262340139611X


  15. Zegeye H, Ofoedu E, Shahzad N: Convergence theorems for equilibrium problem, variational inequality problem and countably infinite relatively quasi-nonexpansive mappings. Appl. Math. Comput. 2010, 216: 3439–3449. 10.1016/j.amc.2010.02.054


  16. Rockafellar RT: Monotone operators and the proximal point algorithm. Trans. Am. Math. Soc. 1970, 194: 75–88.


  17. Ceng LC, Xu HK, Yao JC: Viscosity approximation method for asymptotically nonexpansive mappings in Banach spaces. Nonlinear Anal. 2008, 69: 1402–1412. 10.1016/j.na.2007.06.040


  18. Zegeye H, Shahzad N: An algorithm for a common minimum-norm zero of a finite family of monotone mappings in Banach spaces. J. Inequal. Appl. 2013. 10.1186/1029-242X-2013-566


  19. Wen DJ: Projection methods for generalized system of nonconvex variational inequalities with different nonlinear operators. Nonlinear Anal. 2010,73(7):2292–2297. 10.1016/j.na.2010.06.010


  20. Tang Y: Viscosity approximation methods and strong convergence theorems for the fixed point of pseudocontractive and monotone mappings in Banach spaces. J. Appl. Math. 2013. 10.1155/2013/926078


  21. Zegeye H, Shahzad N: Strong convergence theorems for a common zero of a finite family of m -accretive mappings. Nonlinear Anal. 2007, 66: 1161–1169. 10.1016/j.na.2006.01.012


  22. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056


  23. Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. 10.1007/s10898-006-9002-7


  24. Yao YH, Shahzad N: Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 2012, 6: 621–628. 10.1007/s11590-011-0286-2


  25. Kimura Y, Takahashi W: On a hybrid method for a family of relatively nonexpansive mappings in a Banach space. J. Math. Anal. Appl. 2009, 357: 356–363. 10.1016/j.jmaa.2009.03.052


  26. Aoyama K, Kohsaka F, Takahashi W: Proximal point method for monotone operators in Banach spaces. Taiwan. J. Math. 2011,15(1):259–281.


  27. Yao YH, Cho YJ, Chen RD: An iterative algorithm for solving fixed point problems, variational inequality problems and mixed equilibrium problems. Nonlinear Anal. 2009, 71: 3363–3373. 10.1016/j.na.2009.01.236



Acknowledgements

The authors express their sincere gratitude to the referee and the editor for their valuable comments and suggestions. This article was funded by the National Science Foundation of China (11001287), the Natural Science Foundation Project of Chongqing (CSTC, 2014jcyjA00026), and the Science and Technology Research Project of Chongqing Municipal Education Commission (KJ1400614).


Corresponding author

Correspondence to Yan Tang.


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Tang, Y., Bao, Z. & Wen, D. An algorithm with general errors for the zero point of monotone mappings in Banach spaces. J Inequal Appl 2014, 484 (2014). https://doi.org/10.1186/1029-242X-2014-484


Keywords

  • monotone mappings
  • fixed point
  • viscosity approximation
  • resolvent mappings