Open Access

A hybrid approximation algorithm for finding common solutions of equilibrium problems, a finite family of variational inclusions, and fixed point problems in Hilbert spaces

Journal of Inequalities and Applications 2013, 2013:165

https://doi.org/10.1186/1029-242X-2013-165

Received: 14 April 2012

Accepted: 13 March 2013

Published: 10 April 2013

Abstract

In this paper, we introduce an iterative method for finding a common element of the set of fixed points of nonexpansive mappings, the set of solutions of a finite family of variational inclusions with set-valued maximal monotone mappings and inverse strongly monotone mappings, and the set of solutions of an equilibrium problem in Hilbert spaces. Furthermore, using our new iterative scheme, under suitable conditions, we prove some strong convergence theorems for approximating these common elements. The results presented in the paper improve and extend many recent important results.

Keywords

variational inclusions; inverse strongly monotone; maximal monotone mapping; fixed point; equilibrium problem

1 Introduction

Let H be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let C be a nonempty closed convex subset of H, and let F be a bifunction of $C \times C$ into $\mathbb{R}$, the set of real numbers. The equilibrium problem for $F : C \times C \to \mathbb{R}$ is to find $x \in C$ such that
$$F(x, y) \ge 0, \quad \forall y \in C.$$
(1.1)
The set of solutions of (1.1) is denoted by $EP(F)$. Recently, Combettes and Hirstoaga [1] introduced an iterative scheme of finding the best approximation to the initial data when $EP(F)$ was nonempty and proved a strong convergence theorem. Let $A : C \to H$ be a nonlinear mapping. The classical variational inequality, denoted by $VI(A, C)$, is to find $x \in C$ such that
$$\langle Ax, y - x\rangle \ge 0, \quad \forall y \in C.$$
(1.2)
The variational inequality has been extensively studied in the literature; see, for example, [2, 3] and the references therein. Recall that a mapping T of C into itself is called nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C.$$
A mapping $f : C \to C$ is called contractive if there exists a constant $\beta \in (0, 1)$ such that
$$\|f(x) - f(y)\| \le \beta\|x - y\|, \quad \forall x, y \in C.$$

We denote by $\mathrm{Fix}(T)$ the set of fixed points of T.

Some methods have been proposed to solve the equilibrium problem and the fixed point problem of nonexpansive mappings; see, for instance, [2, 4–6] and the references therein. Recently, Plubtieng and Punpaeng [6] introduced the following iterative scheme. Let $x_1 \in H$, and let $\{x_n\}$ and $\{u_n\}$ be sequences generated by
$$\begin{cases} F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in H,\\ x_{n+1} = \alpha_n\gamma f(x_n) + (I - \alpha_n A)Tu_n, & \forall n \in \mathbb{N}. \end{cases}$$
They proved that if the sequences $\{\alpha_n\}$ and $\{r_n\}$ of parameters satisfy appropriate conditions, then the sequences $\{x_n\}$ and $\{u_n\}$ both converge strongly to the unique solution z of the variational inequality
$$\langle(A - \gamma f)z, z - x\rangle \le 0, \quad \forall x \in \mathrm{Fix}(T) \cap EP(F),$$
which is the optimality condition for the minimization problem
$$\min_{x \in \mathrm{Fix}(T) \cap EP(F)} \frac{1}{2}\langle Ax, x\rangle - h(x),$$

where h is a potential function for γf.

Let $A : H \to H$ be a single-valued nonlinear mapping, and let $M : H \to 2^H$ be a set-valued mapping. We consider the following variational inclusion, which is to find a point $u \in H$ such that
$$\theta \in A(u) + M(u),$$
(1.3)
where θ is the zero vector in H. The set of solutions of problem (1.3) is denoted by $I(A, M)$. Let $A_i : H \to H$, $i = 1, 2, \ldots, N$, be single-valued nonlinear mappings, and let $M_i : H \to 2^H$, $i = 1, 2, \ldots, N$, be set-valued mappings. If $A \equiv 0$, then problem (1.3) becomes the inclusion problem introduced by Rockafellar [7]. If $M = \partial\delta_C$, where C is a nonempty closed convex subset of H and $\delta_C : H \to [0, \infty]$ is the indicator function of C, that is,
$$\delta_C(x) = \begin{cases} 0, & x \in C,\\ +\infty, & x \notin C, \end{cases}$$
(1.4)
then variational inclusion problem (1.3) is equivalent to variational inequality problem (1.2). It is known that (1.3) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas including mathematical programming, complementarity, variational inequalities, optimal control, mathematical economics, equilibria, and game theory. Also, various types of variational inclusion problems have been extended and generalized (see [8] and the references therein). We introduce the following finite family of variational inclusions, which is to find a point $u \in H$ such that
$$\theta \in A_i(u) + M_i(u), \quad i = 1, 2, \ldots, N,$$
(1.5)

where θ is the zero vector in H. The set of solutions of problem (1.5) is denoted by $\bigcap_{i=1}^{N} I(A_i, M_i)$. The formulation (1.5) extends this formalism to a finite family of variational inclusions, covering, in particular, various forms of feasibility problems (see, e.g., [9]).
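To make the reduction from (1.3) to (1.2) concrete, the following small numerical sketch (our own illustration, not part of the paper) takes $H = \mathbb{R}^2$, C the closed unit ball, $M = \partial\delta_C$ (the normal cone mapping of C), and an affine monotone mapping A. The resolvent of M is then the metric projection $P_C$, and a fixed point of $P_C(I - \lambda A)$ solves the variational inequality (1.2); the matrix Q, the vector b, and the step size are illustrative choices.

```python
# Toy check (not from the paper): with M = the normal cone of C (the
# subdifferential of the indicator delta_C), the resolvent J_{M,lambda}
# reduces to the metric projection P_C, so theta in A(u) + M(u) is solved
# by any fixed point of P_C(I - lambda*A).  Here H = R^2, C = unit ball,
# and A(u) = Qu + b with Q positive definite; all choices are illustrative.
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])      # positive definite -> A monotone
b = np.array([1.0, -2.0])
A = lambda u: Q @ u + b

def proj_ball(x, radius=1.0):               # P_C for the closed unit ball C
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

lam = 0.3                                   # step size lambda
u = np.zeros(2)
for _ in range(500):                        # Picard iteration u = P_C(u - lam*A(u))
    u = proj_ball(u - lam * A(u))

# Variational-inequality check: <A(u), y - u> >= 0 for sampled y in C
rng = np.random.default_rng(0)
ys = [proj_ball(rng.normal(size=2)) for _ in range(1000)]
print(u, min(np.dot(A(u), y - u) for y in ys))   # second number should be >= ~0
```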

In 2009, Plubtieng and Sriprad [10] introduced the following iterative scheme for finding a common element of the set of solutions to problem (1.3) with a multi-valued maximal monotone mapping and an inverse-strongly monotone mapping, the set of solutions of an equilibrium problem, and the set of fixed points of a nonexpansive mapping in a Hilbert space. Starting with an arbitrary $x_1 \in H$, define sequences $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ by
$$\begin{cases} F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in H,\\ y_n = J_{M,\lambda}(u_n - \lambda Au_n), & n > 0,\\ x_{n+1} = \alpha_n\gamma f(x_n) + (I - \alpha_n B)S_ny_n, \end{cases}$$
(1.6)

for all $n \in \mathbb{N}$, where $\lambda \in (0, 2\alpha]$, $\{\alpha_n\} \subset [0, 1]$, and $\{r_n\} \subset (0, \infty)$; B is a strongly positive bounded linear operator on H and $\{S_n\}$ is a sequence of nonexpansive mappings on H. They proved that under certain appropriate conditions imposed on $\{\alpha_n\}$ and $\{r_n\}$, the sequences $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ generated by (1.6) converge strongly to $z \in \bigcap_{i=1}^{\infty}\mathrm{Fix}(S_i) \cap I(A, M) \cap EP(F)$, where $z = P_{\bigcap_{i=1}^{\infty}\mathrm{Fix}(S_i) \cap I(A, M) \cap EP(F)}f(z)$.

In 2010, Tian [11] introduced the following general iterative scheme for finding a fixed point of a nonexpansive mapping in a Hilbert space: define the sequence $\{x_n\}$ by
$$x_{n+1} = \alpha_n\gamma f(x_n) + (I - \mu\alpha_n B)Tx_n, \quad n \ge 0,$$
(1.7)
where B is a k-Lipschitzian and η-strongly monotone operator. He proved that if the sequence $\{\alpha_n\}$ satisfies appropriate conditions, the sequence $\{x_n\}$ generated by (1.7) converges strongly to the unique solution $x^* \in C$ of the variational inequality
$$\langle(\gamma f - \mu B)x^*, x - x^*\rangle \le 0, \quad \forall x \in C,$$

where $C = \mathrm{Fix}(T)$.

In 2012, Deng et al. [12] considered the following hybrid approximation scheme for finding common solutions of mixed equilibrium problems, a finite family of variational inclusions, and fixed point problems in Hilbert spaces. Starting with an arbitrary $x_1 \in H$, define sequences $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ by
$$\begin{cases} F_1(u_n, y) + F_2(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in H,\\ y_n = J_{M_N,\lambda_{N,n}}(I - \lambda_{N,n}A_N)\cdots J_{M_1,\lambda_{1,n}}(I - \lambda_{1,n}A_1)u_n,\\ x_{n+1} = \epsilon_n\gamma f(x_n) + \beta_nx_n + ((1 - \beta_n)I - \epsilon_nB)S_ny_n, \end{cases}$$
for all $n \in \mathbb{N}$, where $\lambda_{i,n} \in (0, 2\alpha_i]$, $i \in \{1, 2, \ldots, N\}$, $\{\epsilon_n\} \subset [0, 1]$, and $\{r_n\} \subset (0, \infty)$; B is a strongly positive bounded linear operator on H, and $\{S_n\}$ is a sequence of nonexpansive mappings on H. Under suitable conditions, they proved that $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ generated by this scheme converge strongly to z, where $z = P_\Omega(I - B + \gamma f)(z)$ is the unique solution of the variational inequality
$$\langle(B - \gamma f)z, z - x\rangle \le 0, \quad \forall x \in \Omega,$$

where $\Omega := (\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)) \cap MEP(F_1, F_2) \cap (\bigcap_{i=1}^{N}I(A_i, M_i))$, with $MEP(F_1, F_2)$ the solution set of the mixed equilibrium problem.

Motivated and inspired by Aoyama et al. [13], Plubtieng and Punpaeng [6], Plubtieng and Sriprad [10], Peng et al. [14], Tian [11], and Deng et al. [12], we introduce an iterative scheme for finding a common element of the set of solutions of the finite family of variational inclusion problems (1.5) with multi-valued maximal monotone mappings and inverse-strongly monotone mappings, the set of solutions of an equilibrium problem, and the set of fixed points of nonexpansive mappings in a Hilbert space. Starting with an arbitrary $x_1 \in H$, define sequences $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ by
$$\begin{cases} F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in H,\\ y_n = J_{M_N,\lambda_{N,n}}(I - \lambda_{N,n}A_N)\cdots J_{M_1,\lambda_{1,n}}(I - \lambda_{1,n}A_1)u_n,\\ x_{n+1} = \epsilon_n\gamma f(x_n) + (I - \mu\epsilon_nB)S_ny_n, \end{cases}$$

for all $n \in \mathbb{N}$, where $\lambda_{i,n} \in (0, 2\alpha_i]$, $i \in \{1, 2, \ldots, N\}$, $\{\epsilon_n\} \subset [0, 1]$, and $\{r_n\} \subset (0, \infty)$; f is an L-Lipschitz mapping on H, B is a k-Lipschitzian and η-strongly monotone operator on H with coefficients $k > 0$ and $\eta > 0$, and $\{S_n\}$ is a sequence of nonexpansive mappings on H. Under suitable conditions, some strong convergence theorems for approximating these common elements are proved. Our results extend and improve some corresponding results in [10, 14] and the references therein.
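Before turning to the analysis, the following sketch (an illustration we add; it is not part of the paper) runs a concrete instance of the proposed scheme with $N = 1$ in $H = \mathbb{R}^2$, using $F(x, y) = \langle Gx, y - x\rangle$ (so that $T_rx = (I + rG)^{-1}x$), $A_1(u) = Qu$ with $M_1$ the normal cone of the unit ball, a constant nonexpansive mapping $S_n = S$ given by projection onto a box, $B = I$, $\mu = 1$, and $f(x) = 0.1x + c$. All matrices and parameters are illustrative; with these choices the common-solution set is $\{0\}$, so the iterates should drift toward the origin.

```python
# A minimal numerical sketch (not from the paper) of an iteration of this form
# with N = 1 in H = R^2.  All concrete choices below are illustrative:
#   F(x, y) = <Gx, y - x> with G positive definite, so T_r x = (I + rG)^{-1} x
#                          and EP(F) = {0};
#   A_1(u) = Qu, M_1 = normal cone of the unit ball C, so J_{M_1,lam} = P_C;
#   S_n = S = projection onto the box D = [-1, 1]^2, so Fix(S) = D contains 0;
#   B = I (k = eta = 1), mu = 1, tau = mu*(eta - mu*k^2/2) = 0.5,
#   f(x) = 0.1*x + c (L = 0.1), gamma = 1 < tau/L.
import numpy as np

G = np.array([[1.5, 0.2], [0.2, 1.0]])
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -1.0])

proj_ball = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)
proj_box  = lambda x: np.clip(x, -1.0, 1.0)           # the nonexpansive S
f = lambda x: 0.1 * x + c                             # L-Lipschitz, L = 0.1

r, lam, gamma, mu = 1.0, 0.3, 1.0, 1.0                # r_n and lambda_{1,n} constant
Tr = np.linalg.inv(np.eye(2) + r * G)                 # resolvent of the equilibrium part

x = np.array([5.0, -7.0])                             # x_1
for n in range(1, 5001):
    eps = 1.0 / (n + 1)                               # eps_n: (C1)-(C3) hold
    u = Tr @ x                                        # u_n = T_{r_n} x_n
    y = proj_ball(u - lam * (Q @ u))                  # y_n = J_{M_1,lam}(I - lam*A_1)u_n
    x = eps * gamma * f(x) + (1 - mu * eps) * proj_box(y)   # x_{n+1}, with B = I

print(np.linalg.norm(x))    # should be small: the iterates drift toward 0
```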

2 Preliminaries

This section collects some lemmas which are used in the proofs of the main results in the next section.

Let H be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. It is well known that for all $x, y \in H$ and $\lambda \in [0, 1]$, the following holds:
$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2.$$
Let C be a nonempty closed convex subset of H. Then, for any $x \in H$, there exists a unique nearest point of C, denoted by $P_Cx$, such that $\|x - P_Cx\| \le \|x - y\|$ for all $y \in C$. Such a $P_C$ is called the metric projection from H onto C. We know that $P_C$ is nonexpansive. It is also known that $P_Cx \in C$ and
$$\langle x - P_Cx, P_Cx - z\rangle \ge 0, \quad \forall x \in H \text{ and } z \in C.$$
(2.1)
It is easy to see that (2.1) is equivalent to
$$\|x - z\|^2 \ge \|x - P_Cx\|^2 + \|P_Cx - z\|^2, \quad \forall x \in H \text{ and } z \in C.$$
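The following quick numerical check (our own illustration, not from the paper) samples points to verify (2.1) and its equivalent squared-norm form for the projection onto the closed unit ball of $\mathbb{R}^3$.

```python
# Quick numerical check (illustrative, not from the paper) of (2.1) for the
# metric projection onto the closed unit ball C in R^3.
import numpy as np

proj_ball = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)

rng = np.random.default_rng(0)
x = rng.normal(size=3) * 3.0                 # an arbitrary point of H = R^3
px = proj_ball(x)                            # P_C x
for _ in range(1000):
    z = proj_ball(rng.normal(size=3))        # a point of C
    assert np.dot(x - px, px - z) >= -1e-12                      # (2.1)
    assert np.dot(x - z, x - z) >= np.dot(x - px, x - px) + np.dot(px - z, px - z) - 1e-12
print("(2.1) and its equivalent form hold on the sampled points")
```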

For solving the equilibrium problem for a bifunction $F : C \times C \to \mathbb{R}$, let us assume that F satisfies the following conditions:

(A1) $F(x, x) = 0$ for all $x \in C$;

(A2) F is monotone, that is, $F(x, y) + F(y, x) \le 0$ for all $x, y \in C$;

(A3) for each $x, y, z \in C$,
$$\lim_{t \downarrow 0} F(tz + (1 - t)x, y) \le F(x, y);$$

(A4) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous.

Lemma 2.1 [1]

Let C be a nonempty closed convex subset of H, and let F be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)-(A4). Let $r > 0$ and $x \in H$. Then there exists $z \in C$ such that
$$F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.$$
Define a mapping $T_r : H \to C$ as follows:
$$T_r(x) = \Big\{z \in C : F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \ \forall y \in C\Big\}$$
for all $x \in H$. Then the following hold:

(1) $T_r$ is single-valued;

(2) $T_r$ is firmly nonexpansive, that is, for any $x, y \in H$, $\|T_rx - T_ry\|^2 \le \langle T_rx - T_ry, x - y\rangle$;

(3) $\mathrm{Fix}(T_r) = EP(F)$;

(4) $EP(F)$ is closed and convex.
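As a concrete illustration of Lemma 2.1 (our own toy check, not part of the paper), consider the bifunction $F(x, y) = \langle Gx, y - x\rangle$ for a positive semidefinite matrix G on $H = \mathbb{R}^3$; the defining inequality of $T_r$ then reduces to $Gz + \frac{1}{r}(z - x) = 0$, so $T_rx = (I + rG)^{-1}x$. The sketch below verifies firm nonexpansiveness numerically and checks that a point of the null space of G (which is $EP(F)$ here) is a fixed point of $T_r$.

```python
# Worked special case (illustrative, not from the paper): for
# F(x, y) = <Gx, y - x> with G symmetric positive semidefinite, the resolvent
# of Lemma 2.1 is T_r x = (I + rG)^{-1} x.  We check firm nonexpansiveness
# and that Fix(T_r) contains null(G) = EP(F) on random data.
import numpy as np

G = np.array([[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])  # PSD, null(G) = z-axis
r = 0.7
Tr = np.linalg.inv(np.eye(3) + r * G)

rng = np.random.default_rng(1)
for _ in range(200):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = np.linalg.norm(Tr @ x - Tr @ y) ** 2
    rhs = np.dot(Tr @ x - Tr @ y, x - y)
    assert lhs <= rhs + 1e-12                 # property (2): firm nonexpansiveness

e3 = np.array([0.0, 0.0, 1.0])                # a point of null(G) = EP(F)
print(np.allclose(Tr @ e3, e3))               # True: fixed point of T_r
```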
     

By the proof of Lemma 5 in [5], we have the following lemma.

Lemma 2.2 Let C be a nonempty closed convex subset of a Hilbert space H, and let $F : C \times C \to \mathbb{R}$ be a bifunction. Let $x \in C$ and $r_1, r_2 \in (0, \infty)$. Then
$$\|T_{r_1}x - T_{r_2}x\| \le \Big|1 - \frac{r_2}{r_1}\Big|\big(\|T_{r_1}x\| + \|x\|\big).$$
(2.2)

Lemma 2.3 [11]

Let H be a Hilbert space, let $f : H \to H$ be a Lipschitz mapping with coefficient $L > 0$, and let $B : H \to H$ be a k-Lipschitzian and η-strongly monotone operator with $k > 0$ and $\eta > 0$. Then for $0 < \gamma < \mu\eta/L$,
$$\langle x - y, (\mu B - \gamma f)x - (\mu B - \gamma f)y\rangle \ge (\mu\eta - \gamma L)\|x - y\|^2, \quad \forall x, y \in H.$$

That is, $\mu B - \gamma f$ is strongly monotone with coefficient $\mu\eta - \gamma L$.

Lemma 2.4 [15]

Assume that $\{\alpha_n\}$ is a sequence of nonnegative real numbers such that
$$\alpha_{n+1} \le (1 - \gamma_n)\alpha_n + \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that

(i) $\sum_{n=1}^{\infty}\gamma_n = \infty$;

(ii) $\limsup_{n\to\infty}\delta_n/\gamma_n \le 0$ or $\sum_{n=1}^{\infty}|\delta_n| < \infty$.

Then $\lim_{n\to\infty}\alpha_n = 0$.
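A tiny numerical illustration of Lemma 2.4 (ours, not from the paper): with $\gamma_n = 1/(n+1)$ and $\delta_n = \gamma_n/(n+1)$, conditions (i) and (ii) hold and the recursion drives $\alpha_n$ to zero.

```python
# Small illustration (not from the paper) of Lemma 2.4 with
# gamma_n = 1/(n+1) (so sum gamma_n = infinity) and delta_n = gamma_n/(n+1)
# (so delta_n/gamma_n -> 0): the recursion drives alpha_n toward 0.
alpha = 10.0
for n in range(1, 200001):
    gamma_n = 1.0 / (n + 1)
    delta_n = gamma_n / (n + 1)
    alpha = (1.0 - gamma_n) * alpha + delta_n
print(alpha)   # close to 0 (decays roughly like 1/n here)
```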

Definition 2.5 Let $A : C \to H$ be a nonlinear mapping. A is said to be:

(i) monotone if
$$\langle Ax - Ay, x - y\rangle \ge 0, \quad \forall x, y \in C;$$

(ii) strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Ax - Ay, x - y\rangle \ge \alpha\|x - y\|^2, \quad \forall x, y \in C.$$
For such a case, A is said to be α-strongly monotone;

(iii) inverse-strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Ax - Ay, x - y\rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in C.$$
For such a case, A is said to be α-inverse-strongly monotone;

(iv) k-Lipschitz continuous if there exists a constant $k \ge 0$ such that
$$\|Ax - Ay\| \le k\|x - y\|, \quad \forall x, y \in C.$$

Let I be the identity mapping on H. It is well known that if $A : H \to H$ is α-inverse-strongly monotone, then A is a $\frac{1}{\alpha}$-Lipschitz continuous and monotone mapping. In addition, if $0 < \lambda \le 2\alpha$, then $I - \lambda A$ is a nonexpansive mapping.
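For a linear example (our own sanity check, not from the paper), take $A(x) = Qx$ with Q symmetric positive semidefinite; then A is α-inverse-strongly monotone with $\alpha = 1/\lambda_{\max}(Q)$, and $I - \lambda A$ is nonexpansive for $\lambda \in (0, 2\alpha]$:

```python
# Sanity check (illustrative, not from the paper): for a symmetric PSD matrix
# Q, the map A(x) = Qx is alpha-inverse-strongly monotone with
# alpha = 1/lambda_max(Q), and I - lambda*A is nonexpansive for lambda <= 2*alpha.
import numpy as np

Q = np.array([[3.0, 1.0], [1.0, 2.0]])       # symmetric positive definite
alpha = 1.0 / np.linalg.eigvalsh(Q).max()    # inverse-strong-monotonicity constant
lam = 2.0 * alpha                            # the extreme admissible step size

rng = np.random.default_rng(2)
for _ in range(500):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = x - y
    assert np.dot(Q @ d, d) >= alpha * np.dot(Q @ d, Q @ d) - 1e-10        # ism property
    assert np.linalg.norm(d - lam * (Q @ d)) <= np.linalg.norm(d) + 1e-10  # I - lam*A nonexpansive
print("checks passed")
```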

A set-valued mapping $M : H \to 2^H$ is called monotone if for all $x, y \in H$, $f \in Mx$ and $g \in My$ imply $\langle x - y, f - g\rangle \ge 0$. A monotone mapping $M : H \to 2^H$ is maximal if its graph $G(M) := \{(x, f) \in H \times H \mid f \in M(x)\}$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping M is maximal if and only if for $(x, f) \in H \times H$, $\langle x - y, f - g\rangle \ge 0$ for every $(y, g) \in G(M)$ implies $f \in Mx$.

Let the set-valued mapping $M : H \to 2^H$ be maximal monotone. We define the resolvent operator $J_{M,\lambda}$ associated with M and λ as follows:
$$J_{M,\lambda}(u) = (I + \lambda M)^{-1}(u), \quad u \in H,$$

where λ is a positive number. It is worth mentioning that the resolvent operator $J_{M,\lambda}$ is single-valued, nonexpansive, and 1-inverse-strongly monotone (see, for example, [16]) and that a solution of problem (1.3) is a fixed point of the operator $J_{M,\lambda}(I - \lambda A)$ for all $\lambda > 0$; see, for instance, [17]. Furthermore, a solution of the finite family of variational inclusion problems (1.5) is a common fixed point of $J_{M_k,\lambda}(I - \lambda A_k)$, $k \in \{1, \ldots, N\}$, $\lambda > 0$.
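A standard concrete instance (added here for illustration; it is not taken from the paper) is $M = \partial\|\cdot\|_1$, whose resolvent $J_{M,\lambda}$ is the componentwise soft-thresholding operator. The sketch below iterates $u \mapsto J_{M,\lambda}(I - \lambda A)u$ with $A(u) = Qu - b$; the reported pair $(u, A(u))$ should satisfy the optimality conditions of the inclusion $0 \in A(u) + M(u)$.

```python
# Concrete resolvent example (illustrative, not from the paper): for
# M = subdifferential of the l1-norm, J_{M,lambda} = (I + lambda*M)^{-1} is
# soft-thresholding, and a fixed point of J_{M,lambda}(I - lambda*A) with
# A(u) = Qu - b solves the inclusion 0 in A(u) + M(u).
import numpy as np

def soft_threshold(x, t):                    # J_{M,lambda} for M = d||.||_1, t = lambda
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

Q = np.array([[2.0, 0.3], [0.3, 1.0]])       # positive definite
b = np.array([1.5, -0.2])
A = lambda u: Q @ u - b                      # Lipschitz, inverse-strongly monotone

lam = 0.4                                    # lambda in (0, 2/lambda_max(Q))
u = np.zeros(2)
for _ in range(2000):                        # forward-backward: u = J(I - lam*A)u
    u = soft_threshold(u - lam * A(u), lam)

# Optimality check for 0 in A(u) + d||u||_1: |A(u)_i| <= 1 where u_i = 0,
# and A(u)_i = -sign(u_i) where u_i != 0 (up to numerical tolerance).
print(u, A(u))
```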

Lemma 2.6 [16]

Let $M : H \to 2^H$ be a maximal monotone mapping, and let $A : H \to H$ be a Lipschitz-continuous mapping. Then the mapping $S = M + A : H \to 2^H$ is a maximal monotone mapping.

Lemma 2.7 For all $x, y \in H$, the following inequality holds:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle.$$

Lemma 2.8 (The resolvent identity)

Let E be a Banach space. For $\lambda > 0$, $\mu > 0$, and $x \in E$,
$$J_\lambda x = J_\mu\Big(\frac{\mu}{\lambda}x + \Big(1 - \frac{\mu}{\lambda}\Big)J_\lambda x\Big).$$
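A one-dimensional numerical check of the resolvent identity (ours, not from the paper), using $M = \partial|\cdot|$ on $\mathbb{R}$, whose resolvent is scalar soft-thresholding:

```python
# Numerical illustration (not from the paper) of the resolvent identity for
# M = subdifferential of the absolute value on R, whose resolvent is
# J_lambda(x) = sign(x) * max(|x| - lambda, 0).
import numpy as np

J = lambda lam, x: np.sign(x) * max(abs(x) - lam, 0.0)

lam, mu, x = 0.8, 0.3, 2.5
lhs = J(lam, x)
rhs = J(mu, (mu / lam) * x + (1 - mu / lam) * J(lam, x))
print(lhs, rhs)   # the two values should coincide
```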

Lemma 2.9 [12]

Let H be a Hilbert space. Let $A_i : H \to H$, $i = 1, 2, \ldots, N$, be $\alpha_i$-inverse-strongly monotone mappings, let $M_i : H \to 2^H$, $i = 1, 2, \ldots, N$, be maximal monotone mappings, and let $\{\omega_n\}$ be a bounded sequence in H. Assume that $\lambda_{j,n} > 0$, $j = 1, 2, \ldots, N$, satisfy the following:

(H1) $\sum_{n=1}^{\infty}|\lambda_{j,n} - \lambda_{j,n+1}| < \infty$,

(H2) $\liminf_{n\to\infty}\lambda_{j,n} > 0$.

Set $\Theta_n^k = J_{M_k,\lambda_{k,n}}(I - \lambda_{k,n}A_k)\cdots J_{M_1,\lambda_{1,n}}(I - \lambda_{1,n}A_1)$ for $k \in \{1, 2, \ldots, N\}$ and $\Theta_n^0 = I$ for all n. Then, for $k \in \{1, 2, \ldots, N\}$,
$$\sum_{i=1}^{\infty}\|\Theta_{i+1}^k\omega_i - \Theta_i^k\omega_i\| < \infty.$$
(2.3)

Lemma 2.10 [18]

Let H be a real Hilbert space, and let B be a k-Lipschitzian and η-strongly monotone operator with $k > 0$, $\eta > 0$. Let $0 < \mu < \frac{2\eta}{k^2}$ and $\tau = \mu(\eta - \frac{\mu k^2}{2})$. Then for $t \in (0, \min\{1, \frac{1}{\tau}\})$, $I - t\mu B$ is a contraction with constant $1 - t\tau$.
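For a quick sanity check of Lemma 2.10 (our own illustration, not from the paper), take the linear operator $B(x) = Kx$ with K symmetric positive definite, so that $k = \lambda_{\max}(K)$ and $\eta = \lambda_{\min}(K)$; for admissible μ and t the operator norm of $I - t\mu K$ should not exceed $1 - t\tau$:

```python
# Quick check (illustrative, not from the paper) of Lemma 2.10 with a linear
# operator B(x) = Kx, K symmetric positive definite.
import numpy as np

K = np.array([[2.0, 0.4], [0.4, 1.0]])
k, eta = np.linalg.eigvalsh(K).max(), np.linalg.eigvalsh(K).min()
mu = 0.3                                   # must satisfy 0 < mu < 2*eta/k^2
assert mu < 2 * eta / k**2
tau = mu * (eta - mu * k**2 / 2)
t = 0.5 * min(1.0, 1.0 / tau)

rng = np.random.default_rng(3)
worst = max(np.linalg.norm((np.eye(2) - t * mu * K) @ d) / np.linalg.norm(d)
            for d in rng.normal(size=(500, 2)))
print(worst, 1 - t * tau)                  # worst-case ratio should not exceed 1 - t*tau
```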

3 Main results

Let H be a real Hilbert space, and let $\{S_n\}$ be a sequence of nonexpansive mappings on H. Assume that the set $\mathrm{Fix}(S_n) := \{x \in H : S_nx = x\}$ is nonempty. Since $\mathrm{Fix}(S_n)$ is closed and convex, the nearest point projection from H onto $\mathrm{Fix}(S_n)$ is well defined. Recall also that f is an L-Lipschitz mapping on H with coefficient $L > 0$, and that B is a k-Lipschitzian and η-strongly monotone operator on H with coefficients $k > 0$ and $\eta > 0$.

Given $t \in (0, 1)$, let $0 < \mu < 2\eta/k^2$ and $0 < \gamma < \mu(\eta - \frac{\mu k^2}{2})/L = \tau/L$. Consider the mapping $W_t$ on H defined by
$$W_tx = t\gamma f(x) + (I - \mu tB)S_nx, \quad n > 0.$$
According to Lemma 2.10, we can easily see that
$$\|W_tx - W_ty\| \le t\gamma\|f(x) - f(y)\| + \|(I - \mu tB)S_nx - (I - \mu tB)S_ny\| \le (1 - t(\tau - \gamma L))\|x - y\|.$$
(3.1)
Theorem 3.1 Let H be a real Hilbert space, let F be a bifunction from $H \times H$ to $\mathbb{R}$ satisfying (A1)-(A4), and let $\{S_n\}$ be a sequence of nonexpansive mappings on H. Let $A_i : H \to H$, $i = 1, 2, \ldots, N$, be $\alpha_i$-inverse-strongly monotone mappings, and let $M_i : H \to 2^H$, $i = 1, 2, \ldots, N$, be maximal monotone mappings such that $\Omega := (\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)) \cap EP(F) \cap (\bigcap_{i=1}^{N}I(A_i, M_i)) \ne \emptyset$. Let f be an L-Lipschitz mapping on H with coefficient $L > 0$, and let B be a k-Lipschitzian and η-strongly monotone operator on H with coefficients $k > 0$ and $\eta > 0$. Let $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ be sequences generated by $x_1 \in H$ and
$$\begin{cases} F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in H,\\ y_n = J_{M_N,\lambda_{N,n}}(I - \lambda_{N,n}A_N)\cdots J_{M_1,\lambda_{1,n}}(I - \lambda_{1,n}A_1)u_n,\\ x_{n+1} = \epsilon_n\gamma f(x_n) + (I - \mu\epsilon_nB)S_ny_n, \end{cases}$$
(3.2)

for all $n \in \mathbb{N}$, where $\lambda_{i,n} \in (0, 2\alpha_i]$, $i \in \{1, 2, \ldots, N\}$, satisfy (H1)-(H2), and $\{\epsilon_n\} \subset [0, 1]$ and $\{r_n\} \subset (0, \infty)$ satisfy

(C1) $\lim_{n\to\infty}\epsilon_n = 0$;

(C2) $\sum_{n=1}^{\infty}\epsilon_n = \infty$;

(C3) $\sum_{n=1}^{\infty}|\epsilon_{n+1} - \epsilon_n| < \infty$;

(C4) $\liminf_{n\to\infty}r_n > 0$;

(C5) $\sum_{n=1}^{\infty}|r_{n+1} - r_n| < \infty$.

Suppose that $\sum_{n=1}^{\infty}\sup\{\|S_{n+1}z - S_nz\| : z \in K\} < \infty$ for any bounded subset K of H. Let S be a mapping of H into itself defined by $Sx = \lim_{n\to\infty}S_nx$ for all $x \in H$, and suppose that $\mathrm{Fix}(S) = \bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)$. Then $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ converge strongly to z, where $z = P_\Omega(I - \mu B + \gamma f)(z)$ is the unique solution of the variational inequality
$$\langle(\mu B - \gamma f)z, z - x\rangle \le 0, \quad \forall x \in \Omega.$$
(3.3)

Proof Using the definition of $\Theta_n^k$ in Lemma 2.9, we have $y_n = \Theta_n^Nu_n$. We divide the proof into several steps.

Step 1. The sequence $\{x_n\}$ is bounded.

Let $p \in \Omega$. Using the fact that $J_{M_k,\lambda_{k,n}}(I - \lambda_{k,n}A_k)$, $k \in \{1, 2, \ldots, N\}$, is nonexpansive and $p = J_{M_k,\lambda_{k,n}}(I - \lambda_{k,n}A_k)p$, we have
$$\|y_n - p\| = \|\Theta_n^Nu_n - \Theta_n^Np\| \le \|u_n - p\| = \|T_{r_n}x_n - T_{r_n}p\| \le \|x_n - p\|$$
for all $n \ge 1$. Then we have
$$\begin{aligned} \|x_{n+1} - p\| &= \|\epsilon_n\gamma f(x_n) + (I - \mu\epsilon_nB)S_ny_n - p\|\\ &\le \epsilon_n\|\gamma f(x_n) - \mu Bp\| + \|(I - \mu\epsilon_nB)S_ny_n - (I - \mu\epsilon_nB)p\|\\ &\le \epsilon_n\|\gamma f(x_n) - \mu Bp\| + (1 - \epsilon_n\tau)\|x_n - p\|\\ &\le \epsilon_n\gamma\|f(x_n) - f(p)\| + \epsilon_n\|\gamma f(p) - \mu Bp\| + (1 - \epsilon_n\tau)\|x_n - p\|\\ &\le \epsilon_n\gamma L\|x_n - p\| + \epsilon_n\|\gamma f(p) - \mu Bp\| + (1 - \epsilon_n\tau)\|x_n - p\|\\ &= (1 - \epsilon_n(\tau - \gamma L))\|x_n - p\| + \epsilon_n\|\gamma f(p) - \mu Bp\|\\ &= (1 - \epsilon_n(\tau - \gamma L))\|x_n - p\| + \epsilon_n(\tau - \gamma L)\frac{\|\gamma f(p) - \mu Bp\|}{\tau - \gamma L}. \end{aligned}$$
(3.4)
It follows from (3.4) and induction that
$$\|x_n - p\| \le \max\Big\{\|x_1 - p\|, \frac{\|\gamma f(p) - \mu Bp\|}{\tau - \gamma L}\Big\}, \quad n \ge 1.$$

Hence $\{x_n\}$ is bounded, and therefore $\{u_n\}$, $\{y_n\}$, $\{f(x_n)\}$, and $\{S_ny_n\}$ are also bounded.

Step 2. We show that $\|x_{n+1} - x_n\| \to 0$.

Since each $I - \lambda_{k,n}A_k$ is nonexpansive, $y_n = \Theta_n^Nu_n$, and $y_{n+1} = \Theta_{n+1}^Nu_{n+1}$, it follows that
$$\begin{aligned} \|y_{n+1} - y_n\| &= \|\Theta_{n+1}^Nu_{n+1} - \Theta_n^Nu_n\|\\ &\le \|\Theta_{n+1}^Nu_n - \Theta_n^Nu_n\| + \|\Theta_{n+1}^Nu_n - \Theta_{n+1}^Nu_{n+1}\|\\ &\le \|\Theta_{n+1}^Nu_n - \Theta_n^Nu_n\| + \|u_n - u_{n+1}\|. \end{aligned}$$
(3.5)
Then we have
$$\begin{aligned} \|x_{n+2} - x_{n+1}\| &= \|\epsilon_{n+1}\gamma f(x_{n+1}) + (I - \mu\epsilon_{n+1}B)S_{n+1}y_{n+1} - \epsilon_n\gamma f(x_n) - (I - \mu\epsilon_nB)S_ny_n\|\\ &= \|(I - \mu\epsilon_{n+1}B)(S_{n+1}y_{n+1} - S_{n+1}y_n) + (\epsilon_n - \epsilon_{n+1})\mu BS_{n+1}y_n + (I - \mu\epsilon_nB)(S_{n+1}y_n - S_ny_n)\\ &\quad + (\epsilon_{n+1} - \epsilon_n)\gamma f(x_n) + \epsilon_{n+1}\gamma(f(x_{n+1}) - f(x_n))\|\\ &\le (1 - \epsilon_{n+1}\tau)\|y_{n+1} - y_n\| + |\epsilon_n - \epsilon_{n+1}|\mu\|BS_{n+1}y_n\| + (1 - \epsilon_n\tau)\|S_{n+1}y_n - S_ny_n\|\\ &\quad + |\epsilon_{n+1} - \epsilon_n|\gamma\|f(x_n)\| + \epsilon_{n+1}\gamma L\|x_{n+1} - x_n\|\\ &\le (1 - \epsilon_{n+1}\tau)\|y_{n+1} - y_n\| + \epsilon_{n+1}\gamma L\|x_{n+1} - x_n\| + |\epsilon_n - \epsilon_{n+1}|\big(\mu\|BS_{n+1}y_n\| + \gamma\|f(x_n)\|\big) + \|S_{n+1}y_n - S_ny_n\|\\ &\le (1 - \epsilon_{n+1}\tau)\big(\|\Theta_{n+1}^Nu_n - \Theta_n^Nu_n\| + \|u_n - u_{n+1}\|\big) + \epsilon_{n+1}\gamma L\|x_{n+1} - x_n\|\\ &\quad + |\epsilon_n - \epsilon_{n+1}|M_2 + \sup\{\|S_{n+1}z - S_nz\| : z \in \{y_n\}\}, \end{aligned}$$
(3.6)
where $M_2 = \sup\{\mu\|BS_{n+1}y_n\| + \gamma\|f(x_n)\| : n \ge 0\} < \infty$. On the other hand, using Lemma 2.2, we have
$$\begin{aligned} \|u_{n+1} - u_n\| &= \|T_{r_{n+1}}x_{n+1} - T_{r_n}x_n\| \le \|T_{r_{n+1}}x_{n+1} - T_{r_{n+1}}x_n\| + \|T_{r_{n+1}}x_n - T_{r_n}x_n\|\\ &\le \|x_{n+1} - x_n\| + \Big|1 - \frac{r_{n+1}}{r_n}\Big|\big(\|T_{r_n}x_n\| + \|x_n\|\big). \end{aligned}$$
(3.7)
Combining (3.6) and (3.7), we have
$$\begin{aligned} \|x_{n+2} - x_{n+1}\| &\le (1 - \epsilon_{n+1}(\tau - \gamma L))\|x_{n+1} - x_n\| + \Big|1 - \frac{r_{n+1}}{r_n}\Big|\big(\|T_{r_n}x_n\| + \|x_n\|\big)\\ &\quad + |\epsilon_n - \epsilon_{n+1}|M_2 + \|\Theta_{n+1}^Nu_n - \Theta_n^Nu_n\| + \sup\{\|S_{n+1}z - S_nz\| : z \in \{y_n\}\}. \end{aligned}$$
(3.8)
From the boundedness of $\{u_n\}$ and Lemma 2.9, using conditions (H1)-(H2), we obtain
$$\sum_{n=1}^{\infty}\|\Theta_{n+1}^Nu_n - \Theta_n^Nu_n\| < \infty.$$
(3.9)

Since $\{y_n\}$ is bounded, it follows that $\sum_{n=1}^{\infty}\sup\{\|S_{n+1}z - S_nz\| : z \in \{y_n\}\} < \infty$. Hence, using conditions (C1)-(C5), (3.9), and Lemma 2.4, we have $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$.

Step 3. We now show that
$$\lim_{n\to\infty}\|\Theta_n^ku_n - \Theta_n^{k-1}u_n\| = 0, \quad k = 1, 2, \ldots, N.$$
(3.10)
Indeed, let $p \in \Omega$. It follows from the firm nonexpansiveness of $J_{M_k,\lambda_{k,n}}(I - \lambda_{k,n}A_k)$ that
$$\begin{aligned} \|\Theta_n^ku_n - p\|^2 &= \|J_{M_k,\lambda_{k,n}}(I - \lambda_{k,n}A_k)\Theta_n^{k-1}u_n - J_{M_k,\lambda_{k,n}}(I - \lambda_{k,n}A_k)p\|^2\\ &\le \langle\Theta_n^ku_n - p, \Theta_n^{k-1}u_n - p\rangle\\ &= \tfrac{1}{2}\big(\|\Theta_n^ku_n - p\|^2 + \|\Theta_n^{k-1}u_n - p\|^2 - \|\Theta_n^ku_n - \Theta_n^{k-1}u_n\|^2\big), \end{aligned}$$
(3.11)
for each $k \in \{1, 2, \ldots, N\}$. Thus we get
$$\|\Theta_n^ku_n - p\|^2 \le \|\Theta_n^{k-1}u_n - p\|^2 - \|\Theta_n^ku_n - \Theta_n^{k-1}u_n\|^2,$$
which implies that for each $k \in \{1, 2, \ldots, N\}$,
$$\begin{aligned} \|y_n - p\|^2 = \|\Theta_n^Nu_n - p\|^2 &\le \|\Theta_n^0u_n - p\|^2 - \sum_{k=1}^{N}\|\Theta_n^ku_n - \Theta_n^{k-1}u_n\|^2\\ &\le \|u_n - p\|^2 - \|\Theta_n^ku_n - \Theta_n^{k-1}u_n\|^2\\ &\le \|x_n - p\|^2 - \|\Theta_n^ku_n - \Theta_n^{k-1}u_n\|^2. \end{aligned}$$
(3.12)
Using Lemma 2.7 and noting that $\|\cdot\|^2$ is convex, we derive from (3.12)
$$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\epsilon_n\gamma f(x_n) + (I - \mu\epsilon_nB)S_ny_n - p\|^2 = \|(I - \mu\epsilon_nB)(S_ny_n - p) + \epsilon_n(\gamma f(x_n) - \mu Bp)\|^2\\ &\le (1 - \epsilon_n\tau)^2\|S_ny_n - p\|^2 + 2\epsilon_n\langle\gamma f(x_n) - \mu Bp, x_{n+1} - p\rangle\\ &\le (1 - \epsilon_n\tau)^2\|y_n - p\|^2 + 2\epsilon_n\langle\gamma f(x_n) - \mu Bp, x_{n+1} - p\rangle\\ &\le (1 - \epsilon_n\tau)^2\big(\|x_n - p\|^2 - \|\Theta_n^ku_n - \Theta_n^{k-1}u_n\|^2\big) + 2\epsilon_n\gamma\langle f(x_n) - f(p), x_{n+1} - p\rangle + 2\epsilon_n\langle\gamma f(p) - \mu Bp, x_{n+1} - p\rangle\\ &\le (1 - \epsilon_n\tau)^2\big(\|x_n - p\|^2 - \|\Theta_n^ku_n - \Theta_n^{k-1}u_n\|^2\big) + 2\epsilon_n\gamma L\|x_n - p\|\|x_{n+1} - p\| + 2\epsilon_n\|\gamma f(p) - \mu Bp\|\|x_{n+1} - p\|. \end{aligned}$$
(3.13)
Put $M_3 = \sup_{n\ge 0}\{\|\gamma f(p) - \mu Bp\|\|x_{n+1} - p\|\}$. It follows from (3.13) that
$$\begin{aligned} (1 - \epsilon_n\tau)^2\|\Theta_n^ku_n - \Theta_n^{k-1}u_n\|^2 &\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \big((\epsilon_n\tau)^2 - 2\epsilon_n\tau\big)\|x_n - p\|^2\\ &\quad + 2\epsilon_n\gamma L\|x_n - p\|\|x_{n+1} - p\| + 2\epsilon_nM_3\\ &\le \|x_n - x_{n+1}\|\big(\|x_n - p\| + \|x_{n+1} - p\|\big) + \epsilon_n(\epsilon_n\tau^2 - 2\tau)\|x_n - p\|^2\\ &\quad + 2\epsilon_n\gamma L\|x_n - p\|\|x_{n+1} - p\| + 2\epsilon_nM_3. \end{aligned}$$

Since $\epsilon_n \to 0$ and $\|x_n - x_{n+1}\| \to 0$, we obtain (3.10).

Step 4. We prove that $\lim_{n\to\infty}\|u_n - x_n\| = 0$.

We note from (3.2) that
$$\begin{aligned} \|x_n - S_ny_n\| &\le \|x_n - S_{n-1}y_{n-1}\| + \|S_{n-1}y_{n-1} - S_{n-1}y_n\| + \|S_{n-1}y_n - S_ny_n\|\\ &\le \epsilon_{n-1}\|\gamma f(x_{n-1}) - \mu BS_{n-1}y_{n-1}\| + \|y_{n-1} - y_n\| + \sup\{\|S_nz - S_{n-1}z\| : z \in \{y_n\}\}. \end{aligned}$$
(3.14)
Since $\epsilon_n \to 0$, $\lim_{n\to\infty}\|y_{n+1} - y_n\| = 0$, and $\sup\{\|S_nz - S_{n-1}z\| : z \in \{y_n\}\} \to 0$, we get
$$\|x_n - S_ny_n\| \to 0.$$
(3.15)
Let $p \in \Omega$. Since $u_n = T_{r_n}x_n$, it follows from Lemma 2.1 that
$$\begin{aligned} \|u_n - p\|^2 &= \|T_{r_n}x_n - T_{r_n}p\|^2 \le \langle T_{r_n}x_n - T_{r_n}p, x_n - p\rangle = \langle u_n - p, x_n - p\rangle\\ &= \tfrac{1}{2}\big(\|u_n - p\|^2 + \|x_n - p\|^2 - \|u_n - x_n\|^2\big), \end{aligned}$$
and hence $\|u_n - p\|^2 \le \|x_n - p\|^2 - \|u_n - x_n\|^2$. Therefore, using Lemma 2.7 and (3.13), we have
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le (1 - \epsilon_n\tau)^2\|y_n - p\|^2 + 2\epsilon_n\gamma\langle f(x_n) - f(p), x_{n+1} - p\rangle + 2\epsilon_n\langle\gamma f(p) - \mu Bp, x_{n+1} - p\rangle\\ &\le (1 - \epsilon_n\tau)^2\|u_n - p\|^2 + 2\epsilon_n\gamma\langle f(x_n) - f(p), x_{n+1} - p\rangle + 2\epsilon_n\langle\gamma f(p) - \mu Bp, x_{n+1} - p\rangle\\ &\le (1 - \epsilon_n\tau)^2\big(\|x_n - p\|^2 - \|u_n - x_n\|^2\big) + 2\epsilon_n\gamma L\|x_n - p\|\|x_{n+1} - p\| + 2\epsilon_n\|\gamma f(p) - \mu Bp\|\|x_{n+1} - p\|\\ &\le \|x_n - p\|^2 + \epsilon_n(\tau^2 - 2\tau)\|x_n - p\|^2 - (1 - \epsilon_n\tau)^2\|u_n - x_n\|^2\\ &\quad + 2\epsilon_n\gamma L\|x_n - p\|\|x_{n+1} - p\| + 2\epsilon_n\|\gamma f(p) - \mu Bp\|\|x_{n+1} - p\|, \end{aligned}$$
and hence
$$\begin{aligned} (1 - \epsilon_n\tau)^2\|u_n - x_n\|^2 &\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \epsilon_n(\tau^2 - 2\tau)\|x_n - p\|^2\\ &\quad + 2\epsilon_n\gamma L\|x_n - p\|\|x_{n+1} - p\| + 2\epsilon_n\|\gamma f(p) - \mu Bp\|\|x_{n+1} - p\|\\ &\le \|x_n - x_{n+1}\|\big(\|x_n - p\| + \|x_{n+1} - p\|\big) + \epsilon_n(\tau^2 - 2\tau)\|x_n - p\|^2\\ &\quad + 2\epsilon_n\gamma L\|x_n - p\|\|x_{n+1} - p\| + 2\epsilon_n\|\gamma f(p) - \mu Bp\|\|x_{n+1} - p\|. \end{aligned}$$
(3.16)
Since $\{x_n\}$ is bounded, $\epsilon_n \to 0$, and $\lim_{n\to\infty}\|x_n - x_{n+1}\| = 0$, it follows that
$$\lim_{n\to\infty}\|u_n - x_n\| = 0.$$
(3.17)
Next we prove that $\lim_{n\to\infty}\|u_n - y_n\| = 0$. Indeed,
$$\|u_n - y_n\| = \|\Theta_n^Nu_n - u_n\| \le \|\Theta_n^Nu_n - \Theta_n^{N-1}u_n\| + \|\Theta_n^{N-1}u_n - \Theta_n^{N-2}u_n\| + \cdots + \|\Theta_n^2u_n - \Theta_n^1u_n\| + \|\Theta_n^1u_n - \Theta_n^0u_n\|.$$
From (3.10), we obtain
$$\lim_{n\to\infty}\|u_n - y_n\| = 0.$$
(3.18)
In addition, since $\|x_n - y_n\| \le \|x_n - u_n\| + \|u_n - y_n\|$, we have
$$\lim_{n\to\infty}\|x_n - y_n\| = 0.$$
(3.19)
It follows from (3.15), (3.19), and the inequality $\|y_n - S_ny_n\| \le \|y_n - x_n\| + \|x_n - S_ny_n\|$ that $\lim_{n\to\infty}\|y_n - S_ny_n\| = 0$. Since
$$\|Sy_n - y_n\| \le \|Sy_n - S_ny_n\| + \|S_ny_n - y_n\| \le \sup\{\|Sz - S_nz\| : z \in \{y_n\}\} + \|S_ny_n - y_n\|$$
for all $n \in \mathbb{N}$, it follows that
$$\lim_{n\to\infty}\|Sy_n - y_n\| = 0.$$
(3.20)

Step 5. We show that $\omega \in (\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)) \cap EP(F) \cap (\bigcap_{i=1}^{N}I(A_i, M_i))$.

Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ which converges weakly to ω. From (3.17), $\{u_{n_i}\}$ also converges weakly to ω, and from (3.19) it follows that $y_{n_i} \rightharpoonup \omega$. We show that $\omega \in EP(F)$. According to (3.2) and (A2),
$$\frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge F(y, u_n),$$
and hence
$$\Big\langle y - u_{n_i}, \frac{u_{n_i} - x_{n_i}}{r_{n_i}}\Big\rangle \ge F(y, u_{n_i}).$$
Since $\frac{u_{n_i} - x_{n_i}}{r_{n_i}} \to 0$ and $u_{n_i} \rightharpoonup \omega$, from (A4) it follows that $0 \ge F(y, \omega)$ for all $y \in H$. For t with $0 < t \le 1$ and $y \in H$, let $y_t = ty + (1 - t)\omega$; then we get $0 \ge F(y_t, \omega)$. So, from (A1) and (A4), we have
$$0 = F(y_t, y_t) \le tF(y_t, y) + (1 - t)F(y_t, \omega) \le tF(y_t, y),$$

and hence $0 \le F(y_t, y)$. Letting $t \downarrow 0$, from (A3) we have $0 \le F(\omega, y)$ for all $y \in H$. Therefore, $\omega \in EP(F)$.

We show that $\omega \in \bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)$. Assume that $\omega \notin \bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)$; then $\omega \ne S\omega$. It follows, by Opial's condition and (3.20), that
$$\begin{aligned} \liminf_{i\to\infty}\|y_{n_i} - \omega\| &< \liminf_{i\to\infty}\|y_{n_i} - S\omega\| \le \liminf_{i\to\infty}\big\{\|y_{n_i} - Sy_{n_i}\| + \|Sy_{n_i} - S\omega\|\big\}\\ &\le \liminf_{i\to\infty}\|y_{n_i} - \omega\|. \end{aligned}$$

This is a contradiction. Hence $\omega \in \bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)$.

We now show that $\omega \in \bigcap_{i=1}^{N}I(A_i, M_i)$. In fact, since $A_i$ is $\alpha_i$-inverse-strongly monotone, $A_i$, $i = 1, 2, \ldots, N$, is a $\frac{1}{\alpha_i}$-Lipschitz continuous monotone mapping and $D(A_i) = H$, $i = 1, 2, \ldots, N$. It follows from Lemma 2.6 that $M_i + A_i$, $i = 1, 2, \ldots, N$, is maximal monotone. Let $(p, g) \in G(M_i + A_i)$, $i = 1, 2, \ldots, N$, that is, $g - A_ip \in M_ip$, $i = 1, 2, \ldots, N$. Since $\Theta_n^ku_n = J_{M_k,\lambda_{k,n}}(I - \lambda_{k,n}A_k)\Theta_n^{k-1}u_n$, we have $\Theta_n^{k-1}u_n - \lambda_{k,n}A_k\Theta_n^{k-1}u_n \in (I + \lambda_{k,n}M_k)(\Theta_n^ku_n)$, that is,
$$\frac{1}{\lambda_{k,n}}\big(\Theta_n^{k-1}u_n - \Theta_n^ku_n - \lambda_{k,n}A_k\Theta_n^{k-1}u_n\big) \in M_k(\Theta_n^ku_n).$$
By the monotonicity of $M_k$, we have
$$\Big\langle p - \Theta_n^ku_n,\ g - A_kp - \frac{1}{\lambda_{k,n}}\big(\Theta_n^{k-1}u_n - \Theta_n^ku_n - \lambda_{k,n}A_k\Theta_n^{k-1}u_n\big)\Big\rangle \ge 0,$$
which implies
$$\begin{aligned} \langle p - \Theta_n^ku_n, g\rangle &\ge \Big\langle p - \Theta_n^ku_n,\ A_kp + \frac{1}{\lambda_{k,n}}\big(\Theta_n^{k-1}u_n - \Theta_n^ku_n - \lambda_{k,n}A_k\Theta_n^{k-1}u_n\big)\Big\rangle\\ &= \Big\langle p - \Theta_n^ku_n,\ A_kp - A_k\Theta_n^ku_n + A_k\Theta_n^ku_n - A_k\Theta_n^{k-1}u_n + \frac{1}{\lambda_{k,n}}\big(\Theta_n^{k-1}u_n - \Theta_n^ku_n\big)\Big\rangle\\ &\ge 0 + \langle p - \Theta_n^ku_n,\ A_k\Theta_n^ku_n - A_k\Theta_n^{k-1}u_n\rangle + \Big\langle p - \Theta_n^ku_n,\ \frac{1}{\lambda_{k,n}}\big(\Theta_n^{k-1}u_n - \Theta_n^ku_n\big)\Big\rangle \end{aligned}$$
(3.21)
for $k \in \{1, 2, \ldots, N\}$. From (3.10), it follows that $\lim_{n\to\infty}\|\Theta_n^ku_n - \Theta_n^{k-1}u_n\| = 0$; in particular, $\Theta_{n_i}^ku_{n_i} \rightharpoonup \omega$. Since $A_k$, $k = 1, \ldots, N$, are Lipschitz continuous operators, we have $\|A_k\Theta_n^{k-1}u_n - A_k\Theta_n^ku_n\| \to 0$. So, from (3.21), we have
$$\lim_{i\to\infty}\langle p - \Theta_{n_i}^ku_{n_i}, g\rangle = \langle p - \omega, g\rangle \ge 0.$$

Since $A_k + M_k$, $k \in \{1, 2, \ldots, N\}$, is maximal monotone, this implies that $0 \in (M_k + A_k)(\omega)$, $k \in \{1, 2, \ldots, N\}$, i.e., $\omega \in \bigcap_{i=1}^{N}I(A_i, M_i)$. So, we obtain the result.

Step 6. We show that
$$\limsup_{n\to\infty}\langle(\mu B - \gamma f)z, z - x_n\rangle \le 0,$$

where $z = P_\Omega(I - \mu B + \gamma f)(z)$ is the unique solution of the variational inequality (3.3).

To show this, we choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that
$$\lim_{i\to\infty}\langle(\mu B - \gamma f)z, z - x_{n_i}\rangle = \limsup_{n\to\infty}\langle(\mu B - \gamma f)z, z - x_n\rangle.$$
Without loss of generality, we may assume that $x_{n_i} \rightharpoonup \omega$. By the proof of Step 5, $\omega \in \Omega$, and since z solves (3.3), we obtain
$$\limsup_{n\to\infty}\langle(\mu B - \gamma f)z, z - x_n\rangle = \lim_{i\to\infty}\langle(\mu B - \gamma f)z, z - x_{n_i}\rangle = \langle(\mu B - \gamma f)z, z - \omega\rangle \le 0.$$

Step 7. We prove that $x_n \to z$.

Using Lemma 2.7 and (3.13), we obtain
$$\begin{aligned} \|x_{n+1} - z\|^2 &= \|\epsilon_n\gamma f(x_n) + (I - \mu\epsilon_nB)S_ny_n - z\|^2 = \|\epsilon_n(\gamma f(x_n) - \mu Bz) + (I - \mu\epsilon_nB)(S_ny_n - z)\|^2\\ &\le \|(I - \mu\epsilon_nB)(S_ny_n - z)\|^2 + 2\epsilon_n\langle\gamma f(x_n) - \mu Bz, x_{n+1} - z\rangle\\ &\le (1 - \epsilon_n\tau)^2\|y_n - z\|^2 + 2\epsilon_n\gamma\langle f(x_n) - f(z), x_{n+1} - z\rangle + 2\epsilon_n\langle\gamma f(z) - \mu Bz, x_{n+1} - z\rangle\\ &\le (1 - \epsilon_n\tau)^2\|x_n - z\|^2 + 2\epsilon_n\gamma L\|x_n - z\|\|x_{n+1} - z\| + 2\epsilon_n\langle\gamma f(z) - \mu Bz, x_{n+1} - z\rangle\\ &\le (1 - \epsilon_n\tau)^2\|x_n - z\|^2 + \epsilon_n\gamma L\big(\|x_n - z\|^2 + \|x_{n+1} - z\|^2\big) + 2\epsilon_n\langle\gamma f(z) - \mu Bz, x_{n+1} - z\rangle. \end{aligned}$$
This implies that
$$\begin{aligned} \|x_{n+1} - z\|^2 &\le \frac{1 - 2\epsilon_n\tau + (\epsilon_n\tau)^2 + \epsilon_n\gamma L}{1 - \epsilon_n\gamma L}\|x_n - z\|^2 + \frac{2\epsilon_n}{1 - \epsilon_n\gamma L}\langle\gamma f(z) - \mu Bz, x_{n+1} - z\rangle\\ &= \Big[1 - \frac{2\epsilon_n(\tau - \gamma L)}{1 - \epsilon_n\gamma L}\Big]\|x_n - z\|^2 + \frac{(\epsilon_n\tau)^2}{1 - \epsilon_n\gamma L}\|x_n - z\|^2 + \frac{2\epsilon_n}{1 - \epsilon_n\gamma L}\langle\gamma f(z) - \mu Bz, x_{n+1} - z\rangle\\ &\le (1 - \gamma_n)\|x_n - z\|^2 + \delta_n, \end{aligned}$$

where $\gamma_n = \frac{2\epsilon_n(\tau - \gamma L)}{1 - \epsilon_n\gamma L}$ and $\delta_n = \frac{\epsilon_n}{1 - \epsilon_n\gamma L}\big(\epsilon_n\tau^2\|x_n - z\|^2 + 2\langle\gamma f(z) - \mu Bz, x_{n+1} - z\rangle\big)$. It is easily verified that $\gamma_n \to 0$, $\sum_{n=1}^{\infty}\gamma_n = \infty$, and $\limsup_{n\to\infty}\delta_n/\gamma_n \le 0$. Hence, by Lemma 2.4, the sequence $\{x_n\}$ converges strongly to z. Furthermore, from (3.17) and (3.19), the sequences $\{y_n\}$ and $\{u_n\}$ also converge strongly to z. □

Letting $B \equiv I$, $\mu = 1$, and $\gamma = 1$ in Theorem 3.1, we obtain the following corollary.

Corollary 3.2 Let H be a real Hilbert space, let F be a bifunction from $H \times H$ to $\mathbb{R}$ satisfying (A1)-(A4), and let $\{S_n\}$ be a sequence of nonexpansive mappings on H. Let $A_i : H \to H$, $i = 1, 2, \ldots, N$, be $\alpha_i$-inverse-strongly monotone mappings, and let $M_i : H \to 2^H$, $i = 1, 2, \ldots, N$, be maximal monotone mappings such that $\Omega := (\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)) \cap EP(F) \cap (\bigcap_{i=1}^{N}I(A_i, M_i)) \ne \emptyset$. Let f be an L-Lipschitz mapping on H with coefficient $L > 0$. Let $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ be sequences generated by $x_1 \in H$ and
$$\begin{cases} F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in H,\\ y_n = J_{M_N,\lambda_{N,n}}(I - \lambda_{N,n}A_N)\cdots J_{M_1,\lambda_{1,n}}(I - \lambda_{1,n}A_1)u_n, & n > 0,\\ x_{n+1} = \epsilon_nf(x_n) + (1 - \epsilon_n)S_ny_n, \end{cases}$$
(3.22)

for all $n \in \mathbb{N}$, where $\lambda_{i,n} \in (0, 2\alpha_i]$, $i \in \{1, 2, \ldots, N\}$, satisfy (H1)-(H2), and $\{\epsilon_n\} \subset [0, 1]$ and $\{r_n\} \subset (0, \infty)$ satisfy:

(C1) $\lim_{n\to\infty}\epsilon_n = 0$;

(C2) $\sum_{n=1}^{\infty}\epsilon_n = \infty$;

(C3) $\sum_{n=1}^{\infty}|\epsilon_{n+1} - \epsilon_n| < \infty$;

(C4) $\liminf_{n\to\infty}r_n > 0$;

(C5) $\sum_{n=1}^{\infty}|r_{n+1} - r_n| < \infty$.

Suppose that $\sum_{n=1}^{\infty}\sup\{\|S_{n+1}z - S_nz\| : z \in K\} < \infty$ for any bounded subset K of H. Let S be a mapping of H into itself defined by $Sx = \lim_{n\to\infty}S_nx$ for all $x \in H$, and suppose that $\mathrm{Fix}(S) = \bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)$. Then $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ converge strongly to z, where $z = P_\Omega f(z)$ is the unique solution of the variational inequality
$$\langle(I - f)z, z - x\rangle \le 0, \quad \forall x \in \Omega.$$

Declarations

Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (71272148), the Ph.D. Programs Foundation of the Ministry of Education of China (20120032110039), and the China Postdoctoral Science Foundation (Grant No. 20100470783).

Authors’ Affiliations

(1)
School of Management, Tianjin University

References

  1. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert space. J. Nonlinear Convex Anal. 2005, 6(1):117–136.
  2. Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2008, 197(2):548–558. 10.1016/j.amc.2007.07.075
  3. Zeng LC, Schaible S, Yao JC: Iterative algorithm for generalized set-valued strongly nonlinear mixed variational-like inequalities. J. Optim. Theory Appl. 2005, 124(3):725–738. 10.1007/s10957-004-1182-z
  4. Chang SS, Joseph Lee HW, Chan CK: A new method for solving equilibrium problem fixed point problem with application to optimization. Nonlinear Anal., Theory Methods Appl. 2009, 70: 3307–3319. 10.1016/j.na.2008.04.035
  5. Colao V, Acedo GL, Marino G: An implicit method for finding common solutions of variational inequalities and systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. Nonlinear Anal. 2009, 71(7–8):2708–2715. 10.1016/j.na.2009.01.115
  6. Plubtieng S, Punpaeng R: A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 336(1):455–469. 10.1016/j.jmaa.2007.02.044
  7. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14(5):877–898. 10.1137/0314056
  8. Adly S: Perturbed algorithms and sensitivity analysis for a general class of variational inclusions. J. Math. Anal. Appl. 1996, 201(2):609–630. 10.1006/jmaa.1996.0277
  9. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38(3):367–426. 10.1137/S0036144593251710
  10. Plubtieng S, Sriprad W: A viscosity approximation method for finding common solution of variational inclusions, equilibrium problems, and fixed point problems in Hilbert spaces. Fixed Point Theory Appl. 2009, 2009: Article ID 567147
  11. Tian M: A general iterative algorithm for nonexpansive mappings in Hilbert space. Nonlinear Anal. 2010, 73: 689–694. 10.1016/j.na.2010.03.058
  12. Deng BC, Chen T, Xin BG: A viscosity approximation scheme for finding common solutions of mixed equilibrium problems, a finite family of variational inclusions, and fixed point problems in Hilbert spaces. J. Appl. Math. 2012, 2012: Article ID 152023. doi:10.1155/2012/152023
  13. Aoyama K, Kimura Y, Takahashi W, Toyoda M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal., Theory Methods Appl. 2007, 67(8):2350–2360. 10.1016/j.na.2006.08.032
  14. Peng JW, Wang Y, Shyu DS, Yao JC: Common solutions of an iterative scheme for variational inclusions, equilibrium problems, and fixed point problems. J. Inequal. Appl. 2008, 2008: Article ID 720371
  15. Xu HK: Iterative algorithms for nonlinear operator. J. Lond. Math. Soc. 2002, 66(1):240–256. 10.1112/S0024610702003332
  16. Brézis H: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.
  17. Lemaire B: Which fixed point does the iteration method select? In Recent Advances in Optimization (Trier, 1996). Lecture Notes in Economics and Mathematical Systems 452. Springer, Berlin; 1997:154–167.
  18. Piri H: A general iterative method for finding common solutions of system of equilibrium problems, system of variational inequalities and fixed point problems. Math. Comput. Model. 2011. doi:10.1016/j.mcm.2011.10.069

Copyright

© Deng and Chen; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.