General iterative algorithms for mixed equilibrium problems, a general system of generalized equilibria and fixed point problems

Journal of Inequalities and Applications20142014:470

https://doi.org/10.1186/1029-242X-2014-470

  • Received: 12 September 2014
  • Accepted: 12 November 2014

Abstract

In this paper, we introduce and analyze a general iterative algorithm for finding a common solution of a finite family of mixed equilibrium problems, a general system of generalized equilibria and a fixed point problem of nonexpansive mappings in a real Hilbert space. Under some appropriate conditions, we derive the strong convergence of the sequence generated by the proposed algorithm to a common solution, which also solves some optimization problem. The result presented in this paper improves and extends some corresponding ones in the earlier and recent literature.

MSC:49J30, 47H10, 47H15.

Keywords

  • mixed equilibrium problem
  • nonexpansive mapping
  • general system of generalized equilibria
  • fixed point

1 Introduction

Let H be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let C be a nonempty, closed and convex subset of H, and let $T: C \to C$ be a nonlinear mapping. Throughout this paper, we use $F(T)$ to denote the fixed point set of T. A mapping $T: C \to C$ is said to be nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C.$$
(1.1)
Let $F: C \times C \to \mathbf{R}$ be a real-valued bifunction and let $\varphi: C \to \mathbf{R}$ be a real-valued function, where $\mathbf{R}$ is the set of real numbers. The so-called mixed equilibrium problem (MEP) is to find $x^* \in C$ such that
$$F(x^*, y) + \varphi(y) - \varphi(x^*) \ge 0, \quad \forall y \in C,$$
(1.2)
which was considered and studied in [1, 2]. The set of solutions of MEP (1.2) is denoted by $\operatorname{MEP}(F,\varphi)$. In particular, whenever $\varphi \equiv 0$, MEP (1.2) reduces to the equilibrium problem (EP) of finding $x^* \in C$ such that
$$F(x^*, y) \ge 0, \quad \forall y \in C,$$

which was considered and studied in [3-7]. The set of solutions of the EP is denoted by $\operatorname{EP}(F)$. Given a mapping $A: C \to H$, let $F(x,y) = \langle Ax, y - x\rangle$ for all $x, y \in C$. Then $z \in \operatorname{EP}(F)$ if and only if $\langle Az, y - z\rangle \ge 0$ for all $y \in C$. Numerous problems in physics, optimization and economics reduce to finding a solution of the EP.

Throughout this paper, assume that $F: C\times C\to\mathbf{R}$ is a bifunction satisfying conditions (A1)-(A4) and that $\varphi: C\to\mathbf{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where
  • (A1) $F(x,x) = 0$ for all $x \in C$;

  • (A2) F is monotone, i.e., $F(x,y) + F(y,x) \le 0$ for any $x, y \in C$;

  • (A3) F is upper hemicontinuous, i.e., for each $x, y, z \in C$,
    $\limsup_{t\to 0^+} F(tz + (1-t)x, y) \le F(x,y)$;
  • (A4) $F(x,\cdot)$ is convex and lower semicontinuous for each $x \in C$;

  • (B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that for any $z \in C \setminus D_x$,
    $F(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r}\langle y_x - z, z - x\rangle < 0$;
  • (B2) C is a bounded set.

The mappings $\{T_n\}_{n=1}^{\infty}$ are said to be an infinite family of nonexpansive self-mappings on C if
$$\|T_nx - T_ny\| \le \|x - y\|, \quad \forall x, y \in C, \ n \ge 1,$$
(1.3)

and we denote by $F(T_n)$ the fixed point set of $T_n$, i.e., $F(T_n) = \{x \in C : T_nx = x\}$. Finding an optimal point in the intersection $\bigcap_{n=1}^{\infty}F(T_n)$ of the fixed point sets of the mappings $T_n$, $n \ge 1$, is a matter of interest in various branches of the sciences.

Recently, many authors considered some iterative methods for finding a common element of the set of solutions of MEP (1.2) and the set of fixed points of nonexpansive mappings; see, e.g., [2, 8, 9] and the references therein.

A mapping $A: C \to H$ is said to be:
  1. (i)
    Monotone if
    $\langle Ax - Ay, x - y\rangle \ge 0$, $\forall x, y \in C$.
     
  2. (ii)
    Strongly monotone if there exists a constant $\eta > 0$ such that
    $\langle Ax - Ay, x - y\rangle \ge \eta\|x - y\|^2$, $\forall x, y \in C$.
     
In such a case, A is said to be η-strongly monotone.
  1. (iii)
    Inverse-strongly monotone if there exists a constant $\zeta > 0$ such that
    $\langle Ax - Ay, x - y\rangle \ge \zeta\|Ax - Ay\|^2$, $\forall x, y \in C$.
     

In such a case, A is said to be ζ-inverse-strongly monotone.

Let $A: C \to H$ be a nonlinear mapping. The classical variational inequality problem (VIP) is to find $x^* \in C$ such that
$$\langle Ax^*, y - x^*\rangle \ge 0, \quad \forall y \in C.$$
(1.4)

We use $\operatorname{VI}(C,A)$ to denote the set of solutions of VIP (1.4). One can easily see that VIP (1.4) is equivalent to a fixed point problem, an observation that can be traced back to Lions and Stampacchia [10]. That is, $u \in C$ is a solution of VIP (1.4) if and only if u is a fixed point of the mapping $P_C(I - \lambda A)$, where $\lambda > 0$ is a constant. Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving and equilibrium problems. Not only are the existence and uniqueness of solutions important topics in the study of VIP (1.4), but so is the question of how to actually compute a solution. Up to now, many iterative algorithms have appeared in the literature for finding approximate solutions of VIP (1.4) and its extended versions; see, e.g., [3, 11-14].
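As a purely illustrative sketch (not taken from the paper), the fixed point characterization $u = P_C(I - \lambda A)u$ can be turned into a numerical method in a toy finite-dimensional setting: take $C$ a box in $\mathbf{R}^2$ and a monotone affine mapping; all concrete choices below (the matrix M, the vector q, the step size) are our own assumptions.

```python
import numpy as np

# Toy instance of VIP (1.4): C = [0,1]^2, A x = M x + q with M positive
# definite (hence strongly monotone), solved by iterating x <- P_C(x - lam*Ax).

def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection P_C onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite
q = np.array([-1.0, -1.0])
A = lambda x: M @ x + q

x = np.zeros(2)
lam = 0.1                                 # small constant step size
for _ in range(500):
    x = proj_box(x - lam * A(x))

# At a solution, x = P_C(x - lam * A x); the residual measures how far we are.
residual = np.linalg.norm(x - proj_box(x - lam * A(x)))
```

Since the map $x \mapsto P_C(x - \lambda Ax)$ is a contraction here, the iterates converge to the unique solution of the VIP.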

Recently, Ceng and Yao [8] introduced and studied the following general system of generalized equilibria (GSEP). Let C be a nonempty closed convex subset of a real Hilbert space H, let $\Theta_1, \Theta_2: C\times C\to\mathbf{R}$ be two bifunctions, and let $B_1, B_2: C\to H$ be two nonlinear mappings. Consider the problem of finding $(x^*, y^*) \in C\times C$ such that
$$\begin{cases}\Theta_1(x^*, x) + \langle B_1y^*, x - x^*\rangle + \frac{1}{\mu_1}\langle x^* - y^*, x - x^*\rangle \ge 0, & \forall x \in C,\\ \Theta_2(y^*, y) + \langle B_2x^*, y - y^*\rangle + \frac{1}{\mu_2}\langle y^* - x^*, y - y^*\rangle \ge 0, & \forall y \in C,\end{cases}$$
(1.5)
where $\mu_1 > 0$, $\mu_2 > 0$ are two constants. In particular, whenever $\Theta_1 = \Theta_2 = 0$, GSEP (1.5) reduces to the following general system of variational inequalities (GSVI): find $(x^*, y^*) \in C\times C$ such that
$$\begin{cases}\langle\mu_1B_1y^* + x^* - y^*, x - x^*\rangle \ge 0, & \forall x \in C,\\ \langle\mu_2B_2x^* + y^* - x^*, x - y^*\rangle \ge 0, & \forall x \in C,\end{cases}$$
(1.6)

where $\mu_1$ and $\mu_2$ are two positive constants. GSVI (1.6) was considered and studied in [8, 15, 16]. In particular, whenever $B_1 = B_2 = A$ and $x^* = y^*$, GSVI (1.6) reduces to VIP (1.4).

In order to prove our main results in the following sections, we need the following lemmas and propositions.

Proposition 1.1 For given $x \in H$ and $z \in C$:
  1. (i)

    $z = P_Cx \Longleftrightarrow \langle x - z, y - z\rangle \le 0$, $\forall y \in C$;

     
  2. (ii)

    $z = P_Cx \Longleftrightarrow \|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$, $\forall y \in C$;

     
  3. (iii)

    $\langle P_Cx - P_Cy, x - y\rangle \ge \|P_Cx - P_Cy\|^2$, $\forall x, y \in H$.

     

Consequently, $P_C$ is a firmly nonexpansive mapping of H onto C and hence is nonexpansive and monotone.

Let $r > 0$. Define the mapping $T_r^{(\Theta,\varphi)}: H \to C$ via the auxiliary mixed equilibrium problem, that is, for each $x \in H$,
$$T_r^{(\Theta,\varphi)}(x) := \bigl\{y \in C : \Theta(y,z) + \varphi(z) - \varphi(y) + \tfrac{1}{r}\langle y - x, z - y\rangle \ge 0,\ \forall z \in C\bigr\}.$$

Proposition 1.2 (see [2, 8])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $\Theta: C\times C\to\mathbf{R}$ be a bifunction satisfying conditions (A1)-(A4), and let $\varphi: C\to\mathbf{R}$ be a lower semicontinuous and convex function with restriction (B1) or (B2). Then the following hold:
  1. (a)

    for each $x \in H$, $T_r^{(\Theta,\varphi)}(x) \ne \emptyset$;

     
  2. (b)

    $T_r^{(\Theta,\varphi)}$ is single-valued;

     
  3. (c)
    $T_r^{(\Theta,\varphi)}$ is firmly nonexpansive, i.e., for any $x, y \in H$,
    $\|T_r^{(\Theta,\varphi)}x - T_r^{(\Theta,\varphi)}y\|^2 \le \langle T_r^{(\Theta,\varphi)}x - T_r^{(\Theta,\varphi)}y, x - y\rangle$;
     
  4. (d)
    for all $s, t > 0$ and $x \in H$,
    $\|T_s^{(\Theta,\varphi)}x - T_t^{(\Theta,\varphi)}x\|^2 \le \frac{s-t}{s}\langle T_s^{(\Theta,\varphi)}x - x, T_s^{(\Theta,\varphi)}x - T_t^{(\Theta,\varphi)}x\rangle$;
     
  5. (e)

    $F(T_r^{(\Theta,\varphi)}) = \operatorname{MEP}(\Theta,\varphi)$;

     
  6. (f)

    $\operatorname{MEP}(\Theta,\varphi)$ is closed and convex.

     
Remark 1.1 It is easy to see from conclusions (c) and (d) in Proposition 1.2 that
$$\|T_r^{(\Theta,\varphi)}x - T_r^{(\Theta,\varphi)}y\| \le \|x - y\|, \quad \forall r > 0,\ x, y \in H,$$
and
$$\|T_s^{(\Theta,\varphi)}x - T_t^{(\Theta,\varphi)}x\| \le \frac{|s-t|}{s}\|T_s^{(\Theta,\varphi)}x - x\|, \quad \forall s, t > 0,\ x \in H.$$

Remark 1.2 If $\varphi = 0$, then $T_r^{(\Theta,\varphi)}$ is written as $T_r^{\Theta}$.

Ceng and Yao [8] transformed GSEP (1.5) into a fixed point problem in the following way.

Lemma 1.1 (see [8])

Let C be a nonempty closed convex subset of H. Let $\Theta_1, \Theta_2: C\times C\to\mathbf{R}$ be two bifunctions satisfying conditions (A1)-(A4), and let the mappings $B_1, B_2: C\to H$ be $\zeta_1$-inverse-strongly monotone and $\zeta_2$-inverse-strongly monotone, respectively. Let $\mu_1 \in (0, 2\zeta_1)$ and $\mu_2 \in (0, 2\zeta_2)$. Then, for given $x^*, y^* \in C$, $(x^*, y^*)$ is a solution of GSEP (1.5) if and only if $x^*$ is a fixed point of the mapping $G: C\to C$ defined by
$$G(x) = T_{\mu_1}^{\Theta_1}(I - \mu_1B_1)T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)x, \quad \forall x \in C,$$

where $y^* = T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)x^*$.

Lemma 1.2 (see [8])

For given $x^*, y^* \in C$, $(x^*, y^*)$ is a solution of GSVI (1.6) if and only if $x^*$ is a fixed point of the mapping $G: C\to C$ defined by
$$Gx = P_C(I - \mu_1B_1)P_C(I - \mu_2B_2)x, \quad \forall x \in C,$$

where $y^* = P_C(x^* - \mu_2B_2x^*)$ and $P_C$ is the metric projection of H onto C.

Remark 1.3 If $\Theta_1, \Theta_2: C\times C\to\mathbf{R}$ are two bifunctions satisfying (A1)-(A4) and the mappings $B_1, B_2: C\to H$ are $\zeta_1$-inverse-strongly monotone and $\zeta_2$-inverse-strongly monotone, respectively, then $G: C\to C$ is a nonexpansive mapping provided $\mu_1 \in (0, 2\zeta_1)$ and $\mu_2 \in (0, 2\zeta_2)$.

Throughout this paper, the set of fixed points of the mapping G is denoted by Γ.

On the other hand, Moudafi [1] introduced the viscosity approximation method for nonexpansive mappings (see also [17] for further developments in both Hilbert spaces and Banach spaces).

A mapping $f: C\to C$ is called α-contractive if there exists a constant $\alpha \in (0,1)$ such that
$$\|f(x) - f(y)\| \le \alpha\|x - y\|, \quad \forall x, y \in C.$$
Let f be a contraction on C. Starting with an arbitrary initial point $x_0 \in C$, define a sequence $\{x_n\}$ recursively by
$$x_{n+1} = \alpha_nf(x_n) + (1 - \alpha_n)Tx_n, \quad n \ge 0,$$
(1.7)
where T is a nonexpansive mapping of C into itself and $\{\alpha_n\}$ is a sequence in (0,1). It is proved in [1, 17] that, under certain appropriate conditions imposed on $\{\alpha_n\}$, the sequence $\{x_n\}$ generated by (1.7) converges strongly to the unique solution $x^* \in F(T)$ of the VIP
$$\langle(I - f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in F(T).$$
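As a small illustrative sketch (not from the paper), the viscosity scheme (1.7) can be run in $\mathbf{R}^2$ with concrete toy choices of ours: T the coordinate swap (a nonexpansive map whose fixed point set is the diagonal) and f a 0.5-contraction.

```python
import numpy as np

# Toy run of Moudafi's viscosity scheme (1.7). All concrete choices are ours.
T = lambda x: np.array([x[1], x[0]])   # nonexpansive; F(T) = {x1 = x2}
f = lambda x: 0.5 * x                  # 0.5-contraction

x = np.array([4.0, -2.0])
for n in range(1, 2000):
    alpha = 1.0 / (n + 1)              # alpha_n -> 0 and sum alpha_n = infinity
    x = alpha * f(x) + (1 - alpha) * T(x)

# The limit lies in F(T) (the diagonal) and solves the VIP
# <(I - f)x*, x - x*> >= 0 over F(T); here that forces x* = 0.
```

The iterates are driven into the diagonal while the contraction term selects the particular fixed point solving the variational inequality.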
A bounded linear operator A is said to be $\bar{\gamma}$-strongly positive on H if there exists a constant $\bar{\gamma} \in (0,1)$ such that
$$\langle Ax, x\rangle \ge \bar{\gamma}\|x\|^2, \quad \forall x \in H.$$
(1.8)
The typical problem is to minimize a quadratic function on a real Hilbert space H:
$$\min_{x \in C}\frac{1}{2}\langle Ax, x\rangle - \langle x, u\rangle,$$
(1.9)

where C is a nonempty closed convex subset of H, u is a given point in H, and A is a strongly positive bounded linear operator on H.

In 2006, Marino and Xu [18] introduced and considered the following general iterative method:
$$x_{n+1} = \alpha_n\gamma f(x_n) + (I - \alpha_nA)Tx_n, \quad n \ge 0,$$
(1.10)
where A is a strongly positive bounded linear operator on a real Hilbert space H and f is a contraction on H. They proved that the sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality
$$\langle(\gamma f - A)x^*, x - x^*\rangle \le 0, \quad \forall x \in F(T),$$
which is the optimality condition for the minimization problem
$$\min_{x \in F(T)}\frac{1}{2}\langle Ax, x\rangle - h(x),$$

where h is a potential function for γf (i.e., $h'(x) = \gamma f(x)$ for all $x \in H$).
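The Marino-Xu scheme (1.10) can likewise be sketched numerically (again, not from the paper; the operator A, the map T and the contraction f below are toy choices of ours in $\mathbf{R}^2$).

```python
import numpy as np

# Toy run of the general iterative method (1.10). All choices are ours.
A = np.array([[2.0, 0.0], [0.0, 3.0]])          # strongly positive linear operator
T = lambda x: np.array([x[1], x[0]])             # nonexpansive; F(T) = {x1 = x2}
f = lambda x: 0.25 * x + np.array([1.0, 1.0])    # 0.25-contraction
gamma = 1.0
I = np.eye(2)

x = np.zeros(2)
for n in range(2000):
    alpha = 1.0 / (n + 2)
    x = alpha * gamma * f(x) + (I - alpha * A) @ T(x)

# The limit lies on the diagonal and satisfies the first-order condition
# <(gamma*f - A)x*, d> = 0 along the feasible direction d = (1, 1).
opt = (gamma * f(x) - A @ x) @ np.array([1.0, 1.0])
```

The residual `opt` vanishes at the unique solution of the limiting variational inequality over F(T).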

In 2007, Takahashi and Takahashi [5] introduced an iterative scheme based on the viscosity approximation method for finding a common element of the set of solutions of the EP and the set of fixed points of a nonexpansive mapping in a real Hilbert space. Let $S: C\to H$ be a nonexpansive mapping. Starting with an arbitrary initial point $x_1 \in H$, define sequences $\{x_n\}$ and $\{u_n\}$ recursively by
$$\begin{cases}F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in C,\\ x_{n+1} = \alpha_nf(x_n) + (1 - \alpha_n)Su_n, & n \ge 1.\end{cases}$$
(1.11)

They proved that, under appropriate conditions imposed on $\{\alpha_n\}$ and $\{r_n\}$, the sequences $\{x_n\}$ and $\{u_n\}$ converge strongly to $x^* \in F(S)\cap\operatorname{EP}(F)$, where $x^* = P_{F(S)\cap\operatorname{EP}(F)}f(x^*)$.
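The equilibrium step in (1.11) admits a closed form in a special case, which we sketch here for illustration (not from the paper): for $F(u,y) = \langle Au, y - u\rangle$ with affine $A$ and $C = \mathbf{R}^n$, the requirement $F(u,y) + \frac{1}{r}\langle y - u, u - x\rangle \ge 0$ for all y reduces to the linear equation $Au + (u - x)/r = 0$. The matrix M and vector q below are our own toy data.

```python
import numpy as np

# Resolvent step of the EP for F(u, y) = <M u + q, y - u> on C = R^n.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive semidefinite => F monotone
q = np.array([1.0, -1.0])

def resolvent(x, r):
    """u = T_r x: solve (M + I/r) u = x/r - q."""
    n = len(x)
    return np.linalg.solve(M + np.eye(n) / r, x / r - q)

x = np.array([0.5, 0.5])
u = resolvent(x, r=1.0)

# Check the defining inequality: with C = R^n it holds with equality in the
# gradient sense, i.e., M u + q + (u - x)/r = 0.
lhs = lambda y: (M @ u + q) @ (y - u) + (y - u) @ (u - x) / 1.0
```

Since $y - u$ ranges over all of $\mathbf{R}^n$, the inequality can hold for every y only when the linear functional vanishes identically.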

Subsequently, Plubtieng and Punpaeng [19] introduced a general iterative process for finding a common element of the set of solutions of the EP and the set of fixed points of a nonexpansive mapping in a Hilbert space.

Let $S: H\to H$ be a nonexpansive mapping. Starting with an arbitrary $x_1 \in H$, define sequences $\{x_n\}$ and $\{u_n\}$ by
$$\begin{cases}F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in C,\\ x_{n+1} = \alpha_n\gamma f(x_n) + (I - \alpha_nA)Su_n, & n \ge 1.\end{cases}$$
(1.12)
They proved that, under appropriate conditions imposed on $\{\alpha_n\}$ and $\{r_n\}$, the sequence $\{x_n\}$ generated by (1.12) converges strongly to the unique solution $x^* \in F(S)\cap\operatorname{EP}(F)$ of the VIP
$$\langle(\gamma f - A)x^*, x - x^*\rangle \le 0, \quad \forall x \in F(S)\cap\operatorname{EP}(F),$$
which is the optimality condition for the minimization problem
$$\min_{x \in F(S)\cap\operatorname{EP}(F)}\frac{1}{2}\langle Ax, x\rangle - h(x),$$

where h is a potential function for γf (i.e., $h'(x) = \gamma f(x)$ for all $x \in H$).

In 2001, Yamada [20] introduced the following hybrid steepest-descent method for a nonexpansive mapping T:
$$x_{n+1} = Tx_n - \mu\lambda_nF(Tx_n), \quad n \ge 0,$$
(1.13)
where F is a κ-Lipschitzian and η-strongly monotone operator with constants $\kappa, \eta > 0$ and $0 < \mu < 2\eta/\kappa^2$. He proved that if $\{\lambda_n\}$ satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.13) converges strongly to the unique solution of the variational inequality
$$\langle Fx^*, x - x^*\rangle \ge 0, \quad \forall x \in F(T).$$
In 2010, Tian [21] combined the iterative method (1.10) with Yamada's method (1.13) and considered the following general viscosity-type iterative method:
$$x_{n+1} = \alpha_n\gamma f(x_n) + (I - \alpha_n\mu F)Tx_n, \quad n \ge 0.$$
(1.14)
He then proved that the sequence $\{x_n\}$ generated by (1.14) converges strongly to the unique solution of the variational inequality
$$\langle(\gamma f - \mu F)x^*, x - x^*\rangle \le 0, \quad \forall x \in F(T).$$
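Yamada's method (1.13) also admits a simple numerical illustration (not from the paper; the maps T and F below, a reflection and the gradient of a quadratic, are toy choices of ours with $\eta = \kappa = 1$).

```python
import numpy as np

# Toy run of the hybrid steepest-descent method (1.13). All choices are ours.
T = lambda x: np.array([x[1], x[0]])          # nonexpansive; F(T) = {x1 = x2}
Fop = lambda x: x - np.array([2.0, 0.0])      # gradient of 0.5*||x - (2,0)||^2
mu = 1.0                                       # 0 < mu < 2*eta/kappa^2 = 2

x = np.array([5.0, -3.0])
for n in range(3000):
    lam = 1.0 / (n + 2)                        # lam_n -> 0, sum lam_n = infinity
    y = T(x)
    x = y - mu * lam * Fop(y)

# The limit solves <F x*, x - x*> >= 0 over F(T): minimizing
# 0.5*||x - (2,0)||^2 over the diagonal gives x* = (1, 1).
```

The T-step drives the iterates toward F(T), while the damped steepest-descent correction selects the VI solution within it.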
Recently, Ceng et al. [22] introduced implicit and explicit iterative schemes for finding the fixed points of a nonexpansive mapping T on a nonempty, closed and convex subset C of a real Hilbert space H as follows:
$$x_t = P_C[t\gamma Vx_t + (I - t\mu F)Tx_t]$$
(1.15)
and
$$x_{n+1} = P_C[\alpha_n\gamma Vx_n + (I - \alpha_n\mu F)Tx_n], \quad n \ge 0,$$
(1.16)
where V is an L-Lipschitzian mapping with constant $L \ge 0$ and F is a κ-Lipschitzian and η-strongly monotone operator with constants $\kappa, \eta > 0$ and $0 < \mu < 2\eta/\kappa^2$. They then proved that the sequences generated by (1.15) and (1.16) converge strongly to the unique solution of the variational inequality
$$\langle(\gamma V - \mu F)x^*, x - x^*\rangle \le 0, \quad \forall x \in F(T).$$
Let $\{T_n\}_{n=1}^{\infty}$ be an infinite family of nonexpansive self-mappings on C, and let $\{\lambda_n\}_{n=1}^{\infty}$ be a sequence in [0,1]. For any $n \ge 1$, define a mapping $W_n$ of C into itself as follows:
$$\begin{cases}U_{n,n+1} = I,\\ U_{n,n} = \lambda_nT_nU_{n,n+1} + (1 - \lambda_n)I,\\ U_{n,n-1} = \lambda_{n-1}T_{n-1}U_{n,n} + (1 - \lambda_{n-1})I,\\ \quad\vdots\\ U_{n,k} = \lambda_kT_kU_{n,k+1} + (1 - \lambda_k)I,\\ U_{n,k-1} = \lambda_{k-1}T_{k-1}U_{n,k} + (1 - \lambda_{k-1})I,\\ \quad\vdots\\ U_{n,2} = \lambda_2T_2U_{n,3} + (1 - \lambda_2)I,\\ W_n = U_{n,1} = \lambda_1T_1U_{n,2} + (1 - \lambda_1)I.\end{cases}$$
(1.17)

Such a mapping $W_n$ is called the W-mapping generated by $T_n, T_{n-1}, \ldots, T_1$ and $\lambda_n, \lambda_{n-1}, \ldots, \lambda_1$.
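Evaluating $W_nx$ amounts to running the recursion (1.17) from $U_{n,n+1} = I$ down to $U_{n,1}$; note that at each level the term $(1 - \lambda_k)I$ acts on the original point x, not on the intermediate value. The following sketch (not from the paper; the family $T_k$ and the weights are toy choices of ours) makes this explicit.

```python
import numpy as np

def W(x, Ts, lams):
    """Compute W_n x from (1.17) for the finite prefix T_1..T_n, lam_1..lam_n."""
    u = x.copy()                      # u = U_{n,n+1} x = x
    n = len(Ts)
    for k in range(n - 1, -1, -1):    # levels k = n, n-1, ..., 1 (0-based here)
        u = lams[k] * Ts[k](u) + (1 - lams[k]) * x
    return u

# Toy family: scalings c*x are nonexpansive with common fixed point 0.
Ts = [lambda x, c=c: c * x for c in (0.9, 0.8, 0.7)]
lams = [0.5, 0.5, 0.5]

x = np.array([1.0, 2.0])
y = np.array([-1.0, 0.5])
```

A common fixed point of all $T_k$ is fixed by $W_n$, and $W_n$ inherits nonexpansivity from the family.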

Very recently, Chen [23] introduced and considered the following iterative scheme:
$$x_{n+1} = P_C[\alpha_n\gamma f(x_n) + (I - \alpha_nA)W_nx_n], \quad n \ge 0,$$
(1.18)
where A is a strongly positive bounded linear operator, f is a contraction on H, and $W_n$ is defined by (1.17). He proved that the sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality
$$\langle(\gamma f - A)x^*, x - x^*\rangle \le 0, \quad \forall x \in \bigcap_{n=1}^{\infty}F(T_n),$$
which is the optimality condition for the minimization problem
$$\min_{x \in \bigcap_{n=1}^{\infty}F(T_n)}\frac{1}{2}\langle Ax, x\rangle - h(x),$$

where h is a potential function for γf (i.e., $h'(x) = \gamma f(x)$ for all $x \in H$).

More recently, Rattanaseeha [7] introduced the iterative algorithm
$$\begin{cases}x_1 \in H \text{ arbitrarily given},\\ F(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in C,\\ x_{n+1} = P_C[\alpha_n\gamma f(x_n) + (I - \alpha_nA)W_nu_n], & n \ge 1,\end{cases}$$
(1.19)
where A is a strongly positive bounded linear operator, f is a contraction on H, and $W_n$ is defined by (1.17). He proved that the sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality
$$\langle(\gamma f - A)x^*, x - x^*\rangle \le 0, \quad \forall x \in \bigcap_{n=1}^{\infty}F(T_n)\cap\operatorname{EP}(F).$$
Most recently, Wang et al. [24] introduced the iterative algorithm
$$\begin{cases}x_1 \in H,\\ F(u_n, y) + \varphi(y) - \varphi(u_n) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & \forall y \in C,\\ x_{n+1} = P_C[\alpha_n\gamma f(x_n) + \beta_nx_n + ((1 - \beta_n)I - \alpha_nA)W_nGu_n], & n \ge 1,\end{cases}$$
(1.20)
where A is a strongly positive bounded linear operator, f is an l-Lipschitz continuous mapping, $\{W_n\}$ is defined by (1.17), and $G = P_C(I - \mu_1B_1)P_C(I - \mu_2B_2)$. They proved that the sequence $\{x_n\}$ converges strongly to $x^* \in \Omega := \bigcap_{n=1}^{\infty}F(T_n)\cap\operatorname{MEP}(F,\varphi)\cap\Gamma$, where Γ is the fixed point set of the mapping $G = P_C(I - \mu_1B_1)P_C(I - \mu_2B_2)$, and $x^*$ is the unique solution of the VIP
$$\langle(A - \gamma f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega,$$
or, equivalently, the unique solution of the minimization problem
$$\min_{x \in \Omega}\frac{1}{2}\langle Ax, x\rangle - h(x).$$

Our concern now is the following:

Question 1 Can Theorem 3.1 of Rattanaseeha [7], Theorem 3.1 of Wang et al. [24], and related results, be extended from one mixed equilibrium problem to a convex combination of a finite family of mixed equilibrium problems?

Question 2 We know that GSEP (1.5) is more general than GSVI (1.6). What happens if GSVI (1.6) is replaced by GSEP (1.5)?

Question 3 We know that the η-strongly monotone and L-Lipschitz operator is more general than the strongly positive bounded linear operator. What happens if the strongly positive bounded linear operator is replaced by the η-strongly monotone and L-Lipschitz operator?

The purpose of this article is to give affirmative answers to the questions mentioned above. Let $B_i: C\to H$ be $\zeta_i$-inverse-strongly monotone for $i = 1, 2$, let $B_3$ be a κ-Lipschitz and η-strongly monotone operator, and let $f: C\to H$ be an l-Lipschitz mapping. Motivated by the above facts, in this paper we propose and analyze the general iterative algorithm
$$\begin{cases}y_n = \delta_nW_nG\bigl(\sum_{m=1}^{N}\beta_{n,m}u_{n,m}\bigr) + (1 - \delta_n)\sum_{m=1}^{N}\beta_{n,m}u_{n,m},\\ x_{n+1} = P_C[\alpha_n\gamma f(x_n) + (I - \alpha_n\mu B_3)W_nGy_n], & n \ge 1,\end{cases}$$
(1.21)
where $\{u_{n,m}\}$ is such that
$$F_m(u_{n,m}, y) + \varphi(y) - \varphi(u_{n,m}) + \frac{1}{r_{n,m}}\langle y - u_{n,m}, u_{n,m} - x_n\rangle \ge 0, \quad \forall y \in C,$$
for each $1 \le m \le N$, $W_n$ is defined by (1.17), $G = T_{\mu_1}^{\Theta_1}(I - \mu_1B_1)T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)$, and $x_1 \in H$ is an arbitrary initial point, for finding a common solution of a finite family of MEPs (1.2), GSEP (1.5) and the fixed point problem of an infinite family of nonexpansive self-mappings $\{T_n\}_{n=1}^{\infty}$ on C. It is proved that, under some mild conditions on the parameters, the sequence $\{x_n\}$ generated by (1.21) converges strongly to $x^* \in \Omega := (\bigcap_{n=1}^{\infty}F(T_n))\cap(\bigcap_{m=1}^{N}\operatorname{MEP}(F_m,\varphi))\cap\Gamma$, where Γ is the fixed point set of the mapping $G = T_{\mu_1}^{\Theta_1}(I - \mu_1B_1)T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)$, and $x^*$ is the unique solution of the variational inequality
$$\langle(\gamma f - \mu B_3)x^*, z - x^*\rangle \le 0, \quad \forall z \in \Omega.$$
(1.22)

Remark 1.4 Other results on the problem of finding solutions to equilibrium problems and fixed point problems of families of mappings with different approaches can be found in [25, 26].
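To make the overall structure of algorithm (1.21) concrete, here is a drastically simplified toy instance (not from the paper): $H = C = \mathbf{R}^2$ (so $P_C = I$), $N = 1$, $\varphi = 0$, $\Theta_1 = \Theta_2 = 0$ and $B_1 = B_2 = 0$ (hence $G = I$), $B_3 = I$, $\mu = \gamma = 1$, $F_1(u,y) = \langle Mu + q, y - u\rangle$, and $T_k$ the coordinate swap for every k. All of these concrete choices are ours.

```python
import numpy as np

# Toy instance of algorithm (1.21) with G = I and P_C = I.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -3.0])               # EP(F_1) = {(1, 1)}, on the diagonal
R = lambda x: np.array([x[1], x[0]])     # each T_k; F(T_k) = {x1 = x2}
f = lambda x: 0.5 * x                    # 0.5-contraction

def W(x, n, lam=0.5, depth=50):
    """W_n x via the recursion (1.17) with T_k = R, truncated at a fixed depth."""
    u = x.copy()
    for _ in range(min(n, depth)):
        u = lam * R(u) + (1 - lam) * x
    return u

x = np.array([5.0, -4.0])
for n in range(1, 3000):
    alpha, delta, r = 1.0 / (n + 1), 1.0 / (n + 1), 1.0
    u = np.linalg.solve(M + np.eye(2) / r, x / r - q)   # mixed equilibrium step
    y = delta * W(u, n) + (1 - delta) * u               # z_n = u since N = 1
    x = alpha * f(x) + (1 - alpha) * W(y, n)            # P_C = I, B_3 = I here
```

In this instance $\Omega = \{x_1 = x_2\}\cap\operatorname{EP}(F_1) = \{(1,1)\}$ is a singleton, so the variational inequality (1.22) is solved trivially by the common solution itself.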

2 Preliminaries

We indicate weak convergence and strong convergence by the notation ⇀ and →, respectively. A mapping $f: C\to H$ is called l-Lipschitz continuous if there exists a constant $l \ge 0$ such that
$$\|f(x) - f(y)\| \le l\|x - y\|, \quad \forall x, y \in C.$$
In particular, if $l = 1$, then f is called a nonexpansive mapping; if $l \in [0,1)$, then f is a contraction. Recall that a mapping $T: H\to H$ is said to be firmly nonexpansive if
$$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y\rangle, \quad \forall x, y \in H.$$
The metric (or nearest point) projection from H onto C is the mapping $P_C: H\to C$ which assigns to each point $x \in H$ the unique point $P_Cx \in C$ satisfying
$$\|x - P_Cx\| = \inf_{y \in C}\|x - y\| =: d(x, C).$$
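For a concrete illustration (not from the paper): the projection onto the closed unit ball has the closed form $P_Cx = x/\max(1, \|x\|)$, and its firm nonexpansivity can be checked numerically. The random test points below are our own choices.

```python
import numpy as np

# Metric projection onto the closed unit ball C = {x : ||x|| <= 1}.
def P(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)

# Firm nonexpansivity: ||P x - P y||^2 <= <P x - P y, x - y>.
lhs = np.linalg.norm(P(x) - P(y))**2
rhs = (P(x) - P(y)) @ (x - y)
```

The same inequality holds for the projection onto any nonempty closed convex subset of a Hilbert space, as stated in Proposition 1.1.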

We need some facts and tools in a real Hilbert space H which are listed as lemmas below.

Lemma 2.1 Let X be a real inner product space. Then the following inequality holds:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle, \quad \forall x, y \in X.$$
Lemma 2.2 Let H be a Hilbert space. Then the following equalities hold:
  1. (a)

    $\|x - y\|^2 = \|x\|^2 - \|y\|^2 - 2\langle x - y, y\rangle$ for all $x, y \in H$;

     
  2. (b)

    $\|\lambda x + \mu y\|^2 = \lambda\|x\|^2 + \mu\|y\|^2 - \lambda\mu\|x - y\|^2$ for all $x, y \in H$ and $\lambda, \mu \in [0,1]$ with $\lambda + \mu = 1$;

     
  3. (c)
    if $\{x_n\}$ is a sequence in H such that $x_n \rightharpoonup x$, then
    $\limsup_{n\to\infty}\|x_n - y\|^2 = \limsup_{n\to\infty}\|x_n - x\|^2 + \|x - y\|^2$, $\forall y \in H$.
     
     

We have the following crucial lemmas concerning the W-mappings defined by (1.17).

Lemma 2.3 (see [[27], Lemma 3.2])

Let $\{T_n\}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on C such that $\bigcap_{n=1}^{\infty}F(T_n) \ne \emptyset$, and let $\{\lambda_n\}$ be a sequence in (0,b] for some $b \in (0,1)$. Then, for every $x \in C$ and $k \ge 1$, the limit $\lim_{n\to\infty}U_{n,k}x$ exists, where $U_{n,k}$ is defined by (1.17).

Remark 2.1 (see [[6], Remark 3.1])

It can be seen from Lemma 2.3 that if D is a nonempty bounded subset of C, then for every $\epsilon > 0$ there exists $n_0 \ge k$ such that for all $n \ge n_0$,
$$\sup_{x \in D}\|U_{n,k}x - U_kx\| \le \epsilon,$$
where $U_kx := \lim_{n\to\infty}U_{n,k}x$.

Remark 2.2 (see [[6], Remark 3.2])

Utilizing Lemma 2.3, we can define a mapping $W: C\to C$ by
$$Wx = \lim_{n\to\infty}W_nx = \lim_{n\to\infty}U_{n,1}x, \quad \forall x \in C.$$
Such a W is called the W-mapping generated by $T_1, T_2, \ldots$ and $\lambda_1, \lambda_2, \ldots$. Since each $W_n$ is nonexpansive, $W: C\to C$ is also nonexpansive. Indeed, observe that for each $x, y \in C$,
$$\|Wx - Wy\| = \lim_{n\to\infty}\|W_nx - W_ny\| \le \|x - y\|.$$
If $\{x_n\}$ is a bounded sequence in C, then we put $D = \{x_n : n \ge 1\}$. Hence, it is clear from Remark 2.1 that for arbitrary $\epsilon > 0$ there exists $N_0 \ge 1$ such that for all $n > N_0$,
$$\|W_nx_n - Wx_n\| = \|U_{n,1}x_n - U_1x_n\| \le \sup_{x \in D}\|U_{n,1}x - U_1x\| \le \epsilon.$$
This implies that
$$\lim_{n\to\infty}\|W_nx_n - Wx_n\| = 0.$$

Lemma 2.4 (see [[27], Lemma 3.3])

Let $\{T_n\}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on C such that $\bigcap_{n=1}^{\infty}F(T_n) \ne \emptyset$, and let $\{\lambda_n\}$ be a sequence in (0,b] for some $b \in (0,1)$. Then $F(W) = \bigcap_{n=1}^{\infty}F(T_n)$.

Lemma 2.5 (see [[28], Demiclosedness principle])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let T be a nonexpansive self-mapping on C with $F(T) \ne \emptyset$. Then $I - T$ is demiclosed. That is, whenever $\{x_n\}$ is a sequence in C weakly converging to some $x \in C$ and the sequence $\{(I - T)x_n\}$ strongly converges to some y, it follows that $(I - T)x = y$. Here I is the identity operator of H.

Lemma 2.6 Let $A: C\to H$ be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 1.1(i)) implies
$$u \in \operatorname{VI}(C,A) \quad\Longleftrightarrow\quad u = P_C(u - \lambda Au), \quad \forall\lambda > 0.$$

Lemma 2.7 (see [29])

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \sigma_n\gamma_n, \quad n \ge 1,$$
where $\{\gamma_n\}$ is a sequence in [0,1] and $\{\sigma_n\}$ is a real sequence such that
  1. (i)

    $\sum_{n=1}^{\infty}\gamma_n = \infty$;

     
  2. (ii)

    $\limsup_{n\to\infty}\sigma_n \le 0$ or $\sum_{n=1}^{\infty}|\sigma_n\gamma_n| < \infty$.

     

Then $\lim_{n\to\infty}a_n = 0$.

Lemma 2.8 Each Hilbert space H satisfies Opial's condition, i.e., for any sequence $\{x_n\} \subset H$ with $x_n \rightharpoonup x$, the inequality
$$\liminf_{n\to\infty}\|x_n - x\| < \liminf_{n\to\infty}\|x_n - y\|$$

holds for every $y \in H$ with $y \ne x$.

3 Main result

We will introduce and analyze a general iterative algorithm for finding a common solution of a finite family of MEP (1.2), GSEP (1.5) and the fixed point problems of an infinite family of nonexpansive self-mappings { T n } n = 1 on C. Under some appropriate conditions imposed on the parameter sequences, we will prove strong convergence of the proposed algorithm.

Theorem 3.1 Let C be a nonempty closed convex subset of a Hilbert space H. Let $\{F_m\}_{m=1}^{N}$ be a finite family of bifunctions from $C\times C$ to $\mathbf{R}$ satisfying (A1)-(A4), and let $\varphi: C\to\mathbf{R}$ be a lower semicontinuous and convex function with restriction (B1) or (B2), for every $1 \le m \le N$, where N denotes some positive integer. Let $\Theta_1, \Theta_2: C\times C\to\mathbf{R}$ be two bifunctions satisfying (A1)-(A4), let $B_i: C\to H$ be $\zeta_i$-inverse-strongly monotone for $i = 1, 2$, let $B_3$ be a κ-Lipschitz and η-strongly monotone operator with constants $\kappa, \eta > 0$, and let $f: H\to H$ be an l-Lipschitz mapping with constant $l \ge 0$. Let $\{T_n\}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on C, and let $\{\lambda_n\}$ be a sequence in (0,b] for some $b \in (0,1)$. Suppose that $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma l < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Assume that $\Omega := (\bigcap_{n=1}^{\infty}F(T_n))\cap(\bigcap_{m=1}^{N}\operatorname{MEP}(F_m,\varphi))\cap\Gamma \ne \emptyset$, where Γ is the fixed point set of the mapping $G = T_{\mu_1}^{\Theta_1}(I - \mu_1B_1)T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)$ with $\mu_i \in (0, 2\zeta_i)$ for $i = 1, 2$. Let $\{\alpha_n\}$, $\{\delta_n\}$ and $\{\beta_{n,1}\}, \ldots, \{\beta_{n,N}\}$ be sequences in [0,1], and let $\{r_{n,m}\}$ be a sequence in $(0,\infty)$ for every $1 \le m \le N$, such that:
  1. (a)

    $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\sum_{n=1}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$;

     
  2. (b)

    $\sum_{m=1}^{N}\beta_{n,m} = 1$ and $\sum_{n=1}^{\infty}|\beta_{n+1,m} - \beta_{n,m}| < \infty$ for each $1 \le m \le N$;

     
  3. (c)

    $\lim_{n\to\infty}\delta_n = 0$ and $\sum_{n=1}^{\infty}|\delta_{n+1} - \delta_n| < \infty$;

     
  4. (d)

    $\liminf_{n\to\infty}r_{n,m} > 0$ and $\sum_{n=1}^{\infty}|r_{n+1,m} - r_{n,m}| < \infty$ for each $1 \le m \le N$.

     
Given $x_1 \in H$ arbitrarily, let the sequence $\{x_n\}$ be generated iteratively by
$$\begin{cases}y_n = \delta_nW_nG\bigl(\sum_{m=1}^{N}\beta_{n,m}u_{n,m}\bigr) + (1 - \delta_n)\sum_{m=1}^{N}\beta_{n,m}u_{n,m},\\ x_{n+1} = P_C[\alpha_n\gamma f(x_n) + (I - \alpha_n\mu B_3)W_nGy_n], & n \ge 1,\end{cases}$$
(3.1)
where $u_{n,m}$ is such that
$$F_m(u_{n,m}, y) + \varphi(y) - \varphi(u_{n,m}) + \frac{1}{r_{n,m}}\langle y - u_{n,m}, u_{n,m} - x_n\rangle \ge 0, \quad \forall y \in C,$$
for each $1 \le m \le N$, and $W_n$ is defined by (1.17). Then the sequence $\{x_n\}$ defined by (3.1) converges strongly to $x^* \in \Omega$ as $n\to\infty$, where $x^*$ is the unique solution of the variational inequality
$$\langle(\gamma f - \mu B_3)x^*, z - x^*\rangle \le 0, \quad \forall z \in \Omega.$$
(3.2)
Proof Put $z_n = \sum_{m=1}^{N}\beta_{n,m}u_{n,m}$ in (3.1); then (3.1) reduces to
$$\begin{cases}z_n = \sum_{m=1}^{N}\beta_{n,m}u_{n,m},\\ y_n = \delta_nW_nGz_n + (1 - \delta_n)z_n,\\ x_{n+1} = P_C[\alpha_n\gamma f(x_n) + (I - \alpha_n\mu B_3)W_nGy_n], & n \ge 1.\end{cases}$$
(3.3)

We divide the proof into several steps.

Step 1. We show that $\{x_n\}$ is bounded. Indeed, take $p \in \Omega$ arbitrarily. Since $p = W_np = T_{r_{n,m}}^{(F_m,\varphi)}p = Gp$ and $B_i$ is $\zeta_i$-inverse-strongly monotone for $i = 1, 2$, by Remark 1.1 we deduce from $\mu_i \in (0, 2\zeta_i)$, $i = 1, 2$, that for any $n \ge 1$,
$$\begin{aligned}\|Gy_n - p\|^2 &= \|T_{\mu_1}^{\Theta_1}(I - \mu_1B_1)T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)y_n - T_{\mu_1}^{\Theta_1}(I - \mu_1B_1)T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)p\|^2\\ &\le \|(I - \mu_1B_1)T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)y_n - (I - \mu_1B_1)T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)p\|^2\\ &= \|[T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)y_n - T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)p] - \mu_1[B_1T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)y_n - B_1T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)p]\|^2\\ &\le \|T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)y_n - T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)p\|^2 + \mu_1(\mu_1 - 2\zeta_1)\|B_1T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)y_n - B_1T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)p\|^2\\ &\le \|T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)y_n - T_{\mu_2}^{\Theta_2}(I - \mu_2B_2)p\|^2\\ &\le \|(I - \mu_2B_2)y_n - (I - \mu_2B_2)p\|^2 = \|(y_n - p) - \mu_2(B_2y_n - B_2p)\|^2\\ &= \|y_n - p\|^2 + \mu_2(\mu_2 - 2\zeta_2)\|B_2y_n - B_2p\|^2 \le \|y_n - p\|^2\\ &= \|\delta_nW_nGz_n + (1 - \delta_n)z_n - p\|^2\\ &\le \delta_n\|W_nGz_n - p\|^2 + (1 - \delta_n)\|z_n - p\|^2\\ &\le \delta_n\|Gz_n - p\|^2 + (1 - \delta_n)\|z_n - p\|^2\\ &\le \delta_n\|z_n - p\|^2 + (1 - \delta_n)\|z_n - p\|^2 = \|z_n - p\|^2\\ &= \Bigl\|\sum_{m=1}^{N}\beta_{n,m}(u_{n,m} - p)\Bigr\|^2 \le \sum_{m=1}^{N}\beta_{n,m}\|u_{n,m} - p\|^2\\ &= \sum_{m=1}^{N}\beta_{n,m}\|T_{r_{n,m}}^{(F_m,\varphi)}x_n - T_{r_{n,m}}^{(F_m,\varphi)}p\|^2 \le \sum_{m=1}^{N}\beta_{n,m}\|x_n - p\|^2 = \|x_n - p\|^2.\end{aligned}$$
(3.4)
(This shows that G is nonexpansive.) It follows that
$$\begin{aligned}\|x_{n+1} - p\| &= \|P_C[\alpha_n\gamma f(x_n) + (I - \alpha_n\mu B_3)W_nGy_n] - p\|\\ &\le \|\alpha_n\gamma f(x_n) + (I - \alpha_n\mu B_3)W_nGy_n - p\|\\ &= \|\alpha_n(\gamma f(x_n) - \mu B_3p) + (I - \alpha_n\mu B_3)W_nGy_n - (I - \alpha_n\mu B_3)p\|\\ &\le \alpha_n\gamma l\|x_n - p\| + \alpha_n\|(\gamma f - \mu B_3)p\| + (1 - \alpha_n\tau)\|W_nGy_n - p\|\\ &\le \alpha_n\gamma l\|x_n - p\| + \alpha_n\|(\gamma f - \mu B_3)p\| + (1 - \alpha_n\tau)\|Gy_n - p\|\\ &\le \alpha_n\gamma l\|x_n - p\| + \alpha_n\|(\gamma f - \mu B_3)p\| + (1 - \alpha_n\tau)\|x_n - p\|\\ &= (1 - \alpha_n(\tau - \gamma l))\|x_n - p\| + \alpha_n\|(\gamma f - \mu B_3)p\|\\ &\le \max\Bigl\{\|x_n - p\|, \frac{\|(\gamma f - \mu B_3)p\|}{\tau - \gamma l}\Bigr\}.\end{aligned}$$
By induction, we get
$$\|x_n - p\| \le \max\Bigl\{\|x_1 - p\|, \frac{\|(\gamma f - \mu B_3)p\|}{\tau - \gamma l}\Bigr\}, \quad \forall n \ge 1.$$
Therefore $\{x_n\}$ is bounded, and so are the sequences $\{u_{n,m}\}$, $\{z_n\}$, $\{y_n\}$, $\{f(x_n)\}$ and $\{W_nGy_n\}$. Without loss of generality, we may assume that there exists a bounded subset $K \subseteq C$ such that
$$x_n, u_{n,m}, z_n, y_n, W_nGx_n, W_nGz_n, W_nGy_n \in K, \quad \forall n \ge 1.$$
(3.5)

Step 2. We show that $\|x_{n+1} - x_n\| \to 0$ as $n\to\infty$.

First, we estimate $\|u_{n+1,m} - u_{n,m}\|$. Taking into account that $\liminf_{n\to\infty}r_{n,m} > 0$, we may assume, without loss of generality, that $r_{n,m} \in [\epsilon, \infty)$ for some $\epsilon > 0$ and every $1 \le m \le N$. Utilizing Remark 1.1, we get
$$\begin{aligned}\|u_{n+1,m} - u_{n,m}\| &= \|T_{r_{n+1,m}}^{(F_m,\varphi)}x_{n+1} - T_{r_{n,m}}^{(F_m,\varphi)}x_n\|\\ &\le \|T_{r_{n+1,m}}^{(F_m,\varphi)}x_{n+1} - T_{r_{n+1,m}}^{(F_m,\varphi)}x_n\| + \|T_{r_{n+1,m}}^{(F_m,\varphi)}x_n - T_{r_{n,m}}^{(F_m,\varphi)}x_n\|\\ &\le \|x_{n+1} - x_n\| + \frac{|r_{n+1,m} - r_{n,m}|}{r_{n+1,m}}\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_n - x_n\|\\ &\le \|x_{n+1} - x_n\| + \frac{|r_{n+1,m} - r_{n,m}|}{\epsilon}\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_n - x_n\|\\ &\le \|x_{n+1} - x_n\| + M|r_{n+1,m} - r_{n,m}|,\end{aligned}$$
(3.6)
where $M > 0$ is such that $\sup_{n\ge1}\frac{1}{\epsilon}\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_n - x_n\| \le M$. Next, we estimate $\|z_{n+1} - z_n\|$:
$$\begin{aligned}\|z_{n+1} - z_n\| &= \Bigl\|\sum_{m=1}^{N}\beta_{n+1,m}u_{n+1,m} - \sum_{m=1}^{N}\beta_{n,m}u_{n,m}\Bigr\|\\ &\le \Bigl\|\sum_{m=1}^{N}(\beta_{n+1,m} - \beta_{n,m})u_{n+1,m}\Bigr\| + \Bigl\|\sum_{m=1}^{N}\beta_{n,m}(u_{n+1,m} - u_{n,m})\Bigr\|\\ &\le \sum_{m=1}^{N}|\beta_{n+1,m} - \beta_{n,m}|\|u_{n+1,m}\| + \sum_{m=1}^{N}\beta_{n,m}\|u_{n+1,m} - u_{n,m}\|\\ &\le \sum_{m=1}^{N}|\beta_{n+1,m} - \beta_{n,m}|\|u_{n+1,m}\| + \sum_{m=1}^{N}\beta_{n,m}\bigl(\|x_{n+1} - x_n\| + M|r_{n+1,m} - r_{n,m}|\bigr)\\ &\le \|x_{n+1} - x_n\| + M\sum_{m=1}^{N}|r_{n+1,m} - r_{n,m}| + M_0\sum_{m=1}^{N}|\beta_{n+1,m} - \beta_{n,m}|,\end{aligned}$$
(3.7)

where $M_0 := \sup_{n\ge1}\max_{1\le m\le N}\|u_{n+1,m}\|$, which is finite since $\{u_{n,m}\}$ is bounded.

On the other hand, since $W_n$, $T_n$ and $U_{n,i}$ are all nonexpansive, from (1.17) we have
$$\begin{aligned}\|W_{n+1}Gz_n - W_nGz_n\| &= \|\lambda_1T_1U_{n+1,2}Gz_n - \lambda_1T_1U_{n,2}Gz_n\|\\ &\le \lambda_1\|U_{n+1,2}Gz_n - U_{n,2}Gz_n\|\\ &= \lambda_1\|\lambda_2T_2U_{n+1,3}Gz_n - \lambda_2T_2U_{n,3}Gz_n\|\\ &\le \lambda_1\lambda_2\|U_{n+1,3}Gz_n - U_{n,3}Gz_n\|\\ &\le \cdots \le \lambda_1\lambda_2\cdots\lambda_n\|U_{n+1,n+1}Gz_n - U_{n,n+1}Gz_n\|\\ &\le M_1\prod_{i=1}^{n}\lambda_i,\end{aligned}$$
(3.8)
where $M_1 > 0$ is such that $\sup_{n\ge1}\{\|U_{n+1,n+1}Gz_n\| + \|U_{n,n+1}Gz_n\|\} \le M_1$. Hence we have
$$\begin{aligned}\|W_{n+1}Gz_{n+1} - W_nGz_n\| &\le \|W_{n+1}Gz_{n+1} - W_{n+1}Gz_n\| + \|W_{n+1}Gz_n - W_nGz_n\|\\ &\le \|z_{n+1} - z_n\| + M_1\prod_{i=1}^{n}\lambda_i\\ &\le \|x_{n+1} - x_n\| + M\sum_{m=1}^{N}|r_{n+1,m} - r_{n,m}| + M_0\sum_{m=1}^{N}|\beta_{n+1,m} - \beta_{n,m}| + M_1\prod_{i=1}^{n}\lambda_i.\end{aligned}$$
(3.9)
Putting (3.9) and (3.7) into (3.3), we have
$$\begin{aligned}\|y_{n+1} - y_n\| &\le \delta_{n+1}\|W_{n+1}Gz_{n+1} - W_nGz_n\| + (1 - \delta_{n+1})\|z_{n+1} - z_n\| + |\delta_{n+1} - \delta_n|\|W_nGz_n - z_n\|\\ &\le \delta_{n+1}\Bigl(\|z_{n+1} - z_n\| + M_1\prod_{i=1}^{n}\lambda_i\Bigr) + (1 - \delta_{n+1})\|z_{n+1} - z_n\| + |\delta_{n+1} - \delta_n|\|W_nGz_n - z_n\|\\ &\le \|z_{n+1} - z_n\| + \delta_{n+1}M_1\prod_{i=1}^{n}\lambda_i + |\delta_{n+1} - \delta_n|\|W_nGz_n - z_n\|\\ &\le \|x_{n+1} - x_n\| + M\sum_{m=1}^{N}|r_{n+1,m} - r_{n,m}| + M_0\sum_{m=1}^{N}|\beta_{n+1,m} - \beta_{n,m}| + \delta_{n+1}M_1\prod_{i=1}^{n}\lambda_i + |\delta_{n+1} - \delta_n|\|W_nGz_n - z_n\|.\end{aligned}$$
(3.10)
Arguing exactly as in (3.8) with $y_n$ in place of $z_n$, we have
$$\|W_{n+1}Gy_n - W_nGy_n\| \le \lambda_1\lambda_2\cdots\lambda_n\|U_{n+1,n+1}Gy_n - U_{n,n+1}Gy_n\| \le M_2\prod_{i=1}^{n}\lambda_i,$$
(3.11)
where $M_2 > 0$ is such that $\sup_{n\ge1}\{\|U_{n+1,n+1}Gy_n\| + \|U_{n,n+1}Gy_n\|\} \le M_2$. Then we have
$$\begin{aligned}\|W_{n+1}Gy_{n+1} - W_nGy_n\| &\le \|W_{n+1}Gy_{n+1} - W_{n+1}Gy_n\| + \|W_{n+1}Gy_n - W_nGy_n\|\\ &\le \|y_{n+1} - y_n\| + M_2\prod_{i=1}^{n}\lambda_i\\ &\le \|x_{n+1} - x_n\| + M\sum_{m=1}^{N}|r_{n+1,m} - r_{n,m}| + M_0\sum_{m=1}^{N}|\beta_{n+1,m} - \beta_{n,m}|\\ &\quad + \delta_{n+1}M_1\prod_{i=1}^{n}\lambda_i + |\delta_{n+1} - \delta_n|\|W_nGz_n - z_n\| + M_2\prod_{i=1}^{n}\lambda_i.\end{aligned}$$
(3.12)
Hence it follows from (3.3)-(3.12) that
$$\begin{aligned}\|x_{n+2} - x_{n+1}\| &= \|P_C[\alpha_{n+1}\gamma f(x_{n+1}) + (I - \alpha_{n+1}\mu B_3)W_{n+1}Gy_{n+1}] - P_C[\alpha_n\gamma f(x_n) + (I - \alpha_n\mu B_3)W_nGy_n]\|\\ &\le \|[\alpha_{n+1}\gamma f(x_{n+1}) + (I - \alpha_{n+1}\mu B_3)W_{n+1}Gy_{n+1}] - [\alpha_n\gamma f(x_n) + (I - \alpha_n\mu B_3)W_nGy_n]\|\\ &\le \alpha_n\gamma\|f(x_{n+1}) - f(x_n)\| + \gamma|\alpha_{n+1} - \alpha_n|\|f(x_{n+1})\|\\ &\quad + \|(I - \alpha_n\mu B_3)(W_{n+1}Gy_{n+1} - W_nGy_n)\| + \mu|\alpha_n - \alpha_{n+1}|\|B_3W_{n+1}Gy_{n+1}\|\\ &\le \alpha_n\gamma l\|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n|\bigl(\gamma\|f(x_{n+1})\| + \mu\|B_3W_{n+1}Gy_{n+1}\|\bigr) + (1 - \alpha_n\tau)\|W_{n+1}Gy_{n+1} - W_nGy_n\|\\ &\le (1 - \alpha_n(\tau - \gamma l))\|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n|\bigl(\gamma\|f(x_{n+1})\| + \mu\|B_3W_{n+1}Gy_{n+1}\|\bigr)\\ &\quad + (1 - \alpha_n\tau)\Bigl(M\sum_{m=1}^{N}|r_{n+1,m} - r_{n,m}| + M_0\sum_{m=1}^{N}|\beta_{n+1,m} - \beta_{n,m}| + \delta_{n+1}M_1\prod_{i=1}^{n}\lambda_i + |\delta_{n+1} - \delta_n|\|W_nGz_n - z_n\| + M_2\prod_{i=1}^{n}\lambda_i\Bigr)\\ &\le (1 - \alpha_n(\tau - \gamma l))\|x_{n+1} - x_n\| + M_3\Bigl(|\alpha_{n+1} - \alpha_n| + \sum_{m=1}^{N}|r_{n+1,m} - r_{n,m}| + \sum_{m=1}^{N}|\beta_{n+1,m} - \beta_{n,m}| + |\delta_{n+1} - \delta_n| + b^n\Bigr),\end{aligned}$$
(3.13)

where $M_3 > 0$ is such that $\sup_{n\ge1}\{\gamma\|f(x_{n+1})\| + \mu\|B_3W_{n+1}Gy_{n+1}\| + M + M_0 + \|W_nGz_n - z_n\| + \delta_{n+1}M_1 + M_2\} \le M_3$, and we have used $\prod_{i=1}^{n}\lambda_i \le b^n$. Noticing conditions (a), (b), (c), (d) and Lemma 2.7, we get $\|x_{n+1} - x_n\| \to 0$ as $n\to\infty$.

Step 3. We show that
lim n y n G y n = 0 ,
(3.14)
lim n x n u n , m = 0 , 1 m N ,
(3.15)
lim n x n W G x n = 0 .
(3.16)
First, we show lim n y n G y n = 0 . Indeed, for simplicity, we write y ˜ n = T μ 2 Θ 2 ( I μ 2 B 2 ) y n , p ˜ = T μ 2 Θ 2 ( I μ 2 B 2 ) p , w n = T μ 1 Θ 1 ( I μ 1 B 1 ) y ˜ n . Then w n = G y n and p = G p . Similar to the proof of (3.4), we get
\[
\|Gy_n - p\|^2 \le \|y_n-p\|^2 + \mu_2(\mu_2-2\zeta_2)\|B_2y_n - B_2p\|^2 + \mu_1(\mu_1-2\zeta_1)\|B_1\tilde y_n - B_1\tilde p\|^2.
\tag{3.17}
\]
From (3.3), (3.4) and (3.17), we obtain that, for \(p \in \Omega\),
\[
\begin{aligned}
\|x_{n+1}-p\|^2 &\le \|\alpha_n\gamma f(x_n) + (I-\alpha_n\mu B_3)W_nGy_n - p\|^2 \\
&= \|\alpha_n(\gamma f(x_n)-\mu B_3W_nGy_n) + W_nGy_n - p\|^2 \\
&= \|W_nGy_n-p\|^2 + 2\alpha_n\langle\gamma f(x_n)-\mu B_3W_nGy_n,\, W_nGy_n-p\rangle + \alpha_n^2\|\gamma f(x_n)-\mu B_3W_nGy_n\|^2 \\
&\le \|Gy_n-p\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|y_n-p\|^2 + \mu_2(\mu_2-2\zeta_2)\|B_2y_n-B_2p\|^2 + \mu_1(\mu_1-2\zeta_1)\|B_1\tilde y_n-B_1\tilde p\|^2 \\
&\quad + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|x_n-p\|^2 + \mu_2(\mu_2-2\zeta_2)\|B_2y_n-B_2p\|^2 + \mu_1(\mu_1-2\zeta_1)\|B_1\tilde y_n-B_1\tilde p\|^2 \\
&\quad + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr],
\end{aligned}
\tag{3.18}
\]
which immediately implies that
\[
\begin{aligned}
&\mu_2(2\zeta_2-\mu_2)\|B_2y_n-B_2p\|^2 + \mu_1(2\zeta_1-\mu_1)\|B_1\tilde y_n-B_1\tilde p\|^2 \\
&\quad\le \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\qquad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\quad\le \|x_n-x_{n+1}\|\bigl(\|x_n-p\| + \|x_{n+1}-p\|\bigr) + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\qquad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr].
\end{aligned}
\]
Since \(\lim_{n\to\infty}\alpha_n = 0\), \(\lim_{n\to\infty}\|x_n - x_{n+1}\| = 0\) and \(\mu_i \in (0, 2\zeta_i)\), \(i = 1,2\), we deduce from the boundedness of \(\{x_n\}\), \(\{f(x_n)\}\) and \(\{W_nGy_n\}\) that
\[
\lim_{n\to\infty}\|B_2y_n - B_2p\| = 0, \qquad \lim_{n\to\infty}\|B_1\tilde y_n - B_1\tilde p\| = 0.
\tag{3.19}
\]
Also, in terms of the firm nonexpansivity of \(T^{\Theta_1}_{\mu_1}\) and \(T^{\Theta_2}_{\mu_2}\), we obtain from \(\mu_i \in (0, 2\zeta_i)\), \(i = 1,2\), that
\[
\begin{aligned}
\|\tilde y_n-\tilde p\|^2 &= \|T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)y_n - T^{\Theta_2}_{\mu_2}(I-\mu_2B_2)p\|^2 \\
&\le \langle (I-\mu_2B_2)y_n - (I-\mu_2B_2)p,\, \tilde y_n - \tilde p\rangle \\
&= \tfrac12\bigl[\|(I-\mu_2B_2)y_n - (I-\mu_2B_2)p\|^2 + \|\tilde y_n-\tilde p\|^2 \\
&\qquad - \|(I-\mu_2B_2)y_n - (I-\mu_2B_2)p - (\tilde y_n-\tilde p)\|^2\bigr] \\
&\le \tfrac12\bigl[\|y_n-p\|^2 + \|\tilde y_n-\tilde p\|^2 - \|(y_n-\tilde y_n) - \mu_2(B_2y_n-B_2p) - (p-\tilde p)\|^2\bigr] \\
&\le \tfrac12\bigl[\|x_n-p\|^2 + \|\tilde y_n-\tilde p\|^2 - \|(y_n-\tilde y_n) - (p-\tilde p)\|^2 \\
&\qquad + 2\mu_2\langle (y_n-\tilde y_n) - (p-\tilde p),\, B_2y_n-B_2p\rangle\bigr]
\end{aligned}
\]
and
\[
\begin{aligned}
\|w_n-p\|^2 &= \|T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)\tilde y_n - T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)\tilde p\|^2 \\
&\le \langle (I-\mu_1B_1)\tilde y_n - (I-\mu_1B_1)\tilde p,\, w_n - p\rangle \\
&= \tfrac12\bigl[\|(I-\mu_1B_1)\tilde y_n - (I-\mu_1B_1)\tilde p\|^2 + \|w_n-p\|^2 \\
&\qquad - \|(I-\mu_1B_1)\tilde y_n - (I-\mu_1B_1)\tilde p - (w_n-p)\|^2\bigr] \\
&\le \tfrac12\bigl[\|\tilde y_n-\tilde p\|^2 + \|w_n-p\|^2 - \|(\tilde y_n-w_n) + (p-\tilde p)\|^2 \\
&\qquad + 2\mu_1\langle B_1\tilde y_n - B_1\tilde p,\, (\tilde y_n-w_n) + (p-\tilde p)\rangle - \mu_1^2\|B_1\tilde y_n-B_1\tilde p\|^2\bigr] \\
&\le \tfrac12\bigl[\|x_n-p\|^2 + \|w_n-p\|^2 - \|(\tilde y_n-w_n) + (p-\tilde p)\|^2 \\
&\qquad + 2\mu_1\langle B_1\tilde y_n - B_1\tilde p,\, (\tilde y_n-w_n) + (p-\tilde p)\rangle\bigr].
\end{aligned}
\]
Thus, we have
\[
\|\tilde y_n-\tilde p\|^2 \le \|x_n-p\|^2 - \|(y_n-\tilde y_n)-(p-\tilde p)\|^2 + 2\mu_2\langle (y_n-\tilde y_n)-(p-\tilde p),\, B_2y_n-B_2p\rangle
\tag{3.20}
\]
and
\[
\|w_n-p\|^2 \le \|x_n-p\|^2 - \|(\tilde y_n-w_n)+(p-\tilde p)\|^2 + 2\mu_1\langle B_1\tilde y_n-B_1\tilde p,\, (\tilde y_n-w_n)+(p-\tilde p)\rangle.
\tag{3.21}
\]
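The two estimates above hinge on firm nonexpansivity: \(\|Tx-Ty\|^2 \le \langle x-y,\, Tx-Ty\rangle\). This property holds in particular for metric projections onto closed convex sets. A quick numerical check in \(\mathbb{R}^2\), taking the closed unit ball as an assumed concrete example of such a set:

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball of R^2 (a firmly nonexpansive map)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.uniform(-5, 5, 2), rng.uniform(-5, 5, 2)
    tx, ty = proj_unit_ball(x), proj_unit_ball(y)
    # firm nonexpansivity: ||Tx - Ty||^2 <= <x - y, Tx - Ty>
    assert np.dot(tx - ty, tx - ty) <= np.dot(x - y, tx - ty) + 1e-12
print("firm nonexpansivity verified on 1000 random pairs")
```

The resolvents \(T^{\Theta_i}_{\mu_i}\) in the proof enjoy the same inequality, which is exactly what yields the half-sum identities above.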
Consequently, it follows from (3.4), (3.18) and (3.20) that
\[
\begin{aligned}
\|x_{n+1}-p\|^2 &\le \|Gy_n-p\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|\tilde y_n-\tilde p\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|x_n-p\|^2 - \|(y_n-\tilde y_n)-(p-\tilde p)\|^2 + 2\mu_2\langle (y_n-\tilde y_n)-(p-\tilde p),\, B_2y_n-B_2p\rangle \\
&\quad + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr],
\end{aligned}
\]
which yields
\[
\begin{aligned}
\|(y_n-\tilde y_n)-(p-\tilde p)\|^2 &\le \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + 2\mu_2\|(y_n-\tilde y_n)-(p-\tilde p)\|\,\|B_2y_n-B_2p\| \\
&\quad + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr) + 2\mu_2\|(y_n-\tilde y_n)-(p-\tilde p)\|\,\|B_2y_n-B_2p\| \\
&\quad + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr].
\end{aligned}
\]
Since \(\lim_{n\to\infty}\alpha_n = 0\), \(\lim_{n\to\infty}\|x_{n+1}-x_n\| = 0\) and \(\lim_{n\to\infty}\|B_2y_n-B_2p\| = 0\), we deduce that
\[
\lim_{n\to\infty}\|(y_n-\tilde y_n)-(p-\tilde p)\| = 0.
\tag{3.22}
\]
Furthermore, it follows from (3.4), (3.18) and (3.21) that
\[
\begin{aligned}
\|x_{n+1}-p\|^2 &\le \|Gy_n-p\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&= \|w_n-p\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|x_n-p\|^2 - \|(\tilde y_n-w_n)+(p-\tilde p)\|^2 + 2\mu_1\langle B_1\tilde y_n-B_1\tilde p,\, (\tilde y_n-w_n)+(p-\tilde p)\rangle \\
&\quad + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr],
\end{aligned}
\]
which leads to
\[
\begin{aligned}
\|(\tilde y_n-w_n)+(p-\tilde p)\|^2 &\le \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + 2\mu_1\|B_1\tilde y_n-B_1\tilde p\|\,\|(\tilde y_n-w_n)+(p-\tilde p)\| \\
&\quad + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr) + 2\mu_1\|B_1\tilde y_n-B_1\tilde p\|\,\|(\tilde y_n-w_n)+(p-\tilde p)\| \\
&\quad + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr].
\end{aligned}
\]
Since \(\lim_{n\to\infty}\alpha_n = 0\), \(\lim_{n\to\infty}\|x_{n+1}-x_n\| = 0\) and \(\lim_{n\to\infty}\|B_1\tilde y_n-B_1\tilde p\| = 0\), we deduce that
\[
\lim_{n\to\infty}\|(\tilde y_n-w_n)+(p-\tilde p)\| = 0.
\tag{3.23}
\]
Note that
\[
\|y_n-w_n\| \le \|(y_n-\tilde y_n)-(p-\tilde p)\| + \|(\tilde y_n-w_n)+(p-\tilde p)\|.
\]
Hence from (3.22) and (3.23), we get
\[
\lim_{n\to\infty}\|y_n-w_n\| = \lim_{n\to\infty}\|y_n-Gy_n\| = 0.
\]
Next, we show that \(\lim_{n\to\infty}\|x_n-u_{n,m}\| = 0\) for every \(1 \le m \le N\) and \(\lim_{n\to\infty}\|x_n-WGx_n\| = 0\). Indeed, by Proposition 1.2(c), we obtain that, for any \(p \in \Omega\) and for each \(1 \le m \le N\),
\[
\begin{aligned}
\|u_{n,m}-p\|^2 &= \|T^{(F_m,\varphi)}_{r_{n,m}}x_n - T^{(F_m,\varphi)}_{r_{n,m}}p\|^2 \le \langle u_{n,m}-p,\, x_n-p\rangle \\
&= \tfrac12\bigl[\|u_{n,m}-p\|^2 + \|x_n-p\|^2 - \|u_{n,m}-x_n\|^2\bigr].
\end{aligned}
\]
That is,
\[
\|u_{n,m}-p\|^2 \le \|x_n-p\|^2 - \|u_{n,m}-x_n\|^2.
\]
Then we have
\[
\begin{aligned}
\|z_n-p\|^2 &= \Bigl\|\sum_{m=1}^{N}\beta_{n,m}(u_{n,m}-p)\Bigr\|^2 \le \sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-p\|^2 \\
&\le \sum_{m=1}^{N}\beta_{n,m}\bigl(\|x_n-p\|^2 - \|u_{n,m}-x_n\|^2\bigr) = \|x_n-p\|^2 - \sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|^2.
\end{aligned}
\]
It follows that
\[
\|y_n-p\|^2 \le \|z_n-p\|^2 \le \|x_n-p\|^2 - \sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|^2.
\tag{3.24}
\]
It follows from (3.18) and (3.24) that
\[
\begin{aligned}
\|x_{n+1}-p\|^2 &\le \|y_n-p\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|x_n-p\|^2 - \sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr],
\end{aligned}
\tag{3.25}
\]
which immediately implies that
\[
\begin{aligned}
\sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|^2 &\le \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr] \\
&\le \|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr) + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\| \\
&\quad\times\bigl[2\|W_nGy_n-p\| + \alpha_n\|\gamma f(x_n)-\mu B_3W_nGy_n\|\bigr].
\end{aligned}
\]
Since \(\lim_{n\to\infty}\alpha_n = 0\) and \(\lim_{n\to\infty}\|x_n-x_{n+1}\| = 0\), together with the conditions imposed on \(\{\beta_{n,m}\}\), we deduce that
\[
\lim_{n\to\infty}\|u_{n,m}-x_n\| = 0, \quad 1 \le m \le N.
\tag{3.26}
\]
Since
\[
\|z_n-x_n\| = \Bigl\|\sum_{m=1}^{N}\beta_{n,m}(u_{n,m}-x_n)\Bigr\| \le \sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|,
\]
from \(\lim_{n\to\infty}\|u_{n,m}-x_n\| = 0\), we get
\[
\lim_{n\to\infty}\|z_n-x_n\| = 0.
\tag{3.27}
\]
Notice that
\[
\begin{aligned}
\|y_n-x_n\| &\le \|y_n-z_n\| + \|z_n-x_n\| \\
&= \|\delta_nW_nGz_n + (1-\delta_n)z_n - z_n\| + \|z_n-x_n\| \\
&= \delta_n\|W_nGz_n-z_n\| + \|z_n-x_n\|.
\end{aligned}
\]
Since \(\delta_n \to 0\) and \(\|z_n-x_n\| \to 0\), we get
\[
\lim_{n\to\infty}\|y_n-x_n\| = 0.
\tag{3.28}
\]
Note that
\[
\|x_n-W_nGx_n\| \le \|x_n-W_nGy_n\| + \|W_nGy_n-W_nGx_n\| \le \|x_n-W_nGy_n\| + \|y_n-x_n\|.
\tag{3.29}
\]
On the other hand,
\[
\begin{aligned}
\|x_n-W_nGy_n\| &\le \|x_n-x_{n+1}\| + \|x_{n+1}-W_nGy_n\| \\
&= \|x_n-x_{n+1}\| + \|P_C[\alpha_n\gamma f(x_n) + (I-\alpha_n\mu B_3)W_nGy_n] - P_CW_nGy_n\| \\
&\le \|x_n-x_{n+1}\| + \|\alpha_n\gamma f(x_n) + (I-\alpha_n\mu B_3)W_nGy_n - W_nGy_n\| \\
&= \|x_n-x_{n+1}\| + \alpha_n\|\gamma f(x_n) - \mu B_3W_nGy_n\| \to 0.
\end{aligned}
\]
From \(\lim_{n\to\infty}\|x_n-x_{n+1}\| = 0\) and \(\lim_{n\to\infty}\alpha_n = 0\), we get
\[
\lim_{n\to\infty}\|x_n-W_nGx_n\| = 0.
\tag{3.30}
\]
Note that
\[
\|x_n-WGx_n\| \le \|x_n-W_nGx_n\| + \|W_nGx_n-WGx_n\|.
\]
From (3.30) and Remark 2.2, we see
\[
\lim_{n\to\infty}\|x_n-WGx_n\| = 0.
\]
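The scheme analyzed in this step, \(x_{n+1} = P_C[\alpha_n\gamma f(x_n) + (I-\alpha_n\mu B_3)W_nGy_n]\), can be simulated in a toy setting. The sketch below is an assumed illustration only, not the paper's algorithm in full generality: we take \(H = \mathbb{R}^2\), \(C\) the closed unit ball, replace \(W_nG y_n\) by the (nonexpansive) projection of \(x_n\) onto \(C\), take \(B_3 = I\), \(f\) a contraction, and \(\alpha_n = 1/(n+2)\).

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Projection P_C onto the closed ball of the given radius (nonexpansive)."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def iterate(x0, n_iter=5000, gamma=1.0, mu=1.0, l=0.5):
    f = lambda x: l * x                   # an assumed contraction with constant l < 1
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        alpha = 1.0 / (n + 2)             # alpha_n -> 0 with divergent sum
        w = project_ball(x)               # stand-in for the nonexpansive W_n G y_n
        x = project_ball(alpha * gamma * f(x) + (1 - alpha * mu) * w)
    return x

x_final = iterate([3.0, -4.0])
```

In this toy configuration the iterates stay in \(C\) and shrink toward the origin; the point of the sketch is only the shape of the update, not the full common-solution structure of \(\Omega\).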
Step 4. Now we shall prove
\[
\limsup_{n\to\infty}\,\langle x_n-x^*,\, (\gamma f-\mu B_3)x^*\rangle \le 0,
\tag{3.31}
\]
where \(x^*\) is the unique solution of variational inequality (3.2). To show this, we take a subsequence \(\{x_{n_i}\}\) of \(\{x_n\}\) such that
\[
\limsup_{n\to\infty}\,\langle x_n-x^*,\, (\gamma f-\mu B_3)x^*\rangle = \lim_{i\to\infty}\,\langle x_{n_i}-x^*,\, (\gamma f-\mu B_3)x^*\rangle.
\tag{3.32}
\]

Since \(\{x_{n_i}\}\) is bounded, it has a weakly convergent subsequence; without loss of generality, we still denote it by \(\{x_{n_i}\}\) and assume \(x_{n_i} \rightharpoonup \omega\). Let us show \(\omega \in \Omega := (\bigcap_{n=1}^{\infty}F(T_n)) \cap (\bigcap_{m=1}^{N}\operatorname{MEP}(F_m,\varphi)) \cap \Gamma\).

We first show \(\omega \in \Gamma\). From \(\|y_n-Gy_n\| \to 0\), \(\|x_n-y_n\| \to 0\) and Lemma 2.5 (the demiclosedness principle), we have \(\omega \in F(G) = \Gamma\).

Next we show \(\omega \in \bigcap_{m=1}^{N}\operatorname{MEP}(F_m,\varphi)\). Since \(u_{n,m} = T^{(F_m,\varphi)}_{r_{n,m}}x_n\), we have
\[
F_m(u_{n,m},y) + \varphi(y) - \varphi(u_{n,m}) + \frac{1}{r_{n,m}}\langle y-u_{n,m},\, u_{n,m}-x_n\rangle \ge 0, \quad \forall y \in C.
\]
It follows from (A2) that
\[
\varphi(y) - \varphi(u_{n,m}) + \frac{1}{r_{n,m}}\langle y-u_{n,m},\, u_{n,m}-x_n\rangle \ge F_m(y,u_{n,m}), \quad \forall y \in C.
\]
Replacing n by n i , we arrive at
\[
\varphi(y) - \varphi(u_{n_i,m}) + \Bigl\langle y-u_{n_i,m},\, \frac{u_{n_i,m}-x_{n_i}}{r_{n_i,m}}\Bigr\rangle \ge F_m(y,u_{n_i,m}), \quad \forall y \in C.
\tag{3.33}
\]
Put \(y_{t_m} = t_m y + (1-t_m)\omega\) for all \(t_m \in (0,1]\) and \(y \in C\). Then from (3.33) we have
\[
0 \ge -\varphi(y_{t_m}) + \varphi(u_{n_i,m}) - \Bigl\langle \frac{u_{n_i,m}-x_{n_i}}{r_{n_i,m}},\, y_{t_m}-u_{n_i,m}\Bigr\rangle + F_m(y_{t_m},u_{n_i,m}).
\]
So, from (A4), the weak lower semicontinuity of φ, \(\frac{u_{n_i,m}-x_{n_i}}{r_{n_i,m}} \to 0\) and u