General iterative algorithms for mixed equilibrium problems, a general system of generalized equilibria and fixed point problems


Abstract

In this paper, we introduce and analyze a general iterative algorithm for finding a common solution of a finite family of mixed equilibrium problems, a general system of generalized equilibria and a fixed point problem of nonexpansive mappings in a real Hilbert space. Under some appropriate conditions, we derive the strong convergence of the sequence generated by the proposed algorithm to a common solution, which also solves some optimization problem. The result presented in this paper improves and extends some corresponding ones in the earlier and recent literature.

MSC: 49J30, 47H10, 47H15.

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty, closed and convex subset of $H$, and let $T:C\to C$ be a nonlinear mapping. Throughout this paper, we use $F(T)$ to denote the fixed point set of $T$. A mapping $T:C\to C$ is said to be nonexpansive if

$$\|Tx-Ty\|\le\|x-y\|,\quad \forall x,y\in C.$$
(1.1)

Let $F:C\times C\to\mathbb{R}$ be a real-valued bifunction and let $\varphi:C\to\mathbb{R}$ be a real-valued function, where $\mathbb{R}$ is the set of real numbers. The so-called mixed equilibrium problem (MEP) is to find $x\in C$ such that

$$F(x,y)+\varphi(y)-\varphi(x)\ge 0,\quad \forall y\in C,$$
(1.2)

which was considered and studied in [1, 2]. The set of solutions of MEP (1.2) is denoted by $\mathrm{MEP}(F,\varphi)$. In particular, whenever $\varphi\equiv 0$, MEP (1.2) reduces to the equilibrium problem (EP) of finding $x\in C$ such that

$$F(x,y)\ge 0,\quad \forall y\in C,$$

which was considered and studied in [3–7]. The set of solutions of the EP is denoted by $\mathrm{EP}(F)$. Given a mapping $A:C\to H$, let $F(x,y)=\langle Ax,\,y-x\rangle$ for all $x,y\in C$. Then $z\in \mathrm{EP}(F)$ if and only if $\langle Az,\,y-z\rangle\ge 0$ for all $y\in C$. Numerous problems in physics, optimization and economics reduce to finding a solution of the EP.

Throughout this paper, assume that $F:C\times C\to\mathbb{R}$ is a bifunction satisfying conditions (A1)-(A4) and that $\varphi:C\to\mathbb{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where

  • (A1) $F(x,x)=0$ for all $x\in C$;

  • (A2) $F$ is monotone, i.e., $F(x,y)+F(y,x)\le 0$ for any $x,y\in C$;

  • (A3) $F$ is upper hemicontinuous, i.e., for each $x,y,z\in C$,

    $$\limsup_{t\downarrow 0} F\bigl(tz+(1-t)x,\,y\bigr)\le F(x,y);$$

  • (A4) $F(x,\cdot)$ is convex and lower semicontinuous for each $x\in C$;

  • (B1) for each $x\in H$ and $r>0$, there exist a bounded subset $D_x\subset C$ and $y_x\in C$ such that for any $z\in C\setminus D_x$,

    $$F(z,y_x)+\varphi(y_x)-\varphi(z)+\frac{1}{r}\langle y_x-z,\,z-x\rangle<0;$$

  • (B2) $C$ is a bounded set.

The mappings $\{T_n\}_{n=1}^{\infty}$ are said to be an infinite family of nonexpansive self-mappings on $C$ if

$$\|T_n x-T_n y\|\le\|x-y\|,\quad \forall x,y\in C,\ \forall n\ge 1,$$
(1.3)

and $F(T_n)$ denotes the fixed point set of $T_n$, i.e., $F(T_n)=\{x\in C: T_n x=x\}$. Finding an optimal point in the intersection $\bigcap_{n=1}^{\infty}F(T_n)$ of the fixed point sets of the mappings $T_n$, $n\ge 1$, is a matter of interest in various branches of science.

Recently, many authors considered some iterative methods for finding a common element of the set of solutions of MEP (1.2) and the set of fixed points of nonexpansive mappings; see, e.g., [2, 8, 9] and the references therein.

A mapping $A:C\to H$ is said to be:

  (i) monotone if

    $$\langle Ax-Ay,\,x-y\rangle\ge 0,\quad \forall x,y\in C;$$

  (ii) strongly monotone if there exists a constant $\eta>0$ such that

    $$\langle Ax-Ay,\,x-y\rangle\ge \eta\|x-y\|^{2},\quad \forall x,y\in C.$$

In such a case, $A$ is said to be $\eta$-strongly monotone;

  (iii) inverse-strongly monotone if there exists a constant $\zeta>0$ such that

    $$\langle Ax-Ay,\,x-y\rangle\ge \zeta\|Ax-Ay\|^{2},\quad \forall x,y\in C.$$

In such a case, $A$ is said to be $\zeta$-inverse-strongly monotone.

Let $A:C\to H$ be a nonlinear mapping. The classical variational inequality problem (VIP) is to find $x^{*}\in C$ such that

$$\langle Ax^{*},\,y-x^{*}\rangle\ge 0,\quad \forall y\in C.$$
(1.4)

We use $\mathrm{VI}(C,A)$ to denote the set of solutions to VIP (1.4). One can easily see that VIP (1.4) is equivalent to a fixed point problem, the origin of which can be traced back to Lions and Stampacchia [10]. That is, $u\in C$ is a solution to VIP (1.4) if and only if $u$ is a fixed point of the mapping $P_C(I-\lambda A)$, where $\lambda>0$ is a constant. Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving and equilibrium problems. Not only are the existence and uniqueness of solutions important topics in the study of VIP (1.4), but so is the question of how to actually find a solution of VIP (1.4). Up to now, there have been many iterative algorithms in the literature for finding approximate solutions of VIP (1.4) and its extended versions; see, e.g., [3, 11–14].

Recently, Ceng and Yao [8] introduced and studied the general system of generalized equilibria (GSEP) as follows: Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\Theta_1,\Theta_2:C\times C\to\mathbb{R}$ be two bifunctions and $B_1,B_2:C\to H$ be two nonlinear mappings. Consider the following problem of finding $(x^{*},y^{*})\in C\times C$ such that

$$\begin{cases} \Theta_1(x^{*},x)+\langle B_1 y^{*},\,x-x^{*}\rangle+\dfrac{1}{\mu_1}\langle x^{*}-y^{*},\,x-x^{*}\rangle\ge 0, & \forall x\in C,\\[1mm] \Theta_2(y^{*},y)+\langle B_2 x^{*},\,y-y^{*}\rangle+\dfrac{1}{\mu_2}\langle y^{*}-x^{*},\,y-y^{*}\rangle\ge 0, & \forall y\in C, \end{cases}$$
(1.5)

where $\mu_1>0$, $\mu_2>0$ are two constants. In particular, whenever $\Theta_1=\Theta_2=0$, GSEP (1.5) reduces to the following general system of variational inequalities (GSVI): find $(x^{*},y^{*})\in C\times C$ such that

$$\begin{cases} \langle \mu_1 B_1 y^{*}+x^{*}-y^{*},\,x-x^{*}\rangle\ge 0, & \forall x\in C,\\[1mm] \langle \mu_2 B_2 x^{*}+y^{*}-x^{*},\,x-y^{*}\rangle\ge 0, & \forall x\in C, \end{cases}$$
(1.6)

where $\mu_1$ and $\mu_2$ are two positive constants. GSVI (1.6) was considered and studied in [8, 15, 16]. In particular, whenever $B_1=B_2=A$ and $x^{*}=y^{*}$, GSVI (1.6) reduces to VIP (1.4).

In order to prove our main results in the following sections, we need the following lemmas and propositions.

Proposition 1.1 For given $x\in H$ and $z\in C$:

  (i) $z=P_C x \iff \langle x-z,\,y-z\rangle\le 0$, $\forall y\in C$;

  (ii) $z=P_C x \iff \|x-z\|^{2}\le\|x-y\|^{2}-\|y-z\|^{2}$, $\forall y\in C$;

  (iii) $\langle P_C x-P_C y,\,x-y\rangle\ge\|P_C x-P_C y\|^{2}$, $\forall x,y\in H$.

Consequently, $P_C$ is a firmly nonexpansive mapping of $H$ onto $C$ and hence is nonexpansive and monotone.

Given a number $r>0$, define the mapping $T_r^{(\Theta,\varphi)}:H\to C$ associated with the auxiliary mixed equilibrium problem by, for each $x\in H$,

$$T_r^{(\Theta,\varphi)}(x):=\Bigl\{y\in C: \Theta(y,z)+\varphi(z)-\varphi(y)+\frac{1}{r}\langle y-x,\,z-y\rangle\ge 0,\ \forall z\in C\Bigr\}.$$

Proposition 1.2 (see [2, 8])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\Theta:C\times C\to\mathbb{R}$ be a bifunction satisfying conditions (A1)-(A4), and let $\varphi:C\to\mathbb{R}$ be a lower semicontinuous and convex function with restriction (B1) or (B2). Then the following hold:

  (a) for each $x\in H$, $T_r^{(\Theta,\varphi)}(x)\neq\emptyset$;

  (b) $T_r^{(\Theta,\varphi)}$ is single-valued;

  (c) $T_r^{(\Theta,\varphi)}$ is firmly nonexpansive, i.e., for any $x,y\in H$,

    $$\bigl\|T_r^{(\Theta,\varphi)}x-T_r^{(\Theta,\varphi)}y\bigr\|^{2}\le\bigl\langle T_r^{(\Theta,\varphi)}x-T_r^{(\Theta,\varphi)}y,\,x-y\bigr\rangle;$$

  (d) for all $s,t>0$ and $x\in H$,

    $$\bigl\|T_s^{(\Theta,\varphi)}x-T_t^{(\Theta,\varphi)}x\bigr\|^{2}\le\frac{s-t}{s}\bigl\langle T_s^{(\Theta,\varphi)}x-x,\,T_s^{(\Theta,\varphi)}x-T_t^{(\Theta,\varphi)}x\bigr\rangle;$$

  (e) $F(T_r^{(\Theta,\varphi)})=\mathrm{MEP}(\Theta,\varphi)$;

  (f) $\mathrm{MEP}(\Theta,\varphi)$ is closed and convex.

Remark 1.1 It is easy to see from conclusions (c) and (d) in Proposition 1.2 that

$$\bigl\|T_r^{(\Theta,\varphi)}x-T_r^{(\Theta,\varphi)}y\bigr\|\le\|x-y\|,\quad \forall r>0,\ \forall x,y\in H$$

and

$$\bigl\|T_s^{(\Theta,\varphi)}x-T_t^{(\Theta,\varphi)}x\bigr\|\le\frac{|s-t|}{s}\bigl\|T_s^{(\Theta,\varphi)}x-x\bigr\|,\quad \forall s,t>0,\ \forall x\in H.$$

Remark 1.2 If $\varphi=0$, then $T_r^{(\Theta,\varphi)}$ is written as $T_r^{\Theta}$.

Ceng and Yao [8] transformed GSEP (1.5) into a fixed point problem in the following way.

Lemma 1.1 (see [8])

Let $C$ be a nonempty closed convex subset of $H$. Let $\Theta_1,\Theta_2:C\times C\to\mathbb{R}$ be two bifunctions satisfying conditions (A1)-(A4), and let the mappings $B_1,B_2:C\to H$ be $\zeta_1$-inverse-strongly monotone and $\zeta_2$-inverse-strongly monotone, respectively. Let $\mu_1\in(0,2\zeta_1)$ and $\mu_2\in(0,2\zeta_2)$. Then, for given $x^{*},y^{*}\in C$, $(x^{*},y^{*})$ is a solution of GSEP (1.5) if and only if $x^{*}$ is a fixed point of the mapping $G:C\to C$ defined by

$$G(x)=T_{\mu_1}^{\Theta_1}(I-\mu_1 B_1)T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)x,\quad \forall x\in C,$$

where $y^{*}=T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)x^{*}$.

Lemma 1.2 (see [8])

For given $x^{*},y^{*}\in C$, $(x^{*},y^{*})$ is a solution of GSVI (1.6) if and only if $x^{*}$ is a fixed point of the mapping $G:C\to C$ defined by

$$Gx=P_C(I-\mu_1 B_1)P_C(I-\mu_2 B_2)x,\quad \forall x\in C,$$

where $y^{*}=P_C(x^{*}-\mu_2 B_2 x^{*})$ and $P_C$ is the projection of $H$ onto $C$.
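As a hedged finite-dimensional illustration of this fixed-point reformulation (the set $C$, the mappings $B_1,B_2$ and the step sizes below are hypothetical choices, not from the paper): take $C=[0,1]^2$ in $\mathbb{R}^2$ and $B_i(x)=x-c_i$, which is $1$-inverse-strongly monotone, so $G=P_C(I-\mu_1 B_1)P_C(I-\mu_2 B_2)$ is nonexpansive for $\mu_i\in(0,2)$; here it is even a strict contraction, so Picard iteration reaches a fixed point:

```python
import numpy as np

# Hypothetical illustration: C = [0,1]^2, B_i(x) = x - c_i is
# 1-inverse-strongly monotone, so mu_i in (0,2) is admissible.
def proj_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)           # P_C for a box is coordinatewise clipping

c1, c2 = np.array([0.2, 0.9]), np.array([0.7, 0.3])
mu1 = mu2 = 0.5
B1 = lambda x: x - c1
B2 = lambda x: x - c2

def G(x):
    y = proj_box(x - mu2 * B2(x))       # y* = P_C(x - mu2 B2 x)
    return proj_box(y - mu1 * B1(y))    # G x = P_C(y* - mu1 B1 y*)

x = np.array([0.0, 0.0])
for _ in range(200):                    # Picard iteration on G
    x = G(x)

assert np.linalg.norm(x - G(x)) < 1e-8  # x is (numerically) a fixed point of G
```

By Lemma 1.2, this fixed point $x^{*}$, together with $y^{*}=P_C(x^{*}-\mu_2 B_2 x^{*})$, solves the corresponding GSVI in this toy setting.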

Remark 1.3 If $\Theta_1,\Theta_2:C\times C\to\mathbb{R}$ are two bifunctions satisfying (A1)-(A4) and the mappings $B_1,B_2:C\to H$ are $\zeta_1$-inverse-strongly monotone and $\zeta_2$-inverse-strongly monotone, respectively, then $G:C\to C$ is a nonexpansive mapping provided $\mu_1\in(0,2\zeta_1)$ and $\mu_2\in(0,2\zeta_2)$.

Throughout this paper, the set of fixed points of the mapping $G$ is denoted by Γ.

On the other hand, Moudafi [1] introduced the viscosity approximation method for nonexpansive mappings (see also [17] for further developments in both Hilbert spaces and Banach spaces).

A mapping $f:C\to C$ is called $\alpha$-contractive if there exists a constant $\alpha\in(0,1)$ such that

$$\|f(x)-f(y)\|\le\alpha\|x-y\|,\quad \forall x,y\in C.$$

Let $f$ be a contraction on $C$. Starting with an arbitrary initial $x_1\in C$, define a sequence $\{x_n\}$ recursively by

$$x_{n+1}=\alpha_n f(x_n)+(1-\alpha_n)Tx_n,\quad n\ge 0,$$
(1.7)

where $T$ is a nonexpansive mapping of $C$ into itself and $\{\alpha_n\}$ is a sequence in $(0,1)$. It is proved in [1, 17] that, under certain appropriate conditions imposed on $\{\alpha_n\}$, the sequence $\{x_n\}$ generated by (1.7) converges strongly to the unique solution $x^{*}\in F(T)$ of the VIP

$$\bigl\langle (I-f)x^{*},\,x-x^{*}\bigr\rangle\ge 0,\quad \forall x\in F(T).$$
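The viscosity iteration (1.7) can be sketched numerically. In the hypothetical one-dimensional example below (the set $C=[1,3]$ and the contraction $f(x)=0.5x$ are illustrative choices, not from the paper), $T=P_C$ has $F(T)=[1,3]$, and the limiting VIP $\langle (I-f)x^{*}, x-x^{*}\rangle\ge 0$ over $F(T)$ reads $0.5x^{*}(x-x^{*})\ge 0$ for all $x\in[1,3]$, which forces $x^{*}=1$:

```python
# Hypothetical 1-D sketch of the viscosity iteration (1.7):
# T = P_C with C = [1, 3], so F(T) = [1, 3]; f(x) = 0.5 x is a 0.5-contraction.
def T(x):                       # projection onto C = [1, 3] (nonexpansive)
    return min(max(x, 1.0), 3.0)

def f(x):                       # contraction with constant 0.5
    return 0.5 * x

x = 3.0
for n in range(1, 5001):
    alpha = 1.0 / n             # alpha_n -> 0 and sum alpha_n = infinity
    x = alpha * f(x) + (1 - alpha) * T(x)

assert abs(x - 1.0) < 0.01      # the iterates approach x* = 1
```

The selection effect is the point of the method: although every point of $[1,3]$ is a fixed point of $T$, the vanishing viscosity term singles out the particular fixed point solving the VIP.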

A bounded linear operator $A$ is said to be $\bar{\gamma}$-strongly positive on $H$ if there exists a constant $\bar{\gamma}\in(0,1)$ such that

$$\langle Ax,\,x\rangle\ge\bar{\gamma}\|x\|^{2},\quad \forall x\in H.$$
(1.8)

The typical problem is to minimize a quadratic function on a real Hilbert space $H$:

$$\min_{x\in C}\ \frac{1}{2}\langle Ax,\,x\rangle-\langle x,\,u\rangle,$$
(1.9)

where $C$ is a nonempty closed convex subset of $H$, $u$ is a given point in $H$ and $A$ is a strongly positive bounded linear operator on $H$.

In 2006, Marino and Xu [18] introduced and considered the following general iterative method:

$$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_n A)Tx_n,\quad n\ge 0,$$
(1.10)

where $A$ is a strongly positive bounded linear operator on a real Hilbert space $H$ and $f$ is a contraction on $H$. They proved that the above sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality

$$\bigl\langle (\gamma f-A)x^{*},\,x-x^{*}\bigr\rangle\le 0,\quad \forall x\in F(T),$$

which is the optimality condition for the minimization problem

$$\min_{x\in F(T)}\ \frac{1}{2}\langle Ax,\,x\rangle-h(x),$$

where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for all $x\in H$).

In 2007, Takahashi and Takahashi [5] introduced an iterative scheme by the viscosity approximation method for finding a common element of the set of solutions of the EP and the set of fixed points of a nonexpansive mapping in a real Hilbert space. Let $S:C\to H$ be a nonexpansive mapping. Starting with an arbitrary initial $x_1\in H$, define sequences $\{x_n\}$ and $\{u_n\}$ recursively by

$$\begin{cases} F(u_n,y)+\dfrac{1}{r_n}\langle y-u_n,\,u_n-x_n\rangle\ge 0, & \forall y\in C,\\[1mm] x_{n+1}=\alpha_n f(x_n)+(1-\alpha_n)Su_n, & n\ge 1. \end{cases}$$
(1.11)

They proved that, under appropriate conditions imposed on $\{\alpha_n\}$ and $\{r_n\}$, the sequences $\{x_n\}$ and $\{u_n\}$ converge strongly to $x^{*}\in F(S)\cap\mathrm{EP}(F)$, where $x^{*}=P_{F(S)\cap\mathrm{EP}(F)}f(x^{*})$.

Subsequently, Plubtieng and Punpaeng [19] introduced a general iterative process for finding a common element of the set of solutions of the EP and the set of fixed points of a nonexpansive mapping in a Hilbert space.

Let $S:H\to H$ be a nonexpansive mapping. Starting with an arbitrary $x_1\in H$, define sequences $\{x_n\}$ and $\{u_n\}$ by

$$\begin{cases} F(u_n,y)+\dfrac{1}{r_n}\langle y-u_n,\,u_n-x_n\rangle\ge 0, & \forall y\in C,\\[1mm] x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_n A)Su_n, & n\ge 1. \end{cases}$$
(1.12)

They proved that, under appropriate conditions imposed on $\{\alpha_n\}$ and $\{r_n\}$, the sequence $\{x_n\}$ generated by (1.12) converges strongly to the unique solution $x^{*}\in F(S)\cap\mathrm{EP}(F)$ of the VIP

$$\bigl\langle (\gamma f-A)x^{*},\,x-x^{*}\bigr\rangle\le 0,\quad \forall x\in F(S)\cap\mathrm{EP}(F),$$

which is the optimality condition for the minimization problem

$$\min_{x\in F(S)\cap\mathrm{EP}(F)}\ \frac{1}{2}\langle Ax,\,x\rangle-h(x),$$

where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for all $x\in H$).

In 2001, Yamada [20] introduced the following hybrid steepest descent method for a nonexpansive mapping $T$:

$$x_{n+1}=Tx_n-\mu\lambda_n F(Tx_n),\quad n\ge 0,$$
(1.13)

where $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa,\eta>0$ and $0<\mu<\dfrac{2\eta}{\kappa^{2}}$. He proved that if $\{\lambda_n\}$ satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.13) converges strongly to the unique solution of the variational inequality

$$\bigl\langle Fx^{*},\,x-x^{*}\bigr\rangle\ge 0,\quad \forall x\in F(T).$$
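A hedged finite-dimensional sketch of the hybrid steepest descent iteration (1.13) (the set $C$, the point $a$ and the parameter choices below are illustrative assumptions, not from the paper): with $T=P_C$ for $C=[0,1]^2$ and $F(x)=x-a$, which is $1$-Lipschitz and $1$-strongly monotone, any $\mu\in(0,2)$ is admissible, and the VIP $\langle Fx^{*}, x-x^{*}\rangle\ge 0$ over $C$ is solved by $x^{*}=P_C(a)$:

```python
import numpy as np

# Hypothetical setup: C = [0,1]^2, F(x) = x - a with a = (2, -1),
# so kappa = eta = 1 and mu = 1 satisfies 0 < mu < 2*eta/kappa^2.
a = np.array([2.0, -1.0])
T = lambda x: np.clip(x, 0.0, 1.0)    # T = P_C, so F(T) = C
F = lambda x: x - a                   # 1-Lipschitz, 1-strongly monotone
mu = 1.0

x = np.array([0.5, 0.5])
for n in range(1, 2001):
    lam = 1.0 / n                     # lambda_n -> 0, sum lambda_n = infinity
    x = T(x) - mu * lam * F(T(x))     # iteration (1.13)

# the VIP solution here is P_C(a) = (1, 0)
assert np.linalg.norm(x - np.array([1.0, 0.0])) < 0.01
```

The steering term $-\mu\lambda_n F(Tx_n)$ vanishes asymptotically, so the iterates settle in $F(T)$ while drifting toward the fixed point that solves the VIP.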

In 2010, Tian [21] combined the iterative method (1.10) with Yamada's method (1.13) and considered the following general viscosity-type iterative method:

$$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_n\mu F)Tx_n,\quad n\ge 0.$$
(1.14)

He then proved that the sequence $\{x_n\}$ generated by (1.14) converges strongly to the unique solution of the variational inequality

$$\bigl\langle (\gamma f-\mu F)x^{*},\,x-x^{*}\bigr\rangle\le 0,\quad \forall x\in F(T).$$

Recently, Ceng et al. [22] introduced implicit and explicit iterative schemes for finding the fixed points of a nonexpansive mapping $T$ on a nonempty, closed and convex subset $C$ of a real Hilbert space $H$ as follows:

$$x_t=P_C\bigl[t\gamma Vx_t+(I-t\mu F)Tx_t\bigr]$$
(1.15)

and

$$x_{n+1}=P_C\bigl[\alpha_n\gamma Vx_n+(I-\alpha_n\mu F)Tx_n\bigr],\quad n\ge 0,$$
(1.16)

where $V$ is an $L$-Lipschitzian mapping with constant $L\ge 0$ and $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa,\eta>0$ and $0<\mu<\dfrac{2\eta}{\kappa^{2}}$. They then proved that the sequences generated by (1.15) and (1.16) converge strongly to the unique solution of the variational inequality

$$\bigl\langle (\gamma V-\mu F)x^{*},\,x-x^{*}\bigr\rangle\le 0,\quad \forall x\in F(T).$$

Let $\{T_n\}_{n=1}^{\infty}$ be an infinite family of nonexpansive self-mappings on $C$ and $\{\lambda_n\}_{n=1}^{\infty}$ be a sequence of nonnegative numbers in $[0,1]$. For any $n\ge 1$, define a mapping $W_n$ of $C$ into itself as follows:

$$\begin{cases} U_{n,n+1}=I,\\ U_{n,n}=\lambda_n T_n U_{n,n+1}+(1-\lambda_n)I,\\ U_{n,n-1}=\lambda_{n-1}T_{n-1}U_{n,n}+(1-\lambda_{n-1})I,\\ \qquad\vdots\\ U_{n,k}=\lambda_k T_k U_{n,k+1}+(1-\lambda_k)I,\\ U_{n,k-1}=\lambda_{k-1}T_{k-1}U_{n,k}+(1-\lambda_{k-1})I,\\ \qquad\vdots\\ U_{n,2}=\lambda_2 T_2 U_{n,3}+(1-\lambda_2)I,\\ W_n=U_{n,1}=\lambda_1 T_1 U_{n,2}+(1-\lambda_1)I. \end{cases}$$
(1.17)

Such a mapping $W_n$ is called the $W$-mapping generated by $T_n,T_{n-1},\ldots,T_1$ and $\lambda_n,\lambda_{n-1},\ldots,\lambda_1$.
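The backward recursion (1.17) can be evaluated directly. The sketch below (the maps $T_1,T_2$ and weights are hypothetical illustrative choices) computes $W_n x = U_{n,1}x$ for a finite truncation and checks that a common fixed point of all the $T_k$ is fixed by $W_n$, consistent with Lemma 2.4:

```python
import numpy as np

# Sketch of W_n from (1.17): U_{n,n+1} = I and
# U_{n,k} x = lambda_k T_k(U_{n,k+1} x) + (1 - lambda_k) x.
def W(x, Ts, lams):
    n = len(Ts)
    def U(k, x):
        if k == n + 1:
            return x                               # U_{n,n+1} = I
        return lams[k-1] * Ts[k-1](U(k+1, x)) + (1 - lams[k-1]) * x
    return U(1, x)                                 # W_n = U_{n,1}

# illustrative nonexpansive self-maps on R^2:
T1 = lambda x: np.array([x[0], 0.0])               # projection onto the x-axis
T2 = lambda x: np.clip(x, -1.0, 1.0)               # projection onto a box
lams = [0.5, 0.5]

# p lies in F(T1) ∩ F(T2), so it should be a fixed point of W
p = np.array([0.5, 0.0])
assert np.allclose(W(p, [T1, T2], lams), p)

# W is nonexpansive, being built from convex combinations of nonexpansive maps
x1, x2 = np.array([2.0, 3.0]), np.array([-1.0, 0.5])
d = np.linalg.norm(W(x1, [T1, T2], lams) - W(x2, [T1, T2], lams))
assert d <= np.linalg.norm(x1 - x2) + 1e-12
```

This matches the two structural facts used later: each $W_n$ is nonexpansive, and $F(W)=\bigcap_n F(T_n)$ (Lemma 2.4).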

Very recently, Chen [23] introduced and considered the following iterative scheme:

$$x_{n+1}=P_C\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n A)W_n x_n\bigr],\quad n\ge 0,$$
(1.18)

where $A$ is a strongly positive bounded linear operator, $f$ is a contraction on $H$, and $W_n$ is defined by (1.17). He proved that the above sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality

$$\bigl\langle (\gamma f-A)x^{*},\,x-x^{*}\bigr\rangle\le 0,\quad \forall x\in\bigcap_{n=1}^{\infty}F(T_n),$$

which is the optimality condition for the minimization problem

$$\min_{x\in\bigcap_{n=1}^{\infty}F(T_n)}\ \frac{1}{2}\langle Ax,\,x\rangle-h(x),$$

where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for all $x\in H$).

More recently, Rattanaseeha [7] introduced the iterative algorithm

$$\begin{cases} x_1\in H\ \text{arbitrarily given},\\ F(u_n,y)+\dfrac{1}{r_n}\langle y-u_n,\,u_n-x_n\rangle\ge 0, & \forall y\in C,\\[1mm] x_{n+1}=P_C\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n A)W_n u_n\bigr], & n\ge 1, \end{cases}$$
(1.19)

where $A$ is a strongly positive bounded linear operator, $f$ is a contraction on $H$, and $W_n$ is defined by (1.17). He proved that the above sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality

$$\bigl\langle (\gamma f-A)x^{*},\,x-x^{*}\bigr\rangle\le 0,\quad \forall x\in\bigcap_{n=1}^{\infty}F(T_n)\cap\mathrm{EP}(F).$$

Most recently, Wang et al. [24] introduced the iterative algorithm

$$\begin{cases} x_1\in H,\\ F(u_n,y)+\varphi(y)-\varphi(u_n)+\dfrac{1}{r_n}\langle y-u_n,\,u_n-x_n\rangle\ge 0, & \forall y\in C,\\[1mm] x_{n+1}=P_C\bigl[\alpha_n\gamma f(x_n)+\beta_n x_n+\bigl((1-\beta_n)I-\alpha_n A\bigr)W_n Gu_n\bigr], & n\ge 1, \end{cases}$$
(1.20)

where $A$ is a strongly positive bounded linear operator, $f$ is an $l$-Lipschitz continuous mapping, $\{W_n\}$ is defined by (1.17), and $G=P_C(I-\mu_1 B_1)P_C(I-\mu_2 B_2)$. They proved that the above sequence $\{x_n\}$ converges strongly to $x^{*}\in\Omega:=\bigcap_{n=1}^{\infty}F(T_n)\cap\mathrm{MEP}(F,\varphi)\cap\Gamma$, where Γ is the fixed point set of the mapping $G=P_C(I-\mu_1 B_1)P_C(I-\mu_2 B_2)$ and $x^{*}$ is the unique solution of the VIP

$$\bigl\langle (A-\gamma f)x^{*},\,x-x^{*}\bigr\rangle\ge 0,\quad \forall x\in\Omega,$$

or, equivalently, the unique solution of the minimization problem

$$\min_{x\in\Omega}\ \frac{1}{2}\langle Ax,\,x\rangle-h(x).$$

Our concern now is the following:

Question 1 Can Theorem 3.1 of Rattanaseeha [7], Theorem 3.1 of Wang et al. [24] and related results be extended from one mixed equilibrium problem to a convex combination of a finite family of mixed equilibrium problems?

Question 2 We know that GSEP (1.5) is more general than GSVI (1.6). What happens if GSVI (1.6) is replaced by GSEP (1.5)?

Question 3 We know that the class of $\eta$-strongly monotone and $L$-Lipschitz operators is more general than the class of strongly positive bounded linear operators. What happens if the strongly positive bounded linear operator is replaced by an $\eta$-strongly monotone and $L$-Lipschitz operator?

The purpose of this article is to give affirmative answers to the questions mentioned above. Let $B_i:C\to H$ be $\zeta_i$-inverse-strongly monotone for $i=1,2$, let $B_3$ be a $\kappa$-Lipschitz and $\eta$-strongly monotone operator, and let $f:C\to H$ be an $l$-Lipschitz mapping. Motivated by the above facts, in this paper we propose and analyze the general iterative algorithm

$$\begin{cases} y_n=\delta_n W_n G\Bigl(\sum_{m=1}^{N}\beta_{n,m}u_{n,m}\Bigr)+(1-\delta_n)\Bigl(\sum_{m=1}^{N}\beta_{n,m}u_{n,m}\Bigr),\\[1mm] x_{n+1}=P_C\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_n Gy_n\bigr], & n\ge 1, \end{cases}$$
(1.21)

where $\{u_{n,m}\}$ is such that

$$F_m(u_{n,m},y)+\varphi(y)-\varphi(u_{n,m})+\frac{1}{r_{n,m}}\langle y-u_{n,m},\,u_{n,m}-x_n\rangle\ge 0,\quad \forall y\in C,$$

for each $1\le m\le N$, $W_n$ is defined by (1.17), $G=T_{\mu_1}^{\Theta_1}(I-\mu_1 B_1)T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)$, and $x_1\in H$ is an arbitrary initial point, for finding a common solution of a finite family of MEPs (1.2), GSEP (1.5) and the fixed point problem of an infinite family of nonexpansive self-mappings $\{T_n\}_{n=1}^{\infty}$ on $C$. It is proven that, under some mild conditions imposed on the parameters, the sequence $\{x_n\}$ generated by (1.21) converges strongly to $x^{*}\in\Omega:=\bigl(\bigcap_{n=1}^{\infty}F(T_n)\bigr)\cap\bigl(\bigcap_{m=1}^{N}\mathrm{MEP}(F_m,\varphi)\bigr)\cap\Gamma$, where Γ is the fixed point set of the mapping $G=T_{\mu_1}^{\Theta_1}(I-\mu_1 B_1)T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)$ and $x^{*}$ is the unique solution of the variational inequality

$$\bigl\langle (\gamma f-\mu B_3)x^{*},\,z-x^{*}\bigr\rangle\le 0,\quad \forall z\in\Omega.$$
(1.22)

Remark 1.4 Other results on the problem of finding solutions to equilibrium problems and fixed point problems of families of mappings with different approaches can be found in [25, 26].

2 Preliminaries

We indicate weak convergence and strong convergence by using the notation and →, respectively. A mapping f:CH is called l-Lipschitz continuous if there exists a constant l0 such that

f ( x ) f ( y ) lxy,x,yC.

In particular, if l=1, then f is called a nonexpansive mapping; if l[0,1), then f is a contraction. Recall that a mapping T:HH is said to be a firmly nonexpansive mapping if

T x T y 2 TxTy,xy,x,yH.

The metric (or nearest point) projection from H onto C is the mapping P C :HC which assigns to each point xH the unique point P C xC satisfying the property

x P C x= inf y C xy=:d(x,C).
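For concreteness, two standard convex sets where $P_C$ has a closed form can be sketched and checked against the variational characterization of Proposition 1.1(i) (the specific sets and sample points below are illustrative assumptions):

```python
import numpy as np

def proj_ball(x, r=1.0):
    """P_C for the closed ball of radius r: radial rescaling outside the ball."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_box(x, lo=-1.0, hi=1.0):
    """P_C for a box: coordinatewise clipping."""
    return np.clip(x, lo, hi)

# Check <x - P_C x, y - P_C x> <= 0 for sampled y in C (Proposition 1.1(i))
rng = np.random.default_rng(0)
x = np.array([2.0, -3.0])
p = proj_ball(x)
for _ in range(100):
    y = proj_ball(rng.normal(size=2))          # a sample point of the ball C
    assert np.dot(x - p, y - p) <= 1e-12
```

The sampled inequality is exactly the obtuse-angle property that makes $P_C$ firmly nonexpansive.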

We need some facts and tools in a real Hilbert space H which are listed as lemmas below.

Lemma 2.1 Let $X$ be a real inner product space. Then the following inequality holds:

$$\|x+y\|^{2}\le\|x\|^{2}+2\langle y,\,x+y\rangle,\quad \forall x,y\in X.$$

Lemma 2.2 Let $H$ be a Hilbert space. Then the following hold:

  (a) $\|x-y\|^{2}=\|x\|^{2}-\|y\|^{2}-2\langle x-y,\,y\rangle$ for all $x,y\in H$;

  (b) $\|\lambda x+\mu y\|^{2}=\lambda\|x\|^{2}+\mu\|y\|^{2}-\lambda\mu\|x-y\|^{2}$ for all $x,y\in H$ and $\lambda,\mu\in[0,1]$ with $\lambda+\mu=1$;

  (c) if $\{x_n\}$ is a sequence in $H$ such that $x_n\rightharpoonup x$, then

    $$\limsup_{n\to\infty}\|x_n-y\|^{2}=\limsup_{n\to\infty}\|x_n-x\|^{2}+\|x-y\|^{2},\quad \forall y\in H.$$

We have the following crucial lemmas concerning the W-mappings defined by (1.17).

Lemma 2.3 (see [[27], Lemma 3.2])

Let $\{T_n\}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on $C$ such that $\bigcap_{n=1}^{\infty}F(T_n)\neq\emptyset$, and let $\{\lambda_n\}$ be a sequence in $(0,b]$ for some $b\in(0,1)$. Then, for every $x\in C$ and $k\ge 1$, the limit $\lim_{n\to\infty}U_{n,k}x$ exists, where $U_{n,k}$ is defined by (1.17).

Remark 2.1 (see [[6], Remark 3.1])

It follows from Lemma 2.3 that if $D$ is a nonempty bounded subset of $C$, then for every $\epsilon>0$ there exists $n_0\ge k$ such that for all $n\ge n_0$,

$$\sup_{x\in D}\|U_{n,k}x-U_k x\|\le\epsilon.$$

Remark 2.2 (see [[6], Remark 3.2])

Utilizing Lemma 2.3, we can define a mapping $W:C\to C$ as follows:

$$Wx=\lim_{n\to\infty}W_n x=\lim_{n\to\infty}U_{n,1}x,\quad \forall x\in C.$$

Such a $W$ is called the $W$-mapping generated by $T_1,T_2,\ldots$ and $\lambda_1,\lambda_2,\ldots$ . Since $W_n$ is nonexpansive, $W:C\to C$ is also nonexpansive. Indeed, observe that for each $x,y\in C$,

$$\|Wx-Wy\|=\lim_{n\to\infty}\|W_n x-W_n y\|\le\|x-y\|.$$

If $\{x_n\}$ is a bounded sequence in $C$, then we put $D=\{x_n:n\ge 1\}$. Hence, it is clear from Remark 2.1 that for arbitrary $\epsilon>0$ there exists $N_0\ge 1$ such that for all $n>N_0$,

$$\|W_n x_n-Wx_n\|=\|U_{n,1}x_n-U_1 x_n\|\le\sup_{x\in D}\|U_{n,1}x-U_1 x\|\le\epsilon.$$

This implies that

$$\lim_{n\to\infty}\|W_n x_n-Wx_n\|=0.$$

Lemma 2.4 (see [[27], Lemma 3.3])

Let $\{T_n\}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on $C$ such that $\bigcap_{n=1}^{\infty}F(T_n)\neq\emptyset$, and let $\{\lambda_n\}$ be a sequence in $(0,b]$ for some $b\in(0,1)$. Then $F(W)=\bigcap_{n=1}^{\infty}F(T_n)$.

Lemma 2.5 (see [[28], demiclosedness principle])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T$ be a nonexpansive self-mapping on $C$ with $F(T)\neq\emptyset$. Then $I-T$ is demiclosed; that is, whenever $\{x_n\}$ is a sequence in $C$ converging weakly to some $x\in C$ and the sequence $\{(I-T)x_n\}$ converges strongly to some $y$, it follows that $(I-T)x=y$. Here $I$ is the identity operator of $H$.

Lemma 2.6 Let $A:C\to H$ be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 1.1(i)) implies

$$u\in\mathrm{VI}(C,A)\iff u=P_C(u-\lambda Au),\quad \forall\lambda>0.$$

Lemma 2.7 (see [29])

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

$$a_{n+1}\le(1-\gamma_n)a_n+\sigma_n\gamma_n,\quad \forall n\ge 1,$$

where $\{\gamma_n\}$ is a sequence in $[0,1]$ and $\{\sigma_n\}$ is a real sequence such that

  (i) $\sum_{n=1}^{\infty}\gamma_n=\infty$;

  (ii) $\limsup_{n\to\infty}\sigma_n\le 0$ or $\sum_{n=1}^{\infty}|\sigma_n\gamma_n|<\infty$.

Then $\lim_{n\to\infty}a_n=0$.
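The mechanism of this lemma can be sketched numerically with sample sequences (the particular choices $\gamma_n=\sigma_n=1/n$ are illustrative assumptions satisfying (i) and (ii), not taken from the paper):

```python
# Numerical sketch of Lemma 2.7: with gamma_n = 1/n (so sum gamma_n diverges)
# and sigma_n = 1/n -> 0, the recursion a_{n+1} = (1 - gamma_n) a_n
# + sigma_n * gamma_n drives a_n to 0, roughly like (log n)/n here.
a = 1.0
for n in range(1, 200001):
    gamma, sigma = 1.0 / n, 1.0 / n
    a = (1 - gamma) * a + sigma * gamma

assert 0.0 <= a < 1e-3
```

This is exactly the recursion pattern used at the end of Step 2 of the main proof, with $\gamma_n=\alpha_n(\tau-\gamma l)$ playing the role of $\gamma_n$.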

Lemma 2.8 Each Hilbert space $H$ satisfies Opial's condition; that is, for any sequence $\{x_n\}\subset H$ with $x_n\rightharpoonup x$, the inequality

$$\liminf_{n\to\infty}\|x_n-x\|<\liminf_{n\to\infty}\|x_n-y\|$$

holds for every $y\in H$ with $y\neq x$.

3 Main result

We will introduce and analyze a general iterative algorithm for finding a common solution of a finite family of MEP (1.2), GSEP (1.5) and the fixed point problems of an infinite family of nonexpansive self-mappings { T n } n = 1 on C. Under some appropriate conditions imposed on the parameter sequences, we will prove strong convergence of the proposed algorithm.

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. For every $1\le m\le N$, where $N$ denotes some positive integer, let $F_m$ be a bifunction from $C\times C$ to $\mathbb{R}$ satisfying (A1)-(A4), and let $\varphi:C\to\mathbb{R}$ be a lower semicontinuous and convex function with restriction (B1) or (B2). Let $\Theta_1,\Theta_2:C\times C\to\mathbb{R}$ be two bifunctions satisfying (A1)-(A4), let $B_i:C\to H$ be $\zeta_i$-inverse-strongly monotone for $i=1,2$, let $B_3$ be a $\kappa$-Lipschitz and $\eta$-strongly monotone operator with constants $\kappa,\eta>0$, and let $f:H\to H$ be an $l$-Lipschitz mapping with constant $l\ge 0$. Let $\{T_n\}_{n=1}^{\infty}$ be a sequence of nonexpansive self-mappings on $C$ and $\{\lambda_n\}$ be a sequence in $(0,b]$ for some $b\in(0,1)$. Suppose that $0<\mu<2\eta/\kappa^{2}$ and $0<\gamma l<\tau$, where $\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^{2})}$. Assume that $\Omega:=\bigl(\bigcap_{n=1}^{\infty}F(T_n)\bigr)\cap\bigl(\bigcap_{m=1}^{N}\mathrm{MEP}(F_m,\varphi)\bigr)\cap\Gamma\neq\emptyset$, where Γ is the fixed point set of the mapping $G=T_{\mu_1}^{\Theta_1}(I-\mu_1 B_1)T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)$ with $\mu_i\in(0,2\zeta_i)$ for $i=1,2$. Let $\{\alpha_n\}$, $\{\delta_n\}$, $\{\beta_{n,1}\},\ldots,\{\beta_{n,N}\}$ be sequences in $[0,1]$ and $\{r_{n,m}\}$ be a sequence in $(0,\infty)$ for every $1\le m\le N$ such that:

  (a) $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=1}^{\infty}\alpha_n=\infty$ and $\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$;

  (b) $\sum_{m=1}^{N}\beta_{n,m}=1$ and $\sum_{n=1}^{\infty}|\beta_{n+1,m}-\beta_{n,m}|<\infty$ for each $1\le m\le N$;

  (c) $\lim_{n\to\infty}\delta_n=0$ and $\sum_{n=1}^{\infty}|\delta_{n+1}-\delta_n|<\infty$;

  (d) $\liminf_{n\to\infty}r_{n,m}>0$ and $\sum_{n=1}^{\infty}|r_{n+1,m}-r_{n,m}|<\infty$ for each $1\le m\le N$.

Given $x_1\in H$ arbitrarily, let the sequence $\{x_n\}$ be generated iteratively by

$$\begin{cases} y_n=\delta_n W_n G\Bigl(\sum_{m=1}^{N}\beta_{n,m}u_{n,m}\Bigr)+(1-\delta_n)\Bigl(\sum_{m=1}^{N}\beta_{n,m}u_{n,m}\Bigr),\\[1mm] x_{n+1}=P_C\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_n Gy_n\bigr], & n\ge 1, \end{cases}$$
(3.1)

where $u_{n,m}$ is such that

$$F_m(u_{n,m},y)+\varphi(y)-\varphi(u_{n,m})+\frac{1}{r_{n,m}}\langle y-u_{n,m},\,u_{n,m}-x_n\rangle\ge 0,\quad \forall y\in C,$$

for each $1\le m\le N$, and $W_n$ is defined by (1.17). Then the sequence $\{x_n\}$ defined by (3.1) converges strongly to $x^{*}\in\Omega$ as $n\to\infty$, where $x^{*}$ is the unique solution of the variational inequality

$$\bigl\langle (\gamma f-\mu B_3)x^{*},\,z-x^{*}\bigr\rangle\le 0,\quad \forall z\in\Omega.$$
(3.2)

Proof Let $z_n=\sum_{m=1}^{N}\beta_{n,m}u_{n,m}$ in (3.1); then (3.1) reduces to

$$\begin{cases} z_n=\sum_{m=1}^{N}\beta_{n,m}u_{n,m},\\ y_n=\delta_n W_n Gz_n+(1-\delta_n)z_n,\\ x_{n+1}=P_C\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_n Gy_n\bigr], & n\ge 1. \end{cases}$$
(3.3)

We divide the proof into several steps.

Step 1. We show that $\{x_n\}$ is bounded. Indeed, take $p\in\Omega$ arbitrarily. Since $p=W_n p=T_{r_{n,m}}^{(F_m,\varphi)}p=Gp$ and $B_i$ is $\zeta_i$-inverse-strongly monotone for $i=1,2$, by Remark 1.1 we deduce from $0<\mu_i<2\zeta_i$, $i=1,2$, that for any $n\ge 1$,

$$\begin{aligned} \|Gy_n-p\|^{2} &=\bigl\|T_{\mu_1}^{\Theta_1}(I-\mu_1 B_1)T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)y_n-T_{\mu_1}^{\Theta_1}(I-\mu_1 B_1)T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)p\bigr\|^{2}\\ &\le\bigl\|(I-\mu_1 B_1)T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)y_n-(I-\mu_1 B_1)T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)p\bigr\|^{2}\\ &=\bigl\|\bigl[T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)y_n-T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)p\bigr]-\mu_1\bigl[B_1 T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)y_n-B_1 T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)p\bigr]\bigr\|^{2}\\ &\le\bigl\|T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)y_n-T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)p\bigr\|^{2}+\mu_1(\mu_1-2\zeta_1)\bigl\|B_1 T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)y_n-B_1 T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)p\bigr\|^{2}\\ &\le\bigl\|T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)y_n-T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)p\bigr\|^{2}\\ &\le\bigl\|(I-\mu_2 B_2)y_n-(I-\mu_2 B_2)p\bigr\|^{2}=\bigl\|(y_n-p)-\mu_2(B_2 y_n-B_2 p)\bigr\|^{2}\\ &\le\|y_n-p\|^{2}+\mu_2(\mu_2-2\zeta_2)\|B_2 y_n-B_2 p\|^{2}\\ &\le\|y_n-p\|^{2}=\bigl\|\delta_n W_n Gz_n+(1-\delta_n)z_n-p\bigr\|^{2}\\ &\le\delta_n\|W_n Gz_n-p\|^{2}+(1-\delta_n)\|z_n-p\|^{2}\\ &\le\delta_n\|Gz_n-p\|^{2}+(1-\delta_n)\|z_n-p\|^{2}\\ &\le\delta_n\|z_n-p\|^{2}+(1-\delta_n)\|z_n-p\|^{2}=\|z_n-p\|^{2}\\ &=\Bigl\|\sum_{m=1}^{N}\beta_{n,m}u_{n,m}-p\Bigr\|^{2}\le\sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-p\|^{2}\\ &=\sum_{m=1}^{N}\beta_{n,m}\bigl\|T_{r_{n,m}}^{(F_m,\varphi)}x_n-T_{r_{n,m}}^{(F_m,\varphi)}p\bigr\|^{2}\le\sum_{m=1}^{N}\beta_{n,m}\|x_n-p\|^{2}=\|x_n-p\|^{2}. \end{aligned}$$
(3.4)

(This also shows that $G$ is nonexpansive.) It follows that

$$\begin{aligned} \|x_{n+1}-p\| &=\bigl\|P_C\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_n Gy_n\bigr]-p\bigr\|\\ &\le\bigl\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_n Gy_n-p\bigr\|\\ &=\bigl\|\alpha_n\bigl(\gamma f(x_n)-\mu B_3 p\bigr)+(I-\alpha_n\mu B_3)W_n Gy_n-(I-\alpha_n\mu B_3)p\bigr\|\\ &\le\alpha_n\gamma l\|x_n-p\|+\alpha_n\bigl\|(\gamma f-\mu B_3)p\bigr\|+(1-\alpha_n\tau)\|W_n Gy_n-p\|\\ &\le\alpha_n\gamma l\|x_n-p\|+\alpha_n\bigl\|(\gamma f-\mu B_3)p\bigr\|+(1-\alpha_n\tau)\|Gy_n-p\|\\ &\le\alpha_n\gamma l\|x_n-p\|+\alpha_n\bigl\|(\gamma f-\mu B_3)p\bigr\|+(1-\alpha_n\tau)\|x_n-p\|\\ &=\bigl(1-\alpha_n(\tau-\gamma l)\bigr)\|x_n-p\|+\alpha_n\bigl\|(\gamma f-\mu B_3)p\bigr\|\\ &\le\max\Bigl\{\|x_n-p\|,\ \frac{\|(\gamma f-\mu B_3)p\|}{\tau-\gamma l}\Bigr\}. \end{aligned}$$

By induction, we get

$$\|x_n-p\|\le\max\Bigl\{\|x_1-p\|,\ \frac{\|(\gamma f-\mu B_3)p\|}{\tau-\gamma l}\Bigr\},\quad \forall n\ge 1.$$

Therefore $\{x_n\}$ is bounded, and so are the sequences $\{u_{n,m}\}$, $\{z_n\}$, $\{y_n\}$, $\{f(x_n)\}$ and $\{W_n Gy_n\}$. Without loss of generality, suppose that there exists a bounded subset $K\subset C$ such that

$$x_n,\ u_{n,m},\ z_n,\ y_n,\ W_n Gx_n,\ W_n Gz_n,\ W_n Gy_n\in K,\quad \forall n\ge 1.$$
(3.5)

Step 2. We show that $\|x_{n+1}-x_n\|\to 0$ as $n\to\infty$.

First, we estimate $\|u_{n+1,m}-u_{n,m}\|$. Taking into account that $\liminf_{n\to\infty}r_{n,m}>0$, we may assume, without loss of generality, that $r_{n,m}\in[\epsilon,\infty)$ for some $\epsilon>0$ and every $1\le m\le N$. Utilizing Remark 1.1, we get

$$\begin{aligned} \|u_{n+1,m}-u_{n,m}\| &=\bigl\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_{n+1}-T_{r_{n,m}}^{(F_m,\varphi)}x_n\bigr\|\\ &\le\bigl\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_{n+1}-T_{r_{n+1,m}}^{(F_m,\varphi)}x_n\bigr\|+\bigl\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_n-T_{r_{n,m}}^{(F_m,\varphi)}x_n\bigr\|\\ &\le\|x_{n+1}-x_n\|+\frac{|r_{n+1,m}-r_{n,m}|}{r_{n+1,m}}\bigl\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_n-x_n\bigr\|\\ &\le\|x_{n+1}-x_n\|+\frac{|r_{n+1,m}-r_{n,m}|}{\epsilon}\bigl\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_n-x_n\bigr\|\\ &\le\|x_{n+1}-x_n\|+M|r_{n+1,m}-r_{n,m}|, \end{aligned}$$
(3.6)

where $M>0$ is such that $\sup_{n\ge 1}\bigl\{\frac{1}{\epsilon}\|T_{r_{n+1,m}}^{(F_m,\varphi)}x_n-x_n\|\bigr\}\le M$. Next, we estimate $\|z_{n+1}-z_n\|$:

$$\begin{aligned} \|z_{n+1}-z_n\| &=\Bigl\|\sum_{m=1}^{N}\beta_{n+1,m}u_{n+1,m}-\sum_{m=1}^{N}\beta_{n,m}u_{n,m}\Bigr\|\\ &\le\sum_{m=1}^{N}|\beta_{n+1,m}-\beta_{n,m}|\,\|u_{n+1,m}\|+\sum_{m=1}^{N}\beta_{n,m}\|u_{n+1,m}-u_{n,m}\|\\ &\le\sum_{m=1}^{N}|\beta_{n+1,m}-\beta_{n,m}|\,\|u_{n+1,m}\|+\sum_{m=1}^{N}\beta_{n,m}\bigl(\|x_{n+1}-x_n\|+M|r_{n+1,m}-r_{n,m}|\bigr)\\ &\le\|x_{n+1}-x_n\|+M\sum_{m=1}^{N}|r_{n+1,m}-r_{n,m}|+M_0\sum_{m=1}^{N}|\beta_{n+1,m}-\beta_{n,m}|, \end{aligned}$$
(3.7)

where $M_0=\sup_{n\ge 1}\sum_{m=1}^{N}\|u_{n+1,m}\|$.

On the other hand, from (1.17), since $W_n$, $T_n$ and $U_{n,i}$ are all nonexpansive, we have

$$\begin{aligned} \|W_{n+1}Gz_n-W_n Gz_n\| &=\bigl\|\lambda_1 T_1 U_{n+1,2}Gz_n-\lambda_1 T_1 U_{n,2}Gz_n\bigr\|\le\lambda_1\|U_{n+1,2}Gz_n-U_{n,2}Gz_n\|\\ &=\lambda_1\bigl\|\lambda_2 T_2 U_{n+1,3}Gz_n-\lambda_2 T_2 U_{n,3}Gz_n\bigr\|\le\lambda_1\lambda_2\|U_{n+1,3}Gz_n-U_{n,3}Gz_n\|\\ &\le\cdots\le\lambda_1\lambda_2\cdots\lambda_n\|U_{n+1,n+1}Gz_n-U_{n,n+1}Gz_n\|\le M_1\prod_{i=1}^{n}\lambda_i, \end{aligned}$$
(3.8)

where $M_1>0$ is such that $\sup_{n\ge 1}\bigl\{\|U_{n+1,n+1}Gz_n\|+\|U_{n,n+1}Gz_n\|\bigr\}\le M_1$. Hence, we have

$$\begin{aligned} \|W_{n+1}Gz_{n+1}-W_n Gz_n\| &\le\|W_{n+1}Gz_{n+1}-W_{n+1}Gz_n\|+\|W_{n+1}Gz_n-W_n Gz_n\|\\ &\le\|z_{n+1}-z_n\|+M_1\prod_{i=1}^{n}\lambda_i\\ &\le\|x_{n+1}-x_n\|+M\sum_{m=1}^{N}|r_{n+1,m}-r_{n,m}|+M_0\sum_{m=1}^{N}|\beta_{n+1,m}-\beta_{n,m}|+M_1\prod_{i=1}^{n}\lambda_i. \end{aligned}$$
(3.9)

Putting (3.9) and (3.7) into (3.3), we have

$$\begin{aligned} \|y_{n+1}-y_n\| &\le\delta_{n+1}\|W_{n+1}Gz_{n+1}-W_n Gz_n\|+(1-\delta_{n+1})\|z_{n+1}-z_n\|+|\delta_{n+1}-\delta_n|\,\|W_n Gz_n-z_n\|\\ &\le\delta_{n+1}\Bigl(\|z_{n+1}-z_n\|+M_1\prod_{i=1}^{n}\lambda_i\Bigr)+(1-\delta_{n+1})\|z_{n+1}-z_n\|+|\delta_{n+1}-\delta_n|\,\|W_n Gz_n-z_n\|\\ &=\|z_{n+1}-z_n\|+\delta_{n+1}M_1\prod_{i=1}^{n}\lambda_i+|\delta_{n+1}-\delta_n|\,\|W_n Gz_n-z_n\|\\ &\le\|x_{n+1}-x_n\|+M\sum_{m=1}^{N}|r_{n+1,m}-r_{n,m}|+M_0\sum_{m=1}^{N}|\beta_{n+1,m}-\beta_{n,m}|+\delta_{n+1}M_1\prod_{i=1}^{n}\lambda_i+|\delta_{n+1}-\delta_n|\,\|W_n Gz_n-z_n\|. \end{aligned}$$
(3.10)

Similarly to (3.8), we have

$$\begin{aligned} \|W_{n+1}Gy_n-W_n Gy_n\| &\le\lambda_1\|U_{n+1,2}Gy_n-U_{n,2}Gy_n\|\le\lambda_1\lambda_2\|U_{n+1,3}Gy_n-U_{n,3}Gy_n\|\\ &\le\cdots\le\lambda_1\lambda_2\cdots\lambda_n\|U_{n+1,n+1}Gy_n-U_{n,n+1}Gy_n\|\le M_2\prod_{i=1}^{n}\lambda_i, \end{aligned}$$
(3.11)

where $M_2>0$ is such that $\sup_{n\ge 1}\bigl\{\|U_{n+1,n+1}Gy_n\|+\|U_{n,n+1}Gy_n\|\bigr\}\le M_2$. Then we have

$$\begin{aligned} \|W_{n+1}Gy_{n+1}-W_n Gy_n\| &\le\|W_{n+1}Gy_{n+1}-W_{n+1}Gy_n\|+\|W_{n+1}Gy_n-W_n Gy_n\|\\ &\le\|y_{n+1}-y_n\|+M_2\prod_{i=1}^{n}\lambda_i\\ &\le\|x_{n+1}-x_n\|+M\sum_{m=1}^{N}|r_{n+1,m}-r_{n,m}|+M_0\sum_{m=1}^{N}|\beta_{n+1,m}-\beta_{n,m}|\\ &\quad+\delta_{n+1}M_1\prod_{i=1}^{n}\lambda_i+|\delta_{n+1}-\delta_n|\,\|W_n Gz_n-z_n\|+M_2\prod_{i=1}^{n}\lambda_i. \end{aligned}$$
(3.12)

Hence, it follows from (3.3)-(3.12) that

$$\begin{aligned} \|x_{n+2}-x_{n+1}\| &=\bigl\|P_C\bigl[\alpha_{n+1}\gamma f(x_{n+1})+(I-\alpha_{n+1}\mu B_3)W_{n+1}Gy_{n+1}\bigr]-P_C\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_n Gy_n\bigr]\bigr\|\\ &\le\bigl\|\bigl[\alpha_{n+1}\gamma f(x_{n+1})+(I-\alpha_{n+1}\mu B_3)W_{n+1}Gy_{n+1}\bigr]-\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_n Gy_n\bigr]\bigr\|\\ &\le\alpha_n\gamma\bigl\|f(x_{n+1})-f(x_n)\bigr\|+\gamma|\alpha_{n+1}-\alpha_n|\,\bigl\|f(x_{n+1})\bigr\|\\ &\quad+\bigl\|(I-\alpha_n\mu B_3)\bigl(W_{n+1}Gy_{n+1}-W_n Gy_n\bigr)\bigr\|+\mu|\alpha_n-\alpha_{n+1}|\,\|B_3 W_{n+1}Gy_{n+1}\|\\ &\le\alpha_n\gamma l\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_{n+1})\|+\mu\|B_3 W_{n+1}Gy_{n+1}\|\bigr)+(1-\alpha_n\tau)\|W_{n+1}Gy_{n+1}-W_n Gy_n\|\\ &\le\bigl(1-\alpha_n(\tau-\gamma l)\bigr)\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_{n+1})\|+\mu\|B_3 W_{n+1}Gy_{n+1}\|\bigr)\\ &\quad+(1-\alpha_n\tau)\Bigl(M\sum_{m=1}^{N}|r_{n+1,m}-r_{n,m}|+M_0\sum_{m=1}^{N}|\beta_{n+1,m}-\beta_{n,m}|+\delta_{n+1}M_1\prod_{i=1}^{n}\lambda_i+|\delta_{n+1}-\delta_n|\,\|W_n Gz_n-z_n\|+M_2\prod_{i=1}^{n}\lambda_i\Bigr)\\ &\le\bigl(1-\alpha_n(\tau-\gamma l)\bigr)\|x_{n+1}-x_n\|+M_3\Bigl(|\alpha_{n+1}-\alpha_n|+\sum_{m=1}^{N}|r_{n+1,m}-r_{n,m}|+\sum_{m=1}^{N}|\beta_{n+1,m}-\beta_{n,m}|+|\delta_{n+1}-\delta_n|+b^{n}\Bigr), \end{aligned}$$
(3.13)

where we used $\prod_{i=1}^{n}\lambda_i\le b^{n}$, and $M_3>0$ is such that $\sup_{n\ge 1}\bigl\{\gamma\|f(x_{n+1})\|+\mu\|B_3 W_{n+1}Gy_{n+1}\|+M+M_0+\|W_n Gz_n-z_n\|+\delta_{n+1}M_1+M_2\bigr\}\le M_3$. Noticing conditions (a), (b), (c), (d) and Lemma 2.7, we get $\|x_{n+1}-x_n\|\to 0$ as $n\to\infty$.

Step 3. We show that

$$\lim_{n\to\infty}\|y_n-Gy_n\|=0,$$
(3.14)

$$\lim_{n\to\infty}\|x_n-u_{n,m}\|=0,\quad 1\le m\le N,$$
(3.15)

$$\lim_{n\to\infty}\|x_n-WGx_n\|=0.$$
(3.16)

First, we show $\lim_{n\to\infty}\|y_n-Gy_n\|=0$. Indeed, for simplicity, we write $\tilde{y}_n=T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)y_n$, $\tilde{p}=T_{\mu_2}^{\Theta_2}(I-\mu_2 B_2)p$ and $w_n=T_{\mu_1}^{\Theta_1}(I-\mu_1 B_1)\tilde{y}_n$. Then $w_n=Gy_n$ and $p=Gp$. Similarly to the proof of (3.4), we get

$$\|Gy_n-p\|^{2}\le\|y_n-p\|^{2}+\mu_2(\mu_2-2\zeta_2)\|B_2 y_n-B_2 p\|^{2}+\mu_1(\mu_1-2\zeta_1)\|B_1\tilde{y}_n-B_1\tilde{p}\|^{2}.$$
(3.17)

From (3.3), (3.4) and (3.17), we obtain that for $p\in\Omega$,

$$\begin{aligned} \|x_{n+1}-p\|^{2} &\le\bigl\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_n Gy_n-p\bigr\|^{2}\\ &=\bigl\|\alpha_n\bigl(\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr)+W_n Gy_n-p\bigr\|^{2}\\ &=\|W_n Gy_n-p\|^{2}+2\alpha_n\bigl\langle\gamma f(x_n)-\mu B_3 W_n Gy_n,\,W_n Gy_n-p\bigr\rangle+\alpha_n^{2}\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|^{2}\\ &\le\|Gy_n-p\|^{2}+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigl[2\|W_n Gy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigr]\\ &\le\|y_n-p\|^{2}+\mu_2(\mu_2-2\zeta_2)\|B_2 y_n-B_2 p\|^{2}+\mu_1(\mu_1-2\zeta_1)\|B_1\tilde{y}_n-B_1\tilde{p}\|^{2}\\ &\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigl[2\|W_n Gy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigr]\\ &\le\|x_n-p\|^{2}+\mu_2(\mu_2-2\zeta_2)\|B_2 y_n-B_2 p\|^{2}+\mu_1(\mu_1-2\zeta_1)\|B_1\tilde{y}_n-B_1\tilde{p}\|^{2}\\ &\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigl[2\|W_n Gy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigr], \end{aligned}$$
(3.18)

which immediately implies that

$$\begin{aligned} \mu_2(2\zeta_2-\mu_2)\|B_2 y_n-B_2 p\|^{2}+\mu_1(2\zeta_1-\mu_1)\|B_1\tilde{y}_n-B_1\tilde{p}\|^{2} &\le\|x_n-p\|^{2}-\|x_{n+1}-p\|^{2}\\ &\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigl[2\|W_n Gy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigr]\\ &\le\|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr)\\ &\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigl[2\|W_n Gy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3 W_n Gy_n\bigr\|\bigr]. \end{aligned}$$

Since $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\|x_n-x_{n+1}\|=0$ and $\mu_i\in(0,2\zeta_i)$, $i=1,2$, we deduce from the boundedness of $\{x_n\}$, $\{f(x_n)\}$ and $\{W_n Gy_n\}$ that

$$\lim_{n\to\infty}\|B_2 y_n-B_2 p\|=0\quad\text{and}\quad\lim_{n\to\infty}\|B_1\tilde{y}_n-B_1\tilde{p}\|=0.$$
(3.19)

Also, in terms of the firm nonexpansivity of T μ 1 Θ 1 , T μ 2 Θ 2 , we obtain from μ i (0,2 ζ i ), i=1,2, that

y ˜ n p ˜ 2 = T μ 2 Θ 2 ( I μ 2 B 2 ) y n T μ 2 Θ 2 ( I μ 2 B 2 ) p 2 ( I μ 2 B 2 ) y n ( I μ 2 B 2 ) p , y ˜ n p ˜ = 1 2 [ ( I μ 2 B 2 ) y n ( I μ 2 B 2 ) p 2 + y ˜ n p ˜ 2 ( I μ 2 B 2 ) y n ( I μ 2 B 2 ) p ( y ˜ n p ˜ ) 2 ] 1 2 [ y n p 2 + y ˜ n p ˜ 2 ( y n y ˜ n ) μ 2 ( B 2 y n B 2 p ) ( p p ˜ ) 2 ] 1 2 [ x n p 2 + y ˜ n p ˜ 2 ( y n y ˜ n ) ( p p ˜ ) 2 + 2 μ 2 ( y n y ˜ n ) ( p p ˜ ) , B 2 y n B 2 p ]

and

$$\begin{aligned}
\|w_n-p\|^2&=\bigl\|T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)\tilde y_n-T^{\Theta_1}_{\mu_1}(I-\mu_1B_1)\tilde p\bigr\|^2\\
&\le\bigl\langle(I-\mu_1B_1)\tilde y_n-(I-\mu_1B_1)\tilde p,\,w_n-p\bigr\rangle\\
&=\tfrac12\Bigl[\bigl\|(I-\mu_1B_1)\tilde y_n-(I-\mu_1B_1)\tilde p\bigr\|^2+\|w_n-p\|^2-\bigl\|(I-\mu_1B_1)\tilde y_n-(I-\mu_1B_1)\tilde p-(w_n-p)\bigr\|^2\Bigr]\\
&\le\tfrac12\Bigl[\|\tilde y_n-\tilde p\|^2+\|w_n-p\|^2-\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|^2+2\mu_1\bigl\langle B_1\tilde y_n-B_1\tilde p,\,(\tilde y_n-w_n)+(p-\tilde p)\bigr\rangle-\mu_1^2\|B_1\tilde y_n-B_1\tilde p\|^2\Bigr]\\
&\le\tfrac12\Bigl[\|x_n-p\|^2+\|w_n-p\|^2-\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|^2+2\mu_1\bigl\langle B_1\tilde y_n-B_1\tilde p,\,(\tilde y_n-w_n)+(p-\tilde p)\bigr\rangle\Bigr].
\end{aligned}$$

Thus, we have

$$\|\tilde y_n-\tilde p\|^2\le\|x_n-p\|^2-\bigl\|(y_n-\tilde y_n)-(p-\tilde p)\bigr\|^2+2\mu_2\bigl\langle(y_n-\tilde y_n)-(p-\tilde p),\,B_2y_n-B_2p\bigr\rangle$$
(3.20)

and

$$\|w_n-p\|^2\le\|x_n-p\|^2-\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|^2+2\mu_1\bigl\langle B_1\tilde y_n-B_1\tilde p,\,(\tilde y_n-w_n)+(p-\tilde p)\bigr\rangle.$$
(3.21)
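Estimates (3.20) and (3.21) are obtained from the firm-nonexpansivity displays above via two standard Hilbert-space identities, recalled here for the reader's convenience:

```latex
% Polarization-type identity and the quadratic expansion used above:
\[
  \langle a,b\rangle = \tfrac12\bigl(\|a\|^{2}+\|b\|^{2}-\|a-b\|^{2}\bigr),
  \qquad
  \|a-\mu b\|^{2} = \|a\|^{2} - 2\mu\langle a,b\rangle + \mu^{2}\|b\|^{2},
\]
% applied, e.g., with a = (y_n - \tilde{y}_n) - (p - \tilde{p}) and
% b = B_2 y_n - B_2 p; the nonpositive term -\mu_2^2 \|b\|^2 is then discarded.
```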

Consequently, it follows from (3.4), (3.18) and (3.20) that

$$\begin{aligned}
\|x_{n+1}-p\|^2&\le\|Gy_n-p\|^2+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr]\\
&\le\|\tilde y_n-\tilde p\|^2+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr]\\
&\le\|x_n-p\|^2-\bigl\|(y_n-\tilde y_n)-(p-\tilde p)\bigr\|^2+2\mu_2\bigl\langle(y_n-\tilde y_n)-(p-\tilde p),\,B_2y_n-B_2p\bigr\rangle\\
&\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr],
\end{aligned}$$

which yields

$$\begin{aligned}
\bigl\|(y_n-\tilde y_n)-(p-\tilde p)\bigr\|^2&\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+2\mu_2\bigl\|(y_n-\tilde y_n)-(p-\tilde p)\bigr\|\,\|B_2y_n-B_2p\|\\
&\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr]\\
&\le\|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr)+2\mu_2\bigl\|(y_n-\tilde y_n)-(p-\tilde p)\bigr\|\,\|B_2y_n-B_2p\|\\
&\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr].
\end{aligned}$$

Since $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$ and $\lim_{n\to\infty}\|B_2y_n-B_2p\|=0$, we deduce that

$$\lim_{n\to\infty}\bigl\|(y_n-\tilde y_n)-(p-\tilde p)\bigr\|=0.$$
(3.22)

Furthermore, it follows from (3.4), (3.18) and (3.21) that

$$\begin{aligned}
\|x_{n+1}-p\|^2&\le\|Gy_n-p\|^2+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr]\\
&\le\|w_n-p\|^2+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr]\\
&\le\|x_n-p\|^2-\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|^2+2\mu_1\bigl\langle B_1\tilde y_n-B_1\tilde p,\,(\tilde y_n-w_n)+(p-\tilde p)\bigr\rangle\\
&\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr],
\end{aligned}$$

which leads to

$$\begin{aligned}
\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|^2&\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+2\mu_1\|B_1\tilde y_n-B_1\tilde p\|\,\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|\\
&\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr]\\
&\le\|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr)+2\mu_1\|B_1\tilde y_n-B_1\tilde p\|\,\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|\\
&\quad+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr].
\end{aligned}$$

Since $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$ and $\lim_{n\to\infty}\|B_1\tilde y_n-B_1\tilde p\|=0$, we deduce that

$$\lim_{n\to\infty}\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|=0.$$
(3.23)

Note that

$$\|y_n-w_n\|\le\bigl\|(y_n-\tilde y_n)-(p-\tilde p)\bigr\|+\bigl\|(\tilde y_n-w_n)+(p-\tilde p)\bigr\|.$$

Hence from (3.22) and (3.23), we get

$$\lim_{n\to\infty}\|y_n-w_n\|=\lim_{n\to\infty}\|y_n-Gy_n\|=0.$$

Next, we show that $\lim_{n\to\infty}\|x_n-u_{n,m}\|=0$ for every $1\le m\le N$ and that $\lim_{n\to\infty}\|x_n-WGx_n\|=0$. Indeed, by Proposition 1.2(c), we obtain that for any $p\in\Omega$ and each $1\le m\le N$,

$$\begin{aligned}
\|u_{n,m}-p\|^2&=\bigl\|T^{(F_m,\varphi)}_{r_{n,m}}x_n-T^{(F_m,\varphi)}_{r_{n,m}}p\bigr\|^2\\
&\le\langle u_{n,m}-p,\,x_n-p\rangle\\
&=\tfrac12\bigl[\|u_{n,m}-p\|^2+\|x_n-p\|^2-\|u_{n,m}-x_n\|^2\bigr].
\end{aligned}$$

That is,

$$\|u_{n,m}-p\|^2\le\|x_n-p\|^2-\|u_{n,m}-x_n\|^2.$$
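The first inequality in the display above is the firm nonexpansivity of the resolvent-type mapping asserted by Proposition 1.2(c); for the reader's convenience, the property reads:

```latex
% Firm nonexpansivity of the mixed resolvent T_r^{(F,phi)} (Proposition 1.2(c)),
% applied above with x = x_n, y = p and T_{r_{n,m}}^{(F_m,phi)} p = p:
\[
  \bigl\| T_{r}^{(F,\varphi)}x - T_{r}^{(F,\varphi)}y \bigr\|^{2}
  \le \bigl\langle T_{r}^{(F,\varphi)}x - T_{r}^{(F,\varphi)}y,\; x - y \bigr\rangle ,
  \qquad x, y \in H .
\]
```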

Then we have

$$\begin{aligned}
\|z_n-p\|^2&=\Bigl\|\sum_{m=1}^{N}\beta_{n,m}(u_{n,m}-p)\Bigr\|^2\le\sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-p\|^2\\
&\le\sum_{m=1}^{N}\beta_{n,m}\bigl(\|x_n-p\|^2-\|u_{n,m}-x_n\|^2\bigr)=\|x_n-p\|^2-\sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|^2.
\end{aligned}$$
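The first inequality above is the convexity of $\|\cdot\|^2$ (with $z_n=\sum_{m=1}^{N}\beta_{n,m}u_{n,m}$ and $\sum_{m=1}^{N}\beta_{n,m}=1$), a standard fact recalled here:

```latex
% Convexity of the squared norm on a Hilbert space:
\[
  \Bigl\| \sum_{m=1}^{N} \beta_{n,m} v_{m} \Bigr\|^{2}
  \le \sum_{m=1}^{N} \beta_{n,m} \|v_{m}\|^{2},
  \qquad \beta_{n,m} \ge 0,\quad \sum_{m=1}^{N} \beta_{n,m} = 1,
\]
% applied with v_m = u_{n,m} - p.
```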

It follows that

$$\|y_n-p\|^2\le\|z_n-p\|^2\le\|x_n-p\|^2-\sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|^2.$$
(3.24)

It follows from (3.18) and (3.24) that

$$\begin{aligned}
\|x_{n+1}-p\|^2&\le\|y_n-p\|^2+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr]\\
&\le\|x_n-p\|^2-\sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|^2+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr],
\end{aligned}$$
(3.25)

which immediately implies that

$$\begin{aligned}
\sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|^2&\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr]\\
&\le\|x_n-x_{n+1}\|\bigl(\|x_n-p\|+\|x_{n+1}-p\|\bigr)+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigl[2\|W_nGy_n-p\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\bigr].
\end{aligned}$$

Since $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\|x_n-x_{n+1}\|=0$ and $\liminf_{n\to\infty}\beta_{n,m}>0$ for each $1\le m\le N$, we deduce that

$$\lim_{n\to\infty}\|u_{n,m}-x_n\|=0,\quad 1\le m\le N.$$
(3.26)

Since

$$\|z_n-x_n\|=\Bigl\|\sum_{m=1}^{N}\beta_{n,m}(u_{n,m}-x_n)\Bigr\|\le\sum_{m=1}^{N}\beta_{n,m}\|u_{n,m}-x_n\|,$$

from $\lim_{n\to\infty}\|u_{n,m}-x_n\|=0$ we get

$$\lim_{n\to\infty}\|z_n-x_n\|=0.$$
(3.27)

Notice that

$$\|y_n-x_n\|\le\|y_n-z_n\|+\|z_n-x_n\|\le\bigl\|\delta_nW_nGz_n+(1-\delta_n)z_n-z_n\bigr\|+\|z_n-x_n\|=\delta_n\|W_nGz_n-z_n\|+\|z_n-x_n\|.$$

Since $\delta_n\to0$ and $\|z_n-x_n\|\to0$, we get

$$\lim_{n\to\infty}\|y_n-x_n\|=0.$$
(3.28)

Note that

$$\|x_n-W_nGx_n\|\le\|x_n-W_nGy_n\|+\|W_nGy_n-W_nGx_n\|\le\|x_n-W_nGy_n\|+\|y_n-x_n\|.$$
(3.29)

On the other hand,

$$\begin{aligned}
\|x_n-W_nGy_n\|&\le\|x_n-x_{n+1}\|+\|x_{n+1}-W_nGy_n\|\\
&=\|x_n-x_{n+1}\|+\bigl\|P_C\bigl[\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_nGy_n\bigr]-P_CW_nGy_n\bigr\|\\
&\le\|x_n-x_{n+1}\|+\bigl\|\alpha_n\gamma f(x_n)+(I-\alpha_n\mu B_3)W_nGy_n-W_nGy_n\bigr\|\\
&=\|x_n-x_{n+1}\|+\alpha_n\bigl\|\gamma f(x_n)-\mu B_3W_nGy_n\bigr\|\to0.
\end{aligned}$$
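The second inequality above uses the nonexpansivity of the metric projection $P_C$, a standard property of projections onto closed convex sets which we recall:

```latex
% Nonexpansivity of the metric projection onto the closed convex set C,
% a consequence of the characterization <x - P_C x, z - P_C x> <= 0 for all z in C:
\[
  \| P_{C}x - P_{C}y \| \le \| x - y \| , \qquad x, y \in H .
\]
```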

From $\lim_{n\to\infty}\|x_n-x_{n+1}\|=0$ and $\lim_{n\to\infty}\alpha_n=0$, we get $\lim_{n\to\infty}\|x_n-W_nGy_n\|=0$, which together with (3.28) and (3.29) yields

$$\lim_{n\to\infty}\|x_n-W_nGx_n\|=0.$$
(3.30)

Note that

$$\|x_n-WGx_n\|\le\|x_n-W_nGx_n\|+\|W_nGx_n-WGx_n\|.$$

From (3.30) and Remark 2.2, we see that

$$\lim_{n\to\infty}\|x_n-WGx_n\|=0.$$

Step 4. Now we shall prove that

$$\limsup_{n\to\infty}\bigl\langle x_n-x^*,\,(\gamma f-\mu B_3)x^*\bigr\rangle\le0,$$
(3.31)

where $x^*$ is the unique solution of variational inequality (3.2). To show this, we take a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\bigl\langle x_n-x^*,\,(\gamma f-\mu B_3)x^*\bigr\rangle=\lim_{i\to\infty}\bigl\langle x_{n_i}-x^*,\,(\gamma f-\mu B_3)x^*\bigr\rangle.$$
(3.32)

Since $\{x_{n_i}\}$ is bounded, it has a weakly convergent subsequence; without loss of generality, we may still denote it by $\{x_{n_i}\}$, so that $x_{n_i}\rightharpoonup\omega$. Let us show that $\omega\in\Omega:=\bigl(\bigcap_{n=1}^{\infty}F(T_n)\bigr)\cap\bigl(\bigcap_{m=1}^{N}\mathrm{MEP}(F_m,\varphi)\bigr)\cap\Gamma$.

We first show $\omega\in\Gamma$. From $\|y_n-Gy_n\|\to0$, $\|x_n-y_n\|\to0$ and Lemma 2.5 (the demiclosedness principle), we have $\omega\in F(G)=\Gamma$.
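Lemma 2.5 is the standard demiclosedness principle for nonexpansive mappings; in the form needed here it reads:

```latex
% Demiclosedness of I - G at 0 for the nonexpansive mapping G:
\[
  x_{n_i} \rightharpoonup \omega
  \quad\text{and}\quad
  \| x_{n_i} - G x_{n_i} \| \to 0
  \;\Longrightarrow\;
  \omega \in F(G),
\]
% where ||x_n - G x_n|| <= 2 ||x_n - y_n|| + ||y_n - G y_n|| -> 0
% by the triangle inequality and the nonexpansivity of G.
```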

Next we show $\omega\in\bigcap_{m=1}^{N}\mathrm{MEP}(F_m,\varphi)$. Since $u_{n,m}=T^{(F_m,\varphi)}_{r_{n,m}}x_n$, we have

$$F_m(u_{n,m},y)+\varphi(y)-\varphi(u_{n,m})+\frac{1}{r_{n,m}}\langle y-u_{n,m},\,u_{n,m}-x_n\rangle\ge0,\quad\forall y\in C.$$

It follows from (A2) that

$$\varphi(y)-\varphi(u_{n,m})+\frac{1}{r_{n,m}}\langle y-u_{n,m},\,u_{n,m}-x_n\rangle\ge F_m(y,u_{n,m}),\quad\forall y\in C.$$

Replacing $n$ by $n_i$, we arrive at