A unified extragradient method for systems of hierarchical variational inequalities in a Hilbert space

Journal of Inequalities and Applications 2014, 2014:460

https://doi.org/10.1186/1029-242X-2014-460

  • Received: 3 July 2014
  • Accepted: 25 September 2014

Abstract

In this paper, we introduce and analyze a multistep Mann-type extragradient iterative algorithm by combining Korpelevich’s extragradient method, viscosity approximation method, hybrid steepest-descent method, Mann’s iteration method, and the projection method. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings and a strict pseudocontraction, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inclusions and the solution set of a variational inequality problem (VIP), which is just a unique solution of a system of hierarchical variational inequalities (SHVI) in a real Hilbert space. The results obtained in this paper improve and extend the corresponding results announced by many others.

MSC: 49J30, 47H09, 47J20, 49M05.

Keywords

  • system of hierarchical variational inequalities
  • generalized mixed equilibrium problem
  • variational inclusion
  • variational inequality problem
  • multistep Mann-type extragradient method
  • fixed point

1 Introduction

Let C be a nonempty closed convex subset of a real Hilbert space H and let $P_C$ be the metric projection of H onto C. Let $S : C \to H$ be a nonlinear mapping on C. We denote by $\operatorname{Fix}(S)$ the set of fixed points of S and by R the set of all real numbers. Let $A : C \to H$ be a nonlinear mapping on C. We consider the following variational inequality problem (VIP): find a point $x^* \in C$ such that

$$\langle A x^*, y - x^* \rangle \ge 0, \quad \forall y \in C.$$
(1.1)

The solution set of VIP (1.1) is denoted by $\operatorname{VI}(C, A)$.

The VIP (1.1) was first discussed by Lions [1]. It has many applications in various fields; see, e.g., [2–5]. It is well known that, if A is a strongly monotone and Lipschitz-continuous mapping on C, then VIP (1.1) has a unique solution. In 1976, Korpelevich [6] proposed an iterative algorithm for solving VIP (1.1) in the Euclidean space $\mathbf{R}^n$:

$$\begin{cases} y_n = P_C(x_n - \tau A x_n), \\ x_{n+1} = P_C(x_n - \tau A y_n), \end{cases} \quad \forall n \ge 0,$$

with $\tau > 0$ a given number. This scheme is known as the extragradient method. The literature on the VIP is vast, and Korpelevich's extragradient method has received great attention from many authors, who have improved it in various ways; see, e.g., [7–16] and the references therein.
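As an illustration, the two-projection scheme above can be sketched numerically for a VIP over a box in $\mathbf{R}^2$ with a monotone affine operator; the operator, step size and constraint set below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def extragradient(A, project, x0, tau, iters=500):
    """Korpelevich's extragradient method:
    y_n = P_C(x_n - tau * A x_n),  x_{n+1} = P_C(x_n - tau * A y_n)."""
    x = x0
    for _ in range(iters):
        y = project(x - tau * A(x))       # predictor step
        x = project(x - tau * A(y))       # corrector step
    return x

# Monotone (but not symmetric) affine operator A x = M x; C = [-2, 2]^2.
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
A = lambda x: M @ x
P_C = lambda x: np.clip(x, -2.0, 2.0)

x_star = extragradient(A, P_C, np.array([1.0, 1.0]), tau=0.1)
# A(0) = 0, so x* = 0 solves this VIP; the iterates approach it.
```

Here the symmetric part of M is positive definite, so A is monotone and Lipschitz continuous, which is the setting in which the extragradient method converges; for such rotation-dominated operators the extra projection step is what makes the scheme work.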

Let $\varphi : C \to \mathbf{R}$ be a real-valued function, $A : H \to H$ a nonlinear mapping and $\Theta : C \times C \to \mathbf{R}$ a bifunction. In 2008, Peng and Yao [9] introduced the generalized mixed equilibrium problem (GMEP) of finding $x \in C$ such that

$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle A x, y - x \rangle \ge 0, \quad \forall y \in C.$$
(1.2)

We denote the set of solutions of GMEP (1.2) by $\operatorname{GMEP}(\Theta, \varphi, A)$. The GMEP (1.2) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. The GMEP has been further considered and studied; see, e.g., [7, 10, 14, 16–19]. In particular, if $\varphi = 0$, then GMEP (1.2) reduces to the generalized equilibrium problem (GEP), which is to find $x \in C$ such that

$$\Theta(x, y) + \langle A x, y - x \rangle \ge 0, \quad \forall y \in C.$$

It was introduced and studied by Takahashi and Takahashi [20]. The set of solutions of GEP is denoted by GEP ( Θ , A ) .

If A = 0 , then GMEP (1.2) reduces to the mixed equilibrium problem (MEP) which is to find x C such that
$$\Theta(x, y) + \varphi(y) - \varphi(x) \ge 0, \quad \forall y \in C.$$

It was considered and studied in [21]. The set of solutions of MEP is denoted by MEP ( Θ , φ ) .

If $\varphi = 0$ and $A = 0$, then GMEP (1.2) reduces to the equilibrium problem (EP), which is to find $x \in C$ such that

$$\Theta(x, y) \ge 0, \quad \forall y \in C.$$

It was considered and studied in [22–24]. The set of solutions of EP is denoted by $\operatorname{EP}(\Theta)$.

On the other hand, let B be a single-valued mapping of C into H and let R be a multivalued mapping with $D(R) = C$. Consider the following variational inclusion: find a point $x \in C$ such that

$$0 \in B x + R x.$$
(1.3)

We denote by $\operatorname{I}(B, R)$ the solution set of the variational inclusion (1.3). In particular, if $B = R = 0$, then $\operatorname{I}(B, R) = C$. If $B = 0$, then problem (1.3) becomes the inclusion problem introduced by Rockafellar [25]. It is well known that problem (1.3) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas, including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, equilibria, game theory, etc. Let a set-valued mapping $R : D(R) \subset H \to 2^H$ be maximal monotone. We define the resolvent operator $J_{R,\lambda} : H \to \overline{D(R)}$ associated with R and λ as follows:

$$J_{R,\lambda} x = (I + \lambda R)^{-1} x, \quad \forall x \in H,$$

where λ is a positive number.
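For a concrete instance, take $H = \mathbf{R}$ and $R = \partial|\cdot|$, the subdifferential of the absolute value, which is maximal monotone; its resolvent $(I + \lambda R)^{-1}$ is the soft-thresholding map. A minimal sketch (this example is ours, not from the paper):

```python
def resolvent_abs(x, lam):
    """Resolvent J_{R,lam} = (I + lam*R)^(-1) for R = subdifferential of |.|:
    the soft-thresholding operator."""
    return max(x - lam, 0.0) + min(x + lam, 0.0)

# Defining inclusion: x - J(x) lies in lam * R(J(x)).
x, lam = 3.0, 1.0
z = resolvent_abs(x, lam)   # z = 2.0, and 3 - 2 = 1 = lam * sign(2)
```

Single-valuedness and firm nonexpansiveness of this map are instances of the general properties of resolvents recorded in Lemma 2.8 below.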

In 1998, Huang [26] studied problem (1.3) in the case where R is maximal monotone and B is strongly monotone and Lipschitz continuous with $D(R) = C = H$. Subsequently, Zeng et al. [27] further studied problem (1.3) in a setting more general than Huang's [26], obtained the same strong convergence conclusion as in Huang's result [26], and also gave a geometric convergence rate estimate for approximate solutions. Various types of iterative algorithms for solving variational inclusions have since been studied and developed; for more details, we refer to [11, 19, 28–30] and the references therein.

Let S and T be two nonexpansive mappings. In 2009, Yao et al. [31] considered the following hierarchical variational inequality problem (HVIP): find hierarchically a fixed point of T which is a solution to the VIP for the monotone mapping $I - S$; namely, find $\tilde{x} \in \operatorname{Fix}(T)$ such that

$$\langle (I - S) \tilde{x}, p - \tilde{x} \rangle \ge 0, \quad \forall p \in \operatorname{Fix}(T).$$
(1.4)

The solution set of the hierarchical VIP (1.4) is denoted by Λ. It is not hard to check that solving the hierarchical VIP (1.4) is equivalent to the fixed point problem of the composite mapping $P_{\operatorname{Fix}(T)} S$, i.e., find $\tilde{x} \in C$ such that $\tilde{x} = P_{\operatorname{Fix}(T)} S \tilde{x}$. The authors [31] introduced and analyzed the following iterative algorithm for solving HVIP (1.4):

$$\begin{cases} y_n = \beta_n S x_n + (1 - \beta_n) x_n, \\ x_{n+1} = \alpha_n V x_n + (1 - \alpha_n) T y_n, \end{cases} \quad \forall n \ge 0.$$
(1.5)

It is proved [[31], Theorem 3.2] that $\{x_n\}$ converges strongly to $\tilde{x} = P_\Lambda V \tilde{x}$, which solves the hierarchical VIP

$$\langle (I - S) \tilde{x}, p - \tilde{x} \rangle \ge 0, \quad \forall p \in \operatorname{Fix}(T).$$

Very recently, Kong et al. [7] introduced and considered the following system of hierarchical variational inequalities (SHVI) (over the fixed point set of a strictly pseudocontractive mapping) with a variational inequality constraint:

to find $x^* \in \Xi$ such that

$$\begin{cases} \langle (\mu F - \gamma V) x^*, x - x^* \rangle \ge 0, & \forall x \in \operatorname{Fix}(T) \cap \operatorname{VI}(C, A), \\ \langle (\mu F - \gamma S) x^*, y - x^* \rangle \ge 0, & \forall y \in \operatorname{Fix}(T) \cap \operatorname{VI}(C, A). \end{cases}$$
(1.6)

In particular, if $T = T_1$ and $A = I - T_2$, where $T_i : C \to C$ is $\xi_i$-strictly pseudocontractive for $i = 1, 2$, SHVI (1.6) reduces to the following:

to find $x^* \in \Omega$ such that

$$\begin{cases} \langle (\mu F - \gamma V) x^*, x - x^* \rangle \ge 0, & \forall x \in \operatorname{Fix}(T_1) \cap \operatorname{Fix}(T_2), \\ \langle (\mu F - \gamma S) x^*, y - x^* \rangle \ge 0, & \forall y \in \operatorname{Fix}(T_1) \cap \operatorname{Fix}(T_2). \end{cases}$$
(1.7)
The authors in [7] proposed the following algorithm for solving SHVI (1.6) and presented its convergence analysis:
$$\begin{cases} x_0 = x \in C \text{ chosen arbitrarily}, \\ y_n = P_C(x_n - \nu_n A_n x_n), \\ z_n = \beta_n x_n + \gamma_n P_C(x_n - \nu_n A_n y_n) + \sigma_n T P_C(x_n - \nu_n A_n y_n), \\ x_{n+1} = P_C[\epsilon_n \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) + (I - \epsilon_n \mu F) z_n], \quad \forall n \ge 0, \end{cases}$$
(1.8)

where $A_n = \alpha_n I + A$ for all $n \ge 0$. In particular, if $V \equiv 0$, then (1.8) reduces to the following iterative scheme:

$$\begin{cases} x_0 = x \in C \text{ chosen arbitrarily}, \\ y_n = P_C(x_n - \nu_n A_n x_n), \\ z_n = \beta_n x_n + \gamma_n P_C(x_n - \nu_n A_n y_n) + \sigma_n T P_C(x_n - \nu_n A_n y_n), \\ x_{n+1} = P_C[\epsilon_n (1 - \delta_n) \gamma S x_n + (I - \epsilon_n \mu F) z_n], \quad \forall n \ge 0. \end{cases}$$
(1.9)

In this paper, we introduce and study the following system of hierarchical variational inequalities (SHVI) (over the fixed point set of an infinite family of nonexpansive mappings and a strictly pseudocontractive mapping) with constraints of finitely many GMEPs, finitely many variational inclusions and the VIP (1.1):

Let M, N be two positive integers. Assume that:

(i) $A : C \to H$ is a monotone and L-Lipschitzian mapping and $F : C \to H$ is κ-Lipschitzian and η-strongly monotone with positive constants $\kappa, \eta > 0$ such that $0 < \gamma \le \tau$ and $0 < \mu < 2\eta / \kappa^2$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu \kappa^2)}$;

(ii) $\Theta_k$ is a bifunction from $C \times C$ to $\mathbf{R}$ satisfying (A1)-(A4) and $\varphi_k : C \to \mathbf{R} \cup \{+\infty\}$ is a proper lower semicontinuous and convex function with restriction (B1) or (B2), where $k \in \{1, 2, \ldots, M\}$;

(iii) $R_i : C \to 2^H$ is a maximal monotone mapping, and $A_k : H \to H$ and $B_i : C \to H$ are $\mu_k$-inverse-strongly monotone and $\eta_i$-inverse-strongly monotone, respectively, where $k \in \{1, 2, \ldots, M\}$ and $i \in \{1, 2, \ldots, N\}$;

(iv) $\{T_n\}_{n=1}^\infty$ is a sequence of nonexpansive self-mappings on C, $T : C \to C$ is a ξ-strict pseudocontraction, $S : C \to C$ is a nonexpansive mapping and $V : C \to H$ is a ρ-contraction with coefficient $\rho \in [0, 1)$;

(v) $\Omega := \bigcap_{n=1}^\infty \operatorname{Fix}(T_n) \cap \bigcap_{k=1}^M \operatorname{GMEP}(\Theta_k, \varphi_k, A_k) \cap \bigcap_{i=1}^N \operatorname{I}(B_i, R_i) \cap \operatorname{VI}(C, A) \cap \operatorname{Fix}(T) \neq \emptyset$.
Then the objective is to find $x^* \in \Omega$ such that

$$\begin{cases} \langle (\mu F - \gamma V) x^*, x - x^* \rangle \ge 0, & \forall x \in \Omega, \\ \langle (\mu F - \gamma S) x^*, y - x^* \rangle \ge 0, & \forall y \in \Omega. \end{cases}$$
(1.10)

In particular, whenever $V \equiv 0$, the objective is to find $x^* \in \Omega$ such that

$$\begin{cases} \langle F x^*, x - x^* \rangle \ge 0, & \forall x \in \Omega, \\ \langle (\mu F - \gamma S) x^*, y - x^* \rangle \ge 0, & \forall y \in \Omega. \end{cases}$$
(1.11)

Motivated and inspired by the above facts, we introduce and analyze a multistep Mann-type extragradient iterative algorithm by combining Korpelevich's extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann's iteration method and the projection method. It is proven that, under mild conditions, the proposed algorithm converges strongly to a common element $x^* \in \Omega := \bigcap_{n=1}^\infty \operatorname{Fix}(T_n) \cap \bigcap_{k=1}^M \operatorname{GMEP}(\Theta_k, \varphi_k, A_k) \cap \bigcap_{i=1}^N \operatorname{I}(B_i, R_i) \cap \operatorname{VI}(C, A) \cap \operatorname{Fix}(T)$ of the solution set of finitely many GMEPs, the solution set of finitely many variational inclusions, the solution set of VIP (1.1) and the fixed point set of an infinite family of nonexpansive mappings $\{T_n\}_{n=1}^\infty$ and a strict pseudocontraction T, which is just the unique solution of SHVI (1.10). The results obtained in this paper improve and extend the corresponding results announced by many others. For recent related work, we refer to [32] and the references therein.

2 Preliminaries

Throughout this paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\| \cdot \|$, respectively.

Then we have the following inequality:

$$\|x + y\|^2 \le \|x\|^2 + 2 \langle y, x + y \rangle, \quad \forall x, y \in H.$$

We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to x and $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges strongly to x. Moreover, we use $\omega_w(x_n)$ to denote the weak ω-limit set of the sequence $\{x_n\}$, i.e.,

$$\omega_w(x_n) := \{ x \in H : x_{n_i} \rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\} \}.$$
Lemma 2.1 Let H be a real Hilbert space. Then the following hold:

(a) $\|x - y\|^2 = \|x\|^2 - \|y\|^2 - 2 \langle x - y, y \rangle$ for all $x, y \in H$;

(b) $\|\lambda x + \mu y\|^2 = \lambda \|x\|^2 + \mu \|y\|^2 - \lambda \mu \|x - y\|^2$ for all $x, y \in H$ and $\lambda, \mu \in [0, 1]$ with $\lambda + \mu = 1$;

(c) if $\{x_n\}$ is a sequence in H such that $x_n \rightharpoonup x$, it follows that

$$\limsup_{n \to \infty} \|x_n - y\|^2 = \limsup_{n \to \infty} \|x_n - x\|^2 + \|x - y\|^2, \quad \forall y \in H.$$

2.1 Nonexpansive type mappings

Let C be a nonempty closed convex subset of H. The metric (or nearest point) projection from H onto C is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property

$$\|x - P_C x\| = \inf_{y \in C} \|x - y\| =: d(x, C).$$

The following properties of projections are useful and pertinent to our purpose.

Proposition 2.1 Given any $x \in H$ and $z \in C$, we have the following:

(i) $z = P_C x \iff \langle x - z, y - z \rangle \le 0, \forall y \in C$;

(ii) $z = P_C x \iff \|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2, \forall y \in C$;

(iii) $\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2, \forall x, y \in H$.
Definition 2.1 A mapping $T : H \to H$ is said to be

(a) nonexpansive if

$$\|T x - T y\| \le \|x - y\|, \quad \forall x, y \in H;$$

(b) firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently, if T is 1-inverse-strongly monotone (1-ism),

$$\langle x - y, T x - T y \rangle \ge \|T x - T y\|^2, \quad \forall x, y \in H.$$

Alternatively, T is firmly nonexpansive if and only if T can be expressed as $T = \frac{1}{2}(I + S)$, where $S : H \to H$ is nonexpansive and I is the identity mapping on H. Note that projections are firmly nonexpansive.
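These properties are easy to check numerically; below is a small sketch with the projection onto a closed ball (the radius and test points are our own illustrative choices):

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball C = {y : ||y|| <= radius}."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
Px, Py = proj_ball(x), proj_ball(y)

# Firm nonexpansiveness (1-ism): <Px - Py, x - y> >= ||Px - Py||^2
assert np.dot(Px - Py, x - y) >= np.dot(Px - Py, Px - Py) - 1e-12
# Nonexpansiveness then follows by the Cauchy-Schwarz inequality:
assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y) + 1e-12
```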

Definition 2.2 A mapping $T : H \to H$ is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is, $T \equiv (1 - \alpha) I + \alpha S$, where $\alpha \in (0, 1)$ and $S : H \to H$ is nonexpansive.

Proposition 2.2 ([33])

Let $T : H \to H$ be a given mapping. Then:

(i) T is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism.

(ii) If T is ν-ism, then for $\gamma > 0$, γT is $\frac{\nu}{\gamma}$-ism.

(iii) T is averaged if and only if the complement $I - T$ is ν-ism for some $\nu > 1/2$. Indeed, for $\alpha \in (0, 1)$, T is α-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.

Proposition 2.3 ([33, 34])

Let $S, T, V : H \to H$ be given operators.

(i) If $T = (1 - \alpha) S + \alpha V$ for some $\alpha \in (0, 1)$ and if S is averaged and V is nonexpansive, then T is averaged.

(ii) T is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.

(iii) If $T = (1 - \alpha) S + \alpha V$ for some $\alpha \in (0, 1)$ and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.

(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is α-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.

(v) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then

$$\bigcap_{i=1}^N \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N).$$

We need some facts and tools which are listed as lemmas below.

Lemma 2.2 ([[35], Demiclosedness principle])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let S be a nonexpansive self-mapping on C. Then $I - S$ is demiclosed at 0.

Let $\{T_n\}_{n=1}^\infty$ be an infinite family of nonexpansive self-mappings on C and let $\{\lambda_n\}_{n=1}^\infty$ be a sequence in $[0, 1]$. For any $n \ge 1$, define a mapping $W_n$ on C as follows:
$$\begin{cases} U_{n, n+1} = I, \\ U_{n, n} = \lambda_n T_n U_{n, n+1} + (1 - \lambda_n) I, \\ U_{n, n-1} = \lambda_{n-1} T_{n-1} U_{n, n} + (1 - \lambda_{n-1}) I, \\ \cdots \\ U_{n, k} = \lambda_k T_k U_{n, k+1} + (1 - \lambda_k) I, \\ U_{n, k-1} = \lambda_{k-1} T_{k-1} U_{n, k} + (1 - \lambda_{k-1}) I, \\ \cdots \\ U_{n, 2} = \lambda_2 T_2 U_{n, 3} + (1 - \lambda_2) I, \\ W_n = U_{n, 1} = \lambda_1 T_1 U_{n, 2} + (1 - \lambda_1) I. \end{cases}$$
(2.1)

Such a mapping $W_n$ is called the W-mapping generated by $T_n, T_{n-1}, \ldots, T_1$ and $\lambda_n, \lambda_{n-1}, \ldots, \lambda_1$.

Lemma 2.3 ([36])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $\{T_n\}_{n=1}^\infty$ be a sequence of nonexpansive self-mappings on C such that $\bigcap_{n=1}^\infty \operatorname{Fix}(T_n) \neq \emptyset$ and let $\{\lambda_n\}_{n=1}^\infty$ be a sequence in $(0, b]$ for some $b \in (0, 1)$. Then, for every $x \in C$ and $k \ge 1$, the limit $\lim_{n \to \infty} U_{n, k} x$ exists, where $U_{n, k}$ is defined as in (2.1).

Remark 2.1 ([[37], Remark 3.1])

It can be seen from Lemma 2.3 that if D is a nonempty bounded subset of C, then for $\epsilon > 0$ there exists $n_0 \ge k$ such that $\sup_{x \in D} \|U_{n, k} x - U_k x\| \le \epsilon$ for all $n > n_0$.

Remark 2.2 ([[37], Remark 3.2])

Utilizing Lemma 2.3, we can define a mapping $W : C \to C$ as follows:

$$W x = \lim_{n \to \infty} W_n x = \lim_{n \to \infty} U_{n, 1} x, \quad \forall x \in C.$$

Such a W is called the W-mapping generated by $T_1, T_2, \ldots$ and $\lambda_1, \lambda_2, \ldots$ . Since $W_n$ is nonexpansive, $W : C \to C$ is also nonexpansive. For a bounded sequence $\{x_n\}$ in C, we put $D = \{x_n : n \ge 1\}$. Hence, it is clear from Remark 2.1 that for an arbitrary $\epsilon > 0$ there exists $N_0 \ge 1$ such that for all $n > N_0$

$$\|W_n x_n - W x_n\| = \|U_{n, 1} x_n - U_1 x_n\| \le \sup_{x \in D} \|U_{n, 1} x - U_1 x\| \le \epsilon.$$

This implies that $\lim_{n \to \infty} \|W_n x_n - W x_n\| = 0$.
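The recursion (2.1) is straightforward to implement. The sketch below builds $W_n$ from finitely many nonexpansive maps on $\mathbf{R}^2$; the choice of rotations about the origin, which share the common fixed point 0, is our own illustrative one.

```python
import numpy as np

def W_mapping(mappings, lambdas):
    """W_n from (2.1): U_{n,n+1} = I and, going down from k = n to k = 1,
    U_{n,k} x = lambda_k * T_k(U_{n,k+1} x) + (1 - lambda_k) * x."""
    def W(x):
        u = x                                    # U_{n,n+1} x
        for T, lam in zip(reversed(mappings), reversed(lambdas)):
            u = lam * T(u) + (1 - lam) * x       # U_{n,k} x
        return u
    return W

def rotation(theta):
    """Rotation about the origin: nonexpansive with fixed point 0."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return lambda x: R @ x

Ts = [rotation(0.3), rotation(-0.7), rotation(1.1)]
W3 = W_mapping(Ts, [0.5, 0.5, 0.5])

# The common fixed point 0 is a fixed point of W_3 (cf. Lemma 2.4):
assert np.allclose(W3(np.zeros(2)), np.zeros(2))
```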

Lemma 2.4 ([36])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $\{T_n\}_{n=1}^\infty$ be a sequence of nonexpansive self-mappings on C such that $\bigcap_{n=1}^\infty \operatorname{Fix}(T_n) \neq \emptyset$, and let $\{\lambda_n\}_{n=1}^\infty$ be a sequence in $(0, b]$ for some $b \in (0, 1)$. Then $\operatorname{Fix}(W) = \bigcap_{n=1}^\infty \operatorname{Fix}(T_n)$.

It is clear that, in a real Hilbert space H, $T : C \to C$ is ξ-strictly pseudocontractive if and only if the following inequality holds:

$$\langle T x - T y, x - y \rangle \le \|x - y\|^2 - \frac{1 - \xi}{2} \|(I - T) x - (I - T) y\|^2, \quad \forall x, y \in C.$$

This immediately implies that if T is a ξ-strictly pseudocontractive mapping, then $I - T$ is $\frac{1 - \xi}{2}$-inverse-strongly monotone; for further detail, we refer to [38] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings and that the class of pseudocontractions strictly includes the class of strict pseudocontractions. In addition, for extensions of strict pseudocontractions, we refer the reader to [39] and the references therein.

Proposition 2.4 ([[38], Proposition 2.1])

Let C be a nonempty closed convex subset of a real Hilbert space H and let $T : C \to C$ be a mapping.

(i) If T is a ξ-strictly pseudocontractive mapping, then T satisfies the Lipschitzian condition

$$\|T x - T y\| \le \frac{1 + \xi}{1 - \xi} \|x - y\|, \quad \forall x, y \in C.$$

(ii) If T is a ξ-strictly pseudocontractive mapping, then the mapping $I - T$ is demiclosed at 0, that is, if $\{x_n\}$ is a sequence in C such that $x_n \rightharpoonup \tilde{x}$ and $(I - T) x_n \to 0$, then $(I - T) \tilde{x} = 0$.

(iii) If T is a ξ-(quasi-)strict pseudocontraction, then the fixed point set $\operatorname{Fix}(T)$ of T is closed and convex, so that the projection $P_{\operatorname{Fix}(T)}$ is well defined.

Proposition 2.5 ([40])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $T : C \to C$ be a ξ-strictly pseudocontractive mapping. Let γ and δ be two nonnegative real numbers such that $(\gamma + \delta) \xi \le \gamma$. Then

$$\|\gamma (x - y) + \delta (T x - T y)\| \le (\gamma + \delta) \|x - y\|, \quad \forall x, y \in C.$$

2.2 Mixed equilibrium problems

We list some elementary conclusions for the MEP.

It was assumed in [9] that $\Theta : C \times C \to \mathbf{R}$ is a bifunction satisfying conditions (A1)-(A4) and $\varphi : C \to \mathbf{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where:

(A1) $\Theta(x, x) = 0$ for all $x \in C$;

(A2) Θ is monotone, i.e., $\Theta(x, y) + \Theta(y, x) \le 0$ for any $x, y \in C$;

(A3) Θ is upper-hemicontinuous, i.e., for each $x, y, z \in C$,

$$\limsup_{t \to 0^+} \Theta(t z + (1 - t) x, y) \le \Theta(x, y);$$

(A4) $\Theta(x, \cdot)$ is convex and lower semicontinuous for each $x \in C$;

(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subset C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$,

$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r} \langle y_x - z, z - x \rangle < 0;$$

(B2) C is a bounded set.

Proposition 2.6 ([21])

Assume that $\Theta : C \times C \to \mathbf{R}$ satisfies (A1)-(A4) and let $\varphi : C \to \mathbf{R}$ be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r > 0$ and $x \in H$, define a mapping $T_r^{(\Theta, \varphi)} : H \to C$ as follows:

$$T_r^{(\Theta, \varphi)}(x) = \Big\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r} \langle y - z, z - x \rangle \ge 0, \forall y \in C \Big\}$$

for all $x \in H$. Then the following hold:

(i) for each $x \in H$, $T_r^{(\Theta, \varphi)}(x)$ is nonempty and single-valued;

(ii) $T_r^{(\Theta, \varphi)}$ is firmly nonexpansive, that is, for any $x, y \in H$,

$$\big\| T_r^{(\Theta, \varphi)} x - T_r^{(\Theta, \varphi)} y \big\|^2 \le \big\langle T_r^{(\Theta, \varphi)} x - T_r^{(\Theta, \varphi)} y, x - y \big\rangle;$$

(iii) $\operatorname{Fix}(T_r^{(\Theta, \varphi)}) = \operatorname{MEP}(\Theta, \varphi)$;

(iv) $\operatorname{MEP}(\Theta, \varphi)$ is closed and convex;

(v) $\big\| T_s^{(\Theta, \varphi)} x - T_t^{(\Theta, \varphi)} x \big\|^2 \le \frac{s - t}{s} \big\langle T_s^{(\Theta, \varphi)} x - T_t^{(\Theta, \varphi)} x, T_s^{(\Theta, \varphi)} x - x \big\rangle$ for all $s, t > 0$ and $x \in H$.

2.3 Monotone operators

Definition 2.3 Let T be a nonlinear operator with domain $D(T) \subset H$ and range $R(T) \subset H$. Then T is said to be

(i) monotone if

$$\langle T x - T y, x - y \rangle \ge 0, \quad \forall x, y \in D(T);$$

(ii) β-strongly monotone if there exists a constant $\beta > 0$ such that

$$\langle T x - T y, x - y \rangle \ge \beta \|x - y\|^2, \quad \forall x, y \in D(T);$$

(iii) ν-inverse-strongly monotone if there exists a constant $\nu > 0$ such that

$$\langle T x - T y, x - y \rangle \ge \nu \|T x - T y\|^2, \quad \forall x, y \in D(T).$$
It can easily be seen that if T is nonexpansive, then $I - T$ is monotone. It is also easy to see that the projection $P_C$ is 1-ism. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see [41, 42]. On the other hand, it is obvious that if A is ζ-inverse-strongly monotone, then A is monotone and $\frac{1}{\zeta}$-Lipschitz continuous. Moreover, we also have, for all $u, v \in C$ and $\lambda > 0$,

$$\begin{aligned} \|(I - \lambda A) u - (I - \lambda A) v\|^2 &= \|(u - v) - \lambda (A u - A v)\|^2 \\ &= \|u - v\|^2 - 2 \lambda \langle A u - A v, u - v \rangle + \lambda^2 \|A u - A v\|^2 \\ &\le \|u - v\|^2 + \lambda (\lambda - 2 \zeta) \|A u - A v\|^2. \end{aligned}$$
(2.2)

So, if $\lambda \le 2 \zeta$, then $I - \lambda A$ is a nonexpansive mapping from C to H.

Let C be a nonempty closed convex subset of a real Hilbert space H. We introduce some notation. Let λ be a number in $(0, 1]$ and let $\mu > 0$. Associated with a nonexpansive mapping $T : C \to C$, we define the mapping $T^\lambda : C \to H$ by

$$T^\lambda x := T x - \lambda \mu F(T x), \quad \forall x \in C,$$

where $F : C \to H$ is an operator such that, for some positive constants $\kappa, \eta > 0$, F is κ-Lipschitzian and η-strongly monotone on C; that is, F satisfies the conditions

$$\|F x - F y\| \le \kappa \|x - y\| \quad \text{and} \quad \langle F x - F y, x - y \rangle \ge \eta \|x - y\|^2, \quad \forall x, y \in C.$$

Lemma 2.5 (see [[43], Lemma 3.1])

$T^\lambda$ is a contraction provided $0 < \mu < 2 \eta / \kappa^2$; that is,

$$\|T^\lambda x - T^\lambda y\| \le (1 - \lambda \tau) \|x - y\|, \quad \forall x, y \in C,$$

where $\tau = 1 - \sqrt{1 - \mu (2 \eta - \mu \kappa^2)} \in (0, 1]$.

Remark 2.3 (i) Since F is κ-Lipschitzian and η-strongly monotone on C, we get $0 < \eta \le \kappa$. Hence, whenever $0 < \mu < 2 \eta / \kappa^2$, we have $0 \le 1 - 2 \mu \eta + \mu^2 \kappa^2 < 1$. So, $\tau = 1 - \sqrt{1 - \mu (2 \eta - \mu \kappa^2)} \in (0, 1]$.

(ii) In Lemma 2.5, put $F = \frac{1}{2} I$ and $\mu = 2$. Then we know that $\kappa = \eta = \frac{1}{2}$, $0 < \mu = 2 < 2 \eta / \kappa^2 = 4$ and

$$\tau = 1 - \sqrt{1 - \mu (2 \eta - \mu \kappa^2)} = 1 - \sqrt{1 - 2 \Big( 2 \times \frac{1}{2} - 2 \times \Big( \frac{1}{2} \Big)^2 \Big)} = 1.$$
Lemma 2.6 Let $A : C \to H$ be a monotone mapping. The characterization of the projection (see Proposition 2.1(i)) implies

$$u \in \operatorname{VI}(C, A) \iff u = P_C(u - \lambda A u), \quad \forall \lambda > 0.$$
Finally, recall that a set-valued mapping $\tilde{T} : D(\tilde{T}) \subset H \to 2^H$ is called monotone if, for all $x, y \in D(\tilde{T})$, $f \in \tilde{T} x$ and $g \in \tilde{T} y$ imply

$$\langle f - g, x - y \rangle \ge 0.$$

A set-valued mapping $\tilde{T}$ is called maximal monotone if $\tilde{T}$ is monotone and $(I + \lambda \tilde{T}) D(\tilde{T}) = H$ for each $\lambda > 0$. We denote by $G(\tilde{T})$ the graph of $\tilde{T}$. It is well known that a monotone mapping $\tilde{T}$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle f - g, x - y \rangle \ge 0$ for every $(y, g) \in G(\tilde{T})$ implies $f \in \tilde{T} x$. Next we provide an example to illustrate the concept of a maximal monotone mapping.

Let $A : C \to H$ be a monotone, k-Lipschitz-continuous mapping and let $N_C v$ be the normal cone to C at $v \in C$, i.e.,

$$N_C(v) = \{ u \in H : \langle v - x, u \rangle \ge 0, \forall x \in C \}.$$

Define

$$\tilde{T} v = \begin{cases} A v + N_C(v), & \text{if } v \in C, \\ \emptyset, & \text{if } v \notin C. \end{cases}$$

Then $\tilde{T}$ is maximal monotone (see [25]) such that

$$0 \in \tilde{T} v \iff v \in \operatorname{VI}(C, A).$$
(2.3)

Let $R : D(R) \subset H \to 2^H$ be a maximal monotone mapping. Let $\lambda, \mu > 0$ be two positive numbers.

Lemma 2.7 (see [44])

We have the resolvent identity

$$J_{R, \lambda} x = J_{R, \mu} \Big( \frac{\mu}{\lambda} x + \Big( 1 - \frac{\mu}{\lambda} \Big) J_{R, \lambda} x \Big), \quad \forall x \in H.$$
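As a sanity check, the identity can be verified numerically for the linear maximal monotone operator $R x = M x$ with M positive semidefinite, for which $J_{R,\lambda} = (I + \lambda M)^{-1}$; the matrix and parameters below are our own illustrative choices.

```python
import numpy as np

# Maximal monotone R x = M x with M symmetric positive definite; then
# J_{R,lam} x = (I + lam*M)^{-1} x, computed here by a linear solve.
M = np.array([[2.0, 0.5], [0.5, 1.0]])
I2 = np.eye(2)
J = lambda x, lam: np.linalg.solve(I2 + lam * M, x)

# Resolvent identity: J_lam(x) = J_mu((mu/lam)*x + (1 - mu/lam)*J_lam(x))
x = np.array([1.0, -2.0])
lam, mu = 1.3, 0.4
lhs = J(x, lam)
rhs = J(mu / lam * x + (1 - mu / lam) * lhs, mu)
assert np.allclose(lhs, rhs)
```

The check works because $z = J_{R,\lambda} x$ means $x = z + \lambda M z$, so the argument fed to $J_{R,\mu}$ equals $z + \mu M z$, whose μ-resolvent is again z.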

In terms of Huang [26] (see also [27]), we have the following property for the resolvent operator $J_{R, \lambda} : H \to \overline{D(R)}$.

Lemma 2.8 $J_{R, \lambda}$ is single-valued and firmly nonexpansive, i.e.,

$$\langle J_{R, \lambda} x - J_{R, \lambda} y, x - y \rangle \ge \|J_{R, \lambda} x - J_{R, \lambda} y\|^2, \quad \forall x, y \in H.$$

Consequently, $J_{R, \lambda}$ is nonexpansive and monotone.

Lemma 2.9 ([11])

Let R be a maximal monotone mapping with $D(R) = C$. Then, for any given $\lambda > 0$, $u \in C$ is a solution of problem (1.3) if and only if $u \in C$ satisfies

$$u = J_{R, \lambda}(u - \lambda B u).$$

Lemma 2.10 ([27])

Let R be a maximal monotone mapping with $D(R) = C$ and let $B : C \to H$ be a strongly monotone, continuous and single-valued mapping. Then, for each $z \in H$, the equation $z \in (B + \lambda R) x$ has a unique solution $x_\lambda$ for $\lambda > 0$.

Lemma 2.11 ([11])

Let R be a maximal monotone mapping with $D(R) = C$ and let $B : C \to H$ be a monotone, continuous and single-valued mapping. Then $(I + \lambda (R + B)) C = H$ for each $\lambda > 0$. In this case, $R + B$ is maximal monotone.

2.4 Technical lemmas

The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms.

Lemma 2.12 ([45])

Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the property

$$a_{n+1} \le (1 - s_n) a_n + s_n b_n + t_n, \quad \forall n \ge 1,$$

where $\{s_n\} \subset (0, 1]$ and $\{b_n\}$ are such that

(i) $\sum_{n=1}^\infty s_n = \infty$;

(ii) either $\limsup_{n \to \infty} b_n \le 0$ or $\sum_{n=1}^\infty |s_n b_n| < \infty$;

(iii) $\sum_{n=1}^\infty t_n < \infty$, where $t_n \ge 0$ for all $n \ge 1$.

Then $\lim_{n \to \infty} a_n = 0$.
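A quick numerical illustration of the lemma; the particular sequences $s_n = b_n = 1/(n+1)$ and $t_n = 1/(n+1)^2$ are our own choices satisfying (i)-(iii).

```python
# a_{n+1} <= (1 - s_n) a_n + s_n b_n + t_n with
# sum s_n = inf, b_n -> 0, sum t_n < inf  ==>  a_n -> 0.
a = 1.0
for n in range(1, 100001):
    s = 1.0 / (n + 1)          # (i): divergent series
    b = 1.0 / (n + 1)          # (ii): b_n -> 0, so limsup b_n <= 0
    t = 1.0 / (n + 1) ** 2     # (iii): summable, nonnegative
    a = (1 - s) * a + s * b + t
# after many iterations a_n is driven toward 0
```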

Lemma 2.13 ([46])

Let $\{\alpha_n\}$ be a sequence of nonnegative real numbers and $\{\beta_n\}$ a sequence of real numbers such that $\limsup_{n \to \infty} \alpha_n < \infty$ and $\limsup_{n \to \infty} \beta_n \le 0$. Then $\limsup_{n \to \infty} \alpha_n \beta_n \le 0$.

3 Main results

In this section, we will introduce and analyze a multistep Mann-type extragradient iterative algorithm for finding a solution of SHVI (1.10) (over the fixed point set of an infinite family of nonexpansive mappings and a strict pseudocontraction) with constraints of several problems: finitely many GMEPs, finitely many variational inclusions and VIP (1.1) in a real Hilbert space. This algorithm is based on Korpelevich’s extragradient method, viscosity approximation method, hybrid steepest-descent method, Mann’s iteration method and projection method. We prove the strong convergence of the proposed algorithm to a unique solution of SHVI (1.10) under suitable conditions.

We are now in a position to state and prove the main result in this paper.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let M, N be two positive integers. Let $\Theta_k$ be a bifunction from $C \times C$ to $\mathbf{R}$ satisfying (A1)-(A4) and $\varphi_k : C \to \mathbf{R} \cup \{+\infty\}$ be a proper lower semicontinuous and convex function with restriction (B1) or (B2), where $k \in \{1, 2, \ldots, M\}$. Let $R_i : C \to 2^H$ be a maximal monotone mapping and let $A_k : H \to H$ and $B_i : C \to H$ be $\mu_k$-inverse-strongly monotone and $\eta_i$-inverse-strongly monotone, respectively, where $k \in \{1, 2, \ldots, M\}$, $i \in \{1, 2, \ldots, N\}$. Let $\{T_n\}_{n=1}^\infty$ be a sequence of nonexpansive self-mappings on C and $\{\lambda_n\}_{n=1}^\infty$ be a sequence in $(0, b]$ for some $b \in (0, 1)$. Let $T : C \to C$ be a ξ-strictly pseudocontractive mapping, $S : C \to C$ be a nonexpansive mapping and $V : C \to H$ be a ρ-contraction with coefficient $\rho \in [0, 1)$. Let $A : C \to H$ be a $\frac{1}{L}$-inverse-strongly monotone mapping, and $F : C \to H$ be κ-Lipschitzian and η-strongly monotone with positive constants $\kappa, \eta > 0$ such that $0 < \mu < 2 \eta / \kappa^2$ and $0 < \gamma \le \tau$, where $\tau = 1 - \sqrt{1 - \mu (2 \eta - \mu \kappa^2)}$. Assume that SHVI (1.10) over $\Omega := \bigcap_{n=1}^\infty \operatorname{Fix}(T_n) \cap \bigcap_{k=1}^M \operatorname{GMEP}(\Theta_k, \varphi_k, A_k) \cap \bigcap_{i=1}^N \operatorname{I}(B_i, R_i) \cap \operatorname{VI}(C, A) \cap \operatorname{Fix}(T)$ has a solution. Let $\{\alpha_n\} \subset [0, \infty)$, $\{\nu_n\} \subset (0, \frac{1}{L})$, $\{\epsilon_n\}, \{\delta_n\}, \{\beta_n\}, \{\gamma_n\}, \{\sigma_n\} \subset (0, 1)$, and $\{\lambda_{i,n}\} \subset [a_i, b_i] \subset (0, 2 \eta_i)$, $\{r_{k,n}\} \subset [c_k, d_k] \subset (0, 2 \mu_k)$, where $i \in \{1, 2, \ldots, N\}$ and $k \in \{1, 2, \ldots, M\}$. For arbitrarily given $x_1 \in C$, let $\{x_n\}$ be a sequence generated by
$$\begin{cases} u_n = T_{r_{M,n}}^{(\Theta_M, \varphi_M)} (I - r_{M,n} A_M) T_{r_{M-1,n}}^{(\Theta_{M-1}, \varphi_{M-1})} (I - r_{M-1,n} A_{M-1}) \cdots T_{r_{1,n}}^{(\Theta_1, \varphi_1)} (I - r_{1,n} A_1) x_n, \\ v_n = J_{R_N, \lambda_{N,n}} (I - \lambda_{N,n} B_N) J_{R_{N-1}, \lambda_{N-1,n}} (I - \lambda_{N-1,n} B_{N-1}) \cdots J_{R_1, \lambda_{1,n}} (I - \lambda_{1,n} B_1) u_n, \\ y_n = P_C(v_n - \nu_n A_n v_n), \\ z_n = \beta_n W_n x_n + \gamma_n P_C(v_n - \nu_n A_n y_n) + \sigma_n T P_C(v_n - \nu_n A_n y_n), \\ x_{n+1} = P_C[\epsilon_n \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) + (I - \epsilon_n \mu F) z_n], \quad \forall n \ge 1, \end{cases}$$
(3.1)

where $A_n = \alpha_n I + A$ for all $n \ge 1$. Suppose that:

(C1) $\sum_{n=1}^\infty \alpha_n < \infty$;

(C2) $0 < \liminf_{n \to \infty} \nu_n \le \limsup_{n \to \infty} \nu_n < \frac{1}{L}$;

(C3) $\beta_n + \gamma_n + \sigma_n = 1$ and $(\gamma_n + \sigma_n) \xi \le \gamma_n$ for all $n \ge 1$;

(C4) $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$ and $\liminf_{n \to \infty} \sigma_n > 0$;

(C5) $0 < \liminf_{n \to \infty} \delta_n \le \limsup_{n \to \infty} \delta_n < 1$;

(C6) $\lim_{n \to \infty} \epsilon_n = 0$ and $\sum_{n=1}^\infty \epsilon_n = \infty$.

If $\{S x_n\}$ is bounded, then $\{x_n\}$ converges strongly to a unique solution of SHVI (1.10), provided $\lim_{n \to \infty} \|x_n - x_{n+1}\| = 0$.

Proof For $n \ge 1$, put

$$\Delta_n^k = T_{r_{k,n}}^{(\Theta_k, \varphi_k)} (I - r_{k,n} A_k) T_{r_{k-1,n}}^{(\Theta_{k-1}, \varphi_{k-1})} (I - r_{k-1,n} A_{k-1}) \cdots T_{r_{1,n}}^{(\Theta_1, \varphi_1)} (I - r_{1,n} A_1)$$

for all $k \in \{1, 2, \ldots, M\}$,

$$\Lambda_n^i = J_{R_i, \lambda_{i,n}} (I - \lambda_{i,n} B_i) J_{R_{i-1}, \lambda_{i-1,n}} (I - \lambda_{i-1,n} B_{i-1}) \cdots J_{R_1, \lambda_{1,n}} (I - \lambda_{1,n} B_1)$$

for all $i \in \{1, 2, \ldots, N\}$, $\Delta_n^0 = I$ and $\Lambda_n^0 = I$. Then we have $u_n = \Delta_n^M x_n$ and $v_n = \Lambda_n^N u_n$. In addition, in terms of conditions (C1), (C2) and (C4), without loss of generality we may assume that $\{\beta_n\} \subset [c, d]$ for some $c, d \in (0, 1)$, $\{\nu_n\} \subset [\hat{a}, \hat{b}]$ for some $\hat{a}, \hat{b} \in (0, \frac{1}{L})$, and $\nu_n (\alpha_n + L) \le 1$ for all $n \ge 1$.

One can readily see that $P_C(I - \nu_n A_n)$ is nonexpansive for all $n \ge 1$; see [7] (also [47]).

Next, we divide the remainder of the proof into several steps.

Step 1. { x n } is bounded.

Take a fixed $p \in \Omega$ arbitrarily. Utilizing (2.2) and Proposition 2.6(ii), we have

$$\begin{aligned} \|u_n - p\| &= \big\| T_{r_{M,n}}^{(\Theta_M, \varphi_M)} (I - r_{M,n} A_M) \Delta_n^{M-1} x_n - T_{r_{M,n}}^{(\Theta_M, \varphi_M)} (I - r_{M,n} A_M) \Delta_n^{M-1} p \big\| \\ &\le \big\| (I - r_{M,n} A_M) \Delta_n^{M-1} x_n - (I - r_{M,n} A_M) \Delta_n^{M-1} p \big\| \\ &\le \big\| \Delta_n^{M-1} x_n - \Delta_n^{M-1} p \big\| \le \cdots \le \big\| \Delta_n^0 x_n - \Delta_n^0 p \big\| = \|x_n - p\|. \end{aligned}$$
(3.2)
Utilizing (2.2) and Lemma 2.8, we have

$$\begin{aligned} \|v_n - p\| &= \big\| J_{R_N, \lambda_{N,n}} (I - \lambda_{N,n} B_N) \Lambda_n^{N-1} u_n - J_{R_N, \lambda_{N,n}} (I - \lambda_{N,n} B_N) \Lambda_n^{N-1} p \big\| \\ &\le \big\| (I - \lambda_{N,n} B_N) \Lambda_n^{N-1} u_n - (I - \lambda_{N,n} B_N) \Lambda_n^{N-1} p \big\| \\ &\le \big\| \Lambda_n^{N-1} u_n - \Lambda_n^{N-1} p \big\| \le \cdots \le \big\| \Lambda_n^0 u_n - \Lambda_n^0 p \big\| = \|u_n - p\|, \end{aligned}$$
(3.3)
which, together with (3.2), implies that

$$\|v_n - p\| \le \|x_n - p\|.$$
(3.4)
Note that $W_n p = p$ for all $n \ge 1$ and $P_C(I - \nu A) p = p$ for $\nu \in (0, \frac{2}{L})$. Hence, from (3.1) and (3.4), it follows that

$$\begin{aligned} \|y_n - p\| &= \big\| P_C(I - \nu_n A_n) v_n - P_C(I - \nu_n A) p \big\| \\ &\le \big\| P_C(I - \nu_n A_n) v_n - P_C(I - \nu_n A_n) p \big\| + \big\| P_C(I - \nu_n A_n) p - P_C(I - \nu_n A) p \big\| \\ &\le \|v_n - p\| + \big\| (I - \nu_n A_n) p - (I - \nu_n A) p \big\| \\ &\le \|x_n - p\| + \nu_n \alpha_n \|p\|. \end{aligned}$$
(3.5)
Put $t_n = P_C(v_n - \nu_n A_n y_n)$ for each $n \ge 1$. Then, by Proposition 2.1(ii), we have

$$\begin{aligned} \|t_n - p\|^2 &\le \|v_n - \nu_n A_n y_n - p\|^2 - \|v_n - \nu_n A_n y_n - t_n\|^2 \\ &= \|v_n - p\|^2 - \|v_n - t_n\|^2 + 2 \nu_n \langle A_n y_n, p - t_n \rangle \\ &= \|v_n - p\|^2 - \|v_n - t_n\|^2 + 2 \nu_n \big( \langle A_n y_n - A_n p, p - y_n \rangle + \langle A_n p, p - y_n \rangle + \langle A_n y_n, y_n - t_n \rangle \big) \\ &\le \|v_n - p\|^2 - \|v_n - t_n\|^2 + 2 \nu_n \big( \langle A_n p, p - y_n \rangle + \langle A_n y_n, y_n - t_n \rangle \big) \\ &= \|v_n - p\|^2 - \|v_n - t_n\|^2 + 2 \nu_n \big[ \langle (\alpha_n I + A) p, p - y_n \rangle + \langle A_n y_n, y_n - t_n \rangle \big] \\ &\le \|v_n - p\|^2 - \|v_n - t_n\|^2 + 2 \nu_n \big[ \alpha_n \langle p, p - y_n \rangle + \langle A_n y_n, y_n - t_n \rangle \big] \\ &= \|v_n - p\|^2 - \|v_n - y_n\|^2 - 2 \langle v_n - y_n, y_n - t_n \rangle - \|y_n - t_n\|^2 + 2 \nu_n \big[ \alpha_n \langle p, p - y_n \rangle + \langle A_n y_n, y_n - t_n \rangle \big] \\ &= \|v_n - p\|^2 - \|v_n - y_n\|^2 - \|y_n - t_n\|^2 + 2 \langle v_n - \nu_n A_n y_n - y_n, t_n - y_n \rangle + 2 \nu_n \alpha_n \langle p, p - y_n \rangle. \end{aligned}$$
(3.6)
Further, by Proposition 2.1(i), we have

$$\begin{aligned} \langle v_n - \nu_n A_n y_n - y_n, t_n - y_n \rangle &= \langle v_n - \nu_n A_n v_n - y_n, t_n - y_n \rangle + \langle \nu_n A_n v_n - \nu_n A_n y_n, t_n - y_n \rangle \\ &\le \langle \nu_n A_n v_n - \nu_n A_n y_n, t_n - y_n \rangle \\ &\le \nu_n \|A_n v_n - A_n y_n\| \|t_n - y_n\| \\ &\le \nu_n (\alpha_n + L) \|v_n - y_n\| \|t_n - y_n\|. \end{aligned}$$
(3.7)
So, we obtain from (3.6)

$$\begin{aligned} \|t_n - p\|^2 &\le \|v_n - p\|^2 - \|v_n - y_n\|^2 - \|y_n - t_n\|^2 + 2 \langle v_n - \nu_n A_n y_n - y_n, t_n - y_n \rangle + 2 \nu_n \alpha_n \langle p, p - y_n \rangle \\ &\le \|v_n - p\|^2 - \|v_n - y_n\|^2 - \|y_n - t_n\|^2 + 2 \nu_n (\alpha_n + L) \|v_n - y_n\| \|t_n - y_n\| + 2 \nu_n \alpha_n \|p\| \|p - y_n\| \\ &\le \|v_n - p\|^2 - \|v_n - y_n\|^2 - \|y_n - t_n\|^2 + \nu_n^2 (\alpha_n + L)^2 \|v_n - y_n\|^2 + \|t_n - y_n\|^2 + 2 \nu_n \alpha_n \|p\| \|p - y_n\| \\ &= \|v_n - p\|^2 + 2 \nu_n \alpha_n \|p\| \|p - y_n\| + \big( \nu_n^2 (\alpha_n + L)^2 - 1 \big) \|v_n - y_n\|^2 \\ &\le \|x_n - p\|^2 + 2 \nu_n \alpha_n \|p\| \|p - y_n\| + \big( \nu_n^2 (\alpha_n + L)^2 - 1 \big) \|v_n - y_n\|^2 \\ &\le \|x_n - p\|^2 + 2 \nu_n \alpha_n \|p\| \|p - y_n\|. \end{aligned}$$
(3.8)
Since $(\gamma_n + \sigma_n) \xi \le \gamma_n$ for all $n \ge 1$, utilizing Proposition 2.5 and Lemma 2.1(b), from (3.5) and (3.8) we conclude that

$$\begin{aligned} \|z_n - p\|^2 &= \|\beta_n W_n x_n + \gamma_n t_n + \sigma_n T t_n - p\|^2 \\ &= \Big\| \beta_n (W_n x_n - p) + (\gamma_n + \sigma_n) \frac{1}{\gamma_n + \sigma_n} \big[ \gamma_n (t_n - p) + \sigma_n (T t_n - p) \big] \Big\|^2 \\ &= \beta_n \|W_n x_n - p\|^2 + (\gamma_n + \sigma_n) \Big\| \frac{1}{\gamma_n + \sigma_n} \big[ \gamma_n (t_n - p) + \sigma_n (T t_n - p) \big] \Big\|^2 \\ &\quad - \beta_n (\gamma_n + \sigma_n) \Big\| \frac{1}{\gamma_n + \sigma_n} \big[ \gamma_n (t_n - W_n x_n) + \sigma_n (T t_n - W_n x_n) \big] \Big\|^2 \\ &\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \|t_n - p\|^2 - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\ &\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \big[ \|x_n - p\|^2 + 2 \nu_n \alpha_n \|p\| \|p - y_n\| + \big( \nu_n^2 (\alpha_n + L)^2 - 1 \big) \|v_n - y_n\|^2 \big] - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\ &\le \|x_n - p\|^2 + 2 \nu_n \alpha_n \|p\| \|p - y_n\| + (1 - \beta_n) \big( \nu_n^2 (\alpha_n + L)^2 - 1 \big) \|v_n - y_n\|^2 - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\ &\le \|x_n - p\|^2 + 2 \nu_n \alpha_n \|p\| \big( \|x_n - p\| + \nu_n \alpha_n \|p\| \big) + (1 - \beta_n) \big( \nu_n^2 (\alpha_n + L)^2 - 1 \big) \|v_n - y_n\|^2 - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\ &\le \|x_n - p\|^2 + 2 \|x_n - p\| \big( 2 \nu_n \alpha_n \|p\| \big) + \big( 2 \nu_n \alpha_n \|p\| \big)^2 + (1 - \beta_n) \big( \nu_n^2 (\alpha_n + L)^2 - 1 \big) \|v_n - y_n\|^2 - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\ &= \big( \|x_n - p\| + 2 \nu_n \alpha_n \|p\| \big)^2 + (1 - \beta_n) \big( \nu_n^2 (\alpha_n + L)^2 - 1 \big) \|v_n - y_n\|^2 - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\ &\le \big( \|x_n - p\| + 2 \nu_n \alpha_n \|p\| \big)^2. \end{aligned}$$
(3.9)
Noticing the boundedness of $\{S x_n\}$, we get $\sup_{n \ge 1} \|\gamma S x_n - \mu F p\| \le \tilde M$ for some $\tilde M > 0$. Moreover, utilizing Lemma 2.5 we have from (3.1)
$$
\begin{aligned}
\|x_{n+1} - p\| &= \bigl\| P_C \bigl[ \epsilon_n \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) + (I - \epsilon_n \mu F) z_n \bigr] - P_C p \bigr\| \\
&\le \bigl\| \epsilon_n \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) + (I - \epsilon_n \mu F) z_n - p \bigr\| \\
&= \bigl\| \epsilon_n \bigl[ \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F p \bigr] + (I - \epsilon_n \mu F) z_n - (I - \epsilon_n \mu F) p \bigr\| \\
&\le \epsilon_n \bigl\| \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F p \bigr\| + \bigl\| (I - \epsilon_n \mu F) z_n - (I - \epsilon_n \mu F) p \bigr\| \\
&= \epsilon_n \bigl\| \delta_n (\gamma V x_n - \mu F p) + (1 - \delta_n)(\gamma S x_n - \mu F p) \bigr\| + \bigl\| (I - \epsilon_n \mu F) z_n - (I - \epsilon_n \mu F) p \bigr\| \\
&\le \epsilon_n \bigl[ \delta_n \|\gamma V x_n - \mu F p\| + (1 - \delta_n) \|\gamma S x_n - \mu F p\| \bigr] + (1 - \epsilon_n \tau) \|z_n - p\| \\
&\le \epsilon_n \bigl[ \delta_n \bigl( \gamma \|V x_n - V p\| + \|\gamma V p - \mu F p\| \bigr) + (1 - \delta_n) \tilde M \bigr] + (1 - \epsilon_n \tau) \|z_n - p\| \\
&\le \epsilon_n \bigl[ \delta_n \gamma \rho \|x_n - p\| + \delta_n \|\gamma V p - \mu F p\| + (1 - \delta_n) \tilde M \bigr] + (1 - \epsilon_n \tau) \bigl[ \|x_n - p\| + 2\nu_n \alpha_n \|p\| \bigr] \\
&\le \epsilon_n \bigl[ \delta_n \gamma \rho \|x_n - p\| + \max\{\tilde M, \|\gamma V p - \mu F p\|\} \bigr] + (1 - \epsilon_n \tau) \bigl[ \|x_n - p\| + 2\nu_n \alpha_n \|p\| \bigr] \\
&\le \epsilon_n \gamma \rho \|x_n - p\| + \epsilon_n \max\{\tilde M, \|\gamma V p - \mu F p\|\} + (1 - \epsilon_n \tau) \|x_n - p\| + 2\nu_n \alpha_n \|p\| \\
&= \bigl[ 1 - (\tau - \gamma\rho)\epsilon_n \bigr] \|x_n - p\| + \epsilon_n \max\{\tilde M, \|\gamma V p - \mu F p\|\} + 2\nu_n \alpha_n \|p\| \\
&= \bigl[ 1 - (\tau - \gamma\rho)\epsilon_n \bigr] \|x_n - p\| + (\tau - \gamma\rho)\epsilon_n \max\Bigl\{ \frac{\tilde M}{\tau - \gamma\rho}, \frac{\|\gamma V p - \mu F p\|}{\tau - \gamma\rho} \Bigr\} + 2\nu_n \alpha_n \|p\| \\
&\le \max\Bigl\{ \|x_n - p\|, \frac{\tilde M}{\tau - \gamma\rho}, \frac{\|\gamma V p - \mu F p\|}{\tau - \gamma\rho} \Bigr\} + 2\nu_n \alpha_n \|p\| .
\end{aligned}
$$
(3.10)
By induction, we can derive
$$
\|x_{n+1} - p\| \le \max\Bigl\{ \|x_1 - p\|, \frac{\tilde M}{\tau - \gamma\rho}, \frac{\|\gamma V p - \mu F p\|}{\tau - \gamma\rho} \Bigr\} + \sum_{j=1}^{n} 2\nu_j \alpha_j \|p\|, \quad \forall n \ge 1 .
$$
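The induction rests on the elementary inequality $\max\{a + c,\, b\} \le \max\{a, b\} + c$ for $c \ge 0$. Writing $K$ for $\max\{\tilde M/(\tau - \gamma\rho),\, \|\gamma V p - \mu F p\|/(\tau - \gamma\rho)\}$ ($K$ is shorthand introduced here for brevity, not notation from the text), one unrolling of (3.10) reads:

```latex
\|x_{n+1} - p\|
  \le \max\{\|x_n - p\|,\, K\} + 2\nu_n \alpha_n \|p\|
  \le \max\bigl\{\max\{\|x_{n-1} - p\|,\, K\} + 2\nu_{n-1}\alpha_{n-1}\|p\|,\, K\bigr\}
        + 2\nu_n \alpha_n \|p\|
  \le \max\{\|x_{n-1} - p\|,\, K\} + 2\nu_{n-1}\alpha_{n-1}\|p\| + 2\nu_n \alpha_n \|p\|
  \le \cdots
  \le \max\{\|x_1 - p\|,\, K\} + \sum_{j=1}^{n} 2\nu_j \alpha_j \|p\| .
```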

Consequently, $\{x_n\}$ is bounded (due to $\sum_{n=1}^{\infty} \alpha_n < \infty$), and so are the sequences $\{u_n\}$, $\{v_n\}$, $\{y_n\}$, $\{z_n\}$, $\{A v_n\}$ and $\{A y_n\}$.
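The boundedness argument above concerns the full multistep scheme, but its engine is Korpelevich's two-projection (extragradient) step $y_n = P_C(v_n - \nu_n A v_n)$, $t_n = P_C(v_n - \nu_n A y_n)$ with $\nu_n < 1/L$. A minimal numerical sketch on a toy problem (the operator, the box constraint set, and all names below are illustrative, not from the paper):

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Metric projection P_C onto the box C = [lo, hi]^d.
    return np.clip(x, lo, hi)

def extragradient(A, x0, nu=0.4, iters=200):
    # Korpelevich's extragradient iteration with constant step size nu < 1/L:
    #   y_n     = P_C(x_n - nu * A(x_n))   (predictor)
    #   x_{n+1} = P_C(x_n - nu * A(y_n))   (corrector: re-evaluates A at y_n)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_box(x - nu * A(x))
        x = project_box(x - nu * A(y))
    return x

# Toy VIP: A(x) = x is monotone and 1-Lipschitz on C = [-1, 1]^2, and the
# unique solution of <A x*, y - x*> >= 0 for all y in C is x* = 0.
x_star = extragradient(lambda x: x, x0=[0.9, -0.7])
```

For this strongly monotone toy operator the iterates contract toward the solution; for merely monotone $A$ it is the corrector step with $\nu < 1/L$ that secures convergence, mirroring the role of the negative $\nu_n^2(\alpha_n + L)^2 - 1$ terms in (3.8) and (3.9).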

Step 2. $\|x_n - u_n\| \to 0$, $\|x_n - v_n\| \to 0$, $\|x_n - y_n\| \to 0$, $\|x_n - t_n\| \to 0$, $\|x_n - W x_n\| \to 0$ and $\|t_n - T t_n\| \to 0$ as $n \to \infty$.

From (3.1) and (3.9), it follows that
$$
\begin{aligned}
\|x_{n+1} - p\|^2 &\le \bigl\| \epsilon_n \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) + (I - \epsilon_n \mu F) z_n - p \bigr\|^2 \\
&= \bigl\| \epsilon_n \bigl[ \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F p \bigr] + (I - \epsilon_n \mu F) z_n - (I - \epsilon_n \mu F) p \bigr\|^2 \\
&\le \bigl\{ \epsilon_n \bigl\| \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F p \bigr\| + \bigl\| (I - \epsilon_n \mu F) z_n - (I - \epsilon_n \mu F) p \bigr\| \bigr\}^2 \\
&\le \bigl\{ \epsilon_n \bigl\| \delta_n (\gamma V x_n - \mu F p) + (1 - \delta_n)(\gamma S x_n - \mu F p) \bigr\| + (1 - \epsilon_n \tau) \|z_n - p\| \bigr\}^2 \\
&\le \epsilon_n \frac{1}{\tau} \bigl[ \delta_n \|\gamma V x_n - \mu F p\| + (1 - \delta_n) \|\gamma S x_n - \mu F p\| \bigr]^2 + (1 - \epsilon_n \tau) \|z_n - p\|^2 \\
&\le \epsilon_n \frac{1}{\tau} \bigl[ \|\gamma V x_n - \mu F p\| + \|\gamma S x_n - \mu F p\| \bigr]^2 + \|z_n - p\|^2 \\
&\le \epsilon_n \frac{1}{\tau} \bigl[ \|\gamma V x_n - \mu F p\| + \|\gamma S x_n - \mu F p\| \bigr]^2 + \bigl( \|x_n - p\| + 2\nu_n \alpha_n \|p\| \bigr)^2 \\
&\quad + (1 - \beta_n) \bigl( \nu_n^2 (\alpha_n + L)^2 - 1 \bigr) \|v_n - y_n\|^2 - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\
&\le \bigl( \|x_n - p\| + 2\nu_n \alpha_n \|p\| \bigr)^2 + \epsilon_n \tilde M_1 + (1 - \beta_n) \bigl( \nu_n^2 (\alpha_n + L)^2 - 1 \bigr) \|v_n - y_n\|^2 - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 ,
\end{aligned}
$$
(3.11)
where $\tilde M_1 = \sup_{n \ge 1} \bigl\{ \frac{1}{\tau} \bigl[ \|\gamma V x_n - \mu F p\| + \|\gamma S x_n - \mu F p\| \bigr]^2 \bigr\}$. This, together with $\{\nu_n\} \subset [\hat a, \hat b] \subset (0, \frac{1}{L})$ and $\{\beta_n\} \subset [c, d] \subset (0, 1)$, implies that
$$
\begin{aligned}
& (1 - d) \bigl( 1 - \hat b^2 (\alpha_n + L)^2 \bigr) \|v_n - y_n\|^2 + \frac{c}{1 - c} \|z_n - W_n x_n\|^2 \\
&\quad \le (1 - \beta_n) \bigl( 1 - \nu_n^2 (\alpha_n + L)^2 \bigr) \|v_n - y_n\|^2 + \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\
&\quad \le \bigl( \|x_n - p\| + 2\nu_n \alpha_n \|p\| \bigr)^2 - \|x_{n+1} - p\|^2 + \epsilon_n \tilde M_1 \\
&\quad = \bigl[ \bigl( \|x_n - p\| + 2\nu_n \alpha_n \|p\| \bigr) - \|x_{n+1} - p\| \bigr] \times \bigl[ \bigl( \|x_n - p\| + 2\nu_n \alpha_n \|p\| \bigr) + \|x_{n+1} - p\| \bigr] + \epsilon_n \tilde M_1 \\
&\quad \le \bigl[ \|x_{n+1} - x_n\| + 2\nu_n \alpha_n \|p\| \bigr] \bigl[ \|x_n - p\| + \|x_{n+1} - p\| + 2\nu_n \alpha_n \|p\| \bigr] + \epsilon_n \tilde M_1 \\
&\quad \le \bigl[ \|x_{n+1} - x_n\| + 2\hat b \alpha_n \|p\| \bigr] \bigl[ \|x_n - p\| + \|x_{n+1} - p\| + 2\hat b \alpha_n \|p\| \bigr] + \epsilon_n \tilde M_1 .
\end{aligned}
$$
(3.12)
Note that $\lim_{n\to\infty} \alpha_n = \lim_{n\to\infty} \epsilon_n = 0$. Hence, taking into account the boundedness of $\{x_n\}$ and $\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0$, we deduce from (3.12) that
$$
\lim_{n\to\infty} \|v_n - y_n\| = \lim_{n\to\infty} \|z_n - W_n x_n\| = 0 .
$$
(3.13)
Furthermore, for simplicity, we write $w_n = \epsilon_n \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) + (I - \epsilon_n \mu F) z_n$ for all $n \ge 1$. Then we have
$$
\begin{aligned}
x_{n+1} - x_n &= P_C w_n - w_n + \epsilon_n \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) + (I - \epsilon_n \mu F) z_n - x_n \\
&= P_C w_n - w_n + \epsilon_n \bigl[ \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F z_n \bigr] + z_n - x_n ,
\end{aligned}
$$
which immediately yields
$$
z_n - x_n = x_{n+1} - x_n - \epsilon_n \bigl[ \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F z_n \bigr] - (P_C w_n - w_n) .
$$
So, utilizing Proposition 2.1(i) we get
$$
\begin{aligned}
\|z_n - x_n\|^2 &= \bigl\| x_{n+1} - x_n - \epsilon_n \bigl[ \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F z_n \bigr] - (P_C w_n - w_n) \bigr\|^2 \\
&\le \bigl\| x_{n+1} - x_n - \epsilon_n \bigl[ \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F z_n \bigr] \bigr\|^2 - 2 \langle P_C w_n - w_n, z_n - x_n \rangle \\
&= \bigl\| x_{n+1} - x_n - \epsilon_n \bigl[ \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F z_n \bigr] \bigr\|^2 \\
&\quad - 2 \bigl( \langle P_C w_n - w_n, z_n - P_C w_n \rangle + \langle P_C w_n - w_n, P_C w_n - x_n \rangle \bigr) \\
&\le \bigl[ \|x_{n+1} - x_n\| + \epsilon_n \bigl\| \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F z_n \bigr\| \bigr]^2 \\
&\quad + 2 \bigl( \langle P_C w_n - w_n, P_C w_n - z_n \rangle + \|P_C w_n - w_n\| \, \|P_C w_n - x_n\| \bigr) \\
&\le 2 \|x_{n+1} - x_n\|^2 + 2 \epsilon_n^2 \bigl\| \gamma (\delta_n V x_n + (1 - \delta_n) S x_n) - \mu F z_n \bigr\|^2 + 2 \|x_{n+1} - w_n\| \, \|x_{n+1} - x_n\| .
\end{aligned}
$$
Since $\|x_{n+1} - x_n\| \to 0$ and $\epsilon_n \to 0$ (due to (C6)), we know from the boundedness of $\{w_n\}$, $\{x_n\}$, and $\{z_n\}$ that
$$
\lim_{n\to\infty} \|z_n - x_n\| = 0 .
$$
(3.14)
Taking into account that $\|W_n x_n - x_n\| \le \|W_n x_n - z_n\| + \|z_n - x_n\|$, we obtain from (3.13) and (3.14)
$$
\lim_{n\to\infty} \|x_n - W_n x_n\| = 0 .
$$
(3.15)
Next let us show that $\lim_{n\to\infty} \|x_n - y_n\| = 0$. As a matter of fact, from (3.4) and (3.8) it follows that
$$
\begin{aligned}
\|t_n - p\|^2 &\le \|v_n - p\|^2 - \|v_n - y_n\|^2 - \|y_n - t_n\|^2 + 2\nu_n (\alpha_n + L) \|v_n - y_n\| \, \|t_n - y_n\| + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&\le \|v_n - p\|^2 - \|v_n - y_n\|^2 - \|y_n - t_n\|^2 + \nu_n^2 (\alpha_n + L)^2 \|t_n - y_n\|^2 + \|v_n - y_n\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&= \|v_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| + \bigl( \nu_n^2 (\alpha_n + L)^2 - 1 \bigr) \|t_n - y_n\|^2 \\
&\le \|x_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| + \bigl( \nu_n^2 (\alpha_n + L)^2 - 1 \bigr) \|t_n - y_n\|^2 .
\end{aligned}
$$
(3.16)
Utilizing Lemma 2.1(b), from (3.9) and (3.16), we obtain
$$
\begin{aligned}
\|z_n - p\|^2 &\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \|t_n - p\|^2 - \frac{\beta_n}{1 - \beta_n} \|z_n - W_n x_n\|^2 \\
&\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \|t_n - p\|^2 \\
&\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \bigl[ \|x_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| + \bigl( \nu_n^2 (\alpha_n + L)^2 - 1 \bigr) \|t_n - y_n\|^2 \bigr] \\
&\le \|x_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| + (1 - \beta_n) \bigl( \nu_n^2 (\alpha_n + L)^2 - 1 \bigr) \|t_n - y_n\|^2 ,
\end{aligned}
$$
which immediately implies that
$$
\begin{aligned}
(1 - d) \bigl( 1 - \hat b^2 (\alpha_n + L)^2 \bigr) \|t_n - y_n\|^2 &\le (1 - \beta_n) \bigl( 1 - \nu_n^2 (\alpha_n + L)^2 \bigr) \|t_n - y_n\|^2 \\
&\le \|x_n - p\|^2 - \|z_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&\le \|x_n - z_n\| \bigl( \|x_n - p\| + \|z_n - p\| \bigr) + 2\hat b \alpha_n \|p\| \, \|p - y_n\| .
\end{aligned}
$$
Since $\alpha_n \to 0$, $\|x_n - z_n\| \to 0$ (due to (3.14)), and $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ are bounded, we get
$$
\lim_{n\to\infty} \|t_n - y_n\| = 0 .
$$
(3.17)
In the meantime, from (3.8) and (3.9) it is not hard to find that
$$
\begin{aligned}
\|z_n - p\|^2 &\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \|t_n - p\|^2 \\
&\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \bigl[ \|v_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| + \bigl( \nu_n^2 (\alpha_n + L)^2 - 1 \bigr) \|v_n - y_n\|^2 \bigr] \\
&\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \|v_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| .
\end{aligned}
$$
(3.18)
Now, let us show that $\lim_{n\to\infty} \|x_n - u_n\| = \lim_{n\to\infty} \|x_n - v_n\| = 0$. In fact, observe that
$$
\begin{aligned}
\|\Delta_n^k x_n - p\|^2 &= \bigl\| T^{(\Theta_k, \varphi_k)}_{r_{k,n}} (I - r_{k,n} A_k) \Delta_n^{k-1} x_n - T^{(\Theta_k, \varphi_k)}_{r_{k,n}} (I - r_{k,n} A_k) p \bigr\|^2 \\
&\le \bigl\| (I - r_{k,n} A_k) \Delta_n^{k-1} x_n - (I - r_{k,n} A_k) p \bigr\|^2 \\
&\le \|\Delta_n^{k-1} x_n - p\|^2 + r_{k,n} (r_{k,n} - 2\mu_k) \|A_k \Delta_n^{k-1} x_n - A_k p\|^2 \\
&\le \|x_n - p\|^2 + r_{k,n} (r_{k,n} - 2\mu_k) \|A_k \Delta_n^{k-1} x_n - A_k p\|^2
\end{aligned}
$$
(3.19)
and
$$
\begin{aligned}
\|\Lambda_n^i u_n - p\|^2 &= \bigl\| J_{R_i, \lambda_{i,n}} (I - \lambda_{i,n} B_i) \Lambda_n^{i-1} u_n - J_{R_i, \lambda_{i,n}} (I - \lambda_{i,n} B_i) p \bigr\|^2 \\
&\le \bigl\| (I - \lambda_{i,n} B_i) \Lambda_n^{i-1} u_n - (I - \lambda_{i,n} B_i) p \bigr\|^2 \\
&\le \|\Lambda_n^{i-1} u_n - p\|^2 + \lambda_{i,n} (\lambda_{i,n} - 2\eta_i) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2 \\
&\le \|u_n - p\|^2 + \lambda_{i,n} (\lambda_{i,n} - 2\eta_i) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2 \\
&\le \|x_n - p\|^2 + \lambda_{i,n} (\lambda_{i,n} - 2\eta_i) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2
\end{aligned}
$$
(3.20)
for $i \in \{1, 2, \ldots, N\}$ and $k \in \{1, 2, \ldots, M\}$. Combining (3.18), (3.19), and (3.20), we get
$$
\begin{aligned}
\|z_n - p\|^2 &\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \|v_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \|\Lambda_n^i u_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \bigl[ \|u_n - p\|^2 + \lambda_{i,n} (\lambda_{i,n} - 2\eta_i) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2 \bigr] + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \bigl[ \|\Delta_n^k x_n - p\|^2 + \lambda_{i,n} (\lambda_{i,n} - 2\eta_i) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2 \bigr] + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&\le \beta_n \|x_n - p\|^2 + (1 - \beta_n) \bigl[ \|x_n - p\|^2 + r_{k,n} (r_{k,n} - 2\mu_k) \|A_k \Delta_n^{k-1} x_n - A_k p\|^2 + \lambda_{i,n} (\lambda_{i,n} - 2\eta_i) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2 \bigr] + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&= \|x_n - p\|^2 + (1 - \beta_n) \bigl[ r_{k,n} (r_{k,n} - 2\mu_k) \|A_k \Delta_n^{k-1} x_n - A_k p\|^2 + \lambda_{i,n} (\lambda_{i,n} - 2\eta_i) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2 \bigr] + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| ,
\end{aligned}
$$
which hence implies that
$$
\begin{aligned}
& (1 - d) \bigl[ r_{k,n} (2\mu_k - r_{k,n}) \|A_k \Delta_n^{k-1} x_n - A_k p\|^2 + \lambda_{i,n} (2\eta_i - \lambda_{i,n}) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2 \bigr] \\
&\quad \le (1 - \beta_n) \bigl[ r_{k,n} (2\mu_k - r_{k,n}) \|A_k \Delta_n^{k-1} x_n - A_k p\|^2 + \lambda_{i,n} (2\eta_i - \lambda_{i,n}) \|B_i \Lambda_n^{i-1} u_n - B_i p\|^2 \bigr] \\
&\quad \le \|x_n - p\|^2 - \|z_n - p\|^2 + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| \\
&\quad \le \|x_n - z_n\| \bigl( \|x_n - p\| + \|z_n - p\| \bigr) + 2\nu_n \alpha_n \|p\| \, \|p - y_n\| .
\end{aligned}
$$
Since $\alpha_n \to 0$, $\{\nu_n\} \subset [\hat a, \hat b] \subset (0, \frac{1}{L})$, $\{\lambda_{i,n}\} \subset [a_i, b_i] \subset (0, 2\eta_i)$ and $\{r_{k,n}\} \subset [c_k, d_k] \subset (0, 2\mu_k)$, where $i \in \{1, 2, \ldots, N\}$ and $k \in \{1, 2, \ldots, M\}$, we deduce from (3.14) and the boundedness of $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ that
$$
\lim_{n\to\infty} \|A_k \Delta_n^{k-1} x_n - A_k p\| = 0 \quad \text{and} \quad \lim_{n\to\infty} \|B_i \Lambda_n^{i-1} u_n - B_i p\| = 0 ,
$$
(3.21)

where $i \in \{1, 2, \ldots, N\}$ and $k \in \{1, 2, \ldots, M\}$.

Furthermore, by Proposition 2.6(ii) and Lemma 2.1(a) we have
$$
\begin{aligned}
\|\Delta_n^k x_n - p\|^2 &= \bigl\| T^{(\Theta_k, \varphi_k)}_{r_{k,n}} (I - r_{k,n} A_k) \Delta_n^{k-1} x_n - T^{(\Theta_k, \varphi_k)}_{r_{k,n}} (I - r_{k,n} A_k) p \bigr\|^2 \\
&\le \bigl\langle (I - r_{k,n} A_k) \Delta_n^{k-1} x_n - (I - r_{k,n} A_k) p, \Delta_n^k x_n - p \bigr\rangle \\
&= \frac{1}{2} \Bigl( \bigl\| (I - r_{k,n} A_k) \Delta_n^{k-1} x_n - (I - r_{k,n} A_k) p \bigr\|^2 + \|\Delta_n^k x_n - p\|^2 \\
&\qquad\quad - \bigl\| (I - r_{k,n} A_k) \Delta_n^{k-1} x_n - (I - r_{k,n} A_k) p - (\Delta_n^k x_n - p) \bigr\|^2 \Bigr) \\
&\le \frac{1}{2} \Bigl( \|\Delta_n^{k-1} x_n - p\|^2 + \|\Delta_n^k x_n - p\|^2 - \bigl\| \Delta_n^{k-1} x_n - \Delta_n^k x_n - r_{k,n} (A_k \Delta_n^{k-1} x_n - A_k p) \bigr\|^2 \Bigr) ,
\end{aligned}
$$
which implies that
$$
\begin{aligned}
\|\Delta_n^k x_n - p\|^2 &\le \|\Delta_n^{k-1} x_n - p\|^2 - \bigl\| \Delta_n^{k-1} x_n - \Delta_n^k x_n - r_{k,n} (A_k \Delta_n^{k-1} x_n - A_k p) \bigr\|^2 \\
&= \|\Delta_n^{k-1} x_n - p\|^2 - \|\Delta_n^{k-1} x_n - \Delta_n^k x_n\|^2 - r_{k,n}^2 \|A_k \Delta_n^{k-1} x_n - A_k p\|^2 \\
&\quad + 2 r_{k,n} \bigl\langle \Delta_n^{k-1} x_n - \Delta_n^k x_n, A_k \Delta_n^{k-1} x_n - A_k p \bigr\rangle \\
&\le \|\Delta_n^{k-1} x_n - p\|^2 - \|\Delta_n^{k-1} x_n - \Delta_n^k x_n\|^2 + 2 r_{k,n} \|\Delta_n^{k-1} x_n - \Delta_n^k x_n\| \, \|A_k \Delta_n^{k-1} x_n - A_k p\| \\
&\le \|x_n - p\|^2 - \|\Delta_n^{k-1} x_n - \Delta_n^k x_n\|^2 + 2 r_{k,n} \|\Delta_n^{k-1} x_n - \Delta_n^k x_n\| \, \|A_k \Delta_n^{k-1} x_n - A_k p\| .
\end{aligned}
$$
(3.22)
By Lemma 2.1(a) and Lemma 2.8, we obtain
$$
\|\Lambda_n^i u_n - p\|^2 = \bigl\| J_{R_i, \lambda_{i,n}} (I - \lambda_{i,n} B_i) \Lambda_n^{i-1} u_n - J_{R_i, \lambda_{i,n}} (I - \lambda_{i,n} B_i) p \bigr\|^2 \le \bigl\langle (I - \lambda_{i,n} B_i) \Lambda_n^{i-1} u_n - (I - \lambda_{i,n} B_i) p, \Lambda_n^i u_n - p \bigr\rangle = \frac{1}{2} ( \| (I - \lambda_{i,n} B_i) \Lambda_n^{i} \cdots
$$