
A unified extragradient method for systems of hierarchical variational inequalities in a Hilbert space

Abstract

In this paper, we introduce and analyze a multistep Mann-type extragradient iterative algorithm by combining Korpelevich’s extragradient method, viscosity approximation method, hybrid steepest-descent method, Mann’s iteration method, and the projection method. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings and a strict pseudocontraction, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inclusions and the solution set of a variational inequality problem (VIP), which is just a unique solution of a system of hierarchical variational inequalities (SHVI) in a real Hilbert space. The results obtained in this paper improve and extend the corresponding results announced by many others.

MSC:49J30, 47H09, 47J20, 49M05.

1 Introduction

Let C be a nonempty closed convex subset of a real Hilbert space H and let P_C be the metric projection of H onto C. Let S : C → H be a nonlinear mapping on C. We denote by Fix(S) the set of fixed points of S and by R the set of all real numbers. Let A : C → H be a nonlinear mapping on C. We consider the following variational inequality problem (VIP): find a point x ∈ C such that

⟨Ax, y - x⟩ ≥ 0, ∀y ∈ C.
(1.1)

The solution set of VIP (1.1) is denoted by VI(C,A).

The VIP (1.1) was first discussed by Lions [1]. There are many applications of VIP (1.1) in various fields; see, e.g., [2–5]. It is well known that, if A is a strongly monotone and Lipschitz-continuous mapping on C, then VIP (1.1) has a unique solution. In 1976, Korpelevich [6] proposed an iterative algorithm for solving VIP (1.1) in the Euclidean space R^n:

y_n = P_C(x_n - τA x_n),
x_{n+1} = P_C(x_n - τA y_n), ∀n ≥ 0,

with τ > 0 a given number; this scheme is known as the extragradient method. The literature on the VIP is vast, and Korpelevich's extragradient method has received great attention from many authors, who have improved it in various ways; see, e.g., [7–16] and the references therein, to name but a few.
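As a quick numerical illustration (not part of the paper's analysis), the two projection steps of the extragradient method can be sketched in a few lines of NumPy. The choices below are purely illustrative: A(x) = Mx with a skew-symmetric M (monotone and 1-Lipschitz) and a box as the constraint set C; in this toy example the unique solution is x* = 0.

```python
import numpy as np

def extragradient(A, proj, x0, tau, n_iter):
    """Korpelevich's extragradient method:
    y_n = P_C(x_n - tau*A(x_n)),  x_{n+1} = P_C(x_n - tau*A(y_n))."""
    x = x0
    for _ in range(n_iter):
        y = proj(x - tau * A(x))   # predictor (gradient-projection) step
        x = proj(x - tau * A(y))   # corrector step using A evaluated at y
    return x

# Toy VIP: A(x) = M x with a skew-symmetric (hence monotone, 1-Lipschitz) M,
# C = [-2, 2]^2; for this example the unique solution is x* = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
proj = lambda x: np.clip(x, -2.0, 2.0)   # Euclidean projection onto the box
x_star = extragradient(A, proj, np.array([1.0, 0.5]), tau=0.5, n_iter=200)
```

Note that plain gradient projection diverges on this rotation-type operator, while the extra projection step with step size τ < 1/L yields convergence.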

Let φ : C → R be a real-valued function, A : H → H a nonlinear mapping and Θ : C × C → R a bifunction. In 2008, Peng and Yao [9] introduced the generalized mixed equilibrium problem (GMEP) of finding x ∈ C such that

Θ(x, y) + φ(y) - φ(x) + ⟨Ax, y - x⟩ ≥ 0, ∀y ∈ C.
(1.2)

We denote the set of solutions of GMEP (1.2) by GMEP(Θ, φ, A). The GMEP (1.2) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others. The GMEP has been further considered and studied; see, e.g., [7, 10, 14, 16–19]. In particular, if φ = 0, then GMEP (1.2) reduces to the generalized equilibrium problem (GEP), which is to find x ∈ C such that

Θ(x, y) + ⟨Ax, y - x⟩ ≥ 0, ∀y ∈ C.

It was introduced and studied by Takahashi and Takahashi [20]. The set of solutions of GEP is denoted by GEP(Θ,A).

If A = 0, then GMEP (1.2) reduces to the mixed equilibrium problem (MEP), which is to find x ∈ C such that

Θ(x, y) + φ(y) - φ(x) ≥ 0, ∀y ∈ C.

It was considered and studied in [21]. The set of solutions of MEP is denoted by MEP(Θ,φ).

If φ = 0 and A = 0, then GMEP (1.2) reduces to the equilibrium problem (EP), which is to find x ∈ C such that

Θ(x, y) ≥ 0, ∀y ∈ C.

It was considered and studied in [22–24]. The set of solutions of EP is denoted by EP(Θ).

On the other hand, let B be a single-valued mapping of C into H and R : C → 2^H be a multivalued mapping with D(R) = C. Consider the following variational inclusion: find a point x ∈ C such that

0 ∈ Bx + Rx.
(1.3)

We denote by I(B, R) the solution set of the variational inclusion (1.3). In particular, if B = R = 0, then I(B, R) = C. If B = 0, then problem (1.3) becomes the inclusion problem introduced by Rockafellar [25]. It is well known that problem (1.3) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas, including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, equilibria, game theory, etc. Let a set-valued mapping R : D(R) ⊆ H → 2^H be maximal monotone. We define the resolvent operator J_{R,λ} : H → D(R)¯ (the closure of D(R)) associated with R and λ as follows:

J_{R,λ}x = (I + λR)^{-1}x, ∀x ∈ H,

where λ is a positive number.
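As a concrete example (an illustration, not taken from the paper), for R = ∂|·| on the real line, which is maximal monotone, the resolvent J_{R,λ} = (I + λR)^{-1} evaluates to the soft-thresholding map; its firm nonexpansiveness can then be checked numerically on random samples:

```python
import numpy as np

def resolvent_abs(x, lam):
    """Resolvent J_{R,lam} = (I + lam*R)^{-1} for R = subdifferential of |.|
    on the real line; it evaluates to the soft-thresholding map."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Numerical check of firm nonexpansiveness:
# <Jx - Jy, x - y> >= |Jx - Jy|^2 for all sampled pairs (x, y).
rng = np.random.default_rng(0)
xs, ys = rng.normal(size=100), rng.normal(size=100)
jx, jy = resolvent_abs(xs, 0.7), resolvent_abs(ys, 0.7)
firmly = bool(np.all((jx - jy) * (xs - ys) >= (jx - jy) ** 2 - 1e-12))
```

For instance, J_{R,0.5} maps 2.0 to 1.5, i.e., it shrinks points toward the set where |·| is minimized.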

In 1998, Huang [26] studied problem (1.3) in the case where R is maximal monotone and B is strongly monotone and Lipschitz continuous with D(R) = C = H. Subsequently, Zeng et al. [27] further studied problem (1.3) in a setting more general than that of Huang [26] and obtained the same strong convergence conclusion as in Huang's result; in addition, they gave a geometric convergence rate estimate for the approximate solutions. Various types of iterative algorithms for solving variational inclusions have since been studied and developed; for more details, we refer to [11, 19, 28–30] and the references therein.

Let S and T be two nonexpansive mappings. In 2009, Yao et al. [31] considered the following hierarchical variational inequality problem (HVIP): find hierarchically a fixed point of T which is a solution to the VIP for the monotone mapping I - S; namely, find x̃ ∈ Fix(T) such that

⟨(I - S)x̃, p - x̃⟩ ≥ 0, ∀p ∈ Fix(T).
(1.4)

The solution set of the hierarchical VIP (1.4) is denoted by Λ. It is not hard to check that solving the hierarchical VIP (1.4) is equivalent to the fixed point problem of the composite mapping P_{Fix(T)}S, i.e., find x̃ ∈ C such that x̃ = P_{Fix(T)}S x̃. The authors [31] introduced and analyzed the following iterative algorithm for solving HVIP (1.4):

y_n = β_n S x_n + (1 - β_n) x_n,
x_{n+1} = α_n V x_n + (1 - α_n) T y_n, ∀n ≥ 0.
(1.5)

It is proved in [31, Theorem 3.2] that {x_n} converges strongly to x̃ = P_Λ V x̃, which solves the hierarchical VIP

⟨(I - S)x̃, p - x̃⟩ ≥ 0, ∀p ∈ Fix(T).
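Scheme (1.5) is easy to run on a toy problem. The sketch below is an illustration only: T is the projection onto [0, 2] (so Fix(T) = [0, 2]), S(x) = x/2 (so the hierarchical solution set is Λ = {0}), V is a 1/4-contraction, and the step-size choices with α_n/β_n → 0 mimic (but are not exactly) the parameter conditions imposed in [31].

```python
def hierarchical_fp(S, T, V, x0, alpha, beta, n_iter):
    """Scheme (1.5): y_n = beta_n*S(x_n) + (1 - beta_n)*x_n,
    x_{n+1} = alpha_n*V(x_n) + (1 - alpha_n)*T(y_n)."""
    x = x0
    for n in range(1, n_iter + 1):
        a, b = alpha(n), beta(n)
        y = b * S(x) + (1 - b) * x
        x = a * V(x) + (1 - a) * T(y)
    return x

S = lambda x: x / 2                      # nonexpansive, Fix(S) = {0}
T = lambda x: min(max(x, 0.0), 2.0)      # projection onto [0, 2]
V = lambda x: (x + 1) / 4                # 1/4-contraction
# Illustrative parameters with alpha_n -> 0, sum alpha_n = infinity,
# and alpha_n/beta_n -> 0, so the S-driven step dominates asymptotically.
x_lim = hierarchical_fp(S, T, V, 1.5,
                        alpha=lambda n: 1.0 / n,
                        beta=lambda n: 1.0 / n ** 0.5,
                        n_iter=200000)
```

Here Λ = {0}, so the iterates drift toward 0, the unique hierarchical solution, regardless of the contraction V (which only selects among solutions when Λ has more than one point).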

Very recently, Kong et al. [7] introduced and considered the following system of hierarchical variational inequalities (SHVI) (over the fixed point set of a strictly pseudocontractive mapping) with a variational inequality constraint:

to find x* ∈ Ξ such that

⟨(μF - γV)x*, x - x*⟩ ≥ 0, ∀x ∈ Fix(T) ∩ VI(C, A),
⟨(μF - γS)x*, y - x*⟩ ≥ 0, ∀y ∈ Fix(T) ∩ VI(C, A).
(1.6)

In particular, if T = T_1 and A = I - T_2, where T_i : C → C is ξ_i-strictly pseudocontractive for i = 1, 2, then SHVI (1.6) reduces to the following: find x* ∈ Ω such that

⟨(μF - γV)x*, x - x*⟩ ≥ 0, ∀x ∈ Fix(T_1) ∩ Fix(T_2),
⟨(μF - γS)x*, y - x*⟩ ≥ 0, ∀y ∈ Fix(T_1) ∩ Fix(T_2).
(1.7)

The authors in [7] proposed the following algorithm for solving SHVI (1.6) and presented its convergence analysis:

x_0 = x ∈ C chosen arbitrarily,
y_n = P_C(x_n - ν_n A_n x_n),
z_n = β_n x_n + γ_n P_C(x_n - ν_n A_n y_n) + σ_n T P_C(x_n - ν_n A_n y_n),
x_{n+1} = P_C[λ_n γ(δ_n V x_n + (1 - δ_n)S x_n) + (I - λ_n μF)z_n], ∀n ≥ 0,
(1.8)

where A_n = α_n I + A for all n ≥ 0. In particular, if V ≡ 0, then (1.8) reduces to the following iterative scheme:

x_0 = x ∈ C chosen arbitrarily,
y_n = P_C(x_n - ν_n A_n x_n),
z_n = β_n x_n + γ_n P_C(x_n - ν_n A_n y_n) + σ_n T P_C(x_n - ν_n A_n y_n),
x_{n+1} = P_C[λ_n(1 - δ_n)γ S x_n + (I - λ_n μF)z_n], ∀n ≥ 0.
(1.9)

In this paper, we introduce and study the following system of hierarchical variational inequalities (SHVI) (over the fixed point set of an infinite family of nonexpansive mappings and a strictly pseudocontractive mapping) with constraints of finitely many GMEPs, finitely many variational inclusions and the VIP (1.1):

Let M, N be two positive integers. Assume that

  1. (i)

A : C → H is a monotone and L-Lipschitzian mapping and F : C → H is κ-Lipschitzian and η-strongly monotone with positive constants κ, η > 0 such that 0 < γ ≤ τ and 0 < μ < 2η/κ², where τ = 1 - √(1 - μ(2η - μκ²));

  2. (ii)

Θ_k is a bifunction from C × C to R satisfying (A1)-(A4) and φ_k : C → R ∪ {+∞} is a proper lower semicontinuous and convex function with restriction (B1) or (B2), where k ∈ {1, 2, …, M};

  3. (iii)

R_i : C → 2^H is a maximal monotone mapping, and A_k : H → H and B_i : C → H are μ_k-inverse-strongly monotone and η_i-inverse-strongly monotone, respectively, where k ∈ {1, 2, …, M} and i ∈ {1, 2, …, N};

  4. (iv)

{T_n}_{n=1}^∞ is a sequence of nonexpansive self-mappings on C, T : C → C is a ξ-strict pseudocontraction, S : C → C is a nonexpansive mapping and V : C → H is a ρ-contraction with coefficient ρ ∈ [0, 1);

  5. (v)

Ω := ⋂_{n=1}^∞ Fix(T_n) ∩ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ⋂_{i=1}^N I(B_i, R_i) ∩ VI(C, A) ∩ Fix(T) ≠ ∅.

Then the objective is to find x* ∈ Ω such that

⟨(μF - γV)x*, x - x*⟩ ≥ 0, ∀x ∈ Ω,
⟨(μF - γS)x*, y - x*⟩ ≥ 0, ∀y ∈ Ω.
(1.10)

In particular, whenever V ≡ 0, the objective is to find x* ∈ Ω such that

⟨μF x*, x - x*⟩ ≥ 0, ∀x ∈ Ω,
⟨(μF - γS)x*, y - x*⟩ ≥ 0, ∀y ∈ Ω.
(1.11)

Motivated and inspired by the above facts, we introduce and analyze a multistep Mann-type extragradient iterative algorithm by combining Korpelevich's extragradient method, the viscosity approximation method, the hybrid steepest-descent method, Mann's iteration method and the projection method. It is proven that, under mild conditions, the proposed algorithm converges strongly to a common element x* ∈ Ω := ⋂_{n=1}^∞ Fix(T_n) ∩ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ⋂_{i=1}^N I(B_i, R_i) ∩ VI(C, A) ∩ Fix(T) of the solution set of finitely many GMEPs, the solution set of finitely many variational inclusions, the solution set of VIP (1.1) and the fixed point set of an infinite family of nonexpansive mappings {T_n}_{n=1}^∞ and a strict pseudocontraction T, which is just the unique solution of SHVI (1.10). The results obtained in this paper improve and extend the corresponding results announced by many others. For recent related work, we refer to [32] and the references therein.

2 Preliminaries

Throughout this paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively.

Then we have the following inequality:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ H.

We write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x and x_n → x to indicate that the sequence {x_n} converges strongly to x. Moreover, we use ω_w(x_n) to denote the weak ω-limit set of the sequence {x_n}, i.e.,

ω_w(x_n) := {x ∈ H : x_{n_i} ⇀ x for some subsequence {x_{n_i}} of {x_n}}.

Lemma 2.1 Let H be a real Hilbert space. Then the following hold:

  1. (a)

‖x - y‖² = ‖x‖² - ‖y‖² - 2⟨x - y, y⟩ for all x, y ∈ H;

  2. (b)

‖λx + μy‖² = λ‖x‖² + μ‖y‖² - λμ‖x - y‖² for all x, y ∈ H and λ, μ ∈ [0, 1] with λ + μ = 1;

  3. (c)

If {x_n} is a sequence in H such that x_n ⇀ x, it follows that

lim sup_{n→∞} ‖x_n - y‖² = lim sup_{n→∞} ‖x_n - x‖² + ‖x - y‖², ∀y ∈ H.

2.1 Nonexpansive type mappings

Let C be a nonempty closed convex subset of H. The metric (or nearest point) projection from H onto C is the mapping P_C : H → C which assigns to each point x ∈ H the unique point P_C x ∈ C satisfying

‖x - P_C x‖ = inf_{y∈C} ‖x - y‖ =: d(x, C).

The following properties of projections are useful and pertinent to our purpose.

Proposition 2.1 Given any x ∈ H and z ∈ C. Then we have the following:

  1. (i)

z = P_C x ⟺ ⟨x - z, y - z⟩ ≤ 0, ∀y ∈ C;

  2. (ii)

z = P_C x ⟺ ‖x - z‖² ≤ ‖x - y‖² - ‖y - z‖², ∀y ∈ C;

  3. (iii)

⟨P_C x - P_C y, x - y⟩ ≥ ‖P_C x - P_C y‖², ∀y ∈ H.

Definition 2.1 A mapping T : H → H is said to be

  1. (a)

    nonexpansive if

‖Tx - Ty‖ ≤ ‖x - y‖, ∀x, y ∈ H;
  2. (b)

firmly nonexpansive if 2T - I is nonexpansive, or equivalently, if T is 1-inverse-strongly monotone (1-ism),

⟨x - y, Tx - Ty⟩ ≥ ‖Tx - Ty‖², ∀x, y ∈ H.

Alternatively, T is firmly nonexpansive if and only if T can be expressed as T = (1/2)(I + S), where S : H → H is nonexpansive and I is the identity mapping on H. Note that projections are firmly nonexpansive.
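The firm nonexpansiveness of projections is easy to verify numerically. The following sketch (illustrative, not from the paper) checks the 1-ism inequality ⟨P_C x - P_C y, x - y⟩ ≥ ‖P_C x - P_C y‖² for the projection onto the closed unit ball in R³ on random pairs:

```python
import numpy as np

def proj_ball(x):
    # Euclidean projection onto the closed unit ball of R^3
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(1)
ok = True
for _ in range(200):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    # firm nonexpansiveness: <Px - Py, x - y> >= ||Px - Py||^2
    ok = ok and bool(np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12)
```

Since firm nonexpansiveness implies nonexpansiveness, the same samples also satisfy ‖P_C x - P_C y‖ ≤ ‖x - y‖.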

Definition 2.2 A mapping T : H → H is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is, T = (1 - α)I + αS, where α ∈ (0, 1) and S : H → H is nonexpansive.

Proposition 2.2 ([33])

Let T : H → H be a given mapping. Then:

  1. (i)

T is nonexpansive if and only if the complement I - T is (1/2)-ism.

  2. (ii)

If T is ν-ism, then for γ > 0, γT is (ν/γ)-ism.

  3. (iii)

T is averaged if and only if the complement I - T is ν-ism for some ν > 1/2. Indeed, for α ∈ (0, 1), T is α-averaged if and only if I - T is (1/(2α))-ism.

Proposition 2.3 ([33, 34])

Let S, T, V : H → H be given operators.

  1. (i)

If T = (1 - α)S + αV for some α ∈ (0, 1) and if S is averaged and V is nonexpansive, then T is averaged.

  2. (ii)

T is firmly nonexpansive if and only if the complement I - T is firmly nonexpansive.

  3. (iii)

If T = (1 - α)S + αV for some α ∈ (0, 1) and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.

  4. (iv)

The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1 ∘ ⋯ ∘ T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1, α_2 ∈ (0, 1), then the composite T_1 ∘ T_2 is α-averaged, where α = α_1 + α_2 - α_1α_2.

  5. (v)

If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then

⋂_{i=1}^N Fix(T_i) = Fix(T_1 ∘ ⋯ ∘ T_N).

We need some facts and tools which are listed as lemmas below.

Lemma 2.2 ([[35], Demiclosedness principle])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let S be a nonexpansive self-mapping on C. Then IS is demiclosed at 0.

Let {T_n}_{n=1}^∞ be an infinite family of nonexpansive self-mappings on C and {λ_n}_{n=1}^∞ be a sequence in [0, 1]. For any n ≥ 1, define a mapping W_n on C as follows:

U_{n,n+1} = I,
U_{n,n} = λ_n T_n U_{n,n+1} + (1 - λ_n)I,
U_{n,n-1} = λ_{n-1} T_{n-1} U_{n,n} + (1 - λ_{n-1})I,
⋯,
U_{n,k} = λ_k T_k U_{n,k+1} + (1 - λ_k)I,
U_{n,k-1} = λ_{k-1} T_{k-1} U_{n,k} + (1 - λ_{k-1})I,
⋯,
U_{n,2} = λ_2 T_2 U_{n,3} + (1 - λ_2)I,
W_n = U_{n,1} = λ_1 T_1 U_{n,2} + (1 - λ_1)I.
(2.1)

Such a mapping W_n is called the W-mapping generated by T_n, T_{n-1}, …, T_1 and λ_n, λ_{n-1}, …, λ_1.

Lemma 2.3 ([36])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}_{n=1}^∞ be a sequence of nonexpansive self-mappings on C such that ⋂_{n=1}^∞ Fix(T_n) ≠ ∅, and let {λ_n}_{n=1}^∞ be a sequence in (0, b] for some b ∈ (0, 1). Then, for every x ∈ C and k ≥ 1, the limit lim_{n→∞} U_{n,k}x exists, where U_{n,k} is defined as in (2.1).

Remark 2.1 ([[37], Remark 3.1])

It can be seen from Lemma 2.3 that if D is a nonempty bounded subset of C, then for ϵ > 0 there exists n_0 ≥ k such that sup_{x∈D} ‖U_{n,k}x - U_k x‖ ≤ ϵ for all n > n_0, where U_k x := lim_{n→∞} U_{n,k}x.

Remark 2.2 ([[37], Remark 3.2])

Utilizing Lemma 2.3, we define a mapping W : C → C as follows:

Wx = lim_{n→∞} W_n x = lim_{n→∞} U_{n,1}x, ∀x ∈ C.

Such a W is called the W-mapping generated by T_1, T_2, … and λ_1, λ_2, … . Since W_n is nonexpansive, W : C → C is also nonexpansive. For a bounded sequence {x_n} in C, we put D = {x_n : n ≥ 1}. Hence, it is clear from Remark 2.1 that for an arbitrary ϵ > 0 there exists N_0 ≥ 1 such that for all n > N_0,

‖W_n x_n - W x_n‖ = ‖U_{n,1}x_n - U_1 x_n‖ ≤ sup_{x∈D} ‖U_{n,1}x - U_1 x‖ ≤ ϵ.

This implies that lim_{n→∞} ‖W_n x_n - W x_n‖ = 0.

Lemma 2.4 ([36])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}_{n=1}^∞ be a sequence of nonexpansive self-mappings on C such that ⋂_{n=1}^∞ Fix(T_n) ≠ ∅, and let {λ_n}_{n=1}^∞ be a sequence in (0, b] for some b ∈ (0, 1). Then Fix(W) = ⋂_{n=1}^∞ Fix(T_n).
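The recursion (2.1) translates directly into code. The sketch below (an illustration under assumed toy data: plane rotations about the origin as the nonexpansive family, with common fixed point 0) builds W_n for a finite family and checks that the common fixed point survives and that W_n is nonexpansive:

```python
import numpy as np

def W_mapping(T_list, lam_list):
    """Build W_n = U_{n,1} from (2.1):
    U_{n,n+1} = I and U_{n,k}x = lam_k*T_k(U_{n,k+1}x) + (1 - lam_k)*x."""
    def U(k, x):
        if k == len(T_list) + 1:     # U_{n,n+1} = I
            return x
        lam, T = lam_list[k - 1], T_list[k - 1]
        return lam * T(U(k + 1, x)) + (1 - lam) * x
    return lambda x: U(1, x)

def rot(theta):
    # rotation about the origin: nonexpansive with fixed point 0
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return lambda x: R @ x

Ts = [rot(0.3), rot(0.7), rot(1.1)]    # common fixed point: the origin
lams = [0.5, 0.5, 0.5]                 # lambda_k in (0, b], b in (0, 1)
W = W_mapping(Ts, lams)
```

The (1 - λ_k)I terms make each U_{n,k} an averaged mapping, which is what ultimately allows the fixed point identity Fix(W) = ⋂ Fix(T_n) of Lemma 2.4.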

It is clear that, in a real Hilbert space H, T : C → C is ξ-strictly pseudocontractive if and only if the following inequality holds:

⟨Tx - Ty, x - y⟩ ≤ ‖x - y‖² - ((1 - ξ)/2)‖(I - T)x - (I - T)y‖², ∀x, y ∈ C.

This immediately implies that if T is a ξ-strictly pseudocontractive mapping, then I - T is ((1 - ξ)/2)-inverse-strongly monotone; for further details, we refer to [38] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings, and that the class of pseudocontractions strictly includes the class of strict pseudocontractions. In addition, for extensions of strict pseudocontractions, we refer the reader to [39] and the references therein.

Proposition 2.4 ([[38], Proposition 2.1])

Let C be a nonempty closed convex subset of a real Hilbert space H and T:CC be a mapping.

  1. (i)

    If T is a ξ-strictly pseudocontractive mapping, then T satisfies the Lipschitzian condition

‖Tx - Ty‖ ≤ ((1 + ξ)/(1 - ξ))‖x - y‖, ∀x, y ∈ C.
  2. (ii)

If T is a ξ-strictly pseudocontractive mapping, then the mapping I - T is demiclosed at 0; that is, if {x_n} is a sequence in C such that x_n ⇀ x̃ and (I - T)x_n → 0, then (I - T)x̃ = 0.

  3. (iii)

If T is a ξ-(quasi-)strict pseudocontraction, then the fixed point set Fix(T) of T is closed and convex, so that the projection P_{Fix(T)} is well defined.

Proposition 2.5 ([40])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let T : C → C be a ξ-strictly pseudocontractive mapping. Let γ and δ be two nonnegative real numbers such that (γ + δ)ξ ≤ γ. Then

‖γ(x - y) + δ(Tx - Ty)‖ ≤ (γ + δ)‖x - y‖, ∀x, y ∈ C.

2.2 Mixed equilibrium problems

We list some elementary conclusions for the MEP.

It was assumed in [9] that Θ : C × C → R is a bifunction satisfying conditions (A1)-(A4) and φ : C → R ∪ {+∞} is a proper lower semicontinuous and convex function with restriction (B1) or (B2), where

(A1) Θ(x, x) = 0 for all x ∈ C;

(A2) Θ is monotone, i.e., Θ(x, y) + Θ(y, x) ≤ 0 for any x, y ∈ C;

(A3) Θ is upper-hemicontinuous, i.e., for each x, y, z ∈ C,

lim sup_{t→0⁺} Θ(tz + (1 - t)x, y) ≤ Θ(x, y);

(A4) Θ(x, ·) is convex and lower semicontinuous for each x ∈ C;

(B1) for each x ∈ H and r > 0, there exist a bounded subset D_x ⊆ C and y_x ∈ C such that, for any z ∈ C ∖ D_x,

Θ(z, y_x) + φ(y_x) - φ(z) + (1/r)⟨y_x - z, z - x⟩ < 0;

(B2) C is a bounded set.

Proposition 2.6 ([21])

Assume that Θ : C × C → R satisfies (A1)-(A4) and let φ : C → R ∪ {+∞} be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For r > 0 and x ∈ H, define a mapping T_r^{(Θ,φ)} : H → C as follows:

T_r^{(Θ,φ)}(x) = {z ∈ C : Θ(z, y) + φ(y) - φ(z) + (1/r)⟨y - z, z - x⟩ ≥ 0, ∀y ∈ C}

for all xH. Then the following hold:

  1. (i)

for each x ∈ H, T_r^{(Θ,φ)}(x) is nonempty and single-valued;

  2. (ii)

T_r^{(Θ,φ)} is firmly nonexpansive, that is, for any x, y ∈ H,

‖T_r^{(Θ,φ)}x - T_r^{(Θ,φ)}y‖² ≤ ⟨T_r^{(Θ,φ)}x - T_r^{(Θ,φ)}y, x - y⟩;
  3. (iii)

Fix(T_r^{(Θ,φ)}) = MEP(Θ, φ);

  4. (iv)

    MEP(Θ,φ) is closed and convex;

  5. (v)

‖T_s^{(Θ,φ)}x - T_t^{(Θ,φ)}x‖² ≤ ((s - t)/s)⟨T_s^{(Θ,φ)}x - T_t^{(Θ,φ)}x, T_s^{(Θ,φ)}x - x⟩ for all s, t > 0 and x ∈ H.

2.3 Monotone operators

Definition 2.3 Let T be a nonlinear operator with domain D(T) ⊆ H and range R(T) ⊆ H. Then T is said to be

  1. (i)

    monotone if

⟨Tx - Ty, x - y⟩ ≥ 0, ∀x, y ∈ D(T);
  2. (ii)

β-strongly monotone if there exists a constant β > 0 such that

⟨Tx - Ty, x - y⟩ ≥ β‖x - y‖², ∀x, y ∈ D(T);
  3. (iii)

ν-inverse-strongly monotone if there exists a constant ν > 0 such that

⟨Tx - Ty, x - y⟩ ≥ ν‖Tx - Ty‖², ∀x, y ∈ D(T).

It can easily be seen that if T is nonexpansive, then I - T is monotone. It is also easy to see that the projection P_C is 1-ism. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see [41, 42]. On the other hand, it is obvious that if A is ζ-inverse-strongly monotone, then A is monotone and (1/ζ)-Lipschitz continuous. Moreover, we also have, for all u, v ∈ C and λ > 0,

‖(I - λA)u - (I - λA)v‖² = ‖(u - v) - λ(Au - Av)‖²
= ‖u - v‖² - 2λ⟨Au - Av, u - v⟩ + λ²‖Au - Av‖²
≤ ‖u - v‖² + λ(λ - 2ζ)‖Au - Av‖².
(2.2)

So, if λ ≤ 2ζ, then I - λA is a nonexpansive mapping from C to H.
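Inequality (2.2) can be sanity-checked numerically. In the sketch below (illustrative, not from the paper), A(x) = Qx with Q symmetric positive semidefinite is ζ-ism with ζ = 1/λ_max(Q) = 1, and I - λA is verified to be nonexpansive for several step sizes λ ≤ 2ζ:

```python
import numpy as np

Q = np.array([[1.0, 0.0], [0.0, 0.5]])   # symmetric PSD, largest eigenvalue 1
A = lambda x: Q @ x                      # zeta-ism with zeta = 1/lambda_max(Q) = 1

rng = np.random.default_rng(2)
ok = True
for lam in (0.5, 1.0, 2.0):              # any lam <= 2*zeta = 2
    for _ in range(100):
        x, y = rng.normal(size=2), rng.normal(size=2)
        # nonexpansiveness of I - lam*A, as guaranteed by (2.2)
        lhs = np.linalg.norm((x - lam * A(x)) - (y - lam * A(y)))
        ok = ok and bool(lhs <= np.linalg.norm(x - y) + 1e-12)
```

Taking λ > 2ζ breaks the middle inequality in (2.2), and the check above would start to fail for directions along the largest eigenvalue of Q.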

Let C be a nonempty closed convex subset of a real Hilbert space H. We introduce some notation. Let λ be a number in (0, 1] and let μ > 0. Associated with a nonexpansive mapping T : C → C, we define the mapping T^λ : C → H by

T^λ x := Tx - λμF(Tx), ∀x ∈ C,

where F : C → H is an operator such that, for some positive constants κ, η > 0, F is κ-Lipschitzian and η-strongly monotone on C; that is, F satisfies the conditions

‖Fx - Fy‖ ≤ κ‖x - y‖ and ⟨Fx - Fy, x - y⟩ ≥ η‖x - y‖², ∀x, y ∈ C.

Lemma 2.5 (see [[43], Lemma 3.1])

T^λ is a contraction provided 0 < μ < 2η/κ²; that is,

‖T^λ x - T^λ y‖ ≤ (1 - λτ)‖x - y‖, ∀x, y ∈ C,

where τ = 1 - √(1 - μ(2η - μκ²)) ∈ (0, 1].

Remark 2.3 (i) Since F is κ-Lipschitzian and η-strongly monotone on C, we get 0 < η ≤ κ. Hence, whenever 0 < μ < 2η/κ², we have 0 < 1 - √(1 - 2μη + μ²κ²) ≤ 1. So, τ = 1 - √(1 - μ(2η - μκ²)) ∈ (0, 1].

(ii) In Lemma 2.5, put F = (1/2)I and μ = 2. Then we know that κ = η = 1/2, 0 < μ = 2 < 2η/κ² = 4 and

τ = 1 - √(1 - μ(2η - μκ²)) = 1 - √(1 - 2(2 × (1/2) - 2 × (1/2)²)) = 1.

Lemma 2.6 Let A : C → H be a monotone mapping. The characterization of the projection (see Proposition 2.1(i)) implies

u ∈ VI(C, A) ⟺ u = P_C(u - λAu), ∀λ > 0.

Finally, recall that a set-valued mapping T̃ : D(T̃) ⊆ H → 2^H is called monotone if for all x, y ∈ D(T̃), f ∈ T̃x and g ∈ T̃y imply

⟨f - g, x - y⟩ ≥ 0.

A set-valued mapping T̃ is called maximal monotone if T̃ is monotone and (I + λT̃)D(T̃) = H for each λ > 0. We denote by G(T̃) the graph of T̃. It is well known that a monotone mapping T̃ is maximal if and only if, for (x, f) ∈ H × H, ⟨f - g, x - y⟩ ≥ 0 for every (y, g) ∈ G(T̃) implies f ∈ T̃x. Next we provide an example to illustrate the concept of a maximal monotone mapping.

Let A : C → H be a monotone, k-Lipschitz-continuous mapping and let N_C(v) be the normal cone to C at v ∈ C, i.e.,

N_C(v) = {u ∈ H : ⟨v - x, u⟩ ≥ 0, ∀x ∈ C}.

Define

T̃v = Av + N_C(v) if v ∈ C, and T̃v = ∅ if v ∉ C.

Then T ˜ is maximal monotone (see [25]) such that

0 ∈ T̃v ⟺ v ∈ VI(C, A).
(2.3)

Let R : D(R) ⊆ H → 2^H be a maximal monotone mapping. Let λ, μ > 0 be two positive numbers.

Lemma 2.7 (see [44])

We have the resolvent identity

J_{R,λ}x = J_{R,μ}((μ/λ)x + (1 - μ/λ)J_{R,λ}x), ∀x ∈ H.

In terms of Huang [26] (see also [27]), we have the following property for the resolvent operator J_{R,λ} : H → D(R)¯.

Lemma 2.8 J_{R,λ} is single-valued and firmly nonexpansive, i.e.,

⟨J_{R,λ}x - J_{R,λ}y, x - y⟩ ≥ ‖J_{R,λ}x - J_{R,λ}y‖², ∀x, y ∈ H.

Consequently, J_{R,λ} is nonexpansive and monotone.

Lemma 2.9 ([11])

Let R be a maximal monotone mapping with D(R) = C. Then for any given λ > 0, u ∈ C is a solution of problem (1.3) if and only if u ∈ C satisfies

u = J_{R,λ}(u - λBu).
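Lemma 2.9 suggests a simple fixed-point (forward-backward) iteration u ← J_{R,λ}(u - λBu). The sketch below is an illustration under assumed toy data, not the paper's algorithm: B(u) = u - d (which is 1-inverse-strongly monotone) and R = w·∂|·|, whose resolvent is soft-thresholding; the inclusion 0 ∈ Bu + Ru then has the closed-form solution u* = sign(d)·max(|d| - w, 0).

```python
def soft(x, t):
    # resolvent of t * (subdifferential of |.|): soft-thresholding
    return (x - t) if x > t else (x + t) if x < -t else 0.0

def forward_backward(B, lam, w, u0, n_iter):
    """Iterate u <- J_{R,lam}(u - lam*B(u)) with R = w * subdiff(|.|);
    by Lemma 2.9, a fixed point solves 0 in B(u) + R(u)."""
    u = u0
    for _ in range(n_iter):
        u = soft(u - lam * B(u), lam * w)
    return u

d, w = 2.0, 0.5
B = lambda u: u - d                      # 1-inverse-strongly monotone
u_star = forward_backward(B, lam=0.5, w=w, u0=0.0, n_iter=100)
# closed-form solution of 0 in (u - d) + w*subdiff(|u|): u* = 1.5 here
```

With λ = 0.5 ≤ 2·(inverse-strong-monotonicity constant of B), the composed map is a contraction here and the iterates converge to u* = 1.5.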

Lemma 2.10 ([27])

Let R be a maximal monotone mapping with D(R) = C and let B : C → H be a strongly monotone, continuous and single-valued mapping. Then for each z ∈ H, the inclusion z ∈ (B + λR)x has a unique solution x_λ for λ > 0.

Lemma 2.11 ([11])

Let R be a maximal monotone mapping with D(R) = C and B : C → H be a monotone, continuous and single-valued mapping. Then (I + λ(R + B))C = H for each λ > 0. In this case, R + B is maximal monotone.

2.4 Technical lemmas

The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms.

Lemma 2.12 ([45])

Let { a n } be a sequence of nonnegative real numbers satisfying the property:

a_{n+1} ≤ (1 - s_n)a_n + s_n b_n + t_n, ∀n ≥ 1,

where {s_n} ⊆ (0, 1] and {b_n} are such that

  1. (i)

∑_{n=1}^∞ s_n = ∞;

  2. (ii)

either lim sup_{n→∞} b_n ≤ 0 or ∑_{n=1}^∞ |s_n b_n| < ∞;

  3. (iii)

∑_{n=1}^∞ t_n < ∞, where t_n ≥ 0 for all n ≥ 1.

Then lim_{n→∞} a_n = 0.
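A short numerical experiment (illustrative, not part of the proof) makes the mechanism of Lemma 2.12 concrete: even with the worst-case recursion taken with equality, the divergent series ∑ s_n drags a_n to 0 once b_n → 0 and ∑ t_n < ∞.

```python
# Numerical illustration of Lemma 2.12, taking the recursion with equality:
# a_{n+1} = (1 - s_n)*a_n + s_n*b_n + t_n.
a = 1.0
for n in range(1, 200001):
    s = 1.0 / n        # (i): sum s_n diverges
    b = 1.0 / n        # (ii): limsup b_n <= 0 (here b_n -> 0)
    t = 1.0 / n ** 2   # (iii): sum t_n converges
    a = (1 - s) * a + s * b + t
```

With these choices one can show a_n = O(log n / n), so the final value is already well below 10⁻³.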

Lemma 2.13 ([46])

Let {α_n} and {β_n} be a sequence of nonnegative real numbers and a sequence of real numbers, respectively, such that lim sup_{n→∞} α_n < ∞ and lim sup_{n→∞} β_n ≤ 0. Then lim sup_{n→∞} α_n β_n ≤ 0.

3 Main results

In this section, we will introduce and analyze a multistep Mann-type extragradient iterative algorithm for finding a solution of SHVI (1.10) (over the fixed point set of an infinite family of nonexpansive mappings and a strict pseudocontraction) with constraints of several problems: finitely many GMEPs, finitely many variational inclusions and VIP (1.1) in a real Hilbert space. This algorithm is based on Korpelevich’s extragradient method, viscosity approximation method, hybrid steepest-descent method, Mann’s iteration method and projection method. We prove the strong convergence of the proposed algorithm to a unique solution of SHVI (1.10) under suitable conditions.

We are now in a position to state and prove the main result in this paper.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let M, N be two positive integers. Let Θ_k be a bifunction from C × C to R satisfying (A1)-(A4) and φ_k : C → R ∪ {+∞} be a proper lower semicontinuous and convex function with restriction (B1) or (B2), where k ∈ {1, 2, …, M}. Let R_i : C → 2^H be a maximal monotone mapping and let A_k : H → H and B_i : C → H be μ_k-inverse-strongly monotone and η_i-inverse-strongly monotone, respectively, where k ∈ {1, 2, …, M}, i ∈ {1, 2, …, N}. Let {T_n}_{n=1}^∞ be a sequence of nonexpansive self-mappings on C and {λ_n}_{n=1}^∞ be a sequence in (0, b] for some b ∈ (0, 1). Let T : C → C be a ξ-strictly pseudocontractive mapping, S : C → C be a nonexpansive mapping and V : C → H be a ρ-contraction with coefficient ρ ∈ [0, 1). Let A : C → H be a (1/L)-inverse-strongly monotone mapping, and F : C → H be κ-Lipschitzian and η-strongly monotone with positive constants κ, η > 0 such that 0 < μ < 2η/κ² and 0 < γ ≤ τ, where τ = 1 - √(1 - μ(2η - μκ²)). Assume that SHVI (1.10) over Ω := ⋂_{n=1}^∞ Fix(T_n) ∩ ⋂_{k=1}^M GMEP(Θ_k, φ_k, A_k) ∩ ⋂_{i=1}^N I(B_i, R_i) ∩ VI(C, A) ∩ Fix(T) has a solution. Let {α_n} ⊆ [0, ∞), {ν_n} ⊆ (0, 1/L), {ϵ_n}, {δ_n}, {β_n}, {γ_n}, {σ_n} ⊆ (0, 1), {λ_{i,n}} ⊆ [a_i, b_i] ⊆ (0, 2η_i) and {r_{k,n}} ⊆ [c_k, d_k] ⊆ (0, 2μ_k), where i ∈ {1, 2, …, N} and k ∈ {1, 2, …, M}. For arbitrarily given x_1 ∈ C, let {x_n} be a sequence generated by

u_n = T_{r_{M,n}}^{(Θ_M,φ_M)}(I - r_{M,n}A_M) T_{r_{M-1,n}}^{(Θ_{M-1},φ_{M-1})}(I - r_{M-1,n}A_{M-1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I - r_{1,n}A_1)x_n,
v_n = J_{R_N,λ_{N,n}}(I - λ_{N,n}B_N) J_{R_{N-1},λ_{N-1,n}}(I - λ_{N-1,n}B_{N-1}) ⋯ J_{R_1,λ_{1,n}}(I - λ_{1,n}B_1)u_n,
y_n = P_C(v_n - ν_n A_n v_n),
z_n = β_n W_n x_n + γ_n P_C(v_n - ν_n A_n y_n) + σ_n T P_C(v_n - ν_n A_n y_n),
x_{n+1} = P_C[ϵ_n γ(δ_n V x_n + (1 - δ_n)S x_n) + (I - ϵ_n μF)z_n], ∀n ≥ 1,
(3.1)

where A_n = α_n I + A for all n ≥ 1. Suppose that:

(C1) ∑_{n=1}^∞ α_n < ∞;

(C2) 0 < lim inf_{n→∞} ν_n ≤ lim sup_{n→∞} ν_n < 1/L;

(C3) β_n + γ_n + σ_n = 1 and (γ_n + σ_n)ξ ≤ γ_n for all n ≥ 1;

(C4) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1 and lim inf_{n→∞} σ_n > 0;

(C5) 0 < lim inf_{n→∞} δ_n ≤ lim sup_{n→∞} δ_n < 1;

(C6) lim_{n→∞} ϵ_n = 0 and ∑_{n=1}^∞ ϵ_n = ∞.

If {Sx_n} is bounded, then {x_n} converges strongly to the unique solution of SHVI (1.10), provided lim_{n→∞} ‖x_n - x_{n+1}‖ = 0.

Proof For n ≥ 1, put

Δ_n^k = T_{r_{k,n}}^{(Θ_k,φ_k)}(I - r_{k,n}A_k) T_{r_{k-1,n}}^{(Θ_{k-1},φ_{k-1})}(I - r_{k-1,n}A_{k-1}) ⋯ T_{r_{1,n}}^{(Θ_1,φ_1)}(I - r_{1,n}A_1)x_n

for all k ∈ {1, 2, …, M},

Λ_n^i = J_{R_i,λ_{i,n}}(I - λ_{i,n}B_i) J_{R_{i-1},λ_{i-1,n}}(I - λ_{i-1,n}B_{i-1}) ⋯ J_{R_1,λ_{1,n}}(I - λ_{1,n}B_1)

for all i ∈ {1, 2, …, N}, Δ_n^0 = I and Λ_n^0 = I. Then we have u_n = Δ_n^M x_n and v_n = Λ_n^N u_n. In addition, in terms of conditions (C1), (C2) and (C4), without loss of generality we may assume that {β_n} ⊆ [c, d] for some c, d ∈ (0, 1), {ν_n} ⊆ [â, b̂] for some â, b̂ ∈ (0, 1/L), and ν_n(α_n + L) ≤ 1 for all n ≥ 1.

One can readily see that P_C(I - ν_n A_n) is nonexpansive for all n ≥ 1; see [7] (also [47]).

Next, we divide the remainder of the proof into several steps.

Step 1. { x n } is bounded.

Take a fixed p ∈ Ω arbitrarily. Utilizing (2.2) and Proposition 2.6(ii) we have

‖u_n - p‖ = ‖T_{r_{M,n}}^{(Θ_M,φ_M)}(I - r_{M,n}A_M)Δ_n^{M-1}x_n - T_{r_{M,n}}^{(Θ_M,φ_M)}(I - r_{M,n}A_M)Δ_n^{M-1}p‖
≤ ‖(I - r_{M,n}A_M)Δ_n^{M-1}x_n - (I - r_{M,n}A_M)Δ_n^{M-1}p‖
≤ ‖Δ_n^{M-1}x_n - Δ_n^{M-1}p‖
≤ ⋯ ≤ ‖Δ_n^0 x_n - Δ_n^0 p‖ = ‖x_n - p‖.
(3.2)

Utilizing (2.2) and Lemma 2.8 we have

‖v_n - p‖ = ‖J_{R_N,λ_{N,n}}(I - λ_{N,n}B_N)Λ_n^{N-1}u_n - J_{R_N,λ_{N,n}}(I - λ_{N,n}B_N)Λ_n^{N-1}p‖
≤ ‖(I - λ_{N,n}B_N)Λ_n^{N-1}u_n - (I - λ_{N,n}B_N)Λ_n^{N-1}p‖
≤ ‖Λ_n^{N-1}u_n - Λ_n^{N-1}p‖
≤ ⋯ ≤ ‖Λ_n^0 u_n - Λ_n^0 p‖ = ‖u_n - p‖,
(3.3)

which, together with the last inequality, implies that

‖v_n - p‖ ≤ ‖x_n - p‖.
(3.4)

Note that W_n p = p for all n ≥ 1 and P_C(I - νA)p = p for ν ∈ (0, 2/L). Hence, from (3.1) and (3.4), it follows that

‖y_n - p‖ = ‖P_C(I - ν_n A_n)v_n - P_C(I - ν_n A)p‖
≤ ‖P_C(I - ν_n A_n)v_n - P_C(I - ν_n A_n)p‖ + ‖P_C(I - ν_n A_n)p - P_C(I - ν_n A)p‖
≤ ‖v_n - p‖ + ‖(I - ν_n A_n)p - (I - ν_n A)p‖
≤ ‖x_n - p‖ + ν_n α_n‖p‖.
(3.5)

Put t_n = P_C(v_n - ν_n A_n y_n) for each n ≥ 1. Then, by Proposition 2.1(ii), we have

‖t_n - p‖² ≤ ‖v_n - ν_n A_n y_n - p‖² - ‖v_n - ν_n A_n y_n - t_n‖²
= ‖v_n - p‖² - ‖v_n - t_n‖² + 2ν_n⟨A_n y_n, p - t_n⟩
= ‖v_n - p‖² - ‖v_n - t_n‖² + 2ν_n(⟨A_n y_n - A_n p, p - y_n⟩ + ⟨A_n p, p - y_n⟩ + ⟨A_n y_n, y_n - t_n⟩)
≤ ‖v_n - p‖² - ‖v_n - t_n‖² + 2ν_n(⟨A_n p, p - y_n⟩ + ⟨A_n y_n, y_n - t_n⟩)
= ‖v_n - p‖² - ‖v_n - t_n‖² + 2ν_n[⟨(α_n I + A)p, p - y_n⟩ + ⟨A_n y_n, y_n - t_n⟩]
≤ ‖v_n - p‖² - ‖v_n - t_n‖² + 2ν_n[α_n⟨p, p - y_n⟩ + ⟨A_n y_n, y_n - t_n⟩]
= ‖v_n - p‖² - ‖v_n - y_n‖² - 2⟨v_n - y_n, y_n - t_n⟩ - ‖y_n - t_n‖² + 2ν_n[α_n⟨p, p - y_n⟩ + ⟨A_n y_n, y_n - t_n⟩]
= ‖v_n - p‖² - ‖v_n - y_n‖² - ‖y_n - t_n‖² + 2⟨v_n - ν_n A_n y_n - y_n, t_n - y_n⟩ + 2ν_n α_n⟨p, p - y_n⟩.
(3.6)

Further, by Proposition 2.1(i), we have

⟨v_n - ν_n A_n y_n - y_n, t_n - y_n⟩ = ⟨v_n - ν_n A_n v_n - y_n, t_n - y_n⟩ + ⟨ν_n A_n v_n - ν_n A_n y_n, t_n - y_n⟩
≤ ⟨ν_n A_n v_n - ν_n A_n y_n, t_n - y_n⟩
≤ ν_n‖A_n v_n - A_n y_n‖ ‖t_n - y_n‖
≤ ν_n(α_n + L)‖v_n - y_n‖ ‖t_n - y_n‖.
(3.7)

So, we obtain from (3.6)

‖t_n - p‖² ≤ ‖v_n - p‖² - ‖v_n - y_n‖² - ‖y_n - t_n‖² + 2⟨v_n - ν_n A_n y_n - y_n, t_n - y_n⟩ + 2ν_n α_n⟨p, p - y_n⟩
≤ ‖v_n - p‖² - ‖v_n - y_n‖² - ‖y_n - t_n‖² + 2ν_n(α_n + L)‖v_n - y_n‖ ‖t_n - y_n‖ + 2ν_n α_n‖p‖ ‖p - y_n‖
≤ ‖v_n - p‖² - ‖v_n - y_n‖² - ‖y_n - t_n‖² + ν_n²(α_n + L)²‖v_n - y_n‖² + ‖t_n - y_n‖² + 2ν_n α_n‖p‖ ‖p - y_n‖
= ‖v_n - p‖² + 2ν_n α_n‖p‖ ‖p - y_n‖ + (ν_n²(α_n + L)² - 1)‖v_n - y_n‖²
≤ ‖x_n - p‖² + 2ν_n α_n‖p‖ ‖p - y_n‖ + (ν_n²(α_n + L)² - 1)‖v_n - y_n‖²
≤ ‖x_n - p‖² + 2ν_n α_n‖p‖ ‖p - y_n‖.
(3.8)

Since (γ_n + σ_n)ξ ≤ γ_n for all n ≥ 1, utilizing Proposition 2.5 and Lemma 2.1(b), from (3.5) and (3.8) we conclude that

‖z_n - p‖² = ‖β_n W_n x_n + γ_n t_n + σ_n T t_n - p‖²
= ‖β_n(W_n x_n - p) + (γ_n + σ_n)·(1/(γ_n + σ_n))[γ_n(t_n - p) + σ_n(T t_n - p)]‖²
= β_n‖W_n x_n - p‖² + (γ_n + σ_n)‖(1/(γ_n + σ_n))[γ_n(t_n - p) + σ_n(T t_n - p)]‖² - β_n(γ_n + σ_n)‖(1/(γ_n + σ_n))[γ_n(t_n - W_n x_n) + σ_n(T t_n - W_n x_n)]‖²
≤ β_n‖x_n - p‖² + (1 - β_n)‖t_n - p‖² - (β_n/(1 - β_n))‖z_n - W_n x_n‖²
≤ β_n‖x_n - p‖² + (1 - β_n)[‖x_n - p‖² + 2ν_n α_n‖p‖ ‖p - y_n‖ + (ν_n²(α_n + L)² - 1)‖v_n - y_n‖²] - (β_n/(1 - β_n))‖z_n - W_n x_n‖²
≤ ‖x_n - p‖² + 2ν_n α_n‖p‖ ‖p - y_n‖ + (1 - β_n)(ν_n²(α_n + L)² - 1)‖v_n - y_n‖² - (β_n/(1 - β_n))‖z_n - W_n x_n‖²
≤ ‖x_n - p‖² + 2ν_n α_n‖p‖(‖x_n - p‖ + ν_n α_n‖p‖) + (1 - β_n)(ν_n²(α_n + L)² - 1)‖v_n - y_n‖² - (β_n/(1 - β_n))‖z_n - W_n x_n‖²
≤ ‖x_n - p‖² + 2‖x_n - p‖(2ν_n α_n‖p‖) + (2ν_n α_n‖p‖)² + (1 - β_n)(ν_n²(α_n + L)² - 1)‖v_n - y_n‖² - (β_n/(1 - β_n))‖z_n - W_n x_n‖²
= (‖x_n - p‖ + 2ν_n α_n‖p‖)² + (1 - β_n)(ν_n²(α_n + L)² - 1)‖v_n - y_n‖² - (β_n/(1 - β_n))‖z_n - W_n x_n‖²
≤ (‖x_n - p‖ + 2ν_n α_n‖p‖)².
(3.9)

Noticing the boundedness of {Sx_n}, we get sup_{n≥1} ‖γSx_n - μFp‖ ≤ M̃ for some M̃ > 0. Moreover, utilizing Lemma 2.5, we have from (3.1)

‖x_{n+1} - p‖ = ‖P_C[ϵ_n γ(δ_n V x_n + (1 - δ_n)S x_n) + (I - ϵ_n μF)z_n] - P_C p‖
≤ ‖ϵ_n γ(δ_n V x_n + (1 - δ_n)S x_n) + (I - ϵ_n μF)z_n - p‖
= ‖ϵ_n[γ(δ_n V x_n + (1 - δ_n)S x_n) - μFp] + (I - ϵ_n μF)z_n - (I - ϵ_n μF)p‖
≤ ϵ_n‖γ(δ_n V x_n + (1 - δ_n)S x_n) - μFp‖ + ‖(I - ϵ_n μF)z_n - (I - ϵ_n μF)p‖
= ϵ_n‖δ_n(γV x_n - μFp) + (1 - δ_n)(γS x_n - μFp)‖ + ‖(I - ϵ_n μF)z_n - (I - ϵ_n μF)p‖
≤ ϵ_n[δ_n‖γV x_n - μFp‖ + (1 - δ_n)‖γS x_n - μFp‖] + (1 - ϵ_n τ)‖z_n - p‖
≤ ϵ_n[δ_n(γ‖V x_n - Vp‖ + ‖γVp - μFp‖) + (1 - δ_n)M̃] + (1 - ϵ_n τ)‖z_n - p‖
≤ ϵ_n[δ_n γρ‖x_n - p‖ + δ_n‖γVp - μFp‖ + (1 - δ_n)M̃] + (1 - ϵ_n τ)[‖x_n - p‖ + 2ν_n α_n‖p‖]
≤ ϵ_n[δ_n γρ‖x_n - p‖ + max{M̃, ‖γVp - μFp‖}] + (1 - ϵ_n τ)[‖x_n - p‖ + 2ν_n α_n‖p‖]
≤ ϵ_n γρ‖x_n - p‖ + ϵ_n max{M̃, ‖γVp - μFp‖} + (1 - ϵ_n τ)‖x_n - p‖ + 2ν_n α_n‖p‖
= [1 - (τ - γρ)ϵ_n]‖x_n - p‖ + ϵ_n max{M̃, ‖γVp - μFp‖} + 2ν_n α_n‖p‖
= [1 - (τ - γρ)ϵ_n]‖x_n - p‖ + (τ - γρ)ϵ_n max{M̃/(τ - γρ), ‖γVp - μFp‖/(τ - γρ)} + 2ν_n α_n‖p‖
≤ max{‖x_n - p‖, M̃/(τ - γρ), ‖γVp - μFp‖/(τ - γρ)} + 2ν_n α_n‖p‖.
(3.10)

By induction, we can derive

‖x_{n+1} - p‖ ≤ max{‖x_1 - p‖, M̃/(τ - γρ), ‖γVp - μFp‖/(τ - γρ)} + ∑_{j=1}^n 2ν_j α_j‖p‖, ∀n ≥ 1.

Consequently, {x_n} is bounded (due to ∑_{n=1}^∞ α_n < ∞), and so are the sequences {u_n}, {v_n}, {y_n}, {z_n}, {Av_n} and {Ay_n}.

Step 2. ‖x_n - u_n‖ → 0, ‖x_n - v_n‖ → 0, ‖x_n - y_n‖ → 0, ‖x_n - t_n‖ → 0, ‖x_n - Wx_n‖ → 0 and ‖t_n - Tt_n‖ → 0 as n → ∞.

From (3.1) and (3.9), it follows that

‖x_{n+1} - p‖² ≤ ‖ϵ_n γ(δ_n V x_n + (1 - δ_n)S x_n) + (I - ϵ_n μF)z_n - p‖²
= ‖ϵ_n[γ(δ_n V x_n + (1 - δ_n)S x_n) - μFp] + (I - ϵ_n μF)z_n - (I - ϵ_n μF)p‖²
≤ {ϵ_n‖γ(δ_n V x_n + (1 - δ_n)S x_n) - μFp‖ + ‖(I - ϵ_n μF)z_n - (I - ϵ_n μF)p‖}²
≤ {ϵ_n[δ_n‖γV x_n - μFp‖ + (1 - δ_n)‖γS x_n - μFp‖] + (1 - ϵ_n τ)‖z_n - p‖}²
≤ ϵ_n(1/τ)[δ_n‖γV x_n - μFp‖ + (1 - δ_n)‖γS x_n - μFp‖]² + (1 - ϵ_n τ)‖z_n - p‖²
≤ ϵ_n(1/τ)[‖γV x_n - μFp‖ + ‖γS x_n - μFp‖]² + ‖z_n - p‖²
≤ ϵ_n(1/τ)[‖γV x_n - μFp‖ + ‖γS x_n - μFp‖]² + (‖x_n - p‖ + 2ν_n α_n‖p‖)² + (1 - β_n)(ν_n²(α_n + L)² - 1)‖v_n - y_n‖² - (β_n/(1 - β_n))‖z_n - W_n x_n‖²
≤ (‖x_n - p‖ + 2ν_n α_n‖p‖)² + ϵ_n M̃_1 + (1 - β_n)(ν_n²(α_n + L)² - 1)‖v_n - y_n‖² - (β_n/(1 - β_n))‖z_n - W_n x_n‖²,
(3.11)

where M ˜ 1 = sup n 1 { 1 τ [ γ V x n μ F p + γ S x n μ F p ] 2 }. This together with { ν n }[ a ˆ , b ˆ ](0, 1 L ) and { β n }[c,d](0,1) implies that

( 1 d ) ( 1 b ˆ 2 ( α n + L ) 2 ) v n y n 2 + c 1 c z n W n x n 2 ( 1 β n ) ( 1 ν n 2 ( α n + L ) 2 ) v n y n 2 + β n 1 β n z n W n x n 2 ( x n p + 2 ν n α n p ) 2 x n + 1 p 2 + ϵ n M ˜ 1 = [ ( x n p + 2 ν n α n p ) x n + 1 p ] × [ ( x n p + 2 ν n α n p ) + x n + 1 p ] + ϵ n M ˜ 1 [ x n + 1 x n + 2 ν n α n p ] [ x n p + x n + 1 p + 2 ν n α n p ] + ϵ n M ˜ 1 [ x n + 1 x n + 2 b ˆ α n p ] [ x n p + x n + 1 p + 2 b ˆ α n p ] + ϵ n M ˜ 1 .
(3.12)

Note that lim n α n = lim n ϵ n =0. Hence, taking into account the boundedness of { x n } and lim n x n + 1 x n =0, we deduce from (3.12) that

lim n v n y n = lim n z n W n x n =0.
(3.13)

Furthermore, for simplicity, we write w n = ϵ n γ( δ n V x n +(1 δ n )S x n )+(I ϵ n μF) z n for all n1. Then we have

x n + 1 x n = P C w n w n + ϵ n γ ( δ n V x n + ( 1 δ n ) S x n ) + ( I ϵ n μ F ) z n x n = P C w n w n + ϵ n [ γ ( δ n V x n + ( 1 δ n ) S x n ) μ F z n ] + z n x n ,

which immediately yields

z n x n = x n + 1 x n ϵ n [ γ ( δ n V x n + ( 1 δ n ) S x n ) μ F z n ] ( P C w n w n ).

So, utilizing Proposition 2.1(i) we get

z n x n 2 = x n + 1 x n ϵ n [ γ ( δ n V x n + ( 1 δ n ) S x n ) μ F z n ] ( P C w n w n ) 2 x n + 1 x n ϵ n [ γ ( δ n V x n + ( 1 δ n ) S x n ) μ F z n ] 2 2 P C w n w n , z n x n = x n + 1 x n ϵ n [ γ ( δ n V x n + ( 1 δ n ) S x n ) μ F z n ] 2 2 ( P C w n w n , z n P C w n + P C w n w n , P C w n x n ) [ x n + 1 x n + ϵ n γ ( δ n V x n + ( 1 δ n ) S x n ) μ F z n ] 2 + 2 ( P C w n w n , P C w n z n + P C w n w n P C w n x n ) 2 x n + 1 x n 2 + 2 ϵ n 2 γ ( δ n V x n + ( 1 δ n ) S x n ) μ F z n 2 + 2 x n + 1 w n x n + 1 x n .

Since x n + 1 x n 0 and ϵ n 0 (due to (C6)), we know from the boundedness of { w n }, { x n }, and { z n } that

lim n z n x n =0.
(3.14)

Taking into account that W n x n x n W n x n z n + z n x n , we obtain from (3.13) and (3.14)

lim n x n W n x n =0.
(3.15)

Next let us show that lim n x n y n =0. As a matter of fact, from (3.4) and (3.8) it follows that

t n p 2 v n p 2 v n y n 2 y n t n 2 + 2 ν n ( α n + L ) v n y n t n y n + 2 ν n α n p p y n v n p 2 v n y n 2 y n t n 2 + ν n 2 ( α n + L ) 2 t n y n 2 + v n y n 2 + 2 ν n α n p p y n = v n p 2 + 2 ν n α n p p y n + ( ν n 2 ( α n + L ) 2 1 ) t n y n 2 x n p 2 + 2 ν n α n p p y n + ( ν n 2 ( α n + L ) 2 1 ) t n y n 2 .
(3.16)

Utilizing Lemma 2.1(b), from (3.9) and (3.16), we obtain

z n p 2 β n x n p 2 + ( 1 β n ) t n p 2 β n 1 β n z n W n x n 2 β n x n p 2 + ( 1 β n ) t n p 2 β n x n p 2 + ( 1 β n ) [ x n p 2 + 2 ν n α n p p y n + ( ν n 2 ( α n + L ) 2 1 ) t n y n 2 ] x n p 2 + 2 ν n α n p p y n + ( 1 β n ) ( ν n 2 ( α n + L ) 2 1 ) t n y n 2 ,

which immediately implies that

( 1 d ) ( 1 b ˆ 2 ( α n + L ) 2 ) t n y n 2 ( 1 β n ) ( 1 ν n 2 ( α n + L ) 2 ) t n y n 2 x n p 2 z n p 2 + 2 ν n α n p p y n x n z n ( x n p + z n p ) + 2 b ˆ α n p p y n .

Since α n 0, x n z n 0 (due to (3.14)) and { x n }, { y n }, { z n } are bounded, we get

lim n t n y n =0.
(3.17)

In the meantime, from (3.8) and (3.9) it is not hard to see that

z n p 2 β n x n p 2 + ( 1 β n ) t n p 2 β n x n p 2 + ( 1 β n ) [ v n p 2 + 2 ν n α n p p y n + ( ν n 2 ( α n + L ) 2 1 ) v n y n 2 ] β n x n p 2 + ( 1 β n ) v n p 2 + 2 ν n α n p p y n .
(3.18)

Now, let us show that lim n x n u n = lim n x n v n =0. In fact, observe that

Δ n k x n p 2 = T r k , n ( Θ k , φ k ) ( I r k , n A k ) Δ n k 1 x n T r k , n ( Θ k , φ k ) ( I r k , n A k ) p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p 2 Δ n k 1 x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2
(3.19)

and

Λ n i u n p 2 = J R i , λ i , n ( I λ i , n B i ) Λ n i 1 u n J R i , λ i , n ( I λ i , n B i ) p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p 2 Λ n i 1 u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 x n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2
(3.20)

for i{1,2,,N} and k{1,2,,M}. Combining (3.18), (3.19), and (3.20), we get

z n p 2 β n x n p 2 + ( 1 β n ) v n p 2 + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) Λ n i u n p 2 + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) [ u n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) [ Δ n k x n p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) [ x n p 2 + r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] + 2 ν n α n p p y n = x n p 2 + ( 1 β n ) [ r k , n ( r k , n 2 μ k ) A k Δ n k 1 x n A k p 2 + λ i , n ( λ i , n 2 η i ) B i Λ n i 1 u n B i p 2 ] + 2 ν n α n p p y n ,

which hence implies that

( 1 d ) [ r k , n ( 2 μ k r k , n ) A k Δ n k 1 x n A k p 2 + λ i , n ( 2 η i λ i , n ) B i Λ n i 1 u n B i p 2 ] ( 1 β n ) [ r k , n ( 2 μ k r k , n ) A k Δ n k 1 x n A k p 2 + λ i , n ( 2 η i λ i , n ) B i Λ n i 1 u n B i p 2 ] x n p 2 z n p 2 + 2 ν n α n p p y n x n z n ( x n p + z n p ) + 2 ν n α n p p y n .

Since α n 0, { ν n }[ a ˆ , b ˆ ](0, 1 L ), { λ i , n }[ a i , b i ](0,2 η i ) and { r k , n }[ c k , d k ](0,2 μ k ) where i{1,2,,N} and k{1,2,,M}, we deduce from (3.14) and the boundedness of { x n }, { y n }, { z n } that

lim n A k Δ n k 1 x n A k p =0and lim n B i Λ n i 1 u n B i p =0,
(3.21)

where i{1,2,,N} and k{1,2,,M}.

Furthermore, by Proposition 2.6(ii) and Lemma 2.1(a) we have

Δ n k x n p 2 = T r k , n ( Θ k , φ k ) ( I r k , n A k ) Δ n k 1 x n T r k , n ( Θ k , φ k ) ( I r k , n A k ) p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p , Δ n k x n p = 1 2 ( ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p 2 + Δ n k x n p 2 ( I r k , n A k ) Δ n k 1 x n ( I r k , n A k ) p ( Δ n k x n p ) 2 ) 1 2 ( Δ n k 1 x n p 2 + Δ n k x n p 2 Δ n k 1 x n Δ n k x n r k , n ( A k Δ n k 1 x n A k p ) 2 ) ,

which implies that

Δ n k x n p 2 Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n r k , n ( A k Δ n k 1 x n A k p ) 2 = Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n 2 r k , n 2 A k Δ n k 1 x n A k p 2 + 2 r k , n Δ n k 1 x n Δ n k x n , A k Δ n k 1 x n A k p Δ n k 1 x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p .
(3.22)

By Lemma 2.1(a) and Lemma 2.8, we obtain

Λ n i u n p 2 = J R i , λ i , n ( I λ i , n B i ) Λ n i 1 u n J R i , λ i , n ( I λ i , n B i ) p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p , Λ n i u n p = 1 2 ( ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p 2 + Λ n i u n p 2 ( I λ i , n B i ) Λ n i 1 u n ( I λ i , n B i ) p ( Λ n i u n p ) 2 ) 1 2 ( Λ n i 1 u n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) 1 2 ( u n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) 1 2 ( x n p 2 + Λ n i u n p 2 Λ n i 1 u n Λ n i u n λ i , n ( B i Λ n i 1 u n B i p ) 2 ) ,

which immediately leads to

Λ n i u n − p 2 ≤ x n − p 2 − Λ n i − 1 u n − Λ n i u n − λ i , n ( B i Λ n i − 1 u n − B i p ) 2 = x n − p 2 − Λ n i − 1 u n − Λ n i u n 2 − λ i , n 2 B i Λ n i − 1 u n − B i p 2 + 2 λ i , n ⟨ Λ n i − 1 u n − Λ n i u n , B i Λ n i − 1 u n − B i p ⟩ ≤ x n − p 2 − Λ n i − 1 u n − Λ n i u n 2 + 2 λ i , n Λ n i − 1 u n − Λ n i u n B i Λ n i − 1 u n − B i p .
(3.23)

Combining (3.18) and (3.23) we conclude that

z n p 2 β n x n p 2 + ( 1 β n ) v n p 2 + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) Λ n i u n p 2 + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) [ x n p 2 Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p ] + 2 ν n α n p p y n x n p 2 ( 1 β n ) Λ n i 1 u n Λ n i u n 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + 2 ν n α n p p y n ,

which yields

( 1 d ) Λ n i 1 u n Λ n i u n 2 ( 1 β n ) Λ n i 1 u n Λ n i u n 2 x n p 2 z n p 2 + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + 2 ν n α n p p y n x n z n ( x n p + z n p ) + 2 λ i , n Λ n i 1 u n Λ n i u n B i Λ n i 1 u n B i p + 2 ν n α n p p y n .

Since α n 0, { ν n }[ a ˆ , b ˆ ](0, 1 L ) and { λ i , n }[ a i , b i ](0,2 η i ) where i{1,2,,N}, we deduce from (3.21) and the boundedness of { u n }, { x n }, { y n }, { z n } that

lim n Λ n i 1 u n Λ n i u n =0,i{1,2,,N}.
(3.24)

Also, combining (3.3), (3.18), and (3.22) we deduce that

z n p 2 β n x n p 2 + ( 1 β n ) v n p 2 + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) u n p 2 + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) Δ n k x n p 2 + 2 ν n α n p p y n β n x n p 2 + ( 1 β n ) [ x n p 2 Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p ] + 2 ν n α n p p y n x n p 2 ( 1 β n ) Δ n k 1 x n Δ n k x n 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p + 2 ν n α n p p y n ,

which leads to

( 1 d ) Δ n k 1 x n Δ n k x n 2 ( 1 β n ) Δ n k 1 x n Δ n k x n 2 x n p 2 z n p 2 + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p + 2 ν n α n p p y n x n z n ( x n p + z n p ) + 2 r k , n Δ n k 1 x n Δ n k x n A k Δ n k 1 x n A k p + 2 ν n α n p p y n .

Since α n 0, { ν n }[ a ˆ , b ˆ ](0, 1 L ) and { r k , n }[ c k , d k ](0,2 μ k ) where k{1,2,,M}, we deduce from (3.21) and the boundedness of { x n }, { y n }, { z n } that

lim n Δ n k 1 x n Δ n k x n =0,k{1,2,,M}.
(3.25)

Hence from (3.24) and (3.25) we get

x n u n = Δ n 0 x n Δ n M x n Δ n 0 x n Δ n 1 x n + Δ n 1 x n Δ n 2 x n + + Δ n M 1 x n Δ n M x n 0 as  n
(3.26)

and

u n v n = Λ n 0 u n Λ n N u n Λ n 0 u n Λ n 1 u n + Λ n 1 u n Λ n 2 u n + + Λ n N 1 u n Λ n N u n 0 as  n ,
(3.27)

respectively. Thus, from (3.26) and (3.27) we obtain

x n v n x n u n + u n v n 0as n.
(3.28)

In addition, it is clear that

x n W x n x n W n x n + W n x n W x n .

Thus, we conclude from Remark 2.2, (3.15), and the boundedness of { x n } that

lim n x n W x n =0.
(3.29)

Noting that x n y n x n v n + v n y n , we have from (3.13) and (3.28) that

lim n x n y n =0.
(3.30)

Again, noting that t n x n t n y n + y n x n , we obtain from (3.17) and (3.30)

lim n t n x n =0.
(3.31)

Furthermore, from (3.1) we find that z n t n = β n ( x n t n )+ σ n (T t n t n ) and hence

σ n ( T t n t n ) = z n t n β n ( x n t n ) = z n x n + x n t n β n ( x n t n ) = z n x n + ( 1 β n ) ( x n t n ) ,

which immediately leads to

σ n T t n t n z n x n +(1 β n ) x n t n z n x n + x n t n .

Consequently, from (3.14), (3.31), and lim inf n σ n >0 we get

lim n T t n t n =0.
(3.32)

Step 3. ω w ( x n )Ω.

Since H is reflexive and { x n } is bounded, there exists at least one weakly convergent subsequence of { x n }; hence ω w ( x n )≠∅. Now, take an arbitrary w∈ ω w ( x n ). Then there exists a subsequence { x n i } of { x n } such that x n i ⇀w. From (3.24)-(3.26), (3.28), (3.30), and (3.31), we have t n i ⇀w, y n i ⇀w, u n i ⇀w, v n i ⇀w, Λ n i m u n i ⇀w and Δ n i k x n i ⇀w, where m∈{1,2,…,N} and k∈{1,2,…,M}. Utilizing Proposition 2.4(ii) and Lemma 2.2, we deduce from (3.29) and (3.32) that w∈Fix(T) and w∈Fix(W)= ⋂ n = 1 ∞ Fix( T n ) (due to Lemma 2.4). Next, we prove that w∈ ⋂ m = 1 N I( B m , R m ). As a matter of fact, since B m is η m -inverse-strongly monotone, B m is a monotone and Lipschitz-continuous mapping. It follows from Lemma 2.11 that R m + B m is maximal monotone. Let (v,g)∈G( R m + B m ), i.e., g− B m v∈ R m v. Again, since Λ n m u n = J R m , λ m , n (I− λ m , n B m ) Λ n m − 1 u n , n≥1, m∈{1,2,…,N}, we have

Λ n m 1 u n λ m , n B m Λ n m 1 u n (I+ λ m , n R m ) Λ n m u n ,

that is,

1 λ m , n ( Λ n m 1 u n Λ n m u n λ m , n B m Λ n m 1 u n ) R m Λ n m u n .

In terms of the monotonicity of R m , we get

v Λ n m u n , g B m v 1 λ m , n ( Λ n m 1 u n Λ n m u n λ m , n B m Λ n m 1 u n ) 0

and hence

v Λ n m u n , g v Λ n m u n , B m v + 1 λ m , n ( Λ n m 1 u n Λ n m u n λ m , n B m Λ n m 1 u n ) = v Λ n m u n , B m v B m Λ n m u n + B m Λ n m u n B m Λ n m 1 u n + 1 λ m , n ( Λ n m 1 u n Λ n m u n ) v Λ n m u n , B m Λ n m u n B m Λ n m 1 u n + v Λ n m u n , 1 λ m , n ( Λ n m 1 u n Λ n m u n ) .

In particular,

v Λ n i m u n i , g v Λ n i m u n i , B m Λ n i m u n i B m Λ n i m 1 u n i + v Λ n i m u n i , 1 λ m , n i ( Λ n i m 1 u n i Λ n i m u n i ) .

Since Λ n m u n Λ n m 1 u n 0 (due to (3.24)) and B m Λ n m u n B m Λ n m 1 u n 0 (due to the Lipschitz continuity of B m ), we conclude from Λ n i m u n i w and { λ i , n }[ a i , b i ](0,2 η i ) that

lim i v Λ n i m u n i , g =vw,g0.

It follows from the maximal monotonicity of B m + R m that 0( R m + B m )w, i.e., wI( B m , R m ). Therefore, w m = 1 N I( B m , R m ). Next we prove that w k = 1 M GMEP( Θ k , φ k , A k ). Since Δ n k x n = T r k , n ( Θ k , φ k ) (I r k , n A k ) Δ n k 1 x n , n1, k{1,2,,M}, we have

Θ k ( Δ n k x n , y ) + φ k ( y ) φ k ( Δ n k x n ) + A k Δ n k 1 x n , y Δ n k x n + 1 r k , n y Δ n k x n , Δ n k x n Δ n k 1 x n 0 .

By (A2), we have

φ k ( y ) φ k ( Δ n k x n ) + A k Δ n k 1 x n , y Δ n k x n + 1 r k , n y Δ n k x n , Δ n k x n Δ n k 1 x n Θ k ( y , Δ n k x n ) .

Let z t =ty+(1t)w for all t(0,1] and yC. This implies that z t C. Then we have

z t Δ n k x n , A k z t φ k ( Δ n k x n ) φ k ( z t ) + z t Δ n k x n , A k z t z t Δ n k x n , A k Δ n k 1 x n z t Δ n k x n , Δ n k x n Δ n k 1 x n r k , n + Θ k ( z t , Δ n k x n ) = φ k ( Δ n k x n ) φ k ( z t ) + z t Δ n k x n , A k z t A k Δ n k x n + z t Δ n k x n , A k Δ n k x n A k Δ n k 1 x n z t Δ n k x n , Δ n k x n Δ n k 1 x n r k , n + Θ k ( z t , Δ n k x n ) .
(3.33)

By (3.25) and the Lipschitz continuity of A k , we have A k Δ n k x n − A k Δ n k − 1 x n →0 as n→∞. Furthermore, by the monotonicity of A k , we obtain ⟨ z t − Δ n k x n , A k z t − A k Δ n k x n ⟩≥0. Then by (A4) we obtain

z t w, A k z t φ k (w) φ k ( z t )+ Θ k ( z t ,w).
(3.34)

Utilizing (A1), (A4), and (3.34), we obtain

0 = Θ k ( z t , z t ) + φ k ( z t ) φ k ( z t ) t Θ k ( z t , y ) + ( 1 t ) Θ k ( z t , w ) + t φ k ( y ) + ( 1 t ) φ k ( w ) φ k ( z t ) t [ Θ k ( z t , y ) + φ k ( y ) φ k ( z t ) ] + ( 1 t ) z t w , A k z t = t [ Θ k ( z t , y ) + φ k ( y ) φ k ( z t ) ] + ( 1 t ) t y w , A k z t ,

and hence

0 Θ k ( z t ,y)+ φ k (y) φ k ( z t )+(1t)yw, A k z t .

Letting t0, we have, for each yC,

0 Θ k (w,y)+ φ k (y) φ k (w)+yw, A k w.

This implies that wGMEP( Θ k , φ k , A k ) and hence w k = 1 M GMEP( Θ k , φ k , A k ). Furthermore, let us show that wVI(C,A). In fact, define

T ˜ v= { A v + N C v , if  v C , , if  v C ,

where N C v={uH:vx,u0,xC}. Then T ˜ is maximal monotone and 0 T ˜ v if and only if vVI(C,A); see [25]. Let (v, v ˜ )G( T ˜ ). Then we have v ˜ T ˜ v=Av+ N C v, and hence v ˜ Av N C v. So, we have vx, v ˜ Av0 for all xC. On the other hand, from y n = P C ( v n ν n A n v n ) and vC, we get v n ν n A n v n y n , y n v0, and hence,

v y n , y n v n ν n + A n v n 0.

Therefore, from v ˜ Av N C v and y n i C, we have

v y n i , v ˜ v y n i , A v v y n i , A v v y n i , y n i v n i ν n i + A n i v n i = v y n i , A v v y n i , y n i v n i ν n i + A v n i α n i v y n i , v n i = v y n i , A v A y n i + v y n i , A y n i A v n i v y n i , y n i v n i ν n i α n i v y n i , v n i v y n i , A y n i A v n i v y n i , y n i v n i ν n i α n i v y n i , v n i .

Letting i→∞, we obtain ⟨v−w, v ˜ ⟩≥0. Since T ˜ is maximal monotone, we have w∈ T ˜ − 1 0, and hence, w∈VI(C,A). Consequently, w∈ ⋂ n = 1 ∞ Fix( T n )∩ ⋂ k = 1 M GMEP( Θ k , φ k , A k )∩ ⋂ i = 1 N I( B i , R i )∩VI(C,A)∩Fix(T)=:Ω. This shows that ω w ( x n )⊂Ω.
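The operator T ˜ used above encodes the standard fixed-point characterization of VI(C,A): a point x ∗ solves VIP (1.1) if and only if x ∗ = P C ( x ∗ −λA x ∗ ) for any λ>0. The following minimal numerical sketch, which is only an illustration and not part of the proof, checks this projection residual for a hypothetical inverse-strongly monotone operator A(x)=x−c over the box C=[0,1]² in R²:

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^2."""
    return np.clip(x, lo, hi)

# Hypothetical data: A is the gradient of 0.5*||x - c||^2, hence monotone
# (in fact 1-inverse-strongly monotone); the VIP solution is P_C(c).
c = np.array([2.0, 0.5])
A = lambda x: x - c
x_star = proj_box(c)             # = (1.0, 0.5)

# Fixed-point residual ||x* - P_C(x* - lam*A x*)|| vanishes at a solution.
lam = 0.5
residual = np.linalg.norm(x_star - proj_box(x_star - lam * A(x_star)))
```

Here the residual is exactly zero, confirming that x_star belongs to VI(C,A) for this toy problem.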

Step 4. { x n } converges strongly to a unique solution x Ω of SHVI (1.10).

Indeed, since x n + 1 − x n →0, we can take a subsequence { x n i } of { x n } satisfying

lim sup n ( γ V μ F ) x , x n + 1 x = lim sup n ( γ V μ F ) x , x n x = lim sup i ( γ V μ F ) x , x n i x .

Without loss of generality, we may further assume that x n i x ˜ ; then x ˜ Ω due to Step 3. Since x is a solution of SHVI (1.10), we get

lim sup n ( γ V μ F ) x , x n + 1 x = ( γ V μ F ) x , x ˜ x 0.
(3.35)

Repeating the argument of (3.35), we have

lim sup n ( γ S μ F ) x , x n + 1 x 0.
(3.36)

From (3.1) and (3.9), it follows that (noticing that x n + 1 = P C w n and 0<γτ)

x n + 1 x 2 = w n x , x n + 1 x + P C w n w n , P C w n x w n x , x n + 1 x = ( I ϵ n μ F ) z n ( I ϵ n μ F ) x , x n + 1 x + δ n ϵ n γ V x n V x , x n + 1 x + ϵ n ( 1 δ n ) γ S x n S x , x n + 1 x + δ n ϵ n ( γ V μ F ) x , x n + 1 x + ϵ n ( 1 δ n ) ( γ S μ F ) x , x n + 1 x ( 1 ϵ n τ ) z n x x n + 1 x + [ δ n ϵ n γ ρ + ϵ n ( 1 δ n ) γ ] x n x x n + 1 x + δ n ϵ n ( γ V μ F ) x , x n + 1 x + ϵ n ( 1 δ n ) ( γ S μ F ) x , x n + 1 x ( 1 ϵ n τ ) 1 2 ( z n x 2 + x n + 1 x 2 ) + [ δ n ϵ n γ ρ + ϵ n ( 1 δ n ) γ ] 1 2 ( x n x 2 + x n + 1 x 2 ) + δ n ϵ n ( γ V μ F ) x , x n + 1 x + ϵ n ( 1 δ n ) ( γ S μ F ) x , x n + 1 x ( 1 ϵ n τ ) 1 2 [ ( x n x + 2 ν n α n x ) 2 + x n + 1 x 2 ] + [ δ n ϵ n γ ρ + ϵ n ( 1 δ n ) γ ] 1 2 ( x n x 2 + x n + 1 x 2 ) + δ n ϵ n ( γ V μ F ) x , x n + 1 x + ϵ n ( 1 δ n ) ( γ S μ F ) x , x n + 1 x ( 1 ϵ n τ ) 1 2 ( x n x 2 + α n M ˜ 2 + x n + 1 x 2 ) + [ δ n ϵ n γ ρ + ϵ n ( 1 δ n ) γ ] 1 2 ( x n x 2 + x n + 1 x 2 ) + δ n ϵ n ( γ V μ F ) x , x n + 1 x + ϵ n ( 1 δ n ) ( γ S μ F ) x , x n + 1 x [ 1 δ n ϵ n γ ( 1 ρ ) ] 1 2 ( x n x 2 + x n + 1 x 2 ) + δ n ϵ n ( γ V μ F ) x , x n + 1 x + ϵ n ( 1 δ n ) ( γ S μ F ) x , x n + 1 x + α n M ˜ 2 ,

where M ˜ 2 = sup n 1 {2 ν n x ( 2 x n x + ν n α n x )}<. It turns out that

x n + 1 x 2 1 δ n ϵ n γ ( 1 ρ ) 1 + δ n ϵ n γ ( 1 ρ ) x n x 2 + 2 1 + δ n ϵ n γ ( 1 ρ ) [ δ n ϵ n ( γ V μ F ) x , x n + 1 x + ϵ n ( 1 δ n ) ( γ S μ F ) x , x n + 1 x + α n M ˜ 2 ] [ 1 δ n ϵ n γ ( 1 ρ ) ] x n x 2 + 2 1 + δ n ϵ n γ ( 1 ρ ) [ δ n ϵ n ( γ V μ F ) x , x n + 1 x + ϵ n ( 1 δ n ) ( γ S μ F ) x , x n + 1 x ] + 2 α n M ˜ 2 = ( 1 s n ) x n x 2 + s n b n + t n ,
(3.37)

where t n =2 α n M ˜ 2 , s n = δ n ϵ n γ(1ρ) and

b n = 2 γ ( 1 ρ ) [ 1 + δ n ϵ n γ ( 1 ρ ) ] ( γ V μ F ) x , x n + 1 x + 2 ( 1 δ n ) δ n γ ( 1 ρ ) [ 1 + δ n ϵ n γ ( 1 ρ ) ] ( γ S μ F ) x , x n + 1 x .

In terms of conditions (C5) and (C6), we conclude from 0<1ρ1 that { s n }(0,1] and n = 1 s n =. Note that 2 γ ( 1 ρ ) [ 1 + δ n ϵ n γ ( 1 ρ ) ] 2 γ ( 1 ρ ) and 2 ( 1 δ n ) δ n γ ( 1 ρ ) [ 1 + δ n ϵ n γ ( 1 ρ ) ] 2 a γ ( 1 ρ ) , where a=inf{ δ n :n1}>0. Consequently, utilizing Lemma 2.13 we obtain

lim sup n b n lim sup n 2 γ ( 1 ρ ) [ 1 + δ n ϵ n γ ( 1 ρ ) ] ( γ V μ F ) x , x n + 1 x + lim sup n 2 ( 1 δ n ) δ n γ ( 1 ρ ) [ 1 + δ n ϵ n γ ( 1 ρ ) ] ( γ S μ F ) x , x n + 1 x 0 .

So, applying Lemma 2.12 to (3.37), we infer that lim n x n x =0. The proof is complete. □

Remark 3.1 In Theorem 3.1, whenever V0, the iterative scheme (3.1) reduces to the following one:

{ u n = T r M , n ( Θ M , φ M ) ( I r M , n A M ) T r M 1 , n ( Θ M 1 , φ M 1 ) ( I r M 1 , n A M 1 ) T r 1 , n ( Θ 1 , φ 1 ) ( I r 1 , n A 1 ) x n , v n = J R N , λ N , n ( I λ N , n B N ) J R N 1 , λ N 1 , n ( I λ N 1 , n B N 1 ) J R 1 , λ 1 , n ( I λ 1 , n B 1 ) u n , y n = P C ( v n ν n A n v n ) , z n = β n W n x n + γ n P C ( v n ν n A n y n ) + σ n T P C ( v n ν n A n y n ) , x n + 1 = P C [ ϵ n ( 1 δ n ) γ S x n + ( I ϵ n μ F ) z n ] , n 1 ,
(3.38)

where A n = α n I+A for all n1. Assume that the SHVI (1.11) has a solution and that all the conditions in Theorem 3.1 are satisfied. If {S x n } is bounded, then { x n } converges strongly to a unique solution of SHVI (1.11) provided lim n x n x n + 1 =0.
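Stripped of the equilibrium, inclusion, and fixed-point layers, the core of scheme (3.38) is still Korpelevich's two-projection extragradient step recalled in the introduction. The sketch below illustrates only that core step, for a hypothetical monotone operator and box constraint in R², not the full scheme (3.38):

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^2."""
    return np.clip(x, lo, hi)

def extragradient(A, x0, tau=0.1, iters=300):
    """Korpelevich's method: y_n = P_C(x_n - tau*A x_n), x_{n+1} = P_C(x_n - tau*A y_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj_box(x - tau * A(x))
        x = proj_box(x - tau * A(y))
    return x

# Hypothetical data: A(x) = x - c is monotone and 1-Lipschitz, so tau = 0.1
# lies in the admissible range; the unique VIP solution is P_C(c) = (1.0, 0.5).
c = np.array([2.0, 0.5])
A = lambda x: x - c
x = extragradient(A, np.zeros(2))
```

For this strongly monotone example the iterates converge to P_C(c), matching the uniqueness assertion for VIP (1.1) under strong monotonicity.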

Next we consider a special case of SHVI (1.10). In SHVI (1.10), put μ=2, F= 1 2 I and γ=τ=1. In this case, the objective is to find x Ω such that

{ ( I V ) x , x x 0 , x Ω , ( I S ) x , y x 0 , y Ω .
(3.39)

Utilizing Theorem 3.1 we immediately derive the following.

Corollary 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let M, N be two positive integers. Let Θ k be a bifunction from C×C to R satisfying (A1)-(A4) and φ k :CR{+} be a proper lower semicontinuous and convex function with restriction (B1) or (B2), where k{1,2,,M}. Let R i :C 2 H be a maximal monotone mapping and let A k :HH and B i :CH be μ k -inverse-strongly monotone and η i -inverse-strongly monotone, respectively, where k{1,2,,M}, i{1,2,,N}. Let { T n } n = 1 be a sequence of nonexpansive self-mappings on C and { λ n } n = 1 be a sequence in (0,b] for some b(0,1). Let T:CC be a ξ-strictly pseudocontractive mapping, S:CC be a nonexpansive mapping and V:CH be a ρ-contraction with coefficient ρ[0,1). Let A:CH be a 1 L -inverse-strongly monotone mapping. Assume that the SHVI (3.39) has a solution, where Ω:= n = 1 Fix( T n ) k = 1 M GMEP( Θ k , φ k , A k ) i = 1 N I( B i , R i )VI(C,A)Fix(T). Let { α n }[0,), { ν n }(0, 1 L ), { ϵ n },{ δ n },{ β n },{ γ n },{ σ n }(0,1) and { λ i , n }[ a i , b i ](0,2 η i ), { r k , n }[ c k , d k ](0,2 μ k ) where i{1,2,,N} and k{1,2,,M}. For arbitrarily given x 1 C, let { x n } be a sequence generated by

{ u n = T r M , n ( Θ M , φ M ) ( I r M , n A M ) T r M 1 , n ( Θ M 1 , φ M 1 ) ( I r M 1 , n A M 1 ) T r 1 , n ( Θ 1 , φ 1 ) ( I r 1 , n A 1 ) x n , v n = J R N , λ N , n ( I λ N , n B N ) J R N 1 , λ N 1 , n ( I λ N 1 , n B N 1 ) J R 1 , λ 1 , n ( I λ 1 , n B 1 ) u n , y n = P C ( v n ν n A n v n ) , z n = β n W n x n + γ n P C ( v n ν n A n y n ) + σ n T P C ( v n ν n A n y n ) , x n + 1 = P C [ ϵ n ( δ n V x n + ( 1 δ n ) S x n ) + ( 1 ϵ n ) z n ] , n 1 ,
(3.40)

where A n = α n I+A for all n1. Suppose that

(C1) n = 1 α n <;

(C2) 0< lim inf n ν n lim sup n ν n < 1 L ;

(C3) β n + γ n + σ n =1 and ( γ n + σ n )ξ γ n for all n1;

(C4) 0< lim inf n β n lim sup n β n <1 and lim inf n σ n >0;

(C5) 0< lim inf n δ n lim sup n δ n <1;

(C6) lim n ϵ n =0 and n = 1 ϵ n =.

If {S x n } is bounded, then { x n } converges strongly to a unique solution of the SHVI (3.39) provided lim n x n x n + 1 =0.

In Theorem 3.1, putting M=1 and N=2, we obtain the following.

Corollary 3.2 Let C be a nonempty closed convex subset of a real Hilbert space H. Let Θ 1 be a bifunction from C×C to R satisfying (A1)-(A4), φ 1 :CR{+} be a proper lower semicontinuous and convex function with restriction (B1) or (B2), and A 1 :HH be μ 1 -inverse strongly monotone. Let R i :C 2 H be a maximal monotone mapping and B i :CH be η i -inverse-strongly monotone, for i=1,2. Let { T n } n = 1 be a sequence of nonexpansive self-mappings on C and { λ n } n = 1 be a sequence in (0,b] for some b(0,1). Let T:CC be a ξ-strictly pseudocontractive mapping, S:CC be a nonexpansive mapping and V:CH be a ρ-contraction with coefficient ρ[0,1). Let A:CH be a 1 L -inverse-strongly monotone mapping, and F:CH be κ-Lipschitzian and η-strongly monotone with positive constants κ,η>0 such that 0<μ< 2 η κ 2 and 0<γτ where τ=1 1 μ ( 2 η μ κ 2 ) . Assume that the SHVI (1.10) has a solution, where Ω:= n = 1 Fix( T n )GMEP( Θ 1 , φ 1 , A 1 )I( B 2 , R 2 )I( B 1 , R 1 )VI(C,A)Fix(T). Let { α n }[0,), { ν n }(0, 1 L ), { ϵ n },{ δ n },{ β n },{ γ n },{ σ n }(0,1), { r 1 , n }[ c 1 , d 1 ](0,2 μ 1 ) and { λ i , n }[ a i , b i ](0,2 η i ) for i=1,2. For arbitrarily given x 1 C, let { x n } be a sequence generated by

{ Θ 1 ( u n , y ) + φ 1 ( y ) φ 1 ( u n ) + A 1 x n , y u n + 1 r 1 , n u n x n , y u n 0 , y C , v n = J R 2 , λ 2 , n ( I λ 2 , n B 2 ) J R 1 , λ 1 , n ( I λ 1 , n B 1 ) u n , y n = P C ( v n ν n A n v n ) , z n = β n W n x n + γ n P C ( v n ν n A n y n ) + σ n T P C ( v n ν n A n y n ) , x n + 1 = P C [ ϵ n γ ( δ n V x n + ( 1 δ n ) S x n ) + ( I ϵ n μ F ) z n ] , n 1 ,
(3.41)

where A n = α n I+A for all n1. Suppose that

(C1) n = 1 α n <;

(C2) 0< lim inf n ν n lim sup n ν n < 1 L ;

(C3) β n + γ n + σ n =1 and ( γ n + σ n )ξ γ n for all n1;

(C4) 0< lim inf n β n lim sup n β n <1 and lim inf n σ n >0;

(C5) 0< lim inf n δ n lim sup n δ n <1;

(C6) lim n ϵ n =0 and n = 1 ϵ n =.

If {S x n } is bounded, then { x n } converges strongly to a unique solution of the SHVI (1.10) provided lim n x n x n + 1 =0.
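The parameter restriction 0<γ≤τ with τ=1− √(1−μ(2η−μ κ ²)) in Corollaries 3.2 and 3.3 is easy to evaluate numerically. The snippet below checks it for sample, purely illustrative constants κ=1, η=1, μ=0.5 satisfying 0<μ<2η/κ²:

```python
import math

# Sample (hypothetical) constants with 0 < mu < 2*eta/kappa**2
kappa, eta, mu = 1.0, 1.0, 0.5
assert 0 < mu < 2 * eta / kappa**2

# tau = 1 - sqrt(1 - mu*(2*eta - mu*kappa^2)); here tau = 0.5
tau = 1 - math.sqrt(1 - mu * (2 * eta - mu * kappa**2))

# Any gamma with 0 < gamma <= tau is then admissible.
```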

In Theorem 3.1, putting M=N=1, we obtain the following.

Corollary 3.3 Let C be a nonempty closed convex subset of a real Hilbert space H. Let Θ 1 be a bifunction from C×C to R satisfying (A1)-(A4), φ 1 :CR{+} be a proper lower semicontinuous and convex function with restriction (B1) or (B2), and A 1 :HH be μ 1 -inverse strongly monotone. Let R 1 :C 2 H be a maximal monotone mapping and B 1 :CH be η 1 -inverse-strongly monotone. Let { T n } n = 1 be a sequence of nonexpansive self-mappings on C and { λ n } n = 1 be a sequence in (0,b] for some b(0,1). Let T:CC be a ξ-strictly pseudocontractive mapping, S:CC be a nonexpansive mapping and V:CH be a ρ-contraction with coefficient ρ[0,1). Let A:CH be a 1 L -inverse-strongly monotone mapping, and F:CH be κ-Lipschitzian and η-strongly monotone with positive constants κ,η>0 such that 0<μ< 2 η κ 2 and 0<γτ where τ=1 1 μ ( 2 η μ κ 2 ) . Assume that the SHVI (1.10) has a solution, where Ω:= n = 1 Fix( T n )GMEP( Θ 1 , φ 1 , A 1 )I( B 1 , R 1 )VI(C,A)Fix(T). Let { α n }[0,), { ν n }(0, 1 L ), { ϵ n },{ δ n },{ β n },{ γ n },{ σ n }(0,1), { λ 1 , n }[ a 1 , b 1 ](0,2 η 1 ) and { r 1 , n }[ c 1 , d 1 ](0,2 μ 1 ). For arbitrarily given x 1 C, let { x n } be a sequence generated by

{ Θ 1 ( u n , y ) + φ 1 ( y ) φ 1 ( u n ) + A 1 x n , y u n + 1 r 1 , n u n x n , y u n 0 , y C , v n = J R 1 , λ 1 , n ( I λ 1 , n B 1 ) u n , y n = P C ( v n ν n A n v n ) , z n = β n W n x n + γ n P C ( v n ν n A n y n ) + σ n T P C ( v n ν n A n y n ) , x n + 1 = P C [ ϵ n γ ( δ n V x n + ( 1 δ n ) S x n ) + ( I ϵ n μ F ) z n ] , n 1 ,
(3.42)

where A n = α n I+A for all n1. Suppose that

(C1) n = 1 α n <;

(C2) 0< lim inf n ν n lim sup n ν n < 1 L ;

(C3) β n + γ n + σ n =1 and ( γ n + σ n )ξ γ n for all n1;

(C4) 0< lim inf n β n lim sup n β n <1 and lim inf n σ n >0;

(C5) 0< lim inf n δ n lim sup n δ n <1;

(C6) lim n ϵ n =0 and n = 1 ϵ n =.

If {S x n } is bounded, then { x n } converges strongly to a unique solution of the SHVI (1.10) provided lim n x n x n + 1 =0.

Remark 3.2 Our iterative scheme (3.1) is very different from Yao et al.’s scheme (1.5) and Kong et al.’s scheme (1.8). Here, the three-step iterative scheme in [[7], Algorithm I] is extended to our five-step iterative scheme (3.1) for the SHVI (1.10) by combining Korpelevich’s extragradient method, the viscosity approximation method, the hybrid steepest-descent method [48], Mann’s iteration method and the projection method. It is worth pointing out that, without assumptions similar to those in [[31], Theorem 3.2], e.g., that { x n } is bounded and Fix(T)⊂intC, the sequence { x n } generated by (3.1) converges strongly to a point x ∗ ∈ ⋂ n = 1 ∞ Fix( T n )∩ ⋂ k = 1 M GMEP( Θ k , φ k , A k )∩ ⋂ i = 1 N I( B i , R i )∩VI(C,A)∩Fix(T)=:Ω, which is the unique solution of the SHVI (1.10) (over the fixed point set of an infinite family of nonexpansive mappings { T n } n = 1 ∞ and a ξ-strictly pseudocontractive mapping T). It is worth emphasizing that the nonexpansive mapping T in (1.5) is extended to a ξ-strictly pseudocontractive mapping T in (3.1), and that the VIP in SHVI (1.6) is extended to the setting of finitely many GMEPs and finitely many variational inclusions in SHVI (1.10).

Remark 3.3 Our Theorem 3.1 improves, extends, supplements and develops Yao et al. [[31], Theorems 3.1 and 3.2] and Kong et al. [[7], Theorem 17] in the following aspects:

  (a)

    Our SHVI (1.10) with the unique solution x Ω satisfying

    x = P n = 1 Fix ( T n ) k = 1 M GMEP ( Θ k , φ k , A k ) i = 1 N I ( B i , R i ) VI ( C , A ) Fix ( T ) ( I ( μ F γ S ) ) x

    is more general than the problem of finding a point x ˜ C satisfying x ˜ = P Fix ( T ) S x ˜ in [31] and than the problem of finding a point x Fix(T)VI(C,A) satisfying x = P Fix ( T ) VI ( C , A ) (I(μFγS)) x in [[7], Theorem 17]. It is worth emphasizing that S is nonexpansive if and only if the complement IS is 1 2 -inverse-strongly monotone; see [19].

  (b)

    Our five-step iterative scheme (3.1) for SHVI (1.10) is more flexible, more advantageous and more subtle than Kong et al.’s three-step iterative scheme in [[7], Algorithm I] and Yao et al.’s two-step iterative scheme (1.5) because it can be used to solve several kinds of problems, e.g., the SHVI, the HVIP and the problem of finding a common point of four sets: ⋂ n = 1 ∞ Fix( T n )∩Fix(T), ⋂ k = 1 M GMEP( Θ k , φ k , A k ), ⋂ i = 1 N I( B i , R i ) and VI(C,A). In addition, our Theorem 3.1 drops the crucial requirements in [[31], Theorem 3.2] that lim n → ∞ α n β n =0, lim n → ∞ β n 2 α n =0, Fix(T)⊂intC and { x n } is bounded, generalizes [[7], Theorem 17] from one nonlinear mapping T to an infinite family of nonlinear mappings { T n } n = 1 ∞ and T and extends [[7], Theorem 17] to the setting of finitely many GMEPs and finitely many variational inclusions.

  (c)

    The argument techniques in our Theorem 3.1 are very different from those in [[31], Theorems 3.1 and 3.2] and in [[7], Theorem 17] because we make use of the properties of the W-mappings W n (see Lemmas 2.4 and 2.5), the properties of resolvent operators and maximal monotone mappings (see Proposition 2.4 and Lemmas 2.9-2.13), the inclusion problem 0∈ T ˜ v (⇔ v∈VI(C,A) for the maximal monotone operator T ˜ ) (see (2.3)) and the contractive coefficient estimates for the contractions associated with nonexpansive mappings (see Lemma 2.7).

References

  1. Lions JL: Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires. Dunod, Paris; 1969.

  2. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.

  3. Takahashi W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama; 2000.

  4. Oden JT: Quantitative Methods on Nonlinear Mechanics. Prentice Hall, Englewood Cliffs; 1986.

  5. Zeidler E: Nonlinear Functional Analysis and Its Applications. Springer, New York; 1985.

  6. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.

  7. Kong ZR, Ceng LC, Ansari QH, Pang CT: Multistep hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013., 2013: Article ID 718624

  8. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006,10(5):1293–1303.

  9. Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1432.

  10. Ceng LC, Ansari QH, Schaible S: Hybrid extragradient-like methods for generalized mixed equilibrium problems, system of generalized equilibrium problems and optimization problems. J. Glob. Optim. 2012, 53: 69–96. 10.1007/s10898-011-9703-4

  11. Ceng LC, Ansari QH, Wong MM, Yao JC: Mann type hybrid extragradient method for variational inequalities, variational inclusions and fixed point problems. Fixed Point Theory 2012,13(2):403–422.

  12. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z

  13. Ceng LC, Hadjisavvas N, Wong NC: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46: 635–646. 10.1007/s10898-009-9454-7

  14. Ceng LC, Yao JC: A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010, 72: 1922–1937. 10.1016/j.na.2009.09.033

  15. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218: 1112–1123. 10.1016/j.amc.2011.01.061

  16. Ceng LC, Petrusel A: Relaxed extragradient-like method for general system of generalized mixed equilibria and fixed point problem. Taiwan. J. Math. 2012, 16(2):445–478.

  17. Ceng LC, Guu SM, Yao JC: Hybrid iterative method for finding common solutions of generalized mixed equilibrium and fixed point problems. Fixed Point Theory Appl. 2012, 2012: Article ID 92

  18. Ceng LC, Hu HY, Wong MM: Strong and weak convergence theorems for generalized mixed equilibrium problem with perturbation and fixed point problem of infinitely many nonexpansive mappings. Taiwan. J. Math. 2011, 15(3):1341–1367.

  19. Ceng LC, Al-Homidan S: Algorithms of common solutions for generalized mixed equilibria, variational inclusions, and constrained convex minimization. Abstr. Appl. Anal. 2014, 2014: Article ID 132053

  20. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042

  21. Ceng LC, Yao JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022

  22. Ceng LC, Yao JC: Hybrid viscosity approximation schemes for equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Appl. Math. Comput. 2008, 198: 729–741. 10.1016/j.amc.2007.09.011

  23. Colao V, Marino G, Xu HK: An iterative method for finding common solutions of equilibrium and fixed point problems. J. Math. Anal. Appl. 2008, 344: 340–352. 10.1016/j.jmaa.2008.02.041

  24. Ceng LC, Petrusel A, Yao JC: Iterative approaches to solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. J. Optim. Theory Appl. 2009, 143: 37–58. 10.1007/s10957-009-9549-9

  25. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056

  26. Huang NJ: A new completely general class of variational inclusions with noncompact valued mappings. Comput. Math. Appl. 1998, 35(10):9–14. 10.1016/S0898-1221(98)00067-4

  27. Zeng LC, Guu SM, Yao JC: Characterization of H-monotone operators with applications to variational inclusions. Comput. Math. Appl. 2005, 50(3–4):329–337. 10.1016/j.camwa.2005.06.001

  28. Ceng LC, Guu SM, Yao JC: Iterative approximation of solutions for a class of completely generalized set-valued quasi-variational inclusions. Comput. Math. Appl. 2008, 56: 978–987. 10.1016/j.camwa.2008.01.026

  29. Zhang SS, Lee HWJ, Chan CK: Algorithms of common solutions for quasivariational inclusions and fixed point problems. Appl. Math. Mech. 2008, 29: 571–581. 10.1007/s10483-008-0502-y

  30. Ceng LC, Guu SM, Yao JC: Iterative algorithm for finding approximate solutions of mixed quasi-variational-like inclusions. Comput. Math. Appl. 2008, 56: 942–952. 10.1016/j.camwa.2008.01.024

  31. Yao Y, Liou YC, Marino G: Two-step iterative algorithms for hierarchical fixed point problems and variational inequality problems. J. Appl. Math. Comput. 2009, 31(1–2):433–445. 10.1007/s12190-008-0222-5

  32. Ceng LC, Yao JC: On the triple hierarchical variational inequalities with constraints of mixed equilibria, variational inclusions and systems of generalized equilibria. Tamkang J. Math. 2014, 45(3):297–334. 10.5556/j.tkjm.45.2014.1658

  33. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20(1):103–120. 10.1088/0266-5611/20/1/006

  34. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5–6):475–504. 10.1080/02331930412331327157

  35. Goebel K, Kirk WA: Topics on Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

  36. Shimoji K, Takahashi W: Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan. J. Math. 2001, 5: 387–404.

  37. Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007, 2007: Article ID 64363

  38. Marino G, Xu HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336–346. 10.1016/j.jmaa.2006.06.055

  39. Sahu DR, Xu HK, Yao JC: Asymptotically strict pseudocontractive mappings in the intermediate sense. Nonlinear Anal. 2009, 70: 3502–3511. 10.1016/j.na.2008.07.007

  40. Yao Y, Liou YC, Kang SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59: 3472–3480. 10.1016/j.camwa.2010.03.036

  41. Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159(3):529–544. 10.1016/S0377-2217(03)00423-5

  42. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with application to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965

  43. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119(1):185–201.

  44. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Groningen; 1976.

  45. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. (2) 2002, 66(1):240–256. 10.1112/S0024610702003332

  46. Ceng LC, Petrusel A, Yao JC: Strong convergence theorems of averaging iterations of nonexpansive nonself-mappings in Banach spaces. Fixed Point Theory 2007, 8(2):219–236.

  47. Ceng LC, Petrusel A, Yao JC: Relaxed extragradient methods with regularization for general system of variational inequalities with constraints of split feasibility and fixed point problems. Abstr. Appl. Anal. 2013, 2013: Article ID 891232

  48. Yamada I: The hybrid steepest-descent method for the variational inequality problems over the intersection of the fixed-point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. North-Holland, Amsterdam; 2001:473–504.

Acknowledgements

Lu-Chuan Ceng was partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission and PhD Program Foundation of Ministry of Education of China (20123127110002). Jen-Chih Yao was partially supported by the grant NSC 102-2115-M-037-002-MY3.

Author information

Corresponding author

Correspondence to Jen-Chih Yao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Ceng, LC., Sahu, D.R. & Yao, JC. A unified extragradient method for systems of hierarchical variational inequalities in a Hilbert space. J Inequal Appl 2014, 460 (2014). https://doi.org/10.1186/1029-242X-2014-460
