
Regularized hybrid iterative algorithms for triple hierarchical variational inequalities

Abstract

In this paper, we introduce and study a triple hierarchical variational inequality (THVI) with constraints of minimization and equilibrium problems. More precisely, let Fix(T) be the fixed point set of a nonexpansive mapping, let MEP(Θ,φ) be the solution set of a mixed equilibrium problem (MEP), and let Γ be the solution set of a minimization problem (MP) for a convex and continuously Fréchet differentiable functional in Hilbert spaces. We want to find a solution x^* ∈ Fix(T) ∩ MEP(Θ,φ) ∩ Γ of a variational inequality whose constraint set is the solution set of another variational inequality over the intersection of Fix(T), MEP(Θ,φ), and Γ. We propose a hybrid iterative algorithm with regularization to compute approximate solutions of the THVI, and we present the convergence analysis of the proposed iterative algorithm.

MSC:49J40, 47J20, 47H10, 65K05, 47H09.

1 Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ∥·∥. Let C be a nonempty closed convex subset of H, and let P_C be the metric projection of H onto C. Let T : C → C be a self-mapping on C. Denote by Fix(T) the set of fixed points of T. We say that T is L-Lipschitzian if there exists a constant L ≥ 0 such that

∥Tx − Ty∥ ≤ L∥x − y∥, ∀x, y ∈ C.

When L = 1 or 0 ≤ L < 1, we call T a nonexpansive or a contractive mapping, respectively. We say that a mapping A : C → H is α-inverse strongly monotone if there exists a constant α > 0 such that

⟨Ax − Ay, x − y⟩ ≥ α∥Ax − Ay∥², ∀x, y ∈ C,

and that A is η-strongly monotone (resp. monotone) if there exists a constant η > 0 (resp. η = 0) such that

⟨Ax − Ay, x − y⟩ ≥ η∥x − y∥², ∀x, y ∈ C.

It is known that T is nonexpansive if and only if I − T is (1/2)-inverse strongly monotone. Moreover, the gradient of a convex continuously differentiable functional is (1/L)-inverse strongly monotone whenever it is L-Lipschitz continuous (see, e.g., [1]).

Let f : C → ℝ be a convex and continuously Fréchet differentiable functional. Consider the minimization problem (MP):

min_{x∈C} f(x)
(1.1)

(assuming the existence of minimizers). We denote by Γ the set of minimizers of problem (1.1). The gradient-projection algorithm (GPA) generates a sequence {x_n} determined by the gradient ∇f and the metric projection P_C:

x_{n+1} := P_C(x_n − λ∇f(x_n)), n ≥ 0.
(1.2)

The convergence of algorithm (1.2) depends on ∇f. It is known that if ∇f is η-strongly monotone and L-Lipschitz continuous, then for 0 < λ < 2η/L², the operator

S := P_C(I − λ∇f)

is a contraction. Hence, the sequence {x_n} defined by the GPA (1.2) converges in norm to the unique solution of (1.1). If the gradient ∇f is only assumed to be Lipschitz continuous, then {x_n} can only be weakly convergent when H is infinite-dimensional (a counterexample to the norm convergence of {x_n} is given by Xu [[2], Section 5]).
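To make the GPA (1.2) concrete, here is a minimal numerical sketch (an illustration, not from the paper): we minimize the strongly convex functional f(x) = (1/2)∥Ax − b∥² over the closed unit ball C, with P_C the radial projection, L = ∥AᵀA∥ the Lipschitz constant of ∇f, and step size λ = 1/L ∈ (0, 2/L).

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball C = {x : ||x|| <= radius}."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else x * (radius / nx)

def gpa(A, b, lam, n_iter=5000):
    """Gradient-projection algorithm x_{n+1} = P_C(x_n - lam * grad f(x_n))
    for f(x) = 0.5 * ||A x - b||^2 over the unit ball."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # grad f(x) = A^T (A x - b)
        x = project_ball(x - lam * grad)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A.T @ A, 2)            # Lipschitz constant of grad f
x = gpa(A, b, lam=1.0 / L)
# At a minimizer over C the fixed-point residual ||x - P_C(x - lam*grad f(x))|| vanishes
residual = np.linalg.norm(x - project_ball(x - (1.0 / L) * (A.T @ (A @ x - b))))
```

Since f here is strongly convex, the iteration converges in norm, in line with the contraction argument above.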

Regularization, in particular the classical Tikhonov regularization, is often used to solve ill-posed optimization problems. Consider the regularized minimization problem

min_{x∈C} f_α(x) := f(x) + (α/2)∥x∥²,

where α > 0 is the regularization parameter, and again f is convex with Lipschitz continuous gradient ∇f. While a regularization method can provide strong convergence to the minimum-norm solution, its disadvantage is that it is implicit. Hence explicit iterative methods seem to be attractive. See, e.g., Xu [2, 3].
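The effect of letting the regularization parameter decay along an explicit iteration can be seen in a small sketch (all concrete choices below are illustrative assumptions): in ℝ², the functional f(x) = (1/2)(x₁ − 1)² has the whole line {x₁ = 1} as its minimizer set, and the minimum-norm minimizer inside C = {∥x∥ ≤ 2} is (1, 0). Iterating x_{n+1} = P_C(x_n − λ∇f_{α_n}(x_n)) with α_n → 0 drives the iterate toward that minimum-norm point.

```python
import numpy as np

def project_ball(x, radius=2.0):
    nx = np.linalg.norm(x)
    return x if nx <= radius else x * (radius / nx)

# f(x) = 0.5*(x1 - 1)^2: every point with x1 = 1 is a minimizer in R^2;
# the minimum-norm minimizer within C = {||x|| <= 2} is (1, 0).
def grad_f(x):
    return np.array([x[0] - 1.0, 0.0])

x = np.array([1.0, 1.5])           # start on the minimizer line but away from (1, 0)
lam = 0.5                          # step size, 0 < lam < 2/(alpha_n + L) with L = 1
for n in range(20000):
    alpha = 1.0 / (n + 1.0)        # regularization parameter alpha_n -> 0
    grad_reg = grad_f(x) + alpha * x   # grad f_alpha = grad f + alpha * I
    x = project_ball(x - lam * grad_reg)
# the second coordinate is damped by the prod of (1 - lam*alpha_n) -> 0,
# so x approaches the minimum-norm minimizer (1, 0)
```

Without the α-term the iteration would stay at (1, 1.5) forever; the vanishing regularization is what selects the minimum-norm solution.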

On the other hand, for a given mapping A : C → H, we consider the variational inequality problem (VIP) of finding x^* ∈ C such that

⟨Ax^*, x − x^*⟩ ≥ 0, ∀x ∈ C.
(1.3)

The solution set of VIP (1.3) is denoted by VI(C,A). It is well known that when A is monotone,

x^* ∈ VI(C,A) ⟺ x^* = P_C(x^* − λAx^*), ∀λ > 0.

Variational inequality theory has been studied quite extensively and has emerged as an important tool in several branches of pure and applied sciences; see, e.g., [1, 4–8] and the references therein.
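The fixed-point characterization x^* = P_C(x^* − λAx^*) suggests a simple computational scheme, sketched below on assumed toy data (not from the paper): A(x) = Mx + q with M symmetric positive definite (hence strongly monotone and Lipschitz), C the box [0,1]^n, and a step size small enough that P_C(I − λA) is a contraction. The VI condition ⟨Ax^*, y − x^*⟩ ≥ 0 is then checked on sampled points y ∈ C.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
P = rng.standard_normal((n, n))
M = P.T @ P + np.eye(n)            # symmetric PD: eta-strongly monotone, L-Lipschitz
q = rng.standard_normal(n)
A = lambda x: M @ x + q

eigs = np.linalg.eigvalsh(M)
eta, L = eigs[0], eigs[-1]
lam = eta / L**2                   # makes P_C(I - lam*A) a contraction

project_box = lambda x: np.clip(x, 0.0, 1.0)   # C = [0,1]^n

x = np.zeros(n)
for _ in range(20000):
    x = project_box(x - lam * A(x))            # x* = P_C(x* - lam * A x*)

# verify the VI: <A x*, y - x*> >= 0 for sampled y in C
ys = rng.random((1000, n))
vals = (ys - x) @ A(x)
```

The same fixed-point iteration underlies the projection methods discussed throughout this paper.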

When C is the fixed point set Fix(T) of a nonexpansive mapping T and A = I − S, VIP (1.3) becomes the variational inequality problem of finding x^* ∈ Fix(T) such that

⟨(I − S)x^*, x − x^*⟩ ≥ 0, ∀x ∈ Fix(T).
(1.4)

This problem, introduced by Moudafi and Maingé [9, 10], is called the hierarchical fixed point problem. It is clear that if S has fixed points, then they are solutions of VIP (1.4). If S is a contraction, the solution set of VIP (1.4) is a singleton, and the problem is well known as a viscosity problem. This was previously introduced by Moudafi [11] and also developed by Xu [12]. In this case, solving VIP (1.4) is equivalent to finding a fixed point of the nonexpansive mapping P_{Fix(T)}S, where P_{Fix(T)} is the metric projection onto the closed and convex set Fix(T). Yao et al. [8] introduced a two-step algorithm to solve VIP (1.4).

Let Θ : C×C → ℝ be a bifunction and φ : C → ℝ be a function. Consider the mixed equilibrium problem (MEP) of finding x ∈ C such that

Θ(x,y) + φ(y) − φ(x) ≥ 0, ∀y ∈ C,
(1.5)

which was studied by Ceng and Yao [13]. The solution set of MEP (1.5) is denoted by MEP(Θ,φ). The MEP (1.5) is very general in the sense that it includes, as special cases, fixed point problems, optimization problems, variational inequality problems, minimax problems, Nash equilibrium problems in noncooperative games and others; see, e.g., [13–15].

Recently, Iiduka [16, 17] considered a variational inequality with a variational inequality constraint over the set of fixed points of a nonexpansive mapping. Since this problem has a triple structure, in contrast with bilevel programming problems, hierarchical constrained optimization problems, or hierarchical fixed point problems, it is referred to as a triple hierarchical constrained optimization problem (THCOP). He presented some examples of the THCOP and developed iterative algorithms to find its solution. Since the original problem is a variational inequality, in this paper we call it a triple hierarchical variational inequality (THVI). Ceng et al. introduced and considered several THVIs in [18]. A nice survey article on the THVI is [19]. See also [20–22].

Extending the works done in [18], we introduce and study in this paper the following triple hierarchical variational inequality with constraints of minimization and equilibrium problems.

The problem to study

Let C be a nonempty closed convex subset of a real Hilbert space H. Let f : C → ℝ be convex and continuously Fréchet differentiable with Γ being the set of its minimizers. Let T : C → C and S : H → H be both nonexpansive. Let V : H → H be ρ-contractive, and let F : C → H be κ-Lipschitzian and η-strongly monotone with constants ρ ∈ [0,1) and κ, η > 0. Suppose 0 < μ < 2η/κ² and 0 < γ ≤ τ, where τ = 1 − √(1 − μ(2η − μκ²)).

Let Ξ denote the solution set of the following hierarchical variational inequality (HVI): find z^* ∈ Fix(T) ∩ MEP(Θ,φ) ∩ Γ such that

⟨(μF − γS)z^*, z − z^*⟩ ≥ 0, ∀z ∈ Fix(T) ∩ MEP(Θ,φ) ∩ Γ,

where the solution set Ξ is assumed to be nonempty. Consider the following triple hierarchical variational inequality (THVI).

Find x^* ∈ Ξ such that

⟨(μF − γV)x^*, x − x^*⟩ ≥ 0, ∀x ∈ Ξ.
(1.6)

Based on the iterative schemes provided by Xu [2] and the two-step iterative scheme provided by Yao et al. [8], by virtue of the viscosity approximation method, the hybrid steepest-descent method, and the regularization method, we propose the following hybrid iterative algorithm with regularization:

Θ(u_n, y) + φ(y) − φ(u_n) + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C,
y_n = θ_n γS x_n + (I − θ_n μF)P_C(u_n − λ∇f_{α_n}(u_n)),
x_{n+1} = β_n γV y_n + (I − β_n μF)T P_C(u_n − λ∇f_{α_n}(u_n)), n ≥ 0.

Here, 0 < λ < 2/L, {r_n}, {α_n} ⊂ (0, +∞), and {β_n}, {θ_n} ⊂ (0,1). It is shown that under appropriate assumptions, the two iterative sequences {x_n} and {y_n} converge strongly to the unique solution of the THVI (1.6).

2 Preliminaries

Let K be a nonempty closed convex subset of a real Hilbert space H. We write x_n ⇀ x and x_n → x to indicate that the sequence {x_n} converges weakly and strongly to x, respectively. The weak ω-limit set of the sequence {x_n} is denoted by

ω_w(x_n) := {x ∈ H : x_{n_i} ⇀ x for some subsequence {x_{n_i}} of {x_n}}.

The metric (or nearest point) projection from H onto K is the mapping P_K : H → K which assigns to each point x ∈ H the unique point P_K x ∈ K satisfying the property

∥x − P_K x∥ = inf_{y∈K} ∥x − y∥ =: d(x, K).

Proposition 2.1 For given x ∈ H and z ∈ K:

  1. (i)

    z = P_K x ⟺ ⟨x − z, y − z⟩ ≤ 0, ∀y ∈ K;

  2. (ii)

    z = P_K x ⟹ ∥x − z∥² ≤ ∥x − y∥² − ∥y − z∥², ∀y ∈ K;

  3. (iii)

    ⟨P_K x − P_K y, x − y⟩ ≥ ∥P_K x − P_K y∥², ∀y ∈ H.

Hence, P_K is nonexpansive and monotone.
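Properties (i) and (iii) are easy to check numerically. The following sketch (an illustration, assuming K is the closed unit ball so that P_K is the radial projection) samples random points and verifies both inequalities:

```python
import numpy as np

def P_K(x, radius=1.0):
    """Metric projection onto K = {x : ||x|| <= radius} (radial projection)."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else x * (radius / nx)

rng = np.random.default_rng(0)
worst_i, worst_iii = np.inf, np.inf
for _ in range(1000):
    x, y = 3.0 * rng.standard_normal(3), 3.0 * rng.standard_normal(3)
    z, py = P_K(x), P_K(y)
    # (i): <x - z, w - z> <= 0 for every w in K (here w = P_K y, which lies in K)
    worst_i = min(worst_i, -np.dot(x - z, py - z))
    # (iii): <P_K x - P_K y, x - y> >= ||P_K x - P_K y||^2 (firm nonexpansivity)
    worst_iii = min(worst_iii, np.dot(z - py, x - y) - np.dot(z - py, z - py))
```

Both `worst_i` and `worst_iii` stay nonnegative up to floating-point error, as Proposition 2.1 predicts.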

Definition 2.2 A mapping T : H → H is said to be firmly nonexpansive if 2T − I is nonexpansive, or equivalently,

⟨x − y, Tx − Ty⟩ ≥ ∥Tx − Ty∥², ∀x, y ∈ H.

Alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = (1/2)(I + S),

where S : H → H is nonexpansive. Projections are firmly nonexpansive. We call T an averaged mapping if T can be expressed as a proper convex combination of the identity map I and a nonexpansive mapping. In particular, firmly nonexpansive mappings are averaged.

Proposition 2.3 (see [23])

Let T : H → H be a given mapping.

  1. (i)

    T is nonexpansive if and only if the complement I − T is (1/2)-inverse strongly monotone.

  2. (ii)

    If T is ν-inverse strongly monotone, then γT is (ν/γ)-inverse strongly monotone for all γ > 0.

  3. (iii)

    T is averaged if and only if the complement I − T is ν-inverse strongly monotone for some ν > 1/2. Indeed, for α ∈ (0,1), T is α-averaged if and only if I − T is (1/(2α))-inverse strongly monotone.

Proposition 2.4 (see [23, 24])

Let S,T,V:HH.

  1. (i)

    If T=(1α)S+αV for some α(0,1) and if S is averaged and V is nonexpansive, then T is averaged.

  2. (ii)

    T is firmly nonexpansive if and only if the complement IT is firmly nonexpansive.

  3. (iii)

    If T=(1α)S+αV for some α(0,1) and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.

  4. (iv)

The composition of finitely many averaged mappings is averaged. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1, α_2 ∈ (0,1), then T_1 T_2 is (α_1 + α_2 − α_1α_2)-averaged.

  5. (v)

    If the mappings T 1 ,, T N are averaged and have a common fixed point, then

    ⋂_{i=1}^N Fix(T_i) = Fix(T_1 ⋯ T_N).

For solving the equilibrium problem for a bifunction Θ : C×C → ℝ, let us consider the following conditions:

  • (A1) Θ(x,x) = 0 for all x ∈ C;

  • (A2) Θ is monotone, that is, Θ(x,y) + Θ(y,x) ≤ 0 for all x, y ∈ C;

  • (A3) for each x, y, z ∈ C, lim_{t→0} Θ(tz + (1−t)x, y) ≤ Θ(x,y);

  • (A4) for each x ∈ C, y ↦ Θ(x,y) is convex and lower semicontinuous;

  • (A5) for each y ∈ C, x ↦ Θ(x,y) is weakly upper semicontinuous;

  • (B1) for each x ∈ H and r > 0, there exist a bounded subset D_x ⊆ C and y_x ∈ C such that for any z ∈ C \ D_x,

    Θ(z, y_x) + φ(y_x) − φ(z) + (1/r)⟨y_x − z, z − x⟩ < 0;
  • (B2) C is a bounded set.

Lemma 2.5 (see [14])

Let C be a nonempty closed convex subset of a real Hilbert space H and Θ : C×C → ℝ be a bifunction satisfying (A1)-(A4). Let r > 0 and x ∈ H. Then there exists z ∈ C such that

Θ(z,y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C.

Lemma 2.6 (see [15])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let Θ : C×C → ℝ be a bifunction satisfying (A1)-(A5) and φ : C → ℝ be a proper lower semicontinuous and convex function. For r > 0 and x ∈ H, define a mapping T_r : H → C as follows:

T_r x := {z ∈ C : Θ(z,y) + φ(y) − φ(z) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C}

for all x ∈ H. Assume that either (B1) or (B2) holds. Then T_r is a single-valued firmly nonexpansive map on H, and Fix(T_r) = MEP(Θ,φ) is closed and convex.
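To see Lemma 2.6 in a computable special case (an illustrative assumption, not from the paper), take Θ ≡ 0 and C = H = ℝⁿ. The defining inequality of T_r then says that z minimizes φ(y) + (1/(2r))∥y − x∥², i.e., T_r is the proximal mapping of rφ. For φ = ∥·∥₁ this is coordinatewise soft-thresholding, and its firm nonexpansivity can be checked numerically:

```python
import numpy as np

def T_r(x, r):
    """Resolvent T_r with Theta = 0 and C = H = R^n, phi = l1-norm:
    it reduces to the proximal map of r*phi, i.e. soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

rng = np.random.default_rng(3)
worst = np.inf
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    tx, ty = T_r(x, 0.7), T_r(y, 0.7)
    # firm nonexpansivity (Lemma 2.6): <x - y, T_r x - T_r y> >= ||T_r x - T_r y||^2
    worst = min(worst, np.dot(x - y, tx - ty) - np.dot(tx - ty, tx - ty))
```

Here Fix(T_r) = {0} = MEP(0, ∥·∥₁), matching the lemma's conclusion in this toy case.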

Lemma 2.7 (see [25])

Let {a_n} be a sequence of nonnegative real numbers such that

a_{n+1} ≤ (1 − s_n)a_n + s_n t_n + ε_n, ∀n ≥ 0.

Here, 0 < s_n ≤ 1, ε_n ≥ 0, and t_n ∈ ℝ for all n = 0, 1, 2, …, such that

  1. (i)

    ∑_{n=0}^∞ s_n = +∞;

  2. (ii)

    either lim sup_{n→∞} t_n ≤ 0 or ∑_{n=0}^∞ s_n|t_n| < +∞;

  3. (iii)

    ∑_{n=0}^∞ ε_n < +∞.

Then lim_{n→∞} a_n = 0.

Lemma 2.8 (Demiclosedness principle; see [1])

Let C be a nonempty closed convex subset of a real Hilbert space H and let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in C converging weakly to x and if {(I − T)x_n} converges strongly to y, then (I − T)x = y; in particular, if y = 0, then x ∈ Fix(T).

Lemma 2.9 (see [12])

Let S : H → H be a nonexpansive mapping and V : H → H be a ρ-contraction with ρ ∈ [0,1), respectively.

  1. (i)

    I − S is monotone, i.e.,

    ⟨(I − S)x − (I − S)y, x − y⟩ ≥ 0, ∀x, y ∈ H.
  2. (ii)

    I − V is (1 − ρ)-strongly monotone, i.e.,

    ⟨(I − V)x − (I − V)y, x − y⟩ ≥ (1 − ρ)∥x − y∥², ∀x, y ∈ H.

Lemma 2.10 ([26])

Let H be a real Hilbert space. Then, for all x, y ∈ H and λ ∈ [0,1],

∥λx + (1 − λ)y∥² = λ∥x∥² + (1 − λ)∥y∥² − λ(1 − λ)∥x − y∥².

Lemma 2.11 We have the following inequality in an inner product space X:

∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩, ∀x, y ∈ X.

Notations Let λ be a number in (0,1] and let μ, κ, η > 0. Let F : C → H be κ-Lipschitzian and η-strongly monotone. Associated with a nonexpansive mapping T : C → C, we define the mapping T^λ : C → H by

T^λ x := (I − λμF)Tx, ∀x ∈ C.

Lemma 2.12 (see [[27], Lemma 3.1])

The map T^λ is a contraction provided μ < 2η/κ², that is,

∥T^λ x − T^λ y∥ ≤ (1 − λτ)∥x − y∥, ∀x, y ∈ C,

where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0,1]. In particular, if T = I, the identity mapping, then

∥(I − λμF)x − (I − λμF)y∥ ≤ (1 − λτ)∥x − y∥, ∀x, y ∈ C.
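Lemma 2.12 can be sanity-checked numerically. In the sketch below (an illustration with assumed data), F is a symmetric positive definite matrix, hence η-strongly monotone and κ-Lipschitz with η and κ its extreme eigenvalues, and T is the nonexpansive projection onto the unit ball:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
Fmat = B.T @ B + np.eye(3)           # symmetric PD: eta-strongly monotone, kappa-Lipschitz
eigs = np.linalg.eigvalsh(Fmat)
eta, kappa = eigs[0], eigs[-1]

mu = eta / kappa**2                  # satisfies 0 < mu < 2*eta/kappa^2
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * eta - mu * kappa**2))
lam = 0.5                            # lambda in (0, 1]

def T(x):                            # nonexpansive: projection onto the unit ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

def T_lam(x):                        # T^lam := (I - lam*mu*F) T
    tx = T(x)
    return tx - lam * mu * (Fmat @ tx)

worst = -np.inf
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    ratio = np.linalg.norm(T_lam(x) - T_lam(y)) / np.linalg.norm(x - y)
    worst = max(worst, ratio)
# Lemma 2.12 predicts every ratio <= 1 - lam*tau < 1
```

The observed worst-case Lipschitz ratio indeed stays below the bound 1 − λτ of the lemma.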

A set-valued mapping T̃ : H → 2^H is called monotone if ⟨x − y, f − g⟩ ≥ 0 for all x, y ∈ H, f ∈ T̃x and g ∈ T̃y. A monotone set-valued mapping T̃ : H → 2^H is called maximal if its graph Gph(T̃) is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping T̃ : H → 2^H is maximal if and only if for (x,f) ∈ H×H, ⟨x − y, f − g⟩ ≥ 0 for every (y,g) ∈ Gph(T̃) implies that f ∈ T̃x.

Let A : C → H be a monotone and Lipschitz continuous mapping and let N_C v be the normal cone to C at v ∈ C, namely

N_C v = {w ∈ H : ⟨v − u, w⟩ ≥ 0, ∀u ∈ C}.

Define

T̃v = Av + N_C v if v ∈ C, and T̃v = ∅ if v ∉ C.

Lemma 2.13 (see [28])

Let A : C → H be a monotone mapping.

  1. (i)

    T̃ is maximal monotone;

  2. (ii)

    v ∈ T̃^{-1}0 ⟺ v ∈ VI(C,A).

3 Main results

Let us consider the following three-step iterative scheme with regularization:

Θ(u_n, y) + φ(y) − φ(u_n) + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C,
y_n = θ_n γS x_n + (I − θ_n μF)P_C(u_n − λ∇f_{α_n}(u_n)),
x_{n+1} = β_n γV y_n + (I − β_n μF)T P_C(u_n − λ∇f_{α_n}(u_n)), n = 0, 1, 2, ….
(3.1)

Here,

  • V : H → H is a ρ-contraction;

  • S : H → H and T : C → C are nonexpansive mappings;

  • F : C → H is a κ-Lipschitzian and η-strongly monotone mapping;

  • Θ : C×C → ℝ and φ : C → ℝ are real-valued functions;

  • ∇f : C → H is L-Lipschitz continuous with 0 < λ < 2/L;

  • {r_n} and {α_n} are sequences in (0, +∞) with ∑_{n=0}^∞ α_n < +∞ and lim inf_{n→∞} r_n > 0;

  • {β_n} and {θ_n} are sequences in (0,1);

  • 0 < μ < 2η/κ² and 0 < γ ≤ τ, where τ = 1 − √(1 − μ(2η − μκ²)).
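The following toy run of scheme (3.1) illustrates the strong convergence asserted below; every concrete choice here is an illustrative assumption, not from the paper. We take H = ℝ², C the closed unit ball, Θ ≡ 0 and φ ≡ 0 (so the first relation in (3.1) gives u_n = P_C x_n), f(x) = (1/2)∥x − b∥² (so L = 1), T = S = I, F = I (κ = η = 1), V ≡ 0 (ρ = 0), and μ = γ = 1 (so τ = 1); the parameter sequences are chosen so that (H1)-(H4) of Theorem 3.1 hold. Here Fix(T) ∩ MEP(Θ,φ) ∩ Γ = {P_C(b)}, so the scheme should drive x_n to P_C(b).

```python
import numpy as np

b = np.array([2.0, 1.0])                   # data point outside the unit ball

def P_C(x):                                # projection onto C = closed unit ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

lam = 0.9                                  # 0 < lam < 2/(alpha_n + L) for all n
x = np.zeros(2)
for n in range(5000):
    theta = (n + 1.0) ** (-1.0 / 3.0)      # theta_n
    beta = (n + 1.0) ** (-1.0 / 2.0)       # beta_n
    alpha = (n + 1.0) ** (-2.0)            # alpha_n (summable)
    u = P_C(x)                             # resolvent step: Theta = phi = 0 gives u_n = P_C x_n
    grad_reg = (u - b) + alpha * u         # grad f_{alpha_n}(u_n)
    u_tilde = P_C(u - lam * grad_reg)
    y = theta * x + (1.0 - theta) * u_tilde    # y_n (S = F = I, gamma = mu = 1)
    x = (1.0 - beta) * u_tilde                 # x_{n+1}; with V = 0, y_n does not enter

x_star = b / np.linalg.norm(b)             # P_C(b): the expected strong limit here
```

With V ≡ 0, the THVI solution is the minimum-norm element of Ξ, which in this degenerate toy case is the single point P_C(b).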

Theorem 3.1 Suppose that Θ : C×C → ℝ satisfies (A1)-(A5) and that (B1) or (B2) holds. Let {x_n} be the bounded sequence generated from any given x_0 ∈ C by (3.1). Assume that

(H1) ∑_{n=0}^∞ β_n = +∞ and lim_{n→∞} (1/β_n)|1 − θ_{n−1}/θ_n| = 0;

(H2) lim_{n→∞} (1/β_n)|1/θ_n − 1/θ_{n−1}| = 0 and lim_{n→∞} (1/θ_n)|1 − β_{n−1}/β_n| = 0;

(H3) lim_{n→∞} θ_n = 0, lim_{n→∞} (α_n + β_n)/θ_n = 0, and lim_{n→∞} θ_n²/β_n = 0;

(H4) lim_{n→∞} |α_n − α_{n−1}|/(β_n θ_n) = 0 and lim_{n→∞} |r_n − r_{n−1}|/(β_n θ_n) = 0.

Then we have the following:

  1. (i)

    lim_{n→∞} ∥x_{n+1} − x_n∥/θ_n = 0;

  2. (ii)

    ω_w(x_n) ⊆ Fix(T) ∩ MEP(Θ,φ) ∩ Γ;

  3. (iii)

    ω_w(x_n) ⊆ Ξ if, in addition, ∥x_n − y_n∥ = o(θ_n), i.e., lim_{n→∞} ∥x_n − y_n∥/θ_n = 0.

Proof First, let us show that P_C(I − λ∇f_α) is ξ-averaged for each λ ∈ (0, 2/(α + L)), where

ξ = (2 + λ(α + L))/4 ∈ (0,1).

The Lipschitz condition implies that the gradient ∇f is (1/L)-inverse strongly monotone [1], that is,

⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/L)∥∇f(x) − ∇f(y)∥².

Observe that

(α + L)⟨∇f_α(x) − ∇f_α(y), x − y⟩
= (α + L)[α∥x − y∥² + ⟨∇f(x) − ∇f(y), x − y⟩]
= α²∥x − y∥² + α⟨∇f(x) − ∇f(y), x − y⟩ + αL∥x − y∥² + L⟨∇f(x) − ∇f(y), x − y⟩
≥ α²∥x − y∥² + 2α⟨∇f(x) − ∇f(y), x − y⟩ + ∥∇f(x) − ∇f(y)∥²
= ∥α(x − y) + ∇f(x) − ∇f(y)∥²
= ∥∇f_α(x) − ∇f_α(y)∥².

Hence, ∇f_α = αI + ∇f is 1/(α + L)-inverse strongly monotone. Thus, λ∇f_α is 1/(λ(α + L))-inverse strongly monotone by Proposition 2.3(ii). By Proposition 2.3(iii), the complement I − λ∇f_α is (λ(α + L)/2)-averaged. Noting that P_C is (1/2)-averaged and utilizing Proposition 2.4(iv), we know that for each λ ∈ (0, 2/(α + L)), the map P_C(I − λ∇f_α) is ξ-averaged with

ξ = 1/2 + λ(α + L)/2 − (1/2)·(λ(α + L)/2) = (2 + λ(α + L))/4 ∈ (0,1).

In particular, P_C(I − λ∇f_α) is nonexpansive. Furthermore, for λ ∈ (0, 2/L), utilizing the fact that lim_{n→∞} 2/(α_n + L) = 2/L, we may assume

0 < λ < 2/(α_n + L), ∀n ≥ 0.

Consequently, for each integer n ≥ 0, P_C(I − λ∇f_{α_n}) is ξ_n-averaged with

ξ_n = 1/2 + λ(α_n + L)/2 − (1/2)·(λ(α_n + L)/2) = (2 + λ(α_n + L))/4 ∈ (0,1).

This immediately implies that P_C(I − λ∇f_{α_n}) is nonexpansive for all n ≥ 0.

We divide the proof into several steps.

Step 1. lim_{n→∞} ∥x_{n+1} − x_n∥/θ_n = 0.

For simplicity, put ũ_n = P_C(u_n − λ∇f_{α_n}(u_n)). Then y_n = θ_n γS x_n + (I − θ_n μF)ũ_n and x_{n+1} = β_n γV y_n + (I − β_n μF)Tũ_n for every n ≥ 0. We observe that

u ˜ n u ˜ n 1 P C ( I λ f α n ) u n P C ( I λ f α n ) u n 1 + P C ( I λ f α n ) u n 1 P C ( I λ f α n 1 ) u n 1 u n u n 1 + P C ( I λ f α n ) u n 1 P C ( I λ f α n 1 ) u n 1 u n u n 1 + ( I λ f α n ) u n 1 ( I λ f α n 1 ) u n 1 = u n u n 1 + λ f α n ( u n 1 ) λ f α n 1 ( u n 1 ) = u n u n 1 + λ | α n α n 1 | u n 1 , n 1 .
(3.2)

Moreover, from (3.1) we have

{ y n = θ n γ S x n + ( I θ n μ F ) u ˜ n , y n 1 = θ n 1 γ S x n 1 + ( I θ n 1 μ F ) u ˜ n 1 , n 1 .

Thus

y n y n 1 = θ n ( γ S x n γ S x n 1 ) + ( θ n θ n 1 ) ( γ S x n 1 μ F u ˜ n 1 ) + ( I θ n μ F ) u ˜ n ( I θ n μ F ) u ˜ n 1 .

Utilizing Lemma 2.12 from (3.2) we deduce that

y n y n 1 θ n γ S x n γ S x n 1 + | θ n θ n 1 | γ S x n 1 μ F u ˜ n 1 + ( I θ n μ F ) u ˜ n ( I θ n μ F ) u ˜ n 1 θ n γ x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F u ˜ n 1 + ( 1 θ n τ ) u ˜ n u ˜ n 1 θ n γ x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F u ˜ n 1 + ( 1 θ n τ ) ( u n u n 1 + λ | α n α n 1 | u n 1 ) ,
(3.3)

where τ = 1 − √(1 − μ(2η − μκ²)). Taking into consideration that u_n = T_{r_n}x_n and u_{n−1} = T_{r_{n−1}}x_{n−1}, we have

Θ(u_n, y) + φ(y) − φ(u_n) + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0, ∀y ∈ C
(3.4)

and

Θ(u_{n−1}, y) + φ(y) − φ(u_{n−1}) + (1/r_{n−1})⟨y − u_{n−1}, u_{n−1} − x_{n−1}⟩ ≥ 0, ∀y ∈ C.
(3.5)

Putting y = u_{n−1} in (3.4) and y = u_n in (3.5), we obtain

Θ(u_n, u_{n−1}) + φ(u_{n−1}) − φ(u_n) + (1/r_n)⟨u_{n−1} − u_n, u_n − x_n⟩ ≥ 0

and

Θ(u_{n−1}, u_n) + φ(u_n) − φ(u_{n−1}) + (1/r_{n−1})⟨u_n − u_{n−1}, u_{n−1} − x_{n−1}⟩ ≥ 0.

Adding the last two inequalities, by (A2) we get

⟨u_n − u_{n−1}, (u_{n−1} − x_{n−1})/r_{n−1} − (u_n − x_n)/r_n⟩ ≥ 0,

and hence

⟨u_n − u_{n−1}, u_{n−1} − u_n + u_n − x_{n−1} − (r_{n−1}/r_n)(u_n − x_n)⟩ ≥ 0.

Since lim inf_{n→∞} r_n > 0, we may assume, without loss of generality, that there exists a positive number c such that r_n ≥ c > 0 for all n ≥ 0. Thus we have

∥u_n − u_{n−1}∥² ≤ ⟨u_n − u_{n−1}, x_n − x_{n−1} + (1 − r_{n−1}/r_n)(u_n − x_n)⟩
≤ ∥u_n − u_{n−1}∥[∥x_n − x_{n−1}∥ + |1 − r_{n−1}/r_n|∥u_n − x_n∥],

and hence

∥u_n − u_{n−1}∥ ≤ ∥x_n − x_{n−1}∥ + |1 − r_{n−1}/r_n|∥u_n − x_n∥ ≤ ∥x_n − x_{n−1}∥ + (M_0/c)|r_n − r_{n−1}|.
(3.6)

Here, M_0 = sup{∥u_n − x_n∥ : n ≥ 0} < +∞.

Substituting (3.6) into (3.3) we derive

y n y n 1 θ n γ x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F u ˜ n 1 + ( 1 θ n τ ) ( x n x n 1 + M 0 c | r n r n 1 | + λ | α n α n 1 | u n 1 ) ( 1 θ n ( τ γ ) ) x n x n 1 + | θ n θ n 1 | γ S x n 1 μ F u ˜ n 1 + M 0 c | r n r n 1 | + λ | α n α n 1 | u n 1 x n x n 1 + M 1 ( | θ n θ n 1 | + | r n r n 1 | + | α n α n 1 | ) .
(3.7)

Here, ∥γS x_n − μF ũ_n∥ + M_0/c + λ∥u_n∥ ≤ M_1, ∀n ≥ 0, for some M_1 ≥ 0.

On the other hand, from (3.1) we have

{ x n + 1 = β n γ V y n + ( I β n μ F ) T u ˜ n , x n = β n 1 γ V y n 1 + ( I β n 1 μ F ) T u ˜ n 1 , n 1 .

Simple calculations show that

x n + 1 x n = ( I β n μ F ) T u ˜ n ( I β n μ F ) T u ˜ n 1 + ( β n β n 1 ) ( γ V y n 1 μ F T u ˜ n 1 ) + β n ( γ V y n γ V y n 1 ) .

Utilizing Lemma 2.12 from (3.2), (3.6), and (3.7) we deduce that

x n + 1 x n ( I β n μ F ) T u ˜ n ( I β n μ F ) T u ˜ n 1 + | β n β n 1 | γ V y n 1 μ F T u ˜ n 1 + β n γ V y n γ V y n 1 ( 1 β n τ ) u ˜ n u ˜ n 1 + | β n β n 1 | γ V y n 1 μ F T u ˜ n 1 + β n γ ρ y n y n 1 ( 1 β n τ ) ( u n u n 1 + λ | α n α n 1 | u n 1 ) + | β n β n 1 | γ V y n 1 μ F T u ˜ n 1 + β n γ ρ y n y n 1 ( 1 β n τ ) ( x n x n 1 + M 0 c | r n r n 1 | + λ | α n α n 1 | u n 1 ) + | β n β n 1 | γ V y n 1 μ F T u ˜ n 1 + β n γ ρ [ x n x n 1 + M 1 ( | θ n θ n 1 | + | r n r n 1 | + | α n α n 1 | ) ] ( 1 β n τ ) [ x n x n 1 + M 1 ( | r n r n 1 | + | α n α n 1 | ) ] + | β n β n 1 | γ V y n 1 μ F T u ˜ n 1 + β n γ ρ [ x n x n 1 + M 1 ( | θ n θ n 1 | + | r n r n 1 | + | α n α n 1 | ) ] ( 1 β n τ ) x n x n 1 + ( 1 β n τ ) M 1 ( | r n r n 1 | + | α n α n 1 | ) + | β n β n 1 | γ V y n 1 μ F T u ˜ n 1 + β n γ ρ x n x n 1 + β n τ M 1 ( | θ n θ n 1 | + | r n r n 1 | + | α n α n 1 | ) ( 1 β n ( τ γ ρ ) ) x n x n 1 + | β n β n 1 | γ V y n 1 μ F T u ˜ n 1 + M 1 ( | θ n θ n 1 | + | r n r n 1 | + | α n α n 1 | ) ( 1 β n ( τ γ ρ ) ) x n x n 1 + M ( | α n α n 1 | + | β n β n 1 | + | θ n θ n 1 | + | r n r n 1 | ) ,

where M_1 + ∥γV y_n − μF Tũ_n∥ ≤ M, ∀n ≥ 0, for some M ≥ 0. Therefore,

x n + 1 x n θ n ( 1 β n ( τ γ ρ ) ) x n x n 1 θ n + M ( | α n α n 1 | θ n + | β n β n 1 | θ n + | θ n θ n 1 | θ n + | r n r n 1 | θ n ) = ( 1 ( τ γ ρ ) β n ) x n x n 1 θ n 1 + ( 1 ( τ γ ρ ) β n ) x n x n 1 ( 1 θ n 1 θ n 1 ) + M ( | α n α n 1 | θ n + | β n β n 1 | θ n + | θ n θ n 1 | θ n + | r n r n 1 | θ n ) ( 1 ( τ γ ρ ) β n ) x n x n 1 θ n 1 + ( τ γ ρ ) β n 1 τ γ ρ { x n x n 1 1 β n | 1 θ n 1 θ n 1 | + M ( | α n α n 1 | β n θ n + | β n β n 1 | β n θ n + | θ n θ n 1 | β n θ n + | r n r n 1 | β n θ n ) } ( 1 ( τ γ ρ ) β n ) x n x n 1 θ n 1 + ( τ γ ρ ) β n M ˜ τ γ ρ { 1 β n | 1 θ n 1 θ n 1 | + | α n α n 1 | β n θ n + 1 θ n | 1 β n 1 β n | + 1 β n | 1 θ n 1 θ n | + | r n r n 1 | β n θ n } ,
(3.8)

where M + ∥x_n − x_{n−1}∥ ≤ M̃, ∀n ≥ 1, for some M̃ ≥ 0. From (H1), (H2), and (H4), it follows that ∑_{n=0}^∞ (τ − γρ)β_n = +∞ and

lim_{n→∞} (M̃/(τ − γρ)){(1/β_n)|1/θ_n − 1/θ_{n−1}| + |α_n − α_{n−1}|/(β_n θ_n) + (1/θ_n)|1 − β_{n−1}/β_n| + (1/β_n)|1 − θ_{n−1}/θ_n| + |r_n − r_{n−1}|/(β_n θ_n)} = 0.

Applying Lemma 2.7 to (3.8), we immediately conclude that

lim_{n→∞} ∥x_{n+1} − x_n∥/θ_n = 0.

In particular, from (H3) it follows that

lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.

Step 2. lim_{n→∞} ∥x_n − u_n∥ = 0 and lim_{n→∞} ∥y_n − ũ_n∥ = 0.

By the firm nonexpansivity of T_{r_n}, for v ∈ MEP(Θ,φ) = Fix(T_{r_n}) we have

∥u_n − v∥² = ∥T_{r_n}x_n − T_{r_n}v∥² ≤ ⟨T_{r_n}x_n − T_{r_n}v, x_n − v⟩ = (1/2){∥u_n − v∥² + ∥x_n − v∥² − ∥T_{r_n}x_n − T_{r_n}v − (x_n − v)∥²}.

This immediately yields

∥u_n − v∥² ≤ ∥x_n − v∥² − ∥x_n − u_n∥².
(3.9)

Let p ∈ Fix(T) ∩ MEP(Θ,φ) ∩ Γ. We have

∥ũ_n − p∥ = ∥P_C(I − λ∇f_{α_n})u_n − P_C(I − λ∇f)p∥
≤ ∥P_C(I − λ∇f_{α_n})u_n − P_C(I − λ∇f_{α_n})p∥ + ∥P_C(I − λ∇f_{α_n})p − P_C(I − λ∇f)p∥
≤ ∥u_n − p∥ + ∥(I − λ∇f_{α_n})p − (I − λ∇f)p∥
= ∥u_n − p∥ + λα_n∥p∥.
(3.10)

Note that

y_n − p = θ_n γS x_n − θ_n μF p + (I − θ_n μF)ũ_n − (I − θ_n μF)p
= θ_n(γS x_n − μF p) + (1 − θ_n)(ũ_n − p) + θ_n[(I − μF)ũ_n − (I − μF)p]
= θ_n(γS x_n + (I − μF)ũ_n − p) + (1 − θ_n)(ũ_n − p).

Hence we have

y_n − ũ_n = θ_n(γS x_n + (I − μF)ũ_n − ũ_n).

By Lemmas 2.10 and 2.12, we have from (3.9) and (3.10) that

y n p 2 = θ n ( γ S x n + ( I μ F ) u ˜ n p ) + ( 1 θ n ) ( u ˜ n p ) 2 = θ n γ S x n + ( I μ F ) u ˜ n p 2 + ( 1 θ n ) u ˜ n p 2 θ n ( 1 θ n ) γ S x n + ( I μ F ) u ˜ n u ˜ n 2 = θ n γ S x n + ( I μ F ) u ˜ n p 2 + ( 1 θ n ) u ˜ n p 2 1 θ n θ n y n u ˜ n 2 θ n γ S x n + ( I μ F ) u ˜ n p 2 + u ˜ n p 2 1 θ n θ n y n u ˜ n 2 .
(3.11)

Furthermore, utilizing Lemmas 2.11 and 2.12 we have from (3.9) and (3.10) that

x n + 1 p 2 = β n γ V y n β n μ F T p + ( I β n μ F ) T u ˜ n ( I β n μ F ) T p 2 ( β n γ V y n β n μ F T p + ( I β n μ F ) T u ˜ n ( I β n μ F ) T p ) 2 ( β n γ V y n μ F p + ( 1 β n τ ) u ˜ n p ) 2 β n 1 τ γ V y n μ F p 2 + ( 1 β n τ ) u ˜ n p 2 β n 1 τ [ γ V y n γ V p 2 + 2 γ V p μ F p , γ V y n μ F p ] + ( 1 β n τ ) u ˜ n p 2 β n γ 2 ρ 2 τ y n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p + ( 1 β n τ ) u ˜ n p 2 β n γ 2 ρ 2 τ [ θ n γ S x n + ( I μ F ) u ˜ n p 2 + u ˜ n p 2 1 θ n θ n y n u ˜ n 2 ] + β n 2 τ γ V p μ F p γ V y n μ F p + ( 1 β n τ ) u ˜ n p 2 = ( 1 β n ( τ γ 2 ρ 2 τ ) ) u ˜ n p 2 + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 + β n 2 τ γ V p μ F p γ V y n μ F p u ˜ n p 2 + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 + β n 2 τ γ V p μ F p γ V y n μ F p ( u n p + λ α n p ) 2 + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 + β n 2 τ γ V p μ F p γ V y n μ F p u n p 2 + λ α n p ( 2 u n p + λ α n p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 + β n 2 τ γ V p μ F p γ V y n μ F p x n p 2 x n u n 2 + λ α n p ( 2 u n p + λ α n p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 + β n 2 τ γ V p μ F p γ V y n μ F p .
(3.12)

It turns out therefore that

x n u n 2 + β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 x n p 2 x n + 1 p 2 + λ α n p ( 2 u n p + λ α n p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p ( x n p + x n + 1 p ) x n x n + 1 + λ α n p ( 2 u n p + λ α n p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p .

Then it is clear that

x n u n 2 ( x n p + x n + 1 p ) x n x n + 1 + λ α n p ( 2 u n p + λ α n p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p .

Since α_n → 0, β_n → 0, θ_n → 0, and ∥x_{n+1} − x_n∥ → 0, we conclude that

lim_{n→∞} ∥x_n − u_n∥ = 0.

Furthermore,

β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 ( x n p + x n + 1 p ) x n x n + 1 + λ α n p ( 2 u n p + λ α n p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p .

This yields

γ 2 ρ 2 ( 1 θ n ) τ y n u ˜ n 2 ( x n p + x n + 1 p ) θ n β n x n x n + 1 + α n θ n β n λ p ( 2 u n p + λ α n p ) + θ n 2 γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + θ n 2 τ γ V p μ F p γ V y n μ F p .

Since ∥x_{n+1} − x_n∥/θ_n → 0, (α_n + β_n)/θ_n → 0, and θ_n²/β_n → 0 as n → ∞, we have

lim_{n→∞} (θ_n/β_n)∥x_n − x_{n+1}∥ = lim_{n→∞} (∥x_n − x_{n+1}∥/θ_n)·(θ_n²/β_n) = 0

and

lim_{n→∞} α_n θ_n/β_n = lim_{n→∞} (α_n/θ_n)·(θ_n²/β_n) = 0.

Therefore, from the last inequality we have

lim_{n→∞} ∥y_n − ũ_n∥ = 0.

Step 3. lim_{n→∞} ∥u_n − ũ_n∥ = 0 and lim_{n→∞} ∥x_n − y_n∥ = 0.

Let p ∈ Fix(T) ∩ MEP(Θ,φ) ∩ Γ. Utilizing Lemmas 2.6 and 2.11, we have from (3.12) that

x n + 1 p 2 u ˜ n p 2 + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 + β n 2 τ γ V p μ F p γ V y n μ F p P C ( I λ f α n ) u n P C ( I λ f ) p 2 + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p ( I λ f ) u n ( I λ f ) p λ α n u n 2 + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p ( I λ f ) u n ( I λ f ) p 2 2 λ α n u n , ( I λ f α n ) u n ( I λ f ) p + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p u n p 2 + λ ( λ 2 L ) f ( u n ) f ( p ) 2 + 2 λ α n u n ( I λ f α n ) u n ( I λ f ) p + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p x n p 2 + λ ( λ 2 L ) f ( u n ) f ( p ) 2 + 2 λ α n u n ( I λ f α n ) u n ( I λ f ) p + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p .

Hence,

λ ( 2 L λ ) f ( u n ) f ( p ) 2 x n p 2 x n + 1 p 2 + 2 λ α n u n ( I λ f α n ) u n ( I λ f ) p + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p ( x n p + x n + 1 p ) x n x n + 1 + 2 λ α n u n ( I λ f α n ) u n ( I λ f ) p + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p .

Since α_n → 0, β_n → 0, θ_n → 0, and ∥x_{n+1} − x_n∥ → 0, it follows from 0 < λ < 2/L that lim_{n→∞} ∥∇f(u_n) − ∇f(p)∥ = 0, and hence

lim_{n→∞} ∥∇f_{α_n}(u_n) − ∇f(p)∥ = 0.

Furthermore, from the firm nonexpansiveness of P C we obtain

u ˜ n p 2 = P C ( I λ f α n ) u n P C ( I λ f ) p 2 ( I λ f α n ) u n ( I λ f ) p , u ˜ n p = 1 2 { ( I λ f α n ) u n ( I λ f ) p 2 + u ˜ n p 2 ( I λ f α n ) u n ( I λ f ) p ( u ˜ n p ) 2 } 1 2 { u n p 2 + 2 λ f α n ( u n ) f ( p ) ( I λ f α n ) u n ( I λ f ) p + u ˜ n p 2 u n u ˜ n 2 + 2 λ u n u ˜ n , f α n ( u n ) f ( p ) λ 2 f α n ( u n ) f ( p ) 2 } .

Consequently,

u ˜ n p 2 u n p 2 u n u ˜ n 2 + 2 λ f α n ( u n ) f ( p ) ( I λ f α n ) u n ( I λ f ) p + 2 λ u n u ˜ n , f α n ( u n ) f ( p ) λ 2 f α n ( u n ) f ( p ) 2 .

Thus, from (3.12) we have

x n + 1 p 2 u ˜ n p 2 + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 + β n 2 τ γ V p μ F p γ V y n μ F p u n p 2 u n u ˜ n 2 + 2 λ f α n ( u n ) f ( p ) ( I λ f α n ) u n ( I λ f ) p + 2 λ u n u ˜ n , f α n ( u n ) f ( p ) λ 2 f α n ( u n ) f ( p ) 2 + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 β n γ 2 ρ 2 ( 1 θ n ) τ θ n y n u ˜ n 2 + β n 2 τ γ V p μ F p γ V y n μ F p x n p 2 u n u ˜ n 2 + 2 λ f α n ( u n ) f ( p ) ( I λ f α n ) u n ( I λ f ) p + 2 λ u n u ˜ n f α n ( u n ) f ( p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p .

This implies that

u n u ˜ n 2 x n p 2 x n + 1 p 2 + 2 λ f α n ( u n ) f ( p ) ( I λ f α n ) u n ( I λ f ) p + 2 λ u n u ˜ n f α n ( u n ) f ( p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p ( x n p + x n + 1 p ) x n x n + 1 + 2 λ f α n ( u n ) f ( p ) ( I λ f α n ) u n ( I λ f ) p + 2 λ u n u ˜ n f α n ( u n ) f ( p ) + β n θ n γ 2 ρ 2 τ γ S x n + ( I μ F ) u ˜ n p 2 + β n 2 τ γ V p μ F p γ V y n μ F p .

Since θ_n → 0, β_n → 0, ∥x_n − x_{n+1}∥ → 0, and ∥∇f_{α_n}(u_n) − ∇f(p)∥ → 0, it follows that

lim_{n→∞} ∥u_n − ũ_n∥ = 0.

This, together with ∥x_n − u_n∥ → 0 and ∥y_n − ũ_n∥ → 0 (due to Step 2), implies that

∥x_n − y_n∥ ≤ ∥x_n − u_n∥ + ∥u_n − ũ_n∥ + ∥ũ_n − y_n∥ → 0 as n → ∞,

and thus

lim_{n→∞} ∥x_n − y_n∥ = 0.

Step 4. ω_w(x_n) ⊆ Fix(T) ∩ MEP(Θ,φ) ∩ Γ; moreover, if ∥x_n − y_n∥ = o(θ_n) in addition, then ω_w(x_n) ⊆ Ξ.

Let p̂ ∈ ω_w(x_n). Then there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} ⇀ p̂. Since

x_{n+1} − ũ_n = β_n γV y_n + (I − β_n μF)Tũ_n − ũ_n = β_n(γV y_n − μF Tũ_n) + Tũ_n − ũ_n,

we have

∥Tũ_n − ũ_n∥ = ∥x_{n+1} − ũ_n − β_n(γV y_n − μF Tũ_n)∥ ≤ ∥x_{n+1} − ũ_n∥ + β_n∥γV y_n − μF Tũ_n∥ ≤ ∥x_{n+1} − x_n∥ + ∥x_n − u_n∥ + ∥u_n − ũ_n∥ + β_n∥γV y_n − μF Tũ_n∥.

Hence from ∥x_{n+1} − x_n∥ → 0, ∥x_n − u_n∥ → 0, ∥u_n − ũ_n∥ → 0, and β_n → 0, we get

lim_{n→∞} ∥Tũ_n − ũ_n∥ = 0.

Since ∥x_n − u_n∥ → 0 and ∥u_n − ũ_n∥ → 0, we have ũ_{n_i} ⇀ p̂. Utilizing Lemma 2.8, we derive p̂ ∈ Fix(T).

Let us show that p̂ ∈ MEP(Θ,φ). As a matter of fact, since u_n = T_{r_n}x_n, for any y ∈ C we have

Θ(u_n, y) + φ(y) − φ(u_n) + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ 0.

It follows from (A2) that

φ(y) − φ(u_n) + (1/r_n)⟨y − u_n, u_n − x_n⟩ ≥ Θ(y, u_n).

Replacing n by n_i, we have

φ(y) − φ(u_{n_i}) + (1/r_{n_i})⟨y − u_{n_i}, u_{n_i} − x_{n_i}⟩ ≥ Θ(y, u_{n_i}).

Since (u_{n_i} − x_{n_i})/r_{n_i} → 0 and u_{n_i} ⇀ p̂, it follows from (A4) that

0 ≥ −φ(y) + φ(p̂) + Θ(y, p̂), ∀y ∈ C.

Put z_t = ty + (1 − t)p̂ for all t ∈ (0,1] and y ∈ C. We have z_t ∈ C and

0 ≥ −φ(z_t) + φ(p̂) + Θ(z_t, p̂).
(3.13)
(3.13)

Utilizing (A1), (A4), and (3.13), we have

0 = Θ(z_t, z_t) + φ(z_t) − φ(z_t)
≤ tΘ(z_t, y) + (1 − t)Θ(z_t, p̂) + tφ(y) + (1 − t)φ(p̂) − φ(z_t)
= t(Θ(z_t, y) + φ(y) − φ(z_t)) + (1 − t)(Θ(z_t, p̂) + φ(p̂) − φ(z_t))
≤ t(Θ(z_t, y) + φ(y) − φ(z_t)),

and hence

0 ≤ Θ(z_t, y) + φ(y) − φ(z_t).
(3.14)

Letting t → 0 in (3.14) and utilizing (A3), we get, for each y ∈ C,

0 ≤ Θ(p̂, y) + φ(y) − φ(p̂).

Hence, p̂ ∈ MEP(Θ,φ).

Let us show that p̂ ∈ Γ. From ∥x_n − u_n∥ → 0 and ∥u_n − ũ_n∥ → 0, we know that u_{n_i} ⇀ p̂ and ũ_{n_i} ⇀ p̂. Define

T̃v = ∇f(v) + N_C v if v ∈ C, and T̃v = ∅ if v ∉ C,

where

N_C v = {w ∈ H : ⟨v − u, w⟩ ≥ 0, ∀u ∈ C}.

Then T̃ is maximal monotone and 0 ∈ T̃v if and only if v ∈ VI(C, ∇f); see [28] for more details. Let (v,w) ∈ Gph(T̃). Then we have

w ∈ T̃v = ∇f(v) + N_C v,

and hence

w − ∇f(v) ∈ N_C v.

Therefore,

⟨v − u, w − ∇f(v)⟩ ≥ 0, ∀u ∈ C.

On the other hand, from

ũ_n = P_C(u_n − λ∇f_{α_n}(u_n)) and v ∈ C,

we have

⟨u_n − λ∇f_{α_n}(u_n) − ũ_n, ũ_n − v⟩ ≥ 0,

and hence

⟨v − ũ_n, (ũ_n − u_n)/λ + ∇f_{α_n}(u_n)⟩ ≥ 0.

Therefore, from

w − ∇f(v) ∈ N_C(v) and ũ_{n_i} ∈ C,

we have

v u ˜ n i , w v u ˜ n i , f ( v ) v u ˜ n i , f ( v ) v u ˜ n i , u ˜ n i u n i λ + f α n i ( u n i ) = v u ˜ n i , f ( v ) v u ˜ n i , u ˜ n i u n i λ + f ( u n i ) α n i v u ˜ n i , u n i = v u ˜ n i , f ( v ) f ( u ˜ n i ) + v u ˜ n i , f ( u ˜ n i ) f ( u n i ) v u ˜ n i , u ˜ n i u n i λ α n i v u ˜ n i , u n i v u ˜ n i , f ( u ˜ n i ) f ( u n i ) v u ˜ n i , u ˜ n i u n i λ α n i v u ˜ n i , u n i .

Hence, letting i → ∞, we obtain

⟨v − p̂, w⟩ ≥ 0.

Since T̃ is maximal monotone, we have p̂ ∈ T̃^{-1}0, and hence p̂ ∈ VI(C, ∇f), which leads to p̂ ∈ Γ. Consequently, p̂ ∈ Fix(T) ∩ MEP(Θ,φ) ∩ Γ. This shows that ω_w(x_n) ⊆ Fix(T) ∩ MEP(Θ,φ) ∩ Γ.

Utilizing Lemmas 2.11 and 2.12, we have, for every $p\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma$,

$$\begin{aligned}
\|y_n-p\|^2&=\bigl\|\theta_n\gamma Sx_n+(I-\theta_n\mu F)\tilde u_n-p\bigr\|^2\\
&=\bigl\|\theta_n(\gamma Sp-\mu Fp)+\theta_n(\gamma Sx_n-\gamma Sp)+(I-\theta_n\mu F)\tilde u_n-(I-\theta_n\mu F)p\bigr\|^2\\
&\le\bigl\|\theta_n(\gamma Sx_n-\gamma Sp)+(I-\theta_n\mu F)\tilde u_n-(I-\theta_n\mu F)p\bigr\|^2+2\theta_n\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\\
&\le\bigl[\theta_n\|\gamma Sx_n-\gamma Sp\|+\|(I-\theta_n\mu F)\tilde u_n-(I-\theta_n\mu F)p\|\bigr]^2+2\theta_n\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\\
&\le\bigl[\theta_n\gamma\|x_n-p\|+(1-\theta_n\tau)\|\tilde u_n-p\|\bigr]^2+2\theta_n\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\\
&\le\bigl[\theta_n\gamma\|x_n-p\|+(1-\theta_n\tau)\bigl(\|u_n-p\|+\lambda\alpha_n\|p\|\bigr)\bigr]^2+2\theta_n\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\\
&\le\bigl[\theta_n\gamma\|x_n-p\|+(1-\theta_n\tau)\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|\bigr)\bigr]^2+2\theta_n\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\\
&\le\bigl[\bigl(1-\theta_n(\tau-\gamma)\bigr)\|x_n-p\|+\lambda\alpha_n\|p\|\bigr]^2+2\theta_n\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\\
&\le\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|\bigr)^2+2\theta_n\langle\gamma Sp-\mu Fp,\,y_n-p\rangle.
\end{aligned}$$
(3.15)

Suppose now that, in addition, $\|x_n-y_n\|=o(\theta_n)$. It follows from (3.15) that

$$\begin{aligned}
2\langle\gamma Sp-\mu Fp,\,y_n-p\rangle
&\ge-\frac{1}{\theta_n}\Bigl[\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|\bigr)^2-\|y_n-p\|^2\Bigr]\\
&=-\frac{1}{\theta_n}\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|+\|y_n-p\|\bigr)\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|-\|y_n-p\|\bigr)\\
&\ge-\frac{1}{\theta_n}\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|+\|y_n-p\|\bigr)\bigl(\|x_n-y_n\|+\lambda\alpha_n\|p\|\bigr).
\end{aligned}$$

This, together with $\alpha_n/\theta_n\to0$ and $\|x_n-y_n\|/\theta_n\to0$, leads to

$$\liminf_{n\to\infty}\,\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\ge0.$$

Observe that, since $\|x_n-y_n\|\to0$,

$$\liminf_{n\to\infty}\,\langle\gamma Sp-\mu Fp,\,x_n-p\rangle=\liminf_{n\to\infty}\bigl(\langle\gamma Sp-\mu Fp,\,x_n-y_n\rangle+\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\bigr)=\liminf_{n\to\infty}\,\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\ge0.$$

So, it follows from $x_{n_i}\rightharpoonup p^*$ that

$$\langle\gamma Sp-\mu Fp,\,p^*-p\rangle\ge0,\quad\forall p\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma.$$

Note also that $0<\gamma\le\tau$ and

$$\begin{aligned}
\mu\eta\ge\tau\quad&\Longleftrightarrow\quad\mu\eta\ge1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}\\
&\Longleftrightarrow\quad\sqrt{1-\mu(2\eta-\mu\kappa^2)}\ge1-\mu\eta\\
&\Longleftrightarrow\quad1-2\mu\eta+\mu^2\kappa^2\ge1-2\mu\eta+\mu^2\eta^2\\
&\Longleftrightarrow\quad\kappa^2\ge\eta^2\\
&\Longleftrightarrow\quad\kappa\ge\eta.
\end{aligned}$$

It is clear that

$$\bigl\langle(\mu F-\gamma S)x-(\mu F-\gamma S)y,\,x-y\bigr\rangle\ge(\mu\eta-\gamma)\|x-y\|^2,\quad\forall x,y\in H.$$
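This estimate can be verified in one line from the $\eta$-strong monotonicity of $F$, the nonexpansivity of $S$, and the Cauchy–Schwarz inequality:

```latex
\begin{aligned}
\bigl\langle(\mu F-\gamma S)x-(\mu F-\gamma S)y,\,x-y\bigr\rangle
&=\mu\,\langle Fx-Fy,\,x-y\rangle-\gamma\,\langle Sx-Sy,\,x-y\rangle\\
&\ge\mu\eta\,\|x-y\|^{2}-\gamma\,\|Sx-Sy\|\,\|x-y\|\\
&\ge\mu\eta\,\|x-y\|^{2}-\gamma\,\|x-y\|^{2}
=(\mu\eta-\gamma)\,\|x-y\|^{2}.
\end{aligned}
```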

Hence, it follows from $0<\gamma\le\tau\le\mu\eta$ that $\mu F-\gamma S$ is monotone. Since $p^*\in\omega_w(x_n)\subset\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma$, by Minty's lemma [1] we have

$$\langle\gamma Sp^*-\mu Fp^*,\,p-p^*\rangle\le0,\quad\forall p\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma;$$

that is, $p^*\in\Xi$. This shows that $\omega_w(x_n)\subset\Xi$. □
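As a quick numerical sanity check of the inequalities $0<\tau\le\mu\eta$ used above, the following sketch evaluates $\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}$ for sample values with $\kappa\ge\eta>0$ and $0<\mu<2\eta/\kappa^2$ (the constants are illustrative choices, not taken from the paper):

```python
import math

def tau(mu, eta, kappa):
    # tau = 1 - sqrt(1 - mu*(2*eta - mu*kappa^2)), as defined in the proof
    return 1.0 - math.sqrt(1.0 - mu * (2.0 * eta - mu * kappa ** 2))

# sample constants with kappa >= eta > 0 and 0 < mu < 2*eta/kappa^2
eta, kappa = 1.0, 2.0
for mu in (0.1, 0.25, 0.4):
    t = tau(mu, eta, kappa)
    # the chain of equivalences kappa >= eta  <=>  mu*eta >= tau
    assert 0.0 < t <= mu * eta, (mu, t)
    print(f"mu={mu}: tau={t:.4f} <= mu*eta={mu * eta}")
```

The assertions pass for every admissible choice of the constants, illustrating that the squaring step in the equivalence chain is legitimate.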

Theorem 3.2 Assume that the conditions of Theorem 3.1 hold. Then:

  1. (i)

    $\{x_n\}$ and $\{y_n\}$ both converge strongly to an element $x^*\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma$, which is the unique solution of the variational inequality

    $$\langle\gamma Vx^*-\mu Fx^*,\,x-x^*\rangle\le0,\quad\forall x\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma.$$
  2. (ii)

    $\{x_n\}$ and $\{y_n\}$ both converge strongly to the unique solution of THVI (1.6) if, in addition, $\|x_n-y_n\|=o(\theta_n)$.

Proof Utilizing Lemmas 2.11 and 2.12, we get from (3.15)

$$\begin{aligned}
\|x_{n+1}-p\|^2&=\bigl\|\beta_n\gamma Vy_n+(I-\beta_n\mu F)T\tilde u_n-p\bigr\|^2\\
&=\bigl\|\beta_n(\gamma Vp-\mu Fp)+\beta_n(\gamma Vy_n-\gamma Vp)+(I-\beta_n\mu F)T\tilde u_n-(I-\beta_n\mu F)Tp\bigr\|^2\\
&\le\bigl\|\beta_n(\gamma Vy_n-\gamma Vp)+(I-\beta_n\mu F)T\tilde u_n-(I-\beta_n\mu F)Tp\bigr\|^2+2\beta_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle\\
&\le\bigl[\beta_n\|\gamma Vy_n-\gamma Vp\|+\|(I-\beta_n\mu F)T\tilde u_n-(I-\beta_n\mu F)Tp\|\bigr]^2+2\beta_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle\\
&\le\bigl[\beta_n\gamma\rho\|y_n-p\|+(1-\beta_n\tau)\|\tilde u_n-p\|\bigr]^2+2\beta_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle\\
&\le\beta_n\frac{\gamma^2\rho^2}{\tau}\|y_n-p\|^2+(1-\beta_n\tau)\|\tilde u_n-p\|^2+2\beta_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle\\
&\le\beta_n\frac{\gamma^2\rho^2}{\tau}\Bigl[\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|\bigr)^2+2\theta_n\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\Bigr]\\
&\qquad+(1-\beta_n\tau)\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|\bigr)^2+2\beta_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle\\
&=\Bigl(1-\beta_n\frac{\tau^2-\gamma^2\rho^2}{\tau}\Bigr)\bigl(\|x_n-p\|+\lambda\alpha_n\|p\|\bigr)^2+2\beta_n\theta_n\frac{\gamma^2\rho^2}{\tau}\langle\gamma Sp-\mu Fp,\,y_n-p\rangle\\
&\qquad+2\beta_n\langle\gamma Vp-\mu Fp,\,x_{n+1}-p\rangle,
\end{aligned}$$
(3.16)

where $\tau=1-\sqrt{1-\mu(2\eta-\mu\kappa^2)}$.

Note that $\mu F-\gamma V:H\to H$ is $(\mu\kappa+\gamma\rho)$-Lipschitzian and $(\mu\eta-\gamma\rho)$-strongly monotone; namely,

$$\bigl\|(\mu F-\gamma V)x-(\mu F-\gamma V)y\bigr\|\le(\mu\kappa+\gamma\rho)\|x-y\|,\quad\forall x,y\in H,$$

and

$$\bigl\langle(\mu F-\gamma V)x-(\mu F-\gamma V)y,\,x-y\bigr\rangle\ge(\mu\eta-\gamma\rho)\|x-y\|^2,\quad\forall x,y\in H.$$

Hence there exists a unique solution $x^*\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma$ of the variational inequality problem

$$\langle\gamma Vx^*-\mu Fx^*,\,x-x^*\rangle\le0,\quad\forall x\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma.$$
(3.17)

Since the sequence $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\,\langle\gamma Vx^*-\mu Fx^*,\,x_n-x^*\rangle=\lim_{i\to\infty}\,\langle\gamma Vx^*-\mu Fx^*,\,x_{n_i}-x^*\rangle.$$
(3.18)

Also, since $H$ is reflexive and $\{x_n\}$ is bounded, without loss of generality we may assume that $x_{n_i}\rightharpoonup\bar x\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma$ (due to Theorem 3.1(i)). Taking into consideration that $x^*$ is the unique solution of VIP (3.17), we obtain from (3.18)

$$\begin{aligned}
\limsup_{n\to\infty}\,\langle\gamma Vx^*-\mu Fx^*,\,x_{n+1}-x^*\rangle
&=\limsup_{n\to\infty}\bigl(\langle\gamma Vx^*-\mu Fx^*,\,x_n-x^*\rangle+\langle\gamma Vx^*-\mu Fx^*,\,x_{n+1}-x_n\rangle\bigr)\\
&=\limsup_{n\to\infty}\,\langle\gamma Vx^*-\mu Fx^*,\,x_n-x^*\rangle\\
&=\lim_{i\to\infty}\,\langle\gamma Vx^*-\mu Fx^*,\,x_{n_i}-x^*\rangle\\
&=\langle\gamma Vx^*-\mu Fx^*,\,\bar x-x^*\rangle\le0.
\end{aligned}$$
(3.19)

Putting $p=x^*$, from (3.16) we conclude that

$$\begin{aligned}
\|x_{n+1}-x^*\|^2
&\le\Bigl(1-\beta_n\frac{\tau^2-\gamma^2\rho^2}{\tau}\Bigr)\bigl(\|x_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr)^2\\
&\qquad+2\beta_n\theta_n\frac{\gamma^2\rho^2}{\tau}\|\gamma Sx^*-\mu Fx^*\|\,\|y_n-x^*\|+2\beta_n\langle\gamma Vx^*-\mu Fx^*,\,x_{n+1}-x^*\rangle\\
&\le\Bigl(1-\beta_n\frac{\tau^2-\gamma^2\rho^2}{\tau}\Bigr)\|x_n-x^*\|^2+\lambda\alpha_n\|x^*\|\bigl(2\|x_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr)\\
&\qquad+2\beta_n\theta_n\frac{\gamma^2\rho^2}{\tau}\|\gamma Sx^*-\mu Fx^*\|\,\|y_n-x^*\|+2\beta_n\langle\gamma Vx^*-\mu Fx^*,\,x_{n+1}-x^*\rangle\\
&=\Bigl(1-\beta_n\frac{\tau^2-\gamma^2\rho^2}{\tau}\Bigr)\|x_n-x^*\|^2\\
&\qquad+\beta_n\frac{\tau^2-\gamma^2\rho^2}{\tau}\cdot\frac{\tau}{\tau^2-\gamma^2\rho^2}\Bigl\{2\theta_n\frac{\gamma^2\rho^2}{\tau}\|\gamma Sx^*-\mu Fx^*\|\,\|y_n-x^*\|+2\langle\gamma Vx^*-\mu Fx^*,\,x_{n+1}-x^*\rangle\Bigr\}\\
&\qquad+\alpha_n\lambda\|x^*\|\bigl(2\|x_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr).
\end{aligned}$$
(3.20)

Since $\sum_{n=0}^\infty\beta_n=+\infty$, $\sum_{n=0}^\infty\alpha_n<+\infty$, and $\theta_n\to0$ as $n\to\infty$, it follows from (3.19) that $\sum_{n=0}^\infty\beta_n\frac{\tau^2-\gamma^2\rho^2}{\tau}=+\infty$, $\sum_{n=0}^\infty\alpha_n\lambda\|x^*\|\bigl(2\|x_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr)<+\infty$, and

$$\limsup_{n\to\infty}\frac{\tau}{\tau^2-\gamma^2\rho^2}\Bigl\{2\theta_n\frac{\gamma^2\rho^2}{\tau}\|\gamma Sx^*-\mu Fx^*\|\,\|y_n-x^*\|+2\langle\gamma Vx^*-\mu Fx^*,\,x_{n+1}-x^*\rangle\Bigr\}\le0.$$

Applying Lemma 2.7 to (3.20), we get

$$\lim_{n\to\infty}\|x_n-x^*\|=0.$$

This, together with $\|x_n-y_n\|\to0$, implies that

$$\lim_{n\to\infty}\|y_n-x^*\|=0.$$

From now on, we suppose that $\|x_n-y_n\|=o(\theta_n)$. Then, by Theorem 3.1(ii), we know that $\omega_w(x_n)\subset\Xi$. Since $\mu F-\gamma V:H\to H$ is $(\mu\kappa+\gamma\rho)$-Lipschitzian and $(\mu\eta-\gamma\rho)$-strongly monotone, there exists a unique solution $x^*\in\Xi$ of the variational inequality problem

$$\langle\gamma Vx^*-\mu Fx^*,\,x-x^*\rangle\le0,\quad\forall x\in\Xi.$$
(3.21)

Since the sequence $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\,\langle\gamma Vx^*-\mu Fx^*,\,x_n-x^*\rangle=\lim_{i\to\infty}\,\langle\gamma Vx^*-\mu Fx^*,\,x_{n_i}-x^*\rangle.$$
(3.22)

Again, since $H$ is reflexive and $\{x_n\}$ is bounded, without loss of generality we may assume that $x_{n_i}\rightharpoonup\bar x\in\Xi$ (due to Theorem 3.1(ii)). Taking into account that $x^*$ is the unique solution of VIP (3.21), we deduce from (3.22) that

$$\limsup_{n\to\infty}\,\langle\gamma Vx^*-\mu Fx^*,\,x_{n+1}-x^*\rangle=\langle\gamma Vx^*-\mu Fx^*,\,\bar x-x^*\rangle\le0.$$

Putting $p=x^*$, from (3.16) we immediately infer that

$$\begin{aligned}
\|x_{n+1}-x^*\|^2&\le\Bigl(1-\beta_n\frac{\tau^2-\gamma^2\rho^2}{\tau}\Bigr)\|x_n-x^*\|^2\\
&\qquad+\beta_n\frac{\tau^2-\gamma^2\rho^2}{\tau}\cdot\frac{\tau}{\tau^2-\gamma^2\rho^2}\Bigl\{2\theta_n\frac{\gamma^2\rho^2}{\tau}\|\gamma Sx^*-\mu Fx^*\|\,\|y_n-x^*\|+2\langle\gamma Vx^*-\mu Fx^*,\,x_{n+1}-x^*\rangle\Bigr\}\\
&\qquad+\alpha_n\lambda\|x^*\|\bigl(2\|x_n-x^*\|+\lambda\alpha_n\|x^*\|\bigr).
\end{aligned}$$

Repeating the same arguments as above, we readily see that

$$\lim_{n\to\infty}\|x_n-x^*\|=0,$$

which, together with $\|x_n-y_n\|\to0$, yields

$$\lim_{n\to\infty}\|y_n-x^*\|=0.$$

This completes the proof. □
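To give a concrete, finite-dimensional feel for the kind of scheme analyzed above, the following toy sketch (not the paper's algorithm (3.1); all mappings and constants are illustrative choices) runs a hybrid steepest-descent iteration $x_{k+1}=P_C\bigl(x_k-t(\mu F-\gamma V)x_k\bigr)$ for a strongly monotone operator $\mu F-\gamma V$ over a box $C$ in $\mathbb{R}^2$, and checks the fixed-point residual that characterizes the VI solution:

```python
# Toy hybrid steepest-descent sketch in R^2 (illustrative constants, not from the paper).
# F(x) = 2x is eta=2-strongly monotone, V(x) = 0.5x is a rho=0.5 contraction,
# and gamma*rho < mu*eta, so G = mu*F - gamma*V is strongly monotone.
mu, gamma = 0.2, 0.1

def F(x):
    return [2.0 * xi for xi in x]

def V(x):
    return [0.5 * xi for xi in x]

def G(x):  # G = mu*F - gamma*V
    return [mu * f - gamma * v for f, v in zip(F(x), V(x))]

def proj_box(x, lo=0.0, hi=1.0):  # metric projection onto C = [0,1]^2
    return [min(max(xi, lo), hi) for xi in x]

x = [1.0, 1.0]
t = 1.0  # step size; small enough here since G(x) = 0.35*x
for _ in range(200):
    x = proj_box([xi - t * gi for xi, gi in zip(x, G(x))])

# x* solves the VI over C iff x* = P_C(x* - G(x*)); check the residual
fixed = proj_box([xi - gi for xi, gi in zip(x, G(x))])
residual = max(abs(xi - fi) for xi, fi in zip(x, fixed))
print(x, residual)  # converges to the unique solution (0, 0)
```

The residual test mirrors the projection characterization of variational inequalities used throughout the proofs; in this toy example the unique solution is the origin.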

Remark 3.3 Our iterative algorithm (3.1) is very different from Xu's iterative schemes in [2] and from Yao et al.'s iterative scheme in [8]. Here, the two-step iterative scheme in [8] for two nonexpansive mappings and the gradient-projection iterative schemes in [2] for MP (1.1) are extended to develop our three-step iterative scheme (3.1) with regularization for the THVI (1.6). It is worth pointing out that, without assuming the conditions $\|x_n-y_n\|=o(\theta_n)$ and $\|x-Tx\|\le k\operatorname{Dist}(x,\operatorname{Fix}(T))$, $\forall x\in C$, for some constant $k>0$, our three-step iterative scheme (3.1) converges strongly to an element $x^*\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma$, which is the unique solution of the variational inequality

$$\langle\gamma Vx^*-\mu Fx^*,\,x-x^*\rangle\le0,\quad\forall x\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma.$$

See Theorem 3.2(i).

Remark 3.4 As an example, we consider the following sequences:

  1. (a)

    $\alpha_n=\frac{1}{n^{1+s+t}}$, $\beta_n=\frac{1}{n^{s}}$, and $\theta_n=\frac{1}{n^{t}}$, where either $t\in(0,\frac{1}{3}]$ and $s\in(t,2t)$, or $t\in(\frac{1}{3},\frac{1}{2})$ and $s\in(t,1-t)$;

  2. (b)

    $r_n=\frac{1}{2}+\frac{1}{n^{1+s+t}}$.

They satisfy the hypotheses on the parameter sequences in Theorems 3.1 and 3.2.
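The hypotheses used in the proofs ($\sum_n\beta_n=+\infty$, $\sum_n\alpha_n<+\infty$, $\theta_n\to0$, and $\alpha_n/\theta_n\to0$) can be checked numerically for the sequences in (a); the sketch below uses $t=0.3$, $s=0.5$ (so $t\in(0,\frac{1}{3}]$ and $s\in(t,2t)$), purely as an illustration:

```python
t, s = 0.3, 0.5   # t in (0, 1/3], s in (t, 2t) -- illustrative choice

alpha = lambda n: n ** -(1.0 + s + t)   # alpha_n = 1/n^{1+s+t}, summable (exponent 1.8 > 1)
beta  = lambda n: n ** -s               # beta_n  = 1/n^s, not summable (s = 0.5 < 1)
theta = lambda n: n ** -t               # theta_n = 1/n^t -> 0

N = 100000
sum_alpha = sum(alpha(n) for n in range(1, N + 1))
sum_beta  = sum(beta(n)  for n in range(1, N + 1))

assert sum_alpha < 3.0                 # partial sums of alpha_n stay bounded
assert sum_beta > 100.0                # partial sums of beta_n keep growing (~ 2*sqrt(N))
assert theta(N) < 0.05                 # theta_n -> 0
assert alpha(N) / theta(N) < 1e-5      # alpha_n / theta_n = n^{-(1+s)} -> 0
print(sum_alpha, sum_beta, theta(N), alpha(N) / theta(N))
```
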

Remark 3.5 Our Theorems 3.1 and 3.2 improve, extend, supplement, and develop [[8], Theorems 3.1 and 3.2] and [[2], Theorems 5.2 and 6.1] in the following aspects:

  1. (a)

    Our THVI (1.6), with the unique solution $x^*\in\Xi$ satisfying

    $$x^*=P_{\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma}\bigl(I-(\mu F-\gamma S)\bigr)x^*,$$

    is more general than the problem of finding an element $\tilde x\in C$ satisfying $\tilde x=P_{\operatorname{Fix}(T)}S\tilde x$ in [8] and the problem of finding an element $\tilde x\in\Xi$ in [2].

  2. (b)

    Our three-step iterative algorithm (3.1) for THVI (1.6) is more flexible, more advantageous, and more subtle than Xu's iterative schemes in [2] and than Yao et al.'s two-step iterative scheme in [8] because, e.g., it drops the requirement $\|x-Tx\|\le k\operatorname{Dist}(x,\operatorname{Fix}(T))$, $\forall x\in C$, for some $k>0$ imposed in [[8], Theorem 3.2(v)].

  3. (c)

    The arguments and techniques in our Theorems 3.1 and 3.2 are very different from the ones in [[8], Theorems 3.1 and 3.2] and in [[2], Theorems 5.2 and 6.1] because we utilize the properties of resolvent operators and maximal monotone mappings (Lemmas 2.5, 2.6 and 2.13), the convergence criteria of real sequences (Lemma 2.7), and the contractive coefficient estimates for the contractions associated with nonexpansive mappings (Lemma 2.12).

  4. (d)

    Compared with the proofs of [[2], Theorems 5.2 and 6.1], the proofs of our Theorems 3.1 and 3.2 derive $\lim_{n\to\infty}\|u_n-P_C(I-\lambda\nabla f_{\alpha_n})u_n\|=0$ via the argument showing $\lim_{n\to\infty}\|\nabla f_{\alpha_n}(u_n)-\nabla f(p)\|=0$, $\forall p\in\operatorname{Fix}(T)\cap\operatorname{MEP}(\Theta,\varphi)\cap\Gamma$ (see Step 3 in the proof of Theorem 3.1).

References

  1. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.


  2. Xu H-K: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z


  3. Xu H-K: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Problems 2010, 26: Article ID 105018.


  4. Ceng L-C, Wang C-Y, Yao J-C: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67: 375–390. 10.1007/s00186-007-0207-4


  5. Ceng L-C, Yao J-C: An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl. Math. Comput. 2007, 190: 205–215. 10.1016/j.amc.2007.01.021


  6. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.


  7. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.


  8. Yao Y, Liou Y-C, Marino G: Two-step iterative algorithms for hierarchical fixed point problems and variational inequality problems. J. Appl. Math. Comput. 2009,31(1–2):433–445. 10.1007/s12190-008-0222-5


  9. Moudafi A, Maingé P-E: Towards viscosity approximations of hierarchical fixed points problems. Fixed Point Theory Appl. 2006, 2006: Article ID 95453.


  10. Moudafi A, Maingé P-E: Strong convergence of an iterative method for hierarchical fixed point problems. Pac. J. Optim. 2007,3(3):529–538.


  11. Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000,241(1):46–55. 10.1006/jmaa.1999.6615


  12. Xu H-K: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059


  13. Ceng L-C, Yao J-C: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022


  14. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.


  15. Peng J-W, Yao J-C: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1432.


  16. Iiduka H: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal. 2010, 71: 1292–1297.


  17. Iiduka H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 2011, 148: 580–592. 10.1007/s10957-010-9769-z


  18. Ceng L-C, Ansari QH, Yao J-C: Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151: 489–512. 10.1007/s10957-011-9882-7


  19. Ansari QH, Ceng L-C, Gupta H: Triple hierarchical variational inequalities. In Nonlinear Analysis: Approximation Theory, Optimization and Applications. Edited by: Ansari QH. Birkhäuser, Basel; 2014:231–280.


  20. Ceng L-C, Ansari QH, Wen C-F: Hybrid steepest-descent viscosity method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2012, 2012: Article ID 907105.


  21. Ceng L-C, Ansari QH, Yao J-C: Relaxed hybrid steepest-descent methods with variable parameters for triple-hierarchical variational inequalities. Appl. Anal. 2012,91(10):1793–1810. 10.1080/00036811.2011.614602


  22. Kong Z-R, Ceng L-C, Pang CT, Ansari QH: Multi-step hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013, 2013: Article ID 718624.


  23. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems 2004, 20: 103–120. 10.1088/0266-5611/20/1/006


  24. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004,53(5–6):475–504. 10.1080/02331930412331327157


  25. Xu H-K: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002,66(2):240–256.


  26. Reinermann J: Über Fixpunkte kontrahierender Abbildungen und schwach konvergente Toeplitz-Verfahren. Arch. Math. (Basel) 1969, 20: 59–64. 10.1007/BF01898992


  27. Xu H-K, Kim T-H: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119: 185–201.


  28. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88. 10.1090/S0002-9947-1970-0282272-5



Acknowledgements

Lu-Chuan Ceng is partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and Leading Academic Discipline Project of Shanghai Normal University (DZL707). Ngai-Ching Wong is partially supported by the Taiwan MOST grant 102-2115-M-110-002-MY2. Jen-Chih Yao is partially supported by the Taiwan MOST grant 102-2111-E-037-004-MY3. Both Ngai-Ching Wong and Jen-Chih Yao are also partially supported by the NSYSU-KMU joint venture 103-P013.

Author information


Corresponding author

Correspondence to Ngai-Ching Wong.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ceng, LC., Wong, NC. & Yao, JC. Regularized hybrid iterative algorithms for triple hierarchical variational inequalities. J Inequal Appl 2014, 490 (2014). https://doi.org/10.1186/1029-242X-2014-490
