
Composite schemes for variational inequalities over equilibrium problems and variational inclusions

Abstract

Let $C$ be a nonempty closed convex subset of a Hilbert space $H$, and let $T: H \to H$ be a nonlinear mapping. It is well known that the following classical variational inequality has been applied in many areas of applied mathematics, modern physical sciences, computerized tomography and many others: find a point $x^* \in C$ such that

$$\langle Tx^*,\; x - x^*\rangle \ge 0, \quad \forall x \in C.$$
(A)

In this paper, we consider the following variational inequality: find a point $x^* \in C$ such that

$$\big\langle (F - \gamma f)x^*,\; x - x^*\big\rangle \ge 0, \quad \forall x \in C,$$
(B)

and, for solutions of the variational inequality (B) with the feasibility set $C$, which is the intersection of the set of solutions of an equilibrium problem and the set of solutions of a variational inclusion, we construct two composite schemes, namely an implicit scheme and an explicit scheme, which converge strongly to the unique solution of the variational inequality (B).

Recently, many authors have introduced various algorithms for solving variational inequality problems; in fact, our two schemes are simpler for finding solutions of the variational inequality (B) than most of them.

MSC: 49J30, 47H10, 47H17, 49M05.

1 Introduction

A very common problem in many areas of mathematics and the physical sciences consists of trying to find a point in a nonempty closed convex subset $C$ of a Hilbert space $H$. This problem is related to the variational inequality problem (A). One frequently employed approach to solving variational inequality problems is the approximation method. Some approximation methods for solving variational inequality problems and related optimization problems can be found in [1–16].

In this paper, we consider the following variational inequality: find a point $x^* \in C$ such that

$$\big\langle (F - \gamma f)x^*,\; x - x^*\big\rangle \ge 0, \quad \forall x \in C,$$

where $C$ is the intersection of the set of solutions of an equilibrium problem and the set of solutions of a variational inclusion. In fact, we focus on this choice of the set $C$ because equilibrium problems and variational inclusion problems play very important roles in many practical applications.

For this purpose, we construct the following composite schemes, that is, the implicit scheme $\{x_t\}$ and the explicit scheme $\{x_n\}$, respectively,

$$x_t = \big[I - t(F - \gamma f)\big] J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)\, x_t, \quad t \in \Big(0, \frac{1}{\rho - \gamma\tau}\Big)$$
(1.1)

and

$$x_{n+1} = \big[I - \alpha_n(F - \gamma f)\big] J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)\, x_n, \quad n \ge 0.$$
(1.2)

Our idea is to involve the operator $F - \gamma f$ directly to generate the two composite schemes (1.1) and (1.2), which converge strongly to solutions of the variational inequality problem (B). In fact, our two schemes are very simple.

2 Preliminaries

In this section, we introduce some notations and useful conclusions for our main results.

Let $H$ be a real Hilbert space. Let $B: H \to H$ be a nonlinear mapping, let $\varphi: H \to \mathbb{R}$ be a function, and let $\Theta: H \times H \to \mathbb{R}$ be a bifunction.

Now, we consider the following equilibrium problem: find a point $x \in C$ such that

$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle Bx,\; y - x\rangle \ge 0, \quad \forall y \in C.$$
(2.1)

The set of solutions of problem (2.1) is denoted by EP. The equilibrium problems include fixed point problems, optimization problems and variational inequality problems as special cases. For the related works, see [17–30].

Let $f: H \to H$ be a $\tau$-contraction, that is, there exists a constant $\tau \in [0, 1)$ such that

$$\|f(x) - f(y)\| \le \tau\|x - y\|, \quad \forall x, y \in H,$$

and let $S: H \to H$ be a nonexpansive mapping, that is,

$$\|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in H.$$

Recall that a mapping $A: H \to H$ is said to be $\alpha$-inverse strongly monotone if there exists a constant $\alpha > 0$ such that

$$\langle Ax - Ay,\; x - y\rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in H.$$

A mapping $F: H \to H$ is said to be strongly positive if there exists a constant $\rho > 0$ such that $\langle Fx, x\rangle \ge \rho\|x\|^2$ for all $x \in H$.

Let $A: H \to H$ be a single-valued nonlinear mapping, and let $R: H \to 2^H$ be a set-valued mapping.

Now, we consider the following variational inclusion: find a point $x \in H$ such that

$$\theta \in A(x) + R(x),$$
(2.2)

where $\theta$ is the zero element in $H$. The set of solutions of problem (2.2) is denoted by $I(A, R)$. The variational inclusion problems have been considered extensively in [31–38] and the references therein.

A set-valued mapping $T: H \to 2^H$ is said to be monotone if, for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply $\langle x - y,\; f - g\rangle \ge 0$. A monotone mapping $T: H \to 2^H$ is said to be maximal if its graph $G(T)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for any $(x, f) \in H \times H$, $\langle x - y,\; f - g\rangle \ge 0$ for all $(y, g) \in G(T)$ implies $f \in Tx$.

Let $R: H \to 2^H$ be a maximal monotone set-valued mapping. We define the resolvent operator $J_{R,\lambda}$ associated with $R$ and $\lambda$ as follows:

$$J_{R,\lambda}(x) = (I + \lambda R)^{-1}(x), \quad \forall x \in H,$$

where $\lambda$ is a positive number. It is worth mentioning that the resolvent operator $J_{R,\lambda}$ is single-valued, nonexpansive and 1-inverse strongly monotone and, further, a solution of problem (2.2) is a fixed point of the mapping $J_{R,\lambda}(I - \lambda A)$ for all $\lambda > 0$.
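To make the resolvent concrete, the following small numerical sketch (our own illustration; the set, operators and step size below are chosen only for exposition and are not taken from the results above) uses $H = \mathbb{R}$ and $R = \partial i_C$, the normal-cone operator of $C = [0, 1]$, for which $J_{R,\lambda}$ reduces to the metric projection onto $C$; iterating $J_{R,\lambda}(I - \lambda A)$ then locates a solution of the inclusion (2.2).

```python
# Illustration (operators chosen by us, not from the paper): in H = R,
# take R to be the normal-cone operator of C = [0, 1]; its resolvent
# J_{R,lam} = (I + lam*R)^{-1} is the metric projection onto C.  A fixed
# point of J_{R,lam}(I - lam*A) then solves theta in A(x) + R(x).

def resolvent(x, lo=0.0, hi=1.0):
    """J_{R,lam} for the normal cone of [lo, hi] = metric projection."""
    return min(max(x, lo), hi)

def A(x):
    return x - 2.0  # 1-inverse strongly monotone: <Ax-Ay, x-y> = |Ax-Ay|^2

lam = 1.0           # 0 < lam < 2*alpha = 2, so I - lam*A is nonexpansive
x = 0.3
for _ in range(50):
    x = resolvent(x - lam * A(x))

print(x)            # prints 1.0
```

Here $-A(1) = 1$ lies in the normal cone of $[0, 1]$ at $1$, so $\theta \in A(1) + R(1)$, exactly as the fixed-point characterization predicts.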

Throughout this paper, we assume that a bifunction $\Theta: H \times H \to \mathbb{R}$ and a convex function $\varphi: H \to \mathbb{R}$ satisfy the following conditions:

(H1) $\Theta(x, x) = 0$ for all $x \in H$;

(H2) $\Theta$ is monotone, i.e., $\Theta(x, y) + \Theta(y, x) \le 0$ for all $x, y \in H$;

(H3) for all $y \in H$, $x \mapsto \Theta(x, y)$ is weakly upper semi-continuous;

(H4) for all $x \in H$, $y \mapsto \Theta(x, y)$ is convex and lower semi-continuous;

(H5) for all $x \in H$ and $\mu > 0$, there exist a bounded subset $D_x \subseteq H$ and $y_x \in H$ such that, for any $z \in H \setminus D_x$,

$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{\mu}\langle y_x - z,\; z - x\rangle < 0.$$

Lemma 2.1 [39]

Let $H$ be a real Hilbert space. Let $\Theta: H \times H \to \mathbb{R}$ be a bifunction, and let $\varphi: H \to \mathbb{R}$ be a proper lower semi-continuous and convex function. For any $\mu > 0$ and $x \in H$, define a mapping $S_\mu: H \to H$ as follows:

$$S_\mu(x) = \Big\{ z \in H : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{\mu}\langle y - z,\; z - x\rangle \ge 0,\ \forall y \in H \Big\}, \quad \forall x \in H.$$

Assume that conditions (H1)-(H5) hold. Then we have the following results:

(1) For each $x \in H$, $S_\mu(x) \ne \emptyset$ and $S_\mu$ is single-valued.

(2) $S_\mu$ is firmly nonexpansive, i.e., for any $x, y \in H$,

$$\|S_\mu x - S_\mu y\|^2 \le \langle S_\mu x - S_\mu y,\; x - y\rangle.$$

(3) $\operatorname{Fix}(S_\mu(I - \mu B)) = EP$.

(4) $EP$ is closed and convex.
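A simple special case may help fix ideas (an illustration of ours, not part of the lemma): taking $\varphi = 0$, $B = 0$ and $\Theta(z, y) = \langle Gz,\; y - z\rangle$ with the monotone map $G(z) = z$ on $H = \mathbb{R}$, the defining inequality of $S_\mu$ forces $z + (z - x)/\mu = 0$, so $S_\mu(x) = x/(1 + \mu)$, a single-valued mapping whose firm nonexpansiveness can be spot-checked numerically.

```python
import random

# Illustrative special case (our choice, not from the paper): phi = 0,
# B = 0, Theta(z, y) = <Gz, y - z> with G(z) = z on H = R.  The defining
# inequality of S_mu forces z + (z - x)/mu = 0, i.e. S_mu(x) = x/(1 + mu),
# which is single-valued, as Lemma 2.1(1) predicts.
mu = 0.5
S = lambda x: x / (1.0 + mu)

# Spot-check firm nonexpansiveness (Lemma 2.1(2)) on random samples.
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = abs(S(x) - S(y)) ** 2
    rhs = (S(x) - S(y)) * (x - y)
    assert lhs <= rhs + 1e-12

print("S_mu is single-valued and firmly nonexpansive in this example")
```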

Lemma 2.2 [40]

Let $R: H \to 2^H$ be a maximal monotone mapping, and let $A: H \to H$ be a Lipschitz-continuous mapping. Then the mapping $(R + A): H \to 2^H$ is maximal monotone.

Lemma 2.3 [8]

Let $H$ be a real Hilbert space. Let the mapping $A: H \to H$ be $\alpha$-inverse strongly monotone, and let $\lambda > 0$ be a constant. Then we have

$$\big\|(I - \lambda A)x - (I - \lambda A)y\big\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\alpha)\|Ax - Ay\|^2, \quad \forall x, y \in H.$$

In particular, if $0 \le \lambda \le 2\alpha$, then $I - \lambda A$ is nonexpansive.
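The inequality of Lemma 2.3 can be checked numerically in a one-dimensional example (our illustration; the mapping $A(x) = x/2$, which is $2$-inverse strongly monotone on $\mathbb{R}$, is chosen only for this test):

```python
import random

# Numerical check of Lemma 2.3 for A(x) = x/2 on H = R.  Since
# <Ax - Ay, x - y> = 2 |Ax - Ay|^2, A is alpha-ism with alpha = 2, so
# I - lam*A should be nonexpansive for every 0 <= lam <= 2*alpha = 4.
alpha = 2.0
A = lambda x: 0.5 * x

for lam in (0.5, 2.0, 3.9):
    for _ in range(1000):
        x, y = random.uniform(-10, 10), random.uniform(-10, 10)
        lhs = abs((x - lam * A(x)) - (y - lam * A(y))) ** 2
        rhs = abs(x - y) ** 2 + lam * (lam - 2 * alpha) * abs(A(x) - A(y)) ** 2
        assert lhs <= rhs + 1e-9              # the lemma's inequality
        assert lhs <= abs(x - y) ** 2 + 1e-9  # nonexpansiveness

print("Lemma 2.3 verified on random samples")
```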

Lemma 2.4 [41]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers satisfying $a_{n+1} \le (1 - \gamma_n)a_n + \delta_n$, where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that

(a) $\sum_{n=1}^\infty \gamma_n = \infty$;

(b) $\limsup_{n\to\infty} \delta_n/\gamma_n \le 0$ or $\sum_{n=1}^\infty |\delta_n| < \infty$.

Then $\lim_{n\to\infty} a_n = 0$.
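A quick numerical illustration of Lemma 2.4 (the parameter sequences are our own choice): with $\gamma_n = 1/(n+1)$, so that $\sum_n \gamma_n = \infty$, and $\delta_n = \gamma_n/(n+1)$, so that $\delta_n/\gamma_n \to 0$, the recursion indeed drives $a_n$ to $0$:

```python
# Illustration of Lemma 2.4 with gamma_n = 1/(n+1) (condition (a) holds:
# the harmonic series diverges) and delta_n = gamma_n/(n+1) (condition (b)
# holds: delta_n/gamma_n = 1/(n+1) -> 0).  We run the worst case, equality
# in the recursion, and watch a_n decay to 0.
a = 5.0
for n in range(200000):
    g = 1.0 / (n + 1)
    d = g / (n + 1)
    a = (1 - g) * a + d

print(a)   # small: a_n -> 0, as the lemma asserts
```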

3 Main results

In this section, we give our main results. In the sequel, we assume the following conditions are satisfied.

Condition 3.1 $H$ is a real Hilbert space, $\varphi: H \to \mathbb{R}$ is a lower semi-continuous and convex function, and $\Theta: H \times H \to \mathbb{R}$ is a bifunction satisfying conditions (H1)-(H5).

Condition 3.2 $F$ is a strongly positive bounded linear operator with coefficient $0 < \rho < 1$, $f: H \to H$ is a $\tau$-contraction satisfying $\rho > \gamma\tau$, where $\gamma > 0$ is a constant, and $R: H \to 2^H$ is a maximal monotone mapping.

Condition 3.3 $A, B: C \to C$ are an $\alpha$-inverse strongly monotone operator and a $\beta$-inverse strongly monotone operator, respectively.

Condition 3.4 $\lambda$ and $\mu$ are two constants such that $0 < \lambda < 2\alpha$ and $0 < \mu < 2\beta$.

Condition 3.5 $\Omega := EP \cap I(A, R)$ is nonempty.

Now, we first consider the following scheme.

Algorithm 3.1 For any $t \in \big(0, \frac{1}{\rho - \gamma\tau}\big)$, define a net $\{x_t\}$ as follows:

$$x_t = \big[I - t(F - \gamma f)\big] J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)\, x_t.$$
(3.1)

Remark 3.2 The net $\{x_t\}$ defined by (3.1) is well defined. In fact, from Lemmas 2.1 and 2.3, we know that the mappings $I - \lambda A$, $I - \mu B$ and $S_\mu$ are nonexpansive. For any $x \in H$, we define a mapping $W_t x = [I - t(F - \gamma f)] J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)x$. We note that $I - tF$ is positive and $\|I - tF\| \le 1 - t\rho$. Hence we have

$$\begin{aligned}
\|W_t x - W_t y\| &= \big\|\big[I - t(F - \gamma f)\big] J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)x - \big[I - t(F - \gamma f)\big] J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)y\big\| \\
&\le \|I - tF\|\,\big\|J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)x - J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)y\big\| \\
&\quad + t\gamma\big\|f\big(J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)x\big) - f\big(J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)y\big)\big\| \\
&\le (1 - \rho t)\|x - y\| + t\gamma\tau\|x - y\| \\
&= \big[1 - (\rho - \gamma\tau)t\big]\|x - y\|.
\end{aligned}$$

This shows that $W_t$ is a contraction. Therefore, $W_t$ has a unique fixed point, which is denoted by $x_t$.
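As a sanity check on the implicit scheme, here is a toy computation of $x_t$ in $H = \mathbb{R}$ (all operators below are hypothetical choices of ours, not prescribed by the paper): $Fx = 0.8x$ ($\rho = 0.8$), $f(x) = 0.5x + 1$ ($\tau = 0.5$), $\gamma = 1$, so $\rho > \gamma\tau$, with the nonexpansive composite $J_{R,\lambda}(I - \lambda A)S_\mu(I - \mu B)$ replaced by the projection onto $\Omega = [0, 1]$. The fixed point $x_t$ of $W_t$ is computed by Banach iteration, exactly as Remark 3.2 suggests; in this example $x_t = 1 + 0.7t$, which tends to the VI solution $\tilde{x} = 1$ as $t \to 0^+$.

```python
# Toy run of the implicit scheme (3.1); illustrative operators of our own.
P = lambda x: min(max(x, 0.0), 1.0)        # stand-in nonexpansive composite
Fgf = lambda x: 0.8 * x - (0.5 * x + 1.0)  # (F - gamma*f)x = 0.3x - 1

def implicit(t, x=0.0, iters=200):
    """x_t = fixed point of W_t, found by Banach iteration (Remark 3.2)."""
    for _ in range(iters):
        x = P(x) - t * Fgf(P(x))           # x <- W_t(x)
    return x

for t in (0.5, 0.1, 0.01):
    print(t, implicit(t))                  # x_t = 1 + 0.7*t -> 1 as t -> 0+
```

The contraction coefficient here is $1 - (\rho - \gamma\tau)t = 1 - 0.3t$, so the inner loop converges for every $t \in (0, 1/0.3)$.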

Theorem 3.3 The net $\{x_t\}$ defined by (3.1) converges strongly, as $t \to 0^+$, to the unique solution $\tilde{x} \in \Omega$ of the following variational inequality:

$$\big\langle (F - \gamma f)\tilde{x},\; y - \tilde{x}\big\rangle \ge 0, \quad \forall y \in \Omega.$$
(3.2)

Remark 3.4 First, we can check easily that $F - \gamma f$ is strongly monotone with coefficient $\rho - \gamma\tau$. Now, we show the uniqueness of the solution of the variational inequality (3.2). Suppose that $x^* \in \Omega$ and $\tilde{x} \in \Omega$ are both solutions of (3.2). Then we have

$$\big\langle (F - \gamma f)x^*,\; \tilde{x} - x^*\big\rangle \ge 0, \qquad \big\langle (F - \gamma f)\tilde{x},\; x^* - \tilde{x}\big\rangle \ge 0.$$

Adding up the last two inequalities gives

$$\big\langle (F - \gamma f)\tilde{x} - (F - \gamma f)x^*,\; \tilde{x} - x^*\big\rangle \le 0.$$

The strong monotonicity of $F - \gamma f$ implies that $\tilde{x} = x^*$, and so the uniqueness is proved.

Next, we give the detailed proof of Theorem 3.3.

Proof Pick $x^* \in \Omega$. It is clear that $S_\mu(x^* - \mu Bx^*) = J_{R,\lambda}(x^* - \lambda Ax^*) = x^*$. Set $z_t = S_\mu(x_t - \mu Bx_t)$ and $y_t = J_{R,\lambda}(z_t - \lambda Az_t)$ for all $t \in \big(0, \frac{1}{\rho - \gamma\tau}\big)$. It follows from Lemma 2.3 that

$$\|y_t - x^*\| = \big\|J_{R,\lambda}(z_t - \lambda Az_t) - J_{R,\lambda}(x^* - \lambda Ax^*)\big\| \le \big\|(z_t - \lambda Az_t) - (x^* - \lambda Ax^*)\big\| \le \|z_t - x^*\|$$

and

$$\begin{aligned}
\|z_t - x^*\|^2 &= \big\|S_\mu(x_t - \mu Bx_t) - S_\mu(x^* - \mu Bx^*)\big\|^2 \\
&\le \big\|(x_t - \mu Bx_t) - (x^* - \mu Bx^*)\big\|^2 \\
&\le \|x_t - x^*\|^2 + \mu(\mu - 2\beta)\|Bx_t - Bx^*\|^2 \\
&\le \|x_t - x^*\|^2.
\end{aligned}$$
(3.3)

Therefore, we have

$$\|y_t - x^*\| \le \|x_t - x^*\|.$$

From (3.1), we get

$$\begin{aligned}
\|x_t - x^*\| &= \big\|\big[I - t(F - \gamma f)\big]y_t - x^*\big\| \\
&\le \big\|(I - tF)(y_t - x^*)\big\| + t\gamma\|f(y_t) - f(x^*)\| + t\big\|(F - \gamma f)x^*\big\| \\
&\le (1 - \rho t)\|y_t - x^*\| + t\gamma\tau\|y_t - x^*\| + t\big\|(F - \gamma f)x^*\big\| \\
&\le \big[1 - (\rho - \gamma\tau)t\big]\|x_t - x^*\| + (\rho - \gamma\tau)t\,\frac{\|(F - \gamma f)x^*\|}{\rho - \gamma\tau},
\end{aligned}$$

and so,

$$\|x_t - x^*\| \le \frac{\|(F - \gamma f)x^*\|}{\rho - \gamma\tau}.$$

Therefore, the net $\{x_t\}$ is bounded, and so $\{y_t\}$, $\{z_t\}$, $\{Fy_t\}$ and $\{f(y_t)\}$ are all bounded. It follows from (3.3) and Lemma 2.3 that

$$\begin{aligned}
\|y_t - x^*\|^2 &= \big\|J_{R,\lambda}(z_t - \lambda Az_t) - J_{R,\lambda}(x^* - \lambda Ax^*)\big\|^2 \\
&\le \big\|(z_t - \lambda Az_t) - (x^* - \lambda Ax^*)\big\|^2 \\
&\le \|z_t - x^*\|^2 + \lambda(\lambda - 2\alpha)\|Az_t - Ax^*\|^2 \\
&\le \|x_t - x^*\|^2 + \mu(\mu - 2\beta)\|Bx_t - Bx^*\|^2 + \lambda(\lambda - 2\alpha)\|Az_t - Ax^*\|^2.
\end{aligned}$$
(3.4)

By (3.1), we obtain

$$\begin{aligned}
\|x_t - x^*\|^2 &= \big\|\big[I - t(F - \gamma f)\big]y_t - x^*\big\|^2 \\
&\le \big[\|y_t - x^*\| + t\|(F - \gamma f)y_t\|\big]^2 \\
&= \|y_t - x^*\|^2 + t\big(2\|y_t - x^*\|\,\|(F - \gamma f)y_t\| + t\|(F - \gamma f)y_t\|^2\big) \\
&\le \|y_t - x^*\|^2 + tM,
\end{aligned}$$
(3.5)

where M>0 is some constant satisfying

$$\sup\Big\{ 2\|y_t - x^*\|\,\|(F - \gamma f)y_t\| + t\|(F - \gamma f)y_t\|^2 : t \in \big(0, \tfrac{1}{\rho - \gamma\tau}\big) \Big\} \le M.$$

By (3.4) and (3.5), we have

$$\|y_t - x^*\|^2 \le \|y_t - x^*\|^2 + tM + \mu(\mu - 2\beta)\|Bx_t - Bx^*\|^2 + \lambda(\lambda - 2\alpha)\|Az_t - Ax^*\|^2,$$

and so,

$$\mu(2\beta - \mu)\|Bx_t - Bx^*\|^2 + \lambda(2\alpha - \lambda)\|Az_t - Ax^*\|^2 \le tM,$$

which implies that

$$\lim_{t \to 0}\|Az_t - Ax^*\| = 0, \qquad \lim_{t \to 0}\|Bx_t - Bx^*\| = 0.$$

Since S μ is firmly nonexpansive, we have

$$\begin{aligned}
\|z_t - x^*\|^2 &= \big\|S_\mu(x_t - \mu Bx_t) - S_\mu(x^* - \mu Bx^*)\big\|^2 \\
&\le \big\langle x_t - \mu Bx_t - (x^* - \mu Bx^*),\; z_t - x^*\big\rangle \\
&= \tfrac{1}{2}\Big(\big\|x_t - \mu Bx_t - (x^* - \mu Bx^*)\big\|^2 + \|z_t - x^*\|^2 - \big\|x_t - \mu Bx_t - (x^* - \mu Bx^*) - (z_t - x^*)\big\|^2\Big) \\
&\le \tfrac{1}{2}\Big(\|x_t - x^*\|^2 + \|z_t - x^*\|^2 - \big\|x_t - z_t - \mu(Bx_t - Bx^*)\big\|^2\Big) \\
&= \tfrac{1}{2}\Big(\|x_t - x^*\|^2 + \|z_t - x^*\|^2 - \|x_t - z_t\|^2 + 2\mu\big\langle Bx_t - Bx^*,\; x_t - z_t\big\rangle - \mu^2\|Bx_t - Bx^*\|^2\Big),
\end{aligned}$$

and so,

$$\|z_t - x^*\|^2 \le \|x_t - x^*\|^2 - \|x_t - z_t\|^2 + 2\mu\|Bx_t - Bx^*\|\,\|x_t - z_t\|.$$
(3.6)

Since J R , λ is 1-inverse strongly monotone, we have

$$\begin{aligned}
\|y_t - x^*\|^2 &= \big\|J_{R,\lambda}(z_t - \lambda Az_t) - J_{R,\lambda}(x^* - \lambda Ax^*)\big\|^2 \\
&\le \big\langle z_t - \lambda Az_t - (x^* - \lambda Ax^*),\; y_t - x^*\big\rangle \\
&= \tfrac{1}{2}\Big(\big\|z_t - \lambda Az_t - (x^* - \lambda Ax^*)\big\|^2 + \|y_t - x^*\|^2 - \big\|z_t - \lambda Az_t - (x^* - \lambda Ax^*) - (y_t - x^*)\big\|^2\Big) \\
&\le \tfrac{1}{2}\Big(\|z_t - x^*\|^2 + \|y_t - x^*\|^2 - \big\|z_t - y_t - \lambda(Az_t - Ax^*)\big\|^2\Big) \\
&= \tfrac{1}{2}\Big(\|z_t - x^*\|^2 + \|y_t - x^*\|^2 - \|z_t - y_t\|^2 + 2\lambda\big\langle Az_t - Ax^*,\; z_t - y_t\big\rangle - \lambda^2\|Az_t - Ax^*\|^2\Big),
\end{aligned}$$

which implies that

$$\|y_t - x^*\|^2 \le \|z_t - x^*\|^2 - \|z_t - y_t\|^2 + 2\lambda\|Az_t - Ax^*\|\,\|z_t - y_t\|.$$
(3.7)

Thus, by (3.6) and (3.7), we obtain

$$\begin{aligned}
\|y_t - x^*\|^2 &\le \|x_t - x^*\|^2 - \|x_t - z_t\|^2 + 2\mu\|Bx_t - Bx^*\|\,\|x_t - z_t\| \\
&\quad - \|z_t - y_t\|^2 + 2\lambda\|Az_t - Ax^*\|\,\|z_t - y_t\|.
\end{aligned}$$
(3.8)

Substituting (3.5) into (3.8), we get

$$\begin{aligned}
\|y_t - x^*\|^2 &\le \|y_t - x^*\|^2 + tM - \|x_t - z_t\|^2 + 2\mu\|Bx_t - Bx^*\|\,\|x_t - z_t\| \\
&\quad - \|z_t - y_t\|^2 + 2\lambda\|Az_t - Ax^*\|\,\|z_t - y_t\|.
\end{aligned}$$

Thus, we derive

$$\|x_t - z_t\|^2 + \|z_t - y_t\|^2 \le tM + 2\mu\|Bx_t - Bx^*\|\,\|x_t - z_t\| + 2\lambda\|Az_t - Ax^*\|\,\|z_t - y_t\|,$$

and so,

$$\lim_{t \to 0}\|x_t - z_t\| = 0, \qquad \lim_{t \to 0}\|z_t - y_t\| = 0.$$

By (3.1), we obtain

$$\begin{aligned}
\|x_t - x^*\|^2 &= \big\langle \big[I - t(F - \gamma f)\big]y_t - x^*,\; x_t - x^*\big\rangle \\
&= \big\langle \big[I - t(F - \gamma f)\big]y_t - \big[I - t(F - \gamma f)\big]x^*,\; x_t - x^*\big\rangle - t\big\langle (F - \gamma f)x^*,\; x_t - x^*\big\rangle \\
&\le (1 - \rho t)\|y_t - x^*\|\,\|x_t - x^*\| + t\gamma\|f(y_t) - f(x^*)\|\,\|x_t - x^*\| - t\big\langle (F - \gamma f)x^*,\; x_t - x^*\big\rangle \\
&\le \big[1 - (\rho - \gamma\tau)t\big]\|x_t - x^*\|^2 - t\big\langle (F - \gamma f)x^*,\; x_t - x^*\big\rangle.
\end{aligned}$$

It follows that

$$\|x_t - x^*\|^2 \le \frac{1}{\rho - \gamma\tau}\big\langle (F - \gamma f)x^*,\; x^* - x_t\big\rangle.$$
(3.9)

Next, we show that the net $\{x_t\}$ is relatively norm-compact as $t \to 0^+$. In fact, assume that $\{t_n\} \subset (0, 1)$ is such that $t_n \to 0^+$ as $n \to \infty$. Put $x_n := x_{t_n}$, $y_n := y_{t_n}$ and $z_n := z_{t_n}$. From (3.9), we have

$$\|x_n - x^*\|^2 \le \frac{1}{\rho - \gamma\tau}\big\langle (F - \gamma f)x^*,\; x^* - x_n\big\rangle, \quad \forall x^* \in \Omega.$$
(3.10)

Since { x n } is bounded, without loss of generality, we may assume that { x n } converges weakly to a point x ˜ H.

Next, we prove that $\tilde{x} \in \Omega$. We first show that $\tilde{x} \in EP$. From $z_n = S_\mu(x_n - \mu Bx_n)$, we know that

$$\Theta(z_n, y) + \varphi(y) - \varphi(z_n) + \frac{1}{\mu}\big\langle y - z_n,\; z_n - (x_n - \mu Bx_n)\big\rangle \ge 0, \quad \forall y \in H.$$

It follows from (H2) that

$$\varphi(y) - \varphi(z_n) + \frac{1}{\mu}\big\langle y - z_n,\; z_n - (x_n - \mu Bx_n)\big\rangle \ge \Theta(y, z_n), \quad \forall y \in H,$$

and so,

$$\varphi(y) - \varphi(z_{n_i}) + \Big\langle y - z_{n_i},\; \frac{z_{n_i} - (x_{n_i} - \mu Bx_{n_i})}{\mu}\Big\rangle \ge \Theta(y, z_{n_i}), \quad \forall y \in H.$$
(3.11)

For any t(0,1] and yH, let u t =ty+(1t) x ˜ . It follows from (3.11) that

$$\begin{aligned}
\big\langle u_t - z_{n_i},\; Bu_t\big\rangle &\ge \big\langle u_t - z_{n_i},\; Bu_t\big\rangle - \varphi(u_t) + \varphi(z_{n_i}) - \Big\langle u_t - z_{n_i},\; \frac{z_{n_i} - (x_{n_i} - \mu Bx_{n_i})}{\mu}\Big\rangle + \Theta(u_t, z_{n_i}) \\
&= \big\langle u_t - z_{n_i},\; Bu_t - Bz_{n_i}\big\rangle + \big\langle u_t - z_{n_i},\; Bz_{n_i} - Bx_{n_i}\big\rangle - \varphi(u_t) + \varphi(z_{n_i}) \\
&\quad - \Big\langle u_t - z_{n_i},\; \frac{z_{n_i} - x_{n_i}}{\mu}\Big\rangle + \Theta(u_t, z_{n_i}).
\end{aligned}$$

Since $\|z_{n_i} - x_{n_i}\| \to 0$, we have $\|Bz_{n_i} - Bx_{n_i}\| \to 0$. Further, by the monotonicity of $B$, we have $\langle u_t - z_{n_i},\; Bu_t - Bz_{n_i}\rangle \ge 0$. Thus, from (H4), the weak lower semi-continuity of $\varphi$, $\frac{z_{n_i} - x_{n_i}}{\mu} \to 0$ and $z_{n_i} \rightharpoonup \tilde{x}$, it follows that

$$\big\langle u_t - \tilde{x},\; Bu_t\big\rangle \ge -\varphi(u_t) + \varphi(\tilde{x}) + \Theta(u_t, \tilde{x}).$$
(3.12)

From conditions (H1), (H4) and (3.12), we also have

$$\begin{aligned}
0 &= \Theta(u_t, u_t) + \varphi(u_t) - \varphi(u_t) \\
&\le t\Theta(u_t, y) + (1 - t)\Theta(u_t, \tilde{x}) + t\varphi(y) + (1 - t)\varphi(\tilde{x}) - \varphi(u_t) \\
&= t\big[\Theta(u_t, y) + \varphi(y) - \varphi(u_t)\big] + (1 - t)\big[\Theta(u_t, \tilde{x}) + \varphi(\tilde{x}) - \varphi(u_t)\big] \\
&\le t\big[\Theta(u_t, y) + \varphi(y) - \varphi(u_t)\big] + (1 - t)\big\langle u_t - \tilde{x},\; Bu_t\big\rangle \\
&= t\big[\Theta(u_t, y) + \varphi(y) - \varphi(u_t)\big] + (1 - t)t\big\langle y - \tilde{x},\; Bu_t\big\rangle,
\end{aligned}$$

and hence

$$0 \le \Theta(u_t, y) + \varphi(y) - \varphi(u_t) + (1 - t)\big\langle y - \tilde{x},\; Bu_t\big\rangle.$$

Letting t0, we have

$$\Theta(\tilde{x}, y) + \varphi(y) - \varphi(\tilde{x}) + \big\langle y - \tilde{x},\; B\tilde{x}\big\rangle \ge 0, \quad \forall y \in H.$$

This implies that x ˜ EP.

Next, we show that $\tilde{x} \in I(A, R)$. In fact, since $A$ is $\alpha$-inverse strongly monotone, $A$ is a Lipschitz-continuous monotone mapping. It follows from Lemma 2.2 that $R + A$ is maximal monotone. Let $(v, g) \in G(R + A)$, i.e., $g - Av \in R(v)$. Again, since $y_{n_i} = J_{R,\lambda}(z_{n_i} - \lambda Az_{n_i})$, we have $z_{n_i} - \lambda Az_{n_i} \in (I + \lambda R)(y_{n_i})$, i.e., $\frac{1}{\lambda}(z_{n_i} - y_{n_i} - \lambda Az_{n_i}) \in R(y_{n_i})$. By virtue of the maximal monotonicity of $R$, we have

$$\Big\langle v - y_{n_i},\; g - Av - \frac{1}{\lambda}(z_{n_i} - y_{n_i} - \lambda Az_{n_i})\Big\rangle \ge 0,$$

and so,

$$\begin{aligned}
\big\langle v - y_{n_i},\; g\big\rangle &\ge \Big\langle v - y_{n_i},\; Av + \frac{1}{\lambda}(z_{n_i} - y_{n_i} - \lambda Az_{n_i})\Big\rangle \\
&= \Big\langle v - y_{n_i},\; Av - Ay_{n_i} + Ay_{n_i} - Az_{n_i} + \frac{1}{\lambda}(z_{n_i} - y_{n_i})\Big\rangle \\
&\ge \big\langle v - y_{n_i},\; Ay_{n_i} - Az_{n_i}\big\rangle + \Big\langle v - y_{n_i},\; \frac{1}{\lambda}(z_{n_i} - y_{n_i})\Big\rangle.
\end{aligned}$$

It follows from $\|z_n - y_n\| \to 0$, $\|Az_n - Ay_n\| \to 0$ and $y_{n_i} \rightharpoonup \tilde{x}$ that

$$\lim_{i \to \infty}\big\langle v - y_{n_i},\; g\big\rangle = \big\langle v - \tilde{x},\; g\big\rangle \ge 0.$$

It follows from the maximal monotonicity of $A + R$ that $\theta \in (R + A)(\tilde{x})$, i.e., $\tilde{x} \in I(A, R)$. Hence $\tilde{x} \in \Omega$. Therefore, substituting $\tilde{x}$ for $x^*$ in (3.10), we get

$$\|x_n - \tilde{x}\|^2 \le \frac{1}{\rho - \gamma\tau}\big\langle (F - \gamma f)\tilde{x},\; \tilde{x} - x_n\big\rangle.$$
(3.13)

Consequently, the weak convergence of $\{x_n\}$ to $\tilde{x}$ actually implies that $x_n \to \tilde{x}$ strongly. This shows the relative norm-compactness of the net $\{x_t\}$ as $t \to 0^+$.

Now, we return to (3.10). Taking the limit as $n \to \infty$ in (3.10), we get

$$\|\tilde{x} - x^*\|^2 \le \frac{1}{\rho - \gamma\tau}\big\langle (F - \gamma f)x^*,\; x^* - \tilde{x}\big\rangle, \quad \forall x^* \in \Omega.$$

In particular, $\tilde{x}$ solves the following variational inequality:

$$\tilde{x} \in \Omega, \qquad \big\langle (F - \gamma f)x^*,\; x^* - \tilde{x}\big\rangle \ge 0, \quad \forall x^* \in \Omega.$$
(3.14)

We know that the variational inequality (3.14) is equivalent to its dual variational inequality

$$\tilde{x} \in \Omega, \qquad \big\langle (F - \gamma f)\tilde{x},\; x^* - \tilde{x}\big\rangle \ge 0, \quad \forall x^* \in \Omega.$$

Thus, by the uniqueness of the solution of the variational inequality (3.2), we deduce that the entire net $\{x_t\}$ converges in norm to $\tilde{x}$ as $t \to 0^+$. This completes the proof. □

Next, we introduce an explicit scheme and prove its strong convergence to the unique solution of the variational inequality (3.2).

Algorithm 3.5 For any x 0 H, define the sequence { x n } generated iteratively by

$$x_{n+1} = \big[I - \alpha_n(F - \gamma f)\big] J_{R,\lambda}(I - \lambda A) S_\mu(I - \mu B)\, x_n, \quad n \ge 0,$$
(3.15)

where { α n } is a real sequence in [0,1].

Theorem 3.6 Assume the following conditions are also satisfied:

(C1) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=0}^\infty \alpha_n = \infty$;

(C2) $\lim_{n\to\infty} \frac{\alpha_n}{\alpha_{n+1}} = 1$.

Then the sequence $\{x_n\}$ generated by (3.15) converges strongly to the unique solution $\tilde{x} \in \Omega$ of the variational inequality (3.2).

Proof We write $z_n = S_\mu(I - \mu B)x_n$ and $y_n = J_{R,\lambda}(z_n - \lambda Az_n)$ for all $n \ge 0$. Then it follows from Lemma 2.3 that, for any $x^* \in \Omega$,

$$\|y_n - x^*\| = \big\|J_{R,\lambda}(z_n - \lambda Az_n) - J_{R,\lambda}(x^* - \lambda Ax^*)\big\| \le \big\|(z_n - \lambda Az_n) - (x^* - \lambda Ax^*)\big\| \le \|z_n - x^*\|$$

and

$$\begin{aligned}
\|z_n - x^*\|^2 &= \big\|S_\mu(x_n - \mu Bx_n) - S_\mu(x^* - \mu Bx^*)\big\|^2 \\
&\le \big\|(x_n - \mu Bx_n) - (x^* - \mu Bx^*)\big\|^2 \\
&\le \|x_n - x^*\|^2 + \mu(\mu - 2\beta)\|Bx_n - Bx^*\|^2 \\
&\le \|x_n - x^*\|^2.
\end{aligned}$$
(3.16)

Hence we have

$$\|y_n - x^*\| \le \|x_n - x^*\|.$$

By induction, it follows from (3.15) that

$$\begin{aligned}
\|x_{n+1} - x^*\| &= \big\|\big[I - \alpha_n(F - \gamma f)\big]y_n - x^*\big\| \\
&\le \big\|\big[I - \alpha_n(F - \gamma f)\big]y_n - \big[I - \alpha_n(F - \gamma f)\big]x^*\big\| + \alpha_n\big\|(F - \gamma f)x^*\big\| \\
&\le \big\|(I - \alpha_n F)(y_n - x^*)\big\| + \alpha_n\gamma\|f(y_n) - f(x^*)\| + \alpha_n\big\|(F - \gamma f)x^*\big\| \\
&\le (1 - \rho\alpha_n)\|y_n - x^*\| + \alpha_n\gamma\tau\|y_n - x^*\| + \alpha_n\big\|(F - \gamma f)x^*\big\| \\
&\le \big[1 - (\rho - \gamma\tau)\alpha_n\big]\|x_n - x^*\| + (\rho - \gamma\tau)\alpha_n\,\frac{\|(F - \gamma f)x^*\|}{\rho - \gamma\tau} \\
&\le \max\Big\{\|x_n - x^*\|,\; \frac{\|(F - \gamma f)x^*\|}{\rho - \gamma\tau}\Big\} \\
&\le \cdots \le \max\Big\{\|x_0 - x^*\|,\; \frac{\|(F - \gamma f)x^*\|}{\rho - \gamma\tau}\Big\}.
\end{aligned}$$

Therefore, $\{x_n\}$ is bounded, and so $\{z_n\}$, $\{y_n\}$, $\{Fy_n\}$ and $\{f(y_n)\}$ are all bounded. It follows from (3.15) that

$$\begin{aligned}
\|x_{n+2} - x_{n+1}\| &= \big\|\big[I - \alpha_{n+1}(F - \gamma f)\big]y_{n+1} - \big[I - \alpha_n(F - \gamma f)\big]y_n\big\| \\
&\le \big\|\big[I - \alpha_{n+1}(F - \gamma f)\big]y_{n+1} - \big[I - \alpha_{n+1}(F - \gamma f)\big]y_n\big\| + |\alpha_{n+1} - \alpha_n|\,\big\|(F - \gamma f)y_n\big\| \\
&\le \|I - \alpha_{n+1}F\|\,\|y_{n+1} - y_n\| + \alpha_{n+1}\gamma\|f(y_{n+1}) - f(y_n)\| + |\alpha_{n+1} - \alpha_n|\,\big\|(F - \gamma f)y_n\big\| \\
&\le (1 - \rho\alpha_{n+1})\|y_{n+1} - y_n\| + \alpha_{n+1}\gamma\tau\|y_{n+1} - y_n\| + |\alpha_{n+1} - \alpha_n|\,\big\|(F - \gamma f)y_n\big\| \\
&= \big[1 - (\rho - \gamma\tau)\alpha_{n+1}\big]\|y_{n+1} - y_n\| + |\alpha_{n+1} - \alpha_n|\,\big\|(F - \gamma f)y_n\big\|.
\end{aligned}$$
(3.17)

Note that

$$\begin{aligned}
\|y_{n+1} - y_n\| &= \big\|J_{R,\lambda}(z_{n+1} - \lambda Az_{n+1}) - J_{R,\lambda}(z_n - \lambda Az_n)\big\| \\
&\le \big\|(z_{n+1} - \lambda Az_{n+1}) - (z_n - \lambda Az_n)\big\| \le \|z_{n+1} - z_n\| \\
&= \big\|S_\mu(x_{n+1} - \mu Bx_{n+1}) - S_\mu(x_n - \mu Bx_n)\big\| \\
&\le \big\|(x_{n+1} - \mu Bx_{n+1}) - (x_n - \mu Bx_n)\big\| \le \|x_{n+1} - x_n\|.
\end{aligned}$$
(3.18)

Substituting (3.18) into (3.17), we get

$$\begin{aligned}
\|x_{n+2} - x_{n+1}\| &\le \big[1 - (\rho - \gamma\tau)\alpha_{n+1}\big]\|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n|\,\big\|(F - \gamma f)y_n\big\| \\
&= \big[1 - (\rho - \gamma\tau)\alpha_{n+1}\big]\|x_{n+1} - x_n\| + (\rho - \gamma\tau)\alpha_{n+1}\,\frac{|\alpha_{n+1} - \alpha_n|}{\alpha_{n+1}}\,\frac{\|(F - \gamma f)y_n\|}{\rho - \gamma\tau}.
\end{aligned}$$

Notice that $\lim_{n\to\infty}\big|1 - \frac{\alpha_n}{\alpha_{n+1}}\big| = 0$. This, together with the last inequality and Lemma 2.4, implies that

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0.$$

Again, using Lemma 2.3 and (3.16), we get

$$\begin{aligned}
\|y_n - x^*\|^2 &= \big\|J_{R,\lambda}(z_n - \lambda Az_n) - J_{R,\lambda}(x^* - \lambda Ax^*)\big\|^2 \\
&\le \big\|(z_n - \lambda Az_n) - (x^* - \lambda Ax^*)\big\|^2 \\
&\le \|z_n - x^*\|^2 + \lambda(\lambda - 2\alpha)\|Az_n - Ax^*\|^2 \\
&\le \|x_n - x^*\|^2 + \mu(\mu - 2\beta)\|Bx_n - Bx^*\|^2 + \lambda(\lambda - 2\alpha)\|Az_n - Ax^*\|^2.
\end{aligned}$$
(3.19)

By (3.15), we obtain

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \big\|\big[I - \alpha_n(F - \gamma f)\big]y_n - x^*\big\|^2 \\
&\le \big[\|y_n - x^*\| + \alpha_n\|(F - \gamma f)y_n\|\big]^2 \\
&= \|y_n - x^*\|^2 + \alpha_n\big[2\|y_n - x^*\|\,\|(F - \gamma f)y_n\| + \alpha_n\|(F - \gamma f)y_n\|^2\big] \\
&\le \|y_n - x^*\|^2 + \alpha_n M_1,
\end{aligned}$$
(3.20)

where M 1 >0 is a constant satisfying

$$\sup\Big\{ 2\|y_n - x^*\|\,\|(F - \gamma f)y_n\| + \alpha_n\|(F - \gamma f)y_n\|^2 : n \ge 1 \Big\} \le M_1.$$

From (3.19) and (3.20), we have

$$\|x_{n+1} - x^*\|^2 \le \|x_n - x^*\|^2 + \mu(\mu - 2\beta)\|Bx_n - Bx^*\|^2 + \lambda(\lambda - 2\alpha)\|Az_n - Ax^*\|^2 + \alpha_n M_1,$$

and so,

$$\begin{aligned}
\mu(2\beta - \mu)\|Bx_n - Bx^*\|^2 + \lambda(2\alpha - \lambda)\|Az_n - Ax^*\|^2 &\le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \alpha_n M_1 \\
&\le \big(\|x_n - x^*\| + \|x_{n+1} - x^*\|\big)\|x_{n+1} - x_n\| + \alpha_n M_1,
\end{aligned}$$

which implies that

$$\lim_{n\to\infty}\|Bx_n - Bx^*\| = 0, \qquad \lim_{n\to\infty}\|Az_n - Ax^*\| = 0.$$

Since $S_\mu$ is firmly nonexpansive, we have

$$\begin{aligned}
\|z_n - x^*\|^2 &= \big\|S_\mu(x_n - \mu Bx_n) - S_\mu(x^* - \mu Bx^*)\big\|^2 \\
&\le \big\langle x_n - \mu Bx_n - (x^* - \mu Bx^*),\; z_n - x^*\big\rangle \\
&= \tfrac{1}{2}\Big(\big\|x_n - \mu Bx_n - (x^* - \mu Bx^*)\big\|^2 + \|z_n - x^*\|^2 - \big\|x_n - \mu Bx_n - (x^* - \mu Bx^*) - (z_n - x^*)\big\|^2\Big) \\
&\le \tfrac{1}{2}\Big(\|x_n - x^*\|^2 + \|z_n - x^*\|^2 - \big\|x_n - z_n - \mu(Bx_n - Bx^*)\big\|^2\Big) \\
&= \tfrac{1}{2}\Big(\|x_n - x^*\|^2 + \|z_n - x^*\|^2 - \|x_n - z_n\|^2 + 2\mu\big\langle Bx_n - Bx^*,\; x_n - z_n\big\rangle - \mu^2\|Bx_n - Bx^*\|^2\Big),
\end{aligned}$$

and hence

$$\|z_n - x^*\|^2 \le \|x_n - x^*\|^2 - \|x_n - z_n\|^2 + 2\mu\|Bx_n - Bx^*\|\,\|x_n - z_n\|.$$
(3.21)

Since J R , λ is 1-inverse strongly monotone, we have

$$\begin{aligned}
\|y_n - x^*\|^2 &= \big\|J_{R,\lambda}(z_n - \lambda Az_n) - J_{R,\lambda}(x^* - \lambda Ax^*)\big\|^2 \\
&\le \big\langle z_n - \lambda Az_n - (x^* - \lambda Ax^*),\; y_n - x^*\big\rangle \\
&= \tfrac{1}{2}\Big(\big\|z_n - \lambda Az_n - (x^* - \lambda Ax^*)\big\|^2 + \|y_n - x^*\|^2 - \big\|z_n - \lambda Az_n - (x^* - \lambda Ax^*) - (y_n - x^*)\big\|^2\Big) \\
&\le \tfrac{1}{2}\Big(\|z_n - x^*\|^2 + \|y_n - x^*\|^2 - \big\|z_n - y_n - \lambda(Az_n - Ax^*)\big\|^2\Big) \\
&= \tfrac{1}{2}\Big(\|z_n - x^*\|^2 + \|y_n - x^*\|^2 - \|z_n - y_n\|^2 + 2\lambda\big\langle Az_n - Ax^*,\; z_n - y_n\big\rangle - \lambda^2\|Az_n - Ax^*\|^2\Big),
\end{aligned}$$

which implies that

$$\|y_n - x^*\|^2 \le \|z_n - x^*\|^2 - \|z_n - y_n\|^2 + 2\lambda\|Az_n - Ax^*\|\,\|z_n - y_n\|.$$
(3.22)

Thus, by (3.21) and (3.22), we obtain

$$\begin{aligned}
\|y_n - x^*\|^2 &\le \|x_n - x^*\|^2 - \|x_n - z_n\|^2 + 2\mu\|Bx_n - Bx^*\|\,\|x_n - z_n\| \\
&\quad - \|z_n - y_n\|^2 + 2\lambda\|Az_n - Ax^*\|\,\|z_n - y_n\|.
\end{aligned}$$
(3.23)

Substituting (3.20) into (3.23), we get

$$\begin{aligned}
\|y_n - x^*\|^2 &\le \|y_n - x^*\|^2 + \alpha_n M_1 - \|x_n - z_n\|^2 + 2\mu\|Bx_n - Bx^*\|\,\|x_n - z_n\| \\
&\quad - \|z_n - y_n\|^2 + 2\lambda\|Az_n - Ax^*\|\,\|z_n - y_n\|.
\end{aligned}$$

Thus, we derive

$$\|x_n - z_n\|^2 + \|z_n - y_n\|^2 \le \alpha_n M_1 + 2\mu\|Bx_n - Bx^*\|\,\|x_n - z_n\| + 2\lambda\|Az_n - Ax^*\|\,\|z_n - y_n\|,$$

and so,

$$\lim_{n\to\infty}\|x_n - z_n\| = 0, \qquad \lim_{n\to\infty}\|z_n - y_n\| = 0.$$

Next, we prove that

$$\limsup_{n\to\infty} \frac{2}{\rho - \gamma\tau}\big\langle (F - \gamma f)\tilde{x},\; \tilde{x} - x_n\big\rangle \le 0,$$
(3.24)

where x ˜ is the unique solution of the variational inequality (3.2). To see this, we can take a subsequence { x n k } of { x n } satisfying

$$\limsup_{n\to\infty} \frac{2}{\rho - \gamma\tau}\big\langle (F - \gamma f)\tilde{x},\; \tilde{x} - x_n\big\rangle = \lim_{k\to\infty} \frac{2}{\rho - \gamma\tau}\big\langle (F - \gamma f)\tilde{x},\; \tilde{x} - x_{n_k}\big\rangle$$
(3.25)

such that $\{x_{n_k}\}$ converges weakly to a point $\hat{x}$ as $k \to \infty$. By an argument similar to that in the proof of Theorem 3.3, we can deduce that $\hat{x} \in \Omega$. Since $\tilde{x}$ solves the variational inequality (3.2), it follows from (3.25) that

$$\limsup_{n\to\infty} \frac{2}{\rho - \gamma\tau}\big\langle (F - \gamma f)\tilde{x},\; \tilde{x} - x_n\big\rangle = \frac{2}{\rho - \gamma\tau}\big\langle (F - \gamma f)\tilde{x},\; \tilde{x} - \hat{x}\big\rangle \le 0.$$

Finally, we show that $x_n \to \tilde{x}$ as $n \to \infty$. It follows from (3.15) that

$$\begin{aligned}
\|x_{n+1} - \tilde{x}\|^2 &= \big\langle x_{n+1} - \tilde{x},\; x_{n+1} - \tilde{x}\big\rangle = \big\langle \big[I - \alpha_n(F - \gamma f)\big]y_n - \tilde{x},\; x_{n+1} - \tilde{x}\big\rangle \\
&= \big\langle \big[I - \alpha_n(F - \gamma f)\big]y_n - \big[I - \alpha_n(F - \gamma f)\big]\tilde{x},\; x_{n+1} - \tilde{x}\big\rangle - \alpha_n\big\langle (F - \gamma f)\tilde{x},\; x_{n+1} - \tilde{x}\big\rangle \\
&\le (1 - \alpha_n\rho)\|y_n - \tilde{x}\|\,\|x_{n+1} - \tilde{x}\| + \alpha_n\gamma\|f(y_n) - f(\tilde{x})\|\,\|x_{n+1} - \tilde{x}\| - \alpha_n\big\langle (F - \gamma f)\tilde{x},\; x_{n+1} - \tilde{x}\big\rangle \\
&\le \big[1 - (\rho - \gamma\tau)\alpha_n\big]\|y_n - \tilde{x}\|\,\|x_{n+1} - \tilde{x}\| - \alpha_n\big\langle (F - \gamma f)\tilde{x},\; x_{n+1} - \tilde{x}\big\rangle \\
&\le \big[1 - (\rho - \gamma\tau)\alpha_n\big]\|x_n - \tilde{x}\|\,\|x_{n+1} - \tilde{x}\| - \alpha_n\big\langle (F - \gamma f)\tilde{x},\; x_{n+1} - \tilde{x}\big\rangle \\
&\le \frac{1 - (\rho - \gamma\tau)\alpha_n}{2}\|x_n - \tilde{x}\|^2 + \frac{1}{2}\|x_{n+1} - \tilde{x}\|^2 - \alpha_n\big\langle (F - \gamma f)\tilde{x},\; x_{n+1} - \tilde{x}\big\rangle,
\end{aligned}$$

that is,

$$\|x_{n+1} - \tilde{x}\|^2 \le \big[1 - (\rho - \gamma\tau)\alpha_n\big]\|x_n - \tilde{x}\|^2 - 2\alpha_n\big\langle (F - \gamma f)\tilde{x},\; x_{n+1} - \tilde{x}\big\rangle = (1 - \delta_n)\|x_n - \tilde{x}\|^2 + \delta_n\sigma_n,$$

where $\delta_n = (\rho - \gamma\tau)\alpha_n$ and $\sigma_n = \frac{2}{\rho - \gamma\tau}\big\langle (F - \gamma f)\tilde{x},\; \tilde{x} - x_{n+1}\big\rangle$. It is easy to see that $\sum_{n=1}^\infty \delta_n = \infty$ and $\limsup_{n\to\infty} \sigma_n \le 0$. Hence, by Lemma 2.4, we conclude that the sequence $\{x_n\}$ converges strongly to $\tilde{x}$. This completes the proof. □
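As a companion sanity check on Theorem 3.6, here is a toy run of the explicit scheme (3.15) in $H = \mathbb{R}$ (all operators below are hypothetical choices of ours, not prescribed by the paper): $Fx = 0.8x$ ($\rho = 0.8$), $f(x) = 0.5x + 1$ ($\tau = 0.5$), $\gamma = 1$, so $\rho > \gamma\tau$, the composite $J_{R,\lambda}(I - \lambda A)S_\mu(I - \mu B)$ replaced by the projection onto $\Omega = [0, 1]$, and $\alpha_n = 1/(n+1)$, which satisfies (C1) and (C2). On this $\Omega$, the unique solution of (3.2) is $\tilde{x} = 1$.

```python
# Toy run of the explicit scheme (3.15); illustrative operators of our own.
P = lambda x: min(max(x, 0.0), 1.0)        # stand-in nonexpansive composite
Fgf = lambda x: 0.8 * x - (0.5 * x + 1.0)  # (F - gamma*f)x = 0.3x - 1

x = 5.0
for n in range(20000):
    alpha = 1.0 / (n + 1)                  # satisfies (C1) and (C2)
    y = P(x)
    x = y - alpha * Fgf(y)                 # x_{n+1} = [I - alpha_n(F - gamma f)] y_n

print(abs(x - 1.0))                        # tends to 0: x_n -> x~ = 1
```

The iterates quickly enter a neighborhood of $\Omega$ and then approach $\tilde{x} = 1$ at the slow rate dictated by the vanishing steps $\alpha_n$, which is consistent with the strong convergence asserted by the theorem.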

References

  1. Agarwal RP, Cho YJ, Petrot N: Systems of general nonlinear set-valued mixed variational inequalities problems in Hilbert spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 31
  2. Chai YF, Cho YJ, Li J: Some characterizations of ideal point in vector optimization problems. J. Inequal. Appl. 2008, 2008: Article ID 231845
  3. Cho YJ, Petrot N: An optimization problem related to generalized equilibrium and fixed point problems with applications. Fixed Point Theory 2010, 11: 237–250.
  4. Stampacchia G: Formes bilineaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 1964, 258: 4413–4416.
  5. Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–517.
  6. Yao JC: Variational inequalities with generalized monotone operators. Math. Oper. Res. 1994, 19: 691–705.
  7. Noor MA: Some developments in general variational inequalities. Appl. Math. Comput. 2004, 152: 199–277.
  8. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428.
  9. Iiduka H, Takahashi W: Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. 2005, 61: 341–350.
  10. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88.
  11. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898.
  12. Liu F, Nashed MZ: Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Anal. 1998, 6: 313–344.
  13. Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186: 1551–1558.
  14. Yao Y, Cho YJ, Liou YC: Iterative algorithms for variational inclusions, mixed equilibrium problems and fixed point problems approach to optimization problems. Cent. Eur. J. Math. 2011, 9: 640–656.
  15. Yao Y, Cho YJ, Liou YC: Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212: 242–250.
  16. Zhu J, Zhong CK, Cho YJ: A generalized variational principle and vector optimization. J. Optim. Theory Appl. 2000, 106: 201–218.
  17. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
  18. Combettes PL, Hirstoaga SA: Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78: 29–41.
  19. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.
  20. Flam SD, Antipin AS: Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78: 29–41.
  21. Zeng LC, Wu SY, Yao JC: Generalized KKM theorem with applications to generalized minimax inequalities and generalized equilibrium problems. Taiwan. J. Math. 2006, 10: 1497–1514.
  22. Chadli O, Wong NC, Yao JC: Equilibrium problems with applications to eigenvalue problems. J. Optim. Theory Appl. 2003, 117: 245–266.
  23. Chadli O, Schaible S, Yao JC: Regularized equilibrium problems with an application to noncoercive hemivariational inequalities. J. Optim. Theory Appl. 2004, 121: 571–596.
  24. Konnov IV, Schaible S, Yao JC: Combined relaxation method for mixed equilibrium problems. J. Optim. Theory Appl. 2005, 126: 309–322.
  25. Chadli O, Konnov IV, Yao JC: Descent methods for equilibrium problems in a Banach space. Comput. Math. Appl. 2004, 48: 609–616.
  26. Ding XP, Lin YC, Yao JC: Predictor-corrector algorithms for solving generalized mixed implicit quasi-equilibrium problems. Appl. Math. Mech. 2006, 27: 1157–1164.
  27. Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515.
  28. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033.
  29. Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007, 2007: Article ID 64363
  30. Zeng LC, Yao JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201.
  31. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898.
  32. Adly S: Perturbed algorithm and sensitivity analysis for a generalized class of variational inclusions. J. Math. Anal. Appl. 1996, 201: 609–630.
  33. Zhang SS, Lee JHW, Chan CK: Algorithms of common solutions for quasi variational inclusion and fixed point problems. Appl. Math. Mech. 2008, 29: 571–581.
  34. Ding XP: Perturbed Ishikawa type iterative algorithm for generalized quasivariational inclusions. Appl. Math. Comput. 2003, 141: 359–373.
  35. Huang NJ: Mann and Ishikawa type perturbed iteration algorithm for nonlinear generalized variational inclusions. Comput. Math. Appl. 1998, 35: 9–14.
  36. Fang YP, Huang NJ: H-Monotone operator and resolvent operator technique for quasi-variational inclusions. Appl. Math. Comput. 2003, 145: 795–803.
  37. Verma RU: General system of (η,A)-monotone variational inclusion problems based on generalized hybrid iterative algorithm. Nonlinear Anal. Hybrid Syst. 2007, 1: 326–335.
  38. Peng JW, Wang Y, Shyu DS, Yao JC: Common solutions of an iterative scheme for variational inclusions, equilibrium problems and fixed point problems. J. Inequal. Appl. 2008, 2008: Article ID 720371
  39. Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1432.
  40. Brezis H: Operateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland, Amsterdam; 1973.
  41. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256.


Acknowledgements

Yonghong Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105. Yeol Je Cho was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2012-0008170). Yeong-Cheng Liou was supported in part by NSC 101-2628-E-230-001-MY3 and NSC 101-2622-E-230-005-CC3.

Author information

Corresponding author

Correspondence to Yeol Je Cho.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Yao, Y., Kang, J.I., Cho, Y.J. et al. Composite schemes for variational inequalities over equilibrium problems and variational inclusions. J Inequal Appl 2013, 414 (2013). https://doi.org/10.1186/1029-242X-2013-414
