
An explicit algorithm for solving the optimize hierarchical problems

Abstract

In this paper, we consider the variational inequality problem over the solution set of a generalized mixed equilibrium problem; this problem has a hierarchical structure. Strong convergence of the proposed algorithm to the unique solution is guaranteed under suitable assumptions.

MSC:47H09, 47H10, 47J20, 49J40, 65J15.

1 Introduction

Let C be a closed convex subset of a real Hilbert space H with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\|\cdot\|\). We denote weak convergence and strong convergence by the notations ⇀ and →, respectively. Let \(A:C\to H\) be a nonlinear mapping and let F be a bifunction of \(C\times C\) into \(\mathbb{R}\), where \(\mathbb{R}\) is the set of real numbers.

Consider the generalized mixed equilibrium problem, which is to find \(x\in C\) such that

\[ F(x,y)+\langle Ax, y-x\rangle+\varphi(y)-\varphi(x)\ge 0,\quad \forall y\in C. \tag{1.1} \]

The solution set of (1.1) is denoted by \(\mathrm{GMEP}(F,\varphi,A)\). See [1–4].

If \(\varphi\equiv 0\), problem (1.1) reduces to the generalized equilibrium problem, which is to find \(x\in C\) such that

\[ F(x,y)+\langle Ax, y-x\rangle\ge 0,\quad \forall y\in C. \tag{1.2} \]

The set of solutions of (1.2) is denoted by GEP(F,A).

If \(A\equiv 0\) and \(\varphi\equiv 0\), problem (1.1) reduces to the equilibrium problem [5], which is to find \(x\in C\) such that

\[ F(x,y)\ge 0,\quad \forall y\in C. \tag{1.3} \]

The solution set of (1.3) is denoted by EP(F).

If \(F\equiv 0\) and \(\varphi\equiv 0\), problem (1.1) reduces to the Hartman–Stampacchia variational inequality [6], which is to find \(x\in C\) such that

\[ \langle Ax, y-x\rangle\ge 0,\quad \forall y\in C. \tag{1.4} \]

The solution set of (1.4) is denoted by VI(C,A).

A mapping \(T:C\to C\) is called nonexpansive if \(\|Tx-Ty\|\le\|x-y\|\) for all \(x,y\in C\). If C is bounded, closed, and convex and T is a nonexpansive mapping of C into itself, then the fixed point set of T is nonempty [7]. A point \(x\in H\) is a fixed point of T provided \(Tx=x\); we denote by \(F(T)=\{x\in H : Tx=x\}\) the set of fixed points of T.

We discuss the following variational inequality problem over the generalized mixed equilibrium problem, called the hierarchical problem over the generalized mixed equilibrium problem: find a point \(x\in\mathrm{GMEP}(F,\varphi,B)\) such that

\[ \langle Ax, y-x\rangle\ge 0,\quad \forall y\in\mathrm{GMEP}(F,\varphi,B), \]

where A, B are two monotone operators. See [8, 9].

A mapping \(A:C\to C\) is called α-strongly monotone if there exists a positive real number α such that \(\langle Ax-Ay, x-y\rangle\ge\alpha\|x-y\|^2\) for all \(x,y\in C\). A mapping \(A:C\to C\) is called L-Lipschitz continuous if there exists a positive real number L such that \(\|Ax-Ay\|\le L\|x-y\|\) for all \(x,y\in C\). A mapping \(B:C\to H\) is called β-inverse-strongly monotone if there exists \(\beta>0\) such that \(\langle Bx-By, x-y\rangle\ge\beta\|Bx-By\|^2\) for all \(x,y\in C\). A bounded linear operator A is called strongly positive on H if there exists a constant \(\bar{\gamma}>0\) with the property \(\langle Ax, x\rangle\ge\bar{\gamma}\|x\|^2\) for all \(x\in H\). A mapping \(f:C\to H\) is called a ρ-contraction if there exists a constant \(\rho\in[0,1)\) such that \(\|f(x)-f(y)\|\le\rho\|x-y\|\) for all \(x,y\in C\).
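As a concrete finite-dimensional illustration of these operator classes, one may check the defining inequalities numerically; the matrix A, the contraction f, and the random sampling below are our own illustrative choices, not objects from the paper.

```python
import numpy as np

# A symmetric positive definite matrix is a strongly positive (hence
# strongly monotone) linear operator on R^2.
A = np.array([[2.0, 0.5],
              [0.5, 1.5]])

def is_strongly_monotone(A, alpha, trials=1000, seed=0):
    """Sample pairs (x, y) and test <Ax - Ay, x - y> >= alpha * ||x - y||^2."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=2), rng.normal(size=2)
        if (A @ x - A @ y) @ (x - y) < alpha * np.linalg.norm(x - y) ** 2 - 1e-12:
            return False
    return True

eigs = np.linalg.eigvalsh(A)
gamma_bar = eigs.min()   # <Ax, x> >= lambda_min * ||x||^2, so gamma_bar = lambda_min
L = eigs.max()           # ||Ax - Ay|| <= lambda_max * ||x - y||, so L = lambda_max

assert is_strongly_monotone(A, gamma_bar)

def f(x):
    # A rho-contraction with rho = 0.3, since tanh is 1-Lipschitz componentwise.
    return 0.3 * np.tanh(x)
```

For a symmetric operator the strong-positivity constant \(\bar{\gamma}\) is exactly the smallest eigenvalue, and the Lipschitz constant is the largest; the sampling check merely confirms this numerically.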

In 2010, Yao et al. [10] considered the hierarchical problem over the generalized equilibrium problem, where the net \(x_{s,t}\) is defined by the implicit algorithm

\[ x_{s,t}=s\bigl[t f(x_{s,t})+(1-t)(x_{s,t}-\lambda A x_{s,t})\bigr]+(1-s)\,T_r(x_{s,t}-rBx_{s,t}),\quad s,t\in(0,1), \tag{1.5} \]

for each \((s,t)\in(0,1)^2\). The net \(x_{s,t}\) hierarchically converges to the unique solution \(x^*\) of the variational inequality problem: find a point \(x^*\in\mathrm{GEP}(F,B)\) such that

\[ \langle Ax^*, x-x^*\rangle\ge 0,\quad \forall x\in\mathrm{GEP}(F,B), \tag{1.6} \]

where A, B are two monotone operators. The solution set of (1.6) is denoted by Ω. Furthermore, \(x^*\) also solves the following variational inequality:

\[ x^*\in\Omega,\qquad \langle (I-f)x^*, x-x^*\rangle\ge 0,\quad \forall x\in\Omega. \]

In 2011, Yao et al. [11] studied the hierarchical problem over the fixed point set. The sequence \(\{x_n\}\) is generated by the following two algorithms.

Implicit algorithm: \(x_t=TP_C[I-t(A-\gamma f)]x_t\), \(t\in(0,1)\), and

Explicit algorithm: \(x_{n+1}=\beta_n x_n+(1-\beta_n)TP_C[I-\alpha_n(A-\gamma f)]x_n\), \(n\ge 0\).

They showed that these two algorithms converge strongly to the unique solution of the variational inequality problem: find \(x^*\in F(T)\) such that

\[ \langle (A-\gamma f)x^*, x-x^*\rangle\ge 0,\quad \forall x\in F(T), \]

where \(A:C\to H\) is a strongly positive bounded linear operator, \(f:C\to H\) is a ρ-contraction, and \(T:C\to C\) is a nonexpansive mapping.

In this paper, we construct an algorithm for the hierarchical problem over the generalized mixed equilibrium problem. The sequence \(\{x_n\}\) is generated by the algorithm, for \(x_0\in C\),

\[ x_{n+1}=\alpha_n\bigl(\beta_n x_n+(1-\beta_n)P_C[I-\lambda_n(A-\gamma f)]x_n\bigr)+(1-\alpha_n)T_{r_n}(I-r_nB)x_n, \tag{1.7} \]

where \(\{\alpha_n\},\{\beta_n\},\{\lambda_n\}\subset[0,1]\) with \(\alpha_n\ge\lambda_n\), and \(r_n\in(0,2\beta)\), satisfy some conditions. Then \(\{x_n\}\) converges strongly to \(x^*\in\mathrm{GMEP}(F,\varphi,B)\), which is the unique solution of the variational inequality

\[ \langle (A-\gamma f)x^*, x-x^*\rangle\ge 0,\quad \forall x\in\mathrm{GMEP}(F,\varphi,B). \tag{1.8} \]

Our results improve the results of Yao et al. [10], Yao et al. [11] and some other authors.

2 Preliminaries

Let C be a nonempty closed convex subset of H. In an inner product space we have the inequality \(\|x+y\|^2\le\|x\|^2+2\langle y, x+y\rangle\) for all \(x,y\in H\). For every point \(x\in H\), there exists a unique nearest point in C, denoted by \(P_Cx\), such that

\[ \|x-P_Cx\|\le\|x-y\|\quad \text{for all } y\in C. \]

\(P_C\) is called the metric projection of H onto C. It is well known that \(P_C\) is a nonexpansive mapping of H onto C and satisfies

\[ \langle x-y, P_Cx-P_Cy\rangle\ge\|P_Cx-P_Cy\|^2, \]

for every \(x,y\in H\). Moreover, \(P_Cx\) is characterized by the following properties: \(P_Cx\in C\) and

\[ \langle x-P_Cx, y-P_Cx\rangle\le 0,\qquad \|x-y\|^2\ge\|x-P_Cx\|^2+\|y-P_Cx\|^2, \tag{2.1} \]

for all \(x\in H\), \(y\in C\). Let B be a monotone mapping of C into H. In the context of the variational inequality problem, the characterization of the projection (2.1) implies the following:

\[ u\in\mathrm{VI}(C,B)\iff u=P_C(u-\lambda Bu),\quad \forall\lambda>0. \]
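As a hedged finite-dimensional illustration of this equivalence (the box C, the operator B, and the step size below are our own illustrative choices), the fixed-point characterization suggests the classical projected iteration \(u\leftarrow P_C(u-\lambda Bu)\):

```python
import numpy as np

def P_C(x, lo=0.0, hi=1.0):
    # The metric projection onto the box C = [0, 1]^2 is a componentwise clip.
    return np.clip(x, lo, hi)

c = np.array([2.0, -1.0])
def B(u):
    # Monotone (it is the gradient of the convex function 0.5*||u - c||^2).
    return u - c

# Fixed-point iteration u <- P_C(u - lam * B(u)) for VI(C, B).
u, lam = np.zeros(2), 0.5
for _ in range(200):
    u = P_C(u - lam * B(u))

# The solution of this VI is the projection of c onto the box, i.e. (1, 0).
```

Here the iteration converges because \(I-\lambda B\) is a contraction for this affine B and \(P_C\) is nonexpansive; the limit satisfies \(u=P_C(u-\lambda Bu)\) exactly.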

It is also well known that H satisfies the Opial condition [12]; that is, for any sequence \(\{x_n\}\subset H\) with \(x_n\rightharpoonup x\), the inequality \(\liminf_{n\to\infty}\|x_n-x\|<\liminf_{n\to\infty}\|x_n-y\|\) holds for every \(y\in H\) with \(y\ne x\).

For solving the generalized mixed equilibrium problem and the mixed equilibrium problem, we impose the following assumptions on the bifunction F, the function φ, and the set C:

  • (A1) \(F(x,x)=0\) for all \(x\in C\);

  • (A2) F is monotone, i.e., \(F(x,y)+F(y,x)\le 0\) for all \(x,y\in C\);

  • (A3) for each \(y\in C\), \(x\mapsto F(x,y)\) is weakly upper semicontinuous;

  • (A4) for each \(x\in C\), \(y\mapsto F(x,y)\) is convex;

  • (A5) for each \(x\in C\), \(y\mapsto F(x,y)\) is lower semicontinuous;

  • (B1) for each \(x\in H\) and \(r>0\), there exist a bounded subset \(D_x\subseteq C\) and \(y_x\in C\) such that, for any \(z\in C\setminus D_x\),

    \[ F(z,y_x)+\varphi(y_x)-\varphi(z)+\frac{1}{r}\langle y_x-z, z-x\rangle<0; \tag{2.2} \]

  • (B2) C is a bounded set.

Lemma 2.1 [13]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let F be a bifunction from \(C\times C\) to \(\mathbb{R}\) satisfying (A1)-(A5) and let \(\varphi:C\to\mathbb{R}\) be a proper lower semicontinuous and convex function. For \(r>0\) and \(x\in H\), define a mapping \(T_r:H\to C\) as follows:

\[ T_r(x)=\Bigl\{z\in C : F(z,y)+\varphi(y)-\varphi(z)+\frac{1}{r}\langle y-z, z-x\rangle\ge 0,\ \forall y\in C\Bigr\} \tag{2.3} \]

for all \(x\in H\). Assume that either (B1) or (B2) holds. Then the following results hold:

  1. for each \(x\in H\), \(T_r(x)\ne\emptyset\);

  2. \(T_r\) is single-valued;

  3. \(T_r\) is firmly nonexpansive, i.e., for any \(x,y\in H\), \(\|T_rx-T_ry\|^2\le\langle T_rx-T_ry, x-y\rangle\);

  4. \(F(T_r)=\mathrm{MEP}(F,\varphi)\);

  5. \(\mathrm{MEP}(F,\varphi)\) is closed and convex.
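A hedged special case may make the resolvent concrete: when \(F\equiv 0\), the set in (2.3) collapses to the single point \(z=\operatorname{prox}_{r\varphi}(x)=\arg\min_y\varphi(y)+\frac{1}{2r}\|y-x\|^2\). Taking \(C=\mathbb{R}\) and \(\varphi=|\cdot|\) (our illustrative choice, not from the paper), \(T_r\) is the soft-thresholding operator, and property (3) of Lemma 2.1 can be checked numerically:

```python
import numpy as np

def T_r(x, r):
    # For F == 0 and phi(y) = |y|, the resolvent (2.3) is soft-thresholding:
    #   T_r(x) = sign(x) * max(|x| - r, 0).
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

# Firm nonexpansiveness: ||T_r x - T_r y||^2 <= <T_r x - T_r y, x - y>.
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=2) * 3
    d = T_r(x, 0.7) - T_r(y, 0.7)
    assert d * d <= d * (x - y) + 1e-12
```

Proximal mappings of convex functions are always firmly nonexpansive, which is exactly what the sampled inequality confirms here.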

Lemma 2.2 [14]

Let C be a closed convex subset of a real Hilbert space H and let \(T:C\to C\) be a nonexpansive mapping. Then \(I-T\) is demiclosed at zero; that is, \(x_n\rightharpoonup x\) and \(\|x_n-Tx_n\|\to 0\) imply \(x=Tx\).

Lemma 2.3 [15]

Assume A is a self-adjoint, strongly positive bounded linear operator on a Hilbert space H with coefficient \(\bar{\gamma}>0\) and \(0<\rho\le\|A\|^{-1}\); then \(\|I-\rho A\|\le 1-\rho\bar{\gamma}\).

Lemma 2.4 [16]

Assume \(\{a_n\}\) is a sequence of nonnegative real numbers such that

\[ a_{n+1}\le(1-\gamma_n)a_n+\delta_n,\quad n\ge 0, \]

where \(\{\gamma_n\}\subset(0,1)\) and \(\{\delta_n\}\) is a sequence in \(\mathbb{R}\) such that

  1. (i) \(\sum_{n=1}^{\infty}\gamma_n=\infty\);

  2. (ii) \(\limsup_{n\to\infty}\delta_n/\gamma_n\le 0\) or \(\sum_{n=1}^{\infty}|\delta_n|<\infty\).

Then \(\lim_{n\to\infty}a_n=0\).
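A hedged numerical illustration of Lemma 2.4 (the particular sequences below are our own choices satisfying (i) and the first alternative of (ii)):

```python
# gamma_n = 1/(n+1) gives sum(gamma_n) = infinity, and
# delta_n = gamma_n/(n+1) gives delta_n/gamma_n = 1/(n+1) -> 0,
# so Lemma 2.4 predicts a_n -> 0.
a = 1.0
for n in range(1, 100000):
    gamma = 1.0 / (n + 1)
    delta = gamma / (n + 1)
    a = (1 - gamma) * a + delta   # run the recursion with equality (worst case)

# a_n decays roughly like (1 + log n)/n for these choices, hence slowly.
```

The decay is slow precisely because \(\gamma_n\) is not summable but tends to zero, which is typical of the step-size conditions (C3) used later.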

3 Strong convergence theorems

In this section, we introduce an explicit algorithm for solving a hierarchical problem over the set of fixed points of a nonexpansive mapping and over the generalized mixed equilibrium problem.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space H, let \(A:H\to H\) be a strongly positive bounded linear operator, let \(f:C\to H\) be a ρ-contraction, and let γ be a positive real number such that \(\frac{\bar{\gamma}-1}{\rho}<\gamma<\frac{\bar{\gamma}}{\rho}\). Let \(B:C\to H\) be β-inverse-strongly monotone, let F be a bifunction from \(C\times C\) to \(\mathbb{R}\) satisfying (A1)-(A5), and let \(\varphi:C\to\mathbb{R}\) be convex and lower semicontinuous with either (B1) or (B2). Let \(\{x_n\}\) be the sequence generated by the following algorithm for arbitrary \(x_0\in C\):

\[ x_{n+1}=\alpha_n\bigl(\beta_n x_n+(1-\beta_n)P_C[I-\lambda_n(A-\gamma f)]x_n\bigr)+(1-\alpha_n)T_{r_n}(I-r_nB)x_n, \tag{3.1} \]

where \(\{\alpha_n\},\{\beta_n\},\{\lambda_n\}\subset[0,1]\) with \(\alpha_n\ge\lambda_n\), and \(r_n\in(0,2\beta)\), satisfy the following conditions:

  • (C1) \(\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<\infty\);

  • (C2) \(\sum_{n=1}^{\infty}|\beta_n-\beta_{n-1}|<\infty\);

  • (C3) \(\sum_{n=1}^{\infty}|\lambda_n-\lambda_{n-1}|<\infty\), \(\sum_{n=1}^{\infty}\lambda_n=\infty\), \(\lim_{n\to\infty}\lambda_n=0\);

  • (C4) \(\sum_{n=1}^{\infty}|r_n-r_{n-1}|<\infty\), \(\liminf_{n\to\infty}r_n>0\).

Then \(\{x_n\}\) converges strongly to \(x^*\in\mathrm{GMEP}(F,\varphi,B)\), which is the unique solution of the variational inequality

\[ \langle(A-\gamma f)x^*, x-x^*\rangle\ge 0,\quad \forall x\in\mathrm{GMEP}(F,\varphi,B). \tag{3.2} \]

Proof We will divide the proof into five steps.

Step 1. We show that \(\{x_n\}\) is bounded. For any \(q\in\mathrm{GMEP}(F,\varphi,B)\), setting \(y_n=P_C[I-\lambda_n(A-\gamma f)]x_n\), we note that

\[
\begin{aligned}
\|y_n-q\| &= \|P_C[I-\lambda_n(A-\gamma f)]x_n-P_Cq\| \le \|[I-\lambda_n(A-\gamma f)]x_n-q\| \\
&\le \lambda_n\gamma\|f(x_n)-f(q)\|+\lambda_n\|\gamma f(q)-Aq\|+\|I-\lambda_nA\|\,\|x_n-q\| \\
&\le \lambda_n\gamma\rho\|x_n-q\|+\lambda_n\|\gamma f(q)-Aq\|+(1-\lambda_n\bar{\gamma})\|x_n-q\| \\
&= [1-(\bar{\gamma}-\gamma\rho)\lambda_n]\|x_n-q\|+\lambda_n\|\gamma f(q)-Aq\|.
\end{aligned}
\tag{3.3}
\]

From (3.1), we have

\[
\begin{aligned}
\|x_{n+1}-q\| &= \bigl\|\alpha_n\{\beta_nx_n+(1-\beta_n)y_n\}+(1-\alpha_n)T_{r_n}(I-r_nB)x_n-q\bigr\| \\
&\le \alpha_n\beta_n\|x_n-q\|+\alpha_n(1-\beta_n)\|y_n-q\|+(1-\alpha_n)\|T_{r_n}(I-r_nB)x_n-T_{r_n}(I-r_nB)q\| \\
&\le \alpha_n\beta_n\|x_n-q\|+\alpha_n(1-\beta_n)\bigl\{[1-(\bar{\gamma}-\gamma\rho)\lambda_n]\|x_n-q\|+\lambda_n\|\gamma f(q)-Aq\|\bigr\}+(1-\alpha_n)\|x_n-q\| \\
&= [1-\alpha_n(1-\beta_n)(\bar{\gamma}-\gamma\rho)\lambda_n]\|x_n-q\|+\alpha_n(1-\beta_n)(\bar{\gamma}-\gamma\rho)\lambda_n\,\frac{\|\gamma f(q)-Aq\|}{\bar{\gamma}-\gamma\rho}.
\end{aligned}
\]

It follows by induction that

\[ \|x_n-q\|\le\max\Bigl\{\|x_0-q\|,\ \frac{\|\gamma f(q)-Aq\|}{\bar{\gamma}-\gamma\rho}\Bigr\},\quad n\ge 0. \]

Therefore \(\{x_n\}\) is bounded, and so are \(\{y_n\}\), \(\{Ax_n\}\), and \(\{f(x_n)\}\).

Step 2. We show that \(\lim_{n\to\infty}\|x_{n+1}-x_n\|=0\). Setting \(v_n=[I-\lambda_n(A-\gamma f)]x_n\), we observe that

\[
\begin{aligned}
\|y_{n+1}-y_n\| &= \|P_Cv_{n+1}-P_Cv_n\| \le \|[I-\lambda_{n+1}(A-\gamma f)]x_{n+1}-[I-\lambda_n(A-\gamma f)]x_n\| \\
&\le \lambda_{n+1}\gamma\|f(x_{n+1})-f(x_n)\|+(1-\lambda_{n+1}\bar{\gamma})\|x_{n+1}-x_n\|+|\lambda_{n+1}-\lambda_n|\bigl(\gamma\|f(x_n)\|+\|Ax_n\|\bigr) \\
&\le [1-(\bar{\gamma}-\gamma\rho)\lambda_{n+1}]\|x_{n+1}-x_n\|+|\lambda_{n+1}-\lambda_n|M_1,
\end{aligned}
\tag{3.4}
\]

where \(M_1=\sup\{\gamma\|f(x_n)\|+\|Ax_n\|:n\in\mathbb{N}\}\). Setting \(z_n=\beta_nx_n+(1-\beta_n)y_n\) for all \(n\ge 0\), we observe that

\[
\|z_{n+1}-z_n\| \le \beta_{n+1}\|x_{n+1}-x_n\|+|\beta_{n+1}-\beta_n|\,\|x_n-y_n\|+(1-\beta_{n+1})\|y_{n+1}-y_n\|. \tag{3.5}
\]

Substituting (3.4) into (3.5), it follows that

\[
\begin{aligned}
\|z_{n+1}-z_n\| &\le \beta_{n+1}\|x_{n+1}-x_n\|+|\beta_{n+1}-\beta_n|\,\|x_n-y_n\| \\
&\quad+(1-\beta_{n+1})\bigl\{[1-(\bar{\gamma}-\gamma\rho)\lambda_{n+1}]\|x_{n+1}-x_n\|+|\lambda_{n+1}-\lambda_n|M_1\bigr\} \\
&\le [1-(1-\beta_{n+1})(\bar{\gamma}-\gamma\rho)\lambda_{n+1}]\|x_{n+1}-x_n\|+|\beta_{n+1}-\beta_n|M_2+|\lambda_{n+1}-\lambda_n|M_1,
\end{aligned}
\tag{3.6}
\]

where \(M_2=\sup\{\|x_n-y_n\|:n\in\mathbb{N}\}\). On the other hand, from \(u_{n-1}=T_{r_{n-1}}(x_{n-1}-r_{n-1}Bx_{n-1})\) and \(u_n=T_{r_n}(x_n-r_nBx_n)\), it follows that

\[ F(u_{n-1},y)+\langle Bx_{n-1}, y-u_{n-1}\rangle+\varphi(y)-\varphi(u_{n-1})+\frac{1}{r_{n-1}}\langle y-u_{n-1}, u_{n-1}-x_{n-1}\rangle\ge 0,\quad \forall y\in C, \tag{3.7} \]

and

\[ F(u_n,y)+\langle Bx_n, y-u_n\rangle+\varphi(y)-\varphi(u_n)+\frac{1}{r_n}\langle y-u_n, u_n-x_n\rangle\ge 0,\quad \forall y\in C. \tag{3.8} \]

Substituting \(y=u_n\) into (3.7) and \(y=u_{n-1}\) into (3.8), we have

\[ F(u_{n-1},u_n)+\langle Bx_{n-1}, u_n-u_{n-1}\rangle+\varphi(u_n)-\varphi(u_{n-1})+\frac{1}{r_{n-1}}\langle u_n-u_{n-1}, u_{n-1}-x_{n-1}\rangle\ge 0 \]

and

\[ F(u_n,u_{n-1})+\langle Bx_n, u_{n-1}-u_n\rangle+\varphi(u_{n-1})-\varphi(u_n)+\frac{1}{r_n}\langle u_{n-1}-u_n, u_n-x_n\rangle\ge 0. \]

From (A2), we have

\[ \Bigl\langle u_n-u_{n-1},\ Bx_{n-1}-Bx_n+\frac{u_{n-1}-x_{n-1}}{r_{n-1}}-\frac{u_n-x_n}{r_n}\Bigr\rangle\ge 0, \]

and then

\[ \Bigl\langle u_n-u_{n-1},\ r_{n-1}(Bx_{n-1}-Bx_n)+u_{n-1}-x_{n-1}-\frac{r_{n-1}}{r_n}(u_n-x_n)\Bigr\rangle\ge 0, \]

so

\[ \Bigl\langle u_n-u_{n-1},\ r_{n-1}Bx_{n-1}-r_{n-1}Bx_n+u_{n-1}-u_n+u_n-x_{n-1}-\frac{r_{n-1}}{r_n}(u_n-x_n)\Bigr\rangle\ge 0. \]

It follows that

\[ \Bigl\langle u_n-u_{n-1},\ (I-r_{n-1}B)x_n-(I-r_{n-1}B)x_{n-1}+u_{n-1}-u_n+u_n-x_n-\frac{r_{n-1}}{r_n}(u_n-x_n)\Bigr\rangle\ge 0, \]

and hence

\[ \langle u_n-u_{n-1}, u_{n-1}-u_n\rangle+\Bigl\langle u_n-u_{n-1},\ x_n-x_{n-1}+\Bigl(1-\frac{r_{n-1}}{r_n}\Bigr)(u_n-x_n)\Bigr\rangle\ge 0. \]

Without loss of generality, let us assume that there exists a real number c such that \(r_{n-1}>c>0\) for all \(n\in\mathbb{N}\). Then we have

\[
\begin{aligned}
\|u_n-u_{n-1}\|^2 &\le \Bigl\langle u_n-u_{n-1},\ x_n-x_{n-1}+\Bigl(1-\frac{r_{n-1}}{r_n}\Bigr)(u_n-x_n)\Bigr\rangle \\
&\le \|u_n-u_{n-1}\|\Bigl\{\|x_n-x_{n-1}\|+\Bigl|1-\frac{r_{n-1}}{r_n}\Bigr|\,\|u_n-x_n\|\Bigr\},
\end{aligned}
\]

and hence

\[ \|u_n-u_{n-1}\|\le\|x_n-x_{n-1}\|+\frac{1}{r_n}|r_n-r_{n-1}|\,\|u_n-x_n\|\le\|x_n-x_{n-1}\|+\frac{M_3}{c}|r_n-r_{n-1}|, \tag{3.9} \]

where \(M_3=\sup\{\|u_n-x_n\|:n\in\mathbb{N}\}\). From (3.1), we have

\[
\begin{aligned}
\|x_{n+1}-x_n\| &= \|\alpha_nz_n+(1-\alpha_n)u_n-\alpha_{n-1}z_{n-1}-(1-\alpha_{n-1})u_{n-1}\| \\
&\le \alpha_n\|z_n-z_{n-1}\|+|\alpha_n-\alpha_{n-1}|\,\|z_{n-1}-u_{n-1}\|+(1-\alpha_n)\|u_n-u_{n-1}\| \\
&\le \alpha_n\|z_n-z_{n-1}\|+|\alpha_n-\alpha_{n-1}|M_4+(1-\alpha_n)\|u_n-u_{n-1}\|,
\end{aligned}
\tag{3.10}
\]

where \(M_4=\sup\{\|z_n-u_n\|:n\in\mathbb{N}\}\). Substituting (3.6) and (3.9) into (3.10) yields

\[
\begin{aligned}
\|x_{n+1}-x_n\| &\le \alpha_n\bigl\{[1-(1-\beta_n)(\bar{\gamma}-\gamma\rho)\lambda_n]\|x_n-x_{n-1}\|+|\beta_n-\beta_{n-1}|M_2+|\lambda_n-\lambda_{n-1}|M_1\bigr\} \\
&\quad+|\alpha_n-\alpha_{n-1}|M_4+(1-\alpha_n)\Bigl\{\|x_n-x_{n-1}\|+\frac{M_3}{c}|r_n-r_{n-1}|\Bigr\} \\
&\le [1-(1-\beta_n)(\bar{\gamma}-\gamma\rho)\alpha_n\lambda_n]\|x_n-x_{n-1}\|+\alpha_n|\beta_n-\beta_{n-1}|M_2 \\
&\quad+\alpha_n|\lambda_n-\lambda_{n-1}|M_1+|\alpha_n-\alpha_{n-1}|M_4+\frac{M_3}{c}|r_n-r_{n-1}|,
\end{aligned}
\tag{3.11}
\]

where the error terms are summable by (C1)-(C4) and the boundedness of \(\{x_n\}\), \(\{y_n\}\), \(\{z_n\}\), \(\{f(x_n)\}\), and \(\{Ax_n\}\). Applying Lemma 2.4, we obtain

\[ \lim_{n\to\infty}\|x_{n+1}-x_n\|=0. \tag{3.12} \]

Step 3. We show that \(\lim_{n\to\infty}\|x_n-u_n\|=0\). For each \(q\in\mathrm{GMEP}(F,\varphi,B)\), since \(T_{r_n}\) is firmly nonexpansive, we have

\[
\begin{aligned}
\|u_n-q\|^2 &= \|T_{r_n}(x_n-r_nBx_n)-T_{r_n}(q-r_nBq)\|^2 \\
&\le \langle (x_n-r_nBx_n)-(q-r_nBq),\ u_n-q\rangle \\
&= \tfrac{1}{2}\bigl\{\|(x_n-r_nBx_n)-(q-r_nBq)\|^2+\|u_n-q\|^2-\|(x_n-r_nBx_n)-(q-r_nBq)-(u_n-q)\|^2\bigr\} \\
&\le \tfrac{1}{2}\bigl\{\|x_n-q\|^2+\|u_n-q\|^2-\|x_n-u_n-r_n(Bx_n-Bq)\|^2\bigr\} \\
&= \tfrac{1}{2}\bigl\{\|x_n-q\|^2+\|u_n-q\|^2-\|x_n-u_n\|^2+2r_n\langle x_n-u_n, Bx_n-Bq\rangle-r_n^2\|Bx_n-Bq\|^2\bigr\},
\end{aligned}
\tag{3.13}
\]

which implies that

\[ \|u_n-q\|^2\le\|x_n-q\|^2-\|x_n-u_n\|^2+2r_n\|x_n-u_n\|\,\|Bx_n-Bq\|. \tag{3.14} \]

From (3.1), we get

\[ \|y_n-x_n\|=\|P_C(I-\lambda_n(A-\gamma f))x_n-P_Cx_n\|\le\|(I-\lambda_n(A-\gamma f))x_n-x_n\|=\lambda_n\|(A-\gamma f)x_n\|. \]

By (C3), we have

\[ \lim_{n\to\infty}\|y_n-x_n\|=0. \tag{3.15} \]

Setting \(w_n=[I-\lambda_n(A-\gamma f)]x_n\), it follows that

\[ \|w_n-x_n\|=\|[I-\lambda_n(A-\gamma f)]x_n-x_n\|=\lambda_n\|(A-\gamma f)x_n\|. \]

By using (C3) again, we get

\[ \lim_{n\to\infty}\|w_n-x_n\|=0. \tag{3.16} \]

From \(y_n=P_C[I-\lambda_n(A-\gamma f)]x_n\), we compute

\[ \|y_n-q\|=\|P_C[I-\lambda_n(A-\gamma f)]x_n-P_Cq\|\le\|[I-\lambda_n(A-\gamma f)]x_n-q\|=\|w_n-q\|. \tag{3.17} \]

It follows from (3.15) that

\[ \|x_n-q\|\le\|w_n-q\|. \tag{3.18} \]

Then we get

\[
\begin{aligned}
\|w_n-q\|^2 &\le \langle [I-\lambda_n(A-\gamma f)]x_n-q,\ w_n-q\rangle \\
&= \lambda_n\langle\gamma f(x_n)-Aq, w_n-q\rangle+\langle(I-\lambda_nA)(x_n-q), w_n-q\rangle \\
&\le \lambda_n\langle\gamma f(x_n)-Aq, w_n-q\rangle+(1-\lambda_n\bar{\gamma})\|x_n-q\|\,\|w_n-q\| \\
&\le (1-\lambda_n\bar{\gamma})\|w_n-q\|^2+\lambda_n\langle\gamma f(x_n)-Aq, w_n-q\rangle.
\end{aligned}
\]

It follows that

\[
\begin{aligned}
\|w_n-q\|^2 &\le \frac{1}{\bar{\gamma}}\langle\gamma f(x_n)-Aq, w_n-q\rangle \\
&= \frac{1}{\bar{\gamma}}\bigl[\gamma\langle f(x_n)-f(q), w_n-q\rangle+\langle\gamma f(q)-Aq, w_n-q\rangle\bigr] \\
&\le \frac{1}{\bar{\gamma}}\bigl[\gamma\rho\|w_n-q\|^2+\langle(A-\gamma f)q, q-w_n\rangle\bigr],
\end{aligned}
\]

that is,

\[ \|w_n-q\|^2\le\frac{1}{\bar{\gamma}-\gamma\rho}\langle(A-\gamma f)q,\ q-w_n\rangle. \tag{3.19} \]

On the other hand, we note that

\[
\begin{aligned}
\|u_n-q\|^2 &= \|T_{r_n}(x_n-r_nBx_n)-T_{r_n}(q-r_nBq)\|^2 \le \|(x_n-r_nBx_n)-(q-r_nBq)\|^2 \\
&= \|(x_n-q)-r_n(Bx_n-Bq)\|^2 \\
&= \|x_n-q\|^2-2r_n\langle x_n-q, Bx_n-Bq\rangle+r_n^2\|Bx_n-Bq\|^2 \\
&\le \|x_n-q\|^2-2r_n\beta\|Bx_n-Bq\|^2+r_n^2\|Bx_n-Bq\|^2.
\end{aligned}
\tag{3.20}
\]

Using (3.17), (3.18), (3.19), and (3.20), we note that

\[
\begin{aligned}
\|x_{n+1}-q\|^2 &\le \alpha_n\beta_n\|x_n-q\|^2+\alpha_n(1-\beta_n)\|y_n-q\|^2+(1-\alpha_n)\|u_n-q\|^2 \\
&\le \alpha_n\|w_n-q\|^2+(1-\alpha_n)\|u_n-q\|^2 \\
&\le \frac{\alpha_n}{\bar{\gamma}-\gamma\rho}\langle(A-\gamma f)q, q-w_n\rangle+(1-\alpha_n)\bigl\{\|x_n-q\|^2+r_n(r_n-2\beta)\|Bx_n-Bq\|^2\bigr\} \\
&\le \frac{\alpha_n}{\bar{\gamma}-\gamma\rho}\langle(A-\gamma f)q, q-w_n\rangle+\|x_n-q\|^2+(1-\alpha_n)r_n(r_n-2\beta)\|Bx_n-Bq\|^2.
\end{aligned}
\tag{3.21}
\]

Then we have

\[
\begin{aligned}
(1-\alpha_n)c(2\beta-d)\|Bx_n-Bq\|^2 &\le \frac{\alpha_n}{\bar{\gamma}-\gamma\rho}\langle(A-\gamma f)q, q-w_n\rangle+\|x_n-q\|^2-\|x_{n+1}-q\|^2 \\
&\le \frac{\alpha_n}{\bar{\gamma}-\gamma\rho}\langle(A-\gamma f)q, q-w_n\rangle+\|x_n-x_{n+1}\|\bigl(\|x_n-q\|+\|x_{n+1}-q\|\bigr).
\end{aligned}
\]

From (C3), \(\{r_n\}\subset[c,d]\subset(0,2\beta)\), and (3.12), we obtain

\[ \lim_{n\to\infty}\|Bx_n-Bq\|=0. \tag{3.22} \]

Substituting (3.14) into (3.21), we have

\[
\begin{aligned}
\|x_{n+1}-q\|^2 &\le \alpha_n\|w_n-q\|^2+(1-\alpha_n)\|u_n-q\|^2 \\
&\le \alpha_n\|w_n-q\|^2+(1-\alpha_n)\bigl\{\|x_n-q\|^2-\|x_n-u_n\|^2+2r_n\|x_n-u_n\|\,\|Bx_n-Bq\|\bigr\} \\
&\le \alpha_n\|w_n-q\|^2+\|x_n-q\|^2-(1-\alpha_n)\|x_n-u_n\|^2+2r_n(1-\alpha_n)\|x_n-u_n\|\,\|Bx_n-Bq\|,
\end{aligned}
\]

and it follows that

\[
\begin{aligned}
(1-\alpha_n)\|x_n-u_n\|^2 &\le \alpha_n\|w_n-q\|^2+\|x_n-q\|^2-\|x_{n+1}-q\|^2+2r_n(1-\alpha_n)\|x_n-u_n\|\,\|Bx_n-Bq\| \\
&\le \alpha_n\|w_n-q\|^2+\|x_n-x_{n+1}\|\bigl(\|x_n-q\|+\|x_{n+1}-q\|\bigr)+2r_n(1-\alpha_n)\|x_n-u_n\|\,\|Bx_n-Bq\|.
\end{aligned}
\]

By (C3), (3.12), and (3.22), we get

\[ \lim_{n\to\infty}\|x_n-u_n\|=0. \tag{3.23} \]

By (C4), we obtain

\[ \lim_{n\to\infty}\frac{\|x_n-u_n\|}{r_n}=\lim_{n\to\infty}\frac{1}{r_n}\|x_n-u_n\|=0. \tag{3.24} \]

Step 4. Next, we will show that

\[ \limsup_{n\to\infty}\langle(\gamma f-A)x^*,\ x_n-x^*\rangle\le 0. \]

Indeed, we choose a subsequence \(\{x_{n_i}\}\) of \(\{x_n\}\) such that

\[ \limsup_{n\to\infty}\langle(\gamma f-A)x^*,\ x_n-x^*\rangle=\lim_{i\to\infty}\langle(\gamma f-A)x^*,\ x_{n_i}-x^*\rangle. \]

Since \(\{x_{n_i}\}\) is bounded, there exists a subsequence \(\{x_{n_{i_j}}\}\) of \(\{x_{n_i}\}\) which converges weakly to \(z\in C\). We notice that \(\|w_n-x_n\|\le\lambda_n\|(A-\gamma f)x_n\|\to 0\). Hence, we get \(\limsup_{n\to\infty}\langle(\gamma f-A)x^*, x_n-x^*\rangle\le 0\). Next, we will show that \(z\in\mathrm{GMEP}(F,\varphi,B)\). Since \(u_n=T_{r_n}(x_n-r_nBx_n)\), we have

\[ F(u_n,y)+\langle Bx_n, y-u_n\rangle+\varphi(y)-\varphi(u_n)+\frac{1}{r_n}\langle y-u_n, u_n-x_n\rangle\ge 0,\quad \forall y\in C. \]

From (A2), we also have

\[ \langle Bx_n, y-u_n\rangle+\varphi(y)-\varphi(u_n)+\frac{1}{r_n}\langle y-u_n, u_n-x_n\rangle\ge F(y,u_n),\quad \forall y\in C, \]

and hence

\[ \langle Bx_{n_i}, y-u_{n_i}\rangle+\varphi(y)-\varphi(u_{n_i})+\Bigl\langle y-u_{n_i},\ \frac{u_{n_i}-x_{n_i}}{r_{n_i}}\Bigr\rangle\ge F(y,u_{n_i}),\quad \forall y\in C. \tag{3.25} \]

For t with \(0<t\le 1\) and \(y\in C\), let \(y_t=ty+(1-t)z\). Since \(y\in C\) and \(z\in C\), we have \(y_t\in C\). So, from (3.25), we have

\[
\begin{aligned}
\langle y_t-u_{n_i}, By_t\rangle &\ge \langle y_t-u_{n_i}, By_t\rangle-\varphi(y_t)+\varphi(u_{n_i})-\langle y_t-u_{n_i}, Bx_{n_i}\rangle \\
&\quad-\Bigl\langle y_t-u_{n_i},\ \frac{u_{n_i}-x_{n_i}}{r_{n_i}}\Bigr\rangle+F(y_t,u_{n_i}) \\
&= \langle y_t-u_{n_i}, By_t-Bu_{n_i}\rangle+\langle y_t-u_{n_i}, Bu_{n_i}-Bx_{n_i}\rangle-\varphi(y_t)+\varphi(u_{n_i}) \\
&\quad-\Bigl\langle y_t-u_{n_i},\ \frac{u_{n_i}-x_{n_i}}{r_{n_i}}\Bigr\rangle+F(y_t,u_{n_i}).
\end{aligned}
\]
Since \(\|u_{n_i}-x_{n_i}\|\to 0\), we have \(\|Bu_{n_i}-Bx_{n_i}\|\to 0\). Further, from the inverse strong monotonicity of B, we have \(\langle y_t-u_{n_i}, By_t-Bu_{n_i}\rangle\ge 0\). So, from (A4), (A5), the weak lower semicontinuity of φ, \(\frac{u_{n_i}-x_{n_i}}{r_{n_i}}\to 0\), and \(u_{n_i}\rightharpoonup z\), we have in the limit

\[ \langle y_t-z, By_t\rangle\ge-\varphi(y_t)+\varphi(z)+F(y_t,z) \tag{3.26} \]

as \(i\to\infty\). From (A1), (A4), and (3.26), we also get

\[
\begin{aligned}
0 &= F(y_t,y_t)+\varphi(y_t)-\varphi(y_t) \\
&\le tF(y_t,y)+(1-t)F(y_t,z)+t\varphi(y)+(1-t)\varphi(z)-\varphi(y_t) \\
&= t\bigl[F(y_t,y)+\varphi(y)-\varphi(y_t)\bigr]+(1-t)\bigl[F(y_t,z)+\varphi(z)-\varphi(y_t)\bigr] \\
&\le t\bigl[F(y_t,y)+\varphi(y)-\varphi(y_t)\bigr]+(1-t)\langle y_t-z, By_t\rangle \\
&= t\bigl[F(y_t,y)+\varphi(y)-\varphi(y_t)\bigr]+(1-t)t\langle y-z, By_t\rangle,
\end{aligned}
\]

hence

\[ 0\le F(y_t,y)+\varphi(y)-\varphi(y_t)+(1-t)\langle y-z, By_t\rangle. \]

Letting \(t\to 0\), we have, for each \(y\in C\),

\[ F(z,y)+\varphi(y)-\varphi(z)+\langle y-z, Bz\rangle\ge 0. \]

This implies that \(z\in\mathrm{GMEP}(F,\varphi,B)\). It is easy to see that \(P_{\mathrm{GMEP}(F,\varphi,B)}(I-A+\gamma f)\) is a contraction of H into itself. Since H is complete, there exists a unique fixed point \(x^*\in H\) such that \(x^*=P_{\mathrm{GMEP}(F,\varphi,B)}(I-A+\gamma f)(x^*)\).

Step 5. Next, we will prove that \(x_n\to x^*\in\mathrm{GMEP}(F,\varphi,B)\), which solves the variational inequality (1.8). It follows from (3.1) that

\[
\begin{aligned}
\|x_{n+1}-x^*\|^2 &= \alpha_n\beta_n\langle x_n-x^*, x_{n+1}-x^*\rangle \\
&\quad+\alpha_n(1-\beta_n)\langle P_C[I-\lambda_n(A-\gamma f)]x_n-P_C[I-\lambda_n(A-\gamma f)]x^*,\ x_{n+1}-x^*\rangle \\
&\quad+\alpha_n(1-\beta_n)\langle P_C[I-\lambda_n(A-\gamma f)]x^*-x^*,\ x_{n+1}-x^*\rangle \\
&\quad+(1-\alpha_n)\langle T_{r_n}(I-r_nB)x_n-T_{r_n}(I-r_nB)x^*,\ x_{n+1}-x^*\rangle \\
&\le \alpha_n\beta_n\|x_n-x^*\|\,\|x_{n+1}-x^*\| \\
&\quad+\alpha_n(1-\beta_n)\bigl\{[1-(\bar{\gamma}-\gamma\rho)\lambda_n]\|x_n-x^*\|+\lambda_n\|\gamma f(x^*)-Ax^*\|\bigr\}\|x_{n+1}-x^*\| \\
&\quad-\alpha_n(1-\beta_n)\lambda_n\langle(A-\gamma f)x^*,\ x_{n+1}-x^*\rangle+(1-\alpha_n)\|x_n-x^*\|\,\|x_{n+1}-x^*\| \\
&= [1-\alpha_n(1-\beta_n)(\bar{\gamma}-\gamma\rho)\lambda_n]\|x_n-x^*\|\,\|x_{n+1}-x^*\| \\
&\quad+\alpha_n(1-\beta_n)\lambda_n\|\gamma f(x^*)-Ax^*\|\,\|x_{n+1}-x^*\|-\alpha_n(1-\beta_n)\lambda_n\langle(A-\gamma f)x^*,\ x_{n+1}-x^*\rangle \\
&\le \frac{1-(1-\beta_n)(\bar{\gamma}-\gamma\rho)\alpha_n\lambda_n}{2}\|x_n-x^*\|^2+\frac{1}{2}\|x_{n+1}-x^*\|^2 \\
&\quad+(1-\beta_n)\alpha_n\lambda_n\|\gamma f(x^*)-Ax^*\|\,\|x_{n+1}-x^*\|-(1-\beta_n)\alpha_n\lambda_n\langle(A-\gamma f)x^*,\ x_{n+1}-x^*\rangle,
\end{aligned}
\]

which implies that

\[
\begin{aligned}
\|x_{n+1}-x^*\|^2 &\le [1-(1-\beta_n)(\bar{\gamma}-\gamma\rho)\alpha_n\lambda_n]\|x_n-x^*\|^2 \\
&\quad+2(1-\beta_n)\alpha_n\lambda_n\|\gamma f(x^*)-Ax^*\|\,\|x_{n+1}-x^*\| \\
&\quad-2(1-\beta_n)\alpha_n\lambda_n\langle(A-\gamma f)x^*,\ x_{n+1}-x^*\rangle.
\end{aligned}
\]

Since \(\{x_n\}\), \(\{f(x_n)\}\), and \(\{Ax_n\}\) are all bounded, we can choose a constant \(M>0\) such that

\[ \sup_{n}\frac{1}{\bar{\gamma}-\gamma\rho}\bigl\{2\|\gamma f(x^*)-Ax^*\|\,\|x_{n+1}-x^*\|-2\langle(A-\gamma f)x^*,\ x_{n+1}-x^*\rangle\bigr\}\le M. \]

It follows that

\[ \|x_{n+1}-x^*\|^2\le[1-(1-\beta_n)(\bar{\gamma}-\gamma\rho)\alpha_n\lambda_n]\|x_n-x^*\|^2+(1-\beta_n)(\bar{\gamma}-\gamma\rho)\alpha_n\lambda_nM. \]

By (C3) and Lemma 2.4, we conclude that \(x_n\to x^*\) as \(n\to\infty\). This completes the proof. □
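The convergence asserted in Theorem 3.1 can be observed numerically. The following is a hedged sketch of (3.1) in \(\mathbb{R}^2\) in the special case \(F\equiv 0\) and \(\varphi\equiv 0\), where the resolvent step \(T_{r_n}(I-r_nB)x_n\) reduces to \(P_C(x_n-r_nBx_n)\); the set C, the operators A, B, f, and the starting point are our own illustrative choices, while the parameter sequences are those of Example 4.1 below.

```python
import numpy as np

def P_C(x):
    # Projection onto C = [0, 1]^2.
    return np.clip(x, 0.0, 1.0)

A = np.eye(2)                 # strongly positive with gamma_bar = 1
gamma, rho = 1.0, 0.3         # gamma < gamma_bar / rho = 10/3

def f(x):
    return rho * np.tanh(x)   # a rho-contraction

c = np.array([0.5, 0.5])
def B(u):
    # 1-inverse-strongly monotone; the solution set VI(C, B) is {c}.
    return u - c

x = np.array([0.9, 0.1])
for n in range(1, 5000):
    alpha = (n + 1) / (n**2 + 1)      # alpha_n, beta_n, lambda_n, r_n
    beta = 1.0 / (n + 1)              # as in Example 4.1
    lam = 1.0 / (2 * (n + 1))
    r = n / (n + 1)
    y = P_C(x - lam * (A @ x - gamma * f(x)))
    x = alpha * (beta * x + (1 - beta) * y) + (1 - alpha) * P_C(x - r * B(x))
```

With these choices the iterates approach the unique point of \(\mathrm{VI}(C,B)\), here \((0.5,0.5)\), consistent with the strong convergence guaranteed by the theorem.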

4 An example

Next, the following example shows that all conditions of Theorem 3.1 can be satisfied.

Example 4.1 Let \(\alpha_n=\frac{n+1}{n^2+1}\), \(\beta_n=\frac{1}{n+1}\), \(\lambda_n=\frac{1}{2(n+1)}\), and \(r_n=\frac{n}{n+1}\). Then clearly the sequences \(\{\alpha_n\}\), \(\{\lambda_n\}\) satisfy \(\alpha_n\ge\lambda_n\), since

\[ \frac{n+1}{n^2+1}\ge\frac{1}{2(n+1)}. \]

We will show that the condition (C1) is fulfilled. Indeed, we have

\[ \sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|=\sum_{n=1}^{\infty}\Bigl|\frac{n+1}{n^2+1}-\frac{n}{(n-1)^2+1}\Bigr|=\sum_{n=1}^{\infty}\Bigl|\frac{(n+1)(n^2-2n+2)-n(n^2+1)}{(n^2+1)(n^2-2n+2)}\Bigr|=\sum_{n=1}^{\infty}\Bigl|\frac{2-n-n^2}{n^4-2n^3+3n^2-2n+2}\Bigr|. \]

The general term is of order \(1/n^2\), so the sequence \(\{\alpha_n\}\) satisfies the condition (C1) by comparison with a convergent p-series.

Next, we will show that the condition (C2) is fulfilled. We compute

\[ \sum_{n=1}^{\infty}|\beta_n-\beta_{n-1}|=\sum_{n=1}^{\infty}\Bigl|\frac{1}{n+1}-\frac{1}{n}\Bigr|=\Bigl(1-\frac{1}{2}\Bigr)+\Bigl(\frac{1}{2}-\frac{1}{3}\Bigr)+\Bigl(\frac{1}{3}-\frac{1}{4}\Bigr)+\cdots=1. \]

The sequence { β n } satisfies the condition (C2).

Next, we will show that the condition (C3) is fulfilled. We compute

\[ \sum_{n=1}^{\infty}|\lambda_n-\lambda_{n-1}|=\sum_{n=1}^{\infty}\Bigl|\frac{1}{2(n+1)}-\frac{1}{2n}\Bigr|=\Bigl(\frac{1}{2\cdot 1}-\frac{1}{2\cdot 2}\Bigr)+\Bigl(\frac{1}{2\cdot 2}-\frac{1}{2\cdot 3}\Bigr)+\cdots=\frac{1}{2},\qquad \lim_{n\to\infty}\lambda_n=\lim_{n\to\infty}\frac{1}{2(n+1)}=0, \]

and

\[ \sum_{n=1}^{\infty}\lambda_n=\sum_{n=1}^{\infty}\frac{1}{2(n+1)}=\infty. \]

The sequence { λ n } satisfies the condition (C3).

Finally, we will show that the condition (C4) is fulfilled. We compute

\[ \sum_{n=1}^{\infty}|r_n-r_{n-1}|=\sum_{n=1}^{\infty}\Bigl|\frac{n}{n+1}-\frac{n-1}{n}\Bigr|=\sum_{n=1}^{\infty}\Bigl|\frac{n^2-(n-1)(n+1)}{(n+1)n}\Bigr|=\sum_{n=1}^{\infty}\frac{1}{n(n+1)}=1<\infty \]
and

\[ \liminf_{n\to\infty}r_n=\liminf_{n\to\infty}\frac{n}{n+1}=1>0. \]

The sequence { r n } satisfies the condition (C4).
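The four computations above can also be checked numerically; this hedged sketch sums the first terms of each series (the cut-off N is an arbitrary choice):

```python
from math import isclose

# The parameter sequences of Example 4.1.
alpha = lambda n: (n + 1) / (n**2 + 1)
beta  = lambda n: 1 / (n + 1)
lam   = lambda n: 1 / (2 * (n + 1))
r     = lambda n: n / (n + 1)

N = 100000
c1 = sum(abs(alpha(n) - alpha(n - 1)) for n in range(1, N))  # (C1)
c2 = sum(abs(beta(n) - beta(n - 1)) for n in range(1, N))    # (C2), telescopes to 1
c3 = sum(abs(lam(n) - lam(n - 1)) for n in range(1, N))      # (C3), telescopes to 1/2
c4 = sum(abs(r(n) - r(n - 1)) for n in range(1, N))          # (C4), telescopes to 1
```

The partial sums stabilize at finite limits, while \(\sum\lambda_n\) grows without bound and \(r_n\to 1\), matching the analytic verification.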

Corollary 4.2 Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(A:H\to H\) be a strongly positive bounded linear operator, \(f:C\to H\) a ρ-contraction, γ a positive real number such that \(\frac{\bar{\gamma}-1}{\rho}<\gamma<\frac{\bar{\gamma}}{\rho}\), and \(T:C\to C\) a nonexpansive mapping with \(F(T)\ne\emptyset\). Let \(\{x_n\}\) be the sequence generated by the following algorithm for arbitrary \(x_0\in C\):

\[ x_{n+1}=\beta_nx_n+(1-\beta_n)TP_C[I-\lambda_n(A-\gamma f)]x_n, \tag{4.1} \]

where \(\{\beta_n\},\{\lambda_n\}\subset[0,1]\) satisfy the following conditions:

  • (C1) \(\sum_{n=1}^{\infty}|\beta_{n+1}-\beta_n|<\infty\), \(0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1\);

  • (C2) \(\sum_{n=1}^{\infty}|\lambda_{n+1}-\lambda_n|<\infty\), \(\sum_{n=1}^{\infty}\lambda_n=\infty\), \(\lim_{n\to\infty}\lambda_n=0\).

Then \(\{x_n\}\) converges strongly to \(x^*\in F(T)\), which is the unique solution of the variational inequality

\[ \langle(A-\gamma f)x^*, x-x^*\rangle\ge 0,\quad \forall x\in F(T). \tag{4.2} \]

Proof Setting \(\alpha_n\equiv 1\) and taking T to be a nonexpansive mapping in Theorem 3.1, we obtain the desired conclusion immediately. □

Remark 4.3 Corollary 4.2 generalizes and improves the results of Yao et al. [11].

Corollary 4.4 Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(A:H\to H\) be a strongly positive bounded linear operator, let \(B:C\to H\) be β-inverse-strongly monotone, let F be a bifunction from \(C\times C\) to \(\mathbb{R}\) satisfying (A1)-(A5), and let \(\varphi:C\to\mathbb{R}\) be convex and lower semicontinuous with either (B1) or (B2). Suppose \(\mathrm{GMEP}(F,\varphi,B)\ne\emptyset\). Let \(\{x_n\}\) be the sequence generated by the following algorithm for arbitrary \(x_0\in C\):

\[ x_{n+1}=\alpha_n\bigl(\beta_nx_n+(1-\beta_n)[I-\lambda_nA]x_n\bigr)+(1-\alpha_n)T_{r_n}(I-r_nB)x_n, \tag{4.3} \]

where \(\{\alpha_n\},\{\beta_n\},\{\lambda_n\}\subset[0,1]\) with \(\alpha_n\ge\lambda_n\), and \(r_n\in(0,2\beta)\), satisfy the following conditions:

  • (C1) \(\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<\infty\);

  • (C2) \(\sum_{n=1}^{\infty}|\beta_n-\beta_{n-1}|<\infty\);

  • (C3) \(\sum_{n=1}^{\infty}|\lambda_n-\lambda_{n-1}|<\infty\), \(\sum_{n=1}^{\infty}\lambda_n=\infty\), \(\lim_{n\to\infty}\lambda_n=0\);

  • (C4) \(\sum_{n=1}^{\infty}|r_n-r_{n-1}|<\infty\), \(\liminf_{n\to\infty}r_n>0\).

Then \(\{x_n\}\) converges strongly to \(x^*\in\mathrm{GMEP}(F,\varphi,B)\), which is the unique solution of the variational inequality

\[ \langle Ax^*, x-x^*\rangle\ge 0,\quad \forall x\in\mathrm{GMEP}(F,\varphi,B). \tag{4.4} \]

Proof Setting \(P_C\) to be the identity mapping and \(f\equiv 0\) in Theorem 3.1, we obtain the desired conclusion immediately. □

Corollary 4.5 Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(A:H\to H\) be a strongly positive bounded linear operator, \(f:C\to H\) a ρ-contraction, \(B:C\to H\) β-inverse-strongly monotone, let F be a bifunction from \(C\times C\) to \(\mathbb{R}\) satisfying (A1)-(A5), and let \(\varphi:C\to\mathbb{R}\) be convex and lower semicontinuous with either (B1) or (B2). Let \(\{x_n\}\) be the sequence generated by the following algorithm for arbitrary \(x_0\in C\):

\[ x_{n+1}=\alpha_n\bigl(\lambda_n(1-\beta_n)f(x_n)+[I-\lambda_n(1-\beta_n)A]x_n\bigr)+(1-\alpha_n)T_{r_n}(I-r_nB)x_n, \tag{4.5} \]

where \(\{\alpha_n\},\{\beta_n\},\{\lambda_n\}\subset[0,1]\) with \(\lambda_n\ge\beta_n\), and \(r_n\in(0,2\beta)\), satisfy the following conditions:

  • (C1) \(\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty\);

  • (C2) \(\sum_{n=1}^{\infty}|\beta_{n+1}-\beta_n|<\infty\);

  • (C3) \(\sum_{n=1}^{\infty}|\lambda_{n+1}-\lambda_n|<\infty\), \(\sum_{n=1}^{\infty}\lambda_n=\infty\), \(\lim_{n\to\infty}\lambda_n=0\);

  • (C4) \(\sum_{n=1}^{\infty}|r_{n+1}-r_n|<\infty\), \(\liminf_{n\to\infty}r_n>0\).

Then \(\{x_n\}\) converges strongly to \(x^*\in\mathrm{GMEP}(F,\varphi,B)\), which is the unique solution of the variational inequality

\[ \langle(A-f)x^*, x-x^*\rangle\ge 0,\quad \forall x\in\mathrm{GMEP}(F,\varphi,B). \tag{4.6} \]

Proof Setting \(P_C\) to be the identity mapping and \(\gamma\equiv 1\) in Theorem 3.1, we obtain the desired conclusion immediately. □

References

  1. Al-Mazeooei AE, Latif A, Yao JC: Solving generalized mixed equilibria, variational inequalities, and constrained convex minimization. Abstr. Appl. Anal. 2014, Article ID 587865.

  2. Ceng LC, Chen CM, Pang CT: Hybrid extragradient-like viscosity methods for generalized mixed equilibrium problems, variational inclusions, and optimization problems. Abstr. Appl. Anal. 2014, Article ID 120172.

  3. Ceng LC, Chen CM, Wen CF, Pang CT: Relaxed iterative algorithms for generalized mixed equilibrium problems with constraints of variational inequalities and variational inclusions. Abstr. Appl. Anal. 2014, Article ID 345212.

  4. Ceng LC, Ho JL: Hybrid extragradient method with regularization for convex minimization, generalized mixed equilibrium, variational inequality and fixed point problems. Abstr. Appl. Anal. 2014, Article ID 436069.

  5. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123-145.

  6. Hartman P, Stampacchia G: On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115: 271-310. 10.1007/BF02392210

  7. Kirk WA: Fixed point theorem for mappings which do not increase distance. Am. Math. Mon. 1965, 72: 1004-1006. 10.2307/2313345

  8. Yao YH, Cho YJ, Yang PX: An iterative algorithm for a hierarchical problem. J. Appl. Math. 2012, Article ID 320421.

  9. Yao YH, Kang JI, Cho YJ, Liou YC: Composite schemes for variational inequalities over equilibrium problems and variational inclusions. J. Inequal. Appl. 2013, Article ID 414.

  10. Yao Y, Liou Y-C, Chen C-P: Hierarchical convergence of a double-net algorithm for equilibrium problems and variational inequality problems. Fixed Point Theory Appl. 2010, Article ID 642584. 10.1155/2010/642584

  11. Yao Y, Liou Y-C, Kang SM: Algorithms construction for variational inequalities. Fixed Point Theory Appl. 2011, Article ID 794203. 10.1155/2011/794203

  12. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 595-597.

  13. Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems and fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12(6): 1401-1432.

  14. Browder FE: Nonlinear operators and nonlinear equations of evolution in Banach spaces. Proc. Symp. Pure Math. 1976, 18: 78-81.

  15. Marino G, Xu HK: A general iterative method for nonexpansive mapping in Hilbert space. J. Math. Anal. Appl. 2006, 318: 43-52. 10.1016/j.jmaa.2005.05.028

  16. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240-256. 10.1112/S0024610702003332


Acknowledgements

This project was partially supported by Centre of Excellence in Mathematics, the Commission on Higher Education, Ministry of Education, Thailand. The second author and third author were supported by Innovation park, RMUTL Hands-on Researcher Project, Rajamangala University of Technology Lanna Chiangrai under Grant no. 57HRG-10 during the preparation of this paper.

Author information

Corresponding author

Correspondence to Thanyarat Jitpeera.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.



Cite this article

Kumam, P., Jitpeera, T. & Yarangkham, W. An explicit algorithm for solving the optimize hierarchical problems. J Inequal Appl 2014, 405 (2014). https://doi.org/10.1186/1029-242X-2014-405


Keywords

  • nonexpansive
  • strong convergence
  • variational inequality
  • fixed point
  • hierarchical problem