Open Access

Strong convergence algorithm for approximating the common solutions of a variational inequality, a mixed equilibrium problem and a hierarchical fixed-point problem

Journal of Inequalities and Applications 2014, 2014:154

https://doi.org/10.1186/1029-242X-2014-154

Received: 19 October 2013

Accepted: 27 March 2014

Published: 2 May 2014

Abstract

This paper investigates the common set of solutions of a variational inequality, a mixed equilibrium problem, and a hierarchical fixed-point problem in a Hilbert space. A numerical method is proposed to find the approximate element of this common set. The strong convergence of this method is proved under some conditions. The proposed method is shown to be an improvement and extension of some known results.

MSC: 49J30, 47H09, 47J20.

Keywords

mixed equilibrium problem, variational inequality problem, hierarchical fixed-point problem, projection method

1 Introduction

Let H be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let C be a nonempty closed convex subset of H and let A be a mapping from C into H. The classical variational inequality problem, denoted by $VI(A, C)$, is to find a vector $u \in C$ such that
$$\langle v - u, Au\rangle \geq 0, \quad \forall v \in C.$$
(1.1)
The solution set of $VI(A, C)$ is denoted by Ω. It is easy to observe that
$$u \in \Omega \quad\Longleftrightarrow\quad u = P_C[u - \rho Au], \quad \text{where } \rho > 0.$$
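This fixed-point characterization suggests the projected iteration $u_{k+1} = P_C[u_k - \rho Au_k]$, which is the prototype of the projection methods discussed below. The following is a minimal numerical sketch (not from the paper); the set C, the mapping A, and the step size ρ in it are assumptions chosen only for illustration.

```python
import numpy as np

# Minimal sketch (not from the paper) of the projected iteration suggested by
# the equivalence u = P_C[u - rho*A(u)]. The set C = [0, 1]^2, the affine
# mapping A(u) = u - b, and the step size rho are illustrative assumptions.
b = np.array([0.3, 1.7])
A = lambda u: u - b                       # strongly monotone and Lipschitz continuous
P_C = lambda u: np.clip(u, 0.0, 1.0)      # metric projection onto the box [0, 1]^2

u, rho = np.zeros(2), 0.5
for _ in range(100):
    u = P_C(u - rho * A(u))
print(u)   # approx [0.3, 1.0], the solution of VI(A, C) for this toy problem
```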

A variety of techniques are now available to suggest and analyze iterative algorithms for solving variational inequalities and related optimization problems; see [1–25]. Fixed-point theory has played an important role in the development of such algorithms. Using the projection operator technique, one usually establishes an equivalence between the variational inequality and a fixed-point problem. This alternative equivalent formulation was used by Lions and Stampacchia [1] to study the existence of a solution of variational inequalities.

We introduce the following definitions, which are useful in the following analysis.

Definition 1.1 The mapping $T: C \to H$ is said to be
(a) monotone if
$$\langle Tx - Ty, x - y\rangle \geq 0, \quad \forall x, y \in C;$$
(b) strongly monotone if there exists an $\alpha > 0$ such that
$$\langle Tx - Ty, x - y\rangle \geq \alpha\|x - y\|^2, \quad \forall x, y \in C;$$
(c) α-inverse strongly monotone if there exists an $\alpha > 0$ such that
$$\langle Tx - Ty, x - y\rangle \geq \alpha\|Tx - Ty\|^2, \quad \forall x, y \in C;$$
(d) nonexpansive if
$$\|Tx - Ty\| \leq \|x - y\|, \quad \forall x, y \in C;$$
(e) k-Lipschitz continuous if there exists a constant $k > 0$ such that
$$\|Tx - Ty\| \leq k\|x - y\|, \quad \forall x, y \in C;$$
(f) a contraction on C if there exists a constant $0 \leq k < 1$ such that
$$\|Tx - Ty\| \leq k\|x - y\|, \quad \forall x, y \in C.$$
It is easy to observe that every α-inverse strongly monotone mapping T is monotone and Lipschitz continuous. A mapping $T: C \to H$ is called a k-strict pseudo-contraction if there exists a constant $0 \leq k < 1$ such that
$$\|Tx - Ty\|^2 \leq \|x - y\|^2 + k\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$
(1.2)
The fixed-point problem for the mapping T is to find $x \in C$ such that
$$Tx = x.$$
(1.3)

We denote by $F(T)$ the set of solutions of (1.3). It is well known that the class of k-strict pseudo-contractions strictly includes the class of nonexpansive mappings; moreover, $F(T)$ is closed and convex, and $P_{F(T)}$ is well defined (see [2]).

The mixed equilibrium problem, denoted by MEP, is to find $x \in C$ such that
$$F_1(x, y) + \langle Dx, y - x\rangle \geq 0, \quad \forall y \in C,$$
(1.4)
where $F_1: C \times C \to \mathbb{R}$ is a bifunction and $D: C \to H$ is a nonlinear mapping. This problem was introduced and studied by Moudafi and Théra [3] and Moudafi [4]. The set of solutions of (1.4) is denoted by
$$MEP(F_1) := \{x \in C : F_1(x, y) + \langle Dx, y - x\rangle \geq 0, \ \forall y \in C\}.$$
(1.5)
If D = 0, then (1.4) reduces to the equilibrium problem, which is to find $x \in C$ such that
$$F_1(x, y) \geq 0, \quad \forall y \in C.$$
(1.6)

The solution set of (1.6) is denoted by $EP(F_1)$. Numerous problems in physics, optimization, and economics reduce to finding a solution of (1.6); see [5–9]. Combettes and Hirstoaga [10] introduced an iterative scheme for finding the best approximation to the initial data when $EP(F_1)$ is nonempty. Recently, Plubtieng and Punpaeng [7] introduced an iterative method for finding a common element of the set $F(T) \cap \Omega \cap EP(F_1)$.

Let $S: C \to H$ be a nonexpansive mapping. The following problem is called a hierarchical fixed-point problem: find $x \in F(T)$ such that
$$\langle x - Sx, y - x\rangle \geq 0, \quad \forall y \in F(T).$$
(1.7)
It is known that the hierarchical fixed-point problem (1.7) is closely related to some monotone variational inequalities and convex programming problems; see [11, 12, 26]. Various methods have been proposed to solve the hierarchical fixed-point problem; see Moudafi [13], Maingé and Moudafi [14], Marino and Xu [15], and Cianciaruso et al. [16]. Very recently, Yao et al. [12] introduced the following strong convergence iterative algorithm to solve problem (1.7):
$$y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C[\alpha_n f(x_n) + (1 - \alpha_n) T y_n], \quad n \geq 0,$$
(1.8)
where $f: C \to H$ is a contraction mapping and $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$. Under certain restrictions on the parameters, Yao et al. proved that the sequence $\{x_n\}$ generated by (1.8) converges strongly to $z \in F(T)$, which is the unique solution of the following variational inequality:
$$\langle (I - f)z, y - z\rangle \geq 0, \quad \forall y \in F(T).$$
(1.9)
In 2011, Ceng et al. [17] investigated the following iterative method:
$$x_{n+1} = P_C[\alpha_n \rho U(x_n) + (I - \alpha_n \mu F)(T(y_n))], \quad n \geq 0,$$
(1.10)
where U is a Lipschitzian mapping and F is a Lipschitzian and strongly monotone mapping. They proved that, under appropriate assumptions on the operators and parameters, the sequence $\{x_n\}$ generated by (1.10) converges strongly to the unique solution of the variational inequality
$$\langle \rho U(z) - \mu F(z), x - z\rangle \leq 0, \quad \forall x \in \mathrm{Fix}(T).$$

In this paper, motivated by the work of Yao et al. [12], Ceng et al. [17], Bnouhachem [18, 19], and by the recent work going on in this direction, we give an iterative method for finding the approximate element of the common set of solutions of (1.1), (1.4), and (1.7) in a real Hilbert space. We establish a strong convergence theorem based on this method. We would like to mention that our proposed method is quite general and flexible and includes many known results for solving equilibrium problems, variational inequality problems, and hierarchical fixed-point problems; see, e.g., [11, 12, 14–17, 24, 25] and the relevant references cited therein.

2 Preliminaries

In this section, we list some fundamental lemmas that are useful in the consequent analysis. The first lemma provides some basic properties of projection onto C.

Lemma 2.1 Let $P_C$ denote the projection of H onto C. Then we have the following inequalities:
$$\langle z - P_C[z], P_C[z] - v\rangle \geq 0, \quad \forall z \in H, v \in C;$$
(2.1)
$$\langle u - v, P_C[u] - P_C[v]\rangle \geq \|P_C[u] - P_C[v]\|^2, \quad \forall u, v \in H;$$
(2.2)
$$\|P_C[u] - P_C[v]\| \leq \|u - v\|, \quad \forall u, v \in H;$$
(2.3)
$$\|u - P_C[z]\|^2 \leq \|z - u\|^2 - \|z - P_C[z]\|^2, \quad \forall z \in H, u \in C.$$
(2.4)
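For readers who wish to experiment numerically, these four inequalities can be spot-checked for a simple set such as a box in $\mathbb{R}^3$, where the projection is coordinatewise clipping. The following throwaway snippet (not part of the paper) does exactly that; the set, the sample points, and the tolerance are illustrative assumptions.

```python
import numpy as np

# Numerical spot-check of (2.1)-(2.4) for the box C = [0, 1]^3, where the
# projection P_C is coordinatewise clipping. Purely illustrative.
rng = np.random.default_rng(0)
P = lambda w: np.clip(w, 0.0, 1.0)
tol = 1e-12

z, u, v = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
c = P(rng.normal(size=3))                       # an arbitrary point of C

assert np.dot(z - P(z), P(z) - c) >= -tol                                    # (2.1)
assert np.dot(u - v, P(u) - P(v)) >= np.linalg.norm(P(u) - P(v))**2 - tol    # (2.2)
assert np.linalg.norm(P(u) - P(v)) <= np.linalg.norm(u - v) + tol            # (2.3)
assert (np.linalg.norm(c - P(z))**2
        <= np.linalg.norm(z - c)**2 - np.linalg.norm(z - P(z))**2 + tol)     # (2.4)
print("projection inequalities (2.1)-(2.4) hold for this sample")
```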

Lemma 2.2 [20]

Let $F_1: C \times C \to \mathbb{R}$ be a bifunction satisfying the following assumptions:
(i) $F_1(x, x) = 0$, $\forall x \in C$;
(ii) $F_1$ is monotone, i.e., $F_1(x, y) + F_1(y, x) \leq 0$, $\forall x, y \in C$;
(iii) for each $x, y, z \in C$, $\lim_{t \to 0} F_1(tz + (1 - t)x, y) \leq F_1(x, y)$;
(iv) for each $x \in C$, $y \mapsto F_1(x, y)$ is convex and lower semicontinuous.
Let $r > 0$ and $x \in H$. Then there exists $z \in C$ such that
$$F_1(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \geq 0, \quad \forall y \in C.$$

Lemma 2.3 [10]

Assume that $F_1: C \times C \to \mathbb{R}$ satisfies assumptions (i)-(iv) of Lemma 2.2, and for $r > 0$ and $x \in H$, define a mapping $T_r: H \to C$ as follows:
$$T_r(x) = \Big\{z \in C : F_1(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \geq 0, \ \forall y \in C\Big\}.$$
Then the following hold:
(i) $T_r$ is single-valued;
(ii) $T_r$ is firmly nonexpansive, i.e.,
$$\|T_rx - T_ry\|^2 \leq \langle T_rx - T_ry, x - y\rangle, \quad \forall x, y \in H;$$
(iii) $F(T_r) = EP(F_1)$;
(iv) $EP(F_1)$ is closed and convex.

Lemma 2.4 [21]

Let C be a nonempty closed convex subset of a real Hilbert space H. If $T: C \to C$ is a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$, then the mapping $I - T$ is demiclosed at 0, i.e., if $\{x_n\}$ is a sequence in C weakly converging to x and if $\{(I - T)x_n\}$ converges strongly to 0, then $(I - T)x = 0$.

Lemma 2.5 [17]

Let $U: C \to H$ be a τ-Lipschitzian mapping, and let $F: C \to H$ be a k-Lipschitzian and η-strongly monotone mapping. Then for $0 \leq \rho\tau < \mu\eta$, the mapping $\mu F - \rho U$ is $(\mu\eta - \rho\tau)$-strongly monotone, i.e.,
$$\langle (\mu F - \rho U)x - (\mu F - \rho U)y, x - y\rangle \geq (\mu\eta - \rho\tau)\|x - y\|^2, \quad \forall x, y \in C.$$

Lemma 2.6 [22]

Suppose that $\lambda \in (0, 1)$ and $\mu > 0$. Let $F: C \to H$ be a k-Lipschitzian and η-strongly monotone operator. In association with a nonexpansive mapping $T: C \to C$, define the mapping $T^{\lambda}: C \to H$ by
$$T^{\lambda}x = Tx - \lambda\mu F(T(x)), \quad \forall x \in C.$$
Then $T^{\lambda}$ is a contraction provided $\mu < \frac{2\eta}{k^2}$, that is,
$$\|T^{\lambda}x - T^{\lambda}y\| \leq (1 - \lambda\nu)\|x - y\|, \quad \forall x, y \in C,$$

where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$.
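As a quick numerical illustration (not from the paper), the contraction bound can be spot-checked for simple one-dimensional choices; the operators F = I (so k = η = 1) and T = sin, and the values of μ and λ below, are assumptions picked only for the demonstration, in which case $\nu = 1 - \sqrt{1 - \mu(2 - \mu)} = \mu$ for $\mu \in (0, 1]$.

```python
import math, random

# Spot-check of the contraction bound in Lemma 2.6 for illustrative choices:
# F = identity on R (k = eta = 1), T = sin (nonexpansive), mu in (0, 1].
mu, lam = 0.5, 0.3
nu = 1 - math.sqrt(1 - mu * (2 - mu))                     # here nu = mu
T_lam = lambda x: math.sin(x) - lam * mu * math.sin(x)    # T^lambda x = Tx - lam*mu*F(Tx)

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(T_lam(x) - T_lam(y)) <= (1 - lam * nu) * abs(x - y) + 1e-12
print("Lemma 2.6 contraction bound verified on sampled pairs")
```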

Lemma 2.7 [23]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \leq (1 - \gamma_n)a_n + \delta_n,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that
(1) $\sum_{n=1}^{\infty}\gamma_n = \infty$;
(2) $\limsup_{n \to \infty}\delta_n/\gamma_n \leq 0$ or $\sum_{n=1}^{\infty}|\delta_n| < \infty$.

Then $\lim_{n \to \infty}a_n = 0$.

Lemma 2.8 [27]

Let C be a closed convex subset of H. Let $\{x_n\}$ be a bounded sequence in H. Assume that
(i) the weak w-limit set satisfies $w_w(x_n) \subset C$, where $w_w(x_n) = \{x : x_{n_i} \rightharpoonup x\}$;
(ii) for each $z \in C$, $\lim_{n \to \infty}\|x_n - z\|$ exists.

Then $\{x_n\}$ is weakly convergent to a point in C.

3 The proposed method and some properties

In this section, we suggest and analyze our method for finding the common solutions of the variational inequality (1.1), the mixed equilibrium problem (1.4), and the hierarchical fixed-point problem (1.7).

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $D, A: C \to H$ be θ-inverse strongly monotone and α-inverse strongly monotone mappings, respectively. Let $F_1: C \times C \to \mathbb{R}$ be a bifunction satisfying assumptions (i)-(iv) of Lemma 2.2 and let $S, T: C \to C$ be nonexpansive mappings such that $F(T) \cap \Omega \cap MEP(F_1) \neq \emptyset$. Let $F: C \to C$ be a k-Lipschitzian and η-strongly monotone mapping, and let $U: C \to C$ be a τ-Lipschitzian mapping.

Algorithm 3.1 For an arbitrary given x 0 C , let the iterative sequences { u n } , { x n } , { y n } , and { z n } be generated by
$$\begin{cases} F_1(u_n, y) + \langle Dx_n, y - u_n\rangle + \dfrac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \geq 0, & \forall y \in C;\\[4pt] z_n = P_C[u_n - \lambda_n A u_n];\\[4pt] y_n = \beta_n S x_n + (1 - \beta_n) z_n;\\[4pt] x_{n+1} = P_C[\alpha_n \rho U(x_n) + (I - \alpha_n \mu F)(T(y_n))], & n \geq 0, \end{cases}$$
(3.1)
where $\{\lambda_n\} \subset (0, 2\alpha)$ and $\{r_n\} \subset (0, 2\theta)$. Suppose that the parameters satisfy $0 < \mu < \frac{2\eta}{k^2}$ and $0 \leq \rho\tau < \nu$, where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Also, $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $(0, 1)$ satisfying the following conditions:
(a) $\lim_{n \to \infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(b) $\lim_{n \to \infty}(\beta_n/\alpha_n) = 0$;
(c) $\sum_{n=1}^{\infty}|\alpha_{n-1} - \alpha_n| < \infty$ and $\sum_{n=1}^{\infty}|\beta_{n-1} - \beta_n| < \infty$;
(d) $\liminf_{n \to \infty}r_n > 0$ and $\sum_{n=1}^{\infty}|r_{n-1} - r_n| < \infty$;
(e) $\liminf_{n \to \infty}\lambda_n \leq \limsup_{n \to \infty}\lambda_n < 2\alpha$ and $\sum_{n=1}^{\infty}|\lambda_{n-1} - \lambda_n| < \infty$.

Remark 3.1 Our method can be viewed as an extension and improvement for some well-known results, for example, the following.

  • If A = 0 , we obtain an extension and improvement of the method of Wang and Xu [24] for finding the approximate element of the common set of solutions of a mixed equilibrium problem and a hierarchical fixed-point problem in a real Hilbert space.

  • If U = f is a Lipschitzian mapping, F = I, ρ = μ = 1, and A = 0, we obtain an extension and improvement of the method of Yao et al. [12] for finding the approximate element of the common set of solutions of a mixed equilibrium problem and a hierarchical fixed-point problem in a real Hilbert space.

  • The contractive mapping f with a coefficient $\alpha \in [0, 1)$ in other papers [12, 15, 22, 25] is extended to the case of a Lipschitzian mapping U with a Lipschitz constant $\tau \in [0, \infty)$.

This shows that Algorithm 3.1 is quite general and unifying.
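To make the structure of (3.1) concrete, the following is a minimal sketch (not code from the paper) of one pass of the iteration in $\mathbb{R}^d$. Every ingredient passed in below, namely the projection onto C, the resolvent $T_{r_n}$ of $F_1$ (Lemma 2.3), the mappings D, A, S, T, U, F, and the toy data in the demonstration, is an assumption supplied only for illustration.

```python
import numpy as np

def step(x_n, n, proj_C, resolvent, D, A, S, T, U, F,
         rho, mu, alpha, beta, lam, r):
    """One pass of iteration (3.1); all operators are user-supplied callables."""
    a_n, b_n, l_n, r_n = alpha(n), beta(n), lam(n), r(n)
    u_n = resolvent(x_n - r_n * D(x_n), r_n)            # mixed equilibrium step: u_n = T_{r_n}(x_n - r_n D x_n)
    z_n = proj_C(u_n - l_n * A(u_n))                    # projection step for VI(A, C)
    y_n = b_n * S(x_n) + (1.0 - b_n) * z_n              # hierarchical step driven by S
    return proj_C(a_n * rho * U(x_n)                    # P_C[a_n*rho*U(x_n) + (I - a_n*mu*F)(T(y_n))]
                  + (T(y_n) - a_n * mu * F(T(y_n))))

# Toy demonstration on C = [-1, 1]^2 with F_1 = 0 and D = 0 (so T_r reduces to P_C).
proj_C = lambda v: np.clip(v, -1.0, 1.0)
x = np.array([0.9, -0.7])
for n in range(1, 200):
    x = step(x, n, proj_C,
             resolvent=lambda w, r: proj_C(w),
             D=lambda v: 0.0 * v, A=lambda v: 0.5 * v,
             S=lambda v: 0.5 * v, T=lambda v: 0.5 * v,
             U=lambda v: v / 14.0, F=lambda v: v,
             rho=1.0, mu=1.0,
             alpha=lambda n: 1.0 / (3 * n), beta=lambda n: 1.0 / n**3,
             lam=lambda n: 1.0 / (2 * (n + 1)), r=lambda n: n / (n + 1.0))
print(x)   # tends to 0, the common solution for these toy choices
```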

Lemma 3.1 Let $x^* \in F(T) \cap \Omega \cap MEP(F_1)$. Then $\{x_n\}$, $\{u_n\}$, $\{z_n\}$, and $\{y_n\}$ are bounded.

Proof First, we show that the mapping $(I - r_nD)$ is nonexpansive. For any $x, y \in C$,
$$\begin{aligned}
\|(I - r_nD)x - (I - r_nD)y\|^2 &= \|(x - y) - r_n(Dx - Dy)\|^2\\
&= \|x - y\|^2 - 2r_n\langle x - y, Dx - Dy\rangle + r_n^2\|Dx - Dy\|^2\\
&\leq \|x - y\|^2 - r_n(2\theta - r_n)\|Dx - Dy\|^2\\
&\leq \|x - y\|^2.
\end{aligned}$$
Similarly, we can show that the mapping $(I - \lambda_nA)$ is nonexpansive. It follows from Lemma 2.3 that $u_n = T_{r_n}(x_n - r_nDx_n)$. Let $x^* \in F(T) \cap \Omega \cap MEP(F_1)$; then $x^* = T_{r_n}(x^* - r_nDx^*)$ and
$$\begin{aligned}
\|u_n - x^*\|^2 &= \|T_{r_n}(x_n - r_nDx_n) - T_{r_n}(x^* - r_nDx^*)\|^2\\
&\leq \|(x_n - r_nDx_n) - (x^* - r_nDx^*)\|^2\\
&\leq \|x_n - x^*\|^2 - r_n(2\theta - r_n)\|Dx_n - Dx^*\|^2\\
&\leq \|x_n - x^*\|^2.
\end{aligned}$$
(3.2)
Since the mapping A is α-inverse strongly monotone, we have
$$\begin{aligned}
\|z_n - x^*\|^2 &= \|P_C[u_n - \lambda_nAu_n] - P_C[x^* - \lambda_nAx^*]\|^2\\
&\leq \|u_n - x^* - \lambda_n(Au_n - Ax^*)\|^2\\
&\leq \|u_n - x^*\|^2 - \lambda_n(2\alpha - \lambda_n)\|Au_n - Ax^*\|^2\\
&\leq \|u_n - x^*\|^2 \leq \|x_n - x^*\|^2.
\end{aligned}$$
(3.3)
We define $V_n = \alpha_n\rho U(x_n) + (I - \alpha_n\mu F)(T(y_n))$. Next, we prove that the sequence $\{x_n\}$ is bounded; without loss of generality, we can assume that $\beta_n \leq \alpha_n$ for all $n \geq 1$. From (3.1), we have
$$\begin{aligned}
\|x_{n+1} - x^*\| &= \|P_C[V_n] - P_C[x^*]\|\\
&\leq \|\alpha_n\rho U(x_n) + (I - \alpha_n\mu F)(T(y_n)) - x^*\|\\
&\leq \alpha_n\|\rho U(x_n) - \mu F(x^*)\| + \|(I - \alpha_n\mu F)(T(y_n)) - (I - \alpha_n\mu F)T(x^*)\|\\
&= \alpha_n\|\rho U(x_n) - \rho U(x^*) + (\rho U - \mu F)x^*\| + \|(I - \alpha_n\mu F)(T(y_n)) - (I - \alpha_n\mu F)T(x^*)\|\\
&\leq \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\|(\rho U - \mu F)x^*\| + (1 - \alpha_n\nu)\|y_n - x^*\|\\
&\leq \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\|(\rho U - \mu F)x^*\| + (1 - \alpha_n\nu)\|\beta_nSx_n + (1 - \beta_n)z_n - x^*\|\\
&\leq \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\|(\rho U - \mu F)x^*\| + (1 - \alpha_n\nu)\big(\beta_n\|Sx_n - Sx^*\| + \beta_n\|Sx^* - x^*\| + (1 - \beta_n)\|z_n - x^*\|\big)\\
&\leq \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\|(\rho U - \mu F)x^*\| + (1 - \alpha_n\nu)\big(\beta_n\|Sx_n - Sx^*\| + \beta_n\|Sx^* - x^*\| + (1 - \beta_n)\|x_n - x^*\|\big)\\
&\leq \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\|(\rho U - \mu F)x^*\| + (1 - \alpha_n\nu)\big(\beta_n\|x_n - x^*\| + \beta_n\|Sx^* - x^*\| + (1 - \beta_n)\|x_n - x^*\|\big)\\
&= \big(1 - \alpha_n(\nu - \rho\tau)\big)\|x_n - x^*\| + \alpha_n\|(\rho U - \mu F)x^*\| + (1 - \alpha_n\nu)\beta_n\|Sx^* - x^*\|\\
&\leq \big(1 - \alpha_n(\nu - \rho\tau)\big)\|x_n - x^*\| + \alpha_n\|(\rho U - \mu F)x^*\| + \beta_n\|Sx^* - x^*\|\\
&\leq \big(1 - \alpha_n(\nu - \rho\tau)\big)\|x_n - x^*\| + \alpha_n\big(\|(\rho U - \mu F)x^*\| + \|Sx^* - x^*\|\big)\\
&= \big(1 - \alpha_n(\nu - \rho\tau)\big)\|x_n - x^*\| + \alpha_n(\nu - \rho\tau)\,\frac{\|(\rho U - \mu F)x^*\| + \|Sx^* - x^*\|}{\nu - \rho\tau}\\
&\leq \max\Big\{\|x_n - x^*\|,\ \frac{1}{\nu - \rho\tau}\big(\|(\rho U - \mu F)x^*\| + \|Sx^* - x^*\|\big)\Big\},
\end{aligned}$$

where the third inequality follows from Lemma 2.6.

By induction on n, we obtain $\|x_n - x^*\| \leq \max\{\|x_0 - x^*\|, \frac{1}{\nu - \rho\tau}(\|(\rho U - \mu F)x^*\| + \|Sx^* - x^*\|)\}$ for $n \geq 0$ and $x_0 \in C$. Hence, $\{x_n\}$ is bounded and, consequently, we deduce that $\{u_n\}$, $\{z_n\}$, $\{V_n\}$, $\{y_n\}$, $\{S(x_n)\}$, $\{T(x_n)\}$, $\{F(T(y_n))\}$, and $\{U(x_n)\}$ are bounded. □

Lemma 3.2 Let $x^* \in F(T) \cap \Omega \cap MEP(F_1)$ and let $\{x_n\}$ be the sequence generated by Algorithm 3.1. Then we have:
(a) $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$;
(b) the weak w-limit set satisfies $w_w(x_n) \subset F(T)$, where $w_w(x_n) = \{x : x_{n_i} \rightharpoonup x\}$.

     
Proof From the nonexpansivity of the mappings $(I - \lambda_nA)$ and $P_C$, we have
$$\begin{aligned}
\|z_n - z_{n-1}\| &\leq \|(u_n - \lambda_nAu_n) - (u_{n-1} - \lambda_{n-1}Au_{n-1})\|\\
&= \|(u_n - u_{n-1}) - \lambda_n(Au_n - Au_{n-1}) - (\lambda_n - \lambda_{n-1})Au_{n-1}\|\\
&\leq \|(u_n - u_{n-1}) - \lambda_n(Au_n - Au_{n-1})\| + |\lambda_n - \lambda_{n-1}|\,\|Au_{n-1}\|\\
&\leq \|u_n - u_{n-1}\| + |\lambda_n - \lambda_{n-1}|\,\|Au_{n-1}\|.
\end{aligned}$$
(3.4)
Next, we estimate that
$$\begin{aligned}
\|y_n - y_{n-1}\| &= \|\beta_nSx_n + (1 - \beta_n)z_n - (\beta_{n-1}Sx_{n-1} + (1 - \beta_{n-1})z_{n-1})\|\\
&= \|\beta_n(Sx_n - Sx_{n-1}) + (\beta_n - \beta_{n-1})Sx_{n-1} + (1 - \beta_n)(z_n - z_{n-1}) + (\beta_{n-1} - \beta_n)z_{n-1}\|\\
&\leq \beta_n\|x_n - x_{n-1}\| + (1 - \beta_n)\|z_n - z_{n-1}\| + |\beta_n - \beta_{n-1}|\big(\|Sx_{n-1}\| + \|z_{n-1}\|\big).
\end{aligned}$$
(3.5)
It follows from (3.4) and (3.5) that
$$\|y_n - y_{n-1}\| \leq \beta_n\|x_n - x_{n-1}\| + (1 - \beta_n)\big\{\|u_n - u_{n-1}\| + |\lambda_n - \lambda_{n-1}|\,\|Au_{n-1}\|\big\} + |\beta_n - \beta_{n-1}|\big(\|Sx_{n-1}\| + \|z_{n-1}\|\big).$$
(3.6)
On the other hand, since $u_n = T_{r_n}(x_n - r_nDx_n)$ and $u_{n-1} = T_{r_{n-1}}(x_{n-1} - r_{n-1}Dx_{n-1})$, we have
$$F_1(u_n, y) + \langle Dx_n, y - u_n\rangle + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \geq 0, \quad \forall y \in C$$
(3.7)
and
$$F_1(u_{n-1}, y) + \langle Dx_{n-1}, y - u_{n-1}\rangle + \frac{1}{r_{n-1}}\langle y - u_{n-1}, u_{n-1} - x_{n-1}\rangle \geq 0, \quad \forall y \in C.$$
(3.8)
Taking $y = u_{n-1}$ in (3.7) and $y = u_n$ in (3.8), we get
$$F_1(u_n, u_{n-1}) + \langle Dx_n, u_{n-1} - u_n\rangle + \frac{1}{r_n}\langle u_{n-1} - u_n, u_n - x_n\rangle \geq 0$$
(3.9)
and
$$F_1(u_{n-1}, u_n) + \langle Dx_{n-1}, u_n - u_{n-1}\rangle + \frac{1}{r_{n-1}}\langle u_n - u_{n-1}, u_{n-1} - x_{n-1}\rangle \geq 0.$$
(3.10)
Adding (3.9) and (3.10) and using the monotonicity of $F_1$, we have
$$\langle Dx_{n-1} - Dx_n, u_n - u_{n-1}\rangle + \Big\langle u_n - u_{n-1}, \frac{u_{n-1} - x_{n-1}}{r_{n-1}} - \frac{u_n - x_n}{r_n}\Big\rangle \geq 0,$$
which implies that
$$\begin{aligned}
0 &\leq \Big\langle u_n - u_{n-1},\ r_n(Dx_{n-1} - Dx_n) + \frac{r_n}{r_{n-1}}(u_{n-1} - x_{n-1}) - (u_n - x_n)\Big\rangle\\
&= \Big\langle u_{n-1} - u_n,\ u_n - u_{n-1} + \Big(1 - \frac{r_n}{r_{n-1}}\Big)u_{n-1} + (x_{n-1} - r_nDx_{n-1}) - (x_n - r_nDx_n) - x_{n-1} + \frac{r_n}{r_{n-1}}x_{n-1}\Big\rangle\\
&= \Big\langle u_{n-1} - u_n,\ \Big(1 - \frac{r_n}{r_{n-1}}\Big)u_{n-1} + (x_{n-1} - r_nDx_{n-1}) - (x_n - r_nDx_n) - x_{n-1} + \frac{r_n}{r_{n-1}}x_{n-1}\Big\rangle - \|u_n - u_{n-1}\|^2\\
&= \Big\langle u_{n-1} - u_n,\ \Big(1 - \frac{r_n}{r_{n-1}}\Big)(u_{n-1} - x_{n-1}) + (x_{n-1} - r_nDx_{n-1}) - (x_n - r_nDx_n)\Big\rangle - \|u_n - u_{n-1}\|^2\\
&\leq \|u_{n-1} - u_n\|\Big\{\Big|1 - \frac{r_n}{r_{n-1}}\Big|\,\|u_{n-1} - x_{n-1}\| + \|(x_{n-1} - r_nDx_{n-1}) - (x_n - r_nDx_n)\|\Big\} - \|u_n - u_{n-1}\|^2\\
&\leq \|u_{n-1} - u_n\|\Big\{\Big|1 - \frac{r_n}{r_{n-1}}\Big|\,\|u_{n-1} - x_{n-1}\| + \|x_{n-1} - x_n\|\Big\} - \|u_n - u_{n-1}\|^2,
\end{aligned}$$
and then
$$\|u_{n-1} - u_n\| \leq \Big|1 - \frac{r_n}{r_{n-1}}\Big|\,\|u_{n-1} - x_{n-1}\| + \|x_{n-1} - x_n\|.$$
Without loss of generality, let us assume that there exists a real number μ such that $r_n > \mu > 0$ for all positive integers n. Then we get
$$\|u_{n-1} - u_n\| \leq \|x_{n-1} - x_n\| + \frac{1}{\mu}|r_{n-1} - r_n|\,\|u_{n-1} - x_{n-1}\|.$$
(3.11)
It follows from (3.6) and (3.11) that
$$\begin{aligned}
\|y_n - y_{n-1}\| &\leq \beta_n\|x_n - x_{n-1}\| + (1 - \beta_n)\Big\{\|x_n - x_{n-1}\| + \frac{1}{\mu}|r_n - r_{n-1}|\,\|u_{n-1} - x_{n-1}\| + |\lambda_n - \lambda_{n-1}|\,\|Au_{n-1}\|\Big\}\\
&\quad + |\beta_n - \beta_{n-1}|\big(\|Sx_{n-1}\| + \|z_{n-1}\|\big)\\
&= \|x_n - x_{n-1}\| + (1 - \beta_n)\Big\{\frac{1}{\mu}|r_n - r_{n-1}|\,\|u_{n-1} - x_{n-1}\| + |\lambda_n - \lambda_{n-1}|\,\|Au_{n-1}\|\Big\}\\
&\quad + |\beta_n - \beta_{n-1}|\big(\|Sx_{n-1}\| + \|z_{n-1}\|\big).
\end{aligned}$$
(3.12)
Next, we estimate that
$$\begin{aligned}
\|x_{n+1} - x_n\| &= \|P_C[V_n] - P_C[V_{n-1}]\|\\
&\leq \big\|\alpha_n\rho\big(U(x_n) - U(x_{n-1})\big) + (\alpha_n - \alpha_{n-1})\rho U(x_{n-1}) + (I - \alpha_n\mu F)(T(y_n)) - (I - \alpha_n\mu F)(T(y_{n-1}))\\
&\qquad + (I - \alpha_n\mu F)(T(y_{n-1})) - (I - \alpha_{n-1}\mu F)(T(y_{n-1}))\big\|\\
&\leq \alpha_n\rho\tau\|x_n - x_{n-1}\| + (1 - \alpha_n\nu)\|y_n - y_{n-1}\| + |\alpha_n - \alpha_{n-1}|\big(\rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\|\big),
\end{aligned}$$
(3.13)
where the second inequality follows from Lemma 2.6. From (3.12) and (3.13), we have
$$\begin{aligned}
\|x_{n+1} - x_n\| &\leq \alpha_n\rho\tau\|x_n - x_{n-1}\| + (1 - \alpha_n\nu)\Big(\|x_n - x_{n-1}\| + \frac{1}{\mu}|r_n - r_{n-1}|\,\|u_{n-1} - x_{n-1}\| + |\lambda_n - \lambda_{n-1}|\,\|Au_{n-1}\|\Big)\\
&\quad + |\beta_n - \beta_{n-1}|\big(\|Sx_{n-1}\| + \|z_{n-1}\|\big) + |\alpha_n - \alpha_{n-1}|\big(\rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\|\big)\\
&\leq \big(1 - (\nu - \rho\tau)\alpha_n\big)\|x_n - x_{n-1}\| + \frac{1}{\mu}|r_n - r_{n-1}|\,\|u_{n-1} - x_{n-1}\| + |\lambda_n - \lambda_{n-1}|\,\|Au_{n-1}\|\\
&\quad + |\beta_n - \beta_{n-1}|\big(\|Sx_{n-1}\| + \|z_{n-1}\|\big) + |\alpha_n - \alpha_{n-1}|\big(\rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\|\big)\\
&\leq \big(1 - (\nu - \rho\tau)\alpha_n\big)\|x_n - x_{n-1}\| + M\Big(\frac{1}{\mu}|r_n - r_{n-1}| + |\lambda_n - \lambda_{n-1}| + |\beta_n - \beta_{n-1}| + |\alpha_n - \alpha_{n-1}|\Big).
\end{aligned}$$
(3.14)
Here
$$M = \max\Big\{\sup_{n \geq 1}\|u_{n-1} - x_{n-1}\|,\ \sup_{n \geq 1}\|Au_{n-1}\|,\ \sup_{n \geq 1}\big(\|Sx_{n-1}\| + \|z_{n-1}\|\big),\ \sup_{n \geq 1}\big(\rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\|\big)\Big\}.$$
It follows from conditions (a)-(e) of Algorithm 3.1 and Lemma 2.7 that
$$\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0.$$
Next, we show that $\lim_{n \to \infty}\|u_n - x_n\| = 0$. Since $x^* \in F(T) \cap \Omega \cap MEP(F_1)$, by using (3.2) and (3.3), we obtain
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \langle P_C(V_n) - x^*, x_{n+1} - x^*\rangle\\
&= \langle P_C(V_n) - V_n, P_C(V_n) - x^*\rangle + \langle V_n - x^*, x_{n+1} - x^*\rangle\\
&\leq \langle \alpha_n(\rho U(x_n) - \mu F(x^*)) + (I - \alpha_n\mu F)(T(y_n)) - (I - \alpha_n\mu F)(T(x^*)), x_{n+1} - x^*\rangle\\
&= \alpha_n\rho\langle U(x_n) - U(x^*), x_{n+1} - x^*\rangle + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \langle (I - \alpha_n\mu F)(T(y_n)) - (I - \alpha_n\mu F)(T(x^*)), x_{n+1} - x^*\rangle\\
&\leq \alpha_n\rho\tau\|x_n - x^*\|\,\|x_{n+1} - x^*\| + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle + (1 - \alpha_n\nu)\|y_n - x^*\|\,\|x_{n+1} - x^*\|\\
&\leq \frac{\alpha_n\rho\tau}{2}\big(\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2\big) + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{1 - \alpha_n\nu}{2}\big(\|y_n - x^*\|^2 + \|x_{n+1} - x^*\|^2\big)\\
&\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{1 - \alpha_n\nu}{2}\big(\beta_n\|Sx_n - x^*\|^2 + (1 - \beta_n)\|z_n - x^*\|^2\big)\\
&\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{2}\|Sx_n - x^*\|^2\\
&\quad + \frac{(1 - \alpha_n\nu)(1 - \beta_n)}{2}\Big\{\|x_n - x^*\|^2 - r_n(2\theta - r_n)\|Dx_n - Dx^*\|^2 - \lambda_n(2\alpha - \lambda_n)\|Au_n - Ax^*\|^2\Big\},
\end{aligned}$$
(3.15)
which implies that
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2\\
&\quad + \frac{(1 - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\Big\{\|x_n - x^*\|^2 - r_n(2\theta - r_n)\|Dx_n - Dx^*\|^2 - \lambda_n(2\alpha - \lambda_n)\|Au_n - Ax^*\|^2\Big\}\\
&\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle + \|x_n - x^*\|^2\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2\\
&\quad - \frac{(1 - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\Big\{r_n(2\theta - r_n)\|Dx_n - Dx^*\|^2 + \lambda_n(2\alpha - \lambda_n)\|Au_n - Ax^*\|^2\Big\}.
\end{aligned}$$
Then, from the inequality above, we get
$$\begin{aligned}
&\frac{(1 - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\Big\{r_n(2\theta - r_n)\|Dx_n - Dx^*\|^2 + \lambda_n(2\alpha - \lambda_n)\|Au_n - Ax^*\|^2\Big\}\\
&\quad\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\qquad + \beta_n\|Sx_n - x^*\|^2 + \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2\\
&\quad\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\qquad + \beta_n\|Sx_n - x^*\|^2 + \big(\|x_n - x^*\| + \|x_{n+1} - x^*\|\big)\|x_{n+1} - x_n\|.
\end{aligned}$$

Since $\liminf_{n \to \infty}\lambda_n \leq \limsup_{n \to \infty}\lambda_n < 2\alpha$, $\{r_n\} \subset (0, 2\theta)$, $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, and $\beta_n \to 0$, we obtain $\lim_{n \to \infty}\|Dx_n - Dx^*\| = 0$ and $\lim_{n \to \infty}\|Au_n - Ax^*\| = 0$.

Since T r n is firmly nonexpansive, we have
$$\begin{aligned}
\|u_n - x^*\|^2 &= \|T_{r_n}(x_n - r_nDx_n) - T_{r_n}(x^* - r_nDx^*)\|^2\\
&\leq \langle u_n - x^*, (x_n - r_nDx_n) - (x^* - r_nDx^*)\rangle\\
&= \frac{1}{2}\Big\{\|u_n - x^*\|^2 + \|(x_n - r_nDx_n) - (x^* - r_nDx^*)\|^2 - \big\|u_n - x^* - \big[(x_n - r_nDx_n) - (x^* - r_nDx^*)\big]\big\|^2\Big\}.
\end{aligned}$$
Hence,
$$\begin{aligned}
\|u_n - x^*\|^2 &\leq \|(x_n - r_nDx_n) - (x^* - r_nDx^*)\|^2 - \|u_n - x_n + r_n(Dx_n - Dx^*)\|^2\\
&\leq \|x_n - x^*\|^2 - \|u_n - x_n + r_n(Dx_n - Dx^*)\|^2\\
&\leq \|x_n - x^*\|^2 - \|u_n - x_n\|^2 + 2r_n\|u_n - x_n\|\,\|Dx_n - Dx^*\|.
\end{aligned}$$
From (3.15), (3.3), and the inequality above, we have
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{1 - \alpha_n\nu}{2}\big(\beta_n\|Sx_n - x^*\|^2 + (1 - \beta_n)\|z_n - x^*\|^2\big)\\
&\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{1 - \alpha_n\nu}{2}\big(\beta_n\|Sx_n - x^*\|^2 + (1 - \beta_n)\|u_n - x^*\|^2\big)\\
&\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{1 - \alpha_n\nu}{2}\Big\{\beta_n\|Sx_n - x^*\|^2 + (1 - \beta_n)\big(\|x_n - x^*\|^2 - \|u_n - x_n\|^2 + 2r_n\|u_n - x_n\|\,\|Dx_n - Dx^*\|\big)\Big\},
\end{aligned}$$
which implies that
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2\\
&\quad + \frac{(1 - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\Big\{\|x_n - x^*\|^2 - \|u_n - x_n\|^2 + 2r_n\|u_n - x_n\|\,\|Dx_n - Dx^*\|\Big\}\\
&\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \|x_n - x^*\|^2\\
&\quad + \frac{(1 - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\Big\{-\|u_n - x_n\|^2 + 2r_n\|u_n - x_n\|\,\|Dx_n - Dx^*\|\Big\}.
\end{aligned}$$
Hence,
$$\begin{aligned}
\frac{(1 - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\|u_n - x_n\|^2 &\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \frac{2(1 - \alpha_n\nu)(1 - \beta_n)r_n}{1 + \alpha_n(\nu - \rho\tau)}\|u_n - x_n\|\,\|Dx_n - Dx^*\|\\
&\quad + \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2\\
&\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \frac{2(1 - \alpha_n\nu)(1 - \beta_n)r_n}{1 + \alpha_n(\nu - \rho\tau)}\|u_n - x_n\|\,\|Dx_n - Dx^*\|\\
&\quad + \big(\|x_n - x^*\| + \|x_{n+1} - x^*\|\big)\|x_{n+1} - x_n\|.
\end{aligned}$$
Since $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, $\beta_n \to 0$, and $\lim_{n \to \infty}\|Dx_n - Dx^*\| = 0$, we obtain
$$\lim_{n \to \infty}\|u_n - x_n\| = 0.$$
(3.16)
From (2.2), we get
$$\begin{aligned}
\|z_n - x^*\|^2 &= \|P_C[u_n - \lambda_nAu_n] - P_C[x^* - \lambda_nAx^*]\|^2\\
&\leq \langle z_n - x^*, (u_n - \lambda_nAu_n) - (x^* - \lambda_nAx^*)\rangle\\
&= \frac{1}{2}\Big\{\|z_n - x^*\|^2 + \|u_n - x^* - \lambda_n(Au_n - Ax^*)\|^2 - \|u_n - x^* - \lambda_n(Au_n - Ax^*) - (z_n - x^*)\|^2\Big\}\\
&\leq \frac{1}{2}\Big\{\|z_n - x^*\|^2 + \|u_n - x^*\|^2 - \|u_n - z_n - \lambda_n(Au_n - Ax^*)\|^2\Big\}\\
&\leq \frac{1}{2}\Big\{\|z_n - x^*\|^2 + \|u_n - x^*\|^2 - \|u_n - z_n\|^2 + 2\lambda_n\langle u_n - z_n, Au_n - Ax^*\rangle\Big\}\\
&\leq \frac{1}{2}\Big\{\|z_n - x^*\|^2 + \|u_n - x^*\|^2 - \|u_n - z_n\|^2 + 2\lambda_n\|u_n - z_n\|\,\|Au_n - Ax^*\|\Big\}.
\end{aligned}$$
Hence,
$$\|z_n - x^*\|^2 \leq \|u_n - x^*\|^2 - \|u_n - z_n\|^2 + 2\lambda_n\|u_n - z_n\|\,\|Au_n - Ax^*\| \leq \|x_n - x^*\|^2 - \|u_n - z_n\|^2 + 2\lambda_n\|u_n - z_n\|\,\|Au_n - Ax^*\|.$$
From (3.15) and the inequality above, we have
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{1 - \alpha_n\nu}{2}\big(\beta_n\|Sx_n - x^*\|^2 + (1 - \beta_n)\|z_n - x^*\|^2\big)\\
&\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{1 - \alpha_n\nu}{2}\Big\{\beta_n\|Sx_n - x^*\|^2 + (1 - \beta_n)\big(\|x_n - x^*\|^2 - \|u_n - z_n\|^2 + 2\lambda_n\|u_n - z_n\|\,\|Au_n - Ax^*\|\big)\Big\},
\end{aligned}$$
which implies that
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2\\
&\quad + \frac{(1 - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\Big\{\|x_n - x^*\|^2 - \|u_n - z_n\|^2 + 2\lambda_n\|u_n - z_n\|\,\|Au_n - Ax^*\|\Big\}.
\end{aligned}$$
Hence,
$$\begin{aligned}
\frac{(1 - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\|u_n - z_n\|^2 &\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2\\
&\quad + 2\lambda_n\|u_n - z_n\|\,\|Au_n - Ax^*\|\\
&\leq \frac{\alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(x^*) - \mu F(x^*), x_{n+1} - x^*\rangle\\
&\quad + \frac{(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \big(\|x_n - x^*\| + \|x_{n+1} - x^*\|\big)\|x_{n+1} - x_n\|\\
&\quad + 2\lambda_n\|u_n - z_n\|\,\|Au_n - Ax^*\|.
\end{aligned}$$
Since $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, $\beta_n \to 0$, and $\lim_{n \to \infty}\|Au_n - Ax^*\| = 0$, we obtain
$$\lim_{n \to \infty}\|u_n - z_n\| = 0.$$
(3.17)
It follows from (3.16) and (3.17) that
$$\lim_{n \to \infty}\|x_n - z_n\| = 0.$$
(3.18)
Since $T(x_n) \in C$, we have
$$\begin{aligned}
\|x_n - T(x_n)\| &\leq \|x_n - x_{n+1}\| + \|x_{n+1} - T(x_n)\|\\
&= \|x_n - x_{n+1}\| + \|P_C[V_n] - P_C[T(x_n)]\|\\
&\leq \|x_n - x_{n+1}\| + \|\alpha_n(\rho U(x_n) - \mu F(T(y_n))) + T(y_n) - T(x_n)\|\\
&\leq \|x_n - x_{n+1}\| + \alpha_n\|\rho U(x_n) - \mu F(T(y_n))\| + \|y_n - x_n\|\\
&\leq \|x_n - x_{n+1}\| + \alpha_n\|\rho U(x_n) - \mu F(T(y_n))\| + \|\beta_nSx_n + (1 - \beta_n)z_n - x_n\|\\
&\leq \|x_n - x_{n+1}\| + \alpha_n\|\rho U(x_n) - \mu F(T(y_n))\| + \beta_n\|Sx_n - x_n\| + (1 - \beta_n)\|z_n - x_n\|.
\end{aligned}$$
Since $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, $\beta_n \to 0$, $\|\rho U(x_n) - \mu F(T(y_n))\|$ and $\|Sx_n - x_n\|$ are bounded, and $\lim_{n \to \infty}\|x_n - z_n\| = 0$, we obtain
$$\lim_{n \to \infty}\|x_n - T(x_n)\| = 0.$$

Since $\{x_n\}$ is bounded, without loss of generality we can assume that $x_n \rightharpoonup \hat{x} \in C$. It follows from Lemma 2.4 that $\hat{x} \in F(T)$. Therefore, $w_w(x_n) \subset F(T)$. □

Theorem 3.1 The sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to z, which is the unique solution of the variational inequality
$$\langle \rho U(z) - \mu F(z), x - z\rangle \leq 0, \quad \forall x \in \Omega \cap MEP(F_1) \cap F(T).$$
(3.19)
Proof Since $\{x_n\}$ is bounded, we may assume that $x_{n_k} \rightharpoonup w$, and from Lemma 3.2 we have $w \in F(T)$. Next, we show that $w \in MEP(F_1)$. Since $u_n = T_{r_n}(x_n - r_nDx_n)$, we have
$$F_1(u_n, y) + \langle Dx_n, y - u_n\rangle + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \geq 0, \quad \forall y \in C.$$
It follows from the monotonicity of F 1 that
$$\langle Dx_n, y - u_n\rangle + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \geq F_1(y, u_n), \quad \forall y \in C$$
and
$$\langle Dx_{n_k}, y - u_{n_k}\rangle + \Big\langle y - u_{n_k}, \frac{u_{n_k} - x_{n_k}}{r_{n_k}}\Big\rangle \geq F_1(y, u_{n_k}), \quad \forall y \in C.$$
(3.20)
Since $\lim_{n \to \infty}\|u_n - x_n\| = 0$ and $x_{n_k} \rightharpoonup w$, it is easy to observe that $u_{n_k} \rightharpoonup w$. For any $0 < t \leq 1$ and $y \in C$, let $y_t = ty + (1 - t)w$; then $y_t \in C$. From (3.20), we obtain
$$\begin{aligned}
\langle Dy_t, y_t - u_{n_k}\rangle &\geq \langle Dy_t, y_t - u_{n_k}\rangle - \langle Dx_{n_k}, y_t - u_{n_k}\rangle - \Big\langle y_t - u_{n_k}, \frac{u_{n_k} - x_{n_k}}{r_{n_k}}\Big\rangle + F_1(y_t, u_{n_k})\\
&= \langle Dy_t - Du_{n_k}, y_t - u_{n_k}\rangle + \langle Du_{n_k} - Dx_{n_k}, y_t - u_{n_k}\rangle - \Big\langle y_t - u_{n_k}, \frac{u_{n_k} - x_{n_k}}{r_{n_k}}\Big\rangle + F_1(y_t, u_{n_k}).
\end{aligned}$$
(3.21)
Since D is Lipschitz continuous and $\lim_{n \to \infty}\|u_n - x_n\| = 0$, we obtain $\lim_{k \to \infty}\|Du_{n_k} - Dx_{n_k}\| = 0$. From the monotonicity of D and $u_{n_k} \rightharpoonup w$, it follows from (3.21) that
$$\langle Dy_t, y_t - w\rangle \geq F_1(y_t, w).$$
(3.22)
Hence, from assumptions (i)-(iv) of Lemma 2.2 and (3.22), we have
$$\begin{aligned}
0 = F_1(y_t, y_t) &\leq tF_1(y_t, y) + (1 - t)F_1(y_t, w)\\
&\leq tF_1(y_t, y) + (1 - t)\langle Dy_t, y_t - w\rangle\\
&\leq tF_1(y_t, y) + (1 - t)t\langle Dy_t, y - w\rangle,
\end{aligned}$$
(3.23)
which implies that $F_1(y_t, y) + (1 - t)\langle Dy_t, y - w\rangle \geq 0$. Letting $t \to 0^+$, we have
$$F_1(w, y) + \langle Dw, y - w\rangle \geq 0, \quad \forall y \in C,$$

which implies that $w \in MEP(F_1)$.

Furthermore, we show that $w \in \Omega$. Let
$$Tv = \begin{cases} Av + N_Cv, & v \in C,\\ \emptyset, & \text{otherwise}, \end{cases}$$
where $N_Cv := \{w \in H : \langle w, v - u\rangle \geq 0, \ \forall u \in C\}$ is the normal cone to C at $v \in C$. Then T is maximal monotone and $0 \in Tv$ if and only if $v \in \Omega$ (see [28]). Let $G(T)$ denote the graph of T and let $(v, u) \in G(T)$; since $u - Av \in N_Cv$ and $z_n \in C$, we have
$$\langle v - z_n, u - Av\rangle \geq 0.$$
(3.24)
On the other hand, it follows from $z_n = P_C[u_n - \lambda_nAu_n]$ and $v \in C$ that
$$\langle v - z_n, z_n - (u_n - \lambda_nAu_n)\rangle \geq 0$$
and
$$\Big\langle v - z_n, \frac{z_n - u_n}{\lambda_n} + Au_n\Big\rangle \geq 0.$$
Therefore, from (3.24) and the inverse strong monotonicity of A, we have
$$\begin{aligned}
\langle v - z_{n_k}, u\rangle &\geq \langle v - z_{n_k}, Av\rangle\\
&\geq \langle v - z_{n_k}, Av\rangle - \Big\langle v - z_{n_k}, \frac{z_{n_k} - u_{n_k}}{\lambda_{n_k}} + Au_{n_k}\Big\rangle\\
&= \langle v - z_{n_k}, Av - Az_{n_k}\rangle + \langle v - z_{n_k}, Az_{n_k} - Au_{n_k}\rangle - \Big\langle v - z_{n_k}, \frac{z_{n_k} - u_{n_k}}{\lambda_{n_k}}\Big\rangle\\
&\geq \langle v - z_{n_k}, Az_{n_k} - Au_{n_k}\rangle - \Big\langle v - z_{n_k}, \frac{z_{n_k} - u_{n_k}}{\lambda_{n_k}}\Big\rangle.
\end{aligned}$$
Since $\lim_{n \to \infty}\|u_n - z_n\| = 0$ and $u_{n_k} \rightharpoonup w$, it is easy to observe that $z_{n_k} \rightharpoonup w$. Hence, we obtain $\langle v - w, u\rangle \geq 0$. Since T is maximal monotone, we have $w \in T^{-1}0$, and hence $w \in \Omega$. Thus we have
$$w \in \Omega \cap MEP(F_1) \cap F(T).$$
Observe that the constants satisfy $0 \leq \rho\tau < \nu$ and
$$k \geq \eta \ \Longrightarrow\ k^2 \geq \eta^2 \ \Longrightarrow\ 1 - 2\mu\eta + \mu^2k^2 \geq 1 - 2\mu\eta + \mu^2\eta^2 \ \Longrightarrow\ \sqrt{1 - \mu(2\eta - \mu k^2)} \geq 1 - \mu\eta \ \Longrightarrow\ \mu\eta \geq 1 - \sqrt{1 - \mu(2\eta - \mu k^2)} \ \Longrightarrow\ \mu\eta \geq \nu;$$

therefore, from Lemma 2.5, the operator $\mu F - \rho U$ is $(\mu\eta - \rho\tau)$-strongly monotone, so the solution of the variational inequality (3.19) is unique; we denote it by $z \in \Omega \cap MEP(F_1) \cap F(T)$.

Next, we claim that $\limsup_{n \to \infty}\langle \rho U(z) - \mu F(z), x_n - z\rangle \leq 0$. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\limsup_{n \to \infty}\langle \rho U(z) - \mu F(z), x_n - z\rangle = \limsup_{k \to \infty}\langle \rho U(z) - \mu F(z), x_{n_k} - z\rangle = \langle \rho U(z) - \mu F(z), w - z\rangle \leq 0.$$
Next, we show that $x_n \to z$. We have
$$\begin{aligned}
\|x_{n+1} - z\|^2 &= \langle P_C(V_n) - z, x_{n+1} - z\rangle\\
&= \langle P_C(V_n) - V_n, P_C(V_n) - z\rangle + \langle V_n - z, x_{n+1} - z\rangle\\
&\leq \langle \alpha_n(\rho U(x_n) - \mu F(z)) + (I - \alpha_n\mu F)(T(y_n)) - (I - \alpha_n\mu F)(T(z)), x_{n+1} - z\rangle\\
&= \alpha_n\rho\langle U(x_n) - U(z), x_{n+1} - z\rangle + \alpha_n\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle\\
&\quad + \langle (I - \alpha_n\mu F)(T(y_n)) - (I - \alpha_n\mu F)(T(z)), x_{n+1} - z\rangle\\
&\leq \alpha_n\rho\tau\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle + (1 - \alpha_n\nu)\|y_n - z\|\,\|x_{n+1} - z\|\\
&\leq \alpha_n\rho\tau\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle\\
&\quad + (1 - \alpha_n\nu)\big\{\beta_n\|Sx_n - Sz\| + \beta_n\|Sz - z\| + (1 - \beta_n)\|z_n - z\|\big\}\|x_{n+1} - z\|\\
&\leq \alpha_n\rho\tau\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle\\
&\quad + (1 - \alpha_n\nu)\big\{\beta_n\|x_n - z\| + \beta_n\|Sz - z\| + (1 - \beta_n)\|x_n - z\|\big\}\|x_{n+1} - z\|\\
&= \big(1 - \alpha_n(\nu - \rho\tau)\big)\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle\\
&\quad + (1 - \alpha_n\nu)\beta_n\|Sz - z\|\,\|x_{n+1} - z\|\\
&\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\big(\|x_n - z\|^2 + \|x_{n+1} - z\|^2\big) + \alpha_n\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle\\
&\quad + (1 - \alpha_n\nu)\beta_n\|Sz - z\|\,\|x_{n+1} - z\|,
\end{aligned}$$
which implies that
$$\begin{aligned}
\|x_{n+1} - z\|^2 &\leq \frac{1 - \alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - z\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle\\
&\quad + \frac{2(1 - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sz - z\|\,\|x_{n+1} - z\|\\
&\leq \big(1 - \alpha_n(\nu - \rho\tau)\big)\|x_n - z\|^2 + \frac{2\alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\Big\{\frac{1}{\nu - \rho\tau}\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle\\
&\qquad + \frac{(1 - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|Sz - z\|\,\|x_{n+1} - z\|\Big\}.
\end{aligned}$$

Let $\gamma_n = \alpha_n(\nu - \rho\tau)$ and $\delta_n = \frac{2\alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\big\{\frac{1}{\nu - \rho\tau}\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle + \frac{(1 - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|Sz - z\|\,\|x_{n+1} - z\|\big\}$.

We have
$$\sum_{n=1}^{\infty}\alpha_n = \infty$$
and
$$\limsup_{n \to \infty}\Big\{\frac{1}{\nu - \rho\tau}\langle \rho U(z) - \mu F(z), x_{n+1} - z\rangle + \frac{(1 - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|Sz - z\|\,\|x_{n+1} - z\|\Big\} \leq 0.$$
It follows that
$$\sum_{n=1}^{\infty}\gamma_n = \infty \quad\text{and}\quad \limsup_{n \to \infty}\frac{\delta_n}{\gamma_n} \leq 0.$$

Thus all the conditions of Lemma 2.7 are satisfied. Hence we deduce that x n z . This completes the proof. □

4 Applications

In this section, we obtain the following results as special cases of the proposed method.

Putting A = 0 in Algorithm 3.1, we obtain the following result which can be viewed as an extension and improvement of the method of Wang and Xu [24] for finding the approximate element of the common set of solutions of a mixed equilibrium problem and a hierarchical fixed-point problem in a real Hilbert space.

Corollary 4.1 Let C be a nonempty closed convex subset of a real Hilbert space H. Let $D: C \to H$ be a θ-inverse strongly monotone mapping. Let $F_1: C \times C \to \mathbb{R}$ be a bifunction satisfying assumptions (i)-(iv) of Lemma 2.2 and let $S, T: C \to C$ be nonexpansive mappings such that $F(T) \cap MEP(F_1) \neq \emptyset$. Let $F: C \to C$ be a k-Lipschitzian and η-strongly monotone mapping, and let $U: C \to C$ be a τ-Lipschitzian mapping. For an arbitrarily given $x_0 \in C$, let the iterative sequences $\{u_n\}$, $\{x_n\}$, and $\{y_n\}$ be generated by
$$\begin{cases} F_1(u_n, y) + \langle Dx_n, y - u_n\rangle + \dfrac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \geq 0, & \forall y \in C;\\[4pt] y_n = \beta_n S x_n + (1 - \beta_n)u_n;\\[4pt] x_{n+1} = P_C[\alpha_n\rho U(x_n) + (I - \alpha_n\mu F)(T(y_n))], & n \geq 0, \end{cases}$$
where $\{r_n\} \subset (0, 2\theta)$ and $\{\alpha_n\}, \{\beta_n\} \subset (0, 1)$. Suppose that the parameters satisfy $0 < \mu < \frac{2\eta}{k^2}$ and $0 \leq \rho\tau < \nu$, where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Also, $\{\alpha_n\}$, $\{\beta_n\}$, and $\{r_n\}$ are sequences satisfying conditions (a)-(d) of Algorithm 3.1. The sequence $\{x_n\}$ converges strongly to z, which is the unique solution of the variational inequality
$$\langle \rho U(z) - \mu F(z), x - z\rangle \leq 0, \quad \forall x \in MEP(F_1) \cap F(T).$$

Putting $U = f$, $F = I$, $\rho = \mu = 1$, and $A = 0$, we obtain an extension and improvement of the method of Yao et al. [12] for finding the approximate element of the common set of solutions of a mixed equilibrium problem and a hierarchical fixed-point problem in a real Hilbert space.

Corollary 4.2 Let C be a nonempty closed convex subset of a real Hilbert space H. Let $D: C \to H$ be a θ-inverse strongly monotone mapping. Let $F_1: C \times C \to \mathbb{R}$ be a bifunction satisfying assumptions (i)-(iv) of Lemma 2.2 and let $S, T: C \to C$ be nonexpansive mappings such that $F(T) \cap MEP(F_1) \neq \emptyset$. Let $f: C \to C$ be a τ-Lipschitzian mapping. For an arbitrarily given $x_0 \in C$, let the iterative sequences $\{u_n\}$, $\{x_n\}$, and $\{y_n\}$ be generated by
$$\begin{cases} F_1(u_n, y) + \langle Dx_n, y - u_n\rangle + \dfrac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \geq 0, & \forall y \in C;\\[4pt] y_n = \beta_n S x_n + (1 - \beta_n)u_n;\\[4pt] x_{n+1} = P_C[\alpha_n f(x_n) + (1 - \alpha_n)T(y_n)], & n \geq 0, \end{cases}$$
where $\{r_n\} \subset (0, 2\theta)$ and $\{\alpha_n\}$, $\{\beta_n\}$ are sequences in $(0, 1)$ satisfying conditions (a)-(d) of Algorithm 3.1. The sequence $\{x_n\}$ converges strongly to z, which is the unique solution of the variational inequality
$$\langle f(z) - z, x - z\rangle \leq 0, \quad \forall x \in MEP(F_1) \cap F(T).$$

Remark 4.1 Some existing methods (e.g., [12, 14, 16, 17, 25]) can be viewed as special cases of Algorithm 3.1. Therefore, the new algorithm is expected to be widely applicable.

To verify the theoretical assertions, we consider the following example.

Example 4.1 Let $\alpha_n = \frac{1}{3n}$, $\beta_n = \frac{1}{n^3}$, $\lambda_n = \frac{1}{2(n + 1)}$, and $r_n = \frac{n}{n + 1}$.

We have
$$\lim_{n \to \infty}\alpha_n = \frac{1}{3}\lim_{n \to \infty}\frac{1}{n} = 0$$
and
$$\sum_{n=1}^{\infty}\alpha_n = \frac{1}{3}\sum_{n=1}^{\infty}\frac{1}{n} = \infty.$$
The sequence $\{\alpha_n\}$ satisfies condition (a). Since
$$\lim_{n \to \infty}\frac{\beta_n}{\alpha_n} = \lim_{n \to \infty}\frac{3}{n^2} = 0,$$
condition (b) is satisfied. We compute
$$\alpha_{n-1} - \alpha_n = \frac{1}{3}\Big(\frac{1}{n - 1} - \frac{1}{n}\Big) = \frac{1}{3n(n - 1)}.$$
It is easy to show that $\sum_{n=1}^{\infty}|\alpha_{n-1} - \alpha_n| < \infty$. Similarly, we can show that $\sum_{n=1}^{\infty}|\beta_{n-1} - \beta_n| < \infty$. The sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy condition (c). We have
$$\liminf_{n \to \infty}r_n = \liminf_{n \to \infty}\frac{n}{n + 1} = 1 > 0$$
and
$$\sum_{n=1}^{\infty}|r_{n-1} - r_n| = \sum_{n=1}^{\infty}\Big|\frac{n - 1}{n} - \frac{n}{n + 1}\Big| = \sum_{n=1}^{\infty}\frac{1}{n(n + 1)} \leq \sum_{n=1}^{\infty}\frac{1}{n^2} < \infty.$$
Then the sequence $\{r_n\}$ satisfies condition (d). We compute
$$\sum_{n=1}^{\infty}|\lambda_{n-1} - \lambda_n| = \sum_{n=1}^{\infty}\Big|\frac{1}{2n} - \frac{1}{2(n + 1)}\Big| = \frac{1}{2} < \infty.$$

Then the sequence $\{\lambda_n\}$ satisfies condition (e).
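These limits and summability properties can also be checked numerically. The following throwaway snippet (not from the paper) approximates the relevant limits and partial sums for the four sequences above; the cutoff N is an arbitrary assumption.

```python
import numpy as np

# Rough numerical check (not from the paper) of conditions (a)-(e) for Example 4.1.
N = 10**6
n = np.arange(1, N + 1, dtype=float)
alpha, beta = 1 / (3 * n), 1 / n**3
lam, r = 1 / (2 * (n + 1)), n / (n + 1)

print(alpha[-1], alpha.sum())            # alpha_n -> 0 while the partial sums keep growing (condition (a))
print((beta / alpha)[-1])                # beta_n / alpha_n -> 0 (condition (b))
print(np.abs(np.diff(alpha)).sum(),      # summable successive differences (condition (c))
      np.abs(np.diff(beta)).sum())
print(r.min(), np.abs(np.diff(r)).sum()) # r_n >= 1/2 with summable differences (condition (d))
print(np.abs(np.diff(lam)).sum())        # approaches 1/2, matching condition (e)
```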

Let $\mathbb{R}$ be the set of real numbers and let $D = 0$. Let the mapping $A: \mathbb{R} \to \mathbb{R}$ be defined by
$$Ax = \frac{x}{2}, \quad \forall x \in \mathbb{R},$$
let the mapping $T: \mathbb{R} \to \mathbb{R}$ be defined by
$$T(x) = \frac{x}{2}, \quad \forall x \in \mathbb{R},$$
let the mapping $F: \mathbb{R} \to \mathbb{R}$ be defined by
$$F(x) = \frac{2x + 5}{7}, \quad \forall x \in \mathbb{R},$$
let the mapping $S: \mathbb{R} \to \mathbb{R}$ be defined by
$$S(x) = \frac{x}{2}, \quad \forall x \in \mathbb{R},$$
let the mapping $U: \mathbb{R} \to \mathbb{R}$ be defined by
$$U(x) = \frac{x}{14}, \quad \forall x \in \mathbb{R},$$
and let the mapping $F_1: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be defined by
$$F_1(x, y) = -3x^2 + xy + 2y^2, \quad \forall (x, y) \in \mathbb{R} \times \mathbb{R}.$$
It is easy to show that A is a 1-inverse strongly monotone mapping, T and S are nonexpansive mappings, F is a 1-Lipschitzian and $\frac{1}{7}$-strongly monotone mapping, and U is a $\frac{1}{7}$-Lipschitzian mapping. It is clear that
$$\Omega \cap MEP(F_1) \cap F(T) = \{0\}.$$
By the definition of $F_1$, we have
$$0 \leq F_1(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle = -3u_n^2 + u_ny + 2y^2 + \frac{1}{r_n}(y - u_n)(u_n - x_n).$$
Then
$$0 \leq r_n(-3u_n^2 + u_ny + 2y^2) + (yu_n - yx_n - u_n^2 + u_nx_n) = 2r_ny^2 + (r_nu_n + u_n - x_n)y - 3r_nu_n^2 - u_n^2 + u_nx_n.$$
Let $B(y) = 2r_ny^2 + (r_nu_n + u_n - x_n)y - 3r_nu_n^2 - u_n^2 + u_nx_n$. Then B(y) is a quadratic function of y with coefficients $a = 2r_n$, $b = r_nu_n + u_n - x_n$, and $c = -3r_nu_n^2 - u_n^2 + u_nx_n$. We determine the discriminant Δ of B as follows:
$$\begin{aligned}
\Delta &= b^2 - 4ac\\
&= (r_nu_n + u_n - x_n)^2 - 8r_n(-3r_nu_n^2 - u_n^2 + u_nx_n)\\
&= u_n^2 + 10r_nu_n^2 + 25r_n^2u_n^2 - 2x_nu_n - 10r_nx_nu_n + x_n^2\\
&= (u_n + 5r_nu_n - x_n)^2.
\end{aligned}$$
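Since $B(y) \geq 0$ must hold for every $y \in \mathbb{R}$ and $\Delta = (u_n + 5r_nu_n - x_n)^2 \geq 0$, the discriminant has to vanish, which gives the closed form $u_n = \frac{x_n}{1 + 5r_n}$. The following minimal sketch (not code from the paper) runs Algorithm 3.1 on the data of this example with that closed form; the values of ρ and μ are illustrative assumptions chosen to satisfy the parameter restrictions, since the example does not fix them.

```python
# Minimal numerical sketch of Algorithm 3.1 for Example 4.1 (C = R, so P_C = I).
# Assumed parameter choices: mu = 1/7 < 2*eta/k^2 = 2/7 and rho = 0.05, so that
# rho*tau < nu = 1 - sqrt(1 - mu*(2/7 - mu)).

def run(x0=10.0, iters=300):
    rho, mu = 0.05, 1.0 / 7.0
    x = x0
    for n in range(1, iters + 1):
        alpha, beta = 1.0 / (3 * n), 1.0 / n**3
        lam, r = 1.0 / (2 * (n + 1)), n / (n + 1.0)
        u = x / (1.0 + 5.0 * r)                  # resolvent step: u_n from Delta = 0
        z = u - lam * (u / 2.0)                  # z_n = u_n - lam_n * A(u_n)
        y = beta * (x / 2.0) + (1.0 - beta) * z  # y_n = beta_n * S(x_n) + (1 - beta_n) * z_n
        Ty = y / 2.0                             # T(y_n)
        x = alpha * rho * (x / 14.0) + (Ty - alpha * mu * (2.0 * Ty + 5.0) / 7.0)
    return x

print(run())   # approaches 0, the unique element of Omega ∩ MEP(F_1) ∩ F(T)
```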