Research | Open Access
Strong convergence of projection methods for a countable family of nonexpansive mappings and applications to constrained convex minimization problems

Journal of Inequalities and Applications 2013, 2013:546

https://doi.org/10.1186/1029-242X-2013-546

  • Received: 28 April 2013
  • Accepted: 17 September 2013
  • Published:

Abstract

In this paper, we introduce a general algorithm to approximate common fixed points for a countable family of nonexpansive mappings in a real Hilbert space, which solves a corresponding variational inequality. Furthermore, we propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem. Our results improve and generalize some known results in the current literature.

MSC: 47H10, 37C25.

Keywords

  • countable family of nonexpansive mappings
  • fixed point
  • strong convergence
  • constrained convex minimization problem
  • projection method

1 Introduction

A viscosity approximation method for finding fixed points of nonexpansive mappings was first proposed by Moudafi in 2000 [1]. He proved the convergence of the sequence generated by the proposed method. In 2004, Xu [2] proved the strong convergence of the sequence generated by the viscosity approximation method to a unique solution of a certain variational inequality problem defined on the set of fixed points of a nonexpansive map.

It is well known that the iterative methods for finding fixed points of nonexpansive mappings can also be used to solve a convex minimization problem; see, for example, [3–5] and the references therein. In 2003, Xu [4] introduced an iterative method for computing the approximate solutions of a quadratic minimization problem over the set of fixed points of a nonexpansive mapping defined on a real Hilbert space. He proved that the sequence generated by the proposed method converges strongly to the unique solution of the quadratic minimization problem. By combining the iterative schemes proposed by Moudafi [1] and Xu [4], Marino and Xu [6] considered a general iterative method and proved that the sequence generated by the method converges strongly to a unique solution of a certain variational inequality problem, which is the optimality condition for a particular minimization problem. Liu [7] and Qin et al. [8] also studied some applications of the iterative method considered in [6]. Yamada [5] introduced the so-called hybrid steepest-descent method for solving the variational inequality problem and also studied the convergence of the sequence generated by the proposed method. Very recently, Tian [9] combined the iterative methods of [5, 6] in order to propose implicit and explicit schemes for constructing a fixed point of a nonexpansive mapping T defined on a real Hilbert space. He also proved the strong convergence of these two schemes to a fixed point of T under appropriate conditions. Related iterative methods for solving fixed point problems, variational inequalities and optimization problems can be found in [10–15] and the references therein.

On the other hand, the gradient-projection method for finding the approximate solutions of the constrained convex minimization problem is well known; see, for example, [16] and the references therein. The convergence of the sequence generated by this method depends on the behavior of the gradient of the objective function. If the gradient fails to be strongly monotone, then the strong convergence of the sequence generated by the gradient-projection method may fail. Very recently, Xu [17] gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely, an averaged mapping approach. Moreover, he constructed a counterexample which shows that the sequence generated by the gradient-projection method does not converge strongly in the setting of an infinite-dimensional space. He also presented two modifications of gradient-projection algorithms which are shown to have strong convergence. Further, he regularized the minimization problem to derive an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of the constrained convex minimization problem in the consistent case. The related methods and results can be found in [18–26] and the references therein. By virtue of projections, the authors in [27] extended the implicit and explicit iterative schemes proposed in [9].

The purpose of this paper is to introduce a general algorithm to approximate common fixed points for a countable family of nonexpansive mappings in a real Hilbert space. We prove strong convergence theorems for the sequences produced by the methods to a common fixed point of a countable family of nonexpansive mappings, which is the unique solution of a corresponding variational inequality. We also propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem. Our results improve and generalize some known results in the current literature; see, for example, [27, 28].

2 Preliminaries

Throughout this paper, we denote the set of real numbers and the set of positive integers by ℝ and ℕ, respectively. Let H be a real Hilbert space, and let C be a nonempty subset of H. Let T : C → C be a mapping. We denote by F(T) the set of fixed points of T, i.e., F(T) = {x ∈ C : Tx = x}.

Definition 2.1 (i) A mapping T : C → C is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y in C. T is said to be quasi-nonexpansive if F(T) ≠ ∅ and ‖Tx − y‖ ≤ ‖x − y‖ for all x in C and y in F(T).
(ii) A mapping T : H → H is said to be an averaged mapping [29] if it can be written as the average of the identity I and a nonexpansive mapping; that is,
T = (1 − α)I + αS,
(2.1)
where α is a number in (0, 1), and S : H → H is nonexpansive. More precisely, when (2.1) holds, we say that T is α-averaged.
(iii) A mapping B : C → H is said to be L-Lipschitzian if ‖Bx − By‖ ≤ L‖x − y‖, ∀x, y ∈ C, where L ≥ 0 is a constant. In particular, if L ∈ [0, 1), then B is called a contraction on C; if L = 1, then B is nonexpansive.
(iv) A mapping V : D(V) ⊆ H → H is called firmly nonexpansive [30] if
⟨x − y, Vx − Vy⟩ ≥ ‖Vx − Vy‖², ∀x, y ∈ D(V),

where D(V) is the domain of V.

Clearly, a firmly nonexpansive mapping is a 1/2-averaged map.

Proposition 2.1 [31, 32]

Let H be a real Hilbert space, and let S, T, B : H → H be mappings.
(a) If T = (1 − α)S + αB for some α in (0, 1), and if S is averaged, and B is nonexpansive, then T is averaged.
(b) T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.
(c) If T = (1 − α)S + αB for some α in (0, 1), and if S is firmly nonexpansive, and B is nonexpansive, then T is averaged.
Recall that the metric (or nearest point) projection from H onto C is the mapping P_C : H → C, which assigns to each point x in H the unique point P_Cx in C satisfying the property
‖x − P_Cx‖ = d(x, C) := inf_{y∈C} ‖x − y‖.

Lemma 2.1 [30]

Let H be a real Hilbert space. For given x in H:
(a) z = P_Cx if and only if ⟨z − x, z − y⟩ ≤ 0, ∀y ∈ C;
(b) z = P_Cx if and only if ‖x − z‖² ≤ ‖x − y‖² − ‖y − z‖², ∀y ∈ C;
(c) ⟨P_Cx − P_Cy, x − y⟩ ≥ ‖P_Cx − P_Cy‖², ∀x, y ∈ H.

Consequently, P_C is nonexpansive and monotone.

In general, a projection mapping is firmly nonexpansive and, thus, a 1/2-averaged map.
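Since these properties of P_C are stated abstractly, a quick numerical illustration may help. The sketch below (our own illustration, not part of the paper) checks the firm nonexpansiveness inequality ⟨x − y, P_Cx − P_Cy⟩ ≥ ‖P_Cx − P_Cy‖² of Lemma 2.1(c) for the projection onto a closed Euclidean ball in the plane; the helper names and the sampled points are hypothetical choices.

```python
import math
import random

def proj_ball(x, radius=1.0):
    """Metric projection of a 2-D point onto the closed ball of the given radius."""
    n = math.hypot(*x)
    if n <= radius:
        return x
    return (x[0] * radius / n, x[1] * radius / n)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    y = (random.uniform(-3, 3), random.uniform(-3, 3))
    px, py = proj_ball(x), proj_ball(y)
    d = (px[0] - py[0], px[1] - py[1])
    # firm nonexpansiveness: <x - y, P_C x - P_C y> >= ||P_C x - P_C y||^2
    assert dot((x[0] - y[0], x[1] - y[1]), d) >= dot(d, d) - 1e-12
```

By Cauchy-Schwarz, the same inequality immediately gives ‖P_Cx − P_Cy‖ ≤ ‖x − y‖, i.e., nonexpansiveness.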

Lemma 2.2 The following inequality holds in an inner product space X:
‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ X.

Lemma 2.3 [33]

In a Hilbert space H, we have
‖λx + (1 − λ)y‖² = λ‖x‖² + (1 − λ)‖y‖² − λ(1 − λ)‖x − y‖², ∀x, y ∈ H and λ ∈ [0, 1].

Lemma 2.4 (Demiclosedness principle [33])

Let T : C → C be a nonexpansive mapping with F(T) ≠ ∅. If {x_n} is a sequence in C that converges weakly to x, and if {(I − T)x_n} converges strongly to y, then (I − T)x = y; in particular, if y = 0, then x ∈ F(T).

Definition 2.2 Let H be a real Hilbert space. A nonlinear operator T, whose domain D(T) ⊆ H and range R(T) ⊆ H, is said to be:
(a) monotone if ⟨x − y, Tx − Ty⟩ ≥ 0, ∀x, y ∈ D(T);
(b) η-strongly monotone if there exists η > 0 such that ⟨x − y, Tx − Ty⟩ ≥ η‖x − y‖², ∀x, y ∈ D(T);
(c) α-inverse strongly monotone (for short, α-ism) if there exists α > 0 such that ⟨x − y, Tx − Ty⟩ ≥ α‖Tx − Ty‖², ∀x, y ∈ D(T).

It can easily be seen that (i) if T is nonexpansive, then I − T is monotone; (ii) the projection map P_C is a 1-ism. Inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems; see, for example, [34, 35] and the references therein.

Proposition 2.2 [31]

Let T : H → H be an operator.
(a) T is nonexpansive if and only if the complement I − T is 1/2-ism.
(b) If T is ν-ism, then γT is (ν/γ)-ism for γ > 0.
(c) T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α in (0, 1), T is α-averaged if and only if I − T is 1/(2α)-ism.

Lemma 2.5 [27]

Let B : C → H be an L-Lipschitzian mapping with constant L ≥ 0, and let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Then for 0 ≤ γL < μη, we have
⟨x − y, (μA − γB)x − (μA − γB)y⟩ ≥ (μη − γL)‖x − y‖², ∀x, y ∈ C.

That is, μA − γB is strongly monotone with coefficient μη − γL.

The following lemma plays a key role in proving strong convergence of our iterative schemes.

Lemma 2.6 [[5], Lemma 3.1]

Suppose that λ ∈ (0, 1) and μ, κ, η > 0. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator on C. In association with a nonexpansive mapping T : C → C, define the mapping T^λ : C → H by
T^λx := Tx − λμA(Tx), x ∈ C.
Then T^λ is a contraction provided μ < 2η/κ², that is,
‖T^λx − T^λy‖ ≤ (1 − λτ)‖x − y‖, ∀x, y ∈ C,

where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].

Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let B : C → H be an L-Lipschitzian mapping with constant L ≥ 0. Assume that T : C → C is a nonexpansive mapping with F(T) ≠ ∅. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Let the net {x_t}_{t∈(0,1)} be generated by the following implicit scheme:
x_t = P_C[tγBx_t + (I − tμA)Tx_t].
(2.2)
Then {x_t}_{t∈(0,1)} converges strongly to a fixed point x̃ of T, which solves the variational inequality
⟨(γB − μA)x̃, x̃ − z⟩ ≥ 0, ∀z ∈ F(T).
(2.3)
Let the sequence {x_n}_{n∈ℕ} be generated by the following explicit scheme:
x_{n+1} = P_C[α_nγBx_n + (I − α_nμA)Tx_n].
(2.4)

Then {x_n}_{n∈ℕ} converges strongly to a fixed point x̃ of T, which is also a solution of the variational inequality (2.3); see [27] for more details.

Consider a self-mapping S_t on C defined by
S_tx = P_C[tγBx + (I − tμA)Tx], x ∈ C.
(2.5)

Then S_t is a contraction, and it has a unique fixed point in C, which uniquely solves the fixed point equation (2.2); see [27] for more details.

Proposition 2.3 [[27], Proposition 3.1]

Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let B : C → H be an L-Lipschitzian mapping with constant L ≥ 0. Assume that T : C → C is a nonexpansive mapping with F(T) ≠ ∅. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). For each t in (0, 1), let x_t denote the unique solution of the fixed point equation (2.2). Then the following properties hold for the net {x_t}_{t∈(0,1)}:
(1) {x_t}_{t∈(0,1)} is bounded;
(2) lim_{t→0} ‖x_t − Tx_t‖ = 0;
(3) t ↦ x_t defines a continuous curve from (0, 1) into C.

Theorem 2.1 [[27], Theorem 3.1]

Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let B : C → H be an L-Lipschitzian mapping with constant L ≥ 0. Assume T : C → C is a nonexpansive mapping with F(T) ≠ ∅. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). For each t in (0, 1), let x_t denote the unique solution of the fixed point equation (2.2). Then the net {x_t} converges strongly, as t → 0, to a fixed point x̃ of T, which solves the variational inequality (2.3), or equivalently, P_{F(T)}(I − μA + γB)x̃ = x̃.

Lemma 2.7 [36]

Let {s_n}, {γ_n} be sequences of nonnegative real numbers satisfying the inequality
s_{n+1} ≤ (1 − γ_n)s_n + γ_nδ_n, n ≥ 0.
Suppose that {γ_n} and {δ_n} satisfy the conditions:
(i) {γ_n} ⊂ [0, 1] and ∑_{n=0}^∞ γ_n = ∞, or equivalently, ∏_{n=0}^∞ (1 − γ_n) = 0;
(ii) lim sup_{n→∞} δ_n ≤ 0, or
(ii)′ ∑_{n=0}^∞ γ_nδ_n < ∞.

Then lim_{n→∞} s_n = 0.
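To see Lemma 2.7 in action, consider the scalar toy recursion below (our own illustration, not from the paper; the choices γ_n = δ_n = 1/n are hypothetical and are made only so that conditions (i) and (ii) hold).

```python
# s_{n+1} = (1 - g_n) s_n + g_n d_n with g_n = d_n = 1/n:
# g_n lies in [0, 1], sum g_n diverges, and limsup d_n = 0,
# so Lemma 2.7 predicts s_n -> 0 regardless of the starting value.
s = 10.0
for n in range(1, 200_000):
    g = d = 1.0 / n
    s = (1.0 - g) * s + g * d
print(s)  # small; here s_n decays roughly like (log n)/n
```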

Lemma 2.8 [37]

Let {β_n} be a sequence of real numbers with
0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1.
Let {x_n} and {z_n} be two sequences in a Banach space E such that
x_{n+1} = (1 − β_n)x_n + β_nz_n, n ≥ 1.
If
lim sup_{n→∞} (‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0,

then lim_{n→∞} ‖x_n − z_n‖ = 0.

Let C be a subset of a real Banach space E, and let {T_n}_{n=1}^∞ be a family of mappings of C such that ⋂_{n=1}^∞ F(T_n) ≠ ∅. Then {T_n}_{n=1}^∞ is said to satisfy the AKTT-condition [38] if for each bounded subset D of C, we have
∑_{n=1}^∞ sup{‖T_{n+1}z − T_nz‖ : z ∈ D} < ∞.

Lemma 2.9 [38]

Let C be a nonempty subset of a real Banach space E, and let {T_n}_{n=1}^∞ be a family of mappings of C into itself, which satisfies the AKTT-condition. Then for each x in C, the sequence {T_nx}_{n=1}^∞ converges strongly to a point in C. Moreover, let the mapping T be defined by
Tx = lim_{n→∞} T_nx, x ∈ C.
Then for each bounded subset D of C, we have
lim_{n→∞} sup{‖T_nz − Tz‖ : z ∈ D} = 0.

In the sequel, we will write that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition if {T_n}_{n=1}^∞ satisfies the AKTT-condition and T is defined by Lemma 2.9.

We end this section with the following simple examples of mappings satisfying the AKTT-condition (see also Lemma 5.2).

Example 2.1 (i) Let E be any Banach space. For any n ∈ ℕ, let a mapping T_n : E → E be defined by
T_n(x) = x/n (x ∈ E).
Then T_n is a nonexpansive mapping for each n ∈ ℕ. It can easily be seen that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition, where T(x) = 0 for all x ∈ E.
(ii) Let E = l², where
l² = {σ = (σ₁, σ₂, …, σ_n, …) : ∑_{n=1}^∞ σ_n² < ∞}, ‖σ‖ = (∑_{n=1}^∞ σ_n²)^{1/2} for σ ∈ l², and ⟨σ, η⟩ = ∑_{n=1}^∞ σ_nη_n for σ = (σ₁, σ₂, …, σ_n, …), η = (η₁, η₂, …, η_n, …) ∈ l².
Let {x_n}_{n∈ℕ∪{0}} ⊂ E be a sequence defined by
x₀ = (1, 0, 0, 0, …), x₁ = (1, 1, 0, 0, 0, …), x₂ = (1, 0, 1, 0, 0, 0, …), x₃ = (1, 0, 0, 1, 0, 0, 0, …), …, x_n = (σ_{n,1}, σ_{n,2}, …, σ_{n,k}, …), …,
where
σ_{n,k} = 1 if k = 1 or k = n + 1, and σ_{n,k} = 0 otherwise,
for all n ∈ ℕ. It is clear that the sequence {x_n}_{n∈ℕ} converges weakly to x₀. Indeed, for any Λ = (λ₁, λ₂, …, λ_n, …) ∈ l² = (l²)*, we have
Λ(x_n − x₀) = ⟨x_n − x₀, Λ⟩ = ∑_{k=2}^∞ λ_kσ_{n,k} → 0
as n → ∞. It is also obvious that ‖x_n − x_m‖ = √2 for any n ≠ m. Thus, {x_n}_{n∈ℕ} is not a Cauchy sequence. We define a countable family of mappings T_j : E → E by
T_j(x) = (n/(n + 1))x if x = x_n; T_j(x) = (j/(j + 1))x if x ≠ x_n,

for all j ≥ 1 and n ≥ 0. It is clear that F(T_j) = {0} for all j ≥ 1, and it is obvious that T_j is a quasi-nonexpansive mapping for each j ∈ ℕ. Thus, {T_j}_{j∈ℕ} is a countable family of quasi-nonexpansive mappings.

Let Tx = lim_{j→∞} T_jx for all x ∈ E. It is easy to see that
T(x) = (n/(n + 1))x if x = x_n; T(x) = x if x ≠ x_n.
Then we obtain that T is a quasi-nonexpansive mapping with F(T) = {0} = F̃(T). Let D be a bounded subset of E. Then there exists r > 0 such that D ⊂ B_r = {z ∈ E : ‖z‖ < r}. On the other hand, for any j ∈ ℕ, we have
∑_{j=1}^∞ sup{‖T_{j+1}z − T_jz‖ : z ∈ D} = ∑_{j=1}^∞ sup{‖((j + 1)/(j + 2))z − (j/(j + 1))z‖ : z ∈ D} = ∑_{j=1}^∞ (1/((j + 2)(j + 1))) sup{‖z‖ : z ∈ D} < ∞.
Furthermore, we have
lim_{j→∞} sup{‖T_jz − Tz‖ : z ∈ D} = 0.

Therefore, ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition.

3 Fixed point and convergence theorems

Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let P_C be the metric (or nearest point) projection from H onto C.

Theorem 3.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume {T_n}_{n=1}^∞ is a sequence of nonexpansive mappings from C into itself such that ⋂_{n=1}^∞ F(T_n) ≠ ∅. Suppose, in addition, that T : C → C is a nonexpansive mapping such that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition, and S : C → C is a nonexpansive mapping with F := ⋂_{n=1}^∞ F(T_n) ∩ F(S) ≠ ∅. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let B : C → H be an L-Lipschitzian mapping with constant L ≥ 0. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)).

For arbitrarily given x₁ in C, let the sequence {x_n} be generated iteratively by
y_n = P_C[α_nγBx_n + (I − α_nμA)x_n], x_{n+1} = (1 − β_n)x_n + β_nT_nSy_n, n ∈ ℕ,
(3.1)
where {α_n}, {β_n} are two real sequences in (0, 1) satisfying the following control conditions:
(a) lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞; (b) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1.
(3.2)
Then the sequence {x_n} converges strongly to x* ∈ F(TS), which solves the variational inequality
⟨(μA − γB)x*, x* − z⟩ ≤ 0, ∀z ∈ F(TS).
(3.3)

Proof We divide the proof into several steps.

Step I. We claim that the sequence {x_n} is bounded. Let p in F be fixed. In view of Lemma 2.6, we conclude that
‖(I − α_nμA)x_n − (I − α_nμA)p‖ ≤ (1 − α_nτ)‖x_n − p‖, ∀n ∈ ℕ.
This together with (3.1)-(3.3) implies that
‖y_n − p‖ = ‖P_C[α_nγBx_n + (I − α_nμA)x_n] − P_Cp‖ ≤ ‖α_nγBx_n + (I − α_nμA)x_n − p‖ = ‖α_n(γBx_n − μA(p)) + (I − α_nμA)x_n − (I − α_nμA)p‖ ≤ α_nγL‖x_n − p‖ + α_n‖(γB − μA)p‖ + (1 − α_nτ)‖x_n − p‖ = (1 − α_n(τ − γL))‖x_n − p‖ + α_n‖(γB − μA)p‖ ≤ max{‖x_n − p‖, ‖(γB − μA)p‖/(τ − γL)}.
(3.4)
Since T_n is nonexpansive for all n ∈ ℕ, it follows from (3.1) and (3.4) that
‖x_{n+1} − p‖ = ‖(1 − β_n)(x_n − p) + β_n(T_nSy_n − p)‖ ≤ (1 − β_n)‖x_n − p‖ + β_n‖T_nSy_n − p‖ ≤ (1 − β_n)‖x_n − p‖ + β_n‖y_n − p‖ ≤ (1 − β_n)‖x_n − p‖ + β_n max{‖x_n − p‖, ‖(γB − μA)p‖/(τ − γL)} ≤ max{‖x_n − p‖, ‖(γB − μA)p‖/(τ − γL)}.
(3.5)
By induction, we have that {x_n} is bounded. This implies that the sequences {Ax_n}, {Bx_n}, {y_n}, {Sy_n} and {T_nSy_n} are bounded too. Let
M = sup{‖x_n‖, ‖Ax_n‖, ‖Bx_n‖, ‖y_n‖, ‖Sy_n‖, ‖T_nSy_n‖ : n ∈ ℕ} < ∞,
and set
D = {z ∈ H : ‖z‖ ≤ M}.

Then D is a bounded subset of H, and {x_n}, {Ax_n}, {Bx_n}, {y_n}, {T_nSy_n} ⊂ D.

Step II. We claim that lim_{n→∞} ‖y_n − TSy_n‖ = 0. In view of (3.1), we obtain
‖y_n − x_n‖ = ‖P_C[α_nγBx_n + (I − α_nμA)x_n] − P_C[x_n]‖ ≤ ‖α_nγBx_n + (I − α_nμA)x_n − x_n‖ = ‖α_nγBx_n + x_n − α_nμAx_n − x_n‖ = α_n‖(γB − μA)x_n‖.
(3.6)
Since lim_{n→∞} α_n = 0, it follows from (3.6) that
lim_{n→∞} ‖y_n − x_n‖ = 0.
(3.7)
In view of Lemma 2.6, we conclude that
‖(I − α_{n+1}μA)x_{n+1} − (I − α_{n+1}μA)x_n‖ ≤ (1 − α_{n+1}τ)‖x_{n+1} − x_n‖, ∀n ∈ ℕ.
This implies that
‖y_{n+1} − y_n‖ = ‖P_C[α_{n+1}γBx_{n+1} + (I − α_{n+1}μA)x_{n+1}] − P_C[α_nγBx_n + (I − α_nμA)x_n]‖ ≤ ‖α_{n+1}γBx_{n+1} + (I − α_{n+1}μA)x_{n+1} − α_nγBx_n − (I − α_nμA)x_n‖ ≤ ‖α_{n+1}γ(Bx_{n+1} − Bx_n)‖ + γ|α_{n+1} − α_n|‖Bx_n‖ + ‖(I − α_{n+1}μA)x_{n+1} − (I − α_{n+1}μA)x_n‖ + μ|α_{n+1} − α_n|‖Ax_n‖ ≤ α_{n+1}γL‖x_{n+1} − x_n‖ + (1 − α_{n+1}τ)‖x_{n+1} − x_n‖ + γM|α_{n+1} − α_n| + μM|α_{n+1} − α_n| ≤ (1 − α_{n+1}(τ − γL))‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n| ≤ ‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n|.
(3.8)
Next, we show that lim_{n→∞} ‖x_{n+1} − x_n‖ = 0. For this purpose, we define a sequence {z_n} by z_n = T_nSy_n. It follows from (3.8) that
‖z_{n+1} − z_n‖ = ‖T_{n+1}Sy_{n+1} − T_nSy_n‖ ≤ ‖T_{n+1}Sy_{n+1} − T_{n+1}Sy_n‖ + ‖T_{n+1}Sy_n − T_nSy_n‖ ≤ ‖y_{n+1} − y_n‖ + ‖T_{n+1}Sy_n − T_nSy_n‖ ≤ ‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n| + sup{‖T_{n+1}z − T_nz‖ : z ∈ D}.
(3.9)
This implies that
‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖ ≤ (γ + μ)M|α_{n+1} − α_n| + sup{‖T_{n+1}z − T_nz‖ : z ∈ D}.
(3.10)
Since lim_{n→∞} α_n = 0, in view of the AKTT-condition and (3.2)(a), we conclude that
lim sup_{n→∞} (‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0.
Utilizing Lemma 2.8, we deduce that
lim_{n→∞} ‖z_n − x_n‖ = 0.
It follows from (3.1) and (3.2)(b) that
lim_{n→∞} ‖x_{n+1} − x_n‖ = lim_{n→∞} β_n‖z_n − x_n‖ = 0.
(3.11)
On the other hand, we have
‖y_n − T_nSy_n‖ ≤ ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + ‖x_{n+1} − T_nSy_n‖ = ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + (1 − β_n)‖x_n − T_nSy_n‖ ≤ ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + (1 − β_n)[‖x_n − y_n‖ + ‖y_n − T_nSy_n‖].
This implies that
‖y_n − T_nSy_n‖ ≤ (1/β_n)[2‖y_n − x_n‖ + ‖x_n − x_{n+1}‖].
(3.12)
In view of (3.7) and (3.11)-(3.12), we obtain
lim_{n→∞} ‖y_n − T_nSy_n‖ = 0.
(3.13)
By the triangle inequality, we obtain
‖y_n − TSy_n‖ ≤ ‖y_n − T_nSy_n‖ + ‖T_nSy_n − TSy_n‖ ≤ ‖y_n − T_nSy_n‖ + sup{‖T_nz − Tz‖ : z ∈ D}.
(3.14)
In view of the AKTT-condition and (3.13)-(3.14), we deduce that
lim_{n→∞} ‖y_n − TSy_n‖ = 0.
Step III. We prove that there exists x* ∈ F(TS) such that
lim sup_{n→∞} ⟨(μA − γB)x*, x* − y_n⟩ ≤ 0.
For each t ∈ (0, 1), we define the mapping S_t : C → C by
S_t(x) = P_C[tγBx + (I − tμA)TSx], x ∈ C.
Since S, T and I − tμA are nonexpansive for each t ∈ (0, 1), in view of (2.5) we conclude that S_t is a contraction for each t ∈ (0, 1); hence, by the Banach contraction principle, there exists a unique fixed point x_t ∈ C such that S_t(x_t) = x_t. Thus, we have
x_t = P_C[tγBx_t + (I − tμA)TSx_t].
(3.15)
Next, we show that x* := lim_{t→0} x_t exists. We first show that {x_t} is bounded. To this end, let p ∈ F be fixed. In view of Lemma 2.6, we obtain
‖x_t − p‖ = ‖P_C[tγBx_t + (I − tμA)TSx_t] − P_Cp‖ ≤ ‖tγBx_t + (I − tμA)TSx_t − p‖ = ‖t(γBx_t − μA(p)) + (I − tμA)TSx_t − (I − tμA)p‖ ≤ tγL‖x_t − p‖ + t‖(γB − μA)p‖ + (1 − tτ)‖x_t − p‖ = (1 − t(τ − γL))‖x_t − p‖ + t‖(γB − μA)p‖.
This implies that
‖x_t − p‖ ≤ ‖(γB − μA)p‖/(τ − γL).
Thus, {x_t}_{t∈(0,1)} is bounded, and so are {ATSx_t}_{t∈(0,1)} and {(γB − μA)x_t}_{t∈(0,1)}. In view of (3.15), we obtain
‖x_t − TSx_t‖ = ‖P_C[tγBx_t + (I − tμA)TSx_t] − P_C(TSx_t)‖ ≤ ‖tγBx_t + (I − tμA)TSx_t − TSx_t‖ = t‖γBx_t − μA(TSx_t)‖.
This implies that
lim_{t→0+} ‖x_t − TSx_t‖ = 0.
(3.16)
Using the techniques of the proof of Theorem 2.1, we see that the variational inequality (3.3) has a unique solution x̃ ∈ F(TS). We show that x_t → x̃ as t → 0. To this end, set
y_t = tγBx_t + (I − tμA)TSx_t, t ∈ (0, 1).
Then we have x_t = P_Cy_t, and for any given z ∈ F(TS),
x_t − z = P_Cy_t − y_t + y_t − z = P_Cy_t − y_t + t(γBx_t − μAz) + (I − tμA)TSx_t − (I − tμA)TSz.
(3.17)
Since P_C is the metric projection from H onto C, for each z ∈ F(TS) we have
⟨P_Cy_t − y_t, P_Cy_t − z⟩ ≤ 0.
Exploiting Lemma 2.1 and (3.17), we obtain
‖x_t − z‖² = ⟨x_t − z, x_t − z⟩ = ⟨P_Cy_t − y_t, P_Cy_t − z⟩ + ⟨(I − tμA)TSx_t − (I − tμA)z, x_t − z⟩ + t⟨γBx_t − μAz, x_t − z⟩ ≤ (1 − tτ)‖x_t − z‖² + t⟨γBx_t − μAz, x_t − z⟩.
(3.18)
It follows from (3.18) that
‖x_t − z‖² ≤ (1/τ)⟨γBx_t − μAz, x_t − z⟩ = (1/τ)[⟨γBx_t − γBz, x_t − z⟩ + ⟨γBz − μAz, x_t − z⟩] ≤ (1/τ)[γL‖x_t − z‖² + ⟨γBz − μAz, x_t − z⟩].
This implies that
‖x_t − z‖² ≤ (1/(τ − γL))⟨γBz − μAz, x_t − z⟩.
(3.19)
Let {t_n} ⊂ (0, 1) be such that t_n → 0+ as n → ∞, and let x_n := x_{t_n}. It follows from (3.16) that lim_{n→∞} ‖x_n − TSx_n‖ = 0. The boundedness of {x_t} implies that there exists x* ∈ C such that x_n ⇀ x* (weakly) as n → ∞. In view of Lemma 2.4, we deduce that x* ∈ F(TS). Since x_n ⇀ x* as n → ∞, it follows from (3.19) with z = x* that lim_{n→∞} ‖x_n − x*‖ = 0. Thus, x* = lim_{t→0+} x_t is well defined. Next, we show that x* solves the variational inequality (3.3). Observe that
x_t = P_Cy_t = P_Cy_t − y_t + tγBx_t + (I − tμA)TSx_t.
Thus, we have
(μA − γB)x_t = (1/t)(P_Cy_t − y_t) − (1/t)(I − TS)x_t + μ(Ax_t − ATSx_t).
Since TS is nonexpansive, we conclude that I − TS is monotone. The property of the metric projection implies that
⟨(γB − μA)x_t, x_t − z⟩ = −(1/t)⟨P_Cy_t − y_t, x_t − z⟩ + (1/t)⟨(I − TS)x_t − (I − TS)z, x_t − z⟩ − μ⟨Ax_t − ATSx_t, x_t − z⟩ ≥ −μ⟨Ax_t − ATSx_t, x_t − z⟩ ≥ −μκ‖x_t − TSx_t‖‖x_t − z‖.
(3.20)
Replacing t by t_n in (3.20), letting n → ∞, and noticing that {‖x_t − z‖}_{t∈(0,1)} is bounded for z ∈ F(TS), from (3.16) we obtain
⟨(γB − μA)x*, x* − z⟩ ≥ 0.
(3.21)
Thus, x* ∈ F(TS) is a solution of the variational inequality (3.3); consequently, x* = x̃ by uniqueness. Therefore, x_t → x̃ as t → 0. The variational inequality (3.3) can be written as
⟨(I − μA + γB)x̃ − x̃, x̃ − z⟩ ≥ 0, ∀z ∈ F(TS).
So, in terms of Lemma 2.1, it is equivalent to the following fixed point equation:
P_{F(TS)}(I − μA + γB)x̃ = x̃.
Since {y_n} is bounded, for any subsequence of {y_n} there exists a further subsequence {y_{n_i}} such that y_{n_i} ⇀ u ∈ C. In view of Lemma 2.4 and Step II, we conclude that u ∈ F(TS). This together with (3.21) implies that
lim sup_{n→∞} ⟨μAx* − γBx*, x* − y_n⟩ = lim_{i→∞} ⟨μAx* − γBx*, x* − y_{n_i}⟩ = ⟨μAx* − γBx*, x* − u⟩ ≤ 0.

Step IV. We claim that lim_{n→∞} ‖x_n − x*‖ = 0.

For each n ∈ ℕ ∪ {0}, we set
v_n = α_nγBx_n + (I − α_nμA)TSx_n
and observe that y_n = P_Cv_n. Then, by Lemmas 2.1 and 2.5, we obtain
‖y_n − x*‖² = ⟨y_n − x*, y_n − x*⟩ = ⟨P_Cv_n − v_n, P_Cv_n − x*⟩ + ⟨v_n − x*, y_n − x*⟩ ≤ ⟨v_n − x*, y_n − x*⟩ = α_n⟨γBx_n − μA(x*), y_n − x*⟩ + ⟨(I − α_nμA)TSx_n − (I − α_nμA)TSx*, y_n − x*⟩ = α_nγ⟨Bx_n − Bx*, y_n − x*⟩ + α_n⟨(γB − μA)x*, y_n − x*⟩ + ⟨(I − α_nμA)TSx_n − (I − α_nμA)TSx*, y_n − x*⟩ ≤ α_nγL‖x_n − x*‖‖y_n − x*‖ + α_n⟨(γB − μA)x*, y_n − x*⟩ + (1 − α_nτ)‖x_n − x*‖‖y_n − x*‖ = (1 − α_n(τ − γL))‖x_n − x*‖‖y_n − x*‖ + α_n⟨(γB − μA)x*, y_n − x*⟩ ≤ (1 − α_n(τ − γL))·(1/2)(‖x_n − x*‖² + ‖y_n − x*‖²) + α_n⟨(γB − μA)x*, y_n − x*⟩.
(3.22)
This implies that
‖y_n − x*‖² ≤ [(1 − α_n(τ − γL))/(1 + α_n(τ − γL))]‖x_n − x*‖² + [2α_n/(1 + α_n(τ − γL))]⟨(γB − μA)x*, y_n − x*⟩ ≤ (1 − α_n(τ − γL))‖x_n − x*‖² + [2α_n/(1 + α_n(τ − γL))]⟨(γB − μA)x*, y_n − x*⟩ = (1 − α_n(τ − γL))‖x_n − x*‖² + α_nθ_nξ_n,
(3.23)
where
θ_n = τ − γL and ξ_n = [2/((1 + α_n(τ − γL))(τ − γL))]⟨(γB − μA)x*, y_n − x*⟩.
In view of (3.22) and (3.23), we conclude that
‖x_{n+1} − x*‖² = ‖(1 − β_n)x_n + β_nT_nSy_n − x*‖² ≤ (1 − β_n)‖x_n − x*‖² + β_n‖T_nSy_n − x*‖² ≤ (1 − β_n)‖x_n − x*‖² + β_n‖y_n − x*‖² ≤ (1 − β_n)‖x_n − x*‖² + β_n[(1 − α_n(τ − γL))‖x_n − x*‖² + α_nθ_nξ_n] = (1 − β_nα_n(τ − γL))‖x_n − x*‖² + β_nα_nθ_nξ_n = (1 − β_nα_n(τ − γL))‖x_n − x*‖² + β_nα_n(τ − γL)·[θ_nξ_n/(τ − γL)],
(3.24)

where γ_n = β_nα_n(τ − γL). It is easy to show that lim_{n→∞} γ_n = 0, ∑_{n=1}^∞ γ_n = ∞ and lim sup_{n→∞} ξ_n ≤ 0. Hence, in view of Lemma 2.7 and (3.24), we conclude that the sequence {x_n} converges strongly to x* ∈ F(TS). □
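To make the scheme (3.1) concrete, the following toy run (our own illustration; every concrete choice here — the rotation angle, C the closed unit ball, the parameter sequences — is hypothetical) takes T_n = T = S = a planar rotation, A = I, B = 0 and μ = 1, so that κ = η = 1, τ = 1, F(TS) = {0}, and (3.1) collapses to y_n = P_C[(1 − α_n)x_n], x_{n+1} = (1 − β_n)x_n + β_nTSy_n.

```python
import math

def rot(v, theta=0.7):
    """Planar rotation: nonexpansive, with unique fixed point 0."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def proj_C(v):                       # P_C for C = closed unit ball
    n = math.hypot(*v)
    return v if n <= 1.0 else (v[0] / n, v[1] / n)

x = (0.9, -0.4)
for n in range(1, 2000):
    a, b = 1.0 / n, 0.5              # alpha_n -> 0 with divergent sum; beta_n = 1/2
    y = proj_C(((1.0 - a) * x[0], (1.0 - a) * x[1]))   # y_n = P_C[(I - a_n*mu*A) x_n]
    t = rot(rot(y))                  # T(S y_n) with T = S = the same rotation
    x = ((1.0 - b) * x[0] + b * t[0], (1.0 - b) * x[1] + b * t[1])
print(math.hypot(*x))                # approaches 0, the unique point of F(TS)
```

In agreement with Theorem 3.1, the iterates are driven to the common fixed point 0 even though the rotation itself is not a contraction.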

Remark 3.1 Theorem 3.1 improves and extends [[28], Theorems 3.1 and 3.2] in the following aspects.
(i) The identity mapping I is extended to an operator A : C → H, where A is a κ-Lipschitzian and η-strongly monotone (possibly nonself-) mapping.
(ii) In order to find a common fixed point of a countable family of nonexpansive self-mappings T_n : C → C, the Mann-type iterations in [[28], Theorem 3.2] are extended to the new Mann-type iteration (3.1).
(iii) A new technique of argument is applied in deriving Theorem 3.1. For instance, the characteristic properties (Lemma 2.1) of the metric projection P_C play an important role in proving the strong convergence of the net {x_t}_{t∈(0,1)} in Theorem 3.1.
(iv) Whenever C = H, B = 0, A = I (the identity mapping on C) and μ = 1, Theorem 3.1 reduces to [[28], Theorem 3.2]. Thus, Theorem 3.1 covers [[28], Theorems 3.1 and 3.2] as special cases.

Remark 3.2 In Theorem 3.1, it is shown that any sequence generated by the iterative scheme (3.1) converges strongly to the unique solution of the variational inequality problem (3.3). This problem is more general than many variational inequality problems (see, for example, [27]), since S is an arbitrary nonexpansive mapping. Moreover, by the well-known relations between fixed points of nonexpansive mappings and variational inequalities, the solution set of (3.3) can be seen as the fixed point set of some nonexpansive mapping V, and this mapping could then be added to the countable family of nonexpansive mappings T_n. In addition, the feasible set of the variational inequality problem (3.3) is Fix(TS), with T and S being nonexpansive mappings. For several subclasses of nonexpansive mappings, Fix(TS) = Fix(T) ∩ Fix(S) holds (see, e.g., [31, 32] for averaged mappings).

4 Constrained convex minimization problems

Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Consider the following constrained convex minimization problem:
minimize {f(x) : x ∈ C},
(4.1)
where f : C → ℝ is a real-valued convex function. If f is Fréchet differentiable, then the gradient-projection method (for short, GPM) generates a sequence {x_n} using the following recursive formula:
x_{n+1} = P_C(x_n − λ∇f(x_n)), n ≥ 0,
(4.2)
or, more generally,
x_{n+1} = P_C(x_n − λ_n∇f(x_n)), n ≥ 0,
(4.3)
where in both (4.2) and (4.3), the initial guess x₀ is taken from C arbitrarily, and the parameters, λ or λ_n, are positive real numbers. The convergence of algorithms (4.2) and (4.3) depends on the behavior of the gradient ∇f. As a matter of fact, it is known that if ∇f is α-strongly monotone and L-Lipschitzian with constants α, L > 0, then, for 0 < λ < 2α/L², the operator
T := P_C(I − λ∇f)
(4.4)
is a contraction; hence, the sequence {x_n} defined by algorithm (4.2) converges in norm to the unique solution of the minimization problem (4.1). More generally, if the sequence {λ_n} is chosen to satisfy the property
0 < lim inf_{n→∞} λ_n ≤ lim sup_{n→∞} λ_n < 2α/L²,
(4.5)

then the sequence {x_n} defined by algorithm (4.3) converges in norm to the unique minimizer of (4.1).
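As an illustration of the strongly monotone case (our own toy example, not from the paper; the objective, the box constraint and the step size are hypothetical choices), take f(x) = ½‖x − c‖² over a box, so that ∇f(x) = x − c has α = L = 1 and any fixed λ ∈ (0, 2) yields norm convergence of (4.2) to P_C(c):

```python
def clamp(t, lo=0.0, hi=1.0):
    return max(lo, min(hi, t))

def proj_box(x):                     # P_C for the box C = [0, 1]^2
    return (clamp(x[0]), clamp(x[1]))

c = (2.0, -0.5)                      # unconstrained minimizer, lying outside C
lam = 1.5                            # any fixed step in (0, 2*alpha/L^2) = (0, 2)
x = (0.0, 0.0)
for _ in range(200):
    # x_{n+1} = P_C(x_n - lam * grad f(x_n)),  grad f(x) = x - c
    x = proj_box((x[0] - lam * (x[0] - c[0]), x[1] - lam * (x[1] - c[1])))
print(x)  # converges to P_C(c) = (1.0, 0.0)
```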

However, if the gradient ∇f fails to be strongly monotone, the operator T defined by (4.4) may fail to be contractive; consequently, the sequence {x_n} generated by algorithm (4.2) may fail to converge strongly (see [[17], Section 4]). If ∇f is Lipschitzian, then algorithms (4.2) and (4.3) can still converge in the weak topology under certain conditions.

Very recently, Xu [17] gave an alternative operator-oriented approach to algorithm (4.3), namely, an averaged mapping approach. He applied this averaged mapping approach to the gradient-projection algorithm (4.3) and to the relaxed gradient-projection algorithm. Moreover, he constructed a counterexample, which shows that algorithm (4.2) does not converge in norm in an infinite-dimensional space, and also presented two modifications of gradient-projection algorithms, which are shown to have strong convergence. Further, he regularized the minimization problem (4.1) to devise an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of (4.1) in the consistent case.

Let A : C H be a κ-Lipschitzian and η-strongly monotone operator with constants κ , η > 0 , and let B : C H be an l-Lipschitzian mapping with constant l 0 . Suppose that 0 < μ < 2 η κ 2 and 0 γ l < τ , where τ = 1 μ ( 2 η μ κ 2 ) . Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient f is L-Lipschitzian with constant L > 0 . Motivated by the work of Xu [17], the authors of [27] introduced the following implicit scheme that generates a net { x λ } λ ( 0 , 2 L ) in an implicit way:
x λ = P C [ s γ B x λ + ( I s μ A ) T λ x λ ] ,
(4.6)
where T λ and s satisfy the following conditions:
  1. (i)

    s := s(λ) = (2 − λL)/4 for each λ ∈ (0, 2/L);

     
  2. (ii)

    P_C(I − λ∇f) = sI + (1 − s)T_λ for each λ ∈ (0, 2/L).

     

They proved that { x λ } λ ( 0 , 2 L ) converges strongly to a minimizer x in Ω of (4.1), which solves the variational inequality (2.3).

For a given arbitrary initial guess x 0 in C and a sequence { λ n } ( 0 , 2 L ) with λ n 2 L , they also proposed the following explicit scheme that generates a sequence { x n } in an explicit way:
$$x_{n+1} = P_C\bigl[s_n\gamma Bx_n + (I - s_n\mu A)T_nx_n\bigr], \quad n \ge 0,$$
(4.7)

where s_n = (2 − λ_nL)/4 and P_C(I − λ_n∇f) = s_nI + (1 − s_n)T_n for each n ≥ 0. It is proven in [27] that the sequence {x_n} converges strongly to a minimizer x* in Ω of (4.1).
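To see an explicit scheme of this kind in action, here is a minimal one-dimensional run with choices entirely our own (H = ℝ, C = [0, 1], f(x) = ½(x − ½)² so that L = 1, A = I with κ = η = 1, B = 0, μ = 1, γ = 0); T_n is recovered from the decomposition P_C(I − λ_n∇f) = s_nI + (1 − s_n)T_n. This is an illustrative sketch, not the setting of [27].

```python
def clip(x):                          # P_C for C = [0, 1]
    return min(1.0, max(0.0, x))

def gpa(x, lam):                      # P_C(I - lam*grad f)x with grad f(x) = x - 0.5
    return clip(x - lam * (x - 0.5))

x = 0.0
for n in range(1, 100001):
    lam = 2.0 - 1.0 / (n + 1)         # lam_n -> 2/L, so s_n -> 0 and sum s_n = inf
    s = (2.0 - lam) / 4.0
    T = (gpa(x, lam) - s * x) / (1.0 - s)   # T_n from the averaged-map decomposition
    x = clip((1.0 - s) * T)           # the explicit scheme with gamma*B = 0, mu*A = I
print(abs(x - 0.5) < 1e-2)            # iterates approach the minimizer 1/2
```

The slow, oscillatory approach to ½ is typical: the contraction factor per step tends to 1 as λ_n → 2/L, and convergence is driven by the divergence of Σ s_n.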

On the other hand, we know that x in C solves the minimization problem (4.1) if and only if x solves the fixed point equation
$$x = P_C(I - \lambda\nabla f)x,$$
where λ > 0 is any fixed positive number. Note that the gradient ∇f being L-Lipschitzian implies that ∇f is 1/L-ism [39], which in turn implies that λ∇f is 1/(λL)-ism. So by Proposition 2.2(c), I − λ∇f is (λL/2)-averaged. Now since the projection P_C is ½-averaged, we know from Proposition 2.2(c) that the composite P_C(I − λ∇f) is (2 + λL)/4-averaged for each λ ∈ (0, 2/L). Hence, we can write
$$P_C(I - \lambda\nabla f) = \frac{2-\lambda L}{4}I + \frac{2+\lambda L}{4}T_\lambda = sI + (1-s)T_\lambda,$$
where T_λ is nonexpansive and s := s(λ) = (2 − λL)/4 ∈ (0, ½) for each λ ∈ (0, 2/L). It is easy to see that
$$\lambda \to \frac{2}{L} \quad\Longleftrightarrow\quad s \to 0^+.$$
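This decomposition is easy to probe numerically. The sketch below (a toy instance of our own: C = [0, 1], f(x) = ½(x − ½)², hence L = 1) recovers T_λ from P_C(I − λ∇f) and checks that it is nonexpansive on random pairs of points for several λ ∈ (0, 2/L).

```python
import numpy as np

# Toy check (our own instance): C = [0, 1], f(x) = 0.5*(x - 0.5)^2, L = 1.
L = 1.0
P_C = lambda x: np.clip(x, 0.0, 1.0)
G = lambda x, lam: P_C(x - lam * (x - 0.5))        # P_C(I - lam*grad f)

def T(x, lam):
    """T_lam recovered from P_C(I - lam*grad f) = s*I + (1 - s)*T_lam."""
    s = (2.0 - lam * L) / 4.0
    return (G(x, lam) - s * x) / (1.0 - s)

rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 1, 1000), rng.uniform(0, 1, 1000)
for lam in (0.5, 1.0, 1.9):                        # lam in (0, 2/L)
    gap = np.abs(T(xs, lam) - T(ys, lam)) - np.abs(xs - ys)
    print(lam, bool(np.all(gap <= 1e-12)))         # nonexpansiveness of T_lam
```

The check should succeed for every admissible λ, since averagedness of P_C(I − λ∇f) is exactly what guarantees the nonexpansiveness of T_λ.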
For each fixed λ ( 0 , 2 L ) , we now consider the self-mapping
$$Q_\lambda x = P_C\bigl[s\gamma Bx + (I - s\mu A)T_\lambda x\bigr], \quad x \in C.$$

It is easy to see that Q_λ is a contraction; see [27] for more details. Thus, there exists a unique fixed point x_λ in C, which uniquely solves the fixed point equation (4.6).

The following two results, which summarize the properties of the net {x_λ}, λ ∈ (0, 2/L), have been proved in [27].

Proposition 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B : C → H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. For each λ in (0, 2/L), let x_λ denote the unique solution of the fixed point equation (4.6), where T_λ and s satisfy the following conditions:
  1. (i)

    s := s(λ) = (2 − λL)/4 for each λ in (0, 2/L);

     
  2. (ii)

    P_C(I − λ∇f) = sI + (1 − s)T_λ for each λ in (0, 2/L).

     
Then, the following properties for the net { x λ } λ ( 0 , 2 L ) hold:
  1. (a)

    { x λ } λ ( 0 , 2 L ) is bounded;

     
  2. (b)

    lim_{λ→2/L} ‖x_λ − T_λx_λ‖ = 0;

     
  3. (c)

    x λ defines a continuous curve from ( 0 , 2 L ) into C.

     
Theorem 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B : C → H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. For each λ in (0, 2/L), let x_λ denote the unique solution of the fixed point equation (4.6), where T_λ and s satisfy the following conditions:
  1. (i)

    s := s(λ) = (2 − λL)/4 for each λ in (0, 2/L);

     
  2. (ii)

    P_C(I − λ∇f) = sI + (1 − s)T_λ for each λ in (0, 2/L).

     

Then the net {x_λ}, λ ∈ (0, 2/L), converges strongly, as λ → 2/L, to a minimizer x* in Ω of (4.1), which solves the variational inequality (2.3); equivalently, we have P_Ω(I − μA + γB)x* = x*.

Now, we are ready to propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem.

Theorem 4.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that {S_n}_{n=1}^∞ is a sequence of nonexpansive mappings from C into itself such that ⋂_{n=1}^∞ F(S_n) ≠ ∅. Suppose, in addition, that S : C → C is a nonexpansive mapping such that ({S_n}_{n=1}^∞, S) satisfies the AKTT-condition. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B : C → H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. Let {λ_n} be a sequence in the interval (0, 2/L) such that {T_n} and {s_n} satisfy the following conditions:
  1. (i)

    s_n = (2 − λ_nL)/4 for each n ≥ 0;

     
  2. (ii)

    P_C(I − λ_n∇f) = s_nI + (1 − s_n)T_n for each n ≥ 0;

     
  3. (iii)

    s_n → 0;

     
  4. (iv)

    Σ_{n=0}^∞ s_n = ∞;

     
  5. (v)

    Σ_{n=1}^∞ |λ_{n+1} − λ_n| < ∞.

     
Suppose that { α n } , { β n } are two sequences of real numbers in ( 0 , 1 ) satisfying the following control conditions:
(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1.
(4.8)
For given x 1 in C arbitrarily, let the sequence { x n } be generated by
$$\begin{cases} y_n = P_C[\alpha_n\gamma Bx_n + (I - \alpha_n\mu A)x_n], \\ x_{n+1} = (1-\beta_n)x_n + \beta_nT_nS_ny_n, \quad n\in\mathbb{N}. \end{cases}$$
(4.9)
If lim_{n→∞} ‖y_n − Sy_n‖ = 0 and ⋂_{n=1}^∞ F(T_n) ∩ F(S) ≠ ∅, then there exists a nonexpansive mapping T : C → C such that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition, and {x_n} converges strongly to a common element x* in F(TS) ∩ Ω, which solves the variational inequality
$$\bigl\langle(\mu A - \gamma B)x^*, x^* - z\bigr\rangle \le 0, \quad \forall z \in F(TS).$$
(4.10)

Proof We divide the proof into several steps.

First, we note that
  1. (1)
    x ˜ in C solves the minimization problem (4.1) if and only if for each fixed λ > 0 , x ˜ solves the fixed point equation
    x̃ = P_C(I − λ∇f)x̃;
     
  2. (2)
    P_C(I − λ∇f) is (2 + λL)/4-averaged for each λ in (0, 2/L); in particular, the following relation holds:
    $$P_C(I - \lambda_n\nabla f) = \frac{2-\lambda_nL}{4}I + \frac{2+\lambda_nL}{4}T_n = s_nI + (1-s_n)T_n, \quad n \ge 0.$$
     

Step I. We claim that the sequence { T n } satisfies the A K T T -condition.

From the proof of Theorem 3.1, {x_n} is bounded, and so are {Bx_n} and {T_nx_n}. Let D be a bounded subset of C such that {Bx_n, T_nx_n : n ∈ ℕ} ⊂ D. Since ∇f is 1/L-ism, P_C(I − λ_n∇f) is nonexpansive. It follows that for any given z in D and v in Ω (so that v = P_C(I − λ_n∇f)v),
$$\|P_C(I-\lambda_n\nabla f)z\| \le \|P_C(I-\lambda_n\nabla f)z - v\| + \|v\| = \|P_C(I-\lambda_n\nabla f)z - P_C(I-\lambda_n\nabla f)v\| + \|v\| \le \|z - v\| + \|v\| \le \|z\| + 2\|v\|.$$
This implies that
$$\sup\{\|P_C(I-\lambda_n\nabla f)z\| : n\in\mathbb{N}, z\in D\} < \infty.$$
On the other hand, we have for any z in D and v in Ω (note that T_nv = v) that
$$\|AT_nz\| \le \|AT_nz - Av\| + \|Av\| \le \kappa\|T_nz - T_nv\| + \|Av\| \le \kappa\|z - v\| + \|Av\| \le \kappa\|z\| + \kappa\|v\| + \|Av\|.$$
Therefore,
$$\sup\{\|AT_nz\| : n\in\mathbb{N}, z\in D\} < \infty.$$
This shows that {AT_nz : n ∈ ℕ, z ∈ D} is bounded. Moreover, since condition (ii) gives $T_n = \frac{4P_C(I-\lambda_n\nabla f) - (2-\lambda_nL)I}{2+\lambda_nL}$, we obtain, for any z in D, that
$$\begin{aligned}
\|T_{n+1}z - T_nz\| ={}& \left\|\frac{4P_C(I-\lambda_{n+1}\nabla f)z - (2-\lambda_{n+1}L)z}{2+\lambda_{n+1}L} - \frac{4P_C(I-\lambda_n\nabla f)z - (2-\lambda_nL)z}{2+\lambda_nL}\right\| \\
\le{}& \left\|\frac{4P_C(I-\lambda_{n+1}\nabla f)z}{2+\lambda_{n+1}L} - \frac{4P_C(I-\lambda_n\nabla f)z}{2+\lambda_nL}\right\| + \frac{4L|\lambda_{n+1}-\lambda_n|}{(2+\lambda_nL)(2+\lambda_{n+1}L)}\|z\| \\
\le{}& \frac{4L|\lambda_{n+1}-\lambda_n|\,\|P_C(I-\lambda_{n+1}\nabla f)z\|}{(2+\lambda_{n+1}L)(2+\lambda_nL)} + \frac{4(2+\lambda_{n+1}L)\,\|P_C(I-\lambda_{n+1}\nabla f)z - P_C(I-\lambda_n\nabla f)z\|}{(2+\lambda_{n+1}L)(2+\lambda_nL)} \\
&+ \frac{4L|\lambda_{n+1}-\lambda_n|}{(2+\lambda_nL)(2+\lambda_{n+1}L)}\|z\| \\
\le{}& |\lambda_{n+1}-\lambda_n|\bigl[L\|P_C(I-\lambda_{n+1}\nabla f)z\| + 4\|\nabla f(z)\| + L\|z\|\bigr] \\
\le{}& M|\lambda_{n+1}-\lambda_n|,
\end{aligned}$$
where we used $\|P_C(I-\lambda_{n+1}\nabla f)z - P_C(I-\lambda_n\nabla f)z\| \le |\lambda_{n+1}-\lambda_n|\,\|\nabla f(z)\|$ and $(2+\lambda_nL)(2+\lambda_{n+1}L) \ge 4$, and where M > 0 is an appropriate constant such that
$$L\|P_C(I-\lambda_{n+1}\nabla f)z\| + 4\|\nabla f(z)\| + L\|z\| \le M, \quad n \ge 0, z \in D.$$
Thus, we get
$$\sum_{n=1}^\infty \sup\{\|T_{n+1}z - T_nz\| : z\in D\} \le M\sum_{n=1}^\infty|\lambda_{n+1}-\lambda_n| < \infty.$$
Now, define a mapping T : C C by
T x = lim n T n x , x C .

Then T is a nonexpansive mapping. Since the minimization problem (4.1) is consistent, we conclude that ⋂_{n=1}^∞ F(T_n) ≠ ∅. Consequently, the pair ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition.

Step II. We claim that lim_{n→∞} ‖y_n − TSy_n‖ = 0. In view of (4.9), we obtain
$$\|y_n - x_n\| = \bigl\|P_C[\alpha_n\gamma Bx_n + (I-\alpha_n\mu A)x_n] - P_Cx_n\bigr\| \le \bigl\|\alpha_n\gamma Bx_n + (I-\alpha_n\mu A)x_n - x_n\bigr\| = \alpha_n\|(\gamma B - \mu A)x_n\|.$$
(4.11)
Since lim n α n = 0 , it follows from (4.11) that
$$\lim_{n\to\infty}\|y_n - x_n\| = 0.$$
(4.12)
In view of Lemma 2.6, we conclude that
$$\|(I-\alpha_{n+1}\mu A)x_{n+1} - (I-\alpha_{n+1}\mu A)x_n\| \le (1-\alpha_{n+1}\tau)\|x_{n+1}-x_n\|, \quad n\in\mathbb{N}.$$
This implies that
$$\begin{aligned}
\|y_{n+1}-y_n\| ={}& \bigl\|P_C[\alpha_{n+1}\gamma Bx_{n+1} + (I-\alpha_{n+1}\mu A)x_{n+1}] - P_C[\alpha_n\gamma Bx_n + (I-\alpha_n\mu A)x_n]\bigr\| \\
\le{}& \bigl\|\alpha_{n+1}\gamma(Bx_{n+1}-Bx_n) + \gamma(\alpha_{n+1}-\alpha_n)Bx_n + (I-\alpha_{n+1}\mu A)x_{n+1} - (I-\alpha_{n+1}\mu A)x_n + (\alpha_n-\alpha_{n+1})\mu Ax_n\bigr\| \\
\le{}& \alpha_{n+1}\gamma l\|x_{n+1}-x_n\| + (1-\alpha_{n+1}\tau)\|x_{n+1}-x_n\| + \gamma|\alpha_{n+1}-\alpha_n|\,\|Bx_n\| + \mu|\alpha_{n+1}-\alpha_n|\,\|Ax_n\| \\
\le{}& \bigl(1-\alpha_{n+1}(\tau-\gamma l)\bigr)\|x_{n+1}-x_n\| + \gamma M|\alpha_{n+1}-\alpha_n| + \mu M|\alpha_{n+1}-\alpha_n| \\
\le{}& \|x_{n+1}-x_n\| + \gamma M|\alpha_{n+1}-\alpha_n| + \mu M|\alpha_{n+1}-\alpha_n|.
\end{aligned}$$
(4.13)
Next, we show that lim_{n→∞} ‖x_{n+1} − x_n‖ = 0. To this end, define a sequence {z_n} by z_n = T_nS_ny_n. It follows from (4.13) that
$$\begin{aligned}
\|z_{n+1}-z_n\| ={}& \|T_{n+1}S_{n+1}y_{n+1} - T_nS_ny_n\| \\
\le{}& \|T_{n+1}S_{n+1}y_{n+1} - T_{n+1}S_{n+1}y_n\| + \|T_{n+1}S_{n+1}y_n - T_{n+1}S_ny_n\| + \|T_{n+1}S_ny_n - T_nS_ny_n\| \\
\le{}& \|y_{n+1}-y_n\| + \|S_{n+1}y_n - S_ny_n\| + \|T_{n+1}S_ny_n - T_nS_ny_n\| \\
\le{}& \|y_{n+1}-y_n\| + \sup\{\|S_{n+1}z - S_nz\| : z\in D\} + \sup\{\|T_{n+1}z - T_nz\| : z\in D\} \\
\le{}& \|x_{n+1}-x_n\| + \gamma M|\alpha_{n+1}-\alpha_n| + \mu M|\alpha_{n+1}-\alpha_n| + \sup\{\|S_{n+1}z - S_nz\| : z\in D\} + \sup\{\|T_{n+1}z - T_nz\| : z\in D\}.
\end{aligned}$$
This implies that
$$\|z_{n+1}-z_n\| - \|x_{n+1}-x_n\| \le \gamma M|\alpha_{n+1}-\alpha_n| + \mu M|\alpha_{n+1}-\alpha_n| + \sup\{\|S_{n+1}z - S_nz\| : z\in D\} + \sup\{\|T_{n+1}z - T_nz\| : z\in D\}.$$
(4.14)
Since lim_{n→∞} α_n = 0 and, by the AKTT-condition, both suprema on the right-hand side of (4.14) tend to zero, we conclude that
$$\limsup_{n\to\infty}\bigl(\|z_{n+1}-z_n\| - \|x_{n+1}-x_n\|\bigr) \le 0.$$
Using Lemma 2.8, we deduce that
$$\lim_{n\to\infty}\|z_n - x_n\| = 0.$$
Thus, we have
$$\lim_{n\to\infty}\|x_{n+1}-x_n\| = \lim_{n\to\infty}\beta_n\|z_n - x_n\| = 0.$$
(4.15)
On the other hand, since $x_{n+1} - T_nS_ny_n = (1-\beta_n)(x_n - T_nS_ny_n)$, we have
$$\begin{aligned}
\|y_n - T_nS_ny_n\| &\le \|y_n - x_n\| + \|x_n - x_{n+1}\| + \|x_{n+1} - T_nS_ny_n\| \\
&= \|y_n - x_n\| + \|x_n - x_{n+1}\| + (1-\beta_n)\|x_n - T_nS_ny_n\| \\
&\le \|y_n - x_n\| + \|x_n - x_{n+1}\| + (1-\beta_n)\bigl[\|x_n - y_n\| + \|y_n - T_nS_ny_n\|\bigr].
\end{aligned}$$
(4.16)
It follows from (4.16) that
$$\|y_n - T_nS_ny_n\| \le \frac{1}{\beta_n}\bigl[2\|y_n - x_n\| + \|x_n - x_{n+1}\|\bigr].$$
(4.17)
In view of (4.12), (4.15) and condition (b), we obtain
$$\lim_{n\to\infty}\|y_n - T_nS_ny_n\| = 0.$$
(4.18)
By the triangle inequality and the nonexpansiveness of T_n, we obtain
$$\begin{aligned}
\|y_n - T_nSy_n\| &\le \|y_n - T_nS_ny_n\| + \|T_nS_ny_n - T_nSy_n\| \\
&\le \|y_n - T_nS_ny_n\| + \|S_ny_n - Sy_n\| \\
&\le \|y_n - T_nS_ny_n\| + \sup\{\|S_nz - Sz\| : z\in D\}.
\end{aligned}$$
(4.19)
In view of Lemma 2.9, (4.18) and (4.19), we deduce that
$$\lim_{n\to\infty}\|y_n - T_nSy_n\| = 0.$$
(4.20)
By the triangle inequality, we obtain
$$\begin{aligned}
\|y_n - TSy_n\| &\le \|y_n - T_nSy_n\| + \|T_nSy_n - TSy_n\| \\
&\le \|y_n - T_nSy_n\| + \sup\{\|T_nz - Tz\| : z\in D\}.
\end{aligned}$$
(4.21)
In view of the A K T T -condition and (4.20)-(4.21), we deduce that
$$\lim_{n\to\infty}\|y_n - TSy_n\| = 0.$$
Step III. We prove that
$$\limsup_{n\to\infty}\bigl\langle(\mu A - \gamma B)x^*, x^* - y_n\bigr\rangle \le 0,$$
where x* ∈ F(TS) is the same as in Theorem 3.1 and satisfies
$$\bigl\langle(\gamma B - \mu A)x^*, x^* - z\bigr\rangle \ge 0, \quad \forall z \in F(TS).$$
(4.22)
Let {y_{n_k}} be a subsequence of {y_n} such that
$$\limsup_{n\to\infty}\bigl\langle(\mu A - \gamma B)x^*, x^* - y_n\bigr\rangle = \lim_{k\to\infty}\bigl\langle(\mu A - \gamma B)x^*, x^* - y_{n_k}\bigr\rangle.$$
(4.23)
By the same argument as in Step II of the proof of Theorem 3.1, we can find u ∈ F(TS) such that y_{n_k} ⇀ u as k → ∞. In view of (ii), we have
$$\begin{aligned}
\|P_C(I-\lambda_n\nabla f)y_n - y_n\| &= \|s_ny_n + (1-s_n)T_ny_n - y_n\| = (1-s_n)\|T_ny_n - y_n\| \\
&\le \|T_ny_n - y_n\| \le \|T_ny_n - T_nSy_n\| + \|T_nSy_n - y_n\| \\
&\le \|y_n - Sy_n\| + \|T_nSy_n - y_n\|,
\end{aligned}$$
(4.24)
where s_n = (2 − λ_nL)/4 for each n ≥ 0. In view of (4.24), (4.20) and the assumption ‖y_n − Sy_n‖ → 0, we conclude that
$$\lim_{n\to\infty}\|P_C(I-\lambda_n\nabla f)y_n - y_n\| = \lim_{n\to\infty}\|y_n - T_ny_n\| = 0.$$
Hence we have
$$\begin{aligned}
\bigl\|P_C\bigl(I-\tfrac{2}{L}\nabla f\bigr)y_n - y_n\bigr\| &\le \bigl\|P_C\bigl(I-\tfrac{2}{L}\nabla f\bigr)y_n - P_C(I-\lambda_n\nabla f)y_n\bigr\| + \|P_C(I-\lambda_n\nabla f)y_n - y_n\| \\
&\le \bigl\|\bigl(I-\tfrac{2}{L}\nabla f\bigr)y_n - (I-\lambda_n\nabla f)y_n\bigr\| + \|P_C(I-\lambda_n\nabla f)y_n - y_n\| \\
&\le \bigl(\tfrac{2}{L}-\lambda_n\bigr)\|\nabla f(y_n)\| + \|T_ny_n - y_n\|.
\end{aligned}$$
Thus, from the boundedness of {∇f(y_n)}, λ_n → 2/L (equivalently, s_n → 0) and ‖T_ny_n − y_n‖ → 0, we conclude that
$$\lim_{n\to\infty}\bigl\|P_C\bigl(I-\tfrac{2}{L}\nabla f\bigr)y_n - y_n\bigr\| = 0.$$
(4.25)
Note that the gradient ∇f is 1/L-ism. Hence, P_C(I − (2/L)∇f) is a nonexpansive self-mapping on C. As a matter of fact, we have for each x, y in C (see the proof of Theorem 4.1)
$$\bigl\|P_C\bigl(I-\tfrac{2}{L}\nabla f\bigr)x - P_C\bigl(I-\tfrac{2}{L}\nabla f\bigr)y\bigr\|^2 \le \|x-y\|^2.$$
Since y_{n_k} ⇀ u, it follows from (4.25) and Lemma 2.4 that
$$u = P_C\bigl(I - \tfrac{2}{L}\nabla f\bigr)u.$$
This shows that u ∈ Ω. Consequently, from (3.22) and (4.23), it follows that
$$\limsup_{n\to\infty}\bigl\langle(\gamma B - \mu A)x^*, y_n - x^*\bigr\rangle = \bigl\langle(\gamma B - \mu A)x^*, u - x^*\bigr\rangle \le 0.$$

As in the last part of the proof of Theorem 3.1, we obtain that x_n → x*, which completes the proof. □

Corollary 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B : C → H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. Let {λ_n} be a sequence in the interval (0, 2/L) such that {T_n} and {s_n} satisfy the following conditions:
  1. (i)

    s_n = (2 − λ_nL)/4 for each n ≥ 0;

     
  2. (ii)

    P_C(I − λ_n∇f) = s_nI + (1 − s_n)T_n for each n ≥ 0;

     
  3. (iii)

    s_n → 0;

     
  4. (iv)

    Σ_{n=0}^∞ s_n = ∞;

     
  5. (v)

    Σ_{n=1}^∞ |λ_{n+1} − λ_n| < ∞.

     
Suppose that { α n } , { β n } are two real sequences in ( 0 , 1 ) satisfying the following control conditions:
(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1.
For given x 1 in C arbitrarily, let the sequence { x n } be generated by
$$\begin{cases} y_n = P_C[\alpha_n\gamma Bx_n + (I-\alpha_n\mu A)x_n], \\ x_{n+1} = (1-\beta_n)x_n + \beta_nT_ny_n, \quad n\in\mathbb{N}. \end{cases}$$
Then, there exists a nonexpansive mapping T : C C such that ( { T n } n = 1 , T ) satisfies the A K T T -condition, and { x n } converges strongly to a common element x F ( T ) Ω , which solves the variational inequality
$$\bigl\langle(\mu A - \gamma B)x^*, x^* - z\bigr\rangle \le 0, \quad \forall z \in F(T).$$

We end this section by considering simple examples of sequences that fulfill the desired conditions of our results.

Example 4.1 Let { α n } n = 1 be a sequence defined by
α n = 1 n + 1 , n N .
Let L > 0 be an arbitrary real number, and let n_0 ∈ ℕ be such that n_0 > L/2. We define the sequence {λ_n}_{n=1}^∞ as follows:
$$\lambda_n = \begin{cases} \dfrac{1}{L}, & n \le n_0, \\[4pt] \dfrac{2n}{(n+1)L}, & n > n_0. \end{cases}$$

Then the sequences { α n } n = 1 and { λ n } n = 1 satisfy all the aspects of the hypotheses of our results.
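These claims can be sanity-checked numerically. The snippet below (our own check, with the arbitrary choices L = 1 and n_0 = 3) evaluates finite proxies for conditions (iii)-(v) and (a): s_n → 0, divergence of Σ s_n, summability of Σ|λ_{n+1} − λ_n|, and the behaviour of {α_n}.

```python
L, n0, N = 1.0, 3, 200000

def lam(n):
    """lambda_n from Example 4.1: 1/L up to n0, then 2n/((n+1)L) -> 2/L."""
    return 1.0 / L if n <= n0 else 2.0 * n / ((n + 1) * L)

s = [(2.0 - lam(n) * L) / 4.0 for n in range(1, N + 1)]      # s_n = (2 - lam_n*L)/4
alpha = [1.0 / (n + 1) for n in range(1, N + 1)]

print(s[-1] < 1e-5)                                              # (iii) s_n -> 0
print(sum(s) > 5.0)                                              # (iv) sum s_n grows without bound
print(sum(abs(lam(n + 1) - lam(n)) for n in range(1, N)) < 1.5)  # (v) telescoping, summable
print(alpha[-1] < 1e-4 and sum(alpha) > 10.0)                    # (a) alpha_n -> 0, sum diverges
```

For n > n_0, s_n = 1/(2(n + 1)), so Σ s_n diverges like the harmonic series while the differences |λ_{n+1} − λ_n| = 2/((n+1)(n+2)L) telescope to a finite sum.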

5 Applications

Let H be a real Hilbert space, and let Q : H → 2^H be a set-valued mapping. The effective domain of Q is denoted by dom(Q), that is, dom(Q) = {x ∈ H : Qx ≠ ∅}. The range of Q is denoted by R(Q). A multi-valued mapping Q is said to be monotone if, for all x, y ∈ H, f ∈ Qx and g ∈ Qy,
$$\langle x - y, f - g\rangle \ge 0.$$
A monotone mapping Q : H → 2^H is said to be maximal if its graph G(Q) := {(x, f) : f ∈ Q(x)} is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping Q : H → 2^H is maximal if and only if, for (x, f) ∈ H × H, ⟨x − y, f − g⟩ ≥ 0 for every (y, g) ∈ G(Q) implies that f ∈ Q(x). For a maximal monotone operator Q on H and r > 0, we may define the single-valued operator J_r = (I + rQ)^{−1} : H → dom(Q), which is called the resolvent of Q for r > 0. Set Q^{−1}0 = {x ∈ H : 0 ∈ Qx}. It is known that Q^{−1}0 = F(J_r) for all r > 0, and the resolvent J_r is firmly nonexpansive, i.e.,
$$\|J_rx - J_ry\|^2 \le \langle x - y, J_rx - J_ry\rangle, \quad x, y \in H.$$
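For a concrete resolvent, take Q = ∂|·| on H = ℝ (our own choice for illustration; this example is not from the paper). Then J_r is the soft-thresholding map, Q^{−1}0 = F(J_r) = {0}, and the firm-nonexpansiveness inequality above can be checked numerically:

```python
import numpy as np

def J(x, r):
    """Resolvent (I + r*subdiff|.|)^{-1}: soft-thresholding with threshold r."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

rng = np.random.default_rng(1)
x, y, r = rng.normal(size=1000), rng.normal(size=1000), 0.3
d = J(x, r) - J(y, r)
# Firm nonexpansiveness, applied coordinatewise on R:
#   ||Jx - Jy||^2 <= <x - y, Jx - Jy>
print(bool(np.all(d ** 2 <= (x - y) * d + 1e-12)))
print(J(np.array([0.0]), r)[0] == 0.0)   # 0 is the unique fixed point of J_r
```

Firm nonexpansiveness is stronger than plain nonexpansiveness, which is what makes resolvents so convenient in the iterative schemes below.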

The following lemma has been proved in [40].

Lemma 5.1 Let H be a real Hilbert space, and let Q be a maximal monotone operator on H. For r > 0 , let J r be the resolvent operator associated with Q and r. Then
$$\|J_\rho x - J_\sigma x\| \le \frac{|\rho - \sigma|}{\rho}\|x - J_\rho x\|$$

for all ρ , σ > 0 and x H .

We also know the following lemma from [38].

Lemma 5.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let Q be a maximal monotone operator on H such that Q^{−1}0 ≠ ∅ and cl(dom(Q)) ⊂ C ⊂ ⋂_{r>0} R(I + rQ), where cl(dom(Q)) stands for the closure of dom(Q). Suppose that {r_n} is a sequence in (0, ∞) such that inf{r_n : n ∈ ℕ} > 0 and Σ_{n=1}^∞ |r_{n+1} − r_n| < ∞. Then
  1. (i)

    Σ_{n=1}^∞ sup{‖J_{r_{n+1}}z − J_{r_n}z‖ : z ∈ D} < ∞ for any bounded subset D of C.

     
  2. (ii)

    lim_{n→∞} J_{r_n}z = J_rz for all z in C and F(J_r) = ⋂_{n=1}^∞ F(J_{r_n}), where r_n → r as n → ∞.

     

From Theorem 3.1 and Lemma 5.2, we obtain the following result.

Theorem 5.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let Q be a maximal monotone operator on H such that Q 1 0 . Given real sequences { α n } , { β n } in ( 0 , 1 ) and { r n } in ( 0 , ) , assume that { α n } , { β n } satisfy the following control conditions:
(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1; (c) r_n ≥ ϵ for some ϵ > 0 and all n ∈ ℕ, and Σ_{n=1}^∞ |r_{n+1} − r_n| < ∞.
Suppose, in addition, that S : C → C is a nonexpansive mapping with F := ⋂_{n=1}^∞ F(J_{r_n}) ∩ F(S) ≠ ∅. Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B : C → H be an l-Lipschitzian mapping with constant l ≥ 0. Let 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). For given x_1 in C arbitrarily, let the sequence {x_n} be generated iteratively by
$$\begin{cases} y_n = P_C[\alpha_n\gamma Bx_n + (I-\alpha_n\mu A)x_n], \\ x_{n+1} = (1-\beta_n)x_n + \beta_nJ_{r_n}Sy_n, \quad n\in\mathbb{N}. \end{cases}$$
(5.1)
Then the sequence {x_n} defined by (5.1) converges strongly to x* in F(J_rS), which solves the variational inequality
$$\bigl\langle(\mu A - \gamma B)x^*, x^* - z\bigr\rangle \le 0, \quad \forall z \in F(J_rS).$$
(5.2)
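A tiny numerical instance of scheme (5.1), with choices entirely our own (H = C = ℝ so P_C = I, Q = ∂|·| so J_r is soft-thresholding and Q^{−1}0 = {0}, S = I, A = I with κ = η = 1, B = 0, μ = 1, γ = 0, α_n = 1/(n+1), β_n = ½, r_n ≡ 1):

```python
def J(x, r):
    """Resolvent of Q = subdiff|.| on R: soft-thresholding."""
    return (abs(x) - r) * (1.0 if x > 0 else -1.0) if abs(x) > r else 0.0

x = 5.0
for n in range(1, 201):
    alpha, beta = 1.0 / (n + 1), 0.5
    y = (1.0 - alpha) * x                      # y_n = P_C[(I - alpha_n*mu*A)x_n], B = 0
    x = (1.0 - beta) * x + beta * J(y, 1.0)    # x_{n+1} = (1-beta_n)x_n + beta_n*J_{r_n}Sy_n
print(abs(x) < 1e-3)                           # iterates approach 0 = Q^{-1}(0)
```

The iterates settle at the zero of Q, in line with the theorem's assertion that {x_n} converges strongly to a point of F(J_rS).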

The following result is yet another easy consequence of Theorem 3.1 and Lemma 5.2.

Theorem 5.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let Q be a maximal monotone operator on H such that Q 1 0 . Given real sequences { α n } , { β n } in ( 0 , 1 ) and { r n } in ( 0 , ) , assume that { α n } , { β n } satisfy the following control conditions:
(a) lim n α n = 0 and n = 1 α n = ; (b) 0 < lim inf n β n lim sup n β n < 1 ; (c) r n ϵ , n N and n = 1 | r n + 1 r n | < .
Let A : C → H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B : C → H be an l-Lipschitzian mapping with constant l ≥ 0. Let 0 < μ < 2η/κ² and