
Strong convergence of projection methods for a countable family of nonexpansive mappings and applications to constrained convex minimization problems

Abstract

In this paper, we introduce a general algorithm to approximate common fixed points for a countable family of nonexpansive mappings in a real Hilbert space, which solves a corresponding variational inequality. Furthermore, we propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem. Our results improve and generalize some known results in the current literature.

MSC:47H10, 37C25.

1 Introduction

A viscosity approximation method for finding fixed points of nonexpansive mappings was first proposed by Moudafi in 2000 [1]. He proved the convergence of the sequence generated by the proposed method. In 2004, Xu [2] proved the strong convergence of the sequence generated by the viscosity approximation method to a unique solution of a certain variational inequality problem defined on the set of fixed points of a nonexpansive map.

It is well known that the iterative methods for finding fixed points of nonexpansive mappings can also be used to solve a convex minimization problem; see, for example, [3–5] and the references therein. In 2003, Xu [4] introduced an iterative method for computing the approximate solutions of a quadratic minimization problem over the set of fixed points of a nonexpansive mapping defined on a real Hilbert space. He proved that the sequence generated by the proposed method converges strongly to the unique solution of the quadratic minimization problem. By combining the iterative schemes proposed by Moudafi [1] and Xu [4], Marino and Xu [6] considered a general iterative method and proved that the sequence generated by the method converges strongly to a unique solution of a certain variational inequality problem, which is the optimality condition for a particular minimization problem. Liu [7] and Qin et al. [8] also studied some applications of the iterative method considered in [6]. Yamada [5] introduced the so-called hybrid steepest-descent method for solving the variational inequality problem and also studied the convergence of the sequence generated by the proposed method. Very recently, Tian [9] combined the iterative methods of [5, 6] in order to propose implicit and explicit schemes for constructing a fixed point of a nonexpansive mapping T defined on a real Hilbert space. He also proved the strong convergence of these two schemes to a fixed point of T under appropriate conditions. Related iterative methods for solving fixed point problems, variational inequalities and optimization problems can be found in [10–15] and the references therein.

On the other hand, the gradient-projection method for finding the approximate solutions of the constrained convex minimization problem is well known; see, for example, [16] and the references therein. The convergence of the sequence generated by this method depends on the behavior of the gradient of the objective function. If the gradient fails to be strongly monotone, then the strong convergence of the sequence generated by the gradient-projection method may fail. Very recently, Xu [17] gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely, an averaged mapping approach. Moreover, he constructed a counterexample which shows that the sequence generated by the gradient-projection method does not converge strongly in the setting of an infinite-dimensional space. He also presented two modifications of gradient-projection algorithms which are shown to have strong convergence. Further, he regularized the minimization problem to derive an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of the constrained convex minimization problem in the consistent case. The related methods and results can be found in [18–26] and the references therein. By virtue of projections, the authors in [27] extended the implicit and explicit iterative schemes proposed in [9].

The purpose of this paper is to introduce a general algorithm to approximate common fixed points for a countable family of nonexpansive mappings in a real Hilbert space. We prove strong convergence theorems for the sequences produced by the methods to a common fixed point of a countable family of nonexpansive mappings, which is the unique solution of a corresponding variational inequality. We also propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and prove that the sequences generated by our schemes converge strongly to a solution of the constrained convex minimization problem. Our results improve and generalize some known results in the current literature; see, for example, [27, 28].

2 Preliminaries

Throughout this paper, we denote the set of real numbers and the set of positive integers by ℝ and ℕ, respectively. Let H be a real Hilbert space, and let C be a nonempty subset of H. Let T:C→C be a mapping. We denote by F(T) the set of fixed points of T, i.e., F(T) = {x ∈ C : Tx = x}.

Definition 2.1 (i) A mapping T:C→C is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y in C. T is said to be quasi-nonexpansive if F(T) ≠ ∅ and ‖Tx − y‖ ≤ ‖x − y‖ for all x in C and y in F(T).

  1. (ii)

    A mapping T:H→H is said to be an averaged mapping [29] if it can be written as the average of the identity I and a nonexpansive mapping; that is,

    T = (1 − α)I + αS,
    (2.1)

where α is a number in (0,1), and S:H→H is nonexpansive. More precisely, when (2.1) holds, we say that T is α-averaged.

  1. (iii)

    A mapping B:C→H is said to be L-Lipschitzian if ‖Bx − By‖ ≤ L‖x − y‖ for all x, y ∈ C, where L ≥ 0 is a constant. In particular, if L ∈ [0,1), then B is called a contraction on C; if L = 1, then B is nonexpansive.

  2. (iv)

    A mapping V:D(V) ⊆ H → H is called firmly nonexpansive [30] if

    ⟨x − y, Vx − Vy⟩ ≥ ‖Vx − Vy‖², ∀x, y ∈ D(V),

where D(V) is the domain of V.

Clearly, a firmly nonexpansive mapping is a 1/2-averaged map.

Proposition 2.1 [31, 32]

Let H be a real Hilbert space, and let S,T,B:HH be mappings.

  1. (a)

    If T = (1 − α)S + αB for some α in (0,1), and if S is averaged and B is nonexpansive, then T is averaged.

  2. (b)

    T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.

  3. (c)

    If T = (1 − α)S + αB for some α in (0,1), and if S is firmly nonexpansive and B is nonexpansive, then T is averaged.

Recall that the metric (or nearest point) projection from H onto C is the mapping P_C : H → C which assigns to each point x in H the unique point P_C x in C satisfying the property

‖x − P_C x‖ = d(x, C) := inf_{y∈C} ‖x − y‖.

Lemma 2.1 [30]

Let H be a real Hilbert space. For given x in H:

  1. (a)

    z= P C x if and only if

    ⟨z − x, z − y⟩ ≤ 0, ∀y ∈ C.
  2. (b)

    z= P C x if and only if

    ‖x − z‖² ≤ ‖x − y‖² − ‖y − z‖², ∀y ∈ C.
  3. (c)
    ⟨P_C x − P_C y, x − y⟩ ≥ ‖P_C x − P_C y‖², ∀x, y ∈ H.

Consequently, P C is nonexpansive and monotone.

In general, a projection mapping is firmly nonexpansive, and, thus, a 1/2-averaged map.
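To make Lemma 2.1 concrete, the projection properties can be verified numerically for a simple convex set. The following Python sketch (the box C = [−1,1]² and the sampled points are assumptions made purely for illustration) checks characterization (a) and the firm nonexpansiveness inequality (c) for the coordinatewise projection onto a box:

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection P_C onto the box C = [lo, hi]^n (componentwise clip).
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0
for _ in range(100):
    x, y = 3.0 * rng.normal(size=2), 3.0 * rng.normal(size=2)
    px, py = proj_box(x, lo, hi), proj_box(y, lo, hi)
    # Lemma 2.1(a): for z = P_C x, <z - x, z - c> <= 0 for every c in C
    for c in [np.array([0.0, 0.0]), np.array([0.5, -0.5]), proj_box(y, lo, hi)]:
        assert np.dot(px - x, px - c) <= 1e-12
    # Lemma 2.1(c): <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2  (firm nonexpansiveness)
    assert np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12
```

Since the projection onto a box acts componentwise, both properties can be confirmed coordinate by coordinate, which is why this toy check is reliable.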

Lemma 2.2 The following inequality holds in an inner product space X:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ X.

Lemma 2.3 [33]

In a Hilbert space H, we have

‖λx + (1 − λ)y‖² = λ‖x‖² + (1 − λ)‖y‖² − λ(1 − λ)‖x − y‖², ∀x, y ∈ H and λ ∈ [0,1].

Lemma 2.4 (Demiclosedness principle [33])

Let T:C→C be a nonexpansive mapping with F(T) ≠ ∅. If {x_n} is a sequence in C that converges weakly to x, and if {(I − T)x_n} converges strongly to y, then (I − T)x = y; in particular, if y = 0, then x ∈ F(T).

Definition 2.2 Let H be a real Hilbert space. A nonlinear operator T, whose domain D(T) ⊆ H and range R(T) ⊆ H, is said to be:

  1. (a)

    monotone if

    ⟨x − y, Tx − Ty⟩ ≥ 0, ∀x, y ∈ D(T),
  2. (b)

    η-strongly monotone if there exists η>0 such that

    ⟨x − y, Tx − Ty⟩ ≥ η‖x − y‖², ∀x, y ∈ D(T),
  3. (c)

    α-inverse strongly monotone (for short, α-ism) if there exists α>0 such that

    ⟨x − y, Tx − Ty⟩ ≥ α‖Tx − Ty‖², ∀x, y ∈ D(T).

It can be easily seen that (i) if T is nonexpansive, then I − T is monotone; (ii) the projection map P_C is a 1-ism. Inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems; see, for example, [34, 35] and the references therein.
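Observation (i) admits a minimal numerical illustration. In the sketch below, T is assumed to be a plane rotation (an isometry, hence nonexpansive); both the nonexpansiveness of T and the monotonicity of I − T are checked on random samples:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(x):
    # A (hypothetical) nonexpansive map on R^2: a rotation, which is an isometry.
    return R @ x

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # T nonexpansive: ||Tx - Ty|| <= ||x - y||  (equality here: rotations are isometries)
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
    # Hence I - T is monotone: <x - y, (I - T)x - (I - T)y> >= 0
    g = (x - T(x)) - (y - T(y))
    assert np.dot(x - y, g) >= -1e-12
```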

Proposition 2.2 [31]

Let T:HH be an operator.

  1. (a)

    T is nonexpansive if and only if the complement I − T is 1/2-ism.

  2. (b)

    If T is ν-ism, then γT is (ν/γ)-ism for γ > 0.

  3. (c)

    T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α in (0,1), T is α-averaged if and only if I − T is 1/(2α)-ism.

Lemma 2.5 [27]

Let B:C→H be an L-Lipschitzian mapping with constant L ≥ 0, and let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Then for 0 ≤ γL < μη, we have

⟨x − y, (μA − γB)x − (μA − γB)y⟩ ≥ (μη − γL)‖x − y‖², ∀x, y ∈ C.

That is, μA − γB is strongly monotone with coefficient μη − γL.

The following lemma plays a key role in proving strong convergence of our iterative schemes.

Lemma 2.6 [[5], Lemma 3.1]

Suppose that λ ∈ (0,1) and μ, κ, η > 0. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator on C. In association with a nonexpansive mapping T:C→C, define the mapping T^λ:C→H by

T^λ x := Tx − λμA(Tx), x ∈ C.

Then T^λ is a contraction provided μ < 2η/κ², that is,

‖T^λ x − T^λ y‖ ≤ (1 − λτ)‖x − y‖, ∀x, y ∈ C,

where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0,1].
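Lemma 2.6 can be sanity-checked numerically. In the sketch below, A = I (so κ = η = 1), T is a rotation, and μ, λ are sample values satisfying μ < 2η/κ²; all of these choices are assumptions made for the demo, not data from the lemma:

```python
import numpy as np

# Illustrative instance in R^2: A = I is 1-Lipschitzian (kappa = 1) and
# 1-strongly monotone (eta = 1); T is a rotation (nonexpansive).
kappa, eta = 1.0, 1.0
mu = 1.0                                          # must satisfy mu < 2*eta/kappa**2 = 2
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * eta - mu * kappa**2))

theta = 0.3
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T_lam(x, lam):
    Tx = T @ x
    return Tx - lam * mu * Tx                     # T^lambda x = Tx - lam*mu*A(Tx), A = I

rng = np.random.default_rng(2)
lam = 0.5
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = np.linalg.norm(T_lam(x, lam) - T_lam(y, lam))
    rhs = (1.0 - lam * tau) * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-12                     # Lemma 2.6 contraction estimate
```

For this data τ = 1, so the contraction estimate is attained with equality, which makes the check tight.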

Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let B:C→H be an L-Lipschitzian mapping with constant L ≥ 0. Assume that T:C→C is a nonexpansive mapping with F(T) ≠ ∅. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Let the net {x_t}_{t∈(0,1)} be generated by the following implicit scheme:

x_t = P_C[tγBx_t + (I − tμA)Tx_t].
(2.2)

Then {x_t}_{t∈(0,1)} converges strongly to a fixed point x̃ of T, which solves the variational inequality

⟨(γB − μA)x̃, x̃ − z⟩ ≥ 0, ∀z ∈ F(T).
(2.3)

Let the sequence { x n } n N be generated by the following explicit scheme:

x_{n+1} = P_C[α_n γBx_n + (I − α_n μA)Tx_n].
(2.4)

Then {x_n}_{n∈ℕ} converges strongly to a fixed point x̃ of T, which is also a solution of the variational inequality (2.3); see [27] for more details.

Consider a self-mapping S t on C defined by

S_t x = P_C[tγBx + (I − tμA)Tx], ∀x ∈ C.
(2.5)

Then S_t is a contraction, and it has a unique fixed point in C, which uniquely solves the fixed point equation (2.2); see [27] for more details.
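A toy run of the explicit scheme (2.4) in ℝ² may help fix ideas. Every concrete ingredient below (C the closed unit ball, T a rotation, A = I, B = 0, and the parameter choices) is an assumption made purely for illustration:

```python
import numpy as np

theta = 0.5
T = np.array([[np.cos(theta), -np.sin(theta)],   # nonexpansive with F(T) = {0}
              [np.sin(theta),  np.cos(theta)]])

def P_C(x):                                      # projection onto the closed unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

B = lambda x: np.zeros(2)                        # L = 0, so 0 <= gamma*L < tau holds
mu, gamma = 1.0, 1.0                             # A = I: kappa = eta = 1, mu < 2

x = np.array([0.9, 0.3])
for n in range(2000):
    alpha = 1.0 / (n + 2)                        # alpha_n -> 0, sum alpha_n = infinity
    Tx = T @ x
    # scheme (2.4): x_{n+1} = P_C[alpha*gamma*B(x) + (I - alpha*mu*A) T x], A = I
    x = P_C(alpha * gamma * B(x) + Tx - alpha * mu * Tx)

# The iterates approach the unique fixed point 0 of T.
assert np.linalg.norm(x) < 1e-2
```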

Proposition 2.3 [[27], Proposition 3.1]

Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let B:C→H be an L-Lipschitzian mapping with constant L ≥ 0. Assume that T:C→C is a nonexpansive mapping with F(T) ≠ ∅. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). For each t in (0,1), let x_t denote the unique solution of the fixed point equation (2.2). Then the following properties hold for the net {x_t}_{t∈(0,1)}:

  1. (1)

    { x t } t ( 0 , 1 ) is bounded;

  2. (2)

    lim_{t→0} ‖x_t − Tx_t‖ = 0;

  3. (3)

    t ↦ x_t defines a continuous curve from (0,1) into C.

Theorem 2.1 [[27], Theorem 3.1]

Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let B:C→H be an L-Lipschitzian mapping with constant L ≥ 0. Assume T:C→C is a nonexpansive mapping with F(T) ≠ ∅. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). For each t in (0,1), let x_t denote the unique solution of the fixed point equation (2.2). Then the net {x_t} converges strongly, as t → 0, to a fixed point x̃ of T, which solves the variational inequality (2.3), or equivalently, P_{F(T)}(I − μA + γB)x̃ = x̃.

Lemma 2.7 [36]

Let {s_n}, {γ_n} be sequences of nonnegative real numbers satisfying the inequality

s_{n+1} ≤ (1 − γ_n)s_n + γ_n δ_n, n ≥ 0.

Suppose that { γ n } and { δ n } satisfy the conditions:

  1. (i)

    {γ_n} ⊂ [0,1] and ∑_{n=0}^∞ γ_n = ∞, or equivalently, ∏_{n=0}^∞ (1 − γ_n) = 0;

  2. (ii)

    lim sup_{n→∞} δ_n ≤ 0, or

(ii)′ ∑_{n=0}^∞ |γ_n δ_n| < ∞.

Then lim_{n→∞} s_n = 0.

Lemma 2.8 [37]

Let { β n } be a sequence of real numbers with

0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1.

Let { x n } and { z n } be two sequences in a Banach space E such that

x_{n+1} = (1 − β_n)x_n + β_n z_n, ∀n ≥ 1.

If

lim sup_{n→∞} (‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0,

then lim_{n→∞} ‖x_n − z_n‖ = 0.

Let C be a subset of a real Banach space E, and let {T_n}_{n=1}^∞ be a family of mappings of C such that ∩_{n=1}^∞ F(T_n) ≠ ∅. Then {T_n}_{n=1}^∞ is said to satisfy the AKTT-condition [38] if for each bounded subset D of C, we have

∑_{n=1}^∞ sup{‖T_{n+1}z − T_n z‖ : z ∈ D} < ∞.

Lemma 2.9 [38]

Let C be a nonempty subset of a real Banach space E, and let { T n } n = 1 be a family of mappings of C into itself, which satisfies the AKTT-condition. Then for each x in C, we have that { T n x } n = 1 converges strongly to a point in C. Moreover, let the mapping T be defined by

Tx = lim_{n→∞} T_n x, ∀x ∈ C.

Then for each bounded subset D of C, we have

lim_{n→∞} sup{‖T_n z − Tz‖ : z ∈ D} = 0.

In the sequel, we will write that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition if {T_n}_{n=1}^∞ satisfies the AKTT-condition, and T is defined by Lemma 2.9.

We end this section with the following simple examples of mappings satisfying the AKTT-condition (see also Lemma 5.2).

Example 2.1 (i) Let E be any Banach space. For any n ∈ ℕ, let the mapping T_n : E → E be defined by

T_n(x) = x/n  (x ∈ E).

Then T_n is a nonexpansive mapping for each n ∈ ℕ. It can easily be seen that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition, where T(x) = 0 for all x ∈ E.
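The AKTT-condition for this family can also be checked numerically: on D = {z : ‖z‖ ≤ r} one has sup_{z∈D} ‖T_{n+1}z − T_n z‖ = r/(n(n+1)), and the series telescopes. A short sketch (r = 1 is an assumed sample value):

```python
# Numerical check of the AKTT-condition for T_n(x) = x/n on the bounded set
# D = {z : |z| <= r}: sup over D of |T_{n+1} z - T_n z| equals
# r * |1/(n+1) - 1/n| = r / (n*(n+1)), and the partial sums telescope to r(1 - 1/N).
r = 1.0
N = 100000
terms = [r * abs(1.0 / (n + 1) - 1.0 / n) for n in range(1, N)]
partial = sum(terms)
assert partial < r + 1e-12                     # bounded partial sums => AKTT holds
assert abs(partial - (r - r / N)) < 1e-9       # telescoping identity
```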

  1. (ii)

    Let E = ℓ², where

    ℓ² = {σ = (σ_1, σ_2, …, σ_n, …) : ∑_{n=1}^∞ σ_n² < ∞}, with norm ‖σ‖ = (∑_{n=1}^∞ σ_n²)^{1/2} for σ ∈ ℓ², and inner product ⟨σ, η⟩ = ∑_{n=1}^∞ σ_n η_n for σ = (σ_1, σ_2, …, σ_n, …), η = (η_1, η_2, …, η_n, …) ∈ ℓ².

Let {x_n}_{n∈ℕ∪{0}} ⊂ E be a sequence defined by

x_0 = (1, 0, 0, 0, …),
x_1 = (1, 1, 0, 0, 0, …),
x_2 = (1, 0, 1, 0, 0, 0, …),
x_3 = (1, 0, 0, 1, 0, 0, 0, …),
…,
x_n = (σ_{n,1}, σ_{n,2}, …, σ_{n,k}, …),
…,

where

σ_{n,k} = { 1 if k = 1 or k = n+1,
            0 otherwise,

for all n ∈ ℕ. It is clear that the sequence {x_n}_{n∈ℕ} converges weakly to x_0. Indeed, for any Λ = (λ_1, λ_2, …, λ_n, …) ∈ ℓ² = (ℓ²)*, we have

Λ(x_n − x_0) = ⟨x_n − x_0, Λ⟩ = ∑_{k=2}^∞ λ_k σ_{n,k} = λ_{n+1} → 0

as n → ∞. It is also obvious that ‖x_n − x_m‖ = √2 for any n ≠ m with n, m ≥ 1. Thus, {x_n}_{n∈ℕ} is not a Cauchy sequence. We define a countable family of mappings T_j : E → E by

T_j(x) = { (n/(n+1))x, if x = x_n;
           (j/(j+1))x, if x ≠ x_n,

for all j ≥ 1 and n ≥ 0. It is clear that F(T_j) = {0} for all j ≥ 1. It is obvious that T_j is a quasi-nonexpansive mapping for each j ∈ ℕ. Thus, {T_j}_{j∈ℕ} is a countable family of quasi-nonexpansive mappings.

Let Tx= lim j T j x for all xE. It is easy to see that

T(x) = { (n/(n+1))x, if x = x_n;
         x, if x ≠ x_n.

Then we obtain that T is a quasi-nonexpansive mapping with F(T) = {0} = F̃(T). Let D be a bounded subset of E. Then there exists r > 0 such that D ⊆ B_r = {z ∈ E : ‖z‖ < r}. On the other hand, for any j ∈ ℕ, we have

∑_{j=1}^∞ sup{‖T_{j+1}z − T_j z‖ : z ∈ D} = ∑_{j=1}^∞ sup{‖((j+1)/(j+2))z − (j/(j+1))z‖ : z ∈ D} = ∑_{j=1}^∞ [1/((j+2)(j+1))] sup{‖z‖ : z ∈ D} < ∞.

Furthermore, we have

lim_{j→∞} sup{‖T_j z − Tz‖ : z ∈ D} = 0.

Therefore, ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition.

3 Fixed point and convergence theorems

Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let P C be the metric (or nearest point) projection from H onto C.

Theorem 3.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that {T_n}_{n=1}^∞ is a sequence of nonexpansive mappings from C into itself such that ∩_{n=1}^∞ F(T_n) ≠ ∅. Suppose, in addition, that T:C→C is a nonexpansive mapping such that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition, and S:C→C is a nonexpansive mapping with F := ∩_{n=1}^∞ F(T_n) ∩ F(S) ≠ ∅. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0. Let B:C→H be an L-Lipschitzian mapping with constant L ≥ 0. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)).

For arbitrarily given x 1 in C, let the sequence { x n } be generated iteratively by

{ y_n = P_C[α_n γBx_n + (I − α_n μA)x_n],
  x_{n+1} = (1 − β_n)x_n + β_n T_n S y_n,  n ∈ ℕ,
(3.1)

where { α n }, { β n } are two real sequences in (0,1) satisfying the following control conditions:

(a) lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞; (b) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1.
(3.2)

Then the sequence {x_n} converges strongly to x* ∈ F(TS), which solves the variational inequality

⟨(μA − γB)x*, x* − z⟩ ≤ 0, ∀z ∈ F(TS).
(3.3)

Proof We divide the proof into several steps.

Step I. We claim that the sequence {x_n} is bounded. Let p ∈ F be fixed. In view of Lemma 2.6, we conclude that

‖(I − α_n μA)x_n − (I − α_n μA)p‖ ≤ (1 − α_n τ)‖x_n − p‖, ∀n ∈ ℕ.

This together with (3.1) implies that

‖y_n − p‖ = ‖P_C[α_n γBx_n + (I − α_n μA)x_n] − P_C p‖
≤ ‖α_n γBx_n + (I − α_n μA)x_n − p‖
= ‖α_n(γBx_n − μAp) + (I − α_n μA)x_n − (I − α_n μA)p‖
≤ α_n γL‖x_n − p‖ + α_n‖(γB − μA)p‖ + (1 − α_n τ)‖x_n − p‖
= (1 − α_n(τ − γL))‖x_n − p‖ + α_n‖(γB − μA)p‖
≤ max{‖x_n − p‖, ‖(γB − μA)p‖/(τ − γL)}.
(3.4)

Since T_n is nonexpansive for all n ∈ ℕ, it follows from (3.1) and (3.4) that

‖x_{n+1} − p‖ = ‖(1 − β_n)(x_n − p) + β_n(T_n S y_n − p)‖
≤ (1 − β_n)‖x_n − p‖ + β_n‖T_n S y_n − p‖
≤ (1 − β_n)‖x_n − p‖ + β_n‖y_n − p‖
≤ (1 − β_n)‖x_n − p‖ + β_n max{‖x_n − p‖, ‖(γB − μA)p‖/(τ − γL)}
≤ max{‖x_n − p‖, ‖(γB − μA)p‖/(τ − γL)}.
(3.5)

By induction, we have that {x_n} is bounded. This implies that the sequences {Ax_n}, {Bx_n}, {y_n}, {Sy_n} and {T_n S y_n} are bounded too. Let

M = sup{‖x_n‖, ‖Ax_n‖, ‖Bx_n‖, ‖y_n‖, ‖Sy_n‖, ‖T_n S y_n‖ : n ∈ ℕ} < ∞,

and set

D = {z ∈ H : ‖z‖ ≤ M}.

Then D is a bounded subset of H, and {x_n}, {Ax_n}, {Bx_n}, {y_n}, {Sy_n}, {T_n S y_n} ⊂ D.

Step II. We claim that lim_{n→∞} ‖y_n − TSy_n‖ = 0. In view of (3.1), we obtain

‖y_n − x_n‖ = ‖P_C[α_n γBx_n + (I − α_n μA)x_n] − P_C[x_n]‖
≤ ‖α_n γBx_n + (I − α_n μA)x_n − x_n‖
= ‖α_n γBx_n + x_n − α_n μAx_n − x_n‖
= α_n‖(γB − μA)x_n‖.
(3.6)

Since lim_{n→∞} α_n = 0, it follows from (3.6) that

lim_{n→∞} ‖y_n − x_n‖ = 0.
(3.7)

In view of Lemma 2.6, we conclude that

‖(I − α_{n+1} μA)x_{n+1} − (I − α_{n+1} μA)x_n‖ ≤ (1 − α_{n+1} τ)‖x_{n+1} − x_n‖, ∀n ∈ ℕ.

This implies that

‖y_{n+1} − y_n‖ = ‖P_C[α_{n+1} γBx_{n+1} + (I − α_{n+1} μA)x_{n+1}] − P_C[α_n γBx_n + (I − α_n μA)x_n]‖
≤ ‖α_{n+1} γBx_{n+1} + (I − α_{n+1} μA)x_{n+1} − α_n γBx_n − (I − α_n μA)x_n‖
= ‖α_{n+1} γ(Bx_{n+1} − Bx_n) + γ(α_{n+1} − α_n)Bx_n + (I − α_{n+1} μA)x_{n+1} − (I − α_{n+1} μA)x_n + (α_n − α_{n+1})μAx_n‖
≤ α_{n+1} γL‖x_{n+1} − x_n‖ + (1 − α_{n+1} τ)‖x_{n+1} − x_n‖ + γ|α_{n+1} − α_n|‖Bx_n‖ + μ|α_{n+1} − α_n|‖Ax_n‖
≤ (1 − α_{n+1}(τ − γL))‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n|
≤ ‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n|.
(3.8)

Next, we show that lim_{n→∞} ‖x_{n+1} − x_n‖ = 0. To this end, define the sequence {z_n} by z_n = T_n S y_n. It follows from (3.8) that

‖z_{n+1} − z_n‖ = ‖T_{n+1}Sy_{n+1} − T_n S y_n‖
≤ ‖T_{n+1}Sy_{n+1} − T_{n+1}Sy_n‖ + ‖T_{n+1}Sy_n − T_n S y_n‖
≤ ‖y_{n+1} − y_n‖ + ‖T_{n+1}Sy_n − T_n S y_n‖
≤ ‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n| + sup{‖T_{n+1}z − T_n z‖ : z ∈ D}.
(3.9)

This implies that

‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖ ≤ (γ + μ)M|α_{n+1} − α_n| + sup{‖T_{n+1}z − T_n z‖ : z ∈ D}.
(3.10)

Since lim_{n→∞} α_n = 0, in view of the AKTT-condition and (3.2)(a), we conclude that

lim sup_{n→∞} (‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0.

Utilizing Lemma 2.8, we deduce that

lim_{n→∞} ‖z_n − x_n‖ = 0.

It follows from (3.1) and (3.2)(b) that

lim_{n→∞} ‖x_{n+1} − x_n‖ = lim_{n→∞} β_n‖z_n − x_n‖ = 0.
(3.11)

On the other hand, we have

‖y_n − T_n S y_n‖ ≤ ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + ‖x_{n+1} − T_n S y_n‖
= ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + (1 − β_n)‖x_n − T_n S y_n‖
≤ ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + (1 − β_n)[‖x_n − y_n‖ + ‖y_n − T_n S y_n‖].

This implies that

‖y_n − T_n S y_n‖ ≤ (1/β_n)[2‖y_n − x_n‖ + ‖x_n − x_{n+1}‖].
(3.12)

In view of (3.7) and (3.11)-(3.12), we obtain

lim_{n→∞} ‖y_n − T_n S y_n‖ = 0.
(3.13)

By the triangle inequality, we obtain

‖y_n − TSy_n‖ ≤ ‖y_n − T_n S y_n‖ + ‖T_n S y_n − TSy_n‖ ≤ ‖y_n − T_n S y_n‖ + sup{‖T_n z − Tz‖ : z ∈ D}.
(3.14)

In view of the AKTT-condition and (3.13)-(3.14), we deduce that

lim_{n→∞} ‖y_n − TSy_n‖ = 0.

Step III. We prove that there exists x* ∈ F(TS) such that

lim sup_{n→∞} ⟨(μA − γB)x*, x* − y_n⟩ ≤ 0.

For each t ∈ (0,1), we define the mapping S_t : C → C by

S_t(x) = P_C[tγBx + (I − tμA)TSx], ∀x ∈ C.

Since S, T and I − tμA are nonexpansive mappings for each t ∈ (0,1), in view of (2.5), we conclude that S_t is a contraction for each t ∈ (0,1); hence, by the Banach contraction principle, there exists a unique fixed point x_t ∈ C such that S_t(x_t) = x_t. Thus, we have

x_t = P_C[tγBx_t + (I − tμA)TSx_t].
(3.15)

Next, we show that lim_{t→0} x_t =: x* exists. We first show that {x_t} is bounded. To this end, let p ∈ F be fixed. In view of Lemma 2.6, we obtain

‖x_t − p‖ = ‖P_C[tγBx_t + (I − tμA)TSx_t] − P_C p‖
≤ ‖tγBx_t + (I − tμA)TSx_t − p‖
= ‖t(γBx_t − μAp) + (I − tμA)TSx_t − (I − tμA)p‖
≤ tγL‖x_t − p‖ + t‖(γB − μA)p‖ + (1 − tτ)‖x_t − p‖
= (1 − t(τ − γL))‖x_t − p‖ + t‖(γB − μA)p‖.

This implies that

‖x_t − p‖ ≤ ‖(γB − μA)p‖/(τ − γL).

Thus, {x_t}_{t∈(0,1)} is bounded, and so are {ATSx_t}_{t∈(0,1)} and {(γB − μA)x_t}_{t∈(0,1)}. In view of (3.15), we obtain

‖x_t − TSx_t‖ = ‖P_C[tγBx_t + (I − tμA)TSx_t] − P_C(TSx_t)‖
≤ ‖tγBx_t + (I − tμA)TSx_t − TSx_t‖
= t‖γBx_t − μATSx_t‖.

This implies that

lim_{t→0⁺} ‖x_t − TSx_t‖ = 0.
(3.16)

Using the techniques in the proof of Theorem 2.1, we see that the variational inequality (3.3) has a unique solution x̃ ∈ F(TS). We show that x_t → x̃ as t → 0. To this end, set

y_t = tγBx_t + (I − tμA)TSx_t, t ∈ (0,1).

Then we have x_t = P_C y_t, and for any given z ∈ F(TS),

x_t − z = P_C y_t − y_t + y_t − z = P_C y_t − y_t + t(γBx_t − μAz) + (I − tμA)TSx_t − (I − tμA)TSz.
(3.17)

Since P_C is the metric projection from H onto C, for each z ∈ F(TS), we have

⟨P_C y_t − y_t, P_C y_t − z⟩ ≤ 0.

Exploiting Lemma 2.1 and (3.17), we obtain

‖x_t − z‖² = ⟨x_t − z, x_t − z⟩
= ⟨P_C y_t − y_t, P_C y_t − z⟩ + ⟨(I − tμA)TSx_t − (I − tμA)TSz, x_t − z⟩ + t⟨γBx_t − μAz, x_t − z⟩
≤ (1 − tτ)‖x_t − z‖² + t⟨γBx_t − μAz, x_t − z⟩.
(3.18)

It follows from (3.18) that

‖x_t − z‖² ≤ (1/τ)⟨γBx_t − μAz, x_t − z⟩
≤ (1/τ)[⟨γBx_t − γBz, x_t − z⟩ + ⟨γBz − μAz, x_t − z⟩]
≤ (1/τ)[γL‖x_t − z‖² + ⟨γBz − μAz, x_t − z⟩].

This implies that

‖x_t − z‖² ≤ [1/(τ − γL)]⟨γBz − μAz, x_t − z⟩.
(3.19)

Let {t_n} ⊂ (0,1) be such that t_n → 0⁺ as n → ∞. Set x_n := x_{t_n}. It follows from (3.16) that lim_{n→∞} ‖x_n − TSx_n‖ = 0. The boundedness of {x_t} implies that there exists x* ∈ C such that x_n ⇀ x* (weak convergence) as n → ∞. In view of Lemma 2.4, we deduce that x* ∈ F(TS). Since x_n ⇀ x* as n → ∞, it follows from (3.19) with z = x* that lim_{n→∞} ‖x_n − x*‖ = 0. Thus, lim_{t→0⁺} x_t = x* is well defined. Next, we show that x* solves the variational inequality (3.3). Observe that

x_t = P_C y_t = P_C y_t − y_t + tγBx_t + (I − tμA)TSx_t.

Thus, we have

(μA − γB)x_t = (1/t)(P_C y_t − y_t) − (1/t)(I − TS)x_t + μ(Ax_t − ATSx_t).

Since TS is nonexpansive, we conclude that I − TS is monotone. The characterization of the metric projection then implies that

⟨(γB − μA)x_t, x_t − z⟩ = −(1/t)⟨P_C y_t − y_t, x_t − z⟩ + (1/t)⟨(I − TS)x_t − (I − TS)z, x_t − z⟩ − μ⟨Ax_t − ATSx_t, x_t − z⟩
≥ −μ⟨Ax_t − ATSx_t, x_t − z⟩
≥ −μκ‖x_t − TSx_t‖‖x_t − z‖.
(3.20)

Replacing t by t_n in (3.20), letting n → ∞, and noticing that {‖x_t − z‖}_{t∈(0,1)} is bounded for z ∈ F(TS), we obtain from (3.16) that

⟨(γB − μA)x*, x* − z⟩ ≥ 0.
(3.21)

Thus, x* ∈ F(TS) is a solution of the variational inequality (3.3); consequently, x* = x̃ by uniqueness. Therefore, x_t → x̃ as t → 0. The variational inequality (3.3) can be written as

⟨(I − μA + γB)x̃ − x̃, x̃ − z⟩ ≥ 0, ∀z ∈ F(TS).

So, in terms of Lemma 2.1, it is equivalent to the following fixed point equation:

P_{F(TS)}(I − μA + γB)x̃ = x̃.

Since {y_n} is bounded, for any subsequence of {y_n}, there exists a further subsequence {y_{n_i}} such that y_{n_i} ⇀ u ∈ C. In view of Lemma 2.4 and Step II, we conclude that u ∈ F(TS). This together with (3.21) implies that

lim sup_{n→∞} ⟨μAx* − γBx*, x* − y_n⟩ = lim_{i→∞} ⟨μAx* − γBx*, x* − y_{n_i}⟩ = ⟨μAx* − γBx*, x* − u⟩ ≤ 0.

Step IV. We claim that lim_{n→∞} ‖x_n − x*‖ = 0.

For each n ∈ ℕ ∪ {0}, we set

v_n = α_n γBx_n + (I − α_n μA)x_n

and observe that y_n = P_C v_n. Then, by Lemmas 2.1 and 2.6, we obtain

‖y_n − x*‖² = ⟨y_n − x*, y_n − x*⟩
= ⟨P_C v_n − v_n, P_C v_n − x*⟩ + ⟨v_n − x*, y_n − x*⟩
≤ ⟨v_n − x*, y_n − x*⟩
= α_n⟨γBx_n − μAx*, y_n − x*⟩ + ⟨(I − α_n μA)x_n − (I − α_n μA)x*, y_n − x*⟩
= α_n γ⟨Bx_n − Bx*, y_n − x*⟩ + α_n⟨(γB − μA)x*, y_n − x*⟩ + ⟨(I − α_n μA)x_n − (I − α_n μA)x*, y_n − x*⟩
≤ α_n γL‖x_n − x*‖‖y_n − x*‖ + α_n⟨(γB − μA)x*, y_n − x*⟩ + (1 − α_n τ)‖x_n − x*‖‖y_n − x*‖
= (1 − α_n(τ − γL))‖x_n − x*‖‖y_n − x*‖ + α_n⟨(γB − μA)x*, y_n − x*⟩
≤ (1 − α_n(τ − γL))·(1/2)(‖x_n − x*‖² + ‖y_n − x*‖²) + α_n⟨(γB − μA)x*, y_n − x*⟩.
(3.22)

This implies that

‖y_n − x*‖² ≤ [(1 − α_n(τ − γL))/(1 + α_n(τ − γL))]‖x_n − x*‖² + [2α_n/(1 + α_n(τ − γL))]⟨(γB − μA)x*, y_n − x*⟩
≤ (1 − α_n(τ − γL))‖x_n − x*‖² + [2α_n/(1 + α_n(τ − γL))]⟨(γB − μA)x*, y_n − x*⟩
= (1 − α_n(τ − γL))‖x_n − x*‖² + α_n θ_n ξ_n,
(3.23)

where

θ_n = τ − γL and ξ_n = [2/((1 + α_n(τ − γL))(τ − γL))]⟨(γB − μA)x*, y_n − x*⟩.

In view of (3.22) and (3.23), we conclude that

‖x_{n+1} − x*‖² = ‖(1 − β_n)x_n + β_n T_n S y_n − x*‖²
≤ (1 − β_n)‖x_n − x*‖² + β_n‖T_n S y_n − x*‖²
≤ (1 − β_n)‖x_n − x*‖² + β_n‖y_n − x*‖²
≤ (1 − β_n)‖x_n − x*‖² + β_n[(1 − α_n(τ − γL))‖x_n − x*‖² + α_n θ_n ξ_n]
= (1 − β_n α_n(τ − γL))‖x_n − x*‖² + β_n α_n θ_n ξ_n
= (1 − γ_n)‖x_n − x*‖² + γ_n ξ_n,
(3.24)

where γ_n = β_n α_n(τ − γL). It is easy to show that lim_{n→∞} γ_n = 0, ∑_{n=0}^∞ γ_n = ∞ and lim sup_{n→∞} ξ_n ≤ 0. Hence, in view of Lemma 2.7 and (3.24), we conclude that the sequence {x_n} converges strongly to x* ∈ F(TS). □
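As a sanity check of Theorem 3.1, scheme (3.1) can be run on a toy instance in ℝ². Every concrete choice below (C the closed unit ball, S a rotation, T_n = (n/(n+1))I, A = I, B = 0 and the parameter sequences) is an assumption made purely for illustration; for this data the common fixed point, and hence the limit, is 0:

```python
import numpy as np

theta = 0.4
S = np.array([[np.cos(theta), -np.sin(theta)],   # nonexpansive with F(S) = {0}
              [np.sin(theta),  np.cos(theta)]])

def P_C(x):                                      # projection onto the closed unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def T_n(n, x):
    # T_n = (n/(n+1)) I: nonexpansive, F(T_n) = {0}, and {T_n} satisfies
    # the AKTT-condition (cf. Example 2.1(i), up to the scaling).
    return (n / (n + 1.0)) * x

B = lambda x: np.zeros(2)                        # L = 0, so 0 <= gamma*L < tau holds
mu, gamma = 1.0, 1.0                             # A = I: kappa = eta = 1
beta = 0.5                                       # constant beta_n satisfies condition (b)

x = np.array([0.8, -0.5])
for n in range(1, 2001):
    alpha = 1.0 / (n + 1)                        # condition (a)
    y = P_C(alpha * gamma * B(x) + (1.0 - alpha * mu) * x)   # (I - alpha*mu*A)x, A = I
    x = (1.0 - beta) * x + beta * T_n(n, S @ y)

assert np.linalg.norm(x) < 1e-2                  # converges to the common fixed point 0
```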

Remark 3.1 Theorem 3.1 improves and extends [[28], Theorems 3.1 and 3.2] in the following aspects.

  1. (i)

    The identity mapping I is extended to the case of an operator A:C→H, where A is a κ-Lipschitzian and η-strongly monotone (possibly nonself-) mapping.

  2. (ii)

    In order to find a common fixed point of a countable family of nonexpansive self-mappings T n :CC, the Mann-type iterations in [[28], Theorem 3.2] are extended to develop the new Mann-type iteration (3.1).

  3. (iii)

    A new technique of argument is applied in deriving Theorem 3.1. For instance, the characteristic properties (Lemma 2.1) of the metric projection P_C play an important role in proving the strong convergence of the net {x_t}_{t∈(0,1)} in Theorem 3.1.

  4. (iv)

    Whenever C = H, B = 0, A = I (the identity mapping on C) and μ = 1, Theorem 3.1 reduces to [[28], Theorem 3.2]. Thus, Theorem 3.1 covers [[28], Theorems 3.1 and 3.2] as special cases.

Remark 3.2 In Theorem 3.1, it is shown that any sequence generated by the iterative scheme (3.1) converges strongly to the unique solution of the variational inequality problem (3.3). This variational inequality problem is more general than many others (see, for example, [27]) because S is an arbitrary nonexpansive mapping. Moreover, owing to the well-known relations between fixed points of nonexpansive mappings and variational inequalities, the solution set of (3.3) can be viewed as the fixed point set of some nonexpansive mapping V, and this mapping could then be added to the countable family of nonexpansive mappings T_n. In addition, the feasible set of the variational inequality problem (3.3) is Fix(TS), with T and S being nonexpansive mappings. For several subclasses of nonexpansive mappings, Fix(TS) = Fix(T) ∩ Fix(S) holds (see, e.g., [31, 32] for averaged mappings).

4 Constrained convex minimization problems

Let H be a real Hilbert space, and let C be a nonempty, closed and convex subset of H. Consider the following constrained convex minimization problem:

minimize {f(x) : x ∈ C},
(4.1)

where f:C→ℝ is a real-valued convex function. If f is Fréchet differentiable, then the gradient-projection method (GPM, for short) generates a sequence {x_n} using the following recursive formula:

x_{n+1} = P_C(x_n − λ∇f(x_n)), n ≥ 0,
(4.2)

or more generally,

x_{n+1} = P_C(x_n − λ_n ∇f(x_n)), n ≥ 0,
(4.3)

where in both (4.2) and (4.3), the initial guess x_0 is taken from C arbitrarily, and the parameters λ or λ_n are positive real numbers. The convergence of algorithms (4.2) and (4.3) depends on the behavior of the gradient ∇f. As a matter of fact, it is known that if ∇f is α-strongly monotone and L-Lipschitzian with constants α, L > 0, then for 0 < λ < 2α/L², the operator

T := P_C(I − λ∇f)
(4.4)

is a contraction; hence, the sequence {x_n} defined by algorithm (4.2) converges in norm to the unique solution of the minimization problem (4.1). More generally, if the sequence {λ_n} is chosen to satisfy the property

0 < lim inf_{n→∞} λ_n ≤ lim sup_{n→∞} λ_n < 2α/L²,
(4.5)

then the sequence {x_n} defined by algorithm (4.3) converges in norm to the unique minimizer of (4.1).

However, if the gradient ∇f fails to be strongly monotone, the operator T defined by (4.4) may fail to be contractive; consequently, the sequence {x_n} generated by algorithm (4.2) may fail to converge strongly (see [[17], Section 4]). If ∇f is Lipschitzian, then algorithms (4.2) and (4.3) can still converge in the weak topology under certain conditions.
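A minimal run of the gradient-projection method can be given for a toy constrained quadratic whose gradient is 1-Lipschitzian and 1-strongly monotone (all data below are assumptions made for the demo):

```python
import numpy as np

# GPM (4.2) for: minimize f(x) = 0.5 * ||x - b||^2 over the box C = [0, 1]^2.
b = np.array([2.0, -1.0])
grad_f = lambda x: x - b            # L-Lipschitzian with L = 1, 1-strongly monotone

def P_C(x):
    return np.clip(x, 0.0, 1.0)     # projection onto the box

x = np.zeros(2)
lam = 1.0                           # step size in (0, 2*alpha/L^2) = (0, 2)
for _ in range(100):
    x = P_C(x - lam * grad_f(x))

# The constrained minimizer is the projection of b onto the box, namely (1, 0).
assert np.allclose(x, [1.0, 0.0], atol=1e-8)
```

Because the gradient here is strongly monotone, the iteration is a contraction and strong convergence holds, in line with the discussion above.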

Very recently, Xu [17] gave an alternative operator-oriented approach to algorithm (4.3); namely, an averaged mapping approach. He gave his averaged mapping approach to the gradient-projection algorithm (4.3) and the relaxed gradient-projection algorithm. Moreover, he constructed a counterexample, which shows that algorithm (4.2) does not converge in norm in an infinite-dimensional space, and also presented two modifications of gradient-projection algorithms, which are shown to have strong convergence. Further, he regularized the minimization problem (4.1) to devise an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of (4.1) in the consistent case.

Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. Motivated by the work of Xu [17], the authors of [27] introduced the following scheme, which generates a net {x_λ}_{λ∈(0,2/L)} in an implicit way:

x_λ = P_C[sγBx_λ + (I − sμA)T_λ x_λ],
(4.6)

where T λ and s satisfy the following conditions:

  1. (i)

    s := s(λ) = (2 − λL)/4 for each λ ∈ (0, 2/L);

  2. (ii)

    P_C(I − λ∇f) = sI + (1 − s)T_λ for each λ ∈ (0, 2/L).

They proved that {x_λ}_{λ∈(0,2/L)} converges strongly to a minimizer x* ∈ Ω of (4.1), which solves the variational inequality (2.3).

For an arbitrary initial guess x_0 ∈ C and a sequence {λ_n} ⊂ (0, 2/L) with λ_n → 2/L, they also proposed the following scheme, which generates a sequence {x_n} in an explicit way:

x_{n+1} = P_C[s_n γBx_n + (I − s_n μA)T_n x_n], n ≥ 0,
(4.7)

where s_n = (2 − λ_n L)/4 and P_C(I − λ_n ∇f) = s_n I + (1 − s_n)T_n for each n ≥ 0. It is proven in [27] that the sequence {x_n} converges strongly to a minimizer x* ∈ Ω of (4.1).

On the other hand, we know that x* ∈ C solves the minimization problem (4.1) if and only if x* solves the fixed point equation

x* = P_C(I − λ∇f)x*,

where λ > 0 is any fixed positive number. Note that the gradient ∇f being L-Lipschitzian implies that ∇f is (1/L)-ism [39], which then implies that λ∇f is (1/(λL))-ism. So by Proposition 2.2(c), I − λ∇f is (λL/2)-averaged. Now since the projection P_C is (1/2)-averaged, the composition P_C(I − λ∇f) is ((2 + λL)/4)-averaged for each λ ∈ (0, 2/L) (the composition of an α₁-averaged and an α₂-averaged mapping is (α₁ + α₂ − α₁α₂)-averaged; see [31, 32]). Hence, we can write

P_C(I − λ∇f) = [(2 − λL)/4]I + [(2 + λL)/4]T_λ = sI + (1 − s)T_λ,

where T_λ is nonexpansive, and s := s(λ) = (2 − λL)/4 ∈ (0, 1/2) for each λ ∈ (0, 2/L). It is easy to see that

s → 0⁺ as λ → 2/L.
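The decomposition above can be exercised numerically: from P_C(I − λ∇f) one can recover T_λ = [P_C(I − λ∇f) − sI]/(1 − s) and check that it is nonexpansive on random samples. In the sketch below, f(x) = ½‖x − b‖² (so L = 1) and C = [−1, 1]²; these choices are illustrative assumptions:

```python
import numpy as np

b = np.array([0.3, -0.2])
grad_f = lambda x: x - b                   # L-Lipschitzian with L = 1
P_C = lambda x: np.clip(x, -1.0, 1.0)      # projection onto C = [-1, 1]^2

L, lam = 1.0, 1.2                          # lam in (0, 2/L)
s = (2.0 - lam * L) / 4.0                  # s in (0, 1/2)

def T_lam(x):
    # T_lambda recovered from P_C(I - lam grad f) = s I + (1 - s) T_lambda
    return (P_C(x - lam * grad_f(x)) - s * x) / (1.0 - s)

rng = np.random.default_rng(3)
for _ in range(200):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # T_lambda should be nonexpansive by the averagedness argument above
    assert (np.linalg.norm(T_lam(x) - T_lam(y))
            <= np.linalg.norm(x - y) + 1e-10)
```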

For each fixed λ ∈ (0, 2/L), we now consider the self-mapping

Q_λx = P_C[sγBx + (I − sμA)T_λx], x ∈ C.

It is easy to see that Q_λ is a contraction; see [27] for more details. Thus, there exists a unique fixed point x_λ ∈ C, which uniquely solves the fixed point equation (4.6).
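Since Q_λ is a contraction, x_λ can be approximated in practice by Banach iteration. The concrete instance below (quadratic f over a box, μA = I, γB = 0, and the value of λ) is our own illustrative choice:

```python
import numpy as np

# Banach iteration for the fixed point x_lambda of Q_lambda in (4.6), on the
# toy instance f(x) = 0.5*||x - b||^2 over C = [0,1]^3 with mu*A = I, gamma*B = 0.
b = np.array([1.5, -0.3, 0.6])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
L = 1.0

def Q(lam, x):
    s = (2.0 - lam * L) / 4.0
    T = (proj_C(x - lam * (x - b)) - s * x) / (1.0 - s)   # T_lambda from the decomposition
    return proj_C((1.0 - s) * T)                          # Q_lambda x with gamma = 0, mu*A = I

lam = 1.9                                                 # close to 2/L, so s is small
x = np.zeros(3)
for _ in range(2000):                                     # contraction factor is 1 - s here
    x = Q(lam, x)
residual = np.linalg.norm(x - Q(lam, x))
print(x, residual)   # x approximates x_lambda; the residual should be essentially 0
```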

The following two results, which summarize the properties of the net {x_λ}_{λ∈(0,2/L)}, have been proved in [27].

Proposition 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. For each λ ∈ (0, 2/L), let x_λ denote the unique solution of the fixed point equation (4.6), where T_λ and s satisfy the following conditions:

  1. (i)

     s := s(λ) = (2 − λL)/4 for each λ ∈ (0, 2/L);

  2. (ii)

     P_C(I − λ∇f) = sI + (1 − s)T_λ for each λ ∈ (0, 2/L).

Then the following properties of the net {x_λ}_{λ∈(0,2/L)} hold:

  1. (a)

{x_λ}_{λ∈(0,2/L)} is bounded;

  2. (b)

lim_{λ→2/L} ‖x_λ − T_λx_λ‖ = 0;

  3. (c)

x_λ defines a continuous curve from (0, 2/L) into C.

Theorem 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. For each λ ∈ (0, 2/L), let x_λ denote the unique solution of the fixed point equation (4.6), where T_λ and s satisfy the following conditions:

  1. (i)

     s := s(λ) = (2 − λL)/4 for each λ ∈ (0, 2/L);

  2. (ii)

     P_C(I − λ∇f) = sI + (1 − s)T_λ for each λ ∈ (0, 2/L).

Then the net {x_λ}_{λ∈(0,2/L)} converges strongly, as λ → 2/L, to a minimizer x* of (4.1), which solves the variational inequality (2.3); equivalently, we have P_C(I − μA + γB)x* = x*.

We are now ready to propose explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem, and to prove that the sequences generated by our schemes converge strongly to a solution of the problem.

Theorem 4.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that {S_n}_{n=1}^∞ is a sequence of nonexpansive mappings from C into itself such that ∩_{n=1}^∞ F(S_n) ≠ ∅. Suppose, in addition, that S:C→C is a nonexpansive mapping such that ({S_n}_{n=1}^∞, S) satisfies the AKTT-condition. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. Let {λ_n} be a sequence in the interval (0, 2/L) such that {T_n} and {s_n} satisfy the following conditions:

  1. (i)

     s_n = (2 − λ_nL)/4 for each n ≥ 0;

  2. (ii)

     P_C(I − λ_n∇f) = s_nI + (1 − s_n)T_n for each n ≥ 0;

  3. (iii)

     s_n → 0;

  4. (iv)

     Σ_{n=0}^∞ s_n = ∞;

  5. (v)

     Σ_{n=1}^∞ |λ_{n+1} − λ_n| < ∞.

Suppose that {α_n}, {β_n} are two sequences of real numbers in (0,1) satisfying the following control conditions:

(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1.
(4.8)

For an arbitrary x₁ ∈ C, let the sequence {x_n} be generated by

y_n = P_C[α_nγBx_n + (I − α_nμA)x_n],
x_{n+1} = (1 − β_n)x_n + β_nT_nS_ny_n, n ∈ ℕ.
(4.9)

If lim_{n→∞} ‖y_n − Sy_n‖ = 0 and ∩_{n=1}^∞ F(T_n) ∩ F(S) ≠ ∅, then there exists a nonexpansive mapping T:C→C such that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition, and {x_n} converges strongly to a common element x* ∈ F(TS) ∩ Ω, which solves the variational inequality

⟨(μA − γB)x*, x* − z⟩ ≤ 0, ∀z ∈ F(TS).
(4.10)
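Before turning to the proof, here is a hedged numerical sketch of scheme (4.9) on a toy instance. All concrete choices (quadratic f over a box, S_n = S = identity so that the AKTT-condition and ‖y_n − Sy_n‖ → 0 hold trivially, μA = I, γB = 0, and the sequences α_n, β_n, λ_n) are ours, made only for illustration:

```python
import numpy as np

# Scheme (4.9) on the toy instance f(x) = 0.5*||x - b||^2 over C = [0,1]^3
# (L = 1), with S_n = S = identity, mu*A = I and gamma*B = 0.
b = np.array([1.5, -0.3, 0.6])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
L = 1.0

x = np.zeros(3)
for n in range(1, 3001):
    alpha = 1.0 / (n + 1)                     # alpha_n -> 0, sum alpha_n = infinity
    beta = 0.5                                # condition (b) holds trivially
    lam = 2.0 / L - 1.0 / (n + 1)             # lambda_n -> 2/L, summable increments
    s = (2.0 - lam * L) / 4.0                 # s_n = (2 - lambda_n L)/4
    y = proj_C((1.0 - alpha) * x)             # y_n = P_C[(I - alpha_n mu A) x_n]
    Ty = (proj_C(y - lam * (y - b)) - s * y) / (1.0 - s)   # T_n y_n via the decomposition
    x = (1.0 - beta) * x + beta * Ty          # x_{n+1}, with S_n = identity

x_star = proj_C(b)
print(x, x_star)
```

On this instance the iterates approach the projection of b onto the box, in line with the conclusion of Theorem 4.2.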

Proof We divide the proof into several steps.

First, we note that

  1. (1)

     x̃ ∈ C solves the minimization problem (4.1) if and only if, for each fixed λ > 0, x̃ solves the fixed point equation

     x̃ = P_C(I − λ∇f)x̃;
  2. (2)

     P_C(I − λ∇f) is ((2 + λL)/4)-averaged for each λ ∈ (0, 2/L); in particular, the following relation holds:

     P_C(I − λ_n∇f) = ((2 − λ_nL)/4)I + ((2 + λ_nL)/4)T_n = s_nI + (1 − s_n)T_n, n ≥ 0.

Step I. We claim that the sequence { T n } satisfies the AKTT-condition.

From the proof of Theorem 3.1, {x_n} is bounded, and so are {Bx_n}, {T_nx_n}, {y_n} and {S_ny_n}. Let D be a bounded subset of C such that {Bx_n, T_nx_n, y_n, S_ny_n : n ∈ ℕ} ⊂ D. Since ∇f is (1/L)-ism, P_C(I − λ_n∇f) is nonexpansive. It follows that, for any given z ∈ D and v ∈ Ω,

‖P_C(I − λ_n∇f)z‖ ≤ ‖P_C(I − λ_n∇f)z − v‖ + ‖v‖ = ‖P_C(I − λ_n∇f)z − P_C(I − λ_n∇f)v‖ + ‖v‖ ≤ ‖z − v‖ + ‖v‖ ≤ ‖z‖ + 2‖v‖.

This implies that

sup{‖P_C(I − λ_n∇f)z‖ : n ∈ ℕ, z ∈ D} < ∞.

On the other hand, since v ∈ Ω = F(T_n), we have for any z ∈ D and v ∈ Ω that

‖AT_nz‖ ≤ ‖AT_nz − Av‖ + ‖Av‖ = ‖AT_nz − AT_nv‖ + ‖Av‖ ≤ κ‖T_nz − T_nv‖ + ‖Av‖ ≤ κ‖z − v‖ + ‖Av‖ ≤ κ(‖z‖ + ‖v‖) + ‖Av‖.

Therefore,

sup{‖AT_nz‖ : n ∈ ℕ, z ∈ D} < ∞.

This shows that {AT_nz : n ∈ ℕ, z ∈ D} is bounded. We also obtain, for any z ∈ D, that

‖T_{n+1}z − T_nz‖ = ‖[4P_C(I − λ_{n+1}∇f)z − (2 − λ_{n+1}L)z]/(2 + λ_{n+1}L) − [4P_C(I − λ_n∇f)z − (2 − λ_nL)z]/(2 + λ_nL)‖
≤ [4(2 + λ_{n+1}L)‖P_C(I − λ_{n+1}∇f)z − P_C(I − λ_n∇f)z‖ + 4L|λ_{n+1} − λ_n|‖P_C(I − λ_{n+1}∇f)z‖ + 4L|λ_{n+1} − λ_n|‖z‖] / [(2 + λ_{n+1}L)(2 + λ_nL)]
≤ |λ_{n+1} − λ_n|[4‖∇f(z)‖ + L‖P_C(I − λ_{n+1}∇f)z‖ + L‖z‖]
≤ M|λ_{n+1} − λ_n|,

where we have used ‖P_C(I − λ_{n+1}∇f)z − P_C(I − λ_n∇f)z‖ ≤ |λ_{n+1} − λ_n|‖∇f(z)‖ together with the bounds (2 + λ_{n+1}L)(2 + λ_nL) ≥ 4 and 2 + λ_nL ≥ 2, and M > 0 is an appropriate constant such that

L‖P_C(I − λ_{n+1}∇f)z‖ + 4‖∇f(z)‖ + L‖z‖ ≤ M, n ≥ 0, z ∈ D.

Thus, we get

Σ_{n=1}^∞ sup{‖T_{n+1}z − T_nz‖ : z ∈ D} ≤ M Σ_{n=1}^∞ |λ_{n+1} − λ_n| < ∞.
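The bound ‖T_{n+1}z − T_nz‖ ≤ M|λ_{n+1} − λ_n| can be probed numerically. The instance below (quadratic f over the box [0,1]³ with L = 1, and a particular choice of λ_n) is our own illustrative assumption:

```python
import numpy as np

# Empirically estimate M = sup ||T_{n+1} z - T_n z|| / |lambda_{n+1} - lambda_n|
# for f(x) = 0.5*||x - b||^2 over C = [0,1]^3 (L = 1), where T_n comes from
# P_C(I - lambda_n grad f) = s_n I + (1 - s_n) T_n.
b = np.array([1.5, -0.3, 0.6])
proj_C = lambda x: np.clip(x, 0.0, 1.0)

def T(lam, z):
    s = (2.0 - lam) / 4.0
    return (proj_C(z - lam * (z - b)) - s * z) / (1.0 - s)

rng = np.random.default_rng(1)
zs = rng.uniform(-1.0, 2.0, size=(50, 3))        # a bounded sample set D
worst = 0.0
for n in range(1, 200):
    l1, l2 = 2.0 - 1.0 / (n + 1), 2.0 - 1.0 / (n + 2)   # lambda_n, lambda_{n+1}
    gap = max(np.linalg.norm(T(l2, z) - T(l1, z)) for z in zs)
    worst = max(worst, gap / abs(l2 - l1))              # empirical constant M
print(worst)   # stays bounded, consistent with the summability above
```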

Now, define a mapping T:CC by

Tx = lim_{n→∞} T_nx, x ∈ C.

Then T is a nonexpansive mapping. Since the minimization problem (4.1) is consistent, we conclude that ∩_{n=1}^∞ F(T_n) ≠ ∅. Consequently, ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition.

Step II. We claim that lim_{n→∞} ‖y_n − TSy_n‖ = 0. In view of (4.9), we obtain

‖y_n − x_n‖ = ‖P_C[α_nγBx_n + (I − α_nμA)x_n] − P_C[x_n]‖ ≤ ‖α_nγBx_n + (I − α_nμA)x_n − x_n‖ = α_n‖(γB − μA)x_n‖.
(4.11)

Since lim_{n→∞} α_n = 0, it follows from (4.11) that

lim_{n→∞} ‖y_n − x_n‖ = 0.
(4.12)

In view of Lemma 2.6, we conclude that

‖(I − α_{n+1}μA)x_{n+1} − (I − α_{n+1}μA)x_n‖ ≤ (1 − α_{n+1}τ)‖x_{n+1} − x_n‖, n ∈ ℕ.

This implies that

‖y_{n+1} − y_n‖ = ‖P_C[α_{n+1}γBx_{n+1} + (I − α_{n+1}μA)x_{n+1}] − P_C[α_nγBx_n + (I − α_nμA)x_n]‖
≤ ‖α_{n+1}γ(Bx_{n+1} − Bx_n) + γ(α_{n+1} − α_n)Bx_n + (I − α_{n+1}μA)x_{n+1} − (I − α_{n+1}μA)x_n + μ(α_n − α_{n+1})Ax_n‖
≤ α_{n+1}γl‖x_{n+1} − x_n‖ + (1 − α_{n+1}τ)‖x_{n+1} − x_n‖ + γ|α_{n+1} − α_n|‖Bx_n‖ + μ|α_{n+1} − α_n|‖Ax_n‖
≤ (1 − α_{n+1}(τ − γl))‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n|
≤ ‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n|.
(4.13)

Here M := sup{max{‖Bx_n‖, ‖Ax_n‖} : n ∈ ℕ} < ∞. Next, we show that lim_{n→∞} ‖x_{n+1} − x_n‖ = 0. To this end, define a sequence {z_n} by z_n = T_nS_ny_n. It follows from (4.13) that

‖z_{n+1} − z_n‖ = ‖T_{n+1}S_{n+1}y_{n+1} − T_nS_ny_n‖ ≤ ‖T_{n+1}S_{n+1}y_{n+1} − T_{n+1}S_{n+1}y_n‖ + ‖T_{n+1}S_{n+1}y_n − T_{n+1}S_ny_n‖ + ‖T_{n+1}S_ny_n − T_nS_ny_n‖ ≤ ‖y_{n+1} − y_n‖ + ‖S_{n+1}y_n − S_ny_n‖ + ‖T_{n+1}S_ny_n − T_nS_ny_n‖ ≤ ‖y_{n+1} − y_n‖ + sup{‖S_{n+1}z − S_nz‖ : z ∈ D} + sup{‖T_{n+1}z − T_nz‖ : z ∈ D} ≤ ‖x_{n+1} − x_n‖ + (γ + μ)M|α_{n+1} − α_n| + sup{‖S_{n+1}z − S_nz‖ : z ∈ D} + sup{‖T_{n+1}z − T_nz‖ : z ∈ D}.

This implies that

‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖ ≤ (γ + μ)M|α_{n+1} − α_n| + sup{‖S_{n+1}z − S_nz‖ : z ∈ D} + sup{‖T_{n+1}z − T_nz‖ : z ∈ D}.
(4.14)

Since lim n α n =0, in view of Lemma 2.8 and (4.14), we conclude that

limsup_{n→∞} (‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0.

Using Lemma 2.8, we deduce that

lim_{n→∞} ‖z_n − x_n‖ = 0.

Thus, we have

lim_{n→∞} ‖x_{n+1} − x_n‖ = lim_{n→∞} β_n‖z_n − x_n‖ = 0.
(4.15)

On the other hand, we have

‖y_n − T_nS_ny_n‖ ≤ ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + ‖x_{n+1} − T_nS_ny_n‖ = ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + (1 − β_n)‖x_n − T_nS_ny_n‖ ≤ ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + (1 − β_n)[‖x_n − y_n‖ + ‖y_n − T_nS_ny_n‖].
(4.16)

It follows from (4.16) that

‖y_n − T_nS_ny_n‖ ≤ (1/β_n)[2‖y_n − x_n‖ + ‖x_n − x_{n+1}‖].
(4.17)

In view of (4.12), (4.15) and condition (b), we obtain

lim_{n→∞} ‖y_n − T_nS_ny_n‖ = 0.
(4.18)

By the triangle inequality, we obtain

‖y_n − T_nSy_n‖ ≤ ‖y_n − T_nS_ny_n‖ + ‖T_nS_ny_n − T_nSy_n‖ ≤ ‖y_n − T_nS_ny_n‖ + ‖S_ny_n − Sy_n‖ ≤ ‖y_n − T_nS_ny_n‖ + sup{‖S_nz − Sz‖ : z ∈ D}.
(4.19)

In view of Lemma 2.9, (4.18) and (4.19), we deduce that

lim_{n→∞} ‖y_n − T_nSy_n‖ = 0.
(4.20)

By the triangle inequality, we obtain

‖y_n − TSy_n‖ ≤ ‖y_n − T_nSy_n‖ + ‖T_nSy_n − TSy_n‖ ≤ ‖y_n − T_nSy_n‖ + sup{‖T_nz − Tz‖ : z ∈ D}.
(4.21)

In view of the AKTT-condition and (4.20)-(4.21), we deduce that

lim_{n→∞} ‖y_n − TSy_n‖ = 0.

Step III. We prove that

limsup_{n→∞} ⟨(μA − γB)x*, x* − y_n⟩ ≤ 0,

where x* ∈ F(TS) is the same as in Theorem 3.1 and satisfies

⟨(γB − μA)x*, x* − z⟩ ≥ 0, ∀z ∈ F(TS).
(4.22)

Let {y_{n_k}} be a subsequence of {y_n} such that

limsup_{n→∞} ⟨(μA − γB)x*, x* − y_n⟩ = lim_{k→∞} ⟨(μA − γB)x*, x* − y_{n_k}⟩.
(4.23)

In the same manner as in Step II of the proof of Theorem 3.1, we can find u ∈ F(TS) such that y_{n_k} ⇀ u as k → ∞. In view of (ii), we have

‖P_C(I − λ_n∇f)y_n − y_n‖ = ‖s_ny_n + (1 − s_n)T_ny_n − y_n‖ = (1 − s_n)‖T_ny_n − y_n‖ ≤ ‖T_ny_n − y_n‖ ≤ ‖T_ny_n − T_nSy_n‖ + ‖T_nSy_n − y_n‖ ≤ ‖y_n − Sy_n‖ + ‖T_nSy_n − y_n‖,
(4.24)

where s_n = (2 − λ_nL)/4 for each n ≥ 0. In view of (4.24), (4.20) and the assumption ‖y_n − Sy_n‖ → 0, we conclude that

lim_{n→∞} ‖P_C(I − λ_n∇f)y_n − y_n‖ = lim_{n→∞} ‖y_n − T_ny_n‖ = 0.

Hence we have

‖P_C(I − (2/L)∇f)y_n − y_n‖ ≤ ‖P_C(I − (2/L)∇f)y_n − P_C(I − λ_n∇f)y_n‖ + ‖P_C(I − λ_n∇f)y_n − y_n‖ ≤ ‖(I − (2/L)∇f)y_n − (I − λ_n∇f)y_n‖ + ‖P_C(I − λ_n∇f)y_n − y_n‖ ≤ (2/L − λ_n)‖∇f(y_n)‖ + ‖T_ny_n − y_n‖.

Thus, from the boundedness of {y_n} (and hence of {∇f(y_n)}), λ_n → 2/L (equivalently, s_n → 0⁺) and ‖T_ny_n − y_n‖ → 0, we conclude that

lim_{n→∞} ‖P_C(I − (2/L)∇f)y_n − y_n‖ = 0.
(4.25)

Note that the gradient ∇f is (1/L)-ism. Hence, it is known that P_C(I − (2/L)∇f) is a nonexpansive self-mapping on C. As a matter of fact, we have for each x, y ∈ C (see the proof of Theorem 4.1)

‖P_C(I − (2/L)∇f)x − P_C(I − (2/L)∇f)y‖² ≤ ‖x − y‖².

Since y_{n_k} ⇀ u, in view of (4.25) and Lemma 2.4, we obtain

u = P_C(I − (2/L)∇f)u.

This shows that u ∈ Ω. Consequently, from (4.22) and (4.23), it follows that

limsup_{n→∞} ⟨(γB − μA)x*, y_n − x*⟩ = ⟨(γB − μA)x*, u − x*⟩ ≤ 0.

As in the last part of the proof of Theorem 3.1, we obtain x_n → x*, which completes the proof. □

Corollary 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. Let {λ_n} be a sequence in the interval (0, 2/L) such that {T_n} and {s_n} satisfy the following conditions:

  1. (i)

     s_n = (2 − λ_nL)/4 for each n ≥ 0;

  2. (ii)

     P_C(I − λ_n∇f) = s_nI + (1 − s_n)T_n for each n ≥ 0;

  3. (iii)

     s_n → 0;

  4. (iv)

     Σ_{n=0}^∞ s_n = ∞;

  5. (v)

     Σ_{n=1}^∞ |λ_{n+1} − λ_n| < ∞.

Suppose that {α_n}, {β_n} are two real sequences in (0,1) satisfying the following control conditions:

(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1.

For an arbitrary x₁ ∈ C, let the sequence {x_n} be generated by

y_n = P_C[α_nγBx_n + (I − α_nμA)x_n],
x_{n+1} = (1 − β_n)x_n + β_nT_ny_n, n ∈ ℕ.

Then there exists a nonexpansive mapping T:C→C such that ({T_n}_{n=1}^∞, T) satisfies the AKTT-condition, and {x_n} converges strongly to a common element x* ∈ F(T) ∩ Ω, which solves the variational inequality

⟨(μA − γB)x*, x* − z⟩ ≤ 0, ∀z ∈ F(T).

We end this section by considering simple examples of sequences that fulfill the desired conditions of our results.

Example 4.1 Let {α_n}_{n=1}^∞ be the sequence defined by

α_n = 1/(n + 1), n ∈ ℕ.

Let L > 0 be an arbitrary real number, and let n₀ ∈ ℕ be such that n₀ > L/2. We define the sequence {λ_n}_{n=1}^∞ as follows:

λ₁ = λ₂ = ⋯ = λ_{n₀} = 1/L, if n ≤ n₀; λ_n = 2n/((n + 1)L), if n > n₀.

Then the sequences {α_n}_{n=1}^∞ and {λ_n}_{n=1}^∞ satisfy all the hypotheses of our results.
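These conditions can be verified numerically. The check below reads the example with λ_n = 2n/((n + 1)L) for n > n₀ and uses L = 3 and n₀ = 2 as arbitrary concrete values (any n₀ > L/2 would do):

```python
# Checking Example 4.1: lambda_n in (0, 2/L), s_n -> 0, summable increments of
# lambda_n, and divergent sum of s_n.  L = 3 and n0 = 2 are our concrete choices.
L = 3.0
n0 = 2
N = 20000

lam = [1.0 / L if n <= n0 else 2.0 * n / ((n + 1) * L) for n in range(1, N + 1)]
s = [(2.0 - l * L) / 4.0 for l in lam]            # s_n = (2 - lambda_n L)/4

assert all(0.0 < l < 2.0 / L for l in lam)        # condition on lambda_n
increments = sum(abs(a - b) for a, b in zip(lam[1:], lam))   # sum |lambda_{n+1} - lambda_n|
print(s[-1], increments, sum(s))   # s_n -> 0; bounded increments; slowly divergent sum
```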

5 Applications

Let H be a real Hilbert space, and let Q:H→2^H be a mapping. The effective domain of Q is denoted by dom(Q), that is, dom(Q) = {x ∈ H : Qx ≠ ∅}. The range of Q is denoted by R(Q). A multi-valued mapping Q is said to be monotone if, for all x, y ∈ H, f ∈ Qx and g ∈ Qy,

⟨x − y, f − g⟩ ≥ 0.

A monotone mapping Q:H→2^H is said to be maximal if its graph G(Q) := {(x, f) : f ∈ Q(x)} is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping Q:H→2^H is maximal if and only if, for (x, f) ∈ H×H, ⟨x − y, f − g⟩ ≥ 0 for every (y, g) ∈ G(Q) implies that f ∈ Q(x). For a maximal monotone operator Q on H and r > 0, we may define the single-valued operator J_r = (I + rQ)⁻¹ : H → dom(Q), which is called the resolvent of Q for r > 0. Set Q⁻¹0 = {x ∈ H : 0 ∈ Qx}. It is known that Q⁻¹0 = F(J_r) for all r > 0, and that the resolvent J_r is firmly nonexpansive, i.e.,

‖J_rx − J_ry‖² ≤ ⟨x − y, J_rx − J_ry⟩, x, y ∈ H.
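As a concrete illustration (our own example, not from the text): for Q = ∂|·|, the subdifferential of the absolute value on the real line, the resolvent J_r is the soft-thresholding map, and firm nonexpansiveness can be spot-checked directly:

```python
# Resolvent of the maximal monotone operator Q = subdifferential of |.| on R:
# J_r = (I + rQ)^{-1} is soft-thresholding (a standard example, chosen by us).
def J(r, x):
    return max(abs(x) - r, 0.0) * (1.0 if x >= 0 else -1.0)

r = 0.7
pairs = [(-3.0, 2.5), (0.2, -0.1), (1.4, 0.9), (-0.3, -2.2)]
# firm nonexpansiveness: |J_r x - J_r y|^2 <= <x - y, J_r x - J_r y>
firm = all((J(r, x) - J(r, y)) ** 2 <= (x - y) * (J(r, x) - J(r, y)) + 1e-12
           for x, y in pairs)
print(firm, J(r, 0.0))   # firmly nonexpansive on these pairs; 0 is a fixed point
```

Here F(J_r) = Q⁻¹0 = {0}, since 0 is the unique minimizer of |·|.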

The following lemma has been proved in [40].

Lemma 5.1 Let H be a real Hilbert space, and let Q be a maximal monotone operator on H. For r > 0, let J_r be the resolvent operator associated with Q and r. Then

‖J_ρx − J_σx‖ ≤ (|ρ − σ|/ρ)‖x − J_ρx‖

for all ρ, σ > 0 and x ∈ H.
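Lemma 5.1 can be spot-checked numerically for an illustrative operator of our choosing, Q = ∂|·| on the real line, whose resolvent is soft-thresholding:

```python
# Spot-check of Lemma 5.1 for Q = subdifferential of |.| on R:
#   |J_rho(x) - J_sigma(x)| <= (|rho - sigma| / rho) * |x - J_rho(x)|.
def resolvent(r, x):
    return max(abs(x) - r, 0.0) * (1.0 if x >= 0 else -1.0)

ok = True
for x in (-3.0, -0.4, 0.5, 2.7):
    for rho in (0.3, 1.0, 1.8):
        for sigma in (0.2, 0.9, 2.5):
            lhs = abs(resolvent(rho, x) - resolvent(sigma, x))
            rhs = abs(rho - sigma) / rho * abs(x - resolvent(rho, x))
            ok = ok and lhs <= rhs + 1e-12
print(ok)
```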

We also know the following lemma from [38].

Lemma 5.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let Q be a maximal monotone operator on H such that Q⁻¹0 ≠ ∅ and cl(dom(Q)) ⊂ C ⊂ ∩_{r>0} R(I + rQ), where cl(dom(Q)) stands for the closure of dom(Q). Suppose that {r_n} is a sequence in (0, ∞) such that inf{r_n : n ∈ ℕ} > 0 and Σ_{n=1}^∞ |r_{n+1} − r_n| < ∞. Then

  1. (i)

     Σ_{n=1}^∞ sup{‖J_{r_{n+1}}z − J_{r_n}z‖ : z ∈ D} < ∞ for any bounded subset D of C;

  2. (ii)

     lim_{n→∞} J_{r_n}z = J_rz for all z ∈ C and F(J_r) = ∩_{n=1}^∞ F(J_{r_n}), where r_n → r as n → ∞.

From Theorem 3.1 and Lemma 5.2, we obtain the following result.

Theorem 5.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let Q be a maximal monotone operator on H such that Q⁻¹0 ≠ ∅. Given real sequences {α_n}, {β_n} in (0,1) and {r_n} in (0, ∞), assume that {α_n}, {β_n} and {r_n} satisfy the following control conditions:

(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1; (c) r_n ≥ ε for all n ∈ ℕ and some ε > 0, and Σ_{n=1}^∞ |r_{n+1} − r_n| < ∞.

Suppose, in addition, that S:C→C is a nonexpansive mapping with F := ∩_{n=1}^∞ F(J_{r_n}) ∩ F(S) ≠ ∅. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an L-Lipschitzian mapping with constant L ≥ 0. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). For an arbitrary x₁ ∈ C, let the sequence {x_n} be generated iteratively by

y_n = P_C[α_nγBx_n + (I − α_nμA)x_n],
x_{n+1} = (1 − β_n)x_n + β_nJ_{r_n}Sy_n, n ∈ ℕ.
(5.1)

Then the sequence {x_n} defined by (5.1) converges strongly to x* ∈ F(J_rS), where r = lim_{n→∞} r_n (the limit exists by condition (c)), which solves the variational inequality

⟨(μA − γB)x*, x* − z⟩ ≤ 0, ∀z ∈ F(J_rS).
(5.2)

The following result is yet another easy consequence of Theorem 3.1 and Lemma 5.2.

Theorem 5.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let Q be a maximal monotone operator on H such that Q⁻¹0 ≠ ∅. Given real sequences {α_n}, {β_n} in (0,1) and {r_n} in (0, ∞), assume that {α_n}, {β_n} and {r_n} satisfy the following control conditions:

(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1; (c) r_n ≥ ε for all n ∈ ℕ and some ε > 0, and Σ_{n=1}^∞ |r_{n+1} − r_n| < ∞.

Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an L-Lipschitzian mapping with constant L ≥ 0. Let 0 < μ < 2η/κ² and 0 ≤ γL < τ, where τ = 1 − √(1 − μ(2η − μκ²)). For an arbitrary x₁ ∈ C, let the sequence {x_n} be generated iteratively by

y_n = P_C[α_nγBx_n + (I − α_nμA)x_n],
x_{n+1} = (1 − β_n)x_n + β_nJ_{r_n}y_n, n ∈ ℕ.

If ∩_{n=1}^∞ F(J_{r_n}) ≠ ∅, then the sequence {x_n} converges strongly to x* ∈ Q⁻¹0, which solves the variational inequality

⟨(μA − γB)x*, x* − z⟩ ≤ 0, ∀z ∈ ∩_{n=1}^∞ F(J_{r_n}).

The following results are easy consequences of Theorem 4.2 and Lemma 5.2.

Theorem 5.3 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that {S_n}_{n=1}^∞ is a sequence of nonexpansive mappings from C into itself such that ∩_{n=1}^∞ F(S_n) ≠ ∅. Suppose, in addition, that S:C→C is a nonexpansive mapping such that ({S_n}_{n=1}^∞, S) satisfies the AKTT-condition. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. Let {λ_n} be a sequence in the interval (0, 2/L) such that {J_{r_n}} and {s_n} satisfy the following conditions:

  1. (i)

     s_n = (2 − λ_nL)/4 for each n ≥ 0;

  2. (ii)

     P_C(I − λ_n∇f) = s_nI + (1 − s_n)J_{r_n} for each n ≥ 0;

  3. (iii)

     s_n → 0;

  4. (iv)

     Σ_{n=0}^∞ s_n = ∞;

  5. (v)

     Σ_{n=1}^∞ |λ_{n+1} − λ_n| < ∞.

Suppose that {α_n}, {β_n} are two real sequences in (0,1) satisfying the following control conditions:

(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1.

For an arbitrary x₁ ∈ C, let the sequence {x_n} be generated by

y_n = P_C[α_nγBx_n + (I − α_nμA)x_n],
x_{n+1} = (1 − β_n)x_n + β_nJ_{r_n}S_ny_n, n ∈ ℕ.

If lim_{n→∞} ‖y_n − Sy_n‖ = 0, then the sequence {x_n} converges strongly to a common element x* ∈ F(J_rS) ∩ Ω, which solves the variational inequality

⟨(μA − γB)x*, x* − z⟩ ≤ 0, ∀z ∈ F(J_rS).

Theorem 5.4 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Assume that {S_n}_{n=1}^∞ is a sequence of nonexpansive mappings from C into itself such that ∩_{n=1}^∞ F(S_n) ≠ ∅. Let A:C→H be a κ-Lipschitzian and η-strongly monotone operator with constants κ, η > 0, and let B:C→H be an l-Lipschitzian mapping with constant l ≥ 0. Suppose that 0 < μ < 2η/κ² and 0 ≤ γl < τ, where τ = 1 − √(1 − μ(2η − μκ²)). Suppose that the minimization problem (4.1) is consistent, and let Ω denote its solution set. Assume that the gradient ∇f is L-Lipschitzian with constant L > 0. Let {λ_n} be a sequence in the interval (0, 2/L) such that {J_{r_n}} and {s_n} satisfy the following conditions:

  1. (i)

     s_n = (2 − λ_nL)/4 for each n ≥ 0;

  2. (ii)

     P_C(I − λ_n∇f) = s_nI + (1 − s_n)J_{r_n} for each n ≥ 0;

  3. (iii)

     s_n → 0;

  4. (iv)

     Σ_{n=0}^∞ s_n = ∞;

  5. (v)

     Σ_{n=1}^∞ |λ_{n+1} − λ_n| < ∞.

Suppose that {α_n}, {β_n} are two real sequences in (0,1) satisfying the following control conditions:

(a) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞; (b) 0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1.

For an arbitrary x₁ ∈ C, let the sequence {x_n} be generated by

y_n = P_C[α_nγBx_n + (I − α_nμA)x_n],
x_{n+1} = (1 − β_n)x_n + β_nJ_{r_n}y_n, n ∈ ℕ.

Then the sequence {x_n} converges strongly to a common element x* ∈ Q⁻¹0 ∩ Ω, which solves the variational inequality

⟨(μA − γB)x*, x* − z⟩ ≤ 0, ∀z ∈ ∩_{n=1}^∞ F(J_{r_n}).

Remark 5.1 In Theorem 5.1, it is shown that any sequence generated by the iterative step (5.1) converges strongly to the unique solution of the variational inequality problem (5.2). This variational inequality problem is more general than many others in the literature (see, for example, [27]), because S is an arbitrary nonexpansive mapping. Indeed, in the particular case when S = I, the identity mapping on H, the corresponding results in the current literature become special cases of our result (Theorem 5.1).

References

  1. Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615

    Article  MATH  MathSciNet  Google Scholar 

  2. Xu H-K: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059

    Article  MATH  MathSciNet  Google Scholar 

  3. Deutsch F, Yamada I: Minimizing certain convex functions over the intersection of the fixed point sets of nonexpansive mappings. Numer. Funct. Anal. Optim. 1998, 19: 33–56.

    Article  MathSciNet  Google Scholar 

  4. Xu H-K: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116: 659–678. 10.1023/A:1023073621589

    Article  MATH  MathSciNet  Google Scholar 

  5. Yamada I: The hybrid steepest descent method for the variational inequality problems over the intersection of fixed point sets of nonexpansive mappings. Stud. Comput. Math. 8. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. North-Holland, Amsterdam; 2001:473–504.

    Chapter  Google Scholar 

  6. Marino G, Xu H-K: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028

    Article  MATH  MathSciNet  Google Scholar 

  7. Liu Y: A general iterative method for equilibrium problems and strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2009, 71: 4852–4861. 10.1016/j.na.2009.03.060

    Article  MATH  MathSciNet  Google Scholar 

  8. Qin X, Shang M, Kang SM: Strong convergence theorems of modified Mann iterative process for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2009, 70: 1257–1264. 10.1016/j.na.2008.02.009

    Article  MATH  MathSciNet  Google Scholar 

  9. Tian M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. 2010, 73: 689–694. 10.1016/j.na.2010.03.058

    Article  MATH  MathSciNet  Google Scholar 

  10. Ceng LC, Huang S: Modified extragradient methods for strict pseudo-contractions and monotone mappings. Taiwan. J. Math. 2009, 13(4):1197–1211.

    MATH  MathSciNet  Google Scholar 

  11. Ceng LC, Huang S, Liou YC: Hybrid proximal point algorithms for solving constrained minimization problems in Banach spaces. Taiwan. J. Math. 2009, 13(2B):805–820.

    MATH  MathSciNet  Google Scholar 

  12. Ceng LC, Huang S, Petrusel A: Weak convergence theorem by a modified extragradient method for nonexpansive mappings and monotone mappings. Taiwan. J. Math. 2009, 13(1):225–238.

    MATH  MathSciNet  Google Scholar 

  13. Ceng LC, Wong NC: Viscosity approximation methods for equilibrium problems and fixed point problems of nonlinear semigroups. Taiwan. J. Math. 2009, 13(5):1497–1513.

    MATH  MathSciNet  Google Scholar 

  14. Ceng LC, Yao JC: Relaxed viscosity approximation methods for fixed point problems and variational inequality problems. Nonlinear Anal. 2008, 69: 3299–3309. 10.1016/j.na.2007.09.019

    Article  MATH  MathSciNet  Google Scholar 

  15. Zeng LC, Ansari QH, Shyu DS, Yao JC: Strong and weak convergence theorems for common solutions of generalized equilibrium problems and zeros of maximal monotone operators. Fixed Point Theory Appl. 2010., 2010: Article ID 590278

    Google Scholar 

  16. Su M, Xu H-K: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1(1):35–43.

    MathSciNet  Google Scholar 

  17. Xu H-K: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150(2):360–378. 10.1007/s10957-011-9837-z

    Article  MATH  MathSciNet  Google Scholar 

  18. Chen R, Su Y, Xu H-K: Regularization and iteration methods for a class of monotone variational inequalities. Taiwan. J. Math. 2009, 13(2B):739–752.

    MATH  MathSciNet  Google Scholar 

  19. Cianciaruso F, Colao V, Muglia L, Xu H-K: On an implicit hierarchical fixed point approach to variational inequalities. Bull. Aust. Math. Soc. 2009, 80(1):117–124. 10.1017/S0004972709000082

    Article  MATH  MathSciNet  Google Scholar 

  20. He S, Xu H-K: Variational inequalities governed by boundedly Lipschitzian and strongly monotone operators. Fixed Point Theory 2009, 10(2):245–258.

    MATH  MathSciNet  Google Scholar 

  21. Marino G, Xu H-K: Explicit hierarchical fixed point approach to variational inequalities. J. Optim. Theory Appl. 2011, 149: 61–78. 10.1007/s10957-010-9775-1

    Article  MATH  MathSciNet  Google Scholar 

  22. Xu H-K: Viscosity method for hierarchical fixed point approach to variational inequalities. Taiwan. J. Math. 2010, 14(2):463–478.

    MATH  Google Scholar 

  23. Yao Y, Chen R, Xu H-K: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72: 3447–3456. 10.1016/j.na.2009.12.029

    Article  MATH  MathSciNet  Google Scholar 

  24. Ceng LC, Guu SY, Hu HY, Yao JC: Hybrid shrinking projection method for a generalized equilibrium problem, a maximal monotone operator and a countable family of relatively nonexpansive mappings. Comput. Math. Appl. 2011, 61(9):2468–2479. 10.1016/j.camwa.2011.02.028

    Article  MATH  MathSciNet  Google Scholar 

  25. Ceng LC, Ansari QH, Yao JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 2011, 1(3):341–359.

    Article  MATH  MathSciNet  Google Scholar 

  26. Kimura Y, Takahashi W, Yao JC: Strong convergence of an iterative scheme by a new type of projection method for a family of quasinonexpansive mappings. J. Optim. Theory Appl. 2011, 149: 239–253. 10.1007/s10957-010-9788-9

    Article  MATH  MathSciNet  Google Scholar 

  27. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005

    Article  MATH  MathSciNet  Google Scholar 

  28. Yao Y, Liou YC, Marino G: Strong convergence of two iterative algorithms for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2009., 2009: Article ID 179058

    Google Scholar 

  29. Baillon JB, Bruck RE, Reich S: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 1978, 4: 1–9.

    MathSciNet  Google Scholar 

  30. Goebel K, Reich S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York; 1984.

    MATH  Google Scholar 

  31. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

    Article  MATH  MathSciNet  Google Scholar 

  32. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5–6):475–504. 10.1080/02331930412331327157

    Article  MATH  MathSciNet  Google Scholar 

  33. Goebel K, Kirk WA Cambridge Studies in Advanced Mathematics 28. In Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

    Chapter  Google Scholar 

  34. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965

    Article  MATH  MathSciNet  Google Scholar 

  35. Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5

    Article  MATH  MathSciNet  Google Scholar 

  36. Xu H-K: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

    Article  MATH  Google Scholar 

  37. Suzuki T: Strong convergence of Krasnoselskii and Mann type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305: 227–239. 10.1016/j.jmaa.2004.11.017

    Article  MATH  MathSciNet  Google Scholar 

  38. Aoyama K, Kamimura Y, Takahashi W, Toyoda M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. TMA 2007, 67: 2350–2360. 10.1016/j.na.2006.08.032

    Article  MATH  Google Scholar 

  39. Baillon JB, Haddad G: Quelques proprietes des operateurs angle-bornes et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664

    Article  MATH  MathSciNet  Google Scholar 

  40. Takahashi W: Nonlinear Functional Analysis, Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.

    MATH  Google Scholar 


Acknowledgements

The author would like to thank Professor Simeon Reich and two anonymous referees for their sincere evaluation and constructive comments, which improved the paper considerably.

Author information


Correspondence to Eskandar Naraghirad.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Naraghirad, E. Strong convergence of projection methods for a countable family of nonexpansive mappings and applications to constrained convex minimization problems. J Inequal Appl 2013, 546 (2013). https://doi.org/10.1186/1029-242X-2013-546
