Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems

Abstract

In this paper we introduce a multi-step implicit iterative scheme with regularization for finding a common solution of the minimization problem (MP) for a convex and continuously Fréchet differentiable functional and the common fixed point problem of an infinite family of nonexpansive mappings in the setting of Hilbert spaces. The multi-step implicit iterative method with regularization is based on three well-known methods: the extragradient method, approximate proximal method and gradient projection algorithm with regularization. We derive a weak convergence theorem for the sequences generated by the proposed scheme. On the other hand, we also establish a strong convergence result via an implicit hybrid method with regularization for solving these two problems. This implicit hybrid method with regularization is based on the CQ method, extragradient method and gradient projection algorithm with regularization.

MSC:49J30, 47H09, 47J20.

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, let $C$ be a nonempty closed convex subset of $H$ and let $P_C$ be the metric projection of $H$ onto $C$. Let $S:C\to C$ be a self-mapping on $C$. We denote by $\mathrm{Fix}(S)$ the set of fixed points of $S$ and by $\mathbf{R}$ the set of all real numbers. A mapping $A:C\to H$ is called $L$-Lipschitz continuous if there exists a constant $L\ge 0$ such that $\|Ax-Ay\|\le L\|x-y\|$ for all $x,y\in C$. In particular, if $L=1$, then $A$ is called a nonexpansive mapping [1]; if $L\in[0,1)$, then $A$ is called a contraction.

Let $f:C\to\mathbf{R}$ be a convex and continuously Fréchet differentiable functional. Consider the minimization problem (MP) of minimizing $f$ over the constraint set $C$:

$$\min_{x\in C} f(x).$$
(1.1)

We denote by $\Gamma$ the set of minimizers of MP (1.1), which is assumed to be nonempty.

On the other hand, consider the following variational inequality problem (VIP): find $\bar{x}\in C$ such that

$$\langle A\bar{x},\, y-\bar{x}\rangle\ge 0,\quad \forall y\in C.$$
(1.2)

The solution set of VIP (1.2) is denoted by VI(C,A).

We remark that VIP (1.2) was first discussed by Lions [2] and is now well known. There are many different approaches to solving VIP (1.2) in finite-dimensional and infinite-dimensional spaces, and the topic has been intensively investigated. VIP (1.2) has many applications in computational mathematics, mathematical physics, operations research, mathematical economics, optimization theory, and other fields; see, e.g., [3–6] and the references therein.

Recently, motivated by the work of Takahashi and Zembayashi [7], Cholamjiak [8] introduced a new hybrid projection algorithm for finding a common element of the set of solutions of the equilibrium problem, the set of solutions of the variational inequality problem and the set of fixed points of relatively quasi-nonexpansive mappings in a Banach space. Here, the operator involved in [8] is an inverse-strongly monotone operator. Furthermore, Nadezhkina and Takahashi [9] introduced an iterative process for finding an element of $\mathrm{Fix}(S)\cap\mathrm{VI}(C,A)$ and obtained a strong convergence theorem.

Theorem NT (see [[9], Theorem 3.1])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A:C\to H$ be a monotone and $L$-Lipschitz continuous mapping and let $S:C\to C$ be a nonexpansive mapping such that $\mathrm{Fix}(S)\cap\mathrm{VI}(C,A)\neq\emptyset$. Let $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ be the sequences generated by

$$\begin{cases} x_0=x\in C \text{ chosen arbitrarily},\\ y_n=P_C(x_n-\lambda_n Ax_n),\\ z_n=\alpha_n x_n+(1-\alpha_n)P_C(x_n-\lambda_n Ay_n),\\ C_n=\{z\in C:\|z_n-z\|\le\|x_n-z\|\},\\ Q_n=\{z\in C:\langle x_n-z,\, x-x_n\rangle\ge 0\},\\ x_{n+1}=P_{C_n\cap Q_n}x,\quad n\ge 0, \end{cases}$$

where $\{\lambda_n\}\subset[a,b]$ for some $a,b\in(0,1/L)$ and $\{\alpha_n\}\subset[0,c]$ for some $c\in[0,1)$. Then the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ converge strongly to $P_{\mathrm{Fix}(S)\cap\mathrm{VI}(C,A)}x$.

It is also remarkable that the work of Nadezhkina and Takahashi [9] introduced a new iterative method combining Korpelevich's extragradient method and the so-called CQ method. We note that Nadezhkina and Takahashi employed the monotonicity and Lipschitz continuity of $A$ to define a maximal monotone operator $T$ [10]. However, if the mapping $A$ is pseudomonotone and Lipschitz continuous, then $T$ is not necessarily a maximal monotone operator. To overcome this difficulty, Ceng et al. [11] suggested another iterative method. They established mild necessary and sufficient conditions under which the sequences generated by their proposed method converge weakly to a common solution of VIP (1.2) and the common fixed point problem of a finite family of nonexpansive mappings.

Theorem CTY ([[11], Theorem 3.1])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be a pseudomonotone, $k$-Lipschitz continuous and $(w,s)$-sequentially continuous mapping of $C$ into $H$, and let $\{S_i\}_{i=1}^{N}$ be $N$ nonexpansive mappings of $C$ into itself such that $\bigcap_{i=1}^{N}\mathrm{Fix}(S_i)\cap\mathrm{VI}(C,A)\neq\emptyset$. Let $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ be the sequences generated by

$$\begin{cases} x_1=x\in C \text{ chosen arbitrarily},\\ y_n=P_C(x_n-\lambda_n Ax_n),\\ z_n=\alpha_n x_n+(1-\alpha_n)S_nP_C(x_n-\lambda_n Ay_n),\\ C_n=\{z\in C:\|z_n-z\|\le\|x_n-z\|\},\\ \text{find } x_{n+1}\in C_n \text{ such that } \langle x_n-x_{n+1}+e_n-\sigma_n Ax_{n+1},\, x_{n+1}-x\rangle\ge-\varepsilon_n,\ \forall x\in C_n, \end{cases}$$

for every $n=1,2,\ldots$, where $S_n=S_{n\bmod N}$, $\{e_n\}$ is an error sequence in $H$ such that $\sum_{n=1}^{\infty}\|e_n\|<\infty$, and the following conditions hold:

  1. (i)

    $\{\sigma_n\}\subset(0,1/k)$, $\{\varepsilon_n\}\subset[0,\infty)$ and $\sum_{n=1}^{\infty}\varepsilon_n<\infty$;

  2. (ii)

    $\{\lambda_n\}\subset[a,b]$ for some $a,b\in(0,1/k)$;

  3. (iii)

    $\{\alpha_n\}\subset[0,c]$ for some $c\in[0,1)$.

Then the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ converge weakly to the same element of $\bigcap_{i=1}^{N}\mathrm{Fix}(S_i)\cap\mathrm{VI}(C,A)$ if and only if $\liminf_{n\to\infty}\langle Ax_n,\, x-x_n\rangle\ge 0$, $\forall x\in C$.

In this paper, we aim to find a common solution of the minimization problem (MP) for a convex and continuously Fréchet differentiable functional and the common fixed point problem of an infinite family of nonexpansive mappings in the setting of Hilbert spaces. Motivated and inspired by the research going on in this area, we propose two iterative schemes for this purpose. One is called a multi-step implicit iterative method with regularization, which is based on three well-known methods: the extragradient method, the approximate proximal method and the gradient projection algorithm with regularization. The other is an implicit hybrid method with regularization, which is based on the CQ method, the extragradient method and the gradient projection algorithm with regularization. Weak and strong convergence results for these two schemes are established, respectively. Recent results in this direction can be found, e.g., in [7–32].

2 Preliminaries

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. We write $x_n\rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$ and $x_n\to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$. Moreover, we use $\omega_w(x_n)$ to denote the weak $\omega$-limit set of the sequence $\{x_n\}$, i.e.,

$$\omega_w(x_n):=\big\{x\in H : x_{n_i}\rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\}\big\}.$$

The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C:H\to C$ which assigns to each point $x\in H$ the unique point $P_Cx\in C$ satisfying the property

$$\|x-P_Cx\|=\inf_{y\in C}\|x-y\|=:d(x,C).$$
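To make the definition concrete, here is a minimal numerical sketch (not from the paper; the function names and the choice of sets are illustrative assumptions) of $P_C$ for two sets where the projection has a closed form, a closed ball and a box:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection of x onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    return x.copy() if dist <= radius else center + (radius / dist) * d

def project_box(x, lo, hi):
    """Metric projection of x onto the box {y : lo <= y <= hi} (componentwise clipping)."""
    return np.minimum(np.maximum(x, lo), hi)

# The projection is the nearest point of C, so ||x - P_C x|| = d(x, C).
x = np.array([3.0, 4.0])
p = project_ball(x, np.array([0.0, 0.0]), 1.0)   # p = [0.6, 0.8], d(x, C) = 4
```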

Some important properties of projections are gathered in the following proposition.

Proposition 2.1 For given $x\in H$ and $z\in C$:

  1. (i)

    $z=P_Cx\iff\langle x-z,\, y-z\rangle\le 0$, $\forall y\in C$;

  2. (ii)

    $z=P_Cx\iff\|x-z\|^2\le\|x-y\|^2-\|y-z\|^2$, $\forall y\in C$;

  3. (iii)

    $\langle P_Cx-P_Cy,\, x-y\rangle\ge\|P_Cx-P_Cy\|^2$, $\forall y\in H$.

Consequently, $P_C$ is nonexpansive and monotone.

Definition 2.1 A mapping $A:C\to H$ is said to be:

  1. (a)

    pseudomonotone if for all $x,y\in C$

    $\langle Ay,\, x-y\rangle\ge 0\ \Longrightarrow\ \langle Ax,\, x-y\rangle\ge 0$;

  2. (b)

    monotone if

    $\langle Ax-Ay,\, x-y\rangle\ge 0$, $\forall x,y\in C$;

  3. (c)

    $\eta$-strongly monotone if there exists a constant $\eta>0$ such that

    $\langle Ax-Ay,\, x-y\rangle\ge\eta\|x-y\|^2$, $\forall x,y\in C$;

  4. (d)

    $\alpha$-inverse-strongly monotone ($\alpha$-ism) if there exists a constant $\alpha>0$ such that

    $\langle Ax-Ay,\, x-y\rangle\ge\alpha\|Ax-Ay\|^2$, $\forall x,y\in C$.

It is obvious that if $A$ is $\alpha$-inverse-strongly monotone, then $A$ is monotone and $\frac{1}{\alpha}$-Lipschitz continuous.
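For completeness (this standard one-line verification is not spelled out in the original), both claims follow from the Cauchy-Schwarz inequality:

$$0\le\alpha\|Ax-Ay\|^2\le\langle Ax-Ay,\, x-y\rangle\le\|Ax-Ay\|\,\|x-y\|,$$

so $A$ is monotone, and dividing the outer inequality by $\|Ax-Ay\|$ (when it is nonzero) gives $\|Ax-Ay\|\le\frac{1}{\alpha}\|x-y\|$.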

Recall that a mapping $S:C\to C$ is said to be nonexpansive [1] if

$$\|Sx-Sy\|\le\|x-y\|,\quad \forall x,y\in C.$$

Denote by $\mathrm{Fix}(S)$ the set of fixed points of $S$; that is, $\mathrm{Fix}(S)=\{x\in C:Sx=x\}$. It can easily be seen that if $S:C\to C$ is nonexpansive, then $I-S$ is monotone. It is also easy to see that the projection $P_C$ is 1-ism. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely to solve practical problems in various fields.
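For the reader's convenience, here is a standard one-line verification (not in the original) that $I-S$ is monotone when $S$ is nonexpansive: for all $x,y\in C$,

$$\langle(I-S)x-(I-S)y,\, x-y\rangle=\|x-y\|^2-\langle Sx-Sy,\, x-y\rangle\ge\|x-y\|^2-\|Sx-Sy\|\,\|x-y\|\ge 0.$$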

We need some facts and tools which are listed as lemmas below.

Lemma 2.1 Let X be a real inner product space. Then the following inequality holds:

$$\|x+y\|^2\le\|x\|^2+2\langle y,\, x+y\rangle,\quad \forall x,y\in X.$$

Lemma 2.2 Let $\{x_n\}$ be a bounded sequence in a reflexive Banach space $X$. If $\omega_w(\{x_n\})=\{x\}$, then $x_n\rightharpoonup x$.

Lemma 2.3 Let $A:C\to H$ be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 2.1(i)) implies

$$u\in\mathrm{VI}(C,A)\iff u=P_C(u-\lambda Au),\quad \forall\lambda>0.$$
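This equivalence is a direct consequence of Proposition 2.1(i) (a short verification added for completeness): for any $\lambda>0$,

$$u=P_C(u-\lambda Au)\iff\langle(u-\lambda Au)-u,\, y-u\rangle\le 0,\ \forall y\in C\iff\langle Au,\, y-u\rangle\ge 0,\ \forall y\in C\iff u\in\mathrm{VI}(C,A).$$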

Lemma 2.4 Let $H$ be a real Hilbert space. Then the following hold:

  1. (a)

    $\|x-y\|^2=\|x\|^2-\|y\|^2-2\langle x-y,\, y\rangle$ for all $x,y\in H$;

  2. (b)

    $\|\lambda x+\mu y\|^2=\lambda\|x\|^2+\mu\|y\|^2-\lambda\mu\|x-y\|^2$ for all $x,y\in H$ and $\lambda,\mu\in[0,1]$ with $\lambda+\mu=1$;

  3. (c)

    if $\{x_n\}$ is a sequence in $H$ such that $x_n\rightharpoonup x$, it follows that

    $\limsup_{n\to\infty}\|x_n-y\|^2=\limsup_{n\to\infty}\|x_n-x\|^2+\|x-y\|^2$, $\forall y\in H$.

Lemma 2.5 ([[33], Lemma 2.5])

Let $H$ be a real Hilbert space. Given a nonempty closed convex subset $C$ of $H$, points $x,y,z\in H$ and a real number $a\in\mathbf{R}$, the set

$$\big\{v\in C:\|y-v\|^2\le\|x-v\|^2+\langle z,\, v\rangle+a\big\}$$

is convex (and closed).

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\{S_i\}_{i=1}^{\infty}$ be an infinite family of nonexpansive mappings of $C$ into itself and let $\{\xi_i\}_{i=1}^{\infty}$ be a sequence in $[0,1]$. For any $n\ge 1$, define a mapping $W_n$ of $C$ into itself as follows:

$$\begin{cases} U_{n,n+1}=I,\\ U_{n,n}=\xi_nS_nU_{n,n+1}+(1-\xi_n)I,\\ U_{n,n-1}=\xi_{n-1}S_{n-1}U_{n,n}+(1-\xi_{n-1})I,\\ \qquad\vdots\\ U_{n,k}=\xi_kS_kU_{n,k+1}+(1-\xi_k)I,\\ U_{n,k-1}=\xi_{k-1}S_{k-1}U_{n,k}+(1-\xi_{k-1})I,\\ \qquad\vdots\\ U_{n,2}=\xi_2S_2U_{n,3}+(1-\xi_2)I,\\ W_n=U_{n,1}=\xi_1S_1U_{n,2}+(1-\xi_1)I. \end{cases}$$
(2.1)

Such a $W_n$ is called a $W$-mapping generated by $\{S_i\}_{i=1}^{\infty}$ and $\{\xi_i\}_{i=1}^{\infty}$. We need the following lemmas for proving our main results.
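Before stating those lemmas, the recursion (2.1) can be made concrete with a minimal sketch (my own illustration, not from the paper; the function names are assumptions) that evaluates $W_nx$ for given nonexpansive maps and weights:

```python
import numpy as np

def W_n(x, S, xi):
    """Evaluate W_n x via recursion (2.1).

    S  : list [S_1, ..., S_n] of nonexpansive maps R^d -> R^d
    xi : list [xi_1, ..., xi_n] of weights in (0, 1)
    """
    u = x                                       # U_{n,n+1} x = x  (identity)
    for k in range(len(S) - 1, -1, -1):         # k = n, n-1, ..., 1 (0-based indices)
        u = xi[k] * S[k](u) + (1 - xi[k]) * x   # U_{n,k} x = xi_k S_k (U_{n,k+1} x) + (1 - xi_k) x
    return u                                    # W_n x = U_{n,1} x

# Example: S_i are projections onto concentric balls (projections are nonexpansive).
S = [lambda y, r=r: y if np.linalg.norm(y) <= r else (r / np.linalg.norm(y)) * y
     for r in (1.0, 2.0, 3.0)]
print(W_n(np.array([4.0, 0.0]), S, [0.5, 0.5, 0.5]))
```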

Lemma 2.6 ([34])

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $S_1,S_2,\ldots$ be nonexpansive mappings of $C$ into itself such that $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)$ is nonempty, and let $\xi_1,\xi_2,\ldots$ be real numbers such that $0<\xi_i\le b<1$ for all $i\ge 1$. Then, for every $x\in C$ and $k\ge 1$, the limit $\lim_{n\to\infty}U_{n,k}x$ exists.

Using Lemma 2.6, one can define a mapping W of C into itself as follows:

$$Wx=\lim_{n\to\infty}W_nx=\lim_{n\to\infty}U_{n,1}x,\quad \forall x\in C.$$

Lemma 2.7 ([34])

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $S_1,S_2,\ldots$ be nonexpansive mappings of $C$ into itself such that $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)$ is nonempty, and let $\xi_1,\xi_2,\ldots$ be real numbers such that $0<\xi_i\le b<1$ for all $i\ge 1$. Then

$$\mathrm{Fix}(W)=\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n).$$

Lemma 2.8 ([35])

If { x n } is a bounded sequence in C, then

$$\lim_{n\to\infty}\|Wx_n-W_nx_n\|=0.$$

Lemma 2.9 ([[36], Demiclosedness principle])

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $S:C\to C$ be a nonexpansive mapping with $\mathrm{Fix}(S)\neq\emptyset$. Then $I-S$ is demiclosed on $C$; that is, if $y_n\rightharpoonup z\in C$ and $y_n-Sy_n\to y$, then $(I-S)z=y$.

To prove a weak convergence theorem by the multi-step implicit iterative method with regularization for MP (1.1) and infinitely many nonexpansive mappings $\{S_n\}_{n=1}^{\infty}$, we need the following lemma due to Osilike et al. [37].

Lemma 2.10 ([[37], p.80])

Let $\{a_n\}_{n=1}^{\infty}$, $\{b_n\}_{n=1}^{\infty}$ and $\{\delta_n\}_{n=1}^{\infty}$ be sequences of nonnegative real numbers satisfying the inequality

$$a_{n+1}\le(1+\delta_n)a_n+b_n,\quad \forall n\ge 1.$$

If $\sum_{n=1}^{\infty}\delta_n<\infty$ and $\sum_{n=1}^{\infty}b_n<\infty$, then $\lim_{n\to\infty}a_n$ exists. If, in addition, $\{a_n\}_{n=1}^{\infty}$ has a subsequence which converges to zero, then $\lim_{n\to\infty}a_n=0$.

Corollary 2.1 ([[38], p.303])

Let $\{a_n\}_{n=0}^{\infty}$ and $\{b_n\}_{n=0}^{\infty}$ be two sequences of nonnegative real numbers satisfying the inequality

$$a_{n+1}\le a_n+b_n,\quad \forall n\ge 0.$$

If $\sum_{n=0}^{\infty}b_n$ converges, then $\lim_{n\to\infty}a_n$ exists.

Lemma 2.11 ([36])

Every Hilbert space $H$ has the Kadec-Klee property; that is, given a sequence $\{x_n\}\subset H$ and a point $x\in H$, we have

$$\left.\begin{array}{l} x_n\rightharpoonup x,\\ \|x_n\|\to\|x\| \end{array}\right\}\ \Longrightarrow\ x_n\to x.$$

It is well known that every Hilbert space $H$ satisfies Opial's condition [39]; that is, for any sequence $\{x_n\}$ with $x_n\rightharpoonup x$, the inequality

$$\liminf_{n\to\infty}\|x_n-x\|<\liminf_{n\to\infty}\|x_n-y\|$$

holds for every $y\in H$ with $y\neq x$.

A set-valued mapping $T:H\to 2^H$ is called monotone if for all $x,y\in H$, $f\in Tx$ and $g\in Ty$ imply $\langle x-y,\, f-g\rangle\ge 0$. A monotone mapping $T:H\to 2^H$ is maximal if its graph $G(T)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for $(x,f)\in H\times H$, $\langle x-y,\, f-g\rangle\ge 0$ for all $(y,g)\in G(T)$ implies $f\in Tx$. Let $A:C\to H$ be a monotone, $L$-Lipschitz continuous mapping and let $N_Cv$ be the normal cone to $C$ at $v\in C$, i.e., $N_Cv=\{w\in H:\langle v-u,\, w\rangle\ge 0,\ \forall u\in C\}$. Define

$$Tv=\begin{cases} Av+N_Cv & \text{if } v\in C,\\ \emptyset & \text{if } v\notin C. \end{cases}$$

It is known that in this case $T$ is maximal monotone, and $0\in Tv$ if and only if $v\in\mathrm{VI}(C,A)$; see [10].

3 Weak convergence theorem

In this section, we derive weak convergence criteria for a multi-step implicit iterative method with regularization for finding a common solution of the common fixed point problem of infinitely many nonexpansive mappings $\{S_n\}_{n=1}^{\infty}$ and MP (1.1) for a convex functional $f:C\to\mathbf{R}$ with an $L$-Lipschitz continuous gradient $\nabla f$. This implicit iterative method with regularization is based on the extragradient method, the approximate proximal method and the gradient projection algorithm (GPA) with regularization.
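To fix ideas, the regularized map used throughout is $\nabla f_{\alpha}=\alpha I+\nabla f$ (see the proof of Proposition 3.1 below). The following is a minimal numerical sketch, not taken from the paper, of a plain regularized gradient-projection step $x\mapsto P_C(x-\lambda\nabla f_{\alpha}(x))$ on a toy quadratic with a box constraint; all names and the toy data are illustrative assumptions.

```python
import numpy as np

# Toy objective f(x) = 0.5 * ||A x - b||^2, so grad f(x) = A^T (A x - b)
# and the Lipschitz constant of grad f is L = ||A^T A||_2.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)

def grad_f_alpha(x, alpha):
    """Regularized gradient: grad f_alpha(x) = alpha * x + grad f(x)."""
    return alpha * x + grad_f(x)

def P_C(x, lo=-1.0, hi=1.0):
    """Projection onto the box C = [lo, hi]^d."""
    return np.clip(x, lo, hi)

x = np.zeros(2)
for n in range(1, 201):
    alpha_n = 1.0 / n                    # alpha_n -> 0, cf. condition (i) of Theorem 3.1
    lam_n = 0.9 / (alpha_n + L)          # so lam_n * (alpha_n + L) < 1, cf. condition (iv)
    x = P_C(x - lam_n * grad_f_alpha(x, alpha_n))   # regularized gradient-projection step
print(x)   # approaches the minimizer of f over C, here approximately (0.5, 1.0)
```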

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $W_n$ be a $W$-mapping defined by (2.1), let $\nabla f:C\to H$ be an $L$-Lipschitz continuous mapping with $L>0$, and let $\{S_n\}_{n=1}^{\infty}$ be an infinite family of nonexpansive mappings of $C$ into itself such that $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$ is nonempty and bounded. Let $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ be the sequences generated by

$$\begin{cases} x_1=x\in C \text{ chosen arbitrarily},\\ y_n=P_C\big(x_n-\lambda_n\mu_n\nabla f_{\alpha_n}(x_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(y_n)\big),\\ t_n=P_C\big(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(t_n)\big),\\ z_n=\gamma_nx_n+(1-\gamma_n)W_nt_n,\\ C_n=\{z\in C:\|z_n-z\|^2\le\|x_n-z\|^2+\theta_n\},\\ x_{n+1}=P_{C_n}\big((1-\beta_n)x_n+e_n-\sigma_n\nabla f(x_{n+1})\big),\quad n\ge 1, \end{cases}$$
(3.1)

where $\{e_n\}\subset H$ is an error sequence with $\sum_{n=1}^{\infty}\|e_n\|<\infty$, and $\theta_n=\frac{18\alpha_n}{\mu_n^{4}}\Delta_n$ with

$$\Delta_n=\sup\Big\{\|x_n-z\|^2+\Big(1+\frac{1}{18L^2}\Big)\|z\|^2:z\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\Big\}<\infty.$$

Assume the following conditions hold:

  1. (i)

    $\{\alpha_n\}\subset(0,\infty)$ and $\lim_{n\to\infty}\alpha_n=0$;

  2. (ii)

    $\{\gamma_n\}\subset[0,c]$ for some $c\in[0,1)$;

  3. (iii)

    $\{\mu_n\}\subset(0,1]$ and $\lim_{n\to\infty}\mu_n=1$;

  4. (iv)

    $\lambda_n(\alpha_n+L)<1$, $\forall n\ge 1$, and $\{\lambda_n\}\subset[a,b]$ for some $a,b\in(0,1/L)$;

  5. (v)

    $\{\sigma_n\}\subset(0,1/L)$ and $\{\beta_n\}\subset[0,1]$ satisfy $\sum_{n=1}^{\infty}\beta_n<\infty$.

Then the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ generated by (3.1) converge weakly to some $u\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$.

Remark 3.1 In the proof of Theorem 3.1 below, we show that every $C_n$ is closed and convex and that $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_n$, $\forall n\ge 1$.

Now, we observe that for all $x,y\in C$ and all $n\ge 1$,

$$\begin{aligned} &\big\|P_C\big(x_n-\lambda_n\mu_n\nabla f_{\alpha_n}(x_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(x)\big)-P_C\big(x_n-\lambda_n\mu_n\nabla f_{\alpha_n}(x_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(y)\big)\big\|\\ &\quad\le\big\|\big(x_n-\lambda_n\mu_n\nabla f_{\alpha_n}(x_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(x)\big)-\big(x_n-\lambda_n\mu_n\nabla f_{\alpha_n}(x_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(y)\big)\big\|\\ &\quad=\lambda_n(1-\mu_n)\big\|\nabla f_{\alpha_n}(x)-\nabla f_{\alpha_n}(y)\big\|\le\lambda_n(\alpha_n+L)\|x-y\|. \end{aligned}$$

Hence, by the Banach contraction principle, we know that for each n1 there exists a unique y n C such that

$$y_n=P_C\big(x_n-\lambda_n\mu_n\nabla f_{\alpha_n}(x_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(y_n)\big).$$
(3.2)

Also, observe that for all $x,y\in C$ and all $n\ge 1$,

$$\begin{aligned} &\big\|P_C\big(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(x)\big)-P_C\big(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(y)\big)\big\|\\ &\quad\le\big\|\big(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(x)\big)-\big(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(y)\big)\big\|\\ &\quad=\lambda_n(1-\mu_n)\big\|\nabla f_{\alpha_n}(x)-\nabla f_{\alpha_n}(y)\big\|\le\lambda_n(\alpha_n+L)\|x-y\|. \end{aligned}$$

So, by the Banach contraction principle, we know that for each $n\ge 1$ there exists a unique $t_n\in C$ such that

$$t_n=P_C\big(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(t_n)\big).$$
(3.3)

In addition, observe that for all $x,y\in C$ and all $n\ge 1$,

$$\begin{aligned} &\big\|P_{C_n}\big((1-\beta_n)x_n+e_n-\sigma_n\nabla f(x)\big)-P_{C_n}\big((1-\beta_n)x_n+e_n-\sigma_n\nabla f(y)\big)\big\|\\ &\quad\le\big\|\big((1-\beta_n)x_n+e_n-\sigma_n\nabla f(x)\big)-\big((1-\beta_n)x_n+e_n-\sigma_n\nabla f(y)\big)\big\|\\ &\quad=\sigma_n\big\|\nabla f(x)-\nabla f(y)\big\|\le\sigma_nL\|x-y\|. \end{aligned}$$

Thus, by the Banach contraction principle, we know that for each $n\ge 1$ there exists a unique $x_{n+1}\in C_n$ such that

$$x_{n+1}=P_{C_n}\big((1-\beta_n)x_n+e_n-\sigma_n\nabla f(x_{n+1})\big).$$
(3.4)

Therefore, the sequences { x n }, { y n } and { z n } generated by (3.1) are well defined.
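The Banach contraction principle invoked above is constructive: each implicit iterate is the unique fixed point of a contraction and can therefore be approximated by Picard iteration. Below is a minimal sketch of this idea (my own illustration under the stated assumptions, not the authors' implementation; `P_C` and `grad_f_alpha` are assumed to be available, e.g. as in the earlier toy example):

```python
import numpy as np

def picard_solve(T, y0, tol=1e-10, max_iter=10000):
    """Approximate the unique fixed point of a contraction T by Picard iteration."""
    y = y0
    for _ in range(max_iter):
        y_next = T(y)
        if np.linalg.norm(y_next - y) <= tol:
            return y_next
        y = y_next
    return y

def implicit_y_step(x_n, lam, mu, alpha, grad_f_alpha, P_C):
    """Solve the implicit relation (3.2),
       y_n = P_C(x_n - lam*mu*grad_f_alpha(x_n) - lam*(1-mu)*grad_f_alpha(y_n)),
    whose right-hand side, as a map of y_n, is a lam*(1-mu)*(alpha+L)-contraction."""
    fixed_part = x_n - lam * mu * grad_f_alpha(x_n, alpha)
    T = lambda y: P_C(fixed_part - lam * (1 - mu) * grad_f_alpha(y, alpha))
    return picard_solve(T, x_n)
```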

Next, we divide our detailed proof into several propositions. For this purpose, in the sequel, we assume that all our assumptions are satisfied.

Proposition 3.1 $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_n$, $\forall n\ge 1$.

Proof First we note that every set $C_n$ is closed and convex. As a matter of fact, since the defining inequality in $C_n$ is equivalent to the inequality

$$2\langle x_n-z_n,\, z\rangle\le\|x_n\|^2-\|z_n\|^2+\theta_n,$$

by Lemma 2.5 we also have that $C_n$ is convex and closed for every $n=1,2,\ldots$. Also, note that the $L$-Lipschitz continuity of the gradient $\nabla f$ implies that $\nabla f$ is $1/L$-ism [31], that is,

$$\langle\nabla f(x)-\nabla f(y),\, x-y\rangle\ge\frac{1}{L}\|\nabla f(x)-\nabla f(y)\|^2,\quad \forall x,y\in C.$$

Observe that

$$\begin{aligned} (\alpha+L)\langle\nabla f_{\alpha}(x)-\nabla f_{\alpha}(y),\, x-y\rangle &=(\alpha+L)\big[\alpha\|x-y\|^2+\langle\nabla f(x)-\nabla f(y),\, x-y\rangle\big]\\ &=\alpha^2\|x-y\|^2+\alpha\langle\nabla f(x)-\nabla f(y),\, x-y\rangle+\alpha L\|x-y\|^2+L\langle\nabla f(x)-\nabla f(y),\, x-y\rangle\\ &\ge\alpha^2\|x-y\|^2+2\alpha\langle\nabla f(x)-\nabla f(y),\, x-y\rangle+\|\nabla f(x)-\nabla f(y)\|^2\\ &=\big\|\alpha(x-y)+\nabla f(x)-\nabla f(y)\big\|^2\\ &=\big\|\nabla f_{\alpha}(x)-\nabla f_{\alpha}(y)\big\|^2. \end{aligned}$$

Hence, it follows that $\nabla f_{\alpha}=\alpha I+\nabla f$ is $1/(\alpha+L)$-ism. Now, take $u\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$ arbitrarily. Taking into account $\lambda_n(\alpha_n+L)<1$, $\forall n\ge 1$, we deduce that

x n u λ n μ n ( f α n ( x n ) f α n ( u ) ) 2 x n u 2 + λ n μ n ( λ n μ n 2 α n + L ) f α n ( x n ) f α n ( u ) 2 x n u 2 + λ n ( λ n 2 α n + L ) f α n ( x n ) f α n ( u ) 2 x n u 2 ,

and

y n u = P C ( x n λ n μ n f α n ( x n ) λ n ( 1 μ n ) f α n ( y n ) ) P C ( u λ n μ n f ( u ) λ n ( 1 μ n ) f ( u ) ) ( x n λ n μ n f α n ( x n ) λ n ( 1 μ n ) f α n ( y n ) ) ( u λ n μ n f ( u ) λ n ( 1 μ n ) f ( u ) ) x n u λ n μ n ( f α n ( x n ) f ( u ) ) + λ n ( 1 μ n ) f α n ( y n ) f ( u ) x n u λ n μ n ( f α n ( x n ) f α n ( u ) ) + λ n μ n f α n ( u ) f ( u ) + λ n ( 1 μ n ) [ f α n ( y n ) f α n ( u ) + f α n ( u ) f ( u ) ] x n u + λ n μ n α n u + λ n ( 1 μ n ) [ ( α n + L ) y n u + α n u ] = x n u + λ n ( 1 μ n ) ( α n + L ) y n u + λ n α n u x n u + ( 1 μ n ) y n u + λ n α n u ,

which implies that

$$\|y_n-u\|\le\frac{1}{\mu_n}\big(\|x_n-u\|+\lambda_n\alpha_n\|u\|\big).$$
(3.5)

Meantime, we also have

t n u = P C ( x n λ n f α n ( y n ) λ n ( 1 μ n ) f α n ( t n ) ) P C ( u λ n f ( u ) λ n ( 1 μ n ) f ( u ) ) ( x n λ n f α n ( y n ) λ n ( 1 μ n ) f α n ( t n ) ) ( u λ n f ( u ) λ n ( 1 μ n ) f ( u ) ) x n u + λ n f α n ( y n ) f ( u ) + λ n ( 1 μ n ) f α n ( t n ) f ( u ) x n u + λ n [ f α n ( y n ) f α n ( u ) + f α n ( u ) f ( u ) ] + λ n ( 1 μ n ) [ f α n ( t n ) f α n ( u ) + f α n ( u ) f ( u ) ] x n u + λ n [ ( α n + L ) y n u + α n u ] + λ n ( 1 μ n ) [ ( α n + L ) t n u + α n u ] x n u + y n u + λ n α n u + ( 1 μ n ) t n u + λ n ( 1 μ n ) α n u x n u + y n u + ( 2 μ n ) λ n α n u + ( 1 μ n ) t n u ,

which hence implies that

t n u 1 μ n [ x n u + y n u + ( 2 μ n ) λ n α n u ] 1 μ n { x n u + 1 μ n ( x n u + λ n α n u ) + ( 2 μ n ) λ n α n u } = ( 1 μ n + 1 μ n 2 ) x n u + ( 1 μ n 2 + 2 μ n μ n ) λ n α n u 1 + 2 μ n μ n 2 μ n 2 ( x n u + λ n α n u ) 1 + 2 μ n μ n 2 ( x n u + λ n α n u ) .
(3.6)

Thus, from (3.5) and (3.6) it follows that

y n u + ( 1 μ n ) t n u 1 μ n ( x n u + λ n α n u ) + ( 1 μ n ) 1 + 2 μ n μ n 2 ( x n u + λ n α n u ) = 1 + 2 μ n ( 1 μ n ) μ n 2 ( x n u + λ n α n u ) 3 μ n 2 ( x n u + λ n α n u ) ,

which together with λ n ( α n +L)<1 implies that

[ y n u + ( 1 μ n ) t n u ] 2 9 μ n 4 ( x n u + λ n α n u ) 2 9 μ n 4 ( 2 x n u 2 + 2 λ n 2 α n 2 u 2 ) 9 μ n 4 ( 2 x n u 2 + 2 u 2 ) = 18 μ n 4 x n u 2 + 18 μ n 4 u 2 .
(3.7)

Furthermore, from Proposition 2.1(ii), the monotonicity of f, and uΓ, we have

t n u 2 ( x n λ n f α n ( y n ) λ n ( 1 μ n ) f α n ( t n ) ) u 2 ( x n λ n f α n ( y n ) λ n ( 1 μ n ) f α n ( t n ) ) t n 2 = x n λ n ( 1 μ n ) f α n ( t n ) u 2 x n λ n ( 1 μ n ) f α n ( t n ) t n 2 + 2 λ n f α n ( y n ) , u t n = x n λ n ( 1 μ n ) f α n ( t n ) u 2 x n λ n ( 1 μ n ) f α n ( t n ) t n 2 + 2 λ n ( f α n ( y n ) , u y n + f α n ( y n ) , y n t n ) = x n λ n ( 1 μ n ) f α n ( t n ) u 2 x n λ n ( 1 μ n ) f α n ( t n ) t n 2 + 2 λ n ( f α n ( y n ) f α n ( u ) , u y n + f α n ( u ) , u y n + f α n ( y n ) , y n t n ) x n λ n ( 1 μ n ) f α n ( t n ) u 2 x n λ n ( 1 μ n ) f α n ( t n ) t n 2 + 2 λ n ( α n u , u y n + f α n ( y n ) , y n t n ) = x n u 2 x n t n 2 2 λ n ( 1 μ n ) f α n ( t n ) , t n u + 2 λ n ( α n u , u y n + f α n ( y n ) , y n t n ) = x n u 2 x n y n 2 2 x n y n , y n t n y n t n 2 + 2 λ n ( α n u , u y n + f α n ( y n ) , y n t n ) 2 λ n ( 1 μ n ) ( f α n ( t n ) f α n ( u ) , t n u + f α n ( u ) , t n u ) x n u 2 x n y n 2 y n t n 2 + 2 x n λ n f α n ( y n ) y n , t n y n + 2 λ n α n ( u , u y n + ( 1 μ n ) u , u t n ) x n u 2 x n y n 2 y n t n 2 + 2 x n λ n f α n ( y n ) y n , t n y n + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] .

Since y n = P C ( x n λ n μ n f α n ( x n ) λ n (1 μ n ) f α n ( y n )) and f α n ( α n +L)-Lipschitz continuous, by Proposition 2.1(i) we have

x n λ n f α n ( y n ) y n , t n y n = x n λ n μ n f α n ( x n ) λ n ( 1 μ n ) f α n ( y n ) y n , t n y n + λ n μ n f α n ( x n ) f α n ( y n ) , t n y n λ n μ n f α n ( x n ) f α n ( y n ) , t n y n λ n μ n f α n ( x n ) f α n ( y n ) t n y n λ n ( α n + L ) x n y n t n y n .

So, we have

t n u 2 x n u 2 x n y n 2 y n t n 2 + 2 λ n ( α n + L ) x n y n t n y n + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 x n y n 2 y n t n 2 + λ n 2 ( α n + L ) 2 x n y n 2 + t n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] = x n u 2 + ( λ n 2 ( α n + L ) 2 1 ) x n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] .
(3.8)

Therefore, from (3.7) and (3.8), together with z n = γ n x n +(1 γ n ) W n t n and u= W n u, by Lemma 2.4(b) we have

z n u 2 = γ n ( x n u ) + ( 1 γ n ) ( W n t n u ) 2 γ n x n u 2 + ( 1 γ n ) W n t n u 2 γ n x n u 2 + ( 1 γ n ) t n u 2 γ n x n u 2 + ( 1 γ n ) { x n u 2 + ( λ n 2 ( α n + L ) 2 1 ) x n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] } x n u 2 + ( 1 γ n ) ( λ n 2 ( α n + L ) 2 1 ) x n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 + α n [ λ n 2 u 2 + ( y n u + ( 1 μ n ) t n u ) 2 ] x n u 2 + α n [ 1 L 2 u 2 + 18 μ n 4 x n u 2 + 18 μ n 4 u 2 ] = x n u 2 + α n 18 μ n 4 [ x n u 2 + ( 1 + μ n 4 18 L 2 ) u 2 ] x n u 2 + α n 18 μ n 4 [ x n u 2 + ( 1 + 1 18 L 2 ) u 2 ] x n u 2 + α n 18 μ n 4 Δ n = x n u 2 + θ n ,
(3.9)

which implies that $u\in C_n$. Therefore,

$$\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_n,\quad \forall n\ge 1,$$

and this completes the proof. □

Proposition 3.2 The sequences { x n }, { y n }, { t n } and { z n } are all bounded.

Proof Since $u\in\Gamma$ and $x_n\in C$ for all $n\ge 1$, from the monotonicity of $\nabla f$ we have

$$\langle\nabla f(u),\, x_n-u\rangle\ge 0\quad\text{and}\quad\langle\nabla f(x_n)-\nabla f(u),\, x_n-u\rangle\ge 0,\quad \forall n\ge 1,$$

which hence implies that

$$\langle\nabla f(x_n),\, x_n-u\rangle\ge\langle\nabla f(u),\, x_n-u\rangle\ge 0,\quad \forall n\ge 1.$$
(3.10)

Note that $x_{n+1}=P_{C_n}((1-\beta_n)x_n+e_n-\sigma_n\nabla f(x_{n+1}))$ is equivalent to the inequality

$$\big\langle(1-\beta_n)x_n-x_{n+1}+e_n-\sigma_n\nabla f(x_{n+1}),\; x_{n+1}-x\big\rangle\ge 0,\quad \forall x\in C_n.$$

Taking x=u in the last inequality, we deduce

$$\big\langle(1-\beta_n)x_n-x_{n+1}+e_n-\sigma_n\nabla f(x_{n+1}),\; x_{n+1}-u\big\rangle\ge 0,$$

which implies that

$$\big\langle(1-\beta_n)x_n-x_{n+1}+e_n,\; x_{n+1}-u\big\rangle\ge\sigma_n\big\langle\nabla f(x_{n+1}),\; x_{n+1}-u\big\rangle.$$
(3.11)

From (3.10) and (3.11) we get

$$\big\langle(1-\beta_n)x_n-x_{n+1}+e_n,\; x_{n+1}-u\big\rangle\ge 0,$$

which can be rewritten as

$$\big\langle(1-\beta_n)(x_n-u)-(x_{n+1}-u)-\beta_nu+e_n,\; x_{n+1}-u\big\rangle\ge 0.$$
(3.12)

It follows that

$$\begin{aligned}\|x_{n+1}-u\|^2&\le\big\langle(1-\beta_n)(x_n-u)-\beta_nu+e_n,\; x_{n+1}-u\big\rangle\\ &\le(1-\beta_n)\|x_n-u\|\,\|x_{n+1}-u\|+\beta_n\|u\|\,\|x_{n+1}-u\|+\|e_n\|\,\|x_{n+1}-u\|.\end{aligned}$$

Hence,

$$\|x_{n+1}-u\|\le(1-\beta_n)\|x_n-u\|+\beta_n\|u\|+\|e_n\|\le\max\big\{\|x_n-u\|,\|u\|\big\}+\|e_n\|.$$
(3.13)

By induction, we can obtain

$$\|x_{n+1}-u\|\le\max\big\{\|x_1-u\|,\|u\|\big\}+\sum_{i=1}^{n}\|e_i\|.$$

Since $\sum_{n=1}^{\infty}\|e_n\|<\infty$, we immediately conclude that the sequence $\{x_n\}$ is bounded. Thus, from $\alpha_n\to 0$, $\mu_n\to 1$, $\lambda_n(\alpha_n+L)<1$, (3.5), (3.6) and (3.9) it follows that $\{y_n\}$, $\{t_n\}$ and $\{z_n\}$ are bounded. This completes the proof. □

Proposition 3.3 The following statements hold:

  1. (i)

    $\lim_{n\to\infty}\|x_n-u\|$ exists for each $u\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$;

  2. (ii)

    $\lim_{n\to\infty}\|x_n-x_{n+1}\|=\lim_{n\to\infty}\|x_n-y_n\|=\lim_{n\to\infty}\|x_n-z_n\|=\lim_{n\to\infty}\|x_n-t_n\|=0$;

  3. (iii)

    $\lim_{n\to\infty}\|x_n-W_nx_n\|=\lim_{n\to\infty}\|x_n-Wx_n\|=0$.

Proof For each u n = 1 Fix( S n )Γ, we get from (3.13)

x n + 1 u ( 1 β n ) x n u + β n u + e n x n u + β n u + e n .

Since the conditions n = 1 β n < and n = 1 e n < lead to n = 1 ( β n u+ e n )<, by Corollary 2.1, we know that lim n x n u exists for each u n = 1 Fix( S n )Γ. Note that by Lemma 2.4(a) we have from (3.12)

x n x n + 1 2 = x n u 2 x n + 1 u 2 + 2 x n + 1 x n , x n + 1 u x n u 2 x n + 1 u 2 + 2 β n x n + e n , x n + 1 u x n u 2 x n + 1 u 2 + 2 ( β n x n + e n ) x n + 1 u .

Since β n 0 and e n 0 as n, from the existence of lim n x n u and the boundedness of { x n }, we obtain that

lim n x n x n + 1 =0.

Since x n + 1 C n , it follows that

z n x n + 1 2 x n x n + 1 2 + θ n ,

which implies that

z n x n + 1 x n x n + 1 + θ n ,

and hence

x n z n x n x n + 1 + x n + 1 z n 2 x n + 1 x n + θ n 0 .

For each u n = 1 Fix( S n )Γ, from (3.9) we have

( 1 γ n ) ( 1 λ n 2 ( α n + L ) 2 ) x n y n 2 x n u 2 z n u 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] .

Since { γ n }[0,c], { λ n }[a,b], 1 b 2 L 2 >0, α n 0 and x n z n 0 as n, from the boundedness of { x n }, { y n }, { t n } and { z n } we conclude that

x n y n 1 ( 1 γ n ) ( 1 λ n 2 ( α n + L ) 2 ) { ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] } 1 ( 1 c ) ( 1 b 2 ( α n + L ) 2 ) { ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] } 0 .

Utilizing the arguments similar to those in (3.8),

t n u 2 x n u 2 x n y n 2 y n t n 2 + 2 λ n ( α n + L ) x n y n t n y n + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 x n y n 2 y n t n 2 + λ n 2 ( α n + L ) 2 t n y n 2 + x n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] = x n u 2 + ( λ n 2 ( α n + L ) 2 1 ) t n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] .

Hence,

z n u 2 γ n x n u 2 + ( 1 γ n ) t n u 2 γ n x n u 2 + ( 1 γ n ) { x n u 2 + ( λ n 2 ( α n + L ) 2 1 ) t n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] } x n u 2 + ( 1 γ n ) ( λ n 2 ( α n + L ) 2 1 ) t n y n 2 + 2 λ n α n u [ y n u + t n u ] .

It follows that

( 1 γ n ) ( 1 λ n 2 ( α n + L ) 2 ) t n y n 2 x n u 2 z n u 2 + 2 λ n α n u [ y n u + t n u ] ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] .

Since { γ n }[0,c], { λ n }[a,b], 1 b 2 L 2 >0, α n 0 and x n z n 0 as n, from the boundedness of { x n }, { y n }, { t n } and { z n } we deduce that

t n y n 2 1 ( 1 γ n ) ( 1 λ n 2 ( α n + L ) 2 ) { ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] } 1 ( 1 c ) ( 1 b 2 ( α n + L ) 2 ) { ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] } 0 .

Taking into consideration that

x n t n x n y n + y n t n ,

we also have

lim n x n t n =0.

Since z n = γ n x n +(1 γ n ) W n t n , we have

(1 γ n )( W n t n t n )= γ n ( t n x n )+ z n t n .

Then

( 1 c ) W n t n t n ( 1 γ n ) W n t n t n γ n t n x n + z n t n ( 1 + γ n ) t n x n + z n x n ,

and hence t n W n t n 0. Observe also that

x n W n x n x n t n + t n W n t n + W n t n W n x n x n t n + t n W n t n + t n x n 2 x n t n + t n W n t n .

So, we have x n W n x n 0. On the other hand, since { x n } is bounded, from Lemma 2.8, we have lim n W n x n W x n =0. Therefore, we obtain

lim n x n W x n =0,

and this completes the proof. □

Proposition 3.4 $\omega_w(x_n)\subset\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$.

Proof By Proposition 3.3(iii), we know that

$$\lim_{n\to\infty}\|x_n-Wx_n\|=0.$$

Take $\hat{u}\in\omega_w(x_n)$ arbitrarily. Then there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i}\rightharpoonup\hat{u}$; hence, we have $\lim_{i\to\infty}\|x_{n_i}-Wx_{n_i}\|=0$. Note that from Lemma 2.9 it follows that $I-W$ is demiclosed at zero. Thus, $\hat{u}\in\mathrm{Fix}(W)$. Now, let us show $\hat{u}\in\Gamma$. Since $\|x_n-t_n\|\to 0$ and $\|x_n-y_n\|\to 0$, we have $t_{n_i}\rightharpoonup\hat{u}$ and $y_{n_i}\rightharpoonup\hat{u}$. Let

$$Tv=\begin{cases}\nabla f(v)+N_Cv & \text{if } v\in C,\\ \emptyset & \text{if } v\notin C,\end{cases}$$

where $N_Cv$ is the normal cone to $C$ at $v\in C$. We have already mentioned that in this case the mapping $T$ is maximal monotone, and $0\in Tv$ if and only if $v\in\mathrm{VI}(C,\nabla f)$; see [10] for more details. Let $G(T)$ be the graph of $T$ and let $(v,w)\in G(T)$. Then we have $w\in Tv=\nabla f(v)+N_Cv$ and hence $w-\nabla f(v)\in N_Cv$. So, we have $\langle v-t,\, w-\nabla f(v)\rangle\ge 0$ for all $t\in C$. On the other hand, from $t_n=P_C(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(t_n))$ and $v\in C$, we have

$$\big\langle x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(t_n)-t_n,\; t_n-v\big\rangle\ge 0$$

and hence

$$\Big\langle v-t_n,\; \frac{t_n-x_n}{\lambda_n}+\nabla f_{\alpha_n}(y_n)+(1-\mu_n)\nabla f_{\alpha_n}(t_n)\Big\rangle\ge 0.$$

Therefore, from $\langle v-t,\, w-\nabla f(v)\rangle\ge 0$ for all $t\in C$ and $t_{n_i}\in C$, we have

v t n i , w v t n i , f ( v ) v t n i , f ( v ) v t n i , t n i x n i λ n i + f α n i ( y n i ) + ( 1 μ n i ) f α n i ( t n i ) = v t n i , f ( v ) v t n i , t n i x n i λ n i + f ( y n i ) α n i v t n i , y n i ( 1 μ n i ) v t n i , f α n i ( t n i ) = v t n i , f ( v ) f ( t n i ) + v t n i , f ( t n i ) f ( y n i ) v t n i , t n i x n i λ n i α n i v t n i , y n i ( 1 μ n i ) v t n i , f α n i ( t n i ) v t n i , f ( t n i ) f ( y n i ) v t n i , t n i x n i λ n i α n i v t n i , y n i ( 1 μ n i ) v t n i , f α n i ( t n i ) .

Since $\nabla f(t_n)-\nabla f(y_n)\to 0$ (due to the Lipschitz continuity of $\nabla f$), $\frac{t_n-x_n}{\lambda_n}\to 0$ (due to $\{\lambda_n\}\subset[a,b]$), $\alpha_n\to 0$ and $\mu_n\to 1$, we obtain $\langle v-\hat{u},\, w\rangle\ge 0$ as $i\to\infty$. Since $T$ is maximal monotone, we have $\hat{u}\in T^{-1}0$ and hence $\hat{u}\in\mathrm{VI}(C,\nabla f)$. Clearly, $\hat{u}\in\Gamma$. Consequently, $\hat{u}\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$. This implies that $\omega_w(x_n)\subset\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$. □

Finally, according to Propositions 3.1-3.4, we prove the remainder of Theorem 3.1.

Proof It is sufficient to show that $\omega_w(x_n)$ is a single-point set, because $\|x_n-y_n\|\to 0$ and $\|x_n-z_n\|\to 0$ as $n\to\infty$. Since $\omega_w(x_n)\neq\emptyset$, let us take two points $u,\hat{u}\in\omega_w(x_n)$ arbitrarily. Then there exist two subsequences $\{x_{n_j}\}$ and $\{x_{m_k}\}$ of $\{x_n\}$ such that $x_{n_j}\rightharpoonup u$ and $x_{m_k}\rightharpoonup\hat{u}$, respectively. By Proposition 3.4, we know that $u,\hat{u}\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$. Meanwhile, according to Proposition 3.3(i), both $\lim_{n\to\infty}\|x_n-u\|$ and $\lim_{n\to\infty}\|x_n-\hat{u}\|$ exist. Let us show that $u=\hat{u}$. Assume that $u\neq\hat{u}$. From the Opial condition [39] it follows that

$$\begin{aligned}\lim_{n\to\infty}\|x_n-u\|&=\liminf_{j\to\infty}\|x_{n_j}-u\|<\liminf_{j\to\infty}\|x_{n_j}-\hat{u}\|=\lim_{n\to\infty}\|x_n-\hat{u}\|\\ &=\liminf_{k\to\infty}\|x_{m_k}-\hat{u}\|<\liminf_{k\to\infty}\|x_{m_k}-u\|=\lim_{n\to\infty}\|x_n-u\|.\end{aligned}$$

This leads to a contradiction. Thus, we must have $u=\hat{u}$. This implies that $\omega_w(x_n)$ is a singleton. Without loss of generality, we may write $\omega_w(x_n)=\{u\}$. Consequently, by Lemma 2.2 we obtain that $x_n\rightharpoonup u\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$. Since $\|x_n-y_n\|\to 0$ and $\|x_n-z_n\|\to 0$ as $n\to\infty$, we also have that $y_n\rightharpoonup u$ and $z_n\rightharpoonup u$. This completes the proof. □

Remark 3.2 Our Theorem 3.1 improves, extends, supplements and develops Nadezhkina and Takahashi [[9], Theorem 3.1] and Ceng et al. [[11], Theorem 3.1] in the following aspects.

  1. (i)

The combination of the problem of finding an element of $\mathrm{Fix}(S)\cap\mathrm{VI}(C,A)$ in [[9], Theorem 3.1] and the problem of finding an element of $\bigcap_{i=1}^{N}\mathrm{Fix}(S_i)\cap\mathrm{VI}(C,A)$ in [[11], Theorem 3.1] is extended to the problem of finding an element of $\bigcap_{i=1}^{\infty}\mathrm{Fix}(S_i)\cap\Gamma$ in our Theorem 3.1.

  2. (ii)

Our Theorem 3.1 drops the condition $\liminf_{n\to\infty}\langle Ax_n,\, x-x_n\rangle\ge 0$, $\forall x\in C$, required in [[11], Theorem 3.1].

  3. (iii)

    The iterative scheme in [[11], Theorem 3.1] is extended to develop the iterative scheme (3.1) of our Theorem 3.1 by virtue of the iterative scheme of [[9], Theorem 3.1]. The iterative scheme (3.1) of our Theorem 3.1 is more advantageous and more flexible than the iterative scheme of [[11], Theorem 3.1] because it involves several parameter sequences { λ n }, { μ n }, { α n }, { β n }, { γ n }, { σ n } and { e n }.

  4. (iv)

The iterative scheme (3.1) in our Theorem 3.1 is very different from those in [[9], Theorem 3.1] and [[11], Theorem 3.1] because the final iteration steps of computing $x_{n+1}$ in [[9], Theorem 3.1] and [[11], Theorem 3.1] are replaced by the implicit iteration step $x_{n+1}=P_{C_n}((1-\beta_n)x_n+e_n-\sigma_n\nabla f(x_{n+1}))$ in the iterative scheme (3.1) of our Theorem 3.1.

  5. (v)

The argument of our Theorem 3.1 combines the arguments of [[9], Theorem 3.1] and [[11], Theorem 3.1]. Because the problem of finding an element of $\bigcap_{i=1}^{\infty}\mathrm{Fix}(S_i)\cap\Gamma$ in our Theorem 3.1 involves a countable family of nonexpansive mappings $\{S_n\}$, the proof of our Theorem 3.1 depends on the properties of the $W$-mapping (see Lemmas 2.6-2.8 of Section 2 in this paper). Therefore, the proof of our Theorem 3.1 is very different from the proofs of [[9], Theorem 3.1] and [[11], Theorem 3.1].

4 Strong convergence theorem

In this section, we prove a strong convergence theorem via an implicit hybrid method with regularization for finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings $\{S_n\}_{n=1}^{\infty}$ and the set of solutions of MP (1.1) for a convex functional $f:C\to\mathbf{R}$ with an $L$-Lipschitz continuous gradient $\nabla f$. This implicit hybrid method with regularization is based on the CQ method, the extragradient method and the gradient projection algorithm (GPA) with regularization.

Theorem 4.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $W_n$ be a $W$-mapping defined by (2.1), let $\nabla f:C\to H$ be an $L$-Lipschitz continuous mapping with $L>0$, and let $\{S_n\}_{n=1}^{\infty}$ be an infinite family of nonexpansive mappings of $C$ into itself such that $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$ is nonempty and bounded. Let $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ be the sequences generated by

$$\begin{cases} x_1=x\in C \text{ chosen arbitrarily},\\ y_n=P_C\big(x_n-\lambda_n\mu_n\nabla f_{\alpha_n}(x_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(y_n)\big),\\ t_n=P_C\big(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-\lambda_n(1-\mu_n)\nabla f_{\alpha_n}(t_n)\big),\\ z_n=\gamma_nx_n+(1-\gamma_n)W_nt_n,\\ C_n=\{z\in C:\|z_n-z\|^2\le\|x_n-z\|^2+\theta_n\},\\ Q_n=\{z\in C:\langle x_n-z,\, x-x_n\rangle\ge 0\},\\ x_{n+1}=P_{C_n\cap Q_n}x,\quad n\ge 1, \end{cases}$$
(4.1)

where $\theta_n=\frac{18\alpha_n}{\mu_n^{4}}\Delta_n$ and

$$\Delta_n=\sup\Big\{\|x_n-z\|^2+\Big(1+\frac{1}{18L^2}\Big)\|z\|^2:z\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\Big\}<\infty.$$

Assume the following conditions hold:

  1. (i)

    $\{\alpha_n\}\subset(0,\infty)$ and $\lim_{n\to\infty}\alpha_n=0$;

  2. (ii)

    $\{\gamma_n\}\subset[0,c]$ for some $c\in[0,1)$;

  3. (iii)

    $\{\mu_n\}\subset(0,1]$ and $\lim_{n\to\infty}\mu_n=1$;

  4. (iv)

    $\lambda_n(\alpha_n+L)<1$, $\forall n\ge 1$, and $\{\lambda_n\}\subset[a,b]$ for some $a,b\in(0,1/L)$.

Then the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ generated by (4.1) converge strongly to the same point $P_{\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma}x$.

Proof Utilizing the condition $\lambda_n(\alpha_n+L)<1$, $\forall n\ge 1$, and repeating the same arguments as in Remark 3.1, we can see that $\{y_n\}$, $\{t_n\}$ and $\{z_n\}$ are well defined. Note that the $L$-Lipschitz continuity of the gradient $\nabla f$ implies that $\nabla f$ is $\frac{1}{L}$-ism [31], that is,

$$\langle\nabla f(x)-\nabla f(y),\, x-y\rangle\ge\frac{1}{L}\|\nabla f(x)-\nabla f(y)\|^2,\quad \forall x,y\in C.$$

Repeating the same arguments as in the proof of Proposition 3.1, we know that $\nabla f_{\alpha}=\alpha I+\nabla f$ is $1/(\alpha+L)$-ism. It is clear that $C_n$ is closed and $Q_n$ is closed and convex for every $n=1,2,\ldots$. As the defining inequality in $C_n$ is equivalent to the inequality

$$2\langle x_n-z_n,\, z\rangle\le\|x_n\|^2-\|z_n\|^2+\theta_n,$$

by Lemma 2.5 we also have that $C_n$ is convex for every $n=1,2,\ldots$. As $Q_n=\{z\in C:\langle x_n-z,\, x-x_n\rangle\ge 0\}$, we have $\langle x_n-z,\, x-x_n\rangle\ge 0$ for all $z\in Q_n$, and by Proposition 2.1(i) we get $x_n=P_{Q_n}x$.
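Computationally, note that after expanding the squares the inequality defining $C_n$ is linear in $z$ (this is exactly the equivalent inequality displayed above), and the inequality defining $Q_n$ is linear in $z$ as well; thus, when $C$ is the whole space, both sets are half-spaces and $x_{n+1}=P_{C_n\cap Q_n}x$ can be computed, for instance, by Dykstra's alternating projection method. The following sketch is my own illustration of that idea (not the authors' implementation; the names are assumptions):

```python
import numpy as np

def project_halfspace(x, a, beta):
    """Projection onto the half-space {z : <a, z> <= beta}."""
    violation = a @ x - beta
    return x if violation <= 0 else x - (violation / (a @ a)) * a

def project_intersection(x, halfspaces, iters=500):
    """Dykstra's algorithm for the projection onto an intersection of half-spaces."""
    corrections = [np.zeros_like(x) for _ in halfspaces]
    z = np.asarray(x, dtype=float).copy()
    for _ in range(iters):
        for i, (a, beta) in enumerate(halfspaces):
            y = project_halfspace(z + corrections[i], a, beta)
            corrections[i] = z + corrections[i] - y
            z = y
    return z

# With C the whole space, scheme (4.1) takes
#   C_n = {z : 2<x_n - z_n, z> <= ||x_n||^2 - ||z_n||^2 + theta_n},
#   Q_n = {z : <x - x_n, z> <= <x - x_n, x_n>},
# so x_{n+1} = project_intersection(x, [(a_C, b_C), (a_Q, b_Q)]) with this data.
```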

We divide the rest of the proof into several steps.

Step 1. { x n }, { y n } and { z n } are bounded.

Indeed, take $u\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$ arbitrarily. Taking into account $\lambda_n(\alpha_n+L)<1$, $\forall n\ge 1$, and repeating the same arguments as in (3.5) and (3.6), we deduce that

$$\|y_n-u\|\le\frac{1}{\mu_n}\big(\|x_n-u\|+\lambda_n\alpha_n\|u\|\big)$$
(4.2)

and

$$\|t_n-u\|\le\frac{1+2\mu_n}{\mu_n^{2}}\big(\|x_n-u\|+\lambda_n\alpha_n\|u\|\big).$$
(4.3)

Thus, from (4.2) and (4.3) it follows that

$$\|y_n-u\|+(1-\mu_n)\|t_n-u\|\le\frac{3}{\mu_n^{2}}\big(\|x_n-u\|+\lambda_n\alpha_n\|u\|\big),$$

which together with λ n ( α n +L)<1 implies that

$$\big[\|y_n-u\|+(1-\mu_n)\|t_n-u\|\big]^2\le\frac{9}{\mu_n^{4}}\big(\|x_n-u\|+\lambda_n\alpha_n\|u\|\big)^2\le\frac{9}{\mu_n^{4}}\big(2\|x_n-u\|^2+2\|u\|^2\big)=\frac{18}{\mu_n^{4}}\|x_n-u\|^2+\frac{18}{\mu_n^{4}}\|u\|^2.$$

Repeating the same arguments as in (3.8) and (3.9), we can deduce that

t n u 2 x n u 2 x n y n 2 y n t n 2 + 2 λ n ( α n + L ) x n y n t n y n + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 + ( λ n 2 ( α n + L ) 2 1 ) x n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ]
(4.4)

and

z n u 2 γ n x n u 2 + ( 1 γ n ) t n u 2 x n u 2 + ( 1 γ n ) ( λ n 2 ( α n + L ) 2 1 ) x n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 + α n [ λ n 2 u 2 + ( y n u + ( 1 μ n ) t n u ) 2 ] x n u 2 + α n 18 μ n 4 [ x n u 2 + ( 1 + 1 18 L 2 ) u 2 ] x n u 2 + α n 18 μ n 4 Δ n = x n u 2 + θ n
(4.5)

for every $n=1,2,\ldots$ and hence $u\in C_n$. So,

$$\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_n,\quad \forall n\ge 1.$$

Next, let us show by mathematical induction that $\{x_n\}$ is well defined and $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_n\cap Q_n$ for every $n=1,2,\ldots$. For $n=1$ we have $Q_1=C$. Hence we obtain $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_1\cap Q_1$. Suppose that $x_k$ is given and $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_k\cap Q_k$ for some integer $k\ge 1$. Since $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$ is nonempty, $C_k\cap Q_k$ is a nonempty closed convex subset of $C$. So, there exists a unique element $x_{k+1}\in C_k\cap Q_k$ such that $x_{k+1}=P_{C_k\cap Q_k}x$. It is also obvious that $\langle x_{k+1}-z,\, x-x_{k+1}\rangle\ge 0$ holds for every $z\in C_k\cap Q_k$. Since $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_k\cap Q_k$, we have $\langle x_{k+1}-z,\, x-x_{k+1}\rangle\ge 0$ for every $z\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$ and hence $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset Q_{k+1}$. Therefore, we obtain $\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_{k+1}\cap Q_{k+1}$.

Step 2. $\lim_{n\to\infty}\|x_n-x_{n+1}\|=0$ and $\lim_{n\to\infty}\|x_n-z_n\|=0$.

Indeed, let $q=P_{\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma}x$. From $x_{n+1}=P_{C_n\cap Q_n}x$ and $q\in\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma\subset C_n\cap Q_n$, we have

$$\|x_{n+1}-x\|\le\|q-x\|$$
(4.6)

for every n=1,2, . Therefore, { x n } is bounded. From (4.2), (4.3) and (4.5) we also obtain that { y n }, { t n } and { z n } are bounded. Since x n + 1 C n Q n Q n and x n = P Q n x, we have

x n x x n + 1 x

for every n=1,2, . Therefore, there exists lim n x n x. Since x n = P Q n x and x n + 1 Q n , using Proposition 2.1(ii), we have

x n + 1 x n 2 x n + 1 x 2 x n x 2

for every n=1,2, . This implies that

lim n x n + 1 x n =0.

Since x n + 1 C n , we have

z n x n + 1 2 x n x n + 1 2 + θ n ,

which implies that

z n x n + 1 x n x n + 1 + θ n .

Hence we get

x n z n x n x n + 1 + x n + 1 z n 2 x n + 1 x n + θ n

for every n=1,2, . From x n + 1 x n 0 and θ n 0, we conclude that x n z n 0 as n.

Step 3. $\lim_{n\to\infty}\|x_n-y_n\|=\lim_{n\to\infty}\|x_n-t_n\|=0$ and $\lim_{n\to\infty}\|x_n-W_nx_n\|=\lim_{n\to\infty}\|x_n-Wx_n\|=0$.

Indeed, since { γ n }[0,c], { λ n }[a,b], 1 b 2 L 2 >0, α n 0 and x n z n 0 as n, from the boundedness of { x n }, { y n }, { t n } and { z n } we conclude from (4.5) that

x n y n 2 1 ( 1 γ n ) ( 1 λ n 2 ( α n + L ) 2 ) { x n u 2 z n u 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] } 1 ( 1 γ n ) ( 1 λ n 2 ( α n + L ) 2 ) { ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] } 1 ( 1 c ) ( 1 b 2 ( α n + L ) 2 ) { ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] } 0 .

Utilizing the arguments similar to those in (4.4),

t n u 2 x n u 2 x n y n 2 y n t n 2 + 2 λ n ( α n + L ) x n y n t n y n + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] x n u 2 + ( λ n 2 ( α n + L ) 2 1 ) t n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] .

Hence,

z n u 2 γ n x n u 2 + ( 1 γ n ) t n u 2 γ n x n u 2 + ( 1 γ n ) { x n u 2 + ( λ n 2 ( α n + L ) 2 1 ) t n y n 2 + 2 λ n α n u [ y n u + ( 1 μ n ) t n u ] } x n u 2 + ( 1 γ n ) ( λ n 2 ( α n + L ) 2 1 ) t n y n 2 + 2 λ n α n u [ y n u + t n u ] .

Since { γ n }[0,c], { λ n }[a,b], 1 b 2 L 2 >0, α n 0 and x n z n 0 as n, from the boundedness of { x n }, { y n }, { t n } and { z n } we deduce that

t n y n 2 1 ( 1 γ n ) ( 1 λ n 2 ( α n + L ) 2 ) { x n u 2 z n u 2 + 2 λ n α n u [ y n u + t n u ] } 1 ( 1 γ n ) ( 1 λ n 2 ( α n + L ) 2 ) { ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] } 1 ( 1 c ) ( 1 b 2 ( α n + L ) 2 ) { ( x n u + z n u ) x n z n + 2 λ n α n u [ y n u + t n u ] } 0 .

Taking into consideration that

x n t n x n y n + y n t n ,

we also have

lim n x n t n =0.

Since (1 γ n )( W n t n t n )= γ n ( t n x n )+ z n t n , we get

( 1 c ) W n t n t n ( 1 γ n ) W n t n t n γ n t n x n + z n t n ( 1 + γ n ) t n x n + z n x n ,

and hence t n W n t n 0. Observe also that

x n W n x n x n t n + t n W n t n + W n t n W n x n 2 x n t n + t n W n t n .

So, we have x n W n x n 0. On the other hand, since { x n } is bounded, from Lemma 2.8, we have lim n W n x n W x n =0. Therefore, we obtain

lim n x n W x n =0.

Step 4. $\omega_w(x_n)\subset\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma$.

Indeed, repeating the same arguments as in the proof of Proposition 3.4, we can derive the desired conclusion.

Step 5. $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ converge strongly to $q=P_{\bigcap_{n=1}^{\infty}\mathrm{Fix}(S_n)\cap\Gamma}x$.

Indeed, take u ˆ ω w ( x n ) arbitrarily. Then u ˆ n = 1 Fix( S n )Γ according to Step 4. Moreover, there exists a subsequence { x n i } of { x n } such that x n i u ˆ . Hence, from q= P n = 1 Fix ( S n ) Γ x, u ˆ n = 1 Fix( S n )Γ, and (4.6), we have

qx u ˆ x lim inf i x n i x lim sup i x n i xqx.

So, we obtain

lim i x n i x= u ˆ x.

From x n i x u ˆ x we have x n i x u ˆ x due to the Kadec-Klee property of Hilbert spaces [36]. So, it is clear that x n i u ˆ . Since x n = P Q n x and q n = 1 Fix( S n )Γ C n Q n Q n , we have

q x n i 2 =q x n i , x n i x+q x n i ,xqq x n i ,xq.

As i, we obtain q u ˆ 2 q u ˆ ,xq0 by q= P n = 1 Fix ( S n ) Γ x and u ˆ n = 1 Fix( S n )Γ. Hence we have u ˆ =q. This implies that x n q. It is easy to see that y n q and z n q. This completes the proof. □

Remark 4.1 Our Theorem 4.1 improves, extends, supplements, and develops Nadezhkina and Takahashi [[9], Theorem 3.1] and Ceng et al. [[11], Theorem 3.1] in the following aspects.

  1. (i)

The combination of the problem of finding an element of $\mathrm{Fix}(S)\cap\mathrm{VI}(C,A)$ in [[9], Theorem 3.1] and the problem of finding an element of $\bigcap_{i=1}^{N}\mathrm{Fix}(S_i)\cap\mathrm{VI}(C,A)$ in [[11], Theorem 3.1] is extended to the problem of finding an element of $\bigcap_{i=1}^{\infty}\mathrm{Fix}(S_i)\cap\Gamma$ in our Theorem 4.1.

  2. (ii)

Our Theorem 4.1 is a strong convergence result and drops the condition $\liminf_{n\to\infty}\langle Ax_n,\, x-x_n\rangle\ge 0$, $\forall x\in C$, required in [[11], Theorem 3.1].

  3. (iii)

    The iterative scheme in [[9], Theorem 3.1] is extended to develop the iterative scheme (4.1) of our Theorem 4.1 by virtue of the iterative scheme of [[11], Theorem 3.1]. The iterative scheme (4.1) of our Theorem 4.1 is more advantageous and more flexible than the iterative scheme of [[9], Theorem 3.1] because it involves several parameter sequences { λ n }, { μ n }, { α n } and { γ n }.

  4. (iv)

The iterative scheme (4.1) in our Theorem 4.1 is very different from those in [[9], Theorem 3.1] and [[11], Theorem 3.1] because the two explicit iteration steps of computing $y_n$ and $z_n$ in [[9], Theorem 3.1] and [[11], Theorem 3.1] are replaced by three iteration steps involving two implicit steps in the iterative scheme (4.1) of our Theorem 4.1.

  5. (v)

The argument of our Theorem 4.1 combines the arguments of [[9], Theorem 3.1] and [[11], Theorem 3.1]. Because the problem of finding an element of $\bigcap_{i=1}^{\infty}\mathrm{Fix}(S_i)\cap\Gamma$ in our Theorem 4.1 involves a countable family of nonexpansive mappings $\{S_n\}$, the proof of our Theorem 4.1 depends on the properties of the $W$-mapping (see Lemmas 2.6-2.8 of Section 2 in this paper). Therefore, the proof of our Theorem 4.1 is very different from the proofs of [[9], Theorem 3.1] and [[11], Theorem 3.1].

References

  1. Browder FE, Petryshyn WV: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20: 197–228. 10.1016/0022-247X(67)90085-6


  2. Lions JL: Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires. Dunod, Paris; 1969.


  3. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.


  4. Takahashi W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama; 2000.


  5. Oden JT: Quantitative Methods on Nonlinear Mechanics. Prentice Hall, Englewood Cliffs; 1986.


  6. Zeidler E: Nonlinear Functional Analysis and Its Applications. Springer, New York; 1985.


  7. Takahashi W, Zembayashi K: Strong convergence theorem by a new hybrid method for equilibrium problems and relatively nonexpansive mappings. Fixed Point Theory Appl. 2009., 2009: Article ID 719360. doi:10.1155/2009/719360


  8. Cholamjiak P: A hybrid iterative scheme for equilibrium problems, variational inequality problems, and fixed point problems in Banach spaces. Fixed Point Theory Appl. 2008., 2008: Article ID 528476


  9. Nadezhkina N, Takahashi W: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 2006, 16: 1230–1241. 10.1137/050624315


  10. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056


  11. Ceng LC, Teboulle M, Yao JC: Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed point problems. J. Optim. Theory Appl. 2010, 146: 19–31. 10.1007/s10957-010-9650-0


  12. Liu F, Nashed MZ: Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Anal. 1998, 6: 313–344. 10.1023/A:1008643727926


  13. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560


  14. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z


  15. Zeng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10(5):1293–1303.


  16. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218: 1112–1123. 10.1016/j.amc.2011.01.061


  17. Iiduka H, Takahashi W: Strong convergence theorem by a hybrid method for nonlinear mappings of nonexpansive and monotone type and applications. Adv. Nonlinear Var. Inequal. 2006, 9: 1–10.


  18. Yao Y, Noor MA: On viscosity iterative methods for variational inequalities. J. Math. Anal. Appl. 2007, 325: 776–787. 10.1016/j.jmaa.2006.01.091


  19. Ceng LC, Yao JC: An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl. Math. Comput. 2007, 190: 205–215. 10.1016/j.amc.2007.01.021


  20. Ceng LC, Yao JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10: 1293–1303.


  21. Ceng LC, Al-Homidan S, Ansari QH, Yao JC: An iterative scheme for equilibrium problems and fixed point problems of strict pseudo-contraction mappings. J. Comput. Appl. Math. 2009, 223: 967–974. 10.1016/j.cam.2008.03.032


  22. Yao Y, Yao JC: On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186: 1551–1558. 10.1016/j.amc.2006.08.062


  23. Ceng LC, Wong NC, Yao JC: Fixed point solutions of variational inequalities for a finite family of asymptotically nonexpansive mappings without common fixed point assumption. Comput. Math. Appl. 2008, 56: 2312–2322. 10.1016/j.camwa.2008.05.002


  24. Yao Y, Postolache M: Iterative methods for pseudomonotone variational inequalities and fixed-point problems. J. Optim. Theory Appl. 2012, 155: 273–287. 10.1007/s10957-012-0055-0


  25. Ceng LC, Yao JC: A viscosity relaxed-extragradient method for monotone variational inequalities and fixed point problems. J. Math. Inequal. 2007, 1(2):225–241.


  26. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z


  27. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018


  28. Ceng LC, Yao JC: Approximate proximal methods in vector optimization. Eur. J. Oper. Res. 2007, 183: 1–19. 10.1016/j.ejor.2006.09.070


  29. Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64(4):633–642. 10.1016/j.camwa.2011.12.074


  30. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75(4):2116–2125. 10.1016/j.na.2011.10.012


  31. Ceng LC, Wong NC, Yao JC: Strong and weak convergence theorems for an infinite family of nonexpansive mappings and applications. J. Fixed Point Theory Appl. 2012., 2012: Article ID 117


  32. Ceng LC, Hadjisavvas N, Wong NC: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46: 635–646. 10.1007/s10898-009-9454-7


  33. Sahu DR, Xu HK, Yao JC: Asymptotically strict pseudocontractive mappings in the intermediate sense. Nonlinear Anal. 2009, 70: 3502–3511. 10.1016/j.na.2008.07.007


  34. Qin X, Cho YJ, Kang SM: An iterative method for an infinite family of nonexpansive mappings in Hilbert spaces. Bull. Malays. Math. Soc. 2009, 32(2):161–171.


  35. Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007., 2007: Article ID 64363


  36. Goebel K, Kirk WA Cambridge Studies in Advanced Mathematics 28. In Topics on Metric Fixed-Point Theory. Cambridge University Press, Cambridge; 1990.


  37. Osilike MO, Aniagbosor SC, Akuchu BG: Fixed points of asymptotically demicontractive mappings in arbitrary Banach space. Panam. Math. J. 2002, 12: 77–88.


  38. Tan KK, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178: 301–308. 10.1006/jmaa.1993.1309


  39. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. 10.1090/S0002-9904-1967-11761-0



Acknowledgements

Dedicated to Professor Hari M Srivastava.

The first author was partially supported by the National Science Foundation of China (11071169), the Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and the Ph.D. Program Foundation of the Ministry of Education of China (20123127110002). The last author was partially supported by a grant from NSC 101-2115-M-037-001.

Author information

Corresponding author

Correspondence to Ching-Feng Wen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contribute equally to this work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Ceng, LC., Ansari, Q.H. & Wen, CF. Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J Inequal Appl 2013, 240 (2013). https://doi.org/10.1186/1029-242X-2013-240