
Regularized gradient-projection methods for equilibrium and constrained convex minimization problems

Abstract

In this article, based on Marino and Xu’s method, an iterative method which combines the regularized gradient-projection algorithm (RGPA) and the averaged mappings approach is proposed for finding a common solution of equilibrium and constrained convex minimization problems. Under suitable conditions, it is proved that the sequences generated by implicit and explicit schemes converge strongly. The results of this paper extend and improve some existing results.

MSC:58E35, 47H09, 65J15.

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty, closed and convex subset of $H$. We need some nonlinear operators, which are introduced below.

Let $T, A : H \to H$ be nonlinear operators.

  • $T$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$.

  • $T$ is Lipschitz continuous if there exists a constant $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in H$.

  • $A$ is a strongly positive bounded linear operator if there exists a constant $\bar{\gamma} > 0$ such that $\langle Ax, x\rangle \ge \bar{\gamma}\|x\|^2$ for all $x \in H$.

  • $A : H \to H$ is monotone if $\langle x - y, Ax - Ay\rangle \ge 0$ for all $x, y \in H$.

  • Given a number $\eta > 0$, $A : H \to H$ is $\eta$-strongly monotone if $\langle x - y, Ax - Ay\rangle \ge \eta\|x - y\|^2$ for all $x, y \in H$.

  • Given a number $\upsilon > 0$, $A : H \to H$ is $\upsilon$-inverse strongly monotone ($\upsilon$-ism) if $\langle x - y, Ax - Ay\rangle \ge \upsilon\|Ax - Ay\|^2$ for all $x, y \in H$.

Inverse strongly monotone operators have been studied widely (see [1–3]) and applied to solve practical problems in various fields, for instance, traffic assignment problems (see [4, 5]).

  • $T$ is firmly nonexpansive if and only if $2T - I$ is nonexpansive or, equivalently, $\langle x - y, Tx - Ty\rangle \ge \|Tx - Ty\|^2$ for all $x, y \in H$.

  • $T : H \to H$ is said to be an averaged mapping if $T = (1 - \alpha)I + \alpha S$, where $\alpha$ is a number in $(0,1)$ and $S : H \to H$ is nonexpansive. In particular, projections are $(1/2)$-averaged mappings.

Averaged mappings have received many investigations (see [6–10]).

Let $\phi$ be a bifunction of $C \times C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers. The equilibrium problem for $\phi : C \times C \to \mathbb{R}$ is to find $x \in C$ such that

$$\phi(x, y) \ge 0 \quad \text{for all } y \in C.$$
(1.1)

The set of solutions of (1.1) is denoted by $EP(\phi)$. Given a mapping $T : C \to H$, let $\phi(x, y) = \langle Tx, y - x\rangle$ for all $x, y \in C$. Then $z \in EP(\phi)$ if and only if $\langle Tz, y - z\rangle \ge 0$ for all $y \in C$, i.e., $z$ is a solution of the variational inequality. Numerous problems in physics, optimization and economics reduce to finding a solution of (1.1). Some methods have been proposed to solve the equilibrium problem; see, for instance, [11–15].

In 2000, Moudafi [16] introduced the viscosity approximation method for nonexpansive mappings. Let $h$ be a contraction on $H$. Starting with an arbitrary initial $x_0 \in H$, define a sequence $\{x_n\}$ recursively by

$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) T x_n, \quad n \ge 0,$$
(1.2)

where $\{\alpha_n\}$ is a sequence in $(0,1)$. Xu [17] proved that, under certain conditions on $\{\alpha_n\}$, the sequence $\{x_n\}$ generated by (1.2) converges strongly to the unique solution $x^* \in F(T)$ of the variational inequality

$$\langle (I - h) x^*, x - x^*\rangle \ge 0, \quad x \in F(T).$$

We use $F(T)$ to denote the set of fixed points of the mapping $T$; that is, $F(T) = \{x \in H : x = Tx\}$.
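To make (1.2) concrete, here is a minimal numerical sketch of the viscosity iteration in $\mathbb{R}^2$; the particular contraction $h$, nonexpansive mapping $T$, and step sizes $\alpha_n = 1/(n+2)$ are illustrative choices, not taken from [16, 17].

```python
import numpy as np

def T(x):
    # Nonexpansive mapping: metric projection onto the closed unit ball.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def h(x):
    # Contraction with coefficient 1/2.
    return 0.5 * x + np.array([0.2, -0.1])

x = np.array([5.0, 3.0])            # arbitrary initial guess x_0
for n in range(5000):
    alpha = 1.0 / (n + 2)           # alpha_n -> 0 slowly enough
    x = alpha * h(x) + (1 - alpha) * T(x)

print(x)  # approximates the unique point x* in F(T) with x* = P_{F(T)} h(x*)
```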

In 2006, Marino and Xu [18] introduced a general iterative method for nonexpansive mappings. Let $h$ be a contraction on $H$ with a coefficient $\rho \in (0,1)$, and let $A$ be a strongly positive bounded linear operator on $H$ with a constant $\bar{\gamma} > 0$. Starting with an arbitrary initial guess $x_0 \in H$, define a sequence $\{x_n\}$ recursively by

$$x_{n+1} = \alpha_n \gamma h(x_n) + (I - \alpha_n A) T x_n, \quad n \ge 0,$$
(1.3)

where $\{\alpha_n\}$ is a sequence in $(0,1)$ and $0 < \gamma < \bar{\gamma}/\rho$ is a constant. It is proved that the sequence $\{x_n\}$ converges strongly to the unique solution $x^* \in F(T)$ of the variational inequality

$$\langle (\gamma h - A) x^*, x - x^*\rangle \le 0, \quad x \in F(T).$$

For finding a common solution of $EP(\phi)$ and a fixed point problem, Takahashi and Takahashi [11] introduced the following iterative scheme by the viscosity approximation method in a Hilbert space: $x_1 \in H$ and

$$\begin{cases} \phi(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & y \in C, \\ x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) S u_n \end{cases}$$
(1.4)

for all $n \in \mathbb{N}$, where $\{\alpha_n\} \subset (0,1)$ and $\{r_n\} \subset (0, \infty)$ satisfy some appropriate conditions. Further, they proved that $\{x_n\}$ and $\{u_n\}$ converge strongly to $z \in F(S) \cap EP(\phi)$, where $z = P_{F(S) \cap EP(\phi)} h(z)$.

On the other hand, let $f : C \to \mathbb{R}$ be a convex function, and consider the following minimization problem:

$$\min_{x \in C} f(x).$$
(1.5)

Assume that the constrained convex minimization problem (1.5) is solvable, and let $U$ denote the set of solutions of (1.5). Then the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}_{n=0}^{\infty}$ according to the recursive formula

$$x_{n+1} = \mathrm{Proj}_C (I - \gamma_n \nabla f) x_n,$$

where the parameters $\gamma_n$ are real positive numbers and $\mathrm{Proj}_C$ is the metric projection from $H$ onto $C$. It is known that the convergence of the sequence $\{x_n\}_{n=0}^{\infty}$ depends on the behavior of the gradient $\nabla f$. If $\nabla f$ is only assumed to be inverse strongly monotone, then $\{x_n\}_{n=0}^{\infty}$ converges only weakly to a minimizer of (1.5). If $\nabla f$ is Lipschitz continuous and strongly monotone, then $\{x_n\}_{n=0}^{\infty}$ converges strongly to a minimizer of (1.5) provided the parameters $\gamma_n$ satisfy appropriate conditions.
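As a concrete illustration, the following sketch runs the GPA for a simple quadratic objective over a box in $\mathbb{R}^2$; the objective $f(x) = \frac{1}{2}\|Mx - b\|^2$, the set $C = [0,1]^2$, and the constant step size $\gamma_n \equiv 1/L$ are hypothetical choices.

```python
import numpy as np

# Hypothetical instance: f(x) = 0.5*||Mx - b||^2 over C = [0,1]^2.
M = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
L = np.linalg.norm(M.T @ M, 2)      # Lipschitz constant of grad f = M^T(Mx - b)

def grad_f(x):
    return M.T @ (M @ x - b)

def proj_C(x):
    # Metric projection onto the box [0,1]^2.
    return np.clip(x, 0.0, 1.0)

gamma = 1.0 / L                     # any constant in (0, 2/L) works here
x = np.zeros(2)
for _ in range(500):
    x = proj_C(x - gamma * grad_f(x))   # x_{n+1} = Proj_C (I - gamma grad f) x_n

print(x)
```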

It is well known that Xu [9] gave an averaged mapping approach to the gradient-projection method and constructed a counterexample showing that the sequence generated by the gradient-projection method need not converge strongly in an infinite-dimensional space. Moreover, he presented two modifications of the gradient-projection method which are shown to have strong convergence.

In 2011, motivated by Xu, Ceng et al. [19] proposed the following iterative algorithm:

$$x_{n+1} = \mathrm{Proj}_C\big[s_n \gamma V x_n + (I - s_n \mu F) T_n x_n\big], \quad n \ge 0,$$
(1.6)

where $V : C \to H$ is an $l$-Lipschitzian mapping with a constant $l > 0$, and $F : C \to H$ is a $k$-Lipschitzian and $\eta$-strongly monotone operator with constants $k, \eta > 0$. Let $0 < \mu < 2\eta/k^2$ and $0 \le \gamma l < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$, and let $T_n$ and $s_n$ satisfy $s_n = \frac{2 - \lambda_n L}{4}$ and $\mathrm{Proj}_C(I - \lambda_n \nabla f) = s_n I + (1 - s_n) T_n$. Under suitable conditions, it is proved that the sequence $\{x_n\}_{n=0}^{\infty}$ generated by (1.6) converges strongly to a minimizer $x^*$ of (1.5).

In 2012, Tian and Liu [20] introduced the following iterative method in a Hilbert space: $x_1 \in C$ and

$$\begin{cases} \phi(u_n, y) + \frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & y \in C, \\ x_{n+1} = \alpha_n \gamma V u_n + (I - \alpha_n A) T_n u_n, & n \in \mathbb{N}, \end{cases}$$
(1.7)

where $u_n = Q_{\beta_n} x_n$, $\mathrm{Proj}_C(I - \lambda_n \nabla f) = s_n I + (1 - s_n) T_n$, $s_n = \frac{2 - \lambda_n L}{4}$, $\{\lambda_n\} \subset (0, 2/L)$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{s_n\}$ satisfy appropriate conditions. Further, they proved that the sequence $\{x_n\}$ converges strongly to a point $q \in U \cap EP(\phi)$, which solves the variational inequality

$$\langle (A - \gamma V) q, q - z\rangle \le 0, \quad z \in U \cap EP(\phi).$$

This was the first time that the equilibrium problem and the constrained convex minimization problem were solved jointly by a single scheme.

Since, in general, the minimization problem (1.5) has more than one solution, regularization is needed. Now we consider the following regularized minimization problem:

$$\min_{x \in C} f_{\alpha}(x) := f(x) + \frac{\alpha}{2}\|x\|^2,$$

where $\alpha > 0$ is the regularization parameter and $f$ is a convex function with a $1/L$-ism gradient $\nabla f$. Then the regularized GPA generates a sequence $\{x_n\}_{n=0}^{\infty}$ by the following recursive formula:

$$x_{n+1} = \mathrm{Proj}_C(I - \gamma \nabla f_{\alpha_n}) x_n = \mathrm{Proj}_C\big[x_n - \gamma(\nabla f + \alpha_n I)(x_n)\big],$$
(1.8)

where the parameter $\alpha_n > 0$, $\gamma$ is a constant with $0 < \gamma < 2/L$, and $\mathrm{Proj}_C$ is the metric projection from $H$ onto $C$. It is known that the sequence $\{x_n\}_{n=0}^{\infty}$ generated by algorithm (1.8) converges weakly to a minimizer of (1.5) in the setting of infinite-dimensional spaces (see [21]).
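A minimal sketch of the regularized iteration (1.8), reusing the hypothetical quadratic instance from the previous block; the schedule $\alpha_n = 1/(n+1)$ is an illustrative choice of regularization parameters.

```python
import numpy as np

M = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
L = np.linalg.norm(M.T @ M, 2)

def grad_f(x):
    return M.T @ (M @ x - b)

def proj_C(x):
    return np.clip(x, 0.0, 1.0)

gamma = 1.0 / L                      # fixed gamma in (0, 2/L)
x = np.zeros(2)
for n in range(2000):
    alpha_n = 1.0 / (n + 1)          # regularization parameter, alpha_n -> 0
    # x_{n+1} = Proj_C [ x_n - gamma (grad f + alpha_n I)(x_n) ]
    x = proj_C(x - gamma * (grad_f(x) + alpha_n * x))

print(x)
```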

In this paper, motivated and inspired by the above results, we introduce a new iterative method: $x_1 \in H$ and

$$\begin{cases} \phi(u_n, y) + \frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & y \in C, \\ x_{n+1} = \alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n, & n \in \mathbb{N}, \end{cases}$$
(1.9)

for finding an element of $U \cap EP(\phi)$, where $u_n = Q_{\beta_n} x_n$, $\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \theta_n I + (1 - \theta_n) T_{\lambda_n}$, $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$, and $\gamma \in (0, 2/L)$. Under appropriate conditions, it is proved that the sequence $\{x_n\}$ generated by (1.9) converges strongly to a point $z \in U \cap EP(\phi)$, which solves the variational inequality

$$\langle (A - r V) z, z - x\rangle \le 0, \quad x \in U \cap EP(\phi).$$
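To make the scheme concrete, the following sketch instantiates (1.9) in the special case $\phi \equiv 0$, for which $Q_{\beta_n}$ reduces to $\mathrm{Proj}_C$ (by Lemma 2.4 below) and $EP(\phi) = C$; the choices $A = I$ (so $\bar{\gamma} = 1$), $V(x) = 0.5x$, $r = 1$, $\alpha_n = 1/(n+2)$, $\lambda_n = \alpha_n^2$, and the quadratic instance are all hypothetical. $T_{\lambda_n}$ is recovered from the identity $\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \theta_n I + (1 - \theta_n) T_{\lambda_n}$.

```python
import numpy as np

M = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
L = np.linalg.norm(M.T @ M, 2)
gamma = 1.0 / L

def grad_f(x):
    return M.T @ (M @ x - b)

def proj_C(x):
    return np.clip(x, 0.0, 1.0)

def T_lam(x, lam):
    # Proj_C(I - gamma grad f_lam) = theta*I + (1 - theta)*T_lam, solved for T_lam.
    theta = (2.0 - gamma * (L + lam)) / 4.0
    y = proj_C(x - gamma * (grad_f(x) + lam * x))
    return (y - theta * x) / (1.0 - theta)

V = lambda x: 0.5 * x        # Lipschitz with l = 0.5, so r = 1 < bar_gamma/l = 2
r = 1.0
x = np.array([5.0, -3.0])    # x_1
for n in range(2000):
    alpha_n = 1.0 / (n + 2)
    lam_n = alpha_n ** 2     # lambda_n = o(alpha_n)
    u = proj_C(x)            # Q_{beta_n} = Proj_C when phi == 0
    # x_{n+1} = alpha_n r V(u_n) + (I - alpha_n A) T_{lambda_n} u_n, with A = I
    x = alpha_n * r * V(u) + (1 - alpha_n) * T_lam(u, lam_n)

print(x)
```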

2 Preliminaries

In this section we introduce some useful properties and lemmas which will be used in the proofs for the main results in the next section.

Some properties of averaged mappings are gathered in the proposition below.

Proposition 2.1 [7, 8]

Let the operators $S, T, V : H \to H$ be given:

(i) If $T = (1 - \alpha) S + \alpha V$ for some $\alpha \in (0,1)$, and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.

(ii) The composition of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0,1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.

(iii) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then

$$\bigcap_{i=1}^{N} \mathrm{Fix}(T_i) = \mathrm{Fix}(T_1 \cdots T_N).$$

Here the notation $\mathrm{Fix}(T)$ denotes the set of fixed points of the mapping $T$; that is, $\mathrm{Fix}(T) := \{x \in H : Tx = x\}$.

The following proposition gathers some results on the relationship between averaged mappings and inverse strongly monotone operators.

Proposition 2.2 [7, 22]

Let $T : H \to H$ be given. We have:

(i) $T$ is nonexpansive if and only if the complement $I - T$ is $(1/2)$-ism;

(ii) if $T$ is $\upsilon$-ism, then for $\gamma > 0$, $\gamma T$ is $(\upsilon/\gamma)$-ism;

(iii) $T$ is averaged if and only if the complement $I - T$ is $\upsilon$-ism for some $\upsilon > 1/2$; indeed, for $\alpha \in (0,1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $(1/(2\alpha))$-ism.
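These two propositions explain the averagedness constants used throughout the paper. Since $\nabla f$ is $1/L$-ism, Proposition 2.2(ii) gives that $\gamma \nabla f$ is $(1/(\gamma L))$-ism, so by Proposition 2.2(iii) the mapping $I - \gamma \nabla f$ is $(\gamma L/2)$-averaged for $\gamma \in (0, 2/L)$. As $\mathrm{Proj}_C$ is $(1/2)$-averaged, Proposition 2.1(ii) shows that the composite $\mathrm{Proj}_C(I - \gamma \nabla f)$ is $\alpha$-averaged with

$$\alpha = \frac{1}{2} + \frac{\gamma L}{2} - \frac{1}{2} \cdot \frac{\gamma L}{2} = \frac{2 + \gamma L}{4},$$

that is,

$$\mathrm{Proj}_C(I - \gamma \nabla f) = \frac{2 - \gamma L}{4} I + \frac{2 + \gamma L}{4} T$$

for some nonexpansive mapping $T$. Replacing $\nabla f$ by $\nabla f_{\lambda_n} = \nabla f + \lambda_n I$, which is $1/(L + \lambda_n)$-ism, yields the decomposition $\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \theta_n I + (1 - \theta_n) T_{\lambda_n}$ with $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$ used below.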

Lemma 2.1 [9]

Assume that $\{a_n\}_{n=0}^{\infty}$ is a sequence of nonnegative real numbers such that

$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n + \beta_n, \quad n \ge 0,$$

where $\{\gamma_n\}_{n=0}^{\infty}$ and $\{\beta_n\}_{n=0}^{\infty}$ are sequences in $(0,1)$ and $\{\delta_n\}_{n=0}^{\infty}$ is a sequence in $\mathbb{R}$ such that

(i) $\sum_{n=0}^{\infty} \gamma_n = \infty$;

(ii) either $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} \gamma_n |\delta_n| < \infty$;

(iii) $\sum_{n=0}^{\infty} \beta_n < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.

The so-called demiclosed principle for nonexpansive mappings will often be used.

Lemma 2.2 (Demiclosed principle [23])

Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. If $\{x_n\}_{n=1}^{\infty}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T) x_n\}_{n=1}^{\infty}$ converges strongly to $y$, then $(I - T) x = y$. In particular, if $y = 0$, then $x \in \mathrm{Fix}(T)$.

Lemma 2.3 [18]

Let $H$ be a Hilbert space, let $C$ be a closed and convex subset of $H$, let $V : C \to H$ be a Lipschitzian operator with a coefficient $l > 0$, and let $A : C \to H$ be a strongly positive bounded linear operator with a coefficient $\bar{\gamma} > 0$. Then, for $0 < r < \bar{\gamma}/l$,

$$\langle x - y, (A - rV)x - (A - rV)y\rangle \ge (\bar{\gamma} - rl)\|x - y\|^2, \quad x, y \in C.$$

That is, $A - rV$ is strongly monotone with a coefficient $\bar{\gamma} - rl$.

Recall that the metric (nearest point) projection $\mathrm{Proj}_C$ from a real Hilbert space $H$ to a closed and convex subset $C$ of $H$ is defined as follows: given $x \in H$, $\mathrm{Proj}_C x$ is the unique point in $C$ with the property

$$\|x - \mathrm{Proj}_C x\| = \inf\{\|x - y\| : y \in C\}.$$

$\mathrm{Proj}_C$ is characterized as follows.

Lemma 2.4 Let $C$ be a closed and convex subset of a real Hilbert space $H$. Given $x \in H$ and $y \in C$, then $y = \mathrm{Proj}_C x$ if and only if the following inequality holds:

$$\langle x - y, y - z\rangle \ge 0, \quad z \in C.$$

Lemma 2.5 [18]

Assume that $A$ is a strongly positive bounded linear operator on a Hilbert space $H$ with a coefficient $\bar{\gamma} > 0$ and $0 < t \le \|A\|^{-1}$. Then $\|I - tA\| \le 1 - t\bar{\gamma}$.

For solving the equilibrium problem for a bifunction $\phi : C \times C \to \mathbb{R}$, let us assume that $\phi$ satisfies the following conditions:

(A1) $\phi(x, x) = 0$ for all $x \in C$;

(A2) $\phi$ is monotone, i.e., $\phi(x, y) + \phi(y, x) \le 0$ for all $x, y \in C$;

(A3) for each $x, y, z \in C$, $\limsup_{t \downarrow 0} \phi(tz + (1 - t)x, y) \le \phi(x, y)$;

(A4) for each $x \in C$, $y \mapsto \phi(x, y)$ is convex and lower semicontinuous.

Lemma 2.6 [24]

Let $C$ be a nonempty, closed and convex subset of $H$, and let $\phi$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)-(A4). Let $r > 0$ and $x \in H$. Then there exists $z \in C$ such that

$$\phi(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0 \quad \text{for all } y \in C.$$

Lemma 2.7 [13]

Assume that $\phi : C \times C \to \mathbb{R}$ satisfies (A1)-(A4). For $r > 0$ and $x \in H$, define a mapping $Q_r : H \to C$ as follows:

$$Q_r(x) = \Big\{z \in C : \phi(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \ \forall y \in C\Big\}.$$

Then the following hold:

(1) $Q_r$ is single-valued;

(2) $Q_r$ is firmly nonexpansive, i.e., $\|Q_r x - Q_r y\|^2 \le \langle Q_r x - Q_r y, x - y\rangle$ for any $x, y \in H$;

(3) $F(Q_r) = EP(\phi)$;

(4) $EP(\phi)$ is closed and convex.

We adopt the following notation:

  • $x_n \to x$ means that $x_n$ converges to $x$ strongly;

  • $x_n \rightharpoonup x$ means that $x_n$ converges to $x$ weakly.

3 Main results

Recall that throughout this paper we denote by $U$ the solution set of the constrained convex minimization problem (1.5) and by $EP(\phi)$ the solution set of the equilibrium problem (1.1).

Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $V : C \to H$ be Lipschitzian with a constant $l > 0$, let $A : C \to H$ be a strongly positive bounded linear operator with a coefficient $\bar{\gamma} > 0$, and let $0 < r < \bar{\gamma}/l$. Suppose that the gradient $\nabla f$ is $1/L$-ism. Let $Q_{\beta_n}$ be a mapping defined as in Lemma 2.7. We now consider the following mapping $S_n$ on $H$ defined by

$$S_n(x) = \alpha_n r V Q_{\beta_n}(x) + (I - \alpha_n A) T_{\lambda_n} Q_{\beta_n}(x), \quad x \in H, \ n \in \mathbb{N},$$

where $\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \theta_n I + (1 - \theta_n) T_{\lambda_n}$, $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$, $\gamma \in (0, 2/L)$, and $\{\alpha_n\} \subset (0,1)$. It is easy to see that $S_n$ is a contraction. Indeed, by Lemma 2.5 and Lemma 2.7, we have for each $x, y \in H$

$$\begin{aligned} \|S_n x - S_n y\| &= \|(\alpha_n r V Q_{\beta_n} x - \alpha_n r V Q_{\beta_n} y) + (I - \alpha_n A) T_{\lambda_n} Q_{\beta_n} x - (I - \alpha_n A) T_{\lambda_n} Q_{\beta_n} y\| \\ &\le \alpha_n r l \|x - y\| + (1 - \alpha_n \bar{\gamma})\|x - y\| \\ &= \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x - y\|. \end{aligned}$$

Since $0 < 1 - \alpha_n(\bar{\gamma} - rl) < 1$, it follows that $S_n$ is a contraction. Therefore, by the Banach contraction principle, $S_n$ has a unique fixed point $x_n \in H$ such that

$$x_n = \alpha_n r V Q_{\beta_n}(x_n) + (I - \alpha_n A) T_{\lambda_n} Q_{\beta_n}(x_n).$$

Note that $x_n$ indeed depends on $V$ as well, but we will suppress this dependence of $x_n$ on $V$ for simplicity of notation throughout the rest of this paper.

The following theorem summarizes the properties of the sequence $\{x_n\}$.

Theorem 3.1 Let $C$ be a nonempty, closed and convex subset of a Hilbert space $H$. Let $\phi$ be a bifunction from $C \times C$ to $\mathbb{R}$ satisfying (A1)-(A4), let $f : C \to \mathbb{R}$ be a real-valued convex function, and assume that the gradient $\nabla f$ is $1/L$-ism with a constant $L > 0$. Assume that $U \cap EP(\phi) \ne \emptyset$. Let $V : C \to H$ be Lipschitzian with a constant $l > 0$, let $A : C \to H$ be a strongly positive bounded linear operator with a coefficient $\bar{\gamma} > 0$, and let $0 < r < \bar{\gamma}/l$. Let the sequences $\{u_n\}$ and $\{x_n\}$ be generated by

$$\begin{cases} \phi(u_n, y) + \frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & y \in C, \\ x_n = \alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n, & n \in \mathbb{N}, \end{cases}$$
(3.1)

where $u_n = Q_{\beta_n} x_n$, $\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \theta_n I + (1 - \theta_n) T_{\lambda_n}$, $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$ and $\gamma \in (0, 2/L)$. Let $\{\beta_n\}$, $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy the following conditions:

(i) $\{\beta_n\} \subset (0, \infty)$, $\liminf_{n \to \infty} \beta_n > 0$;

(ii) $\{\alpha_n\} \subset (0,1)$, $\lim_{n \to \infty} \alpha_n = 0$;

(iii) $\{\lambda_n\} \subset (0, 2/\gamma - L)$, $\lambda_n = o(\alpha_n)$.

Then the sequence $\{x_n\}$ converges strongly to a point $z \in U \cap EP(\phi)$, which solves the variational inequality

$$\langle (A - r V) z, z - x\rangle \le 0, \quad x \in U \cap EP(\phi).$$
(3.2)

Equivalently, we have $z = P_{U \cap EP(\phi)}(I - A + rV)(z)$.

Proof It is well known that $\hat{x} \in C$ solves the minimization problem (1.5) if and only if, for each fixed $0 < \gamma < 2/L$, $\hat{x}$ solves the fixed-point equation

$$\hat{x} = \mathrm{Proj}_C(I - \gamma \nabla f)\hat{x} = \frac{2 - \gamma L}{4}\hat{x} + \frac{2 + \gamma L}{4} T \hat{x}.$$

It is clear that $\hat{x} = T\hat{x}$, i.e., $\hat{x} \in U = \mathrm{Fix}(T)$.

First, we assume that $\alpha_n \in (0, \|A\|^{-1})$. By Lemma 2.5, we obtain $\|I - \alpha_n A\| \le 1 - \alpha_n \bar{\gamma}$. Let $p \in U \cap EP(\phi)$; then from $u_n = Q_{\beta_n} x_n$ we have

$$\|u_n - p\| = \|Q_{\beta_n} x_n - Q_{\beta_n} p\| \le \|x_n - p\|$$
(3.3)

for all $n \in \mathbb{N}$. Thus, we have from (3.3)

$$\begin{aligned} \|x_n - p\| &= \|\alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n - p\| \\ &\le \|\alpha_n r V u_n - \alpha_n A(p)\| + \|I - \alpha_n A\| \, \|T_{\lambda_n} u_n - p\| \\ &\le \alpha_n r\|V u_n - V(p)\| + \alpha_n\|r V(p) - A(p)\| + (1 - \alpha_n \bar{\gamma})\big(\|T_{\lambda_n} u_n - T_{\lambda_n}(p)\| + \|T_{\lambda_n}(p) - T p\|\big) \\ &\le \alpha_n r l\|u_n - p\| + \alpha_n\|r V(p) - A(p)\| + (1 - \alpha_n \bar{\gamma})\|u_n - p\| + (1 - \alpha_n \bar{\gamma})\|T_{\lambda_n}(p) - T p\| \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - p\| + \alpha_n\|r V(p) - A(p)\| + (1 - \alpha_n \bar{\gamma})\|T_{\lambda_n}(p) - T p\|. \end{aligned}$$

It follows that

$$\|x_n - p\| \le \frac{1}{\bar{\gamma} - rl}\|r V(p) - A(p)\| + \frac{1 - \alpha_n \bar{\gamma}}{\alpha_n(\bar{\gamma} - rl)}\|T_{\lambda_n}(p) - Tp\|.$$
(3.4)

For $x \in C$, note that

$$\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) x = \theta_n x + (1 - \theta_n) T_{\lambda_n} x$$

and

$$\mathrm{Proj}_C(I - \gamma \nabla f) x = \theta x + (1 - \theta) T x,$$

where $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$ and $\theta = \frac{2 - \gamma L}{4}$.

Then we get

$$\|(\theta_n - \theta) x + T_{\lambda_n} x - T x + \theta T x - \theta_n T_{\lambda_n} x\| = \|\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) x - \mathrm{Proj}_C(I - \gamma \nabla f) x\| \le \gamma \lambda_n \|x\|.$$

Since $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$ and $\theta = \frac{2 - \gamma L}{4}$, there exists $M(x) > 0$ such that

$$\|T_{\lambda_n} x - T x\| \le \frac{\lambda_n \gamma(5\|x\| + \|Tx\|)}{2 + \gamma(L + \lambda_n)} \le \lambda_n M(x),$$
(3.5)

where $M(x) = \gamma(5\|x\| + \|Tx\|)$.

It follows from (3.4) and (3.5) that

$$\|x_n - p\| \le \frac{1}{\bar{\gamma} - rl}\|r V(p) - A(p)\| + \frac{M(p)}{\bar{\gamma} - rl} \cdot \frac{\lambda_n}{\alpha_n}.$$

Since $\lambda_n = o(\alpha_n)$, there exists a real number $M' > 0$ such that $\frac{\lambda_n}{\alpha_n} \le M'$, and

$$\|x_n - p\| \le \frac{1}{\bar{\gamma} - rl}\|r V(p) - A(p)\| + \frac{M' M(p)}{\bar{\gamma} - rl} = \frac{\|r V(p) - A(p)\| + M' M(p)}{\bar{\gamma} - rl}.$$

Hence $\{x_n\}$ is bounded, and we also obtain that $\{u_n\}$ is bounded.

Next, we show that $\|x_n - u_n\| \to 0$.

Indeed, for any $p \in U \cap EP(\phi)$, by Lemma 2.7 we have

$$\|u_n - p\|^2 = \|Q_{\beta_n} x_n - Q_{\beta_n} p\|^2 \le \langle x_n - p, u_n - p\rangle = \frac{1}{2}\big(\|x_n - p\|^2 + \|u_n - p\|^2 - \|u_n - x_n\|^2\big).$$

This implies that

$$\|u_n - p\|^2 \le \|x_n - p\|^2 - \|u_n - x_n\|^2.$$
(3.6)

Then from (3.5), we derive that

$$\begin{aligned} \|x_n - p\|^2 &= \|\alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n - p\|^2 = \|(I - \alpha_n A)(T_{\lambda_n} u_n - p) + \alpha_n r V u_n - \alpha_n A(p)\|^2 \\ &\le \|(I - \alpha_n A)(T_{\lambda_n} u_n - p)\|^2 + 2\|(I - \alpha_n A)(T_{\lambda_n} u_n - p)\| \, \|\alpha_n r V u_n - \alpha_n A(p)\| + \|\alpha_n r V u_n - \alpha_n A(p)\|^2 \\ &\le (1 - \alpha_n \bar{\gamma})^2\big(\|T_{\lambda_n} u_n - T_{\lambda_n} p\| + \|T_{\lambda_n} p - T p\|\big)^2 + 2(1 - \alpha_n \bar{\gamma})\|T_{\lambda_n} u_n - p\| \, \alpha_n\|r V u_n - A(p)\| + \alpha_n^2\|r V u_n - A(p)\|^2 \\ &\le (1 - \alpha_n \bar{\gamma})^2\big(\|u_n - p\| + \|T_{\lambda_n} p - T p\|\big)^2 + 2(1 - \alpha_n \bar{\gamma})\big(\|u_n - p\| + \|T_{\lambda_n} p - T p\|\big)\big(\alpha_n r l\|u_n - p\| + \alpha_n\|r V(p) - A(p)\|\big) + \alpha_n^2\|r V u_n - A(p)\|^2 \\ &\le \big((1 - \alpha_n \bar{\gamma})^2 + 2(1 - \alpha_n \bar{\gamma})\alpha_n r l\big)\|u_n - p\|^2 + (1 - \alpha_n \bar{\gamma})^2 \lambda_n^2 (M(p))^2 + 2(1 - \alpha_n \bar{\gamma})^2\|u_n - p\|\lambda_n M(p) \\ &\quad + 2(1 - \alpha_n \bar{\gamma})\|u_n - p\|\alpha_n\|r V(p) - A(p)\| + 2(1 - \alpha_n \bar{\gamma})\lambda_n M(p)\alpha_n r l\|u_n - p\| \\ &\quad + 2(1 - \alpha_n \bar{\gamma})\lambda_n M(p)\alpha_n\|r V(p) - A(p)\| + \alpha_n^2\big(r l\|u_n - p\| + \|r V(p) - A(p)\|\big)^2. \end{aligned}$$

It follows from (3.6) that

$$\begin{aligned} \big((1 - \alpha_n \bar{\gamma})^2 + 2(1 - \alpha_n \bar{\gamma})\alpha_n r l\big)\|u_n - x_n\|^2 &\le \big((1 - \alpha_n \bar{\gamma})^2 + 2(1 - \alpha_n \bar{\gamma})\alpha_n r l - 1\big)\|x_n - p\|^2 + \lambda_n^2 (M(p))^2 \\ &\quad + 2\|u_n - p\|\big(\lambda_n M(p) + \alpha_n\|r V(p) - A(p)\|\big) \\ &\quad + 2\alpha_n \lambda_n M(p)\big(r l\|u_n - p\| + \|r V(p) - A(p)\|\big) + \alpha_n^2\big(r l\|u_n - p\| + \|r V(p) - A(p)\|\big)^2. \end{aligned}$$

Since both $\{x_n\}$ and $\{u_n\}$ are bounded and $\alpha_n \to 0$, $\lambda_n \to 0$, it follows that $\|u_n - x_n\| \to 0$.

We claim that $\|x_n - T_{\lambda_n} x_n\| \to 0$. Indeed,

$$\|x_n - T_{\lambda_n} x_n\| \le \|x_n - T_{\lambda_n} u_n\| + \|T_{\lambda_n} u_n - T_{\lambda_n} x_n\| \le \alpha_n\|r V u_n - A T_{\lambda_n} u_n\| + \|u_n - x_n\|.$$

Since $\alpha_n \to 0$ and $\|u_n - x_n\| \to 0$, we obtain that

$$\|x_n - T_{\lambda_n} x_n\| \to 0.$$

Thus, from

$$\|u_n - T_{\lambda_n} u_n\| \le \|u_n - x_n\| + \|x_n - T_{\lambda_n} x_n\| + \|T_{\lambda_n} x_n - T_{\lambda_n} u_n\| \le \|x_n - T_{\lambda_n} x_n\| + 2\|x_n - u_n\|$$

and

$$\|x_n - T_{\lambda_n} u_n\| \le \|u_n - x_n\| + \|T_{\lambda_n} u_n - u_n\|,$$

we have $\|u_n - T_{\lambda_n} u_n\| \to 0$ and $\|x_n - T_{\lambda_n} u_n\| \to 0$.

Since $\{u_n\}$ is bounded, without loss of generality we may assume that $u_{n_i} \rightharpoonup z$. Next, we show that $z \in U \cap EP(\phi)$.

By (3.5), we have

$$\|u_n - T u_n\| \le \|u_n - T_{\lambda_n} u_n\| + \|T_{\lambda_n} u_n - T u_n\| \le \|u_n - T_{\lambda_n} u_n\| + \lambda_n M(u_n) \to 0.$$

So, by Lemma 2.2, we get $z \in \mathrm{Fix}(T) = U$.

Next, we show that $z \in EP(\phi)$. Since $u_n = Q_{\beta_n} x_n$, for any $y \in C$ we obtain

$$\phi(u_n, y) + \frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge 0.$$

From (A2), we have

$$\frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge \phi(y, u_n)$$

and hence

$$\Big\langle y - u_{n_i}, \frac{u_{n_i} - x_{n_i}}{\beta_{n_i}}\Big\rangle \ge \phi(y, u_{n_i}).$$

Since $\frac{u_{n_i} - x_{n_i}}{\beta_{n_i}} \to 0$ and $u_{n_i} \rightharpoonup z$, it follows from (A4) that $\phi(y, z) \le 0$ for any $y \in C$.

Let $z_t = t y + (1 - t) z$, $t \in (0, 1]$, $y \in C$; then we have $z_t \in C$ and hence $\phi(z_t, z) \le 0$.

Thus, from (A1) and (A4), we have

$$0 = \phi(z_t, z_t) \le t \phi(z_t, y) + (1 - t)\phi(z_t, z) \le t \phi(z_t, y)$$

and hence $\phi(z_t, y) \ge 0$. From (A3), we have $\phi(z, y) \ge 0$ for any $y \in C$; hence $z \in EP(\phi)$. Therefore, $z \in U \cap EP(\phi)$.

On the other hand, we note that

$$x_n - z = \alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n - z = (I - \alpha_n A)(T_{\lambda_n} u_n - z) + \alpha_n r (V u_n - V z) + \alpha_n (r V(z) - A(z)).$$

Hence, we obtain from (3.3) and (3.5) that

$$\begin{aligned} \|x_n - z\|^2 &= \langle (I - \alpha_n A)(T_{\lambda_n} u_n - z), x_n - z\rangle + \alpha_n r \langle V u_n - V z, x_n - z\rangle + \alpha_n \langle r V(z) - A(z), x_n - z\rangle \\ &\le (1 - \alpha_n \bar{\gamma})\|T_{\lambda_n} u_n - z\| \, \|x_n - z\| + \alpha_n r l \|u_n - z\| \, \|x_n - z\| + \alpha_n \langle r V(z) - A(z), x_n - z\rangle \\ &\le (1 - \alpha_n \bar{\gamma})\big(\|T_{\lambda_n} u_n - T_{\lambda_n} z\| + \|T_{\lambda_n} z - T z\|\big)\|x_n - z\| + \alpha_n r l \|x_n - z\|^2 + \alpha_n \langle r V(z) - A(z), x_n - z\rangle \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - z\|^2 + (1 - \alpha_n \bar{\gamma})\lambda_n M(z)\|x_n - z\| + \alpha_n \langle r V(z) - A(z), x_n - z\rangle. \end{aligned}$$

It follows that

$$\|x_n - z\|^2 \le \frac{1}{\bar{\gamma} - r l} \cdot \frac{\lambda_n}{\alpha_n} M(z)\|x_n - z\| + \frac{1}{\bar{\gamma} - r l}\langle r V(z) - A(z), x_n - z\rangle.$$

In particular,

$$\|x_{n_i} - z\|^2 \le \frac{1}{\bar{\gamma} - r l} \cdot \frac{\lambda_{n_i}}{\alpha_{n_i}} M(z)\|x_{n_i} - z\| + \frac{1}{\bar{\gamma} - r l}\langle r V(z) - A(z), x_{n_i} - z\rangle.$$
(3.7)

Since $x_{n_i} \rightharpoonup z$ and $\lambda_n = o(\alpha_n)$, it follows from (3.7) that $x_{n_i} \to z$ as $i \to \infty$.

Next, we show that $z$ solves the variational inequality (3.2). Observe that $x_n = \alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n$.

Hence, we conclude that

$$(A - r V) x_n = -\frac{1}{\alpha_n}(I - \alpha_n A)(I - T_{\lambda_n} Q_{\beta_n}) x_n + r(V u_n - V x_n).$$

Since $T_{\lambda_n} Q_{\beta_n}$ is nonexpansive, $I - T_{\lambda_n} Q_{\beta_n}$ is monotone. Note that, for any given $x \in U \cap EP(\phi)$,

$$\begin{aligned} \langle (A - r V) x_n, x_n - x\rangle &= -\frac{1}{\alpha_n}\langle (I - \alpha_n A)(I - T_{\lambda_n} Q_{\beta_n}) x_n, x_n - x\rangle + r\langle V u_n - V x_n, x_n - x\rangle \\ &= -\frac{1}{\alpha_n}\langle (I - T_{\lambda_n} Q_{\beta_n}) x_n - (I - T_{\lambda_n} Q_{\beta_n}) x, x_n - x\rangle - \frac{1}{\alpha_n}\langle (I - T_{\lambda_n} Q_{\beta_n}) x, x_n - x\rangle \\ &\quad + \langle A(I - T_{\lambda_n} Q_{\beta_n}) x_n, x_n - x\rangle + r\langle V u_n - V x_n, x_n - x\rangle \\ &\le -\frac{1}{\alpha_n}\langle (I - T_{\lambda_n} Q_{\beta_n}) x, x_n - x\rangle + \langle A(I - T_{\lambda_n} Q_{\beta_n}) x_n, x_n - x\rangle + r\langle V u_n - V x_n, x_n - x\rangle \\ &\le \frac{1}{\alpha_n}\|x - T_{\lambda_n} x\| \, \|x_n - x\| + \|A\| \, \|(I - T_{\lambda_n} Q_{\beta_n}) x_n\| \, \|x_n - x\| + r l \|u_n - x_n\| \, \|x_n - x\| \\ &\le \frac{\lambda_n}{\alpha_n} M(x)\|x_n - x\| + \|A\| \, \|x_n - T_{\lambda_n} Q_{\beta_n} x_n\| \, \|x_n - x\| + r l \|u_n - x_n\| \, \|x_n - x\|. \end{aligned}$$

Now, replacing $n$ with $n_i$ in the above inequality and letting $i \to \infty$, since $\lambda_n = o(\alpha_n)$, $\|x_n - T_{\lambda_n} u_n\| \to 0$ and $\|u_n - x_n\| \to 0$, we have

$$\langle (A - r V) z, z - x\rangle \le 0.$$

It follows that $z \in U \cap EP(\phi)$ is a solution of the variational inequality (3.2). Further, by the uniqueness of the solution of the variational inequality (3.2), we conclude that $x_n \to z$ as $n \to \infty$.

The variational inequality (3.2) can be rewritten as

$$\langle (I - A + r V) z - z, z - x\rangle \ge 0, \quad x \in U \cap EP(\phi).$$

By Lemma 2.4, it is equivalent to the following fixed-point equation:

$$P_{U \cap EP(\phi)}(I - A + r V) z = z.$$

This completes the proof. □

Theorem 3.2 Let $C$ be a nonempty, closed and convex subset of a Hilbert space $H$. Let $\phi$ be a bifunction from $C \times C$ to $\mathbb{R}$ satisfying (A1)-(A4), let $f : C \to \mathbb{R}$ be a real-valued convex function, and assume that the gradient $\nabla f$ is $1/L$-ism with a constant $L > 0$. Assume that $U \cap EP(\phi) \ne \emptyset$. Let $V : C \to H$ be Lipschitzian with a constant $l > 0$, let $A : C \to H$ be a strongly positive bounded linear operator with a coefficient $\bar{\gamma} > 0$, and let $0 < r < \bar{\gamma}/l$. Let the sequences $\{u_n\}$ and $\{x_n\}$ be generated by $x_1 \in H$ and

$$\begin{cases} \phi(u_n, y) + \frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & y \in C, \\ x_{n+1} = \alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n, & n \in \mathbb{N}, \end{cases}$$
(3.8)

where $u_n = Q_{\beta_n} x_n$, $\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \theta_n I + (1 - \theta_n) T_{\lambda_n}$, $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$ and $\gamma \in (0, 2/L)$. Let $\{\beta_n\}$, $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy the following conditions:

(C1) $\{\beta_n\} \subset (0, \infty)$, $\liminf_{n \to \infty} \beta_n > 0$, $\sum_{n=1}^{\infty} |\beta_{n+1} - \beta_n| < \infty$;

(C2) $\{\alpha_n\} \subset (0,1)$, $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$, $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$;

(C3) $\{\lambda_n\} \subset (0, 2/\gamma - L)$, $\lambda_n = o(\alpha_n)$, $\sum_{n=1}^{\infty} |\lambda_{n+1} - \lambda_n| < \infty$.

Then the sequence $\{x_n\}$ converges strongly to a point $z \in U \cap EP(\phi)$, which solves the variational inequality (3.2).

Proof It is well known that:

(a) $\hat{x} \in C$ solves the minimization problem (1.5) if and only if, for each fixed $0 < \gamma < 2/L$, $\hat{x}$ solves the fixed-point equation

$$\hat{x} = \mathrm{Proj}_C(I - \gamma \nabla f)\hat{x} = \frac{2 - \gamma L}{4}\hat{x} + \frac{2 + \gamma L}{4} T \hat{x};$$

it is clear that $\hat{x} = T\hat{x}$, i.e., $\hat{x} \in U = \mathrm{Fix}(T)$.

(b) The gradient $\nabla f$ is $1/L$-ism.

(c) $\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n})$ is $\frac{2 + \gamma(L + \lambda_n)}{4}$-averaged for $\gamma \in (0, 2/L)$; in particular, the following relation holds:

$$\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \frac{2 - \gamma(L + \lambda_n)}{4} I + \frac{2 + \gamma(L + \lambda_n)}{4} T_{\lambda_n} = \theta_n I + (1 - \theta_n) T_{\lambda_n}.$$

Since $\alpha_n \to 0$, we may assume that $\alpha_n \in (0, \|A\|^{-1})$. Now, we first show that $\{x_n\}$ is bounded. Indeed, pick $p \in U \cap EP(\phi)$; since $u_n = Q_{\beta_n} x_n$, by Lemma 2.7 we know that

$$\|u_n - p\| = \|Q_{\beta_n} x_n - Q_{\beta_n} p\| \le \|x_n - p\|.$$
(3.9)

Thus, we derive from (3.5) that

$$\begin{aligned} \|x_{n+1} - p\| &= \|\alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n - p\| = \|(I - \alpha_n A)(T_{\lambda_n} u_n - p) + \alpha_n r V u_n - \alpha_n A(p)\| \\ &\le (1 - \alpha_n \bar{\gamma})\|T_{\lambda_n} u_n - p\| + \alpha_n\|r V u_n - A(p)\| \\ &\le (1 - \alpha_n \bar{\gamma})\big(\|T_{\lambda_n} u_n - T_{\lambda_n} p\| + \|T_{\lambda_n} p - T p\|\big) + \alpha_n\big(r\|V u_n - V(p)\| + \|r V(p) - A(p)\|\big) \\ &\le (1 - \alpha_n \bar{\gamma})\big(\|u_n - p\| + \lambda_n M(p)\big) + \alpha_n\big(r l\|u_n - p\| + \|r V(p) - A(p)\|\big) \\ &= \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|u_n - p\| + (1 - \alpha_n \bar{\gamma})\lambda_n M(p) + \alpha_n\|r V(p) - A(p)\| \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - p\| + \lambda_n M(p) + \alpha_n\|r V(p) - A(p)\| \\ &= \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - p\| + \alpha_n(\bar{\gamma} - r l)\Big[\frac{\lambda_n}{\alpha_n}\frac{M(p)}{\bar{\gamma} - r l} + \frac{\|r V(p) - A(p)\|}{\bar{\gamma} - r l}\Big]. \end{aligned}$$

Since $\lambda_n = o(\alpha_n)$, there exists a real number $M' > 0$ such that $\frac{\lambda_n}{\alpha_n} \le M'$. Thus,

$$\|x_{n+1} - p\| \le \big(1 - \alpha_n(\bar{\gamma} - rl)\big)\|x_n - p\| + \alpha_n(\bar{\gamma} - rl)\frac{M' M(p) + \|r V(p) - A(p)\|}{\bar{\gamma} - rl}.$$

By induction, we have

$$\|x_n - p\| \le \max\Big\{\|x_1 - p\|, \frac{M' M(p) + \|r V(p) - A(p)\|}{\bar{\gamma} - rl}\Big\}, \quad n \ge 1.$$

Hence $\{x_n\}$ is bounded. From (3.9), we also derive that $\{u_n\}$ is bounded.

Next, we show that $\|x_{n+1} - x_n\| \to 0$.

Indeed, since

$$\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \frac{2 - \gamma(L + \lambda_n)}{4} I + \frac{2 + \gamma(L + \lambda_n)}{4} T_{\lambda_n},$$

we have

$$T_{\lambda_n} = \frac{4\,\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) - [2 - \gamma(L + \lambda_n)] I}{2 + \gamma(L + \lambda_n)}.$$

So, we obtain that

$$\begin{aligned} \|T_{\lambda_n}(u_{n-1}) - T_{\lambda_{n-1}}(u_{n-1})\| &= \bigg\|\frac{4\,\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) - [2 - \gamma(L + \lambda_n)] I}{2 + \gamma(L + \lambda_n)} u_{n-1} - \frac{4\,\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_{n-1}}) - [2 - \gamma(L + \lambda_{n-1})] I}{2 + \gamma(L + \lambda_{n-1})} u_{n-1}\bigg\| \\ &\le \bigg\|\frac{4\,\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n})}{2 + \gamma(L + \lambda_n)} u_{n-1} - \frac{4\,\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_{n-1}})}{2 + \gamma(L + \lambda_{n-1})} u_{n-1}\bigg\| + \bigg\|\frac{[2 - \gamma(L + \lambda_n)] I}{2 + \gamma(L + \lambda_n)} u_{n-1} - \frac{[2 - \gamma(L + \lambda_{n-1})] I}{2 + \gamma(L + \lambda_{n-1})} u_{n-1}\bigg\| \\ &= \frac{\big\|4[2 + \gamma(L + \lambda_{n-1})]\,\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) u_{n-1} - 4[2 + \gamma(L + \lambda_n)]\,\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_{n-1}}) u_{n-1}\big\|}{[2 + \gamma(L + \lambda_n)][2 + \gamma(L + \lambda_{n-1})]} + \frac{4\gamma|\lambda_n - \lambda_{n-1}|}{[2 + \gamma(L + \lambda_n)][2 + \gamma(L + \lambda_{n-1})]}\|u_{n-1}\| \\ &\le \frac{4\gamma|\lambda_n - \lambda_{n-1}| \, \|\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) u_{n-1}\|}{[2 + \gamma(L + \lambda_n)][2 + \gamma(L + \lambda_{n-1})]} + \frac{4[2 + \gamma(L + \lambda_n)] \, \|\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) u_{n-1} - \mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_{n-1}}) u_{n-1}\|}{[2 + \gamma(L + \lambda_n)][2 + \gamma(L + \lambda_{n-1})]} + \frac{4\gamma|\lambda_n - \lambda_{n-1}|}{[2 + \gamma(L + \lambda_n)][2 + \gamma(L + \lambda_{n-1})]}\|u_{n-1}\| \\ &\le |\lambda_n - \lambda_{n-1}|\big[\gamma\|\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) u_{n-1}\| + 4\gamma\|u_{n-1}\| + \gamma\|u_{n-1}\|\big] \\ &\le K|\lambda_n - \lambda_{n-1}| \end{aligned}$$

for some appropriate constant $K > 0$ such that

$$K \ge \gamma\|\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) u_{n-1}\| + 5\gamma\|u_{n-1}\|.$$

Thus, we get

$$\begin{aligned} \|x_{n+1} - x_n\| &= \|[\alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n] - [\alpha_{n-1} r V u_{n-1} + (I - \alpha_{n-1} A) T_{\lambda_{n-1}} u_{n-1}]\| \\ &\le \alpha_n r\|V u_n - V u_{n-1}\| + |\alpha_n - \alpha_{n-1}| \, r\|V u_{n-1}\| + \|(I - \alpha_n A)(T_{\lambda_n} u_n - T_{\lambda_n} u_{n-1})\| + \|(I - \alpha_n A) T_{\lambda_n} u_{n-1} - (I - \alpha_{n-1} A) T_{\lambda_{n-1}} u_{n-1}\| \\ &\le \alpha_n r l\|u_n - u_{n-1}\| + |\alpha_n - \alpha_{n-1}| \, r\|V u_{n-1}\| + (1 - \alpha_n \bar{\gamma})\|u_n - u_{n-1}\| + \|T_{\lambda_n} u_{n-1} - T_{\lambda_{n-1}} u_{n-1}\| + \|\alpha_{n-1} A T_{\lambda_{n-1}} u_{n-1} - \alpha_n A T_{\lambda_n} u_{n-1}\| \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|u_n - u_{n-1}\| + |\alpha_n - \alpha_{n-1}| \, r\|V u_{n-1}\| + \|T_{\lambda_n} u_{n-1} - T_{\lambda_{n-1}} u_{n-1}\|\big(1 + \alpha_{n-1}\|A\|\big) + |\alpha_n - \alpha_{n-1}| \, \|A T_{\lambda_n} u_{n-1}\| \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|u_n - u_{n-1}\| + |\alpha_n - \alpha_{n-1}|\big(r\|V u_{n-1}\| + \|A T_{\lambda_n} u_{n-1}\|\big) + |\lambda_n - \lambda_{n-1}|(K + K\|A\|). \end{aligned}$$

Since both $\{V u_{n-1}\}$ and $\{A T_{\lambda_n} u_{n-1}\}$ are bounded, we can take a constant $E > 0$ such that

$$E \ge r\|V u_{n-1}\| + \|A T_{\lambda_n} u_{n-1}\|, \quad n \ge 1.$$

Consequently,

$$\|x_{n+1} - x_n\| \le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|u_n - u_{n-1}\| + E|\alpha_n - \alpha_{n-1}| + |\lambda_n - \lambda_{n-1}|(K + \|A\| K).$$
(3.10)
(3.10)

From $u_{n+1} = Q_{\beta_{n+1}} x_{n+1}$ and $u_n = Q_{\beta_n} x_n$, we note that

$$\phi(u_{n+1}, y) + \frac{1}{\beta_{n+1}}\langle y - u_{n+1}, u_{n+1} - x_{n+1}\rangle \ge 0, \quad y \in C$$
(3.11)

and

$$\phi(u_n, y) + \frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge 0, \quad y \in C.$$
(3.12)

Putting $y = u_n$ in (3.11) and $y = u_{n+1}$ in (3.12), we have

$$\phi(u_{n+1}, u_n) + \frac{1}{\beta_{n+1}}\langle u_n - u_{n+1}, u_{n+1} - x_{n+1}\rangle \ge 0$$

and

$$\phi(u_n, u_{n+1}) + \frac{1}{\beta_n}\langle u_{n+1} - u_n, u_n - x_n\rangle \ge 0.$$

So, from (A2), we have

$$\Big\langle u_{n+1} - u_n, \frac{u_n - x_n}{\beta_n} - \frac{u_{n+1} - x_{n+1}}{\beta_{n+1}}\Big\rangle \ge 0$$

and hence

$$\Big\langle u_{n+1} - u_n, u_n - u_{n+1} + u_{n+1} - x_n - \frac{\beta_n}{\beta_{n+1}}(u_{n+1} - x_{n+1})\Big\rangle \ge 0.$$

Since $\liminf_{n \to \infty} \beta_n > 0$, without loss of generality, let us assume that there exists a real number $a$ such that $\beta_n > a > 0$ for all $n \in \mathbb{N}$. Thus, we have

$$\|u_{n+1} - u_n\|^2 \le \Big\langle u_{n+1} - u_n, x_{n+1} - x_n + \Big(1 - \frac{\beta_n}{\beta_{n+1}}\Big)(u_{n+1} - x_{n+1})\Big\rangle \le \|u_{n+1} - u_n\|\Big\{\|x_{n+1} - x_n\| + \Big|1 - \frac{\beta_n}{\beta_{n+1}}\Big| \, \|u_{n+1} - x_{n+1}\|\Big\},$$

thus,

$$\|u_{n+1} - u_n\| \le \|x_{n+1} - x_n\| + \frac{1}{a}|\beta_{n+1} - \beta_n| M_1,$$
(3.13)

where $M_1 = \sup\{\|u_n - x_n\| : n \in \mathbb{N}\}$.

From (3.10) and (3.13), we obtain

$$\begin{aligned} \|x_{n+1} - x_n\| &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\Big(\|x_n - x_{n-1}\| + \frac{1}{a}|\beta_n - \beta_{n-1}| M_1\Big) + E|\alpha_n - \alpha_{n-1}| + |\lambda_n - \lambda_{n-1}|(K + \|A\| K) \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - x_{n-1}\| + \frac{M_1}{a}|\beta_n - \beta_{n-1}| + E|\alpha_n - \alpha_{n-1}| + |\lambda_n - \lambda_{n-1}|(K + \|A\| K) \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - x_{n-1}\| + M_2\big(|\beta_n - \beta_{n-1}| + |\alpha_n - \alpha_{n-1}| + |\lambda_n - \lambda_{n-1}|\big), \end{aligned}$$

where $M_2 = \max\{\frac{M_1}{a}, E, K + \|A\| K\}$. Hence, by Lemma 2.1, we have

$$\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0.$$
(3.14)

Then, from (3.13), (3.14) and $|\beta_{n+1} - \beta_n| \to 0$, we have

$$\lim_{n \to \infty}\|u_{n+1} - u_n\| = 0.$$
(3.15)

For any $p \in U \cap EP(\phi)$, by the same proof as in Theorem 3.1, we have

$$\|u_n - p\|^2 \le \|x_n - p\|^2 - \|u_n - x_n\|^2.$$
(3.16)

Then, from (3.5) and (3.16), by the same argument as in the proof of Theorem 3.1, we derive that

$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \big((1 - \alpha_n \bar{\gamma})^2 + 2(1 - \alpha_n \bar{\gamma})\alpha_n r l\big)\big(\|x_n - p\|^2 - \|u_n - x_n\|^2\big) + \lambda_n^2 (M(p))^2 \\ &\quad + 2\|u_n - p\|\big(\lambda_n M(p) + \alpha_n\|r V(p) - A(p)\|\big) + 2\alpha_n \lambda_n M(p)\big(r l\|u_n - p\| + \|r V(p) - A(p)\|\big) \\ &\quad + \alpha_n^2\big(r l\|u_n - p\| + \|r V(p) - A(p)\|\big)^2. \end{aligned}$$

Since both $\{x_n\}$ and $\{u_n\}$ are bounded, $\alpha_n \to 0$, $\lambda_n \to 0$, and $\|x_{n+1} - x_n\| \to 0$, we have

$$\lim_{n \to \infty}\|x_n - u_n\| = 0.$$
(3.17)

Next,

$$\|x_n - T_{\lambda_n} x_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - T_{\lambda_n} u_n\| + \|T_{\lambda_n} u_n - T_{\lambda_n} x_n\| \le \|x_n - x_{n+1}\| + \alpha_n\|r V u_n - A T_{\lambda_n} u_n\| + \|u_n - x_n\|,$$

and then

$$\|x_n - T_{\lambda_n} x_n\| \to 0.$$
(3.18)

It follows that $\|u_n - T_{\lambda_n} u_n\| \to 0$.

Now, we show that

$$\limsup_{n \to \infty} \langle x_n - z, (rV - A) z\rangle \le 0,$$

where $z \in U \cap EP(\phi)$ is the unique solution of the variational inequality (3.2).

Indeed, since $\{x_n\}$ is bounded, without loss of generality we may assume that $x_{n_k} \rightharpoonup \tilde{x}$ and

$$\limsup_{n \to \infty} \langle x_n - z, (rV - A) z\rangle = \lim_{k \to \infty} \langle x_{n_k} - z, (rV - A) z\rangle.$$
(3.19)

By (3.17) and $x_{n_k} \rightharpoonup \tilde{x}$, we derive that $u_{n_k} \rightharpoonup \tilde{x}$.

Note that

$$\|u_n - T u_n\| \le \|u_n - T_{\lambda_n} u_n\| + \|T_{\lambda_n} u_n - T u_n\| \le \|u_n - T_{\lambda_n} u_n\| + \lambda_n M(u_n).$$

Hence, by $\|u_n - T_{\lambda_n} u_n\| \to 0$, we get $\|u_n - T u_n\| \to 0$.

In terms of Lemma 2.2, we get $\tilde{x} \in \mathrm{Fix}(T) = U$.

Then, by the same argument as in the proof of Theorem 3.1, we have $\tilde{x} \in U \cap EP(\phi)$.

Since $z \in U \cap EP(\phi)$ is the solution of the variational inequality (3.2), we derive from (3.19) that

$$\limsup_{n \to \infty} \langle x_n - z, (rV - A) z\rangle \le 0.$$
(3.20)

Finally, we show that $x_n \to z$.

As a matter of fact,

$$x_{n+1} - z = \alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n - z = (I - \alpha_n A)(T_{\lambda_n} u_n - z) + \alpha_n r (V u_n - V z) + \alpha_n (r V(z) - A(z)).$$

So, from (3.5) and (3.9), we derive

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \langle (I - \alpha_n A)(T_{\lambda_n} u_n - z), x_{n+1} - z\rangle + \alpha_n r \langle V u_n - V z, x_{n+1} - z\rangle + \alpha_n \langle r V(z) - A(z), x_{n+1} - z\rangle \\ &\le (1 - \alpha_n \bar{\gamma})\|T_{\lambda_n} u_n - z\| \, \|x_{n+1} - z\| + \alpha_n r l \|u_n - z\| \, \|x_{n+1} - z\| + \alpha_n \langle r V(z) - A(z), x_{n+1} - z\rangle \\ &\le (1 - \alpha_n \bar{\gamma})\big(\|T_{\lambda_n} u_n - T_{\lambda_n} z\| + \|T_{\lambda_n} z - T z\|\big)\|x_{n+1} - z\| + \alpha_n r l \|u_n - z\| \, \|x_{n+1} - z\| + \alpha_n \langle r V(z) - A(z), x_{n+1} - z\rangle \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - z\| \, \|x_{n+1} - z\| + (1 - \alpha_n \bar{\gamma})\lambda_n M(z)\|x_{n+1} - z\| + \alpha_n \langle r V(z) - A(z), x_{n+1} - z\rangle \\ &\le \frac{1 - \alpha_n(\bar{\gamma} - r l)}{2}\big(\|x_n - z\|^2 + \|x_{n+1} - z\|^2\big) + \alpha_n\Big[\langle r V(z) - A(z), x_{n+1} - z\rangle + \frac{\lambda_n}{\alpha_n} M(z)\|x_{n+1} - z\|\Big]. \end{aligned}$$

It follows that

$$\begin{aligned} \|x_{n+1} - z\|^2 &\le \frac{1 - \alpha_n(\bar{\gamma} - r l)}{1 + \alpha_n(\bar{\gamma} - r l)}\|x_n - z\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\bar{\gamma} - r l)}\Big[\langle r V(z) - A(z), x_{n+1} - z\rangle + \frac{\lambda_n}{\alpha_n} M(z)\|x_{n+1} - z\|\Big] \\ &\le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - z\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\bar{\gamma} - r l)}\Big[\langle r V(z) - A(z), x_{n+1} - z\rangle + \frac{\lambda_n}{\alpha_n} M(z)\|x_{n+1} - z\|\Big]. \end{aligned}$$

Since $\{x_n\}$ is bounded, we can take a constant $M_3 > 0$ such that

$$M_3 \ge M(z)\|x_{n+1} - z\|, \quad n \ge 1.$$

Then we obtain that

$$\|x_{n+1} - z\|^2 \le \big(1 - \alpha_n(\bar{\gamma} - r l)\big)\|x_n - z\|^2 + \alpha_n \delta_n,$$
(3.21)

where $\delta_n = \frac{2}{1 + \alpha_n(\bar{\gamma} - r l)}\big[\langle r V(z) - A(z), x_{n+1} - z\rangle + \frac{\lambda_n}{\alpha_n} M_3\big]$.

By (3.20) and $\lambda_n = o(\alpha_n)$, we get $\limsup_{n \to \infty} \delta_n \le 0$. Now, applying Lemma 2.1 to (3.21), we conclude that $x_n \to z$ as $n \to \infty$. □

4 Application

In this section, we give an application of Theorem 3.2 to the split feasibility problem (SFP), which was introduced by Censor and Elfving [25]. Since its inception in 1994, the SFP has received much attention (see [21, 26, 27]) due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy.

The SFP can mathematically be formulated as the problem of finding a point $x^*$ with the property

$$x^* \in C \quad \text{and} \quad B x^* \in Q,$$
(4.1)

where $C$ and $Q$ are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $B : H_1 \to H_2$ is a bounded linear operator.

It is clear that $x^*$ is a solution of the split feasibility problem (4.1) if and only if $x^* \in C$ and $B x^* - \mathrm{Proj}_Q B x^* = 0$. We define the proximity function $f$ by

$$f(x) = \frac{1}{2}\|B x - \mathrm{Proj}_Q B x\|^2$$

and consider the constrained convex minimization problem

$$\min_{x \in C} f(x) = \min_{x \in C} \frac{1}{2}\|B x - \mathrm{Proj}_Q B x\|^2.$$
(4.2)

Then $x^*$ solves the split feasibility problem (4.1) if and only if $x^*$ solves the minimization problem (4.2) with the minimum value equal to 0. Byrne [7] introduced the so-called CQ algorithm to solve the SFP:

$$x_{n+1} = \mathrm{Proj}_C\big(I - \gamma B^*(I - \mathrm{Proj}_Q) B\big) x_n, \quad n \ge 0,$$
(4.3)

where $0 < \gamma < 2/\|B\|^2$. He obtained that the sequence $\{x_n\}$ generated by (4.3) converges weakly to a solution of the SFP.
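A minimal numerical sketch of the CQ iteration (4.3) for a small hypothetical instance, with $C$ and $Q$ taken as boxes so that both projections are explicit:

```python
import numpy as np

# Hypothetical instance: find x in C = [0,1]^2 with Bx in Q = [0.5, 1.5]^2.
B = np.array([[1.0, 0.5], [0.0, 1.0]])
L = np.linalg.norm(B.T @ B, 2)          # L = ||B||^2

proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 0.5, 1.5)

gamma = 1.0 / L                          # 0 < gamma < 2/||B||^2
x = np.zeros(2)
for _ in range(1000):
    y = B @ x
    # x_{n+1} = Proj_C ( x_n - gamma B^T (I - Proj_Q) B x_n )
    x = proj_C(x - gamma * B.T @ (y - proj_Q(y)))

print(x, B @ x)   # x should lie in C with Bx (approximately) in Q
```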

In order to obtain a strongly convergent iterative sequence to solve the SFP, we propose the following algorithm:

$$\begin{cases} \phi(u_n, y) + \frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & y \in C, \\ x_{n+1} = \alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n, & n \in \mathbb{N}, \end{cases}$$
(4.4)

where $u_n = Q_{\beta_n} x_n$. Let $\{T_{\lambda_n}\}$ satisfy the following conditions:

(i) $\mathrm{Proj}_C\big(I - \gamma(B^*(I - \mathrm{Proj}_Q)B + \lambda_n I)\big) = \theta_n I + (1 - \theta_n) T_{\lambda_n}$ and $\gamma \in (0, 2/\|B\|^2)$;

(ii) $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$ for all $n$,

where $V : C \to H_1$ is Lipschitzian with a constant $l > 0$ and $A : C \to H_1$ is a strongly positive bounded linear operator with a constant $\bar{\gamma} > 0$. Suppose that $0 < r < \bar{\gamma}/l$. We can show that the sequence $\{x_n\}$ generated by (4.4) converges strongly to a solution of SFP (4.1) if the sequence $\{\alpha_n\} \subset (0,1)$ and the sequence $\{\lambda_n\}$ of parameters satisfy appropriate conditions.

Applying Theorem 3.2, we obtain the following result.

Theorem 4.1 Assume that the split feasibility problem (4.1) is consistent. Let the sequence $\{x_n\}$ be generated by (4.4), where the sequences $\{\beta_n\}, \{\alpha_n\} \subset (0,1)$ and $\{\lambda_n\}$ satisfy conditions (C1)-(C3). Then the sequence $\{x_n\}$ converges strongly to a solution of the split feasibility problem (4.1).

Proof By the definition of the proximity function $f$, we have

$$\nabla f(x) = B^*(I - \mathrm{Proj}_Q) B x,$$

and $\nabla f$ is Lipschitz continuous, i.e.,

$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|,$$

where $L = \|B\|^2$.

Set $f_{\lambda_n}(x) = f(x) + \frac{\lambda_n}{2}\|x\|^2$; consequently,

$$\nabla f_{\lambda_n}(x) = \nabla f(x) + \lambda_n I(x) = B^*(I - \mathrm{Proj}_Q) B x + \lambda_n x.$$
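In code, the regularized gradient used by (4.4)/(4.5) only adds the term $\lambda_n x$ to the CQ gradient; a sketch in the same spirit as the earlier blocks, with $B$ and $\mathrm{Proj}_Q$ passed in as assumptions:

```python
import numpy as np

def grad_f_reg(x, lam, B, proj_Q):
    # grad f_{lambda_n}(x) = B^T (I - Proj_Q) B x + lambda_n x
    y = B @ x
    return B.T @ (y - proj_Q(y)) + lam * x
```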

Then the iterative scheme (4.4) is equivalent to

$$\begin{cases} \phi(u_n, y) + \frac{1}{\beta_n}\langle y - u_n, u_n - x_n\rangle \ge 0, & y \in C, \\ x_{n+1} = \alpha_n r V u_n + (I - \alpha_n A) T_{\lambda_n} u_n, & n \in \mathbb{N}, \end{cases}$$
(4.5)

where $u_n = Q_{\beta_n} x_n$ and $\{T_{\lambda_n}\}$ satisfies the following conditions:

(i) $\mathrm{Proj}_C(I - \gamma \nabla f_{\lambda_n}) = \theta_n I + (1 - \theta_n) T_{\lambda_n}$ and $\gamma \in (0, 2/L)$;

(ii) $\theta_n = \frac{2 - \gamma(L + \lambda_n)}{4}$ for all $n$.

By Theorem 3.2, we obtain the conclusion immediately. □

5 Conclusion

Methods for solving the equilibrium problem (EP) and the constrained convex minimization problem have each been studied extensively in Hilbert spaces. In 2012, Tian and Liu [20] proposed an iterative method for finding a common solution of an EP and a constrained convex minimization problem. In this paper, for the first time, we combine the regularized gradient-projection algorithm and the averaged mappings approach to propose implicit and explicit algorithms for finding the common solution of an EP and a constrained convex minimization problem; this common solution also solves a certain variational inequality.

References

  1. Brezis H: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.


  2. Jaiboon C, Kumam P: A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Fixed Point Theory Appl. 2009. doi:10.1155/2009/374815


  3. Jitpeera T, Katchang P, Kumam P: A viscosity of Cesàro mean approximation methods for a mixed equilibrium, variational inequalities, and fixed point problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/945051


  4. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965


  5. Han D, Lo HK: Solving non-additive traffic assignment problems, a descent method for cocoercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5


  6. Bauschke H, Borwein J: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710


  7. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006


  8. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. 10.1080/02331930412331327157


  9. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z


  10. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001:473–504.


  11. Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515. 10.1016/j.jmaa.2006.08.036


  12. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.


  13. Flam SD, Antipin AS: Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78: 29–41.


  14. He HM, Liu SY, Cho YJ: An explicit method for systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. J. Comput. Appl. Math. 2011, 235: 4128–4139. 10.1016/j.cam.2011.03.003


  15. Qin XL, Cho YJ, Kang SM: Convergence analysis on hybrid projection algorithms for equilibrium problems and variational inequality problems. Math. Model. Anal. 2009, 14: 335–351. 10.3846/1392-6292.2009.14.335-351


  16. Moudafi A: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615


  17. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059


  18. Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. 10.1016/j.jmaa.2005.05.028


  19. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005


  20. Tian M, Liu L: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 2012. doi:10.1080/02331934.2012.713361


  21. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018


  22. Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed-point iteration processes. Nonlinear Anal. 2006, 64: 2400–2411. 10.1016/j.na.2005.08.018


  23. Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. 10.1016/j.na.2003.11.004


  24. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.


  25. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692


  26. López G, Martin-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012., 28: Article ID 085004


  27. Zhao JL, Zhang YJ, Yang QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219: 1644–1653. 10.1016/j.amc.2012.08.005



Acknowledgements

The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (No. ZXH2012K001).

Author information

Corresponding author

Correspondence to Ming Tian.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Tian, M., Huang, LH. Regularized gradient-projection methods for equilibrium and constrained convex minimization problems. J Inequal Appl 2013, 243 (2013). https://doi.org/10.1186/1029-242X-2013-243
