
General iterative scheme based on the regularization for solving a constrained convex minimization problem

Abstract

It is well known that the regularization method plays an important role in solving a constrained convex minimization problem. In this article, we introduce implicit and explicit iterative schemes based on the regularization for solving a constrained convex minimization problem. We establish results on the strong convergence of the sequences generated by the proposed schemes to a solution of the minimization problem. Such a point is also a solution of a variational inequality. We also apply the algorithm to solve a split feasibility problem.

MSC: 47H09, 47H05, 47H06, 47J25, 47J05.

1 Introduction

The gradient-projection algorithm is a classical and powerful method for solving constrained convex optimization problems and has been studied by many authors (see [1–14] and the references therein). The method has recently been applied to solve split feasibility problems, which find applications in image reconstruction and intensity-modulated radiation therapy (see [15–22]).

Consider the problem of minimizing $f$ over the constraint set $C$ (assuming that $C$ is a nonempty closed and convex subset of a real Hilbert space $H$). If $f:H\to\mathbb{R}$ is a convex and continuously Fréchet differentiable functional, the gradient-projection algorithm generates a sequence $\{x_n\}_{n=0}^{\infty}$ determined by the gradient $\nabla f$ and the metric projection onto $C$. If $f$ has a Lipschitz continuous and strongly monotone gradient, the sequence $\{x_n\}_{n=0}^{\infty}$ converges strongly to a minimizer of $f$ over $C$. If the gradient of $f$ is only assumed to be inverse strongly monotone, then $\{x_n\}_{n=0}^{\infty}$ converges only weakly when $H$ is infinite-dimensional.

Recently, Xu [23] gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely an averaged mapping approach. He also presented two modifications of the gradient-projection algorithm that are shown to converge strongly.

On the other hand, regularization, in particular the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems [24, 25]. Under suitable conditions, the regularization method is known to be only weakly convergent.

The purpose of this paper is to present a general iterative method that combines the regularization method with the averaged mapping approach. We first propose implicit and explicit iterative schemes for solving a constrained convex minimization problem and prove that both schemes converge strongly to a solution of the minimization problem, which is also a solution of a variational inequality. Furthermore, we apply the method to solve a split feasibility problem.

2 Preliminaries

Throughout the paper, we assume that $H$ is a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively, and that $C$ is a nonempty closed convex subset of $H$. The set of fixed points of a mapping $T$ is denoted by $\operatorname{Fix}(T)$, that is, $\operatorname{Fix}(T)=\{x\in H : Tx=x\}$. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$, and $x_n \to x$ to indicate that $\{x_n\}$ converges strongly to $x$. The following definitions and results are needed in the subsequent sections.

Recall that a mapping $T:H\to H$ is said to be $L$-Lipschitzian if

$$\|Tx-Ty\| \le L\|x-y\|, \quad \forall x,y\in H,$$
(1)

where $L>0$ is a constant. In particular, if $L\in[0,1)$, then $T$ is called a contraction on $H$; if $L=1$, then $T$ is called a nonexpansive mapping on $H$. $T$ is called firmly nonexpansive if $2T-I$ is nonexpansive, or equivalently, $\langle x-y, Tx-Ty\rangle \ge \|Tx-Ty\|^2$, $\forall x,y\in H$. Alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T=\frac{1}{2}(I+W)$, where $W:H\to H$ is nonexpansive.

Definition 2.1 A mapping $T:H\to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping; that is,

$$T=(1-\alpha)I+\alpha W,$$
(2)

where $\alpha$ is a number in $(0,1)$ and $W:H\to H$ is nonexpansive. More precisely, when (2) holds, we say that $T$ is $\alpha$-averaged. Clearly, a firmly nonexpansive mapping (in particular, a projection) is a $\frac{1}{2}$-averaged mapping.

Proposition 2.1 [16, 26]

For given operators $W,T,V:H\to H$:

(i) If $T=(1-\alpha)W+\alpha V$ for some $\alpha\in(0,1)$ and if $W$ is averaged and $V$ is nonexpansive, then $T$ is averaged.

(ii) $T$ is firmly nonexpansive if and only if the complement $I-T$ is firmly nonexpansive.

(iii) If $T=(1-\alpha)W+\alpha V$ for some $\alpha\in(0,1)$ and if $W$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged.

(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1\cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, then the composite $T_1T_2$ is $\alpha$-averaged, where $\alpha=\alpha_1+\alpha_2-\alpha_1\alpha_2$.

Recall that the metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C:H\to C$ which assigns to each point $x\in H$ the unique point $P_Cx\in C$ satisfying the property

$$\|x-P_Cx\| = \inf_{y\in C}\|x-y\| =: d(x,C).$$
(3)

Lemma 2.1 For given $x\in H$:

(i) $z=P_Cx$ if and only if

$$\langle x-z, y-z\rangle \le 0, \quad \forall y\in C;$$

(ii) $z=P_Cx$ if and only if

$$\|x-z\|^2 \le \|x-y\|^2 - \|y-z\|^2, \quad \forall y\in C;$$

(iii) $\langle P_Cx-P_Cy, x-y\rangle \ge \|P_Cx-P_Cy\|^2$, $\forall x,y\in H$.

Consequently, $P_C$ is nonexpansive.

Lemma 2.2 The following inequality holds in a Hilbert space $X$:

$$\|x+y\|^2 \le \|x\|^2 + 2\langle y, x+y\rangle, \quad \forall x,y\in X.$$

Lemma 2.3 [27]

In a Hilbert space $H$, we have

$$\|\lambda x+(1-\lambda)y\|^2 = \lambda\|x\|^2 + (1-\lambda)\|y\|^2 - \lambda(1-\lambda)\|x-y\|^2, \quad \forall x,y\in H \text{ and } \lambda\in[0,1].$$

Lemma 2.4 (Demiclosedness principle [27])

Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T:C\to C$ be a nonexpansive mapping with $\operatorname{Fix}(T)\neq\emptyset$. If $\{x_n\}_{n=1}^{\infty}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I-T)x_n\}_{n=1}^{\infty}$ converges strongly to $y$, then $(I-T)x=y$. In particular, if $y=0$, then $x\in\operatorname{Fix}(T)$.

Definition 2.2 A nonlinear operator $G$ with domain $D(G)\subseteq H$ and range $R(G)\subseteq H$ is said to be:

(i) monotone if

$$\langle x-y, Gx-Gy\rangle \ge 0, \quad \forall x,y\in D(G);$$

(ii) $\beta$-strongly monotone if there exists $\beta>0$ such that

$$\langle x-y, Gx-Gy\rangle \ge \beta\|x-y\|^2, \quad \forall x,y\in D(G);$$

(iii) $\nu$-inverse strongly monotone (for short, $\nu$-ism) if there exists $\nu>0$ such that

$$\langle x-y, Gx-Gy\rangle \ge \nu\|Gx-Gy\|^2, \quad \forall x,y\in D(G).$$

Proposition 2.2 [16]

Let $T:H\to H$ be an operator from $H$ to itself.

(i) $T$ is nonexpansive if and only if the complement $I-T$ is $\frac{1}{2}$-ism.

(ii) If $T$ is $\nu$-ism, then for $\gamma>0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism.

(iii) $T$ is averaged if and only if the complement $I-T$ is $\nu$-ism for some $\nu>1/2$. Indeed, for $\alpha\in(0,1)$, $T$ is $\alpha$-averaged if and only if $I-T$ is $\frac{1}{2\alpha}$-ism.

Lemma 2.5 [6]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n, \quad n\ge 0,$$

where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that

(i) $\sum_{n=1}^{\infty}\gamma_n=\infty$;

(ii) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=1}^{\infty}\gamma_n|\delta_n|<\infty$.

Then $\lim_{n\to\infty}a_n=0$.

3 Main results

We now look at the constrained convex minimization problem

$$\min_{x\in C} f(x),$$
(4)

where $C$ is a closed and convex subset of a Hilbert space $H$ and $f:C\to\mathbb{R}$ is a real-valued convex function. Assume that problem (4) is consistent, and let $S$ denote its solution set. If $f$ is Fréchet differentiable, then the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}_{n=0}^{\infty}$ according to the recursive formula

$$x_{n+1} = \operatorname{Proj}_C(I-\gamma\nabla f)(x_n), \quad n\ge 0,$$
(5)

or, more generally,

$$x_{n+1} = \operatorname{Proj}_C(I-\gamma_n\nabla f)(x_n), \quad n\ge 0,$$
(6)

where, in both (5) and (6), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\gamma$ and $\gamma_n$ are positive real numbers.

As a matter of fact, it is known that if $\nabla f$ fails to be strongly monotone and is only $\frac{1}{L}$-ism, namely, there is a constant $L>0$ such that

$$\langle x-y, \nabla f(x)-\nabla f(y)\rangle \ge \frac{1}{L}\|\nabla f(x)-\nabla f(y)\|^2, \quad \forall x,y\in C,$$

then, under suitable assumptions on $\gamma$ or $\gamma_n$, algorithms (5) and (6) still converge, but only in the weak topology.
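For illustration, here is a minimal Python sketch of iteration (5). The quadratic objective $f(x)=\frac{1}{2}\|Mx-b\|^2$, the box constraint $C=[-1,1]^5$ (whose metric projection is a componentwise clip), and all problem data are assumptions made for this example, not part of the paper; for this $f$, $\nabla f(x)=M^{T}(Mx-b)$ is $\frac{1}{L}$-ism with $L=\|M^{T}M\|_2$.

```python
import numpy as np

# Illustrative data (assumed): f(x) = 0.5*||M x - b||^2 on the box C = [-1, 1]^5.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(M.T @ M, 2)      # grad f is (1/L)-ism with this L

def grad_f(x):
    # Gradient of the quadratic objective.
    return M.T @ (M @ x - b)

def proj_C(x):
    # Metric projection onto the box C = [-1, 1]^5.
    return np.clip(x, -1.0, 1.0)

def gpa(x0, gamma, n_iter=500):
    """Gradient-projection algorithm (5): x_{n+1} = Proj_C (I - gamma*grad f) x_n."""
    x = x0
    for _ in range(n_iter):
        x = proj_C(x - gamma * grad_f(x))
    return x

x_min = gpa(np.zeros(5), gamma=1.0 / L)   # any fixed 0 < gamma < 2/L is admissible
```

With a fixed step $0<\gamma<\frac{2}{L}$, the theory above guarantees only weak convergence in general (in this finite-dimensional toy problem, weak and strong convergence coincide).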

Now consider the regularized minimization problem

$$\min_{x\in C} f_\alpha(x) := \min_{x\in C}\Big\{f(x)+\frac{\alpha}{2}\|x\|^2\Big\},$$

where $\alpha>0$ is the regularization parameter, and again $f$ is convex with a $\frac{1}{L}$-ism gradient $\nabla f$.

The regularization method is defined as follows:

$$x_{n+1} = \operatorname{Proj}_C(I-\gamma\nabla f_{\alpha_n})(x_n).$$

It is known that $x_n \rightharpoonup \tilde{x}$, where $\tilde{x}$ is a solution of constrained convex minimization problem (4).
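A corresponding sketch of the regularized iteration, reusing `grad_f`, `proj_C` and `L` from the previous snippet; the vanishing schedule $\alpha_n = 1/(n+1)$ is an illustrative assumption.

```python
def regularized_gpa(x0, gamma, n_iter=500):
    """Regularization method: x_{n+1} = Proj_C (I - gamma*grad f_{alpha_n}) x_n,
    where grad f_alpha(x) = grad f(x) + alpha*x (Tikhonov term)."""
    x = x0
    for n in range(n_iter):
        alpha_n = 1.0 / (n + 1)                        # illustrative alpha_n -> 0
        x = proj_C(x - gamma * (grad_f(x) + alpha_n * x))
    return x
```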

Let $h:C\to H$ be a contraction with coefficient $\rho\in(0,1)$. In this section, we introduce the following scheme, which generates a net $\{x_{s,t_s}\}$ in an implicit way:

$$x_{s,t_s} = P_C\big[sh(x_{s,t_s}) + (1-s)T_{t_s}x_{s,t_s}\big],$$
(7)

where $0<\gamma<\frac{2}{L}$ and $t_s\in(0,\frac{2}{\gamma}-L)$. Let $T_{t_s}$ and $s$ satisfy the following conditions:

(i) $\lambda := \lambda(t_s) = \frac{2-\gamma(L+t_s)}{4}$;

(ii) $P_C(I-\gamma\nabla f_{t_s}) = \lambda I + (1-\lambda)T_{t_s}$.

Consider the mapping

$$Q_sx = P_C\big[sh(x) + (1-s)T_{t_s}x\big], \quad x\in C.$$
(8)

It is easy to see that $Q_s$ is a contraction. Indeed, we have

$$\begin{aligned} \|Q_sx - Q_sy\| &\le \big\|sh(x) + (1-s)T_{t_s}x - \big[sh(y) + (1-s)T_{t_s}y\big]\big\| \\ &\le \rho s\|x-y\| + (1-s)\|x-y\| \\ &= \big[1-(1-\rho)s\big]\|x-y\|. \end{aligned}$$

Hence, $Q_s$ has a unique fixed point in $C$, denoted by $x_{s,t_s}$, which uniquely solves fixed point equation (7).

We prove the strong convergence of $\{x_{s,t_s}\}_{t_s\in(0,\frac{2}{\gamma}-L)}$ as $s\to 0$ to a solution $x^*$ of minimization problem (4), which also solves the variational inequality

$$\langle (I-h)x^*, x^*-z\rangle \le 0, \quad \forall z\in S.$$
(9)

For an arbitrary initial guess $x_0\in C$ and a sequence $\{\alpha_n\}\subset(0,\frac{2}{\gamma}-L)$, we also propose the following scheme, which generates a sequence $\{x_n\}$ in an explicit way:

$$x_{n+1} = P_C\big[\theta_nh(x_n) + (1-\theta_n)T_nx_n\big], \quad n\ge 0,$$
(10)

where $0<\gamma<\frac{2}{L}$, $\lambda_n = \frac{2-\gamma(L+\alpha_n)}{4}$ and $P_C(I-\gamma\nabla f_{\alpha_n}) = \lambda_nI + (1-\lambda_n)T_n$ for each $n\ge 0$. It is proved that this sequence $\{x_n\}$ converges strongly to a minimizer $x^*\in S$ of (4).
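Before turning to the analysis, here is a hedged Python sketch of explicit scheme (10), continuing the toy problem from the previous snippets (it reuses `grad_f`, `proj_C` and `L`). The operator $T_n$ is never formed explicitly: since $P_C(I-\gamma\nabla f_{\alpha_n}) = \lambda_nI + (1-\lambda_n)T_n$, the vector $T_nx$ is recovered by computing $w = P_C(I-\gamma\nabla f_{\alpha_n})x$ and solving $T_nx = (w-\lambda_nx)/(1-\lambda_n)$. The schedules $\theta_n=1/(n+2)$, $\alpha_n=1/(n+2)^2$ and the contraction $h(x)=x/2$ are illustrative choices satisfying conditions (i)–(iv) of Theorem 3.2 below.

```python
def hybrid_scheme(x0, gamma, h, n_iter=500):
    """Explicit scheme (10): x_{n+1} = P_C[ theta_n h(x_n) + (1 - theta_n) T_n x_n ]."""
    x = x0
    for n in range(n_iter):
        theta_n = 1.0 / (n + 2)        # theta_n -> 0, sum theta_n = infinity
        alpha_n = 1.0 / (n + 2) ** 2   # alpha_n = o(theta_n)
        lam_n = (2.0 - gamma * (L + alpha_n)) / 4.0
        # w = P_C(I - gamma*grad f_{alpha_n}) x = lam_n*x + (1 - lam_n)*T_n x
        w = proj_C(x - gamma * (grad_f(x) + alpha_n * x))
        T_n_x = (w - lam_n * x) / (1.0 - lam_n)     # recover T_n x
        x = proj_C(theta_n * h(x) + (1.0 - theta_n) * T_n_x)
    return x

x_star = hybrid_scheme(np.zeros(5), gamma=1.0 / L, h=lambda x: 0.5 * x)
```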

3.1 Convergence of the implicit scheme

Proposition 3.1 If $0<\gamma<\frac{2}{L}$, $\alpha\in(0,\frac{2}{\gamma}-L)$ and $\nabla f$ is $\frac{1}{L}$-ism, then

$$\operatorname{Proj}_C(I-\gamma\nabla f_\alpha) = (1-\mu_\alpha)I + \mu_\alpha T_\alpha, \qquad \operatorname{Proj}_C(I-\gamma\nabla f) = (1-\mu)I + \mu T,$$

where $\mu_\alpha = \frac{2+\gamma(L+\alpha)}{4}$, $\mu = \frac{2+\gamma L}{4}$.

In addition, for $x\in C$,

$$\|T_\alpha x - Tx\| \le \alpha M(x),$$

where

$$M(x) = \gamma\big(5\|x\| + \|Tx\|\big).$$

Proof Since $\nabla f$ is $\frac{1}{L}$-ism, $\nabla f_\alpha = \nabla f + \alpha I$ is $\frac{1}{L+\alpha}$-ism, and so $\gamma\nabla f_\alpha$ is $\frac{1}{\gamma(L+\alpha)}$-ism. By Proposition 2.2, $I-\gamma\nabla f_\alpha$ is $\frac{\gamma(L+\alpha)}{2}$-averaged. Since $\operatorname{Proj}_C$ is $\frac{1}{2}$-averaged, Proposition 2.1 implies that $\operatorname{Proj}_C(I-\gamma\nabla f_\alpha)$ is $\mu_\alpha$-averaged, i.e.,

$$\operatorname{Proj}_C(I-\gamma\nabla f_\alpha) = (1-\mu_\alpha)I + \mu_\alpha T_\alpha,$$

where $\mu_\alpha = \frac{2+\gamma(L+\alpha)}{4}$. The same holds for $\operatorname{Proj}_C(I-\gamma\nabla f)$.

Hence,

$$\begin{aligned} \big\|(\mu-\mu_\alpha)x + \mu_\alpha T_\alpha x - \mu Tx\big\| &= \big\|\operatorname{Proj}_C(I-\gamma\nabla f_\alpha)x - \operatorname{Proj}_C(I-\gamma\nabla f)x\big\| \\ &\le \big\|(I-\gamma\nabla f_\alpha)x - (I-\gamma\nabla f)x\big\| \\ &= \gamma\big\|\nabla f_\alpha(x) - \nabla f(x)\big\| = \alpha\gamma\|x\|, \end{aligned}$$

and then

$$\|\mu_\alpha T_\alpha x - \mu Tx\| \le |\mu-\mu_\alpha|\,\|x\| + \alpha\gamma\|x\|.$$

Noting that $|\mu-\mu_\alpha| = \frac{\alpha\gamma}{4}$, we obtain

$$\|T_\alpha x - Tx\| \le \frac{\alpha\gamma(5\|x\|+\|Tx\|)}{2+\gamma(L+\alpha)} \le \alpha M(x),$$

where $M(x) = \gamma(5\|x\|+\|Tx\|)$. □

Proposition 3.2 Let $h:C\to H$ be a contraction with coefficient $\rho\in(0,1)$, let $0<\gamma<\frac{2}{L}$, and let $t_s$ be continuous with respect to $s$ with $t_s=o(s)$. Suppose that problem (4) is consistent, and let $S$ denote its solution set. For each $s\in(0,1)$, let $x_{s,t_s}$ denote the unique solution of fixed point equation (7). Then the following properties hold for the net $\{x_{s,t_s}\}$:

(i) $\{x_{s,t_s}\}_{t_s\in(0,\frac{2}{\gamma}-L)}$ is bounded;

(ii) $\lim_{s\to 0}\|x_{s,t_s} - T_{t_s}x_{s,t_s}\| = 0$;

(iii) $x_{s,t_s}$ defines a continuous curve from $(0,1)$ into $C$.

Proof (i) Take any $p\in S$; then

$$\|x_{s,t_s} - p\| = \big\|P_C\big[sh(x_{s,t_s}) + (1-s)T_{t_s}x_{s,t_s}\big] - P_Cp\big\|.$$

Therefore,

$$\begin{aligned} \|x_{s,t_s} - p\| &\le s\rho\|x_{s,t_s}-p\| + s\|h(p)-p\| + (1-s)\|T_{t_s}x_{s,t_s} - Tp\| \\ &\le s\rho\|x_{s,t_s}-p\| + s\|h(p)-p\| + (1-s)\big[\|T_{t_s}x_{s,t_s} - T_{t_s}p\| + \|T_{t_s}p - Tp\|\big] \\ &\le \big[1-(1-\rho)s\big]\|x_{s,t_s}-p\| + s\|h(p)-p\| + (1-s)t_sM(p), \end{aligned}$$

hence

$$\|x_{s,t_s} - p\| \le \frac{\|h(p)-p\|}{1-\rho} + (1-s)\frac{t_s}{s(1-\rho)}M(p).$$
(11)

So, $\{x_{s,t_s}\}$ is bounded.

(ii) We have

$$\|x_{s,t_s} - T_{t_s}x_{s,t_s}\| \le s\big\|h(x_{s,t_s}) - T_{t_s}x_{s,t_s}\big\| \to 0 \quad (s\to 0).$$

(iii) Take $s,s_0\in(0,1)$ and calculate

$$\begin{aligned} \|x_{s,t_s} - x_{s_0,t_{s_0}}\| &\le \big\|sh(x_{s,t_s}) + (1-s)T_{t_s}x_{s,t_s} - \big[s_0h(x_{s_0,t_{s_0}}) + (1-s_0)T_{t_{s_0}}x_{s_0,t_{s_0}}\big]\big\| \\ &= \big\|s\big(h(x_{s,t_s}) - h(x_{s_0,t_{s_0}})\big) + (s-s_0)\big[h(x_{s_0,t_{s_0}}) - T_{t_{s_0}}x_{s_0,t_{s_0}}\big] \\ &\quad + (1-s)\big[T_{t_s}x_{s,t_s} - T_{t_s}x_{s_0,t_{s_0}}\big] + (1-s)\big[T_{t_s}x_{s_0,t_{s_0}} - T_{t_{s_0}}x_{s_0,t_{s_0}}\big]\big\| \\ &\le \big[1-(1-\rho)s\big]\|x_{s,t_s} - x_{s_0,t_{s_0}}\| + (1-s)\big\|T_{t_s}x_{s_0,t_{s_0}} - T_{t_{s_0}}x_{s_0,t_{s_0}}\big\| \\ &\quad + |s-s_0|\,\big\|h(x_{s_0,t_{s_0}}) - T_{t_{s_0}}x_{s_0,t_{s_0}}\big\|, \end{aligned}$$
(12)

and, writing $T_{t_s} = \frac{4P_C(I-\gamma\nabla f_{t_s}) - [2-\gamma(L+t_s)]I}{2+\gamma(L+t_s)}$,

$$\begin{aligned} \big\|T_{t_s}x_{s_0,t_{s_0}} - T_{t_{s_0}}x_{s_0,t_{s_0}}\big\| &= \bigg\|\frac{4P_C(I-\gamma\nabla f_{t_s}) - [2-\gamma(L+t_s)]I}{2+\gamma(L+t_s)}x_{s_0,t_{s_0}} - \frac{4P_C(I-\gamma\nabla f_{t_{s_0}}) - [2-\gamma(L+t_{s_0})]I}{2+\gamma(L+t_{s_0})}x_{s_0,t_{s_0}}\bigg\| \\ &\le \bigg\|\frac{4P_C(I-\gamma\nabla f_{t_s})}{2+\gamma(L+t_s)}x_{s_0,t_{s_0}} - \frac{4P_C(I-\gamma\nabla f_{t_{s_0}})}{2+\gamma(L+t_{s_0})}x_{s_0,t_{s_0}}\bigg\| + \bigg\|\frac{2-\gamma(L+t_s)}{2+\gamma(L+t_s)}x_{s_0,t_{s_0}} - \frac{2-\gamma(L+t_{s_0})}{2+\gamma(L+t_{s_0})}x_{s_0,t_{s_0}}\bigg\| \\ &\le \frac{\big\|4[2+\gamma(L+t_{s_0})]P_C(I-\gamma\nabla f_{t_s})x_{s_0,t_{s_0}} - 4[2+\gamma(L+t_s)]P_C(I-\gamma\nabla f_{t_{s_0}})x_{s_0,t_{s_0}}\big\|}{[2+\gamma(L+t_s)][2+\gamma(L+t_{s_0})]} + \frac{4\gamma|t_s-t_{s_0}|\,\|x_{s_0,t_{s_0}}\|}{[2+\gamma(L+t_s)][2+\gamma(L+t_{s_0})]} \\ &\le \frac{4\gamma|t_{s_0}-t_s|\,\big\|P_C(I-\gamma\nabla f_{t_s})x_{s_0,t_{s_0}}\big\|}{[2+\gamma(L+t_s)][2+\gamma(L+t_{s_0})]} + \frac{4[2+\gamma(L+t_s)]\,\big\|P_C(I-\gamma\nabla f_{t_s})x_{s_0,t_{s_0}} - P_C(I-\gamma\nabla f_{t_{s_0}})x_{s_0,t_{s_0}}\big\|}{[2+\gamma(L+t_s)][2+\gamma(L+t_{s_0})]} \\ &\quad + \frac{4\gamma|t_s-t_{s_0}|\,\|x_{s_0,t_{s_0}}\|}{[2+\gamma(L+t_s)][2+\gamma(L+t_{s_0})]} \\ &\le M_1|t_s-t_{s_0}|. \end{aligned}$$
(13)

So, by (12) and (13),

$$\|x_{s,t_s} - x_{s_0,t_{s_0}}\| \to 0 \quad (s\to s_0).$$

 □

Theorem 3.1 Assume that minimization problem (4) is consistent, and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is $\frac{1}{L}$-ism. Let $h:C\to H$ be a $\rho$-contraction with $\rho\in[0,1)$, and let

$$x_{s,t_s} = P_C\big[sh(x_{s,t_s}) + (1-s)T_{t_s}x_{s,t_s}\big],$$

where $0<\gamma<\frac{2}{L}$, $t_s\in(0,\frac{2}{\gamma}-L)$ and $t_s=o(s)$. Let $T_{t_s}$ satisfy the following conditions:

(i) $\lambda := \lambda(t_s) = \frac{2-\gamma(L+t_s)}{4}$;

(ii) $P_C(I-\gamma\nabla f_{t_s}) = \lambda I + (1-\lambda)T_{t_s}$.

Then the net $\{x_{s,t_s}\}$ converges strongly as $s\to 0$ to a minimizer $x^*$ of problem (4), which is also the unique solution of the variational inequality

$$x^*\in S, \quad \langle (I-h)x^*, x-x^*\rangle \ge 0, \quad \forall x\in S.$$

Proof Set $\operatorname{Proj}_C(I-\gamma\nabla f) = (1-\tau)I + \tau T$ with $\tau = \frac{2+\gamma L}{4}$, and let $y_{s,t_s} = sh(x_{s,t_s}) + (1-s)T_{t_s}x_{s,t_s}$, $t_s\in(0,\frac{2}{\gamma}-L)$.

We then have $x_{s,t_s} = P_Cy_{s,t_s}$. For any given $z\in S$, $z = P_C(I-\gamma\nabla f)z$ and $Tz=z$, so we obtain

$$\begin{aligned} x_{s,t_s} - z &= P_Cy_{s,t_s} - y_{s,t_s} + y_{s,t_s} - z \\ &= P_Cy_{s,t_s} - y_{s,t_s} + sh(x_{s,t_s}) + (1-s)T_{t_s}x_{s,t_s} - Tz \\ &= P_Cy_{s,t_s} - y_{s,t_s} + s\big[h(x_{s,t_s}) - h(z)\big] + s\big[h(z) - Tz\big] + (1-s)\big[T_{t_s}x_{s,t_s} - Tz\big]. \end{aligned}$$

Next we prove that $\{x_{s,t_s}\}$ converges strongly to an $x^*\in S$ which is the unique solution of the variational inequality. Using Lemma 2.1(i) to drop the projection term, we have

$$\begin{aligned} \|x_{s,t_s} - z\|^2 &= \langle P_Cy_{s,t_s} - y_{s,t_s}, P_Cy_{s,t_s} - z\rangle + s\langle h(x_{s,t_s}) - h(z), x_{s,t_s} - z\rangle \\ &\quad + s\langle h(z) - Tz, x_{s,t_s} - z\rangle + (1-s)\langle T_{t_s}x_{s,t_s} - Tz, x_{s,t_s} - z\rangle \\ &\le s\rho\|x_{s,t_s} - z\|^2 + s\langle h(z) - Tz, x_{s,t_s} - z\rangle + (1-s)\langle T_{t_s}x_{s,t_s} - Tz, x_{s,t_s} - z\rangle \\ &\le \big[1-(1-\rho)s\big]\|x_{s,t_s} - z\|^2 + s\langle h(z) - Tz, x_{s,t_s} - z\rangle + (1-s)t_sM(z)\|x_{s,t_s} - z\|. \end{aligned}$$

So,

$$\|x_{s,t_s} - z\|^2 \le \frac{\langle h(z) - Tz, x_{s,t_s} - z\rangle}{1-\rho} + \frac{(1-s)t_sM(z)\|x_{s,t_s} - z\|}{s(1-\rho)}.$$
(14)

Consequently, if $x_{s_n,t_{s_n}} \rightharpoonup p$, then (14) implies $x_{s_n,t_{s_n}} \to p$. Next, we prove that

$$\big\|x_{s,t_s} - \operatorname{Proj}_C(I-\gamma\nabla f)x_{s,t_s}\big\| \to 0.$$

Since

$$\big\|\operatorname{Proj}_C(I-\gamma\nabla f_{t_s})x_{s,t_s} - x_{s,t_s}\big\| = \big\|\lambda x_{s,t_s} + (1-\lambda)T_{t_s}(x_{s,t_s}) - x_{s,t_s}\big\| \le \big\|T_{t_s}(x_{s,t_s}) - x_{s,t_s}\big\|,$$

we get

$$\big\|x_{s,t_s} - \operatorname{Proj}_C(I-\gamma\nabla f)x_{s,t_s}\big\| \le \gamma t_s\|x_{s,t_s}\| + \big\|T_{t_s}(x_{s,t_s}) - x_{s,t_s}\big\| \to 0.$$

Finally, we prove that $x_{s,t_s} \to x^*\in S$, the unique solution of the variational inequality. We only need to prove that if $x_{s_n,t_{s_n}} \to \tilde{x}$, then

$$\langle (I-h)\tilde{x}, x-\tilde{x}\rangle \ge 0, \quad \forall x\in S.$$

Suppose that $x_{s_n,t_{s_n}} \rightharpoonup \tilde{x}$. By Lemma 2.4 and $\|x_{s,t_s} - \operatorname{Proj}_C(I-\gamma\nabla f)x_{s,t_s}\| \to 0$, we get $\tilde{x} = \operatorname{Proj}_C(I-\gamma\nabla f)\tilde{x}$, and it follows that $\tilde{x}\in S$. Note that $x_{s_n,t_{s_n}} \to \tilde{x}$ by (14). From the definition

$$x_{s,t_s} = P_C\big[sh(x_{s,t_s}) + (1-s)T_{t_s}x_{s,t_s}\big],$$

we have

$$(I-h)(x_{s_n,t_{s_n}}) = \frac{1}{s_n}\big(P_Cy_{s_n,t_{s_n}} - y_{s_n,t_{s_n}}\big) - \frac{1}{s_n}\big[(I-T_{t_{s_n}})x_{s_n,t_{s_n}}\big] + \big[x_{s_n,t_{s_n}} - T_{t_{s_n}}x_{s_n,t_{s_n}}\big].$$

So, for $z\in S$, using Lemma 2.1(i) to drop the projection term,

$$\begin{aligned} \big\langle (I-h)(x_{s_n,t_{s_n}}), x_{s_n,t_{s_n}} - z\big\rangle &= \frac{1}{s_n}\big\langle P_Cy_{s_n,t_{s_n}} - y_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\big\rangle \\ &\quad - \frac{1}{s_n}\big\langle (I-T_{t_{s_n}})x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\big\rangle + \big\langle (I-T_{t_{s_n}})x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\big\rangle \\ &\le -\frac{1}{s_n}\big\langle (I-T)x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\big\rangle - \frac{1}{s_n}\big\langle (T-T_{t_{s_n}})x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\big\rangle \\ &\quad + \big\langle x_{s_n,t_{s_n}} - T_{t_{s_n}}x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\big\rangle. \end{aligned}$$

Since $I-T$ is monotone and $(I-T)z=0$, the first term on the right is nonpositive; since $t_s=o(s)$, $\frac{1}{s_n}\|(T-T_{t_{s_n}})x_{s_n,t_{s_n}}\| \le \frac{t_{s_n}}{s_n}M(x_{s_n,t_{s_n}}) \to 0$; and $\|x_{s_n,t_{s_n}} - T_{t_{s_n}}x_{s_n,t_{s_n}}\| \to 0$. Letting $n\to\infty$, we obtain

$$\langle (I-h)\tilde{x}, \tilde{x}-z\rangle = \lim_{n\to\infty}\big\langle (I-h)(x_{s_n,t_{s_n}}), x_{s_n,t_{s_n}} - z\big\rangle \le 0.$$
(15)

So, $x_{s,t_s} \to x^*\in S$, which is also the unique solution of the variational inequality. □

3.2 Convergence of the explicit scheme

Theorem 3.2 Assume that minimization problem (4) is consistent, and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is $\frac{1}{L}$-ism. Let $h:C\to H$ be a $\rho$-contraction with $\rho\in[0,1)$. Let the sequence $\{x_n\}_{n=0}^{\infty}$ be generated by the following hybrid gradient-projection algorithm:

$$x_{n+1} = P_C\big[\theta_nh(x_n) + (1-\theta_n)T_n(x_n)\big], \quad n=0,1,2,\ldots,$$
(16)

where $0<\gamma<\frac{2}{L}$, $P_C[I-\gamma\nabla f_{\alpha_n}] = \lambda_nI + (1-\lambda_n)T_n$ and $\lambda_n = \frac{2-\gamma(L+\alpha_n)}{4}$, and, in addition, assume that the following conditions are satisfied for $\{\theta_n\}_{n=0}^{\infty}$ and $\{\alpha_n\}_{n=0}^{\infty}$:

(i) $\theta_n \to 0$ and $\alpha_n = o(\theta_n)$;

(ii) $\sum_{n=0}^{\infty}\theta_n = \infty$;

(iii) $\sum_{n=0}^{\infty}|\theta_{n+1}-\theta_n| < \infty$;

(iv) $\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_n| < \infty$.

Then the sequence $\{x_n\}_{n=0}^{\infty}$ converges in norm to a minimizer $x^*$ of (4), which is also the unique solution of the variational inequality (VI)

$$x^*\in S, \quad \langle (I-h)x^*, x-x^*\rangle \ge 0, \quad \forall x\in S.$$

In other words, $x^*$ is the unique fixed point of the contraction $\operatorname{Proj}_Sh$:

$$x^* = \operatorname{Proj}_Sh(x^*).$$

Proof (1) We first prove that $\{x_n\}_{n=0}^{\infty}$ is bounded. Set $\operatorname{Proj}_C(I-\gamma\nabla f) = (1-\tau)I + \tau T$ with $\tau = \frac{2+\gamma L}{4}$. Indeed, for $\tilde{x}\in S$ (so that $T\tilde{x}=\tilde{x}$), we have

$$\begin{aligned} \|x_{n+1} - \tilde{x}\| &= \big\|P_C\big[\theta_nh(x_n) + (1-\theta_n)T_n(x_n)\big] - P_C\tilde{x}\big\| \\ &\le \big\|\theta_nh(x_n) + (1-\theta_n)T_n(x_n) - \tilde{x}\big\| \\ &= \big\|\theta_n\big(h(x_n) - h(\tilde{x})\big) + \theta_n\big(h(\tilde{x}) - \tilde{x}\big) + (1-\theta_n)\big(T_n(x_n) - \tilde{x}\big)\big\| \\ &\le \theta_n\rho\|x_n - \tilde{x}\| + \theta_n\|h(\tilde{x}) - \tilde{x}\| + (1-\theta_n)\big[\|x_n - \tilde{x}\| + \|T_n(\tilde{x}) - T(\tilde{x})\|\big] \\ &\le \big(1-(1-\rho)\theta_n\big)\|x_n - \tilde{x}\| + \theta_n\|h(\tilde{x}) - \tilde{x}\| + \alpha_nM(\tilde{x}) \\ &\le \max\Big\{\|x_n - \tilde{x}\|,\ \frac{1}{1-\rho}\big[\|h(\tilde{x}) - \tilde{x}\| + M(\tilde{x})\big]\Big\}, \end{aligned}$$

where the last step holds whenever $\alpha_n\le\theta_n$, which is eventually the case since $\alpha_n=o(\theta_n)$. So, $\{x_n\}$ is bounded.

(2) Next we prove that $\|x_{n+1}-x_n\| \to 0$ as $n\to\infty$. We have

$$\begin{aligned} \|x_{n+1} - x_n\| &= \big\|P_C\big[\theta_nh(x_n) + (1-\theta_n)T_nx_n\big] - P_C\big[\theta_{n-1}h(x_{n-1}) + (1-\theta_{n-1})T_{n-1}x_{n-1}\big]\big\| \\ &\le \big\|\theta_n\big(h(x_n) - h(x_{n-1})\big) + (1-\theta_n)\big(T_nx_n - T_nx_{n-1}\big) + (\theta_n-\theta_{n-1})\big(h(x_{n-1}) - T_nx_{n-1}\big) \\ &\quad + (1-\theta_{n-1})\big(T_nx_{n-1} - T_{n-1}x_{n-1}\big)\big\| \\ &\le \big[1-(1-\rho)\theta_n\big]\|x_n - x_{n-1}\| + M_2|\theta_n - \theta_{n-1}| + (1-\theta_{n-1})\|T_nx_{n-1} - T_{n-1}x_{n-1}\|, \end{aligned}$$

but, arguing exactly as in (13) with $T_n = \frac{4P_C(I-\gamma\nabla f_{\alpha_n}) - [2-\gamma(L+\alpha_n)]I}{2+\gamma(L+\alpha_n)}$,

$$\begin{aligned} \|T_nx_{n-1} - T_{n-1}x_{n-1}\| &\le \frac{4\gamma|\alpha_{n-1}-\alpha_n|\,\|P_C(I-\gamma\nabla f_{\alpha_n})x_{n-1}\|}{[2+\gamma(L+\alpha_n)][2+\gamma(L+\alpha_{n-1})]} \\ &\quad + \frac{4[2+\gamma(L+\alpha_n)]\,\|P_C(I-\gamma\nabla f_{\alpha_n})x_{n-1} - P_C(I-\gamma\nabla f_{\alpha_{n-1}})x_{n-1}\|}{[2+\gamma(L+\alpha_n)][2+\gamma(L+\alpha_{n-1})]} \\ &\quad + \frac{4\gamma|\alpha_n-\alpha_{n-1}|\,\|x_{n-1}\|}{[2+\gamma(L+\alpha_n)][2+\gamma(L+\alpha_{n-1})]} \\ &\le |\alpha_{n-1}-\alpha_n|\big[\gamma\|P_C(I-\gamma\nabla f_{\alpha_n})x_{n-1}\| + 5\gamma\|x_{n-1}\|\big] \\ &\le M_3|\alpha_{n-1}-\alpha_n|. \end{aligned}$$

So,

$$\|x_{n+1}-x_n\| \le \big[1-(1-\rho)\theta_n\big]\|x_n-x_{n-1}\| + M_2|\theta_n-\theta_{n-1}| + M_3|\alpha_{n-1}-\alpha_n|,$$

and, by Lemma 2.5,

$$\|x_{n+1}-x_n\| \to 0.$$

(3) Next we show that $\|x_n - T_nx_n\| \to 0$. Indeed, it follows that

$$\begin{aligned} \|x_n - T_nx_n\| &\le \|x_n - x_{n+1}\| + \|x_{n+1} - T_nx_n\| \\ &= \|x_n - x_{n+1}\| + \big\|P_C\big[\theta_nh(x_n) + (1-\theta_n)T_n(x_n)\big] - P_CT_n(x_n)\big\| \\ &\le \|x_n - x_{n+1}\| + \theta_n\|h(x_n) - T_nx_n\| \to 0. \end{aligned}$$

Now we show that

$$\limsup_{n\to\infty}\langle h(x^*) - x^*, x_n - x^*\rangle \le 0.$$

Let $\{x_{n_k}\}$ be a subsequence with $x_{n_k} \rightharpoonup \tilde{x}$ along which the above limsup is attained, and observe that

$$\big\|P_C(I-\gamma\nabla f_{\alpha_n})x_n - x_n\big\| = \big\|\lambda_nx_n + (1-\lambda_n)T_nx_n - x_n\big\| = (1-\lambda_n)\|T_nx_n - x_n\| \le \|T_nx_n - x_n\|,$$

hence we have

$$\begin{aligned} \big\|P_C(I-\gamma\nabla f)x_n - x_n\big\| &\le \big\|P_C(I-\gamma\nabla f)x_n - P_C(I-\gamma\nabla f_{\alpha_n})x_n\big\| + \big\|P_C(I-\gamma\nabla f_{\alpha_n})x_n - x_n\big\| \\ &\le \gamma\alpha_n\|x_n\| + \|T_nx_n - x_n\| \to 0. \end{aligned}$$

So

$$\lim_{n\to\infty}\big\|P_C(I-\gamma\nabla f)x_n - x_n\big\| = 0,$$

and, by Lemma 2.4,

$$\tilde{x} = P_C(I-\gamma\nabla f)(\tilde{x}),$$

that is, $\tilde{x}\in S$. Since $x^*$ solves the variational inequality over $S$, this shows that

$$\limsup_{n\to\infty}\langle h(x^*) - x^*, x_n - x^*\rangle = \langle h(x^*) - x^*, \tilde{x} - x^*\rangle \le 0.$$

It follows that

$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \big\|P_C\big[\theta_nh(x_n) + (1-\theta_n)T_n(x_n)\big] - P_Cx^*\big\|^2 \\ &\le \big\|\theta_n\big(h(x_n) - x^*\big) + (1-\theta_n)\big(T_nx_n - Tx^*\big)\big\|^2 \\ &= \big\|\theta_n\big(h(x_n) - h(x^*)\big) + (1-\theta_n)\big(T_nx_n - Tx^*\big) + \theta_n\big(h(x^*) - x^*\big)\big\|^2 \\ &\le \big\|\theta_n\big(h(x_n) - h(x^*)\big) + (1-\theta_n)\big(T_nx_n - Tx^*\big)\big\|^2 + 2\theta_n\langle h(x^*) - x^*, x_{n+1} - x^*\rangle \\ &\le \theta_n\|h(x_n) - h(x^*)\|^2 + (1-\theta_n)\|T_nx_n - Tx^*\|^2 + 2\theta_n\langle h(x^*) - x^*, x_{n+1} - x^*\rangle \\ &\le \theta_n\rho^2\|x_n - x^*\|^2 + (1-\theta_n)\big(\|T_nx_n - T_nx^*\| + \|T_nx^* - Tx^*\|\big)^2 + 2\theta_n\langle h(x^*) - x^*, x_{n+1} - x^*\rangle \\ &\le \theta_n\rho^2\|x_n - x^*\|^2 + (1-\theta_n)\big[\|x_n - x^*\| + \alpha_nM(x^*)\big]^2 + 2\theta_n\langle h(x^*) - x^*, x_{n+1} - x^*\rangle \\ &\le \big[1-(1-\rho)\theta_n\big]\|x_n - x^*\|^2 + (1-\theta_n)\big[2\alpha_nM(x^*)\|x_n - x^*\| + \alpha_n^2\big(M(x^*)\big)^2\big] \\ &\quad + 2\theta_n\langle h(x^*) - x^*, x_{n+1} - x^*\rangle. \end{aligned}$$

Hence,

$$\|x_{n+1} - x^*\|^2 \le (1-\beta_n)\|x_n - x^*\|^2 + \beta_n\delta_n, \qquad \beta_n = (1-\rho)\theta_n,$$

$$\delta_n = \frac{1}{1-\rho}\bigg[\frac{\alpha_n}{\theta_n}(1-\theta_n)\,2M(x^*)\|x_n - x^*\| + (1-\theta_n)\big(M(x^*)\big)^2\frac{\alpha_n^2}{\theta_n} + 2\langle h(x^*) - x^*, x_{n+1} - x^*\rangle\bigg].$$
(17)

Since $\lim_{n\to\infty}\beta_n = 0$, $\sum_{n=0}^{\infty}\beta_n = \infty$ and $\limsup_{n\to\infty}\delta_n \le 0$, Lemma 2.5 gives $x_n \to x^*$. □

4 Application of the iterative method

Next, we give an application of Theorem 3.2 to the split feasibility problem (SFP, for short), which was introduced by Censor and Elfving [15]:

Find $x\in C$ such that $Ax\in Q$,
(18)

where $C$ and $Q$ are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A:H_1\to H_2$ is a bounded linear operator.

It is clear that $x^*$ is a solution of split feasibility problem (18) if and only if $x^*\in C$ and $\|Ax^* - P_QAx^*\| = 0$.

We define the proximity function $f$ by

$$f(x) = \frac{1}{2}\|Ax - P_QAx\|^2,$$

and consider the convex optimization problem

$$\min_{x\in C} f(x) = \min_{x\in C}\frac{1}{2}\|Ax - P_QAx\|^2.$$
(19)

Then $x^*$ solves split feasibility problem (18) if and only if $x^*$ solves minimization problem (19) with minimum value equal to 0. Byrne [16] introduced the so-called CQ algorithm to solve the SFP:

$$x_{n+1} = P_C\big(I - \mu A^*(I-P_Q)A\big)x_n, \quad n\ge 0,$$
(20)

where $0<\mu<\frac{2}{\|A^*A\|} = \frac{2}{\|A\|^2}$.

He proved that the sequence $\{x_n\}$ generated by (20) converges weakly to a solution of the SFP.
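A minimal sketch of CQ iteration (20), with illustrative box sets $C$ and $Q$ so that both projections are componentwise clips; the random matrix $A$ stands in for the bounded linear operator and, like the sets, is an assumption of the example (it also redefines the toy projections for this problem).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))          # illustrative bounded linear operator
mu = 1.0 / np.linalg.norm(A.T @ A, 2)    # satisfies 0 < mu < 2/||A^* A||

def proj_C(x):
    return np.clip(x, -1.0, 1.0)         # C = [-1, 1]^5 (assumed)

def proj_Q(y):
    return np.clip(y, 0.0, 2.0)          # Q = [0, 2]^8 (assumed)

def cq_algorithm(x0, n_iter=1000):
    """CQ algorithm (20): x_{n+1} = P_C( x_n - mu * A^T (I - P_Q) A x_n )."""
    x = x0
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - mu * (A.T @ (Ax - proj_Q(Ax))))
    return x

x_weak = cq_algorithm(np.zeros(5))
```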

Now we consider the regularization technique. Let

$$f_\alpha(x) = \frac{1}{2}\|Ax - P_QAx\|^2 + \frac{\alpha}{2}\|x\|^2;$$

then we establish the following iterative scheme:

$$x_{n+1} = P_C\big[\theta_nh(x_n) + (1-\theta_n)T_nx_n\big], \quad n\ge 0,$$

where $h:C\to H_1$ is a contraction with coefficient $\rho\in(0,1)$, and

$$P_C\big[I - \mu\big(A^*(I-P_Q)A + \alpha_nI\big)\big] = \lambda_nI + (1-\lambda_n)T_n, \qquad \lambda_n = \frac{2-\mu(\|A\|^2+\alpha_n)}{4}.$$

Applying Theorem 3.2, we obtain the following result.

Theorem 4.1 Assume that split feasibility problem (18) is consistent. Let the sequence $\{x_n\}$ be generated by

$$x_{n+1} = P_C\big[\theta_nh(x_n) + (1-\theta_n)T_nx_n\big], \quad n\ge 0,$$

where

$$P_C\big[I - \mu\big(A^*(I-P_Q)A + \alpha_nI\big)\big] = \lambda_nI + (1-\lambda_n)T_n, \qquad \lambda_n = \frac{2-\mu(\|A\|^2+\alpha_n)}{4},$$

and the sequences $\{\theta_n\}\subset(0,1)$ and $\{\alpha_n\}$ satisfy the following conditions:

(i) $\theta_n \to 0$ and $0<\mu<\frac{2}{\|A^*A\|} = \frac{2}{\|A\|^2}$;

(ii) $\sum_{n=0}^{\infty}\theta_n = \infty$;

(iii) $\sum_{n=0}^{\infty}|\theta_{n+1}-\theta_n| < \infty$;

(iv) $\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_n| < \infty$;

(v) $\alpha_n = o(\theta_n)$.

Then the sequence $\{x_n\}$ converges strongly to a solution of split feasibility problem (18).

Proof By the definition of the proximity function $f$, we have

$$\nabla f(x) = A^*(I-\operatorname{Proj}_Q)Ax,$$

and $\nabla f$ is $\frac{1}{\|A\|^2}$-ism. Indeed, since $\operatorname{Proj}_Q$ is a $\frac{1}{2}$-averaged mapping, $I-\operatorname{Proj}_Q$ is 1-ism, so

$$\begin{aligned} \big\langle \nabla f(x) - \nabla f(y), x-y\big\rangle &- \frac{1}{\|A\|^2}\big\|\nabla f(x) - \nabla f(y)\big\|^2 \\ &= \big\langle A^*(I-\operatorname{Proj}_Q)Ax - A^*(I-\operatorname{Proj}_Q)Ay, x-y\big\rangle \\ &\quad - \frac{1}{\|A\|^2}\big\|A^*\big[(I-\operatorname{Proj}_Q)Ax - (I-\operatorname{Proj}_Q)Ay\big]\big\|^2 \\ &\ge \big\langle (I-\operatorname{Proj}_Q)Ax - (I-\operatorname{Proj}_Q)Ay, Ax-Ay\big\rangle - \big\|(I-\operatorname{Proj}_Q)Ax - (I-\operatorname{Proj}_Q)Ay\big\|^2 \\ &\ge \big\|(I-\operatorname{Proj}_Q)Ax - (I-\operatorname{Proj}_Q)Ay\big\|^2 - \big\|(I-\operatorname{Proj}_Q)Ax - (I-\operatorname{Proj}_Q)Ay\big\|^2 = 0. \end{aligned}$$

Hence, $\langle \nabla f(x)-\nabla f(y), x-y\rangle \ge \frac{1}{\|A\|^2}\|\nabla f(x)-\nabla f(y)\|^2$.

Set $f_{\alpha_n}(x) = f(x) + \frac{\alpha_n}{2}\|x\|^2$; consequently,

$$\nabla f_{\alpha_n}(x) = \nabla f(x) + \alpha_nx = A^*(I-\operatorname{Proj}_Q)Ax + \alpha_nx.$$

Let $\gamma=\mu$ and $L=\|A\|^2$; then the iterative scheme is exactly scheme (16),

$$x_{n+1} = P_C\big[\theta_nh(x_n) + (1-\theta_n)T_nx_n\big], \quad n\ge 0,$$

where $0<\gamma<\frac{2}{L}$, $P_C[I-\gamma\nabla f_{\alpha_n}] = \lambda_nI + (1-\lambda_n)T_n$ and $\lambda_n = \frac{2-\gamma(L+\alpha_n)}{4}$.

The conclusion now follows immediately from Theorem 3.2. □
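To close the section, here is a hedged sketch of the scheme of Theorem 4.1, reusing `A`, `mu`, `proj_C` and `proj_Q` from the CQ snippet above; as before, $T_nx$ is recovered from the averaged decomposition, and the schedules $\theta_n$, $\alpha_n$ and the contraction $h$ are illustrative choices satisfying conditions (i)–(v).

```python
def regularized_cq(x0, h, n_iter=1000):
    """Scheme of Theorem 4.1: x_{n+1} = P_C[ theta_n h(x_n) + (1 - theta_n) T_n x_n ],
    with grad f_{alpha_n}(x) = A^T (I - P_Q) A x + alpha_n x."""
    L = np.linalg.norm(A.T @ A, 2)               # L = ||A||^2
    x = x0
    for n in range(n_iter):
        theta_n = 1.0 / (n + 2)                  # theta_n -> 0, sum = infinity
        alpha_n = 1.0 / (n + 2) ** 2             # alpha_n = o(theta_n)
        lam_n = (2.0 - mu * (L + alpha_n)) / 4.0
        Ax = A @ x
        w = proj_C(x - mu * (A.T @ (Ax - proj_Q(Ax)) + alpha_n * x))
        T_n_x = (w - lam_n * x) / (1.0 - lam_n)  # solve the averaged decomposition
        x = proj_C(theta_n * h(x) + (1.0 - theta_n) * T_n_x)
    return x

x_strong = regularized_cq(np.zeros(5), h=lambda x: 0.5 * x)  # h is a 0.5-contraction
```

Unlike the CQ iterates, which in general converge only weakly, Theorem 4.1 guarantees that these iterates converge in norm.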

Author’s contributions

The author read and approved the final manuscript.

References

1. Levitin ES, Polyak BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 1966, 6: 787–823.

2. Calamai PH, Moré JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. doi:10.1007/BF02592073

3. Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.

4. Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.

5. Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241: 46–55. doi:10.1006/jmaa.1999.6615

6. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. doi:10.1016/j.jmaa.2004.04.059

7. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. TMA 2011, 74: 5286–5302. doi:10.1016/j.na.2011.05.005

8. Marino G, Xu HK: Convergence of generalized proximal point algorithms. Commun. Pure Appl. Anal. 2004, 3: 791–808.

9. Marino G, Xu HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318: 43–52. doi:10.1016/j.jmaa.2005.05.028

10. Tian M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. TMA 2010, 73: 689–694. doi:10.1016/j.na.2010.03.058

11. Yao Y, Liou YC, Chen CP: Algorithms construction for nonexpansive mappings and inverse-strongly monotone mappings. Taiwan. J. Math. 2011, 15: 1979–1998.

12. Yao Y, Chen R, Liou YC: A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem. Math. Comput. Model. 2012, 55: 1506–1515. doi:10.1016/j.mcm.2011.10.041

13. Yao Y, Liou YC, Kang SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. 2013. doi:10.1007/s10898-011-9804-0

14. Wiyada K, Praairat J, Poom K: Generalized systems of variational inequalities and projection methods for inverse-strongly monotone mappings. Discrete Dyn. Nat. Soc. 2011. doi:10.1155/2011/976505

15. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. doi:10.1007/BF02142692

16. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. doi:10.1088/0266-5611/20/1/006

17. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. doi:10.1088/0266-5611/21/6/017

18. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. doi:10.1088/0031-9155/51/10/001

19. Xu HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. doi:10.1088/0266-5611/22/6/007

20. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.

21. Lopez G, Martin V, Xu HK: Perturbation techniques for nonexpansive mappings with applications. Nonlinear Anal., Real World Appl. 2009, 10: 2369–2383. doi:10.1016/j.nonrwa.2008.04.020

22. Lopez G, Martin V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2009: 243–279.

23. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. doi:10.1007/s10957-011-9837-z

24. Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. doi:10.1007/s10898-006-9002-7

25. Cho YJ, Petrot N: Regularization and iterative method for general variational inequality problem in Hilbert spaces. J. Inequal. Appl. 2011, 2011: Article ID 21.

26. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5–6): 475–504. doi:10.1080/02331930412331327157

27. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.


Acknowledgements

The author thanks the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. 3122013K004).

Author information

Correspondence to Ming Tian.


Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Tian, M. General iterative scheme based on the regularization for solving a constrained convex minimization problem. J Inequal Appl 2013, 550 (2013). https://doi.org/10.1186/1029-242X-2013-550