
On over-relaxed (A, η, m)-proximal point algorithm frameworks with errors and applications to general variational inclusion problems

Journal of Inequalities and Applications 2013, 2013:97

https://doi.org/10.1186/1029-242X-2013-97

Received: 7 July 2012

Accepted: 6 January 2013

Published: 12 March 2013

Abstract

The purpose of this paper is to provide some remarks on the main results of the paper of Verma (Appl. Math. Lett. 21:142-147, 2008). Further, by using the generalized proximal operator technique associated with (A, η, m)-monotone operators, we discuss the approximation solvability of general variational inclusion problems in Hilbert spaces and the convergence analysis of the iterative sequences generated by over-relaxed (A, η, m)-proximal point algorithm frameworks with errors, which generalize the hybrid proximal point algorithm frameworks due to Verma.

MSC: 47H05, 49J40.

Keywords

(A, η, m)-monotonicity; generalized proximal operator technique; over-relaxed (A, η, m)-proximal point algorithm frameworks with errors; general variational inclusion problem; convergence analysis

1 Introduction

In 2008, Verma [1] developed a general framework for a hybrid proximal point algorithm using the notion of (A, η)-monotonicity (also referred to in the literature as (A, η)-maximal monotonicity or (A, η, m)-monotonicity), established some results on the resolvent operator corresponding to (A, η)-monotonicity, and explored the convergence of this algorithm in the context of solving the following variational inclusion problem: find a solution to
$$0 \in M(x),$$
(1.1)

where $M: \mathcal{H} \to 2^{\mathcal{H}}$ is a set-valued mapping on a real Hilbert space $\mathcal{H}$.

We remark that problem (1.1) provides a general and unified framework for studying a wide range of interesting and important problems arising in mathematics, physics, the engineering sciences, economics, finance, etc. For more details, see [1–14] and the following example.

Example 1.1 [5]

Let $V: \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function, and let $K$ be a closed convex set in $\mathbb{R}^n$. If $x \in \mathbb{R}^n$ is a solution to the following problem:
$$\min_{x \in K} V(x),$$
then
$$0 \in \partial V(x) + N_K(x),$$

where $\partial V(x)$ denotes the subdifferential of $V$ at $x$, and $N_K(x)$ the normal cone of $K$ at $x$.
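
To make this inclusion concrete, consider the following simple instance (our illustration, not taken from [5]): let $n = 1$, $V(x) = |x|$, and $K = [1, 2]$. The minimizer of $V$ over $K$ is $x = 1$, and indeed
$$0 \in \partial V(1) + N_K(1) = \{1\} + (-\infty, 0] = (-\infty, 1].$$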

Very recently, Huang and Noor [7] pointed out that 'the question on whether the strong convergence holds or not for the over-relaxed proximal point algorithm is still open'. Verma [12] also pointed out that 'the over-relaxed proximal point algorithm is of interest in the sense that it is quite application-oriented, but nontrivial in nature'. In [10, 11], we discussed the convergence of the iterative sequences generated by hybrid proximal point algorithm frameworks associated with (A, η, m)-monotonicity when the operator A is strongly monotone and Lipschitz continuous.

Motivated and inspired by these recent works, in this paper we correct the main results of the paper [1]. Further, by using the generalized proximal operator technique associated with (A, η, m)-monotone operators, we discuss the approximation solvability of general variational inclusion problems in Hilbert spaces and the convergence analysis of the iterative sequences generated by over-relaxed (A, η, m)-proximal point algorithm frameworks with errors, which generalize the hybrid proximal point algorithm frameworks due to Verma [1].

2 Preliminaries

In the sequel, let $\mathcal{H}$ be a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle \cdot, \cdot \rangle$, and let $2^{\mathcal{H}}$ denote the family of all subsets of $\mathcal{H}$.

Definition 2.1 A single-valued operator $A: \mathcal{H} \to \mathcal{H}$ is said to be

(i) r-strongly monotone if there exists a positive constant $r$ such that
$$\langle A(x) - A(y), x - y \rangle \ge r \|x - y\|^2, \quad \forall x, y \in \mathcal{H};$$

(ii) s-Lipschitz continuous if there exists a constant $s > 0$ such that
$$\|A(x) - A(y)\| \le s \|x - y\|, \quad \forall x, y \in \mathcal{H}.$$

If s = 1 , then A is called nonexpansive.

Definition 2.2 Let $A: \mathcal{H} \to \mathcal{H}$ and $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be two nonlinear (in general) operators. A set-valued operator $M: \mathcal{H} \to 2^{\mathcal{H}}$ is said to be

(i) maximal monotone if, for any $(y, v) \in \mathrm{Graph}(M) = \{(y, v) \in \mathcal{H} \times \mathcal{H} \mid v \in M(y)\}$,
$$\langle u - v, x - y \rangle \ge 0 \quad \text{implies} \quad x \in D(M) \text{ and } u \in M(x);$$

(ii) r-strongly η-monotone if there exists a positive constant $r$ such that
$$\langle u - v, \eta(x, y) \rangle \ge r \|x - y\|^2, \quad \forall (x, u), (y, v) \in \mathrm{Graph}(M),$$
where $\eta$ is said to be τ-Lipschitz continuous if there exists a constant $\tau > 0$ such that
$$\|\eta(x, y)\| \le \tau \|x - y\|, \quad \forall x, y \in \mathcal{H};$$

(iii) m-relaxed η-monotone if there exists a positive constant $m$ such that
$$\langle u - v, \eta(x, y) \rangle \ge -m \|x - y\|^2, \quad \forall (x, u), (y, v) \in \mathrm{Graph}(M).$$

Similarly, if $\eta(x, y) = x - y$ for all $x, y \in \mathcal{H}$, then we obtain the definitions of strong monotonicity and relaxed monotonicity.

Definition 2.3 Let $A: \mathcal{H} \to \mathcal{H}$ be r-strongly monotone. The operator $M: \mathcal{H} \to 2^{\mathcal{H}}$ is said to be A-maximal monotone if

(i) $M$ is m-relaxed monotone;

(ii) $R(A + \rho M) = \mathcal{H}$ for $\rho > 0$.
Definition 2.4 Let $A: \mathcal{H} \to \mathcal{H}$ be r-strongly η-monotone. Then $M: \mathcal{H} \to 2^{\mathcal{H}}$ is said to be (A, η, m)-monotone if

(i) $M$ is m-relaxed η-monotone;

(ii) $R(A + \rho M) = \mathcal{H}$ for $\rho > 0$.

Lemma 2.1 [13]

Let $\mathcal{H}$ be a real Hilbert space, $A: \mathcal{H} \to \mathcal{H}$ be r-strongly monotone, and $M: \mathcal{H} \to 2^{\mathcal{H}}$ be A-maximal monotone. Then the resolvent operator associated with $M$, defined by
$$J^M_{\rho, A}(x) = (A + \rho M)^{-1}(x), \quad \forall x \in \mathcal{H},$$

is $\frac{1}{r - \rho m}$-Lipschitz continuous.

Lemma 2.2 [9]

Let $\mathcal{H}$ be a real Hilbert space, $A: \mathcal{H} \to \mathcal{H}$ be r-strongly η-monotone, $M: \mathcal{H} \to 2^{\mathcal{H}}$ be (A, η, m)-maximal monotone, and $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be τ-Lipschitz continuous. Then the generalized resolvent operator associated with $M$, defined by
$$J^{M, \eta}_{\rho, A}(x) = (A + \rho M)^{-1}(x), \quad \forall x \in \mathcal{H},$$

is $\frac{\tau}{r - \rho m}$-Lipschitz continuous.
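
To illustrate Lemma 2.2 numerically, the following sketch (a toy instance of our own, not from [9]) takes $\mathcal{H} = \mathbb{R}$, $A = I$ (so $r = s = 1$), $\eta(x, y) = x - y$ (so $\tau = 1$), and $M = \partial|\cdot|$, which is maximal monotone and hence m-relaxed η-monotone for every $m > 0$; the generalized resolvent $(A + \rho M)^{-1}$ is then the classical soft-thresholding map, and Lemma 2.2 predicts the Lipschitz constant $\tau/(r - \rho m) = 1/(1 - \rho m)$ for every small $m > 0$, consistent with an empirical ratio that never exceeds 1.

```python
import numpy as np

# Toy instance (ours): H = R, A = I (r = s = 1), eta(x, y) = x - y (tau = 1),
# M = the subdifferential of |.|.  Then (A + rho*M)^{-1} is soft-thresholding.

def resolvent(z, rho):
    """Evaluate (I + rho * d|.|)^{-1}(z), i.e., soft-thresholding at level rho."""
    return np.sign(z) * np.maximum(np.abs(z) - rho, 0.0)

rho = 0.5
pairs = np.random.default_rng(0).uniform(-5.0, 5.0, size=(1000, 2))
ratios = [abs(resolvent(a, rho) - resolvent(b, rho)) / abs(a - b)
          for a, b in pairs if a != b]

# Empirical Lipschitz ratio never exceeds 1, matching tau/(r - rho*m) as m -> 0.
print(max(ratios) <= 1.0 + 1e-12)   # True
```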

3 Remarks and algorithm frameworks

In this section, we give some remarks on the main results of [1] and then introduce a new class of over-relaxed (A, η, m)-proximal point algorithm frameworks with errors for approximating solutions of the general variational inclusion problem (1.1).

Lemma 3.1 [1]

Let $\mathcal{H}$ be a real Hilbert space, $A: \mathcal{H} \to \mathcal{H}$ be r-strongly η-monotone, and $M: \mathcal{H} \to 2^{\mathcal{H}}$ be (A, η, m)-maximal monotone. Then the following statements are mutually equivalent:

(i) An element $x \in \mathcal{H}$ is a solution to (1.1).

(ii) For $x \in \mathcal{H}$, we have
$$x = J^{M, \eta}_{\rho, A}(A(x)),$$

where $J^{M, \eta}_{\rho, A} = (A + \rho M)^{-1}$.

Lemma 3.2 [1]

Let $\mathcal{H}$ be a real Hilbert space, $A: \mathcal{H} \to \mathcal{H}$ be r-strongly monotone, and $M: \mathcal{H} \to 2^{\mathcal{H}}$ be A-maximal monotone. Then the following statements are mutually equivalent:

(i) An element $x \in \mathcal{H}$ is a solution to (1.1).

(ii) For $x \in \mathcal{H}$, we have
$$x = J^M_{\rho, A}(A(x)),$$

where $J^M_{\rho, A} = (A + \rho M)^{-1}$.
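
As a quick sanity check of this fixed-point characterization (a toy instance of our own, reused below): take $\mathcal{H} = \mathbb{R}$, $A(x) = 2x$, and $M(x) = x$. Then $J^M_{\rho, A}(A(x)) = \frac{2x}{2 + \rho}$, and the equation $x = \frac{2x}{2 + \rho}$ holds if and only if $x = 0$, which is precisely the unique solution of $0 \in M(x)$.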

In [1], by using Lemmas 2.1, 2.2, 3.1, and 3.2, the author obtained the following main results on the convergence rate (or convergence), which hold only when $r - \rho m > 0$:

Theorem V1 (See [[1], p.145, Theorem 3.3])

Let $\mathcal{H}$ be a real Hilbert space, let $A: \mathcal{H} \to \mathcal{H}$ be r-strongly monotone and s-Lipschitz continuous, and let $M: \mathcal{H} \to 2^{\mathcal{H}}$ be A-maximal monotone. For an arbitrarily chosen initial point $x_0$, suppose that the sequence $\{x_n\}$ is generated by the iterative procedure
$$x_{n+1} = (1 - \alpha_n) x_n + \alpha_n y_n, \quad n \ge 0,$$
where $y_n$ satisfies
$$\|y_n - J^M_{\rho_n, A}(A(x_n))\| \le \delta_n \|y_n - x_n\|,$$
$J^M_{\rho_n, A} = (A + \rho_n M)^{-1}$, and $\{\delta_n\}, \{\alpha_n\}, \{\rho_n\} \subset [0, \infty)$ are scalar sequences such that
$$\sum_{n=0}^{\infty} \delta_n < \infty, \quad \delta_n \to 0, \quad \alpha = \limsup_{n \to \infty} \alpha_n < 1, \quad \rho_n \uparrow \rho \le +\infty.$$
Then the sequence $\{x_n\}$ converges linearly to a solution of (1.1) with the convergence rate
$$\sqrt{1 - 2\alpha \Bigl[ 1 - (1 - \alpha) \frac{s}{r - \rho m} - \frac{1}{2}\alpha \Bigl( \frac{s}{r - \rho m} \Bigr)^2 - \frac{1}{2}\alpha \Bigr]} < 1,$$
for $c = r - \rho m$,
$$c < \frac{(1 - \alpha)s - \sqrt{(1 - \alpha)^2 s^2 + (2 - \alpha)\alpha s^2}}{2 - \alpha}, \quad (1 - \alpha)s > \sqrt{(1 - \alpha)^2 s^2 + (2 - \alpha)\alpha s^2},$$
and for
$$c > \frac{(1 - \alpha)s + \sqrt{(1 - \alpha)^2 s^2 + (2 - \alpha)\alpha s^2}}{2 - \alpha}.$$

Theorem V2 (See [[1], p.147, Theorem 3.4])

Let $\mathcal{H}$ be a real Hilbert space, let $A: \mathcal{H} \to \mathcal{H}$ be r-strongly η-monotone and s-Lipschitz continuous, and let $M: \mathcal{H} \to 2^{\mathcal{H}}$ be (A, η)-maximal monotone. Let $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be τ-Lipschitz continuous. For an arbitrarily chosen initial point $x_0$, suppose that the sequence $\{x_n\}$ is generated by the iterative procedure
$$x_{n+1} = (1 - \alpha_n) x_n + \alpha_n y_n, \quad n \ge 0,$$
where $y_n$ satisfies
$$\|y_n - J^{M, \eta}_{\rho_n, A}(A(x_n))\| \le n^{-1} \|y_n - x_n\|,$$
$J^{M, \eta}_{\rho_n, A} = (A + \rho_n M)^{-1}$, and $\{\alpha_n\}, \{\rho_n\} \subset [0, \infty)$ are scalar sequences such that $\alpha = \limsup_{n \to \infty} \alpha_n < 1$ and $\rho_n \uparrow \rho \le +\infty$. Then the sequence $\{x_n\}$ converges linearly to a solution of (1.1) for
$$r - \rho m < \frac{(1 - \alpha)s\tau - \sqrt{(1 - \alpha)^2 s^2 \tau^2 + (2 - \alpha)\alpha s^2 \tau^2}}{2 - \alpha}, \quad (1 - \alpha)s\tau > \sqrt{(1 - \alpha)^2 s^2 \tau^2 + (2 - \alpha)\alpha s^2 \tau^2},$$
and for
$$r - \rho m > \frac{(1 - \alpha)s\tau + \sqrt{(1 - \alpha)^2 s^2 \tau^2 + (2 - \alpha)\alpha s^2 \tau^2}}{2 - \alpha}.$$

In the sequel, we give the following remarks to show that the proofs of the main results, Theorems 3.3 and 3.4 of [1], are in need of correction.

Remark 3.1 By the r-strong monotonicity and s-Lipschitz continuity of the underlying operator $A$, it follows that for all $x, y \in \mathcal{H}$ with $x \ne y$,
$$r \|x - y\|^2 \le \langle A(x) - A(y), x - y \rangle \le \|A(x) - A(y)\| \|x - y\| \le s \|x - y\|^2,$$

showing that $r \le s$.

Remark 3.2 From Remark 3.1, it is easy to prove that the convergence rate on p.146 of [1] satisfies $\theta_n > 1$ for all $n \ge 0$. Therefore, the strong convergence claimed in [[1], Theorem 3.3] does not hold.

In fact, from Remark 3.1 and the definition of the convergence rate in line 11, p.146 of [1], we have
$$s \ge r > r - \rho_n m > 0, \quad \text{i.e.,} \quad \frac{s}{r - \rho_n m} > 1,$$
and hence the following estimate:
$$\begin{aligned} \theta_n^2 &= 1 - 2\alpha_n \Bigl[ 1 - (1 - \alpha_n) \frac{s}{r - \rho_n m} - \frac{1}{2}\alpha_n \Bigl( \frac{s}{r - \rho_n m} \Bigr)^2 - \frac{1}{2}\alpha_n \Bigr] \\ &= 1 - 2\alpha_n + 2\alpha_n (1 - \alpha_n) \frac{s}{r - \rho_n m} + \alpha_n^2 \Bigl( \frac{s}{r - \rho_n m} \Bigr)^2 + \alpha_n^2 \\ &= (1 - \alpha_n)^2 + 2(1 - \alpha_n) \frac{s \alpha_n}{r - \rho_n m} + \Bigl( \frac{s \alpha_n}{r - \rho_n m} \Bigr)^2 \\ &= \Bigl[ (1 - \alpha_n) + \frac{s \alpha_n}{r - \rho_n m} \Bigr]^2 = \Bigl[ 1 - \alpha_n \Bigl( 1 - \frac{s}{r - \rho_n m} \Bigr) \Bigr]^2 > 1, \end{aligned}$$
(3.1)

since $\alpha_n > 0$ for all $n \ge 0$.
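
The claim $\theta_n^2 > 1$ is easy to confirm numerically; the following sketch evaluates the factored form of (3.1) for one arbitrary admissible choice of constants (respecting Remark 3.1 and $r - \rho_n m > 0$).

```python
# A quick numerical check of (3.1); the constants are arbitrary, subject
# only to r <= s (Remark 3.1), r - rho_n*m > 0, and 0 < alpha_n < 1.
r, s, m = 0.8, 1.0, 0.5
rho_n, alpha_n = 0.6, 0.9        # r - rho_n*m = 0.5 > 0

q = s / (r - rho_n * m)          # q = 2.0 > 1, as shown above
theta_n_sq = (1.0 - alpha_n * (1.0 - q)) ** 2
print(theta_n_sq)                # 3.61 > 1: theta_n is not a contraction factor
```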

Remark 3.3 Similarly, we can show that the convergence conditions in [[1], Theorem 3.4] must be revised.

Indeed, from $0 \le \alpha < 1$ and the assumptions, it follows that the conditions for the convergence of the sequence $\{x_n\}$ generated by the iterative algorithm are equivalent to
$$(2 - \alpha) c^2 - 2(1 - \alpha) s \tau c - \alpha s^2 \tau^2 > 0, \quad c = r - \rho m,$$
that is,
$$1 - (1 - \alpha) \frac{s \tau}{r - \rho m} - \frac{1}{2}\alpha \Bigl( \frac{s \tau}{r - \rho m} \Bigr)^2 - \frac{1}{2}\alpha > 0,$$
which must be revised, because it follows from the assumptions, (3.1), and Remark 3.1 that this positivity forces
$$s \tau < r - \rho m < r \le s, \quad \text{i.e.,} \quad \tau < 1.$$

Thus, if $\tau \ge 1$, then the conditions for the convergence cannot hold.

Next, in view of the above remarks on the main results in [1], we construct the following over-relaxed proximal point algorithm frameworks with errors based on Lemmas 3.1 and 3.2.

Algorithm 3.1 Step 1. Choose an arbitrary initial point $x_0 \in \mathcal{H}$.

Step 2. Choose sequences $\{\alpha_n\}$, $\{\delta_n\}$, and $\{\rho_n\}$ in $[0, \infty)$ satisfying
$$\sum_{n=0}^{\infty} \delta_n < \infty, \quad \rho_n \to \rho.$$
Step 3. Let $\{x_n\} \subset \mathcal{H}$ be generated by the following iterative procedure:
$$x_{n+1} = (1 - \alpha_n) x_n + \alpha_n y_n + e_n, \quad n \ge 0,$$
(3.2)
where $\{e_n\}$ is an error sequence in $\mathcal{H}$, introduced to take into account a possible inexact computation of the resolvent operator point, which satisfies $\sum_{n=0}^{\infty} \|e_n\| < \infty$, and $y_n$ satisfies
$$\|y_n - J^{M, \eta}_{\rho_n, A}(A(x_n))\| \le \delta_n \|y_n - x_n\|,$$

where $n \ge 0$, $J^{M, \eta}_{\rho_n, A} = (A + \rho_n M)^{-1}$, and $\rho_n > 0$.

Step 4. If $x_n$ and $y_n$ satisfy (3.2) to sufficient accuracy, stop; otherwise, set $n := n + 1$ and return to Step 2.

Remark 3.4 If $e_n \equiv 0$, $\delta_n = \frac{1}{n}$, and $\alpha_n < 1$ for all $n \ge 0$, then Algorithm 3.1 reduces to the iterative algorithm in Theorem 3.4 of [1].
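
For illustration, the following sketch runs Algorithm 3.1 on the toy instance from the check after Lemma 3.2 ($\mathcal{H} = \mathbb{R}$, $A(x) = 2x$, $M(x) = x$, $\eta(x, y) = x - y$; our construction, not an example from [1]). Here $J^{M, \eta}_{\rho, A}(z) = z/(2 + \rho)$ can be evaluated exactly, so the acceptance criterion holds with any $\delta_n \ge 0$; replacing $J^{M, \eta}_{\rho_n, A}$ by $J^M_{\rho_n, A}$ gives the corresponding run of Algorithm 3.2.

```python
# Toy run of Algorithm 3.1 (our instance): H = R, A(x) = 2x (r = s = 2),
# M(x) = x, eta(x, y) = x - y.  Then J(z) = (A + rho*M)^{-1}(z) = z/(2 + rho),
# and the unique solution of 0 in M(x) is x = 0.

def J(z, rho):
    return z / (2.0 + rho)

x = 5.0                         # Step 1: arbitrary initial point
alpha, rho = 1.2, 1.0           # over-relaxed: alpha_n = 1.2 > 1 for all n
for n in range(60):             # Steps 2-4
    y = J(2.0 * x, rho)         # exact resolvent step, so delta_n = 0 suffices
    e = 1e-3 * 0.5 ** n         # error term with sum ||e_n|| < infinity
    x = (1 - alpha) * x + alpha * y + e   # iteration (3.2)

print(abs(x) < 1e-6)            # True: x_n -> 0, the solution of (1.1)
```

With $\alpha_n \equiv 1.2 > 1$ the scheme is genuinely over-relaxed, and the update contracts with factor $|1 - \alpha + 2\alpha/(2 + \rho)| = 0.6$ up to the summable errors.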

Algorithm 3.2 Step 1. Choose an arbitrary initial point $x_0 \in \mathcal{H}$.

Step 2. Choose sequences $\{\alpha_n\}$, $\{\delta_n\}$, and $\{\rho_n\}$ in $[0, \infty)$ satisfying
$$\sum_{n=0}^{\infty} \delta_n < \infty, \quad \rho_n \to \rho.$$
Step 3. Let $\{x_n\} \subset \mathcal{H}$ be generated by the following iterative procedure:
$$x_{n+1} = (1 - \alpha_n) x_n + \alpha_n y_n + e_n, \quad n \ge 0,$$
(3.3)
where $\{e_n\}$ is an error sequence in $\mathcal{H}$, introduced to take into account a possible inexact computation of the resolvent operator point, which satisfies $\sum_{n=0}^{\infty} \|e_n\| < \infty$, and $y_n$ satisfies
$$\|y_n - J^M_{\rho_n, A}(A(x_n))\| \le \delta_n \|y_n - x_n\|,$$

where $n \ge 0$, $J^M_{\rho_n, A} = (A + \rho_n M)^{-1}$, and $\rho_n > 0$.

Step 4. If $x_n$ and $y_n$ satisfy (3.3) to sufficient accuracy, stop; otherwise, set $n := n + 1$ and return to Step 2.

Remark 3.5 If $e_n \equiv 0$ and $\alpha_n < 1$ for all $n \ge 0$, then Algorithm 3.2 reduces to the iterative algorithm in Theorem 3.3 of [1].

4 Convergence analysis

In this section, we apply the over-relaxed proximal point Algorithms 3.1 and 3.2 to approximate solutions of (1.1) and, as a result, establish linear convergence.

Theorem 4.1 Let $\mathcal{H}$ be a real Hilbert space, $A: \mathcal{H} \to \mathcal{H}$ be r-strongly monotone and s-Lipschitz continuous, $\eta: \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be τ-Lipschitz continuous, and $M: \mathcal{H} \to 2^{\mathcal{H}}$ be (A, η, m)-maximal monotone. If for $\gamma > \frac{1}{2}$,
$$\bigl\langle A(x_n) - A(x), J^{M, \eta}_{\rho_n, A}(A(x_n)) - J^{M, \eta}_{\rho_n, A}(A(x)) \bigr\rangle \ge \gamma \bigl\| J^{M, \eta}_{\rho_n, A}(A(x_n)) - J^{M, \eta}_{\rho_n, A}(A(x)) \bigr\|^2,$$
and there exists a constant $\rho \in (0, \frac{r}{m})$ such that
$$\alpha + s^2 k^2 \bigl[ \alpha - 2\gamma(\alpha - 1) \bigr] < 2, \quad k = \frac{\tau}{r - \rho m} < 1,$$
(4.1)
then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges linearly to a solution $x$ of the problem (1.1) with the convergence rate
$$\vartheta = \sqrt{1 - \alpha \Bigl\{ 2\Bigl[ 1 - \gamma \Bigl( \frac{s\tau}{r - \rho m} \Bigr)^2 \Bigr] - \alpha \Bigl[ 1 - (2\gamma - 1) \Bigl( \frac{s\tau}{r - \rho m} \Bigr)^2 \Bigr] \Bigr\}} < 1,$$

where $\alpha = \limsup_{n \to \infty} \alpha_n > 1$ and $\rho_n \to \rho$.

Proof Let $x$ be a solution of the problem (1.1). Then it follows from Lemma 3.1 that
$$x = (1 - \alpha_n) x + \alpha_n J^{M, \eta}_{\rho_n, A}(A(x)).$$
(4.2)
Let
$$z_{n+1} = (1 - \alpha_n) x_n + \alpha_n J^{M, \eta}_{\rho_n, A}(A(x_n)), \quad n \ge 0.$$
Thus, by the assumptions of the theorem, Lemma 2.2, and (4.2), we find the estimate
$$\|z_{n+1} - x\|^2 \le \vartheta_n^2 \|x_n - x\|^2,$$
where
$$\vartheta_n = \sqrt{1 - \alpha_n \Bigl\{ 2\Bigl[ 1 - \gamma \Bigl( \frac{s\tau}{r - \rho_n m} \Bigr)^2 \Bigr] - \alpha_n \Bigl[ 1 - (2\gamma - 1) \Bigl( \frac{s\tau}{r - \rho_n m} \Bigr)^2 \Bigr] \Bigr\}}.$$
Thus, we have
$$\|z_{n+1} - x\| \le \vartheta_n \|x_n - x\|.$$
Since $x_{n+1} = (1 - \alpha_n) x_n + \alpha_n y_n + e_n$ and $x_{n+1} - x_n = \alpha_n (y_n - x_n) + e_n$, it follows that
$$\|x_{n+1} - z_{n+1}\| = \bigl\| \alpha_n \bigl( y_n - J^{M, \eta}_{\rho_n, A}(A(x_n)) \bigr) + e_n \bigr\| \le \alpha_n \delta_n \|y_n - x_n\| + \|e_n\| \le \delta_n \bigl( \|x_{n+1} - x\| + \|x_n - x\| + \|e_n\| \bigr) + \|e_n\|.$$
Using the above arguments, we estimate that
$$\|x_{n+1} - x\| \le \|z_{n+1} - x\| + \|x_{n+1} - z_{n+1}\| \le (\vartheta_n + \delta_n) \|x_n - x\| + \delta_n \|x_{n+1} - x\| + (1 + \delta_n) \|e_n\|.$$
This implies that
$$\|x_{n+1} - x\| \le \frac{\vartheta_n + \delta_n}{1 - \delta_n} \|x_n - x\| + \frac{1 + \delta_n}{1 - \delta_n} \|e_n\|.$$
(4.3)
Since $A$ is r-strongly monotone (and hence $\|A(u) - A(v)\| \ge r \|u - v\|$ for all $u, v \in \mathcal{H}$), it follows from (4.3) that $\{x_n\}$ converges linearly to a solution $x$ for
$$\vartheta_n = \sqrt{1 - \alpha_n \Bigl\{ 2\Bigl[ 1 - \gamma \Bigl( \frac{s\tau}{r - \rho_n m} \Bigr)^2 \Bigr] - \alpha_n \Bigl[ 1 - (2\gamma - 1) \Bigl( \frac{s\tau}{r - \rho_n m} \Bigr)^2 \Bigr] \Bigr\}}.$$
Hence, we have
$$\limsup_{n \to \infty} \frac{\vartheta_n + \delta_n}{1 - \delta_n} = \limsup_{n \to \infty} \vartheta_n = \sqrt{1 - \alpha \Bigl\{ 2\Bigl[ 1 - \gamma \Bigl( \frac{s\tau}{r - \rho m} \Bigr)^2 \Bigr] - \alpha \Bigl[ 1 - (2\gamma - 1) \Bigl( \frac{s\tau}{r - \rho m} \Bigr)^2 \Bigr] \Bigr\}} = \vartheta < 1,$$

where $\alpha = \limsup_{n \to \infty} \alpha_n$ and $\rho_n \to \rho$. This completes the proof. □

Remark 4.1 The conditions (4.1) in Theorem 4.1 hold for suitable values of the constants, for example, $\alpha = 1.35$, $\gamma = 1.5262$, $\tau = 0.025$, $r = 0.5$, $\rho = 0.7348$, $m = 0.6$, $s = 0.6048$, with the convergence rate $\vartheta = 0.7840 < 1$.
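
These constants can be checked mechanically; the sketch below verifies the two inequalities of (4.1) in the reconstructed form used above (it checks the hypotheses only, not the value of the rate).

```python
# Check the hypotheses (4.1) of Theorem 4.1 for the constants of Remark 4.1.
alpha, gamma, tau = 1.35, 1.5262, 0.025
r, rho, m, s = 0.5, 0.7348, 0.6, 0.6048

k = tau / (r - rho * m)                            # k = 0.4229...
lhs = alpha + (s * k) ** 2 * (alpha - 2.0 * gamma * (alpha - 1.0))
print(rho < r / m, k < 1.0, lhs < 2.0)             # True True True (lhs = 1.3684...)
```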

From Theorem 4.1, we have the following result.

Theorem 4.2 Let $\mathcal{H}$ be a real Hilbert space, $A: \mathcal{H} \to \mathcal{H}$ be r-strongly monotone with $r > 1$ and s-Lipschitz continuous, and $M: \mathcal{H} \to 2^{\mathcal{H}}$ be A-maximal monotone. If for $\gamma > \frac{1}{2}$,
$$\bigl\langle A(x_n) - A(x), J^M_{\rho_n, A}(A(x_n)) - J^M_{\rho_n, A}(A(x)) \bigr\rangle \ge \gamma \bigl\| J^M_{\rho_n, A}(A(x_n)) - J^M_{\rho_n, A}(A(x)) \bigr\|^2,$$
and there exists a constant $\rho \in (0, \frac{r - 1}{m})$ such that
$$\alpha + \bigl[ \alpha - 2\gamma(\alpha - 1) \bigr] \Bigl( \frac{s}{r - \rho m} \Bigr)^2 < 2,$$
then the sequence $\{x_n\}$ generated by Algorithm 3.2 converges linearly to a solution $x$ of the problem (1.1) with the convergence rate
$$\theta = \sqrt{1 - \alpha \Bigl\{ 2\Bigl[ 1 - \gamma \Bigl( \frac{s}{r - \rho m} \Bigr)^2 \Bigr] - \alpha \Bigl[ 1 - (2\gamma - 1) \Bigl( \frac{s}{r - \rho m} \Bigr)^2 \Bigr] \Bigr\}} < 1,$$

where $\alpha = \limsup_{n \to \infty} \alpha_n > 1$ and $\rho_n \to \rho$.

Remark 4.2 In Theorems 4.1 and 4.2, if we assume instead the c-Lipschitz continuity of $M^{-1}$, it seems that strong convergence could be achieved (see, for example, [6–8]).

Remark 4.3 For an arbitrarily chosen initial point $x_0$, let the iterative sequence $\{x_n\}$ be generated by the following over-relaxed proximal point algorithm:
$$A(x_{n+1}) = (1 - \alpha_n) A(x_n) + \alpha_n y_n,$$
where $y_n$ satisfies
$$\|y_n - A(G(A(x_n)))\| \le \delta_n \|y_n - A(x_n)\|,$$

with $n \ge 0$, $G = J^{M, \eta}_{\rho_n, A} = (A + \rho_n M)^{-1}$, the generalized resolvent operator associated with an (A, η, m)-maximal monotone operator $M$, or $G = J^M_{\rho_n, A} = (A + \rho_n M)^{-1}$, the resolvent operator associated with an A-maximal monotone operator $M$, and scalar sequences $\{\alpha_n\}, \{\rho_n\}, \{\delta_n\} \subset [0, \infty)$. Then we can obtain the corresponding results by the same method as in Theorem 4.1 (see, for example, [10, 11]). Therefore, the results presented in this paper improve, generalize, and unify the corresponding results of recent works.

Declarations

Acknowledgements

This work was supported by the Scientific Research Fund of the Sichuan Provincial Education Department (10ZA136), the Sichuan Province Youth Fund Project (2011JTD0031), the Cultivation Project of Sichuan University of Science & Engineering (2011PY01), and the Artificial Intelligence Key Laboratory of Sichuan Province (2012RYY04).

Authors’ Affiliations

(1)
Department of Mathematics, Sichuan University
(2)
Department of Mathematics, Sichuan University of Science & Engineering

References

  1. Verma RU: A hybrid proximal point algorithm based on the (A, η)-maximal monotonicity framework. Appl. Math. Lett. 2008, 21: 142–147. doi:10.1016/j.aml.2007.02.017
  2. Agarwal RP, Huang NJ, Cho YJ: Generalized nonlinear mixed implicit quasi-variational inclusions with set-valued mappings. J. Inequal. Appl. 2002, 7(6): 807–828.
  3. Agarwal RP, Verma RU: Inexact A-proximal point algorithm and applications to nonlinear variational inclusion problems. J. Optim. Theory Appl. 2010, 144(3): 431–444. doi:10.1007/s10957-009-9615-3
  4. Ceng LC, Guu SM, Yao JC: Hybrid approximate proximal point algorithms for variational inequalities in Banach spaces. J. Inequal. Appl. 2009, 2009: Article ID 275208.
  5. Clarke FH: Optimization and Nonsmooth Analysis. Wiley, New York; 1983.
  6. Huang ZY: A remark on the strong convergence of the over-relaxed proximal point algorithm. Comput. Math. Appl. 2010, 60: 1616–1619. doi:10.1016/j.camwa.2010.06.043
  7. Huang ZY, Noor MA: On the convergence rate of the over-relaxed proximal point algorithm. Appl. Math. Lett. 2012, 25(11): 1740–1743. doi:10.1016/j.aml.2012.02.003
  8. Kim JK, Nguyen B: Regularization inertial proximal point algorithm for monotone hemicontinuous mapping and inverse strongly monotone mappings in Hilbert spaces. J. Inequal. Appl. 2010, 2010: Article ID 451916.
  9. Lan HY: A class of nonlinear (A, η)-monotone operator inclusion problems with relaxed cocoercive mappings. Adv. Nonlinear Var. Inequal. 2006, 9(2): 1–11.
  10. Lan HY: On hybrid (A, η, m)-proximal point algorithm frameworks for solving general operator inclusion problems. J. Appl. Funct. Anal. 2012, 7(3): 258–266.
  11. Li F, Lan HY: Over-relaxed (A, η)-proximal point algorithm framework for approximating the solutions of operator inclusions. Adv. Nonlinear Var. Inequal. 2012, 15(1): 99–109.
  12. Verma RU: A general framework for the over-relaxed A-proximal point algorithm and applications to inclusion problems. Appl. Math. Lett. 2009, 22: 698–703. doi:10.1016/j.aml.2008.05.001
  13. Verma RU: A-monotonicity and its role in nonlinear variational inclusions. J. Optim. Theory Appl. 2006, 129(3): 457–467. doi:10.1007/s10957-006-9079-7
  14. Verma RU: On general over-relaxed proximal point algorithm and applications. Optimization 2011, 60(4): 531–536. doi:10.1080/02331930903499675

Copyright

© Lan; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.