 Research
 Open Access
On over-relaxed (A,\eta ,m)-proximal point algorithm frameworks with errors and applications to general variational inclusion problems
Journal of Inequalities and Applications volume 2013, Article number: 97 (2013)
Abstract
The purpose of this paper is to provide some remarks for the main results of the paper Verma (Appl. Math. Lett. 21:142-147, 2008). Further, by using the generalized proximal operator technique associated with the (A,\eta ,m)-monotone operators, we discuss the approximation solvability of general variational inclusion problem forms in Hilbert spaces and the convergence analysis of iterative sequences generated by the over-relaxed (A,\eta ,m)-proximal point algorithm frameworks with errors, which generalize the hybrid proximal point algorithm frameworks due to Verma.
MSC:47H05, 49J40.
1 Introduction
In 2008, Verma [1] developed a general framework for a hybrid proximal point algorithm using the notion of (A,\eta )-monotonicity (also referred to as (A,\eta )-maximal monotonicity or (A,\eta ,m)-monotonicity in the literature) and explored convergence analysis for this algorithm in the context of solving the following variational inclusion problem, along with some results on the resolvent operator corresponding to (A,\eta )-monotonicity: Find x\in \mathcal{H} such that
0\in M(x),
(1.1)
where M:\mathcal{H}\to {2}^{\mathcal{H}} is a set-valued mapping on a real Hilbert space ℋ.
We remark that the problem (1.1) provides us a general and unified framework for studying a wide range of interesting and important problems arising in mathematics, physics, engineering sciences, economics, finance, etc. For more details, see [1–14] and the following example.
Example 1.1 [5]
Let V:{\mathbb{R}}^{n}\to \mathbb{R} be a locally Lipschitz continuous function, and let K be a closed convex set in {\mathbb{R}}^{n}. If {x}^{\ast}\in {\mathbb{R}}^{n} is a solution to the following problem:
\underset{x\in K}{min}\phantom{\rule{0.25em}{0ex}}V(x),
then
0\in \partial V({x}^{\ast})+{\mathcal{N}}_{K}({x}^{\ast}),
where \partial V({x}^{\ast}) denotes the (Clarke) subdifferential of V at {x}^{\ast}, and {\mathcal{N}}_{K}({x}^{\ast}) the normal cone of K at {x}^{\ast}.
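A small numerical check of the optimality condition 0\in \partial V({x}^{\ast})+{\mathcal{N}}_{K}({x}^{\ast}) may be helpful. The following sketch (our illustration only; the function V(x)={(x-2)}^{2} and the set K=[0,1] are not taken from [5]) treats the smooth one-dimensional case, where \partial V reduces to the ordinary derivative:

```python
# Illustrative check of 0 in dV(x*) + N_K(x*) for a smooth 1-D example.
# V(x) = (x - 2)^2, K = [0, 1]; the constrained minimizer is x* = 1.

def grad_V(x):
    return 2.0 * (x - 2.0)

def in_normal_cone(g, x, lo=0.0, hi=1.0, tol=1e-12):
    """Membership test g in N_K(x) for the interval K = [lo, hi]:
    N_K(x) = {0} in the interior, (-inf, 0] at lo, [0, +inf) at hi."""
    if lo < x < hi:
        return abs(g) <= tol
    if x == hi:
        return g >= -tol
    if x == lo:
        return g <= tol
    return False  # x lies outside K

x_star = 1.0
# 0 in grad_V(x*) + N_K(x*)  <=>  -grad_V(x*) in N_K(x*)
print(in_normal_cone(-grad_V(x_star), x_star))  # True
```

At the boundary point {x}^{\ast}=1 the normal cone is [0,\mathrm{\infty}) and -{V}^{\mathrm{\prime}}(1)=2 indeed lies in it, so 0\in \partial V({x}^{\ast})+{\mathcal{N}}_{K}({x}^{\ast}).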
Very recently, Huang and Noor [7] have pointed out that ‘the question on whether the strong convergence holds or not for the over-relaxed proximal point algorithm is still open’. Verma [12] also pointed out that ‘the over-relaxed proximal point algorithm is of interest in the sense that it is quite application-oriented, but nontrivial in nature’. In [10, 11], we discussed the convergence of iterative sequences generated by the hybrid proximal point algorithm frameworks associated with (A,\eta ,m)-monotonicity when the operator A is strongly monotone and Lipschitz continuous.
Motivated and inspired by the recent works, in this paper, we correct the main result of the paper [1]. Further, by using the generalized proximal operator technique associated with the (A,\eta ,m)-monotone operators, we discuss the approximation solvability of general variational inclusion problem forms in Hilbert spaces and the convergence analysis of iterative sequences generated by the over-relaxed (A,\eta ,m)-proximal point algorithm frameworks with errors, which generalize the hybrid proximal point algorithm frameworks due to Verma [1].
2 Preliminaries
In the sequel, let ℋ be a real Hilbert space with the norm \parallel \cdot \parallel and the inner product \langle \cdot ,\cdot \rangle, and let {2}^{\mathcal{H}} denote the family of all subsets of ℋ.
Definition 2.1 A single-valued operator A:\mathcal{H}\to \mathcal{H} is said to be

(i)
r-strongly monotone, if there exists a positive constant r such that
\langle A(x)-A(y),x-y\rangle \ge r{\parallel x-y\parallel}^{2},\quad \mathrm{\forall}x,y\in \mathcal{H};
(ii)
s-Lipschitz continuous, if there exists a constant s>0 such that
\parallel A(x)-A(y)\parallel \le s\parallel x-y\parallel ,\quad \mathrm{\forall}x,y\in \mathcal{H}.
If s=1, then A is called nonexpansive.
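In finite dimensions, these constants are easy to exhibit. For a linear operator A(x)=Bx with B symmetric positive definite, A is r-strongly monotone with r={\lambda}_{min}(B) and s-Lipschitz continuous with s={\lambda}_{max}(B); the following sketch (our illustration, with an arbitrarily chosen matrix B) verifies both inequalities on random samples:

```python
import numpy as np

# For A(x) = B x with B symmetric positive definite, A is r-strongly
# monotone with r = lambda_min(B) and s-Lipschitz with s = lambda_max(B).
rng = np.random.default_rng(0)
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])
eigs = np.linalg.eigvalsh(B)      # eigenvalues in ascending order
r, s = eigs[0], eigs[-1]

for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    d = x - y
    Ad = B @ d                    # A(x) - A(y) for a linear operator
    assert Ad @ d >= r * (d @ d) - 1e-9                         # r-strong monotonicity
    assert np.linalg.norm(Ad) <= s * np.linalg.norm(d) + 1e-9   # s-Lipschitz continuity
print(r <= s)  # True; r <= s holds in general (cf. Remark 3.1)
```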
Definition 2.2 Let A:\mathcal{H}\to \mathcal{H} and \eta :\mathcal{H}\times \mathcal{H}\to \mathcal{H} be two nonlinear (in general) operators. A set-valued operator M:\mathcal{H}\to {2}^{\mathcal{H}} is said to be

(i)
maximal monotone if for any (y,v)\in Graph(M)=\{(y,v)\in \mathcal{H}\times \mathcal{H}:v\in M(y)\},
\langle u-v,x-y\rangle \ge 0\quad \text{implies}\quad x\in D(M),u\in M(x);
(ii)
r-strongly η-monotone if there exists a positive constant r such that
\langle u-v,\eta (x,y)\rangle \ge r{\parallel x-y\parallel}^{2},\quad \mathrm{\forall}(x,u),(y,v)\in Graph(M),
where η is said to be τ-Lipschitz continuous if there exists a constant \tau >0 such that
\parallel \eta (x,y)\parallel \le \tau \parallel x-y\parallel ,\quad \mathrm{\forall}x,y\in \mathcal{H};
(iii)
m-relaxed η-monotone if there exists a positive constant m such that
\langle u-v,\eta (x,y)\rangle \ge -m{\parallel x-y\parallel}^{2},\quad \mathrm{\forall}(x,u),(y,v)\in Graph(M).
Similarly, if \eta (x,y)=x-y for all x,y\in \mathcal{H}, we obtain the definitions of strong monotonicity and relaxed monotonicity.
Definition 2.3 Let A:\mathcal{H}\to \mathcal{H} be r-strongly monotone. The operator M:\mathcal{H}\to {2}^{\mathcal{H}} is said to be A-maximal monotone if

(i)
M is m-relaxed monotone;

(ii)
R(A+\rho M)=\mathcal{H} for \rho >0.
Definition 2.4 Let A:\mathcal{H}\to \mathcal{H} be r-strongly η-monotone. Then M:\mathcal{H}\to {2}^{\mathcal{H}} is said to be (A,\eta ,m)-monotone if

(i)
M is m-relaxed η-monotone;

(ii)
R(A+\rho M)=\mathcal{H} for \rho >0.
Lemma 2.1 [13]
Let ℋ be a real Hilbert space, A:\mathcal{H}\to \mathcal{H} be r-strongly monotone, and M:\mathcal{H}\to {2}^{\mathcal{H}} be A-maximal monotone. Then the resolvent operator associated with M and defined by
{J}_{\rho ,A}^{M}(u)={(A+\rho M)}^{-1}(u),\quad \mathrm{\forall}u\in \mathcal{H},
is \frac{1}{r-\rho m}-Lipschitz continuous for 0<\rho <\frac{r}{m}.
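In the scalar case, the Lipschitz bound of Lemma 2.1 can be verified directly. Take A(x)=rx (which is r-strongly monotone) and M(x)=cx with c\ge -m (which is m-relaxed monotone); then {J}_{\rho ,A}^{M}(z)=z/(r+\rho c), whose Lipschitz constant 1/(r+\rho c) is at most 1/(r-\rho m) whenever 0<\rho <r/m. A minimal sketch (our illustration, with arbitrarily chosen constants):

```python
# Scalar sanity check of Lemma 2.1: A(x) = r*x, M(x) = c*x with c >= -m,
# so J(z) = (A + rho*M)^{-1}(z) = z / (r + rho*c).
r, m, rho = 2.0, 0.5, 1.0   # chosen so that r - rho*m > 0
c = -m                      # worst case: the bound is attained
assert r - rho * m > 0

def J(z):
    return z / (r + rho * c)

bound = 1.0 / (r - rho * m)
z1, z2 = 3.7, -1.2
# The resolvent is 1/(r - rho*m)-Lipschitz, as Lemma 2.1 asserts:
assert abs(J(z1) - J(z2)) <= bound * abs(z1 - z2) + 1e-12
print(abs(J(z1) - J(z2)) / abs(z1 - z2))  # equals 1/(r - rho*m) here
```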
Lemma 2.2 [9]
Let ℋ be a real Hilbert space, A:\mathcal{H}\to \mathcal{H} be r-strongly η-monotone, M:\mathcal{H}\to {2}^{\mathcal{H}} be (A,\eta ,m)-maximal monotone, and \eta :\mathcal{H}\times \mathcal{H}\to \mathcal{H} be τ-Lipschitz continuous. Then the generalized resolvent operator associated with M and defined by
{J}_{\rho ,A}^{M,\eta}(u)={(A+\rho M)}^{-1}(u),\quad \mathrm{\forall}u\in \mathcal{H},
is \frac{\tau}{r-\rho m}-Lipschitz continuous for 0<\rho <\frac{r}{m}.
3 Remarks and algorithm frameworks
In this section, we give some remarks on the main results of [1] and then introduce a new class of over-relaxed (A,\eta ,m)-proximal point algorithm frameworks with errors to approximate the solvability of the general variational inclusion problem (1.1).
Lemma 3.1 [1]
Let ℋ be a real Hilbert space, A:\mathcal{H}\to \mathcal{H} be r-strongly η-monotone, and M:\mathcal{H}\to {2}^{\mathcal{H}} be (A,\eta ,m)-maximal monotone. Then the following statements are mutually equivalent:

(i)
An element x\in \mathcal{H} is a solution to (1.1).

(ii)
For an x\in \mathcal{H}, we have
x={J}_{\rho ,A}^{M,\eta}(A(x)),
where {J}_{\rho ,A}^{M,\eta}={(A+\rho M)}^{-1}.
Lemma 3.2 [1]
Let ℋ be a real Hilbert space, A:\mathcal{H}\to \mathcal{H} be r-strongly monotone, and M:\mathcal{H}\to {2}^{\mathcal{H}} be A-maximal monotone. Then the following statements are mutually equivalent:

(i)
An element x\in \mathcal{H} is a solution to (1.1).

(ii)
For an x\in \mathcal{H}, we have
x={J}_{\rho ,A}^{M}(A(x)),
where {J}_{\rho ,A}^{M}={(A+\rho M)}^{-1}.
In [1], by using Lemmas 2.1, 2.2, 3.1, and 3.2, the author obtained the following main results on the convergence rate (or convergence), which hold only when r-\rho m>0:
Theorem V1 (See [[1], p.145, Theorem 3.3])
Let ℋ be a real Hilbert space, let A:\mathcal{H}\to \mathcal{H} be r-strongly monotone and s-Lipschitz continuous, and let M:\mathcal{H}\to {2}^{\mathcal{H}} be A-maximal monotone. For an arbitrarily chosen initial point {x}_{0}, suppose that the sequence \{{x}_{n}\} is generated by an iterative procedure
and {y}_{n} satisfies
where {J}_{{\rho}_{n},A}^{M}={(A+{\rho}_{n}M)}^{-1} and \{{\delta}_{n}\},\{{\alpha}_{n}\},\{{\rho}_{n}\}\subset [0,\mathrm{\infty}) are scalar sequences such that
Then the sequence \{{x}_{n}\} converges linearly to a solution of (1.1) with the convergence rate
for c=r-\rho m,
and for
Theorem V2 (See [[1], p.147, Theorem 3.4])
Let ℋ be a real Hilbert space, let A:\mathcal{H}\to \mathcal{H} be r-strongly η-monotone and s-Lipschitz continuous, and let M:\mathcal{H}\to {2}^{\mathcal{H}} be (A,\eta )-maximal monotone. Let \eta :\mathcal{H}\times \mathcal{H}\to \mathcal{H} be τ-Lipschitz continuous. For an arbitrarily chosen initial point {x}_{0}, suppose that the sequence \{{x}_{n}\} is generated by an iterative procedure
and {y}_{n} satisfies
where {J}_{{\rho}_{n},A}^{M}={(A+{\rho}_{n}M)}^{-1} and \{{\alpha}_{n}\},\{{\rho}_{n}\}\subset [0,\mathrm{\infty}) are scalar sequences such that \alpha ={limsup}_{n\to \mathrm{\infty}}{\alpha}_{n}<1, {\rho}_{n}\uparrow \rho \le +\mathrm{\infty}. Then the sequence \{{x}_{n}\} converges linearly to a solution of (1.1) for
and for
In the sequel, we give the following remarks to show that the main proofs of Theorems 3.3 and 3.4 of [1] are worth correcting.
Remark 3.1 By the r-strong monotonicity and s-Lipschitz continuity of the underlying operator A, it follows that for all x,y\in \mathcal{H}, if x\ne y,
r{\parallel x-y\parallel}^{2}\le \langle A(x)-A(y),x-y\rangle \le \parallel A(x)-A(y)\parallel \parallel x-y\parallel \le s{\parallel x-y\parallel}^{2},
showing that r\le s.
Remark 3.2 From Remark 3.1, it is easy to prove that the convergence rate satisfies {\theta}_{n}>1 on p.146 of [1] for n\ge 0. Therefore, the strong convergence claim of [[1], Theorem 3.3] is not true.
In fact, from Remark 3.1 and the definition of the convergence rate in line 11, p.146 of [1], we have the following estimate:
and
since {\alpha}_{n}>0 for all n\ge 0.
Remark 3.3 Similarly, we can show that the conditions for the convergence result of [[1], Theorem 3.4] must be revised.
Indeed, from 0\le \alpha <1 and the assumption, it follows that the conditions for the convergence of a sequence \{{x}_{n}\} generated by the iterative algorithm are equivalent to
that is,
which should be revised because it follows from the assumption, (3.1), and Remark 3.1 that
Thus, if \tau \ge 1, then the conditions for the convergence are not true.
Next, in order to correct the main results in [1], we construct the following over-relaxed proximal point algorithm frameworks with errors based on Lemmas 3.1 and 3.2.
Algorithm 3.1 Step 1. Choose an arbitrary initial point {u}_{0}\in \mathcal{H}.
Step 2. Choose sequences \{{\alpha}_{n}\}, \{{\delta}_{n}\}, and \{{\rho}_{n}\} such that for n\ge 0, \{{\alpha}_{n}\}, \{{\delta}_{n}\}, and \{{\rho}_{n}\} are three sequences in [0,\mathrm{\infty}) satisfying
Step 3. Let \{{x}_{n}\}\subset \mathcal{H} be generated by the following iterative procedure:
where \{{e}_{n}\} is an error sequence in ℋ that accounts for a possible inexact computation of the resolvent operator point, satisfying {\sum}_{n=0}^{\mathrm{\infty}}\parallel {e}_{n}\parallel <\mathrm{\infty}, and {y}_{n} satisfies
where n\ge 0, {J}_{{\rho}_{n},A}^{M,\eta}={(A+{\rho}_{n}M)}^{-1} and {\rho}_{n}>0.
Step 4. If {x}_{n} and {y}_{n} satisfy (3.2) to sufficient accuracy, stop; otherwise, set n:=n+1 and return to Step 2.
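To make the iteration concrete, the following minimal scalar sketch runs the exact-resolvent, error-free special case of Algorithm 3.1 ({e}_{n}\equiv 0, {y}_{n}={J}_{{\rho}_{n},A}^{M,\eta}(A({x}_{n})) computed exactly, constant {\rho}_{n} and {\alpha}_{n}) with A(x)=x and M(x)=x-1, so that the unique solution of 0\in M(x) is {x}^{\ast}=1 and the resolvent has the closed form (x+\rho )/(1+\rho ). This is our illustration, not an example from [1]:

```python
# Over-relaxed proximal point iteration (exact resolvent, no errors) for
# 0 in M(x) with M(x) = x - 1 and A(x) = x:
#   y_n     = J(A(x_n)) = (A + rho*M)^{-1}(x_n) = (x_n + rho) / (1 + rho)
#   x_{n+1} = (1 - alpha)*x_n + alpha*y_n
rho, alpha = 1.0, 1.35   # alpha > 1: over-relaxation
x = 5.0                  # arbitrary initial point x_0

for n in range(60):
    y = (x + rho) / (1.0 + rho)          # exact resolvent step
    x = (1.0 - alpha) * x + alpha * y    # over-relaxed update

print(abs(x - 1.0) < 1e-8)  # True: linear convergence to x* = 1
```

With \alpha =1.35 and \rho =1, the error contracts by the factor |1-\alpha +\alpha /(1+\rho )|=0.325 per step, so the iterates converge linearly to {x}^{\ast}=1.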
Remark 3.4 If {e}_{n}\equiv 0, {\delta}_{n}=\frac{1}{n}, and {\alpha}_{n}<1 for n\ge 0, then Algorithm 3.1 is reduced to the iterative algorithm in Theorem 3.4 of [1].
Algorithm 3.2 Step 1. Choose an arbitrary initial point {x}_{0}\in \mathcal{H}.
Step 2. Choose sequences \{{\alpha}_{n}\}, \{{\delta}_{n}\}, and \{{\rho}_{n}\} such that for n\ge 0, \{{\alpha}_{n}\}, \{{\delta}_{n}\}, and \{{\rho}_{n}\} are three sequences in [0,\mathrm{\infty}) satisfying
Step 3. Let \{{x}_{n}\}\subset \mathcal{H} be generated by the following iterative procedure:
where \{{e}_{n}\} is an error sequence in ℋ that accounts for a possible inexact computation of the resolvent operator point, satisfying {\sum}_{n=0}^{\mathrm{\infty}}\parallel {e}_{n}\parallel <\mathrm{\infty}, and {y}_{n} satisfies
where n\ge 0, {J}_{{\rho}_{n},A}^{M}={(A+{\rho}_{n}M)}^{-1} and {\rho}_{n}>0.
Step 4. If {x}_{n} and {y}_{n} satisfy (3.3) to sufficient accuracy, stop; otherwise, set n:=n+1 and return to Step 2.
Remark 3.5 If {e}_{n}\equiv 0 and {\alpha}_{n}<1 for n\ge 0, then Algorithm 3.2 is reduced to the iterative algorithm in Theorem 3.3 of [1].
4 Convergence analysis
In this section, we apply the over-relaxed proximal point Algorithms 3.1 and 3.2 to approximate the solution of (1.1) and, as a result, establish linear convergence.
Theorem 4.1 Let ℋ be a real Hilbert space, A:\mathcal{H}\to \mathcal{H} be r-strongly monotone and s-Lipschitz continuous, \eta :\mathcal{H}\times \mathcal{H}\to \mathcal{H} be τ-Lipschitz continuous, and M:\mathcal{H}\to {2}^{\mathcal{H}} be (A,\eta ,m)-maximal monotone. If for \gamma >\frac{1}{2},
and there exists a constant \rho \in (0,\frac{r}{m}) such that
then the sequence \{{x}_{n}\} generated by Algorithm 3.1 converges linearly to a solution {x}^{\ast} of the problem (1.1) with the convergence rate
where \alpha ={lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\alpha}_{n}>1 and {\rho}_{n}\uparrow \rho.
Proof Let {x}^{\ast} be a solution of the problem (1.1). Then it follows from Lemma 3.1 that
Let
Thus, by the assumptions of the theorem, Lemma 2.2, and (4.2), now we find the estimate
where
Thus, we have
Since {x}_{n+1}=(1-{\alpha}_{n}){x}_{n}+{\alpha}_{n}{y}_{n}+{e}_{n} gives {x}_{n+1}-{x}_{n}={\alpha}_{n}({y}_{n}-{x}_{n})+{e}_{n}, it follows that
Using the above arguments, we estimate that
This implies that
Since A is r-strongly monotone (and hence \parallel A(u)-A(v)\parallel \ge r\parallel u-v\parallel, \mathrm{\forall}u,v\in \mathcal{H}), it follows from (4.3) that the sequence \{{x}_{n}\} converges linearly to a solution {x}^{\ast} for
Hence, we have
where \alpha ={lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\alpha}_{n}, {\rho}_{n}\uparrow \rho. This completes the proof. □
Remark 4.1 The conditions (4.1) in Theorem 4.1 hold for some suitable values of constants, for example, \alpha =1.35, \gamma =1.5262, \tau =0.025, r=0.5, \rho =0.7348, m=0.6, s=0.6048 and the convergence rate \theta =0.7840<1.
From Theorem 4.1, we have the following result.
Theorem 4.2 Let ℋ be a real Hilbert space, A:\mathcal{H}\to \mathcal{H} be r-strongly monotone with r>1 and s-Lipschitz continuous, and M:\mathcal{H}\to {2}^{\mathcal{H}} be A-maximal monotone. If for \gamma >\frac{1}{2},
and there exists a constant \rho \in (0,\frac{r-1}{m}) such that
then the sequence \{{x}_{n}\} generated by Algorithm 3.2 converges linearly to a solution {x}^{\ast} of the problem (1.1) with the convergence rate
where \alpha ={lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\alpha}_{n}>1 and {\rho}_{n}\uparrow \rho.
Remark 4.2 In Theorems 4.1 and 4.2, if we apply the c-Lipschitz continuity of {M}^{-1} instead, it seems that strong convergence could be achieved (see, for example, [6–8]).
Remark 4.3 For an arbitrarily chosen initial point {x}_{0}, let the iterative sequence \{{x}_{n}\} be generated by the following over-relaxed proximal point algorithm:
and {y}_{n} satisfy
where n\ge 0, G={J}_{{\rho}_{n},A}^{M,\eta}={(A+{\rho}_{n}M)}^{-1}, the generalized resolvent operator associated with the (A,\eta ,m)-maximal monotone operator M, or G={J}_{{\rho}_{n},A}^{M}={(A+{\rho}_{n}M)}^{-1}, the resolvent operator associated with the A-maximal monotone operator M, and \{{\alpha}_{n}\},\{{\rho}_{n}\},\{{\delta}_{n}\}\subset [0,\mathrm{\infty}) are scalar sequences. Then we can obtain the corresponding results by using the same method as in Theorem 4.1 (see, for example, [10, 11]). Therefore, the results presented in this paper improve, generalize, and unify the corresponding results of recent works.
References
Verma RU: A hybrid proximal point algorithm based on the (A,\eta )-maximal monotonicity framework. Appl. Math. Lett. 2008, 21: 142–147. 10.1016/j.aml.2007.02.017
Agarwal RP, Huang NJ, Cho YJ: Generalized nonlinear mixed implicit quasivariational inclusions with setvalued mappings. J. Inequal. Appl. 2002, 7(6):807–828.
Agarwal RP, Verma RU: Inexact A-proximal point algorithm and applications to nonlinear variational inclusion problems. J. Optim. Theory Appl. 2010, 144(3):431–444. 10.1007/s10957-009-9615-3
Ceng LC, Guu SM, Yao JC: Hybrid approximate proximal point algorithms for variational inequalities in Banach spaces. J. Inequal. Appl. 2009., 2009: Article ID 275208
Clarke FH: Optimization and Nonsmooth Analysis. Wiley, New York; 1983.
Huang ZY: A remark on the strong convergence of the overrelaxed proximal point algorithm. Comput. Math. Appl. 2010, 60: 1616–1619. 10.1016/j.camwa.2010.06.043
Huang ZY, Noor MA: On the convergence rate of the overrelaxed proximal point algorithm. Appl. Math. Lett. 2012, 25(11):1740–1743. 10.1016/j.aml.2012.02.003
Kim JK, Nguyen B: Regularization inertial proximal point algorithm for monotone hemicontinuous mapping and inverse strongly monotone mappings in Hilbert spaces. J. Inequal. Appl. 2010., 2010: Article ID 451916
Lan HY: A class of nonlinear (A,\eta )-monotone operator inclusion problems with relaxed cocoercive mappings. Adv. Nonlinear Var. Inequal. 2006, 9(2):1–11.
Lan HY: On hybrid (A,\eta ,m)-proximal point algorithm frameworks for solving general operator inclusion problems. J. Appl. Funct. Anal. 2012, 7(3):258–266.
Li F, Lan HY: Over-relaxed (A,\eta )-proximal point algorithm framework for approximating the solutions of operator inclusions. Adv. Nonlinear Var. Inequal. 2012, 15(1):99–109.
Verma RU: A general framework for the over-relaxed A-proximal point algorithm and applications to inclusion problems. Appl. Math. Lett. 2009, 22: 698–703. 10.1016/j.aml.2008.05.001
Verma RU: A-monotonicity and its role in nonlinear variational inclusions. J. Optim. Theory Appl. 2006, 129(3):457–467. 10.1007/s10957-006-9079-7
Verma RU: On general overrelaxed proximal point algorithm and applications. Optimization 2011, 60(4):531–536. 10.1080/02331930903499675
Acknowledgements
This work was supported by the Scientific Research Fund of Sichuan Provincial Education Department (10ZA136), Sichuan Province Youth Fund project (2011JTD0031) and the Cultivation Project of Sichuan University of Science and Engineering (2011PY01), and Artificial Intelligence of Key Laboratory of Sichuan Province (2012RYY04).
Competing interests
The author declares that they have no competing interests.
Author’s contributions
HYL conceived of the study and participated in its design and coordination, the proof of convergence of the theorems and gave some examples to show the main results. The author read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Lan, H.-Y. On over-relaxed (A,\eta ,m)-proximal point algorithm frameworks with errors and applications to general variational inclusion problems. J Inequal Appl 2013, 97 (2013). https://doi.org/10.1186/1029-242X-2013-97
DOI: https://doi.org/10.1186/1029-242X-2013-97
Keywords
(A,\eta ,m)-monotonicity
generalized proximal operator technique
over-relaxed (A,\eta ,m)-proximal point algorithm frameworks with errors
 general variational inclusion problem
 convergence analysis