Bounded perturbation resilience of the viscosity algorithm
Journal of Inequalities and Applications, volume 2016, Article number: 299 (2016)
Abstract
In this article, we investigate the bounded perturbation resilience of the viscosity algorithm and propose its superiorized version. The convergence of the proposed algorithm is analyzed for a nonexpansive mapping. A modified viscosity algorithm is also presented, together with its convergence analysis.
1 Introduction and preliminaries
Let H be a real Hilbert space and C be a nonempty closed and convex subset of H. A self-mapping \(f :C\rightarrow C\) is called a contraction with constant \(\rho\in (0, 1)\) if

$$\bigl\Vert f(x)-f(y)\bigr\Vert \leq\rho \Vert x-y\Vert \quad\text{for all } x, y\in C.$$
A mapping \(T :C\rightarrow C\) is called nonexpansive if

$$\Vert Tx-Ty\Vert \leq \Vert x-y\Vert \quad\text{for all } x, y\in C.$$
Denote by \(\mathit{Fix}(T)\) the set of fixed points of T, i.e., \(\mathit{Fix}(T) = \{x \in C: x = T x\}\). Throughout this article we assume that \(\mathit{Fix}(T)\neq\emptyset\).
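For concreteness, here is a minimal numerical illustration of the two definitions; the maps below are our own toy examples, not taken from the paper.

```python
import numpy as np

# Toy examples in H = R^2 (purely illustrative choices).
f = lambda x: 0.5 * x    # a contraction with constant rho = 0.5
T = lambda x: x[::-1]    # coordinate swap: nonexpansive, Fix(T) = {x : x[0] = x[1]}

x, y = np.array([1.0, 0.0]), np.array([0.0, 2.0])
assert np.linalg.norm(f(x) - f(y)) <= 0.5 * np.linalg.norm(x - y)  # contraction inequality
assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y)        # nonexpansiveness
```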
Fixed point problems for nonexpansive mappings [1–4] arise in various applications in diverse areas, such as convex feasibility problems, convex optimization problems, problems of finding zeros of monotone operators, and monotone variational inequalities (see [1, 5] and the references therein). Many iteration algorithms have been introduced to approximate a fixed point of a nonexpansive mapping in a Hilbert space or a Banach space (see, for example, [6–14]).
Iteration algorithms are usually divided into two kinds: algorithms with weak convergence, such as the Mann [15] and Ishikawa [16] iteration algorithms, and algorithms with strong convergence, such as the Halpern iteration algorithm [17], hybrid algorithms [18], and the shrinking projection algorithm [19].
As an extension of Halpern's iteration process, Moudafi [20] introduced the viscosity algorithm, which is defined as follows:

$$ x^{k+1}=\alpha_{k}f\bigl(x^{k}\bigr)+(1-\alpha_{k})Tx^{k},\quad k\geq0, $$(1)

where \(\{\alpha_{k}\}\) is a sequence in the interval \([0,1]\). For a contraction f and a nonexpansive mapping T, Moudafi proved the strong convergence of \(\{x^{k}\}\) provided that \(\{\alpha_{k}\}\) satisfies the following conditions (a runnable sketch of (1) follows the list):
- (C1) \(\alpha_{k}\rightarrow0\) as \(k\rightarrow\infty\);
- (C2) \(\sum_{k=1}^{+\infty}\alpha_{k}=+\infty\);
- (C3) \(\sum_{k=1}^{+\infty}|\alpha_{k+1}-\alpha_{k}|<+\infty\) or \(\lim_{k\rightarrow\infty}\frac{\alpha_{k}}{\alpha_{k+1}}=1\).
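As a minimal sketch of iteration (1) (our illustration, using the standard choice \(\alpha_{k}=1/(k+1)\), which satisfies (C1)-(C3)):

```python
import numpy as np

def viscosity(f, T, x0, iters=2000):
    """Moudafi's viscosity iteration (1): x^{k+1} = a_k f(x^k) + (1 - a_k) T x^k,
    with a_k = 1/(k+1), which satisfies conditions (C1)-(C3)."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        a = 1.0 / (k + 1)
        x = a * f(x) + (1.0 - a) * T(x)
    return x

# With the toy maps from above: T swaps coordinates, f halves the vector.
T = lambda x: x[::-1]
f = lambda x: 0.5 * x
print(viscosity(f, T, [3.0, -1.0]))  # tends to the origin, the solution of the limiting VI
```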
Xu [21] studied the viscosity algorithm in the setting of Banach spaces and obtained the strong convergence theorems.
Recently, Yang and He [22] proposed the so-called general alternative regularization algorithm:

$$ x^{k+1}=T\bigl(\alpha_{k}f\bigl(x^{k}\bigr)+(1-\alpha_{k})x^{k}\bigr),\quad k\geq0, $$(2)

where \(\{\alpha_{k}\}\) is a sequence in the interval \([0,1]\). Actually, the general alternative regularization algorithm is a variant of the viscosity algorithm (1). Define \(T_{k}:C\rightarrow C\) by

$$ T_{k}x=T\bigl(\alpha_{k}f(x)+(1-\alpha_{k})x\bigr). $$(3)

Then the viscosity algorithm (2) can be rewritten as

$$ x^{k+1}=T_{k}\bigl(x^{k}\bigr). $$(4)
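In code, the operator \(T_{k}\) of (3) (as reconstructed above) is a simple composition, and the viscosity step (4) is one application of it:

```python
def make_Tk(f, T, a_k):
    """Operator (3) as reconstructed above: T_k x = T(a_k f(x) + (1 - a_k) x).
    One application of T_k is step (4): x^{k+1} = T_k(x^k)."""
    return lambda x: T(a_k * f(x) + (1.0 - a_k) * x)
```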
Under the conditions (C1)-(C3), Yang and He [22] showed the strong convergence of \(\{x^{k}\}\) provided that f is a Lipschitzian and strongly pseudo-contractive mapping. Obviously, every contraction is Lipschitzian and strongly pseudo-contractive. So, from Theorem 3.1 in [22], we can get the following result.
Theorem 1.1
Let C be a nonempty closed convex subset of a Hilbert space H. Let \(T : C \rightarrow C\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\) and let \(f:C \rightarrow C\) be a contraction. Assume that \(\{\alpha_{k}\}\subset(0,1)\) satisfies the conditions (C1)-(C3). Then the sequence \(\{x^{k}\}\) generated by (4) converges strongly to a fixed point \(x^{*}\) of T, where \(x^{*}\) is the unique solution of the variational inequality

$$\bigl\langle (I-f)x^{*}, x-x^{*}\bigr\rangle \geq0 \quad\textit{for all } x\in \mathit{Fix}(T).$$
The superiorization methodology was first proposed by Butnariu et al. [23] and formulated over general problem structures in [24]. Recently, there has been increasing interest in the superiorization methodology (see [25–30] and the references therein). Davidi et al. [25] analyzed the perturbation resilience of the class of block-iterative projection (BIP) methods. Jin et al. [30] studied the bounded perturbation resilience of projected scaled gradient (PSG) methods and applied the superiorization methodology to the PSG. Censor et al. [26] introduced the superiorized version of the dynamic string-averaging projection algorithm and revealed new information about the mathematical behavior of the superiorization methodology. Herman and Davidi [27] studied the advantages of superiorization for image reconstruction from a small number of projections. Nikazad et al. [28] proposed two acceleration schemes based on BIP methods. In [29], the authors investigated total variation superiorization schemes in proton computed tomography image reconstruction.
Our aim in this article is to investigate the superiorization and the bounded perturbation resilience of the viscosity algorithm and construct algorithms based on them.
Next, we list two lemmas needed in the proof of the main results.
Lemma 1.1
[6]
Assume \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

$$a_{n+1}\leq(1-\gamma_{n})a_{n}+\gamma_{n}\delta_{n}+\beta_{n},\quad n\geq0,$$

where \(\{\gamma_{n}\}\subset(0,1)\), \(\{\beta_{n}\}\subset(0,\infty)\), and \(\{\delta_{n}\}\subset\mathbb{R}\) satisfy
- (1) \(\sum_{n=1}^{\infty}\gamma_{n}=\infty\);
- (2) \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\) or \(\sum_{n=1}^{\infty}\gamma_{n}|\delta_{n}|<\infty\);
- (3) \(\sum_{n=1}^{\infty}\beta_{n}<\infty\).
Then \(\lim_{n\rightarrow\infty}a_{n}=0\).
Lemma 1.2
see [1], Corollary 4.18
Let \(D\subseteq H\) be nonempty closed and convex, let \(T:D\rightarrow H\) be nonexpansive, and let \(\{x_{n}\}\) be a sequence in D and \(x\in H\) such that \(x_{n}\rightharpoonup x\) and \(Tx_{n}-x_{n} \rightarrow 0\) as \(n\rightarrow+\infty\). Then \(x\in \mathit{Fix}(T)\).
2 The superiorization methodology
Consider some mathematically formulated problem, denoted by Problem P, whose solution set is denoted by \(\mathit{SOL}(P)\). The superiorization methodology of [24, 26, 27] is intended for constrained minimization problems:

$$ \min\bigl\{ \phi(x) : x\in\Psi_{P}\bigr\} , $$(5)

where \(\phi : \mathbb{R}^{J} \rightarrow \mathbb{R}\) is an objective function and \(\Psi_{P} = \mathit{SOL}(P)\subseteq \mathbb{R}^{J}\) is the solution set of problem P. Throughout this paper we assume \(\Psi_{P}\neq\emptyset\).
The superiorization methodology does not strive to solve (5); rather, its task is to find a point in \(\Psi_{P}\) that is superior, i.e., has a lower, but not necessarily minimal, value of the objective function ϕ.
From Theorem 1.1, the viscosity algorithm

$$x^{k+1}=T_{k}\bigl(x^{k}\bigr)=T\bigl(\alpha_{k}f\bigl(x^{k}\bigr)+(1-\alpha_{k})x^{k}\bigr)$$

converges to a fixed point \(x^{*}\) of the nonexpansive mapping T which satisfies the variational inequality

$$\bigl\langle (I-f)x^{*}, x-x^{*}\bigr\rangle \geq0 \quad\text{for all } x\in \mathit{Fix}(T).$$
It is well known that the constrained optimization problem

$$\min_{x\in \mathit{Fix}(T)}\phi(x)$$

is equivalent to the variational inequality

$$\bigl\langle \nabla\phi\bigl(x^{*}\bigr), x-x^{*}\bigr\rangle \geq0 \quad\text{for all } x\in \mathit{Fix}(T),$$

provided that ϕ is convex and differentiable.
Hence, if the function f is specially chosen, namely \(f=I-\nabla\phi\), the viscosity algorithm converges to a solution of the constrained optimization problem

$$\min_{x\in \mathit{Fix}(T)}\phi(x).$$
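To make the link explicit (our derivation, not spelled out in the original): choosing \(f=I-\nabla\phi\) turns the limiting variational inequality into the first-order optimality condition over \(\mathit{Fix}(T)\),

$$\bigl\langle (I-f)x^{*}, x-x^{*}\bigr\rangle =\bigl\langle \nabla\phi\bigl(x^{*}\bigr), x-x^{*}\bigr\rangle \geq0 \quad\text{for all } x\in \mathit{Fix}(T),$$

which characterizes the minimizers of a convex differentiable ϕ over \(\mathit{Fix}(T)\).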
This motivates us to investigate the viscosity algorithm by using superiorization methodology. In the following sections, we first investigate the bounded perturbation resilience of the viscosity algorithm and use the bounded perturbation resilience to introduce an inertial viscosity algorithm. Then we present the superiorized version of the viscosity algorithm and analyze its convergence. Finally, we give a modified viscosity algorithm.
3 The bounded perturbation resilience of the viscosity algorithm
In this section, we discuss the bounded perturbation resilience of the viscosity algorithm (2).
First, we present the definition of bounded perturbation resilience which is essential in the superiorization algorithm.
Definition 3.1
[26]
Given a problem P, an algorithmic operator \(A : H\rightarrow H\) is said to be bounded perturbations resilient iff the following is true: if the sequence \(\{x^{k}\}_{k=0}^{\infty}\), generated by \(x^{k+1} = A(x^{k})\), for all \(k\geq 0\), converges to a solution of P for all \(x^{0}\in H\), then any sequence \(\{y^{k}\}_{k=0}^{\infty}\) of points in H that is generated by \(y^{k+1} = A(y^{k} + \alpha_{k}v^{k})\), for all \(k \geq0\), also converges to a solution of P provided that, for all \(k \geq0\), \(\alpha_{k}v^{k}\) are bounded perturbations, meaning that \(\alpha_{k}\geq 0\) for all \(k \geq 0\) such that \(\sum_{k=0}^{\infty}\alpha_{k}<\infty\) and such that the sequence \(\{v^{k}\}_{k=0}^{\infty}\) is bounded.
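Schematically (our sketch of the definition for a generic algorithmic operator A):

```python
import numpy as np

def perturbed_iteration(A, y0, alphas, vs):
    """y^{k+1} = A(y^k + alpha_k v^k). Bounded perturbations require
    alpha_k >= 0, sum(alpha_k) < infinity, and {v^k} norm-bounded;
    resilience means the limit is still a solution of problem P."""
    y = np.asarray(y0, dtype=float)
    for a_k, v_k in zip(alphas, vs):
        y = A(y + a_k * v_k)
    return y
```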
Since Theorem 1.1 holds, to establish the bounded perturbation resilience of the viscosity algorithm we only need to prove the following result.
Theorem 3.1
Let \(T : H \rightarrow H\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\) and let \(f: H \rightarrow H\) be a contraction. Let \(\{\alpha_{k}\}\) be a sequence of nonnegative numbers satisfying (C1)-(C3) and let \(\{\beta_{k}\}\) be a sequence of nonnegative numbers such that \(\beta_{k}\leq\alpha_{k}\) and \(\sum_{k=1}^{\infty}\beta_{k}<\infty\). Let \(\{v^{k}\}\subset H\) be a norm-bounded sequence. Let \(\{T_{k}\}\) be defined by (3). Then any sequence \(\{y^{k}\}\) generated by the iterative formula

$$ y^{k+1}=T_{k}\bigl(y^{k}+\beta_{k}v^{k}\bigr),\quad k\geq0, $$(6)

converges strongly to a fixed point \(y^{*}\) of T, where \(y^{*}\) is the unique solution of the variational inequality

$$ \bigl\langle (I-f)y^{*}, y-y^{*}\bigr\rangle \geq0 \quad\textit{for all } y\in \mathit{Fix}(T). $$(7)
Proof
First we show that \(\{y^{k}\}\) is bounded. Let

$$u^{k}=y^{k}+\beta_{k}v^{k}.$$
Then, for any \(p\in \mathit{Fix}(T)\), we have
where \(M_{1}:=\sup_{k\in \mathbb{N}}\{\|v^{k}\|\}\). From (6) and (8), it follows that
By induction,
and hence \(\{y^{k}\}\) is bounded; so are \(\{u^{k}\}\) and \(\{f(u^{k})\}\).
We now show that
Indeed we have
where \(M_{2}:=\max\{\sup_{k\in \mathbb{N}}\{\|f(u^{k})\|+\|y^{k}\|\}, M_{1}\}\). By (8), we have
From (6), it follows that
where \(M_{3}:=\sup_{k\in \mathbb{N}}\{\|f(u^{k})\|+\|u^{k}\|\}\). By Lemma 1.1, we obtain
Combining (11) and (12), we get (10).
We next show
where \(y^{*}\) satisfies (7). Indeed take a subsequence \(\{y^{k_{l}}\}\) of \(\{y^{k}\}\) such that
We may assume that \(y^{k_{l}}\rightharpoonup \bar{y}\). It follows from Lemma 1.2 and (10) that \(\bar{y}\in \mathit{Fix}(T)\). Hence by (7) we obtain
as required.
Finally we show that \(y^{k}\rightarrow y^{*}\). Using (9), we have
where
and
where \(M_{4}<+\infty\) by the boundedness of \(\{y^{k}\}\). It is easily seen that \(\bar{\alpha}_{k}\rightarrow0 \), \(\sum_{k=1}^{\infty}\bar{\alpha}_{k}=\infty\), and \(\limsup_{k\rightarrow\infty} \delta_{k}\leq0\). By Lemma 1.1, we obtain the result. □
The inertial-type algorithms originate from the heavy ball method, an implicit discretization of second-order dynamical systems in time [31, 32]; their main feature is that the next iterate is defined by making use of the previous two iterates. Inertial-type algorithms speed up the corresponding algorithms without inertial extrapolation. A classical example is the well-known FISTA, which has a better global rate of convergence than ISTA (see [33, 34]). Recently, there has been increasing interest in inertial-type algorithms, e.g., inertial extragradient methods [35], the inertial Mann iteration algorithm [36, 37], and the inertial ADMM [38].
Combining the inertial-type algorithm with the viscosity algorithm, we construct the inertial viscosity algorithm (13), where the inertial term \(v^{k}\) is defined by (14) and (15) (a sketch is given below).
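As a hedged sketch of (13)-(15): the iteration below is the perturbed step (6) with \(v^{k}\) taken as a normalization of \(x^{k}-x^{k-1}\) satisfying \(\|v^{k}\|\leq1\), the bound used in the proof of Theorem 3.2; the specific normalization is our assumption.

```python
import numpy as np

def inertial_viscosity(f, T, x0, alphas, betas):
    """Hedged sketch of the inertial viscosity algorithm (13). The direction
    v^k stands in for (14)-(15): a normalization of x^k - x^{k-1} with
    ||v^k|| <= 1 (our assumption, required by the proof of Theorem 3.2)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for a, b in zip(alphas, betas):
        d = x - x_prev
        nd = np.linalg.norm(d)
        v = d / nd if nd > 1.0 else d               # assumed normalization: ||v^k|| <= 1
        u = x + b * v                               # bounded perturbation, b = beta_k
        x_prev, x = x, T(a * f(u) + (1.0 - a) * u)  # step (6): x^{k+1} = T_k(u)
    return x

# Parameters obeying (C1)-(C3), beta_k <= alpha_k, and sum(beta_k) < infinity:
K = 2000
alphas = [1.0 / (k + 2) for k in range(K)]
betas = [1.0 / (k + 2) ** 2 for k in range(K)]
```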
Theorem 3.2
Let \(T : H \rightarrow H\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\) and let \(f: H \rightarrow H\) be a contraction. Let \(\{\alpha_{k}\}\) be a sequence of nonnegative numbers satisfying (C1)-(C3) and let \(\{\beta_{k}\}\) be a sequence of nonnegative numbers such that \(\beta_{k}\leq\alpha_{k}\) and \(\sum_{k=1}^{\infty}\beta_{k}<\infty\). Then any sequence \(\{x^{k}\}\) generated by the iterative formula (13) converges strongly to a fixed point \(y^{*}\) of T, where \(y^{*}\) is the unique solution of the variational inequality (7).
Proof
From (14) and (15), it is obvious that \(\|v^{k}\|\leq1\). Thus \(\{v^{k}\}\) is a norm-bounded sequence. Using Theorem 3.1, we get the result. □
4 The superiorized version of the viscosity algorithm
In this section, we present the superiorized version of the viscosity algorithm and show its convergence.
Let \(\phi: H \rightarrow \mathbb{R}\) be a convex continuous function. Consider the set

$$C_{\min}=\bigl\{ x\in H : \phi(x)\leq\phi(y) \text{ for all } y\in H\bigr\} $$

and assume that \(C_{\min}\neq\emptyset\).
Algorithm 4.1
The superiorized version of the viscosity algorithm
- (0) Initialization: Let N be a natural number and let \(y^{0} \in H\) be an arbitrary user-chosen vector.
- (1) Iterative step: Given a current vector \(y^{k}\), pick an \(N_{k} \in \{1, 2, \ldots, N\}\) and start an inner loop of calculations as follows:
  - (1.1) Inner loop initialization: Define \(y^{k,0} = y^{k}\).
  - (1.2) Inner loop step: Given \(y^{k,n}\), as long as \(n < N_{k}\) do as follows:
    - (1.2.1) Pick a \(0 < \beta_{k,n}\leq 1\), \(n=0,\ldots,N_{k}-1\), in a way that guarantees that
      $$ \sum_{k=1}^{\infty}\sum_{n=0}^{N_{k}-1}\beta_{k,n}< \infty. $$(17)
    - (1.2.2) Let \(\partial \phi(y^{k,n})\) be the subgradient set of ϕ at \(y^{k,n}\) and define \(v^{k,n}\) by
      $$ v^{k,n}= \begin{cases} -\frac{s^{k,n}}{\Vert s^{k,n}\Vert } & \text{if } 0 \notin\partial \phi(y^{k,n}), \\ 0 & \text{if } 0\in\partial \phi(y^{k,n}), \end{cases} $$(18)
      where \(s^{k,n}\in\partial \phi(y^{k,n})\).
    - (1.2.3) Calculate
      $$ y^{k,n+1}=y^{k,n}+\beta_{k,n}v^{k,n}, $$(19)
      and go to step (1.2).
  - (1.3) Exit the inner loop with the vector \(y^{k,N_{k}}\).
  - (1.4) Calculate
    $$ y^{k+1}=T_{k}\bigl(y^{k,N_{k}}\bigr), $$(20)
    and go back to step (1).
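A compact Python sketch of Algorithm 4.1 (the name `subgrad_phi` is ours; for simplicity it fixes \(N_{k}=N\) and assumes `subgrad_phi` returns one subgradient of ϕ, with the zero vector signalling \(0\in\partial\phi\)):

```python
import numpy as np

def superiorized_viscosity(T, f, subgrad_phi, y0, alphas, betas, N):
    """Sketch of Algorithm 4.1. betas[k][n] must satisfy 0 < beta_{k,n} <= 1
    and the summability condition (17); alphas obeys (C1)-(C3)."""
    y = np.asarray(y0, dtype=float)
    for k, a in enumerate(alphas):
        z = y.copy()                                     # (1.1): y^{k,0} = y^k
        for n in range(N):                               # (1.2), taking N_k = N
            s = subgrad_phi(z)                           # one s^{k,n} in the subgradient set
            ns = np.linalg.norm(s)
            v = -s / ns if ns > 0 else np.zeros_like(s)  # (18)
            z = z + betas[k][n] * v                      # (19): nonascending perturbation
        y = T(a * f(z) + (1.0 - a) * z)                  # (1.4)/(20): y^{k+1} = T_k(y^{k,N_k})
    return y
```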
We now present the convergence of Algorithm 4.1.
Theorem 4.1
Let \(T : H \rightarrow H\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\). Then any sequence \(\{y^{k}\}_{k=0}^{\infty}\) generated by Algorithm 4.1 converges strongly to a fixed point \(y^{*}\) of T, where \(y^{*}\) is the unique solution of the variational inequality (7).
Proof
By Theorem 3.1, we only need to show that \(\{y^{k,N_{k}}-y^{k}\}\) is bounded. From step (1.2.3) of Algorithm 4.1, the sequence \(\{y^{k}\}\) has the property that for each integer \(k\geq 1\) and each \(h\in \{1,2,\ldots,N_{k}\}\),

$$y^{k,h}=y^{k}+\sum_{n=0}^{h-1}\beta_{k,n}v^{k,n},$$

and thus, since \(\Vert v^{k,n}\Vert \leq1\),

$$\bigl\Vert y^{k,N_{k}}-y^{k}\bigr\Vert \leq\sum_{n=0}^{N_{k}-1}\beta_{k,n}\bigl\Vert v^{k,n}\bigr\Vert \leq\sum_{n=0}^{N_{k}-1}\beta_{k,n}.$$

So, by (17),

$$\sum_{k=1}^{\infty}\bigl\Vert y^{k,N_{k}}-y^{k}\bigr\Vert \leq\sum_{k=1}^{\infty}\sum_{n=0}^{N_{k}-1}\beta_{k,n}< \infty.$$
The bounded perturbation resilience secured by Theorem 3.1 guarantees the convergence of \(\{y^{k}\}\) to the unique solution of (7). □
5 A modified viscosity algorithm
Algorithm 4.1 can be seen as a unified framework from which particular algorithms can be constructed. In this section, we introduce a modified viscosity algorithm by choosing a special function ϕ in Algorithm 4.1.
Define a convex function \(\phi:H\rightarrow \mathbb{R}\) by

$$\phi(x)=\frac{1}{2}\Vert x\Vert ^{2}-h(x),$$

where \(h:H\rightarrow \mathbb{R}\) is a continuous function. Then the set

$$C_{\min}=\bigl\{ x\in H : \phi(x)\leq\phi(y) \text{ for all } y\in H\bigr\} $$

equals

$$\bigl\{ x\in H : 0\in\partial\phi(x)\bigr\} .$$

Furthermore, if we assume that \(h:H\rightarrow \mathbb{R}\) is a differentiable function, then

$$C_{\min}=\bigl\{ x\in H : x=\nabla h(x)\bigr\} =\mathit{Fix}(f),$$

where \(f(x)=\nabla h(x)\). It is obvious that \(\nabla \phi=I-f\).
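Under this reading of ϕ, the inner step (22) of Algorithm 5.1 below is exactly an unnormalized gradient step on ϕ (a one-line derivation of ours):

$$y^{k,n+1}=\beta_{k,n}f\bigl(y^{k,n}\bigr)+(1-\beta_{k,n})y^{k,n}=y^{k,n}-\beta_{k,n}(I-f) \bigl(y^{k,n}\bigr)=y^{k,n}-\beta_{k,n}\nabla\phi\bigl(y^{k,n}\bigr);$$

that is, Algorithm 5.1 replaces the normalized subgradient direction (18) of Algorithm 4.1 by a plain gradient step on ϕ.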
From Algorithm 4.1, we construct the following algorithm.
Algorithm 5.1
A modified viscosity algorithm
- (0) Initialization: Let N be a natural number and let \(y^{0} \in C\) be an arbitrary user-chosen vector.
- (1) Iterative step: Given a current vector \(y^{k}\), pick an \(N_{k} \in \{1, 2, \ldots, N\}\) and start an inner loop of calculations as follows:
  - (1.1) Inner loop initialization: Define \(y^{k,0} = y^{k}\).
  - (1.2) Inner loop step: Given \(y^{k,n}\), as long as \(n < N_{k}\) do as follows:
    - (1.2.1) Pick a \(0 \leq\beta_{k,0}\leq 1\) for \(N_{k}=1\), and \(0 <\beta_{k,n}\leq 1\), \(n=0,\ldots,N_{k}-1\), for \(N_{k}>1\), in a way that guarantees that
      $$ \sum_{k=1}^{\infty}\sum_{n=0}^{N_{k}-1}\beta_{k,n}< \infty. $$(21)
    - (1.2.2) Calculate
      $$ y^{k,n+1}=\beta_{k,n}f\bigl(y^{k,n}\bigr)+(1-\beta_{k,n})y^{k,n}, $$(22)
      and go to step (1.2).
  - (1.3) Exit the inner loop with the vector \(y^{k,N_{k}}\).
  - (1.4) Calculate
    $$ y^{k+1}=T_{k}\bigl(y^{k,N_{k}}\bigr), $$(23)
    and go back to step (1).
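A sketch of Algorithm 5.1 under the same simplifications as before (\(N_{k}=N\) fixed; parameter arrays supplied by the user):

```python
import numpy as np

def modified_viscosity(T, f, y0, alphas, betas, N):
    """Sketch of Algorithm 5.1. The inner loop (22) is a convex combination
    with f, i.e., a gradient step on phi = ||.||^2 / 2 - h when f = grad h."""
    y = np.asarray(y0, dtype=float)
    for k, a in enumerate(alphas):
        z = y.copy()                      # (1.1): y^{k,0} = y^k
        for n in range(N):                # inner loop (1.2), taking N_k = N
            b = betas[k][n]
            z = b * f(z) + (1.0 - b) * z  # (22)
        y = T(a * f(z) + (1.0 - a) * z)   # (23): y^{k+1} = T_k(y^{k,N_k})
    return y
```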
Remark 5.1
(1) In the modified viscosity algorithm, \(\beta_{k,0}\) can be taken to be zero. Furthermore, the modified viscosity algorithm reduces to the viscosity algorithm (2) if \(N_{k}=1\) and \(\beta_{k,0}=0\) for all \(k\in \mathbb{N}\).
(2) In the superiorized version of the viscosity algorithm, it follows from the definition of \(v^{k,n}\) that \(\|v^{k,n}\|\leq1\). However, in the modified viscosity algorithm, the boundedness of \(\{v^{k}:=y^{k,N_{k}}-y^{k}\}\) is not obvious, so we need to prove it.
Theorem 5.1
Let C be a nonempty closed convex subset of a Hilbert space H and let \(T : C \rightarrow C\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\). Then any sequence \(\{y^{k}\}_{k=0}^{\infty}\), generated by Algorithm 5.1, converges strongly to a fixed point \(y^{*}\) of T, where \(y^{*}\) is the unique solution of the variational inequality
Proof
Similar to the proof of Theorem 4.1, we only need to show the boundedness of \(\{y^{k,N_{k}}-y^{k}\}\). Let \(p\in \mathit{Fix}(T)\). Then, for \(n=0,1,\ldots,N_{k}-1\), it follows that
So, we have
where the first inequality comes from the nonexpansivity of T. By induction,
Thus \(\{y^{k}\}\) and \(\{y^{k,n}\}\), \(n=1,\ldots,N_{k}\), are bounded, so is \(\{f(y^{k,n})\}\), \(n=0,\ldots,N_{k}-1\). From (22), we have
So,
where \(M_{5}:=\sup_{k\in \mathbb{N}}\{\max_{0\leq n\leq N_{k}}\{\|f(y^{k,n})\|+\|y^{k,n}\|\}\}\). Thus
The bounded perturbation resilience secured by Theorem 3.1 guarantees the convergence of \(\{y^{k}\}\) to the unique solution \(y^{*}\) of the variational inequality in Theorem 5.1. □
References
Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)
Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38(3), 367-426 (1996)
Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)
Dong, QL, Yuan, HB: Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2015, 125 (2015)
Dong, QL, Lu, YY: A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl. 2015, 37 (2015)
Yao, Y, Liou, YC, Yao, JC: Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 127 (2015)
Yao, Y, Agarwal, RP, Postolache, M, Liou, YC: Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 183 (2014)
Yao, Y, Postolache, M, Kang, SM: Strong convergence of approximated iterations for asymptotically pseudocontractive mappings. Fixed Point Theory Appl. 2014, 100 (2014)
Yao, Y, Postolache, M, Liou, YC, Yao, Z: Construction algorithms for a class of monotone variational inequalities. Optim. Lett. 10, 1519-1528 (2016)
Yao, Y, Liou, YC, Kang, SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. 55, 801-811 (2013)
Yao, Y, Noor, MA, Noor, KI, Liou, YC, Yaqoob, H: Modified extragradient method for a system of variational inequalities in Banach spaces. Acta Appl. Math. 110, 1211-1224 (2010)
Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
Ishikawa, S: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147-150 (1974)
Halpern, B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957-961 (1967)
Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)
Kimura, Y, Nakajo, K, Takahashi, W: Convexity of the set of fixed points of a quasi-pseudocontractive type Lipschitz mapping and the shrinking projection method. Sci. Math. Jpn. 70, 213-220 (2009)
Moudafi, A: Viscosity approximation methods for fixed points problems. J. Math. Anal. Appl. 241, 46-55 (2000)
Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
Yang, C, He, S: General alternative regularization methods for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014, 203 (2014)
Butnariu, D, Davidi, R, Herman, GT, Kazantsev, IG: Stable convergence behavior under summable perturbations of a class of projection methods for convex feasibility and optimization problems. IEEE J. Sel. Top. Signal Process. 1, 540-547 (2007)
Censor, Y, Davidi, R, Herman, GT: Perturbation resilience and superiorization of iterative algorithms. Inverse Probl. 26, 065008 (2010)
Davidi, R, Herman, GT, Censor, Y: Perturbation-resilient block-iterative projection methods with application to image reconstruction from projections. Int. Trans. Oper. Res. 16, 505-524 (2009)
Censor, Y, Zaslavski, AJ: Strict Fejér monotonicity by superiorization of feasibility-seeking projection methods. J. Optim. Theory Appl. 165, 172-187 (2015)
Herman, GT, Davidi, R: Image reconstruction from a small number of projections. Inverse Probl. 24, 045011 (2008)
Nikazad, T, Davidi, R, Herman, G: Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction. Inverse Probl. 28, 035005 (2012)
Penfold, SN, Schulte, RW, Censor, Y, Rosenfeld, AB: Total variation superiorization schemes in proton computed tomography image reconstruction. Med. Phys. 37, 5887-5895 (2010)
Jin, W, Censor, Y, Jiang, M: Bounded perturbation resilience of projected scaled gradient methods. Comput. Optim. Appl. 63, 365-392 (2016)
Alvarez, F: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773-782 (2004)
Alvarez, F, Attouch, H: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3-11 (2001)
Beck, A, Teboulle, M: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183-202 (2009)
Chambolle, A, Dossal, C: On the convergence of the iterates of the “Fast Iterative Shrinkage/Thresholding Algorithm”. J. Optim. Theory Appl. 166, 968-982 (2015)
Dong, QL, Lu, YY, Yang, J: The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 65, 2217-2226 (2016)
Bot, RI, Csetnek, ER, Hendrich, C: Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472-487 (2015)
Maingé, PE: Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223-236 (2008)
Chen, C, Ma, S, Yang, J: A general inertial proximal point algorithm for mixed variational inequality problem. SIAM J. Optim. 25, 2120-2142 (2015)
Acknowledgements
This work was supported by the National Natural Science Foundation of China (No. 61379102) and the Fundamental Research Funds for the Central Universities (No. 3122016L006).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Dong, QL, Zhao, J, He, S: Bounded perturbation resilience of the viscosity algorithm. J. Inequal. Appl. 2016, 299 (2016). https://doi.org/10.1186/s13660-016-1242-6