
Bounded perturbation resilience of the viscosity algorithm

Abstract

In this article, we investigate the bounded perturbation resilience of the viscosity algorithm and propose its superiorized version. The convergence of the proposed algorithm is analyzed for a nonexpansive mapping. A modified viscosity algorithm and its convergence analysis are also presented.

1 Introduction and preliminaries

Let H be a real Hilbert space and C be a nonempty closed and convex subset of H. A self-mapping \(f :C\rightarrow C\) is called a contraction with constant \(\rho\in (0, 1)\) if

$$\bigl\| f(x)-f(y)\bigr\| \leq\rho \|x-y\|,\quad \forall x,y\in C. $$

A mapping \(T :C\rightarrow C\) is called nonexpansive if

$$\|Tx-Ty\|\leq \|x-y\|, \quad\forall x,y\in C. $$

Denote by \(\mathit{Fix}(T)\) the set of fixed points of T, i.e., \(\mathit{Fix}(T) = \{x \in C: x = T x\}\). Throughout this article we assume that \(\mathit{Fix}(T)\neq\emptyset\).

The fixed point problems for nonexpansive mappings [1–4] capture various applications in diversified areas, such as convex feasibility problems, convex optimization problems, problems of finding zeros of monotone operators, and monotone variational inequalities (see [1, 5] and the references therein). Many iteration algorithms have been introduced to approximate a fixed point of a nonexpansive mapping in a Hilbert space or a Banach space (see, for example, [6–14]).

Iteration algorithms are usually divided into two kinds: algorithms with weak convergence, such as the Mann iteration algorithm [15] and the Ishikawa iteration algorithm [16], and algorithms with strong convergence, such as the Halpern iteration algorithm [17], hybrid algorithms [18], and the shrinking projection algorithm [19].

As an extension of Halpern’s iteration process, Moudafi [20] introduced the viscosity algorithm, which is defined as follows:

$$ x^{k+1}=\alpha_{k}f\bigl(x^{k} \bigr)+(1-\alpha_{k})Tx^{k} $$
(1)

where \(\{\alpha_{k}\}\) is a sequence in the interval \([0,1]\). For a contraction f and a nonexpansive mapping T, Moudafi proved the strong convergence of \(\{x^{k}\}\) provided that \(\{\alpha_{k}\}\) satisfies the following conditions:

(C1) \(\alpha_{k}\rightarrow0\) (\(k\rightarrow\infty\));

(C2) \(\sum_{k=1}^{+\infty}\alpha_{k}=+\infty\);

(C3) \(\sum_{k=1}^{+\infty}|\alpha_{k+1}-\alpha_{k}|<+\infty\) or \(\lim_{k\rightarrow\infty}\frac {\alpha_{k}}{\alpha_{k+1}}=1\).

Xu [21] studied the viscosity algorithm in the setting of Banach spaces and obtained strong convergence theorems.

Recently, Yang and He [22] proposed the so-called general alternative regularization algorithm:

$$ x^{k+1}=T\bigl(\alpha_{k}f\bigl(x^{k} \bigr)+(1-\alpha_{k})x^{k}\bigr), $$
(2)

where \(\{\alpha_{k}\}\) is a sequence in the interval \([0,1]\). Actually, the general alternative regularization algorithm is a variant of the viscosity algorithm (1). Define \(T_{k}:C\rightarrow C\) by

$$ T_{k}x=T\bigl(\alpha_{k}f(x)+(1- \alpha_{k})x\bigr). $$
(3)

Then the viscosity algorithm (2) can be rewritten as

$$ x^{k+1}=T_{k}\bigl(x^{k}\bigr). $$
(4)

Under the conditions (C1)-(C3), Yang and He [22] showed the strong convergence of \(\{x^{k}\}\) provided that f is a Lipschitzian and strongly pseudo-contractive mapping. Obviously, every contraction is Lipschitzian and strongly pseudo-contractive, so from Theorem 3.1 in [22] we can get the following result.

Theorem 1.1

Let C be a nonempty closed convex subset of a Hilbert space H. Let \(T : C \rightarrow C\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\) and let \(f:C \rightarrow C\) be a contraction. Assume that \(\{\alpha_{k}\}\subset(0,1)\) satisfies the conditions (C1)-(C3). Then the sequence \(\{x^{k}\}\) generated by (4) converges strongly to a fixed point \(x^{*}\) of T, where \(x^{*}\) is the unique solution of the variational inequality

$$\bigl\langle x^{*}-f\bigl(x^{*}\bigr),x-x^{*}\bigr\rangle \geq0, \quad\forall x\in \mathit{Fix}(T). $$
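To make iteration (4) concrete, the following minimal sketch runs it numerically. The choices below are illustrative assumptions, not taken from the paper: T is the metric projection onto the closed unit ball (nonexpansive, with \(\mathit{Fix}(T)\) the ball itself) and \(f(x)=x/2\) is a contraction with \(\rho=1/2\); for these choices the unique solution of the variational inequality is \(x^{*}=0\).

```python
import numpy as np

def T(x):
    # Metric projection onto the closed unit ball: a nonexpansive mapping
    # whose fixed point set is the ball itself.
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def f(x):
    # An illustrative contraction with constant rho = 1/2.
    return 0.5 * x

def viscosity(x0, num_iters=5000):
    # Iteration (4): x^{k+1} = T(alpha_k f(x^k) + (1 - alpha_k) x^k),
    # with alpha_k = 1/(k+1), which satisfies (C1)-(C3).
    x = np.asarray(x0, dtype=float)
    for k in range(1, num_iters + 1):
        alpha_k = 1.0 / (k + 1)
        x = T(alpha_k * f(x) + (1.0 - alpha_k) * x)
    return x

print(viscosity([3.0, 4.0]))  # slowly approaches the zero vector
```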

The superiorization methodology was first proposed by Butnariu et al. [23] and formulated over general problem structures in [24]. Recently, there has been increasing interest in the superiorization methodology (see [25–30] and the references therein). Davidi et al. [25] analyzed the perturbation resilience of the class of block-iterative projection (BIP) methods. Jin et al. [30] studied the bounded perturbation resilience of projected scaled gradient (PSG) methods and applied the superiorization methodology to the PSG. Censor et al. [26] introduced the superiorized version of the dynamic string-averaging projection algorithm and revealed new information as regards the mathematical behavior of the superiorization methodology. Herman and Davidi [27] studied the advantages of superiorization for image reconstruction from a small number of projections. Nikazad et al. [28] proposed two acceleration schemes based on BIP methods. In [29], the authors investigated total variation superiorization schemes in proton computed tomography image reconstruction.

Our aim in this article is to investigate the superiorization and the bounded perturbation resilience of the viscosity algorithm and construct algorithms based on them.

Next, we list two lemmas needed in the proof of the main results.

Lemma 1.1

[6]

Assume \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

$$a_{n+1}\leq(1-\gamma_{n})a_{n}+ \gamma_{n}\delta_{n}+\beta_{n}, $$

where \(\{\gamma_{n}\}\subset(0,1)\) and \(\{\beta_{n}\}\subset(0,\infty)\) and \(\{\delta_{n}\}\subset\mathbb{R}\) satisfies

(1) \(\sum_{n=1}^{\infty}\gamma_{n}=\infty\);

(2) \(\limsup_{n\rightarrow\infty}\delta_{n}\leq0\) or \(\sum_{n=1}^{\infty}\gamma_{n}|\delta_{n}|<\infty\);

(3) \(\sum_{n=1}^{\infty}\beta_{n}<\infty\).

Then \(\lim_{n\rightarrow\infty}a_{n}=0\).

Lemma 1.2

[1], Corollary 4.18

Let \(D\subseteq \mathcal{H}\) be nonempty closed and convex, \(T:D\rightarrow \mathcal{H}\) be nonexpansive and let \(\{x_{n}\}\) be a sequence in D and \(x\in \mathcal{H}\) such that \(x_{n}\rightharpoonup x\) and \(Tx_{n}-x_{n} \rightarrow 0\) as \(n\rightarrow+\infty\). Then \(x\in \mathit{Fix}(T)\).

2 The superiorization methodology

Consider some mathematically formulated problem denoted by Problem P, the set of solutions of which is denoted by \(\mathit{SOL}(P)\). The superiorization methodology of [24, 26, 27] is intended for constrained minimization problems:

$$ \text{minimize } \bigl\{ \phi(x) | x\in \Psi_{P}\bigr\} , $$
(5)

where \(\phi : \mathbb{R}^{J} \rightarrow \mathbb{R}\) is an objective function and \(\Psi_{P}= \mathit{SOL}(P)\subseteq \mathbb{R}^{J}\) is the solution set of a problem P. Throughout this paper we assume that \(\Psi_{P} = \mathit{SOL}(P)\neq\emptyset\).

The superiorization methodology does not strive to solve (5); rather, its task is to find a point in \(\Psi_{P}\) which is superior, i.e., has a lower, but not necessarily minimal, value of the objective function ϕ.

From Theorem 1.1, the viscosity algorithm

$$x^{k+1}=T\bigl(\alpha_{k}f\bigl(x^{k}\bigr)+(1- \alpha_{k})x^{k}\bigr), $$

converges to a fixed point \(x^{*}\) of the nonexpansive mapping T, where \(x^{*}\) satisfies the following variational inequality:

$$\bigl\langle x^{*}-f\bigl(x^{*}\bigr),x-x^{*}\bigr\rangle \geq0, \quad\forall x\in \mathit{Fix}(T). $$

It is well known that the constrained optimization problem

$$\min_{x\in C}\phi(x) $$

is equivalent to the following variational inequality:

$$\text{find } x^{*}\in C,\quad \text{such that } \bigl\langle \nabla\phi\bigl(x^{*} \bigr),x-x^{*}\bigr\rangle \geq0, \quad\forall x\in C, $$

provided that ϕ is convex and differentiable.

Hence, if the function f is specially chosen (namely \(f=I-\nabla\phi\)), the viscosity algorithm converges to a solution of the constrained optimization problem:

$$\min_{x\in \mathit{Fix}(T)}\phi(x). $$

This motivates us to investigate the viscosity algorithm by means of the superiorization methodology. In the following sections, we first investigate the bounded perturbation resilience of the viscosity algorithm and use it to introduce an inertial viscosity algorithm. Then we present the superiorized version of the viscosity algorithm and analyze its convergence. Finally, we give a modified viscosity algorithm.

3 The bounded perturbation resilience of the viscosity algorithm

In this section, we discuss the bounded perturbation resilience of the viscosity algorithm (2).

First, we present the definition of bounded perturbation resilience, which is essential to the superiorization methodology.

Definition 3.1

[26]

Given a problem P, an algorithmic operator \(A : H\rightarrow H\) is said to be bounded perturbation resilient iff the following holds: if the sequence \(\{x^{k}\}_{k=0}^{\infty}\) generated by \(x^{k+1} = A(x^{k})\), for all \(k\geq 0\), converges to a solution of P for every \(x^{0}\in H\), then any sequence \(\{y^{k}\}_{k=0}^{\infty}\) of points in H generated by \(y^{k+1} = A(y^{k} + \alpha_{k}v^{k})\), for all \(k \geq0\), also converges to a solution of P, provided that the \(\alpha_{k}v^{k}\) are bounded perturbations, meaning that \(\alpha_{k}\geq 0\) for all \(k \geq 0\), \(\sum_{k=0}^{\infty}\alpha_{k}<\infty\), and the sequence \(\{v^{k}\}_{k=0}^{\infty}\) is bounded.
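As a sketch of this definition, the following run (reusing the illustrative T and f from the example in Section 1) feeds bounded, summably weighted perturbations into the operator \(T_{k}\) of (3); Theorem 3.1 below shows that such runs retain strong convergence.

```python
import numpy as np

def perturbed_viscosity(T, f, y0, num_iters=5000, seed=0):
    # Perturbed run y^{k+1} = T_k(y^k + beta_k v^k), cf. (6):
    # beta_k = 1/(k+1)^2 is summable and satisfies beta_k <= alpha_k,
    # and each v^k is drawn from [-1, 1]^n, hence norm bounded, so the
    # beta_k v^k are bounded perturbations in the sense of Definition 3.1.
    rng = np.random.default_rng(seed)
    y = np.asarray(y0, dtype=float)
    for k in range(1, num_iters + 1):
        alpha_k = 1.0 / (k + 1)
        beta_k = 1.0 / (k + 1) ** 2
        v_k = rng.uniform(-1.0, 1.0, size=y.shape)
        u = y + beta_k * v_k
        y = T(alpha_k * f(u) + (1.0 - alpha_k) * u)  # y^{k+1} = T_k(u^k)
    return y
```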

Since Theorem 1.1 holds, to establish the bounded perturbation resilience of the viscosity algorithm we only need to prove the following result.

Theorem 3.1

Let \(T : H \rightarrow H\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\) and let \(f: H \rightarrow H\) be a contraction. Let \(\{\alpha_{k}\}\) be a sequence of nonnegative numbers satisfying (C1)-(C3) and let \(\{\beta_{k}\}\) be a sequence of nonnegative numbers such that \(\beta_{k}\leq\alpha_{k}\) and \(\sum_{k=1}^{\infty}\beta_{k}<\infty\). Let \(\{v^{k}\}\subset H\) be a norm-bounded sequence. Let \(\{T_{k}\}\) be defined by (3). Then any sequence \(\{y^{k}\}\) generated by the iterative formula

$$ y^{k+1} = T_{k}\bigl(y^{k} + \beta_{k}v^{k}\bigr) $$
(6)

converges strongly to a fixed point \(y^{*}\) of T, where \(y^{*}\) is the unique solution of the variational inequality

$$ \bigl\langle y^{*}-f\bigl(y^{*}\bigr),y-y^{*}\bigr\rangle \geq0,\quad \forall y\in \mathit{Fix}(T). $$
(7)

Proof

First we show that \(\{y^{k}\}\) is bounded. Let

$$ u^{k}=y^{k} +\beta_{k}v^{k}. $$
(8)

Then, for any \(p\in \mathit{Fix}(T)\), we have

$$ \bigl\| u^{k} -p\bigr\| =\bigl\| y^{k} +\beta_{k}v^{k}-p \bigr\| \leq\bigl\| y^{k}-p\bigr\| +\beta_{k}\bigl\| v^{k}\bigr\| \leq \bigl\| y^{k}-p\bigr\| +\beta_{k}M_{1}, $$
(9)

where \(M_{1}:=\sup_{k\in \mathbb{N}}\{\|v^{k}\|\}\). From (6) and (8), it follows that

$$ \begin{aligned} \bigl\| y^{k+1}-p\bigr\| &=\bigl\| T_{k}\bigl(u^{k}\bigr)-p\bigr\| \\ &\leq\bigl\| \bigl(\alpha_{k}f\bigl(u^{k}\bigr)+(1- \alpha_{k})u^{k}-p\bigr)\bigr\| \\ &\leq\alpha_{k}\bigl\| f\bigl(u^{k}\bigr)-p\bigr\| +(1- \alpha_{k})\bigl\| u^{k} -p\bigr\| \\ &\leq\alpha_{k}\bigl(\bigl\| f\bigl(u^{k}\bigr)-f(p)\bigr\| +\bigl\| f(p)-p\bigr\| \bigr)+(1-\alpha_{k})\bigl\| u^{k} -p\bigr\| \\ &\leq\alpha_{k}\bigl(\rho\bigl\| u^{k} -p\bigr\| +\bigl\| f(p)-p\bigr\| \bigr)+(1- \alpha_{k})\bigl\| u^{k} -p\bigr\| \\ &=\bigl(1-(1-\rho)\alpha_{k}\bigr)\bigl\| u^{k} -p\bigr\| + \alpha_{k}\bigl\| f(p)-p\bigr\| \\ &\leq\bigl(1-(1-\rho)\alpha_{k}\bigr)\bigl\| y^{k} -p\bigr\| + \alpha_{k}\bigl\| f(p)-p\bigr\| +\beta_{k}M_{1} \\ &=\bigl(1-(1-\rho)\alpha_{k}\bigr)\bigl\| y^{k} -p\bigr\| + \alpha_{k}\biggl(\bigl\| f(p)-p\bigr\| +\frac{\beta_{k}}{\alpha_{k}}M_{1}\biggr) \\ &\leq\bigl(1-(1-\rho)\alpha_{k}\bigr)\bigl\| y^{k} -p\bigr\| + \alpha_{k}\bigl(\bigl\| f(p)-p\bigr\| +M_{1}\bigr) \\ &\leq\max\biggl\{ \bigl\| y^{k} -p\bigr\| ,\frac{1}{1-\rho}\bigl(\bigl\| f(p)-p \bigr\| +M_{1}\bigr)\biggr\} . \end{aligned} $$

By induction,

$$\bigl\| y^{k}-p\bigr\| \leq\max\biggl\{ \bigl\| y^{0} -p\bigr\| ,\frac{1}{1-\rho} \bigl(\bigl\| f(p)-p\bigr\| +M_{1}\bigr)\biggr\} ,\quad k\geq0, $$

so \(\{y^{k}\}\) is bounded, and hence so are \(\{u^{k}\}\) and \(\{f(u^{k})\}\).

We now show that

$$ \bigl\| y^{k}-Ty^{k}\bigr\| \rightarrow0. $$
(10)

Indeed we have

$$\begin{aligned} \bigl\| y^{k+1}-Ty^{k}\bigr\| =&\bigl\| T_{k} \bigl(u^{k}\bigr)-Ty^{k}\bigr\| \\ =&\bigl\| T\bigl(\alpha_{k}f\bigl(u^{k}\bigr)+(1- \alpha_{k})u^{k}\bigr)-Ty^{k}\bigr\| \\ \leq&\bigl\| \alpha_{k}f\bigl(u^{k}\bigr)+(1- \alpha_{k})u^{k}-y^{k}\bigr\| \\ \leq&\alpha_{k}\bigl\| f\bigl(u^{k}\bigr)-y^{k}\bigr\| +(1- \alpha_{k})\bigl\| u^{k}-y^{k}\bigr\| \\ \leq&\alpha_{k}\bigl\| f\bigl(u^{k}\bigr)-y^{k}\bigr\| + \beta_{k}\bigl\| v^{k}\bigr\| \\ \leq&(\alpha_{k}+\beta_{k})M_{2}\rightarrow0, \end{aligned}$$
(11)

where \(M_{2}:=\max\{\sup_{k\in \mathbb{N}}\{\|f(u^{k})\|+\|y^{k}\|\}, M_{1}\}\). By (8), we have

$$\bigl\| u^{k}-u^{k-1}\bigr\| \leq\bigl\| y^{k}-y^{k-1}\bigr\| + \beta_{k}\bigl\| v^{k}\bigr\| +\beta_{k-1}\bigl\| v^{k-1}\bigr\| . $$

From (6), it follows that

$$ \begin{aligned} \bigl\| y^{k+1}-y^{k}\bigr\| ={}&\bigl\| T_{k}u^{k}-T_{k-1}u^{k-1} \bigr\| \\ \leq{}&\bigl\| \alpha_{k}f\bigl(u^{k}\bigr)+(1-\alpha_{k})u^{k}- \alpha_{k-1}f\bigl(u^{k-1}\bigr)-(1-\alpha_{k-1})u^{k-1} \bigr\| \\ ={}&\bigl\| (1-\alpha_{k}) \bigl(u^{k}-u^{k-1}\bigr)+( \alpha_{k}-\alpha_{k-1}) \bigl(f\bigl(u^{k-1} \bigr)-u^{k-1}\bigr) \\ &{} +\alpha_{k}\bigl(f\bigl(u^{k}\bigr)-f \bigl(u^{k-1}\bigr)\bigr)\bigr\| \\ \leq{}&\bigl(1-(1-\rho)\alpha_{k}\bigr)\bigl\| u^{k}-u^{k-1} \bigr\| +|\alpha_{k}-\alpha_{k-1}|\bigl\| f\bigl(u^{k-1} \bigr)-u^{k-1}\bigr\| \\ \leq{}&\bigl(1-(1-\rho)\alpha_{k}\bigr)\bigl\| y^{k}-y^{k-1} \bigr\| +|\alpha_{k}-\alpha_{k-1}|M_{3} + \beta_{k}\bigl\| v^{k}\bigr\| +\beta_{k-1}\bigl\| v^{k-1}\bigr\| , \end{aligned} $$

where \(M_{3}:=\sup_{k\in \mathbb{N}}\{\|f(u^{k})\|+\|u^{k}\|\}\). By Lemma 1.1, we obtain

$$ \bigl\| y^{k+1}-y^{k}\bigr\| \rightarrow0. $$
(12)

Combining (11) and (12), we get (10).

We next show

$$\limsup_{k\rightarrow\infty}\bigl\langle y^{*}-y^{k}, y^{*}-f\bigl(y^{*} \bigr)\bigr\rangle \leq0, $$

where \(y^{*}\) satisfies (7). Indeed take a subsequence \(\{y^{k_{l}}\}\) of \(\{y^{k}\}\) such that

$$\limsup_{k\rightarrow\infty}\bigl\langle y^{*}-y^{k}, y^{*}-f\bigl(y^{*} \bigr)\bigr\rangle = \lim_{l\rightarrow\infty}\bigl\langle y^{*}-y^{k_{l}}, y^{*}-f\bigl(y^{*}\bigr)\bigr\rangle . $$

We may assume that \(y^{k_{l}}\rightharpoonup \bar{y}\). It follows from Lemma 1.2 and (10) that \(\bar{y}\in \mathit{Fix}(T)\). Hence by (7) we obtain

$$\limsup_{k\rightarrow\infty}\bigl\langle y^{*}-y^{k}, y^{*}-f\bigl(y^{*} \bigr)\bigr\rangle = \bigl\langle y^{*}-\bar{y}, y^{*}-f\bigl(y^{*}\bigr)\bigr\rangle \leq0, $$

as required.

Finally we show that \(y^{k}\rightarrow y^{*}\). Using (9), we have

$$\begin{aligned} \bigl\| y^{k+1}-y^{*}\bigr\| ^{2} \leq&\bigl\| (1-\alpha_{k}) \bigl(u^{k}-y^{*}\bigr)+\alpha_{k}\bigl(f\bigl(u^{k} \bigr)-y^{*}\bigr)\bigr\| ^{2} \\ =&(1-\alpha_{k})^{2}\bigl\| u^{k}-y^{*}\bigr\| ^{2}+ \alpha_{k}^{2}\bigl\| f\bigl(u^{k}\bigr)-y^{*} \bigr\| ^{2} \\ &{} +2\alpha_{k}(1-\alpha_{k}) \bigl\langle u^{k}-y^{*},f\bigl(u^{k}\bigr)-y^{*}\bigr\rangle \\ =&(1-\alpha_{k})^{2}\bigl\| u^{k}-y^{*}\bigr\| ^{2}+ \alpha_{k}^{2}\bigl\| f\bigl(u^{k}\bigr)-y^{*} \bigr\| ^{2} \\ &{} +2\alpha_{k}(1-\alpha_{k}) \bigl\langle u^{k}-y^{*},f\bigl(u^{k}\bigr)-f\bigl(y^{*}\bigr)\bigr\rangle \\ &{} +2\alpha_{k}(1-\alpha_{k}) \bigl\langle u^{k}-y^{*},f\bigl(y^{*}\bigr)-y^{*}\bigr\rangle \\ \leq&\bigl(1-2\alpha_{k}+\alpha_{k}^{2}+2\rho \alpha_{k}(1-\alpha_{k})\bigr)\bigl\| u^{k}-y^{*} \bigr\| ^{2} \\ &{} + \alpha_{k}\bigl[\alpha_{k}\bigl\| f\bigl(u^{k} \bigr)-y^{*}\bigr\| ^{2} +2(1-\alpha_{k}) \bigl\langle u^{k}-y^{*},f\bigl(y^{*}\bigr)-y^{*}\bigr\rangle \bigr] \\ \leq&[1-\bar{\alpha}_{k}] \bigl[\bigl\| y^{k}-y^{*} \bigr\| ^{2}+2\beta_{k}\bigl\| y^{k}-y^{*}\bigr\| M_{1}+ \beta_{k}^{2}M_{1}^{2}\bigr] +\bar{\alpha}_{k}\delta_{k} \\ \leq&[1-\bar{\alpha}_{k}]\bigl\| y^{k}-y^{*}\bigr\| ^{2}+ \bar{\alpha}_{k}\delta_{k}+\beta_{k}M_{4}, \end{aligned}$$

where

$$\begin{aligned}& \bar{\alpha}_{k}=\alpha_{k}\bigl(2-\alpha_{k}-2 \rho(1-\alpha_{k})\bigr), \\& \delta_{k}=\frac{\alpha_{k}\|f(u^{k})-y^{*}\|^{2} +2(1-\alpha_{k}) \langle u^{k}-y^{*},f(y^{*})-y^{*}\rangle}{2-\alpha_{k}-2\rho(1-\alpha_{k})}, \end{aligned}$$

and

$$M_{4}=\sup_{k\in\mathbb{N}}\bigl\{ 2\bigl\| y^{k}-y^{*} \bigr\| M_{1}+\beta_{k}M_{1}^{2}\bigr\} , $$

where \(M_{4}<+\infty\) by the boundedness of \(\{y^{k}\}\). It is easily seen that \(\bar{\alpha}_{k}\rightarrow0 \), \(\sum_{k=1}^{\infty}\bar{\alpha}_{k}=\infty\), and \(\limsup_{k\rightarrow\infty} \delta_{k}\leq0\). By Lemma 1.1, we obtain the result. □

The inertial-type algorithms originate from the heavy ball method, an implicit discretization of second-order dynamical systems in time [31, 32]; their main feature is that the next iterate is defined by making use of the previous two iterates. Inertial-type algorithms often converge faster than their counterparts without inertial extrapolation. A classical example is the well-known FISTA, which has a better global rate of convergence than ISTA (see [33, 34]). Recently, there has been increasing interest in inertial-type algorithms, e.g., inertial extragradient methods [35], the inertial Mann iteration algorithm [36, 37], and the inertial ADMM [38].

Combining the inertial-type algorithm and the viscosity algorithm, we construct the following inertial viscosity algorithm:

$$ \left\{\textstyle\begin{array}{l} w^{k}=x^{k}+\beta_{k}v^{k}, \\ x^{k+1}=T(\alpha_{k}f(w^{k})+(1- \alpha_{k})w^{k}), \end{array}\displaystyle \right. $$
(13)

where

$$ v^{k}=\gamma_{k}\bigl(x^{k}-x^{k-1} \bigr) $$
(14)

and

$$ \gamma_{k}= \left \{ \textstyle\begin{array}{l@{\quad}l} \frac{1}{\|x^{k}-x^{k-1}\|}& \text{if } \|x^{k}- x^{k-1}\|>1,\\ 1& \text{if } \|x^{k}- x^{k-1}\|\leq1. \end{array}\displaystyle \right. $$
(15)

Theorem 3.2

Let \(T : H \rightarrow H\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\) and let \(f: H \rightarrow H\) be a contraction. Let \(\{\alpha_{k}\}\) be a sequence of nonnegative numbers satisfying (C1)-(C3) and let \(\{\beta_{k}\}\) be a sequence of nonnegative numbers such that \(\beta_{k}\leq\alpha_{k}\) and \(\sum_{k=1}^{\infty}\beta_{k}<\infty\). Then any sequence \(\{x^{k}\}\) generated by the iterative formula (13) converges strongly to a fixed point \(x^{*}\) of T, where \(x^{*}\) is the unique solution of the variational inequality

$$\bigl\langle x^{*}-f\bigl(x^{*}\bigr),x-x^{*}\bigr\rangle \geq0,\quad \forall x\in \mathit{Fix}(T). $$

Proof

From (14) and (15), it is obvious that \(\|v^{k}\|\leq1\); thus \(\{v^{k}\}\) is a norm-bounded sequence. The result then follows from Theorem 3.1. □
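A minimal sketch of the inertial viscosity algorithm (13)-(15), again assuming the illustrative T and f from Section 1. The normalization (15) caps \(\|v^{k}\|\) at 1, which is exactly what allows Theorem 3.1 to apply.

```python
import numpy as np

def inertial_viscosity(T, f, x0, num_iters=5000):
    # Iteration (13): w^k = x^k + beta_k v^k,
    #                 x^{k+1} = T(alpha_k f(w^k) + (1 - alpha_k) w^k),
    # where v^k = gamma_k (x^k - x^{k-1}) is normalized as in (15).
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for k in range(1, num_iters + 1):
        alpha_k = 1.0 / (k + 1)
        beta_k = 1.0 / (k + 1) ** 2              # summable and beta_k <= alpha_k
        d = x - x_prev                           # inertial direction x^k - x^{k-1}
        norm_d = np.linalg.norm(d)
        v_k = d / norm_d if norm_d > 1.0 else d  # the choice (14)-(15): ||v^k|| <= 1
        w = x + beta_k * v_k
        x_prev, x = x, T(alpha_k * f(w) + (1.0 - alpha_k) * w)
    return x
```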

4 The superiorized version of the viscosity algorithm

In this section, we present the superiorized version of the viscosity algorithm and show its convergence.

Let \(\phi: H \rightarrow \mathbb{R}\) be a convex continuous function. Consider the set

$$ C_{\mathrm{min}}:=\bigl\{ x\in \mathit{Fix}(T) | \phi(x)\leq\phi(y) \text{ for all } y\in \mathit{Fix}(T)\bigr\} , $$
(16)

and assume that \(C_{\mathrm{min}}\neq\emptyset\).

Algorithm 4.1

The superiorized version of the viscosity algorithm

(0) Initialization: Let N be a natural number and let \(y^{0} \in H\) be an arbitrary user-chosen vector.

(1) Iterative step: Given a current vector \(y^{k}\), pick an \(N_{k} \in \{1, 2, \ldots, N\}\) and start an inner loop of calculations as follows:

(1.1) Inner loop initialization: Define \(y^{k,0} = y^{k}\).

(1.2) Inner loop step: Given \(y^{k,n}\), as long as \(n < N_{k}\), do as follows:

(1.2.1) Pick a \(0 < \beta_{k,n}\leq 1\), \(n=0,\ldots,N_{k}-1\), in a way that guarantees that

$$ \sum_{k=1}^{\infty}\sum _{n=0}^{N_{k}-1}\beta_{k,n}< \infty. $$
(17)

(1.2.2) Let \(\partial \phi(y^{k,n})\) be the subgradient set of ϕ at \(y^{k,n}\) and define \(v^{k,n}\) as follows:

$$ v^{k,n}= \left\{ \textstyle\begin{array}{l@{\quad}l} -\frac{s^{k,n}}{\|s^{k,n}\|} & \text{if } 0 \notin\partial \phi(y^{k,n}), \\ 0& \text{if } 0\in\partial \phi(y^{k,n}), \end{array}\displaystyle \right. $$
(18)

where \(s^{k,n}\in\partial \phi(y^{k,n})\).

(1.2.3) Calculate

$$ y^{k,n+1}=y^{k,n}+\beta_{k,n}v^{k,n}, $$
(19)

and go to step (1.2).

(1.3) Exit the inner loop with the vector \(y^{k,N_{k}}\).

(1.4) Calculate

$$ y^{k+1}=T_{k}\bigl(y^{k,N_{k}}\bigr), $$
(20)

and go back to step (1).
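A minimal sketch of Algorithm 4.1, assuming the illustrative T and f from Section 1 and the hypothetical objective \(\phi(x)=\|x\|_{1}\), whose subgradients are given coordinatewise by the sign function (and \(0\in\partial\phi(y)\) exactly when \(y=0\)). The step sizes \(\beta_{k,n}=1/(k^{2}N_{k})\) satisfy the summability condition (17).

```python
import numpy as np

def superiorized_viscosity(T, f, y0, num_iters=2000, N=5):
    y = np.asarray(y0, dtype=float)
    for k in range(1, num_iters + 1):
        N_k = N                             # any choice in {1, ..., N}
        for n in range(N_k):                # inner loop, steps (1.1)-(1.3)
            beta_kn = 1.0 / (k ** 2 * N_k)  # the double sum in (17) is sum 1/k^2 < inf
            s = np.sign(y)                  # s^{k,n}: a subgradient of ||.||_1 at y
            norm_s = np.linalg.norm(s)
            v = -s / norm_s if norm_s > 0 else np.zeros_like(y)  # step (1.2.2), eq. (18)
            y = y + beta_kn * v             # step (1.2.3), eq. (19)
        alpha_k = 1.0 / (k + 1)
        y = T(alpha_k * f(y) + (1.0 - alpha_k) * y)  # step (1.4): y^{k+1} = T_k(y^{k,N_k})
    return y
```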

We now present the convergence of Algorithm 4.1.

Theorem 4.1

Let \(T : H \rightarrow H\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\), let \(f: H \rightarrow H\) be a contraction, and let \(\{\alpha_{k}\}\subset(0,1)\) satisfy the conditions (C1)-(C3). Then any sequence \(\{y^{k}\}_{k=0}^{\infty}\) generated by Algorithm 4.1 converges strongly to a fixed point \(y^{*}\) of T, where \(y^{*}\) is the unique solution of the variational inequality

$$\bigl\langle y^{*}-f\bigl(y^{*}\bigr),y-y^{*}\bigr\rangle \geq0, \quad\forall y\in \mathit{Fix}(T). $$

Proof

By Theorem 3.1, we only need to show that the perturbations \(y^{k,N_{k}}-y^{k}\) are summable, i.e., \(\sum_{k}\|y^{k,N_{k}}-y^{k}\|<\infty\). From Algorithm 4.1, the sequence \(\{y^{k}\}\) has the property that for each integer \(k\geq 1\) and each \(h\in \{1,2,\ldots,N_{k}\}\),

$$\bigl\| y^{k,h}-y^{k}\bigr\| =\Biggl\| \sum_{j=1}^{h} \bigl(y^{k,j}-y^{k,j-1}\bigr)\Biggr\| \leq\sum _{n=1}^{N_{k}}\bigl\| y^{k,n}-y^{k,n-1}\bigr\| \leq \sum_{n=0}^{N_{k}-1}\beta_{k,n}, $$

and thus

$$\bigl\| y^{k,N_{k}}-y^{k}\bigr\| \leq\sum_{n=0}^{N_{k}-1} \beta_{k,n}. $$

So, by (17),

$$\sum_{k=1}^{\infty}\bigl\| y^{k,N_{k}}-y^{k} \bigr\| \leq\sum_{k=1}^{\infty}\sum _{n=0}^{N_{k}-1}\beta_{k,n}< \infty. $$

The bounded perturbation resilience secured by Theorem 3.1 guarantees the convergence of \(\{y^{k}\}\) to the unique solution of (7). □

5 A modified viscosity algorithm

Algorithm 4.1 can be seen as a unified framework from which specific algorithms can be constructed. In this section, we introduce a modified viscosity algorithm by choosing a special function \(\phi(x)\) in Algorithm 4.1.

Define a convex function \(\phi:H\rightarrow \mathbb{R}\) by

$$\phi(x)=\frac{1}{2}\|x\|^{2}-h(x), $$

where \(h:H\rightarrow \mathbb{R}\) is a continuous function. Then the set

$$C_{\mathrm{min}}:=\bigl\{ x\in \mathit{Fix}(T) | \phi(x)\leq\phi(y) \text{ for all } y\in \mathit{Fix}(T)\bigr\} $$

equals

$$C_{\mathrm{min}}=\arg\min_{x\in \mathit{Fix}(T)}\bigl\{ \phi(x)\bigr\} . $$

Furthermore, if we assume that \(h:H\rightarrow \mathbb{R}\) is a differentiable function, then

$$C_{\mathrm{min}}=\bigl\{ x^{*}\in \mathit{Fix}(T)| \bigl\langle x^{*}-f\bigl(x^{*}\bigr),x-x^{*} \bigr\rangle \geq0 \text{ for all } x\in \mathit{Fix}(T)\bigr\} , $$

where \(f(x)=\nabla h(x)\). It is obvious that \(\nabla \phi=I-f\); moreover, when f is a contraction with constant \(\rho\in(0,1)\), the operator \(I-f\) is strongly monotone, so ϕ is strongly convex and minimizing ϕ over \(\mathit{Fix}(T)\) is equivalent to the above variational inequality.
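As a concrete illustration (our choice, not from the original), take \(h(x)=\frac{\rho}{2}\|x\|^{2}\) with \(\rho\in(0,1)\). Then \(f(x)=\nabla h(x)=\rho x\) is a contraction with constant ρ, \(\phi(x)=\frac{1-\rho}{2}\|x\|^{2}\), and \(C_{\mathrm{min}}\) consists exactly of the minimum-norm fixed points of T.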

From Algorithm 4.1, we construct the following algorithm.

Algorithm 5.1

A modified viscosity algorithm

(0) Initialization: Let N be a natural number and let \(y^{0} \in C\) be an arbitrary user-chosen vector.

(1) Iterative step: Given a current vector \(y^{k}\), pick an \(N_{k} \in \{1, 2, \ldots, N\}\) and start an inner loop of calculations as follows:

(1.1) Inner loop initialization: Define \(y^{k,0} = y^{k}\).

(1.2) Inner loop step: Given \(y^{k,n}\), as long as \(n < N_{k}\), do as follows:

(1.2.1) Pick a \(0 \leq\beta_{k,0}\leq 1\) if \(N_{k}=1\), and \(0 <\beta_{k,n}\leq 1\), \(n=0,\ldots,N_{k}-1\), if \(N_{k}>1\), in a way that guarantees that

$$ \sum_{k=1}^{\infty}\sum _{n=0}^{N_{k}-1}\beta_{k,n}< \infty. $$
(21)

(1.2.2) Calculate

$$ y^{k,n+1}=\beta_{k,n}f\bigl(y^{k,n} \bigr)+(1-\beta_{k,n})y^{k,n}, $$
(22)

and go to step (1.2).

(1.3) Exit the inner loop with the vector \(y^{k,N_{k}}\).

(1.4) Calculate

$$ y^{k+1}=T_{k}\bigl(y^{k,N_{k}}\bigr), $$
(23)

and go back to step (1).
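A minimal sketch of Algorithm 5.1 under the same illustrative T and contraction f; compared with the sketch of Algorithm 4.1 above, the subgradient perturbation is simply replaced by the small viscosity steps (22).

```python
import numpy as np

def modified_viscosity(T, f, y0, num_iters=2000, N=5):
    y = np.asarray(y0, dtype=float)
    for k in range(1, num_iters + 1):
        N_k = N
        for n in range(N_k):
            beta_kn = 1.0 / (k ** 2 * N_k)            # guarantees (21)
            y = beta_kn * f(y) + (1.0 - beta_kn) * y  # inner step (22)
        alpha_k = 1.0 / (k + 1)
        y = T(alpha_k * f(y) + (1.0 - alpha_k) * y)   # (23): y^{k+1} = T_k(y^{k,N_k})
    return y
```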

Remark 5.1

(1) In the modified viscosity algorithm, \(\beta_{k,0}\) can be taken to be zero. In particular, the modified viscosity algorithm reduces to the viscosity algorithm (2) if \(N_{k}=1\) and \(\beta_{k,0}=0\) for all \(k\in \mathbb{N}\).

(2) In the superiorized version of the viscosity algorithm, it follows from the definition of \(v^{k,n}\) that \(\|v^{k,n}\|\leq1\). However, in the modified viscosity algorithm, the boundedness of \(\{v^{k}:=y^{k,N_{k}}-y^{k}\}\) is not obvious, so we need to prove it.

Theorem 5.1

Let C be a nonempty closed convex subset of a Hilbert space H, let \(T : C \rightarrow C\) be a nonexpansive mapping with \(\mathit{Fix}(T)\neq\emptyset\), and let \(f:C \rightarrow C\) be a contraction. Let \(\{\alpha_{k}\}\subset(0,1)\) satisfy the conditions (C1)-(C3). Then any sequence \(\{y^{k}\}_{k=0}^{\infty}\) generated by Algorithm 5.1 converges strongly to a fixed point \(y^{*}\) of T, where \(y^{*}\) is the unique solution of the variational inequality

$$\bigl\langle y^{*}-f\bigl(y^{*}\bigr),y-y^{*}\bigr\rangle \geq0, \quad\forall y\in \mathit{Fix}(T). $$

Proof

As in the proof of Theorem 4.1, we only need to show that \(\sum_{k}\|y^{k,N_{k}}-y^{k}\|<\infty\). Let \(p\in \mathit{Fix}(T)\). Then, for \(n=0,1,\ldots,N_{k}-1\), it follows that

$$ \begin{aligned} \bigl\| y^{k,n+1}-p\bigr\| &\leq(1-\beta_{k,n}) \bigl\| y^{k,n}-p\bigr\| +\beta_{k,n}\bigl\| f\bigl(y^{k,n}\bigr)-p\bigr\| \\ &\leq(1-\beta_{k,n})\bigl\| y^{k,n}-p\bigr\| +\beta_{k,n}\bigl( \bigl\| f\bigl(y^{k,n}\bigr)-f(p)\bigr\| +\bigl\| f(p)-p\bigr\| \bigr) \\ &\leq(1-\beta_{k,n})\bigl\| y^{k,n}-p\bigr\| +\beta_{k,n}\bigl( \rho\bigl\| y^{k,n}-p\bigr\| +\bigl\| f(p)-p\bigr\| \bigr) \\ &=\bigl(1-(1-\rho)\beta_{k,n}\bigr)\bigl\| y^{k,n}-p\bigr\| + \beta_{k,n}\bigl\| f(p)-p\bigr\| \\ &\leq\max\biggl\{ \bigl\| y^{k,n}-p\bigr\| ,\frac{1}{1-\rho}\bigl\| f(p)-p\bigr\| \biggr\} . \end{aligned} $$

So, we have

$$ \begin{aligned} \bigl\| y^{k+1}-p\bigr\| &=\bigl\| T_{k}\bigl(y^{k,N_{k}}\bigr)-p\bigr\| \leq\max\biggl\{ \bigl\| y^{k,N_{k}}-p\bigr\| ,\frac{1}{1-\rho}\bigl\| f(p)-p\bigr\| \biggr\} \\ &\leq\max\biggl\{ \bigl\| y^{k,N_{k}-1}-p\bigr\| ,\frac{1}{1-\rho}\bigl\| f(p)-p\bigr\| \biggr\} \\ &\leq\cdots\leq\max\biggl\{ \bigl\| y^{k}-p\bigr\| ,\frac{1}{1-\rho}\bigl\| f(p)-p\bigr\| \biggr\} , \end{aligned} $$

where the first inequality follows from the estimate for \(T_{k}\) obtained in the proof of Theorem 3.1 (with \(\beta_{k}=0\)). By induction,

$$\bigl\| y^{k}-p\bigr\| \leq\max\biggl\{ \bigl\| y^{0}-p\bigr\| ,\frac{1}{1-\rho} \bigl\| f(p)-p\bigr\| \biggr\} . $$

Thus \(\{y^{k}\}\) and \(\{y^{k,n}\}\), \(n=1,\ldots,N_{k}\), are bounded, and so is \(\{f(y^{k,n})\}\), \(n=0,\ldots,N_{k}-1\). From (22), we have

$$y^{k,n+1}-y^{k,n}=\beta_{k,n}\bigl(f \bigl(y^{k,n}\bigr)-y^{k,n}\bigr). $$

So,

$$\bigl\| y^{k,N_{k}}-y^{k}\bigr\| \leq\sum_{n=0}^{N_{k}-1} \bigl\| y^{k,n+1}-y^{k,n}\bigr\| \leq\sum_{n=0}^{N_{k}-1} \beta_{k,n}\bigl\| f\bigl(y^{k,n}\bigr)-y^{k,n}\bigr\| \leq M_{5}\sum_{n=0}^{N_{k}-1} \beta_{k,n}, $$

where \(M_{5}:=\sup_{k\in \mathbb{N}}\{\max_{0\leq n\leq N_{k}}\{\|f(y^{k,n})\|+\|y^{k,n}\|\}\}\). Thus

$$\sum_{k=0}^{\infty}\bigl\| y^{k,N_{k}}-y^{k} \bigr\| \leq M_{5}\sum_{k=0}^{\infty}\sum _{n=0}^{N_{k}-1}\beta_{k,n}. $$

The bounded perturbation resilience secured by Theorem 3.1 guarantees the convergence of \(\{y^{k}\}\) to \(y^{*}\), the unique solution of the above variational inequality. □

References

1. Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)

2. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)

3. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)

4. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)

5. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38(3), 367-426 (1996)

6. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)

7. Dong, QL, Yuan, HB: Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2015, 125 (2015)

8. Dong, QL, Lu, YY: A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl. 2015, 37 (2015)

9. Yao, Y, Liou, YC, Yao, JC: Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 127 (2015)

10. Yao, Y, Agarwal, RP, Postolache, M, Liou, YC: Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 183 (2014)

11. Yao, Y, Postolache, M, Kang, SM: Strong convergence of approximated iterations for asymptotically pseudocontractive mappings. Fixed Point Theory Appl. 2014, 100 (2014)

12. Yao, Y, Postolache, M, Liou, YC, Yao, Z: Construction algorithms for a class of monotone variational inequalities. Optim. Lett. 10, 1519-1528 (2016)

13. Yao, Y, Liou, YC, Kang, SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. 55, 801-811 (2013)

14. Yao, Y, Noor, MA, Noor, KI, Liou, YC, Yaqoob, H: Modified extragradient method for a system of variational inequalities in Banach spaces. Acta Appl. Math. 110, 1211-1224 (2010)

15. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)

16. Ishikawa, S: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147-150 (1974)

17. Halpern, B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957-961 (1967)

18. Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)

19. Kimura, Y, Nakajo, K, Takahashi, W: Convexity of the set of fixed points of a quasi-pseudocontractive type Lipschitz mapping and the shrinking projection method. Sci. Math. Jpn. 70, 213-220 (2009)

20. Moudafi, A: Viscosity approximation methods for fixed points problems. J. Math. Anal. Appl. 241, 46-55 (2000)

21. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)

22. Yang, C, He, S: General alternative regularization methods for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014, 203 (2014)

23. Butnariu, D, Davidi, R, Herman, GT, Kazantsev, IG: Stable convergence behavior under summable perturbations of a class of projection methods for convex feasibility and optimization problems. IEEE J. Sel. Top. Signal Process. 1, 540-547 (2007)

24. Censor, Y, Davidi, R, Herman, GT: Perturbation resilience and superiorization of iterative algorithms. Inverse Probl. 26, 065008 (2010)

25. Davidi, R, Herman, GT, Censor, Y: Perturbation-resilient block-iterative projection methods with application to image reconstruction from projections. Int. Trans. Oper. Res. 16, 505-524 (2009)

26. Censor, Y, Zaslavski, AJ: Strict Fejér monotonicity by superiorization of feasibility-seeking projection methods. J. Optim. Theory Appl. 165, 172-187 (2015)

27. Herman, GT, Davidi, R: Image reconstruction from a small number of projections. Inverse Probl. 24, 045011 (2008)

28. Nikazad, T, Davidi, R, Herman, G: Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction. Inverse Probl. 28, 035005 (2012)

29. Penfold, SN, Schulte, RW, Censor, Y, Rosenfeld, AB: Total variation superiorization schemes in proton computed tomography image reconstruction. Med. Phys. 37, 5887-5895 (2010)

30. Jin, W, Censor, Y, Jiang, M: Bounded perturbation resilience of projected scaled gradient methods. Comput. Optim. Appl. 63, 365-392 (2016)

31. Alvarez, F: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773-782 (2004)

32. Alvarez, F, Attouch, H: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3-11 (2001)

33. Beck, A, Teboulle, M: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183-202 (2009)

34. Chambolle, A, Dossal, C: On the convergence of the iterates of the “Fast Iterative Shrinkage/Thresholding Algorithm”. J. Optim. Theory Appl. 166, 968-982 (2015)

35. Dong, QL, Lu, YY, Yang, J: The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 65, 2217-2226 (2016)

36. Bot, RI, Csetnek, ER, Hendrich, C: Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472-487 (2015)

37. Mainge, PE: Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223-236 (2008)

38. Chen, C, Ma, S, Yang, J: A general inertial proximal point algorithm for mixed variational inequality problem. SIAM J. Optim. 25, 2120-2142 (2015)


Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61379102) and the Fundamental Research Funds for the Central Universities (No. 3122016L006).

Author information

Correspondence to Qiao-Li Dong.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Dong, QL., Zhao, J. & He, S. Bounded perturbation resilience of the viscosity algorithm. J Inequal Appl 2016, 299 (2016). https://doi.org/10.1186/s13660-016-1242-6
