
The zeros of monotone operators for the variational inclusion problem in Hilbert spaces

Abstract

In this paper, we introduce a regularization method for solving the variational inclusion problem of the sum of two monotone operators in real Hilbert spaces. We suggest and analyze this method under some mild appropriate conditions imposed on the parameters, which allow us to obtain a short proof of another strong convergence theorem for this problem. We also apply our main result to the fixed point problem of the nonexpansive variational inequality problem, the common fixed point problem of nonexpansive strict pseudocontractions, the convex minimization problem, and the split feasibility problem. Finally, we provide numerical experiments to illustrate the convergence behavior and to show the effectiveness of the sequences constructed by the inertial technique.

1 Introduction

Let C be a nonempty closed convex subset of a real Hilbert space H. The variational inclusion problem is to find \(x^{*} \in H\) such that

$$\begin{aligned} 0 \in Ax^{*}+Bx^{*}, \end{aligned}$$
(1.1)

where \(A:H \rightarrow H\) is an operator, and \(B: D(B) \subset H \rightarrow 2^{H}\) is a set-valued operator.

If \(A=\nabla F\) and \(B = \partial G\), where ∇F is the gradient of F, and ∂G is the subdifferential of G defined by

$$\begin{aligned} \partial G(x) = \bigl\{ z\in H: \langle y-x,z \rangle + G(x) \leq G(y), \forall y \in H\bigr\} , \end{aligned}$$

then problem (1.1) is reduced to the following convex minimization problem:

$$\begin{aligned} F\bigl(x^{*}\bigr)+G\bigl(x^{*}\bigr) = \min _{x\in H} F(x)+G(x)\quad \Leftrightarrow\quad 0 \in \nabla F \bigl(x^{*}\bigr)+\partial G\bigl(x^{*}\bigr). \end{aligned}$$

If \(A=0\) and \(B=\partial G\), then problem (1.1) is reduced to a proximal minimization problem, and if \(A = \nabla F\) and \(B=0\), then problem (1.1) is reduced to a constrained convex minimization problem and also to a split feasibility problem. Some typical problems arising in various branches of sciences, applied sciences, economics, and engineering, such as machine learning, image restoration, and signal recovery, can be viewed as problems of the form (1.1).

To solve the variational inclusion problem (1.1) via fixed point theory, for \(r>0\), we define the mapping \(T_{r}: H\rightarrow D(B)\) as

$$\begin{aligned} T_{r} = (I+rB)^{-1}(I-rA). \end{aligned}$$

For \(x \in H\), we see that

$$\begin{aligned} T_{r} x = x &\Leftrightarrow \quad x = (I+rB)^{-1}(x-rAx) \\ &\Leftrightarrow \quad x-rAx \in x+r Bx \\ &\Leftrightarrow \quad 0 \in Ax+Bx, \end{aligned}$$

which shows that the fixed point set of \(T_{r}\) coincides with the solution set \((A+B)^{-1}(0)\) of problem (1.1). This suggests the following iteration process: \(x_{0} \in C\), and

$$\begin{aligned} x_{n+1} = (I+r_{n} B)^{-1} (x_{n}-r_{n} Ax_{n}) = T_{r_{n}} x_{n},\quad n=0,1,2,\ldots, \end{aligned}$$

where \(\{r_{n}\} \subset (0,\infty )\) and \(D(B) \subset C\). This method is called the forward–backward splitting algorithm [1, 2]. In the literature, many methods have been suggested to solve the variational inclusion problem (1.1) for maximal monotone operators (see also, e.g., [3–11]).
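For concreteness, the forward–backward iteration above can be sketched in a few lines of Python. This is only an illustrative sketch (not taken from the cited works); the user-supplied callables `A` and `resolvent` stand for the operator A and the resolvent \((I+r_{n}B)^{-1}\).

```python
import numpy as np

def forward_backward(x0, A, resolvent, r_seq, max_iter=1000, tol=1e-8):
    """Sketch of forward-backward splitting:
        x_{n+1} = (I + r_n B)^{-1} (x_n - r_n A(x_n)).
    `A(x)` evaluates the single-valued operator, `resolvent(v, r)` evaluates (I + r B)^{-1} v,
    and `r_seq(n)` returns the step size r_n."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        r = r_seq(n)
        x_new = resolvent(x - r * A(x), r)      # backward step applied to the forward step
        if np.linalg.norm(x_new - x) < tol:     # simple stopping rule
            return x_new
        x = x_new
    return x
```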

Very recently, Cholamjiak et al. [12, 13] proved the following theorems in real Hilbert spaces.

Theorem C1

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(A: C\rightarrow H\) be an α-inverse strongly monotone mapping, and let B be a maximal monotone operator on H such that \(D(B) \subset C\) and \((A+B)^{-1}(0)\) is nonempty. Let \(f:C \rightarrow C\) be a k-contraction, and let \(J_{r_{n}}=(I+r_{n} B)^{-1}\). Let \(\{z_{n}\}\) be a sequence in C of the following process: \(z_{0} \in C\), and

$$\begin{aligned} \textstyle\begin{cases} w_{n} = \alpha _{n} f(z_{n}) + (1-\alpha _{n})z_{n}, \\ z_{n+1} = J_{r_{n}}(w_{n}-r_{n} Aw_{n}+e_{n}),& n= 0,1,2,\ldots, \end{cases}\displaystyle \end{aligned}$$

where \(\{ \alpha _{n} \} \subset (0,1),\{e_{n}\} \subset H\), and \(\{r_{n}\} \subset (0,2\alpha )\). Suppose that the control sequences satisfy the following restrictions:

  1. (C1)

    \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \),

  2. (C2)

    \(0< a\leq r_{n} \leq b < 2\alpha \) for some \(a,b>0\),

  3. (C3)

    \(\sum_{n=0}^{\infty }\|e_{n}\| < \infty \), or \(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\).

Then the sequence \(\{z_{n}\}\) converges strongly to a point \(\bar{x} \in (A+B)^{-1}(0)\), where \(\bar{x} = P_{(A+B)^{-1}(0)} f(\bar{x})\).

Theorem C2

Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be an α-inverse strongly monotone mapping of H into itself, and let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{\lambda }= (I+\lambda B)^{-1}\) be the resolvent of B for \(\lambda > 0\), let S be a nonexpansive mapping of C into itself such that \(\operatorname{Fix}(S) \cap (A+B)^{-1}(0) \neq \emptyset \), and let \(f:C \rightarrow C\) be a contraction. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be the sequence generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n} + \theta _{n} (x_{n} - x_{n-1}), \\ x_{n+1} = \beta _{n} x_{n} + (1-\beta _{n}) S(\alpha _{n} f(x_{n})+(1- \alpha _{n}) J_{\lambda _{n}}(y_{n}-\lambda _{n} Ay_{n}) ), \end{cases}\displaystyle \end{aligned}$$

for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1),\{\beta _{n} \} \subset (0,1), \{ \lambda _{n}\} \subset (0,2\alpha )\), and \(\{\theta _{n} \} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy

  1. (C1)

    \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),

  2. (C2)

    \(\liminf_{n\rightarrow \infty } \beta _{n} (1-\beta _{n}) > 0\),

  3. (C3)

    \(0 < \liminf_{n\rightarrow \infty } \lambda _{n} \leq \limsup_{n \rightarrow \infty } \lambda _{n} < 2\alpha \),

  4. (C4)

    \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).

Then the sequence \(\{x_{n} \}\) converges strongly to a point \(\bar{x} \in \operatorname{Fix}(S)\cap (A+B)^{-1}(0)\), where \(\bar{x} = P_{\operatorname{Fix}(S) \cap (A+B)^{-1}(0)} f(\bar{x})\).

In this paper, we modify the algorithms in Theorems C1 and C2 under the same assumptions to solve the variational inclusion problem (1.1) as follows: let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be the sequence generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n}+\theta _{n} (x_{n}-x_{n-1}), \\ x_{n+1} = S(\alpha _{n} f(x_{n})+(1-\alpha _{n})J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})), \end{cases}\displaystyle \end{aligned}$$

for all \(n \in \mathbb{N}\). We suggest and analyze this method under some mild appropriate conditions imposed on the parameters, which allow us to obtain a short proof of another strong convergence theorem for this problem.

We also apply our main result to the fixed point problem of the nonexpansive variational inequality problem, the common fixed point problem of nonexpansive strict pseudocontractions, the convex minimization problem, and the split feasibility problem. Finally, we provide numerical experiments to illustrate the convergence behavior and to show the effectiveness of the sequences constructed by the inertial technique.

2 Preliminaries

Let C be a nonempty closed convex subset of a real Hilbert space H. We use the following notation: → denotes strong convergence, ⇀ denotes weak convergence,

$$\begin{aligned} \omega _{w}(x_{n}) = \bigl\{ x: \exists \{x_{n_{k}} \} \subset \{x_{n}\} \text{ such that } x_{n_{k}} \rightharpoonup x \bigr\} \end{aligned}$$

denotes the weak limit set of \(\{x_{n}\}\), and \(\operatorname{Fix}(T) = \{x:x=Tx \}\) is the fixed point set of the mapping T.

Recall that the metric projection \(P_{C}: H \rightarrow C\) is defined as follows: for each \(x \in H\), \(P_{C} x\) is the unique point in C satisfying

$$\begin{aligned} \Vert x-P_{C} x \Vert = \inf \bigl\{ \Vert x-y \Vert :y\in C \bigr\} . \end{aligned}$$

The operator \(T:H\rightarrow H\) is called:

  1. (i)

    monotone if

    $$\begin{aligned} \langle x-y,Tx-Ty \rangle \geq 0, \quad\forall x,y \in H, \end{aligned}$$
  2. (ii)

    L-Lipschitzian with \(L>0\) if

    $$\begin{aligned} \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad\forall x,y \in H, \end{aligned}$$
  3. (iii)

    k-contraction if it is k-Lipschitzian with \(k \in (0,1)\),

  4. (iv)

    nonexpansive if it is 1-Lipschitzian,

  5. (v)

    firmly nonexpansive if

    $$\begin{aligned} \Vert Tx-Ty \Vert ^{2} \leq \Vert x-y \Vert ^{2} - \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2}, \quad\forall x,y \in H, \end{aligned}$$
  6. (vi)

    α-strongly monotone if

    $$\begin{aligned} \langle Tx-Ty,x-y \rangle \geq \alpha \Vert x-y \Vert ^{2},\quad \forall x,y \in H, \end{aligned}$$
  7. (vii)

    α-inverse strongly monotone if

    $$\begin{aligned} \langle Tx-Ty,x-y \rangle \geq \alpha \Vert Tx-Ty \Vert ^{2},\quad \forall x,y \in H. \end{aligned}$$

Let B be a mapping of H into \(2^{H}\). The domain and the range of B are denoted by \(D(B) = \{x\in H: Bx \neq \emptyset \}\) and \(R(B) = \cup \{Bx:x \in D(B) \}\), respectively. The inverse of B, denoted by \(B^{-1}\), is defined by \(x\in B^{-1}y\) if and only if \(y\in Bx\). A multivalued mapping B is said to be a monotone operator on H if \(\langle x-y,u-v\rangle \geq 0\) for all \(x,y \in D(B),u \in Bx\), and \(v \in By\). A monotone operator B on H is said to be maximal if its graph is not strictly contained in the graph of any other monotone operator on H. For a maximal monotone operator B on H and \(r>0\), we define the single-valued resolvent operator \(J_{r}:H\rightarrow D(B)\) by \(J_{r}=(I+rB)^{-1}\). It is well known that \(J_{r}\) is firmly nonexpansive and \(\operatorname{Fix}(J_{r})=B^{-1}(0)\).

We collect together some known lemmas, which are the main tools in proving our result.

Lemma 2.1

Let H be a real Hilbert space. Then, for all \(x,y\in H\),

  1. (i)

    \(\|x+y\|^{2} = \|x\|^{2}+2 \langle x,y \rangle +\|y\|^{2}\),

  2. (ii)

    \(\|x+y\|^{2} \leq \|x\|^{2} + 2 \langle y,x+y \rangle \).

Lemma 2.2

([14])

Let C be a nonempty closed convex subset of a real Hilbert space H. Then

  1. (i)

    \(z=P_{C}x \Leftrightarrow \langle x-z,z -y \rangle \geq 0, \forall x\in H,y \in C\),

  2. (ii)

    \(z=P_{C}x \Leftrightarrow \|x-z \|^{2} \leq \|x-y\|^{2} - \| y-z \|^{2}, \forall x\in H,y \in C\),

  3. (iii)

    \(\| P_{C} x - P_{C} y\|^{2} \leq \langle x-y,P_{C} x - P_{C} y \rangle, \forall x,y\in H\).

Lemma 2.3

([15])

Let H be a real Hilbert space. For any \(x,y \in H\) and \(\lambda \in \mathbb{R}\), we have

$$\begin{aligned} \bigl\Vert \lambda x+(1-\lambda )y \bigr\Vert ^{2} = \lambda \Vert x \Vert ^{2}+(1-\lambda ) \Vert y \Vert ^{2}- \lambda (1-\lambda ) \Vert x-y \Vert ^{2}. \end{aligned}$$

Lemma 2.4

([16])

Let H and K be two real Hilbert spaces, and let \(T:K \rightarrow K\) be a firmly nonexpansive mapping such that \(\|(I-T)x\|\) is a convex function from K to \(\overline{\mathbb{R}}=[-\infty,+\infty ]\). Let \(A:H\rightarrow K\) be a bounded linear operator and \(f(x) = \frac{1}{2}\|(I-T)Ax\|^{2} \) for all \(x\in H\). Then

  1. (i)

f is convex and differentiable,

  2. (ii)

\(\nabla f(x) = A^{*}(I-T)Ax \) for all \(x\in H\), where \(A^{*}\) denotes the adjoint of A,

  3. (iii)

    f is weakly lower semicontinuous on H, and

  4. (iv)

\(\nabla f\) is \(\|A\|^{2}\)-Lipschitzian.

Lemma 2.5

([16])

Let H be a real Hilbert space, and let \(T: H\rightarrow H\) be an operator. The following statements are equivalent:

  1. (i)

    T is firmly nonexpansive,

  2. (ii)

    \(\|Tx-Ty\|^{2} \leq \langle x-y,Tx-Ty \rangle, \forall x,y \in H\),

  3. (iii)

    \(I-T\) is firmly nonexpansive.

Lemma 2.6

([17])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let the mapping \(A:C\rightarrow H\) be α-inverse strongly monotone, and let \(r>0\) be a constant. Then we have

$$\begin{aligned} \bigl\Vert (I-rA)x-(I-rA)y \bigr\Vert ^{2} \leq \Vert x-y \Vert ^{2}-r(2\alpha -r) \Vert Ax-Ay \Vert ^{2} \end{aligned}$$

for all \(x,y \in C\). In particular, if \(0< r\leq 2\alpha \), then \(I-rA\) is nonexpansive.

Lemma 2.7

([18] (Demiclosedness principle))

Let C be a nonempty closed convex subset of a real Hilbert space H, and let \(S:C \rightarrow C\) be a nonexpansive mapping with \(\operatorname{Fix}(S)\neq \emptyset \). If the sequence \(\{x_{n}\}\subset C\) converges weakly to x and the sequence \(\{(I-S)x_{n}\}\) converges strongly to y, then \((I-S)x = y\); in particular, if \(y=0\), then \(x\in \operatorname{Fix}(S)\).

Lemma 2.8

([19, 20])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(\{T_{n}\}\) be a sequence and φ a family of nonexpansive mappings of C into itself such that

$$\begin{aligned} \emptyset \neq \operatorname{Fix}(\varphi ) = \bigcap _{n=0}^{\infty } \operatorname{Fix}(T_{n}). \end{aligned}$$

Then, for any bounded sequence \(\{z_{n}\} \subset C\), we have:

  1. (i)

    if \(\lim_{n \rightarrow \infty } \|z_{n}-T_{n}z_{n}\|=0\), then \(\lim_{n \rightarrow \infty } \|z_{n}-Tz_{n}\|=0\) for all \(T \in \varphi \), which is called the NST-condition (I),

  2. (ii)

    if \(\lim_{n \rightarrow \infty } \|z_{n+1}-T_{n}z_{n}\|=0\), then \(\lim_{n \rightarrow \infty } \|z_{n}-T_{m}z_{n}\|=0\) for all \(m \in \mathbb{N}\cup \{0\}\), which is called the NST-condition (II).

Lemma 2.9

([21])

Let \(\{a_{n}\}\) and \(\{c_{n}\}\) be sequences of nonnegative real numbers such that

$$\begin{aligned} a_{n+1} \leq (1-\delta _{n})a_{n}+b_{n}+c_{n},\quad n=0,1,2,\ldots, \end{aligned}$$

where \(\{\delta _{n} \}\) is a sequence in \((0,1)\), and \(\{b_{n}\}\) is a real sequence. Assume that \(\sum_{n=0}^{\infty }c_{n} < \infty \). Then we have:

  1. (i)

    if \(b_{n} \leq \delta _{n} M\) for some \(M\geq 0\), then \(\{a_{n}\}\) is a bounded sequence,

  2. (ii)

    if \(\sum_{n=0}^{\infty }\delta _{n} = \infty \) and \(\limsup_{n\rightarrow \infty } b_{n}/\delta _{n} \leq 0\), then \(\lim_{n\rightarrow \infty }a_{n}=0\).

Lemma 2.10

([22])

Let \(\{s_{n}\}\) be a sequence of nonnegative real numbers such that

$$\begin{aligned} s_{n+1} \leq (1-\gamma _{n})s_{n} + \gamma _{n} \delta _{n},\quad n=0,1,2,\ldots, \end{aligned}$$

and

$$\begin{aligned} s_{n+1} \leq s_{n} - \eta _{n} +\rho _{n},\quad n=0,1,2,\ldots, \end{aligned}$$

where \(\{\gamma _{n}\}\) is a sequence in \((0,1),\{\eta _{n}\}\) is a sequence of nonnegative real numbers, and \(\{\delta _{n}\},\{\rho _{n}\}\) are real sequences such that

  1. (i)

    \(\sum_{n=0}^{\infty }\gamma _{n} = \infty \),

  2. (ii)

    \(\lim_{n\rightarrow \infty } \rho _{n} = 0\),

  3. (iii)

    if \(\lim_{k\rightarrow \infty } \eta _{n_{k}} = 0\), then \(\limsup_{k\rightarrow \infty } \delta _{n_{k}} \leq 0\) for any subsequence \(\{n_{k}\}\) of \(\{n\}\).

Then \(\lim_{n\rightarrow \infty } s_{n} = 0\).

3 Main result

Theorem 3.1

Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be an α-inverse strongly monotone mapping of H into itself, and let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{\lambda }=(I+\lambda B)^{-1}\) be the resolvent of B for \(\lambda > 0\), let S be a nonexpansive mapping of C into itself such that \(\Omega:= \operatorname{Fix}(S)\cap (A+B)^{-1}(0) \neq \emptyset \), and let f be a k-contraction mapping of C into itself. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be the sequence generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n}+\theta _{n} (x_{n}-x_{n-1}), \\ x_{n+1} = S(\alpha _{n} f(x_{n})+(1-\alpha _{n})J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})), \end{cases}\displaystyle \end{aligned}$$

for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0,2\alpha ), \{e_{n}\} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:

  1. (C1)

    \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),

  2. (C2)

    \(0< a\leq \lambda _{n} \leq b < 2\alpha \) for some \(a,b>0\),

  3. (C3)

    \(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),

  4. (C4)

    \(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).

Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).

Proof

Picking \(z\in \operatorname{Fix}(S)\cap (A+B)^{-1}(0)\) and fixing \(n \in \mathbb{N}\), it follows that \(z=S(z)=J_{\lambda _{n}}(z-\lambda _{n} Az)\). Let

$$\begin{aligned} w_{n} = \alpha _{n} f(x_{n})+(1-\alpha _{n})J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n}). \end{aligned}$$

First, we show that \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded. Note that

$$\begin{aligned} \Vert y_{n}-z \Vert &= \bigl\Vert x_{n}+\theta _{n} (x_{n}-x_{n-1})-z \bigr\Vert \\ &\leq \Vert x_{n}-z \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert . \end{aligned}$$
(3.1)

Therefore by (3.1) and the nonexpansiveness of \(S,J_{\lambda _{n}}\), and \(I-\lambda _{n} A\) in Lemma 2.6 we obtain

$$\begin{aligned} \Vert x_{n+1}-z \Vert ={}& \Vert Sw_{n}-Sz \Vert \leq \Vert w_{n}-z \Vert \\ \leq {}& \alpha _{n} \bigl\Vert f(x_{n})-z \bigr\Vert +(1-\alpha _{n}) \bigl\Vert J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})-J_{\lambda _{n}}(z- \lambda _{n} Az) \bigr\Vert \\ \leq{} & \alpha _{n}\bigl( \bigl\Vert f(x_{n})-f(z) \bigr\Vert + \bigl\Vert f(z)-z \bigr\Vert \bigr) \\ &{} +(1-\alpha _{n}) \bigl\Vert (y_{n}-\lambda _{n} Ay_{n}+e_{n})-(z-\lambda _{n} Az) \bigr\Vert \\ \leq{} & \alpha _{n} \bigl(k \Vert x_{n}-z \Vert + \bigl\Vert f(z)-z \bigr\Vert \bigr)+(1-\alpha _{n}) \bigl( \Vert y_{n}-z \Vert + \Vert e_{n} \Vert \bigr) \\ \leq{} & \alpha _{n} \bigl(k \Vert x_{n}-z \Vert + \bigl\Vert f(z)-z \bigr\Vert \bigr) \\ &{} +(1-\alpha _{n}) \bigl( \Vert x_{n}-z \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert + \Vert e_{n} \Vert \bigr) \\ \leq{} & \bigl(1-\alpha _{n} (1-k)\bigr) \Vert x_{n}-z \Vert +\alpha _{n} \bigl\Vert f(z)-z \bigr\Vert + \theta _{n} \Vert x_{n}-x_{n-1} \Vert + \Vert e_{n} \Vert \\ ={}& \bigl(1-\alpha _{n} (1-k)\bigr) \Vert x_{n}-z \Vert \\ &{} +\alpha _{n} (1-k) \biggl(\frac{ \Vert f(z)-z \Vert }{1-k}+\frac{1}{1-k} \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \biggr) + \Vert e_{n} \Vert . \end{aligned}$$

So, by condition (C4), putting \(M = \frac{1}{1-k} ( \|f(z)-z\|+ \sup_{n\in \mathbb{N}} \frac{\theta _{n}}{\alpha _{n}} \|x_{n}-x_{n-1}\| ) \geq 0\) in Lemma 2.9(i), we conclude that the sequence \(\{\|x_{n}-z\|\}\) is bounded, that is, the sequence \(\{x_{n}\}\) is bounded, and so is \(\{y_{n}\}\). Moreover, by condition (C4), \(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \) implies \(\lim_{n\rightarrow \infty } \|e_{n}\| =0\), that is, \(\lim_{n \rightarrow \infty } e_{n} =0\). It follows that the sequence \(\{e_{n}\}\) is also bounded, and so is \(\{w_{n}\}\).

Since \(P_{\operatorname{Fix}(S)\cap (A+B)^{-1}(0)} f\) is a k-contraction on C, by Banach’s contraction principle there exists a unique element \(x^{*} \in C\) such that \(x^{*} = P_{\operatorname{Fix}(S)\cap (A+B)^{-1}(0)} f(x^{*})\), that is, \(x^{*} \in \operatorname{Fix}(S)\cap (A+B)^{-1}(0)\). It follows that \(x^{*}=S(x^{*})=J_{\lambda _{n}}(x^{*}-\lambda _{n} Ax^{*})\). Now we will show that \(x_{n} \rightarrow x^{*}\) as \(n\rightarrow \infty \). On the other hand, we have

$$\begin{aligned} \bigl\Vert y_{n}-x^{*} \bigr\Vert ^{2} &= \bigl\langle y_{n}-x^{*},y_{n}-x^{*} \bigr\rangle \\ &= \bigl\langle x_{n}+\theta _{n}(x_{n}-x_{n-1})-x^{*},y_{n}-x^{*} \bigr\rangle \\ &= \bigl\langle x_{n}-x^{*},y_{n}-x^{*} \bigr\rangle +\theta _{n} \bigl\langle x_{n}-x_{n-1},y_{n}-x^{*} \bigr\rangle \\ &\leq \bigl\Vert x_{n}-x^{*} \bigr\Vert \bigl\Vert y_{n}-x^{*} \bigr\Vert + \theta _{n} \Vert x_{n}-x_{n-1} \Vert \bigl\Vert y_{n}-x^{*} \bigr\Vert \\ &\leq \frac{1}{2} \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \bigl\Vert y_{n}-x^{*} \bigr\Vert ^{2} \bigr)+ \theta _{n} \Vert x_{n}-x_{n-1} \Vert \bigl\Vert y_{n}-x^{*} \bigr\Vert . \end{aligned}$$

This implies that

$$\begin{aligned} \bigl\Vert y_{n}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+2\theta _{n} \Vert x_{n}-x_{n-1} \Vert \bigl\Vert y_{n}-x^{*} \bigr\Vert . \end{aligned}$$
(3.2)

Therefore by (3.2), Lemma 2.6, and the firm nonexpansiveness of \(J_{\lambda _{n}}\) we obtain

$$\begin{aligned} &\bigl\| J_{\lambda _{n}} (y_{n}-\lambda _{n} Ay_{n}+e_{n})-x^{*}\bigr\| ^{2} \\ &\quad= \bigl\Vert J_{\lambda _{n}} (y_{n}-\lambda _{n} Ay_{n}+e_{n})-J_{\lambda _{n}} \bigl(x^{*}- \lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2} \\ &\quad\leq \bigl\Vert (y_{n}-\lambda _{n} Ay_{n}+e_{n})-\bigl(x^{*}-\lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2} \\ &\qquad{} - \bigl\Vert (I-J_{\lambda _{n}}) (y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}}) \bigl(x^{*}- \lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2} \\ &\quad\leq \bigl( \bigl\Vert (y_{n}-\lambda _{n} Ay_{n}) -\bigl(x^{*}-\lambda _{n} Ax^{*}\bigr) \bigr\Vert + \Vert e_{n} \Vert \bigr)^{2} \\ &\qquad{} - \bigl\Vert (I-J_{\lambda _{n}}) (y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}}) \bigl(x^{*}- \lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2} \\ &\quad= \bigl\Vert (I-\lambda _{n}A)y_{n} -(I-\lambda _{n} A)x^{*} \bigr\Vert ^{2} +2 \bigl\Vert (y_{n}- \lambda _{n} Ay_{n})- \bigl(x^{*}-\lambda _{n} Ax^{*}\bigr) \bigr\Vert \Vert e_{n} \Vert \\ &\qquad{} + \Vert e_{n} \Vert ^{2} - \bigl\Vert (I-J_{\lambda _{n}}) (y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}}) \bigl(x^{*}- \lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2} \\ &\quad\leq \bigl\Vert y_{n}-x^{*} \bigr\Vert ^{2} -\lambda _{n} (2\alpha -\lambda _{n}) \bigl\Vert Ay_{n}-Ax^{*} \bigr\Vert ^{2} \\ &\qquad{} +2 \bigl\Vert (y_{n}-\lambda _{n} Ay_{n})- \bigl(x^{*}-\lambda _{n} Ax^{*}\bigr) \bigr\Vert \Vert e_{n} \Vert + \Vert e_{n} \Vert ^{2} \\ &\qquad{} - \bigl\Vert (I-J_{\lambda _{n}}) (y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}}) \bigl(x^{*}- \lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2} \\ &\quad\leq \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+2\theta _{n} \Vert x_{n}-x_{n-1} \Vert \bigl\Vert y_{n}-x^{*} \bigr\Vert -\lambda _{n} (2\alpha -\lambda _{n}) \bigl\Vert Ay_{n}-Ax^{*} \bigr\Vert ^{2} \\ &\qquad{} +2 \bigl\Vert (y_{n}-\lambda _{n} Ay_{n})- \bigl(x^{*}-\lambda _{n} Ax^{*}\bigr) \bigr\Vert \Vert e_{n} \Vert + \Vert e_{n} \Vert ^{2} \\ &\qquad{} - \bigl\Vert (I-J_{\lambda _{n}}) (y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}}) \bigl(x^{*}- \lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2}. \end{aligned}$$
(3.3)

We also have

$$\begin{aligned} &\bigl\Vert w_{n}-x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl\langle w_{n}-x^{*},w_{n}-x^{*} \bigr\rangle \\ &\quad= \bigl\langle \alpha _{n} f(x_{n})+(1-\alpha _{n})J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})-x^{*},w_{n}-x^{*} \bigr\rangle \\ &\quad= \bigl\langle \alpha _{n} \bigl(f(x_{n})-x^{*} \bigr)+(1-\alpha _{n}) \bigl(J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})-x^{*} \bigr),w_{n}-x^{*}\bigr\rangle \\ &\quad= \alpha _{n} \bigl\langle f(x_{n})-f \bigl(x^{*}\bigr),w_{n}-x^{*} \bigr\rangle + \alpha _{n} \bigl\langle f\bigl(x^{*}\bigr)-x^{*},w_{n}-x^{*} \bigr\rangle \\ &\qquad{} +(1-\alpha _{n})\bigl\langle J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})-x^{*} ,w_{n}-x^{*} \bigr\rangle \\ &\quad\leq \alpha _{n} k \bigl\Vert x_{n}-x^{*} \bigr\Vert \bigl\Vert w_{n}-x^{*} \bigr\Vert +\alpha _{n} \bigl\langle f\bigl(x^{*}\bigr)-x^{*},w_{n}-x^{*} \bigr\rangle \\ &\qquad{} +(1-\alpha _{n}) \bigl\Vert J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})-x^{*} \bigr\Vert \bigl\Vert w_{n}-x^{*} \bigr\Vert \\ &\quad\leq \frac{1}{2} \alpha _{n} k \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+ \bigl\Vert w_{n}-x^{*} \bigr\Vert ^{2} \bigr) +\alpha _{n} \bigl\langle f\bigl(x^{*}\bigr)-x^{*},w_{n}-x^{*} \bigr\rangle \\ &\qquad{} +\frac{1}{2} (1-\alpha _{n}) \bigl( \bigl\Vert J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})-x^{*} \bigr\Vert ^{2}+ \bigl\Vert w_{n}-x^{*} \bigr\Vert ^{2} \bigr). \end{aligned}$$

This implies that

$$\begin{aligned} \bigl\Vert w_{n}-x^{*} \bigr\Vert ^{2} \leq {}& \frac{\alpha _{n} k}{1+\alpha _{n} (1-k)} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} +\frac{2\alpha _{n}}{1+\alpha _{n} (1-k)} \bigl\langle f \bigl(x^{*}\bigr)-x^{*},w_{n}-x^{*} \bigr\rangle \\ &{} +\frac{1-\alpha _{n}}{1+\alpha _{n} (1-k)} \bigl\Vert J_{\lambda _{n}}(y_{n}- \lambda _{n} Ay_{n}+e_{n})-x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(3.4)

Hence by (3.3), (3.4), and the nonexpansiveness of S we obtain

$$\begin{aligned} &\bigl\| x_{n+1}-x^{*}\bigr\| ^{2}\\ &\quad = \bigl\| Sw_{n}-Sx^{*}\bigr\| ^{2} \leq \bigl\| w_{n}-x^{*}\bigr\| ^{2}\\ &\quad\leq \frac{\alpha _{n} k}{1+\alpha _{n} (1-k)} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} +\frac{2\alpha _{n}}{1+\alpha _{n} (1-k)} \bigl\langle f\bigl(x^{*} \bigr)-x^{*},w_{n}-x^{*} \bigr\rangle \\ &\qquad{} +\frac{1-\alpha _{n}}{1+\alpha _{n} (1-k)} \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+2 \theta _{n} \Vert x_{n}-x_{n-1} \Vert \bigl\Vert y_{n}-x^{*} \bigr\Vert \\ &\qquad{} -\lambda _{n} (2\alpha -\lambda _{n}) \bigl\Vert Ay_{n}-Ax^{*} \bigr\Vert ^{2} +2 \bigl\Vert (y_{n}- \lambda _{n} Ay_{n})- \bigl(x^{*}-\lambda _{n} Ax^{*}\bigr) \bigr\Vert \Vert e_{n} \Vert \\ &\qquad{} + \Vert e_{n} \Vert ^{2}- \bigl\Vert (I-J_{\lambda _{n}}) (y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}}) \bigl(x^{*}- \lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2} \bigr). \end{aligned}$$

It follows that

$$\begin{aligned} &\bigl\| x_{n+1}-x^{*}\bigr\| ^{2} \\ &\quad\leq \biggl( 1-\frac{\alpha _{n} (1-k)}{1+\alpha _{n} (1-k)} \biggr) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} + \frac{\alpha _{n} (1-k)}{1+\alpha _{n} (1-k)} \biggl( \frac{2}{1-k} \bigl\langle f\bigl(x^{*}\bigr)-x^{*},w_{n}-x^{*} \bigr\rangle \\ &\qquad{} +\frac{2(1-\alpha _{n})}{1-k} \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \bigl\Vert y_{n}-x^{*} \bigr\Vert \\ &\qquad{} +\frac{2(1-\alpha _{n})}{1-k} \frac{ \Vert e_{n} \Vert }{\alpha _{n}} \bigl\Vert (y_{n}- \lambda _{n} Ay_{n})-\bigl(x^{*}-\lambda _{n} Ax^{*}\bigr) \bigr\Vert + \frac{1-\alpha _{n}}{1-k} \frac{ \Vert e_{n} \Vert }{\alpha _{n}} \Vert e_{n} \Vert \biggr) \end{aligned}$$

and

$$\begin{aligned} &\bigl\| x_{n+1}-x^{*}\bigr\| ^{2}\\ &\quad\leq \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} - \bigl( \lambda _{n} (2\alpha -\lambda _{n}) \bigl\Vert Ay_{n}-Ax^{*} \bigr\Vert ^{2} \\ &\qquad{} + \bigl\Vert (I-J_{\lambda _{n}}) (y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}}) \bigl(x^{*}- \lambda _{n} Ax^{*}\bigr) \bigr\Vert ^{2} \bigr) \\ &\qquad{} + \biggl( \frac{2\alpha _{n}}{1+\alpha _{n} (1-k)} \bigl\Vert f\bigl(x^{*} \bigr)-x^{*} \bigr\Vert \bigl\Vert w_{n}-x^{*} \bigr\Vert +2\alpha _{n} \frac{\theta _{n}}{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \bigl\Vert y_{n}-x^{*} \bigr\Vert \\ &\qquad{} +2 \bigl\Vert (y_{n}-\lambda _{n} Ay_{n})- \bigl(x^{*}-\lambda _{n} Ax^{*}\bigr) \bigr\Vert \Vert e_{n} \Vert + \Vert e_{n} \Vert ^{2} \biggr), \end{aligned}$$

which are of the forms

$$\begin{aligned} s_{n+1} \leq (1-\gamma _{n})s_{n} + \gamma _{n} \delta _{n} \end{aligned}$$

and

$$\begin{aligned} s_{n+1} \leq s_{n} -\eta _{n} + \rho _{n}, \end{aligned}$$

respectively, where \(s_{n}=\|x_{n}-x^{*}\|^{2}, \gamma _{n} = \frac{\alpha _{n} (1-k)}{1+\alpha _{n} (1-k)}, \delta _{n} = \frac{2}{1-k} \langle f(x^{*})-x^{*},w_{n}-x^{*} \rangle + \frac{2(1-\alpha _{n})}{1-k} \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| \|y_{n}-x^{*}\| +\frac{2(1-\alpha _{n})}{1-k} \frac{\|e_{n}\|}{\alpha _{n}}\|(y_{n}-\lambda _{n} Ay_{n})-(x^{*}- \lambda _{n} Ax^{*})\| + \frac{1-\alpha _{n}}{1-k} \frac{\|e_{n}\|}{\alpha _{n}}\) \(\|e_{n}\|, \eta _{n} = \lambda _{n} (2\alpha -\lambda _{n})\|Ay_{n}-Ax^{*} \|^{2} +\|(I-J_{\lambda _{n}})(y_{n}-\lambda _{n} Ay_{n}+e_{n})-(I-J_{ \lambda _{n}})(x^{*}-\lambda _{n} Ax^{*})\|^{2} \), and \(\rho _{n} = \frac{2\alpha _{n}}{1+\alpha _{n} (1-k)} \| f(x^{*})-x^{*} \| \|w_{n}-x^{*} \| +2\alpha _{n} \frac{\theta _{n}}{\alpha _{n}} \| x_{n}-x_{n-1} \| \|y_{n}-x^{*} \| +2\|(y_{n}-\lambda _{n} Ay_{n})-(x^{*}-\lambda _{n} Ax^{*})\| \|e_{n}\| +\|e_{n}\|^{2}\). Therefore, using conditions (C1) and (C4), we can check that all those sequences satisfy conditions (i) and (ii) in Lemma 2.10. To complete the proof, we verify that condition (iii) in Lemma 2.10 is satisfied. Let \(\lim_{i\rightarrow \infty }\eta _{n_{i}} = 0\). Then by condition (C2) we have

$$\begin{aligned} \lim_{i\rightarrow \infty } \bigl\Vert Ay_{n_{i}}-Ax^{*} \bigr\Vert = 0 \end{aligned}$$
(3.5)

and

$$\begin{aligned} \lim_{i\rightarrow \infty } \bigl\Vert (I-J_{\lambda _{n_{i}}}) (y_{n_{i}}- \lambda _{n_{i}} Ay_{n_{i}}+e_{n_{i}})-(I-J_{\lambda _{n_{i}}}) \bigl(x^{*}- \lambda _{n_{i}} Ax^{*}\bigr) \bigr\Vert = 0. \end{aligned}$$

It follows by conditions (C2) and (C4) and by (3.5) that

$$\begin{aligned} & \lim_{i\rightarrow \infty } \bigl\Vert (y_{n_{i}}- \lambda _{n_{i}} Ay_{n_{i}}+e_{n_{i}})-J_{ \lambda _{n_{i}}}(y_{n_{i}}- \lambda _{n_{i}} Ay_{n_{i}}+e_{n_{i}}) \\ &\quad{}-\bigl(\bigl(x^{*}-\lambda _{n_{i}} Ax^{*} \bigr)-J_{\lambda _{n_{i}}}\bigl(x^{*}-\lambda _{n_{i}} Ax^{*}\bigr)\bigr) \bigr\Vert = 0, \\ &\lim_{i\rightarrow \infty } \bigl\Vert y_{n_{i}}-J_{\lambda _{n_{i}}}(y_{n_{i}}- \lambda _{n_{i}} Ay_{n_{i}}) \bigr\Vert = 0. \end{aligned}$$
(3.6)

Consider a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\). As \(\{x_{n}\}\) is bounded, so is \(\{x_{n_{i}}\}\), and thus there exists a subsequence \(\{x_{n_{i_{j}}}\}\) of \(\{x_{n_{i}}\}\) that weakly converges to \(x \in C\). Without loss of generality, we can assume that \(x_{n_{i}} \rightharpoonup x\) as \(i\rightarrow \infty \). On the other hand, by conditions (C1) and (C4) we have

$$\begin{aligned} \lim_{i\rightarrow \infty } \Vert y_{n_{i}}-x_{n_{i}} \Vert = \lim_{i \rightarrow \infty } \alpha _{n_{i}} \frac{\theta _{n_{i}}}{\alpha _{n_{i}}} \Vert x_{n_{i}}-x_{{n_{i}}-1} \Vert = 0. \end{aligned}$$
(3.7)

It follows that \(y_{n_{i}} \rightharpoonup x\) as \(i\rightarrow \infty \). Therefore by (3.6) and the demiclosedness at zero in Lemma 2.7 we obtain \(x \in \operatorname{Fix}(J_{\lambda _{n_{i}}}(I-\lambda _{n_{i}}A))\), that is, \(x \in (A+B)^{-1}(0)\). Next, we will show that \(x \in \operatorname{Fix}(S)\). By the nonexpansiveness of S we have

$$\begin{aligned} \Vert x_{{n_{i}}+1}-Sx_{n_{i}} \Vert ={}& \Vert Sw_{n_{i}}-Sx_{n_{i}} \Vert \leq \Vert w_{n_{i}}-x_{n_{i}} \Vert \\ \leq {}& \Vert w_{n_{i}}-y_{n_{i}} \Vert + \Vert y_{n_{i}}-x_{n_{i}} \Vert \\ \leq {}& \alpha _{n_{i}} \bigl\Vert f(x_{n_{i}})-y_{n_{i}} \bigr\Vert \\ &{} +(1-\alpha _{n_{i}}) \bigl\Vert J_{\lambda _{n_{i}}}(y_{n_{i}}- \lambda _{n_{i}}Ay_{n_{i}}+e_{n_{i}})-y_{n_{i}} \bigr\Vert + \Vert y_{n_{i}}-x_{n_{i}} \Vert . \end{aligned}$$

It follows by (3.6), (3.7), and conditions (C1) and (C4) that

$$\begin{aligned} \lim_{i\rightarrow \infty } \Vert x_{{n_{i}}+1}-Sx_{n_{i}} \Vert = 0. \end{aligned}$$

Then by NST-condition (II) in Lemma 2.8(ii) we get

$$\begin{aligned} \lim_{i\rightarrow \infty } \Vert x_{n_{i}}-Sx_{n_{i}} \Vert = 0. \end{aligned}$$
(3.8)

Hence by (3.8) and the demiclosedness at zero in Lemma 2.7 again we obtain \(x \in \operatorname{Fix}(S)\), that is, \(x\in \operatorname{Fix}(S)\cap (A+B)^{-1}(0)\). Since

$$\begin{aligned} \Vert w_{n_{i}}-x_{n_{i}} \Vert \leq {}& \alpha _{n_{i}} \bigl\Vert f(x_{n_{i}})-x_{n_{i}} \bigr\Vert +(1-\alpha _{n_{i}}) \bigl\Vert J_{\lambda _{n_{i}}}(y_{n_{i}}-\lambda _{n_{i}}Ay_{n_{i}}+e_{n_{i}})-x_{n_{i}} \bigr\Vert \\ \leq {}& \alpha _{n_{i}} \bigl\Vert f(x_{n_{i}})-x_{n_{i}} \bigr\Vert +(1-\alpha _{n_{i}}) \bigl( \bigl\Vert J_{\lambda _{n_{i}}}(y_{n_{i}}- \lambda _{n_{i}}Ay_{n_{i}}+e_{n_{i}})-y_{n_{i}} \bigr\Vert \\ &{} + \Vert y_{n_{i}}-x_{n_{i}} \Vert \bigr), \end{aligned}$$

by (3.6) and (3.7) and conditions (C1) and (C4) we obtain

$$\begin{aligned} \lim_{i\rightarrow \infty } \Vert w_{n_{i}}-x_{n_{i}} \Vert = 0. \end{aligned}$$

This implies that \(w_{n_{i}} \rightharpoonup x\) as \(i\rightarrow \infty \). Therefore by Lemma 2.2(i) we obtain

$$\begin{aligned} \limsup_{i\rightarrow \infty }\bigl\langle f\bigl(x^{*} \bigr)-x^{*},w_{n_{i}}-x^{*} \bigr\rangle = \bigl\langle f\bigl(x^{*}\bigr)-x^{*},x-x^{*} \bigr\rangle \leq 0. \end{aligned}$$

It follows by conditions (C1), (C3), and (C4) that \(\limsup_{i\rightarrow \infty } \delta _{n_{i}} \leq 0\). So by Lemma 2.10 we conclude that \(x_{n} \rightarrow x^{*}\) as \(n\rightarrow \infty \). This completes the proof. □

Remark 3.2

([23])

We remark here that condition (C4), \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1}\| = 0\), is easily implemented in numerical computation since the value of \(\|x_{n}-x_{n-1}\|\) is known before choosing \(\theta _{n}\). Indeed, the parameter \(\theta _{n}\) can be chosen as \(0 \leq \theta _{n} \leq \bar{\theta _{n}}\), where

$$\begin{aligned} \bar{\theta _{n}} = \textstyle\begin{cases} \min \{ \frac{\omega _{n}}{ \Vert x_{n}-x_{n-1} \Vert }, \theta \} & \text{if } x_{n} \neq x_{n-1}, \\ \theta & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$

where \(\{\omega _{n}\}\) is a positive sequence such that \(\omega _{n} = o(\alpha _{n})\).
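For readers who wish to experiment, the following Python sketch assembles the iteration of Theorem 3.1 with the inertial parameter chosen as in Remark 3.2. The callables `A`, `resolvent`, `S`, `f` and the parameter sequences `alpha`, `lam`, `err` are placeholders to be supplied for a concrete problem, and \(\omega _{n}=\alpha _{n}^{2}\) is used as one admissible choice with \(\omega _{n}=o(\alpha _{n})\); it is a sketch, not the authors' reference implementation.

```python
import numpy as np

def inertial_viscosity_fb(x0, x1, A, resolvent, S, f,
                          alpha, lam, err, theta_bar=0.5,
                          max_iter=10000, tol=1e-6):
    """Sketch of the algorithm in Theorem 3.1:
        y_n     = x_n + theta_n (x_n - x_{n-1})
        x_{n+1} = S( alpha_n f(x_n) + (1 - alpha_n) J_{lam_n}(y_n - lam_n A y_n + e_n) )
    with theta_n chosen as in Remark 3.2 (omega_n = alpha_n**2)."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        a_n, l_n, e_n = alpha(n), lam(n), err(n)
        diff = np.linalg.norm(x - x_prev)
        omega_n = a_n ** 2                                   # positive sequence with omega_n = o(alpha_n)
        theta_n = theta_bar if diff == 0 else min(omega_n / diff, theta_bar)
        y = x + theta_n * (x - x_prev)                       # inertial step
        z = resolvent(y - l_n * A(y) + e_n, l_n)             # forward-backward step with error e_n
        x_prev, x = x, S(a_n * f(x) + (1 - a_n) * z)         # viscosity step composed with S
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x
```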

4 Applications and numerical examples

In this section, we give some applications of our result to the fixed point problem of the nonexpansive variational inequality problem, the common fixed point problem of nonexpansive strict pseudocontractions, the convex minimization problem, and the split feasibility problem.

4.1 Fixed point problem of the nonexpansive variational inequality problem

The variational inequality problem is to find \(x^{*}\in C\) such that

$$\begin{aligned} \bigl\langle Ax^{*},y-x^{*} \bigr\rangle \geq 0, \quad\forall y \in C. \end{aligned}$$
(4.1)

We denote the solution set of (4.1) by \(VI(C,A)\). It is well known that \(\operatorname{Fix}(P_{C}(I-rA)) = VI(C,A)\) for all \(r>0\). Define the indicator function of C, denoted by \(i_{C}\), as \(i_{C}(x)=0\) if \(x \in C\) and \(i_{C}(x) = \infty \) if \(x \notin C\). We see that \(\partial i_{C}\) is maximal monotone. So, for \(r>0\), we can define \(J_{r}=(I+r \partial i_{C})^{-1}\). Moreover, \(x=J_{r} y\) if and only if \(x=P_{C} y\). Hence by Theorem 3.1 we obtain the following result.

Theorem 4.1

Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be an α-inverse strongly monotone mapping of H into itself, let S be a nonexpansive mapping of C into itself such that \(\Omega:= \operatorname{Fix}(S)\cap VI(C,A) \neq \emptyset \), and let f be a k-contraction mapping of C into itself. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be a sequence generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n}+\theta _{n} (x_{n}-x_{n-1}), \\ x_{n+1} = S(\alpha _{n} f(x_{n})+(1-\alpha _{n})P_{C}(y_{n}- \lambda _{n} Ay_{n}+e_{n})), \end{cases}\displaystyle \end{aligned}$$

for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0,2\alpha ), \{e_{n}\} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:

  1. (C1)

    \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),

  2. (C2)

    \(0< a\leq \lambda _{n} \leq b < 2\alpha \) for some \(a,b>0\),

  3. (C3)

    \(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),

  4. (C4)

    \(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).

Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).

We next provide a formulation that will be used in our example and its numerical results.

Proposition 4.2

([24, 25])

For \(\rho > 0\) and \(C = \{x\in \mathbb{R}^{N}: \|x\|_{2} \leq \rho \}\), we have

$$\begin{aligned} P_{C} x = \textstyle\begin{cases} \frac{\rho x}{ \Vert x \Vert _{2}}, & x \notin C, \\ x, &x\in C. \end{cases}\displaystyle \end{aligned}$$
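A direct Python translation of Proposition 4.2 (a small helper used in the sketches below, not taken from [24, 25]) is:

```python
import numpy as np

def project_ball(x, rho):
    """Projection onto C = {x in R^N : ||x||_2 <= rho} (Proposition 4.2)."""
    norm = np.linalg.norm(x)
    return x if norm <= rho else (rho / norm) * x
```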

Example 4.3

Let \(C = \{a\in \mathbb{R}^{2}: \|a\|_{2} \leq 1 \}\). Find a point \(x^{*} \in C\) that satisfies the following variational inequality:

$$\begin{aligned} -2 \Vert x \Vert ^{2}+2x^{T} y+(1,1) (x-y) \geq 0,\quad \forall y \in C. \end{aligned}$$

Let \(H=(\mathbb{R}^{2},\|\cdot \|_{2} )\). For each \(x=(u_{1},u_{2})^{T},y=(v_{1},v_{2})^{T} \in \mathbb{R}^{2}\), we have

$$\begin{aligned} &-2\|x\|^{2}+2x^{T} y+(1,1)(x-y) \\ &\quad= -2\bigl(u_{1}^{2}+u_{2}^{2} \bigr)+2(u_{1}v_{1}+u_{2}v_{2})+(u_{1}-v_{1})+(u_{2}-v_{2}) \\ &\quad= (v_{1}-u_{1}) (2u_{1}-1)+(v_{2}-u_{2}) (2u_{2}-1) \\ &\quad= (y-x)^{T} Ax = \langle Ax,y-x \rangle, \end{aligned}$$

where \(Ax = (2u_{1}-1,2u_{2}-1)^{T}\) for \(x = (u_{1},u_{2})^{T} \in \mathbb{R}^{2}\). Note that A is α-inverse strongly monotone with \(\alpha =\frac{1}{2}\) and \(\frac{1}{L}\)-Lipschitzian with \(L=\frac{1}{2}\).

We set \(S(x)= P_{C}(1-u_{1},1-u_{2})^{T}\) and \(f(x)=\frac{3x}{5}\) for \(x = (u_{1},u_{2})^{T} \in C\). Then S is nonexpansive, and f is a k-contraction for every \(k \in [\frac{3}{5},1)\). For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1}, e_{n}= \frac{1}{(n+1)^{3}}(1,1)^{T}, \theta = 0.5\), and \(\omega _{n} = \frac{1}{(n+1)^{3}}\), and we define \(\theta _{n} = \bar{\theta }_{n}\) as in Remark 3.2.

To determine the best choice of the sequences \(\{\lambda _{n} \}\), we consider, as in Table 1, sequences \(\{\lambda _{n} \}\) that converge to L as well as constant sequences with values near L and near the boundary of the admissible interval, and we compare the number of iterations required in the recursive computation of the sequence \(\{x_{n} \}\) by the algorithm in Theorem 4.1.

Table 1 Numerical results of Example 4.3 for the initial points \(x_{0} = (-1,0)^{T}\) and \(x_{1} = (0,1)^{T}\) using the algorithm in Theorem 4.1

We choose the initial points \(x_{0}=(-1,0)^{T}\) and \(x_{1} = (0,1)^{T} \in C\) (indeed, \(x_{0},x_{1}\) can be chosen arbitrarily in H) for the recursive computation of the sequence \(\{x_{n}\}\) by the algorithm in Theorem 4.1 with an error tolerance of \(10^{-6}\). As \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\) with \(x^{*} \approx (0.5,0.5)^{T}\), as in Table 1. We also show the benchmark for all choice types of the sequences \(\{\lambda _{n} \}\) in Fig. 1 and the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero for all the best choice types of the sequences \(\{ \lambda _{n} \}\), in Fig. 2.
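The data of Example 4.3 can be encoded as follows. This is an illustrative sketch in which `project_ball` plays the role of \(P_{C}\); the printed values confirm that \(x^{*}=(0.5,0.5)^{T}\) solves the variational inequality (since \(Ax^{*}=0\)) and is a fixed point of S.

```python
import numpy as np

def project_ball(x, rho):                            # Proposition 4.2
    n = np.linalg.norm(x)
    return x if n <= rho else (rho / n) * x

A = lambda x: 2.0 * x - np.array([1.0, 1.0])          # Ax = (2u1 - 1, 2u2 - 1)^T, 1/2-inverse strongly monotone
S = lambda x: project_ball(np.array([1.0, 1.0]) - x, 1.0)   # S(x) = P_C(1 - u1, 1 - u2)^T
f = lambda x: 0.6 * x                                 # f(x) = 3x/5, a 3/5-contraction

x_star = np.array([0.5, 0.5])
print(A(x_star), S(x_star))                           # [0. 0.] [0.5 0.5]: A x* = 0 and S x* = x*
```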

Figure 1 Benchmark for all choice types of the sequences \(\{\lambda _{n}\}\) in Example 4.3

Figure 2 Convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\) for the best choice types A2, B1, and C1 in Example 4.3

In this example, we found that the sequences \(\{\lambda _{n} \}\) of types C1 and C2 are the best choices for the recursive computation of the sequence \(\{x_{n}\}\).

4.2 Common fixed point problem of nonexpansive strict pseudocontractions

A mapping \(T:C\rightarrow C\) is called β-strictly pseudocontractive if there exists \(\beta \in [0,1)\) such that

$$\begin{aligned} \Vert Tx-Ty \Vert ^{2} \leq \Vert x-y \Vert ^{2} + \beta \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2} \end{aligned}$$

for all \(x,y \in C\). It is well known that if T is β-strictly pseudocontractive, then \(I-T\) is \(\frac{1-\beta }{2}\)-inverse strongly monotone. Moreover, by putting \(A=I-T\) we have \(\operatorname{Fix}(T)=VI(C,A)\). So by Theorem 4.1 we obtain the following result.

Theorem 4.4

Let C be a nonempty closed convex subset of a real Hilbert space H. Let T be a β-strict pseudocontraction of H into itself, let S be a nonexpansive mapping of C into itself such that \(\Omega:= \operatorname{Fix}(S)\cap \operatorname{Fix}(T) \neq \emptyset \), and let f be a k-contraction mapping of C into itself. Let \(x_{0},x_{1} \in C\), and let \(\{x_{n}\} \subset C\) be a sequence generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n}+\theta _{n} (x_{n}-x_{n-1}), \\ x_{n+1} = S(\alpha _{n} f(x_{n})+(1-\alpha _{n})P_{C}((1-\lambda _{n})y_{n}+ \lambda _{n} Ty_{n}+e_{n})), \end{cases}\displaystyle \end{aligned}$$

for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0,1-\beta ), \{e_{n}\} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:

  1. (C1)

    \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),

  2. (C2)

    \(0< a\leq \lambda _{n} \leq b < 1-\beta \) for some \(a,b>0\),

  3. (C3)

    \(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),

  4. (C4)

    \(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).

Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).

Example 4.5

Let \(C = \{a\in \mathbb{R}^{3}: \|a\|_{2} \leq 2 \}\). Find a common fixed point \(x^{*} \in C\) of the mappings S and T defined as follows:

$$\begin{aligned} &S(x) = P_{C}(2-u,2-v,2-w)^{T}, \quad\forall x = (u,v,w)^{T} \in C, \\ &T(x) = (4-3u, 4-3v, 4-3w)^{T}, \quad\forall x = (u,v,w)^{T} \in \mathbb{R}^{3}. \end{aligned}$$

Let \(H=(\mathbb{R}^{3},\|\cdot \|_{2} )\). Note that T is β-strictly pseudocontractive with \(\beta =\frac{1}{2}\), \(I-T\) is \(\frac{1}{L}\)-Lipschitzian with \(L=\frac{1}{4}\), and S is nonexpansive. We set \(f(x)=\frac{3x}{5}\) for \(x \in C\). For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1},e_{n}= \frac{1}{(n+1)^{3}}(1,1,1)^{T}, \theta = 0.5\), and \(\omega _{n} = \frac{1}{(n+1)^{3}}\), and we define \(\theta _{n} = \bar{\theta }_{n}\) as in Remark 3.2.

We choose the initial points \(x_{0}=(1,-1,-1)^{T}\) and \(x_{1} = (-1,0,1)^{T} \in C\) (indeed, \(x_{0},x_{1}\) can be chosen arbitrarily in H) for the recursive computation of the sequence \(\{x_{n}\}\) by the algorithm in Theorem 4.4 with an error tolerance of \(10^{-6}\), using the same choice types of the sequences \(\{\lambda _{n} \}\), now with \(L=\frac{1}{4}\). As \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\) with \(x^{*} \approx (1,1,1)^{T}\), as in Table 2. We also show the benchmark for all choice types of the sequences \(\{\lambda _{n} \}\) in Fig. 3 and the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero for all the best choice types of the sequences \(\{ \lambda _{n} \}\), in Fig. 4.
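As a quick check of the reported solution, the following sketch verifies that \((1,1,1)^{T}\) is a common fixed point of S and T in Example 4.5 (again with `project_ball` standing for \(P_{C}\); an illustrative sketch, not part of the original experiments).

```python
import numpy as np

def project_ball(x, rho):                      # Proposition 4.2
    n = np.linalg.norm(x)
    return x if n <= rho else (rho / n) * x

S = lambda x: project_ball(2.0 - x, 2.0)       # S(x) = P_C(2-u, 2-v, 2-w)^T
T = lambda x: 4.0 - 3.0 * x                    # T(x) = (4-3u, 4-3v, 4-3w)^T

x_star = np.ones(3)
print(S(x_star), T(x_star))                    # both print [1. 1. 1.]: a common fixed point
```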

Figure 3 Benchmark for all choice types of the sequences \(\{\lambda _{n}\}\) in Example 4.5

Figure 4 Convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\) for the best choice types A2, B1, and C2 in Example 4.5

Table 2 Numerical results of Example 4.5 for the initial points \(x_{0} = (1,-1,-1)^{T}\) and \(x_{1} = (-1,0,1)^{T}\) using the algorithm in Theorem 4.4

In this example, we found that the sequences \(\{\lambda _{n} \}\) of types C1 and C2 are the best choices for the recursive computation of the sequence \(\{x_{n}\}\).

4.3 Convex minimization problem

We next consider the following convex minimization problem (CMP): find \(x^{*} \in H\) such that

$$\begin{aligned} F\bigl(x^{*}\bigr)+G\bigl(x^{*}\bigr) = \min _{x\in H}F(x)+G(x) \quad\Leftrightarrow \quad0 \in \nabla F \bigl(x^{*}\bigr)+\partial G\bigl(x^{*}\bigr), \end{aligned}$$
(4.2)

where \(F: H\rightarrow \mathbb{R}\) is a convex differentiable function, and \(G:H \rightarrow \mathbb{R}\) is a convex function. It is well known that if \(\nabla F\) is \((1/L)\)-Lipschitz continuous, then \(\nabla F\) is L-inverse strongly monotone [26]. Moreover, ∂G is maximal monotone [27]. Putting \(A=\nabla F\) and \(B=\partial G\), by Theorem 3.1 we obtain the following result.

Theorem 4.6

Let H be a real Hilbert space. Let \(F: H\rightarrow \mathbb{R}\) be a convex differentiable function with \((1/L)\)-Lipschitz continuous gradient \(\nabla F\), and let \(G: H \rightarrow \mathbb{R}\) be a convex and lower semicontinuous function. Let \(J_{\lambda }=(I+\lambda \partial G)^{-1}\) be the resolvent of ∂G for \(\lambda > 0\), let S be a nonexpansive mapping of H into itself such that \(\Omega:= \operatorname{Fix}(S)\cap (\nabla F+\partial G)^{-1}(0) \neq \emptyset \), and let f be a k-contraction mapping of H into itself. Let \(x_{0},x_{1} \in H\), and let \(\{x_{n}\} \subset H\) be the sequence generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n}+\theta _{n} (x_{n}-x_{n-1}), \\ x_{n+1} = S(\alpha _{n} f(x_{n})+(1-\alpha _{n})J_{\lambda _{n}}(y_{n}- \lambda _{n} \nabla F(y_{n})+e_{n})), \end{cases}\displaystyle \end{aligned}$$

for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0,2L), \{e_{n} \} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:

  1. (C1)

    \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),

  2. (C2)

    \(0< a\leq \lambda _{n} \leq b < 2L\) for some \(a,b>0\),

  3. (C3)

    \(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),

  4. (C4)

    \(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).

Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).

We next provide the formulation which will be used in our example and its numerical results.

Proposition 4.7

([28])

Let \(G: \mathbb{R}^{N} \rightarrow \mathbb{R} \) be given by \(G(x)=\|x\|_{1}\) for \(x \in \mathbb{R}^{N}\). For \(r > 0\) and \(x=(x_{1},x_{2},\ldots,x_{N})^{T} \in \mathbb{R}^{N}\), we have \((I+r \partial G)^{-1} (x) = y\) such that \(y = (y_{1},y_{2},\ldots,y_{N})^{T} \in \mathbb{R}^{N}\) where \(y_{i} = \operatorname{sign}(x_{i}) \max \{|x_{i}|-r,0\}\) for \(i=1,2,\ldots,N\).
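In Python the resolvent of Proposition 4.7 is the familiar componentwise soft-thresholding operator; the helper below is an illustrative sketch of that formula.

```python
import numpy as np

def prox_l1(x, r):
    """Resolvent (I + r ∂G)^{-1} for G(x) = ||x||_1 (Proposition 4.7):
    componentwise soft thresholding with threshold r."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)
```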

Example 4.8

Find a point minimizing the following \(\ell _{1}\)-least square problem:

$$\begin{aligned} \min_{x\in \mathbb{R}^{3}} \Vert x \Vert _{1}+ \frac{1}{2} \Vert x \Vert _{2}^{2}+(-2,1,-3)x+3, \end{aligned}$$

where \(x =(u,v,w)^{T} \in \mathbb{R}^{3}\).

Let \(H = (\mathbb{R}^{3},\|\cdot \|_{2})\), \(F(x)=\frac{1}{2}\|x\|_{2}^{2}+(-2,1,-3)x+3\), and \(G(x)=\|x\|_{1}\) for all \(x \in \mathbb{R}^{3}\). Then \(\nabla F(x) = (u-2,v+1,w-3)^{T}\) for all \(x \in \mathbb{R}^{3}\). It follows that F is convex and differentiable on \(\mathbb{R}^{3}\) with a \(\frac{1}{L}\)-Lipschitz continuous gradient \(\nabla F\), where \(L=1\). Moreover, G is convex and lower semicontinuous but not differentiable on \(\mathbb{R}^{3}\).

We set \(S(x) =(2-u,-v,4-w)^{T}\) and \(f(x) = \frac{x}{5}\) for \(x \in \mathbb{R}^{3}\). Then S is nonexpansive, and f is a k-contraction for every \(k \in [\frac{1}{5},1)\). For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1}, e_{n}= \frac{1}{(n+1)^{3}}(1,1,1)^{T}, \theta = 0.5\), and \(\omega _{n} = \frac{1}{(n+1)^{3}}\), and we define \(\theta _{n} = \bar{\theta }_{n}\) as in Remark 3.2.

For each \(n \in \mathbb{N}\), by Proposition 4.7 we have

$$\begin{aligned} &(I+\lambda _{n} \partial G )^{-1}(x)\\ &\quad = \bigl( \operatorname{sign}(u) \max \bigl\{ \vert u \vert -\lambda _{n},0\bigr\} , \operatorname{sign}(v) \max \bigl\{ \vert v \vert - \lambda _{n},0\bigr\} , \operatorname{sign}(w) \max \bigl\{ \vert w \vert -\lambda _{n},0\bigr\} \bigr)^{T}. \end{aligned}$$

We choose the initial points \(x_{0}=(1,-2,-1)^{T}\) and \(x_{1} = (-2,-1,2)^{T}\) for the recursive computation of the sequence \(\{x_{n}\}\) by the algorithm in Theorem 4.6 with an error tolerance of \(10^{-6}\), using the same choice types of the sequences \(\{\lambda _{n} \}\), now with \(L=1\). As \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\), where the approximate minimizer of \(F+G\) is \((1,0,2)^{T}\) and its approximate minimum value is 0.5, as in Table 3. We also show the benchmark for all choice types of the sequences \(\{\lambda _{n} \}\) in Fig. 5 and the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero for all the best choice types of the sequences \(\{ \lambda _{n} \}\), in Fig. 6.
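The reported minimizer can also be checked directly: since \(\nabla F(x)=x+(-2,1,-3)^{T}\), the optimality condition \(0 \in \nabla F(x^{*})+\partial G(x^{*})\) is equivalent to \(x^{*}=(I+\partial G)^{-1}(x^{*}-\nabla F(x^{*}))=(I+\partial G)^{-1}((2,-1,3)^{T})\). The short sketch below, reusing the soft-thresholding formula of Proposition 4.7, confirms the minimizer \((1,0,2)^{T}\) and the minimum value 0.5.

```python
import numpy as np

def prox_l1(x, r):                                   # soft thresholding (Proposition 4.7)
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

c = np.array([-2.0, 1.0, -3.0])
grad_F = lambda x: x + c                             # F(x) = 0.5*||x||_2^2 + c.x + 3

x_star = prox_l1(-c, 1.0)                            # candidate from the optimality condition with unit step
print(x_star)                                        # [1. 0. 2.]
print(np.allclose(x_star, prox_l1(x_star - grad_F(x_star), 1.0)))        # True: fixed point check
print(np.abs(x_star).sum() + 0.5 * x_star @ x_star + c @ x_star + 3.0)   # 0.5, the minimum value
```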

Figure 5 Benchmark for all choice types of the sequences \(\{\lambda _{n}\}\) in Example 4.8

Figure 6 Convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\) for the best choice types A2, B1, and C1 in Example 4.8

Table 3 Numerical results of Example 4.8 for the initial points \(x_{0} = (1,-2,-1)^{T}\) and \(x_{1} = (-2,-1,2)^{T}\) using the algorithm in Theorem 4.6

In this example, we found that the sequences \(\{\lambda _{n} \}\) of types C1 and C2 are the best choices for the recursive computation of the sequence \(\{x_{n}\}\).

4.4 Split feasibility problem

We next consider the following split feasibility problem (SFP), which was first introduced by Censor and Elfving [29]: find

$$\begin{aligned} x^{*}\in C \quad\text{such that } Ax^{*} \in Q, \end{aligned}$$
(4.3)

where C and Q are two nonempty closed convex subsets of two real Hilbert spaces H and K, respectively, and \(A: H\rightarrow K\) is a bounded linear operator. Problem (4.3) can then be reformulated as finding \(x^{*} \in C\) that solves the following minimization problem:

$$\begin{aligned} F\bigl(x^{*}\bigr) = \min_{x\in C}F(x):= \frac{1}{2} \Vert Ax - P_{Q} Ax \Vert ^{2}\quad \Leftrightarrow\quad 0 \in \nabla F\bigl(x^{*}\bigr), \end{aligned}$$
(4.4)

which is a particular case of the convex minimization problem (4.2) when \(G = 0\). It is well known from Lemma 2.4 that F is a convex, differentiable, and weakly lower semicontinuous function on H with \(\|A\|^{2}\)-Lipschitz continuous gradient \(\nabla F(x) = A^{*}(I-P_{Q})Ax\) for all \(x \in H\), where \(A^{*}\) denotes the adjoint of A. Putting \(F(x) = \frac{1}{2}\|Ax - P_{Q} Ax\|^{2}\) for \(x \in H\), \(\partial G =0\), and \(S=P_{C}\), by Theorem 4.6 we obtain the following result.

Theorem 4.9

Let C and Q be two nonempty closed convex subsets of two real Hilbert spaces H and K, respectively. Let \(A: H\rightarrow K\) be a bounded linear operator, and let f be a k-contraction mapping of H into itself. Assume that the SFP (4.3) has a nonempty solution set Γ. Let \(x_{0},x_{1} \in H\), and let \(\{x_{n}\} \subset H\) be a sequence generated by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = x_{n}+\theta _{n} (x_{n}-x_{n-1}), \\ x_{n+1} = P_{C}(\alpha _{n} f(x_{n})+(1-\alpha _{n})(y_{n}-\lambda _{n} A^{*}(I-P_{Q})Ay_{n}+e_{n})), \end{cases}\displaystyle \end{aligned}$$

for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1), \{\lambda _{n}\} \subset (0, \frac{2}{\|A\|^{2}}), \{e_{n}\} \subset H\), and \(\{\theta _{n}\} \subset [0,\theta ]\) such that \(\theta \in [0,1)\) satisfy the following conditions:

  1. (C1)

    \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),

  2. (C2)

    \(0< a\leq \lambda _{n} \leq b < \frac{2}{\|A\|^{2}}\) for some \(a,b>0\),

  3. (C3)

    \(\lim_{n\rightarrow \infty }\frac{\|e_{n}\|}{\alpha _{n}}=0\),

  4. (C4)

    \(\sum_{n=1}^{\infty }\|e_{n}\| < \infty \), and \(\lim_{n\rightarrow \infty } \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\).

Then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Gamma \), where \(x^{*} = P_{\Gamma }f(x^{*})\); in particular, if \(f \equiv 0\), then \(x^{*}\) is the minimum-norm solution of the SFP (4.3).

Example 4.10

Let \(C =\{a \in \mathbb{R}^{4}: \|a\|_{2} \leq 2 \}\). Find a point \(x^{*} \in C\) that satisfies the following system of linear equations:

$$\begin{aligned} &x+y-2z+w = 1, \\ &x-y+3z+w = 2, \\ &x+y+z-3w = 3, \end{aligned}$$

where \(x,y,z,w \in \mathbb{R}\).

Let \(H=(\mathbb{R}^{4},\|\cdot \|_{2})\) and \(K=(\mathbb{R}^{3},\|\cdot \|_{2})\). We set

$$A= \begin{pmatrix} 1 & 1 & -2 & 1 \\ 1 & -1 & 3 & 1 \\ 1 & 1 & 1 & -3 \end{pmatrix} , $$

\(Q = \{b:b=(1,2,3)^{T}\}\) and \(f(u) = \frac{u}{5}\) for \(u \in \mathbb{R}^{4}\). For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1},e_{n}= \frac{1}{(n+1)^{3}}(1,1,1,1)^{T}, \theta = 0.5\), and \(\omega _{n} = \frac{1}{(n+1)^{3}}\), and we define \(\theta _{n} = \bar{\theta }_{n}\) as in Remark 3.2.

We choose the initial points \(x_{0}=(1,1,0,2)^{T}\) and \(x_{1} = (2,1,3,0)^{T}\) for the recursive computation of the sequence \(\{x_{n}\}\) by the algorithm in Theorem 4.9 with an error tolerance of \(10^{-6}\), using the same choice types of the sequences \(\{\lambda _{n} \}\), now with \(L=\frac{1}{\|A\|^{2}}\), where the gradient \(\nabla F\) defined by (4.4) is \(\frac{1}{L}\)-Lipschitz continuous and \(\|A\|\) is the square root of the maximum eigenvalue of \(A^{T} A\). Using Proposition 4.2 for the projection \(P_{C}\), as \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\), where \(x^{*}\) is our solution; the numerical results are listed in Table 4. We also show the benchmark for all choice types of the sequences \(\{\lambda _{n} \}\) in Fig. 7 and the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero for all the best choice types of the sequences \(\{ \lambda _{n} \}\), in Fig. 8.
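The following sketch shows how the data of Example 4.10 enter one forward (gradient) step of the algorithm in Theorem 4.9. Since Q is the singleton \(\{b\}\), \(P_{Q}(Ax)=b\) and \(\nabla F(x)=A^{T}(Ax-b)\); the projection \(P_{C}\) onto the ball of radius 2 (Proposition 4.2) would be applied to complete an iteration and is omitted here for brevity.

```python
import numpy as np

A = np.array([[1.,  1., -2.,  1.],
              [1., -1.,  3.,  1.],
              [1.,  1.,  1., -3.]])
b = np.array([1., 2., 3.])                      # Q = {b}, so P_Q(Ax) = b

norm_A_sq = np.linalg.eigvalsh(A.T @ A).max()   # ||A||^2: largest eigenvalue of A^T A
grad_F = lambda x: A.T @ (A @ x - b)            # ∇F(x) = A^*(I - P_Q)Ax for this Q

x = np.array([2., 1., 3., 0.])                  # initial point x_1 of Example 4.10
lam = 1.0 / norm_A_sq                           # any step in (0, 2/||A||^2) is admissible
print(x - lam * grad_F(x))                      # one forward step of the iteration in Theorem 4.9
```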

Figure 7 Benchmark for all choice types of the sequences \(\{\lambda _{n}\}\) in Example 4.10

Figure 8 Convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\) for the best choice types A4, B1, and C1 in Example 4.10

Table 4 Numerical results of Example 4.10 for the initial points \(x_{0} = (1,1,0,2)^{T}\) and \(x_{1} = (2,1,3,0)^{T}\) using the algorithm in Theorem 4.9

In this example, we found that the sequences \(\{\lambda _{n} \}\) of types A2, A3, A4, B1, B2, C1, and C2 are the best choices for the recursive computation of the sequence \(\{x_{n}\}\).

Remark 4.11

A new iterative shrinkage thresholding algorithm (NISTA) with an error term is obtained from our main result, based on the forward–backward splitting method with an error term as follows: \(x_{1} \in C\), and

$$\begin{aligned} x_{n+1} = \underbrace{(I+\lambda _{n} B)^{-1}}_{\text{backward step}} \underbrace{\bigl((I-\lambda _{n} A)x_{n} +e_{n} \bigr)}_{ \text{forward step with an error} } = J_{\lambda _{n}}^{B} \bigl((I- \lambda _{n} A)x_{n} +e_{n}\bigr),\quad \forall n \in \mathbb{N}, \end{aligned}$$

where \(\{\lambda _{n}\} \subset (0,\infty ), \{e_{n}\} \subset H, D(B) \subset C\), and \(J_{\lambda _{n}}^{B} = J_{\lambda _{n}} = (I+\lambda _{n} B)^{-1}\). It can be applied to solve many kinds of problems in optimization. For fast convergence of the sequence \(\{x_{n}\}\) to a solution when A is α-inverse strongly monotone (or \(\frac{1}{L}\)-Lipschitzian with \(L = \alpha \)), we choose the step-size parameter \(\lambda _{n}\), which depends on L and controls the effect of the operator A in the forward step of the algorithm, as an alternating sequence \(\{\lambda _{n}\}\subset (0,2L)\) such that \(\lambda _{n} \rightarrow L\) as \(n \rightarrow \infty \), which guarantees the fast convergence of the sequence \(\{x_{n}\}\) to its solution. For instance,

$$\begin{aligned} \lambda _{n} = \textstyle\begin{cases} L+\frac{(-1)^{n} L}{n+1} \quad \text{or} \\ L+\frac{(-1)^{n+1}L}{n+1}, \end{cases}\displaystyle \quad n\in \mathbb{N}. \end{aligned}$$

Furthermore, we can choose the parameter \(\theta _{n}\) that controls the momentum of \(x_{n} -x_{n-1}\) for the fast convergence of the sequence \(\{x_{n}\}\) to its solution as follows:

$$\begin{aligned} \theta _{n} = \textstyle\begin{cases} \sigma _{n} \in [0,1) \text{ such that } \sigma _{n} \rightarrow 0 \text{ as } n \rightarrow \infty & \text{if } n \leq N, \\ \textstyle\begin{cases} \min \{ \frac{\omega _{n}}{ \Vert x_{n}-x_{n-1} \Vert }, \theta \} & \text{if } x_{n} \neq x_{n-1}, \\ \theta & \text{otherwise,} \end{cases}\displaystyle &\text{otherwise}, \end{cases}\displaystyle \quad\forall n\in \mathbb{N}, \end{aligned}$$

where \(N \in \mathbb{N},\theta \in [0,1)\), and \(\{\omega _{n}\}\) is a positive sequence such that \(\omega _{n} = o(\alpha _{n})\). For instance, \(\sigma _{n} = \frac{1}{2^{n}}\) for all \(n \in \mathbb{N}\), which guarantees the fast convergence of the sequence \(\{x_{n}\}\) to its solution, except for complex problems (e.g., the image/signal recovery problems). In this case, the parameter \(\theta _{n}\) can be chosen as follows:

$$\begin{aligned} \theta _{n} = \textstyle\begin{cases} \sigma _{n} = \frac{t_{n}-1}{t_{n+1}} \text{ such that } t_{1} = 1 \text{ and } t_{n+1}=\frac{1+\sqrt{1+4t_{n}^{2}}}{2} & \text{if } n \leq N, \\ \textstyle\begin{cases} \min \{ \frac{\omega _{n}}{ \Vert x_{n}-x_{n-1} \Vert }, \theta \} & \text{if } x_{n} \neq x_{n-1}, \\ \theta & \text{otherwise}, \end{cases}\displaystyle &\text{otherwise}, \end{cases}\displaystyle \quad \forall n\in \mathbb{N}, \end{aligned}$$

where \(N \in \mathbb{N}\), \(\theta \in [0,1)\), \(\sigma _{n} \in [0,1)\) with \(\sigma _{n} \rightarrow 1\) as \(n \rightarrow \infty \), and \(\{\omega _{n}\}\) is a positive sequence such that \(\omega _{n} = o(\alpha _{n})\).
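
For completeness, the two choices of \(\theta _{n}\) above can be generated as in the following sketch, where N, θ, and the auxiliary sequence \(\{\omega _{n}\}\) (which should satisfy \(\omega _{n} = o(\alpha _{n})\)) are hypothetical placeholder values.

```python
import math

def momentum_simple(n, x_n, x_prev, N=10, theta=0.9, omega=lambda k: 1.0 / k ** 2):
    """theta_n with sigma_n = 1/2^n while n <= N, then min{omega_n/||x_n - x_{n-1}||, theta}.
    N, theta, and omega are hypothetical placeholders (omega_n should be o(alpha_n))."""
    if n <= N:
        return 1.0 / 2 ** n
    diff = math.dist(x_n, x_prev)
    return theta if diff == 0 else min(omega(n) / diff, theta)

def momentum_fista_like(n, x_n, x_prev, N=10, theta=0.9, omega=lambda k: 1.0 / k ** 2):
    """theta_n with the FISTA-type sigma_n = (t_n - 1)/t_{n+1}, where t_1 = 1 and
    t_{k+1} = (1 + sqrt(1 + 4 t_k^2))/2, while n <= N; the same bounded rule afterwards."""
    if n <= N:
        t = 1.0
        for _ in range(n - 1):                         # advance t_1, ..., t_n
            t = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        return (t - 1.0) / t_next
    diff = math.dist(x_n, x_prev)
    return theta if diff == 0 else min(omega(n) / diff, theta)
```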

5 Conclusion

We have obtained a regularization method for solving the variational inclusion problem for the sum of two monotone operators in real Hilbert spaces. Under some mild appropriate conditions on the parameters, this method yields a short proof of another strong convergence theorem for this problem.

Availability of data and materials

Not applicable.



Acknowledgements

The author would like to thank the Faculty of Science, Maejo University, for its financial support.

Funding

This research was supported by Faculty of Science, Maejo University.

Author information


Contributions

The author read and approved the final manuscript.

Corresponding author

Correspondence to Pattanapong Tianchai.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Tianchai, P. The zeros of monotone operators for the variational inclusion problem in Hilbert spaces. J Inequal Appl 2021, 126 (2021). https://doi.org/10.1186/s13660-021-02663-2

