
New shrinking iterative methods for infinite families of monotone operators in a Banach space, computational experiments and applications

Abstract

New shrinking iterative algorithms for approximating common zeros of two infinite families of maximal monotone operators in a real uniformly convex and uniformly smooth Banach space are designed. Multiple choices can be made at two steps of the new iterative algorithms, two groups of interactive containment sets \(C_{n}\) and \(Q_{n}\) are constructed, and computational errors are taken into account, which differs from previous work. Strong convergence theorems are proved under mild assumptions, and some new proof techniques can be found. Computational experiments for some special cases are conducted to show the effectiveness of the iterative algorithms, and meanwhile some inequalities are proved to guarantee the strong convergence. Moreover, applications of the abstract results to convex minimization problems and variational inequalities are exemplified.

1 Introduction

Throughout this paper, suppose E is a real Banach space with \(E^{*}\) being its dual space. Let C be a non-empty closed and convex subset of E. The symbols “\(\langle x,f\rangle \)”, “→” and “⇀” denote the values of \(f\in E^{*}\) at \(x \in E\), the strong convergence and the weak convergence either in E or \(E^{*}\), respectively.

For a nonlinear mapping \(S:D(S) \subset E \rightarrow 2^{E}\), we use \(F(S)\) to denote the set of fixed points of S, that is, \(F(S) = \{x\in D(S): x \in Sx\}\). For a nonlinear mapping \(S:D(S) \subset E \rightarrow 2^{E^{*}}\), we use \(S^{-1}0\) to denote the set of zeros of S, that is, \(S^{-1}0 = \{x \in D(S) : 0 \in Sx\}\).

The normalized duality mapping \(J_{E}: E\rightarrow 2^{E^{*}}\) is defined as follows [1]:

$$ J_{E}(x) = \bigl\{ x^{*} \in E^{*}: \bigl\langle x, x^{*}\bigr\rangle = \Vert x \Vert ^{2} = \bigl\Vert x^{*} \bigr\Vert ^{2}\bigr\} , \quad \forall x \in E. $$

An operator \(A: E \rightarrow 2^{E^{*}}\) is said to be monotone [1] if \(\langle x_{1} - x_{2},y_{1} - y_{2}\rangle \geq 0\), \(\forall y_{i} \in Ax_{i}\), \(i = 1,2\). The monotone operator A is called maximal monotone if \(R(J_{E}+\lambda A) = E^{*}\), \(\forall \lambda > 0\).
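A standard example, which also underlies the applications to convex minimization problems discussed later (recorded here only for orientation): if \(f: E \rightarrow (-\infty ,+\infty ]\) is a proper, convex and lower semicontinuous function, then its subdifferential

$$ \partial f(x) = \bigl\{ x^{*} \in E^{*}: f(y) \geq f(x)+\bigl\langle y-x, x^{*}\bigr\rangle , \forall y \in E\bigr\} , \quad x \in E, $$

is a maximal monotone operator (a classical result of Rockafellar), and \(0 \in \partial f(x)\) if and only if x minimizes f, so the zeros of \(\partial f\) are exactly the minimizers of f.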

The Lyapunov functional \(\varphi :E \times E \rightarrow R^{+}\) is defined as follows [2]:

$$ \varphi (x,y) = \Vert x \Vert ^{2}-2 \bigl\langle x,j_{E}(y)\bigr\rangle + \Vert y \Vert ^{2}, \quad \forall x , y \in E, j_{E}(y) \in J_{E}(y). $$
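For later use, note two elementary consequences of this definition (a standard observation, recorded here for convenience): since \(\vert \langle x,j_{E}(y)\rangle \vert \leq \Vert x \Vert \Vert y \Vert \),

$$ \bigl( \Vert x \Vert - \Vert y \Vert \bigr)^{2} \leq \varphi (x,y) \leq \bigl( \Vert x \Vert + \Vert y \Vert \bigr)^{2}, \quad \forall x,y \in E, $$

and, when E is a Hilbert space (identifying \(E^{*}\) with E), \(J_{E}\) is the identity and \(\varphi (x,y) = \Vert x-y \Vert ^{2}\).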

If E is a real reflexive and strictly convex Banach space, then for each \(x \in E\) there exists a unique element \(x_{0}\in C\) such that \(\Vert x - x_{0} \Vert = \inf \{ \Vert x - y \Vert : y \in C\}\). Such an element \(x_{0}\) is denoted by \(P_{C}x\) and \(P_{C}\) is called the metric projection of E onto C (see [2]).

If E is a real reflexive, smooth and strictly convex Banach space, then, for \(\forall x \in E\), there exists a unique element \(x_{0} \in C\) satisfying \(\varphi (x_{0}, x) = \inf \{\varphi (z,x) : z \in C\}\). In this case, \(\forall x \in E\), define \(\varPi _{C} : E\rightarrow C\) by \(\varPi _{C} x = x_{0}\), and then \(\varPi _{C}\) is called the generalized projection from E onto C (see [2]).

A mapping \(B: C \rightarrow C\) is called generalized non-expansive [3] if \(F(B) \neq \emptyset \) and \(\varphi (Bx,y) \leq \varphi (x,y)\), \(\forall x \in C\) and \(y \in F(B)\). A point \(p \in C\) is said to be a strong asymptotic fixed point of B [4] if there exists a sequence \(\{x_{n}\}\subset C\) with \(x_{n} - Bx_{n} \rightarrow 0\) such that \(x_{n} \rightarrow p\), as \(n \rightarrow \infty \). We use \(\widetilde{F}(B)\) to denote the set of strong asymptotic fixed points of B. A mapping B is called weakly relatively non-expansive [4] if \(\widetilde{F}(B) = F(B)\neq \emptyset \) and \(\varphi (p, Bx)\leq \varphi (p,x)\) for \(x \in C\) and \(p \in F(B)\).

A mapping \(S: E \rightarrow C\) is said to be sunny [3] if \(S(S(x)+t(x-S(x))) = S(x)\), \(\forall x \in E\) and \(t \geq 0\). A mapping \(S: E \rightarrow C\) is said to be a retraction [3] if \(S(z) = z\) for \(\forall z \in C\). If E is a real smooth and strictly convex Banach space, then there exists a unique sunny generalized non-expansive retraction of E onto C, which is denoted by \(R_{C}\).

Maximal monotone operators are an important class of nonlinear mappings that have attracted much attention from mathematicians owing to their rich practical background [5–8]. Many problems, such as nonlinear equations, minimization problems, variational inequalities and split problems, can be reduced to the problem of finding zeros of maximal monotone operators. Designing iterative algorithms to approximate zeros of maximal monotone operators is a hot topic; see [9–13] and the references therein.

It is natural to extend the study of iterative algorithms for approximating zeros of a single maximal monotone operator to approximating common zeros of finite or infinite families of maximal monotone operators, so as to describe complicated systems arising in practical problems. Some related work can be found in [14–18] and the references therein.

Recall that in 2014 Wei et al. [15] introduced two composite operators \(U_{n} := J^{-1}_{E}[a_{0}J_{E}+\sum_{i = 1}^{m} a_{i}J_{E}(J_{E}+r_{n,i}A_{i})^{-1}J_{E}]\) and \(W_{n} := J^{-1}_{E}\{ b_{0}J_{E}+\sum_{j = 1}^{l} b_{j}J_{E}[(J_{E}+s_{n,j}B_{j})^{-1}J_{E}(J_{E}+s_{n,j-1}B_{j-1})^{-1} J_{E} \cdots (J_{E}+s_{n,1}B_{1})^{-1}J_{E}]\}\), where \(A_{i} : E \rightarrow E^{*}\) and \(B_{j}: E\rightarrow E^{*}\) are maximal monotone operators, for \(i \in \{1,2,\ldots, m\}\) and \(j \in \{1,2,\ldots,l\}\), and presented the following iterative algorithm for approximating the common zeros of \(\{A_{i}\}_{i = 1}^{m}\) and \(\{B_{j}\}_{j=1}^{l}\):

$$\begin{aligned}& \textstyle\begin{cases} x_{1} \in E, \\ u_{n} = J^{-1}_{E}[(1-\alpha _{n})J_{E}x_{n}], \\ v_{n}= J^{-1}_{E}[(1- \beta _{n})J_{E}x_{n} + \beta _{n} J_{E}U_{n} u_{n}], \\ x_{n+1}= J^{-1}_{E}[\gamma _{n} J_{E}x_{n}+(1-\gamma _{n})J_{E}W_{n} v_{n}], \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(1.1)

Under the strong assumptions that the normalized duality mappings \(J_{E}\) and \(J_{E}^{-1}\) are weakly sequentially continuous, the result that \(x_{n} \rightharpoonup v_{0} = \lim_{n\rightarrow \infty }\varPi _{( \bigcap _{i = 1}^{m} A_{i}^{-1}0)\cap (\bigcap _{j = 1}^{l} B_{j}^{-1}0)}(x_{n})\) is proved, as \(n \rightarrow \infty \). Though only weak convergence is obtained, the idea of constructing composite operators is quite interesting.

In 2015, Wei et al. [16] deleted the strong assumptions imposed on both \(J_{E}\) and \(J_{E}^{-1}\) and obtained the result of strong convergence instead of weak convergence by constructing a sequence of shrinking projection sets. The iterative algorithm is presented in a real smooth and uniformly convex Banach space E as follows:

$$\begin{aligned} \textstyle\begin{cases} x_{1} \in E,\quad\quad u \in E, \\ u_{n} = J^{-1}_{E}\{\alpha _{n}J_{E}x_{n} \\ \hphantom{u_{n} =}{}+(1-\alpha _{n})J_{E}[(J_{E}+r_{n,m}A_{m})^{-1}J_{E} (J_{E}+r_{n,m-1}A_{m-1})^{-1}J_{E} \cdots (J_{E}+r_{n,1}A_{1})^{-1}J_{E}x_{n}]\}, \\ v_{n} = J^{-1}_{E}[\beta _{n}J_{E}u + (1-\beta _{n}) \sum_{j= 1}^{l} a_{j}J_{E}(J_{E}+s_{n,j}B_{j})^{-1}J_{E}u_{n}], \\ C_{1} = E, \\ C_{n+1}= \{p \in C_{n}: \varphi (p,u_{n}) \leq \varphi (p,x_{n}), \varphi (p,v_{n}) \leq \beta _{n} \varphi (p,u)+(1-\beta _{n}) \varphi (p,u_{n})\}, \\ x_{n+1} = \varPi _{C_{n+1}}(x_{1}), \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(1.2)

Under mild conditions, the result that \(x_{n} \rightarrow \varPi _{(\bigcap _{i = 1}^{m} A_{i}^{-1}0)\cap ( \bigcap _{j = 1}^{l} B_{j}^{-1}0)}(u)\), as \(n \rightarrow \infty \), is proved, where \(A_{i}: E\rightarrow E^{*}\) and \(B_{j}: E\rightarrow E^{*}\) are maximal monotone mappings for \(i \in \{1,2,\ldots,m\}\) and \(j \in \{1,2,\ldots,l\}\). Moreover, the iterative algorithm is applied to a kind of p-Laplacian-like equation.

In 2015, Wei et al. [17] studied the maximal monotone operators \(A_{i}: E^{*}\rightarrow E\) and \(B_{j}: E^{*}\rightarrow E\), for \(i \in \{1,2,\ldots,m\}\) and \(j \in \{1,2,\ldots,l\}\). Since the domain of these operators is \(E^{*}\) rather than E, the sunny generalized non-expansive retraction \(R_{C_{n+1}}\) is employed in the iterative construction instead of the generalized projection \(\varPi _{C_{n+1}}\). The iterative algorithm is presented as follows:

$$\begin{aligned} \textstyle\begin{cases} x_{1} \in E, \quad\quad u \in E, \\ y_{n} = \alpha _{n}x_{n}+(1-\alpha _{n}) (I+r_{n,m}A_{m}J_{E})^{-1}(I+r_{n,m-1}A_{m-1}J_{E})^{-1} \cdots (I+r_{n,1}A_{1}J_{E})^{-1}x_{n}, \\ z_{n} = \beta _{n}u + (1-\beta _{n}) \sum_{j= 1}^{l} a_{j}(I+s_{n,j}B_{j}J_{E})^{-1}y_{n}, \\ C_{1} = E, \\ C_{n+1}= \{p \in C_{n}: \varphi (y_{n},p) \leq \varphi (x_{n},p), \varphi (z_{n},p) \leq \beta _{n} \varphi (u,v)+(1-\beta _{n}) \varphi (y_{n},p)\}, \\ x_{n+1} = R_{C_{n+1}}(x_{1}), \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(1.3)

Under the assumption that \(J_{E}\) is weakly sequentially continuous, the result that \(x_{n} \rightarrow R_{(\bigcap _{i = 1}^{m} (A_{i}J_{E})^{-1}0) \cap (\bigcap _{j = 1}^{l} (B_{j}J_{E})^{-1}0)}(x_{1})\) is proved, as \(n \rightarrow \infty \). The iterative algorithm is also applied to a kind of curvature system.

In 2018, Wei et al. [18] extended the topic to the case of an infinite family of maximal monotone operators \(A_{i}: E\rightarrow E^{*}\) and an infinite family of weakly relatively non-expansive mappings \(B_{i}: E\rightarrow E\), for \(i \in N\). In each iterative step n, two groups of subsets of E are constructed and multiple choices of the iterative element can be made, avoiding the calculation of the generalized projection; this scheme differs from, yet contains, the traditional projection iterative algorithm. The iterative algorithm can be seen as follows:

$$\begin{aligned}& \textstyle\begin{cases} x_{1} \in E, \quad\quad e_{1} \in E, \\ v_{n,i} = (J_{E}+s_{n,i}A_{i})^{-1}J_{E}(x_{n}+e_{n}), \\ w_{n,i} = J_{E}^{-1}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E}B_{i}v_{n,i}], \\ C_{1} = E = Q_{1}, \\ C_{n+1,i}= \{z \in E: \langle v_{n,i} - z, J_{E}(x_{n}+e_{n})-J_{E}v_{n,i} \rangle \geq 0\}, \\ C_{n+1} = (\bigcap_{i = 1}^{\infty }C_{n+1,i})\cap C_{n}, \\ Q_{n+1,i} = \{z \in C_{n+1,i}: \varphi (z,w_{n,i})\leq \alpha _{n} \varphi (z,u_{n})+(1-\alpha _{n})\varphi (z,v_{n,i})\}, \\ Q_{n+1} = (\bigcap_{i = 1}^{\infty }Q_{n+1,i})\cap Q_{n}, \\ U_{n+1} = \{z\in Q_{n+1}: \Vert x_{1}-z \Vert ^{2} \leq \Vert P_{Q_{n+1}}(x_{1})-x_{1} \Vert ^{2}+\tau _{n+1}\}, \\ x_{n+1} \in U_{n+1}, \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(1.4)

where \(\{e_{n}\} \subset E\) is the error sequence and \(P_{Q_{n+1}}\) is the metric projection from E onto \(Q_{n+1}\). The result that \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }Q_{n}}(x_{1}) \in ( \bigcap_{i = 1}^{\infty } A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty } F(B_{i}))\) is proved, as \(n \rightarrow \infty \).

Later, in [14], the iterative algorithm (1.4) was simplified in the sense that the evaluation of the sets \(C_{n+1,i}\) and \(Q_{n+1,i}\) for \(i \in N\) is replaced by that of \(C_{n+1}\) and \(Q_{n+1}\) directly. The iterative algorithm is stated as follows:

$$\begin{aligned}& \textstyle\begin{cases} x_{1} \in E, \quad\quad e_{1} \in E, \\ y_{n} = J_{E}^{-1}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})\sum_{i = 1}^{ \infty }a_{n,i}J_{E}(J_{E}+r_{n,i}A_{i})^{-1}J_{E}(x_{n}+e_{n})], \\ z_{n} = J_{E}^{-1}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}B_{i}y_{n}], \\ C_{1} = E = Q_{1}, \\ C_{n+1} = \{v \in C_{n}: \varphi (v,y_{n})\leq \alpha _{n} \varphi (v,x_{n})+(1- \alpha _{n})\varphi (v,x_{n}+e_{n}), \\ \varphi (v,z_{n})\leq \beta _{n} \varphi (v,x_{n})+(1-\beta _{n}) \varphi (v,y_{n})\}, \\ Q_{n+1} = \{v\in C_{n+1}: \Vert x_{1}-v \Vert ^{2} \leq \Vert P_{C_{n+1}}(x_{1})-x_{1} \Vert ^{2}+\lambda _{n+1}\}, \\ x_{n+1} \in Q_{n+1}, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(1.5)

The result that \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }Q_{n}}(x_{1}) \in ( \bigcap_{i = 1}^{\infty } A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty } F(B_{i}))\) is proved, as \(n \rightarrow \infty \). Computational experiments are conducted for some special cases.

In this paper, our purpose is to extend the topic from two finite families of maximal monotone operators (e.g. [17]) to the infinite case. Two steps of multiple choices can be made in the new iterative algorithms and two groups of interactive containment sets \(C_{n}\) and \(Q_{n}\) are constructed, which are different from the previous ones (e.g. [18]). Some new proof techniques can be found, especially the wide use of inequalities. Computational experiments are conducted and applications to convex minimization problems and variational inequalities are exemplified.

2 Preliminaries

A Banach space E is said to be uniformly convex [19] if, for any two sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) in E with \(\Vert x_{n} \Vert = \Vert y_{n} \Vert = 1\) and \(\lim_{n\rightarrow \infty } \Vert x_{n}+y_{n} \Vert =2\), one has \(\lim_{n\rightarrow \infty } \Vert x_{n} - y_{n} \Vert = 0\).

The modulus of smoothness of E is the function \(\lambda _{E}: [0,+\infty ) \rightarrow [0,+\infty )\) defined as follows [19]:

$$ \lambda _{E}(t) = \sup \biggl\{ \frac{1}{2}\bigl( \Vert x+y \Vert + \Vert x-y \Vert \bigr)-1 : x,y \in E, \Vert x \Vert =1, \Vert y \Vert \leq t\biggr\} . $$

A Banach space E is said to be uniformly smooth [19] if \(\frac{\lambda _{E}(t)}{t} \rightarrow 0\), as \(t \rightarrow 0\).
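For orientation, a classical special case (stated here only as an illustration, not used in the sequel): in a Hilbert space H, the parallelogram law gives

$$ \lambda _{H}(t) = \sqrt{1+t^{2}}-1 \leq \frac{t^{2}}{2}, $$

so \(\frac{\lambda _{H}(t)}{t} \rightarrow 0\) as \(t \rightarrow 0\) and every Hilbert space is uniformly smooth; uniform convexity also follows from the parallelogram law.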

A uniformly convex and uniformly smooth Banach space E has Property (H) in the following sense: if a sequence \(\{x_{n}\} \subset E\) satisfies \(x_{n} \rightharpoonup x \in E\) and \(\Vert x_{n} \Vert \rightarrow \Vert x \Vert \), then \(x_{n} \rightarrow x\), as \(n \rightarrow \infty \).

Lemma 2.1

([19, 20])

If E is a real uniformly convex and uniformly smooth Banach space, then (1) \(J_{E}\) is single-valued, surjective and, for \(x \in E\) and \(k \in (0,+\infty )\), \(J_{E}(kx) = kJ_{E}(x)\); (2) \(J^{-1}_{E} = J_{E^{*}}\) is the normalized duality mapping from \(E^{*}\) to E; (3) both \(J_{E}\) and \(J^{-1}_{E}\) are uniformly continuous on each bounded subset of E or \(E^{*}\), respectively.

Lemma 2.2

([1])

Let \(A: E \rightarrow 2^{E^{*}}\) be a maximal monotone operator, then

  1. (1)

\(A^{-1}0\) is a closed and convex subset of E;

  2. (2)

if \(x_{n} \rightarrow x\) and \(y_{n} \in Ax_{n}\) with \(y_{n} \rightharpoonup y\), or \(x_{n} \rightharpoonup x\) and \(y_{n} \in Ax_{n}\) with \(y_{n} \rightarrow y\), then \(x \in D(A)\) and \(y \in Ax\).

Definition 2.3

([21])

Let \(\{C_{n}\}\) be a sequence of non-empty closed and convex subsets of E, then

  1. (1)

s-\(\liminf C_{n}\), which is called the strong lower limit of \(\{C_{n}\}\), is defined as the set of all \(x \in E\) for which there exists \(x_{n} \in C_{n}\), for almost all n, such that \(x_{n}\) tends to x in the norm as \(n \rightarrow \infty \).

  2. (2)

w-\(\limsup C_{n}\), which is called the weak upper limit of \(\{C_{n}\}\), is defined as the set of all \(x \in E\) for which there exist a subsequence \(\{C_{n_{m}}\}\) of \(\{C_{n}\}\) and \(x_{n_{m}} \in C_{n_{m}}\) for every \(n_{m}\) such that \(x_{n_{m}}\) tends to x in the weak topology as \(n_{m} \rightarrow \infty \).

  3. (3)

    If s-\(\liminf C_{n} =\) w-\(\limsup C_{n}\), then the common value is denoted by \(\lim C_{n}\).
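To fix ideas, a simple illustrative example (not taken from the paper): let \(E = R\) and \(C_{n} = [0, 1+\frac{1}{n}]\). Every \(x \in [0,1]\) is the norm limit of the constant sequence \(x_{n} = x \in C_{n}\), and every weak (hence strong, in R) subsequential limit of points \(x_{n_{m}} \in C_{n_{m}}\) lies in \([0,1]\); therefore s-\(\liminf C_{n} =\) w-\(\limsup C_{n} = [0,1]\), that is, \(\lim C_{n} = [0,1] = \bigcap_{n = 1}^{\infty } C_{n}\), which is consistent with Lemma 2.4 below.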

Lemma 2.4

([21])

Let \(\{C_{n}\}\) be a decreasing sequence of closed and convex subsets of E, i.e. \(C_{n} \subset C_{m}\) if \(n \geq m\). Then \(\{C_{n}\}\) converges in E and \(\lim C_{n} = \bigcap_{n = 1}^{\infty } C_{n}\).

Lemma 2.5

([22])

Suppose E is a real uniformly smooth and uniformly convex Banach space. If \(\lim C_{n}\) exists and is not empty, then \(\{P_{C_{n}}x\}\) converges strongly to \(P_{\lim C_{n}}x\) for every \(x \in E\).

Lemma 2.6

([23])

Let E be a real uniformly smooth and uniformly convex Banach space, and let \(\{x_{n}\}\) and \(\{y_{n}\}\) be two sequences in E. If either \(\{x_{n}\}\) or \(\{y_{n}\}\) is bounded and \(\varphi (x_{n}, y_{n})\rightarrow 0\) as \(n \rightarrow \infty \), then \(x_{n} - y_{n}\rightarrow 0\) as \(n \rightarrow \infty \).

Lemma 2.7

([23])

Suppose E is a real uniformly convex and uniformly smooth Banach space and \(A : E \rightarrow 2^{E^{*}}\) is a maximal monotone operator such that \(A^{-1}0 \neq \emptyset \). Then \(\forall x \in E\), \(y \in A^{-1}0\) and \(r>0\), one has \(\varphi (y,(J_{E}+rA)^{-1}J_{E}x)+\varphi ((J_{E}+rA)^{-1}J_{E}x, x) \leq \varphi (y,x) \).
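As a quick sanity check of Lemma 2.7 (a standard special case, included only for intuition): in a Hilbert space, \(J_{E}\) is the identity and \(\varphi (x,y) = \Vert x-y \Vert ^{2}\); writing \(u = (I+rA)^{-1}x\), one has \(x - u \in rAu\), so the monotonicity of A together with \(0 \in Ay\) gives \(\langle u-y, x-u\rangle \geq 0\), and therefore

$$ \varphi (y,u)+\varphi (u,x) = \Vert y-u \Vert ^{2}+ \Vert u-x \Vert ^{2} \leq \Vert y-u \Vert ^{2}+2\langle y-u, u-x\rangle + \Vert u-x \Vert ^{2} = \Vert y-x \Vert ^{2} = \varphi (y,x). $$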

Lemma 2.8

([23])

Let E be a real strictly convex and smooth Banach space and let C be a non-empty closed and convex subset of E. Then \(\forall x \in E\), \(\forall y \in C\), one has \(\varphi (y, \varPi _{C} x)+\varphi (\varPi _{C} x, x)\leq \varphi (y, x)\).

Lemma 2.9

([24])

Let E be a real uniformly convex Banach space and \(r \in (0,+\infty )\). Then there exists a continuous, strictly increasing and convex function \(g: [0, 2r] \rightarrow [0, +\infty )\) with \(g(0) = 0\) such that

$$ \bigl\Vert \alpha x+(1-\alpha )y \bigr\Vert ^{2} \leq \alpha \Vert x \Vert ^{2}+(1-\alpha ) \Vert y \Vert ^{2}- \alpha (1-\alpha )g\bigl( \Vert x-y \Vert \bigr), $$

for \(\alpha \in [0,1]\), \(x,y \in E\) with \(\Vert x \Vert \leq r\) and \(\Vert y \Vert \leq r\).

3 Iterative algorithms and computational experiments

3.1 Iterative algorithms

Theorem 3.1

Suppose E is a real uniformly convex and uniformly smooth Banach space and \(J_{E}: E \rightarrow E^{*}\) is the normalized duality mapping. Let \(A_{i}, B_{i}: E \rightarrow 2^{E^{*}}\) be maximal monotone operators, for each \(i\in N\). Denote \(\overline{U_{n}} = J^{-1}_{E}[a_{0}J_{E}+\sum_{i=1}^{\infty }a_{i}J_{E}Q_{r_{n,i}}^{A_{i}}]\) and \(\overline{W_{n}} = J^{-1}_{E}[b_{0}J_{E}+\sum_{j=1}^{\infty }b_{j}J_{E}Q_{s_{n,j}}^{B_{j}}Q_{s_{n,j-1}}^{B_{j-1}} \cdots Q_{s_{n,1}}^{B_{1}}]\), where \(Q^{A_{i}}_{r_{n,i}} = (J_{E}+r_{n,i}A_{i})^{-1}J_{E}\) and \(Q^{B_{j}}_{s_{n,j}} = (J_{E}+s_{n,j}B_{j})^{-1}J_{E}\), for \(i, j, n \in N\). Let \(\{e_{n}\}\) and \(\{\varepsilon _{n}\}\) be two error sequences in E, \(\{r_{n,i}\}\), \(\{s_{n,j}\}\), \(\{\delta _{n}\}\) and \(\{\vartheta _{n}\}\) be real number sequences in \((0,+\infty )\), for \(i,j,n\in N\). Suppose \(\{a_{i}\}_{i= 0}^{\infty }\) and \(\{b_{i}\}_{i= 0}^{\infty }\) are real number sequences in \((0,1)\) such that \(\sum_{i = 0}^{\infty } a_{i} = \sum_{i = 0}^{\infty } b_{i} = 1\), \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) are real number sequences in \([0,1)\), for \(n\in N\). Let \(\{x_{n}\}\) be generated by the following iterative algorithm:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E\quad \text{\textit{chosen arbitrarily}}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E} \overline{U_{n}}(x_{n}+e_{n})], \\ C_{1} = E = X_{1}, \quad\quad Q_{1} = E = Y_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ X_{n+1} = \{v \in C_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{C_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \delta _{n}\}, \\ w_{n} \in X_{n+1}, \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}\overline{W_{n}}(w_{n}+ \varepsilon _{n})], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ Y_{n+1} = \{v \in Q_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{Q_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \vartheta _{n}\}, \\ x_{n+1}\in Y_{n+1}, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.1)

Under the assumptions that \((i)\) \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0) \neq \emptyset \); \((\mathit{ii})\) \(\inf_{n}r_{n,i} > 0\), \(\inf_{n}s_{n,i} > 0\) for \(i \in N\); \((\mathit{iii})\) \(0 \leq \sup_{n}\alpha _{n} < 1\), \(0 \leq \sup_{n}\beta _{n} <1\); \((\mathit{iv})\) \(\delta _{n} \rightarrow 0\), \(\vartheta _{n}\rightarrow 0\); \((v)\) \(e_{n} \rightarrow 0\) and \(\varepsilon _{n} \rightarrow 0\), as \(n \rightarrow \infty \), one has

$$ \textstyle\begin{cases} x_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ w_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ y_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ z_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty . \end{cases} $$

Proof

The proof is split into ten steps.

Step 1. \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0)\subset C_{n}\cap Q_{n}\), for \(n \in N\).

For this purpose, we shall use the inductive method.

If \(n=1\), it is obvious that \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0)\subset C_{1}\cap Q_{1} = E\). Suppose the result is true for \(n = k\), that is, \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0)\subset C_{k} \cap Q_{k}\). Then \(\forall p \in (\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap_{i = 1}^{\infty }B_{i}^{-1}0)\), it follows from the definition of the Lyapunov functional, the convexity of \(\Vert \cdot \Vert ^{2}\) and Lemma 2.7 that

$$\begin{aligned} \varphi (p, y_{k}) &= \Vert p \Vert ^{2} - 2\Biggl\langle p, \alpha _{k}J_{E}x_{k}+(1- \alpha _{k})\Biggl[a_{0}J_{E}(x_{k}+e_{k})+ \sum_{i = 1}^{\infty }a_{i}J_{E}Q_{r_{{k},i}}^{A_{i}}(x_{k}+e_{k}) \Biggr] \Biggr\rangle \\ &\quad{} + \Biggl\Vert \alpha _{k}J_{E}x_{k}+(1- \alpha _{k})\Biggl[a_{0}J_{E}(x_{k}+e_{k})+ \sum_{i = 1}^{\infty }a_{i}J_{E}Q_{r_{{k},i}}^{A_{i}}(x_{k}+e_{k}) \Biggr] \Biggr\Vert ^{2} \\ &\leq \Vert p \Vert ^{2} - 2\alpha _{k}\langle p, J_{E}x_{k}\rangle + \alpha _{k} \Vert x_{k} \Vert ^{2}-2(1-\alpha _{k})a_{0} \bigl\langle p,J_{E}(x_{k}+e_{k}) \bigr\rangle \\ &\quad{} - 2(1-\alpha _{k})\sum_{i = 1}^{\infty }a_{i} \bigl\langle p, J_{E}Q_{r_{{k},i}}^{A_{i}}(x_{k}+e_{k}) \bigr\rangle \\ &\quad{}+ (1-\alpha _{k})a_{0} \Vert x_{k}+e_{k} \Vert ^{2} + (1-\alpha _{k})\sum_{i = 1}^{\infty }a_{i} \bigl\Vert Q_{r_{{k},i}}^{A_{i}}(x_{k}+e_{k}) \bigr\Vert ^{2} \\ &= \alpha _{k} \varphi (p, x_{k})+(1- \alpha _{k})a_{0}\varphi (p,x_{k}+e_{k})+(1- \alpha _{k})\sum_{i = 1}^{\infty }a_{i} \varphi \bigl(p,Q_{r_{{k},i}}^{A_{i}}(x_{k}+e_{k}) \bigr) \\ &\leq \alpha _{k}\varphi (p, x_{k})+ (1-\alpha _{k})\varphi (p, x_{k}+e_{k}). \end{aligned}$$

Thus \(p \in C_{k+1}\). By induction, \(p \in C_{n}\) for \(n \in N\).

And, using Lemma 2.7 repeatedly, one has

$$\begin{aligned} \varphi (p, z_{k})&\leq \Vert p \Vert ^{2} - 2\beta _{k}\langle p, J_{E}x_{k} \rangle + \beta _{k} \Vert x_{k} \Vert ^{2}-2(1-\beta _{k})b_{0}\bigl\langle p,J_{E}(w_{k}+ \varepsilon _{k})\bigr\rangle \\ &\quad{}- 2(1-\beta _{k})\sum_{j = 1}^{\infty }b_{j} \bigl\langle p, J_{E}Q_{s_{{k},j}}^{B_{j}} \cdots Q_{s_{{k},1}}^{B_{1}}(w_{k}+\varepsilon _{k}) \bigr\rangle + (1- \beta _{k})b_{0} \Vert w_{k}+\varepsilon _{k} \Vert ^{2} \\ &\quad{}+ (1-\beta _{k})\sum_{j = 1}^{\infty }b_{j} \bigl\Vert Q_{s_{{k},j}}^{B_{j}} \cdots Q_{s_{{k},1}}^{B_{1}}(w_{k}+ \varepsilon _{k}) \bigr\Vert ^{2} \\ &= \beta _{k} \varphi (p, x_{k})+(1- \beta _{k})b_{0}\varphi (p,w_{k}+ \varepsilon _{k}) \\ &\quad{} +(1-\beta _{k})\sum_{j = 1}^{\infty }b_{j} \varphi \bigl(p,Q_{s_{{k},j}}^{B_{j}} \cdots Q_{s_{{k},1}}^{B_{1}}(w_{k}+ \varepsilon _{k})\bigr) \\ &\leq \beta _{k}\varphi (p, x_{k})+(1-\beta _{k})\varphi (p, w_{k}+ \varepsilon _{k}). \end{aligned}$$

Thus \(p \in Q_{k+1}\). By induction \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0)\subset Q_{n}\), for \(n \in N\), which implies that \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0)\subset C_{n} \cap Q_{n}\), for \(n \in N\).

Step 2. \(C_{n}\) and \(Q_{n}\) are non-empty closed and convex subsets of E, for each \(n \in N\).

It follows from Step 1 that both \(C_{n}\) and \(Q_{n}\) are non-empty subsets of E for \(n \in N\).

It is obvious that both \(C_{1}\) and \(Q_{1}\) are closed and convex subsets of E. Suppose that both \(C_{k}\) and \(Q_{k}\) are closed and convex subsets of E. Then, noticing the fact that

$$\begin{aligned} &\varphi (v,y_{k}) \leq \alpha _{k}\varphi (v,x_{k}) + (1-\alpha _{k}) \varphi (v,x_{k}+e_{k}) \\ &\quad \Leftrightarrow \quad \bigl\langle v, \alpha _{k}J_{E}x_{k} + (1-\alpha _{k}) J_{E}(x_{k}+e_{k})- J_{E}y_{k} \bigr\rangle \leq \frac{(1-\alpha _{k}) \Vert x_{k}+e_{k} \Vert ^{2} + \alpha _{k} \Vert x_{k} \Vert ^{2} - \Vert y_{k} \Vert ^{2}}{2}, \end{aligned}$$

one sees that \(C_{k+1}\) is closed and convex. Therefore, by induction, \(C_{n}\) is closed and convex for each \(n \in N\).

Notice that

$$\begin{aligned} &\varphi (v,z_{k}) \leq \beta _{k}\varphi (v,x_{k}) + (1-\beta _{k}) \varphi (v,w_{k}+ \varepsilon _{k}) \\ &\begin{aligned} \quad \Leftrightarrow \quad &\bigl\langle v, \beta _{k}J_{E}x_{k} + (1-\beta _{k})J_{E}(w_{k}+ \varepsilon _{k})- J_{E}z_{k} \bigr\rangle \\&\quad \leq \frac{(1-\beta _{k}) \Vert w_{k}+\varepsilon _{k} \Vert ^{2} + \beta _{k} \Vert x_{k} \Vert ^{2} - \Vert z_{k} \Vert ^{2}}{2}. \end{aligned} \end{aligned}$$

Combining with the fact that \(C_{n}\) is closed and convex for \(n\in N\), one sees that \(Q_{k+1}\) is closed and convex. By induction, \(Q_{n}\) is closed and convex, for each \(n \in N\).

Step 3. \(P_{C_{n}}(x_{1}) \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), \(P_{Q_{n}}(x_{1}) \rightarrow P_{\bigcap _{n=1}^{\infty }Q_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0) \neq \emptyset \), from Steps 1 and 2 and (3.1), we know that \(\{C_{n}\}\) is a decreasing sequence of non-empty closed and convex subsets of E. Using Lemmas 2.4 and 2.5, we know that \(P_{C_{n}}(x_{1}) \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Similarly, we have \(P_{Q_{n}}(x_{1}) \rightarrow P_{\bigcap _{n=1}^{\infty }Q_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Step 4. \(P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{\bigcap _{n=1}^{ \infty }Q_{n}}(x_{1})\).

It suffices to show that \(\bigcap_{n=1}^{\infty }C_{n} = \bigcap_{n=1}^{\infty }Q_{n}\).

In fact, from (3.1), \(Q_{n} \subset C_{n}\), and then \(\bigcap_{n=1}^{\infty }Q_{n} \subset \bigcap_{n=1}^{\infty }C_{n}\). On the other hand, since \(C_{1} = E\) and \(C_{n+1} \subset Q_{n}\), we have \(\bigcap_{n=1}^{\infty }C_{n} = \bigcap_{n=1}^{\infty }C_{n+1} \subset \bigcap_{n=1}^{\infty }Q_{n}\). Hence \(\bigcap_{n=1}^{\infty }C_{n} = \bigcap_{n=1}^{\infty }Q_{n}\), which ensures that \(P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{\bigcap _{n=1}^{ \infty }Q_{n}}(x_{1})\).

Step 5. \(\{w_{n}\}\) and \(\{x_{n}\}\) are well-defined.

In fact, we only need to show that \(X_{n} \neq \emptyset \) and \(Y_{n} \neq \emptyset \), for each \(n \in N\).

Since \(\Vert P_{C_{n+1}}(x_{1})- x_{1} \Vert = \inf_{q \in C_{n+1}} \Vert q - x_{1} \Vert \), for \(\delta _{n}\) there exists \(k_{n} \in C_{n+1}\) such that

$$ \Vert x_{1} - k_{n} \Vert ^{2} \leq \Bigl(\inf_{q \in C_{n+1}} \Vert q - x_{1} \Vert \Bigr)^{2}+ \delta _{n}= \bigl\Vert P_{C_{n+1}}(x_{1})- x_{1} \bigr\Vert ^{2}+\delta _{n}. $$

Then \(X_{n} \neq \emptyset \), which implies that \(\{w_{n}\}\) is well-defined.

Similarly, \(Y_{n} \neq \emptyset \), which implies that \(\{x_{n}\}\) is well-defined.

Step 6. Both \(\{w_{n}\}\) and \(\{x_{n}\}\) are bounded.

Since \(w_{n} \in X_{n+1}\),

$$ \Vert x_{1} - w_{n} \Vert ^{2} \leq \bigl\Vert P_{C_{n+1}}(x_{1})- x_{1} \bigr\Vert ^{2}+\delta _{n}. $$

Since \(\{P_{C_{n}}(x_{1})\}\) is convergent from Step 3 and \(\delta _{n} \rightarrow 0\), \(\{w_{n}\}\) is bounded.

Similarly, \(\{x_{n}\}\) is bounded.

Step 7. \(w_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{ \bigcap _{n=1}^{\infty }Q_{n}}(x_{1})\) and \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }Q_{n}}(x_{1}) = P_{ \bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(w_{n} \in X_{n+1} \subset C_{n+1}\) and \(C_{n+1}\) is convex, for any \(t\in (0,1)\), \(tP_{C_{n+1}}(x_{1}) + (1-t)w_{n} \in C_{n+1}\). Thus \(\Vert P_{C_{n+1}}(x_{1})- x_{1} \Vert \leq \Vert tP_{C_{n+1}}(x_{1})+(1-t)w_{n}- x_{1} \Vert \). Using Lemma 2.9, we have

$$\begin{aligned} & \bigl\Vert P_{C_{n+1}}(x_{1})- x_{1} \bigr\Vert ^{2} \\ &\quad \leq \bigl\Vert tP_{C_{n+1}}(x_{1})+(1-t)w_{n}- x_{1} \bigr\Vert ^{2} \\ &\quad \leq t \bigl\Vert P_{C_{n+1}}(x_{1})- x_{1} \bigr\Vert ^{2} + (1-t) \Vert w_{n}-x_{1} \Vert ^{2}-t(1-t)g\bigl( \bigl\Vert P_{C_{n+1}}(x_{1})- w_{n} \bigr\Vert \bigr). \end{aligned}$$

Therefore, \(tg( \Vert P_{C_{n+1}}(x_{1})- w_{n} \Vert )\leq \Vert w_{n}-x_{1} \Vert ^{2}- \Vert P_{C_{n+1}}(x_{1})- x_{1} \Vert ^{2} \leq \delta _{n}\rightarrow 0\), as \(n \rightarrow \infty \). Then \(w_{n} - P_{C_{n+1}}(x_{1}) \rightarrow 0\), as \(n \rightarrow \infty \). Combining with Steps 3 and 4, we have \(w_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})= P_{ \bigcap _{n=1}^{\infty }Q_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(x_{n+1} \in Y_{n+1} \subset Q_{n+1}\) and \(Q_{n+1}\) is convex, for any \(t\in (0,1)\), \(tP_{Q_{n+1}}(x_{1}) + (1-t)x_{n+1} \in Q_{n+1}\). Thus \(\Vert P_{Q_{n+1}}(x_{1})- x_{1} \Vert \leq \Vert tP_{Q_{n+1}}(x_{1})+(1-t)x_{n+1}- x_{1} \Vert \). Using Lemma 2.9 again, we have

$$\begin{aligned} & \bigl\Vert P_{Q_{n+1}}(x_{1})- x_{1} \bigr\Vert ^{2} \\ &\quad \leq \bigl\Vert tP_{Q_{n+1}}(x_{1})+(1-t)x_{n+1}- x_{1} \bigr\Vert ^{2} \\ &\quad \leq t \bigl\Vert P_{Q_{n+1}}(x_{1})- x_{1} \bigr\Vert ^{2} + (1-t) \Vert x_{n+1}-x_{1} \Vert ^{2}-t(1-t)g\bigl( \bigl\Vert P_{Q_{n+1}}(x_{1})- x_{n+1} \bigr\Vert \bigr). \end{aligned}$$

Therefore, \(tg( \Vert P_{Q_{n+1}}(x_{1})- x_{n+1} \Vert )\leq \Vert x_{n+1}-x_{1} \Vert ^{2}- \Vert P_{Q_{n+1}}(x_{1})- x_{1} \Vert ^{2} \leq \vartheta _{n}\rightarrow 0\), as \(n \rightarrow \infty \). Then \(x_{n+1} - P_{Q_{n+1}}(x_{1}) \rightarrow 0\), as \(n \rightarrow \infty \). Combining with Steps 3 and 4, we have \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }Q_{n}}(x_{1})= P_{ \bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Step 8. \(y_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{ \bigcap _{n=1}^{\infty }Q_{n}}(x_{1})\) and \(z_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{ \bigcap _{n=1}^{\infty }Q_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(w_{n} \in X_{n+1} \subset C_{n+1} \subset Q_{n}\), for \(n \geq 2\),

$$ \varphi (w_{n}, y_{n}) \leq \alpha _{n} \varphi (w_{n},x_{n}) + (1- \alpha _{n})\varphi (w_{n}, x_{n}+e_{n}) $$

and

$$ \varphi (w_{n}, z_{n-1}) \leq \beta _{n-1} \varphi (w_{n},x_{n-1}) + (1- \beta _{n-1})\varphi (w_{n}, w_{n-1}+\varepsilon _{n-1}). $$

Since \(e_{n} \rightarrow 0\) and \(\varepsilon _{n} \rightarrow 0\), from Lemma 2.6 and Steps 6 and 7, we have \(w_{n} - y_{n} \rightarrow 0\) and \(w_{n} - z_{n-1} \rightarrow 0\), as \(n \rightarrow \infty \). Therefore, \(y_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{ \bigcap _{n=1}^{\infty }Q_{n}}(x_{1})\) and \(z_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{ \bigcap _{n=1}^{\infty }Q_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Step 9. \(P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{\bigcap _{n=1}^{ \infty }Q_{n}}(x_{1}) \in (\bigcap_{i = 1}^{\infty }A_{i}^{-1}0) \cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0)\).

For any \(q\in (\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap_{i = 1}^{\infty }B_{i}^{-1}0)\) and each fixed \(i_{0} \in N\), using Lemma 2.7 and (3.1), we have

$$\begin{aligned} \varphi (q, y_{n}) &\leq \alpha _{n} \varphi (q,x_{n}) + (1-\alpha _{n}) \varphi \bigl(q, \overline{U_{n}}(x_{n}+e_{n})\bigr) \\ &\leq \alpha _{n} \varphi (q,x_{n}) + (1-\alpha _{n})\Biggl[a_{0} \varphi (q, x_{n}+e_{n})+ \sum_{i = 1}^{\infty }a_{i}\varphi \bigl(q, Q_{r_{n,i}}^{A_{i}}(x_{n}+e_{n})\bigr) \Biggr] \\ &= \alpha _{n} \varphi (q,x_{n}) + (1-\alpha _{n})\Biggl[a_{0} \varphi (q, x_{n}+e_{n})+ \sum_{i = 1, i \neq i_{0}}^{\infty }a_{i}\varphi \bigl(q, Q_{r_{n,i}}^{A_{i}}(x_{n}+e_{n}) \bigr) \\ &\quad{} +a_{i_{0}} \varphi \bigl(q, Q_{r_{n,i_{0}}}^{A_{i_{0}}}(x_{n}+e_{n}) \bigr)\Biggr] \\ &\leq \alpha _{n} \varphi (q,x_{n}) + (1-\alpha _{n})\bigl[(1-a_{i_{0}}) \varphi (q, x_{n}+e_{n})+ a_{i_{0}}\varphi \bigl(q, Q_{r_{n,i_{0}}}^{A_{i_{0}}}(x_{n}+e_{n}) \bigr)\bigr] \\ &\leq \alpha _{n} \varphi (q,x_{n}) + (1-\alpha _{n})\bigl\{ (1-a_{i_{0}}) \varphi (q, x_{n}+e_{n})+a_{i_{0}} \bigl[\varphi (q, x_{n}+e_{n}) \\ &\quad{} -\varphi \bigl(Q_{r_{n,i_{0}}}^{A_{i_{0}}}(x_{n}+e_{n}), x_{n}+e_{n}\bigr)\bigr]\bigr\} \\ &= \alpha _{n} \varphi (q,x_{n}) + (1-\alpha _{n}) \varphi (q, x_{n}+e_{n}) - (1-\alpha _{n})a_{i_{0}}\varphi \bigl(Q_{r_{n,i_{0}}}^{A_{i_{0}}}(x_{n}+e_{n}), x_{n}+e_{n}\bigr). \end{aligned}$$

Thus

$$ (1-\alpha _{n})a_{i_{0}}\varphi \bigl(Q_{r_{n,i_{0}}}^{A_{i_{0}}}(x_{n}+e_{n}), x_{n}+e_{n}\bigr)\leq \alpha _{n} \varphi (q,x_{n}) + (1-\alpha _{n}) \varphi (q, x_{n}+e_{n})- \varphi (q,y_{n}), $$

which, in view of Lemma 2.6 and the facts that \(0 \leq \sup_{n} \alpha _{n} < 1\) and \(a_{i_{0}} > 0\), ensures that \(x_{n}+e_{n} - Q_{r_{n,i_{0}}}^{A_{i_{0}}}(x_{n}+e_{n}) \rightarrow 0\), as \(n \rightarrow \infty \).

Repeating the above process, \(x_{n}+e_{n} - Q_{r_{n,i}}^{A_{i}}(x_{n}+e_{n}) \rightarrow 0\), for each \(i \in N\), as \(n \rightarrow \infty \). Thus \(Q_{r_{n,i}}^{A_{i}}(x_{n}+e_{n}) \rightarrow P_{\bigcap _{n=1}^{ \infty }C_{n}}(x_{1})\), for each \(i \in N\), as \(n \rightarrow \infty \).

Let \(u_{n,i} = Q_{r_{n,i}}^{A_{i}}(x_{n}+e_{n})\); then \(\frac{1}{r_{n,i}}(J_{E}(x_{n}+e_{n}) - J_{E}u_{n,i}) \in A_{i}u_{n,i}\). Note that \(u_{n,i} \rightarrow P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1})\), \(x_{n} \rightarrow P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1})\), \(e_{n} \rightarrow 0\) and \(\inf_{n} r_{n,i} > 0\); then, since \(J_{E}\) is uniformly continuous on bounded sets (Lemma 2.1), \(\frac{1}{r_{n,i}}(J_{E}(x_{n}+e_{n}) - J_{E}u_{n,i}) \rightarrow 0\), as \(n \rightarrow \infty \). In view of Lemma 2.2, \(P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1}) \in \bigcap_{i = 1}^{ \infty }A_{i}^{-1}0\).

For \(\forall q\in (\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap_{i = 1}^{\infty }B_{i}^{-1}0)\), using Lemma 2.7 again, we have

$$\begin{aligned} \varphi (q, z_{n})&\leq \beta _{n} \varphi (q,x_{n})+(1-\beta _{n}) \varphi \bigl(q, \overline{W_{n}}(w_{n}+\varepsilon _{n})\bigr) \\ &\leq \beta _{n} \varphi (q,x_{n})+(1-\beta _{n})\Biggl[b_{0}\varphi (q,w_{n}+ \varepsilon _{n}) \\ &\quad{} + \sum_{j = 1}^{\infty }b_{j} \varphi \bigl(q, Q_{s_{n,j}}^{B_{j}}Q_{s_{n,j-1}}^{B_{j-1}} \cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n})\bigr)\Biggr] \\ &\leq \beta _{n} \varphi (q,x_{n})+(1-\beta _{n})b_{0} \varphi (q, w_{n}+ \varepsilon _{n}) \\ &\quad{}+ (1-\beta _{n})\sum_{j = 1}^{\infty }b_{j} \varphi \bigl(q, Q_{s_{n,j-1}}^{B_{j-1}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+ \varepsilon _{n})\bigr) \\ &\quad{} - (1-\beta _{n})\sum_{j = 1}^{\infty }b_{j} \varphi \bigl(Q_{s_{n,j}}^{B_{j}}Q_{s_{n,j-1}}^{B_{j-1}} \cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n}), Q_{s_{n,j-1}}^{B_{j-1}} \cdots Q_{s_{n,1}}^{B_{1}}( w_{n}+\varepsilon _{n})\bigr). \end{aligned}$$

Then using Lemma 2.7 repeatedly and noticing the results of Steps 7 and 8, one has

$$\begin{aligned}& (1-\beta _{n})\sum_{j = 1}^{\infty }b_{j} \varphi \bigl(Q_{s_{n,j}}^{B_{j}}Q_{s_{n,j-1}}^{B_{j-1}} \cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n}), Q_{s_{n,j-1}}^{B_{j-1}} \cdots Q_{s_{n,1}}^{B_{1}}( w_{n}+\varepsilon _{n})\bigr) \\ & \quad \leq \beta _{n} \varphi (q,x_{n})+(1-\beta _{n})b_{0} \varphi (q, w_{n}+ \varepsilon _{n})+ (1-\beta _{n})\sum_{j = 1}^{\infty }b_{j} \varphi \bigl(q, Q_{s_{n,j-1}}^{B_{j-1}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+ \varepsilon _{n})\bigr) \\ & \quad\quad {}- \varphi (q, z_{n}) \\ & \quad \leq \beta _{n} \varphi (q,x_{n})+ (1-\beta _{n})\varphi (q, w_{n}+ \varepsilon _{n}) - \varphi (q, z_{n}) \rightarrow 0, \quad \textit{as } n \rightarrow \infty , \end{aligned}$$

which implies that \(\varphi (Q_{s_{n,j}}^{B_{j}}Q_{s_{n,j-1}}^{B_{j-1}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+ \varepsilon _{n}), Q_{s_{n,j-1}}^{B_{j-1}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+ \varepsilon _{n})) \rightarrow 0\), and then Lemma 2.6 implies that \(Q_{s_{n,j}}^{B_{j}}Q_{s_{n,j-1}}^{B_{j-1}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+ \varepsilon _{n})- Q_{s_{n,j-1}}^{B_{j-1}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+ \varepsilon _{n}) \rightarrow 0\), as \(n \rightarrow \infty \).

Repeating the above process, by induction, we have

$$\begin{aligned}& \textstyle\begin{cases} Q_{s_{n,j-1}}^{B_{j-1}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n})- Q_{s_{n,j-2}}^{B_{j-2}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n}) \rightarrow 0, \\ Q_{s_{n,j-2}}^{B_{j-2}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n})- Q_{s_{n,j-3}}^{B_{j-3}}\cdots Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n}) \rightarrow 0, \\ \vdots \\ Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n})- (w_{n}+\varepsilon _{n}) \rightarrow 0, \end{cases}\displaystyle \end{aligned}$$
(3.2)

as \(n \rightarrow \infty \).

Therefore, \(Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n}) \rightarrow P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \). Imitating the proof of \(P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1})\in \bigcap_{i = 1}^{ \infty }A^{-1}_{i}0\), we know that \(P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1})\in B_{1}^{-1}0\).

Now, set \(v_{n,1} = Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n})\) and \(v_{n,2} = Q_{s_{n,2}}^{B_{2}}Q_{s_{n,1}}^{B_{1}}(w_{n}+\varepsilon _{n})\); then \(\frac{1}{s_{n,2}}(J_{E}v_{n,1} - J_{E}v_{n,2}) \in B_{2}v_{n,2}\). Since \(v_{n,1} \rightarrow P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1})\), from (3.2) we have \(v_{n,2} \rightarrow P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \). Since \(\inf_{n} s_{n,2} > 0\), by using Lemma 2.1, \(\frac{1}{s_{n,2}}(J_{E}v_{n,1} - J_{E}v_{n,2}) \rightarrow 0\), as \(n \rightarrow \infty \). Lemma 2.2 implies that \(P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1}) \in B_{2}^{-1}0\).

By induction, we easily show that \(P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1}) \in B_{j}^{-1}0\), for each \(j \in N\). Therefore, \(P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1}) \in \bigcap_{j = 1}^{ \infty }B_{j}^{-1}0\), which implies that \(P_{\bigcap _{n = 1}^{\infty }C_{n}}(x_{1}) = P_{\bigcap _{n = 1}^{ \infty }Q_{n}}(x_{1})\in (\bigcap_{i = 1}^{\infty }A^{-1}_{i}0) \cap (\bigcap_{i = 1}^{\infty }B^{-1}_{i}0)\).

Step 10. \(P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{\bigcap _{n=1}^{ \infty }Q_{n}}(x_{1})= P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0) \cap (\bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1})\).

From Step 9, we see that

$$ \bigl\Vert P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) - x_{1} \bigr\Vert \geq \bigl\Vert P_{( \bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap _{i = 1}^{ \infty }B_{i}^{-1}0)}(x_{1}) - x_{1} \bigr\Vert . $$

From Step 1, we see that

$$ \bigl\Vert P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap _{i = 1}^{ \infty }B_{i}^{-1}0)}(x_{1}) - x_{1} \bigr\Vert \geq \bigl\Vert P_{\bigcap _{n=1}^{ \infty }C_{n}}(x_{1}) - x_{1} \bigr\Vert . $$

Therefore,

$$ \bigl\Vert P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) - x_{1} \bigr\Vert = \bigl\Vert P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) - x_{1} \bigr\Vert . $$

Since \(P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in \bigcap_{n=1}^{\infty }C_{n}\) by Step 1 and the metric projection of \(x_{1}\) onto the closed and convex set \(\bigcap_{n=1}^{\infty }C_{n}\) is unique, \(P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1}) = P_{(\bigcap _{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1})\).

This completes the proof. □

Corollary 3.2

If we choose \(w_{n} = P_{C_{n+1}}(x_{1})\), then (3.1) reduces to the following one:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E\quad \text{\textit{chosen arbitrarily}}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E} \overline{U_{n}}(x_{n}+e_{n})], \\ C_{1} = E, \quad\quad Q_{1} = E = Y_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ w_{n} = P_{C_{n+1}}(x_{1}), \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}\overline{W_{n}}(w_{n}+ \varepsilon _{n})], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ Y_{n+1} = \{v \in Q_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{Q_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \vartheta _{n}\}, \\ x_{n+1}\in Y_{n+1}, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.3)

Under the assumptions that \((i)\), \((\mathit{ii})\), \((\mathit{iii})\) and \((v)\) in Theorem 3.1 and \((\mathit{iv})'\) \(\vartheta _{n}\rightarrow 0\), as \(n \rightarrow \infty \), one has

$$ \textstyle\begin{cases} x_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ w_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\cap _{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ y_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ z_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty . \end{cases} $$

Proof

The proof can also be split into ten steps. Copying the proof of Steps 1–5 and Steps 8–10 in Theorem 3.1 and modifying Steps 6 and 7 as follows, we still get the result.

Step 6. Both \(\{w_{n}\}\) and \(\{x_{n}\}\) are bounded.

Since \(w_{n}= P_{C_{n+1}}(x_{1})\) and \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap_{i = 1}^{\infty }B_{i}^{-1}0)\subset C_{n+1}\), we have \(\Vert w_{n} - x_{1} \Vert \leq \Vert q - x_{1} \Vert \) for any \(q \in (\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap_{i = 1}^{\infty }B_{i}^{-1}0)\), which implies that \(\{w_{n}\}\) is bounded.

Since \(x_{n+1} \in Y_{n+1}\),

$$ \Vert x_{1} - x_{n+1} \Vert ^{2} \leq \bigl\Vert P_{Q_{n+1}}(x_{1})- x_{1} \bigr\Vert ^{2}+ \vartheta _{n}. $$

Since \(P_{Q_{n}}(x_{1}) \rightarrow P_{\bigcap _{n = 1}^{\infty }Q_{n}}(x_{1})\) and \(\vartheta _{n} \rightarrow 0\), \(\{x_{n}\}\) is bounded.

Step 7. \(w_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\) and \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

It follows from Lemmas 2.4 and 2.5 that \(w_{n} = P_{C_{n+1}}(x_{1})\rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \). Copying Step 7 in Theorem 3.1, \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

This completes the proof. □

Similar to Corollary 3.2, we have the following two results:

Corollary 3.3

If we choose \(x_{n+1} = P_{Q_{n+1}}(x_{1})\), then (3.1) reduces to the following one:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E\quad \text{\textit{chosen arbitrarily}}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E} \overline{U_{n}}(x_{n}+e_{n})], \\ C_{1} = E = X_{1}, \quad\quad Q_{1} = E, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ X_{n+1} = \{v \in C_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{C_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \delta _{n}\}, \\ w_{n} \in X_{n+1}, \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}\overline{W_{n}}(w_{n}+ \varepsilon _{n})], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ x_{n+1}= P_{Q_{n+1}}(x_{1}), \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.4)

Under the assumptions of \((i)\), \((\mathit{ii})\), \((\mathit{iii})\) and \((v)\) in Theorem 3.1 and \((\mathit{iv})''\) \(\delta _{n} \rightarrow 0\), one has

$$ \textstyle\begin{cases} x_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ w_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ y_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ z_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty . \end{cases} $$

Corollary 3.4

If we choose \(w_{n} = P_{C_{n+1}}(x_{1})\) and \(x_{n+1} = P_{Q_{n+1}}(x_{1})\), then (3.1) reduces to the following one:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E\quad \text{\textit{chosen arbitrarily}}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E} \overline{U_{n}}(x_{n}+e_{n})], \\ C_{1} = E = Q_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ w_{n} = P_{C_{n+1}}(x_{1}), \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}\overline{W_{n}}(w_{n}+ \varepsilon _{n})], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ x_{n+1}= P_{Q_{n+1}}(x_{1}), \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.5)

Under the assumptions of \((i)\), \((\mathit{ii})\), \((\mathit{iii})\) and \((v)\) in Theorem 3.1, one has

$$ \textstyle\begin{cases} x_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ w_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ y_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ z_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty . \end{cases} $$
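To illustrate how a scheme like (3.5) can be run in practice, the following is a minimal sketch (not the computational experiments reported later in the paper) written under simplifying assumptions: E = R, so that \(J_{E}\) is the identity, \(\varphi (v,y) = (v-y)^{2}\) and \(P = \varPi \) is the clamp onto an interval; the two infinite families are truncated to finitely many operators \(A_{i}x = ix\) and \(B_{j}x = jx\), whose unique common zero is 0; and the errors \(e_{n}\), \(\varepsilon _{n}\) are set to 0. All names and constants below are illustrative, not taken from the paper.

```python
import math

# A minimal sketch of scheme (3.5) under the simplifying assumptions stated above:
# E = R, J_E = identity, phi(v, y) = (v - y)^2, P = Pi = clamping onto an interval,
# A_i x = i*x and B_j x = j*x for i, j = 1, ..., M (common zero: 0), e_n = eps_n = 0.

M = 5                                   # truncation level of the two infinite families
a = [0.5] + [0.5 / M] * M               # weights a_0, ..., a_M with sum 1
b = [0.5] + [0.5 / M] * M               # weights b_0, ..., b_M with sum 1
alpha, beta = 0.5, 0.5                  # alpha_n, beta_n (kept constant, in [0, 1))
r = s = 1.0                             # resolvent parameters r_{n,i}, s_{n,j}

def resolvent(i, lam, x):
    """Q_lam^{A_i} x = (I + lam * A_i)^{-1} x for A_i x = i*x on R."""
    return x / (1.0 + lam * i)

def U_bar(x):
    """U_n x = a_0 x + sum_i a_i Q_r^{A_i} x (finite truncation)."""
    return a[0] * x + sum(a[i] * resolvent(i, r, x) for i in range(1, M + 1))

def W_bar(x):
    """W_n x = b_0 x + sum_j b_j Q_s^{B_j} ... Q_s^{B_1} x (finite truncation)."""
    total, comp = b[0] * x, x
    for j in range(1, M + 1):
        comp = resolvent(j, s, comp)    # compose the resolvents of B_1, ..., B_j
        total += b[j] * comp
    return total

def cut(pairs, target):
    """Interval {v : (v - target)^2 <= sum_k t_k (v - c_k)^2}, where sum t_k = 1."""
    mean = sum(t * c for t, c in pairs)
    mean_sq = sum(t * c * c for t, c in pairs)
    lin, const = 2.0 * (mean - target), mean_sq - target ** 2   # lin * v <= const
    if lin > 0:
        return (-math.inf, const / lin)
    if lin < 0:
        return (const / lin, math.inf)
    return (-math.inf, math.inf)        # degenerate cut: the whole line

def intersect(I, K):
    return (max(I[0], K[0]), min(I[1], K[1]))

def clamp(I, u):
    """Metric projection of u onto the interval I = (lo, hi)."""
    return min(max(u, I[0]), I[1])

x1 = 10.0
x, Q = x1, (-math.inf, math.inf)        # x_1 and Q_1 = E
for n in range(200):
    y = alpha * x + (1 - alpha) * U_bar(x)                     # e_n = 0
    C = intersect(Q, cut([(alpha, x), (1 - alpha, x)], y))     # C_{n+1}
    w = clamp(C, x1)                                           # w_n = P_{C_{n+1}}(x_1)
    z = beta * x + (1 - beta) * W_bar(w)                       # eps_n = 0
    Q = intersect(C, cut([(beta, x), (1 - beta, w)], z))       # Q_{n+1}
    x = clamp(Q, x1)                                           # x_{n+1} = P_{Q_{n+1}}(x_1)

print(x)   # expected to approach the common zero 0
```

In this one-dimensional setting each defining inequality of \(C_{n+1}\) and \(Q_{n+1}\) is affine in v (compare the equivalences in Step 2 of the proof of Theorem 3.1), so both sets are intervals and all projections reduce to clamping; the iterates are expected to decrease toward the common zero \(0 = P_{(\bigcap _{i}A_{i}^{-1}0)\cap (\bigcap _{j}B_{j}^{-1}0)}(x_{1})\).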

Corollary 3.5

If we choose \(w_{n} = \varPi _{C_{n+1}}(x_{n})\), then (3.1) reduces to the following one:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E\quad \text{\textit{chosen arbitrarily}}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E} \overline{U_{n}}(x_{n}+e_{n})], \\ C_{1} = E, \quad\quad Q_{1} = E = Y_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ w_{n} = \varPi _{C_{n+1}}(x_{n}), \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}\overline{W_{n}}(w_{n}+ \varepsilon _{n})], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ Y_{n+1} = \{v \in Q_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{Q_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \vartheta _{n}\}, \\ x_{n+1} \in Y_{n+1}, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.6)

Under the assumptions that \((i)\), \((\mathit{ii})\), \((\mathit{iii})\) and \((v)\) in Theorem 3.1 and \((\mathit{iv})'\) in Corollary 3.2, one has

$$ \textstyle\begin{cases} x_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ w_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ y_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ z_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty . \end{cases} $$

Proof

Copying Steps 1–5, 9 and 10 in Theorem 3.1, we are left to show that the results of Steps 6, 7 and 8 are still true.

Step 6. Both \(\{w_{n}\}\) and \(\{x_{n}\}\) are bounded.

Copying the proof of Theorem 3.1, \(\{x_{n}\}\) is bounded. Since \(w_{n}= \varPi _{C_{n+1}}(x_{n})\), for any \(q \in (\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap_{i = 1}^{\infty }B_{i}^{-1}0)\subset C_{n+1}\), Lemma 2.8 gives \(\varphi (q, w_{n}) + \varphi (w_{n}, x_{n}) \leq \varphi (q, x_{n})\). Thus \(\{\varphi (q,w_{n})\}\) is bounded. Since \(\varphi (q,w_{n}) \geq ( \Vert w_{n} \Vert - \Vert q \Vert )^{2}\), \(\{w_{n}\}\) is bounded.

Step 7. \(w_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\) and \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Copying the proof of Theorem 3.1, \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(x_{n+1}\in Y_{n+1} \subset Q_{n+1} \subset C_{n+1}\), using Lemma 2.8, \(\varphi (x_{n+1}, w_{n}) + \varphi (w_{n}, x_{n}) \leq \varphi (x_{n+1}, x_{n})\rightarrow 0\), as \(n \rightarrow \infty \). Thus \(\varphi (w_{n}, x_{n})\rightarrow 0\), which implies from Lemma 2.6 that \(w_{n} - x_{n} \rightarrow 0\) and then \(w_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\) as \(n \rightarrow \infty \).

Step 8. \(y_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\) and \(z_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(x_{n+1}\in Y_{n+1} \subset Q_{n+1} \subset C_{n+1}\), \(\varphi (x_{n+1}, y_{n}) \leq \alpha _{n}\varphi (x_{n+1}, x_{n})+(1- \alpha _{n}) \varphi (x_{n+1}, x_{n}+e_{n}) \rightarrow 0\), which implies from Lemma 2.6 that \(x_{n+1} - y_{n} \rightarrow 0\) as \(n \rightarrow \infty \). Thus \(y_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(w_{n} = \varPi _{C_{n+1}}(x_{n}) \in C_{n+1}\subset Q_{n}\), we have \(\varphi (w_{n}, z_{n-1}) \leq \beta _{n-1}\varphi (w_{n}, x_{n-1})+(1- \beta _{n-1}) \varphi (w_{n}, w_{n-1}+\varepsilon _{n-1}) \rightarrow 0\), as \(n \rightarrow \infty \).

Thus \(w_{n} - z_{n-1} \rightarrow 0\) which implies that \(z_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

This completes the proof. □

Corollary 3.6

If we choose \(x_{n+1} = \varPi _{Q_{n+1}}(w_{n})\), then (3.1) reduces to the following one:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E\quad \text{\textit{chosen arbitrarily}}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E} \overline{U_{n}}(x_{n}+e_{n})], \\ C_{1} = E = X_{1}, \quad\quad Q_{1} = E, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ X_{n+1} = \{v \in C_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{C_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \delta _{n} \}, \\ w_{n} \in X_{n+1}, \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}\overline{W_{n}}(w_{n}+ \varepsilon _{n})], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ x_{n+1} = \varPi _{Q_{n+1}}(w_{n}), \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.7)

Under the assumptions that \((i)\), \((\mathit{ii})\), \((\mathit{iii})\) and \((v)\) in Theorem 3.1 and \((\mathit{iv})''\) in Corollary 3.3, one has

$$ \textstyle\begin{cases} x_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ w_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ y_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ z_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty . \end{cases} $$

Proof

Copying Steps 1–5, 9 and 10 in Theorem 3.1, we are left to show that the results of Steps 6, 7 and 8 are still true.

Step 6. Both \(\{w_{n}\}\) and \(\{x_{n}\}\) are bounded.

Copying the proof of Theorem 3.1, \(\{w_{n}\}\) is bounded. Since \(x_{n+1}= \varPi _{Q_{n+1}}(w_{n})\), for any \(q \in (\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap_{i = 1}^{\infty }B_{i}^{-1}0)\subset Q_{n+1}\), Lemma 2.8 implies that \(\varphi (q, x_{n+1}) + \varphi (x_{n+1}, w_{n}) \leq \varphi (q, w_{n})\). Thus \(\{x_{n}\}\) is bounded.

Step 7. \(w_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\) and \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Copying the proof of Theorem 3.1, \(w_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(w_{n+1}\in X_{n+2} \subset C_{n+2} \subset Q_{n+1}\), using Lemma 2.8, we have \(\varphi (w_{n+1}, x_{n+1}) + \varphi (x_{n+1}, w_{n}) \leq \varphi (w_{n+1}, w_{n})\rightarrow 0\), as \(n \rightarrow \infty \). Thus \(w_{n+1} - x_{n+1} \rightarrow 0\) and thus \(x_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\) as \(n \rightarrow \infty \).

Step 8. \(y_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\) and \(z_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(x_{n+1}\in Q_{n+1} \subset C_{n+1}\), \(\varphi (x_{n+1}, y_{n}) \leq \alpha _{n}\varphi (x_{n+1}, x_{n})+(1- \alpha _{n}) \varphi (x_{n+1}, x_{n}+e_{n}) \rightarrow 0\), which implies from Lemma 2.6 that \(x_{n+1} - y_{n} \rightarrow 0\) as \(n \rightarrow \infty \). Thus \(y_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

Since \(x_{n+1} \in Q_{n+1}\), we have \(\varphi (x_{n+1}, z_{n}) \leq \beta _{n}\varphi (x_{n+1}, x_{n})+(1- \beta _{n}) \varphi (x_{n+1}, w_{n}+\varepsilon _{n}) \rightarrow 0\), as \(n \rightarrow \infty \). Thus \(z_{n} \rightarrow P_{\bigcap _{n=1}^{\infty }C_{n}}(x_{1})\), as \(n \rightarrow \infty \).

This completes the proof. □

Corollary 3.7

If we choose \(w_{n} = \varPi _{C_{n+1}}(x_{n})\) and \(x_{n+1} = P_{Q_{n+1}}(x_{1})\), then (3.1) becomes (3.8). If we choose \(w_{n} = P_{C_{n+1}}(x_{1})\) and \(x_{n+1} = \varPi _{Q_{n+1}}(w_{n})\), then (3.1) becomes (3.9).

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E\quad \text{\textit{chosen arbitrarily}}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E} \overline{U_{n}}(x_{n}+e_{n})], \\ C_{1} = E = Q_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ w_{n} = \varPi _{C_{n+1}}(x_{n}), \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}\overline{W_{n}}(w_{n}+ \varepsilon _{n})], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ x_{n+1} = P_{Q_{n+1}}(x_{1}), \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(3.8)

and

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E\quad \text{\textit{chosen arbitrarily}}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})J_{E} \overline{U_{n}}(x_{n}+e_{n})], \\ C_{1} = E = Q_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ w_{n} = P_{C_{n+1}}(x_{1}), \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})J_{E}\overline{W_{n}}(w_{n}+ \varepsilon _{n})], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ x_{n+1} = \varPi _{Q_{n+1}}(w_{n}), \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.9)

Under assumptions \((i)\), \((\mathit{ii})\), \((\mathit{iii})\) and \((v)\) of Theorem 3.1, one has

$$ \textstyle\begin{cases} x_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ w_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ y_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty , \\ z_{n} \rightarrow P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap ( \bigcap _{i = 1}^{\infty }B_{i}^{-1}0)}(x_{1}) \in (\bigcap_{i = 1}^{ \infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{\infty }B_{i}^{-1}0),\quad \textit{as } n \rightarrow \infty . \end{cases} $$

Remark 3.8

Compared to [15–17], we have the following differences: (1) infinite families of maximal monotone operators are studied instead of finite ones; (2) the limit of the iterative sequences, \(P_{(\bigcap _{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap _{i = 1}^{ \infty }B_{i}^{-1}0)}(x_{1})\), is theoretically easier to compute since the metric projection only involves \(\Vert \cdot \Vert \) while the generalized projection involves the Lyapunov functional φ; (3) computational errors are considered in each step; (4) for each given iterative step n, multiple choices can be made for both \(\{w_{n}\}\) and \(\{x_{n}\}\) in (3.1); (5) the normalized duality mappings \(J_{E}\) and \(J^{-1}_{E}\) no longer need to be weakly sequentially continuous.

Remark 3.9

Compared to [14] and [18], we have the following differences: (1) for each iterative step n, multiple choices can be made for both \(\{w_{n}\}\) and \(\{x_{n}\}\) in (3.1); (2) four key sets \(\{C_{n}\}\), \(\{Q_{n}\}\), \(\{X_{n}\}\) and \(\{Y_{n}\}\) are defined, which permits more choices for the iterative sequences; (3) both \(\{C_{n}\}\) and \(\{Q_{n}\}\) are decreasing sets in (3.1) and satisfy the inter-relationship \(C_{n+1} \subset Q_{n} \subset C_{n} \subset Q_{n-1}\) for \(n \geq 2\); (4) \(\bigcap_{n = 1}^{\infty }C_{n} = \bigcap_{n = 1}^{\infty }Q_{n}\) can be proved, which guarantees that the limit of the iterative sequences is unique.

Remark 3.10

Corollaries 3.2–3.4 can be seen as one group of results and Corollaries 3.5 and 3.6 as another. Corollaries 3.2–3.4 show that the results remain true if we take \(w_{n}\) or \(x_{n}\) or both as values of metric projections, while Corollaries 3.5 and 3.6 show that the results remain true if we take \(w_{n}\) or \(x_{n}\) as the value of a generalized projection. In this sense, Theorem 3.1 is a new and general result.

3.2 Computational experiments

Remark 3.11

If E reduces to a Hilbert space H, then the Lyapunov functional is reduced to

$$ \varphi (x,y) = \Vert x - y \Vert ^{2}, \quad \forall x , y \in H. $$
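Indeed, since the normalized duality mapping of a Hilbert space H is the identity, the definition of the Lyapunov functional immediately gives

$$ \varphi (x,y) = \Vert x \Vert ^{2}-2\langle x,y\rangle + \Vert y \Vert ^{2} = \Vert x - y \Vert ^{2}, \quad \forall x , y \in H. $$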

Remark 3.12

Take \(E = (-\infty , +\infty )\). Suppose \(A_{i}, B_{i}: (-\infty , +\infty ) \rightarrow (-\infty , +\infty )\) are defined as follows: \(A_{i} x = \frac{x}{2^{i}}\) and \(B_{i} x = 2^{i} x\) for \(x \in (-\infty , +\infty )\) and \(i \in N\). Then \(A_{i}\) and \(B_{i}\) are maximal monotone for \(i \in N\) and \((\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0) = \{0\}\). Let \(a_{i} = \frac{1}{2^{i+1}} = b_{i}\) for \(i \in \{0\} \cup N\), \(\beta _{n} = \delta _{n} = \vartheta _{n} = e_{n} = \varepsilon _{n} = \frac{1}{n}\) and \(\alpha _{n} = \frac{1}{2^{n}}\) for \(n \in N\). Let \(r_{n,i} = (2^{n+i-1}-1)2^{i}\) and \(s_{n,i} = \frac{2^{n} - 1}{2^{i}}\) for \(i,n \in N\). It is easy to check that all of the assumptions of Theorem 3.1 are satisfied for this special case.
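Since each \(A_{i}\) and \(B_{i}\) in this example is linear and single-valued, the resolvents appearing in (3.1) have explicit forms: \((I+r_{n,i}A_{i})^{-1}x = \frac{x}{2^{n+i-1}}\) and \((I+s_{n,i}B_{i})^{-1}x = \frac{x}{2^{n}}\), which are used in the computations of Corollary 3.13 below. The following minimal Python sketch (our own check, not part of the paper) verifies these two formulas numerically:

    def resolvent_A(x, n, i):
        """Solve u + r_{n,i} * u / 2^i = x for u, with r_{n,i} = (2^{n+i-1} - 1) * 2^i."""
        r = (2 ** (n + i - 1) - 1) * 2 ** i
        return x / (1.0 + r / 2 ** i)

    def resolvent_B(x, n, j):
        """Solve u + s_{n,j} * 2^j * u = x for u, with s_{n,j} = (2^n - 1) / 2^j."""
        s = (2 ** n - 1) / 2 ** j
        return x / (1.0 + s * 2 ** j)

    x = 1.7  # arbitrary test point
    for n in range(1, 7):
        for i in range(1, 7):
            assert abs(resolvent_A(x, n, i) - x / 2 ** (n + i - 1)) < 1e-12
            assert abs(resolvent_B(x, n, i) - x / 2 ** n) < 1e-12
    print("resolvent formulas for the example of Remark 3.12 confirmed")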

Corollary 3.13

Taking the example in Remark 3.12, we can choose the following iterative sequences among infinite choices generated by iterative algorithm (3.1) in Theorem 3.1:

$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1, \quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n\in N, \\ \overline{a_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ w_{n} = x_{1} - \sqrt{(x_{1} - \overline{a_{n}})^{2}+\frac{1}{n}}, \quad n\in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n\in N\setminus \{1\}, \\ \overline{b_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \\ x_{n+1} = x_{1} - \sqrt{(x_{1} - \overline{b_{n}})^{2}+\frac{1}{n}}, \quad n\in N, \end{cases}\displaystyle \end{aligned}$$
(3.10)
$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1, \quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n \in N, \\ w_{n} = \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n \in N\setminus \{1\}, \\ \overline{b_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \\ x_{n+1} = x_{1} - \sqrt{(x_{1} - \overline{b_{n}})^{2}+\frac{1}{n}}, \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(3.11)
$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1,\quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n \in N, \\ w_{n} = \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n \in N\setminus \{1\}, \\ x_{n+1} = \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(3.12)
$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1, \quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n \in N, \\ \overline{a_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ w_{n} = x_{1} - \sqrt{(x_{1} - \overline{a_{n}})^{2}+\frac{1}{n}}, \quad n \in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n \in N\setminus \{1\}, \\ x_{n+1} = \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(3.13)
$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1,\quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n \in N, \\ \overline{a_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ w_{n} = \frac{x_{1} - \sqrt{(x_{1} - \overline{a_{n}})^{2}+\frac{1}{n}}+\overline{a_{n}}}{2}, \quad n \in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n \in N\setminus \{1\}, \\ \overline{b_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \\ x_{n+1} = x_{1} - \sqrt{(x_{1} - \overline{b_{n}})^{2}+\frac{1}{n}}, \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(3.14)
$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1, \quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n \in N, \\ \overline{a_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ w_{n} = \frac{x_{1} - \sqrt{(x_{1} - \overline{a_{n}})^{2}+\frac{1}{n}}+\overline{a_{n}}}{2}, \quad n \in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n \in N\setminus \{1\}, \\ x_{n+1} = \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(3.15)
$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1, \quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n \in N, \\ \overline{a_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ w_{n} = \frac{x_{1} - \sqrt{(x_{1} - \overline{a_{n}})^{2}+\frac{1}{n}}+\overline{a_{n}}}{2}, \quad n \in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n \in N\setminus \{1\}, \\ \overline{b_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \\ x_{n+1} = \frac{x_{1} - \sqrt{(x_{1} - \overline{b_{n}})^{2}+\frac{1}{n}}+\overline{b_{n}}}{2}, \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(3.16)
$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1,\quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n \in N, \\ \overline{a_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ w_{n} = x_{1} - \sqrt{(x_{1} - \overline{a_{n}})^{2}+\frac{1}{n}}, \quad n \in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n \in N\setminus \{1\}, \\ \overline{b_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \\ x_{n+1} = \frac{x_{1} - \sqrt{(x_{1} - \overline{b_{n}})^{2}+\frac{1}{n}}+\overline{b_{n}}}{2}, \quad n \in N, \end{cases}\displaystyle \end{aligned}$$
(3.17)

and

$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1,\quad\quad t_{0} = t_{1} = 1, \\ y_{n} = \frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(\frac{1}{2} + \frac{1}{3\times 2^{n}})(x_{n}+\frac{1}{n}), \quad n \in N, \\ v_{n} = \frac{\frac{x_{n}^{2}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})^{2} - y_{n}^{2}}{2[\frac{x_{n}}{2^{n}}+(1-\frac{1}{2^{n}})(x_{n}+\frac{1}{n})-y_{n}]}, \quad n \in N, \\ w_{n} = \min_{m\leq n, m \in N}\{v_{m}, t_{m-1}\}, \quad n \in N, \\ z_{n}= \frac{1}{n}x_{n}+(1-\frac{1}{n})\frac{2^{n}}{2^{n+1}-1}(w_{n}+ \frac{1}{n}), \quad n \in N, \\ t_{n} = \frac{\frac{x_{n}^{2}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})^{2} - z_{n}^{2}}{2[\frac{x_{n}}{n}+(1-\frac{1}{n})(w_{n}+\frac{1}{n})-z_{n}]}, \quad n \in N\setminus \{1\}, \\ \overline{b_{n}}= \min_{m\leq n, m \in N}\{v_{m}, t_{m}\}, \quad n \in N, \\ x_{n+1} = \frac{x_{1} - \sqrt{(x_{1} - \overline{b_{n}})^{2}+\frac{1}{n}}+\overline{b_{n}}}{2}, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.18)

Then\(\{x_{n}\}\)generated by (3.10)(3.18) converges strongly to\(0 \in (\bigcap_{i = 1}^{\infty }A_{i}^{-1}0)\cap (\bigcap_{i = 1}^{ \infty }B_{i}^{-1}0)\), as\(n \rightarrow \infty \).

Proof

We shall only show that \(\{x_{n}\}\) in (3.10) can be obtained by iterative algorithm (3.1) and that the strong convergence result holds; the schemes (3.11)–(3.18) can be treated similarly.

We first compute \(y_{n}\) and \(z_{n}\) in (3.1) for this example, for \(n \in N\):

$$\begin{aligned}& \begin{aligned}[b] y_{n} &= \alpha _{n} x_{n}+(1-\alpha _{n})\overline{U_{n}}(x_{n}+e_{n}) \\ &= \alpha _{n} x_{n}+(1-\alpha _{n})a_{0}(x_{n}+e_{n})+(1- \alpha _{n}) \sum_{i = 1}^{\infty }a_{i} (I+r_{n,i}A_{i})^{-1}(x_{n}+e_{n}) \\ &= \alpha _{n} x_{n}+(1-\alpha _{n}) \frac{x_{n}+e_{n}}{2}+(1-\alpha _{n}) \sum_{i = 1}^{\infty } \frac{1}{4^{i}}\frac{x_{n}+e_{n}}{2^{n}} \\ &= \frac{x_{n}}{2^{n}}+\biggl(1-\frac{1}{2^{n}}\biggr) \biggl( \frac{1}{2} + \frac{1}{3\times 2^{n}}\biggr) \biggl(x_{n}+ \frac{1}{n}\biggr), \end{aligned} \end{aligned}$$
(3.19)

and

$$\begin{aligned}& \begin{aligned}[b] z_{n} &= \beta _{n} x_{n}+(1-\beta _{n})\overline{W_{n}}(w_{n}+ \varepsilon _{n}) \\ &= \beta _{n} x_{n}+(1-\beta _{n})b_{0}(w_{n}+ \varepsilon _{n}) \\ &\quad{} +(1-\beta _{n})\sum_{j = 1}^{\infty }b_{j}(I+s_{n,j}B_{j})^{-1}(I+s_{n,j-1}B_{j-1})^{-1} \cdots (I+s_{n,1}B_{1})^{-1}(w_{n}+ \varepsilon _{n}) \\ &= \beta _{n} x_{n}+(1-\beta _{n}) \frac{w_{n}+\varepsilon _{n}}{2}+(1- \beta _{n})\sum_{j = 1}^{\infty } \frac{w_{n}+\varepsilon _{n}}{2^{(n+1)j+1}} \\ &= \frac{x_{n}}{n}+\biggl(1-\frac{1}{n}\biggr)\frac{2^{n}}{2^{n+1}-1} \biggl(w_{n}+ \frac{1}{n}\biggr). \end{aligned} \end{aligned}$$
(3.20)
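The two closed forms (3.19) and (3.20) can also be checked numerically by truncating the infinite series. The short Python sketch below is our own verification (not part of the paper); the truncation index M and the test values of \(x_{n}\) and \(w_{n}\) are arbitrary, and \(a_{0} = b_{0} = \frac{1}{2}\) comes from Remark 3.12:

    M = 60  # truncation index for the infinite series

    def y_series(x, n):
        alpha, e = 1.0 / 2 ** n, 1.0 / n
        s = 0.5 * (x + e)  # a_0 term
        # a_i (I + r_{n,i} A_i)^{-1}(x + e) = (x + e) / 2^{i+1} / 2^{n+i-1}
        s += sum((x + e) / 2 ** (i + 1) / 2 ** (n + i - 1) for i in range(1, M))
        return alpha * x + (1 - alpha) * s

    def y_closed(x, n):  # right-hand side of (3.19)
        return x / 2 ** n + (1 - 1 / 2 ** n) * (0.5 + 1 / (3 * 2 ** n)) * (x + 1 / n)

    def z_series(x, w, n):
        beta, eps = 1.0 / n, 1.0 / n
        s = 0.5 * (w + eps)  # b_0 term
        # the composition of the first j resolvents of B_1, ..., B_j maps v to v / 2^{n j}
        s += sum((w + eps) / 2 ** (j + 1) / 2 ** (n * j) for j in range(1, M))
        return beta * x + (1 - beta) * s

    def z_closed(x, w, n):  # right-hand side of (3.20)
        return x / n + (1 - 1 / n) * (2 ** n / (2 ** (n + 1) - 1)) * (w + 1 / n)

    for n in range(1, 8):
        assert abs(y_series(1.3, n) - y_closed(1.3, n)) < 1e-10
        assert abs(z_series(1.3, 0.4, n) - z_closed(1.3, 0.4, n)) < 1e-10
    print("closed forms (3.19) and (3.20) confirmed numerically")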

Next, we compute \(C_{n+1}\) and \(Q_{n+1}\) in (3.1) for this example, for \(n \in N\):

$$\begin{aligned}& \begin{aligned}[b] C_{n+1} &= Q_{n} \cap \bigl\{ v \in R: 2\bigl[ \alpha _{n} x_{n}+(1-\alpha _{n}) (x_{n}+e_{n}) - y_{n}\bigr]v \\ &\leq \alpha _{n} x_{n}^{2}+(1-\alpha _{n}) (x_{n}+e_{n})^{2}-y_{n}^{2} \bigr\} , \end{aligned} \end{aligned}$$
(3.21)

and

$$\begin{aligned}& \begin{aligned}[b] Q_{n+1}& = C_{n+1}\cap \bigl\{ v \in R: 2\bigl[\beta _{n} x_{n}+(1-\beta _{n}) (w_{n}+ \varepsilon _{n}) - z_{n}\bigr]v \\ &\leq \beta _{n} x_{n}^{2}+(1-\beta _{n}) (w_{n}+ \varepsilon _{n})^{2}-z_{n}^{2}\bigr\} . \end{aligned} \end{aligned}$$
(3.22)

We now use induction to show that the following holds:

$$\begin{aligned}& \textstyle\begin{cases} x_{1} = 1, \quad\quad t_{0} = t_{1} = 1, \\ C_{1} = Q_{1} = X_{1} =Y_{1} = (-\infty , +\infty ), \\ C_{2} = (-\infty , \frac{41}{24}] = Q_{2}, \quad\quad X_{2} = [0, \frac{41}{24}] = Y_{2}, \\ C_{n+1} = (-\infty , \overline{a_{n}}], \quad n \in N\setminus \{1\}, \\ X_{n+1} =[x_{1} - \sqrt{(x_{1}-\overline{a_{n}})^{2}+\frac{1}{n}}, \overline{a_{n}}], \quad n \in N\setminus \{1\}, \\ \text{\textit{we may choose}} \quad w_{n} = x_{1} - \sqrt{(x_{1}-\overline{a_{n}})^{2}+ \frac{1}{n}}, \quad n \in N, \\ Q_{n+1} = (-\infty , \overline{b_{n}}], \quad n \in N\setminus \{1\}, \\ Y_{n+1}= [x_{1} - \sqrt{(x_{1}-\overline{b_{n}})^{2}+\frac{1}{n}}, \overline{b_{n}}], \quad n \in N\setminus \{1\}, \\ \text{\textit{we may choose }}\quad x_{n+1} = x_{1} - \sqrt{(x_{1}-\overline{b_{n}})^{2}+ \frac{1}{n}}, \quad n \in N, \\ 0 < \overline{b_{n}} \leq \overline{a_{n}}\leq 1, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(3.23)

In fact, if \(n = 1\), using (3.19) and (3.10), \(y_{1} = \frac{7}{6}\), \(v_{1} = \frac{41}{24}\) and \(\overline{a_{1}}= \min \{v_{1}, t_{0}\} = 1\). Then from (3.1), \(C_{2} = (-\infty , \frac{41}{24}]\), \(P_{C_{2}}(x_{1}) = x_{1}\), and then \(X_{2} = C_{2} \cap [0, 2] = [0, \frac{41}{24}]\). Thus we may choose \(w_{1} = x_{1} - \sqrt{(x_{1}-\overline{a_{1}})^{2}+\frac{1}{1}} = 0\). Then using (3.20), \(z_{1} = 1\). Since \(\beta _{1} x_{1}+(1-\beta _{1})(w_{1}+\varepsilon _{1}) - z_{1} = \beta _{1} x_{1}^{2}+(1-\beta _{1})(w_{1}+\varepsilon _{1})^{2} - z_{1}^{2} = 0\), we have \(Q_{2} = C_{2} \cap (-\infty , +\infty ) = C_{2}\) and then \(Y_{2} = X_{2}\). And \(\overline{b_{1}} = \min \{v_{1}, t_{1}\} = 1\), thus we may choose \(x_{2} = 1-\sqrt{(1-1)^{2}+1} = 0\). Therefore, (3.23) is true for \(n = 1\).

If \(n = 2\), it is easy to calculate that \(y_{2} = \frac{7}{32}\), \(v_{2} = \frac{143}{320}\) and \(0< \overline{a_{2}}= \min \{v_{1}, t_{0}, v_{2}, t_{1}\} = v_{2} = \frac{143}{320} < 1\). Then from (3.21), \(C_{3} = Q_{2} \cap (-\infty , v_{2}] = (-\infty , v_{1}] \cap (- \infty , v_{2}] = (-\infty , v_{2}] = (-\infty , \overline{a_{2}}]\), \(P_{C_{3}}(x_{1}) = \overline{a_{2}}\), and then \(X_{3} = [x_{1} - \sqrt{(x_{1}-\overline{a_{2}})^{2}+\frac{1}{2}}, \overline{a_{2}}]\). Thus we may choose \(w_{2} = x_{1} - \sqrt{(x_{1}-\overline{a_{2}})^{2}+\frac{1}{2}} = 0.1022543\). Thus \(z_{2} = 0.1720727\) and \(t_{2} = 0.587915\). And then from (3.22), \(Q_{3} = C_{3} \cap (-\infty , t_{2}] = (-\infty , \overline{a_{2}}] \cap (-\infty , t_{2}] = (-\infty , \overline{b_{2}}]\), \(Y_{3} = Q_{3} \cap [x_{1} - \sqrt{(x_{1}-\overline{b_{2}})^{2}+ \frac{1}{2}}, x_{1} + \sqrt{(x_{1}-\overline{b_{2}})^{2}+\frac{1}{2}}] = [x_{1} - \sqrt{(x_{1}-\overline{b_{2}})^{2}+\frac{1}{2}}, \overline{b_{2}}]\). Thus we may choose \(x_{3} = x_{1} - \sqrt{(x_{1}-\overline{b_{2}})^{2}+\frac{1}{2}}\). It is easy to check that \(0 < \overline{b_{2}} \leq \overline{a_{2}}\leq 1\). Therefore, (3.23) is true for \(n = 2\).
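The base-case arithmetic quoted above is easy to reproduce. The following minimal Python check (an illustrative sketch of ours, not the authors' code) recomputes \(y_{1} = \frac{7}{6}\), \(v_{1} = \frac{41}{24}\), \(y_{2} = \frac{7}{32}\), \(v_{2} = \frac{143}{320}\), \(w_{2} \approx 0.1022543\), \(z_{2} \approx 0.1720727\) and \(t_{2} \approx 0.587915\) directly from the formulas in (3.10):

    from math import sqrt, isclose

    def y(x, n):
        return x / 2 ** n + (1 - 1 / 2 ** n) * (0.5 + 1 / (3 * 2 ** n)) * (x + 1 / n)

    def v(x, n):
        num = x ** 2 / 2 ** n + (1 - 1 / 2 ** n) * (x + 1 / n) ** 2 - y(x, n) ** 2
        den = 2 * (x / 2 ** n + (1 - 1 / 2 ** n) * (x + 1 / n) - y(x, n))
        return num / den

    def z(x, w, n):
        return x / n + (1 - 1 / n) * (2 ** n / (2 ** (n + 1) - 1)) * (w + 1 / n)

    def t(x, w, n):
        num = x ** 2 / n + (1 - 1 / n) * (w + 1 / n) ** 2 - z(x, w, n) ** 2
        den = 2 * (x / n + (1 - 1 / n) * (w + 1 / n) - z(x, w, n))
        return num / den

    x1, t0, t1 = 1.0, 1.0, 1.0
    # n = 1
    assert isclose(y(x1, 1), 7 / 6) and isclose(v(x1, 1), 41 / 24)
    b1 = min(v(x1, 1), t1)                  # = 1
    x2 = x1 - sqrt((x1 - b1) ** 2 + 1 / 1)  # = 0
    # n = 2
    assert isclose(y(x2, 2), 7 / 32) and isclose(v(x2, 2), 143 / 320)
    a2 = min(v(x1, 1), t0, v(x2, 2), t1)    # = 143/320
    w2 = x1 - sqrt((x1 - a2) ** 2 + 1 / 2)
    assert isclose(w2, 0.1022543, abs_tol=1e-6)
    assert isclose(z(x2, w2, 2), 0.1720727, abs_tol=1e-6)
    assert isclose(t(x2, w2, 2), 0.587915, abs_tol=1e-5)
    print("base cases n = 1, 2 of (3.23) confirmed")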

Suppose (3.23) is true for \(n = k\) (\(k \geq 2\)), that is,

$$ \textstyle\begin{cases} C_{k+1} = (-\infty , \overline{a_{k}}], \\ X_{k+1} =[x_{1} - \sqrt{(x_{1}-\overline{a_{k}})^{2}+\frac{1}{k}}, \overline{a_{k}}], \\ {\textit{we may choose }}\quad w_{k} = x_{1} - \sqrt{(x_{1}-\overline{a_{k}})^{2}+ \frac{1}{k}}, \\ Q_{k+1} = (-\infty , \overline{b_{k}}], \\ Y_{k+1}= [x_{1} - \sqrt{(x_{1}-\overline{b_{k}})^{2}+\frac{1}{k}}, \overline{b_{k}}], \\ {\textit{we may choose }}\quad x_{k+1} = x_{1} - \sqrt{(x_{1}-\overline{b_{k}})^{2}+ \frac{1}{k}}, \\ 0 < \overline{b_{k}} \leq \overline{a_{k}}\leq 1. \end{cases} $$

Then, for \(n = k+1\), we can easily see from the definitions of \(\overline{a_{n}}\) and \(\overline{b_{n}}\) that \(\overline{b_{k+1}} \leq \overline{a_{k+1}}\leq t_{0} = 1\). Since \(0 < \overline{b_{k}}\leq 1\), we have \(1+\frac{1}{k+1} > \sqrt{(x_{1}-\overline{b_{k}})^{2}+\frac{1}{k}}\), which implies that \(x_{k+1} + e_{k+1} = x_{k+1}+\frac{1}{k+1} > 0\). Therefore,

$$\begin{aligned}& \begin{aligned}[b] &\alpha _{k+1}x_{k+1}+(1-\alpha _{k+1}) (x_{k+1} + e_{k+1})-y_{k+1} \\ &\quad = \biggl(1- \frac{1}{2^{k+1}}\biggr) \biggl(\frac{1}{2} - \frac{1}{3\times 2^{k+1}}\biggr) (x_{k+1} + e_{k+1})> 0. \end{aligned} \end{aligned}$$
(3.24)

Note that

$$\begin{aligned} 2(1-\alpha _{k+1}) \biggl(\frac{1}{2} + \frac{1}{3\times 2^{k+1}} \biggr)^{2} &= \biggl(1- \frac{1}{2^{k+1}}\biggr) \biggl(1 + \frac{1}{3\times 2^{k}}\biggr) \biggl(\frac{1}{2} + \frac{1}{3\times 2^{k+1}}\biggr) \\ &= \biggl(1+\frac{1}{3\times 2^{k}} - \frac{1}{2^{k+1}}- \frac{1}{6\times 4^{k}}\biggr) \biggl(\frac{1}{2} + \frac{1}{3\times 2^{k+1}}\biggr) \\ &= \biggl(1- \frac{1}{6\times 2^{k}} - \frac{1}{6\times 4^{k}}\biggr) \biggl(\frac{1}{2} + \frac{1}{3\times 2^{k+1}}\biggr) < 1, \end{aligned}$$

then

$$\begin{aligned}& \begin{aligned}[b] y_{k+1}^{2} &= \alpha _{k+1}^{2}x_{k+1}^{2}+2\alpha _{k+1}(1-\alpha _{k+1}) \biggl( \frac{1}{2} + \frac{1}{3\times 2^{k+1}}\biggr)x_{k+1}(x_{k+1} + e_{k+1}) \\ &\quad{} +(1-\alpha _{k+1})^{2}\biggl(\frac{1}{2} + \frac{1}{3\times 2^{k+1}}\biggr)^{2}(x_{k+1} + e_{k+1})^{2} \\ &\leq 2\alpha _{k+1}^{2}x_{k+1}^{2}+2(1- \alpha _{k+1})^{2}\biggl(\frac{1}{2} + \frac{1}{3\times 2^{k+1}}\biggr)^{2}(x_{k+1} + e_{k+1})^{2} \\ &\leq \alpha _{k+1}x_{k+1}^{2}+(1-\alpha _{k+1}) (x_{k+1} + e_{k+1})^{2}. \end{aligned} \end{aligned}$$
(3.25)

Therefore, (3.24) and (3.25) imply that \(v_{k+1} > 0\). Since \(\overline{b_{k}} > 0\) and \(v_{k+1} > 0\), we have \(\overline{a_{k+1}} > 0\). That is, \(0 < \overline{a_{k+1}}\leq 1\).

Using (3.21), \(C_{k+2} = Q_{k+1} \cap (-\infty , v_{k+1}] = (-\infty , \overline{b_{k}}] \cap (-\infty , v_{k+1}] = (-\infty , \overline{a_{k+1}}]\). Then \(X_{k+2} = C_{k+2} \cap [x_{1} - \sqrt{(x_{1}-\overline{a_{k+1}})^{2}+ \frac{1}{k+1}}, x_{1}+\sqrt{(x_{1}-\overline{a_{k+1}})^{2}+ \frac{1}{k+1}}] = [x_{1} - \sqrt{(x_{1}-\overline{a_{k+1}})^{2}+ \frac{1}{k+1}}, \overline{a_{k+1}}]\). Thus we may choose \(w_{k+1} = x_{1} - \sqrt{(x_{1}-\overline{a_{k+1}})^{2}+\frac{1}{k+1}}\).

Since \((1+\frac{1}{k+1})^{2} > (1 - \overline{a_{k+1}})^{2}+ \frac{1}{k+1}\), we have \(w_{k+1}+\varepsilon _{k+1} = 1 - \sqrt{(1-\overline{a_{k+1}})^{2}+ \frac{1}{k+1}} +\frac{1}{k+1}> 0\), which ensures that

$$\begin{aligned}& \begin{aligned}[b] & \beta _{k+1}x_{k+1}+(1-\beta _{k+1}) (w_{k+1} + \varepsilon _{k+1})-z_{k+1} \\ &\quad = \biggl(1- \frac{1}{k+1}\biggr)\frac{2^{k+2}-2^{k+1}-1}{2^{k+2}-1}(w_{k+1} + \varepsilon _{k+1})>0. \end{aligned} \end{aligned}$$
(3.26)

Note that

$$\begin{aligned}& 2(1-\beta _{k+1})^{2} \biggl(\frac{2^{k+1}}{2^{k+2}-1} \biggr)^{2}\leq 1-\beta _{k+1} \\& \quad \Longleftrightarrow \quad \biggl(1-\frac{1}{k+1}\biggr) \frac{2^{k+2}}{2^{k+2}-1} \frac{2^{k+1}}{2^{k+2}-1}\leq 1 \\& \quad \Longleftrightarrow \quad (k+1)\times 8\times 2^{k} \leq (k+1)+8(k+1) \times 4^{k}+8\times 4^{k}. \end{aligned}$$

The last inequality is obviously true for \(k \in N\). Thus

$$\begin{aligned}& \begin{aligned}[b] z_{k+1}^{2} &= \beta _{k+1}^{2}x_{k+1}^{2}+2\beta _{k+1}(1-\beta _{k+1}) \frac{2^{k+1}}{2^{k+2}-1}x_{k+1}(w_{k+1} + \varepsilon _{k+1}) \\ &\quad{} +(1-\beta _{k+1})^{2}\biggl(\frac{2^{k+1}}{2^{k+2}-1} \biggr)^{2}(w_{k+1} + \varepsilon _{k+1})^{2} \\ &\leq 2\beta _{k+1}^{2}x_{k+1}^{2}+2(1- \beta _{k+1})^{2}\biggl( \frac{2^{k+1}}{2^{k+2}-1} \biggr)^{2}(w_{k+1} + \varepsilon _{k+1})^{2} \\ &\leq \beta _{k+1}x_{k+1}^{2}+(1-\beta _{k+1}) (w_{k+1} + \varepsilon _{k+1})^{2}. \end{aligned} \end{aligned}$$
(3.27)

Inequalities (3.26) and (3.27) imply that \(t_{k+1}> 0\), which ensures that \(\overline{b_{k+1}} > 0\) since \(\overline{a_{k+1}}> 0\). Using (3.22), \(Q_{k+2} = C_{k+2}\cap (-\infty ,t_{k+1}] = (-\infty , \overline{a_{k+1}}] \cap (-\infty ,t_{k+1}]= (-\infty ,\overline{b_{k+1}}]\), and \(Y_{k+2}= Q_{k+2}\cap [x_{1} - \sqrt{(x_{1}-\overline{b_{k+1}})^{2}+ \frac{1}{k+1}}, x_{1}+ \sqrt{(x_{1}-\overline{b_{k+1}})^{2}+ \frac{1}{k+1}}] = [x_{1} - \sqrt{(x_{1}-\overline{b_{k+1}})^{2}+ \frac{1}{k+1}}, \overline{b_{k+1}}]\). Thus we may choose \(x_{k+2} = x_{1} - \sqrt{(x_{1}-\overline{b_{k+1}})^{2}+\frac{1}{k+1}}\). Hence (3.23) is true for all \(n \in N\).

Therefore, the sequence \(\{x_{n}\}\) defined in (3.10) is well defined.

Finally, we shall show that \(x_{n} \rightarrow 0\), as \(n \rightarrow \infty \).

From (3.10) or (3.23), we can easily see that \(\{x_{n}\}\) is bounded and \(0 < \overline{b_{n}} \leq 1\) for \(n \in N\). Since \(\{\overline{b_{n}}\}\) is non-increasing and bounded below, it converges; hence, by the last line of (3.10), \(\{x_{n}\}\) converges as well, say \(\lim_{n \rightarrow \infty }x_{n} = \xi \). Then, using (3.10), we see that \(\lim_{n \rightarrow \infty }y_{n} = \frac{\xi }{2}\) and \(\lim_{n \rightarrow \infty }v_{n} = \frac{3\xi }{4}\). Since \(x_{n+1} = x_{1} - \sqrt{(x_{1}-\overline{b_{n}})^{2}+ \frac{1}{n}}\), we have \(\lim_{n \rightarrow \infty }\overline{b_{n}} = \xi \). Note that \(0 < \overline{b_{n}}\leq v_{n}\), so \(0 \leq \xi \leq \frac{3}{4}\xi \), which implies that \(\xi = 0\). Therefore, \(x_{n} \rightarrow 0\), as \(n \rightarrow \infty \). Moreover, it is not difficult to see that \(y_{n} \rightarrow 0\), \(v_{n} \rightarrow 0\), \(w_{n} \rightarrow 0\) and \(z_{n} \rightarrow 0\), as \(n \rightarrow \infty \).

This completes the proof. □

Remark 3.14

We conduct computational experiments on scheme (3.11) in Corollary 3.13. Using Visual Basic 6 code, we obtain Table 1 and Fig. 1.

Figure 1 Convergence of \(\{x_{n}\}\) and \(\{w_{n}\}\) corresponding to Table 1

Table 1 Numerical Results of \(\{x_{n}\}\) and \(\{w_{n}\}\) with initial \(x_{1} = 1.0\) based on (3.11)
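For readers who prefer an open tool chain, the following short Python sketch is our own re-implementation of scheme (3.11) with \(x_{1} = 1\) and \(t_{0} = t_{1} = 1\) (the original experiments were done in Visual Basic 6, so the printed digits may differ slightly from Table 1; the number of iterations N is arbitrary). It prints \(x_{n}\) and \(w_{n}\), which, by Corollary 3.13, converge to the common zero 0:

    from math import sqrt

    N = 30
    x = [None, 1.0]          # x[n] = x_n, 1-indexed
    t = {0: 1.0, 1: 1.0}     # t_0 = t_1 = 1
    v, w = {}, {}

    for n in range(1, N + 1):
        xn = x[n]
        yn = xn / 2 ** n + (1 - 1 / 2 ** n) * (0.5 + 1 / (3 * 2 ** n)) * (xn + 1 / n)
        v[n] = (xn ** 2 / 2 ** n + (1 - 1 / 2 ** n) * (xn + 1 / n) ** 2 - yn ** 2) / (
            2 * (xn / 2 ** n + (1 - 1 / 2 ** n) * (xn + 1 / n) - yn))
        w[n] = min(min(v[m] for m in range(1, n + 1)),
                   min(t[m - 1] for m in range(1, n + 1)))
        zn = xn / n + (1 - 1 / n) * (2 ** n / (2 ** (n + 1) - 1)) * (w[n] + 1 / n)
        if n >= 2:
            t[n] = (xn ** 2 / n + (1 - 1 / n) * (w[n] + 1 / n) ** 2 - zn ** 2) / (
                2 * (xn / n + (1 - 1 / n) * (w[n] + 1 / n) - zn))
        bn = min(min(v[m] for m in range(1, n + 1)),
                 min(t[m] for m in range(1, n + 1)))
        x.append(1.0 - sqrt((1.0 - bn) ** 2 + 1 / n))
        print(f"n = {n:2d}   x_n = {xn: .7f}   w_n = {w[n]: .7f}")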

4 Applications

4.1 Application to convex minimization problems

Suppose \(f : E \rightarrow (-\infty , +\infty ]\) is a proper convex and lower-semicontinuous function. Then the subdifferential of f, ∂f, is defined as follows: \(\forall x \in E\),

$$ \partial f(x) = \bigl\{ y \in E^{*}: f(x) + \langle z - x, y\rangle \leq f(z), \forall z \in E\bigr\} . $$
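For example, in the simplest case \(E = R\) and \(f(x) = \vert x \vert \), this definition gives \(\partial f(x) = \{1\}\) for \(x > 0\), \(\partial f(x) = \{-1\}\) for \(x < 0\) and \(\partial f(0) = [-1,1]\).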

Theorem 4.1

Let E, \(\alpha _{n}\), \(\beta _{n}\), \(e_{n}\), \(\varepsilon _{n}\), \(\delta _{n}\) and \(\vartheta _{n}\) be the same as those in Theorem 3.1. Let \(f , g: E \rightarrow (-\infty , +\infty ]\) be two proper convex and lower-semicontinuous functions. Let \(\{x_{n}\}\) be generated by the following iterative algorithm:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E, \\ \overline{u_{n}}= \operatorname{argmin}_{z \in E}\{f(z) + \frac{ \Vert z \Vert ^{2}}{2r_{n}}- \frac{1}{r_{n}} \langle z, J_{E}(x_{n}+e_{n})\rangle \}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})a_{0}J_{E}(x_{n}+e_{n})+(1- \alpha _{n})(1-a_{0})J_{E}\overline{u_{n}}], \\ C_{1} = E = X_{1}, \quad\quad Q_{1} = E = Y_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ X_{n+1} = \{v \in C_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{C_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \delta _{n}\}, \\ w_{n} \in X_{n+1}, \\ \overline{\overline{u_{n}}}= \operatorname{argmin}_{z \in E}\{g(z) + \frac{ \Vert z \Vert ^{2}}{2s_{n}}-\frac{1}{s_{n}} \langle z, J_{E}(w_{n}+ \varepsilon _{n})\rangle \}, \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})b_{0}J_{E}(w_{n}+ \varepsilon _{n})+(1-\beta _{n})(1-b_{0})J_{E}\overline{\overline{u_{n}}}], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ Y_{n+1} = \{v \in Q_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{Q_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \vartheta _{n}\}, \\ x_{n+1}\in Y_{n+1}, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(4.1)

Under the assumptions that \((\partial f)^{-1}0 \cap (\partial g)^{-1}0\neq \emptyset \), \(\inf_{n} r_{n} > 0\) and \(\inf_{n} s_{n} > 0\), we have \(x_{n} \rightarrow P_{(\partial f)^{-1}0 \cap (\partial g)^{-1}0}(x_{1})\), as \(n \rightarrow \infty \).

Proof

As in [25], \(\overline{u_{n}}= \operatorname{argmin}_{z \in E}\{f(z) + \frac{ \Vert z \Vert ^{2}}{2r_{n}}- \frac{1}{r_{n}} \langle z, J_{E}(x_{n}+e_{n})\rangle \}\) is equivalent to \(0 \in \partial f(\overline{u_{n}})+\frac{1}{r_{n}}J_{E} \overline{u_{n}}- \frac{1}{r_{n}}J_{E}(x_{n}+e_{n})\), that is, \(\overline{u_{n}}= (J_{E}+r_{n} \partial f)^{-1} J_{E}(x_{n}+e_{n})\). Similarly, \(\overline{\overline{u_{n}}}= \operatorname{argmin}_{z \in E}\{g(z) + \frac{ \Vert z \Vert ^{2}}{2s_{n}}-\frac{1}{s_{n}} \langle z, J_{E}(w_{n}+ \varepsilon _{n})\rangle \}\) is equivalent to \(0 \in \partial g(\overline{\overline{u_{n}}})+\frac{1}{s_{n}}J_{E} \overline{\overline{u_{n}}}- \frac{1}{s_{n}}J_{E}(w_{n}+\varepsilon _{n})\), that is, \(\overline{\overline{u_{n}}}= (J_{E}+s_{n} \partial g)^{-1} J_{E}(w_{n}+ \varepsilon _{n})\). The result then follows from Theorem 3.1.

This completes the proof. □
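To see the argmin step of (4.1) more concretely, consider the Hilbert-space specialization \(E = R\), so that \(J_{E}\) is the identity: the step then reduces to the classical proximal map, i.e. to the resolvent \((I + r\partial f)^{-1}\) computed in the proof. The sketch below is an illustrative special case of ours (with \(f(z) = \vert z\vert \), for which the resolvent is soft-thresholding), not the authors' implementation; it confirms the identification numerically by brute-force minimization over a grid:

    import numpy as np

    def prox_abs(x, r):
        """(I + r * subdifferential of |.|)^{-1} x = sign(x) * max(|x| - r, 0) on R."""
        return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

    def argmin_by_grid(x, r, grid):
        """Brute-force minimizer of z -> |z| + z^2/(2r) - z*x/r, the argmin step of (4.1) on R."""
        vals = np.abs(grid) + grid ** 2 / (2 * r) - grid * x / r
        return grid[np.argmin(vals)]

    grid = np.linspace(-5.0, 5.0, 200001)   # spacing 5e-5
    for x in (-3.0, -0.4, 0.0, 0.7, 2.5):
        for r in (0.5, 1.0, 2.0):
            assert abs(prox_abs(x, r) - argmin_by_grid(x, r, grid)) < 1e-3
    print("argmin step of (4.1) agrees with the resolvent of the subdifferential for f = |.|")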

Remark 4.2

Similarly, we can modify (4.1) and get the corresponding convergence theorems with respect to Corollaries 3.2–3.7.

4.2 Application to variational inequalities

Let C be a non-empty closed and convex subset of E. Let \(T: C \rightarrow E^{*}\) be a single-valued, monotone and hemi-continuous mapping. The variational inequality problem is to find \(u \in C\) such that

$$ \langle y - u, Tu\rangle \geq 0, \quad \forall y \in C. $$
(4.2)

The symbol \(\operatorname{VI}(C,T)\) denotes the solution set of the variational inequality problem (4.2).

It follows from [26] that \(A: E \rightarrow 2^{E^{*}}\) defined by

$$ Ax = \textstyle\begin{cases} Tx+N_{C}x, &x\in C, \\ \emptyset , &x \notin C, \end{cases} $$

is maximal monotone and \(A^{-1}0 = \operatorname{VI}(C,T)\), where \(N_{C}(x) = \{z\in E^{*}: \langle y - x, z\rangle \leq 0, \forall y \in C\}\).
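As a concrete illustration of (4.2), consider the one-dimensional Hilbert-space example \(E = R\), \(C = [0,2]\) and \(Tu = u + \frac{1}{2}\) (our own toy example, not taken from the paper). In a Hilbert space it is well known that \(u \in \operatorname{VI}(C,T)\) if and only if \(u = P_{C}(u - \lambda Tu)\) for any \(\lambda > 0\), so a solution can be approximated by the corresponding fixed-point iteration and then checked against the defining inequality:

    def proj_C(x, lo=0.0, hi=2.0):
        """Metric projection of R onto C = [lo, hi]."""
        return min(max(x, lo), hi)

    def T(u):
        return u + 0.5            # single-valued, monotone and continuous on C

    u, lam = 1.7, 0.5             # arbitrary starting point; fixed step size
    for _ in range(200):          # fixed-point iteration u <- P_C(u - lam * T(u))
        u = proj_C(u - lam * T(u))

    # check the defining inequality <y - u, T(u)> >= 0 on a sample of points of C
    assert all((y - u) * T(u) >= -1e-9 for y in (0.0, 0.5, 1.0, 1.5, 2.0))
    print(f"approximate solution of VI(C, T): u = {u:.6f}")   # expected: u = 0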

Theorem 4.3

Let E, \(\alpha _{n}\), \(\beta _{n}\), \(e_{n}\), \(\varepsilon _{n}\), \(\delta _{n}\) and \(\vartheta _{n}\) be the same as those in Theorem 3.1. Let C be a non-empty closed and convex subset of E. Let \(T_{1}, T_{2}: C \rightarrow E^{*}\) be two single-valued, monotone and hemi-continuous mappings. Let \(A,B: E \rightarrow 2^{E^{*}}\) be defined as follows:

$$ Ax = \textstyle\begin{cases} T_{1}x+N_{C}x,&x\in C, \\ \emptyset , &x \notin C, \end{cases} $$

and

$$ Bx = \textstyle\begin{cases} T_{2}x+N_{C}x,&x\in C, \\ \emptyset , &x \notin C. \end{cases} $$

Let \(\{x_{n}\}\) be generated by the following iterative algorithm:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E, \\ \overline{u_{n}}= \operatorname{VI}(C, T_{1}+\frac{1}{r_{n}}J_{E} - \frac{1}{r_{n}}J_{E}(x_{n}+e_{n})), \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})a_{0}J_{E}(x_{n}+e_{n})+(1- \alpha _{n})(1-a_{0})J_{E}\overline{u_{n}}], \\ C_{1} = E = X_{1}, \quad\quad Q_{1} = E = Y_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ X_{n+1} = \{v \in C_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{C_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \delta _{n}\}, \\ w_{n} \in X_{n+1}, \\ \overline{\overline{u_{n}}}= \operatorname{VI}(C, T_{2}+\frac{1}{s_{n}}J_{E} - \frac{1}{s_{n}}J_{E}(w_{n}+\varepsilon _{n})), \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})b_{0}J_{E}(w_{n}+ \varepsilon _{n})+(1-\beta _{n})(1-b_{0})J_{E}\overline{\overline{u_{n}}}], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ Y_{n+1} = \{v \in Q_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{Q_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \vartheta _{n}\}, \\ x_{n+1}\in Y_{n+1}, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(4.3)

Under the assumptions that \(\operatorname{VI}(C,T_{1})\cap \operatorname{VI}(C,T_{2})\neq \emptyset \), \(\inf_{n} r_{n} > 0\) and \(\inf_{n} s_{n} > 0\), we have \(x_{n} \rightarrow P_{\operatorname{VI}(C,T_{1})\cap \operatorname{VI}(C,T_{2})}(x_{1})\), as \(n \rightarrow \infty \).

Proof

$$ \begin{gathered} \overline{u_{n}}= \operatorname{VI} \biggl(C, T_{1}+\frac{1}{r_{n}}J_{E} - \frac{1}{r_{n}}J_{E}(x_{n}+e_{n})\biggr) \\ \quad \Leftrightarrow \quad \biggl\langle y - \overline{u_{n}}, T_{1}\overline{u_{n}}+ \frac{1}{r_{n}} J_{E} \overline{u_{n}}- \frac{1}{r_{n}}J_{E}(x_{n}+e_{n}) \biggr\rangle \geq 0, \quad \forall y \in C \\ \quad \Leftrightarrow \quad J_{E}(x_{n}+e_{n}) \in r_{n}A\overline{u_{n}}+J_{E} \overline{u_{n}} \Longleftrightarrow \overline{u_{n}} = (J_{E}+r_{n}A)^{-1}J_{E}(x_{n}+e_{n}). \end{gathered} $$

Similarly, we have \(\overline{\overline{u_{n}}} = (J_{E}+s_{n}B)^{-1}J_{E}(w_{n}+ \varepsilon _{n})\). The result then follows from Theorem 3.1.

This completes the proof. □

Remark 4.4

Similarly, we can modify (4.3) and get the corresponding convergence theorems with respect to Corollaries 3.2–3.7.

4.3 Approximating to common solution of both minimization problems and variational inequalities

Theorem 4.5

Let E, \(\alpha _{n}\), \(\beta _{n}\), \(e_{n}\), \(\varepsilon _{n}\), \(\delta _{n}\), \(\vartheta _{n}\) and f be the same as those in Theorem 4.1. Let C be a non-empty closed and convex subset of E. Suppose \(T: C \rightarrow E^{*}\) is a single-valued, monotone and hemi-continuous mapping and \(A: E \rightarrow 2^{E^{*}}\) is defined by

$$ Ax = \textstyle\begin{cases} Tx+N_{C}x, &x\in C, \\ \emptyset , &x \notin C. \end{cases} $$

Let \(\{x_{n}\}\) be generated by the following iterative algorithm:

$$\begin{aligned}& \textstyle\begin{cases} x_{1}, e_{1}, \varepsilon _{1} \in E, \\ \overline{u_{n}}= \operatorname{argmin}_{z \in E}\{f(z) + \frac{ \Vert z \Vert ^{2}}{2r_{n}}- \frac{1}{r_{n}} \langle z, J_{E}(x_{n}+e_{n})\rangle \}, \\ y_{n} = J^{-1}_{E}[\alpha _{n}J_{E}x_{n}+(1-\alpha _{n})a_{0}J_{E}(x_{n}+e_{n})+(1- \alpha _{n})(1-a_{0})J_{E}\overline{u_{n}}], \\ C_{1} = E = X_{1}, \quad\quad Q_{1} = E = Y_{1}, \\ C_{n+1} = \{v \in Q_{n}: \varphi (v,y_{n}) \leq \alpha _{n}\varphi (v,x_{n}) + (1-\alpha _{n})\varphi (v,x_{n}+e_{n})\}, \\ X_{n+1} = \{v \in C_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{C_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \delta _{n}\}, \\ w_{n} \in X_{n+1}, \\ \overline{\overline{u_{n}}} = \operatorname{VI}(C, T+\frac{1}{s_{n}}J_{E} - \frac{1}{s_{n}}J_{E}(w_{n}+\varepsilon _{n})), \\ z_{n}= J^{-1}_{E}[\beta _{n}J_{E}x_{n}+(1-\beta _{n})b_{0}J_{E}(w_{n}+ \varepsilon _{n})+(1-\beta _{n})(1-b_{0})J_{E}\overline{\overline{u_{n}}}], \\ Q_{n+1} = \{v \in C_{n+1}: \varphi (v,z_{n}) \leq \beta _{n}\varphi (v,x_{n}) + (1-\beta _{n}) \varphi (v,w_{n}+\varepsilon _{n})\}, \\ Y_{n+1} = \{v \in Q_{n+1}: \Vert x_{1} - v \Vert ^{2} \leq \Vert P_{Q_{n+1}}(x_{1}) - x_{1} \Vert ^{2} + \vartheta _{n}\}, \\ x_{n+1}\in Y_{n+1}, \quad n \in N. \end{cases}\displaystyle \end{aligned}$$
(4.4)

Under the assumptions that \((\partial f)^{-1}0 \cap \operatorname{VI}(C,T)\neq \emptyset \), \(\inf_{n} r_{n} > 0\) and \(\inf_{n} s_{n} > 0\), we have \(x_{n} \rightarrow P_{(\partial f)^{-1}0 \cap \operatorname{VI}(C,T)}(x_{1})\), as \(n \rightarrow \infty \).

Proof

By arguments similar to those in Theorems 4.1 and 4.3, the result is easily obtained. This completes the proof. □

References

  1. Pascali, D., Sburlan, S.: Nonlinear Mappings of Monotone Type. Sijthoff & Noordhoff, Rockville (1978)

  2. Alber, Y.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Dekker, New York (1996)

  3. Takahashi, W.: Proximal point algorithm and four resolvents of nonlinear operators of monotone type in Banach spaces. Taiwan. J. Math. 12, 1883–1910 (2008)

  4. Zhang, J.L., Su, Y.F., Cheng, Q.Q.: Simple projection algorithm for a countable family of weakly relatively nonexpansive mappings and applications. Fixed Point Theory Appl. 2012, Article ID 205 (2012) http://www.fixedpointtheoryandapplications.com/content/2012/1/205

  5. Petrot, N., Wattanawitoon, K., Kumam, P.: A hybrid projection method for generalized mixed equilibrium problems and fixed point problems in Banach spaces. Nonlinear Anal. Hybrid Syst. 4, 631–643 (2010)

  6. Saewan, S., Kumam, P.: Convergence theorems for mixed equilibrium problems, variational inequality problem and uniformly quasi-ψ-asymptotically non-expansive mappings. Appl. Math. Comput. 218, 3522–3538 (2011)

  7. Saewan, S., Kumam, P., Kanjanasamranwonga, P.: The hybrid projection algorithm for finding the common fixed points of nonexpansive mappings and the zeroes of maximal monotone operators in Banach spaces. Optimization 63(9), 1319–1338 (2014) https://www.tandfonline.com/doi/abs/10.1080/02331934.2012.724686?queryID=51/6325696

  8. Wattanawitoon, K., Kumam, P.: Strong convergence theorems by a new hybrid projection algorithm for fixed point problems and equilibrium problems of two relatively quasi-nonexpansive mappings. Nonlinear Anal. Hybrid Syst. 3, 11–20 (2009)

  9. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)

  10. Ceng, L.C., Petrusel, A., Yao, J.C., Yao, Y.: Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 20, 113–133 (2019)

  11. Ceng, L.C., Petrusel, A., Yao, J.C., Yao, Y.: Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 19, 487–502 (2018)

  12. Kohsaka, F.: An implicity defined iterative sequence for monotone operators in Banach spaces. J. Inequal. Appl. 2014, Article ID 181 (2014) http://www.journalofinequalitiesandapplications.com/content/2014/1/181

  13. Yao, Y., Postolache, M., Yao, J.C.: An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 7, Article ID 61 (2019)

  14. Wei, L., Agarwal, R.P.: Simple form of a projection set in hybrid iterative schemes for non-linear mappings, application of inequalities and computational experiments. J. Inequal. Appl. 2018, Article ID 179 (2018). https://doi.org/10.1186/s13660-018-1774-z

  15. Wei, L., Tan, R.L.: Iterative schemes for finite families of maximal monotone operators based on resolvents. Abstr. Appl. Anal. 2014, Article ID 451279 (2014). https://doi.org/10.1155/2014/451279

  16. Wei, L., Chen, R.: Study on the existence of non-trivial solution of one kind p-Laplacian-like Neumann boundary value problems and iterative schemes. Appl. Math. J. Chin. Univ. Ser. A 30(2), 180–190 (2015) (in Chinese)

  17. Wei, L., Shi, A.F.: Iterative approximation to solution of one kind curvature systems. Math. Appl. 28(4), 761–770 (2015) (in Chinese)

  18. Wei, L., Agarwal, R.P.: New construction and proof techniques of projection algorithm for countable maximal monotone mappings and weakly relatively non-expansive mappings in a Banach space. J. Inequal. Appl. 2018, Article ID 64 (2018). https://doi.org/10.1186/s13660-018-1657-3

  19. Agarwal, R.P., O’Regan, D., Sahu, D.R.: Fixed Point Theory for Lipschitz-Type Mappings with Applications. Springer, Berlin (2008)

  20. Takahashi, W.: Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama (2000)

  21. Mosco, U.: Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 3, 510–585 (1969)

  22. Tsukada, M.: Convergence of best approximations in a smooth Banach space. J. Approx. Theory 40, 301–309 (1984)

  23. Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13, 938–945 (2003)

  24. Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127–1138 (1991)

  25. Wei, L., Cho, Y.J.: Iterative schemes for zero points of maximal monotone operators and fixed points of nonexpansive mappings and their applications. Fixed Point Theory Appl. 2008, Article ID 168468 (2008)

  26. Rockafellar, R.T.: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75–88 (1970)


Acknowledgements

The authors are grateful for the reviewers' valuable suggestions.

Availability of data and materials

Data sharing not applicable to this article as no data sets were generated or analyzed during the current study.

Funding

The first three authors were supported by the Natural Science Foundation of Hebei Province under Grant No. A2019207064, the Key Project of Science and Research of Hebei Educational Department under Grant No. ZD2019073, and the Key Project of Science and Research of Hebei University of Economics and Business under Grant No. 2018ZD06.

Author information

Contributions

The first and the fourth author are responsible for abstract results and paper writing. The second author is responsible for the numerical experiment and the third author is responsible for applications of the abstract results. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Li Wei.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Wei, L., Chen, R., Zhang, Y. et al. New shrinking iterative methods for infinite families of monotone operators in a Banach space, computational experiments and applications. J Inequal Appl 2020, 67 (2020). https://doi.org/10.1186/s13660-020-02330-y
