
Composite viscosity methods for common solutions of general mixed equilibrium problem, variational inequalities and common fixed points

Abstract

In this paper, we introduce a new composite viscosity iterative algorithm and prove the strong convergence of the proposed algorithm to a common fixed point of one finite family of nonexpansive mappings and another infinite family of nonexpansive mappings, which also solves a general mixed equilibrium problem and a finite family of variational inequalities. An example is also provided in support of the main result. The main result presented in this paper improves and extends some corresponding ones in the earlier and recent literature.

1 Introduction

Let H be a real Hilbert space with inner product \(\langle\cdot,\cdot \rangle\) and norm \(\|\cdot\|\), C be a nonempty closed convex subset of H and \(P_{C}\) be the metric projection of H onto C. Let \(S:C\to C\) be a self-mapping on C. We denote by \(\operatorname{Fix}(S)\) the set of fixed points of S and by \({\mathbf{R}}\) the set of all real numbers. A mapping \(A:C\to H\) is called α-inverse strongly monotone if there exists a constant \(\alpha>0\) such that

$$\langle Ax-Ay,x-y\rangle\geq\alpha\|Ax-Ay\|^{2}, \quad \forall x,y\in C. $$

A mapping \(A:C\to H\) is called L-Lipschitz continuous if there exists a constant \(L>0\) such that

$$\|Ax-Ay\|\leq L\|x-y\|,\quad \forall x,y\in C. $$

In particular, if \(L=1\) then A is called a nonexpansive mapping; if \(L\in(0,1)\) then A is called a contraction.

Let \(A:C\to H\) be a nonlinear mapping on C. We consider the following variational inequality problem (VIP): find a point \(x^{*}\in C\) such that

$$ \bigl\langle Ax^{*},x-x^{*}\bigr\rangle \geq0,\quad \forall x\in C. $$
(1.1)

The solution set of VIP (1.1) is denoted by \(\operatorname{VI}(C,A)\).

The VIP (1.1) was first discussed by Lions [1] and is now well known; there are many different approaches to solving VIP (1.1) in finite-dimensional and infinite-dimensional spaces, and the research is intensively continued. The VIP (1.1) has many applications in computational mathematics, mathematical physics, operations research, mathematical economics, optimization theory, and other fields; see, e.g., [2–5]. It is well known that, if A is a strongly monotone and Lipschitz continuous mapping on C, then VIP (1.1) has a unique solution. Not only the existence and uniqueness of solutions, but also methods for actually computing a solution of VIP (1.1), are important topics in its study.

In 1976, Korpelevich [6] proposed an iterative algorithm for solving the VIP (1.1) in Euclidean space \({\mathbf{R}}^{n}\):

$$\left \{ \textstyle\begin{array}{l} y_{n}=P_{C}(x_{n}-\tau Ax_{n}), \\ x_{n+1}=P_{C}(x_{n}-\tau Ay_{n}), \quad \forall n\geq0, \end{array}\displaystyle \right . $$

with \(\tau>0\) a given number, which is known as the extragradient method. The literature on the VIP is vast, and Korpelevich’s extragradient method has received much attention from many authors, who improved it in various ways; see, e.g., [7–24] and the references therein, to name but a few. In particular, motivated by the idea of Korpelevich’s extragradient method [6], Nadezhkina and Takahashi [11] introduced an extragradient iterative scheme:

$$ \left \{ \textstyle\begin{array}{l} x_{0}=x\in C\quad \mbox{chosen arbitrarily}, \\ y_{n}=P_{C}(x_{n}-\lambda_{n}Ax_{n}), \\ x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})SP_{C}(x_{n}-\lambda_{n}Ay_{n}),\quad \forall n\geq0, \end{array}\displaystyle \right . $$
(1.2)

where \(A:C\to H\) is a monotone, L-Lipschitz continuous mapping, \(S:C\to C\) is a nonexpansive mapping and \(\{\lambda_{n}\}\subset[a,b]\) for some \(a,b\in(0,1/L)\) and \(\{\alpha_{n}\}\subset [c,d]\) for some \(c,d\in(0,1)\). They proved the weak convergence of \(\{x_{n}\}\) generated by (1.2) to an element of \(\operatorname{Fix}(S)\cap\operatorname{VI}(C,A)\). Subsequently, given a contractive mapping \(f:C\to C\), an α-inverse strongly monotone mapping \(A:C\to H\) and a nonexpansive mapping \(T: C\to C\), Jung ([25], Theorem 3.1) introduced the following two-step iterative scheme by the viscosity approximation method:

$$ \left \{ \textstyle\begin{array}{l} x_{0}=x\in C\quad \mbox{chosen arbitrarily}, \\ y_{n}=\alpha_{n}f(x_{n})+(1-\alpha_{n})TP_{C}(x_{n}-\lambda_{n}Ax_{n}), \\ x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}TP_{C}(y_{n}-\lambda_{n}Ay_{n}), \quad \forall n\geq0, \end{array}\displaystyle \right . $$
(1.3)

where \(\{\lambda_{n}\}\subset(0,2\alpha)\) and \(\{\alpha_{n}\},\{\beta_{n}\} \subset[0,1)\). It was proven in [25] that, if \(\operatorname{Fix} (T)\cap\operatorname{VI}(C,A)\neq\emptyset\), then the sequence \(\{x_{n}\}\) generated by (1.3) converges strongly to \(q=P_{\operatorname{Fix}(T) \cap\operatorname{VI}(C,A)}f(q)\).
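As a simple numerical illustration (not taken from the cited works), the extragradient iteration displayed above can be run for a monotone, Lipschitz continuous operator on a box in \({\mathbf{R}}^{2}\); the operator A, the set C, and the step size τ below are our own illustrative choices.

```python
import numpy as np

# Monotone, Lipschitz continuous operator A(x) = Gx + q (skew-symmetric part plus
# a small positive semidefinite part); G, q, C and tau are illustrative choices.
G = np.array([[0.1, 1.0],
              [-1.0, 0.1]])
q = np.array([-1.0, 0.5])
A = lambda x: G @ x + q

# C = [0, 2]^2; its metric projection is componentwise clipping.
P_C = lambda x: np.clip(x, 0.0, 2.0)

L = np.linalg.norm(G, 2)          # a Lipschitz constant of A
tau = 0.9 / L                     # step size tau in (0, 1/L)

x = np.array([2.0, 2.0])          # arbitrary starting point in C
for n in range(2000):
    y = P_C(x - tau * A(x))       # predictor step
    x = P_C(x - tau * A(y))       # corrector (extragradient) step

# Residual of the fixed-point characterization recalled in Lemma 2.6 below;
# it vanishes exactly at solutions of VIP (1.1).
print(x, np.linalg.norm(x - P_C(x - tau * A(x))))
```

The printed residual \(\|x-P_{C}(x-\tau Ax)\|\) is zero exactly at solutions of VIP (1.1), so a small value indicates approximate convergence of the iterates.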

On the other hand, we consider the general mixed equilibrium problem (GMEP) (see also [26, 27]) of finding \(x \in C\) such that

$$ {\varTheta }(x,y)+h(x,y)\geq0,\quad \forall y\in C, $$
(1.4)

where \({\varTheta },h:C\times C\to{\mathbf{R}}\) are two bi-functions. The GMEP (1.4) has been considered and studied by many authors; see, e.g., [28–30]. We denote the set of solutions of GMEP (1.4) by \(\operatorname{GMEP}({\varTheta },h)\). The GMEP (1.4) is very general; for example, it includes the following equilibrium problems as special cases.

As an example, in [14, 15, 31], the authors considered and studied the generalized equilibrium problem (GEP) which is to find \(x\in C\) such that

$${\varTheta }(x,y)+\langle Ax,y-x\rangle\geq0,\quad \forall y\in C. $$

The set of solutions of GEP is denoted by \(\operatorname{GEP}({\varTheta },A)\).

In [22, 26, 32, 33], the authors considered and studied the mixed equilibrium problem (MEP) which is to find \(x\in C\) such that

$${\varTheta }(x,y)+\varphi(y)-\varphi(x)\geq0, \quad \forall y\in C. $$

The set of solutions of MEP is denoted by \(\operatorname{MEP}({\varTheta },\varphi)\).

In [34–37], the authors considered and studied the equilibrium problem (EP) which is to find \(x\in C\) such that

$${\varTheta }(x,y)\geq0,\quad \forall y\in C. $$

The set of solutions of EP is denoted by \(\operatorname{EP}({\varTheta })\). It is worth mentioning that the EP is a unified model for several problems, namely variational inequality problems, optimization problems, saddle point problems, complementarity problems, fixed point problems, Nash equilibrium problems, etc.

Throughout this paper, it is assumed as in [38] that \({\varTheta }:C\times C\to{\mathbf{R}}\) is a bi-function satisfying conditions (θ1)-(θ3) and \(h:C\times C\to{\mathbf{R}}\) is a bi-function with restrictions (h1)-(h3), where

(θ1):

\({\varTheta }(x,x)=0\) for all \(x\in C\);

(θ2):

Θ is monotone (i.e., \({\varTheta }(x,y)+{\varTheta }(y,x)\leq0\), \(\forall x,y\in C\)) and upper hemicontinuous in the first variable, i.e., for each \(x,y,z\in C\),

$$\limsup_{t\to0^{+}}{\varTheta }\bigl(tz+(1-t)x,y\bigr)\leq{ \varTheta }(x,y); $$
(θ3):

Θ is lower semicontinuous and convex in the second variable;

(h1):

\(h(x,x)=0\) for all \(x\in C\);

(h2):

h is monotone and weakly upper semicontinuous in the first variable;

(h3):

h is convex in the second variable.

For \(r>0\) and \(x\in H\), let \(T_{r}:H\to2^{C}\) be a mapping defined by

$$T_{r}x=\biggl\{ z\in C:{\varTheta }(z,y)+h(z,y)+\frac{1}{r}\langle y-z,z-x\rangle \geq0,\forall y\in C\biggr\} $$

called the resolvent of Θ and h.
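To see what \(T_{r}\) looks like in a concrete (and much simplified) situation of our own choosing, take \(h\equiv0\), \(C=H={\mathbf{R}}^{3}\), and \({\varTheta }(z,y)=\langle Bz,y-z\rangle\) with B positive semidefinite. Since the defining inequality must hold for all y in the whole space, it forces \(Bz+\frac{1}{r}(z-x)=0\), that is, \(T_{r}x=(I+rB)^{-1}x\), which the sketch below verifies numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative special case: Theta(z, y) = <Bz, y - z> with B positive semidefinite,
# h = 0 and C = H = R^3, so that T_r x = (I + rB)^{-1} x.
B = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 0.5]])
r = 0.7
x = rng.standard_normal(3)
z = np.linalg.solve(np.eye(3) + r * B, x)      # z = T_r x in this special case

# Check the defining inequality Theta(z, y) + h(z, y) + (1/r)<y - z, z - x> >= 0:
def lhs(y):
    return (B @ z) @ (y - z) + (1.0 / r) * (y - z) @ (z - x)

print(min(lhs(rng.standard_normal(3)) for _ in range(1000)))   # ~0 up to rounding
```

In general, of course, \(T_{r}x\) has no closed form and is given only implicitly by the inequality above; the explicit formula is special to this linear example.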

In 2012, Marino et al. [30] introduced a multi-step iterative scheme

$$ \left \{ \textstyle\begin{array}{l} {\varTheta }(u_{n},y)+h(u_{n},y)+\frac{1}{r_{n}}\langle y-u_{n},u_{n}-x_{n}\rangle\geq0, \quad \forall y\in C, \\ y_{n,1}=\beta_{n,1}S_{1}u_{n}+(1-\beta_{n,1})u_{n}, \\ y_{n,i}=\beta_{n,i}S_{i}u_{n}+(1-\beta_{n,i})y_{n,i-1},\quad i=2,\ldots,N, \\ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})Ty_{n,N}, \end{array}\displaystyle \right . $$
(1.5)

with \(f:C\to C\) a ρ-contraction and \(\{\alpha_{n}\},\{\beta_{n,i}\} \subset(0,1)\), \(\{r_{n}\}\subset(0,\infty)\), which generalizes the two-step iterative scheme in [39] for two nonexpansive mappings to a finite family of nonexpansive mappings \(T,S_{i}:C\to C\), \(i=1,\ldots,N\); they proved that the proposed scheme (1.5) converges strongly to a common fixed point of these mappings that is also an equilibrium point of the GMEP (1.4).

More recently, Marino et al.’s multi-step iterative scheme (1.5) was extended to develop the following composite viscosity iterative algorithm by virtue of Jung’s two-step iterative scheme (1.3).

Algorithm CPY

(see (3.1) in [29])

Let \(f:C\to C\) be a ρ-contraction and \(A:C\to H\) be an α-inverse strongly monotone mapping. Let \(S_{i},T:C\to C\) be nonexpansive mappings for each \(i=1,\ldots,N\). Let \({\varTheta }:C\times C\to{\mathbf{R}}\) be a bi-function satisfying conditions (θ1)-(θ3) and \(h:C\times C\to{\mathbf{R}}\) be a bi-function with restrictions (h1)-(h3). Let \(\{x_{n}\}\) be the sequence generated by

$$ \left \{ \textstyle\begin{array}{l} {\varTheta }(u_{n},y)+h(u_{n},y)+\frac{1}{r_{n}}\langle y-u_{n},u_{n}-x_{n}\rangle\geq0, \quad \forall y\in C, \\ y_{n,1}=\beta_{n,1}S_{1}u_{n}+(1-\beta_{n,1})u_{n}, \\ y_{n,i}=\beta_{n,i}S_{i}u_{n}+(1-\beta_{n,i})y_{n,i-1},\quad i=2,\ldots,N, \\ y_{n}=\alpha_{n}f(y_{n,N})+(1-\alpha_{n})TP_{C}(y_{n,N}-\lambda_{n}Ay_{n,N}), \\ x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}TP_{C}(y_{n}-\lambda_{n}Ay_{n}), \quad \forall n\geq1, \end{array}\displaystyle \right . $$
(1.6)

where \(\{\lambda_{n}\}\) is a sequence in \((0,2\alpha)\) with \(0<\liminf_{n\to\infty}\lambda_{n}\leq\limsup_{n\to\infty}\lambda_{n}<1\), \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\) are sequences in \((0,1)\) with \(0<\liminf_{n\to\infty}\beta_{n}\leq\limsup_{n\to\infty}\beta _{n}<1\), \(\{\beta_{n,i}\}\) is a sequence in \((0,1)\) for each \(i=1,\ldots,N\), and \(\{r_{n}\}\) is a sequence in \((0,\infty)\) with \(\liminf_{n\to\infty}r_{n}>0\).

It was proven in [29] that the proposed scheme (1.6) converges strongly to a common fixed point of the mappings \(T,S_{i}:C\to C\), \(i=1,\ldots,N\), that is also an equilibrium point of the GMEP (1.4) and a solution of the VIP (1.1).

In this paper, we introduce a new composite viscosity iterative algorithm for finding a common element of the solution set \(\operatorname{GMEP}({\varTheta },h)\) of GMEP (1.4), the solution set \(\bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k})\) of a finite family of variational inequalities for inverse strongly monotone mappings \(A_{k}:C\to H\), \(k=1,\ldots,M\), and the common fixed point set \(\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\) of one finite family of nonexpansive mappings \(S_{i}:C\to C\), \(i=1,\ldots,N\), and another infinite family of nonexpansive mappings \(T_{n}:C\to C\), \(n=1,2,\ldots \) , in the setting of an infinite-dimensional Hilbert space. The iterative algorithm is based on the viscosity approximation method [40] (see also [41]), Mann’s iterative method, Korpelevich’s extragradient method, and the W-mapping approach to common fixed points of finitely many nonexpansive mappings. Our aim is to prove that the iterative algorithm converges strongly to a common fixed point of the mappings \(S_{i},T_{n}:C\to C\), \(i=1,\ldots,N\), \(n=1,2,\ldots \) , which is also an equilibrium point of GMEP (1.4) and a solution of a finite family of variational inequalities for inverse strongly monotone mappings \(A_{k}:C\to H\), \(k=1,\ldots,M\).

2 Preliminaries

Throughout this paper, we assume that H is a real Hilbert space whose inner product and norm are denoted by \(\langle\cdot, \cdot\rangle\) and \(\|\cdot\|\), respectively. Let C be a nonempty, closed, and convex subset of H. We write \(x_{n}\rightharpoonup x\) to indicate that the sequence \(\{x_{n}\}\) converges weakly to x and \(x_{n}\to x\) to indicate that the sequence \(\{x_{n}\}\) converges strongly to x. Moreover, we use \(\omega_{w}(x_{n})\) to denote the weak ω-limit set of the sequence \(\{x_{n}\}\) and \(\omega_{s}(x_{n})\) to denote the strong ω-limit set of the sequence \(\{x_{n}\}\), i.e.,

$$\omega_{w}(x_{n}):=\bigl\{ x\in H:x_{n_{i}} \rightharpoonup x \mbox{ for some subsequence }\{x_{n_{i}}\} \mbox{ of } \{x_{n}\}\bigr\} $$

and

$$\omega_{s}(x_{n}):=\bigl\{ x\in H:x_{n_{i}}\to x \mbox{ for some subsequence }\{ x_{n_{i}}\} \mbox{ of } \{x_{n}\} \bigr\} . $$

The metric (or nearest point) projection from H onto C is the mapping \(P_{C}:H\to C\) which assigns to each point \(x\in H\) the unique point \(P_{C}x\in C\) satisfying the property

$$\|x-P_{C}x\|=\inf_{y\in C}\|x-y\|=:d(x,C). $$

The following properties of projections are useful and pertinent to our purpose.

Proposition 2.1

Given any \(x\in H\) and \(z\in C\), one has:

  1. (i)

    \(z=P_{C}x \Leftrightarrow \langle x-z,y-z\rangle\leq0\), \(\forall y\in C\);

  2. (ii)

    \(z=P_{C}x \Leftrightarrow \|x-z\|^{2}\leq\|x-y\|^{2}-\|y-z\|^{2}\), \(\forall y\in C\);

  3. (iii)

    \(\langle P_{C}x-P_{C}y,x-y\rangle\geq\|P_{C}x-P_{C}y\|^{2}\), \(\forall y\in H\), which hence implies that \(P_{C}\) is nonexpansive and monotone.
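These properties are easy to verify numerically for a concrete set. In the sketch below (our own example, not from the paper), C is the closed unit ball of \({\mathbf{R}}^{4}\), for which \(P_{C}x=x/\max\{1,\|x\|\}\), and property (i) is tested at random points of C.

```python
import numpy as np

rng = np.random.default_rng(1)

# C = closed unit ball of R^4, whose metric projection has a closed form.
def P_C(x):
    return x / max(1.0, np.linalg.norm(x))

x = 3.0 * rng.standard_normal(4)      # a point of H (typically outside C)
z = P_C(x)

# Property (i): <x - z, y - z> <= 0 for every y in C.
worst = -np.inf
for _ in range(1000):
    y = P_C(2.0 * rng.standard_normal(4))      # a random point of C
    worst = max(worst, (x - z) @ (y - z))
print(worst)                          # <= 0 up to rounding, as property (i) predicts
```

Property (iii), and hence the nonexpansivity of \(P_{C}\), can be checked in exactly the same way.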

Definition 2.1

A mapping \(T:H\to H\) is said to be

  1. (a)

    nonexpansive if

    $$\|Tx-Ty\|\leq\|x-y\|,\quad \forall x,y\in H; $$
  2. (b)

    firmly nonexpansive if \(2T-I\) is nonexpansive, or equivalently, if T is 1-inverse strongly monotone (1-ism),

    $$\langle x-y,Tx-Ty\rangle\geq\|Tx-Ty\|^{2},\quad \forall x,y\in H; $$

    alternatively, T is firmly nonexpansive if and only if T can be expressed as

    $$T=\frac{1}{2}(I+S), $$

    where \(S:H\to H\) is nonexpansive; projections are firmly nonexpansive.

Definition 2.2

Let T be a nonlinear operator with the domain \(D(T)\subset H\) and the range \(R(T)\subset H\). Then T is said to be

  1. (i)

    monotone if

    $$\langle Tx-Ty,x-y\rangle\geq0,\quad \forall x,y\in D(T); $$
  2. (ii)

    β-strongly monotone if there exists a constant \(\beta>0\) such that

    $$\langle Tx-Ty,x-y\rangle\geq\beta\|x-y\|^{2},\quad \forall x,y\in D(T); $$
  3. (iii)

    ν-inverse strongly monotone if there exists a constant \(\nu >0\) such that

    $$\langle Tx-Ty,x-y\rangle\geq\nu\|Tx-Ty\|^{2},\quad \forall x,y\in D(T). $$

It can easily be seen that if T is nonexpansive, then \(I-T\) is monotone. It is also easy to see that the projection \(P_{C}\) is 1-ism. Inverse strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields.

On the other hand, it is obvious that if A is η-inverse strongly monotone, then A is monotone and \(\frac{1}{\eta} \)-Lipschitz continuous. Moreover, we also have, for all \(u,v\in C\) and \(\lambda>0\),

$$\begin{aligned} \bigl\Vert (I-\lambda A)u-(I-\lambda A)v\bigr\Vert ^{2}&=\bigl\Vert (u-v)-\lambda (Au-Av)\bigr\Vert ^{2} \\ &=\|u-v\|^{2}-2\lambda\langle Au-Av,u-v\rangle+\lambda^{2} \|Au-Av\|^{2} \\ &\leq\|u-v\|^{2}+\lambda(\lambda-2\eta)\|Au-Av\|^{2}. \end{aligned}$$
(2.1)

So, if \(\lambda\leq2\eta\), then \(I-\lambda A\) is a nonexpansive mapping from C to H.
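The inequality (2.1) and the resulting nonexpansivity can be checked numerically; in the sketch below (our own illustration) \(Ax=Bx\) with B symmetric positive semidefinite, which is η-inverse strongly monotone with \(\eta=1/\lambda_{\max}(B)\).

```python
import numpy as np

rng = np.random.default_rng(2)

# A(x) = Bx with B symmetric positive semidefinite is eta-inverse strongly monotone
# with eta = 1 / lambda_max(B); both B and the test points are illustrative choices.
Q = rng.standard_normal((3, 3))
B = Q @ Q.T
eta = 1.0 / np.linalg.eigvalsh(B).max()
lam = 1.9 * eta                       # any lambda <= 2 * eta is admissible here

T = lambda x: x - lam * (B @ x)       # T = I - lambda * A

worst_ratio = 0.0
for _ in range(1000):
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    worst_ratio = max(worst_ratio, np.linalg.norm(T(u) - T(v)) / np.linalg.norm(u - v))
print(worst_ratio)                    # <= 1, i.e. I - lambda*A is nonexpansive here
```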

We need some facts and tools in a real Hilbert space H, which are listed as lemmas below.

Lemma 2.1

Let X be a real inner product space. Then we have the following inequality:

$$\|x+y\|^{2}\leq\|x\|^{2}+2\langle y,x+y\rangle, \quad \forall x,y\in X. $$

Lemma 2.2

Let H be a real Hilbert space. Then the following hold:

  1. (a)

    \(\|x-y\|^{2}=\|x\|^{2}-\|y\|^{2}-2\langle x-y,y\rangle\) for all \(x,y\in H\);

  2. (b)

    \(\|\lambda x+\mu y\|^{2}=\lambda\|x\|^{2}+\mu\|y\|^{2}-\lambda\mu\|x-y\| ^{2}\) for all \(x,y\in H\) and \(\lambda,\mu\in[0,1]\) with \(\lambda+\mu=1\);

  3. (c)

    if \(\{x_{n}\}\) is a sequence in H such that \(x_{n}\rightharpoonup x\), it follows that

    $$\limsup_{n\to\infty}\|x_{n}-y\|^{2}=\limsup _{n\to\infty}\|x_{n}-x\|^{2}+\|x-y\| ^{2}, \quad \forall y\in H. $$

Let \(\{T_{n}\}^{\infty}_{n=1}\) be an infinite family of nonexpansive self-mappings on C and \(\{\lambda_{n}\}^{\infty}_{n=1}\) be a sequence of nonnegative numbers in \([0,1]\). For any \(n\geq1\), define a mapping \(W_{n}\) on C as follows:

$$ \left \{ \textstyle\begin{array}{l} U_{n,n+1}=I, \\ U_{n,n}=\lambda_{n}T_{n}U_{n,n+1}+(1-\lambda_{n})I, \\ U_{n,n-1}=\lambda_{n-1}T_{n-1}U_{n,n}+(1-\lambda_{n-1})I, \\ \ldots, \\ U_{n,k}=\lambda_{k}T_{k}U_{n,k+1}+(1-\lambda_{k})I, \\ U_{n,k-1}=\lambda_{k-1}T_{k-1}U_{n,k}+(1-\lambda_{k-1})I, \\ \ldots, \\ U_{n,2}=\lambda_{2}T_{2}U_{n,3}+(1-\lambda_{2})I, \\ W_{n}=U_{n,1}=\lambda_{1}T_{1}U_{n,2}+(1-\lambda_{1})I. \end{array}\displaystyle \right . $$
(2.2)

Such a mapping \(W_{n}\) is called the W-mapping generated by \(T_{n},T_{n-1},\ldots,T_{1}\) and \(\lambda_{n},\lambda_{n-1}, \ldots,\lambda_{1}\).
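The recursion (2.2) is straightforward to evaluate: \(U_{n,n+1}x=x\) and \(U_{n,k}x=\lambda_{k}T_{k}(U_{n,k+1}x)+(1-\lambda_{k})x\) for \(k=n,n-1,\ldots,1\), so that \(W_{n}x=U_{n,1}x\). The sketch below does this for illustrative choices of our own (rotations of \({\mathbf{R}}^{2}\) about the origin as the \(T_{k}\), and weights \(\lambda_{k}=2^{-k}\)).

```python
import numpy as np

def W(n, T_list, lam_list, x):
    """Evaluate W_n x via (2.2): U_{n,n+1} x = x and, for k = n, ..., 1,
    U_{n,k} x = lam_k * T_k(U_{n,k+1} x) + (1 - lam_k) * x; then W_n = U_{n,1}."""
    v = x
    for k in range(n, 0, -1):
        v = lam_list[k - 1] * T_list[k - 1](v) + (1.0 - lam_list[k - 1]) * x
    return v

def rotation(theta):
    """A nonexpansive self-map of R^2 (an isometry) with fixed point 0."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return lambda x: R @ x

T_list = [rotation(0.3 * (k + 1)) for k in range(10)]     # T_1, ..., T_10
lam_list = [0.5 ** (k + 1) for k in range(10)]            # lambda_k in (0, b], b = 1/2

x = np.array([1.0, 2.0])
print(W(5, T_list, lam_list, x), W(10, T_list, lam_list, x))
```

Note that the weights \(\lambda_{k}=2^{-k}\) lie in \((0,b]\) with \(b=1/2\), as required in Lemmas 2.3 and 2.4 below, and the rotations share the fixed point 0, so \(\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\neq\emptyset\).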

Lemma 2.3

(see [42])

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(\{T_{n}\}^{\infty}_{n=1}\) be a sequence of nonexpansive self-mappings on C such that \(\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n}) \neq\emptyset\) and let \(\{\lambda_{n}\}^{\infty}_{n=1}\) be a sequence in \((0,b]\) for some \(b\in(0,1)\). Then, for every \(x\in C\) and \(k\geq1\) the limit \(\lim_{n\to\infty}U_{n,k}x\) exists where \(U_{n,k}\) is defined as in (2.2).

Remark 2.1

(see Remark 3.1 in [36])

It follows from Lemma 2.3 that if D is a nonempty bounded subset of C, then for every \(\epsilon>0\) there exists \(n_{0}\geq k\) such that, for all \(n>n_{0}\),

$$\sup_{x\in D}\|U_{n,k}x-U_{k}x\|\leq\epsilon. $$

Remark 2.2

(see Remark 3.2 in [36])

Utilizing Lemma 2.3, we define a mapping \(W:C\to C\) as follows:

$$Wx=\lim_{n\to\infty}W_{n}x=\lim_{n\to\infty}U_{n,1}x, \quad \forall x\in C. $$

Such a W is called the W-mapping generated by \(T_{1},T_{2},\ldots \) , and \(\lambda_{1},\lambda_{2},\ldots \) . Since \(W_{n}\) is nonexpansive, \(W:C\to C\) is also nonexpansive. Indeed, observe that, for each \(x,y\in C\),

$$\|Wx-Wy\|=\lim_{n\to\infty}\|W_{n}x-W_{n}y\|\leq \|x-y\|. $$

If \(\{x_{n}\}\) is a bounded sequence in C, then we put \(D=\{x_{n}:n\geq1\} \). Hence, it is clear from Remark 2.1 that for an arbitrary \(\epsilon>0\) there exists \(N_{0}\geq1\) such that, for all \(n>N_{0}\),

$$\|W_{n}x_{n}-Wx_{n}\|=\|U_{n,1}x_{n}-U_{1}x_{n} \|\leq\sup_{x\in D}\|U_{n,1}x-U_{1}x\| \leq \epsilon. $$

This implies that

$$\lim_{n\to\infty}\|W_{n}x_{n}-Wx_{n} \|=0. $$

Lemma 2.4

(see [42])

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(\{T_{n}\}^{\infty}_{n=1}\) be a sequence of nonexpansive self-mappings on C such that \(\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n}) \neq\emptyset\), and let \(\{\lambda_{n}\}^{\infty}_{n=1}\) be a sequence in \((0,b]\) for some \(b\in(0,1)\). Then \(\operatorname{Fix} (W)=\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\).

Lemma 2.5

(see [43], Demiclosedness principle)

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let S be a nonexpansive self-mapping on C with \(\operatorname{Fix}(S)\neq\emptyset\). Then \(I-S\) is demiclosed. That is, whenever \(\{x_{n}\}\) is a sequence in C weakly converging to some \(x\in C\) and the sequence \(\{(I-S)x_{n}\}\) strongly converges to some y, it follows that \((I-S)x=y\). Here I is the identity operator of H.

Lemma 2.6

Let \(A:C\to H\) be a monotone mapping. In the context of the variational inequality problem the characterization of the projection (see Proposition  2.1(i)) implies

$$u\in\operatorname{VI}(C,A)\quad \Leftrightarrow\quad u=P_{C}(u-\lambda Au),\quad \forall\lambda>0. $$
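This characterization is easily checked numerically. In the sketch below (our own example), \(C=[0,2]^{2}\) and \(Au=u-c\) with \(c\notin C\); the unique solution of VIP (1.1) is then \(P_{C}c\), and the fixed-point residual vanishes for every \(\lambda>0\).

```python
import numpy as np

# C = [0, 2]^2 and A(u) = u - c (strongly monotone); the unique VIP solution is P_C(c).
P_C = lambda u: np.clip(u, 0.0, 2.0)
c = np.array([3.0, -1.0])
A = lambda u: u - c
u = P_C(c)                                  # the solution in this illustrative example

for lam in (0.1, 1.0, 10.0):
    print(lam, np.linalg.norm(u - P_C(u - lam * A(u))))   # every residual is 0
```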

Lemma 2.7

Let \(f:C\to C\) be a ρ-contraction. Then \(I-f:C\to H\) is \((1-\rho)\)-strongly monotone, i.e.,

$$\bigl\langle (I-f)x-(I-f)y,x-y\bigr\rangle \geq(1-\rho)\|x-y\|^{2},\quad \forall x,y\in C. $$

Lemma 2.8

(see [44])

Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers satisfying

$$a_{n+1}\leq(1-s_{n})a_{n}+s_{n}b_{n}+t_{n}, \quad \forall n\geq1, $$

where \(\{s_{n}\}\), \(\{t_{n}\}\), and \(\{b_{n}\}\) satisfy the following conditions:

  1. (i)

    \(\{s_{n}\}\subset[0,1]\) and \(\sum^{\infty}_{n=1}s_{n}=\infty\);

  2. (ii)

    either \(\limsup_{n\to\infty}b_{n}\leq0\) or \(\sum^{\infty}_{n=1}|s_{n}b_{n}|<\infty\);

  3. (iii)

    \(t_{n}\geq0\) for all \(n\geq1\), and \(\sum^{\infty}_{n=1}t_{n}<\infty\).

Then \(\lim_{n\to\infty}a_{n}=0\).
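The conclusion of Lemma 2.8 can also be observed numerically by iterating the recursion with equality, which is the worst case allowed; the sequences chosen below are our own illustration.

```python
# Worst case of Lemma 2.8 (equality instead of <=) with the illustrative choices
# s_n = 1/(n+1), b_n = 1/(n+1) (so limsup b_n <= 0) and t_n = 2^(-n) (summable).
a = 1.0
for n in range(1, 200001):
    s, b, t = 1.0 / (n + 1), 1.0 / (n + 1), 2.0 ** (-n)
    a = (1 - s) * a + s * b + t
    if n % 50000 == 0:
        print(n, a)       # a_n decreases toward 0, as the lemma guarantees
```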

In the sequel, we will denote by \(\operatorname{GMEP}({\varTheta },h)\) the solution set of GMEP (1.4).

Lemma 2.9

(see [38])

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \({\varTheta }:C\times C\to{\mathbf{R}}\) be a bi-function satisfying conditions (θ1)-(θ3) and \(h:C\times C\to{\mathbf{R}}\) be a bi-function with restrictions (h1)-(h3). Moreover, let us suppose that

  1. (H)

    for fixed \(r>0\) and \(x\in C\), there exist a bounded \(K\subset C\) and \(\hat{x}\in K\) such that for all \(z\in C\setminus K\), \(-{\varTheta }(\hat{x},z)+h(z,\hat{x})+\frac{1}{r}\langle\hat{x}-z,z-x\rangle<0\).

For \(r>0\) and \(x\in H\), the mapping \(T_{r}:H\to2^{C}\) (i.e., the resolvent of Θ and h) has the following properties:

  1. (i)

    \(T_{r}x\neq\emptyset\);

  2. (ii)

    \(T_{r}x\) is a singleton;

  3. (iii)

    \(T_{r}\) is firmly nonexpansive;

  4. (iv)

    \(\operatorname{GMEP}({\varTheta },h)=\operatorname{Fix}(T_{r})\) and it is closed and convex.

Lemma 2.10

(see [38])

Let us suppose that (θ1)-(θ3), (h1)-(h3), and (H) hold. Let \(x,y\in H\), \(r_{1},r_{2}>0\). Then

$$\|T_{r_{2}}y-T_{r_{1}}x\|\leq\|y-x\|+\biggl\vert \frac{r_{2}-r_{1}}{r_{2}}\biggr\vert \|T_{r_{2}}y-y\|. $$

Lemma 2.11

(see [30])

Suppose that the hypotheses of Lemma  2.9 are satisfied. Let \(\{r_{n}\}\) be a sequence in \((0,\infty)\) with \(\liminf_{n\to\infty}r_{n}>0\). Suppose that \(\{x_{n}\}\) is a bounded sequence. Then the following statements are equivalent and true:

  1. (a)

    If \(\|x_{n}-T_{r_{n}}x_{n}\|\to0\) as \(n\to\infty\), each weak cluster point of \(\{x_{n}\}\) satisfies the problem

    $${\varTheta }(x,y)+h(x,y)\geq0, \quad \forall y\in C, $$

    i.e., \(\omega_{w}(x_{n})\subseteq\operatorname{GMEP}({\varTheta },h)\).

  2. (b)

    The demiclosedness principle holds in the sense that, if \(x_{n}\rightharpoonup x^{*}\) and \(\|x_{n}-T_{r_{n}}x_{n}\|\to0\) as \(n\to\infty\), then \((I-T_{r_{k}})x^{*}=0\) for all \(k\geq1\).

Finally, recall that a set-valued mapping \(\widetilde{T}:H\to2^{H}\) is called monotone if for all \(x,y\in H\), \(f\in\widetilde{T}x\) and \(g\in \widetilde{T}y\) imply \(\langle x-y,f-g\rangle\geq0\). A monotone mapping \(\widetilde{T}:H\to2^{H}\) is maximal if its graph \(G(\widetilde{T})\) is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping \(\widetilde{T}\) is maximal if and only if for \((x,f)\in H\times H\), \(\langle x-y,f-g\rangle\geq0\) for all \((y,g)\in G(\widetilde{T})\) implies \(f\in \widetilde{T}x\). Let \(A:C\to H\) be a monotone, L-Lipschitz continuous mapping and let \(N_{C}v\) be the normal cone to C at \(v\in C\), i.e., \(N_{C}v=\{w\in H:\langle v-u,w\rangle\geq0, \forall u\in C\}\). Define

$$\widetilde{T}v=\left \{ \textstyle\begin{array}{l@{\quad}l} Av+N_{C}v, &\mbox{if }v\in C, \\ \emptyset, & \mbox{if }v\notin C. \end{array}\displaystyle \right . $$

It is well known [45] that in this case \(\widetilde{T}\) is maximal monotone, and

$$ 0\in\widetilde{T}v \quad \Leftrightarrow\quad v\in\operatorname{VI}(C,A). $$
(2.3)

3 Main results

Let \(M,N\geq1\) be two integers and let us consider the following new composite viscosity iterative scheme:

$$ \left \{ \textstyle\begin{array}{l} {\varTheta }(u_{n},y)+h(u_{n},y)+\frac{1}{r_{n}}\langle y-u_{n},u_{n}-x_{n}\rangle\geq0,\quad \forall y\in C, \\ y_{n,1}=\beta_{n,1}S_{1}u_{n}+(1-\beta_{n,1})u_{n}, \\ y_{n,i}=\beta_{n,i}S_{i}u_{n}+(1-\beta_{n,i})y_{n,i-1}, \quad i=2,\ldots,N, \\ y_{n}=\alpha_{n}f(y_{n,N})+(1-\alpha_{n})W_{n}{\varLambda }^{M}_{n}y_{n,N}, \\ x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}{\varLambda }^{M}_{n}y_{n},\quad \forall n\geq1, \end{array}\displaystyle \right . $$
(3.1)

where

  • the mapping \(f:C\to C\) is a ρ-contraction;

  • \(A_{k}:C\to H\) is \(\eta_{k}\)-inverse strongly monotone for each \(k=1,\ldots,M\);

  • \(S_{i},T_{n}:C\to C\) are nonexpansive mappings for each \(i=1,\ldots,N\) and \(n=1,2,\ldots \) ;

  • \(\{\lambda_{n}\}\) is a sequence in \((0,b]\) for some \(b\in(0,1)\) and \(W_{n}\) is the W-mapping defined by (2.2);

  • \({\varTheta },h:C\times C\to{\mathbf{R}}\) are two bi-functions satisfying the hypotheses of Lemma 2.9;

  • \(\{\lambda_{k,n}\}\subset[a_{k},b_{k}]\subset(0,2\eta_{k})\), \(\forall k\in\{ 1,\ldots,M\}\), and \({\varLambda }^{M}_{n}:=P_{C}(I-\lambda_{M,n}A_{M})\cdots P_{C}(I-\lambda_{1,n}A_{1})\);

  • \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\) are sequences in \((0,1)\) with \(0<\liminf_{n\to\infty}\beta_{n}\leq\limsup_{n\to\infty}\beta_{n}<1\);

  • \(\{\beta_{n,i}\}^{N}_{i=1}\) are sequences in \((0,1)\) and \(\{r_{n}\}\) is a sequence in \((0,\infty)\) with \(\liminf_{n\to\infty}r_{n}>0\).
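To make the mechanics of scheme (3.1) concrete, the following Python sketch runs it in a heavily simplified toy setting of our own choosing (it is not the general setting of the results below): \({\varTheta }=h\equiv0\), so that \(u_{n}=T_{r_{n}}x_{n}=P_{C}x_{n}\); \(M=1\) with \(A_{1}x=x-p\), which is 1-inverse strongly monotone; \(N=2\); constant parameter sequences except for \(\alpha_{n}\); and nonexpansive self-maps of \(C=[0,2]^{2}\) that all fix \(p=(1,1)\), so that \(p\in{\varOmega }\).

```python
import numpy as np

# Toy instance of scheme (3.1): C = [0,2]^2, Theta = h = 0 (so u_n = P_C x_n),
# M = 1 with A_1(x) = x - p (1-inverse strongly monotone), N = 2, p = (1,1) in Omega.
p = np.array([1.0, 1.0])
P_C = lambda x: np.clip(x, 0.0, 2.0)
Lam = lambda x: P_C(x - 0.5 * (x - p))        # Lambda^1_n, lambda_{1,n} = 0.5 in (0, 2*eta_1)

def rot(theta, x):                            # rotation about p, folded back into C:
    d = x - p                                 # a nonexpansive self-map of C fixing p
    c, s = np.cos(theta), np.sin(theta)
    return P_C(p + np.array([c * d[0] - s * d[1], s * d[0] + c * d[1]]))

S = [lambda x: rot(0.7, x), lambda x: rot(-0.4, x)]   # S_1, S_2
T = lambda j, x: rot(1.0 / j, x)                      # infinite family T_1, T_2, ...
f = lambda x: 0.3 * x                                 # rho-contraction mapping C into C

def W(n, x):                                  # W_n x via (2.2) with lambda_k = 2^(-k)
    v = x
    for k in range(n, 0, -1):
        lam_k = 0.5 ** k
        v = lam_k * T(k, v) + (1.0 - lam_k) * x
    return v

x = np.array([0.1, 1.9])
for n in range(1, 601):
    alpha, beta, beta_i = 1.0 / (n + 1), 0.5, 0.5
    u = P_C(x)                                        # u_n  (= T_{r_n} x_n here)
    y_i = beta_i * S[0](u) + (1 - beta_i) * u         # y_{n,1}
    y_i = beta_i * S[1](u) + (1 - beta_i) * y_i       # y_{n,2} = y_{n,N}
    y = alpha * f(y_i) + (1 - alpha) * W(n, Lam(y_i)) # y_n
    x = (1 - beta) * y + beta * W(n, Lam(y))          # x_{n+1}
    if n % 200 == 0:
        print(n, np.linalg.norm(x - p))               # distance to p shrinks
```

In this toy run the iterates approach p at roughly the rate at which \(\alpha_{n}\) tends to zero; of course, none of the numerical choices above are part of the hypotheses of the results that follow.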

Lemma 3.1

Let us suppose that \({\varOmega }=\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k})\cap\operatorname{GMEP}({\varTheta },h)\neq\emptyset\). Then the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{y_{n,i}\}\) for all i, \(\{u_{n}\}\) are bounded.

Proof

Put \(\tilde{y}_{n,N}={\varLambda }^{M}_{n}y_{n,N}\), \(\tilde{y}_{n}={\varLambda }^{M}_{n}y_{n}\), and

$${\varLambda }^{k}_{n}=P_{C}(I-\lambda_{k,n}A_{k})P_{C}(I- \lambda _{k-1,n}A_{k-1})\cdots P_{C}(I- \lambda_{1,n}A_{1}) $$

for all \(k\in\{1,\ldots,M\}\) and \(n\geq1\), and \({\varLambda }^{0}_{n}=I\), where I is the identity mapping on H.

Let us observe, first of all, that if \(p\in{ \varOmega }\), then

$$\|y_{n,1}-p\|\leq\|u_{n}-p\|\leq\|x_{n}-p\|. $$

For \(i=2,\ldots,N\), one proves by induction that

$$\|y_{n,i}-p\|\leq\beta_{n,i}\|u_{n}-p\|+(1- \beta_{n,i})\|y_{n,i-1}-p\|\leq \|u_{n}-p\|\leq \|x_{n}-p\|. $$

Thus we obtain, for every \(i=1,\ldots,N\),

$$ \|y_{n,i}-p\|\leq\|u_{n}-p\|\leq\|x_{n}-p \|. $$
(3.2)

Since for each \(k\in\{1,\ldots,M\}\), \(I-\lambda_{k,n}A_{k}\) is nonexpansive and \(p=P_{C}(I-\lambda_{k,n}A_{k})p\) (due to Lemma 2.6), we have

$$\begin{aligned} \Vert \tilde{y}_{n,N}-p\Vert =&\bigl\Vert P_{C}(I- \lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n,N}-P_{C}(I- \lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}p \bigr\Vert \\ \leq&\bigl\Vert (I-\lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n,N}-(I-\lambda _{M,n}A_{M}){\varLambda }^{M-1}_{n}p\bigr\Vert \\ \leq&\bigl\Vert {\varLambda }^{M-1}_{n}y_{n,N}-{ \varLambda }^{M-1}_{n}p\bigr\Vert \\ &\cdots \\ \leq&\bigl\Vert {\varLambda }^{0}_{n}y_{n,N}-{ \varLambda }^{0}_{n}p\bigr\Vert \\ =&\Vert y_{n,N}-p\Vert \end{aligned}$$
(3.3)

and

$$\begin{aligned} \Vert \tilde{y}_{n}-p\Vert =&\bigl\Vert P_{C}(I- \lambda_{M,n}A_{M}){\varLambda }^{M-1}_{n}y_{n}-P_{C}(I- \lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}p \bigr\Vert \\ \leq&\bigl\Vert (I-\lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n}-(I-\lambda _{M,n}A_{M}){\varLambda }^{M-1}_{n}p\bigr\Vert \\ \leq&\bigl\Vert {\varLambda }^{M-1}_{n}y_{n}-{ \varLambda }^{M-1}_{n}p\bigr\Vert \\ &\cdots \\ \leq&\bigl\Vert {\varLambda }^{0}_{n}y_{n}-{ \varLambda }^{0}_{n}p\bigr\Vert \\ =&\Vert y_{n}-p\Vert . \end{aligned}$$
(3.4)

Since \(W_{n}\) is nonexpansive and \(p=W_{n}p\) for all \(n\geq1\), we get from (3.2)-(3.4)

$$\begin{aligned} \Vert y_{n}-p\Vert &=\bigl\Vert \alpha_{n} \bigl(f(y_{n,N})-p\bigr)+(1-\alpha _{n}) (W_{n} \tilde{y}_{n,N}-p)\bigr\Vert \\ &\leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert +(1- \alpha_{n})\Vert \tilde{y}_{n,N}-p\Vert \\ &\leq\alpha_{n}\bigl\Vert f(y_{n,N})-f(p)\bigr\Vert + \alpha_{n}\bigl\Vert f(p)-p\bigr\Vert +(1-\alpha_{n}) \Vert y_{n,N}-p\Vert \\ &\leq\alpha_{n}\rho \Vert y_{n,N}-p\Vert + \alpha_{n}\bigl\Vert f(p)-p\bigr\Vert +(1-\alpha_{n}) \Vert y_{n,N}-p\Vert \\ &=\bigl(1-(1-\rho)\alpha_{n}\bigr)\Vert y_{n,N}-p\Vert + \alpha_{n}\bigl\Vert f(p)-p\bigr\Vert \\ &\leq\bigl(1-(1-\rho)\alpha_{n}\bigr)\Vert x_{n}-p\Vert +\alpha_{n}\bigl\Vert f(p)-p\bigr\Vert \\ &=\bigl(1-(1-\rho)\alpha_{n}\bigr)\Vert x_{n}-p\Vert +(1-\rho)\alpha_{n}\frac{\Vert f(p)-p\Vert }{1-\rho} \\ &\leq\max\biggl\{ \Vert x_{n}-p\Vert ,\frac{\Vert f(p)-p\Vert }{1-\rho}\biggr\} , \end{aligned}$$

and hence

$$\begin{aligned} \|x_{n+1}-p\|&=\bigl\Vert (1-\beta_{n}) (y_{n}-p)+\beta_{n}(W_{n}\tilde{y}_{n}-p) \bigr\Vert \\ &\leq(1-\beta_{n})\|y_{n}-p\|+\beta_{n}\| \tilde{y}_{n}-p\| \\ &\leq(1-\beta_{n})\|y_{n}-p\|+\beta_{n} \|y_{n}-p\| \\ &=\|y_{n}-p\| \\ &\leq\max\biggl\{ \|x_{n}-p\|,\frac{\|f(p)-p\|}{1-\rho}\biggr\} . \end{aligned}$$

By induction, we get

$$\|x_{n}-p\|\leq\max\biggl\{ \|x_{0}-p\|,\frac{\|f(p)-p\|}{1-\rho} \biggr\} ,\quad \forall n\geq1. $$

This implies that \(\{x_{n}\}\) is bounded and so are \(\{u_{n}\}\), \(\{\tilde{y}_{n}\}\), \(\{\tilde{y}_{n,N}\}\), \(\{y_{n}\}\), \(\{y_{n,i}\}\) for each \(i=1,\ldots,N\). Since \(\|W_{n}\tilde{y}_{n,N}-p\|\leq\|y_{n,N}-p\|\leq\| x_{n}-p\|\) and \(\|W_{n}\tilde{y}_{n}-p\|\leq\|y_{n}-p\|\), \(\{W_{n} \tilde{y}_{n,N}\}\) and \(\{W_{n}\tilde{y}_{n}\}\) are also bounded. □

Lemma 3.2

Let us suppose that \({\varOmega }\neq\emptyset\). Moreover, let us suppose that the following hold:

  1. (H1)

    \(\lim_{n\to\infty}\alpha_{n}=0\) and \(\sum^{\infty}_{n=1}\alpha _{n}=\infty\);

  2. (H2)

    \(\sum^{\infty}_{n=1}|\alpha_{n}-\alpha_{n-1}|<\infty\) or \(\lim_{n\to \infty}\frac{|\alpha_{n}-\alpha_{n-1}|}{\alpha_{n}}=0\);

  3. (H3)

    \(\sum^{\infty}_{n=1}|\beta_{n,i}-\beta_{n-1,i}|<\infty\) or \(\lim_{n\to\infty}\frac{|\beta_{n,i}-\beta_{n-1,i}|}{\alpha_{n}}=0\) for each \(i=1,\ldots,N\);

  4. (H4)

    \(\sum^{\infty}_{n=1}|r_{n}-r_{n-1}|<\infty\) or \(\lim_{n\to\infty}\frac {|r_{n}-r_{n-1}|}{\alpha_{n}}=0\);

  5. (H5)

    \(\sum^{\infty}_{n=1}|\beta_{n}-\beta_{n-1}|<\infty\) or \(\lim_{n\to \infty}\frac{|\beta_{n}-\beta_{n-1}|}{\alpha_{n}}=0\);

  6. (H6)

    \(\sum^{\infty}_{n=1}|\lambda_{k,n}-\lambda_{k,n-1}|<\infty\) or \(\lim_{n\to\infty}\frac{|\lambda_{k,n}-\lambda_{k,n-1}|}{\alpha_{n}}=0\) for each \(k=1,\ldots,M\).

Then \(\lim_{n\to\infty}\|x_{n+1}-x_{n}\|=0\), i.e., \(\{ x_{n}\}\) is asymptotically regular.

Proof

From (3.1), we have

$$\left \{ \textstyle\begin{array}{l} y_{n}=\alpha_{n}f(y_{n,N})+(1-\alpha_{n})W_{n}\tilde{y}_{n,N}, \\ y_{n-1}=\alpha_{n-1}f(y_{n-1,N})+(1-\alpha_{n-1})W_{n-1}\tilde{y}_{n-1,N}. \end{array}\displaystyle \right . $$

Simple calculations show that

$$\begin{aligned} y_{n}-y_{n-1} =&(1-\alpha_{n}) (W_{n} \tilde{y}_{n,N}-W_{n-1}\tilde{y}_{n-1,N}) +( \alpha_{n}-\alpha_{n-1}) \bigl(f(y_{n-1,N})-W_{n-1} \tilde{y}_{n-1,N}\bigr) \\ & {}+\alpha_{n}\bigl(f(y_{n,N})-f(y_{n-1,N})\bigr). \end{aligned}$$
(3.5)

Note that

$$\begin{aligned} \Vert \tilde{y}_{n,N}-\tilde{y}_{n-1,N}\Vert =&\bigl\Vert {\varLambda }^{M}_{n}y_{n,N}-{\varLambda }^{M}_{n-1}y_{n-1,N} \bigr\Vert \\ =&\bigl\Vert P_{C}(I-\lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n,N}-P_{C}(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n-1}y_{n-1,N} \bigr\Vert \\ \leq&\bigl\Vert P_{C}(I-\lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n,N}-P_{C}(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n}y_{n,N} \bigr\Vert \\ &{} +\bigl\Vert P_{C}(I-\lambda_{M,n-1}A_{M}){ \varLambda }^{M-1}_{n}y_{n,N}-P_{C}(I- \lambda_{M,n-1}A_{M}){\varLambda }^{M-1}_{n-1}y_{n-1,N} \bigr\Vert \\ \leq&\bigl\Vert (I-\lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n,N}-(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n}y_{n,N} \bigr\Vert \\ &{} +\bigl\Vert (I-\lambda_{M,n-1}A_{M}){ \varLambda }^{M-1}_{n}y_{n,N}-(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n-1}y_{n-1,N} \bigr\Vert \\ \leq&|\lambda_{M,n}-\lambda_{M,n-1}|\bigl\Vert A_{M}{\varLambda }^{M-1}_{n}y_{n,N}\bigr\Vert +\bigl\Vert {\varLambda }^{M-1}_{n}y_{n,N}-{ \varLambda }^{M-1}_{n-1}y_{n-1,N}\bigr\Vert \\ \leq&|\lambda_{M,n}-\lambda_{M,n-1}|\bigl\Vert A_{M}{\varLambda }^{M-1}_{n}y_{n,N}\bigr\Vert +|\lambda_{M-1,n}-\lambda_{M-1,n-1}|\bigl\Vert A_{M-1}{ \varLambda }^{M-2}_{n}y_{n,N}\bigr\Vert \\ &{} +\bigl\Vert {\varLambda }^{M-2}_{n}y_{n,N}-{ \varLambda }^{M-2}_{n-1}y_{n-1,N}\bigr\Vert \\ \leq&\cdots \\ \leq&|\lambda_{M,n}-\lambda_{M,n-1}|\bigl\Vert A_{M}{\varLambda }^{M-1}_{n}y_{n,N}\bigr\Vert +|\lambda_{M-1,n}-\lambda_{M-1,n-1}|\bigl\Vert A_{M-1}{ \varLambda }^{M-2}_{n}y_{n,N}\bigr\Vert \\ &{} +\cdots+|\lambda_{1,n}-\lambda_{1,n-1}|\bigl\Vert A_{1}{\varLambda }^{0}_{n}y_{n,N}\bigr\Vert +\bigl\Vert {\varLambda }^{0}_{n}y_{n,N}-{ \varLambda }^{0}_{n-1}y_{n-1,N}\bigr\Vert \\ \leq&\widetilde{M}_{0} \sum^{M}_{k=1}| \lambda_{k,n}-\lambda _{k,n-1}|+\Vert y_{n,N}-y_{n-1,N} \Vert \end{aligned}$$
(3.6)

and

$$\begin{aligned} \Vert \tilde{y}_{n}-\tilde{y}_{n-1}\Vert =&\bigl\Vert {\varLambda }^{M}_{n}y_{n}-{\varLambda }^{M}_{n-1}y_{n-1} \bigr\Vert \\ =&\bigl\Vert P_{C}(I-\lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n}-P_{C}(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n-1}y_{n-1} \bigr\Vert \\ \leq&\bigl\Vert P_{C}(I-\lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n}-P_{C}(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n}y_{n} \bigr\Vert \\ &{} +\bigl\Vert P_{C}(I-\lambda_{M,n-1}A_{M}){ \varLambda }^{M-1}_{n}y_{n}-P_{C}(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n-1}y_{n-1} \bigr\Vert \\ \leq&\bigl\Vert (I-\lambda_{M,n}A_{M}){ \varLambda }^{M-1}_{n}y_{n}-(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n}y_{n} \bigr\Vert \\ &{} +\bigl\Vert (I-\lambda_{M,n-1}A_{M}){ \varLambda }^{M-1}_{n}y_{n}-(I-\lambda _{M,n-1}A_{M}){\varLambda }^{M-1}_{n-1}y_{n-1} \bigr\Vert \\ \leq&|\lambda_{M,n}-\lambda_{M,n-1}|\bigl\Vert A_{M}{\varLambda }^{M-1}_{n}y_{n}\bigr\Vert +\bigl\Vert {\varLambda }^{M-1}_{n}y_{n}-{ \varLambda }^{M-1}_{n-1}y_{n-1}\bigr\Vert \\ \leq&|\lambda_{M,n}-\lambda_{M,n-1}|\bigl\Vert A_{M}{\varLambda }^{M-1}_{n}y_{n}\bigr\Vert +|\lambda_{M-1,n}-\lambda_{M-1,n-1}|\bigl\Vert A_{M-1}{ \varLambda }^{M-2}_{n}y_{n}\bigr\Vert \\ &{} +\bigl\Vert {\varLambda }^{M-2}_{n}y_{n}-{ \varLambda }^{M-2}_{n-1}y_{n-1}\bigr\Vert \\ \leq&\cdots \\ \leq&|\lambda_{M,n}-\lambda_{M,n-1}|\bigl\Vert A_{M}{\varLambda }^{M-1}_{n}y_{n}\bigr\Vert +|\lambda_{M-1,n}-\lambda_{M-1,n-1}|\bigl\Vert A_{M-1}{ \varLambda }^{M-2}_{n}y_{n}\bigr\Vert \\ &{} +\cdots+|\lambda_{1,n}-\lambda_{1,n-1}|\bigl\Vert A_{1}{\varLambda }^{0}_{n}y_{n}\bigr\Vert +\bigl\Vert {\varLambda }^{0}_{n}y_{n}-{ \varLambda }^{0}_{n-1}y_{n-1}\bigr\Vert \\ \leq&\widetilde{M}_{0} \sum^{M}_{k=1}| \lambda_{k,n}-\lambda _{k,n-1}|+\Vert y_{n}-y_{n-1} \Vert , \end{aligned}$$
(3.7)

where \(\sup_{n\geq1}\{\sum^{M}_{k=1}\|A_{k}{\varLambda }^{k-1}_{n}y_{n,N}\|\} \leq\widetilde{M}_{0}\) and \(\sup_{n\geq1}\{\sum^{M}_{k=1} \|A_{k}{\varLambda }^{k-1}_{n}y_{n}\|\}\leq\widetilde{M}_{0}\) for some \(\widetilde{M}_{0}>0\).

Also, from (2.2), since \(W_{n}\), \(T_{n}\), and \(U_{n,i}\) are all nonexpansive, we have

$$\begin{aligned} \Vert W_{n}\tilde{y}_{n-1,N}-W_{n-1} \tilde{y}_{n-1,N}\Vert =&\Vert \lambda_{1}T_{1}U_{n,2} \tilde{y}_{n-1,N}-\lambda_{1}T_{1}U_{n-1,2} \tilde{y}_{n-1,N}\Vert \\ \leq&\lambda_{1}\Vert U_{n,2}\tilde{y}_{n-1,N}-U_{n-1,2} \tilde{y}_{n-1,N}\Vert \\ =&\lambda_{1}\Vert \lambda_{2}T_{2}U_{n,3} \tilde{y}_{n-1,N}-\lambda _{2}T_{2}U_{n-1,3} \tilde{y}_{n-1,N}\Vert \\ \leq&\lambda_{1}\lambda_{2}\Vert U_{n,3} \tilde{y}_{n-1,N}-U_{n-1,3}\tilde{y}_{n-1,N}\Vert \\ \leq&\cdots \\ \leq&\lambda_{1}\lambda_{2}\cdots\lambda_{n-1} \Vert U_{n,n}\tilde{y}_{n-1,N}-U_{n-1,n} \tilde{y}_{n-1,N}\Vert \\ \leq&\widehat{M} { \prod^{n-1}_{i=1} \lambda_{i}} \end{aligned}$$
(3.8)

and

$$\begin{aligned} \Vert W_{n}\tilde{y}_{n-1}-W_{n-1} \tilde{y}_{n-1}\Vert =&\Vert \lambda_{1}T_{1}U_{n,2} \tilde{y}_{n-1}-\lambda_{1}T_{1}U_{n-1,2} \tilde{y}_{n-1}\Vert \\ \leq&\lambda_{1}\Vert U_{n,2}\tilde{y}_{n-1}-U_{n-1,2} \tilde{y}_{n-1}\Vert \\ =&\lambda_{1}\Vert \lambda_{2}T_{2}U_{n,3} \tilde{y}_{n-1}-\lambda _{2}T_{2}U_{n-1,3} \tilde{y}_{n-1}\Vert \\ \leq&\lambda_{1}\lambda_{2}\Vert U_{n,3} \tilde{y}_{n-1}-U_{n-1,3}\tilde{y}_{n-1}\Vert \\ \leq&\cdots \\ \leq&\lambda_{1}\lambda_{2}\cdots\lambda_{n-1} \Vert U_{n,n}\tilde{y}_{n-1}-U_{n-1,n} \tilde{y}_{n-1}\Vert \\ \leq&\widehat{M} { \prod^{n-1}_{i=1} \lambda_{i}}, \end{aligned}$$
(3.9)

where \(\sup_{n\geq1}\{\|U_{n+1,n+1}\tilde{y}_{n,N}\|+\|U_{n,n+1}\tilde{y}_{n,N}\|\}\leq\widehat{M}\) and \(\sup_{n\geq1} \{\|U_{n+1,n+1}\tilde{y}_{n}\|+\|U_{n,n+1}\tilde{y}_{n}\|\}\leq\widehat{M}\) for some \(\widehat{M}>0\). Combining (3.5), (3.6), and (3.8), we get from \(\{\lambda_{n}\}\subset (0,b]\subset(0,1)\),

$$\begin{aligned}& \Vert y_{n}-y_{n-1}\Vert \\ & \quad \leq(1-\alpha_{n})\Vert W_{n}\tilde{y}_{n,N}-W_{n-1} \tilde{y}_{n-1,N}\Vert +|\alpha_{n}-\alpha_{n-1}|\bigl\Vert f(y_{n-1,N})-W_{n-1}\tilde{y}_{n-1,N}\bigr\Vert \\ & \qquad {}+\alpha_{n}\bigl\Vert f(y_{n,N})-f(y_{n-1,N}) \bigr\Vert \\ & \quad \leq(1-\alpha_{n})\bigl[\Vert W_{n} \tilde{y}_{n,N}-W_{n}\tilde{y}_{n-1,N}\Vert +\Vert W_{n}\tilde{y}_{n-1,N}-W_{n-1}\tilde{y}_{n-1,N} \Vert \bigr] \\ & \qquad {}+|\alpha_{n}-\alpha_{n-1}|\bigl\Vert f(y_{n-1,N})-W_{n-1}\tilde{y}_{n-1,N}\bigr\Vert + \alpha_{n}\rho \Vert y_{n,N}-y_{n-1,N}\Vert \\ & \quad \leq(1-\alpha_{n})\bigl[\Vert \tilde{y}_{n,N}- \tilde{y}_{n-1,N}\Vert +\Vert W_{n}\tilde{y}_{n-1,N}-W_{n-1} \tilde{y}_{n-1,N}\Vert \bigr] \\ & \qquad {}+|\alpha_{n}-\alpha_{n-1}|\bigl\Vert f(y_{n-1,N})-W_{n-1}\tilde{y}_{n-1,N}\bigr\Vert + \alpha_{n}\rho \Vert y_{n,N}-y_{n-1,N}\Vert \\ & \quad \leq(1-\alpha_{n})\Biggl[\widetilde{M}_{0} \sum ^{M}_{k=1}|\lambda _{k,n}- \lambda_{k,n-1}|+\Vert y_{n,N}-y_{n-1,N}\Vert + \widehat{M} { \prod^{n-1}_{i=1} \lambda_{i}}\Biggr] \\ & \qquad {}+|\alpha_{n}-\alpha_{n-1}|\bigl\Vert f(y_{n-1,N})-W_{n-1}\tilde{y}_{n-1,N}\bigr\Vert + \alpha_{n}\rho \Vert y_{n,N}-y_{n-1,N}\Vert \\ & \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Vert y_{n,N}-y_{n-1,N} \Vert +\widetilde{M}_{0} \sum^{M}_{k=1}| \lambda_{k,n}-\lambda_{k,n-1}| \\ & \qquad {}+|\alpha_{n}-\alpha_{n-1}|\bigl\Vert f(y_{n-1,N})-W_{n-1}\tilde{y}_{n-1,N}\bigr\Vert + \widehat{M} { \prod^{n-1}_{i=1} \lambda_{i}}. \end{aligned}$$
(3.10)

Furthermore, from (3.1) we have

$$\left \{ \textstyle\begin{array}{l} x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}\tilde{y}_{n}, \\ x_{n}=(1-\beta_{n-1})y_{n-1}+\beta_{n-1}W_{n-1}\tilde{y}_{n-1}. \end{array}\displaystyle \right . $$

Simple calculations show that

$$ x_{n+1}-x_{n}=(1-\beta_{n}) (y_{n}-y_{n-1})+ \beta_{n}(W_{n}\tilde{y}_{n}-W_{n-1} \tilde{y}_{n-1}) +(\beta_{n}-\beta_{n-1}) (W_{n-1}\tilde{y}_{n-1}-y_{n-1}). $$
(3.11)

Combining (3.7) and (3.9)-(3.11), we get from \(\{\lambda_{n}\}\subset (0,b]\subset(0,1)\),

$$\begin{aligned}& \Vert x_{n+1}-x_{n}\Vert \\ & \quad \leq(1-\beta_{n})\Vert y_{n}-y_{n-1}\Vert +\beta_{n}\Vert W_{n}\tilde{y}_{n}-W_{n-1} \tilde{y}_{n-1}\Vert +|\beta_{n}-\beta_{n-1}|\Vert W_{n-1}\tilde{y}_{n-1}-y_{n-1}\Vert \\ & \quad \leq(1-\beta_{n})\Vert y_{n}-y_{n-1}\Vert +\beta_{n}\bigl[\Vert W_{n}\tilde{y}_{n}-W_{n} \tilde{y}_{n-1}\Vert +\Vert W_{n}\tilde{y}_{n-1}-W_{n-1} \tilde{y}_{n-1}\Vert \bigr] \\ & \qquad {} +|\beta_{n}-\beta_{n-1}|\Vert W_{n-1} \tilde{y}_{n-1}-y_{n-1}\Vert \\ & \quad \leq(1-\beta_{n})\Vert y_{n}-y_{n-1}\Vert +\beta_{n}\bigl[\Vert \tilde{y}_{n}-\tilde{y}_{n-1} \Vert +\Vert W_{n}\tilde{y}_{n-1}-W_{n-1} \tilde{y}_{n-1}\Vert \bigr] \\ & \qquad {} +|\beta_{n}-\beta_{n-1}|\Vert W_{n-1} \tilde{y}_{n-1}-y_{n-1}\Vert \\ & \quad \leq(1-\beta_{n})\Vert y_{n}-y_{n-1}\Vert +\beta_{n}\Biggl[\widetilde{M}_{0} \sum ^{M}_{k=1}|\lambda_{k,n}- \lambda_{k,n-1}|+\Vert y_{n}-y_{n-1}\Vert + \widehat{M} { \prod^{n-1}_{i=1} \lambda_{i}}\Biggr] \\ & \qquad {} +|\beta_{n}-\beta_{n-1}|\Vert W_{n-1} \tilde{y}_{n-1}-y_{n-1}\Vert \\ & \quad \leq \Vert y_{n}-y_{n-1}\Vert +\widetilde{M}_{0} \sum^{M}_{k=1}|\lambda _{k,n}- \lambda_{k,n-1}| +|\beta_{n}-\beta_{n-1}|\Vert W_{n-1}\tilde{y}_{n-1}-y_{n-1}\Vert +\widehat {M} { \prod^{n-1}_{i=1}\lambda_{i}} \\ & \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Vert y_{n,N}-y_{n-1,N} \Vert +\widetilde{M}_{0} \sum^{M}_{k=1}| \lambda_{k,n}-\lambda_{k,n-1}| \\ & \qquad {} +|\alpha_{n}-\alpha_{n-1}|\bigl\Vert f(y_{n-1,N})-W_{n-1}\tilde{y}_{n-1,N}\bigr\Vert + \widehat{M} { \prod^{n-1}_{i=1} \lambda_{i}} \\ & \qquad {} +\widetilde{M}_{0} \sum^{M}_{k=1}| \lambda_{k,n}-\lambda_{k,n-1}| +|\beta_{n}- \beta_{n-1}|\Vert W_{n-1}\tilde{y}_{n-1}-y_{n-1} \Vert +\widehat {M} { \prod^{n-1}_{i=1} \lambda_{i}} \\ & \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Vert y_{n,N}-y_{n-1,N} \Vert +\widetilde{M}_{1}\Biggl[ \sum^{M}_{k=1}| \lambda_{k,n}-\lambda_{k,n-1}| +|\alpha_{n}- \alpha_{n-1}| \\ & \qquad {} +|\beta_{n}-\beta_{n-1}|+b^{n-1}\Biggr], \end{aligned}$$
(3.12)

where \(\sup_{n\geq1}\{\|f(y_{n,N})-W_{n}\tilde{y}_{n,N}\|+\|W_{n}\tilde{y}_{n}-y_{n}\|+2\widehat{M}+2\widetilde{M}_{0}\}\leq\widetilde{M}_{1}\) for some \(\widetilde{M}_{1}>0\).

In the meantime, by the definition of \(y_{n,i}\) one obtains, for all \(i=N,\ldots,2\),

$$\begin{aligned} \|y_{n,i}-y_{n-1,i}\| \leq&\beta_{n,i} \|u_{n}-u_{n-1}\|+\| S_{i}u_{n-1}-y_{n-1,i-1} \||\beta_{n,i}-\beta_{n-1,i}| \\ &{}+(1-\beta_{n,i}) \|y_{n,i-1}-y_{n-1,i-1}\|. \end{aligned}$$
(3.13)

In the case \(i=1\), we have

$$\begin{aligned} \|y_{n,1}-y_{n-1,1}\|&\leq\beta_{n,1} \|u_{n}-u_{n-1}\|+\| S_{1}u_{n-1}-u_{n-1} \||\beta_{n,1}-\beta_{n-1,1}| +(1-\beta_{n,1}) \|u_{n}-u_{n-1}\| \\ &=\|u_{n}-u_{n-1}\|+\|S_{1}u_{n-1}-u_{n-1} \||\beta_{n,1}-\beta_{n-1,1}|. \end{aligned}$$
(3.14)

Substituting (3.14) into all the (3.13)-type inequalities, one obtains, for \(i=2,\ldots,N\),

$$\begin{aligned} \begin{aligned} \|y_{n,i}-y_{n-1,i}\|\leq{}&\|u_{n}-u_{n-1}\|+ \sum^{i}_{k=2}\| S_{k}u_{n-1}-y_{n-1,k-1} \||\beta_{n,k}-\beta_{n-1,k}| \\ &{}+\|S_{1}u_{n-1}-u_{n-1} \||\beta_{n,1}-\beta_{n-1,1}|. \end{aligned} \end{aligned}$$

This together with (3.12) implies that

$$\begin{aligned}& \Vert x_{n+1}-x_{n}\Vert \\& \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Vert y_{n,N}-y_{n-1,N} \Vert +\widetilde{M}_{1}\Biggl[ \sum^{M}_{k=1}| \lambda_{k,n}-\lambda_{k,n-1}| +|\alpha_{n}- \alpha_{n-1}| \\& \qquad {} +|\beta_{n}-\beta_{n-1}|+b^{n-1}\Biggr] \\& \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Biggl[\Vert u_{n}-u_{n-1}\Vert +{ \sum^{N}_{k=2}} \Vert S_{k}u_{n-1}-y_{n-1,k-1}\Vert | \beta_{n,k}-\beta_{n-1,k}| \\& \qquad {} +\Vert S_{1}u_{n-1}-u_{n-1}\Vert | \beta_{n,1}-\beta_{n-1,1}|\Biggr]+\widetilde{M}_{1} \Biggl[ \sum^{M}_{k=1}|\lambda_{k,n}- \lambda_{k,n-1}| \\& \qquad {} +|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}- \beta_{n-1}|+b^{n-1}\Biggr] \\& \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Vert u_{n}-u_{n-1} \Vert +{ \sum^{N}_{k=2}}\Vert S_{k}u_{n-1}-y_{n-1,k-1}\Vert |\beta_{n,k}- \beta_{n-1,k}| \\& \qquad {} +\Vert S_{1}u_{n-1}-u_{n-1}\Vert | \beta_{n,1}-\beta_{n-1,1}|+\widetilde{M}_{1}\Biggl[ \sum^{M}_{k=1}|\lambda_{k,n}- \lambda_{k,n-1}| \\& \qquad {} +|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}- \beta_{n-1}|+b^{n-1}\Biggr]. \end{aligned}$$
(3.15)

By Lemma 2.10, we know that

$$ \|u_{n}-u_{n-1}\|\leq\|x_{n}-x_{n-1}\|+L \biggl\vert 1-\frac{r_{n-1}}{r_{n}}\biggr\vert , $$
(3.16)

where \(L=\sup_{n\geq1}\|u_{n}-x_{n}\|\). So, substituting (3.16) in (3.15) we obtain

$$\begin{aligned}& \Vert x_{n+1}-x_{n}\Vert \\& \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr) \biggl(\Vert x_{n}-x_{n-1}\Vert +L\biggl\vert 1-\frac {r_{n-1}}{r_{n}} \biggr\vert \biggr)+{ \sum^{N}_{k=2}} \Vert S_{k}u_{n-1}-y_{n-1,k-1}\Vert | \beta_{n,k}-\beta_{n-1,k}| \\& \qquad {}+\Vert S_{1}u_{n-1}-u_{n-1}\Vert | \beta_{n,1}-\beta_{n-1,1}|+\widetilde{M}_{1}\Biggl[ \sum^{M}_{k=1}|\lambda_{k,n}- \lambda_{k,n-1}| \\& \qquad {}+|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}- \beta_{n-1}|+b^{n-1}\Biggr] \\& \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Vert x_{n}-x_{n-1} \Vert +L\biggl\vert 1-\frac {r_{n-1}}{r_{n}}\biggr\vert +{ \sum ^{N}_{k=2}}\Vert S_{k}u_{n-1}-y_{n-1,k-1} \Vert |\beta_{n,k}-\beta_{n-1,k}| \\& \qquad {}+\Vert S_{1}u_{n-1}-u_{n-1}\Vert | \beta_{n,1}-\beta_{n-1,1}|+\widetilde{M}_{1}\Biggl[ \sum^{M}_{k=1}|\lambda_{k,n}- \lambda_{k,n-1}| \\& \qquad {}+|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}- \beta_{n-1}|\Biggr]+\widetilde{M}_{1}b^{n-1} \\& \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Vert x_{n}-x_{n-1} \Vert +\widetilde{M}_{2}\Biggl[\frac {|r_{n}-r_{n-1}|}{r_{n}}+{ \sum ^{N}_{k=2}}|\beta_{n,k}-\beta _{n-1,k}| \\& \qquad {}+|\beta_{n,1}-\beta_{n-1,1}|+ \sum ^{M}_{k=1}|\lambda _{k,n}- \lambda_{k,n-1}| +|\alpha_{n}-\alpha_{n-1}|+| \beta_{n}-\beta_{n-1}|\Biggr]+\widetilde{M}_{2}b^{n-1} \\& \quad \leq\bigl(1-\alpha_{n}(1-\rho)\bigr)\Vert x_{n}-x_{n-1} \Vert +\widetilde{M}_{2}\Biggl[\frac {|r_{n}-r_{n-1}|}{\gamma}+{ \sum ^{N}_{k=1}}|\beta_{n,k}-\beta _{n-1,k}| \\& \qquad {}+ \sum^{M}_{k=1}| \lambda_{k,n}-\lambda_{k,n-1}|+|\alpha _{n}- \alpha_{n-1}|+|\beta_{n}-\beta_{n-1}|\Biggr]+ \widetilde{M}_{2}b^{n-1}, \end{aligned}$$
(3.17)

where \(\gamma>0\) is a minorant for \(\{r_{n}\}\) and \(\sup_{n\geq1}\{ L+\widetilde{M}_{1}+\sum^{N}_{k=2}\|S_{k}u_{n}-y_{n,k-1}\|+\|S_{1}u_{n}-u_{n}\|\}\leq \widetilde{M}_{2}\) for some \(\widetilde{M}_{2}>0\). By hypotheses (H1)-(H6) and Lemma 2.8, we obtain the claim. □

Lemma 3.3

Let us suppose that \({\varOmega }\neq\emptyset\). Let us suppose that \(\{x_{n}\}\) is asymptotically regular. Then \(\|x_{n}-y_{n}\|\to0\), \(\|y_{n}-Wy_{n}\|\to0\), and \(\|x_{n}-u_{n}\|=\| x_{n}-T_{r_{n}}x_{n}\|\to0\) as \(n\to\infty\).

Proof

Taking into account \(0<\liminf_{n\to\infty}\beta_{n}\leq \limsup_{n\to\infty}\beta_{n}<1\) we may assume, without loss of generality, that \(\{\beta_{n}\}\subset[c,d]\subset(0,1)\). Let \(p\in {\varOmega }\). Then from (2.1) and (3.2) it follows that, for all \(k\in\{1,2,\ldots,M\}\),

$$\begin{aligned} \Vert \tilde{y}_{n,N}-p\Vert ^{2}&=\bigl\Vert { \varLambda }^{M}_{n}y_{n,N}-p\bigr\Vert ^{2} \\ &\leq\bigl\Vert {\varLambda }^{k}_{n}y_{n,N}-p\bigr\Vert ^{2} \\ &=\bigl\Vert P_{C}(I-\lambda_{k,n}A_{k}){ \varLambda }^{k-1}_{n}y_{n,N}-P_{C}(I-\lambda _{k,n}A_{k})p\bigr\Vert ^{2} \\ &\leq\bigl\Vert (I-\lambda_{k,n}A_{k}){ \varLambda }^{k-1}_{n}y_{n,N}-(I-\lambda _{k,n}A_{k})p\bigr\Vert ^{2} \\ &\leq\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-p\bigr\Vert ^{2}+\lambda_{k,n}(\lambda _{k,n}-2 \eta_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert ^{2} \\ &\leq \Vert y_{n,N}-p\Vert ^{2}+\lambda_{k,n}( \lambda_{k,n}-2\eta_{k})\bigl\Vert A_{k}{ \varLambda }^{k-1}_{n}y_{n,N}-A_{k}p\bigr\Vert ^{2}. \end{aligned}$$
(3.18)

Similarly, we have

$$\begin{aligned} \|\tilde{y}_{n}-p\|^{2}&=\bigl\Vert {\varLambda }^{M}_{n}y_{n}-p \bigr\Vert ^{2} \\ &\leq\|y_{n}-p\|^{2}+\lambda_{k,n}( \lambda_{k,n}-2\eta_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p\bigr\Vert ^{2}. \end{aligned}$$
(3.19)

So, utilizing the convexity of \(\|\cdot\|^{2}\), we get from (3.1)-(3.2) and (3.18)-(3.19)

$$\begin{aligned} \Vert y_{n}-p\Vert ^{2}&=\bigl\Vert \alpha_{n}\bigl(f(y_{n,N})-p\bigr)+(1-\alpha _{n}) \bigl(W_{n}{\varLambda }^{M}_{n}y_{n,N}-p\bigr) \bigr\Vert ^{2} \\ &\leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+(1-\alpha_{n})\bigl\Vert W_{n}{\varLambda }^{M}_{n}y_{n,N}-p\bigr\Vert ^{2} \\ &\leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\bigl\Vert {\varLambda }^{M}_{n}y_{n,N}-p \bigr\Vert ^{2} \\ &\leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert y_{n,N}-p\Vert ^{2}+ \lambda_{k,n}(\lambda _{k,n}-2\eta_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert ^{2}, \end{aligned}$$

and hence

$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2} =&\bigl\Vert (1- \beta_{n}) (y_{n}-p)+\beta_{n}\bigl(W_{n}{ \varLambda }^{M}_{n}y_{n}-p\bigr)\bigr\Vert ^{2} \\ \leq&(1-\beta_{n})\Vert y_{n}-p\Vert ^{2}+ \beta_{n}\bigl\Vert W_{n}{\varLambda }^{M}_{n}y_{n}-p \bigr\Vert ^{2} \\ \leq&(1-\beta_{n})\Vert y_{n}-p\Vert ^{2}+ \beta_{n}\bigl\Vert {\varLambda }^{M}_{n}y_{n}-p \bigr\Vert ^{2} \\ \leq&(1-\beta_{n})\Vert y_{n}-p\Vert ^{2}+ \beta_{n}\bigl[\Vert y_{n}-p\Vert ^{2}+ \lambda_{k,n}(\lambda _{k,n}-2\eta_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert ^{2}\bigr] \\ =&\Vert y_{n}-p\Vert ^{2}+\beta_{n} \lambda_{k,n}(\lambda_{k,n}-2\eta_{k})\bigl\Vert A_{k}{ \varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert y_{n,N}-p\Vert ^{2}+ \lambda_{k,n}(\lambda _{k,n}-2\eta_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert ^{2} \\ &{} +\beta_{n}\lambda_{k,n}(\lambda_{k,n}-2 \eta_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}+ \lambda_{k,n}(\lambda _{k,n}-2\eta_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert ^{2} \\ &{} +\beta_{n}\lambda_{k,n}(\lambda_{k,n}-2 \eta_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert ^{2}. \end{aligned}$$

This together with \(\{\lambda_{k,n}\}\subset[a_{k},b_{k}]\subset(0,2\eta _{k})\), \(k=1,\ldots,M\), implies that

$$\begin{aligned}& a_{k}(2\eta_{k}-b_{k})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert ^{2} +ca_{k}(2\eta_{k}-b_{k}) \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert ^{2} \\& \quad \leq\lambda_{k,n}(2\eta_{k}-\lambda_{k,n}) \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert ^{2} +\beta_{n}\lambda_{k,n}(2 \eta_{k}-\lambda_{k,n})\bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p\bigr\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}-\Vert x_{n+1}-p\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n}-p\Vert +\Vert x_{n+1}-p\Vert \bigr). \end{aligned}$$

Since \(\alpha_{n}\to0\) and \(\|x_{n+1}-x_{n}\|\to0\) as \(n\to\infty\), from the boundedness of \(\{x_{n}\}\) and \(\{y_{n,N}\}\) we get

$$ \lim_{n\to\infty}\bigl\| A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\| =0\quad \mbox{and}\quad \lim_{n\to\infty}\bigl\| A_{k}{ \varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\| =0. $$
(3.20)

We recall that, by the firm nonexpansivity of \(T_{r_{n}}\), a standard calculation (see [37]) shows that for \(p\in\operatorname{GMEP}({\varTheta },h)\),

$$\|u_{n}-p\|^{2}\leq\|x_{n}-p\|^{2}- \|x_{n}-u_{n}\|^{2}. $$

By Proposition 2.1(iii), we deduce that, for each \(k\in\{1,2,\ldots,M\}\),

$$\begin{aligned}& \bigl\Vert {\varLambda }^{k}_{n}y_{n,N}-p\bigr\Vert ^{2} \\& \quad =\bigl\Vert P_{C}(I-\lambda_{k,n}A_{k}){ \varLambda }^{k-1}_{n}y_{n,N}-P_{C}(I-\lambda _{k,n}A_{k})p\bigr\Vert ^{2} \\& \quad \leq\bigl\langle (I-\lambda_{k,n}A_{k}){ \varLambda }^{k-1}_{n}y_{n,N}-(I-\lambda _{k,n}A_{k})p,{\varLambda }^{k}_{n}y_{n,N}-p \bigr\rangle \\& \quad =\frac{1}{2}\bigl(\bigl\Vert (I-\lambda_{k,n}A_{k}){ \varLambda }^{k-1}_{n}y_{n,N}-(I-\lambda_{k,n}A_{k})p \bigr\Vert ^{2}+\bigl\Vert {\varLambda }^{k}_{n}y_{n,N}-p \bigr\Vert ^{2} \\& \qquad {}-\bigl\Vert (I-\lambda_{k,n}A_{k}){ \varLambda }^{k-1}_{n}y_{n,N}-(I-\lambda _{k,n}A_{k})p-\bigl({\varLambda }^{k}_{n}y_{n,N}-p \bigr)\bigr\Vert ^{2}\bigr) \\& \quad \leq\frac{1}{2}\bigl(\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-p \bigr\Vert ^{2}+\bigl\Vert {\varLambda }^{k}_{n}y_{n,N}-p \bigr\Vert ^{2} \\& \qquad {}-\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}-\lambda _{k,n} \bigl(A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr)\bigr\Vert ^{2}\bigr) \\& \quad \leq\frac{1}{2}\bigl(\Vert y_{n,N}-p\Vert ^{2}+\bigl\Vert {\varLambda }^{k}_{n}y_{n,N}-p \bigr\Vert ^{2} \\ & \qquad {}-\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}-\lambda _{k,n} \bigl(A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr)\bigr\Vert ^{2}\bigr), \end{aligned}$$

which implies

$$\begin{aligned} \bigl\Vert {\varLambda }^{k}_{n}y_{n,N}-p\bigr\Vert ^{2} \leq&\Vert y_{n,N}-p\Vert ^{2}-\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{\varLambda }^{k}_{n}y_{n,N}-\lambda_{k,n} \bigl(A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr)\bigr\Vert ^{2} \\ =&\Vert y_{n,N}-p\Vert ^{2}-\bigl\Vert { \varLambda }^{k-1}_{n}y_{n,N}-{\varLambda }^{k}_{n}y_{n,N}\bigr\Vert ^{2}- \lambda^{2}_{k,n}\bigl\Vert A_{k}{ \varLambda }^{k-1}_{n}y_{n,N}-A_{k}p\bigr\Vert ^{2} \\ &{} +2\lambda_{k,n}\bigl\langle { \varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N},A_{k}{ \varLambda }^{k-1}_{n}y_{n,N}-A_{k}p\bigr\rangle \\ \leq&\Vert y_{n,N}-p\Vert ^{2}-\bigl\Vert { \varLambda }^{k-1}_{n}y_{n,N}-{\varLambda }^{k}_{n}y_{n,N}\bigr\Vert ^{2} \\ &{} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert . \end{aligned}$$
(3.21)

Similarly, we have

$$\begin{aligned} \bigl\Vert {\varLambda }^{k}_{n}y_{n}-p\bigr\Vert ^{2} \leq&\Vert y_{n}-p\Vert ^{2}-\bigl\Vert { \varLambda }^{k-1}_{n}y_{n}-{\varLambda }^{k}_{n}y_{n} \bigr\Vert ^{2} \\ &{}+2\lambda_{k,n}\bigl\Vert { \varLambda }^{k-1}_{n}y_{n}-{\varLambda }^{k}_{n}y_{n} \bigr\Vert \bigl\Vert A_{k}{ \varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert . \end{aligned}$$
(3.22)

Thus, by Lemma 2.2(b), we get from (3.1)-(3.2) and (3.21)-(3.22)

$$\begin{aligned} \Vert y_{n}-p\Vert ^{2} \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+(1-\alpha_{n})\bigl\Vert W_{n}{\varLambda }^{M}_{n}y_{n,N}-p\bigr\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\bigl\Vert {\varLambda }^{M}_{n}y_{n,N}-p \bigr\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\bigl\Vert {\varLambda }^{k}_{n}y_{n,N}-p \bigr\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert y_{n,N}-p\Vert ^{2}-\bigl\Vert { \varLambda }^{k-1}_{n}y_{n,N}-{\varLambda }^{k}_{n}y_{n,N} \bigr\Vert ^{2} \\ &{} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert u_{n}-p\Vert ^{2}-\bigl\Vert { \varLambda }^{k-1}_{n}y_{n,N}-{\varLambda }^{k}_{n}y_{n,N} \bigr\Vert ^{2} \\ &{} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}-\Vert x_{n}-u_{n}\Vert ^{2}-\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{\varLambda }^{k}_{n}y_{n,N} \bigr\Vert ^{2} \\ &{} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert , \end{aligned}$$

and hence

$$\begin{aligned}& \Vert x_{n+1}-p\Vert ^{2} \\& \quad =(1-\beta_{n})\Vert y_{n}-p\Vert ^{2}+ \beta_{n}\bigl\Vert W_{n}{\varLambda }^{M}_{n}y_{n}-p \bigr\Vert ^{2}-\beta _{n}(1-\beta_{n})\bigl\Vert y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n} \bigr\Vert ^{2} \\& \quad \leq(1-\beta_{n})\Vert y_{n}-p\Vert ^{2}+\beta_{n}\bigl\Vert {\varLambda }^{M}_{n}y_{n}-p \bigr\Vert ^{2}-\beta _{n}(1-\beta_{n})\bigl\Vert y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n} \bigr\Vert ^{2} \\& \quad \leq(1-\beta_{n})\Vert y_{n}-p\Vert ^{2}+\beta_{n}\bigl\Vert {\varLambda }^{k}_{n}y_{n}-p \bigr\Vert ^{2}-\beta _{n}(1-\beta_{n})\bigl\Vert y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n} \bigr\Vert ^{2} \\& \quad \leq(1-\beta_{n})\Vert y_{n}-p\Vert ^{2}+\beta_{n}\bigl[\Vert y_{n}-p\Vert ^{2}-\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert ^{2} \\& \qquad {} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert \bigr] -\beta_{n}(1-\beta_{n})\bigl\Vert y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n} \bigr\Vert ^{2} \\& \quad \leq \Vert y_{n}-p\Vert ^{2}-\beta_{n} \bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert ^{2} \\& \qquad {} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert -\beta_{n}(1-\beta_{n})\bigl\Vert y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n} \bigr\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}-\Vert x_{n}-u_{n}\Vert ^{2}-\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{\varLambda }^{k}_{n}y_{n,N} \bigr\Vert ^{2} \\& \qquad {} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert -\beta_{n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert ^{2} \\& \qquad {} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert -\beta_{n}(1-\beta_{n})\bigl\Vert y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n} \bigr\Vert ^{2}. \end{aligned}$$

This together with \(\{\lambda_{k,n}\}\subset[a_{k},b_{k}]\subset(0,2\eta _{k})\), \(k=1,\ldots,M\), implies that

$$\begin{aligned}& \Vert x_{n}-u_{n}\Vert ^{2}+\bigl\Vert { \varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N} \bigr\Vert ^{2}+c\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert ^{2} \\& \qquad {}+c(1-d)\bigl\Vert y_{n}-W_{n}{ \varLambda }^{M}_{n}y_{n}\bigr\Vert ^{2} \\& \quad \leq \Vert x_{n}-u_{n}\Vert ^{2}+\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{\varLambda }^{k}_{n}y_{n,N}\bigr\Vert ^{2}+ \beta_{n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert ^{2} \\& \qquad {} +\beta_{n}(1-\beta_{n})\bigl\Vert y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n} \bigr\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}-\Vert x_{n+1}-p\Vert ^{2} \\& \qquad {} +2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert \\& \qquad {}+2\lambda_{k,n}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n}-p\Vert +\Vert x_{n+1}-p\Vert \bigr) \\& \qquad {}+2b_{k}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}\bigr\Vert \bigl\Vert A_{k}{\varLambda }^{k-1}_{n}y_{n,N}-A_{k}p \bigr\Vert \\& \qquad {}+2b_{k}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert \bigl\Vert A_{k}{ \varLambda }^{k-1}_{n}y_{n}-A_{k}p \bigr\Vert . \end{aligned}$$
(3.23)

Since \(\alpha_{n}\to0\) and \(\|x_{n+1}-x_{n}\|\to0\) as \(n\to\infty\), and \(\{ x_{n}\}\), \(\{y_{n}\}\), and \(\{y_{n,N}\}\) are bounded, from (3.20) and (3.23) we conclude that

$$ \lim_{n\to\infty}\|x_{n}-u_{n}\|=\lim _{n\to\infty}\bigl\Vert y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n}\bigr\Vert =0 $$
(3.24)

and

$$ \lim_{n\to\infty}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n,N}-{ \varLambda }^{k}_{n}y_{n,N}\bigr\Vert =\lim _{n\to\infty}\bigl\Vert {\varLambda }^{k-1}_{n}y_{n}-{ \varLambda }^{k}_{n}y_{n}\bigr\Vert =0 $$
(3.25)

for all \(k\in\{1,\ldots,M\}\). Therefore we get

$$\begin{aligned} \Vert y_{n,N}-\tilde{y}_{n,N}\Vert =&\bigl\Vert {\varLambda }^{0}_{n}y_{n,N}-{\varLambda }^{M}_{n}y_{n,N} \bigr\Vert \\ \leq&\bigl\Vert {\varLambda }^{0}_{n}y_{n,N}-{ \varLambda }^{1}_{n}y_{n,N}\bigr\Vert +\bigl\Vert { \varLambda }^{1}_{n}y_{n,N}-{\varLambda }^{2}_{n}y_{n,N} \bigr\Vert \\ &{}+\cdots+\bigl\Vert {\varLambda }^{M-1}_{n}y_{n,N}-{ \varLambda }^{M}_{n}y_{n,N}\bigr\Vert \\ \to&0\quad \mbox{as } n\to\infty \end{aligned}$$
(3.26)

and

$$\begin{aligned} \Vert y_{n}-\tilde{y}_{n}\Vert &=\bigl\Vert { \varLambda }^{0}_{n}y_{n}-{\varLambda }^{M}_{n}y_{n} \bigr\Vert \\ &\leq\bigl\Vert {\varLambda }^{0}_{n}y_{n}-{ \varLambda }^{1}_{n}y_{n}\bigr\Vert +\bigl\Vert { \varLambda }^{1}_{n}y_{n}-{\varLambda }^{2}_{n}y_{n} \bigr\Vert +\cdots+\bigl\Vert {\varLambda }^{M-1}_{n}y_{n}-{ \varLambda }^{M}_{n}y_{n}\bigr\Vert \\ &\to0 \quad \mbox{as } n\to\infty. \end{aligned}$$
(3.27)

We note that \(\|x_{n+1}-y_{n}\|=\beta_{n}\|W_{n}{\varLambda }^{M}_{n}y_{n}-y_{n}\|\to 0\) as \(n\to\infty\). This, together with \(\|x_{n+1}-x_{n}\|\to0\), implies that

$$ \lim_{n\to\infty}\|x_{n}-y_{n}\|=0. $$
(3.28)

In addition, observe that

$$\begin{aligned} \Vert W_{n}y_{n}-y_{n}\Vert &\leq\bigl\Vert W_{n}y_{n}-W_{n}{\varLambda }^{M}_{n}y_{n} \bigr\Vert +\bigl\Vert W_{n}{\varLambda }^{M}_{n}y_{n}-y_{n} \bigr\Vert \\ &\leq\bigl\Vert y_{n}-{\varLambda }^{M}_{n}y_{n} \bigr\Vert +\bigl\Vert W_{n}{\varLambda }^{M}_{n}y_{n}-y_{n} \bigr\Vert . \end{aligned}$$

Hence from (3.24) and (3.27) it follows that

$$\lim_{n\to\infty}\|W_{n}y_{n}-y_{n} \|=0. $$

Utilizing the boundedness of \(\{y_{n}\}\) and Remark 2.2, we conclude that

$$\begin{aligned} \|Wy_{n}-y_{n}\|&\leq\|Wy_{n}-W_{n}y_{n} \|+\|W_{n}y_{n}-y_{n}\| \\ &\to0\quad \mbox{as } n\to\infty. \end{aligned}$$
(3.29)

 □

Remark 3.1

By the last lemma we have \(\omega_{w}(x_{n})=\omega _{w}(u_{n})\) and \(\omega_{s}(x_{n})=\omega_{s}(u_{n})\), i.e., the sets of strong/weak cluster points of \(\{x_{n}\}\) and \(\{u_{n}\}\) coincide.

Of course, if \(\beta_{n,i}\to\beta_{i}\neq0\) as \(n\to\infty\) for all indices i, then the assumptions of Lemma 3.2 are enough to ensure that

$$\lim_{n\to\infty}\frac{\|x_{n+1}-x_{n}\|}{\beta_{n,i}}=0,\quad \forall i\in \{1,\ldots,N \}. $$

In the next lemma, we treat the case in which at least one sequence \(\{\beta_{n,k_{0}}\}\) is a null sequence.

Lemma 3.4

Let us suppose that \({\varOmega }\neq\emptyset\). Let us suppose that (H1) holds. Moreover, for an index \(k_{0}\in\{1,\ldots,N\}\), \(\lim_{n\to\infty}\beta_{n,k_{0}}=0\), and the following hold:

  1. (H7)

    for each \(i\in\{1,\ldots,N\}\) and \(k\in\{1,\ldots,M\}\),

    $$\begin{aligned} { \lim_{n\to\infty}}\frac{|\beta _{n,i}-\beta_{n-1,i}|}{\alpha_{n}\beta_{n,k_{0}}} &={ \lim_{n\to\infty}} \frac{|\alpha_{n}-\alpha_{n-1}|}{\alpha _{n}\beta_{n,k_{0}}} ={ \lim_{n\to\infty}}\frac{|\beta_{n}-\beta_{n-1}|}{\alpha _{n}\beta_{n,k_{0}}} \\ &={ \lim_{n\to\infty}}\frac{|r_{n}-r_{n-1}|}{\alpha_{n}\beta_{n,k_{0}}} ={ \lim_{n\to\infty}} \frac{b^{n}}{\alpha_{n}\beta_{n,k_{0}}} ={ \lim_{n\to\infty}}\frac{|\lambda_{k,n}-\lambda _{k,n-1}|}{\alpha_{n}\beta_{n,k_{0}}}=0; \end{aligned}$$
  2. (H8)

    there exists a constant \(\tau>0\) such that \(\frac{1}{\alpha _{n}}|\frac{1}{\beta_{n,k_{0}}}-\frac{1}{\beta_{n-1,k_{0}}}|< \tau\) for all \(n\geq1\).

Then

$$\lim_{n\to\infty}\frac{\|x_{n+1}-x_{n}\|}{\beta_{n,k_{0}}}=0. $$

Proof

We start from (3.17). Dividing both sides by \(\beta_{n,k_{0}}\), we have

$$\begin{aligned} \frac{\|x_{n+1}-x_{n}\|}{\beta_{n,k_{0}}} \leq&\bigl[1-\alpha_{n}(1-\rho)\bigr] \frac{\|x_{n}-x_{n-1}\|}{\beta_{n-1,k_{0}}} \\ &{}+\bigl[1-\alpha_{n}(1-\rho)\bigr] \|x_{n}-x_{n-1}\|\biggl\vert \frac{1}{\beta_{n,k_{0}}}- \frac {1}{\beta_{n-1,k_{0}}}\biggr\vert \\ &{} +\widetilde{M}_{2}\Biggl[\frac{|r_{n}-r_{n-1}|}{\gamma\beta _{n,k_{0}}}+{ \sum ^{N}_{k=1}}\frac{|\beta_{n,k}-\beta _{n-1,k}|}{\beta_{n,k_{0}}} \\ &{} + \sum^{M}_{k=1}\frac{|\lambda_{k,n}-\lambda _{k,n-1}|}{\beta_{n,k_{0}}}+ \frac{|\alpha_{n}-\alpha_{n-1}|}{\beta_{n,k_{0}}} +\frac{|\beta_{n}-\beta_{n-1}|}{\beta_{n,k_{0}}}+\frac{b^{n-1}}{\beta _{n,k_{0}}}\Biggr] \\ \leq&\bigl[1-\alpha_{n}(1-\rho)\bigr]\frac{\|x_{n}-x_{n-1}\|}{\beta_{n-1,k_{0}}}+\| x_{n}-x_{n-1}\|\biggl\vert \frac{1}{\beta_{n,k_{0}}}- \frac{1}{\beta_{n-1,k_{0}}}\biggr\vert \\ &{} +\widetilde{M}_{2}\Biggl[\frac{|r_{n}-r_{n-1}|}{\gamma\beta _{n,k_{0}}}+{ \sum ^{N}_{k=1}}\frac{|\beta_{n,k}-\beta _{n-1,k}|}{\beta_{n,k_{0}}} \\ &{} + \sum^{M}_{k=1}\frac{|\lambda_{k,n}-\lambda _{k,n-1}|}{\beta_{n,k_{0}}}+ \frac{|\alpha_{n}-\alpha_{n-1}|}{\beta_{n,k_{0}}} +\frac{|\beta_{n}-\beta_{n-1}|}{\beta_{n,k_{0}}}+\frac{b^{n}}{b\beta _{n,k_{0}}}\Biggr] \\ \leq&\bigl[1-\alpha_{n}(1-\rho)\bigr]\frac{\|x_{n}-x_{n-1}\|}{\beta_{n-1,k_{0}}}+\alpha _{n}\tau\|x_{n}-x_{n-1}\| \\ &{} +\widetilde{M}_{2}\Biggl[\frac{|r_{n}-r_{n-1}|}{\gamma\beta _{n,k_{0}}}+{ \sum ^{N}_{k=1}}\frac{|\beta_{n,k}-\beta _{n-1,k}|}{\beta_{n,k_{0}}} \\ &{} + \sum^{M}_{k=1}\frac{|\lambda_{k,n}-\lambda _{k,n-1}|}{\beta_{n,k_{0}}}+ \frac{|\alpha_{n}-\alpha_{n-1}|}{\beta_{n,k_{0}}} +\frac{|\beta_{n}-\beta_{n-1}|}{\beta_{n,k_{0}}}+\frac{b^{n}}{b\beta _{n,k_{0}}}\Biggr] \\ =&\bigl[1-\alpha_{n}(1-\rho)\bigr]\frac{\|x_{n}-x_{n-1}\|}{\beta_{n-1,k_{0}}}+\alpha _{n}(1-\rho)\cdot\frac{1}{1-\rho}\Biggl\{ \tau\|x_{n}-x_{n-1} \| \\ &{} +\widetilde{M}_{2}\Biggl[\frac{|r_{n}-r_{n-1}|}{\gamma\alpha_{n}\beta _{n,k_{0}}}+{ \sum ^{N}_{k=1}}\frac{|\beta_{n,k}-\beta _{n-1,k}|}{\alpha_{n}\beta_{n,k_{0}}} \\ &{} + \sum^{M}_{k=1}\frac{|\lambda_{k,n}-\lambda _{k,n-1}|}{\alpha_{n}\beta_{n,k_{0}}}+ \frac{|\alpha_{n}-\alpha_{n-1}|}{\alpha _{n}\beta_{n,k_{0}}} +\frac{|\beta_{n}-\beta_{n-1}|}{\alpha_{n}\beta_{n,k_{0}}}+\frac{b^{n}}{b\alpha _{n}\beta_{n,k_{0}}}\Biggr]\Biggr\} . \end{aligned}$$

Therefore, utilizing Lemma 2.8 together with (H1), (H7), and the asymptotic regularity of \(\{x_{n}\}\) (due to Lemma 3.2), we deduce that

$$\lim_{n\to\infty}\frac{\|x_{n+1}-x_{n}\|}{\beta_{n,k_{0}}}=0. $$

 □

Lemma 3.5

Let us suppose that \({\varOmega }\neq\emptyset\). Let us suppose that \(0<\liminf_{n\to\infty}\beta_{n,i} \leq \limsup_{n\to\infty}\beta_{n,i}<1\) for each \(i=1,\ldots,N\). Moreover, suppose that (H1)-(H6) are satisfied. Then \(\lim_{n\to\infty}\|S_{i}u_{n}-u_{n}\|=0\) for each \(i=1,\ldots,N\).

Proof

First of all, by Lemma 3.2, we know that \(\{x_{n}\}\) is asymptotically regular, i.e., \(\lim_{n\to\infty} \|x_{n+1}-x_{n}\|=0\). Let us show that for each \(i\in\{1,\ldots,N\}\), one has \(\|S_{i}u_{n}-y_{n,i-1}\|\to0\) as \(n\to\infty\). Let \(p\in{ \varOmega }\). When \(i=N\), by Lemma 2.2(b) we have from (3.2) and (3.3)

$$\begin{aligned}& \Vert y_{n}-p\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+(1-\alpha_{n})\Vert W_{n} \tilde{y}_{n,N}-p\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+(1-\alpha_{n})\Vert \tilde{y}_{n,N}-p\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert y_{n,N}-p\Vert ^{2} \\& \quad =\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\beta_{n,N}\Vert S_{N}u_{n}-p \Vert ^{2}+(1-\beta _{n,N})\Vert y_{n,N-1}-p\Vert ^{2} \\& \qquad {} -\beta_{n,N}(1-\beta_{n,N})\Vert S_{N}u_{n}-y_{n,N-1}\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\beta_{n,N}\Vert u_{n}-p\Vert ^{2}+(1-\beta _{n,N})\Vert u_{n}-p\Vert ^{2} \\& \qquad {} -\beta_{n,N}(1-\beta_{n,N})\Vert S_{N}u_{n}-y_{n,N-1}\Vert ^{2} \\& \quad =\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert u_{n}-p\Vert ^{2}- \beta_{n,N}(1-\beta_{n,N})\Vert S_{N}u_{n}-y_{n,N-1} \Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}- \beta_{n,N}(1-\beta _{n,N})\Vert S_{N}u_{n}-y_{n,N-1} \Vert ^{2}. \end{aligned}$$

So, we have

$$\begin{aligned}& \beta_{n,N}(1-\beta_{n,N})\Vert S_{N}u_{n}-y_{n,N-1} \Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p \bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}- \Vert y_{n}-p\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-y_{n}\Vert \bigl(\Vert x_{n}-p\Vert +\Vert y_{n}-p\Vert \bigr). \end{aligned}$$

Since \(\alpha_{n}\to0\), \(0<\liminf_{n\to\infty}\beta_{n,N}\leq\limsup_{n\to\infty}\beta_{n,N}<1\), and \(\lim_{n\to\infty}\|x_{n}-y_{n}\|=0\) (due to (3.28)), it follows that \(\{\|S_{N}u_{n}-y_{n,N-1}\|\}\) is a null sequence.

Let \(i\in\{1,\ldots,N-1\}\). Then one has

$$\begin{aligned} \Vert y_{n}-p\Vert ^{2} \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert y_{n,N}-p\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\beta_{n,N}\Vert S_{N}u_{n}-p \Vert ^{2}+(1-\beta _{n,N})\Vert y_{n,N-1}-p\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\beta_{n,N}\Vert x_{n}-p\Vert ^{2}+(1-\beta _{n,N})\Vert y_{n,N-1}-p\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\beta_{n,N}\Vert x_{n}-p\Vert ^{2} \\ &{} +(1-\beta_{n,N})\bigl[\beta_{n,N-1}\Vert S_{N-1}u_{n}-p\Vert ^{2}+(1-\beta _{n,N-1}) \Vert y_{n,N-2}-p\Vert ^{2}\bigr] \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\bigl(\beta_{n,N}+(1-\beta_{n,N})\beta _{n,N-1}\bigr)\Vert x_{n}-p\Vert ^{2} \\ &{} +{ \prod^{N}_{k=N-1}}(1- \beta_{n,k})\Vert y_{n,N-2}-p\Vert ^{2}, \end{aligned}$$

and so, after \((N-i+1)\) iterations,

$$\begin{aligned} \Vert y_{n}-p\Vert ^{2} \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Biggl(\beta _{n,N}+{ \sum^{N}_{j=i+2}} \Biggl({ \prod^{N}_{l=j}}(1-\beta_{n,l}) \Biggr)\beta_{n,j-1}\Biggr)\Vert x_{n}-p\Vert ^{2} \\ &{} +{ \prod^{N}_{k=i+1}}(1- \beta_{n,k})\Vert y_{n,i}-p\Vert ^{2} \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Biggl(\beta_{n,N}+{ \sum^{N}_{j=i+2}} \Biggl({ \prod^{N}_{l=j}}(1- \beta_{n,l})\Biggr)\beta_{n,j-1}\Biggr)\Vert x_{n}-p \Vert ^{2} \\ &{} +{ \prod^{N}_{k=i+1}}(1- \beta_{n,k}) \bigl[\beta_{n,i}\Vert S_{i}u_{n}-p \Vert ^{2}+(1-\beta_{n,i})\Vert y_{n,i-1}-p\Vert ^{2} \\ &{} -\beta_{n,i}(1-\beta_{n,i})\Vert S_{i}u_{n}-y_{n,i-1} \Vert ^{2}\bigr] \\ \leq&\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}- \beta_{n,i}{ \prod^{N}_{k=i}}(1- \beta_{n,k}) \Vert S_{i}u_{n}-y_{n,i-1} \Vert ^{2}. \end{aligned}$$
(3.30)

Again we obtain

$$\begin{aligned}& \beta_{n,i}{ \prod^{N}_{k=i}}(1- \beta _{n,k})\Vert S_{i}u_{n}-y_{n,i-1} \Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p \bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}- \Vert y_{n}-p\Vert ^{2} \\& \quad \leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-y_{n}\Vert \bigl(\Vert x_{n}-p\Vert +\Vert y_{n}-p\Vert \bigr). \end{aligned}$$

Since \(\alpha_{n}\to0\), \(0<\liminf_{n\to\infty}\beta_{n,i}\leq\limsup_{n\to\infty}\beta_{n,i}<1\) for each \(i=1,\ldots,N-1\), and \(\lim_{n\to\infty}\|x_{n}-y_{n}\|=0\) (due to (3.28)), it follows that

$$\lim_{n\to\infty}\|S_{i}u_{n}-y_{n,i-1} \|=0. $$

Obviously for \(i=1\), we have \(\|S_{1}u_{n}-u_{n}\|\to0\).

To conclude, we have

$$\|S_{2}u_{n}-u_{n}\|\leq\|S_{2}u_{n}-y_{n,1} \|+\|y_{n,1}-u_{n}\|=\|S_{2}u_{n}-y_{n,1} \| +\beta_{n,1}\|S_{1}u_{n}-u_{n}\| $$

from which \(\|S_{2}u_{n}-u_{n}\|\to0\). Thus, by induction, \(\|S_{i}u_{n}-u_{n}\|\to0\) for all \(i=2,\ldots,N\), since it suffices to observe that

$$\begin{aligned} \|S_{i}u_{n}-u_{n}\|&\leq\|S_{i}u_{n}-y_{n,i-1} \|+\| y_{n,i-1}-S_{i-1}u_{n}\|+\|S_{i-1}u_{n}-u_{n} \| \\ &\leq\|S_{i}u_{n}-y_{n,i-1}\|+(1- \beta_{n,i-1})\|S_{i-1}u_{n}-y_{n,i-2}\|+\| S_{i-1}u_{n}-u_{n}\|. \end{aligned}$$

 □

Remark 3.2

As an example, we consider \(M=1\), \(N=2\), and the sequences:

  1. (a)

    \(\lambda_{1,n}=\eta_{1}-\frac{1}{n}\), \(\forall n>\frac{1}{\eta_{1}}\);

  2. (b)

    \(\alpha_{n}=\frac{1}{\sqrt{n}}\), \(r_{n}=2-\frac{1}{n}\), \(\forall n>1\);

  3. (c)

    \(\beta_{n}=\beta_{n,1}=\frac{1}{2}-\frac{1}{n}\), \(\beta_{n,2}=\frac {1}{2}-\frac{1}{n^{2}}\), \(\forall n>2\).

Then they satisfy the hypotheses of Lemma 3.5.
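
As a quick illustrative check (not part of the original argument), the following Python snippet tabulates the sequences in (a)-(c) and shows, in particular, that both \(\beta_{n,1}\) and \(\beta_{n,2}\) tend to \(\frac{1}{2}\), consistent with the requirement \(0<\liminf_{n\to\infty}\beta_{n,i}\leq\limsup_{n\to\infty}\beta_{n,i}<1\) in Lemma 3.5; the remaining hypotheses (H1)-(H6) involve conditions stated earlier in the paper and are not re-verified here. The value \(\eta_{1}=0.5\) is an assumption made only for this illustration.

```python
# Illustrative check of the sequences in Remark 3.2 (M = 1, N = 2).
# eta1 = 0.5 is an assumed value used only for this illustration.
eta1 = 0.5

def lambda1(n): return eta1 - 1.0 / n        # lambda_{1,n}, defined for n > 1/eta_1
def alpha(n):   return 1.0 / n ** 0.5        # alpha_n = 1/sqrt(n)
def r(n):       return 2.0 - 1.0 / n         # r_n
def beta1(n):   return 0.5 - 1.0 / n         # beta_n = beta_{n,1}
def beta2(n):   return 0.5 - 1.0 / n ** 2    # beta_{n,2}

for n in (10, 100, 1000, 10000):
    print(n, lambda1(n), alpha(n), r(n), beta1(n), beta2(n))
# beta_{n,1} and beta_{n,2} both approach 1/2, so
# 0 < liminf beta_{n,i} <= limsup beta_{n,i} < 1 holds for i = 1, 2.
```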

Lemma 3.6

Let us suppose that \({\varOmega }\neq\emptyset\) and \(\beta_{n,i}\to\beta_{i}\) for all i as \(n\to\infty\). Suppose there exists \(k\in\{1,\ldots,N\}\) such that \(\beta _{n,k}\to0\) as \(n\to\infty\). Let \(k_{0}\in\{1,\ldots,N\}\) be the largest index such that \(\beta_{n,k_{0}}\to0\) as \(n\to\infty\). Suppose that

  1. (i)

    \(\frac{\alpha_{n}}{\beta_{n,k_{0}}}\to0\) as \(n\to\infty\);

  2. (ii)

    if \(i\leq k_{0}\) and \(\beta_{n,i}\to0\) then \(\frac{\beta _{n,k_{0}}}{\beta_{n,i}}\to0\) as \(n\to\infty\);

  3. (iii)

    if \(\beta_{n,i}\to\beta_{i}\neq0\) then \(\beta_{i}\) lies in \((0,1)\).

Moreover, suppose that (H1), (H7), and (H8) hold. Then \(\lim_{n\to\infty}\|S_{i}u_{n}-u_{n}\|=0\) for each \(i=1,\ldots,N\).

Proof

First of all we note that if (H7) holds then also (H2)-(H6) are satisfied. So \(\{x_{n}\}\) is asymptotically regular.

Let \(k_{0}\) be as in the hypotheses. As in Lemma 3.5, for every index \(i\in\{1,\ldots,N\}\) such that \(\beta_{n,i}\to\beta_{i} \neq0\) (which leads to \(0<\liminf_{n\to\infty}\beta_{n,i}\leq\limsup_{n\to\infty}\beta_{n,i}<1\)), one has \(\|S_{i}u_{n}-y_{n,i-1}\|\to0\) as \(n\to\infty\).

For all the other indices \(i\leq k_{0}\), we can prove that \(\| S_{i}u_{n}-y_{n,i-1}\|\to0\) as \(n\to\infty\) in a similar manner. By the relation (due to (3.1) and (3.30))

$$\begin{aligned} \Vert x_{n+1}-p\Vert ^{2}&=\bigl\Vert (1- \beta_{n}) (y_{n}-p)+\beta_{n}\bigl(W_{n}{ \varLambda }^{M}_{n}y_{n}-p\bigr)\bigr\Vert ^{2} \\ &\leq(1-\beta_{n})\Vert y_{n}-p\Vert ^{2}+ \beta_{n}\bigl\Vert W_{n}{\varLambda }^{M}_{n}y_{n}-p \bigr\Vert ^{2} \\ &\leq \Vert y_{n}-p\Vert ^{2} \\ &\leq\alpha_{n}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2}+\Vert x_{n}-p\Vert ^{2}- \beta_{n,i}{ \prod^{N}_{k=i}}(1- \beta_{n,k}) \Vert S_{i}u_{n}-y_{n,i-1} \Vert ^{2}, \end{aligned}$$

we immediately obtain

$$\prod^{N}_{k=i}(1-\beta_{n,k}) \Vert S_{i}u_{n}-y_{n,i-1}\Vert ^{2}\leq\frac{\alpha_{n}}{\beta_{n,i}}\bigl\Vert f(y_{n,N})-p\bigr\Vert ^{2} +\frac{\Vert x_{n}-x_{n+1}\Vert }{\beta_{n,i}}\bigl(\Vert x_{n}-p\Vert +\Vert x_{n+1}-p\Vert \bigr). $$

By Lemma 3.4 or by hypothesis (ii) on the sequences, we have

$$\frac{\|x_{n}-x_{n+1}\|}{\beta_{n,i}}=\frac{\|x_{n}-x_{n+1}\|}{\beta _{n,k_{0}}}\cdot\frac{\beta_{n,k_{0}}}{\beta_{n,i}}\to0. $$

So, the conclusion follows. □

Remark 3.3

Let us consider \(M=1\), \(N=3\), and the following sequences:

  1. (a)

    \(\alpha_{n}=\frac{1}{n^{1/2}}\), \(r_{n}=2-\frac{1}{n^{2}}\), \(\forall n>1\);

  2. (b)

    \(\lambda_{1,n}=\eta_{1}-\frac{1}{n^{2}}\), \(\forall n>\frac{1}{\eta^{1/2}_{1}}\);

  3. (c)

    \(\beta_{n,1}=\frac{1}{n^{1/4}}\), \(\beta_{n}=\beta_{n,2}=\frac {1}{2}-\frac{1}{n^{2}}\), \(\beta_{n,3}=\frac{1}{n^{1/3}}\), \(\forall n>1\).

It is easy to see that all hypotheses (i)-(iii), (H1), (H7), and (H8) of Lemma 3.6 are satisfied.
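
As a numerical illustration (our own addition, assuming nothing beyond the sequences listed above), the snippet below evaluates the ratios appearing in conditions (i) and (ii) of Lemma 3.6 with \(k_{0}=3\): \(\alpha_{n}/\beta_{n,3}=n^{-1/6}\to0\) and \(\beta_{n,3}/\beta_{n,1}=n^{-1/12}\to0\), while \(\beta_{n,2}\to\frac{1}{2}\in(0,1)\), as required by (iii). The hypotheses (H1), (H7), and (H8) involve quantities defined earlier in the paper and are not re-checked here.

```python
# Ratios from conditions (i)-(iii) of Lemma 3.6 for the sequences in
# Remark 3.3; here k_0 = 3, since beta_{n,1} -> 0 and beta_{n,3} -> 0,
# while beta_{n,2} -> 1/2 != 0.
def alpha(n): return n ** -0.5          # alpha_n = n^{-1/2}
def beta1(n): return n ** -0.25         # beta_{n,1} = n^{-1/4}
def beta2(n): return 0.5 - n ** -2.0    # beta_n = beta_{n,2}
def beta3(n): return n ** (-1.0 / 3.0)  # beta_{n,3} = n^{-1/3}

for n in (10, 10**3, 10**6, 10**9):
    print(n,
          alpha(n) / beta3(n),   # (i):  alpha_n / beta_{n,k_0} = n^{-1/6} -> 0
          beta3(n) / beta1(n),   # (ii): beta_{n,k_0} / beta_{n,1} = n^{-1/12} -> 0
          beta2(n))              # (iii): the nonzero limit 1/2 lies in (0, 1)
```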

Remark 3.4

Under the hypotheses of Lemma 3.6, analogously to Lemma 3.5, one can see that

$$\lim_{n\to\infty}\|S_{i}u_{n}-y_{n,i-1} \|=0,\quad \forall i\in\{2,\ldots,N\}. $$

Corollary 3.1

Let us suppose that the hypotheses of either Lemma  3.5 or Lemma  3.6 are satisfied. Then \(\omega_{w}(x_{n})=\omega_{w}(u_{n})=\omega_{w}(y_{n})\), \(\omega_{s}(x_{n})=\omega _{s}(u_{n})=\omega_{s}(y_{n,1})\), and \(\omega_{w}(x_{n}) \subset{ \varOmega }\).

Proof

By Remark 3.1, we have \(\omega_{w}(x_{n})=\omega_{w}(u_{n})\) and \(\omega_{s}(x_{n})=\omega_{s}(u_{n})\). Note that by Remark 3.4,

$$\lim_{n\to\infty}\|S_{N}u_{n}-y_{n,N-1} \|=0. $$

Meanwhile, we also have

$$\lim_{n\to\infty}\|S_{N}u_{n}-u_{n}\|= \lim_{n\to\infty}\|u_{n}-x_{n}\|=\lim _{n\to \infty}\|x_{n}-y_{n}\|=0. $$

Hence we have

$$ \lim_{n\to\infty}\|S_{N}u_{n}-y_{n} \|=0. $$
(3.31)

Furthermore, it follows from (3.1) that

$$\lim_{n\to\infty}\|y_{n,N}-y_{n,N-1}\|=\lim _{n\to\infty}\beta_{n,N}\| S_{N}u_{n}-y_{n,N-1} \|=0, $$

which, together with \(\lim_{n\to\infty}\|S_{N}u_{n}-y_{n,N-1}\|=0\), yields

$$ \lim_{n\to\infty}\|S_{N}u_{n}-y_{n,N} \|=0. $$
(3.32)

Combining (3.31) and (3.32), we conclude that

$$ \lim_{n\to\infty}\|y_{n}-y_{n,N} \|=0, $$
(3.33)

which, together with \(\lim_{n\to\infty}\|x_{n}-y_{n}\|=0\), leads to

$$ \lim_{n\to\infty}\|x_{n}-y_{n,N} \|=0. $$
(3.34)

Now we observe that

$$\|x_{n}-y_{n,1}\|\leq\|x_{n}-u_{n}\|+ \|y_{n,1}-u_{n}\|=\|x_{n}-u_{n}\|+ \beta_{n,1}\| S_{1}u_{n}-u_{n}\|. $$

By Lemmas 3.3 and 3.5, \(\|x_{n}-u_{n}\|\to0\) and \(\|S_{1}u_{n}-u_{n}\|\to0\) as \(n\to\infty\), so we have

$$\lim_{n\to\infty}\|x_{n}-y_{n,1}\|=0. $$

So we get \(\omega_{w}(x_{n})=\omega_{w}(y_{n,1})\) and \(\omega_{s}(x_{n})=\omega _{s}(y_{n,1})\).

Let \(p\in\omega_{w}(x_{n})\). Since \(p\in\omega_{w}(u_{n})\), by Lemma 3.5 and Lemma 2.5 (demiclosedness principle), we have \(p\in\operatorname{Fix}(S_{i})\) for each \(i=1,\ldots,N\), i.e., \(p\in\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\). Also, since \(p\in\omega_{w}(y_{n})\) (due to \(\|x_{n}-y_{n}\|\to0\)), in terms of (3.29) and Lemma 2.5 (demiclosedness principle), we get \(p\in\operatorname{Fix}(W)=\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\) (due to Lemma 2.4). Moreover, by Lemmas 2.11 and 3.3 we know that \(p\in\operatorname{GMEP}({\varTheta },h)\). Next we prove that \(p\in\bigcap^{M}_{m=1}\operatorname{VI}(C,A_{m})\). Indeed, since \(p\in\omega_{w}(y_{n,N})\) (due to (3.34)), there exists a subsequence \(\{y_{n_{i},N}\}\) of \(\{y_{n,N}\}\) such that \(y_{n_{i},N}\rightharpoonup p\). So, from (3.25) we know that \({\varLambda }^{m}_{n_{i}}y_{n_{i},N}\rightharpoonup p\) for each \(m=1,\ldots,M\). Let

$$\widetilde{T}_{m}v=\left \{ \textstyle\begin{array}{l@{\quad}l} A_{m}v+N_{C}v,& v\in C, \\ \emptyset, & v\notin C, \end{array}\displaystyle \right . $$

where \(m\in\{1,2,\ldots,M\}\). Let \((v,u)\in G(\widetilde{T}_{m})\). Since \(u-A_{m}v\in N_{C}v\) and \({\varLambda }^{m}_{n}y_{n,N}\in C\), we have

$$\bigl\langle v-{\varLambda }^{m}_{n}y_{n,N},u-A_{m}v \bigr\rangle \geq0. $$

On the other hand, from \({\varLambda }^{m}_{n}y_{n,N}=P_{C}(I-\lambda _{m,n}A_{m}){\varLambda }^{m-1}_{n}y_{n,N}\) and \(v\in C\), we have

$$\bigl\langle v-{\varLambda }^{m}_{n}y_{n,N},{ \varLambda }^{m}_{n}y_{n,N}-\bigl({\varLambda }^{m-1}_{n}y_{n,N}-\lambda_{m,n}A_{m}{ \varLambda }^{m-1}_{n}y_{n,N}\bigr)\bigr\rangle \geq0, $$

and hence

$$\biggl\langle v-{\varLambda }^{m}_{n}y_{n,N}, \frac{{\varLambda }^{m}_{n}y_{n,N}-{ \varLambda }^{m-1}_{n}y_{n,N}}{\lambda_{m,n}}+A_{m}{\varLambda }^{m-1}_{n}y_{n,N} \biggr\rangle \geq0. $$

Therefore we have

$$\begin{aligned}& \bigl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N},u\bigr\rangle \\& \quad \geq\bigl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N},A_{m}v \bigr\rangle \\& \quad \geq\bigl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N},A_{m}v \bigr\rangle -\biggl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N}, \frac{{\varLambda }^{m}_{n_{i}}y_{n_{i},N}-{\varLambda }^{m-1}_{n_{i}}y_{n_{i},N}}{\lambda_{m,{n_{i}}}} +A_{m}{\varLambda }^{m-1}_{n_{i}}y_{n_{i},N} \biggr\rangle \\& \quad =\bigl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N},A_{m}v-A_{m}{ \varLambda }^{m}_{n_{i}}y_{n_{i},N}\bigr\rangle +\bigl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N},A_{m}{ \varLambda }^{m}_{n_{i}}y_{n_{i},N}-A_{m}{ \varLambda }^{m-1}_{n_{i}}y_{n_{i},N}\bigr\rangle \\& \qquad {} -\biggl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N}, \frac{{\varLambda }^{m}_{n_{i}}y_{n_{i},N}-{\varLambda }^{m-1}_{n_{i}}y_{n_{i},N}}{\lambda _{m,{n_{i}}}}\biggr\rangle \\& \quad \geq\bigl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N},A_{m}{ \varLambda }^{m}_{n_{i}}y_{n_{i},N}-A_{m}{ \varLambda }^{m-1}_{n_{i}}y_{n_{i},N}\bigr\rangle \\& \qquad {}-\biggl\langle v-{\varLambda }^{m}_{n_{i}}y_{n_{i},N},\frac{{\varLambda }^{m}_{n_{i}}y_{n_{i},N}-{\varLambda }^{m-1}_{n_{i}}y_{n_{i},N}}{\lambda _{m,{n_{i}}}} \biggr\rangle . \end{aligned}$$

From (3.25) and since \(A_{m}\) is Lipschitz continuous, we obtain \(\lim_{n\to\infty}\|A_{m}{\varLambda }^{m}_{n}y_{n,N}-A_{m}{\varLambda }^{m-1}_{n}y_{n,N}\|=0\). From \({\varLambda }^{m}_{n_{i}}y_{n_{i},N}\rightharpoonup p\), \(\{\lambda_{m,n}\} \subset[a_{m},b_{m}]\subset(0,2\eta_{m})\), \(\forall m\in\{1,2,\ldots,M\}\), and (3.25), we have

$$\langle v-p,u\rangle\geq0. $$

Since \(\widetilde{T}_{m}\) is maximal monotone, we have \(p\in\widetilde{T}^{-1}_{m}0\) and hence \(p\in\operatorname{VI}(C,A_{m})\), \(m=1,2,\ldots,M\), which implies \(p\in\bigcap^{M}_{m=1}\operatorname{VI}(C,A_{m})\). Consequently,

$$p\in\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i}) \cap\bigcap^{M}_{m=1}\operatorname{VI}(C,A_{m})\cap\operatorname{GMEP}({\varTheta },h)=:{ \varOmega }. $$

 □

Theorem 3.1

Let us suppose that \({\varOmega }\neq\emptyset\). Let \(\{\alpha_{n}\}\), \(\{\beta_{n,i}\}\), \(i=1,\ldots,N\), be sequences in \((0,1)\) such that \(0<\liminf_{n\to\infty}\beta_{n,i}\leq \limsup_{n\to\infty}\beta_{n,i}<1\) for each index i. Moreover, let us suppose that (H1)-(H6) hold. Then the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\), explicitly defined by the scheme

$$\left \{ \textstyle\begin{array}{l} {\varTheta }(u_{n},y)+h(u_{n},y)+\frac{1}{r_{n}}\langle y-u_{n},u_{n}-x_{n}\rangle \geq0, \quad \forall y\in C, \\ y_{n,1}=\beta_{n,1}S_{1}u_{n}+(1-\beta_{n,1})u_{n}, \\ y_{n,i}=\beta_{n,i}S_{i}u_{n}+(1-\beta_{n,i})y_{n,i-1}, \quad i=2,\ldots,N, \\ y_{n}=\alpha_{n}f(y_{n,N})+(1-\alpha_{n})W_{n}{\varLambda }^{M}_{n}y_{n,N}, \\ x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}{\varLambda }^{M}_{n}y_{n}, \quad \forall n\geq1, \end{array}\displaystyle \right . $$

converge strongly to the unique solution \(x^{*}\) in Ω of the following variational inequality problem (VIP):

$$ \bigl\langle f\bigl(x^{*}\bigr)-x^{*},z-x^{*}\bigr\rangle \leq0, \quad \forall z\in{ \varOmega }. $$
(3.35)

Proof

Since the mapping \(P_{\varOmega }f\) is a ρ-contraction, it has a unique fixed point \(x^{*}\), which necessarily lies in Ω; it is the unique solution of VIP (3.35). Since (H1)-(H6) hold, the sequence \(\{x_{n}\}\) is asymptotically regular (according to Lemma 3.2). By Lemma 3.3, \(\|x_{n}-y_{n}\|\to0\) and \(\|x_{n}-u_{n}\|\to0\) as \(n\to\infty\). Moreover, utilizing Lemma 2.1 and the nonexpansivity of \((I-\lambda_{k,n}A_{k})\), we get from (3.1) and (3.2)

$$\begin{aligned}& \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad =\bigl\Vert (1-\beta_{n}) \bigl(y_{n}-x^{*}\bigr)+\beta_{n}\bigl(W_{n}{\varLambda }^{M}_{n}y_{n}-x^{*}\bigr)\bigr\Vert ^{2} \\& \quad \leq(1-\beta_{n})\bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2}+\beta_{n}\bigl\Vert W_{n}{\varLambda }^{M}_{n}y_{n}-x^{*}\bigr\Vert ^{2} \\& \quad \leq\bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2} \\& \quad \leq\bigl\Vert \alpha_{n}\bigl(f(y_{n,N})-f\bigl(x^{*}\bigr)\bigr)+(1-\alpha_{n}) \bigl(W_{n}{\varLambda }^{M}_{n}y_{n,N}-x^{*}\bigr)\bigr\Vert ^{2}+2\alpha_{n}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},y_{n}-x^{*}\bigr\rangle \\& \quad \leq\alpha_{n}\rho\bigl\Vert y_{n,N}-x^{*}\bigr\Vert ^{2}+(1-\alpha_{n})\bigl\Vert {\varLambda }^{M}_{n}y_{n,N}-x^{*}\bigr\Vert ^{2}+2\alpha_{n}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},y_{n}-x^{*}\bigr\rangle \\& \quad \leq\alpha_{n}\rho\bigl\Vert y_{n,N}-x^{*}\bigr\Vert ^{2}+(1-\alpha_{n})\bigl\Vert y_{n,N}-x^{*}\bigr\Vert ^{2}+2\alpha_{n}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},y_{n}-x^{*}\bigr\rangle \\& \quad =\bigl[1-(1-\rho)\alpha_{n}\bigr]\bigl\Vert y_{n,N}-x^{*}\bigr\Vert ^{2}+2\alpha_{n}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},y_{n}-x^{*}\bigr\rangle \\& \quad \leq\bigl[1-(1-\rho)\alpha_{n}\bigr]\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+2\alpha_{n}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},y_{n}-x^{*}\bigr\rangle \\& \quad \leq\bigl[1-(1-\rho)\alpha_{n}\bigr]\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+(1-\rho)\alpha_{n}\cdot\frac{2}{1-\rho}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},y_{n}-x^{*}\bigr\rangle . \end{aligned}$$

Now, let \(\{x_{n_{k}}\}\) be a subsequence of \(\{x_{n}\}\) such that

$$ \limsup_{n\to\infty}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},x_{n}-x^{*} \bigr\rangle =\lim_{k\to\infty }\bigl\langle f\bigl(x^{*} \bigr)-x^{*},x_{n_{k}}-x^{*}\bigr\rangle . $$
(3.36)

By the boundedness of \(\{x_{n}\}\), we may assume, without loss of generality, that \(x_{n_{k}}\rightharpoonup p\in\omega_{w}(x_{n})\). According to Corollary 3.1, we know that \(\omega_{w}(x_{n})\subset{ \varOmega }\) and hence \(p\in{ \varOmega }\). Taking into consideration that \(x^{*}=P_{\varOmega }f(x^{*})\) we obtain from (3.36)

$$\begin{aligned}& { \limsup_{n\to\infty}}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},y_{n}-x^{*} \bigr\rangle \\& \quad ={ \limsup_{n\to\infty}}\bigl[\bigl\langle f\bigl(x^{*} \bigr)-x^{*},x_{n}-x^{*}\bigr\rangle +\bigl\langle f\bigl(x^{*} \bigr)-x^{*},y_{n}-x_{n}\bigr\rangle \bigr] \\& \quad ={ \limsup_{n\to\infty}}\bigl\langle f\bigl(x^{*} \bigr)-x^{*},x_{n}-x^{*}\bigr\rangle ={ \lim_{k\to\infty}}\bigl\langle f\bigl(x^{*}\bigr)-x^{*},x_{n_{k}}-x^{*}\bigr\rangle \\& \quad =\bigl\langle f\bigl(x^{*}\bigr)-x^{*},p-x^{*}\bigr\rangle \leq0. \end{aligned}$$

In terms of Lemma 2.8 we derive \(x_{n}\to x^{*}\) as \(n\to\infty\). □
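
To make the structure of scheme (3.1) concrete, the following Python sketch performs a single iteration, with the GMEP resolvent step, the mappings \(S_{i}\), the W-mapping \(W_{n}\), and the composite projection \({\varLambda }^{M}_{n}\) supplied as user-defined callables. This is only a structural sketch under these assumptions: the function and parameter names are ours, and each callable (in particular the resolvent \(T_{r_{n}}\)) must be implemented for the specific data Θ, h, \(A_{k}\), \(T_{n}\) at hand.

```python
from typing import Callable, Sequence
import numpy as np

Map = Callable[[np.ndarray], np.ndarray]

def composite_viscosity_step(x_n: np.ndarray,
                             resolvent: Map,            # x -> u_n = T_{r_n}x (GMEP subproblem)
                             S: Sequence[Map],          # S_1, ..., S_N
                             Lam: Map,                  # y -> Lambda^M_n y
                             W: Map,                    # the W-mapping W_n
                             f: Map,                    # the rho-contraction f
                             alpha_n: float,
                             beta_ni: Sequence[float],  # beta_{n,1}, ..., beta_{n,N}
                             beta_n: float) -> np.ndarray:
    """One pass of the composite viscosity scheme (3.1) (structural sketch only)."""
    u_n = resolvent(x_n)
    y = beta_ni[0] * S[0](u_n) + (1.0 - beta_ni[0]) * u_n           # y_{n,1}
    for i in range(1, len(S)):                                      # y_{n,i}, i = 2, ..., N
        y = beta_ni[i] * S[i](u_n) + (1.0 - beta_ni[i]) * y
    y_n = alpha_n * f(y) + (1.0 - alpha_n) * W(Lam(y))              # viscosity step
    return (1.0 - beta_n) * y_n + beta_n * W(Lam(y_n))              # x_{n+1}

# Toy smoke test: every operator is the identity on R^2, f is a contraction toward 0.
ident = lambda v: v
x = np.array([1.0, -2.0])
x = composite_viscosity_step(x, ident, [ident, ident], ident, ident,
                             lambda v: 0.5 * v, 0.1, [0.5, 0.5], 0.4)
print(x)
```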

In the following, we provide a numerical example to illustrate how our main theorem, Theorem 3.1, works.

Example

Let \(H={\mathbf{R}}^{2}\) with inner product \(\langle\cdot,\cdot \rangle\) and norm \(\|\cdot\|\) which are defined by

$$\langle x,y\rangle=ac+bd,\qquad \|x\|=\sqrt{a^{2}+b^{2}} $$

for all \(x,y\in{\mathbf{R}}^{2}\) with \(x=(a,b)\) and \(y=(c,d)\). Let \(C=\{ (a,a):a\in{\mathbf{R}}\}\). Clearly, C is a nonempty, closed, and convex subset of a real Hilbert space \(H={\mathbf{R}}^{2}\). Let \(M=N=2\). Let \(f:C\to C\) be a ρ-contraction mapping, \(A,A_{k} :C\to H\) be η-inverse strongly monotone and \(\eta_{k}\)-inverse strongly monotone for each \(k=1,2\), and let \(S_{i},T_{n}:C\to C\) be nonexpansive mappings for each \(i=1,2\) and \(n=1,2,\ldots \) , for instance, putting

$$\begin{aligned}& A_{1}=S_{1}=\begin{pmatrix} \frac{3}{5} & \frac{2}{5} \\ \frac{2}{5} & \frac{3}{5} \end{pmatrix},\qquad T_{n}=S_{2}=\begin{pmatrix} \frac{2}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{2}{3} \end{pmatrix}, \\& f=\frac{1}{2}S_{1}, \qquad A=I-S_{1}=\begin{pmatrix} \frac{2}{5} & -\frac{2}{5} \\ -\frac{2}{5} & \frac{2}{5} \end{pmatrix}, \qquad A_{2}=I-S_{2}=\begin{pmatrix} \frac{1}{3} & -\frac{1}{3} \\ -\frac{1}{3} & \frac{1}{3} \end{pmatrix}. \end{aligned}$$

Let \({\varTheta },h:C\times C\to{\mathbf{R}}\) be bi-functions satisfying the hypotheses of Lemma 2.8, for instance, putting \(h(x,y)=0\) and \({\varTheta }(x,y)=\langle Ax,y\rangle\). It is easy to see that \(\|f\|=\frac{1}{2}\) and \(\|A_{1}\|= \|S_{1}\|=\|S_{2}\|=\|T_{n}\|=1\), for each \(n=1,2,\ldots \) , that f is a \(\frac {1}{2}\)-contraction mapping, that A, \(A_{1}\) and \(A_{2}\) are \(\frac{1}{2}\)-inverse strongly monotone, and that \(S_{i}\) and \(T_{n}\) both are nonexpansive for each \(i=1,2\) and \(n=1,2,\ldots \) . Moreover, it is clear that \(\bigcap^{2}_{i=1}\operatorname{Fix}(S_{i})=C\), \(\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})=C\), \(\bigcap^{2}_{m=1}\operatorname{VI}(C,A_{m})=C \cap\{0\}=\{0\}\) and \(\operatorname{GMEP}({\varTheta },h)=\operatorname{VI}(C,A)=C\). Hence, \({\varOmega }:=\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap \bigcap^{2}_{i=1}\operatorname{Fix}(S_{i})\cap\bigcap^{2}_{m=1}\operatorname{VI}(C,A_{m})\cap\operatorname{GMEP}({\varTheta },h)=\{0\}\). In this case, from scheme (3.8), we obtain, for any given \(x_{1}\in C\),

$$\left \{ \textstyle\begin{array}{l} u_{n}=T_{r_{n}}x_{n}=P_{C}(I-r_{n}A)x_{n}=x_{n}, \\ y_{n,1}=\beta_{n,1}S_{1}u_{n}+(1-\beta_{n,1})u_{n} \\ \hphantom{y_{n,1}}=\beta_{n,1}S_{1}x_{n}+(1-\beta_{n,1})x_{n} \\ \hphantom{y_{n,1}}=x_{n}, \\ y_{n,2}=\beta_{n,2}S_{2}u_{n}+(1-\beta_{n,2})y_{n,1} \\ \hphantom{y_{n,2}}=\beta_{n,2}S_{2}x_{n}+(1-\beta_{n,2})x_{n} \\ \hphantom{y_{n,2}}=x_{n}, \\ y_{n}=\alpha_{n}f(y_{n,2})+(1-\alpha_{n})W_{n}{\varLambda }^{2}_{n}y_{n,2} \\ \hphantom{y_{n}}=\frac{1}{2}\alpha_{n}x_{n}+(1-\alpha_{n})W_{n}P_{C}(I-\lambda_{2,n}A_{2})P_{C}(I-\lambda_{1,n}A_{1})x_{n} \\ \hphantom{y_{n}}=\frac{1}{2}\alpha_{n}x_{n}+(1-\alpha_{n})W_{n}P_{C}(I-\lambda_{2,n}A_{2})(1-\lambda_{1,n})x_{n} \\ \hphantom{y_{n}}=\frac{1}{2}\alpha_{n}x_{n}+(1-\alpha_{n})W_{n}(1-\lambda_{1,n})x_{n} \\ \hphantom{y_{n}}=\frac{1}{2}\alpha_{n}x_{n}+(1-\alpha_{n})(1-\lambda_{1,n})x_{n} \\ \hphantom{y_{n}}=[\frac{1}{2}\alpha_{n}+(1-\alpha_{n})(1-\lambda_{1,n})]x_{n}, \\ x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}{\varLambda }^{2}_{n}y_{n} \\ \hphantom{x_{n+1}}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}P_{C}(I-\lambda_{2,n}A_{2})P_{C}(I-\lambda_{1,n}A_{1})y_{n} \\ \hphantom{x_{n+1}}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}P_{C}(I-\lambda_{2,n}A_{2})(1-\lambda_{1,n})y_{n} \\ \hphantom{x_{n+1}}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}(1-\lambda_{1,n})y_{n} \\ \hphantom{x_{n+1}}=(1-\beta_{n})y_{n}+\beta_{n}(1-\lambda_{1,n})y_{n} \\ \hphantom{x_{n+1}}=[(1-\beta_{n})+\beta_{n}(1-\lambda_{1,n})]y_{n} \\ \hphantom{x_{n+1}}=(1-\beta_{n}\lambda_{1,n})y_{n} \\ \hphantom{x_{n+1}}=(1-\beta_{n}\lambda_{1,n})[\frac{1}{2}\alpha_{n}+(1-\alpha_{n})(1-\lambda_{1,n})]x_{n}. \end{array}\displaystyle \right . $$

Whenever \(\{\alpha_{n}\},\{\beta_{n}\}\subset(0,1)\) with \(\sum^{\infty}_{n=1}\alpha_{n}=\infty\) and \(\{\lambda_{k,n}\}\subset [a_{k},b_{k}]\subset(0,2\eta_{k})\) with \(\eta_{k}=\frac{1}{2}\), \(k=1,2\), we have

$$\begin{aligned} \|x_{n+1}\|&=(1-\beta_{n}\lambda_{1,n})\biggl[\frac{1}{2}\alpha_{n}+(1-\alpha_{n}) (1-\lambda_{1,n})\biggr]\|x_{n}\| \\ &\leq\biggl[\frac{1}{2}\alpha_{n}+(1-\alpha_{n}) (1-\lambda_{1,n})\biggr]\|x_{n}\| \\ &\leq\biggl[\frac{1}{2}\alpha_{n}+(1-\alpha_{n})\biggr]\|x_{n}\| \\ &=\biggl(1-\frac{1}{2}\alpha_{n}\biggr)\|x_{n}\| \\ &\leq\exp\biggl(-\frac{1}{2}\alpha_{n}\biggr)\|x_{n}\| \\ &\leq\cdots \\ &\leq\exp\Biggl(-\frac{1}{2}{ \sum^{n}_{k=1}}\alpha_{k}\Biggr)\|x_{1}\| \to0\quad \mbox{as } n\to\infty. \end{aligned}$$

That is,

$$\lim_{n\to\infty}\|x_{n}\|=0. $$

This shows that \(\{x_{n}\}\) converges to the unique element of Ω.
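
The computation above is easy to reproduce numerically. The script below (an illustration of ours, not part of the original example) runs the vector iteration in \({\mathbf{R}}^{2}\) with the matrices \(S_{1}\), \(S_{2}\), A, \(A_{1}\), \(A_{2}\) given above, using the projection \(P_{C}(a,b)=(\frac{a+b}{2},\frac{a+b}{2})\) onto \(C=\{(a,a)\}\) and treating \(W_{n}\) as the identity on C, as in the derivation above. The parameter choices \(\alpha_{n}=1/\sqrt{n}\), \(\beta_{n}=\beta_{n,1}=\frac{1}{2}-\frac{1}{n}\), \(\beta_{n,2}=\frac{1}{2}-\frac{1}{n^{2}}\), \(r_{n}=2-\frac{1}{n}\), and \(\lambda_{1,n}=\lambda_{2,n}=\frac{1}{2}\in(0,2\eta_{k})\) are assumptions made only for this run. The script also cross-checks the result against the closed-form recursion derived above; both drive \(\|x_{n}\|\) to 0, in agreement with \({\varOmega }=\{0\}\).

```python
import numpy as np

# Data of the example: H = R^2, C = {(a, a) : a in R}.
S1 = np.array([[3/5, 2/5], [2/5, 3/5]])
S2 = np.array([[2/3, 1/3], [1/3, 2/3]])      # also T_n for every n
A  = np.eye(2) - S1                          # appears in the resolvent step u_n = P_C(I - r_n A)x_n
A1 = S1
A2 = np.eye(2) - S2
f  = lambda v: 0.5 * (S1 @ v)                # the 1/2-contraction f = (1/2) S_1

def P_C(v):
    """Projection onto the line C = {(a, a)}: average the two coordinates."""
    m = 0.5 * (v[0] + v[1])
    return np.array([m, m])

# Assumed (illustrative) parameter choices; lambda_{k,n} = 1/2 lies in (0, 2*eta_k).
alpha = lambda n: 1.0 / np.sqrt(n)
betaN = lambda n: 0.5 - 1.0 / n              # beta_n
beta1 = lambda n: 0.5 - 1.0 / n              # beta_{n,1}
beta2 = lambda n: 0.5 - 1.0 / n ** 2         # beta_{n,2}
r     = lambda n: 2.0 - 1.0 / n              # r_n
lam1  = lambda n: 0.5                        # lambda_{1,n}
lam2  = lambda n: 0.5                        # lambda_{2,n}

def Lam(v, n):
    """Lambda^2_n = P_C(I - lambda_{2,n}A_2) P_C(I - lambda_{1,n}A_1)."""
    v = P_C(v - lam1(n) * (A1 @ v))
    return P_C(v - lam2(n) * (A2 @ v))

W = lambda v: v   # W_n acts as the identity on C (every T_n = S_2 fixes C pointwise)

x = np.array([5.0, 5.0])                     # x_1 in C
z = np.linalg.norm(x)                        # scalar cross-check via the closed-form recursion
for n in range(3, 200):
    u  = P_C(x - r(n) * (A @ x))             # u_n = P_C(I - r_n A)x_n = x_n here
    y1 = beta1(n) * (S1 @ u) + (1 - beta1(n)) * u
    y2 = beta2(n) * (S2 @ u) + (1 - beta2(n)) * y1
    y  = alpha(n) * f(y2) + (1 - alpha(n)) * W(Lam(y2, n))
    x  = (1 - betaN(n)) * y + betaN(n) * W(Lam(y, n))
    z *= (1 - betaN(n) * lam1(n)) * (0.5 * alpha(n) + (1 - alpha(n)) * (1 - lam1(n)))
print(np.linalg.norm(x), z)   # both essentially 0: x_n -> 0, the unique point of Omega
```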

In a similar way, we can obtain another theorem as follows.

Theorem 3.2

Let us suppose that \({\varOmega }\neq\emptyset\). Let \(\{\alpha_{n}\}\), \(\{\beta_{n,i}\}\), \(i=1,\ldots,N\), be sequences in \((0,1)\) such that \(\beta_{n,i}\to\beta_{i}\) for each index i as \(n\to\infty\). Suppose that there exists \(k\in\{1,\ldots,N\}\) for which \(\beta_{n,k}\to0\) as \(n\to\infty\). Let \(k_{0}\in\{1,\ldots,N\}\) be the largest index for which \(\beta_{n,k_{0}}\to0\). Moreover, let us suppose that (H1), (H7), and (H8) hold and

  1. (i)

    \(\frac{\alpha_{n}}{\beta_{n,k_{0}}}\to0\) as \(n\to\infty\);

  2. (ii)

    if \(i\leq k_{0}\) and \(\beta_{n,i}\to\beta_{i}\) then \(\frac{\beta _{n,k_{0}}}{\beta_{n,i}}\to0\) as \(n\to\infty\);

  3. (iii)

    if \(\beta_{n,i}\to\beta_{i}\neq0\) then \(\beta_{i}\) lies in \((0,1)\).

Then the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) explicitly defined by scheme (3.1) all converge strongly to the unique solution \(x^{*}\) in Ω to the VIP

$$\bigl\langle f\bigl(x^{*}\bigr)-x^{*},z-x^{*}\bigr\rangle \leq0,\quad \forall z\in{ \varOmega }. $$

Remark 3.5

According to the above argument processes for Theorems 3.1 and 3.2, we can readily see that if in scheme (3.1), the iterative step \(y_{n}=\alpha_{n}f(y_{n,N})+(1-\alpha_{n})W_{n}{ \varLambda }^{M}_{n}y_{n,N}\) is replaced by the iterative one \(y_{n}=\alpha_{n}f(x_{n})+(1-\alpha_{n})W_{n}{\varLambda }^{M}_{n}y_{n,N}\), then Theorems 3.1 and 3.2 remain valid.

Remark 3.6

Theorems 3.1 and 3.2 improve, extend, supplement, and develop Theorems 3.12 and 3.13 of [29] and Theorems 3.12 and 3.13 of [30] in the following aspects.

  1. (i)

    The multi-step iterative scheme (3.1) of [29] is extended to develop our composite viscosity iterative scheme (3.1) by virtue of Korpelevich’s extragradient method and the W-mapping approach to common fixed points of infinitely many nonexpansive mappings. Our scheme (3.1) is more general and more advantageous than schemes (1.5) and (1.6) because it solves three problems: GMEP (1.4), a finite family of variational inequalities for inverse strongly monotone mappings \(A_{k}\), \(k=1,\ldots,M\), and the fixed point problem of one finite family of nonexpansive mappings \(\{S_{i}\}^{N}_{i=1}\) and another infinite family of nonexpansive mappings \(\{T_{n}\}^{\infty}_{n=1}\).

  2. (ii)

    The argument techniques in our Theorems 3.1 and 3.2 are a combination and development of those in Theorems 3.12 and 3.13 of [30] and Theorems 3.12 and 3.13 of [29] because we make use of the properties of the resolvent operator associated with Θ and h (see Lemmas 2.9-2.11), the inclusion problem \(0\in\widetilde{T}v\) (\(\Leftrightarrow v\in\operatorname{VI}(C,A)\)) (see (2.3)), and the properties of the W-mappings \(W_{n}\) (see Remarks 2.1 and 2.2 and Lemmas 2.3 and 2.4).

  3. (iii)

    The problem of finding an element of \(\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap \bigcap^{M}_{k=1} \operatorname{VI}(C,A_{k})\cap\operatorname{GMEP}({\varTheta },h)\) in our Theorems 3.1 and 3.2 is more general and more subtle than the one of finding an element of \(\operatorname{Fix}(T)\cap\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\operatorname{GMEP}({\varTheta },h)\) in Theorems 3.12 and 3.13 of [30] and the one of finding an element of \(\operatorname{Fix}(T)\cap\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\operatorname{GMEP}({\varTheta },h) \cap\operatorname{VI}(C,A)\) in Theorems 3.12 and 3.13 of [29].

  4. (iv)

    Our Theorems 3.1 and 3.2 extend Theorems 3.12 and 3.13 of [29] from one nonexpansive mapping T to infinitely many nonexpansive mappings \(\{T_{n}\}^{\infty}_{n=1}\) and from one variational inequality to finitely many variational inequalities. Moreover, these also extend Theorems 3.12 and 3.13 of [30] from one nonexpansive mapping T to infinitely many nonexpansive mappings \(\{T_{n}\}^{\infty}_{n=1}\) and generalize Theorems 3.12 and 3.13 of [30] to the setting of finitely many variational inequalities.

4 Applications

For a given nonlinear mapping \(A:C\to H\), we consider the variational inequality problem (VIP) of finding \(\bar{x}\in C\) such that

$$ \langle A\bar{x},y-\bar{x}\rangle\geq0,\quad \forall y\in C. $$
(4.1)

We will denote by \(\operatorname{VI}(C,A)\) the set of solutions of the VIP (4.1).

Recall that if u is a point in C, then the following relation holds:

$$ u\in\operatorname{VI}(C,A) \quad \Leftrightarrow\quad u=P_{C}(I- \lambda A)u, \quad \forall \lambda>0. $$
(4.2)
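
As a small illustration of (4.2) (our own addition, reusing the two-dimensional data of the example in Section 3, where C is the diagonal line and \(A_{1}=S_{1}\)), the snippet below checks numerically that \(u=0\) satisfies \(u=P_{C}(I-\lambda A_{1})u\) for several \(\lambda>0\), whereas a nonzero point of C does not; this is consistent with the computation \(\bigcap^{2}_{m=1}\operatorname{VI}(C,A_{m})=\{0\}\) in that example.

```python
import numpy as np

S1 = np.array([[3/5, 2/5], [2/5, 3/5]])
A1 = S1                                        # A_1 from the example in Section 3

def P_C(v):                                    # projection onto C = {(a, a)}
    m = 0.5 * (v[0] + v[1])
    return np.array([m, m])

u_zero = np.array([0.0, 0.0])
u_one  = np.array([1.0, 1.0])                  # a nonzero point of C
for lam in (0.2, 0.5, 1.0):
    print(lam,
          np.allclose(P_C(u_zero - lam * (A1 @ u_zero)), u_zero),  # True:  0 is a fixed point
          np.allclose(P_C(u_one  - lam * (A1 @ u_one)),  u_one))   # False: (1, 1) is not
```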

An operator \(A:C\to H\) is said to be an α-inverse strongly monotone operator if there exists a constant \(\alpha>0\) such that

$$\langle Ax-Ay,x-y\rangle\geq\alpha\|Ax-Ay\|^{2},\quad \forall x,y\in C. $$

As an example, we recall that the α-inverse strongly monotone operators are firmly nonexpansive mappings if \(\alpha\geq1\) and that every α-inverse strongly monotone operator is also \(\frac{1}{\alpha}\)-Lipschitz continuous (see [46]).

Let us observe also that, if A is α-inverse strongly monotone, the mappings \(P_{C}(I-\lambda A)\) are nonexpansive for all \(\lambda\in(0,2\alpha]\) since they are compositions of nonexpansive mappings (see p.419 in [46]).

Let \(\widetilde{S}_{1},\ldots,\widetilde{S}_{K}\) be a finite number of nonexpansive self-mappings on C and let \(\widetilde{A}_{1},\ldots,\widetilde{A}_{N}\) be a finite number of α-inverse strongly monotone operators. Let \(\{T_{n}\}^{\infty}_{n=1}\) be a sequence of nonexpansive self-mappings on C. Let us consider the mixed problem of finding \(x^{*}\in\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\operatorname{GMEP} ({\varTheta },h)\cap\bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k})\) such that

$$ \left \{ \textstyle\begin{array}{l} \langle(I-\widetilde{S}_{1})x^{*},y-x^{*}\rangle\geq0,\quad \forall y\in\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\operatorname{GMEP}({\varTheta },h)\cap \bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k}), \\ \langle(I-\widetilde{S}_{2})x^{*},y-x^{*}\rangle\geq0, \quad \forall y\in\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\operatorname{GMEP}({\varTheta },h)\cap \bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k}), \\ \ldots, \\ \langle(I-\widetilde{S}_{K})x^{*},y-x^{*}\rangle\geq0,\quad \forall y\in\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\operatorname{GMEP}({\varTheta },h)\cap \bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k}), \\ \langle\widetilde{A}_{1}x^{*},y-x^{*}\rangle\geq0,\quad \forall y\in C, \\ \langle\widetilde{A}_{2}x^{*},y-x^{*}\rangle\geq0,\quad \forall y\in C, \\ \ldots, \\ \langle\widetilde{A}_{N}x^{*},y-x^{*}\rangle\geq0, \quad \forall y\in C. \end{array}\displaystyle \right . $$
(4.3)

Let us call \((\mathrm{SVI})\) the set of solutions of the \((K+N)\)-system. This problem is equivalent to finding a common fixed point of \(\{T_{n}\}^{\infty}_{n=1}\), \(\{P_{\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\operatorname{GMEP}({\varTheta },h)\cap\bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k})}\widetilde{S}_{i}\}^{K}_{i=1}, \{P_{C}(I-\lambda\widetilde{A}_{i})\}^{N}_{i=1}\). So we claim that the following holds.

Theorem 4.1

Let us suppose that \({\varOmega }=\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap(\mathrm{SVI})\cap\operatorname{GMEP}({\varTheta },h) \cap\bigcap^{M}_{k=1}\operatorname{VI}(C, A_{k})\neq\emptyset\). Fix \(\lambda>0\). Let \(\{ \alpha_{n}\}\), \(\{\beta_{n,i}\}\), \(i=1,\ldots,(K+N)\), be sequences in \((0,1)\) such that \(0<\liminf_{n\to\infty}\beta_{n,i}\leq\limsup_{n\to\infty}\beta_{n,i}<1\) for all indices i. Moreover, let us suppose that (H1)-(H6) hold. Then the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) explicitly defined by the scheme

$$ \left \{ \textstyle\begin{array}{l} {\varTheta }(u_{n},y)+h(u_{n},y)+\frac{1}{r_{n}}\langle y-u_{n},u_{n}-x_{n}\rangle\geq0,\quad \forall y\in C, \\ y_{n,1}=\beta_{n,1}P_{\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\operatorname{GMEP}({\varTheta },h)\cap\bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k})}\widetilde{S}_{1}u_{n}+(1-\beta_{n,1})u_{n}, \\ y_{n,i}=\beta_{n,i}P_{\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\operatorname{GMEP}({\varTheta },h)\cap\bigcap^{M}_{k=1}\operatorname{VI}(C,A_{k})}\widetilde{S}_{i}u_{n}+(1-\beta_{n,i})y_{n,i-1},\quad i=2,\ldots,K, \\ y_{n,K+j}=\beta_{n,K+j}P_{C}(I-\lambda\widetilde{A}_{j})u_{n}+(1-\beta _{n,K+j})y_{n,K+j-1}, \quad j=1,\ldots,N, \\ y_{n}=\alpha_{n}f(y_{n,K+N})+(1-\alpha_{n})W_{n}{\varLambda }^{M}_{n}y_{n,K+N}, \\ x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}{\varLambda }^{M}_{n}y_{n},\quad \forall n\geq1, \end{array}\displaystyle \right . $$
(4.4)

all converge strongly to the unique solution \(x^{*}\) in Ω to the VIP

$$\bigl\langle f\bigl(x^{*}\bigr)-x^{*},z-x^{*}\bigr\rangle \leq0,\quad \forall z\in{ \varOmega }. $$

Theorem 4.2

Let us suppose that \({\varOmega }\neq\emptyset\). Fix \(\lambda>0\). Let \(\{\alpha_{n}\}\), \(\{\beta_{n,i}\}\), \(i=1,\ldots,(K+N)\), be sequences in \((0,1)\) and \(\beta_{n,i}\to\beta_{i}\) for all i as \(n\to\infty\). Suppose that there exists \(k\in\{1,\ldots,K+N\}\) such that \(\beta_{n,k}\to0\) as \(n\to\infty\). Let \(k_{0}\in\{1,\ldots,K+N\}\) be the largest index for which \(\beta_{n,k_{0}}\to0\). Moreover, let us suppose that (H1), (H7), and (H8) hold and

  1. (i)

    \(\frac{\alpha_{n}}{\beta_{n,k_{0}}}\to0\) as \(n\to\infty\);

  2. (ii)

    if \(i\leq k_{0}\) and \(\beta_{n,i}\to0\) then \(\frac{\beta _{n,k_{0}}}{\beta_{n,i}}\to0\) as \(n\to\infty\);

  3. (iii)

    if \(\beta_{n,i}\to\beta_{i}\neq0\) then \(\beta_{i}\) lies in \((0,1)\).

Then the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) explicitly defined by scheme (4.4) all converge strongly to the unique solution \(x^{*}\) in Ω to the VIP

$$\bigl\langle f\bigl(x^{*}\bigr)-x^{*},z-x^{*}\bigr\rangle \leq0, \quad \forall z\in{ \varOmega }. $$

Remark 4.1

If in system (4.3), \(A_{1}=\cdots=A_{M}=\widetilde{A}_{1}=\cdots =\widetilde{A}_{N}=0\), and \(T_{n}\equiv T\) a nonexpansive mapping, we obtain a system of hierarchical fixed point problems introduced by Mainge and Moudafi [33, 47].

On the other hand, recall that a mapping \({\varGamma }:C\to C\) is called κ-strictly pseudocontractive if there exists a constant \(\kappa\in[0,1)\) such that

$$\|{\varGamma }x-{\varGamma }y\|^{2}\leq\|x-y\|^{2}+\kappa\bigl\Vert (I-{\varGamma })x-(I-{\varGamma })y\bigr\Vert ^{2},\quad \forall x,y \in C. $$

If \(\kappa=0\), then Γ is nonexpansive. Put \(A=I-{\varGamma }\), where \({\varGamma }:C\to C\) is a κ-strictly pseudocontractive mapping. Then A is \(\frac{1-\kappa}{2}\)-inverse strongly monotone; see [25].
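
As a quick sanity check of this implication (our own illustration), one can take \({\varGamma }=S_{2}\) from the example in Section 3, which is nonexpansive, i.e., 0-strictly pseudocontractive, so that \(A=I-{\varGamma }\) should be \(\frac{1}{2}\)-inverse strongly monotone; the snippet below samples random pairs of points and verifies the defining inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma = np.array([[2/3, 1/3], [1/3, 2/3]])     # S_2 from Section 3: nonexpansive, so kappa = 0
A = np.eye(2) - Gamma                          # should be (1 - kappa)/2 = 1/2-inverse strongly monotone

ok = True
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    lhs = np.dot(A @ (x - y), x - y)           # <Ax - Ay, x - y>
    rhs = 0.5 * np.linalg.norm(A @ (x - y)) ** 2
    ok = ok and bool(lhs >= rhs - 1e-12)
print(ok)                                      # True on all sampled pairs
```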

Utilizing Theorems 3.1 and 3.2, we first give the following strong convergence theorems for finding a common element of the solution set \(\operatorname{GMEP}({\varTheta },h)\) of GMEP (1.4) and the common fixed point set \(\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap \bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\bigcap^{M}_{k=1}\operatorname{Fix}({\varGamma }_{k})\) of a finite family of \(\kappa_{k}\)-strictly pseudocontractive mappings \(\{{\varGamma }_{k}\}^{M}_{k=1}\), one finite family of nonexpansive mappings \(\{S_{i}\}^{N}_{i=1}\), and another infinite family of nonexpansive mappings \(\{T_{n}\}^{\infty}_{n=1}\).

Theorem 4.3

Let \(\eta_{k}=\frac{1-\kappa_{k}}{2}\) for each \(k=1,\ldots,M\). Let us suppose that \({\varOmega }=\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\bigcap^{M}_{k=1}\operatorname{Fix}({\varGamma }_{k})\cap\operatorname{GMEP}({\varTheta },h) \neq\emptyset\). Let \(\{\alpha_{n}\}\), \(\{\beta_{n,i}\}\), \(i=1,\ldots,N\), be sequences in \((0,1)\) such that \(0<\liminf_{n\to\infty} \beta_{n,i}\leq\limsup_{n\to\infty}\beta_{n,i}<1\) for all indices i. Moreover, let us suppose that (H1)-(H6) hold. Then the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) generated explicitly by

$$ \left \{ \textstyle\begin{array}{l} {\varTheta }(u_{n},y)+h(u_{n},y)+\frac{1}{r_{n}}\langle y-u_{n},u_{n}-x_{n}\rangle\geq0, \quad \forall y\in C, \\ y_{n,1}=\beta_{n,1}S_{1}u_{n}+(1-\beta_{n,1})u_{n}, \\ y_{n,i}=\beta_{n,i}S_{i}u_{n}+(1-\beta_{n,i})y_{n,i-1},\quad i=2,\ldots,N, \\ y_{n}=\alpha_{n}f(y_{n,N})+(1-\alpha_{n})W_{n}{ \prod^{M}_{k=1}} ((1-\lambda_{k,n})I+\lambda_{k,n}{\varGamma }_{k})y_{n,N}, \\ x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}W_{n}{ \prod^{M}_{k=1}} ((1-\lambda_{k,n})I+\lambda_{k,n}{\varGamma }_{k})y_{n},\quad \forall n\geq1, \end{array}\displaystyle \right . $$
(4.5)

all converge strongly to the unique solution \(x^{*}\) in Ω to the VIP

$$\bigl\langle f\bigl(x^{*}\bigr)-x^{*},z-x^{*}\bigr\rangle \leq0,\quad \forall z\in{ \varOmega }. $$

Proof

In Theorem 3.1, put \(A_{k}=I-{\varGamma }_{k}\) for each \(k=1,\ldots,M\). Then \(A_{k}\) is \(\frac{1-\kappa_{k}}{2}\)-inverse strongly monotone. Hence we deduce that \(\operatorname{Fix}({\varGamma }_{k})=\operatorname{VI}(C,A_{k})\) and \(P_{C}(I-\lambda_{1,n}A_{1})y_{n,N}=(1 -\lambda_{1,n})y_{n,N}+\lambda_{1,n}{\varGamma }_{1}y_{n,N}\). Thus, it is easy to see that \({\varLambda }^{M}_{n}y_{n,N}=\prod^{M} _{k=1}((1-\lambda_{k,n})I+\lambda_{k,n}{\varGamma }_{k})y_{n,N}\). Similarly, we also have \({\varLambda }^{M}_{n}y_{n}=\prod^{M}_{k=1} ((1-\lambda_{k,n})I+\lambda_{k,n}{\varGamma }_{k})y_{n}\). Consequently, in terms of Theorem 3.1, we obtain the desired result. □

Theorem 4.4

Let \(\eta_{k}=\frac{1-\kappa_{k}}{2}\) for each \(k=1,\ldots,M\). Let us suppose that \({\varOmega }=\bigcap^{\infty}_{n=1}\operatorname{Fix}(T_{n})\cap\bigcap^{N}_{i=1}\operatorname{Fix}(S_{i})\cap\bigcap^{M}_{k=1}\operatorname{Fix}({\varGamma }_{k})\cap\operatorname{GMEP}({\varTheta },h) \neq\emptyset\). Let \(\{\alpha_{n}\}\), \(\{\beta_{n,i}\}\), \(i=1,\ldots,N\), be sequences in \((0,1)\) such that \(\beta_{n,i}\to\beta_{i}\) for all i as \(n\to\infty\). Suppose that there exists \(k\in\{1,\ldots,N\} \) for which \(\beta_{n,k}\to0\) as \(n\to\infty\). Let \(k_{0}\in\{1,\ldots,N\}\) be the largest index for which \(\beta _{n,k_{0}}\to0\). Moreover, let us suppose that (H1), (H7), and (H8) hold and

  1. (i)

    \(\frac{\alpha_{n}}{\beta_{n,k_{0}}}\to0\) as \(n\to\infty\);

  2. (ii)

    if \(i\leq k_{0}\) and \(\beta_{n,i}\to0\) then \(\frac{\beta _{n,k_{0}}}{\beta_{n,i}}\to0\) as \(n\to\infty\);

  3. (iii)

    if \(\beta_{n,i}\to\beta_{i}\neq0\) then \(\beta_{i}\) lies in \((0,1)\).

Then the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) generated explicitly by (4.5) all converge strongly to the unique solution \(x^{*}\) in Ω to the VIP

$$\bigl\langle f\bigl(x^{*}\bigr)-x^{*},z-x^{*}\bigr\rangle \leq0,\quad \forall z\in{ \varOmega }. $$

References

  1. Lions, JL: Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires. Dunod, Paris (1969)

  2. Glowinski, R: Numerical Methods for Nonlinear Variational Problems. Springer, New York (1984)

  3. Oden, JT: Quantitative Methods on Nonlinear Mechanics. Prentice Hall, Englewood Cliffs (1986)

  4. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)

  5. Zeidler, E: Nonlinear Functional Analysis and Its Applications. Springer, New York (1985)

  6. Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)

  7. Ceng, LC, Ansari, QH, Yao, JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64(4), 633-642 (2012)

  8. Ceng, LC, Yao, JC: An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl. Math. Comput. 190, 205-215 (2007)

  9. Ceng, LC, Hadjisavvas, N, Wong, NC: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46, 635-646 (2010)

  10. Nadezhkina, N, Takahashi, W: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 16, 1230-1241 (2006)

  11. Nadezhkina, N, Takahashi, W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)

  12. Zeng, LC, Yao, JC: Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 10(5), 1293-1303 (2006)

  13. Ceng, LC, Ansari, QH, Yao, JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 75(4), 2116-2125 (2012)

  14. Ceng, LC, Yao, JC: A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 72, 1922-1937 (2010)

  15. Ceng, LC, Ansari, QH, Schaible, S: Hybrid extragradient-like methods for generalized mixed equilibrium problems, system of generalized equilibrium problems and optimization problems. J. Glob. Optim. 53, 69-96 (2012)

  16. Ceng, LC, Ansari, QH, Yao, JC: Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 218, 1112-1123 (2011)

  17. Yao, Y, Liou, YC, Kang, SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 59, 3472-3480 (2010)

  18. Ceng, LC, Teboulle, M, Yao, JC: Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed point problems. J. Optim. Theory Appl. 146, 19-31 (2010)

  19. Ceng, LC, Guu, SM, Yao, JC: Finding common solutions of a variational inequality, a general system of variational inequalities, and a fixed-point problem via a hybrid extragradient method. Fixed Point Theory Appl. 2011, Article ID 626159 (2011)

  20. Ceng, LC, Ansari, QH, Wong, MM, Yao, JC: Mann type hybrid extragradient method for variational inequalities, variational inclusions and fixed point problems. Fixed Point Theory 13(2), 403-422 (2012)

  21. Latif, A, Sahu, D, Ansari, QH: Variable KM-like algorithm for fixed point problems and split feasibility problems. Fixed Point Theory Appl. 2014, 211 (2014)

  22. Latif, A, Al-Mazrooei, AE, Alofi, ASM, Yao, JC: Shrinking projection method for systems of generalized equilibria with constraints of variational inclusion and fixed point problems. Fixed Point Theory Appl. 2014, 164 (2014)

  23. Bin Dehaish, BA, Latif, A, Bakodah, HO, Qin, X: A viscosity splitting algorithm for solving inclusion and equilibrium problems. J. Inequal. Appl. 2015, 50 (2015)

  24. Ceng, LC, Latif, A, Al-Mazrooei, AE: Hybrid viscosity methods for equilibrium problems, variational inequalities, and fixed point problems. Appl. Anal. (2015). doi:10.1080/00036811.2015.1051971

  25. Jung, JS: A new iteration method for nonexpansive mappings and monotone mappings in Hilbert spaces. J. Inequal. Appl. 2010, Article ID 251761 (2010)

  26. Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63(1-4), 123-145 (1994)

  27. Oettli, W: A remark on vector-valued equilibria and generalized monotonicity. Acta Math. Vietnam. 22, 215-221 (1997)

  28. Ceng, LC, Kong, ZR, Wen, CF: On general systems of variational inequalities. Comput. Math. Appl. 66, 1514-1532 (2013)

  29. Ceng, LC, Petrusel, A, Yao, JC: Composite viscosity approximation methods for equilibrium problem, variational inequality and common fixed points. J. Nonlinear Convex Anal. 15(1), 219-240 (2014)

  30. Marino, G, Muglia, L, Yao, Y: Viscosity methods for common solutions of equilibrium and variational inequality problems via multi-step iterative algorithms and common fixed points. Nonlinear Anal. 75, 1787-1798 (2012)

  31. Takahashi, S, Takahashi, W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 69, 1025-1033 (2008)

  32. Ceng, LC, Yao, JC: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 214, 186-201 (2008)

  33. Moudafi, A, Mainge, P-E: Strong convergence of an iterative method for hierarchical fixed point problems. Pac. J. Optim. 3(3), 529-538 (2007)

  34. Moudafi, A: Weak convergence theorems for nonexpansive mappings and equilibrium problems. J. Nonlinear Convex Anal. 9(1), 37-43 (2008)

  35. Ceng, LC, Petrusel, A, Yao, JC: Iterative approaches to solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. J. Optim. Theory Appl. 143, 37-58 (2009)

  36. Yao, Y, Liou, YC, Yao, JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007, Article ID 064363 (2007)

  37. Colao, V, Marino, G, Xu, HK: An iterative method for finding common solutions of equilibrium and fixed point problems. J. Math. Anal. Appl. 344, 340-352 (2008)

  38. Cianciaruso, F, Marino, G, Muglia, L, Yao, Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010, Article ID 383740 (2010)

  39. Yao, Y, Liou, Y-C, Marino, G: Two-step iterative algorithms for hierarchical fixed point problems and variational inequality problems. J. Appl. Math. Comput. 31(1-2), 433-445 (2009)

  40. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241(1), 46-55 (2000)

  41. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)

  42. Shimoji, K, Takahashi, W: Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan. J. Math. 5, 387-404 (2001)

  43. Goebel, K, Kirk, WA: Topics on Metric Fixed-Point Theory. Cambridge University Press, Cambridge (1990)

  44. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(2), 240-256 (2002)

  45. Rockafellar, RT: Monotone operators and the proximal point algorithms. SIAM J. Control Optim. 14, 877-898 (1976)

  46. Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118(2), 417-428 (2003)

  47. Moudafi, A, Mainge, P-E: Towards viscosity approximations of hierarchical fixed point problems. Fixed Point Theory Appl. 2006, Article ID 95453 (2006)


Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The authors therefore acknowledge with thanks DSR for technical and financial support.

Author information

Corresponding author

Correspondence to Abdul Latif.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Ceng, LC., Latif, A. & Al-Mazrooei, A.E. Composite viscosity methods for common solutions of general mixed equilibrium problem, variational inequalities and common fixed points. J Inequal Appl 2015, 217 (2015). https://doi.org/10.1186/s13660-015-0736-y

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13660-015-0736-y
