
Iterative algorithms of common solutions for a hierarchical fixed point problem, a system of variational inequalities, and a split equilibrium problem in Hilbert spaces

Abstract

In this paper, we suggest and analyze an iterative algorithm to approximate a common solution of a hierarchical fixed point problem for nonexpansive mappings, a system of variational inequalities, and a split equilibrium problem in Hilbert spaces. Under some suitable conditions imposed on the sequences of parameters, we prove that the sequence generated by the proposed iterative method converges strongly to a common element of the solution set of these three kinds of problems. The results obtained here extend and improve the corresponding results of the relevant literature.

1 Introduction

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, whose inner products and norms are denoted by \(\langle \cdot, \cdot \rangle \) and \(\Vert \cdot \Vert \), and let \(C_{1}\) and \(C_{2}\) be two nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Recall that a mapping \(T:C_{1} \to C_{1}\) is nonexpansive if \(\Vert Tx - Ty \Vert \le \Vert x - y \Vert \) for all \(x,y \in C_{1}\). We denote the fixed point set of T by \(\mathrm{Fix}(T) = \{ x \in C_{1}:x = Tx \} \). If T is nonexpansive, then \(\mathrm{Fix}(T)\) is closed and convex, and it is nonempty whenever \(C_{1}\) is bounded. Next, we consider the following three kinds of problems, which are the focus of this paper.

Problem 1

(Hierarchical fixed point problem (HFPP))

In 2006, Moudafi and Mainge [23] introduced and studied the following hierarchical fixed point problem (in short HFPP) for a nonexpansive mapping T with respect to another nonexpansive mapping S on \(C_{1}\): Find \(x \in \mathrm{Fix}(T)\) such that

$$\begin{aligned} \langle x - Sx,y - x \rangle \ge 0,\quad \forall y \in \mathrm{Fix}(T), \end{aligned}$$
(1)

which amounts to saying that \(x \in \mathrm{Fix}(T)\) satisfies the variational inequality depending on a given criterion S, namely, find \(x \in C_{1}\) such that

$$\begin{aligned} 0 \in (I - S)x + N_{\mathrm{Fix}(T)}(x), \end{aligned}$$

where I is the identity mapping on \(C_{1}\) and \(N_{\mathrm{Fix}(T)}\) is the normal cone to \(\mathrm{Fix}(T)\) at x defined by

$$\begin{aligned} N_{\mathrm{Fix}(T)}(x) = \textstyle\begin{cases} \{ u \in H_{1}: \langle y - x,u \rangle \le 0,\forall y \in \mathrm{Fix}(T) \}& \text{if }x \in \mathrm{Fix}(T), \\ \emptyset& \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$

It is known that the hierarchical fixed point problem is closely related to monotone variational inequalities and convex programming problems; see [39] and the references therein. In 2007, Moudafi [22] introduced the following Krasnoselski–Mann algorithm for solving HFPP (1):

$$\begin{aligned} x_{n + 1} = (1 - \alpha _{n})x_{n} + \alpha _{n}\bigl(\sigma _{n}Sx_{n} + (1 - \sigma _{n})Tx_{n}\bigr), \end{aligned}$$

where \(\{ \alpha _{n} \} \) and \(\{ \sigma _{n} \} \) are two real sequences in (0,1).
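
For illustration, the following is a minimal numerical sketch of this Krasnoselski–Mann scheme in \(\mathbb{R}^{2}\); the mappings S and T below are simple placeholders chosen only so that they are nonexpansive, and are not taken from [22].

```python
import numpy as np

# Minimal sketch of the Krasnoselski-Mann scheme for HFPP (1).
# Placeholder data: T = projection onto the box [-1, 1]^2 (nonexpansive,
# Fix(T) = [-1, 1]^2), S(x) = -x (nonexpansive criterion mapping).

def T(x):
    return np.clip(x, -1.0, 1.0)

def S(x):
    return -x

x = np.array([5.0, -3.0])                      # arbitrary starting point
for n in range(1, 500):
    alpha_n = 0.5                              # {alpha_n} in (0, 1)
    sigma_n = 1.0 / (n + 1)                    # {sigma_n} in (0, 1)
    x = (1 - alpha_n) * x + alpha_n * (sigma_n * S(x) + (1 - sigma_n) * T(x))

print(x)   # heuristic illustration only; convergence requires the
           # parameter conditions imposed in [22]
```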

On the other hand, in 2011, Ceng, Ansari, and Yao [8] proposed the following iterative method:

$$\begin{aligned} x_{n + 1} = P_{C}\bigl[\alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F) \bigl(T(y_{n}) \bigr)\bigr], \end{aligned}$$

where U is a Lipschitzian mapping, and F is a Lipschitzian and strongly monotone mapping. Under appropriate assumptions, they proved that the sequence \(\{ x_{n} \} \) generated by the above iterative algorithm converges strongly to the unique solution of the variational inequality

$$\begin{aligned} \bigl\langle \rho U(x) - \mu F(x),y - x \bigr\rangle \ge 0,\quad \forall y \in \mathrm{Fix}(T). \end{aligned}$$
(2)

Note that HFPP (2) is more general than HFPP (1).

Problem 2

(Split equilibrium problem (SEP))

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Let F be a bifunction of \(C \times C\) into R, where R is the set of real numbers. The equilibrium problem (EP for short) for \(F:C \times C \to R\) is to find \(x \in C\) such that

$$\begin{aligned} F(x,y) \ge 0,\quad \forall y \in C, \end{aligned}$$
(3)

which was introduced and studied by Blum and Oettli [3]. It contains many problems, such as fixed point problems, variational inequality problems, Nash equilibrium problems, optimization problems, and complementarity problems, as special cases; see, e.g., [1, 2, 20, 31] and the references therein. In 2005, Combettes and Hirstoaga [15] introduced an iterative scheme for finding the best approximation to the initial data when the solution set of (3) is nonempty and proved a strong convergence theorem. We denote the solution set of EP (3) by \(EP(F) = \{ x \in C:F(x,y) \ge 0,\forall y \in C \} \).
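
For instance, if \(B:C \to H\) is a nonlinear mapping and one takes \(F(x,y) = \langle Bx,y - x \rangle \), then EP (3) reduces to the classical variational inequality problem: find \(x \in C\) such that

$$\begin{aligned} \langle Bx,y - x \rangle \ge 0,\quad \forall y \in C. \end{aligned}$$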

Recently, Kazmi and Rizvi [21] considered the following split equilibrium problem (SEP for short): Let \(F_{1}:C_{1} \times C_{1} \to R\) and \(F_{2}:C_{2} \times C_{2} \to R\) be two nonlinear bifunctions and \(A:H_{1} \to H_{2}\) be a bounded linear operator. Then the SEP is to find \(x^{*} \in C_{1}\) such that

$$\begin{aligned} F_{1}\bigl(x^{*},x\bigr) \ge 0,\quad \forall x \in C_{1} \end{aligned}$$
(4)

and

$$\begin{aligned} F_{2}\bigl(y^{*},y\bigr) \ge 0,\quad \forall y \in C_{2}, \end{aligned}$$
(5)

where \(y^{*} = Ax^{*} \in C_{2}\). The solution set of SEP (4)–(5) is denoted by \(\Gamma = \{ p \in EP(F_{1}):Ap \in EP(F_{2}) \} \). This formalism is also at the core of modeling many inverse problems arising in phase retrieval and other real world problems, for example, in sensor networks, computerized tomography, intensity-modulated radiation therapy treatment planning, and data compression; see, e.g., [5, 6, 12–14] and the references therein.

Problem 3

(System of variational inequalities (SVI))

Let \(C_{1}\) be a nonempty closed convex subset of \(H_{1}\) and \(A,B:C_{1} \to H_{1}\) be two mappings. Ceng, Wang, and Yao [11] considered the following problem: find \((x^{*},y^{*}) \in C_{1} \times C_{1}\) such that

$$\begin{aligned} \textstyle\begin{cases} \langle \lambda _{1}Ay^{*} + x^{*} - y^{*},x - x^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \langle \lambda _{2}Bx^{*} + y^{*} - x^{*},x - y^{*} \rangle \ge 0,&\forall x \in C_{1}. \end{cases}\displaystyle \end{aligned}$$
(6)

Problem (6) is called a general system of variational inequalities, where \(\lambda _{1} > 0\) and \(\lambda _{2} > 0\) are constants. In 2015, Jitsupa et al. [19] introduced the following system of variational inequalities in a Hilbert space \(H_{1}\), that is, finding \(x_{i}^{*} \in C_{1}(i = 1,2, \ldots,N)\) such that

$$\begin{aligned} \textstyle\begin{cases} \langle \lambda _{N}B_{N}x_{N}^{*} + x_{1}^{*} - x_{N}^{*},x - x_{1}^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \langle \lambda _{N - 1}B_{N - 1}x_{N - 1}^{*} + x_{N}^{*} - x_{N - 1}^{*},x - x_{N}^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \vdots \\ \langle \lambda _{2}B_{2}x_{2}^{*} + x_{3}^{*} - x_{2}^{*},x - x_{3}^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \langle \lambda _{1}B_{1}x_{1}^{*} + x_{2}^{*} - x_{1}^{*},x - x_{2}^{*} \rangle \ge 0,&\forall x \in C_{1}, \end{cases}\displaystyle \end{aligned}$$
(7)

which is called a more general system of variational inequalities, where \(\lambda _{i} > 0\) and \(B_{i}:C_{1} \to H_{1}\) is a nonlinear mapping for all \(i \in \{ 1,2, \ldots,N \} \). The solution set of SVI (7) is denoted by \(GSVI(C_{1},B_{i})\).
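
For example, for \(N = 2\), system (7) consists of the two inequalities

$$\begin{aligned} \textstyle\begin{cases} \langle \lambda _{2}B_{2}x_{2}^{*} + x_{1}^{*} - x_{2}^{*},x - x_{1}^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \langle \lambda _{1}B_{1}x_{1}^{*} + x_{2}^{*} - x_{1}^{*},x - x_{2}^{*} \rangle \ge 0,&\forall x \in C_{1}, \end{cases}\displaystyle \end{aligned}$$

which, after identifying \(x_{1}^{*} = x^{*}\), \(x_{2}^{*} = y^{*}\) and relabeling the mappings and constants, is exactly system (6).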

In view of these three different kinds of problems, there are several recent results on numerical algorithms in the literature. In the setting of uniformly convex Banach spaces, the Thakur three-step iterative process for Suzuki-type nonexpansive mappings or generalized nonexpansive mappings enriched with property (E) was studied in [27–30], and comparative numerical experiments were performed with visualizations of the convergence behavior. In [25], an S-iteration technique for finding common fixed points of nonself quasi-nonexpansive mappings was developed, and convergence properties of the proposed algorithm were analyzed. In [17], a hybrid projection algorithm for a countable family of mappings was considered, and its strong convergence to a common fixed point of the mappings was established. Very recently, Dadashi and Postolache [18] constructed a forward–backward splitting algorithm for approximating a zero of the sum of an α-inverse strongly monotone operator and a maximal monotone operator. They proved a strong convergence theorem under mild conditions. In particular, they added a nonexpansive mapping to the algorithm and proved that the generated sequence converges strongly to a common element of the fixed point set of the nonexpansive mapping and the zero point set of the sum of the monotone operators. They also applied their main result to both equilibrium problems and convex programming.

On the other hand, Ceng et al. [9] introduced a hybrid viscosity extragradient method for finding common elements of the solution set of a general system of variational inequalities, the common fixed point set of a countable family of nonexpansive mappings, and the zero point set of an accretive operator in real smooth Banach spaces. Moreover, in [10] they proposed an implicit composite extragradient-like method, based on the Mann iteration method, the viscosity approximation method, and the Korpelevich extragradient method, for solving a general system of variational inequalities with a hierarchical variational inequality constraint for countably many uniformly Lipschitzian pseudocontractive mappings and an accretive operator in a real Banach space. In [36, 38], Yao, Postolache, and Yao suggested a projection-type algorithm and an extragradient algorithm for finding, respectively, common solutions of two variational inequalities, and a common element of the set of fixed points of a pseudocontractive operator and the set of solutions of a variational inequality problem in Hilbert spaces. In [35, 37], Yao et al. introduced iterative algorithms for solving a split variational inequality and a fixed point problem that require finding a solution of a generalized variational inequality whose image is a fixed point of a pseudocontractive operator, or a fixed point of two quasi-pseudocontractive operators under a nonlinear transformation, in Hilbert spaces. In [33, 34], Yao et al. constructed iterative algorithms for solving the split feasibility problem and the fixed point problem, as well as split equilibrium problems and fixed point problems involving pseudocontractive mappings in Hilbert spaces, and proved their strong convergence.

Inspired and motivated by the research described above, we suggest an iterative approximation method for finding an element of the common solution set of HFPP (2), SEP (4)–(5), and SVI (7) involving nonexpansive mappings. To the best of our knowledge, there is no earlier study on finding an element of the common solution set of HFPP (2), SEP (4)–(5), and SVI (7). By specializing the mappings, we obtain corollaries on a common element of the set of fixed points of a nonexpansive mapping, the solution set of a variational inequality, and the solution set of an equilibrium problem. Thus, the results presented here are new and of interest.

The paper is organized as follows. In Sect. 2, we recall some concepts and lemmas needed for proving our main results. In Sect. 3, we suggest an iterative algorithm for solving the three different kinds of problems and prove its strong convergence. Finally, a conclusion is given.

2 Preliminaries

In this section, we list some fundamental results that are useful in the consequent analysis.

Let H be a real Hilbert space and C be a nonempty closed convex subset of H.

For all \(x,y \in H\), the following relations hold:

$$\begin{aligned} \Vert x - y \Vert ^{2} = \Vert x \Vert ^{2} - \Vert y \Vert ^{2} - 2 \langle x - y,y \rangle,\qquad \Vert x + y \Vert ^{2} \le \Vert x \Vert ^{2} + 2 \langle y,x + y \rangle. \end{aligned}$$

A function \(F:C \times C \to R\) is called an equilibrium function if it satisfies the following conditions:

  1. (A1)

    \(F(x,x) = 0\) for all \(x \in C\);

  2. (A2)

    F is monotone, i.e., \(F(x,y) + F(y,x) \le 0\) for all \(x,y \in C\);

  3. (A3)

    \(\mathop{\lim \sup}_{t \downarrow 0}F(tz + (1 - t)x,y) \le F(x,y)\) for all \(x,y,z \in C\);

  4. (A4)

    for each \(x \in C\), \(y \mapsto F(x,y)\) is convex and lower semi-continuous;

  5. (A5)

    for each \(r > 0\) and \(z \in C\), there exist a nonempty compact convex subset K of H and \(x \in C \cap K\) such that

    $$\begin{aligned} F(y,x) + \frac{1}{r} \langle y - x,x - z \rangle < 0, \quad\forall y \in C \backslash K. \end{aligned}$$

Lemma 2.1

([16])

Assume that \(F:C \times C \to R\) is an equilibrium function. For \(r > 0\), define a mapping \(R_{r,F}:H \to C\) as follows:

$$\begin{aligned} R_{r,F}(x) = \biggl\{ z \in C:F(z,y) + \frac{1}{r} \langle y - z,z - x \rangle \ge 0,\forall y \in C \biggr\} \end{aligned}$$

for all \(x \in H\). Then the following hold:

  1. (B1)

    \(R_{r,F}\) is single-valued;

  2. (B2)

    \(\mathrm{Fix}(R_{r,F}) = EP(F)\) and \(EP(F)\) is a nonempty closed and convex subset of C;

  3. (B3)

    \(R_{r,F}\) is a firmly nonexpansive mapping, i.e.,

    $$\begin{aligned} \bigl\Vert R_{r,F}(x) - R_{r,F}(y) \bigr\Vert ^{2} \le \bigl\langle R_{r,F}(x) - R_{r,F}(y),x - y \bigr\rangle , \quad\forall x,y \in H. \end{aligned}$$
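
As a simple illustration of Lemma 2.1 (these two special cases are standard and are not taken from [16]): if \(F \equiv 0\), then the defining inequality reduces to \(\langle y - z,z - x \rangle \ge 0\) for all \(y \in C\), which characterizes \(z = P_{C}(x)\), so \(R_{r,F} = P_{C}\) for every \(r > 0\); and if \(C = H\) and \(F(z,y) = \langle Mz,y - z \rangle \) for a bounded linear monotone operator M, the inequality forces \(Mz + \frac{1}{r}(z - x) = 0\), that is, \(R_{r,F}(x) = (I + rM)^{-1}x\).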

Lemma 2.2

Let \(F:C \times C \to R\) be an equilibrium function, and let \(R_{r,F}\) be defined as in Lemma 2.1 for \(r > 0\). Let \(x,y \in H\) and \(r_{1},r_{2} > 0\). Then

$$\begin{aligned} \bigl\Vert R_{r_{2},F}(y) - R_{r_{1},F}(x) \bigr\Vert \le \Vert y - x \Vert + \biggl\vert \frac{r_{2} - r_{1}}{r_{2}} \biggr\vert \bigl\Vert R_{r_{2},F}(y) - y \bigr\Vert . \end{aligned}$$

Lemma 2.3

([32])

Let \(\{ a_{n} \} \) be a sequence of nonnegative real numbers such that

$$\begin{aligned} a_{n + 1} \le ( 1 - \alpha _{n} )a_{n} + \delta _{n},\quad n \ge 0, \end{aligned}$$

where \(\{ \alpha _{n} \} \) is a sequence in \(( 0,1 )\) and \(\{ \delta _{n} \} \) is a sequence in R such that

$$\begin{aligned} (\mathrm{i})\quad \sum_{n = 1}^{\infty } \alpha _{n} = \infty;\qquad (\mathrm{ii})\quad \mathop{\lim \sup}_{n \to \infty } \frac{\delta _{n}}{\alpha _{n}} \le 0 \quad\textit{or}\quad \sum_{n = 1}^{\infty } \vert \delta _{n} \vert < \infty. \end{aligned}$$

Then \(\lim_{n \to \infty } a_{n} = 0\).

Lemma 2.4

Let \(P_{C}\) denote the projection of H onto C. It is known that \(P_{C}\) is nonexpansive and the following inequalities hold:

$$\begin{aligned} &\Vert P_{C}x - P_{C}y \Vert ^{2} \le \langle x - y,P_{C}x - P_{C}y \rangle,\quad \forall x,y \in H, \\ &\Vert x - y \Vert ^{2} \ge \Vert x - P_{C}x \Vert ^{2} + \Vert y - P_{C}x \Vert ^{2},\quad \forall x \in H,y \in C, \\ &\bigl\Vert (x - y) - (P_{C}x - P_{C}y) \bigr\Vert ^{2} \ge \Vert x - y \Vert ^{2} - \Vert P_{C}x - P_{C}y \Vert ^{2}, \quad\forall x,y \in H. \end{aligned}$$
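
The following is a quick numerical sanity check of these three inequalities (illustrative only, with hypothetical data: \(H = \mathbb{R}^{3}\) and C a box, so that \(P_{C}\) is componentwise clipping).

```python
import numpy as np

# Numerical sanity check of the projection inequalities in Lemma 2.4,
# for H = R^3 and C = [0, 1]^3 (so P_C is componentwise clipping).
rng = np.random.default_rng(1)
P = lambda v: np.clip(v, 0.0, 1.0)
sq = lambda v: float(np.dot(v, v))

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    c = P(rng.normal(size=3))                 # an arbitrary point of C
    assert sq(P(x) - P(y)) <= np.dot(x - y, P(x) - P(y)) + 1e-12
    assert sq(x - c) >= sq(x - P(x)) + sq(c - P(x)) - 1e-12
    assert sq((x - y) - (P(x) - P(y))) >= sq(x - y) - sq(P(x) - P(y)) - 1e-12
print("all three projection inequalities hold on the sampled points")
```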

Lemma 2.5

If B is an α-inverse-strongly monotone mapping of C into H, and \(\lambda \in [0,2\alpha ]\), then \(I - \lambda B\) is a nonexpansive mapping.

Proof

For any \(w,u \in C\), using the α-inverse strong monotonicity of B, i.e., \(\langle Bw - Bu,w - u \rangle \ge \alpha \Vert Bw - Bu \Vert ^{2}\), we have

$$\begin{aligned} \bigl\Vert (I - \lambda B)w - (I - \lambda B)u \bigr\Vert ^{2} &= \bigl\Vert (w - u) - \lambda (Bw - Bu) \bigr\Vert ^{2} \\ &= \Vert w - u \Vert ^{2} - 2\lambda \langle Bw - Bu,w - u \rangle + \lambda ^{2} \Vert Bw - Bu \Vert ^{2} \\ &\le \Vert w - u \Vert ^{2} + \lambda (\lambda - 2\alpha ) \Vert Bw - Bu \Vert ^{2} \\ &\le \Vert w - u \Vert ^{2}, \end{aligned}$$

which implies that \(I - \lambda B\) is nonexpansive, completing the proof. □

Lemma 2.6

([7])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(B_{i}:C \to H\) be an \(\alpha _{i}\)-inverse-strongly monotone mapping, where \(i \in \{ 1,2, \ldots, N \} \). Let \(G:C \to C\) be a mapping defined by

$$\begin{aligned} G(x) = P_{C}(I - \lambda _{N}B_{N})P_{C}(I - \lambda _{N - 1}B_{N - 1}) \cdots P_{C}(I - \lambda _{2}B_{2})P_{C}(I - \lambda _{1}B_{1})x,\quad \forall x \in C. \end{aligned}$$

If \(\lambda _{i} \in [0,2\alpha _{i}]\), \(i = 1,2, \ldots,N\), then \(G:C \to C\) is nonexpansive.

Proof

Put \(T^{i} = P_{C}(I - \lambda _{i}B_{i})P_{C}(I - \lambda _{i - 1}B_{i - 1}) \cdots P_{C}(I - \lambda _{2}B_{2})P_{C}(I - \lambda _{1}B_{1})\), \(i = 1,2, \ldots,N\), and \(T^{0} = I\), where I is the identity mapping on C; then \(G = T^{N}\). For all \(x,y \in C\), using the nonexpansiveness of \(P_{C}\) and Lemma 2.5, we have

$$\begin{aligned} \bigl\Vert G(x) - G(y) \bigr\Vert &= \bigl\Vert T^{N}(x) - T^{N}(y) \bigr\Vert \\ &= \bigl\Vert P_{C}(I - \lambda _{N}B_{N})T^{N - 1}x - P_{C}(I - \lambda _{N}B_{N})T^{N - 1}y \bigr\Vert \\ &\le \bigl\Vert (I - \lambda _{N}B_{N})T^{N - 1}x - (I - \lambda _{N}B_{N})T^{N - 1}y \bigr\Vert \\ &\le \bigl\Vert T^{N - 1}x - T^{N - 1}y \bigr\Vert \\ &\vdots \\ &\le \Vert x - y \Vert . \end{aligned}$$

Then G is nonexpansive, which completes the proof. □
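
As a concrete illustration (with placeholder data not taken from [7]), the following sketch evaluates G for \(N = 2\) in \(\mathbb{R}^{2}\), with \(C = [-1,1]^{2}\) and \(B_{i}(x) = M_{i}x\) for symmetric positive semidefinite matrices \(M_{i}\), so that \(B_{i}\) is \(\xi _{i}\)-inverse-strongly monotone with \(\xi _{i} = 1/\Vert M_{i} \Vert \).

```python
import numpy as np

# Sketch of evaluating G(x) = P_C(I - lambda_2 B_2) P_C(I - lambda_1 B_1) x
# for N = 2, with C = [-1, 1]^2 and B_i(x) = M_i x (illustrative data only).
P_C = lambda v: np.clip(v, -1.0, 1.0)

M = [np.diag([1.0, 2.0]), np.diag([0.5, 1.0])]   # B_1, B_2
lam = [0.5, 0.5]                                  # lambda_i in (0, 2*xi_i)

def G(x):
    for Mi, li in zip(M, lam):    # apply P_C(I - lambda_i B_i), i = 1, 2 in turn
        x = P_C(x - li * (Mi @ x))
    return x

print(G(np.array([3.0, -4.0])))
```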

Lemma 2.7

([8])

Let \(U:C \to H\) be a τ-Lipschitzian mapping, and let \(F:C \to H\) be a k-Lipschitzian and η-strongly monotone mapping. Then, for \(0 \le \rho \tau < \mu \eta \), the mapping \(\mu F - \rho U\) is \((\mu \eta - \rho \tau )\)-strongly monotone, i.e.,

$$\begin{aligned} \bigl\langle (\mu F - \rho U)x - (\mu F - \rho U)y,x - y \bigr\rangle \ge (\mu \eta - \rho \tau ) \Vert x - y \Vert ^{2},\quad \forall x,y \in C. \end{aligned}$$

Lemma 2.8

([26])

Suppose that \(\lambda \in (0,1)\) and \(\mu > 0\). Let \(F:C \to H\) be a k-Lipschitzian and η-strongly monotone mapping. In association with a nonexpansive mapping \(T:C \to C\), define the mapping \(T^{\lambda }:C \to H\) by

$$\begin{aligned} T^{\lambda } (x) = T(x) - \lambda \mu FT(x), \quad\forall x \in C. \end{aligned}$$

Then, provided \(0 < \mu < \frac{2\eta }{k^{2}}\), \(T^{\lambda }\) is a contraction, that is,

$$\begin{aligned} \bigl\Vert T^{\lambda } x - T^{\lambda } y \bigr\Vert \le (1 - \lambda \nu ) \Vert x - y \Vert ,\quad \forall x,y \in C, \end{aligned}$$

where \(\nu = 1 - \sqrt{1 - \mu (2\eta - \mu k^{2})}\).
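
As a quick consistency check of Lemma 2.8 (our own worked example, not from [26]): take \(F = I\), so \(k = \eta = 1\), and \(\mu = 1 < \frac{2\eta }{k^{2}}\). Then \(\nu = 1 - \sqrt{1 - \mu (2\eta - \mu k^{2})} = 1\) and \(T^{\lambda }(x) = T(x) - \lambda T(x) = (1 - \lambda )T(x)\), so indeed \(\Vert T^{\lambda } x - T^{\lambda } y \Vert = (1 - \lambda ) \Vert Tx - Ty \Vert \le (1 - \lambda \nu ) \Vert x - y \Vert \).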

Lemma 2.9

([24])

Each Hilbert space H satisfies the Opial condition, that is, for any sequence \(\{ x_{n}\}\) with \(x_{n}\) converging weakly to x, the inequality \(\mathop{\lim \inf}_{n \to \infty } \Vert x_{n} - x \Vert < \mathop{\lim \inf}_{n \to \infty } \Vert x_{n} - y \Vert \) holds for every \(y \in H\) with \(y \ne x\).

Lemma 2.10

([4] Demiclosedness principle)

Let C be a closed convex subset of a real Hilbert space H, and let \(T:C \to C\) be a nonexpansive mapping. Then \(I - T\) is demiclosed at zero, that is, if \(x_{n}\) converges weakly to x and \(x_{n} - Tx_{n} \to 0\), then \(x = Tx\).

3 Main results

Theorem 3.1

For \(i \in \{ 1,2 \} \), let \(H_{i}\) be a real Hilbert space, let \(C_{i}\) be a nonempty closed convex subset of \(H_{i}\), and let \(F_{i}:C_{i} \times C_{i} \to R\) be equilibrium functions. Let \(A:H_{1} \to H_{2}\) be a bounded linear operator with adjoint operator \(A^{*}\). Let \(B_{i}:C_{1} \to H_{1}\) be \(\xi _{i}\)-inverse-strongly monotone mappings, \(i \in \{ 1,2, \ldots, N \} \), and let G be defined as in Lemma 2.6. Let \(F:C_{1} \to C_{1}\) be a k-Lipschitzian and η-strongly monotone mapping, and let \(U:C_{1} \to C_{1}\) be a τ-Lipschitzian mapping. Let \(S,T:C_{1} \to C_{1}\) be two nonexpansive mappings such that \(\Theta = \Gamma \cap \mathrm{Fix}(G) \cap \mathrm{Fix}(T) \ne \emptyset \). For an arbitrarily given \(x_{0} \in C_{1}\), let the iterative sequences \(\{ u_{n} \} \), \(\{ y_{n} \} \), \(\{ z_{n} \} \), and \(\{ x_{n} \} \) be generated by

$$\begin{aligned} \textstyle\begin{cases} u_{n} = R_{r_{n},F_{1}}(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}), \\ y_{n} = P_{C_{1}}(I - \lambda _{N}B_{N})P_{C_{1}}(I - \lambda _{N - 1}B_{N - 1}) \cdots P_{C_{1}}(I - \lambda _{2}B_{2})P_{C_{1}}(I - \lambda _{1}B_{1})u_{n}, \\ z_{n} = \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n}, \\ x_{n + 1} = P_{C_{1}}[\alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F)(T(z_{n}))], \end{cases}\displaystyle \end{aligned}$$
(8)

where \(\{ r_{n} \} \subset (0,\infty )\), \(\gamma \in (0,1 / L_{A})\), and \(L_{A}\) is the spectral radius of the operator \(A^{*}A\). Suppose that the parameters satisfy \(0 < \mu < \frac{2\eta }{k^{2}}\), \(k \ge \eta \), and \(0 \le \rho \tau < \nu \), where \(\nu = 1 - \sqrt{1 - \mu (2\eta - \mu k^{2})}\), and that \(\{ \alpha _{n} \} \), \(\{ \beta _{n} \} \) are sequences in \((0,1)\) satisfying the following conditions:

  1. (i)

    \(\lim_{n \to \infty } \alpha _{n} = 0\) and \(\sum_{n = 0}^{\infty } \alpha _{n} = \infty \), \(\sum_{n = 1}^{\infty } \vert \alpha _{n - 1} - \alpha _{n} \vert < \infty \);

  2. (ii)

    \(\mathop{\lim \sup}_{n \to \infty } \frac{\beta _{n}}{\alpha _{n}} = 0\), \(\beta _{n} \le \alpha _{n} ( n \ge 1 )\) and \(\sum_{n = 1}^{\infty } \vert \beta _{n - 1} - \beta _{n} \vert < \infty \);

  3. (iii)

    \(\mathop{\lim \inf}_{n \to \infty } r_{n} > 0\), \(\sum_{n = 1}^{\infty } \vert r_{n - 1} - r_{n} \vert < \infty\).

Then the sequence \(\{ x_{n} \} \) generated by (8) converges strongly to \(w \in \Theta \), which is the unique solution of the variational inequality \(\langle \rho U(w) - \mu F(w),x - w \rangle \le 0\), \(\forall x \in \Theta \).
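
Before turning to the proof, we record a minimal numerical sketch of scheme (8). All concrete choices below (the sets, the operator A, and the mappings \(B_{1}\), S, T, U, F) are illustrative placeholders only, not part of the theorem; in particular, we take \(F_{1} \equiv 0\) and \(F_{2} \equiv 0\), so that the resolvents \(R_{r_{n},F_{1}}\) and \(R_{r_{n},F_{2}}\) reduce to the metric projections onto \(C_{1}\) and \(C_{2}\).

```python
import numpy as np

# Minimal sketch of scheme (8) with N = 1 and F_1 = F_2 = 0 (so the
# resolvents become projections); all data below are illustrative only.

# H_1 = H_2 = R^2, C_1 = [-1, 1]^2, C_2 = [-2, 2]^2.
P_C1 = lambda v: np.clip(v, -1.0, 1.0)
P_C2 = lambda v: np.clip(v, -2.0, 2.0)

A = np.array([[1.0, 0.5], [0.0, 1.0]])   # bounded linear operator H_1 -> H_2
L_A = np.linalg.norm(A.T @ A, 2)         # spectral radius of A^* A
gamma = 0.9 / L_A                        # gamma in (0, 1/L_A)

M = np.diag([1.0, 2.0])                  # B_1(x) = M x, xi_1 = 1/||M||
lam1 = 0.5                               # lambda_1 in (0, 2*xi_1)

T = P_C1                                 # nonexpansive, Fix(T) = C_1
S = lambda x: -x                         # nonexpansive
U = lambda x: 0.5 * x                    # tau-Lipschitzian, tau = 0.5
F = lambda x: x                          # k = eta = 1, so nu = 1
mu, rho = 1.0, 0.5                       # mu < 2*eta/k^2 and rho*tau < nu

x = np.array([5.0, -3.0])
for n in range(1, 1000):
    alpha_n = 1.0 / (n + 1)
    beta_n = 1.0 / (n + 1) ** 2          # beta_n <= alpha_n, beta_n/alpha_n -> 0
    # u_n = R_{r_n,F_1}(x_n + gamma A^*(R_{r_n,F_2} - I) A x_n), resolvents = projections
    u = P_C1(x + gamma * (A.T @ (P_C2(A @ x) - A @ x)))
    # y_n = P_{C_1}(I - lambda_1 B_1) u_n
    y = P_C1(u - lam1 * (M @ u))
    # z_n = beta_n S x_n + (1 - beta_n) y_n
    z = beta_n * S(x) + (1 - beta_n) * y
    # x_{n+1} = P_{C_1}[alpha_n rho U(x_n) + (I - alpha_n mu F)(T(z_n))]
    Tz = T(z)
    x = P_C1(alpha_n * rho * U(x) + Tz - alpha_n * mu * F(Tz))

print(x)   # with these illustrative data Theta = {0}, so the iterates drift toward 0
```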

Proof

Let \(p \in \Theta \). In particular, \(p \in \Gamma \), so that \(p = R_{r_{n},F_{1}}(p)\) and \(Ap = R_{r_{n},F_{2}}(Ap)\); moreover, \(p = G(p)\) and \(p = T(p)\). For convenience, we split the proof into several steps.

Step 1. We show that \(\{ x_{n} \} \), \(\{ u_{n} \} \), \(\{ y_{n} \} \), \(\{ z_{n} \} \) are bounded.

First, by (8) and the nonexpansiveness of \(R_{r_{n},F_{1}}\), we estimate

$$\begin{aligned} \begin{aligned}[b] \Vert u_{n} - p \Vert ^{2} &= \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - p \bigr\Vert ^{2} \\ &= \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - R_{r_{n},F_{1}}(p) \bigr\Vert ^{2} \\ &\le \bigl\Vert x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr\Vert ^{2} \\ &= \Vert x_{n} - p \Vert ^{2} + \gamma ^{2} \bigl\Vert A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} + 2\gamma \bigl\langle x_{n} - p,A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle . \end{aligned} \end{aligned}$$
(9)

It follows from the definition of \(L_{A}\) that

$$\begin{aligned} \begin{aligned}[b]&\gamma ^{2} \bigl\Vert A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \\ &\quad= \gamma ^{2} \bigl\langle (R_{r_{n},F_{2}} - I)Ax_{n},AA^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\quad\le L_{A}\gamma ^{2} \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(10)

By using Lemma 2.4, we have

$$\begin{aligned} \begin{aligned}[b] &2\gamma \bigl\langle x_{n} - p,A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\quad= 2\gamma \bigl\langle A(x_{n} - p),(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\quad= 2\gamma \bigl\langle A(x_{n} - p) + (R_{r_{n},F_{2}} - I)Ax_{n} - (R_{r_{n},F_{2}} - I)Ax_{n},(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\quad= 2\gamma \bigl\{ \bigl\langle R_{r_{n},F_{2}}Ax_{n} - Ap,(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle - \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} \\ &\quad\le 2\gamma \biggl\{ \frac{1}{2} \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} - \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \biggr\} \\ &\quad= - \gamma \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(11)

From (9)–(11) and \(\gamma \in (0,1 / L_{A})\) it follows that

$$\begin{aligned} \Vert u_{n} - p \Vert ^{2} \le \Vert x_{n} - p \Vert ^{2} + \gamma (L_{A}\gamma - 1) \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \le \Vert x_{n} - p \Vert ^{2}. \end{aligned}$$
(12)

It follows from (8), (12), Lemma 2.6, and \(p = G(p)\) that

$$\begin{aligned} \Vert y_{n} - p \Vert = \bigl\Vert T^{N}u_{n} - T^{N}p \bigr\Vert \le \Vert u_{n} - p \Vert \le \Vert x_{n} - p \Vert . \end{aligned}$$
(13)

Next, we prove that the sequence \(\{ x_{n} \} \) is bounded. Note that \(\beta _{n} \le \alpha _{n}\) for all \(n \ge 1\). Put \(V_{n} = \alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F)(T(z_{n}))\), so that \(x_{n + 1} = P_{C_{1}}[V_{n}]\). From (8) and Lemma 2.8, we get

$$\begin{aligned} \Vert x_{n + 1} - p \Vert ={}& \bigl\Vert P_{C_{1}}\bigl[\alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr)\bigr] - p \bigr\Vert \\ \le{}& \alpha _{n} \bigl\Vert \rho U(x_{n}) - \mu F(p) \bigr\Vert + \bigl\Vert (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(p) \bigr) \bigr\Vert \\ ={}& \alpha _{n} \bigl\Vert \rho U(x_{n}) - \rho U(p) + ( \rho U - \mu F) (p) \bigr\Vert \\ &{}+ \bigl\Vert (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(p)\bigr) \bigr\Vert \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + (1 - \alpha _{n}\nu ) \Vert z_{n} - p \Vert \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert \\ &{}+ (1 - \alpha _{n}\nu ) \bigl\Vert \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n} - p \bigr\Vert \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert \\ &{}+ (1 - \alpha _{n}\nu ) \bigl(\beta _{n} \Vert Sx_{n} - Sp \Vert + \beta _{n} \Vert Sp - p \Vert + (1 - \beta _{n}) \Vert y_{n} - p \Vert \bigr) \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert \\ &{}+ (1 - \alpha _{n}\nu ) \bigl(\beta _{n} \Vert x_{n} - p \Vert + \beta _{n} \Vert Sp - p \Vert + (1 - \beta _{n}) \Vert x_{n} - p \Vert \bigr) \\ \le{}& \bigl(1 - \alpha _{n}(\nu - \rho \tau )\bigr) \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert \\ &{}+ (1 - \alpha _{n}\nu )\beta _{n} \Vert Sp - p \Vert \\ \le{}& \bigl(1 - \alpha _{n}(\nu - \rho \tau )\bigr) \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + \beta _{n} \Vert Sp - p \Vert \\ \le{}& \bigl(1 - \alpha _{n}(\nu - \rho \tau )\bigr) \Vert x_{n} - p \Vert + \alpha _{n}\bigl( \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + \Vert Sp - p \Vert \bigr) \\ \le{}& \bigl(1 - \alpha _{n}(\nu - \rho \tau )\bigr) \Vert x_{n} - p \Vert + \frac{\alpha _{n}(\nu - \rho \tau )}{\nu - \rho \tau } \bigl( \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + \Vert Sp - p \Vert \bigr) \\ \le{}& \max \biggl\{ \Vert x_{0} - p \Vert ,\frac{1}{\nu - \rho \tau } \bigl( \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + \Vert Sp - p \Vert \bigr) \biggr\} . \end{aligned}$$
(14)

So \(\{ x_{n} \} \) is bounded, and consequently we can deduce that \(\{ u_{n} \} \), \(\{ y_{n} \}, \{ z_{n} \} \) are also bounded.

Step 2. We will show the following:

$$\begin{aligned} (\mathrm{a})\quad \lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0;\qquad (\mathrm{b})\quad\lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0;\qquad (\mathrm{c})\quad\lim _{n \to \infty } \Vert u_{n} - y_{n} \Vert = 0. \end{aligned}$$

Noting that \(u_{n} = R_{r_{n},F_{1}}(v_{n})\) and \(u_{n - 1} = R_{r_{n - 1},F_{1}}(v_{n - 1})\), where \(v_{n} := x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\), from Lemma 2.2 we have

$$\begin{aligned} &\Vert u_{n} - u_{n - 1} \Vert \\ &\quad = \Vert R_{r_{n},F_{1}}v_{n} - R_{r_{n - 1},F_{1}}v_{n - 1} \Vert \\ &\quad\le \bigl\Vert x_{n} - x_{n - 1} + \gamma A^{*} \bigl[(R_{r_{n},F_{2}} - I)Ax_{n} - (R_{r_{n - 1},F_{2}} - I)Ax_{n - 1})\bigr] \bigr\Vert \\ &\qquad{}+ \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl\Vert R_{r_{n},F_{1}} \bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - x_{n} - \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \\ &\quad \le \bigl\Vert x_{n} - x_{n - 1} - \gamma A^{*}A(x_{n} - x_{n - 1}) \bigr\Vert + \gamma \bigl\Vert A^{*} \bigr\Vert \Vert R_{r_{n},F_{2}}Ax_{n} - R_{r_{n - 1},F_{2}}Ax_{n - 1} \Vert \\ &\qquad{} + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \delta _{n - 1} \\ &\quad\le \bigl\{ \Vert x_{n} - x_{n - 1} \Vert ^{2} - 2\gamma \Vert Ax_{n} - Ax_{n - 1} \Vert ^{2} + \gamma ^{2} \Vert A \Vert ^{4} \Vert x_{n} - x_{n - 1} \Vert ^{2} \bigr\} ^{\frac{1}{2}} \\ &\qquad{}+ \gamma \Vert A \Vert \biggl\{ \Vert Ax_{n} - Ax_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \Vert R_{r_{n},F_{2}}Ax_{n} - Ax_{n} \Vert \biggr\} + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \delta _{n - 1} \\ &\quad\le \bigl(1 - 2\gamma \Vert A \Vert ^{2} + \gamma ^{2} \Vert A \Vert ^{4}\bigr)^{\frac{1}{2}} \Vert x_{n} - x_{n - 1} \Vert + \gamma \Vert A \Vert ^{2} \Vert x_{n} - x_{n - 1} \Vert \\ &\qquad{} + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \gamma \Vert A \Vert \sigma _{n - 1} + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \delta _{n - 1} \\ &\quad= \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}\bigr), \end{aligned}$$
(15)

where

$$\begin{aligned} &\delta _{n - 1} = \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - \bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) \bigr\Vert , \\ &\sigma _{n - 1} = \Vert R_{r_{n},F_{2}}Ax_{n} - Ax_{n} \Vert . \end{aligned}$$

So, from Lemma 2.6, we have

$$\begin{aligned} \begin{aligned}[b] \Vert y_{n} - y_{n - 1} \Vert & = \bigl\Vert G(u_{n}) - G(u_{n - 1}) \bigr\Vert \le \Vert u_{n} - u_{n - 1} \Vert \\ &\le \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}\bigr). \end{aligned} \end{aligned}$$
(16)

Then from (16) we get

$$\begin{aligned} \Vert z_{n} - z_{n - 1} \Vert ={}& \bigl\Vert \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n} - \beta _{n - 1}Sx_{n - 1} - (1 - \beta _{n - 1})y_{n - 1} \bigr\Vert \\ \le{}& \beta _{n} \Vert x_{n} - x_{n - 1} \Vert + \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert y_{n - 1} \Vert \bigr) + (1 - \beta _{n}) \Vert y_{n} - y_{n - 1} \Vert \\ \le{}& \beta _{n} \Vert x_{n} - x_{n - 1} \Vert + \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert y_{n - 1} \Vert \bigr) \\ &{}+ (1 - \beta _{n}) \biggl\{ \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1} \bigr) \biggr\} \\ \le{}& \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}\bigr) \\ &{}+ \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert y_{n - 1} \Vert \bigr). \end{aligned}$$
(17)

Next, by Lemma 2.8, we estimate

$$\begin{aligned} \begin{aligned}[b] &\Vert x_{n + 1} - x_{n} \Vert \\ &\quad= \bigl\Vert P_{C}[V_{n}] - P_{C}[V_{n - 1}] \bigr\Vert \\ &\quad\le \bigl\Vert \alpha _{n}\rho \bigl(U(x_{n}) - U(x_{n - 1})\bigr) + (\alpha _{n} - \alpha _{n - 1})\rho U(x_{n - 1}) + (I - \alpha _{n}\mu F) \bigl(T(z_{n}) \bigr) \\ &\qquad{} - (I - \alpha _{n}\mu F) \bigl(T(z_{n - 1})\bigr) + (I - \alpha _{n}\mu F) \bigl(T(z_{n - 1})\bigr) - (I - \alpha _{n - 1}\mu F) \bigl(T(z_{n - 1})\bigr) \bigr\Vert \\ &\quad\le \alpha _{n}\rho \tau \Vert x_{n} - x_{n - 1} \Vert + \vert \alpha _{n} - \alpha _{n - 1} \vert \bigl( \bigl\Vert \rho U(x_{n - 1}) \bigr\Vert + \bigl\Vert \mu F \bigl(T(z_{n - 1})\bigr) \bigr\Vert \bigr)\\ &\qquad{} + (1 - \alpha _{n} \nu ) \Vert z_{n} - z_{n - 1} \Vert . \end{aligned} \end{aligned}$$
(18)

From (17) and (18), we get

$$\begin{aligned} & \Vert x_{n + 1} - x_{n} \Vert \\ &\quad \le \alpha _{n}\rho \tau \Vert x_{n} - x_{n - 1} \Vert + \vert \alpha _{n} - \alpha _{n - 1} \vert \bigl( \bigl\Vert \rho U(x_{n - 1}) \bigr\Vert + \bigl\Vert \mu F \bigl(T(z_{n - 1})\bigr) \bigr\Vert \bigr) \\ &\qquad{}+ (1 - \alpha _{n}\nu ) \biggl\{ \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1} \bigr) \\ &\qquad{}+ \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert z_{n - 1} \Vert \bigr) \biggr\} \\ &\quad\le \bigl(1 - (\nu - \rho \tau )\alpha _{n}\bigr) \Vert x_{n} - x_{n - 1} \Vert + \vert \alpha _{n} - \alpha _{n - 1} \vert \bigl( \bigl\Vert \rho U(x_{n - 1}) \bigr\Vert + \bigl\Vert \mu F\bigl(T(z_{n - 1})\bigr) \bigr\Vert \bigr) \\ &\qquad{}+ \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}\bigr) + \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert z_{n - 1} \Vert \bigr) \\ &\quad \le \bigl(1 - (\nu - \rho \tau )\alpha _{n}\bigr) \Vert x_{n} - x_{n - 1} \Vert + M\biggl( \vert \alpha _{n} - \alpha _{n - 1} \vert + \frac{1}{\varepsilon } \vert r_{n - 1} - r_{n} \vert + \vert \beta _{n} - \beta _{n - 1} \vert \biggr), \end{aligned}$$
(19)

where \(M = \max \{ \sup_{n \ge 1}( \Vert \rho U(x_{n - 1}) \Vert + \Vert \mu F(T(z_{n - 1})) \Vert ), \sup_{n \ge 1}(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}), \sup_{n \ge 1}( \Vert Sx_{n - 1} \Vert + \Vert z_{n - 1} \Vert ) \} \), and ε is a real number such that \(0 < \varepsilon < r_{n}\) for all n (such ε exists by condition (iii)). So, it follows from Conditions (i)–(iii) and Lemma 2.3 that

$$\begin{aligned} \lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0. \end{aligned}$$
(20)

Next, we show that \(\lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0\). In view of (8), (9), (12), and (13), we obtain

$$\begin{aligned} \begin{aligned}[b] \Vert x_{n + 1} - p \Vert ^{2} ={}& \bigl\langle P_{C}[V_{n}] - p,x_{n + 1} - p \bigr\rangle \\ ={}& \bigl\langle P_{C}[V_{n}] - V_{n},P_{C}[V_{n}] - p \bigr\rangle + \langle V_{n} - p,x_{n + 1} - p \rangle \\ \le{}& \bigl\langle \alpha _{n}\bigl(\rho U(x_{n}) - \mu F(p) \bigr) + (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) \\ &{}- (I - \alpha _{n}\mu F) \bigl(T(p)\bigr),x_{n + 1} - p \bigr\rangle \\ ={}& \bigl\langle \alpha _{n}\rho \bigl(U(x_{n}) - U(p) \bigr),x_{n + 1} - p \bigr\rangle + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \bigl\langle (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(p)\bigr),x_{n + 1} - p \bigr\rangle \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert \Vert x_{n + 1} - p \Vert + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ (1 - \alpha _{n}\nu ) \Vert z_{n} - p \Vert \Vert x_{n + 1} - p \Vert \\ \le{}& \frac{\alpha _{n}\rho \tau }{2}\bigl( \Vert x_{n} - p \Vert ^{2} + \Vert x_{n + 1} - p \Vert ^{2}\bigr) + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{(1 - \alpha _{n}\nu )}{2}\bigl( \Vert z_{n} - p \Vert ^{2} + \Vert x_{n + 1} - p \Vert ^{2}\bigr) \\ \le{}& \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \Vert x_{n + 1} - p \Vert ^{2} + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{\alpha _{n}\rho \tau }{2} \Vert x_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )}{2} \Vert z_{n} - p \Vert ^{2} \\ \le{}& \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \Vert x_{n + 1} - p \Vert ^{2} + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{\alpha _{n}\rho \tau }{2} \Vert x_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )}{2}\bigl(\beta _{n} \Vert Sx_{n} - p \Vert ^{2} + (1 - \beta _{n}) \Vert y_{n} - p \Vert ^{2}\bigr). \end{aligned} \end{aligned}$$
(21)

From the above inequality and (12), (13), we get

$$\begin{aligned} \Vert x_{n + 1} - p \Vert ^{2} \le{}& \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} \\ &{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ \Vert x_{n} - p \Vert ^{2} + \gamma (L_{A}\gamma - 1) \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} \\ \le{}& \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} \\ &{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ \gamma (L_{A}\gamma - 1) \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} , \end{aligned}$$
(22)

which means that

$$\begin{aligned} \begin{aligned}[b]& \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ \gamma (1 - L_{A}\gamma ) \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} - \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \beta _{n} \Vert Sx_{n} - p \Vert ^{2} + \bigl( \Vert x_{n} - p \Vert + \Vert x_{n + 1} - p \Vert \bigr) \Vert x_{n + 1} - x_{n} \Vert . \end{aligned} \end{aligned}$$
(23)

Since \(\alpha _{n} \to 0\), \(\beta _{n} \to 0\) and \(\lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0\), we obtain

$$\lim_{n \to \infty } \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert = 0. $$

And since \(R_{r_{n},F_{1}}\) is firmly nonexpansive, from (8) we get

$$\begin{aligned} &\Vert u_{n} - p \Vert ^{2} \\ &\quad= \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - p \bigr\Vert ^{2} \\ &\quad= \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - R_{r_{n},F_{1}}(p) \bigr\Vert ^{2} \\ &\quad\le \bigl\langle u_{n} - p,x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr\rangle \\ &\quad= \frac{1}{2} \bigl\{ \Vert u_{n} - p \Vert ^{2} + \bigl\Vert x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr\Vert ^{2} \\ &\qquad{} - \bigl\Vert u_{n} - p - \bigl[ x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr] \bigr\Vert ^{2} \bigr\} \\ &\quad= \frac{1}{2} \bigl\{ \Vert u_{n} - p \Vert ^{2} + \bigl\Vert x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr\Vert ^{2} \\ &\qquad{}- \bigl\Vert u_{n} - x_{n} - \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} \\ &\quad= \frac{1}{2} \bigl\{ \Vert u_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} + 2\gamma \bigl\langle x_{n} - p,A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\qquad{} + \gamma ^{2} \bigl\Vert A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \\ &\qquad{}- \bigl[ \Vert u_{n} - x_{n} \Vert ^{2} - 2\gamma \bigl\langle u_{n} - x_{n},A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle + \gamma ^{2} \bigl\Vert A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2}\bigr] \bigr\} \\ &\quad= \frac{1}{2} \bigl\{ \Vert u_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} + 2\gamma \bigl\langle u_{n} - p,A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle - \Vert u_{n} - x_{n} \Vert ^{2} \bigr\} , \end{aligned}$$
(24)

which implies that

$$\begin{aligned} \Vert u_{n} - p \Vert ^{2} \le \Vert x_{n} - p \Vert ^{2} - \Vert u_{n} - x_{n} \Vert ^{2} + 2\gamma \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert . \end{aligned}$$
(25)

So, from (21) and (25) we have

$$\begin{aligned} & \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \Vert x_{n + 1} - p \Vert ^{2} + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{\alpha _{n}\rho \tau }{2} \Vert x_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )}{2}\bigl(\beta _{n} \Vert Sx_{n} - p \Vert ^{2} + (1 - \beta _{n}) \Vert u_{n} - p \Vert ^{2}\bigr) \\ &\quad\le \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \Vert x_{n + 1} - p \Vert ^{2} + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle + \frac{\alpha _{n}\rho \tau }{2} \Vert x_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )}{2} \bigl\{ \beta _{n} \Vert Sx_{n} - p \Vert ^{2} + (1 - \beta _{n}) \bigl( \Vert x_{n} - p \Vert ^{2} - \Vert u_{n} - x_{n} \Vert ^{2} \\ &\qquad{}+ 2\gamma \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \bigr) \bigr\} , \end{aligned}$$
(26)

which implies that

$$\begin{aligned} \begin{aligned}[b]& \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ \Vert x_{n} - p \Vert ^{2} - \Vert u_{n} - x_{n} \Vert ^{2} \\ &\qquad{} + 2\gamma \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \bigr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ - \Vert u_{n} - x_{n} \Vert ^{2} + 2\gamma \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \bigr\} . \end{aligned} \end{aligned}$$
(27)

Hence

$$\begin{aligned} &\frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert u_{n} - x_{n} \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} - \Vert x_{n + 1} - p \Vert ^{2} \\ &\qquad{}+ \frac{2(1 - \alpha _{n}\nu )(1 - \beta _{n})\gamma }{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \bigl( \Vert x_{n} - p \Vert + \Vert x_{n + 1} - p \Vert \bigr) \Vert x_{n + 1} - x_{n} \Vert . \end{aligned}$$
(28)

Since \(\lim_{n \to \infty } \alpha _{n} = 0\), \(\lim_{n \to \infty } \beta _{n} = 0\), \(\lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0\), and \(\lim_{n \to \infty } \Vert (R_{r_{n},F_{2}} - I)Ax_{n} \Vert = 0\), we have

$$\begin{aligned} \lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0. \end{aligned}$$

Then, by Lemma 2.5 and Lemma 2.6, we obtain

$$\begin{aligned} \begin{aligned}[b]& \bigl\Vert T^{N}u_{n} - T^{N}p \bigr\Vert ^{2}\\ &\quad = \bigl\Vert P_{C_{1}}(I - \lambda _{N}B_{N})T^{N - 1}u_{n} - P_{C_{1}}(I - \lambda _{N}B_{N})T^{N - 1}p \bigr\Vert ^{2} \\ &\quad\le \bigl\Vert (I - \lambda _{N}B_{N})T^{N - 1}u_{n} - (I - \lambda _{N}B_{N})T^{N - 1}p \bigr\Vert ^{2} \\ &\quad\le \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} + \lambda _{N}(\lambda _{N} - 2\xi _{N}) \bigl\Vert B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr\Vert ^{2} \\ &\quad\le \Vert u_{n} - p \Vert ^{2} + \sum _{i = 1}^{N} \lambda _{i} (\lambda _{i} - 2\xi _{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2} \\ &\quad\le \Vert x_{n} - p \Vert ^{2} + \sum _{i = 1}^{N} \lambda _{i} (\lambda _{i} - 2\xi {}_{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(29)

From (21) and (29), we obtain

$$\begin{aligned} \begin{aligned}[b]& \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \Vert x_{n} - p \Vert ^{2} + \sum_{i = 1}^{N} \lambda _{i} (\lambda _{i} - 2\xi _{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2} \Biggr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} \\ &\qquad{}+ \Vert x_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \sum_{i = 1}^{N} \lambda _{i} (\lambda _{i} - 2\xi {}_{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2} \Biggr\} , \end{aligned} \end{aligned}$$
(30)

which implies that

$$\begin{aligned} & \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \sum _{i = 1}^{N} \lambda _{i}(2\xi _{i} - \lambda _{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2} \Biggr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} - \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \bigl( \Vert x_{n} - p \Vert + \Vert x_{n + 1} - p \Vert \bigr) \Vert x_{n} - x_{n + 1} \Vert . \end{aligned}$$
(31)

Since \(\lim_{n \to \infty } \alpha _{n} = 0,\lim_{n \to \infty } \beta _{n} = 0\) and \(\lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0\), we have

$$\begin{aligned} \lim_{n \to \infty } \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert = 0. \end{aligned}$$

By Lemma 2.4, we obtain

$$\begin{aligned} \begin{aligned}[b] &\Vert y_{n} - p \Vert ^{2} \\ &\quad= \bigl\Vert T^{N}u_{n} - T^{N}p \bigr\Vert ^{2} \\ &\quad= \bigl\Vert P_{C}(I - \lambda _{N}B_{N})T^{N - 1}u_{n} - P_{C}(I - \lambda _{N}B_{N})T^{N - 1}p \bigr\Vert ^{2} \\ &\quad\le \bigl\langle (I - \lambda _{N}B_{N})T^{N - 1}u_{n} - (I - \lambda _{N}B_{N})T^{N - 1}p,T^{N}u_{n} - T^{N}p \bigr\rangle \\ &\quad=\frac{1}{2}\bigl( \Vert y_{n} - p \Vert ^{2} + \bigl\Vert (I - \lambda _{N}B_{N})T^{N - 1}u_{n} - (I - \lambda _{N}B_{N})T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}- \bigl\Vert (I - \lambda _{N}B_{N})T^{N - 1}u_{n} - (I - \lambda _{N}B_{N})T^{N - 1}p - \bigl(T^{N}u_{n} - T^{N}p\bigr) \bigr\Vert ^{2}\bigr) \\ &\quad\le \frac{1}{2}\bigl( \Vert y_{n} - p \Vert ^{2} + \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}- \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p - \lambda _{N} \bigl(B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr) \bigr\Vert ^{2}\bigr), \end{aligned} \end{aligned}$$
(32)

which implies

$$\begin{aligned} & \Vert y_{n} - p \Vert ^{2} \\ &\quad\le \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}- \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p - \lambda _{N} \bigl(B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr) \bigr\Vert ^{2} \\ &\quad= \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} - \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}- \lambda _{N}^{2} \bigl\Vert B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}+ 2\lambda _{N} \bigl\langle T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p,B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr\rangle \\ &\quad\le \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} - \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}+ 2\lambda _{N} \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p \bigr\Vert \bigl\Vert B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr\Vert . \end{aligned}$$
(33)

By induction and (12), we have

$$\begin{aligned} \begin{aligned}[b] \Vert y_{n} - p \Vert ^{2} \le{}& \Vert x_{n} - p \Vert ^{2} - \sum _{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert ^{2} \\ &{}+ \sum_{i = 1}^{N} 2\lambda _{i} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert . \end{aligned} \end{aligned}$$
(34)

It follows from (21) and (34) that

$$\begin{aligned} \begin{aligned}[b] &\Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \Vert x_{n} - p \Vert ^{2} \\ &\qquad{} - \sum_{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert ^{2} \\ &\qquad{}+ \sum _{i = 1}^{N} 2\lambda _{i} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert \Biggr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ - \sum_{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert ^{2} \\ &\qquad{} + \sum_{i = 1}^{N} 2\lambda _{i} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert \Biggr\} , \end{aligned} \end{aligned}$$
(35)

which implies

$$\begin{aligned} & \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \sum _{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert ^{2} \Biggr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} - \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \bigl( \Vert x_{n} - p \Vert + \Vert x_{n + 1} - p \Vert \bigr) \Vert x_{n} - x_{n + 1} \Vert \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \sum_{i = 1}^{N} 2\lambda _{i} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert \\ &\qquad{}\times\bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert \Biggr\} . \end{aligned}$$
(36)

Since \(\lim_{n \to \infty } \alpha _{n} = 0\), \(\lim_{n \to \infty } \beta _{n} = 0\) and \(\lim_{n \to \infty } \Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \Vert ^{2} = 0\), we have

$$\begin{aligned} \lim_{n \to \infty } \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert = 0. \end{aligned}$$
(37)

From (37), and noting that \(\sum_{i = 1}^{N} (T^{i - 1}p - T^{i}p) = p - G(p) = 0\), we obtain

$$\begin{aligned} \Vert u_{n} - y_{n} \Vert = \bigl\Vert T^{0}u_{n} - T^{N}u_{n} \bigr\Vert \le \sum_{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert , \end{aligned}$$
(38)

which means \(\lim_{n \to \infty } \Vert u_{n} - y_{n} \Vert = 0\). Since \(\lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0\) and \(\lim_{n \to \infty } \Vert u_{n} - y_{n} \Vert = 0\), we also have \(\lim_{n \to \infty } \Vert x_{n} - y_{n} \Vert = 0\). Since \(T(x_{n}) \in C_{1}\), so that \(T(x_{n}) = P_{C_{1}}[T(x_{n})]\), we have

$$\begin{aligned} \bigl\Vert x_{n} - T(x_{n}) \bigr\Vert \le {}&\Vert x_{n} - x_{n + 1} \Vert + \bigl\Vert x_{n + 1} - T(x_{n}) \bigr\Vert \\ ={}& \Vert x_{n} - x_{n + 1} \Vert + \bigl\Vert P_{C_{1}}[V_{n}] - P_{C_{1}}\bigl[T(x_{n})\bigr] \bigr\Vert \\ \le {}&\Vert x_{n} - x_{n + 1} \Vert + \bigl\Vert \alpha _{n}\bigl(\rho U(x_{n}) - \mu F\bigl(T(z_{n})\bigr)\bigr) + T(z_{n}) - T(x_{n}) \bigr\Vert \\ \le{}& \Vert x_{n} - x_{n + 1} \Vert + \alpha _{n} \bigl\Vert \rho U(x_{n}) - \mu F\bigl(T(z_{n})\bigr) \bigr\Vert + \Vert z_{n} - x_{n} \Vert \\ ={}& \Vert x_{n} - x_{n + 1} \Vert + \alpha _{n} \bigl\Vert \rho U(x_{n}) - \mu F\bigl(T(z_{n})\bigr) \bigr\Vert + \bigl\Vert \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n} - x_{n} \bigr\Vert \\ \le{}& \Vert x_{n} - x_{n + 1} \Vert + \alpha _{n} \bigl\Vert \rho U(x_{n}) - \mu F\bigl(T(z_{n})\bigr) \bigr\Vert \\ &{} + \beta _{n} \Vert Sx_{n} - x_{n} \Vert + (1 - \beta _{n}) \Vert y_{n} - x_{n} \Vert . \end{aligned}$$

Noting that \(\lim_{n \to \infty } \alpha _{n} = 0\), \(\lim_{n \to \infty } \beta _{n} = 0\), \(\lim_{n \to \infty } \Vert x_{n} - y_{n} \Vert = 0\), and \(\lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0\), we have \(\lim_{n \to \infty } \Vert x_{n} - T(x_{n}) \Vert = 0\).

Step 3. Since \(\{ x_{n} \} \) is bounded, there exists a subsequence \(\{ x_{n_{i}} \} \) of \(\{ x_{n} \} \) converging weakly to some \(z \in C_{1}\). We show that \(z \in \mathrm{Fix}(T)\). Assume that \(z \notin \mathrm{Fix}(T)\). Since \(x_{n_{i}}\) converges weakly to z and \(Tz \ne z\), by Lemma 2.9 and \(\lim_{n \to \infty } \Vert x_{n} - T(x_{n}) \Vert = 0\), we have

$$\begin{aligned} &\mathop{\lim \inf}_{i \to \infty } \Vert x_{n_{i}} - z \Vert \\ &\quad < \mathop{\lim \inf}_{i \to \infty } \Vert x_{n_{i}} - Tz \Vert \le \mathop{\lim \inf}_{i \to \infty } \bigl( \Vert x_{n_{i}} - Tx_{n_{i}} \Vert + \Vert Tx_{n_{i}} - Tz \Vert \bigr) \le \mathop{\lim \inf}_{i \to \infty } \Vert x_{n_{i}} - z \Vert , \end{aligned}$$

which is a contradiction. Thus, we obtain \(z \in \mathrm{Fix}(T)\). To prove the strong convergence of the sequence \(\{ x_{n} \} \), we show that it converges strongly to w, the unique solution of the variational inequality

$$\begin{aligned} \bigl\langle \rho U(w) - \mu F(w),x - w \bigr\rangle \le 0,\quad \forall x \in \Theta. \end{aligned}$$

In fact, noting that \(u_{n} = R_{r_{n},F_{1}}(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n})\), by the definition of \(R_{r_{n},F_{1}}\) (Lemma 2.1) we have

$$\begin{aligned} F_{1}(u_{n},y) + \frac{1}{r_{n}} \langle y - u_{n},u_{n} - x_{n} \rangle - \frac{1}{r_{n}} \bigl\langle y - u_{n},\gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \ge 0,\quad\forall y \in C_{1}. \end{aligned}$$

From the monotonicity of \(F_{1}\), we have

$$\begin{aligned} - \frac{1}{r_{n}} \bigl\langle y - u_{n},\gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle + \frac{1}{r_{n}} \langle y - u_{n},u_{n} - x_{n} \rangle \ge F_{1}(y,u_{n}),\quad \forall y \in C_{1}, \end{aligned}$$

and hence, along the subsequence \(\{ n_{i} \} \),

$$\begin{aligned} - \frac{1}{r_{n_{i}}} \bigl\langle y - u_{n_{i}},\gamma A^{*}(R_{r_{n_{i}},F_{2}} - I)Ax_{n_{i}} \bigr\rangle + \biggl\langle y - u_{n_{i}},\frac{u_{n_{i}} - x_{n_{i}}}{r_{n_{i}}} \biggr\rangle \ge F_{1}(y,u_{n_{i}}),\quad\forall y \in C_{1}. \end{aligned}$$

Since \(\Vert u_{n} - x_{n} \Vert \to 0\) and \(\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \Vert \to 0\), the subsequence \(\{ u_{n_{i}} \} \) also converges weakly to z. Letting \(i \to \infty \) in the last inequality and using (A4) (convexity and lower semicontinuity imply the weak lower semicontinuity of \(F_{1}(y, \cdot )\)), we obtain \(F_{1}(y,z) \le 0\), \(\forall y \in C_{1}\). Let \(y_{t} = ty + (1 - t)z\), \(t \in (0,1]\); it follows from \(y \in C_{1}\), \(z \in C_{1}\), and the convexity of \(C_{1}\) that \(y_{t} \in C_{1}\), and hence \(F_{1}(y_{t},z) \le 0\). So, from (A1), (A3), and (A4), we have

$$\begin{aligned} 0 = F_{1}(y_{t},y_{t}) \le tF_{1}(y_{t},y) + (1 - t)F_{1}(y_{t},z) \le F_{1}(y_{t},y). \end{aligned}$$

Hence \(F_{1}(y_{t},y) \ge 0\) for all \(t \in (0,1]\). Letting \(t \to 0^{+}\) and using (A3), we obtain \(F_{1}(z,y) \ge 0\) for all \(y \in C_{1}\), that is, \(z \in EP(F_{1})\).

Next, we show that \(Az \in EP(F_{2})\). Since \(\Vert u_{n} - x_{n} \Vert \to 0\), there exists a subsequence \(\{ x_{n_{k}} \} \) of \(\{ x_{n} \} \) such that \(\{ x_{n_{k}} \} \) converges weakly to z, and since A is a bounded linear operator, \(\{ Ax_{n_{k}} \} \) converges weakly to Az. Setting \(\varpi _{n_{k}} = Ax_{n_{k}} - R_{r_{n_{k}},F_{2}}Ax_{n_{k}}\), it follows from \(\lim_{n \to \infty } \Vert (R_{r_{n},F_{2}} - I)Ax_{n} \Vert = 0\) that \(\lim_{k \to \infty } \varpi _{n_{k}} = 0\). By Lemma 2.1, we have

$$F_{2}(Ax_{n_{k}} - \varpi _{n_{k}},y) + \frac{1}{r_{n_{k}}} \bigl\langle y - (Ax_{n_{k}} - \varpi _{n_{k}}),(Ax_{n_{k}} - \varpi _{n_{k}}) - Ax_{n_{k}} \bigr\rangle \ge 0,\quad \forall y \in C_{2}. $$

Since \(F_{2}\) is upper semicontinuous in the first argument, taking the limit superior in the above inequality as \(k \to \infty \), we have \(F_{2}(Az,y) \ge 0\) for all \(y \in C_{2}\), which means that \(Az \in EP(F_{2})\); hence \(z \in \Gamma \). Next, we claim that \(z \in \mathrm{Fix}(G)\). From Lemma 2.6, we know that \(G = T^{N}\) is nonexpansive, and

$$\begin{aligned} \Vert y_{n} - Gy_{n} \Vert = \bigl\Vert T^{N}u_{n} - T^{N}y_{n} \bigr\Vert \le \Vert u_{n} - y_{n} \Vert . \end{aligned}$$

It follows from \(\lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0\) and \(\lim_{n \to \infty } \Vert x_{n} - y_{n} \Vert = 0\) that \(\lim_{n \to \infty } \Vert y_{n} - Gy_{n} \Vert = 0\). Furthermore, we get

$$\begin{aligned} \Vert x_{n} - Gx_{n} \Vert &\le \Vert x_{n} - y_{n} \Vert + \Vert y_{n} - Gy_{n} \Vert + \Vert Gy_{n} - Gx_{n} \Vert \\ &\le 2 \Vert x_{n} - y_{n} \Vert + \Vert y_{n} - Gy_{n} \Vert , \end{aligned}$$

which implies \(\lim_{n \to \infty } \Vert x_{n} - Gx_{n} \Vert = 0\). Then, by Lemma 2.10, we obtain \(z \in \mathrm{Fix}(G)\). Thus, we have \(z \in \Theta \). Observe that the constants satisfy \(0 \le \rho \tau < \nu \) and \(k \ge \eta \); hence, by Lemma 2.7, the operator \(\mu F - \rho U\) is \((\mu \eta - \rho \tau )\)-strongly monotone. Therefore the variational inequality above has a unique solution, which we denote by \(w \in \Theta \).
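For completeness, the strong monotonicity used here can be checked directly from the assumptions that F is η-strongly monotone and U is τ-Lipschitzian (this is presumably the content of Lemma 2.7): for all \(x,y \in C_{1}\),

$$\begin{aligned} \bigl\langle (\mu F - \rho U)x - (\mu F - \rho U)y,x - y \bigr\rangle &= \mu \langle Fx - Fy,x - y \rangle - \rho \langle Ux - Uy,x - y \rangle \\ &\ge \mu \eta \Vert x - y \Vert ^{2} - \rho \tau \Vert x - y \Vert ^{2} = (\mu \eta - \rho \tau ) \Vert x - y \Vert ^{2}. \end{aligned}$$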

Last, we show that \(x_{n} \to w\). Note that

$$\begin{aligned} &\mathop{\lim \sup}_{n \to \infty } \bigl\langle \rho U(w) - \mu F(w),x_{n} - w \bigr\rangle \\ &\quad = \mathop{\lim \sup}_{i \to \infty } \bigl\langle \rho U(w) - \mu F(w),x_{n_{i}} - w \bigr\rangle = \bigl\langle \rho U(w) - \mu F(w),z - w \bigr\rangle \le 0, \end{aligned}$$

and

$$\begin{aligned} &\Vert x_{n + 1} - w \Vert ^{2}\\ &\quad = \bigl\langle P_{C_{1}}[V_{n}] - w,x_{n + 1} - w \bigr\rangle \\ &\quad= \bigl\langle P_{C_{1}}[V_{n}] - V_{n},P_{C_{1}}[V_{n}] - w \bigr\rangle + \langle V_{n} - w,x_{n + 1} - w \rangle \\ &\quad\le \bigl\langle \alpha _{n}\bigl(\rho U(x_{n}) - \mu F(w) \bigr) + (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(w)\bigr),x_{n + 1} - w \bigr\rangle \\ &\quad= \bigl\langle \alpha _{n}\rho \bigl(U(x_{n}) - U(w) \bigr),x_{n + 1} - w \bigr\rangle + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ \bigl\langle (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(w)\bigr),x_{n + 1} - w \bigr\rangle \\ &\quad\le \alpha _{n}\rho \tau \Vert x_{n} - w \Vert \Vert x_{n + 1} - w \Vert + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu ) \Vert z_{n} - w \Vert \Vert x_{n + 1} - w \Vert \\ &\quad\le \alpha _{n}\rho \tau \Vert x_{n} - w \Vert \Vert x_{n + 1} - w \Vert + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu ) \bigl\{ \beta _{n} \Vert Sx_{n} - Sw \Vert + \beta _{n} \Vert Sw - w \Vert + (1 - \beta _{n}) \Vert y_{n} - w \Vert \bigr\} \Vert x_{n + 1} - w \Vert \\ &\quad\le \alpha _{n}\rho \tau \Vert x_{n} - w \Vert \Vert x_{n + 1} - w \Vert + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu ) \bigl\{ \beta _{n} \Vert x_{n} - w \Vert + \beta _{n} \Vert Sw - w \Vert + (1 - \beta _{n}) \Vert x_{n} - w \Vert \bigr\} \Vert x_{n + 1} - w \Vert \\ &\quad= \bigl( 1 - \alpha _{n}(\nu - \rho \tau ) \bigr) \Vert x_{n} - w \Vert \Vert x_{n + 1} - w \Vert + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu )\beta _{n} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \\ &\quad\le \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \bigl( \Vert x_{n} - w \Vert ^{2} + \Vert x_{n + 1} - w \Vert ^{2} \bigr) + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu )\beta _{n} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert , \end{aligned}$$

which implies that

$$\begin{aligned} \Vert x_{n + 1} - w \Vert ^{2} \le{}& \frac{1 - \alpha _{n}(\nu - \rho \tau )}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - w \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &{}+ \frac{2 ( 1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \\ \le{}& \bigl( 1 - \alpha _{n}(\nu - \rho \tau ) \bigr) \Vert x_{n} - w \Vert ^{2} \\ &{}+ \frac{2\alpha _{n} ( \nu - \rho \tau )}{1 + \alpha _{n}(\nu - \rho \tau )} \biggl\{ \frac{1}{\nu - \rho \tau } \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &{} + \frac{ ( 1 - \alpha _{n}\nu )\beta _{n}}{\alpha _{n}(\nu - \rho \tau )} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \biggr\} . \end{aligned}$$

Let \(\sigma _{n} = \Vert x_{n} - w \Vert ^{2}\), \(\phi _{n} = \alpha _{n}(\nu - \rho \tau )\), and

$$\begin{aligned} \varphi _{n} = {}&\frac{2\alpha _{n} ( \nu - \rho \tau )}{1 + \alpha _{n}(\nu - \rho \tau )} \biggl\{ \frac{1}{\nu - \rho \tau } \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &{} + \frac{ ( 1 - \alpha _{n}\nu )\beta _{n}}{\alpha _{n}(\nu - \rho \tau )} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \biggr\} . \end{aligned}$$

Then the above inequality turns into the following:

$$\begin{aligned} \sigma _{n + 1} \le (1 - \phi _{n})\sigma _{n} + \varphi _{n}. \end{aligned}$$
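The convergence will follow from a standard Xu-type recursion lemma, presumably the form of Lemma 2.3: if \(\{ \sigma _{n} \} \) is a sequence of nonnegative real numbers satisfying \(\sigma _{n + 1} \le (1 - \phi _{n})\sigma _{n} + \varphi _{n}\) with \(\{ \phi _{n} \} \subset (0,1)\), \(\sum_{n = 0}^{\infty } \phi _{n} = \infty \), and \(\mathop{\lim \sup}_{n \to \infty } \varphi _{n}/\phi _{n} \le 0\), then \(\sigma _{n} \to 0\).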

From Conditions (i) and (ii) of Theorem 3.1, we have

$$\begin{aligned} &\phi _{n} \to 0\ (n \to \infty ),\qquad \sum_{n = 0}^{\infty } \phi _{n} = \infty,\quad\text{and} \\ &\mathop{\lim \sup}_{n \to \infty } \frac{\varphi _{n}}{\phi _{n}} = \mathop{\lim \sup} _{n \to \infty } \frac{2}{1 + \alpha _{n}(\nu - \rho \tau )} \biggl\{ \frac{1}{\nu - \rho \tau } \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\phantom{\mathop{\lim \sup}_{n \to \infty } \frac{\varphi _{n}}{\phi _{n}} =}{}+ \frac{ ( 1 - \alpha _{n}\nu )\beta _{n}}{\alpha _{n}(\nu - \rho \tau )} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \biggr\} \le 0. \end{aligned}$$

Then all conditions in Lemma 2.3 are satisfied; thus \(\sigma _{n} \to 0\ (n \to \infty )\), that is, \(x_{n} \to w\ (n \to \infty )\). This completes the proof. □

Corollary 3.1

For \(i \in \{ 1,2 \} \), let \(H_{i}\) be a real Hilbert space, let \(C_{i}\) be a nonempty closed convex subset of \(H_{i}\), and let \(F_{i}:C_{i} \times C_{i} \to R\) be a bifunction. Let \(A:H_{1} \to H_{2}\) be a bounded linear operator with adjoint operator \(A^{*}\). Let \(B_{1}\) be a \(\xi _{1}\)-inverse-strongly monotone mapping. Let \(F:C_{1} \to C_{1}\) be a k-Lipschitzian and η-strongly monotone mapping, and let \(U:C_{1} \to C_{1}\) be a τ-Lipschitzian mapping. Let \(S,T:C_{1} \to C_{1}\) be two nonexpansive mappings such that \(\Theta = \Gamma \cap \mathrm{Fix}(G) \cap \mathrm{Fix}(T) \ne \emptyset \). For arbitrarily given \(x_{0} \in C_{1}\), let the iterative sequences \(\{ u_{n} \} \), \(\{ y_{n} \} \), \(\{ z_{n} \} \), and \(\{ x_{n} \} \) be generated by

$$\begin{aligned} \textstyle\begin{cases} u_{n} = R_{r_{n},F_{1}}(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}), \\ y_{n} = P_{C_{1}}(I - \lambda _{1}B_{1})u_{n}, \\ z_{n} = \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n}, \\ x_{n + 1} = P_{C_{1}}[\alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F)(T(z_{n}))], \end{cases}\displaystyle \end{aligned}$$
(39)

where \(\{ r_{n} \} \subset (0,\infty )\), \(\gamma \in (0,1 / L_{A})\), and \(L_{A}\) is the spectral radius of the operator \(A^{*}A\). Suppose that the parameters satisfy \(0 < \mu < \frac{2\eta }{k^{2}}\), \(k \ge \eta \), and \(0 \le \rho \tau < \nu \), where \(\nu = 1 - \sqrt{1 - \mu (2\eta - \mu k^{2})}\), and that \(\{ \alpha _{n} \} \) and \(\{ \beta _{n} \} \) are sequences in \((0,1)\) satisfying the following conditions:

  (i) \(\lim_{n \to \infty } \alpha _{n} = 0\), \(\sum_{n = 0}^{\infty } \alpha _{n} = \infty \), and \(\sum_{n = 1}^{\infty } \vert \alpha _{n - 1} - \alpha _{n} \vert < \infty \);

  (ii) \(\mathop{\lim \sup}_{n \to \infty } \frac{\beta _{n}}{\alpha _{n}} = 0\), \(\beta _{n} \le \alpha _{n}\ ( n \ge 1 )\), and \(\sum_{n = 1}^{\infty } \vert \beta _{n - 1} - \beta _{n} \vert < \infty \);

  (iii) \(\mathop{\lim \inf}_{n \to \infty } r_{n} > 0\) and \(\sum_{n = 1}^{\infty } \vert r_{n - 1} - r_{n} \vert < \infty \).

Then the sequence \(\{ x_{n} \} \) generated by (39) converges strongly to \(w \in \Theta \).

Proof

Putting \(N = 1\) in Theorem 3.1, we obtain the desired conclusion directly. □
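As an illustration, the following is a minimal numerical sketch of scheme (39) on a toy instance; every concrete choice below (the sets, the mappings \(B_{1}\), F, U, S, T, and all parameter values) is an assumption made only for this example and is not taken from the paper. We take \(H_{1} = H_{2} = R^{2}\), \(C_{1} = C_{2}\) the closed unit ball, \(F_{1} = F_{2} \equiv 0\) (so each resolvent \(R_{r,F}\) reduces to a metric projection), \(A = I\), \(B_{1}(x) = x - b\), \(F = I\), \(U = \frac{1}{2}I\), \(S \equiv 0\), and \(T = I\); then Θ reduces to the single point \(P_{C_{1}}(b)\), which the iterates approach.

import numpy as np

def proj_ball(x, radius=1.0):
    # Metric projection onto the closed ball of the given radius centered at the origin.
    nx = np.linalg.norm(x)
    return x if nx <= radius else radius * x / nx

b = np.array([2.0, 0.0])       # data of B1(x) = x - b; here P_{C1}(b) = (1, 0)
gamma, lam1 = 0.5, 1.0         # gamma in (0, 1/L_A) with L_A = 1; lam1 in (0, 2*xi_1)
mu, rho = 1.0, 0.5             # 0 < mu < 2*eta/k^2 = 2 and rho*tau = 0.25 < nu = 1

x = np.array([-0.8, 0.6])      # starting point x_0 in C1
for n in range(1, 2001):
    # alpha_n and beta_n are chosen to satisfy conditions (i)-(ii)
    alpha, beta = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    # u_n = R_{r_n,F1}(x_n + gamma*A^*(R_{r_n,F2} - I)Ax_n); both resolvents are projections here
    u = proj_ball(x + gamma * (proj_ball(x) - x))
    # y_n = P_{C1}(I - lam1*B1)u_n with B1(u) = u - b
    y = proj_ball(u - lam1 * (u - b))
    # z_n = beta_n*S(x_n) + (1 - beta_n)*y_n with S = 0
    z = (1 - beta) * y
    # x_{n+1} = P_{C1}[alpha_n*rho*U(x_n) + (I - alpha_n*mu*F)(T(z_n))] with U = x/2 and F = T = I
    x = proj_ball(alpha * rho * (0.5 * x) + (1 - alpha * mu) * z)

print(x)   # close to w = P_{C1}(b) = (1, 0)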

4 Conclusion

In this paper, we considered a hierarchical fixed point problem (2), a split equilibrium problem (4)–(5), and a system of variational inequalities (7) in Hilbert spaces. We presented an iterative algorithm for finding a common element of the solution sets of these three kinds of problems and proved its strong convergence. The results presented here extend and improve the corresponding results in the literature.

Availability of data and materials

Not applicable.

References

  1. Anh, P.N., Anh, T.T.H., Hien, N.D.: Modified basic projection methods for a class of equilibrium problems. Numer. Algorithms 79, 139–152 (2018)

  2. Bigi, G., Castellani, M., Pappalardo, M., Passacantando, M.: Nonlinear Programming Techniques for Equilibria. Springer, Switzerland (2019)

  3. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)

  4. Browder, F.E.: Nonlinear Operators and Nonlinear Equations of Evolution in Banach Spaces. Am. Math. Soc., Washington (1976)

  5. Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)

  6. Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)

  7. Cai, G., Bu, S.: Hybrid algorithm for generalized mixed equilibrium problems and variational inequality problems and fixed point problems. Comput. Math. Appl. 62, 4772–4782 (2011)

  8. Ceng, L.C., Ansari, Q.H., Yao, J.C.: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286–5302 (2011)

  9. Ceng, L.C., Petrusel, A., Yao, J.C., Yao, Y.: Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 19, 487–502 (2018)

  10. Ceng, L.C., Petrusel, A., Yao, J.C., Yao, Y.: Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 20, 113–133 (2019)

  11. Ceng, L.C., Wang, C.Y., Yao, J.C.: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 67, 375–390 (2008)

  12. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: Unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)

  13. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projection in product space. Numer. Algorithms 8, 221–239 (1994)

  14. Combettes, P.L.: The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 95, 155–453 (1996)

  15. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming using proximal-like algorithms. Math. Program. 78, 117–136 (1997)

  16. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)

  17. Dadashi, V., Postolache, M.: Hybrid proximal point algorithm and applications to equilibrium problems and convex programming. J. Optim. Theory Appl. 174(2), 518–529 (2017)

  18. Dadashi, V., Postolache, M.: Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 9(1), 89–99 (2020)

  19. Jitsupa, D., Juan, M.M., Kanokwan, S., Poom, K.: Convergence analysis of hybrid projection with Cesàro mean method for the split equilibrium and general system of finite variational inequalities. J. Comput. Appl. Math. 318, 658–673 (2017)

  20. Konnov, L.V.: Equilibrium Models and Variational Inequalities. Elsevier, Amsterdam (2007)

  21. Kazmi, K.R., Rizvi, S.H.: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 21, 44–51 (2013)

  22. Moudafi, A.: Krasnoselski–Mann iteration for hierarchical fixed point problems. Inverse Probl. 23, 1635–1640 (2007)

  23. Moudafi, A., Mainge, P.E.: Towards viscosity approximations of hierarchical fixed point problems. Fixed Point Theory Appl. 2006, Article ID 95453 (2006)

  24. Opial, Z.: Weak convergence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)

  25. Sahu, D.R., Pitea, A., Verma, M.: A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems. Numer. Algorithms 83(2), 421–449 (2020)

  26. Suzuki, T.: Moudafi’s viscosity approximations with Meir–Keeler contractions. J. Math. Anal. Appl. 325, 342–352 (2007)

  27. Thakur, B.S., Thakur, D., Postolache, M.: A new iterative scheme for numerical reckoning fixed points of Suzuki’s generalized nonexpansive mappings. Appl. Math. Comput. 275, 147–155 (2016)

  28. Thakur, B.S., Thakur, D., Postolache, M.: A new iteration scheme for approximating fixed points of nonexpansive mappings. Filomat 30(10), 2711–2720 (2016)

  29. Usurelu, G.I., Bejenaru, A., Postolache, M.: Operators with property (E) as concerns numerical analysis and visualization. Numer. Funct. Anal. Optim. 41(11), 1398–1411 (2020)

  30. Usurelu, G.I., Postolache, M.: Convergence analysis for a three-step Thakur iteration for Suzuki-type nonexpansive mappings with visualization. Symmetry 11(12), Article ID 1441 (2019)

  31. Xu, H.K.: Viscosity approximation method for nonexpansive mappings. J. Math. Anal. Appl. 298(1), 279–291 (2004)

  32. Xu, H.K.: Viscosity approximation method for nonexpansive mappings. J. Math. Anal. Appl. 298, 279–291 (2004)

  33. Yao, Y., Agarwal, R.P., Postolache, M., Liu, Y.C.: Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, Article ID 183 (2014)

  34. Yao, Y., Li, H., Postolache, M.: Iterative algorithms for split equilibrium problems of monotone operators and fixed point problems of pseudo-contractions. Optimization (2020). https://doi.org/10.1080/02331934.2020.1857757

  35. Yao, Y., Liou, Y.C., Yao, J.C.: Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 10, 843–854 (2017)

  36. Yao, Y., Postolache, M., Yao, J.C.: Iterative algorithms for generalized variational inequalities. U.P.B. Sci. Bull., Ser. A 81, 3–16 (2019)

  37. Yao, Y., Postolache, M., Yao, J.C.: An iterative algorithm for solving generalized variational inequalities and fixed points problems. Mathematics 7, 61 (2019). https://doi.org/10.3390/math7010061

  38. Yao, Y., Postolache, M., Yao, J.C.: Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. U.P.B. Sci. Bull., Ser. A 82(1), 3–12 (2020)

  39. Yao, Y.H., Cho, Y.J., Liou, Y.C.: Iterative algorithms for hierarchical fixed points problems and variational inequalities. Math. Comput. Model. 52(9–10), 1697–1705 (2010)

Acknowledgements

The authors would like to thank the reviewers for their valuable comments, which have helped to improve the quality of this paper.

Funding

This research was supported by the National Natural Science Foundation of China, Liaoning Provincial Department of Education, and Liaoning Natural Fund Guidance Plan under project No. 11371070, No. LJ2019011, No. 2019-ZD-0502.

Author information

Contributions

The authors carried out the results and read and approved the current version of the manuscript.

Corresponding author

Correspondence to Yali Zhao.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Zhao, Y., Liu, X. & Sun, R. Iterative algorithms of common solutions for a hierarchical fixed point problem, a system of variational inequalities, and a split equilibrium problem in Hilbert spaces. J Inequal Appl 2021, 111 (2021). https://doi.org/10.1186/s13660-021-02645-4
