Generalized extragradient iterative method for systems of variational inequalities

Abstract

The purpose of this article is to investigate the problem of finding a common element of the solution sets of two different systems of variational inequalities and the set of fixed points of a strict pseudocontraction mapping defined in the setting of a real Hilbert space. Based on the well-known extragradient method, viscosity approximation method and Mann iterative method, we propose and analyze a generalized extragradient iterative method for computing a common element. Under very mild assumptions, we obtain a strong convergence theorem for the three sequences generated by the proposed method. Our proposed method is quite general and flexible and includes the iterative methods considered in the earlier and recent literature as special cases. Our result represents the modification, supplement, extension and improvement of some corresponding results in the references.

Mathematics Subject Classification (2000): Primary 49J40; Secondary 65K05; 47H09.

1. Introduction

Let H be a real Hilbert space with inner product 〈·, ·〉 and norm ║ · ║. Let C be a nonempty closed convex subset of H and S : C → C be a self-mapping on C. We denote by Fix(S) the set of fixed points of S and by P C the metric projection of H onto C. Moreover, we also denote by R the set of all real numbers. For a given nonlinear mapping A : C → H, consider the following classical variational inequality problem of finding x* ∈ C such that

$〈A{x}^{*},x-{x}^{*}〉\ge 0,\phantom{\rule{1em}{0ex}}\forall x\phantom{\rule{2.77695pt}{0ex}}\in \phantom{\rule{2.77695pt}{0ex}}C.$
(1.1)

The set of solutions of problem (1.1) is denoted by VI(A, C). It is now well known that variational inequalities are equivalent to fixed-point problems, an observation whose origin can be traced back to Lions and Stampacchia [1]. This alternative formulation has been used to suggest and analyze the Picard successive iterative method for solving variational inequalities under the conditions that the involved operator is strongly monotone and Lipschitz continuous. Related to the variational inequalities is the problem of finding fixed points of nonexpansive mappings or strict pseudocontractions, which is of current interest in functional analysis. Several authors have considered approaches to solving fixed point problems, optimization problems, variational inequality problems and equilibrium problems; see, for example, [2–32] and the references therein.

For finding an element of Fix(S) ∩ VI(A, C) under the assumption that a set C ⊂ H is nonempty, closed and convex, a mapping S : C → C is nonexpansive and a mapping A : C → H is α-inverse strongly monotone, Takahashi and Toyoda [20] introduced the following iterative algorithm:

$\left\{\begin{array}{c}{x}_{0}=x\in C\phantom{\rule{2.77695pt}{0ex}}\text{chosen}\phantom{\rule{2.77695pt}{0ex}}\text{arbitrarily,}\\ {x}_{n+1}={\alpha }_{n}{x}_{n}+\left(1-{\alpha }_{n}\right)S{P}_{C}\left({x}_{n}-{\lambda }_{n}A{x}_{n}\right),\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{1em}{0ex}}\forall n\ge 0,\end{array}\right\$

where {α n } is a sequence in (0, 1), and {λ n } is a sequence in (0, 2α). It was proven in [20] that if $\text{Fix}\left(S\right)\cap \phantom{\rule{2.77695pt}{0ex}}VI\left(A,C\right)\ne \varnothing$, then the sequence {x n } converges weakly to some z ∈ Fix(S) ∩ VI(A, C). Recently, Nadezhkina and Takahashi [19] and Zeng and Yao [32] proposed so-called extragradient methods, motivated by the idea of Korpelevich [33], for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality. Further, these iterative methods were extended in [27] to develop a general iterative method for finding an element of Fix(S) ∩ VI(A, C).
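For intuition, the Takahashi–Toyoda iteration above can be run numerically. The following sketch is an illustrative toy instance of ours, not taken from [20]: C = [0, 1]² with componentwise projection, the 1-inverse strongly monotone affine mapping A(x) = x − a, and the identity as the nonexpansive mapping S, so that Fix(S) ∩ VI(A, C) = {a}; all parameter choices are assumptions made for the example.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # Metric projection P_C onto the closed convex box C = [lo, hi]^2.
    return np.clip(x, lo, hi)

a = np.array([0.3, 0.7])           # A(x) = x - a is 1-inverse strongly monotone
A = lambda x: x - a
S = lambda x: x                    # identity: nonexpansive, Fix(S) = C

x = np.array([0.9, 0.1])           # x_0 chosen arbitrarily in C
for n in range(200):
    alpha_n, lam_n = 0.5, 0.5      # alpha_n in (0, 1), lam_n in (0, 2*alpha) = (0, 2)
    x = alpha_n * x + (1 - alpha_n) * S(proj_box(x - lam_n * A(x)))

print(np.round(x, 6))              # approaches the common element a = (0.3, 0.7)
```

Here the iterates contract linearly toward a, the unique element of Fix(S) ∩ VI(A, C) in this toy setting.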

Let A1, A2 : C → H be two mappings. In this article, we consider the following problem of finding (x*, y*) ∈ C × C such that

$\left\{\begin{array}{c}〈{\lambda }_{\text{1}}{A}_{1}{y}^{*}+{x}^{*}-{y}^{*},x-{x}^{*}〉\ge 0,\phantom{\rule{1em}{0ex}}\forall x\in C,\\ 〈{\lambda }_{\text{2}}{A}_{2}{x}^{*}+{y}^{*}-{x}^{*},x-{y}^{*}〉\ge 0,\phantom{\rule{1em}{0ex}}\forall x\in C,\end{array}\right\$
(1.2)

which is called a general system of variational inequalities, where λ1 > 0 and λ2 > 0 are two constants. It was introduced and considered by Ceng et al. [7]. In particular, if A1 = A2 = A, then problem (1.2) reduces to the following problem of finding (x*, y*) ∈ C × C such that

$\left\{\begin{array}{c}〈{\lambda }_{\text{1}}A{y}^{*}+{x}^{*}-{y}^{*},x-{x}^{*}〉\ge 0,\phantom{\rule{1em}{0ex}}\forall x\in C,\\ 〈{\lambda }_{\text{2}}A{x}^{*}+{y}^{*}-{x}^{*},x-{y}^{*}〉\ge 0,\phantom{\rule{1em}{0ex}}\forall x\in C,\end{array}\right\$
(1.3)

which was defined by Verma [22] (see also [21]) and is called a new system of variational inequalities. Further, if additionally x* = y*, then problem (1.3) reduces to the classical variational inequality problem (1.1). We remark that in [34], Ceng et al. proposed a hybrid extragradient method for finding a common element of the solution set of a variational inequality problem, the solution set of problem (1.2) and the fixed-point set of a strictly pseudocontractive mapping in a real Hilbert space. Recently, Ceng et al. [7] transformed problem (1.2) into a fixed point problem in the following way:

Lemma 1.1. [7]. For given $\overline{x},\overline{y}\in C$, $\left(\overline{x},\overline{y}\right)$ is a solution of problem (1.2) if and only if $\overline{x}$ is a fixed point of the mapping G : C → C defined by

$G\left(x\right)={P}_{C}\left[{P}_{C}\left(x-{\lambda }_{\text{2}}{A}_{2}x\right)-{\lambda }_{\text{1}}{A}_{1}{P}_{C}\left(x-{\lambda }_{\text{2}}{A}_{2}x\right)\right],\phantom{\rule{1em}{0ex}}\forall x\in C,$
(1.4)

where $\overline{y}={P}_{C}\left(\overline{x}-{\lambda }_{\text{2}}{A}_{2}\overline{x}\right)$.

In particular, if the mapping A i : C → H is ${\stackrel{^}{\alpha }}_{i}$-inverse strongly monotone for i = 1, 2, then the mapping G is nonexpansive provided ${\lambda }_{i}\in \left(0,2{\stackrel{^}{\alpha }}_{i}\right)$ for i = 1, 2.
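To make the fixed-point reformulation (1.4) concrete, here is a numerical sketch of the mapping G for a toy instance of our own choosing (box constraint set, affine 1-inverse strongly monotone A i ); it checks the nonexpansiveness asserted in Lemma 1.1 on random pairs and runs the Picard iteration, which converges here because G is in fact a contraction for affine A i .

```python
import numpy as np

proj = lambda x: np.clip(x, 0.0, 1.0)             # P_C for the box C = [0,1]^2

a1, a2 = np.array([0.2, 0.6]), np.array([0.8, 0.4])
A1, A2 = (lambda x: x - a1), (lambda x: x - a2)   # each 1-inverse strongly monotone
lam1, lam2 = 0.5, 0.5                             # in (0, 2*alpha_hat_i) = (0, 2)

def G(x):
    # G(x) = P_C[ P_C(x - lam2*A2 x) - lam1*A1 P_C(x - lam2*A2 x) ], cf. (1.4)
    z = proj(x - lam2 * A2(x))
    return proj(z - lam1 * A1(z))

# Nonexpansiveness of G on random pairs in C.
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.random(2), rng.random(2)
    assert np.linalg.norm(G(x) - G(y)) <= np.linalg.norm(x - y) + 1e-12

# Picard iteration for the fixed point x_bar of G.
x = np.zeros(2)
for _ in range(100):
    x = G(x)
print(np.round(x, 4))
```

By Lemma 1.1, the computed fixed point x̄, together with ȳ = P_C(x̄ − λ2A2x̄), solves the toy instance of (1.2).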

Utilizing Lemma 1.1, they proposed and analyzed a relaxed extragradient method for solving problem (1.2). Throughout this article, the set of fixed points of the mapping G is denoted by Γ. Based on the extragradient method [33] and the viscosity approximation method [23], Yao et al. [26] introduced and studied a relaxed extragradient iterative algorithm for finding a common solution of problem (1.2) and the fixed point problem of a strict pseudocontraction in a real Hilbert space H.

Theorem 1.1. [[26], Theorem 3.2]. Let C be a nonempty bounded closed convex subset of a real Hilbert space H. Let the mapping A i : C → H be ${\stackrel{^}{\alpha }}_{i}$-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strict pseudocontraction mapping such that $\Omega :=\text{Fix}\left(S\right)\cap \Gamma \ne \varnothing$. Let Q : C → C be a ρ-contraction mapping with $\rho \in \left[0,\frac{1}{2}\right)$. For given x0 ∈ C arbitrarily, let the sequences {x n }, {y n } and {z n } be generated iteratively by

$\left\{\begin{array}{c}{z}_{n}={P}_{C}\left({x}_{n}-{\lambda }_{\text{2}}{A}_{2}{x}_{n}\right),\\ {y}_{n}={\alpha }_{n}Q{x}_{n}+\left(1-{\alpha }_{n}\right){P}_{C}\left({z}_{n}-{\lambda }_{1}{A}_{1}{z}_{n}\right),\\ {x}_{n+1}={\beta }_{n}{x}_{n}+{\gamma }_{n}{P}_{C}\left({z}_{n}-{\lambda }_{\text{1}}{A}_{1}{z}_{n}\right)+{\delta }_{n}S{y}_{n},\phantom{\rule{1em}{0ex}}\forall n\ge 0,\end{array}\right\$
(1.5)

where ${\lambda }_{i}\in \left(0,2{\stackrel{^}{\alpha }}_{i}\right)$for i = 1, 2, and {α n }, {β n }, {γ n }, {δ n } are four sequences in [0, 1] such that

(i) β n + γ n + δ n = 1 and (γ n + δ n )k ≤ γ n < (1 - 2ρ)δ n for all n ≥ 0;

(ii) $\underset{n\to \infty }{\text{lim}}{\alpha }_{n}=0$ and ${\sum }_{n=1}^{\infty }{\alpha }_{n}=\infty$;

(iii) $0<\underset{n\to \infty }{\text{lim inf}}{\beta }_{n}\le \underset{n\to \infty }{\text{lim sup}}{\beta }_{n}<1$ and $\underset{n\to \infty }{\text{lim inf}}{\delta }_{n}>0$

(iv) $\underset{n\to \infty }{\text{lim}}\left(\frac{{\gamma }_{n+1}}{1-{\beta }_{n+1}}-\frac{{\gamma }_{n}}{1-{\beta }_{n}}\right)=0$

Then the sequence {x n } generated by (1.5) converges strongly to x* = P Ω Qx* and (x*, y*) is a solution of the general system of variational inequalities (1.2), where y* = P C (x* - λ2A2x*).

Let B1, B2 : C → H be two mappings. In this article, we also consider another general system of variational inequalities, that is, finding (x*, y*) ∈ C × C such that

$\left\{\begin{array}{c}〈{\mu }_{1}{B}_{1}{y}^{*}+{x}^{*}-{y}^{*},x-{x}^{*}〉\ge 0,\phantom{\rule{1em}{0ex}}\forall x\in C,\\ 〈{\mu }_{2}{B}_{2}{x}^{*}+{y}^{*}-{x}^{*},x-{y}^{*}〉\ge 0,\phantom{\rule{1em}{0ex}}\forall x\in C,\end{array}\right\$
(1.6)

where μ1 > 0 and μ2 > 0 are two constants.

Utilizing Lemma 1.1, we know that for given $\overline{x},\overline{y}\in C$, $\left(\overline{x},\overline{y}\right)$ is a solution of problem (1.6) if and only if $\overline{x}$ is a fixed point of the mapping F : C → C defined by

$F\left(x\right)={P}_{C}\left[{P}_{C}\left(x-{\mu }_{2}{B}_{2}x\right)-{\mu }_{1}{B}_{1}{P}_{C}\left(x-{\mu }_{2}{B}_{2}x\right)\right],\phantom{\rule{1em}{0ex}}\forall x\in C,$
(1.7)

where $\overline{y}={P}_{C}\left(\overline{x}-{\mu }_{2}{B}_{2}\overline{x}\right)$. In particular, if the mapping B i : C → H is ${\stackrel{^}{\beta }}_{i}$-inverse strongly monotone for i = 1, 2, then the mapping F is nonexpansive provided ${\mu }_{i}\in \left(0,2{\stackrel{^}{\beta }}_{i}\right)$ for i = 1, 2. Throughout this article, the set of fixed points of the mapping F is denoted by Γ0.

Assume that A i : C → H is ${\stackrel{^}{\alpha }}_{i}$-inverse strongly monotone and B i : C → H is ${\stackrel{^}{\beta }}_{i}$-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strict pseudocontraction mapping such that $\Omega :=\text{Fix}\left(S\right)\cap \Gamma \cap {\Gamma }_{\text{0}}\ne \varnothing$. Let Q : C → C be a ρ-contraction mapping with $\rho \in \left[0,\frac{1}{2}\right)$. Motivated and inspired by the research work going on in this area, we propose and analyze the following iterative scheme for computing a common element of the solution set Γ of the general system of variational inequalities (1.2), the solution set Γ0 of the general system of variational inequalities (1.6), and the fixed point set Fix(S) of the mapping S:

$\left\{\begin{array}{c}{z}_{n}={P}_{C}\left[{P}_{C}\left({x}_{n}-{\mu }_{2}{B}_{2}{x}_{n}\right)-{\mu }_{1}{B}_{1}{P}_{C}\left({x}_{n}-{\mu }_{2}{B}_{2}{x}_{n}\right)\right],\\ {y}_{n}={\alpha }_{n}Q{x}_{n}+\left(1-{\alpha }_{n}\right){P}_{C}\left[{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)-{\lambda }_{\text{1}}{A}_{1}{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)\right],\\ {x}_{n+1}={\beta }_{n}{x}_{n}+{\gamma }_{n}{P}_{C}\left[{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)-{\lambda }_{\text{1}}{A}_{1}{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)\right]+{\delta }_{n}S{y}_{n},\phantom{\rule{1em}{0ex}}\forall n\ge 0,\end{array}\right\$
(1.8)

where ${\lambda }_{i}\in \left(0,2{\stackrel{^}{\alpha }}_{i}\right)$ and ${\mu }_{i}\in \left(0,2{\stackrel{^}{\beta }}_{i}\right)$ for i = 1, 2, and {α n }, {β n }, {γ n }, {δ n } are sequences in [0, 1] such that β n + γ n + δ n = 1 for all n ≥ 0. Furthermore, it is proven that the sequences {x n }, {y n } and {z n } generated by (1.8) converge strongly to the same point x* = P Ω Qx* under very mild conditions, and that (x*, y*) and $\left({x}^{*},{\overline{y}}^{*}\right)$ are a solution of the general system of variational inequalities (1.2) and a solution of the general system of variational inequalities (1.6), respectively, where y* = P C (x* - λ2A2x*) and ${\overline{y}}^{*}={P}_{C}\left({x}^{*}-{\mu }_{2}{B}_{2}{x}^{*}\right)$.
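The scheme (1.8) can be exercised numerically. Below is an illustrative sketch in which all data are our own toy choices, not part of the theorem: C = [0, 1]², all four mappings A i , B i equal to x ↦ x − a (1-inverse strongly monotone), S the identity (a 0-strict pseudocontraction) and Q = 0.25·I (a ρ-contraction with ρ = 1/4), so that Ω = {a} and the iterates should approach x* = a. The parameter sequences satisfy the conditions used later in Theorem 3.1.

```python
import numpy as np

proj = lambda x: np.clip(x, 0.0, 1.0)           # P_C for C = [0,1]^2
a = np.array([0.3, 0.7])                        # common data point (toy choice)
A1 = A2 = B1 = B2 = lambda x: x - a             # all 1-inverse strongly monotone
S = lambda x: x                                 # 0-strict pseudocontraction (identity)
Q = lambda x: 0.25 * x                          # rho-contraction with rho = 1/4 < 1/2
lam1 = lam2 = mu1 = mu2 = 0.5                   # in (0, 2)

x = np.array([0.9, 0.1])
for n in range(300):
    alpha_n = 1.0 / (n + 1)                     # alpha_n -> 0, sum alpha_n = infinity
    beta_n, gamma_n, delta_n = 0.5, 0.1, 0.4    # beta+gamma+delta = 1, gamma < (1-2*rho)*delta
    w = proj(x - mu2 * B2(x))
    z = proj(w - mu1 * B1(w))                   # z_n
    t = proj(z - lam2 * A2(z))
    v = proj(t - lam1 * A1(t))                  # P_C[P_C(z_n - lam2*A2 z_n) - lam1*A1 P_C(...)]
    y = alpha_n * Q(x) + (1 - alpha_n) * v      # y_n
    x = beta_n * x + gamma_n * v + delta_n * S(y)

print(np.round(x, 3))                           # near a = (0.3, 0.7), up to O(alpha_n) error
```

In this degenerate instance Ω is the singleton {a}, so x* = P Ω Qx* = a; the residual after finitely many steps is of order α n , consistent with the strong convergence asserted below.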

Our result represents the modification, supplement, extension and improvement of the above Theorem 1.1 in the following aspects.

(a) Our problem of finding an element of Fix(S) ∩ Γ ∩ Γ0 is more general and more complex than the problem of finding an element of Fix(S) ∩ Γ in the above Theorem 1.1.

(b) Algorithm (1.8) for finding an element of Fix(S) ∩ Γ ∩ Γ0 is also more general and more flexible than algorithm (1.5) for finding an element of Fix(S) ∩ Γ in the above Theorem 1.1. Indeed, whenever B1 = B2 = 0, we have

${z}_{n}={P}_{C}\left[{P}_{C}\left({x}_{n}-{\mu }_{2}{B}_{2}{x}_{n}\right)-{\mu }_{1}{B}_{1}{P}_{C}\left({x}_{n}-{\mu }_{2}{B}_{2}{x}_{n}\right)\right]={x}_{n},\phantom{\rule{1em}{0ex}}\forall n\ge 0.$

In this case, algorithm (1.8) reduces essentially to algorithm (1.5).

(c) Algorithm (1.8) is very different from algorithm (1.5) in the above Theorem 1.1 because algorithm (1.8) is closely related to the viscosity approximation method with the ρ-contraction Q : C → C and involves the Picard successive iteration for the general system of variational inequalities (1.6).

(d) The techniques of proving strong convergence in our result are very different from those in the above Theorem 1.1 because our techniques depend on the norm inequality in Lemma 2.2, the inverse-strong monotonicity of the mappings A i , B i : C → H for i = 1, 2, the demiclosedness principle for strict pseudocontractions, and the transformation of the two general systems of variational inequalities (1.2) and (1.6) into the fixed-point problems of the nonexpansive self-mappings G : C → C and F : C → C (see the above Lemma 1.1), respectively.

2. Preliminaries

Let H be a real Hilbert space whose inner product and norm are 〈·, ·〉 and ║ · ║, respectively. Let C be a nonempty closed convex subset of H. We write x n → x to indicate that the sequence {x n } converges strongly to x and x n ⇀ x to indicate that the sequence {x n } converges weakly to x. Moreover, we use ω w (x n ) to denote the weak ω-limit set of the sequence {x n }, that is,

${\omega }_{w}\left({x}_{n}\right):=\left\{x:{x}_{{n}_{i}}⇀x\phantom{\rule{2.77695pt}{0ex}}\text{for}\phantom{\rule{2.77695pt}{0ex}}\text{some}\phantom{\rule{2.77695pt}{0ex}}\text{subsequence}\left\{{x}_{{n}_{i}}\right\}\phantom{\rule{2.77695pt}{0ex}}\text{of}\phantom{\rule{2.77695pt}{0ex}}\left\{{x}_{n}\right\}\right\}.$

Recall that a mapping A : CH is called α-inverse strongly monotone if there exists a constant α > 0 such that

$〈Ax-Ay,x-y〉\ge \alpha {∥Ax-Ay∥}^{2},\phantom{\rule{1em}{0ex}}\forall x,y\in C.$

It is obvious that any α-inverse strongly monotone mapping is monotone and $\frac{1}{\alpha }$-Lipschitz continuous. A mapping S : C → C is called a strict pseudocontraction [35] if there exists a constant 0 ≤ k < 1 such that

${∥Sx-Sy∥}^{2}\le {∥x-y∥}^{2}+k{∥\left(I-S\right)x-\left(I-S\right)y∥}^{2},\phantom{\rule{1em}{0ex}}\forall x,y\in C.$
(2.1)

In this case, we also say that S is a k-strict pseudocontraction. Meantime, observe that (2.1) is equivalent to the following

$〈Sx-Sy,x-y〉\le {∥x-y∥}^{2}-\frac{1-k}{2}{∥\left(I-S\right)x-\left(I-S\right)y∥}^{2},\phantom{\rule{1em}{0ex}}\forall x,y\in C.$
(2.2)

It is easy to see that if S is a k-strictly pseudocontractive mapping, then I - S is $\frac{1-k}{2}$-inverse strongly monotone and hence $\frac{2}{1-k}$-Lipschitz continuous; for further details, we refer to [30] and the references therein. It is clear that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings, i.e., the mappings S : C → C such that ║Sx - Sy║ ≤ ║x - y║ for all x, y ∈ C.

For every point x ∈ H, there exists a unique nearest point in C, denoted by P C x, such that

$∥x-{P}_{C}x∥\le ∥x-y∥,\phantom{\rule{1em}{0ex}}\forall y\in C.$

The mapping P C is called the metric projection of H onto C. We know that P C is a firmly nonexpansive mapping of H onto C; that is, there holds the following relation

$〈{P}_{C}x-{P}_{C}y,x-y〉\ge {∥{P}_{C}x-{P}_{C}y∥}^{2},\phantom{\rule{1em}{0ex}}\forall x,y\in H.$

Consequently, P C is nonexpansive and monotone. It is also known that P C is characterized by the following properties: P C x ∈ C and

$〈x-{P}_{C}x,{P}_{C}x-y〉\ge 0,$
(2.3)
${∥x-y∥}^{2}\ge {∥x-{P}_{C}x∥}^{2}+{∥y-{P}_{C}x∥}^{2},\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\forall x\in H,\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}y\in C.$
(2.4)

See [36] for more details.
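The projection properties above, namely firm nonexpansiveness and the characterizations (2.3) and (2.4), are easy to verify numerically for a set whose projection is explicit. The sketch below is our own illustration using the box C = [−1, 1]², for which P C is componentwise clipping.

```python
import numpy as np

proj = lambda x: np.clip(x, -1.0, 1.0)   # P_C for the box C = [-1, 1]^2

rng = np.random.default_rng(1)
for _ in range(200):
    x, u = 3 * rng.standard_normal(2), 3 * rng.standard_normal(2)
    y = np.clip(rng.standard_normal(2), -1.0, 1.0)   # an arbitrary point of C
    px, pu = proj(x), proj(u)
    # firm nonexpansiveness: <P_C x - P_C u, x - u> >= ||P_C x - P_C u||^2
    assert np.dot(px - pu, x - u) >= np.dot(px - pu, px - pu) - 1e-12
    # characterization (2.3): <x - P_C x, P_C x - y> >= 0 for all y in C
    assert np.dot(x - px, px - y) >= -1e-12
    # inequality (2.4): ||x - y||^2 >= ||x - P_C x||^2 + ||y - P_C x||^2
    assert np.dot(x - y, x - y) >= np.dot(x - px, x - px) + np.dot(y - px, y - px) - 1e-12

print("projection properties verified on random samples")
```

Note that (2.4) follows from (2.3) by expanding ║x − y║² around P C x, which is exactly what the assertions exercise pointwise.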

In order to prove our main result in the next section, we need the following lemmas. The first lemma is an immediate consequence of the properties of the inner product.

Lemma 2.1. In a real Hilbert space H, there holds the inequality

${∥x+y∥}^{2}\le {∥x∥}^{2}+2〈y,x+y〉,\phantom{\rule{1em}{0ex}}\forall x,y\in H.$
(2.5)

Recall that S : CC is called a quasi-strict pseudocontraction if the fixed point set of S, Fix(S), is nonempty and if there exists a constant 0 ≤ k < 1 such that

${∥Sx-p∥}^{2}\le {∥x-p∥}^{2}+k{∥x-Sx∥}^{2}\phantom{\rule{1em}{0ex}}\text{for}\phantom{\rule{2.77695pt}{0ex}}\text{all}\phantom{\rule{2.77695pt}{0ex}}x\in C\phantom{\rule{2.77695pt}{0ex}}\text{and}\phantom{\rule{2.77695pt}{0ex}}p\in \text{Fix}\left(S\right).$
(2.6)

We also say that S is a k-quasi-strict pseudocontraction if condition (2.6) holds.

The following lemma was proved by Suzuki [37].

Lemma 2.2.[37]Let {x n } and {y n } be bounded sequences in a Banach space X and let {β n } be a sequence in [0, 1] with $0<\underset{n\to \infty }{\text{lim}\text{inf}}{\beta }_{n}\le \underset{n\to \infty }{\text{lim}\text{sup}}{\beta }_{n}<1$. Suppose x n+ 1 = (1 - β n )y n + β n x n for all integers n ≥ 0 and $\underset{n\to \infty }{\text{lim}\text{sup}}\left(∥{y}_{n+1}-{y}_{n}∥-∥{x}_{n+1}-{x}_{n}∥\right)\le 0$. Then, $\underset{n\to \infty }{\text{lim}}∥{y}_{n}-{x}_{n}∥=0$.

Lemma 2.3. [[17], Proposition 2.1] Assume C is a nonempty closed convex subset of a real Hilbert space H and let S: C → C be a self-mapping on C.

(a) If S is a k-strict pseudocontraction, then S satisfies the Lipschitz condition

$∥Sx-Sy∥\le \frac{1+k}{1-k}∥x-y∥,\phantom{\rule{1em}{0ex}}\forall x,y\in C.$
(2.7)

(b) If S is a k-strict pseudocontraction, then the mapping I - S is demiclosed (at 0). That is, if {x n } is a sequence in C such that ${x}_{n}⇀\stackrel{̃}{x}$ and (I - S)x n → 0, then $\left(I-S\right)\stackrel{̃}{x}=0$, i.e., $\stackrel{̃}{x}\in \text{Fix}\left(S\right)$.

(c) If S is a k-quasi-strict pseudocontraction, then the fixed point set Fix(S) of S is closed and convex so that the projection P Fix(S) is well defined.

Lemma 2.4.[24]Let {a n } be a sequence of nonnegative numbers satisfying the condition

${a}_{n+1}\le \left(1-{\delta }_{n}\right){a}_{n}+{\delta }_{n}{\sigma }_{n},\phantom{\rule{1em}{0ex}}\forall n\ge 0,$

where {δ n }, {σ n } are sequences of real numbers such that

(i) {δ n } [0, 1] and ${\sum }_{n=0}^{\infty }{\delta }_{n}=\infty$, or equivalently,

$\prod _{n=0}^{\infty }\left(1-{\delta }_{n}\right):=\underset{n\to \infty }{\text{lim}}\prod _{j=0}^{n}\left(1-{\delta }_{j}\right)=0;$

(ii) $\underset{n\to \infty }{\text{lim}\text{sup}}{\sigma }_{n}\le 0$, or

(ii′) ${\sum }_{n=0}^{\infty }{\delta }_{n}{\sigma }_{n}$ is convergent.

Then $\underset{n\to \infty }{\text{lim}}{a}_{n}=0$.
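A quick numerical illustration of Lemma 2.4, with our own choice of sequences: taking δ n = σ n = 1/(n + 1), condition (i) holds since Σδ n diverges and condition (ii) holds since σ n → 0, so the lemma predicts a n → 0 (here at roughly the rate log n / n).

```python
# Recursion a_{n+1} = (1 - delta_n) a_n + delta_n * sigma_n of Lemma 2.4.
a = 1.0
for n in range(100000):
    delta = 1.0 / (n + 1)   # delta_n in [0, 1], sum_{n} delta_n = infinity
    sigma = 1.0 / (n + 1)   # limsup sigma_n = 0 <= 0
    a = (1 - delta) * a + delta * sigma

print(a)                    # small; a_n -> 0 as the lemma asserts
```

This recursion is exactly the template used in Step-wise strong convergence proofs such as the one below, where δ n plays the role of α n and σ n collects the error terms.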

3. Strong convergence theorems

We are now in a position to state and prove our main result.

Lemma 3.1. [[26], Lemma 3.1] Let C be a nonempty closed convex subset of a real Hilbert space H. Let S: C → C be a k-strict pseudocontraction mapping. Let γ and δ be two nonnegative real numbers. Assume (γ + δ)k ≤ γ. Then

$∥\gamma \left(x-y\right)+\delta \left(Sx-Sy\right)∥\le \left(\gamma +\delta \right)∥x-y∥,\phantom{\rule{1em}{0ex}}\forall x,y\in C.$
(3.1)

Theorem 3.1. Let C be a nonempty bounded closed convex subset of a real Hilbert space H. Assume that for i = 1, 2, the mappings A i , B i : C → H are ${\stackrel{^}{\alpha }}_{i}$-inverse strongly monotone and ${\stackrel{^}{\beta }}_{i}$-inverse strongly monotone, respectively. Let S : C → C be a k-strict pseudocontraction mapping such that $\Omega :=\text{Fix}\left(S\right)\cap \Gamma \cap {\Gamma }_{\text{0}}\ne \varnothing$. Let Q : C → C be a ρ-contraction mapping with $\rho \in \left[0,\frac{1}{2}\right)$. For given x0 ∈ C arbitrarily, let the sequences {x n }, {y n } and {z n } be generated iteratively by

$\left\{\begin{array}{c}{z}_{n}={P}_{C}\left[{P}_{C}\left({x}_{n}-{\mu }_{2}{B}_{2}{x}_{n}\right)-{\mu }_{1}{B}_{1}{P}_{C}\left({x}_{n}-{\mu }_{2}{B}_{2}{x}_{n}\right)\right],\\ {y}_{n}={\alpha }_{n}Q{x}_{n}+\left(1-{\alpha }_{n}\right){P}_{C}\left[{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)-{\lambda }_{\text{1}}{A}_{1}{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)\right],\\ {x}_{n+1}={\beta }_{n}{x}_{n}+{\gamma }_{n}{P}_{C}\left[{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)-{\lambda }_{\text{1}}{A}_{1}{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)\right]+{\delta }_{n}S{y}_{n},\phantom{\rule{1em}{0ex}}\forall n\ge 0,\end{array}\right\$
(3.2)

where ${\lambda }_{i}\in \left(0,2{\stackrel{^}{\alpha }}_{i}\right)$and ${\mu }_{i}\in \left(0,2{\stackrel{^}{\beta }}_{i}\right)$for i = 1, 2, and {α n }, {β n }, {γ n }, {δ n } are four sequences in [0, 1] such that:

(i) β n + γ n + δ n = 1 and (γ n + δ n )k ≤ γ n <(1 - 2ρ)δ n for all n ≥ 0;

(ii) $\underset{n\to \infty }{\text{lim}}{\alpha }_{n}=0$ and ${\sum }_{n=0}^{\infty }{\alpha }_{n}=\infty$;

(iii) $0<\underset{n\to \infty }{\text{lim}\text{inf}}{\beta }_{n}\le \underset{n\to \infty }{\text{lim}\text{sup}}{\beta }_{n}<1$ and $\underset{n\to \infty }{\text{lim}\text{inf}}{\delta }_{n}>0$

(iv) $\underset{n\to \infty }{\text{lim}}\left(\frac{{\gamma }_{n+1}}{1-{\beta }_{n+1}}-\frac{{\gamma }_{n}}{1-{\beta }_{n}}\right)=0$

Then, the sequences {x n }, {y n }, {z n } generated by (3.2) converge strongly to the same point x* = P Ω Qx*, and (x*, y*) and $\left({x}^{*},{\overline{y}}^{*}\right)$ are a solution of the general system of variational inequalities (1.2) and a solution of the general system of variational inequalities (1.6), respectively, where y* = P C (x* - λ2A2x*) and ${\overline{y}}^{*}={P}_{C}\left({x}^{*}-{\mu }_{2}{B}_{2}{x}^{*}\right)$.

Proof. Let us show that the mappings I - λ i A i and I - μ i B i are nonexpansive for i = 1, 2. Indeed, since for i = 1, 2, A i and B i are ${\stackrel{^}{\alpha }}_{i}$-inverse strongly monotone and ${\stackrel{^}{\beta }}_{i}$-inverse strongly monotone, respectively, we have for all x, y ∈ C

$\begin{array}{ll}\hfill {∥\left(I-{\lambda }_{i}{A}_{i}\right)x-\left(I-{\lambda }_{i}{A}_{i}\right)y∥}^{2}& ={∥\left(x-y\right)-{\lambda }_{i}\left({A}_{i}x-{A}_{i}y\right)∥}^{2}\phantom{\rule{2em}{0ex}}\\ ={∥x-y∥}^{2}-2{\lambda }_{i}〈{A}_{i}x-{A}_{i}y,x-y〉+{\lambda }_{i}^{2}{∥{A}_{i}x-{A}_{i}y∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥x-y∥}^{2}-2{\lambda }_{i}{\stackrel{^}{\alpha }}_{i}{∥{A}_{i}x-{A}_{i}y∥}^{2}+{\lambda }_{{\phantom{\rule{0.1em}{0ex}}}_{i}}^{2}{∥{A}_{i}x-{A}_{i}y∥}^{2}\phantom{\rule{2em}{0ex}}\\ ={∥x-y∥}^{2}-{\lambda }_{i}\left(2{\stackrel{^}{\alpha }}_{i}-{\lambda }_{i}\right){∥{A}_{i}x-{A}_{i}y∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥x-y∥}^{2},\phantom{\rule{2em}{0ex}}\end{array}$

and

$\begin{array}{ll}\hfill {∥\left(I-{\mu }_{i}{B}_{i}\right)x-\left(I-{\mu }_{i}{B}_{i}\right)y∥}^{2}& ={∥\left(x-y\right)-{\mu }_{i}\left({B}_{i}x-{B}_{i}y\right)∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥x-y∥}^{2}-{\mu }_{i}\left(2{\stackrel{^}{\beta }}_{i}-{\mu }_{i}\right){∥{B}_{i}x-{B}_{i}y∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥x-y∥}^{2}.\phantom{\rule{2em}{0ex}}\end{array}$

This shows that both I - λ i A i and I - μ i B i are nonexpansive for i = 1, 2.
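The inequality just derived can be checked numerically. In the sketch below, a toy instance of our own, A(x) = Mx with M = diag(1, 2) is $\stackrel{^}{\alpha }$-inverse strongly monotone with $\stackrel{^}{\alpha }$ = 1/2 (since ║Mv║² ≤ 2·vᵀMv), and the estimate ║(I − λA)x − (I − λA)y║² ≤ ║x − y║² − λ(2$\stackrel{^}{\alpha }$ − λ)║Ax − Ay║² is verified on random pairs for λ ∈ (0, 1).

```python
import numpy as np

M = np.diag([1.0, 2.0])        # A(x) = Mx is alpha-ism with alpha = 1/lambda_max(M) = 1/2
A = lambda x: M @ x
alpha, lam = 0.5, 0.9          # lam in (0, 2*alpha) = (0, 1)

rng = np.random.default_rng(2)
for _ in range(200):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    lhs = np.linalg.norm((x - lam * A(x)) - (y - lam * A(y))) ** 2
    rhs = np.linalg.norm(x - y) ** 2 \
        - lam * (2 * alpha - lam) * np.linalg.norm(A(x) - A(y)) ** 2
    assert lhs <= rhs + 1e-10  # hence I - lam*A is nonexpansive

print("nonexpansiveness of I - lam*A verified")
```

The same computation with B i , μ i in place of A i , λ i covers the second estimate above.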

We divide the rest of the proof into several steps.

Step 1. $\underset{n\to \infty }{\text{lim}}∥{x}_{n+1}-{x}_{n}∥=0$.

Indeed, first, we can write (3.2) as x n+ 1 = β n x n + (1 - β n )u n , n ≥ 0, where ${u}_{n}=\frac{{x}_{n+1}-{\beta }_{n}{x}_{n}}{1-{\beta }_{n}}$. Set ${\stackrel{̃}{z}}_{n}={P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right),\forall n\ge 0$. It follows that

$\begin{array}{ll}\hfill {u}_{n+1}-{u}_{n}& =\frac{{x}_{n+2}-{\beta }_{n+1}{x}_{n+1}}{1-{\beta }_{n+1}}-\frac{{x}_{n+1}-{\beta }_{n}{x}_{n}}{1-{\beta }_{n}}\phantom{\rule{2em}{0ex}}\\ =\frac{{\gamma }_{n+1}{P}_{C}\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)+{\delta }_{n+1}S{y}_{n+1}}{1-{\beta }_{n+1}}-\frac{{\gamma }_{n}{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)+{\delta }_{n}S{y}_{n}}{1-{\beta }_{n}}\phantom{\rule{2em}{0ex}}\\ =\frac{{\gamma }_{n+1}\left[{P}_{C}\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)-{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)\right]+{\delta }_{n+1}\left(S{y}_{n+1}-S{y}_{n}\right)}{1-{\beta }_{n+1}}\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}+\left(\frac{{\gamma }_{n+1}}{1-{\beta }_{n+1}}-\frac{{\gamma }_{n}}{1-{\beta }_{n}}\right){P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)+\left(\frac{{\delta }_{n+1}}{1-{\beta }_{n+1}}-\frac{{\delta }_{n}}{1-{\beta }_{n}}\right)S{y}_{n}.\phantom{\rule{2em}{0ex}}\end{array}$
(3.3)

From Lemma 3.1 and (3.2), we get

$\begin{array}{l}∥{\gamma }_{n+1}\left[{P}_{C}\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)-{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)\right]+{\delta }_{n+1}\left(S{y}_{n+1}-S{y}_{n}\right)∥\phantom{\rule{2em}{0ex}}\\ \le \left(1-{\beta }_{n+1}\right)∥{y}_{n+1}-{y}_{n}∥+{\gamma }_{n+1}\left({\alpha }_{n+1}∥Q{x}_{n+1}-{P}_{C}\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)∥+{\alpha }_{n}∥Q{x}_{n}-{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)∥\right).\phantom{\rule{2em}{0ex}}\end{array}$
(3.4)

Note that

$\begin{array}{ll}\hfill ∥{P}_{C}\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)-{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)∥& \le ∥\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)-\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)∥\phantom{\rule{2em}{0ex}}\\ \le ∥{\stackrel{̃}{z}}_{n+1}-{\stackrel{̃}{z}}_{n}∥\phantom{\rule{2em}{0ex}}\\ =∥{P}_{C}\left({z}_{n+1}-{\lambda }_{2}{A}_{2}{z}_{n+1}\right)-{P}_{C}\left({z}_{n}-{\lambda }_{2}{A}_{2}{z}_{n}\right)∥\phantom{\rule{2em}{0ex}}\\ \le ∥\left({z}_{n+1}-{\lambda }_{2}{A}_{2}{z}_{n+1}\right)-\left({z}_{n}-{\lambda }_{2}{A}_{2}{z}_{n}\right)∥\phantom{\rule{2em}{0ex}}\\ \le ∥{z}_{n+1}-{z}_{n}∥,\phantom{\rule{2em}{0ex}}\end{array}$
(3.5)

and

$∥{z}_{n+1}-{z}_{n}∥=∥F\left({x}_{n+1}\right)-F\left({x}_{n}\right)∥\le ∥{x}_{n+1}-{x}_{n}∥,$
(3.6)

since z n = F(x n ) by (3.2), where F is the nonexpansive mapping defined in (1.7).

Then it follows from (3.5) and (3.6) that

$\begin{array}{l}∥{y}_{n+1}-{y}_{n}∥\phantom{\rule{2em}{0ex}}\\ \le ∥{P}_{C}\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)-{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{̃}{z}}_{n}\right)∥+{\alpha }_{n+1}∥Q{x}_{n+1}-{P}_{C}\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)∥\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}+{\alpha }_{n}∥Q{x}_{n}-{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{̃}{z}}_{n}\right)∥\phantom{\rule{2em}{0ex}}\\ \le ∥{x}_{n+1}-{x}_{n}∥+{\alpha }_{n}∥Q{x}_{n}-{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{̃}{z}}_{n}\right)∥+{\alpha }_{n+1}∥Q{x}_{n+1}-{P}_{C}\left({\stackrel{̃}{z}}_{n+1}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{̃}{z}}_{n+1}\right)∥.\phantom{\rule{2em}{0ex}}\end{array}$
(3.7)

Therefore, from (3.3), (3.4) and (3.7), we have

$\begin{array}{c}‖{u}_{n+1}-{u}_{n}‖\le ‖{x}_{n+1}-{x}_{n}‖+\left(1+\frac{{\gamma }_{n+1}}{1-{\beta }_{n+1}}\right){\alpha }_{n}‖Q{x}_{n}-{P}_{C}\left({\stackrel{˜}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{˜}{z}}_{n}\right)‖\\ +\left(1+\frac{{\gamma }_{n+1}}{1-{\beta }_{n+1}}\right){\alpha }_{n+1}‖Q{x}_{n+1}-{P}_{C}\left({\stackrel{˜}{z}}_{n+1}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{˜}{z}}_{n+1}\right)‖\\ +|\frac{{\gamma }_{n+1}}{1-{\beta }_{n+1}}-\frac{{\gamma }_{n}}{1-{\beta }_{n}}|\left(‖{P}_{C}\left({\stackrel{˜}{z}}_{n}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{˜}{z}}_{n}\right)‖+‖S{y}_{n}‖\right).\end{array}$

This implies that

$\underset{n\to \infty }{\text{lim}\text{sup}}\left(∥{u}_{n+1}-{u}_{n}∥-∥{x}_{n+1}-{x}_{n}∥\right)\le 0.$

Hence by Lemma 2.2 we get limn → ∞u n - x n = 0. Consequently,

$\underset{n\to \infty }{\text{lim}}∥{x}_{n+1}-{x}_{n}∥=\underset{n\to \infty }{\text{lim}}\left(1-{\beta }_{n}\right)∥{u}_{n}-{x}_{n}∥=0.$
(3.8)

Step 2. $\underset{n\to \infty }{\text{lim}}∥{A}_{1}{\stackrel{̃}{z}}_{n}-{A}_{1}{y}^{*}∥=\underset{n\to \infty }{\text{lim}}∥{A}_{2}{z}_{n}-{A}_{2}{x}^{*}∥=\underset{n\to \infty }{\text{lim}}∥{B}_{1}{\stackrel{̃}{x}}_{n}-{B}_{1}{\overline{y}}^{*}∥=\underset{n\to \infty }{\text{lim}}∥{B}_{2}{x}_{n}-{B}_{2}{x}^{*}∥=0$.

Indeed, let x* Ω. Utilizing Lemma 1.1 we have x* = Sx*, x* = P C [P C (x* - λ2A2x*) - λ1A1P C (x* - λ2A2x*)] and

${x}^{*}={P}_{C}\left[{P}_{C}\left({x}^{*}-{\mu }_{2}{B}_{2}{x}^{*}\right)-{\mu }_{1}{B}_{1}{P}_{C}\left({x}^{*}-{\mu }_{2}{B}_{2}{x}^{*}\right)\right].$

Put y* = P C (x* - λ2A2x*) and ${\overline{y}}^{*}={P}_{C}\left({x}^{*}-{\mu }_{2}{B}_{2}{x}^{*}\right)$. Then x* = P C (y* - λ1A1y*) and ${x}^{*}={P}_{C}\left({\overline{y}}^{*}-{\mu }_{1}{B}_{1}{\overline{y}}^{*}\right)$. Also set ${\stackrel{̃}{x}}_{n}={P}_{C}\left({x}_{n}-{\mu }_{2}{B}_{2}{x}_{n}\right)$ for all n ≥ 0, so that ${z}_{n}={P}_{C}\left({\stackrel{̃}{x}}_{n}-{\mu }_{1}{B}_{1}{\stackrel{̃}{x}}_{n}\right)$. Thus it follows that

$\begin{array}{l}{∥{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)-{P}_{C}\left({y}^{*}-{\lambda }_{\text{1}}{A}_{1}{y}^{*}\right)∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥\left({\stackrel{̃}{z}}_{n}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{̃}{z}}_{n}\right)-\left({y}^{*}-{\lambda }_{\text{1}}{A}_{1}{y}^{*}\right)∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥{\stackrel{̃}{z}}_{n}-{y}^{*}∥}^{2}-{\lambda }_{\text{1}}\left(2{\stackrel{⌢}{\alpha }}_{1}-{\lambda }_{\text{1}}\right){∥{A}_{1}{\stackrel{̃}{z}}_{n}-{A}_{1}{y}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ ={∥{P}_{C}\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)-{P}_{C}\left({x}^{*}-{\lambda }_{\text{2}}{A}_{2}{x}^{*}\right)∥}^{2}-{\lambda }_{\text{1}}\left(2{\stackrel{⌢}{\alpha }}_{1}-{\lambda }_{\text{1}}\right){∥{A}_{1}{\stackrel{̃}{z}}_{n}-{A}_{1}{y}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥\left({z}_{n}-{\lambda }_{\text{2}}{A}_{2}{z}_{n}\right)-\left({x}^{*}-{\lambda }_{\text{2}}{A}_{2}{x}^{*}\right)∥}^{2}-{\lambda }_{\text{1}}\left(2{\stackrel{⌢}{\alpha }}_{1}-{\lambda }_{\text{1}}\right){∥{A}_{1}{\stackrel{̃}{z}}_{n}-{A}_{1}{y}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥{z}_{n}-{x}^{*}∥}^{2}-{\lambda }_{\text{2}}\left(2{\stackrel{⌢}{\alpha }}_{2}-{\lambda }_{2}\right){∥{A}_{2}{z}_{n}-{A}_{2}{x}^{*}∥}^{2}-{\lambda }_{\text{1}}\left(2{\stackrel{⌢}{\alpha }}_{1}-{\lambda }_{\text{1}}\right){∥{A}_{1}{\stackrel{̃}{z}}_{n}-{A}_{1}{y}^{*}∥}^{2},\phantom{\rule{2em}{0ex}}\end{array}$
(3.9)

and

$\begin{array}{ll}\hfill {∥{z}_{n}-{x}^{*}∥}^{2}& ={∥{P}_{C}\left({\stackrel{̃}{x}}_{n}-{\mu }_{1}{B}_{1}{\stackrel{̃}{x}}_{n}\right)-{P}_{C}\left({\overline{y}}^{*}-{\mu }_{1}{B}_{1}{\overline{y}}^{*}\right)∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥\left({\stackrel{̃}{x}}_{n}-{\mu }_{1}{B}_{1}{\stackrel{̃}{x}}_{n}\right)-\left({\overline{y}}^{*}-{\mu }_{1}{B}_{1}{\overline{y}}^{*}\right)∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥{\stackrel{̃}{x}}_{n}-{\overline{y}}^{*}∥}^{2}-{\mu }_{1}\left(2{\stackrel{⌢}{\beta }}_{1}-{\mu }_{1}\right){∥{B}_{1}{\stackrel{̃}{x}}_{n}-{B}_{1}{\overline{y}}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ ={∥{P}_{C}\left({x}_{n}-{\mu }_{2}{B}_{2}{x}_{n}\right)-{P}_{C}\left({x}^{*}-{\mu }_{2}{B}_{2}{x}^{*}\right)∥}^{2}-{\mu }_{1}\left(2{\stackrel{⌢}{\beta }}_{1}-{\mu }_{1}\right){∥{B}_{1}{\stackrel{̃}{x}}_{n}-{B}_{1}{\overline{y}}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {∥{x}_{n}-{x}^{*}∥}^{2}-{\mu }_{2}\left(2{\stackrel{⌢}{\beta }}_{2}-{\mu }_{2}\right){∥{B}_{2}{x}_{n}-{B}_{2}{x}^{*}∥}^{2}-{\mu }_{1}\left(2{\stackrel{⌢}{\beta }}_{1}-{\mu }_{1}\right){∥{B}_{1}{\stackrel{̃}{x}}_{n}-{B}_{1}{\overline{y}}^{*}∥}^{2}.\phantom{\rule{2em}{0ex}}\end{array}$
(3.10)

It follows from (3.2), (3.9) and (3.10) that

$\begin{array}{ll}\hfill {∥{y}_{n}-{x}^{*}∥}^{2}& \le {\alpha }_{n}{∥Q{x}_{n}-{x}^{*}∥}^{2}+\left(1-{\alpha }_{n}\right){∥{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{\text{1}}{A}_{1}{\stackrel{̃}{z}}_{n}\right)-{P}_{C}\left({y}^{*}-{\lambda }_{1}{A}_{1}{y}^{*}\right)∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {\alpha }_{n}{∥Q{x}_{n}-{x}^{*}∥}^{2}+{∥{P}_{C}\left({\stackrel{̃}{z}}_{n}-{\lambda }_{1}{A}_{1}{\stackrel{̃}{z}}_{n}\right)-{P}_{C}\left({y}^{*}-{\lambda }_{1}{A}_{1}{y}^{*}\right)∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {\alpha }_{n}{∥Q{x}_{n}-{x}^{*}∥}^{2}+{∥{z}_{n}-{x}^{*}∥}^{2}-{\lambda }_{2}\left(2{\stackrel{⌢}{\alpha }}_{2}-{\lambda }_{2}\right){∥{A}_{2}{z}_{n}-{A}_{2}{x}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}-{\lambda }_{1}\left(2{\stackrel{⌢}{\alpha }}_{1}-{\lambda }_{1}\right){∥{A}_{1}{\stackrel{̃}{z}}_{n}-{A}_{1}{y}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ \le {\alpha }_{n}{∥Q{x}_{n}-{x}^{*}∥}^{2}+{∥{x}_{n}-{x}^{*}∥}^{2}-{\mu }_{2}\left(2{\stackrel{⌢}{\beta }}_{2}-{\mu }_{2}\right){∥{B}_{2}{x}_{n}-{B}_{2}{x}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}-{\mu }_{1}\left(2{\stackrel{⌢}{\beta }}_{1}-{\mu }_{1}\right){∥{B}_{1}{\stackrel{̃}{x}}_{n}-{B}_{1}{\overline{y}}^{*}∥}^{2}-{\lambda }_{2}\left(2{\stackrel{⌢}{\alpha }}_{2}-{\lambda }_{2}\right){∥{A}_{2}{z}_{n}-{A}_{2}{x}^{*}∥}^{2}\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}-{\lambda }_{1}\left(2{\stackrel{⌢}{\alpha }}_{1}-{\lambda }_{1}\right){∥{A}_{1}{\stackrel{̃}{z}}_{n}-{A}_{1}{y}^{*}∥}^{2}.\phantom{\rule{2em}{0ex}}\end{array}$
(3.11)
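The next estimate rests on the convexity of $\|\cdot\|^2$; recall the elementary identity, valid for all $u,v\in H$ and $\lambda\in[0,1]$:

```latex
\|\lambda u+(1-\lambda)v\|^2
=\lambda\|u\|^2+(1-\lambda)\|v\|^2-\lambda(1-\lambda)\|u-v\|^2
\le\lambda\|u\|^2+(1-\lambda)\|v\|^2 .
```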

Utilizing the convexity of ║ · ║2, we have

$$\begin{aligned}
\|x_{n+1}-x^*\|^2 &= \Bigl\|\beta_n(x_n-x^*)+(1-\beta_n)\frac{1}{1-\beta_n}\bigl[\gamma_n\bigl(P_C(\tilde z_n-\lambda_1 A_1\tilde z_n)-x^*\bigr)+\delta_n(Sy_n-x^*)\bigr]\Bigr\|^2\\
&\le \beta_n\|x_n-x^*\|^2+(1-\beta_n)\Bigl\|\frac{\gamma_n}{1-\beta_n}\bigl(P_C(\tilde z_n-\lambda_1 A_1\tilde z_n)-x^*\bigr)+\frac{\delta_n}{1-\beta_n}(Sy_n-x^*)\Bigr\|^2\\
&= \beta_n\|x_n-x^*\|^2+(1-\beta_n)\Bigl\|\frac{\gamma_n(y_n-x^*)+\delta_n(Sy_n-x^*)}{1-\beta_n}+\frac{\alpha_n\gamma_n}{1-\beta_n}\bigl(P_C(\tilde z_n-\lambda_1 A_1\tilde z_n)-Qx_n\bigr)\Bigr\|^2\\
&\le \beta_n\|x_n-x^*\|^2+(1-\beta_n)\Bigl\|\frac{\gamma_n(y_n-x^*)+\delta_n(Sy_n-x^*)}{1-\beta_n}\Bigr\|^2+M\alpha_n\\
&\le \beta_n\|x_n-x^*\|^2+(1-\beta_n)\|y_n-x^*\|^2+M\alpha_n,
\end{aligned}$$
(3.12)

where M > 0 is some appropriate constant. So, from (3.11) and (3.12) we have

$$\begin{aligned}
\|x_{n+1}-x^*\|^2 &\le \|x_n-x^*\|^2-\mu_2(2\hat\beta_2-\mu_2)(1-\beta_n)\|B_2 x_n-B_2 x^*\|^2\\
&\quad-\mu_1(2\hat\beta_1-\mu_1)(1-\beta_n)\|B_1\tilde x_n-B_1\bar y^*\|^2-\lambda_2(2\hat\alpha_2-\lambda_2)(1-\beta_n)\|A_2 z_n-A_2 x^*\|^2\\
&\quad-\lambda_1(2\hat\alpha_1-\lambda_1)(1-\beta_n)\|A_1\tilde z_n-A_1 y^*\|^2+\bigl(M+\|Qx_n-x^*\|^2\bigr)\alpha_n.
\end{aligned}$$

Therefore,

$$\begin{aligned}
&\lambda_1(2\hat\alpha_1-\lambda_1)(1-\beta_n)\|A_1\tilde z_n-A_1 y^*\|^2+\lambda_2(2\hat\alpha_2-\lambda_2)(1-\beta_n)\|A_2 z_n-A_2 x^*\|^2\\
&\quad+\mu_1(2\hat\beta_1-\mu_1)(1-\beta_n)\|B_1\tilde x_n-B_1\bar y^*\|^2+\mu_2(2\hat\beta_2-\mu_2)(1-\beta_n)\|B_2 x_n-B_2 x^*\|^2\\
&\le \|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+\bigl(M+\|Qx_n-x^*\|^2\bigr)\alpha_n\\
&\le \bigl(\|x_n-x^*\|+\|x_{n+1}-x^*\|\bigr)\|x_n-x_{n+1}\|+\bigl(M+\|Qx_n-x^*\|^2\bigr)\alpha_n.
\end{aligned}$$

Since $\liminf_{n\to\infty}\lambda_1(2\hat\alpha_1-\lambda_1)(1-\beta_n)>0$, $\liminf_{n\to\infty}\lambda_2(2\hat\alpha_2-\lambda_2)(1-\beta_n)>0$, $\liminf_{n\to\infty}\mu_1(2\hat\beta_1-\mu_1)(1-\beta_n)>0$, $\liminf_{n\to\infty}\mu_2(2\hat\beta_2-\mu_2)(1-\beta_n)>0$, $\|x_n-x_{n+1}\|\to 0$ and $\alpha_n\to 0$, we have

$$\lim_{n\to\infty}\|A_1\tilde z_n-A_1 y^*\|=\lim_{n\to\infty}\|A_2 z_n-A_2 x^*\|=\lim_{n\to\infty}\|B_1\tilde x_n-B_1\bar y^*\|=\lim_{n\to\infty}\|B_2 x_n-B_2 x^*\|=0.$$
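The projection estimates used throughout the proof ultimately rest on the firm nonexpansiveness of the metric projection, $\langle x-y,\,P_Cx-P_Cy\rangle\ge\|P_Cx-P_Cy\|^2$. As a purely illustrative sanity check (not part of the proof), this inequality can be verified numerically for a concrete choice of $C$; here $C$ is taken to be the box $[0,1]^5\subset\mathbb{R}^5$, an assumption made only for this sketch:

```python
# Illustration only: numerically check that the metric projection onto the
# box C = [0,1]^5 is firmly nonexpansive, i.e.
#     <x - y, P_C x - P_C y>  >=  ||P_C x - P_C y||^2   for all x, y.
import random

def proj_box(v):
    """Metric projection onto C = [0,1]^n (coordinatewise clipping)."""
    return [min(1.0, max(0.0, t)) for t in v]

def inner(a, b):
    return sum(s * t for s, t in zip(a, b))

random.seed(0)
ok = True
for _ in range(1000):
    x = [random.uniform(-3.0, 3.0) for _ in range(5)]
    y = [random.uniform(-3.0, 3.0) for _ in range(5)]
    px, py = proj_box(x), proj_box(y)
    d = [s - t for s, t in zip(px, py)]                 # P_C x - P_C y
    lhs = inner([s - t for s, t in zip(x, y)], d)       # <x - y, Px - Py>
    rhs = inner(d, d)                                   # ||Px - Py||^2
    ok = ok and lhs >= rhs - 1e-12
print(ok)
```

The tested inequality is exactly the firm nonexpansiveness invoked in Step 3 below.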

Step 3. $\lim_{n\to\infty}\|z_n-y_n\|=\lim_{n\to\infty}\|x_n-z_n\|=\lim_{n\to\infty}\|Sy_n-y_n\|=0$.

Indeed, set ${v}_{n}={P}_{C}\left({\tilde z}_{n}-{\lambda }_{1}{A}_{1}{\tilde z}_{n}\right)$. Noting that $P_C$ is firmly nonexpansive, i.e., $\langle x-y,\,P_Cx-P_Cy\rangle\ge\|P_Cx-P_Cy\|^2$, we have

$$\|\tilde z_n-y^*\|^2\le\frac12\Bigl\{\|z_n-x^*\|^2+\|\tilde z_n-y^*\|^2-\|z_n-\tilde z_n-(x^*-y^*)-\lambda_2(A_2 z_n-A_2 x^*)\|^2\Bigr\}$$

and

$$\|v_n-x^*\|^2\le\frac12\Bigl\{\|z_n-x^*\|^2+\|v_n-x^*\|^2-\|\tilde z_n-v_n+(x^*-y^*)-\lambda_1(A_1\tilde z_n-A_1 y^*)\|^2\Bigr\}$$

due to (3.9). Thus, we have

$$\|\tilde z_n-y^*\|^2\le\|z_n-x^*\|^2-\|z_n-\tilde z_n-(x^*-y^*)\|^2+2\lambda_2\bigl\langle z_n-\tilde z_n-(x^*-y^*),\,A_2 z_n-A_2 x^*\bigr\rangle-\lambda_2^2\|A_2 z_n-A_2 x^*\|^2,$$
(3.13)

and

$$\|v_n-x^*\|^2\le\|z_n-x^*\|^2-\|\tilde z_n-v_n+(x^*-y^*)\|^2+2\lambda_1\|A_1\tilde z_n-A_1 y^*\|\,\|\tilde z_n-v_n+(x^*-y^*)\|.$$

It follows that

$$\begin{aligned}
\|y_n-x^*\|^2 &\le \alpha_n\|Qx_n-x^*\|^2+(1-\alpha_n)\|v_n-x^*\|^2\\
&\le \alpha_n\|Qx_n-x^*\|^2+\|v_n-x^*\|^2\\
&\le \alpha_n\|Qx_n-x^*\|^2+\|z_n-x^*\|^2-\|\tilde z_n-v_n+(x^*-y^*)\|^2\\
&\quad+2\lambda_1\|A_1\tilde z_n-A_1 y^*\|\,\|\tilde z_n-v_n+(x^*-y^*)\|.
\end{aligned}$$
(3.14)
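The last term of (3.14) arises from expanding the square $\|\tilde z_n-v_n+(x^*-y^*)-\lambda_1(A_1\tilde z_n-A_1 y^*)\|^2$, discarding the nonpositive term $-\lambda_1^2\|A_1\tilde z_n-A_1 y^*\|^2$, and applying the Cauchy–Schwarz inequality:

```latex
2\lambda_1\bigl\langle \tilde z_n-v_n+(x^*-y^*),\,A_1\tilde z_n-A_1 y^*\bigr\rangle
\le 2\lambda_1\,\|\tilde z_n-v_n+(x^*-y^*)\|\,\|A_1\tilde z_n-A_1 y^*\|.
```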

Utilizing (3.2), (3.10), (3.12) and (3.13), we have

It follows that

$$\begin{aligned}
(1-\beta_n)\|z_n-\tilde z_n-(x^*-y^*)\|^2 &\le \bigl(\|x_n-x^*\|+\|x_{n+1}-x^*\|\bigr)\|x_{n+1}-x_n\|+\bigl(M+\|Qx_n-x^*\|^2\bigr)\alpha_n\\
&\quad+2(1-\beta_n)\lambda_2\|z_n
\end{aligned}$$