
Inertial-based extragradient algorithm for approximating a common solution of split-equilibrium problems and fixed-point problems of nonexpansive semigroups

Abstract

In this paper, we introduce a simple and easily computable algorithm for finding a common solution to split-equilibrium problems and fixed-point problems in the framework of real Hilbert spaces. New self-adaptive step sizes are adopted to avoid dependence on Lipschitz constants, which are rarely available in practical implementations. Furthermore, an inertial term is incorporated to speed up the rate of convergence, a property that is very desirable in applications. Strong convergence is obtained under mild assumptions, chief among them that the bifunctions are pseudomonotone; this condition is weaker and more general than strong pseudomonotonicity or monotonicity. Our results improve and extend several previously announced results in this direction of research.

1 Introduction

Fixed-point theory in nonlinear functional analysis has proven to be at the heart of technological development and of numerous applications ranging from engineering and computer science to the mathematical and social sciences. This is because many real-world problems, once transformed into mathematical equations, may not admit analytic solutions. Hence, for more than a century, establishing the existence and uniqueness of solutions, namely the fixed points of certain classes of operators, and approximating such solutions have formed an interesting and flourishing area of research. The Banach contraction mapping principle is one of the cornerstones of this achievement. In recent times, fixed-point theory has been successfully employed in data science, machine learning, and artificial intelligence, to mention but a few (for these developments, the reader is encouraged to consult [1–4]). One of the powerful tools that paved the way for this modern development is the use of nonexpansive operators. A self-map T defined on C is said to be nonexpansive if \(\|Tx - Ty\|\leq \|x-y\|, \forall x, y \in C\), where C is a nonempty, closed, and convex subset of a real Hilbert space H. A point \(x^{*} \in C\) is said to be a fixed point of T if \(Tx^{*}=x^{*}\). Operators of this type have been used extensively to model inverse problems and play a central role in signal processing and image reconstruction, convex feasibility problems, approximation theory, game theory, and convex optimization theory, among many others (see [5–9] and the references therein). Since the existence of solutions to the problems mentioned above relies heavily on the existence of fixed points of certain classes of nonlinear operators, we note that there is a richer and more general class of operators, the one-parameter family, whose common fixed points have valuable applications in the applied sciences.
It is known that a family of mappings \(\lbrace T(t): {0\leq t< \infty}\rbrace \) defines a semigroup if the functional equations \(T(0)x=x\) and \(T(t+s)=T(t)T(s)\) are satisfied, where t is the time parameter and each \(T(t)\) maps the “state space” of the system into itself. These maps completely determine the time evolution of the system in the following way: if the system is in state \(x_{0}\) at time \(t_{0}=0\), then at time t it is in the state \(T(t)x_{0}\). Furthermore, in many cases, complete knowledge of the maps \(T(t)\) is difficult to obtain. This challenge led to great discoveries of mathematical physics, based on the invention of calculus, which made it possible to understand the “infinitesimal changes” occurring at any given time; the functional equation can then be transformed into a differential equation. We therefore mention that this family of operators plays a crucial role in abstract Cauchy problems [10, 11]. As this family is related to first-order linear differential equations, it corresponds to dynamical systems and evolution equations, and can be applied to deterministic and stochastic dynamical systems. It has been used in partial differential equations, evolution equation theory, fixed-point theory, and many other areas (see [12, 13, 48]).

One of the most fruitful areas where fixed points of nonexpansive mappings have been successfully employed is signal processing and image recovery, which are examples of inverse problems. Among the many research articles in this direction, Censor and Elfving [48] published an influential paper in which they introduced and studied the split-feasibility problem (SFP) for modeling inverse problems arising in phase retrieval and medical imaging. The problem is of the form:

$$\begin{aligned} \text{find } x^{*}\in C \text{ such that } Ax^{*} \in D, \end{aligned}$$
(1.1)

where C and D are nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(A: H_{1} \rightarrow H_{2}\) is a bounded linear operator. The SFP (1.1) has received huge attention due to its wide applications across different fields of study (see [2, 6, 14–20] and the cited references contained therein). The classical method for solving (1.1) is the well-known CQ algorithm, introduced and studied by Byrne [6], which takes the following iterative form:

$$\begin{aligned} x_{n+1} = P_{C} \bigl(x_{n} + \gamma A^{T}(P_{Q} -I)Ax_{n} \bigr),\quad n=1,2, \ldots, \end{aligned}$$
(1.2)

where A is an \(N \times M\) real matrix, \(A^{T}\) is the transpose of A, \(P_{C}\) and \(P_{Q}\) are the metric projections onto the nonempty, closed, and convex subsets C and Q, respectively, I is the identity operator, \(\gamma \in (0, \frac{1}{L})\), and L denotes the largest eigenvalue of the matrix \(A^{T} A\). It is well known that \(x\in C\) solves (1.1) if and only if it solves the fixed-point equation:

$$\begin{aligned} P_{C} \bigl(I - \gamma A^{*}(I-P_{Q})A \bigr)x=x, \end{aligned}$$
(1.3)

where \(A^{*}\) is the adjoint of A and \(\gamma > 0\).
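To make the CQ iteration concrete, here is a minimal numerical sketch (not taken from any cited work) for a toy SFP in \(\mathbb{R}^{2}\), where C and Q are boxes so that both projections reduce to componentwise clipping; the matrix A and the boxes are hypothetical choices:

```python
import numpy as np

# Toy split-feasibility problem: find x in C = [0,1]^2 with Ax in Q = [1,2]^2.
# A, C, and Q are hypothetical illustrative choices.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

def proj_box(v, lo, hi):
    # Metric projection onto the box [lo, hi]^2: componentwise clipping.
    return np.clip(v, lo, hi)

L = np.linalg.norm(A.T @ A, 2)   # largest eigenvalue of A^T A
gamma = 1.0 / (2.0 * L)          # step size gamma in (0, 1/L)

x = np.array([5.0, -3.0])        # arbitrary starting point
for _ in range(200):
    # CQ iteration (1.2): x <- P_C( x + gamma * A^T (P_Q - I) A x )
    x = proj_box(x + gamma * A.T @ (proj_box(A @ x, 1.0, 2.0) - A @ x),
                 0.0, 1.0)
```

For this data the iterates settle at \(x \approx (1, 1/3)\), which indeed satisfies \(x \in C\) and \(Ax = (2, 1) \in Q\).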

Being at the hub of modern research, problem (1.1) has been modified and generalized into different areas; for instance, in 2009, Censor and Segal [21] introduced the following problem:

$$\begin{aligned} \text{find } x\in F(S) \text{ such that } Ax \in F(T), \end{aligned}$$
(1.4)

where \(S: H_{1} \rightarrow H_{1} \) and \(T: H_{2} \rightarrow H_{2}\) are two nonlinear directed operators, and the nonempty fixed-point sets of S and T are denoted by \(F(S): = \lbrace x\in H_{1}: Sx = x \rbrace \) and \(F(T): = \lbrace x\in H_{2}: Tx = x \rbrace \). Problem (1.4) is called the split common fixed-point problem (SCFP); it reduces to the SFP when we set \(S: = P_{C}\) and \(T:= P_{Q}\). Another important direction of generalization is the split-equilibrium problem (SEP), a concept that was introduced and studied by Kazmi and Rizvi [22] for solving optimization problems and problems arising in economics and finance. The SEP is defined as follows:

$$\begin{aligned} \text{find } x^{*}\in C \text{ such that } F \bigl(x^{*}, x \bigr) \geq 0\quad \forall x\in C, \end{aligned}$$
(1.5)

and

$$\begin{aligned} y^{*} = Ax^{*}\in D \text{ such that } G \bigl(y^{*}, y \bigr) \geq 0 \quad\forall y\in D. \end{aligned}$$
(1.6)

Furthermore, problems (1.5) and (1.6) reduce to the well-known equilibrium problem (EP) if \(G\equiv F, C=D, H_{1} \equiv H_{2}\), and \(A\equiv I\). The EP is an optimization problem introduced by Blum and Oettli [23] for the purpose of solving problems arising in economics, engineering, and optimization. The equilibrium problem is formulated as follows:

$$\begin{aligned} \text{find } x^{*}\in C \text{ such that } F \bigl(x^{*}, x \bigr) \geq 0\quad \forall x\in C. \end{aligned}$$
(1.7)

The EP (1.7) includes, as a special case, the variational inequality problem (VIP), optimization problems, Nash equilibrium problems, saddle-point problems, and fixed-point problems. It is obvious that (1.7) is a unifying model for several problems in engineering and economics among many other areas.
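To see one of these reductions explicitly, the VIP is recovered from (1.7) by the standard choice of bifunction \(F(x,y):= \langle Tx, y-x\rangle \) for a given operator \(T: C \rightarrow H\); the condition \(F(x^{*},x)\geq 0\) for all \(x \in C\) then reads

$$\begin{aligned} \bigl\langle Tx^{*}, x-x^{*} \bigr\rangle \geq 0, \quad \forall x\in C, \end{aligned}$$

which is precisely the classical variational inequality problem.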

Our aim in this paper is to study an interesting combination of problems (1.4), (1.5), and (1.6), called the split-equilibrium fixed-point problem (SEFPP), which is formulated as follows:

$$\begin{aligned} \text{find } x^{*}\in C \text{ such that } \textstyle\begin{cases} F(x^{*}, x) \geq 0 & \forall x\in C, \\ x^{*} \in F(T), \end{cases}\displaystyle \end{aligned}$$
(1.8)

and

$$\begin{aligned} y^{*} = Ax^{*}\in D \text{ such that } \textstyle\begin{cases} G(y^{*}, y) \geq 0 & \forall y\in D, \\ y^{*} \in F(S), \end{cases}\displaystyle \end{aligned}$$
(1.9)

where \(F:C \times C \rightarrow \mathbb{R}\) and \(G:D\times D \rightarrow \mathbb{R}\) are two bifunctions, C and D are two nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, A is a bounded linear operator defined on \(H_{1}\), while S and T are nonexpansive mappings. It is pertinent to note that problem (1.8) and (1.9) is an important generalization of many convex optimization problems. Its study is rich and demanding, in the sense that a wide range of mathematical problems can be cast into this framework. Looking closely at problem (1.8) and (1.9), if S and T are the identity mappings, we obtain the classical SEP (1.5) and (1.6). Observe that (1.1) is also recovered from (1.8) and (1.9) if \(F\equiv 0\equiv G\) and S and T are the identity mappings. Numerous inverse problems can be cast in this model, and problem (1.8) and (1.9) has been widely employed (see [24–26] and their cited references).

In recent years, algorithms with fast convergence have been sought by many researchers, as such algorithms are profitable in applications. Polyak [27] was the first to introduce inertial extrapolation techniques, via the heavy-ball method. This technique has been tested in practice and proven to speed up the rate of convergence of iterative algorithms. Since then, many researchers have incorporated it to improve their results (see [28–31] and the references therein).

In 2008, Plubtieng and Punpaeng [32] proposed and studied fixed-point solutions of variational inequalities for nonexpansive semigroups in real Hilbert spaces. In an attempt to improve on [32], Cianciaruso et al. [33] proposed an algorithm for equilibrium and fixed-point problems in real Hilbert spaces; see also [34].

In 2019, problem (1.8) and (1.9) was studied by Narin et al. [35], who obtained a strong convergence result. They proposed the following iterative algorithm:

$$\begin{aligned} \textstyle\begin{cases} u_{n} = \operatorname{argmin} \lbrace \mu _{n} g(P_{D}(Lx_{n}),u) + \frac{1}{2} \Vert u-P_{D}(Lx_{n}) \Vert ^{2};u\in D\rbrace, \\ v_{n} = \operatorname{argmin} \lbrace \mu _{n} g(u_{n}, u) + \frac{1}{2} \Vert u-P_{D}(Lx_{n}) \Vert ^{2};u\in D\rbrace, \\ y_{n}= P_{C}(x_{n} + \delta _{n} L^{*}(Sv_{n} - Lx_{n})), \\ t_{n} = \operatorname{argmin} \lbrace \lambda _{n} f(y_{n},y) + \frac{1}{2} \Vert y-y_{n} \Vert ^{2};y\in C\rbrace, \\ z_{n} = \operatorname{argmin} \lbrace \lambda _{n} f(t_{n},y) + \frac{1}{2} \Vert y-y_{n} \Vert ^{2};y\in C\rbrace, \\ x_{n+1}= \alpha _{n} h(x_{n})+ (1-\alpha _{n})(\beta _{n} x_{n} + (1- \beta _{n})Tz_{n}), \end{cases}\displaystyle \end{aligned}$$
(1.10)

where \(0< \underline{\lambda}\leq \lambda _{n} \leq \overline{\lambda} < \min \lbrace \frac{1}{2c_{1}}, \frac{1}{2c_{2}}\rbrace, 0< \underline{\mu}\leq \mu _{n} \leq \overline{\mu} < \min\lbrace \frac{1}{2d_{1}}, \frac{1}{2d_{2}}\rbrace, L\) is a bounded linear operator, while h is a contraction mapping.

In 2020, Arfat et al. [36] constructed the extragradient method for approximation of results for split-equilibrium problems and fixed-point problems of nonexpansive semigroups in real Hilbert spaces. In fact, they presented the following algorithm:

$$\begin{aligned} \textstyle\begin{cases} x_{1} \in C_{1} = C, \\ y_{n} = \operatorname{argmin} \lbrace \lambda _{n} F(x_{n}, y) + \frac{1}{2} \Vert y-x_{n} \Vert ^{2};y\in C\rbrace, \\ z_{n} = \operatorname{argmin} \lbrace \lambda _{n} F(y_{n}, z) + \frac{1}{2} \Vert z-x_{n} \Vert ^{2};z\in C\rbrace, \\ v_{n}= (1-\alpha _{n}) z_{n} + \alpha _{n} \frac{1}{t_{n}}\int ^{t_{n}}_{0} T(t) z_{n} \,dt, \\ u_{n} = T^{G}_{r_{n}} Av_{n}, \\ x_{n+1}= P_{C}(v_{n} + \gamma A^{*}(\frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)u_{n} \,ds - Av_{n})), \end{cases}\displaystyle \end{aligned}$$
(1.11)

where

  1. C1

    \(\lbrace \lambda _{n} \rbrace \subset [a,b]\) for some \(a,b \in (0, \operatorname{min} \lbrace \frac{1}{2c_{1}}, \frac{1}{2c_{2}} \rbrace )\);

  2. C2

    \(0\leq d< e\leq \alpha _{n} \leq f< 1, \liminf_{n\rightarrow \infty} r_{n} > 0, \lim_{n\rightarrow \infty} s_{n} = 0 = \lim_{n\rightarrow \infty} t_{n} \);

  3. C3

    \(0< \gamma < \frac{1}{\|A\|^{2}}\).

They proved, under the stated conditions, that the sequence \(\lbrace x_{n} \rbrace \) generated by Algorithm (1.11) converges weakly to a point of the solution set, provided the latter is nonempty. Furthermore, the authors in [36] modified the above algorithm and obtained strong convergence using the shrinking-projection technique.

In 2021, Shehu et al. [37] studied a modified inertial extragradient method for solving equilibrium problems. They presented the following algorithm:

$$\begin{aligned} \textstyle\begin{cases} w_{n} =\alpha _{n} x_{0} + (1-\alpha _{n})x_{n} + \theta _{n}(x_{n} - x_{n-1}), \\ y_{n} = \operatorname{argmin}_{y\in C}\lbrace \lambda _{n} f(w_{n}, y)+\frac{1}{2} \Vert y-w_{n} \Vert ^{2} \rbrace, \\ z_{n} = \operatorname{argmin}_{y\in T_{n}} \lbrace \lambda _{n} f(y_{n}, y)+ \frac{1}{2} \Vert y-y_{n} \Vert ^{2}\rbrace, \\ x_{n+1}=(1-\tau )w_{n} + \tau z_{n}, \end{cases}\displaystyle \end{aligned}$$
(1.12)

where

$$\begin{aligned} {\lambda _{n+1}} := \textstyle\begin{cases} \min \lbrace \frac{\mu}{2} \frac{ \Vert w_{n}-y_{n} \Vert ^{2}+ \Vert z_{n} - y_{n} \Vert ^{2}}{f(w_{n}, z_{n}) - f(w_{n}, y_{n}) - f(y_{n}, z_{n})}, \lambda _{n} \rbrace, \\ \phantom{\lambda _{n}, \quad}\text{if } f(w_{n}, z_{n}) - f(w_{n}, y_{n}) - f(y_{n}, z_{n})> 0, \\ \lambda _{n}, \quad\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
(1.13)

where \(T_{n} = \lbrace z \in H: \langle w_{n} -\lambda _{n} v_{n}-y_{n},z-y_{n} \rangle \leq 0 \rbrace \), \(q_{n} = w_{n} -\lambda _{n} v_{n} -y_{n}\), \(v_{n} \in \partial _{2} f(w_{n},\cdot)\), and \(q_{n} \in N_{C}(y_{n})\).

Under some mild conditions and assumptions, the authors proved that the sequence \(\lbrace x_{n}\rbrace \) generated by Algorithm (1.12) converges strongly to a solution, provided \(EP(f,C)\neq \emptyset \). Further, they improved on Algorithm (1.12) by studying a modified inertial subgradient extragradient method for solving equilibrium problems and again obtained strong convergence. Their work [37] improves on the works of [36] and [35] to a reasonable extent.

Remark 1

Algorithm (1.10) of Narin et al. [35] has one major drawback, apart from the fact that the projection operators slow down the convergence rate: the step sizes \(\lambda _{n}\) and \(\mu _{n}\) depend on the Lipschitz constants (\(c_{1}\) and \(c_{2}\), \(d_{1}\) and \(d_{2}\)) of the bifunctions. Likewise, Algorithm (1.11) of Arfat et al. [36] suffers from the step size \(\lambda _{n}\) being dependent on the Lipschitz constants of the bifunctions and from γ depending on the spectral radius of the operator \(A^{*} A\). In many practical situations, the Lipschitz constants are difficult, and often impossible, to estimate, and the spectral radius is computationally expensive to obtain. These drawbacks affect the efficiency of their algorithms, making them difficult to implement in practical cases.

The question of interest is: can these conditions in the two algorithms be removed and the results recovered via a new inertial extrapolation algorithm? Our interest in this paper is to give an affirmative answer to this question.

The main contributions of this paper shall include:

(1) A new modified extragradient method is proposed, combining (1.10), (1.11), and (1.12), where the nonexpansive mappings are precisely nonexpansive semigroups on real Hilbert spaces. The new algorithm includes, as a special case, Algorithm (1.12) of [37]. This shows that our Algorithm 3.3 improves on many others in this direction.

(2) We construct our algorithm in such a way that the step size neither depends on knowledge of the operator norm of the bounded linear map nor requires an estimate of it. Moreover, the control parameters do not depend on the Lipschitz constants of the bifunctions.

(3) We have observed that algorithms of this type involve many projection operators, which slow the rate of convergence. Thus, we deem it important to incorporate the inertial term \(\theta _{n}(x_{n}-x_{n-1})\), in the spirit of Polyak [27], to improve the performance of our algorithm. This significantly improves the works of Arfat et al. [36] and Narin et al. [35]. Our inertial extrapolation term does not require computing the norm difference between the terms \(x_{n}\) and \(x_{n-1}\). The inertial factor \(\theta _{n}\) is nonsummable and nondiminishing, as has been required by some authors, and it differs from the one used by Shehu et al. [37], Algorithm 2.1.

(4) We establish strong convergence with self-adaptive step sizes. Unlike [30, 38], we do not assume that the bifunctions are strongly pseudomonotone; that assumption is restrictive and stronger than pseudomonotonicity, which in turn is more general than monotonicity.

The rest of the paper is organized as follows: In Sect. 2, some basic definitions, related to our work and helpful Lemmas are stated without their proofs. Our Algorithm is presented in Sect. 3 with some conditions that will enable us to obtain the strong convergence. The proofs are discussed in detail in Sect. 4. In Sect. 5, applications of our algorithm are studied, while in Sect. 6 numerical illustrations are presented, and the conclusion is given in Sect. 7.

2 Preliminaries

As mentioned earlier, some lemmas that are very relevant and helpful in proving our results will be stated without their proofs. Basic definitions will also be given in this section.

Throughout this paper, C and D denote nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let ⇀ and → represent weak and strong convergence, respectively, and let \(\langle \cdot,\cdot\rangle \) and \(\|\cdot\|\) represent the inner product and the induced norm.

Definition 2.1

A map \(T: H_{1} \rightarrow H_{1}\) is said to be a contraction if there exists a coefficient \(\alpha \in (0,1)\) such that

$$\begin{aligned} \Vert Tx-Ty \Vert \leq \alpha \Vert x-y \Vert , \quad\forall x,y \in H_{1}. \end{aligned}$$

Definition 2.2

A map \(T: H_{1} \rightarrow H_{1}\) is said to be nonexpansive if

$$\begin{aligned} \Vert Tx-Ty \Vert \leq \Vert x-y \Vert , \quad\forall x,y \in H_{1}. \end{aligned}$$

That is, \(\alpha =1\) in Definition 2.1.

Definition 2.3

A map \(T: H_{1} \rightarrow H_{1}\) is said to be firmly nonexpansive, if

$$\begin{aligned} \langle Tx-Ty, x-y\rangle \geq \Vert Tx-Ty \Vert ^{2}, \quad \forall x,y \in H_{1}. \end{aligned}$$

Definition 2.4

A one-parameter family \(\lbrace T(t): 0\leq t< \infty \rbrace \) defined on \(H_{1}\) to \(H_{1}\) is called a nonexpansive semigroup if it satisfies the following conditions:

  1. (a)

    \(T(0)x=x, \forall x\in H_{1}\);

  2. (b)

    \(T(a+b) = T(a)T(b), \forall a,b\geq 0\);

  3. (c)

    For any \(x\in H_{1}, T(t)x\) is continuous;

  4. (d)

    \(\|T(t)x- T(t)y\| \leq \|x-y\|, \forall x,y \in H_{1}\) and for \(t\geq 0\).

The fixed-point set of this family of the nonlinear map is denoted by

$$\begin{aligned} F(T)= \bigl\lbrace x\in H_{1}: T(t)x=x, 0\leq t< \infty \bigr\rbrace . \end{aligned}$$

It is well known (see [39]) that \(F(T)\) is closed and convex.

Example 2.5

Let \(H_{1} =\mathbb{R}\) and \(\Omega:= \lbrace T(t): 0\leq t < \infty \rbrace \), where \(T(t)x=(\frac{1}{5^{t}})x\). Then, Ω is a nonexpansive semigroup. Indeed,

  1. (a)

    \(T(0)x = (\frac{1}{5^{0}}) x=x\);

  2. (b)

    \(T(a+b)x = (\frac{1}{5^{a+b}})x=(1/5^{a})(1/5^{b})x= T(a)T(b)x, \forall a,b \geq 0\);

  3. (c)

    for each \(x \in H_{1}\), the mapping \(T(t)x = (1/5^{t})x\) is continuous;

  4. (d)

    \(\|T(t)x - T(t)y\| = \|(1/5^{t})x - (1/5^{t})y\| = \|(1/5^{t})(x-y)\|= (1/5^{t})\|x-y\| \leq \|x-y\|, \forall x,y \in H_{1}, t\geq 0\).
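The four properties in Example 2.5 can also be checked numerically; the following sketch (illustrative only, with arbitrary sample points) verifies (a), (b), and (d):

```python
import math
import random

def T(t, x):
    # Example 2.5: T(t)x = (1/5**t) * x on H1 = R.
    return x / 5**t

random.seed(0)
x, y = random.uniform(-10, 10), random.uniform(-10, 10)
a, b, t = 0.7, 1.3, 2.5

assert T(0, x) == x                              # (a) T(0) is the identity
assert math.isclose(T(a + b, x), T(a, T(b, x)))  # (b) T(a+b) = T(a)T(b)
assert abs(T(t, x) - T(t, y)) <= abs(x - y)      # (d) nonexpansiveness
```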

Definition 2.6

Let \(x,y \in H\). A map \(T: H \rightarrow H\) is called:

  1. (1)

    monotone if \(\langle Tx - Ty, x-y\rangle \geq 0\);

  2. (2)

    pseudomonotone on H if \(\langle Tx, y-x\rangle \geq 0\Rightarrow \langle Ty,y-x\rangle \geq 0\);

  3. (3)

    Lipschitz continuous on H if there exists a positive constant L such that \(\|Tx -Ty\|\leq L\|x-y\| \).

Definition 2.7

A bifunction \(F: C \times C \rightarrow \mathbb{R}\) is said to be:

  1. (i)

    Strongly monotone on C if there exists a constant \(\gamma >0\) such that

    $$\begin{aligned} F(x,y)+F(y,x)+ \gamma \Vert x-y \Vert ^{2} \leq 0,\quad \forall x,y \in C; \end{aligned}$$
  2. (ii)

    monotone if

    $$\begin{aligned} F(x,y)+F(y,x)\leq 0, \quad\forall x,y \in C; \end{aligned}$$
  3. (iii)

    pseudomonotone if \(F(x,y)\geq 0 \Rightarrow F(y,x)\leq 0, \forall x,y \in C\);

  4. (iv)

    to satisfy Lipschitz-type conditions if there exist positive constants \(c_{1}\) and \(c_{2}\) such that

    $$\begin{aligned} c_{1} \Vert x-y \Vert ^{2} +c_{2} \Vert y-z \Vert ^{2} \geq F(x,z)-F(x,y) - F(y,z), \quad\forall x,y,z \in C. \end{aligned}$$

    Observe that \((i) \Longrightarrow (ii)\Longrightarrow (iii)\), but the converse is not necessarily true.

We shall assume from now onwards that the bifunction \(G: D \times D \rightarrow \mathbb{R}\) satisfies the following conditions:

  1. (A1)

    \(G(x,x) = 0, \forall x \in D\);

  2. (A2)

    G is monotone on D;

  3. (A3)

    for each \(u, v, w \in D\), the function \(\lambda \longmapsto G(\lambda w + (1-\lambda )u, v)\) is upper hemicontinuous, that is,

    $$\begin{aligned} \limsup_{\lambda \rightarrow 0^{+}} G \bigl(\lambda w + (1-\lambda )u, v \bigr) \leq G(u,v); \end{aligned}$$
  4. (A4)

    for each \(u \in D\), the function \(v \longmapsto G(u,v)\) is convex and lower semicontinuous.

We further note that the bifunction \(F: C\times C \rightarrow \mathbb{R}\) possesses the following properties:

  1. (B1)

    \(F(x,x)=0, \forall x\in C\);

  2. (B2)

    F is pseudomonotone on C with respect to \(EP(C, F)\);

  3. (B3)

    F is weakly continuous on \(C \times C\) in the sense that if \(x,y \in C\) and \(\lbrace x_{n} \rbrace, \lbrace y_{n}\rbrace \subset C\) converge weakly to x and y, respectively, then \(F(x_{n}, y_{n})\rightarrow F(x,y)\) as \(n\rightarrow \infty \);

  4. (B4)

    for each \(x \in C\), the function \(y \longmapsto F(x,y)\) is convex and subdifferentiable;

  5. (B5)

    F is Lipschitz-type continuous on C.

For a nonempty, closed, and convex subset C of \(H_{1}\), we define the metric projection \(P_{C}: H_{1} \rightarrow C\) as follows: for each \(x \in H_{1}\), there exists a unique nearest point \(P_{C} x \in C\) satisfying the inequality:

$$\begin{aligned} \Vert x-P_{C} x \Vert \leq \Vert x-y \Vert , \quad\forall y \in C. \end{aligned}$$

It is characterized by

$$\begin{aligned} \langle x- P_{C} x, P_{C} x-y\rangle \geq 0, \quad\forall x \in H_{1}, y\in C. \end{aligned}$$

This further implies that

$$\begin{aligned} \Vert x-P_{C} x \Vert ^{2} + \Vert y - P_{C} x \Vert ^{2} \leq \Vert x-y \Vert ^{2}, \quad\forall x\in H_{1}, y \in C. \end{aligned}$$

This operator \(P_{C}\) is not only nonexpansive but also firmly nonexpansive.
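As an illustration with a hypothetical choice of C, take C to be the closed unit ball of \(\mathbb{R}^{3}\), whose projection has a simple closed form; the sketch below checks the characterization inequality and firm nonexpansiveness at random points:

```python
import numpy as np

def proj_ball(v, r=1.0):
    # Metric projection onto the closed ball of radius r centered at the origin.
    n = np.linalg.norm(v)
    return v if n <= r else (r / n) * v

rng = np.random.default_rng(1)
x = 5 * rng.normal(size=3)
y = 5 * rng.normal(size=3)
px, py = proj_ball(x), proj_ball(y)
c = proj_ball(5 * rng.normal(size=3))   # an arbitrary point of C

# Characterization: <x - P_C x, P_C x - c> >= 0 for every c in C.
assert np.dot(x - px, px - c) >= -1e-12

# Firm nonexpansiveness: <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2.
assert np.dot(px - py, x - y) >= np.linalg.norm(px - py)**2 - 1e-12
```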

Definition 2.8

A mapping \(T: H_{1} \rightarrow H_{1}\) is said to be demiclosed at the origin if for any sequence \(\lbrace x_{n} \rbrace \subset H_{1}\) with \(x_{n} \rightharpoonup x\) and \(\| x_{n} - Tx_{n} \| \longrightarrow 0\), we have \(Tx=x\). In particular, if T is nonexpansive, then \(I-T\) is demiclosed at 0.

Lemma 2.9

([40])

For all \(x, y, z, u, v \in H_{1}\) and \(\alpha \in \mathbb{R}\), the following hold:

  1. (1)

    \(\| x-y\|^{2} = \|x\|^{2} - \|y\|^{2} - 2\langle x-y,y\rangle \);

  2. (2)

    \(\|x+y\|^{2} \leq \|x\|^{2} + 2\langle y, x+y\rangle \);

  3. (3)

    \(2\langle x-y, u-v\rangle = \|x-v\|^{2} + \|y-u\|^{2}-\|x-u\|^{2} -\|y-v \|^{2}\);

  4. (4)

    \(\|\alpha x + (1-\alpha )y\|^{2} = \alpha \|x\|^{2} + (1-\alpha )\|y \|^{2} -\alpha (1-\alpha )\|x-y\|^{2}\).

Lemma 2.10

([41])

For each \(x \in H_{1}\) and \(\lambda > 0\),

$$\begin{aligned} \lambda \bigl(g(y)- g \bigl(\mathrm{prox}_{\lambda g}(x) \bigr) \bigr) \geq \bigl\langle x- \mathrm{prox}_{\lambda g}(x), y- \mathrm{prox}_{\lambda g}(x) \bigr\rangle ,\quad \forall y \in C, \end{aligned}$$

where \(\mathrm{prox}_{\lambda g}(x): = \operatorname{argmin} \lbrace \lambda g(y) + \frac{1}{2}\| x-y\|^{2}; y \in C \rbrace \).
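For intuition, Lemma 2.10 can be checked directly in a simple one-dimensional case: with \(C = \mathbb{R}\) and the hypothetical choice \(g(y)=\frac{1}{2}y^{2}\), the prox has the closed form \(\mathrm{prox}_{\lambda g}(x)=x/(1+\lambda )\), and the stated inequality holds at sample points:

```python
lam = 0.8   # an arbitrary lambda > 0

def g(y):
    # Illustrative convex function g(y) = y^2 / 2.
    return 0.5 * y * y

def prox(x):
    # argmin_y { lam * g(y) + (1/2)(x - y)^2 } = x / (1 + lam) for this g.
    return x / (1.0 + lam)

x = 3.0
p = prox(x)
for y in [-2.0, 0.0, 1.5, 4.0]:
    # Lemma 2.10: lam * (g(y) - g(prox(x))) >= <x - prox(x), y - prox(x)>.
    assert lam * (g(y) - g(p)) >= (x - p) * (y - p) - 1e-12
```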

Lemma 2.11

([42])

Let C be a nonempty, closed, and convex subset of a real Hilbert space \(H_{1}\). Let \(\omega:= \lbrace T(t): 0\leq t < \infty \rbrace \) be a nonexpansive semigroup on C; then, for all \(v \geq 0\),

$$\begin{aligned} \lim_{t \rightarrow \infty}\sup_{x \in C} \biggl\Vert \frac{1}{t} \int ^{t}_{0} T(s) x \,ds - T(v) \biggl( \frac{1}{t} \int ^{t}_{0} T(s)x \,ds \biggr) \biggr\Vert =0. \end{aligned}$$

Lemma 2.12

([43])

Let \(\lbrace a_{n} \rbrace \) be a sequence of nonnegative real numbers such that

$$\begin{aligned} a_{n+1} \leq (1-\gamma _{n} )a_{n} + \gamma _{n} \delta _{n}, \quad\forall n \in \mathbb{N}, \end{aligned}$$

where \(\lbrace \gamma _{n} \rbrace \) is a sequence in \((0,1)\) and \(\lbrace \delta _{n} \rbrace \) is a sequence in \(\mathbb{R}\) such that

  1. (a)

    \(\lim_{n\rightarrow \infty} \gamma _{n} =0, \sum^{\infty}_{n=1} \gamma _{n} = \infty \);

  2. (b)

    \(\limsup_{n\rightarrow \infty} \delta _{n} \leq 0\).

Then, \(\lim_{n\rightarrow \infty} a_{n} =0\).
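The mechanics of Lemma 2.12 can be seen numerically with the illustrative (hypothetical) choices \(\gamma _{n} = \delta _{n} = 1/(n+1)\), which satisfy conditions (a) and (b):

```python
# Run the recursion a_{n+1} = (1 - gamma_n) a_n + gamma_n * delta_n
# with gamma_n = delta_n = 1/(n+1): gamma_n -> 0, sum gamma_n = infinity,
# and limsup delta_n <= 0, so Lemma 2.12 gives a_n -> 0.
a = 10.0
for n in range(1, 200_000):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    a = (1 - gamma) * a + gamma * delta

# a has decayed close to 0, as the lemma predicts.
```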

Lemma 2.13

([44])

Let \(\lbrace a_{n} \rbrace \) be a sequence of real numbers for which there exists a subsequence \(\lbrace a_{n_{i}} \rbrace \) such that \(a_{n_{i}}\leq a_{n_{i}+1}\) for all \(i \in \mathbb{N}\). Then, there exists a nondecreasing sequence \(\lbrace m_{k} \rbrace \subset \mathbb{N}\) such that \(m_{k} \rightarrow \infty \) as \(k\rightarrow \infty \) and the following conditions are satisfied:

  1. (a)

    \(a_{m_{k}}\leq a_{m_{k}+1}\) and \(a_{k} \leq a_{m_{k}+1}\).

Indeed, \(m_{k} = \operatorname{max} \lbrace j\leq k: a_{j} \leq a_{j+1}\rbrace \).

Lemma 2.14

([45])

Let \(\lbrace a_{n} \rbrace \) be a sequence of nonnegative real numbers satisfying the following:

$$\begin{aligned} a_{n+1} \leq (1-\beta _{n}) a_{n} + \eta _{n} +\xi _{n}, \quad\forall n \geq 1, \end{aligned}$$

where \(\beta _{n}\) is a sequence in \((0,1)\) and \(\eta _{n}\) is a real sequence. Suppose that \(\sum^{\infty}_{n=1}\xi _{n} < \infty \) and \(\eta _{n} \leq \beta _{n} M \) for some positive number M. Then, \(\lbrace a_{n} \rbrace \) is bounded.

3 The proposed algorithm

We present in this section the proposed algorithm, together with the assumptions and conditions on the control sequences that enable strong convergence.

Assumption 3.1

Let \(H_{1}\) and \(H_{2}\) be two Hilbert spaces such that the following conditions hold:

  1. (a)

    \(F:C \times C \rightarrow \mathbb{R}\) and \(G: D \times D \rightarrow \mathbb{R}\) are two bifunctions;

  2. (b)

    \(A: H_{1} \rightarrow H_{2}\) is a bounded linear operator such that \(A \neq 0\) and \(A^{*}: H_{2} \rightarrow H_{1}\) is its adjoint;

  3. (c)

    \(T: = \lbrace T(t): 0\leq t< \infty \rbrace: C \rightarrow C\) and \(S: = \lbrace S(s): 0\leq s< \infty \rbrace: D \rightarrow D\) are two nonexpansive semigroups;

    and the solution set

  4. (d)

    \(\Gamma:= \lbrace x^{*} \in C:x^{*} \in EP(C, F) \cap F(T) \text{ and } y^{*}=Ax^{*} \in EP (D, G) \cap F(S)\rbrace \neq \emptyset \).

Assumption 3.2

The control sequences shall satisfy the following conditions:

  1. (i)

    The sequences \(\lbrace \alpha _{n} \rbrace \), \(\lbrace \beta _{n} \rbrace \), and \(\lbrace \sigma _{n} \rbrace \) are sequences of real numbers in \((0,1)\), and \(\lbrace \varepsilon _{n}\rbrace \) is a positive sequence such that \(\lim_{n\rightarrow \infty}\beta _{n}= 0\) and \(\sum^{\infty}_{n=1} \beta _{n}=\infty \);

  2. (ii)

    \(\frac{\varepsilon _{n}}{\beta _{n}^{2}} \rightarrow 0 \text{ as } n\to \infty \);

  3. (iii)

    \(\sigma _{n} \in (a, 1-\beta _{n})\) for some \(a> 0\).

Algorithm 3.3

(Self-Adaptive Inertial Extragradient Method for Split-Equilibrium Problem)

Iterative Steps: Step 0: Choose \(x_{0}, x_{1} \in H_{1}\), \(\mu _{1}, \lambda _{1} >0\), and \(\delta, \tau, \theta \in (0,1)\).

Step 1: Given the current iterates \(x_{n-1}\) and \(x_{n}\) \((n\geq 1)\), choose \(\theta _{n}\) such that \(0\leq \theta _{n} \leq \overline{\theta _{n}}\), where

$$\begin{aligned} \overline{\theta _{n}} :=\textstyle\begin{cases} \min \lbrace \theta, \frac{\varepsilon _{n}}{ \Vert x_{n}-x_{n-1} \Vert }\rbrace & \text{if }x_{n} \neq x_{n-1}, \\ \theta & \text{otherwise} \end{cases}\displaystyle \end{aligned}$$
(3.1)

and compute

$$\begin{aligned} \textstyle\begin{cases} w_{n}=x_{n} + \theta _{n}(x_{n} - x_{n-1}), \\ p_{n} = (1-\delta _{n})w_{n} + \delta _{n} \frac{1}{t_{n}} \int ^{t_{n}} _{0} T(t)w_{n} \,dt, \\ u_{n} = \operatorname{argmin} \lbrace \mu _{n} G(P_{D}(Ap_{n}), u)+\frac{1}{2} \Vert u-P_{D}(Ap_{n}) \Vert ^{2}; u \in D\rbrace, \\ v_{n} = \operatorname{argmin}\lbrace \mu _{n} G(u_{n}, u) + \frac{1}{2} \Vert u-P_{D}(Ap_{n}) \Vert ^{2}; u\in D \rbrace, \\ y_{n}= (1-\alpha _{n}) v_{n} + \alpha _{n} \frac{1}{s_{n}}\int ^{s_{n}}_{0} S(s) v_{n} \,ds, \\ q_{n}= P_{C}(p_{n} + \gamma _{n} A^{*}(S (s)y_{n} - Ap_{n})), \\ t_{n} = \operatorname{argmin} \lbrace \lambda _{n} F(q_{n}, y) + \frac{1}{2} \Vert y-q_{n} \Vert ^{2};y\in C\rbrace, \\ z_{n} = \operatorname{argmin} \lbrace \lambda _{n} F(t_{n}, y) + \frac{1}{2} \Vert y-q_{n} \Vert ^{2};y\in C\rbrace, \\ x_{n+1} = (1-\sigma _{n} - \beta _{n} )p_{n} + \sigma _{n} z_{n}, \quad n\geq 1, \end{cases}\displaystyle \end{aligned}$$
(3.2)

where

$$\begin{aligned} {\mu _{n+1}} :=\textstyle\begin{cases} \min \lbrace \frac{\delta}{2} \frac{ \Vert P_{D}(Ap_{n})-u_{n} \Vert ^{2}+ \Vert v_{n} - u_{n} \Vert ^{2}}{G(P_{D}(Ap_{n}),v_{n})-G(P_{D}(Ap_{n}), u_{n})-G(u_{n},v_{n})}, \mu _{n} \rbrace \\ \phantom{\mu _{n}, \quad}\text{if } G(P_{D}(Ap_{n}),v_{n})-G(P_{D}(Ap_{n}), u_{n})-G(u_{n},v_{n})> 0, \\ \mu _{n}, \quad \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(3.3)

for a given small enough \(\epsilon > 0\), such that \(\gamma _{n} \in (\epsilon, \frac{\|Sy_{n} - Ap_{n}\|^{2}}{\|A^{*}(Sy_{n} - Ap_{n})\|^{2}} - \epsilon )\), and

$$\begin{aligned} {\lambda _{n+1}} := \textstyle\begin{cases} \min \lbrace \frac{\tau}{2} \frac{ \Vert q_{n}-t_{n} \Vert ^{2}+ \Vert z_{n} - t_{n} \Vert ^{2}}{F(q_{n}, z_{n}) - F(q_{n}, t_{n}) - F(t_{n}, z_{n})}, \lambda _{n} \rbrace, \\ \phantom{\lambda _{n}, \quad}\text{if } F(q_{n}, z_{n}) - F(q_{n}, t_{n}) - F(t_{n}, z_{n})> 0, \\ \lambda _{n}, \quad\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
(3.4)

Set \(\boldsymbol {n:=n+1}\) and return to step 1.
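The self-adaptive rule (3.4) can be sketched in a few lines; the bifunction below is a hypothetical one-dimensional example, used only to exercise the update:

```python
def next_lambda(lam, tau, F, q, t, z):
    # Update (3.4): shrink lambda only when the Lipschitz-type quantity
    # d = F(q,z) - F(q,t) - F(t,z) is positive; otherwise keep lambda.
    # No Lipschitz constants of F are required.
    d = F(q, z) - F(q, t) - F(t, z)
    if d > 0:
        return min(tau / 2.0 * (abs(q - t) ** 2 + abs(z - t) ** 2) / d, lam)
    return lam

# Hypothetical bifunction on R, for illustration only.
F = lambda x, y: (y - x) * (2.0 * x + 1.0)

lam = 1.0
for q, t, z in [(1.0, 0.5, 0.2), (0.3, 0.4, 0.1)]:
    lam = next_lambda(lam, 0.5, F, q, t, z)
    assert 0 < lam <= 1.0   # {lambda_n} stays positive and nonincreasing
```

The sequence \(\lbrace \mu _{n}\rbrace \) in (3.3) is updated in exactly the same pattern, with G in place of F.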

4 Convergence analysis

This section is devoted to the convergence analysis of the proposed algorithm. The proof is divided into several parts; the steps are presented below.

Lemma 4.1

Let the sequence \(\lbrace x_{n} \rbrace \) be generated by Algorithm 3.3 under Assumptions 3.1 and 3.2. Then, \(\lbrace x_{n} \rbrace \) is bounded.

Proof

Using Lemma 2.10 and the definition of \(t_{n}\), we obtain

$$\begin{aligned} \lambda _{n} \bigl( F(q_{n},y)- F(q_{n}, t_{n}) \bigr)\geq \langle q_{n} -t_{n}, y-t_{n} \rangle. \end{aligned}$$
(4.1)

Substituting \(y=z_{n}\) in (4.1), we obtain

$$\begin{aligned} \lambda _{n} \bigl(F(q_{n},z_{n})- F(q_{n}, t_{n}) \bigr)\geq \langle q_{n} -t_{n}, z_{n}-t_{n}\rangle = \langle t_{n} - q_{n}, t_{n} -z_{n}\rangle. \end{aligned}$$
(4.2)

Also, using the definition of \(z_{n}\), we obtain

$$\begin{aligned} \lambda _{n} \bigl(F(t_{n},y)- F(t_{n}, z_{n}) \bigr)\geq \langle q_{n} -z_{n}, y-z_{n} \rangle. \end{aligned}$$
(4.3)

Observe that (4.2) and (4.3) imply that

$$\begin{aligned} & 2 \lambda _{n} F(t_{n}, y) + 2 \lambda _{n} \bigl(F(q_{n}, z_{n})-F(q_{n}, t_{n})-F(t_{n}, z_{n}) \bigr) \\ &\quad\geq 2 \langle t_{n}-q_{n}, t_{n}-z_{n} \rangle + 2\langle q_{n} - z_{n}, y-z_{n} \rangle. \end{aligned}$$
(4.4)

If \(F(q_{n}, z_{n})-F(q_{n}, t_{n}) - F(t_{n}, z_{n}) > 0\), we have

$$\begin{aligned} F(q_{n}, z_{n}) - F(q_{n}, t_{n}) - F(t_{n}, z_{n}) \leq \tau \biggl( \frac{ \Vert q_{n} -t_{n} \Vert ^{2}+ \Vert z_{n} -t_{n} \Vert ^{2}}{2\lambda _{n+1}} \biggr). \end{aligned}$$
(4.5)

Hence, from (4.4) and (4.5), we obtain

$$\begin{aligned} & 2 \lambda _{n} F(t_{n}, y) + \tau \lambda _{n} \biggl( \frac{ \Vert q_{n} -t_{n} \Vert ^{2}+ \Vert z_{n} -t_{n} \Vert ^{2}}{\lambda _{n+1}} \biggr) \\ &\quad\geq 2 \langle t_{n}-q_{n}, t_{n}-z_{n} \rangle + 2\langle q_{n} - z_{n}, y-z_{n} \rangle. \end{aligned}$$
(4.6)

Using Lemma 2.9, we have the identities

$$\begin{aligned} \begin{aligned} & 2 \langle t_{n}-q_{n}, t_{n}-z_{n} \rangle = \Vert t_{n} -q_{n} \Vert ^{2} + \Vert t_{n} -z_{n} \Vert ^{2} - \Vert z_{n} -q_{n} \Vert ^{2} \quad\text{and} \\ &2\langle q_{n} - z_{n}, y-z_{n}\rangle = \Vert q_{n} - z_{n} \Vert ^{2} + \Vert z_{n} -y \Vert ^{2} - \Vert q_{n} - y \Vert ^{2}. \end{aligned} \end{aligned}$$
(4.7)
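The two identities in (4.7) are instances of \(2\langle a, b\rangle = \|a\|^{2} + \|b\|^{2} - \|a-b\|^{2}\); a quick numerical sanity check (with arbitrary illustrative vectors, purely as a sketch) confirms the first one:

```python
# Check 2<t - q, t - z> = ||t - q||^2 + ||t - z||^2 - ||z - q||^2 in R^2.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def norm2(u):  # squared Euclidean norm
    return dot(u, u)

t, q, z = [1.0, 2.0], [0.0, 1.0], [3.0, -1.0]  # arbitrary test vectors
lhs = 2 * dot(sub(t, q), sub(t, z))
rhs = norm2(sub(t, q)) + norm2(sub(t, z)) - norm2(sub(z, q))
```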

Hence, using (4.7) and (4.6), we obtain

$$\begin{aligned} &2 \lambda _{n} F(t_{n}, y) + \tau \lambda _{n} \biggl( \frac{ \Vert q_{n} -t_{n} \Vert ^{2}+ \Vert z_{n} -t_{n} \Vert ^{2}}{\lambda _{n+1}} \biggr) \\ &\quad \geq \Vert t_{n} -q_{n} \Vert ^{2} + \Vert t_{n} -z_{n} \Vert ^{2} - \Vert z_{n} -q_{n} \Vert ^{2} + \Vert q_{n} - z_{n} \Vert ^{2} \\ &\qquad{}+ \Vert z_{n} -y \Vert ^{2} - \Vert q_{n} - y \Vert ^{2}. \end{aligned}$$
(4.8)

Thus,

$$\begin{aligned} &\Vert t_{n} - q_{n} \Vert ^{2} + \Vert t_{n} - z_{n} \Vert ^{2} + \Vert z_{n} - y \Vert ^{2} - \Vert q_{n} - y \Vert ^{2} \\ &\quad\leq 2 \lambda _{n} F(t_{n}, y) + \tau \lambda _{n} \biggl( \frac{ \Vert q_{n} -t_{n} \Vert ^{2}+ \Vert z_{n} -t_{n} \Vert ^{2}}{\lambda _{n+1}} \biggr). \end{aligned}$$

Hence,

$$\begin{aligned} \Vert z_{n} -y \Vert ^{2} \leq {}&\Vert q_{n} - y \Vert ^{2} - \Vert t_{n} - z_{n} \Vert ^{2} - \Vert t_{n} -q_{n} \Vert ^{2} + 2\lambda _{n} F(t_{n}, y) \\ &{} + \tau \lambda _{n} \biggl( \frac{ \Vert q_{n} -t_{n} \Vert ^{2}+ \Vert z_{n} -t_{n} \Vert ^{2}}{\lambda _{n+1}} \biggr) \\ ={}& \Vert q_{n} - y \Vert ^{2} - \Vert t_{n} - z_{n} \Vert ^{2} + \tau \lambda _{n} \frac{ \Vert z_{n} -t_{n} \Vert ^{2}}{\lambda _{n+1}} - \Vert t_{n} -q_{n} \Vert ^{2} \\ &{}+ \tau \lambda _{n} \frac{ \Vert q_{n} -t_{n} \Vert ^{2}}{\lambda _{n+1}} + 2 \lambda _{n} F(t_{n}, y). \end{aligned}$$
(4.9)

Now, for \(y \in EP (C, F) \subset C\), we have that

$$\begin{aligned} F(y, t_{n})\geq 0, \quad\forall n. \end{aligned}$$

By the condition (B2), we establish that

$$\begin{aligned} F(t_{n}, y)\leq 0, \quad\forall n. \end{aligned}$$

It follows that

$$\begin{aligned} \Vert z_{n} -y \Vert ^{2} \leq \Vert q_{n} -y \Vert ^{2} - \biggl(1-\tau \frac{\lambda _{n}}{\lambda _{n+1}} \biggr) \Vert z_{n} - t_{n} \Vert ^{2} - \biggl(1-\tau \frac{\lambda _{n}}{\lambda _{n+1}} \biggr) \Vert q_{n} - t_{n} \Vert ^{2}. \end{aligned}$$
(4.10)

Observe from (3.4) that \(\lbrace \lambda _{n}\rbrace \) is monotone nonincreasing and bounded below by a positive constant. Hence, the limit of \(\lambda _{n}\) exists; without loss of generality, we may assume that \(\lim_{n\rightarrow \infty} \lambda _{n} = \lambda > 0\). Therefore,

$$\begin{aligned} \lim_{n\rightarrow \infty} \biggl(1-\tau \frac{\lambda _{n}}{\lambda _{n+1}} \biggr) = (1-\tau )> 0. \end{aligned}$$

Hence, there exists \(N_{0} \in \mathbb{N}\) such that

$$\begin{aligned} 1-\tau \frac{\lambda _{n}}{\lambda _{n+1}} >0,\quad \forall n \geq N_{0}. \end{aligned}$$
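The eventual positivity of \(1-\tau \lambda _{n}/\lambda _{n+1}\) is easy to see numerically. Here is a small illustrative sketch (the concrete sequence \(\lambda _{n} = 1 + 1/n\) and the value \(\tau = 1/2\) are assumptions chosen only for demonstration):

```python
# lambda_n = 1 + 1/n is nonincreasing with positive limit 1.
tau = 0.5
lam = [1.0 + 1.0 / n for n in range(1, 201)]

# Coefficients 1 - tau * lambda_n / lambda_{n+1} increase toward 1 - tau = 0.5.
coeffs = [1.0 - tau * lam[i] / lam[i + 1] for i in range(len(lam) - 1)]
tail_positive = all(c > 0 for c in coeffs[10:])
```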

It follows from these facts and (4.10) that

$$\begin{aligned} \Vert z_{n} -y \Vert ^{2} \leq \Vert q_{n} -y \Vert ^{2}, \quad \forall n \geq N_{0}. \end{aligned}$$
(4.11)

In a similar way and by definition of \(u_{n}\), we obtain

$$\begin{aligned} \mu _{n} \bigl(G \bigl(P_{D}(Ap_{n}),u \bigr) - G \bigl(P_{D}(Ap_{n}), u_{n} \bigr) \bigr) &\geq \bigl\langle P_{D}(Ap_{n}) - u_{n}, u-u_{n} \bigr\rangle \quad \forall u \in D \\ &= \bigl\langle u_{n} - P_{D}(Ap_{n}), u_{n} -u \bigr\rangle . \end{aligned}$$
(4.12)

Setting \(u=v_{n}\) in (4.12), we obtain

$$\begin{aligned} \mu _{n} \bigl(G \bigl(P_{D}(Ap_{n}),v_{n} \bigr) - G \bigl(P_{D}(Ap_{n}), u_{n} \bigr) \bigr) \geq \bigl\langle u_{n} - P_{D}(Ap_{n}), u_{n} -v_{n} \bigr\rangle . \end{aligned}$$
(4.13)

Using the definition of \(v_{n}\), we further obtain

$$\begin{aligned} \mu _{n} \bigl(G(u_{n}, u)-G(u_{n}, v_{n}) \bigr)\geq \bigl\langle P_{D}(Ap_{n})-v_{n}, u-v_{n} \bigr\rangle . \end{aligned}$$
(4.14)

Now, equations (4.13) and (4.14) imply that

$$\begin{aligned} &2 \mu _{n} G(u_{n}, u)+ 2\mu _{n} \bigl(G \bigl(P_{D}(Ap_{n}),v_{n} \bigr)- G \bigl(P_{D}(Ap_{n}), u_{n} \bigr)- G(u_{n}, v_{n}) \bigr) \\ &\quad\geq 2 \bigl\langle u_{n}- P_{D}(Ap_{n}), u_{n} - v_{n} \bigr\rangle +2 \bigl\langle P_{D}(Ap_{n})-v_{n}, u-v_{n} \bigr\rangle . \end{aligned}$$
(4.15)

If \(G(P_{D}(Ap_{n}),v_{n}) - G(P_{D}(Ap_{n}), u_{n}) - G(u_{n}, v_{n})> 0\), then, by (3.3), \(G(P_{D}(Ap_{n}),v_{n}) - G(P_{D}(Ap_{n}), u_{n}) - G(u_{n}, v_{n}) \leq \delta ( \frac{\|P_{D}(Ap_{n})-u_{n}\|^{2} + \|v_{n}- u_{n}\|^{2}}{2\mu _{n+1}}) \).

It follows from the last inequality and from (4.15) that

$$\begin{aligned} &2\mu _{n} G(u_{n}, u) + \delta \mu _{n} \biggl( \frac{ \Vert P_{D}(Ap_{n})-u_{n} \Vert ^{2} + \Vert v_{n}- u_{n} \Vert ^{2}}{\mu _{n+1}} \biggr) \\ &\quad \geq 2 \bigl\langle u_{n}- P_{D}(Ap_{n}), u_{n} - v_{n} \bigr\rangle + 2 \bigl\langle P_{D}(Ap_{n})-v_{n}, u-v_{n} \bigr\rangle . \end{aligned}$$
(4.16)

We know from Lemma 2.9 that

$$\begin{aligned} \begin{aligned}& 2 \bigl\langle u_{n} - P_{D}(Ap_{n}), u_{n} -v_{n} \bigr\rangle \\ &\quad= \bigl\Vert u_{n} - P_{D}(Ap_{n}) \bigr\Vert ^{2} + \Vert u_{n} - v_{n} \Vert ^{2} - \bigl\Vert v_{n} - P_{D}(Ap_{n}) \bigr\Vert ^{2} \quad\text{and} \\ &2 \bigl\langle P_{D}(Ap_{n}) - v_{n}, u-v_{n} \bigr\rangle \\ &\quad = \bigl\Vert P_{D}(Ap_{n}) - v_{n} \bigr\Vert ^{2} + \Vert u-v_{n} \Vert ^{2} - \bigl\Vert P_{D}(Ap_{n}) -u \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(4.17)

Hence, using (4.17) and (4.16), we have

$$\begin{aligned} \Vert v_{n} -u \Vert ^{2} \leq {}& \bigl\Vert P_{D}(Ap_{n})-u \bigr\Vert ^{2} - \bigl\Vert u_{n} - P_{D}(Ap_{n}) \bigr\Vert ^{2} - \Vert u_{n} - v_{n} \Vert ^{2} + \delta \mu _{n} \frac{ \Vert P_{D}(Ap_{n})-u_{n} \Vert ^{2}}{\mu _{n+1}} \\ &{}+\delta \mu _{n} \frac{ \Vert v_{n}-u_{n} \Vert ^{2}}{\mu _{n+1}} + 2\mu _{n} G(u_{n}, u) \\ ={}& \bigl\Vert P_{D}(Ap_{n}) -u \bigr\Vert ^{2} - \biggl(1-\frac{\delta \mu _{n}}{\mu _{n+1}} \biggr) \bigl\Vert P_{D}(Ap_{n})- u_{n} \bigr\Vert ^{2} - \biggl(1-\frac{\delta \mu _{n}}{\mu _{n+1}} \biggr) \Vert v_{n} - u_{n} \Vert ^{2} \\ &{}+ 2\mu _{n} G(u_{n}, u). \end{aligned}$$
(4.18)

Now, take \(u=Ax^{*}\), where \(Ax^{*} \in EP (D, G) \subset D\). By the definition of \(EP (D, G)\), we obtain

$$\begin{aligned} G \bigl(Ax^{*}, u_{n} \bigr)\geq 0, \quad\forall n. \end{aligned}$$

Further, by the condition (A2), it implies that

$$\begin{aligned} G \bigl(u_{n}, Ax^{*} \bigr)\leq 0. \end{aligned}$$

Since \(1-\delta \mu _{n}/\mu _{n+1}\) is eventually positive (by the same argument used for \(\lambda _{n}\)), it follows from these facts and from (4.18) that

$$\begin{aligned} \bigl\Vert v_{n} -Ax^{*} \bigr\Vert ^{2} \leq \bigl\Vert P_{D}(Ap_{n})- Ax^{*} \bigr\Vert ^{2}. \end{aligned}$$
(4.19)

Next, let \(x^{*} \in \Gamma \) so that \(x^{*} \in EP (C, F) \cap F(T)\) and \(Ax^{*} \in EP(D, G) \cap F(S)\). Since \(P_{D}\) is firmly nonexpansive, we obtain that

$$\begin{aligned} \bigl\Vert P_{D}(Ap_{n})-Ax^{*} \bigr\Vert ^{2} &= \bigl\Vert P_{D}(Ap_{n}) - P_{D} \bigl(Ax^{*} \bigr) \bigr\Vert ^{2} \\ &\leq \bigl\langle P_{D}(Ap_{n}) - P_{D} \bigl(Ax^{*} \bigr), Ap_{n} - Ax^{*} \bigr\rangle \\ &=\frac{1}{2} \bigl[ \bigl\Vert P_{D}(Ap_{n})- Ax^{*} \bigr\Vert ^{2} + \bigl\Vert Ap_{n} - Ax^{*} \bigr\Vert ^{2}- \bigl\Vert P_{D}(Ap_{n}) - Ap_{n} \bigr\Vert ^{2} \bigr] \\ &\leq \bigl\Vert Ap_{n} - Ax^{*} \bigr\Vert ^{2} - \bigl\Vert P_{D}(Ap_{n})- Ap_{n} \bigr\Vert ^{2}. \end{aligned}$$
(4.20)

Also, since \(Ax^{*} \in F(S(s))\) implies \(Ax^{*} = \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)Ax^{*} \,ds\), we have

$$\begin{aligned} \bigl\Vert y_{n} - Ax^{*} \bigr\Vert ^{2} ={}& \biggl\Vert (1-\alpha _{n})v_{n} + \alpha _{n} \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds - Ax^{*} \biggr\Vert ^{2} \\ ={}& \biggl\Vert (1-\alpha _{n}) \bigl(v_{n} - Ax^{*} \bigr) + \alpha _{n} \biggl(\frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds - Ax^{*} \biggr) \biggr\Vert ^{2} \\ ={}&(1-\alpha _{n}) \bigl\Vert v_{n} - Ax^{*} \bigr\Vert ^{2} + \alpha _{n} \biggl\Vert \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds -Ax^{*} \biggr\Vert ^{2} \\ &{}- \alpha _{n}(1-\alpha _{n}) \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds \biggr\Vert ^{2} \\ \leq{} &(1-\alpha _{n}) \bigl\Vert v_{n} - Ax^{*} \bigr\Vert ^{2} + \alpha _{n} \biggl\Vert \frac{1}{s_{n}} \int ^{s_{n}}_{0} \bigl(S(s) v_{n} - S(s)Ax^{*} \bigr) \,ds \biggr\Vert ^{2} \\ \leq{} &(1-\alpha _{n}) \bigl\Vert v_{n} - Ax^{*} \bigr\Vert ^{2} + \alpha _{n} \bigl\Vert v_{n} -Ax^{*} \bigr\Vert ^{2} \\ ={}& \bigl\Vert v_{n} - Ax^{*} \bigr\Vert ^{2}. \end{aligned}$$
(4.21)

Since \(S(s)\) is nonexpansive and \(Ax^{*} \in F(S(s))\), it follows from (4.19) and (4.20) that

$$\begin{aligned} \bigl\Vert S(s)y_{n} - Ax^{*} \bigr\Vert ^{2} &= \bigl\Vert S(s)y_{n} - S(s) \bigl(Ax^{*} \bigr) \bigr\Vert ^{2} \\ &\leq \bigl\Vert y_{n} - Ax^{*} \bigr\Vert ^{2} \\ &\leq \bigl\Vert v_{n} - Ax^{*} \bigr\Vert ^{2} \\ &\leq \bigl\Vert P_{D}(Ap_{n}) - Ax^{*} \bigr\Vert ^{2} \\ &\leq \bigl\Vert Ap_{n} - Ax^{*} \bigr\Vert ^{2} - \bigl\Vert P_{D}(Ap_{n})- Ap_{n} \bigr\Vert ^{2}. \end{aligned}$$
(4.22)

By the nonexpansiveness of \(P_{C}\), we obtain

$$\begin{aligned} &\bigl\Vert q_{n} - x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl\Vert P_{C} \bigl(p_{n} + \gamma _{n} A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr) - x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl\Vert P_{C} \bigl(p_{n} + \gamma _{n} A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr) - P_{C} \bigl(x^{*} \bigr) \bigr\Vert ^{2} \\ &\quad\leq \bigl\Vert p_{n} + \gamma _{n} A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) - x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \gamma ^{2}_{n} \bigl\Vert A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\Vert ^{2} + 2\gamma _{n} \bigl\langle p_{n} - x^{*}, A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\rangle . \end{aligned}$$
(4.23)

However,

$$\begin{aligned} & 2\gamma _{n} \bigl\langle p_{n} - x^{*}, A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\rangle \\ &\quad =2 \gamma _{n} \bigl\langle Ap_{n} - Ax^{*}, S(s)y_{n} - Ap_{n} \bigr\rangle \\ &\quad= 2\gamma _{n} \bigl\langle Ap_{n} - Ax^{*}+S(s)y_{n} - Ap_{n} - \bigl(S(s)y_{n} - Ap_{n} \bigr), S(s)y_{n} - Ap_{n} \bigr\rangle \\ &\quad= 2\gamma _{n} \bigl\langle S(s)y_{n} - Ax^{*}, S(s)y_{n} - Ap_{n} \bigr\rangle -2 \gamma _{n} \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert ^{2} \\ &\quad=\gamma _{n} \bigl\Vert S(s)y_{n} - Ax^{*} \bigr\Vert ^{2} + \gamma _{n} \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert Ap_{n} - Ax^{*} \bigr\Vert ^{2} \\ &\qquad{}- 2\gamma _{n} \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert ^{2} \\ &\quad=\gamma _{n} \bigl\Vert S(s)y_{n} - Ax^{*} \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert Ap_{n} - Ax^{*} \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert ^{2}. \end{aligned}$$
(4.24)

Using (4.22) in (4.24), we obtain that

$$\begin{aligned} 2\gamma _{n} \bigl\langle p_{n} - x^{*}, A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\rangle \leq - \gamma _{n} \bigl\Vert P_{D}(Ap_{n}) - Ap_{n} \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert ^{2}. \end{aligned}$$
(4.25)

Using (4.25) in (4.23), and the condition of \(\gamma _{n}\) in step 2, we obtain

$$\begin{aligned} \bigl\Vert q_{n} - x^{*} \bigr\Vert ^{2} \leq{}& \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \gamma ^{2}_{n} \bigl\Vert A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\Vert ^{2} \\ &{}- \gamma _{n} \bigl\Vert P_{D}(Ap_{n}) - Ap_{n} \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert ^{2} \\ \leq{}& \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \gamma ^{2}_{n} \bigl\Vert A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert ^{2} \\ = {}&\bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} - \gamma _{n} \bigl[ \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert ^{2} - \gamma _{n} \bigl\Vert A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\Vert ^{2} \bigr] \\ \leq {}&\bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(4.26)

We further estimate from the Algorithm, using that \(x^{*} \in F(T(t))\) gives \(x^{*} = \frac{1}{t_{n}} \int ^{t_{n}}_{0} T(t)x^{*} \,dt\), that

$$\begin{aligned} \bigl\Vert p_{n} -x^{*} \bigr\Vert ^{2} ={}& \biggl\Vert (1-\delta _{n})w_{n} + \delta _{n} \frac{1}{t_{n}} \int ^{t_{n}}_{0} T(t)w_{n} \,dt -x^{*} \biggr\Vert ^{2} \\ ={}& \biggl\Vert (1-\delta _{n}) \bigl(w_{n} -x^{*} \bigr) + \delta _{n} \biggl(\frac{1}{t_{n}} \int ^{t_{n}}_{0} T(t)w_{n} \,dt - x^{*} \biggr) \biggr\Vert ^{2} \\ ={}&(1-\delta _{n}) \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2} + \delta _{n} \biggl\Vert \frac{1}{t_{n}} \int ^{t_{n}}_{0} T(t)w_{n} \,dt -x^{*} \biggr\Vert ^{2} \\ &{}- \delta _{n} (1-\delta _{n}) \biggl\Vert \frac{1}{t_{n}} \int ^{t_{n}}_{0} T(t)w_{n} \,dt -w_{n} \biggr\Vert ^{2} \\ \leq{} &(1-\delta _{n}) \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2} + \delta _{n} \biggl\Vert \frac{1}{t_{n}} \int ^{t_{n}}_{0} \bigl(T(t)w_{n} - T(t)x^{*} \bigr) \,dt \biggr\Vert ^{2} \\ \leq{} &(1-\delta _{n}) \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2} + \delta _{n} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} \\ ={}& \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(4.27)

From the condition on \(\theta _{n}\), we know that

$$\begin{aligned} \theta _{n} \Vert x_{n} - x_{n-1} \Vert \leq \varepsilon _{n}, \quad\forall n \geq 1. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{\theta _{n}}{\beta ^{2}_{n}} \Vert x_{n} - x_{n-1} \Vert \leq \frac{\varepsilon _{n}}{\beta ^{2}_{n}} \rightarrow 0. \end{aligned}$$

Thus, there exists \(M_{1} > 0\) such that

$$\begin{aligned} \frac{\theta _{n}}{\beta ^{2}_{n}} \Vert x_{n} - x_{n-1} \Vert \leq M_{1},\quad \forall n\geq 1. \end{aligned}$$

Therefore,

$$\begin{aligned} \bigl\Vert w_{n} - x^{*} \bigr\Vert &= \bigl\Vert x_{n} + \theta _{n}(x_{n} - x_{n-1}) - x^{*} \bigr\Vert \\ &\leq \bigl\Vert x_{n} - x^{*} \bigr\Vert + \theta _{n} \Vert x_{n} - x_{n-1} \Vert \\ &= \bigl\Vert x_{n} - x^{*} \bigr\Vert + \beta ^{2}_{n} \frac{\theta _{n}}{\beta ^{2}_{n}} \Vert x_{n} - x_{n-1} \Vert \\ &\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert + \beta ^{2}_{n}M_{1}. \end{aligned}$$
(4.28)
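A standard way to realize the condition \(\theta _{n}\|x_{n} - x_{n-1}\| \leq \varepsilon _{n}\) in practice is to cap the inertial parameter adaptively. The sketch below is an assumption for illustration (this excerpt does not spell out how \(\theta _{n}\) is chosen); the function name and parameters are hypothetical.

```python
# Hedged sketch: a common capped choice of the inertial parameter theta_n
# guaranteeing theta_n * ||x_n - x_{n-1}|| <= eps_n.

def choose_theta(theta_bar, eps_n, diff_norm):
    """theta_bar in [0, 1) is the nominal inertial weight."""
    if diff_norm > 0:
        return min(theta_bar, eps_n / diff_norm)
    return theta_bar  # consecutive iterates coincide; any theta works

theta = choose_theta(0.9, 0.01, 2.0)   # capped at 0.01 / 2.0 = 0.005
bound = theta * 2.0                    # theta_n * ||x_n - x_{n-1}||
```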

Observe that

$$\begin{aligned} & \bigl\Vert (1-\sigma _{n}- \beta _{n}) \bigl(p_{n} - x^{*} \bigr) +\sigma _{n} \bigl(z_{n} -x^{*} \bigr) \bigr\Vert ^{2} \\ &\quad=(1- \sigma _{n}- \beta _{n})^{2} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \sigma ^{2}_{n} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} \\ &\qquad{}+ 2 \sigma _{n}(1-\sigma _{n}- \beta _{n}) \bigl\langle p_{n} - x^{*}, z_{n} - x^{*} \bigr\rangle \\ &\quad\leq (1-\sigma _{n}- \beta _{n})^{2} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \sigma ^{2}_{n} \bigl\Vert z_{n} - x^{*} \bigr\Vert ^{2} \\ &\qquad{}+ 2\sigma _{n} (1-\sigma _{n}- \beta _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert . \bigl\Vert z_{n} -x^{*} \bigr\Vert \\ &\quad\leq (1-\sigma _{n}- \beta _{n})^{2} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \sigma ^{2}_{n} \bigl\Vert q_{n} - x^{*} \bigr\Vert ^{2} \\ &\qquad{}+ 2\sigma _{n} (1-\sigma _{n}- \beta _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert . \bigl\Vert q_{n} -x^{*} \bigr\Vert \\ &\quad\leq (1-\sigma _{n}- \beta _{n})^{2} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \sigma ^{2}_{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} \\ &\qquad{}+ 2\sigma _{n} (1-\sigma _{n}- \beta _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert . \bigl\Vert p_{n} -x^{*} \bigr\Vert \\ &\quad=(1-\sigma _{n}- \beta _{n})^{2} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \sigma ^{2}_{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} \\ &\qquad{}+ 2\sigma _{n} (1-\sigma _{n}- \beta _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl[(1-\sigma _{n}- \beta _{n})^{2} + \sigma ^{2}_{n}+2\sigma _{n} (1- \sigma _{n}- \beta _{n}) \bigr] \bigl\Vert p_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad=(1-\beta _{n})^{2} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} \\ &\quad\leq (1-\beta _{n}) \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(4.29)

Therefore, from the Algorithm 3.3, and (4.29), we have

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert &= \bigl\Vert (1-\sigma _{n}- \beta _{n}) \bigl(p_{n} -x^{*} \bigr) + \sigma _{n} \bigl(z_{n} - x^{*} \bigr) - \beta _{n} x^{*} \bigr\Vert \\ &\leq \bigl\Vert (1-\sigma _{n}- \beta _{n}) \bigl(p_{n} -x^{*} \bigr) + \sigma _{n} \bigl(z_{n} - x^{*} \bigr) \bigr\Vert + \beta _{n} \bigl\Vert x^{*} \bigr\Vert \\ &\leq (1-\beta _{n}) \bigl\Vert w_{n} - x^{*} \bigr\Vert + \beta _{n} \bigl\Vert x^{*} \bigr\Vert \\ &\leq (1-\beta _{n}) \bigl[ \bigl\Vert x_{n} - x^{*} \bigr\Vert + \beta ^{2}_{n} M_{1} \bigr] + \beta _{n} \bigl\Vert x^{*} \bigr\Vert \\ &=(1-\beta _{n}) \bigl\Vert x_{n} - x^{*} \bigr\Vert +(1-\beta _{n})\beta ^{2}_{n} M_{1} + \beta _{n} \bigl\Vert x^{*} \bigr\Vert \\ &\leq (1-\beta _{n}) \bigl\Vert x_{n} - x^{*} \bigr\Vert + \beta _{n} \bigl(M_{1} + \bigl\Vert x^{*} \bigr\Vert \bigr). \end{aligned}$$
(4.30)

Hence, it follows from Lemma 2.14 that \(\lbrace x_{n} \rbrace \) is bounded. This completes the proof of Lemma 4.1. □
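The boundedness step rests on the standard fact (invoked here via Lemma 2.14) that a recursion of the form \(a_{n+1} \leq (1-\beta _{n})a_{n} + \beta _{n} K\) with \(\beta _{n} \in (0,1)\) keeps \(a_{n} \leq \max \{a_{1}, K\}\). The following simulation is an illustration of that fact under assumed parameters, not the lemma's proof:

```python
# Illustrative: iterate the worst case a_{n+1} = (1 - beta_n) a_n + beta_n K
# and check that max(a_1, K) is never exceeded.

a1, K = 10.0, 3.0
a = a1
history = [a]
for n in range(1, 500):
    beta = 1.0 / (n + 1)          # beta_n in (0, 1), illustrative choice
    a = (1.0 - beta) * a + beta * K
    history.append(a)

bounded = all(x <= max(a1, K) + 1e-12 for x in history)
```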

Next, we establish the following claim.

Claim a:

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \leq (1-\vartheta _{n}) \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + \vartheta _{n} \biggl( \frac{\beta _{n} (M_{0} + K_{0} )+ 2\langle x^{*}, x^{*}- x_{n+1}\rangle}{2-5\sigma _{n}} \biggr). \end{aligned}$$

Proof

Observe that for all \(x^{*} \in \Gamma \),

$$\begin{aligned} &\bigl\Vert (1-\sigma _{n})p_{n} + \sigma _{n} z_{n} - x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl\Vert (1- \sigma _{n}) \bigl(p_{n} -x^{*} \bigr) + \sigma _{n} \bigl(z_{n} -x^{*} \bigr) \bigr\Vert ^{2} \\ &\quad=(1-\sigma _{n})^{2} \bigl\Vert p_{n} -x^{*} \bigr\Vert ^{2} + \sigma ^{2}_{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &\qquad{}+2\sigma _{n}(1-\sigma _{n}) \bigl\langle p_{n} -x^{*}, z_{n} - x^{*} \bigr\rangle \\ &\quad\leq (1-\sigma _{n})^{2} \bigl\Vert p_{n} -x^{*} \bigr\Vert ^{2} + \sigma ^{2}_{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ & \qquad{}+ 2\sigma _{n}(1-\sigma _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert . \bigl\Vert z_{n} -x^{*} \bigr\Vert \\ &\quad\leq(1-\sigma _{n})^{2} \bigl\Vert p_{n} -x^{*} \bigr\Vert ^{2} + \sigma ^{2}_{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} + \sigma _{n}(1-\sigma _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} \\ &\qquad{}+\sigma _{n}(1-\sigma _{n}) \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl[(1-\sigma _{n})^{2} + \sigma _{n}(1- \sigma _{n}) \bigr] \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \bigl[\sigma ^{2}_{n} + \sigma _{n}(1-\sigma _{n}) \bigr] \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad=(1-\sigma _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \sigma _{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad\leq(1-\sigma _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \sigma _{n} \bigl\Vert q_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad\leq (1-\sigma _{n}) \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \sigma _{n} \bigl\Vert p_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} \\ &\quad\leq \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(4.31)

Using (4.31) in the following estimate, we obtain

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} ={}& \bigl\Vert (1-\beta _{n}) \bigl[(1-\sigma _{n})p_{n} + \sigma _{n} z_{n} -x^{*} \bigr]- \bigl[\beta _{n}\sigma _{n}(p_{n} - z_{n})+ \beta _{n} x^{*} \bigr] \bigr\Vert ^{2} \\ \leq{} &(1-\beta _{n})^{2} \bigl\Vert (1-\sigma _{n})p_{n} + \sigma _{n} z_{n} -x^{*} \bigr\Vert ^{2} - 2 \bigl\langle \beta _{n} \sigma _{n}(p_{n} - z_{n}) + \beta _{n} x^{*}, x_{n+1} -x^{*} \bigr\rangle \\ ={}&(1-\beta _{n})^{2} \bigl\Vert (1-\sigma _{n})p_{n} + \sigma _{n} z_{n} -x^{*} \bigr\Vert ^{2} + 2 \bigl\langle \beta _{n} \sigma _{n}(p_{n} - z_{n}), x^{*} - x_{n+1} \bigr\rangle \\ &{}+ 2\beta _{n} \bigl\langle x^{*}, x^{*}-x_{n+1} \bigr\rangle \\ \leq {}& (1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + 2 \bigl\langle \beta _{n} \sigma _{n}(p_{n} - z_{n}), x^{*}-x_{n+1} \bigr\rangle + 2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ \leq {}& (1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + 2\beta _{n} \sigma _{n} \Vert p_{n} - z_{n} \Vert . 
\bigl\Vert x^{*}-x_{n+1} \bigr\Vert + 2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ \leq {}& (1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + \beta _{n} \sigma _{n} \Vert p_{n} - z_{n} \Vert ^{2} + \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2} \\ &{}+ 2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ ={}&(1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + \beta _{n} \sigma _{n} \bigl\Vert p_{n} - x^{*}+x^{*} -z_{n} \bigr\Vert ^{2} + \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2} \\ &{}+ 2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ ={}& (1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + \beta _{n}\sigma _{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \beta _{n}\sigma _{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &{}- 2\beta _{n}\sigma _{n} \bigl\langle p_{n} -x^{*}, z_{n} - x^{*} \bigr\rangle + \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} +2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ \leq{} & (1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + \beta _{n}\sigma _{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \beta _{n}\sigma _{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &{}+2\beta _{n}\sigma _{n} \bigl\Vert p_{n} -x^{*} \bigr\Vert . 
\bigl\Vert z_{n} - x^{*} \bigr\Vert + \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} \\ &{}+2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ \leq {}&(1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + \beta _{n}\sigma _{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \beta _{n}\sigma _{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &{}+\beta _{n}\sigma _{n} \bigl\Vert p_{n} -x^{*} \bigr\Vert ^{2} + \beta _{n}\sigma _{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} + \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} \\ &{} +2 \beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ ={}&(1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + 2\beta _{n}\sigma _{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + 2\beta _{n}\sigma _{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &{}+ \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} +2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ \leq{} & (1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + 2\beta _{n}\sigma _{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + 2\beta _{n}\sigma _{n} \bigl\Vert q_{n} -x^{*} \bigr\Vert ^{2} \\ &{}+\beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} +2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ \leq {}&(1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + 2\beta _{n}\sigma _{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + 2\beta _{n}\sigma _{n} \bigl\Vert p_{n} -x^{*} \bigr\Vert ^{2} \\ &{}+\beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} +2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ ={}&(1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + 4\beta _{n}\sigma _{n} \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} + \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} \\ &{} +2 \beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ \leq {}&(1-\beta _{n})^{2} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} + 4\beta 
_{n}\sigma _{n} \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2} + \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} \\ &{} +2 \beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \\ ={}& \bigl[(1-\beta _{n})^{2} + 4\beta _{n} \sigma _{n} \bigr] \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2} + \beta _{n} \sigma _{n} \bigl\Vert x_{n+1} -x^{*} \bigr\Vert ^{2} \\ &{}+2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle . \end{aligned}$$
(4.32)

At this point, we estimate that

$$\begin{aligned} \bigl\Vert w_{n} -x^{*} \bigr\Vert ^{2} &= \bigl\Vert x_{n} + \theta _{n}(x_{n} - x_{n-1})-x^{*} \bigr\Vert ^{2} \\ &\leq \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + 2\theta _{n} \bigl\Vert x_{n} - x^{*} \bigr\Vert . \Vert x_{n} - x_{n-1} \Vert + \theta ^{2}_{n} \Vert x_{n} -x_{n-1} \Vert ^{2} \\ &\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} + \theta _{n} \Vert x_{n} - x_{n-1} \Vert \bigl[2 \bigl\Vert x_{n} -x^{*} \bigr\Vert + \theta _{n} \Vert x_{n}-x_{n-1} \Vert \bigr] \\ &\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} + 3\theta _{n} \Vert x_{n} - x_{n-1} \Vert M_{2} \quad\text{for some } M_{2}>0. \\ &= \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + 3\beta ^{2}_{n} \frac{\theta _{n}}{\beta ^{2}_{n}} \Vert x_{n} - x_{n-1} \Vert M_{2} \\ &\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} + 3\beta ^{2}_{n} M_{3},\quad \text{for some } M_{3} > 0. \end{aligned}$$
(4.33)

It follows from (4.32) and (4.33) that

$$\begin{aligned} &\bigl\Vert x_{n+1}- x^{*} \bigr\Vert ^{2} \\ &\quad \leq\frac{1}{(1-\beta _{n} \sigma _{n})} \bigl( \bigl[ (1- \beta _{n})^{2} + 4\beta _{n}\sigma _{n} \bigr] \bigl\Vert w_{n} - x^{*} \bigr\Vert ^{2} +2 \beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \bigr) \\ &\quad\leq \frac{1}{(1-\beta _{n} \sigma _{n})} \bigl( \bigl[(1-\beta _{n})^{2} + 4 \beta _{n}\sigma _{n} \bigr] \bigl[ \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + 3\beta ^{2}_{n} M_{3} \bigr]+2 \beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \bigr) \\ &\quad= \frac{1}{(1-\beta _{n} \sigma _{n})} \bigl( \bigl[(1-\beta _{n})^{2} + 4 \beta _{n}\sigma _{n} \bigr] \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2}+ \bigl[(1-\beta _{n})^{2} + 4 \beta _{n}\sigma _{n} \bigr]3\beta ^{2}_{n} M_{3} \\ &\qquad{}+2\beta _{n} \bigl\langle x^{*}, x^{*} - x_{n+1} \bigr\rangle \bigr) \\ &\quad\leq \biggl( \frac{1-2\beta _{n} + 4\beta _{n}\sigma _{n}}{1-\beta _{n} \sigma _{n}} \biggr) \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} + \frac{\beta ^{2}_{n} \Vert x_{n} - x^{*} \Vert ^{2} + (1+4\beta _{n}\sigma _{n})3\beta ^{2}_{n}M_{3}}{1-\beta _{n}\sigma _{n}} \\ &\qquad{}+ \biggl( \frac{2\beta _{n}\langle x^{*}, x^{*}-x_{n+1}\rangle}{1-\beta _{n}\sigma _{n}} \biggr) \\ &\quad\leq \biggl( \frac{1-\beta _{n} \sigma _{n} - 2\beta _{n} + 5\beta _{n}\sigma _{n}}{1-\beta _{n} \sigma _{n}} \biggr) \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + \frac{\beta ^{2}_{n} M_{0} + \beta ^{2}_{n} K_{0} + 2\beta _{n} \langle x^{*}, x^{*}- x_{n+1}\rangle}{1-\beta _{n} \sigma _{n}} \\ &\quad= \biggl( \frac{1-\beta _{n} \sigma _{n} - \beta _{n}(2-5\sigma _{n})}{1-\beta _{n} \sigma _{n}} \biggr) \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} \\ &\qquad{} + \frac{\beta _{n}(2-5\sigma _{n})}{1-\beta _{n} \sigma _{n}} \biggl( \frac{\beta _{n} M_{0} + \beta _{n} K_{0} + 2 \langle x^{*}, x^{*}- x_{n+1}\rangle }{2-5\sigma _{n}} \biggr) \\ &\quad= \biggl(1-\frac{\beta _{n}(2-5\sigma _{n})}{1-\beta _{n}\sigma _{n}} \biggr) \bigl\Vert x_{n} - x^{*}
\bigr\Vert ^{2} + \frac{\beta _{n}(2-5\sigma _{n})}{1-\beta _{n} \sigma _{n}} \biggl( \frac{\beta _{n} M_{0} +\beta _{n} K_{0} + 2\langle x^{*}, x^{*}- x_{n+1}\rangle }{2-5\sigma _{n}} \biggr) \\ &\quad=(1-\vartheta _{n}) \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + \vartheta _{n} \biggl( \frac{\beta _{n} (M_{0} + K_{0}) + 2\langle x^{*}, x^{*}- x_{n+1}\rangle}{2-5\sigma _{n}} \biggr), \end{aligned}$$
(4.34)

where \(M_{0}:= \sup \lbrace \Vert x_{n} - x^{*} \Vert ^{2}: n\in \mathbb{N}\rbrace \), \(K_{0}:= \sup \lbrace 3 M_{3}(1+4\beta _{n}\sigma _{n}): n \in \mathbb{N} \rbrace \), and \(\vartheta _{n}:= \frac{\beta _{n}(2-5\sigma _{n})}{1-\beta _{n} \sigma _{n}}\).

By the assumption on \(\beta _{n}\), we see that \(\lim_{n\rightarrow \infty} \vartheta _{n} =0\) and \(\sum^{\infty}_{n=1} \vartheta _{n} =\infty \). □
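With, for instance, \(\beta _{n} = 1/(n+1)\) and a constant \(\sigma _{n} = \sigma < 2/5\) (illustrative choices assumed here; the paper's exact parameter conditions are those of Assumption 3.2), one can check numerically that \(\vartheta _{n} \to 0\) while the partial sums of \(\vartheta _{n}\) grow without bound:

```python
# vartheta_n := beta_n (2 - 5 sigma_n) / (1 - beta_n sigma_n), illustrative parameters.
sigma = 0.2                      # sigma < 2/5 keeps 2 - 5 sigma > 0

def vartheta(n):
    beta = 1.0 / (n + 1)
    return beta * (2.0 - 5.0 * sigma) / (1.0 - beta * sigma)

tail = vartheta(10**6)           # tends to 0 as n grows
partial = sum(vartheta(n) for n in range(1, 10**5))
# vartheta(n) >= beta_n (2 - 5 sigma), so the partial sums diverge
# at least as fast as a constant multiple of the harmonic series.
```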

Theorem 4.2

If Assumptions 3.1 and 3.2 are satisfied and Lemma 4.1 and Claim (a) hold, then the sequence \(\lbrace x_{n} \rbrace \) generated by our algorithm converges strongly to an element of the solution set Γ.

Proof

In order to show that \(\lbrace x_{n} \rbrace \) converges strongly to an element of the solution set Γ, we consider the following two cases.

Case 1: Suppose that there exists \(n_{0} \in \mathbb{N}\) such that \(\lbrace \|x_{n} - x^{*}\| \rbrace _{n\geq 1}\) is nonincreasing. Then, the limit of \(\lbrace \|x_{n} - x^{*}\| \rbrace _{n\geq 1}\) exists.

Clearly

$$\begin{aligned} \lim_{n\rightarrow \infty} \bigl( \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \bigr) =0. \end{aligned}$$

It follows from the Algorithm 3.3 and the condition (ii) of Assumption 3.2 that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert w_{n} - x_{n} \Vert = \lim_{n\rightarrow \infty} \theta _{n} \Vert x_{n} - x_{n-1} \Vert = \lim_{n\rightarrow \infty} \beta _{n} \frac{\theta _{n}}{\beta _{n}} \Vert x_{n} - x_{n-1} \Vert = 0. \end{aligned}$$
(4.35)

Thus,

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert w_{n} - x_{n} \Vert = 0. \end{aligned}$$
(4.36)

Moreover, since \(\lim_{n\rightarrow \infty} \lbrace \|x_{n} - x^{*}\| \rbrace \) exists, we have from the estimates (4.11), (4.19), (4.26), (4.27), and (4.33)

$$\begin{aligned} \lim_{n\rightarrow \infty} \bigl\Vert x_{n} - x^{*} \bigr\Vert = \lim_{n\rightarrow \infty} \bigl\Vert w_{n} - x^{*} \bigr\Vert = \lim_{n\rightarrow \infty} \bigl\Vert p_{n} - x^{*} \bigr\Vert = \lim _{n\rightarrow \infty} \bigl\Vert q_{n} - x^{*} \bigr\Vert . \end{aligned}$$
(4.37)

Also,

$$\begin{aligned} \lim_{n\rightarrow \infty} \bigl\Vert y_{n} - Ax^{*} \bigr\Vert = \lim _{n\rightarrow \infty} \bigl\Vert v_{n} - Ax^{*} \bigr\Vert = \lim _{n\rightarrow \infty} \bigl\Vert P_{D}(Ap_{n}) - Ax^{*} \bigr\Vert . \end{aligned}$$

It follows from (4.10) and (4.37) that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert z_{n} - t_{n} \Vert =\lim_{n\rightarrow \infty} \Vert q_{n} - t_{n} \Vert =0. \end{aligned}$$
(4.38)

Subsequently,

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert z_{n} - q_{n} \Vert =0. \end{aligned}$$

In a similar way, following (4.37) and (4.18) we obtain that

$$\begin{aligned} \lim_{n\rightarrow \infty} \bigl\Vert P_{D}(Ap_{n}) - u_{n} \bigr\Vert =\lim_{n \rightarrow \infty} \Vert v_{n} - u_{n} \Vert =0. \end{aligned}$$
(4.39)

Also, from (4.37) and (4.26) we obtain

$$\begin{aligned} \lim_{n\rightarrow \infty} \bigl\Vert S(s)y_{n} - Ap_{n} \bigr\Vert = \lim_{n\rightarrow \infty} \bigl\Vert A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\Vert =0. \end{aligned}$$
(4.40)

Since \(p_{n} \in C\), it follows from the definition of \(q_{n}\) and (4.40) that

$$\begin{aligned} \Vert q_{n} - p_{n} \Vert &= \bigl\Vert P_{C} \bigl(p_{n} + \gamma _{n} A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr) - p_{n} \bigr\Vert \\ &= \bigl\Vert P_{C} \bigl(p_{n} + \gamma _{n} A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr) - P_{C}(p_{n}) \bigr\Vert \\ &\leq \bigl\Vert p_{n} + \gamma _{n} A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) - p_{n} \bigr\Vert \\ &= \bigl\Vert \gamma _{n} A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\Vert \\ &= \gamma _{n} \bigl\Vert A^{*} \bigl(S(s)y_{n} - Ap_{n} \bigr) \bigr\Vert \to 0. \end{aligned}$$
(4.41)

Therefore,

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert q_{n} - p_{n} \Vert =0. \end{aligned}$$
(4.42)

It follows from the same argument that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert q_{n} - w_{n} \Vert =0. \end{aligned}$$
(4.43)

Using the definition of \(y_{n}\), Lemma 2.9(4) and (4.37) we estimate that

$$\begin{aligned} \bigl\Vert y_{n} - x^{*} \bigr\Vert ^{2} ={}& \biggl\Vert (1-\alpha _{n})v_{n} + \alpha _{n} \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds - x^{*} \biggr\Vert ^{2} \\ ={}& \biggl\Vert (1-\alpha _{n}) \bigl(v_{n}-x^{*} \bigr) + \alpha _{n} \biggl(\frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds - x^{*} \biggr) \biggr\Vert ^{2} \\ \leq {}&(1-\alpha _{n}) \bigl\Vert v_{n}-x^{*} \bigr\Vert ^{2} + \alpha _{n} \biggl\Vert \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds - x^{*} \biggr\Vert ^{2} \\ &{}- \alpha _{n}(1-\alpha _{n}) \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds \biggr\Vert ^{2} \\ ={}&(1-\alpha _{n}) \bigl\Vert v_{n}-x^{*} \bigr\Vert ^{2} + \alpha _{n} \biggl\Vert \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds - S(s) \bigl(x^{*} \bigr) \biggr\Vert ^{2} \\ &{}- \alpha _{n}(1-\alpha _{n}) \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds \biggr\Vert ^{2} \\ \leq{} &(1-\alpha _{n}) \bigl\Vert v_{n}-x^{*} \bigr\Vert ^{2} + \alpha _{n} \bigl\Vert v_{n} -x^{*} \bigr\Vert ^{2} \\ &{}- \alpha _{n}(1-\alpha _{n}) \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds \biggr\Vert ^{2} \\ ={}& \bigl\Vert v_{n} -x^{*} \bigr\Vert ^{2} - \alpha _{n}(1-\alpha _{n}) \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds \biggr\Vert ^{2}. \end{aligned}$$
(4.44)

Hence,

$$\begin{aligned} \alpha _{n}(1-\alpha _{n}) \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds \biggr\Vert ^{2} \leq \bigl\Vert v_{n} -x^{*} \bigr\Vert ^{2} - \bigl\Vert y_{n} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(4.45)

We obtain from (4.45) that

$$\begin{aligned} \lim_{n\rightarrow \infty} \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds \biggr\Vert ^{2}=0. \end{aligned}$$
(4.46)

In a similar way, using the definition of \(p_{n}\), Lemma 2.9(4), and (4.44)–(4.46) we obtain that

$$\begin{aligned} \lim_{n\rightarrow \infty} \biggl\Vert w_{n} - \frac{1}{t_{n}} \int ^{t_{n}}_{0} T(t)w_{n} \,dt \biggr\Vert ^{2}=0. \end{aligned}$$
(4.47)

Observe that

$$\begin{aligned} \bigl\Vert v_{n} - S(t)v_{n} \bigr\Vert ={}& \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds + \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds - S(t) \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds \\ &{} + S(t) \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds - S(t)v_{n} \biggr\Vert \\ \leq {}& \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds \biggr\Vert + \biggl\Vert \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds - S(t) \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds \biggr\Vert \\ &{}+ \biggl\Vert S(t) \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds - S(t)v_{n} \biggr\Vert \\ \leq {}& 2 \biggl\Vert v_{n} - \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds \biggr\Vert \\ &{}+ \biggl\Vert \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds - S(t) \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s) v_{n} \,ds \biggr\Vert . \end{aligned}$$
(4.48)

Now, it follows from (4.46) and Lemma 2.11 that

$$\begin{aligned} \lim_{n\rightarrow \infty} \bigl\Vert v_{n} - S(t)v_{n} \bigr\Vert =0. \end{aligned}$$
(4.49)

Following the same line of argument as in (4.48), using (4.47) and Lemma 2.11, we obtain that

$$\begin{aligned} \lim_{n\rightarrow \infty} \bigl\Vert w_{n} - T(s)w_{n} \bigr\Vert =0. \end{aligned}$$
(4.50)

Furthermore, since \(y_{n},v_{n} \in D\), we obtain

$$\begin{aligned} \Vert y_{n} -v_{n} \Vert &= \biggl\Vert (1-\alpha _{n})v_{n} + \alpha _{n} \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds -v_{n} \biggr\Vert \\ &= \biggl\Vert (1-\alpha _{n}) (v_{n}-v_{n})+ \alpha _{n} \biggl(\frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds -v_{n} \biggr) \biggr\Vert \\ &=\alpha _{n} \biggl\Vert \frac{1}{s_{n}} \int ^{s_{n}}_{0} S(s)v_{n} \,ds -v_{n} \biggr\Vert \to 0. \end{aligned}$$
(4.51)

Now, using (4.46) in (4.51) we obtain

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert y_{n} - v_{n} \Vert =0. \end{aligned}$$
(4.52)

Using (4.52) and (4.39), together with the triangle inequality \(\|y_{n} - u_{n}\|\leq \|y_{n} - v_{n} \|+\|v_{n} - u_{n}\|\), and letting \(n\rightarrow \infty \), we conclude that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert y_{n} - u_{n} \Vert = \lim_{n\rightarrow \infty} \Vert v_{n} - u_{n} \Vert =0. \end{aligned}$$
(4.53)

It is also clear from (4.38) and (4.42) that

$$\begin{aligned} \Vert z_{n} - p_{n} \Vert \leq \Vert z_{n} - t_{n} \Vert + \Vert t_{n} - q_{n} \Vert + \Vert q_{n} -p_{n} \Vert . \end{aligned}$$
(4.54)

Now, letting \(n\rightarrow \infty \) in (4.54), we obtain that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert z_{n} -p_{n} \Vert = \lim_{n\rightarrow \infty} \Vert z_{n} -t_{n} \Vert =0. \end{aligned}$$
(4.55)

From (4.38), it is clear that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert z_{n} - q_{n} \Vert \leq 2 \lim_{n\rightarrow \infty} \Vert z_{n} - t_{n} \Vert =0. \end{aligned}$$
(4.56)

Since \(q_{n}, x_{n} \in C\), using the definition of \(q_{n}\), we obtain the following inequality

$$\begin{aligned} \Vert q_{n} - x_{n} \Vert &= \bigl\Vert P_{C} \bigl(p_{n} + \gamma _{n} A^{*}(Sy_{n} - Ap_{n}) \bigr)-x_{n} \bigr\Vert \\ &= \bigl\Vert P_{C} \bigl(p_{n} + \gamma _{n} A^{*}(Sy_{n} - Ap_{n}) \bigr)-P_{C}(x_{n}) \bigr\Vert \\ &\leq \bigl\Vert p_{n} + \gamma _{n} A^{*}(Sy_{n} - Ap_{n})-x_{n} \bigr\Vert \\ &= \bigl\Vert p_{n}-x_{n} + \gamma _{n} A^{*}(Sy_{n} - Ap_{n}) \bigr\Vert \\ &\leq \Vert p_{n} - x_{n} \Vert + \gamma _{n} \bigl\Vert A^{*}(Sy_{n} - Ap_{n}) \bigr\Vert . \end{aligned}$$
(4.57)

Using (4.42), (4.43), and (4.36)

$$\begin{aligned} \Vert p_{n} -x_{n} \Vert \leq \Vert p_{n} -q_{n} \Vert + \Vert q_{n} - w_{n} \Vert + \Vert w_{n} - x_{n} \Vert . \end{aligned}$$

Letting \(n\rightarrow \infty \) in (4.57) and using the estimate above, we obtain

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert q_{n} - x_{n} \Vert =0. \end{aligned}$$
(4.58)

Combining (4.42) and (4.43), we conclude that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert p_{n} -w_{n} \Vert \leq \lim_{n\rightarrow \infty} \bigl( \Vert p_{n} - q_{n} \Vert + \Vert q_{n} -w_{n} \Vert \bigr)=0. \end{aligned}$$
(4.59)

We note, however, that \(\|z_{n} - x_{n}\| \leq \|z_{n} - p_{n}\| + \|p_{n} -x_{n}\|\).

Letting \(n\rightarrow \infty \) in the above inequality, we obtain that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert z_{n} - x_{n} \Vert \leq \lim_{n \rightarrow \infty} \Vert z_{n} - p_{n} \Vert + \lim_{n\rightarrow \infty} \Vert p_{n} - x_{n} \Vert =0. \end{aligned}$$
(4.60)

However,

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert z_{n} - p_{n} \Vert \leq \lim_{n \rightarrow \infty} \bigl( \Vert z_{n} -q_{n} \Vert + \Vert q_{n} -p_{n} \Vert \bigr) =0. \end{aligned}$$

Using (4.38) we obtain

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert z_{n} -x_{n} \Vert =0. \end{aligned}$$

Now, to establish that the sequence \(\lbrace x_{n} \rbrace \) is asymptotically regular, we have the following

$$\begin{aligned} \Vert x_{n+1} - x_{n} \Vert &= \bigl\Vert (1-\sigma _{n} -\beta _{n})p_{n} + \sigma _{n} z_{n} - x_{n} \bigr\Vert \\ &= \bigl\Vert (1-\sigma _{n} -\beta _{n}) (p_{n}-x_{n}) + \sigma _{n} (z_{n} - x_{n}) -\beta _{n} x_{n} \bigr\Vert \\ &\leq (1-\sigma _{n} -\beta _{n}) \Vert p_{n}-x_{n} \Vert + \sigma _{n} \Vert z_{n} - x_{n} \Vert +\beta _{n} \Vert x_{n} \Vert . \end{aligned}$$
(4.61)

Considering the conditions on \(\sigma _{n}\) and \(\beta _{n}\), and using (4.58) and (4.60) in (4.61), we obtain

$$\begin{aligned} \Vert x_{n+1} - x_{n} \Vert &< (1-\beta _{n}) \Vert z_{n} - x_{n} \Vert +\beta _{n} \Vert x_{n} \Vert \\ &\leq \Vert z_{n} - x_{n} \Vert +\beta _{n} \Vert x_{n} \Vert \to 0. \end{aligned}$$
(4.62)

It follows from (4.62) that

$$\begin{aligned} \lim_{n\rightarrow \infty} \Vert x_{n+1} - x_{n} \Vert =0. \end{aligned}$$
(4.63)

We also have from (4.39) that

$$\begin{aligned} \begin{aligned} & \bigl\Vert P_{D}(Ap_{n}) -v_{n} \bigr\Vert \leq \bigl\Vert P_{D}(Ap_{n})-u_{n} \bigr\Vert + \Vert u_{n} - v_{n} \Vert .\\ &\lim_{n\rightarrow \infty} \bigl\Vert P_{D}(Ap_{n}) - v_{n} \bigr\Vert =0.\end{aligned} \end{aligned}$$
(4.64)

The boundedness of \(\lbrace x_{n} \rbrace \) implies that there exists a subsequence \(\lbrace x_{n_{k}}\rbrace \) of \(\lbrace x_{n} \rbrace \) such that \(x_{n_{k}} \rightharpoonup q\) as \(k \rightarrow \infty \). Furthermore, since \(\lbrace x_{n} \rbrace \) is bounded, it follows that \(\lbrace w_{n} \rbrace, \lbrace p_{n} \rbrace, \lbrace q_{n} \rbrace, \lbrace t_{n} \rbrace \), and \(\lbrace z_{n} \rbrace \) have subsequences that converge to q weakly. Similarly, the sequences \(\lbrace u_{n} \rbrace, \lbrace v_{n} \rbrace, \lbrace P_{D}(Ap_{n}) \rbrace \), and \(\lbrace y_{n} \rbrace \) have subsequences that converge weakly to Aq.

However,

$$\begin{aligned} \limsup_{k\rightarrow \infty} \bigl\langle x^{*}, x^{*}-x_{n_{k}+1} \bigr\rangle &=\lim_{k\rightarrow \infty} \bigl\langle x^{*}, x^{*}-x_{n_{k}} \bigr\rangle \\ &= \bigl\langle x^{*}, x^{*}-q \bigr\rangle . \end{aligned}$$
(4.65)

Consequently, \(\lbrace Ax_{n_{k}}\rbrace \) also converges weakly to Aq. By (4.64), \(v_{n_{k}}\) converges weakly to Aq. Our goal is to show that \(q\in \Gamma \). From the construction of our algorithm, \(x_{n} \in C\) and \(v_{n} \in D, \forall n\in \mathbb{N}\). Since C and D are nonempty, closed, and convex sets, they are weakly closed. Thus, \(q\in C\) and \(Aq \in D\). Using Lemma 2.10 and the estimates in (4.1) and (4.12), we have that

$$\begin{aligned} \lambda _{n_{k}} \bigl(F(q_{n_{k}}, y) - F(q_{n_{k}},t_{n_{k}}) \bigr) &\geq \langle t_{n_{k}}-q_{n_{k}}, t_{n_{k}}-y\rangle \\ &\geq - \Vert t_{n_{k}}-q_{n_{k}} \Vert \Vert t_{n_{k}}-y \Vert , \quad\forall y\in C, \end{aligned}$$
(4.66)

and

$$\begin{aligned} & \mu _{n_{k}} \bigl(G \bigl(P_{D}(Ap_{n_{k}}), u \bigr) - G \bigl(P_{D}(Ap_{n_{k}}),u_{n_{k}} \bigr) \bigr) \\ &\quad\geq \bigl\langle u_{n_{k}}-P_{D}(Ap_{n_{k}}), u_{n_{k}}-u \bigr\rangle \\ &\quad\geq - \bigl\Vert u_{n_{k}}-P_{D}(Ap_{n_{k}}) \bigr\Vert \Vert u_{n_{k}}-u \Vert ,\quad \forall u \in D. \end{aligned}$$
(4.67)

Consequently, we obtain from (4.66) and (4.67) that

$$\begin{aligned} F(q_{n_{k}}, y) - F(q_{n_{k}},t_{n_{k}})+\frac{1}{\lambda _{n_{k}}} \Vert t_{n_{k}}-q_{n_{k}} \Vert \Vert t_{n_{k}}-y \Vert \geq 0, \quad\forall y \in C \end{aligned}$$

and

$$\begin{aligned} G \bigl(P_{D}(Ap_{n_{k}}), u \bigr) - G \bigl(P_{D}(Ap_{n_{k}}),u_{n_{k}} \bigr) + \frac{1}{\mu _{n_{k}}} \bigl\Vert u_{n_{k}}-P_{D}(Ap_{n_{k}}) \bigr\Vert \Vert u_{n_{k}}-u \Vert \geq 0,\quad \forall u\in D. \end{aligned}$$

Now, letting \(k\rightarrow \infty \) in the above inequalities, using (4.38) and (4.39), the conditions on \(\lambda _{n}, \mu _{n}\), and the weak continuity of F and G, we obtain that

$$\begin{aligned} F(q,y)\geq 0,\quad \forall y \in C \quad\text{and}\quad G(Aq,u)\geq 0, \quad \forall u\in D. \end{aligned}$$

This implies that

$$\begin{aligned} q \in EP(C, F) \quad\text{and}\quad Aq \in EP(D,G). \end{aligned}$$
(4.68)

Using Definition 2.8 together with (4.49) and (4.50), we obtain that

$$\begin{aligned} q \in F(T)\quad \text{and} \quad Aq \in F(S). \end{aligned}$$
(4.69)

The combination of (4.68) and (4.69) gives

$$\begin{aligned} q\in EP(C, F) \cap F(T) \quad\text{and}\quad Aq \in EP(D,G) \cap F(S), \end{aligned}$$

which implies that \(q \in \Gamma \). From the fact that \(q \in \Gamma \) and (4.65), we obtain

$$\begin{aligned} \limsup_{n\rightarrow \infty} \bigl\langle x^{*}, x^{*}-x_{n+1} \bigr\rangle = \bigl\langle x^{*}, x^{*}-q \bigr\rangle \leq 0. \end{aligned}$$
(4.70)

Using the estimate in (4.34) and the conditions that follow it, together with (4.70) and Lemma 2.12, we conclude that the sequence \(\lbrace x_{n} \rbrace \) converges strongly to q, that is, \(q= P_{\Gamma}0\). This completes Case 1.

Case 2: Suppose there exists a subsequence \(\lbrace n_{i} \rbrace \) of \(\lbrace n\rbrace \) such that

$$\begin{aligned} \Vert x_{n_{i}} - q \Vert \leq \Vert x_{n_{i}+1} -q \Vert , \quad \forall i\in \mathbb{N}. \end{aligned}$$

By Lemma 2.13, there exists a nondecreasing sequence \(\lbrace m_{k} \rbrace \subset \mathbb{N}\) such that \(m_{k} \rightarrow \infty \),

$$\begin{aligned} \Vert x_{m_{k}} -q \Vert \leq \Vert x_{m_{k}+1} -q \Vert \quad\text{and}\quad \Vert x_{k} -q \Vert \leq \Vert x_{m_{k}+1} -q \Vert ,\quad \forall k \in \mathbb{N}. \end{aligned}$$
(4.71)

From (4.32), (4.10), and Lemma 2.9(4) we obtain

$$\begin{aligned} &\bigl\Vert x_{n+1} - x^{*} \bigr\Vert ^{2} \\ &\quad\leq (1-\beta _{n}) \bigl\Vert (1-\sigma _{n})p_{n} + \sigma _{n} z_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad\leq \bigl\Vert (1-\sigma _{n})p_{n} + \sigma _{n} z_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad= \bigl\Vert (1-\sigma _{n}) \bigl(p_{n} -x^{*} \bigr)+ \sigma _{n} \bigl(z_{n} -x^{*} \bigr) \bigr\Vert ^{2} \\ &\quad\leq (1-\sigma _{n}) \bigl\Vert p_{n}- x^{*} \bigr\Vert ^{2} + \sigma _{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} -\sigma _{n}(1-\sigma _{n}) \Vert p_{n} -z_{n} \Vert ^{2} \\ &\quad\leq (1-\sigma _{n}) \bigl\Vert p_{n}- x^{*} \bigr\Vert ^{2} + \sigma _{n} \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \\ &\quad\leq (1-\sigma _{n}) \bigl\Vert p_{n}- x^{*} \bigr\Vert ^{2} \\ &\qquad{}+ \sigma _{n} \biggl[ \bigl\Vert q_{n} - x^{*} \bigr\Vert ^{2} - \biggl(1- \frac{\tau \lambda _{n}}{\lambda _{n+1}} \biggr) \Vert z_{n}-t_{n} \Vert ^{2} - \biggl(1-\frac{\tau \lambda _{n}}{\lambda _{n+1}} \biggr) \Vert q_{n} -t_{n} \Vert ^{2} \biggr] \\ &\quad\leq (1-\sigma _{n}) \bigl\Vert p_{n}- x^{*} \bigr\Vert ^{2} \\ &\qquad{} + \sigma _{n} \biggl[ \bigl\Vert p_{n} - x^{*} \bigr\Vert ^{2} - \biggl(1- \frac{\tau \lambda _{n}}{\lambda _{n+1}} \biggr) \Vert z_{n}-t_{n} \Vert ^{2} - \biggl(1-\frac{\tau \lambda _{n}}{\lambda _{n+1}} \biggr) \Vert q_{n} -t_{n} \Vert ^{2} \biggr] \\ &\quad= \bigl\Vert p_{n}- x^{*} \bigr\Vert ^{2} - \sigma _{n} \biggl[ \biggl(1- \frac{\tau \lambda _{n}}{\lambda _{n+1}} \biggr) \Vert z_{n}-t_{n} \Vert ^{2} + \biggl(1- \frac{\tau \lambda _{n}}{\lambda _{n+1}} \biggr) \Vert q_{n} -t_{n} \Vert ^{2} \biggr] \\ &\quad\leq \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2} + 3\beta ^{2}_{n} M_{1} \\ &\qquad{}-\sigma _{n} \biggl[ \biggl(1- \frac{\tau \lambda _{n}}{\lambda _{n+1}} \biggr) \Vert z_{n}-t_{n} \Vert ^{2} + \biggl(1- \frac{\tau \lambda _{n}}{\lambda _{n+1}} \biggr) \Vert q_{n} -t_{n} \Vert ^{2} \biggr]. \end{aligned}$$
(4.72)

Applying (4.72) at \(n=m_{k}\) and using (4.71), it follows that

$$\begin{aligned} &\sigma _{m_{k}} \biggl[ \biggl(1-\frac{\tau \lambda _{m_{k}}}{\lambda _{m_{k}+1}} \biggr) \Vert z_{m_{k}}-t_{m_{k}} \Vert ^{2} + \biggl(1- \frac{\tau \lambda _{m_{k}}}{\lambda _{m_{k}+1}} \biggr) \Vert q_{m_{k}} -t_{m_{k}} \Vert ^{2} \biggr] \\ &\quad\leq 3\beta ^{2}_{m_{k}} M_{1} + \Vert x_{m_{k}} - q \Vert ^{2} - \Vert x_{m_{k}+1}-q \Vert ^{2} \\ &\quad\leq 3\beta ^{2}_{m_{k}} M_{1}. \end{aligned}$$
(4.73)

With the condition on \(\beta _{n}\) and the fact that the limit of \(\lambda _{n}\) exists, we obtain

$$\begin{aligned} \lim_{k\rightarrow \infty} \Vert z_{m_{k}} - t_{m_{k}} \Vert =\lim_{k \rightarrow \infty} \Vert q_{m_{k}}-t_{m_{k}} \Vert =0\quad \text{and}\quad \lim_{k \rightarrow \infty} \Vert z_{m_{k}} - q_{m_{k}} \Vert =0. \end{aligned}$$

Following the same argument as in case 1, we have

$$\begin{aligned} \limsup_{k \rightarrow \infty} \bigl\langle x^{*}, x^{*}-x_{m_{k}+1} \bigr\rangle = \lim_{k \rightarrow \infty} \bigl\langle x^{*}, x^{*}-x_{m_{k}} \bigr\rangle = \bigl\langle x^{*}, x^{*}-q \bigr\rangle \leq 0. \end{aligned}$$

Furthermore, it follows from (4.34) and (4.71) that

$$\begin{aligned} &\Vert x_{m_{k}+1} - q \Vert ^{2} \\ &\quad\leq (1-\vartheta _{m_{k}}) \Vert x_{m_{k}} -q \Vert ^{2} + \vartheta _{m_{k}} \biggl( \frac{\beta _{m_{k}} (M_{0} + K_{0}) + 2\langle x^{*}, x^{*} - x_{m_{k}+1}\rangle}{2-5\sigma _{m_{k}}} \biggr) \\ &\quad\leq (1-\vartheta _{m_{k}}) \Vert x_{m_{k}+1} - q \Vert ^{2} + \vartheta _{m_{k}} \biggl( \frac{\beta _{m_{k}} (M_{0} + K_{0}) + 2\langle x^{*}, x^{*} - x_{m_{k}+1}\rangle}{2-5\sigma _{m_{k}}} \biggr), \end{aligned}$$
(4.74)

hence,

$$\begin{aligned} \vartheta _{m_{k}} \Vert x_{m_{k}+1} - q \Vert ^{2} \leq \vartheta _{m_{k}} \biggl( \frac{\beta _{m_{k}} (M_{0} + K_{0}) + 2\langle x^{*}, x^{*} - x_{m_{k}+1}\rangle}{2-5\sigma _{m_{k}}} \biggr). \end{aligned}$$

Since \(\vartheta _{m_{k}} > 0\) and using (4.71) we obtain

$$\begin{aligned} \Vert x_{m_{k}} -q \Vert ^{2} \leq \Vert x_{m_{k}+1} - q \Vert ^{2} \leq \biggl( \frac{\beta _{m_{k}} (M_{0} + K_{0}) + 2\langle x^{*}, x^{*} - x_{m_{k}+1}\rangle}{2-5\sigma _{m_{k}}} \biggr). \end{aligned}$$

Taking the limit in the above inequality as \(k \rightarrow \infty \), we deduce that \(\lbrace x_{n} \rbrace \) converges strongly to \(q=P_{\Gamma}0\). This completes the proof of Theorem 4.2. □

5 Applications

We apply our Algorithm 3.3 to variational inequality problems for monotone and Lipschitz-type continuous mappings. Throughout this section, H is a real Hilbert space and C a nonempty, closed, and convex subset of H. Let \(T: C\rightarrow C\) be a nonlinear operator.

The mapping T is said to be

  • monotone on C if

    $$\begin{aligned} \langle Tx-Ty, x-y \rangle \geq 0,\quad\text{for all } x,y \in C; \end{aligned}$$
  • pseudomonotone on C if

    $$\begin{aligned} \langle Tx,y-x\rangle \geq 0 \quad\Rightarrow\quad \langle Ty, x-y\rangle \leq 0,\quad \text{for all } x,y \in C; \end{aligned}$$
  • L-Lipschitz continuous on C if there exists a positive constant L such that

    $$\begin{aligned} \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert ,\quad\text{for all } x,y \in C. \end{aligned}$$
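As a concrete check of these definitions, the sketch below (a hypothetical NumPy example, not taken from the paper) builds the linear map \(Tx=Mx\) with M positive semidefinite, which is monotone, and L-Lipschitz with L equal to the spectral norm of M:

```python
import numpy as np

# Hypothetical example: for M positive semidefinite, T(x) = Mx is monotone on R^k,
# and T is L-Lipschitz with L = ||M||_2 (the spectral norm).
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B.T @ B                          # Gram matrix: symmetric positive semidefinite
T = lambda x: M @ x

L = np.linalg.norm(M, 2)             # spectral norm of M

x = rng.standard_normal(4)
y = rng.standard_normal(4)
mono = (T(x) - T(y)) @ (x - y)       # <Tx - Ty, x - y>, nonnegative by PSD-ness
lip_ok = np.linalg.norm(T(x) - T(y)) <= L * np.linalg.norm(x - y) + 1e-12
assert mono >= -1e-12 and lip_ok
```

Every monotone map is pseudomonotone, so this example satisfies all three properties above.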

The variational inequality \((VI)\) has the following structure:

$$\begin{aligned} \text{find } x^{*} \in C \text{ such that } \bigl\langle Tx^{*},x-x^{*} \bigr\rangle \geq 0,\quad \forall x \in C. \end{aligned}$$
(5.1)

Now, for every \(x,y \in H\), define \(f(x,y)=\langle Tx, y-x\rangle \); then the equilibrium problem (1.7) becomes the variational inequality (5.1), where the single-valued operator T is monotone and Lipschitz continuous. Let problem (5.1) and its solution set be denoted by \(VI(C,T)\) and \(SOL\,VI(C,T)\), respectively. We shall assume that T satisfies the following conditions:

  1. (C1) T is pseudomonotone on C;

  2. (C2) T is weak-to-strong continuous on C, that is, \(Tx_{n} \rightarrow Tx\) for each sequence \(\lbrace x_{n} \rbrace \subset C\) converging weakly to x;

  3. (C3) T is \(L_{1}\)-Lipschitz continuous on C for some positive constant \(L_{1}\).

Let C and D be two nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let \(A_{1}: C \rightarrow C\) and \(A_{2}: D\rightarrow D\) be \(L_{1}\)- and \(L_{2}\)-Lipschitz continuous on C and D, respectively. Suppose \(B: H_{1} \rightarrow H_{2}\) is a bounded linear operator and \(B^{*}: H_{2} \rightarrow H_{1}\), its adjoint. Let \(T(t)\) and \(S(s)\) be two nonexpansive semigroups defined on C and D, respectively. We consider the extragradient method for solving (5.1). Let \(\lbrace x_{n} \rbrace \) be a sequence generated by the following algorithm:

Algorithm 5.1

(Self-Adaptive Inertial Extragradient Method for VI Problem)

Iterative Steps: Step 0: Let \(x_{0}, x_{1} \in H_{1}, \delta _{0}, \tau _{0} >0, \mu \in (0,1)\).

Step 1: Given the current iterates \(x_{n}, x_{n-1}\ (n\geq 1)\), choose \(\theta _{n}\) such that \(0< \theta _{n} \leq \overline{\theta _{n}}\), where

$$\begin{aligned} \overline{\theta _{n}} := \textstyle\begin{cases} \min \lbrace \theta, \frac{\epsilon _{n}}{ \Vert x_{n}-x_{n-1} \Vert }\rbrace & \text{if }x_{n} \neq x_{n-1}, \\ \theta & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(5.2)

and compute

$$\begin{aligned} \textstyle\begin{cases} w_{n}=x_{n} + \theta _{n}(x_{n} - x_{n-1}), \\ p_{n} = (1-\delta _{n})w_{n} + \delta _{n} \frac{1}{t_{n}} \int ^{t_{n}} _{0} T(t)w_{n} \,dt, \\ u_{n} = P_{D}(P_{D}(Bx_{n})-\mu _{n}A_{2}(P_{D}(Bx_{n}))), \\ v_{n} = P_{D}(P_{D}(Bx_{n})-\mu _{n} A_{2}(u_{n})), \\ y_{n}= (1-\alpha _{n}) v_{n} + \alpha _{n} \frac{1}{s_{n}}\int ^{s_{n}}_{0} S(s) v_{n} \,ds, \\ q_{n}= P_{C}(p_{n} + \gamma _{n} B^{*}(S (s)y_{n} - Bp_{n})), \\ t_{n} = P_{C}(q_{n} -\lambda _{n} A_{1}q_{n}), \\ z_{n} = P_{C}(q_{n}-\lambda _{n} A_{1} t_{n}), \\ x_{n+1} = (1-\sigma _{n} - \beta _{n} )p_{n} + \sigma _{n} z_{n}, \quad n\geq 1, \end{cases}\displaystyle \end{aligned}$$
(5.3)

where

$$\begin{aligned} {\mu _{n+1}} := \textstyle\begin{cases} \min \lbrace \frac{\delta \Vert u_{n}-v_{n} \Vert ^{2}}{ \Vert A_{2} u_{n}-A_{2} v_{n} \Vert },\mu _{n} \rbrace &\text{if } A_{2} u_{n} \neq A_{2} v_{n}, \\ \mu _{n} & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(5.4)

for a given small enough \(\epsilon > 0\), such that \(\gamma _{n} \in (\epsilon, \frac{\|S(s)y_{n} - Bp_{n}\|^{2}}{\|B^{*}(S(s)y_{n} - Bp_{n})\|^{2}} - \epsilon )\), and

$$\begin{aligned} {\lambda _{n+1}} := \textstyle\begin{cases} \min \lbrace \frac{\tau \Vert t_{n}-z_{n} \Vert ^{2}}{ \Vert A_{1} t_{n}-A_{1} z_{n} \Vert }, \lambda _{n} \rbrace & \text{if } A_{1} t_{n} \neq A_{1} z_{n}, \\ \lambda _{n} & \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
(5.5)

Set \(\boldsymbol {n:=n+1}\) and return to step 1.
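The inertia rule (5.2) and the self-adaptive step size (5.5) are straightforward to implement. The Python sketch below mirrors these two updates; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def inertial_theta(x_n, x_prev, theta, eps_n):
    """theta_n as in (5.2): capped so that theta_n * ||x_n - x_{n-1}|| <= eps_n."""
    diff = np.linalg.norm(x_n - x_prev)
    return min(theta, eps_n / diff) if diff > 0 else theta

def next_lambda(t_n, z_n, A1_t, A1_z, lam_n, tau):
    """Self-adaptive step-size update (5.5): no Lipschitz constant of A_1 is
    needed, and {lambda_n} is nonincreasing by construction."""
    denom = np.linalg.norm(A1_t - A1_z)
    if denom > 0:
        return min(tau * np.linalg.norm(t_n - z_n) ** 2 / denom, lam_n)
    return lam_n
```

Since (5.5) never increases the step size, \(\lbrace \lambda _{n} \rbrace \) is nonincreasing and bounded below, so its limit exists — the fact invoked after (4.73).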

Theorem 5.2

Let \(A_{1}\) and \(A_{2}\) be mappings defined on C and D, respectively, such that assumptions (C1)–(C3) hold and \(\Gamma:= \lbrace p\in VI(C,A_{1}) \cap F(T(t)): Bp\in VI(D, A_{2}) \cap F(S(s)) \rbrace \neq \emptyset \). If Assumptions 3.1 and 3.2 are met, then the sequence \(\lbrace x_{n} \rbrace \) generated by Algorithm 5.1 strongly converges to \(q=P_{\Gamma}0\).

Proof

Since the single-valued operator \(A_{1}\) satisfies assumptions (C1)–(C2), one can easily verify that the bifunction \(f(x,y)=\langle A_{1}x, y-x\rangle \) satisfies conditions (B1)–(B4). Since \(A_{1}\) is \(L_{1}\)-Lipschitz continuous on C, we obtain that

$$\begin{aligned} f(x,y)+f(y,z)-f(x,z)&=\langle A_{1} x-A_{1} y,y-z\rangle \\ &\geq - \Vert A_{1}x - A_{1}y \Vert \Vert y-z \Vert \\ &\geq -\frac{\tau}{\lambda _{n+1}} \Vert x-y \Vert \Vert y-z \Vert \\ &\geq -\frac{\tau}{2\lambda _{n+1}} \Vert x-y \Vert ^{2}- \frac{\tau}{2\lambda _{n+1}} \Vert y-z \Vert ^{2}. \end{aligned}$$
(5.6)

Then, f satisfies the Lipschitz-type condition on C with \(c_{1}=c_{2}=\frac{\tau}{2\lambda _{n+1}}\). Therefore, the bifunction f satisfies condition (B5).
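The Lipschitz-type inequality (5.6) is easy to sanity-check numerically. In the sketch below (hypothetical data, not the paper's), \(A_{1}x = Mx\) with M positive semidefinite, so \(A_{1}\) is \(L_{1}\)-Lipschitz with \(L_{1}=\Vert M \Vert _{2}\), and the inequality holds with constants \(c_{1}=c_{2}=L_{1}/2\):

```python
import numpy as np

# Hypothetical data: A_1(x) = Mx with M PSD, so A_1 is L_1-Lipschitz with
# L_1 = ||M||_2, and f(x,y) = <A_1 x, y - x> satisfies the analogue of (5.6)
# with Lipschitz-type constants c_1 = c_2 = L_1 / 2.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
M = B.T @ B
L1 = np.linalg.norm(M, 2)
f = lambda x, y: (M @ x) @ (y - x)

x, y, z = (rng.standard_normal(5) for _ in range(3))
lhs = f(x, y) + f(y, z) - f(x, z)     # algebraically equals <A_1 x - A_1 y, y - z>
rhs = -(L1 / 2) * (np.linalg.norm(x - y) ** 2 + np.linalg.norm(y - z) ** 2)
assert lhs >= rhs - 1e-9
```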

Now, from the definition of \(q_{n}\) and f, we obtain that

$$\begin{aligned} t_{n} &= \operatorname{argmin} \biggl\lbrace \lambda _{n} \langle A_{1}q_{n}, y-q_{n} \rangle + \frac{1}{2} \Vert y-q_{n} \Vert ^{2}; y\in C \biggr\rbrace \\ &=\operatorname{argmin} \biggl\lbrace \frac{1}{2} \bigl\Vert y-(q_{n}-\lambda _{n}A_{1}q_{n}) \bigr\Vert ^{2}; y\in C \biggr\rbrace \\ &=P_{C}(q_{n} -\lambda _{n} A_{1}q_{n}). \end{aligned}$$
(5.7)

Similarly, we can obtain that \(u_{n}=P_{D}(P_{D}(Bx_{n})-\mu _{n} A_{2}(P_{D}(Bx_{n}))), v_{n}=P_{D}(P_{D}(Bx_{n})- \mu _{n} A_{2}(u_{n}))\), and \(z_{n} =P_{C}(q_{n} -\lambda _{n} A_{1} t_{n})\). This shows that the extragradient method in (3.3) reduces to (5.3), and the conclusion follows from Theorem 4.2. □

6 Numerical illustration

We present some numerical illustrations in this section and compare our Algorithm 3.3 with Algorithm (3.1) of Narin et al. [46] and Algorithm (3.2) of Arfat et al. [36]. All the codes were written in MATLAB R2018a, and all computations were performed on a personal computer with an Intel(R) Core(TM) i5-4300U CPU at 1.90 GHz, 8.00 GB RAM, and a 64-bit operating system.

In our computations, we define \(TOL_{n}:= \|x_{n+1}-x_{n}\|\) for our Algorithm 3.3, Algorithm 3.1 of Narin et al. [35], and Algorithm 3.2 of Arfat et al. [36], respectively, and use the stopping criterion \(TOL_{n} < \epsilon \) for the iterative processes, where ϵ is the predetermined error.

We consider equilibrium problems for bifunctions \(F: H \times H \rightarrow R\) arising from Nash–Cournot oligopolistic models of electricity markets [20, 21, 35]. The bifunctions are formulated as follows:

$$\begin{aligned} F(x,y)=(Px+Qy)^{T}(y-x),\quad \forall x,y \in R^{k}, \end{aligned}$$
(6.1)

and

$$\begin{aligned} G(u,v)=(U u+ Vv)^{T} (v-u),\quad \forall u,v \in R^{m}, \end{aligned}$$
(6.2)

where \(P,Q \in R^{k \times k}\) and \(U,V \in R^{m \times m}\) are positive-semidefinite matrices such that \(P-Q\) and \(U-V\) are positive-semidefinite matrices. It is well known that the bifunctions F and G satisfy conditions \((A1)- (A4)\) (see, [27] for details), with the Lipschitz-type constants \(c_{1}=c_{2}=\frac{\|P-Q\|}{2}\) and \(d_{1} = d_{2} = \frac{\|U-V\|}{2}\).
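One simple way to generate such matrices (not necessarily the authors' sampling procedure) is to build Q and the gap \(P-Q\) as Gram matrices, which are automatically positive semidefinite; here the spectral norm is used for the Lipschitz-type constants:

```python
import numpy as np

def random_psd_pair(k, rng):
    """Generate P, Q positive semidefinite with P - Q also positive
    semidefinite, by building Q and the gap P - Q as Gram matrices."""
    B = rng.standard_normal((k, k))
    C = rng.standard_normal((k, k))
    Q = B.T @ B          # PSD
    P = Q + C.T @ C      # P = Q + (PSD), so P and P - Q are PSD
    return P, Q

rng = np.random.default_rng(2)
P, Q = random_psd_pair(4, rng)
c = np.linalg.norm(P - Q, 2) / 2   # Lipschitz-type constants c_1 = c_2 = ||P - Q|| / 2
assert np.all(np.linalg.eigvalsh(P - Q) >= -1e-9)
assert np.all(np.linalg.eigvalsh(Q) >= -1e-9)
```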

Example

Let the bifunctions F and G be given as in (6.1) and (6.2), respectively. For computational purposes, the following boxes shall be considered: \(C= \prod^{k}_{i=1} [-5, 5], D= \prod^{m}_{j=1}[-20, 20], \overline{C}= \prod^{k}_{i=1}[-3,3]\), and \(\overline{D}= \prod^{m}_{j=1}[-10, 10]\). The nonexpansive mappings \(T: C \rightarrow C\) and \(S: D \rightarrow D\) are given by \(T=P_{\overline{C}}\) and \(S=P_{\overline{D}}\), respectively, while the linear operator \(A: R^{k} \rightarrow R^{m}\) is an \(m \times k\) matrix. Furthermore, the matrices \(P, Q, U\), and V are randomly generated in the interval \([-5,5]\) so that they satisfy the properties above, while the matrix A is generated randomly with entries in \((0, \frac{1}{k})\) and \([-2, 2]\). The control sequences are \(\theta _{n} =\frac{9.9}{10}-\frac{1}{n+1}\) and \(\alpha _{n} = \frac{1}{n+3}\). Note that our \(\mu _{n}\) and \(\lambda _{n}\) are generated at each iteration; for Algorithms (1.10) and (1.11), we take \(\mu _{n} = \lambda _{n} = \frac{1}{4 \max \lbrace b_{1}, b_{2}\rbrace}\). Moreover, while in our Algorithm 3.3 \(\gamma _{n}\) is generated at each iteration, in Algorithms (1.10) and (1.12) we take \(\gamma _{n} = \frac{1}{2\|A\|^{2}}\).

We shall consider the following cases in our numerical computation shown in Table 1:

Table 1 Numerical results comparing the Algorithms and their time of convergence and number of iterations

Case 1, \(m=50\).

Case 2, \(m=100\).

Case 3, \(m=150\).

Case 4, \(m=200\).

We note that to obtain the vector \(u_{n}\) in Algorithm 3.3, we need to solve the optimization problem

$$\begin{aligned} \operatorname{argmin} \biggl\lbrace \mu _{n} g \bigl(P_{D}(Aw_{n}), u \bigr)+ \frac{1}{2} \bigl\Vert u- P_{D}(Aw_{n}) \bigr\Vert ^{2}; u\in D \biggr\rbrace , \end{aligned}$$

which is equivalent to the following quadratic problem:

$$\begin{aligned} \operatorname{argmin} \biggl\lbrace \frac{1}{2}u^{T} Ju + K^{T} u: u\in D \biggr\rbrace , \end{aligned}$$
(6.3)

where \(J= 2 \mu _{n} V + I_{m}\) and \(K= \mu _{n} UP_{D}(Aw_{n})- \mu _{n} V P_{D}(Aw_{n}) - P_{D}(Aw_{n} )\), see [47].
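Since D is a box, a quadratic program like (6.3) can be solved by projected gradient descent, where projection onto a box is coordinate-wise clipping. The sketch below is one convenient solver, not necessarily the one used for the reported experiments (any QP solver, e.g. MATLAB's quadprog, works); names are illustrative:

```python
import numpy as np

def solve_box_qp(J, K, lo, hi, iters=2000):
    """Minimize 0.5 * u^T J u + K^T u over the box [lo, hi]^m (J positive
    definite) by projected gradient descent with step 1/||J||_2."""
    step = 1.0 / np.linalg.norm(J, 2)     # 1/L step for the gradient J u + K
    u = np.zeros(J.shape[0])
    for _ in range(iters):
        u = np.clip(u - step * (J @ u + K), lo, hi)   # gradient step + box projection
    return u
```

For instance, with \(J=I_{3}\), \(K=(-2,-2,-2)^{T}\), and the box \([-1,1]^{3}\), the minimizer is the clipped unconstrained solution \((1,1,1)^{T}\).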

On the other hand, in order to obtain the vector \(v_{n} \), we solve the following optimization problem

$$\begin{aligned} \operatorname{argmin} \biggl\lbrace \mu _{n} g(u_{n}, u)+ \frac{1}{2} \bigl\Vert u- P_{D}(Aw_{n}) \bigr\Vert ^{2}; u\in D \biggr\rbrace , \end{aligned}$$

which is equivalent to the following quadratic problem:

$$\begin{aligned} \operatorname{argmin} \biggl\lbrace \frac{1}{2}u^{T} \overline{J}u + \overline{K}^{T} u: u\in D \biggr\rbrace , \end{aligned}$$
(6.4)

where \(\overline{J}= J\) and \(\overline{K}= \mu _{n} Uu_{n} - \mu _{n} Vu_{n} - P_{D}(Aw_{n}) \). In the same way, the vector \(t_{n}\) is obtained by solving the optimization problem

$$\begin{aligned} \operatorname{argmin} \biggl\lbrace \lambda _{n}f(y_{n}, y)+ \frac{1}{2} \Vert y- y_{n} \Vert ^{2}; y\in C \biggr\rbrace , \end{aligned}$$

which is equivalent to the following quadratic problem:

$$\begin{aligned} \operatorname{argmin} \biggl\lbrace \frac{1}{2}y^{T} N y + M^{T} y:y\in C \biggr\rbrace , \end{aligned}$$
(6.5)

where \(N= 2\lambda _{n} Q + I_{k}\) and \(M=\lambda _{n} P y_{n} -\lambda _{n} Q y_{n} -y_{n}\), and \(z_{n}\) is obtained by solving the problem:

$$\begin{aligned} \operatorname{argmin} \biggl\lbrace \frac{1}{2} y^{T} K{^{*}} y + K^{**}{^{T}} y: y\in C \biggr\rbrace , \end{aligned}$$
(6.6)

where \(K^{*} = 2\lambda _{n} Q + I_{k}\) and \(K^{**}=\lambda _{n} P t_{n}-\lambda _{n} Qt_{n} -t_{n} \).

Our Algorithm 3.3 is tested using the stopping criterion \(\|x_{n+1} - x_{n}\| < 10^{-3}\).

7 Conclusion

An extragradient-type algorithm involving an inertial extrapolation term is constructed for solving split-equilibrium problems and fixed-point problems of nonexpansive semigroups. The scheme is easily implemented and practically useful, since it requires neither prior knowledge nor an estimate of the operator norm. Moreover, since the Lipschitz constants of the bifunctions often cannot be determined in practice, we employed self-adaptive step sizes. Under the assumption that the bifunctions are pseudomonotone, we established a strong convergence theorem. Finally, we applied our algorithm to variational inequality problems and fixed-point problems of nonexpansive semigroups in the framework of real Hilbert spaces.

References

  1. Combettes, P.L., Pesquet, J.-C.: Deep Neural Network Structures. arXiv:1808.07526. https://doi.org/10.48550/arXiv.1808.07526

  2. Heaton, H., Wu Fung, S., Gibali, A., et al.: Feasibility-based fixed point networks. Fixed Point Theory Algorithms Sci. Eng. 2021, Article ID 21 (2021). https://doi.org/10.1186/s13663-021-00706-3.

  3. Combettes, P.L., Pesquet, J.C.: Fixed point strategies in data science. IEEE Trans. Signal Process. 69, 3878–3905 (2021). https://doi.org/10.1109/TSP.2021.3069677

  4. Jung, A.: A fixed-point of view on gradient methods for big data. Front. Appl. Math. Stat. 3. https://doi.org/10.3389/fams.2017.00018

  5. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)

  6. Byrne, C.: A unified treatment for some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)

  7. Gibali, A.: A new split inverse problem and application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2(2), 243–258 (2017)

  8. Combettes, P.L.: The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 95, 155–453 (1996)

  9. Adler, R., Dedieu, J.P., Margulies, J.Y., Martens, M., Shub, M.: Newton’s method on Riemannian manifolds and a geometric model for human spine. IMA J. Numer. Anal. 22, 359–390 (2002)

  10. Barbu, V.: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Leyden (1976)

  11. Brezis, H., Pazy, A.: Semigroups of nonlinear contractions on convex sets. J. Funct. Anal. 6, 237–281 (1970)

  12. Suzuki, T.: On strong convergence to common fixed points of nonexpansive semigroup in Hilbert spaces. Proc. Am. Math. Soc. 131, 2133–2136 (2002)

  13. Rode, G.: An ergodic theorem for semigroups of nonexpansive mappings in a Hilbert space. J. Math. Anal. Appl. 85, 172–178 (1982)

  14. Censor, Y., et al.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21(6), 2071–2084 (2005)

  15. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006). https://doi.org/10.1088/0031-9155/51/10/001

  16. Shehu, Y., Gibali, A.: New inertial relaxed method for solving split feasibilities. Optim. Lett. (2020). https://doi.org/10.1007/s11590-020-01603-1

  17. Dang, Y.Z., Sun, J., Xu, H.K.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 13, 1383–1394 (2017)

  18. Qu, B., Xiu, N.: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655–1665 (2005)

  19. Shehu, Y., Iyiola, O.S., Enyi, C.D.: An iterative algorithm for solving split feasibility problems and fixed point problems in Banach spaces. Numer. Algorithms 72, 835–864 (2016)

  20. Suantai, S., Pholasa, N., Cholamjiak, P.: The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 14, 1595–1615 (2018)

  21. Censor, Y., Segal, A.: On the string averaging method for sparse common fixed-point problems. Int. Trans. Oper. Res. 16(4), 481–494 (2009). https://doi.org/10.1111/j.1475-3995.2008.00684.x

  22. Kazmi, K.R., Rizvi, S.H.: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 21(1), 44–51 (2013)

  23. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Program. 63, 123–145 (1994)

  24. Harisa, S.A., Khan, M.A.A., Mumtaz, F., Farid, N., Morsy, A., Nisar, K.S., Ghaffar, A.: Shrinking Cesàro means method for the split equilibrium and fixed point problems in Hilbert spaces. Adv. Differ. Equ. 2020, Article ID 345 (2020)

  25. Khan, M.A.A.: Convergence characteristics of a shrinking projection algorithm in the sense of Mosco for split equilibrium problem and fixed point problem in Hilbert spaces. Linear Nonlinear Anal. 3, 423–435 (2017)

  26. Khan, M.A.A., Arfat, Y., Butt, A.R.: A shrinking projection approach to solve split equilibrium problems and fixed point problems in Hilbert spaces. UPB Sci. Bull., Ser. A 80(1), 33–46 (2018)

  27. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–17 (1964)

  28. Lorenz, D.A., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)

  29. Vinh, N.T., Muu, L.D.: Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 44(3), 639–663 (2019)

  30. Rehman, H., Kumam, P., Argyros, I.K., et al.: Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry 12, 503 (2020)

  31. Tan, B., Fan, J., Li, S.: Self-adaptive inertial extragradient algorithms for solving variational inequality problems. Comput. Appl. Math. 40, 19 (2021)

  32. Plubtieng, S., Punpaeng, R.: Fixed-point solutions of variational inequalities for nonexpansive semigroups in Hilbert spaces. Math. Comput. Model. 48(1–2), 279–286 (2008). https://doi.org/10.1016/j.mcm.2007.10.002

  33. Cianciaruso, F., Marino, G., Muglia, L.: Iterative methods for equilibrium and fixed point problems for nonexpansive semigroups in Hilbert spaces. J. Optim. Theory Appl. 146(2), 491–509 (2009). https://doi.org/10.1007/s10957-009-9628-y

  34. Kazmi, K.R., Rizvi, S.H.: Implicit iterative method for approximating a common solution of split equilibrium problem and fixed point problem for a nonexpansive semigroup. Arab J. Math. Sci. 20(1), 57–75 (2014). https://doi.org/10.1016/j.ajmsc.2013.04.002

  35. Narin, P., Mohsen, R., Manatchanok, K., Vahid, D.: A new extragradient algorithm for split equilibrium problems and fixed point problems. J. Inequal. Appl. 2019, Article ID 137 (2019). https://doi.org/10.1186/s13660-019-2086-7

  36. Arfat, Y., Kumam, P., Ngiamsunthorn, P.S., Khan, M.A.A., Sarwar, H., Fukhar-ud-Din, H.: Approximation results for split-equilibrium problems and fixed point problems of nonexpansive semigroup in Hilbert spaces. Adv. Differ. Equ. 2020, Article ID 512 (2020). https://doi.org/10.1186/s13662-020-02956-8

  37. Shehu, Y., Izuchukwu, C., Yao, J.C., Qin, X.: Strongly convergent inertial extragradient type methods for equilibrium problems. Appl. Anal. (2021). https://doi.org/10.1080/00036811.2021.2021187

  38. Hieu, D.V.: New inertial algorithm for a class of equilibrium problems. Numer. Algorithms 80(4), 1413–1436 (2019)

  39. Browder, F.E.: Convergence of approximants to fixed points of nonexpansive nonlinear mappings in Banach spaces. Arch. Ration. Mech. Anal. 24(1), 82–89 (1967)

  40. Chang, S.S.: Some problems and results in the study of nonlinear analysis. Nonlinear Anal., Theory Methods Appl. 30(7), 4197–4208 (1997)

  41. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York (2011)

  42. Shimizu, T., Takahashi, W.: Strong convergence to common fixed points of families of nonexpansive mappings. J. Math. Anal. Appl. 211, 71–83 (1997)

  43. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(2), 240–256 (2002)

  44. Mainge, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)

  45. Mainge, P.E.: Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 325, 469–479 (2007)

  46. Nesterov, Y.: A method of solving a convex programming problem with convergence rate O(1/k²). Sov. Math. Dokl. 27, 372–376 (1983)

  47. Censor, Y., Segal, A.: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587–600 (2009)

  48. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8(2–4), 221–239 (1994)

  49. Korpelevich, G.M.: Extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)

Acknowledgements

The authors sincerely thank the editor and the three anonymous reviewers for their careful reading, constructive comments, and fruitful suggestions that substantially improved the manuscript. This paper is part of the doctoral thesis of the first author, a PhD candidate at the Department of Mathematics/Statistics, University of Port Harcourt. He wishes to thank the staff of the Department and, in particular, his advisor, Dr. J. N. Ezeora, for his fruitful mentorship.

Funding

There is no funding for this project.

Author information

Contributions

The problem design, formulation, and computation were carried out by the author.

Corresponding author

Correspondence to Francis O. Nwawuru.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Nwawuru, F.O., Ezeora, J.N. Inertial-based extragradient algorithm for approximating a common solution of split-equilibrium problems and fixed-point problems of nonexpansive semigroups. J Inequal Appl 2023, 22 (2023). https://doi.org/10.1186/s13660-023-02923-3
