Research | Open Access
Inertial-based extragradient algorithm for approximating a common solution of split-equilibrium problems and fixed-point problems of nonexpansive semigroups
Journal of Inequalities and Applications volume 2023, Article number: 22 (2023)
Abstract
In this paper, we introduce a simple and easily computable algorithm for finding a common solution to split-equilibrium problems and fixed-point problems in the framework of real Hilbert spaces. New self-adaptive step sizes are adopted to avoid Lipschitz constants, which are often impractical to compute. Furthermore, an inertial term is incorporated to speed up the rate of convergence, a property that is very desirable in applications. Strong convergence is obtained under mild assumptions, requiring only that the bifunctions are pseudomonotone; this condition is weaker and more general than strong pseudomonotonicity or monotonicity. Our result improves and extends previously announced results in this direction of research.
1 Introduction
Fixed-point theory in nonlinear functional analysis has proven to be at the heart of technological development, with numerous applications ranging across engineering, computer science, the mathematical sciences, and the social sciences. This is due to the fact that many real-world problems, once transformed into mathematical equations, may not have analytic solutions. Hence, establishing the existence and uniqueness of solutions, identifying them as fixed points of a certain class of operators, and approximating solutions of such problems has been an interesting and flourishing area of research for more than a century. The Banach contraction mapping principle is one of the cornerstones of this achievement. In recent times, fixed-point theory has successfully been employed in data science, machine learning, and artificial intelligence, to mention but a few (for these developments, the reader is encouraged to consult [1–4]). One of the powerful tools that paved the way for this modern development is the use of nonexpansive operators. A self-map T defined on C is said to be nonexpansive if \(\|Tx - Ty\|\leq \|x-y\|, \forall x, y \in C\), where C is a nonempty, closed, and convex subset of a real Hilbert space H. A point \(x^{*} \in C\) is said to be a fixed point of T if \(Tx^{*}=x^{*}\). This type of operator has been used extensively for modeling many inverse problems and plays a central role in signal processing and image reconstruction, convex feasibility problems, approximation theory, game theory, and convex optimization, among many others (see [5–9] and references therein).
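As a small numerical aside (an illustration of ours, not part of the paper's analysis), the following Python sketch checks the nonexpansiveness of a planar rotation and shows that, although the Picard iteration merely rotates, the averaged Krasnoselskii–Mann iteration converges to the fixed point:

```python
import math

# 90-degree rotation about the origin in R^2: an isometry, hence nonexpansive,
# with the origin as its unique fixed point.
def T(p):
    x, y = p
    return (-y, x)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Numerical check of nonexpansiveness: ||Tx - Ty|| <= ||x - y||.
pairs = [((1.0, 2.0), (-3.0, 0.5)), ((0.0, 0.0), (4.0, -1.0))]
for p, q in pairs:
    assert dist(T(p), T(q)) <= dist(p, q) + 1e-12

# Picard iteration x_{n+1} = T(x_n) only rotates and never converges, but the
# Krasnoselskii-Mann iteration x_{n+1} = (1/2) x_n + (1/2) T(x_n) converges
# to the fixed point (0, 0).
x = (1.0, 2.0)
for _ in range(200):
    tx = T(x)
    x = (0.5 * x[0] + 0.5 * tx[0], 0.5 * x[1] + 0.5 * tx[1])

print(dist(x, (0.0, 0.0)) < 1e-6)  # True
```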
Since the existence of solutions of the above-mentioned problems relies heavily on the existence of fixed points of a certain class of nonlinear operators, we note that there is a richer and more general class of operators, the one-parameter family, whose common fixed points have valuable applications in the applied sciences. It is known that a family of mappings \(\lbrace T(t): {0\leq t< \infty}\rbrace \) defines a semigroup if the functional equations \(T(0)x=x\) and \(T(t+s)=T(t)T(s)\) are satisfied, where t is the time parameter and each \(T(t)\) maps the “state space” of the system into itself. These maps completely determine the time evolution of the system in the following way: if the system is in state \(x_{0}\) at time \(t_{0}=0\), then at time t it is in the state \(T(t)x_{0}\). In many cases, however, complete knowledge of the maps \(T(t)\) is difficult to obtain. This challenge led to great discoveries of mathematical physics, based on the invention of calculus, which made it possible to understand the “infinitesimal changes” occurring at any given time, since the functional equation can be transformed into a differential equation. We therefore mention that this family of operators plays a crucial role in abstract Cauchy problems [10, 11]. As this family is related to first-order linear differential equations, it corresponds to dynamical systems and evolution equations, and can be applied to deterministic and stochastic dynamical systems. It has been used in partial differential equations, evolution equation theory, fixed-point theory, and many other areas (see [12, 13, 48]).
One of the most fruitful areas where fixed points of nonexpansive mappings have been successfully employed is in signal processing and image recovery, which are examples of inverse problems. Among many research articles in this direction, Censor and Elfving [48] published an influential paper in which they introduced and studied the split-feasibility problem (SFP) for modeling inverse problems arising in phase retrieval and medical imaging. The problem is of the form: find
$$\begin{aligned} x^{*} \in C \quad \text{such that} \quad Ax^{*} \in D, \end{aligned}$$
(1.1)
where C and D are nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(A: H_{1} \rightarrow H_{2}\) is a bounded linear operator. The SFP (1.1) has received huge attention due to its wide applications across different fields of study (see [2, 6, 14–20] and the cited references contained therein). The classical method for solving (1.1) is the well-known CQ algorithm, introduced and studied by Byrne [6], which is presented in the following iterative form:
$$\begin{aligned} x_{k+1} = P_{C} \bigl(x_{k} + \gamma A^{T}(P_{Q} - I)A x_{k} \bigr), \end{aligned}$$
(1.2)
where A is an \(N \times M\) real matrix, \(A^{T}\) is the transpose of A, \(P_{C}\) and \(P_{Q}\) are projections onto nonempty, closed, and convex subsets, I is the identity operator on H, \(\gamma \in (0, \frac{1}{L})\), and L denotes the largest eigenvalue of the matrix \(A^{T} A\). It is well known that \(x\in C\) solves (1.1) if and only if it solves the fixed-point equation:
$$\begin{aligned} x = P_{C} \bigl(I - \gamma A^{*}(I - P_{Q})A \bigr)x, \end{aligned}$$
(1.3)
where \(A^{*}\) is the adjoint of A and \(\gamma > 0\).
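The CQ iteration (1.2) is easy to prototype. The following Python sketch is our own toy example (box sets and a diagonal matrix A, not taken from the paper), chosen so that both projections are coordinatewise clamps:

```python
# Toy CQ iteration in R^2 with C = [0,1]x[0,1], Q = [2,3]x[2,3], and
# A = diag(3, 4), so the largest eigenvalue of A^T A is L = 16.
def clamp(v, lo, hi):
    return [min(max(t, lo), hi) for t in v]

A = [3.0, 4.0]          # diagonal entries of A
gamma = 1.0 / 20.0      # any gamma in (0, 1/L) = (0, 1/16)

x = [5.0, -2.0]
for _ in range(500):
    Ax = [A[i] * x[i] for i in range(2)]
    PQAx = clamp(Ax, 2.0, 3.0)
    # x <- P_C( x + gamma * A^T (P_Q - I) A x )
    x = clamp([x[i] + gamma * A[i] * (PQAx[i] - Ax[i]) for i in range(2)],
              0.0, 1.0)

Ax = [A[i] * x[i] for i in range(2)]
print(x, Ax)  # x lies in C and Ax lies in (or very near) Q
```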
Being at the hub of modern research, problem (1.1) has been modified and generalized in different directions; for instance, in 2009, Censor and Segal [21] introduced the following problem: find
$$\begin{aligned} x^{*} \in F(S) \quad \text{such that} \quad Ax^{*} \in F(T), \end{aligned}$$
(1.4)
where \(S: H_{1} \rightarrow H_{1}\) and \(T: H_{2} \rightarrow H_{2}\) are two nonlinear directed operators, and the nonempty fixed-point sets of S and T are denoted by \(F(S): = \lbrace x\in H_{1}: Sx = x \rbrace \) and \(F(T): = \lbrace x\in H_{2}: Tx = x \rbrace \). Problem (1.4) is called the split common fixed-point problem (SCFP), which reduces to the SFP when we set \(S: = P_{C}\) and \(T:= P_{Q}\). Another important generalization is the split-equilibrium problem (SEP), a concept that was introduced and studied by Kazmi and Rizvi [22] for solving optimization problems and problems arising in economics and finance. The SEP can be defined as follows: find \(x^{*} \in C\) such that
$$\begin{aligned} F \bigl(x^{*}, x \bigr)\geq 0, \quad \forall x \in C, \end{aligned}$$
(1.5)
and such that
$$\begin{aligned} y^{*} = Ax^{*} \in D \quad \text{solves} \quad G \bigl(y^{*}, y \bigr) \geq 0, \quad \forall y \in D. \end{aligned}$$
(1.6)
Furthermore, problems (1.5) and (1.6) reduce to the well-known equilibrium problem (EP) if \(G\equiv F, T\equiv 0\equiv S, C=D,H_{1} \equiv H_{2}, A\equiv I\). The EP is an optimization problem introduced by Blum and Oettli [23] for the purpose of solving problems arising in economics, engineering, and optimization. The equilibrium problem is formulated as follows: find \(x^{*} \in C\) such that
$$\begin{aligned} F \bigl(x^{*}, y \bigr) \geq 0, \quad \forall y \in C. \end{aligned}$$
(1.7)
The EP (1.7) includes, as special cases, the variational inequality problem (VIP), optimization problems, Nash equilibrium problems, saddle-point problems, and fixed-point problems. It is thus a unifying model for several problems in engineering, economics, and many other areas.
Our aim in this paper is to study an interesting combination of problems (1.4), (1.5), and (1.6), called the split-equilibrium fixed-point problem (SEFPP), formulated as follows: find \(x^{*} \in C\) such that
$$\begin{aligned} x^{*} \in EP(C, F) \cap F(T) \end{aligned}$$
(1.8)
and
$$\begin{aligned} Ax^{*} \in EP(D, G) \cap F(S), \end{aligned}$$
(1.9)
where \(F:C \times C \rightarrow \mathbb{R}\) and \(G:D\times D \rightarrow \mathbb{R}\) are two bifunctions, C and D are two nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, A is a bounded linear operator defined on \(H_{1}\), and S and T are nonexpansive mappings. It is pertinent to note that problem (1.8) and (1.9) is an important generalization of many convex optimization problems. Its study is rich and demanding, in the sense that a wide range of mathematical problems can be cast into this framework. A closer look at problem (1.8) and (1.9) shows that if \(T\equiv 0\equiv S\), we obtain the classical SEP (1.5) and (1.6). Observe that (1.1) is also obtained from (1.8) and (1.9) if \(F\equiv 0\equiv G, S\equiv 0\equiv T\). Numerous inverse problems can be cast in this model, and problem (1.8) and (1.9) has been widely employed (see [24–26] and their cited references).
In recent years, algorithms with fast convergence have been sought by many researchers, as such algorithms are profitable in applications. Polyak [27] was the first to introduce inertial extrapolation techniques, via the heavy-ball method. This concept has been tested in practice and proven to speed up the rate of convergence of iterative algorithms. Since then, many researchers have incorporated it to improve their results (see [28–31] and the references therein).
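The effect of the inertial term can be seen on a one-dimensional toy problem. The sketch below (our illustration only; the step size and inertial factor are arbitrary choices, not from the paper) compares Polyak's heavy-ball iteration with plain gradient descent on \(f(x)=\frac{1}{2}x^{2}\):

```python
# Polyak's heavy-ball ("inertial") iteration on f(x) = 0.5 * x^2:
#   x_{n+1} = x_n - alpha * f'(x_n) + beta * (x_n - x_{n-1}).
# The inertial term beta * (x_n - x_{n-1}) reuses the previous displacement
# to accelerate convergence toward the minimizer x* = 0.
alpha = 0.1

def run(beta, steps=100):
    x_prev, x = 2.0, 2.0
    for _ in range(steps):
        x_next = x - alpha * x + beta * (x - x_prev)   # f'(x) = x
        x_prev, x = x, x_next
    return abs(x)

with_inertia = run(beta=0.5)
without_inertia = run(beta=0.0)
print(with_inertia < without_inertia)  # True: inertia is faster here
```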
In 2008, Plubtieng and Punpaeng [32] proposed and studied fixed-point solutions of variational inequalities for nonexpansive semigroups in real Hilbert spaces. In an attempt to improve on [32], Cianciaruso et al. [33] proposed an algorithm for equilibrium and fixed-point problems in real Hilbert spaces; see also [34].
In 2019, problem (1.8) and (1.9) was studied by Narin et al. [35], who obtained a strong convergence result. They proposed the following iterative algorithm:
where \(0< \underline{\lambda}\leq \lambda _{n} \leq \overline{\lambda} < \min \lbrace \frac{1}{2c_{1}}, \frac{1}{2c_{2}}\rbrace, 0< \underline{\mu}\leq \mu _{n} \leq \overline{\mu} < \min\lbrace \frac{1}{2d_{1}}, \frac{1}{2d_{2}}\rbrace, L\) is a bounded linear operator, while h is a contraction mapping.
In 2020, Arfat et al. [36] constructed an extragradient method for approximating solutions of split-equilibrium problems and fixed-point problems of nonexpansive semigroups in real Hilbert spaces. In fact, they presented the following algorithm:
where
-
C1
\(\lbrace \lambda _{n} \rbrace \subset [a,b]\) for some \(a,b \in (0, \operatorname{min} \lbrace \frac{1}{2c_{1}}, \frac{1}{2c_{2}} \rbrace )\);
-
C2
\(0\leq d< e\leq \alpha _{n} \leq f< 1, \liminf_{n\rightarrow \infty} r_{n} > 0, \lim_{n\rightarrow \infty} s_{n} = 0 = \lim_{n\rightarrow \infty} t_{n} \);
-
C3
\(0< \gamma < \frac{1}{\|A\|^{2}}\).
They proved under these conditions that the sequence \(\lbrace x_{n} \rbrace \) generated by algorithm (1.11) converges weakly to a point in the solution set, provided the latter is nonempty. Furthermore, the authors in [36] modified the above algorithm and obtained strong convergence using the shrinking-projection technique.
In 2021, Shehu et al. [37] studied a modified inertial extragradient method for solving equilibrium problems. They presented the following algorithm:
where
\(T_{n} = \lbrace z \in H: \langle w_{n} -\lambda _{n} v_{n}-y_{n},z-y_{n} \rangle \leq 0 \rbrace, q_{n} = w_{n} -\lambda _{n} v_{n} -y_{n}, v_{n} \in \partial _{2} f(w_{n},\cdot) \text{ and } q_{n} \in N_{C}(y_{n})\)
Under some mild conditions and assumptions, the authors proved that the sequence \(\lbrace x_{n}\rbrace \) generated by Algorithm (1.12) converges strongly to a solution in \(EP(f,C)\neq \emptyset \). Further, they improved on algorithm (1.12) by studying a modified inertial subgradient extragradient method for solving equilibrium problems and also obtained strong convergence. Their work [37] improves on the works [36] and [35] to a reasonable extent.
Remark 1
The algorithm (1.10) of Narin et al. [35] has one major drawback, apart from the fact that the projection operators slow down the convergence rate. The shortcoming is that the step sizes \(\lambda _{n}\) and \(\mu _{n}\) depend on the Lipschitz constants (\(c_{1}\) and \(c_{2}\), \(d_{1}\) and \(d_{2}\)) of the bifunctions. Likewise, Algorithm (1.11) of Arfat et al. [36] suffers from the step size \(\lambda _{n}\) being dependent on the Lipschitz constants of the bifunctions and from γ depending on the spectral radius of the operator \(A^{*} A\). In many problems of practical importance, the Lipschitz constants are difficult, and often impossible, to estimate, and computing the spectral radius is expensive even when an overall estimate is known. These drawbacks affect the efficiency of the algorithms, making them difficult to implement in practical cases.
The question of interest is: can these conditions in the two algorithms be removed and the results recovered via a new inertial extrapolation algorithm? Our interest in this paper is to give an affirmative answer to this question.
The main contributions of this paper shall include:
(1) A new modified extragradient method is proposed, which combines (1.10), (1.11), and (1.12), where the nonexpansive mappings are precisely nonexpansive semigroups on real Hilbert spaces. The new algorithm includes, as a special case, Algorithm (1.12) of [37]. This shows that our Algorithm 3.3 improves on many others in this direction.
(2) We construct our algorithm in such a way that the step size neither depends on knowledge of the operator norm of the bounded linear map nor requires an estimate of it. Moreover, the control parameters do not depend on the Lipschitz constants of the bifunctions.
(3) We have observed that algorithms of this type involve many projection operators, which slow the rate of convergence. Thus, we deem it important to incorporate the inertial term \(\theta _{n}(x_{n}-x_{n-1})\), in the spirit of Polyak [27], to improve the performance of our algorithm. This significantly improves the work of Arfat et al. [36] and Narin et al. [35]. Our inertial extrapolation term does not require computing the norm of the difference between the terms \(x_{n}\) and \(x_{n-1}\). The inertial factor \(\theta _{n}\) is nonsummable and not diminishing, as has been used by some authors, and it differs from the one used by Shehu et al. [37], Algorithm 2.1.
(4) We establish strong convergence with the self-adaptive step sizes. We do not assume that the bifunctions are strongly pseudomonotone, as proposed in [30, 38]; the strong pseudomonotonicity assumption is restrictive and stronger than pseudomonotonicity, which is in turn more general than monotonicity.
The rest of the paper is organized as follows: In Sect. 2, some basic definitions related to our work and helpful lemmas are stated without their proofs. Our algorithm is presented in Sect. 3, with some conditions that enable us to obtain strong convergence. The proofs are discussed in detail in Sect. 4. In Sect. 5, applications of our algorithm are studied, in Sect. 6 numerical illustrations are presented, and the conclusion is given in Sect. 7.
2 Preliminaries
As mentioned earlier, some lemmas that are very relevant and helpful in proving our result will be stated without their proofs. Basic definitions will also be given in this section.
Throughout this paper, C and D denote nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let ⇀ and → represent weak and strong convergence, and let \(\langle \cdot,\cdot \rangle \) and \(\|\cdot\|\) represent the inner product and the induced norm.
Definition 2.1
A map \(T: H_{1} \rightarrow H_{1}\) is said to be a contraction if there exists a coefficient \(\alpha \in (0,1)\) such that
Definition 2.2
A map \(T: H_{1} \rightarrow H_{1}\) is said to be nonexpansive if
That is, the inequality in Definition 2.1 holds with \(\alpha =1\).
Definition 2.3
A map \(T: H_{1} \rightarrow H_{1}\) is said to be firmly nonexpansive, if
Definition 2.4
A one-parameter family \(\lbrace T(t): 0\leq t< \infty \rbrace \) defined on \(H_{1}\) to \(H_{1}\) is called a nonexpansive semigroup if it satisfies the following conditions:
-
(a)
\(T(0)x=x, \forall x\in H_{1}\);
-
(b)
\(T(a+b) = T(a)T(b), \forall a,b\geq 0\);
-
(c)
For any \(x\in H_{1}, T(t)x\) is continuous;
-
(d)
\(\|T(t)x- T(t)y\| \leq \|x-y\|, \forall x,y \in H_{1}\) and for \(t\geq 0\).
The fixed-point set of this family of the nonlinear map is denoted by
It is well known (see [39]) that \(F(T)\) is closed and convex.
Example 2.5
Let \(H_{1} =\mathbb{R}\) and \(\Omega:= \lbrace T(t): 0\leq t < \infty \rbrace \), where \(T(t)x=(\frac{1}{5^{t}})x\). Then, Ω is a nonexpansive semigroup. Indeed,
-
(a)
\(T(0)x = (\frac{1}{5^{0}}) x=x\);
-
(b)
\(T(a+b) = (\frac{1}{5^{a+b}})=(1/5^{a})(1/5^{b})= T(a)T(b), \forall a,b \geq 0\);
-
(c)
for each \(x \in H_{1}\), the mapping \(T(t)x = (1/5^{t})x\) is continuous;
-
(d)
\(\|T(t)x - T(t)y\| = \|(1/5^{t})x - (1/5^{t})y\| = \|(1/5^{t})(x-y)\|= (1/5^{t})\|x-y\|, \forall x,y \in H_{1}, t\geq 0\).
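The four properties above can also be verified numerically; the following short Python check (ours, for illustration, evaluated at a few sample points) confirms them for Example 2.5:

```python
# Numerical check of the semigroup properties of T(t)x = (1/5**t) * x.
def T(t, x):
    return x / 5 ** t

x, y = 3.0, -1.5
a, b = 0.7, 1.4

assert T(0, x) == x                                # (a) T(0) = I
assert abs(T(a + b, x) - T(a, T(b, x))) < 1e-12    # (b) T(a+b) = T(a)T(b)
assert abs(T(a, x) - T(a, y)) <= abs(x - y)        # (d) nonexpansive
print("semigroup properties verified")
```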
Definition 2.6
Let \(x,y \in H\). A map \(T: H \rightarrow H\) is called:
-
(1)
monotone if \(\langle Tx - Ty, x-y\rangle \geq 0\);
-
(2)
pseudomonotone on H if \(\langle Tx, y-x\rangle \geq 0\Rightarrow \langle Ty,y-x\rangle \geq 0\);
-
(3)
Lipschitz continuous on H if there exists a positive constant L such that \(\|Tx -Ty\|\leq L\|x-y\| \).
Definition 2.7
A bifunction \(F: C \times C \rightarrow \mathbb{R}\) is said to be:
-
(i)
strongly monotone on C if there exists a constant \(\gamma >0\) such that
$$\begin{aligned} F(x,y)+F(y,x)+ \gamma \Vert x-y \Vert ^{2} \leq 0,\quad \forall x,y \in C; \end{aligned}$$ -
(ii)
monotone if
$$\begin{aligned} F(x,y)+F(y,x)\leq 0, \quad\forall x,y \in C; \end{aligned}$$ -
(iii)
pseudomonotone if \(F(x,y)\geq 0 \Rightarrow F(y,x)\leq 0, \forall x,y \in C\);
-
(iv)
to satisfy Lipschitz-type conditions if there exist positive constants \(c_{1}\) and \(c_{2}\) such that
$$\begin{aligned} c_{1} \Vert x-y \Vert ^{2} +c_{2} \Vert y-z \Vert ^{2} \geq F(x,z)-F(x,y) - F(y,z), \quad\forall x,y,z \in C. \end{aligned}$$
Observe that \((i) \Longrightarrow (ii)\Longrightarrow (iii)\), but the converse is not necessarily true.
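Condition (iv) is satisfied, for instance, by bifunctions of the form \(F(x,y)=\langle Tx, y-x\rangle \) with T an L-Lipschitz operator, in which case one may take \(c_{1}=c_{2}=L/2\). The randomized Python check below (our illustration, with the arbitrary choice \(Tx=2x\) on \(\mathbb{R}^{2}\)) confirms the inequality at sample triples:

```python
import random

# If F(x, y) = <Tx, y - x> with T an L-Lipschitz operator, then
#   F(x,z) - F(x,y) - F(y,z) = <Tx - Ty, z - y>
#                           <= (L/2)(||x - y||^2 + ||y - z||^2),
# i.e., F is Lipschitz-type with c1 = c2 = L/2. Check with T(x) = 2x (L = 2).
L = 2.0

def T(v):
    return [L * t for t in v]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def F(x, y):
    return inner(T(x), sub(y, x))

random.seed(0)
for _ in range(1000):
    x, y, z = ([random.uniform(-5, 5) for _ in range(2)] for _ in range(3))
    lhs = F(x, z) - F(x, y) - F(y, z)
    rhs = (L / 2) * (inner(sub(x, y), sub(x, y)) + inner(sub(y, z), sub(y, z)))
    assert lhs <= rhs + 1e-9
print("Lipschitz-type condition verified")
```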
We shall assume from now onwards that the bifunction \(G: D \times D \rightarrow \mathbb{R}\) satisfies the following conditions:
-
(A1)
\(G(x,x) = 0, \forall x \in D\);
-
(A2)
G is monotone on D;
-
(A3)
for each \(v \in D\), the function \(u \longmapsto G(u, v)\) is upper hemicontinuous, that is, for each \(u, v, w \in D\)
$$\begin{aligned} \lim_{\lambda \rightarrow 0^{+}} G \bigl(\lambda w + (1-\lambda )u, v \bigr) \leq G(u,v); \end{aligned}$$ -
(A4)
for each \(u \in D\), the function \(v \longmapsto G(u,v)\) is convex and lower semicontinuous.
We further note that the bifunction \(F: C\times C \rightarrow \mathbb{R}\) will also possess the following properties:
-
(B1)
\(F(x,x)=0, \forall x\in C\);
-
(B2)
F is pseudomonotone on C with respect to \(EP(C, F)\);
-
(B3)
F is weakly continuous on \(C \times C\) in the sense that, if \(x,y \in C\) and \(\lbrace x_{n} \rbrace, \lbrace y_{n}\rbrace \subset C\) converge weakly to x and y, respectively, then \(F(x_{n}, y_{n})\rightarrow F(x,y)\) as \(n\rightarrow \infty \);
-
(B4)
for each \(x \in C\), the function \(y \longmapsto F(x,y)\) is convex and subdifferentiable;
-
(B5)
F is Lipschitz-type continuous on C.
For a nonempty, closed, and convex subset C of \(H_{1}\), we define the metric projection \(P_{C}: H_{1} \rightarrow C\) such that for each \(x \in H_{1}\), there exists a unique nearest point \(P_{C} x \in C\) satisfying the inequality:
It is characterized by
This further implies that
This operator \(P_{C}\) is not only nonexpansive but also firmly nonexpansive.
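Both the characterization of \(P_{C}\) and its firm nonexpansiveness are easy to test numerically. The sketch below (ours, with the simple choice \(C=[0,1]\subset \mathbb{R}\)) checks \(\langle x-P_{C}x, y-P_{C}x\rangle \leq 0\) for \(y \in C\), together with the firm-nonexpansiveness inequality:

```python
# Metric projection onto C = [0, 1] in R: P_C x = clamp(x). We check the
# variational characterization <x - Px, y - Px> <= 0 for all y in C, and
# firm nonexpansiveness ||Px - Py||^2 <= <Px - Py, x - y>.
def P(x):
    return min(max(x, 0.0), 1.0)

xs = [-2.0, 0.3, 1.7, 5.0]     # arbitrary points of H
ys = [0.0, 0.25, 0.9, 1.0]     # sample points of C

for x in xs:
    for y in ys:
        assert (x - P(x)) * (y - P(x)) <= 1e-12        # characterization
for x in xs:
    for y in xs:
        assert (P(x) - P(y)) ** 2 <= (P(x) - P(y)) * (x - y) + 1e-12
print("projection properties verified")
```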
Definition 2.8
A mapping \(T: H_{1} \rightarrow H_{1}\) is said to be demiclosed at the origin if for any sequence \(\lbrace x_{n} \rbrace \subset H_{1}\) with \(x_{n} \rightharpoonup x\) and \(\| x_{n} - Tx_{n} \| \longrightarrow 0\), we have \(Tx=x\); that is, \(I-T\) is demiclosed at 0.
Lemma 2.9
([40])
For all \(x, y,z,u,v \in H_{1}\) and \(\alpha \in \mathbb{R}\), the following hold:
-
(1)
\(\| x-y\|^{2} = \|x\|^{2} - \|y\|^{2} - 2\langle x-y,y\rangle \);
-
(2)
\(\|x+y\|^{2} \leq \|x\|^{2} + 2\langle y, x+y\rangle \);
-
(3)
\(2\langle x-y, u-v\rangle = \|x-v\|^{2} + \|y-u\|^{2}-\|x-u\|^{2} -\|y-v \|^{2}\);
-
(4)
\(\|\alpha x + (1-\alpha )y\|^{2} = \alpha \|x\|^{2} + (1-\alpha )\|y \|^{2} -\alpha (1-\alpha )\|x-y\|^{2}\).
Lemma 2.10
([41])
For each \(x \in H_{1}, \lambda > 0\),
where \(\mathrm{prox}_{\lambda g}(x): = \operatorname{argmin} \lbrace \lambda g(y) + \frac{1}{2}\| x-y\|^{2}; y \in C \rbrace \).
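For a concrete instance of the proximal operator, take \(C=\mathbb{R}\) and \(g(y)=|y|\); then \(\mathrm{prox}_{\lambda g}\) is the well-known soft-thresholding map. The brute-force Python check below (our illustration, not part of the paper) compares the closed form against a grid minimization of \(\lambda g(y)+\frac{1}{2}|x-y|^{2}\):

```python
# prox_{lambda*g}(x) = argmin_y { lambda*|y| + 0.5*(x - y)^2 } over C = R
# has the closed form prox(x) = sign(x) * max(|x| - lambda, 0).
lam = 0.5

def objective(y, x):
    return lam * abs(y) + 0.5 * (x - y) ** 2

def soft_threshold(x):
    return (abs(x) - lam) * (1 if x > 0 else -1) if abs(x) > lam else 0.0

grid = [i / 1000.0 for i in range(-5000, 5001)]   # grid over [-5, 5]
for x in [-3.0, -0.2, 0.0, 0.4, 2.5]:
    best = min(grid, key=lambda y: objective(y, x))
    assert abs(best - soft_threshold(x)) < 2e-3
print("prox matches soft-thresholding")
```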
Lemma 2.11
([42])
Let C be a nonempty, closed, and convex subset of a real Hilbert space \(H_{1}\). Let \(\omega: = \lbrace T(t): 0\leq t < \infty \rbrace \) be a nonexpansive semigroup on C; then for all \(v \geq 0\),
Lemma 2.12
([43])
Let \(a_{n}\) be a sequence of nonnegative real numbers such that
where \(\lbrace \gamma _{n} \rbrace \) is a sequence in \((0,1)\) and \(\lbrace \delta _{n} \rbrace \) is a sequence in \(\mathbb{R}\) such that
-
(a)
\(\lim_{n\rightarrow \infty} \gamma _{n} =0, \sum^{\infty}_{n=1} \gamma _{n} = \infty \);
-
(b)
\(\limsup_{n\rightarrow \infty} \delta _{n} \leq 0\).
Then, \(\lim_{n\rightarrow \infty} a_{n} =0\).
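A quick numerical illustration of Lemma 2.12 (ours, with the arbitrary choices \(\gamma _{n}=\delta _{n}=1/(n+1)\), which satisfy conditions (a) and (b)):

```python
# Illustration of Lemma 2.12: for a_{n+1} = (1 - gamma_n) a_n + gamma_n delta_n
# with gamma_n = 1/(n+1) (so gamma_n -> 0 and sum gamma_n = infinity) and
# delta_n = 1/(n+1) -> 0, the sequence a_n is driven to 0 regardless of a_1.
a = 100.0
for n in range(1, 200001):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    a = (1 - gamma) * a + gamma * delta

print(a < 0.01)  # True: a_n tends to 0
```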
Lemma 2.13
([44])
Let \(\lbrace a_{n} \rbrace \) be a sequence of real numbers such that there exists a subsequence \(\lbrace n_{i} \rbrace \) of \(\lbrace n \rbrace \) such that \(a_{n_{i}}\leq a_{n_{i}+1}\) for all \(i \in \mathbb{N}\). Then, there exists a nondecreasing sequence \(\lbrace m_{k} \rbrace \subset \mathbb{N}\) such that \(m_{k} \rightarrow \infty \) as \(k\rightarrow \infty \) and the following conditions are satisfied:
-
(a)
\(a_{m_{k}}\leq a_{m_{k}+1}\) and \(a_{k} \leq a_{m_{k}+1}\).
Indeed, \(m_{k} = \operatorname{max} \lbrace j\leq k: a_{j} \leq a_{j+1}\rbrace \).
Lemma 2.14
([45])
Let \(\lbrace a_{n} \rbrace \) be a sequence of nonnegative real numbers satisfying the following:
where \(\beta _{n}\) is a sequence in \((0,1)\) and \(\eta _{n}\) is a real sequence. Suppose that \(\sum^{\infty}_{n=1}\xi _{n} < \infty \) and \(\eta _{n} \leq \beta _{n} M \) for some positive number M. Then, \(\lbrace a_{n} \rbrace \) is bounded.
3 The proposed algorithm
In this section, we present the proposed algorithm, together with the assumptions and the conditions on the control sequences that enable strong convergence.
Assumption 3.1
Let \(H_{1}\) and \(H_{2}\) be two Hilbert spaces such that the following conditions hold:
-
(a)
\(F:C \times C \rightarrow \mathbb{R}\) and \(G: D \times D \rightarrow \mathbb{R}\) are two bifunctions;
-
(b)
\(A: H_{1} \rightarrow H_{2}\) is a bounded linear operator such that \(A \neq 0\) and \(A^{*}: H_{2} \rightarrow H_{1}\) is its adjoint;
-
(c)
\(T: = \lbrace T(t): 0\leq t< \infty \rbrace: C \rightarrow C\) and \(S: = \lbrace S(s): 0\leq s< \infty \rbrace: D \rightarrow D\) are two nonexpansive semigroups;
and the solution set
-
(d)
\(\Gamma:= \lbrace x^{*} \in C:x^{*} \in EP(C, F) \cap F(T) \text{ and } y^{*}=Ax^{*} \in EP (D, G) \cap F(S)\rbrace \neq \emptyset \).
Assumption 3.2
The control sequences shall satisfy the following conditions:
-
(i)
The sequences \(\lbrace \alpha _{n} \rbrace \), \(\lbrace \beta _{n} \rbrace \), and \(\lbrace \sigma _{n} \rbrace \) are sequences of reals in \((0,1)\), and \(\lbrace \varepsilon _{n}\rbrace \) is a positive sequence such that \(\lim_{n\rightarrow \infty}\beta _{n}= 0\) and \(\sum^{\infty}_{n=1} \beta _{n}=\infty \);
-
(ii)
\(\frac{\varepsilon _{n}}{\beta _{n}^{2}} \rightarrow 0 \text{ as } n\to \infty \);
-
(iii)
\(\sigma _{n} \in (a, 1-\beta _{n})\) for some \(a> 0\).
Algorithm 3.3
(Self-Adaptive Inertial Extragradient Method for Split-Equilibrium Problem)
Iterative Steps: Step 0: Let \(x_{0}, x_{1} \in H_{1}, \delta _{0}, \tau _{0} >0, \mu \in (0,1)\).
Step 1: Given the current iterates \(x_{n}, x_{n-1}\) \((n\geq 1)\) and \(\theta _{n} \in (0,1)\), choose \(\theta _{n}\) such that \(0< \theta _{n} \leq \overline{\theta}_{n}\), where
and compute
where
for a given small enough \(\epsilon > 0\), such that \(\gamma _{n} \in (\epsilon, \frac{\|Sy_{n} - Ap_{n}\|^{2}}{\|A^{*}(Sy_{n} - Ap_{n})\|^{2}} - \epsilon )\), and
Set \(\boldsymbol {n:=n+1}\) and return to step 1.
4 Convergence analysis
This section is devoted to the convergence analysis of the proposed method. The proof is divided into several parts; the steps are presented below.
Lemma 4.1
If the sequence \(\lbrace x_{n} \rbrace \) generated by Algorithm 3.3 above satisfies Assumptions 3.1 and 3.2, then \(\lbrace x_{n} \rbrace \) is bounded.
Proof
Using Lemma 2.10 and the definition of \(t_{n}\), we obtain
Substituting \(y=z_{n}\) in (4.1), we obtain
Also, using the definition of \(z_{n}\), we obtain
Observe that (4.2) and (4.3) imply that
If \(F(q_{n}, z_{n})-F(q_{n}, t_{n}) - F(t_{n}, z_{n}) > 0\), we have
Hence, from (4.4) and (4.5), we obtain
Using Lemma 2.9, we have the following from (4.6)
Hence, using (4.7) and (4.6), we obtain
Thus,
Hence,
Now, for \(y \in EP (C, F) \subset C\), we have that
By the condition (B2), we establish that
It follows that
Observe that from (3.4), \(\lambda _{n}\) is a monotone decreasing sequence. Hence, the limit of \(\lambda _{n}\) exists. Without loss of generality, we may assume that \(\lim_{n\rightarrow \infty} \lambda _{n} = \lambda \). Therefore,
Hence, there exists \(N_{0} \in N\) such that
It follows from these facts and (4.10) that
In a similar way and by definition of \(u_{n}\), we obtain
Setting \(q=v_{n}\) in (4.12) we obtain
Using the definition of \(v_{n}\), we further obtain
Now, equations (4.13) and (4.14) imply that
If \(G(P_{D}(Ap_{n}),v_{n}) - G(P_{D}(Ap_{n}), u_{n}) - G(u_{n}, v_{n})> 0\), then \(G(P_{D}(Ap_{n}),v_{n}) - G(P_{D}(Ap_{n}), u_{n}) - G(u_{n}, v_{n}) \leq \delta ( \frac{\|P_{D}(Ap_{n})-u_{n}\|^{2} + \|v_{n}- u_{n}\|^{2}}{\mu _{n+1}}) \).
It follows from the last inequality and from (4.15) that
We know from Lemma 2.9 that
Hence, using (4.17) and (4.16), we have
Taking \(u=Ax^{*}\), we obtain that
\(Ax^{*} \in EP (D, G) \subset D\). Using (A2), we obtain
Further, it implies that
It follows from these facts and from (4.18) that
Next, let \(x^{*} \in \Gamma \) so that \(x^{*} \in EP (C, F) \cap F(T)\) and \(Ax^{*} \in EP(D, G) \cap F(S)\). Since \(P_{D}\) is firmly nonexpansive, we obtain that
Also,
Since \(S(s)\) is nonexpansive, \(Ax^{*} \in F(S(s))\), from (4.19) and (4.20) we obtain that
By the nonexpansiveness of \(P_{C}\), we obtain
However,
Using (4.22) in (4.24), we obtain that
Using (4.25) in (4.23), and the condition of \(\gamma _{n}\) in step 2, we obtain
We further estimate from the Algorithm that
From the condition on \(\theta _{n}\), we know that
Hence,
Thus, there exists \(M_{1} > 0\) such that
Therefore,
Observe that
Therefore, from the Algorithm 3.3, and (4.29), we have
Hence, it follows from Lemma 2.14 that \(\lbrace x_{n} \rbrace \) is bounded. This completes the proof of Lemma 4.1. □
Next, we establish the following claim.
Claim a:
Proof
Observe that for all \(x^{*} \in \Gamma \),
Using (4.31) in the inequality below, we obtain
At this point, we estimate that
It follows from (4.32) and (4.33) that
where \(M_{0}: = \sup \lbrace \|x_{n} - x^{*}\|^{2}, n\in N\rbrace,K_{0}= \sup \lbrace 3 M_{3}(1+4\beta _{n}\sigma _{n}), n \in N \rbrace \) and \(\vartheta _{n}:= \frac{\beta _{n}(2-5\beta _{n})}{1-\beta _{n} \sigma _{n}}\).
By the assumption on \(\beta _{n}\), we see that \(\lim_{n\rightarrow \infty} \vartheta _{n} =0\) and \(\sum^{\infty}_{n=1} \vartheta _{n} =\infty \). □
Theorem 4.2
If Assumptions 3.1 and 3.2 are satisfied and Lemma 4.1 and claim (a) hold, then the sequence \(\lbrace x_{n} \rbrace \) generated by our algorithm converges strongly to an element of the solution set Γ.
Proof
In order to show that \(\lbrace x_{n} \rbrace \) converges strongly to an element of the solution set Γ, we consider the following two cases.
Case 1: Suppose that there exists \(n_{0} \in \mathbb{N}\) such that \(\lbrace \|x_{n} - x^{*}\| \rbrace _{n\geq 1}\) is nonincreasing. Then, the limit of \(\lbrace \|x_{n} - x^{*}\| \rbrace _{n\geq 1}\) exists.
Clearly
It follows from the Algorithm 3.3 and the condition (ii) of Assumption 3.2 that
Thus,
Moreover, since \(\lim_{n\rightarrow \infty} \lbrace \|x_{n} - x^{*}\| \rbrace \) exists, we have from the estimates (4.11), (4.19), (4.26), (4.27), and (4.33)
Also,
It follows from (4.10) and (4.37) that
Subsequently,
In a similar way, following (4.37) and (4.18) we obtain that
Also, from (4.37) and (4.26) we obtain
Since \(p_{n} \in C\), it follows from the definition of \(q_{n}\) and (4.40) that
Therefore,
It follows from the same argument that
Using the definition of \(y_{n}\), Lemma 2.9(4) and (4.37) we estimate that
Hence,
We obtain from (4.45) that
In a similar way, using the definition of \(p_{n}\), Lemma 2.9(4), and (4.44)–(4.46) we obtain that
Observe that
Now, it follows from (4.46) and Lemma 2.11 that
Following the same line of argument in (4.48), the fact we established in (4.47) and Lemma 2.11, we obtain that
Furthermore, since \(y_{n},v_{n} \in D\), we obtain
Now, using (4.46) in (4.51) we obtain
Using the result in (4.52) and (4.39), we obtain that
Since \(\|y_{n} - u_{n}\|\leq \|y_{n} - v_{n} \|+\|v_{n} - u_{n}\|\), letting \(n\rightarrow \infty \) we conclude that
It is also clear from (4.38) and (4.42) that
Now, letting \(n\rightarrow \infty \) in (4.54), we obtain that
From (4.38), it is clear that
Since \(q_{n}, x_{n} \in C\), using the definition of \(q_{n}\), we obtain the following inequality
Using (4.42), (4.43), and (4.36)
Letting \(n\rightarrow \infty \) in (4.57) and using the immediate estimate we obtain
Using the fact that \(w_{n} \in C\) and applying the argument as in (4.57) and using (4.40), we conclude that
We note, however, that \(\|z_{n} - x_{n}\| \leq \|z_{n} - p_{n}\| + \|p_{n} -x_{n}\|\).
Letting \(n\rightarrow \infty \) in the above inequality, we obtain that
However,
Using (4.38) we obtain
Now, to establish that the sequence \(\lbrace x_{n} \rbrace \) is asymptotically regular, we have the following
Considering the condition in \(\sigma _{n}, \beta _{n}\), using (4.58) and (4.60) in (4.61) we obtain
It follows from (4.62) that
We also have from (4.39) that
The boundedness of \(\lbrace x_{n} \rbrace \) implies that there exists a subsequence \(\lbrace x_{n_{k}}\rbrace \) of \(\lbrace x_{n} \rbrace \) such that \(x_{n_{k}} \rightharpoonup q\) as \(k \rightarrow \infty \). Furthermore, since \(\lbrace x_{n} \rbrace \) is bounded, it follows that \(\lbrace w_{n} \rbrace, \lbrace p_{n} \rbrace, \lbrace q_{n} \rbrace, \lbrace t_{n} \rbrace \), and \(\lbrace z_{n} \rbrace \) have subsequences that converge to q weakly. Similarly, the sequences \(\lbrace u_{n} \rbrace, \lbrace v_{n} \rbrace, \lbrace P_{D}(Ap_{n}) \rbrace \), and \(\lbrace y_{n} \rbrace \) have subsequences that converge weakly to Aq.
However,
Consequently, \(\lbrace Ax_{n_{k}}\rbrace \) also converges weakly to Aq. By (4.64), \(v_{n}\) converges weakly to Aq. Our target is to show that \(q\in \Gamma \). From the construction of our algorithm, \(x_{n} \in C\) and \(v_{n} \in D, \forall n\in \mathbb{N}\). Since C and D are nonempty, closed, and convex sets, they are weakly closed. Thus, \(q\in C\) and \(Aq \in D\). Using Lemma 2.10 and the estimates in (4.1) and (4.12), we have that
and
Consequently, we obtain from (4.66) and (4.67) that
and
Now, letting \(k\rightarrow \infty \) in the above inequality, using (4.38) and (4.39), the conditions on \(\lambda _{n}, \mu _{n}\) and the weak continuity of F and G, we obtain that
This implies that
Using Definition 2.8, (4.50), (4.49), and the definition, we obtain that
The combination of (4.68) and (4.69) gives
which implies that \(q \in \Gamma \). From the fact that \(q \in \Gamma \) and (4.65), we obtain
Using the estimates in (4.34) and the conditions that follow it, together with (4.70) and Lemma 2.12, we conclude that the sequence \(\lbrace x_{n} \rbrace \) converges strongly to q, that is, \(q= P_{\Gamma}0\). This completes Case 1.
Case 2: Suppose there exists a subsequence \(\lbrace n_{i} \rbrace \) of \(\lbrace n\rbrace \) such that
By Lemma 2.13, there exists a nondecreasing sequence \(\lbrace m_{k} \rbrace \subset N\) such that \(m_{k} \rightarrow \infty \),
From (4.32), (4.10), and Lemma 2.9(4) we obtain
It follows from (4.72) that
With the condition on \(\beta _{n}\) and the fact that the limit of \(\lambda _{n}\) exists, we obtain
Following the same argument as in case 1, we have
Furthermore, it follows from (4.34) and (4.71) that
hence,
Since \(\vartheta _{m_{k}} > 0\) and using (4.71) we obtain
Taking the limit in the above inequality as \(k \rightarrow \infty \), we deduce that \(\lbrace x_{n} \rbrace \) converges strongly to \(q=P_{\Gamma}0\). This completes the proof of Theorem 4.2. □
5 Applications
We apply our Algorithm 3.3 to variational inequality problems for monotone and Lipschitz-type continuous mappings. Throughout this section, H is a real Hilbert space and C a nonempty, closed, and convex subset of H. Let \(T: C\rightarrow C\) be a nonlinear operator.
The mapping T is said to be:
- monotone on C if
$$\begin{aligned} \langle Tx-Ty, x-y \rangle \geq 0,\quad\text{for all } x,y \in C; \end{aligned}$$
- pseudomonotone on C if
$$\begin{aligned} \langle Tx,y-x\rangle \geq 0 \quad\Rightarrow\quad \langle Ty, x-y\rangle \leq 0,\quad \text{for all } x,y \in C; \end{aligned}$$
- L-Lipschitz continuous on C if there exists a positive constant L such that
$$\begin{aligned} \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert ,\quad\text{for all } x,y \in C. \end{aligned}$$
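For intuition, these properties can be checked numerically for a simple linear operator. The following Python sketch (our own illustration, not part of the paper's experiments, which are in MATLAB) verifies monotonicity and Lipschitz continuity of \(T(x)=Mx\) with M positive semidefinite on random samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear map T(x) = M x with M positive semidefinite is monotone,
# and it is L-Lipschitz continuous with L = ||M|| (the spectral norm).
B = rng.standard_normal((4, 4))
M = B.T @ B                      # symmetric positive semidefinite
T = lambda x: M @ x
L = np.linalg.norm(M, 2)         # spectral norm = Lipschitz constant

for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    # monotonicity: <Tx - Ty, x - y> >= 0
    assert (T(x) - T(y)) @ (x - y) >= -1e-9
    # Lipschitz continuity: ||Tx - Ty|| <= L ||x - y||
    assert np.linalg.norm(T(x) - T(y)) <= L * np.linalg.norm(x - y) + 1e-9
```

Here monotonicity holds because \(\langle M(x-y), x-y\rangle = \|B(x-y)\|^{2} \geq 0\).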
The variational inequality \((VI)\) has the following structure:
Now, for every \(x,y \in H\), define \(f(x,y)=\langle Tx, y-x\rangle \); then the equilibrium problem (1.7) becomes the variational inequality (5.1), where the single-valued operator T is monotone and Lipschitz continuous. Denote problem (5.1) by \(VI(C,T)\) and its solution set by \(SOL\,VI(C,T)\). We shall assume that T satisfies the following conditions:
- (C1) T is pseudomonotone on C;
- (C2) T is weak-to-strong continuous on C, that is, \(Tx_{n} \rightarrow Tx\) for each sequence \(\lbrace x_{n} \rbrace \subset C\) converging weakly to x;
- (C3) T is \(L_{1}\)-Lipschitz continuous on C for some positive constant \(L_{1}\).
Let C and D be two nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let \(A_{1}: C \rightarrow C\) and \(A_{2}: D\rightarrow D\) be \(L_{1}\)- and \(L_{2}\)-Lipschitz continuous on C and D, respectively. Suppose \(B: H_{1} \rightarrow H_{2}\) is a bounded linear operator with adjoint \(B^{*}: H_{2} \rightarrow H_{1}\). Let \(T(t)\) and \(S(s)\) be two nonexpansive semigroups defined on C and D, respectively. We consider the extragradient method for solving (5.1). Let \(\lbrace x_{n} \rbrace \) be a sequence generated by the following algorithm:
Algorithm 5.1
(Self-Adaptive Inertial Extragradient Method for VI Problem)
Iterative Steps: Step 0: Let \(x_{0}, x_{1} \in H_{1}\), \(\delta _{0}, \tau _{0} >0\), \(\mu \in (0,1)\).
Step 1: Given the current iterates \(x_{n}\) and \(x_{n-1}\) \((n\geq 1)\), choose \(\theta _{n} \in (0,1)\) such that \(0< \theta _{n} \leq \overline{\theta _{n}}\), where
and compute
where
for a given small enough \(\epsilon > 0\), such that \(\gamma _{n} \in (\epsilon, \frac{\|S(s)y_{n} - Bp_{n}\|^{2}}{\|B^{*}(S(s)y_{n} - Bp_{n})\|^{2}} - \epsilon )\), and
Set \(n:=n+1\) and return to Step 1.
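The display formulas of Algorithm 5.1 appear in the article's numbered equations. As a simplified sketch under our own assumptions (no inertial term, no semigroup step, and a fixed step size \(\lambda < 1/L\) instead of the self-adaptive rule), the underlying Korpelevich extragradient iteration for \(VI(C,T)\) on a box can be written as:

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box C = prod [lo, hi]: componentwise clipping."""
    return np.clip(x, lo, hi)

def extragradient_vi(T, x0, lo, hi, lam, tol=1e-8, max_iter=100_000):
    """Korpelevich extragradient for VI(C, T):
       y_n = P_C(x_n - lam*T(x_n)),  x_{n+1} = P_C(x_n - lam*T(y_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = project_box(x - lam * T(x), lo, hi)
        x_new = project_box(x - lam * T(y), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical test data: T(x) = M x + q with M positive definite,
# hence monotone and ||M||-Lipschitz continuous.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
M = A.T @ A + np.eye(5)
q = rng.standard_normal(5)
T = lambda x: M @ x + q
lam = 0.9 / np.linalg.norm(M, 2)   # step size below 1/L

sol = extragradient_vi(T, np.zeros(5), -5.0, 5.0, lam)
# Fixed-point characterization of VI(C, T): x solves it iff x = P_C(x - lam*T(x)).
residual = np.linalg.norm(sol - project_box(sol - lam * T(sol), -5.0, 5.0))
```

The fixed-point residual computed at the end is the standard way to certify an approximate VI solution without knowing the exact solution set.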
Theorem 5.2
Let \(A_{1}\) and \(A_{2}\) be mappings defined on C and D, respectively, such that assumptions (C1)–(C3) hold and \(\Gamma:= \lbrace p\in VI(C,A_{1}) \cap F(T(t)): Bp\in VI(D, A_{2}) \cap F(S(s)) \rbrace \neq \emptyset \). If Assumptions 3.1 and 3.2 are met, then the sequence \(\lbrace x_{n} \rbrace \) generated by Algorithm 5.1 converges strongly to \(q=P_{\Gamma}0\).
Proof
Since the single-valued operator \(A_{1}\) satisfies assumptions (C1)–(C2), one can easily verify that the bifunction \(f(x,y)=\langle A_{1}x, y-x\rangle \) satisfies conditions (B1)–(B4). Since \(A_{1}\) is \(L_{1}\)-Lipschitz continuous on C, we obtain that
Then, f is Lipschitz continuous on C with \(c_{1}=c_{2}=\frac{\tau}{\lambda _{n+1}}\). Therefore, the bifunction f satisfies condition (B5).
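For completeness, the standard estimate behind this claim (stated here with the constant-step constant \(L_{1}/2\); the self-adaptive constants \(\frac{\tau}{\lambda _{n+1}}\) above play the same role) can be sketched as:

$$\begin{aligned} f(x,y)+f(y,z)-f(x,z) &= \langle A_{1}x, y-x\rangle + \langle A_{1}y, z-y\rangle - \langle A_{1}x, z-x\rangle \\ &= \langle A_{1}x - A_{1}y, y-z\rangle \\ &\geq -\Vert A_{1}x - A_{1}y \Vert \, \Vert y-z \Vert \geq -L_{1} \Vert x-y \Vert \, \Vert y-z \Vert \\ &\geq -\frac{L_{1}}{2} \Vert x-y \Vert ^{2} - \frac{L_{1}}{2} \Vert y-z \Vert ^{2}, \end{aligned}$$

so that f satisfies the Lipschitz-type condition with \(c_{1}=c_{2}=\frac{L_{1}}{2}\).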
Now, from the definition of \(q_{n}\) and f, we obtain that
Similarly, we can obtain that \(u_{n}=P_{D}(P_{D}(Bx_{n})-\mu _{n} A_{2}(P_{D}(Bx_{n}))), v_{n}=P_{D}(P_{D}(Bx_{n})- \mu _{n} A_{2}(u_{n}))\), and \(z_{n} =P_{C}(q_{n} -\lambda _{n} A_{1} t_{n})\). This shows that the extragradient method in (3.3) now reduces to (5.3) and the conclusion is from Theorem 4.2. □
6 Numerical illustration
We present some numerical illustrations in this section and compare our work [Algorithm 3.3] with that of Narin et al. [35] [Algorithm (3.1)] and Arfat et al. [36] [Algorithm (3.2)]. All the codes were written in MATLAB R2018a. All the computations were performed on a personal computer with an Intel(R) Core(TM) i5-4300U CPU at 1.90 GHz (up to 2.49 GHz) with 8.00 GB RAM running a 64-bit OS.
In our computations, we define \(TOL_{n}:= \|x_{n+1}-x_{n}\|\) for our Algorithm 3.3, Algorithm 3.1 of Narin et al. [35], and Algorithm 3.2 of Arfat et al. [36], respectively, and use the stopping criterion \(TOL_{n} < \epsilon \) for the iterative processes, where ϵ is the predetermined error.
We consider the equilibrium problem for bifunctions \(F: H \times H \rightarrow \mathbb{R}\) arising from Nash–Cournot oligopolistic models of electricity markets [20, 21, 35]. The bifunctions can be formulated as follows:
and
where \(P,Q \in \mathbb{R}^{k \times k}\) and \(U,V \in \mathbb{R}^{m \times m}\) are positive-semidefinite matrices such that \(P-Q\) and \(U-V\) are also positive semidefinite. It is well known that the bifunctions F and G satisfy conditions (A1)–(A4) (see [27] for details), with the Lipschitz-type constants \(c_{1}=c_{2}=\frac{\|P-Q\|}{2}\) and \(d_{1} = d_{2} = \frac{\|U-V\|}{2}\).
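The displayed formulas (6.1) and (6.2) do not survive in this extract; in the cited sources such Nash–Cournot test bifunctions typically take the form \(F(x,y)=\langle Px+Qy+b, y-x\rangle \), which we assume here. Under that assumed form, the Lipschitz-type constant \(c_{1}=c_{2}=\frac{\|P-Q\|}{2}\) can be spot-checked numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 10

def random_psd_pair(k, rng):
    """Generate symmetric PSD matrices P, Q with P - Q also PSD,
       by taking Q PSD and P = Q + R for another PSD matrix R."""
    A = rng.uniform(-1, 1, (k, k))
    Q = A.T @ A
    B = rng.uniform(-1, 1, (k, k))
    R = B.T @ B
    return Q + R, Q

P, Q = random_psd_pair(k, rng)
b = rng.standard_normal(k)

# Assumed Nash-Cournot form of the bifunction (see the cited references):
F = lambda x, y: (P @ x + Q @ y + b) @ (y - x)

c = np.linalg.norm(P - Q, 2) / 2   # Lipschitz-type constants c1 = c2

# Spot-check the Lipschitz-type inequality
# F(x,z) <= F(x,y) + F(y,z) + c1*||x-y||^2 + c2*||y-z||^2 on random samples:
for _ in range(500):
    x, y, z = (rng.standard_normal(k) for _ in range(3))
    lhs = F(x, z)
    rhs = (F(x, y) + F(y, z)
           + c * np.linalg.norm(x - y)**2 + c * np.linalg.norm(y - z)**2)
    assert lhs <= rhs + 1e-8
```

The check works because \(F(x,y)+F(y,z)-F(x,z)=\langle (P-Q)(x-y), y-z\rangle \) for symmetric P and Q, which is bounded below by \(-\frac{\|P-Q\|}{2}(\|x-y\|^{2}+\|y-z\|^{2})\).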
Example
Let the bifunctions F and G be given as in (6.1) and (6.2), respectively. For computational purposes, the following boxes shall be considered: \(C= \prod^{k}_{i=1} [-5, 5], D= \prod^{m}_{j=1}[-20, 20], \overline{C}= \prod^{k}_{i=1}[-3,3]\), and \(\overline{D}= \prod^{m}_{j=1}[-10, 10]\). The nonexpansive mappings \(T: C \rightarrow C\) and \(S: D \rightarrow D\) are given by \(T=P_{\overline{C}}\) and \(S=P_{\overline{D}}\), respectively, while the linear operator \(A: \mathbb{R}^{k} \rightarrow \mathbb{R}^{m}\) is an \(m \times k\) matrix. Furthermore, the matrices \(P, Q, U\), and V are randomly generated with entries in the interval \([-5,5]\) such that they satisfy the properties above, while the matrix A is generated randomly with entries in \((0, \frac{1}{k})\) and \([-2, 2]\). The control sequences are \(\theta _{n} =\frac{9.9}{10}-\frac{1}{n+1}\) and \(\alpha _{n} = \frac{1}{n+3}\). Note that our \(\mu _{n}\) and \(\lambda _{n}\) are generated at each iteration; for Algorithms (1.10) and (1.11), we take \(\mu _{n} = \lambda _{n} = \frac{1}{4 \max \lbrace b_{1}, b_{2}\rbrace}\). Moreover, while in our Algorithm 3.3 \(\gamma _{n}\) is generated at each iteration, in Algorithms (1.10) and (1.12) we take \(\gamma _{n} = \frac{1}{2\|A\|^{2}}\).
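Since \(T=P_{\overline{C}}\) and \(S=P_{\overline{D}}\) are metric projections onto boxes, they reduce to componentwise clipping. A quick Python sketch (our own check, not the paper's MATLAB code) confirms that such a projection is nonexpansive and that its fixed-point set is exactly the box:

```python
import numpy as np

rng = np.random.default_rng(3)

# T = P_Cbar, the metric projection onto the box Cbar = prod [-3, 3],
# which for a box is just componentwise clipping.
T = lambda x: np.clip(x, -3.0, 3.0)

# Projections onto closed convex sets are nonexpansive: ||Tx - Ty|| <= ||x - y||.
for _ in range(1000):
    x, y = rng.uniform(-10, 10, 50), rng.uniform(-10, 10, 50)
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12

# Points of the box are exactly the fixed points of T.
x_in = rng.uniform(-3, 3, 50)
assert np.allclose(T(x_in), x_in)
```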
We shall consider the following cases in our numerical computation shown in Table 1:
Case 1, \(m=50\).
Case 2, \(m=100\).
Case 3, \(m=150\).
Case 4, \(m=200\).
We note that to obtain the vector \(u_{n}\) in the Algorithm 3.3, we need to solve the optimization problem
which is equivalent to the following quadratic problem:
where \(J= 2 \mu _{n} V + I_{m}\) and \(K= \mu _{n} UP_{D}(Aw_{n})- \mu _{n} V P_{D}(Aw_{n}) - P_{D}(Aw_{n} )\), see [47].
On the other hand, in order to obtain the vector \(v_{n} \), we solve the following optimization problem
which is equivalent to the following quadratic problem:
where \(\overline{J}= J\) and \(\overline{K}= \mu _{n} Uu_{n} - \mu _{n} Vu_{n} - P_{D}(Aw_{n}) \). In the same way, the vector \(t_{n}\) is obtained by solving the optimization problem
which is equivalent to the following quadratic problem:
where \(N= \lambda _{n} Q + I_{m}\) and \(M=\lambda _{n} P y_{n} -\lambda _{n} Q y_{n} -y_{n}\) and \(z_{n}\) is obtained by solving the problem:
where \(K^{*} = \lambda _{n} Q + I_{m}\) and \(K^{**}=\lambda _{n} P t_{n}-\lambda _{n} Qt_{n} -t_{n} \).
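Each of the quadratic subproblems above shares the common box-constrained form \(\min_{y\in D} \frac{1}{2}\langle Jy, y\rangle + \langle K, y\rangle \) (the displayed objectives are assumed to have this shape, since they do not survive in this extract). A minimal projected-gradient sketch for this form, with hypothetical data standing in for J and K, is:

```python
import numpy as np

def solve_box_qp(J, K, lo, hi, tol=1e-10, max_iter=50_000):
    """Minimize 0.5*y^T J y + K^T y over the box D = prod [lo, hi]
       by projected gradient descent. Requires J symmetric positive definite."""
    y = np.zeros(len(K))
    step = 1.0 / np.linalg.norm(J, 2)          # safe step size 1/L
    for _ in range(max_iter):
        y_new = np.clip(y - step * (J @ y + K), lo, hi)
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

# Sanity check: when the unconstrained minimizer -J^{-1} K lies inside the box,
# the box-constrained solution coincides with it.
rng = np.random.default_rng(4)
m = 8
V = rng.standard_normal((m, m))
J = 2 * 0.1 * (V.T @ V) + np.eye(m)   # mimics J = 2*mu_n*V + I_m with mu_n = 0.1
y_star = rng.uniform(-1, 1, m)        # chosen interior target point
K = -J @ y_star                        # so the unconstrained minimizer is y_star
y = solve_box_qp(J, K, -20.0, 20.0)
```

A projected-gradient solver is only one option; any QP solver over a box gives the same \(u_{n}, v_{n}, t_{n}\), and \(z_{n}\).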
Our Algorithm 3.3 is tested using the stopping criterion \(\|x_{n+1} - x_{n}\| < 10^{-3}\).
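The stopping rule \(TOL_{n} = \|x_{n+1}-x_{n}\| < \epsilon \) can be realized as a simple guard in the iteration loop; here is a minimal sketch with a toy contraction standing in for the full algorithmic update:

```python
import numpy as np

def iterate_until_tol(step_map, x0, eps=1e-3, max_iter=10_000):
    """Run x_{n+1} = step_map(x_n) until TOL_n = ||x_{n+1} - x_n|| < eps."""
    x = x0
    for n in range(max_iter):
        x_next = step_map(x)
        if np.linalg.norm(x_next - x) < eps:
            return x_next, n + 1
        x = x_next
    return x, max_iter

# Toy stand-in for the algorithmic update: a 0.5-contraction toward the origin.
x, iters = iterate_until_tol(lambda v: 0.5 * v, np.ones(3), eps=1e-3)
```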
7 Conclusion
An extragradient-type algorithm involving an inertial extrapolation term was constructed for solving split-equilibrium problems and fixed-point problems of nonexpansive semigroups. The scheme is easily implemented and practically useful, since it does not require prior knowledge or an estimate of the operator norm. Also, since the Lipschitz constants of the bifunctions often cannot be determined in practice, we employed self-adaptive step sizes. Under the assumption that the bifunctions are pseudomonotone, we established a strong convergence theorem. Finally, we applied our algorithm to solving variational inequality problems and fixed-point problems of nonexpansive semigroups in the framework of real Hilbert spaces.
References
Combettes, P.L., Pesquet, J.-C.: Deep Neural Network Structures. arXiv:1808.07526. https://doi.org/10.48550/arXiv.1808.07526
Heaton, H., Wu Fung, S., Gibali, A., et al.: Feasibility-based fixed point networks. Fixed Point Theory Algorithms Sci. Eng. 2021, Article ID 21 (2021). https://doi.org/10.1186/s13663-021-00706-3.
Combettes, P.L., Pesquet, J.C.: Fixed point strategies in data science. IEEE Trans. Signal Process. 69, 3878–3905 (2021). https://doi.org/10.1109/TSP.2021.3069677
Jung, A.: A fixed-point of view on gradient methods for big data. Front. Appl. Math. Stat. 3, Article 18 (2017). https://doi.org/10.3389/fams.2017.00018
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Byrne, C.: A unified treatment for some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Gibali, A.: A new split inverse problem and application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2(2), 243–258 (2017)
Combettes, P.L.: The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 95, 155–453 (1996)
Adler, R., Dedieu, J.P., Margulies, J.Y., Martens, M., Shub, M.: Newton’s method on Riemannian manifolds and a geometric model for human spine. IMA J. Numer. Anal. 22, 359–390 (2002)
Barbu, V.: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Leyden (1976)
Brezis, H., Pazy, A.: Semigroups of nonlinear contractions on convex sets. J. Funct. Anal. 6, 237–281 (1970)
Suzuki, T.: On strong convergence to common fixed points of nonexpansive semigroup in Hilbert spaces. Proc. Am. Math. Soc. 131, 2133–2136 (2002)
Rode, G.: An ergodic theorem for semigroups of nonexpansive mappings in a Hilbert space. J. Math. Anal. Appl. 85, 172–178 (1982)
Censor, Y., et al.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21(6), 2071–2084 (2005)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006). https://doi.org/10.1088/0031-9155/51/10/001
Shehu, Y., Gibali, A.: New inertial relaxed method for solving split feasibilities. Optim. Lett. (2020). https://doi.org/10.1007/s11590-020-01603-1
Dang, Y.Z., Sun, J., Xu, H.K.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 13, 1383–1394 (2017)
Qu, B., Xiu, N.: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655–1665 (2005)
Shehu, Y., Iyiola, O.S., Enyi, C.D.: An iterative algorithm for solving split feasibility problems and fixed point problems in Banach spaces. Numer. Algorithms 72, 835–864 (2016)
Suantai, S., Pholasa, N., Cholamjiak, P.: The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 14, 1595–1615 (2018)
Censor, Y., Segal, A.: On the string averaging method for sparse common fixed-point problems. Int. Trans. Oper. Res. 16(4), 481–494 (2009). https://doi.org/10.1111/j.1475-3995.2008.00684.x
Kazmi, K.R., Rizvi, S.H.: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 21(1), 44–51 (2013)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Program. 63, 123–145 (1994)
Harisa, S.A., Khan, M.A.A., Mumtaz, F., Farid, N., Morsy, A., Nisar, K.S., Ghaffar, A.: Shrinking Cesaro means method for the split equilibrium and fixed point problems in Hilbert spaces. Adv. Differ. Equ. 2020, Article ID 345 (2020)
Khan, M.A.A.: Convergence characteristics of a shrinking projection algorithm in the sense of Mosco for split equilibrium problem and fixed point problem in Hilbert spaces. Linear Nonlinear Anal. 3, 423–435 (2017)
Khan, M.A.A., Arfat, Y., Butt, A.R.: A shrinking projection approach to solve split equilibrium problems and fixed point problems in Hilbert spaces. UPB Sci. Bull., Ser. A 80(1), 33–46 (2018)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–17 (1964)
Lorenz, D.A., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)
Vinh, N.T., Muu, L.D.: Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 44(3), 639–663 (2019)
Rehman, H., Kumam, P., Argyros, I.K., et al.: Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry 12, 503 (2020)
Tan, B., Fan, J., Li, S.: Self-adaptive inertial extragradient algorithms for solving variational inequality problems. Comput. Appl. Math. 40, 19 (2021)
Plubtieng, S., Punpaeng, R.: Fixed-point solutions of variational inequalities for nonexpansive semigroups in Hilbert spaces. Math. Comput. Model. 48(1–2), 279–286 (2008). https://doi.org/10.1016/j.mcm.2007.10.002
Cianciaruso, F., Marino, G., Muglia, L.: Iterative methods for equilibrium and fixed point problems for nonexpansive semigroups in Hilbert spaces. J. Optim. Theory Appl. 146(2), 491–509 (2009). https://doi.org/10.1007/s10957-009-9628-y
Kazmi, K.R., Rizvi, S.H.: Implicit iterative method for approximating a common solution of split equilibrium problem and fixed point problem for a nonexpansive semigroup. Arab J. Math. Sci. 20(1), 57–75 (2014). https://doi.org/10.1016/j.ajmsc.2013.04.002
Narin, P., Mohsen, R., Manatchanok, K., Vahid, D.: A new extragradient algorithm for split-equilibrium problems and fixed point problems. J. Inequal. Appl. 2019, Article ID 137 (2019). https://doi.org/10.1186/s13660-019-2086-7
Arfat, Y., Kumam, P., Ngiamsunthorn, P.S., Khan, M.A.A., Sarwar, H., Fukhar-ud-Din, H.: Approximation results for split-equilibrium problems and fixed point problems of nonexpansive semigroup in Hilbert spaces. Adv. Differ. Equ. 2020, Article ID 512 (2020). https://doi.org/10.1186/s13662-020-02956-8
Shehu, Y., Izuchukwu, C., Yao, J.C., Qin, X.: Strongly convergent inertial extragradient type methods for equilibrium problems. Appl. Anal. (2021). https://doi.org/10.1080/00036811.2021.2021187
Hieu, D.V.: New inertial algorithm for a class of equilibrium problems. Numer. Algorithms 80(4), 1413–1436 (2019)
Browder, F.E.: Convergence of approximants to fixed points of nonexpansive nonlinear mappings in Banach spaces. Arch. Ration. Mech. Anal. 24(1), 82–89 (1967)
Chang, S.S.: Some problems and results in the study of nonlinear analysis. Nonlinear Anal., Theory Methods Appl. 30(7), 4197–4208 (1997)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York (2011)
Shimizu, T., Takahashi, W.: Strong convergence to common fixed points of families of nonexpansive mappings. J. Math. Anal. Appl. 211, 71–83 (1997)
Xu, H.K.: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 2, 1–17 (2002)
Mainge, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Mainge, P.E.: Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 325, 469–479 (2007)
Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^{2})\). Sov. Math. Dokl. 27, 372–376 (1983)
Censor, Y., Segal, A.: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587–600 (2009)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8(2–4), 221–239 (1994).
Korpelevich, G.M.: Extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Acknowledgements
The authors sincerely thank the editor and the three anonymous reviewers for their careful reading, constructive comments, and fruitful suggestions that substantially improved the manuscript. This paper is part of the doctoral thesis of the first author, who is a PhD candidate at the Department of Mathematics/Statistics, University of Port Harcourt. He wishes to thank the staff of the Department, in particular his advisor, Dr. J. N. Ezeora, for his fruitful mentorship.
Funding
There is no funding for this project.
Author information
Contributions
The problem design, formulation, and computation were done by the author.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Nwawuru, F.O., Ezeora, J.N. Inertial-based extragradient algorithm for approximating a common solution of split-equilibrium problems and fixed-point problems of nonexpansive semigroups. J Inequal Appl 2023, 22 (2023). https://doi.org/10.1186/s13660-023-02923-3