Dynamical inertial extragradient techniques for solving equilibrium and fixed-point problems in real Hilbert spaces
Journal of Inequalities and Applications volume 2023, Article number: 7 (2023)
Abstract
In this paper, we propose new methods for finding a common solution to pseudomonotone and Lipschitz-type equilibrium problems and a fixed-point problem for a demicontractive mapping in real Hilbert spaces. A novel hybrid technique is used to solve this problem: the method proposed here combines the extragradient method (a two-step proximal method) with a modified Mann-type iteration. Our methods use a simple step-size rule that is generated by specific computations at each iteration. A strong convergence theorem is established without knowledge of the operator’s Lipschitz constants. The numerical behavior of the suggested algorithms is described and compared with previously known algorithms in several numerical experiments.
1 Introduction
The equilibrium problem (EP) is a broad framework that includes many mathematical models as special cases, such as variational inequality problems, optimization problems, fixed-point problems, complementarity problems, Nash-equilibrium problems, and inverse optimization problems (for more details see [7, 8, 12, 33]). This equilibrium problem can be expressed mathematically as follows.
Suppose that a bifunction \(\mathcal{L} : \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}\) satisfies \(\mathcal{L} (\aleph _{1}, \aleph _{1}) = 0\) for all \(\aleph _{1} \in \mathcal{M}\). The equilibrium problem for a given bifunction \(\mathcal{L}\) on \(\mathcal{M}\) is stated as follows: Find \(s^{*} \in \mathcal{M}\) such that
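$$ \mathcal{L} \bigl(s^{*}, \aleph _{2}\bigr) \geq 0, \quad \forall \aleph _{2} \in \mathcal{M}, $$(1.1)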
where \(\mathcal{Y}\) represents a real Hilbert space and \(\mathcal{M}\) represents a nonempty, closed, and convex subset of \(\mathcal{Y}\). The study focuses on an iterative strategy for solving the equilibrium problem. The solution set of problem (1.1) is denoted by \(EP(\mathcal{M}, \mathcal{L})\). Problem (1.1) is also widely known as the Ky Fan inequality, owing to its early study in [14]. Many authors have focused on this topic in recent years, for example, see [10, 11, 13, 19, 21, 23, 32, 35, 50]. This interest comes from the fact that it neatly unifies all of the specific problems mentioned above. Many authors have established and generalized results on the existence and nature of solutions of the equilibrium problem (for more details see [2, 7, 14]). Due to the significance of the equilibrium problem and its implications in both pure and applied sciences, numerous researchers have conducted substantial studies on it in recent years [7, 9, 16]. Let us recall the definition of a Lipschitz-type continuous bifunction. A bifunction \(\mathcal{L}\) is said to be Lipschitz-type continuous [31] on \(\mathcal{M}\) if there exist two constants \(c_{1}, c_{2} > 0\) such that
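$$ \mathcal{L}(\aleph _{1}, \aleph _{3}) \leq \mathcal{L}(\aleph _{1}, \aleph _{2}) + \mathcal{L}(\aleph _{2}, \aleph _{3}) + c_{1} \Vert \aleph _{1} - \aleph _{2} \Vert ^{2} + c_{2} \Vert \aleph _{2} - \aleph _{3} \Vert ^{2}, \quad \forall \aleph _{1}, \aleph _{2}, \aleph _{3} \in \mathcal{M}. $$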
Flåm and Antipin [15] and Tran et al. [42] generated two sequences \(\{s_{k}\}\) and \(\{u_{k}\}\) in Euclidean spaces in the following manner:
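$$ \textstyle\begin{cases} u_{k} = \operatorname{arg\,min}_{u \in \mathcal{M}} \{ \delta \mathcal{L}(s_{k}, u) + \frac{1}{2} \Vert s_{k} - u \Vert ^{2} \}, \\ s_{k+1} = \operatorname{arg\,min}_{u \in \mathcal{M}} \{ \delta \mathcal{L}(u_{k}, u) + \frac{1}{2} \Vert s_{k} - u \Vert ^{2} \}, \end{cases} $$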
where \(0 < \delta < \min \{\frac{1}{2 c_{1}}, \frac{1}{2 c_{2}} \}\). Owing to Korpelevich’s earlier work on saddle-point problems [25], this approach is often referred to as the two-step extragradient method. It is worth noting that the method generates only a weakly convergent sequence and uses a fixed step size that depends entirely on the Lipschitz-type constants of the bifunction. Because these constants are typically unknown or difficult to estimate, this may limit the applicability of the method. Inertial-type procedures, on the other hand, are two-step iterative procedures in which each iterate is derived from the two preceding iterates (see [4, 36] for further details). To increase the numerical efficiency of the iterative sequence, an inertial extrapolation term is usually added. Numerical studies indicate that inertial terms improve performance in terms of execution time and total number of iterations. Several inertial-type techniques have recently been explored for various classes of equilibrium problems [3, 5, 17, 19, 47].
In this study, we are interested in finding a common solution to an equilibrium problem and a fixed-point problem in a Hilbert space [1, 20, 26, 28, 34, 40, 44–46, 48]. The motivation for studying such a common solution problem comes from its potential applicability to mathematical models whose constraints can be stated as fixed-point problems. This is especially true in practical scenarios such as signal processing, network-resource allocation, and image recovery; see, for example, [22, 28, 29].
Let \(\mathcal{T} : \mathcal{Y} \rightarrow \mathcal{Y}\) be a mapping. Then, the fixed-point problem (FPP) for the mapping \(\mathcal{T}\) is to determine \(s^{*} \in \mathcal{Y}\) such that
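$$ \mathcal{T}\bigl(s^{*}\bigr) = s^{*}. $$(1.3)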
The solution set of problem (1.3) is known as the fixed-point set of \(\mathcal{T}\) and is denoted by \(\operatorname{Fix}(\mathcal{T})\). The majority of algorithms for solving problem (1.3) are derived from the basic Mann iteration: starting from \(s_{1} \in \mathcal{Y}\), generate a sequence \(\{s_{k}\}\) for all \(k \geq 1\) by
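$$ s_{k+1} = (1 - \wp _{k}) s_{k} + \wp _{k} \mathcal{T}(s_{k}), $$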
where the sequence \(\{\wp _{k}\} \subset (0, 1)\) must satisfy certain conditions in order to achieve weak convergence. The Halpern iteration is another well-established iterative scheme for achieving strong convergence in infinite-dimensional Hilbert spaces. The iterative process can be expressed as follows:
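$$ s_{k+1} = \wp _{k} s_{1} + (1 - \wp _{k}) \mathcal{T}(s_{k}), $$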
where \(s_{1} \in \mathcal{Y}\) and the sequence \(\{\wp _{k}\} \subset (0, 1)\) is slowly diminishing and nonsummable, i.e.,
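$$ \lim_{k \rightarrow +\infty} \wp _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \wp _{k} = +\infty . $$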
In addition to the Halpern iteration, there is a generic variant, namely the Mann-type algorithm [30], in which the cost mapping \(\mathcal{T}\) is combined with a contraction mapping in the iterates. Furthermore, the hybrid steepest-descent algorithm introduced in [53] is another strategy that yields strong convergence.
Vuong et al. [52] introduced a new numerical algorithm for solving an equilibrium problem together with a fixed-point problem for a demicontractive mapping by combining the extragradient method [15, 43] with the hybrid steepest-descent technique of [53]. The authors proved that the proposed algorithm converges strongly under the premise that the bifunction is pseudomonotone and meets the Lipschitz-type requirement [31]. As stated in [31], this technique has the benefit that its subproblems can be computed numerically using optimization tools. The extragradient Mann-type approach described in [31] also enables us to eliminate numerous strong criteria in establishing the convergence of previously known extragradient algorithms. Other strongly convergent methods for finding an element \(s^{*} \in \operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L})\) that integrate the extragradient approach with the hybrid or shrinking projection technique may be found in [20, 34, 39].
In this study, inspired and motivated by the findings of Takahashi et al. [40], Maingé [29], and Vuong et al. in [52] and based on the work of [27], we present a new strongly convergent algorithm as a combination of the extragradient method (two-step proximal-like method) and the Mann-type iteration [30] for approximating a common solution of a pseudomonotone and Lipschitz-type equilibrium problem and a fixed-point problem for a demicontractive mapping.
As indicated above, the results in this study remain valid for the general class of demicontractive mappings, which relaxes more restrictive classes of mappings. The typical Mann iteration produces only weak convergence, whereas the approach used in this study, which employs a comparable Mann-type iteration, produces strong convergence. This is especially valuable in infinite-dimensional Hilbert spaces, where strong norm convergence is more useful than weak convergence. Several numerical experiments in finite- and infinite-dimensional Hilbert spaces demonstrate that the new strategy is promising and offers competitive advantages over previous approaches.
The paper is organized as follows. Section 2 presents some basic results. Section 3 introduces the new methods and establishes their convergence analysis, while Sect. 4 describes some applications. Finally, Sect. 5 provides numerical results that demonstrate the practical utility of the proposed techniques.
2 Preliminaries
Let \(\mathcal{M}\) be a nonempty, closed, and convex subset of the real Hilbert space \(\mathcal{Y}\). Weak convergence is denoted by \(s_{k} \rightharpoonup s\) and strong convergence by \(s_{k} \rightarrow s\). The following identities hold for all \(\aleph _{1}, \aleph _{2} \in \mathcal{Y}\) and \(a \in \mathbb{R}\):
- (1) \(\|\aleph _{1} + \aleph _{2}\|^{2} = \|\aleph _{1}\|^{2} + 2 \langle \aleph _{1}, \aleph _{2} \rangle + \|\aleph _{2}\|^{2}\);
- (2) \(\|\aleph _{1} + \aleph _{2}\|^{2} \leq \|\aleph _{1}\|^{2} + 2 \langle \aleph _{2}, \aleph _{1} + \aleph _{2} \rangle \);
- (3) \(\|a \aleph _{1} + (1 - a) \aleph _{2} \|^{2} = a \|\aleph _{1}\|^{2} + (1 - a)\| \aleph _{2} \|^{2} - a(1 - a)\|\aleph _{1} - \aleph _{2}\|^{2}\).
A metric projection \(P_{\mathcal{M}}(\aleph _{1})\) of an element \(\aleph _{1} \in \mathcal{Y}\) is defined by:
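$$ P_{\mathcal{M}}(\aleph _{1}) = \operatorname{arg\,min} \bigl\{ \Vert \aleph _{1} - \aleph _{2} \Vert : \aleph _{2} \in \mathcal{M} \bigr\} . $$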
It is well known that \(P_{\mathcal{M}}\) is nonexpansive and satisfies the following useful properties:
- (1) \(\langle \aleph _{1} - P_{\mathcal{M}}(\aleph _{1}), \aleph _{2} - P_{\mathcal{M}}(\aleph _{1}) \rangle \leq 0\), \(\forall \aleph _{2} \in \mathcal{M}\);
- (2) \(\|P_{\mathcal{M}}(\aleph _{1}) - P_{\mathcal{M}}(\aleph _{2})\|^{2} \leq \langle P_{\mathcal{M}}(\aleph _{1}) - P_{\mathcal{M}}( \aleph _{2}), \aleph _{1} - \aleph _{2} \rangle \), \(\forall \aleph _{1}, \aleph _{2} \in \mathcal{Y}\).
Definition 2.1
Assume that \(\mathcal{T} : \mathcal{Y} \rightarrow \mathcal{Y}\) is a nonlinear mapping with \(\operatorname{Fix}(\mathcal{T}) \neq \emptyset \). Then, \(I - \mathcal{T}\) is called demiclosed at zero if, for every sequence \(\{s_{k}\}\) in \(\mathcal{Y}\), the following implication holds:
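$$ s_{k} \rightharpoonup s \quad \text{and} \quad (I - \mathcal{T}) s_{k} \rightarrow 0 \quad \Longrightarrow \quad s \in \operatorname{Fix}(\mathcal{T}). $$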
Lemma 2.2
([37])
Suppose that \(\{g_{k}\} \subset [0, +\infty )\), \(\{h_{k}\} \subset (0, 1)\), and \(\{r_{k}\} \subset \mathbb{R}\) are sequences satisfying the following conditions:
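$$ g_{k+1} \leq (1 - h_{k}) g_{k} + h_{k} r_{k}, \quad \forall k \geq 1, \qquad \sum_{k=1}^{+\infty} h_{k} = +\infty . $$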
If \(\limsup_{j \rightarrow +\infty} r_{k_{j}} \leq 0\) for every subsequence \(\{g_{k_{j}}\}\) of \(\{g_{k}\}\) satisfying \(\liminf_{j \rightarrow +\infty} ( g_{k_{j}+1} - g_{k_{j}} ) \geq 0\), then \(\lim_{k \rightarrow +\infty} g_{k} = 0\).
Definition 2.3
Let \(\mathcal{M}\) be a subset of a real Hilbert space \(\mathcal{Y}\) and \(\digamma : \mathcal{M} \rightarrow \mathbb{R}\) a given convex function.
- (1) The normal cone at \(\aleph _{1} \in \mathcal{M}\) is defined by
$$ N_{\mathcal{M}}(\aleph _{1}) = \bigl\{ \aleph _{3} \in \mathcal{Y} : \langle \aleph _{3}, \aleph _{2} - \aleph _{1} \rangle \leq 0, \forall \aleph _{2} \in \mathcal{M} \bigr\} . $$(2.1)
- (2) The subdifferential of a function Ϝ at \(\aleph _{1} \in \mathcal{M}\) is defined by
$$ \partial \digamma (\aleph _{1}) = \bigl\{ \aleph _{3} \in \mathcal{Y} : \digamma (\aleph _{2}) - \digamma (\aleph _{1}) \geq \langle \aleph _{3}, \aleph _{2} - \aleph _{1} \rangle , \forall \aleph _{2} \in \mathcal{M} \bigr\} . $$(2.2)
Lemma 2.4
([41])
Let \(\digamma : \mathcal{M} \rightarrow \mathbb{R}\) be a subdifferentiable and lower semicontinuous function on \(\mathcal{M}\). An element \(\aleph _{1} \in \mathcal{M}\) is a minimizer of Ϝ if and only if
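$$ 0 \in \partial \digamma (\aleph _{1}) + N_{\mathcal{M}}(\aleph _{1}), $$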
where \(\partial \digamma (\aleph _{1})\) denotes the subdifferential of Ϝ at vector \(\aleph _{1} \in \mathcal{M}\) and \(N_{\mathcal{M}}(\aleph _{1})\) is the normal cone of \(\mathcal{M}\) at vector \(\aleph _{1}\).
3 Main results
In this section, we examine in detail the convergence of several inertial extragradient algorithms for solving equilibrium and fixed-point problems. The algorithms have distinct characteristics, described below. To establish strong convergence, the following conditions must be met:
(\(\mathcal{L}\)1) The solution set \(\operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L}) \neq \emptyset \);
(\(\mathcal{L}\)2) The bifunction \(\mathcal{L}\) is pseudomonotone [6, 8] on \(\mathcal{M}\), i.e.,
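$$ \mathcal{L}(\aleph _{1}, \aleph _{2}) \geq 0 \quad \Longrightarrow \quad \mathcal{L}(\aleph _{2}, \aleph _{1}) \leq 0, \quad \forall \aleph _{1}, \aleph _{2} \in \mathcal{M}; $$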
(\(\mathcal{L}\)3) The bifunction \(\mathcal{L}\) is Lipschitz-type continuous [31] on \(\mathcal{M}\), i.e., there exist two constants \(c_{1}, c_{2} > 0\) such that
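$$ \mathcal{L}(\aleph _{1}, \aleph _{3}) \leq \mathcal{L}(\aleph _{1}, \aleph _{2}) + \mathcal{L}(\aleph _{2}, \aleph _{3}) + c_{1} \Vert \aleph _{1} - \aleph _{2} \Vert ^{2} + c_{2} \Vert \aleph _{2} - \aleph _{3} \Vert ^{2}, \quad \forall \aleph _{1}, \aleph _{2}, \aleph _{3} \in \mathcal{M}; $$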
(\(\mathcal{L}\)4) For any sequence \(\{\aleph _{k}\} \subset \mathcal{M}\) satisfying \(\aleph _{k} \rightharpoonup \aleph ^{*}\), the following inequality holds:
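$$ \limsup_{k \rightarrow +\infty} \mathcal{L}(\aleph _{k}, \aleph _{2}) \leq \mathcal{L}\bigl(\aleph ^{*}, \aleph _{2}\bigr), \quad \forall \aleph _{2} \in \mathcal{M}; $$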
(\(\mathcal{L}\)5) The mapping \(\mathcal{T} : \mathcal{Y} \rightarrow \mathcal{Y}\) is such that \((I - \mathcal{T})\) is demiclosed at zero and \(\mathcal{T}\) is ρ-demicontractive, i.e., there exists a constant \(0 \leq \rho < 1\) such that
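$$ \bigl\Vert \mathcal{T}(\aleph _{1}) - s^{*} \bigr\Vert ^{2} \leq \bigl\Vert \aleph _{1} - s^{*} \bigr\Vert ^{2} + \rho \bigl\Vert \aleph _{1} - \mathcal{T}(\aleph _{1}) \bigr\Vert ^{2}, \quad \forall \aleph _{1} \in \mathcal{Y}, s^{*} \in \operatorname{Fix}(\mathcal{T}), $$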
or equivalently
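$$ \bigl\langle \aleph _{1} - \mathcal{T}(\aleph _{1}), \aleph _{1} - s^{*} \bigr\rangle \geq \frac{1 - \rho}{2} \bigl\Vert \aleph _{1} - \mathcal{T}(\aleph _{1}) \bigr\Vert ^{2}, \quad \forall \aleph _{1} \in \mathcal{Y}, s^{*} \in \operatorname{Fix}(\mathcal{T}). $$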
The first algorithm is described below to find a common solution to an equilibrium and a fixed-point problem. The main advantage of this method is that it employs a monotone step-size rule that is independent of Lipschitz constants. The algorithm employs Mann-type iteration to aid in the solution of a fixed-point problem, and the two-step extragradient approach to solve an equilibrium problem.

The following lemma is used to demonstrate that the monotone step-size sequence generated by equation (3.2) is properly defined and bounded.
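Lipschitz-free monotone rules of this type take the form

$$ \delta _{k+1} = \textstyle\begin{cases} \min \bigl\{ \delta _{k}, \frac{\tau ( \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \Vert v_{k} - u_{k} \Vert ^{2} )}{2 [ \mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) ]} \bigr\} & \text{if } \mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) > 0, \\ \delta _{k} & \text{otherwise}, \end{cases} $$

which is the structure used in the proof of Lemma 3.1 below.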
Lemma 3.1
The sequence \(\{\delta _{k} \}\) generated by (3.2) converges to some δ and satisfies \(\min \{\frac{\tau}{\max \{2c_{1}, 2c_{2}\}}, \delta _{1} \} \leq \delta _{k} \leq \delta _{1}\).
Proof
Let \(\mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) > 0\). Thus, we have
Hence, \(\{\delta _{k}\}\) is nonincreasing and bounded below by \(\min \{\frac{\tau}{\max \{2c_{1}, 2c_{2}\}}, \delta _{1} \}\), and therefore \(\lim_{k \rightarrow +\infty} \delta _{k} = \delta \) exists. This completes the proof. □
The second method is described below to find a common solution to an equilibrium and a fixed-point problem. The primary benefit of this method is that it employs a nonmonotone step-size rule that is independent of Lipschitz constants. The algorithm solves a fixed-point problem using Mann-type iteration and an equilibrium problem with the two-step extragradient approach.

The following lemma is employed to establish that the nonmonotone step-size sequence created by equation (3.5) is properly defined and bounded. We give a proof that completely establishes the boundedness and convergence of a step-size sequence.
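Nonmonotone rules of this type allow a controlled increase through a summable sequence \(\{\chi _{k}\}\) and take the form

$$ \delta _{k+1} = \textstyle\begin{cases} \min \bigl\{ \delta _{k} + \chi _{k}, \frac{\tau ( \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \Vert v_{k} - u_{k} \Vert ^{2} )}{2 [ \mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) ]} \bigr\} & \text{if } \mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) > 0, \\ \delta _{k} + \chi _{k} & \text{otherwise}, \end{cases} $$

which explains the upper bound \(\delta _{1} + P\) in Lemma 3.2.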
Lemma 3.2
The sequence \(\{\delta _{k} \}\) converges to some δ and satisfies \(\min \{\frac{\tau }{\max \{2c_{1}, 2c_{2}\}}, \delta _{1} \} \leq \delta _{k} \leq \delta _{1} + P\), where \(P = \sum_{k=1}^{+\infty} \chi _{k}\).
Proof
Let \(\mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) > 0\). Thus, we have
The claimed bounds on \(\delta _{k+1}\) follow by mathematical induction.
Assume that \([\delta _{k+1} - \delta _{k}]^{+} = \max \{0, \delta _{k+1} - \delta _{k} \}\) and
By the definition of \(\{\delta _{k}\}\), we obtain
That is, the series \(\sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{+}\) is convergent. It remains to prove the convergence of \(\sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{-}\). Suppose, to the contrary, that \(\sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{-} = +\infty \). Due to the fact that
we obtain
Letting \(k \rightarrow +\infty \) in (3.8), we have \(\delta _{k} \rightarrow -\infty \) as \(k \rightarrow +\infty \), which is a contradiction. Consequently, both series \(\sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{+}\) and \(\sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{-}\) converge. Letting \(k \rightarrow +\infty \) in (3.8), we obtain \(\lim_{k \rightarrow +\infty} \delta _{k} = \delta \). This concludes the proof. □
The following lemma is used to verify the boundedness of the iterative sequence; it is crucial for proving both the boundedness of the sequence and its strong convergence to a common solution.
Lemma 3.3
Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 1 that meets the conditions (\(\mathcal{L}\)1)–(\(\mathcal{L}\)5). Then, we have
Proof
By the use of Lemma 2.4, we have
There exist a vector \(\omega \in \partial \mathcal{L}(u_{k}, v_{k})\) and a vector \(\overline{\omega} \in N_{\mathcal{Y}_{k}}(v_{k})\) such that
The preceding expression implies that
Since \(\overline{\omega} \in N_{\mathcal{Y}_{k}}(v_{k})\), we have \(\langle \overline{\omega}, u - v_{k} \rangle \leq 0\) for all \(u \in \mathcal{Y}_{k}\). As a result, we obtain
Furthermore, since \(\omega \in \partial \mathcal{L}(u_{k}, v_{k})\), the definition of the subdifferential gives
Combining the formulas (3.9) and (3.10), we obtain
By the definition of the half-space \(\mathcal{Y}_{k}\), we have
Since \(\omega _{k} \in \partial \mathcal{L}(\varkappa _{k}, u_{k})\), this implies that
By inserting \(u = v_{k}\), we derive
From (3.12) and (3.13), we derive
By inserting \(u = s^{*}\) into formula (3.11), we obtain
Given \(s^{*} \in EP(\mathcal{L}, \mathcal{M})\), we conclude that \(\mathcal{L}(s^{*}, u_{k}) \geq 0\). Due to the pseudomonotonicity of the bifunction \(\mathcal{L}\), we derive \(\mathcal{L}(u_{k}, s^{*}) \leq 0\). Combining this with (3.15), we obtain
By using the definition of \(\delta _{k+1}\), we obtain
Due to the expressions (3.16) and (3.17), we obtain
Combining the formulas (3.14) and (3.18), we obtain
We also have the following useful identities:
By using expressions (3.19), (3.20), and (3.21), we obtain
□
The following theorem is the main result used to establish the strong convergence of the iterative sequence. It shows that the suggested sequence converges strongly to a common solution under both the monotone and the nonmonotone step-size criteria.
Theorem 3.4
Suppose that \(\mathcal{L} : \mathcal{M} \times \mathcal{M} \rightarrow \mathbb{R}\) satisfies the conditions (\(\mathcal{L}\)1)–(\(\mathcal{L}\)5). Then, sequence \(\{s_{k}\}\) generated by Algorithm 1 strongly converges to \(s^{*} \in \operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L})\), where \(s^{*} = P_{\operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L})} (0)\).
Proof
Claim 1: The sequence \(\{s_{k}\}\) is bounded.
It is worth noting that \(EP(\mathcal{M}, \mathcal{L})\) and \(\operatorname{Fix}(\mathcal{T})\) are both closed, convex subsets. It is given that
Namely, \(s^{*} \in EP(\mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T})\), as well as
As \(s^{*} \in \Omega \) and based on the description of \(s_{k+1}\), we have
Then, we must compute the following:
As a result, the previous expression implies that
From expressions (3.24) and (3.26), we have
In the context of Lemma 3.3, we derive
Due to Lemma 3.1, we obtain
Thus, this means that there exists \(N_{1} \in \mathbb{N}\) such that
According to expressions (3.28) and (3.30), we have
From expression (3.1), we have
and
As a result, this indicates that
From the formulas (3.31) and (3.32) with the definition of \(\{\varkappa _{k}\}\), we obtain
where \(K_{1} > 0\) is a constant
Considering the formulas (3.31) and (3.33), we obtain
Combining (3.26) and (3.35), we obtain
As a result, we infer that the sequence \(\{s_{k}\}\) is bounded.
Claim 2:
for some \(K_{4} > 0\). Indeed, it follows from relation (3.35) that
for some \(K_{2} > 0\). In addition, we have
where \(K_{4} = K_{2} + K_{3}\). Finally, we have
Claim 3:
By setting the following value
we have
where
By definition of \(s_{k+1}\), we can write
Next, we have to evaluate
It is given that \(\wp _{k} \subset (0, 1 - \rho )\) and using the expression (3.31), we obtain
According to the definition of \(\varkappa _{k}\), one obtains
where
Combining expressions (3.43), (3.44), and (3.46), we obtain
Claim 4: The sequence \(\lVert s_{k} - s^{*} \rVert ^{2}\) converges to zero.
Set
and
Then, Claim 4 can be rewritten as follows:
Indeed, from Lemma 2.2, it suffices to show that \(\limsup_{j \rightarrow \infty} r_{k_{j}} \leq 0\) for every subsequence \(\{p_{k_{j}}\}\) of \(\{p_{k}\}\) satisfying
This is equivalent to the need to show that
for every subsequence \(\{\|s_{k_{j}} - s^{*}\|\}\) of \(\{\|s_{k} - s^{*}\|\}\) satisfying
Assume that \(\{\|s_{k_{j}} - s^{*}\|\}\) is a subsequence of \(\{\|s_{k} - s^{*}\|\}\) satisfying
Then,
It follows from Claim 2 that
The above relation implies that
Therefore, we obtain
According to the definition of \(\varkappa _{k}\) one has
This, together with \(\lim_{j \rightarrow \infty} \| v_{k_{j}} - \varkappa _{k_{j}}\| = 0\), yields that
From expressions (3.50) and (3.53), we deduce that
Taking limit \(j \rightarrow \infty \) on both sides of the equation, we have
This implies that
Due to expression (3.11), we have
By expression (3.17), we obtain
Combining relations (3.57), (3.58), and (3.14) we write
where u is an arbitrary element in \(\mathcal{Y}_{k}\). By the boundedness of the sequence and expression (3.50), the right-hand side of the last inequality goes to zero. Using \(\delta _{k_{j}} \geq \delta > 0\), we obtain
It is given that \(\mathcal{M} \subset \mathcal{Y}_{k}\); hence \(\mathcal{L}(\hat{s}, u) \geq 0\) for all \(u \in \mathcal{M}\), which gives \(\hat{s} \in EP(\mathcal{L}, \mathcal{M})\). By the demiclosedness of \((I - \mathcal{T})\), we obtain \(\hat{s} \in \operatorname{Fix}(\mathcal{T})\). Since the sequence \(\{s_{k}\}\) is bounded, there exists a subsequence \(\{s_{k_{j}}\}\) of \(\{s_{k}\}\) such that \(s_{k_{j}} \rightharpoonup \hat{s}\). It is given that
Namely, \(s^{*} \in EP(\mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T})\) as well as
It is given that \(\hat{s} \in EP(\mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T})\). Thus, we have
Using the fact that \(\lim_{j \rightarrow \infty} \lVert s_{k_{j} + 1} - s_{k_{j}} \rVert = 0\), we have
Combining Claim 3 with Lemma 2.2, we conclude that \(s_{k} \rightarrow s^{*}\) as \(k \rightarrow \infty \). The proof of Theorem 3.4 is complete. □
The third method, described below, does not involve the subgradient technique and is effective in some situations; its convergence proof is analogous to that of Algorithm 1. The key feature of this method is that it adopts a monotone step-size rule that is independent of the Lipschitz constants. The algorithm uses a Mann-type iteration to solve the fixed-point problem and the two-step extragradient technique to solve the equilibrium problem.

The fourth method, which also avoids the subgradient technique, is successful in some scenarios; its convergence proof is analogous to that of Algorithm 1. The key feature of this technique is that it uses a nonmonotone step-size rule that is independent of the Lipschitz constants.

4 Applications
In this section, we apply our main results to find a common solution of variational inequality and fixed-point problems. The bifunction (4.2) below is employed to obtain the following conclusions, all of which are direct consequences of our main findings.
Let \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) be an operator. First, we look at the classic variational inequality problem [24, 38], which is expressed as follows:
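$$ \text{Find } s^{*} \in \mathcal{M} \text{ such that } \bigl\langle \mathcal{A}\bigl(s^{*}\bigr), \aleph _{2} - s^{*} \bigr\rangle \geq 0, \quad \forall \aleph _{2} \in \mathcal{M}. $$(4.1)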
Define a bifunction \(\mathcal{F}\) as follows:
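$$ \mathcal{F}(\aleph _{1}, \aleph _{2}) := \bigl\langle \mathcal{A}(\aleph _{1}), \aleph _{2} - \aleph _{1} \bigr\rangle , \quad \forall \aleph _{1}, \aleph _{2} \in \mathcal{M}. $$(4.2)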
Then, the equilibrium problem reduces to the variational inequality problem (4.1), and the Lipschitz-type constants of \(\mathcal{F}\) are related to the Lipschitz constant L of the mapping \(\mathcal{A}\) by \(L = 2 c_{1} = 2 c_{2}\).
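In this setting the proximal (minimization) steps reduce to projections, since for every \(s \in \mathcal{Y}\) and \(\delta > 0\),

$$ \operatorname{arg\,min}_{u \in \mathcal{M}} \biggl\{ \delta \bigl\langle \mathcal{A}(s), u - s \bigr\rangle + \frac{1}{2} \Vert s - u \Vert ^{2} \biggr\} = P_{\mathcal{M}} \bigl( s - \delta \mathcal{A}(s) \bigr) . $$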
The following corollary is derived from the proposed Algorithm 1; for this choice of bifunction, the minimization steps used for the equilibrium problem reduce to projections onto a convex set. This result yields a common solution to a variational inequality problem and a fixed-point problem.
Corollary 4.1
Suppose that \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is a weakly continuous, pseudomonotone, and L-Lipschitz continuous mapping and the solution set \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\) is nonempty. Take \(s_{0}, s_{1} \in \mathcal{M}\), \(\ell \in (0, 1)\), \(\tau \in (0, 1)\), \(\delta_{1} > 0\). Choose two positive numbers \(a, b\) such that \(0 < a, b < 1- \rho\) and \(0 < a, b < 1- \Im_{k}\). Moreover, choose \(\{\wp_{k}\} \subset (a, b)\) and \(\{\Im_{k}\} \subset (0, 1)\) satisfying the following conditions:
Calculate
while \(\ell _{k}\) is taken as follows:
Moreover, a positive sequence \(\varrho _{k} = o(\wp _{k})\) satisfies \(\lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\wp _{k}} = 0\). First, we have to compute
where
Calculate
The following step size should be updated:
Then, the sequence \(\{s_{k}\}\) converges strongly to \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\).
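To illustrate how the projection steps, the Mann-type fixed-point step, and a Lipschitz-free step-size update interact, the following is a minimal, non-inertial sketch in \(\mathbb{R}^{n}\). The operator \(\mathcal{A}\), the demicontractive mapping \(\mathcal{T}\), the feasible set, the parameter choices, and the particular step-size update are illustrative assumptions and are not taken verbatim from Corollary 4.1.

```python
import numpy as np

# Illustrative data (assumptions for this sketch, not taken from the paper):
# A(s) = M s is monotone and Lipschitz on a Euclidean ball of radius 10,
# T(s) = -1.5 s is rho-demicontractive with rho = 0.2 and Fix(T) = {0},
# so the common solution set Fix(T) \cap VI(M_set, A) is {0}.
n = 20
rng = np.random.default_rng(1)
S = rng.standard_normal((n, n))
M = S - S.T + np.eye(n)                  # symmetric part = I  =>  A monotone
A = lambda s: M @ s
T = lambda s: -1.5 * s                   # 0.2-demicontractive mapping

def proj(s, radius=10.0):
    """Projection onto the feasible set, here a ball of radius 10 (assumed)."""
    nrm = np.linalg.norm(s)
    return s if nrm <= radius else s * (radius / nrm)

tau, delta = 0.5, 0.2                    # illustrative step-size parameters
s = np.full(n, 2.0)                      # starting point
for k in range(1, 5000):
    wp = 1.0 / (10 * k + 4)              # Mann-type parameter in (0, 1 - rho)
    u = proj(s - delta * A(s))           # first projection step
    v = proj(s - delta * A(u))           # extragradient (second) step
    # One common Lipschitz-free update: shrink delta only when necessary.
    denom = 2.0 * np.dot(A(s) - A(u), v - u)
    if denom > 0:
        delta = min(delta, tau * (np.linalg.norm(u - s) ** 2
                                  + np.linalg.norm(v - u) ** 2) / denom)
    s_new = (1 - wp) * v + wp * T(v)     # Mann-type fixed-point step
    if np.linalg.norm(s_new - s) < 1e-8:
        s = s_new
        break
    s = s_new
print("approximate common solution (should be near 0):", np.round(s, 4))
```

Because the toy operator is strongly monotone on the ball, the iterates approach the unique common solution 0; this only illustrates the interplay of the three ingredients and not the full inertial scheme.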
The following corollary is derived from the proposed Algorithm 2; again, the minimization steps for the equilibrium problem reduce to projections onto a convex set.
Corollary 4.2
Suppose that \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is a weakly continuous, pseudomonotone, and L-Lipschitz continuous mapping and the solution set \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\) is nonempty. Take \(s_{0}, s_{1} \in \mathcal{M}\), \(\ell \in (0, 1)\), \(\tau \in (0, 1)\), \(\delta_{1} > 0\). Choose two positive numbers \(a, b\) such that \(0 < a, b < 1- \rho\) and \(0 < a, b < 1- \Im_{k}\). Moreover, choose \(\{\wp_{k}\} \subset (a, b)\) and \(\{\Im_{k}\} \subset (0, 1)\) satisfying the following conditions:
Calculate
while \(\ell _{k}\) is taken as follows:
Moreover, a positive sequence \(\varrho _{k} = o(\wp _{k})\) satisfies \(\lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\wp _{k}} = 0\). First, we have to compute
where
Calculate
Moreover, choose a nonnegative real sequence \(\{\chi _{k}\}\) such that \(\sum_{k=1}^{+\infty} \chi _{k} < +\infty \). The following step size should be updated:
Then, the sequence \(\{s_{k}\}\) converges strongly to \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\).
The following corollary is derived from the proposed Algorithm 3; again, the minimization steps for the equilibrium problem reduce to projections onto a convex set.
Corollary 4.3
Suppose that \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is a weakly continuous, pseudomonotone, and L-Lipschitz continuous mapping and the solution set \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\) is nonempty. Take \(s_{0}, s_{1} \in \mathcal{M}\), \(\ell \in (0, 1)\), \(\tau \in (0, 1)\), \(\delta_{1} > 0\). Choose two positive numbers \(a, b\) such that \(0 < a, b < 1- \rho\) and \(0 < a, b < 1- \Im_{k}\). Moreover, choose \(\{\wp_{k}\} \subset (a, b)\) and \(\{\Im_{k}\} \subset (0, 1)\) satisfying the following conditions:
Calculate
while \(\ell _{k}\) is taken as follows:
Moreover, a positive sequence \(\varrho _{k} = o(\wp _{k})\) satisfies \(\lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\wp _{k}} = 0\). First, we have to compute
Calculate
The following step size should be updated:
Then, the sequence \(\{s_{k}\}\) converges strongly to \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\).
The proposed Algorithm 4, with the minimization steps for the equilibrium problem reduced to projections onto a convex set, leads to the following corollary.
Corollary 4.4
Suppose that \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is a weakly continuous, pseudomonotone, and L-Lipschitz continuous mapping and the solution set \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\) is nonempty. Take \(s_{0}, s_{1} \in \mathcal{M}\), \(\ell \in (0, 1)\), \(\tau \in (0, 1)\), \(\delta_{1} > 0\). Choose two positive numbers \(a, b\) such that \(0 < a, b < 1- \rho\) and \(0 < a, b < 1- \Im_{k}\). Moreover, choose \(\{\wp_{k}\} \subset (a, b)\) and \(\{\Im_{k}\} \subset (0, 1)\) satisfying the following conditions:
Calculate
while \(\ell _{k}\) is taken as follows:
Moreover, a positive sequence \(\varrho _{k} = o(\wp _{k})\) satisfies \(\lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\wp _{k}} = 0\). First, we have to compute
Calculate
Moreover, choose a nonnegative real sequence \(\{\chi _{k}\}\) such that \(\sum_{k=1}^{+\infty} \chi _{k} < +\infty \). The following step size should be updated:
Then, the sequence \(\{s_{k}\}\) converges strongly to \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\).
5 Numerical illustrations
This section reports the computational results of the proposed methods, together with an examination of how variations in the control parameters affect their numerical efficiency. All computations are run in MATLAB R2018b on an HP laptop with an Intel Core(TM) i5-6200 processor and 8.00 GB (7.78 GB usable) RAM.
Example 5.1
The first sample problem here is taken from the Nash–Cournot Oligopolistic Equilibrium model in [43]. Suppose that a function \(q : \mathcal{Y} \rightarrow \mathbb{R}\) is described through
The subgradient projection is a mapping that is characterized as follows:
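$$ \mathcal{T}(s) = \textstyle\begin{cases} s - \frac{q(s)}{ \Vert r(s) \Vert ^{2}} r(s) & \text{if } q(s) > 0, \\ s & \text{otherwise}, \end{cases} $$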
wherein \(r(s) \in \partial q(s)\). In this case, \(\mathcal{T}\) is quasinonexpansive, demiclosed at zero, and \(\operatorname{Fix}(\mathcal{T}) = \operatorname{lev}_{\leq 0} q = \{ s \in \mathcal{Y} : q(s) \leq 0 \}\). In this instance, the bifunction \(\mathcal{F}\) can be expressed as follows:
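$$ \mathcal{F}(s, u) = \langle P s + Q u + c, u - s \rangle , $$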
wherein \(c \in \mathbb{R}^{M}\) and P, Q are matrices of order M. The matrix P is symmetric positive-semidefinite and the matrix \(Q - P\) is symmetric negative-semidefinite, with Lipschitz-type constants \(c_{1} = c_{2} = \frac{1}{2}\| P - Q\|\) (for additional information, see [43]). The starting point is \(s_{0} = s_{1}= (2, 2, \ldots , 2)\), the dimension of the space is varied, and the stopping criterion is \(D_{k} = \|\varkappa _{k} - u_{k}\| \leq 10^{-3}\). Figures 1–10 depict the numerical results for Example 5.1. The following control parameters are used (a script for generating test data consistent with these assumptions is sketched after the parameter list):
- (1) Algorithm 1 in [52] (briefly, EGM):
$$ \wp _{k} = \frac{1}{(10 k + 4)}, \qquad \Im _{k} = \frac{1 - \Im}{5},\qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
- (2) Algorithm 2 in [51] (briefly, I-EGM):
$$ \wp _{k} = \frac{1}{(10 k + 4)},\qquad \Im _{k} = \frac{1 - \Im}{5}, \qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
- (3) Algorithm 1 in [18] (briefly, H-EGM):
$$ \wp _{k} = \frac{1}{(10 k + 4)}, \qquad \Im _{k} = \frac{1}{5}, \qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
- (4) Algorithm 1 (briefly, M-EGM):
$$\begin{aligned}& \delta _{1} = 0.36,\qquad \ell = 0.57,\qquad \tau = 0.264,\qquad \varrho _{k} = \frac{10}{(1+k)^{3}}, \\& \Im _{k} = \frac{1 - \Im}{5},\qquad \wp _{k} = \frac{1}{(10 k + 4)},\qquad \mathrm{g}(s)=\frac{s}{5}; \end{aligned}$$
- (5) Algorithm 2 (briefly, IM-EGM):
$$\begin{aligned}& \delta _{1} = 0.36,\qquad \ell = 0.57,\qquad \tau = 0.264, \qquad \varrho _{k} = \frac{10}{(1+k)^{2}}, \\& \Im _{k} = \frac{1 - \Im}{5}, \qquad \wp _{k} = \frac{1}{(10 k + 4)},\qquad \mathrm{g}(s)=\frac{s}{5},\qquad \chi _{k} = \frac{20}{(1 + k)^{2}}. \end{aligned}$$
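The paper does not specify how the matrices P and Q of Example 5.1 were generated. The snippet below is one way (an assumption, not the authors' code) to produce data consistent with the stated requirements, namely P symmetric positive-semidefinite, \(Q - P\) symmetric negative-semidefinite, and \(c_{1} = c_{2} = \frac{1}{2}\|P - Q\|\); any further conditions imposed in [43] are not enforced here.

```python
import numpy as np

def make_test_data(m, seed=0):
    """Generate (P, Q, c) consistent with the stated assumptions of Example 5.1."""
    rng = np.random.default_rng(seed)
    B1 = rng.standard_normal((m, m))
    B2 = rng.standard_normal((m, m))
    P = B1 @ B1.T                       # symmetric positive-semidefinite
    Q = P - B2 @ B2.T                   # Q - P = -B2 B2^T is negative-semidefinite
    c = rng.standard_normal(m)
    c1 = c2 = 0.5 * np.linalg.norm(P - Q, 2)   # Lipschitz-type constants
    return P, Q, c, c1, c2

P, Q, c, c1, c2 = make_test_data(m=5)
F = lambda s, u: (P @ s + Q @ u + c) @ (u - s)  # bifunction of Example 5.1
s0 = np.full(5, 2.0)                            # starting point s_0 = s_1
print("c1 = c2 =", c1, "  F(s0, s0) =", F(s0, s0))   # F(s0, s0) = 0
```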
Example 5.2
Consider the real Hilbert space \(\mathcal{Y} = L^{2}([0, 1])\) with inner product \(\langle s, u \rangle = \int _{0}^{1} s(t) u(t)\,dt\), \(\forall s, u \in \mathcal{Y} \), and induced norm \(\|s\| = ( \int _{0}^{1} |s(t)|^{2}\,dt )^{1/2}\).
Assume an operator \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is specified by
where \(\mathcal{M} := \{ s \in L^{2}([0, 1]): \|s\| \leq 1 \}\) is the unit ball and
The bifunction is stated as follows:
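$$ \mathcal{F}(s, u) = \bigl\langle \mathcal{A}(s), u - s \bigr\rangle , \quad \forall s, u \in \mathcal{M}. $$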
Moreover, \(\mathcal{F}\) is clearly a Lipschitz-type continuous bifunction with constants \(c_{1} = c_{2} = 1\), and it is monotone [49]. The metric projection onto \(\mathcal{M}\) is evaluated as follows:
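$$ P_{\mathcal{M}}(s) = \textstyle\begin{cases} s & \text{if } \Vert s \Vert \leq 1, \\ \frac{s}{ \Vert s \Vert } & \text{otherwise}. \end{cases} $$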
The mapping \(\mathcal{T} : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) is defined as follows:
A simple calculation shows that \(\mathcal{T}\) is 0-demicontractive. The solution to the problem is \(s^{*}(t) = 0\). Figures 11–18 depict numerical observations for Example 5.2. The following control criteria are in use:
- (1) Algorithm 1 in [52] (briefly, EGM):
$$ \wp _{k} = \frac{1}{(5 k + 10)},\qquad \Im _{k} = \frac{1 - \Im}{4}, \qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
- (2) Algorithm 2 in [51] (briefly, I-EGM):
$$ \wp _{k} = \frac{1}{(5 k + 10)}, \qquad \Im _{k} = \frac{1 - \Im}{4}, \qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
- (3) Algorithm 1 in [18] (briefly, H-EGM):
$$ \wp _{k} = \frac{1}{(5 k + 10)}, \qquad \Im _{k} = \frac{1}{5},\qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
- (4) Algorithm 1 (briefly, M-EGM):
$$\begin{aligned}& \delta _{1} = 0.42,\qquad \ell = 0.67, \qquad \tau = 0.33,\qquad \varrho _{k} = \frac{10}{(1+k)^{2}}, \\& \Im _{k} = \frac{1 - \Im}{3},\qquad \wp _{k} = \frac{1}{(5 k + 10)},\qquad \mathrm{g}(s)=\frac{s}{3}; \end{aligned}$$
- (5) Algorithm 2 (briefly, IM-EGM):
$$\begin{aligned}& \delta _{1} = 0.42, \qquad \ell = 0.67,\qquad \tau = 0.33,\qquad \varrho _{k} = \frac{10}{(1+k)^{2}}, \\& \Im _{k} = \frac{1 - \Im}{3}, \qquad \wp _{k} = \frac{1}{(5 k + 10)}, \qquad \mathrm{g}(s)=\frac{s}{3},\qquad \chi _{k} = \frac{10}{(1 + k)^{2}}. \end{aligned}$$
6 Conclusion
The paper provides two explicit extragradient-like approaches for finding a common solution to an equilibrium problem involving a pseudomonotone and Lipschitz-type bifunction and a fixed-point problem for a ρ-demicontractive mapping in a real Hilbert space. A new step-size criterion that does not rely on information about the Lipschitz-type constants has been developed. Under certain standard conditions, strong convergence theorems for the proposed algorithms are established. The computational results confirm the numerical superiority of the suggested approaches over existing methods and show that the nonmonotone variable step-size rule further improves the performance of the iterative sequence in this setting.
Availability of data and materials
Not applicable.
References
Abass, H., Godwin, G., Narain, O., Darvish, V.: Inertial extragradient method for solving variational inequality and fixed-point problems of a Bregman demigeneralized mapping in a reflexive Banach spaces. Numer. Funct. Anal. Optim. 42(8), 933–960 (2022)
Antipin, A.: Equilibrium programming: proximal methods. Comput. Math. Math. Phys. 37(11), 1285–1296 (1997)
Attouch, F.A.H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Var. Anal. 9, 3–11 (2001)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Bianchi, M., Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90(1), 31–43 (1996)
Bigi, G., Castellani, M., Pappalardo, M., Passacantando, M.: Existence and solution methods for equilibria. Eur. J. Oper. Res. 227(1), 1–11 (2013)
Blum, E.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Dafermos, S.: Traffic equilibrium and variational inequalities. Transp. Sci. 14(1), 42–54 (1980)
Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, T.M.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70(3), 687–704 (2017)
Dong, Q.L., Huang, J.Z., Li, X.H., Cho, Y.J., Rassias, T.M.: MiKM: multi-step inertial Krasnosel’skiı̌–Mann algorithm and its applications. J. Glob. Optim. 73(4), 801–824 (2018)
Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2002)
Fan, J., Liu, L., Qin, X.: A subgradient extragradient algorithm with inertial effects for solving strongly pseudomonotone variational inequalities. Optimization, 1–17 (2019)
Fan, K.: A minimax inequality and applications. In: Shisha, O. (ed.) Inequalities III, pp. 103–113. Academic Press, New York (1972)
Flåm, S.D., Antipin, A.S.: Equilibrium programming using proximal-like algorithms. Math. Program. 78(1), 29–41 (1996)
Giannessi, F., Maugeri, A., Pardalos, P.M.: Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models, vol. 58. Springer, Berlin (2006)
Hieu, D.V.: An inertial-like proximal algorithm for equilibrium problems. Math. Methods Oper. Res. 88(3), 399–415 (2018)
Hieu, D.V.: Strong convergence of a new hybrid algorithm for fixed point problems and equilibrium problems. Math. Model. Anal. 24(1), 1–19 (2018)
Hieu, D.V., Cho, Y.J., Xiao, Y.B.: Modified extragradient algorithms for solving equilibrium problems. Optimization 67(11), 2003–2029 (2018)
Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73(1), 197–217 (2016)
Hung, P.G., Muu, L.D.: The Tikhonov regularization extended to equilibrium problems involving pseudomonotone bifunctions. Nonlinear Anal., Theory Methods Appl. 74(17), 6121–6129 (2011)
Iiduka, H., Yamada, I.: A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 58(2), 251–261 (2009)
Konnov, I.: Application of the proximal point method to nonmonotone equilibrium problems. J. Optim. Theory Appl. 119(2), 317–333 (2003)
Konnov, I.V.: On systems of variational inequalities. Russ. Math. Izv.-Vyss. Uchebnye Zaved. Mat. 41, 77–86 (1997)
Korpelevich, G.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Kumam, P., Katchang, P.: A viscosity of extragradient approximation method for finding equilibrium problems, variational inequalities and fixed-point problems for nonexpansive mappings. Nonlinear Anal. Hybrid Syst. 3(4), 475–486 (2009)
Li, M., Yao, Y.: Strong convergence of an iterative algorithm for λ-strictly pseudo-contractive mappings in Hilbert spaces. An. Ştiinţ. Univ. ‘Ovidius’ Constanţa 18(1), 219–228 (2010)
Maingé, P.-E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47(3), 1499–1515 (2008)
Maingé, P.-E., Moudafi, A.: Coupling viscosity methods with the extragradient algorithm for solving equilibrium problems. J. Nonlinear Convex Anal. 9(2), 283–294 (2008)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4(3), 506–506 (1953)
Mastroeni, G.: On auxiliary principle for equilibrium problems. In: Nonconvex Optimization and Its Applications, pp. 289–298. Springer, Berlin (2003)
Moudafi, A.: Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 15(1–2), 91–100 (1999)
Muu, L., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal., Theory Methods Appl. 18(12), 1159–1166 (1992)
Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: Hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a Hilbert space. J. Optim. Theory Appl. 160(3), 809–831 (2013)
Oliveira, P., Santos, P., Silva, A.: A Tikhonov-type regularization for equilibrium problems in Hilbert spaces. J. Math. Anal. Appl. 401(1), 336–342 (2013)
Polyak, B.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal., Theory Methods Appl. 75(2), 742–750 (2012)
Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Hebd. Séances Acad. Sci. 258(18), 4413–4416 (1964)
Strodiot, J.J., Vuong, P.T., Van Nguyen, T.T.: A class of shrinking projection extragradient methods for solving non-monotone equilibrium problems in Hilbert spaces. J. Glob. Optim. 64(1), 159–178 (2016)
Takahashi, S., Takahashi, W.: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331(1), 506–515 (2007)
Tiel, J.V.: Convex Analysis: An Introductory Text, 1st edn. Wiley, New York (1984)
Tran, D.Q., Dung, M.L., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749–776 (2008)
Tran, D.Q., Dung, M.L., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749–776 (2008)
ur Rehman, H., Kumam, P., Abubakar, A.B., Cho, Y.J.: The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 39(2), 100 (2020)
ur Rehman, H., Kumam, P., Argyros, I.K., Deebani, W., Kumam, W.: Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry 12(4), 503 (2020)
ur Rehman, H., Kumam, P., Cho, Y.J., Suleiman, Y.I., Kumam, W.: Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 36(1), 82–113 (2020)
ur Rehman, H., Kumam, P., Cho, Y.J., Yordsorn, P.: Weak convergence of explicit extragradient algorithms for solving equilibirum problems. J. Inequal. Appl. 2019(1), 282 (2019)
ur Rehman, H., Kumam, P., Kumam, W., Shutaywi, M., Jirakitpuwapat, W.: The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry 12(3), 463 (2020)
Van Hieu, D., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 66(1), 75–96 (2017)
Vinh, N.T., Muu, L.D.: Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 44(3), 639–663 (2019)
Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155(2), 605–627 (2012)
Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: On extragradient-viscosity methods for solving equilibrium and fixed point problems in a Hilbert space. Optimization 64(2), 429–451 (2013)
Yamada, I., Ogura, N.: Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 25(7–8), 619–655 (2005)
Acknowledgements
The fourth author would like to thank his mentor Professor Dr. Poom Kumam from King Mongkut’s University of Technology Thonburi, Thailand for his advice and comments to improve the quality and the results of this manuscript.
Funding
The first author was partially supported by Chiang Mai University under Fundamental Fund 2023. The third author was supported by University of Phayao and Thailand Science Research and Innovation grant no. FF66-UoE. The fourth author was financially supported by the Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation (Grant No. RGNS 65-168).
Author information
Contributions
BP, CK, NP (N. Pholasa), and NP (N. Pakkranang): Conceptualization, Methodology, Supervision, Writing and Editing manuscript preparation. CK, NP (N. Pholasa), and NP (N. Pakkranang): Investigation, Formal Analysis, Review and Validation. BP, NP (N. Pholasa), and NP (N. Pakkranang): Investigation, Funding Acquisition and Validation. All authors have read and approved the final version of this manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Panyanak, B., Khunpanuk, C., Pholasa, N. et al. Dynamical inertial extragradient techniques for solving equilibrium and fixed-point problems in real Hilbert spaces. J Inequal Appl 2023, 7 (2023). https://doi.org/10.1186/s13660-023-02912-6
MSC
- 47H09
- 47H05
- 47J20
- 49J15
- 65K15
Keywords
- Equilibrium problem
- Subgradient extragradient method
- Fixed-point problem
- Strong convergence theorems
- Demicontractive mapping