
Dynamical inertial extragradient techniques for solving equilibrium and fixed-point problems in real Hilbert spaces

Abstract

In this paper, we propose new methods for finding a common solution of a pseudomonotone, Lipschitz-type equilibrium problem and a fixed-point problem for a demicontractive mapping in real Hilbert spaces. A novel hybrid technique is used to solve this problem: the method is a combination of the extragradient method (a two-step proximal method) and a modified Mann-type iteration. Our methods use simple step-size rules that are updated by explicit computations at each iteration. A strong convergence theorem is established without prior knowledge of the Lipschitz-type constants of the bifunction. The numerical behavior of the proposed algorithms is illustrated and compared with previously known algorithms in several numerical experiments.

1 Introduction

The equilibrium problem (EP) is a broad framework that includes many mathematical models as special cases, such as variational inequality problems, optimization problems, fixed-point problems, complementarity problems, Nash-equilibrium problems, and inverse optimization problems (for more details see [7, 8, 12, 33]). This equilibrium problem can be expressed mathematically as follows.

Suppose that a bifunction \(\mathcal{L} : \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}\) satisfies \(\mathcal{L} (\aleph _{1}, \aleph _{1}) = 0\) for all \(\aleph _{1} \in \mathcal{M}\). The equilibrium problem for a given bifunction \(\mathcal{L}\) on \(\mathcal{M}\) reads: Find \(s^{*} \in \mathcal{M}\) such that

$$ \mathcal{L} \bigl(s^{*}, \aleph _{1}\bigr) \geq 0, \quad \forall \aleph _{1} \in \mathcal{M}, $$
(1.1)

where \(\mathcal{Y}\) represents a real Hilbert space and \(\mathcal{M}\) represents a nonempty, closed, and convex subset of \(\mathcal{Y}\). This study focuses on iterative strategies for solving the equilibrium problem. The solution set of problem (1.1) is denoted by \(EP(\mathcal{M}, \mathcal{L})\). Problem (1.1) is widely known as the Ky Fan inequality and was studied in [14]. Many authors have focused on this topic in recent years; see, for example, [10, 11, 13, 19, 21, 23, 32, 35, 50]. This interest stems from the fact that the equilibrium problem neatly unifies all of the specific problems mentioned above. Many authors have established and generalized results on the existence and properties of solutions of the equilibrium problem (for more details see [2, 7, 14]). Owing to the significance of the equilibrium problem and its implications in both pure and applied sciences, numerous researchers have studied it extensively in recent years [7, 9, 16]. Let us recall the definition of a Lipschitz-type continuous bifunction. A bifunction \(\mathcal{L}\) is said to be Lipschitz-type continuous [31] on \(\mathcal{M}\) if there exist two constants \(c_{1}, c_{2} > 0\) such that

$$ \mathcal{L} (\aleph _{1}, \aleph _{3}) \leq \mathcal{L} (\aleph _{1}, \aleph _{2}) + \mathcal{L} (\aleph _{2}, \aleph _{3}) + c_{1} \Vert \aleph _{1} - \aleph _{2} \Vert ^{2} + c_{2} \Vert \aleph _{2} - \aleph _{3} \Vert ^{2}, \quad \forall \aleph _{1}, \aleph _{2}, \aleph _{3} \in \mathcal{M}. $$

Flam [15] and Tran et al. [42] generated two sequences \(\{s_{k}\}\) and \(\{u_{k}\}\) in Euclidean spaces in the following manner:

$$ \textstyle\begin{cases} s_{1} \in \mathcal{M}, \\ u_{k} = \mathop {\arg \min}_{u \in \mathcal{M}} \{ \delta \mathcal{L}(s_{k}, u) + \frac{1}{2} \Vert s_{k} - u \Vert ^{2} \}, \\ s_{k+1} = \mathop {\arg \min}_{u \in \mathcal{M}} \{ \delta \mathcal{L}(u_{k}, u) + \frac{1}{2} \Vert s_{k} - u \Vert ^{2} \}, \end{cases} $$
(1.2)

where \(0 < \delta < \min \{\frac{1}{2 c_{1}}, \frac{1}{2 c_{2}} \}\). Due to Korpelevich’s earlier work on saddle-point problems [25], this approach is often referred to as the two-step extragradient method. The method generates a weakly convergent sequence and uses a fixed step size that depends entirely on the Lipschitz-type constants of the bifunction. Because these constants are typically unknown or difficult to estimate, this may limit the method’s applicability. Inertial-type procedures, on the other hand, are two-step iterative procedures in which the next iterate is derived from the two preceding iterates (see [4, 36] for further details). An inertial extrapolation term is usually added to increase the numerical efficiency of the iterative sequence. Numerical studies show that inertial terms improve performance in terms of execution time and total number of iterations. Several inertial-type techniques have recently been explored for various classes of equilibrium problems [3, 5, 17, 19, 47].
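For the special bifunction \(\mathcal{L}(\aleph _{1}, \aleph _{2}) = \langle F(\aleph _{1}), \aleph _{2} - \aleph _{1} \rangle \) arising from a variational inequality, each arg-min step in (1.2) reduces to a metric projection and the scheme becomes the classical extragradient method. The following minimal Python sketch runs this special case on a hypothetical test problem; the operator, feasible set, and step size are illustrative choices, not taken from the paper.

```python
import numpy as np

# The two-step extragradient method (1.2) for the special bifunction
# L(x, y) = <F(x), y - x> of a variational inequality: each arg-min step
# reduces to a metric projection,
#   u_k = P_M(s_k - delta F(s_k)),   s_{k+1} = P_M(s_k - delta F(u_k)).
# Hypothetical test problem (illustrative, not from the paper):
# F(x) = A x with a skew matrix A (monotone, Lipschitz), M = [-1, 1]^2,
# unique solution s* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
proj = lambda x: np.clip(x, -1.0, 1.0)    # projection onto the box M

delta = 0.4                                # 0 < delta < min{1/(2c1), 1/(2c2)}
s = np.array([1.0, -0.5])
for _ in range(200):
    u = proj(s - delta * F(s))             # first proximal-type step
    s = proj(s - delta * F(u))             # extragradient correction step

print(np.linalg.norm(s))  # approaches 0
```

Note that for this skew operator the simple projected-gradient iteration would not converge; the extragradient correction step is what makes the scheme work.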

In this study, we are interested in finding a common solution to an equilibrium problem and a fixed-point problem in a Hilbert space [1, 20, 26, 28, 34, 40, 44–46, 48]. The motivation for studying such a common solution problem comes from its potential applicability to mathematical models whose constraints can be stated as fixed-point problems. This is especially true in practical scenarios such as signal processing, network-resource allocation, and image recovery; see, for example, [22, 28, 29].

Let \(\mathcal{T} : \mathcal{Y} \rightarrow \mathcal{Y}\) be a mapping. Then, the fixed-point problem (FPP) for the mapping \(\mathcal{T}\) is to determine \(s^{*} \in \mathcal{Y}\) such that

$$ \mathcal{T} \bigl(s^{*}\bigr) = s^{*}. $$
(1.3)

The solution set of problem (1.3) is known as the fixed-point set of \(\mathcal{T}\) and is denoted by \(\operatorname{Fix}(\mathcal{T})\). The majority of algorithms for solving problem (1.3) derive from the basic Mann iteration: starting from \(s_{1} \in \mathcal{Y}\), generate a sequence \(\{s_{k}\}\) for all \(k \geq 1\) by

$$ s_{k+1} = \wp _{k} s_{k} + (1 - \wp _{k}) \mathcal{T} s_{k}, $$
(1.4)

where the sequence \(\{\wp _{k}\}\) must satisfy certain conditions in order to achieve weak convergence. The Halpern iteration is another well-known iterative scheme, which achieves strong convergence in infinite-dimensional Hilbert spaces. It can be expressed as follows:

$$ s_{k+1} = \wp _{k} s_{1} + (1 - \wp _{k}) \mathcal{T} s_{k}, $$
(1.5)

where \(s_{1} \in \mathcal{Y}\) and the sequence \(\{\wp _{k}\} \subset (0, 1)\) is slowly diminishing and nonsummable, i.e.,

$$ \wp _{k} \rightarrow 0, \quad \text{and} \quad \sum _{k=1}^{\infty} \wp _{k} = + \infty . $$
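To contrast the two schemes, the following sketch runs the Mann iteration (1.4) and the Halpern iteration (1.5) on a hypothetical nonexpansive mapping; the mapping, starting point, and parameter choices are illustrative only.

```python
import numpy as np

# Mann (1.4) vs. Halpern (1.5) iterations on a hypothetical nonexpansive
# mapping: a rotation by 60 degrees on R^2, whose unique fixed point is 0.
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x

s1 = np.array([1.0, 1.0])

# Mann iteration: s_{k+1} = p_k s_k + (1 - p_k) T(s_k), here with p_k = 1/2.
s = s1.copy()
for k in range(1, 300):
    s = 0.5 * s + 0.5 * T(s)
mann = s

# Halpern iteration: s_{k+1} = p_k s_1 + (1 - p_k) T(s_k), with p_k = 1/(k+1),
# which satisfies p_k -> 0 and sum p_k = +infinity.
s = s1.copy()
for k in range(1, 300):
    p = 1.0 / (k + 1)
    s = p * s1 + (1 - p) * T(s)
halpern = s

print(np.linalg.norm(mann), np.linalg.norm(halpern))  # both approach 0
```

In finite dimensions both iterates tend to the fixed point; the distinction between weak and strong convergence only becomes visible in infinite-dimensional spaces.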

In addition to the Halpern iteration, there is a generic variant, namely, the Mann-type algorithm [30], in which the cost mapping \(\mathcal{T}\) is combined with such a contraction mapping in the iterates. Furthermore, the hybrid steepest-descent algorithm introduced in [53] is another strategy that yields strong convergence.

Vuong et al. [52] introduced a new numerical algorithm for solving an equilibrium problem together with a fixed-point problem for a demicontractive mapping by combining the extragradient method [15, 43] with the hybrid steepest-descent technique of [53]. The authors proved strong convergence of the proposed algorithm under the assumptions that the bifunction is pseudomonotone and satisfies the Lipschitz-type condition [31]. As noted in [31], this technique has the benefit of being numerically implementable with standard optimization tools. The extragradient Mann-type approach described in [31] also allows us to remove several strong assumptions used in establishing the convergence of previously known extragradient algorithms. Other strongly convergent methods for finding an element \(s^{*} \in \operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L})\), integrating the extragradient approach with the hybrid or shrinking projection technique, may be found in [20, 34, 39].

In this study, inspired and motivated by the findings of Takahashi et al. [40], Maingé [29], and Vuong et al. in [52] and based on the work of [27], we present a new strongly convergent algorithm as a combination of the extragradient method (two-step proximal-like method) and the Mann-type iteration [30] for approximating a common solution of a pseudomonotone and Lipschitz-type equilibrium problem and a fixed-point problem for a demicontractive mapping.

As indicated above, the result in this study remains valid for the more general class of demicontractive mappings when examining a relaxation of a demicontractive mapping. The typical Mann iteration produces only weak convergence; the approach used in this study, which employs a comparable Mann-type iteration, produces strong convergence. This matters especially in infinite-dimensional Hilbert spaces, where strong norm convergence is more valuable than weak convergence. Several numerical experiments in finite- and infinite-dimensional Hilbert spaces demonstrate that the novel strategy is promising and offers competitive advantages over previous approaches.

The paper is organized as follows. Section 2 presents some basic results. Section 3 introduces the new methods and establishes their convergence analysis, while Sect. 4 describes some applications. Finally, Sect. 5 provides numerical results that demonstrate the practical utility of the proposed techniques.

2 Preliminaries

Let \(\mathcal{M}\) be a nonempty, closed, and convex subset of a real Hilbert space \(\mathcal{Y}\). Weak convergence is denoted by \(s_{k} \rightharpoonup s\) and strong convergence by \(s_{k} \rightarrow s\). For all \(\aleph _{1}, \aleph _{2} \in \mathcal{Y}\), the following identities hold:

  1. (1)

    \(\|\aleph _{1} + \aleph _{2}\|^{2} = \|\aleph _{1}\|^{2} + 2 \langle \aleph _{1}, \aleph _{2} \rangle + \|\aleph _{2}\|^{2}\);

  2. (2)

    \(\|\aleph _{1} + \aleph _{2}\|^{2} \leq \|\aleph _{1}\|^{2} + 2 \langle \aleph _{2}, \aleph _{1} + \aleph _{2} \rangle \);

  3. (3)

    \(\|a \aleph _{1} + (1 - a) \aleph _{2} \|^{2} = a \|\aleph _{1}\|^{2} + (1 - a)\| \aleph _{2} \|^{2} - a(1 - a)\|\aleph _{1} - \aleph _{2}\|^{2}\).

A metric projection \(P_{\mathcal{M}}(\aleph _{1})\) of an element \(\aleph _{1} \in \mathcal{Y}\) is defined by:

$$ P_{\mathcal{M}}(\aleph _{1}) = \mathop {\arg \min}_{\aleph _{2} \in \mathcal{M}} \Vert \aleph _{1} - \aleph _{2} \Vert . $$

It is well known that \(P_{\mathcal{M}}\) is nonexpansive and satisfies the following useful properties:

  1. (1)

    \(\langle \aleph _{1} - P_{\mathcal{M}}(\aleph _{1}), \aleph _{2} - P_{\mathcal{M}}(\aleph _{1}) \rangle \leq 0\), \(\forall \aleph _{2} \in \mathcal{M}\);

  2. (2)

    \(\|P_{\mathcal{M}}(\aleph _{1}) - P_{\mathcal{M}}(\aleph _{2})\|^{2} \leq \langle P_{\mathcal{M}}(\aleph _{1}) - P_{\mathcal{M}}( \aleph _{2}), \aleph _{1} - \aleph _{2} \rangle \), \(\forall \aleph _{1}, \aleph _{2} \in \mathcal{Y}\).
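The projection onto a closed ball admits a simple closed form, which makes property (1) easy to check numerically. The following sketch is illustrative (the set, radius, and sample points are assumptions, not from the paper).

```python
import numpy as np

# Metric projection onto the closed ball M = {x : ||x|| <= r}, which admits
# the closed form below; we verify the variational characterization (1)
# numerically on random sample points of M.
r = 1.0
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=3)              # a point outside the ball
px = proj_ball(x)

# Property (1): <x - P(x), y - P(x)> <= 0 for every y in M.
for _ in range(1000):
    y = rng.normal(size=3)
    y = y / max(1.0, np.linalg.norm(y))   # a sample point of M
    assert np.dot(x - px, y - px) <= 1e-10
```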

Definition 2.1

Assume that \(\mathcal{T} : \mathcal{Y} \rightarrow \mathcal{Y}\) is a nonlinear mapping and \(\operatorname{Fix}(\mathcal{T}) \neq \emptyset \). Then, \(I - \mathcal{T}\) is called demiclosed at zero if, for each \(\{s_{k}\}\) in \(\mathcal{Y}\), the following conclusion remains true:

$$ s_{k} \rightharpoonup s \quad \text{and} \quad (I - \mathcal{T}) s_{k} \rightarrow 0 \quad \Rightarrow\quad s \in \operatorname{Fix}( \mathcal{T}). $$

Lemma 2.2

([37])

Suppose that \(\{g_{k}\} \subset [0, +\infty )\), \(\{h_{k}\} \subset (0, 1)\), and \(\{r_{k}\} \subset \mathbb{R}\) are sequences satisfying the following basic requirements:

$$ g_{k+1} \leq (1 - h_{k}) g_{k} + h_{k} r_{k}, \quad \forall k \in \mathbb{N} \quad \textit{and} \quad \sum _{k=1}^{+\infty} h_{k} = + \infty . $$

If \(\limsup_{j \rightarrow +\infty} r_{k_{j}} \leq 0\) for every subsequence \(\{g_{k_{j}}\}\) of \(\{g_{k}\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} ( g_{k_{j}+1} - g_{k_{j}} ) \geq 0. $$

Then, \(\lim_{k \rightarrow +\infty} g_{k} = 0\).
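To make Lemma 2.2 concrete, the following sketch runs the recursion with hypothetical sequences satisfying its hypotheses and observes that \(g_{k}\) decays to zero; the particular choices of \(h_{k}\) and \(r_{k}\) are illustrative only.

```python
# Numerical illustration of Lemma 2.2 (hypothetical sequences, not part of
# the proof): h_k = 1/(k+1) with sum h_k = +infinity, and r_k = 1/k, so that
# limsup r_k = 0 <= 0.  Running the recursion with equality,
#   g_{k+1} = (1 - h_k) g_k + h_k r_k,
# drives g_k to 0.
g = 5.0
for k in range(1, 100001):
    h = 1.0 / (k + 1)
    r = 1.0 / k
    g = (1 - h) * g + h * r

print(g)  # a small value, illustrating g_k -> 0
```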

Definition 2.3

Let \(\mathcal{M}\) be a subset of a real Hilbert space \(\mathcal{Y}\) and \(\digamma : \mathcal{M} \rightarrow \mathbb{R}\) a given convex function.

  1. (1)

    The normal cone at \(\aleph _{1} \in \mathcal{M}\) is defined by

    $$ N_{\mathcal{M}}(\aleph _{1}) = \bigl\{ \aleph _{3} \in \mathcal{Y} : \langle \aleph _{3}, \aleph _{2} - \aleph _{1} \rangle \leq 0, \forall \aleph _{2} \in \mathcal{M} \bigr\} . $$
    (2.1)
  2. (2)

    The subdifferential of a function Ϝ at \(\aleph _{1} \in \mathcal{M}\) is defined by

    $$ \partial \digamma (\aleph _{1}) = \bigl\{ \aleph _{3} \in \mathcal{Y} : \digamma (\aleph _{2}) - \digamma (\aleph _{1}) \geq \langle \aleph _{3}, \aleph _{2} - \aleph _{1} \rangle , \forall \aleph _{2} \in \mathcal{M} \bigr\} . $$
    (2.2)
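Definitions (2.1) and (2.2) can be checked directly on a one-dimensional example. The function, point, and subgradient below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustration of definitions (2.1)-(2.2) on a 1-D example: F(x) = |x|
# over M = [-1, 1].  A subgradient g of F at x satisfies
# F(y) - F(x) >= g (y - x) for all y in M; we check numerically that
# g = 0.5 is a subgradient at x = 0 (indeed, the subdifferential of |.|
# at 0 is the whole interval [-1, 1]).
F = abs
x, g = 0.0, 0.5
ys = np.linspace(-1.0, 1.0, 201)
assert all(F(y) - F(x) >= g * (y - x) - 1e-12 for y in ys)

# Normal cone (2.1) at the boundary point 1 of M = [-1, 1]: every n >= 0
# satisfies <n, y - 1> <= 0 for all y in M.
n = 2.0
assert all(n * (y - 1.0) <= 1e-12 for y in ys)
```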

Lemma 2.4

([41])

Let \(\digamma : \mathcal{M} \rightarrow \mathbb{R}\) be a subdifferentiable and lower semicontinuous function on \(\mathcal{M}\). An element \(\aleph _{1} \in \mathcal{M}\) is a minimizer of Ϝ if and only if

$$ 0 \in \partial \digamma (\aleph _{1}) + N_{\mathcal{M}}(\aleph _{1}), $$

where \(\partial \digamma (\aleph _{1})\) denotes the subdifferential of Ϝ at vector \(\aleph _{1} \in \mathcal{M}\) and \(N_{\mathcal{M}}(\aleph _{1})\) is the normal cone of \(\mathcal{M}\) at vector \(\aleph _{1}\).

3 Main results

In this section, we examine in detail the convergence of several inertial extragradient algorithms for solving equilibrium and fixed-point problems, and point out the distinct characteristics of each algorithm. To establish strong convergence, the following conditions must be met:

(\(\mathcal{L}\)1) The solution set \(\operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L}) \neq \emptyset \);

(\(\mathcal{L}\)2) The bifunction \(\mathcal{L}\) is pseudomonotone [6, 8], i.e.,

$$ \mathcal{L} (\aleph _{1}, \aleph _{2}) \geq 0 \quad \Longrightarrow\quad \mathcal{L} (\aleph _{2}, \aleph _{1}) \leq 0, \quad \forall \aleph _{1}, \aleph _{2} \in \mathcal{M}; $$

(\(\mathcal{L}\)3) The bifunction \(\mathcal{L}\) is Lipschitz-type continuous [31] on \(\mathcal{M}\), i.e., there exist two constants \(c_{1}, c_{2} > 0\) such that

$$ \mathcal{L} (\aleph _{1}, \aleph _{3}) \leq \mathcal{L} (\aleph _{1}, \aleph _{2}) + \mathcal{L} (\aleph _{2}, \aleph _{3}) + c_{1} \Vert \aleph _{1} - \aleph _{2} \Vert ^{2} + c_{2} \Vert \aleph _{2} - \aleph _{3} \Vert ^{2}, \quad \forall \aleph _{1}, \aleph _{2}, \aleph _{3} \in \mathcal{M}; $$

(\(\mathcal{L}\)4) For any sequence \(\{\aleph _{k}\} \subset \mathcal{M}\) with \(\aleph _{k} \rightharpoonup \aleph ^{*}\), the following inequality holds:

$$ \limsup_{k \rightarrow +\infty} \mathcal{L} (\aleph _{k}, \aleph _{1}) \leq \mathcal{L} \bigl(\aleph ^{*}, \aleph _{1}\bigr), \quad \forall \aleph _{1} \in \mathcal{M}; $$

(\(\mathcal{L}\)5) The mapping \(\mathcal{T} : \mathcal{Y} \rightarrow \mathcal{Y}\) is ρ-demicontractive with \((I - \mathcal{T})\) demiclosed at zero, i.e., there exists a constant \(0 \leq \rho < 1\) such that

$$ \bigl\Vert \mathcal{T}(\aleph _{1}) - \aleph _{2} \bigr\Vert ^{2} \leq \Vert \aleph _{1} - \aleph _{2} \Vert ^{2} + \rho \bigl\Vert (I - \mathcal{T}) (\aleph _{1}) \bigr\Vert ^{2}, \quad \forall \aleph _{2} \in \operatorname{Fix}(\mathcal{T}), \aleph _{1} \in \mathcal{Y}; $$

or equivalently

$$ \bigl\langle \mathcal{T}(\aleph _{1}) - \aleph _{1}, \aleph _{1} - \aleph _{2} \bigr\rangle \leq \frac{\rho - 1}{2} \bigl\Vert \aleph _{1} - \mathcal{T}(\aleph _{1}) \bigr\Vert ^{2}, \quad \forall \aleph _{2} \in \operatorname{Fix}( \mathcal{T}), \aleph _{1} \in \mathcal{Y}. $$
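A standard example from the literature on demicontractive mappings (assumed here for illustration, not taken from this paper) is \(\mathcal{T}(x) = -2x\): it is not nonexpansive, yet it satisfies the inequality in (\(\mathcal{L}\)5) with \(\rho = 1/3\). The sketch below verifies this numerically.

```python
import numpy as np

# A standard example of a demicontractive mapping (assumed here, not taken
# from the paper): T(x) = -2x on R^n.  T is not nonexpansive (it doubles
# distances), yet it is rho-demicontractive with rho = 1/3 and Fix(T) = {0}:
#   ||T(x) - 0||^2 = 4||x||^2 = ||x||^2 + (1/3) ||x - T(x)||^2.
rho = 1.0 / 3.0
T = lambda x: -2.0 * x

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=4)
    lhs = np.linalg.norm(T(x)) ** 2                  # s* = 0 is the fixed point
    rhs = np.linalg.norm(x) ** 2 + rho * np.linalg.norm(x - T(x)) ** 2
    assert lhs <= rhs + 1e-9
```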

The first algorithm, described below, finds a common solution to an equilibrium problem and a fixed-point problem. The main advantage of this method is that it employs a monotone step-size rule that is independent of the Lipschitz-type constants. The algorithm employs a Mann-type iteration to solve the fixed-point problem and the two-step extragradient approach to solve the equilibrium problem.

[Algorithm 1]
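The listing of Algorithm 1 is rendered as a figure in the original article. Based on the steps that appear in the proofs below (the inertial point \(\varkappa _{k} = s_{k} + \ell _{k}(s_{k} - s_{k-1})\), the two proximal steps, the Mann-type update \(s_{k+1} = (1 - \wp _{k} - \Im _{k}) v_{k} + \wp _{k} \mathcal{T}(v_{k})\), and the step-size rule of Lemma 3.1), a hedged Python reconstruction on a hypothetical variational-inequality instance might look as follows; the half-space step is simplified to a projection onto \(\mathcal{M}\), and every parameter choice is illustrative.

```python
import numpy as np

# A hedged sketch of Algorithm 1, reconstructed from the steps used in the
# proofs below.  For the variational-inequality bifunction
# L(x, y) = <F(x), y - x> both arg-min steps reduce to projections, and the
# half-space step is simplified here to a projection onto M.
# The test problem and all parameter choices are illustrative.

F = lambda x: np.array([x[1], -x[0]])      # monotone, Lipschitz operator
T = lambda x: 0.5 * x                      # 0-demicontractive, Fix(T) = {0}
proj = lambda x: np.clip(x, -1.0, 1.0)     # P_M for the box M = [-1, 1]^2
L = lambda x, y: F(x) @ (y - x)            # the equilibrium bifunction

tau, delta = 0.5, 1.0
s_prev, s = np.array([1.0, -0.8]), np.array([0.9, 0.3])
for k in range(1, 500):
    i_k, p_k = 1.0 / (k + 1), 0.5          # anchor and Mann parameters
    eps = i_k ** 2                          # varrho_k with varrho_k / i_k -> 0
    l_k = min(0.3, eps / (np.linalg.norm(s - s_prev) + 1e-16))
    xk = s + l_k * (s - s_prev)             # inertial point varkappa_k
    u = proj(xk - delta * F(xk))            # first extragradient step
    v = proj(xk - delta * F(u))             # second extragradient step
    den = 2.0 * (L(xk, v) - L(xk, u) - L(u, v))
    if den > 0:                             # monotone step-size update
        num = tau * (np.linalg.norm(xk - u) ** 2 + np.linalg.norm(v - u) ** 2)
        delta = min(delta, num / den)
    s_prev, s = s, (1 - p_k - i_k) * v + p_k * T(v)

print(np.linalg.norm(s))  # approaches the common solution s* = 0
```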

The following lemma demonstrates that the monotone step-size sequence generated by equation (3.2) is well defined and bounded.

Lemma 3.1

The sequence \(\{\delta _{k} \}\) is convergent to δ and satisfies \(\min \{\frac{\tau}{\max \{2c_{1}, 2c_{2}\}}, \delta _{1} \} \leq \delta _{k} \leq \delta _{1}\).

Proof

Suppose that \(\mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) > 0\). Then, by the Lipschitz-type continuity of \(\mathcal{L}\), we have

$$\begin{aligned} \frac{\tau ( \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \Vert v_{k} - u_{k} \Vert ^{2})}{2 [\mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k})]} &\geq \frac{\tau ( \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \Vert v_{k} - u_{k} \Vert ^{2})}{2 [c_{1} \Vert \varkappa _{k} - u_{k} \Vert ^{2} + c_{2} \Vert v_{k} - u_{k} \Vert ^{2}]} \\ &\geq \frac{\tau }{2 \max \{c_{1}, c_{2}\}}. \end{aligned}$$
(3.3)

Hence, \(\{\delta _{k}\}\) is nonincreasing and bounded below by \(\min \{\frac{\tau}{\max \{2c_{1}, 2c_{2}\}}, \delta _{1} \}\), so \(\lim_{k \rightarrow +\infty} \delta _{k} = \delta \) exists. This completes the proof. □

The second method is described below to find a common solution to an equilibrium and a fixed-point problem. The primary benefit of this method is that it employs a nonmonotone step-size rule that is independent of Lipschitz constants. The algorithm solves a fixed-point problem using Mann-type iteration and an equilibrium problem with the two-step extragradient approach.

[Algorithm 2]

The following lemma establishes that the nonmonotone step-size sequence generated by equation (3.5) is well defined and bounded. We give a proof that fully establishes the boundedness and convergence of the step-size sequence.

Lemma 3.2

The sequence \(\{\delta _{k} \}\) is convergent to δ and satisfies \(\min \{\frac{\tau }{\max \{2c_{1}, 2c_{2}\}}, \delta _{1} \} \leq \delta _{k} \leq \delta _{1} + P\), where \(P = \sum_{k=1}^{+\infty} \chi _{k}\).

Proof

Suppose that \(\mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) > 0\). Then, by the Lipschitz-type continuity of \(\mathcal{L}\), we have

$$\begin{aligned} \frac{\tau ( \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \Vert v_{k} - u_{k} \Vert ^{2})}{2 [\mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k})]} &\geq \frac{\tau ( \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \Vert v_{k} - u_{k} \Vert ^{2})}{2 [c_{1} \Vert \varkappa _{k} - u_{k} \Vert ^{2} + c_{2} \Vert v_{k} - u_{k} \Vert ^{2}]} \\ &\geq \frac{\tau }{2 \max \{c_{1}, c_{2}\}}. \end{aligned}$$
(3.6)

By the definition of \(\delta _{k+1}\) and mathematical induction, we obtain

$$ \min \biggl\{ \frac{\tau }{\max \{2c_{1}, 2c_{2}\}}, \delta _{1} \biggr\} \leq \delta _{k} \leq \delta _{1} + P. $$

Set \([\delta _{k+1} - \delta _{k}]^{+} = \max \{0, \delta _{k+1} - \delta _{k} \}\) and

$$ [\delta _{k+1} - \delta _{k}]^{-} = \max \bigl\{ 0, -(\delta _{k+1} - \delta _{k}) \bigr\} . $$

By the definition of \(\delta _{k+1}\), we have

$$ \sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{+} = \sum _{k=1}^{+ \infty} \max \{0, \delta _{k+1} - \delta _{k} \} \leq P < + \infty . $$
(3.7)

That is, the series \(\sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{+}\) is convergent. It remains to prove the convergence of \(\sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{-}\). Suppose, to the contrary, that \(\sum_{k=1}^{+\infty} (\delta _{k+1} - \delta _{k})^{-} = +\infty \). Due to the fact that

$$ \delta _{k+1} - \delta _{k} = (\delta _{k+1} - \delta _{k})^{+} - ( \delta _{k+1} - \delta _{k})^{-}, $$

we obtain

$$ \delta _{k+1} - \delta _{1} = \sum _{j=1}^{k} (\delta _{j+1} - \delta _{j}) = \sum_{j=1}^{k} (\delta _{j+1} - \delta _{j})^{+} - \sum _{j=1}^{k} (\delta _{j+1} - \delta _{j})^{-}. $$
(3.8)

Letting \(k \rightarrow +\infty \) in (3.8), we would have \(\delta _{k} \rightarrow -\infty \) as \(k \rightarrow +\infty \), which is a contradiction. Consequently, both series \(\sum_{j=1}^{+\infty} (\delta _{j+1} - \delta _{j})^{+}\) and \(\sum_{j=1}^{+\infty} (\delta _{j+1} - \delta _{j})^{-}\) converge, and letting \(k \rightarrow +\infty \) in (3.8), we obtain \(\lim_{k \rightarrow +\infty} \delta _{k} = \delta \). This concludes the proof. □
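The exact rule (3.5) appears in the algorithm figure; a typical nonmonotone update consistent with Lemma 3.2 is assumed in the sketch below, which checks the bounds of the lemma on synthetic data. All inputs are illustrative.

```python
import numpy as np

# Assumed nonmonotone step-size update, consistent with Lemma 3.2:
#   delta_{k+1} = min{ delta_k + chi_k,
#                      tau (||x_k-u_k||^2 + ||v_k-u_k||^2) / (2 D_k) } if D_k > 0,
#   delta_{k+1} = delta_k + chi_k                                      otherwise,
# where D_k = L(x_k, v_k) - L(x_k, u_k) - L(u_k, v_k) and sum chi_k = P < inf.
# We check the bounds of Lemma 3.2 on synthetic data with c1 = c2 = 1.
rng = np.random.default_rng(2)
tau, c1, c2 = 0.9, 1.0, 1.0
chi = lambda k: 1.0 / (k + 1) ** 2               # summable perturbations
P = np.pi ** 2 / 6 - 1.0                          # sum_{k>=1} chi_k

delta1 = delta = 0.7
for k in range(1, 5000):
    a, b = rng.random(), rng.random()             # ||x-u||^2 and ||v-u||^2
    D = rng.uniform(-1.0, 1.0) * (c1 * a + c2 * b)  # Lipschitz-type bound on D_k
    cand = delta + chi(k)
    if D > 0:
        cand = min(cand, tau * (a + b) / (2.0 * D))
    delta = cand
    assert min(tau / max(2 * c1, 2 * c2), delta1) - 1e-12 <= delta <= delta1 + P + 1e-12
```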

The following lemma is used to verify the boundedness of the iterative sequence. It is a critical step in proving the boundedness of the sequence and the strong convergence of the proposed sequence to a common solution.

Lemma 3.3

Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 1 under the conditions (\(\mathcal{L}\)1)–(\(\mathcal{L}\)5). Then, we have

$$ \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} \leq \bigl\Vert \varkappa _{k} - s^{*} \bigr\Vert ^{2} - \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert \varkappa _{k} - u_{k} \Vert ^{2} - \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert v_{k} - u_{k} \Vert ^{2}. $$

Proof

By the use of Lemma 2.4, we have

$$ 0 \in \partial _{2} \biggl\{ \delta _{k} \mathcal{L}(u_{k}, \cdot ) + \frac{1}{2} \Vert \varkappa _{k} - \cdot \Vert ^{2} \biggr\} (v_{k}) + N_{ \mathcal{Y}_{k}}(v_{k}). $$

Thus, there exist \(\omega \in \partial \mathcal{L}(u_{k}, v_{k})\) and \(\overline{\omega} \in N_{\mathcal{Y}_{k}}(v_{k})\) such that

$$ \delta _{k} \omega + v_{k} - \varkappa _{k} + \overline{\omega} = 0. $$

This implies that

$$ \langle \varkappa _{k} - v_{k}, u - v_{k} \rangle = \delta _{k} \langle \omega , u - v_{k} \rangle + \langle \overline{\omega}, u - v_{k} \rangle , \quad \forall u \in \mathcal{Y}_{k}. $$

Since \(\overline{\omega} \in N_{\mathcal{Y}_{k}}(v_{k})\), we have \(\langle \overline{\omega}, u - v_{k} \rangle \leq 0\) for all \(u \in \mathcal{Y}_{k}\). As a result, we obtain

$$ \langle \varkappa _{k} - v_{k}, u - v_{k} \rangle \leq \delta _{k} \langle \omega , u - v_{k} \rangle , \quad \forall u \in \mathcal{Y}_{k}. $$
(3.9)

Furthermore, since \(\omega \in \partial \mathcal{L}(u_{k}, v_{k})\), the definition of the subdifferential gives

$$ \mathcal{L}(u_{k}, u) - \mathcal{L}(u_{k}, v_{k}) \geq \langle \omega , u - v_{k} \rangle , \quad \forall u \in \mathcal{Y}. $$
(3.10)

Combining the formulas (3.9) and (3.10), we obtain

$$ \delta _{k} \mathcal{L}(u_{k}, u) - \delta _{k} \mathcal{L}(u_{k}, v_{k}) \geq \langle \varkappa _{k} - v_{k}, u - v_{k} \rangle , \quad \forall u \in \mathcal{Y}_{k}. $$
(3.11)

By the definition of the half-space \(\mathcal{Y}_{k}\), we have

$$ \delta _{k} \langle \omega _{k}, v_{k} - u_{k} \rangle \geq \langle \varkappa _{k} - u_{k}, v_{k} - u_{k} \rangle . $$
(3.12)

Since \(\omega _{k} \in \partial \mathcal{L}(\varkappa _{k}, u_{k})\), we have

$$ \mathcal{L}(\varkappa _{k}, u) - \mathcal{L}(\varkappa _{k}, u_{k}) \geq \langle \omega _{k}, u - u_{k} \rangle , \quad \forall u \in \mathcal{Y}. $$

By inserting \(u = v_{k}\), we derive

$$ \mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) \geq \langle \omega _{k}, v_{k} - u_{k} \rangle . $$
(3.13)

From (3.12) and (3.13), we derive

$$ \delta _{k} \bigl\{ \mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}( \varkappa _{k}, u_{k}) \bigr\} \geq \langle \varkappa _{k} - u_{k}, v_{k} - u_{k} \rangle . $$
(3.14)

By inserting \(u = s^{*}\) into formula (3.11), we obtain

$$ \delta _{k} \mathcal{L}\bigl(u_{k}, s^{*}\bigr) - \delta _{k} \mathcal{L}(u_{k}, v_{k}) \geq \bigl\langle \varkappa _{k} - v_{k}, s^{*} - v_{k} \bigr\rangle . $$
(3.15)

Given \(s^{*} \in EP(\mathcal{M}, \mathcal{L})\), we conclude that \(\mathcal{L}(s^{*}, u_{k}) \geq 0\). By the pseudomonotonicity of the bifunction \(\mathcal{L}\), we derive \(\mathcal{L}(u_{k}, s^{*}) \leq 0\). Hence, equation (3.15) yields

$$ \bigl\langle \varkappa _{k} - v_{k}, v_{k} - s^{*} \bigr\rangle \geq \delta _{k} \mathcal{L}(u_{k}, v_{k}). $$
(3.16)

By using the definition of \(\delta _{k+1}\), we obtain

$$ \mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) - \mathcal{L}(u_{k}, v_{k}) \leq \frac{\tau \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \tau \Vert v_{k} - u_{k} \Vert ^{2}}{2 \delta _{k+1}}. $$
(3.17)

Due to the expressions (3.16) and (3.17), we obtain

$$ \begin{aligned} \bigl\langle \varkappa _{k} - v_{k}, v_{k} - s^{*} \bigr\rangle \geq{}& \delta _{k} \bigl\{ \mathcal{L}(\varkappa _{k}, v_{k}) - \mathcal{L}(\varkappa _{k}, u_{k}) \bigr\} \\ & {} - \frac{\tau \delta _{k}}{2\delta _{k+1}} \Vert \varkappa _{k} - u_{k} \Vert ^{2} - \frac{\tau \delta _{k}}{2\delta _{k+1}} \Vert v_{k} - u_{k} \Vert ^{2}. \end{aligned} $$
(3.18)

Integrating the formulas (3.14) and (3.18), we obtain

$$ \begin{aligned} \bigl\langle \varkappa _{k} - v_{k}, v_{k} - s^{*} \bigr\rangle \geq{}& \langle \varkappa _{k} - u_{k}, v_{k} - u_{k} \rangle \\ &{} - \frac{\tau \delta _{k}}{2\delta _{k+1}} \Vert \varkappa _{k} - u_{k} \Vert ^{2} - \frac{\tau \delta _{k}}{2\delta _{k+1}} \Vert v_{k} - u_{k} \Vert ^{2}. \end{aligned} $$
(3.19)

We also make use of the following identities:

$$\begin{aligned}& - 2 \bigl\langle \varkappa _{k} - v_{k}, v_{k} - s^{*} \bigr\rangle = - \bigl\Vert \varkappa _{k} - s^{*} \bigr\Vert ^{2} + \Vert v_{k} - \varkappa _{k} \Vert ^{2} + \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2}, \end{aligned}$$
(3.20)
$$\begin{aligned}& 2 \langle u_{k} - \varkappa _{k}, u_{k} - v_{k} \rangle = \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \Vert v_{k} - u_{k} \Vert ^{2} - \Vert \varkappa _{k} - v_{k} \Vert ^{2}. \end{aligned}$$
(3.21)

By using expressions (3.19), (3.20), and (3.21), we obtain

$$\begin{aligned} \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} \leq& \bigl\Vert \varkappa _{k} - s^{*} \bigr\Vert ^{2} - \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert \varkappa _{k} - u_{k} \Vert ^{2} \\ &{} - \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert v_{k} - u_{k} \Vert ^{2}. \end{aligned}$$
(3.22)

 □

The following theorem is the main result used to establish the strong convergence of the iterative sequence. It proves that the suggested sequence is bounded and converges strongly to a common solution under both the monotone and the nonmonotone step-size criteria.

Theorem 3.4

Suppose that \(\mathcal{L} : \mathcal{M} \times \mathcal{M} \rightarrow \mathbb{R}\) satisfies the conditions (\(\mathcal{L}\)1)–(\(\mathcal{L}\)5). Then, the sequence \(\{s_{k}\}\) generated by Algorithm 1 strongly converges to \(s^{*} \in \operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L})\), where \(s^{*} = P_{\operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L})} (0)\).

Proof

Claim 1: The sequence \(\{s_{k}\}\) is bounded.

It is worth noting that \(EP(\mathcal{M}, \mathcal{L})\) and \(\operatorname{Fix}(\mathcal{T})\) are both closed, convex subsets. It is given that

$$ s^{*} = P_{EP(\mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T})} (0). $$

Namely, \(s^{*} \in EP(\mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T})\), as well as

$$ \bigl\langle 0 - s^{*}, u - s^{*} \bigr\rangle \leq 0, \quad \forall u \in EP( \mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T}). $$
(3.23)

As \(s^{*} \in \operatorname{Fix}(\mathcal{T}) \cap EP(\mathcal{M}, \mathcal{L})\) and based on the description of \(s_{k+1}\), we have

$$\begin{aligned} \bigl\Vert s_{k+1} - s^{*} \bigr\Vert &= \bigl\Vert (1 - \wp _{k} - \Im _{k} ) v_{k} + \wp _{k} \mathcal{T}(v_{k}) - s^{*} \bigr\Vert \\ &= \bigl\Vert (1 - \wp _{k} - \Im _{k}) \bigl(v_{k} - s^{*}\bigr) + \wp _{k} \bigl( \mathcal{T}(v_{k}) - s^{*}\bigr) - \Im _{k} s^{*} \bigr\Vert \\ &\leq \bigl\Vert (1 - \wp _{k} - \Im _{k}) \bigl(v_{k} - s^{*}\bigr) + \wp _{k} \bigl( \mathcal{T}(v_{k}) - s^{*}\bigr) \bigr\Vert + \Im _{k} \bigl\Vert s^{*} \bigr\Vert . \end{aligned}$$
(3.24)

Then, we must compute the following:

$$\begin{aligned} & \bigl\Vert (1 - \wp _{k} - \Im _{k}) \bigl(v_{k} - s^{*}\bigr) + \wp _{k} \bigl( \mathcal{T}(v_{k}) - s^{*}\bigr) \bigr\Vert ^{2} \\ &\quad = (1 - \wp _{k} - \Im _{k})^{2} \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} + \wp _{k}^{2} \bigl\Vert \mathcal{T}(v_{k}) - s^{*} \bigr\Vert ^{2} \\ &\qquad {}+ 2 \bigl\langle (1 - \wp _{k} - \Im _{k}) \bigl(v_{k} - s^{*}\bigr), \wp _{k} \bigl( \mathcal{T}(v_{k}) - s^{*}\bigr) \bigr\rangle \\ &\quad \leq (1 - \wp _{k} - \Im _{k})^{2} \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} + \wp _{k}^{2} \bigl[ \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} + \rho \bigl\Vert v_{k} - \mathcal{T}(v_{k}) \bigr\Vert ^{2} \bigr] \\ &\qquad {}+ 2 \wp _{k} (1 - \wp _{k} - \Im _{k}) \biggl[ \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} + \frac{\rho - 1}{2} \bigl\Vert \mathcal{T}(v_{k}) - v_{k} \bigr\Vert ^{2} \biggr] \\ &\quad \leq (1 - \Im _{k})^{2} \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} + \wp _{k} \bigl[ \wp _{k} - (1 - \rho ) (1 - \Im _{k}) \bigr] \bigl\Vert \mathcal{T}(v_{k}) - v_{k} \bigr\Vert ^{2} \\ &\quad \leq (1 - \Im _{k})^{2} \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2}. \end{aligned}$$
(3.25)

As a result, the previous expression implies that

$$\begin{aligned} \bigl\Vert (1 - \wp _{k} - \Im _{k}) \bigl(v_{k} - s^{*}\bigr) + \wp _{k} \bigl(\mathcal{T}(v_{k}) - s^{*}\bigr) \bigr\Vert &\leq (1 - \Im _{k}) \bigl\Vert v_{k} - s^{*} \bigr\Vert . \end{aligned}$$
(3.26)

From expressions (3.24) and (3.26), we have

$$\begin{aligned} \bigl\Vert s_{k+1} - s^{*} \bigr\Vert &\leq (1 - \Im _{k}) \bigl\Vert v_{k} - s^{*} \bigr\Vert + \Im _{k} \bigl\Vert s^{*} \bigr\Vert . \end{aligned}$$
(3.27)

In the context of Lemma 3.3, we derive

$$ \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} \leq \bigl\Vert \varkappa _{k} - s^{*} \bigr\Vert ^{2} - \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert \varkappa _{k} - u_{k} \Vert ^{2} - \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert v_{k} - u_{k} \Vert ^{2}. $$
(3.28)

Due to Lemma 3.1, we obtain

$$ \lim_{k \rightarrow +\infty} \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) = 1 - \tau > 0. $$
(3.29)

Thus, this means that there exists \(N_{1} \in \mathbb{N}\) such that

$$ 1 - \frac{\tau \delta _{k}}{\delta _{k+1}} > 0, \quad \forall k \geq N_{1}. $$
(3.30)

According to expressions (3.28) and (3.30), we have

$$ \bigl\Vert v_{k} - s^{*} \bigr\Vert ^{2} \leq \bigl\Vert \varkappa _{k} - s^{*} \bigr\Vert ^{2}, \quad \forall k \geq N_{1}. $$
(3.31)

From expression (3.1), we have

$$ \ell _{k} \Vert s_{k} - s_{k-1} \Vert \leq \varrho _{k}, \quad \text{for all } k \in \mathbb{N} $$

and

$$ \lim_{k \rightarrow +\infty} \biggl(\frac{\varrho _{k}}{\Im _{k}} \biggr) = 0. $$

As a result, this indicates that

$$ \lim_{k \rightarrow +\infty} \frac{\ell _{k}}{\Im _{k}} \lVert s_{k} - s_{k-1} \rVert \leq \lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\Im _{k}} = 0. $$
(3.32)

From the formulas (3.31) and (3.32) with the definition of \(\{\varkappa _{k}\}\), we obtain

$$\begin{aligned} \bigl\Vert v_{k} - s^{*} \bigr\Vert \leq \bigl\lVert \varkappa _{k} - s^{*} \bigr\rVert &= \bigl\lVert s_{k} + \ell _{k} (s_{k} - s_{k-1}) - s^{*} \bigr\rVert \\ &\leq \bigl\lVert s_{k} - s^{*} \bigr\rVert + \ell _{k} \lVert s_{k} - s_{k-1} \rVert \\ &= \bigl\lVert s_{k} - s^{*} \bigr\rVert + \Im _{k} \frac{\ell _{k}}{\Im _{k}} \lVert s_{k} - s_{k-1} \rVert \\ &\leq \bigl\Vert s_{k} - s^{*} \bigr\Vert + \Im _{k} K_{1}, \end{aligned}$$
(3.33)

where \(K_{1} > 0\) is a constant such that

$$ \frac{\ell _{k}}{\Im _{k}} \lVert s_{k} - s_{k-1} \rVert \leq K_{1}, \quad \forall k \geq 1. $$
(3.34)

Considering the formulas (3.31) and (3.33), we obtain

$$ \bigl\Vert v_{k} - s^{*} \bigr\Vert \leq \bigl\Vert \varkappa _{k} - s^{*} \bigr\Vert \leq \bigl\Vert s_{k} - s^{*} \bigr\Vert + \Im _{k} K_{1}, \quad \forall k \geq N_{1}. $$
(3.35)

Combining (3.26) and (3.35), we obtain

$$\begin{aligned} \bigl\lVert s_{k+1} - s^{*} \bigr\rVert &\leq (1 - \Im _{k}) \bigl\lVert v_{k} - s^{*} \bigr\rVert + \Im _{k} \bigl\lVert s^{*} \bigr\rVert \\ &\leq (1 - \Im _{k}) \bigl\lVert s_{k} - s^{*} \bigr\rVert + (1 - \Im _{k}) \Im _{k} K_{1} + \Im _{k} \bigl\lVert s^{*} \bigr\rVert \\ &\leq (1 - \Im _{k}) \bigl\lVert s_{k} - s^{*} \bigr\rVert + \Im _{k} \bigl(K_{1} + \bigl\lVert s^{*} \bigr\rVert \bigr) \\ &\leq \max \bigl\{ \bigl\lVert s_{k} - s^{*} \bigr\rVert , K_{1} + \bigl\lVert s^{*} \bigr\rVert \bigr\} \\ &\leq \cdots \leq \max \bigl\{ \bigl\lVert s_{N_{1}} - s^{*} \bigr\rVert , K_{1} + \bigl\lVert s^{*} \bigr\rVert \bigr\} . \end{aligned}$$
(3.36)

As a result, we infer that the sequence \(\{s_{k}\}\) is bounded.

Claim 2:

$$\begin{aligned} & \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert v_{k} - u_{k} \Vert ^{2} + \wp _{k} [1 - \rho - \wp _{k} ] \bigl\lVert \mathcal{T} (v_{k}) - v_{k} \bigr\rVert ^{2} \\ &\quad \leq \bigl\Vert s_{k} - s^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k+1} - s^{*} \bigr\Vert ^{2} + \Im _{k} K_{4} \end{aligned}$$
(3.37)

for some \(K_{4} > 0\). Indeed, it follows from relation (3.35) that

$$\begin{aligned} \bigl\Vert \varkappa _{k} - s^{*} \bigr\Vert ^{2} &\leq \bigl( \bigl\Vert s_{k} - s^{*} \bigr\Vert + \Im _{k} K_{1} \bigr)^{2} \\ &= \bigl\Vert s_{k} - s^{*} \bigr\Vert ^{2} + \Im _{k} \bigl( 2 K_{1} \bigl\Vert s_{k} - s^{*} \bigr\Vert + \Im _{k} K_{1}^{2} \bigr) \\ &\leq \bigl\Vert s_{k} - s^{*} \bigr\Vert ^{2} + \Im _{k} K_{2}, \end{aligned}$$
(3.38)

for some \(K_{2} > 0\). In addition, we have

$$\begin{aligned} \bigl\lVert s_{k+1} - s^{*} \bigr\rVert ^{2} ={}& \bigl\lVert (1 - \wp _{k} - \Im _{k} ) v_{k} + \wp _{k} \mathcal{T}(v_{k}) - s^{*} \bigr\rVert ^{2} \\ ={}& \bigl\lVert \bigl(v_{k} - s^{*}\bigr) + \wp _{k} \bigl(\mathcal{T}(v_{k}) - v_{k}\bigr) - \Im _{k} v_{k} \bigr\rVert ^{2} \\ \leq{}& \bigl\lVert \bigl(v_{k} - s^{*}\bigr) + \wp _{k} \bigl(\mathcal{T}(v_{k}) - v_{k}\bigr) \bigr\rVert ^{2} - 2 \Im _{k} \bigl\langle v_{k}, s_{k+1} - s^{*} \bigr\rangle \\ ={}& \bigl\lVert v_{k} - s^{*} \bigr\rVert ^{2} + \wp _{k}^{2} \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert ^{2} + 2 \wp _{k} \bigl\langle \mathcal{T}(v_{k}) - v_{k}, v_{k} - s^{*} \bigr\rangle \\ &{}+ 2 \Im _{k} \bigl\langle v_{k}, s^{*} - s_{k+1} \bigr\rangle \\ \leq{}& \bigl\lVert v_{k} - s^{*} \bigr\rVert ^{2} + \wp _{k}^{2} \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert ^{2} + \wp _{k} (\rho - 1) \bigl\lVert v_{k} - \mathcal{T}(v_{k}) \bigr\rVert ^{2} + \Im _{k} K_{3} \\ \leq {}& \bigl\lVert s_{k} - s^{*} \bigr\rVert ^{2} + \Im _{k} K_{4} - \wp _{k} \bigl[(1 - \rho ) - \wp _{k} \bigr] \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert ^{2} \\ &{}- \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert \varkappa _{k} - u_{k} \Vert ^{2} - \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert v_{k} - u_{k} \Vert ^{2}, \end{aligned}$$
(3.39)

where \(K_{4} = K_{2} + K_{3}\). Finally, we have

$$\begin{aligned} & \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \biggl(1 - \frac{\tau \delta _{k}}{\delta _{k+1}} \biggr) \Vert v_{k} - u_{k} \Vert ^{2} + \wp _{k} [1 - \rho - \wp _{k} ] \bigl\lVert \mathcal{T} (v_{k}) - v_{k} \bigr\rVert ^{2} \\ &\quad \leq \bigl\Vert s_{k} - s^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k+1} - s^{*} \bigr\Vert ^{2} + \Im _{k} K_{4}. \end{aligned}$$
(3.40)

Claim 3:

$$\begin{aligned} \bigl\lVert s_{k+1} - s^{*} \bigr\rVert ^{2} \leq{}& (1 - \Im _{k}) \bigl\lVert s_{k} - s^{*} \bigr\rVert ^{2} + \Im _{k} \biggl[ 2 \wp _{k} \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert \bigl\lVert s_{k+1} - s^{*} \bigr\rVert \\ & {} + \frac{3 K \ell _{k}}{\Im _{k}} \lVert s_{k} - s_{k-1} \rVert + 2 \bigl\langle s^{*}, s^{*} - s_{k+1} \bigr\rangle \biggr]. \end{aligned}$$
(3.41)

By setting the following value

$$ t_{k} = (1 - \wp _{k}) v_{k} + \wp _{k} \mathcal{T}(v_{k}), $$

we have

$$ s_{k+1} = t_{k} - \Im _{k} v_{k} = (1 - \Im _{k}) t_{k} - \Im _{k} (v_{k} - t_{k}) = (1 - \Im _{k}) t_{k} - \Im _{k} \wp _{k} \bigl(v_{k} - \mathcal{T}(v_{k})\bigr), $$
(3.42)

where

$$ v_{k} - t_{k} = v_{k} - (1 - \wp _{k}) v_{k} - \wp _{k} \mathcal{T}(v_{k}) = \wp _{k} \bigl(v_{k} - \mathcal{T}(v_{k})\bigr). $$

By definition of \(s_{k+1}\), we can write

$$\begin{aligned} & \bigl\lVert s_{k+1} - s^{*} \bigr\rVert ^{2} \\ &\quad = \bigl\lVert (1 - \Im _{k}) t_{k} + \wp _{k} \Im _{k} \bigl(\mathcal{T}(v_{k}) - v_{k}\bigr) - s^{*} \bigr\rVert ^{2} \\ &\quad = \bigl\lVert (1 - \Im _{k}) \bigl(t_{k} - s^{*}\bigr) + \bigl[ \wp _{k} \Im _{k} \bigl( \mathcal{T}(v_{k}) - v_{k}\bigr) - \Im _{k} s^{*} \bigr] \bigr\rVert ^{2} \\ &\quad \leq (1 - \Im _{k})^{2} \bigl\lVert t_{k} - s^{*} \bigr\rVert ^{2} \\ &\qquad {} + 2 \bigl\langle \wp _{k} \Im _{k} \bigl( \mathcal{T}(v_{k}) - v_{k}\bigr) - \Im _{k} s^{*}, (1 - \Im _{k}) \bigl(t_{k} - s^{*} \bigr) + \wp _{k} \Im _{k} \bigl(\mathcal{T}(v_{k}) - v_{k}\bigr) - \Im _{k} s^{*} \bigr\rangle \\ &\quad = (1 - \Im _{k})^{2} \bigl\lVert t_{k} - s^{*} \bigr\rVert ^{2} + 2 \Im _{k} \bigl\langle \wp _{k} \bigl(\mathcal{T}(v_{k}) - v_{k}\bigr) - s^{*}, s_{k+1} - s^{*} \bigr\rangle \\ &\quad \leq (1 - \Im _{k}) \bigl\lVert t_{k} - s^{*} \bigr\rVert ^{2} + 2 \wp _{k} \Im _{k} \bigl\langle \mathcal{T}(v_{k}) - v_{k}, s_{k+1} - s^{*} \bigr\rangle + 2 \Im _{k} \bigl\langle s^{*}, s^{*} - s_{k+1} \bigr\rangle . \end{aligned}$$
(3.43)

Next, we have to evaluate

$$\begin{aligned} & \bigl\lVert t_{k} - s^{*} \bigr\rVert ^{2} \\ &\quad = \bigl\lVert (1 - \wp _{k}) v_{k} + \wp _{k} \mathcal{T}(v_{k}) - s^{*} \bigr\rVert ^{2} \\ &\quad = \bigl\lVert (1 - \wp _{k}) \bigl(v_{k} - s^{*} \bigr) + \wp _{k} \bigl(\mathcal{T}(v_{k}) - s^{*} \bigr) \bigr\rVert ^{2} \\ &\quad = (1 - \wp _{k})^{2} \bigl\lVert v_{k} - s^{*} \bigr\rVert ^{2} + \wp _{k}^{2} \bigl\lVert \mathcal{T}(v_{k}) - s^{*} \bigr\rVert ^{2} + 2 \bigl\langle (1 - \wp _{k}) \bigl(v_{k} - s^{*}\bigr), \wp _{k} \bigl(\mathcal{T}(v_{k}) - s^{*}\bigr) \bigr\rangle \\ &\quad \leq (1 - \wp _{k})^{2} \bigl\lVert v_{k} - s^{*} \bigr\rVert ^{2} + \wp _{k}^{2} \bigl\lVert v_{k} - s^{*} \bigr\rVert ^{2} + \wp _{k}^{2} \rho \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert ^{2} \\ &\qquad {}+ 2 (1 - \wp _{k}) \wp _{k} \biggl[ \bigl\lVert v_{k} - s^{*} \bigr\rVert ^{2} - \frac{1 - \rho}{2} \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert ^{2} \biggr] \\ &\quad \leq \bigl\lVert v_{k} - s^{*} \bigr\rVert ^{2} + \wp _{k} [\wp _{k} - 1 + \rho ] \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert ^{2}. \end{aligned}$$
(3.44)

Since \(\{\wp _{k}\} \subset (0, 1 - \rho )\), the coefficient \(\wp _{k} [\wp _{k} - 1 + \rho ]\) is negative; combining this with expression (3.31), we obtain

$$\begin{aligned} \bigl\lVert t_{k} - s^{*} \bigr\rVert ^{2} &\leq \bigl\lVert \varkappa _{k} - s^{*} \bigr\rVert ^{2}. \end{aligned}$$
(3.45)

According to the definition of \(\varkappa _{k}\), one obtains

$$\begin{aligned} \bigl\lVert \varkappa _{k} - s^{*} \bigr\rVert ^{2} &= \bigl\lVert s_{k} + \ell _{k} (s_{k} - s_{k-1}) - s^{*} \bigr\rVert ^{2} \\ &= \bigl\lVert s_{k} - s^{*} + \ell _{k} (s_{k} - s_{k-1}) \bigr\rVert ^{2} \\ &= \bigl\lVert s_{k} - s^{*} \bigr\rVert ^{2} + \ell _{k}^{2} \lVert s_{k} - s_{k-1} \rVert ^{2} + 2 \bigl\langle s_{k} - s^{*}, \ell _{k} (s_{k} - s_{k-1}) \bigr\rangle \\ &\leq \bigl\lVert s_{k} - s^{*}\bigr\rVert ^{2} + \ell _{k}^{2} \lVert s_{k} - s_{k-1} \rVert ^{2} + 2 \ell _{k} \bigl\lVert s_{k} - s^{*} \bigr\rVert \lVert s_{k} - s_{k-1} \rVert \\ &= \bigl\lVert s_{k} - s^{*} \bigr\rVert ^{2} + \ell _{k} \lVert s_{k} - s_{k-1} \rVert \bigl[ 2 \bigl\lVert s_{k} - s^{*} \bigr\rVert + \ell _{k} \lVert s_{k} - s_{k-1} \rVert \bigr] \\ &\leq \bigl\lVert s_{k} - s^{*} \bigr\rVert ^{2} + 3 \ell _{k} K \lVert s_{k} - s_{k-1} \rVert , \end{aligned}$$
(3.46)

where

$$ K = \sup_{k \in \mathbb{N}} \bigl\{ \bigl\lVert s_{k} - s^{*} \bigr\rVert , \ell _{k} \lVert s_{k} - s_{k-1} \rVert \bigr\} . $$

Combining expressions (3.43), (3.44), and (3.46), we obtain

$$\begin{aligned} & \bigl\lVert s_{k+1} - s^{*} \bigr\rVert ^{2} \\ &\quad \leq (1 - \Im _{k}) \bigl\lVert t_{k} - s^{*} \bigr\rVert ^{2} + 2 \wp _{k} \Im _{k} \bigl\langle \mathcal{T}(v_{k}) - v_{k}, s_{k+1} - s^{*} \bigr\rangle + 2 \Im _{k} \bigl\langle s^{*}, s^{*} - s_{k+1} \bigr\rangle \\ &\quad \leq (1 - \Im _{k}) \bigl\lVert s_{k} - s^{*} \bigr\rVert ^{2} + \Im _{k} \biggl[2 \wp _{k} \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert \bigl\lVert s_{k+1} - s^{*} \bigr\rVert \\ &\qquad {} + 2 \bigl\langle s^{*}, s^{*} - s_{k+1} \bigr\rangle + \frac{3 \ell _{k} K}{\Im _{k}} \lVert s_{k} - s_{k-1} \rVert \biggr]. \end{aligned}$$
(3.47)

Claim 4: The sequence \(\{ \lVert s_{k} - s^{*} \rVert ^{2} \}\) converges to zero.

Set

$$ p_{k} := \bigl\Vert s_{k} - s^{*} \bigr\Vert ^{2} $$

and

$$ r_{k} := 2 \wp _{k} \bigl\lVert \mathcal{T}(v_{k}) - v_{k} \bigr\rVert \bigl\lVert s_{k+1} - s^{*} \bigr\rVert + 2 \bigl\langle s^{*}, s^{*} - s_{k+1} \bigr\rangle + \frac{3 \ell _{k} K}{\Im _{k}} \lVert s_{k} - s_{k-1} \rVert . $$

Then, Claim 4 can be rewritten as follows:

$$ p_{k+1} \leq (1 - \Im _{k}) p_{k} + \Im _{k} r_{k}. $$

Indeed, from Lemma 2.2, it suffices to show that \(\limsup_{j \rightarrow \infty} r_{k_{j}} \leq 0\) for every subsequence \(\{p_{k_{j}}\}\) of \(\{p_{k}\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} ( p_{k_{j}+1} - p_{k_{j}} ) \geq 0. $$

Equivalently, it suffices to show that

$$ \limsup_{j \rightarrow \infty} \bigl\langle s^{*}, s^{*} - s_{k_{j}+1} \bigr\rangle \leq 0 $$

for every subsequence \(\{\|s_{k_{j}} - s^{*}\|\}\) of \(\{\|s_{k} - s^{*}\|\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - s^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - s^{*} \bigr\Vert \bigr) \geq 0. $$

Assume that \(\{\|s_{k_{j}} - s^{*}\|\}\) is a subsequence of \(\{\|s_{k} - s^{*}\|\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - s^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - s^{*} \bigr\Vert \bigr) \geq 0. $$

Then,

$$\begin{aligned} & \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - s^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}} - s^{*} \bigr\Vert ^{2} \bigr) \\ &\quad = \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - s^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - s^{*} \bigr\Vert \bigr) \bigl( \bigl\Vert s_{k_{j}+1} - s^{*} \bigr\Vert + \bigl\Vert s_{k_{j}} - s^{*} \bigr\Vert \bigr) \geq 0. \end{aligned}$$
(3.48)

It follows from Claim 2 that

$$\begin{aligned} & \limsup_{j \rightarrow \infty} \biggl[ \biggl(1 - \frac{\tau \delta _{k_{j}}}{\delta _{k_{j}+1}} \biggr) \Vert \varkappa _{k_{j}} - u_{k_{j}} \Vert ^{2} + \biggl(1 - \frac{\tau \delta _{k_{j}}}{\delta _{k_{j}+1}} \biggr) \Vert v_{k_{j}} - u_{k_{j}} \Vert ^{2} \\ &\qquad {}+ \wp _{k_{j}} [1 - \rho - \wp _{k_{j}} ] \bigl\lVert \mathcal{T} (v_{k_{j}}) - v_{k_{j}} \bigr\rVert ^{2} \biggr] \\ &\quad \leq \limsup_{j \rightarrow \infty} \bigl[ \bigl\Vert s_{k_{j}} - s^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}+1} - s^{*} \bigr\Vert ^{2} + \Im _{k_{j}} K_{4} \bigr] \\ &\quad = - \liminf_{j \rightarrow \infty} \bigl[ \bigl\Vert s_{k_{j}+1} - s^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}} - s^{*} \bigr\Vert ^{2} \bigr] \\ &\quad \leq 0. \end{aligned}$$
(3.49)

The above relation implies that

$$ \begin{aligned} &\lim_{j \rightarrow \infty} \Vert \varkappa _{k_{j}} - u_{k_{j}} \Vert = 0, \\ & \lim_{j \rightarrow \infty} \Vert v_{k_{j}} - u_{k_{j}} \Vert = 0, \\ & \lim_{j \rightarrow \infty} \bigl\lVert \mathcal{T} (v_{k_{j}}) - v_{k_{j}} \bigr\rVert = 0. \end{aligned}$$
(3.50)

Therefore, we obtain

$$ \lim_{j \rightarrow \infty} \Vert v_{k_{j}} - \varkappa _{k_{j}} \Vert = 0. $$
(3.51)

According to the definition of \(\varkappa _{k}\) one has

$$\begin{aligned} \Vert \varkappa _{k_{j}} - s_{k_{j}} \Vert &= \ell _{k_{j}} \Vert s_{k_{j}} - s_{k_{j}-1} \Vert \\ &= \Im _{k_{j}} \frac{\ell _{k_{j}}}{\Im _{k_{j}}} \Vert s_{k_{j}} - s_{k_{j}-1} \Vert \rightarrow 0, \quad \text{as } j \rightarrow +\infty . \end{aligned}$$
(3.52)

This, together with \(\lim_{j \rightarrow \infty} \| v_{k_{j}} - \varkappa _{k_{j}}\| = 0\), yields that

$$ \lim_{j \rightarrow \infty} \Vert v_{k_{j}} - s_{k_{j}} \Vert = 0. $$
(3.53)

From expressions (3.50) and (3.53), we deduce that

$$\begin{aligned} \Vert s_{k_{j}+1} - s_{k_{j}} \Vert &\leq \Vert v_{k_{j}} - s_{k_{j}} \Vert + \Im _{k_{j}} \Vert v_{k_{j}} \Vert + \wp _{k_{j}} \bigl\lVert \mathcal{T} (v_{k_{j}}) - v_{k_{j}} \bigr\rVert . \end{aligned}$$
(3.54)

Taking the limit as \(j \rightarrow \infty \) on both sides of this inequality and using (3.50), (3.53), and \(\Im _{k} \rightarrow 0\), we have

$$ \lim_{j \rightarrow \infty} \Vert s_{k_{j}+1} - s_{k_{j}} \Vert = 0. $$
(3.55)

By the triangle inequality, we obtain

$$ \lim_{j \rightarrow \infty} \Vert \varkappa _{k_{j}} - s_{k_{j}+1} \Vert \leq \lim_{j \rightarrow \infty} \Vert \varkappa _{k_{j}} - s_{k_{j}} \Vert + \lim_{j \rightarrow \infty} \Vert s_{k_{j}} - s_{k_{j}+1} \Vert = 0. $$
(3.56)

Due to expression (3.11), we have

$$\begin{aligned} \delta _{k_{j}} \mathcal{L}(u_{k_{j}}, u) \geq \delta _{k_{j}} \mathcal{L}(u_{k_{j}}, v_{k_{j}}) + \langle \varkappa _{k_{j}} - v_{k_{j}}, u - v_{k_{j}} \rangle . \end{aligned}$$
(3.57)

By expression (3.17), we obtain

$$ \begin{aligned} \delta _{k_{j}} \mathcal{L}(u_{k_{j}}, v_{k_{j}})\geq{}& \delta _{k_{j}} \mathcal{L}(\varkappa _{k_{j}}, v_{k_{j}}) - \delta _{k_{j}} \mathcal{L}(\varkappa _{k_{j}}, u_{k_{j}}) \\ & {}- \frac{\delta _{k_{j}} \tau ( \Vert \varkappa _{k_{j}} - u_{k_{j}} \Vert ^{2} + \Vert v_{k_{j}} - u_{k_{j}} \Vert ^{2})}{2 \delta _{k_{j}+1}}. \end{aligned} $$
(3.58)

Combining relations (3.57), (3.58), and (3.14), we obtain

$$ \begin{aligned} \delta _{k_{j}} \mathcal{L}(u_{k_{j}}, u)\geq{}& \langle \varkappa _{k_{j}} - u_{k_{j}}, v_{k_{j}} - u_{k_{j}} \rangle - \frac{\tau \delta _{k_{j}}}{2 \delta _{k_{j}+1}} \Vert \varkappa _{k_{j}} - u_{k_{j}} \Vert ^{2} \\ & {} - \frac{\tau \delta _{k_{j}}}{2 \delta _{k_{j}+1}} \Vert u_{k_{j}} - v_{k_{j}} \Vert ^{2} + \langle \varkappa _{k_{j}} - v_{k_{j}}, u - v_{k_{j}} \rangle , \end{aligned} $$
(3.59)

where u is an arbitrary element of \(\mathcal{Y}_{k}\). By the boundedness of the generated sequences and expression (3.50), the right-hand side of the last inequality tends to zero as \(j \rightarrow \infty \). Since \(\delta _{k_{j}} \geq \delta > 0\), we obtain

$$ 0 \leq \limsup_{j \rightarrow \infty} \mathcal{L}(u_{k_{j}}, u) \leq \mathcal{L}(\hat{s}, u), \quad \forall u \in \mathcal{Y}_{k}. $$

Since the sequence \(\{s_{k}\}\) is bounded, there exists a subsequence \(\{s_{k_{j}}\}\) of \(\{s_{k}\}\) such that \(s_{k_{j}} \rightharpoonup \hat{s}\); by (3.52) and (3.50), \(u_{k_{j}} \rightharpoonup \hat{s}\) as well. Since \(\mathcal{M} \subset \mathcal{Y}_{k}\), the last inequality gives \(\mathcal{L}(\hat{s}, u) \geq 0\), for all \(u \in \mathcal{M}\), that is, \(\hat{s} \in EP(\mathcal{M}, \mathcal{L})\). By the demiclosedness of \((I - \mathcal{T})\) and (3.50), we obtain \(\hat{s} \in \operatorname{Fix}(\mathcal{T})\). It is given that

$$ s^{*} = P_{EP(\mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T})} (0). $$

Namely, \(s^{*} \in EP(\mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T})\) as well as

$$ \bigl\langle 0 - s^{*}, u - s^{*} \bigr\rangle \leq 0, \quad \forall u \in EP( \mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T}). $$

It is given that \(\hat{s} \in EP(\mathcal{M}, \mathcal{L}) \cap \operatorname{Fix}(\mathcal{T})\). Thus, we have

$$\begin{aligned} & \limsup_{k \rightarrow \infty} \bigl\langle s^{*}, s^{*} - s_{k} \bigr\rangle \\ &\quad = \lim_{j \rightarrow \infty} \bigl\langle s^{*}, s^{*} - s_{k_{j}} \bigr\rangle = \bigl\langle s^{*}, s^{*} - \hat{s} \bigr\rangle \leq 0. \end{aligned}$$
(3.60)

Using the fact that \(\lim_{j \rightarrow \infty} \lVert s_{k_{j} + 1} - s_{k_{j}} \rVert = 0\), we have

$$\begin{aligned} & \limsup_{k \rightarrow \infty} \bigl\langle s^{*}, s^{*} - s_{k+1} \bigr\rangle \\ &\quad \leq \limsup_{j \rightarrow \infty} \bigl\langle s^{*}, s_{k_{j}} - s_{k_{j}+1} \bigr\rangle + \limsup_{j \rightarrow \infty} \bigl\langle s^{*}, s^{*} - s_{k_{j}} \bigr\rangle \\ &\quad = \bigl\langle s^{*}, s^{*} - \hat{s} \bigr\rangle \leq 0. \end{aligned}$$
(3.61)

Combining Claim 3 with Lemma 2.2, we conclude that \(s_{k} \rightarrow s^{*}\) as \(k \rightarrow \infty \). The proof of Theorem 3.4 is complete. □

The third method does not involve subgradient techniques and is effective in some situations; its convergence proof is analogous to that of Algorithm 1. It is designed to obtain a common solution of an equilibrium problem and a fixed-point problem without the subgradient step. The key feature of this method is a monotone step-size rule that is independent of the Lipschitz constants. The algorithm uses a Mann-type iteration to solve the fixed-point problem and the two-step extragradient technique to solve the equilibrium problem.

Algorithm 3

The fourth method, which also avoids the subgradient step, is effective in some scenarios; its convergence proof is analogous to that of Algorithm 1. The key feature of this technique is a nonmonotone step-size rule that is independent of the Lipschitz constants.

Algorithm 4

4 Applications

In this section, we apply our main results to find a common solution of variational inequality and fixed-point problems. The reduction (4.2) is employed to obtain the following conclusions.

Let \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) be an operator. First, we consider the classic variational inequality problem [24, 38]: find \(s^{*} \in \mathcal{M}\) such that

$$ \bigl\langle \mathcal{A} \bigl(s^{*}\bigr), \aleph _{1} - s^{*} \bigr\rangle \geq 0, \quad \forall \aleph _{1} \in \mathcal{M}. $$
(4.1)

Define a bifunction \(\mathcal{F}\) as follows:

$$ \mathcal{F} (\aleph _{1}, \aleph _{2}) := \bigl\langle \mathcal{A}( \aleph _{1}), \aleph _{2} - \aleph _{1} \bigr\rangle , \quad \forall \aleph _{1}, \aleph _{2} \in \mathcal{M}. $$
(4.2)

Then, the equilibrium problem reduces to the variational inequality problem (4.1), and the bifunction (4.2) is Lipschitz-type continuous with \(c_{1} = c_{2} = \frac{L}{2}\), where L is the Lipschitz constant of the mapping \(\mathcal{A}\).
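It is worth recording why projections replace the proximal minimization steps under this reduction: for the bifunction (4.2), completing the square gives

$$ \arg\min_{\aleph _{2} \in \mathcal{M}} \biggl\{ \delta \bigl\langle \mathcal{A}(\aleph _{1}), \aleph _{2} - \aleph _{1} \bigr\rangle + \frac{1}{2} \lVert \aleph _{2} - \aleph _{1} \rVert ^{2} \biggr\} = \arg\min_{\aleph _{2} \in \mathcal{M}} \frac{1}{2} \bigl\lVert \aleph _{2} - \bigl(\aleph _{1} - \delta \mathcal{A}(\aleph _{1})\bigr) \bigr\rVert ^{2} = P_{\mathcal{M}} \bigl(\aleph _{1} - \delta \mathcal{A}(\aleph _{1})\bigr), $$

since the two objectives differ only by a term independent of \(\aleph _{2}\). This identity justifies the projection steps in the corollaries below.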

The following corollary is derived from the proposed Algorithm 1, in which the minimization steps for solving the equilibrium problem reduce to projections onto a convex set. This result yields a common solution to a variational inequality problem and a fixed-point problem.

Corollary 4.1

Suppose that \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is a weakly continuous, pseudomonotone, and L-Lipschitz continuous mapping and the solution set \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\) is nonempty. Take \(s_{0}, s_{1} \in \mathcal{M}\), \(\ell \in (0, 1)\), \(\tau \in (0, 1)\), \(\delta_{1} > 0\). Choose two positive numbers \(a, b\) such that \(0 < a, b < 1- \rho\) and \(0 < a, b < 1- \Im_{k}\). Moreover, choose \(\{\wp_{k}\} \subset (a, b)\) and \(\{\Im_{k}\} \subset (0, 1)\) satisfying the following conditions:

$$ \lim_{k \rightarrow +\infty} \Im _{k} = 0 \quad \textit{and}\quad \sum _{k=1}^{+ \infty} \Im _{k} = +\infty . $$

Calculate

$$ \varkappa _{k} = s_{k} + \ell _{k} (s_{k} - s_{k-1}), $$

where \(\ell _{k}\) is chosen as follows:

$$ 0 \leq \ell _{k} \leq \hat{\ell _{k}} \quad \textit{and}\quad \hat{\ell _{k}} = \textstyle\begin{cases} \min \{\frac{\ell}{2}, \frac{\varrho _{k}}{ \Vert s_{k} - s_{k-1} \Vert } \} & \textit{if } s_{k} \neq s_{k-1}, \\ \frac{\ell}{2} & \textit{otherwise}. \end{cases} $$
(4.3)

Moreover, choose a positive sequence \(\varrho _{k} = o(\wp _{k})\), that is, \(\lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\wp _{k}} = 0\). Next, compute

$$ \textstyle\begin{cases} u_{k} = P_{\mathcal{M}}(\varkappa _{k} - \delta _{k} \mathcal{A} ( \varkappa _{k})), \\ v_{k} = P_{\mathcal{Y}_{k}}(\varkappa _{k} - \delta _{k} \mathcal{A} (u_{k})), \end{cases} $$

where

$$ \mathcal{Y}_{k} = \bigl\{ z \in \mathcal{Y} : \bigl\langle \varkappa _{k} - \delta _{k} \mathcal{A}(\varkappa _{k}) - u_{k}, z - u_{k} \bigr\rangle \leq 0 \bigr\} \quad \textit{for each } k \geq 0. $$

Calculate

$$ s_{k+1} = (1 - \wp _{k} - \Im _{k} ) v_{k} + \wp _{k} \mathcal{T}(v_{k}). $$

Update the step size as follows:

$$ \delta _{k+1} = \textstyle\begin{cases} \min \{ \delta _{k}, \frac{\tau \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \tau \Vert v_{k} - u_{k} \Vert ^{2}}{2 \langle \mathcal{A}(\varkappa _{k}) - \mathcal{A}(u_{k}), v_{k} - u_{k} \rangle} \} \\ \quad \textit{if } \langle \mathcal{A}(\varkappa _{k}) - \mathcal{A}(u_{k}), v_{k} - u_{k} \rangle > 0, \\ \delta _{k}, \quad \textit{otherwise}. \end{cases} $$

Then, the sequence \(\{s_{k}\}\) converges strongly to \(s^{*} = P_{\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})}(0)\).
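To make the scheme concrete, the iteration of Corollary 4.1 can be sketched in Python. This is a minimal illustration under assumptions of our own choosing, not the authors' implementation: \(\mathcal{T}\) is the identity mapping (trivially demicontractive with \(\rho = 0\)), \(\mathcal{A}(s) = s - b\) on the unit ball (strongly monotone, so the unique solution is \(P_{\mathcal{M}}(b)\)), and the parameter values \(\Im _{k} = 1/(k+2)\), \(\wp _{k} = 0.4\), \(\varrho _{k} = (k+1)^{-2}\) are illustrative.

```python
import numpy as np

def proj_ball(z, r=1.0):
    # Projection onto the closed ball ||z|| <= r.
    n = np.linalg.norm(z)
    return z if n <= r else (r / n) * z

def proj_halfspace(z, a, u):
    # Projection onto the half-space Y_k = { z : <a, z - u> <= 0 }.
    val = a @ (z - u)
    if val <= 0:
        return z
    return z - (val / (a @ a)) * a

def inertial_subgrad_extragrad(A, proj_M, T, s0, s1, iters=5000,
                               ell=0.5, tau=0.5, delta=0.5):
    s_prev, s = s0.astype(float), s1.astype(float)
    for k in range(1, iters + 1):
        Im = 1.0 / (k + 2)            # Halpern parameter: Im -> 0, sum = infinity
        wp = 0.4                      # Mann parameter in (a, b) subset of (0, 1 - rho)
        rho_k = 1.0 / (k + 1) ** 2    # varrho_k with varrho_k / Im_k -> 0
        diff = np.linalg.norm(s - s_prev)
        ell_k = ell / 2 if diff == 0 else min(ell / 2, rho_k / diff)
        x = s + ell_k * (s - s_prev)  # inertial extrapolation varkappa_k
        u = proj_M(x - delta * A(x))
        a = x - delta * A(x) - u      # normal vector of the half-space Y_k
        v = proj_halfspace(x - delta * A(u), a, u)
        s_prev, s = s, (1 - wp - Im) * v + wp * T(v)
        den = (A(x) - A(u)) @ (v - u)  # adaptive step-size rule, no Lipschitz constant
        if den > 0:
            num = tau * (np.linalg.norm(x - u) ** 2 + np.linalg.norm(v - u) ** 2)
            delta = min(delta, num / (2 * den))
    return s

# Test operator A(s) = s - b on the unit ball; unique solution is P_M(b).
b = np.array([2.0, 0.0])
sol = inertial_subgrad_extragrad(lambda s: s - b, proj_ball, lambda v: v,
                                 np.array([2.0, 2.0]), np.array([2.0, 2.0]))
```

For \(b = (2, 0)\) the iterates approach the unique solution \((1, 0)\); note that the step-size update uses only quantities computed at each iteration, never the Lipschitz constant of \(\mathcal{A}\).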

The following corollary comes from the proposed Algorithm 2 and the minimization problem for resolving equilibrium problems that transform into projections on a convex set.

Corollary 4.2

Suppose that \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is a weakly continuous, pseudomonotone, and L-Lipschitz continuous mapping and the solution set \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\) is nonempty. Take \(s_{0}, s_{1} \in \mathcal{M}\), \(\ell \in (0, 1)\), \(\tau \in (0, 1)\), \(\delta_{1} > 0\). Choose two positive numbers \(a, b\) such that \(0 < a, b < 1- \rho\) and \(0 < a, b < 1- \Im_{k}\). Moreover, choose \(\{\wp_{k}\} \subset (a, b)\) and \(\{\Im_{k}\} \subset (0, 1)\) satisfying the following conditions:

$$ \lim_{k \rightarrow +\infty} \Im _{k} = 0\quad \textit{and}\quad \sum _{k=1}^{+ \infty} \Im _{k} = +\infty . $$

Calculate

$$ \varkappa _{k} = s_{k} + \ell _{k} (s_{k} - s_{k-1}), $$

where \(\ell _{k}\) is chosen as follows:

$$ 0 \leq \ell _{k} \leq \hat{\ell _{k}} \quad \textit{and}\quad \hat{\ell _{k}} = \textstyle\begin{cases} \min \{\frac{\ell}{2}, \frac{\varrho _{k}}{ \Vert s_{k} - s_{k-1} \Vert } \} & \textit{if } s_{k} \neq s_{k-1}, \\ \frac{\ell}{2} & \textit{otherwise}. \end{cases} $$
(4.4)

Moreover, choose a positive sequence \(\varrho _{k} = o(\wp _{k})\), that is, \(\lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\wp _{k}} = 0\). Next, compute

$$ \textstyle\begin{cases} u_{k} = P_{\mathcal{M}}(\varkappa _{k} - \delta _{k} \mathcal{A} ( \varkappa _{k})), \\ v_{k} = P_{\mathcal{Y}_{k}}(\varkappa _{k} - \delta _{k} \mathcal{A} (u_{k})), \end{cases} $$

where

$$ \mathcal{Y}_{k} = \bigl\{ z \in \mathcal{Y} : \bigl\langle \varkappa _{k} - \delta _{k} \mathcal{A}(\varkappa _{k}) - u_{k}, z - u_{k} \bigr\rangle \leq 0 \bigr\} \quad \textit{for each } k \geq 0. $$

Calculate

$$ s_{k+1} = (1 - \wp _{k} - \Im _{k} ) v_{k} + \wp _{k} \mathcal{T}(v_{k}). $$

Moreover, choose a nonnegative real sequence \(\{\chi _{k}\}\) such that \(\sum_{k=1}^{+\infty} \chi _{k} < +\infty \), and update the step size as follows:

$$ \delta _{k+1} = \textstyle\begin{cases} \min \{ \delta _{k} + \chi _{k}, \frac{\tau \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \tau \Vert v_{k} - u_{k} \Vert ^{2}}{2 \langle \mathcal{A}(\varkappa _{k}) - \mathcal{A}(u_{k}), v_{k} - u_{k} \rangle} \} \\ \quad \textit{if } \langle \mathcal{A}(\varkappa _{k}) - \mathcal{A}(u_{k}), v_{k} - u_{k} \rangle > 0, \\ \delta _{k} + \chi _{k}, \quad \textit{otherwise}. \end{cases} $$

Then, the sequence \(\{s_{k}\}\) converges strongly to \(s^{*} = P_{\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})}(0)\).
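The nonmonotone rule of Corollary 4.2 can be isolated as a small helper; the sketch below is ours (function name and argument layout are not from the paper). Because \(\delta _{k}\) may increase by the summable amount \(\chi _{k}\), an overly conservative early step size can recover, again without any Lipschitz information.

```python
import numpy as np

def nonmonotone_step(delta_k, chi_k, tau, x_k, u_k, v_k, Ax, Au):
    # Step-size update of Corollary 4.2: delta may grow by chi_k,
    # where sum(chi_k) < infinity guarantees the sequence stays bounded.
    den = (Ax - Au) @ (v_k - u_k)
    if den > 0:
        num = tau * (np.linalg.norm(x_k - u_k) ** 2
                     + np.linalg.norm(v_k - u_k) ** 2)
        return min(delta_k + chi_k, num / (2 * den))
    return delta_k + chi_k
```

In the experiments of Sect. 5, \(\chi _{k} = 20/(1+k)^{2}\) is used for this rule.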

The following corollary comes from the proposed Algorithm 3 and the minimization problem for resolving equilibrium problems that transform into projections on a convex set.

Corollary 4.3

Suppose that \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is a weakly continuous, pseudomonotone, and L-Lipschitz continuous mapping and the solution set \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\) is nonempty. Take \(s_{0}, s_{1} \in \mathcal{M}\), \(\ell \in (0, 1)\), \(\tau \in (0, 1)\), \(\delta_{1} > 0\). Choose two positive numbers \(a, b\) such that \(0 < a, b < 1- \rho\) and \(0 < a, b < 1- \Im_{k}\). Moreover, choose \(\{\wp_{k}\} \subset (a, b)\) and \(\{\Im_{k}\} \subset (0, 1)\) satisfying the following conditions:

$$ \lim_{k \rightarrow +\infty} \Im _{k} = 0 \quad \textit{and}\quad \sum _{k=1}^{+ \infty} \Im _{k} = +\infty . $$

Calculate

$$ \varkappa _{k} = s_{k} + \ell _{k} (s_{k} - s_{k-1}), $$

where \(\ell _{k}\) is chosen as follows:

$$ 0 \leq \ell _{k} \leq \hat{\ell _{k}} \quad \textit{and}\quad \hat{\ell _{k}} = \textstyle\begin{cases} \min \{\frac{\ell}{2}, \frac{\varrho _{k}}{ \Vert s_{k} - s_{k-1} \Vert } \} & \textit{if } s_{k} \neq s_{k-1}, \\ \frac{\ell}{2} & \textit{otherwise}. \end{cases} $$
(4.5)

Moreover, choose a positive sequence \(\varrho _{k} = o(\wp _{k})\), that is, \(\lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\wp _{k}} = 0\). Next, compute

$$ \textstyle\begin{cases} u_{k} = P_{\mathcal{M}}(\varkappa _{k} - \delta _{k} \mathcal{A} ( \varkappa _{k})), \\ v_{k} = P_{\mathcal{M}}(\varkappa _{k} - \delta _{k} \mathcal{A} (u_{k})). \end{cases} $$

Calculate

$$ s_{k+1} = (1 - \wp _{k} - \Im _{k} ) v_{k} + \wp _{k} \mathcal{T}(v_{k}). $$

Update the step size as follows:

$$ \delta _{k+1} = \textstyle\begin{cases} \min \{ \delta _{k}, \frac{\tau \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \tau \Vert v_{k} - u_{k} \Vert ^{2}}{2 \langle \mathcal{A}(\varkappa _{k}) - \mathcal{A}(u_{k}), v_{k} - u_{k} \rangle} \} \\ \quad \textit{if } \langle \mathcal{A}(\varkappa _{k}) - \mathcal{A}(u_{k}), v_{k} - u_{k} \rangle > 0, \\ \delta _{k}, \quad \textit{otherwise}. \end{cases} $$

Then, the sequence \(\{s_{k}\}\) converges strongly to \(s^{*} = P_{\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})}(0)\).

The proposed Algorithm 4 and the minimization problem for resolving equilibrium problems that transform into projections on a convex set lead to the following corollary.

Corollary 4.4

Suppose that \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) is a weakly continuous, pseudomonotone, and L-Lipschitz continuous mapping and the solution set \(\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})\) is nonempty. Take \(s_{0}, s_{1} \in \mathcal{M}\), \(\ell \in (0, 1)\), \(\tau \in (0, 1)\), \(\delta_{1} > 0\). Choose two positive numbers \(a, b\) such that \(0 < a, b < 1- \rho\) and \(0 < a, b < 1- \Im_{k}\). Moreover, choose \(\{\wp_{k}\} \subset (a, b)\) and \(\{\Im_{k}\} \subset (0, 1)\) satisfying the following conditions:

$$ \lim_{k \rightarrow +\infty} \Im _{k} = 0\quad \textit{and}\quad \sum _{k=1}^{+ \infty} \Im _{k} = +\infty . $$

Calculate

$$ \varkappa _{k} = s_{k} + \ell _{k} (s_{k} - s_{k-1}), $$

where \(\ell _{k}\) is chosen as follows:

$$ 0 \leq \ell _{k} \leq \hat{\ell _{k}} \quad \textit{and}\quad \hat{\ell _{k}} = \textstyle\begin{cases} \min \{\frac{\ell}{2}, \frac{\varrho _{k}}{ \Vert s_{k} - s_{k-1} \Vert } \} & \textit{if } s_{k} \neq s_{k-1}, \\ \frac{\ell}{2} & \textit{otherwise}. \end{cases} $$
(4.6)

Moreover, choose a positive sequence \(\varrho _{k} = o(\wp _{k})\), that is, \(\lim_{k \rightarrow +\infty} \frac{\varrho _{k}}{\wp _{k}} = 0\). Next, compute

$$ \textstyle\begin{cases} u_{k} = P_{\mathcal{M}}(\varkappa _{k} - \delta _{k} \mathcal{A} ( \varkappa _{k})), \\ v_{k} = P_{\mathcal{M}}(\varkappa _{k} - \delta _{k} \mathcal{A} (u_{k})). \end{cases} $$

Calculate

$$ s_{k+1} = (1 - \wp _{k} - \Im _{k} ) v_{k} + \wp _{k} \mathcal{T}(v_{k}). $$

Moreover, choose a nonnegative real sequence \(\{\chi _{k}\}\) such that \(\sum_{k=1}^{+\infty} \chi _{k} < +\infty \), and update the step size as follows:

$$ \delta _{k+1} = \textstyle\begin{cases} \min \{ \delta _{k} + \chi _{k}, \frac{\tau \Vert \varkappa _{k} - u_{k} \Vert ^{2} + \tau \Vert v_{k} - u_{k} \Vert ^{2}}{2 \langle \mathcal{A}(\varkappa _{k}) - \mathcal{A}(u_{k}), v_{k} - u_{k} \rangle} \} \\ \quad \textit{if } \langle \mathcal{A}(\varkappa _{k}) - \mathcal{A}(u_{k}), v_{k} - u_{k} \rangle > 0, \\ \delta _{k} + \chi _{k}, \quad \textit{otherwise}. \end{cases} $$

Then, the sequence \(\{s_{k}\}\) converges strongly to \(s^{*} = P_{\operatorname{Fix}(\mathcal{T}) \cap VI(\mathcal{M}, \mathcal{A})}(0)\).

5 Numerical illustrations

This section covers the computational consequences of the presented methodologies, as well as an examination of how variations in control settings impact the numerical efficacy of the suggested algorithms. All computations are run in MATLAB R2018b on an HP laptop with an Intel Core(TM) i5-6200 CPU and 8.00 GB RAM (7.78 GB usable).

Example 5.1

The first test problem is taken from the Nash–Cournot oligopolistic equilibrium model in [43]. Suppose that a function \(q : \mathcal{Y} \rightarrow \mathbb{R}\) satisfies

$$ lev_{\leq q} := \bigl\{ s \in \mathcal{Y} : q (s) \leq 0 \bigr\} \neq \emptyset . $$

The subgradient projection mapping is defined as follows:

$$ \mathcal{T} (s) = \textstyle\begin{cases} s - \frac{q(s)}{ \Vert r(s) \Vert ^{2}} r(s), \quad \text{if } q(s) \geq 0, \\ s, \quad \text{otherwise}, \end{cases} $$

where \(r(s) \in \partial q(s)\). In this case, \(\mathcal{T}\) is quasinonexpansive, demiclosed at zero, and \(\operatorname{Fix}(\mathcal{T}) = lev_{\leq q}\). The bifunction \(\mathcal{F}\) is given by

$$ \mathcal{F} (s, u) = \langle P s + Q u + c, u - s \rangle , $$

wherein \(c \in \mathbb{R}^{M}\) and P, Q are matrices of order M. The matrix \(Q - P\) is symmetric negative-semidefinite, while the matrix P is symmetric positive-semidefinite, with Lipschitz-type constants \(c_{1} = c_{2} = \frac{1}{2}\| P - Q\|\) (for additional information, see [43]). The starting point is \(s_{0} = s_{1}= (2, 2, \ldots , 2)\), the dimension N of the space is varied, and the stopping criterion is \(D_{k} = \|\varkappa _{k} - u_{k}\| \leq 10^{-3}\). Figures 1–10 depict numerical results for Example 5.1. The following control parameters are used:

  1. (1)

    Algorithm 1 in [52] (briefly, EGM):

    $$ \wp _{k} = \frac{1}{(10 k + 4)}, \qquad \Im _{k} = \frac{1 - \Im}{5},\qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
  2. (2)

    Algorithm 2 in [51] (briefly, I-EGM):

    $$ \wp _{k} = \frac{1}{(10 k + 4)},\qquad \Im _{k} = \frac{1 - \Im}{5}, \qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
  3. (3)

    Algorithm 1 in [18] (briefly, H-EGM):

    $$ \wp _{k} = \frac{1}{(10 k + 4)}, \qquad \Im _{k} = \frac{1}{5}, \qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
  4. (4)

    Algorithm 1 (briefly, M-EGM):

    $$\begin{aligned}& \delta _{1} = 0.36,\qquad \ell = 0.57,\qquad \tau = 0.264,\qquad \varrho _{k} = \frac{10}{(1+k)^{3}}, \\& \Im _{k} = \frac{1 - \Im}{5},\qquad \wp _{k} = \frac{1}{(10 k + 4)},\qquad \mathrm{g}(s)=\frac{s}{5}; \end{aligned}$$
  5. (5)

    Algorithm 2 (briefly, IM-EGM):

    $$\begin{aligned}& \delta _{1} = 0.36,\qquad \ell = 0.57,\qquad \tau = 0.264, \qquad \varrho _{k} = \frac{10}{(1+k)^{2}}, \\& \Im _{k} = \frac{1 - \Im}{5}, \qquad \wp _{k} = \frac{1}{(10 k + 4)},\qquad \mathrm{g}(s)=\frac{s}{5},\qquad \chi _{k} = \frac{20}{(1 + k)^{2}}. \end{aligned}$$
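The subgradient projection mapping \(\mathcal{T}\) used above is easy to realize numerically. The sketch below is illustrative: we choose \(q(s) = \lVert s \rVert - 1\) (so that \(lev_{\leq q}\) is the unit ball and \(r(s) = s / \lVert s \rVert\) is a subgradient for \(s \neq 0\)); this is our own choice of q, not the one used in [43].

```python
import numpy as np

def subgradient_projection(q, r):
    # Build the subgradient projection mapping T for a convex q with
    # subgradient selection r; then Fix(T) = { s : q(s) <= 0 }.
    def T(s):
        qs = q(s)
        if qs < 0:                 # already strictly inside the level set
            return s
        rs = r(s)
        return s - (qs / (rs @ rs)) * rs
    return T

# Illustrative choice: q(s) = ||s|| - 1, r(s) = s / ||s||.
q = lambda s: np.linalg.norm(s) - 1.0
r = lambda s: s / np.linalg.norm(s)
T = subgradient_projection(q, r)
```

For this q, one step of T radially retracts any exterior point onto the unit sphere, while points of the level set are fixed, which matches \(\operatorname{Fix}(\mathcal{T}) = lev_{\leq q}\).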
Figure 1
figure 1

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 10\) for the first 100 iterations

Figure 2
figure 2

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 10\) for the first 100 iterations

Figure 3
figure 3

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 20\) for the error term 10−3

Figure 4

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 20\) for the error term 10−3

Figure 5

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 30\) for the first 500 iterations

Figure 6

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 30\) for the first 500 iterations

Figure 7

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 40\) for the error term 10−3

Figure 8

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 40\) for the error term 10−3

Figure 9

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 50\) for the error term 10−3

Figure 10

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(N = 50\) for the error term 10−3

Example 5.2

Consider the real Hilbert space \(\mathcal{Y} = L^{2}([0, 1])\) with inner product \(\langle s, u \rangle = \int _{0}^{1} s(t) u(t)\,dt\), \(\forall s, u \in \mathcal{Y} \), and induced norm

$$ \Vert s \Vert = \sqrt{ \int _{0}^{1} \bigl\vert s(t) \bigr\vert ^{2}\,dt}. $$
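To experiment with Example 5.2 numerically, the \(L^{2}\) inner product and norm must be discretized. A minimal midpoint-rule sketch (the grid size n is an arbitrary choice):

```python
import numpy as np

def l2_inner(s, u, n=2000):
    # Midpoint-rule approximation of <s, u> = \int_0^1 s(t) u(t) dt
    t = (np.arange(n) + 0.5) / n
    return np.mean(s(t) * u(t))

def l2_norm(s, n=2000):
    # Induced norm ||s|| = sqrt(<s, s>)
    return np.sqrt(l2_inner(s, s, n))
```

For instance, \(\| t \| = 1/\sqrt{3} \approx 0.577\), which the approximation recovers.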

Define the operator \(\mathcal{A} : \mathcal{M} \rightarrow \mathcal{Y}\) by

$$ \mathcal{A}(s) (t) = \int _{0}^{1} \bigl( s(t) - H(t, r) f\bigl(s(r) \bigr) \bigr)\,dr + g(t), $$

where \(\mathcal{M} := \{ s \in L^{2}([0, 1]): \|s\| \leq 1 \}\) is the unit ball and

$$ H(t, r) = \frac{2tr e^{(t+r)}}{e \sqrt{e^{2}-1}}, \qquad f(s) = \cos s,\qquad g(t) = \frac{2t e^{t}}{e \sqrt{e^{2}-1}}. $$
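The operator \(\mathcal{A}\) can be evaluated by quadrature. The sketch below is hedged: we write the integration variable as r and assume f acts at the integration point, i.e. the integrand reads \(s(t) - H(t, r) f(s(r))\), which is the standard form of this example.

```python
import numpy as np

def H(t, r):
    return 2 * t * r * np.exp(t + r) / (np.e * np.sqrt(np.e**2 - 1))

def g(t):
    return 2 * t * np.exp(t) / (np.e * np.sqrt(np.e**2 - 1))

def A(s, t_grid, n=400):
    # A(s)(t) = \int_0^1 ( s(t) - H(t, r) cos(s(r)) ) dr + g(t),
    # discretized by the midpoint rule with n nodes (our choice).
    r = (np.arange(n) + 0.5) / n          # quadrature nodes on [0, 1]
    out = []
    for t in t_grid:
        integrand = s(t) - H(t, r) * np.cos(s(r))   # f(s) = cos s
        out.append(np.mean(integrand) + g(t))
    return np.array(out)
```

Note that \(\int _{0}^{1} H(t, r)\,dr = g(t)\) (since \(\int _{0}^{1} r e^{r}\,dr = 1\)), so \(\mathcal{A}(0) = 0\), consistent with the stated solution \(s^{*}(t) = 0\).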

The bifunction is stated as follows:

$$ \mathcal{F} (s, u) := \bigl\langle \mathcal{A}(s), u - s \bigr\rangle , \quad \forall s, u \in \mathcal{M}. $$

Moreover, \(\mathcal{F}\) is clearly a Lipschitz-type continuous bifunction with Lipschitz constants \(c_{1} = c_{2} = 1\), and it is monotone [49]. The metric projection onto \(\mathcal{M}\) is given by

$$ P_{\mathcal{M}} (s) = \textstyle\begin{cases} \frac{s}{ \Vert s \Vert } & \text{if } \Vert s \Vert > 1, \\ s & \text{if } \Vert s \Vert \leq 1. \end{cases} $$
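On a uniform grid, this metric projection reduces to a radial rescaling; a minimal sketch (the grid discretization is our choice):

```python
import numpy as np

def project_ball(s_vals, dt):
    # P_M(s) = s / ||s|| if ||s|| > 1, else s, where ||.|| is the L^2 norm
    # approximated on a uniform grid with spacing dt.
    norm = np.sqrt(np.sum(s_vals ** 2) * dt)
    return s_vals / norm if norm > 1 else s_vals
```

A function of norm 3 is rescaled onto the unit sphere, while one of norm 0.5 is left unchanged.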

The mapping \(\mathcal{T} : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) is defined by

$$ \mathcal{T}(s) (t) = \int _{0}^{1} t\, s(r)\,dr, \quad t \in [0, 1]. $$
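A discretized sketch of \(\mathcal{T}\) (midpoint rule; the integration variable is written as r). Since \(\mathcal{T}(0) = 0\), this also confirms that \(s^{*}(t) = 0\) is a fixed point of \(\mathcal{T}\):

```python
import numpy as np

def T(s, t_grid, n=1000):
    # T(s)(t) = t * \int_0^1 s(r) dr, approximated by the midpoint rule.
    r = (np.arange(n) + 0.5) / n
    return t_grid * np.mean(s(r))
```

For example, \(\mathcal{T}(\cos)(t) = t \sin(1)\), which the discretization reproduces.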

A simple calculation shows that \(\mathcal{T}\) is 0-demicontractive. The solution to the problem is \(s^{*}(t) = 0\). Figures 11–18 depict the numerical results for Example 5.2. The following control parameters are used:

  1. (1)

    Algorithm 1 in [52] (briefly, EGM):

    $$ \wp _{k} = \frac{1}{(5 k + 10)},\qquad \Im _{k} = \frac{1 - \Im}{4}, \qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
  2. (2)

    Algorithm 2 in [51] (briefly, I-EGM):

    $$ \wp _{k} = \frac{1}{(5 k + 10)}, \qquad \Im _{k} = \frac{1 - \Im}{4}, \qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
  3. (3)

    Algorithm 1 in [18] (briefly, H-EGM):

    $$ \wp _{k} = \frac{1}{(5 k + 10)}, \qquad \Im _{k} = \frac{1}{5},\qquad \delta _{k} = \min \biggl\{ \frac{1}{4 c_{1}}, \frac{1}{4 c_{2}} \biggr\} ; $$
  4. (4)

    Algorithm 1 (briefly, M-EGM):

    $$\begin{aligned}& \delta _{1} = 0.42,\qquad \ell = 0.67, \qquad \tau = 0.33,\qquad \varrho _{k} = \frac{10}{(1+k)^{2}}, \\& \Im _{k} = \frac{1 - \Im}{3},\qquad \wp _{k} = \frac{1}{(5 k + 10)},\qquad \mathrm{g}(s)=\frac{s}{3}; \end{aligned}$$
  5. (5)

    Algorithm 2 (briefly, IM-EGM):

    $$\begin{aligned}& \delta _{1} = 0.42, \qquad \ell = 0.67,\qquad \tau = 0.33,\qquad \varrho _{k} = \frac{10}{(1+k)^{2}}, \\& \Im _{k} = \frac{1 - \Im}{3}, \qquad \wp _{k} = \frac{1}{(5 k + 10)}, \qquad \mathrm{g}(s)=\frac{s}{3},\qquad \chi _{k} = \frac{10}{(1 + k)^{2}}. \end{aligned}$$
Figure 11

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(s_{0} = s_{1} = t\) for the first 500 iterations

Figure 12

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(s_{0} = s_{1} = t\) for the first 500 iterations

Figure 13

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(s_{0} = s_{1} = \sin (t)\) for the first 500 iterations

Figure 14

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(s_{0} = s_{1} = \sin (t)\) for the first 500 iterations

Figure 15

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(s_{0} = s_{1} = \cos (t)\) for the first 500 iterations

Figure 16

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(s_{0} = s_{1} = \cos (t)\) for the first 500 iterations

Figure 17

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(s_{0} = s_{1} = \cos (t)\) for the first 500 iterations

Figure 18

Numerical comparison of Algorithm 1 and Algorithm 2 using Algorithm 1 in [52], Algorithm 2 in [51], and Algorithm 1 in [18], while \(s_{0} = s_{1} = \cos (t)\) for the first 500 iterations

6 Conclusion

This paper provides two explicit extragradient-like methods for finding a common solution to an equilibrium problem involving a pseudomonotone, Lipschitz-type bifunction and a fixed-point problem for a ρ-demicontractive mapping in a real Hilbert space. A new step-size criterion that does not rely on knowledge of the Lipschitz-type constants has been developed. Under certain standard conditions, strong convergence theorems for the proposed algorithms are established. Numerical experiments confirm the superiority of the suggested approaches over existing methods and show that the nonmonotone variable step-size rule continues to improve the performance of the iterative sequences in this setting.

Availability of data and materials

Not applicable.

References

  1. Abass, H., Godwin, G., Narain, O., Darvish, V.: Inertial extragradient method for solving variational inequality and fixed-point problems of a Bregman demigeneralized mapping in a reflexive Banach spaces. Numer. Funct. Anal. Optim. 42(8), 933–960 (2022)


  2. Antipin, A.: Equilibrium programming: proximal methods. Comput. Math. Math. Phys. 37(11), 1285–1296 (1997)


  3. Attouch, F.A.H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Var. Anal. 9, 3–11 (2001)


  4. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)


  5. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)


  6. Bianchi, M., Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90(1), 31–43 (1996)


  7. Bigi, G., Castellani, M., Pappalardo, M., Passacantando, M.: Existence and solution methods for equilibria. Eur. J. Oper. Res. 227(1), 1–11 (2013)


  8. Blum, E.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)


  9. Dafermos, S.: Traffic equilibrium and variational inequalities. Transp. Sci. 14(1), 42–54 (1980)


  10. Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, T.M.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70(3), 687–704 (2017)


  11. Dong, Q.L., Huang, J.Z., Li, X.H., Cho, Y.J., Rassias, T.M.: MiKM: multi-step inertial Krasnosel’skiı̌–Mann algorithm and its applications. J. Glob. Optim. 73(4), 801–824 (2018)


  12. Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2002)


  13. Fan, J., Liu, L., Qin, X.: A subgradient extragradient algorithm with inertial effects for solving strongly pseudomonotone variational inequalities. Optimization, 1–17 (2019)

  14. Fan, K.: A minimax inequality and applications. In: Shisha, O. (ed.) Inequalities III, pp. 103–113. Academic Press, New York (1972)


  15. Flåm, S.D., Antipin, A.S.: Equilibrium programming using proximal-like algorithms. Math. Program. 78(1), 29–41 (1996)


  16. Giannessi, F., Maugeri, A., Pardalos, P.M.: Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models, vol. 58. Springer, Berlin (2006)


  17. Hieu, D.V.: An inertial-like proximal algorithm for equilibrium problems. Math. Methods Oper. Res. 88(3), 399–415 (2018)


  18. Hieu, D.V.: Strong convergence of a new hybrid algorithm for fixed point problems and equilibrium problems. Math. Model. Anal. 24(1), 1–19 (2018)


  19. Hieu, D.V., Cho, Y.J., Xiao, Y.B.: Modified extragradient algorithms for solving equilibrium problems. Optimization 67(11), 2003–2029 (2018)


  20. Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73(1), 197–217 (2016)


  21. Hung, P.G., Muu, L.D.: The Tikhonov regularization extended to equilibrium problems involving pseudomonotone bifunctions. Nonlinear Anal., Theory Methods Appl. 74(17), 6121–6129 (2011)


  22. Iiduka, H., Yamada, I.: A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 58(2), 251–261 (2009)


  23. Konnov, I.: Application of the proximal point method to nonmonotone equilibrium problems. J. Optim. Theory Appl. 119(2), 317–333 (2003)


  24. Konnov, I.V.: On systems of variational inequalities. Russ. Math. Izv.-Vyss. Uchebnye Zaved. Mat. 41, 77–86 (1997)


  25. Korpelevich, G.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)


  26. Kumam, P., Katchang, P.: A viscosity of extragradient approximation method for finding equilibrium problems, variational inequalities and fixed-point problems for nonexpansive mappings. Nonlinear Anal. Hybrid Syst. 3(4), 475–486 (2009)


  27. Li, M., Yao, Y.: Strong convergence of an iterative algorithm for λ-strictly pseudo-contractive mappings in Hilbert spaces. An. Ştiinţ. Univ. ‘Ovidius’ Constanţa 18(1), 219–228 (2010)


  28. Maingé, P.-E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47(3), 1499–1515 (2008)


  29. Maingé, P.-E., Moudafi, A.: Coupling viscosity methods with the extragradient algorithm for solving equilibrium problems. J. Nonlinear Convex Anal. 9(2), 283–294 (2008)


  30. Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4(3), 506–506 (1953)


  31. Mastroeni, G.: On auxiliary principle for equilibrium problems. In: Nonconvex Optimization and Its Applications, pp. 289–298. Springer, Berlin (2003)


  32. Moudafi, A.: Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 15(1–2), 91–100 (1999)


  33. Muu, L., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal., Theory Methods Appl. 18(12), 1159–1166 (1992)


  34. Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: Hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a Hilbert space. J. Optim. Theory Appl. 160(3), 809–831 (2013)


  35. Oliveira, P., Santos, P., Silva, A.: A Tikhonov-type regularization for equilibrium problems in Hilbert spaces. J. Math. Anal. Appl. 401(1), 336–342 (2013)


  36. Polyak, B.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)


  37. Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal., Theory Methods Appl. 75(2), 742–750 (2012)


  38. Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Hebd. Séances Acad. Sci. 258(18), 4413–4416 (1964)


  39. Strodiot, J.J., Vuong, P.T., Van Nguyen, T.T.: A class of shrinking projection extragradient methods for solving non-monotone equilibrium problems in Hilbert spaces. J. Glob. Optim. 64(1), 159–178 (2016)


  40. Takahashi, S., Takahashi, W.: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331(1), 506–515 (2007)


  41. Tiel, J.V.: Convex Analysis: An Introductory Text, 1st edn. Wiley, New York (1984)


  42. Tran, D.Q., Dung, M.L., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749–776 (2008)


  43. Tran, D.Q., Dung, M.L., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749–776 (2008)


  44. ur Rehman, H., Kumam, P., Abubakar, A.B., Cho, Y.J.: The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 39(2), 100 (2020)


  45. ur Rehman, H., Kumam, P., Argyros, I.K., Deebani, W., Kumam, W.: Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry 12(4), 503 (2020)


  46. ur Rehman, H., Kumam, P., Cho, Y.J., Suleiman, Y.I., Kumam, W.: Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 36(1), 82–113 (2020)


  47. ur Rehman, H., Kumam, P., Cho, Y.J., Yordsorn, P.: Weak convergence of explicit extragradient algorithms for solving equilibirum problems. J. Inequal. Appl. 2019(1), 282 (2019)


  48. ur Rehman, H., Kumam, P., Kumam, W., Shutaywi, M., Jirakitpuwapat, W.: The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry 12(3), 463 (2020)


  49. Van Hieu, D., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 66(1), 75–96 (2017)


  50. Vinh, N.T., Muu, L.D.: Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 44(3), 639–663 (2019)


  51. Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155(2), 605–627 (2012)


  52. Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: On extragradient-viscosity methods for solving equilibrium and fixed point problems in a Hilbert space. Optimization 64(2), 429–451 (2013)


  53. Yamada, I., Ogura, N.: Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 25(7–8), 619–655 (2005)



Acknowledgements

The fourth author would like to thank his mentor Professor Dr. Poom Kumam from King Mongkut’s University of Technology Thonburi, Thailand, for his advice and comments, which improved the quality and the results of this manuscript.

Funding

The first author was partially supported by Chiang Mai University under Fundamental Fund 2023. The third author was supported by University of Phayao and Thailand Science Research and Innovation grant no. FF66-UoE. The fourth author was financially supported by the Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation (Grant No. RGNS 65-168).

Author information


Contributions

BP, CK, NP (N. Pholasa) and NP (N. Pakkaranang): Conceptualization, Methodology, Supervision, Writing and Editing manuscript preparation. CK, NP (N. Pholasa) and NP (N. Pakkaranang): Investigation, Formal Analysis, Review and Validation. BP, NP (N. Pholasa) and NP (N. Pakkaranang): Investigation, Funding Acquisition and Validation. All authors have read and approved the final version of this manuscript.

Corresponding author

Correspondence to Nuttapol Pakkaranang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Panyanak, B., Khunpanuk, C., Pholasa, N. et al. Dynamical inertial extragradient techniques for solving equilibrium and fixed-point problems in real Hilbert spaces. J Inequal Appl 2023, 7 (2023). https://doi.org/10.1186/s13660-023-02912-6
