
A new class of computationally efficient algorithms for solving fixed-point problems and variational inequalities in real Hilbert spaces

Abstract

A family of inertial extragradient-type algorithms is proposed for solving pseudomonotone variational inequalities over convex sets together with fixed-point problems, where the mapping involved in the fixed-point problem is ρ-demicontractive. Under standard hypotheses, the generated iterative sequences converge strongly to a common solution of the variational inequality and the fixed-point problem. Some special cases, as well as sufficient conditions that guarantee the validity of the hypotheses of the convergence statements, are also discussed. Detailed numerical experiments illustrate the theoretical results and compare the methods with existing ones.

1 Introduction

The motivation for studying a common solution problem is its potential application to mathematical models with fixed-point constraints. This is especially true in real-world applications such as signal processing, network resource allocation, and image recovery, and it is extremely important for signal analysis, composite reduction, optimization techniques, and image-recovery problems; see, for example, [1, 13, 20, 21, 27]. Let us look at both problems highlighted by this study. Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of a real Hilbert space \(\mathcal{E}\) with inner product \(\langle \cdot , \cdot \rangle \) and induced norm \(\|\cdot \|\). This study contributes by investigating the convergence analysis of iterative algorithms for handling variational inequality problems and fixed-point problems in real Hilbert spaces. Let \(\mathcal{N} : \mathcal{D} \rightarrow \mathcal{E}\) be an operator. Then, the variational inequality problem [29] is defined in the following manner:

$$ \text{Find} \quad \varpi ^{*} \in \mathcal{D} \quad \text{such that} \ \bigl\langle \mathcal{N}\bigl(\varpi ^{*}\bigr), r - \varpi ^{*} \bigr\rangle \geq 0, \quad \forall r \in \mathcal{D}. $$
(VIP)

Consider \(\operatorname{VI}(\mathcal{D}, \mathcal{N})\) to denote the solution set of the problem (VIP). Variational inequalities are used in a number of areas, including partial differential equations, optimization, engineering, applied mathematics, and economics (see [12, 14–17, 24, 30]). The variational inequality problem is important in the applied sciences. Many researchers have investigated not only the existence and stability of solutions but also iterative methods for solving such problems. Projection methods, in particular, are crucial for computing numerical solutions of variational inequalities. Several authors have proposed projection methodologies to solve the problem [3, 4, 10, 11, 18, 25, 26, 33, 40–42] and others in [5–9, 34–38]. Most of these algorithms rely on a projection computed onto the feasible set \(\mathcal{D}\). The extragradient method was developed by Korpelevich [18] and Antipin [2]. The method has the following form:

$$ \textstyle\begin{cases} s_{1} \in \mathcal{D} \quad \text{and} \quad 0 < \hbar < \frac{1}{L}, \\ r_{k} = P_{\mathcal{D}}{[s_{k} - \hbar \mathcal{N}(s_{k})]}, \\ s_{k+1} = P_{\mathcal{D}}{[s_{k} - \hbar \mathcal{N}(r_{k})]}. \end{cases} $$
(1.1)
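For illustration, here is a minimal Python sketch of the iteration (1.1); the helper names `N` (the operator \(\mathcal{N}\)), `proj_D` (the projection onto \(\mathcal{D}\)), and the toy test problem are our own choices for this example, not part of the cited methods.

```python
import numpy as np

def extragradient_step(s, N, proj_D, hbar):
    """One iteration of the extragradient method (1.1), with 0 < hbar < 1/L."""
    r = proj_D(s - hbar * N(s))      # first projection onto D
    return proj_D(s - hbar * N(r))   # second projection onto D

# Toy example: N(s) = A s with A skew-symmetric (hence monotone, L = ||A|| = 1),
# and D the closed unit ball, whose projection has a closed form.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
N = lambda s: A @ s
proj_D = lambda s: s / max(1.0, np.linalg.norm(s))
s = np.array([1.0, 0.5])
for _ in range(100):
    s = extragradient_step(s, N, proj_D, hbar=0.5)   # hbar = 0.5 < 1/L = 1
```

The iterates of this toy run shrink toward the solution \(\varpi ^{*} = 0\); the plain projected-gradient method would fail on this skew-symmetric example, which is precisely what motivates the extrapolation step.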

This method needs to compute two projections onto the feasible set \(\mathcal{D}\) in each iteration. If the feasible set \(\mathcal{D}\) has a complicated structure, the computational efficiency of the method may therefore decline.

A first remedy is the subgradient extragradient method developed by Censor et al. [10], in which the second projection onto \(\mathcal{D}\) is replaced by a projection onto a half-space. This method takes the following form:

$$ \textstyle\begin{cases} s_{1} \in \mathcal{D} \quad \text{and} \quad 0 < \hbar < \frac{1}{L}, \\ r_{k} = P_{\mathcal{D}}{[s_{k} - \hbar \mathcal{N}(s_{k})]}, \\ s_{k+1} = P_{\mathcal{E}_{k}}{[s_{k} - \hbar \mathcal{N}(r_{k})]}, \end{cases} $$
(1.2)

where

$$ \mathcal{E}_{k} = \bigl\{ z \in \mathcal{E} : \bigl\langle s_{k} - \hbar \mathcal{N}(s_{k}) - r_{k}, z - r_{k} \bigr\rangle \leq 0 \bigr\} . $$
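A practical point, which follows from a standard computation, is that the projection onto the half-space \(\mathcal{E}_{k}\) admits a closed form: setting \(a_{k} = s_{k} - \hbar \mathcal{N}(s_{k}) - r_{k}\), for any \(u \in \mathcal{E}\) with \(a_{k} \neq 0\) one has

$$ P_{\mathcal{E}_{k}}(u) = u - \frac{\max \{ 0, \langle a_{k}, u - r_{k} \rangle \} }{ \Vert a_{k} \Vert ^{2}} a_{k}, $$

so the second step of (1.2), unlike that of (1.1), never requires solving a constrained minimization problem over \(\mathcal{D}\).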

Following that, we will look at the strong convergence analysis of Tseng’s extragradient method [33], which uses only one projection per iteration:

$$ \textstyle\begin{cases} s_{1} \in \mathcal{D} \quad \text{and} \quad 0 < \hbar < \frac{1}{L}, \\ r_{k} = P_{\mathcal{D}}{[s_{k} - \hbar \mathcal{N}(s_{k})]}, \\ s_{k+1} = r_{k} - \hbar [\mathcal{N} (r_{k}) - \mathcal{N} (s_{k}) ]. \end{cases} $$
(1.3)

In terms of computation, the method (1.3) is especially efficient since it requires the solution of only one minimization problem per iteration. As a result, the method (1.3) is no more computationally expensive than (1.1), and it performs better in most situations. Suppose that \(\mathcal{M} : \mathcal{E} \rightarrow \mathcal{E}\) is a mapping. The fixed-point problem for a mapping \(\mathcal{M}\) is defined by:

$$ \mathcal{M} \bigl(\varpi ^{*}\bigr) = \varpi ^{*}. $$
(FP)

The solution set of the fixed-point problem (FP) is represented by the set \(\operatorname{Fix}(\mathcal{M})\). Most of the methods for solving problem (FP) are derived from the basic Mann iteration, which starts from \(s_{1} \in \mathcal{E}\) and generates a sequence \(\{s_{k}\}\) for every \(k \geq 1\) by

$$ s_{k+1} = \sigma _{k} s_{k} + (1 - \sigma _{k}) \mathcal{M} s_{k}. $$
(1.4)

To achieve weak convergence, the parameter sequence \(\{\sigma _{k}\}\) must satisfy certain criteria. The Halpern iteration is another structured iterative method that is more effective at achieving strong convergence in infinite-dimensional Hilbert spaces. The iterative sequence is as follows:

$$ s_{k+1} = \sigma _{k} s_{1} + (1 - \sigma _{k}) \mathcal{M} s_{k}, $$
(1.5)

where \(s_{1} \in \mathcal{E}\) and the sequence \(\{\sigma _{k}\} \subset (0, 1)\) tends to zero and is nonsummable, i.e.,

$$ \sigma _{k} \rightarrow 0 \quad \text{and} \quad \sum _{k=1}^{\infty} \sigma _{k} = + \infty . $$
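A standard choice satisfying both requirements is, for instance, \(\sigma _{k} = \frac{1}{k+1}\): it tends to zero, while \(\sum_{k=1}^{\infty} \frac{1}{k+1} = +\infty \).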

Furthermore, the viscosity algorithm [23], in which the cost mapping \(\mathcal{M}\) is iteratively combined with a contraction mapping, is a generic variant of the Halpern iteration. Finally, the hybrid steepest-descent approach published in [39] is another methodology that provides strong convergence.

Tan et al. [31, 32] developed an innovative numerical scheme, the extragradient viscosity algorithm, for solving variational inequalities with a fixed-point constraint given by a ρ-demicontractive mapping, by combining the extragradient algorithm [10, 18] with the Mann-type technique [22]. The authors showed the strong convergence of all methods under the condition that the operator is monotone and Lipschitz continuous. These techniques have the advantage of being easy to implement numerically with standard optimization tools, as shown in [31, 32].

The fundamental issue with these methods is that they rely on viscosity and Mann-type techniques to achieve strong convergence. As is well known, strong convergence of iterative sequences is important, especially in infinite-dimensional settings, and only a few strongly convergent methods employ inertial schemes. From an algorithmic perspective, the Mann and viscosity steps may be difficult to evaluate, which affects the convergence speed and usefulness of the algorithm. These steps increase the number of numerical and computational operations, making the overall scheme more complicated.

As a result, the following straightforward question arises:

Is it possible to design self-adaptive strongly convergent inertial extragradient algorithms that do not rely on Mann- and viscosity-type methods for solving variational inequalities and fixed-point problems?

We respond to the above question by constructing strongly convergent extragradient-type algorithms, two of subgradient extragradient type and two of Tseng type, for solving monotone variational inequalities and fixed-point problems of ρ-demicontractive mappings in real Hilbert spaces, inspired by the studies described in [31, 32]. Furthermore, we avoid employing any hybrid schemes, such as the Mann-type scheme and the viscosity scheme, to achieve the strong convergence of these methods. Instead, we present novel strongly convergent algorithms that make use of inertial mechanisms.

The paper is organized as follows. Section 2 gives some basic results. Section 3 introduces four different methods and establishes their convergence analysis. Finally, Sect. 4 provides some numerical results to demonstrate the practical use of the proposed methods.

2 Preliminaries

Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{E}\), the real Hilbert space. For any \(s, r \in \mathcal{E}\), we have

  1. (i)

    \(\|s + r\|^{2} = \|s\|^{2} + 2 \langle s, r \rangle + \|r\|^{2}\);

  2. (ii)

    \(\|s + r\|^{2} \leq \|s\|^{2} + 2 \langle r, s + r \rangle \);

  3. (iii)

    \(\|b s + (1 - b) r \|^{2} = b \|s\|^{2} + (1 - b)\| r \|^{2} - b(1 - b) \|s - r\|^{2}\).

A metric projection \(P_{\mathcal{D}}(s)\) of \(s \in \mathcal{E}\) is defined by

$$ P_{\mathcal{D}}(s) = \arg \min \bigl\{ \Vert s - r \Vert : r \in \mathcal{D} \bigr\} . $$

It is well known that \(P_{\mathcal{D}}\) is nonexpansive and possesses the following important properties:

  1. (1)

    \(\langle s - P_{\mathcal{D}}(s), r - P_{\mathcal{D}}(s) \rangle \leq 0\), \(\forall r \in \mathcal{D}\);

  2. (2)

    \(\|P_{\mathcal{D}}(s) - P_{\mathcal{D}}(r)\|^{2} \leq \langle P_{ \mathcal{D}}(s) - P_{\mathcal{D}}(r), s - r \rangle \), \(\forall s, r \in \mathcal{E}\).

Definition 2.1

Let \(\mathcal{M} : \mathcal{E} \rightarrow \mathcal{E}\) be a nonlinear mapping with \(\operatorname{Fix}(\mathcal{M}) \neq \emptyset \). Then, \(I - \mathcal{M}\) is said to be demiclosed at zero if, for any sequence \(\{s_{k}\}\) in \(\mathcal{E}\), the following implication holds:

$$ s_{k} \rightharpoonup s \quad \text{and} \quad (I - \mathcal{M}) s_{k} \rightarrow 0 \quad \Rightarrow\quad s \in \operatorname{Fix}( \mathcal{M}). $$

Definition 2.2

Let \(\mathcal{N} : \mathcal{D} \rightarrow \mathcal{D}\) be an operator. It is said to be:

  1. (1)

    monotone if

    $$ \bigl\langle \mathcal{N}(s_{1}) - \mathcal{N}(s_{2}), s_{1} - s_{2} \bigr\rangle \geq 0, \quad \forall s_{1}, s_{2} \in \mathcal{D}; $$
  2. (2)

    Lipschitz continuous if there exists a constant \(L > 0\) such that

    $$ \bigl\Vert \mathcal{N}(s_{1}) - \mathcal{N}(s_{2}) \bigr\Vert \leq L \Vert s_{1} - s_{2} \Vert , \quad \forall s_{1}, s_{2} \in \mathcal{D}; $$
  3. (3)

    sequentially weakly continuous if \(\{\mathcal{N}(s_{k})\}\) converges weakly to \(\mathcal{N}(s)\) for every sequence \(\{s_{k}\}\) converging weakly to s.

Definition 2.3

Let \(\mathcal{M} : \mathcal{D} \rightarrow \mathcal{D}\) be a mapping with \(\operatorname{Fix}(\mathcal{M}) \neq \emptyset \). It is said to be:

  1. (1)

    ρ-demicontractive if there exists a number \(0 \leq \rho < 1\) such that

    $$ \bigl\Vert \mathcal{M}(s_{1}) - s_{2} \bigr\Vert ^{2} \leq \Vert s_{1} - s_{2} \Vert ^{2} + \rho \bigl\Vert (I - \mathcal{M}) (s_{1}) \bigr\Vert ^{2}, \quad \forall s_{2} \in \operatorname{Fix}( \mathcal{M}), s_{1} \in \mathcal{E}; $$

    or equivalently

    $$ \bigl\langle \mathcal{M}(s_{1}) - s_{1}, s_{1} - s_{2} \bigr\rangle \leq \frac{\rho - 1}{2} \bigl\Vert s_{1} - \mathcal{M}(s_{1}) \bigr\Vert ^{2}, \quad \forall s_{2} \in \operatorname{Fix}( \mathcal{M}), s_{1} \in \mathcal{E}. $$
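As a simple one-dimensional illustration (a standard example, supplied here for clarity), take \(\mathcal{M}(s) = -\frac{3}{2} s\) on \(\mathcal{E} = \mathbb{R}\), so that \(\operatorname{Fix}(\mathcal{M}) = \{0\}\). For \(s_{2} = 0\),

$$ \bigl\vert \mathcal{M}(s_{1}) \bigr\vert ^{2} = \frac{9}{4} s_{1}^{2} = s_{1}^{2} + \frac{1}{5} \cdot \frac{25}{4} s_{1}^{2} = s_{1}^{2} + \frac{1}{5} \bigl\vert s_{1} - \mathcal{M}(s_{1}) \bigr\vert ^{2}, $$

so \(\mathcal{M}\) is \(\frac{1}{5}\)-demicontractive, although it is clearly not nonexpansive.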

Lemma 2.4

([19])

Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an L-Lipschitz continuous and monotone operator on \(\mathcal{D}\). Take \(\mathcal{M} = P_{\mathcal{D}}(I - \hbar \mathcal{N})\), with \(\hbar > 0\). If \(\{s_{k}\}\) is a sequence in \(\mathcal{E}\) that satisfies \(s_{k} \rightharpoonup q\) and \(s_{k} - \mathcal{M}(s_{k}) \rightarrow 0\), then \(q \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) = \operatorname{Fix}(\mathcal{M})\).

Lemma 2.5

([28])

Suppose that \(\{c_{k}\} \subset [0, +\infty )\), \(\{d_{k}\} \subset (0, 1)\), and \(\{e_{k}\} \subset \mathbb{R}\) are sequences satisfying the following criteria:

$$ c_{k+1} \leq (1 - d_{k}) c_{k} + d_{k} e_{k}, \quad \forall k \in \mathbb{N}, \textit{ and } \sum_{k=1}^{+\infty} d_{k} = + \infty . $$

If \(\limsup_{j \rightarrow +\infty} e_{k_{j}} \leq 0\) for every subsequence \(\{c_{k_{j}}\}\) of \(\{c_{k}\}\) such that

$$ \liminf_{j \rightarrow +\infty} ( c_{k_{j}+1} - c_{k_{j}} ) \geq 0, $$

then \(\lim_{k \rightarrow \infty} c_{k} = 0\).

3 Main results

In this section, we examine in detail the convergence of four novel inertial extragradient algorithms for solving the fixed-point and variational inequality problems. First, we state the algorithms. To establish strong convergence, it is assumed that the following conditions are satisfied:

(\(\mathcal{N}\)1) The common solution set \(\operatorname{Fix}(\mathcal{M}) \cap \operatorname{VI}(\mathcal{D}, \mathcal{N})\) is nonempty;

(\(\mathcal{N}\)2) The operator \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) is monotone;

(\(\mathcal{N}\)3) The operator \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) is Lipschitz continuous;

(\(\mathcal{N}\)4) The mapping \(\mathcal{M} : \mathcal{E} \rightarrow \mathcal{E}\) is ρ-demicontractive for \(0 \leq \rho < 1\) and demiclosed at zero;

(\(\mathcal{N}\)5) The operator \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) is sequentially weakly continuous.

Algorithm 1

(Inertial Subgradient Extragradient Method With Constant Step-Size Rule)

STEP 0: Take \(s_{0}, s_{1} \in \mathcal{D}\), \(\ell \in (0, 1)\), \(0 < \hbar < \frac{1}{L}\) and a sequence \(\{\varsigma _{k}\} \subset (0, 1 - \rho )\) that satisfies the following condition:

$$ \lim_{k \rightarrow +\infty} \varsigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \varsigma _{k} = +\infty . $$

STEP 1: Calculate

$$ q_{k} = s_{k} + \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} \bigl[ s_{k} + \ell _{k} (s_{k} - s_{k-1}) \bigr], $$

where \(\ell _{k}\) is defined as follows:

$$ 0 \leq \ell _{k} \leq \hat{\ell _{k}} \quad \text{and} \quad \hat{\ell _{k}} = \textstyle\begin{cases} \min \{\frac{\ell}{2}, \frac{\chi _{k}}{ \Vert s_{k} - s_{k-1} \Vert } \} & \text{if } s_{k} \neq s_{k-1}, \\ \frac{\ell}{2} & \text{otherwise}. \end{cases} $$
(3.1)

Moreover, take a sequence \(\chi _{k} = o(\varsigma _{k})\), i.e., one satisfying the condition \(\lim_{k \rightarrow +\infty} \frac{\chi _{k}}{\varsigma _{k}} = 0\).
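For instance (an illustrative choice of ours, not prescribed by the algorithm), \(\varsigma _{k} = \frac{1}{k+1}\) and \(\chi _{k} = \frac{1}{(k+1)^{2}}\) satisfy all of the above requirements whenever \(\rho < \frac{1}{2}\), since \(\varsigma _{k} \rightarrow 0\), \(\sum_{k} \varsigma _{k} = +\infty \), and \(\chi _{k}/\varsigma _{k} = \frac{1}{k+1} \rightarrow 0\).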

STEP 2: Calculate

$$ r_{k} = P_{\mathcal{D}}\bigl(q_{k} - \hbar \mathcal{N}(q_{k})\bigr). $$

If \(q_{k} = r_{k}\), then STOP. Otherwise, go to STEP 3.

STEP 3: Construct a half-space first

$$ \mathcal{E}_{k} = \bigl\{ z \in \mathcal{E} : \bigl\langle q_{k} - \hbar \mathcal{N} (q_{k}) - r_{k}, z - r_{k} \bigr\rangle \leq 0 \bigr\} $$

and \(p_{k} = P_{\mathcal{E}_{k}} (q_{k} - \hbar \mathcal{N} (r_{k})) \).

STEP 4: Choose a sequence \(\{\sigma _{k}\} \subset (0, 1 - \rho )\) and calculate

$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$

Set \(k := k + 1\) and go to STEP 1.
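To make the structure of Algorithm 1 concrete, the following Python sketch implements one pass of STEPS 1–4 under stated assumptions: `proj_D`, `N`, and `M` are user-supplied callables for \(P_{\mathcal{D}}\), \(\mathcal{N}\), and \(\mathcal{M}\); the half-space projection uses the closed form discussed after (1.2); and the parameter choices marked as examples are our own. It is an illustrative sketch, not a definitive implementation.

```python
import numpy as np

def half_space_proj(z, a, r):
    """Project z onto E_k = {x : <a, x - r> <= 0}; closed form (returns z if a = 0)."""
    t = np.dot(a, z - r)
    return z if t <= 0 else z - (t / np.dot(a, a)) * a

def algorithm1_step(s_prev, s, k, N, M, proj_D, hbar, ell, rho):
    """One iteration of Algorithm 1; hbar in (0, 1/L), ell in (0, 1)."""
    varsigma = 1.0 / (k + 1)          # example: varsigma_k -> 0, sum diverges
    chi = 1.0 / (k + 1) ** 2          # example: chi_k = o(varsigma_k)
    diff = np.linalg.norm(s - s_prev)
    ell_k = ell / 2 if diff == 0 else min(ell / 2, chi / diff)   # rule (3.1)
    w = s + ell_k * (s - s_prev)      # inertial extrapolation
    q = w - varsigma * w              # STEP 1
    r = proj_D(q - hbar * N(q))       # STEP 2
    a = q - hbar * N(q) - r           # normal vector of the half-space E_k
    p = half_space_proj(q - hbar * N(r), a, r)   # STEP 3
    sigma = 0.5 * (1 - rho)           # example: sigma_k in (0, 1 - rho)
    return (1 - sigma) * p + sigma * M(p)        # STEP 4
```

A full run simply iterates `algorithm1_step`, stopping once \(\|q_{k} - r_{k}\|\) falls below a tolerance, in line with the stopping criterion of STEP 2.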

Algorithm 2

(Inertial Subgradient Extragradient Method With Nonmonotone Step-Size Rule)

STEP 0: Take \(s_{0}, s_{1} \in \mathcal{D}\), \(\ell \in (0, 1)\), \(\mu \in (0, 1)\), \(\hbar _{1} > 0\). Moreover, take a sequence \(\{\beth _{k}\}\) such that \(\sum_{k=1}^{\infty} \beth _{k} < +\infty \) and a sequence \(\{\varsigma _{k}\} \subset (0, 1 - \rho )\) that satisfies the following condition:

$$ \lim_{k \rightarrow +\infty} \varsigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \varsigma _{k} = +\infty . $$

STEP 1: Calculate

$$ q_{k} = s_{k} + \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} \bigl[ s_{k} + \ell _{k} (s_{k} - s_{k-1}) \bigr], $$

where \(\ell _{k}\) is defined as follows:

$$ 0 \leq \ell _{k} \leq \hat{\ell _{k}} \quad \text{and} \quad \hat{\ell _{k}} = \textstyle\begin{cases} \min \{\frac{\ell}{2}, \frac{\chi _{k}}{ \Vert s_{k} - s_{k-1} \Vert } \} & \text{if } s_{k} \neq s_{k-1}, \\ \frac{\ell}{2} & \text{otherwise}. \end{cases} $$
(3.2)

Moreover, choose a sequence \(\chi _{k} = o(\varsigma _{k})\) satisfying the condition \(\lim_{k \rightarrow +\infty} \frac{\chi _{k}}{\varsigma _{k}} = 0\).

STEP 2: Calculate

$$ r_{k} = P_{\mathcal{D}}\bigl(q_{k} - \hbar _{k} \mathcal{N}(q_{k})\bigr). $$

If \(q_{k} = r_{k}\), then STOP. Otherwise, go to STEP 3.

STEP 3: Create a half-space first

$$ \mathcal{E}_{k} = \bigl\{ z \in \mathcal{E} : \bigl\langle q_{k} - \hbar _{k} \mathcal{N} (q_{k}) - r_{k}, z - r_{k} \bigr\rangle \leq 0 \bigr\} $$

and calculate \(p_{k} = P_{\mathcal{E}_{k}} (q_{k} - \hbar _{k} \mathcal{N} (r_{k})) \).

STEP 4: Choose a sequence \(\{\sigma _{k}\} \subset (0, 1 - \rho )\) and calculate

$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$

STEP 5: Calculate

$$ \hbar _{k+1} = \textstyle\begin{cases} \min \{ \hbar _{k} + \beth _{k}, \frac{\mu \Vert q_{k} - r_{k} \Vert ^{2} + \mu \Vert p_{k} - r_{k} \Vert ^{2}}{2 [ \langle \mathcal{N}(q_{k})- \mathcal{N}(r_{k}), p_{k} - r_{k} \rangle ]} \} & \text{if } \langle \mathcal{N}(q_{k})- \mathcal{N}(r_{k}), p_{k} - r_{k} \rangle > 0, \\ \hbar _{k} + \beth _{k}, & \text{otherwise}. \end{cases} $$
(3.3)

Set \(k := k + 1\) and go to STEP 1.
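The step-size update (3.3) is straightforward to implement; a minimal sketch (with our own naming, `beth_k` denoting \(\beth _{k}\)) reads:

```python
import numpy as np

def update_step_size(hbar_k, beth_k, mu, Nq, Nr, q, r, p):
    """Nonmonotone step-size rule (3.3); mu in (0, 1), sum of beth_k finite."""
    denom = 2.0 * np.dot(Nq - Nr, p - r)
    if denom > 0:
        cand = mu * (np.linalg.norm(q - r) ** 2 + np.linalg.norm(p - r) ** 2) / denom
        return min(hbar_k + beth_k, cand)
    return hbar_k + beth_k
```

A typical summable perturbation is, for example, \(\beth _{k} = \frac{1}{k^{2}}\); by Lemma 3.1 below, the resulting step sizes remain bounded and convergent even though they are not monotone.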

Lemma 3.1

The sequence \(\{\hbar _{k} \}\) generated by the expression (3.3) converges to some ħ satisfying \(\min \{\frac{\mu}{L}, \hbar _{1} \} \leq \hbar \leq \hbar _{1} + P\), where

$$ P = \sum_{k=1}^{+\infty} \beth _{k}. $$

Proof

Consider the case \(\langle \mathcal{N}(q_{k})- \mathcal{N}(r_{k}), p_{k} - r_{k} \rangle > 0\). By the Cauchy–Schwarz inequality, the elementary bound \(a^{2} + b^{2} \geq 2ab\), and the Lipschitz continuity of \(\mathcal{N}\), we have

$$\begin{aligned} \frac{\mu ( \Vert q_{k} - r_{k} \Vert ^{2} + \Vert p_{k} - r_{k} \Vert ^{2})}{2 \langle \mathcal{N}(q_{k})- \mathcal{N}(r_{k}), p_{k} - r_{k} \rangle} &\geq \frac{2 \mu \Vert q_{k} - r_{k} \Vert \Vert p_{k} - r_{k} \Vert }{2 \Vert \mathcal{N}(q_{k})- \mathcal{N}(r_{k}) \Vert \Vert p_{k} - r_{k} \Vert } \\ &\geq \frac{2 \mu \Vert q_{k} - r_{k} \Vert \Vert p_{k} - r_{k} \Vert }{2 L \Vert q_{k} - r_{k} \Vert \Vert p_{k} - r_{k} \Vert } \\ &\geq \frac{\mu}{L}. \end{aligned}$$
(3.4)

By definition of \(\hbar _{k+1}\), we have

$$ \min \biggl\{ \frac{\mu}{L}, \hbar _{1} \biggr\} \leq \hbar _{k} \leq \hbar _{1} + P. $$

Let

$$ [\hbar _{k+1} - \hbar _{k}]^{+} = \max \{0, \hbar _{k+1} - \hbar _{k} \} $$

and

$$ [\hbar _{k+1} - \hbar _{k}]^{-} = \max \bigl\{ 0, -(\hbar _{k+1} - \hbar _{k}) \bigr\} . $$

By using the definition of \(\{\hbar _{k}\}\), we have

$$ \sum_{k=1}^{+\infty} (\hbar _{k+1} - \hbar _{k})^{+} = \sum _{k=1}^{+ \infty} \max \{0, \hbar _{k+1} - \hbar _{k} \} \leq P < + \infty . $$
(3.5)

This implies that the series \(\sum_{k=1}^{+\infty} (\hbar _{k+1} - \hbar _{k})^{+}\) is convergent. Following that, we must demonstrate the convergence of

$$ \sum_{k=1}^{+\infty} (\hbar _{k+1} - \hbar _{k})^{-}. $$

Suppose, to the contrary, that \(\sum_{k=1}^{+\infty} (\hbar _{k+1} - \hbar _{k})^{-} = +\infty \). Note that

$$ \hbar _{k+1} - \hbar _{k} = (\hbar _{k+1} - \hbar _{k})^{+} - (\hbar _{k+1} - \hbar _{k})^{-}. $$

Thus, we obtain

$$ \hbar _{k+1} - \hbar _{1} = \sum_{i=1}^{k} (\hbar _{i+1} - \hbar _{i}) = \sum _{i=1}^{k} (\hbar _{i+1} - \hbar _{i})^{+} - \sum _{i=1}^{k} (\hbar _{i+1} - \hbar _{i})^{-}. $$
(3.6)

Letting \(k \rightarrow +\infty \) in the formulation (3.6), we would obtain \(\hbar _{k} \rightarrow - \infty \) as \(k \rightarrow \infty \), which is a contradiction. Hence, both series \(\sum_{i=1}^{+\infty} (\hbar _{i+1} - \hbar _{i})^{+}\) and \(\sum_{i=1}^{+\infty} (\hbar _{i+1} - \hbar _{i})^{-}\) converge, and taking \(k \rightarrow +\infty \) in (3.6), we obtain \(\lim_{k \rightarrow \infty} \hbar _{k} = \hbar \). This completes the proof of the lemma. □

Algorithm 3

(Inertial Tseng’s Extragradient Method With Constant Step-Size Rule)

STEP 0: Consider \(s_{0}, s_{1} \in \mathcal{D}\), \(\ell \in (0, 1)\), \(\mu \in (0, 1)\), \(0 < \hbar < \frac{1}{L}\) and a sequence \(\{\varsigma _{k}\} \subset (0, 1 - \rho )\) that satisfies the following condition:

$$ \lim_{k \rightarrow +\infty} \varsigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \varsigma _{k} = +\infty . $$

STEP 1: Calculate

$$ q_{k} = s_{k} + \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} \bigl[ s_{k} + \ell _{k} (s_{k} - s_{k-1}) \bigr], $$

where \(\ell _{k}\) is defined as follows:

$$ 0 \leq \ell _{k} \leq \hat{\ell _{k}} \quad \text{and} \quad \hat{\ell _{k}} = \textstyle\begin{cases} \min \{\frac{\ell}{2}, \frac{\chi _{k}}{ \Vert s_{k} - s_{k-1} \Vert } \} & \text{if } s_{k} \neq s_{k-1}, \\ \frac{\ell}{2} & \text{otherwise}. \end{cases} $$
(3.7)

Moreover, choose a sequence \(\chi _{k} = o(\varsigma _{k})\) satisfying the condition \(\lim_{k \rightarrow +\infty} \frac{\chi _{k}}{\varsigma _{k}} = 0\).

STEP 2: Calculate

$$ r_{k} = P_{\mathcal{D}}\bigl(q_{k} - \hbar \mathcal{N}(q_{k})\bigr). $$

If \(q_{k} = r_{k}\), then STOP. Otherwise, go to STEP 3.

STEP 3: Calculate

$$ p_{k} = r_{k} + \hbar \bigl[ \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}) \bigr]. $$

STEP 4: Choose a sequence \(\{\sigma _{k}\} \subset (0, 1 - \rho )\) and calculate

$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$

Set \(k := k + 1\) and go back to STEP 1.
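Algorithms 3 and 4 differ from Algorithms 1 and 2 only in STEP 3, where the half-space projection is replaced by Tseng's explicit correction; a minimal sketch of STEPS 2–3 (with the same hypothetical helper names as in the earlier sketch) is:

```python
def tseng_steps(q, N, proj_D, hbar):
    """STEPS 2-3 of Algorithm 3: one projection onto D, then an explicit correction."""
    r = proj_D(q - hbar * N(q))       # STEP 2: the only projection per iteration
    p = r + hbar * (N(q) - N(r))      # STEP 3: no second projection is needed
    return r, p
```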

Algorithm 4

(Inertial Tseng’s Extragradient Method With Nonmonotone Step-Size Rule)

STEP 0: Consider \(s_{0}, s_{1} \in \mathcal{D}\), \(\ell \in (0, 1)\), \(\mu \in (0, 1)\), \(\hbar _{1} > 0\). Moreover, take a sequence \(\{\beth _{k}\}\) such that \(\sum_{k=1}^{\infty} \beth _{k} < +\infty \) and a sequence \(\{\varsigma _{k}\} \subset (0, 1 - \rho )\) that satisfies the following condition:

$$ \lim_{k \rightarrow +\infty} \varsigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \varsigma _{k} = +\infty . $$

STEP 1: Calculate

$$ q_{k} = s_{k} + \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} \bigl[ s_{k} + \ell _{k} (s_{k} - s_{k-1}) \bigr], $$

where \(\ell _{k}\) is defined as follows:

$$ 0 \leq \ell _{k} \leq \hat{\ell _{k}} \quad \text{and} \quad \hat{\ell _{k}} = \textstyle\begin{cases} \min \{\frac{\ell}{2}, \frac{\chi _{k}}{ \Vert s_{k} - s_{k-1} \Vert } \} & \text{if } s_{k} \neq s_{k-1}, \\ \frac{\ell}{2} & \text{otherwise}. \end{cases} $$
(3.8)

Moreover, choose a sequence \(\chi _{k} = o(\varsigma _{k})\) satisfying the condition \(\lim_{k \rightarrow +\infty} \frac{\chi _{k}}{\varsigma _{k}} = 0\).

STEP 2: Calculate

$$ r_{k} = P_{\mathcal{D}}\bigl(q_{k} - \hbar _{k} \mathcal{N}(q_{k})\bigr). $$

If \(q_{k} = r_{k}\), then STOP. Otherwise, go to STEP 3.

STEP 3: Calculate

$$ p_{k} = r_{k} + \hbar _{k} \bigl[ \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}) \bigr]. $$

STEP 4: Choose a sequence \(\{\sigma _{k}\} \subset (0, 1 - \rho )\) and calculate

$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$

STEP 5: Compute

$$ \hbar _{k+1} = \textstyle\begin{cases} \min \{ \hbar _{k} + \beth _{k}, \frac{\mu \Vert q_{k} - r_{k} \Vert }{ \Vert \mathcal{N}(q_{k})- \mathcal{N}(r_{k}) \Vert } \} & \text{if } \mathcal{N}(q_{k})\neq \mathcal{N}(r_{k}), \\ \hbar _{k} + \beth _{k}, & \text{otherwise}. \end{cases} $$
(3.9)

Set \(k := k + 1\) and go back to STEP 1.
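For completeness, the update (3.9) can be sketched in the same style (again with our own naming):

```python
import numpy as np

def update_step_size_tseng(hbar_k, beth_k, mu, Nq, Nr, q, r):
    """Step-size rule (3.9) for Algorithm 4; mu in (0, 1), sum of beth_k finite."""
    gap = np.linalg.norm(Nq - Nr)
    if gap > 0:
        return min(hbar_k + beth_k, mu * np.linalg.norm(q - r) / gap)
    return hbar_k + beth_k
```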

Lemma 3.2

The sequence \(\{\hbar _{k} \}\) generated by the expression (3.9) converges to some ħ satisfying \(\min \{\frac{\mu}{L}, \hbar _{1} \} \leq \hbar \leq \hbar _{1} + P\), where

$$ P = \sum_{k=1}^{+\infty} \beth _{k}. $$

Proof

Since the mapping \(\mathcal{N}\) is Lipschitz continuous, in the case \(\mathcal{N}(q_{k}) \neq \mathcal{N}(r_{k})\) we have

$$\begin{aligned} \frac{\mu \Vert q_{k} - r_{k} \Vert }{ \Vert \mathcal{N}(q_{k})- \mathcal{N}(r_{k}) \Vert } &\geq \frac{\mu \Vert q_{k} - r_{k} \Vert }{ L \Vert q_{k} - r_{k} \Vert } \geq \frac{\mu}{L}. \end{aligned}$$
(3.10)

The remainder of the proof is similar to that of Lemma 3.1. □

Lemma 3.3

Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 2. For any \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we have

$$ \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \leq \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert q_{k} - r_{k} \Vert ^{2} - \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert p_{k} - r_{k} \Vert ^{2}. $$

Proof

Consider that

$$\begin{aligned} \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} ={}& \bigl\lVert P_{ \mathcal{E}_{k}}\bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \varpi ^{*} \bigr\rVert ^{2} \\ ={}& \bigl\lVert P_{\mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] + \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k}) \bigr] - \varpi ^{*} \bigr\rVert ^{2} \\ ={}& \bigl\lVert \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \varpi ^{*} \bigr\rVert ^{2} + \bigl\lVert P_{\mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] \bigr\rVert ^{2} \\ &{} + 2 \bigl\langle P_{\mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr], \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k}) \bigr] - \varpi ^{*} \bigr\rangle . \end{aligned}$$
(3.11)

Since \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) \subset \mathcal{D} \subset \mathcal{E}_{k}\), we have

$$\begin{aligned} & \bigl\lVert P_{\mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] \bigr\rVert ^{2} \\ &\qquad {}+ \bigl\langle P_{\mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr], \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k}) \bigr] - \varpi ^{*} \bigr\rangle \\ &\quad = \bigl\langle \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - P_{ \mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr], \varpi ^{*} - P_{\mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] \bigr\rangle \leq 0. \end{aligned}$$
(3.12)

Moreover, we have

$$\begin{aligned} & \bigl\langle P_{\mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr], \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k}) \bigr] - \varpi ^{*} \bigr\rangle \\ &\quad \leq - \bigl\lVert P_{\mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] \bigr\rVert ^{2}. \end{aligned}$$
(3.13)

By combining expressions (3.11) and (3.13), we obtain

$$\begin{aligned} \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} &\leq \bigl\lVert q_{k} - \hbar _{k} \mathcal{N}(r_{k}) - \varpi ^{*} \bigr\rVert ^{2} - \bigl\lVert P_{ \mathcal{E}_{k}} \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] - \bigl[q_{k} - \hbar _{k} \mathcal{N}(r_{k})\bigr] \bigr\rVert ^{2} \\ &\leq \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - \Vert q_{k} - p_{k} \Vert ^{2} + 2 \hbar _{k} \bigl\langle \mathcal{N}(r_{k}), \varpi ^{*} - p_{k} \bigr\rangle . \end{aligned}$$
(3.14)

Furthermore, by the monotonicity of \(\mathcal{N}\), we have

$$ \bigl\langle \mathcal{N} \bigl(\varpi ^{*}\bigr), y - \varpi ^{*} \bigr\rangle - \bigl\langle \mathcal{N} (y), y - \varpi ^{*} \bigr\rangle \leq 0, \quad \forall y \in \mathcal{D}. $$

Since \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we obtain

$$ \bigl\langle \mathcal{N} (y), y - \varpi ^{*} \bigr\rangle \geq 0, \quad \forall y \in \mathcal{D}. $$

By substituting \(y = r_{k} \in \mathcal{D}\), we have

$$ \bigl\langle \mathcal{N} (r_{k}), r_{k} - \varpi ^{*} \bigr\rangle \geq 0. $$

Thus, this implies that

$$ \bigl\langle \mathcal{N}(r_{k}), \varpi ^{*} - p_{k} \bigr\rangle = \bigl\langle \mathcal{N}(r_{k}), \varpi ^{*} - r_{k} \bigr\rangle + \bigl\langle \mathcal{N}(r_{k}), r_{k} - p_{k} \bigr\rangle \leq \bigl\langle \mathcal{N}(r_{k}), r_{k} - p_{k} \bigr\rangle . $$
(3.15)

Combining formulas (3.14) and (3.15), we obtain

$$\begin{aligned} \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \leq{}& \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - \Vert q_{k} - p_{k} \Vert ^{2} + 2 \hbar _{k} \bigl\langle \mathcal{N}(r_{k}), r_{k} - p_{k} \bigr\rangle \\ \leq{}& \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - \Vert q_{k} - r_{k} + r_{k} - p_{k} \Vert ^{2} + 2 \hbar _{k} \bigl\langle \mathcal{N}(r_{k}), r_{k} - p_{k} \bigr\rangle \\ \leq{}& \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - \Vert q_{k} - r_{k} \Vert ^{2} - \Vert r_{k} - p_{k} \Vert ^{2} \\ &{}+ 2 \bigl\langle q_{k} - \hbar _{k} \mathcal{N}(r_{k}) - r_{k}, p_{k} - r_{k} \bigr\rangle . \end{aligned}$$
(3.16)

By using expression \(p_{k} = P_{\mathcal{E}_{k}} [q_{k} - \hbar _{k} \mathcal{N}(r_{k})]\), we have

$$\begin{aligned} & 2 \bigl\langle q_{k} - \hbar _{k} \mathcal{N}(r_{k}) - r_{k}, p_{k} - r_{k} \bigr\rangle \\ &\quad= 2 \bigl\langle q_{k} - \hbar _{k} \mathcal{N}(q_{k}) - r_{k}, p_{k} - r_{k} \bigr\rangle + 2 \hbar _{k} \bigl\langle \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}), p_{k} - r_{k} \bigr\rangle \\ &\quad\leq \frac{\hbar _{k}}{\hbar _{k+1}} 2 \hbar _{k+1} \bigl\langle \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}), p_{k} - r_{k} \bigr\rangle \\ &\quad\leq \frac{ \mu \hbar _{k}}{\hbar _{k+1}} \Vert q_{k} - r_{k} \Vert ^{2} + \frac{ \mu \hbar _{k}}{\hbar _{k+1}} \Vert p_{k} - r_{k} \Vert ^{2}. \end{aligned}$$
(3.17)

From expressions (3.16) and (3.17) we can obtain

$$\begin{aligned} & \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \\ &\quad\leq \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - \Vert q_{k} - r_{k} \Vert ^{2} - \Vert r_{k} - p_{k} \Vert ^{2} + \frac{\hbar _{k}}{\hbar _{k+1}} \bigl[ \mu \Vert q_{k} - r_{k} \Vert ^{2} + \mu \Vert p_{k} - r_{k} \Vert ^{2} \bigr] \\ &\quad\leq \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert q_{k} - r_{k} \Vert ^{2} - \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert p_{k} - r_{k} \Vert ^{2}. \end{aligned}$$
(3.18)

 □

Lemma 3.4

Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 1. For any \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we have

$$ \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \leq \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - (1 - \hbar L ) \Vert q_{k} - r_{k} \Vert ^{2} - (1 - \hbar L ) \Vert p_{k} - r_{k} \Vert ^{2}. $$

Proof

The proof is similar to the proof of Lemma 3.3. □

Lemma 3.5

Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 4. For any \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we have

$$ \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \leq \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} - \biggl( 1 - \mu ^{2} \frac{\hbar _{k}^{2}}{\hbar _{k+1}^{2}} \biggr) \lVert q_{k} - r_{k} \rVert ^{2}. $$

Proof

Consider the following:

$$\begin{aligned} & \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad= \bigl\lVert r_{k} + \hbar _{k} \bigl[\mathcal{N} (q_{k}) - \mathcal{N}(r_{k})\bigr] - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad= \bigl\lVert r_{k} - \varpi ^{*} \bigr\rVert ^{2} + \hbar _{k}^{2} \bigl\lVert \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rVert ^{2} + 2 \hbar _{k} \bigl\langle r_{k} - \varpi ^{*}, \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rangle \\ &\quad= \bigl\lVert r_{k} + q_{k} - q_{k} - \varpi ^{*} \bigr\rVert ^{2} + \hbar _{k}^{2} \bigl\lVert \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rVert ^{2} + 2 \hbar _{k} \bigl\langle r_{k} - \varpi ^{*}, \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rangle \\ &\quad= \lVert r_{k} - q_{k} \rVert ^{2} + \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} + 2 \bigl\langle r_{k} - q_{k}, q_{k} - \varpi ^{*}\bigr\rangle \\ &\qquad {} + \hbar _{k}^{2} \bigl\lVert \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rVert ^{2} + 2 \hbar _{k} \bigl\langle r_{k} - \varpi ^{*}, \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rangle \\ &\quad= \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} + \lVert r_{k} - q_{k} \rVert ^{2} + 2 \bigl\langle r_{k} - q_{k}, r_{k} - \varpi ^{*} \bigr\rangle + 2 \langle r_{k} - q_{k}, q_{k} - r_{k} \rangle \\ &\qquad {}+ \hbar _{k}^{2} \bigl\lVert \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rVert ^{2} + 2 \hbar _{k} \bigl\langle r_{k} - \varpi ^{*}, \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rangle . \end{aligned}$$
(3.19)

Furthermore, since \(r_{k} = P_{\mathcal{D}}(q_{k} - \hbar _{k} \mathcal{N}(q_{k}))\), the projection property gives

$$ \bigl\langle q_{k} - \hbar _{k} \mathcal{N}(q_{k}) - r_{k}, y - r_{k} \bigr\rangle \leq 0, \quad \forall y \in \mathcal{D}. $$
(3.20)

For given \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we can write

$$ \bigl\langle q_{k} - r_{k}, \varpi ^{*} - r_{k} \bigr\rangle \leq \hbar _{k} \bigl\langle \mathcal{N}(q_{k}), \varpi ^{*} - r_{k} \bigr\rangle . $$
(3.21)

By combining expressions (3.19) and (3.21), we can obtain

$$\begin{aligned} & \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad\leq \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} + \lVert r_{k} - q_{k} \rVert ^{2} + 2 \hbar _{k} \bigl\langle \mathcal{N}(q_{k}), \varpi ^{*} - r_{k} \bigr\rangle - 2 \langle q_{k} - r_{k}, q_{k} - r_{k} \rangle \\ &\qquad{}+ \hbar _{k}^{2} \bigl\lVert \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rVert ^{2} - 2 \hbar _{k} \bigl\langle \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}), \varpi ^{*} - r_{k} \bigr\rangle \\ &\quad= \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} - \lVert q_{k} - r_{k} \rVert ^{2} + \hbar _{k}^{2} \bigl\lVert \mathcal{N} (q_{k}) - \mathcal{N}(r_{k}) \bigr\rVert ^{2} - 2 \hbar _{k} \bigl\langle \mathcal{N}(r_{k}), r_{k} - \varpi ^{*} \bigr\rangle . \end{aligned}$$
(3.22)

By the monotonicity of the mapping \(\mathcal{N}\) on \(\mathcal{D}\), we obtain

$$ \bigl\langle \mathcal{N} \bigl(\varpi ^{*}\bigr), y - \varpi ^{*} \bigr\rangle - \bigl\langle \mathcal{N} (y), y - \varpi ^{*} \bigr\rangle \leq 0, \quad \forall y \in \mathcal{D}. $$

By using \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we obtain

$$ \bigl\langle \mathcal{N} (y), y - \varpi ^{*} \bigr\rangle \geq 0, \quad \forall y \in \mathcal{D}. $$

By substituting \(y = r_{k} \in \mathcal{D}\), we can write

$$ \bigl\langle \mathcal{N} (r_{k}), r_{k} - \varpi ^{*} \bigr\rangle \geq 0. $$
(3.23)

From expressions (3.22) and (3.23), together with the bound \(\hbar _{k+1} \Vert \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}) \Vert \leq \mu \Vert q_{k} - r_{k} \Vert \) implied by (3.9), we can obtain

$$\begin{aligned} \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} &\leq \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} - \lVert q_{k} - r_{k} \rVert ^{2} + \mu ^{2} \frac{\hbar _{k}^{2}}{\hbar _{k+1}^{2}} \lVert q_{k} - r_{k} \rVert ^{2} \\ &= \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} - \biggl( 1 - \mu ^{2} \frac{\hbar _{k}^{2}}{\hbar _{k+1}^{2}} \biggr) \lVert q_{k} - r_{k} \rVert ^{2}. \end{aligned}$$
(3.24)

 □

Lemma 3.6

Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 3. For any \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we have

$$ \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \leq \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} - \bigl(1 - \hbar ^{2} L^{2} \bigr) \lVert q_{k} - r_{k} \rVert ^{2}. $$

Proof

The proof is similar to the proof of Lemma 3.5. □

Theorem 3.7

Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Then, the sequence \(\{s_{k}\}\) generated by Algorithm 2 converges strongly to some \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})\), where \(\varpi ^{*} = P_{\operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})} (0)\).

Proof

Claim 1: The sequence \(\{s_{k}\}\) is bounded.

It is given that

$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$

Using the definition of the sequence \(\{s_{k+1}\}\) together with the ρ-demicontractivity of \(\mathcal{M}\), we derive

$$\begin{aligned} \bigl\lVert s_{k+1} - \varpi ^{*} \bigr\rVert ^{2} &= \bigl\lVert (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}) - \varpi ^{*} \bigr\rVert ^{2} \\ &= \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} + 2 \sigma _{k} \bigl\langle p_{k} - \varpi ^{*}, \mathcal{M}(p_{k}) - p_{k} \bigr\rangle + \sigma _{k}^{2} \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} \\ &\leq \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} + \sigma _{k} ( \rho - 1) \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} + \sigma _{k}^{2} \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} \\ &= \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} - \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2}. \end{aligned}$$
(3.25)

By using the value of \(\{q_{k}\}\), we obtain

$$\begin{aligned} \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert &= \bigl\lVert s_{k} + \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} s_{k} - \ell _{k} \varsigma _{k} (s_{k} - s_{k-1}) - \varpi ^{*} \bigr\rVert \\ &= \bigl\lVert (1 - \varsigma _{k}) \bigl(s_{k} - \varpi ^{*}\bigr) + (1 - \varsigma _{k}) \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} \varpi ^{*} \bigr\rVert \end{aligned}$$
(3.26)
$$\begin{aligned} &\leq (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert + (1 - \varsigma _{k}) \ell _{k} \lVert s_{k} - s_{k-1} \rVert + \varsigma _{k} \bigl\lVert \varpi ^{*} \bigr\rVert \\ &\leq (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert + \varsigma _{k} K_{1}, \end{aligned}$$
(3.27)

where \(K_{1}\) is some fixed number such that

$$ (1 - \varsigma _{k}) \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert + \bigl\lVert \varpi ^{*} \bigr\rVert \leq K_{1}. $$

Since \(\hbar _{k} \rightarrow \hbar \), there exists a fixed number \(\vartheta \in (0, 1 - \mu )\) such that

$$ \lim_{k \rightarrow \infty} \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) = 1 - \mu > \vartheta > 0. $$

As a result, there exists a finite natural number \(N_{1} \in \mathbb{N}\) such that

$$ \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) > \vartheta > 0, \quad \forall k \geq N_{1}. $$
(3.28)

By using Lemma 3.3, we can write

$$ \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \leq \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2}, \quad \forall k \geq N_{1}. $$
(3.29)

From expressions (3.25), (3.27), and (3.29) we infer that

$$\begin{aligned} \bigl\Vert s_{k+1} - \varpi ^{*} \bigr\Vert &\leq (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert + \varsigma _{k} K_{1} - \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2}. \end{aligned}$$
(3.30)

Since \(\{\sigma _{k}\} \subset (0, 1 - \rho )\), the last term above is nonpositive, so that

$$\begin{aligned} \bigl\Vert s_{k+1} - \varpi ^{*} \bigr\Vert &\leq (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert + \varsigma _{k} K_{1} \\ &\leq \max \bigl\{ \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert , K_{1} \bigr\} \\ \vdots \\ &\leq \max \bigl\{ \bigl\Vert s_{N_{1}} - \varpi ^{*} \bigr\Vert , K_{1} \bigr\} . \end{aligned}$$
(3.31)

This implies that the sequence \(\{s_{k}\}\) is a bounded sequence.

Claim 2:

$$\begin{aligned} & \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert q_{k} - r_{k} \Vert ^{2} + \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert p_{k} - r_{k} \Vert ^{2} \\ &\qquad {}+ \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2} \\ &\quad \leq \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k+1} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k} K_{2}, \end{aligned}$$
(3.32)

for some \(K_{2} > 0\). By using the definition of \(\{s_{k+1}\}\), we have

$$\begin{aligned} \bigl\lVert s_{k+1} - \varpi ^{*} \bigr\rVert ^{2} &= \bigl\lVert (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}) - \varpi ^{*} \bigr\rVert ^{2} \\ &= \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} + 2 \sigma _{k} \bigl\langle p_{k} - \varpi ^{*}, \mathcal{M}(p_{k}) - p_{k} \bigr\rangle + \sigma _{k}^{2} \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} \\ &\leq \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} + \sigma _{k} ( \rho - 1) \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} + \sigma _{k}^{2} \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} \\ &= \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} - \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2}. \end{aligned}$$
(3.33)

By using the expression (3.18), we can derive

$$\begin{aligned} \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} &\leq \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2} - \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert q_{k} - r_{k} \Vert ^{2} - \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert p_{k} - r_{k} \Vert ^{2}. \end{aligned}$$
(3.34)

Thus, the expression (3.27) implies that

$$\begin{aligned} \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} &\leq (1 - \varsigma _{k})^{2} \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k}^{2} K_{1}^{2} + 2 K_{1} \varsigma _{k} (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert \\ &\leq \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k} \bigl[ \varsigma _{k} K_{1}^{2} + 2 K_{1} (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert \bigr] \\ &\leq \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k} K_{2}, \end{aligned}$$
(3.35)

for some \(K_{2} > 0\). Combining expressions (3.33), (3.34), and (3.35), we obtain

$$\begin{aligned} \bigl\Vert s_{k+1} - \varpi ^{*} \bigr\Vert ^{2}\leq{} &\bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k} K_{2} - \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2} \\ &{} - \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert q_{k} - r_{k} \Vert ^{2} - \biggl(1 - \frac{\mu \hbar _{k}}{\hbar _{k+1}} \biggr) \Vert p_{k} - r_{k} \Vert ^{2}. \end{aligned}$$
(3.36)

Claim 3:

By using the definition of \(\{q_{k}\}\), we obtain

$$\begin{aligned} & \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad= \bigl\lVert s_{k} + \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} s_{k} - \ell _{k} \varsigma _{k} (s_{k} - s_{k-1}) - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad= \bigl\lVert (1 - \varsigma _{k}) \bigl(s_{k} - \varpi ^{*}\bigr) + (1 - \varsigma _{k}) \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} \varpi ^{*} \bigr\rVert ^{2} \\ &\quad\leq \bigl\lVert (1 - \varsigma _{k}) \bigl(s_{k} - \varpi ^{*}\bigr) + (1 - \varsigma _{k}) \ell _{k} (s_{k} - s_{k-1}) \bigr\rVert ^{2} + 2 \varsigma _{k} \bigl\langle - \varpi ^{*}, q_{k} - \varpi ^{*} \bigr\rangle \\ &\quad= (1 - \varsigma _{k})^{2} \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert ^{2} + (1 - \varsigma _{k})^{2} \ell _{k}^{2} \lVert s_{k} - s_{k-1} \rVert ^{2} \\ &\qquad{}+ 2 \ell _{k} (1 - \varsigma _{k})^{2} \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \lVert s_{k} - s_{k-1} \rVert + 2 \varsigma _{k} \bigl\langle - \varpi ^{*}, q_{k} - s_{k+1} \bigr\rangle + 2 \varsigma _{k} \bigl\langle - \varpi ^{*}, s_{k+1} - \varpi ^{*} \bigr\rangle \\ &\quad\leq (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert ^{2} + \ell _{k}^{2} \lVert s_{k} - s_{k-1} \rVert ^{2} + 2 \ell _{k} (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \lVert s_{k} - s_{k-1} \rVert \\ &\qquad{}+ 2 \varsigma _{k} \bigl\lVert \varpi ^{*} \bigr\rVert \lVert q_{k} - s_{k+1} \rVert + 2 \varsigma _{k} \bigl\langle - \varpi ^{*}, s_{k+1} - \varpi ^{*} \bigr\rangle \\ &\quad= (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert ^{2} + \varsigma _{k} \biggl[ \ell _{k} \lVert s_{k} - s_{k-1} \rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert \\ &\qquad{}+ 2 (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert + 2 \bigl\lVert \varpi ^{*} \bigr\rVert \lVert q_{k} - s_{k+1} \rVert \\ &\qquad{}+ 2 \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k+1} \bigr\rangle \biggr]. \end{aligned}$$
(3.37)

Combining expressions (3.29) and (3.37), we obtain

$$\begin{aligned} & \bigl\lVert s_{k+1} - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad\leq (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert ^{2} + \varsigma _{k} \biggl[ \ell _{k} \lVert s_{k} - s_{k-1} \rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert \\ &\qquad{}+ 2 (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert + 2 \bigl\lVert \varpi ^{*} \bigr\rVert \lVert q_{k} - s_{k+1} \rVert \\ &\qquad{}+ 2 \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k+1} \bigr\rangle \biggr]. \end{aligned}$$
(3.38)

Claim 4: The sequence \(\{ \Vert s_{k} - \varpi ^{*} \Vert ^{2} \}\) converges to zero.

Set

$$ c_{k} := \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} $$

and

$$\begin{aligned} e_{k} :={}& \ell _{k} \lVert s_{k} - s_{k-1} \rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert + 2 (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert\\ &{} + 2 \bigl\lVert \varpi ^{*} \bigr\rVert \lVert q_{k} - s_{k+1} \rVert + 2 \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k+1} \bigr\rangle . \end{aligned}$$

Then, the inequality in Claim 3 can be rewritten as follows:

$$ c_{k+1} \leq (1 - \varsigma _{k}) c_{k} + \varsigma _{k} e_{k}. $$

Indeed, from Lemma 2.5, it suffices to prove that \(\limsup_{j \rightarrow \infty} e_{k_{j}} \leq 0\) for any subsequence \(\{c_{k_{j}}\}\) of \(\{c_{k}\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} ( p_{k_{j}+1} - p_{k_{j}} ) \geq 0. $$

This is comparable to demonstrating that

$$ \limsup_{j \rightarrow \infty} \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k_{j}+1} \bigr\rangle \leq 0 $$

and

$$ \limsup_{j \rightarrow \infty} \lVert q_{k_{j}} - s_{k_{j}+1} \rVert \leq 0, $$

for every subsequence \(\{\|s_{k_{j}} - \varpi ^{*}\|\}\) of \(\{\|s_{k} - \varpi ^{*}\|\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert \bigr) \geq 0. $$

Suppose that \(\{\|s_{k_{j}} - \varpi ^{*}\|\}\) is a subsequence of \(\{\|s_{k} - \varpi ^{*}\|\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert \bigr) \geq 0. $$

Then,

$$\begin{aligned} & \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert ^{2} \bigr) \\ &\quad = \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert \bigr) \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert + \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert \bigr) \geq 0. \end{aligned}$$
(3.39)

By using Claim 2, we have

$$\begin{aligned} & \limsup_{j \rightarrow \infty} \biggl[ \biggl(1 - \frac{\mu \hbar _{k_{j}}}{\hbar _{k_{j}+1}} \biggr) \Vert q_{k_{j}} - r_{k_{j}} \Vert ^{2} \\ &\qquad {} + \biggl(1 - \frac{\mu \hbar _{k_{j}}}{\hbar _{k_{j}+1}} \biggr) \Vert p_{k_{j}} - r_{k_{j}} \Vert ^{2} + \sigma _{k_{j}} (1 - \rho - \sigma _{k_{j}}) \bigl\lVert \mathcal{M} (p_{k_{j}}) - p_{k_{j}} \bigr\rVert ^{2} \biggr] \\ &\quad \leq \limsup_{j \rightarrow \infty} \bigl[ \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert ^{2} \bigr] + \limsup_{j \rightarrow \infty} \varsigma _{k_{j}} K_{2} \\ &\quad = - \liminf_{j \rightarrow \infty} \bigl[ \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert ^{2} \bigr] \\ &\quad \leq 0, \end{aligned}$$
(3.40)

The above relationship implies that

$$ \lim_{j \rightarrow \infty} \Vert q_{k_{j}} - r_{k_{j}} \Vert = 0, \qquad \lim_{j \rightarrow \infty} \Vert p_{k_{j}} - r_{k_{j}} \Vert = 0, \qquad \lim _{j \rightarrow \infty} \bigl\lVert \mathcal{M} (p_{k_{j}}) - p_{k_{j}} \bigr\rVert = 0. $$
(3.41)

Thus, we obtain

$$ \lim_{j \rightarrow \infty} \Vert p_{k_{j}} - q_{k_{j}} \Vert = 0. $$
(3.42)

Next, we compute the following:

$$\begin{aligned} \Vert q_{k_{j}} - s_{k_{j}} \Vert &= \bigl\Vert s_{k_{j}} + \ell _{k_{j}} (s_{k_{j}} - s_{k_{j}-1}) - \varsigma _{k_{j}} \bigl[ s_{k_{j}} + \ell _{k_{j}} (s_{k_{j}} - s_{k_{j}-1}) \bigr] - s_{k_{j}} \bigr\Vert \\ &\leq \ell _{k_{j}} \Vert s_{k_{j}} - s_{k_{j}-1} \Vert + \varsigma _{k_{j}} \Vert s_{k_{j}} \Vert + \ell _{k_{j}} \varsigma _{k_{j}} \Vert s_{k_{j}} - s_{k_{j}-1} \Vert \\ &= \varsigma _{k_{j}} \frac{\ell _{k_{j}}}{\varsigma _{k_{j}}} \Vert s_{k_{j}} - s_{k_{j}-1} \Vert + \varsigma _{k_{j}} \Vert s_{k_{j}} \Vert + \varsigma _{k_{j}}^{2} \frac{\ell _{k_{j}}}{\varsigma _{k_{j}}} \Vert s_{k_{j}} - s_{k_{j}-1} \Vert \longrightarrow 0. \end{aligned}$$
(3.43)

This, together with \(\lim_{j \rightarrow \infty} \| p_{k_{j}} - q_{k_{j}}\| = 0\), yields that

$$ \lim_{j \rightarrow \infty} \Vert p_{k_{j}} - s_{k_{j}} \Vert = 0. $$
(3.44)

From the definition of \(s_{k_{j}+1} = (1 - \sigma _{k_{j}}) p_{k_{j}} + \sigma _{k_{j}} \mathcal{M}(p_{k_{j}})\), one sees that

$$ \Vert s_{k_{j}+1} - p_{k_{j}} \Vert = \sigma _{k_{j}} \bigl\Vert \mathcal{M}(p_{k_{j}}) - p_{k_{j}} \bigr\Vert \leq (1 - \rho ) \bigl\Vert \mathcal{M}(p_{k_{j}}) - p_{k_{j}} \bigr\Vert . $$
(3.45)

Thus, we obtain

$$ \lim_{j \rightarrow \infty} \Vert s_{k_{j}+1} - p_{k_{j}} \Vert = 0. $$
(3.46)

The above expression implies that

$$ \lim_{j \rightarrow \infty} \Vert s_{k_{j}} - s_{k_{j}+1} \Vert \leq \lim_{j \rightarrow \infty} \Vert s_{k_{j}} - p_{k_{j}} \Vert + \lim_{j \rightarrow \infty} \Vert p_{k_{j}} - s_{k_{j}+1} \Vert = 0 $$
(3.47)

and

$$ \lim_{j \rightarrow \infty} \Vert q_{k_{j}} - s_{k_{j}+1} \Vert \leq \lim_{j \rightarrow \infty} \Vert q_{k_{j}} - p_{k_{j}} \Vert + \lim_{j \rightarrow \infty} \Vert p_{k_{j}} - s_{k_{j}+1} \Vert = 0. $$
(3.48)

In particular, the sequence \(\{s_{k_{j}}\}\) is bounded, so, passing to a further subsequence if necessary, we may assume that \(\{s_{k_{j}}\}\) converges weakly to some \(\hat{u} \in \mathcal{E}\). By using the value \(\varpi ^{*} = P_{\operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})} (0)\), we have

$$ \bigl\langle 0 - \varpi ^{*}, y - \varpi ^{*} \bigr\rangle \leq 0, \quad \forall y \in \operatorname{VI}( \mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M}). $$
(3.49)

From the expression (3.43) it follows that \(\{q_{k_{j}}\}\) also converges weakly to \(\hat{u} \in \mathcal{E}\). By using the expression (3.41), \(\lim_{k \rightarrow \infty} \hbar _{k} = \hbar \), and Lemma 2.4, one concludes that \(\hat{u} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\). It follows from (3.44) that \(\{p_{k_{j}}\}\) converges weakly to \(\hat{u} \in \mathcal{E}\). By the demiclosedness of \(I - \mathcal{M}\) at zero and (3.41), we derive that \(\hat{u} \in \operatorname{Fix}(\mathcal{M})\). This implies that \(\hat{u} \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})\). Thus, by (3.49), we obtain

$$\begin{aligned} & \lim_{j \rightarrow \infty} \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k_{j}} \bigr\rangle = \bigl\langle \varpi ^{*}, \varpi ^{*} - \hat{u} \bigr\rangle \leq 0. \end{aligned}$$
(3.50)

Next, we can use \(\lim_{j \rightarrow \infty} \lVert s_{k_{j} + 1} - s_{k_{j}} \rVert = 0\). Thus, we can write

$$\begin{aligned} & \limsup_{j \rightarrow \infty} \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k_{j} + 1} \bigr\rangle \\ &\quad \leq \limsup_{j \rightarrow \infty} \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k_{j}} \bigr\rangle + \limsup _{j \rightarrow \infty} \bigl\langle \varpi ^{*}, s_{k_{j}} - s_{k_{j} + 1} \bigr\rangle \leq 0. \end{aligned}$$
(3.51)

By using Claim 3 and Lemma 2.5, we see that \(s_{k} \rightarrow \varpi ^{*}\) as \(k \rightarrow \infty \). This completes the proof of the theorem. □

Theorem 3.8

Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Then, the sequence \(\{s_{k}\}\) generated by Algorithm 4 converges strongly to \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})\), where \(\varpi ^{*} = P_{\operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})} (0)\).

Proof

Claim 1: The sequence \(\{s_{k}\}\) is bounded.

Let us consider that

$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$

By using the definition of the sequence \(\{s_{k+1}\}\) and the ρ-demicontractivity of \(\mathcal{M}\), we have

$$\begin{aligned} \bigl\lVert s_{k+1} - \varpi ^{*} \bigr\rVert ^{2} &= \bigl\lVert (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}) - \varpi ^{*} \bigr\rVert ^{2} \\ &= \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} + 2 \sigma _{k} \bigl\langle p_{k} - \varpi ^{*}, \mathcal{M}(p_{k}) - p_{k} \bigr\rangle + \sigma _{k}^{2} \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} \\ &\leq \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} + \sigma _{k} ( \rho - 1) \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} + \sigma _{k}^{2} \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} \\ &= \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} - \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2}. \end{aligned}$$
(3.52)

By using the value of \(\{q_{k}\}\), we obtain

$$\begin{aligned} \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert &= \bigl\lVert s_{k} + \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} s_{k} - \ell _{k} \varsigma _{k} (s_{k} - s_{k-1}) - \varpi ^{*} \bigr\rVert \\ &= \bigl\lVert (1 - \varsigma _{k}) \bigl(s_{k} - \varpi ^{*}\bigr) + (1 - \varsigma _{k}) \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} \varpi ^{*} \bigr\rVert \end{aligned}$$
(3.53)
$$\begin{aligned} &\leq (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert + (1 - \varsigma _{k}) \ell _{k} \lVert s_{k} - s_{k-1} \rVert + \varsigma _{k} \bigl\lVert \varpi ^{*} \bigr\rVert \\ &\leq (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert + \varsigma _{k} M_{1}, \end{aligned}$$
(3.54)

where \(M_{1}\) is some fixed number such that

$$ (1 - \varsigma _{k}) \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert + \bigl\lVert \varpi ^{*} \bigr\rVert \leq M_{1}. $$

Since \(\hbar _{k} \rightarrow \hbar \), there exists a fixed number \(\chi \in (0, 1 - \mu ^{2})\) such that

$$ \lim_{k \rightarrow \infty} \biggl( 1 - \mu ^{2} \frac{\hbar _{k}^{2}}{\hbar _{k+1}^{2}} \biggr) = 1 - \mu ^{2} > \chi > 0. $$

Thus, there exists some fixed \(k_{0} \in \mathbb{N}\) such that

$$ \biggl( 1 - \mu ^{2} \frac{\hbar _{k}^{2}}{\hbar _{k+1}^{2}} \biggr) > \chi > 0, \quad \forall k \geq k_{0}. $$
(3.55)

By using Lemma 3.5, we can write

$$ \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \leq \bigl\Vert q_{k} - \varpi ^{*} \bigr\Vert ^{2}, \quad \forall k \geq k_{0}. $$
(3.56)

From expressions (3.52), (3.54), and (3.56) we infer that

$$\begin{aligned} \bigl\Vert s_{k+1} - \varpi ^{*} \bigr\Vert &\leq (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert + \varsigma _{k} M_{1} - \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2}. \end{aligned}$$
(3.57)

Thus, for \(\{\sigma _{k}\} \subset (0, 1 - \rho )\), we obtain

$$\begin{aligned} \bigl\Vert s_{k+1} - \varpi ^{*} \bigr\Vert &\leq (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert + \varsigma _{k} M_{1} \\ &\leq \max \bigl\{ \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert , M_{1} \bigr\} \\ \vdots \\ &\leq \max \bigl\{ \bigl\Vert s_{k_{0}} - \varpi ^{*} \bigr\Vert , M_{1} \bigr\} . \end{aligned}$$
(3.58)

Consequently, we may infer that the sequence \(\{s_{k}\}\) is a bounded sequence.

Claim 2:

$$\begin{aligned} & \biggl( 1 - \mu ^{2} \frac{\hbar _{k}^{2}}{\hbar _{k+1}^{2}} \biggr) \lVert q_{k} - r_{k} \rVert ^{2} + \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2} \\ &\quad \leq \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k+1} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k} M_{2}, \end{aligned}$$
(3.59)

for some fixed \(M_{2} > 0\). Indeed, by using the definition of \(\{s_{k+1}\}\), we have

$$\begin{aligned} \bigl\lVert s_{k+1} - \varpi ^{*} \bigr\rVert ^{2} &= \bigl\lVert (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}) - \varpi ^{*} \bigr\rVert ^{2} \\ &= \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} + 2 \sigma _{k} \bigl\langle p_{k} - \varpi ^{*}, \mathcal{M}(p_{k}) - p_{k} \bigr\rangle + \sigma _{k}^{2} \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} \\ &\leq \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} + \sigma _{k} ( \rho - 1) \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} + \sigma _{k}^{2} \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2} \\ &= \bigl\lVert p_{k} - \varpi ^{*} \bigr\rVert ^{2} - \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2}. \end{aligned}$$
(3.60)
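
The inequality in the third line of (3.60) is the standard consequence of \(\rho \)-demicontractivity: since \(\varpi ^{*} \in \operatorname{Fix}(\mathcal{M})\), the definition gives \(\lVert \mathcal{M}(p_{k}) - \varpi ^{*} \rVert ^{2} \leq \lVert p_{k} - \varpi ^{*} \rVert ^{2} + \rho \lVert \mathcal{M}(p_{k}) - p_{k} \rVert ^{2}\); expanding the left-hand side about \(p_{k}\) and cancelling \(\lVert p_{k} - \varpi ^{*} \rVert ^{2}\) yields

$$ 2 \bigl\langle p_{k} - \varpi ^{*}, \mathcal{M}(p_{k}) - p_{k} \bigr\rangle \leq (\rho - 1) \bigl\lVert \mathcal{M}(p_{k}) - p_{k} \bigr\rVert ^{2}. $$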

By using Lemma 3.5, we obtain

$$ \bigl\Vert p_{k} - \varpi ^{*} \bigr\Vert ^{2} \leq \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} - \biggl( 1 - \mu ^{2} \frac{\hbar _{k}^{2}}{\hbar _{k+1}^{2}} \biggr) \lVert q_{k} - r_{k} \rVert ^{2}. $$
(3.61)

Using expression (3.54), we obtain

$$\begin{aligned} \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} &\leq (1 - \varsigma _{k})^{2} \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k}^{2} M_{1}^{2} + 2 M_{1} \varsigma _{k} (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert \\ &\leq \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k} \bigl[ \varsigma _{k} M_{1}^{2} + 2 M_{1} (1 - \varsigma _{k}) \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert \bigr] \\ &\leq \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k} M_{2}, \end{aligned}$$
(3.62)

for some fixed constant \(M_{2} > 0\); such an \(M_{2}\) exists because \(\{s_{k}\}\) is bounded. From expressions (3.60), (3.61), and (3.62) we obtain

$$\begin{aligned} \bigl\Vert s_{k+1} - \varpi ^{*} \bigr\Vert ^{2} \leq{}& \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} + \varsigma _{k} M_{2} - \sigma _{k} (1 - \rho - \sigma _{k}) \bigl\lVert \mathcal{M} (p_{k}) - p_{k} \bigr\rVert ^{2} \\ & {}- \biggl( 1 - \mu ^{2} \frac{\hbar _{k}^{2}}{\hbar _{k+1}^{2}} \biggr) \lVert q_{k} - r_{k} \rVert ^{2}. \end{aligned}$$
(3.63)

Claim 3: The estimate (3.65) below holds.

Indeed, using the definition of \(q_{k}\), we can write:

$$\begin{aligned} & \bigl\lVert q_{k} - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad= \bigl\lVert s_{k} + \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} s_{k} - \ell _{k} \varsigma _{k} (s_{k} - s_{k-1}) - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad= \bigl\lVert (1 - \varsigma _{k}) \bigl(s_{k} - \varpi ^{*}\bigr) + (1 - \varsigma _{k}) \ell _{k} (s_{k} - s_{k-1}) - \varsigma _{k} \varpi ^{*} \bigr\rVert ^{2} \\ &\quad\leq \bigl\lVert (1 - \varsigma _{k}) \bigl(s_{k} - \varpi ^{*}\bigr) + (1 - \varsigma _{k}) \ell _{k} (s_{k} - s_{k-1}) \bigr\rVert ^{2} + 2 \varsigma _{k} \bigl\langle - \varpi ^{*}, q_{k} - \varpi ^{*} \bigr\rangle \\ &\quad= (1 - \varsigma _{k})^{2} \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert ^{2} + (1 - \varsigma _{k})^{2} \ell _{k}^{2} \lVert s_{k} - s_{k-1} \rVert ^{2} \\ &\qquad{}+ 2 \ell _{k} (1 - \varsigma _{k})^{2} \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \lVert s_{k} - s_{k-1} \rVert + 2 \varsigma _{k} \bigl\langle - \varpi ^{*}, q_{k} - s_{k+1} \bigr\rangle \\ &\qquad{}+ 2 \varsigma _{k} \bigl\langle - \varpi ^{*}, s_{k+1} - \varpi ^{*} \bigr\rangle \\ &\quad\leq (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert ^{2} + \ell _{k}^{2} \lVert s_{k} - s_{k-1} \rVert ^{2} + 2 \ell _{k} (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \lVert s_{k} - s_{k-1} \rVert \\ &\qquad{}+ 2 \varsigma _{k} \bigl\lVert \varpi ^{*} \bigr\rVert \lVert q_{k} - s_{k+1} \rVert + 2 \varsigma _{k} \bigl\langle - \varpi ^{*}, s_{k+1} - \varpi ^{*} \bigr\rangle \\ &\quad= (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert ^{2} + \varsigma _{k} \biggl[ \ell _{k} \lVert s_{k} - s_{k-1} \rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert \\ &\qquad{}+ 2 (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert + 2 \bigl\lVert \varpi ^{*} \bigr\rVert \lVert q_{k} - s_{k+1} \rVert \\ &\qquad{}+ 2 \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k+1} \bigr\rangle \biggr]. \end{aligned}$$
(3.64)

Combining expressions (3.56), (3.60), and (3.64), we obtain

$$\begin{aligned} & \bigl\lVert s_{k+1} - \varpi ^{*} \bigr\rVert ^{2} \\ &\quad\leq (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert ^{2} + \varsigma _{k} \biggl[ \ell _{k} \lVert s_{k} - s_{k-1} \rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert \\ &\qquad{}+ 2 (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert + 2 \bigl\lVert \varpi ^{*} \bigr\rVert \lVert q_{k} - s_{k+1} \rVert \\ &\qquad{} + 2 \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k+1} \bigr\rangle \biggr]. \end{aligned}$$
(3.65)

Claim 4: The sequence \(\{\lVert s_{k} - \varpi ^{*} \rVert ^{2}\}\) converges to zero.

Set

$$ c_{k} := \bigl\Vert s_{k} - \varpi ^{*} \bigr\Vert ^{2} $$

and

$$\begin{aligned} e_{k} := {}&\ell _{k} \lVert s_{k} - s_{k-1} \rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert + 2 (1 - \varsigma _{k}) \bigl\lVert s_{k} - \varpi ^{*} \bigr\rVert \frac{\ell _{k}}{\varsigma _{k}} \lVert s_{k} - s_{k-1} \rVert\\ &{} + 2 \bigl\lVert \varpi ^{*} \bigr\rVert \lVert q_{k} - s_{k+1} \rVert + 2 \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k+1} \bigr\rangle . \end{aligned}$$

Then, inequality (3.65) can be rewritten as follows:

$$ c_{k+1} \leq (1 - \varsigma _{k}) c_{k} + \varsigma _{k} e_{k}. $$

By Lemma 2.5, it suffices to show that \(\limsup_{j \rightarrow \infty} e_{k_{j}} \leq 0\) for every subsequence \(\{c_{k_{j}}\}\) of \(\{c_{k}\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} ( c_{k_{j}+1} - c_{k_{j}} ) \geq 0. $$

For this, it is enough to show that

$$ \limsup_{j \rightarrow \infty} \bigl\langle \varpi ^{*}, \varpi ^{*} - s_{k_{j}+1} \bigr\rangle \leq 0 $$

and

$$ \limsup_{j \rightarrow \infty} \lVert q_{k_{j}} - s_{k_{j}+1} \rVert \leq 0, $$

for every subsequence \(\{\|s_{k_{j}} - \varpi ^{*}\|\}\) of \(\{\|s_{k} - \varpi ^{*}\|\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert \bigr) \geq 0. $$

Suppose that \(\{\|s_{k_{j}} - \varpi ^{*}\|\}\) is a subsequence of \(\{\|s_{k} - \varpi ^{*}\|\}\) satisfying

$$ \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert \bigr) \geq 0, $$

then, we have

$$\begin{aligned} & \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert ^{2} \bigr) \\ &\quad = \liminf_{j \rightarrow +\infty} \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert \bigr) \bigl( \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert + \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert \bigr) \geq 0. \end{aligned}$$
(3.66)

By Claim 2, we have

$$\begin{aligned} & \limsup_{j \rightarrow \infty} \biggl[ \biggl(1 - \frac{\mu ^{2} \hbar _{k_{j}}^{2}}{\hbar _{k_{j}+1}^{2}} \biggr) \Vert q_{k_{j}} - r_{k_{j}} \Vert ^{2} + \sigma _{k_{j}} (1 - \rho - \sigma _{k_{j}}) \bigl\lVert \mathcal{M} (p_{k_{j}}) - p_{k_{j}} \bigr\rVert ^{2} \biggr] \\ &\quad\leq \limsup_{j \rightarrow \infty} \bigl[ \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert ^{2} \bigr] + \limsup_{j \rightarrow \infty} \varsigma _{k_{j}} M_{2} \\ &\quad= - \liminf_{j \rightarrow \infty} \bigl[ \bigl\Vert s_{k_{j}+1} - \varpi ^{*} \bigr\Vert ^{2} - \bigl\Vert s_{k_{j}} - \varpi ^{*} \bigr\Vert ^{2} \bigr] \\ &\quad\leq 0, \end{aligned}$$
(3.67)

The above relation, together with (3.55), implies that

$$ \lim_{j \rightarrow \infty} \Vert q_{k_{j}} - r_{k_{j}} \Vert = 0, \qquad \lim_{j \rightarrow \infty} \bigl\lVert \mathcal{M} (p_{k_{j}}) - p_{k_{j}} \bigr\rVert = 0. $$
(3.68)

It then follows from the definition of \(p_{k_{j}}\) and the Lipschitz continuity of \(\mathcal{N}\) that

$$ \Vert p_{k_{j}} - r_{k_{j}} \Vert = \bigl\Vert r_{k_{j}} + \hbar _{k_{j}} \bigl[ \mathcal{N} (q_{k_{j}}) - \mathcal{N}(r_{k_{j}})\bigr] - r_{k_{j}} \bigr\Vert \leq \hbar _{k_{j}} L \Vert q_{k_{j}} - r_{k_{j}} \Vert . $$
(3.69)

Thus, we have

$$ \lim_{j \rightarrow \infty} \Vert p_{k_{j}} - r_{k_{j}} \Vert = 0. $$
(3.70)

The rest of the proof is similar to Claim 4 in the proof of Theorem 3.7; we omit it here. This completes the proof of the theorem. □

4 Numerical illustrations

In contrast to some earlier work in the literature, this section examines the computational behavior of the proposed techniques and studies how changes in the control parameters affect the numerical efficiency of the recommended algorithms. All calculations are performed in MATLAB R2018b on an HP laptop with an Intel Core(TM) i5-6200 processor and 8.00 GB (7.78 GB usable) RAM.

Example 4.1

Consider the mapping \(\mathcal{N} : \mathbb{R}^{m} \to \mathbb{R}^{m}\) defined by

$$ \mathcal{N}(u) = M u + q, $$

where \(q = 0\) and the matrix \(M\) is given by

$$ M = N N^{T} + B + D. $$

The matrices \(N = \operatorname{rand}(m)\) and \(K = \operatorname{rand}(m)\) are generated randomly, and the matrices \(B\) and \(D\) are constructed as follows:

$$ B = 0.5 K - 0.5 K^{T} \quad \text{and} \quad D = \operatorname{diag}\bigl(\operatorname{rand}(m,1)\bigr). $$

The feasible set \(\mathcal{D}\) is given by

$$ \mathcal{D} = \bigl\{ u \in \mathbb{R}^{m} : -10 \leq u_{i} \leq 10 \bigr\} . $$

It is evident that the mapping \(\mathcal{N}\) is monotone and Lipschitz continuous with constant \(L = \|M\|\). Moreover, the mapping \(\mathcal{M} : \mathcal{E} \rightarrow \mathcal{E}\) is considered as follows:

$$ \mathcal{M} (u) = \frac{1}{2} u. $$

The starting points for these tests are \(s_{0} = s_{1} = (2, 2, \ldots , 2)\), and the dimension \(m\) is varied in order to study the behavior of the methods in higher-dimensional spaces. The stopping criterion is \(D_{k} = \|q_{k} - r_{k}\| \leq 10^{-10}\). Figures 1–6 and Tables 1 and 2 illustrate the empirical observations for Example 4.1. The following control parameters are used (a MATLAB sketch of this problem setup is given after the parameter list below):

Figure 1

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(m = 5\)

Figure 2

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(m = 10\)

Figure 3

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(m = 20\)

Figure 4

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(m = 50\)

Figure 5

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(m = 100\)

Figure 6

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(m = 200\)

Table 1 Numerical values for Figs. 1–6
Table 2 Numerical values for Figs. 1–6
  1. (1)

    Algorithm 2 (alg-1): \(\hbar _{1} = 0.55\), \(\ell = 0.45\), \(\mu = 0.44\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(2 k + 4)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\).

  2. (2)

    Algorithm 4 (alg-2): \(\hbar _{1} = 0.55\), \(\ell = 0.45\), \(\mu = 0.44\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(2 k + 4)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\).

  3. (3)

    Algorithm 1 in [31] (mtalg-1): \(\gamma _{1} = 0.55\), \(\delta = 0.45\), \(\phi = 0.44\), \(\ell _{k} = \frac{1}{(2 k + 4)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\).

  4. (4)

    Algorithm 2 in [31] (mtalg-2): \(\gamma _{1} = 0.55\), \(\delta = 0.45\), \(\phi = 0.44\), \(\ell _{k} = \frac{1}{(2 k + 4)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\).

  5. (5)

    Algorithm 1 in [32] (vtalg-1): \(\tau _{1} = 0.55\), \(\ell = 0.45\), \(\mu = 0.44\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(2 k + 4)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\), \(f(u)=\frac{u}{2}\).

  6. (6)

    Algorithm 2 in [32] (vtalg-2): \(\tau _{1} = 0.55\), \(\ell = 0.45\), \(\mu = 0.44\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(2 k + 4)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\), \(f(u)=\frac{u}{2}\).
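
For reproducibility, the following MATLAB fragment is a minimal sketch of the problem data of Example 4.1, together with one generic iteration of the type analyzed in the convergence proofs above: the inertial step from (3.53), the Tseng-type step \(p_{k} = r_{k} + \hbar _{k}[\mathcal{N}(q_{k}) - \mathcal{N}(r_{k})]\) from (3.69), and the Mann-type step \(s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k})\) from (3.60). The projection step and the nonmonotone step-size update shown here are common choices consistent with (3.55); they are our assumptions, not necessarily the exact rules of Algorithm 2.

```matlab
% Minimal sketch of Example 4.1 (variable names are ours).
m = 5;                                    % problem dimension
N = rand(m); K = rand(m);                 % random matrices
B = 0.5*K - 0.5*K';                       % skew-symmetric part
D = diag(rand(m,1));                      % positive diagonal part
M = N*N' + B + D;  q = zeros(m,1);        % M monotone, q = 0
Nop = @(u) M*u + q;                       % operator N(u) = Mu + q
Mop = @(u) 0.5*u;                         % demicontractive mapping M(u) = u/2
PD  = @(u) min(max(u, -10), 10);          % projection onto the box D
s_prev = 2*ones(m,1); s = s_prev;         % s_0 = s_1 = (2,...,2)
hbar = 0.55; ell = 0.45; mu = 0.44;       % parameters of alg-1

for k = 1:10000
    vs  = 1/(2*k + 4);                    % varsigma_k
    sig = k/(2*k + 1);                    % sigma_k
    chi = 100/(1 + k)^2;                  % chi_k
    % The paper's ell_k may be damped adaptively; a constant ell is used here.
    qk = (1 - vs)*(s + ell*(s - s_prev)); % inertial + relaxation, cf. (3.53)
    rk = PD(qk - hbar*Nop(qk));           % projection step (assumed form)
    if norm(qk - rk) <= 1e-10, break; end % stopping rule D_k
    pk = rk + hbar*(Nop(qk) - Nop(rk));   % Tseng-type step, cf. (3.69)
    s_prev = s;
    s = (1 - sig)*pk + sig*Mop(pk);       % Mann-type step, cf. (3.60)
    % Nonmonotone step-size update (assumed; consistent with (3.55)):
    den = norm(Nop(qk) - Nop(rk));
    if den > 0
        hbar = min(mu*norm(qk - rk)/den, hbar + chi);
    else
        hbar = hbar + chi;
    end
end
```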

Example 4.2

Consider the nonlinear mapping \(\mathcal{N} : \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}\) defined by

$$ \mathcal{N} (u, y) = (u + y + \sin u; -u + y + \sin y). $$

Furthermore, the feasible set \(\mathcal{D}\) is given by

$$ \mathcal{D} = [-1, 1] \times [-1, 1]. $$

It is simple to demonstrate that \(\mathcal{N}\) is monotone and Lipschitz continuous with constant \(L = 3\). Another mapping \(\mathcal{M} : \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}\) is defined as follows:

$$ \mathcal{M} (z) = \Vert E \Vert ^{-1} E z, $$

where E is defined by:

$$ E = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} . $$

The mapping \(\mathcal{M}\) is 0-demicontractive (\(\rho = 0\)); indeed, \(\operatorname{Fix}(\mathcal{M}) = \{0\} \times \mathbb{R}\) and \(\lVert \mathcal{M}(z) - \varpi ^{*} \rVert ^{2} = \frac{z_{1}^{2}}{4} + (z_{2} - \varpi _{2}^{*})^{2} \leq \lVert z - \varpi ^{*} \rVert ^{2}\) for every \(\varpi ^{*} \in \operatorname{Fix}(\mathcal{M})\). The starting points for this experiment are varied, while the stopping condition is \(D_{k} = \|q_{k} - r_{k}\| \leq 10^{-10}\). Figures 7–14 and Tables 3 and 4 report the quantitative data for Example 4.2. The following control parameters are used (a MATLAB sketch of this setup follows the parameter list below):

  1. (1)

    Algorithm 2 (briefly, alg-1): \(\hbar _{1} = 0.45\), \(\ell = 0.35\), \(\mu = 0.33\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(3 k + 6)}\), \(\sigma _{k} = \frac{k}{(3k+1)}\).

  2. (2)

    Algorithm 4 (briefly, alg-2): \(\hbar _{1} = 0.45\), \(\ell = 0.35\), \(\mu = 0.33\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(3 k + 4)}\), \(\sigma _{k} = \frac{k}{(3k+1)}\).

  3. (3)

    Algorithm 1 in [31] (briefly, mtalg-1): \(\gamma _{1} = 0.45\), \(\delta = 0.35\), \(\phi = 0.33\), \(\ell _{k} = \frac{1}{(3 k + 6)}\), \(\hbar _{k} = \frac{1}{2.5} (1 - \ell _{k})\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\).

  4. (4)

    Algorithm 2 in [31] (briefly, mtalg-2): \(\gamma _{1} = 0.45\), \(\delta = 0.35\), \(\phi = 0.33\), \(\ell _{k} = \frac{1}{(3 k + 6)}\), \(\hbar _{k} = \frac{1}{2.5} (1 - \ell _{k})\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\).

  5. (5)

    Algorithm 1 in [32] (briefly, vtalg-1): \(\tau _{1} = 0.45\), \(\ell = 0.35\), \(\mu = 0.33\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(3 k + 6)}\), \(\sigma _{k} = \frac{k}{(3k+1)}\), \(f(u)=\frac{u}{2}\).

  6. (6)

    Algorithm 2 in [32] (briefly, vtalg-2): \(\tau _{1} = 0.45\), \(\ell = 0.35\), \(\mu = 0.33\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(3 k + 6)}\), \(\sigma _{k} = \frac{k}{(3k+1)}\), \(f(u)=\frac{u}{2}\).
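
A minimal MATLAB sketch of the data of Example 4.2 is given below (variable names are ours; the generic iteration from the sketch in Example 4.1 can be reused with these handles):

```matlab
% Minimal sketch of Example 4.2 (variable names are ours).
Nop = @(z) [ z(1) + z(2) + sin(z(1));
            -z(1) + z(2) + sin(z(2)) ];  % nonlinear monotone operator, L = 3
E   = [1 0; 0 2];
Mop = @(z) E*z / norm(E);                % 0-demicontractive M(z) = ||E||^{-1} E z
PD  = @(z) min(max(z, -1), 1);           % projection onto the box D
s0  = [1; 1]; s1 = s0;                   % one of the tested starting points
```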

Figure 7

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = (1, 1)^{T}\)

Figure 8

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = (1, 1)^{T}\)

Figure 9

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = (2, 2)^{T}\)

Figure 10

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = (2, 2)^{T}\)

Figure 11

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = (1, -1)^{T}\)

Figure 12

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = (1, -1)^{T}\)

Figure 13

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = (-2, -3)^{T}\)

Figure 14

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = (-2, -3)^{T}\)

Table 3 Numerical values for Figs. 7–14
Table 4 Numerical values for Figs. 7–14

Example 4.3

Take the following set:

$$ \mathcal{D} := \bigl\{ u \in L^{2}\bigl([0, 1]\bigr): \Vert u \Vert \leq 1 \bigr\} . $$

Let \(\mathcal{N} : \mathcal{D} \rightarrow \mathcal{E}\) be the operator defined by

$$ \mathcal{N}(u) (t) = \int _{0}^{1} \bigl( u(t) - H(t, s) f\bigl(u(s) \bigr) \bigr)\,ds + g(t), $$

where

$$ H(t, s) = \frac{2ts e^{(t+s)}}{e \sqrt{e^{2}-1}}, \qquad f(u) = \cos u, \qquad g(t) = \frac{2t e^{t}}{e \sqrt{e^{2}-1}}. $$

In this case, \(\mathcal{E} = L^{2}([0, 1])\) is the Hilbert space equipped with the inner product

$$ \langle u, y \rangle = \int _{0}^{1} u(t) y(t)\,dt, \quad \forall u, y \in \mathcal{E}, $$

and the induced norm

$$ \Vert u \Vert = \sqrt{ \int _{0}^{1} \bigl\vert u(t) \bigr\vert ^{2}\,dt}. $$

A function \(\mathcal{M} : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) is of the form

$$ \mathcal{M}(u) (t) = \int _{0}^{1} t u(s)\,ds, \quad t \in [0, 1]. $$

A simple calculation shows that \(\mathcal{M}\) is 0-demicontractive, and the solution is \(\varpi ^{*}(t) = 0\). The stopping condition in this experiment is \(D_{k} = \|q_{k} - r_{k}\| \leq 10^{-6}\). Figures 15–18 and Tables 5 and 6 illustrate the numerical observations for Example 4.3. The following control parameters are used (a MATLAB sketch of the discretized operators is given after the parameter list below):

  1. (1)

    Algorithm 2 (briefly, alg-1): \(\hbar _{1} = 0.33\), \(\ell = 0.66\), \(\mu = 0.55\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(4 k + 8)}\), \(\sigma _{k} = \frac{k}{(5k+1)}\).

  2. (2)

    Algorithm 4 (briefly, alg-2): \(\hbar _{1} = 0.33\), \(\ell = 0.66\), \(\mu = 0.55\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(4 k + 8)}\), \(\sigma _{k} = \frac{k}{(5k+1)}\).

  3. (3)

    Algorithm 1 in [31] (briefly, mtalg-1): \(\gamma _{1} = 0.33\), \(\delta = 0.66\), \(\phi = 0.55\), \(\ell _{k} = \frac{1}{(4 k + 8)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\).

  4. (4)

    Algorithm 2 in [31] (briefly, mtalg-2): \(\gamma _{1} = 0.33\), \(\delta = 0.66\), \(\phi = 0.55\), \(\ell _{k} = \frac{1}{(4 k + 8)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\).

  5. (5)

    Algorithm 1 in [32] (briefly, vtalg-1): \(\tau _{1} = 0.33\), \(\ell = 0.66\), \(\mu = 0.55\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(4 k + 8)}\), \(\sigma _{k} = \frac{k}{(4 k+1)}\), \(f(u)=\frac{u}{3}\).

  6. (6)

    Algorithm 2 in [32] (briefly, vtalg-2): \(\tau _{1} = 0.33\), \(\ell = 0.66\), \(\mu = 0.55\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(4 k + 8)}\), \(\sigma _{k} = \frac{k}{(4 k+1)}\), \(f(u)=\frac{u}{3}\).
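
Evaluating \(\mathcal{N}\) and \(\mathcal{M}\) in Example 4.3 requires numerical integration. The following is a minimal MATLAB sketch of the discretized operators, assuming a uniform grid on \([0, 1]\) and the trapezoidal rule (grid size and quadrature are our choices):

```matlab
% Minimal sketch of the discretized operators in Example 4.3.
n = 200;                                  % grid size (our choice)
t = linspace(0, 1, n)';                   % uniform grid on [0,1]
c = exp(1)*sqrt(exp(2) - 1);              % normalizing constant e*sqrt(e^2 - 1)
g = 2*t.*exp(t)/c;                        % g(t)
H = @(ti, s) 2*ti.*s.*exp(ti + s)/c;      % kernel H(t,s)
% N(u)(t_i) = int_0^1 ( u(t_i) - H(t_i,s) cos(u(s)) ) ds + g(t_i):
Nop = @(u) arrayfun(@(i) trapz(t, u(i) - H(t(i), t).*cos(u)), (1:n)') + g;
Mop = @(u) t * trapz(t, u);               % M(u)(t) = t * int_0^1 u(s) ds
PD  = @(u) u / max(1, sqrt(trapz(t, u.^2)));  % projection onto the unit L^2 ball
```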

Figure 15

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = 1\)

Figure 16

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = t\)

Figure 17

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = \sin (t)\)

Figure 18

Numerical comparison of Algorithm 2 and Algorithm 4 with Algorithm 1 in [31], Algorithm 2 in [31], Algorithm 1 in [32], and Algorithm 2 in [32] when \(s_{0} = s_{1} = \cos (t)\)

Table 5 Numerical values for Figs. 15–18
Table 6 Numerical values for Figs. 15–18

Example 4.4

Consider the feasible set \(\mathcal{D}\) given by

$$ \mathcal{D} := \bigl\{ u \in L^{2}\bigl([0, 1]\bigr): \Vert u \Vert \leq 1 \bigr\} . $$

Define the operator \(\mathcal{N} : \mathcal{D} \rightarrow \mathcal{E}\) by

$$ \mathcal{N}(u) (t) = \max \bigl\{ u(t), 0 \bigr\} = \frac{u(t) + \vert u(t) \vert }{2}. $$

Let \(\mathcal{E} = L^{2}([0, 1])\) be a real Hilbert space with inner product and induced norm given by

$$ \langle u, y \rangle = \int _{0}^{1} u(t) y(t)\,dt, \quad \forall u, y \in \mathcal{E} $$

and

$$ \Vert u \Vert = \sqrt{ \int _{0}^{1} \bigl\vert u(t) \bigr\vert ^{2}\,dt}. $$

It is easy to verify that \(\mathcal{N}\) is monotone and 1-Lipschitz continuous, and that the projection onto \(\mathcal{D}\) has the explicit form

$$ P_{\mathcal{D}} (u) = \textstyle\begin{cases} \frac{u}{ \Vert u \Vert } ,& \text{if } \Vert u \Vert > 1, \\ u, & \text{if } \Vert u \Vert \leq 1. \end{cases} $$

A mapping \(\mathcal{M} : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) takes the following form:

$$ \mathcal{M}(u) (t) = \int _{0}^{1} t u(s)\,ds, \quad t \in [0, 1]. $$

A simple analysis demonstrates that \(\mathcal{M}\) is 0-demicontractive, and the solution is \(\varpi ^{*}(t) = 0\). The starting points for these trials are varied, and the stopping criterion is \(D_{k} = \|q_{k} - r_{k}\| \leq 10^{-6}\). Tables 7 and 8 report the numerical results for Example 4.4. The following control parameters are used (a MATLAB sketch of the discretized setup follows the parameter list below):

  1. (1)

    Algorithm 2 (briefly, alg-1): \(\hbar _{1} = 0.25\), \(\ell = 0.44\), \(\mu = 0.75\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(5 k + 10)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\).

  2. (2)

    Algorithm 4 (briefly, alg-2): \(\hbar _{1} = 0.25\), \(\ell = 0.44\), \(\mu = 0.75\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(5 k + 10)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\).

  3. (3)

    Algorithm 1 in [31] (briefly, mtalg-1): \(\gamma _{1} = 0.25\), \(\delta = 0.44\), \(\phi = 0.75\), \(\ell _{k} = \frac{1}{(5 k + 10)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\).

  4. (4)

    Algorithm 2 in [31] (briefly, mtalg-2): \(\gamma _{1} = 0.25\), \(\delta = 0.44\), \(\phi = 0.75\), \(\ell _{k} = \frac{1}{(5 k + 10)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\).

  5. (5)

    Algorithm 1 in [32] (briefly, vtalg-1): \(\tau _{1} = 0.25\), \(\ell = 0.44\), \(\mu = 0.75\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(5 k + 10)}\), \(\sigma _{k} = \frac{k}{(2 k+1)}\), \(f(u)=\frac{u}{4}\).

  6. (6)

    Algorithm 2 in [32] (briefly, vtalg-2): \(\tau _{1} = 0.25\), \(\ell = 0.44\), \(\mu = 0.75\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(5 k + 10)}\), \(\sigma _{k} = \frac{k}{(2 k+1)}\), \(f(u)=\frac{u}{4}\).
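
On the same discretization as in Example 4.3, a minimal MATLAB sketch of the data of Example 4.4 reads (grid size and quadrature are our choices):

```matlab
% Minimal sketch of the discretized operators in Example 4.4.
n = 200;  t = linspace(0, 1, n)';         % uniform grid on [0,1] (our choice)
Nop = @(u) max(u, 0);                     % N(u)(t) = max{u(t), 0}, pointwise
Mop = @(u) t * trapz(t, u);               % M(u)(t) = t * int_0^1 u(s) ds
PD  = @(u) u / max(1, sqrt(trapz(t, u.^2)));  % projection onto the unit L^2 ball
```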

Table 7 Numerical values for Example 4.4
Table 8 Numerical values for Example 4.4

5 Conclusion

We proposed four inertial extragradient-type methods for the numerical solution of monotone variational inequality problems together with fixed-point problems. These methods can be viewed as modified versions of the two-step extragradient method. Two strong convergence theorems were established for the proposed methods, and numerical experiments were reported to confirm their effectiveness over existing methods. The computational results also show that the nonmonotone variable step-size rule continues to improve the efficiency of the iterative sequence in this context.

Availability of data and materials

Not applicable.

References

  1. An, N.T., Nam, N.M., Qin, X.: Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 76(1), 189–209 (2019)


  2. Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange function. Èkon. Mat. Metody 12(6), 1164–1173 (1976)


  3. Ceng, L., Petruşel, A., Qin, X., Yao, J.: A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 21(1), 93–108 (2020)


  4. Ceng, L., Petruşel, A., Qin, X., Yao, J.: Pseudomonotone variational inequalities and fixed points. Fixed Point Theory 22(2), 543–558 (2021)


  5. Ceng, L.-C., Köbis, E., Zhao, X.: On general implicit hybrid iteration method for triple hierarchical variational inequalities with hierarchical variational inequality constraints. Optimization 69(9), 1961–1986 (2019)


  6. Ceng, L.-C., Shang, M.: Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 70(4), 715–740 (2019)


  7. Ceng, L.-C., Yao, J.-C., Shehu, Y.: On Mann implicit composite subgradient extragradient methods for general systems of variational inequalities with hierarchical variational inequality constraints. J. Inequal. Appl. 2022(1) (2022)

  8. Ceng, L.-C., Yuan, Q.: Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl. 2019(1), 1 (2019)


  9. Ceng, L.C., Petruşel, A., Qin, X., Yao, J.C.: Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 70(5–6), 1337–1358 (2020)


  10. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148(2), 318–335 (2010)


  11. Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9), 1119–1132 (2012)


  12. Elliott, C.M.: Variational and quasivariational inequalities: applications to free-boundary problems (Claudio Baiocchi and António Capelo). SIAM Rev. 29(2), 314–315 (1987)


  13. Iiduka, H., Yamada, I.: A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 58(2), 251–261 (2009)


  14. Kassay, G., Kolumbán, J., Páles, Z.: On Nash stationary points. Publ. Math. 54(3–4), 267–279 (1999)


  15. Kassay, G., Kolumbán, J., Páles, Z.: Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 143(2), 377–389 (2002)


  16. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. SIAM, Philadelphia (2000)


  17. Konnov, I.: Equilibrium Models and Variational Inequalities, vol. 210. Elsevier, Amsterdam (2007)


  18. Korpelevich, G.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)


  19. Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163(2), 399–412 (2013)


  20. Maingé, P.-E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47(3), 1499–1515 (2008)


  21. Maingé, P.-E., Moudafi, A.: Coupling viscosity methods with the extragradient algorithm for solving equilibrium problems. J. Nonlinear Convex Anal. 9(2), 283–294 (2008)


  22. Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4(3), 506–510 (1953)


  23. Moudafi, A.: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241(1), 46–55 (2000)


  24. Nagurney, A.: Network Economics: A Variational Inequality Approach. Kluwer Academic, Dordrecht (1999)

  25. Noor, M., Noor, K.: Some new trends in mixed variational inequalities. J. Adv. Math. Stud. 15, 105–140 (2022)


  26. Noor, M.A.: Some iterative methods for nonconvex variational inequalities. Comput. Math. Model. 21(1), 97–108 (2010)


  27. Qin, X., An, N.T.: Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 74(3), 821–850 (2019)


  28. Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal., Theory Methods Appl. 75(2), 742–750 (2012)


  29. Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Hebd. Séances Acad. Sci. 258(18), 4413 (1964)


  30. Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers (2009)


  31. Tan, B., Fan, J., Qin, X.: Inertial extragradient algorithms with non-monotonic step sizes for solving variational inequalities and fixed point problems. Adv. Oper. Theory 6(4) (2021)

  32. Tan, B., Zhou, Z., Li, S.: Viscosity-type inertial extragradient algorithms for solving variational inequality problems and fixed point problems. J. Appl. Math. Comput. (2021)

  33. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000)


  34. ur Rehman, H., Kumam, P., Argyros, I.K., Shutaywi, M., Shah, Z.: Optimization based methods for solving the equilibrium problems with applications in variational inequality problems and solution of Nash equilibrium models. Mathematics 8(5), 822 (2020)


  35. ur Rehman, H., Kumam, P., Kumam, W., Sombut, K.: A new class of inertial algorithms with monotonic step sizes for solving fixed point and variational inequalities. Math. Methods Appl. Sci. 45(16), 9061–9088 (2022)


  36. ur Rehman, H., Kumam, P., Shutaywi, M., Alreshidi, N.A., Kumam, W.: Inertial optimization based two-step methods for solving equilibrium problems with applications in variational inequality problems and growth control equilibrium models. Energies 13(12), 3292 (2020)


  37. ur Rehman, H., Kumam, P., Suleiman, Y.I., Kumam, W.: An adaptive block iterative process for a class of multiple sets split variational inequality problems and common fixed point problems in Hilbert spaces. Numer. Algebra Control Optim. (2022)


  38. ur Rehman, H., Kumam, W., Sombut, K.: Inertial modification using self-adaptive subgradient extragradient techniques for equilibrium programming applied to variational inequalities and fixed-point problems. Mathematics 10(10), 1751 (2022)


  39. Yamada, I., Ogura, N.: Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 25(7–8), 619–655 (2005)


  40. Yao, Y., Shahzad, N., Yao, J.-C.: Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems. Carpath. J. Math. 37(3), 541–550 (2021)


  41. Zhao, T.-Y., Wang, D.-Q., Ceng, L.-C., He, L., Wang, C.-Y., Fan, H.-L.: Quasi-inertial Tseng’s extragradient algorithms for pseudomonotone variational inequalities and fixed point problems of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 42(1), 69–90 (2021)


  42. Zhao, X.P., Yao, Y.: A nonmonotone gradient method for constrained multiobjective optimization problems. J. Nonlinear Var. Anal. 6(6), 693–706 (2022)



Acknowledgements

This research was supported by The Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168). This research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB66E0653S.2).

Funding

This research was supported by The Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168). This research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB66E0653S.2).

Author information


Contributions

WK, writing-original draft preparation. HR, writing-original draft preparation and project administration. PK, methodology and project administration. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Wiyada Kumam.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Kumam, W., Rehman, H.u. & Kumam, P. A new class of computationally efficient algorithms for solving fixed-point problems and variational inequalities in real Hilbert spaces. J Inequal Appl 2023, 48 (2023). https://doi.org/10.1186/s13660-023-02948-8

