A new class of computationally efficient algorithms for solving fixed-point problems and variational inequalities in real Hilbert spaces
Journal of Inequalities and Applications volume 2023, Article number: 48 (2023)
Abstract
A family of inertial extragradient-type algorithms is proposed for solving pseudomonotone variational inequalities with fixed-point constraints over convex sets, where the mapping associated with the fixed-point problem is ρ-demicontractive. Under standard hypotheses, the generated iterative sequences converge strongly to a common solution of the variational inequality and fixed-point problems. Some special cases and sufficient conditions that guarantee the validity of the hypotheses of the convergence statements are also discussed. Detailed numerical experiments illustrate the theoretical results and provide comparisons with existing methods.
1 Introduction
The motivation for studying a common solution problem is its potential application to mathematical models with fixed-point constraints. This is especially true in real-world applications such as signal processing, network resource allocation, image recovery, composite minimization, and optimization; see, for example, [1, 13, 20, 21, 27]. Let us look at the two problems highlighted by this study. Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of a real Hilbert space \(\mathcal{E}\) with inner product \(\langle \cdot , \cdot \rangle \) and induced norm \(\|\cdot \|\). This study contributes significantly by investigating the convergence analysis of iterative algorithms for handling variational inequality problems and fixed-point problems in real Hilbert spaces. Let \(\mathcal{N} : \mathcal{D} \rightarrow \mathcal{E}\) be an operator. Then, the variational inequality problem [29] is to find \(\varpi ^{*} \in \mathcal{D}\) such that
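$$ \bigl\langle \mathcal{N} \bigl(\varpi ^{*} \bigr), y - \varpi ^{*} \bigr\rangle \geq 0, \quad \forall y \in \mathcal{D}. \quad (\text{VIP}) $$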
Consider \(\operatorname{VI}(\mathcal{D}, \mathcal{N})\) to denote the solution set of the problem (VIP). Variational inequalities are used in a number of areas, including partial differential equations, optimization, engineering, applied mathematics, and economics (see [12, 14–17, 24, 30]). The variational inequality problem is important in the applied sciences. Many researchers have investigated not only the existence and stability of solutions but also iterative methods for solving such problems. Projection methods, in particular, are crucial for determining the numerical solution of variational inequalities. Several authors have proposed projection methodologies to solve the problem; see [3, 4, 10, 11, 18, 25, 26, 33, 40–42] and others in [5–9, 34–38]. Most of these algorithms rely on a projection computed on the feasible set \(\mathcal{D}\). The extragradient method was developed by Korpelevich [18] and Antipin [2]. The method has the following form:
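$$ \begin{cases} r_{k} = P_{\mathcal{D}} [ s_{k} - \hbar \mathcal{N}(s_{k}) ], \\ s_{k+1} = P_{\mathcal{D}} [ s_{k} - \hbar \mathcal{N}(r_{k}) ], \end{cases} $$
where \(\hbar \in (0, \frac{1}{L})\) and L is the Lipschitz constant of \(\mathcal{N}\).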
This method needs to compute two projections onto the feasible set \(\mathcal{D}\) at each iteration. In fact, if the feasible set \(\mathcal{D}\) has a complicated structure, the computational efficiency of the method may decline.
The first remedy is the subgradient extragradient method developed by Censor et al. [10], which replaces the second projection onto \(\mathcal{D}\) with a projection onto a half-space. This method takes the following form:
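$$ \begin{cases} r_{k} = P_{\mathcal{D}} [ s_{k} - \hbar \mathcal{N}(s_{k}) ], \\ s_{k+1} = P_{\mathcal{E}_{k}} [ s_{k} - \hbar \mathcal{N}(r_{k}) ], \end{cases} $$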
where
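$$ \mathcal{E}_{k} = \bigl\{ x \in \mathcal{E} : \bigl\langle s_{k} - \hbar \mathcal{N}(s_{k}) - r_{k}, x - r_{k} \bigr\rangle \leq 0 \bigr\} . $$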
Next, we consider the strong convergence analysis of Tseng's extragradient method [33], which uses only one projection onto the feasible set per iteration:
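$$ \begin{cases} r_{k} = P_{\mathcal{D}} [ s_{k} - \hbar \mathcal{N}(s_{k}) ], \\ s_{k+1} = r_{k} + \hbar [ \mathcal{N}(s_{k}) - \mathcal{N}(r_{k}) ]. \end{cases} \quad (1.3) $$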
In terms of computation, the method (1.3) is especially efficient since it requires the solution of only one minimization problem per iteration. As a result, the method (1.3) incurs no additional computational cost and performs better in most situations. Suppose that \(\mathcal{M} : \mathcal{E} \rightarrow \mathcal{E}\) is a mapping. The fixed-point problem for the mapping \(\mathcal{M}\) is to find \(\varpi ^{*} \in \mathcal{E}\) such that
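$$ \mathcal{M} \bigl(\varpi ^{*} \bigr) = \varpi ^{*}. \quad (\text{FP}) $$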
The solution set of the fixed-point problem (FP) is denoted by \(\operatorname{Fix}(\mathcal{M})\). Most methods for solving problem (FP) are derived from the basic Mann iteration: starting from \(s_{1} \in \mathcal{E}\), the sequence \(\{s_{k}\}\) is generated for every \(k \geq 1\) by
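$$ s_{k+1} = (1 - \sigma _{k}) s_{k} + \sigma _{k} \mathcal{M}(s_{k}). $$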
To achieve weak convergence, the parameter sequence \(\{\sigma _{k}\}\) must satisfy certain criteria. The Halpern iteration is another structured iterative method that is more effective at achieving strong convergence in infinite-dimensional Hilbert spaces. The iterative sequence is as follows:
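$$ s_{k+1} = \sigma _{k} s_{1} + (1 - \sigma _{k}) \mathcal{M}(s_{k}), $$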
where \(s_{1} \in \mathcal{E}\) and the sequence \(\{\sigma _{k}\} \subset (0, 1)\) is nonsummable and tends to zero, i.e.,
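$$ \lim_{k \rightarrow +\infty} \sigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \sigma _{k} = +\infty . $$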
Furthermore, the viscosity algorithm [23], in which the cost mapping \(\mathcal{M}\) is combined with a contraction mapping in the iterates, is a generic variant of the Halpern iteration. Finally, the hybrid steepest-descent method introduced in [39] is another methodology that provides strong convergence.
Tan et al. [31, 32] developed innovative numerical algorithms, namely extragradient viscosity algorithms, for solving variational inequalities involving a fixed-point constraint with a ρ-demicontractive mapping, by combining the extragradient algorithm [10, 18] with the Mann-type technique [22]. The authors showed the strong convergence of all methods under the assumptions that the operator is monotone and satisfies a Lipschitz condition. These techniques have the advantage of being numerically implementable with optimization tools, as shown in [31, 32].
The fundamental issue with these methods is that they rely on viscosity and Mann-type techniques to achieve strong convergence. As is well known, strong convergence of iterative sequences is important, especially in infinite-dimensional domains, yet only a few strongly convergent methods employ inertial schemes. The Mann and viscosity steps may be difficult to evaluate from an algorithmic perspective, which affects the convergence speed and usefulness of an algorithm. They also increase the number of numerical and computational steps, making the scheme more complicated.
As a result, the following straightforward question arises:
Is it possible to design self-adaptive strongly convergent inertial extragradient algorithms that do not rely on Mann- and viscosity-type methods for solving variational inequalities and fixed-point problems?
We respond to the above question by constructing two strongly convergent extragradient-type algorithms for solving monotone variational inequalities and the ρ-demicontractive fixed-point problem in real Hilbert spaces, inspired by the studies described in [31, 32]. Furthermore, we avoid employing any hybrid schemes, such as the Mann-type scheme and the viscosity scheme, to achieve the strong convergence of these methods. Instead, we present novel strongly convergent algorithms that make use of inertial mechanisms.
The paper is organized as follows. Section 2 presents some basic results. Section 3 introduces four different methods and establishes their convergence. Finally, Sect. 4 provides some numerical experiments to demonstrate the practical use of the proposed methods.
2 Preliminaries
Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{E}\), the real Hilbert space. For any \(s, r \in \mathcal{E}\), we have
(i) \(\|s + r\|^{2} = \|s\|^{2} + 2 \langle s, r \rangle + \|r\|^{2}\);
(ii) \(\|s + r\|^{2} \leq \|s\|^{2} + 2 \langle r, s + r \rangle \);
(iii) \(\|b s + (1 - b) r \|^{2} = b \|s\|^{2} + (1 - b)\| r \|^{2} - b(1 - b) \|s - r\|^{2}\).
A metric projection \(P_{\mathcal{D}}(s)\) of \(s \in \mathcal{E}\) is defined by
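$$ P_{\mathcal{D}}(s) = \operatorname*{arg\,min}_{r \in \mathcal{D}} \Vert s - r \Vert . $$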
It is well known that \(P_{\mathcal{D}}\) is nonexpansive and possesses the following important properties:
(1) \(\langle s - P_{\mathcal{D}}(s), r - P_{\mathcal{D}}(s) \rangle \leq 0\), \(\forall r \in \mathcal{D}\);
(2) \(\|P_{\mathcal{D}}(s) - P_{\mathcal{D}}(r)\|^{2} \leq \langle P_{ \mathcal{D}}(s) - P_{\mathcal{D}}(r), s - r \rangle \), \(\forall s, r \in \mathcal{E}\).
Definition 2.1
Let \(\mathcal{M} : \mathcal{E} \rightarrow \mathcal{E}\) be a nonlinear mapping with \(\operatorname{Fix}(\mathcal{M}) \neq \emptyset \). Then, \(I - \mathcal{M}\) is said to be demiclosed at zero if, for any sequence \(\{s_{k}\}\) in \(\mathcal{E}\), the following implication holds:
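$$ s_{k} \rightharpoonup s \quad \text{and} \quad (I - \mathcal{M}) (s_{k}) \rightarrow 0 \quad \Longrightarrow \quad s \in \operatorname{Fix}(\mathcal{M}). $$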
Definition 2.2
Let \(\mathcal{N} : \mathcal{D} \rightarrow \mathcal{D}\) be an operator. It is said to be:
(1) monotone if
$$ \bigl\langle \mathcal{N}(s_{1}) - \mathcal{N}(s_{2}), s_{1} - s_{2} \bigr\rangle \geq 0, \quad \forall s_{1}, s_{2} \in \mathcal{D}; $$
(2) Lipschitz continuous if there exists a constant \(L > 0\) such that
$$ \bigl\Vert \mathcal{N}(s_{1}) - \mathcal{N}(s_{2}) \bigr\Vert \leq L \Vert s_{1} - s_{2} \Vert , \quad \forall s_{1}, s_{2} \in \mathcal{D}; $$
(3) sequentially weakly continuous if \(\{\mathcal{N}(s_{k})\}\) converges weakly to \(\mathcal{N}(s)\) for every sequence \(\{s_{k}\}\) converging weakly to s.
Definition 2.3
Suppose that \(\mathcal{M} : \mathcal{D} \rightarrow \mathcal{D}\) is a mapping with \(\operatorname{Fix}(\mathcal{M}) \neq \emptyset \). It is said to be ρ-demicontractive if there exists a fixed number \(0 \leq \rho < 1\) such that
$$ \bigl\Vert \mathcal{M}(s_{1}) - s_{2} \bigr\Vert ^{2} \leq \Vert s_{1} - s_{2} \Vert ^{2} + \rho \bigl\Vert (I - \mathcal{M}) (s_{1}) \bigr\Vert ^{2}, \quad \forall s_{2} \in \operatorname{Fix}( \mathcal{M}), s_{1} \in \mathcal{E}; $$
or, equivalently,
$$ \bigl\langle \mathcal{M}(s_{1}) - s_{1}, s_{1} - s_{2} \bigr\rangle \leq \frac{\rho - 1}{2} \bigl\Vert s_{1} - \mathcal{M}(s_{1}) \bigr\Vert ^{2}, \quad \forall s_{2} \in \operatorname{Fix}( \mathcal{M}), s_{1} \in \mathcal{E}. $$
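The equivalence of the two formulations follows from identity (i) above: expanding
$$ \bigl\Vert \mathcal{M}(s_{1}) - s_{2} \bigr\Vert ^{2} = \Vert s_{1} - s_{2} \Vert ^{2} + 2 \bigl\langle \mathcal{M}(s_{1}) - s_{1}, s_{1} - s_{2} \bigr\rangle + \bigl\Vert \mathcal{M}(s_{1}) - s_{1} \bigr\Vert ^{2} $$
shows that the first inequality holds if and only if \(2 \langle \mathcal{M}(s_{1}) - s_{1}, s_{1} - s_{2} \rangle \leq (\rho - 1) \Vert s_{1} - \mathcal{M}(s_{1}) \Vert ^{2}\), which is precisely the second.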
Lemma 2.4
([19])
Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an L-Lipschitz continuous and monotone operator on \(\mathcal{D}\). Take \(\mathcal{M} = P_{\mathcal{D}}(I - \hbar \mathcal{N})\), with \(\hbar > 0\). If \(\{s_{k}\}\) is a sequence in \(\mathcal{E}\) that satisfies \(s_{k} \rightharpoonup q\) and \(s_{k} - \mathcal{M}(s_{k}) \rightarrow 0\), then \(q \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) = \operatorname{Fix}(\mathcal{M})\).
Lemma 2.5
([28])
Suppose that \(\{c_{k}\} \subset [0, +\infty )\), \(\{d_{k}\} \subset (0, 1)\), and \(\{e_{k}\} \subset \mathbb{R}\) are sequences satisfying the following criteria:
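$$ c_{k+1} \leq (1 - d_{k}) c_{k} + d_{k} e_{k}, \quad \forall k \geq 1, \quad \text{and} \quad \sum_{k=1}^{+\infty} d_{k} = +\infty . $$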
If \(\limsup_{j \rightarrow +\infty} e_{k_{j}} \leq 0\) for every subsequence \(\{c_{k_{j}}\}\) of \(\{c_{k}\}\) such that
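$$ \liminf_{j \rightarrow +\infty} ( c_{k_{j}+1} - c_{k_{j}} ) \geq 0, $$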
then \(\lim_{k \rightarrow \infty} c_{k} = 0\).
3 Main results
In this section, we examine in detail the convergence of four novel inertial extragradient algorithms for solving the fixed-point and variational inequality problems. First, we state the algorithms. In order to establish strong convergence, it is assumed that the following conditions are satisfied:
(\(\mathcal{N}\)1) The common solution set is denoted by \(\operatorname{Fix}(\mathcal{M}) \cap \operatorname{VI}(\mathcal{D}, \mathcal{N})\) and it is nonempty;
(\(\mathcal{N}\)2) The operator \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) is monotone;
(\(\mathcal{N}\)3) The operator \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) is Lipschitz continuous;
(\(\mathcal{N}\)4) The mapping \(\mathcal{M} : \mathcal{E} \rightarrow \mathcal{E}\) is ρ-demicontractive for \(0 \leq \rho < 1\) and demiclosed at zero;
(\(\mathcal{N}\)5) The operator \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) is sequentially weakly continuous.
Algorithm 1
(Inertial Subgradient Extragradient Method With Constant Step-Size Rule)
STEP 0: Take \(s_{0}, s_{1} \in \mathcal{D}\), \(\ell \in (0, 1)\), \(0 < \hbar < \frac{1}{L}\) and a sequence \(\{\varsigma _{k}\} \subset (0, 1 - \rho )\) that satisfies the following condition:
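$$ \lim_{k \rightarrow +\infty} \varsigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \varsigma _{k} = +\infty . $$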
STEP 1: Calculate
where \(\ell _{k}\) is defined as follows:
Moreover, take a sequence \(\chi _{k} = o(\varsigma _{k})\), i.e., one satisfying the condition \(\lim_{k \rightarrow +\infty} \frac{\chi _{k}}{\varsigma _{k}} = 0\).
STEP 2: Calculate
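$$ r_{k} = P_{\mathcal{D}} \bigl[ q_{k} - \hbar \mathcal{N}(q_{k}) \bigr] . $$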
If \(q_{k} = r_{k}\), then STOP. Otherwise, go to STEP 3.
STEP 3: First, construct the half-space
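$$ \mathcal{E}_{k} = \bigl\{ x \in \mathcal{E} : \bigl\langle q_{k} - \hbar \mathcal{N}(q_{k}) - r_{k}, x - r_{k} \bigr\rangle \leq 0 \bigr\} $$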
and \(p_{k} = P_{\mathcal{E}_{k}} (q_{k} - \hbar \mathcal{N} (r_{k})) \).
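A practical advantage of STEP 3 is that, unlike the projection onto \(\mathcal{D}\), the projection onto the half-space \(\mathcal{E}_{k}\) admits a closed form. A minimal NumPy sketch of this step is given below; the function and variable names are illustrative and not taken from the paper:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto the half-space H = {y : <a, y> <= b} (closed form)."""
    gap = a @ x - b
    if gap <= 0:
        return x                      # x already lies in H
    return x - (gap / (a @ a)) * a    # step back along the normal direction a

# For E_k = {x : <a_k, x - r_k> <= 0} with a_k = q_k - hbar * N(q_k) - r_k,
# STEP 3 becomes (N_map denotes the operator N):
# a_k = q_k - hbar * N_map(q_k) - r_k
# p_k = project_halfspace(q_k - hbar * N_map(r_k), a_k, a_k @ r_k)
```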
STEP 4: Given a sequence \(\{\sigma _{k}\} \subset (0, 1 - \rho )\), calculate
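$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$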
Set \(k := k + 1\) and go to STEP 1.
Algorithm 2
(Inertial Subgradient Extragradient Method With Nonmonotone Step-Size Rule)
STEP 0: Take \(s_{0}, s_{1} \in \mathcal{D}\), \(\ell \in (0, 1)\), \(\mu \in (0, 1)\), \(\hbar _{1} > 0\). Moreover, a sequence \(\{\beth _{k}\}\) such that \(\sum_{k=1}^{\infty} \beth _{k} < +\infty \), and a sequence \(\{\varsigma _{k}\} \subset (0, 1 - \rho )\) that satisfies the following condition:
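$$ \lim_{k \rightarrow +\infty} \varsigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \varsigma _{k} = +\infty . $$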
STEP 1: Calculate
where \(\ell _{k}\) is defined as follows:
Moreover, take a sequence \(\chi _{k} = o(\varsigma _{k})\) satisfying the condition \(\lim_{k \rightarrow +\infty} \frac{\chi _{k}}{\varsigma _{k}} = 0\).
STEP 2: Calculate
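$$ r_{k} = P_{\mathcal{D}} \bigl[ q_{k} - \hbar _{k} \mathcal{N}(q_{k}) \bigr] . $$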
If \(q_{k} = r_{k}\), then STOP. Otherwise, go to STEP 3.
STEP 3: First, create the half-space
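$$ \mathcal{E}_{k} = \bigl\{ x \in \mathcal{E} : \bigl\langle q_{k} - \hbar _{k} \mathcal{N}(q_{k}) - r_{k}, x - r_{k} \bigr\rangle \leq 0 \bigr\} $$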
and calculate \(p_{k} = P_{\mathcal{E}_{k}} (q_{k} - \hbar _{k} \mathcal{N} (r_{k})) \).
STEP 4: Given a sequence \(\{\sigma _{k}\} \subset (0, 1 - \rho )\), calculate
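$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$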
STEP 5: Calculate
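$$ \hbar _{k+1} = \begin{cases} \min \bigl\{ \frac{\mu ( \Vert q_{k} - r_{k} \Vert ^{2} + \Vert p_{k} - r_{k} \Vert ^{2} )}{2 \langle \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}), p_{k} - r_{k} \rangle }, \hbar _{k} + \beth _{k} \bigr\} & \text{if } \langle \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}), p_{k} - r_{k} \rangle > 0, \\ \hbar _{k} + \beth _{k} & \text{otherwise.} \end{cases} \quad (3.3) $$
This min-type nonmonotone form of (3.3) is presumed here; it is consistent with the estimate established in Lemma 3.1 and with the analogous rules in [31, 32].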
Set \(k := k + 1\) and go to STEP 1.
Lemma 3.1
A sequence \(\{\hbar _{k} \}\) generated by the expression (3.3) is convergent to ħ and bounded by \(\min \{\frac{\mu}{L}, \hbar _{1} \} \leq \hbar \leq \hbar _{1} + P\), where \(P = \sum_{k=1}^{+\infty} \beth _{k}\).
Proof
In the case \(\langle \mathcal{N}(q_{k})- \mathcal{N}(r_{k}), p_{k} - r_{k} \rangle > 0\), the Lipschitz continuity of \(\mathcal{N}\) gives
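$$ 2 \bigl\langle \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}), p_{k} - r_{k} \bigr\rangle \leq 2 L \Vert q_{k} - r_{k} \Vert \Vert p_{k} - r_{k} \Vert \leq L \bigl( \Vert q_{k} - r_{k} \Vert ^{2} + \Vert p_{k} - r_{k} \Vert ^{2} \bigr), $$
so that \(\frac{\mu ( \Vert q_{k} - r_{k} \Vert ^{2} + \Vert p_{k} - r_{k} \Vert ^{2} )}{2 \langle \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}), p_{k} - r_{k} \rangle } \geq \frac{\mu}{L}\), under the min-type form of (3.3) presumed above.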
By definition of \(\hbar _{k+1}\), we have
Let
and
By using the definition of \(\{\hbar _{k}\}\), we have
This implies that the series \(\sum_{k=1}^{+\infty} (\hbar _{k+1} - \hbar _{k})^{+}\) is convergent. Following that, we must demonstrate the convergence of
Let us consider that \(\sum_{k=1}^{+\infty} (\hbar _{k+1} - \hbar _{k})^{-} = +\infty \). Thus, we obtain
Letting \(k \rightarrow +\infty \) in the formulation (3.6), we obtain \(\hbar _{k} \rightarrow - \infty \) as \(k \rightarrow \infty \), which is a contradiction. Due to the convergence of the series \(\sum_{k=1}^{+\infty} (\hbar _{k+1} - \hbar _{k})^{+}\) and \(\sum_{k=1}^{+\infty} (\hbar _{k+1} - \hbar _{k})^{-}\), taking \(k \rightarrow +\infty \) in (3.6), we obtain \(\lim_{k \rightarrow \infty} \hbar _{k} = \hbar \). This completes the proof of the lemma. □
Algorithm 3
(Inertial Tseng’s Extragradient Method With Constant Step-Size Rule)
STEP 0: Consider \(s_{0}, s_{1} \in \mathcal{D}\), \(\ell \in (0, 1)\), \(\mu \in (0, 1)\), \(0 < \hbar < \frac{1}{L}\) and a sequence \(\{\varsigma _{k}\} \subset (0, 1 - \rho )\) that satisfies the following condition:
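$$ \lim_{k \rightarrow +\infty} \varsigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \varsigma _{k} = +\infty . $$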
STEP 1: Calculate
where \(\ell _{k}\) is defined as follows:
Moreover, take a sequence \(\chi _{k} = o(\varsigma _{k})\) satisfying the condition \(\lim_{k \rightarrow +\infty} \frac{\chi _{k}}{\varsigma _{k}} = 0\).
STEP 2: Calculate
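$$ r_{k} = P_{\mathcal{D}} \bigl[ q_{k} - \hbar \mathcal{N}(q_{k}) \bigr] . $$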
If \(q_{k} = r_{k}\), then STOP. Otherwise, go to STEP 3.
STEP 3: Calculate
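In the standard Tseng (forward-backward-forward) form, this step reads:
$$ p_{k} = r_{k} + \hbar \bigl[ \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}) \bigr] . $$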
STEP 4: Given a sequence \(\{\sigma _{k}\} \subset (0, 1 - \rho )\), calculate
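$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$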
Set \(k := k + 1\) and go back to STEP 1.
Algorithm 4
(Inertial Tseng’s Extragradient Method With Nonmonotone Step-Size Rule)
STEP 0: Consider \(s_{0}, s_{1} \in \mathcal{D}\), \(\ell \in (0, 1)\), \(\mu \in (0, 1)\), \(\hbar _{1} > 0\). Moreover, \(\{\beth _{k}\}\) such that \(\sum_{k=1}^{\infty} \beth _{k} < +\infty \) and a sequence \(\{\varsigma _{k}\} \subset (0, 1 - \rho )\) that satisfies the following condition:
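$$ \lim_{k \rightarrow +\infty} \varsigma _{k} = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \varsigma _{k} = +\infty . $$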
STEP 1: Calculate
where \(\ell _{k}\) is defined as follows:
Moreover, take a sequence \(\chi _{k} = o(\varsigma _{k})\) satisfying the condition \(\lim_{k \rightarrow +\infty} \frac{\chi _{k}}{\varsigma _{k}} = 0\).
STEP 2: Calculate
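$$ r_{k} = P_{\mathcal{D}} \bigl[ q_{k} - \hbar _{k} \mathcal{N}(q_{k}) \bigr] . $$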
If \(q_{k} = r_{k}\), then STOP. Otherwise, go to STEP 3.
STEP 3: Calculate
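As in Algorithm 3, in the standard Tseng form:
$$ p_{k} = r_{k} + \hbar _{k} \bigl[ \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}) \bigr] . $$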
STEP 4: Given a sequence \(\{\sigma _{k}\} \subset (0, 1 - \rho )\), calculate
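$$ s_{k+1} = (1 - \sigma _{k}) p_{k} + \sigma _{k} \mathcal{M}(p_{k}). $$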
STEP 5: Compute
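$$ \hbar _{k+1} = \begin{cases} \min \bigl\{ \frac{\mu \Vert q_{k} - r_{k} \Vert }{ \Vert \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}) \Vert }, \hbar _{k} + \beth _{k} \bigr\} & \text{if } \mathcal{N}(q_{k}) \neq \mathcal{N}(r_{k}), \\ \hbar _{k} + \beth _{k} & \text{otherwise.} \end{cases} \quad (3.9) $$
This Tseng-type min form of (3.9) is presumed here; it is consistent with the Lipschitz bound used in Lemma 3.2.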
Set \(k := k + 1\) and go back to STEP 1.
Lemma 3.2
A sequence \(\{\hbar _{k} \}\) generated by the expression (3.9) is convergent to ħ and bounded by \(\min \{\frac{\mu}{L}, \hbar _{1} \} \leq \hbar \leq \hbar _{1} + P\), where \(P = \sum_{k=1}^{+\infty} \beth _{k}\).
Proof
It is given that the mapping \(\mathcal{N}\) is Lipschitz continuous. Thus, we have
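$$ \frac{\mu \Vert q_{k} - r_{k} \Vert }{ \Vert \mathcal{N}(q_{k}) - \mathcal{N}(r_{k}) \Vert } \geq \frac{\mu \Vert q_{k} - r_{k} \Vert }{L \Vert q_{k} - r_{k} \Vert } = \frac{\mu}{L} \quad \text{whenever } \mathcal{N}(q_{k}) \neq \mathcal{N}(r_{k}), $$
under the min-type form of (3.9) presumed above.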
The remainder of the proof is similar to that of Lemma 3.1. □
Lemma 3.3
Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 2. For any \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we have
Proof
Consider that
Since \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) \subset \mathcal{D} \subset \mathcal{E}_{k}\), we have
Moreover, we have
By combining expressions (3.11) and (3.13), we obtain
Furthermore, we have
Since \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we obtain
By substituting \(y = r_{k} \in \mathcal{D}\), we have
Thus, this implies that
We obtain by combining formulas (3.14) and (3.15)
By using expression \(p_{k} = P_{\mathcal{E}_{k}} [q_{k} - \hbar _{k} \mathcal{N}(r_{k})]\), we have
From expressions (3.16) and (3.17) we can obtain
□
Lemma 3.4
Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 1. For any \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we have
Proof
The proof is similar to the proof of Lemma 3.3. □
Lemma 3.5
Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 4. For any \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we have
Proof
Consider the following:
Furthermore, we can write
For given \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we can write
By combining expressions (3.19) and (3.21), we can obtain
By using the properties of the mapping \(\mathcal{N}\) on \(\mathcal{D}\), we can obtain
By using \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we obtain
By substituting \(y = r_{k} \in \mathcal{D}\), we can write
From expressions (3.22) and (3.23) we can obtain
□
Lemma 3.6
Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Suppose that \(\{s_{k}\}\) is a sequence generated by Algorithm 3. For any \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\), we have
Proof
The proof is similar to the proof of Lemma 3.5. □
Theorem 3.7
Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Then, the sequence \(\{s_{k}\}\) generated by Algorithm 2 converges strongly to some \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})\), where \(\varpi ^{*} = P_{\operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})} (0)\).
Proof
Claim 1: The sequence \(\{s_{k}\}\) is bounded.
It is given that
Combining this with the definition of the sequence \(\{s_{k+1}\}\), we derive
By using the value of \(\{q_{k}\}\), we obtain
for some fixed number \(K_{1}\) we have
Since \(\hbar _{k} \rightarrow \hbar \), there exists a fixed number \(\vartheta \in (0, 1 - \mu )\) such that
As a result, there exists a finite natural number \(N_{1} \in \mathbb{N}\) such that
By using Lemma 3.3, we can write
From expressions (3.25), (3.27), and (3.29) we infer that
Recall that \(\{\sigma _{k}\} \subset (0, 1 - \rho )\); hence
This implies that the sequence \(\{s_{k}\}\) is a bounded sequence.
Claim 2:
for some \(K_{2} > 0\). By using the definition of \(\{s_{k+1}\}\), we have
By using the expression (3.18), we can derive
Thus, the expression (3.27) implies that
for \(K_{2} > 0\). Combining expressions (3.33), (3.34), and (3.35), we obtain
Claim 3:
By using the definition of \(\{q_{k}\}\), we obtain
Combining expressions (3.29) and (3.37), we obtain
Claim 4: The sequence \(\{ \Vert s_{k} - \varpi ^{*} \Vert ^{2} \}\) converges to zero.
Suppose that
and
Then, Claim 4 can be rewritten as follows:
Indeed, by Lemma 2.5, it suffices to prove that \(\limsup_{j \rightarrow \infty} e_{k_{j}} \leq 0\) for any subsequence \(\{c_{k_{j}}\}\) of \(\{c_{k}\}\) satisfying
This is equivalent to demonstrating that
and
for each subsequence \(\{\|s_{k_{j}} - \varpi ^{*}\|\}\) of \(\{\|s_{k} - \varpi ^{*}\|\}\) satisfying
Suppose that \(\{\|s_{k_{j}} - \varpi ^{*}\|\}\) is a subsequence of \(\{\|s_{k} - \varpi ^{*}\|\}\) satisfying
Then,
By using Claim 2 that
the above relationship suggests that
Thus, we obtain
Next, we compute the following:
This, together with \(\lim_{j \rightarrow \infty} \| p_{k_{j}} - q_{k_{j}}\| = 0\), yields that
From the definition of \(s_{k_{j}+1} = (1 - \sigma _{k_{j}}) p_{k_{j}} + \sigma _{k_{j}} \mathcal{M}(p_{k_{j}})\), one sees that
Thus, we obtain
The above expression implies that
and
This implies that the sequence \(\{s_{k_{j}}\}\) is bounded. Hence, passing to a further subsequence if necessary, we may assume that \(\{s_{k_{j}}\}\) converges weakly to some \(\hat{u} \in \mathcal{E}\). By using the value \(\varpi ^{*} = P_{\operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})} (0)\), we have
Expression (3.43) shows that \(\{q_{k_{j}}\}\) also converges weakly to \(\hat{u} \in \mathcal{E}\). By using the expression (3.41), \(\lim_{k \rightarrow \infty} \hbar _{k} = \hbar \), and Lemma 2.4, one concludes that \(\hat{u} \in \operatorname{VI}(\mathcal{D}, \mathcal{N})\). It follows from (3.44) that \(\{p_{k_{j}}\}\) converges weakly to \(\hat{u} \in \mathcal{E}\). By the demiclosedness of \(I - \mathcal{M}\), we derive that \(\hat{u} \in \operatorname{Fix}(\mathcal{M})\). This implies that \(\hat{u} \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})\). Thus, we obtain
Next, we can use \(\lim_{j \rightarrow \infty} \lVert s_{k_{j} + 1} - s_{k_{j}} \rVert = 0\). Thus, we can write
By using Claim 3 and Lemma 2.5, we see that \(s_{k} \rightarrow \varpi ^{*}\) as \(k \rightarrow \infty \). This completes the proof of the theorem. □
Theorem 3.8
Let \(\mathcal{N} : \mathcal{E} \rightarrow \mathcal{E}\) be an operator that satisfies the conditions (\(\mathcal{N}\)1)–(\(\mathcal{N}\)5). Then, the sequence \(\{s_{k}\}\) generated by Algorithm 4 converges strongly to \(\varpi ^{*} \in \operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})\), where \(\varpi ^{*} = P_{\operatorname{VI}(\mathcal{D}, \mathcal{N}) \cap \operatorname{Fix} (\mathcal{M})} (0)\).
Proof
Claim 1: The sequence \(\{s_{k}\}\) is bounded.
Let us consider that
By using the definition of a sequence \(\{s_{k+1}\}\), we have
By using the value of \(\{q_{k}\}\), we obtain
for some fixed number \(M_{1}\) we have
Since \(\hbar _{k} \rightarrow \hbar \), there exists a fixed number \(\chi \in (0, 1 - \mu ^{2})\) such that
Thus, there exists some fixed \(k_{0} \in \mathbb{N}\) such that
By using Lemma 3.5, we can rewrite
From expressions (3.52), (3.54), and (3.56) we infer that
Thus, for \(\{\sigma _{k}\} \subset (0, 1 - \rho )\), we obtain
Consequently, we may infer that the sequence \(\{s_{k}\}\) is a bounded sequence.
Claim 2:
for some fixed \(M_{2} > 0\). Indeed, by using the definition of \(\{s_{k+1}\}\), we have
By using Lemma 3.5, we obtain
By using expression (3.54), we can obtain
for some fixed constant \(M_{2} > 0\). From expressions (3.60), (3.61), and (3.62) we obtain
Claim 3:
Using the value of \(\{q_{k}\}\), we can write:
Combining expressions (3.56) and (3.64), we obtain
Claim 4: The sequence \(\{ \Vert s_{k} - \varpi ^{*} \Vert ^{2} \}\) converges to zero.
Set
and
Then, Claim 4 can be rewritten as follows:
By Lemma 2.5, it suffices to show that \(\limsup_{j \rightarrow \infty} e_{k_{j}} \leq 0\) for every subsequence \(\{c_{k_{j}}\}\) of \(\{c_{k}\}\) such that
This is equivalent to stating that
and
for each subsequence \(\{\|s_{k_{j}} - \varpi ^{*}\|\}\) of \(\{\|s_{k} - \varpi ^{*}\|\}\) satisfying
Suppose that \(\{\|s_{k_{j}} - \varpi ^{*}\|\}\) is a subsequence of \(\{\|s_{k} - \varpi ^{*}\|\}\) satisfying
then, we have
As a result of Claim 2 that
the above relationship implies that
It follows that
Thus, we have
The remaining proof is similar to Claim 4 of Theorem 3.7. As a result, we omit it here and this completes the proof of the theorem. □
4 Numerical illustrations
In contrast to some previous works in the literature, this section discusses the algorithmic implications of the proposed techniques, together with a study of how changes in the control parameters affect the numerical efficiency of the recommended algorithms. All computations are performed in MATLAB R2018b on an HP laptop with an Intel Core(TM) i5-6200 processor and 8.00 GB (7.78 GB usable) RAM.
Example 4.1
Consider a mapping \(\mathcal{N} : \mathbb{R}^{m} \to \mathbb{R}^{m}\) of the affine form \(\mathcal{N}(s) = M s + q\), where \(q = 0\). The matrices \(N = \operatorname{rand}(m)\) and \(K = \operatorname{rand}(m)\) are chosen randomly, whereas the remaining matrices in the construction of M are generated from them.
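A short NumPy sketch of the classical construction for this test problem is given below; the skew-symmetric and diagonal choices are assumptions consistent with the stated properties (monotonicity and \(L = \Vert M \Vert \)), not taken verbatim from the paper:

```python
import numpy as np

m = 100                               # dimension (illustrative)
rng = np.random.default_rng(0)
N = rng.random((m, m))                # N = rand(m)
K = rng.random((m, m))                # K = rand(m)
S = K - K.T                           # skew-symmetric matrix built from K (assumption)
D = np.diag(rng.random(m))            # diagonal matrix with entries in [0, 1) (assumption)
M = N @ N.T + S + D                   # symmetric part is PSD, so N(s) = Ms + q is monotone
q = np.zeros(m)                       # q = 0, as in the example

def N_map(s):
    """The cost operator N(s) = Ms + q."""
    return M @ s + q

L = np.linalg.norm(M, 2)              # Lipschitz constant L = ||M|| (spectral norm)
```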
The feasible set \(\mathcal{D}\) is defined as follows:
It is evident that the mapping \(\mathcal{N}\) is monotone and Lipschitz continuous with constant \(L = \|M\|\). Moreover, the function \(\mathcal{M} : \mathcal{E} \rightarrow \mathcal{E}\) is considered as follows:
The starting points for these tests are \(s_{0} = s_{1}= (2, 2, \ldots , 2)\). The dimension of the Hilbert space is varied in order to study the behavior in higher-dimensional settings. The stopping criterion for these experiments is \(D_{k} = \|q_{k} - r_{k}\| \leq 10^{-10}\). Figures 1–6 and Tables 1 and 2 illustrate the empirical observations for Example 4.1. The following control criteria are in effect:
(1) Algorithm 2 (alg-1): \(\hbar _{1} = 0.55\), \(\ell = 0.45\), \(\mu = 0.44\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(2 k + 4)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\).
(2) Algorithm 4 (alg-2): \(\hbar _{1} = 0.55\), \(\ell = 0.45\), \(\mu = 0.44\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(2 k + 4)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\).
(3) Algorithm 1 in [31] (mtalg-1): \(\gamma _{1} = 0.55\), \(\delta = 0.45\), \(\phi = 0.44\), \(\ell _{k} = \frac{1}{(2 k + 4)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\).
(4) Algorithm 2 in [31] (mtalg-2): \(\gamma _{1} = 0.55\), \(\delta = 0.45\), \(\phi = 0.44\), \(\ell _{k} = \frac{1}{(2 k + 4)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\).
(5) Algorithm 1 in [32] (vtalg-1): \(\tau _{1} = 0.55\), \(\ell = 0.45\), \(\mu = 0.44\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(2 k + 4)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\), \(f(u)=\frac{u}{2}\).
(6) Algorithm 2 in [32] (vtalg-2): \(\tau _{1} = 0.55\), \(\ell = 0.45\), \(\mu = 0.44\), \(\chi _{k} = \frac{100}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(2 k + 4)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\), \(f(u)=\frac{u}{2}\).
Example 4.2
Consider a nonlinear mapping \(\mathcal{N} : \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}\) described by
Furthermore, the feasible set \(\mathcal{D}\) is written as follows:
It is simple to demonstrate that \(\mathcal{N}\) is monotone and Lipschitz continuous with constant \(L = 3\). Suppose that another mapping \(\mathcal{M} : \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}\) is described as follows:
where E is defined by:
The mapping \(\mathcal{M}\) is clearly ρ-demicontractive with \(\rho = 0\). The starting points for this experiment are chosen differently, and the stopping criterion is \(D_{k} = \|q_{k} - r_{k}\| \leq 10^{-10}\). Figures 7–14 and Tables 3 and 4 demonstrate quantitative data for Example 4.2. The following control criteria are in effect:
(1) Algorithm 2 (briefly, alg-1): \(\hbar _{1} = 0.45\), \(\ell = 0.35\), \(\mu = 0.33\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(3 k + 6)}\), \(\sigma _{k} = \frac{k}{(3k+1)}\).
(2) Algorithm 4 (briefly, alg-2): \(\hbar _{1} = 0.45\), \(\ell = 0.35\), \(\mu = 0.33\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(3 k + 4)}\), \(\sigma _{k} = \frac{k}{(3k+1)}\).
(3) Algorithm 1 in [31] (briefly, mtalg-1): \(\gamma _{1} = 0.45\), \(\delta = 0.35\), \(\phi = 0.33\), \(\ell _{k} = \frac{1}{(3 k + 6)}\), \(\hbar _{k} = \frac{1}{2.5} (1 - \ell _{k})\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\).
(4) Algorithm 2 in [31] (briefly, mtalg-2): \(\gamma _{1} = 0.45\), \(\delta = 0.35\), \(\phi = 0.33\), \(\ell _{k} = \frac{1}{(3 k + 6)}\), \(\hbar _{k} = \frac{1}{2.5} (1 - \ell _{k})\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\).
(5) Algorithm 1 in [32] (briefly, vtalg-1): \(\tau _{1} = 0.45\), \(\ell = 0.35\), \(\mu = 0.33\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(3 k + 6)}\), \(\sigma _{k} = \frac{k}{(3k+1)}\), \(f(u)=\frac{u}{2}\).
(6) Algorithm 2 in [32] (briefly, vtalg-2): \(\tau _{1} = 0.45\), \(\ell = 0.35\), \(\mu = 0.33\), \(\chi _{k} = \frac{10}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(3 k + 6)}\), \(\sigma _{k} = \frac{k}{(3k+1)}\), \(f(u)=\frac{u}{2}\).
Example 4.3
Take the following set:
Let \(\mathcal{N} : \mathcal{D} \rightarrow \mathcal{E}\) be an operator described through
where
In this case, \(\mathcal{E} = L^{2}([0, 1])\) denotes a Hilbert space via an inner product
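$$ \langle p, q \rangle = \int_{0}^{1} p(t) q(t) \,dt, \quad \forall p, q \in \mathcal{E}, $$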
where its induced norm is:
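$$ \Vert p \Vert = \biggl( \int_{0}^{1} \bigl\vert p(t) \bigr\vert ^{2} \,dt \biggr)^{1/2} . $$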
A function \(\mathcal{M} : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) is of the form
A simple calculation shows that \(\mathcal{M}\) is 0-demicontractive. The solution is \(\varpi ^{*}(t) = 0\). The stopping criterion in this experiment is \(D_{k} = \|q_{k} - r_{k}\| \leq 10^{-6}\). Figures 15–18 and Tables 5 and 6 illustrate numerical observations for Example 4.3. The following control criteria are in effect:
(1) Algorithm 2 (briefly, alg-1): \(\hbar _{1} = 0.33\), \(\ell = 0.66\), \(\mu = 0.55\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(4 k + 8)}\), \(\sigma _{k} = \frac{k}{(5k+1)}\).
(2) Algorithm 4 (briefly, alg-2): \(\hbar _{1} = 0.33\), \(\ell = 0.66\), \(\mu = 0.55\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(4 k + 8)}\), \(\sigma _{k} = \frac{k}{(5k+1)}\).
(3) Algorithm 1 in [31] (briefly, mtalg-1): \(\gamma _{1} = 0.33\), \(\delta = 0.66\), \(\phi = 0.55\), \(\ell _{k} = \frac{1}{(4 k + 8)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\).
(4) Algorithm 2 in [31] (briefly, mtalg-2): \(\gamma _{1} = 0.33\), \(\delta = 0.66\), \(\phi = 0.55\), \(\ell _{k} = \frac{1}{(4 k + 8)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\).
(5) Algorithm 1 in [32] (briefly, vtalg-1): \(\tau _{1} = 0.33\), \(\ell = 0.66\), \(\mu = 0.55\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(4 k + 8)}\), \(\sigma _{k} = \frac{k}{(4 k+1)}\), \(f(u)=\frac{u}{3}\).
(6) Algorithm 2 in [32] (briefly, vtalg-2): \(\tau _{1} = 0.33\), \(\ell = 0.66\), \(\mu = 0.55\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(4 k + 8)}\), \(\sigma _{k} = \frac{k}{(4 k+1)}\), \(f(u)=\frac{u}{3}\).
Example 4.4
Consider that the feasible set \(\mathcal{D}\) is provided by
Let us design an operator \(\mathcal{N} : \mathcal{D} \rightarrow \mathcal{E}\) as
Let \(\mathcal{E} = L^{2}([0, 1])\) represent a real Hilbert space. Its induced norm and inner product are described by
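$$ \Vert p \Vert = \biggl( \int_{0}^{1} \bigl\vert p(t) \bigr\vert ^{2} \,dt \biggr)^{1/2} $$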
and
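$$ \langle p, q \rangle = \int_{0}^{1} p(t) q(t) \,dt, \quad \forall p, q \in \mathcal{E}. $$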
It is trivial to verify that \(\mathcal{N}\) is monotone and 1-Lipschitz continuous, and that the projection onto \(\mathcal{D}\) has an explicit formula, that is,
A mapping \(\mathcal{M} : L^{2}([0, 1]) \rightarrow L^{2}([0, 1])\) takes the following form:
A simple analysis demonstrates that \(\mathcal{M}\) is 0-demicontractive. The solution is \(\varpi ^{*}(t) = 0\). These trials use the stopping criterion \(D_{k} = \|q_{k} - r_{k}\| \leq 10^{-6}\). Tables 7 and 8 include numerical results for Example 4.4. The following conditions are used as control criteria:
(1) Algorithm 2 (briefly, alg-1): \(\hbar _{1} = 0.25\), \(\ell = 0.44\), \(\mu = 0.75\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(5 k + 10)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\).
(2) Algorithm 4 (briefly, alg-2): \(\hbar _{1} = 0.25\), \(\ell = 0.44\), \(\mu = 0.75\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(5 k + 10)}\), \(\sigma _{k} = \frac{k}{(2k+1)}\).
(3) Algorithm 1 in [31] (briefly, mtalg-1): \(\gamma _{1} = 0.25\), \(\delta = 0.44\), \(\phi = 0.75\), \(\ell _{k} = \frac{1}{(5 k + 10)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\).
(4) Algorithm 2 in [31] (briefly, mtalg-2): \(\gamma _{1} = 0.25\), \(\delta = 0.44\), \(\phi = 0.75\), \(\ell _{k} = \frac{1}{(5 k + 10)}\), \(\hbar _{k} = \frac{1}{2} (1 - \ell _{k})\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\).
(5) Algorithm 1 in [32] (briefly, vtalg-1): \(\tau _{1} = 0.25\), \(\ell = 0.44\), \(\mu = 0.75\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(5 k + 10)}\), \(\sigma _{k} = \frac{k}{(2 k+1)}\), \(f(u)=\frac{u}{4}\).
(6) Algorithm 2 in [32] (briefly, vtalg-2): \(\tau _{1} = 0.25\), \(\ell = 0.44\), \(\mu = 0.75\), \(\chi _{k} = \frac{1}{(1+k)^{2}}\), \(\varsigma _{k} = \frac{1}{(5 k + 10)}\), \(\sigma _{k} = \frac{k}{(2 k+1)}\), \(f(u)=\frac{u}{4}\).
5 Conclusion
We proposed four inertial extragradient-type methods to solve monotone variational inequality problems together with a fixed-point problem numerically. These methods can be viewed as modified versions of the two-step extragradient method. Two strong convergence theorems were established for the proposed methods. Numerical results were reported to confirm the effectiveness of the suggested algorithms over existing methods. These computational results show that the nonmonotone variable step-size rule improves the performance of the iterative sequence in this context.
Availability of data and materials
Not applicable.
References
An, N.T., Nam, N.M., Qin, X.: Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 76(1), 189–209 (2019)
Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange function. Èkon. Mat. Metody 12(6), 1164–1173 (1976)
Ceng, L., Petruşel, A., Qin, X., Yao, J.: A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 21(1), 93–108 (2020)
Ceng, L., Petruşel, A., Qin, X., Yao, J.: Pseudomonotone variational inequalities and fixed points. Fixed Point Theory 22(2), 543–558 (2021)
Ceng, L.-C., Köbis, E., Zhao, X.: On general implicit hybrid iteration method for triple hierarchical variational inequalities with hierarchical variational inequality constraints. Optimization 69(9), 1961–1986 (2019)
Ceng, L.-C., Shang, M.: Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 70(4), 715–740 (2019)
Ceng, L.-C., Yao, J.-C., Shehu, Y.: On Mann implicit composite subgradient extragradient methods for general systems of variational inequalities with hierarchical variational inequality constraints. J. Inequal. Appl. 2022(1) (2022)
Ceng, L.-C., Yuan, Q.: Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl. 2019(1), 1 (2019)
Ceng, L.C., Petruşel, A., Qin, X., Yao, J.C.: Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 70(5–6), 1337–1358 (2020)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148(2), 318–335 (2010)
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9), 1119–1132 (2012)
Elliott, C.M.: Variational and quasivariational inequalities applications to free—boundary problems. (Claudio Baiocchi and António Capelo). SIAM Rev. 29(2), 314–315 (1987)
Iiduka, H., Yamada, I.: A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 58(2), 251–261 (2009)
Kassay, G., Kolumbán, J., Páles, Z.: On Nash stationary points. Publ. Math. 54(3–4), 267–279 (1999)
Kassay, G., Kolumbán, J., Páles, Z.: Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 143(2), 377–389 (2002)
Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. SIAM, Philadelphia (2000)
Konnov, I.: Equilibrium Models and Variational Inequalities, vol. 210. Elsevier, Amsterdam (2007)
Korpelevich, G.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163(2), 399–412 (2013)
Maingé, P.-E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47(3), 1499–1515 (2008)
Maingé, P.-E., Moudafi, A.: Coupling viscosity methods with the extragradient algorithm for solving equilibrium problems. J. Nonlinear Convex Anal. 9(2), 283–294 (2008)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4(3), 506–510 (1953)
Moudafi, A.: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241(1), 46–55 (2000)
Nagurney, A.: Network Economics: A Variational Inequality Approach. Kluwer Academic, Dordrecht (1999)
Noor, M., Noor, K.: Some new trends in mixed variational inequalities. J. Adv. Math. Stud. 15, 105–140 (2022)
Noor, M.A.: Some iterative methods for nonconvex variational inequalities. Comput. Math. Model. 21(1), 97–108 (2010)
Qin, X., An, N.T.: Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 74(3), 821–850 (2019)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal., Theory Methods Appl. 75(2), 742–750 (2012)
Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Hebd. Séances Acad. Sci. 258(18), 4413 (1964)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers (2009)
Tan, B., Fan, J., Qin, X.: Inertial extragradient algorithms with non-monotonic step sizes for solving variational inequalities and fixed point problems. Adv. Oper. Theory 6(4) (2021)
Tan, B., Zhou, Z., Li, S.: Viscosity-type inertial extragradient algorithms for solving variational inequality problems and fixed point problems. J. Appl. Math. Comput. (2021)
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000)
ur Rehman, H., Kumam, P., Argyros, I.K., Shutaywi, M., Shah, Z.: Optimization based methods for solving the equilibrium problems with applications in variational inequality problems and solution of Nash equilibrium models. Mathematics 8(5), 822 (2020)
ur Rehman, H., Kumam, P., Kumam, W., Sombut, K.: A new class of inertial algorithms with monotonic step sizes for solving fixed point and variational inequalities. Math. Methods Appl. Sci. 45(16), 9061–9088 (2022)
ur Rehman, H., Kumam, P., Shutaywi, M., Alreshidi, N.A., Kumam, W.: Inertial optimization based two-step methods for solving equilibrium problems with applications in variational inequality problems and growth control equilibrium models. Energies 13(12), 3292 (2020)
ur Rehman, H., Kumam, P., Suleiman, Y.I., Kumam, W.: An adaptive block iterative process for a class of multiple sets split variational inequality problems and common fixed point problems in Hilbert spaces. Numer. Algebra Control Optim. 0(0), 0 (2022)
ur Rehman, H., Kumam, W., Sombut, K.: Inertial modification using self-adaptive subgradient extragradient techniques for equilibrium programming applied to variational inequalities and fixed-point problems. Mathematics 10(10), 1751 (2022)
Yamada, I., Ogura, N.: Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 25(7–8), 619–655 (2005)
Yao, Y., Shahzad, N., Yao, J.-C.: Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems. Carpath. J. Math. 37(3), 541–550 (2021)
Zhao, T.-Y., Wang, D.-Q., Ceng, L.-C., He, L., Wang, C.-Y., Fan, H.-L.: Quasi-inertial Tseng’s extragradient algorithms for pseudomonotone variational inequalities and fixed point problems of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 42(1), 69–90 (2021)
Zhao, X.P., Yao, Y.: A nonmonotone gradient method for constrained multiobjective optimization problems. J. Nonlinear Var. Anal. 6(6), 693–706 (2022)
Acknowledgements
This research was supported by The Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168). This research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB66E0653S.2).
Funding
This research was supported by The Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168). This research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB66E0653S.2).
Contributions
WK, writing-original draft preparation. HR, writing-original draft preparation and project administration. PK, methodology and project administration. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kumam, W., Rehman, H.u. & Kumam, P. A new class of computationally efficient algorithms for solving fixed-point problems and variational inequalities in real Hilbert spaces. J Inequal Appl 2023, 48 (2023). https://doi.org/10.1186/s13660-023-02948-8
DOI: https://doi.org/10.1186/s13660-023-02948-8
MSC
- 47H09
- 47H05
- 47J20
- 49J15
- 65K15
Keywords
- Fixed-point problem
- ρ-demicontractive mapping
- Variational inequalities
- Strong convergence theorems
- Inertial iterative schemes