The inertial iterative extragradient methods for solving pseudomonotone equilibrium programming in Hilbert spaces
Journal of Inequalities and Applications volume 2022, Article number: 58 (2022)
Abstract
In this paper, we present new iterative techniques for approximating the solution of an equilibrium problem involving a pseudomonotone and Lipschitz-type bifunction in Hilbert spaces. These techniques consist of two computing steps of a proximal-type mapping together with an inertial term. Improved, simplified stepsize rules that do not involve a line search are investigated, allowing the methods to be implemented more quickly without knowledge of the Lipschitz-type constants of the bifunction. The iterative sequences converge weakly to a solution of the problem when the control parameters satisfy appropriate conditions. Numerical tests were carried out, and the results demonstrate the applicability and faster convergence of the new approaches compared with earlier ones.
1 Introduction
Let \(\mathcal{H}\) be a real Hilbert space and \(\mathcal{K}\) be a nonempty closed convex subset of \(\mathcal{H}\). The main objective here is to study iterative methods for solving equilibrium problems ((EP) for short) involving a pseudomonotone and Lipschitz-type bifunction. Let \(\mathcal{F} : \mathcal{H} \times \mathcal{H} \rightarrow \mathbb{R}\) be a bifunction with \(\mathcal{F} (u_{1}, u_{1}) = 0\) for each \(u_{1} \in \mathcal{K}\). The equilibrium problem for \(\mathcal{F}\) on \(\mathcal{K}\) is described in the following manner: Find \(\eth ^{*} \in \mathcal{K}\) such that
$$ \mathcal{F} \bigl(\eth ^{*}, u_{1}\bigr) \geq 0, \quad \forall u_{1} \in \mathcal{K}. \qquad (\mathrm{EP}) $$
We denote the solution set of problem (EP) by \(\operatorname{Sol}(\mathcal{F} , \mathcal{K})\) and assume throughout that it is nonempty. The numerical solution of the equilibrium problem under the following conditions is the focus of this study. We assume that the following conditions are met:
- (\(\mathcal{F}\)1): The solution set of problem (EP) is denoted by \(\operatorname{Sol}(\mathcal{F} , \mathcal{K})\), and it is nonempty;
- (\(\mathcal{F}\)2): The bifunction \(\mathcal{F}\) is pseudomonotone [5, 7], i.e.,
$$ \mathcal{F} (u_{1}, u_{2}) \geq 0 \quad \Longrightarrow \quad \mathcal{F} (u_{2}, u_{1}) \leq 0, \quad \forall u_{1}, u_{2} \in \mathcal{K}; $$
- (\(\mathcal{F}\)3): The bifunction \(\mathcal{F}\) is Lipschitz-type continuous [18] on \(\mathcal{K}\), i.e., there exist two constants \(k_{1}, k_{2} > 0\) such that
$$ \mathcal{F} (u_{1}, u_{3}) \leq \mathcal{F} (u_{1}, u_{2}) + \mathcal{F} (u_{2}, u_{3}) + k_{1} \Vert u_{1} - u_{2} \Vert ^{2} + k_{2} \Vert u_{2} - u_{3} \Vert ^{2}, \quad \forall u_{1}, u_{2}, u_{3} \in \mathcal{K}; $$
- (\(\mathcal{F}\)4): For any sequence \(\{v_{k}\} \subset \mathcal{K}\) with \(v_{k} \rightharpoonup v^{*}\), the following inequality holds:
$$ \limsup_{k \rightarrow +\infty} \mathcal{F} (v_{k}, u_{1}) \leq \mathcal{F} \bigl(v^{*}, u_{1}\bigr), \quad \forall u_{1} \in \mathcal{K}; $$
- (\(\mathcal{F}\)5): \(\mathcal{F} (u_{1}, \cdot )\) is convex and subdifferentiable on \(\mathcal{H}\) for each fixed \(u_{1} \in \mathcal{H}\).
The equilibrium problem is of tremendous interest among researchers these days since it connects numerous mathematical problems, including vector and scalar minimization problems, fixed point problems, variational inequalities, saddle point problems, complementarity problems, and Nash equilibrium problems in non-cooperative games (for more details, see [6, 7, 12, 15, 20]). It also has various applications in economics [11] and the dynamics of offer and demand [1], and it underlies the theoretical framework of non-cooperative games and Nash equilibrium models [21, 22]. The term “equilibrium problem” in its present form was first introduced in the literature in 1992 by Muu and Oettli [20] and has since been studied by Blum and Oettli [7]. More precisely, we consider two applications of problem (EP). (i) The variational inequality problem for \(\mathcal{A} : \mathcal{K} \rightarrow \mathcal{H}\) is stated as follows: Find \(\eth ^{*} \in \mathcal{K}\) such that
$$ \bigl\langle \mathcal{A} \bigl(\eth ^{*}\bigr), u_{1} - \eth ^{*} \bigr\rangle \geq 0, \quad \forall u_{1} \in \mathcal{K}. \qquad (\mathrm{VIP}) $$
Let us define a bifunction \(\mathcal{F}\) as follows:
$$ \mathcal{F} (u_{1}, u_{2}) := \bigl\langle \mathcal{A} (u_{1}), u_{2} - u_{1} \bigr\rangle , \quad \forall u_{1}, u_{2} \in \mathcal{K}. \qquad (1.1) $$
Then, problem (EP) is converted into the variational inequality problem (VIP), and the Lipschitz-type constants satisfy \(L = 2 k_{1} = 2 k_{2}\), where L is the Lipschitz constant of the mapping \(\mathcal{A}\). (ii) A mapping \(\mathcal{B} : \mathcal{K} \rightarrow \mathcal{K}\) is said to be a κ-strict pseudocontraction [8] with \(\kappa \in (0, 1)\) if
$$ \Vert \mathcal{B} u_{1} - \mathcal{B} u_{2} \Vert ^{2} \leq \Vert u_{1} - u_{2} \Vert ^{2} + \kappa \bigl\Vert (u_{1} - \mathcal{B} u_{1}) - (u_{2} - \mathcal{B} u_{2}) \bigr\Vert ^{2}, \quad \forall u_{1}, u_{2} \in \mathcal{K}. \qquad (1.2) $$
A fixed point problem (FPP) for \(\mathcal{B} : \mathcal{K} \rightarrow \mathcal{K}\) is to find \(\eth ^{*} \in \mathcal{K}\) such that \(\mathcal{B} (\eth ^{*}) = \eth ^{*}\). Let us define a bifunction \(\mathcal{F}\) as follows:
$$ \mathcal{F} (u_{1}, u_{2}) := \bigl\langle u_{1} - \mathcal{B} (u_{1}), u_{2} - u_{1} \bigr\rangle , \quad \forall u_{1}, u_{2} \in \mathcal{K}. \qquad (1.3) $$
It can easily be seen from [35] that the expression (1.3) satisfies the conditions (\(\mathcal{F}\)1)–(\(\mathcal{F}\)5), and the Lipschitz-type constants are \(k_{1} = k_{2} = \frac{3 - 2 \kappa}{2 - 2 \kappa}\).
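To see where the relation \(L = 2k_{1} = 2k_{2}\) in the variational inequality case comes from, here is a short verification of condition (\(\mathcal{F}\)3), written under the assumption that the bifunction in (1.1) is \(\mathcal{F} (u_{1}, u_{2}) = \langle \mathcal{A} (u_{1}), u_{2} - u_{1} \rangle \) and that \(\mathcal{A}\) is L-Lipschitz continuous:
$$\begin{aligned} \mathcal{F} (u_{1}, u_{3}) - \mathcal{F} (u_{1}, u_{2}) - \mathcal{F} (u_{2}, u_{3}) &= \bigl\langle \mathcal{A} (u_{1}) - \mathcal{A} (u_{2}), u_{3} - u_{2} \bigr\rangle \\ &\leq L \Vert u_{1} - u_{2} \Vert \Vert u_{2} - u_{3} \Vert \\ &\leq \frac{L}{2} \Vert u_{1} - u_{2} \Vert ^{2} + \frac{L}{2} \Vert u_{2} - u_{3} \Vert ^{2}, \end{aligned}$$
so (\(\mathcal{F}\)3) holds with \(k_{1} = k_{2} = \frac{L}{2}\).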
The extragradient method developed by Lyashko and Semenov [17] is one of the useful methods for solving equilibrium problems. The approach is constructed as follows. Take arbitrary initial points \(u_{0}, v_{0} \in \mathcal{H}\); using the current iterate \(u_{k}\), compute the next iterate as follows:
where \(0 < \varkappa < \frac{1}{2 k_{2} + 4 k_{1}}\) and \(k_{1}\), \(k_{2}\) are the two Lipschitz-type constants. The iterative technique in [17] is also known as Popov's extragradient method, owing to Popov's original contribution [27] to solving saddle point problems. Recently, Yang [36] combined Popov's extragradient method (1.4) with a non-monotonic stepsize rule. This method requires the solution of one optimization problem on \(\mathcal{K}\) as well as a minimization problem on a half-space, together with a non-monotonic stepsize rule.
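To make the structure of such two-step proximal schemes concrete, the sketch below implements one common formulation of a Popov/Lyashko–Semenov-type iteration; since the displayed update (1.4) is not reproduced here, the exact form of the two steps should be read as an illustrative assumption rather than the exact method. Each step solves a strongly convex proximal subproblem \(\min_{y \in \mathcal{K}} \{ \varkappa \mathcal{F}(w, y) + \frac{1}{2} \Vert u - y \Vert ^{2} \}\) with a generic solver over a box constraint.

```python
import numpy as np
from scipy.optimize import minimize


def prox_step(F, w, u, step, bounds):
    """Solve min_y  step * F(w, y) + 0.5 * ||u - y||^2  over the box `bounds`."""
    obj = lambda y: step * F(w, y) + 0.5 * np.sum((u - y) ** 2)
    return minimize(obj, x0=u, bounds=bounds, method="L-BFGS-B").x


def two_step_proximal(F, u0, v0, step, bounds, max_iter=200, tol=1e-6):
    """A Popov/Lyashko-Semenov-type two-step proximal iteration (illustrative formulation)."""
    u, v = np.asarray(u0, dtype=float), np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        u_next = prox_step(F, v, u, step, bounds)        # main step uses F(v_k, .)
        v = prox_step(F, u_next, u_next, step, bounds)   # auxiliary step uses F(u_{k+1}, .)
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u


# Hypothetical test data: F(x, y) = <P x + Q y + c, y - x> on the box [-5, 5]^M.
M = 5
P, Q, c = 2.0 * np.eye(M), np.eye(M), -np.ones(M)
F = lambda x, y: np.dot(P @ x + Q @ y + c, y - x)
k1 = k2 = 0.5 * np.linalg.norm(P - Q, 2)                 # Lipschitz-type constants
sol = two_step_proximal(F, np.ones(M), np.ones(M),
                        step=0.9 / (2.0 * k2 + 4.0 * k1),
                        bounds=[(-5.0, 5.0)] * M)
print(sol)
```

The test data at the bottom (matrices P, Q, vector c, and the box \([-5, 5]^{M}\)) are hypothetical and serve only to make the script runnable.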
The main goal is to develop inertial-type methods, in the spirit of [36], designed to increase the convergence rate of the iterative sequence. Such methods originate from a second-order dynamical system modeling an oscillator with damping and a conservative restoring force; this system describes a heavy ball with friction and was first considered by Polyak in [26]. The main characteristic of these methods is that the next iterate is composed from the two previous iterates. In this context, numerical results indicate that inertial terms increase a method's efficiency in terms of the number of iterations and elapsed time. In recent years, such methods have been extensively studied for specific classes of equilibrium problems [2, 4, 13, 14, 19, 29–33] and for related problems in [9, 16, 23, 34, 37–40].
As a result, a natural question arises:
Is it possible to develop new inertial-like weakly convergent extragradient-type methods for solving equilibrium problems using monotone and non-monotone stepsize rules?
In our study, we provide a positive answer to this question; namely, the extragradient approach still generates a weakly convergent sequence when solving equilibrium problems involving pseudomonotone bifunctions with a monotone or a nonmonotone variable stepsize rule. Inspired by the work of Censor et al. [10] and Yang [36], we describe new inertial extragradient-like approaches for solving problem (EP) in the setting of real Hilbert spaces.
Our main contributions in this work are as follows: (i) We build an inertial subgradient extragradient method for solving equilibrium problems in Hilbert spaces using a monotone variable stepsize rule and show that the resulting sequence is weakly convergent. (ii) To solve equilibrium problems, we develop a new inertial subgradient extragradient method that uses a variable nonmonotone stepsize rule independent of the Lipschitz constants. (iii) Some consequences are derived in order to address various classes of equilibrium problems in real Hilbert spaces. (iv) We provide numerical demonstrations of the proposed approaches to verify the theoretical findings and compare them with Algorithm 3.1 in [36]. The numerical findings suggest that the proposed methods are advantageous and perform better than existing ones.
The paper is structured as follows: Sect. 2 presents preliminary results. Section 3 gives the new methods and their convergence theorems. Finally, Sect. 4 gives numerical results to highlight the practical effectiveness of the proposed approaches.
2 Preliminaries
In this section, we recall some elementary identities as well as key lemmas and definitions. The metric projection \(P_{\mathcal{K}}(u_{1})\) of \(u_{1} \in \mathcal{H}\) onto \(\mathcal{K}\) is defined by
$$ P_{\mathcal{K}}(u_{1}) = \arg \min \bigl\{ \Vert u_{2} - u_{1} \Vert : u_{2} \in \mathcal{K} \bigr\} . $$
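For simple feasible sets the metric projection has a closed form. The following minimal numpy sketch (an illustration, not part of the paper) shows the projection onto a box and onto a closed Euclidean ball.

```python
import numpy as np


def project_box(u, lower, upper):
    """Metric projection of u onto the box {x : lower <= x <= upper} (componentwise clip)."""
    return np.clip(u, lower, upper)


def project_ball(u, center, radius):
    """Metric projection of u onto the closed Euclidean ball B(center, radius)."""
    d = u - center
    norm = np.linalg.norm(d)
    return u if norm <= radius else center + radius * d / norm


u = np.array([3.0, -7.0, 1.5])
print(project_box(u, -5.0, 5.0))          # -> [ 3.  -5.   1.5]
print(project_ball(u, np.zeros(3), 2.0))  # point on the sphere of radius 2
```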
Lemma 2.1
([3])
Let \(P_{\mathcal{K}} : \mathcal{H} \rightarrow \mathcal{K}\) be the metric projection. Then:
- (i)
$$ \bigl\Vert u_{1} - P_{\mathcal{K}}(u_{2}) \bigr\Vert ^{2} + \bigl\Vert P_{\mathcal{K}}(u_{2}) - u_{2} \bigr\Vert ^{2} \leq \Vert u_{1} - u_{2} \Vert ^{2},\quad u_{1} \in \mathcal{K}, u_{2} \in \mathcal{H}; $$
- (ii) \(u_{3} = P_{\mathcal{K}}(u_{1}) \) if and only if
$$ \langle u_{1} - u_{3}, u_{2} - u_{3} \rangle \leq 0,\quad \forall u_{2} \in \mathcal{K}; $$
- (iii)
$$ \bigl\Vert u_{1} - P_{\mathcal{K}}(u_{1}) \bigr\Vert \leq \Vert u_{1} - u_{2} \Vert ,\quad u_{2} \in \mathcal{K}, u_{1} \in \mathcal{H}. $$
Lemma 2.2
([3])
For any \(u_{1}, u_{2} \in \mathcal{H}\) and \(\digamma \in \mathbb{R}\), the following relations hold:
- (i)
$$ \bigl\Vert \digamma u_{1} + (1 - \digamma ) u_{2} \bigr\Vert ^{2} = \digamma \Vert u_{1} \Vert ^{2} + (1 - \digamma ) \Vert u_{2} \Vert ^{2} - \digamma (1 - \digamma ) \Vert u_{1} - u_{2} \Vert ^{2}; $$
- (ii)
$$ \Vert u_{1} + u_{2} \Vert ^{2} \leq \Vert u_{1} \Vert ^{2} + 2 \langle u_{2}, u_{1} + u_{2} \rangle . $$
The normal cone of \(\mathcal{K}\) at \(u_{1} \in \mathcal{K}\) is defined by
$$ N_{\mathcal{K}}(u_{1}) = \bigl\{ u_{3} \in \mathcal{H} : \langle u_{3}, u_{2} - u_{1} \rangle \leq 0, \forall u_{2} \in \mathcal{K} \bigr\} . $$
Let \(\mho : \mathcal{K} \rightarrow \mathbb{R}\) be a convex function. The subdifferential of ℧ at \(u_{1} \in \mathcal{K}\) is defined by
$$ \partial \mho (u_{1}) = \bigl\{ u_{3} \in \mathcal{H} : \mho (u_{2}) - \mho (u_{1}) \geq \langle u_{3}, u_{2} - u_{1} \rangle , \forall u_{2} \in \mathcal{K} \bigr\} . $$
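As a simple illustration (not taken from the paper), for the norm function \(\mho (u) = \Vert u \Vert \) on \(\mathcal{H}\), the above definition gives
$$ \partial \mho (0) = \bigl\{ u_{3} \in \mathcal{H} : \Vert u_{3} \Vert \leq 1 \bigr\} , \qquad \partial \mho (u_{1}) = \biggl\{ \frac{u_{1}}{ \Vert u_{1} \Vert } \biggr\} \quad \text{for } u_{1} \neq 0 . $$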
Lemma 2.3
([25], Proposition 3.61)
Let \(\mho : \mathcal{K} \rightarrow \mathbb{R} \cup \{+\infty \}\) be a proper, convex and lower semicontinuous function on \(\mathcal{H}\). Assume either that ℧ is continuous at some point of \(\mathcal{K}\), or that there is an interior point of \(\mathcal{K}\) where ℧ is finite. An element \(u \in \mathcal{K}\) is a minimizer of the function ℧ if and only if
$$ 0 \in \partial \mho (u) + N_{\mathcal{K}}(u), $$
where \(\partial \mho (u)\) denotes the subdifferential of ℧ at \(u \in \mathcal{K}\), and \(N_{\mathcal{K}}(u)\) is the normal cone of \(\mathcal{K}\) at u.
Lemma 2.4
([2])
Suppose that \(\{b_{k}\}\), \(\{c_{k}\}\), and \(\{d_{k}\}\) are three sequences in \([0, +\infty )\) satisfying the inequality
$$ b_{k+1} \leq b_{k} + d_{k} (b_{k} - b_{k-1}) + c_{k} $$
for all \(k \geq 1\), where \(\sum_{k=1}^{+\infty} c_{k} < +\infty \) and there exists a number d satisfying \(0 \leq d_{k} \leq d < 1\), \(\forall k \in \mathbb{N}\). Then:
- (i) \(\sum_{k=1}^{+\infty} [b_{k} - b_{k-1}]_{+} < +\infty \), where \([t]_{+} := \max \{t, 0\}\);
- (ii) there exists \(b^{*} \in [0, +\infty )\) such that \(\lim_{k \rightarrow +\infty} b_{k} = b^{*}\).
Lemma 2.5
([24])
Let \(\mathcal{K}\) be a nonempty subset of \(\mathcal{H}\) and \(\{u_{k}\}\) be a sequence in \(\mathcal{H}\) satisfying:
- (i) for each \(u \in \mathcal{K}\), \(\lim_{k\rightarrow \infty} \|u_{k} - u \|\) exists;
- (ii) every sequential weak cluster point of \(\{u_{k}\}\) lies in \(\mathcal{K}\).
Then, the sequence \(\{u_{k}\}\) converges weakly to an element of \(\mathcal{K}\).
3 Main results
In this section, we present an iterative method that comprises two strongly convex optimization problems linked with an inertial term to accelerate the convergence rate of the iterative sequence. We offer the following method for solving equilibrium problems.
Lemma 3.1
From Algorithm 1, we can derive the following useful inequality:
Proof
By Lemma 2.3, we have
Thus, \(\upsilon \in \partial \mathcal{F}(v_{k}, u_{k+1}) \), and there exists a vector \(\overline{\upsilon} \in N_{\mathcal{H}_{k}}(u_{k+1})\) such that
As a result, we have
Since \(\overline{\upsilon} \in N_{\mathcal{H}_{k}}(u_{k+1})\), we have \(\langle \overline{\upsilon}, v - u_{k+1} \rangle \leq 0\) for all \(v \in \mathcal{H}_{k}\). Thus, we have
Since \(\upsilon \in \partial \mathcal{F}(v_{k}, u_{k+1})\), we have
Combining the formulas (3.1) and (3.2), we obtain
□
Lemma 3.2
From Algorithm 1, we can derive the following useful inequality:
Proof
The proof is similar to that of Lemma 3.1. Substituting \(v = u_{k+1}\), we have
□
Lemma 3.3
Suppose that \(\mathcal{F} : \mathcal{K} \times \mathcal{K} \rightarrow \mathbb{R}\) satisfies the conditions (\(\mathcal{F}\)1)–(\(\mathcal{F}\)5). For any \(\eth ^{*} \in \operatorname{Sol}(\mathcal{F} , \mathcal{K})\), we have
Proof
By letting \(v = \eth ^{*}\) in Lemma 3.1, we have
Using condition (\(\mathcal{F}\)2), we obtain
From expression (3.3), we obtain
which, together with \(\varkappa _{k} > 0\), gives
Combining expressions (3.6) and (3.7), we obtain
Using expression (3.4), we have
Combining expressions (3.8) and (3.9), we have
The following facts are available to us:
As a result, we have
We also have the following inequalities:
The above expressions imply that
Finally, the previous expression implies that
□
Let us now establish the main convergence result for the Algorithm 1.
Theorem 3.4
Let \(\{u_{k}\}\) be a sequence generated by Algorithm 1 and
Then, the sequence \(\{u_{k}\}\) weakly converges to \(\eth ^{*} \in \operatorname{Sol}(\mathcal{F} , \mathcal{K})\).
Proof
Adding \(\frac{(2 - \sqrt{2} - \theta ) \varrho \varkappa _{k+1}}{\varkappa _{k+2}} \| \gimel _{k+1} - v_{k}\|^{2}\) to both sides of the inequality in Lemma 3.3, we have
Using the definition of \(\gimel _{k}\) in Algorithm 1, we obtain
Using the definition of \(\gimel _{k}\) and the Cauchy inequality, we have
Thus, we have
The above expression implies that
Using the definition of \(\gimel _{k+1}\) and the Cauchy inequality, we have
Furthermore, we have
where
and
Furthermore, we can write
where
and
Next, we substitute
Thus, we have
Next, we need to compute
It is given that \(\varkappa _{k} \rightarrow \varkappa \) with ϱ such that
From the above arguments together with expression (3.25), there exists a \(k_{0} \in \mathbb{N}\) such that
From expressions (3.24) and (3.26), the following relation holds for \(k \geq k_{0}\):
Therefore, the sequence \(\{\Gamma _{k}\}\) is nonincreasing for \(k \geq k_{0}\). Using the definition of \(\Gamma _{k+1}\) for \(k \geq k_{0}\), we have
By the definition of \(\Gamma _{k}\) for \(k \geq k_{0}\), we obtain
The above expression for \(k \geq k_{0}\) implies that
From expressions (3.28) and (3.30), we obtain
It follows from expressions (3.27) and (3.31) that
Letting \(j \rightarrow +\infty \) in (3.32), we obtain
Due to expressions (3.20) and (3.33), we obtain
Moreover, we obtain
We have the following substitution:
From expression (3.31) with the above substitution for \(\Psi _{k}\), we have
By expression (3.18), we can rewrite
Due to the condition on ϱ, we have
Now fix some \(j \geq k_{0}\) and use expressions (3.36) and (3.37) for \(k = 1, 2,\ldots , j\). Thus, we have
where \(R = \alpha (1 + \alpha ) + \frac{1}{2} \alpha (1 - \alpha )\). Letting \(j \rightarrow +\infty \) yields
Thus, we have
From expressions (3.33) and (3.40), we can infer that
By the definition of \(\gimel _{k}\) and using the Cauchy inequality, we have
Now, summing up expression (3.44) for \(k= k_{0}, \ldots , j\) where \(j > k_{0}\), we obtain
Letting \(j \rightarrow +\infty \) in expression (3.45) implies that
Rewriting the expression (3.17), we have
Thus, the above expression, together with (3.33), (3.39), (3.45), and Lemma 2.4, implies that the limit of \(\|u_{k} - \eth ^{*}\|\) exists. Hence, the sequences \(\{u_{k}\}\), \(\{\gimel _{k}\}\), and \(\{v_{k}\}\) are bounded. Let z be a weak cluster point of \(\{u_{k}\}\), i.e., there is a subsequence, denoted by \(\{u_{k_{m}}\}\), of \(\{u_{k}\}\) that converges weakly to z. Then \(\{v_{k_{m}}\}\) also converges weakly to \(z \in \mathcal{K}\). We need to show that \(z \in \operatorname{Sol}(\mathcal{F} , \mathcal{K})\). Using Lemma 3.1 with expressions (3.7) and (3.4), we have
where v is any member of \(\mathcal{H}_{k}\). It follows from expressions (3.34) and (3.40)–(3.43) and the boundedness of \(\{u_{k}\}\) that the right-hand side of the last inequality tends to zero. Employing \(\varkappa _{k_{m}} \geq \varkappa > 0\), condition (\(\mathcal{F}\)4), and \(v_{k_{m}} \rightharpoonup z\), we obtain
Since \(\mathcal{K} \subset \mathcal{H}_{k}\), it follows that \(\mathcal{F}(z, v) \geq 0\) for all \(v \in \mathcal{K}\). This shows that \(z \in \operatorname{Sol}(\mathcal{F} , \mathcal{K})\). Thus, Lemma 2.5 guarantees that \(\{\gimel _{k}\}\), \(\{u_{k}\}\), and \(\{v_{k}\}\) converge weakly to \(\eth ^{*}\) as \(k \rightarrow +\infty \). □
We now provide an iterative method (see Algorithm 2) that consists of a variable non-monotone stepsize rule and two strongly convex minimization problems. The details of the second main result are presented as follows.
We now apply our main results to solve variational inequality and fixed point problems. The expressions (1.1) and (1.3) are employed to obtain the following conclusions. All the methods below follow from our main findings.
Corollary 3.5
Assume that \(\mathcal{A} : \mathcal{K} \rightarrow \mathcal{H}\) is a pseudomonotone, weakly continuous and L-Lipschitz continuous operator and the solution set \(\operatorname{Sol}(\mathcal{A}, \mathcal{K}) \neq \emptyset \). Choose \(\varkappa _{0} = \varkappa _{1} > 0\), \(u_{-1}, u_{0}, v_{0} \in \mathcal{H}\), \(\varrho \in (0, 1)\), \(\theta \in (0, 2 - \sqrt{2})\) and \(\alpha _{k}\) to be a decreasing sequence such that \(0 \leq \underline{\alpha} \leq \alpha _{k} \leq \overline{\alpha} < \sqrt{5} - 2\). First, we have to compute
where \(\gimel _{0} = u_{0} + \alpha _{0}(u_{0} - u_{-1})\) and \(\gimel _{1} = u_{1} + \alpha _{1}(u_{1} - u_{0})\). Given \(u_{k-1}\), \(v_{k-1}\), \(u_{k}\), \(v_{k}\), and construct a half-space
Compute
The stepsize should be updated as follows:
Compute
Then, the sequence \(\{u_{k}\}\) converges weakly to \(\eth ^{*} \in \operatorname{Sol}(\mathcal{A}, \mathcal{K})\).
Corollary 3.6
Assume that \(\mathcal{A} : \mathcal{K} \rightarrow \mathcal{H}\) is a pseudomonotone, weakly continuous and L-Lipschitz continuous operator and the solution set \(\operatorname{Sol}(\mathcal{A}, \mathcal{K}) \neq \emptyset \). Choose \(\varkappa _{0} = \varkappa _{1} > 0\), \(u_{-1}, u_{0}, v_{0} \in \mathcal{H}\), \(\varrho \in (0, 1)\), \(\theta \in (0, 2 - \sqrt{2})\) and \(\alpha _{k}\) to be a decreasing sequence such that \(0 \leq \underline{\alpha} \leq \alpha _{k} \leq \overline{\alpha} < \sqrt{5} - 2\). Select a real sequence that is \(\{p_{k}\}\) such that \(\sum_{k=1}^{+\infty} p_{k} < + \infty \). First, we have to compute
where \(\gimel _{0} = u_{0} + \alpha _{0}(u_{0} - u_{-1})\) and \(\gimel _{1} = u_{1} + \alpha _{1}(u_{1} - u_{0})\). Given \(u_{k-1}\), \(v_{k-1}\), \(u_{k}\), \(v_{k}\), and construct a half-space
Compute
Update the stepsize in the following way:
Compute
Then, the sequence \(\{u_{k}\}\) converges weakly to \(\eth ^{*} \in \operatorname{Sol}(\mathcal{A}, \mathcal{K})\).
Corollary 3.7
Let \(\mathcal{B} : \mathcal{K} \rightarrow \mathcal{H}\) be a weakly continuous and κ-strict pseudocontraction with the solution set \(\operatorname{Sol}(\mathcal{B}, \mathcal{K}) \neq \emptyset \). Choose \(\varkappa _{0} = \varkappa _{1} > 0\), \(u_{-1}, u_{0}, v_{0} \in \mathcal{H}\), \(\varrho \in (0, 1)\), \(\theta \in (0, 2 - \sqrt{2})\) and \(\alpha _{k}\) to be a decreasing sequence such that \(0 \leq \underline{\alpha} \leq \alpha _{k} \leq \overline{\alpha} < \sqrt{5} - 2\). First, we have to compute
where \(\gimel _{0} = u_{0} + \alpha _{0}(u_{0} - u_{-1})\) and \(\gimel _{1} = u_{1} + \alpha _{1}(u_{1} - u_{0})\). Given \(u_{k-1}\), \(v_{k-1}\), \(u_{k}\), \(v_{k}\), construct a half-space
Compute
The stepsize rule for the next iteration is evaluated as follows:
Compute
Then, the sequence \(\{u_{k}\}\) converges weakly to \(\eth ^{*} \in \operatorname{Sol}(\mathcal{B}, \mathcal{K})\).
Corollary 3.8
Let \(\mathcal{B} : \mathcal{K} \rightarrow \mathcal{H}\) be a weakly continuous and κ-strict pseudocontraction with the solution set \(\operatorname{Sol}(\mathcal{B}, \mathcal{K}) \neq \emptyset \). Choose \(\varkappa _{0} = \varkappa _{1} > 0\), \(u_{-1}, u_{0}, v_{0} \in \mathcal{H}\), \(\varrho \in (0, 1)\), \(\theta \in (0, 2 - \sqrt{2})\) and \(\alpha _{k}\) to be a decreasing sequence such that \(0 \leq \underline{\alpha} \leq \alpha _{k} \leq \overline{\alpha} < \sqrt{5} - 2\). Select a real sequence that is \(\{p_{k}\}\) such that \(\sum_{k=1}^{+\infty} p_{k} < + \infty \). First, we have to compute
where \(\gimel _{0} = u_{0} + \alpha _{0}(u_{0} - u_{-1})\) and \(\gimel _{1} = u_{1} + \alpha _{1}(u_{1} - u_{0})\). Given \(u_{k-1}\), \(v_{k-1}\), \(u_{k}\), \(v_{k}\), construct a half-space
Compute
The stepsize rule for the next iteration is evaluated as follows:
Compute
Then, the sequence \(\{u_{k}\}\) converges weakly to \(\eth ^{*} \in \operatorname{Sol}(\mathcal{B}, \mathcal{K})\).
4 Numerical illustrations
This section describes a number of computational experiments conducted to demonstrate the efficacy of the proposed methods. Some of these numerical illustrations provide a thorough understanding of how to select effective control parameters; others demonstrate how the proposed approaches outperform existing ones in the literature. All MATLAB codes were run in MATLAB 9.5 (R2018b) on an Intel(R) Core(TM) i5-6200 CPU at 2.30 GHz (2.40 GHz) with 8.00 GB of RAM.
Example 4.1
The first test problem is taken from the Nash–Cournot oligopolistic equilibrium model in [28]. In this case, the bifunction \(\mathcal{F}\) is defined as follows:
where \(c \in \mathbb{R}^{M}\), and P, Q are matrices of order M. The matrix \(Q - P\) is symmetric negative semidefinite, the matrix P is symmetric positive semidefinite, and the Lipschitz-type constants are \(k_{1} = k_{2} = \frac{1}{2}\| P - Q\|\) (see [28] for more details).
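As an illustrative sketch, the Nash–Cournot test bifunction of [28] is commonly written as \(\mathcal{F} (u_{1}, u_{2}) = \langle P u_{1} + Q u_{2} + c, u_{2} - u_{1} \rangle \); since the displayed definition above is not reproduced in this text, this form is an assumption. The snippet below builds a small hypothetical instance, checks the semidefiniteness conditions just stated, and computes the Lipschitz-type constants \(k_{1} = k_{2} = \frac{1}{2} \Vert P - Q \Vert \).

```python
import numpy as np

# Hypothetical small instance of the Nash-Cournot test data (M = 3), for illustration only.
P = np.array([[3.1, 2.0, 0.0],
              [2.0, 3.6, 0.0],
              [0.0, 0.0, 3.5]])
Q = np.array([[1.6, 1.0, 0.0],
              [1.0, 1.6, 0.0],
              [0.0, 0.0, 1.5]])
c = np.array([1.0, -2.0, -1.0])

# Check the structural assumptions stated above.
assert np.all(np.linalg.eigvalsh(Q - P) <= 1e-12)   # Q - P is negative semidefinite
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)      # P is positive semidefinite

# Lipschitz-type constants k1 = k2 = 0.5 * ||P - Q|| (spectral norm).
k1 = k2 = 0.5 * np.linalg.norm(P - Q, 2)
print(k1, k2)

# The assumed bifunction form F(u1, u2) = <P u1 + Q u2 + c, u2 - u1>.
F = lambda u1, u2: np.dot(P @ u1 + Q @ u2 + c, u2 - u1)
```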
Experiment 1: In the first experiment, we use Example 4.1 to examine how Algorithm 2 performs numerically for different choices of the control parameter ϱ. This experiment helped us determine the best possible value of ϱ. The starting points for these numerical studies are \(u_{-1} = v_{-1} = u_{0} = (1, 1, \ldots , 1)\), \(M = 5\), and the error term is \(D_{k} = \|u_{k+1} - u_{k} \|\). The two matrices P, Q and the vector c are written as
The feasible set \(\mathcal{K} \subset \mathbb{R}^{M}\) is defined by
Figures 1 and 2 show the numerical results for the error tolerance \(D_{k} = \|u_{k+1} - u_{k} \| \leq 10^{-5}\). The control settings are as follows: (i) Algorithm 2 (shortly, IEgM):
Experiment 2: In the second experiment, we use Example 4.1 to examine how Algorithm 2 performs numerically for different choices of the control parameter θ. This experiment helped us determine the best possible value of θ. The starting points for these numerical studies are \(u_{-1} = v_{-1} = u_{0} = (1, 1, \ldots , 1)\), \(M = 5\), and the error term is \(D_{k} = \|u_{k+1} - u_{k} \|\). Figures 3 and 4 show the results for the error tolerance \(D_{k} = \|u_{k+1} - u_{k} \| \leq 10^{-5}\). The control parameters are as follows: (i) Algorithm 2 (shortly, IEgM):
Experiment 3: In the third experiment, we consider Example 4.1 to compare the computational performance of Algorithm 2 with Algorithm 3.1 in [36] for different choices of the dimension M. The matrices P and Q are generated randomly as follows (see also the sketch after the parameter list below): two diagonal matrices \(A_{1}\) and \(A_{2}\) are generated randomly with entries from \([0,2]\) and \([-2, 0]\), respectively; two random orthogonal matrices \(O_{1}=\operatorname{RandOrthMat}(M)\) and \(O_{2}=\operatorname{RandOrthMat}(M)\) are generated; this yields a positive semidefinite matrix \(B_{1}=O_{1}A_{1}O_{1}^{T}\) and a negative semidefinite matrix \(B_{2}=O_{2}A_{2}O_{2}^{T}\); finally, set \(Q=B_{1}+B_{1}^{T}\), \(S=B_{2}+B_{2}^{T}\), and \(P=Q-S\). The constraint set \(\mathcal{K} \subset \mathbb{R}^{M}\) is given by
For these numerical studies, the starting points are \(u_{-1} = v_{-1} = u_{0} = (1, 1, \ldots , 1)\), and the error term is \(D_{k} = \|u_{k+1} - u_{k} \|\). Figures 5, 6, 7, 8, 9 and Table 1 show the results for the error tolerance \(D_{k} = \|u_{k+1} - u_{k} \| \leq 10^{-5}\). The control parameters are as follows:
- (i) Algorithm 3.1 in [36] (shortly, EgM):
$$ \varkappa _{0} = \frac{1}{2c}, \qquad \varrho = 0.45,\qquad \theta = 0.05,\qquad p_{k} = \frac{100}{(k + 1)^{2}}; $$
- (ii) Algorithm 2 (shortly, IEgM):
$$ \varkappa _{0} = \frac{1}{2c},\qquad \varrho = 0.45, \qquad \alpha _{k} = 0.18, \qquad \theta = 0.05,\qquad p_{k} = \frac{100}{(k + 1)^{2}}. $$
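The random data generation described in Experiment 3 can be reproduced with the short sketch below; RandOrthMat is a MATLAB routine, and scipy's ortho_group is used here as an assumed substitute for drawing random orthogonal matrices.

```python
import numpy as np
from scipy.stats import ortho_group


def generate_test_matrices(M, seed=0):
    """Generate P, Q as described in Experiment 3 (Q symmetric PSD, Q - P symmetric NSD)."""
    rng = np.random.default_rng(seed)
    A1 = np.diag(rng.uniform(0.0, 2.0, M))          # diagonal, entries in [0, 2]
    A2 = np.diag(rng.uniform(-2.0, 0.0, M))         # diagonal, entries in [-2, 0]
    O1 = ortho_group.rvs(M, random_state=seed)      # random orthogonal matrices
    O2 = ortho_group.rvs(M, random_state=seed + 1)  # (substitute for RandOrthMat)
    B1 = O1 @ A1 @ O1.T                             # positive semidefinite
    B2 = O2 @ A2 @ O2.T                             # negative semidefinite
    Q = B1 + B1.T
    S = B2 + B2.T
    P = Q - S
    return P, Q


P, Q = generate_test_matrices(M=5)
k1 = k2 = 0.5 * np.linalg.norm(P - Q, 2)            # Lipschitz-type constants from Example 4.1
```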
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=5\)
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=10\)
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=20\)
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=50\)
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=100\)
Example 4.2
Let a bifunction \(\mathcal{F}\) be defined by
where
Consider \(I (\breve{u}) = G(\breve{u}) + H(\breve{u})\), defined in the following manner:
where \(c = (-1, -1, \ldots , -1)\) and
The entries of the matrix E are taken as follows:
In this experiment, we consider Example 4.2 to compare the numerical performance of Algorithm 2 with Algorithm 3.1 in [36] for different values of the dimension M. For these numerical studies, the starting points are \(u_{-1} = v_{-1} = u_{0} = (1, 1, \ldots , 1)\), \(M = 5\), and the error term is \(D_{k} = \|u_{k+1} - u_{k} \|\). Figures 10, 11, 12, 13 and Table 2 show the results for the error tolerance \(D_{k} = \|u_{k+1} - u_{k} \| \leq 10^{-5}\). The control parameters are as follows:
- (i) Algorithm 3.1 in [36] (shortly, EgM):
$$ \varkappa _{0} = \frac{1}{2c},\qquad \varrho = 0.45, \qquad \theta = 0.05,\qquad p_{k} = \frac{100}{(k + 1)^{2}}; $$
- (ii) Algorithm 2 (shortly, IEgM):
$$ \varkappa _{0} = \frac{1}{2c}, \qquad \varrho = 0.45,\qquad \alpha _{k} = 0.18,\qquad \theta = 0.05,\qquad p_{k} = \frac{100}{(k + 1)^{2}}. $$
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=20\)
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=50\)
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=100\)
Computational comparability of Algorithm 2 with Algorithm 3.1 in [36] for \(M=200\)
5 Conclusion
This research presents two explicit extragradient-like methods for solving an equilibrium problem involving a pseudomonotone and Lipschitz-type bifunction in a real Hilbert space. A novel stepsize rule has been given that does not rely on information about the Lipschitz-type constants. Convergence theorems have been established for the proposed methods. Several experiments are reported to show the numerical behavior of the algorithms and to compare them with other well-known methods in the literature.
Availability of data and materials
Not applicable.
References
Arrow, K.J., Debreu, G.: Existence of an equilibrium for a competitive economy. Econometrica 22(3), 265 (1954)
Attouch, F.A.H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Var. Anal. 9, 3–11 (2001)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. CMS Books in Mathematics. Springer, Berlin (2017)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Bianchi, M., Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90(1), 31–43 (1996)
Bigi, G., Castellani, M., Pappalardo, M., Passacantando, M.: Existence and solution methods for equilibria. Eur. J. Oper. Res. 227(1), 1–11 (2013)
Blum, E.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Browder, F., Petryshyn, W.: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 20(2), 197–228 (1967)
Ceng, L.-C., Yao, J.-C.: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 214(1), 186–201 (2008)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148(2), 318–335 (2010)
Cournot, A.A.: Recherches sur les Principes Mathématiques de la Théorie des Richesses. Hachette, Paris (1838)
Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2002)
Hieu, D.V.: An inertial-like proximal algorithm for equilibrium problems. Math. Methods Oper. Res. 88(3), 399–415 (2018)
Hieu, D.V., Cho, Y.J., Xiao, Y.-B.: Modified extragradient algorithms for solving equilibrium problems. Optimization 67(11), 2003–2029 (2018)
Konnov, I.: Equilibrium Models and Variational Inequalities, vol. 210. Elsevier, Amsterdam (2007)
Liu, L., Cho, S.Y., Yao, J.-C.: Convergence analysis of an inertial Tseng’s extragradient algorithm for solving pseudomonotone variational inequalities and applications. J. Nonlinear Var. Anal. 5(4), 627–644 (2021)
Lyashko, S.I., Semenov, V.V.: A new two-step proximal algorithm of solving the problem of equilibrium programming. In: Optimization and Its Applications in Control and Data Sciences, pp. 315–325. Springer, Berlin (2016)
Mastroeni, G.: On auxiliary principle for equilibrium problems. In: Equilibrium Problems and Variational Models. Nonconvex Optimization and Its Applications, pp. 289–298. Springer, Boston (2003)
Muangchoo, K., ur Rehman, H., Kumam, P.: Two strongly convergent methods governed by pseudo-monotone bi-function in a real Hilbert space with applications. J. Appl. Math. Comput. 67(1–2), 891–917 (2021)
Muu, L., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal., Theory Methods Appl. 18(12), 1159–1166 (1992)
Nash, J.: Non-cooperative games. Ann. Math. 54, 286–295 (1951)
Nash, J.F., et al.: Equilibrium points in n-person games. Proc. Natl. Acad. Sci. 36(1), 48–49 (1950)
Ogbuisi, F., Iyiola, O., Ngnotchouye, J., Shumba, T.: On inertial type self-adaptive iterative algorithms for solving pseudomonotone equilibrium problems and fixed point problems. J. Nonlinear Funct. Anal. 2021(1), Article ID 4 (2021)
Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–598 (1967)
Peypouquet, J.: Convex Optimization in Normed Spaces: Theory, Methods and Examples. Springer, Berlin (2015)
Polyak, B.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
Popov, L.D.: A modification of the Arrow–Hurwicz method for search of saddle points. Math. Notes Acad. Sci. USSR 28(5), 845–848 (1980)
Tran, D.Q., Dung, M.L., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749–776 (2008)
ur Rehman, H., Gibali, A., Kumam, P., Sitthithakerngkiet, K.: Two new extragradient methods for solving equilibrium problems. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 115(2), 75 (2021)
ur Rehman, H., Kumam, P., Cho, Y.J., Suleiman, Y.I., Kumam, W.: Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 36(1), 82–113 (2020)
ur Rehman, H., Kumam, P., Dong, Q.-L., Cho, Y.J.: A modified self-adaptive extragradient method for pseudomonotone equilibrium problem in a real Hilbert space with applications. Math. Methods Appl. Sci. 44(5), 3527–3547 (2020)
ur Rehman, H., Kumam, P., Gibali, A., Kumam, W.: Convergence analysis of a general inertial projection-type method for solving pseudomonotone equilibrium problems with applications. J. Inequal. Appl. 2021(1), 63 (2021)
ur Rehman, H., Pakkaranang, N., Kumam, P., Cho, Y.J.: Modified subgradient extragradient method for a family of pseudomonotone equilibrium problems in real a Hilbert space. J. Nonlinear Convex Anal. 21(9), 2011–2025 (2020)
Van Hieu, D., Duong, H.N., Thai, B.: Convergence of relaxed inertial methods for equilibrium problems. J. Appl. Numer. Optim. 3(1), 215–229 (2021)
Wang, S., Zhang, Y., Ping, P., Cho, Y., Guo, H.: New extragradient methods with non-convex combination for pseudomonotone equilibrium problems with applications in Hilbert spaces. Filomat 33(6), 1677–1693 (2019)
Yang, J.: The iterative methods for solving pseudomontone equilibrium problems. J. Sci. Comput. 84(3), 50 (2020)
Yao, Y., Iyiola, O.S., Shehu, Y.: Subgradient extragradient method with double inertial steps for variational inequalities. J. Sci. Comput. 90(2), 71 (2022)
Yao, Y., Li, H., Postolache, M.: Iterative algorithms for split equilibrium problems of monotone operators and fixed point problems of pseudo-contractions. Optimization, 1–19 (2020). https://doi.org/10.1080/02331934.2020.1857757
Zhao, X., Köbis, M.A., Yao, Y., Yao, J.-C.: A projected subgradient method for nondifferentiable quasiconvex multiobjective optimization problems. J. Optim. Theory Appl. 190(1), 82–107 (2021)
Zhu, L.-J., Yao, Y., Postolache, M.: Projection methods with linesearch technique for pseudomonotone equilibrium problems and fixed point problems. UPB Sci. Bull., Ser. A, Appl. Math. Phys. 83(1), 3–14 (2021)
Acknowledgements
The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2022 (under project number FRB65E0633M.2).
Funding
This research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2022 (under project number FRB65E0633M.2).
Author information
Contributions
HR was a major contributor in writing the manuscript and conceptualization. PK dealt with the conceptualization, supervision, and funding acquisition. IKA dealt with the methodology, investigation, and edition original draft preparation. WK performed the validation, formal analysis, and funding acquisition. MS performed conceptualization, formal analysis and writing revised version. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Rehman, H.u., Kumam, P., Argyros, I.K. et al. The inertial iterative extragradient methods for solving pseudomonotone equilibrium programming in Hilbert spaces. J Inequal Appl 2022, 58 (2022). https://doi.org/10.1186/s13660-022-02790-4
MSC
- 47J25
- 47H09
- 47H06
- 47J05
Keywords
- Equilibrium problem
- Iterative method
- Pseudomonotone bifunction
- Weak convergence theorem