A new extragradient algorithm with adaptive step-size for solving split equilibrium problems
Journal of Inequalities and Applications volume 2021, Article number: 136 (2021)
Abstract
He (J. Inequal. Appl. 2012, Article ID 162) introduced the proximal point CQ algorithm (PPCQ) for solving the split equilibrium problem (SEP). However, the PPCQ converges only weakly to a solution of the SEP and is restricted to monotone bifunctions. In addition, the step-size used in the PPCQ is a fixed constant μ in the interval \((0, \frac{1}{ \| A \|^{2} } )\). This often leads to excessive numerical computation in each iteration, which may limit the applicability of the PPCQ. In order to overcome these intrinsic drawbacks, we propose a robust step-size sequence \(\{ \mu _{n} \}_{n=1}^{\infty }\) which does not require computation of \(\| A \|\), together with an adaptive rule under which \(\{ \mu _{n} \}_{n=1}^{\infty }\) adjusts itself in accordance with the movement of the associated components of the algorithm in each iteration. We then introduce a self-adaptive extragradient-CQ algorithm (SECQ) for solving the SEP and prove that the proposed SECQ converges strongly to a solution of the SEP for the more general class of pseudomonotone equilibrium bifunctions. Finally, we present a preliminary numerical test demonstrating that our SECQ outperforms the PPCQ.
1 Introduction
The equilibrium problem (EP) associated with a bifunction \(f : C \times C \to \mathbb{R}\) and a nonempty subset C of a real Hilbert space \(\mathbb{H}\) consists of finding a vector \(x^{*} \in C\) such that
$$ f \bigl(x^{*}, y \bigr)\geq 0, \quad \forall y\in C. $$
It is well known that the mathematical basis for the EP predates the work of Ky Fan [10]. Nevertheless, owing to his seminal contributions to the subject, the EP is often called the Ky Fan inequality. The EP is a powerful tool that unifies a number of useful and elegant nonlinear problems. In recent years, the EP has become one of the major nonlinear frameworks providing significant success in modeling several real-world problems (see, e.g., [1, 2, 8, 9, 13, 14, 18, 23, 24]).
The set of solutions of the EP is denoted by \(\operatorname{SOL}(f, C)\). To solve the EP, it is common to use the proximal point method proposed by Combettes and Hirstoaga [8]: given \(x_{n}\in C\) as the current iterate, the next iterate \(x_{n+1}\) solves the following regularized problem:
$$ x_{n+1}= \mathop {\operatorname{argmin}} _{y \in C} \biggl\{ f(x_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \biggr\} , \quad \rho _{n}>0. $$
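To make the proximal step concrete, here is a minimal numerical sketch (not code from the paper), assuming a linearized bifunction \(f(x,y)=\langle F(x), y-x\rangle\) and a box constraint, so that the strongly convex subproblem reduces to a projected step; the operator F and the vector c below are hypothetical illustration data.

```python
import numpy as np

# Illustrative sketch: one proximal point pass for the EP with the linearized
# bifunction f(x, y) = <F(x), y - x>.  For a box constraint C = [lo, hi]^m the
# regularized subproblem
#   x_{n+1} = argmin_{y in C} { f(x_n, y) + (1/(2*rho)) * ||y - x_n||^2 }
# has the closed form x_{n+1} = P_C(x_n - rho * F(x_n)).

def proximal_step(x, F, rho, lo=-1.0, hi=1.0):
    """One proximal point iteration followed by the box projection."""
    return np.clip(x - rho * F(x), lo, hi)

# Hypothetical monotone operator F(x) = x + c, so f is monotone on C.
c = np.array([2.0, -0.5])
F = lambda x: x + c

x = np.zeros(2)
for _ in range(100):
    x = proximal_step(x, F, rho=0.5)
# The iterates approach the solution x* = (-1, 0.5): the first coordinate is
# pinned at the box boundary, the second settles at the interior root of F.
```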
In 2008, Tran et al. [23] (see also Dang [9]) showed that a large number of important real-world problems can be reformulated as equilibrium problems with pseudomonotone bifunctions. A well-known example is the Nash–Cournot oligopolistic electricity market model. Unfortunately, the traditional proximal point method (2) does not converge if f is pseudomonotone (see, e.g., [21, Example 2.1]). Hence, Tran et al. [23] introduced the following proximal-extragradient algorithm for solving (1) when f is pseudomonotone:
$$ y_{n}= \mathop {\operatorname{argmin}} _{y \in C} \biggl\{ f(x_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \biggr\} \quad \text{and}\quad x_{n+1}= \mathop {\operatorname{argmin}} _{y \in C} \biggl\{ f(y_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \biggr\} . $$
It is worth mentioning that, unlike the proximal point method (2), the extragradient algorithm (3) falls within the applicable scope of the standard MATLAB Optimization Toolbox. So, in order to implement algorithm (3), one only needs to solve a pair of strongly convex programs via the MATLAB Optimization Toolbox.
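The pair of strongly convex subproblems per iteration can be sketched in the same closed-form setting (in Python here rather than MATLAB); the rotation operator M below is a standard hypothetical example of a monotone problem on which the plain proximal point iteration fails but the extragradient scheme succeeds.

```python
import numpy as np

# Sketch of the extragradient template for f(x, y) = <F(x), y - x> on the box
# C = [-1, 1]^2, where each strongly convex subproblem reduces to a projected
# step.  F(x) = M x with a skew matrix M is monotone but not strongly monotone:
# a classical case where the proximal point method circles instead of converging.

def prox(x, grad, rho):
    return np.clip(x - rho * grad, -1.0, 1.0)

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric rotation operator
rho = 0.5                                  # rho < 1/L with L = ||M|| = 1

x = np.array([0.9, 0.9])
for _ in range(500):
    y = prox(x, M @ x, rho)    # first subproblem:  y_n
    x = prox(x, M @ y, rho)    # second subproblem: x_{n+1}
# x approaches the unique equilibrium x* = (0, 0)
```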
On the other hand, the study of the classical split feasibility problem (SFP) was pioneered by Censor and Elfving [5]. It provides effective tools for obtaining the existence of solutions of constrained and inverse problems arising in optimization, engineering, medical sciences, and most notably in image reconstruction, signal processing, and phase retrieval (see, e.g., [3, 4, 6]). The SFP is to find a vector
$$ x^{*}\in C \quad \text{such that} \quad Ax^{*}\in Q, $$
where C and Q are given nonempty, closed, and convex subsets of real Hilbert spaces H1 and H2, respectively, and \(A: \operatorname{H}_{1}\to \operatorname{H}_{2}\) is a bounded linear operator. Byrne [3] imposed the restrictive condition \(\{\gamma _{n}\}_{n=1}^{\infty }\subset ( 0, \frac{2}{ \Vert A \Vert ^{2}} )\) on the CQ-method for solving (4):
$$ x_{n+1}= \operatorname{P}_{C} \bigl( x_{n}- \gamma _{n} A^{*}(I-\operatorname{P}_{Q})Ax_{n} \bigr), $$
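The CQ iteration can be sketched on a toy SFP instance (all data hypothetical): with a box C, an interval Q, and a rank-one operator A, both projections have closed forms.

```python
import numpy as np

# Sketch of the CQ iteration
#   x_{n+1} = P_C( x_n - gamma * A^T (A x_n - P_Q(A x_n)) )
# on a toy SFP: C = [-1, 1]^2, Q = [2, 3] in R, and A x = <a, x> with
# a = (1, 1), so ||A||^2 = 2 and the admissible range is gamma in (0, 1).

a = np.array([1.0, 1.0])
P_C = lambda x: np.clip(x, -1.0, 1.0)
P_Q = lambda t: np.clip(t, 2.0, 3.0)

gamma = 0.9
x = np.zeros(2)
for _ in range(50):
    t = a @ x                              # A x
    x = P_C(x - gamma * a * (t - P_Q(t)))  # gradient-projection step
# x lands in C with A x in Q; here the only feasible point is x = (1, 1)
```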
where \(\operatorname{P}_{C}\) and \(\operatorname{P}_{Q}\) stand for the metric projections onto C and Q, respectively, and \(A^{*}\) denotes the adjoint operator of A from H2 to H1. The concept of a self-adaptive step-size for solving (4) was introduced by Yang [26] and developed by Lopez et al. [15] in order to dispense with the restrictive condition \(\{\gamma _{n}\}_{n=1}^{\infty }\subset ( 0, \frac{2}{ \Vert A \Vert ^{2}} )\). In general, self-adaptive algorithms operate under the assumption that future inputs are uncertain: their step-sizes continuously monitor themselves, gather and analyze data, and adapt when their requirements fail due to unexpected changes in the components of the algorithm. Research has shown that a growing number of algorithms with adaptive step-sizes for solving nonlinear problems are faster and more robust to failure (see, e.g., [11, 19, 20, 22, 27, 28]). The SFP is a particular case of a more general problem, called the split equilibrium problem (SEP): find
$$ x^{*}\in C \text{ such that } f \bigl(x^{*},y \bigr)\geq 0, \quad \forall y\in C, \quad \text{and} \quad u^{*}=Ax^{*}\in Q \text{ solves } g \bigl(u^{*},v \bigr)\geq 0, \quad \forall v\in Q. $$
Here, \(g : Q \times Q \to \mathbb{R}\) stands for another bifunction on H2. The SEP was introduced by Moudafi [17] in 2011. Due to its relevance, the problem was revisited a year later by He [12], who proposed the following proximal point-CQ algorithm (PPCQ) for solving SEP (6):
It is worth noting that, as a prototype of the proximal point method, the PPCQ may not converge when f and g are pseudomonotone. In addition, the PPCQ converges only weakly to a solution of (6) when it is consistent, and its step-size \(\mu \in (0, \frac{1}{ \Vert A \Vert ^{2}} )\) depends on \(\Vert A \Vert \), which may lead to excessive numerical computation and affect the convergence of the PPCQ. The question now becomes: is it possible to develop an extragradient algorithm with an adaptive step-size that converges strongly to a solution of (6) when f and g are pseudomonotone? In answering this question, we present a self-adaptive extragradient-CQ algorithm (SECQ) for solving (6) and prove that the sequence generated by our proposed SECQ converges strongly to a solution of (6) when f and g are pseudomonotone. A numerical example is also given to demonstrate the effectiveness of our iterative scheme.
2 Preliminaries
Let \(\mathbb{H}\) be a real Hilbert space whose inner product and norm are denoted by \({\langle \cdot , \cdot \rangle} \) and \({ \Vert \cdot \Vert }\), respectively. Let \(C\subseteq \mathbb{H}\) be a nonempty, closed, and convex set, and denote by the symbols ⇀ and ⟶ the weak and strong convergence of a sequence \(\{x_{n}\}_{n=1}^{\infty }\), respectively.
Definition 2.1
A bifunction \(f:C\times C \to \mathbb{R}\) is said to be
- (i) monotone on C if
$$ f(x,y) + f(y,x) \leq 0, \quad \forall x,y\in C; $$
- (ii) pseudomonotone on C with respect to \(x\in C\) if
$$ f(x, y)\geq 0 \quad \Rightarrow \quad f(y, x )\leq 0, \quad \forall y\in C; $$
- (iii) pseudomonotone on C with respect to \(\emptyset \neq \Omega \subset C\) if, \(\forall x^{*}\in \Omega \),
$$ f \bigl(x^{*}, y \bigr)\geq 0 \quad \Rightarrow \quad f \bigl(y, x^{*} \bigr)\leq 0, \quad \forall y \in C; $$
- (iv) Lipschitz-type continuous if there are two positive constants \(L_{1}\), \(L_{2}\) such that
$$ f(x,y)+ f(y,z) \geq f(x,z)-L_{1} \Vert x-y \Vert ^{2} - L_{2} \Vert y-z \Vert ^{2}, \quad \forall x,y,z\in C; $$
- (v) jointly weakly continuous on \(C \times C\) if, given any \(x,y\in C\) and sequences \(\{x_{n} \}_{n=1}^{\infty }, \{y_{n} \}_{n=1}^{\infty }\subset C\) converging weakly to x and y, respectively,
$$ \lim_{n \to \infty } f(x_{n}, y_{n}) = f(x,y). $$
The following conditions will be used in the sequel.
Assumption A
- \((A1)\): f is pseudomonotone with respect to \(\operatorname{SOL}(f, C)\);
- \((A2)\): f is jointly weakly continuous and Lipschitz-type continuous on C with constants \(L_{1}\) and \(L_{2}\);
- \((A3)\): \(f(x,\cdot )\) is convex and subdifferentiable on C;
- \((A4)\): \(\Omega = \{ x\in \operatorname{SOL}(f, C) : A(x)\in \operatorname{SOL}(g, Q ) \}\neq \emptyset \).
Lemma 2.2
([2])
If the bifunction f satisfies conditions \((A1)\)–\((A4)\), then \(\operatorname{SOL}(f, C)\) is weakly closed and convex.
Recall that the metric projection of \(\mathbb{H}\) onto C is the mapping \(\operatorname{P}_{C}: \mathbb{H} \to C\) which assigns to each \(x\in \mathbb{H}\) the unique (nearest) point \(\operatorname{P}_{C}(x)\) in C satisfying
$$ \bigl\Vert x- \operatorname{P}_{C}(x) \bigr\Vert \leq \Vert x-y \Vert , \quad \forall y\in C. $$
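The defining property of the projection, together with the variational characterization recorded in Lemma 2.3 below, can be checked numerically. A small sketch for the closed unit ball, whose projection has a closed form (the test points are hypothetical random samples):

```python
import numpy as np

# Numerical check of the projection characterization: z = P_C(u) if and only if
# <u - z, y - z> <= 0 for all y in C.  Sketch for C the closed unit ball in R^2,
# whose projection has the closed form P_C(u) = u / max(1, ||u||).

def project_ball(u):
    return u / max(1.0, np.linalg.norm(u))

rng = np.random.default_rng(0)
u = np.array([3.0, 4.0])
z = project_ball(u)                   # = (0.6, 0.8), the nearest point of C
for _ in range(1000):
    y = project_ball(rng.uniform(-1.0, 1.0, 2))   # an arbitrary point of C
    # obtuse-angle property of the metric projection
    assert np.dot(u - z, y - z) <= 1e-12
```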
Lemma 2.3
Let \(u\in \mathbb{H}\) and \(z\in C\). Then \(z= \operatorname{P}_{C}(u)\) if and only if
$$ \langle u-z, z-y \rangle \geq 0, \quad \forall y\in C. $$
Lemma 2.4
([16])
Let \(\{a_{n}\}_{n=1}^{\infty }\) be a sequence of real numbers for which there exists a subsequence \(\{{n_{i}}\}\) of \(\{n\}\) such that \(a_{n_{i}} < a_{n_{i}+1}\) for all \(i\geq 0\). Then there exists a nondecreasing sequence \(\{ m_{k}\}_{k=1}^{\infty }\subset \mathbb{N}\) such that \(m_{k} \to \infty \) and the following properties are satisfied by all (sufficiently large) numbers \(k\in \mathbb{N}\):
$$ a_{m_{k}} \leq a_{m_{k}+1} \quad \text{and} \quad a_{k} \leq a_{m_{k}+1}. $$
In fact, \(m_{k}\) is the largest number n in the set \(\{1,2,\ldots,k \}\) such that the condition \(a_{n} \leq a_{n+1} \) holds.
Lemma 2.5
([25])
Let \(\{\gamma _{n}\}_{n=1}^{\infty }\) be a sequence in \((0,1)\) and \(\{\delta _{n}\}_{n=1}^{\infty }\) be in \(\mathbb{R}\) satisfying \(\sum_{n=1}^{\infty } \gamma _{n}= \infty \) and \(\limsup_{n \to \infty } \delta _{n} \leq 0\) or \(\sum_{n=1}^{\infty } \vert \gamma _{n} \delta _{n} \vert < \infty \). If \(\{a_{n}\}_{n=1}^{\infty }\) is a sequence of nonnegative real numbers such that \(a_{n+1} \leq (1- \gamma _{n})a_{n} + \gamma _{n} \delta _{n}\), \(\forall n\geq 0\), then \(\lim_{n \to \infty } a_{n} =0\).
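A quick numerical illustration of Lemma 2.5, with the hypothetical choices \(\gamma _{n} = \delta _{n} = \frac{1}{n+1}\), which satisfy \(\sum \gamma _{n}= \infty \) and \(\delta _{n}\to 0\):

```python
# Numerical illustration of Lemma 2.5: with gamma_n = 1/(n+1) (so the series
# sum gamma_n diverges) and delta_n = 1/(n+1) -> 0, the recursion
#   a_{n+1} = (1 - gamma_n) * a_n + gamma_n * delta_n
# drives a_n to 0 regardless of how large a_1 is.

a = 5.0
for n in range(1, 100_000):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    a = (1.0 - gamma) * a + gamma * delta
# a is now of order (log n)/n, i.e. close to 0
```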
3 Main results
The following is our main algorithm for solving (6).
Algorithm 3.1
(Self-adaptive proximal-extragradient algorithm for SEP)
- Initialization: Given initial choice \(x_{1}\) and u in C. Pick parameters \(\{ \delta _{n} \}_{n=1}^{\infty }\subset [ \underline{\delta }, \bar{\delta }]\subset (0,1)\), \(\{ \sigma _{n} \}_{n=1}^{\infty }\subset (0,2)\), \(\{\rho _{n} \}_{n=1}^{\infty }\subset [ \underline{\rho }, \bar{\rho }]\), and \(\{r_{n} \}_{n=1}^{\infty }\subset [ \underline{r}, \bar{r}]\) such that
$$ [ \underline{\rho }, \bar{\rho }], [ \underline{r}, \bar{r}] \subset \biggl(0, \min \biggl\{ \frac{1}{2L_{1}} , \frac{1}{2L_{2}} \biggr\} \biggr),\quad\quad \lim_{n\rightarrow \infty }\delta _{n} = 0,\quad \text{and}\quad \sum _{n=1}^{\infty } \delta _{n}=\infty . $$
- Iterative steps: Assume that \(x_{n}\) is known for \(n\in \mathbb{N}\), then compute the update \(x_{n+1}\) according to the following rule.
- Step 1: Compute
$$ y_{n}= \mathop {\operatorname{argmin}} _{y \in C} \biggl\{ f(x_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \biggr\} \quad \text{and}\quad z_{n}= \mathop { \operatorname{argmin}} _{y \in C} \biggl\{ f(y_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \biggr\} . $$
- Step 2: Set \(\hat{v_{n}}:= \operatorname{P}_{Q} A(z_{n})\).
- Step 3: Compute
$$ v_{n}= \mathop {\operatorname{argmin}} _{v \in Q} \biggl\{ g( \hat{v_{n}},v)+ \frac{1}{2r_{n}} \Vert v-\hat{v_{n}} \Vert ^{2} \biggr\} \quad \text{and} \quad u_{n} = \mathop { \operatorname{argmin}} _{v \in Q} \biggl\{ g(v_{n},v)+ \frac{1}{2r_{n}} \Vert v-\hat{v_{n}} \Vert ^{2} \biggr\} . $$
- Step 4: Set \(F(z_{n}):= \frac{1}{2} \Vert Az_{n} - u_{n} \Vert ^{2}\) and \(G(z_{n}):= A^{*}(Az_{n} - u_{n})\).
- Step 5: Compute
$$\begin{aligned}& x_{n+1}=\delta _{n}u + (1-\delta _{n}) \operatorname{P}_{C} \bigl( z_{n}- \mu _{n} G(z_{n}) \bigr), \quad \text{where} \\& \mu _{n}= \textstyle\begin{cases} \sigma _{n} \frac{F(z_{n})}{ \Vert G(z_{n}) \Vert ^{2}}, & \text{if } G(z_{n}) \neq 0; \\ 0, & \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
- Stopping criterion: If \(x_{n+1} = x_{n}\), then \(x_{n}\) is a solution of SEP (6) and the iterative process stops; otherwise, put \(n := n + 1\) and go back to Step 1.
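The complete loop of Algorithm 3.1 can be sketched on a toy split problem in which every argmin has a closed form. All data below are hypothetical: both bifunctions are linearized, A is the identity, and the solution set is the singleton \(\Omega =\{p\}\), so the iterates should approach \(\operatorname{P}_{\Omega }(u)=p\).

```python
import numpy as np

# Minimal sketch of Algorithm 3.1 on a toy instance: H1 = H2 = R^2,
# C = Q = [-1, 1]^2, A = I, and the linearized bifunctions
#   f(x, y) = <x - p, y - x>,   g(u, v) = <u - p, v - u>,
# so that Omega = {p} with p = (0.5, 0.5).  Each argmin in Steps 1 and 3
# then reduces to a projected step onto the box.

p = np.array([0.5, 0.5])
clip = lambda w: np.clip(w, -1.0, 1.0)
A = np.eye(2)

u_anchor = np.zeros(2)      # the anchor point u of the algorithm
x = np.zeros(2)             # x_1
rho = r = 0.5               # rho_n, r_n < min{1/(2 L1), 1/(2 L2)} = 1 here

for n in range(1, 2000):
    delta = 1.0 / (n + 2)               # delta_n -> 0, sum delta_n = inf
    sigma = 2.0 - 1.0 / (n + 1)         # sigma_n in (0, 2)
    # Step 1: two proximal steps for f on C
    y = clip(x - rho * (x - p))
    z = clip(x - rho * (y - p))
    # Step 2: v_hat = P_Q(A z)
    v_hat = clip(A @ z)
    # Step 3: two proximal steps for g on Q
    v = clip(v_hat - r * (v_hat - p))
    u_n = clip(v_hat - r * (v - p))
    # Step 4: split residual F(z_n) and direction G(z_n)
    Fz = 0.5 * np.linalg.norm(A @ z - u_n) ** 2
    Gz = A.T @ (A @ z - u_n)
    # Step 5: adaptive step-size and anchored update
    mu = sigma * Fz / np.linalg.norm(Gz) ** 2 if np.linalg.norm(Gz) > 1e-14 else 0.0
    x = delta * u_anchor + (1.0 - delta) * clip(z - mu * Gz)
# x converges (strongly) to P_Omega(u_anchor) = p = (0.5, 0.5)
```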
Lemma 3.2
([1], Lemma 3.1)
Let H1 and H2 be real Hilbert spaces. Let C and Q be nonempty, closed, and convex subsets of H1 and H2, respectively. Let \(A: \operatorname{H}_{1}\to \operatorname{H}_{2}\) be a bounded linear operator with adjoint \(A^{*}\). Assume that \(f: C \times C \to \mathbb{R}\) satisfies conditions \((A1)\)–\((A4)\). Let \(\{x_{n}\}_{n=1}^{\infty }\) be a sequence generated by Algorithm 3.1. Then, for all \(x^{*}\in \operatorname{SOL}(f, C)\), the following statements hold:
- \((a)\): \(\rho _{n} ( f(x_{n}, y)-f( x_{n}, y_{n} ) )\geq \langle y_{n}-x_{n}, y_{n}-y \rangle \) for all \(y\in C\);
- \((b)\): \(\Vert z_{n}- x^{*} \Vert ^{2} \leq \Vert x_{n}-x^{*} \Vert ^{2} - (1-2 \rho _{n} L_{1}) \Vert x_{n}-y_{n} \Vert ^{2}- (1-2\rho _{n} L_{2}) \Vert y_{n}- z_{n} \Vert ^{2}\).
The following theorem gives conditions that guarantee strong convergence of Algorithm 3.1.
Theorem 3.3
Let H1 and H2 be real Hilbert spaces. Let C and Q be nonempty, closed, and convex subsets of H1 and H2, respectively. Let \(A: \operatorname{H}_{1}\to \operatorname{H}_{2}\) be a bounded linear operator with adjoint \(A^{*}\). Assume that \(f: C \times C \to \mathbb{R}\) and \(g: Q \times Q \to \mathbb{R}\) satisfy conditions \((A1)\)–\((A4)\). Let \(\{ x_{n} \}_{n=1}^{\infty }\) be any sequence generated by Algorithm 3.1. Then \(x_{n} \to \operatorname{P}_{\Omega } (u)\) under the following conditions:
- \((C1)\): \(0< t_{1} \leq \mu _{n} \leq t_{2}\) for some \(t_{1}, t_{2} \in \mathbb{R}\) and \(\forall n \in \Gamma = \{ n\geq 1 : G(z_{n}) \neq 0 \}\);
- \((C2)\): \(\liminf_{n \to \infty} (2 - \sigma_{n}) > 0\);
- \((C3)\): \(\langle Gz_{n}, z_{n}-x^{*} \rangle \geq F ( z_{n} )\) for all \(x^{*} \in \Omega \).
Proof
Let \(x^{*}= P_{\Omega }(u)\) and \(\pi _{n}= \operatorname{P}_{C} (z_{n} - \mu _{n}G(z_{n}) )\). Then, by using \((C3)\), we have
Thus
This implies
Consequently, by Lemma 3.2 and (9), we get
By Algorithm 3.1 and (10), we have
Hence \(\{ x_{n} \}_{n=1}^{\infty }\) is bounded. Then, from (10), we deduce that \(\{ \pi _{n} \}_{n=1}^{\infty }\) and \(\{ z_{n} \}_{n=1}^{\infty }\) are bounded. By using Algorithm 3.1 and the subdifferential inequality
we obtain
Thus, by Lemma 3.2, (10), and (11), we have
Case 1. Suppose that there exists \(n_{0}\in \mathbb{N}\) such that \(\{ \Vert x_{n}-x^{*} \Vert \}_{n=1}^{\infty }\) is decreasing for \(n \geq n_{0}\). Then the limit of \(\{ \Vert x_{n}-x^{*} \Vert \}_{n=1}^{\infty }\) exists. Consequently,
Moreover, by using \(0 < \underline{\rho } \leq \rho _{n} \leq \bar{\rho } < \min \{ \frac{1}{2L_{1}} , \frac{1}{2L_{2}} \}\) and (12), we have
Since \(\lim_{n \to \infty } \delta _{n} = 0\), \((1-2\bar{\rho } L_{1}) >0\), and \((1-2\bar{\rho } L_{2}) >0\), then it follows from (14) and (13) that
This implies that
By combining (10) and (11), we obtain
Thus, from (13) and (17), we get
Owing to \(( C1)\), (8), and (10), we have
Clearly, from (19), (18), and \(( C2)\), we obtain
Since \(\Vert Az_{n} -Ax^{*} \Vert \leq \Vert Az_{n} -u_{n} \Vert + \Vert u_{n}-Ax^{*} \Vert \), then it follows from (20) that
By Lemma 3.2, we have
Similarly, from \(0 < \underline{r} \leq r_{n} \leq \bar{r} < \min \{ \frac{1}{2L_{1}} , \frac{1}{2L_{2}} \}\) and (22), we get
Moreover, since \((1-2\bar{r} L_{1}) >0\) and \((1-2\bar{r} L_{2}) >0\), then it follows from (23) and (21) that
Now, let \(\zeta _{n}= z_{n} - \mu _{n}G(z_{n}) \). Then
Again, by using \((C1)\), (20), and (25), we get
Since \(\Vert \pi _{n} - z_{n} \Vert ^{2}\leq \Vert \zeta _{n}-z_{n} \Vert ^{2}\), then it follows from (26) that
By the triangle inequality in conjunction with (16) and (27), we obtain
It is clear that
Since \(\delta _{n} \to 0\) as \(n \to \infty \) and \(\{ \Vert u - \pi _{n} \Vert \}_{n=1}^{\infty }\) is bounded, then
Consequently, from (30) and (28), we have
Furthermore, since H1 is reflexive and \(\{x_{n}\}_{n=1}^{\infty }\subset \operatorname{H}_{1}\) is bounded, then there exists a subsequence \(\{x_{n_{k}} \}_{k=1}^{\infty }\) of \(\{x_{n}\}_{n=1}^{\infty }\) such that
Moreover, since \(\{ z_{n} \}_{n=1}^{\infty }\), \(\{ y_{n} \}_{n=1}^{\infty }\), and \(\{ \pi _{n} \}_{n=1}^{\infty }\) are bounded, then it follows from (15), (16), (28), and (32) that
It will now be shown that the weak limit \(e^{*}\) solves SEP (6). That is, \(e^{*}\in \Omega \). Indeed, since C and Q are closed and convex, then
Also, since \(\{x_{n}\}_{n=1}^{\infty } \subset C\), then it follows from (32) and (35) that \(e^{*}\in C\). Note that A is linear and bounded. So, from (34), we obtain \(Az_{n_{k}}\rightharpoonup Ae^{*}\) as \(k\rightarrow \infty \). In view of (20) and the boundedness of \(\{u_{n}\}_{n=1}^{\infty }\), we see that
Likewise, since \(\{ v_{n} \}_{n=1}^{\infty }\) and \(\{ \hat{v_{n}} \}_{n=1}^{\infty }\) are bounded, then, from (36) and (24), we get
Clearly, since \(\{ \hat{v_{n_{k}}} \}_{k=1}^{\infty }\subset Q\), then, from (35) and (37), we deduce that \(A(e^{*})\in Q\). It remains to show that \(e^{*}\in \operatorname{SOL}(f, C)\) and \(Ae^{*}\in \operatorname{SOL}(g, Q)\). By Lemma 3.2, in particular, for all \(k\in \mathbb{N}\), we have
This implies
However, since \(\rho _{n_{k}} \geq \underline{\rho } >0\), then, by applying the Cauchy–Schwarz inequality, we see that
On the other hand, since \(\{ y_{n_{k}} \}_{k=1}^{\infty }\) is bounded and \(\lim_{k \to \infty } \Vert y_{n_{k}}-x_{n_{k}} \Vert =0\), then \(\Vert y_{n_{k}}- x_{n_{k}} \Vert \Vert y_{n_{k}}-y \Vert \to 0\) as \(k \to \infty \). Consequently,
Since f satisfies \((A2)\), then, by passing limit as \(k \to \infty \) in (38) in conjunction with (34), (32), and (40), we have
Thus \(e^{*}\in \operatorname{SOL}(f, C)\). Again, by Lemma 3.2, we see that
Therefore
Since \(r_{n_{k}} \geq \underline{r} >0\), applying the Cauchy–Schwarz inequality, we have
Now, since \(\{ v_{n_{k}} \}_{k=1}^{\infty }\) is bounded and \(\lim_{k \to \infty } \Vert v_{n_{k}}- \hat{v_{n_{k}}} \Vert =0\), then \(\Vert v_{n_{k}}- \hat{v_{n_{k}}} \Vert \Vert v_{n_{k}}-v \Vert \to 0\) as \(k \to \infty \). Hence,
Again since g satisfies \((A2)\), then by passing limit as \(k \to \infty \) in (42) using (36), (37), and (44), we get
Hence \(e^{*}\in \Omega \). Since Ω is closed, convex, and \(x^{*}= \operatorname{P}_{\Omega }(u)\), then it follows from (31), (32), (33), and Lemma 2.3 that
Consequently, by Lemma 2.5, (11), and (46), we conclude that
Case 2. Suppose that \(\{ \Vert x_{n}-x^{*} \Vert \}_{n=1}^{\infty }\) is not decreasing such that there exists a subsequence \(\{ \Vert x_{n_{l}}-x^{*} \Vert \}_{l=1}^{\infty }\) of \(\{ \Vert x_{n}-x^{*} \Vert \}_{n=1}^{\infty }\) satisfying
In view of Lemma 2.4, there exists a nondecreasing sequence \(\{ m_{k} \}_{k=1}^{\infty }\subset \mathbb{N}\) such that \(\lim_{k\rightarrow \infty } m_{k}= \infty \) and the following inequalities hold for all \(k\in \mathbb{N}\):
Note that by discarding the repeated terms of \(\{m_{k} \}_{k=1}^{\infty }\), but still denoting it by \(\{m_{k} \}_{k=1}^{\infty }\), we can regard \(\{x_{m_{k}} \}_{k=1}^{\infty }\) as a subsequence of \(\{x_{n} \}_{n=1}^{\infty }\). Since \(\{x_{m_{k}} \}_{k=1}^{\infty }\) is bounded, \(\lim_{k \to \infty }( \Vert x_{m_{k}}-x^{*} \Vert - \Vert x_{m_{k+1}}-x^{*} \Vert )=0\). Then, following arguments similar to those in Case 1, we deduce that
It follows from (47) and (11) that
Clearly, by dividing through by \(\delta _{m_{k}}\), we get
Passing limit as \(k\to \infty \) in (51) using (49), we obtain
Consequently, from (47), we see that
Hence \(x_{n}\rightarrow x^{*}\in \Omega \) in both cases and this ends the proof. □
Remark 1
- (1) Theorem 3.3 reduces to the extragradient method studied by Tran et al. [23] if we set \(g\equiv 0\) and \(\operatorname{H}_{1} \equiv \operatorname{H}_{2}\).
- (2) Theorem 3.3 coincides with the work of Lopez et al. [15] whenever \(g=f \equiv 0\).
- (3) He [12] imposed the restriction \(\mu \in (0, \frac{1}{ \Vert A \Vert ^{2}} )\) on the step-size; in our work, this restriction is relaxed by an adaptive step-size that uses the simpler initialization sequence \(\{\sigma _{n} \}_{n=1}^{\infty }\subset (0,2)\).
- (4) Moreover, He [12] obtained a weak convergence result for solving (6), and strong convergence follows there only through a hybrid proximal point algorithm, which is not easy to implement. In this paper, we obtain a strong convergence result for solving (6) without using the hybrid scheme.
- (5) The conclusion of our Theorem 3.3 holds for pseudomonotone bifunctions, while the corresponding result by He [12] is restricted to monotone bifunctions.
4 Numerical results
In this section, a preliminary numerical test is presented to compare the convergence behavior of the proposed Algorithm 3.1 with algorithm (7).
Example 4.1
Consider the Nash–Cournot equilibrium problem studied in [20, 23], where \(f: C \times C \rightarrow \mathbb{R}\) is defined by
$$ f(x,y)= \langle Ux+Vy+c, y-x \rangle , \quad \forall x,y\in C, $$
where \(c \in \mathbb{R}^{m}\) and U, V are two matrices of order m such that V is symmetric positive semidefinite and \(V - U\) is negative semidefinite, with Lipschitz constants \(L_{1} = L_{2} = \frac{1}{2} \Vert U - V \Vert \). The matrices U, V are randomly generated (see the Notes section), and the entries of c randomly belong to \([-1, 1]\). The constraint set \(C \subset \mathbb{R}^{m}\) is taken as follows:
Assume that \(g:Q\times Q \to \mathbb{R}\) is defined by \(g(x, y) = x (y - x)\), \(\forall x, y \in Q=[-1, \infty )\). Suppose that \(A : \mathbb{R}^{m} \rightarrow \mathbb{R}\) is a linear operator defined by \(A x = \langle a, x \rangle \), \(\forall x \in \mathbb{R}^{m}\), where a is a vector in \(\mathbb{R}^{m}\) whose elements are randomly generated from \([1, m]\). Thus, \(A^{*}: \mathbb{R} \to \mathbb{R}^{m} \) is of the form \(A^{*} y = ya\) for all \(y \in \mathbb{R}\), and \(\Vert A \Vert = \Vert a \Vert \). The starting points \(x_{1}\in C\) are randomly generated in the interval \([-10, 10]\), and we choose \(\mu = \frac{1}{2 \Vert a \Vert ^{2}}\), \(\rho _{n}= r_{n}= \frac{1}{4 L_{1}}\), \(\delta _{n}=\frac{1}{n+2}\), and \(\sigma _{n} = 2- \frac{1}{n+1}\). We define the function \(\mathit{TOL}_{n}\) by \(\mathit{TOL}_{n}:= \Vert x_{n+1}- x_{n} \Vert \) and use the stopping rule \(\mathit{TOL}_{n} < \epsilon \) for the iterative process, where ϵ is the predetermined error. The equivalent convex quadratic problems are solved using the function fmincon and implemented in MATLAB 7.0 running on an HP Compaq 510, Core(TM)2 Duo Processor T5870 with 2.00 GHz and 2 GB RAM. Table 1 shows that Algorithm 3.1 outperforms He’s algorithm (7) in running time and in the number of iterations for different cases of m.
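The operator identities used in this example are easy to verify numerically. The following sketch, with hypothetical random data of small dimension, checks the adjoint identity \(\langle Ax, y \rangle = \langle x, A^{*}y \rangle \) and the norm formula \(\Vert A \Vert = \Vert a \Vert \).

```python
import numpy as np

# Sketch checking the operator data of Example 4.1 for a small m:
# A x = <a, x>, A* y = y * a, and ||A|| = ||a||.  The vector a is generated
# randomly from [1, m], as in the example; m and the seed are hypothetical.

m = 5
rng = np.random.default_rng(1)
a = rng.uniform(1.0, m, size=m)

A = lambda x: a @ x          # A : R^m -> R
A_star = lambda y: y * a     # A* : R -> R^m

x = rng.standard_normal(m)
y = rng.standard_normal()
# adjoint identity <A x, y> = <x, A* y>
gap = abs(A(x) * y - x @ A_star(y))
# the operator norm sup_{||x||=1} |<a, x>| = ||a|| is attained at x = a/||a||
norm_gap = abs(A(a / np.linalg.norm(a)) - np.linalg.norm(a))
```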
5 Conclusion
In this paper, we have proposed a self-adaptive extragradient iterative process for solving split pseudomonotone equilibrium problems. We established strong convergence of the proposed algorithm, and its performance, in terms of CPU time and the number of iterations required for convergence, is highlighted through preliminary numerical tests showing that our proposed algorithm is faster than the corresponding algorithm of He [12].
Availability of data and materials
Not applicable.
Notes
Two matrices E and F are randomly generated with entries from \([-1,1]\). Then \(V=E^{T}E\), \(S=F^{T}F\), and \(U = S + V\).
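This construction can be checked numerically; the sketch below (with a hypothetical seed and dimension) confirms that V is positive semidefinite and \(V-U=-S\) is negative semidefinite, as Example 4.1 requires.

```python
import numpy as np

# Sketch of the data construction described in the note: V = E^T E is symmetric
# positive semidefinite, and U = S + V with S = F^T F gives V - U = -S, which
# is negative semidefinite, as Example 4.1 requires.

m = 10
rng = np.random.default_rng(42)
E = rng.uniform(-1.0, 1.0, (m, m))
F = rng.uniform(-1.0, 1.0, (m, m))

V = E.T @ E
S = F.T @ F
U = S + V

eigV = np.linalg.eigvalsh(V)         # eigenvalues of V: all >= 0
eigVU = np.linalg.eigvalsh(V - U)    # eigenvalues of V - U = -S: all <= 0
L1 = 0.5 * np.linalg.norm(U - V, 2)  # Lipschitz constants L1 = L2 = ||U - V||/2
```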
References
Anh, P.N.: A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 62, 271–283 (2013)
Bianchi, M., Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90, 31–34 (1996)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projection in a product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-set split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
Dang, V.H.: Convergence analysis of a new algorithm for strongly pseudomonotone equilibrium problems. Numer. Algorithms 77, 983–1001 (2018)
Fan, K.: A minimax inequality and applications. In: Inequalities III, pp. 103–113. Academic Press, New York (1972)
He, B., He, X., Liu, H., Wu, T.: Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 196, 43–48 (2009)
He, Z.: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. 2012, Article ID 162 (2012)
Kim, J.K., Majee, P.: Modified Krasnoselski–Mann iterative method for hierarchical fixed point problem and split mixed equilibrium problem. J. Inequal. Appl. 2020, Article ID 227 (2020)
Kim, J.K., Salahuddin, S.: Existence of solutions for multivalued equilibrium problems. Nonlinear Funct. Anal. Appl. 23(4), 779–795 (2018)
Lopez, G., Martin-Marquez, V., Wang, F.H., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, Article ID 085004 (2012)
Mainge, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
Oyewole, O.K., Mewomo, O.T.: Existence results for new generalized mixed equilibrium and fixed point problems in Banach spaces. Nonlinear Funct. Anal. Appl. 25(2), 273–301 (2020)
Rehman, H.U., Kumam, P., Cho, Y.J., Suleiman, Y.I., Kumam, W.: Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. (2020). https://doi.org/10.1080/10556788.2020.1734805
Suleiman, Y.I., Rehman, H.U., Gibali, A., Kumam, P.: A self-adaptive extragradient CQ-method for a class of bilevel split equilibrium problem with application to Nash Cournot oligopolistic electricity market models. Comput. Appl. Math. 39, 293 (2020)
Tam, N.N., Yao, J.C., Yen, N.D.: Solution methods for pseudomonotone variational inequalities. J. Optim. Theory Appl. 138, 253–273 (2008)
Tang, Y., Gibali, A.: New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algorithms 83, 305–331 (2020)
Tran, D.Q., Dung, L.M., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Wang, S.H., Liu, X.M., An, Y.S.: A new iterative algorithm for generalized split equilibrium problem in Hilbert spaces. Nonlinear Funct. Anal. Appl. 22(4), 911–924 (2017)
Xu, H.K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003)
Yang, Q.: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 302, 166–179 (2005)
Yao, Y., Yao, Z., Abdou, A.N., Cho, Y.J.: Self-adaptive algorithms for proximal split feasibility problems and strong convergence analysis. Fixed Point Theory Appl. 2015, 205 (2015)
Zhang, W., Han, D., Li, Z.: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 25, 115001 (2009)
Acknowledgements
The authors are grateful to the editor and anonymous referees for their valuable suggestions and constructive comments which have improved this paper. Yusuf I. Suleiman is grateful to King Mongkut’s University of Technology Thonburi (KMUTT), Bangkok 10140, Thailand, for providing state-of-the-art research facilities to carry out this research work during his bench work in KMUTT. The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005.
Funding
The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005.
Author information
Contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Suleiman, Y.I., Kumam, P., Rehman, H.u. et al. A new extragradient algorithm with adaptive step-size for solving split equilibrium problems. J Inequal Appl 2021, 136 (2021). https://doi.org/10.1186/s13660-021-02668-x