
A new extragradient algorithm with adaptive step-size for solving split equilibrium problems

Abstract

He (J. Inequal. Appl. 2012:Article ID 162, 2012) introduced the proximal point CQ algorithm (PPCQ) for solving the split equilibrium problem (SEP). However, the PPCQ converges only weakly to a solution of the SEP and is restricted to monotone bifunctions. In addition, the step-size used in the PPCQ is a fixed constant μ in the interval \((0, \frac{1}{ \| A \|^{2} } )\). This often leads to excessive numerical computation in each iteration, which may limit the applicability of the PPCQ. To overcome these intrinsic drawbacks, we propose a robust step-size \(\{ \mu _{n} \}_{n=1}^{\infty }\) that does not require computation of \(\| A \|\) and apply an adaptive step-size rule to \(\{ \mu _{n} \}_{n=1}^{\infty }\) so that it adjusts itself in accordance with the movement of the associated components of the algorithm in each iteration. We then introduce a self-adaptive extragradient-CQ algorithm (SECQ) for solving the SEP and prove that the proposed SECQ converges strongly to a solution of the SEP for the more general class of pseudomonotone equilibrium bifunctions. Finally, we present a preliminary numerical test demonstrating that our SECQ outperforms the PPCQ.

Introduction

The equilibrium problem (EP) associated with a bifunction \(f : C \times C \to \mathbb{R}\) and a nonempty subset C of a real Hilbert space \(\mathbb{H}\) consists of finding a vector \(x^{*} \in C\) such that

$$ f \bigl(x^{*},y \bigr) \geq 0, \quad \forall y \in C. $$
(1)

It is well known that the mathematical basis for the EP pre-dates the work of Ky Fan [10]. However, owing to his seminal contributions to the subject, the EP is often called the Ky Fan inequality. The EP is a powerful tool that unifies a number of useful and elegant nonlinear problems. In recent years, the EP has become one of the major nonlinear frameworks, providing significant success in modeling several real-world problems (see, e.g., [1, 2, 8, 9, 13, 14, 18, 23, 24]).

The set of solutions of EP is denoted by \(\operatorname{SOL}(f, C)\). To solve the EP, it is common to use the proximal point method proposed by Combettes and Hirstoaga [8]: given \(x_{n}\in C\), as a current iterate, the next iterate \(x_{n+1}\) solves the following problem:

$$ \text{find }x\in C \text{ such that}\quad f(x,y)+ \frac{1}{r_{n}} \langle y-x, x-x_{n} \rangle \geq 0, \quad \forall y \in C, \{ r_{n} \}_{n=1}^{ \infty }\subset (0,\infty ). $$
(2)

In 2008, Tran et al. [23] (see also Dang [9]) observed that a large number of important real-world problems can be reformulated as EPs with pseudomonotone bifunctions. A well-known example is the Nash–Cournot oligopolistic electricity market model. Unfortunately, the traditional proximal point method (2) does not converge when f is pseudomonotone (see, e.g., [21, Example 2.1]). Hence, Tran et al. [23] introduced the following proximal-extragradient algorithm for solving (1) when f is pseudomonotone.

$$ \textstyle\begin{cases} \text{select arbitrary}\quad x_{1}\in C, \\ y_{n}= \operatorname{argmin}_{y\in C} \{ f(x_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \}, \\ x_{n+1}= \operatorname{argmin}_{y\in C} \{ f(y_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \}, \quad n \in \mathbb{N}. \end{cases} $$
(3)

It is worth mentioning that, unlike the proximal point method (2), the extragradient algorithm (3) falls within the scope of the standard MATLAB Optimization Toolbox: to implement algorithm (3), one only needs to solve a pair of strongly convex programs in each iteration.
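Since each subproblem in (3) is strongly convex, it is inexpensive to solve; for the special bifunction \(f(x,y)= \langle F(x), y-x \rangle \), each argmin even reduces to a metric projection, \(\operatorname{argmin}_{y\in C} \{ \langle F(x), y-x \rangle + \frac{1}{2\rho } \Vert y-x \Vert ^{2} \} = \operatorname{P}_{C}(x-\rho F(x))\). The following Python sketch (our own illustration; the operator F, the box C, and all parameter values are assumptions, not taken from [23]) runs iteration (3) in this special case:

```python
import numpy as np

def project_box(x, lo, hi):
    # Metric projection onto the box C = [lo, hi]^m (componentwise clipping).
    return np.clip(x, lo, hi)

def extragradient_step(x, F, rho, lo, hi):
    # One pass of iteration (3) for f(x, y) = <F(x), y - x>, where each
    # argmin reduces to a projection P_C(x - rho * F(.)).
    y = project_box(x - rho * F(x), lo, hi)       # extrapolation step
    return project_box(x - rho * F(y), lo, hi)    # correction step

# Monotone (hence pseudomonotone) affine operator F(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

x = np.array([5.0, -5.0])
for _ in range(200):
    x = extragradient_step(x, F, rho=0.2, lo=-10.0, hi=10.0)
print(x)  # approximates the EP solution, where M x + q = 0
```

With \(\rho = 0.2 < 1/\Vert M \Vert \), the iterates contract linearly toward the interior solution \(x^{*}=(\frac{1}{3},\frac{1}{3})\) of this toy problem.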

On the other hand, the study of the classical split feasibility problem (SFP) was pioneered by Censor and Elfving [5]. It provides effective tools for establishing the existence of solutions of constrained and inverse problems arising in optimization, engineering, the medical sciences, and most notably image reconstruction, signal processing, and phase retrieval (see, e.g., [3, 4, 6]). The SFP is to find a vector

$$ x^{*}\in C\quad \text{such that}\quad A \bigl(x^{*} \bigr)\in Q, $$
(4)

where C and Q are given nonempty, closed, and convex subsets of real Hilbert spaces H1 and H2, respectively, and \(A: \operatorname{H}_{1}\to \operatorname{H}_{2}\) is a bounded linear operator. Byrne [3] imposed the restrictive condition \(\{\gamma _{n}\}_{n=1}^{\infty }\subset ( 0, \frac{2}{ \Vert A \Vert ^{2}} )\) on the CQ-method for solving (4):

$$ x_{1}\in C, \quad\quad x_{n+1} := \operatorname{P}_{C} \bigl( x_{n} - \gamma _{n} A^{*}(I- \operatorname{P}_{Q}) Ax_{n} \bigr), \quad n\in \mathbb{N}, $$
(5)

where \(\operatorname{P}_{C}\) and \(\operatorname{P}_{Q}\) stand for the metric projections onto C and Q, respectively, and \(A^{*}\) denotes the adjoint operator of A from H2 to H1. The concept of a self-adaptive step-size for solving (4) was introduced by Yang [26] and developed by Lopez et al. [15] in order to dispense with the restrictive condition \(\{\gamma _{n}\}_{n=1}^{\infty }\subset ( 0, \frac{2}{ \Vert A \Vert ^{2}} )\). In general, self-adaptive algorithms operate under the assumption that future inputs are uncertain, so they employ step-sizes that continuously monitor themselves, gather and analyze data, and adapt when unexpected changes occur in their components. Research has shown that an ever-growing number of algorithms with adaptive step-sizes for solving nonlinear problems are faster and more robust to failure (see, e.g., [11, 19, 20, 22, 27, 28]). The SFP is a particular case of a more general problem, called the split equilibrium problem (SEP):

$$ \textstyle\begin{cases} \text{find } x^{*}\in C \text{ such that}\quad f(x^{*},y)\geq 0,\quad \forall y \in C\quad \text{and} \\ A(x^{*})=u^{*}\in Q\quad \text{solves}\quad g (u^{*}, v )\geq 0,\quad \forall v\in Q. \end{cases} $$
(6)

Here, \(g : Q \times Q \to \mathbb{R}\) stands for another bifunction on H2. The SEP was introduced by Moudafi [17] in 2011. Perhaps due to its relevance, the problem was revisited a year later by He [12], who proposed the following proximal point-CQ algorithm (PPCQ) for solving SEP (6):

$$ \textstyle\begin{cases} \text{Select arbitrary:} \quad x_{1}\in C,\quad\quad \{\rho _{n} \}_{n=1}^{\infty } \subset (0, \infty ),\quad \text{and}\quad \mu \in (0, \frac{1}{ \Vert A \Vert ^{2}}), \\ f(y_{n},y)+\frac{1}{\rho _{n}} \langle y-y_{n}, y_{n}-x_{n} \rangle \geq 0, \quad \forall y \in C, \\ g (v_{n},v)+\frac{1}{\rho _{n}} \langle v-v_{n}, v_{n}-Ay_{n} \rangle \geq 0, \quad \forall v \in Q, \\ x_{n+1}= P_{C} ( y_{n}-\mu A^{*}(Ay_{n}-v_{n} ) ), \quad n\in \mathbb{N}. \end{cases} $$
(7)

It is worth noting that, as a prototype of the proximal point method, the PPCQ may not converge when f and g are pseudomonotone. In addition, the PPCQ converges only weakly to a solution of (6) when the problem is consistent, and the step-size \(\mu \in (0, \frac{1}{ \Vert A \Vert ^{2}} )\) depends on \(\Vert A \Vert \), which, in turn, may lead to excessive numerical computation that affects the efficiency of the PPCQ. The question now becomes: is it possible to develop an extragradient algorithm with an adaptive step-size that converges strongly to a solution of (6) when f and g are pseudomonotone? In answering this question, we present a self-adaptive extragradient-CQ algorithm (SECQ) for solving (6) and prove that the sequence generated by the proposed SECQ converges strongly to a solution of (6) when f and g are pseudomonotone. A numerical example is also given to demonstrate the effectiveness of our iterative scheme.
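For concreteness, the CQ iteration (5) discussed above can be run on a toy SFP. The sets, the matrix A, and the fixed step \(\gamma = 1/\Vert A \Vert ^{2} \in (0, 2/\Vert A \Vert ^{2})\) in the following Python sketch are our own illustrative choices, not taken from the references:

```python
import numpy as np

# Toy SFP: find x in C = [0, 1]^3 with A x in Q = [2, 3]^2.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

def P_C(x):  # metric projection onto the box C
    return np.clip(x, 0.0, 1.0)

def P_Q(y):  # metric projection onto the box Q
    return np.clip(y, 2.0, 3.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # fixed step in (0, 2 / ||A||^2)
x = np.zeros(3)
for _ in range(500):
    x = P_C(x - gamma * A.T @ (A @ x - P_Q(A @ x)))   # iteration (5)

residual = float(np.linalg.norm(A @ x - P_Q(A @ x)))
print(x, residual)  # residual -> 0 since x = (1, 1, 1) solves this SFP
```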

Preliminaries

Let \(\mathbb{H}\) be a real Hilbert space whose inner product and norm are denoted by \({\langle \cdot , \cdot \rangle} \) and \({ \Vert \cdot \Vert }\), respectively. Let \(C\subseteq \mathbb{H}\) be a nonempty, closed, and convex set, and denote by the symbols \(\rightharpoonup \) and \(\to \) weak and strong convergence of a sequence \(\{x_{n}\}_{n=1}^{\infty }\), respectively.

Definition 2.1

A bifunction \(f:C\times C \to \mathbb{R}\) is said to be

  1. (i)

    monotone on C, if

    $$ f(x,y) + f(y,x) \leq 0, \quad \forall x,y\in C; $$
  2. (ii)

    pseudomonotone on C with respect to \(x\in C\), if

    $$ f(x, y)\geq 0 \quad \Rightarrow \quad f(y, x )\leq 0, \quad \forall y\in C; $$
  3. (iii)

    pseudomonotone on C with respect to \(\emptyset \neq \Omega \subset C\), if \(\forall x^{*}\in \Omega \),

    $$ f \bigl(x^{*}, y \bigr)\geq 0 \quad \Rightarrow \quad f \bigl(y, x^{*} \bigr)\leq 0, \quad \forall y \in C; $$
  4. (iv)

    Lipschitz-type continuous, if there are two positive constants \(L_{1}\), \(L_{2}\) such that

    $$ f(x,y)+ f(y,z) \geq f(x,z)-L_{1} \Vert x-y \Vert ^{2} - L_{2} \Vert y-z \Vert ^{2}, \quad \forall x,y,z\in C; $$
  5. (v)

    jointly weakly continuous on \(C \times C\) in the sense that, for any \(x,y\in C\) and any sequences \(\{x_{n} \}_{n=1}^{\infty }, \{y_{n} \}_{n=1}^{\infty }\subset C\) converging weakly to x and y, respectively,

    $$ \lim_{n \to \infty } f(x_{n}, y_{n}) = f(x,y). $$

The following conditions will be used in the sequel.

Assumption A

\((A1)\):

f is pseudomonotone with respect to \(\operatorname{SOL}(f, C)\);

\((A2)\):

f is jointly weakly and Lipschitz-type continuous on C with constants \(L_{1}\) and \(L_{2}\);

\((A3)\):

\(f(x,\cdot )\) is convex and subdifferentiable on C;

\((A4)\):

\(\Omega = \{ x\in \operatorname{SOL}(f, C) \text{ such that } A(x)\in \operatorname{SOL}(g, Q ) \}\neq \emptyset \).

Lemma 2.2

([2])

If the bifunction f satisfies conditions \((A1)\)–\((A4)\), then \(\operatorname{SOL}(f, C)\) is weakly closed and convex.

Recall that a metric projection of \(\mathbb{H}\) onto C is the mapping \(\operatorname{P}_{C}: \mathbb{H} \to C\) which assigns to each \(x\in \mathbb{H}\) the (nearest) unique point \(\operatorname{P}_{C}(x)\) in C satisfying

$$ \bigl\Vert x-\operatorname{P}_{C}(x) \bigr\Vert = \min \bigl\{ \Vert x-y \Vert : y\in C \bigr\} . $$

Lemma 2.3

Given \(u\in \mathbb{H}\) and \(z\in C\). Then

$$ z=\operatorname{P}_{C}(u) \quad \iff \quad \langle u-z, z-y \rangle \geq 0, \quad \forall y\in C. $$
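For a box, the metric projection is componentwise clipping, and the variational characterization in Lemma 2.3 can be checked numerically. A minimal Python sketch (our own illustration; the box and the sampling are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0

def P_C(x):
    # Metric projection onto the box C = [lo, hi]^5 (componentwise clipping).
    return np.clip(x, lo, hi)

u = 3.0 * rng.normal(size=5)   # a point typically outside C
z = P_C(u)

# Lemma 2.3: z = P_C(u)  iff  <u - z, z - y> >= 0 for every y in C.
samples = rng.uniform(lo, hi, size=(1000, 5))    # random points of C
min_val = min(float(np.dot(u - z, z - y)) for y in samples)
print(min_val)  # nonnegative, as the lemma predicts
```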

Lemma 2.4

([16])

Let \(\{a_{n}\}_{n=1}^{\infty }\) be a sequence of real numbers for which there exists a subsequence \(\{{n_{i}}\}\) of \(\{n\}\) such that \(a_{n_{i}} < a_{n_{i}+1}\) for all \(i\geq 0\). Then there exists a nondecreasing sequence \(\{ m_{k}\}_{k=1}^{\infty }\subset \mathbb{N}\) such that \(m_{k} \to \infty \) and the following properties are satisfied by all (sufficiently large) numbers \(k\in \mathbb{N}\):

$$ a_{m_{k}} \leq a_{m_{k}+1} \quad \textit{and} \quad a_{k} \leq a_{m_{k}+1}. $$

In fact, \(m_{k}\) is the largest number n in the set \(\{1,2,\ldots,k \}\) such that the condition \(a_{n} < a_{n+1} \) holds.

Lemma 2.5

([25])

Let \(\{\gamma _{n}\}_{n=1}^{\infty }\) be a sequence in \((0,1)\) and \(\{\delta _{n}\}_{n=1}^{\infty }\) be in \(\mathbb{R}\) satisfying \(\sum_{n=1}^{\infty } \gamma _{n}= \infty \) and \(\limsup_{n \to \infty } \delta _{n} \leq 0\) or \(\sum_{n=1}^{\infty } \vert \gamma _{n} \delta _{n} \vert < \infty \). If \(\{a_{n}\}_{n=1}^{\infty }\) is a sequence of nonnegative real numbers such that \(a_{n+1} \leq (1- \gamma _{n})a_{n} + \gamma _{n} \delta _{n}\), \(\forall n\geq 0\), then \(\lim_{n \to \infty } a_{n} =0\).
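A quick numerical illustration of Lemma 2.5 (the parameters are our own toy choice): with \(\gamma _{n}=\delta _{n}=1/(n+1)\) we have \(\sum_{n=1}^{\infty }\gamma _{n}=\infty \) and \(\limsup_{n\to \infty }\delta _{n}\leq 0\), so the lemma predicts \(a_{n}\to 0\) for any starting value:

```python
# Iterate a_{n+1} = (1 - gamma_n) a_n + gamma_n * delta_n with
# gamma_n = delta_n = 1 / (n + 1); Lemma 2.5 guarantees a_n -> 0.
a = 10.0
for n in range(1, 200000):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    a = (1.0 - gamma) * a + gamma * delta
print(a)  # close to 0 (roughly (10 + log n) / n)
```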

Main results

The following is our main algorithm for solving (6).

Algorithm 3.1

(Self-adaptive proximal-extragradient algorithm for SEP)

  • Initialization: Given initial choice \(x_{1}\) and u in C. Pick parameters \(\{ \delta _{n} \}_{n=1}^{\infty }\subset [ \underline{\delta }, \bar{\delta }]\subset (0,1)\), \(\{ \sigma _{n} \}_{n=1}^{\infty }\subset (0,2)\), \(\{\rho _{n} \}_{n=1}^{\infty }\subset [ \underline{\rho }, \bar{\rho }]\), and \(\{r_{n} \}_{n=1}^{\infty }\subset [ \underline{r}, \bar{r}]\) such that

    $$ [ \underline{\rho }, \bar{\rho }], [ \underline{r}, \bar{r}] \subset \biggl(0, \min \biggl\{ \frac{1}{2L_{1}} , \frac{1}{2L_{2}} \biggr\} \biggr),\quad\quad \lim_{n\rightarrow \infty }\delta _{n} = 0,\quad \text{and}\quad \sum _{n=1}^{\infty } \delta _{n}=\infty . $$
  • Iterative steps: Assume that \(x_{n}\) is known for \(n\in \mathbb{N}\), then compute the update \(x_{n+1}\) according to the following rule.

    • Step 1: Compute:

      $$ y_{n}= \mathop {\operatorname{argmin}} _{y \in C} \biggl\{ f(x_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \biggr\} \quad \text{and}\quad z_{n}= \mathop { \operatorname{argmin}} _{y \in C} \biggl\{ f(y_{n},y)+ \frac{1}{2\rho _{n}} \Vert y-x_{n} \Vert ^{2} \biggr\} . $$
    • Step 2: Set \(\hat{v_{n}}:= \operatorname{P}_{Q} A(z_{n})\).

    • Step 3: Compute:

      $$ v_{n}= \mathop {\operatorname{argmin}} _{v \in Q} \biggl\{ g( \hat{v_{n}},v)+ \frac{1}{2r_{n}} \Vert v-\hat{v_{n}} \Vert ^{2} \biggr\} \quad \text{and} \quad u_{n} = \mathop { \operatorname{argmin}} _{v \in Q} \biggl\{ g(v_{n},v)+ \frac{1}{2r_{n}} \Vert v-\hat{v_{n}} \Vert ^{2} \biggr\} . $$
    • Step 4: Set \(F(z_{n}):= \frac{1}{2} \Vert Az_{n} - u_{n} \Vert ^{2}\) and \(G(z_{n}):= A^{*}(Az_{n} - u_{n})\).

    • Step 5: Compute

      $$\begin{aligned}& x_{n+1}=\delta _{n}u + (1-\delta _{n}) \operatorname{P}_{C} \bigl( z_{n}- \mu _{n} G(z_{n}) \bigr), \quad \text{where} \\& \mu _{n}= \textstyle\begin{cases} \sigma _{n} \frac{F(z_{n})}{ \Vert G(z_{n}) \Vert ^{2}}, & \text{if } G(z_{n}) \neq 0; \\ 0, & \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
  • Stopping criterion: If \(x_{n+1} = x_{n}\), then \(x_{n}\) is a solution of SEP (6) and the iterative process stops, otherwise, put \(n := n + 1\) and go back to Step 1.
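The step-size in Steps 4–5 uses only the residual \(Az_{n}-u_{n}\) and never \(\Vert A \Vert \). A minimal Python sketch of these two steps (the matrix and vectors are illustrative data of ours; in the algorithm \(u_{n}\) comes from Step 3):

```python
import numpy as np

def adaptive_step(A, Az_n, u_n, sigma_n):
    # Steps 4-5 of Algorithm 3.1: F(z_n) = ||A z_n - u_n||^2 / 2,
    # G(z_n) = A^T (A z_n - u_n), mu_n = sigma_n * F(z_n) / ||G(z_n)||^2,
    # with mu_n = 0 when G(z_n) = 0.
    r = Az_n - u_n
    F = 0.5 * float(np.dot(r, r))
    G = A.T @ r
    gnorm2 = float(np.dot(G, G))
    mu = sigma_n * F / gnorm2 if gnorm2 > 0.0 else 0.0
    return mu, G

A = np.array([[1.0, 2.0], [0.0, 1.0]])
mu, G = adaptive_step(A, np.array([3.0, 1.0]), np.array([1.0, 1.0]), sigma_n=1.5)
mu0, _ = adaptive_step(A, np.array([1.0, 1.0]), np.array([1.0, 1.0]), sigma_n=1.5)
print(mu, mu0)  # 0.15 0.0
```

Note that \(\Vert G(z_{n})\Vert ^{2}\leq 2\Vert A \Vert ^{2}F(z_{n})\), so \(\mu _{n}\geq \sigma _{n}/(2\Vert A \Vert ^{2})\) whenever \(G(z_{n})\neq 0\), which is consistent with the lower bound in condition \((C1)\).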

Lemma 3.2

([1], Lemma 3.1)

Let H1 and H2 be real Hilbert spaces. Let C and Q be nonempty, closed, and convex subsets of H1 and H2, respectively. Let \(A: \operatorname{H}_{1}\to \operatorname{H}_{2}\) be a bounded linear operator with adjoint \(A^{*}\). Assume that \(f: C \times C \to \mathbb{R}\) satisfies conditions \((A1)\)–\((A4)\). Let \(\{x_{n}\}_{n=1}^{\infty }\) be a sequence generated by Algorithm 3.1. Then, for all \(x^{*}\in \operatorname{SOL}(f, C)\), the following statements hold:

\((a)\):

\(\rho _{n} ( f(x_{n}, y)-f( x_{n}, y_{n} ) )\geq \langle y_{n}-x_{n}, y_{n}-y \rangle \) for all \(y\in C\);

\((b)\):

\(\Vert z_{n}- x^{*} \Vert ^{2} \leq \Vert x_{n}-x^{*} \Vert ^{2} - (1-2 \rho _{n} L_{1}) \Vert x_{n}-y_{n} \Vert ^{2}- (1-2\rho _{n} L_{2}) \Vert y_{n}- z_{n} \Vert ^{2}\).

The following theorem gives conditions that guarantee strong convergence of Algorithm 3.1.

Theorem 3.3

Let H1 and H2 be real Hilbert spaces. Let C and Q be nonempty, closed, and convex subsets of H1 and H2, respectively. Let \(A: \operatorname{H}_{1}\to \operatorname{H}_{2}\) be a bounded linear operator with adjoint \(A^{*}\). Assume that \(f: C \times C \to \mathbb{R}\) and \(g: Q \times Q \to \mathbb{R}\) satisfy conditions \((A1)\)–\((A4)\). Let \(\{ x_{n} \}_{n=1}^{\infty }\) be any sequence generated by Algorithm 3.1. Then \(x_{n} \to \operatorname{P}_{\Omega } (u)\) under the following conditions:

\((C1)\):

\(0< t_{1} \leq \mu _{n} \leq t_{2}\) for some \(t_{1}, t_{2} \in \mathbb{R}\) and \(\forall n \in \Gamma = \{ n\geq 1 : G(z_{n}) \neq 0 \}\);

\((C2)\):

\(\liminf_{n \to \infty} (2 - \sigma_{n}) > 0\);

\((C3)\):

\(\langle G(z_{n}), z_{n}-x^{*} \rangle \geq F ( z_{n} )\) for all \(x^{*} \in \Omega \).

Proof

Let \(x^{*}= P_{\Omega }(u)\) and \(\pi _{n}= \operatorname{P}_{C} (z_{n} - \mu _{n}G(z_{n}) )\). Then, by using \((C3)\), we have

$$ \begin{aligned} \bigl\Vert \pi _{n}-x^{*} \bigr\Vert ^{2} ={}& \bigl\Vert \operatorname{P}_{C} \bigl(z_{n} - \mu _{n}G(z_{n}) \bigr) - \operatorname{P}_{C} \bigl(x^{*} \bigr) \bigr\Vert ^{2} \\ \leq{}& \bigl\Vert z_{n}- \mu _{n}G(z_{n}) -x^{*} \bigr\Vert ^{2}= \bigl\Vert z_{n} -x^{*}- \mu _{n}G(z_{n}) \bigr\Vert ^{2} \\ \leq{}& \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} + \bigl\Vert \mu _{n}G(z_{n}) \bigr\Vert ^{2}- 2\mu _{n} \bigl\langle G(z_{n}), z_{n}-x^{*} \bigr\rangle \\ \leq{}& \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} + \mu _{n}^{2} \bigl\Vert G(z_{n}) \bigr\Vert ^{2}- 2 \mu _{n} F(z_{n}) \\ ={}& \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} + \sigma _{n}^{2} \frac{[F( z_{n} )]^{2} }{ \Vert G(z_{n}) \Vert ^{4}} \bigl\Vert G(z_{n}) \bigr\Vert ^{2}- 2 \sigma _{n} \frac{F( z_{n} ) }{ \Vert G(z_{n}) \Vert ^{2}} F(z_{n}) \\ ={}& \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} + \sigma _{n}^{2} \frac{[F( z_{n} )]^{2} }{ \Vert G(z_{n}) \Vert ^{2}} - 2 \sigma _{n} \frac{[F( z_{n} )]^{2} }{ \Vert G(z_{n}) \Vert ^{2}}. \end{aligned} $$

Thus

$$ \bigl\Vert \pi _{n}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} - \sigma _{n}(2- \sigma _{n}) \frac{[F( z_{n} )]^{2} }{ \Vert G(z_{n}) \Vert ^{2}}. $$
(8)

This implies

$$ \bigl\Vert \pi _{n}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2}. $$
(9)

Consequently, by Lemma 3.2 and (9), we get

$$ \bigl\Vert \pi _{n}- x^{*} \bigr\Vert ^{2} \leq \bigl\Vert z_{n} -x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}. $$
(10)

By Algorithm 3.1 and (10), we have

$$ \begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ={}& \bigl\Vert \delta _{n}u + (1- \delta _{n})\pi _{n} -x^{*} \bigr\Vert \\ ={}& \bigl\Vert \delta _{n} \bigl(u -x^{*} \bigr) + (1- \delta _{n}) \bigl(\pi _{n}-x^{*} \bigr) \bigr\Vert \\ \leq{}& \delta _{n} \bigl\Vert u -x^{*} \bigr\Vert + (1- \delta _{n}) \bigl\Vert \pi _{n}-x^{*} \bigr\Vert \\ \leq{}& \delta _{n} \bigl\Vert u -x^{*} \bigr\Vert + (1- \delta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert \\ \leq{}& \max \bigl\{ \bigl\Vert u -x^{*} \bigr\Vert , \bigl\Vert x_{n}-x^{*} \bigr\Vert \bigr\} \\ \vdots{}& \\ \leq{}& \max \bigl\{ \bigl\Vert u -x^{*} \bigr\Vert , \bigl\Vert x_{1}-x^{*} \bigr\Vert \bigr\} . \end{aligned} $$

Hence \(\{ x_{n} \}_{n=1}^{\infty }\) is bounded. Then, from (10), we deduce that \(\{ \pi _{n} \}_{n=1}^{\infty }\) and \(\{ z_{n} \}_{n=1}^{\infty }\) are bounded. By using Algorithm 3.1 and subdifferential inequality

$$ \Vert x+y \Vert ^{2} \leq \Vert x \Vert ^{2} + 2 \langle y, x+y \rangle , \quad \forall x,y \in \mathbb{H}, $$

we obtain

$$ \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \leq (1-\delta _{n}) \bigl\Vert \pi _{n} -x^{*} \bigr\Vert ^{2} + 2\delta _{n} \bigl\langle u-x^{*}, x_{n+1}-x^{*} \bigr\rangle . $$
(11)

Thus, by Lemma 3.2, (10), and (11), we have

$$ \begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} &\leq (1-\delta _{n}) \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}- (1-\delta _{n}) \bigl( (1-2\rho _{n} L_{1}) \Vert x_{n}-y_{n} \Vert ^{2} \\ &\quad{} + (1-2\rho _{n} L_{2}) \Vert y_{n}- z_{n} \Vert ^{2} \bigr) + 2\delta _{n} \bigl\langle u-x^{*}, x_{n+1}-x^{*} \bigr\rangle . \end{aligned} $$
(12)

Case 1. Suppose that there exists \(n_{0}\in \mathbb{N}\) such that \(\{ \Vert x_{n}-x^{*} \Vert \}_{n=1}^{\infty }\) is decreasing for \(n \geq n_{0}\). Then the limit of \(\{ \Vert x_{n}-x^{*} \Vert \}_{n=1}^{\infty }\) exists. Consequently,

$$ \lim_{n \to \infty } \bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \bigr)=0. $$
(13)

Moreover, by using \(0 < \underline{\rho } \leq \rho _{n} \leq \bar{\rho } < \min \{ \frac{1}{2L_{1}} , \frac{1}{2L_{2}} \}\) and (12), we have

$$ \begin{aligned} &0 \leq (1-\delta _{n}) \bigl( (1-2 \bar{\rho } L_{1}) \Vert x_{n}-y_{n} \Vert ^{2}+ (1-2\bar{\rho } L_{2}) \Vert y_{n}- z_{n} \Vert ^{2} \bigr) \\ &\quad\quad{} + \delta _{n} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}- 2\delta _{n} \bigl\langle u-x^{*}, x_{n+1}-x^{*} \bigr\rangle \\ &\quad{} \leq \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} - \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2}. \end{aligned} $$
(14)

Since \(\lim_{n \to \infty } \delta _{n} = 0\), \((1-2\bar{\rho } L_{1}) >0\), and \((1-2\bar{\rho } L_{2}) >0\), then it follows from (14) and (13) that

$$ \lim_{n \to \infty } \Vert x_{n}-y_{n} \Vert =0\quad \text{and}\quad \lim_{n \to \infty } \Vert y_{n}- z_{n} \Vert =0. $$
(15)

This implies that

$$ \lim_{n \to \infty } \Vert z_{n}-x_{n} \Vert =0. $$
(16)

By combining (10) and (11), we obtain

$$ \begin{aligned} 0 \leq{}& \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}- \bigl\Vert \pi _{n}-x^{*} \bigr\Vert ^{2} - \delta _{n} \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} - 2\delta _{n} \bigl\langle u-x^{*}, x_{n+1}-x^{*} \bigr\rangle \\ \leq {}& \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2} - \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2}. \end{aligned} $$
(17)

Thus, from (13) and (17), we get

$$ \lim_{n \to \infty } \bigl( \bigl\Vert x_{n} - x^{*} \bigr\Vert ^{2}- \bigl\Vert \pi _{n}-x^{*} \bigr\Vert ^{2} \bigr)=0. $$
(18)

Owing to \(( C1)\), (8), and (10), we have

$$ 0\leq t_{1}(2- \sigma _{n} ) F( z_{n} )\leq \bigl\Vert x_{n} -x^{*} \bigr\Vert ^{2} - \bigl\Vert \pi _{n}-x^{*} \bigr\Vert ^{2}. $$
(19)

Clearly, from (19), (18), and \(( C2)\), we obtain

$$ \lim_{n \to \infty } F( z_{n})=0.\quad \text{Hence}\quad \lim_{n \to \infty } \Vert Az_{n} - u_{n} \Vert =0. $$
(20)

Since \(\Vert Az_{n} -Ax^{*} \Vert \leq \Vert Az_{n} -u_{n} \Vert + \Vert u_{n}-Ax^{*} \Vert \), then it follows from (20) that

$$ \lim_{n \to \infty } \bigl( \bigl\Vert Az_{n}- Ax^{*} \bigr\Vert ^{2}- \bigl\Vert u_{n}-Ax^{*} \bigr\Vert ^{2} \bigr)=0. $$
(21)

By Lemma 3.2, we have

$$ \bigl\Vert u_{n}- Ax^{*} \bigr\Vert ^{2} \leq \bigl\Vert \hat{v_{n}}-Ax^{*} \bigr\Vert ^{2} - (1-2r_{n} L_{1}) \Vert \hat{v_{n}}-v_{n} \Vert ^{2}- (1-2r_{n} L_{2}) \Vert v_{n}- u_{n} \Vert ^{2}. $$
(22)

Similarly, from \(0 < \underline{r} \leq r_{n} \leq \bar{r} < \min \{ \frac{1}{2L_{1}} , \frac{1}{2L_{2}} \}\) and (22), we get

$$ (1-2\bar{r} L_{1}) \Vert \hat{v_{n}}-v_{n} \Vert ^{2}+ (1-2 \bar{r} L_{2}) \Vert v_{n}- u_{n} \Vert ^{2} \leq \bigl\Vert Az_{n}-Ax^{*} \bigr\Vert ^{2} - \bigl\Vert u_{n}- Ax^{*} \bigr\Vert ^{2}. $$
(23)

Moreover, since \((1-2\bar{r} L_{1}) >0\) and \((1-2\bar{r} L_{2}) >0\), then it follows from (23) and (21) that

$$ \lim_{n \to \infty } \Vert \hat{v_{n}}-v_{n} \Vert =0\quad \text{and}\quad \lim_{n \to \infty } \Vert v_{n}- u_{n} \Vert = 0. $$
(24)

Now, let \(\zeta _{n}= z_{n} - \mu _{n}G(z_{n}) \). Then

$$ \Vert \zeta _{n}-z_{n} \Vert = \mu _{n} \bigl\Vert G( z_{n}) \bigr\Vert \leq \mu _{n} \bigl\Vert A^{*} \bigr\Vert \bigl\Vert A(z_{n})-u_{n} \bigr\Vert . $$
(25)

Again, by using \((C1)\), (20), and (25), we get

$$ \lim_{n \to \infty } \Vert \zeta _{n}-z_{n} \Vert =0. $$
(26)

Since \(\Vert \pi _{n} - z_{n} \Vert ^{2}\leq \Vert \zeta _{n}-z_{n} \Vert ^{2}\), then it follows from (26) that

$$ \lim_{n \to \infty } \Vert \pi _{n}- z_{n} \Vert =0. $$
(27)

By the triangle inequality in conjunction with (16) and (27), we obtain

$$ \lim_{n \to \infty } \Vert \pi _{n}-x_{n} \Vert =0. $$
(28)

It is clear that

$$ \Vert x_{n+1}-\pi _{n} \Vert \leq \delta _{n} \Vert u - \pi _{n} \Vert . $$
(29)

Since \(\delta _{n} \to 0\) as \(n \to \infty \) and \(\{ \Vert u - \pi _{n} \Vert \}_{n=1}^{\infty }\) is bounded, then

$$ \lim_{n \to \infty } \Vert x_{n+1}-\pi _{n} \Vert =0. $$
(30)

Consequently, from (30) and (28), we have

$$ \lim_{n \to \infty } \Vert x_{n+1}-x_{n} \Vert =0. $$
(31)

Furthermore, since H1 is reflexive and \(\{x_{n}\}_{n=1}^{\infty }\subset \operatorname{H}_{1}\) is bounded, then there exists a subsequence \(\{x_{n_{k}} \}_{k=1}^{\infty }\) of \(\{x_{n}\}_{n=1}^{\infty }\) such that

$$\begin{aligned}& x_{n_{k}}\rightharpoonup e^{*}\in \operatorname{H}_{1}\quad \text{as } k \rightarrow \infty . \quad \text{Therefore, assume without loss of generality that} \\ \end{aligned}$$
(32)
$$\begin{aligned}& \limsup_{n\rightarrow \infty } \bigl\langle u-x^{*},x_{n}-x^{*} \bigr\rangle = \lim_{k\rightarrow \infty } \bigl\langle u-x^{*}, x_{n_{k}}-x^{*} \bigr\rangle . \end{aligned}$$
(33)

Moreover, since \(\{ z_{n} \}_{n=1}^{\infty }\), \(\{ y_{n} \}_{n=1}^{\infty }\), and \(\{ \pi _{n} \}_{n=1}^{\infty }\) are bounded, then it follows from (15), (16), (28), and (32) that

$$ z_{n_{k}}\rightharpoonup e^{*},\quad\quad y_{n_{k}} \rightharpoonup e^{*},\quad \pi _{n_{k}} \rightharpoonup e^{*}\quad \text{as } k\rightarrow \infty . $$
(34)

It will now be shown that the weak limit \(e^{*}\) solves SEP (6). That is, \(e^{*}\in \Omega \). Indeed, since C and Q are closed and convex, then

$$ C\text{ and } Q \text{ are weakly closed}. $$
(35)

Also, since \(\{x_{n}\}_{n=1}^{\infty } \subset C\), then it follows from (32) and (35) that \(e^{*}\in C\). Note that A is linear and bounded. So, from (34), we obtain \(Az_{n_{k}}\rightharpoonup Ae^{*}\) as \(k\rightarrow \infty \). In view of (20) and the boundedness of \(\{u_{n}\}_{n=1}^{\infty }\), we see that

$$ u_{n_{k}}\rightharpoonup A \bigl(e^{*} \bigr) \quad \text{as } k\rightarrow \infty . $$
(36)

Likewise, since \(\{ v_{n} \}_{n=1}^{\infty }\) and \(\{ \hat{v_{n}} \}_{n=1}^{\infty }\) are bounded, then, from (36) and (24), we get

$$ v_{n_{k}} \rightharpoonup A \bigl(e^{*} \bigr), \quad\quad \hat{v_{n_{k}}}\rightharpoonup A \bigl(e^{*} \bigr) \quad \text{as } k\rightarrow \infty . $$
(37)

Clearly, since \(\{ \hat{v_{n_{k}}} \}_{k=1}^{\infty }\subset Q\), then, from (35) and (37), we deduce that \(A(e^{*})\in Q\). It remains to show that \(e^{*}\in \operatorname{SOL}(f, C)\) and \(Ae^{*}\in \operatorname{SOL}(g, Q)\). By Lemma 3.2, in particular, for all \(k\in \mathbb{N}\), we have

$$ \rho _{n_{k}} \bigl( f(x_{n_{k}}, y) - f(x_{n_{k}}, y_{n_{k}}) \bigr) \geq \langle y_{n_{k}}- x_{n_{k}}, y_{n_{k}}-y \rangle , \quad \forall y \in C. $$

This implies

$$ \frac{ \langle y_{n_{k}}- x_{n_{k}}, y_{n_{k}}-y \rangle }{\rho _{n_{k}}} \leq f(x_{n_{k}}, y) - f(x_{n_{k}}, y_{n_{k}}), \quad \forall y\in C. $$
(38)

However, since \(\rho _{n_{k}} \geq \underline{\rho } >0\), then, by applying the Cauchy–Schwarz inequality, we see that

$$ \begin{aligned} \biggl\vert \frac{ \langle y_{n_{k}}- x_{n_{k}}, y_{n_{k}}-y \rangle }{\rho _{n_{k}}} \biggr\vert &= \frac{ \vert \langle y_{n_{k}}- x_{n_{k}}, y_{n_{k}}-y \rangle \vert }{\rho _{n_{k}}} \leq \frac{ \vert \langle y_{n_{k}}- x_{n_{k}}, y_{n_{k}}-y \rangle \vert }{\underline{\rho }} \\ &\leq \frac{ \Vert y_{n_{k}}- x_{n_{k}} \Vert \Vert y_{n_{k}}-y \Vert }{\underline{\rho }}, \quad \forall y\in C. \end{aligned} $$
(39)

On the other hand, since \(\{ y_{n_{k}} \}_{k=1}^{\infty }\) is bounded and \(\lim_{k \to \infty } \Vert y_{n_{k}}-x_{n_{k}} \Vert =0\), then \(\Vert y_{n_{k}}- x_{n_{k}} \Vert \Vert y_{n_{k}}-y \Vert \to 0\) as \(k \to \infty \). Consequently,

$$ \lim_{k \to \infty } \frac{ \langle y_{n_{k}}- x_{n_{k}}, y_{n_{k}}-y \rangle }{\rho _{n_{k}}}=0. $$
(40)

Since f satisfies \((A2)\), by passing to the limit as \(k \to \infty \) in (38) in conjunction with (34), (32), and (40), we have

$$ \begin{aligned} 0 = \lim_{k \to \infty } \frac{ \langle y_{n_{k}}- x_{n_{k}}, y_{n_{k}}-y \rangle }{\rho _{n_{k}}} &\leq \lim_{k \to \infty } \bigl(f(x_{n_{k}}, y) - f(x_{n_{k}}, y_{n_{k}}) \bigr)\\&=f \bigl(e^{*},y \bigr) -f \bigl(e^{*}, e^{*} \bigr),\quad \forall y\in C.\end{aligned} $$
(41)

Thus \(e^{*}\in \operatorname{SOL}(f, C)\). Again, by Lemma 3.2, we see that

$$ r_{n_{k}} \bigl( g(\hat{v_{n_{k}}}, v) - g(\hat{v_{n_{k}}}, v_{n_{k}}) \bigr) \geq \langle v_{n_{k}}- \hat{v_{n_{k}}}, v_{n_{k}}-v \rangle , \quad \forall v\in Q. $$

Therefore

$$ \frac{ \langle v_{n_{k}}- \hat{v_{n_{k}}}, v_{n_{k}}-v \rangle }{r_{n_{k}}} \leq g(\hat{v_{n_{k}}}, v) - g( \hat{v_{n_{k}}}, v_{n_{k}}), \quad \forall v \in Q. $$
(42)

Since \(r_{n_{k}} \geq \underline{r} >0\), applying the Cauchy–Schwarz inequality, we have

$$ \begin{aligned} \biggl\vert \frac{ \langle v_{n_{k}}- \hat{v_{n_{k}}}, v_{n_{k}}-v \rangle }{r_{n_{k}}} \biggr\vert &= \frac{ \vert \langle v_{n_{k}}- \hat{v_{n_{k}}}, v_{n_{k}}-v \rangle \vert }{r_{n_{k}}} \leq \frac{ \vert \langle v_{n_{k}}- \hat{v_{n_{k}}}, v_{n_{k}}-v \rangle \vert }{\underline{r}} \\ &\leq \frac{ \Vert v_{n_{k}}- \hat{v_{n_{k}}} \Vert \Vert v_{n_{k}}-v \Vert }{\underline{r}}, \quad \forall v\in Q. \end{aligned} $$
(43)

Now, since \(\{ v_{n_{k}} \}_{k=1}^{\infty }\) is bounded and \(\lim_{k \to \infty } \Vert v_{n_{k}}- \hat{v_{n_{k}}} \Vert =0\), then \(\Vert v_{n_{k}}- \hat{v_{n_{k}}} \Vert \Vert v_{n_{k}}-v \Vert \to 0\) as \(k \to \infty \). Hence,

$$ \lim_{k \to \infty } \frac{ \langle v_{n_{k}}- \hat{v_{n_{k}}}, v_{n_{k}}-v \rangle }{r_{n_{k}}}=0. $$
(44)

Again, since g satisfies \((A2)\), by passing to the limit as \(k \to \infty \) in (42) and using (36), (37), and (44), we get

$$ 0 \leq \lim_{k \to \infty } \bigl(g(\hat{v_{n_{k}}}, v) - g( \hat{v_{n_{k}}}, v_{n_{k}}) \bigr)=g \bigl(Ae^{*},v \bigr) -g \bigl(Ae^{*}, Ae^{*} \bigr), \quad \forall v\in Q. $$
(45)

Hence \(e^{*}\in \Omega \). Since Ω is closed, convex, and \(x^{*}= \operatorname{P}_{\Omega }(u)\), then it follows from (31), (32), (33), and Lemma 2.3 that

$$ \begin{aligned} \limsup_{n\rightarrow \infty } \bigl\langle u-x^{*},x_{n+1}-x^{*} \bigr\rangle \leq{}& \limsup_{n\rightarrow \infty } \bigl\langle u-x^{*},x_{n}-x^{*} \bigr\rangle = \lim_{k\rightarrow \infty } \bigl\langle u-x^{*},x_{n_{k}}-x^{*} \bigr\rangle \\ ={}& \bigl\langle u-x^{*}, e^{*}-x^{*} \bigr\rangle \leq 0. \end{aligned} $$
(46)

Consequently, by Lemma 2.5, (11), and (46), we conclude that

$$ \lim_{n\rightarrow \infty } \bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}=0. $$

Case 2. Suppose that \(\{ \Vert x_{n}-x^{*} \Vert \}_{n=1}^{\infty }\) is not eventually decreasing, so that there exists a subsequence \(\{ \Vert x_{n_{l}}-x^{*} \Vert \}_{l=1}^{\infty }\) of \(\{ \Vert x_{n}-x^{*} \Vert \}_{n=1}^{\infty }\) satisfying

$$ \bigl\Vert x_{n_{l}}-x^{*} \bigr\Vert < \bigl\Vert x_{n_{l}+1}-x^{*} \bigr\Vert \quad \text{for all } l\in \mathbb{N}. $$

In view of Lemma 2.4, there exists a nondecreasing sequence \(\{ m_{k} \}_{k=1}^{\infty }\subset \mathbb{N}\) such that \(\lim_{k\rightarrow \infty } m_{k}= \infty \) and the following inequalities hold for all \(k\in \mathbb{N}\):

$$ \bigl\Vert x_{m_{k}}-x^{*} \bigr\Vert \leq \bigl\Vert x_{m_{k}+1}-x^{*} \bigr\Vert \quad \text{and}\quad \bigl\Vert x_{k}-x^{*} \bigr\Vert \leq \bigl\Vert x_{m_{k}+1}-x^{*} \bigr\Vert . $$
(47)

Note that by discarding the repeated terms of \(\{m_{k} \}_{k=1}^{\infty }\), but still denoting it by \(\{m_{k} \}_{k=1}^{\infty }\), we can regard \(\{x_{m_{k}} \}_{k=1}^{\infty }\) as a subsequence of \(\{x_{n} \}_{n=1}^{\infty }\). Since \(\{x_{m_{k}} \}_{k=1}^{\infty }\) is bounded, then \(\lim_{k \to \infty }( \Vert x_{m_{k}}-x^{*} \Vert - \Vert x_{m_{k}+1}-x^{*} \Vert )=0\). Then, following arguments similar to those in Case 1, we deduce that

$$\begin{aligned}& \lim_{k \to \infty } \Vert x_{m_{k}+1}- x_{m_{k}} \Vert =0\quad \text{and} \end{aligned}$$
(48)
$$\begin{aligned}& \limsup_{k\rightarrow \infty } \bigl\langle u-x^{*},x_{m_{k}+1}-x^{*} \bigr\rangle \leq 0,\quad \text{where } x^{*}= \operatorname{P}_{\Omega }(u). \end{aligned}$$
(49)

It follows from (47) and (11) that

$$ \begin{aligned} \bigl\Vert x_{m_{k}+1}-x^{*} \bigr\Vert ^{2} \leq{}& (1-\delta _{m_{k}}) \bigl\Vert x_{m_{k}}-x^{*} \bigr\Vert ^{2}+2\delta _{m_{k}} \bigl\langle u-x^{*},x_{m_{k}+1}-x^{*} \bigr\rangle \\ \leq{}& (1-\delta _{m_{k}}) \bigl\Vert x_{m_{k}+1}-x^{*} \bigr\Vert ^{2}+2\delta _{m_{k}} \bigl\langle u-x^{*},x_{m_{k}+1}-x^{*} \bigr\rangle . \end{aligned} $$
(50)

Clearly, by dividing through by \(\delta _{m_{k}}\), we get

$$ \bigl\Vert x_{m_{k}+1}-x^{*} \bigr\Vert ^{2} \leq 2 \bigl\langle u-x^{*},x_{m_{k}+1}-x^{*} \bigr\rangle . $$
(51)

Passing limit as \(k\to \infty \) in (51) using (49), we obtain

$$ \lim_{k\rightarrow \infty } \bigl\Vert x_{m_{k}+1}-x^{*} \bigr\Vert ^{2}=0. $$

Consequently, from (47), we see that

$$ \lim_{k\rightarrow \infty } \bigl\Vert x_{k}-x^{*} \bigr\Vert ^{2}=0. $$

Hence \(x_{n}\rightarrow x^{*}\in \Omega \) in both cases and this ends the proof. □

Remark 1

  1. (1)

    Theorem 3.3 reduces to the extragradient method studied by Tran et al. [23] if we set \(g\equiv 0\) and \(\operatorname{H}_{1} \equiv \operatorname{H}_{2}\).

  2. (2)

    Theorem 3.3 coincides with the work of Lopez et al. [15], whenever \(g=f \equiv 0\).

  3. (3)

    He [12] imposed the restriction \(\mu \in (0, \frac{1}{ \Vert A \Vert ^{2}} )\) on the step-size, whereas in our work this restriction is relaxed by an adaptive step-size that uses a simple initialization sequence \(\{\sigma _{n} \}_{n=1}^{\infty }\subset (0,2)\).

  4. (4)

    Moreover, in the work of He [12], a weak convergence result was obtained for solving (6) and the strong convergence follows only through the hybrid proximal point algorithm, which is not easy to implement. In this paper, we obtain a strong convergence result for solving (6) without using the hybrid scheme.

  5. (5)

    The conclusion of our Theorem 3.3 holds for pseudomonotone bifunctions, while the corresponding result by He [12] is restricted to monotone bifunctions.

Numerical results

In this section, a preliminary numerical test is presented to compare the convergence behavior of the proposed Algorithm 3.1 with that of algorithm (7).

Example 4.1

Consider the Nash–Cournot equilibrium problem studied in [20, 23], where \(f: C \times C \rightarrow \mathbb{R}\) is defined by

$$ f(x, y) = \langle Ux + V y + c, y - x \rangle , $$

where \(c \in \mathbb{R}^{m}\) and U, V are two matrices of order m such that V is symmetric positive semidefinite and \(V - U\) is negative semidefinite, with Lipschitz constants \(L_{1} = L_{2} = \frac{1}{2} \Vert U - V \Vert \). The matrices U, V are randomly generated (see Footnote 1), and the entries of c are randomly drawn from \([-1, 1]\). The constraint set \(C \subset \mathbb{R}^{m}\) is taken as follows:

$$ C := \Biggl\{ x \in \mathbb{R}^{m} : \sum _{i=1}^{m} x_{i} \geq -1, -10 \leq x_{i} \leq 10 \Biggr\} . $$
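The bifunction f and the polyhedral constraint set C above translate directly into code. The following is a minimal NumPy sketch, not the authors' MATLAB implementation; the helper names `f` and `in_C` are ours:

```python
import numpy as np

def f(x, y, U, V, c):
    """Nash-Cournot bifunction f(x, y) = <U x + V y + c, y - x>."""
    return (U @ x + V @ y + c) @ (y - x)

def in_C(x, tol=1e-9):
    """Membership test for C = {x : sum_i x_i >= -1, -10 <= x_i <= 10}."""
    return x.sum() >= -1 - tol and np.all(x >= -10 - tol) and np.all(x <= 10 + tol)

m = 5
rng = np.random.default_rng(0)
U = rng.uniform(-1, 1, (m, m))
V = rng.uniform(-1, 1, (m, m))
c = rng.uniform(-1, 1, m)

x = np.zeros(m)
y = np.ones(m)
print(f(x, y, U, V, c))          # scalar value of the bifunction at (x, y)
print(in_C(x), in_C(11 * y))     # True False (11 violates the box bounds)
```

Note that `f(x, x, ...) = 0` for every x, since the inner product is taken against \(y - x\).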

Assume that \(g:Q\times Q \to \mathbb{R}\) is defined by \(g(x, y) = x (y - x)\), \(\forall x, y \in Q=[-1, \infty )\). Suppose that \(A : \mathbb{R}^{m} \rightarrow Q\) is a linear operator defined by \(A x = \langle a, x \rangle \), \(\forall x \in \mathbb{R}^{m}\), where a is a vector in \(\mathbb{R}^{m}\) whose entries are randomly generated from \([1, m]\). Thus, \(A^{*}: [-1, \infty ) \to \mathbb{R}^{m} \) is of the form \(A^{*} y = y a\) for all \(y \in \mathbb{R}\), and \(\Vert A \Vert = \Vert a \Vert \). The starting point \(x_{1}\in C\) is randomly generated with entries in \([-10, 10]\), and we choose \(\mu = \frac{1}{2 \Vert a \Vert ^{2}}\), \(\rho _{n}= r_{n}= \frac{1}{4 L_{1}}\), \(\delta _{n}=\frac{1}{n+2}\), and \(\sigma _{n} = 2- \frac{1}{n+1}\). We define \(\mathit{TOL}_{n}:= \Vert x_{n+1}- x_{n} \Vert \) and stop the iterative process when \(\mathit{TOL}_{n} < \epsilon \), where ϵ is the predetermined error tolerance. The equivalent convex quadratic subproblems are solved with the MATLAB function fmincon; all experiments were implemented in MATLAB 7.0 on an HP Compaq 510, Core(TM)2 Duo T5870 processor with 2.00 GHz and 2 GB RAM. Table 1 shows that Algorithm 3.1 outperforms He's algorithm (7) in running time and in the number of iterations for different values of m.
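The operator A, its adjoint, and the parameter sequences of the experiment can be sketched as follows. This is an illustrative Python transcription of the setup described above, not the authors' code; the names `A`, `At`, and `stop` are ours. The adjoint identity \(\langle Ax, y\rangle = \langle x, A^{*}y\rangle \) holds because \(A^{*}y = ya\):

```python
import numpy as np

m = 4
rng = np.random.default_rng(1)
a = rng.uniform(1, m, m)          # entries of a drawn from [1, m]

A = lambda x: a @ x               # A x = <a, x>, maps R^m -> R
At = lambda y: y * a              # adjoint A* y = y a, maps R -> R^m
norm_A = np.linalg.norm(a)        # operator norm ||A|| = ||a||

# parameter sequences used in the experiment (n = 1, 2, ...)
delta = lambda n: 1.0 / (n + 2)       # delta_n = 1/(n+2)
sigma = lambda n: 2.0 - 1.0 / (n + 1) # sigma_n = 2 - 1/(n+1), lies in (0, 2)

def stop(x_next, x, eps=1e-6):
    """Stopping rule: TOL_n = ||x_{n+1} - x_n|| < eps."""
    return np.linalg.norm(x_next - x) < eps
```

The fixed step-size \(\mu = \frac{1}{2\Vert a\Vert ^{2}}\) used for He's algorithm would be `0.5 / norm_A**2` in this notation.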

Table 1 Example 4.1: Comparison of Algorithm 3.1 with He’s algorithm (7)

Conclusion

In this paper, we have proposed a self-adaptive extragradient iterative process for solving split pseudomonotone equilibrium problems. We established strong convergence of the proposed algorithm, and its performance, in terms of CPU time and the number of iterations required for convergence, is highlighted through preliminary numerical tests, which show that our algorithm is faster than the corresponding algorithm of He [12].

Availability of data and materials

Not applicable.

Notes

  1. Two matrices E and F are randomly generated with entries from \([-1,1]\); then \(V=E^{T}E\), \(S=F^{T}F\), and \(U = S + V\).
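The construction in this footnote can be checked numerically: \(V = E^{T}E\) is symmetric positive semidefinite by construction, and \(V - U = -S = -F^{T}F\) is negative semidefinite, as required in Example 4.1. A minimal NumPy sketch (the helper name `make_matrices` is ours):

```python
import numpy as np

def make_matrices(m, rng):
    """Random U, V as in Footnote 1: V = E^T E (symmetric PSD),
    S = F^T F, U = S + V, so that V - U = -S is negative semidefinite."""
    E = rng.uniform(-1, 1, (m, m))
    F = rng.uniform(-1, 1, (m, m))
    V = E.T @ E
    S = F.T @ F
    U = S + V
    return U, V

rng = np.random.default_rng(2)
U, V = make_matrices(6, rng)

# V is symmetric with nonnegative eigenvalues
assert np.allclose(V, V.T)
assert np.all(np.linalg.eigvalsh(V) >= -1e-10)
# V - U has nonpositive eigenvalues
assert np.all(np.linalg.eigvalsh(V - U) <= 1e-10)

# Lipschitz constant L1 = L2 = ||U - V|| / 2 (spectral norm)
L1 = 0.5 * np.linalg.norm(U - V, 2)
```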

References

  1. Anh, P.N.: A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 62, 271–283 (2013)

  2. Bianchi, M., Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90, 31–34 (1996)

  3. Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)

  4. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)

  5. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)

  6. Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005)

  7. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)

  8. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)

  9. Dang, V.H.: Convergence analysis of a new algorithm for strongly pseudomonotone equilibrium problems. Numer. Algorithms 77, 983–1001 (2018)

  10. Fan, K.: A minimax inequality and applications. In: Inequalities III, pp. 103–113. Academic Press, New York (1972)

  11. He, B., He, X., Liu, H., Wu, T.: Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 196, 43–48 (2009)

  12. He, Z.: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. 2012, Article ID 162 (2012)

  13. Kim, J.K., Majee, P.: Modified Krasnoselski–Mann iterative method for hierarchical fixed point problem and split mixed equilibrium problem. J. Inequal. Appl. 2020, Article ID 227 (2020)

  14. Kim, J.K., Salahuddin, S.: Existence of solutions for multivalued equilibrium problems. Nonlinear Funct. Anal. Appl. 23(4), 779–795 (2018)

  15. Lopez, G., Martin-Marquez, V., Wang, F.H., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, Article ID 085004 (2012)

  16. Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)

  17. Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)

  18. Oyewole, O.K., Mewomo, O.T.: Existence results for new generalized mixed equilibrium and fixed point problems in Banach spaces. Nonlinear Funct. Anal. Appl. 25(2), 273–301 (2020)

  19. Rehman, H.U., Kumam, P., Cho, Y.J., Suleiman, Y.I., Kumam, W.: Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. (2020). https://doi.org/10.1080/10556788.2020.1734805

  20. Suleiman, Y.I., Rehman, H.U., Gibali, A., Kumam, P.: A self-adaptive extragradient CQ-method for a class of bilevel split equilibrium problem with application to Nash Cournot oligopolistic electricity market models. Comput. Appl. Math. 39, 293 (2020)

  21. Tam, N.N., Yao, J.C., Yen, N.D.: Solution methods for pseudomonotone variational inequalities. J. Optim. Theory Appl. 138, 253–273 (2008)

  22. Tang, Y., Gibali, A.: New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algorithms 83, 305–331 (2020)

  23. Tran, D.Q., Dung, L.M., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)

  24. Wang, S.H., Liu, X.M., An, Y.S.: A new iterative algorithm for generalized split equilibrium problem in Hilbert spaces. Nonlinear Funct. Anal. Appl. 22(4), 911–924 (2017)

  25. Xu, H.K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003)

  26. Yang, Q.: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 302, 166–179 (2005)

  27. Yao, Y., Yao, Z., Abdou, A.N., Cho, Y.J.: Self-adaptive algorithms for proximal split feasibility problems and strong convergence analysis. Fixed Point Theory Appl. 2015, 205 (2015)

  28. Zhang, W., Han, D., Li, Z.: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 25, 115001 (2009)

Acknowledgements

The authors are grateful to the editor and anonymous referees for their valuable suggestions and constructive comments which have improved this paper. Yusuf I. Suleiman is grateful to King Mongkut’s University of Technology Thonburi (KMUTT), Bangkok 10140, Thailand, for providing state-of-the-art research facilities to carry out this research work during his bench work in KMUTT. The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005.

Funding

The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005.

Author information

Contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Poom Kumam or Wiyada Kumam.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Suleiman, Y.I., Kumam, P., Rehman, H.u. et al. A new extragradient algorithm with adaptive step-size for solving split equilibrium problems. J Inequal Appl 2021, 136 (2021). https://doi.org/10.1186/s13660-021-02668-x

MSC

  • 47H05
  • 47H09
  • 49M37
  • 65K10

Keywords

  • Split equilibrium problems
  • Extragradient algorithm
  • Self-adaptive step-sizes
  • Pseudomonotone equilibrium problems