# A cutting hyperplane method for solving pseudomonotone non-Lipschitzian equilibrium problems

## Abstract

We present a new method for solving equilibrium problems, where the underlying function is continuous and satisfies a pseudomonotone assumption. First, we construct an appropriate hyperplane which separates the current iterative point from the solution set. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the half-space containing the solution set. We also analyze the global convergence of the method under minimal assumptions.

MSC: 65K10, 90C25.

## 1 Introduction

The typical form of equilibrium problems is formulated by the Ky Fan inequality as follows (see ): find ${x}^{\ast }\in C$ such that

$f\left({x}^{\ast },y\right)\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,$

where C is a nonempty closed convex subset of ${\mathbb{R}}^{n}$ and $f:C×C\to \mathbb{R}$ is a bifunction such that $f\left(x,x\right)=0$ for all $x\in C$; we denote this problem briefly by $EP\left(f,C\right)$. In this paper, we suppose that $f\left(x,\cdot \right)$ is convex on C for all $x\in C$, that f is continuous on $C×C$, and that the solution set S of $EP\left(f,C\right)$ is nonempty.
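For intuition, the Ky Fan condition can be tested numerically on a discretized feasible set. The sketch below uses an illustrative VI-type bifunction $f(x,y)=\langle x-a,y-x\rangle$ and box $C=[0,1]^2$; these choices, and the helper `is_ep_solution`, are assumptions for this example, not data from the paper.

```python
import numpy as np

# Check the Ky Fan condition f(x*, y) >= 0 over a finite sample of C.
# The bifunction f and the box C below are illustrative assumptions.

def is_ep_solution(f, candidate, samples, tol=1e-9):
    """True if f(candidate, y) >= -tol for every sampled y in C."""
    return all(f(candidate, y) >= -tol for y in samples)

a = np.array([0.5, 1.5])
f = lambda x, y: (x - a) @ (y - x)     # VI-type bifunction, f(x, x) = 0

# sample C = [0, 1]^2 on a uniform grid
g = np.linspace(0.0, 1.0, 21)
samples = [np.array([u, v]) for u in g for v in g]
```

For this bifunction the unique solution is the projection of $a$ onto the box, namely $(0.5, 1.0)$, while an arbitrary point such as the origin fails the sampled check.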

Although $EP\left(f,C\right)$ has a simple formulation, it includes many important problems in applied mathematics such as variational inequalities, complementarity problems, (vector) optimization problems, fixed point problems and saddle point problems (see ). In recent years, equilibrium problems have become an attractive field for many researchers in both theory and applications (see ). There is a vast literature on equilibrium problems and their applications in electricity markets, transportation, economics and networks [12, 13].

The theory of equilibrium problems has been studied extensively and intensively in terms of the existence of solutions and generalizations in many abstract directions. However, methods for solving $EP\left(f,C\right)$ are still limited and have not yet met the needs of applications. To our knowledge, there are three popular approaches to solving $EP\left(f,C\right)$. The first is based on the gap function (see ), the second uses the proximal point method , and the third relies on the auxiliary subproblem principle . Recently, based on the fixed-point property that ${x}^{\ast }\in C$ is a solution to $EP\left(f,C\right)$ if and only if it is the unique solution of the problem

$min\left\{f\left({x}^{\ast },y\right)+\frac{1}{2\lambda }{\parallel y-{x}^{\ast }\parallel }^{2}\mid y\in C\right\},$
(1.1)

where $\lambda >0$, combined with Armijo linesearch techniques, Tran et al. in  introduced extragradient algorithms for solving equilibrium problems and obtained convergence under the assumption that the bifunction f is pseudomonotone in the following sense:

$f\left(x,y\right)\ge 0\phantom{\rule{1em}{0ex}}\text{implies}\phantom{\rule{1em}{0ex}}f\left(y,x\right)\le 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in C.$
(1.2)

By replacing the quadratic term ${\parallel y-{x}^{\ast }\parallel }^{2}$ in the subproblem (1.1) with a Bregman distance function, Nguyen et al. in  proposed the interior proximal extragradient method for solving $EP\left(f,C\right)$, where f is pseudomonotone and C is only required to be a polyhedral convex set. This approach has also been extensively studied for solving $EP\left(f,C\right)$ and variational inequalities (see ).

A special case of Problem $EP\left(f,C\right)$ is the variational inequality problem, shortly $VI\left(F,C\right)$, which is to find a point ${x}^{\ast }\in C$ such that

$〈F\left({x}^{\ast }\right),x-{x}^{\ast }〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C,$

where C is a nonempty closed convex subset of ${\mathbb{R}}^{n}$ and $F:C\to {\mathbb{R}}^{n}$. A typical method for solving Problem $VI\left(F,C\right)$ is the projection method, which is based on the property that x solves Problem $VI\left(F,C\right)$ if and only if it is a zero of the projected residual function $r\left(x\right):=x-{Pr}_{C}\left(x-F\left(x\right)\right)$, where ${Pr}_{C}\left(\cdot \right)$ is the metric projection on C. Solodov and Svaiter in  proposed a projection method which starts with a point ${x}^{0}\in C$ and generates a sequence $\left\{{x}^{k}\right\}$ defined, for all $k\ge 0$, $\gamma \in \left(0,1\right)$, $\sigma \in \left(0,1\right)$, by

find the smallest nonnegative integer ${m}_{k}$ such that $〈F\left({x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)\right),r\left({x}^{k}\right)〉\ge \sigma {\parallel r\left({x}^{k}\right)\parallel }^{2}$, set ${z}^{k}:={x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)$ and ${x}^{k+1}:={Pr}_{C\cap {H}_{k}}\left({x}^{k}\right)$, where ${H}_{k}:=\left\{x\in {\mathbb{R}}^{n}\mid 〈F\left({z}^{k}\right),x-{z}^{k}〉\le 0\right\}$.
(1.3)

Under pseudomonotonicity and continuity assumptions on F, the authors showed that the sequence $\left\{{x}^{k}\right\}$ globally converges to a solution of the variational inequality problem $VI\left(F,C\right)$. Note that if $r\left({x}^{k}\right)=0$, then ${x}^{k}$ is a solution to the problem.
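For a box-constrained feasible set the projected residual is a one-liner. The sketch below uses an illustrative gradient map $F(x)=x-a$ (an assumption for this example, not taken from the paper) to verify the characterization $r(x^{\ast })=0$ at a solution:

```python
import numpy as np

# Projected residual for VI(F, C) with C = [lo, hi]^n:
#   r(x) = x - Pr_C(x - F(x)),
# and x* solves VI(F, C) if and only if r(x*) = 0.
# The map F and the box bounds are illustrative assumptions.

def residual(F, x, lo, hi):
    return x - np.clip(x - F(x), lo, hi)

a = np.array([0.5, 1.5])
F = lambda v: v - a                # gradient of (1/2)||v - a||^2
x_star = np.clip(a, 0.0, 1.0)     # solution: projection of a onto the box
```

Here `x_star` has zero residual, while the origin does not, matching the remark that $r(x^{k})=0$ certifies a solution.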

In this paper, by combining the extragradient methods in  with Armijo-type linesearch techniques as in (1.3), we propose a new method for solving Problem $EP\left(f,C\right)$, which we call the cutting hyperplane method. First, we construct an appropriate hyperplane which separates the current iterate from the solution set. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the half-space containing the solution set. Compared with the extragradient method in  and other current methods, our iteration scheme is quite simple. The fundamental difference is that the global convergence of the method requires only the continuity and pseudomonotonicity of the bifunction f. Moreover, we also show that the cluster point of the sequence generated by our scheme is the limit of the projections of the iterates onto the solution set of Problem $EP\left(f,C\right)$.

The rest of the paper is organized as follows. In Section 2, we give formal definitions of our target problem $EP\left(f,C\right)$ and of the pseudomonotonicity of f. We then propose the cutting hyperplane method. Section 3 is devoted to the proof of its global convergence to a solution of $EP\left(f,C\right)$. In the last section, we apply the method to oligopolistic equilibrium market models with concave cost functions and to a generalized form of the bifunction defined by the Cournot-Nash equilibrium model considered in [13, 19–21].

## 2 Proposed method

Suppose that $C\subseteq {\mathbb{R}}^{n}$ is a nonempty closed convex set and $f:C×C\to \mathbb{R}\cup \left\{+\mathrm{\infty }\right\}$ is a bifunction. We first recall the following definitions, which will be required in our analysis of equilibrium problems (see [8, 13]).

Definition 2.1 A bifunction f is said to be

(a) strongly monotone on C if there exists a constant $\rho >0$ such that

$f\left(x,y\right)+f\left(y,x\right)\le -\rho {\parallel x-y\parallel }^{2}\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in C;$

(b) monotone on C if

$f\left(x,y\right)+f\left(y,x\right)\le 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in C;$

(c) pseudomonotone on C if

$f\left(x,y\right)\ge 0\phantom{\rule{1em}{0ex}}\text{implies}\phantom{\rule{1em}{0ex}}f\left(y,x\right)\le 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in C.$

It is observed that (a) ⇒ (b) ⇒ (c).

If f is a mapping defined by

$f\left(x,y\right):=sup\left\{〈w,y-x〉\mid w\in F\left(x\right)\right\},$

where $F:C\to {2}^{{\mathbb{R}}^{n}}$ is a multivalued mapping such that $F\left(x\right)\ne \mathrm{\varnothing }$ for all $x\in C$, then $EP\left(f,C\right)$ can be formulated as the multivalued variational inequality (shortly, MVI):

Find ${x}^{\ast }\in C$, ${w}^{\ast }\in F\left({x}^{\ast }\right)$ such that

$〈{w}^{\ast },x-{x}^{\ast }〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$

In this case, it is known that solutions coincide with zeros of the following projected residual function:

$T\left(x\right)=x-{Pr}_{C}\left(x-\lambda w\right),$

where $\lambda >0$ and $w\in F\left(x\right)$. In other words, with ${x}^{0}\in C$, ${w}^{0}\in F\left({x}^{0}\right)$, the point $\left({x}^{0},{w}^{0}\right)$ is a solution of (MVI) if and only if $T\left({x}^{0}\right)=0$, where $T\left({x}^{0}\right)={x}^{0}-{Pr}_{C}\left({x}^{0}-{w}^{0}\right)$ (see ). Applying this idea to the equilibrium problems $EP\left(f,C\right)$, we obtain the following solution scheme.

Let ${x}^{k}$ be a current approximation to the solution of $EP\left(f,C\right)$. First, we compute

${y}^{k}=argmin\left\{f\left({x}^{k},y\right)+\frac{\beta }{2}{\parallel y-{x}^{k}\parallel }^{2}\mid y\in C\right\}$
(2.1)

for some positive constant β (as in Step 1 of Algorithm 2 in ). Set $r\left({x}^{k}\right):={x}^{k}-{y}^{k}$. It is easy to see that if $r\left({x}^{k}\right)=0$, then ${x}^{k}$ is a solution to Problem $EP\left(f,C\right)$. Otherwise, we search the line segment between ${x}^{k}$ and ${y}^{k}$ for a point ${z}^{k}$ and a subgradient ${\overline{w}}^{k}$ such that the hyperplane

$\partial {H}_{k}=\left\{x\in {\mathbb{R}}^{n}\mid 〈{\overline{w}}^{k},x-{z}^{k}〉=0\right\}$

strictly separates ${x}^{k}$ from the solution set S of $EP\left(f,C\right)$. To find such a ${z}^{k}$, we may use a computationally inexpensive Armijo-type procedure as in . Given $\gamma \in \left(0,1\right)$ and $\sigma \in \left(0,\frac{\beta }{2}\right)$, we find the smallest nonnegative integer ${m}_{k}$ such that

$f\left({x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right),{y}^{k}\right)\le -\sigma {\parallel r\left({x}^{k}\right)\parallel }^{2}.$
(2.2)

We set ${z}^{k}:={x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)$ and choose ${\overline{w}}^{k}\in {\partial }_{2}f\left({z}^{k},{z}^{k}\right)$. Then we compute the next iterate ${x}^{k+1}$ by projecting ${x}^{k}$ onto the intersection of the feasible set C with the half-space

${H}_{k}:=\left\{x\in {\mathbb{R}}^{n}\mid 〈{\overline{w}}^{k},x-{z}^{k}〉\le 0\right\}.$

This means that

${x}^{k+1}:={Pr}_{C\cap {H}_{k}}\left({x}^{k}\right).$
(2.3)
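The whole scheme can be sketched for the special bifunction $f(x,y)=\langle F(x),y-x\rangle$ on a box, in which case subproblem (2.1) reduces to a projection and the subgradient in $\partial_2 f(z^k,z^k)$ is simply $F(z^k)$. The map `F`, the box, the parameter values, and the use of Dykstra's alternating projections for step (2.3) are all illustrative assumptions of this sketch, not prescriptions of the paper.

```python
import numpy as np

# A sketch of the cutting hyperplane scheme (2.1)-(2.3) for the special case
# f(x, y) = <F(x), y - x> on a box C = [lo, hi]^n.
# F, the box, the parameters, and the inner projection routine are assumptions.

def dykstra_box_halfspace(x0, lo, hi, w, z, iters=300):
    """Project x0 onto C ∩ H with C = [lo, hi]^n and H = {x : <w, x - z> <= 0},
    using Dykstra's alternating projection method."""
    x = np.array(x0, dtype=float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(iters):
        y = np.clip(x + p, lo, hi)            # project onto the box
        p = x + p - y
        v = y + q
        shift = max(w @ (v - z), 0.0) / (w @ w)
        x_new = v - shift * w                 # project onto the half-space
        q = v - x_new
        x = x_new
    return x

def cutting_hyperplane(F, x0, lo, hi, beta=2.0, gamma=0.5, sigma=0.5,
                       max_iter=200, tol=1e-10):
    """One reading of Scheme (2.1)-(2.3); requires sigma < beta / 2."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        # (2.1): argmin of <F(x), y - x> + (beta/2)||y - x||^2 over the box.
        y = np.clip(x - F(x) / beta, lo, hi)
        r = x - y                             # residual (taken here as x - y)
        if np.linalg.norm(r) <= tol:
            break                             # x approximately solves EP(f, C)
        # (2.2): Armijo linesearch with f(z, y) = <F(z), y - z>.
        m = 0
        while F(x - gamma**m * r) @ (y - (x - gamma**m * r)) > -sigma * (r @ r):
            m += 1
        z = x - gamma**m * r
        w = F(z)                              # the subgradient in ∂_2 f(z, z)
        # (2.3): project x^k onto C ∩ H_k.
        x = dykstra_box_halfspace(x, lo, hi, w, z)
    return x
```

For instance, with $F(x)=x-(0.5,1.5)$ on $C=[0,1]^2$ the iterates approach the solution $(0.5,1.0)$, the projection of $(0.5,1.5)$ onto the box.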

## 3 Convergence

Instead of (2.2), Tran et al. in  used a linesearch technique as follows:

$f\left({x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right),{y}^{k}\right)+\frac{\alpha }{\rho }\left(G\left({y}^{k}\right)-G\left({x}^{k}\right)-〈\mathrm{\nabla }G\left({x}^{k}\right),{y}^{k}-{x}^{k}〉\right)\le 0,$
(3.1)

where $\alpha \in \left(0,1\right)$, $\rho >0$ and $G:{\mathbb{R}}^{n}\to \mathbb{R}$ is a strongly convex (with modulus $\beta >0$) and continuously differentiable function. It is easy to see that (2.2) is simpler than the technique (3.1). Both are Armijo-type linesearch techniques, so a small part of the proof of the following lemma is close to the proof of Lemma 4.2 in .

Lemma 3.1 If $r\left({x}^{k}\right)\ne 0$ and $\gamma \in \left(0,1\right)$, then there exists a smallest nonnegative integer ${m}_{k}$ such that

$f\left({x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right),{y}^{k}\right)\le -\sigma {\parallel r\left({x}^{k}\right)\parallel }^{2}.$

Proof Let $r\left({x}^{k}\right)\ne 0$ and $\gamma \in \left(0,1\right)$. Suppose, to obtain a contradiction, that for every nonnegative integer m we have

$f\left({x}^{k}-{\gamma }^{m}r\left({x}^{k}\right),{y}^{k}\right)+\sigma {\parallel r\left({x}^{k}\right)\parallel }^{2}>0.$

Passing to the limit in the above inequality, as $m\to \mathrm{\infty }$, by continuity of $f\left(\cdot ,{y}^{k}\right)$, we obtain

$f\left({x}^{k},{y}^{k}\right)+\sigma {\parallel r\left({x}^{k}\right)\parallel }^{2}\ge 0.$
(3.2)

On the other hand, since ${y}^{k}$ is a solution to the convex optimization problem

$min\left\{f\left({x}^{k},y\right)+\frac{\beta }{2}{\parallel y-{x}^{k}\parallel }^{2}\mid y\in C\right\},$

we have

$f\left({x}^{k},y\right)+\frac{\beta }{2}{\parallel y-{x}^{k}\parallel }^{2}\ge f\left({x}^{k},{y}^{k}\right)+\frac{\beta }{2}{\parallel {y}^{k}-{x}^{k}\parallel }^{2}\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$

With $y={x}^{k}$, the last inequality implies

$f\left({x}^{k},{y}^{k}\right)+\frac{\beta }{2}{\parallel r\left({x}^{k}\right)\parallel }^{2}\le 0.$
(3.3)

Combining (3.2) with (3.3), we obtain

$\sigma {\parallel r\left({x}^{k}\right)\parallel }^{2}\ge \frac{\beta }{2}{\parallel r\left({x}^{k}\right)\parallel }^{2}.$

Hence either $r\left({x}^{k}\right)=0$ or $\sigma \ge \frac{\beta }{2}$. The first case contradicts $r\left({x}^{k}\right)\ne 0$, while the second contradicts the choice $\sigma <\frac{\beta }{2}$. □
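Lemma 3.1 rests on the optimality bound (3.3). For the special bifunction $f(x,y)=\langle F(x),y-x\rangle$ on a box, subproblem (2.1) has the closed form $y={Pr}_{C}(x-F(x)/\beta)$, so both the bound and the termination of linesearch (2.2) can be checked numerically. The map `F`, the box, and the parameter values below are illustrative assumptions.

```python
import numpy as np

# For f(x, y) = <F(x), y - x> on a box C, subproblem (2.1) has the closed
# form y = Pr_C(x - F(x)/beta), and the optimality bound (3.3) reads
#   f(x, y) <= -(beta/2) ||r(x)||^2  with  r(x) = x - y.
# Lemma 3.1 then guarantees the linesearch (2.2) stops when sigma < beta/2.
# F, the box, and the parameter values are illustrative assumptions.

def argmin_step(F, x, lo, hi, beta):
    y = np.clip(x - F(x) / beta, lo, hi)   # solves (2.1) for this f
    return y, x - y                        # (y^k, r(x^k))

def smallest_m(F, x, y, r, gamma, sigma, m_max=60):
    """Smallest nonnegative m with f(x - gamma^m r, y) <= -sigma ||r||^2."""
    for m in range(m_max):
        z = x - gamma**m * r
        if F(z) @ (y - z) <= -sigma * (r @ r):
            return m
    raise RuntimeError("linesearch did not terminate")

a = np.array([0.5, 1.5])
F = lambda v: v - a
beta, gamma, sigma = 2.0, 0.5, 0.5         # sigma < beta/2, as Lemma 3.1 needs
```

Sampling random points of the box confirms (3.3) and a finite linesearch index at every non-stationary point.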

Lemma 3.2 (see )

Let C be a nonempty closed convex subset of a real Hilbert space. Suppose that, for all $u\in C$, the sequence $\left\{{x}^{k}\right\}$ satisfies

$\parallel {x}^{k+1}-u\parallel \le \parallel {x}^{k}-u\parallel \phantom{\rule{1em}{0ex}}\mathrm{\forall }k\ge 0.$

Then the sequence $\left\{{Pr}_{C}\left({x}^{k}\right)\right\}$ converges strongly to some ${x}^{\ast }\in C$.

Let us discuss the global convergence of Scheme (2.1)-(2.3).

Lemma 3.3 Let $\left\{{x}^{k}\right\}$ be the sequence generated by Scheme (2.1)-(2.3). Then the following hold:

(i) If $\parallel r\left({x}^{k}\right)\parallel =0$, then ${x}^{k}\in S$.

(ii) If $\parallel r\left({x}^{k}\right)\parallel \ne 0$, then ${x}^{k}\notin {H}_{k}$.

(iii) If $\parallel r\left({x}^{k}\right)\parallel \ne 0$, then $S\subseteq C\cap {H}_{k}$.

(iv) ${x}^{k+1}={Pr}_{C\cap {H}_{k}}\left({\overline{y}}^{k}\right)$, where ${\overline{y}}^{k}={Pr}_{{H}_{k}}\left({x}^{k}\right)$.

Proof (i) For a proof of this, see Lemma 3.1 in  and Theorem 3.1 in .

(ii) From ${z}^{k}={x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)$, we have

${x}^{k}-{z}^{k}=\frac{{\gamma }^{{m}_{k}}}{1-{\gamma }^{{m}_{k}}}\left({z}^{k}-{y}^{k}\right).$

Combining this equality with (2.2) and ${\overline{w}}^{k}\in {\partial }_{2}f\left({z}^{k},{z}^{k}\right)$, we obtain

$\begin{array}{rl}〈{\overline{w}}^{k},{x}^{k}-{z}^{k}〉& =\frac{{\gamma }^{{m}_{k}}}{1-{\gamma }^{{m}_{k}}}〈{\overline{w}}^{k},{z}^{k}-{y}^{k}〉\\ \ge \frac{{\gamma }^{{m}_{k}}}{1-{\gamma }^{{m}_{k}}}\left(f\left({z}^{k},{z}^{k}\right)-f\left({z}^{k},{y}^{k}\right)\right)\\ =-\frac{{\gamma }^{{m}_{k}}}{1-{\gamma }^{{m}_{k}}}f\left({z}^{k},{y}^{k}\right)\\ \ge \frac{{\gamma }^{{m}_{k}}\sigma }{1-{\gamma }^{{m}_{k}}}{\parallel r\left({x}^{k}\right)\parallel }^{2}\\ >0.\end{array}$

Hence $〈{\overline{w}}^{k},{x}^{k}-{z}^{k}〉>0$, which means that ${x}^{k}\notin {H}_{k}$.

(iii) Suppose ${x}^{\ast }\in S$. Then $f\left({x}^{\ast },x\right)\ge 0$ for all $x\in C$, and since f is pseudomonotone on C, we get

$f\left({z}^{k},{x}^{\ast }\right)\le 0.$
(3.4)

From ${\overline{w}}^{k}\in {\partial }_{2}f\left({z}^{k},{z}^{k}\right)$, we have

$\begin{array}{rl}〈{\overline{w}}^{k},{x}^{\ast }-{z}^{k}〉& \le f\left({z}^{k},{x}^{\ast }\right)-f\left({z}^{k},{z}^{k}\right)\\ =f\left({z}^{k},{x}^{\ast }\right).\end{array}$

From this inequality and (3.4), it follows that

$〈{\overline{w}}^{k},{x}^{\ast }-{z}^{k}〉\le 0.$

Thus ${x}^{\ast }\in {H}_{k}$.

(iv) We know from (2.3) that ${x}^{k+1}={Pr}_{C\cap {H}_{k}}\left({x}^{k}\right)$.

Since ${x}^{k}\in C$ and ${x}^{k}\notin {H}_{k}$, for every $y\in C\cap {H}_{k}$, there exists $\lambda \in \left(0,1\right)$ such that

$\stackrel{ˆ}{x}=\lambda {x}^{k}+\left(1-\lambda \right)y\in C\cap \partial {H}_{k},$

where $\partial {H}_{k}=\left\{x\in {\mathbb{R}}^{n}\mid 〈{\overline{w}}^{k},x-{z}^{k}〉=0\right\}$. In particular, for $y={x}^{k+1}$, we easily deduce that the corresponding $\stackrel{ˆ}{x}={x}^{k+1}\in C\cap \partial {H}_{k}$ and thus that ${x}^{k+1}={Pr}_{C\cap \partial {H}_{k}}\left({x}^{k}\right)$. Therefore, for every $y\in C\cap {H}_{k}$, we have

$\begin{array}{rcl}{\parallel y-{\overline{y}}^{k}\parallel }^{2}& \ge & {\left(1-\lambda \right)}^{2}{\parallel y-{\overline{y}}^{k}\parallel }^{2}\\ =& {\parallel \stackrel{ˆ}{x}-\lambda {x}^{k}-\left(1-\lambda \right){\overline{y}}^{k}\parallel }^{2}\\ =& {\parallel \left(\stackrel{ˆ}{x}-{\overline{y}}^{k}\right)-\lambda \left({x}^{k}-{\overline{y}}^{k}\right)\parallel }^{2}\\ =& {\parallel \stackrel{ˆ}{x}-{\overline{y}}^{k}\parallel }^{2}+{\lambda }^{2}{\parallel {x}^{k}-{\overline{y}}^{k}\parallel }^{2}-2\lambda 〈\stackrel{ˆ}{x}-{\overline{y}}^{k},{x}^{k}-{\overline{y}}^{k}〉\\ =& {\parallel \stackrel{ˆ}{x}-{\overline{y}}^{k}\parallel }^{2}+{\lambda }^{2}{\parallel {x}^{k}-{\overline{y}}^{k}\parallel }^{2}\\ \ge & {\parallel \stackrel{ˆ}{x}-{\overline{y}}^{k}\parallel }^{2}\end{array}$
(3.5)

because ${\overline{y}}^{k}={Pr}_{\partial {H}_{k}}\left({x}^{k}\right)$. Also, we have

$\begin{array}{rcl}{\parallel \stackrel{ˆ}{x}-{x}^{k}\parallel }^{2}& =& {\parallel \stackrel{ˆ}{x}-{\overline{y}}^{k}+{\overline{y}}^{k}-{x}^{k}\parallel }^{2}\\ =& {\parallel \stackrel{ˆ}{x}-{\overline{y}}^{k}\parallel }^{2}-2〈\stackrel{ˆ}{x}-{\overline{y}}^{k},{x}^{k}-{\overline{y}}^{k}〉+{\parallel {\overline{y}}^{k}-{x}^{k}\parallel }^{2}\\ =& {\parallel \stackrel{ˆ}{x}-{\overline{y}}^{k}\parallel }^{2}+{\parallel {\overline{y}}^{k}-{x}^{k}\parallel }^{2}.\end{array}$

Since ${x}^{k+1}={Pr}_{C\cap {H}_{k}}\left({x}^{k}\right)$, using the Pythagorean theorem, we can deduce that

$\begin{array}{rcl}{\parallel \stackrel{ˆ}{x}-{\overline{y}}^{k}\parallel }^{2}& =& {\parallel \stackrel{ˆ}{x}-{x}^{k}\parallel }^{2}-{\parallel {\overline{y}}^{k}-{x}^{k}\parallel }^{2}\\ \ge & {\parallel {x}^{k+1}-{x}^{k}\parallel }^{2}-{\parallel {\overline{y}}^{k}-{x}^{k}\parallel }^{2}\\ =& {\parallel {x}^{k+1}-{\overline{y}}^{k}\parallel }^{2}.\end{array}$
(3.6)

From (3.5) and (3.6), we have

$\parallel {x}^{k+1}-{\overline{y}}^{k}\parallel \le \parallel y-{\overline{y}}^{k}\parallel \phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C\cap {H}_{k},$

which means

${x}^{k+1}={Pr}_{C\cap {H}_{k}}\left({\overline{y}}^{k}\right).$

□

Using Lemma 3.3, we can prove the global convergence of Scheme (2.1)-(2.3) under moderate assumptions.

Theorem 3.4 Let f be pseudomonotone and let $\partial f\left(x,\cdot \right)\left(x\right)$ be bounded on C. Then, for every ${x}^{\ast }\in S$,

${\parallel {x}^{k+1}-{x}^{\ast }\parallel }^{2}\le {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{\parallel {x}^{k+1}-{\overline{y}}^{k}\parallel }^{2}-{\left(\frac{{\gamma }^{{m}_{k}}\sigma }{\parallel {\overline{w}}^{k}\parallel \left(1-{\gamma }^{{m}_{k}}\right)}\right)}^{2}{\parallel r\left({x}^{k}\right)\parallel }^{4},$

where ${\overline{y}}^{k}={Pr}_{\partial {H}_{k}}\left({x}^{k}\right)$, and the sequence $\left\{{x}^{k}\right\}$ generated by Scheme (2.1)-(2.3) converges to a solution of $EP\left(f,C\right)$.

Proof We first show that the sequence $\left\{{x}^{k}\right\}$ is bounded. Since ${x}^{k+1}={Pr}_{C\cap {H}_{k}}\left({\overline{y}}^{k}\right)$, we have

$〈{\overline{y}}^{k}-{x}^{k+1},z-{x}^{k+1}〉\le 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in C\cap {H}_{k}.$

Substituting $z={x}^{\ast }\in C\cap {H}_{k}$, we have

$〈{\overline{y}}^{k}-{x}^{k+1},{x}^{\ast }-{x}^{k+1}〉\le 0\phantom{\rule{1em}{0ex}}⇔\phantom{\rule{1em}{0ex}}〈{\overline{y}}^{k}-{x}^{k+1},{x}^{\ast }-{\overline{y}}^{k}+{\overline{y}}^{k}-{x}^{k+1}〉\le 0,$

which implies that

${\parallel {x}^{k+1}-{\overline{y}}^{k}\parallel }^{2}\le 〈{x}^{k+1}-{\overline{y}}^{k},{x}^{\ast }-{\overline{y}}^{k}〉.$

Hence, we have

$\begin{array}{rcl}{\parallel {x}^{k+1}-{x}^{\ast }\parallel }^{2}& =& {\parallel {x}^{k+1}-{\overline{y}}^{k}\parallel }^{2}+{\parallel {\overline{y}}^{k}-{x}^{\ast }\parallel }^{2}+2〈{x}^{k+1}-{\overline{y}}^{k},{\overline{y}}^{k}-{x}^{\ast }〉\\ \le & 〈{x}^{k+1}-{\overline{y}}^{k},{x}^{\ast }-{\overline{y}}^{k}〉+{\parallel {\overline{y}}^{k}-{x}^{\ast }\parallel }^{2}+2〈{x}^{k+1}-{\overline{y}}^{k},{\overline{y}}^{k}-{x}^{\ast }〉\\ =& {\parallel {\overline{y}}^{k}-{x}^{\ast }\parallel }^{2}-〈{x}^{k+1}-{\overline{y}}^{k},{x}^{\ast }-{\overline{y}}^{k}〉\\ \le & {\parallel {\overline{y}}^{k}-{x}^{\ast }\parallel }^{2}-{\parallel {x}^{k+1}-{\overline{y}}^{k}\parallel }^{2}.\end{array}$
(3.7)

Since ${z}^{k}={x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)$ and

${\overline{y}}^{k}={Pr}_{{H}_{k}}\left({x}^{k}\right)={x}^{k}-\frac{〈{\overline{w}}^{k},{x}^{k}-{z}^{k}〉}{{\parallel {\overline{w}}^{k}\parallel }^{2}}{\overline{w}}^{k},$

we have

$\begin{array}{rcl}{\parallel {\overline{y}}^{k}-{x}^{\ast }\parallel }^{2}& =& {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}+\frac{{〈{\overline{w}}^{k},{x}^{k}-{z}^{k}〉}^{2}}{{\parallel {\overline{w}}^{k}\parallel }^{4}}{\parallel {\overline{w}}^{k}\parallel }^{2}-\frac{2〈{\overline{w}}^{k},{x}^{k}-{z}^{k}〉}{{\parallel {\overline{w}}^{k}\parallel }^{2}}〈{\overline{w}}^{k},{x}^{k}-{x}^{\ast }〉\\ =& {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}+{\left(\frac{{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{\parallel {\overline{w}}^{k}\parallel }\right)}^{2}-\frac{2{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{{\parallel {\overline{w}}^{k}\parallel }^{2}}〈{\overline{w}}^{k},{x}^{k}-{x}^{\ast }〉\\ =& {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{\left(\frac{{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{\parallel {\overline{w}}^{k}\parallel }\right)}^{2}-2\left[\frac{{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{{\parallel {\overline{w}}^{k}\parallel }^{2}}〈{\overline{w}}^{k},{x}^{k}-{x}^{\ast }〉-{\left(\frac{{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{\parallel {\overline{w}}^{k}\parallel }\right)}^{2}\right]\\ =& {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{\left(\frac{{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{\parallel {\overline{w}}^{k}\parallel }\right)}^{2}-\frac{2{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{{\parallel {\overline{w}}^{k}\parallel }^{2}}〈{\overline{w}}^{k},{x}^{k}-{x}^{\ast }-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)〉\\ =& {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{\left(\frac{{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{\parallel {\overline{w}}^{k}\parallel }\right)}^{2}-\frac{2{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{{\parallel {\overline{w}}^{k}\parallel }^{2}}〈{\overline{w}}^{k},{z}^{k}-{x}^{\ast }〉.\end{array}$
(3.8)

From ${\overline{w}}^{k}\in {\partial }_{2}f\left({z}^{k},{z}^{k}\right)$, it follows that

$f\left({z}^{k},y\right)-f\left({z}^{k},{z}^{k}\right)\ge 〈{\overline{w}}^{k},y-{z}^{k}〉\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$

Replacing y by ${y}^{k}$ and combining with assumptions $f\left({z}^{k},{z}^{k}\right)=0$ and ${z}^{k}={x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)$, we have

$\begin{array}{rl}f\left({z}^{k},{y}^{k}\right)& \ge 〈{\overline{w}}^{k},{y}^{k}-{z}^{k}〉\\ =-\left(1-{\gamma }^{{m}_{k}}\right)〈{\overline{w}}^{k},r\left({x}^{k}\right)〉.\end{array}$

Combining this inequality with (2.2) and the assumption $\gamma \in \left(0,1\right)$, we obtain

$〈{\overline{w}}^{k},r\left({x}^{k}\right)〉\ge \frac{\sigma }{1-{\gamma }^{{m}_{k}}}{\parallel r\left({x}^{k}\right)\parallel }^{2}.$

Hence, (3.8) reduces to

$\begin{array}{rcl}{\parallel {\overline{y}}^{k}-{x}^{\ast }\parallel }^{2}& \le & {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{\left(\frac{{\gamma }^{{m}_{k}}〈{\overline{w}}^{k},r\left({x}^{k}\right)〉}{\parallel {\overline{w}}^{k}\parallel }\right)}^{2}\\ \le & {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{\left(\frac{{\gamma }^{{m}_{k}}\sigma }{\parallel {\overline{w}}^{k}\parallel \left(1-{\gamma }^{{m}_{k}}\right)}\right)}^{2}{\parallel r\left({x}^{k}\right)\parallel }^{4}.\end{array}$
(3.9)

Combining (3.7) with (3.9), we obtain

${\parallel {x}^{k+1}-{x}^{\ast }\parallel }^{2}\le {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{\parallel {x}^{k+1}-{\overline{y}}^{k}\parallel }^{2}-{\left(\frac{{\gamma }^{{m}_{k}}\sigma }{\parallel {\overline{w}}^{k}\parallel \left(1-{\gamma }^{{m}_{k}}\right)}\right)}^{2}{\parallel r\left({x}^{k}\right)\parallel }^{4}.$
(3.10)

This implies that the sequence $\left\{\parallel {x}^{k}-{x}^{\ast }\parallel \right\}$ is nonincreasing and hence convergent; in particular, $\left\{{x}^{k}\right\}$ is bounded. So there exists a subsequence $\left\{{x}^{{k}_{j}}\right\}$ which converges to some $\overline{x}$. We consider the function $g\left(y\right):=f\left({x}^{{k}_{j}},y\right)+\frac{\beta }{2}{\parallel y-{x}^{{k}_{j}}\parallel }^{2}+{\delta }_{C}\left(y\right)$, where ${\delta }_{C}\left(\cdot \right)$ is the indicator function of C. Then g is strongly convex on C, and hence ∂g is strongly monotone with modulus $\beta >0$. By the definition of a strongly monotone mapping, we have

$\begin{array}{rl}\beta {\parallel {x}^{{k}_{j}}-{y}^{{k}_{j}}\parallel }^{2}& \le 〈{g}_{1}^{{k}_{j}}-{g}_{2}^{{k}_{j}},{x}^{{k}_{j}}-{y}^{{k}_{j}}〉\phantom{\rule{1em}{0ex}}\mathrm{\forall }{g}_{1}^{{k}_{j}}\in \partial g\left({x}^{{k}_{j}}\right),{g}_{2}^{{k}_{j}}\in \partial g\left({y}^{{k}_{j}}\right)\\ \le \parallel {g}_{1}^{{k}_{j}}-{g}_{2}^{{k}_{j}}\parallel \parallel {x}^{{k}_{j}}-{y}^{{k}_{j}}\parallel .\end{array}$

Since $\partial g\left(y\right)=\partial f\left({x}^{{k}_{j}},\cdot \right)\left(y\right)+\beta \left(y-{x}^{{k}_{j}}\right)+{N}_{C}\left(y\right)$ and $0\in \partial g\left({y}^{{k}_{j}}\right)$, we choose ${g}_{1}^{{k}_{j}}\in \partial f\left({x}^{{k}_{j}},\cdot \right)\left({x}^{{k}_{j}}\right)$ and ${g}_{2}^{{k}_{j}}=0$. So,

$\beta \parallel {x}^{{k}_{j}}-{y}^{{k}_{j}}\parallel \le \parallel {g}_{1}^{{k}_{j}}\parallel .$
(3.11)

Since $\partial f\left(x,\cdot \right)\left(x\right)$ is bounded on C, the sequence $\left\{{g}_{1}^{{k}_{j}}\right\}$ is bounded. Combining this with (3.11), the sequence $\left\{{y}^{{k}_{j}}\right\}$ is also bounded. Therefore, the sequences $\left\{{z}^{{k}_{j}}={x}^{{k}_{j}}-{\gamma }^{{m}_{{k}_{j}}}r\left({x}^{{k}_{j}}\right)\right\}$ and $\left\{{\overline{w}}^{{k}_{j}}\right\}$ are bounded. We suppose that

$\parallel {\overline{w}}^{{k}_{j}}\parallel \le M\phantom{\rule{1em}{0ex}}\mathrm{\forall }j\ge 0\phantom{\rule{1em}{0ex}}\text{for some constant}\phantom{\rule{0.25em}{0ex}}M>0.$

This together with (3.10) implies

${\parallel {x}^{{k}_{j}+1}-{x}^{\ast }\parallel }^{2}\le {\parallel {x}^{{k}_{j}}-{x}^{\ast }\parallel }^{2}-{\parallel {x}^{{k}_{j}+1}-{\overline{y}}^{{k}_{j}}\parallel }^{2}-{\left(\frac{{\gamma }^{{m}_{{k}_{j}}}\sigma }{M\left(1-{\gamma }^{{m}_{{k}_{j}}}\right)}\right)}^{2}{\parallel r\left({x}^{{k}_{j}}\right)\parallel }^{4}.$
(3.12)

Since $\left\{\parallel {x}^{k}-{x}^{\ast }\parallel \right\}$ is convergent, it is easy to see that

$\underset{j\to \mathrm{\infty }}{lim}{\gamma }^{{m}_{{k}_{j}}}{\parallel r\left({x}^{{k}_{j}}\right)\parallel }^{2}=0.$

The cases remaining to consider are the following.

Case 1. ${lim sup}_{j\to \mathrm{\infty }}{\gamma }^{{m}_{{k}_{j}}}>0$. Then ${lim inf}_{j\to \mathrm{\infty }}\parallel r\left({x}^{{k}_{j}}\right)\parallel =0$. Passing to further subsequences if necessary, we may assume that $\left\{{x}^{{k}_{j}}\right\}$ converges to $\overline{x}$, that $\left\{{y}^{{k}_{j}}\right\}$ converges to $\overline{y}$, and that $r\left(\overline{x}\right):=\overline{x}-\overline{y}=0$. Then we see from Lemma 3.3 that $\overline{x}\in S$, and hence we can take ${x}^{\ast }=\overline{x}$, in particular in (3.12). Thus $\left\{\parallel {x}^{k}-\overline{x}\parallel \right\}$ is a convergent sequence. Since $\overline{x}$ is an accumulation point of $\left\{{x}^{k}\right\}$, the sequence $\left\{\parallel {x}^{k}-\overline{x}\parallel \right\}$ converges to zero, i.e., $\left\{{x}^{k}\right\}$ converges to $\overline{x}\in S$.

Case 2. ${lim}_{j\to \mathrm{\infty }}{\gamma }^{{m}_{{k}_{j}}}=0$. Then ${m}_{{k}_{j}}\ge 1$ for all sufficiently large j. Since ${m}_{{k}_{j}}$ is the smallest nonnegative integer satisfying (2.2), the integer ${m}_{{k}_{j}}-1$ violates (2.2). Hence, we have

$f\left({x}^{{k}_{j}}-{\gamma }^{{m}_{{k}_{j}}-1}r\left({x}^{{k}_{j}}\right),{y}^{{k}_{j}}\right)>-\sigma {\parallel r\left({x}^{{k}_{j}}\right)\parallel }^{2}.$
(3.13)

Passing to the limit in (3.13) as $j\to \mathrm{\infty }$, and using the continuity of f, ${lim}_{j\to \mathrm{\infty }}{x}^{{k}_{j}}=\overline{x}$, ${lim}_{j\to \mathrm{\infty }}{y}^{{k}_{j}}=\overline{y}$, we have

$f\left(\overline{x},\overline{y}\right)\ge -\sigma {\parallel r\left(\overline{x}\right)\parallel }^{2},$
(3.14)

where $r\left(\overline{x}\right)=\overline{x}-\overline{y}$. From Scheme (2.1)-(2.3), we have

$f\left({x}^{{k}_{j}}-{\gamma }^{{m}_{{k}_{j}}}r\left({x}^{{k}_{j}}\right),{y}^{{k}_{j}}\right)\le -\sigma {\parallel r\left({x}^{{k}_{j}}\right)\parallel }^{2}.$

Since f is continuous, passing to the limit as $j\to \mathrm{\infty }$, we obtain

$f\left(\overline{x},\overline{y}\right)\le -\sigma {\parallel r\left(\overline{x}\right)\parallel }^{2}.$

Combining this with (3.14), we obtain $f\left(\overline{x},\overline{y}\right)=-\sigma {\parallel r\left(\overline{x}\right)\parallel }^{2}$. On the other hand, passing to the limit in (2.1) shows that $\overline{y}$ solves $min\left\{f\left(\overline{x},y\right)+\frac{\beta }{2}{\parallel y-\overline{x}\parallel }^{2}\mid y\in C\right\}$, so, as in (3.3),

$f\left(\overline{x},\overline{y}\right)\le -\frac{\beta }{2}{\parallel r\left(\overline{x}\right)\parallel }^{2}.$

Hence $-\sigma {\parallel r\left(\overline{x}\right)\parallel }^{2}\le -\frac{\beta }{2}{\parallel r\left(\overline{x}\right)\parallel }^{2}$, and since $\sigma <\frac{\beta }{2}$, this implies $r\left(\overline{x}\right)=0$, and hence $\overline{x}\in S$. Letting ${x}^{\ast }=\overline{x}$ and repeating the arguments of Case 1, we conclude that the whole sequence $\left\{{x}^{k}\right\}$ converges to $\overline{x}\in S$. This completes the proof. □

Corollary 3.5 Under assumptions of Theorem  3.4, the sequence $\left\{{x}^{k}\right\}$ converges to ${x}^{\ast }$, where

${x}^{\ast }=\underset{k\to \mathrm{\infty }}{lim}{Pr}_{S}\left({x}^{k}\right).$

Proof Since f is pseudomonotone and $f\left(x,\cdot \right)$ is convex, the solution set S is closed and convex. By Theorem 3.4, the sequence $\left\{{x}^{k}\right\}$ converges to a solution ${x}^{\ast }\in S$. Set ${z}^{k}:={Pr}_{S}\left({x}^{k}\right)$. By the definition of ${Pr}_{S}\left(\cdot \right)$, we have

$〈{z}^{k}-{x}^{k},{z}^{k}-x〉\le 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in S.$
(3.15)

It follows from Theorem 3.4 that

$\parallel {x}^{k+1}-{x}^{\ast }\parallel \le \parallel {x}^{k}-{x}^{\ast }\parallel \phantom{\rule{1em}{0ex}}\mathrm{\forall }k\ge 0,{x}^{\ast }\in S.$

Then, by Lemma 3.2, the sequence $\left\{{z}^{k}\right\}=\left\{{Pr}_{S}\left({x}^{k}\right)\right\}$ converges strongly to some point ${x}_{1}\in S$:

$\underset{k\to \mathrm{\infty }}{lim}{Pr}_{S}\left({x}^{k}\right)={x}_{1}.$
(3.16)

Passing to the limit in (3.15) and combining with (3.16), we obtain

$〈{x}_{1}-{x}^{\ast },{x}_{1}-x〉\le 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in S.$

In particular, taking $x={x}^{\ast }$ yields ${\parallel {x}_{1}-{x}^{\ast }\parallel }^{2}\le 0$. This means that ${x}^{\ast }={x}_{1}$ and

${x}^{\ast }=\underset{k\to \mathrm{\infty }}{lim}{Pr}_{S}\left({x}^{k}\right).$

□

## 4 Illustrative examples and numerical results

As an example of equilibrium problems $EP\left(f,C\right)$, we consider the Cournot-Nash oligopolistic market equilibrium model (see [21, 25, 26]). In this model, it is assumed that there are n firms producing a common homogeneous commodity and that the price ${p}_{i}$ of firm i depends on the total quantity ${\sigma }_{x}={\sum }_{i=1}^{n}{x}_{i}$ of the commodity. Let ${h}_{i}\left({x}_{i}\right)$ denote the cost of firm i when its production level is ${x}_{i}$. Suppose that the profit of firm i is given by

${f}_{i}\left({x}_{1},\dots ,{x}_{n}\right):={x}_{i}{p}_{i}\left({\sigma }_{x}\right)-{h}_{i}\left({x}_{i}\right),\phantom{\rule{1em}{0ex}}i=1,\dots ,n,$
(4.1)

where the cost ${h}_{i}$ of firm i is assumed to depend only on its own production level.

Let ${C}_{i}\subset {\mathbb{R}}_{+}:=\left\{x\in \mathbb{R}\mid x\ge 0\right\}$ ($i=1,\dots ,n$) denote the strategy set of the firm i. Each firm seeks to maximize its own profit by choosing the corresponding production level under the presumption that the production of other firms is parametric input. In this context, a Nash equilibrium is a production pattern in which no firm can increase its profit by changing its controlled variables. Thus, under this equilibrium concept, each firm determines its best response given other firms’ actions. Mathematically, a point ${x}^{\ast }=\left({x}_{1}^{\ast },\dots ,{x}_{n}^{\ast }\right)\in C:={C}_{1}×\cdots ×{C}_{n}$ is said to be a Nash equilibrium point if

${f}_{i}\left({x}_{1}^{\ast },\dots ,{x}_{i-1}^{\ast },{y}_{i},{x}_{i+1}^{\ast },\dots ,{x}_{n}^{\ast }\right)\le {f}_{i}\left({x}_{1}^{\ast },\dots ,{x}_{n}^{\ast }\right)\phantom{\rule{1em}{0ex}}\mathrm{\forall }{y}_{i}\in {C}_{i},\mathrm{\forall }i=1,\dots ,n.$
(4.2)

When ${h}_{i}$ is affine, this market problem can be formulated as a special Nash equilibrium problem in the n-person noncooperative game theory.
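To make (4.2) concrete, consider identical firms with affine price $p(\sigma )={\alpha }_{0}-\chi \sigma$ and linear cost ${h}_{i}(t)=\mu t$ (illustrative data, not taken from the paper). Each profit (4.1) is then concave quadratic in the firm's own output, and the symmetric interior equilibrium is ${x}_{i}^{\ast }=({\alpha }_{0}-\mu )/(\chi (n+1))$; the sketch below checks (4.2) by sampling unilateral deviations.

```python
import numpy as np

# Nash condition (4.2) for an affine Cournot model with identical firms:
#   p(s) = alpha0 - chi*s,  h_i(t) = mu*t.
# The interior symmetric equilibrium is x_i* = (alpha0 - mu)/(chi*(n + 1)).
# All numerical values are illustrative assumptions.

def profit(i, x, alpha0, chi, mu):
    """f_i of (4.1): own revenue minus own cost."""
    return x[i] * (alpha0 - chi * x.sum()) - mu * x[i]

n, alpha0, chi, mu = 3, 10.0, 1.0, 2.0
x_star = np.full(n, (alpha0 - mu) / (chi * (n + 1)))   # each firm produces 2
```

No sampled unilateral deviation improves any firm's profit at `x_star`, which is exactly condition (4.2).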

Set

$\varphi \left(x,y\right):=-\sum _{i=1}^{n}{f}_{i}\left({x}_{1},\dots ,{x}_{i-1},{y}_{i},{x}_{i+1},\dots ,{x}_{n}\right)$
(4.3)

and

$f\left(x,y\right):=\varphi \left(x,y\right)-\varphi \left(x,x\right).$
(4.4)

Then it has been proved in  that the problem of finding an equilibrium point of this model can be formulated as the following equilibrium problem in the sense of Blum and Oettli: find ${x}^{\ast }\in C$ such that $f\left({x}^{\ast },y\right)\ge 0$ for all $y\in C$, i.e., problem $EP\left(f,C\right)$.

In classical Cournot-Nash models (see ), the price and cost functions for each firm are assumed to be affine, of the forms ${p}_{i}\left(\sigma \right)={\alpha }_{0}-\chi \sigma$ with ${\alpha }_{0}\ge 0$, $\chi >0$, and ${h}_{i}\left({x}_{i}\right)={\mu }_{i}{x}_{i}+{\xi }_{i}$ with ${\mu }_{i},{\xi }_{i}\ge 0$ ($i=1,\dots ,n$).

Combining this with (4.1), (4.2), (4.3) and (4.4), we obtain that

$f\left(x,y\right)=〈Ax+\chi \left(y+x\right)+\mu -\alpha ,y-x〉,$
(4.5)

where

$A={\left(\begin{array}{ccccc}0& \chi & \chi & \cdots & \chi \\ \chi & 0& \chi & \cdots & \chi \\ \chi & \chi & 0& \cdots & \chi \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \chi & \chi & \chi & \cdots & 0\end{array}\right)}_{n×n},\phantom{\rule{2em}{0ex}}\alpha ={\left({\alpha }_{0},\dots ,{\alpha }_{0}\right)}^{T},\phantom{\rule{2em}{0ex}}\mu ={\left({\mu }_{1},\dots ,{\mu }_{n}\right)}^{T}.$

It follows from Definition 2.1 that the following result holds.

Proposition 4.1 If the parameter μ satisfies ${\mu }_{i}\ge {\alpha }_{0}$ for all $i=1,\dots ,n$, then the function f defined by (4.5) is pseudomonotone on C, but it is not necessarily monotone on C.
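As a numerical sanity check, the following Python sketch (the paper's experiments use Matlab; the sample values of χ, ${\alpha }_{0}$, μ below are arbitrary) verifies that $f\left(x,y\right)=\varphi \left(x,y\right)-\varphi \left(x,x\right)$ from (4.3)-(4.4) reduces to the closed form $〈Ax+\chi \left(y+x\right)+\mu -\alpha ,y-x〉$ under the affine price $p\left(\sigma \right)={\alpha }_{0}-\chi \sigma$ and cost ${h}_{i}\left(t\right)={\mu }_{i}t+{\xi }_{i}$ (the fixed costs ${\xi }_{i}$ cancel in f), and exhibits a pair of points showing that f is not monotone:

```python
import numpy as np

rng = np.random.default_rng(0)
n, chi, alpha0 = 5, 1.5, 0.5
mu = alpha0 + rng.random(n)          # mu_i >= alpha_0, as in Proposition 4.1
xi = rng.random(n)                   # fixed costs; they cancel in f
A = chi * (np.ones((n, n)) - np.eye(n))   # the matrix A of (4.5)

def phi(x, y):
    # phi(x,y) = -sum_i f_i(x_1,..,y_i,..,x_n), cf. (4.3), with
    # p(s) = alpha0 - chi*s and h_i(t) = mu_i*t + xi_i
    total = 0.0
    for i in range(n):
        sigma = x.sum() - x[i] + y[i]            # total output with y_i swapped in
        total -= y[i] * (alpha0 - chi * sigma) - (mu[i] * y[i] + xi[i])
    return total

def f_closed(x, y):
    # the expanded form (4.5): <Ax + chi*(y+x) + mu - alpha, y - x>
    return (A @ x + chi * (y + x) + mu - alpha0) @ (y - x)

x, y = rng.random(n), rng.random(n)
assert abs((phi(x, y) - phi(x, x)) - f_closed(x, y)) < 1e-8

# f is not monotone: for x - y = (1,-1,0,0,0) one gets f(x,y) + f(y,x) = 2*chi > 0
x0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y0 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
assert f_closed(x0, y0) + f_closed(y0, x0) > 0
```

The last assertion works because $f\left(x,y\right)+f\left(y,x\right)=-{\left(x-y\right)}^{T}A\left(x-y\right)$, and the quadratic form of A is indefinite.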

Now we consider a generalized form of the bifunction defined by the above Cournot-Nash equilibrium model. Let C be a polyhedral convex set given by

$C:=\left\{x\in {\mathbb{R}}^{n}\mid Dx\le b,0\le {x}_{i},i=1,\dots ,n\right\}\ne \mathrm{\varnothing },$

where $D\in {\mathbb{R}}^{m×n}$, $b\in {\mathbb{R}}^{m}$. The equilibrium bifunction $f:C×C\to \mathbb{R}\cup \left\{+\mathrm{\infty }\right\}$ is of the form

$f\left(x,y\right)=〈F\left(x,y\right),y-x〉,$
(4.6)

where $F:C×C\to {\mathbb{R}}_{+}^{n}:=\left\{x\in {\mathbb{R}}^{n}\mid {x}_{i}\ge 0\phantom{\rule{0.25em}{0ex}}\left(i=1,\dots ,n\right)\right\}$ and $f\left(x,\cdot \right)$ is convex and continuous on C for each fixed $x\in C$. The function defined by (4.6) is also a generalized form of the bifunction arising from the Cournot-Nash equilibrium model considered in . Using Definition 2.1, it is easy to verify the following property of f.

Proposition 4.2 If there exists a bifunction $\theta :C×C\to {\mathbb{R}}_{+}:=\left\{t\in \mathbb{R}\mid t\ge 0\right\}$ such that $F\left(x,y\right)=\theta \left(x,y\right)F\left(y,x\right)$ for all $x,y\in C$, then the function f defined by (4.6) is pseudomonotone, but it is not necessarily monotone on $C×C$.
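Proposition 4.2 can be illustrated with a toy choice of F (a hypothetical example, not from the paper): $F\left(x,y\right):={e}^{〈c,x〉}u$ with a fixed $u\ge 0$, for which $\theta \left(x,y\right)={e}^{〈c,x-y〉}>0$. A short Python sketch checking the pseudomonotonicity implication $f\left(x,y\right)\ge 0⇒f\left(y,x\right)\le 0$ on random samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
u = rng.random(n)             # fixed nonnegative vector, so F maps into R^n_+
c = rng.standard_normal(n)

def F(x, y):
    # F(x,y) = exp(<c,x>) * u satisfies F(x,y) = theta(x,y) * F(y,x)
    # with theta(x,y) = exp(<c, x - y>) > 0
    return np.exp(c @ x) * u

def f(x, y):
    # the bifunction (4.6): f(x,y) = <F(x,y), y - x>
    return F(x, y) @ (y - x)

for _ in range(1000):
    x, y = rng.random(n), rng.random(n)
    if f(x, y) >= 0:
        assert f(y, x) <= 1e-12    # pseudomonotonicity holds on every sample
```

Here the implication is immediate: $f\left(x,y\right)\ge 0$ forces $〈u,y-x〉\ge 0$, hence $f\left(y,x\right)={e}^{〈c,y〉}〈u,x-y〉\le 0$.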

To illustrate our scheme, we consider two academic numerical test problems for the bifunction $f\left(x,y\right)$.

Case 1. $n=5$, $f\left(x,y\right):=〈M\left(x+y\right)+〈x,d〉B\left(x+y\right)+q,y-x〉$, where

$\begin{array}{c}M:=\left(\begin{array}{ccccc}1& 2& 3& 4& 5\\ 1& 0& 0& 5& 6\\ 0& 0& 9& 2& 3\\ 3& 3& 3& 4& 1\\ 3& 4& 5& 1& 2\end{array}\right),\phantom{\rule{2em}{0ex}}B:=\left(\begin{array}{ccccc}1& 2& 3& 4& 1\\ 9& 0& 2& 1& 3\\ 3& 4& 5& 2& 4\\ 0& 1& 2& 3& 7\\ 0& 1& 2& 3& 4\end{array}\right),\hfill \\ q:=\left(\begin{array}{c}1\\ 3\\ 0\\ 1\\ 5\end{array}\right),\phantom{\rule{2em}{0ex}}d:=\left(\begin{array}{c}3\\ 2\\ 0\\ 4\\ 9\end{array}\right)\hfill \end{array}$

and

$C:=\left\{x\in {\mathbb{R}}_{+}^{5}\phantom{\rule{0.25em}{0ex}}\left|\phantom{\rule{0.25em}{0ex}}\begin{array}{l}4\le {x}_{1}+2{x}_{2}+{x}_{3}+3{x}_{5}\le 12,\\ 7\le {\sum }_{i=1}^{5}{x}_{i}\le 15,\\ 6\le {x}_{2}+{x}_{3}+2{x}_{4}\le 13,\\ 3\le {x}_{2}+{x}_{3}\le 5\end{array}\right.\right\}.$

In this case, the bifunction f defined in (4.6) is pseudomonotone, continuous and differentiable on C. The subproblem to be solved at Step 1 is the strongly convex quadratic program

${y}^{k}:=argmin\left\{〈M\left({x}^{k}+y\right)+〈{x}^{k},d〉B\left({x}^{k}+y\right)+q,y-{x}^{k}〉+\frac{1}{\beta }{\parallel y-{x}^{k}\parallel }^{2}\mid y\in C\right\}.$
(4.7)
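As a sanity check on the subproblem objective (a Python sketch with an arbitrary sample point ${x}^{k}$; the regularized objective is written in the decision variable y), the objective vanishes at $y={x}^{k}$ and its gradient, $F\left({x}^{k},y\right)+{\left(M+〈{x}^{k},d〉B\right)}^{T}\left(y-{x}^{k}\right)+\frac{2}{\beta }\left(y-{x}^{k}\right)$, matches finite differences:

```python
import numpy as np

rng = np.random.default_rng(2)
M = np.array([[1,2,3,4,5],[1,0,0,5,6],[0,0,9,2,3],[3,3,3,4,1],[3,4,5,1,2]], float)
B = np.array([[1,2,3,4,1],[9,0,2,1,3],[3,4,5,2,4],[0,1,2,3,7],[0,1,2,3,4]], float)
q = np.array([1, 3, 0, 1, 5], float)
d = np.array([3, 2, 0, 4, 9], float)
beta = 2.0
xk = rng.random(5)                        # arbitrary current iterate

P = M + (xk @ d) * B                      # so F(xk, y) = P @ (xk + y) + q

def g(y):
    # objective of the Step 1 subproblem: f(xk, y) + (1/beta) * ||y - xk||^2
    return (P @ (xk + y) + q) @ (y - xk) + (1.0 / beta) * np.sum((y - xk) ** 2)

def grad_g(y):
    # gradient in y: P^T (y - xk) + P (xk + y) + q + (2/beta)(y - xk)
    return P.T @ (y - xk) + P @ (xk + y) + q + (2.0 / beta) * (y - xk)

assert abs(g(xk)) < 1e-9                  # objective is zero at y = xk
y = rng.random(5)
num = np.array([(g(y + 1e-6 * e) - g(y - 1e-6 * e)) / 2e-6 for e in np.eye(5)])
assert np.allclose(num, grad_g(y), atol=1e-3)
```
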

In Step 2, ${\partial }_{2}f\left({z}^{k},{z}^{k}\right)$ is given by

${\partial }_{2}f\left(x,y\right)=\left\{F\left(x,y\right)+{\left(M+〈x,d〉B\right)}^{T}\left(y-x\right)\right\}\phantom{\rule{1em}{0ex}}\mathrm{\forall }\left(x,y\right)\in C×C.$

Thus, ${\partial }_{2}f\left({z}^{k},{z}^{k}\right)=\left\{F\left({z}^{k},{z}^{k}\right)\right\}$ and the sequence $\left\{{w}^{k}\right\}$ is uniformly bounded. Note that ${x}^{k+1}:={Pr}_{C\cap {H}_{k}}\left({x}^{k}\right)$ is the unique solution to

$min\left\{{\parallel x-{x}^{k}\parallel }^{2}\mid x\in C\cap {H}_{k}\right\}.$
(4.8)
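The half-space ${H}_{k}$ is built in the algorithm section, which is not reproduced here; assuming it takes the generic cutting-hyperplane form ${H}_{k}=\left\{x\mid 〈{w}^{k},x-{z}^{k}〉\le 0\right\}$, the projection onto ${H}_{k}$ alone has a closed form, which the following Python sketch implements and checks (the full step (4.8) additionally requires intersecting with C, e.g. via a QP solver):

```python
import numpy as np

def project_halfspace(x, w, z):
    """Projection of x onto H = {v : <w, v - z> <= 0} (assumed form of H_k)."""
    viol = w @ (x - z)
    if viol <= 0:
        return x.copy()                       # x already lies in H
    return x - (viol / (w @ w)) * w           # shift along w by the violation

rng = np.random.default_rng(3)
x, w, z = rng.random(5), rng.standard_normal(5), rng.random(5)
p = project_halfspace(x, w, z)
assert w @ (p - z) <= 1e-9                    # feasibility: p lies in H
# optimality: p is no farther from x than any other sampled point of H
for _ in range(200):
    v = rng.standard_normal(5) + z
    if w @ (v - z) <= 0:
        assert np.linalg.norm(p - x) <= np.linalg.norm(v - x) + 1e-10
```
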

Subproblems (4.7) and (4.8) can then be solved efficiently, for example, by the Matlab optimization toolbox. Lemma 3.3 shows that if $r\left({x}^{k}\right)=0$, then ${x}^{k}$ is a solution to problem $EP\left(f,C\right)$. Hence, for $ϵ>0$, we call ${x}^{k}$ an ϵ-solution to $EP\left(f,C\right)$ if $\parallel r\left({x}^{k}\right)\parallel \le ϵ$. Taking $ϵ={10}^{-6}$, $\gamma =0.7$, $\beta =2$ and $\sigma =1$, we obtained the iterates reported in Table 1.

The approximate solution obtained after 14 iterations is

${x}^{14}={\left(4.5397,0.0529,2.8942,2.0265,1.4868\right)}^{T}.$

In Table 2, we compare Scheme (2.1)-(2.3) with Algorithm 2a in . For this comparison, the data are changed as follows: ${M}_{11}$, ${B}_{11}$ and the first component of the vector q are chosen randomly in $\left(0,1\right)$, and the first component of the vector d is chosen randomly in $\left(0,2\right)$. In both cases, we run Algorithm 2a with the same equilibrium bifunction, the regularization function $G\left(x\right):=ln\left(1+2x\right)$ and parameters $\theta =0.7$, $\rho =2$, $\alpha =0.45$.

Case 2. We apply Scheme (2.1)-(2.3) to the same equilibrium problem and data as in Case 1, except that the bifunction is now $F\left(x,y\right):=M\left(x+y\right)+D\left(x+y\right)+q$, where the operator D is defined componentwise by ${D}_{j}\left(x\right)={d}_{j}arctan\left({x}_{j}\right)$ $\mathrm{\forall }j=1,\dots ,5$ with $d={\left(1,3,0,4,1\right)}^{T}$. This example is due to Bnouhachem (see ). Under these assumptions, it can be proved that f is continuous and pseudomonotone on C.

We also note that in Step 1 the solution ${y}^{k}$ can be written as

${y}^{k}:=argmin\left\{〈M\left({x}^{k}+y\right)+D\left({x}^{k}+y\right)+q,y-{x}^{k}〉+\frac{1}{\beta }{\parallel y-{x}^{k}\parallel }^{2}\mid y\in C\right\},$

where

$D\left({x}^{k}+y\right)={\left({d}_{1}arctan\left({x}_{1}^{k}+{y}_{1}\right),\dots ,{d}_{5}arctan\left({x}_{5}^{k}+{y}_{5}\right)\right)}^{T},$

and in Step 2,

$\begin{array}{rl}{\partial }_{2}f\left({z}^{k},{z}^{k}\right)& =\left\{F\left({z}^{k},{z}^{k}\right)\right\}\\ =\left\{2M{z}^{k}+{\left({d}_{1}arctan\left(2{z}_{1}^{k}\right),\dots ,{d}_{5}arctan\left(2{z}_{5}^{k}\right)\right)}^{T}+q\right\}.\end{array}$

Thus, the sequence $\left\{{w}^{k}\right\}$ is uniformly bounded. For the comparison, the data are changed as follows: in $f\left(x,y\right)=〈M\left(x+y\right)+D\left(x+y\right)+q,y-x〉$ with $D\left(x\right)={\left({d}_{1}arctan\left({x}_{1}\right),\dots ,{d}_{5}arctan\left({x}_{5}\right)\right)}^{T}$, the components of d are now chosen randomly in $\left(0,1\right)$. Choosing $\gamma =0.4$, $\beta =2$, $\sigma =0.5$ and $\overline{x}={\left(1,2,3,1,1\right)}^{T}\in C$, we compare Scheme (2.1)-(2.3) with Algorithm 2.1 in ; the results are presented in Table 3.
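The Case 2 operators are cheap to evaluate. The following Python sketch (the sample point z is arbitrary; the experiments in the paper use Matlab) computes $F\left(z,z\right)$, the single element of ${\partial }_{2}f\left({z}^{k},{z}^{k}\right)$, and checks it against the explicit formula $2Mz+{\left({d}_{1}arctan\left(2{z}_{1}\right),\dots ,{d}_{5}arctan\left(2{z}_{5}\right)\right)}^{T}+q$:

```python
import numpy as np

M = np.array([[1,2,3,4,5],[1,0,0,5,6],[0,0,9,2,3],[3,3,3,4,1],[3,4,5,1,2]], float)
q = np.array([1, 3, 0, 1, 5], float)
d = np.array([1, 3, 0, 4, 1], float)      # the fixed d of Case 2

def D(x):
    # diagonal arctan operator: D_j(x) = d_j * arctan(x_j)
    return d * np.arctan(x)

def F(x, y):
    # Case 2 operator: F(x,y) = M(x+y) + D(x+y) + q
    return M @ (x + y) + D(x + y) + q

z = np.array([0.5, 1.0, 0.2, 0.8, 0.3])   # arbitrary sample point
w = F(z, z)                               # the single element of d2 f(z, z)
assert np.allclose(w, 2 * M @ z + d * np.arctan(2 * z) + q)
```
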

We implemented Scheme (2.1)-(2.3) and Algorithm 2a  in Matlab R2008a on a desktop PC with an Intel(R) Core(TM)2 Duo CPU T5750 @ 2.00 GHz and 2 GB RAM.

## References

1. Blum E, Oettli W: From optimization and variational inequality to equilibrium problems. Math. Stud. 1994, 63: 127–149.

2. Daniele P, Giannessi F, Maugeri A: Equilibrium Problems and Variational Models. Kluwer Academic, Dordrecht; 2003.

3. Hai NX, Khanh PQ: Existence of solutions to general quasi-equilibrium problems and applications. J. Optim. Theory Appl. 2007, 133: 317–327. 10.1007/s10957-007-9170-8

4. Konnov IV: Application of the proximal point method to nonmonotone equilibrium problems. J. Optim. Theory Appl. 2003, 119: 317–333.

5. Anh PN: A logarithmic quadratic regularization method for solving pseudomonotone equilibrium problems. Acta Math. Vietnam. 2009, 34: 183–200.

6. Anh PN, Kim JK: Outer approximation algorithms for pseudomonotone equilibrium problems. Comput. Math. Appl. 2011, 61: 2588–2595. 10.1016/j.camwa.2011.02.052

7. Anh LQ, Khanh PQ: Existence conditions in symmetric multivalued vector quasi-equilibrium problems. Control Cybern. 2007, 36: 519–530.

8. Mastroeni G: Gap function for equilibrium problems. J. Glob. Optim. 2004, 27: 411–426.

9. Moudafi A: Proximal point algorithm extended to equilibrium problem. J. Nat. Geom. 1999, 15: 91–100.

10. Noor MA: Auxiliary principle technique for equilibrium problems. J. Optim. Theory Appl. 2004, 122: 371–386.

11. Quoc TD, Anh PN, Muu LD: Dual extragradient algorithms to equilibrium problems. J. Glob. Optim. 2012, 52: 139–159. 10.1007/s10898-011-9693-2

12. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York; 2003.

13. Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin; 2000.

14. Tran DQ, Dung ML, Nguyen VH: Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57: 749–776. 10.1080/02331930601122876

15. Anh PN: An LQP regularization method for equilibrium problems on polyhedral. Vietnam J. Math. 2008, 36: 209–228.

16. Auslender A, Teboulle M, Ben-Tiba S: A logarithmic-quadratic proximal method for variational inequalities. Comput. Optim. Appl. 1999, 12: 31–40. 10.1023/A:1008607511915

17. Bnouhachem A: An LQP method for pseudomonotone variational inequalities. J. Glob. Optim. 2006, 36: 351–363. 10.1007/s10898-006-9013-4

18. Solodov MV, Svaiter BF: A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37: 765–776. 10.1137/S0363012997317475

19. Anh PN: Strong convergence theorems for nonexpansive mappings and Ky Fan inequalities. J. Optim. Theory Appl. 2012. doi:10.1007/s10957-012-0005-x

20. Anh PN, Kuno T: A cutting hyperplane method for generalized monotone nonlipschitzian multivalued variational inequalities. In Modeling, Simulation and Optimization of Complex Processes. Edited by: Bock HG, Phu HX, Rannacher R, Schloder JP. Springer, Berlin; 2012.

21. Muu LD, Nguyen VH, Quy NV: On Nash-Cournot oligopolistic market equilibrium models with concave cost functions. J. Glob. Optim. 2008, 41: 351–364. 10.1007/s10898-007-9243-0

22. Anh PN, Muu LD, Strodiot JJ: Generalized projection method for non-Lipschitz multivalued monotone variational inequalities. Acta Math. Vietnam. 2009, 34: 67–79.

23. Takahashi S, Toyoda M: Weakly convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

24. Anh PN: A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 2012. doi:10.1080/02331934.2011.607497

25. Anh PN, Muu LD, Nguyen VH, Strodiot JJ: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. J. Optim. Theory Appl. 2005, 124: 285–306. 10.1007/s10957-004-0926-0

26. Dafermos S: Exchange price equilibria and variational inequalities. Math. Program. 1990, 46: 391–402. 10.1007/BF01585753

## Acknowledgements

We are very grateful to the anonymous referees for their helpful and constructive comments, which improved the paper. This work was completed while the first author was at Kyungnam University under the NRF Postdoctoral Fellowship for Foreign Researchers. The second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A1A2042138).

## Author information


### Corresponding author

Correspondence to Jong K Kim.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

The main idea of this paper was proposed by PNA. PNA, JKK and NDH prepared the manuscript initially and performed all the steps of the proof in this research. All authors read and approved the final manuscript.

## Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Anh, P.N., Kim, J.K. & Hien, N.D. A cutting hyperplane method for solving pseudomonotone non-Lipschitzian equilibrium problems. J Inequal Appl 2012, 288 (2012). https://doi.org/10.1186/1029-242X-2012-288 