# An interior approximal method for solving pseudomonotone equilibrium problems

## Abstract

In this paper, we present an interior approximal method for solving equilibrium problems with pseudomonotone bifunctions, without Lipschitz-type continuity, on polyhedra. The method combines a special interior proximal function, which replaces the usual quadratic function, with Armijo-type linesearch techniques and cutting hyperplane methods. Convergence properties of the method are established; in particular, global convergence is proved under mild assumptions. Finally, we present some preliminary computational results for Cournot-Nash oligopolistic market equilibrium models.

MSC: 65K10, 90C25.

## 1 Introduction

Let C be a nonempty closed convex subset of ${\mathcal{R}}^{n}$ and let $f:C\times C\to\mathcal{R}$ be a bifunction satisfying $f(x,x)=0$ for all $x\in C$. We consider equilibrium problems in the sense of Blum and Oettli [1] (shortly $EP(f,C)$), which are to find $x^{*}\in C$ such that

$f(x^{*},y)\ge 0\quad \forall y\in C.$

Let $Sol(f,C)$ denote the set of solutions of Problem $EP(f,C)$. When

$f(x,y)=\langle F(x),y-x\rangle\quad \forall x,y\in C,$

where $F:C\to{\mathcal{R}}^{n}$, Problem $EP(f,C)$ reduces to the variational inequality: find $x^{*}\in C$ such that

$\langle F(x^{*}),y-x^{*}\rangle\ge 0\quad \forall y\in C.$

In this article, for solving Problem $EP(f,C)$, we assume that the bifunction f and the set C satisfy the following conditions:

A1. $C=\{x\in{\mathcal{R}}^{n}:Ax\le b\}$, where A is a $p\times n$ matrix of full column rank ($rank\,A=n$), $b\in{\mathcal{R}}^{p}$, and $int\,C=\{x:Ax<b\}$ is nonempty.

A2. For each $x\in C$, the function $f(x,\cdot)$ is convex and subdifferentiable on C.

A3. f is pseudomonotone on $C\times C$, i.e., for each $x,y\in C$, it holds that

$f(x,y)\ge 0\quad\text{implies}\quad f(y,x)\le 0.$

A4. f is continuous on $C\times C$.

A5. The solution set $Sol(f,C)$ is nonempty.
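
As a small numerical illustration of A3 (our own sketch, not part of the original assumptions), pseudomonotonicity is strictly weaker than monotonicity: for the VI-type bifunction $f(x,y)=\langle F(x),y-x\rangle$ with the hypothetical one-dimensional map $F(x)=x/(1+x^{2})$, a positive rescaling of the monotone identity map, f is pseudomonotone but not monotone:

```python
import random

def F(x):
    # positive scalar multiple of the monotone map G(x) = x,
    # so f(x, y) = F(x) * (y - x) is pseudomonotone but not monotone
    return x / (1.0 + x * x)

def f(x, y):
    return F(x) * (y - x)

random.seed(0)
# pseudomonotonicity (A3): f(x, y) >= 0  implies  f(y, x) <= 0
for _ in range(10000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    if f(x, y) >= 0:
        assert f(y, x) <= 1e-12

# monotonicity f(x, y) + f(y, x) <= 0 fails, e.g. at x = 2, y = 3
print(f(2.0, 3.0) + f(3.0, 2.0))  # about 0.1 > 0: f is not monotone
```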

Equilibrium problems appear in many practical settings arising in, for instance, physics, engineering, game theory, transportation, economics, and networks (see [2–5]). In recent years, both their theory and their applications have attracted many researchers (see [1, 6–14]).

Most methods for solving equilibrium problems are derived from a fixed-point formulation of Problem $EP(f,C)$: a point $x^{*}\in C$ is a solution of the problem if and only if $x^{*}$ is a solution of the following problem:

$\min\{f(x^{*},y):y\in C\}.$

Namely, the sequence $\{x^{k}\}$ is generated by $x^{0}\in C$ and

$x^{k+1}\in\arg\min\{f(x^{k},y):y\in C\}.$

To compute the point $x^{k+1}$ more conveniently, Mastroeni [15] proposed the auxiliary problem principle for solving Problem $EP(f,C)$. This principle is based on the following fixed-point property: $x^{*}\in C$ is a solution of Problem $EP(f,C)$ if and only if $x^{*}$ is a solution of the problem

$\min\{\lambda f(x^{*},y)+g(y)-\langle\nabla g(x^{*}),y\rangle:y\in C\},$
(1.1)

where $\lambda>0$ and $g(\cdot)$ is a strongly convex differentiable function on C. Under the assumptions that f is strongly monotone with constant $\beta>0$ on $C\times C$, i.e.,

$f(x,y)+f(y,x)\le-\beta\|x-y\|^{2}\quad\forall x,y\in C,$

and f is Lipschitz-type continuous with constants $c_{1}>0$, $c_{2}>0$, i.e.,

$f(x,y)+f(y,z)\ge f(x,z)-c_{1}\|x-y\|^{2}-c_{2}\|y-z\|^{2}\quad\forall x,y,z\in C,$

the author showed that the sequence $\{x^{k}\}$ converges globally to a solution of Problem $EP(f,C)$. However, the convergence depends on the three positive parameters $c_{1}$, $c_{2}$, and β, which in some cases are unknown or difficult to estimate.
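
For intuition (a sketch of ours, not part of the original), when $g(y)=\frac{1}{2}\|y\|^{2}$ and $f(x,y)=\langle F(x),y-x\rangle$, the subproblem (1.1) has the closed-form solution $x^{k+1}={Pr}_{C}(x^{k}-\lambda F(x^{k}))$, so the auxiliary problem principle reduces to a projection iteration. A minimal sketch on a box with the hypothetical strongly monotone map $F(x)=x-a$:

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def auxiliary_principle_vi(F, x0, lam, lo, hi, iters=200):
    # x^{k+1} = Pr_C(x^k - lam * F(x^k)): subproblem (1.1)
    # with g(y) = ||y||^2 / 2 solved in closed form
    x = x0.copy()
    for _ in range(iters):
        x = project_box(x - lam * F(x), lo, hi)
    return x

# hypothetical data: F(x) = x - a is strongly monotone, so the
# solution of the variational inequality on C = [0, 1]^2 is Pr_C(a)
a = np.array([2.0, 0.5])
F = lambda x: x - a
x_star = auxiliary_principle_vi(F, np.zeros(2), lam=0.5, lo=0.0, hi=1.0)
print(x_star)  # close to (1.0, 0.5) = Pr_C(a)
```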

Many algorithms for solving optimization problems and variational inequalities are projection algorithms, which employ projections onto the feasible set C, or onto some related set, in order to reach a solution iteratively. In particular, Korpelevich [16] proposed an algorithm for solving variational inequalities in which, at each iteration, two orthogonal projections onto C are computed to obtain the next iterate $x^{k+1}$: given the current iterate $x^{k}$, calculate

$y^{k}:={Pr}_{C}(x^{k}-\lambda F(x^{k}))$

and then

$x^{k+1}:={Pr}_{C}(x^{k}-\lambda F(y^{k})),$

where $\lambda>0$. Recently, Tran et al. [17] extended these projection techniques to Problem $EP(f,C)$ with monotone equilibrium bifunctions, but the bifunction must satisfy a certain Lipschitz-type continuity condition. To avoid this requirement, they proposed linesearch procedures, commonly used for variational inequalities, to obtain projection-type algorithms for solving equilibrium problems.
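
The two-projection step above can be sketched as follows (a toy instance of ours, not from the paper), using the hypothetical monotone but non-symmetric map $F(x)=Ax$ with the skew matrix $A=\left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)$ on the box $[-1,1]^{2}$, whose unique solution is the origin:

```python
import numpy as np

def extragradient(F, x0, lam, proj, iters=500):
    # Korpelevich's method: a prediction step y^k and a correction
    # step x^{k+1}, each using one projection onto C
    x = x0.copy()
    for _ in range(iters):
        y = proj(x - lam * F(x))   # y^k   = Pr_C(x^k - lam F(x^k))
        x = proj(x - lam * F(y))   # x^{k+1} = Pr_C(x^k - lam F(y^k))
    return x

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # monotone (skew-symmetric) map
F = lambda x: A @ x
proj = lambda x: np.clip(x, -1.0, 1.0)    # projection onto [-1, 1]^2

x_star = extragradient(F, np.array([1.0, 1.0]), lam=0.5, proj=proj)
print(np.linalg.norm(x_star))  # near 0, the unique solution
```

For this rotational F a plain projected iteration circles the solution, while the prediction step makes the extragradient map contractive for $\lambda L<1$.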

It is well known that the interior approximal technique is a powerful tool for analyzing and solving optimization problems. This technique has been used extensively by many authors for solving variational inequalities and equilibrium problems on a polyhedral convex set (see [18–21]), where a Bregman-type interior approximal function d replaces the function g in (1.1):

$d(x,y)=\frac{1}{2}\|x-y\|^{2}+\mu\sum_{i=1}^{n}y_{i}^{2}\left(\frac{x_{i}}{y_{i}}\log\frac{x_{i}}{y_{i}}-\frac{x_{i}}{y_{i}}+1\right),$
(1.2)

with $\mu\in(0,1)$ and $x,y\in{\mathcal{R}}_{+}^{n}=\{(x_{1},\dots,x_{n})^{T}\in{\mathcal{R}}^{n}:x_{i}>0\ \forall i=1,\dots,n\}$. Then the interior proximal linesearch extragradient methods can be viewed as combining the function d with Armijo-type linesearch techniques. Convergence of the iterative sequence is established under the weaker assumption that f is pseudomonotone on $C\times C$. However, each iteration k of the Armijo-type linesearch in these algorithms requires the computation of a subgradient in $\partial f(x^{k},\cdot)(y^{k})$, which is not easy in some cases. Moreover, most current algorithms for solving Problem $EP(f,C)$ rely on Lipschitz-type continuity assumptions or on the computation of subgradients of the bifunction f (see [21–25]).

The main purpose of this paper is to give an iterative algorithm for solving pseudomonotone equilibrium problems that requires neither Lipschitz-type continuity of the bifunction nor the computation of subgradients. To summarize our approach: first, we use an interior proximal function d as in [22], which replaces the usual quadratic function in the auxiliary problems. Next, we construct an appropriate half-space and a convex set which separate the current iterate from the solution set, and we combine this construction with an Armijo-type linesearch. The next iterate is then obtained as the projection of the starting point onto the intersection of the feasible set with this convex set and the half-space containing the solution set.

The paper is organized as follows. In Section 2, we recall the auxiliary problem principle for Problem $EP(f,C)$ and propose a new iterative algorithm. Section 3 is devoted to the proof of its global convergence and also shows the relation between the solution set of $EP(f,C)$ and the cluster points of the iterative sequence of the algorithm. Applications to the Cournot-Nash oligopolistic market equilibrium model and numerical results are reported in Section 4, and the last section concludes.

## 2 Proposed algorithm

Let $a_{i}$ denote the i-th row of the matrix A, and define

$l_{i}(x)=b_{i}-\langle a_{i},x\rangle\ (i=1,\dots,p),\qquad D(x,y)=d(l(x),l(y)),$
(2.1)

where the function d is defined by (1.2). Then the gradient ${\nabla}_{1}D(x,y)$ of $D(\cdot,y)$ at x for every $y\in int\,C$ is given by

${\nabla}_{1}D(x,y)=-A^{T}\left(l(x)-l(y)+\mu X_{y}\log\frac{l(x)}{l(y)}\right),$

where $X_{y}=diag(l_{1}(y),\dots,l_{p}(y))$ and $\log\frac{l(x)}{l(y)}=\left(\log\frac{l_{1}(x)}{l_{1}(y)},\dots,\log\frac{l_{p}(x)}{l_{p}(y)}\right)^{T}$.
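
To make the formulas above concrete, the following sketch (our own illustration, with hypothetical data A, b) evaluates d, D and ${\nabla}_{1}D$ and checks the gradient formula against central finite differences; $D(x,y)=d(l(x),l(y))$ with $l(x)=b-Ax$ is assumed as in (2.1):

```python
import numpy as np

mu = 0.5
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])  # hypothetical 3x2 data
b = np.array([2.0, 2.0, 0.5])                          # C = {x : Ax <= b}

def l(x):
    return b - A @ x

def d(u, v):
    # Bregman-type interior approximal function (1.2) on positive vectors
    t = u / v
    return 0.5 * np.sum((u - v) ** 2) + mu * np.sum(v ** 2 * (t * np.log(t) - t + 1))

def D(x, y):
    return d(l(x), l(y))

def grad1_D(x, y):
    # gradient of D(., y) at x: -A^T (l(x) - l(y) + mu * X_y * log(l(x)/l(y)))
    return -A.T @ (l(x) - l(y) + mu * l(y) * np.log(l(x) / l(y)))

# check against central finite differences at an interior pair x, y
x = np.array([0.3, -0.2])
y = np.array([-0.1, 0.4])
h = 1e-6
num = np.array([(D(x + h * e, y) - D(x - h * e, y)) / (2 * h) for e in np.eye(2)])
print(np.max(np.abs(num - grad1_D(x, y))))  # tiny: the formulas agree
```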

It is well known that $x^{*}$ is a solution of the regularized auxiliary problem

$\min\{f(x^{*},y)+cD(y,x^{*}):y\in C\},$

where $c>0$ is a regularization parameter, if and only if $x^{*}$ is a solution of Problem $EP(f,C)$ (see [3]). Motivated by this, we first solve the following strongly convex problem with the interior proximal function D:

$y^{k}=\arg\min\{f(x^{k},y)+\beta D(y,x^{k}):y\in C\},$

for some positive constant β. It is easy to see that with $f(x,y)=\langle F(x),y-x\rangle$, where $F:C\to{\mathcal{R}}^{n}$, and $D(x,y)=\frac{1}{2}\|x-y\|^{2}$, computing $y^{k}$ becomes Step 1 of the extragradient method proposed in [16]. In Lemma 3.2(i), we will show that if $\|y^{k}-x^{k}\|=0$, then $x^{k}$ is a solution of Problem $EP(f,C)$. Otherwise, a computationally inexpensive Armijo-type procedure is used to find a point $z^{k}$ such that the convex set $C_{k}:=\{x\in{\mathcal{R}}^{n}:f(z^{k},x)\le 0\}$ and the half-space $H_{k}:=\{x\in{\mathcal{R}}^{n}:\langle x-x^{k},x^{0}-x^{k}\rangle\le 0\}$ contain the solution set $Sol(f,C)$, while $C_{k}$ strictly separates $x^{k}$ from it. Then we compute the next iterate $x^{k+1}$ by projecting $x^{0}$ onto the intersection of the feasible set C with $C_{k}$ and the half-space $H_{k}$. The algorithm is described in more detail as follows.

Algorithm 2.1 Choose $x^{0}\in C$, $0<\sigma<\frac{\beta}{2\|\bar{A}^{-1}\|^{2}}$ and $\gamma\in(0,1)$ (here $\bar{A}$ denotes an $n\times n$ nonsingular submatrix of A).

Step 1.

Evaluate

$y^{k}=\arg\min\left\{f(x^{k},y)+\frac{\beta}{2}D(y,x^{k}):y\in C\right\},\qquad r(x^{k})=x^{k}-y^{k}.$
(2.2)

If $r(x^{k})=0$, then stop. Otherwise, set $z^{k}=x^{k}-\gamma^{m_{k}}r(x^{k})$, where $m_{k}$ is the smallest nonnegative integer such that

$f(x^{k}-\gamma^{m_{k}}r(x^{k}),y^{k})\le-\sigma\|r(x^{k})\|^{2}.$
(2.3)

Step 2. Evaluate $x^{k+1}={Pr}_{C\cap C_{k}\cap H_{k}}(x^{0})$, where

$\begin{cases}C_{k}=\{x\in{\mathcal{R}}^{n}:f(z^{k},x)\le 0\},\\ H_{k}=\{x\in{\mathcal{R}}^{n}:\langle x-x^{k},x^{0}-x^{k}\rangle\le 0\}.\end{cases}$
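
The linesearch of Step 1 can be sketched on its own (our illustration, assuming f, $x^{k}$, $y^{k}$ are given; the hypothetical bifunction below is $f(x,y)=\langle x,y-x\rangle$):

```python
import numpy as np

def armijo_m(f, x, y, sigma, gamma, m_max=50):
    # smallest nonnegative integer m with
    # f(x - gamma^m * r, y) <= -sigma * ||r||^2, where r = x - y  (rule (2.3))
    r = x - y
    rhs = -sigma * np.dot(r, r)
    for m in range(m_max):
        if f(x - gamma ** m * r, y) <= rhs:
            return m
    raise RuntimeError("linesearch failed within m_max steps")

# hypothetical data: f(x, y) = <x, y - x>
f = lambda x, y: float(np.dot(x, y - x))
xk = np.array([2.0, 0.0])
yk = np.array([0.0, 0.0])
m_k = armijo_m(f, xk, yk, sigma=0.2, gamma=0.5)
print(m_k)  # 1 for this data: m = 0 fails (2.3), m = 1 satisfies it
```

Here $r=(2,0)$, so the rule asks for $f(z,y^{k})\le-0.8$; $m=0$ gives $0$ and fails, while $m=1$ gives $-1$ and is accepted, so $z^{k}=x^{k}-\gamma r(x^{k})=(1,0)$.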

## 3 Convergence results

In the next lemma, we show the existence of the nonnegative integer $m_{k}$ in Algorithm 2.1.

Lemma 3.1 Let $\gamma\in(0,1)$ and $0<\sigma<\frac{\beta}{2\|\bar{A}^{-1}\|^{2}}$. If $\|r(x^{k})\|>0$, then there exists a smallest nonnegative integer $m_{k}$ satisfying (2.3).

Proof Assume, on the contrary, that (2.3) is not satisfied for any nonnegative integer i, i.e.,

$f(x^{k}-\gamma^{i}r(x^{k}),y^{k})+\sigma\|r(x^{k})\|^{2}>0.$

Letting $i\to\infty$ and using the continuity of f, we obtain

$f(x^{k},y^{k})+\sigma\|r(x^{k})\|^{2}\ge 0.$
(3.1)

On the other hand, for each $t>0$ we have $1-\frac{1}{t}\le\log t$. Applying this with $t=\frac{l_{i}(y^{k})}{l_{i}(x^{k})}>0$ and multiplying by t, we obtain, for each $i=1,\dots,p$,

$\frac{l_{i}(y^{k})}{l_{i}(x^{k})}-1\le\frac{l_{i}(y^{k})}{l_{i}(x^{k})}\log\frac{l_{i}(y^{k})}{l_{i}(x^{k})}.$

It then follows from $rank\,\bar{A}=n$ that

$\|x-y\|=\|\bar{A}^{-1}\bar{A}(x-y)\|\le\|\bar{A}^{-1}\|\,\|\bar{A}(x-y)\|$

and

$\begin{aligned}D(y^{k},x^{k})&=\frac{1}{2}\|l(x^{k})-l(y^{k})\|^{2}+\mu\sum_{i=1}^{p}l_{i}^{2}(x^{k})\left(\frac{l_{i}(y^{k})}{l_{i}(x^{k})}\log\frac{l_{i}(y^{k})}{l_{i}(x^{k})}-\frac{l_{i}(y^{k})}{l_{i}(x^{k})}+1\right)\\ &\ge\frac{1}{2}\|l(x^{k})-l(y^{k})\|^{2}\\ &=\frac{1}{2}\|A(x^{k}-y^{k})\|^{2}\\ &\ge\frac{1}{2}\|\bar{A}(x^{k}-y^{k})\|^{2}\\ &\ge\frac{1}{2\|\bar{A}^{-1}\|^{2}}\|x^{k}-y^{k}\|^{2}\\ &=\frac{1}{2\|\bar{A}^{-1}\|^{2}}\|r(x^{k})\|^{2}.\end{aligned}$
(3.2)

Since $y^{k}$ is the solution of the strongly convex program (2.2), we have

$f(x^{k},y)+\beta D(y,x^{k})\ge f(x^{k},y^{k})+\beta D(y^{k},x^{k})\quad\forall y\in C.$

Substituting $y=x^{k}\in C$ and using $f(x^{k},x^{k})=0$ and $D(x^{k},x^{k})=0$, we get

$f(x^{k},y^{k})+\beta D(y^{k},x^{k})\le 0.$
(3.3)

Combining (3.2) with (3.3), we obtain

$f(x^{k},y^{k})+\frac{\beta}{2\|\bar{A}^{-1}\|^{2}}\|r(x^{k})\|^{2}\le 0.$
(3.4)

Then inequalities (3.1) and (3.4) imply that

$-\sigma\|r(x^{k})\|^{2}\le f(x^{k},y^{k})\le-\frac{\beta}{2\|\bar{A}^{-1}\|^{2}}\|r(x^{k})\|^{2}.$

Hence, either $r(x^{k})=0$ or $\sigma\ge\frac{\beta}{2\|\bar{A}^{-1}\|^{2}}$. The first case contradicts the assumption $\|r(x^{k})\|>0$, while the second contradicts the fact that $\sigma<\frac{\beta}{2\|\bar{A}^{-1}\|^{2}}$. The proof is complete. □

Let us discuss the global convergence of Algorithm 2.1.

Lemma 3.2 Let $\{x^{k}\}$ be the sequence generated by Algorithm 2.1 and suppose that $Sol(f,C)\ne\emptyset$. Then the following statements hold.

(i) If $r(x^{k})=0$, then $x^{k}\in Sol(f,C)$.

(ii) $x^{k}\notin C_{k}$.

(iii) $Sol(f,C)\subseteq C\cap C_{k}\cap H_{k}$.

(iv) $\lim_{k\to\infty}\|x^{k+1}-x^{k}\|=0$.

Proof (i) Since $y^{k}$ is the solution of problem (2.2), the optimality condition of convex programming gives

$0\in\partial f(x^{k},\cdot)(y^{k})+\beta{\nabla}_{1}D(y^{k},x^{k})+N_{C}(y^{k}),$

where $N_{C}$ denotes the normal cone of C. From $y^{k}\in int\,C$, it follows that $N_{C}(y^{k})=\{0\}$. Hence,

$\xi^{k}+\beta{\nabla}_{1}D(y^{k},x^{k})=0,$

where $\xi^{k}\in\partial f(x^{k},\cdot)(y^{k})$. Since $r(x^{k})=0$, i.e., $y^{k}=x^{k}$, this equality becomes

$\xi^{k}+\beta{\nabla}_{1}D(x^{k},x^{k})=0.$

Since

${\nabla}_{1}D(x,y)=-A^{T}\left(l(x)-l(y)+\mu X_{y}\log\frac{l(x)}{l(y)}\right)\quad\forall x,y\in int\,C,$

we have

${\nabla}_{1}D(x^{k},x^{k})=0.$

Thus, $\xi^{k}=0$. Combining this with $f(x^{k},x^{k})=0$ and the subgradient inequality, we obtain

$f(x^{k},y)\ge f(x^{k},x^{k})+\langle\xi^{k},y-x^{k}\rangle=0\quad\forall y\in C,$

which means that $x^{k}\in Sol(f,C)$.

(ii) Since $z^{k}=x^{k}-\gamma^{m_{k}}r(x^{k})$, $\|r(x^{k})\|>0$, $f(x,x)=0$ for every $x\in C$, and $f(z^{k},\cdot)$ is convex on C, we have

$\begin{aligned}0&=f(z^{k},z^{k})\\ &=f(z^{k},(1-\gamma^{m_{k}})x^{k}+\gamma^{m_{k}}y^{k})\\ &\le(1-\gamma^{m_{k}})f(z^{k},x^{k})+\gamma^{m_{k}}f(z^{k},y^{k})\\ &\le(1-\gamma^{m_{k}})f(z^{k},x^{k})-\gamma^{m_{k}}\sigma\|r(x^{k})\|^{2}\\ &<(1-\gamma^{m_{k}})f(z^{k},x^{k}).\end{aligned}$

Hence, $f(z^{k},x^{k})>0$, which means that $x^{k}\notin C_{k}$.

(iii) Let $x^{*}\in Sol(f,C)$. Since f is pseudomonotone on C and $f(x^{*},z^{k})\ge 0$, we have $f(z^{k},x^{*})\le 0$, so $x^{*}\in C_{k}$. To prove $Sol(f,C)\subseteq H_{k}$, we use mathematical induction. For $k=0$ we have $H_{0}={\mathcal{R}}^{n}$, so the inclusion holds. Suppose that $Sol(f,C)\subseteq H_{m}$ for some $m\ge 0$. Then, from $x^{*}\in Sol(f,C)$ and $x^{m+1}={Pr}_{C\cap C_{m}\cap H_{m}}(x^{0})$, it follows that

$\langle x^{*}-x^{m+1},x^{0}-x^{m+1}\rangle\le 0,$

and hence $x^{*}\in H_{m+1}$. This implies $Sol(f,C)\subseteq H_{m+1}$, and (iii) is proved.

(iv) Since $x^{k}$ is the projection of $x^{0}$ onto $C\cap C_{k-1}\cap H_{k-1}$, by (iii) and the definition of the projection we have

$\|x^{k}-x^{0}\|\le\|x^{*}-x^{0}\|\quad\forall x^{*}\in Sol(f,C).$

So $\{x^{k}\}$ is bounded. On the other hand, using the definition of $H_{k}$, we have

$\langle x-x^{k},x^{0}-x^{k}\rangle\le 0\quad\forall x\in H_{k},$

and hence

$x^{k}={Pr}_{H_{k}}(x^{0}).$
(3.5)

From $x^{k+1}\in H_{k}$, it holds that $x^{k+1}={Pr}_{H_{k}}(x^{k+1})$. Combining this with (3.5) and the firm nonexpansiveness of the projection, we obtain

$\begin{aligned}\|x^{k+1}-x^{k}\|^{2}&=\|{Pr}_{H_{k}}(x^{k+1})-{Pr}_{H_{k}}(x^{0})\|^{2}\\ &\le\|x^{k+1}-x^{0}\|^{2}-\|{Pr}_{H_{k}}(x^{k+1})-x^{k+1}+x^{0}-{Pr}_{H_{k}}(x^{0})\|^{2}\\ &=\|x^{k+1}-x^{0}\|^{2}-\|x^{k}-x^{0}\|^{2}.\end{aligned}$

This implies that

$\|x^{k+1}-x^{0}\|^{2}\ge\|x^{k}-x^{0}\|^{2}+\|x^{k+1}-x^{k}\|^{2}.$
(3.6)

Thus, the sequence $\{\|x^{k}-x^{0}\|\}$ is bounded and nondecreasing, and hence $\lim_{k\to\infty}\|x^{k}-x^{0}\|$ exists. Consequently,

$\lim_{k\to\infty}\|x^{k+1}-x^{k}\|=0.$

 □
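
The firm nonexpansiveness inequality used in the last display can be checked numerically for the projection onto a half-space (a sketch of ours, with hypothetical random data):

```python
import numpy as np

def proj_halfspace(x, u, z):
    # Euclidean projection onto H = {x : <u, x - z> <= 0}
    t = max(0.0, np.dot(u, x - z)) / np.dot(u, u)
    return x - t * u

rng = np.random.default_rng(1)
u, z = rng.normal(size=3), rng.normal(size=3)
viol = 0.0
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_halfspace(x, u, z), proj_halfspace(y, u, z)
    # ||Px - Py||^2 + ||(Px - x) - (Py - y)||^2 <= ||x - y||^2
    lhs = np.sum((px - py) ** 2) + np.sum(((px - x) - (py - y)) ** 2)
    viol = max(viol, lhs - np.sum((x - y) ** 2))
print(viol)  # <= 0 up to rounding: firm nonexpansiveness holds
```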

Theorem 3.3 Suppose that assumptions A1 to A5 hold, $\partial f(x,\cdot)(x)$ is upper semicontinuous on C, and the sequence $\{x^{k}\}$ is generated by Algorithm 2.1. Then $\{x^{k}\}$ converges globally to the solution $x^{*}$ of Problem $EP(f,C)$ given by

$x^{*}={Pr}_{Sol(f,C)}(x^{0}).$

Proof For each $\bar{w}^{k}\in\partial f(z^{k},\cdot)(z^{k})$, set

$H_{k}^{*}=\{y\in{\mathcal{R}}^{n}:\langle\bar{w}^{k},y-z^{k}\rangle\le 0\}.$

From $\bar{w}^{k}\in\partial f(z^{k},\cdot)(z^{k})$ and $f(x,x)=0$ for every $x\in C$, it follows that

$\langle\bar{w}^{k},y-z^{k}\rangle\le f(z^{k},y)-f(z^{k},z^{k})=f(z^{k},y).$
(3.7)

Then we have

$C_{k}\subseteq H_{k}^{*}\quad\forall k\ge 0,$

and hence

$\|x^{k}-{Pr}_{H_{k}^{*}}(x^{k})\|\le\|x^{k}-{Pr}_{C_{k}}(x^{k})\|.$
(3.8)

On the other hand, it follows from

${Pr}_{H_{k}^{*}}(x^{k})=x^{k}-\frac{\langle\bar{w}^{k},x^{k}-z^{k}\rangle}{\|\bar{w}^{k}\|^{2}}\bar{w}^{k}$

that

$\|x^{k}-{Pr}_{H_{k}^{*}}(x^{k})\|=\frac{|\langle\bar{w}^{k},x^{k}-z^{k}\rangle|}{\|\bar{w}^{k}\|}.$
(3.9)

Substituting $y=y^{k}$ into (3.7), we have

$f(z^{k},y^{k})\ge\langle\bar{w}^{k},y^{k}-z^{k}\rangle.$

Combining this with (2.3), we have

$\langle\bar{w}^{k},z^{k}-y^{k}\rangle\ge\sigma\|r(x^{k})\|^{2}.$
(3.10)

From $z^{k}=(1-\gamma^{m_{k}})x^{k}+\gamma^{m_{k}}y^{k}$, it follows that

$x^{k}-z^{k}=\frac{\gamma^{m_{k}}}{1-\gamma^{m_{k}}}(z^{k}-y^{k}).$

Using this and (3.10), we have

$\langle\bar{w}^{k},x^{k}-z^{k}\rangle=\frac{\gamma^{m_{k}}}{1-\gamma^{m_{k}}}\langle\bar{w}^{k},z^{k}-y^{k}\rangle\ge\frac{\gamma^{m_{k}}\sigma\|r(x^{k})\|^{2}}{1-\gamma^{m_{k}}}.$
(3.11)

From (3.9) and (3.11), it follows that

$\|x^{k}-{Pr}_{H_{k}^{*}}(x^{k})\|\ge\frac{\gamma^{m_{k}}\sigma\|r(x^{k})\|^{2}}{\|\bar{w}^{k}\|}.$

Then, since $\partial f(x,\cdot)(x)$ is upper semicontinuous on C and $\{x^{k}\}$ is bounded, there exists $M>0$ such that

$\|x^{k}-{Pr}_{H_{k}^{*}}(x^{k})\|\ge\frac{\gamma^{m_{k}}\sigma\|r(x^{k})\|^{2}}{M}.$

Combining this with $x^{k+1}\in C_{k}$ and (3.8), we have

$\|x^{k+1}-x^{k}\|\ge\|x^{k}-{Pr}_{C_{k}}(x^{k})\|\ge\frac{\gamma^{m_{k}}\sigma\|r(x^{k})\|^{2}}{M}.$

Then it follows from (iv) of Lemma 3.2 that

$\lim_{k\to\infty}\gamma^{m_{k}}\|r(x^{k})\|^{2}=0.$

Two cases remain to be considered.

Case 1. $\limsup_{k\to\infty}\gamma^{m_{k}}>0$.

In this case, $\liminf_{k\to\infty}\|r(x^{k})\|=0$. Since $\{x^{k}\}$ is bounded, there exists a subsequence $\{x^{k_{i}}\}$ converging to some accumulation point $\bar{x}$ with $r(\bar{x})=0$. Then Lemma 3.2(i) shows that $\bar{x}\in Sol(f,C)$.

Case 2. $\lim_{k\to\infty}\gamma^{m_{k}}=0$.

Since $\{x^{k}\}$ is bounded, there is a subsequence $\{x^{k_{j}}\}$ of $\{x^{k}\}$ which converges to some $\bar{x}$ as $j\to\infty$. Then, from the continuity of f and

$y^{k_{j}}=\arg\min\left\{f(x^{k_{j}},y)+\frac{\beta}{2}D(y,x^{k_{j}}):y\in C\right\},$

there exists $\bar{y}$ such that the sequence $\{y^{k_{j}}\}$ converges to $\bar{y}$ as $j\to\infty$, where

$\bar{y}=\arg\min\left\{f(\bar{x},y)+\frac{\beta}{2}D(y,\bar{x}):y\in C\right\}.$

Since $m_{k}$ is the smallest nonnegative integer satisfying (2.3), $m_{k}-1$ does not satisfy it. Hence, we have

$f(x^{k}-\gamma^{m_{k}-1}r(x^{k}),y^{k})>-\sigma\|r(x^{k})\|^{2},$

and in particular

$f(x^{k_{j}}-\gamma^{m_{k_{j}}-1}r(x^{k_{j}}),y^{k_{j}})>-\sigma\|r(x^{k_{j}})\|^{2}.$
(3.12)

Passing to the limit in (3.12) as $j\to\infty$ and using the continuity of f, we have

$f(\bar{x},\bar{y})\ge-\sigma\|r(\bar{x})\|^{2},$
(3.13)

where $r(\bar{x})=\bar{x}-\bar{y}$. From Algorithm 2.1, we have

$f(x^{k_{j}}-\gamma^{m_{k_{j}}}r(x^{k_{j}}),y^{k_{j}})\le-\sigma\|r(x^{k_{j}})\|^{2}.$

Since f is continuous, passing to the limit as $j\to\infty$, we obtain

$f(\bar{x},\bar{y})\le-\sigma\|r(\bar{x})\|^{2}.$

Using this and (3.13), we have

$f(\bar{x},\bar{y})=-\sigma\|r(\bar{x})\|^{2},$

which, together with (3.4) at $\bar{x}$ and $\sigma<\frac{\beta}{2\|\bar{A}^{-1}\|^{2}}$, implies $r(\bar{x})=0$, and hence $\bar{x}\in Sol(f,C)$. So all cluster points of $\{x^{k}\}$ belong to the solution set $Sol(f,C)$.

Set $\hat{x}={Pr}_{Sol(f,C)}(x^{0})$ and suppose that a subsequence $\{x^{k_{j}}\}$ converges to $x^{*}\in Sol(f,C)$ as $j\to\infty$. By (iii) of Lemma 3.2, we have

$\hat{x}\in C\cap C_{k_{j}-1}\cap H_{k_{j}-1}.$

So,

$\|x^{k_{j}}-x^{0}\|\le\|\hat{x}-x^{0}\|.$

Thus,

$\begin{aligned}\|x^{k_{j}}-\hat{x}\|^{2}&=\|x^{k_{j}}-x^{0}+x^{0}-\hat{x}\|^{2}\\ &=\|x^{k_{j}}-x^{0}\|^{2}+\|x^{0}-\hat{x}\|^{2}+2\langle x^{k_{j}}-x^{0},x^{0}-\hat{x}\rangle\\ &\le\|\hat{x}-x^{0}\|^{2}+\|x^{0}-\hat{x}\|^{2}+2\langle x^{k_{j}}-x^{0},x^{0}-\hat{x}\rangle.\end{aligned}$

Letting $j\to\infty$, we get $x^{k_{j}}\to x^{*}$ and

$\begin{aligned}\|x^{*}-\hat{x}\|^{2}&\le 2\|\hat{x}-x^{0}\|^{2}+2\langle x^{*}-x^{0},x^{0}-\hat{x}\rangle\\ &=2\langle x^{*}-\hat{x},x^{0}-\hat{x}\rangle\\ &\le 0.\end{aligned}$

The last inequality follows from $\hat{x}={Pr}_{Sol(f,C)}(x^{0})$ and $x^{*}\in Sol(f,C)$. So $x^{*}=\hat{x}$; hence the bounded sequence $\{x^{k}\}$ has the unique cluster point ${Pr}_{Sol(f,C)}(x^{0})$ and converges to it. □

Now we consider the relation between the existence of solutions of Problem $EP(f,C)$ and the convergence of the sequence $\{x^{k}\}$ generated by Algorithm 2.1.

Lemma 3.4 (see [4]) Suppose that C is a compact convex subset of ${\mathcal{R}}^{n}$ and f is continuous on C. Then the solution set of Problem $EP(f,C)$ is nonempty.

Theorem 3.5 Suppose that assumptions A1 to A4 hold, $\partial f(x,\cdot)(x)$ is upper semicontinuous on C, the sequence $\{x^{k}\}$ is generated by Algorithm 2.1, and $Sol(f,C)=\emptyset$. Then we have

$\lim_{k\to\infty}\|x^{k}-x^{0}\|=+\infty.$

Consequently, the solution set of Problem $EP(f,C)$ is empty if and only if the sequence $\{x^{k}\}$ diverges to infinity.

Proof First, we show that $C\cap C_{k}\cap H_{k}\ne\emptyset$ for every $k\ge 0$. On the contrary, suppose that there exists $k_{0}\ge 1$ such that

$C\cap C_{k_{0}}\cap H_{k_{0}}=\emptyset.$

Then there exists a positive number M such that

$\{x^{k}:0\le k\le k_{0}\}\subseteq B(x^{0},M),$

where $B(x^{0},M)=\{x\in{\mathcal{R}}^{n}:\|x-x^{0}\|\le M\}$. By Lemma 3.4, the solution set of Problem $EP(f,\bar{C})$ is nonempty, where $\bar{C}=C\cap B(x^{0},2M)$. We now apply Algorithm 2.1 to Problem $EP(f,\bar{C})$. In order to avoid confusion with the sequences $\{x^{k}\}$, $\{C_{k}\}$ and $\{H_{k}\}$, we denote the three corresponding sequences by $\{\bar{x}^{k}\}$, $\{\bar{C}_{k}\}$ and $\{\bar{H}_{k}\}$. With $\bar{x}^{0}=x^{0}$, the following claims hold:

(a) The set $\{\bar{x}^{k}\}$ has at least $k_{0}+1$ elements.

(b) $x^{k}=\bar{x}^{k}$, $C_{k}=\bar{C}_{k}$ and $H_{k}=\bar{H}_{k}$ for every $k=0,1,\dots,k_{0}$.

(c) $x^{k_{0}}$ is not a solution of Problem $EP(f,\bar{C})$.

Using $Sol(f,\bar{C})\ne\emptyset$ and (iii) of Lemma 3.2 applied to Problem $EP(f,\bar{C})$, we have $\bar{C}\cap\bar{C}_{k_{0}}\cap\bar{H}_{k_{0}}\ne\emptyset$. By claim (b) and $\bar{C}\subseteq C$, we then also have $C\cap C_{k_{0}}\cap H_{k_{0}}\ne\emptyset$, which contradicts the supposition that $C\cap C_{k_{0}}\cap H_{k_{0}}=\emptyset$. So

$C\cap C_{k}\cap H_{k}\ne\emptyset\quad\forall k\ge 0.$

This implies that inequality (3.6) also holds in this case, so the sequence $\{\|x^{k}-x^{0}\|\}$ is still nondecreasing. We claim that

$\lim_{k\to\infty}\|x^{k}-x^{0}\|=\infty.$

Suppose for contradiction that $\lim_{k\to\infty}\|x^{k}-x^{0}\|$ exists in $[0,+\infty)$. Then $\{x^{k}\}$ is bounded, and it follows from (3.6) that

$\lim_{k\to\infty}\|x^{k+1}-x^{k}\|=0.$

An argument similar to the proof of Theorem 3.3 then shows that the sequence $\{x^{k}\}$ converges to ${Pr}_{Sol(f,C)}(x^{0})$, which contradicts the emptiness of the solution set $Sol(f,C)$. The theorem is proved. □

## 4 Applications to Cournot-Nash equilibrium model

Now we consider the following Cournot-Nash oligopolistic market equilibrium model (see [25–28]): there are n firms producing a common homogeneous commodity, and the price $p_{i}$ of firm i depends on the total quantity $\sigma_{x}=\sum_{i=1}^{n}x_{i}$ of the commodity. Let $h_{i}(x_{i})$ denote the cost of firm i when its production level is $x_{i}$. Suppose that the profit of firm i is given by

$f_{i}(x_{1},\dots,x_{n})=x_{i}p_{i}(\sigma_{x})-h_{i}(x_{i}),\quad i=1,\dots,n,$

where the cost $h_{i}$ is assumed to depend only on the production level of firm i. There is a common strategy space $C\subseteq{\mathcal{R}}^{n}$ for all firms. Each firm seeks to maximize its own profit by choosing the corresponding production level, under the presumption that the production levels of the other firms are parametric input. In this context, a Nash equilibrium is a production pattern in which no firm can increase its profit by changing its own production level; each firm's level is thus a best response to the other firms' actions. Mathematically, a point $x^{*}=(x_{1}^{*},\dots,x_{n}^{*})^{T}\in C$ is said to be a Nash equilibrium point if, for every $i=1,\dots,n$, $x_{i}^{*}$ solves the problem

$\max\{f_{i}(y^{*,i}):y^{*,i}=(x_{1}^{*},\dots,x_{i-1}^{*},y_{i},x_{i+1}^{*},\dots,x_{n}^{*})^{T}\in C\}.$

Set

$\phi(x,y)=-\sum_{i=1}^{n}f_{i}(x_{1},\dots,x_{i-1},y_{i},x_{i+1},\dots,x_{n})$
(4.1)

and

$f(x,y)=\phi(x,y)-\phi(x,x).$
(4.2)

Then the problem of finding an equilibrium point of this model can be formulated as Problem $EP(f,C)$. It follows from Lemma 3.2(i) that $x^{k}$ is a solution of Problem $EP(f,C)$ if and only if $r(x^{k})=0$. Thus, $x^{k}$ is an ϵ-solution of Problem $EP(f,C)$ if $\|r(x^{k})\|\le\epsilon$. To illustrate our algorithm, we consider two academic numerical tests of the bifunction f in ${\mathcal{R}}^{5}$.
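
The construction (4.1)-(4.2) can be sketched numerically. The linear price $p_{i}(\sigma)=a-\sigma$ and linear cost $h_{i}(x_{i})=c\,x_{i}$ below are our own hypothetical choices (not the data of the examples that follow); for this symmetric model the Nash equilibrium is known in closed form, $x_{i}^{*}=(a-c)/(n+1)$, and one can verify that $f(x^{*},y)\ge 0$ for sampled $y\ge 0$:

```python
import numpy as np

n, a, c = 3, 10.0, 1.0   # hypothetical symmetric linear Cournot data

def profit(i, x):
    # f_i in the model: x_i * p(sigma_x) - h_i(x_i) with p(s) = a - s, h(t) = c*t
    return x[i] * (a - np.sum(x)) - c * x[i]

def phi(x, y):
    # (4.1): firm i deviates to y_i while the others stay at x
    total = 0.0
    for i in range(n):
        z = x.copy()
        z[i] = y[i]
        total += profit(i, z)
    return -total

def f(x, y):
    # (4.2): the Nikaido-Isoda-type equilibrium bifunction
    return phi(x, y) - phi(x, x)

x_star = np.full(n, (a - c) / (n + 1))   # closed-form symmetric equilibrium

rng = np.random.default_rng(0)
worst = min(f(x_star, rng.uniform(0, 5, size=n)) for _ in range(1000))
print(worst)  # nonnegative: x_star solves EP(f, C) on C = R^n_+
```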

Example 4.1 We consider an application of Cournot-Nash oligopolistic market equilibrium model taken from [17]. The equilibrium bifunction is defined by

$f\left(x,y\right)=ã€ˆM\left(x+y\right)+ã€ˆx,dã€‰B\left(x+y\right)+q,yâˆ’xã€‰,$
(4.3)

where

and

$C=\left\{\begin{array}{c}xâˆˆ{\mathcal{R}}_{+}^{5},\hfill \\ 4â‰¤{x}_{1}+2{x}_{2}+{x}_{3}+3{x}_{5}â‰¤12,\hfill \\ 7â‰¤{âˆ‘}_{i=1}^{5}{x}_{i}â‰¤15,\hfill \\ 6â‰¤{x}_{2}+{x}_{3}+2{x}_{4}â‰¤13,\hfill \\ 3â‰¤{x}_{2}+{x}_{3}â‰¤5.\hfill \end{array}$

In this case, the bifunction f is pseudomonotone on C and the interior approximal function (2.1) is defined through

It is easy to see that $rankA=5$. Taking $\|\bar{A}^{-1}\|=1$, $\beta=4$, $\sigma=1.5$, $\gamma=0.7$, $\mu=0.55$, we obtain the iterates shown in Table 1. The approximate solution obtained after 361 iterations is

$x^{361}=(4.5397, 0.0529, 2.8942, 2.0265, 1.4868)^{T}.$
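For completeness, the bifunction (4.3) is cheap to evaluate directly. The data M, B, q, d of [17] are not reproduced above, so the sketch below substitutes hypothetical random data of matching dimensions; only the formula itself is taken from (4.3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the data M, B, q, d of Example 4.1
# (the actual values from [17] are not reproduced here).
n = 5
M = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
q = rng.standard_normal(n)
d = rng.standard_normal(n)

def f(x, y):
    """f(x, y) = <M(x+y) + <x, d> B(x+y) + q, y - x>, as in (4.3)."""
    s = x + y
    return float((M @ s + (x @ d) * (B @ s) + q) @ (y - x))

x = np.ones(n)
print(f(x, x))   # 0.0: the bifunction vanishes on the diagonal
```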

Example 4.2

As in Example 4.1, we change only the bifunction, which now has the form

$f(x,y)=\langle Px+Qy+q,\ y-x\rangle + \langle d, \arctan(x-y)\rangle,$

where $\arctan(x-y)=(\arctan(x_{1}-y_{1}),\ldots,\arctan(x_{5}-y_{5}))^{T}$ and the components of d are chosen randomly in $(0,10)$.
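Evaluating this bifunction is equally direct. Since P, Q and q are not given above, the sketch below again uses hypothetical random data, drawing d from $(0,10)$ as specified:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical P, Q, q (not reproduced in the text); d is drawn randomly
# from (0, 10) as Example 4.2 specifies.
n = 5
P = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
q = rng.standard_normal(n)
d = rng.uniform(0.0, 10.0, size=n)

def f(x, y):
    """f(x, y) = <P x + Q y + q, y - x> + <d, arctan(x - y)>, arctan componentwise."""
    return float((P @ x + Q @ y + q) @ (y - x) + d @ np.arctan(x - y))

x = np.ones(n)
print(f(x, x))   # 0.0: both terms vanish when y = x
```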

Then the bifunction f satisfies the convergence assumptions of Theorem 3.3 in this paper and of Theorem 3.1 in [21]. We choose the parameters in Algorithm 2.1 as $\|\bar{A}^{-1}\|=1$, $\beta=5$, $\sigma=1.2$, $\gamma=0.5$, $\mu=0.2$. In the algorithm (shortly (IPLE)) proposed by Nguyen et al. [21], the parameters are chosen as follows: $\theta=0.5$, $\tau=0.7$, $\alpha=0.4$, $\mu=2$, $c_{k}=0.5+\frac{1}{k+1}$ for all $k\ge 1$. We compare Algorithm 2.1 with (IPLE); the iteration numbers and computation times for 5 test problems are given in Table 2.

The computations are performed in Matlab R2008a on a desktop PC with an Intel(R) Core(TM) i5 650 processor at 3.2 GHz and 4 GB of RAM.

## 5 Conclusion

This paper presented an iterative algorithm for solving pseudomonotone equilibrium problems without Lipschitz-type continuity of the bifunctions. By combining the interior proximal extragradient method in [22] with Armijo-type linesearch and cutting hyperplane techniques, we established the global convergence of the algorithm under mild assumptions. Compared with current methods such as the interior proximal extragradient method, the dual extragradient algorithm in [14], the auxiliary problem principle in [15], the inexact subgradient method in [29], and other methods in [4], the fundamental difference here is that our algorithm does not require computing the subgradient of a convex function. We showed that the cluster point of the generated sequence is the projection of the starting point onto the solution set of the equilibrium problem. Moreover, we also related the existence of solutions of equilibrium problems to the convergence of the iteration sequence.

## References

1. Blum E, Oettli W: From optimization and variational inequality to equilibrium problems. Math. Stud. 1994, 63: 127–149.

2. Bigi G, Castellani M, Pappalardo M: A new solution method for equilibrium problems. Optim. Methods Softw. 2009, 24: 895–911. 10.1080/10556780902855620

3. Daniele P, Giannessi F, Maugeri A: Equilibrium Problems and Variational Models. Kluwer Academic, Dordrecht; 2003.

4. Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin; 2000.

5. Moudafi A: Proximal point algorithm extended to equilibrium problem. J. Nat. Geom. 1999, 15: 91–100.

6. Anh PN: Strong convergence theorems for nonexpansive mappings and Ky Fan inequalities. J. Optim. Theory Appl. 2012. doi:10.1007/s10957-012-0005-x

7. Anh PN: A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 2012. doi:10.1080/02331934.2011.607497

8. Anh PN, Kim JK: Outer approximation algorithms for pseudomonotone equilibrium problems. Comput. Math. Appl. 2011, 61: 2588–2595. 10.1016/j.camwa.2011.02.052

9. Anh PN, Muu LD, Nguyen VH, Strodiot JJ: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. J. Optim. Theory Appl. 2005, 124: 285–306. 10.1007/s10957-004-0926-0

10. Iusem AN, Sosa W: On the proximal point method for equilibrium problems in Hilbert spaces. Optimization 2010, 59: 1259–1274. 10.1080/02331931003603133

11. Zeng LC, Yao JC: Modified combined relaxation method for general monotone equilibrium problems in Hilbert spaces. J. Optim. Theory Appl. 2006, 131: 469–483. 10.1007/s10957-006-9162-0

12. Heusinger A, Kanzow C: Relaxation methods for generalized Nash equilibrium problems with inexact line search. J. Optim. Theory Appl. 2009, 143: 159–183. 10.1007/s10957-009-9553-0

13. Konnov IV: Combined relaxation methods for monotone equilibrium problems. J. Optim. Theory Appl. 2001, 111: 327–340. 10.1023/A:1011930301552

14. Quoc TD, Anh PN, Muu LD: Dual extragradient algorithms to equilibrium problems. J. Glob. Optim. 2012, 52: 139–159. 10.1007/s10898-011-9693-2

15. Mastroeni G: On auxiliary principle for equilibrium problems. Nonconvex Optimization and Its Applications 68. In Equilibrium Problems and Variational Models. Edited by: Daniele P, Giannessi F, Maugeri A. Kluwer Academic, Dordrecht; 2003:289–298.

16. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.

17. Tran DQ, Dung ML, Nguyen VH: Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57: 749–776. 10.1080/02331930601122876

18. Anh PN: A logarithmic quadratic regularization method for solving pseudomonotone equilibrium problems. Acta Math. Vietnam. 2009, 34: 183–200.

19. Bnouhachem A: An LQP method for pseudomonotone variational inequalities. J. Glob. Optim. 2006, 36: 351–356. 10.1007/s10898-006-9013-4

20. Forsgren A, Gill PE, Wright MH: Interior methods for nonlinear optimization. SIAM Rev. 2002, 44: 525–597. 10.1137/S0036144502414942

21. Nguyen TTV, Strodiot JJ, Nguyen VH: The interior proximal extragradient method for solving equilibrium problems. J. Glob. Optim. 2009, 44: 175–192. 10.1007/s10898-008-9311-0

22. Anh PN: An LQP regularization method for equilibrium problems on polyhedral. Vietnam J. Math. 2008, 36: 209–228.

23. Auslender A, Teboulle M, Bentiba S: A logarithmic-quadratic proximal method for variational inequalities. Comput. Optim. Appl. 1999, 12: 31–40. 10.1023/A:1008607511915

24. Auslender A, Teboulle M, Bentiba S: Interior proximal and multiplier methods based on second order homogeneous kernels. Math. Oper. Res. 1999, 24: 646–668.

25. Bigi G, Passacantando M: Gap functions and penalization for solving equilibrium problems with nonlinear constraints. Comput. Optim. Appl. 2012, 53: 323–346. 10.1007/s10589-012-9481-z

26. Marcotte P: Algorithms for the network oligopoly problem. J. Oper. Res. Soc. 1987, 38: 1051–1065.

27. Mordukhovich BS, Outrata JV, Cervinka M: Equilibrium problems with complementarity constraints: case study with applications to oligopolistic markets. Optimization 2007, 56: 479–494. 10.1080/02331930701421079

28. Murphy FH, Sherali HD, Soyster AL: A mathematical programming approach for determining oligopolistic market equilibrium. Math. Program. 1982, 24: 92–106. 10.1007/BF01585096

29. Santos P, Scheimberg S: An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30: 91–107.

## Acknowledgements

We are very grateful to the anonymous referees for their helpful and constructive comments, which improved the paper. The work was supported by the National Foundation for Science and Technology Development of Vietnam (NAFOSTED), code 101.02-2011.07.

## Author information

Authors

### Corresponding author

Correspondence to Pham N Anh.

### Competing interests

The authors declare that they have no competing interests.

### Authorsâ€™ contributions

The main idea of this paper is proposed by PNA. PNA and PMT prepared the manuscript initially and performed all the steps of proof in this research. All authors read and approved the final manuscript.

## Rights and permissions


Anh, P.N., Tuan, P.M. & Long, L.B. An interior approximal method for solving pseudomonotone equilibrium problems. J Inequal Appl 2013, 156 (2013). https://doi.org/10.1186/1029-242X-2013-156