# An interior proximal cutting hyperplane method for equilibrium problems

## Abstract

We propose a new method, called an interior proximal cutting hyperplane method, for solving equilibrium problems on polyhedra in which the underlying bifunction is continuous and pseudomonotone. The method is based on a special interior proximal function that replaces the usual quadratic function, which leads to an interior proximal algorithm. The algorithm can be viewed as a combination of the cutting hyperplane method with this special interior proximal function. Finally, some preliminary computational results are reported.

AMS Mathematics Subject Classification 2000: 65K10; 90C25.

## 1 Introduction

Equilibrium problems appear frequently in many practical settings arising, for instance, in physics, engineering, game theory, transportation, economics and networks (see [1, 2]). They have become an attractive field for many researchers, both in theory and in applications. These problems provide a model whose formulation includes optimization problems, variational inequalities, (vector) optimization problems, fixed point problems, saddle point problems, Nash equilibria and complementarity problems as particular cases (see [1, 5, 9]). In this article, we consider the equilibrium problem (shortly EP(f, C)):

$\mathsf{\text{Find}}\phantom{\rule{2.77695pt}{0ex}}{x}^{*}\in C\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{such}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{that}}\phantom{\rule{2.77695pt}{0ex}}f\left({x}^{*},y\right)\ge 0\phantom{\rule{1em}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{all}}\phantom{\rule{2.77695pt}{0ex}}y\in C,$

where C is a polyhedral set in ${ℝ}^{n}$ defined by

$C:=\left\{x\in {ℝ}^{n}|Ax\le b\right\},$
(1.1)

A is a p × n matrix, $b\in {ℝ}^{p}$, and $f:C\times C\to ℝ$ is a bifunction such that f(x, x) = 0 for every $x\in C$. Throughout this article, we make the following assumptions:

(A.1) $\mathsf{\text{int}}C=\left\{x\in {ℝ}^{n}|Ax<b\right\}$ is nonempty.

(A.2) f(x, ·) is convex on C for all $x\in C$.

(A.3) f is continuous on C × C.

(A.4) The solution set S of EP (f, C) is nonempty.

The theory of equilibrium problems has been studied extensively and intensively, mainly concerning the existence of solutions and generalizations in many abstract directions. However, methods for solving EP(f, C) are still limited and have not yet satisfied the needs of applications. To our knowledge, there are three popular approaches for solving EP(f, C): the first is based on the gap function (see [6]), the second is to use the proximal point method [7], and the third is the auxiliary subproblem principle [8].

In [3, 4], Anh proposed interior proximal methods for solving monotone equilibrium problems when C is a polyhedral convex set. The method is based on a special interior proximal function which replaces the usual quadratic function, and it has also been extended to variational inequalities by many authors (see [11, 12]). This leads to an interior proximal-type algorithm, which can be viewed as combining an Armijo-type line search technique and the special interior proximal function. The only assumption required is that f is monotone on C.

In this article, we propose an algorithm for solving EP(f, C) which makes no assumptions on the problem other than continuity and pseudomonotonicity of the bifunction f. Recently, Anh and Kuno [13] introduced a new method for solving multivalued variational inequalities on a closed convex set, where the underlying mapping is upper semicontinuous and generalized monotone. We extend this cutting hyperplane method to EP(f, C). First, we construct an appropriate hyperplane which separates the current iterate from the solution set. Next, we combine this technique with an Armijo-type line search to obtain a convergent algorithm for pseudomonotone equilibrium problems. The next iterate is then obtained as the projection of the current iterate onto the intersection of the feasible set with a halfspace containing the solution set.

The article is organized as follows. In Section 2, we give formal definitions of our target problem EP(f, C) and of the pseudomonotonicity of f. We then combine an idea often used for multivalued variational inequalities with the interior proximal technique to develop an iterative algorithm. Section 3 is devoted to the proof of its global convergence to a solution of EP(f, C). In the last section, we apply the algorithm to the Nash-Cournot oligopolistic market equilibrium model. Numerical results are reported to verify our development.

## 2 The interior proximal cutting hyperplane algorithm

We list some well-known definitions and the projection under the Euclidean norm, which will be required in the following analysis.

Definition 2.1 Let C be a closed convex subset of ${ℝ}^{n}$. We denote the projection on C by ${\mathsf{\text{Pr}}}_{C}\left(\cdot \right)$, i.e.,

${\mathsf{\text{Pr}}}_{C}\left(x\right)=\mathsf{\text{arg}}\mathsf{\text{min}}\left\{∥y-x∥|y\in C\right\}\phantom{\rule{1em}{0ex}}\forall x\in {ℝ}^{n}.$
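In general, computing ${\mathsf{\text{Pr}}}_{C}$ on a polyhedron requires solving a quadratic program, but for a single halfspace it has a closed form. The following sketch (our own illustration, not part of the method) projects onto a halfspace and checks the characteristic inequality $⟨x-{\mathsf{\text{Pr}}}_{H}\left(x\right),y-{\mathsf{\text{Pr}}}_{H}\left(x\right)⟩\le 0$ for a feasible point y:

```python
# Illustrative sketch: Euclidean projection onto a halfspace
# H = {x : <a, x> <= b}, with Pr_H(x) = x - max(0, <a,x> - b)/||a||^2 * a.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_halfspace(x, a, b):
    viol = dot(a, x) - b
    if viol <= 0:            # x is already in H
        return list(x)
    s = viol / dot(a, a)
    return [xi - s * ai for xi, ai in zip(x, a)]

a, b = [1.0, 1.0], 1.0             # H = {x : x1 + x2 <= 1}
x = [2.0, 1.0]
p = project_halfspace(x, a, b)     # lands on the boundary hyperplane
```

Here `p` satisfies ⟨a, p⟩ = b, and for any y ∈ H the obtuse-angle property ⟨x − p, y − p⟩ ≤ 0 holds, which characterizes the projection.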

Then the bifunction $f:C\times C\to ℝ\cup \left\{+\infty \right\}$ is said to be

(i) monotone on C if, for each $x,y\in C$,

$f\left(x,y\right)+f\left(y,x\right)\le 0;$

(ii) pseudomonotone on C if, for each $x,y\in C$,

$f\left(x,y\right)\ge 0\phantom{\rule{1em}{0ex}}implies\phantom{\rule{1em}{0ex}}f\left(y,x\right)\le 0.$

It is observed that (i) implies (ii), but the converse is not true; some examples are given in [14].

Classical variational inequality problems (shortly VIP) are to find a vector x* C such that

$⟨F\left({x}^{*}\right),y-{x}^{*}⟩\ge 0\phantom{\rule{1em}{0ex}}\forall y\in C,$

where $C\subseteq {ℝ}^{n}$ is a nonempty closed convex set and F is a continuous mapping from C into ${ℝ}^{n}$. Such problems can be alternatively formulated as finding a zero point of the operator $T\left(x\right)=F\left(x\right)+{N}_{C}\left(x\right)$, where

${N}_{C}\left(x\right)=\left\{\begin{array}{cc}\hfill \left\{y\in {ℝ}^{n}|⟨y,z-x⟩\le 0,\phantom{\rule{2.77695pt}{0ex}}\forall z\in C\right\}\hfill & \mathsf{\text{if}}\phantom{\rule{2.77695pt}{0ex}}x\in C,\hfill \\ \varnothing \hfill & \hfill \mathsf{\text{otherwise}}.\hfill \end{array}\right\$

A well-known method for solving this problem is the proximal point algorithm which, starting with any point ${x}^{0}\in C$ and ${\lambda }_{k}\ge \lambda >0$, iteratively updates ${x}^{k+1}$ as the solution of the following problem:

$0\in {\lambda }_{k}T\left(x\right)+{\nabla }_{x}h\left(x,{x}^{k}\right),$
(2.1)

where

$h\left(x,{x}^{k}\right)=\frac{1}{2}{∥x-{x}^{k}∥}^{2}.$

Motivation for studying the algorithm of problem (2.1) could be found in [11, 15, 16].

Auslender et al. [12] have proposed an interior proximal-type method for solving (VIP) on $C:={ℝ}_{+}^{n}=\left\{x\in {ℝ}^{n}|x\ge 0\right\}$, replacing the function h(x, xᵏ) by ${d}_{\varphi }\left(x,{x}^{k}\right)$, which is defined as

${d}_{\varphi }\left(x,y\right)=\sum _{i=1}^{n}{y}_{i}^{2}\varphi \left({y}_{i}^{-1}{x}_{i}\right),$

where

$\varphi \left(t\right)=\left\{\begin{array}{cc}\hfill \frac{\nu }{2}{\left(t-1\right)}^{2}+\mu \left(t-\mathsf{\text{log}}t-1\right)\hfill & \mathsf{\text{if}}\phantom{\rule{2.77695pt}{0ex}}t>0,\hfill \\ +\infty \hfill & \hfill \mathsf{\text{otherwise}},\hfill \end{array}\right\$
(2.2)

with ν > μ > 0. The fundamental difference here is that the term ${d}_{\varphi }$ forces the iterates {xᵏ⁺¹} to stay in the interior of ${ℝ}_{+}^{n}$. This technique has been extended by many authors to variational inequalities and equilibrium problems (see [1, 3]).
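As a quick numerical illustration (our own sketch, with the arbitrary choice ν = 2, μ = 1), the snippet below evaluates φ from (2.2) and checks that ${d}_{\varphi }$ vanishes on the diagonal and is positive elsewhere:

```python
import math

# Illustrative sketch of (2.2): phi(t) = nu/2*(t-1)^2 + mu*(t - log t - 1)
# for t > 0 (and +infinity otherwise), with nu > mu > 0, and
# d_phi(x, y) = sum_i y_i^2 * phi(x_i / y_i).
NU, MU = 2.0, 1.0   # any values with nu > mu > 0

def phi(t):
    if t <= 0:
        return math.inf
    return NU / 2 * (t - 1) ** 2 + MU * (t - math.log(t) - 1)

def d_phi(x, y):
    return sum(yi ** 2 * phi(xi / yi) for xi, yi in zip(x, y))

y = [1.0, 2.0]
assert d_phi(y, y) == 0.0            # phi(1) = 0, so d_phi vanishes at x = y
assert d_phi([0.5, 3.0], y) > 0.0    # positive off the diagonal
```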

Applying this idea to the equilibrium problem EP(f, C), we consider another interior proximal function defined by

$D\left(x,y\right)=\left\{\begin{array}{cc}\hfill \frac{1}{2}{∥x-y∥}^{2}+\mu \sum _{i=1}^{p}{l}_{i}^{2}\left(y\right)\left(\frac{{l}_{i}\left(x\right)}{{l}_{i}\left(y\right)}\mathsf{\text{log}}\frac{{l}_{i}\left(x\right)}{{l}_{i}\left(y\right)}-\frac{{l}_{i}\left(x\right)}{{l}_{i}\left(y\right)}+1\right)\hfill & \hfill \mathsf{\text{if}}\phantom{\rule{2.77695pt}{0ex}}x\in \mathsf{\text{int}}C,\hfill \\ +\infty \hfill & \hfill \mathsf{\text{otherwise}},\hfill \end{array}\right\$

with $\mu \in \left(0,1\right)$, where ${a}_{i}$ (i = 1, ..., p) are the rows of the matrix A, and

$\begin{array}{cc}\hfill {l}_{i}\left(x\right)& ={b}_{i}-⟨{a}_{i},x⟩,\hfill \\ \hfill l\left(x\right)& ={\left({l}_{1}\left(x\right),{l}_{2}\left(x\right),\dots ,{l}_{p}\left(x\right)\right)}^{T}.\hfill \end{array}$

We denote by ${\nabla }_{1}D\left(x,y\right)$ the gradient of D(·, y) at x for every $y\in C$. It is easy to see that

${\nabla }_{1}D\left(x,y\right)=x-y-\mu {A}^{T}{X}_{y}\mathsf{\text{log}}\frac{l\left(x\right)}{l\left(y\right)},$

where ${X}_{y}:=\mathsf{\text{diag}}\left({l}_{1}\left(y\right),...,{l}_{p}\left(y\right)\right)$ and the logarithm is applied componentwise to the vector $\frac{l\left(x\right)}{l\left(y\right)}$.

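To make the formulas above concrete, the following sketch (our own check, with a hypothetical polyhedron C = {x | Ax ≤ b}) evaluates D(x, y) and ${\nabla }_{1}D\left(x,y\right)$, verifies the gradient against a central finite difference, and confirms that the gradient vanishes at x = y:

```python
import math

# Illustrative sketch (assumed data, not from the paper): the interior
# proximal function D(x, y) on C = {x : Ax <= b} and its gradient
# grad_1 D(x, y) = (x - y) - mu * sum_i l_i(y) * log(l_i(x)/l_i(y)) * a_i.
MU = 0.5
A = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]   # rows a_i of A
B = [2.0, 2.0, 1.0]

def l(x):  # slack vector: l_i(x) = b_i - <a_i, x>, positive on int C
    return [bi - sum(aij * xj for aij, xj in zip(ai, x)) for ai, bi in zip(A, B)]

def D(x, y):
    lx, ly = l(x), l(y)
    quad = 0.5 * sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    bar = sum(lyi ** 2 * ((lxi / lyi) * math.log(lxi / lyi) - lxi / lyi + 1)
              for lxi, lyi in zip(lx, ly))
    return quad + MU * bar

def grad1_D(x, y):
    lx, ly = l(x), l(y)
    g = [xi - yi for xi, yi in zip(x, y)]
    for ai, lxi, lyi in zip(A, lx, ly):
        c = MU * lyi * math.log(lxi / lyi)
        g = [gj - c * aij for gj, aij in zip(g, ai)]
    return g

x, y, h = [0.5, 0.3], [0.1, 0.2], 1e-6
for j in range(2):
    xp = list(x)
    xm = list(x)
    xp[j] += h
    xm[j] -= h
    fd = (D(xp, y) - D(xm, y)) / (2 * h)          # central finite difference
    assert abs(fd - grad1_D(x, y)[j]) < 1e-5
assert all(abs(g) < 1e-12 for g in grad1_D(y, y))  # gradient vanishes at x = y
```

The last assertion is exactly the fact ${\nabla }_{1}D\left({x}^{k},{x}^{k}\right)=0$ used later in the proof of Lemma 3.1.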
Then we consider the following regularized auxiliary problem (shortly RAP):

$\mathsf{\text{Find}}\phantom{\rule{2.77695pt}{0ex}}{x}^{*}\in C\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{such}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{that}}\phantom{\rule{2.77695pt}{0ex}}f\left({x}^{*},y\right)+\frac{1}{c}D\left(y,{x}^{*}\right)\ge 0\phantom{\rule{1em}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{all}}\phantom{\rule{2.77695pt}{0ex}}y\in C,$

where c > 0 is a regularization parameter.

The equivalence between EP(f, C) and (RAP) is given by the following lemma.

Lemma 2.2 Let $f:C\times C\to ℝ\cup \left\{+\infty \right\}$ be a bifunction and ${x}^{*}\in C$. Then x* is a solution to EP(f, C) if and only if x* is a solution to (RAP).

Lemma 2.2 shows that a solution of the equilibrium problem EP(f, C) can be approximated by an iterative procedure ${x}^{k+1}=h\left({x}^{k}\right)$, k = 0, 1, ..., where c > 0, x⁰ is any starting point in C, and h(xᵏ) is the unique solution of the strongly convex program:

$h\left({x}^{k}\right)=\mathsf{\text{arg}}\mathsf{\text{min}}\left\{f\left({x}^{k},y\right)+\frac{1}{c}D\left(y,{x}^{k}\right)|y\in C\right\}.$
However, in general the sequence {xᵏ} does not converge to a solution of the equilibrium problem.

Let f be a mapping defined by

$f\left(x,y\right):=\mathsf{\text{sup}}\left\{⟨w,y-x⟩|w\in F\left(x\right)\right\},$

where $F:C\to {2}^{{ℝ}^{n}}$ is a multivalued mapping such that $F\left(x\right)\ne \varnothing$ for all $x\in C$. Then EP(f, C) can be formulated as the multivalued variational inequality problem (shortly MVIP):

Find x* C, w* F(x*) such that

$⟨{w}^{*},x-{x}^{*}⟩\ge 0\phantom{\rule{1em}{0ex}}\forall x\in C.$

In this case, it is known that the solutions of (MVIP) coincide with the zeros of the following projected residual function

$T\left(x\right):=x-{\mathsf{\text{Pr}}}_{C}\left(x-w\right),\phantom{\rule{1em}{0ex}}w\in F\left(x\right).$

In other words, with ${x}^{0}\in C$ and ${w}^{0}\in F\left({x}^{0}\right)$, the point (x⁰, w⁰) is a solution of (MVIP) if and only if T(x⁰) = 0, where $T\left({x}^{0}\right)={x}^{0}-{\mathsf{\text{Pr}}}_{C}\left({x}^{0}-{w}^{0}\right)$ (see [13]). Applying this idea and the interior proximal function D(·, ·) to the equilibrium problem EP(f, C), we obtain the following solution scheme. Let xᵏ be the current approximation to the solution of EP(f, C). First, we compute ${y}^{k}=\mathsf{\text{arg}}\mathsf{\text{min}}\left\{f\left({x}^{k},y\right)+\beta D\left(y,{x}^{k}\right)|y\in C\right\}$ for some positive constant β and set $r\left({x}^{k}\right)={x}^{k}-{y}^{k}$. Next, we search the line segment between xᵏ and yᵏ for a pair $\left({\stackrel{̄}{w}}^{k},{z}^{k}\right)$ such that the hyperplane $\left\{x\in {ℝ}^{n}|⟨{\stackrel{̄}{w}}^{k},x-{z}^{k}⟩=0\right\}$ strictly separates xᵏ from the solution set S of EP(f, C). To find such a pair $\left({\stackrel{̄}{w}}^{k},{z}^{k}\right)$, we may use a computationally inexpensive Armijo-type procedure. Then we compute the next iterate xᵏ⁺¹ by projecting xᵏ onto the intersection of the feasible set C with a halfspace containing the solution set.
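As a toy illustration of the zero-residual characterization (our own example, with hypothetical data), consider the single-valued case F(x) = x − a on a box, whose solution is the projection of a onto the box:

```python
# Toy sketch (hypothetical data): for the VI with F(x) = x - a on a box C,
# the solution is x* = Pr_C(a), and the projected residual
# T(x) = x - Pr_C(x - F(x)) vanishes exactly at x*.
def pr_box(x, lo, hi):
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def residual(x, F, lo, hi):
    w = F(x)
    p = pr_box([xi - wi for xi, wi in zip(x, w)], lo, hi)
    return [xi - pi for xi, pi in zip(x, p)]

lo, hi = [0.0, 0.0], [1.0, 1.0]
a = [2.0, 0.5]
F = lambda x: [xi - ai for xi, ai in zip(x, a)]
x_star = pr_box(a, lo, hi)                       # [1.0, 0.5]
assert residual(x_star, F, lo, hi) == [0.0, 0.0]   # zero residual at solution
assert residual([0.2, 0.2], F, lo, hi) != [0.0, 0.0]
```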

Then, the algorithm is described as follows.

Algorithm 2.3

Step 0. Choose ${x}^{0}\in C$, $\beta >0$, $0<\sigma <\frac{\beta }{2}$, and $\gamma \in \left(0,1\right)$.

Step 1. Compute

${y}^{k}=\mathsf{\text{arg}}\mathsf{\text{min}}\left\{f\left({x}^{k},y\right)+\beta D\left(y,{x}^{k}\right)|y\in C\right\},\phantom{\rule{1em}{0ex}}r\left({x}^{k}\right):={x}^{k}-{y}^{k}.$
(2.3)

If $r\left({x}^{k}\right)=0$, then stop. Otherwise, find the smallest nonnegative integer ${m}_{k}$ such that

$f\left({x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right),{y}^{k}\right)+\sigma {∥r\left({x}^{k}\right)∥}^{2}\le 0.$
(2.4)

Step 2. (Cutting hyperplane) Choose ${\stackrel{̄}{w}}^{k}\in {\partial }_{2}f\left({z}^{k},{z}^{k}\right)$, where ${z}^{k}:={x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)$.

Set

${H}_{k}:=\left\{x\in {ℝ}^{n}|⟨{\stackrel{̄}{w}}^{k},x-{z}^{k}⟩\le 0\right\},$

and find ${x}^{k+1}:={\mathsf{\text{Pr}}}_{C\cap {H}_{k}}\left({x}^{k}\right)$.

Step 3. Set k := k + 1, and go to Step 1.
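To make the line search in Step 1 concrete, the following toy sketch (our own construction, with a hypothetical one-dimensional bifunction f(x, y) = (y − x)F(x), F(x) = x) finds the smallest m satisfying (2.4):

```python
# Toy sketch of the Armijo-type search (2.4) in one dimension, using the
# hypothetical bifunction f(x, y) = (y - x) * F(x) with F(x) = x.
GAMMA, SIGMA = 0.5, 0.4   # gamma in (0, 1); sigma chosen below beta/2

def f(x, y):
    return (y - x) * x

def armijo_m(xk, yk, max_m=50):
    r = xk - yk                                  # residual r(x^k) = x^k - y^k
    for m in range(max_m):
        z = xk - GAMMA ** m * r                  # trial point on [y^k, x^k]
        if f(z, yk) + SIGMA * r ** 2 <= 0:       # condition (2.4)
            return m
    raise RuntimeError("line search did not terminate")

m_k = armijo_m(3.0, 2.0)   # m = 0 gives f(y^k, y^k) + sigma*r^2 > 0, m = 1 succeeds
```

Note that m = 0 can never satisfy (2.4) when r(xᵏ) ≠ 0, since the trial point is then yᵏ itself and f(yᵏ, yᵏ) = 0.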

## 3 Convergence of the algorithm

In the next lemma, we justify the stopping criterion.

Lemma 3.1 If $r\left({x}^{k}\right)=0$, then ${x}^{k}$ is a solution to the equilibrium problem EP(f, C).

Proof. Since ${y}^{k}$ is the solution to problem (2.3), by the optimality condition in convex programming we have

$0\in {\partial }_{2}f\left({x}^{k},{y}^{k}\right)+\beta {\nabla }_{1}D\left({y}^{k},{x}^{k}\right)+{N}_{C}\left({y}^{k}\right),$

where ${N}_{C}$ denotes the normal cone of C. From ${y}^{k}\in \mathsf{\text{int}}C$, it follows that ${N}_{C}\left({y}^{k}\right)=\left\{0\right\}$. Hence

${\xi }^{k}+\beta {\nabla }_{1}D\left({y}^{k},{x}^{k}\right)=0,$

where ${\xi }^{k}\in {\partial }_{2}f\left({x}^{k},{y}^{k}\right)$. Since $r\left({x}^{k}\right)=0$ means ${y}^{k}={x}^{k}$, replacing ${y}^{k}$ by ${x}^{k}$ in this equality we get

${\xi }^{k}+\beta {\nabla }_{1}D\left({x}^{k},{x}^{k}\right)=0.$

Since

${\nabla }_{1}D\left(x,y\right)=x-y-\mu {A}^{T}{X}_{y}\mathsf{\text{log}}\frac{l\left(x\right)}{l\left(y\right)}\phantom{\rule{1em}{0ex}}\forall x,y\in C,$
(3.1)

we have

${\nabla }_{1}D\left({x}^{k},{x}^{k}\right)=0.$

Thus ${\xi }^{k}=0$. Combining this with $f\left({x}^{k},{x}^{k}\right)=0$ and the subgradient inequality, we obtain

$f\left({x}^{k},y\right)\ge f\left({x}^{k},{x}^{k}\right)+⟨{\xi }^{k},y-{x}^{k}⟩=0\phantom{\rule{1em}{0ex}}\forall y\in C,$

which means that ${x}^{k}$ is a solution to EP(f, C).

In Algorithm 2.3, we need to show the existence of the nonnegative integer m k .

Lemma 3.2 For $\gamma \in \left(0,1\right)$ and $0<\sigma <\frac{\beta }{2}$, if $r\left({x}^{k}\right)\ne 0$, then there exists a smallest nonnegative integer ${m}_{k}$ such that the inequality (2.4) holds.

Proof. Assume, on the contrary, that the inequality (2.4) is not satisfied for any nonnegative integer i, i.e.,

$f\left({x}^{k}-{\gamma }^{i}r\left({x}^{k}\right),{y}^{k}\right)+\sigma {∥r\left({x}^{k}\right)∥}^{2}>0.$

Letting i → ∞, from the continuity of f we have

$f\left({x}^{k},{y}^{k}\right)+\sigma {∥r\left({x}^{k}\right)∥}^{2}\ge 0.$
(3.2)

On the other hand, for each t > 0 we have $\mathsf{\text{log}}t\ge 1-\frac{1}{t}$. Applying this with $t=\frac{{l}_{i}\left({y}^{k}\right)}{{l}_{i}\left({x}^{k}\right)}>0$ and multiplying by t, we obtain, for each i = 1, ..., p,

$\frac{{l}_{i}\left({y}^{k}\right)}{{l}_{i}\left({x}^{k}\right)}-1\le \frac{{l}_{i}\left({y}^{k}\right)}{{l}_{i}\left({x}^{k}\right)}\mathsf{\text{log}}\frac{{l}_{i}\left({y}^{k}\right)}{{l}_{i}\left({x}^{k}\right)}.$

Then,

$\begin{array}{cc}\hfill D\left({y}^{k},{x}^{k}\right)& =\frac{1}{2}{∥{x}^{k}-{y}^{k}∥}^{2}+\mu \sum _{i=1}^{p}{l}_{i}^{2}\left({x}^{k}\right)\left(\frac{{l}_{i}\left({y}^{k}\right)}{{l}_{i}\left({x}^{k}\right)}\mathsf{\text{log}}\frac{{l}_{i}\left({y}^{k}\right)}{{l}_{i}\left({x}^{k}\right)}-\frac{{l}_{i}\left({y}^{k}\right)}{{l}_{i}\left({x}^{k}\right)}+1\right)\hfill \\ \ge \frac{1}{2}{∥r\left({x}^{k}\right)∥}^{2}.\hfill \end{array}$
(3.3)

Since ${y}^{k}$ is the solution to the strongly convex program (2.3), we have

$f\left({x}^{k},y\right)+\beta D\left(y,{x}^{k}\right)\ge f\left({x}^{k},{y}^{k}\right)+\beta D\left({y}^{k},{x}^{k}\right)\phantom{\rule{1em}{0ex}}\forall y\in C.$

Substituting $y={x}^{k}\in C$ and using the assumptions $f\left({x}^{k},{x}^{k}\right)=0$ and $D\left({x}^{k},{x}^{k}\right)=0$, we get

$f\left({x}^{k},{y}^{k}\right)+\beta D\left({y}^{k},{x}^{k}\right)\le 0.$
(3.4)

Combining (3.3) with (3.4), we obtain

$f\left({x}^{k},{y}^{k}\right)+\frac{\beta }{2}{∥r\left({x}^{k}\right)∥}^{2}\le 0.$
(3.5)

Then, inequalities (3.2) and (3.5) imply that

$-\sigma {∥r\left({x}^{k}\right)∥}^{2}\le f\left({x}^{k},{y}^{k}\right)\le -\frac{\beta }{2}{∥r\left({x}^{k}\right)∥}^{2}.$

Hence either $r\left({x}^{k}\right)=0$ or $\sigma \ge \frac{\beta }{2}$. The first case contradicts $r\left({x}^{k}\right)\ne 0$, while the second contradicts the fact that $\sigma <\frac{\beta }{2}$.

The following results establish some properties of the cutting hyperplane ${H}_{k}$.

Lemma 3.3 Let {xk} be the sequence generated by Algorithm 2.3. Then the following hold:

(i) ${x}^{k}\notin {H}_{k}$ and $S\subset C\cap {H}_{k}$.

(ii) ${x}^{k+1}={\mathsf{\text{Pr}}}_{C\cap {H}_{k}}\left({ȳ}^{k}\right)$, where ${ȳ}^{k}={\mathsf{\text{Pr}}}_{{H}_{k}}\left({x}^{k}\right)$.

Proof. (i) Since ${\stackrel{̄}{w}}^{k}\in {\partial }_{2}f\left({z}^{k},{z}^{k}\right)$, ${y}^{k}\in C$, $f\left({z}^{k},{z}^{k}\right)=0$, and ${z}^{k}={x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)$, so that ${y}^{k}-{z}^{k}=-\left(1-{\gamma }^{{m}_{k}}\right)r\left({x}^{k}\right)$, we have

$f\left({z}^{k},{y}^{k}\right)\ge ⟨{\stackrel{̄}{w}}^{k},{y}^{k}-{z}^{k}⟩=-\left(1-{\gamma }^{{m}_{k}}\right)⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩.$

Combining this with (2.4), and noting that ${m}_{k}\ge 1$ (inequality (2.4) fails for m = 0 whenever $r\left({x}^{k}\right)\ne 0$), we obtain

$\left(1-{\gamma }^{{m}_{k}}\right)⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩\ge \sigma {∥r\left({x}^{k}\right)∥}^{2}.$
(3.6)

Hence, since $1-{\gamma }^{{m}_{k}}>0$,

$⟨{\stackrel{̄}{w}}^{k},{x}^{k}-{z}^{k}⟩={\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩\ge \frac{\sigma {\gamma }^{{m}_{k}}}{1-{\gamma }^{{m}_{k}}}{∥r\left({x}^{k}\right)∥}^{2}>0.$

This implies ${x}^{k}\notin {H}_{k}$.

Since f is assumed to be pseudomonotone on C, ${z}^{k}\in C$ and ${x}^{*}\in S$, we have

$f\left({x}^{*},{z}^{k}\right)\ge 0⇒f\left({z}^{k},{x}^{*}\right)\le 0.$

Combining this with ${\stackrel{̄}{w}}^{k}\in {\partial }_{2}f\left({z}^{k},{z}^{k}\right)$, we get

$\begin{array}{cc}\hfill ⟨{\stackrel{̄}{w}}^{k},{x}^{*}-{z}^{k}⟩& \le f\left({z}^{k},{x}^{*}\right)-f\left({z}^{k},{z}^{k}\right)\hfill \\ \le 0.\hfill \end{array}$

Thus, ${x}^{*}\in {H}_{k}$, and therefore $S\subset C\cap {H}_{k}$.

(ii) It is well known that, for the halfspace $H=\left\{x\in {ℝ}^{n}|⟨w,x-{x}^{0}⟩\le 0\right\}$ with $w\ne 0$ and a point $y\notin H$,

${\mathsf{\text{Pr}}}_{H}\left(y\right)=y-\frac{⟨w,y-{x}^{0}⟩}{{∥w∥}^{2}}w.$

Hence,

$\begin{array}{cc}\hfill {ȳ}^{k}& ={\mathsf{\text{Pr}}}_{{H}_{k}}\left({x}^{k}\right)\hfill \\ ={x}^{k}-\frac{⟨{\stackrel{̄}{w}}^{k},{x}^{k}-{z}^{k}⟩}{{∥{\stackrel{̄}{w}}^{k}∥}^{2}}{\stackrel{̄}{w}}^{k}\hfill \\ ={x}^{k}-\frac{{\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩}{{∥{\stackrel{̄}{w}}^{k}∥}^{2}}{\stackrel{̄}{w}}^{k}.\hfill \end{array}$

On the other hand, for every $y\in C\cap {H}_{k}$ there exists $\lambda \in \left[0,1\right)$ such that

$\stackrel{^}{x}=\lambda {x}^{k}+\left(1-\lambda \right)y\in C\cap \partial {H}_{k},$

where $\partial {H}_{k}=\left\{x\in {ℝ}^{n}\left|⟨{\stackrel{̄}{w}}^{k},x-{z}^{k}⟩=0\right\right\}$, because ${x}^{k}\in C$ but ${x}^{k}\notin {H}_{k}$. Then

$\begin{array}{cc}\hfill {∥y-{ȳ}^{k}∥}^{2}& \ge {\left(1-\lambda \right)}^{2}{∥y-{ȳ}^{k}∥}^{2}\hfill \\ ={∥\stackrel{^}{x}-\lambda {x}^{k}-\left(1-\lambda \right){ȳ}^{k}∥}^{2}\hfill \\ ={∥\left(\stackrel{^}{x}-{ȳ}^{k}\right)-\lambda \left({x}^{k}-{ȳ}^{k}\right)∥}^{2}\hfill \\ ={∥\stackrel{^}{x}-{ȳ}^{k}∥}^{2}+{\lambda }^{2}{∥{x}^{k}-{ȳ}^{k}∥}^{2}-2\lambda ⟨\stackrel{^}{x}-{ȳ}^{k},{x}^{k}-{ȳ}^{k}⟩\hfill \\ ={∥\stackrel{^}{x}-{ȳ}^{k}∥}^{2}+{\lambda }^{2}{∥{x}^{k}-{ȳ}^{k}∥}^{2}\hfill \\ \ge {∥\stackrel{^}{x}-{ȳ}^{k}∥}^{2},\hfill \end{array}$
(3.7)

because ${ȳ}^{k}={\mathsf{\text{Pr}}}_{{H}_{k}}\left({x}^{k}\right)$. Also we have

$\begin{array}{cc}\hfill {∥\stackrel{^}{x}-{x}^{k}∥}^{2}& ={∥\stackrel{^}{x}-{ȳ}^{k}+{ȳ}^{k}-{x}^{k}∥}^{2}\hfill \\ ={∥\stackrel{^}{x}-{ȳ}^{k}∥}^{2}-2⟨\stackrel{^}{x}-{ȳ}^{k},{x}^{k}-{ȳ}^{k}⟩+{∥{ȳ}^{k}-{x}^{k}∥}^{2}\hfill \\ ={∥\stackrel{^}{x}-{ȳ}^{k}∥}^{2}+{∥{ȳ}^{k}-{x}^{k}∥}^{2}.\hfill \end{array}$

Since ${x}^{k+1}={\mathsf{\text{Pr}}}_{C\cap {H}_{k}}\left({x}^{k}\right)$ and $\stackrel{^}{x}\in C\cap {H}_{k}$, using the Pythagorean theorem we can deduce that

$\begin{array}{cc}\hfill {∥\stackrel{^}{x}-{ȳ}^{k}∥}^{2}& ={∥\stackrel{^}{x}-{x}^{k}∥}^{2}-{∥{ȳ}^{k}-{x}^{k}∥}^{2}\hfill \\ \ge {∥{x}^{k+1}-{x}^{k}∥}^{2}-{∥{ȳ}^{k}-{x}^{k}∥}^{2}\hfill \\ ={∥{x}^{k+1}-{ȳ}^{k}∥}^{2}.\hfill \end{array}$
(3.8)

From (3.7) and (3.8), we have

$∥{x}^{k+1}-{ȳ}^{k}∥\le ∥y-{ȳ}^{k}∥\phantom{\rule{1em}{0ex}}\forall y\in C\cap {H}_{k},$

which implies

${x}^{k+1}={\mathsf{\text{Pr}}}_{C\cap {H}_{k}}\left({ȳ}^{k}\right).$

In order to prove the convergence of Algorithm 2.3, we give the following key property of the sequence {xk} generated by the algorithm.

Lemma 3.4 The sequence {xᵏ} generated by Algorithm 2.3 satisfies the following inequality:

${∥{x}^{k+1}-{x}^{*}∥}^{2}\le {∥{x}^{k}-{x}^{*}∥}^{2}-{∥{x}^{k+1}-{ȳ}^{k}∥}^{2}-{\left(\frac{{\gamma }^{{m}_{k}}\sigma }{∥{\stackrel{̄}{w}}^{k}∥\left(1-{\gamma }^{{m}_{k}}\right)}\right)}^{2}{∥r\left({x}^{k}\right)∥}^{4}.$
(3.9)

Proof. Since ${x}^{k+1}={\mathsf{\text{Pr}}}_{C\cap {H}_{k}}\left({ȳ}^{k}\right)$ by Lemma 3.3 (ii), we have

$⟨{ȳ}^{k}-{x}^{k+1},z-{x}^{k+1}⟩\le 0\phantom{\rule{1em}{0ex}}\forall z\in C\cap {H}_{k}.$

Substituting $z={x}^{*}\in C\cap {H}_{k}$, we have

$⟨{ȳ}^{k}-{x}^{k+1},{x}^{*}-{x}^{k+1}⟩\le 0⇔⟨{ȳ}^{k}-{x}^{k+1},{x}^{*}-{ȳ}^{k}+{ȳ}^{k}-{x}^{k+1}⟩\le 0,$

which implies

${∥{x}^{k+1}-{ȳ}^{k}∥}^{2}\le ⟨{x}^{k+1}-{ȳ}^{k},{x}^{*}-{ȳ}^{k}⟩.$

Hence,

$\begin{array}{ll}{∥{x}^{k+1}-{x}^{*}∥}^{2}&={∥{x}^{k+1}-{ȳ}^{k}∥}^{2}+{∥{ȳ}^{k}-{x}^{*}∥}^{2}+2⟨{x}^{k+1}-{ȳ}^{k},{ȳ}^{k}-{x}^{*}⟩\\ &\le ⟨{x}^{k+1}-{ȳ}^{k},{x}^{*}-{ȳ}^{k}⟩+{∥{ȳ}^{k}-{x}^{*}∥}^{2}+2⟨{x}^{k+1}-{ȳ}^{k},{ȳ}^{k}-{x}^{*}⟩\\ &={∥{ȳ}^{k}-{x}^{*}∥}^{2}+⟨{x}^{k+1}-{ȳ}^{k},{ȳ}^{k}-{x}^{*}⟩\\ &\le {∥{ȳ}^{k}-{x}^{*}∥}^{2}-{∥{x}^{k+1}-{ȳ}^{k}∥}^{2}.\end{array}$
(3.10)

Since ${z}^{k}={x}^{k}-{\gamma }^{{m}_{k}}r\left({x}^{k}\right)$ and

${ȳ}^{k}={\mathsf{\text{Pr}}}_{{H}_{k}}\left({x}^{k}\right)={x}^{k}-\frac{⟨{\stackrel{̄}{w}}^{k},{x}^{k}-{z}^{k}⟩}{{∥{\stackrel{̄}{w}}^{k}∥}^{2}}{\stackrel{̄}{w}}^{k}={x}^{k}-\frac{{\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩}{{∥{\stackrel{̄}{w}}^{k}∥}^{2}}{\stackrel{̄}{w}}^{k},$

we have, using $⟨{\stackrel{̄}{w}}^{k},{x}^{k}-{x}^{*}⟩=⟨{\stackrel{̄}{w}}^{k},{z}^{k}-{x}^{*}⟩+{\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩$ in the second equality,

$\begin{array}{ll}{∥{ȳ}^{k}-{x}^{*}∥}^{2}&={∥{x}^{k}-{x}^{*}∥}^{2}+{\left(\frac{{\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩}{∥{\stackrel{̄}{w}}^{k}∥}\right)}^{2}-\frac{2{\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩}{{∥{\stackrel{̄}{w}}^{k}∥}^{2}}⟨{\stackrel{̄}{w}}^{k},{x}^{k}-{x}^{*}⟩\\ &={∥{x}^{k}-{x}^{*}∥}^{2}-{\left(\frac{{\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩}{∥{\stackrel{̄}{w}}^{k}∥}\right)}^{2}-\frac{2{\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩}{{∥{\stackrel{̄}{w}}^{k}∥}^{2}}⟨{\stackrel{̄}{w}}^{k},{z}^{k}-{x}^{*}⟩.\end{array}$
(3.11)

From (3.6), it follows that

$⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩\ge \frac{\sigma }{1-{\gamma }^{{m}_{k}}}{∥r\left({x}^{k}\right)∥}^{2}.$

Since ${x}^{*}\in {H}_{k}$ gives $⟨{\stackrel{̄}{w}}^{k},{z}^{k}-{x}^{*}⟩\ge 0$, (3.11) reduces to

$\begin{array}{ll}{∥{ȳ}^{k}-{x}^{*}∥}^{2}&\le {∥{x}^{k}-{x}^{*}∥}^{2}-{\left(\frac{{\gamma }^{{m}_{k}}⟨{\stackrel{̄}{w}}^{k},r\left({x}^{k}\right)⟩}{∥{\stackrel{̄}{w}}^{k}∥}\right)}^{2}\\ &\le {∥{x}^{k}-{x}^{*}∥}^{2}-{\left(\frac{{\gamma }^{{m}_{k}}\sigma }{∥{\stackrel{̄}{w}}^{k}∥\left(1-{\gamma }^{{m}_{k}}\right)}\right)}^{2}{∥r\left({x}^{k}\right)∥}^{4}.\end{array}$
(3.12)

Combining (3.10) and (3.12), we obtain the inequality (3.9).

Theorem 3.5 Suppose that Assumptions (A.1)-(A.4) hold, that f is pseudomonotone on C, and that the mapping ${\partial }_{2}f\left(\cdot ,{z}^{k}\right)$ is uniformly bounded by M > 0. Then the sequence {xᵏ} generated by Algorithm 2.3 converges to a solution of EP(f, C).

Proof. The inequality (3.9) implies that the sequence $\left\{∥{x}^{k}-{x}^{*}∥\right\}$ is nonincreasing and hence convergent. Consequently, the sequence {xᵏ} is bounded.

Since the mapping ${\partial }_{2}f\left(\cdot ,{z}^{k}\right)$ is uniformly bounded by M > 0, we have

$∥{\stackrel{̄}{w}}^{k}∥\le M\phantom{\rule{1em}{0ex}}\forall k=1,2,\dots$

This, together with (3.9), implies

${∥{x}^{k+1}-{x}^{*}∥}^{2}\le {∥{x}^{k}-{x}^{*}∥}^{2}-{∥{x}^{k+1}-{ȳ}^{k}∥}^{2}-{\left(\frac{{\gamma }^{{m}_{k}}\sigma }{M\left(1-{\gamma }^{{m}_{k}}\right)}\right)}^{2}{∥r\left({x}^{k}\right)∥}^{4}.$
(3.13)

Since the sequence $\left\{∥{x}^{k}-{x}^{*}∥\right\}$ is convergent, it follows easily from (3.13) that

$\underset{k\to \infty }{\mathsf{\text{lim}}}{\gamma }^{{m}_{k}}∥r\left({x}^{k}\right)∥=0.$

The cases remaining to consider are the following.

Case 1. $\underset{k\to \infty }{\mathsf{\text{lim}}\mathsf{\text{sup}}}\phantom{\rule{2.77695pt}{0ex}}{\gamma }^{{m}_{k}}>0$. In this case it must follow that $\underset{k\to \infty }{\mathsf{\text{lim}}\mathsf{\text{inf}}}\phantom{\rule{2.77695pt}{0ex}}∥r\left({x}^{k}\right)∥=0$. Since {xᵏ} is bounded, there exists an accumulation point $\stackrel{̄}{x}$ of {xᵏ}; in other words, there is a subsequence $\left\{{x}^{{k}_{i}}\right\}$ converging to some $\stackrel{̄}{x}$ with $r\left(\stackrel{̄}{x}\right)=0$, as i → ∞. Then we see from Lemma 3.1 that $\stackrel{̄}{x}\in S$, and we can take ${x}^{*}=\stackrel{̄}{x}$, in particular in (3.13). Thus $\left\{∥{x}^{k}-\stackrel{̄}{x}∥\right\}$ is a convergent sequence, and since $\stackrel{̄}{x}$ is an accumulation point of {xᵏ}, this sequence converges to zero, i.e., {xᵏ} converges to $\stackrel{̄}{x}\in S$.

Case 2. $\underset{k\to \infty }{\mathsf{\text{lim}}}{\gamma }^{{m}_{k}}=0$. Since ${m}_{k}$ is the smallest nonnegative integer satisfying (2.4), $m={m}_{k}-1$ does not satisfy it. Hence, we have

$f\left({x}^{k}-{\gamma }^{{m}_{k}-1}r\left({x}^{k}\right),{y}^{k}\right)>-\sigma {∥r\left({x}^{k}\right)∥}^{2},$

and, in particular, along a subsequence $\left\{{x}^{{k}_{i}}\right\}$ converging to some $\stackrel{̄}{x}$ with ${y}^{{k}_{i}}\to ȳ$,

$f\left({x}^{{k}_{i}}-{\gamma }^{{m}_{{k}_{i}}-1}r\left({x}^{{k}_{i}}\right),{y}^{{k}_{i}}\right)>-\sigma {∥r\left({x}^{{k}_{i}}\right)∥}^{2}.$
(3.14)

Passing to the limit in (3.14) as i → ∞ and using the continuity of f, we have

$f\left(\stackrel{̄}{x},ȳ\right)\ge -\sigma {∥\stackrel{̄}{x}-ȳ∥}^{2}.$
(3.15)

From (3.5) we have

$f\left({x}^{{k}_{i}},{y}^{{k}_{i}}\right)\le -\frac{\beta }{2}{∥r\left({x}^{{k}_{i}}\right)∥}^{2}.$

Since f is continuous, passing to the limit as i → ∞, we obtain

$f\left(\stackrel{̄}{x},ȳ\right)\le -\frac{\beta }{2}{∥\stackrel{̄}{x}-ȳ∥}^{2}.$

Combining this with (3.15), we have

$\sigma {∥\stackrel{̄}{x}-ȳ∥}^{2}\ge \frac{\beta }{2}{∥\stackrel{̄}{x}-ȳ∥}^{2},$

which implies either $∥r\left(\stackrel{̄}{x}\right)∥=∥\stackrel{̄}{x}-ȳ∥=0$ or $\sigma \ge \frac{\beta }{2}$. The second case contradicts the fact that $0<\sigma <\frac{\beta }{2}$, and hence $r\left(\stackrel{̄}{x}\right)=0$, i.e., $\stackrel{̄}{x}\in S$. Letting ${x}^{*}=\stackrel{̄}{x}$ and repeating the previous arguments, we conclude that the whole sequence {xᵏ} converges to $\stackrel{̄}{x}\in S$.

## 4 Numerical results

We applied the algorithm to a production competition problem under the Nash-Cournot oligopolistic market equilibrium model (see [1, 2, 17]). In this model, it is assumed that there are n firms producing a common homogeneous commodity and that the price ${p}_{i}$ of firm i depends on the total quantity ${\sigma }_{x}={\sum }_{i=1}^{n}{x}_{i}$ of the commodity.

Let ${h}_{i}\left({x}_{i}\right)$ denote the cost of firm i when its production level is ${x}_{i}$. Suppose that the profit of firm i is given by

${f}_{i}\left({x}_{1},...,{x}_{n}\right):={x}_{i}{p}_{i}\left({\sigma }_{x}\right)-{h}_{i}\left({x}_{i}\right)\phantom{\rule{1em}{0ex}}i=1,...,n,$
(4.1)

where h i is the cost function of firm i that is assumed to be dependent only on its production level.

Let $C\subset {ℝ}_{+}^{n}:=\left\{x\in {ℝ}^{n}\left|x\ge 0\right\right\}$ be a closed convex set representing the joint strategy set of the firms. Each firm seeks to maximize its own profit by choosing its production level under the presumption that the production levels of the other firms are parametric input. In this context, a Nash equilibrium is a production pattern in which no firm can increase its profit by changing its own production level. Thus, under this equilibrium concept, each firm determines its best response given the other firms' actions. Mathematically, a point ${x}^{*}=\left({x}_{1}^{*},...,{x}_{n}^{*}\right)\in C$ is said to be a Nash equilibrium point if

${f}_{i}\left({x}_{1}^{*},...,{x}_{i-1}^{*},{y}_{i},{x}_{i+1}^{*},...,{x}_{n}^{*}\right)\le {f}_{i}\left({x}_{1}^{*},...,{x}_{n}^{*}\right)\phantom{\rule{1em}{0ex}}\forall y\in C,\phantom{\rule{2.77695pt}{0ex}}i=1,...,n.$
(4.2)

When ${h}_{i}$ is affine, this market problem can be formulated as a special Nash equilibrium problem in n-person noncooperative game theory.

Set

$\varphi \left(x,y\right):=-\sum _{i=1}^{n}{f}_{i}\left({x}_{1},...,{x}_{i-1},{y}_{i},{x}_{i+1},...,{x}_{n}\right)$
(4.3)

and

$\begin{array}{cc}\hfill f\left(x,y\right):& =\varphi \left(x,y\right)-\varphi \left(x,x\right)\hfill \\ =\sum _{i=1}^{n}\left({h}_{i}\left({y}_{i}\right)-{h}_{i}\left({x}_{i}\right)-{y}_{i}p\left({y}_{i}+\sum _{j\ne i}{x}_{j}\right)+{x}_{i}p\left(\sum _{i=1}^{n}{x}_{i}\right)\right).\hfill \end{array}$
(4.4)

It has been proved that the problem of finding an equilibrium point of this model can be formulated as EP(f, C):

$\mathsf{\text{Find}}\phantom{\rule{2.77695pt}{0ex}}{x}^{*}\in C\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{such}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{that}}\phantom{\rule{2.77695pt}{0ex}}f\left({x}^{*},y\right)\ge 0\phantom{\rule{1em}{0ex}}\mathsf{\text{for}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{all}}\phantom{\rule{2.77695pt}{0ex}}y\in C.$

Proposition 4.1 A point x* is an equilibrium point for the oligopolistic market problem if and only if it is a solution to EP(f, C), where

$\begin{array}{c}f\left(x,y\right):=⟨H\left(x\right)-p\left({\sigma }_{x}\right)e-{p}^{\prime }\left({\sigma }_{x}\right)x,y-x⟩,\hfill \\ H\left(x\right)={\left({h}_{1}^{\prime }\left({x}_{1}\right),...,{h}_{n}^{\prime }\left({x}_{n}\right)\right)}^{T},\phantom{\rule{1em}{0ex}}e={\left(1,...,1\right)}^{T},\phantom{\rule{1em}{0ex}}{\sigma }_{x}=⟨x,e⟩.\hfill \end{array}$

The following proposition gives some properties of the bifunction f.

Proposition 4.2 Let $p:{ℝ}_{+}\to {ℝ}_{+}$ be convex, twice continuously differentiable, and nonincreasing, and let the function ${\mu }_{\tau }:{ℝ}_{+}\to {ℝ}_{+}$ defined by ${\mu }_{\tau }\left({\sigma }_{x}\right)={\sigma }_{x}p\left({\sigma }_{x}+\tau \right)$ be concave for every τ ≥ 0. Also, let the functions ${h}_{i}:{ℝ}_{+}\to ℝ$, i = 1, ..., n, be convex and twice continuously differentiable. Then, the cost bifunction

$f\left(x,y\right):=⟨H\left(x\right)-p\left({\sigma }_{x}\right)e-{p}^{\prime }\left({\sigma }_{x}\right)x,y-x⟩$

is monotone on C.

We now apply the algorithm to the example with seven firms (n = 7) provided in [9, 17], where the cost and inverse demand functions have the form

$\begin{array}{c}H\left(x\right):={\left(2{x}_{1}+1,3{x}_{2}+4,4{x}_{3}+2,1.5{x}_{4}+3,4{x}_{5}+1,{x}_{6}-2,3{x}_{7}+1\right)}^{T},\hfill \\ p\left(t\right):=\frac{2}{3t},\phantom{\rule{1em}{0ex}}t\in \left(0,+\infty \right).\hfill \end{array}$

Then Propositions 4.1 and 4.2 show that the bifunction defined by (4.4) is monotone on C, and therefore the assumptions of our algorithm are satisfied.

In this example, we choose

$\begin{array}{cc}\hfill n& =7,\hfill \\ \hfill \eta & =0.1,\hfill \\ \hfill \gamma & =2,\hfill \\ \hfill \beta & =5,\hfill \\ \hfill \mu & =0.5,\hfill \\ \hfill {x}^{0}& ={\left(3,3,3,3,3,3,3\right)}^{T},\hfill \\ \hfill C& =\left\{x\in {ℝ}^{n}\left|13\le \sum _{i=1}^{n}{x}_{i}\le 25,1\le {x}_{i}\le 5\left(i=1,...,n\right)\right\right\}.\hfill \end{array}$

Note that in this case, at iteration k, we have

${\partial }_{2}f\left({z}^{k},{z}^{k}\right)=\left\{H\left({z}^{k}\right)-p\left({\sigma }_{{z}^{k}}\right)e-{p}^{\prime }\left({\sigma }_{{z}^{k}}\right){z}^{k}\right\},$

where ${p}^{\prime }\left({\sigma }_{{z}^{k}}\right)=-\frac{2}{3{\sigma }_{{z}^{k}}^{2}}$. Lemma 3.1 shows that if $r\left({x}^{k}\right)=0$, then ${x}^{k}$ is a solution to EP(f, C). Accordingly, we say that ${x}^{k}$ is an ϵ-solution to EP(f, C) if $∥r\left({x}^{k}\right)∥\le ϵ$. With the tolerance ϵ = 10⁻⁶, we obtained the results reported in Table 1.
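As a sanity check of this formula (our own sketch, not part of the reported experiments), the snippet below evaluates the subgradient at the starting point x⁰ = (3, ..., 3)ᵀ:

```python
# Sketch checking the subgradient formula above:
# w = H(z) - p(sigma_z) * e - p'(sigma_z) * z, with p(t) = 2/(3t).
def H(x):
    return [2*x[0] + 1, 3*x[1] + 4, 4*x[2] + 2, 1.5*x[3] + 3,
            4*x[4] + 1, x[5] - 2, 3*x[6] + 1]

def subgrad(z):
    s = sum(z)                     # sigma_z, the total production
    p = 2.0 / (3.0 * s)            # inverse demand p(sigma_z)
    dp = -2.0 / (3.0 * s * s)      # derivative p'(sigma_z)
    return [hi - p - dp * zi for hi, zi in zip(H(z), z)]

w = subgrad([3.0] * 7)             # subgradient at x^0 = (3, ..., 3)^T
```

At this point σ = 21, so the first component is 7 − 2/63 + 6/1323 ≈ 6.9728.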

The approximate solution obtained after seven iterations is

${x}^{7}={\left(2.0940,1.0000,1.0003,1.4610,1.0482,5.0001,1.3968\right)}^{T}.$

## References

1. Daniele P, Giannessi F, Maugeri A: Equilibrium Problems and Variational Models. Kluwer Academic Publishers; 2003.

2. Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer-Verlag, Berlin; 2000.

3. Anh PN: A logarithmic quadratic regularization method for solving pseudomonotone equilibrium problems. Acta Math Vietnam 2009, 34: 183–200.

4. Anh PN: An LQP regularization method for equilibrium problems on polyhedral. Vietnam J Math 2008, 36: 209–228.

5. Blum E, Oettli W: From optimization and variational inequality to equilibrium problems. Math Stud 1994, 63: 127–149.

6. Mastroeni G: Gap function for equilibrium problems. J Global Optim 2004, 27: 411–426.

7. Moudafi A: Proximal point algorithm extended to equilibrium problem. J Nat Geom 1999, 15: 91–100.

8. Noor MA: Auxiliary principle technique for equilibrium problems. J Optim Theory Appl 2004, 122: 371–386.

9. Bigi G, Castellani M, Pappalardo M: A new solution method for equilibrium problems. Optim Method Softw 2009, 24: 895–911.

10. Mangasarian OL, Solodov MV: A linearly convergent derivative-free descent method for strongly monotone complementarity problem. Comput Optim Appl 1999, 14: 5–16.

11. Anh PN: An interior-quadratic proximal method for solving monotone generalized variational inequalities. East West J Math 2008, 10: 81–100.

12. Auslender A, Teboulle M, Bentiba S: A logarithmic-quadratic proximal method for variational inequalities. J Comput Optim Appl 1999, 12: 31–40.

13. Anh PN, Kuno T: A cutting hyperplane method for generalized monotone non-Lipschitzian multivalued variational inequalities. In Modeling, Simulation and Optimization of Complex Processes. Springer, Heidelberg; 2012.

14. Schaible S, Karamardian S, Crouzeix JP: Characterizations of generalized monotone maps. J Optim Theory Appl 1993, 76: 399–413.

15. Anh PN, Muu LD, Strodiot JJ: Generalized projection method for non-Lipschitz multivalued monotone variational inequalities. Acta Math Vietnam 2009, 34: 67–79.

16. Anh PN, Muu LD, Nguyen VH, Strodiot JJ: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. J Optim Theory Appl 2005, 124: 285–306.

17. Murphy FH, Sherali HD, Soyster AL: A mathematical programming approach for determining oligopolistic market equilibrium. Math Program 1982, 24: 92–106.

## Acknowledgements

This study was completed while the first author was staying at Kyungnam University under the NRF Postdoctoral Fellowship for Foreign Researchers. The second author was supported by the Kyungnam University Research Fund, 2011.

## Author information


### Corresponding author

Correspondence to Jong Kyu Kim.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

JKK conceived the study and participated in its design and coordination. JKK suggested many ideas useful for the achievement of this paper and made the revision. PNA and JKK prepared the manuscript initially and performed all the steps of proof in this research. All authors read and approved the final manuscript.


Anh, P.N., Kim, J.K. An interior proximal cutting hyperplane method for equilibrium problems. J Inequal Appl 2012, 99 (2012). https://doi.org/10.1186/1029-242X-2012-99


### Keywords

• Equilibrium problems
• pseudomonotone
• interior proximal function
• cutting hyperplane method 