# A variational inequality method for computing a normalized equilibrium in the generalized Nash game

## Abstract

The generalized Nash equilibrium problem is a generalization of the standard Nash equilibrium problem, in which both the utility function and the strategy space of each player may depend on the strategies chosen by all other players. This problem has been used to model various applications, but convergent solution algorithms are extremely scarce in the literature. In this article, we show that a generalized Nash equilibrium can be computed by solving a variational inequality (VI). Moreover, conditions for the local superlinear convergence of a semismooth Newton method applied to the VI are given. Some numerical results are presented to illustrate the performance of the method.

## 1 Introduction

In this article, we consider the generalized Nash equilibrium problem (GNEP). To this end, we first recall the definition of the Nash equilibrium problem (NEP). There are N players, and each player $\nu \in \{1,\ldots,N\}$ controls the variables ${x}^{\nu }\in {\Re }^{{n}_{\nu }}$. All players' strategies are collectively denoted by a vector $x={\left({x}^{1},\ldots,{x}^{N}\right)}^{T}\in {\Re }^{n}$, where $n = n_1 + \cdots + n_N$. To emphasize the νth player's variables within the vector x, we sometimes write $x = (x^{\nu}, x^{-\nu})^T$, where ${x}^{-\nu }\in {\Re }^{{n}_{-\nu }}$ subsumes all the other players' variables.

Let ${\theta }^{\nu }:{\Re }^{n}\to \Re$ be the νth player's payoff (or loss or utility) function, and let ${X}^{\nu }\subseteq {\Re }^{{n}_{\nu }}$ be the strategy set of player ν. Then, ${x}^{*}={\left({x}^{*,1},\ldots,{x}^{*,N}\right)}^{T}\in {\Re }^{n}$ is called a Nash equilibrium, or a solution of the NEP, if each block component $x^{*,\nu}$ is a solution of the optimization problem

$\begin{array}{l}\underset{{x}^{\nu }}{\text{min}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}{\theta }^{\nu }\left({x}^{\nu },{x}^{*,-\nu }\right)\phantom{\rule{2em}{0ex}}\\ \text{s.t.}\phantom{\rule{1em}{0ex}}{x}^{\nu }\in {X}^{\nu }.\phantom{\rule{2em}{0ex}}\end{array}$

On the other hand, in a GNEP, each player's strategy belongs to a set ${X}_{\nu }\left({x}^{-\nu }\right)\subseteq {\Re }^{{n}_{\nu }}$ that depends on the rival players' strategies. The aim of each player ν, given the other players' strategies x-ν, is to choose a strategy $x^{\nu}$ that solves the minimization problem

$\begin{array}{l}\underset{{x}^{\nu }}{\text{min}}\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}{\theta }^{\nu }\left({x}^{\nu },{x}^{-\nu }\right)\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{1em}{0ex}}\text{s.t.}\phantom{\rule{1em}{0ex}}{x}^{\nu }\in {X}_{\nu }\left({x}^{-\nu }\right).\phantom{\rule{2em}{0ex}}\end{array}$

The GNEP is the problem of finding a vector x* such that each player's strategy $x^{*,\nu}$ satisfies

${\theta }^{\nu }\left({x}^{*,\nu },{x}^{*,-\nu }\right)\le {\theta }^{\nu }\left({y}^{\nu },{x}^{*,-\nu }\right),\phantom{\rule{1em}{0ex}}\forall {y}^{\nu }\in {X}_{\nu }\left({x}^{*,-\nu }\right).$

Such a vector x* is called a generalized Nash equilibrium or, more simply, a solution of the GNEP.

In this article, we focus on a special class of GNEPs referred to as jointly convex GNEPs. More precisely, we assume that there is a closed and convex set $X\subseteq {\Re }^{n}$, which represents the joint constraints of all the players, such that

${X}_{\nu }\left({x}^{-\nu }\right):=\left\{{x}^{\nu }\in {\Re }^{{n}_{\nu }}|\left({x}^{\nu },{x}^{-\nu }\right)\in X\right\},$
(1.1)

for all ν = 1,..., N. This condition turns out to hold in several applications. Throughout this article, we assume that the set X can be represented as

$X=\left\{x\in {\Re }^{n}|g\left(x\right)\le 0\right\}$
(1.2)

for some function $g:{\Re }^{n}\to {\Re }^{m}$. Additional equality constraints are also allowed, but for notational simplicity, we prefer not to include them explicitly. In many cases, a player ν might have additional constraints depending on his decision variables only. However, these can be viewed as part of the joint constraints g(x) ≤ 0, so we include the former constraints in the latter ones.

Assumption 1.1 (i) The utility functions θνare twice continuously differentiable and, as a function of xν alone, convex.

(ii) The function g is twice continuously differentiable, its components g i are convex (in x), and the corresponding strategy space X defined by (1.2) is nonempty.

The convexity assumptions are standard in the context of GNEPs. The smoothness assumptions are also very natural since our aim is to develop locally fast convergent methods for the solution of GNEPs.

The GNEP was formally introduced by Debreu [1] as early as 1952, but it is only from the mid-1990s that the GNEP attracted much attention, because of its capability of modeling a number of interesting problems in economics, computer science, telecommunications, and deregulated markets (e.g., see [2–4]). Another approach for solving the GNEP is based on the Nikaido-Isoda function. Relaxation methods and proximal-like methods using the Nikaido-Isoda function are investigated in [5–7]. A regularized version of the Nikaido-Isoda function was first introduced in [8] for standard NEPs and then further investigated by Heusinger and Kanzow [9], who reformulated the GNEP as a constrained optimization problem with a continuously differentiable objective function.

Motivated by the fact that a standard NEP can be reformulated as a variational inequality problem (VI for short), see, for example, [10, 11], Harker [12] characterized the GNEP as a quasi-variational inequality (QVI). But unlike the VI, there are few efficient methods for solving QVIs, and therefore such a reformulation is not widely used in designing implementable algorithms. On the other hand, it was noted in [13], for example, that certain solutions of the GNEP (the normalized Nash equilibria, to be defined later) can be found by solving a suitable standard VI associated to the GNEP.

Here, we further investigate the properties of the normalized Nash equilibria. The rest of the article is organized as follows. Section 2 gives some preliminaries. In Section 3, using the fact that the normalized Nash equilibria can be found by solving a suitable VI, we reformulate the VI associated with the GNEP as a semismooth system of equations and explore the nonsingularity of the B-subdifferential of the system. Finally, in Section 4, we apply a semismooth Newton method to some examples of the GNEP.

We use the following notation throughout the article. A function $G:{\Re }^{n}\to {\Re }^{t}$ is called a Ck-function if it is k times continuously differentiable. For a differentiable function $g:{\Re }^{n}\to {\Re }^{m}$, the Jacobian of g at $x\in {\Re }^{n}$ is denoted by $\mathcal{J}g\left(x\right)$, and its transpose by $\nabla g\left(x\right)$. Given a differentiable function $\Psi :{\Re }^{n}\to \Re$, the symbol ${\nabla }_{{x}^{\nu }}\Psi \left(x\right)$ denotes the partial gradient with respect to the xν-part only, and ${\nabla }_{{x}^{\nu }{x}^{\mu }}^{2}\Psi \left(x\right)$ denotes the second-order partial derivative with respect to the xν- and xμ-parts. For a function $f:{\Re }^{n}×{\Re }^{n}\to \Re$, $f\left(x,\cdot \right):{\Re }^{n}\to \Re$ denotes the function with x being fixed. For vectors $x,y\in {\Re }^{n}$, $⟨x,y⟩$ denotes the inner product defined by $⟨x,y⟩ := x^T y$, and $x \perp y$ means $⟨x,y⟩ = 0$.

## 2 Preliminaries

Let $F:{\Re }^{n}\to {\Re }^{m}$ be a locally Lipschitz continuous function. By Rademacher's theorem, F is differentiable almost everywhere. Let D F denote the set of points where F is differentiable. Then, the Bouligand-subdifferential (B-subdifferential for short) of F at x is given by (see [14])

${\partial }_{B}F\left(x\right):=\left\{H\in {\Re }^{m×n}|\exists \left\{{x}^{k}\right\}\subseteq {D}_{F}:{x}^{k}\to x,H=\underset{k\to \infty }{\text{lim}}\mathcal{J}F\left({x}^{k}\right)\right\}.$

Its convex hull

$\partial F\left(x\right):=\text{conv}\phantom{\rule{2.77695pt}{0ex}}\left\{{\partial }_{B}F\left(x\right)\right\}$

is Clarke's generalized Jacobian of F at x (see [15]).
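As a concrete numerical illustration (a sketch added here, not part of the original analysis), take the scalar min-function $\phi(a,b) = \min\{a,b\}$, which is used below to reformulate the KKT conditions. Away from the kink $a = b$ it is differentiable; approaching the kink along the two smooth branches yields the two limiting gradients, which together form ${\partial }_{B}\phi$, and their convex hull is Clarke's generalized gradient:

```python
def min_grad(a, b):
    # Gradient of phi(a, b) = min{a, b} at a point of differentiability (a != b).
    return (1.0, 0.0) if a < b else (0.0, 1.0)

# Approach the kink (a, b) = (1, 1) along two smooth branches.
grads = set()
for k in range(1, 6):
    t = 10.0 ** (-k)
    grads.add(min_grad(1.0 - t, 1.0))   # a < b branch: gradient (1, 0)
    grads.add(min_grad(1.0 + t, 1.0))   # a > b branch: gradient (0, 1)

# grads is the B-subdifferential at the kink; Clarke's generalized gradient
# is its convex hull {(mu, 1 - mu) : mu in [0, 1]}.
print(sorted(grads))   # [(0.0, 1.0), (1.0, 0.0)]
```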

Based on this notation, we next recall the definition of a semismooth function. This concept was first introduced by Mifflin [16] for real-valued functions and extended by Qi and Sun [17] to vector-valued mappings.

Definition 2.1 Let$\Phi :\mathcal{O}\subseteq {\Re }^{n}\to {\Re }^{m}$be a locally Lipschitz continuous function on the open set$\mathcal{O}$. We say that Φ is semismooth at a point$x\in \mathcal{O}$if

1. (i)

Φ is directionally differentiable at x; and

2. (ii)

for any $\Delta x\in {\Re }^{n}$ and $V\in \partial \Phi \left(x+\Delta x\right)$ with Δx → 0,

$\Phi \left(x+\Delta x\right)-\Phi \left(x\right)-V\left(\Delta x\right)=\text{o}\left(∥\Delta x∥\right).$

Furthermore, Φ is said to be strongly semismooth at $x\in \mathcal{O}$ if Φ is semismooth at x and for any $\Delta x\in {\Re }^{n}$ and $V\in \partial \Phi \left(x+\Delta x\right)$ with Δx → 0,

$\Phi \left(x+\Delta x\right)-\Phi \left(x\right)-V\left(\Delta x\right)=\text{O}\left({∥\Delta x∥}^{2}\right).$

In the study of algorithms for locally Lipschitzian systems of equations, the following regularity condition plays a role similar to that of the nonsingularity of the Jacobian in the study of algorithms for smooth systems of equations.

Definition 2.2 Let $G:{\Re }^{n}\to {\Re }^{n}$ be Lipschitzian around x. G is said to be BD-regular at x if all the elements in ${\partial }_{B}G\left(x\right)$ are nonsingular. If $\stackrel{̄}{x}$ is a solution of the system G(x) = 0 and G is BD-regular at $\stackrel{̄}{x}$, then $\stackrel{̄}{x}$ is called a BD-regular solution of this system.

Given a closed convex set $K\subseteq {\Re }^{n}$ and a continuous function $G:K\to {\Re }^{n}$, solving the VI defined by K and G (which is denoted by VI(G, K)) means finding a vector x K such that

$G{\left(x\right)}^{T}\left(y-x\right)\ge 0,\phantom{\rule{1em}{0ex}}for\phantom{\rule{2.77695pt}{0ex}}all\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}y\in K.$
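A standard way to check a candidate solution of VI(G, K) numerically is via the natural-map residual: x solves VI(G, K) if and only if $x = P_K(x - G(x))$, where $P_K$ denotes the Euclidean projection onto K. The following sketch (hypothetical helper names and illustrative data, not from the text) verifies this for a box-constrained example:

```python
import numpy as np

def natural_residual(G, proj, x):
    # x solves VI(G, K) iff x = proj(x - G(x)); the residual measures violation.
    return np.linalg.norm(x - proj(x - G(x)))

# Example: K = [0, 1]^2, G(x) = x - c with c = (2, -1) outside K.
proj = lambda z: np.clip(z, 0.0, 1.0)       # projection onto the box
G = lambda x: x - np.array([2.0, -1.0])     # strongly monotone operator

x_star = np.array([1.0, 0.0])               # P_K(c): the VI solution
assert natural_residual(G, proj, x_star) < 1e-12   # residual vanishes at the solution
```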

Define the function $F:{\Re }^{n}\to {\Re }^{n}$ by

$F\left(x\right):=\left(\begin{array}{c}\hfill {\nabla }_{{x}^{1}}{\theta }^{1}\left(x\right)\hfill \\ \hfill ⋮\hfill \\ \hfill {\nabla }_{{x}^{N}}{\theta }^{N}\left(x\right)\hfill \end{array}\right),$

We now state a result from [13], which will be used later.

Lemma 2.1 Suppose that the GNEP satisfies Assumption 1.1 and assume further that the sets X ν (x-ν) are defined by (1.1) with X closed and convex. Then, every solution of the VI(F, X) is a solution of the GNEP.

## 3 The nonsmooth equation reformulation and nonsingularity conditions

Consider the GNEP from Section 1 with utility functions θνand a strategy set X satisfying the requirements of Assumption 1.1. In this section, our aim is to show that the GNEP can be reformulated as a nonsmooth system of equations, and then to present several conditions guaranteeing the BD-regularity of this system.

Suppose that x is a solution of the GNEP. If, for player ν, a suitable constraint qualification (like the Slater condition) holds, then there exists a Lagrange multiplier ${\lambda }^{\nu }\in {\Re }^{m}$ such that the Karush-Kuhn-Tucker (KKT) conditions

$\begin{array}{l}{\nabla }_{{x}^{\nu }}{\theta }^{\nu }\left({x}^{\nu },{x}^{-\nu }\right)+{\nabla }_{{x}^{\nu }}g\left({x}^{\nu },{x}^{-\nu }\right){\lambda }^{\nu }=0,\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}0\le {\lambda }^{\nu }\perp -g\left({x}^{\nu },{x}^{-\nu }\right)\ge 0\phantom{\rule{2em}{0ex}}\end{array}$
(3.1)

are satisfied.

Let us consider the KKT conditions for the VI(F,X). Assuming that a suitable constraint qualification holds at a solution x, the KKT conditions can be expressed as

$\begin{array}{l}F\left(x\right)+\nabla g\left(x\right)\lambda =0,\phantom{\rule{2em}{0ex}}\\ 0\le \lambda \perp -g\left(x\right)\ge 0,\phantom{\rule{2em}{0ex}}\end{array}$
(3.2)

which is equivalent to

$\begin{array}{c}\left(\begin{array}{c}\hfill {\nabla }_{{x}^{1}}{\theta }^{1}\left(x\right)\hfill \\ \hfill ⋮\hfill \\ \hfill {\nabla }_{{x}^{N}}{\theta }^{N}\left(x\right)\hfill \end{array}\right)+\left(\begin{array}{c}\hfill {\nabla }_{{x}^{1}}g\left(x\right)\hfill \\ \hfill ⋮\hfill \\ \hfill {\nabla }_{{x}^{N}}g\left(x\right)\hfill \end{array}\right)\lambda =0,\\ 0\le \lambda \perp -g\left(x\right)\ge 0.\end{array}$
(3.3)

The next lemma from [13] relates the normalized Nash equilibria to the KKT conditions (3.3).

Lemma 3.1 (i) Let x be a solution of VI(F,X) at which the KKT conditions (3.3) hold. Then x is a solution of the GNEP (a normalized Nash equilibrium) at which the KKT conditions (3.1) hold with ${\lambda }^{1}={\lambda }^{2}=\cdots ={\lambda }^{N}=\lambda$.

(ii) Vice versa, let x be a solution of the GNEP at which the KKT conditions (3.1) hold with ${\lambda }^{1}={\lambda }^{2}=\cdots ={\lambda }^{N}$. Then x is a solution of VI(F, X).

Using the minimum function $\phi :\Re ×\Re \to \Re ,\phantom{\rule{2.77695pt}{0ex}}\phi \left(a,b\right):=\text{min}\left\{a,b\right\}$, the KKT conditions (3.2) can equivalently be written as the nonlinear system of equations

$\Phi \left(\omega \right):=\Phi \left(x,\lambda \right)=0,$
(3.4)

where $\Phi :{\Re }^{n+m}\to {\Re }^{n+m}$ is defined by

$\Phi \left(\omega \right)=\Phi \left(x,\lambda \right):=\left(\begin{array}{c}\hfill L\left(x,\lambda \right)\hfill \\ \hfill \varphi \left(-g\left(x\right),\lambda \right)\hfill \end{array}\right),$

and

$\begin{array}{l}L\left(x,\lambda \right):=F\left(x\right)+\nabla g\left(x\right)\lambda ,\phantom{\rule{2em}{0ex}}\\ \varphi \left(-g\left(x\right),\lambda \right):={\left(\phi \left(-{g}_{1}\left(x\right),{\lambda }_{1}\right),...,\phi \left(-{g}_{m}\left(x\right),{\lambda }_{m}\right)\right)}^{T}\in {\Re }^{m}.\phantom{\rule{2em}{0ex}}\end{array}$

From Assumption 1.1, we know that Φ is semismooth.
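For a concrete feel of the reformulation (3.4), the following sketch builds Φ from user-supplied F, g, and the Jacobian of g, and checks it on a one-player toy problem with $\theta(x)=\frac{1}{2}x^2-x$ and $g(x)=x-0.5$, whose KKT point is $(x,\lambda)=(0.5,0.5)$. All helper names here are ours, not from the references:

```python
import numpy as np

def make_Phi(F, g, Jg, n):
    """Build Phi(omega) = (F(x) + grad g(x) lambda, min{-g(x), lambda}) for
    omega = (x, lambda) with x in R^n; hypothetical helper names."""
    def Phi(omega):
        x, lam = omega[:n], omega[n:]
        L = F(x) + Jg(x).T @ lam          # Lagrangian block L(x, lambda)
        comp = np.minimum(-g(x), lam)     # componentwise min{-g_i(x), lambda_i}
        return np.concatenate([L, comp])
    return Phi

# One-player toy problem: theta(x) = 0.5 x^2 - x, g(x) = x - 0.5 <= 0,
# so F(x) = x - 1 and the KKT point is (x, lambda) = (0.5, 0.5).
Phi = make_Phi(lambda x: x - 1.0,
               lambda x: x - 0.5,
               lambda x: np.ones((1, 1)), n=1)

assert np.linalg.norm(Phi(np.array([0.5, 0.5]))) < 1e-12   # KKT point solves (3.4)
```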

In the following, our aim is to present several conditions guaranteeing that all elements in the generalized Jacobian ∂Φ(ω) (and hence in the B-subdifferential ∂ B Φ(ω)) are nonsingular. Our first result gives a description of the structure of the matrices in the generalized Jacobian ∂Φ(ω).

Lemma 3.2 Let $\omega =\left(x,\lambda \right)\in {\Re }^{n+m}$. Then, each element $H\in \partial \Phi {\left(\omega \right)}^{T}$ can be represented as follows:

$H=\left[\begin{array}{cc}\hfill {\nabla }_{x}L\left(\omega \right)\hfill & \hfill -\nabla g\left(x\right){D}_{a}\left(\omega \right)\hfill \\ \hfill \nabla g{\left(x\right)}^{T}\hfill & \hfill {D}_{b}\left(\omega \right)\hfill \end{array}\right],$

where ${D}_{a}\left(\omega \right):=\text{diag}\left({a}_{1}\left(\omega \right),...,{a}_{m}\left(\omega \right)\right),\phantom{\rule{2.77695pt}{0ex}}{D}_{b}\left(\omega \right):=\text{diag}\left({b}_{1}\left(\omega \right),...,{b}_{m}\left(\omega \right)\right)\in {\Re }^{m×m}$ are diagonal matrices whose ith diagonal elements are given by

${a}_{i}\left(\omega \right)=\begin{cases}1,&\text{if }-{g}_{i}\left(x\right)<{\lambda }_{i},\\ 0,&\text{if }-{g}_{i}\left(x\right)>{\lambda }_{i},\\ {\mu }_{i},&\text{if }-{g}_{i}\left(x\right)={\lambda }_{i},\end{cases}\quad and\quad {b}_{i}\left(\omega \right)=\begin{cases}0,&\text{if }-{g}_{i}\left(x\right)<{\lambda }_{i},\\ 1,&\text{if }-{g}_{i}\left(x\right)>{\lambda }_{i},\\ 1-{\mu }_{i},&\text{if }-{g}_{i}\left(x\right)={\lambda }_{i},\end{cases}$

for any μ i [0,1].

Proof. The first n components of the vector function Φ are continuously differentiable, so the expression for the first n columns of H readily follows. Now consider the last m columns. We use the fact that

$\partial \varphi {\left(-g\left(x\right),\lambda \right)}^{T}\subset \partial \phi {\left(-{g}_{1}\left(x\right),{\lambda }_{1}\right)}^{T}×\cdots ×\partial \phi {\left(-{g}_{m}\left(x\right),{\lambda }_{m}\right)}^{T},$

If i is such that $-{g}_{i}\left(x\right)\ne {\lambda }_{i}$, then φ is continuously differentiable at $\left(-{g}_{i}\left(x\right),{\lambda }_{i}\right)$, and the expression for the (n + i)th column of H follows. If instead $-{g}_{i}\left(x\right)={\lambda }_{i}$, then, using the definition of the B-subdifferential, it follows that

${\partial }_{B}\phi {\left(-{g}_{i}\left(x\right),{\lambda }_{i}\right)}^{T}=\left\{\left(-\nabla {g}_{i}{\left(x\right)}^{T},0\right),\left(0,{e}_{i}^{T}\right)\right\}.$

Taking the convex hull, we get

$\partial \phi {\left(-{g}_{i}\left(x\right),{\lambda }_{i}\right)}^{T}=\left\{\left(-{\mu }_{i}\nabla {g}_{i}{\left(x\right)}^{T},\left(1-{\mu }_{i}\right){e}_{i}^{T}\right)|{\mu }_{i}\in \left[0,1\right]\right\}.$

This gives the representation of $H\in \partial \Phi {\left(\omega \right)}^{T}$.
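The case distinction of Lemma 3.2 translates directly into code. The sketch below (a hypothetical helper of ours) assembles one element of $\partial \Phi {\left(\omega \right)}^{T}$ from $\nabla_x L(\omega)$, $\nabla g(x)$, and a choice of ${\mu }_{i}\in \left[0,1\right]$ for the kink indices:

```python
import numpy as np

def jacobian_element(grad_L_x, grad_g, g_val, lam, mu):
    """One element of the generalized Jacobian per Lemma 3.2 (transposed
    convention): H = [[grad_L_x, -grad_g Da], [grad_g^T, Db]].

    grad_L_x : (n, n) matrix nabla_x L(omega)
    grad_g   : (n, m) matrix nabla g(x) (transposed Jacobian of g)
    g_val, lam, mu : length-m vectors; mu[i] in [0, 1] selects the element
                     whenever -g_i(x) = lambda_i (the kink case).
    """
    a = np.where(-g_val < lam, 1.0, np.where(-g_val > lam, 0.0, mu))
    b = 1.0 - a                      # b_i = 0, 1, or 1 - mu_i in the three cases
    Da, Db = np.diag(a), np.diag(b)
    top = np.hstack([grad_L_x, -grad_g @ Da])
    bot = np.hstack([grad_g.T, Db])
    return np.vstack([top, bot])

# Tiny check with n = m = 1: -g_1(x) = 0 < lambda_1 = 0.5, so a_1 = 1, b_1 = 0.
H = jacobian_element(np.array([[1.0]]), np.array([[1.0]]),
                     g_val=np.array([0.0]), lam=np.array([0.5]), mu=np.array([0.3]))
assert np.allclose(H, [[1.0, -1.0], [1.0, 0.0]])
```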

Our next aim is to establish conditions guaranteeing that all elements in the generalized Jacobian ∂Φ(ω) at a point ω = (x,λ) satisfying Φ(ω) = 0 are nonsingular.

Theorem 3.1 Let${\omega }^{*}=\left({x}^{*},{\lambda }^{*}\right)\in {\Re }^{n+m}$be a solution of the system Φ(ω) = 0. Consider the following two statements:

1. (a)

The strong second-order sufficient condition and the linear independence constraint qualification (LICQ) for VI(F,X) hold at x*.

2. (b)

Any element in ∂Φ(ω*) is nonsingular.

It holds that (a) $\Rightarrow$ (b).

Proof. For the sake of notational simplicity, let us define the following subsets of the index set I := {1,...,m},

${I}_{0}:=\left\{i|{g}_{i}\left({x}^{*}\right)=0,{\lambda }_{i}^{*}\ge 0\right\},\phantom{\rule{1em}{0ex}}{I}_{<}:=\left\{i|{g}_{i}\left({x}^{*}\right)<0,{\lambda }_{i}^{*}=0\right\}.$

Moreover, we need

$\begin{array}{ll}{I}_{00}:=\left\{i|{g}_{i}\left({x}^{*}\right)=0,{\lambda }_{i}^{*}=0\right\},&{I}_{+}:=\left\{i|{g}_{i}\left({x}^{*}\right)=0,{\lambda }_{i}^{*}>0\right\},\\ {I}_{01}:=\left\{i\in {I}_{00}|{\mu }_{i}=1\right\},&{I}_{02}:=\left\{i\in {I}_{00}|{\mu }_{i}\in \left(0,1\right)\right\},\\ {I}_{03}:=\left\{i\in {I}_{00}|{\mu }_{i}=0\right\}.&\end{array}$

The following relationships between these index sets can easily be seen to hold:

$I={I}_{0}\cup {I}_{<},\phantom{\rule{1em}{0ex}}{I}_{0}={I}_{00}\cup {I}_{+},\phantom{\rule{1em}{0ex}}{I}_{00}={I}_{01}\cup {I}_{02}\cup {I}_{03}.$

Using a suitable reordering of the constraints, every element H ∂Φ(ω*)Thas the following structure:

$H=\left[\begin{array}{cccccc}\hfill {\nabla }_{x}L\left({\omega }^{*}\right)\hfill & \hfill -\nabla {g}_{+}\left({x}^{*}\right)\hfill & \hfill -\nabla {g}_{01}\left({x}^{*}\right)\hfill & \hfill -\nabla {g}_{02}\left({x}^{*}\right){D}_{a}{\left({\omega }^{*}\right)}_{02}\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{+}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{01}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{02}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill {D}_{b}{\left({\omega }^{*}\right)}_{02}\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{03}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill I\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{<}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill I\hfill \end{array}\right],$
(3.5)

where D a (ω*)02 and D b (ω*)02 are positive definite diagonal matrices. Note that we abbreviated ${g}_{{I}_{+}}$ etc. by g+ etc. in (3.5). It is obvious that H is nonsingular if and only if the following matrix is nonsingular,

$\left[\begin{array}{cccccc}\hfill {\nabla }_{x}L\left({\omega }^{*}\right)\hfill & \hfill -\nabla {g}_{+}\left({x}^{*}\right)\hfill & \hfill -\nabla {g}_{01}\left({x}^{*}\right)\hfill & \hfill -\nabla {g}_{02}\left({x}^{*}\right)\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{+}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{01}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{02}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill {D}_{b}{\left({\omega }^{*}\right)}_{02}{D}_{a}{\left({\omega }^{*}\right)}_{02}^{-1}\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{03}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill I\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{<}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill I\hfill \end{array}\right].$

In turn, this matrix is nonsingular if and only if the following matrix is nonsingular:

$\left[\begin{array}{cccc}\hfill {\nabla }_{x}L\left({\omega }^{*}\right)\hfill & \hfill -\nabla {g}_{+}\left({x}^{*}\right)\hfill & \hfill -\nabla {g}_{01}\left({x}^{*}\right)\hfill & \hfill -\nabla {g}_{02}\left({x}^{*}\right)\hfill \\ \hfill \nabla {g}_{+}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{01}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{02}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill {D}_{b}{\left({\omega }^{*}\right)}_{02}{D}_{a}{\left({\omega }^{*}\right)}_{02}^{-1}\hfill \end{array}\right].$
(3.6)

Let $\left(\Delta {x}_{1},\Delta {x}_{2},\Delta {x}_{3},\Delta {x}_{4}\right)\in {\Re }^{n}×{\Re }^{\left|{I}_{+}\right|}×{\Re }^{\left|{I}_{01}\right|}×{\Re }^{\left|{I}_{02}\right|}$ be such that

$\left[\begin{array}{cccc}\hfill {\nabla }_{x}L\left({\omega }^{*}\right)\hfill & \hfill -\nabla {g}_{+}\left({x}^{*}\right)\hfill & \hfill -\nabla {g}_{01}\left({x}^{*}\right)\hfill & \hfill -\nabla {g}_{02}\left({x}^{*}\right)\hfill \\ \hfill \nabla {g}_{+}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{01}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \nabla {g}_{02}{\left({x}^{*}\right)}^{T}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill {D}_{b}{\left({\omega }^{*}\right)}_{02}{D}_{a}{\left({\omega }^{*}\right)}_{02}^{-1}\hfill \end{array}\right]\left[\begin{array}{c}\hfill \Delta {x}_{1}\hfill \\ \hfill \Delta {x}_{2}\hfill \\ \hfill \Delta {x}_{3}\hfill \\ \hfill \Delta {x}_{4}\hfill \end{array}\right]=0,$
(3.7)

Then

$\begin{array}{c}{\nabla }_{x}L\left({\omega }^{*}\right)\Delta {x}_{1}-\nabla {g}_{+}\left({x}^{*}\right)\Delta {x}_{2}-\nabla {g}_{01}\left({x}^{*}\right)\Delta {x}_{3}-\nabla {g}_{02}\left({x}^{*}\right)\Delta {x}_{4}=0,\\ \nabla {g}_{+}{\left({x}^{*}\right)}^{T}\Delta {x}_{1}=0,\\ \nabla {g}_{01}{\left({x}^{*}\right)}^{T}\Delta {x}_{1}=0,\\ \nabla {g}_{02}{\left({x}^{*}\right)}^{T}\Delta {x}_{1}+\left[{D}_{b}{\left({\omega }^{*}\right)}_{02}{D}_{a}{\left({\omega }^{*}\right)}_{02}^{-1}\right]\Delta {x}_{4}=0.\end{array}$
(3.8)

By the first, second and third equations of (3.8), we obtain that

$\begin{array}{ll}\hfill 0& =⟨\Delta {x}_{1},{\nabla }_{x}L\left({\omega }^{*}\right)\Delta {x}_{1}-\nabla {g}_{+}\left({x}^{*}\right)\Delta {x}_{2}-\nabla {g}_{01}\left({x}^{*}\right)\Delta {x}_{3}-\nabla {g}_{02}\left({x}^{*}\right)\Delta {x}_{4}⟩\phantom{\rule{2em}{0ex}}\\ =⟨\Delta {x}_{1},{\nabla }_{x}L\left({\omega }^{*}\right)\Delta {x}_{1}⟩-⟨\Delta {x}_{1},\nabla {g}_{+}\left({x}^{*}\right)\Delta {x}_{2}⟩-⟨\Delta {x}_{1},\nabla {g}_{01}\left({x}^{*}\right)\Delta {x}_{3}⟩\phantom{\rule{2em}{0ex}}\\ \phantom{\rule{1em}{0ex}}-⟨\Delta {x}_{1},\nabla {g}_{02}\left({x}^{*}\right)\Delta {x}_{4}⟩\phantom{\rule{2em}{0ex}}\\ =⟨\Delta {x}_{1},{\nabla }_{x}L\left({\omega }^{*}\right)\Delta {x}_{1}⟩-⟨\Delta {x}_{1},\nabla {g}_{02}\left({x}^{*}\right)\Delta {x}_{4}⟩,\phantom{\rule{2em}{0ex}}\end{array}$

which, together with the last equation of (3.8), implies that

$⟨\Delta {x}_{1},{\nabla }_{x}L\left({\omega }^{*}\right)\Delta {x}_{1}⟩=-\Delta {x}_{4}^{T}\left[{D}_{b}{\left({\omega }^{*}\right)}_{02}{D}_{a}{\left({\omega }^{*}\right)}_{02}^{-1}\right]\Delta {x}_{4}\le 0.$
(3.9)

From the second equation of (3.8), we know that

$\Delta {x}_{1}\in \text{a}\text{f}\text{f}\left(C\left({x}^{*}\right)\right),$

where C(x*) denotes the critical cone of VI(F,X). Then, (3.9) and the strong second-order sufficient condition imply that

$\Delta {x}_{1}=0.$

Thus, the first equation of (3.8) reduces to

$\nabla {g}_{+}\left({x}^{*}\right)\Delta {x}_{2}+\nabla {g}_{01}\left({x}^{*}\right)\Delta {x}_{3}+\nabla {g}_{02}\left({x}^{*}\right)\Delta {x}_{4}=0.$
(3.10)

By the LICQ for VI(F,X), we have

$\Delta {x}_{2}=0,\phantom{\rule{2.77695pt}{0ex}}\Delta {x}_{3}=0,\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{2.77695pt}{0ex}}\text{and}\phantom{\rule{2.77695pt}{0ex}}\Delta {x}_{4}=0.$

This together with Δx1 = 0 shows that the matrix (3.6) is nonsingular, and hence H is nonsingular.

Now, we are able to apply Theorem 3.1 to some classes of GNEPs.

Proposition 3.1 Let ${\omega }^{*}=\left({x}^{*},{\lambda }^{*}\right)\in {\Re }^{n+m}$ be such that Φ(ω*) = 0, and suppose that for all ν = 1,..., N the payoff functions θν are separable, that is,

${\theta }^{\nu }\left(x\right)={f}^{\nu }\left({x}^{\nu }\right)+{h}^{\nu }\left({x}^{-\nu }\right),$

where ${f}^{\nu }:{\Re }^{{n}_{\nu }}\to \Re$ is strongly convex and ${h}^{\nu }:{\Re }^{n-{n}_{\nu }}\to \Re$. Assume that the LICQ holds at x*. Then all elements $H\in \partial \Phi \left({\omega }^{*}\right)$ are nonsingular.

Proof. We know that

$F\left({x}^{*}\right)=\left(\begin{array}{c}\hfill {\nabla }_{{x}^{1}}{\theta }^{1}\left({x}^{*}\right)\hfill \\ \hfill ⋮\hfill \\ \hfill {\nabla }_{{x}^{N}}{\theta }^{N}\left({x}^{*}\right)\hfill \end{array}\right),$

then, by the definition of θν(·), we have

$\nabla F\left({x}^{*}\right)=\left(\begin{array}{cccc}{\nabla }_{{x}^{1}{x}^{1}}^{2}{\theta }^{1}\left({x}^{*}\right)&{\nabla }_{{x}^{1}{x}^{2}}^{2}{\theta }^{1}\left({x}^{*}\right)&\cdots &{\nabla }_{{x}^{1}{x}^{N}}^{2}{\theta }^{1}\left({x}^{*}\right)\\ {\nabla }_{{x}^{2}{x}^{1}}^{2}{\theta }^{2}\left({x}^{*}\right)&{\nabla }_{{x}^{2}{x}^{2}}^{2}{\theta }^{2}\left({x}^{*}\right)&\cdots &{\nabla }_{{x}^{2}{x}^{N}}^{2}{\theta }^{2}\left({x}^{*}\right)\\ \vdots &\vdots &\ddots &\vdots \\ {\nabla }_{{x}^{N}{x}^{1}}^{2}{\theta }^{N}\left({x}^{*}\right)&{\nabla }_{{x}^{N}{x}^{2}}^{2}{\theta }^{N}\left({x}^{*}\right)&\cdots &{\nabla }_{{x}^{N}{x}^{N}}^{2}{\theta }^{N}\left({x}^{*}\right)\end{array}\right)=\mathrm{diag}\left({\nabla }^{2}{f}^{1}\left({x}^{*,1}\right),\ldots ,{\nabla }^{2}{f}^{N}\left({x}^{*,N}\right)\right),$

since the mixed second-order derivatives ${\nabla }_{{x}^{\nu }{x}^{\mu }}^{2}{\theta }^{\nu }$ vanish for μ ≠ ν by separability.

By the strong convexity of fν, we conclude that $\nabla F\left({x}^{*}\right)$ is positive definite.

From ${\lambda }_{i}^{*}\ge 0$ and the convexity of g i , we obtain that

${\nabla }_{x}\left(\nabla g\left({x}^{*}\right){\lambda }^{*}\right)=\sum _{i=1}^{m}{\lambda }_{i}^{*}{\nabla }^{2}{g}_{i}\left({x}^{*}\right)$

is positive semidefinite, which together with the positive definiteness of $\nabla F\left({x}^{*}\right)$ implies that

${\nabla }_{x}L\left({\omega }^{*}\right)=\nabla F\left({x}^{*}\right)+\sum _{i=1}^{m}{\lambda }_{i}^{*}{\nabla }^{2}{g}_{i}\left({x}^{*}\right)$

is positive definite. Thus, the strong second-order sufficient condition for the VI(F, X) holds at x*. From Theorem 3.1, we obtain that any element in ∂Φ(ω*) is nonsingular.

Proposition 3.2 Let${\omega }^{*}=\left({x}^{*},{\lambda }^{*}\right)\in {\Re }^{n+m}$be such that Φ(ω*) = 0. Consider the case where the payoff functions are quadratic, i.e. for all ν = 1,...,N one has

${\theta }^{\nu }\left(x\right):=\frac{1}{2}{\left({x}^{\nu }\right)}^{T}{A}_{\nu \nu }{x}^{\nu }+\sum _{\mu =1,\mu \ne \nu }^{N}{\left({x}^{\nu }\right)}^{T}{A}_{\nu \mu }{x}^{\mu },$

where ${A}_{\nu \mu }\in {\Re }^{{n}_{\nu }×{n}_{\mu }}$ and the matrices A νν are symmetric. Suppose that the LICQ holds at x*, and that

$\mathbf{B}:=\left[\begin{array}{cccc}\hfill {A}_{11}\hfill & \hfill {A}_{12}\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill {A}_{1N}\hfill \\ \hfill {A}_{21}\hfill & \hfill {A}_{22}\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill {A}_{2N}\hfill \\ \hfill ⋮\hfill & \hfill ⋮\hfill & \hfill \ddots \hfill & \hfill ⋮\hfill \\ \hfill {A}_{N1}\hfill & \hfill {A}_{N2}\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill {A}_{NN}\hfill \end{array}\right]$

is positive definite. Then all the elements in the generalized Jacobian ∂Φ(ω*) are nonsingular.

Proof. We show that ${\nabla }_{x}L\left({\omega }^{*}\right)$ is positive definite, which implies that the strong second-order sufficient condition for the VI(F,X) holds at x*, and then apply Theorem 3.1. To this end, first note that

$F\left({x}^{*}\right)=\left(\begin{array}{c}\hfill {\nabla }_{{x}^{1}}{\theta }^{1}\left({x}^{*}\right)\hfill \\ \hfill {\nabla }_{{x}^{2}}{\theta }^{2}\left({x}^{*}\right)\hfill \\ \hfill ⋮\hfill \\ \hfill {\nabla }_{{x}^{N}}{\theta }^{N}\left({x}^{*}\right)\hfill \end{array}\right)=\left(\begin{array}{c}\hfill {\sum }_{\mu =1}^{N}{A}_{1\mu }{x}^{\mu }\hfill \\ \hfill {\sum }_{\mu =1}^{N}{A}_{2\mu }{x}^{\mu }\hfill \\ \hfill ⋮\hfill \\ \hfill {\sum }_{\mu =1}^{N}{A}_{N\mu }{x}^{\mu }\hfill \end{array}\right).$

Moreover,

$\nabla F\left({x}^{*}\right)=\left(\begin{array}{cccc}{A}_{11}&{A}_{12}&\cdots &{A}_{1N}\\ {A}_{21}&{A}_{22}&\cdots &{A}_{2N}\\ \vdots &\vdots &\ddots &\vdots \\ {A}_{N1}&{A}_{N2}&\cdots &{A}_{NN}\end{array}\right)=\mathbf{B},$

which together with ${\lambda }_{i}^{*}\ge 0$ and the convexity of g i implies that

${\nabla }_{x}\left(\nabla g\left({x}^{*}\right){\lambda }^{*}\right)=\sum _{i=1}^{m}{\lambda }_{i}^{*}{\nabla }^{2}{g}_{i}\left({x}^{*}\right)$

is positive semidefinite. Since $\nabla F\left({x}^{*}\right)=\mathbf{B}$ is positive definite by assumption, ${\nabla }_{x}L\left({\omega }^{*}\right)$ is positive definite. The statement therefore follows from Theorem 3.1.

## 4 Numerical illustrations

Here, we want to illustrate the performance of the VI method on some GNEPs taken from the literature. To this end, we apply a nonsmooth Newton method to the nonlinear system of equations Φ(ω) = 0. The globalization strategy is based on the merit function

$\Psi \left(\omega \right):=\frac{1}{2}\Phi {\left(\omega \right)}^{T}\Phi \left(\omega \right).$

A simple Armijo-type line search is used in the algorithm, and we switch to the steepest descent direction whenever the generalized Newton direction is not computable or does not satisfy a sufficient decrease condition.

Algorithm 4.1

Step 0 Choose ${\omega }^{0}=\left({x}^{0},{\lambda }^{0}\right)\in {\Re }^{n+m},\rho >0,\kappa >2,\sigma \in \left(0,\frac{1}{2}\right),\beta \in \left(0,1\right),\epsilon \ge 0$, and set k = 0.

Step 1 If $\Psi \left({\omega }^{k}\right)\le \epsilon$, stop.

Step 2 Select an element ${H}_{k}\in {\partial }_{B}\Phi \left({\omega }^{k}\right)$. Find a solution dkof the linear system

${H}_{k}d=-\Phi \left({\omega }^{k}\right).$
(4.1)

If system (4.1) is not solvable, or if $d^k$ does not satisfy the condition

$\nabla \Psi {\left({\omega }^{k}\right)}^{T}{d}^{k}\le -\rho {∥{d}^{k}∥}^{\kappa },$
(4.2)

then set

${d}^{k}=-\nabla \Psi \left({\omega }^{k}\right).$
(4.3)

Step 3 Let $t^k$ be the largest number in $\{\beta^j \mid j = 0, 1, 2, \ldots\}$ such that

$\Psi \left({\omega }^{k}+{t}^{k}{d}^{k}\right)\le \Psi \left({\omega }^{k}\right)+{t}^{k}\sigma \nabla \Psi {\left({\omega }^{k}\right)}^{T}{d}^{k}.$
(4.4)

Step 4 Set $\omega^{k+1} = \omega^k + t^k d^k$, $k = k + 1$, and go to Step 1.
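The steps above can be sketched in Python. This is a hedged illustration, not the authors' MATLAB implementation: the test problem (a Fischer-Burmeister reformulation of a small linear complementarity problem) is our own choice, and a forward-difference Jacobian serves as a simple stand-in for an element $H_k \in \partial_B\Phi(\omega^k)$, which is adequate wherever Φ is differentiable.

```python
import numpy as np

def semismooth_newton(Phi, w0, rho=1e-8, kappa=2.1, sigma=1e-4,
                      beta=0.55, eps=1e-7, max_iter=200):
    """Sketch of Algorithm 4.1 for Phi(w) = 0 with merit function
    Psi(w) = 0.5 * Phi(w)^T Phi(w)."""
    def jac(w, h=1e-7):
        # forward-difference Jacobian: a stand-in for an element of
        # the B-subdifferential of Phi (valid where Phi is smooth)
        p = Phi(w)
        J = np.empty((p.size, w.size))
        for i in range(w.size):
            e = np.zeros_like(w); e[i] = h
            J[:, i] = (Phi(w + e) - p) / h
        return J

    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        phi = Phi(w)
        psi = 0.5 * phi @ phi
        if psi <= eps:                               # Step 1
            return w
        H = jac(w)                                   # Step 2
        grad_psi = H.T @ phi                         # gradient of Psi
        try:
            d = np.linalg.solve(H, -phi)             # Newton system (4.1)
            if grad_psi @ d > -rho * np.linalg.norm(d) ** kappa:
                d = -grad_psi                        # (4.2) fails -> (4.3)
        except np.linalg.LinAlgError:
            d = -grad_psi                            # (4.1) not solvable
        t = 1.0                                      # Step 3: Armijo rule (4.4)
        while t > 1e-14:
            phi_t = Phi(w + t * d)
            if 0.5 * phi_t @ phi_t <= psi + sigma * t * (grad_psi @ d):
                break
            t *= beta
        w = w + t * d                                # Step 4
    return w

# Test problem: NCP  x >= 0, F(x) = Mx + q >= 0, x^T F(x) = 0,
# rewritten componentwise via the Fischer-Burmeister function.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])

def Phi(x):
    a, b = x, M @ x + q
    return np.sqrt(a**2 + b**2) - a - b

x = semismooth_newton(Phi, np.array([1.0, 1.0]))
print(np.round(x, 6))  # close to the NCP solution (0.5, 0)
```

The parameter defaults mirror the values ρ = 10⁻⁸, κ = 2.1, σ = 10⁻⁴, β = 0.55 used in Example 4.3 below.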

The following result on the convergence of Algorithm 4.1 follows directly from [18].

Theorem 4.1 Assume that Algorithm 4.1 does not terminate after a finite number of iterations, and let {ωk} be the sequence generated by Algorithm 4.1. Then every accumulation point ω* of {ωk} is a stationary point of Ψ. Moreover, if ω* is a BD-regular solution of the system Φ(ω) = 0, then {ωk} converges to ω* Q-superlinearly.

We implemented the method in MATLAB 7.0 and applied it to several GNEPs. The method is terminated whenever Ψ(ωk) < ε with ε := 10-7. The computational results are summarized in Tables 1, 2, and 3, and indicate that the proposed method produces good approximate solutions.

Example 4.1 This test problem is the internet switching model introduced by Facchinei et al. [19]. The payoff function of each user is given by

${\theta }^{\nu }\left(x\right):=\frac{{x}^{\nu }}{B}-\frac{{x}^{\nu }}{{\sum }_{\mu =1}^{N}{x}^{\mu }},$

with the constraints xν ≥ 0.01, ν = 1,..., N, and ${\sum }_{\nu =1}^{N}{x}^{\nu }\le B$. Following [20], we set N = 10, B = 1 and use the starting point ${x}^{0}={\left(0.1,\dots ,0.1\right)}^{T}\in {\Re }^{10}$. The exact solution of this problem is x* = (0.09, 0.09,..., 0.09)T. We only report the first three components of the iteration vectors in Table 1.
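As a quick sanity check (our own sketch, not part of the original experiments), both constraints are inactive at x*, since 0.09 > 0.01 and Σν x*,ν = 0.9 < B; the first-order condition therefore reduces to ∂θν/∂xν = 1/B − (S − xν)/S² = 0 with S = Σμ xμ, and this indeed vanishes at the reported solution:

```python
import numpy as np

N, B = 10, 1.0
x_star = np.full(N, 0.09)
S = x_star.sum()  # S = 0.9 < B, so the shared constraint is inactive

# d theta^nu / d x^nu for theta^nu(x) = x^nu/B - x^nu/S:
#   1/B - (S - x^nu)/S^2, evaluated at x* for every player nu
grads = 1.0 / B - (S - x_star) / S**2
print(float(np.abs(grads).max()))  # essentially 0: first-order conditions hold
```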

Example 4.2 This example is the river basin pollution game taken from [5], also analyzed by Heusinger and Kanzow [20]. There are three players, each controlling a single variable ${x}^{\nu }\in \Re$. The objective functions are

${\theta }^{\nu }\left(x\right):={x}^{\nu }\left({c}_{1\nu }+{c}_{2\nu }{x}^{\nu }-{d}_{1}+{d}_{2}\left({x}^{1}+{x}^{2}+{x}^{3}\right)\right)$

for ν = 1,2,3, and the constraints are

$\begin{array}{l}{\mu }_{11}{e}_{1}{x}^{1}+{\mu }_{21}{e}_{2}{x}^{2}+{\mu }_{31}{e}_{3}{x}^{3}\le {K}_{1},\phantom{\rule{2em}{0ex}}\\ {\mu }_{12}{e}_{1}{x}^{1}+{\mu }_{22}{e}_{2}{x}^{2}+{\mu }_{32}{e}_{3}{x}^{3}\le {K}_{2}.\phantom{\rule{2em}{0ex}}\end{array}$

The economic constants d1 and d2 determine the inverse demand law and are set to 3.0 and 0.01, respectively. Values for the constants c1ν, c2ν, eν, μν1, and μν2 are taken from [5], and K1 = K2 = 100. See Table 2 for the corresponding numerical results.

Example 4.3 We use Algorithm 4.1 to solve a class of problems in which each player ν has a quadratic payoff function θν(·), that is,

${\theta }^{\nu }\left(x\right):=\frac{1}{2}{\left({x}^{\nu }\right)}^{T}{A}_{\nu \nu }{x}^{\nu }+\sum _{\mu =1,\mu \ne \nu }^{N}{\left({x}^{\nu }\right)}^{T}{A}_{\nu \mu }{x}^{\mu }$

for certain matrices ${A}_{\nu \mu }\in {\mathbf{R}}^{{n}_{\nu }\times {n}_{\mu }}$ such that the diagonal blocks A νν are symmetric. Let

$\mathbf{B}:=\left[\begin{array}{cccc}\hfill {A}_{11}\hfill & \hfill {A}_{12}\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill {A}_{1N}\hfill \\ \hfill {A}_{21}\hfill & \hfill {A}_{22}\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill {A}_{2N}\hfill \\ \hfill ⋮\hfill & \hfill ⋮\hfill & \hfill \ddots \hfill & \hfill ⋮\hfill \\ \hfill {A}_{N1}\hfill & \hfill {A}_{N2}\hfill & \hfill \cdots \phantom{\rule{0.3em}{0ex}}\hfill & \hfill {A}_{NN}\hfill \end{array}\right]$

be positive definite. The strategy space X is defined by linear constraints. For convenience, we set all elements of x0 to 1 and all elements of λ0 to 0. The other parameters in the algorithm are ρ = 10-8, κ = 2.1, σ = 10-4, and β = 0.55. Our numerical results are reported in Table 3, where Iter., Func., Res0, and Res* stand for, respectively, the number of iterations, the number of function evaluations, the residual Ψ(·) at the starting point, and the residual Ψ(·) at the final iterate.
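To connect this example with the mapping F at the start of the section: since ∇ν(x) = Aννxν + Σμ≠ν Aνμxμ, the map F simply stacks the block rows of B, so F(x) = Bx and ∇F(x) = B. The following sketch, with block sizes of our own choosing, confirms this identity numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 3, 2]                        # assumed block sizes n_1, n_2, n_3
offs = np.concatenate(([0], np.cumsum(sizes)))
n = offs[-1]

Bmat = rng.standard_normal((n, n))       # block A_{nu mu} = Bmat[rows_nu, cols_mu]
for k in range(len(sizes)):              # symmetrize the diagonal blocks A_{nu nu}
    s = slice(offs[k], offs[k + 1])
    Bmat[s, s] = (Bmat[s, s] + Bmat[s, s].T) / 2

x = rng.standard_normal(n)

def grad_theta(nu):
    # gradient of theta^nu w.r.t. x^nu:
    #   A_{nu nu} x^nu + sum_{mu != nu} A_{nu mu} x^mu
    s = slice(offs[nu], offs[nu + 1])
    return Bmat[s, :] @ x

F = np.concatenate([grad_theta(nu) for nu in range(len(sizes))])
assert np.allclose(F, Bmat @ x)          # F(x) = Bx, hence grad F(x) = B
```

Since B is assumed positive definite, ∇F is positive definite everywhere, which is the setting in which the superlinear convergence of the method applies.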

## References

1. Debreu G: A social equilibrium existence theorem. Proc Natl Acad Sci USA 1952, 38: 886–893. 10.1073/pnas.38.10.886

2. Altman E, Wynter L: Equilibrium, games, and pricing in transportation and telecommunication networks. Netw Spat Econ 2004, 4: 7–21.

3. Hu X, Ralph D: Using EPECs to model bilevel games in restructured electricity markets with locational prices. Oper Res 2007, 55: 809–827. 10.1287/opre.1070.0431

4. Krawczyk JB: Coupled constraint Nash equilibria in environmental games. Resour Energy Econ 2005, 27: 157–181. 10.1016/j.reseneeco.2004.08.001

5. Krawczyk JB, Uryasev S: Relaxation algorithms to find Nash equilibria with economic applications. Environ Model Assess 2000, 5: 63–73. 10.1023/A:1019097208499

6. Uryasev S, Rubinstein RY: On relaxation algorithms in computation of noncooperative equilibria. IEEE Trans Autom Control 1994, 39: 1263–1267. 10.1109/9.293193

7. Flam SD, Ruszczynski A: Noncooperative convex games: computing equilibrium by partial regularization. IIASA Working Paper, Austria 1994, 94–142.

8. Gürkan G, Pang JS: Approximations of Nash equilibria. Math Program 2009, 117: 223–253. 10.1007/s10107-007-0156-y

9. Heusinger AV, Kanzow C: Optimization reformulations of the generalized Nash equilibrium problem using Nikaido-Isoda-type functions. Comput Optim Appl 2009, 43: 353–377. 10.1007/s10589-007-9145-6

10. Facchinei F, Pang JS: Finite-dimensional variational inequalities and complementarity problems. Volume I. Springer, New York; 2003.

11. Facchinei F, Pang JS: Finite-dimensional variational inequalities and complementarity problems. Volume II. Springer, New York; 2003.

12. Harker PT: Generalized Nash games and quasi-variational inequalities. Eur J Oper Res 1991, 54: 81–94. 10.1016/0377-2217(91)90325-P

13. Facchinei F, Fischer A, Piccialli V: On generalized Nash games and variational inequalities. Oper Res Lett 2007, 35: 159–164. 10.1016/j.orl.2006.03.004

14. Qi L: Convergence analysis of some algorithms for solving nonsmooth equations. Math Oper Res 1993, 18: 227–244. 10.1287/moor.18.1.227

15. Clarke FH: Optimization and Nonsmooth Analysis. John Wiley, New York; 1983.

16. Mifflin R: Semismooth and semiconvex functions in constrained optimization. SIAM J Control Optim 1977, 15: 959–972. 10.1137/0315061

17. Qi L, Sun J: A nonsmooth version of Newton's method. Math Program 1993, 58: 353–368. 10.1007/BF01581275

18. De Luca T, Facchinei F, Kanzow C: A semismooth equation approach to the solution of nonlinear complementarity problems. Math Program 1996, 75: 407–439.

19. Facchinei F, Fischer A, Piccialli V: Generalized Nash equilibrium problems and Newton methods. Math Program 2009, 117: 163–194. 10.1007/s10107-007-0160-2

20. Heusinger AV, Kanzow C: Relaxation methods for generalized Nash equilibrium problems with inexact line search. J Optim Theory Appl 2009, 143: 159–183. 10.1007/s10957-009-9553-0

## Acknowledgements

The research was supported by the Fundamental Innovation Methods Funds under Project No. 2010IM020300 and the Technology Research of Inner Mongolia under Project No. 20100915.

## Author information

### Corresponding author

Correspondence to Jian Hou.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

JH and Z-CW carried out the design of the study and performed the analysis. ZW participated in its design and coordination. All authors read and approved the final manuscript.

Hou, J., Wen, ZC. & Wen, Z. A variational inequality method for computing a normalized equilibrium in the generalized Nash game. J Inequal Appl 2012, 60 (2012). https://doi.org/10.1186/1029-242X-2012-60


### Keywords

• Nash equilibrium problem
• generalized Nash equilibrium problem
• variational inequality
• semismooth function
• superlinear convergence