
A variational inequality method for computing a normalized equilibrium in the generalized Nash game

Abstract

The generalized Nash equilibrium problem is a generalization of the standard Nash equilibrium problem, in which both the utility function and the strategy space of each player may depend on the strategies chosen by all other players. This problem has been used to model a variety of applications, but convergent solution algorithms are extremely scarce in the literature. In this article, we show that a generalized Nash equilibrium can be calculated by solving a variational inequality (VI). Moreover, conditions for the local superlinear convergence of a semismooth Newton method applied to the VI are given. Some numerical results are presented to illustrate the performance of the method.

1 Introduction

In this article, we consider the generalized Nash equilibrium problem (GNEP). To this end, we first recall the definition of the Nash equilibrium problem (NEP). There are $N$ players, and each player $\nu \in \{1, \ldots, N\}$ controls the variables $x^\nu \in \mathbb{R}^{n_\nu}$. All players' strategies are collectively denoted by a vector $x = (x^1, \ldots, x^N)^T \in \mathbb{R}^n$, where $n = n_1 + \cdots + n_N$. To emphasize the $\nu$th player's variables within the vector $x$, we sometimes write $x = (x^\nu, x^{-\nu})^T$, where $x^{-\nu} \in \mathbb{R}^{n - n_\nu}$ subsumes all the other players' variables.

Let $\theta_\nu: \mathbb{R}^n \to \mathbb{R}$ be the $\nu$th player's payoff (or loss or utility) function, and let $X_\nu \subseteq \mathbb{R}^{n_\nu}$ be the strategy set of player $\nu$. Then, $x^* = (x^{*,1}, \ldots, x^{*,N})^T \in \mathbb{R}^n$ is called a Nash equilibrium, or a solution of the NEP, if each block component $x^{*,\nu}$ is a solution of the optimization problem

$$\min_{x^\nu}\ \theta_\nu(x^\nu, x^{*,-\nu}) \quad \text{s.t.} \quad x^\nu \in X_\nu.$$

On the other hand, in a GNEP, each player's strategy belongs to a set $X_\nu(x^{-\nu}) \subseteq \mathbb{R}^{n_\nu}$ that depends on the rival players' strategies. The aim of each player $\nu$, given the other players' strategies $x^{-\nu}$, is to choose a strategy $x^\nu$ that solves the minimization problem

$$\min_{x^\nu}\ \theta_\nu(x^\nu, x^{-\nu}) \quad \text{s.t.} \quad x^\nu \in X_\nu(x^{-\nu}).$$

The GNEP is the problem of finding a vector $x^*$ such that each player's strategy $x^{*,\nu}$ satisfies

$$\theta_\nu(x^{*,\nu}, x^{*,-\nu}) \le \theta_\nu(y^\nu, x^{*,-\nu}) \quad \text{for all } y^\nu \in X_\nu(x^{*,-\nu}).$$

Such a vector $x^*$ is called a generalized Nash equilibrium or, more simply, a solution of the GNEP.

In this article, we focus on a special class of GNEPs referred to as jointly convex GNEPs. More precisely, we assume that there is a closed and convex set $X \subseteq \mathbb{R}^n$, which represents the joint constraints of all the players, such that

$$X_\nu(x^{-\nu}) := \{\, x^\nu \in \mathbb{R}^{n_\nu} \mid (x^\nu, x^{-\nu}) \in X \,\},$$
(1.1)

for all ν = 1,..., N. This condition turns out to be satisfied in several applications. Throughout this article, we assume that the set $X$ can be represented as

$$X = \{\, x \in \mathbb{R}^n \mid g(x) \le 0 \,\}$$
(1.2)

for some function $g: \mathbb{R}^n \to \mathbb{R}^m$. Additional equality constraints are also allowed, but for notational simplicity, we prefer not to include them explicitly. In many cases, a player ν might have additional constraints depending on his decision variables only. However, these additional constraints can be viewed as part of the joint constraints $g(x) \le 0$, so we include them in the joint constraints.
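For illustration (this toy instance is not taken from the cited literature; it is included here only to make (1.1) and (1.2) concrete), consider $N = 2$ players with scalar strategies, payoffs $\theta_1(x) = (x^1 - 1)^2$ and $\theta_2(x) = (x^2 - 1)^2$, and the single joint constraint $g(x) = x^1 + x^2 - 1 \le 0$, so that $X = \{x \in \mathbb{R}^2 \mid x^1 + x^2 \le 1\}$ and

$$X_1(x^2) = \{\, x^1 \in \mathbb{R} \mid x^1 \le 1 - x^2 \,\}, \qquad X_2(x^1) = \{\, x^2 \in \mathbb{R} \mid x^2 \le 1 - x^1 \,\}.$$

Every point $(t, 1-t)$ with $t \in [0,1]$ is a generalized Nash equilibrium of this game, whereas only $x^* = (\tfrac{1}{2}, \tfrac{1}{2})$ admits a common Lagrange multiplier ($\lambda = 1$) in both players' KKT systems; the latter is the kind of normalized equilibrium singled out by the VI approach studied below.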

Throughout this article, we make the following blanket assumptions.

Assumption 1.1 (i) The utility functions $\theta_\nu$ are twice continuously differentiable and, as a function of $x^\nu$ alone, convex.

(ii) The function $g$ is twice continuously differentiable, its components $g_i$ are convex (in $x$), and the corresponding strategy space $X$ defined by (1.2) is nonempty.

The convexity assumptions are standard in the context of GNEPs. The smoothness assumptions are also very natural since our aim is to develop locally fast convergent methods for the solution of GNEPs.

The GNEP was formally introduced by Debreu [1] as early as 1952, but it is only since the mid-1990s that the GNEP has attracted much attention, because of its capability of modeling a number of interesting problems in economics, computer science, telecommunications, and deregulated markets (see, e.g., [2–4]). One approach for solving the GNEP is based on the Nikaido-Isoda function. Relaxation methods and proximal-like methods using the Nikaido-Isoda function are investigated in [5–7]. A regularized version of the Nikaido-Isoda function was first introduced in [8] for standard NEPs and then further investigated by Heusinger and Kanzow [9], who reformulated the GNEP as a constrained optimization problem with a continuously differentiable objective function.

Motivated by the fact that a standard NEP can be reformulated as a variational inequality problem (VI for short), see, for example, [10, 11], Harker [12] characterized the GNEP as a quasi-variational inequality (QVI). However, unlike the VI, there are few efficient methods for solving the QVI, and therefore this reformulation is not widely used in designing implementable algorithms. On the other hand, it was noted in [13], for example, that certain solutions of the GNEP (the normalized Nash equilibria, to be defined later) can be found by solving a suitable standard VI associated with the GNEP.

Here, we further investigate the properties of the normalized Nash equilibria. The rest of the article is organized as follows. Section 2 gives some preliminaries. In Section 3, using the fact that normalized Nash equilibria can be found by solving a suitable VI, we reformulate the VI associated with the GNEP as a semismooth system of equations and explore the nonsingularity of the B-subdifferential of that system. Finally, in Section 4, we apply a semismooth Newton method to some examples of the GNEP.

We use the following notations throughout the article. A function $G: \mathbb{R}^n \to \mathbb{R}^t$ is called a $C^k$-function if it is $k$ times continuously differentiable. For a differentiable function $g: \mathbb{R}^n \to \mathbb{R}^m$, the Jacobian of $g$ at $x \in \mathbb{R}^n$ is denoted by $Jg(x)$, and its transpose by $\nabla g(x)$. Given a differentiable function $\Psi: \mathbb{R}^n \to \mathbb{R}$, the symbol $\nabla_{x^\nu}\Psi(x)$ denotes the partial gradient with respect to the $x^\nu$-part only, and $\nabla^2_{x^\nu x^\mu}\Psi(x)$ denotes the second-order partial derivative with respect to the $x^\nu$-part and the $x^\mu$-part. For a function $f: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, $f(x, \cdot): \mathbb{R}^n \to \mathbb{R}$ denotes the function with $x$ fixed. For vectors $x, y \in \mathbb{R}^n$, $\langle x, y \rangle$ denotes the inner product defined by $\langle x, y \rangle := x^T y$, and $x \perp y$ means $\langle x, y \rangle = 0$.

2 Preliminaries

Let $F: \mathbb{R}^n \to \mathbb{R}^m$ be a locally Lipschitz continuous function. By Rademacher's theorem, $F$ is differentiable almost everywhere. Let $D_F$ denote the set of points where $F$ is differentiable. Then, the Bouligand-subdifferential of $F$ at $x$ is given by (see [14])

$$\partial_B F(x) := \Big\{ H \in \mathbb{R}^{m \times n} \;\Big|\; \exists\, \{x^k\} \subseteq D_F:\ x^k \to x,\ H = \lim_{k \to \infty} JF(x^k) \Big\}.$$

Its convex hull

$$\partial F(x) := \operatorname{conv}\, \partial_B F(x)$$

is Clarke's generalized Jacobian of F at x (see [15]).

Based on this notation, we next recall the definition of a semismooth function. This concept was first introduced by Mifflin [16] for real-valued mappings and extended by Qi and Sun [17] to vector-valued mappings.

Definition 2.1 Let $\Phi: O \subseteq \mathbb{R}^n \to \mathbb{R}^m$ be a locally Lipschitz continuous function on the open set $O$. We say that $\Phi$ is semismooth at a point $x \in O$ if

(i) $\Phi$ is directionally differentiable at $x$; and

(ii) for any $\Delta x \in \mathbb{R}^n$ with $\Delta x \to 0$ and $V \in \partial\Phi(x + \Delta x)$,

$$\Phi(x + \Delta x) - \Phi(x) - V\Delta x = o(\|\Delta x\|).$$

Furthermore, $\Phi$ is said to be strongly semismooth at $x \in O$ if $\Phi$ is semismooth at $x$ and for any $\Delta x \in \mathbb{R}^n$ with $\Delta x \to 0$ and $V \in \partial\Phi(x + \Delta x)$,

$$\Phi(x + \Delta x) - \Phi(x) - V\Delta x = O(\|\Delta x\|^2).$$
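As a standard illustration (a well-known fact that will be used again in Section 3), the minimum function $\varphi(a,b) := \min\{a,b\}$ is piecewise linear and hence strongly semismooth on $\mathbb{R}^2$; at a point with $a = b$ one has

$$\partial_B\varphi(a,b) = \{(1,0),\ (0,1)\}, \qquad \partial\varphi(a,b) = \{(\mu,\ 1-\mu) \mid \mu \in [0,1]\},$$

while at points with $a \ne b$ the function is continuously differentiable.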

In the study of algorithms for locally Lipschitzian systems of equations, the following regularity condition plays a role similar to that of the nonsingularity of the Jacobian in the study of algorithms for smooth systems of equations.

Definition 2.2 Let $G: \mathbb{R}^n \to \mathbb{R}^n$ be Lipschitzian around $x$. $G$ is said to be BD-regular at $x$ if all the elements in $\partial_B G(x)$ are nonsingular. If $\bar{x}$ is a solution of the system $G(x) = 0$ and $G$ is BD-regular at $\bar{x}$, then $\bar{x}$ is called a BD-regular solution of this system.

Given a closed convex set $K \subseteq \mathbb{R}^n$ and a continuous function $G: K \to \mathbb{R}^n$, solving the VI defined by $K$ and $G$ (which is denoted by VI(G, K)) means finding a vector $x \in K$ such that

$$G(x)^T (y - x) \ge 0 \quad \text{for all } y \in K.$$

Define the function $F: \mathbb{R}^n \to \mathbb{R}^n$ by

$$F(x) := \begin{pmatrix} \nabla_{x^1}\theta_1(x) \\ \vdots \\ \nabla_{x^N}\theta_N(x) \end{pmatrix}.$$

We now state a result due to [13] which will be used later.

Lemma 2.1 Suppose that the GNEP satisfies Assumption 1.1 and assume further that the sets $X_\nu(x^{-\nu})$ are defined by (1.1) with $X$ closed and convex. Then, every solution of the VI(F, X) is a solution of the GNEP.

3 The nonsmooth equation reformulation and nonsingularity conditions

Consider the GNEP from Section 1 with utility functions $\theta_\nu$ and a strategy set $X$ satisfying the requirements of Assumption 1.1. In this section, our aim is to show that the GNEP can be reformulated as a nonsmooth equation, and then to present several conditions guaranteeing the BD-regularity of this equation.

Suppose that $x$ is a solution of the GNEP. Then, if a suitable constraint qualification (such as the Slater condition) holds for player ν, it follows that there exists a Lagrange multiplier $\lambda^\nu \in \mathbb{R}^m$ such that the Karush-Kuhn-Tucker (KKT) conditions

$$\nabla_{x^\nu}\theta_\nu(x^\nu, x^{-\nu}) + \nabla_{x^\nu} g(x^\nu, x^{-\nu})\,\lambda^\nu = 0, \qquad 0 \le \lambda^\nu \perp -g(x^\nu, x^{-\nu}) \ge 0$$
(3.1)

are satisfied.

Let us consider the KKT conditions for the VI(F,X). Assuming that a suitable constraint qualification holds at a solution x, the KKT conditions can be expressed as

$$F(x) + \nabla g(x)\lambda = 0, \qquad 0 \le \lambda \perp -g(x) \ge 0,$$
(3.2)

which is equivalent to

$$\begin{pmatrix} \nabla_{x^1}\theta_1(x) \\ \vdots \\ \nabla_{x^N}\theta_N(x) \end{pmatrix} + \begin{pmatrix} \nabla_{x^1} g(x) \\ \vdots \\ \nabla_{x^N} g(x) \end{pmatrix}\lambda = 0, \qquad 0 \le \lambda \perp -g(x) \ge 0.$$
(3.3)

The next lemma from [13] relates the normalized Nash equilibria to the KKT conditions (3.3).

Lemma 3.1 (i) Let $x$ be a solution of VI(F,X) at which the KKT conditions (3.3) hold. Then $x$ is a solution of the GNEP (a normalized Nash equilibrium) at which the KKT conditions (3.1) hold with $\lambda^1 = \lambda^2 = \cdots = \lambda^N = \lambda$.

(ii) Vice versa, let $x$ be a solution of the GNEP at which the KKT conditions (3.1) hold with $\lambda^1 = \lambda^2 = \cdots = \lambda^N$. Then $x$ is a solution of VI(F, X).

Using the minimum function $\varphi: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, $\varphi(a,b) := \min\{a,b\}$, the KKT conditions (3.2) can equivalently be written as the nonlinear system of equations

$$\Phi(\omega) := \Phi(x, \lambda) = 0,$$
(3.4)

where $\Phi: \mathbb{R}^{n+m} \to \mathbb{R}^{n+m}$ is defined by

$$\Phi(\omega) = \Phi(x, \lambda) := \begin{pmatrix} L(x, \lambda) \\ \phi(-g(x), \lambda) \end{pmatrix},$$

and

$$L(x, \lambda) := F(x) + \nabla g(x)\lambda, \qquad \phi(-g(x), \lambda) := \big(\varphi(-g_1(x), \lambda_1), \ldots, \varphi(-g_m(x), \lambda_m)\big)^T \in \mathbb{R}^m.$$

From Assumption 1.1, we know that Φ is semismooth.
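To make the reformulation concrete, the following sketch assembles $\Phi(\omega)$ from user-supplied callbacks. It is only an illustration in Python (the article's own experiments in Section 4 were carried out in MATLAB), and the function names are ours.

```python
import numpy as np

def Phi(x, lam, F, g, Jg):
    """Semismooth reformulation (3.4): Phi(x, lam) = [L(x, lam); min(-g(x), lam)].

    F  : callable, F(x)  -> (n,) array, stacked partial gradients of the theta_nu
    g  : callable, g(x)  -> (m,) array, joint constraints g(x) <= 0
    Jg : callable, Jg(x) -> (m, n) array, Jacobian of g (so grad g(x) = Jg(x).T)
    """
    L = F(x) + Jg(x).T @ lam           # L(x, lam) = F(x) + grad g(x) lam
    comp = np.minimum(-g(x), lam)      # componentwise phi(-g_i(x), lam_i) = min{-g_i(x), lam_i}
    return np.concatenate([L, comp])
```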

In the following, our aim is to present several conditions guaranteeing that all elements in the generalized Jacobian $\partial\Phi(\omega)$ (and hence in the B-subdifferential $\partial_B\Phi(\omega)$) are nonsingular. Our first result gives a description of the structure of the matrices in the generalized Jacobian $\partial\Phi(\omega)$.

Lemma 3.2 Let $\omega = (x, \lambda) \in \mathbb{R}^{n+m}$. Then each element $H \in \partial\Phi(\omega)^T$ can be represented as follows:

$$H = \begin{pmatrix} \nabla_x L(\omega) & -\nabla g(x) D_a(\omega) \\ \nabla g(x)^T & D_b(\omega) \end{pmatrix},$$

where $D_a(\omega) := \operatorname{diag}(a_1(\omega), \ldots, a_m(\omega))$ and $D_b(\omega) := \operatorname{diag}(b_1(\omega), \ldots, b_m(\omega)) \in \mathbb{R}^{m \times m}$ are diagonal matrices whose $i$th diagonal elements are given by

$$a_i(\omega) = \begin{cases} 1, & \text{if } -g_i(x) < \lambda_i,\\ 0, & \text{if } -g_i(x) > \lambda_i,\\ \mu_i, & \text{if } -g_i(x) = \lambda_i, \end{cases} \qquad \text{and} \qquad b_i(\omega) = \begin{cases} 0, & \text{if } -g_i(x) < \lambda_i,\\ 1, & \text{if } -g_i(x) > \lambda_i,\\ 1 - \mu_i, & \text{if } -g_i(x) = \lambda_i, \end{cases}$$

for any $\mu_i \in [0,1]$.

Proof. The first $n$ components of the vector function $\Phi$ are continuously differentiable, so the expression for the first $n$ columns of $H$ readily follows. Now consider the last $m$ columns. Using the fact that

$$\partial\phi(-g(x), \lambda)^T \subseteq \partial\varphi(-g_1(x), \lambda_1)^T \times \cdots \times \partial\varphi(-g_m(x), \lambda_m)^T,$$

if $i$ is such that $-g_i(x) \ne \lambda_i$, then $\varphi$ is continuously differentiable at $(-g_i(x), \lambda_i)$ and the expression for the $(n+i)$th column of $H$ follows. If instead $-g_i(x) = \lambda_i$, then, using the definition of the B-subdifferential, it follows that

$$\partial_B\varphi(-g_i(x), \lambda_i)^T = \big\{ \big(-\nabla g_i(x)^T,\ 0\big),\ \big(0,\ e_i^T\big) \big\}.$$

Taking the convex hull, we get

$$\partial\varphi(-g_i(x), \lambda_i)^T = \big\{ \big(-\mu_i \nabla g_i(x)^T,\ (1-\mu_i) e_i^T\big) \mid \mu_i \in [0,1] \big\}.$$

This gives the representation of $H \in \partial\Phi(\omega)^T$.
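In the same illustrative Python setting as above, one element of $\partial_B\Phi(\omega)$ can be formed directly from the recipe of Lemma 3.2; here we build the Jacobian-type element itself (the transpose of the matrix $H$ displayed in the lemma) and fix $\mu_i = 1$ at the kinks, which is one admissible limiting choice. The helper names and the fixed choice of $\mu_i$ are ours.

```python
import numpy as np

def B_element(x, lam, JF, g, Jg, Hess_g):
    """One element of the B-subdifferential of Phi at omega = (x, lam) (cf. Lemma 3.2).

    JF     : callable, JF(x) -> (n, n) Jacobian of F
    Hess_g : callable, Hess_g(x, lam) -> (n, n) matrix sum_i lam_i * Hessian(g_i)(x)
    """
    gx, J = g(x), Jg(x)
    a = np.where(-gx <= lam, 1.0, 0.0)      # a_i = 1 if -g_i(x) <= lam_i (mu_i = 1 at ties), else 0
    b = 1.0 - a                             # b_i = 1 - a_i for the min function
    L_x = JF(x) + Hess_g(x, lam)            # derivative of L(., lam) with respect to x
    top = np.hstack([L_x, J.T])             # rows belonging to the L-part of Phi
    bottom = np.hstack([-a[:, None] * J, np.diag(b)])   # rows belonging to min(-g_i(x), lam_i)
    return np.vstack([top, bottom])
```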

Our next aim is to establish conditions guaranteeing that all elements in the generalized Jacobian ∂Φ(ω) at a point ω = (x,λ) satisfying Φ(ω) = 0 are nonsingular.

Theorem 3.1 Let $\omega^* = (x^*, \lambda^*) \in \mathbb{R}^{n+m}$ be a solution of the system $\Phi(\omega) = 0$. Consider the following two statements:

(a) The strong second-order sufficient condition and the linear independence constraint qualification (LICQ) for VI(F,X) hold at $x^*$.

(b) Any element in $\partial\Phi(\omega^*)$ is nonsingular.

Then (a) implies (b).

Proof. For the sake of notational simplicity, let us define the following subsets of the index set I := {1,...,m},

$$I_0 := \{\, i \mid g_i(x^*) = 0,\ \lambda_i^* \ge 0 \,\}, \qquad I_< := \{\, i \mid g_i(x^*) < 0,\ \lambda_i^* = 0 \,\}.$$

Moreover, we need

$$I_{00} := \{\, i \mid g_i(x^*) = 0,\ \lambda_i^* = 0 \,\}, \qquad I_+ := \{\, i \mid g_i(x^*) = 0,\ \lambda_i^* > 0 \,\},$$
$$I_{01} := \{\, i \in I_{00} \mid \mu_i = 1 \,\}, \qquad I_{02} := \{\, i \in I_{00} \mid \mu_i \in (0,1) \,\}, \qquad I_{03} := \{\, i \in I_{00} \mid \mu_i = 0 \,\}.$$

The following relationships between these index sets can easily be seen to hold:

$$I = I_0 \cup I_<, \qquad I_0 = I_{00} \cup I_+, \qquad I_{00} = I_{01} \cup I_{02} \cup I_{03}.$$

Using a suitable reordering of the constraints, every element $H \in \partial\Phi(\omega^*)^T$ has the following structure:

$$H = \begin{pmatrix}
\nabla_x L(\omega^*) & -\nabla g_+(x^*) & -\nabla g_{01}(x^*) & -\nabla g_{02}(x^*) D_a(\omega^*)_{02} & 0 & 0 \\
\nabla g_+(x^*)^T & 0 & 0 & 0 & 0 & 0 \\
\nabla g_{01}(x^*)^T & 0 & 0 & 0 & 0 & 0 \\
\nabla g_{02}(x^*)^T & 0 & 0 & D_b(\omega^*)_{02} & 0 & 0 \\
\nabla g_{03}(x^*)^T & 0 & 0 & 0 & I & 0 \\
\nabla g_<(x^*)^T & 0 & 0 & 0 & 0 & I
\end{pmatrix},$$
(3.5)

where $D_a(\omega^*)_{02}$ and $D_b(\omega^*)_{02}$ are positive definite diagonal matrices. Note that we abbreviated $g_{I_+}$ etc. by $g_+$ etc. in (3.5). It is obvious that $H$ is nonsingular if and only if the following matrix is nonsingular:

$$\begin{pmatrix}
\nabla_x L(\omega^*) & -\nabla g_+(x^*) & -\nabla g_{01}(x^*) & -\nabla g_{02}(x^*) & 0 & 0 \\
\nabla g_+(x^*)^T & 0 & 0 & 0 & 0 & 0 \\
\nabla g_{01}(x^*)^T & 0 & 0 & 0 & 0 & 0 \\
\nabla g_{02}(x^*)^T & 0 & 0 & D_b(\omega^*)_{02} D_a(\omega^*)_{02}^{-1} & 0 & 0 \\
\nabla g_{03}(x^*)^T & 0 & 0 & 0 & I & 0 \\
\nabla g_<(x^*)^T & 0 & 0 & 0 & 0 & I
\end{pmatrix}.$$

In turn, this matrix is nonsingular if and only if the following matrix is nonsingular:

$$\begin{pmatrix}
\nabla_x L(\omega^*) & -\nabla g_+(x^*) & -\nabla g_{01}(x^*) & -\nabla g_{02}(x^*) \\
\nabla g_+(x^*)^T & 0 & 0 & 0 \\
\nabla g_{01}(x^*)^T & 0 & 0 & 0 \\
\nabla g_{02}(x^*)^T & 0 & 0 & D_b(\omega^*)_{02} D_a(\omega^*)_{02}^{-1}
\end{pmatrix}.$$
(3.6)

Let $(\Delta x_1, \Delta x_2, \Delta x_3, \Delta x_4) \in \mathbb{R}^n \times \mathbb{R}^{|I_+|} \times \mathbb{R}^{|I_{01}|} \times \mathbb{R}^{|I_{02}|}$ be such that

$$\begin{pmatrix}
\nabla_x L(\omega^*) & -\nabla g_+(x^*) & -\nabla g_{01}(x^*) & -\nabla g_{02}(x^*) \\
\nabla g_+(x^*)^T & 0 & 0 & 0 \\
\nabla g_{01}(x^*)^T & 0 & 0 & 0 \\
\nabla g_{02}(x^*)^T & 0 & 0 & D_b(\omega^*)_{02} D_a(\omega^*)_{02}^{-1}
\end{pmatrix}
\begin{pmatrix} \Delta x_1 \\ \Delta x_2 \\ \Delta x_3 \\ \Delta x_4 \end{pmatrix} = 0.$$
(3.7)

Then

$$\begin{aligned}
&\nabla_x L(\omega^*)\Delta x_1 - \nabla g_+(x^*)\Delta x_2 - \nabla g_{01}(x^*)\Delta x_3 - \nabla g_{02}(x^*)\Delta x_4 = 0,\\
&\nabla g_+(x^*)^T \Delta x_1 = 0,\\
&\nabla g_{01}(x^*)^T \Delta x_1 = 0,\\
&\nabla g_{02}(x^*)^T \Delta x_1 + D_b(\omega^*)_{02} D_a(\omega^*)_{02}^{-1}\,\Delta x_4 = 0.
\end{aligned}$$
(3.8)

By the first, second and third equations of (3.8), we obtain that

$$\begin{aligned}
0 &= \big\langle \Delta x_1,\ \nabla_x L(\omega^*)\Delta x_1 - \nabla g_+(x^*)\Delta x_2 - \nabla g_{01}(x^*)\Delta x_3 - \nabla g_{02}(x^*)\Delta x_4 \big\rangle\\
&= \langle \Delta x_1, \nabla_x L(\omega^*)\Delta x_1 \rangle - \langle \Delta x_1, \nabla g_+(x^*)\Delta x_2 \rangle - \langle \Delta x_1, \nabla g_{01}(x^*)\Delta x_3 \rangle - \langle \Delta x_1, \nabla g_{02}(x^*)\Delta x_4 \rangle\\
&= \langle \Delta x_1, \nabla_x L(\omega^*)\Delta x_1 \rangle - \langle \Delta x_1, \nabla g_{02}(x^*)\Delta x_4 \rangle,
\end{aligned}$$

which, together with the last equation of (3.8), implies that

$$\langle \Delta x_1, \nabla_x L(\omega^*)\Delta x_1 \rangle = -\Delta x_4^T D_b(\omega^*)_{02} D_a(\omega^*)_{02}^{-1} \Delta x_4 \le 0.$$
(3.9)

From the second equation of (3.8), we know that

$$\Delta x_1 \in \operatorname{aff}\big(C(x^*)\big),$$

where $C(x^*)$ denotes the critical cone of VI(F,X). Then, by (3.9) and the strong second-order sufficient condition, it follows that

$$\Delta x_1 = 0.$$

Thus, the first equation of (3.8) reduces to

$$\nabla g_+(x^*)\Delta x_2 + \nabla g_{01}(x^*)\Delta x_3 + \nabla g_{02}(x^*)\Delta x_4 = 0.$$
(3.10)

By the LICQ for VI(F,X), we have

$$\Delta x_2 = 0, \qquad \Delta x_3 = 0, \qquad \text{and} \qquad \Delta x_4 = 0.$$

This, together with $\Delta x_1 = 0$, shows that the matrix (3.6) is nonsingular, and hence $H$ is nonsingular.

Now, we are able to apply Theorem 3.1 to some classes of GNEPs.

Proposition 3.1 Let $\omega^* = (x^*, \lambda^*) \in \mathbb{R}^{n+m}$ satisfy $\Phi(\omega^*) = 0$, and suppose that for all ν = 1,..., N the payoff functions $\theta_\nu$ are separable, that is,

$$\theta_\nu(x) = f_\nu(x^\nu) + h_\nu(x^{-\nu}),$$

where $f_\nu: \mathbb{R}^{n_\nu} \to \mathbb{R}$ is strongly convex and $h_\nu: \mathbb{R}^{n - n_\nu} \to \mathbb{R}$. Assume that LICQ holds at $x^*$. Then all elements $H \in \partial\Phi(\omega^*)$ are nonsingular.

Proof. We know that

$$F(x^*) = \begin{pmatrix} \nabla_{x^1}\theta_1(x^*) \\ \vdots \\ \nabla_{x^N}\theta_N(x^*) \end{pmatrix};$$

then, by the definition of θν(·), we have

$$JF(x^*) = \begin{pmatrix}
\nabla^2_{x^1 x^1}\theta_1(x^*) & \nabla^2_{x^1 x^2}\theta_1(x^*) & \cdots & \nabla^2_{x^1 x^N}\theta_1(x^*) \\
\nabla^2_{x^2 x^1}\theta_2(x^*) & \nabla^2_{x^2 x^2}\theta_2(x^*) & \cdots & \nabla^2_{x^2 x^N}\theta_2(x^*) \\
\vdots & \vdots & & \vdots \\
\nabla^2_{x^N x^1}\theta_N(x^*) & \nabla^2_{x^N x^2}\theta_N(x^*) & \cdots & \nabla^2_{x^N x^N}\theta_N(x^*)
\end{pmatrix}
= \begin{pmatrix}
\nabla^2_{x^1 x^1} f_1(x^{*,1}) & & \\
& \ddots & \\
& & \nabla^2_{x^N x^N} f_N(x^{*,N})
\end{pmatrix}.$$

By the strong convexity of $f_\nu$, we can conclude that $JF(x^*)$ is positive definite.

From $\lambda_i^* \ge 0$ and the convexity of $g_i$, we obtain that

$$\nabla_x\big(\nabla g(x^*)\lambda^*\big) = \sum_{i=1}^m \lambda_i^* \nabla^2 g_i(x^*)$$

is positive semidefinite, which together with the positive definiteness of $JF(x^*)$ implies that

$$\nabla_x L(\omega^*) = JF(x^*) + \sum_{i=1}^m \lambda_i^* \nabla^2 g_i(x^*)$$

is positive definite. Thus, the strong second-order sufficient condition for the VI(F, X) holds at $x^*$. From Theorem 3.1, we obtain that any element in $\partial\Phi(\omega^*)$ is nonsingular.

Proposition 3.2 Let $\omega^* = (x^*, \lambda^*) \in \mathbb{R}^{n+m}$ be such that $\Phi(\omega^*) = 0$. Consider the case where the payoff functions are quadratic, i.e., for all ν = 1,..., N one has

$$\theta_\nu(x) := \frac{1}{2}(x^\nu)^T A_{\nu\nu} x^\nu + \sum_{\mu=1,\,\mu\ne\nu}^{N} (x^\nu)^T A_{\nu\mu} x^\mu,$$

where the matrices $A_{\nu\mu} \in \mathbb{R}^{n_\nu \times n_\mu}$ and the diagonal blocks $A_{\nu\nu}$ are symmetric. Suppose that LICQ holds at $x^*$, and

$$B := \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & \vdots & & \vdots \\ A_{N1} & A_{N2} & \cdots & A_{NN} \end{pmatrix}$$

is positive definite. Then all the elements in the generalized Jacobian ∂Φ(ω*) are nonsingular.

Proof. We show that $\nabla_x L(\omega^*)$ is positive definite, which implies that the strong second-order sufficient condition for the VI(F,X) holds at $x^*$, and then apply Theorem 3.1. To this end, first note that

$$F(x^*) = \begin{pmatrix} \nabla_{x^1}\theta_1(x^*) \\ \nabla_{x^2}\theta_2(x^*) \\ \vdots \\ \nabla_{x^N}\theta_N(x^*) \end{pmatrix} = \begin{pmatrix} \sum_{\mu=1}^N A_{1\mu}x^{*,\mu} \\ \sum_{\mu=1}^N A_{2\mu}x^{*,\mu} \\ \vdots \\ \sum_{\mu=1}^N A_{N\mu}x^{*,\mu} \end{pmatrix}.$$

Moreover,

$$JF(x^*) = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & \vdots & & \vdots \\ A_{N1} & A_{N2} & \cdots & A_{NN} \end{pmatrix} = B,$$

and since $\lambda_i^* \ge 0$ and each $g_i$ is convex,

$$\nabla_x\big(\nabla g(x^*)\lambda^*\big) = \sum_{i=1}^m \lambda_i^* \nabla^2 g_i(x^*)$$

is positive semidefinite. Because $B$ is positive definite by assumption, we obtain that $\nabla_x L(\omega^*)$ is positive definite. The statement therefore follows from Theorem 3.1.

4 Numerical illustrations

Here, we illustrate the performance of the VI method on some GNEPs taken from the literature. To this end, we apply a nonsmooth Newton method to the nonlinear system of equations $\Phi(\omega) = 0$. The globalization strategy is based on the merit function

$$\Psi(\omega) := \frac{1}{2}\Phi(\omega)^T\Phi(\omega).$$

A simple Armijo-type line search is used in the algorithm, and we switch to the steepest descent direction whenever the generalized Newton direction is not computable or does not satisfy a sufficient decrease condition.

Algorithm 4.1

Step 0. Choose $\omega^0 = (x^0, \lambda^0) \in \mathbb{R}^{n+m}$, $\rho > 0$, $\kappa > 2$, $\sigma \in (0, \tfrac{1}{2})$, $\beta \in (0,1)$, $\varepsilon \ge 0$, and set $k = 0$.

Step 1. If $\Psi(\omega^k) \le \varepsilon$, stop.

Step 2. Select an element $H_k \in \partial_B\Phi(\omega^k)$. Find a solution $d^k$ of the linear system

$$H_k d = -\Phi(\omega^k).$$
(4.1)

If system (4.1) is not solvable or if $d^k$ does not satisfy the condition

$$\nabla\Psi(\omega^k)^T d^k \le -\rho \|d^k\|^\kappa,$$
(4.2)

then set

$$d^k = -\nabla\Psi(\omega^k).$$
(4.3)

Step 3. Let $t_k$ be the greatest number in $\{\beta^j \mid j = 0, 1, 2, \ldots\}$ such that

$$\Psi(\omega^k + t_k d^k) \le \Psi(\omega^k) + t_k \sigma \nabla\Psi(\omega^k)^T d^k.$$
(4.4)

Step 4. Set $\omega^{k+1} = \omega^k + t_k d^k$, $k = k + 1$, and go to Step 1.
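A compact Python sketch of Algorithm 4.1 is given below, reusing the helper functions Phi and B_element introduced in Section 3. It is an illustration of the scheme only, not the authors' MATLAB code; in particular, we use $H_k^T\Phi(\omega^k)$ in place of $\nabla\Psi(\omega^k)$, which is a common implementation choice, and we cap the step-size reduction for robustness.

```python
import numpy as np

def semismooth_newton(omega0, Phi_fn, H_fn, rho=1e-8, kappa=2.1,
                      sigma=1e-4, beta=0.55, eps=1e-7, max_iter=100):
    """Sketch of Algorithm 4.1: damped semismooth Newton method with Armijo line search."""
    omega = omega0.copy()
    for k in range(max_iter):
        phi = Phi_fn(omega)
        psi = 0.5 * phi @ phi                 # merit function Psi(omega)
        if psi <= eps:                        # Step 1: termination test
            return omega, k
        H = H_fn(omega)                       # Step 2: pick H_k in the B-subdifferential
        grad_psi = H.T @ phi                  # surrogate for grad Psi(omega^k)
        try:
            d = np.linalg.solve(H, -phi)      # Newton direction from system (4.1)
        except np.linalg.LinAlgError:
            d = None
        if d is None or grad_psi @ d > -rho * np.linalg.norm(d) ** kappa:
            d = -grad_psi                     # descent safeguard, (4.2)-(4.3)
        t = 1.0                               # Step 3: Armijo line search (4.4)
        while t > 1e-12:
            phi_trial = Phi_fn(omega + t * d)
            if 0.5 * phi_trial @ phi_trial <= psi + sigma * t * (grad_psi @ d):
                break
            t *= beta
        omega = omega + t * d                 # Step 4: update and repeat
    return omega, max_iter
```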

The following result about the convergence properties of Algorithm 4.1 follows directly from [18].

Theorem 4.1 Assume that Algorithm 4.1 does not terminate after a finite number of iterations, and let $\{\omega^k\}$ be a sequence generated by Algorithm 4.1 with an accumulation point $\omega^*$. Then $\omega^*$ is a stationary point of $\Psi$. Moreover, if $\omega^*$ is a BD-regular solution of the system $\Phi(\omega) = 0$, then $\{\omega^k\}$ converges to $\omega^*$ Q-superlinearly.

The method was implemented in MATLAB 7.0 and applied to several GNEPs. The method is terminated whenever $\Psi(\omega^k) < \varepsilon$ with $\varepsilon := 10^{-7}$. The computational results are summarized in Tables 1, 2, and 3, which indicate that the proposed method produces good approximate solutions.

Table 1 Numerical results for Example 4.1
Table 2 Numerical results for Example 4.2
Table 3 Numerical results for Example 4.3

Example 4.1 This test problem is the internet switching model introduced by Facchinei et al. [19]. The payoff function of each user is given by

$$\theta_\nu(x) := \frac{x^\nu}{B} - \frac{x^\nu}{\sum_{\mu=1}^{N} x^\mu},$$

with constraints $x^\nu \ge 0.01$, ν = 1,..., N, and $\sum_{\nu=1}^{N} x^\nu \le B$. Following [20], we also set $N = 10$, $B = 1$ and use the starting point $x^0 = (0.1, 0.1, \ldots, 0.1)^T \in \mathbb{R}^{10}$. The exact solution of this problem is $x^* = (0.09, 0.09, \ldots, 0.09)^T$. We only report the first three components of the iterates in Table 1.
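For illustration, Example 4.1 can be assembled with the Python helpers sketched above; the setup below is our own (not the authors' MATLAB code) and uses the analytic partial gradients of the payoffs together with the linear constraints $0.01 - x^\nu \le 0$ and $\sum_\nu x^\nu - B \le 0$.

```python
import numpy as np

N, B = 10, 1.0                       # problem data of Example 4.1

def F(x):                            # F_nu(x) = d theta_nu / d x^nu = 1/B - 1/S + x^nu / S^2, S = sum(x)
    S = x.sum()
    return 1.0 / B - 1.0 / S + x / S**2

def JF(x):                           # Jacobian of F
    S = x.sum()
    return (np.ones((N, N)) + np.eye(N)) / S**2 - 2.0 * np.outer(x, np.ones(N)) / S**3

def g(x):                            # joint constraints g(x) <= 0
    return np.concatenate([0.01 - x, [x.sum() - B]])

def Jg(x):
    return np.vstack([-np.eye(N), np.ones((1, N))])

Hess_g = lambda x, lam: np.zeros((N, N))     # all constraints are linear

omega0 = np.concatenate([0.1 * np.ones(N), np.zeros(N + 1)])   # x^0 = (0.1,...,0.1), lambda^0 = 0
Phi_fn = lambda w: Phi(w[:N], w[N:], F, g, Jg)
H_fn = lambda w: B_element(w[:N], w[N:], JF, g, Jg, Hess_g)
omega, iters = semismooth_newton(omega0, Phi_fn, H_fn)
print(np.round(omega[:3], 4))        # expected to approach (0.09, 0.09, 0.09), cf. Table 1
```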

Example 4.2 This example is the river basin pollution game taken from [5] and also analyzed by Heusinger and Kanzow [20]. There are three players, each controlling a single variable $x^\nu$. The objective functions are

$$\theta_\nu(x) := x^\nu\big(c_{1\nu} + c_{2\nu}x^\nu - d_1 + d_2(x^1 + x^2 + x^3)\big)$$

for ν = 1,2,3, and the constraints are

$$\mu_{11}e_1 x^1 + \mu_{21}e_2 x^2 + \mu_{31}e_3 x^3 \le K_1, \qquad \mu_{12}e_1 x^1 + \mu_{22}e_2 x^2 + \mu_{32}e_3 x^3 \le K_2.$$

The economic constants $d_1$ and $d_2$ determine the inverse demand law and are set to 3.0 and 0.01, respectively. The values of the constants $c_{1\nu}$, $c_{2\nu}$, $e_\nu$, $\mu_{\nu 1}$, and $\mu_{\nu 2}$ are taken from [5], and $K_1 = K_2 = 100$. See Table 2 for the corresponding numerical results.


Example 4.3 We use Algorithm 4.1 to solve a class of problems in which, for each player ν, the payoff function $\theta_\nu(\cdot)$ is quadratic, that is,

$$\theta_\nu(x) := \frac{1}{2}(x^\nu)^T A_{\nu\nu} x^\nu + \sum_{\mu=1,\,\mu\ne\nu}^{N} (x^\nu)^T A_{\nu\mu} x^\mu$$

for certain matrices $A_{\nu\mu} \in \mathbb{R}^{n_\nu \times n_\mu}$ such that the diagonal blocks $A_{\nu\nu}$ are symmetric. Let

$$B := \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & \vdots & & \vdots \\ A_{N1} & A_{N2} & \cdots & A_{NN} \end{pmatrix}$$

be positive definite. The strategy space $X$ is defined by some linear constraints. For convenience, all elements of $x^0$ are set to 1 and all elements of $\lambda^0$ are set to 0. The other parameters in the algorithm are set to $\rho = 10^{-8}$, $\kappa = 2.1$, $\sigma = 10^{-4}$, $\beta = 0.55$. Our numerical results are reported in Table 3, where Iter., Func., Res0, and Res* stand for, respectively, the number of iterations, the number of function evaluations, the residual $\Psi(\cdot)$ at the starting point, and the residual $\Psi(\cdot)$ at the final iterate.
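As with Example 4.1, an instance of this quadratic class can be set up with the helpers from Section 3. The data below (number of players, block sizes, the random positive definite matrix $B$, and the single linear constraint) are ours and purely illustrative, since the concrete instances behind Table 3 are not reproduced here; for quadratic payoffs one simply has $F(x) = Bx$.

```python
import numpy as np

rng = np.random.default_rng(0)
N_players, n_nu = 3, 2                     # hypothetical: 3 players with 2 variables each
n = N_players * n_nu
M = rng.standard_normal((n, n))
Bmat = M @ M.T + n * np.eye(n)             # positive definite B (hence symmetric diagonal blocks)
A, b_rhs = np.ones((1, n)), np.array([10.0])   # one joint linear constraint: sum(x) <= 10

F  = lambda x: Bmat @ x                    # for quadratic payoffs, F(x) = B x
JF = lambda x: Bmat
g  = lambda x: A @ x - b_rhs
Jg = lambda x: A
Hess_g = lambda x, lam: np.zeros((n, n))   # linear constraints have zero Hessians

omega0 = np.concatenate([np.ones(n), np.zeros(1)])   # x^0 = (1,...,1), lambda^0 = 0
Phi_fn = lambda w: Phi(w[:n], w[n:], F, g, Jg)
H_fn = lambda w: B_element(w[:n], w[n:], JF, g, Jg, Hess_g)
omega, iters = semismooth_newton(omega0, Phi_fn, H_fn,
                                 rho=1e-8, kappa=2.1, sigma=1e-4, beta=0.55)
```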

The numerical experiments show that the method proposed in this article is implementable and effective.

References

  1. Debreu G: A social equilibrium existence theorem. Proc Natl Acad Sci USA 1952, 38: 886–893. doi:10.1073/pnas.38.10.886

  2. Altman E, Wynter L: Equilibrium, games, and pricing in transportation and telecommunication networks. Netw Spat Econ 2004, 4: 7–21.

  3. Hu X, Ralph D: Using EPECs to model bilevel games in restructured electricity markets with locational prices. Oper Res 2007, 55: 809–827. doi:10.1287/opre.1070.0431

  4. Krawczyk JB: Coupled constraint Nash equilibria in environmental games. Resour Energy Econ 2005, 27: 157–181. doi:10.1016/j.reseneeco.2004.08.001

  5. Krawczyk JB, Uryasev S: Relaxation algorithms to find Nash equilibria with economic applications. Environ Model Assess 2000, 5: 63–73. doi:10.1023/A:1019097208499

  6. Uryasev S, Rubinstein RY: On relaxation algorithms in computation of noncooperative equilibria. IEEE Trans Autom Control 1994, 39: 1263–1267. doi:10.1109/9.293193

  7. Flam SD, Ruszczynski A: Noncooperative convex games: computing equilibrium by partial regularization. IIASA Working Paper, Austria 1994, 94–142.

  8. Gürkan G, Pang JS: Approximations of Nash equilibria. Math Program 2009, 117: 223–253. doi:10.1007/s10107-007-0156-y

  9. Heusinger AV, Kanzow C: Optimization reformulations of the generalized Nash equilibrium problem using Nikaido-Isoda-type functions. Comput Optim Appl 2009, 43: 353–377. doi:10.1007/s10589-007-9145-6

  10. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Volume I. Springer, New York; 2003.

  11. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Volume II. Springer, New York; 2003.

  12. Harker PT: Generalized Nash games and quasi-variational inequalities. Eur J Oper Res 1991, 54: 81–94. doi:10.1016/0377-2217(91)90325-P

  13. Facchinei F, Fischer A, Piccialli V: On generalized Nash games and variational inequalities. Oper Res Lett 2007, 35: 159–164. doi:10.1016/j.orl.2006.03.004

  14. Qi L: Convergence analysis of some algorithms for solving nonsmooth equations. Math Oper Res 1993, 18: 227–244. doi:10.1287/moor.18.1.227

  15. Clarke FH: Optimization and Nonsmooth Analysis. John Wiley, New York; 1983.

  16. Mifflin R: Semismooth and semiconvex functions in constrained optimization. SIAM J Control Optim 1977, 15: 959–972. doi:10.1137/0315061

  17. Qi L, Sun J: A nonsmooth version of Newton's method. Math Program 1993, 58: 353–368. doi:10.1007/BF01581275

  18. De Luca T, Facchinei F, Kanzow C: A semismooth equation approach to the solution of nonlinear complementarity problems. Math Program 1996, 75: 407–439.

  19. Facchinei F, Fischer A, Piccialli V: Generalized Nash equilibrium problems and Newton methods. Math Program 2009, 117: 163–194. doi:10.1007/s10107-007-0160-2

  20. Heusinger AV, Kanzow C: Relaxation methods for generalized Nash equilibrium problems with inexact line search. J Optim Theory Appl 2009, 143: 159–183. doi:10.1007/s10957-009-9553-0


Acknowledgements

The research was supported by the Fundamental Innovation Methods Funds under Project No. 2010IM020300 and the Technology Research of Inner Mongolia under Project No. 20100915.

Author information


Corresponding author

Correspondence to Jian Hou.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JH and Z-CW carried out the design of the study and performed the analysis. ZW participated in its design and coordination. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Hou, J., Wen, ZC. & Wen, Z. A variational inequality method for computing a normalized equilibrium in the generalized Nash game. J Inequal Appl 2012, 60 (2012). https://doi.org/10.1186/1029-242X-2012-60

