Open Access

The models of bilevel programming with lower level second-order cone programs

Journal of Inequalities and Applications 2014, 2014:168

https://doi.org/10.1186/1029-242X-2014-168

Received: 27 January 2014

Accepted: 10 April 2014

Published: 6 May 2014

Abstract

Robust optimization is an effective method for dealing with optimization problems under uncertainty. When the lower level problem of a bilevel program is subject to uncertainty, robust optimization reformulates it as a bilevel programming problem with a lower level second-order cone program (SOCBLP). In this paper, we present the mathematical models of the SOCBLP and give some basic concepts, such as the constraint region, the inducible region, and the optimal solution. We illustrate that the SOCBLP is in general a nonconvex and nondifferentiable optimization problem, whose feasible set may not be connected in some cases and whose constraint region is generally not polyhedral. Finally, under suitable conditions we propose optimality conditions for several models of the SOCBLP in the optimistic case.

MSC: 90C30.

Keywords

bilevel programming with lower level second-order cone programs; nondifferentiable; nonconvex feasible set; optimality conditions

1 Introduction

Bilevel programming (BLP) problems are hierarchical optimization problems that contain a second (parametric) optimization problem as part of their constraints [1, 2]. Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order cones [3–6]. In the development of mathematical programming, linear programming (LP) has been extended to second-order cone programming (SOCP), linear complementarity problems (LCP) and nonlinear complementarity problems (NLCP) [7] have been extended to second-order cone complementarity problems (SOCCP) [8], and mathematical programming with complementarity constraints (MPCC) [9] has been extended to mathematical programming with second-order cone complementarity constraints (SOCMPCC) [10]. However, there has been little work on extending BLP to bilevel programming with lower level second-order cone programs (SOCBLP).

In the real world, we often face uncertainty. Uncertainty may render the optimal solution of the deterministic BLP feasible but no longer optimal, or even infeasible. Robust optimization is an effective method for dealing with optimization problems under uncertainty. When there is uncertainty in the lower level problem of a bilevel program [1], a robust optimization approach reformulates it as a bilevel programming problem having a second-order cone program [2] as its lower level problem, i.e., a bilevel programming problem with lower level second-order cone programs (SOCBLP).

In spite of its significance, little work has dealt with the SOCBLP or related problems. In 2007, Ejiri [11] reformulated a nonlinear SOCBLP as a mathematical program with second-order cone complementarity constraints (SOCMPCC) and proposed a smoothing method that converges globally to a Clarke- (C-) stationary point of the SOCMPCC under suitable assumptions. In 2011, Yan and Fukushima [12] extended these results and the smoothing method to mathematical programming with symmetric cone complementarity constraints. Jiang [13] studied the optimality conditions for optimization problems with second-order cone equilibrium constraints, the Aubin property of the second-order cone complementarity set, and smoothing methods for solving inverse linear programming problems and inverse linear second-order cone programming problems. Wu et al. [14] studied the necessary optimality conditions and the second-order sufficient conditions, and presented a smoothing method for mathematical programs governed by second-order cone constrained generalized equations. Ding et al. [15] derived explicit expressions for the strong-, Mordukhovich-, and Clarke- (S-, M-, and C-) stationary conditions and gave constraint qualifications under which a local solution of a mathematical program with semidefinite cone complementarity constraints is an S-, M-, and C-stationary point. Zhang et al. [10] first introduced the notions of B-, C-, M-, and S-stationary points, the SOCMPCC linear independence constraint qualification, and the second-order cone upper level strict complementarity condition at a feasible point of an SOCMPCC problem, and discussed the convergence properties of a smoothing approach for solving SOCMPCCs.

In this paper, we consider bilevel programming problems with lower level second-order cone programs (SOCBLP). For $x \in X \subseteq \mathbb{R}^n$, $y \in Y \subseteq \mathbb{R}^m$, $F: X \times Y \to \mathbb{R}$ and $f: X \times Y \to \mathbb{R}$, the SOCBLP problem is given by
$$
\begin{array}{rl}
\text{``min''}_{x \in X} & F(x,y) = c_1^T x + d_1^T y\\
\text{s.t.} & A_1 x + B_1 y \le b_1,\\
& \min_{y \in Y} f(x,y) = c_2^T x + d_2^T y\\
& \quad \text{s.t. } A_2 x + B_2 y \le b_2,\ y \in K^m,
\end{array}
\tag{1}
$$
where $c_1, c_2 \in \mathbb{R}^n$, $d_1, d_2 \in \mathbb{R}^m$, $b_1 \in \mathbb{R}^p$, $b_2 \in \mathbb{R}^q$, $A_1 \in \mathbb{R}^{p \times n}$, $B_1 \in \mathbb{R}^{p \times m}$, $A_2 \in \mathbb{R}^{q \times n}$, and $B_2 \in \mathbb{R}^{q \times m}$. Here
$$K^m = K^{m_1} \times K^{m_2} \times \cdots \times K^{m_r},$$
with $m = m_1 + m_2 + \cdots + m_r$, is the Cartesian product of second-order cones, and $K^{m_j}$, $j = 1, 2, \ldots, r$, is the second-order cone (SOC) of dimension $m_j$ defined by
$$K^{m_j} := \{ y^j = (y_0^j; \bar{y}^j) \in \mathbb{R} \times \mathbb{R}^{m_j - 1} : y_0^j - \|\bar{y}^j\| \ge 0 \},$$
where $\|\cdot\|$ refers to the Euclidean norm. The interior and boundary of the SOC $K^{m_j}$, $j = 1, 2, \ldots, r$, are then
$$\operatorname{int} K^{m_j} := \{ y^j = (y_0^j; \bar{y}^j) \in \mathbb{R} \times \mathbb{R}^{m_j - 1} : y_0^j - \|\bar{y}^j\| > 0 \},$$
$$\operatorname{bd} K^{m_j} := \{ y^j = (y_0^j; \bar{y}^j) \in \mathbb{R} \times \mathbb{R}^{m_j - 1} : y_0^j - \|\bar{y}^j\| = 0 \}.$$
Thus it is easy to verify that the SOC $K^{m_j}$ is self-dual, that is,
$$K^{m_j} = (K^{m_j})^* := \{ v^j \in \mathbb{R}^{m_j} : (v^j)^T y^j \ge 0, \ \forall y^j \in K^{m_j} \}.$$

Obviously, if $m_1 = m_2 = \cdots = m_r = 1$, then $K^m = \mathbb{R}^m_+$ and the SOCBLP problem (1) reduces to the classical linear bilevel programming problem (BLP) in [1].
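This reduction is easy to check numerically. The following minimal Python sketch (the helper `in_soc` is our own, not from the paper) implements the membership test $y_0 \ge \|\bar{y}\|$:

```python
import math

def in_soc(y, tol=1e-12):
    """Membership test for the SOC K^m: y0 >= ||ybar|| (Euclidean norm)."""
    return y[0] >= math.sqrt(sum(t * t for t in y[1:])) - tol

# A point of K^3 and a point outside it.
assert in_soc([2.0, 1.0, 1.0])        # 2 >= sqrt(2)
assert not in_soc([1.0, 1.0, 1.0])    # 1 <  sqrt(2)

# When every block has dimension m_j = 1 the bar part is empty, so the test
# degenerates to y >= 0: the cone is the nonnegative orthant R_+^m.
assert in_soc([3.0]) and not in_soc([-1.0])
```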

Remark The quotation marks in the formulation of the SOCBLP (1) express the ambiguity that arises when the lower level optimal solution is not uniquely determined.

In this paper, we aim to establish the mathematical models of the SOCBLP, study the characteristics of its feasible set, and propose optimality conditions for SOCBLP problems in the optimistic case.

The organization of this paper is as follows. In Section 2, we review some preliminaries, including the Euclidean Jordan algebra and the generalized differential calculus of Mordukhovich. In Section 3, we give some basic concepts and illustrate the characteristics of the SOCBLP. The constraint qualification and optimality conditions for the SOCBLP are studied in Section 4. Optimality conditions for several models of the SOCBLP in the optimistic case are proposed in Section 5. Finally, some conclusions are given in Section 6.

2 Preliminaries

First, we introduce the spectral factorization of vectors in $\mathbb{R}^m$ associated with the SOC $K^m$, which is an important feature of the Jordan algebra [16].

For any $z = (z_0; \bar{z}) \in \mathbb{R} \times \mathbb{R}^{m-1}$, its spectral factorization [16] is defined as
$$z = \lambda_1(z) c_1(z) + \lambda_2(z) c_2(z).$$
Here $\lambda_1(z), \lambda_2(z)$ are the spectral values given by
$$\lambda_i(z) = z_0 + (-1)^i \|\bar{z}\|, \quad i = 1, 2,$$
and $c_1(z), c_2(z)$ are the associated spectral vectors given by
$$c_i(z) = \begin{cases} \dfrac{1}{2}\left(1; (-1)^i \dfrac{\bar{z}}{\|\bar{z}\|}\right), & \text{if } \bar{z} \ne 0,\\[2mm] \dfrac{1}{2}\left(1; (-1)^i \omega\right), & \text{otherwise}, \end{cases} \quad i = 1, 2,$$
with $\omega \in \mathbb{R}^{m-1}$ being any vector satisfying $\|\omega\| = 1$.
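The factorization can be verified numerically; the sketch below (our own helper names, assuming $m \ge 2$) reconstructs $z$ from its spectral values and vectors:

```python
import math

def spectral(z):
    """Spectral factorization of z = (z0; zbar) w.r.t. K^m (m >= 2):
    returns lam = [lam1, lam2] and c = [c1, c2] with z = lam1*c1 + lam2*c2."""
    z0, zbar = z[0], z[1:]
    nz = math.sqrt(sum(t * t for t in zbar))
    lam = [z0 - nz, z0 + nz]                  # lam_i = z0 + (-1)^i ||zbar||
    if nz > 0:
        w = [t / nz for t in zbar]
    else:                                      # zbar = 0: any unit vector works
        w = [1.0] + [0.0] * (len(zbar) - 1)
    c = [[0.5] + [-0.5 * t for t in w],        # c1 = (1; -w)/2
         [0.5] + [0.5 * t for t in w]]         # c2 = (1; +w)/2
    return lam, c

# Reconstruct a sample point from its factorization.
z = [1.0, 3.0, -4.0]                           # ||zbar|| = 5, lam = [-4, 6]
lam, c = spectral(z)
rec = [lam[0] * a + lam[1] * b for a, b in zip(c[0], c[1])]
assert all(abs(r - t) < 1e-12 for r, t in zip(rec, z))
```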

Lemma 2.1 [3]

Suppose that $y \in K^m$ and $u \in K^m$ with $K^m = K^{m_1} \times K^{m_2} \times \cdots \times K^{m_r}$ satisfy $y^T u = 0$. Then for all $j = 1, 2, \ldots, r$, either (i) $y^j = 0$; or (ii) $u^j = 0$; or (iii) there exists $\sigma_j > 0$ such that $y^j = \sigma_j (u_0^j; -\bar{u}^j)$.

Let $\Pi_{K^m}(z)$ denote the metric projection of $z$ onto the SOC $K^m$, and let $s_+ := \max\{0, s\}$ and $s_- := \min\{0, s\}$ for any $s \in \mathbb{R}$. Then
$$\Pi_{K^m}(z) = (\lambda_1(z))_+ c_1(z) + (\lambda_2(z))_+ c_2(z).$$
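With the clipped spectral values $(\lambda_i(z))_+$, the projection admits a direct implementation (a self-contained Python sketch; `proj_soc` is our own name):

```python
import math

def proj_soc(z):
    """Metric projection onto K^m: Pi(z) = (lam1)_+ c1 + (lam2)_+ c2."""
    z0, zbar = z[0], z[1:]
    nz = math.sqrt(sum(t * t for t in zbar))
    if nz == 0.0:                              # zbar = 0: project z0 onto R_+
        return [max(0.0, z0)] + [0.0] * len(zbar)
    lam = (max(0.0, z0 - nz), max(0.0, z0 + nz))
    w = [t / nz for t in zbar]
    return [0.5 * (lam[0] + lam[1])] + [0.5 * (lam[1] - lam[0]) * t for t in w]

# A point of K^m is fixed, a point of -K^m maps to 0, and a point outside
# both cones is mapped onto the boundary of K^m.
assert proj_soc([5.0, 3.0, 0.0]) == [5.0, 3.0, 0.0]
assert proj_soc([-5.0, 3.0, 0.0]) == [0.0, 0.0, 0.0]
assert proj_soc([0.0, 2.0, 0.0]) == [1.0, 1.0, 0.0]
```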

Now, we introduce some notions of the generalized differential calculus of Mordukhovich [17].

Given a closed set $A \subseteq \mathbb{R}^n$ and a point $\bar{x} \in A$, we denote by $\hat{N}_A(\bar{x})$ the Fréchet (regular) normal cone to $A$ at $\bar{x}$, defined by
$$\hat{N}_A(\bar{x}) := \left\{ x^* \in \mathbb{R}^n \;\middle|\; \limsup_{x \xrightarrow{A} \bar{x}} \frac{\langle x^*, x - \bar{x} \rangle}{\|x - \bar{x}\|} \le 0 \right\}.$$
The limiting (Mordukhovich) normal cone to $A$ at $\bar{x}$, denoted $N_A(\bar{x})$, is defined by
$$N_A(\bar{x}) := \limsup_{x \xrightarrow{A} \bar{x}} \hat{N}_A(x),$$
where 'lim sup' is the Painlevé-Kuratowski outer limit of sets [18]. If $A$ is convex, then $N_A(\bar{x}) = \hat{N}_A(\bar{x})$ coincides with the classical normal cone of convex analysis.

Let $S: \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ be a multifunction whose graph, $\operatorname{gph} S := \{(x, y) \in \mathbb{R}^n \times \mathbb{R}^m \mid y \in S(x)\}$, is closed, and let $(\bar{a}, \bar{b}) \in \operatorname{gph} S$. The multifunction $D^* S(\bar{a}, \bar{b}): \mathbb{R}^m \rightrightarrows \mathbb{R}^n$, defined by
$$D^* S(\bar{a}, \bar{b})(v) := \{ w \in \mathbb{R}^n \mid (w, -v) \in N_{\operatorname{gph} S}(\bar{a}, \bar{b}) \},$$
is called the limiting (Mordukhovich) coderivative of $S$ at $(\bar{a}, \bar{b})$ [17]. If $S$ happens to be single-valued, we simply write $D^* S(\bar{a})$. If $S$ is continuously differentiable, then $D^* S(\bar{a})$ amounts to the adjoint Jacobian of $S$ at $\bar{a}$.

Lemma 2.2 [19, 20]

Suppose that $u \in \mathbb{R}^m$ and $z = (z_0; \bar{z}) \in \mathbb{R}^m$ satisfies either (i) $z_0 > \|\bar{z}\|$; or (ii) $-z_0 > \|\bar{z}\|$; or (iii) $|z_0| < \|\bar{z}\|$. Then $\Pi_{K^m}(\cdot)$ is continuously differentiable in a neighborhood of $z$, and
$$D^* \Pi_{K^m}(z)(u) = \{ J\Pi_{K^m}(z)\, u \}.$$
Moreover, we have:

(1) if $z_0 > \|\bar{z}\|$, then $J\Pi_{K^m}(z) = I_m$;

(2) if $-z_0 > \|\bar{z}\|$, then $J\Pi_{K^m}(z) = 0$;

(3) if $|z_0| < \|\bar{z}\|$, then
$$J\Pi_{K^m}(z) = \frac{1}{2} \begin{pmatrix} 1 & \dfrac{\bar{z}^T}{\|\bar{z}\|} \\[2mm] \dfrac{\bar{z}}{\|\bar{z}\|} & \left(1 + \dfrac{z_0}{\|\bar{z}\|}\right) I_{m-1} - \dfrac{z_0}{\|\bar{z}\|} \dfrac{\bar{z}\bar{z}^T}{\|\bar{z}\|^2} \end{pmatrix}.$$
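The closed form in case (3) can be checked against central finite differences of the projection (an illustrative, self-contained Python sketch; the helper names are ours):

```python
import math

def proj(z):
    """Projection onto K^m in spectral form (as in Section 2)."""
    z0, zbar = z[0], z[1:]
    nz = math.sqrt(sum(t * t for t in zbar))
    lam = (max(0.0, z0 - nz), max(0.0, z0 + nz))
    w = [t / nz for t in zbar] if nz > 0 else [0.0] * len(zbar)
    return [0.5 * (lam[0] + lam[1])] + [0.5 * (lam[1] - lam[0]) * t for t in w]

def jac_formula(z):
    """The Jacobian of Lemma 2.2(3), valid when |z0| < ||zbar||."""
    z0, zbar = z[0], z[1:]
    nz = math.sqrt(sum(t * t for t in zbar))
    m1 = len(zbar)
    J = [[0.0] * (m1 + 1) for _ in range(m1 + 1)]
    J[0][0] = 0.5
    for i in range(m1):
        J[0][i + 1] = J[i + 1][0] = 0.5 * zbar[i] / nz
        for k in range(m1):
            J[i + 1][k + 1] = 0.5 * ((1.0 + z0 / nz) * (i == k)
                                     - (z0 / nz) * zbar[i] * zbar[k] / nz ** 2)
    return J

# Compare column by column with central differences at z = (1; 3, -4),
# where |z0| = 1 < 5 = ||zbar|| (case (3)).
z, h = [1.0, 3.0, -4.0], 1e-6
J = jac_formula(z)
for k in range(3):
    zp, zm = list(z), list(z)
    zp[k] += h
    zm[k] -= h
    col = [(a - b) / (2.0 * h) for a, b in zip(proj(zp), proj(zm))]
    assert all(abs(col[i] - J[i][k]) < 1e-5 for i in range(3))
```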
For any $u = (u_0; \bar{u}) \in \mathbb{R} \times \mathbb{R}^{m-1}$ with $\bar{u} \ne 0$, the matrix-valued mapping $P(u)$ is defined by
$$P(u) = \frac{1}{2} \begin{pmatrix} 1 & \dfrac{\bar{u}^T}{\|\bar{u}\|} \\[2mm] \dfrac{\bar{u}}{\|\bar{u}\|} & \left(1 + \dfrac{u_0}{\|\bar{u}\|}\right) I_{m-1} - \dfrac{u_0}{\|\bar{u}\|} \dfrac{\bar{u}\bar{u}^T}{\|\bar{u}\|^2} \end{pmatrix}.$$

Then we obtain the following property of the matrix P ( u ) .

Lemma 2.3 For any $u = (u_0; \bar{u}) \in \mathbb{R} \times \mathbb{R}^{m-1}$ with $\bar{u} \ne 0$, the matrix $P(u)$ is nonsingular and $P(u) - I$ is singular.

Proof It is not difficult to show the results hold by direct calculations. □

3 Concepts and characteristics of feasible set

In this section, we give some basic concepts of the SOCBLP, such as the constraint region, the inducible region, and the optimal solution. We then illustrate that the SOCBLP is generally a nonconvex and nondifferentiable optimization problem, whose feasible set may not be connected in some cases and whose constraint region is generally not polyhedral for $m \ge 3$.

3.1 Basic concepts

Definition 3.1

(1) Constraint region of the SOCBLP problem:
$$S = \{ (x, y) : x \in X, \ y \in Y, \ A_1 x + B_1 y \le b_1, \ A_2 x + B_2 y \le b_2, \ y \in K^m \}.$$

(2) Feasible set of the follower for each fixed $x \in X$:
$$S(x) = \{ y \in Y : B_2 y \le b_2 - A_2 x, \ y \in K^m \}.$$

(3) Projection of $S$ onto the leader's decision space:
$$S(X) = \{ x \in X : \exists y \in Y, \ A_1 x + B_1 y \le b_1, \ A_2 x + B_2 y \le b_2, \ y \in K^m \}.$$

(4) Follower's rational reaction set for $x \in S(X)$:
$$P(x) = \{ y \in Y : y \in \operatorname{Argmin}[f(x, \hat{y}) : \hat{y} \in S(x)] \},$$
where $\operatorname{Argmin}[f(x, \hat{y}) : \hat{y} \in S(x)] = \{ y \in S(x) : f(x, y) \le f(x, \hat{y}), \ \forall \hat{y} \in S(x) \}$.

(5) Inducible region:
$$\mathrm{IR} = \{ (x, y) : (x, y) \in S, \ y \in P(x) \}.$$

For simplicity, unless otherwise specified, we make the following assumptions throughout this paper.

Assumption 3.1

(1) $S$ is nonempty and compact.

(2) For decisions taken by the leader, the follower has some room to respond, i.e., $P(x) \ne \emptyset$.

(3) The feasible set (i.e., the inducible region IR) of the SOCBLP is connected.
The rational reaction set $P(x)$ defines the follower's response, while the inducible region IR represents the set over which the leader may optimize his objective. Thus the SOCBLP problem (1) can be written as
$$\min \{ F(x, y) : (x, y) \in \mathrm{IR} \}.$$
Definition 3.2 A feasible point $(x^*, y^*)$ of the SOCBLP problem (1) is a local optimal solution provided that there exists $\varepsilon > 0$ such that
$$F(x^*, y^*) \le F(x, y), \quad \forall (x, y) \in \mathrm{IR} \text{ with } \|(x, y) - (x^*, y^*)\| < \varepsilon.$$

A local optimal solution is a global one if $\varepsilon$ can be chosen arbitrarily large.

When the lower level problem has more than one solution, the leader does not know the follower's actual choice, and it is hard for him to evaluate his objective function value before the follower's choice is revealed. In this paper, we assume that the leader is able to influence the follower to select, in each case, the solution of the lower level problem that is best for the leader. This results in the optimistic SOCBLP problem:
$$
\begin{array}{rl}
\min_{x \in X, y \in Y} & F(x, y) = c_1^T x + d_1^T y\\
\text{s.t.} & A_1 x + B_1 y \le b_1,\\
& y \in \Psi_L(x),
\end{array}
\tag{2}
$$

where $\Psi_L(x) := \operatorname{Argmin}_{y \in Y} \{ f(x, y) = c_2^T x + d_2^T y : A_2 x + B_2 y \le b_2, \ y \in K^m \}$. Unless otherwise specified, we assume that $X = \mathbb{R}^n$ and $Y = \mathbb{R}^m$ in the following analysis.

3.2 Characteristics of feasible set

It is well known that the BLP problem is usually nonconvex and nondifferentiable, and that its feasible set may not be connected in some cases. To examine whether these properties carry over to the SOCBLP, we discuss the characteristics of its feasible set in this subsection.

Example 3.1 Consider the following SOCBLP problem with $x \in \mathbb{R}$, $y \in \mathbb{R}^2$, $X = \{x \mid x \ge 0\}$ and $Y = \{ y = (y_1, y_2) \mid y \ge 0 \}$:
$$
\begin{array}{rl}
\text{``min''}_{x \in X} & F(x, y) = x + 3(y_1 - y_2)\\
\text{s.t.} & 2 \le x \le 6,\\
& \min_{y \in Y} f(x, y) = -(y_1 - y_2)\\
& \quad \text{s.t. } x + (y_1 - y_2) \le 8,\\
& \quad \phantom{\text{s.t. }} x + 4(y_1 - y_2) \ge 8,\\
& \quad \phantom{\text{s.t. }} x + 2(y_1 - y_2) \le 12,\\
& \quad \phantom{\text{s.t. }} y \in K^2.
\end{array}
$$
Figure 1 illustrates the feasible set of this example. The optimal solution of the lower level problem satisfies
$$y_1(x) - y_2(x) = \begin{cases} 6 - 0.5x, & \text{if } 2 \le x \le 4,\\ 8 - x, & \text{if } 4 \le x \le 6, \end{cases}$$
with $y_2(x) \ge 0$. On this set, the upper level objective function is to be minimized:
$$F(x, y(x)) = \begin{cases} 18 - 0.5x, & \text{if } 2 \le x \le 4,\\ 24 - 2x, & \text{if } 4 \le x \le 6. \end{cases}$$
The global optimal solutions are found at the points
$$D = \{ (x, y) \in \mathbb{R}_+ \times \mathbb{R}^2 : x = 6, \ y_1 - y_2 = 2, \ y_2 \ge 0 \},$$
which are depicted by the thick line in Figure 1, with an optimal function value of 12.
Figure 1 The SOCBLP problem with nonconvex feasible set.

Example 3.1 shows that, even with such simple linear data, the SOCBLP is a nonconvex and nondifferentiable optimization problem. Thus local optimal solutions or stationary points may exist that are not global. Furthermore, the optimal solution of the lower level problem may not be unique in some cases.
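The piecewise lower-level solution and the optimal value of Example 3.1 can be verified by brute force. The sketch below is our own check, not part of the paper; it exploits the fact that the lower level reduces to maximizing the scalar $s = y_1 - y_2$ over three linear constraints:

```python
# Lower level of Example 3.1: maximize s = y1 - y2 subject to
# s <= 8 - x, s >= (8 - x)/4 and s <= (12 - x)/2.
def s_opt(x):
    return min(8.0 - x, (12.0 - x) / 2.0)

def F_upper(x):                      # upper objective along y(x)
    return x + 3.0 * s_opt(x)

for k in range(401):
    x = 2.0 + 4.0 * k / 400          # grid on [2, 6]
    expected = 6.0 - 0.5 * x if x <= 4.0 else 8.0 - x
    assert abs(s_opt(x) - expected) < 1e-12
    assert s_opt(x) >= (8.0 - x) / 4.0 - 1e-12   # remaining constraint holds

# The minimum of F over [2, 6] is attained at x = 6 with value 12.
assert min(F_upper(2.0 + 4.0 * k / 400) for k in range(401)) == F_upper(6.0) == 12.0
```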

Example 3.2 Consider the following SOCBLP problem with $x \in \mathbb{R}$, $y \in \mathbb{R}^2$, $X = \{x \mid x \ge 0\}$ and $Y = \{ y = (y_1, y_2) \mid y_1 \ge 0, \ y_2 \ge 0 \}$:
$$
\begin{array}{rl}
\text{``min''}_{x \in X} & F(x, y) = x + 2(y_1 + y_2)\\
\text{s.t.} & 0 \le x \le 6,\\
& y_1 + y_2 \le 3,\\
& \min_{y \in Y} f(x, y) = -(y_1 + y_2)\\
& \quad \text{s.t. } x + (y_1 + y_2) \le 8,\\
& \quad \phantom{\text{s.t. }} x + 3(y_1 + y_2) \ge 8,\\
& \quad \phantom{\text{s.t. }} {-x} + (y_1 + y_2) \le 0,\\
& \quad \phantom{\text{s.t. }} y \in K^2.
\end{array}
$$
The optimal solution of the lower level problem satisfies
$$y_1(x) + y_2(x) = \begin{cases} x, & \text{if } 2 \le x \le 4,\\ 8 - x, & \text{if } 4 \le x \le 6, \end{cases}$$
with $y_2(x) \ge 0$. However, the upper level constraint $y_1 + y_2 \le 3$ holds only for $x \in [2, 3] \cup [5, 6]$, and therefore the feasible set is not connected, as depicted in Figure 2. Hence the global optimal solutions are found at
$$D = \{ (x, y) \in \mathbb{R}_+ \times \mathbb{R}^2 : x = 2, \ y_1 + y_2 = 2, \ y_2 \ge 0 \},$$
which are the points on the thick line in Figure 2, with an optimal function value of 6.
Figure 2 The SOCBLP problem with disconnected feasible set.

It should be noted that if the inequality $y_1 + y_2 \le 3$ is moved into the lower level problem, the feasible set of the SOCBLP is again connected; it consists of all points $(x, y(x))$ with $y_2(x) \ge 0$ and
$$y_1(x) + y_2(x) = \begin{cases} x, & \text{if } 2 \le x \le 3,\\ 3, & \text{if } 3 \le x \le 5,\\ 8 - x, & \text{if } 5 \le x \le 6. \end{cases}$$
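Both the disconnectedness and its repair can be confirmed numerically (our own brute-force sketch; the lower level again reduces to a scalar problem in $s = y_1 + y_2$):

```python
# Lower level of Example 3.2: maximize s = y1 + y2 subject to
# x + s <= 8, x + 3s >= 8 and -x + s <= 0; infeasible for x < 2.
def s_opt(x):
    lo, hi = (8.0 - x) / 3.0, min(8.0 - x, x)
    return hi if hi >= lo else None        # None: lower level infeasible

feasible = []
for k in range(601):
    x = 6.0 * k / 600                      # grid on [0, 6]
    s = s_opt(x)
    if s is None:
        assert x < 2.0
        continue
    assert abs(s - (x if x <= 4.0 else 8.0 - x)) < 1e-12
    if s <= 3.0:                           # upper level constraint y1 + y2 <= 3
        feasible.append(x)

# The feasible x-values form the disconnected set [2, 3] U [5, 6].
assert min(feasible) == 2.0 and max(feasible) == 6.0
assert all(x <= 3.0 or x >= 5.0 for x in feasible)

# Moving y1 + y2 <= 3 into the lower level reconnects the feasible set.
for k in range(401):
    x = 2.0 + 4.0 * k / 400
    s = min(8.0 - x, x, 3.0)
    assert abs(s - (x if x <= 3.0 else 3.0 if x <= 5.0 else 8.0 - x)) < 1e-12
```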

Example 3.2 shows that the feasible set of the SOCBLP may not be connected, especially when the upper level constraints depend on the lower level optimal solution. Furthermore, the placement of constraints is not arbitrary: moving a constraint between the two levels may change the feasible set of the problem. It is therefore necessary to point out that the solutions of the SOCBLP strongly depend on the order of play.

Example 3.3 Consider the following SOCBLP problem with $x \in \mathbb{R}$, $y \in \mathbb{R}^3$, $X = \{x \mid x \ge 0\}$ and $Y = K^3$:
$$
\begin{array}{rl}
\text{``min''}_{x \in X} & F(x, y) = x + 3y_1\\
\text{s.t.} & 2 \le x \le 6,\\
& \min_{y \in Y} f(x, y) = -y_1\\
& \quad \text{s.t. } x + y_1 \le 8,\\
& \quad \phantom{\text{s.t. }} x + 4y_1 \ge 8,\\
& \quad \phantom{\text{s.t. }} x + 2y_1 \le 12,\\
& \quad \phantom{\text{s.t. }} y \in K^3.
\end{array}
$$
Similarly to Example 3.1, the optimal solution of the lower level problem satisfies
$$y_1(x) = \begin{cases} 6 - 0.5x, & \text{if } 2 \le x \le 4,\\ 8 - x, & \text{if } 4 \le x \le 6, \end{cases}$$
with $\sqrt{y_2^2(x) + y_3^2(x)} \le y_1(x)$. The global optimal solutions of this problem are found at the points
$$D = \{ (x, y) \in \mathbb{R}_+ \times \mathbb{R}^3 : x = 6, \ y_1 = 2, \ \sqrt{y_2^2 + y_3^2} \le 2 \},$$

with an optimal function value of 12.
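A quick numeric check (again our own sketch) confirms the optimal value and the non-polyhedral shape of the optimal face $D$, which is a disk-shaped cross-section of $K^3$ rather than a line segment:

```python
import math

# Lower level of Example 3.3: y1(x) as in Example 3.1; the remaining
# coordinates range over the disk sqrt(y2^2 + y3^2) <= y1(x).
def y1_opt(x):
    return min(8.0 - x, (12.0 - x) / 2.0)

def in_K3(y):
    """Membership in the three-dimensional SOC K^3."""
    return y[0] >= math.hypot(y[1], y[2])

assert y1_opt(6.0) == 2.0
assert 6.0 + 3.0 * y1_opt(6.0) == 12.0        # optimal value F = 12
assert in_K3([2.0, 2.0, 0.0])                 # on the boundary circle of D
assert not in_K3([2.0, 2.0, 1.0])             # outside the disk: not in D
```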

Example 3.3 shows that, unlike in linear BLP, the constraint region of the SOCBLP with $m \ge 3$ is usually not polyhedral, and its inducible region is not made up of piecewise linear equality constraints formed from supporting hyperplanes of $S$. Indeed, the second-order cone with $m \ge 3$ is a closed convex cone but not a polyhedron, which distinguishes the SOCBLP sharply from the linear BLP.

Stated briefly, the SOCBLP has the following characteristics:

(i) it is generally a nonconvex and nondifferentiable optimization problem;

(ii) its feasible set may not be connected, especially when the upper level constraints depend on the lower level optimal solution, and its constraint region is generally not polyhedral for $m \ge 3$;

(iii) its solutions strongly depend on the order of play, since changing the position of a constraint can change the feasible set of the problem.

Therefore, compared with many other mathematical programming problems, the theory and algorithms of the SOCBLP are considerably more difficult to study.

4 Optimality conditions

In this section, the SOCBLP problem (1) in the optimistic case is reformulated as a single level optimization problem by using the Karush-Kuhn-Tucker (KKT) conditions of the lower level problem, and necessary optimality conditions for problem (1) are given under strict complementarity and linear independence constraint qualification assumptions.

For any fixed $x \in S(X)$, the Lagrange function of the lower level problem in (1) is defined by
$$L(x, y, u, v) = c_2^T x + d_2^T y + u^T (A_2 x + B_2 y - b_2) - v^T y,$$
and the set of Lagrange multipliers corresponding to the point $(x, y) \in \mathbb{R}^n \times \mathbb{R}^m$ is denoted by
$$\Lambda(x, y) := \left\{ (u, v) \in \mathbb{R}^q \times \mathbb{R}^m : \begin{array}{l} B_2^T u + d_2 - v = 0, \ v^T y = 0, \ v \in K^m, \ y \in K^m,\\ u^T (b_2 - A_2 x - B_2 y) = 0, \ b_2 - A_2 x - B_2 y \ge 0, \ u \ge 0 \end{array} \right\}. \tag{3}$$

Definition 4.1 The generalized Slater condition is satisfied at $x_0 \in S(X)$ for the SOCBLP problem (1) if there exists $y_0 \in \operatorname{int} K^m$ such that $A_2 x_0 + B_2 y_0 < b_2$ holds.

By replacing the lower level problem in (1) with its KKT conditions, we reformulate the SOCBLP problem (1) as the following mathematical programming problem with second-order cone complementarity constraints and linear complementarity constraints:
$$
\begin{array}{rl}
\min & F(x, y) = c_1^T x + d_1^T y\\
\text{s.t.} & A_1 x + B_1 y - b_1 \le 0,\\
& B_2^T u + d_2 - v = 0,\\
& u^T (b_2 - A_2 x - B_2 y) = 0, \ b_2 - A_2 x - B_2 y \ge 0, \ u \ge 0,\\
& v^T y = 0, \ v \in K^m, \ y \in K^m.
\end{array}
\tag{4}
$$

Remark Problem (4) is more complicated than the usual mathematical program with second-order cone complementarity constraints (SOCMPCC). A usual SOCMPCC involves only second-order cone complementarity constraints and, in some cases, equality constraints, whereas problem (4) also contains inequality constraints and linear complementarity constraints.

By extending Theorem 2.1 in [21] about BLP and MPCC, we obtain the following results as regards the SOCBLP problem (1) and problem (4).

Theorem 4.1 Let $(x^*, y^*)$ be a global (resp. local) optimal solution of the SOCBLP problem (1) and assume that the generalized Slater condition is satisfied at $x^*$. Then, for each $(u^*, v^*) \in \Lambda(x^*, y^*)$, the point $(x^*, y^*, u^*, v^*)$ is a global (resp. local) optimal solution of problem (4).

Proof Since $(u^*, v^*) \in \Lambda(x^*, y^*)$ for some $(u^*, v^*)$ if and only if $y^* \in P(x^*)$, the results of the theorem follow immediately. □

Next we discuss the necessary optimality conditions for problem (4).

By Proposition 6 in [22], we obtain
$$\langle u, b_2 - A_2 x - B_2 y \rangle = 0, \ b_2 - A_2 x - B_2 y \in \mathbb{R}^q_+, \ u \in \mathbb{R}^q_+ \iff \Pi_{\mathbb{R}^q_+}(u + A_2 x + B_2 y - b_2) = u \iff (u + A_2 x + B_2 y - b_2, u) \in \operatorname{gph} \Pi_{\mathbb{R}^q_+}$$
and
$$\langle y, B_2^T u + d_2 \rangle = 0, \ B_2^T u + d_2 \in K^m, \ y \in K^m \iff \Pi_{K^m}(y - B_2^T u - d_2) = y \iff (y - B_2^T u - d_2, y) \in \operatorname{gph} \Pi_{K^m}.$$
Here $\langle \cdot, \cdot \rangle$ stands for the Euclidean inner product, defined by $\langle x, s \rangle := x^T s$ for any two vectors $x, s \in \mathbb{R}^n$. Then problem (4) is equivalent to the following optimization problem:
$$\min F(x, y) = c_1^T x + d_1^T y \quad \text{s.t. } \Phi(x, y, u) \in \Omega, \tag{5}$$
where $\Omega = \mathbb{R}^p_- \times \operatorname{gph} \Pi_{\mathbb{R}^q_+} \times \operatorname{gph} \Pi_{K^m}$, and $\Phi(x, y, u): \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^q \to \mathbb{R}^{p + 2q + 2m}$ is defined by
$$\Phi(x, y, u) = \begin{pmatrix} A_1 x + B_1 y - b_1 \\ u + A_2 x + B_2 y - b_2 \\ u \\ y - B_2^T u - d_2 \\ y \end{pmatrix}. \tag{6}$$
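For the orthant part, the equivalence between complementarity and the projection fixed point is elementary and can be sanity-checked componentwise (a minimal sketch with our own helper `proj_plus`; the SOC part behaves analogously with $\Pi_{K^m}$):

```python
# Componentwise form of the first equivalence, with slack s = (b2 - A2 x - B2 y)_i,
# so that u + (A2 x + B2 y - b2)_i = u - s:
#   u >= 0, s >= 0, u * s = 0   <=>   Pi_{R_+}(u - s) = u.
def proj_plus(t):
    return max(0.0, t)

complementary = [(0.0, 3.0), (2.0, 0.0), (0.0, 0.0)]
violating = [(1.0, 2.0), (-1.0, 0.0)]     # u*s != 0, resp. u < 0

for u, s in complementary:
    assert proj_plus(u - s) == u
for u, s in violating:
    assert proj_plus(u - s) != u
```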

Theorem 4.2 [18]

Suppose that $(x^*, y^*, u^*)$ is a local optimal solution of (5). If the constraint qualification
$$\left. \begin{array}{l} J\Phi(x^*, y^*, u^*)^T w = 0,\\ w \in N_\Omega(\Phi(x^*, y^*, u^*)) \end{array} \right\} \implies w = 0 \tag{7}$$
holds, then there exists $w \in N_\Omega(\Phi(x^*, y^*, u^*))$ such that
$$0 = \begin{pmatrix} c_1 \\ d_1 \\ 0_q \end{pmatrix} + J\Phi(x^*, y^*, u^*)^T w. \tag{8}$$

Using Theorem 4.2, we now derive optimality conditions for problem (1).

Suppose that $(x^*, y^*, u^*)$ is a feasible point of problem (5). Then there exists a partition $(S, N)$ of $\{1, 2, \ldots, r\}$ such that
$$S := \{ j \mid y^{*j} + (B_2^T u^* + d_2)^j \in \operatorname{int} K^{m_j}, \ j = 1, 2, \ldots, r \}, \qquad N := \{1, 2, \ldots, r\} \setminus S.$$
By Lemma 2.1, we define three index subsets of $S$ as
$$S_1 := \{ j \mid y^{*j} \in \operatorname{int} K^{m_j}, \ (B_2^T u^* + d_2)^j = 0, \ j \in S \},$$
$$S_2 := \{ j \mid y^{*j}, (B_2^T u^* + d_2)^j \in \operatorname{bd} K^{m_j} \setminus \{0\}, \ j \in S \},$$
$$S_3 := \{ j \mid y^{*j} = 0, \ (B_2^T u^* + d_2)^j \in \operatorname{int} K^{m_j}, \ j \in S \}.$$
There also exists a partition $(S', N')$ of $\{1, 2, \ldots, q\}$ such that
$$S' := \{ i \mid u_i^* + (b_2 - A_2 x^* - B_2 y^*)_i > 0, \ i = 1, 2, \ldots, q \}, \qquad N' := \{1, 2, \ldots, q\} \setminus S'.$$
We define two index subsets of $S'$ as
$$S_1' := \{ i \mid u_i^* > 0, \ (b_2 - A_2 x^* - B_2 y^*)_i = 0, \ i \in S' \}, \qquad S_2' := \{ i \mid u_i^* = 0, \ (b_2 - A_2 x^* - B_2 y^*)_i > 0, \ i \in S' \}.$$

We are now in a position to present some assumptions.

(A1) The linear independence constraint qualification holds, i.e., the stacked matrix $\binom{A_1}{A_2}$ has full row rank with $p + q \le n$, and $((I_q)_{S_2'}, (B_2)_{S_1 \cup S_2})$ has full column rank with $|S_2'| + |S_1 \cup S_2| \le q$, where $I_q = (I_q^1, I_q^2, \ldots, I_q^q)$ and $B_2 = (B_2^1, B_2^2, \ldots, B_2^m)$ denote the column partitions of $I_q$ and $B_2$, and the subscripts select the columns (respectively column blocks) indexed by $S_2'$ and $S_1 \cup S_2$.

(A2) The componentwise strict complementarity condition holds at $(y^*, B_2^T u^* + d_2)$, i.e.,
$$y^* + (B_2^T u^* + d_2) \in \operatorname{int} K^m,$$
or, in other words, $N = \emptyset$.

(A3) The componentwise strict complementarity condition holds at $(u^*, b_2 - A_2 x^* - B_2 y^*)$, i.e.,
$$u^* + (b_2 - A_2 x^* - B_2 y^*) \in \operatorname{int} \mathbb{R}^q_+,$$
or, in other words, $N' = \emptyset$.

Lemma 4.3 If assumptions (A1), (A2), and (A3) hold, then the constraint qualification (7) holds.

Proof Suppose that there exists $w = (w_0; w_1; w_2; w_3; w_4) \in \mathbb{R}^p \times \mathbb{R}^q \times \mathbb{R}^q \times \mathbb{R}^m \times \mathbb{R}^m$ such that $J\Phi(x^*, y^*, u^*)^T w = 0$ and $w \in N_\Omega(\Phi(x^*, y^*, u^*))$, where
$$w_1 = (w_1^1; \ldots; w_1^q) \in \mathbb{R}^q, \quad w_2 = (w_2^1; \ldots; w_2^q) \in \mathbb{R}^q, \quad w_3 = (w_3^1; \ldots; w_3^r) \in \prod_{j=1}^r \mathbb{R}^{m_j}, \quad w_4 = (w_4^1; \ldots; w_4^r) \in \prod_{j=1}^r \mathbb{R}^{m_j}.$$
Then we have
$$A_1^T w_0 + A_2^T w_1 = 0, \qquad B_1^T w_0 + B_2^T w_1 + w_3 + w_4 = 0, \tag{9}$$
$$\begin{array}{l} w_1 + w_2 - B_2 w_3 = 0, \qquad w_0 \in N_{\mathbb{R}^p_-}(A_1 x^* + B_1 y^* - b_1),\\ (w_1, w_2) \in N_{\operatorname{gph} \Pi_{\mathbb{R}^q_+}}(u^* + A_2 x^* + B_2 y^* - b_2, u^*), \qquad (w_3, w_4) \in N_{\operatorname{gph} \Pi_{K^m}}(y^* - B_2^T u^* - d_2, y^*). \end{array} \tag{10}$$
Since $\binom{A_1}{A_2}$ has full row rank by assumption (A1), it follows from (9) that
$$w_0 = w_1 = 0, \qquad w_3 + w_4 = 0, \qquad w_2 - B_2 w_3 = 0. \tag{11}$$
By assumption (A3), the second inclusion in (10) is equivalent to
$$w_1^i \in D^* \Pi_{\mathbb{R}_+}(u_i^* + (A_2 x^* + B_2 y^* - b_2)_i, u_i^*)(-w_2^i), \quad i = 1, 2, \ldots, q,$$
i.e.,
$$J\Pi_{\mathbb{R}_+}(u_i^* + (A_2 x^* + B_2 y^* - b_2)_i)(-w_2^i) - w_1^i = 0, \quad i = 1, 2, \ldots, q.$$
The last relation implies that
$$w_1^i + w_2^i = 0 \ \text{for } i \in S_1', \qquad w_1^i = 0 \ \text{for } i \in S_2'. \tag{12}$$
From (11) and (12), we have
$$w_1^i = w_2^i = 0 \ \text{for } i \in S_1', \qquad w_1^i = 0 \ \text{for } i \in S_2'. \tag{13}$$
By assumption (A2), the third inclusion in (10) is equivalent to
$$w_3^j \in D^* \Pi_{K^{m_j}}(y^{*j} - (B_2^T u^* + d_2)^j, y^{*j})(-w_4^j), \quad j = 1, 2, \ldots, r,$$
i.e.,
$$J\Pi_{K^{m_j}}(y^{*j} - (B_2^T u^* + d_2)^j)(-w_4^j) - w_3^j = 0, \quad j = 1, 2, \ldots, r.$$
Together with Lemma 2.2, the last relation implies that
$$w_3^j + w_4^j = 0 \ \text{for } j \in S_1, \qquad P(y^{*j} - (B_2^T u^* + d_2)^j) w_4^j + w_3^j = 0 \ \text{for } j \in S_2, \qquad w_3^j = 0 \ \text{for } j \in S_3. \tag{14}$$
Then it follows from (11) and (14) that
$$w_3^j + w_4^j = 0 \ \text{for } j \in S_1 \cup S_2, \qquad w_3^j = w_4^j = 0 \ \text{for } j \in S_3. \tag{15}$$

Since $((I_q)_{S_2'}, (B_2)_{S_1 \cup S_2})$ has full column rank by assumption (A1), we obtain from (11), (13), and (15) that $w_2^i = 0$ for $i \in S_2'$ and $w_3^j = 0$ for $j \in S_1 \cup S_2$. Then it follows from (11), (13), and (15) that $w = 0$. This completes the proof. □

From Theorem 4.1, Theorem 4.2, and Lemma 4.3, we obtain the following optimality conditions for the SOCBLP problem (1).

Theorem 4.4 Let $(x^*, y^*)$ be a local optimal solution of the SOCBLP problem (1) and assume that the generalized Slater condition given by Definition 4.1 is satisfied at $x^*$. If assumptions (A1), (A2), and (A3) hold, then for each $(u^*, v^*) \in \Lambda(x^*, y^*)$ there exist $\lambda \in \mathbb{R}^p$, $\eta = (\eta_1; \ldots; \eta_q) \in \mathbb{R}^q$, $\zeta = (\zeta_1; \ldots; \zeta_q) \in \mathbb{R}^q$, $\mu = (\mu^1; \ldots; \mu^r) \in \prod_{j=1}^r \mathbb{R}^{m_j}$ and $\nu = (\nu^1; \ldots; \nu^r) \in \prod_{j=1}^r \mathbb{R}^{m_j}$ such that
$$\begin{array}{l} c_1 + A_1^T \lambda + A_2^T \eta = 0,\\ d_1 + B_1^T \lambda + B_2^T \eta + \mu + \nu = 0,\\ \eta + \zeta - B_2 \mu = 0,\\ \Pi_{\mathbb{R}^p_+}(\lambda + A_1 x^* + B_1 y^* - b_1) - \lambda = 0,\\ \Pi_{\mathbb{R}^q_+}(u^* + A_2 x^* + B_2 y^* - b_2) - u^* = 0,\\ \Pi_{K^m}(y^* - B_2^T u^* - d_2) - y^* = 0,\\ f_1(x^*, y^*, u^*, \lambda, \eta, \zeta, \mu, \nu) = 0,\\ f_2(x^*, y^*, u^*, \lambda, \eta, \zeta, \mu, \nu) = 0, \end{array} \tag{16}$$
with
$$f_1(x^*, y^*, u^*, \lambda, \eta, \zeta, \mu, \nu) = \begin{cases} \mu^j + \nu^j, & \text{if } j \in S_1,\\ P(y^{*j} - (B_2^T u^* + d_2)^j) \nu^j + \mu^j, & \text{if } j \in S_2,\\ \mu^j, & \text{if } j \in S_3, \end{cases}$$
$$f_2(x^*, y^*, u^*, \lambda, \eta, \zeta, \mu, \nu) = \begin{cases} \eta_i + \zeta_i, & \text{if } i \in S_1',\\ \eta_i, & \text{if } i \in S_2'. \end{cases}$$
Proof It follows from Theorem 4.1 that $(x^*, y^*, u^*)$ is a local optimal solution of (5). Since assumptions (A1), (A2), and (A3) hold, Theorem 4.2 and Lemma 4.3 give
$$0 \in \begin{pmatrix} c_1 \\ d_1 \\ 0_q \end{pmatrix} + J\Phi(x^*, y^*, u^*)^T N_\Omega(\Phi(x^*, y^*, u^*)).$$
Then there exists $(\lambda, \eta, \zeta, \mu, \nu) \in \mathbb{R}^p \times \mathbb{R}^q \times \mathbb{R}^q \times \mathbb{R}^m \times \mathbb{R}^m$ such that
$$c_1 + A_1^T \lambda + A_2^T \eta = 0, \qquad d_1 + B_1^T \lambda + B_2^T \eta + \mu + \nu = 0, \tag{17}$$
$$\begin{array}{l} \eta + \zeta - B_2 \mu = 0, \qquad \lambda \in N_{\mathbb{R}^p_-}(A_1 x^* + B_1 y^* - b_1),\\ (\eta, \zeta) \in N_{\operatorname{gph} \Pi_{\mathbb{R}^q_+}}(u^* + A_2 x^* + B_2 y^* - b_2, u^*), \qquad (\mu, \nu) \in N_{\operatorname{gph} \Pi_{K^m}}(y^* - B_2^T u^* - d_2, y^*). \end{array} \tag{18}$$
The first inclusion in (18) implies
$$\lambda \in N_{\mathbb{R}^p_-}(A_1 x^* + B_1 y^* - b_1) \iff \lambda \in \mathbb{R}^p_+, \ b_1 - A_1 x^* - B_1 y^* \in \mathbb{R}^p_+, \ \lambda^T (b_1 - A_1 x^* - B_1 y^*) = 0 \iff \Pi_{\mathbb{R}^p_+}(\lambda + A_1 x^* + B_1 y^* - b_1) - \lambda = 0. \tag{19}$$
By assumption (A3), the second inclusion in (18) is equivalent to
$$\eta \in D^* \Pi_{\mathbb{R}^q_+}(u^* + A_2 x^* + B_2 y^* - b_2, u^*)(-\zeta),$$
i.e.,
$$J\Pi_{\mathbb{R}_+}(u_i^* + (A_2 x^* + B_2 y^* - b_2)_i)(-\zeta_i) - \eta_i = 0, \quad i = 1, 2, \ldots, q.$$
The last relation implies that
$$\eta_i + \zeta_i = 0 \ \text{if } i \in S_1', \qquad \eta_i = 0 \ \text{if } i \in S_2'.$$
By assumption (A2), the third inclusion in (18) is equivalent to
$$\mu \in D^* \Pi_{K^m}(y^* - B_2^T u^* - d_2, y^*)(-\nu),$$
i.e.,
$$J\Pi_{K^{m_j}}(y^{*j} - (B_2^T u^* + d_2)^j)(-\nu^j) - \mu^j = 0, \quad j = 1, 2, \ldots, r.$$
Together with Lemma 2.2, the last relation implies that
$$\mu^j + \nu^j = 0 \ \text{if } j \in S_1, \qquad P(y^{*j} - (B_2^T u^* + d_2)^j) \nu^j + \mu^j = 0 \ \text{if } j \in S_2, \qquad \mu^j = 0 \ \text{if } j \in S_3.$$

Therefore, combining (17), (18), (19), and the feasibility of $(x^*, y^*, u^*)$ shows that (16) holds. This completes the proof. □

Remark Theorem 4.4 shows that, under the strict complementarity conditions and the linear independence constraint qualification, a local optimal solution is an M-stationary point in the sense introduced for mathematical programs governed by second-order cone constrained generalized equations in [14].

5 Extensions

In this section, we consider optimality conditions for the following three common models of the SOCBLP in the optimistic case:
$$
\begin{array}{rl}
\text{``min''}_{x \in \mathbb{R}^n} & F(x, y) = c_1^T x + d_1^T y\\
\text{s.t.} & A_1 x = b_1,\\
& \min_{y \in \mathbb{R}^m} f(x, y) = d_2^T y\\
& \quad \text{s.t. } A_2 x + B_2 y \le b_2, \ y \in K^m,
\end{array}
\tag{20}
$$
$$
\begin{array}{rl}
\text{``min''}_{x \in \mathbb{R}^n} & F(x, y) = c_1^T x + d_1^T y\\
\text{s.t.} & A_1 x + B_1 y = b_1,\\
& \min_{y \in \mathbb{R}^m} f(x, y) = c_2^T x + d_2^T y\\
& \quad \text{s.t. } A_2 x + B_2 y = b_2, \ y \in K^m,
\end{array}
\tag{21}
$$
$$
\begin{array}{rl}
\text{``min''}_{x \in \mathbb{R}^n} & F(x, y) = c_1^T x + d_1^T y\\
\text{s.t.} & A_1 x + B_1 y \le b_1,\\
& \min_{y \in \mathbb{R}^m} f(x, y) = c_2^T x + d_2^T y\\
& \quad \text{s.t. } b_2 - A_2 x - B_2 y \in K^q, \ y \ge 0,
\end{array}
\tag{22}
$$

where $K^q = K^{q_1} \times K^{q_2} \times \cdots \times K^{q_s}$ with $q = q_1 + q_2 + \cdots + q_s$ is the Cartesian product of second-order cones in problem (22). Here problem (20) is an extension of the bilevel programming problem proposed by Dempe [2], problem (21) is the case in which the constraints of problem (1) hold as equalities, and problem (22) is closely connected with problem (1), as will be seen in Section 5.3.

5.1 Optimality conditions for problem (20)

In this subsection, we discuss optimality conditions for problem (20) under strict complementarity and linear independence constraint qualification assumptions.

For any fixed $x \in S(X)$, the Lagrange function of the lower level problem in (20) is defined by
$$L_1(x, y, u, v) = d_2^T y + u^T (A_2 x + B_2 y - b_2) - v^T y,$$
and the set of Lagrange multipliers corresponding to the point $(x, y) \in \mathbb{R}^n \times \mathbb{R}^m$ is denoted by
$$\Lambda_1(x, y) := \left\{ (u, v) \in \mathbb{R}^q \times \mathbb{R}^m : \begin{array}{l} B_2^T u + d_2 - v = 0, \ v^T y = 0, \ v \in K^m, \ y \in K^m,\\ u^T (b_2 - A_2 x - B_2 y) = 0, \ b_2 - A_2 x - B_2 y \ge 0, \ u \ge 0 \end{array} \right\}.$$

Definition 5.1 The generalized Slater condition is satisfied at $x_0 \in S(X)$ for the SOCBLP problem (20) if there exists $y_0 \in \operatorname{int} K^m$ such that $A_2 x_0 + B_2 y_0 < b_2$ holds.

Theorem 5.1 Let $(x^*, y^*)$ be a global (resp. local) optimal solution of the SOCBLP problem (20) and assume that the generalized Slater condition given by Definition 5.1 is satisfied at $x^*$. Then, for each $(u^*, v^*) \in \Lambda_1(x^*, y^*)$, the point $(x^*, y^*, u^*, v^*)$ is a global (resp. local) optimal solution of the problem
$$
\begin{array}{rl}
\min & F(x, y) = c_1^T x + d_1^T y\\
\text{s.t.} & A_1 x - b_1 = 0,\\
& B_2^T u + d_2 - v = 0,\\
& u^T (b_2 - A_2 x - B_2 y) = 0, \ b_2 - A_2 x - B_2 y \ge 0, \ u \ge 0,\\
& v^T y = 0, \ v \in K^m, \ y \in K^m.
\end{array}
\tag{23}
$$
It is easy to see that problem (23) is equivalent to the following optimization problem:
$$\min F(x, y) = c_1^T x + d_1^T y \quad \text{s.t. } \Phi_1(x, y, u) \in \Omega_1, \tag{24}$$
where $\Omega_1 = \{0_p\} \times \operatorname{gph} \Pi_{\mathbb{R}^q_+} \times \operatorname{gph} \Pi_{K^m}$ and $\Phi_1(x, y, u): \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^q \to \mathbb{R}^{p + 2q + 2m}$ is defined by
$$\Phi_1(x, y, u) = \begin{pmatrix} A_1 x - b_1 \\ u + A_2 x + B_2 y - b_2 \\ u \\ y - B_2^T u - d_2 \\ y \end{pmatrix}.$$

Then similarly to Lemma 4.3 and Theorem 4.4, we obtain the optimality conditions for problem (20).

Lemma 5.2 Suppose that ( x , y , u ) is a local optimal solution of (24). If assumptions (A1), (A2), and (A3) hold, then the following constraint qualification holds:
J Φ 1 ( x , y , u ) T w = 0 , w N Ω 1 ( Φ 1 ( x , y , u ) ) } w = 0 .
Theorem 5.3 Let ( x , y ) be a local optimal solution of the SOCBLP problem (20) and assume that the generalized slater condition given by Definition  5.1 is satisfied at x . If assumptions (A1), (A2), and (A3) hold, then for each ( u , v ) Λ ( x , y ) , there exist λ R p , η = ( η 1 ; η 2 ; ; η q ) R q , ζ = ( ζ 1 ; ζ 2 ; ; ζ q ) R q , μ = ( μ 1 ; μ 2 ; ; μ r ) j = 1 r R m j and ν = ( ν 1 ; ν 2 ; ; ν r ) j = 1 r R m j such that
\[
\begin{aligned}
& c_1 + A_1^T\lambda + A_2^T\eta = 0,\\
& d_1 + B_2^T\eta + \mu + \nu = 0,\\
& \eta + \zeta - B_2\mu = 0,\\
& A_1x - b_1 = 0,\\
& \Pi_{\mathbb{R}_+^q}(u + A_2x + B_2y - b_2) - u = 0,\\
& \Pi_{K^m}(y - B_2^Tu - d_2) - y = 0,\\
& f_1(x,y,u,\lambda,\eta,\zeta,\mu,\nu) = 0,\qquad f_2(x,y,u,\lambda,\eta,\zeta,\mu,\nu) = 0,
\end{aligned}
\]
with
\[
f_1(x,y,u,\lambda,\eta,\zeta,\mu,\nu) =
\begin{cases}
\mu_j + \nu_j, & \text{if } j\in S_1,\\
P\bigl(y_j - (B_2^Tu + d_2)_j\bigr)\nu_j + \mu_j, & \text{if } j\in S_2,\\
\mu_j, & \text{if } j\in S_3,
\end{cases}
\qquad
f_2(x,y,u,\lambda,\eta,\zeta,\mu,\nu) =
\begin{cases}
\eta_i + \zeta_i, & \text{if } i\in S_1,\\
\eta_i, & \text{if } i\in S_2.
\end{cases}
\]

5.2 Optimality conditions for problem (21)

In this subsection, we reformulate problem (21) as a single-level optimization problem and then discuss its optimality conditions under strict complementarity and linear independence constraint qualification assumptions.

For any fixed \(x\in S(X)\), we define the Lagrange function of the lower level problem in (21) by
\[
L_2(x,y,u,v) = c_2^Tx + d_2^Ty + u^T(A_2x + B_2y - b_2) - v^Ty,
\]
and the set of Lagrange multipliers corresponding to a point \((x,y)\in\mathbb{R}^n\times\mathbb{R}^m\) by
\[
\Lambda_2(x,y) := \bigl\{(u,v)\in\mathbb{R}^q\times\mathbb{R}^m : B_2^Tu + d_2 - v = 0,\ A_2x + B_2y = b_2,\ v^Ty = 0,\ v\in K^m,\ y\in K^m\bigr\}.
\]

Definition 5.2 The generalized Slater condition is satisfied at \(x^0\in S(X)\) for the SOCBLP problem (21) if \(B_2\) has full row rank and there exists \(y^0\in\operatorname{int}K^m\) such that \(A_2x^0 + B_2y^0 = b_2\) holds.
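As a numerical sanity check of Definition 5.2, one can verify the full row rank of \(B_2\) and then test a candidate interior point. The sketch below is our own device with hypothetical data: it tries the least-norm solution of \(B_2y = b_2 - A_2x^0\) as a candidate \(y^0\); a failed candidate does not disprove the condition, since certifying it exactly is itself a conic feasibility problem.

```python
import numpy as np

def in_int_soc(z, tol=1e-12):
    """Interior of the second-order cone: z1 > ||zbar||."""
    return bool(z[0] > np.linalg.norm(z[1:]) + tol)

def slater_candidate(A2, B2, b2, x0):
    """Heuristic check of the generalized Slater condition of Definition 5.2.

    Step 1: B2 must have full row rank.
    Step 2: try the least-norm solution of B2 y = b2 - A2 x0 as a candidate
    for y0 in int K^m (only a sufficient certificate when it succeeds).
    """
    if np.linalg.matrix_rank(B2) < B2.shape[0]:
        return False, None
    y0, *_ = np.linalg.lstsq(B2, b2 - A2 @ x0, rcond=None)
    return in_int_soc(y0), y0

# hypothetical data: one equality row (q = 1), y in K^3
A2 = np.array([[1.0, 0.0]])
B2 = np.array([[1.0, 0.0, 0.0]])   # full row rank
b2 = np.array([3.0])
x0 = np.array([1.0, 0.0])
ok, y0 = slater_candidate(A2, B2, b2, x0)
print(ok, y0)   # True [2. 0. 0.] -- the least-norm solution lies in int K^3
```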

Theorem 5.4 Let \((x,y)\) be a global (resp. local) optimal solution of the SOCBLP problem (21) and assume that the generalized Slater condition given by Definition 5.2 is satisfied at \(x\). Then, for each \((u,v)\in\Lambda_2(x,y)\), the point \((x,y,u,v)\) is a global (resp. local) optimal solution of problem
\[
\begin{aligned}
\min\ & F(x,y) = c_1^Tx + d_1^Ty\\
\text{s.t. }\ & A_1x + B_1y - b_1 = 0,\\
& A_2x + B_2y - b_2 = 0,\\
& B_2^Tu + d_2 - v = 0,\\
& v^Ty = 0,\quad v\in K^m,\quad y\in K^m.
\end{aligned}\tag{25}
\]
It is not difficult to see that problem (25) is equivalent to the following optimization problem:
\[
\min\ F(x,y) = c_1^Tx + d_1^Ty \quad \text{s.t. } \Phi_2(x,y,u)\in\Omega_2,
\tag{26}
\]
where \(\Omega_2 = \{0_{p+q}\}\times\operatorname{gph}\Pi_{K^m}\) and \(\Phi_2:\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^q\to\mathbb{R}^{p+q+2m}\) is defined by
\[
\Phi_2(x,y,u) = \begin{pmatrix} A_1x + B_1y - b_1\\ A_2x + B_2y - b_2\\ y - B_2^Tu - d_2\\ y \end{pmatrix}.
\]

Now we give the following assumption.

(A4) The linear independence constraint qualification holds, i.e., the matrix \((A_1, A_2)\) has full row rank with \(p+q\le m\), and \(B_2^{S_1\cup S_2}\) has full column rank with \(|S_1\cup S_2|\le q\), where \(B_2 = (B_2^1, B_2^2, \dots, B_2^m)\).

Then, similarly to Lemma 4.3 and Theorem 4.4, we obtain the optimality conditions for problem (21).

Lemma 5.5 Suppose that ( x , y , u ) is a local optimal solution of (26). If assumptions (A2) and (A4) hold, then the following constraint qualification holds:
\[
\left.
\begin{array}{l}
J\Phi_2(x,y,u)^T w = 0,\\
w \in N_{\Omega_2}\bigl(\Phi_2(x,y,u)\bigr)
\end{array}
\right\}
\ \Longrightarrow\ w = 0.
\]
Theorem 5.6 Let \((x,y)\) be a local optimal solution of the SOCBLP problem (21) and assume that the generalized Slater condition given by Definition 5.2 is satisfied at \(x\). If assumptions (A2) and (A4) hold, then for each \((u,v)\in\Lambda_2(x,y)\) there exist \(\lambda\in\mathbb{R}^p\), \(\eta = (\eta_1;\eta_2;\dots;\eta_q)\in\mathbb{R}^q\), \(\mu = (\mu_1;\mu_2;\dots;\mu_r)\in\prod_{j=1}^r\mathbb{R}^{m_j}\) and \(\nu = (\nu_1;\nu_2;\dots;\nu_r)\in\prod_{j=1}^r\mathbb{R}^{m_j}\) such that
\[
\begin{aligned}
& c_1 + A_1^T\lambda + A_2^T\eta = 0,\\
& d_1 + B_1^T\lambda + B_2^T\eta + \mu + \nu = 0,\\
& B_2\mu = 0,\\
& A_1x + B_1y - b_1 = 0,\\
& A_2x + B_2y - b_2 = 0,\\
& \Pi_{K^m}(y - B_2^Tu - d_2) - y = 0,\\
& f_1(x,y,u,\lambda,\eta,\mu,\nu) = 0,
\end{aligned}
\]
with
\[
f_1(x,y,u,\lambda,\eta,\mu,\nu) =
\begin{cases}
\mu_j + \nu_j, & \text{for } j\in S_1,\\
P\bigl(y_j - (B_2^Tu + d_2)_j\bigr)\nu_j + \mu_j, & \text{for } j\in S_2,\\
\mu_j, & \text{for } j\in S_3.
\end{cases}
\]

5.3 Optimality conditions for problem (22)

In this subsection, we discuss the optimality conditions for problem (22) under suitable assumptions.

For any fixed \(x\in S(X)\), the Lagrange function of the lower level problem in (22) is defined by
\[
L_3(x,y,u,v) = c_2^Tx + d_2^Ty + u^T(A_2x + B_2y - b_2) - v^Ty,
\]
and the set of Lagrange multipliers corresponding to a point \((x,y)\in\mathbb{R}^n\times\mathbb{R}^m\) is denoted by
\[
\Lambda_3(x,y) := \bigl\{(u,v)\in\mathbb{R}^q\times\mathbb{R}^m : B_2^Tu + d_2 - v = 0,\ v^Ty = 0,\ v\ge 0,\ y\ge 0,\ u^T(b_2 - A_2x - B_2y) = 0,\ b_2 - A_2x - B_2y\in K^q,\ u\in K^q\bigr\}.
\]

Definition 5.3 The generalized Slater condition is satisfied at \(x^0\in S(X)\) for the SOCBLP problem (22) if there exists \(y^0 > 0\) such that \(b_2 - A_2x^0 - B_2y^0\in\operatorname{int}K^q\) holds.

Theorem 5.7 Let \((x,y)\) be a global (resp. local) optimal solution of the SOCBLP problem (22) and assume that the generalized Slater condition given by Definition 5.3 is satisfied at \(x\). Then, for each \((u,v)\in\Lambda_3(x,y)\), the point \((x,y,u,v)\) is a global (resp. local) optimal solution of
\[
\begin{aligned}
\min\ & F(x,y) = c_1^Tx + d_1^Ty\\
\text{s.t. }\ & A_1x + B_1y - b_1 \le 0,\\
& B_2^Tu + d_2 - v = 0,\\
& u^T(b_2 - A_2x - B_2y) = 0,\\
& b_2 - A_2x - B_2y\in K^q,\quad u\in K^q,\\
& v^Ty = 0,\quad v\ge 0,\quad y\ge 0.
\end{aligned}\tag{27}
\]

Since (27) has a form similar to the form (4) of problem (1), we can study the optimality conditions for problem (22) in the same way as for problem (1).

It is easy to see that problem (27) is equivalent to the following optimization problem:
\[
\min\ F(x,y) = c_1^Tx + d_1^Ty \quad \text{s.t. } \Phi_3(x,y,u)\in\Omega_3,
\tag{28}
\]
where \(\Omega_3 = \mathbb{R}_-^p\times\operatorname{gph}\Pi_{K^q}\times\operatorname{gph}\Pi_{\mathbb{R}_+^m}\) and \(\Phi_3:\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^q\to\mathbb{R}^{p+2q+2m}\) is defined by
\[
\Phi_3(x,y,u) = \begin{pmatrix} A_1x + B_1y - b_1\\ u + A_2x + B_2y - b_2\\ u\\ y - B_2^Tu - d_2\\ y \end{pmatrix}.
\]
Suppose that \((x,y,u)\) is a feasible point of problem (28). Then there exists a partition \((\hat S,\hat N)\) of \(\{1,2,\dots,m\}\) such that
\[
\hat S := \{\, j\in\{1,2,\dots,m\} : y_j + (B_2^Tu + d_2)_j > 0 \,\},\qquad \hat N := \{1,2,\dots,m\}\setminus\hat S.
\]
We define two index subsets of \(\hat S\) as
\[
\hat S_1 := \{\, j\in\hat S : y_j > 0,\ (B_2^Tu + d_2)_j = 0 \,\},\qquad \hat S_2 := \{\, j\in\hat S : y_j = 0,\ (B_2^Tu + d_2)_j > 0 \,\}.
\]
There also exists a partition \((\hat S',\hat N')\) of \(\{1,2,\dots,s\}\) such that
\[
\hat S' := \{\, i\in\{1,2,\dots,s\} : u_i + (b_2 - A_2x - B_2y)_i \in \operatorname{int}K^{q_i} \,\},\qquad \hat N' := \{1,2,\dots,s\}\setminus\hat S'.
\]
By Lemma 2.1, we define three index subsets of \(\hat S'\) as
\[
\begin{aligned}
\hat S_1' &:= \{\, i\in\hat S' : u_i\in\operatorname{int}K^{q_i},\ (b_2 - A_2x - B_2y)_i = 0 \,\},\\
\hat S_2' &:= \{\, i\in\hat S' : u_i,\ (b_2 - A_2x - B_2y)_i \in \operatorname{bd}K^{q_i}\setminus\{0\} \,\},\\
\hat S_3' &:= \{\, i\in\hat S' : u_i = 0,\ (b_2 - A_2x - B_2y)_i\in\operatorname{int}K^{q_i} \,\}.
\end{aligned}
\]
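To make the index partitions concrete, the following Python sketch (all data are hypothetical, and the fixed tolerance is our own assumption; exact sign conditions are replaced by tolerance tests) classifies the componentwise pairs \((y_j, (B_2^Tu+d_2)_j)\) and the cone blocks \((u_i, (b_2-A_2x-B_2y)_i)\) into the sets defined above:

```python
import numpy as np

def classify_orthant(y, w, tol=1e-10):
    """Partition {0,...,m-1} for the pair (y, w) with w = B2^T u + d2:
    S1: y_j > 0 = w_j;  S2: y_j = 0 < w_j;  N: y_j = w_j = 0 (degenerate)."""
    S1 = [j for j in range(len(y)) if y[j] > tol and abs(w[j]) <= tol]
    S2 = [j for j in range(len(y)) if abs(y[j]) <= tol and w[j] > tol]
    N  = [j for j in range(len(y)) if abs(y[j]) <= tol and abs(w[j]) <= tol]
    return S1, S2, N

def soc_position(z, tol=1e-10):
    """Locate z relative to a second-order cone: 'int', 'bd' (boundary minus 0), or 'zero'."""
    if np.linalg.norm(z) <= tol:
        return 'zero'
    return 'int' if z[0] > np.linalg.norm(z[1:]) + tol else 'bd'

def classify_soc_blocks(u_blocks, s_blocks, tol=1e-10):
    """Partition the cone blocks i for (u_i, s_i) with s = b2 - A2 x - B2 y:
    S1': u_i interior, s_i = 0;  S2': both on bd K \\ {0};  S3': u_i = 0, s_i interior.
    Any other pattern violates strict complementarity (it belongs to N')."""
    S1, S2, S3 = [], [], []
    for i, (ui, si) in enumerate(zip(u_blocks, s_blocks)):
        pu, ps = soc_position(ui, tol), soc_position(si, tol)
        if pu == 'int' and ps == 'zero':
            S1.append(i)
        elif pu == 'bd' and ps == 'bd':
            S2.append(i)
        elif pu == 'zero' and ps == 'int':
            S3.append(i)
    return S1, S2, S3

y = np.array([2.0, 0.0, 0.0])
w = np.array([0.0, 3.0, 0.0])
print(classify_orthant(y, w))                    # ([0], [1], [2])

u_blocks = [np.array([1.0, 0.5, 0.0]), np.array([0.0, 0.0])]
s_blocks = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0])]
print(classify_soc_blocks(u_blocks, s_blocks))   # ([0], [], [1])
```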

Now we present the following assumptions.

(A5) The linear independence constraint qualification holds, i.e., the matrix \((A_1, A_2)\) has full row rank with \(p+q\le m\), and \((I_q^{\hat S_3'}, B_2^{\hat S_1})\) has full column rank with \(|\hat S_3'| + |\hat S_1|\le q\), where \(I_q = (I_q^1, I_q^2, \dots, I_q^q)\) and \(B_2 = (B_2^1, B_2^2, \dots, B_2^m)\).

(A6) The componentwise strict complementarity condition holds at \((y, B_2^Tu + d_2)\), i.e.,
\[
y + (B_2^Tu + d_2) \in \operatorname{int}\mathbb{R}_+^m,
\]
or, in other words, \(\hat N = \emptyset\).

(A7) The componentwise strict complementarity condition holds at \((u, b_2 - A_2x - B_2y)\), i.e.,
\[
u + (b_2 - A_2x - B_2y) \in \operatorname{int}K^q,
\]
or, in other words, \(\hat N' = \emptyset\).

By using Lemma 2.3 and the proof of Lemma 4.3 and Theorem 4.4, we obtain the following optimality conditions for the SOCBLP problem (22).

Lemma 5.8 Suppose that ( x , y , u ) is a local optimal solution of (28). If assumptions (A5), (A6), and (A7) hold, then the following constraint qualification holds:
\[
\left.
\begin{array}{l}
J\Phi_3(x,y,u)^T w = 0,\\
w \in N_{\Omega_3}\bigl(\Phi_3(x,y,u)\bigr)
\end{array}
\right\}
\ \Longrightarrow\ w = 0.
\]
Theorem 5.9 Let \((x,y)\) be a local optimal solution of the SOCBLP problem (22) and assume that the generalized Slater condition given by Definition 5.3 is satisfied at \(x\). If assumptions (A5), (A6), and (A7) hold, then for each \((u,v)\in\Lambda_3(x,y)\) there exist \(\lambda\in\mathbb{R}^p\), \(\eta = (\eta_1;\eta_2;\dots;\eta_s)\in\prod_{i=1}^s\mathbb{R}^{q_i}\), \(\zeta = (\zeta_1;\zeta_2;\dots;\zeta_s)\in\prod_{i=1}^s\mathbb{R}^{q_i}\), \(\mu = (\mu_1;\mu_2;\dots;\mu_m)\in\mathbb{R}^m\) and \(\nu = (\nu_1;\nu_2;\dots;\nu_m)\in\mathbb{R}^m\) such that
\[
\begin{aligned}
& c_1 + A_1^T\lambda + A_2^T\eta = 0,\\
& d_1 + B_1^T\lambda + B_2^T\eta + \mu + \nu = 0,\\
& \eta + \zeta - B_2\mu = 0,\\
& \Pi_{\mathbb{R}_+^p}(\lambda + A_1x + B_1y - b_1) - \lambda = 0,\\
& \Pi_{K^q}(u + A_2x + B_2y - b_2) - u = 0,\\
& \Pi_{\mathbb{R}_+^m}(y - B_2^Tu - d_2) - y = 0,\\
& \hat f_1(x,y,u,\lambda,\eta,\zeta,\mu,\nu) = 0,\qquad \hat f_2(x,y,u,\lambda,\eta,\zeta,\mu,\nu) = 0,
\end{aligned}
\]
with
\[
\hat f_1(x,y,u,\lambda,\eta,\zeta,\mu,\nu) =
\begin{cases}
\mu_j + \nu_j, & \text{for } j\in\hat S_1,\\
\mu_j, & \text{for } j\in\hat S_2,
\end{cases}
\qquad
\hat f_2(x,y,u,\lambda,\eta,\zeta,\mu,\nu) =
\begin{cases}
\eta_i + \zeta_i, & \text{for } i\in\hat S_1',\\
P\bigl(u_i + (A_2x + B_2y - b_2)_i\bigr)\zeta_i + \eta_i, & \text{for } i\in\hat S_2',\\
\eta_i, & \text{for } i\in\hat S_3'.
\end{cases}
\]

6 Conclusions

In this paper, we discuss the models and theory of the SOCBLP. First, we give some basic concepts of the SOCBLP, such as the constraint region, the inducible region, and optimal solutions. Then several examples illustrate that the SOCBLP is in general a nonconvex and nondifferentiable optimization problem, whose feasible set may fail to be connected in some cases and whose constraint region is generally not polyhedral for \(m\ge 3\). Finally, the SOCBLP in the optimistic case is reformulated as a single-level optimization problem by using the KKT conditions of the lower level problem, and necessary optimality conditions for several models of the SOCBLP are given under strict complementarity and linear independence constraint qualification assumptions.

Declarations

Acknowledgements

This research was supported by the National Natural Science Foundation of China (Nos. 71171150, 11161001, 71161001) and the China Postdoctoral Science Foundation (No. 2012M511651). The authors are grateful to the editor and the anonymous referees for their valuable comments, which have greatly improved the paper.

Authors’ Affiliations

(1)
School of Mathematics and Statistics, Wuhan University
(2)
School of Mathematics and Computing Science, Guilin University of Electronic Technology
(3)
School of Information and Calculating Science, North University for Ethnics

References

  1. Bard, JF: Practical Bilevel Programming. Kluwer Academic, Dordrecht (1998)
  2. Dempe, S: Foundations of Bilevel Programming. Kluwer Academic, New York (2002)
  3. Alizadeh, F, Goldfarb, D: Second-order cone programming. Math. Program. 95, 3-51 (2003)
  4. Tang, JY, He, GP, Dong, L, Fang, L: A new one-step smoothing Newton method for second-order cone programming. Appl. Math. 57, 311-331 (2012)
  5. Li, YM, Wang, XT, Wei, DY: A new class of smoothing complementarity functions over symmetric cones. Commun. Nonlinear Sci. Numer. Simul. 15, 3299-3305 (2010)
  6. Wang, GQ, Bai, YQ: A new full Nesterov-Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 154, 966-985 (2012)
  7. Che, HT: A smoothing and regularization predictor-corrector method for nonlinear inequalities. J. Inequal. Appl. 2012, Article ID 214 (2012)
  8. Fang, L, Han, C: A new one-step smoothing Newton method for the second-order cone complementarity problem. Math. Methods Appl. Sci. 34, 347-359 (2011)
  9. Liu, X, Sun, J: Generalized stationary points and an interior point method for mathematical programs with equilibrium constraints. Math. Program. 101, 231-261 (2004)
  10. Zhang, Y, Zhang, LW, Wu, J: Convergence properties of a smoothing approach for mathematical programs with second-order cone complementarity constraints. Set-Valued Anal. 19, 609-646 (2011)
  11. Ejiri, T: A smoothing method for mathematical programs with second-order cone complementarity constraints. Master thesis, Kyoto University (2007)
  12. Yan, T, Fukushima, M: Smoothing method for mathematical programs with symmetric cone complementarity constraints. Optimization 60, 113-128 (2011)
  13. Jiang, Y: Optimization problems with second-order cone equilibrium constraints. PhD thesis, Dalian University of Technology (2011)
  14. Wu, J, Zhang, LW, Zhang, Y: A smoothing Newton method for mathematical programs governed by second-order cone constrained generalized equations. J. Glob. Optim. 55, 359-385 (2013)
  15. Ding, C, Sun, D, Ye, J: First order optimality conditions for mathematical programs with semidefinite cone complementarity constraints. Math. Program. (2014). doi:10.1007/s10107-013-0735-z
  16. Faraut, J, Korányi, A: Analysis on Symmetric Cones. Oxford University Press, New York (1994)
  17. Mordukhovich, BS: Variational Analysis and Generalized Differentiation I: Basic Theory. Springer, Berlin (2006)
  18. Rockafellar, RT, Wets, RJ-B: Variational Analysis. Springer, Berlin (1998)
  19. Liu, YJ, Zhang, LW: Convergence analysis of the augmented Lagrangian method for nonlinear second-order cone optimization problems. Nonlinear Anal. 67, 1359-1373 (2007)
  20. Outrata, JV, Sun, D: On the coderivative of the projection operator onto the second-order cone. Set-Valued Anal. 16, 999-1014 (2008)
  21. Dempe, S, Dutta, J: Is bilevel programming a special case of a mathematical programming with complementarity constraints? Math. Program. 131, 37-48 (2012)
  22. Gowda, MS, Sznajder, R, Tao, J: Some P-properties for linear transformations on Euclidean Jordan algebras. Linear Algebra Appl. 393, 203-232 (2004)

Copyright

© Chi et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.