
A solution method for the optimistic linear semivectorial bilevel optimization problem

Abstract

In this paper, we consider the linear semivectorial bilevel programming problem. Based on the optimal value function reformulation approach, the problem is transformed into a nonsmooth optimization problem, and a solution algorithm is proposed. We analyze the global and local convergence of the algorithm and give numerical examples to illustrate it.

1 Introduction

Bilevel programming (BLP), which involves two optimization problems in which the constraint region of the upper level problem is implicitly determined by the lower level problem, has been widely applied to decentralized planning problems with a hierarchical decision process. Nowadays, more and more researchers devote their efforts to this field, and many papers on bilevel optimization have been published from both theoretical and computational points of view [1–3]. However, most of them deal with bilevel programming problems in which the lower level is a single objective optimization problem. In fact, many practical problems need to be modeled with a multi-objective (vector-valued) optimization problem in the lower level; see [4–6].

Bonnel and Morgan [7] first labeled the bilevel programming problem with a multi-objective lower level problem a ‘semivectorial bilevel programming problem’ and suggested a penalty approach to solve it in a general setting where the objective functions of the upper and lower levels are defined on Hausdorff topological spaces, but no numerical results were reported. Subsequently, Bonnel [8] derived necessary optimality conditions for the semivectorial bilevel optimization problem in very general Banach spaces. More recently, Dempe et al. [9] also studied optimality conditions based on the optimal value function reformulation approach for the semivectorial bilevel optimization problem.

Another penalty method was developed in [10] for the case where the objective function of the upper level problem is concave and the lower level is a linear multi-objective optimization problem. Along the lines of [7, 10], for a class of semivectorial bilevel programming problems in which the upper level is a general scalar-valued optimization problem and the lower level is a linear multi-objective optimization problem, Zheng and Wan [6] presented a new penalty function approach based on the objective penalty function method.

In this paper, inspired by the solution algorithm proposed in [11] for the optimistic linear Stackelberg problem, we will give an optimistic solution approach for the linear semivectorial bilevel programming problem. Our strategy can be outlined as follows. By using the weighted sum scalarization approach, we reformulate the linear semivectorial bilevel programming problem as a special bilevel programming problem, where the lower level is a parametric linear scalar optimization problem. Then, based on the optimal value function reformulation approach, we transform the linear semivectorial bilevel programming problem into a nonsmooth optimization problem and propose an algorithm. We analyze the global and local convergence of the algorithm and give a numerical example to illustrate the algorithm proposed in this paper.

The remainder of the paper is organized as follows. In the next section we present the mathematical model of the linear semivectorial bilevel programming problem and give the optimal value function reformulation approach. In Section 3, we give the optimistic solution algorithm and analyze the convergence. Finally, we conclude this paper.

2 Linear semivectorial bilevel programming problem and some properties

The linear semivectorial bilevel programming problem, which is considered in this paper, can be described as follows:

$$\begin{array}{rl} \min\limits_{x,y} & C_1^T x + C_2^T y,\\[2pt] \text{s.t.} & \min\limits_{y}\ Dy,\\[2pt] & \quad\text{s.t.}\ Ax + By \le b, \end{array}$$
(1)

where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$, $b \in \mathbb{R}^p$, $A \in \mathbb{R}^{p\times n}$, $B \in \mathbb{R}^{p\times m}$, $C_1 \in \mathbb{R}^{q\times n}$, $C_2 \in \mathbb{R}^{q\times m}$, $D \in \mathbb{R}^{l\times m}$.

Note that in problem (1), the objective function of the upper level is minimized w.r.t. x and y, that is, in this work we adopt the optimistic approach to consider the linear semivectorial bilevel programming problem [4].

Let $S = \{(x,y) : Ax + By \le b\}$ denote the constraint region of problem (1), $Y(x) = \{y \in \mathbb{R}^m : Ax + By \le b\}$ the feasible set of the lower level problem, and $\Pi_x = \{x \in \mathbb{R}^n : \exists y \in \mathbb{R}^m,\ Ax + By \le b\}$ the projection of $S$ onto the upper level decision space. To ensure that problem (1) is well defined, we make the following assumption.

(A) The constraint region S is nonempty and compact.

For fixed $x \in \mathbb{R}^n$, let $S(x)$ denote the set of weakly efficient solutions of the lower level problem

$$(P_x):\quad \min_{y}\ Dy, \quad \text{s.t.}\ Ax + By \le b.$$

Then problem (1) can be written as follows:

$$\min\ \bigl\{ C_1^T x + C_2^T y : x \in \Pi_x,\ y \in S(x) \bigr\}.$$

One way to transform the lower level problem $(P_x)$ into a usual one level optimization problem is the so-called scalarization technique, which consists of solving the following further parameterized problem:

$$\min_{y}\ \lambda^T D y, \quad \text{s.t.}\ Ax + By \le b,$$
(2)

where the new parameter vector $\lambda$ is a point of the unit simplex, i.e., $\lambda$ belongs to $\Omega = \{\lambda \in \mathbb{R}^l_+ : \sum_{i=1}^{l}\lambda_i = 1\}$. Since it is difficult to choose the best point $y(x)$ on the Pareto front for a given upper level variable $x$, our approach in this paper consists of treating the set $\Omega$ as an additional constraint set for the upper level problem. To proceed, denote by $\psi(x,\lambda)$ the solution set of problem (2) in the usual sense, for any given parameter pair $(x,\lambda) \in \Pi_x \times \Omega$. The following relationship (see, e.g., Theorem 3.1 in [9]) relates the solution set of $(P_x)$ to that of (2).
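For a fixed pair $(x,\lambda)$, problem (2) is an ordinary linear program and can be handed to any off-the-shelf LP solver. As a minimal illustrative sketch (not part of the original development; scipy's `linprog` stands in for a generic solver), the data below are the lower level of Example 2 in Section 3, with $\lambda = (0.5, 0.5)$:

```python
import numpy as np
from scipy.optimize import linprog

# Lower level of Example 2: objectives D y with D = [[-1,-1],[-2,-2]],
# constraints -2*y1 + y2 <= 0, y1 <= 2, 0 <= y2 <= 2 (x does not appear here).
D = np.array([[-1.0, -1.0], [-2.0, -2.0]])
lam = np.array([0.5, 0.5])            # weight vector from the simplex Omega

c = lam @ D                           # scalarized objective lambda^T D
res = linprog(c, A_ub=[[-2.0, 1.0]], b_ub=[0.0],
              bounds=[(None, 2.0), (0.0, 2.0)])
print(res.x, res.fun)                 # optimal y = (2, 2), scalarized value -6
```

The minimizer $y = (2,2)$ is one weakly efficient point of the lower level problem; sweeping $\lambda$ over $\Omega$ traces out the set $S(x)$ of Proposition 2.1 below.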

Proposition 2.1 Let assumption (A) be satisfied. Then we have

$$S(x) = \psi(x,\Omega) := \bigcup \bigl\{ \psi(x,\lambda) : \lambda \in \Omega \bigr\}.$$

Hence, problem (1) can be replaced by the following bilevel programming problem:

$$\begin{array}{rl} \min\limits_{x,y,\lambda} & C_1^T x + C_2^T y,\\[2pt] \text{s.t.} & \sum_{i=1}^{l}\lambda_i = 1,\quad \lambda \ge 0,\\[2pt] & \min\limits_{y}\ \lambda^T D y,\\[2pt] & \quad\text{s.t.}\ Ax + By \le b. \end{array}$$
(3)

The link between problems (1) and (3) is formalized in the next result. For this, recall that a set-valued map $\Xi: \mathbb{R}^a \rightrightarrows \mathbb{R}^b$ is closed at $(u,v) \in \mathbb{R}^a \times \mathbb{R}^b$ if, for any sequence $(u^k, v^k) \in \operatorname{gph}\Xi$ with $(u^k, v^k) \to (u,v)$, one has $v \in \Xi(u)$. $\Xi$ is said to be closed if it is closed at every point of $\mathbb{R}^a \times \mathbb{R}^b$.

Proposition 2.2 Consider problems (1) and (3); the following assertions hold:

(i) Let $(\bar x, \bar y)$ be a local (resp., global) optimal solution of problem (1). Then, for all $\bar\lambda \in \Omega$ with $\bar y \in \psi(\bar x, \bar\lambda)$, the point $(\bar x, \bar y, \bar\lambda)$ is a local (resp., global) optimal solution of problem (3).

(ii) Let $(\bar x, \bar y, \bar\lambda)$ be a local (resp., global) optimal solution of problem (3) and assume the set-valued mapping $\psi$ is closed. Then $(\bar x, \bar y)$ is a local (resp., global) optimal solution of problem (1).

Remark Since the objective and constraint functions in problem (1) satisfy the assumptions of Proposition 3.1 in [9], Proposition 2.2 follows directly from that result.

Problem (3) is a usual bilevel programming problem. One common approach to solving problem (3) is to replace the lower level problem by its Karush-Kuhn-Tucker (KKT) optimality conditions [12]. However, recent research by Dempe and Dutta [13] shows that, in general, the original bilevel programming problem and its KKT reformulation are not equivalent, even when the lower level problem is a parametric convex optimization problem. In this paper we adopt the optimal value function reformulation approach, described in the following.

Let $\varphi(x,\lambda) = \min_y \{\lambda^T D y : Ax + By \le b\}$ denote the optimal value function of the lower level problem. Then problem (3) can be transformed into the following one level optimization problem:

$$\begin{array}{rl} \min\limits_{x,y,\lambda} & C_1^T x + C_2^T y,\\[2pt] \text{s.t.} & \sum_{i=1}^{l}\lambda_i = 1,\quad \lambda \ge 0,\\[2pt] & \lambda^T D y \le \varphi(x,\lambda),\\[2pt] & Ax + By \le b. \end{array}$$
(4)

This approach has been investigated, e.g., in [14], where it is shown that problem (3) is fully equivalent to problem (4).

Similar to the result in [11], we also have the following results about the optimal value function φ(x,λ).

Proposition 2.3 The function φ(x,λ) is a piecewise linear concave function over Q:={(x,λ):φ(x,λ)<∞}.

This result follows from the fact that a linear programming problem has a basic optimal solution and that the number of vertices $\{(x^1,y^1),\dots,(x^p,y^p)\}$ of the convex polyhedron $\{(x,y) : Ax + By \le b\}$ is finite. Hence $\varphi(x,\lambda)$ can be written as $\varphi(x,\lambda) = \min_{i=1,\dots,p} \lambda^T D y^i$, which is clearly piecewise linear and concave. In addition, it is a standard result of convex analysis that $\varphi(x,\lambda)$ is Lipschitz continuous on the interior of $Q$ [15].
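The concavity of $\varphi$ in $\lambda$ can be made concrete numerically. The sketch below (illustrative only; the tiny instance with $n=1$, $m=2$ is hypothetical and not from the paper) evaluates $\varphi(x,\lambda)$ by solving the inner LP and checks the concavity inequality for a fixed $x$:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance: A = [1], B = [1, 1], b = [4], y >= 0,
# two lower level objectives given by D = [[-1, 0], [0, -2]].
A = np.array([[1.0]]); B = np.array([[1.0, 1.0]]); b = np.array([4.0])
D = np.array([[-1.0, 0.0], [0.0, -2.0]])

def phi(x, lam):
    """phi(x, lam) = min_y lam^T D y  s.t.  A x + B y <= b, y >= 0."""
    res = linprog(lam @ D, A_ub=B, b_ub=b - A @ x, bounds=[(0, None)] * 2)
    return res.fun

x = np.array([1.0])
l1, l2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mid = 0.5 * l1 + 0.5 * l2
# Concavity in lam: phi(x, mid) >= 0.5*phi(x, l1) + 0.5*phi(x, l2)
print(phi(x, l1), phi(x, l2), phi(x, mid))   # values: -3, -6, -3
```

Here the inequality holds strictly ($-3 > -4.5$), reflecting a kink of the piecewise linear function between the two weight vectors.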

Let $v^i \in \partial\varphi(x^i,\lambda^i)$, $i = 1,\dots,t$, be supergradients of the concave function $\varphi$, where

$$\partial\varphi(x',\lambda') := \bigl\{ \alpha : \varphi(x,\lambda) \le \varphi(x',\lambda') + \alpha^T (x - x', \lambda - \lambda'),\ \forall (x,\lambda) \in \mathbb{R}^{n+l} \bigr\}.$$
(5)

Then

$$\Bigl\{(x,y,\lambda) : Ax + By \le b,\ \sum_{i=1}^{l}\lambda_i = 1,\ \lambda \ge 0,\ \lambda^T D y \le \varphi(x,\lambda)\Bigr\} \subseteq T,$$

where

$$T := \Bigl\{(x,y,\lambda) : Ax + By \le b,\ \sum_{i=1}^{l}\lambda_i = 1,\ \lambda \ge 0,\ \lambda^T D y \le \varphi(x^i,\lambda^i) + (v^i)^T (x - x^i, \lambda - \lambda^i),\ i = 1,\dots,t \Bigr\},$$

with the vertices $(x^i, y^i, \lambda^i)$ of the polyhedron $\{(x,y,\lambda) : Ax + By \le b,\ \sum_{i=1}^{l}\lambda_i = 1,\ \lambda \ge 0\}$.
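The supergradient inequality (5) can be verified numerically. In the hypothetical toy instance below (not from the paper; the constraint $y_1 + y_2 \le 3$ plays the role of $Ax + By \le b$ for a fixed $x$), the vector $Dy^*$ built from an optimal lower level solution $y^*$ is a supergradient of $\varphi(x,\cdot)$ in $\lambda$, since $\varphi(x,\lambda) \le \lambda^T D y^*$ for every $\lambda$:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: feasible set y1 + y2 <= 3, y >= 0 (fixed x),
# lower level objectives D = [[-1, 0], [0, -2]].
D = np.array([[-1.0, 0.0], [0.0, -2.0]])
A_ub, b_ub = np.array([[1.0, 1.0]]), np.array([3.0])

def solve(lam):
    res = linprog(lam @ D, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.x, res.fun          # minimizer y*(lam) and value phi(lam)

lam0 = np.array([0.5, 0.5])
y0, phi0 = solve(lam0)
v = D @ y0                         # supergradient of phi in lam at lam0

# Defining inequality: phi(lam) <= phi(lam0) + v^T (lam - lam0) for all lam
for lam in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    _, val = solve(lam)
    assert val <= phi0 + v @ (lam - lam0) + 1e-9
```

One such linear overestimate per sampled point $(x^i,\lambda^i)$ is exactly what the polyhedral set $T$ collects.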

It is obvious that the optimal function value of problem (4) is not smaller than the optimal function value of the problem

$$\min_{x,y,\lambda}\ \bigl\{ C_1^T x + C_2^T y : (x,y,\lambda) \in T \bigr\}.$$
(6)

Based on the above analysis, we can propose an algorithm for problem (4). The main idea is that, instead of problem (4), the relaxed problem

$$\min_{x,y,\lambda}\ \Bigl\{ C_1^T x + C_2^T y : Ax + By \le b,\ \sum_{i=1}^{l}\lambda_i = 1,\ \lambda \ge 0,\ \lambda^T D y \le \min_{z \in Z} \lambda^T D z \Bigr\},$$
(7)

with $Z \subseteq Y(x)$, is solved. Clearly, the constraint $\lambda^T D y \le \min_{z\in Z} \lambda^T D z$ enlarges the feasible set of problem (4), since $\min_{z\in Z} \lambda^T D z \ge \varphi(x,\lambda)$ for $Z \subseteq Y(x)$.

In order to shrink the feasible set of problem (7) toward that of problem (4), $Z$ is extended successively. Furthermore, every optimal solution of problem (7) that is feasible for problem (4) is an optimal solution of the latter.

Proposition 2.4 Let $(\bar x, \bar y, \bar\lambda)$ be a global optimal solution of problem (7) for some $Z \subseteq Y(x)$. If $(\bar x, \bar y, \bar\lambda)$ is feasible for problem (4), then it is also a global optimal solution of problem (4).

Proof Suppose the opposite and denote the feasible set of problem (7) by

$$T_R := \Bigl\{(x,y,\lambda) : Ax + By \le b,\ \sum_{i=1}^{l}\lambda_i = 1,\ \lambda \ge 0,\ \lambda^T D y \le \min_{z \in Z} \lambda^T D z \Bigr\}.$$

Let $IR = \{(x,y,\lambda) : Ax + By \le b,\ \sum_{i=1}^{l}\lambda_i = 1,\ \lambda \ge 0,\ \lambda^T D y \le \varphi(x,\lambda)\}$ denote the feasible set of problem (4). Then there exists some point $(x,y,\lambda) \in IR$, $(x,y,\lambda) \ne (\bar x, \bar y, \bar\lambda)$, with

$$C_1^T x + C_2^T y < C_1^T \bar x + C_2^T \bar y.$$

As $IR \subseteq T_R$, $(x,y,\lambda)$ is also feasible for problem (7), contradicting the optimality of $(\bar x, \bar y, \bar\lambda)$. □

3 Algorithm and convergence

Based on the above analysis, we can propose the following algorithm.

Algorithm 1

Step 0. Choose an initial vector for $Z$; set $k := 1$.

Step 1. Solve problem (7) globally; let $(x^k, y^k, \lambda^k)$ denote an optimal solution.

Step 2. If $(x^k, y^k, \lambda^k)$ is feasible for problem (4), stop. Otherwise, compute an optimal solution $z^k$ of the lower level problem with parameters $(x^k, \lambda^k)$, set $Z := Z \cup \{z^k\}$, $k := k+1$, and go to Step 1.

In Step 0, we can find the first vector for Z by solving the following programming problem:

$$\min_{x,y}\ C_1^T x + C_2^T y, \quad \text{s.t.}\ Ax + By \le b.$$

In Step 1, problem (7) is a nonlinear, nonconvex programming problem with a linear objective function. It can be solved using an augmented Lagrangian approach that creates linear subproblems [16].
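Step 2, by contrast, requires only a single LP solve per iteration. The following sketch (illustrative, not the paper's implementation; scipy's `linprog` stands in for a generic LP solver) codes the feasibility test $\lambda^T D y \le \varphi(x,\lambda)$ and the cut generation, using the lower level data of Example 2 from this section (whose constraints do not involve $x$) and the iterates reported there:

```python
import numpy as np
from scipy.optimize import linprog

# Lower level data of Example 2: D = [[-1,-1],[-2,-2]],
# constraints -2*y1 + y2 <= 0, y1 <= 2, 0 <= y2 <= 2.
D = np.array([[-1.0, -1.0], [-2.0, -2.0]])
A_ub, b_ub = np.array([[-2.0, 1.0]]), np.array([0.0])
bounds = [(None, 2.0), (0.0, 2.0)]

def step2(y, lam, tol=1e-8):
    """Return (feasible_for_(4), z): z solves the lower level LP for lam,
    and feasibility holds iff lam^T D y <= lam^T D z = phi."""
    res = linprog(lam @ D, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    z = res.x
    return lam @ D @ y <= lam @ D @ z + tol, z

lam = np.array([0.5, 0.5])
ok1, z1 = step2(np.array([2.0, 0.0]), lam)   # Loop 1 iterate y = (2, 0)
ok2, z2 = step2(np.array([2.0, 2.0]), lam)   # Loop 2 iterate y = (2, 2)
print(ok1, z1, ok2)   # iterate 1 fails the check (cut z = (2, 2) is added); iterate 2 passes
```

When the check fails, $z$ is appended to $Z$ and the relaxed problem (7) is re-solved, exactly as in Step 2 of Algorithm 1.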

The following theorem gives the convergence result of Algorithm 1.

Theorem 3.1 Let assumption (A) be satisfied. Then every accumulation point ( x ¯ , y ¯ , λ ¯ ) of the sequence ( x k , y k , λ k ) produced by Algorithm 1 is a globally optimal solution for problem (4).

Proof The existence of accumulation points follows from the continuity and concavity of $\varphi(x,\lambda)$ together with the assumed compactness of $S$.

Let $(\bar x, \bar y, \bar\lambda)$ be an accumulation point of the sequence $\{(x^k, y^k, \lambda^k)\}$. Passing to a subsequence if necessary, we may assume $\{z^k\} \to \bar z$ for the sequence produced in Step 2 of Algorithm 1; such a convergent subsequence exists by the nonemptiness and compactness of $S$ together with the convergence $x^k \to \bar x$.

In the $k$-th iteration, suppose that $(x^k, y^k, \lambda^k)$ is not feasible for problem (4), i.e.,

$$\varphi(x^k, \lambda^k) = (\lambda^k)^T D z^k < (\lambda^k)^T D y^k,$$

with $z^k \in Z$ as computed in Step 2. By the continuity of the optimal value function $\varphi(x,\lambda)$, we have

$$\varphi(x^k, \lambda^k) = (\lambda^k)^T D z^k \to \bar\lambda^T D \bar z = \varphi(\bar x, \bar\lambda).$$

Feasibility of ( x k , y k , λ k ) to problem (7) leads to

$$(\lambda^k)^T D y^k \le \min_{z \in Z} (\lambda^k)^T D z,$$

and

$$\bar\lambda^T D \bar y \le \bar\lambda^T D \bar z = \varphi(\bar x, \bar\lambda).$$

Therefore, ( x ¯ , y ¯ , λ ¯ ) is feasible and globally optimal to problem (4). □

Now we consider the local convergence of Algorithm 1, that is, the case where, in Step 1, the relaxed problem (7) is solved locally instead of globally. Before stating the local convergence result, we first introduce a suitable cone of feasible directions.

Definition 3.1 Let $(\bar x, \bar y, \bar\lambda) \in T_R$. The contingent cone at $(\bar x, \bar y, \bar\lambda)$ with respect to $T_R$ is given by

$$C_{T_R}(\bar x, \bar y, \bar\lambda) = \Bigl\{ d \in \mathbb{R}^{n+m+l} : \exists \{(x^k, y^k, \lambda^k)\} \subseteq T_R,\ \exists \{t_k\} \subseteq \mathbb{R}_+ \setminus \{0\},\ \lim_{k\to\infty} t_k = 0,\ d = \lim_{k\to\infty} \frac{(x^k, y^k, \lambda^k) - (\bar x, \bar y, \bar\lambda)}{t_k} \Bigr\}.$$

We have the following local convergence result.

Theorem 3.2 Let assumption (A) be satisfied and assume that $(C_1^T, C_2^T, 0^T) d > 0$ for all $d \in C_{T_R}(x^k, y^k, \lambda^k)$ and all $k$. Then every accumulation point $(\bar x, \bar y, \bar\lambda)$ of the sequence $\{(x^k, y^k, \lambda^k)\}$ generated by Algorithm 1 with the adjusted Step 1 is a local optimal solution of problem (4).

Proof Both the existence of accumulation points and their feasibility for problems (4) and (7) follow as in Theorem 3.1. Let $(\bar x, \bar y, \bar\lambda)$ be an accumulation point of the sequence $\{(x^k, y^k, \lambda^k)\}$. In the $k$-th iteration, $(x^k, y^k, \lambda^k)$ is a local optimal solution of problem (7); hence, for every sequence $\{(x^l, y^l, \lambda^l)\} \subseteq T_R$ converging to $(x^k, y^k, \lambda^k)$, we have, for all sufficiently large $l$,

$$0 \le C_1^T x^l + C_2^T y^l - C_1^T x^k - C_2^T y^k = (C_1^T, C_2^T, 0^T)\bigl((x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\bigr) + o\bigl(\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|\bigr),$$
(8)

with $\lim_{l\to\infty} o(\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|) / \|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\| = 0$. Passing to a subsequence if necessary, we get

$$\lim_{l\to\infty} \frac{(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)}{\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|} = d \in C_{T_R}(x^k, y^k, \lambda^k).$$

Following (8), one deduces that

$$0 \le \frac{C_1^T x^l + C_2^T y^l - C_1^T x^k - C_2^T y^k}{\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|} = \frac{(C_1^T, C_2^T, 0^T)\bigl((x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\bigr)}{\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|} + \frac{o\bigl(\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|\bigr)}{\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|}.$$

Applying the assumption of Theorem 3.2, this leads to

$$\frac{C_1^T x^l + C_2^T y^l - C_1^T x^k - C_2^T y^k}{\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|} > \frac{o\bigl(\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|\bigr)}{\|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|}.$$

That is,

$$C_1^T x^l + C_2^T y^l - C_1^T x^k - C_2^T y^k > \epsilon \|(x^l, y^l, \lambda^l) - (x^k, y^k, \lambda^k)\|$$
(9)

for some sufficiently small $\epsilon > 0$.

Since $IR \cap U(x^k, y^k, \lambda^k) \subseteq T_R$, where $U(x^k, y^k, \lambda^k)$ is an open neighborhood of the point $(x^k, y^k, \lambda^k)$, and $(x^k, y^k, \lambda^k)$ is feasible for problem (4) for sufficiently large $k$, formula (9) remains valid for all $(x,y,\lambda) \in U(x^k, y^k, \lambda^k) \cap IR$ with $(x,y,\lambda) \ne (x^k, y^k, \lambda^k)$. Note that, for all $\{(x^l, y^l, \lambda^l)\}$ converging to $(\bar x, \bar y, \bar\lambda)$, we have

$$C_1^T x^l + C_2^T y^l - C_1^T \bar x - C_2^T \bar y > \epsilon \|(x^l, y^l, \lambda^l) - (\bar x, \bar y, \bar\lambda)\|,$$

because $(\bar x, \bar y, \bar\lambda)$ is an accumulation point of $\{(x^k, y^k, \lambda^k)\}$ and is feasible for problem (4). Hence $(\bar x, \bar y, \bar\lambda)$ is a local optimal solution of problem (4). □

To illustrate the above algorithm, we solve the following linear semivectorial bilevel programming problems.

Example 1 Consider the following linear semivectorial bilevel programming problem [6], where x∈ R 3 , y∈ R 3 :

$$\begin{array}{rl} \min\limits_{x \ge 0} & -14x_1 + 11x_2 + 8x_3 - 15y_1 - 3y_2 + 4y_3,\\[2pt] \text{s.t.} & \min\limits_{y \ge 0}\ F_{21} = 6x_1 - 2x_2 + 4x_3 - 4y_1 + 7y_2 - 7y_3,\\[2pt] & \min\limits_{y \ge 0}\ F_{22} = -x_1 - 13x_2 - 3x_3 + 4y_1 + 2y_2 + 4y_3,\\[2pt] & \min\limits_{y \ge 0}\ F_{23} = -x_1 - 2x_2 - 18x_3 + 3y_1 - 9y_2 + 8y_3,\\[2pt] & \quad\text{s.t.}\ 15x_1 - 7x_2 + 3x_3 + 2y_1 - 7y_2 + 2y_3 \le 200,\\[2pt] & \phantom{\quad\text{s.t.}\ } 7x_1 + 7x_2 + 6x_3 + y_1 + 13y_2 + y_3 \le 140,\\[2pt] & \phantom{\quad\text{s.t.}\ } 2x_1 + 2x_2 - x_3 + 14y_1 + 2y_2 + 2y_3 \le 240,\\[2pt] & \phantom{\quad\text{s.t.}\ } -3x_1 + 6x_2 + 12x_3 + 4y_1 - 8y_2 + y_3 \le 140,\\[2pt] & \phantom{\quad\text{s.t.}\ } 4x_1 - 7x_2 + 7x_3 + 2y_1 + 4y_2 - 7y_3 \le 45,\\[2pt] & \phantom{\quad\text{s.t.}\ } 4x_1 + 5x_2 + x_3 - 7y_1 - 6y_2 + y_3 \le 800. \end{array}$$

The solution proceeds as follows.

Step 0. Solve Example 1 without the lower level objective functions, obtaining the initial vector $Z := \{(14.1771, 2.7860, 6.0360)\}$; set $k := 1$.

Step 1. Solve problem (7), obtain an optimal solution ( x 1 1 , x 2 1 , x 3 1 , y 1 1 , y 2 1 , y 3 1 , λ 1 1 , λ 2 1 , λ 3 1 )=(11.9384,0,0,14.1771,2.7860,6.0360,0.0767,0.4870,0.4363).

Step 2. Solving the lower level problem in (4) with $(x_1^1, x_2^1, x_3^1, \lambda_1^1, \lambda_2^1, \lambda_3^1) = (11.9384, 0, 0, 0.0767, 0.4870, 0.4363)$ yields $(z_1^1, z_2^1, z_3^1) = (14.1771, 2.7860, 6.0360)$, which coincides with the solution $(y_1^1, y_2^1, y_3^1) = (14.1771, 2.7860, 6.0360)$ obtained in Step 1. Hence the algorithm terminates with the optimal solution $(x_1^1, x_2^1, x_3^1, y_1^1, y_2^1, y_3^1) = (11.9384, 0, 0, 14.1771, 2.7860, 6.0360)$, which coincides with the optimistic optimal solution reported in [6].

Note that for Example 1 only one iteration of the proposed algorithm is needed to reach the optimal solution. The reason is that the optimal solution obtained by ignoring the lower level objective functions is already feasible for this example. We also solved Examples 3 and 4 in [6] with the proposed algorithm; for both of them, one iteration suffices to obtain the optimal solution.

To further illustrate the effectiveness of the algorithm, we consider Example 2, constructed following Example 3.8 in [11]. As the two lower level objective functions are compatible, by Example 3.8 the optimistic optimal solution of Example 2 is also $(x,y) = (-1,-1,2,2)$.

Example 2 Consider the following linear semivectorial bilevel programming problem, where x∈ R 2 , y∈ R 2 :

$$\begin{array}{rl} \min\limits_{x,y} & 2x_1 + x_2 - 2y_1 + y_2,\\[2pt] \text{s.t.} & |x_1| \le 1,\quad -1 \le x_2 \le -0.75,\\[2pt] & \min\limits_{y}\ F_{21} = -y_1 - y_2,\\[2pt] & \min\limits_{y}\ F_{22} = -2y_1 - 2y_2,\\[2pt] & \quad\text{s.t.}\ -2y_1 + y_2 \le 0,\quad y_1 \le 2,\quad 0 \le y_2 \le 2. \end{array}$$

The solution proceeds as follows.

Loop 1 Step 0. Z:={(0,0)}, k:=1.

Step 1. Solve problem (7), and obtain an optimal solution ( x 1 1 , x 2 1 , y 1 1 , y 2 1 , λ 1 1 , λ 2 1 )=(−1,−1,2,0,0.5,0.5).

Step 2. The lower level problem in (4) with ( x 1 1 , x 2 1 , λ 1 1 , λ 2 1 )=(−1,−1,0.5,0.5) leads to ( z 1 1 , z 2 1 )=(2,2), which is added to Z. Go to Step 1.

Loop 2 Step 1. Solve problem (7) with the updated Z, and obtain an optimal solution ( x 1 2 , x 2 2 , y 1 2 , y 2 2 , λ 1 2 , λ 2 2 )=(−1,−1,2,2,0.5,0.5).

Step 2. The lower level problem in (4) with $(x_1^2, x_2^2, \lambda_1^2, \lambda_2^2) = (-1,-1,0.5,0.5)$ leads to $(z_1^2, z_2^2) = (2,2)$, which coincides with the solution of Step 1; hence the algorithm terminates with the optimal solution $(x_1^2, x_2^2, y_1^2, y_2^2) = (-1,-1,2,2)$.

4 Conclusion

In this paper, we consider the linear semivectorial bilevel programming problem. Based on the optimal value reformulation approach, we transform the problem into a single level programming problem. We propose an algorithm for solving the linear semivectorial bilevel programming problem, and the global and local convergence of the algorithm are analyzed. Finally, some linear semivectorial bilevel programming problems are solved to illustrate the algorithm.

In addition, since the constraint region S is compact, only its vertices need to be considered in the computation of optimal solutions; hence the algorithm proposed in this paper stops after a finite number of iterations.

References

  1. Bard J: Practical Bilevel Optimization: Algorithms and Applications. Nonconvex Optimization and Its Applications. Kluwer Academic, Dordrecht; 1998.

  2. Dempe S: Foundations of Bilevel Programming. Nonconvex Optimization and Its Applications. Kluwer Academic, Dordrecht; 2002.

  3. Vicente L, Calamai P: Bilevel and multilevel programming: a bibliography review. J. Glob. Optim. 1994, 5(3): 291-306. 10.1007/BF01096458

  4. Eichfelder G: Multiobjective bilevel optimization. Math. Program. 2010, 123: 419-449. 10.1007/s10107-008-0259-0

  5. Zhang G, Lu J, Dillon T: Decentralized multi-objective bilevel decision making with fuzzy demands. Knowl.-Based Syst. 2007, 20(5): 495-507. 10.1016/j.knosys.2007.01.003

  6. Zheng Y, Wan Z: A solution method for semivectorial bilevel programming problem via penalty method. J. Appl. Math. Comput. 2011, 37: 207-219. 10.1007/s12190-010-0430-7

  7. Bonnel H, Morgan J: Semivectorial bilevel optimization problem: penalty approach. J. Optim. Theory Appl. 2006, 131(3): 365-382. 10.1007/s10957-006-9150-4

  8. Bonnel H: Optimality conditions for the semivectorial bilevel optimization problem. Pac. J. Optim. 2006, 2: 447-467.

  9. Dempe S, Gadhi N, Zemkoho AB: New optimality conditions for the semivectorial bilevel optimization problem. J. Optim. Theory Appl. 2013, 157: 54-74. 10.1007/s10957-012-0161-z

  10. Ankhili Z, Mansouri A: An exact penalty on bilevel programs with linear vector optimization lower level. Eur. J. Oper. Res. 2009, 197: 36-41. 10.1016/j.ejor.2008.06.026

  11. Dempe S, Franke S: Solution algorithm for an optimistic linear Stackelberg problem. Comput. Oper. Res. 2014, 41: 277-281.

  12. Lv YB, Hu T, Wang G, et al.: A penalty function method based on Kuhn-Tucker condition for solving linear bilevel programming. Appl. Math. Comput. 2007, 188: 808-813. 10.1016/j.amc.2006.10.045

  13. Dempe S, Dutta J: Is bilevel programming a special case of a mathematical program with complementarity conditions? Math. Program. 2012, 131(1): 37-48.

  14. Ye JJ, Zhu DL: Optimality conditions for bilevel programming problems. Optimization 1995, 33: 9-27. 10.1080/02331939508844060

  15. Rockafellar RT: Convex Analysis. Princeton University Press, Princeton; 1970.

  16. Robinson SM: A quadratically convergent algorithm for general nonlinear programming problems. Math. Program. 1972, 3: 145-156.


Acknowledgements

The authors thank the referees and the editor for their valuable comments and suggestions on improvement of this paper. The work is supported by the Nature Science Foundation of China (Nos. 11201039, 71171150).

Author information


Corresponding author

Correspondence to Yibing Lv.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

YL and ZW conceived and designed the study. YL wrote and edited the paper. ZW reviewed the manuscript. The two authors read and approved the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lv, Y., Wan, Z. A solution method for the optimistic linear semivectorial bilevel optimization problem. J Inequal Appl 2014, 164 (2014). https://doi.org/10.1186/1029-242X-2014-164
