A solution method for the optimistic linear semivectorial bilevel optimization problem
Journal of Inequalities and Applications volume 2014, Article number: 164 (2014)
Abstract
In this paper, we consider the linear semivectorial bilevel programming problem. Based on the optimal value function reformulation approach, the problem is transformed into a nonsmooth optimization problem, and a solution algorithm is proposed. We analyze the global and local convergence of the algorithm and give an example to illustrate the proposed algorithm.
1 Introduction
Bilevel programming (BLP), which involves two optimization problems where the constraint region of the upper level problem is implicitly determined by the lower level problem, has been widely applied to decentralized planning problems involving a decision process with a hierarchical structure. Nowadays, more and more researchers have devoted their efforts to this field, and many papers on bilevel optimization have been published from both the theoretical and the computational points of view [1–3]. However, most of them deal with bilevel programming problems in which the lower level is a single objective optimization problem. In fact, many practical problems need to be modeled as multi-objective (vector-valued) optimization problems in the lower level; see [4–6].
Bonnel and Morgan [7] first labeled the bilevel programming problem with a multi-objective lower level problem a ‘semivectorial bilevel programming problem’ and suggested a penalty approach to solve the problem in a general setting where the objective functions of the upper and the lower level are defined on Hausdorff topological spaces, but no numerical results were reported. Subsequently, Bonnel [8] derived necessary optimality conditions for the semivectorial bilevel optimization problem in very general Banach spaces. More recently, Dempe et al. [9] also studied optimality conditions, based on the optimal value function reformulation approach, for the semivectorial bilevel optimization problem.
Another penalty method is developed in [10] for the case where the objective function of the upper level problem is concave and the lower level is a linear multi-objective optimization problem. Along the lines of [7, 10], for a class of semivectorial bilevel programming problems in which the upper level is a general scalar-valued optimization problem and the lower level is a linear multi-objective optimization problem, Zheng and Wan [6] presented a new penalty function approach based on the objective penalty function method.
In this paper, inspired by the solution algorithm proposed in [11] for the optimistic linear Stackelberg problem, we will give an optimistic solution approach for the linear semivectorial bilevel programming problem. Our strategy can be outlined as follows. By using the weighted sum scalarization approach, we reformulate the linear semivectorial bilevel programming problem as a special bilevel programming problem, where the lower level is a parametric linear scalar optimization problem. Then, based on the optimal value function reformulation approach, we transform the linear semivectorial bilevel programming problem into a nonsmooth optimization problem and propose an algorithm. We analyze the global and local convergence of the algorithm and give a numerical example to illustrate the algorithm proposed in this paper.
The remainder of the paper is organized as follows. In the next section we present the mathematical model of the linear semivectorial bilevel programming problem and give the optimal value function reformulation approach. In Section 3, we give the optimistic solution algorithm and analyze the convergence. Finally, we conclude this paper.
2 Linear semivectorial bilevel programming problem and some properties
The linear semivectorial bilevel programming problem, which is considered in this paper, can be described as follows:
where all vectors and matrices involved are of appropriate dimensions.
Note that in problem (1), the objective function of the upper level is minimized w.r.t. x and y, that is, in this work we adopt the optimistic approach to consider the linear semivectorial bilevel programming problem [4].
Let S denote the constraint region of problem (1), let the feasible set of the lower level problem be defined for each upper level decision x, and consider the projection of S onto the upper level’s decision space. To well define problem (1), we make the following assumption.
(A) The constraint region S is nonempty and compact.
For fixed x, let ψ(x) denote the set of weakly efficient solutions to the lower level problem
Then problem (1) can be written as follows:
One way to transform the lower level problem into a usual one level optimization problem is the so-called scalarization technique, which consists of solving the following further parameterized problem:
where the new parameter vector λ is a nonnegative point of the unit sphere. Since it is difficult to identify the best choice on the Pareto front for a given upper level variable x, our approach in this paper consists of considering the set Ω as a new constraint set for the upper level problem. To proceed in this way, consider the solution set of problem (2), in the usual sense, for any given parameter pair (x, λ). The following relationship (see, e.g., Theorem 3.1 in [9]) relates the weakly efficient solution set of the lower level problem to the solution sets of the scalarized problems (2).
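To make the weighted sum scalarization concrete, the following sketch uses entirely hypothetical data: a finite vertex set `VERTICES` of the lower level feasible region and two linear lower level objectives. Sweeping the weight λ over a grid and collecting the scalarized minimizers traces weakly efficient points of the lower level problem; the dominated vertex (3, 3) is never selected.

```python
# Hypothetical vertices of the lower level feasible polyhedron.
VERTICES = [(0.0, 3.0), (1.0, 1.0), (3.0, 0.0), (3.0, 3.0)]

def objectives(y):
    # Two linear lower level objectives: f1(y) = y1, f2(y) = y2.
    return (y[0], y[1])

def scalarized_argmin(lam):
    # Weighted sum problem (2): minimize lam1*f1(y) + lam2*f2(y)
    # over the (finite) vertex set of the lower level polyhedron.
    return min(VERTICES,
               key=lambda y: sum(l * f for l, f in zip(lam, objectives(y))))

# Sweeping lambda over the nonnegative weights (l, 1 - l) traces
# weakly efficient points of the lower level problem.
weakly_efficient = {scalarized_argmin((l, 1.0 - l))
                    for l in (0.0, 0.25, 0.5, 0.75, 1.0)}
print(sorted(weakly_efficient))  # [(0.0, 3.0), (1.0, 1.0), (3.0, 0.0)]
```

The extreme weights (1, 0) and (0, 1) recover the minimizers of the individual objectives, while intermediate weights pick out the compromise vertex (1, 1).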
Proposition 2.1 Let assumption (A) be satisfied. Then we have
Hence, problem (1) can be replaced by the following bilevel programming problem:
The link between problems (1) and (3) will be formalized in the next result. For this, recall that a set-valued map Ξ is closed at a point of its domain if, for every convergent sequence of points of its graph whose arguments tend to that point, the limit again belongs to the graph; Ξ is said to be closed if it is closed at every point of its domain.
Proposition 2.2 Consider problems (1) and (3); the following assertions hold:
(i) Let a point be a local (resp., global) optimal solution of problem (1). Then, for every associated scalarization parameter, the corresponding point is a local (resp., global) optimal solution of problem (3).

(ii) Let a point be a local (resp., global) optimal solution of problem (3) and assume the set-valued mapping ψ is closed. Then the corresponding point is a local (resp., global) optimal solution of problem (1).
Remark As the objective functions and constraint functions in problem (1) satisfy the conditions of Proposition 3.1 in [9], Proposition 2.2 follows directly from that result.
Problem (3) is a usual bilevel programming problem. One common approach to solving problem (3) is to replace the lower level problem by its Karush-Kuhn-Tucker (KKT) optimality conditions [12]. However, recent research by Dempe and Dutta [13] shows that, in general, the original bilevel programming problem and its KKT reformulation are not equivalent, even when the lower level problem is a parametric convex optimization problem. In this paper we therefore adopt the optimal value function reformulation approach, described in the following.
Using the optimal value function of the lower level problem, problem (3) can be transformed into the following one level optimization problem:
This approach has been investigated, e.g., in [14], where it is shown that problem (3) is fully equivalent to problem (4).
Similar to the results in [11], we also have the following properties of the optimal value function.
Proposition 2.3 The optimal value function is a piecewise linear concave function over Q.
This result is implied by the facts that a linear programming problem has a basic optimal solution and that the convex polyhedron has only finitely many vertices: the optimal value function can therefore be written as the minimum of finitely many functions that are linear in the parameters, which is clearly piecewise linear and concave. In addition, it is a standard result of convex analysis that such a function is Lipschitz continuous on the interior of Q [15].
Let supergradients of the concave optimal value function be computed at given parameter points. Then each such supergradient belongs to the finite set T determined by the vertices of the polyhedron.
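Proposition 2.3 and the finite set of supergradients can be illustrated numerically on toy data: since a linear program attains its optimum at a vertex, the optimal value, viewed as a function of the objective coefficients, is the minimum of finitely many linear functions, hence piecewise linear and concave, and a minimizing vertex acts as a supergradient. The sketch below uses a hypothetical vertex set and checks midpoint concavity on random weight pairs.

```python
import random

# Hypothetical vertices of the lower level feasible polyhedron.
VERTICES = [(0.0, 3.0), (1.0, 1.0), (3.0, 0.0)]

def value_fn(lam):
    # Optimal value of min_y lam . y over the polyhedron; the optimum
    # is attained at a vertex, so this is the minimum of finitely many
    # linear functions of lam -- piecewise linear and concave.
    return min(lam[0] * v[0] + lam[1] * v[1] for v in VERTICES)

def supergradient(lam):
    # A vertex attaining the minimum is a supergradient of the concave
    # value function at lam; it belongs to the finite vertex set
    # (the set T of the text).
    return min(VERTICES, key=lambda v: lam[0] * v[0] + lam[1] * v[1])

# Numerical check of midpoint concavity on random weight pairs:
# value_fn((a+b)/2) >= (value_fn(a) + value_fn(b)) / 2.
random.seed(0)
for _ in range(100):
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    assert value_fn(mid) >= (value_fn(a) + value_fn(b)) / 2 - 1e-12
print("value function is midpoint-concave on all sampled pairs")
```

The supergradients returned here always come from the finite vertex list, mirroring the containment in the finite set T stated above.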
It is obvious that the optimal function value of problem (4) is not smaller than the optimal function value of the problem
Based on the above analysis, we can propose an algorithm for problem (4). The main idea is that, instead of problem (4), the relaxed problem
with a finite set Z of parameter points, is solved. Obviously, this constraint enlarges the feasible set of problem (4).
In order to shrink the feasible set of problem (7) toward that of problem (4), Z is extended successively. Moreover, every optimal solution of problem (7) that is feasible to problem (4) is an optimal solution of the latter problem.
Proposition 2.4 Let a feasible point be a global optimal solution of problem (7) for a given set Z. If this point is feasible to problem (4), then it is also a globally optimal solution of problem (4).
Proof Suppose the contrary. Since the feasible set of problem (4) is contained in the feasible set of problem (7), any feasible point of problem (4) with a strictly smaller upper level objective value would also be feasible to problem (7), contradicting the global optimality of the given solution for problem (7). □
3 Algorithm and convergence
Based on the above analysis, we can propose the following algorithm.
Algorithm 1 Step 0. Choose an initial vector for Z and initialize the iteration counter.
Step 1. Solve problem (7) globally and denote its optimal solution.
Step 2. If the current point is feasible to problem (4), stop. Otherwise, compute an optimal solution of the lower level problem at the current parameter values, add the corresponding point to Z, and go to Step 1.
In Step 0, we can find the first vector for Z by solving the following programming problem:
In Step 1, problem (7) is a nonlinear, nonconvex programming problem with a linear objective function; it can be solved using an augmented Lagrangian function in order to create linear subproblems [16].
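The loop of Algorithm 1 can be sketched on a toy instance with entirely hypothetical data: a scalar parameter t restricted to a grid, two lower level vertices v1 and v2 whose values are linear in t, and an illustrative upper level objective. Each visited parameter z contributes the linearization cut f(t, v) ≤ value(z) + g(z)(t − z), where value is the lower level optimal value and g a supergradient; since the value function is concave, each linearization overestimates it, so the relaxed problem (7) indeed has an enlarged feasible set.

```python
# Hypothetical lower level data: the value of vertex v at parameter t
# is the linear function a*t + b.
PIECES = {"v1": (1.0, 1.0),    # f(t, v1) = t + 1
          "v2": (-1.0, 3.0)}   # f(t, v2) = 3 - t

def f(t, v):
    a, b = PIECES[v]
    return a * t + b

def value(t):
    # Lower level optimal value: minimum over the two vertices,
    # a piecewise linear concave function of t.
    return min(f(t, v) for v in PIECES)

def supergrad(t):
    # The slope of an active piece is a supergradient of the concave minimum.
    return PIECES[min(PIECES, key=lambda v: f(t, v))][0]

def upper_F(t, v):
    # Illustrative upper level objective: prefer large t with vertex v1.
    return -t if v == "v1" else 10.0

GRID = (0.0, 0.5, 1.0, 1.5, 2.0)

def algorithm1(z0, tol=1e-9):
    Z = [z0]
    while True:
        # Step 1: solve the relaxed problem (7) by brute force; every
        # z in Z contributes the cut f(t, v) <= value(z) + g(z)*(t - z),
        # which overestimates the concave value function.
        cands = [(t, v) for t in GRID for v in PIECES
                 if all(f(t, v) <= value(z) + supergrad(z) * (t - z) + tol
                        for z in Z)]
        t_star, v_star = min(cands, key=lambda p: upper_F(*p))
        # Step 2: feasibility check against the true value function.
        if f(t_star, v_star) <= value(t_star) + tol:
            return (t_star, v_star), len(Z)
        Z.append(t_star)   # extend Z and repeat Step 1

print(algorithm1(z0=0.0))  # ((1.0, 'v1'), 2): two cuts needed
```

Starting from z0 = 0.0, the first relaxation returns the infeasible point t = 2 with v1; the cut generated there tightens the relaxation, and the second iteration returns t = 1 with v1, which passes the Step 2 feasibility check.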
The following theorem gives the convergence result of Algorithm 1.
Theorem 3.1 Let assumption (A) be satisfied. Then every accumulation point of the sequence produced by Algorithm 1 is a globally optimal solution for problem (4).
Proof The existence of accumulation points follows from the concavity and continuity of the optimal value function, together with the assumed compactness of S.
Let a point be an accumulation point of the produced sequence. The sequence of lower level solutions computed in Step 2 of Algorithm 1 also possesses accumulation points; this follows from the nonemptiness and compactness of the set S, as well as from the convergence of the parameters.
In the kth iteration, suppose that the current iterate is not feasible to problem (4), i.e.
with the lower level solution calculated in Step 2. By the continuity of the optimal value function, we have
Feasibility of the iterates to problem (7) leads to
and
Therefore, the accumulation point is feasible and globally optimal to problem (4). □
Now we consider local optimality for Algorithm 1, that is, the case where, in Step 1 of Algorithm 1, the relaxed problem (7) is solved locally instead of globally. Before formulating the local convergence result, we first introduce a suitable cone of feasible directions.
Definition 3.1 The contingent cone to a set at a point of this set is given by
We have the following local convergence result.
Theorem 3.2 Let assumption (A) be satisfied and let the contingent cone condition hold at all feasible points. Then every accumulation point of the sequence generated by Algorithm 1 with the adjusted Step 1 is locally optimal for problem (4).
Proof Both the existence of accumulation points and the feasibility for problems (4) and (7) follow from Theorem 3.1. Let a point be an accumulation point of the produced sequence. In the kth iteration, the iterate is locally optimal for problem (7); then, for every feasible sequence converging to it, we have, for all k,
with . Obviously, we get
Following (8), one deduces that
Applying the assumption of Theorem 3.2 leads to
That is,
where ϵ > 0 is sufficiently small.
Since the iterates belong to an open neighborhood of the accumulation point and are feasible to problem (4) for sufficiently large k, formula (9) remains valid on this neighborhood. Finally, the desired inequality holds for all feasible sequences converging to the accumulation point, because this point is an accumulation point of the produced sequence and is feasible to problem (4). □
To illustrate the above algorithm, we solve the following linear semivectorial bilevel programming problems.
Example 1 Consider the following linear semivectorial bilevel programming problem [6]:
The solution proceeds as follows.
Step 0. Solve Example 1 without the lower level objective functions to obtain an initial vector for Z.
Step 1. Solve problem (7) and obtain an optimal solution.
Step 2. The lower level problem in (4) at the current parameters yields a solution that coincides with the solution of Step 1; hence the algorithm terminates with the optimal solution, which coincides with the optimistic optimal solution reported in [6].
Note that only one iteration of the proposed algorithm is needed to obtain the optimal solution of Example 1. The reason is that the optimal solution obtained by ignoring the lower level objective functions is already feasible to the original problem. We also solved Examples 3 and 4 in [6] with the proposed algorithm; for both examples, a single iteration suffices to obtain the optimal solution.
To further illustrate the effectiveness of the algorithm, we consider Example 2, which is constructed following Example 3.8 in [11]. As the two lower level objective functions are compatible, the optimistic optimal solution of Example 2 coincides with that of Example 3.8.
Example 2 Consider the following linear semivectorial bilevel programming problem:
The solution proceeds as follows.
Loop 1 Step 0. Choose the initial vector for Z.
Step 1. Solve problem (7) and obtain an optimal solution.
Step 2. The lower level problem in (4) at the current parameters yields a new point, which is added to Z. Go to Step 1.
Loop 2 Step 1. Solve problem (7) with the updated Z and obtain an optimal solution.
Step 2. The lower level problem in (4) at the current parameters yields a solution that coincides with the solution of Step 1; hence the algorithm terminates with the optimal solution.
4 Conclusion
In this paper, we consider the linear semivectorial bilevel programming problem. Based on the optimal value function reformulation approach, we transform the problem into a single level programming problem and propose an algorithm for solving it; the global and local convergence of the algorithm are analyzed. Finally, some linear semivectorial bilevel programming problems are solved to illustrate the algorithm.
In addition, as the constraint region S is a compact polyhedron, only its finitely many vertices need to be considered in the computation of optimal solutions; hence the algorithm proposed in this paper stops after a finite number of iterations.
References
Bard J: Practical Bilevel Optimization: Algorithms and Applications. Nonconvex Optimization and Its Applications. Kluwer Academic, Dordrecht; 1998.
Dempe S: Foundations of Bilevel Programming. Nonconvex Optimization and Its Applications. Kluwer Academic, Dordrecht; 2002.
Vicente L, Calamai P: Bilevel and multilevel programming: a bibliography review. J. Glob. Optim. 1994,5(3):291-306. 10.1007/BF01096458
Eichfelder G: Multiobjective bilevel optimization. Math. Program. 2010, 123: 419-449. 10.1007/s10107-008-0259-0
Zhang G, Lu J, Dillon T: Decentralized multi-objective bilevel decision making with fuzzy demands. Knowl.-Based Syst. 2007,20(5):495-507. 10.1016/j.knosys.2007.01.003
Zheng Y, Wan Z: A solution method for semivectorial bilevel programming problem via penalty method. J. Appl. Math. Comput. 2011, 37: 207-219. 10.1007/s12190-010-0430-7
Bonnel H, Morgan J: Semivectorial bilevel optimization problem: penalty approach. J. Optim. Theory Appl. 2006,131(3):365-382. 10.1007/s10957-006-9150-4
Bonnel H: Optimality conditions for the semivectorial bilevel optimization problem. Pac. J. Optim. 2006, 2: 447-467.
Dempe S, Gadhi N, Zemkoho AB: New optimality conditions for the semivectorial bilevel optimization problem. J. Optim. Theory Appl. 2013, 157: 54-74. 10.1007/s10957-012-0161-z
Ankhili Z, Mansouri A: An exact penalty on bilevel programs with linear vector optimization lower level. Eur. J. Oper. Res. 2009, 197: 36-41. 10.1016/j.ejor.2008.06.026
Dempe S, Franke S: Solution algorithm for an optimistic linear Stackelberg problem. Comput. Oper. Res. 2014, 41: 277-281.
Lv YB, Hu T, Wang G, et al.: A penalty function method based on Kuhn-Tucker condition for solving linear bilevel programming. Appl. Math. Comput. 2007, 188: 808-813. 10.1016/j.amc.2006.10.045
Dempe S, Dutta J: Is bilevel programming a special case of a mathematical program with complementarity constraints? Math. Program. 2012,131(1):37-48.
Ye JJ, Zhu DL: Optimality conditions for bilevel programming problems. Optimization 1995, 33: 9-27. 10.1080/02331939508844060
Rockafellar RT: Convex Analysis. Princeton University Press, Princeton; 1970.
Robinson SM: A quadratically convergent algorithm for general nonlinear programming problems. Math. Program. 1972, 3: 145-156.
Acknowledgements
The authors thank the referees and the editor for their valuable comments and suggestions on the improvement of this paper. The work is supported by the National Natural Science Foundation of China (Nos. 11201039, 71171150).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
YL and ZW conceived and designed the study. YL wrote and edited the paper. ZW reviewed the manuscript. The two authors read and approved the manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Lv, Y., Wan, Z. A solution method for the optimistic linear semivectorial bilevel optimization problem. J Inequal Appl 2014, 164 (2014). https://doi.org/10.1186/1029-242X-2014-164