Solving linear bilevel multiobjective programming problem via exact penalty function approach
Journal of Inequalities and Applications volume 2015, Article number: 258 (2015)
Abstract
In this paper, the linear bilevel multiobjective programming problem is addressed. The duality gap of the lower level problem is appended to the objectives of the upper level problem with a penalty, and a penalized problem for the linear bilevel multiobjective programming problem is obtained. We prove that the penalty function is exact, i.e., Pareto optimality is reached for a finite penalty parameter, and then an algorithm (the original algorithm) is proposed. In addition, for the linear bilevel multiobjective programming problem with given weights for the upper level objective functions, we analyze the optimality conditions and propose an algorithm (the weights algorithm). Numerical results showing the viability of the penalty function approach are presented.
Introduction
Bilevel programming (BP), which is characterized by the existence of two optimization problems in which the constraint region of the first-level problem is implicitly determined by another optimization problem, has increasingly been addressed in the literature, both from the theoretical and the computational points of view (see the monographs of Dempe [1] and Bard [2] and the reviews by Vicente and Calamai [3], Dempe [4], and Colson et al. [5]). In the last two decades, many papers have been published on bilevel optimization; however, only very few of them deal with the bilevel multiobjective programming (BMP) problem, in which the upper level, the lower level, or both levels of a bilevel decision have multiple conflicting objectives [6–8].
Although solving the bilevel multiobjective programming problem is not an easy task, some researchers have presented feasible approaches for this problem. Shi and Xia [9, 10] propose an interactive algorithm based on the concepts of satisfactoriness and direction vector for the nonlinear bilevel multiobjective problem. Abo-Sinna [11] and Osman [12] present approaches via fuzzy set theory for solving bilevel and multiple level multiobjective problems, and Teng and Li [13] and Deb and Sinha [14] give evolutionary algorithms for some bilevel multiobjective programming problems. Besides, for the so-called semivectorial bilevel optimization problem, Bonnel and Morgan [15] construct the penalized problem by the marginal function of the lower level problem and propose a penalty function algorithm; however, no numerical results are reported. Based on the objective penalty function approach for single level programs, Zheng and Wan [16] propose the corresponding penalty function algorithm, which contains two penalty parameters. A recent study by Eichfelder [8] suggests a refinement-based strategy in which the algorithm starts with a uniformly distributed set of points in the upper level variable space. Note that if the dimension of the upper level variable is high, generating a uniformly spread set of upper level variables and refining it will be computationally expensive.
The linear bilevel multiobjective programming (LBMP) problem, i.e., the problem in which both the objective functions and the constraint functions are linear, has attracted more and more attention. Nishizaki and Sakawa [17] give three Stackelberg solution definitions and propose the corresponding algorithms based on the idea of the Kth best method; however, they do not give numerical results. Ankhili and Mansouri [18] propose an exact penalty function algorithm based on the marginal function of the lower level problem for the LBMP problem in which the upper level is a linear scalar optimization problem and the lower level is a linear multiobjective optimization problem. Calvete and Gale [19] analyze the characteristics of the feasible region and give some algorithmic frameworks for the LBMP problem in which the upper level is a linear multiobjective optimization problem and the lower level is a linear scalar optimization problem.
In this paper, inspired by the method presented in [20, 21] for the linear bilevel single objective programming problem, we propose an exact penalty algorithm for the LBMP problem in which both the upper level and the lower level are linear multiobjective optimization problems. There are few reports on using the exact penalty approach for solving this LBMP problem. Our strategy can be outlined as follows. Using the weighted sum scalarization approach, we reformulate the LBMP problem as a special bilevel programming problem in which the upper level is a linear multiobjective programming problem and the lower level is a linear scalar optimization problem. Then the duality gap of the lower level problem is appended to the objectives of the upper level problem with a penalty, and a penalized problem for the LBMP is obtained. We prove that the penalty function is exact, and an exact penalty function algorithm is proposed for the LBMP. In addition, for the LBMP problem with given weights for the upper level objective functions, we analyze the optimality conditions and propose a special algorithm. Finally, we give some numerical examples to illustrate the algorithms proposed in this paper.
The remainder of the paper is organized as follows. In the next section we give the mathematical model of the LBMP problem and construct the penalized problem. In Section 3, we analyze the properties of the penalized problem and, via the exact penalty method, establish an existence theorem for Pareto optimal solutions. In Section 4, we propose the algorithms and give the numerical results. Finally, we conclude the paper with some remarks.
Linear bilevel multiobjective programming problem and penalized problem
In this paper, we consider the following linear bilevel multiobjective programming problem (LBMP), which can be written as
where \(x\in R^{n}\), \(y\in R^{m}\), \(b\in R^{p}\), \(A_{1}\in R^{p\times n}\), \(A_{2}\in R^{p\times m}\), \(C\in R^{q\times n}\), \(C'\in R^{q\times m}\), \(D\in R^{l\times m}\).
Let \(S=\{(x,y)\mid A_{1}x+A_{2}y\leq b,x\geq0, y\geq0\}\) denote the constraint region of problem (1), and \(\Pi_{y}=\{x\in R^{n}_{+}\mid \exists y\in R^{m}_{+},A_{1}x+A_{2}y\leq b\}\) be the projection of S onto the upper level’s decision space. To define problem (1), we make the following assumption.

(A)
The constraint region S is nonempty and compact.
For fixed \(x\in R_{+}^{n}\), let \(S(x)\) denote the efficiency set of solutions to the lower level problem
Definition 2.1
A point \((x,y)\) is feasible for problem (1) if \((x,y)\in S\) and \(y\in S(x)\); the point \((x^{*},y^{*})\) is a Pareto optimal solution to problem (1) provided that it is a feasible point and there exists no other feasible point \((x,y)\) such that \(Cx^{*}+C'y^{*}\leq Cx+C'y\) and \(Cx^{*}+C'y^{*}\neq Cx+C'y\).
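The componentwise order in Definition 2.1 can be made concrete with a short numerical check. The following sketch is illustrative only (the function name and data are not from the paper):

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a dominates f_b under maximization:
    f_a >= f_b componentwise with at least one strict inequality."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a >= f_b) and np.any(f_a > f_b))

print(dominates([2, 3], [1, 3]))  # True: dominates in the first component
print(dominates([2, 1], [1, 3]))  # False: the two vectors are incomparable
```

A point is Pareto optimal exactly when no feasible point dominates it in this sense.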
Remark 2.1
Note that in problem (1) the objective function of the upper level is maximized with regard to x and y, that is, in this work we adopt the optimistic approach to consider the bilevel multiobjective programming problem [8].
One way to transform the lower level problem \((P_{x})\) into a scalar optimization problem is the socalled scalarization technique, which consists of solving the following further parameterized problem:
where the new parameter vector λ is a nonnegative point of the unit simplex, i.e., λ belongs to \(\Omega=\{\lambda\mid \lambda\in R_{+}^{l},\sum _{i=1}^{l}\lambda_{i}=1\}\). Since it is a difficult task to choose the best response \(y(x)\) on the Pareto front for a given upper level variable x, our approach in this paper consists in considering the set Ω as a new constraint set for the upper level problem. Note that this approach is also used in [22] for the semivectorial bilevel optimization problem.
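To illustrate the scalarization in problem (2), the sketch below (hypothetical data, not from the paper's examples; scipy is assumed available) solves the weighted lower-level LP for two weight vectors on the unit simplex and recovers two different efficient points:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical lower-level data: two objectives (rows of D), y in R^2,
# feasible set {y >= 0 : A2 y <= b - A1 x} for a fixed upper-level x.
A2 = np.array([[1.0, 1.0]])
r = np.array([2.0])               # right-hand side b - A1 x
D = np.array([[1.0, 0.0],         # first lower-level objective:  y1
              [0.0, 1.0]])        # second lower-level objective: y2

def solve_scalarized(lam):
    """Solve max lam^T D y s.t. A2 y <= r, y >= 0 (linprog minimizes)."""
    res = linprog(-(lam @ D), A_ub=A2, b_ub=r, bounds=[(0, None)] * 2)
    return res.x

print(solve_scalarized(np.array([1.0, 0.0])))  # efficient point (2, 0)
print(solve_scalarized(np.array([0.0, 1.0])))  # efficient point (0, 2)
```

Sweeping λ over the simplex traces candidate efficient points of the lower level; in the bilevel setting, λ itself becomes an upper-level decision variable.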
For any given parameter couple \((x,\lambda)\in\Pi_{y}\times\Omega\), let \(\gamma(x,\lambda)\) denote the solution set of problem (2). The following relationship (see, e.g., Theorem 3.1 in [22]) relates the solution set of \((P_{x})\) and that of problem (2).
Proposition 2.1
Let assumption (A) be satisfied. Then we have
Based on Proposition 2.1, problem (1) can be replaced by the following bilevel multiobjective programming problem:
The link between problems (1) and (4) is formalized in the following proposition.
Proposition 2.2
Let assumption (A) be satisfied. Then the following assertions hold.

(i)
Let \((\bar{x},\bar{y})\) be a Pareto optimal solution of problem (1). Then for all \(\bar{\lambda}\in\Omega\) with \(\bar{y}\in\gamma(\bar {x},\bar{\lambda})\), the point \((\bar{x},\bar{y},\bar{\lambda})\) is a Pareto optimal solution of problem (4).

(ii)
Let \((\bar{x},\bar{y},\bar{\lambda})\) be a Pareto optimal solution of problem (4). Then \((\bar{x},\bar{y})\) is a Pareto optimal solution of problem (1).
Proof
(i) Let \((\bar{x},\bar{y})\) be a Pareto optimal solution of problem (1), and suppose, to the contrary, that there exists \(\lambda^{o}\in\Omega\) with \(\bar{y}\in\gamma(\bar{x},\lambda^{o})\) such that \((\bar{x},\bar {y},\lambda^{o})\) is not a Pareto optimal solution of problem (4). Then there exists a feasible point \((x',y',\lambda')\) of problem (4) such that
Following Proposition 2.1, we have \(y'\in S(x')\), i.e., \((x',y')\) is also a feasible point of problem (1). That means there exists a feasible point \((x',y')\) of problem (1) such that (5) is satisfied. This contradicts the assumption that \((\bar{x},\bar{y})\) is a Pareto optimal solution of problem (1).
(ii) Let \((\bar{x},\bar{y},\bar{\lambda})\) be a Pareto optimal solution of problem (4); then there does not exist a feasible point \((x',y',\lambda')\) of problem (4) such that
Suppose, to the contrary, that \((\bar{x},\bar{y})\) is not a Pareto optimal solution of problem (1). Then there exists a feasible point \((x'',y'')\) such that
Following Proposition 2.1, we see that for \((x'',y'')\), there exists \(\lambda''\in\Omega\) such that \((x'',y'',\lambda'')\) is feasible to problem (4). That means that there exists a feasible point \((x'',y'',\lambda'')\) of problem (4), such that (7) is satisfied, which contradicts (6). □
The dual of problem (2) is the following:
Here \(w\in R^{q}\) is the duality variable.
Let \(\pi(x,y,\lambda,w)=w^{T}(b-A_{1}x)-\lambda^{T}Dy\) denote the duality gap of the lower level problem. If \(\pi(x,y,\lambda,w)\) is equal to zero, then the optimal solution of the lower level problem is attained.
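The duality gap π can be checked numerically. The following sketch (hypothetical data, not the paper's examples) solves the scalarized lower-level problem and its dual with scipy.optimize.linprog and verifies that the gap vanishes at a primal-dual optimal pair:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical lower-level instance (illustrative data only).
A1 = np.array([[1.0], [0.0]]); A2 = np.array([[1.0], [1.0]])
b = np.array([4.0, 3.0])
D = np.array([[1.0], [2.0]])      # two lower-level objectives, y scalar
lam = np.array([0.5, 0.5])        # weights on the unit simplex
x = np.array([1.0])
r = b - A1 @ x                    # right-hand side b - A1 x

# Primal: max lam^T D y  s.t.  A2 y <= r, y >= 0   (linprog minimizes)
primal = linprog(-(lam @ D), A_ub=A2, b_ub=r, bounds=[(0, None)])
# Dual:   min w^T r      s.t.  A2^T w >= D^T lam, w >= 0
dual = linprog(r, A_ub=-A2.T, b_ub=-(lam @ D), bounds=[(0, None)] * 2)

# pi(x, y, lam, w) = w^T(b - A1 x) - lam^T D y; zero by strong LP duality
gap = dual.fun - (-primal.fun)
print(gap)
```

For a feasible but suboptimal pair, weak duality makes the gap strictly positive, which is exactly what the penalty term below charges for.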
We can employ a penalty function approach to the LBMP problem (1), and formulate the penalty problem \(P(K)\) as follows:
where \(e\in R^{q}\) has all its components equal to unity, and K is the penalty factor. Clearly, problem (9) will reach optimality when \(w^{T}(b-A_{1}x)-\lambda^{T}Dy\rightarrow0\).
Main results
We now establish the main theoretical result, i.e., the exactness of the penalty function approach, which means that we can obtain the Pareto optimal solutions of problem (1) by solving the penalized problem (9) for some finite positive constant K.
Before presenting some theoretical results, we introduce some useful notations first. Let \(Z=\{(x,y)\mid A_{1}x+A_{2}y\leq b,x\geq0,y\geq 0\}\), \(W=\{(\lambda,w)\mid \sum_{i=1}^{l}\lambda_{i}=1, w^{T}A_{2}\geq \lambda^{T}D, \lambda\geq0, w\geq0\}\), and we denote the extreme points of W and Z by \(W_{v}\) and \(Z_{v}\), respectively.
Theorem 3.1
For a given value of \((\lambda,w)\in W\) and fixed K, a Pareto optimal solution to the following programming problem:
is achievable at some \((x^{*},y^{*})\in Z_{v}\).
Proof
Note that for fixed \((\lambda,w)\) and K, problem (10) is a linear multiobjective programming problem over the polytope Z, so a Pareto optimal solution is attained at an extreme point of Z; hence Theorem 3.1 follows. □
Theorem 3.1 yields the following theorem.
Theorem 3.2
For fixed K, a Pareto optimal solution to problem (9) is achievable in \(Z_{v}\times W_{v}\) and \(Z_{v}\times W_{v}=(Z\times W)_{v}\).
Proof
Let \((x^{*},y^{*})\in Z_{v}\) be a Pareto optimal solution to problem (10). As \(F(x^{*},y^{*},\lambda,w,K)\) is an affine function of \((\lambda,w)\), and W is a polytope, the problem
will have some Pareto optimal solution \((\lambda^{*},w^{*})\in W_{v}\). This proves the first part; the second part follows from Theorem 2 in [20]. □
The above theorem is based on a fixed value of K. We now show that a finite value of K yields an exact Pareto optimal solution to the overall problem (9), at which the penalty term \(w^{T}(b-A_{1}x)-\lambda^{T}Dy\) becomes zero.
Theorem 3.3
There exists a finite value of K, say \(K^{*}\), for which the Pareto optimal solution \((x,y,\lambda,w)\) to the penalty function problem (9) satisfies \(w^{T}(b-A_{1}x)-\lambda^{T}Dy=0\).
Proof
Suppose \((x^{*},y^{*},\lambda^{*})\) is a Pareto optimal solution to problem (4), i.e., the linear bilevel multiobjective problem; then the optimality conditions of the lower level problem are satisfied. That means \((w^{*})^{T}(b-A_{1}x^{*})-(\lambda^{*})^{T}Dy^{*}=0\).
Let \((x,y,\lambda,w)\) be a Pareto optimal solution to problem (9), then there exists an index i, such that
where \(C_{i}\) and \(C_{i}'\) are the ith rows of C and \(C'\), respectively.
Thus,
where k is some constant. Thus, as \(K\rightarrow\infty\), \(w^{T}(b-A_{1}x)-\lambda^{T}Dy\rightarrow0\). However, since \(Z_{v}\times W_{v}\) is finite, \(w^{T}(b-A_{1}x)-\lambda^{T}Dy=0\) for some large finite value of K, say \(K^{*}\). □
We now show that, by increasing K monotonically, we can achieve the Pareto optimal solutions of the linear bilevel multiobjective programming problem (4).
Theorem 3.4
There exists some \(K^{*}\geq0\), such that for all \(K\geq K^{*}\), the Pareto optimal solution \((x,y,\lambda,w)\) to problem (9) is also a Pareto optimal solution to problem (4).
Proof
Following Theorem 3.3 and the essentials of the penalty function method for multiobjective programs (Theorem 2.1 in [23]), we know that there exists some \(K^{*}\geq0\) such that for all \(K\geq K^{*}\), the Pareto optimal solution \((x,y,\lambda,w)\) to problem (9) satisfies \(w^{T}(b-A_{1}x)-\lambda^{T}Dy=0\), and there does not exist a point \((\hat{x},\hat{y},\hat{\lambda},\hat{w})\) feasible to problem (9) satisfying
Suppose, to the contrary, that \((x,y,\lambda,w)\) (feasible to problem (4)) is not a Pareto optimal solution to problem (4). Then there exists a point \((\check{x},\check{y},\check{\lambda},\check{w})\) feasible to problem (4) (and hence also feasible to problem (9)) satisfying
which contradicts (11). Then \((x,y,\lambda)\) is also a Pareto optimal solution to problem (4). □
Algorithm and numerical results
Following the above results, we know that we can obtain the Pareto optimal solution of problem (1) by solving the penalized problem (9). Then we can propose the following exact penalty function algorithm for the linear bilevel multiobjective programming problem (1).
Original algorithm
Step 0. Choose an initial point \((x^{0},y^{0},\lambda^{0},w^{0})\in Z_{v}\times W_{v}\), a positive constant \(K>1\), and stopping tolerance \(\epsilon>0\), and set \(k:=0\).
Step 1. Find a Pareto optimal solution \((x^{k},y^{k},\lambda ^{k},w^{k})\) of problem (9).
Step 2. If \((w^{k})^{T}(b-A_{1}x^{k})-(\lambda^{k})^{T}Dy^{k}\leq \epsilon\), stop; a Pareto optimal solution is obtained. Otherwise go to Step 3.
Step 3. Set \((x^{k+1},y^{k+1},\lambda^{k+1},w^{k+1}):=(x^{k},y^{k},\lambda^{k},w^{k})\), \(k:=k+1\), \(K:=2K\), and go to Step 1.
The following theorem states the convergence of the above algorithm.
Theorem 4.1
Let assumption (A) be satisfied, then the last point in the sequence \((x^{k},y^{k},\lambda^{k})\), which is generated by the original algorithm, is a Pareto optimal solution to problem (4).
Proof
Following Theorem 3.3, we know that the penalty function is exact, which implies that the sequence \((x^{k},y^{k},\lambda^{k})\) generated by the above algorithm is finite. Let \((x^{*},y^{*},\lambda^{*})\) be the last point of this sequence. Following Theorem 3.4, \((x^{*},y^{*},\lambda^{*})\) is a Pareto optimal solution to problem (4). □
It is clear that we can obtain the Pareto optimal front of problem (1) by solving problem (9) with some appropriate approaches. In the following, we consider the situation in which the decision maker gives positive weights for the upper level objective functions, and we propose a special algorithm for this problem.
Let the decision maker give positive weights \(\mu=(\mu_{1},\ldots ,\mu_{q})^{T}\in R_{+}^{q}\) with \(\sum_{i=1}^{q}\mu_{i}=1\) for the upper level objective functions. Then a Pareto optimal solution corresponding to the given weights μ can be obtained by solving the following program:
Let \(Q(\lambda,w,K)=\max_{(x,y)\in Z}[\mu ^{T}(Cx+C'y)-K(w^{T}(b-A_{1}x)-\lambda^{T}Dy)]\). Then we have the following result.
Theorem 4.2
For \((\bar{\lambda},\bar{w}),(\tilde{\lambda},\tilde{w})\in W\), we have
Proof
For all \((\bar{\lambda},\bar{w})\in W\),
and
Subtract (15) from (16) and the result follows. □
Following Theorem 4.2, if \(\alpha(\tilde{\lambda},\tilde{w})=\min _{(\lambda,w)\in W}[(w-\tilde{w})^{T}(b-A_{1}x)-(\lambda-\tilde{\lambda })^{T}Dy]<0\), then \((\tilde{\lambda},\tilde{w})\notin\arg\max\{Q(\lambda,w,K):(\lambda,w)\in W\}\).
Based on the above analysis, we can propose an algorithm for solving the special problem (13).
Weights algorithm
Step 0. Give the weights \(\mu=(\mu_{1},\ldots,\mu_{q})^{T}\in R_{+}^{q}\) with \(\sum_{i=1}^{q}\mu_{i}=1\) for the upper level objective functions, and choose an initial point \((\lambda^{0},w^{0})\in W_{v}\), a positive constant \(K>1\), and stopping tolerance \(\epsilon>0\), and set \(k:=0\).
Step 1. Solve \(\max_{(x,y)\in Z}[\mu ^{T}(Cx+C'y)-K((w^{k})^{T}(b-A_{1}x)-(\lambda^{k})^{T}Dy)]\) to obtain an optimal solution \((x^{k},y^{k})\).
Step 2. Solve \(\alpha(\lambda^{k},w^{k})=\min_{(\lambda,w)\in W}[(w-w^{k})^{T}(b-A_{1}x^{k})-(\lambda-\lambda^{k})^{T}Dy^{k}]\) to obtain an optimal solution \((\lambda^{*},w^{*})\).
Step 3.

(1)
If \(\alpha(\lambda^{k},w^{k})<0\), then \((\lambda ^{k+1},w^{k+1})=(\lambda^{*},w^{*})\), \(k:=k+1\), go to Step 1.

(2)
If \(\alpha(\lambda^{k},w^{k})\geq0\) and \((w^{k})^{T}(b-A_{1}x^{k})-(\lambda^{k})^{T}Dy^{k}>\epsilon\), then set \(K:=2K\) and go to Step 1.

(3)
If \(\alpha(\lambda^{k},w^{k})\geq0\) and \((w^{k})^{T}(b-A_{1}x^{k})-(\lambda^{k})^{T}Dy^{k}\leq\epsilon\), then the optimal solution to problem (13) is \((x^{k},y^{k},\lambda^{k},w^{k})\).
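The three steps above can be sketched end to end. In the sketch below the instance data are hypothetical (not the paper's Example 1), and each step reduces to one call of scipy.optimize.linprog: Step 1 is a linear program in \((x,y)\) for fixed \((\lambda,w,K)\), and Step 2 is a linear program in \((\lambda,w)\) over W:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance (illustrative data only): x, y scalar,
# two upper- and two lower-level objectives, two joint constraints.
A1 = np.array([[1.0], [0.0]]); A2 = np.array([[1.0], [1.0]])
b = np.array([4.0, 3.0])
C  = np.array([[1.0], [1.0]]); Cp = np.array([[2.0], [1.0]])  # upper: Cx + C'y
D  = np.array([[1.0], [2.0]])                                 # lower objectives
mu = np.array([0.5, 0.5])                                     # upper weights
lam, w = np.array([1.0, 0.0]), np.array([1.0, 0.0])           # (lambda^0, w^0)
K, eps = 100.0, 1e-2

for _ in range(50):
    # Step 1: max_{(x,y) in Z} mu^T(Cx + C'y) - K(w^T(b - A1 x) - lam^T D y)
    c1 = -np.concatenate([mu @ C + K * (w @ A1), mu @ Cp + K * (lam @ D)])
    s1 = linprog(c1, A_ub=np.hstack([A1, A2]), b_ub=b, bounds=[(0, None)] * 2)
    x, y = s1.x[:1], s1.x[1:]
    r, d = b - A1 @ x, D @ y
    # Step 2: alpha = min_{(lam',w') in W} (w'-w)^T r - (lam'-lam)^T d,
    # where W = {sum lam' = 1, D^T lam' <= A2^T w', lam', w' >= 0}
    s2 = linprog(np.concatenate([-d, r]),
                 A_ub=np.hstack([D.T, -A2.T]), b_ub=[0.0],
                 A_eq=[[1.0, 1.0, 0.0, 0.0]], b_eq=[1.0],
                 bounds=[(0, None)] * 4)
    alpha = s2.fun - (r @ w - d @ lam)
    gap = r @ w - d @ lam
    # Step 3: update multipliers, double K, or stop.
    if alpha < -1e-9:
        lam, w = s2.x[:2], s2.x[2:]
    elif gap > eps:
        K *= 2.0
    else:
        break

print(x, y, gap)
```

On this instance the loop terminates with a zero duality gap; on other data, Step 3(1) updates the multipliers and Step 3(2) doubles K until the gap falls below ε.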
To illustrate the weights algorithm, we solve the following linear bilevel multiobjective programming problem using the above algorithm.
Example 1
[6]
For Example 1, following problems (4) and (9), we obtain the following penalized problem:
The solution proceeds as follows.
Loop 1
Step 0. Choose the weights \(\mu=(0.5,0.5)^{T}\) for the upper level objective functions, an initial point \((\lambda ^{0},w^{0})=(1/3,2/3,0,0)^{T}\), a positive constant \(K=100\), and stopping tolerance \(\epsilon=0.01\), and set \(k:=0\).
Step 1. Solve
and obtain the optimal solution \((x^{0},y^{0})=(0,\frac{4}{3})^{T}\).
Step 2. Solve
Obtain the optimal solution \((\lambda^{*},w^{*})=(1,0,0,0)^{T}\) and the optimal value \(\alpha(\lambda^{0},w^{0})=-\frac{8}{3}\).
Step 3. As \(\alpha(\lambda^{0},w^{0})=-\frac{8}{3}<0\), let \((\lambda ^{1},w^{1})=(1,0,0,0)^{T}\), set \(k:=1\), and go to Step 1.
Loop 2
Step 1. Solve
and obtain the optimal solution \((x^{1},y^{1})=(0,0)^{T}\).
Step 2. Solve
Obtain the optimal solution \((\lambda^{*},w^{*})=(1,0,0,0)^{T}\), and the optimal value \(\alpha(\lambda^{1},w^{1})=0\).
Step 3. \(\alpha(\lambda^{1},w^{1})=0\), and \((w^{1})^{T}(b-A_{1}x^{1})-(\lambda^{1})^{T}Dy^{1}=0<0.01\). Then the Pareto optimal solution of Example 1 for the weights \(\mu=(0.5,0.5)^{T}\) is \((x,y,\lambda,w)=(0,0,1,0,0,0)^{T}\).
It is noted that in [6] a Pareto optimal solution to Example 1 is \((x,y)=(0,0.5)^{T}\), with the upper level objective value \(F(x,y)=(1,2)^{T}\). In this paper, the Pareto optimal solution of Example 1 obtained by the weights algorithm is \((x,y)=(0,0)^{T}\), and the corresponding upper level objective value is \(F(x,y)=(0,0)^{T}\). Following the vector partial order, the solution \((x,y)=(0,0)\) obtained here is a Pareto optimal solution to Example 1. This shows that the weights algorithm is applicable to the linear bilevel multiobjective programming problem.
To further show the viability of the weights algorithm proposed, we consider the following two examples.
Example 2
[6]
Example 3
[24]
In the above examples, we choose the fixed weights of the upper level objectives as \(\mu=(\mu_{1},\mu_{2})^{T}=(0.5,0.5)^{T}\), and we obtain the Pareto optimal solutions, which are presented in Table 1.
In Table 2, we compare the upper level objective values obtained in this paper with those in the corresponding references. From this comparison, it is shown that the solutions obtained in this paper are Pareto optimal solutions to the above two examples. Hence the exact penalty function approach presented here for the linear bilevel multiobjective programming problem is useful and viable.
Conclusion
In this paper, we introduce a new exact penalty function method for solving the linear bilevel multiobjective programming problem. The method is based on appending the duality gap of the lower level problem to the upper level objectives with a penalty. The numerical results reported illustrate that the exact penalty function method introduced in this paper can be numerically efficient.
It is noted that, besides its theoretical properties, the original algorithm proposed in this paper has one distinct advantage: it only requires practicable algorithms for solving smooth multiobjective optimization problems; no other complex operations are necessary. In addition, for the LBMP problem with given weights for the upper level objective functions, the proposed weights algorithm only requires solving a series of linear programming problems to obtain the Pareto optimal solutions.
References
 1.
Dempe, S: Foundations of Bilevel Programming. Kluwer, Dordrecht (2002)
 2.
Bard, J: Practical Bilevel Optimization: Algorithm and Applications. Kluwer, Dordrecht (1998)
 3.
Vicente, L, Calamai, P: Bilevel and multilevel programming: a bibliography review. J. Glob. Optim. 5(3), 291-306 (1994)
 4.
Dempe, S: Annotated bibliography on bilevel programming and mathematical programs with equilibrium constraints. Optimization 52(3), 333-359 (2003)
 5.
Colson, B, Marcotte, P, Savard, G: An overview of bilevel optimization. Ann. Oper. Res. 153, 235-256 (2007)
 6.
Zhang, G, Lu, J, Dillon, T: Decentralized multiobjective bilevel decision making with fuzzy demands. Knowl.-Based Syst. 20(5), 495-507 (2007)
 7.
Ye, J: Necessary optimality conditions for multiobjective bilevel programs. Math. Oper. Res. 36(1), 165-184 (2011)
 8.
Eichfelder, G: Multiobjective bilevel optimization. Math. Program. 123, 419-449 (2010)
 9.
Shi, X, Xia, H: Interactive bilevel multiobjective decision making. J. Oper. Res. Soc. 48(9), 943-949 (1997)
 10.
Shi, X, Xia, H: Model and interactive algorithm of bilevel multiobjective decision making with multiple interconnected decision makers. J. Multi-Criteria Decis. Anal. 10, 27-34 (2004)
 11.
Abo-Sinna, M: A bilevel nonlinear multiobjective decision making under fuzziness. J. Oper. Res. Soc. Ind. 38(5), 484-495 (2001)
 12.
Osman, M, Abo-Sinna, M, et al.: A multilevel nonlinear multiobjective decision making under fuzziness. Appl. Math. Comput. 153(1), 239-253 (2004)
 13.
Teng, C, Li, L: A class of genetic algorithms on bilevel multiobjective decision making problem. J. Syst. Sci. Syst. Eng. 9(3), 290-296 (2000)
 14.
Deb, K, Sinha, A: Solving bilevel multiobjective optimization problems using evolutionary algorithms. Lect. Notes Comput. Sci. 5467, 110-124 (2009)
 15.
Bonnel, H, Morgan, J: Semivectorial bilevel optimization problem: penalty approach. J. Optim. Theory Appl. 131(3), 365-382 (2006)
 16.
Zheng, Y, Wan, Z: A solution method for semivectorial bilevel programming problem via penalty method. J. Appl. Math. Comput. 37, 207-219 (2011)
 17.
Nishizaki, I, Sakawa, M: Stackelberg solutions to multiobjective two-level linear programming problem. J. Optim. Theory Appl. 103(1), 161-182 (1999)
 18.
Ankhili, Z, Mansouri, A: An exact penalty on bilevel programs with linear vector optimization lower level. Eur. J. Oper. Res. 197, 36-41 (2009)
 19.
Calvete, HI, Gale, C: Linear bilevel programs with multiple objectives at the upper level. J. Comput. Appl. Math. 234, 950-959 (2010)
 20.
Anandalingam, G, White, DJ: A solution method for the linear static Stackelberg problem using penalty functions. IEEE Trans. Autom. Control 35(10), 1170-1173 (1990)
 21.
Lv, YB, Hu, T, Wang, G, Wan, Z: A penalty function method based on Kuhn-Tucker condition for solving linear bilevel programming. Appl. Math. Comput. 188, 808-813 (2007)
 22.
Dempe, S, Gadhi, N, Zemkoho, AB: New optimality conditions for the semivectorial bilevel optimization problem. J. Optim. Theory Appl. 157, 54-74 (2013)
 23.
White, DJ: Multiobjective programming and penalty function. J. Optim. Theory Appl. 43(4), 583-600 (1984)
 24.
Pieume, CO, Marcotte, P, Fotso, LP: Solving bilevel linear multiobjective programming problems. Amer. J. Oper. Res. 1, 214-219 (2011)
Acknowledgements
The authors thank the editor Rachel for the suggestions on maintaining the integrity of this paper. The work is supported by the Natural Science Foundation of China (Nos. 11201039, 71471140, 61273179).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
YL and ZW conceived and designed the study. YL wrote and edited the paper. ZW reviewed the manuscript. The two authors read and approved the manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Lv, Y., Wan, Z. Solving linear bilevel multiobjective programming problem via exact penalty function approach. J Inequal Appl 2015, 258 (2015). https://doi.org/10.1186/s13660-015-0780-7
Keywords
 linear bilevel multiobjective programming
 duality gap
 penalty function
 Pareto optimal solution