Optimality conditions for pessimistic semivectorial bilevel programming problems
Journal of Inequalities and Applications volume 2014, Article number: 41 (2014)
Abstract
In this paper, a class of pessimistic semivectorial bilevel programming problems is investigated. Using the scalarization method, we transform the pessimistic semivectorial bilevel programming problem into a scalar-objective optimization problem with inequality constraints. Furthermore, we derive a generalized minimax optimization problem by means of the maximization bilevel optimal value function, whose sensitivity analysis is carried out via the lower-level value function approach. Using the generalized differentiation calculus of Mordukhovich, first-order necessary optimality conditions are established in the smooth setting. As an application, we obtain the optimality conditions of bilevel programming problems with a multiobjective lower-level problem in the case where the lower-level multiobjective optimization problem is linear with respect to the lower-level variables.
MSC: 90C26, 90C30, 90C31, 90C46.
1 Introduction
Bilevel programming (also called two-level programming) problems provide a framework for decision processes involving two decision makers in a hierarchical structure. The leader at the upper level of the hierarchy and the follower at the lower level seek to optimize their individual objective functions and control their own sets of decision variables. The hierarchy means that the leader announces his variables first and the follower then reacts, bearing this selection in mind. The goal of the leader is to optimize his own objective function by incorporating, within the optimization scheme, the reaction of the follower to his course of action. The leader can influence, but cannot control, the decisions of the follower. In this paper, we consider a bilevel programming problem (BP), called a semivectorial bilevel programming problem by Bonnel and Morgan [1], where the upper level is a scalar optimization problem and the lower level is a vector optimization problem:
where $x\in {R}^{n}$ and $z\in {R}^{m}$ denote the upper-level and the lower-level decision variables, respectively, $G:{R}^{n}\to {R}^{q}$ denotes the upper-level constraint function, and $F:{R}^{n}\times {R}^{m}\to R$ is the upper-level objective function; ${\mathrm{\Psi}}_{\mathrm{wef}}(x)$ is the weakly efficient solution set of the multiobjective optimization problem parameterized by the upper-level decision variable x:
where $g:{R}^{n}\times {R}^{m}\to {R}^{p}$ is the lower-level constraint function and $f:{R}^{n}\times {R}^{m}\to {R}^{l}$ is the lower-level multiobjective function. The term ‘${R}_{+}^{l}min$’ in (1.3) indicates that vector values in the lower-level problem are understood in the sense of weak Pareto minima (see Section 2) with respect to the order induced by the positive orthant of ${R}^{l}$.
To ensure the validity of the results below, we make the following hypotheses throughout the paper.
Hypothesis 1 The set $\{x\in {R}^{n}\mid G(x)\le 0\}$ is nonempty and compact.
Hypothesis 2 For any x verifying $G(x)\le 0$, the set $\{z\in {R}^{m}\mid g(x,z)\le 0\}$ is nonempty and compact.
Generally speaking, the weakly efficient solution set ${\mathrm{\Psi}}_{\mathrm{wef}}(x)$ of the lower-level problem (1.3) and (1.4) is not a singleton, i.e., the set ${\mathrm{\Psi}}_{\mathrm{wef}}(x)$ in (1.2) contains more than one point. In this case, the notion of an optimal solution of the bilevel programming problem may be ambiguous; that is why the word ‘min’ is written in quotes in (1.1). Two ways to deal with this situation are the optimistic formulation and the pessimistic formulation given in [2].
If the upper-level decision maker (i.e., the leader) supposes that the lower-level decision maker (i.e., the follower) is willing to support him, that is, the follower will select a solution $z(x)\in {\mathrm{\Psi}}_{\mathrm{wef}}(x)$ that is best for the leader, then we get the following optimistic formulation:
For research papers on the optimistic formulation of the semivectorial bilevel programming problem, the reader is referred to [1, 3–9]. In [1], a penalty method was given to solve the problem in the case of weakly efficient solutions of the lower-level problem (1.2). Zheng and Wan [9] developed another penalty method, consisting of two penalty parameters, for the case where the multiobjective lower-level problem is linear. Ankhili and Mansouri [3] developed an exact penalty method for the case where the upper-level problem is concave and the lower-level problem is a linear multiobjective optimization problem. Eichfelder [7] considered the problem in the case where F is also vector-valued. In the latter paper, the induced set of the investigated problem is shown to be the set of minimal points (with respect to a cone) of another unperturbed multiobjective optimization problem; hence, the resulting problem is simply a multiobjective optimization problem over an efficient set. It is then solved using the scalarization method of Pascoletti and Serafini combined with an adaptive parameter control method based on sensitivity results for the problem. Recently, Calvete and Galé [5] considered the case where the upper-level objective function is quasiconcave and the lower-level problem is a linear multiobjective optimization problem. The problem was reformulated as an optimization problem over a nonconvex region given by a union of faces of the polyhedron defined by all constraints, and an extreme-point method was proposed to deal with it. Then, based on the ‘k th’ best method and a genetic algorithm, they developed an exact and a metaheuristic algorithm, respectively, and evaluated the performance of both. In [8], Nie defined the risk optimal decision, the conservative optimal decision, and the mean optimal decision of the semivectorial bilevel programming problem.
Weighting methods were employed to analyze the lower-level multiobjective optimization problem, and some properties of the problem were obtained. In [4], Bonnel derived necessary optimality conditions for the problem (1.5) in general Banach spaces, considering efficient and weakly efficient solutions of the lower-level multiobjective optimization problem (1.3). In the latter paper, the author inserted the weak or properly weak solution set-valued mapping of the lower-level problem into the upper-level objective function to derive a set-valued optimization problem; using the notion of contingent derivative, necessary optimality conditions, abstract in nature, were derived. In [6], Dempe et al. also considered the optimistic formulation of the semivectorial bilevel programming problem. Applying the scalarization approach to the lower-level multiobjective optimization problem, they transformed the problem into a scalar-objective optimization problem with inequality constraints by means of the optimal value reformulation, and completely detailed first-order KKT-type necessary optimality conditions were derived in the smooth and nonsmooth settings using the generalized differentiation calculus of Mordukhovich. It is worth mentioning that the method of [6] differs from that of [4].
If the upper-level decision maker is a conservative leader, he prepares for the worst and bounds the damage resulting from an undesirable selection of the follower. This leads to the following pessimistic formulation of (1.1):
To the best of our knowledge, there are very few results for the problem (1.6) apart from [8, 10].
Bonnel and Morgan [10] developed optimality conditions for the bilevel optimal control problem, which is a special case of the semivectorial bilevel programming problem; optimality conditions were presented for the two extreme cases, the optimistic case and the pessimistic case. In [8], Nie defined the conservative optimal decision for the problem (1.1) (i.e., (1.6)). Weighting methods were employed to analyze the lower-level multiobjective optimization problem when the lower-level objective functions are all continuously differentiable and strictly convex and the lower-level constraints are all continuously differentiable; a constrained minimax optimization problem was then derived. However, no detailed optimality conditions or concrete solution methods for the problem (1.6) were given in [8].
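To make the gap between the optimistic and the pessimistic formulations concrete, here is a small numerical sketch (our own toy instance, not taken from the paper): the lower-level biobjective $f(x,z)=(z,1-z)$ on $z\in [0,1]$ has ${\mathrm{\Psi}}_{\mathrm{wef}}(x)=[0,1]$ for every x, so the follower's choice is completely free, and the optimistic and pessimistic values of $F(x,z)={(z-x)}^{2}$ differ.

```python
import numpy as np

# Toy instance (illustrative only, not from the paper):
# lower level: f(x, z) = (z, 1 - z), z in [0, 1] -- every z is weakly
# efficient, so Psi_wef(x) = [0, 1] regardless of x.
# upper level: F(x, z) = (z - x)^2, x in [0, 1].
zs = np.linspace(0.0, 1.0, 201)
xs = np.linspace(0.0, 1.0, 201)

F = (zs[None, :] - xs[:, None]) ** 2           # F[i, j] = F(x_i, z_j)

# optimistic: the follower helps, min_x min_{z in Psi_wef(x)} F(x, z)
opt_val = F.min(axis=1).min()                  # follower picks z = x

# pessimistic: the follower is adversarial, min_x max_{z in Psi_wef(x)} F(x, z)
pess_val = F.max(axis=1).min()
pess_x = xs[F.max(axis=1).argmin()]

print(opt_val, pess_val, pess_x)               # -> 0.0 0.25 0.5
```

The optimistic value is 0, while the conservative leader hedges at $x=0.5$ and can only guarantee the value $0.25$.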
Hence, the main contributions of this paper are as follows. Using the scalarization method and the maximization bilevel optimal value function, the pessimistic problem (1.6) is transformed into a generalized minimax optimization problem, i.e., the problem (3.4). We then establish a link between the problems (1.6) and (3.4), namely Proposition 3.1, which shows that these two problems have the same local or global optimal solutions under some mild conditions. The result of Proposition 3.1 is formally similar to Proposition 3.1 in [6], but is different in nature. Based on Proposition 3.1, we transform the problem (3.4) into the problem (3.9) using the bilevel optimal value function formulation. Furthermore, we develop necessary optimality conditions (see Theorem 4.4) for the problem (3.4) using the generalized differentiation calculus of Mordukhovich; by Proposition 3.1, we obtain necessary optimality conditions (see Corollary 4.1) for the pessimistic problem (1.6). The results of this paper, together with those of [6], constitute first-order necessary optimality conditions for the semivectorial bilevel programming problem, which is important for the future development of the corresponding optimality theory.
The rest of the paper is organized as follows. In Section 2, we present the definitions of efficient solutions and Pareto minima, together with the relevant notions and properties from variational analysis. The transformation of the pessimistic semivectorial bilevel programming problem (PSBP) into a single-level generalized minimax optimization problem with constraints, by means of the optimal value function reformulation, is given in Section 3. In Section 4, we first present the estimation of the lower-level negative value function and the sensitivity analysis of the lower-level optimal solution maps; based on these, the sensitivity analysis for the maximization bilevel value function is presented. Finally, necessary optimality conditions are derived for the problem (1.6) in the case where all functions involved are strictly differentiable. The special case where the lower-level multiobjective optimization problem is linear in the lower-level variable is studied in Section 5.
2 Preliminaries
In this section, we mainly recall some basic definitions and results.
2.1 Efficient solution and Pareto minima
Definition 2.1 Let $C\subset {R}^{n}$ be a closed convex cone with nonempty interior. C is said to be a pointed convex cone if $C\cap (-C)=\{0\}$. We denote by ${\u2aaf}_{C}$ the partial order on ${R}^{n}$ induced by C.
Definition 2.2 Let $A\subseteq {R}^{n}$ be nonempty. A point ${z}^{\ast}\in A$ is said to be a Pareto (resp. weak Pareto) minimum of A w.r.t. C if
where ‘int’ denotes the topological interior of the set in question.
Consider the multiobjective optimization problem with respect to ${\u2aaf}_{C}$:
where f represents a vectorvalued function and X the nonempty feasible set. For a nonempty set $A\subset X$, the image of A by f is defined by $f(A):=\{f(x)\mid x\in A\}$.
Definition 2.3 The point ${x}^{\ast}\in X$ is said to be an efficient (resp. weakly efficient) optimal solution of problem (2.2) if $f({x}^{\ast})$ is a Pareto (resp. weak Pareto) minimum of $f(X)$.
Definition 2.4 The point ${x}^{\ast}\in X$ is said to be a local efficient (resp. local weakly efficient) optimal solution of problem (2.2) if there exists a neighborhood U of ${x}^{\ast}$ such that $f({x}^{\ast})$ is a Pareto (resp. weak Pareto) minimum of $f(X\cap U)$.
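The notions of Definitions 2.2–2.4 can be checked numerically on finite image sets; the following sketch (our own illustration, with $C={R}_{+}^{l}$) is a direct transcription of the definitions.

```python
import numpy as np

def is_weak_pareto_min(z_star, A):
    """z* in A is a weak Pareto minimum of A w.r.t. R^l_+ iff no a in A
    is strictly smaller than z* in every component (Definition 2.2)."""
    z = np.asarray(z_star, dtype=float)
    A = np.asarray(A, dtype=float)
    return not np.any(np.all(A < z, axis=1))

def is_pareto_min(z_star, A):
    """z* is a Pareto minimum iff no a in A satisfies a <= z* componentwise
    with a != z*, i.e. z* - a lies in R^l_+ \\ {0}."""
    z = np.asarray(z_star, dtype=float)
    A = np.asarray(A, dtype=float)
    dominated = np.all(A <= z, axis=1) & np.any(A < z, axis=1)
    return not np.any(dominated)

# (1, 1) dominates every other point, so it is the only Pareto minimum;
# (1, 3) and (3, 1) are still weak Pareto minima, since no point beats
# them strictly in *both* components.
A = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.0, 3.0), (1.0, 1.0)]
print(is_pareto_min((1.0, 3.0), A), is_weak_pareto_min((1.0, 3.0), A))  # -> False True
```

This also illustrates that the weak Pareto minima form a (possibly much) larger set than the Pareto minima, which is why the lower-level set ${\mathrm{\Psi}}_{\mathrm{wef}}(x)$ is typically not a singleton.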
Definition 2.5 A vector-valued function $f:{R}^{n}\to {R}^{m}$ is said to be convex with respect to a partial order ${\u2aaf}_{C}$ induced by a pointed, closed, and convex cone C if we have
2.2 Tools from variational analysis
Details of the material presented here can be found in [13, 14].
Definition 2.6 Given a point $\overline{x}$, $lim{sup}_{x\to \overline{x}}\mathrm{\Gamma}(x)$ is said to be the Kuratowski-Painlevé outer/upper limit of a set-valued mapping $\mathrm{\Gamma}:{R}^{n}\Rightarrow {R}^{m}$ at $\overline{x}$, if
Definition 2.7 For an extended real-valued function $\psi :{R}^{n}\to \overline{R}$, $\stackrel{\u02c6}{\partial}\psi (\overline{x})$ is said to be the Fréchet subdifferential of ψ at a point $\overline{x}$ of its domain, if
Definition 2.8 Given a point $\overline{x}$, $\partial \psi (\overline{x})$ is said to be the basic/Mordukhovich subdifferential of ψ at $\overline{x}$, if
If ψ is convex, $\partial \psi (\overline{x})$ reduces to the subdifferential in the sense of convex analysis:
$\partial \psi (\overline{x})$ is nonempty and compact when ψ is locally Lipschitz continuous; its convex hull is the Clarke subdifferential $\overline{\partial}\psi (\overline{x})$, i.e.
where ‘co’ denotes the convex hull of the set in question. Via this link between the basic and Clarke subdifferentials, we have the following convex hull property:
where ψ is Lipschitz continuous near $\overline{x}$.
Definition 2.9 ${\partial}_{x}\psi (\overline{x},\overline{y})$ is said to be the partial basic (resp. Clarke) subdifferential of ψ with respect to x, if we have
The partial basic (resp. Clarke) subdifferential with respect to y can be defined analogously as follows:
Definition 2.10 Given a point $\overline{x}\in \mathrm{\Omega}$, ${N}_{\mathrm{\Omega}}(\overline{x})$ is said to be the basic/Mordukhovich normal cone to a set $\mathrm{\Omega}\subset {R}^{n}$ at $\overline{x}$, if
where ${\stackrel{\u02c6}{N}}_{\mathrm{\Omega}}(\overline{x})$ represents the prenormal/Fréchet normal cone to a set Ω at $\overline{x}$ defined by
The set Ω will be said to be regular at $\overline{x}\in \mathrm{\Omega}$ if
For the lower semicontinuous function ψ with the epigraph epiψ, we can equivalently define the basic/Mordukhovich subdifferential (2.5) using the normal cone (2.11) by
The singular subdifferential of ψ at $\overline{x}\in dom\psi $ is defined by
If ψ is lower semicontinuous near $\overline{x}$, then ${\partial}^{\mathrm{\infty}}\psi (\overline{x})=\{0\}$ if and only if ψ is locally Lipschitz continuous near $\overline{x}$. Given a set-valued mapping $\mathrm{\Xi}:{R}^{n}\to {2}^{{R}^{m}}$ with its graph
recall that the coderivative of Ξ at $(\overline{x},\overline{y})\in gph\mathrm{\Xi}$ is defined by
via the normal cone (2.11) to the graph of Ξ. If Ξ is single-valued and locally Lipschitz continuous near $\overline{x}$, its coderivative can be computed analytically as
via the basic subdifferential (2.5) of the Lagrange scalarization $\u3008\upsilon ,\mathrm{\Xi}\u3009(x):=\u3008\upsilon ,\mathrm{\Xi}(x)\u3009$, where the component $\overline{y}$ ($=\mathrm{\Xi}(\overline{x})$) is omitted in the coderivative notation for single-valued mappings. In particular, the coderivative can be represented as
when Ξ is strictly differentiable at the point $\overline{x}$; here $\mathrm{\nabla}\mathrm{\Xi}(\overline{x})$ denotes its Jacobian matrix at $\overline{x}$ and ‘⊤’ stands for transposition.
Definition 2.11 A set-valued mapping Ξ is said to be inner semicompact at $\overline{x}$ with $\mathrm{\Xi}(\overline{x})\ne \mathrm{\varnothing}$, if for every sequence ${x}_{k}\to \overline{x}$ with $\mathrm{\Xi}({x}_{k})\ne \mathrm{\varnothing}$, there exists a sequence of ${y}_{k}\in \mathrm{\Xi}({x}_{k})$ which contains a convergent subsequence as $k\to \mathrm{\infty}$.
It follows that inner semicompactness holds whenever Ξ is uniformly bounded near $\overline{x}$, i.e., there exist a neighborhood U and a bounded set $\mathrm{\Omega}\subset {R}^{m}$ such that $\mathrm{\Xi}(x)\subset \mathrm{\Omega}$ for all $x\in U$.
Definition 2.12 A set-valued mapping Ξ is said to be inner semicontinuous at $(\overline{x},\overline{y})\in gph\mathrm{\Xi}$, if for every sequence ${x}_{k}\to \overline{x}$ there exists a sequence of ${y}_{k}\in \mathrm{\Xi}({x}_{k})$ that converges to $\overline{y}$ as $k\to \mathrm{\infty}$.
From Definitions 2.11 and 2.12, it is clear that Ξ is inner semicontinuous at $(\overline{x},\overline{y})$ if Ξ is inner semicompact at $\overline{x}$ with $\mathrm{\Xi}(\overline{x})=\{\overline{y}\}$. Generally speaking, the inner semicontinuity, which is much stronger than the inner semicompactness, is a necessary condition for the Lipschitz-like/Aubin property, which means that there exist neighborhoods U of $\overline{x}$ and V of $\overline{y}$ and a constant $\kappa >0$ such that
where d denotes the distance from a point to a set in ${R}^{m}$. When $V={R}^{m}$ in (2.16), this property reduces to the classical local Lipschitz continuity of Ξ near $\overline{x}$. A complete characterization of the Lipschitz-like/Aubin property (2.16), and hence a sufficient condition for the inner semicontinuity of Ξ at $(\overline{x},\overline{y})$, is given for closed-graph mappings by the following coderivative/Mordukhovich criterion (see [[13], Theorem 5.7] and [[14], Theorem 9.40]):
In addition, the infimum of all $\kappa >0$ for which (2.16) holds equals the norm $\parallel {D}^{\ast}\mathrm{\Xi}(\overline{x},\overline{y})\parallel $ of the positively homogeneous mapping ${D}^{\ast}\mathrm{\Xi}(\overline{x},\overline{y})$. Setting $x=\overline{x}$ in (2.16), the resulting weaker property is known as calmness of Ξ at $(\overline{x},\overline{y})$ [14]; it is used below to derive the sensitivity analysis of the lower-level optimal solution mapping of the problem (3.4). For $V={R}^{m}$, the calmness property corresponds to the upper Lipschitz property of Robinson [15].
3 Optimal value function reformulation for the pessimistic semivectorial bilevel programming problem
In this section, we discuss the reformulation of the problems (1.6), (1.3), and (1.4) as a single-level generalized minimax optimization problem with constraints. We first transform the problems (1.3) and (1.4), by means of a scalarization technique, into a usual one-level optimization problem, which consists of solving the following parametric problem:
where the parameter y is a nonnegative point of the unit sphere, i.e.,
For a given upper-level variable x, the weakly efficient solution set ${\mathrm{\Psi}}_{\mathrm{wef}}(x)$ of the lower-level problem (1.3) is not in general a singleton; hence it is difficult to choose the best point $z(x)$ in the set ${\mathrm{\Psi}}_{\mathrm{wef}}(x)$. Furthermore, we consider the set Y in (3.2) as a new constraint set for the upper-level problem [6]. For all $(x,y)\in X\times Y$ (where $X:=\{x\in {R}^{n}\mid G(x)\le 0\}$), we denote by $\mathrm{\Psi}(x,y)$ the solution set of the problem (3.1). When weakly efficient solutions are considered for the lower-level problem (1.3), the following relationship (see, e.g., [16]) links the solution set of this problem with that of (3.1).
Theorem 3.1 Assume that the functions $g(x,\cdot )$ and $f(x,\cdot )$ are ${R}_{+}^{p}$-convex and ${R}_{+}^{l}$-convex for all $x\in X$, respectively. Then
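The scalarization equivalence (3.3) behind Theorem 3.1 can be observed numerically on a convex toy instance (our own illustration; for simplicity the weights are taken on the unit simplex rather than the nonnegative unit sphere Y, which changes nothing since the argmins of $\u3008y,f\u3009$ are invariant under positive scaling of y):

```python
import numpy as np

# Toy lower level (illustrative): f(z) = (z, 1 - z) on a grid of [0, 1].
# Its weakly efficient set is all of [0, 1].  Theorem 3.1 says this set is
# the union, over weights y >= 0, y != 0, of the argmins of <y, f(z)>.
zs = np.linspace(0.0, 1.0, 101)
fz = np.stack([zs, 1.0 - zs], axis=1)          # f(z) at each grid point

union = set()
for t in np.linspace(0.0, 1.0, 101):           # weights y = (t, 1 - t)
    vals = fz @ np.array([t, 1.0 - t])         # scalarized objective <y, f>
    union.update(np.flatnonzero(np.isclose(vals, vals.min())))

# Every grid point of [0, 1] is recovered: y = (1, 0) gives z = 0,
# y = (0, 1) gives z = 1, and the tie at y = (1/2, 1/2) yields all z at once.
print(len(union) == len(zs))                   # -> True
```

The tie at equal weights is what produces the "flat" part of the weakly efficient set; this is exactly the multi-valuedness of $\mathrm{\Psi}(x,y)$ that the pessimistic formulation must cope with.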
Hence, the pessimistic semivectorial bilevel programming problem (1.6) can be replaced by the following classical pessimistic bilevel programming problem:
where the set Y in (3.2), constraining the new parameter y of the lower-level problem, acts as an additional upper-level constraint. Now, we define the maximization bilevel optimal value function by
then the problem (3.4) can be expressed as the following generalized minimax problem with constraints:
We can also define another maximization bilevel optimal value function by
then the problem (3.4) can be further expressed as a one-level optimization problem:
Remark 3.1 The variable y in (3.4) is regarded as an upper-level decision variable rather than a lower-level decision variable. That is why we use the representation ‘${min}_{x}{max}_{y}{max}_{z}$’ and not ‘${min}_{x}{max}_{y,z}$’. The hierarchical decision-making process of the pessimistic bilevel programming problem (3.4) is as follows: the leader announces his variables $(x,y)$ first, and then the follower, bearing this in mind, optimizes his own objective function and reacts with a lower-level decision variable z which is an optimal solution of the lower-level problem. In essence, we regard y in (3.4) as a weight vector that the leader attaches to the follower, rather than one chosen by the follower himself. For the problem (3.4), the existence and approximation of solutions, regularization properties, and so on were studied in [11, 12, 17–27]. In [28], Loridan and Morgan considered the pessimistic formulation (i.e., the weak Stackelberg problem). Based on a method of Molodtsov, they presented an approach to approximate such problems by sequences of optimistic formulations (i.e., strong Stackelberg problems); results related to the convergence of marginal functions and approximated solutions were given, and the case of data perturbations was also considered. In [29], Aboussoror and Mansouri considered a class of weak linear bilevel programming problems with nonunique lower-level solutions; they gave an existence theorem and a solution algorithm via an exact penalty method. In [30], Lv et al. developed a penalty function method for the weak price control problem. In [31], Tsoukalas et al. provided an introduction to bilevel programming problems that illustrates some of the applications and computational challenges, and outlines how bilevel programming problems can be solved.
In [32], together with Kleniati, they provided a formal justification for the conjectures given in [31]; the computational complexity of pessimistic bilevel programming problems was examined, and a solution scheme was developed and analyzed for pessimistic programming problems. In [33], Malyshev and Strekalovsky considered the pessimistic formulation of a quadratic-linear bilevel programming problem; they reduced the problem to a series of bilevel programming problems in the optimistic formulation and then, via the KKT optimality conditions of the lower-level problem, to nonconvex optimization problems, for which global and local search algorithms were developed. In [34], Dassanayaka studied the pessimistic formulation of bilevel programming problems in finite-dimensional spaces; using the tools of modern variational analysis and generalized differentiation developed by Mordukhovich, first-order necessary and sufficient optimality conditions were established. A genetic algorithm for the weak linear bilevel programming problem was developed by Xiao and Li in [35]. Very recently, the pessimistic formulation of the bilevel programming problem was considered by Zemkoho in [36] and by Dempe et al. in [37], in the nonsmooth and smooth cases, respectively; the necessary optimality conditions were derived via the bilevel optimal value function reformulation.
Before discussing the link between the problems (1.6) and (3.4), we first recall the notion of an optimal solution of the upper-level problem in the pessimistic formulation (see [[2], Definition 5.3]): a point $({x}^{\ast},{z}^{\ast})$ is said to be a local optimal solution of the problem (1.6) if ${x}^{\ast}\in X$, ${z}^{\ast}\in {\mathrm{\Psi}}_{\mathrm{wef}}({x}^{\ast})$ with
and there exists an open neighborhood ${U}_{\delta}({x}^{\ast})$, with
It is called a global pessimistic solution if $\delta =\mathrm{\infty}$ can be selected.
Now, we present a theorem on the existence of solutions to the problem (3.4) (see [[2], Theorem 5.3]).
Theorem 3.2 Assume that the set $\{(x,y,z)\mid (x,y)\in X\times Y,g(x,z)\le 0\}$ is nonempty and compact and that, for each $x\in X$, the Mangasarian-Fromowitz constraint qualification (MFCQ) holds. Suppose that the lower-level solution set mapping $\mathrm{\Psi}(x,y)$ is lower semicontinuous at all points $(x,y)\in X\times Y$. Then the problem (3.4) has an optimal solution.
Proof By the lower semicontinuity of the lower-level solution set mapping $\mathrm{\Psi}(x,y)$, the optimal value function ${\phi}_{p}(x,y)$ in (3.5) is lower semicontinuous. Hence, this optimal value function ${\phi}_{p}(x,y)$ attains its minimum on the compact set $X\times Y$, provided that this set is nonempty. □
The link between the problems (1.6) and (3.4) is given in the next result. For this purpose, note that a set-valued mapping $\mathrm{\Xi}:{R}^{a}\to {2}^{{R}^{b}}$ is closed-valued at a point $(u,v)\in {R}^{a}\times {R}^{b}$ if for any sequence $({u}^{k},{v}^{k})\in gph\mathrm{\Xi}$ with $({u}^{k},{v}^{k})\to (u,v)$, one has $v\in \mathrm{\Xi}(u)$. Ξ is said to be closed-valued if it is closed-valued at every point of ${R}^{a}\times {R}^{b}$.
Proposition 3.1 Consider the problems (1.6) and (1.3)-(1.4), where the lower-level constraint function $g(x,\cdot )$ is ${R}_{+}^{p}$-convex and $f(x,\cdot )$ is ${R}_{+}^{l}$-convex for all $x\in X$. Assume that Ψ is lower semicontinuous on $X\times Y$. Then the following assertions hold.

(i)
Let $({x}^{\ast},{z}^{\ast})$ be a local (resp. global) optimal solution of the problem (1.6). Then, for all ${y}^{\ast}\in Y$ with ${z}^{\ast}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$, the point $({x}^{\ast},{y}^{\ast},{z}^{\ast})$ is a local (resp. global) optimal solution of the problem (3.4).

(ii)
Let $({x}^{\ast},{y}^{\ast},{z}^{\ast})$ be a local (resp. global) optimal solution of the problem (3.4). Assume that the set-valued mapping Ψ is closed-valued. Then $({x}^{\ast},{z}^{\ast})$ is a local (resp. global) optimal solution of the problem (1.6).
Proof We provide the proofs of (i) and (ii) in the local cases. The global cases can be obtained analogously.

(i)
Let $({x}^{\ast},{z}^{\ast})$ be a local optimal solution of the problem (1.6). Then
$$\{\begin{array}{l}{x}^{\ast}=argmin\{{\phi}_{pp}(x)\mid x\in X\},\\ {z}^{\ast}=argmax\{F(x,z)\mid z\in {\mathrm{\Psi}}_{\mathrm{wef}}(x)\}.\end{array}$$
Suppose that there exists $\overline{y}\in Y$ with ${z}^{\ast}\in \mathrm{\Psi}({x}^{\ast},\overline{y})$ such that $({x}^{\ast},\overline{y},{z}^{\ast})$ is not a local optimal solution of the problem (3.4). Then there exists a sequence $({x}^{k},{y}^{k},{z}^{k})$ with ${x}^{k}\to {x}^{\ast}$, ${y}^{k}\to \overline{y}$, ${z}^{k}\to {z}^{\ast}$ and $({x}^{k},{y}^{k})\in X\times Y$, ${z}^{k}\in \mathrm{\Psi}({x}^{k},{y}^{k})$ such that $F({x}^{k},{z}^{k})$ is better than $F({x}^{\ast},{z}^{\ast})$. By the equality (3.3), we know that $[{y}^{k}\in Y,{z}^{k}\in \mathrm{\Psi}({x}^{k},{y}^{k})]\Rightarrow {z}^{k}\in {\mathrm{\Psi}}_{\mathrm{wef}}({x}^{k})$; moreover, ${x}^{\ast}\in X$ (since X is closed) and $[\overline{y}\in Y,{z}^{\ast}\in \mathrm{\Psi}({x}^{\ast},\overline{y})]\Rightarrow {z}^{\ast}\in {\mathrm{\Psi}}_{\mathrm{wef}}({x}^{\ast})$, that is, $({x}^{k},{y}^{k},{z}^{k})$ and $({x}^{\ast},\overline{y},{z}^{\ast})$ are feasible for the problem (3.4). To sum up, we can find a sequence $({x}^{k},{z}^{k})\to ({x}^{\ast},{z}^{\ast})$ with ${x}^{k}\in X$, ${z}^{k}\in {\mathrm{\Psi}}_{\mathrm{wef}}({x}^{k})$ such that $F({x}^{k},{z}^{k})$ is better than $F({x}^{\ast},{z}^{\ast})$, which contradicts the assumption that $({x}^{\ast},{z}^{\ast})$ is a local optimal solution of the problem (1.6).

(ii)
Assume that $({x}^{\ast},{y}^{\ast},{z}^{\ast})$ is a local optimal solution of the problem (3.4). Then we have
$$\{\begin{array}{l}{x}^{\ast}=argmin\{{\phi}_{pp}(x)\mid x\in X\},\\ {y}^{\ast}=argmax\{{\phi}_{p}(x,y)\mid y\in Y\},\\ {z}^{\ast}=argmax\{F(x,z)\mid z\in \mathrm{\Psi}(x,y)\}.\end{array}$$
Since Ψ is lower semicontinuous and closed-valued, for any sequence $({x}^{k},{y}^{k})$ with ${x}^{k}\to {x}^{\ast}$, ${y}^{k}\to {y}^{\ast}$ and ${z}^{\ast}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$, there exists ${z}^{k}\in \mathrm{\Psi}({x}^{k},{y}^{k})$ for all k such that ${z}^{k}\to {z}^{\ast}$. By the equality (3.3), we have $[{y}^{\ast}\in Y,{z}^{\ast}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})]\Rightarrow {z}^{\ast}\in {\mathrm{\Psi}}_{\mathrm{wef}}({x}^{\ast})$. Considering that ${x}^{\ast}\in X$ and X is closed, we have
Therefore $({x}^{\ast},{z}^{\ast})$ is a local optimal solution of the problem (1.6). This completes the proof. □
Next, we give the optimal value function reformulation for the pessimistic bilevel programming problem (3.4) as follows:
Based on this result, we will derive the necessary optimality conditions of the pessimistic semivectorial bilevel programming problem (1.6) via those of the auxiliary problem (3.4). Obviously, if we define the minimization optimal value function as
then the maximization optimal value function ${\phi}_{p}(x,y)$ (3.5) coincides with the negative of ${\phi}_{p}^{o}(x,y)$, i.e., for all $(x,y)\in X\times Y$, we have
Analogously, we can define another minimization optimal value function as
then, for all $x\in X$, we also have
By (3.11) and (3.13), we can analyze the maximization bilevel optimal value functions ${\phi}_{p}(x,y)$ and ${\phi}_{pp}(x)$ via the minimization bilevel optimal value functions ${\phi}_{p}^{o}(x,y)$ in (3.10) and ${\phi}_{pp}^{o}(x)$ in (3.12), respectively. In order to analyze the bilevel value functions ${\phi}_{p}^{o}(x,y)$ and ${\phi}_{pp}^{o}(x)$, we consider a general ‘abstract’ framework of the marginal function:
where $\psi :{R}^{n}\times {R}^{m}\to \overline{R}$ and $\mathrm{\Xi}:{R}^{n}\to {2}^{{R}^{m}}$. Denote the argminimum mapping in (3.14) by ${\mathrm{\Xi}}_{o}(x)=argmin\{\psi (x,y)\mid y\in \mathrm{\Xi}(x)\}=\{y\in \mathrm{\Xi}(x)\mid \psi (x,y)\le \mu (x)\}$. We summarize in the next theorem some known results on general value functions needed in the paper (see [[13], Corollary 1.109] and [[38], Theorem 5.2]).
Theorem 3.3 Let the value function μ be given in (3.14), where the graph of Ξ is locally closed around $(\overline{x},\overline{y})\in gph\mathrm{\Xi}$ and ψ is strictly differentiable at this point. The following assertions hold:

(i)
Let ${\mathrm{\Xi}}_{o}$ be inner semicompact at $\overline{x}$. Then μ is lower semicontinuous at $\overline{x}$ and the upper bound for its basic subdifferential is given as follows:
$$\partial \mu (\overline{x})\subset \bigcup _{\overline{y}\in {\mathrm{\Xi}}_{o}(\overline{x})}\{{\mathrm{\nabla}}_{x}\psi (\overline{x},\overline{y})+{D}^{\ast}\mathrm{\Xi}(\overline{x},\overline{y})({\mathrm{\nabla}}_{y}\psi (\overline{x},\overline{y}))\}.$$If in addition Ξ is Lipschitzlike around $(\overline{x},\overline{y})$ for all vectors $\overline{y}\in {\mathrm{\Xi}}_{o}(\overline{x})$, then we also have the Lipschitz continuity of μ around $\overline{x}$.

(ii)
Let ${\mathrm{\Xi}}_{o}$ be inner semicontinuous at $(\overline{x},\overline{y})$. Then μ is lower semicontinuous at $\overline{x}$ and the upper bound for its basic subdifferential is given as follows:
$$\partial \mu (\overline{x})\subset {\mathrm{\nabla}}_{x}\psi (\overline{x},\overline{y})+{D}^{\ast}\mathrm{\Xi}(\overline{x},\overline{y})({\mathrm{\nabla}}_{y}\psi (\overline{x},\overline{y})).$$If in addition Ξ is Lipschitzlike around $(\overline{x},\overline{y})$, then μ is Lipschitz continuous around $\overline{x}$.
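In the smooth, fully unconstrained special case, Theorem 3.3(ii) reduces to the classical envelope (Danskin-type) formula: with $\mathrm{\Xi}(x)\equiv {R}^{m}$ the coderivative term vanishes and $\mathrm{\nabla}\mu (\overline{x})={\mathrm{\nabla}}_{x}\psi (\overline{x},\overline{y})$ at the unique argmin $\overline{y}$. The following sketch (our own choice of ψ, not from the paper) verifies this by finite differences.

```python
# Smooth special case of Theorem 3.3(ii), illustrative only: with
# Xi(x) = R (unconstrained inner problem) the coderivative term vanishes
# and the subdifferential estimate collapses to the envelope formula
#   grad mu(x) = grad_x psi(x, y(x))   at the unique argmin y(x).
def psi(x, y):
    return (y - x) ** 2 + y ** 2

def mu(x):
    """Value function mu(x) = min_y psi(x, y); the argmin is y = x/2."""
    return psi(x, x / 2.0)

x_bar = 0.7
y_bar = x_bar / 2.0

# grad_x psi(x, y) = -2 (y - x); at the argmin this equals x_bar.
grad_formula = -2.0 * (y_bar - x_bar)

# Finite-difference check of grad mu(x_bar) (here mu(x) = x^2 / 2).
h = 1e-6
grad_fd = (mu(x_bar + h) - mu(x_bar - h)) / (2.0 * h)
print(abs(grad_formula - grad_fd) < 1e-6)      # -> True
```

The constraint mapping Ξ in the paper is of course nontrivial, which is precisely why the coderivative term ${D}^{\ast}\mathrm{\Xi}(\overline{x},\overline{y})({\mathrm{\nabla}}_{y}\psi (\overline{x},\overline{y}))$ must be estimated in terms of the problem data in the next section.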
By the equalities (3.11), (3.13), and (2.8), we have
and so
By Theorem 3.3, we can estimate the upper bound of the subdifferential of the bilevel optimal value function ${\phi}_{p}(x,y)$ (resp. ${\phi}_{pp}(x)$) via estimating $\partial {\phi}_{p}^{o}(x,y)$ (resp. $\partial {\phi}_{pp}^{o}(x)$). In the next section, based on specific structures of the set-valued mapping Ξ, our aim is to give detailed upper bounds for ${D}^{\ast}\mathrm{\Xi}(\overline{x},\overline{y})$ in terms of the problem data. Verifiable rules for Ξ to be Lipschitz-like will also be provided. Further, we present the sensitivity analysis of the maximization bilevel optimal value functions via $\partial {\phi}_{p}(x,y)$ and $\partial {\phi}_{pp}(x)$. Based on these results, we develop the necessary optimality conditions for the problems (3.4) and (1.6).
4 Main results
In this section, we study the necessary optimality conditions for the optimal value function reformulation (3.9) of the problem (3.4). Firstly, we recall the argmin/solution map of the lower-level problem (3.1), defined as
with φ denoting the optimal value function associated with the lower-level problem (3.1), i.e.,
Here, we employ the lower-level value function approach [39] for the sensitivity analysis of the bilevel value function ${\phi}_{p}^{o}(x,y)$ in (3.10). Hence, we have the lower-level optimal value function reformulation of ${\phi}_{p}^{o}(x,y)$:
Since the basic subdifferential ∂φ does not possess the plus-minus symmetry (i.e., $\partial (-\phi )\ne -\partial \phi $ in general), an appropriate estimate of $\partial (-\phi )$ is needed to proceed with this approach. By the well-known convex hull property (2.8), this estimate of $\partial (-\phi )$ can be obtained.
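A standard one-dimensional example (our own illustration, not taken from the paper) shows the failure of the plus-minus symmetry and why the convex-hull estimate (2.8) is needed:

```latex
% Take \phi(x) = -|x| on R. The basic (limiting) subdifferential gives
\partial\phi(0) = \{-1,\, 1\}, \qquad
\partial(-\phi)(0) = \partial|\cdot|(0) = [-1,\, 1],
% so \partial(-\phi)(0) \neq -\partial\phi(0); only the convexified bound
\partial(-\phi)(0) \subset -\operatorname{co}\,\partial\phi(0) = [-1,\, 1]
% holds, which is exactly the role played by (2.8).
```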
In order to study the sensitivity analysis of the negative value function in the lower-level problem (3.1), we first recall the lower-level and upper-level regularity conditions [40], which are defined, respectively, as
It is clear that these are the dual forms of the MFCQ for the lower-level constraints ${g}_{i}({x}^{\ast},z)\le 0$, $i=1,\dots ,p$ (for the fixed parameter $x={x}^{\ast}$) and the upper-level constraints ${G}_{j}(x)\le 0$, $j=1,\dots ,q$, respectively. A particular feature of the new constraint set Y (3.2), namely that the related Lagrange multipliers can be completely eliminated from the optimality conditions, is given in the next lemma (see [[6], Lemma 4.2]).
Lemma 4.1 The set of vectors $({x}^{\ast},{y}^{\ast},{z}^{\ast})\in {R}^{n}\times {R}^{l}\times {R}^{m}$, $\gamma ,{z}_{s}\in {R}^{l}$ and $\mu ,r,{\upsilon}_{s}\in R$ with $s=1,\dots ,n+l+1$, satisfies the system
if and only if the following inequality holds:
4.1 Sensitivity analysis of the lower-level negative value function
In this subsection, we shall study the sensitivity analysis of the negative value function in the lower-level problem (3.1).
Theorem 4.1 Let f and g be strictly differentiable. Then the following assertions hold for the negation −φ of the value function φ in (4.2).

(i)
If the solution map $\mathrm{\Psi}(x,y)$ in (4.1) is inner semicompact at $({x}^{\ast},{y}^{\ast})$ and every $({x}^{\ast},{y}^{\ast},z)\in gph\mathrm{\Psi}$ satisfies (4.4), then φ is Lipschitz continuous around $({x}^{\ast},{y}^{\ast})$, and the following inclusion holds:
$$\partial (-\phi )({x}^{\ast},{y}^{\ast})\subset \left\{\left[\begin{array}{c}-{\sum}_{s=1}^{n+l+1}{\eta}_{s}({\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{x}{f}_{k}({x}^{\ast},{z}_{s})+{\sum}_{i=1}^{p}{\beta}_{i}^{s}{\mathrm{\nabla}}_{x}{g}_{i}({x}^{\ast},{z}_{s}))\\ -{\sum}_{s=1}^{n+l+1}{\eta}_{s}f({x}^{\ast},{z}_{s})\end{array}\right]\,\middle|\,\begin{array}{l}{\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{z}{f}_{k}({x}^{\ast},{z}_{s})+{\sum}_{i=1}^{p}{\beta}_{i}^{s}{\mathrm{\nabla}}_{z}{g}_{i}({x}^{\ast},{z}_{s})=0,\\ {\beta}_{i}^{s}\ge 0,\ {\beta}_{i}^{s}{g}_{i}({x}^{\ast},{z}_{s})=0,\ {\eta}_{s}\ge 0,\ {\sum}_{s=1}^{n+l+1}{\eta}_{s}=1,\\ {z}_{s}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast}),\ \mathrm{\forall}s=1,\dots ,n+l+1,\ k=1,\dots ,l,\ i=1,\dots ,p\end{array}\right\}.$$(4.8)
(ii)
Assume that $({x}^{\ast},{y}^{\ast},{z}^{\ast})\in gph\mathrm{\Psi}$ with ${x}^{\ast}\in dom\phi $ satisfying (4.4) and that either Ψ is inner semicontinuous at this point or f and g are convex. Then φ is Lipschitz continuous around $({x}^{\ast},{y}^{\ast})$, and the following inclusion holds:
$$\partial (-\phi )({x}^{\ast},{y}^{\ast})\subset \left\{\left[\begin{array}{c}-{\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{x}{f}_{k}({x}^{\ast},{z}^{\ast})-{\sum}_{i=1}^{p}{\beta}_{i}{\mathrm{\nabla}}_{x}{g}_{i}({x}^{\ast},{z}^{\ast})\\ -f({x}^{\ast},{z}^{\ast})\end{array}\right]\,\middle|\,\begin{array}{l}{\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{z}{f}_{k}({x}^{\ast},{z}^{\ast})+{\sum}_{i=1}^{p}{\beta}_{i}{\mathrm{\nabla}}_{z}{g}_{i}({x}^{\ast},{z}^{\ast})=0,\\ {\beta}_{i}\ge 0,\ {\beta}_{i}{g}_{i}({x}^{\ast},{z}^{\ast})=0,\ k=1,\dots ,l,\ i=1,\dots ,p\end{array}\right\}.$$(4.9)
Proof The local Lipschitz continuity of φ follows from [[38], Theorem 5.2] under the fulfillment of (4.4) in both the inner semicontinuity and inner semicompactness cases. If the functions f and g are convex, then the value function φ is also convex; in this case the Lipschitz continuity follows from [[41], Theorem 6.3.2]. To prove the subdifferential inclusion of (i), recall that
by [[42], Corollary 4] under the assumptions of (i). The claimed estimate of $\partial (-\phi )$ then follows by combining (2.8) and the classical Carathéodory theorem.
When Ψ is inner semicontinuous at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, we have by [[43], Corollary 5.3] that
This implies the subdifferential inclusion of (ii) by (2.7) and (2.8). If both f and g are convex, inclusion (4.11) holds without the inner semicontinuity assumption (see [[40], Theorem 4.2] and [[44], Corollary 4]). This completes the proof. □
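Carathéodory's theorem, used above to reduce convex combinations to at most $n+l+1$ atoms, can be illustrated numerically. The following sketch (our own illustration in a generic dimension d, not code from the paper; the helper name `caratheodory_reduce` is hypothetical) reduces a convex representation of a point to at most d+1 atoms:

```python
import numpy as np

def caratheodory_reduce(P, w, tol=1e-12):
    """Given points P (k x d) and convex weights w representing x = w @ P,
    return convex weights with at most d+1 nonzeros representing the same x.
    Classical Caratheodory reduction; our own sketch, not from the paper."""
    P = np.asarray(P, float)
    w = np.asarray(w, float).copy()
    d = P.shape[1]
    while True:
        S = np.where(w > tol)[0]
        if len(S) <= d + 1:
            return w
        # Any nullspace vector v of the (d+1) x |S| affine system satisfies
        # sum_i v_i = 0 and sum_i v_i P_i = 0, so w - t*v keeps both x and
        # the total weight unchanged.
        A = np.vstack([P[S].T, np.ones(len(S))])
        v = np.linalg.svd(A)[2][-1]             # a nullspace vector (|S| > d+1)
        if not np.any(v > tol):
            v = -v
        t = np.min(w[S][v > tol] / v[v > tol])  # largest step keeping w >= 0
        w[S] = w[S] - t * v                     # zeroes at least one weight
        w[np.abs(w) <= tol] = 0.0

# usage: the centroid of 6 points in R^2 rewritten with at most 3 atoms
P = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]], float)
w0 = np.full(6, 1 / 6)
w = caratheodory_reduce(P, w0)
```

Each pass removes at least one atom from the support, which is exactly why the estimates above involve at most $n+l+1$ points $z_{s}$.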
Note that in the fully convex (even nonsmooth) case, the assumption (4.4) in Theorem 4.1 can be replaced by a much weaker qualification condition [44] requiring that the set $epi{f}^{\ast}+cone({\bigcup}_{i=1}^{p}epi{g}_{i}^{\ast})$ be closed in ${R}^{n}\times {R}^{m}\times R$, where ${f}^{\ast}$ denotes the conjugate of an extended real-valued function f.
4.2 Sensitivity analysis of the lower-level optimal solution maps
In this subsection, we present an upper estimate for the coderivative of the solution mapping Ψ given in (4.1) and establish its Lipschitz-like property. For this purpose, we first present the calmness property. By (2.15), to calculate the coderivative of Ψ we must compute the limiting normal cone to the graph of Ψ:
in terms of the initial data. Proceeding this way via the conventional results of the generalized differential calculus [13] requires the fulfillment of the basic qualification condition, which reads in this case
However, it is shown in [[45], Theorem 3.1] that condition (4.13) fails in common situations, in particular when φ is locally Lipschitz around the point in question. A weaker assumption which helps to circumvent this difficulty is given as follows:
The condition (4.14) is automatically satisfied if f and g are linear. Furthermore, (4.14) holds at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$ for a locally Lipschitzian function φ if we pass to the boundary of the normal cone in (4.13), that is, if the following qualification condition holds:
with K being semismooth, in particular convex. The condition (4.15) seems to be especially effective for so-called simple convex bilevel programming problems; for more details, the reader is referred to [45, 46]. It is worth noting that in the latter case the condition (4.15) can be further weakened by passing to the boundary of the subdifferential of f [45]. It is also worth mentioning that, besides the condition (4.15), another sufficient condition for the validity of the calmness property (4.14) is provided by the notion of uniform weak sharp minima. More details can be found in [34, 43, 47].
For estimating the coderivative of Ψ, we present an additional qualification condition:
where ${\mathrm{\Lambda}}_{z}({x}^{\ast},{y}^{\ast},{z}^{\ast},{z}^{\ast \ast})$ is a particular set of multipliers, i.e.,
By [[39], Proposition 5.6], if the functions f and g are fully convex and continuously differentiable, the lower-level constraint function g does not contain the upper-level decision variable, and the lower-level value function φ is finite, then the condition (4.4) is sufficient for (4.17) to hold at the point $({x}^{\ast},{y}^{\ast},{z}^{\ast})$. The following lower-level Lagrange multiplier set plays an important role in the sequel:
Now, we present the coderivative estimate and the Lipschitz-like property of the lower-level solution map.
Theorem 4.2 (i) Let the conditions (4.4) and (4.14) hold at every $({x}^{\ast},{y}^{\ast},z)\in gph\mathrm{\Psi}$, and let the solution map Ψ (4.1) be inner semicompact at $({x}^{\ast},{y}^{\ast})$. Then, for all $z\in {R}^{m}$, (4.7) and the inclusion (4.19) hold:
with ${\sum}_{s=1}^{n+l+1}{\eta}_{s}=1$ and ${\eta}_{s}\ge 0$ for $s=1,2,\dots ,n+l+1$. If in addition the condition (4.16) holds at $({x}^{\ast},{y}^{\ast},{z}_{s})$, then Ψ is Lipschitz-like around this point.
(ii) Let the solution map Ψ (4.1) be inner semicontinuous at $({x}^{\ast},{y}^{\ast},{z}^{\ast})\in gph\mathrm{\Psi}$, and let the qualification conditions (4.4) and (4.14) hold at this point. Then, for all ${z}^{\ast \ast}\in {R}^{m}$, we have
If in addition the condition (4.16) holds at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, then Ψ is Lipschitz-like around this point.
Proof We first prove (i). It follows from Theorem 4.1(i) that the lower-level value function φ is Lipschitz continuous around $({x}^{\ast},{y}^{\ast})$ under the condition (4.4) and the inner semicompactness assumption. If we add the calmness property (4.14), then we have
by [[48], Theorem 4.1], taking into account that the constraint $f(x,y,z)-\phi (x,y)\le 0$ is active at the point $({x}^{\ast},{y}^{\ast},{z}^{\ast})$. By (4.12), we have
which holds under the validity of the condition (4.4) at the point $({x}^{\ast},{y}^{\ast},{z}_{t})$. Combining this with the definition of the coderivative (2.15), we derive the coderivative estimate (4.19). Further, by (4.19) and the coderivative criterion (2.17) for the Lipschitz-like property, the criterion holds provided that
Let us prove (ii). According to Theorem 4.1(ii), the lower-level value function φ is Lipschitz continuous around $({x}^{\ast},{y}^{\ast})$ under the condition (4.4) and the inner semicontinuity assumption. If we add the calmness property (4.14), then we have
by [[48], Theorem 4.1], taking into account that the constraint $f(x,y,z)-\phi (x,y)\le 0$ is active at the point $({x}^{\ast},{y}^{\ast},{z}^{\ast})$. By (4.12), the equality (4.21) holds. Combining this with the definition of the coderivative (2.15), we derive the coderivative estimate (4.20). Further, by (4.20) and the coderivative criterion (2.17) for the Lipschitz-like property, the criterion holds provided that (4.22) is satisfied. This completes the proof. □
Note that if the functions f and g are convex, the inner semicontinuity of Ψ can be dropped in Theorem 4.2(ii).
4.3 Sensitivity analysis of the maximization bilevel optimal value functions ${\phi}_{p}(x,y)$ and ${\phi}_{pp}(x)$ using the lower-level value function approach
For simplicity, we define the upper-level optimal solution set mapping as follows:
In the rest of this paper, we always assume that the set ${\mathrm{\Psi}}_{o}(x,y)$ is nonempty. The following results illustrate the local sensitivity analysis of the bilevel value function ${\phi}_{p}$ defined in (3.5).
Theorem 4.3 Considering (3.5) and (4.23), the following assertions hold:

(i)
Assume that ${\mathrm{\Psi}}_{o}$ is inner semicompact at $({x}^{\ast},{y}^{\ast})$, the condition (4.4) holds at $({x}^{\ast},{y}^{\ast},z)\in gph\mathrm{\Psi}$, and the condition (4.14) holds at $({x}^{\ast},{y}^{\ast},z)$ for all $z\in {\mathrm{\Psi}}_{o}({x}^{\ast},{y}^{\ast})$. Then the following inclusion holds:
$$\begin{array}{rcl}\partial {\phi}_{p}({x}^{\ast},{y}^{\ast})& \subset & \bigcup _{{z}_{t}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})}\bigcup _{{z}_{s}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})}\bigcup _{({\lambda}_{t},{\beta}_{i}^{t})\in {\mathrm{\Lambda}}_{z}({x}^{\ast},{y}^{\ast},{z}^{\ast},{z}_{t})}\bigcup _{{\beta}_{i}^{s}\in \mathrm{\Lambda}({x}^{\ast},{y}^{\ast},{z}^{\ast})}\\ & & \left\{({x}_{t}^{\ast \ast},{y}_{t}^{\ast \ast})\,\middle|\,\begin{array}{l}{x}_{t}^{\ast \ast}\in {\sum}_{t=1}^{n+l+1}{\nu}_{t}\{{\mathrm{\nabla}}_{x}F({x}^{\ast},{z}_{t})-{\lambda}_{t}({\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{x}{f}_{k}({x}^{\ast},{z}_{t})\\ \phantom{\rule{1em}{0ex}}-{\sum}_{s=1}^{n+l+1}{\eta}_{s}({\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{x}{f}_{k}({x}^{\ast},{z}_{s})\\ \phantom{\rule{1em}{0ex}}+{\sum}_{i=1}^{p}{\beta}_{i}^{s}{\mathrm{\nabla}}_{x}{g}_{i}({x}^{\ast},{z}_{s})))+{\sum}_{i=1}^{p}{\beta}_{i}^{t}{\mathrm{\nabla}}_{x}{g}_{i}({x}^{\ast},{z}_{t})\},\\ {y}_{t}^{\ast \ast}\in {\lambda}_{t}f({x}^{\ast},{z}_{t})-{\lambda}_{t}{\sum}_{s=1}^{n+l+1}{\eta}_{s}f({x}^{\ast},{z}_{s}),\\ {\sum}_{t=1}^{n+l+1}{\nu}_{t}=1,\ {\nu}_{t}\ge 0,\ t=1,\dots ,n+l+1,\\ {\sum}_{s=1}^{n+l+1}{\eta}_{s}=1,\ {\eta}_{s}\ge 0,\ s=1,\dots ,n+l+1\end{array}\right\}.\end{array}$$(4.24)
If in addition (4.16) is satisfied at $({x}^{\ast},{y}^{\ast},z)$ for all $z\in {\mathrm{\Psi}}_{o}({x}^{\ast},{y}^{\ast})$, then ${\phi}_{p}$ is Lipschitz continuous around $({x}^{\ast},{y}^{\ast})$.

(ii)
Assume that ${\mathrm{\Psi}}_{o}$ is inner semicontinuous at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$ and that the conditions (4.4) and (4.14) hold at this point. Furthermore, assume that the set $co{N}_{gph\mathrm{\Psi}}({x}^{\ast},{y}^{\ast},{z}^{\ast})$ is closed. Then the following inclusion holds:
$$\begin{array}{c}\partial {\phi}_{p}({x}^{\ast},{y}^{\ast})\hfill \\ \phantom{\rule{1em}{0ex}}\subset \bigcup _{({\lambda}_{t},{\beta}_{i}^{t})\in {\mathrm{\Lambda}}_{z}({x}^{\ast},{y}^{\ast},{z}^{\ast},{z}^{\ast \ast})}\bigcup _{{\gamma}_{i}^{t}\in \mathrm{\Lambda}({x}^{\ast},{y}^{\ast},{z}^{\ast})}\{{\mathrm{\nabla}}_{x}F({x}^{\ast},{z}^{\ast})-\sum _{t=1}^{n+l+1}{\nu}_{t}\{\sum _{i=1}^{p}({\beta}_{i}^{t}-{\lambda}_{t}{\gamma}_{i}^{t}){\mathrm{\nabla}}_{x}{g}_{i}({x}^{\ast},{z}^{\ast})\}\,\Big|\hfill \\ \phantom{\rule{2em}{0ex}}\sum _{t=1}^{n+l+1}{\nu}_{t}=1,\ {\nu}_{t}\ge 0,\ t=1,\dots ,n+l+1\}.\hfill \end{array}$$(4.25)
If in addition (4.16) is satisfied at point $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, then ${\phi}_{p}$ is Lipschitz continuous around $({x}^{\ast},{y}^{\ast})$.
Proof To justify (i), observe by Theorem 3.3(i) that
under the inner semicompactness assumption on ${\mathrm{\Psi}}_{o}$. Since ${\mathrm{\Psi}}_{o}(x,y)\subset \mathrm{\Psi}(x,y)$ for all $(x,y)\in X\times Y$, the lower-level optimal solution map Ψ in (4.1) is also inner semicompact at $({x}^{\ast},{y}^{\ast},{z}_{t})\in gph{\mathrm{\Psi}}_{o}$. Hence, by the subdifferential of the lower-level negated value function −φ in Theorem 4.1(i) and the coderivative of Ψ in Theorem 4.2(i), combined with (3.15) and Carathéodory's theorem, we derive the upper estimate of $\partial {\phi}_{p}({x}^{\ast},{y}^{\ast})$.
To prove the local Lipschitz continuity of ${\phi}_{p}$ around $({x}^{\ast},{y}^{\ast})$ in (i) under the condition (4.16), note that the latter condition implies the Lipschitz-like property of Ψ around $({x}^{\ast},{y}^{\ast},{z}_{t})\in gph{\mathrm{\Psi}}_{o}$. Thus the desired result follows from Theorem 3.3(i).
For justifying (ii), since the function F is strictly differentiable and ${\mathrm{\Psi}}_{o}$ is inner semicontinuous at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, we get from [[43], Theorem 5.1]
Combining this with (4.20) and Carathéodory's theorem, we can derive an estimate of the coderivative ${\overline{D}}^{\ast}\mathrm{\Psi}({x}^{\ast},{y}^{\ast},{z}^{\ast})({\mathrm{\nabla}}_{z}(F({x}^{\ast},{z}^{\ast})))$:
The latter inclusion implies that ${\overline{N}}_{gph\mathrm{\Psi}}({x}^{\ast},{y}^{\ast},{z}^{\ast})=co{N}_{gph\mathrm{\Psi}}({x}^{\ast},{y}^{\ast},{z}^{\ast})$ provided that the set $co{N}_{gph\mathrm{\Psi}}({x}^{\ast},{y}^{\ast},{z}^{\ast})$ is closed. Combining the above two results, by (2.7) and (3.15), we justify (4.25). To prove the local Lipschitz continuity of ${\phi}_{p}$ around $({x}^{\ast},{y}^{\ast})$ in (ii) under the condition (4.16), note that the latter implies the Lipschitz-like property of Ψ around $({x}^{\ast},{y}^{\ast},{z}^{\ast})$. This completes the proof. □
In the following, we present a local sensitivity analysis of the maximization bilevel optimal value function ${\phi}_{pp}(x)$ in (3.7). To this end, we need to estimate $\partial {\phi}_{pp}^{o}(x)$. Combining (3.13) with the sensitivity analysis for the value function of the nonparametric minimax problem (see [[49], Lemma 3.3]), we have
By (3.16), we derive that $\partial {\phi}_{pp}(x)\subset co\partial {\phi}_{p}^{o}(x,y)$ and that ${\phi}_{pp}(x)$ is Lipschitz continuous around ${x}^{\ast}$ under the corresponding conditions of Theorem 4.3.
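The minimax sensitivity result invoked from [[49], Lemma 3.3] is of Danskin type: the one-sided derivative of a finite max-function is governed by the gradients of the active pieces. The following toy one-dimensional check is our own illustration (the functions psi_1, psi_2 are our own choices, not from the paper):

```python
import numpy as np

# Danskin-type pattern: for phi(x) = max_t psi_t(x), the one-sided
# directional derivative at a kink equals the maximum of the slopes of
# the active pieces.
def phi(x):
    return max(x**2, 1.0 - x)          # psi_1 = x^2, psi_2 = 1 - x

xs = (np.sqrt(5.0) - 1.0) / 2.0        # kink: xs^2 = 1 - xs, both active
h = 1e-6
fwd = (phi(xs + h) - phi(xs)) / h      # numerical one-sided derivative
slopes = [2.0 * xs, -1.0]              # gradients of the two active pieces
```

The forward difference agrees with the largest active slope, which is the finite-dimensional shadow of the subdifferential estimate used for $\partial {\phi}_{pp}^{o}(x)$.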
Remark 4.1 Observe that in the subdifferential estimate of ${\phi}_{p}$ in Theorem 4.3(ii), the upper bound of the basic subdifferential does not contain the partial derivative of the lower-level objective function $f(x,y,z)$ with respect to the upper-level variable x. This is no longer true if the inner semicontinuity of ${\mathrm{\Psi}}_{o}$ is replaced by the inner semicompactness as in Theorem 4.3(i); the same phenomenon can be found in [6, 40]. We mention that the inner semicompactness of ${\mathrm{\Psi}}_{o}$ in Theorem 4.3(i) can be replaced by the more restrictive uniform boundedness assumption on ${\mathrm{\Psi}}_{o}$, or even on Ψ. Finally, by Theorem 3.3(ii) and Theorem 4.2(ii), we can derive a subdifferential estimate for ${\phi}_{p}$ which differs from (4.25): there the gradient of the upper-level objective function F enters the convex combination summation, whereas in (4.25) it does not. This will be shown in Theorem 4.4(ii) below.
4.4 Necessary optimality conditions using the bilevel optimal value function formulation
In this subsection, we establish the necessary optimality conditions for the optimal value function reformulation (3.9) of the problem (3.4) using the above sensitivity analysis results.
Theorem 4.4 Let $({x}^{\ast},{y}^{\ast})$ be an upper-level regular local optimal solution to the problem (3.4), where F and G are strictly differentiable at $({x}^{\ast},{z}^{\ast})$ and ${x}^{\ast}$, respectively, and let $X\times Y$ be closed. Then the following assertions hold:

(i)
Let ${\mathrm{\Psi}}_{o}$ be inner semicompact at $({x}^{\ast},{y}^{\ast})$, let every point $({x}^{\ast},z)$ with $z\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$ be lower-level regular, let f and g be strictly differentiable at $({x}^{\ast},z)$ for $z\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$, and let the conditions (4.14) and (4.16) be satisfied at every point $({x}^{\ast},{y}^{\ast},z)$ with $z\in {\mathrm{\Psi}}_{o}({x}^{\ast},{y}^{\ast})$. Then there exist ${\lambda}_{t}\ge 0$, α, ${\beta}^{t}$, ${\beta}^{s}$, ${\eta}_{s}$ and ${z}_{t},{z}_{s}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$ with $t,s=1,\dots ,n+l+1$ such that (4.7) and the following conditions hold:
$$\{\begin{array}{l}{\sum}_{t=1}^{n+l+1}{\nu}_{t}\{{\mathrm{\nabla}}_{x}F({x}^{\ast},{z}_{t})-{\lambda}_{t}({\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{x}{f}_{k}({x}^{\ast},{z}_{t})\\ \phantom{\rule{1em}{0ex}}-{\sum}_{s=1}^{n+l+1}{\eta}_{s}({\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{x}{f}_{k}({x}^{\ast},{z}_{s})+{\sum}_{i=1}^{p}{\beta}_{i}^{s}{\mathrm{\nabla}}_{x}{g}_{i}({x}^{\ast},{z}_{s})))\\ \phantom{\rule{1em}{0ex}}+{\sum}_{i=1}^{p}{\beta}_{i}^{t}{\mathrm{\nabla}}_{x}{g}_{i}({x}^{\ast},{z}_{t})\}+{\sum}_{j=1}^{q}{\alpha}_{j}\mathrm{\nabla}{G}_{j}({x}^{\ast})=0,\\ {\mathrm{\nabla}}_{z}F({x}^{\ast},{z}_{t})-{\lambda}_{t}{\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{z}{f}_{k}({x}^{\ast},{z}_{t})-{\sum}_{i=1}^{p}{\beta}_{i}^{t}{\mathrm{\nabla}}_{z}{g}_{i}({x}^{\ast},{z}_{t})=0,\\ {\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{z}{f}_{k}({x}^{\ast},{z}_{s})+{\sum}_{i=1}^{p}{\beta}_{i}^{s}{\mathrm{\nabla}}_{z}{g}_{i}({x}^{\ast},{z}_{s})=0,\\ \mathrm{\forall}j=1,\dots ,q\text{:}\phantom{\rule{1em}{0ex}}{\alpha}_{j}\ge 0,\phantom{\rule{1em}{0ex}}{\alpha}_{j}{G}_{j}({x}^{\ast})=0,\\ \mathrm{\forall}t=1,\dots ,n+l+1,\mathrm{\forall}i=1,\dots ,p\text{:}\phantom{\rule{1em}{0ex}}{\beta}_{i}^{t}\ge 0,\phantom{\rule{1em}{0ex}}{\beta}_{i}^{t}{g}_{i}({x}^{\ast},{z}_{t})=0,\\ \mathrm{\forall}s=1,\dots ,n+l+1,\mathrm{\forall}i=1,\dots ,p\text{:}\phantom{\rule{1em}{0ex}}{\beta}_{i}^{s}\ge 0,\phantom{\rule{1em}{0ex}}{\beta}_{i}^{s}{g}_{i}({x}^{\ast},{z}_{s})=0,\\ \mathrm{\forall}s=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{\eta}_{s}\ge 0,\phantom{\rule{1em}{0ex}}{\sum}_{s=1}^{n+l+1}{\eta}_{s}=1,\\ \mathrm{\forall}t=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{\nu}_{t}\ge 0,\phantom{\rule{1em}{0ex}}{\sum}_{t=1}^{n+l+1}{\nu}_{t}=1.\end{array}$$(4.26)The relationships (4.7) and (4.26) considered together are called the KM-stationarity conditions.

(ii)
Let ${\mathrm{\Psi}}_{o}$ be inner semicontinuous at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, let $({x}^{\ast},{z}^{\ast})$ be lower-level regular, let f and g be strictly differentiable at $({x}^{\ast},{z}^{\ast})$, let the conditions (4.14) and (4.16) be satisfied at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, and let the set $co{N}_{gph\mathrm{\Psi}}({x}^{\ast},{y}^{\ast},{z}^{\ast})$ be closed. Then there exist ${\lambda}_{t}\ge 0$, α, ${\beta}^{t}$, and ${\gamma}^{t}$ such that the following conditions hold:
$$\{\begin{array}{l}{\mathrm{\nabla}}_{x}F({x}^{\ast},{z}^{\ast})-{\sum}_{t=1}^{n+l+1}{\nu}_{t}{\sum}_{i=1}^{p}({\beta}_{i}^{t}-{\lambda}_{t}{\gamma}_{i}^{t}){\mathrm{\nabla}}_{x}{g}_{i}({x}^{\ast},{z}^{\ast})+{\sum}_{j=1}^{q}{\alpha}_{j}\mathrm{\nabla}{G}_{j}({x}^{\ast})=0,\\ {\mathrm{\nabla}}_{z}F({x}^{\ast},{z}^{\ast})-{\sum}_{t=1}^{n+l+1}{\nu}_{t}({\lambda}_{t}{\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{z}{f}_{k}({x}^{\ast},{z}^{\ast})+{\sum}_{i=1}^{p}{\beta}_{i}^{t}{\mathrm{\nabla}}_{z}{g}_{i}({x}^{\ast},{z}^{\ast}))=0,\\ {\sum}_{k=1}^{l}{y}_{k}^{\ast}{\mathrm{\nabla}}_{z}{f}_{k}({x}^{\ast},{z}^{\ast})+{\sum}_{i=1}^{p}{\gamma}_{i}{\mathrm{\nabla}}_{z}{g}_{i}({x}^{\ast},{z}^{\ast})=0,\\ \mathrm{\forall}j=1,\dots ,q\text{:}\phantom{\rule{1em}{0ex}}{\alpha}_{j}\ge 0,\phantom{\rule{1em}{0ex}}{\alpha}_{j}{G}_{j}({x}^{\ast})=0,\\ \mathrm{\forall}t=1,\dots ,n+l+1,\mathrm{\forall}i=1,\dots ,p\text{:}\phantom{\rule{1em}{0ex}}{\beta}_{i}^{t}\ge 0,\phantom{\rule{1em}{0ex}}{\beta}_{i}^{t}{g}_{i}({x}^{\ast},{z}^{\ast})=0,\\ \mathrm{\forall}t=1,\dots ,n+l+1,\mathrm{\forall}i=1,\dots ,p\text{:}\phantom{\rule{1em}{0ex}}{\gamma}_{i}^{t}\ge 0,\phantom{\rule{1em}{0ex}}{\gamma}_{i}^{t}{g}_{i}({x}^{\ast},{z}^{\ast})=0,\\ \mathrm{\forall}t=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{\nu}_{t}\ge 0,\phantom{\rule{1em}{0ex}}{\sum}_{t=1}^{n+l+1}{\nu}_{t}=1.\end{array}$$(4.27)
The relationships (4.27) are called the KN-stationarity conditions.
Proof Under the assumptions of (ii), the bilevel value function ${\phi}_{pp}$ in (3.7) is Lipschitz continuous near ${x}^{\ast}$. Since X is closed, one has from [[13], Proposition 5.3] that
By the inner semicontinuity of ${\mathrm{\Psi}}_{o}$ at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$ and the upper regularity (4.5), we have
Combining Theorem 4.3(ii) with (4.29), Theorem 4.4(ii) is easily derived. If ${\mathrm{\Psi}}_{o}$ is inner semicompact at $({x}^{\ast},{y}^{\ast})$, then the condition (4.7) holds at every point $({x}^{\ast},{y}^{\ast},z)$ with $z\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$, and (4.14) and (4.16) are satisfied at every point $({x}^{\ast},{y}^{\ast},z)$ with $z\in {\mathrm{\Psi}}_{o}({x}^{\ast},{y}^{\ast})$. Thus, by Theorem 4.3(i), we obtain assertion (i). This completes the proof. □
Remark 4.2 (i) The prefixes ‘KM’ and ‘KN’ in Theorem 4.4 reflect the difference between the KKT-type optimality conditions obtained via the inner semicompactness and the inner semicontinuity of the upper-level optimal solution set mapping ${\mathrm{\Psi}}_{o}$, respectively. For the notions ‘KM-stationary’ and ‘KM-stationarity’, the reader is referred to [50]. In (4.27), the gradient of F is not involved in the convex combination summation; in this case, analogously to [37], we call (4.27) KKT-type necessary optimality (stationarity) conditions for the problem (3.4).
(ii) Under the inner semicontinuity of the lower-level optimal set-valued mapping Ψ, the necessary optimality conditions of Theorem 4.4(ii) are in fact those of the problem
This means that, in the above framework, the constraints described by Y (3.2) can be dropped, and the condition that the set $X\times Y$ is closed reduces to the closedness of the set X; the latter is immediately guaranteed by Hypothesis 1 in Section 1 when deriving the necessary optimality conditions of the problem (1.6). This is a strange phenomenon, as already observed in [6].
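The stationarity systems above, in particular (4.26) and (4.27), share one sign and complementary-slackness pattern: multipliers nonnegative, constraint values nonpositive, and each product multiplier times constraint equal to zero. The following generic numerical helper is our own sketch, not code from the paper (the function name `check_complementarity` is hypothetical):

```python
import numpy as np

def check_complementarity(mults, cons_vals, tol=1e-8):
    """Check the sign and complementary-slackness pattern recurring in
    (4.26) and (4.27): multipliers nonnegative, constraint values
    nonpositive, and each product multiplier * constraint equal to zero.
    A generic numerical helper of our own, not code from the paper."""
    mults = np.asarray(mults, float)
    cons_vals = np.asarray(cons_vals, float)
    return bool(np.all(mults >= -tol)
                and np.all(cons_vals <= tol)
                and np.all(np.abs(mults * cons_vals) <= tol))

# e.g. alpha_j >= 0, G_j(x*) <= 0, alpha_j * G_j(x*) = 0
ok = check_complementarity([0.0, 2.0], [-1.5, 0.0])   # satisfies the pattern
bad = check_complementarity([1.0], [-0.5])            # violates alpha*G = 0
```

Such a check verifies only the multiplier pattern of a candidate stationary point; it does not, of course, produce the multipliers themselves.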
By Proposition 3.1(i) and Theorem 4.4, the necessary optimality conditions for the pessimistic semivectorial bilevel programming problem (1.6) are derived when the involved functions are strictly differentiable.
Corollary 4.1 Let $({x}^{\ast},{z}^{\ast})$ be a local optimal solution of the problem (1.6), where F and ${G}_{j}$, $j=1,\dots ,q$, are strictly differentiable at $({x}^{\ast},{z}^{\ast})$ and ${x}^{\ast}$, respectively. Assume that, for all $x\in X$, $f(x,\cdot )$ and $g(x,\cdot )$ are ${R}_{+}^{l}$-convex and ${R}_{+}^{p}$-convex, respectively, and let ${x}^{\ast}$ be upper-level regular. Then the following assertions hold:

(i)
(KM-stationarity conditions) Let ${\mathrm{\Psi}}_{o}$ be inner semicompact at $({x}^{\ast},{y}^{\ast})$, let every point $({x}^{\ast},z)$ with $z\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$ be lower-level regular, let f and g be strictly differentiable at $({x}^{\ast},z)$ for $z\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$, let the conditions (4.14) and (4.16) be satisfied at every point $({x}^{\ast},{y}^{\ast},z)$ with $z\in {\mathrm{\Psi}}_{o}({x}^{\ast},{y}^{\ast})$, and let the set $X\times Y$ be closed. Then there exist ${\lambda}_{t}\ge 0$, α, ${\beta}^{t}$, ${\beta}^{s}$, ${\eta}_{s}$ and ${z}_{t},{z}_{s}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$ with $t,s=1,\dots ,n+l+1$ such that (4.7) and (4.26) hold.

(ii)
(KN-stationarity conditions) Let ${\mathrm{\Psi}}_{o}$ be inner semicontinuous at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, let the point $({x}^{\ast},{z}^{\ast})$ be lower-level regular, let f and g be strictly differentiable at $({x}^{\ast},{z}^{\ast})$, let $X\times Y$ be closed, let the conditions (4.14) and (4.16) be satisfied at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, and let the set $co{N}_{gph\mathrm{\Psi}}({x}^{\ast},{y}^{\ast},{z}^{\ast})$ be closed. Then there exist ${\lambda}_{t}\ge 0$, α, ${\beta}^{t}$ and ${\gamma}^{t}$ such that (4.27) holds.
5 BP with linear multiobjective optimization lower-level problems
In this section, we consider the following pessimistic semivectorial bilevel programming problem (PSBP) with a linear multiobjective optimization lower-level problem:
where the functions F and G are defined as in Section 1, and ${\mathrm{\Psi}}_{\mathrm{wef}}(x)$ is the set of weak Pareto optimal solutions of the following problem with respect to the upper-level decision variable x:
where $b:{R}^{n}\to {R}^{l}$ and $d:{R}^{n}\to {R}^{p}$ are strictly differentiable, $A:{R}^{n}\to {R}^{l\times m}$ and $C:{R}^{n}\to {R}^{p\times m}$ are defined by
where the real-valued functions ${a}_{kt}$ ($1\le k\le l$, $1\le t\le m$) and ${c}_{it}$ ($1\le i\le p$, $1\le t\le m$) are strictly differentiable.
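For intuition, a weak Pareto point of the linear lower-level problem above can be computed by the weighted-sum scalarization with simplex weights y, as in (3.2). The following toy sketch is our own illustration (it brute-forces the vertices of a tiny 2-D feasible set; it is not the solution method of the paper, and the helper name `weak_pareto_point` is hypothetical):

```python
import itertools
import numpy as np

def weak_pareto_point(A, b, C, d, y):
    """Scalarize the linear lower-level problem: minimize y^T (A z + b) over
    {z : C z <= d} by brute-force vertex enumeration. A toy sketch of the
    weighted-sum scalarization for tiny 2-D examples."""
    m = A.shape[1]
    c = y @ A                                  # scalarized cost for z
    # b only shifts the objective value, not the argmin
    best, best_val = None, np.inf
    for idx in itertools.combinations(range(C.shape[0]), m):
        Ci, di = C[list(idx)], d[list(idx)]
        try:
            z = np.linalg.solve(Ci, di)        # candidate vertex
        except np.linalg.LinAlgError:
            continue
        if np.all(C @ z <= d + 1e-9) and c @ z < best_val:
            best, best_val = z, c @ z
    return best

# box 0 <= z <= 1 in R^2, objectives f_1 = z_1 + z_2 and f_2 = -z_1
A = np.array([[1.0, 1.0], [-1.0, 0.0]])
b = np.zeros(2)
C = np.vstack([np.eye(2), -np.eye(2)])
d = np.array([1.0, 1.0, 0.0, 0.0])
y = np.array([0.5, 0.5])                       # simplex weights as in (3.2)
z = weak_pareto_point(A, b, C, d, y)
```

Since the weights y are taken from the simplex, any minimizer of the scalarized cost is a weak Pareto solution of the linear multiobjective lower-level problem.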
Considering the bilevel optimal value function approach developed in the previous section, to derive the necessary optimality conditions for the problem (5.1) we first recall the bilevel optimal value function reformulation of the problem (5.1) according to Section 3,
Here $X=\{x\in {R}^{n}\mid G(x)\le 0\}$ and Y is given in (3.2); the function f is defined as
The lower-level regularity condition for the problem (5.1) is given as follows:
The following result, which is a consequence of Theorem 4.4, gives the necessary optimality conditions for the problem (5.1).
Theorem 5.1 (The necessary optimality conditions for the problem (5.1))
Let ${x}^{\ast}$ be an upper-level regular local optimal solution to the problem (5.1), and let F and G be strictly differentiable at $({x}^{\ast},{z}^{\ast})$ and ${x}^{\ast}$, respectively. Then the following assertions hold:

(i)
(KM-stationarity conditions) Let ${\mathrm{\Psi}}_{o}$ (where $\mathrm{\Psi}(x,y)$ in (4.1) is replaced by $\mathrm{\Psi}(x,y)$ in (5.5)) be inner semicompact at $({x}^{\ast},{y}^{\ast})$, let $({x}^{\ast},z)$ be lower-level regular in the sense of (5.7) for all $z\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$, let f and g be strictly differentiable at $({x}^{\ast},z)$ for $z\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$, let the conditions (4.14) and (4.16) be satisfied at every point $({x}^{\ast},{y}^{\ast},z)$ with $z\in {\mathrm{\Psi}}_{o}({x}^{\ast},{y}^{\ast})$, and let the set $X\times Y$ be closed. Then there exist ${\lambda}_{r}\ge 0$, α, ${\beta}^{r}$, ${\beta}^{s}$, ${\eta}_{s}$ and ${z}_{r},{z}_{s}\in \mathrm{\Psi}({x}^{\ast},{y}^{\ast})$ with $r,s=1,\dots ,n+l+1$ such that (4.7) (with $f(x,z)=A(x)z+b(x)$) and the following hold:
$$\{\begin{array}{l}{\sum}_{r=1}^{n+l+1}{\nu}_{r}\{{\mathrm{\nabla}}_{x}F({x}^{\ast},{z}_{r})-{\lambda}_{r}({\sum}_{k=1}^{l}{y}_{k}^{\ast}{\sum}_{t=1}^{m}{z}_{tr}\mathrm{\nabla}{a}_{kt}({x}^{\ast})\\ \phantom{\rule{1em}{0ex}}-{\sum}_{s=1}^{n+l+1}{\eta}_{s}({\sum}_{k=1}^{l}{y}_{k}^{\ast}{\sum}_{t=1}^{m}{z}_{ts}\mathrm{\nabla}{a}_{kt}({x}^{\ast})\\ \phantom{\rule{1em}{0ex}}+{\sum}_{i=1}^{p}{\beta}_{i}^{s}({\sum}_{t=1}^{m}{z}_{ts}\mathrm{\nabla}{c}_{it}({x}^{\ast})-\mathrm{\nabla}{d}_{i}({x}^{\ast}))))\\ \phantom{\rule{1em}{0ex}}+{\sum}_{i=1}^{p}{\beta}_{i}^{r}({\sum}_{t=1}^{m}{z}_{tr}\mathrm{\nabla}{c}_{it}({x}^{\ast})-\mathrm{\nabla}{d}_{i}({x}^{\ast}))\}+{\sum}_{j=1}^{q}{\alpha}_{j}\mathrm{\nabla}{G}_{j}({x}^{\ast})=0,\\ \mathrm{\forall}r=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{\mathrm{\nabla}}_{z}F({x}^{\ast},{z}_{r})-{\lambda}_{r}{\sum}_{k=1}^{l}{y}_{k}^{\ast}{a}_{k}({x}^{\ast})-{\sum}_{i=1}^{p}{\beta}_{i}^{r}{c}_{i}({x}^{\ast})\le 0,\\ \mathrm{\forall}r=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{z}_{r}^{\mathrm{\top}}[{\mathrm{\nabla}}_{z}F({x}^{\ast},{z}_{r})-{\lambda}_{r}{\sum}_{k=1}^{l}{y}_{k}^{\ast}{a}_{k}({x}^{\ast})-{\sum}_{i=1}^{p}{\beta}_{i}^{r}{c}_{i}({x}^{\ast})]=0,\\ \mathrm{\forall}s=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{\sum}_{k=1}^{l}{y}_{k}^{\ast}{a}_{k}({x}^{\ast})+{\sum}_{i=1}^{p}{\beta}_{i}^{s}{c}_{i}({x}^{\ast})\le 0,\\ \mathrm{\forall}r,s=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{z}_{r}^{\mathrm{\top}}[{\sum}_{k=1}^{l}{y}_{k}^{\ast}{a}_{k}({x}^{\ast})+{\sum}_{i=1}^{p}{\beta}_{i}^{s}{c}_{i}({x}^{\ast})]=0,\\ \mathrm{\forall}j=1,\dots ,q\text{:}\phantom{\rule{1em}{0ex}}{\alpha}_{j}\ge 0,\phantom{\rule{1em}{0ex}}{\alpha}_{j}{G}_{j}({x}^{\ast})=0,\\ \mathrm{\forall}s=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{\eta}_{s}\ge 0,\phantom{\rule{1em}{0ex}}{\sum}_{s=1}^{n+l+1}{\eta}_{s}=1,\\ \mathrm{\forall}r=1,\dots ,n+l+1\text{:}\phantom{\rule{1em}{0ex}}{\nu}_{r}\ge 0,\phantom{\rule{1em}{0ex}}{\sum}_{r=1}^{n+l+1}{\nu}_{r}=1.\end{array}$$(5.8)
(ii)
(KN-stationarity conditions) Let ${\mathrm{\Psi}}_{o}$ (where $\mathrm{\Psi}(x,y)$ in (4.1) is replaced by $\mathrm{\Psi}(x,y)$ in (5.5)) be inner semicontinuous at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, let $({x}^{\ast},{z}^{\ast})$ be lower-level regular in the sense of (5.7), let f and g be strictly differentiable at $({x}^{\ast},{z}^{\ast})$, let the set $X\times Y$ be closed, let the conditions (4.14) and (4.16) be satisfied at $({x}^{\ast},{y}^{\ast},{z}^{\ast})$, and let the set $co{N}_{gph\mathrm{\Psi}}({x}^{\ast},{y}^{\ast},{z}^{\ast})$ be closed. Then there exist ${\lambda}_{r}\ge 0$, ${\nu}_{r}$, α, ${\beta}^{r}$ and ${\gamma}^{r}$ such that
$$\left\{\begin{array}{l}
\nabla_x F(x^{\ast},z^{\ast})+\sum_{j=1}^{q}\alpha_j\nabla G_j(x^{\ast})\\
\quad -\sum_{r=1}^{n+l+1}\nu_r\sum_{i=1}^{p}(\beta_i^r-\lambda_r\gamma_i^r)\big(\sum_{t=1}^{m}z_t^{\ast}\nabla c_{it}(x^{\ast})-\nabla d_i(x^{\ast})\big)=0,\\
\nabla_z F(x^{\ast},z^{\ast})-\sum_{r=1}^{n+l+1}\nu_r\big(\lambda_r\sum_{k=1}^{l}y_k^{\ast}a_k(x^{\ast})+\sum_{i=1}^{p}\beta_i^r c_i(x^{\ast})\big)\le 0,\\
z^{\ast \top}\big[\nabla_z F(x^{\ast},z^{\ast})-\sum_{r=1}^{n+l+1}\nu_r\big(\lambda_r\sum_{k=1}^{l}y_k^{\ast}a_k(x^{\ast})+\sum_{i=1}^{p}\beta_i^r c_i(x^{\ast})\big)\big]=0,\\
\forall r=1,\dots ,n+l+1\text{:}\quad \sum_{k=1}^{l}y_k^{\ast}a_k(x^{\ast})+\sum_{i=1}^{p}\gamma_i^r c_i(x^{\ast})\le 0,\\
\forall r=1,\dots ,n+l+1\text{:}\quad z^{\ast \top}\big[\sum_{k=1}^{l}y_k^{\ast}a_k(x^{\ast})+\sum_{i=1}^{p}\gamma_i^r c_i(x^{\ast})\big]=0,\\
\forall j=1,\dots ,q\text{:}\quad \alpha_j\ge 0,\quad \alpha_j G_j(x^{\ast})=0,\\
\forall r=1,\dots ,n+l+1\text{:}\quad \nu_r\ge 0,\quad \sum_{r=1}^{n+l+1}\nu_r=1.
\end{array}\right.$$(5.9)
Proof The proof is similar to that of Theorem 4.4 and so is omitted here. □
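Several relations in the stationarity systems above, namely the multiplier signs, the unit-simplex normalizations of η and ν, and the complementary-slackness products, are mechanical to verify for candidate multipliers. The following is a minimal numerical sketch of such checks; the helper names and all toy data are hypothetical and not part of the paper's development.

```python
import numpy as np

TOL = 1e-9

def simplex_ok(w):
    """Weights of a convex combination: w >= 0 and sum(w) = 1,
    as required of eta and nu in the stationarity systems."""
    w = np.asarray(w)
    return bool((w >= -TOL).all() and abs(w.sum() - 1.0) <= TOL)

def compl_slack_ok(alpha, G_vals):
    """Upper-level multipliers: alpha_j >= 0, G_j(x*) <= 0 and
    alpha_j * G_j(x*) = 0 for every j."""
    alpha, G_vals = np.asarray(alpha), np.asarray(G_vals)
    return bool((alpha >= -TOL).all() and (G_vals <= TOL).all()
                and (np.abs(alpha * G_vals) <= TOL).all())

def z_conditions_ok(grad_z_F, nu, lam, ya, c, beta, z_star):
    """z-part of the KN system: the vector
    v = grad_z F(x*,z*) - sum_r nu_r (lam_r * ya + beta^r @ c)
    must satisfy v <= 0 and z*^T v = 0, where ya = sum_k y_k^* a_k(x*)
    and the rows of c are the c_i(x*)."""
    v = grad_z_F - sum(n * (l * ya + np.asarray(b) @ c)
                       for n, l, b in zip(nu, lam, beta))
    return bool((v <= TOL).all() and abs(z_star @ v) <= TOL)

# Toy data: one active upper-level constraint, and v works out to zero.
print(simplex_ok([0.5, 0.5]),
      compl_slack_ok([2.0, 0.0], [0.0, -1.0]),
      z_conditions_ok(np.array([1.0, 2.0]), [1.0], [1.0],
                      np.array([1.0, 0.0]), np.array([[0.0, 1.0]]),
                      [[2.0]], np.array([0.3, 0.7])))
```

Of course, the gradient equations themselves are problem-specific; the sketch only covers the sign, normalization and complementarity relations common to (5.8) and (5.9).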
Remark 5.1 (i) If the term $b(x)$ in the lower-level objective function of problem (5.2) is removed, we obtain the same results as in Theorem 5.1(i) and (ii), noting that ${\sum}_{s=1}^{n+l+1}{\eta}_{s}=1$.
(ii) If F and G are linear functions with respect to their variables, then the calmness condition (4.14) is automatically satisfied by [15]. We then obtain more concise results, which form a particular case of Corollary 4.1. The interested reader may derive the detailed optimality conditions for this case.
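Because the lower level in (5.2) is linear in z, the scalarized problem $\min_z\, y^{\top}(A(x)z+b(x))$ over the polyhedron $C(x)z\le d(x)$ attains a solution at a vertex, and the term $b(x)$ only shifts the objective by the constant $y^{\top}b(x)$, which is the substance of Remark 5.1(i). A toy sketch, assuming for illustration the additional bounds $z\ge 0$ and hypothetical data throughout, finds an element of $\mathrm{\Psi}(x,y)$ by vertex enumeration:

```python
import numpy as np

# Lower level at a fixed x with f(x,z) = A(x)z + b(x): the scalarization
# weights y give the single objective y^T (A(x)z + b(x)), minimized over
# the polyhedron {z >= 0 : C(x)z <= d(x)}.  All data here are hypothetical.
A = np.array([[1.0, 2.0], [3.0, 1.0]])   # A(x) at the fixed x
b = np.array([0.5, -0.5])                # b(x); drops out of the argmin
y = np.array([1.0, 0.0])                 # scalarization weights

# Vertices of {z >= 0 : z1 + z2 <= 1}, i.e. C(x) = [1 1], d(x) = 1.
vertices = [np.array(v) for v in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]]

# A linear objective over a polytope is minimized at a vertex.
values = [y @ (A @ z + b) for z in vertices]
z_opt = vertices[int(np.argmin(values))]
print(z_opt)   # the element of Psi(x, y) selected on this toy instance
```

Replacing b by the zero vector changes every value in `values` by the same constant and therefore leaves `z_opt` unchanged, illustrating Remark 5.1(i) on this instance.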
6 Conclusions
In this paper, we develop necessary optimality conditions for the pessimistic formulation of the semivectorial bilevel optimization problem. First, we transform the problem into a scalar objective optimization problem with inequality constraints via the scalarization method for the multiobjective lower-level problem. Furthermore, we derive a generalized minimax optimization problem by means of the bilevel optimal value function, whose sensitivity analysis is constructed via the lower-level value function approach. For the special case where the lower-level multiobjective optimization problem is linear, we also derive the corresponding necessary optimality conditions. In future work, we intend to develop necessary optimality conditions in the nonsmooth setting and solution algorithms for the pessimistic formulation of the semivectorial bilevel optimization problem; the latter, in particular, is challenging. For problem (1.1), if the leader is uncertain whether the follower will cooperate fully with him, it would be inappropriate for the leader to consider only the optimistic or only the pessimistic formulation. Hence, when the leader and the follower are neither fully cooperative nor fully noncooperative, it is meaningful to consider a partial cooperation model that combines the optimistic and pessimistic formulations of problem (1.1).
References
 1.
Bonnel H, Morgan J: Semivectorial bilevel optimization problem: penalty approach. J. Optim. Theory Appl. 2006, 131(3): 365-382. 10.1007/s10957-006-9150-4
 2.
Dempe S: Foundations of Bilevel Programming. Nonconvex Optimization and Its Applications Series 61. Kluwer Academic, Dordrecht; 2002.
 3.
Ankhili Z, Mansouri A: An exact penalty on bilevel programs with linear vector optimization lower level. Eur. J. Oper. Res. 2009, 197: 36-41. 10.1016/j.ejor.2008.06.026
 4.
Bonnel H: Optimality conditions for the semivectorial bilevel optimization problem. Pac. J. Optim. 2006, 2(3): 447-467.
 5.
Calvete HI, Galé C: On linear bilevel problems with multiple objectives at the lower level. Omega 2011, 39: 33-40. 10.1016/j.omega.2010.02.002
 6.
Dempe S, Gadhi N, Zemkoho AB: New optimality conditions for the semivectorial bilevel optimization problem. J. Optim. Theory Appl. 2013, 157: 54-74. 10.1007/s10957-012-0161-z
 7.
Eichfelder G: Multiobjective bilevel optimization. Math. Program. 2010, 123: 419-449. 10.1007/s10107-008-0259-0
 8.
Nie P: A note on bilevel optimization problems. Int. J. Appl. Math. Sci. 2005, 2(1): 31-38.
 9.
Zheng Y, Wan Z: A solution method for semivectorial bilevel programming problem via penalty method. J. Appl. Math. Comput. 2011, 37: 207-219. 10.1007/s12190-010-0430-7
 10.
Bonnel H, Morgan J: Optimality conditions for semivectorial bilevel convex optimal control problem. In Computational and Analytical Mathematics: In Honor of Jonathan Borwein on the Occasion of His 60th Birthday. Edited by: Bauschke H, Théra M. Springer Proceedings in Mathematics 50. 2013, 45-78.
 11.
Chen J, Cho YJ, Kim JK, Li J: Multiobjective optimization problems with modified objective functions and cone constraints and applications. J. Glob. Optim. 2011, 49: 137-147. 10.1007/s10898-010-9539-3
 12.
Chen J, Wan Z, Cho YJ: Nonsmooth multiobjective optimization problems and weak vector quasi-variational inequalities. Comput. Appl. Math. 2013, 32: 291-301. 10.1007/s40314-013-0014-x
 13.
Mordukhovich BS: Variational Analysis and Generalized Differentiation. I: Basic Theory. II: Applications. Springer, Berlin; 2006.
 14.
Rockafellar RT, Wets RJB: Variational Analysis. Springer, Berlin; 1998.
 15.
Robinson SM: Some continuity properties of polyhedral multifunctions. Math. Program. Stud. 1981, 14: 206-214. 10.1007/BFb0120929
 16.
Ehrgott M: Multicriteria Optimization. 2nd edition. Springer, Berlin; 2005.
 17.
Aboussoror A: Weak bilevel programming problems: existence of solutions. Adv. Math. Res. 2002, 1: 83-92.
 18.
Aboussoror A, Loridan P: Strong-weak Stackelberg problems in finite dimensional spaces. Serdica Math. J. 1995, 21: 151-170.
 19.
Aboussoror A, Loridan P: Existence of solutions to two-level optimization problems with nonunique lower-level solutions. J. Math. Anal. Appl. 2001, 254: 348-357. 10.1006/jmaa.2000.7001
 20.
Lignola MB, Morgan J: Stability of regularized bilevel programming problems. J. Optim. Theory Appl. 1997, 93(3): 575-596. 10.1023/A:1022695113803
 21.
Loridan P, Morgan J: Approximate solutions for two-level optimization problems. Int. Ser. Numer. Math. 1988, 84: 181-196.
 22.
Loridan P, Morgan J: ϵ-Regularized two-level optimization problems: approximation and existence results. In Optimization (Fifth French-German Conference, Castel Novel). Lecture Notes in Mathematics, vol. 1405. 1989, 99-113.
 23.
Loridan P, Morgan J: Least-norm regularization for weak two-level optimization problems. In Optimization, Optimal Control and Partial Differential Equations. International Series of Numerical Mathematics 107. Birkhäuser, Basel; 1992: 307-318.
 24.
Loridan P, Morgan J: On strict ϵ-solutions for a two-level optimization problem. In Proceedings of the International Conference on Operations Research 90. Springer, Berlin; 1992: 165-172.
 25.
Lucchetti R, Mignanego F, Pieri G: Existence theorems of equilibrium points in Stackelberg games with constraints. Optimization 1987, 18: 857-866. 10.1080/02331938708843300
 26.
Marhfour A: Mixed solutions for weak Stackelberg problems: existence and stability results. J. Optim. Theory Appl. 2000, 105(2): 417-440. 10.1023/A:1004618103646