Research | Open Access

# Strict global minimizers and higher-order generalized strong invexity in multiobjective optimization

Guneet Bhatia and Rishi Rajan Sahay

*Journal of Inequalities and Applications* **2013**:31

https://doi.org/10.1186/1029-242X-2013-31

© Bhatia and Sahay; licensee Springer 2013

**Received:** 19 April 2012 · **Accepted:** 6 January 2013 · **Published:** 24 January 2013

## Abstract

Higher-order strict minimizers with respect to a nonlinear function for a multiobjective optimization problem are introduced and are characterized via sufficient optimality conditions and higher-order mixed saddle points of a vector-valued partial Lagrangian. To this aim, we present certain generalizations of higher-order strong invexity. A mixed dual is proposed and corresponding duality results are obtained. An equivalent optimization problem for the given multiobjective optimization problem is introduced. It is shown that the problem of finding higher-order strict minimizers with respect to a nonlinear function for the given problem reduces to that of finding strict minimizers in the ordinary sense for an equivalent problem.

**MSC:**26A51, 90C29, 90C46.

## Keywords

- higher-order strong invexity
- strict minimizer of order *m*
- partial Lagrangian
- mixed saddle point

## 1 Introduction

Multiobjective optimization problems occupy an important place in the theory of optimization. Several solution concepts for multiobjective optimization problems have appeared in the literature, *viz.* efficiency, weak efficiency and proper efficiency [1, 2]. The concept of higher-order local minimizer plays an important role in the convergence analysis of iterative numerical methods [3] and in stability results [4]. For a scalar optimization problem, Auslender [5] derived necessary and sufficient optimality conditions for isolated local minima of order 1 and 2, and Ward [6] presented the notion of strict local minimum of order *m*. Jimenez [7] extended the idea of Ward [6] to define the notion of a strict local efficient solution of order *m* for a vector minimization problem. Bhatia [8] extended the notion of Ward to define the higher-order global strict minimizer for a multiobjective optimization problem. Sahay and Bhatia [9] introduced the notion of a strict minimizer of order *m* with respect to a nonlinear function for a scalar optimization problem.

In this paper, we move a step ahead in this direction and introduce the concept of a higher-order strict minimizer with respect to a nonlinear function for a multiobjective optimization problem. For the purpose of studying this new solution concept, we present certain generalizations of higher-order strong invexity [9]. Sufficient optimality conditions characterizing this solution concept are obtained. A mixed dual is proposed and well-known duality results are established. A partial vector-valued Lagrangian for the multiobjective optimization problem is introduced. Higher-order mixed saddle points for the partial Lagrangian with respect to a nonlinear function are shown to be equivalent to the higher-order strict minimizers with respect to the same function. Further, an equivalent optimization problem that enables one to find the higher-order strict minimizers for a given multiobjective optimization problem in a simpler manner is presented.

## 2 Higher-order global strict minimizers

Consider the multiobjective optimization problem

(MOP) minimize $f(x)=({f}_{1}(x),{f}_{2}(x),\dots ,{f}_{p}(x))$ subject to ${g}_{j}(x)\le 0$, $j=1,2,\dots ,q$, $x\in X$,

where ${f}_{i},{g}_{j}:X\to R$, $i=1,2,\dots ,p$, $j=1,2,\dots ,q$ are real-valued differentiable functions and *X* is a non-empty open subset of ${R}^{n}$ endowed with the Euclidean norm $\parallel \cdot \parallel $.

We denote by $S=\{x\in X:{g}_{j}(x)\le 0,j=1,2,\dots ,q\}$ the set of all feasible solutions for (MOP) and let $I(x)=\{j:{g}_{j}(x)=0\}$ be the set of indices corresponding to active constraints. Let $B({x}^{0},\epsilon )=\{x\in {R}^{n}:\parallel x-{x}^{0}\parallel <\epsilon \}$ denote an open ball with centre ${x}^{0}$ and radius *ε*.

**Definition 2.1** ([7]) A point ${x}^{0}\in S$ is a strict minimizer for (MOP) if there exists no $x\in S\mathrm{\setminus}\{{x}^{0}\}$ such that $f(x)\le f({x}^{0})$.

**Definition 2.2** ([8]) Let $m\ge 1$ be an integer. A point ${x}^{0}\in S$ is a local strict minimizer of order *m* for (MOP) if there exist an $\epsilon >0$ and a constant $c\in int{R}_{+}^{p}$ such that

$f(x)\nless f({x}^{0})+c{\parallel x-{x}^{0}\parallel }^{m}\phantom{\rule{1em}{0ex}}\text{for all } x\in B({x}^{0},\epsilon )\cap S.$

The notion of a local strict minimizer reduces to the global sense if the ball $B({x}^{0},\epsilon )$ is replaced by the whole space ${R}^{n}$.

The following example illustrates that in some cases ${x}^{0}$ may fail to be a strict minimizer in the sense of the above definition.

**Example 2.1** Let $S=[0,1]$ and $f(x)=({x}^{3},{sin}^{3}x)$, then ${x}^{0}=0$ is not a strict minimizer of order 1 in the sense of Definition 2.2, since for any $c=({c}_{1},{c}_{2})\in int{R}_{+}^{2}$, there exists an *x* satisfying $0<x<{c}_{1}^{1/2}$, $0<\frac{{sin}^{3}x}{x}<{c}_{2}$ such that $f(x)<f({x}^{0})+c\parallel x-{x}^{0}\parallel $.
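The failure in Example 2.1 can be checked numerically. The following sketch (an illustration added here, not part of the original text; the helper name `violates_order1` is ours) verifies that for any $c=({c}_{1},{c}_{2})\in int{R}_{+}^{2}$, the point $x=0.5\min (\sqrt{{c}_{1}},\sqrt{{c}_{2}},1)$ violates the order-1 inequality at ${x}^{0}=0$:

```python
import math

def violates_order1(c1, c2, x):
    """True if both components of f(x) = (x^3, sin^3 x) lie strictly
    below f(0) + (c1*|x - 0|, c2*|x - 0|), i.e. x witnesses that
    x0 = 0 is not a strict minimizer of order 1."""
    return x ** 3 < c1 * x and math.sin(x) ** 3 < c2 * x

# For any c in int R^2_+, a small enough x in (0, 1] works:
# x^3 < c1*x reduces to x^2 < c1, and sin^3 x < x^3 <= c2*x likewise.
for c1, c2 in [(1.0, 1.0), (0.1, 0.1), (1e-3, 1e-3)]:
    x = 0.5 * min(math.sqrt(c1), math.sqrt(c2), 1.0)
    assert violates_order1(c1, c2, x)
```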

The above example motivates us to introduce a new notion of a strict minimizer of order *m* with respect to a nonlinear function for the multiobjective optimization problem (MOP).

**Definition 2.3** Let $m\ge 1$ be an integer. A point ${x}^{0}\in S$ is a local strict minimizer of order *m* for (MOP) with respect to a nonlinear function $\psi :S\times S\to {R}^{n}$ if there exist an $\epsilon >0$ and a constant $c\in int{R}_{+}^{p}$ such that

$f(x)\nless f({x}^{0})+c{\parallel \psi (x,{x}^{0})\parallel }^{m}\phantom{\rule{1em}{0ex}}\text{for all } x\in B({x}^{0},\epsilon )\cap S.$

**Definition 2.4** Let $m\ge 1$ be an integer. A point ${x}^{0}\in S$ is a strict minimizer of order *m* for (MOP) with respect to a nonlinear function $\psi :S\times S\to {R}^{n}$ if there exists a constant $c\in int{R}_{+}^{p}$ such that

$f(x)\nless f({x}^{0})+c{\parallel \psi (x,{x}^{0})\parallel }^{m}\phantom{\rule{1em}{0ex}}\text{for all } x\in S.$

**Remark 2.1** The function *ψ* plays an important role in the notion of a strict minimizer defined above. For the problem considered in Example 2.1, ${x}^{0}=0$ failed to be a strict minimizer of order 1 in the usual sense; however, it is important to observe here that ${x}^{0}=0$ is a strict minimizer of order 1 with respect to $\psi (x,{x}^{0})={sin}^{3}x-{sin}^{3}{x}^{0}$ for $c=(1,1)$.
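A grid check of Remark 2.1 (again an illustrative script, not part of the paper): with $\psi (x,0)={sin}^{3}x$ and $c=(1,1)$, the second component of *f* can never fall strictly below the bound, so no $x\in S$ defeats ${x}^{0}=0$:

```python
import math

def blocked(x):
    """True if f(x) = (x^3, sin^3 x) is NOT componentwise below
    f(0) + c*||psi(x, 0)||, with psi(x, 0) = sin^3 x and c = (1, 1)."""
    bound = abs(math.sin(x) ** 3)
    fx = (x ** 3, math.sin(x) ** 3)
    return not (fx[0] < bound and fx[1] < bound)

# The component f_2(x) = sin^3 x never drops strictly below the bound
# sin^3 x itself, so x0 = 0 is a strict minimizer of order 1 w.r.t. psi.
assert all(blocked(k / 1000) for k in range(1001))
```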

**Remark 2.2** The study of higher-order minimizers is pertinent as these minimizers play an important role in the convergence analysis of iterative numerical methods and in stability results. These minimizers are often exactly those satisfying an *m*th derivative test [6, 7]. It is clear that any strict minimizer of order *m* is also a strict minimizer for (MOP); the converse may not be true. If ${x}^{0}$ is a strict minimizer of order *m* with respect to a nonlinear function *ψ*, then it is also a strict minimizer of order *j* with respect to the same *ψ* for all $j>m$.

We recall that [1] a set $S\subseteq {R}^{n}$ is invex with respect to *η* if there exists $\eta :S\times S\to {R}^{n}$ such that for all $x,y\in S$ and all $\lambda \in [0,1]$, $y+\lambda \eta (x,y)\in S$. Throughout this paper, we assume $S\subseteq X$ to be an invex set.

**Definition 2.5** ([9]) A differentiable function $f:X\to R$ is said to be strongly invex of order *m* with respect to *η*, *ψ* on *S* if there exists a constant $c>0$ such that for all $x,y\in S$,

$f(x)\ge f(y)+\mathrm{\nabla}f{(y)}^{t}\eta (x,y)+c{\parallel \psi (x,y)\parallel }^{m}.$

If $\psi (x,y)=0$, then the above definition reduces to the notion of invexity. If $\psi (x,y)=x-y$, $\eta (x,y)=x-y$, the definition reduces to the definition of strong convexity of order *m* [10].

**Remark 2.3** It is important to observe that there exist functions which are strongly invex of order *m* but are not strongly convex of any order. For example, let $X={R}^{2}$, $S=\{({x}_{1},{x}_{2})\in {R}^{2}:0\le {x}_{1},{x}_{2}\le 1\}$, $f(x)={x}_{1}+{x}_{2}^{2}$, $\eta (x,y)=(-{y}_{1},-{y}_{2})$ and $\psi (x,y)=({x}_{1}/\sqrt{1+{y}_{2}},0)$, where $x={({x}_{1},{x}_{2})}^{t}$ and $y={({y}_{1},{y}_{2})}^{t}$. Then, for all $x,y\in S$ and $\lambda \in [0,1]$, we have $y+\lambda \eta (x,y)=({y}_{1}(1-\lambda ),{y}_{2}(1-\lambda ))\in S$, thus *S* is an invex set with respect to *η*. Clearly, *f* is strongly invex of order $m\ge 1$ with respect to *η* and *ψ* as defined above for $0<c\le 1$. However, on choosing $x=(1,1/2)$ and $y=(0,1/2)$, it is evident that *f* is not strongly convex of any order for any $c>0$.
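The computations in Remark 2.3 can be verified numerically. The sketch below (helper names are ours) checks the strong invexity inequality of order 2 with $c=1$ on a grid over *S* and exhibits the point pair at which strong convexity fails:

```python
import math

def f(x):                      # f(x1, x2) = x1 + x2^2 on the unit square
    return x[0] + x[1] ** 2

def grad_f(y):
    return (1.0, 2.0 * y[1])

def eta(x, y):
    return (-y[0], -y[1])

def psi(x, y):
    return (x[0] / math.sqrt(1.0 + y[1]), 0.0)

def strong_invex_gap(x, y, m, c):
    """f(x) - f(y) - grad f(y)^t eta(x,y) - c*||psi(x,y)||^m; strong
    invexity of order m requires this to be nonnegative."""
    g, e = grad_f(y), eta(x, y)
    inner = g[0] * e[0] + g[1] * e[1]
    return f(x) - f(y) - inner - c * math.hypot(*psi(x, y)) ** m

grid = [i / 4 for i in range(5)]
pts = [(a, b) for a in grid for b in grid]

# Strong invexity of order m = 2 with c = 1 holds on the grid ...
assert all(strong_invex_gap(x, y, 2, 1.0) >= -1e-12 for x in pts for y in pts)

# ... but strong convexity fails at x = (1, 1/2), y = (0, 1/2):
# f(x) - f(y) - grad f(y)^t (x - y) = 0, leaving no room for c*||x-y||^m.
x, y = (1.0, 0.5), (0.0, 0.5)
gap = f(x) - f(y) - (grad_f(y)[0] * (x[0] - y[0]) + grad_f(y)[1] * (x[1] - y[1]))
assert gap == 0.0
```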

**Remark 2.4** Every strongly invex function of order *m* with respect to *η* and *ψ* is invex. However, converse of this statement may not be true [9].

We now present the following generalizations of higher-order strong invexity.

**Definition 2.6** A differentiable function $f:X\to R$ is said to be strongly pseudoinvex type I of order *m* with respect to *η*, *ψ* on *S* if there exists a constant $c>0$ such that for all $x,y\in S$,

$\mathrm{\nabla}f{(y)}^{t}\eta (x,y)\ge 0\phantom{\rule{1em}{0ex}}\implies \phantom{\rule{1em}{0ex}}f(x)\ge f(y)+c{\parallel \psi (x,y)\parallel }^{m},$

or equivalently, $f(x)<f(y)+c{\parallel \psi (x,y)\parallel}^{m}$ implies $\mathrm{\nabla}f{(y)}^{t}\eta (x,y)<0$.

**Remark 2.5** Strong invexity of order *m* with respect to *η* and *ψ* implies strong pseudoinvexity type I of order *m* with respect to the same *η* and *ψ*. However, converse is not true in general. For example, let $X=R$, $S=(0,\pi /2]$, $f(x)=1+cosx$, $\eta (x,y)=\frac{(cosy-cosx)}{siny}$ and $\psi (x,y)=cosx-cosy$, then for $c=1/{2}^{m-1}$, *f* is strongly pseudoinvex type I of order $m\ge 1$ with respect to *η* and *ψ* on *S* but is not strongly invex of any order *m* with respect to these *η* and *ψ*.
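Remark 2.5 can likewise be checked numerically (an illustrative script; the grid and tolerance are ours). The key identity is $f(x)-f(y)={f}^{\prime}(y)\eta (x,y)$, which kills any $c{|\psi |}^{m}$ margin in the strong invexity inequality:

```python
import math

def f(x):
    return 1.0 + math.cos(x)

def eta(x, y):
    return (math.cos(y) - math.cos(x)) / math.sin(y)

def psi(x, y):
    return math.cos(x) - math.cos(y)

m, c = 2, 1.0 / 2 ** (2 - 1)                        # c = 1/2^(m-1)
pts = [k * math.pi / 2 / 20 for k in range(1, 21)]  # grid in (0, pi/2]

# Type I pseudoinvexity: f'(y)*eta(x,y) >= 0 implies f(x) >= f(y) + c*|psi|^m.
for x in pts:
    for y in pts:
        if -math.sin(y) * eta(x, y) >= 0:
            assert f(x) >= f(y) + c * abs(psi(x, y)) ** m - 1e-12

# Strong invexity fails: f(x) - f(y) - f'(y)*eta(x,y) is identically zero,
# so no c > 0 satisfies f(x) - f(y) >= f'(y)*eta(x,y) + c*|psi(x,y)|^m.
x, y = 0.3, 1.2
assert abs(f(x) - f(y) + math.sin(y) * eta(x, y)) < 1e-12
assert abs(psi(x, y)) > 0.5
```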

**Definition 2.7** A differentiable function $f:X\to R$ is said to be strongly pseudoinvex type II of order *m* with respect to *η*, *ψ* on *S* if there exists a constant $c>0$ such that for all $x,y\in S$,

$\mathrm{\nabla}f{(y)}^{t}\eta (x,y)+c{\parallel \psi (x,y)\parallel }^{m}\ge 0\phantom{\rule{1em}{0ex}}\implies \phantom{\rule{1em}{0ex}}f(x)\ge f(y).$

**Definition 2.8** A differentiable function $f:X\to R$ is said to be strongly quasiinvex type I of order *m* with respect to *η*, *ψ* on *S* if there exists a constant $c>0$ such that for all $x,y\in S$,

$f(x)\le f(y)\phantom{\rule{1em}{0ex}}\implies \phantom{\rule{1em}{0ex}}\mathrm{\nabla}f{(y)}^{t}\eta (x,y)+c{\parallel \psi (x,y)\parallel }^{m}\le 0.$

**Definition 2.9** A differentiable function $f:X\to R$ is said to be strongly quasiinvex type II of order *m* with respect to *η*, *ψ* on *S* if there exists a constant $c>0$ such that for all $x,y\in S$,

$f(x)\le f(y)+c{\parallel \psi (x,y)\parallel }^{m}\phantom{\rule{1em}{0ex}}\implies \phantom{\rule{1em}{0ex}}\mathrm{\nabla}f{(y)}^{t}\eta (x,y)\le 0.$

## 3 Local-global property and optimality conditions

**Theorem 3.1** *Suppose* ${x}^{0}\in S$ *is a strict local minimizer of order* *m* *with respect to* *ψ* *for* (MOP) *and the functions* ${f}_{i}:X\to R$, $i=1,2,\dots ,p$ *are strongly pseudoinvex type I of order* *m* *with respect to the same* *η* *and* *ψ* *on* *S*. *Then* ${x}^{0}$ *is a strict minimizer of order* *m* *with respect to the same* *ψ* *for* (MOP).

*Proof* Since ${x}^{0}\in S$ is a local strict minimizer of order *m* with respect to *ψ* for (MOP), there exist an $\epsilon >0$ and a constant $c=({c}_{1},\dots ,{c}_{p})\in int{R}_{+}^{p}$ such that

$f(x)\nless f({x}^{0})+c{\parallel \psi (x,{x}^{0})\parallel }^{m}\phantom{\rule{1em}{0ex}}\text{for all } x\in B({x}^{0},\epsilon )\cap S.\phantom{\rule{2em}{0ex}}(3.1)$

Suppose, on the contrary, that ${x}^{0}$ is not a strict minimizer of order *m* with respect to *ψ* for (MOP). Then, for all ${c}_{i}>0$, $i=1,2,\dots ,p$, there exists some $z\in S$ such that

${f}_{i}(z)<{f}_{i}({x}^{0})+{c}_{i}{\parallel \psi (z,{x}^{0})\parallel }^{m},\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

For ${x}^{0}\in S$ and sufficiently small $\lambda \in (0,1)$ and $\eta :S\times S\to {R}^{n}$, we have ${x}^{0}+\lambda \eta (z,{x}^{0})\in B({x}^{0},\epsilon )\cap S$.

Since the functions ${f}_{i}$ are strongly pseudoinvex type I of order *m* on *S* with respect to *η* and *ψ*, for $z,{x}^{0}\in S$, it follows from the set of above inequalities that

$\mathrm{\nabla}{f}_{i}{({x}^{0})}^{t}\eta (z,{x}^{0})<0,\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

Hence, for sufficiently small $\lambda \in (0,1)$ and $x={x}^{0}+\lambda \eta (z,{x}^{0})\in B({x}^{0},\epsilon )\cap S$ and the nonlinear function *ψ*, we have

${f}_{i}(x)<{f}_{i}({x}^{0})\le {f}_{i}({x}^{0})+{c}_{i}{\parallel \psi (x,{x}^{0})\parallel }^{m},\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

This contradicts (3.1). □

**Theorem 3.2** (Fritz John type necessary optimality conditions)

*Suppose* ${x}^{0}$ *is a strict minimizer of order* *m* *with respect to a nonlinear function* $\psi :S\times S\to {R}^{n}$ *for* (MOP) *and the functions* ${f}_{i}$, $i=1,2,\dots ,p$, ${g}_{j}$, $j=1,2,\dots ,q$ *are differentiable at* ${x}^{0}$. *Then there exist* ${\lambda}_{i}^{0}\ge 0$, $i=1,2,\dots ,p$, ${\mu}_{j}^{0}\ge 0$, $j=1,2,\dots ,q$, $({\lambda}^{0},{\mu}^{0})\ne 0$, *such that*

$\sum_{i=1}^{p}{\lambda}_{i}^{0}\mathrm{\nabla}{f}_{i}({x}^{0})+\sum_{j=1}^{q}{\mu}_{j}^{0}\mathrm{\nabla}{g}_{j}({x}^{0})=0,$

${\mu}_{j}^{0}{g}_{j}({x}^{0})=0,\phantom{\rule{1em}{0ex}}j=1,2,\dots ,q.$

**Definition 3.1** (MOP) is said to satisfy Slater’s constraint qualification (SCQ) at ${x}^{0}$ if there exists $\overline{x}\in X$ such that ${g}_{j}(\overline{x})<0$, $j\in I({x}^{0})$.

**Theorem 3.3** (Karush-Kuhn-Tucker type necessary optimality conditions)

*Suppose* ${x}^{0}$ *is a strict minimizer of order* *m* *with respect to a nonlinear function* $\psi :S\times S\to {R}^{n}$ *for* (MOP) *and the functions* ${f}_{i}$, $i=1,2,\dots ,p$, ${g}_{j}$, $j=1,2,\dots ,q$ *are differentiable at* ${x}^{0}$. *Assume that* (SCQ) *holds at* ${x}^{0}$. *Then there exist* ${\lambda}_{i}^{0}\ge 0$, $i=1,2,\dots ,p$, ${\mu}_{j}^{0}\ge 0$, $j=1,2,\dots ,q$ *such that*

$\sum_{i=1}^{p}{\lambda}_{i}^{0}\mathrm{\nabla}{f}_{i}({x}^{0})+\sum_{j=1}^{q}{\mu}_{j}^{0}\mathrm{\nabla}{g}_{j}({x}^{0})=0,\phantom{\rule{2em}{0ex}}(3.2)$

${\mu}_{j}^{0}{g}_{j}({x}^{0})=0,\phantom{\rule{1em}{0ex}}j=1,2,\dots ,q,\phantom{\rule{2em}{0ex}}(3.3)$

${\lambda}^{0t}e=1,\phantom{\rule{1em}{0ex}}e={(1,\dots ,1)}^{t}\in {R}^{p}.\phantom{\rule{2em}{0ex}}(3.4)$

**Theorem 3.4** (Sufficient optimality conditions)

*Let the conditions* (3.2)-(3.4) *be satisfied at* ${x}^{0}\in S$. *Suppose* ${f}_{i}$, $i=1,2,\dots ,p$ *are strongly pseudoinvex type I of order* *m* *and* ${g}_{j}$, $j\in I({x}^{0})$ *are strongly quasiinvex type I of order* *m* *with respect to the same* *η* *and* *ψ* *on S*. *Then* ${x}^{0}$ *is a strict minimizer of order* *m* *with respect to* *ψ* *for* (MOP).

*Proof* Suppose, on the contrary, that ${x}^{0}\in S$ is not a strict minimizer of order *m* with respect to *ψ* for (MOP). Then, for ${\overline{c}}_{i}>0$, $i=1,2,\dots ,p$, there exists some $\overline{x}\in S$ such that

${f}_{i}(\overline{x})<{f}_{i}({x}^{0})+{\overline{c}}_{i}{\parallel \psi (\overline{x},{x}^{0})\parallel }^{m},\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.\phantom{\rule{2em}{0ex}}(3.5)$

Since the functions ${f}_{i}$, $i=1,2,\dots ,p$ are strongly pseudoinvex type I of order *m* with respect to *η* and *ψ* on *S*, therefore, from (3.5), we have

$\mathrm{\nabla}{f}_{i}{({x}^{0})}^{t}\eta (\overline{x},{x}^{0})<0,\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

Further, ${g}_{j}(\overline{x})\le 0={g}_{j}({x}^{0})$ for $j\in I({x}^{0})$. Since the functions ${g}_{j}$, $j\in I({x}^{0})$ are strongly quasiinvex type I of order *m* with respect to the same *η* and *ψ* on *S*, it follows that there exist constants ${\beta}_{j}>0$, $j\in I({x}^{0})$ such that

$\mathrm{\nabla}{g}_{j}{({x}^{0})}^{t}\eta (\overline{x},{x}^{0})+{\beta}_{j}{\parallel \psi (\overline{x},{x}^{0})\parallel }^{m}\le 0,\phantom{\rule{1em}{0ex}}j\in I({x}^{0}).$

Multiplying the first set of inequalities by ${\lambda}_{i}^{0}$, the second by ${\mu}_{j}^{0}$ (noting that ${\mu}_{j}^{0}=0$ for $j\notin I({x}^{0})$ by (3.3)) and adding, we obtain a strictly negative sum.

On using (3.2), we have ${\sum}_{j=1}^{q}{\mu}_{j}^{0}{\beta}_{j}{\parallel \psi (\overline{x},{x}^{0})\parallel}^{m}<0$, which is not possible. □

**Remark 3.1** The result of the above theorem also holds under the conditions that ${f}_{i}$, $i=1,2,\dots ,p$ are strongly invex of order *m* with respect to the same *η* and *ψ* and ${g}_{j}$, $j\in I({x}^{0})$ are strongly quasiinvex type II of order *m* with respect to the same *η* and *ψ* on *S*.

## 4 Duality

In this section, we develop duality relationship between (MOP) and its mixed dual (MD) under the assumption of generalized strong invexity of order *m* with respect to a nonlinear function.

Let $Q=\{1,2,\dots ,q\}$ be partitioned into two disjoint index sets *J* and *K* such that $Q=J\cup K$. The mixed dual for (MOP) is given by

(MD) maximize $f(u)+{\mu}_{J}{g}_{J}(u)e=({f}_{1}(u)+\sum_{j\in J}{\mu}_{j}{g}_{j}(u),\dots ,{f}_{p}(u)+\sum_{j\in J}{\mu}_{j}{g}_{j}(u))$

subject to

$\sum_{i=1}^{p}{\lambda}_{i}\mathrm{\nabla}{f}_{i}(u)+\sum_{j=1}^{q}{\mu}_{j}\mathrm{\nabla}{g}_{j}(u)=0,\phantom{\rule{2em}{0ex}}(4.1)$

$\sum_{j\in K}{\mu}_{j}{g}_{j}(u)\ge 0,\phantom{\rule{2em}{0ex}}(4.2)$

${\lambda}_{i}\ge 0,\phantom{\rule{0.5em}{0ex}}{\lambda}^{t}e=1,\phantom{\rule{0.5em}{0ex}}{\mu}_{j}\ge 0,\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p,\phantom{\rule{0.5em}{0ex}}j=1,2,\dots ,q,\phantom{\rule{2em}{0ex}}(4.3)$

where $e={(1,\dots ,1)}^{t}\in {R}^{p}$ and $u\in X$.

**Theorem 4.1** (Weak duality)

*Let* *x* *and* $(u,\lambda ,\mu )$ *be feasible for* (MOP) *and* (MD) *respectively*. *Suppose* ${f}_{i}+{\mu}_{J}{g}_{J}$, $i=1,2,\dots ,p$ *are strongly pseudoinvex type I of order* *m* *and* ${\mu}_{j}{g}_{j}$, $j\in K$ *is strongly quasiinvex type II of order* *m* *with respect to the same* *η* *and* *ψ*. *Then there exists* $c\in int{R}_{+}^{p}$ *such that*

$f(x)\nless f(u)+{\mu}_{J}{g}_{J}(u)e-c{\parallel \psi (x,u)\parallel }^{m},$

*where* $e={(1,\dots ,1)}^{t}\in {R}^{p}$.

*Proof* Suppose, on the contrary, that for every $c\in int{R}_{+}^{p}$, we have

$f(x)<f(u)+{\mu}_{J}{g}_{J}(u)e-c{\parallel \psi (x,u)\parallel }^{m}.$

Since *x* is feasible for (MOP) and ${\mu}_{j}\ge 0$, therefore for $i=1,2,\dots ,p$, we have

${f}_{i}(x)+{\mu}_{J}{g}_{J}(x)\le {f}_{i}(x)<{f}_{i}(u)+{\mu}_{J}{g}_{J}(u)+{c}_{i}{\parallel \psi (x,u)\parallel }^{m}.$

By strong pseudoinvexity type I of order *m* for ${f}_{i}+{\mu}_{J}{g}_{J}$, $i=1,2,\dots ,p$, with respect to *η* and *ψ*, we have

$\mathrm{\nabla}({f}_{i}+{\mu}_{J}{g}_{J}){(u)}^{t}\eta (x,u)<0,\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

Further, the feasibility of *x* and $(u,\lambda ,\mu )$ yields ${\mu}_{K}{g}_{K}(x)\le 0\le {\mu}_{K}{g}_{K}(u)$. Since ${\mu}_{K}{g}_{K}$ is strongly quasiinvex type II of order *m* with respect to *η* and *ψ*, therefore

$\mathrm{\nabla}({\mu}_{K}{g}_{K}){(u)}^{t}\eta (x,u)\le 0.$

Multiplying the strict inequalities by ${\lambda}_{i}$, using ${\lambda}^{t}e=1$, and adding the last inequality, we obtain ${(\sum_{i=1}^{p}{\lambda}_{i}\mathrm{\nabla}{f}_{i}(u)+\sum_{j=1}^{q}{\mu}_{j}\mathrm{\nabla}{g}_{j}(u))}^{t}\eta (x,u)<0$.

This contradicts (4.1). □

**Theorem 4.2** (Strong duality)

*Suppose* ${x}^{0}$ *is a strict minimizer of order* *m* *with respect to a nonlinear function* $\psi :S\times S\to {R}^{n}$ *for* (MOP). *Assume that* (SCQ) *holds at* ${x}^{0}$, *then there exist* ${\lambda}_{i}^{0}\ge 0$, $i=1,2,\dots ,p$ *and* ${\mu}_{j}^{0}\ge 0$, $j=1,2,\dots ,q$ *such that* $({x}^{0},{\lambda}^{0},{\mu}^{0})$ *is feasible for* (MD). *Further*, *if the conditions of Theorem* 4.1 *hold*, *then* $({x}^{0},{\lambda}^{0},{\mu}^{0})$ *is a strict maximizer of order* *m* *for* (MD).

*Proof* The proof follows from Theorem 3.3 and Theorem 4.1. □

## 5 Partial vector Lagrangian and mixed saddle point

The saddle point of the Lagrangian is always a global minimizer for the inequality constrained minimization problem. Due to the significance of this result in economics and optimization theory, several researchers [1, 2, 11] have obtained the equivalence between the saddle point and optimal solutions of an optimization problem under various conditions on the functions involved. In this section, we define higher-order mixed saddle points with respect to a nonlinear function $\psi :S\times S\to {R}^{n}$ for a partial vector-valued Lagrangian of a multiobjective optimization problem. The equivalence of these saddle points and the higher-order strict minimizers with respect to the same function *ψ* for (MOP) is established under generalized higher-order strong invexity conditions on the functions involved.

Let $Q=\{1,2,\dots ,q\}$, ${J}_{1}\subseteq Q$ and ${J}_{2}=Q\mathrm{\setminus}{J}_{1}$, and let $|{J}_{1}|$ denote the cardinality of the index set ${J}_{1}$.

**Definition 5.1** The vector-valued partial Lagrangian function $L:S\times {R}_{+}^{|{J}_{1}|}\to {R}^{p}$ for (MOP) is defined as

$L(x,{\mu}_{{J}_{1}})=({L}_{1}(x,{\mu}_{{J}_{1}}),\dots ,{L}_{p}(x,{\mu}_{{J}_{1}})),$

where ${L}_{i}(x,{\mu}_{{J}_{1}})={f}_{i}(x)+{\sum}_{j\in {J}_{1}}{\mu}_{j}{g}_{j}(x)$, $i=1,2,\dots ,p$, $x\in S$, ${\mu}_{{J}_{1}}\in {R}_{+}^{|{J}_{1}|}$.
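As a small illustration of Definition 5.1 (the sample objectives, constraints and index split below are hypothetical, not taken from the paper), every component ${L}_{i}$ carries the same ${J}_{1}$-penalty term:

```python
def partial_lagrangian(fs, gs, J1, x, mu_J1):
    """Return (L_1, ..., L_p) with L_i = f_i(x) + sum_{j in J1} mu_j g_j(x)."""
    penalty = sum(mu * gs[j](x) for mu, j in zip(mu_J1, J1))
    return tuple(fi(x) + penalty for fi in fs)

# Hypothetical two-objective problem with J1 = {0}, J2 = {1}.
fs = (lambda x: x[0], lambda x: x[0] ** 2 + x[1] ** 2)
gs = (lambda x: -x[0], lambda x: x[0] + x[1] - 1.0)
L = partial_lagrangian(fs, gs, J1=(0,), x=(0.5, 0.25), mu_J1=(2.0,))
# The shared penalty is 2.0 * g_0(x) = 2.0 * (-0.5) = -1.0.
assert L == (0.5 - 1.0, 0.3125 - 1.0)
```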

We now introduce the notion of mixed saddle points of order *m* with respect to a nonlinear function for (MOP) as follows.

**Definition 5.2** A vector $({x}^{0},{\mu}_{{J}_{1}}^{0})\in S\times {R}_{+}^{|{J}_{1}|}$ is said to be a mixed saddle point of order *m* with respect to a nonlinear function *ψ* for the partial vector-valued Lagrangian *L* for (MOP) if there exists $c\in int{R}_{+}^{p}$ such that

$L({x}^{0},{\mu}_{{J}_{1}}^{0})\nless L({x}^{0},{\mu}_{{J}_{1}})\phantom{\rule{1em}{0ex}}\text{for all } {\mu}_{{J}_{1}}\in {R}_{+}^{|{J}_{1}|},\phantom{\rule{2em}{0ex}}(5.1)$

$L(x,{\mu}_{{J}_{1}}^{0})-c{\parallel \psi (x,{x}^{0})\parallel }^{m}\nless L({x}^{0},{\mu}_{{J}_{1}}^{0})\phantom{\rule{1em}{0ex}}\text{for all } x\in S.\phantom{\rule{2em}{0ex}}(5.2)$

**Theorem 5.1** *Suppose that* ${x}^{0}$ *is a strict minimizer of order* *m* *with respect to a nonlinear function* *ψ* *for* (MOP) *and* (SCQ) *holds at* ${x}^{0}$. *Further*, *if* ${f}_{i}+{\sum}_{j\in {J}_{1}}{\mu}_{j}^{0}{g}_{j}$, $i=1,2,\dots ,p$ *are strongly pseudoinvex type I of order* *m* *and* ${\mu}_{j}^{0}{g}_{j}$, $j\in {J}_{2}$ *is strongly quasiinvex type I of order* *m* *with respect to* *η* *and* *ψ* *on S*, *then* $({x}^{0},{\mu}_{{J}_{1}}^{0})$ *is a mixed saddle point of order* *m* *with respect to* *ψ* *for the partial Lagrangian*.

*Proof* Suppose ${x}^{0}$ is a strict minimizer of order *m* with respect to *ψ* for (MOP) and the constraint qualification holds at ${x}^{0}$. Therefore, by Theorem 3.3, there exist ${\lambda}_{i}^{0}\ge 0$, $i=1,2,\dots ,p$ and ${\mu}_{j}^{0}\ge 0$, $j=1,2,\dots ,q$ such that conditions (3.2)-(3.4) hold at ${x}^{0}$.

Suppose, on the contrary, that $({x}^{0},{\mu}_{{J}_{1}}^{0})$ is not a mixed saddle point of order *m* with respect to *ψ* for the partial Lagrangian *L* for (MOP). Then, for all $c\in int{R}_{+}^{p}$, there exists some $\overline{x}\in S$ such that

${L}_{i}(\overline{x},{\mu}_{{J}_{1}}^{0})-{c}_{i}{\parallel \psi (\overline{x},{x}^{0})\parallel }^{m}<{L}_{i}({x}^{0},{\mu}_{{J}_{1}}^{0}),\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

Since ${f}_{i}+{\sum}_{j\in {J}_{1}}{\mu}_{j}^{0}{g}_{j}$, $i=1,2,\dots ,p$ are strongly pseudoinvex type I of order *m* with respect to *η* and *ψ*, it follows from the above inequalities that

$\mathrm{\nabla}({f}_{i}+\sum_{j\in {J}_{1}}{\mu}_{j}^{0}{g}_{j}){({x}^{0})}^{t}\eta (\overline{x},{x}^{0})<0,\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

Further, ${\mu}_{j}^{0}{g}_{j}(\overline{x})\le 0={\mu}_{j}^{0}{g}_{j}({x}^{0})$ for $j\in {J}_{2}$ by (3.3). Since ${\mu}_{j}^{0}{g}_{j}$, $j\in {J}_{2}$ are strongly quasiinvex type I of order *m* on *S* with respect to *η* and *ψ*, there exist constants ${\beta}_{j}>0$, $j\in {J}_{2}$ such that

$\mathrm{\nabla}({\mu}_{j}^{0}{g}_{j}){({x}^{0})}^{t}\eta (\overline{x},{x}^{0})+{\beta}_{j}{\parallel \psi (\overline{x},{x}^{0})\parallel }^{m}\le 0,\phantom{\rule{1em}{0ex}}j\in {J}_{2}.$

Multiplying the first set of inequalities by ${\lambda}_{i}^{0}$, adding and using (3.2) and (3.4), we obtain a contradiction. Hence (5.2) holds.

For any ${\mu}_{{J}_{1}}\in {R}_{+}^{|{J}_{1}|}$, using ${g}_{j}({x}^{0})\le 0$ and (3.3), we have ${L}_{i}({x}^{0},{\mu}_{{J}_{1}})-{L}_{i}({x}^{0},{\mu}_{{J}_{1}}^{0})={\sum}_{j\in {J}_{1}}{\mu}_{j}{g}_{j}({x}^{0})\le 0$, $i=1,2,\dots ,p$. This implies $L({x}^{0},{\mu}_{{J}_{1}}^{0})\nless L({x}^{0},{\mu}_{{J}_{1}})$, so (5.1) holds.

Thus, $({x}^{0},{\mu}_{{J}_{1}}^{0})$ is a mixed saddle point of order *m* with respect to a nonlinear function *ψ* for the partial vector Lagrangian. □

**Theorem 5.2** *If* $({x}^{0},{\mu}_{{J}_{1}}^{0})$ *is a mixed saddle point of order* *m* *with respect to a nonlinear function* *ψ* *for the partial vector Lagrangian*, *then* ${x}^{0}$ *is a strict minimizer of order* *m* *with respect to the same* *ψ* *for* (MOP).

*Proof* From the hypothesis $L({x}^{0},{\mu}_{{J}_{1}}^{0})\nless L({x}^{0},{\mu}_{{J}_{1}})$, taking ${\mu}_{{J}_{1}}=0$ and using ${g}_{{J}_{1}}({x}^{0})\le 0$, we have ${\sum}_{j\in {J}_{1}}{\mu}_{j}^{0}{g}_{j}({x}^{0})=0$.

Suppose, on the contrary, that ${x}^{0}$ is not a strict minimizer of order *m* with respect to *ψ* for (MOP). Then, for every $c\in int{R}_{+}^{p}$, there exists an $\overline{x}\in S$ such that

${f}_{i}(\overline{x})<{f}_{i}({x}^{0})+{c}_{i}{\parallel \psi (\overline{x},{x}^{0})\parallel }^{m},\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

For any ${\mu}_{{J}_{1}}^{0}\in {R}_{+}^{|{J}_{1}|}$ and $\overline{x}\in S$, we have ${\mu}_{{J}_{1}}^{0}{g}_{{J}_{1}}(\overline{x})\le 0$.

Therefore, $L(\overline{x},{\mu}_{{J}_{1}}^{0})-c{\parallel \psi (\overline{x},{x}^{0})\parallel}^{m}<L({x}^{0},{\mu}_{{J}_{1}}^{0})$, which contradicts (5.2). □

## 6 An equivalent vector optimization problem

In this section, we introduce an equivalent vector optimization problem (EVP) corresponding to (MOP) and prove that the problem of finding strict minimizers of order *m* with respect to a nonlinear function $\psi :S\times S\to {R}^{n}$ for (MOP) reduces simply to the problem of finding strict minimizers for (EVP).

In relation to (MOP), consider the problem

(EVP) minimize $({f}_{1}({x}^{0})+\mathrm{\nabla}{f}_{1}({x}^{0})\eta (x,{x}^{0}),\dots ,{f}_{p}({x}^{0})+\mathrm{\nabla}{f}_{p}({x}^{0})\eta (x,{x}^{0}))$ subject to ${g}_{j}({x}^{0})+\mathrm{\nabla}{g}_{j}({x}^{0})\eta (x,{x}^{0})\le 0$, $j=1,2,\dots ,q$, $x\in X$,

where ${f}_{i}$, $i=1,2,\dots ,p$ and ${g}_{j}$, $j=1,2,\dots ,q$ are defined as in (MOP) and $\eta :S\times S\to {R}^{n}$ satisfies Assumption *C* [12].

Let $D({x}^{0})=\{x\in X:{g}_{j}({x}^{0})+\mathrm{\nabla}{g}_{j}({x}^{0})\eta (x,{x}^{0})\le 0,j=1,2,\dots ,q\}$ denote the set of all feasible solutions of (EVP).

**Theorem 6.1** *Let* ${x}^{0}$ *be a strict minimizer of order* *m* *with respect to a nonlinear function* $\psi :S\times S\to {R}^{n}$ *for* (MOP). *Assume that Slater’s constraint qualification* (SCQ) *holds at* ${x}^{0}$, *then* ${x}^{0}$ *is a strict minimizer in the equivalent vector optimization problem* (EVP).

*Proof* Assume ${x}^{0}$ is a strict minimizer of order *m* with respect to a nonlinear function $\psi :S\times S\to {R}^{n}$ for (MOP) and (SCQ) is satisfied at ${x}^{0}$; therefore, the necessary optimality conditions (3.2)-(3.4) hold. Suppose ${x}^{0}$ is not a strict minimizer in (EVP). Then there exists $\overline{x}$ feasible for (EVP) such that for $i=1,2,\dots ,p$,

${f}_{i}({x}^{0})+\mathrm{\nabla}{f}_{i}({x}^{0})\eta (\overline{x},{x}^{0})<{f}_{i}({x}^{0})+\mathrm{\nabla}{f}_{i}({x}^{0})\eta ({x}^{0},{x}^{0}).\phantom{\rule{2em}{0ex}}(6.1)$

Since *η* satisfies Assumption *C*, therefore $\eta ({x}^{0},{x}^{0})=0$ for ${x}^{0}\in S$, and the above set of inequalities reduces to

$\mathrm{\nabla}{f}_{i}({x}^{0})\eta (\overline{x},{x}^{0})<0,\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.\phantom{\rule{2em}{0ex}}(6.2)$

Further, since $\overline{x}$ is feasible for (EVP) and ${\mu}_{j}^{0}{g}_{j}({x}^{0})=0$ by (3.3), we have

$\sum_{j=1}^{q}{\mu}_{j}^{0}\mathrm{\nabla}{g}_{j}({x}^{0})\eta (\overline{x},{x}^{0})\le 0.\phantom{\rule{2em}{0ex}}(6.3)$

Adding (6.2) and (6.3), we get a contradiction to (3.2). □

**Theorem 6.2** *Let* ${x}^{0}$ *be a strict minimizer in the equivalent vector optimization problem* (EVP). *Further assume that* ${f}_{i}$, $i=1,2,\dots ,p$ *are strongly pseudoinvex type I of order* *m* *and* ${g}_{j}$, $j=1,2,\dots ,q$ *are strongly invex of order* *m* *with respect to the same* *η* *and* *ψ*, *then* ${x}^{0}$ *is a strict minimizer of order* *m* *in the original vector optimization problem* (MOP).

*Proof* Clearly, ${x}^{0}$ is feasible for (MOP). First, we show that any feasible point in (MOP) is also a feasible point in (EVP), that is, $S\subseteq D({x}^{0})$. Let $x\in S$. Since ${g}_{j}$, $j=1,2,\dots ,q$ are strongly invex of order *m* with respect to *η* and *ψ* on *X*, for some ${k}_{j}>0$, $j=1,2,\dots ,q$, we have

${g}_{j}({x}^{0})+\mathrm{\nabla}{g}_{j}({x}^{0})\eta (x,{x}^{0})\le {g}_{j}(x)-{k}_{j}{\parallel \psi (x,{x}^{0})\parallel }^{m}\le 0,\phantom{\rule{1em}{0ex}}j=1,2,\dots ,q,$

that is, $x\in D({x}^{0})$. Hence, $S\subseteq D({x}^{0})$.

Suppose, on the contrary, that ${x}^{0}$ is not a strict minimizer of order *m* in (MOP). Then, for ${\beta}_{i}>0$, $i=1,2,\dots ,p$, there exists some $\overline{x}\in S$ such that

${f}_{i}(\overline{x})<{f}_{i}({x}^{0})+{\beta}_{i}{\parallel \psi (\overline{x},{x}^{0})\parallel }^{m},\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p.$

Since ${f}_{i}$, $i=1,2,\dots ,p$ are strongly pseudoinvex type I of order *m* with respect to *η* and *ψ* on *S*, we have

${f}_{i}({x}^{0})+\mathrm{\nabla}{f}_{i}({x}^{0})\eta (\overline{x},{x}^{0})<{f}_{i}({x}^{0}),\phantom{\rule{1em}{0ex}}i=1,2,\dots ,p,$

which contradicts that ${x}^{0}$ is a strict minimizer for (EVP). □

## Declarations

### Acknowledgements

The authors would like to thank Prof. Davinder Bhatia (Retd.) and Dr. Pankaj Gupta, Department of Operational Research, University of Delhi, for their keen interest and continuous help throughout the preparation of this article.


## References

- Weir T, Mond B: Pre-invex functions in multiple objective optimization. *J. Math. Anal. Appl.* 1988, 136: 29–38. doi:10.1016/0022-247X(88)90113-8
- Sawaragi Y, Nakayama H, Tanino T: *Theory of Multiobjective Optimization*. Academic Press, Orlando; 1985.
- Cromme L: Strong uniqueness: a far reaching criterion for the convergence of iterative procedures. *Numer. Math.* 1978, 29: 179–193. doi:10.1007/BF01390337
- Studniarski M: Sufficient conditions for the stability of local minimum points in nonsmooth optimization. *Optimization* 1989, 20: 27–35. doi:10.1080/02331938908843409
- Auslender A: Stability in mathematical programming with non-differentiable data. *SIAM J. Control Optim.* 1984, 22: 239–254. doi:10.1137/0322017
- Ward DE: Characterization of strict local minima and necessary conditions for weak sharp minima. *J. Optim. Theory Appl.* 1994, 80: 551–571. doi:10.1007/BF02207780
- Jimenez B: Strict efficiency in vector optimization. *J. Math. Anal. Appl.* 2002, 265: 264–284. doi:10.1006/jmaa.2001.7588
- Bhatia G: Optimality and mixed saddle point criteria in multiobjective optimization. *J. Math. Anal. Appl.* 2008, 342(1): 135–145. doi:10.1016/j.jmaa.2007.11.042
- Sahay RR, Bhatia G: Characterizations for the set of higher order global strict minimizers. *J. Nonlinear Convex Anal.* (to appear)
- Lin GH, Fukushima M: Some exact penalty results for nonlinear programs and mathematical programs with equilibrium constraints. *J. Optim. Theory Appl.* 2003, 118: 67–80. doi:10.1023/A:1024787424532
- Mangasarian OL: *Nonlinear Programming*. Classics Appl. Math. 10. SIAM, Philadelphia; 1994. (corrected reprint of the 1969 original)
- Mohan SR, Neogy SK: On invex sets and preinvex functions. *J. Math. Anal. Appl.* 1995, 189: 901–908. doi:10.1006/jmaa.1995.1057

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.