
Optimality and duality for nonsmooth multiobjective optimization problems

Abstract

In this paper, we consider nonsmooth multiobjective programming problems involving support functions, with inequality and equality constraints. Necessary and sufficient optimality conditions are obtained by using higher-order strong convexity for Lipschitz functions. A Mond-Weir type dual problem is formulated, and duality theorems for a strict minimizer of order m are given.

1 Introduction

Nonlinear analysis is a vital area of optimization theory, mathematical physics, economics, engineering and functional analysis. Moreover, nonsmooth problems occur naturally and frequently in optimization.

In 1970, Rockafellar observed in his book [1] that the functions arising in practical applications of applied mathematics are not necessarily differentiable, so dealing with nondifferentiable mathematical programming problems became very important. Vial [2] studied strongly and weakly convex sets and ρ-convex functions.

Auslender [3] introduced the notion of a lower second-order directional derivative and obtained necessary and sufficient conditions for a strict local minimizer. Based on Auslender’s results, Studniarski [4] proved necessary and sufficient conditions for problems in which the feasible region is an arbitrary set. Moreover, Ward [5] derived necessary and sufficient conditions for a strict minimizer of order m in nondifferentiable scalar programs. Jimenez [6] introduced the notion of super-strict efficiency for vector problems and gave necessary conditions for strict minimality. Jimenez and Novo [7, 8] obtained first- and second-order optimality conditions for vector optimization problems. Bhatia [9] introduced higher-order strong convexity for Lipschitz functions and established optimality conditions based on the new concept of a strict minimizer of higher order for a multiobjective optimization problem.

Kim and Bae [10] formulated nondifferentiable multiobjective programs with support functions, and Bae et al. [11] established duality theorems for nondifferentiable multiobjective programming problems under generalized convexity assumptions. Kim and Lee [12] considered nonsmooth multiobjective programming problems involving locally Lipschitz functions and support functions; they obtained Karush-Kuhn-Tucker type optimality conditions and established duality theorems for (weak) Pareto-optimal solutions. Recently, Bae and Kim [13] established optimality conditions and duality theorems for a nondifferentiable multiobjective programming problem with support functions.

In this paper, we consider a nonsmooth multiobjective programming problem with inequality and equality constraints involving support functions. In Section 2, we introduce the concept of a strict minimizer of order $m$ and higher-order strong convexity for this problem. In Section 3, necessary and sufficient optimality theorems are established for a strict minimizer of order $m$ under generalized strong convexity assumptions. In Section 4, we formulate a Mond-Weir type dual problem and obtain weak and strong duality theorems.

2 Preliminaries

Let $x, y \in \mathbb{R}^n$. The following notation will be used for vectors in $\mathbb{R}^n$:

$$
\begin{aligned}
x < y \quad &\Longleftrightarrow \quad x_i < y_i, \ i = 1, 2, \ldots, n;\\
x \leqq y \quad &\Longleftrightarrow \quad x_i \leqq y_i, \ i = 1, 2, \ldots, n;\\
x \leq y \quad &\Longleftrightarrow \quad x_i \leqq y_i, \ i = 1, 2, \ldots, n, \ \text{but } x \neq y;\\
x \nless y \quad &\text{is the negation of } x < y;\\
x \nleq y \quad &\text{is the negation of } x \leq y.
\end{aligned}
$$

For $x, u \in \mathbb{R}$, $x \leqq u$ and $x < u$ have the usual meaning. Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space, and let $\mathbb{R}^n_+$ be its nonnegative orthant.

Definition 2.1 [13]

Let $D$ be a compact convex set in $\mathbb{R}^n$. The support function $s(\cdot \mid D)$ is defined by

$$
s(x \mid D) := \max\{x^T y : y \in D\}.
$$

The support function $s(\cdot \mid D)$ has a subdifferential at every point. The subdifferential of $s(\cdot \mid D)$ at $x$ is given by

$$
\partial s(x \mid D) := \bigl\{z \in D : z^T x = s(x \mid D)\bigr\}.
$$

The support function $s(\cdot \mid D)$ is convex and everywhere finite, that is, for each $x$ there exists $z \in D$ such that

$$
s(y \mid D) \geqq s(x \mid D) + z^T (y - x) \quad \text{for all } y \in \mathbb{R}^n.
$$

Equivalently,

$$
z^T x = s(x \mid D).
$$
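As a simple illustration (not part of the original text), let $D$ be the closed Euclidean unit ball $\{y \in \mathbb{R}^n : \|y\| \leq 1\}$. Then
$$
s(x \mid D) = \max_{\|y\| \leq 1} x^T y = \|x\|, \qquad \partial s(x \mid D) = \begin{cases} \{x / \|x\|\}, & x \neq 0, \\ D, & x = 0, \end{cases}
$$
since, by the equality case of the Cauchy-Schwarz inequality, a point $z \in D$ satisfies $z^T x = \|x\|$ exactly when $z = x / \|x\|$ (for $x \neq 0$).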

We consider the following multiobjective programming problem.

$$
\text{(MOP)} \qquad
\begin{aligned}
\text{Minimize} \quad & f(x) + s(x \mid D) = \bigl(f_1(x) + s(x \mid D_1), \ldots, f_p(x) + s(x \mid D_p)\bigr)\\
\text{subject to} \quad & g(x) \leqq 0, \quad h(x) = 0, \quad x \in X,
\end{aligned}
$$

where $f : X \to \mathbb{R}^p$, $g : X \to \mathbb{R}^q$ and $h : X \to \mathbb{R}^r$ are locally Lipschitz functions, and $X$ is a convex subset of $\mathbb{R}^n$. For each $i \in P = \{1, 2, \ldots, p\}$, $D_i$ is a compact convex subset of $\mathbb{R}^n$.

Further, let $S := \{x \in X \mid g_j(x) \leqq 0,\ j = 1, \ldots, q,\ h_l(x) = 0,\ l = 1, \ldots, r\}$ be the feasible set of (MOP), let $B(x^0, \epsilon) = \{x \in \mathbb{R}^n \mid \|x - x^0\| < \epsilon\}$ be the open ball with center $x^0$ and radius $\epsilon$, and let $I(x^0) := \{j \in \{1, \ldots, q\} \mid g_j(x^0) = 0\}$ be the index set of active constraints at $x^0$.

We introduce the following definitions due to Jimenez [6].

Definition 2.2 A point $x^0 \in X$ is called a strict local minimizer for (MOP) if there exists $\epsilon > 0$ such that

$$
f(x) + s(x \mid D) \nless f(x^0) + s(x^0 \mid D), \quad \forall x \in B(x^0, \epsilon) \cap X.
$$

Definition 2.3 Let $m \geq 1$ be an integer. A point $x^0 \in X$ is called a strict local minimizer of order $m$ for (MOP) if there exist $\epsilon > 0$ and $c \in \operatorname{int} \mathbb{R}^p_+$ such that

$$
f(x) + s(x \mid D) \nless f(x^0) + s(x^0 \mid D) + \|x - x^0\|^m c, \quad \forall x \in B(x^0, \epsilon) \cap X.
$$

Definition 2.4 Let $m \geq 1$ be an integer. A point $x^0 \in X$ is called a strict minimizer of order $m$ for (MOP) if there exists $c \in \operatorname{int} \mathbb{R}^p_+$ such that

$$
f(x) + s(x \mid D) \nless f(x^0) + s(x^0 \mid D) + \|x - x^0\|^m c, \quad \forall x \in X.
$$
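To illustrate the role of the order $m$ (an example not in the original text), take $n = p = 1$, $D_1 = \{0\}$ and $f(x) = |x|^m$ on $X = \mathbb{R}$. Then $x^0 = 0$ is a strict minimizer of order $m$ (choose $c = 1$, since $|x|^m \nless 0 + \|x - 0\|^m \cdot 1$), but it is not a strict minimizer of any order $k < m$: for every $c > 0$ one has $|x|^m < c\,|x|^k$ for all sufficiently small $x \neq 0$.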

Definition 2.5 [14]

Suppose that $f : X \to \mathbb{R}$ is Lipschitz on $X$. Clarke's generalized directional derivative of $f$ at $x \in X$ in the direction $d \in \mathbb{R}^n$, denoted by $f^0(x; d)$, is defined as

$$
f^0(x; d) = \limsup_{\substack{y \to x \\ t \downarrow 0}} \frac{f(y + t d) - f(y)}{t}.
$$

Definition 2.6 [14]

Clarke's generalized gradient of $f$ at $x \in X$, denoted by $\partial f(x)$, is defined as

$$
\partial f(x) = \bigl\{\xi \in \mathbb{R}^n : f^0(x; d) \geq \langle \xi, d \rangle \text{ for all } d \in \mathbb{R}^n\bigr\}.
$$
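For instance (an illustrative remark, not in the original), for $f(x) = |x|$ on $\mathbb{R}$ one has, at $x = 0$,
$$
f^0(0; d) = \limsup_{y \to 0,\ t \downarrow 0} \frac{|y + t d| - |y|}{t} = |d|, \qquad \partial f(0) = \{\xi \in \mathbb{R} : |d| \geq \xi d \text{ for all } d \in \mathbb{R}\} = [-1, 1],
$$
which agrees with the subdifferential of convex analysis, as it does for every convex function.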

Definition 2.7 For a nonempty subset $X$ of $\mathbb{R}^n$, we denote by $X^*$ the dual cone of $X$, defined by

$$
X^* = \{u \in \mathbb{R}^n \mid u^T x \leq 0, \ \forall x \in X\}.
$$

Further, for $x^0 \in X$, $N_X(x^0)$ denotes the normal cone to $X$ at $x^0$, defined by

$$
N_X(x^0) = \{d \in \mathbb{R}^n \mid \langle d, x - x^0 \rangle \leq 0, \ \forall x \in X\}.
$$

It is clear that $(X - x^0)^* = N_X(x^0)$.

We recall the notion of strong convexity of order m introduced by Lin and Fukushima in [15].

Definition 2.8 A function $f : X \to \mathbb{R}$ is said to be strongly convex of order $m$ on a convex set $X$ if there exists $c > 0$ such that for all $x_1, x_2 \in X$ and $t \in [0, 1]$,

$$
f\bigl(t x_1 + (1 - t) x_2\bigr) \leq t f(x_1) + (1 - t) f(x_2) - c\, t(1 - t) \|x_1 - x_2\|^m.
$$
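As a standard illustration (not part of the original text), $f(x) = \|x\|^2$ is strongly convex of order $2$ on $\mathbb{R}^n$ with $c = 1$, since for all $x_1, x_2 \in \mathbb{R}^n$ and $t \in [0, 1]$,
$$
\|t x_1 + (1 - t) x_2\|^2 = t \|x_1\|^2 + (1 - t) \|x_2\|^2 - t(1 - t) \|x_1 - x_2\|^2,
$$
so the defining inequality holds with equality; for $m = 2$ the notion reduces to the usual strong convexity.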

Proposition 2.1 [15]

If $f_i$, $i = 1, \ldots, p$, are strongly convex of order $m$ on a convex set $X$, then $\sum_{i=1}^{p} t_i f_i$ and $\max_{1 \leq i \leq p} f_i$ are also strongly convex of order $m$ on $X$, where $t_i \geq 0$, $i = 1, \ldots, p$.

Definition 2.9 A locally Lipschitz function $f$ is said to be strongly quasiconvex of order $m$ on $X$ if there exists a constant $c > 0$ such that for $x_1, x_2 \in X$,

$$
f(x_1) \leq f(x_2) \quad \Longrightarrow \quad \langle \xi, x_1 - x_2 \rangle + \|x_1 - x_2\|^m c \leq 0, \quad \forall \xi \in \partial f(x_2).
$$
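The following elementary observation, added here as a reading aid and used implicitly in Remark 3.1 and Lemma 4.1 below, connects the two notions: if $f$ satisfies the subgradient inequality employed in the proof of Theorem 3.3, namely $f(x_1) - f(x_2) \geq \langle \xi, x_1 - x_2 \rangle + \|x_1 - x_2\|^m c$ for all $\xi \in \partial f(x_2)$, then $f$ is strongly quasiconvex of order $m$, because $f(x_1) \leq f(x_2)$ gives
$$
\langle \xi, x_1 - x_2 \rangle + \|x_1 - x_2\|^m c \leq f(x_1) - f(x_2) \leq 0, \quad \forall \xi \in \partial f(x_2).
$$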

For each $k \in \{1, \ldots, p\}$ and $x^0 \in X$, we consider the following scalarized problem of (MOP), following the approach of [16].

$$
(\mathrm{P}_k(x^0)) \qquad
\begin{aligned}
\text{Minimize} \quad & f_k(x) + s(x \mid D_k)\\
\text{subject to} \quad & f_i(x) + s(x \mid D_i) \leq f_i(x^0) + s(x^0 \mid D_i), \quad i \in P, \ i \neq k,\\
& g_j(x) \leq 0, \quad j = 1, \ldots, q,\\
& h_l(x) = 0, \quad l = 1, \ldots, r.
\end{aligned}
$$

The following definition is adapted from [17].

Definition 2.10 Let $x^0$ be a feasible solution of (MOP). We say that the basic regularity condition (BRC) is satisfied at $x^0$ if, for each $k \in P$, there exist no scalars $\lambda_i^0 \geq 0$, $w_i \in D_i$, $i = 1, \ldots, p$, $i \neq k$, $\mu_j^0 \geq 0$, $j \in I(x^0)$, $\mu_j^0 = 0$, $j \notin I(x^0)$, and $\nu_l^0$, $l = 1, \ldots, r$, not all zero, such that

$$
0 \in \sum_{i=1, i \neq k}^{p} \lambda_i^0 \bigl(\partial f_i(x^0) + w_i\bigr) + \sum_{j=1}^{q} \mu_j^0 \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 \,\partial h_l(x^0) + N_X(x^0).
$$

3 Optimality conditions

In this section, we establish Fritz John necessary optimality conditions, Karush-Kuhn-Tucker necessary optimality conditions, and Karush-Kuhn-Tucker sufficient optimality conditions for a strict minimizer of order $m$ of (MOP).

Theorem 3.1 (Fritz John necessary optimality conditions)

Suppose that $x^0$ is a strict minimizer of order $m$ for (MOP) and that $f_i$, $i = 1, \ldots, p$, $g_j$, $j = 1, \ldots, q$, and $h_l$, $l = 1, \ldots, r$, are locally Lipschitz at $x^0$. Then there exist $\lambda_i^0 \geq 0$, $w_i^0 \in D_i$, $i = 1, \ldots, p$, $\mu_j^0 \geq 0$, $j = 1, \ldots, q$, and $\nu_l^0$, $l = 1, \ldots, r$, not all zero, such that

$$
\begin{aligned}
& 0 \in \sum_{i=1}^{p} \lambda_i^0 \bigl(\partial f_i(x^0) + w_i^0\bigr) + \sum_{j=1}^{q} \mu_j^0 \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 \,\partial h_l(x^0) + N_X(x^0),\\
& \langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \ldots, p,\\
& \mu_j^0 g_j(x^0) = 0, \quad j = 1, \ldots, q.
\end{aligned}
$$

Proof Since $x^0$ is a strict minimizer of order $m$ for (MOP), it is, in particular, a strict minimizer for (MOP). It can be shown that $x^0$ solves the following problem:

$$
\text{minimize } F(x) \quad \text{subject to } g(x) \leqq 0, \ h(x) = 0,
$$

where

$$
F(x) = \max\bigl\{\bigl(f_1(x) + s(x \mid D_1)\bigr) - \bigl(f_1(x^0) + s(x^0 \mid D_1)\bigr), \ldots, \bigl(f_p(x) + s(x \mid D_p)\bigr) - \bigl(f_p(x^0) + s(x^0 \mid D_p)\bigr)\bigr\}.
$$

If this were not so, then there would exist $x^1 \in \mathbb{R}^n$ such that $F(x^1) < F(x^0)$, $g(x^1) \leqq 0$, $h(x^1) = 0$. Since $F(x^0) = 0$, we would have $F(x^1) < 0$, which contradicts the fact that $x^0$ is a strict minimizer for (MOP). Since $x^0$ minimizes $F(x)$, by Theorem 6.1.1 in Clarke [14] there exists $(\lambda, \mu, \nu) \in \mathbb{R}^p \times \mathbb{R}^q \times \mathbb{R}^r$, not all zero, such that

$$
0 \in \sum_{i=1}^{p} \lambda_i \,\partial F(x^0) + \sum_{j \in I(x^0)} \mu_j \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l \,\partial h_l(x^0) + N_X(x^0).
$$

Setting $\mu_j = 0$ for $j \notin I(x^0)$, we have

$$
0 \in \sum_{i=1}^{p} \lambda_i \,\partial F(x^0) + \sum_{j=1}^{q} \mu_j \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l \,\partial h_l(x^0) + N_X(x^0).
$$

Since $F(x) = \max_{1 \leq i \leq p}\bigl\{\bigl(f_i(x) + s(x \mid D_i)\bigr) - \bigl(f_i(x^0) + s(x^0 \mid D_i)\bigr)\bigr\}$ for any $x \in X$ and $s(x^0 \mid D_i) = (x^0)^T w_i$ for some $w_i \in D_i$, $i = 1, \ldots, p$, we have

$$
\partial F(x^0) \subseteq \operatorname{co}\bigl\{\partial\bigl(f_i(x^0) + s(x^0 \mid D_i)\bigr) : i = 1, \ldots, p\bigr\} = \operatorname{co}\bigl\{\partial f_i(x^0) + w_i : i = 1, \ldots, p\bigr\},
$$

where $\operatorname{co} A$ denotes the convex hull of the set $A$. Hence, there exist $\lambda_i^0 \geq 0$, $w_i^0 \in D_i$, $i = 1, \ldots, p$, $\mu_j^0 \geq 0$, $j = 1, \ldots, q$, and $\nu_l^0$, $l = 1, \ldots, r$, not all zero, such that

$$
\begin{aligned}
& 0 \in \sum_{i=1}^{p} \lambda_i^0 \bigl(\partial f_i(x^0) + w_i^0\bigr) + \sum_{j=1}^{q} \mu_j^0 \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 \,\partial h_l(x^0) + N_X(x^0),\\
& \langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \ldots, p,\\
& \mu_j^0 g_j(x^0) = 0, \quad j = 1, \ldots, q.
\end{aligned}
$$

 □

Theorem 3.2 (Karush-Kuhn-Tucker necessary optimality conditions)

Suppose that $x^0$ is a strict minimizer of order $m$ for (MOP) and that $f_i$, $i = 1, \ldots, p$, $g_j$, $j = 1, \ldots, q$, and $h_l$, $l = 1, \ldots, r$, are locally Lipschitz at $x^0$. If the basic regularity condition (BRC) holds at $x^0$, then there exist $\lambda_i^0 \geq 0$, $w_i^0 \in D_i$, $i = 1, \ldots, p$, $\mu_j^0 \geq 0$, $j = 1, \ldots, q$, and $\nu_l^0$, $l = 1, \ldots, r$, such that

$$
\begin{aligned}
& 0 \in \sum_{i=1}^{p} \lambda_i^0 \bigl(\partial f_i(x^0) + w_i^0\bigr) + \sum_{j=1}^{q} \mu_j^0 \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 \,\partial h_l(x^0) + N_X(x^0),\\
& \langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \ldots, p,\\
& \mu_j^0 g_j(x^0) = 0, \quad j = 1, \ldots, q,\\
& (\lambda_1^0, \ldots, \lambda_p^0) \neq (0, \ldots, 0).
\end{aligned}
$$

Proof Since $x^0$ is a strict minimizer of order $m$ for (MOP), by Theorem 3.1 there exist $w_i^0 \in D_i$, $\lambda_i^0 \geq 0$, $i = 1, \ldots, p$, $\mu_j^0 \geq 0$, $j = 1, \ldots, q$, and $\nu_l^0$, $l = 1, \ldots, r$, not all zero, such that

$$
\begin{aligned}
& 0 \in \sum_{i=1}^{p} \lambda_i^0 \bigl(\partial f_i(x^0) + w_i^0\bigr) + \sum_{j=1}^{q} \mu_j^0 \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 \,\partial h_l(x^0) + N_X(x^0),\\
& \langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \ldots, p,\\
& \mu_j^0 g_j(x^0) = 0, \quad j = 1, \ldots, q.
\end{aligned}
$$

It remains to show that $(\lambda_1^0, \ldots, \lambda_p^0) \neq (0, \ldots, 0)$. If $\lambda_i^0 = 0$, $i = 1, \ldots, p$, then we have

$$
0 \in \sum_{i=1, i \neq k}^{p} \lambda_i^0 \bigl(\partial f_i(x^0) + w_i\bigr) + \sum_{j=1}^{q} \mu_j^0 \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 \,\partial h_l(x^0) + N_X(x^0)
$$

for each $k \in P = \{1, \ldots, p\}$. Since the basic regularity condition (BRC) holds at $x^0$, this forces $\mu_j^0 = 0$, $j \in I(x^0)$, and $\nu_l^0 = 0$, $l = 1, \ldots, r$; together with $\mu_j^0 = 0$ for $j \notin I(x^0)$ (which follows from $\mu_j^0 g_j(x^0) = 0$), all of the multipliers of Theorem 3.1 vanish. This contradicts the fact that they are not all zero. Hence, $(\lambda_1^0, \ldots, \lambda_p^0) \neq (0, \ldots, 0)$. □

Theorem 3.3 (Karush-Kuhn-Tucker sufficient optimality conditions)

Assume that $x^0$ is a feasible solution of (MOP) and that there exist $\lambda_i^0 \geq 0$, $w_i^0 \in D_i$, $i = 1, \ldots, p$, $\mu_j^0 \geq 0$, $j = 1, \ldots, q$, and $\nu_l^0$, $l = 1, \ldots, r$, such that

$$
\begin{aligned}
& 0 \in \sum_{i=1}^{p} \lambda_i^0 \bigl(\partial f_i(x^0) + w_i^0\bigr) + \sum_{j=1}^{q} \mu_j^0 \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 \,\partial h_l(x^0) + N_X(x^0),\\
& \langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \ldots, p,\\
& \mu_j^0 g_j(x^0) = 0, \quad j = 1, \ldots, q,\\
& (\lambda_1^0, \ldots, \lambda_p^0) \neq (0, \ldots, 0).
\end{aligned}
$$

Assume further that $f_i$, $i = 1, \ldots, p$, are strongly convex of order $m$ on $X$, that $g_j$, $j \in I(x^0)$, are strongly quasiconvex of order $m$ on $X$, and that $(\nu^0)^T h$ is strongly quasiconvex of order $m$ on $X$. Then $x^0$ is a strict minimizer of order $m$ for (MOP).

Proof Since $(\lambda_1^0, \ldots, \lambda_p^0) \neq (0, \ldots, 0)$, dividing all the multipliers by $\sum_{i=1}^{p} \lambda_i^0 > 0$ if necessary, we may assume that $\sum_{i=1}^{p} \lambda_i^0 = 1$. Since $f_i$, $i = 1, \ldots, p$, are strongly convex of order $m$ on $X$ and $(\cdot)^T w_i$, $i = 1, \ldots, p$, are convex, there exist $c_i > 0$, $i = 1, \ldots, p$, such that for all $x \in X$, $\xi_i \in \partial f_i(x^0)$ and $w_i \in D_i$, $i = 1, \ldots, p$,

$$
f_i(x) - f_i(x^0) \geq \langle \xi_i, x - x^0 \rangle + \|x - x^0\|^m c_i, \qquad x^T w_i - (x^0)^T w_i \geq \langle w_i, x - x^0 \rangle.
$$

So, we obtain

$$
\bigl(f_i(x) + x^T w_i\bigr) - \bigl(f_i(x^0) + (x^0)^T w_i\bigr) \geq \langle \xi_i + w_i, x - x^0 \rangle + \|x - x^0\|^m c_i.
$$
(3.1)

For $\lambda_i^0 \geq 0$, $i = 1, \ldots, p$, (3.1) implies

$$
\sum_{i=1}^{p} \lambda_i^0 \bigl(f_i(x) + x^T w_i\bigr) - \sum_{i=1}^{p} \lambda_i^0 \bigl(f_i(x^0) + (x^0)^T w_i\bigr) \geq \Bigl\langle \sum_{i=1}^{p} \lambda_i^0 (\xi_i + w_i), x - x^0 \Bigr\rangle + \sum_{i=1}^{p} \lambda_i^0 \|x - x^0\|^m c_i.
$$
(3.2)

Let $x \in S$ be any feasible solution of (MOP). Since $x^0$ is also feasible, we have

$$
g_j(x) \leq g_j(x^0), \quad j \in I(x^0), \qquad (\nu^0)^T h(x) = (\nu^0)^T h(x^0).
$$

Since $g_j$, $j \in I(x^0)$, are strongly quasiconvex of order $m$ on $X$ and $(\nu^0)^T h$ is strongly quasiconvex of order $m$ on $X$, it follows that there exist $c_j > 0$, $\eta_j \in \partial g_j(x^0)$, $j \in I(x^0)$, $c > 0$, and $\zeta \in \partial\bigl((\nu^0)^T h\bigr)(x^0)$ such that

$$
\langle \eta_j, x - x^0 \rangle + \|x - x^0\|^m c_j \leq 0, \qquad \langle \zeta, x - x^0 \rangle + \|x - x^0\|^m c \leq 0.
$$
(3.3)

For $\mu_j^0 \geq 0$, $j \in I(x^0)$, we obtain

$$
\Bigl\langle \sum_{j \in I(x^0)} \mu_j^0 \eta_j, x - x^0 \Bigr\rangle + \sum_{j \in I(x^0)} \mu_j^0 \|x - x^0\|^m c_j \leq 0.
$$
(3.4)

Since $\mu_j^0 = 0$ for $j \notin I(x^0)$, (3.4) implies

$$
\Bigl\langle \sum_{j=1}^{q} \mu_j^0 \eta_j, x - x^0 \Bigr\rangle + \sum_{j=1}^{q} \mu_j^0 \|x - x^0\|^m c_j \leq 0.
$$
(3.5)

Combining (3.2), (3.3) and (3.5) with the first of the optimality conditions, and using the fact that $\langle n, x - x^0 \rangle \leq 0$ for every $n \in N_X(x^0)$, we get

$$
\sum_{i=1}^{p} \lambda_i^0 \bigl(f_i(x) + x^T w_i\bigr) - \sum_{i=1}^{p} \lambda_i^0 \bigl(f_i(x^0) + (x^0)^T w_i\bigr) \geq \|x - x^0\|^m a,
$$

where $a = \sum_{i=1}^{p} \lambda_i^0 c_i + \sum_{j=1}^{q} \mu_j^0 c_j + c$. This implies that

$$
\sum_{i=1}^{p} \lambda_i^0 \Bigl[\bigl(f_i(x) + x^T w_i\bigr) - \bigl(f_i(x^0) + (x^0)^T w_i\bigr) - \|x - x^0\|^m d_i\Bigr] \geq 0,
$$
(3.6)

where $d = a e$ and $e = (1, \ldots, 1) \in \mathbb{R}^p$, so that $\sum_{i=1}^{p} \lambda_i^0 d_i = a$.

Suppose that $x^0$ is not a strict minimizer of order $m$ for (MOP). Then no $c \in \operatorname{int} \mathbb{R}^p_+$ satisfies Definition 2.4; in particular, for $c = d$ there exists $x \in X$ such that

$$
f(x) + s(x \mid D) < f(x^0) + s(x^0 \mid D) + \|x - x^0\|^m d.
$$

Since $x^T w_i \leq s(x \mid D_i)$ and $(x^0)^T w_i = s(x^0 \mid D_i)$, $i = 1, \ldots, p$, we have

$$
f(x) + x^T w \leqq f(x) + s(x \mid D) < f(x^0) + s(x^0 \mid D) + \|x - x^0\|^m d = f(x^0) + (x^0)^T w + \|x - x^0\|^m d.
$$

For $\lambda_i^0 \geq 0$, $i = 1, \ldots, p$, not all zero, we obtain

$$
\sum_{i=1}^{p} \lambda_i^0 \bigl(f_i(x) + x^T w_i\bigr) < \sum_{i=1}^{p} \lambda_i^0 \bigl(f_i(x^0) + (x^0)^T w_i\bigr) + \sum_{i=1}^{p} \lambda_i^0 \|x - x^0\|^m d_i.
$$

This contradicts (3.6). □

Remark 3.1 Suppose that $g_j$, $j \in I(x^0)$, are strongly convex of order $m$ on $X$ and that $(\nu^0)^T h$ is strongly convex of order $m$ on $X$. Then the conclusion of Theorem 3.3 also holds.

Proof It follows on the lines of Theorem 3.3. □
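The following simple example, added for illustration and not part of the original text, shows how Theorem 3.3 applies. Take $n = p = q = 1$, $r = 0$, $X = \mathbb{R}$, $f_1(x) = x^2$, $D_1 = [-1, 1]$ (so that $s(x \mid D_1) = |x|$) and $g_1(x) = x^2 - 1$. At $x^0 = 0$ the constraint is inactive, $I(x^0) = \emptyset$, and the choices $\lambda_1^0 = 1$, $w_1^0 = 0 \in D_1$, $\mu_1^0 = 0$ satisfy the optimality conditions:
$$
0 \in \lambda_1^0 \bigl(\partial f_1(0) + w_1^0\bigr) + \mu_1^0 \,\partial g_1(0) + N_X(0) = \{0\}, \qquad \langle w_1^0, 0 \rangle = 0 = s(0 \mid D_1), \qquad \mu_1^0 g_1(0) = 0.
$$
Since $f_1$ is strongly convex of order $2$ with $c_1 = 1$ and the remaining convexity hypotheses are vacuous, Theorem 3.3 yields that $x^0 = 0$ is a strict minimizer of order $2$; indeed, $x^2 + |x| \nless 0 + \|x - 0\|^2 \cdot 1$ for all $x \in \mathbb{R}$.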

4 Duality theorems

Now we propose the following Mond-Weir type dual (MOD) to (MOP):

(MOD)
$$
\begin{aligned}
\text{Maximize} \quad & f(u) + u^T w = \bigl(f_1(u) + u^T w_1, \ldots, f_p(u) + u^T w_p\bigr) \\
\text{subject to} \quad & 0 \in \sum_{i=1}^{p} \lambda_i \bigl(\partial f_i(u) + w_i\bigr) + \sum_{j=1}^{q} \mu_j \,\partial g_j(u) + \sum_{l=1}^{r} \nu_l \,\partial h_l(u) + N_X(u), & (4.1)\\
& \sum_{j=1}^{q} \mu_j g_j(u) + \sum_{l=1}^{r} \nu_l h_l(u) \geq 0, & (4.2)\\
& \lambda_i \geq 0, \quad w_i \in D_i, \quad i = 1, \ldots, p, \quad \lambda^T e = 1, & (4.3)\\
& \mu_j \geq 0, \quad j = 1, \ldots, q, \quad \nu_l \in \mathbb{R}, \quad l = 1, \ldots, r, \quad u \in X, & (4.4)
\end{aligned}
$$
where $e = (1, \ldots, 1) \in \mathbb{R}^p$.

Theorem 4.1 (Weak duality)

Let $x$ and $(u, w, \lambda, \mu, \nu)$ be feasible solutions of (MOP) and (MOD), respectively. If $f_i$, $i = 1, \ldots, p$, are strongly convex of order $m$ at $u$ and $\sum_{j=1}^{q} \mu_j g_j(\cdot) + \sum_{l=1}^{r} \nu_l h_l(\cdot)$ is strongly quasiconvex of order $m$ at $u$, then the following cannot hold:

$$
f_i(x) + s(x \mid D_i) < f_i(u) + u^T w_i, \quad i = 1, \ldots, p.
$$
(4.5)

Proof Since x is a feasible solution of (MOP) and (u,w,λ,μ,ν) is a feasible solution of (MOD), we have

$$
\sum_{j=1}^{q} \mu_j g_j(x) + \sum_{l=1}^{r} \nu_l h_l(x) \leq \sum_{j=1}^{q} \mu_j g_j(u) + \sum_{l=1}^{r} \nu_l h_l(u).
$$

Since $\sum_{j=1}^{q} \mu_j g_j(\cdot) + \sum_{l=1}^{r} \nu_l h_l(\cdot)$ is strongly quasiconvex of order $m$ at $u$, it follows that there exist $c_j > 0$, $\eta_j \in \partial g_j(u)$, $j = 1, \ldots, q$, $c_l > 0$, and $\zeta_l \in \partial h_l(u)$, $l = 1, \ldots, r$, such that

$$
\Bigl\langle \sum_{j=1}^{q} \mu_j \eta_j + \sum_{l=1}^{r} \nu_l \zeta_l, x - u \Bigr\rangle + \sum_{j=1}^{q} \|x - u\|^m c_j + \sum_{l=1}^{r} \|x - u\|^m c_l \leq 0.
$$
(4.6)

Now, suppose, contrary to the result, that (4.5) holds. Since $x^T w_i \leq s(x \mid D_i)$, $i = 1, \ldots, p$, we obtain

$$
f_i(x) + x^T w_i < f_i(u) + u^T w_i, \quad i = 1, \ldots, p.
$$

For any $c \in \operatorname{int} \mathbb{R}^p_+$, we obtain

$$
f_i(x) + x^T w_i < f_i(u) + u^T w_i + \|x - u\|^m c_i, \quad i = 1, \ldots, p.
$$
(4.7)

For $\lambda_i \geq 0$, $i = 1, \ldots, p$, with $\lambda^T e = 1$, we obtain

$$
\sum_{i=1}^{p} \lambda_i \bigl(f_i(x) + x^T w_i\bigr) < \sum_{i=1}^{p} \lambda_i \bigl(f_i(u) + u^T w_i\bigr) + \sum_{i=1}^{p} \lambda_i \|x - u\|^m c_i.
$$
(4.8)

Since $f_i$, $i = 1, \ldots, p$, are strongly convex of order $m$ at $u$ and $(\cdot)^T w_i$, $i = 1, \ldots, p$, are convex at $u$, there exist $c_i > 0$, $i = 1, \ldots, p$, such that for all $x \in X$, $\xi_i \in \partial f_i(u)$ and $w_i \in D_i$, $i = 1, \ldots, p$,

$$
f_i(x) - f_i(u) \geq \langle \xi_i, x - u \rangle + \|x - u\|^m c_i, \qquad x^T w_i - u^T w_i \geq \langle w_i, x - u \rangle.
$$

So, we obtain

$$
\bigl(f_i(x) + x^T w_i\bigr) - \bigl(f_i(u) + u^T w_i\bigr) \geq \langle \xi_i + w_i, x - u \rangle + \|x - u\|^m c_i.
$$
(4.9)

For $\lambda_i \geq 0$, $i = 1, \ldots, p$, we obtain

$$
\sum_{i=1}^{p} \lambda_i \bigl(f_i(x) + x^T w_i\bigr) - \sum_{i=1}^{p} \lambda_i \bigl(f_i(u) + u^T w_i\bigr) \geq \Bigl\langle \sum_{i=1}^{p} \lambda_i (\xi_i + w_i), x - u \Bigr\rangle + \sum_{i=1}^{p} \lambda_i \|x - u\|^m c_i.
$$
(4.10)

Combining (4.1), (4.6) and (4.10), and using $\langle n, x - u \rangle \leq 0$ for every $n \in N_X(u)$, we get

$$
\sum_{i=1}^{p} \lambda_i \bigl(f_i(x) + x^T w_i\bigr) - \sum_{i=1}^{p} \lambda_i \bigl(f_i(u) + u^T w_i\bigr) \geq \|x - u\|^m a,
$$
(4.11)

where $a = \sum_{i=1}^{p} \lambda_i c_i + \sum_{j=1}^{q} c_j + \sum_{l=1}^{r} c_l$. This implies that

$$
\sum_{i=1}^{p} \lambda_i \Bigl[\bigl(f_i(x) + x^T w_i\bigr) - \bigl(f_i(u) + u^T w_i\bigr) - \|x - u\|^m d_i\Bigr] \geq 0,
$$
(4.12)

where $d = a e$; since $\lambda^T e = 1$, we have $\sum_{i=1}^{p} \lambda_i d_i = a$. Taking $c = d$ in (4.8) yields a contradiction. □

Lemma 4.1 If $g_j(\cdot)$, $j = 1, \ldots, q$, are strongly convex of order $m$ on $X$ and $\nu^T h$ is strongly convex of order $m$ on $X$, then the conclusion of Theorem 4.1 also holds.

Proof It follows on the lines of Theorem 4.1. □

Definition 4.1 Let $m \geq 1$ be an integer. A point $x^0 \in X$ is called a strict maximizer of order $m$ for (MOD) if there exists $c \in \operatorname{int} \mathbb{R}^p_+$ such that

$$
f(x^0) + (x^0)^T w + \|x - x^0\|^m c \nless f(x) + x^T w, \quad \forall x \in X.
$$

Theorem 4.2 (Strong duality)

If $x^0$ is a strict minimizer of order $m$ for (MOP) and the basic regularity condition (BRC) holds at $x^0$, then there exist $\lambda_i^0 \geq 0$, $w_i^0 \in D_i$, $i = 1, \ldots, p$, $\mu_j^0 \geq 0$, $j = 1, \ldots, q$, and $\nu_l^0$, $l = 1, \ldots, r$, such that $(x^0, w^0, \lambda^0, \mu^0, \nu^0)$ is a feasible solution of (MOD) and $(x^0)^T w_i^0 = s(x^0 \mid D_i)$, $i = 1, \ldots, p$. Moreover, if the assumptions of Theorem 4.1 are satisfied, then $(x^0, w^0, \lambda^0, \mu^0, \nu^0)$ is a strict maximizer of order $m$ for (MOD).

Proof By Theorem 3.2, there exist $\lambda_i^0 \geq 0$, $w_i^0 \in D_i$, $i = 1, \ldots, p$, $\mu_j^0 \geq 0$, $j = 1, \ldots, q$, and $\nu_l^0$, $l = 1, \ldots, r$, such that

$$
\begin{aligned}
& 0 \in \sum_{i=1}^{p} \lambda_i^0 \bigl(\partial f_i(x^0) + w_i^0\bigr) + \sum_{j=1}^{q} \mu_j^0 \,\partial g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 \,\partial h_l(x^0) + N_X(x^0),\\
& \langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \ldots, p,\\
& \mu_j^0 g_j(x^0) = 0, \quad j = 1, \ldots, q,\\
& (\lambda_1^0, \ldots, \lambda_p^0) \neq (0, \ldots, 0).
\end{aligned}
$$

Normalizing so that $(\lambda^0)^T e = 1$ (which is possible since $(\lambda_1^0, \ldots, \lambda_p^0) \neq (0, \ldots, 0)$), and noting that $\sum_{j=1}^{q} \mu_j^0 g_j(x^0) + \sum_{l=1}^{r} \nu_l^0 h_l(x^0) = 0$ by complementary slackness and the feasibility of $x^0$, we see that $(x^0, w^0, \lambda^0, \mu^0, \nu^0)$ is a feasible solution of (MOD) and $(x^0)^T w_i^0 = s(x^0 \mid D_i)$, $i = 1, \ldots, p$. By Theorem 4.1, we obtain

$$
f(x^0) + (x^0)^T w^0 = f(x^0) + s(x^0 \mid D) \nless f(u) + u^T w
$$

for any feasible solution $(u, w, \lambda, \mu, \nu)$ of (MOD). Hence, for $x^0, u \in X$ and $c \in \operatorname{int} \mathbb{R}^p_+$, we have

$$
f(x^0) + (x^0)^T w^0 + \|u - x^0\|^m c \nless f(u) + u^T w.
$$

Thus, $(x^0, w^0, \lambda^0, \mu^0, \nu^0)$ is a strict maximizer of order $m$ for (MOD). □
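Continuing the illustrative example given after Remark 3.1 (again not part of the original text), the dual solution promised by Theorem 4.2 can be written down explicitly. For $f_1(x) = x^2$, $D_1 = [-1, 1]$, $g_1(x) = x^2 - 1$ and no equality constraints, the point $(u, w_1, \lambda_1, \mu_1) = (0, 0, 1, 0)$ is feasible for (MOD): $0 \in \lambda_1\bigl(\partial f_1(0) + w_1\bigr) + \mu_1 \,\partial g_1(0) + N_X(0) = \{0\}$, $\mu_1 g_1(0) = 0 \geq 0$, $\lambda_1 = 1$, $w_1 = 0 \in D_1$ and $\mu_1 \geq 0$. Its dual objective value $f_1(0) + 0 \cdot w_1 = 0$ coincides with the primal value $f_1(0) + s(0 \mid D_1) = 0$ attained at the strict minimizer $x^0 = 0$, as Theorem 4.2 predicts.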

Remark 4.1 Theorem 4.1 and Theorem 4.2 reduce to Theorems 4.1 and 4.2 of [13] in the inequality-constrained case. More precisely, in the framework of [13], Theorems 4.1 and 4.2, the functions $f_i(\cdot) + (\cdot)^T w_i$, $i = 1, \ldots, p$, and $g_j(\cdot)$, $j \in I(u)$, are strongly convex of order $m$ and strongly quasiconvex of order $m$, respectively, at the point under consideration.

References

  1. Rockafellar RT: Convex Analysis. Princeton University Press, Princeton; 1970.


  2. Vial JP: Strong and weak convexity of sets and functions. Math. Oper. Res. 1983, 8: 231–259. 10.1287/moor.8.2.231


  3. Auslender A: Stability in mathematical programming with non-differentiable data. SIAM J. Control Optim. 1984, 22: 239–254. 10.1137/0322017


  4. Studniarski M: Necessary and sufficient conditions for isolated local minima of nonsmooth functions. SIAM J. Control Optim. 1986, 24: 1044–1049. 10.1137/0324061


  5. Ward DE: Characterizations of strict local minima and necessary conditions for weak sharp minima. J. Optim. Theory Appl. 1994, 80: 551–571. 10.1007/BF02207780


  6. Jimenez B: Strict efficiency in vector optimization. J. Math. Anal. Appl. 2002, 265: 264–284. 10.1006/jmaa.2001.7588


  7. Jimenez B, Novo V: First and second order sufficient conditions for strict minimality in multiobjective programming. Numer. Funct. Anal. Optim. 2002, 23: 303–322. 10.1081/NFA-120006695


  8. Jimenez B, Novo V: First and second order sufficient conditions for strict minimality in nonsmooth vector optimization. J. Math. Anal. Appl. 2003, 284: 496–510. 10.1016/S0022-247X(03)00337-8


  9. Bhatia G: Optimality and mixed saddle point criteria in multiobjective optimization. J. Math. Anal. Appl. 2008, 342: 135–145. 10.1016/j.jmaa.2007.11.042


  10. Kim DS, Bae KD: Optimality conditions and duality for a class of nondifferentiable multiobjective programming problems. Taiwan. J. Math. 2009, 13(2B):789–804.


  11. Bae KD, Kang YM, Kim DS: Efficiency and generalized convex duality for nondifferentiable multiobjective programs. J. Inequal. Appl. 2010., 2010: Article ID 930457


  12. Kim DS, Lee HJ: Optimality conditions and duality in nonsmooth multiobjective programs. J. Inequal. Appl. 2010., 2010: Article ID 939537


  13. Bae KD, Kim DS: Optimality and duality theorems in nonsmooth multiobjective optimization. Fixed Point Theory Appl. 2011., 2011: Article ID 42


  14. Clarke FH: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York; 1983.


  15. Lin GH, Fukushima M: Some exact penalty results for nonlinear programs and mathematical programs with equilibrium constraints. J. Optim. Theory Appl. 2003, 118: 67–80. 10.1023/A:1024787424532


  16. Chankong V, Haimes YY: Multiobjective Decision Making: Theory and Methodology. North-Holland, New York; 1983.


  17. Chandra S, Dutta J, Lalitha CS: Regularity conditions and optimality in vector optimization. Numer. Funct. Anal. Optim. 2004, 25: 479–501. 10.1081/NFA-200042637



Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2013R1A1A2A10008908).

Author information


Correspondence to Do Sang Kim.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

DSK obtained necessary and sufficient optimality conditions by using higher-order strong convexity of Lipschitz functions, formulated a Mond-Weir type dual problem and established weak and strong duality theorems for a strict minimizer of order m. KDB carried out the duality studies and drafted the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
