
Sufficiency and duality in multiobjective fractional programming problems involving generalized type I functions

Journal of Inequalities and Applications 2013, 2013:435

https://doi.org/10.1186/1029-242X-2013-435

Received: 7 November 2012

Accepted: 20 August 2013

Published: 13 September 2013

Abstract

In this paper, we establish sufficient optimality conditions for (weak) efficiency in multiobjective fractional programming problems involving generalized ( F , α , ρ , d ) -V-type I functions. Using these optimality conditions, we formulate a parametric-type dual for such problems and prove weak, strong, and strict converse duality theorems within the framework of generalized ( F , α , ρ , d ) -V-type I functions.

MSC: 90C46; 90C32; 90C30.

Keywords

multiobjective fractional programming; weakly efficient solution; efficient solution; generalized type I function; duality

1 Introduction

Fractional programming problems have been studied under a variety of assumptions on the functions involved and on the constraints. Singh has worked on fractional programming since 1986. Under convexity assumptions, Singh and Hanson [1] established both necessary and sufficient optimality conditions of Fritz John and Kuhn-Tucker type. Without convexity assumptions, Singh [2] derived necessary conditions based on a constraint qualification. Weir [3], working with the notion of proper efficiency, treated nondifferentiable problems of this kind and established duality theorems. Kaul and Lyall [4] discussed efficiency for fractional vector maximization problems and their duals. Liu [5] derived necessary conditions and duality results for nonsmooth, nonlinear multiobjective fractional programming problems, and also established duality results under generalized ( F , ρ )-convexity. Building on Liu's results, Bhatia and Garg [6] obtained Bector-type duality results for ( F , ρ )-convex functions. Bhatia and Pandey [7] then treated objectives whose numerators are nonnegative and convex and whose denominators are concave and positive.

The concept of ( F , ρ )-convexity, introduced by Preda [8], is an extension of the F-convexity of Hanson and Mond [9] and the ρ-convexity of Vial [10]. Gulati et al. [11] defined the generalized ( F , α , ρ , d )-V-type I functions, a generalization of convexity that combines three ingredients: V-type I functions [12], ( F , α , ρ , d )-convex functions [13], and type I functions [14]. Sufficient optimality conditions and dual programs were established in [11]. Since vector-valued objectives are not totally ordered, the concepts of efficiency and proper efficiency are central in fractional vector optimization, and many authors have studied such problems in order to characterize efficiency and proper efficiency (cf. [1–20]). In [2], Singh derived necessary conditions for efficiency in differentiable multiobjective programming under a constraint qualification. Using Singh's results, Lee and Ho [20] derived necessary optimality conditions for efficiency in multiobjective fractional programming problems.

This paper consists of four sections. In Section 2, some definitions and preliminary results are recalled. In Section 3, we combine a necessary optimality condition with generalized ( F , α , ρ , d )-V-type I functions to establish sufficient optimality conditions for the multiobjective fractional programming problem (FP). Finally, in Section 4, a parametric dual of (FP) is formulated using the results of Section 3, and parametric duality theorems are established within the framework of generalized ( F , α , ρ , d )-V-type I functions.

2 Definitions and preliminaries

Let R^n denote the n-dimensional Euclidean space, and let R^n_+ denote its nonnegative orthant. For any x = (x_1, x_2, …, x_n) and y = (y_1, y_2, …, y_n), we define:

(1) x = y if and only if x_i = y_i for all i = 1, 2, …, n;

(2) x > y if and only if x_i > y_i for all i = 1, 2, …, n;

(3) x ≧ y if and only if x_i ≥ y_i for all i = 1, 2, …, n;

(4) x ≥ y if and only if x ≧ y and x ≠ y.

Let X be a nonempty open set in R n .

We consider the following multiobjective nonlinear fractional programming problem:
(FP)  Minimize  ϕ(x) = (ϕ_1(x), ϕ_2(x), …, ϕ_k(x)) = ( f_1(x)/g_1(x), f_2(x)/g_2(x), …, f_k(x)/g_k(x) ),
      subject to  h_j(x) ≤ 0,  j = 1, 2, …, m,

where f_i, g_i : X → R, i = 1, 2, …, k, and h_j : X → R, j = 1, 2, …, m, are differentiable functions. We assume that f_i(x) ≥ 0 and g_i(x) > 0 for all x ∈ X, i = 1, 2, …, k.

Denote by X⁰ = {x ∈ X : h(x) = (h_1(x), h_2(x), …, h_m(x)) ≦ 0} the feasible set of (FP).
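To make the setup concrete, the following minimal Python sketch (my own illustration, not part of the original paper; all names and the sample data are assumptions) represents an instance of (FP) by its component functions and evaluates the objective vector ϕ(x) and feasibility of a point. It relies on the assumption g_i(x) > 0 on X made above.

```python
from typing import Callable, List, Sequence

Vector = Sequence[float]

class FPInstance:
    """A multiobjective fractional program: minimize (f_i(x)/g_i(x)) s.t. h_j(x) <= 0."""

    def __init__(self,
                 f: List[Callable[[Vector], float]],
                 g: List[Callable[[Vector], float]],
                 h: List[Callable[[Vector], float]]):
        self.f, self.g, self.h = f, g, h

    def phi(self, x: Vector) -> List[float]:
        # Objective vector phi(x) = (f_1(x)/g_1(x), ..., f_k(x)/g_k(x)); g_i(x) > 0 is assumed.
        return [fi(x) / gi(x) for fi, gi in zip(self.f, self.g)]

    def is_feasible(self, x: Vector, tol: float = 1e-12) -> bool:
        # x is feasible exactly when every constraint satisfies h_j(x) <= 0.
        return all(hj(x) <= tol for hj in self.h)


# Tiny illustrative instance (k = 2, m = 1), not taken from the paper:
prob = FPInstance(
    f=[lambda x: x[0] ** 2 + 1.0, lambda x: x[0] + x[1]],
    g=[lambda x: x[1] + 2.0, lambda x: 1.0 + x[0] ** 2],
    h=[lambda x: 1.0 - x[0] - x[1]],          # feasible iff x_0 + x_1 >= 1
)
print(prob.phi([1.0, 1.0]), prob.is_feasible([1.0, 1.0]))
```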

Definition 1 A functional F : X × X × R^n → R is said to be sublinear in its third argument if, for any x, x̄ ∈ X,

F(x, x̄; a_1 + a_2) ≤ F(x, x̄; a_1) + F(x, x̄; a_2)  for all a_1, a_2 ∈ R^n,
F(x, x̄; βa) = β F(x, x̄; a)  for all β ∈ R, β ≥ 0, and all a ∈ R^n.
Definition 2 A function F̃(x) = (F̃_1(x), F̃_2(x), …, F̃_k(x)) : X → R^k is said to be differentiable at x if there exists a linear transformation A of R^n into R^k such that

lim_{h→0} ‖F̃(x + h) − F̃(x) − Ah‖_k / ‖h‖_n = 0.

In this case we write

∇F̃(x) = A = [∇F̃_1(x), ∇F̃_2(x), …, ∇F̃_k(x)]_{n×k}.

Here

(1) h ∈ R^n;

(2) ‖·‖_n and ‖·‖_k denote norms on R^n and R^k, respectively;

(3) ∇F̃_i(x) denotes the gradient of F̃_i at x for i = 1, 2, …, k.

For convenience, we fix the notation used to define the generalized (F, α, ρ, d)-V-type I functions. Let K = {1, 2, …, k} and M = {1, 2, …, m} be index sets, and let F be a sublinear functional. The functions F = (F_1, F_2, …, F_k) : X → R^k and ħ = (ħ_1, ħ_2, …, ħ_m) : X → R^m are differentiable at x̄ ∈ X. Let α = (α^1, α^2), where α^1 = (α_1^1, α_2^1, …, α_k^1), α^2 = (α_1^2, α_2^2, …, α_m^2) and α_i^1, α_j^2 : X × X → R_+ ∖ {0} for i ∈ K, j ∈ M, and let d(·,·) : X × X → R be a pseudometric. Also, for x̄ ∈ X⁰, let J(x̄) = {j : h_j(x̄) = 0} and let h_J denote the vector of active constraints at x̄.

In order to prove the sufficient optimality conditions for problem (FP) and the duality theorems relating (FP) to its dual, the following definitions of generalized (F, α, ρ, d)-V-type I functions are required. They extend the V-type I functions of [12] and the type I functions of [14].

Definition 3 [11]

(F, ħ) is said to be (F, α, ρ, d)-V-type I at x̄ ∈ X if there exist vectors α and ρ = (ρ_1^1, ρ_2^1, …, ρ_k^1, ρ_1^2, ρ_2^2, …, ρ_m^2), with ρ_i^1, ρ_j^2 ∈ R for i ∈ K, j ∈ M, such that for each x ∈ X⁰ and for all i ∈ K, j ∈ M, we have

F_i(x) − F_i(x̄) ≥ F(x, x̄; α_i^1(x, x̄) ∇F_i(x̄)) + ρ_i^1 d²(x, x̄),
−ħ_j(x̄) ≥ F(x, x̄; α_j^2(x, x̄) ∇ħ_j(x̄)) + ρ_j^2 d²(x, x̄).

If the first inequality holds strictly, that is,

F_i(x) − F_i(x̄) > F(x, x̄; α_i^1(x, x̄) ∇F_i(x̄)) + ρ_i^1 d²(x, x̄),

then (F, ħ) is said to be semistrictly (F, α, ρ, d)-V-type I at x̄ ∈ X.
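Verifying Definition 3 analytically can be tedious. The following Python sketch (my own illustration, not from the paper; the functional F_sub below is just one admissible sublinear choice) tests the two inequalities numerically at x̄ over a finite sample of feasible points for user-supplied F_i, ħ_j, α, ρ and d. A sampled check can only refute the property, never prove it, since the inequalities must hold for every feasible x.

```python
import numpy as np

def numerical_grad(fun, x, eps=1e-6):
    """Central-difference gradient of a scalar function at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (fun(x + e) - fun(x - e)) / (2 * eps)
    return g

def check_V_type_I(F_list, h_list, alpha1, alpha2, rho1, rho2, d,
                   x_bar, sample_points, F_sub=None, tol=1e-8):
    """Test the (F, alpha, rho, d)-V-type I inequalities of Definition 3 at x_bar
    on a finite sample of feasible points; returns False on any sampled violation."""
    if F_sub is None:
        # Example sublinear functional: F(x, x_bar; a) = a . (x - x_bar).
        F_sub = lambda x, xb, a: float(np.dot(a, np.asarray(x) - np.asarray(xb)))
    for x in sample_points:
        for i, Fi in enumerate(F_list):
            lhs = Fi(x) - Fi(x_bar)
            rhs = F_sub(x, x_bar, alpha1[i](x, x_bar) * numerical_grad(Fi, x_bar)) \
                  + rho1[i] * d(x, x_bar) ** 2
            if lhs < rhs - tol:
                return False
        for j, hj in enumerate(h_list):
            lhs = -hj(x_bar)
            rhs = F_sub(x, x_bar, alpha2[j](x, x_bar) * numerical_grad(hj, x_bar)) \
                  + rho2[j] * d(x, x_bar) ** 2
            if lhs < rhs - tol:
                return False
    return True
```

Running this on a grid of feasible points with the data of the example in Section 3 gives a quick, non-rigorous sanity check of the type I hypothesis used there.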

Remark 1
(1) If ρ_i^1 = ρ_j^2 = 0 for i ∈ K, j ∈ M and F(x, x̄; a) = aᵀη(x, x̄) with η : X × X → R^n, the inequalities in Definition 3 reduce to those defining the V-type I functions introduced by Hanson et al. [12].

(2) If α_i^1(x, x̄) = α_j^2(x, x̄) = 1 and ρ_i^1 = ρ_j^2 = 0 for i ∈ K, j ∈ M and F(x, x̄; a) = aᵀη(x, x̄), the definition of type I functions given by Hanson and Mond [14] is obtained.

Definition 4 [11]

(F, ħ) is said to be quasi-(F, α, ρ, d)-V-type I at x̄ ∈ X if there exist a vector α and ρ = (ρ^1, ρ^2) ∈ R² such that for each x ∈ X⁰ we have

∑_{i=1}^{k} α_i^1(x, x̄) F_i(x) ≤ ∑_{i=1}^{k} α_i^1(x, x̄) F_i(x̄)  ⟹  F(x, x̄; ∑_{i=1}^{k} ∇F_i(x̄)) ≤ −ρ^1 d²(x, x̄),
−∑_{j=1}^{m} α_j^2(x, x̄) ħ_j(x̄) ≤ 0  ⟹  F(x, x̄; ∑_{j=1}^{m} ∇ħ_j(x̄)) ≤ −ρ^2 d²(x, x̄).

Definition 5 [11]

(F, ħ) is said to be pseudo-(F, α, ρ, d)-V-type I at x̄ ∈ X if there exist a vector α and ρ = (ρ^1, ρ^2) ∈ R² such that for each x ∈ X⁰ we have

F(x, x̄; ∑_{i=1}^{k} ∇F_i(x̄)) ≥ −ρ^1 d²(x, x̄)  ⟹  ∑_{i=1}^{k} α_i^1(x, x̄) F_i(x) ≥ ∑_{i=1}^{k} α_i^1(x, x̄) F_i(x̄),
F(x, x̄; ∑_{j=1}^{m} ∇ħ_j(x̄)) ≥ −ρ^2 d²(x, x̄)  ⟹  −∑_{j=1}^{m} α_j^2(x, x̄) ħ_j(x̄) ≥ 0.

Definition 6 [11]

(F, ħ) is said to be pseudo-quasi-(F, α, ρ, d)-V-type I at x̄ ∈ X if there exist a vector α and ρ = (ρ^1, ρ^2) ∈ R² such that for each x ∈ X⁰ we have

∑_{i=1}^{k} α_i^1(x, x̄) F_i(x) < ∑_{i=1}^{k} α_i^1(x, x̄) F_i(x̄)  ⟹  F(x, x̄; ∑_{i=1}^{k} ∇F_i(x̄)) < −ρ^1 d²(x, x̄),
−∑_{j=1}^{m} α_j^2(x, x̄) ħ_j(x̄) ≤ 0  ⟹  F(x, x̄; ∑_{j=1}^{m} ∇ħ_j(x̄)) ≤ −ρ^2 d²(x, x̄).

If the first implication holds in the stronger form

∑_{i=1}^{k} α_i^1(x, x̄) F_i(x) ≤ ∑_{i=1}^{k} α_i^1(x, x̄) F_i(x̄)  ⟹  F(x, x̄; ∑_{i=1}^{k} ∇F_i(x̄)) < −ρ^1 d²(x, x̄),

then (F, ħ) is said to be strictly pseudo-quasi-(F, α, ρ, d)-V-type I at x̄ ∈ X.

Definition 7 [11]

(F, ħ) is said to be quasi-pseudo-(F, α, ρ, d)-V-type I at x̄ ∈ X if there exist a vector α and ρ = (ρ^1, ρ^2) ∈ R² such that for each x ∈ X⁰ we have

F(x, x̄; ∑_{i=1}^{k} ∇F_i(x̄)) > −ρ^1 d²(x, x̄)  ⟹  ∑_{i=1}^{k} α_i^1(x, x̄) F_i(x) > ∑_{i=1}^{k} α_i^1(x, x̄) F_i(x̄),
−∑_{j=1}^{m} α_j^2(x, x̄) ħ_j(x̄) < 0  ⟹  F(x, x̄; ∑_{j=1}^{m} ∇ħ_j(x̄)) < −ρ^2 d²(x, x̄).

If the second implication holds in the stronger form

−∑_{j=1}^{m} α_j^2(x, x̄) ħ_j(x̄) ≤ 0  ⟹  F(x, x̄; ∑_{j=1}^{m} ∇ħ_j(x̄)) < −ρ^2 d²(x, x̄),

then (F, ħ) is said to be quasi-strictly-pseudo-(F, α, ρ, d)-V-type I at x̄ ∈ X.

Definition 8 [2]

A feasible solution x* of (FP) is said to be an efficient solution of (FP) if there does not exist any feasible solution x of (FP) such that

ϕ(x) ≤ ϕ(x*),

that is, ϕ_i(x) ≤ ϕ_i(x*) for all i = 1, 2, …, k with ϕ(x) ≠ ϕ(x*).

Definition 9 [2]

A feasible solution x* of (FP) is said to be a weakly efficient solution of (FP) if there does not exist any feasible solution x of (FP) such that

ϕ(x) < ϕ(x*),

that is, ϕ_i(x) < ϕ_i(x*) for all i = 1, 2, …, k.
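The distinction between Definitions 8 and 9 is the usual Pareto one, and a small Python illustration (mine, not part of the paper; the sample vectors are arbitrary) may help: within a finite set of objective vectors, a point is efficient when no other vector dominates it componentwise with at least one strict improvement, and weakly efficient when no other vector is strictly better in every component.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b in the sense of Definition 8: a_i <= b_i for all i, with a != b."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def strictly_dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a strictly dominates b in the sense of Definition 9: a_i < b_i for all i."""
    return all(ai < bi for ai, bi in zip(a, b))

def classify(values: List[Sequence[float]]):
    """Return (efficient, weakly_efficient) index lists within the finite set `values`."""
    efficient = [i for i, v in enumerate(values)
                 if not any(dominates(w, v) for j, w in enumerate(values) if j != i)]
    weakly_efficient = [i for i, v in enumerate(values)
                        if not any(strictly_dominates(w, v) for j, w in enumerate(values) if j != i)]
    return efficient, weakly_efficient

# Every efficient point is weakly efficient, but not conversely:
vals = [(1.0, 3.0), (2.0, 2.0), (1.0, 4.0), (3.0, 3.0)]
print(classify(vals))   # ([0, 1], [0, 1, 2]); (1.0, 4.0) is weakly efficient but not efficient
```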

Definition 10 [2]

Let Y ⊆ R^n. A vector μ ∈ R^n is called a convergent vector for Y at ν ∈ Y if and only if there exist a sequence {ν_k̃} in Y and a sequence {α_k̃} of positive real numbers such that

lim_{k̃→∞} ν_k̃ = ν,  lim_{k̃→∞} α_k̃ = 0  and  lim_{k̃→∞} (ν_k̃ − ν)/α_k̃ = μ.

Let C ( Y , ν ) denote the set of all convergent vectors for Y at ν .

We say that the constraint h of (FP) satisfies the constraint qualification at x ∈ X⁰ (see [2]) if

D ⊆ C(X⁰, x),     (1)

where C(X⁰, x) is the set of all convergent vectors for X⁰ at x and D = {d ∈ R^n : ∇h_j(x)ᵀ d ≤ 0 for all j ∈ J}, with J = {j : h_j(x) = 0}.

3 Sufficient optimality conditions

The following theorem, due to Lee and Ho [20], gives necessary optimality conditions for (FP).

Theorem 1 ([20] Necessary optimality conditions)

Let x* be a (weakly) efficient solution of (FP), and assume that the constraint qualification (1) is satisfied for h at x*. Then there exist y* ∈ R^k, v* ∈ R^k, and z* ∈ R^m such that (x*, v*, y*, z*) satisfies

∑_{i=1}^{k} y_i* ∇[f_i(x*) − v_i* g_i(x*)] + ∑_{j=1}^{m} z_j* ∇h_j(x*) = 0,     (2)

f_i(x*) − v_i* g_i(x*) = 0  for all i = 1, 2, …, k,     (3)

∑_{j=1}^{m} z_j* h_j(x*) = 0,     (4)

h_j(x*) ≤ 0  for all j = 1, 2, …, m,     (5)

y* ∈ I and z* ∈ R^m_+,     (6)

where I = {y ∈ R^k : y = (y_1, y_2, …, y_k) ≧ 0 and ∑_{i=1}^{k} y_i = 1}.

In this section, we establish sufficient optimality conditions for a (weakly) efficient solution; these are converse to the above theorem under additional hypotheses, and different hypotheses lead to the corresponding duality theorems in Section 4. Throughout, the fractional objective is handled through the parametric (Dinkelbach-type) transformation [21], whose elementary basis is recalled below.
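The following equivalence (spelled out here in my own words; it rests only on the assumption g_i(x) > 0 on X) is used implicitly whenever a ratio f_i(x)/g_i(x) is compared with its parameter v_i in the proofs below.

```latex
% Since g_i(x) > 0 on X, for any parameter v_i \in \mathbb{R} and any x \in X:
\frac{f_i(x)}{g_i(x)} < v_i \;\Longleftrightarrow\; f_i(x) - v_i\, g_i(x) < 0,
\qquad
\frac{f_i(x)}{g_i(x)} = v_i \;\Longleftrightarrow\; f_i(x) - v_i\, g_i(x) = 0 .
```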

Theorem 2 (Sufficient optimality condition)

Let x* be a feasible solution of (FP), and suppose that there exist y* ∈ I ⊂ R^k, v* ∈ R^k, and z* ∈ R^m satisfying conditions (2)–(6) at x*. Let

A_i(x) = f_i(x) − v_i* g_i(x),  i = 1, 2, …, k,

and A(x) = (A_1(x), A_2(x), …, A_k(x)). If

(a) (A, h_J) is (F, α, ρ, d)-V-type I at x* on X⁰,

(b) ∑_{i=1}^{k} y_i* ρ_i^1 / α_i^1(x, x*) + ∑_{j∈J(x*)} z_j* ρ_j^2 / α_j^2(x, x*) ≥ 0 for all x ∈ X⁰,

(c) ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*) = 0,

then x* is a weakly efficient solution of problem (FP).

Proof Since (A, h_J) is (F, α, ρ, d)-V-type I at x*, we have, for every x ∈ X⁰,

A_i(x) − A_i(x*) ≥ F(x, x*; α_i^1(x, x*) ∇A_i(x*)) + ρ_i^1 d²(x, x*),  i ∈ K,
0 = −h_j(x*) ≥ F(x, x*; α_j^2(x, x*) ∇h_j(x*)) + ρ_j^2 d²(x, x*),  j ∈ J(x*).

Since α_i^1(x, x*) > 0, i ∈ K, and α_j^2(x, x*) > 0, j ∈ J(x*), the above inequalities together with the sublinearity of F give

A_i(x)/α_i^1(x, x*) − A_i(x*)/α_i^1(x, x*) − ρ_i^1 d²(x, x*)/α_i^1(x, x*) ≥ F(x, x*; ∇A_i(x*)),  i ∈ K,
−ρ_j^2 d²(x, x*)/α_j^2(x, x*) ≥ F(x, x*; ∇h_j(x*)),  j ∈ J(x*).

From (c), (6), and the sublinearity of F, we get

∑_{i=1}^{k} y_i* A_i(x)/α_i^1(x, x*) − ∑_{i=1}^{k} y_i* A_i(x*)/α_i^1(x, x*) − ∑_{i=1}^{k} y_i* ρ_i^1 d²(x, x*)/α_i^1(x, x*) − ∑_{j∈J(x*)} z_j* ρ_j^2 d²(x, x*)/α_j^2(x, x*)
  ≥ F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*)) + F(x, x*; ∑_{j∈J(x*)} z_j* ∇h_j(x*))
  ≥ F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*)) = 0.

Then

∑_{i=1}^{k} y_i* A_i(x)/α_i^1(x, x*) − ∑_{i=1}^{k} y_i* A_i(x*)/α_i^1(x, x*) ≥ ( ∑_{i=1}^{k} y_i* ρ_i^1/α_i^1(x, x*) + ∑_{j∈J(x*)} z_j* ρ_j^2/α_j^2(x, x*) ) d²(x, x*).

Now, by using (b), the above inequality becomes

∑_{i=1}^{k} y_i* A_i(x)/α_i^1(x, x*) ≥ ∑_{i=1}^{k} y_i* A_i(x*)/α_i^1(x, x*).     (7)

If x* is not a weakly efficient solution of problem (FP), then there exists x ∈ X⁰ such that

f_i(x)/g_i(x) < f_i(x*)/g_i(x*) = v_i*  for i ∈ K,

so that, since g_i(x) > 0 and by (3), A_i(x) < 0 = A_i(x*) for i ∈ K. From relation (6) and α_i^1(x, x*) > 0, we have

∑_{i=1}^{k} y_i* A_i(x)/α_i^1(x, x*) < ∑_{i=1}^{k} y_i* A_i(x*)/α_i^1(x, x*),

which contradicts (7). Hence, x* is a weakly efficient solution of (FP). □

Example Consider the following multiobjective nonlinear fractional programming problem:

Minimize  ( (x_2(π − x_2) e^{cos x_1} − (π²/4) e^{√2/2}) / x_1 ,  (sin² x_1 + x_1) / (x_1 + 1/2) ,  (x_1 + cos x_2 − π/4) / x_2 ),
subject to  h_1(x_1, x_2) = π − 4x_1 ≤ 0,  h_2(x_1, x_2) = −cos x_2 ≤ 0,

where

1. X = {(x_1, x_2) : 0 < x_1 < 3π/2, 0 < x_2 < 3π/2},

2. f = (f_1, f_2, f_3) = ( x_2(π − x_2) e^{cos x_1} − (π²/4) e^{√2/2} ,  sin² x_1 + x_1 ,  x_1 + cos x_2 − π/4 ) : X → R³,

3. g = (g_1, g_2, g_3) = ( x_1 ,  x_1 + 1/2 ,  x_2 ) : X → R³,

4. h = (h_1, h_2) : X → R².

The feasible set is X⁰ = {(x_1, x_2) : π/4 ≤ x_1 < 3π/2, 0 < x_2 ≤ π/2}.

In the notation of Theorem 2,

A(x_1, x_2) = (A_1(x_1, x_2), A_2(x_1, x_2), A_3(x_1, x_2)) = ( x_2(π − x_2) e^{cos x_1} − (π²/4) e^{√2/2} ,  sin² x_1 − 1/2 ,  x_1 + cos x_2 − π/4 ).

It can be verified that (A, h) is (F, α, ρ, d)-V-type I at x* = (π/4, π/2) ∈ X⁰ for F(x, x*; a) = aᵀ(x − x*), d(x, x*) = √((x_1 − π/4)² + (x_2 − π/2)²), α = (α_1^1, α_2^1, α_3^1, α_1^2, α_2^2), ρ = (ρ_1^1, ρ_2^1, ρ_3^1, ρ_1^2, ρ_2^2) and v* = (v_1*, v_2*, v_3*), where α_1^1 = 1, α_2^1 = 1/(x_1 + π/4), α_3^1 = 3/(2π), α_1^2 = α_2^2 = 1, ρ_1^1 = −5/2, ρ_2^1 = −1/3, ρ_3^1 = 1/15, ρ_1^2 = ρ_2^2 = 0, and v_1* = 0, v_2* = 1, v_3* = 0. Take y* = (y_1*, y_2*, y_3*) = (0, 1/18, 17/18) and z* = (z_1*, z_2*) = (1/4, 17/18).

It is easy to see that hypotheses (b) and (c) of Theorem 2 are also satisfied at the point (π/4, π/2). Hence, (π/4, π/2) ∈ X⁰ is a weakly efficient solution.
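As a sanity check on the data above (under my reconstruction of the example's functions and multipliers, which is not part of the original paper), the following Python snippet verifies numerically that x* = (π/4, π/2) is feasible, that both constraints are active there, that v* = (0, 1, 0) matches the objective ratios at x* as in condition (3), and that the stationarity condition (c) of Theorem 2 holds.

```python
import math
import numpy as np

x_star = np.array([math.pi / 4, math.pi / 2])
y_star = np.array([0.0, 1.0 / 18, 17.0 / 18])
z_star = np.array([0.25, 17.0 / 18])
v_star = np.array([0.0, 1.0, 0.0])

f = [lambda x: x[1] * (math.pi - x[1]) * math.exp(math.cos(x[0]))
             - (math.pi ** 2 / 4) * math.exp(math.sqrt(2) / 2),
     lambda x: math.sin(x[0]) ** 2 + x[0],
     lambda x: x[0] + math.cos(x[1]) - math.pi / 4]
g = [lambda x: x[0], lambda x: x[0] + 0.5, lambda x: x[1]]
h = [lambda x: math.pi - 4 * x[0], lambda x: -math.cos(x[1])]

def grad(fun, x, eps=1e-6):
    # Central-difference gradient.
    return np.array([(fun(x + eps * e) - fun(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])

# Feasibility and active constraints: h_j(x*) should be <= 0, and here exactly 0.
print([hj(x_star) for hj in h])                             # ~[0.0, 0.0]

# v_i* = f_i(x*) / g_i(x*)  (condition (3)).
print([fi(x_star) / gi(x_star) for fi, gi in zip(f, g)])    # ~[0.0, 1.0, 0.0]

# Stationarity (condition (c)):  sum_i y_i* grad A_i(x*) + sum_j z_j* grad h_j(x*) = 0,
# where A_i = f_i - v_i* g_i.
A = [lambda x, i=i: f[i](x) - v_star[i] * g[i](x) for i in range(3)]
residual = sum(y_star[i] * grad(A[i], x_star) for i in range(3)) \
         + sum(z_star[j] * grad(h[j], x_star) for j in range(2))
print(residual)                                             # ~[0.0, 0.0]
```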

Theorem 3 (Sufficient optimality condition)

Let x* be a feasible solution of (FP), and suppose that there exist y* ∈ I ⊂ R^k, v* ∈ R^k, and z* ∈ R^m satisfying conditions (2)–(6) at x*. Let

A_i(x) = f_i(x) − v_i* g_i(x),  i = 1, 2, …, k,

and A(x) = (A_1(x), A_2(x), …, A_k(x)). If

(a) (y*A, z_J*h_J) is pseudo-quasi-(F, α, ρ, d)-V-type I at x* on X⁰, where y*A = (y_1*A_1, …, y_k*A_k) and z_J*h_J = (z_j*h_j)_{j∈J(x*)},

(b) ρ^1 + ρ^2 ≥ 0,

(c) ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*) = 0,

then x* is a weakly efficient solution of problem (FP).

Proof Assume that x* is not a weakly efficient solution of (FP). Then there is a feasible solution x ∈ X⁰ such that

f_i(x)/g_i(x) < f_i(x*)/g_i(x*) = v_i*  for i ∈ K.

Since g_i(x) > 0, this gives

f_i(x) − v_i* g_i(x) < f_i(x*) − v_i* g_i(x*)  for i ∈ K.

From relation (6) and α_i^1(x, x*) > 0 for i ∈ K, we obtain

∑_{i=1}^{k} α_i^1(x, x*) y_i* A_i(x) < ∑_{i=1}^{k} α_i^1(x, x*) y_i* A_i(x*).     (8)

Also, h_j(x*) = 0 for j ∈ J(x*) yields

−∑_{j∈J(x*)} α_j^2(x, x*) z_j* h_j(x*) = 0 ≤ 0.     (9)

By (a), relations (8) and (9) imply

F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*)) < −ρ^1 d²(x, x*),
F(x, x*; ∑_{j∈J(x*)} z_j* ∇h_j(x*)) ≤ −ρ^2 d²(x, x*).

Adding the two inequalities above and using the sublinearity of F, we have

F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*)) ≤ F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*)) + F(x, x*; ∑_{j∈J(x*)} z_j* ∇h_j(x*)) < −(ρ^1 + ρ^2) d²(x, x*).

Using (b), we obtain

F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*)) < 0.     (10)

From relation (c) and F(x, x*; 0) = 0, we get

F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*)) = 0,

which contradicts (10). Thus, the proof is complete. □

If the pseudo-quasi-type I assumption in Theorem 3 is replaced by strict pseudo-quasi-type I, the stronger conclusion that x* is an efficient solution of (FP) is obtained. This result is stated as follows; its proof is similar to that of Theorem 3.

Theorem 4 (Sufficient optimality condition)

Let x* be a feasible solution of (FP), and suppose that there exist y* ∈ I ⊂ R^k, v* ∈ R^k, and z* ∈ R^m satisfying conditions (2)–(6) at x*. Let

A_i(x) = f_i(x) − v_i* g_i(x),  i = 1, 2, …, k,

and A(x) = (A_1(x), A_2(x), …, A_k(x)). If

(a) (y*A, z_J*h_J) is strictly pseudo-quasi-(F, α, ρ, d)-V-type I at x* on X⁰,

(b) ρ^1 + ρ^2 ≥ 0,

(c) ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*) = 0,

then x* is an efficient solution of problem (FP).

Theorem 5 (Sufficient optimality conditions)

Let x* be a feasible solution of (FP), and suppose that there exist y* ∈ I ⊂ R^k, v* ∈ R^k, and z* ∈ R^m satisfying conditions (2)–(6) at x*. Let

A_i(x) = f_i(x) − v_i* g_i(x),  i = 1, 2, …, k,

and A(x) = (A_1(x), A_2(x), …, A_k(x)). If

(a) (y*A, z_J*h_J) is quasi-strictly-pseudo-(F, α, ρ, d)-V-type I at x* on X⁰,

(b) ρ^1 + ρ^2 ≥ 0,

(c) ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*) = 0,

then x* is a weakly efficient solution of problem (FP).

Proof Assume that x* is not a weakly efficient solution of (FP). Then, as in the proof of Theorem 3, there is a feasible solution x ∈ X⁰ such that

∑_{i=1}^{k} α_i^1(x, x*) y_i* A_i(x) < ∑_{i=1}^{k} α_i^1(x, x*) y_i* A_i(x*),     (11)

−∑_{j∈J(x*)} α_j^2(x, x*) z_j* h_j(x*) = 0 ≤ 0.     (12)

Since (11), (12), and (a) hold,

F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*)) ≤ −ρ^1 d²(x, x*)     (13)

and

F(x, x*; ∑_{j∈J(x*)} z_j* ∇h_j(x*)) < −ρ^2 d²(x, x*).     (14)

From (c) and the sublinearity of F, it follows that

0 = F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*) + ∑_{j∈J(x*)} z_j* ∇h_j(x*)) ≤ F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*)) + F(x, x*; ∑_{j∈J(x*)} z_j* ∇h_j(x*)).

From (14) and (b), we get

F(x, x*; ∑_{i=1}^{k} y_i* ∇A_i(x*)) ≥ −F(x, x*; ∑_{j∈J(x*)} z_j* ∇h_j(x*)) > ρ^2 d²(x, x*) ≥ −ρ^1 d²(x, x*),

which contradicts (13). Thus, the proof is complete. □

4 Parametric duality theorem

In this section, we establish weak, strong, and strict converse duality relations between (FP) and its dual. We consider the following parametric dual (D) of (FP):

(D)  Maximize  v = (v_1, v_2, …, v_k)

subject to

∑_{i=1}^{k} y_i ∇[f_i(u) − v_i g_i(u)] + ∑_{j=1}^{m} z_j ∇h_j(u) = 0,     (15)

f_i(u) − v_i g_i(u) ≥ 0  for all i = 1, 2, …, k,     (16)

∑_{j=1}^{m} z_j h_j(u) ≥ 0,     (17)

u ∈ X, y ∈ I ⊂ R^k, z ∈ R^m_+, v_i ≥ 0  for all i = 1, 2, …, k.     (18)

Parametric duality results are now established under generalized type I assumptions.
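Before stating the theorems, the following Python sketch (my own illustration, not part of the paper) checks numerically whether a candidate (u, y, z, v) satisfies the dual constraints (15)–(18), using finite-difference gradients; all names and tolerances are assumptions.

```python
import numpy as np

def num_grad(fun, x, eps=1e-6):
    """Central-difference gradient of a scalar function."""
    x = np.asarray(x, dtype=float)
    return np.array([(fun(x + eps * e) - fun(x - eps * e)) / (2 * eps)
                     for e in np.eye(x.size)])

def is_dual_feasible(f, g, h, u, y, z, v, tol=1e-6):
    """Check (u, y, z, v) against the constraints (15)-(18) of the parametric dual (D)."""
    u, y, z, v = map(np.asarray, (u, y, z, v))
    # (15): stationarity of the parametric Lagrangian at u.
    station = sum(y[i] * num_grad(lambda t, i=i: f[i](t) - v[i] * g[i](t), u)
                  for i in range(len(f))) \
            + sum(z[j] * num_grad(h[j], u) for j in range(len(h)))
    if np.linalg.norm(station) > tol:
        return False
    # (16): f_i(u) - v_i g_i(u) >= 0 for every i.
    if any(f[i](u) - v[i] * g[i](u) < -tol for i in range(len(f))):
        return False
    # (17): sum_j z_j h_j(u) >= 0.
    if sum(z[j] * h[j](u) for j in range(len(h))) < -tol:
        return False
    # (18): y in the simplex I, z >= 0, v >= 0.
    return (abs(y.sum() - 1.0) < tol and (y >= -tol).all()
            and (z >= -tol).all() and (v >= -tol).all())
```

Under the reconstruction of the example in Section 3, the point (x*, y*, z*, v*) found there passes this check, which is consistent with the strong duality theorem below.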

Theorem 6 (Weak duality)

Let x be a feasible solution of (FP), and let (u, y, z, v) be a feasible solution of (D). Let

B_i(·) = f_i(·) − v_i g_i(·),  i = 1, 2, …, k,

and B(·) = (B_1(·), B_2(·), …, B_k(·)). Assume that

(a) (B, h) is (F, α, ρ, d)-V-type I at u,

(b) ∑_{i=1}^{k} y_i α_i^1(x, u) = 1 and α_j^2(x, u) = 1 for all j ∈ M,

(c) ∑_{i=1}^{k} y_i ρ_i^1 / α_i^1(x, u) + ∑_{j=1}^{m} z_j ρ_j^2 ≥ 0.

Then ϕ(x) ≮ v; that is, ϕ_i(x) < v_i cannot hold for all i = 1, 2, …, k.

Proof Let x be a feasible solution of (FP), and let (u, y, z, v) be a feasible solution of (D). By (a), we have

B_i(x) − B_i(u) ≥ F(x, u; α_i^1(x, u) ∇B_i(u)) + ρ_i^1 d²(x, u)  for i ∈ K,     (19)

−h_j(u) ≥ F(x, u; α_j^2(x, u) ∇h_j(u)) + ρ_j^2 d²(x, u)  for j ∈ M.     (20)

Multiplying (19) by y_i / α_i^1(x, u) ≥ 0, i ∈ K, and (20) by z_j ≥ 0, j ∈ M, summing over all i and j, and using the sublinearity of F together with α_j^2(x, u) = 1 for j ∈ M, we get

∑_{i=1}^{k} y_i B_i(x)/α_i^1(x, u) − ∑_{i=1}^{k} y_i B_i(u)/α_i^1(x, u) ≥ F(x, u; ∑_{i=1}^{k} y_i ∇B_i(u)) + ∑_{i=1}^{k} y_i ρ_i^1 d²(x, u)/α_i^1(x, u),

−∑_{j=1}^{m} z_j h_j(u) ≥ F(x, u; ∑_{j=1}^{m} z_j ∇h_j(u)) + ∑_{j=1}^{m} z_j ρ_j^2 d²(x, u).

Adding the two relations above and using the sublinearity of F along with (15), we obtain

∑_{i=1}^{k} y_i B_i(x)/α_i^1(x, u) − ∑_{i=1}^{k} y_i B_i(u)/α_i^1(x, u) − ∑_{j=1}^{m} z_j h_j(u) ≥ ( ∑_{i=1}^{k} y_i ρ_i^1/α_i^1(x, u) + ∑_{j=1}^{m} z_j ρ_j^2 ) d²(x, u),

which by virtue of (c) implies

∑_{i=1}^{k} y_i B_i(x)/α_i^1(x, u) ≥ ∑_{i=1}^{k} y_i B_i(u)/α_i^1(x, u) + ∑_{j=1}^{m} z_j h_j(u).     (21)

Suppose, on the contrary, that ϕ(x) < v. Then f_i(x) − v_i g_i(x) < 0, that is, B_i(x) < 0, for all i ∈ K. Since y ∈ I and α_i^1(x, u) > 0, the left-hand side of (21) is negative, while (16) and (17) make its right-hand side nonnegative:

∑_{i=1}^{k} y_i B_i(x)/α_i^1(x, u) < 0 ≤ ∑_{i=1}^{k} y_i B_i(u)/α_i^1(x, u) + ∑_{j=1}^{m} z_j h_j(u).

This contradicts (21), and the proof is complete. □

Suppose that x* is a weakly efficient solution of (FP). Using x* and the necessary optimality conditions of Theorem 1, we can construct a feasible solution of (D). Moreover, under suitable additional hypotheses, (FP) and (D) attain the same optimal value, and we obtain the following strong duality theorem.

Theorem 7 (Strong duality)

Let x* be a weakly efficient solution of problem (FP), and let the constraint qualification (1) be satisfied for h at x*. Then there exist y* ∈ I ⊂ R^k, z* ∈ R^m_+, and v* ∈ R^k such that (x*, y*, z*, v*) is a feasible solution of (D). If, in addition, the hypotheses of Theorem 6 are fulfilled, then (x*, y*, z*, v*) is a weakly efficient solution of (D), and the respective objective values of (FP) and (D) are equal.

Proof Let x* be a weakly efficient solution of (FP). By Theorem 1, there exist y* ∈ I ⊂ R^k, z* ∈ R^m_+, and v* ∈ R^k such that (x*, y*, z*, v*) satisfies relations (2)–(6); hence (x*, y*, z*, v*) is a feasible solution of (D). If (x*, y*, z*, v*) is not a weakly efficient solution of (D), then there exists a feasible solution (u, y, z, v) of (D) such that

v_i > v_i* = f_i(x*)/g_i(x*)  for all i ∈ K.

It follows that ϕ(x*) < v, which contradicts weak duality (Theorem 6). Hence (x*, y*, z*, v*) is a weakly efficient solution of (D), and by (3) the objective values of (FP) and (D) at these solutions are equal, namely ϕ(x*) = v*. □

Theorem 8 (Strict converse duality)

Let x* and (u*, y*, z*, v*) be weakly efficient solutions of (FP) and (D), respectively, with v_i* = f_i(x*)/g_i(x*) for all i = 1, 2, …, k. Assume that the hypotheses of Theorem 7 are fulfilled. Let

A_i(·) = f_i(·) − v_i* g_i(·),  i = 1, 2, …, k,

and A(·) = (A_1(·), A_2(·), …, A_k(·)). Assume further that

(a) (A, h) is semistrictly (F, α, ρ, d)-V-type I at u* with α_i^1(x*, u*) = 1, i ∈ K, and α_j^2(x*, u*) = 1, j ∈ M,

(b) ∑_{i=1}^{k} y_i* ρ_i^1 + ∑_{j=1}^{m} z_j* ρ_j^2 ≥ 0.

Then x* = u*.

Proof Assume that x* ≠ u*. By (18) and (a), applied at x = x*, and summing over i and j, we have

∑_{i=1}^{k} y_i* A_i(x*) − ∑_{i=1}^{k} y_i* A_i(u*) > F(x*, u*; ∑_{i=1}^{k} y_i* ∇A_i(u*)) + ∑_{i=1}^{k} y_i* ρ_i^1 d²(x*, u*),

−∑_{j=1}^{m} z_j* h_j(u*) ≥ F(x*, u*; ∑_{j=1}^{m} z_j* ∇h_j(u*)) + ∑_{j=1}^{m} z_j* ρ_j^2 d²(x*, u*).

Adding the two inequalities above and using the sublinearity of F along with (15), we obtain

∑_{i=1}^{k} y_i* A_i(x*) − ∑_{i=1}^{k} y_i* A_i(u*) − ∑_{j=1}^{m} z_j* h_j(u*) > ( ∑_{i=1}^{k} y_i* ρ_i^1 + ∑_{j=1}^{m} z_j* ρ_j^2 ) d²(x*, u*),

which in view of (b) yields

∑_{i=1}^{k} y_i* A_i(x*) > ∑_{i=1}^{k} y_i* A_i(u*) + ∑_{j=1}^{m} z_j* h_j(u*).     (22)

Since

v_i* = f_i(x*)/g_i(x*)  for all i = 1, 2, …, k,

we have A_i(x*) = 0 for every i, and from relations (16), (17), and (18) we get

∑_{i=1}^{k} y_i* A_i(x*) = 0 ≤ ∑_{i=1}^{k} y_i* A_i(u*) + ∑_{j=1}^{m} z_j* h_j(u*),

which contradicts (22). Hence, the proof is complete. □

Declarations

Acknowledgements

The author wishes to thank the referees for their valuable suggestions, which have considerably improved the presentation of this article.

Authors’ Affiliations

(1)
Chung-Jen Junior College of Nursing, Health Sciences and Management

References

  1. Singh C, Hanson MA: Saddlepoint theory for nondifferentiable multiobjective fractional programming. J. Inf. Optim. Sci. 1986, 7: 41–48.
  2. Singh C: Optimality conditions in multiobjective differentiable programming. J. Optim. Theory Appl. 1987, 53: 115–123. 10.1007/BF00938820
  3. Weir T: Duality for nondifferentiable multiple objective fractional programming problems. Util. Math. 1989, 36: 53–64.
  4. Kaul RN, Lyall V: A note on nonlinear fractional vector maximization. Opsearch 1989, 26: 108–121.
  5. Liu JC: Optimality and duality for multiobjective fractional programming involving nonsmooth pseudoinvex functions. Optimization 1996, 37: 27–39. 10.1080/02331939608844194
  6. Bhatia D, Garg PK: Duality for nonsmooth nonlinear fractional multiobjective programs via (F, ρ)-convexity. Optimization 1998, 43: 185–197. 10.1080/02331939808844382
  7. Bhatia D, Pandey S: A note on multiobjective fractional programming. Cahiers Centre Études Rech. Opér. 1991, 33: 3–11.
  8. Preda V: On efficiency and duality for multiobjective programs. J. Math. Anal. Appl. 1992, 166: 365–377.
  9. Hanson MA, Mond B: Further generalizations of convexity in mathematical programming. J. Inf. Optim. Sci. 1986, 3: 25–32.
  10. Vial JP: Strong and weak convexity of sets and functions. Math. Oper. Res. 1982, 8: 231–259.
  11. Gulati TR, Ahmad I, Agarwal D: Sufficiency and duality in multiobjective programming under generalized type I functions. J. Optim. Theory Appl. 2007, 135: 411–427. 10.1007/s10957-007-9271-4
  12. Hanson MA, Pini R, Singh C: Multiobjective programming under generalized type I invexity. J. Math. Anal. Appl. 2001, 261: 562–577. 10.1006/jmaa.2001.7542
  13. Liang ZA, Huang HX, Pardalos PM: Optimality conditions and duality for a class of nonlinear fractional programming problems. J. Optim. Theory Appl. 2001, 110: 611–619. 10.1023/A:1017540412396
  14. Hanson MA, Mond B: Necessary and sufficient conditions in constrained optimization. Math. Program. 1987, 37: 51–58. 10.1007/BF02591683
  15. Aghezzaf B, Hachimi M: Generalized invexity and duality in multiobjective programming problems. J. Glob. Optim. 2000, 18: 91–101. 10.1023/A:1008321026317
  16. Ahmad I, Gulati TR: Mixed type duality for multiobjective variational problems with generalized (F, ρ)-convexity. J. Math. Anal. Appl. 2005, 306: 669–683. 10.1016/j.jmaa.2004.10.019
  17. Kim DS, Kim MH: Generalized type I invexity and duality in multiobjective variational problems. J. Math. Anal. Appl. 2005, 307: 533–554. 10.1016/j.jmaa.2005.02.018
  18. Lai HC, Ho SC: Optimality and duality for nonsmooth multiobjective fractional programmings involving exponential V-r-invexity. Nonlinear Anal. 2012, 75: 3157–3166. 10.1016/j.na.2011.12.013
  19. Lai HC, Ho SC: Duality for a system of multiobjective problems with exponential type invexity functions. J. Nonlinear Convex Anal. 2012, 13(1): 97–110.
  20. Lee JC, Ho SC: Optimality and duality for multiobjective fractional problems with r-invexity. Taiwan. J. Math. 2008, 82(3): 719–740.
  21. Göpfert A, Riahi H, Tammer C, Zălinescu C: Variational Methods in Partially Ordered Spaces. Springer, New York; 2003.

Copyright

© Ho; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.