On ϵ-solutions for robust fractional optimization problems

Abstract

We consider ϵ-solutions (approximate solutions) for a fractional optimization problem in the face of data uncertainty. Using the robust optimization (worst-case) approach, we establish optimality theorems and duality theorems for ϵ-solutions of the fractional optimization problem. Moreover, we give an example illustrating our duality theorems.

MSC:90C25, 90C32, 90C46.

1 Introduction

A robust fractional optimization problem is one that optimizes a fractional objective function over a constraint set defined by functions involving data uncertainty.

To obtain ϵ-solutions (approximate solutions), many authors have established ϵ-optimality conditions and ϵ-duality theorems for several kinds of optimization problems [1–7]. In particular, Lee and Lee [8] gave ϵ-duality theorems for a convex semidefinite optimization problem with conic constraints. Also, they [9] established optimality theorems and duality theorems for ϵ-solutions of convex optimization problems with uncertain data.

In [10–15], many authors have treated fractional programming problems in the absence of data uncertainty. Recently, many authors have studied robust optimization problems [9, 16–21]. Very recently, Jeyakumar and Li [22] established duality theorems for a fractional programming problem in the face of data uncertainty via robust optimization.

The purpose of this paper is to extend the ϵ-optimality theorems and ϵ-duality theorems in [9] to fractional optimization problems with uncertain data.

Consider the following standard form of fractional programming problem with a geometric constraint set:

$$(\mathrm{FP})\quad \min\ \frac{f(x)}{g(x)} \quad \text{s.t.}\ \ h_i(x)\le 0,\ i=1,\ldots,m,\ x\in C,$$

where $f,h_i:\mathbb{R}^n\to\mathbb{R}$, $i=1,\ldots,m$, are convex functions, $C$ is a closed convex cone of $\mathbb{R}^n$, and $g:\mathbb{R}^n\to\mathbb{R}$ is a concave function such that, for any $x\in C$, $f(x)\ge 0$ and $g(x)>0$.

The fractional programming problem (FP) in the face of data uncertainty in the constraints can be captured by the problem:

$$(\mathrm{UFP})\quad \min\ \max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)} \quad \text{s.t.}\ \ h_i(x,w_i)\le 0,\ i=1,\ldots,m,\ x\in C,$$

where $f:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}$ with $f(\cdot,u)$ and $h_i(\cdot,w_i)$ convex, $g:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ with $g(\cdot,v)$ concave, and $u\in\mathbb{R}^p$, $v\in\mathbb{R}^p$, and $w_i\in\mathbb{R}^q$ are uncertain parameters which belong to the convex and compact uncertainty sets $U\subset\mathbb{R}^p$, $V\subset\mathbb{R}^p$, and $W_i\subset\mathbb{R}^q$, $i=1,\ldots,m$, respectively.

We study ϵ-optimality theorems and ϵ-duality theorems for the uncertain fractional programming model problem (UFP) by examining its robust (worst-case) counterpart [18]:

$$(\mathrm{RFP})\quad \min\ \max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)} \quad \text{s.t.}\ \ h_i(x,w_i)\le 0,\ \forall w_i\in W_i,\ i=1,\ldots,m,\ x\in C.$$

Clearly, $A:=\{x\in C \mid h_i(x,w_i)\le 0,\ \forall w_i\in W_i,\ i=1,\ldots,m\}$ is the feasible set of (RFP).

Let $\epsilon\ge 0$. Then $\bar{x}$ is called an $\epsilon$-solution of (RFP) if, for any $x\in A$,

$$\max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)}\ \ge\ \max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon.$$

Using the parametric approach, we transform the problem (RFP) into the robust non-fractional convex optimization problem $(\mathrm{RNCP})_r$ with a parameter $r\in\mathbb{R}_+$:

$$(\mathrm{RNCP})_r\quad \min\ \max_{u\in U}f(x,u)-r\min_{v\in V}g(x,v) \quad \text{s.t.}\ \ h_i(x,w_i)\le 0,\ \forall w_i\in W_i,\ i=1,\ldots,m,\ x\in C.$$

Let $\epsilon\ge 0$. Then $\bar{x}$ is called an $\epsilon$-solution of $(\mathrm{RNCP})_r$ if, for any $x\in A$,

$$\max_{u\in U}f(x,u)-r\min_{v\in V}g(x,v)\ \ge\ \max_{u\in U}f(\bar{x},u)-r\min_{v\in V}g(\bar{x},v)-\epsilon.$$
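As a quick illustration of these worst-case reductions, the following Python sketch (ours, not part of the paper) evaluates the robust objective of (RFP), robust feasibility, and the objective of $(\mathrm{RNCP})_r$ for one-dimensional data that anticipate Example 4.1 in Section 4; all helper names are our own, and the endpoint evaluations rely on the data being affine in the uncertain parameters.

```python
# Minimal sketch (assumption: interval uncertainty sets and data affine in the
# uncertain parameters, as in Example 4.1 below): f(x,u) = u*x + 1,
# g(x,v) = v*x + 2, h(x,w) = 2*w*x - 3, U = V = W = [1, 2], C = R_+.
U = V = W = (1.0, 2.0)                       # interval endpoints of the uncertainty sets

def robust_objective(x):
    """max_{(u,v) in U x V} f(x,u)/g(x,v); endpoints suffice since f, g are affine and g > 0."""
    return max((u * x + 1) / (v * x + 2) for u in U for v in V)

def robust_feasible(x):
    """x in C = R_+ and h(x,w) <= 0 for every w in W (checked at the endpoints)."""
    return x >= 0 and all(2 * w * x - 3 <= 0 for w in W)

def rncp_objective(x, r):
    """Objective of the parametric problem (RNCP)_r: max_u f(x,u) - r * min_v g(x,v)."""
    return max(u * x + 1 for u in U) - r * min(v * x + 2 for v in V)

x, r = 0.5, 0.25
print(robust_feasible(x), robust_objective(x), rncp_objective(x, r))
```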

In this paper, we consider ϵ-solutions for (RFP), and we establish optimality theorems and duality theorems for ϵ-solutions of the robust fractional optimization problem. Moreover, we give an example illustrating our duality theorems.

2 Preliminaries

Let us first recall some notation and preliminary results which will be used throughout this paper. $\mathbb{R}^n$ denotes the Euclidean space of dimension $n$. The nonnegative orthant of $\mathbb{R}^n$ is denoted by $\mathbb{R}^n_+$ and is defined by $\mathbb{R}^n_+:=\{(x_1,\ldots,x_n)\in\mathbb{R}^n : x_i\ge 0\}$. We say a set $A$ is convex whenever $\mu a_1+(1-\mu)a_2\in A$ for all $\mu\in[0,1]$, $a_1,a_2\in A$. A function $f:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is said to be convex if, for all $\mu\in[0,1]$,

$$f\bigl((1-\mu)x+\mu y\bigr)\ \le\ (1-\mu)f(x)+\mu f(y)$$

for all $x,y\in\mathbb{R}^n$. The function $f$ is said to be concave whenever $-f$ is convex. Let $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ be a convex function. The subdifferential of $g$ at $a\in\operatorname{dom}g$ is defined by

$$\partial g(a):=\bigl\{v\in\mathbb{R}^n \mid g(x)\ge g(a)+\langle v,x-a\rangle\ \ \forall x\in\operatorname{dom}g\bigr\},$$

where $\langle\cdot,\cdot\rangle$ is the inner product on $\mathbb{R}^n$ and $\operatorname{dom}g:=\{x\in\mathbb{R}^n : g(x)<+\infty\}$. Let $\epsilon\ge 0$. Then the $\epsilon$-subdifferential of $g$ at $a\in\operatorname{dom}g$ is defined by

$$\partial_\epsilon g(a):=\bigl\{v\in\mathbb{R}^n \mid g(x)\ge g(a)+\langle v,x-a\rangle-\epsilon\ \ \forall x\in\operatorname{dom}g\bigr\}.$$

The function $f$ is said to be proper if $f(x)>-\infty$ for all $x\in\mathbb{R}^n$. We say $f$ is lower semicontinuous if $\liminf_{y\to x}f(y)\ge f(x)$ for all $x\in\mathbb{R}^n$. As usual, for any proper convex function $g$ on $\mathbb{R}^n$, its conjugate function $g^*:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is defined, for any $x^*\in\mathbb{R}^n$, by $g^*(x^*)=\sup\{\langle x^*,x\rangle-g(x)\mid x\in\mathbb{R}^n\}$. The epigraph of a function $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$, $\operatorname{epi}g$, is defined by $\operatorname{epi}g=\{(x,r)\in\mathbb{R}^n\times\mathbb{R} \mid g(x)\le r\}$. We denote the convex hull of a subset $A$ of $\mathbb{R}^n$ by $\operatorname{co}A$, and the closure of $A$ by $\operatorname{cl}A$. Let $C$ be a closed convex set in $\mathbb{R}^n$ and $x\in C$. Then the normal cone $N_C(x)$ to $C$ at $x$ is defined by

$$N_C(x)=\bigl\{v\in\mathbb{R}^n \mid \langle v,y-x\rangle\le 0\ \text{ for all } y\in C\bigr\},$$

and, for $\epsilon\ge 0$, the $\epsilon$-normal cone $N_C^\epsilon(x)$ to $C$ at $x$ is defined by

$$N_C^\epsilon(x)=\bigl\{v\in\mathbb{R}^n \mid \langle v,y-x\rangle\le\epsilon\ \text{ for all } y\in C\bigr\}.$$

When $C$ is a closed convex cone in $\mathbb{R}^n$, we denote $N_C(0)$ by $C^*$ and call it the negative dual cone of $C$.
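To make the $\epsilon$-subdifferential concrete, here is a small numerical check (our own sketch, not from the paper) for the convex function $f(x)=x^2$, for which one can verify directly that $\partial_\epsilon f(a)=[2a-2\sqrt{\epsilon},\,2a+2\sqrt{\epsilon}]$; the grid below merely stands in for "all $x$".

```python
# Checking epsilon-subgradient membership for f(x) = x^2 at a point a.
# By definition, v is an epsilon-subgradient of f at a iff
#   f(x) >= f(a) + v*(x - a) - eps   for all x,
# which for f(x) = x^2 is equivalent to |v - 2a| <= 2*sqrt(eps).
import numpy as np

def is_eps_subgradient(v, a, eps, xs):
    """Check the epsilon-subgradient inequality for f(x) = x**2 on sample points xs."""
    f = lambda x: x**2
    return np.all(f(xs) >= f(a) + v * (xs - a) - eps)

a, eps = 1.0, 0.25
xs = np.linspace(-10.0, 10.0, 20001)              # sample grid standing in for "all x"
for v in [2*a, 2*a + 2*np.sqrt(eps), 2*a + 2.1*np.sqrt(eps)]:
    print(v, is_eps_subgradient(v, a, eps, xs))   # True, True, False
```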

Proposition 2.1 [23]

Let $f:\mathbb{R}^n\to\mathbb{R}$ be a convex function and let $\delta_C$ be the indicator function with respect to a closed convex subset $C$ of $\mathbb{R}^n$, that is, $\delta_C(x)=0$ if $x\in C$, and $\delta_C(x)=+\infty$ if $x\notin C$. Let $\epsilon\ge 0$. Then

$$\partial_\epsilon(f+\delta_C)(\bar{x})=\bigcup_{\substack{\epsilon_0\ge 0,\ \epsilon_1\ge 0\\ \epsilon_0+\epsilon_1=\epsilon}}\bigl\{\partial_{\epsilon_0}f(\bar{x})+\partial_{\epsilon_1}\delta_C(\bar{x})\bigr\}.$$

Proposition 2.2 [24, 25]

If $f:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is a proper lower semicontinuous convex function and if $a\in\operatorname{dom}f:=\{x\in\mathbb{R}^n\mid f(x)<+\infty\}$, then

$$\operatorname{epi}f^*=\bigcup_{\epsilon\ge 0}\bigl\{\bigl(v,\langle v,a\rangle+\epsilon-f(a)\bigr)\ \big|\ v\in\partial_\epsilon f(a)\bigr\}.$$

Proposition 2.3 [26]

Let $f:\mathbb{R}^n\to\mathbb{R}$ be a convex function and $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ be a proper lower semicontinuous convex function. Then

$$\operatorname{epi}(f+g)^*=\operatorname{epi}f^*+\operatorname{epi}g^*.$$

Moreover, if $f,g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ are proper lower semicontinuous convex functions and if $\operatorname{dom}f\cap\operatorname{dom}g\neq\emptyset$, then

$$\operatorname{epi}(f+g)^*=\operatorname{cl}\bigl(\operatorname{epi}f^*+\operatorname{epi}g^*\bigr).$$

Proposition 2.4 [22, 26]

Let $h_i:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$, $i\in I$ (where $I$ is an arbitrary index set), be proper lower semicontinuous convex functions. Suppose that there exists $x_0\in\mathbb{R}^n$ such that $\sup_{i\in I}h_i(x_0)<+\infty$. Then

$$\operatorname{epi}\Bigl(\sup_{i\in I}h_i\Bigr)^*=\operatorname{cl}\Bigl(\operatorname{co}\bigcup_{i\in I}\operatorname{epi}h_i^*\Bigr).$$

Proposition 2.5 [23]

Let $h_i:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$, $i=1,\ldots,m$, be proper lower semicontinuous convex functions. Let $\epsilon\ge 0$. If $\bigcap_{i=1}^m\operatorname{ri}\operatorname{dom}h_i\neq\emptyset$, where $\operatorname{ri}\operatorname{dom}h_i$ is the relative interior of $\operatorname{dom}h_i$, then for all $x\in\bigcap_{i=1}^m\operatorname{dom}h_i$,

$$\partial_\epsilon\Bigl(\sum_{i=1}^m h_i\Bigr)(x)=\bigcup\Biggl\{\sum_{i=1}^m\partial_{\epsilon_i}h_i(x)\ \Bigg|\ \epsilon_i\ge 0,\ i=1,\ldots,m,\ \sum_{i=1}^m\epsilon_i=\epsilon\Biggr\}.$$

Proposition 2.6 [9]

Let $h_i:\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}$, $i=1,\ldots,m$, be continuous functions such that, for all $w_i\in\mathbb{R}^q$, $h_i(\cdot,w_i)$ is a convex function, and let $C$ be a closed convex cone of $\mathbb{R}^n$. Suppose that each $W_i$, $i=1,\ldots,m$, is compact and convex, and that there exists $x_0\in C$ such that $h_i(x_0,w_i)<0$ for all $w_i\in W_i$, $i=1,\ldots,m$. Then

$$\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\Bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\Bigr)^*+C^*\times\mathbb{R}_+$$

is closed.

Proposition 2.7 [9]

Let $h_i:\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}$, $i=1,\ldots,m$, be continuous functions and let $C$ be a closed convex cone of $\mathbb{R}^n$. Suppose that each $W_i\subset\mathbb{R}^q$, $i=1,\ldots,m$, is convex, that, for all $w_i\in\mathbb{R}^q$, $h_i(\cdot,w_i)$ is a convex function, and that, for each $x\in\mathbb{R}^n$, $h_i(x,\cdot)$ is concave on $W_i$. Then

$$\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\Bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\Bigr)^*+C^*\times\mathbb{R}_+$$

is convex.

Now we give the following relation between the $\epsilon$-solutions of (RFP) and $(\mathrm{RNCP})_{\bar{r}}$.

Lemma 2.1 Let $\bar{x}\in A$ and let $\epsilon\ge 0$. If $\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon\ge 0$, then the following statements are equivalent:

  1. (i)

    x ¯ is an ϵ-solution of (RFP);

  2. (ii)

    $\bar{x}$ is an $\bar{\epsilon}$-solution of $(\mathrm{RNCP})_{\bar{r}}$, where $\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$ and $\bar{\epsilon}=\epsilon\min_{v\in V}g(\bar{x},v)$.

Proof ($\Rightarrow$) Let $\bar{x}\in A$ be an $\epsilon$-solution of (RFP). Then for any $x\in A$, $\max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)}\ge\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$. Put $\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$ and $\bar{\epsilon}=\epsilon\min_{v\in V}g(\bar{x},v)$. Then we have, for any $x\in A$, $\max_{u\in U}f(x,u)-\bar{r}\min_{v\in V}g(x,v)\ge 0$. Since $\max_{u\in U}f(\bar{x},u)-\bar{r}\min_{v\in V}g(\bar{x},v)-\epsilon\min_{v\in V}g(\bar{x},v)=0$, for any $x\in A$,

$$\max_{u\in U}f(x,u)-\bar{r}\min_{v\in V}g(x,v)\ \ge\ \max_{u\in U}f(\bar{x},u)-\bar{r}\min_{v\in V}g(\bar{x},v)-\epsilon\min_{v\in V}g(\bar{x},v)\ =\ \max_{u\in U}f(\bar{x},u)-\bar{r}\min_{v\in V}g(\bar{x},v)-\bar{\epsilon}.$$

Hence x ¯ is an ϵ ¯ -solution of ( RNCP ) r ¯ .

($\Leftarrow$) Let $\bar{x}\in A$ be an $\bar{\epsilon}$-solution of $(\mathrm{RNCP})_{\bar{r}}$. Then for any $x\in A$, $\max_{u\in U}f(x,u)-\bar{r}\min_{v\in V}g(x,v)\ge\max_{u\in U}f(\bar{x},u)-\bar{r}\min_{v\in V}g(\bar{x},v)-\bar{\epsilon}$. Since $\max_{u\in U}f(\bar{x},u)-\bar{r}\min_{v\in V}g(\bar{x},v)-\epsilon\min_{v\in V}g(\bar{x},v)=0$, for any $x\in A$, $\max_{u\in U}f(x,u)-\bar{r}\min_{v\in V}g(x,v)\ge 0$. So, we have $\max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)}\ge\bar{r}$. Since $\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$,

$$\max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)}\ \ge\ \max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon.$$

Hence x ¯ is an ϵ-solution of (RFP). □
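The following small Python sketch (ours, not from the paper) illustrates Lemma 2.1 numerically on the data of Example 4.1 in Section 4; the grid over $A$ and the chosen point $\bar{x}$ are our own illustrative choices.

```python
# Numerical illustration of Lemma 2.1 with f(x,u) = u*x + 1, g(x,v) = v*x + 2,
# h(x,w) = 2*w*x - 3, U = V = [1, 2], C = R_+ (the data of Example 4.1 below).
import numpy as np

U = V = (1.0, 2.0)
max_f = lambda x: max(u * x + 1 for u in U)       # worst-case numerator (max over U)
min_g = lambda x: min(v * x + 2 for v in V)       # worst-case denominator (min over V)
phi   = lambda x: max_f(x) / min_g(x)             # robust objective of (RFP)

A = np.linspace(0.0, 0.75, 2001)                  # feasible set A = [0, 3/4]
eps, x_bar = 1.0 / 3.0, 0.5                       # our choices; x_bar lies in [0, 4/7]

r_bar   = phi(x_bar) - eps
eps_bar = eps * min_g(x_bar)

is_rfp_sol  = all(phi(x) >= phi(x_bar) - eps for x in A)
is_rncp_sol = all(max_f(x) - r_bar * min_g(x)
                  >= max_f(x_bar) - r_bar * min_g(x_bar) - eps_bar for x in A)
print(is_rfp_sol, is_rncp_sol)   # the two answers coincide, as Lemma 2.1 asserts
                                 # (choosing x_bar > 4/7 would make both False)
```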

3 ϵ-Optimality theorems

In this section, we establish ϵ-optimality theorems for ϵ-solutions for the robust fractional optimization problem.

Now we give the following lemma, which is a robust version of the Farkas lemma for non-fractional convex functions.

Lemma 3.1 Let $f:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}$, $i=1,\ldots,m$, be functions such that, for any $u\in U$, $f(\cdot,u)$ and, for each $w_i\in W_i$, $h_i(\cdot,w_i)$ are convex functions, and, for any $x\in\mathbb{R}^n$, $f(x,\cdot)$ is a concave function. Let $g:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ be a function such that, for any $v\in V$, $g(\cdot,v)$ is a concave function, and, for all $x\in\mathbb{R}^n$, $g(x,\cdot)$ is a convex function. Let $U\subset\mathbb{R}^p$, $V\subset\mathbb{R}^p$, and $W_i\subset\mathbb{R}^q$, $i=1,\ldots,m$, be convex and compact sets. Let $r\ge 0$ and let $C$ be a closed convex cone of $\mathbb{R}^n$. Assume that $A:=\{x\in C\mid h_i(x,w_i)\le 0,\ \forall w_i\in W_i,\ i=1,\ldots,m\}\neq\emptyset$. Then the following statements are equivalent:

  1. (i)

    $\{x\in C\mid h_i(x,w_i)\le 0,\ \forall w_i\in W_i,\ i=1,\ldots,m\}\subseteq\{x\in\mathbb{R}^n\mid \max_{u\in U}f(x,u)-r\min_{v\in V}g(x,v)\ge 0\}$;

  2. (ii)

    there exist $\bar{u}\in U$ and $\bar{v}\in V$ such that

    $\{x\in C\mid h_i(x,w_i)\le 0,\ \forall w_i\in W_i,\ i=1,\ldots,m\}\subseteq\{x\in\mathbb{R}^n\mid f(x,\bar{u})-rg(x,\bar{v})\ge 0\}$;

  3. (iii)

    $(0,0)\in\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*+\bigcup_{v\in V}\operatorname{epi}\bigl(-rg(\cdot,v)\bigr)^*+\operatorname{cl}\operatorname{co}\Bigl(\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\bigr)^*+C^*\times\mathbb{R}_+\Bigr)$;

  4. (iv)

    $(0,0)\in\operatorname{epi}\bigl(\max_{u\in U}f(\cdot,u)\bigr)^*+\operatorname{epi}\bigl(-r\min_{v\in V}g(\cdot,v)\bigr)^*+\operatorname{cl}\operatorname{co}\Bigl(\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\bigr)^*+C^*\times\mathbb{R}_+\Bigr)$.

Proof Let $D:=\{x\in\mathbb{R}^n\mid h_i(x,w_i)\le 0,\ \forall w_i\in W_i,\ i=1,\ldots,m\}$. Then $A=C\cap D$. We will prove that $\operatorname{epi}\delta_A^*=\operatorname{cl}\operatorname{co}\bigl(\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\bigr)^*+C^*\times\mathbb{R}_+\bigr)$. For any $x\in\mathbb{R}^n$,

$$\delta_A(x)=\delta_C(x)+\delta_D(x)\quad\text{and}\quad \delta_D(x)=\sup_{w_i\in W_i,\ \lambda_i\ge 0}\sum_{i=1}^m\lambda_ih_i(x,w_i).$$

Thus, by Propositions 2.3 and 2.4, we have

$$\begin{aligned}
\operatorname{epi}\delta_A^* &= \operatorname{epi}(\delta_D+\delta_C)^* = \operatorname{cl}\bigl(\operatorname{epi}\delta_D^*+\operatorname{epi}\delta_C^*\bigr)\\
&= \operatorname{cl}\Bigl(\operatorname{epi}\Bigl(\sup_{w_i\in W_i,\ \lambda_i\ge 0}\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\Bigr)^*+\operatorname{epi}\delta_C^*\Bigr)\\
&= \operatorname{cl}\Bigl(\operatorname{cl}\operatorname{co}\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\Bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\Bigr)^*+\operatorname{epi}\delta_C^*\Bigr)\\
&= \operatorname{cl}\operatorname{co}\Bigl(\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\Bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\Bigr)^*+C^*\times\mathbb{R}_+\Bigr).
\end{aligned}$$

[(i) $\Leftrightarrow$ (iv)] Now we assume that the statement (iv) holds. Then, by Proposition 2.3, the statement (iv) is equivalent to

$$(0,0)\in\operatorname{epi}\Bigl(\max_{u\in U}f(\cdot,u)\Bigr)^*+\operatorname{epi}\Bigl(-r\min_{v\in V}g(\cdot,v)\Bigr)^*+\operatorname{epi}\delta_A^*=\operatorname{epi}\Bigl(\max_{u\in U}f(\cdot,u)-r\min_{v\in V}g(\cdot,v)+\delta_A\Bigr)^*.$$

Equivalently, by the definition of the epigraph of $\bigl(\max_{u\in U}f(\cdot,u)-r\min_{v\in V}g(\cdot,v)+\delta_A\bigr)^*$,

$$\Bigl(\max_{u\in U}f(\cdot,u)-r\min_{v\in V}g(\cdot,v)+\delta_A\Bigr)^*(0)\le 0.$$

From the definition of a conjugate function, this means that, for any $x\in\mathbb{R}^n$,

$$\max_{u\in U}f(x,u)-r\min_{v\in V}g(x,v)+\delta_A(x)\ge 0.$$

It is equivalent to the statement that, for any $x\in A$,

$$\max_{u\in U}f(x,u)-r\min_{v\in V}g(x,v)\ge 0.$$

[(ii) $\Leftrightarrow$ (iii)] Now we assume that the statement (iii) holds. Then the statement (iii) is equivalent to

$$(0,0)\in\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*+\bigcup_{v\in V}\operatorname{epi}\bigl(-rg(\cdot,v)\bigr)^*+\operatorname{epi}\delta_A^*.$$

It means that there exist $\bar{u}\in U$ and $\bar{v}\in V$ such that

$$(0,0)\in\operatorname{epi}\bigl(f(\cdot,\bar{u})-rg(\cdot,\bar{v})+\delta_A\bigr)^*.$$

It is equivalent to the statement that there exist $\bar{u}\in U$ and $\bar{v}\in V$ such that

$$\bigl(f(\cdot,\bar{u})-rg(\cdot,\bar{v})+\delta_A\bigr)^*(0)\le 0.$$

From the definition of a conjugate function, there exist $\bar{u}\in U$ and $\bar{v}\in V$ such that, for any $x\in\mathbb{R}^n$,

$$f(x,\bar{u})-rg(x,\bar{v})+\delta_A(x)\ge 0.$$

It means that there exist $\bar{u}\in U$ and $\bar{v}\in V$ such that, for any $x\in A$,

$$f(x,\bar{u})-rg(x,\bar{v})\ge 0.$$

[(iii) $\Leftrightarrow$ (iv)] To get the desired result, it suffices to show that

$$\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*=\operatorname{epi}\Bigl(\max_{u\in U}f(\cdot,u)\Bigr)^*,$$
(1)
$$\bigcup_{v\in V}\operatorname{epi}\bigl(-rg(\cdot,v)\bigr)^*=\operatorname{epi}\Bigl(-r\min_{v\in V}g(\cdot,v)\Bigr)^*.$$
(2)

By Proposition 2.4, $\operatorname{epi}\bigl(\max_{u\in U}f(\cdot,u)\bigr)^*=\operatorname{cl}\operatorname{co}\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*$. Let $(z_1,\alpha_1),(z_2,\alpha_2)\in\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*$ and let $\mu\in[0,1]$. Then there exist $u_1,u_2\in U$ such that $(z_1,\alpha_1)\in\operatorname{epi}\bigl(f(\cdot,u_1)\bigr)^*$ and $(z_2,\alpha_2)\in\operatorname{epi}\bigl(f(\cdot,u_2)\bigr)^*$, that is, $\bigl(f(\cdot,u_1)\bigr)^*(z_1)\le\alpha_1$ and $\bigl(f(\cdot,u_2)\bigr)^*(z_2)\le\alpha_2$. Using the definition of a conjugate function, we have, for all $x\in\mathbb{R}^n$,

$$\langle z_1,x\rangle-f(x,u_1)\le\alpha_1\quad\text{and}\quad\langle z_2,x\rangle-f(x,u_2)\le\alpha_2.$$
(3)

Since, for all $x\in\mathbb{R}^n$, $f(x,\cdot)$ is concave, we have $f(x,\mu u_1+(1-\mu)u_2)\ge\mu f(x,u_1)+(1-\mu)f(x,u_2)$, i.e.,

$$-f\bigl(x,\mu u_1+(1-\mu)u_2\bigr)\le-\mu f(x,u_1)-(1-\mu)f(x,u_2).$$
(4)

So, from (3) and (4), we have, for all $x\in\mathbb{R}^n$,

$$\bigl\langle\mu z_1+(1-\mu)z_2,x\bigr\rangle-f\bigl(x,\mu u_1+(1-\mu)u_2\bigr)\le\mu\alpha_1+(1-\mu)\alpha_2,$$

and so $\bigl(f(\cdot,\mu u_1+(1-\mu)u_2)\bigr)^*\bigl(\mu z_1+(1-\mu)z_2\bigr)\le\mu\alpha_1+(1-\mu)\alpha_2$. Hence, we have

$$\bigl(\mu z_1+(1-\mu)z_2,\ \mu\alpha_1+(1-\mu)\alpha_2\bigr)\in\operatorname{epi}\bigl(f(\cdot,\mu u_1+(1-\mu)u_2)\bigr)^*.$$

So, $\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*$ is convex.

Now we show that $\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*$ is closed. Let

$$(z_n,\alpha_n)\in\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*$$

with $(z_n,\alpha_n)\to(z^*,\alpha^*)$ as $n\to\infty$. Then there exists $u_n\in U$ such that $\bigl(f(\cdot,u_n)\bigr)^*(z_n)\le\alpha_n$. Since $U$ is compact, we may assume that $u_n\to u^*\in U$ as $n\to\infty$. So, for each $x\in\mathbb{R}^n$,

$$\langle z_n,x\rangle-f(x,u_n)\le\alpha_n.$$

Since, for all $x\in\mathbb{R}^n$, $f(x,\cdot)$ is concave, $f(x,\cdot)$ is continuous. Passing to the limit as $n\to\infty$, we get, for each $x\in\mathbb{R}^n$, $\langle z^*,x\rangle-f(x,u^*)\le\alpha^*$. Hence, we have

$$(z^*,\alpha^*)\in\operatorname{epi}\bigl(f(\cdot,u^*)\bigr)^*.$$

So, $\bigcup_{u\in U}\operatorname{epi}\bigl(f(\cdot,u)\bigr)^*$ is closed. Thus, (1) holds.

Moreover, since, for all $x\in\mathbb{R}^n$, $g(x,\cdot)$ is convex and $r\ge 0$, for all $x\in\mathbb{R}^n$, $-rg(x,\cdot)$ is concave. So, similarly, we can prove that (2) holds. □

Remark 3.1 Using the convex-concave minimax theorem (Corollary 37.3.2 in [27]), we can prove that statement (i) in Lemma 3.1 is equivalent to statement (ii) in Lemma 3.1.

Remark 3.2 From the proof of the equivalence of statements (i) and (iv) in Lemma 3.1, we see that this equivalence can be established without the assumptions that, for all $x\in\mathbb{R}^n$, $f(x,\cdot)$ is concave and $g(x,\cdot)$ is convex.

From Lemmas 2.1 and 3.1, we can get the following theorem.

Theorem 3.1 Let $f:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}$, $i=1,\ldots,m$, be functions such that, for any $u\in U$, $f(\cdot,u)$ and, for each $w_i\in W_i$, $h_i(\cdot,w_i)$ are convex functions, and, for any $x\in\mathbb{R}^n$, $f(x,\cdot)$ is a concave function. Let $g:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ be a function such that, for any $v\in V$, $g(\cdot,v)$ is a concave function, and, for all $x\in\mathbb{R}^n$, $g(x,\cdot)$ is a convex function. Let $U\subset\mathbb{R}^p$, $V\subset\mathbb{R}^p$, and $W_i\subset\mathbb{R}^q$, $i=1,\ldots,m$. Let $\bar{x}\in A$, let $\epsilon\ge 0$, and let $\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$. Suppose that $\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\bigr)^*+C^*\times\mathbb{R}_+$ is closed and convex. Then the following statements are equivalent:

  1. (i)

    $\bar{x}$ is an $\epsilon$-solution of (RFP);

  2. (ii)

    there exist $\bar{u}\in U$, $\bar{v}\in V$, $\bar{w}_i\in W_i$, and $\bar{\lambda}_i\ge 0$, $i=1,\ldots,m$, such that, for any $x\in C$,

    $$f(x,\bar{u})-\bar{r}g(x,\bar{v})+\sum_{i=1}^m\bar{\lambda}_ih_i(x,\bar{w}_i)\ge 0.$$

Proof () Let x ¯ be an ϵ-solution of (RFP). Then, by Lemma 2.1, equivalently, x ¯ is an ϵ ¯ -solution of ( RNCP ) r ¯ , where r ¯ = max ( u , v ) U × V f ( x ¯ , u ) g ( x ¯ , v ) ϵ and ϵ ¯ =ϵ min v V g( x ¯ ,v), that is, for any xA, max u U f(x,u) r ¯ min v V g(x,v) max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v). Since max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v)=0, we have A{xC max u U f(x,u) r ¯ min v V g(x,v)0}. Then, by Lemma 3.1, we have

( 0 , 0 ) u U epi ( f ( , u ) ) + v V epi ( r ¯ g ( , v ) ) + cl co ( w i W i , λ i 0 epi ( i = 1 m λ i h i ( , w i ) ) + C × R + ) .

Moreover, by assumption,

(0,0) u U epi ( f ( , u ) ) + v V epi ( r ¯ g ( , v ) ) + w i W i , λ i 0 epi ( i = 1 m λ i h i ( , w i ) ) + C × R + .

So, there exist u ¯ U, v ¯ V, w ¯ i W i , and λ ¯ i 0, i=1,,m such that

(0,0)epi ( f ( , u ¯ ) ) +epi ( r ¯ g ( , v ¯ ) ) +epi ( i = 1 m λ ¯ i h i ( , w ¯ i ) ) + C × R + .

Then there exist s R n , η0, t R n , μ0, z i R n , ρ i 0, i=1,,m, c C , and γ R + such that

( 0 , 0 ) = ( s , ( f ( , u ¯ ) ) ( s ) + η ) + ( t , ( r ¯ g ( , v ¯ ) ) ( t ) + μ ) + i = 1 m ( z i , ( λ ¯ i h i ( , w ¯ i ) ) ( z i ) + ρ i ) + ( c , γ ) .

So, 0=s+t+ i = 1 m z i + c and 0= ( f ( , u ¯ ) ) (s)+η+ ( r ¯ g ( , v ¯ ) ) (t)+μ+ i = 1 m ( ( λ ¯ i h i ( , w ¯ i ) ) ( z i )+ ρ i )+γ. Hence, for any x R n ,

i = 1 m z i , x c , x f ( x , u ¯ ) ( r ¯ g ( x , v ¯ ) ) = s , x + t , x f ( x , u ¯ ) ( r ¯ g ( x , v ¯ ) ) ( f ( , u ¯ ) ) ( s ) + ( r ¯ g ( , v ) ) ( t ) = η μ i = 1 m ( ( λ ¯ i h i ( , w ¯ i ) ) ( z i ) + ρ i ) γ .
(5)

Since η0, μ0, ρ i 0, i=1,,m, and c C , from (5), for any xC,

0 i = 1 m z i , x + c , x + f ( x , u ¯ ) + ( r ¯ g ( x , v ¯ ) ) η μ i = 1 m ( λ ¯ i h i ( , w ¯ 1 ) ) ( z i ) i = 1 m λ ¯ i ρ i γ i = 1 m z i , x + f ( x , u ¯ ) r ¯ g ( x , v ¯ ) i = 1 m ( λ ¯ i h i ( , w ¯ i ) ) ( z i ) f ( x , u ¯ ) r ¯ g ( x , v ¯ ) + i = 1 m ( λ ¯ i h i ( x , w ¯ i ) ) .

() Suppose that there exist u ¯ U, v ¯ V, w ¯ i W i , and λ ¯ i 0, i=1,,m, such that, for any xC,

f(x, u ¯ ) r ¯ g(x, v ¯ )+ i = 1 m λ ¯ i h i (x, w ¯ i )0.
(6)

Since r ¯ = max ( u , v ) U × V f ( x ¯ , u ) g ( x ¯ , v ) ϵ, we have max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v)=0. So, from (6), we have, for any xA,

max u U f ( x , u ) r ¯ min v V g ( x , v ) max u U f ( x , u ) r ¯ min v V g ( x , v ) + i = 1 m λ ¯ i h i ( x , w ¯ i ) f ( x , u ¯ ) r ¯ g ( x , v ¯ ) + i = 1 m λ ¯ i h i ( x , w ¯ i ) 0 = max u U f ( x ¯ , u ) r ¯ min v V g ( x ¯ , v ) ϵ min v V g ( x ¯ , v ) .

Hence, for any xA, max u U f(x,u) r ¯ min v V g(x,v) max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v). It means that x ¯ is an ϵ ¯ -solution of ( RNCP ) r ¯ . Thus, by Lemma 2.1, x ¯ is an ϵ-solution of (RFP). □
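As an illustration of the certificate in statement (ii) of Theorem 3.1, the following sketch (ours, not from the paper) checks the inequality numerically for the data of Example 4.1 in Section 4, with $\bar{x}=0$ and $\epsilon=\frac{1}{3}$ (so $\bar{r}=\frac{1}{6}$) and our own choice of multipliers $(\bar{u},\bar{v},\bar{w}_1,\bar{\lambda}_1)=(2,1,2,0)$.

```python
# Checking f(x, u_bar) - r_bar * g(x, v_bar) + lam_bar * h_1(x, w_bar) >= 0 on C = R_+
# for f(x,u) = u*x + 1, g(x,v) = v*x + 2, h_1(x,w_1) = 2*w_1*x - 3 (Example 4.1 data).
import numpy as np

eps, r_bar = 1.0 / 3.0, 1.0 / 6.0
u_bar, v_bar, w_bar, lam_bar = 2.0, 1.0, 2.0, 0.0    # our candidate multipliers

def lagrangian(x):
    """f(x, u_bar) - r_bar * g(x, v_bar) + lam_bar * h_1(x, w_bar)."""
    return (u_bar * x + 1) - r_bar * (v_bar * x + 2) + lam_bar * (2 * w_bar * x - 3)

xs = np.linspace(0.0, 100.0, 100001)      # a grid standing in for the cone C = R_+
print(lagrangian(xs).min() >= 0)          # True: the inequality of (ii) holds on the grid,
                                          # so x_bar = 0 is an eps-solution of (RFP)
```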

Using Remark 3.2 and Lemmas 2.1 and 3.1, we can obtain the following characterization of an ϵ-solution for (RFP).

Theorem 3.2 (ϵ-Optimality theorem)

Let $f:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}$, $i=1,\ldots,m$, be functions such that, for any $u\in U$, $f(\cdot,u)$ and, for each $w_i\in W_i$, $h_i(\cdot,w_i)$ are convex functions. Let $g:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ be a function such that, for any $v\in V$, $g(\cdot,v)$ is a concave function. Let $U\subset\mathbb{R}^p$, $V\subset\mathbb{R}^p$, and $W_i\subset\mathbb{R}^q$, $i=1,\ldots,m$. Let $\bar{x}\in A$ and let $\epsilon\ge 0$. Let $\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$. If $\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}<\epsilon$, then $\bar{x}$ is an $\epsilon$-solution of (RFP). If $\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}\ge\epsilon$ and

$$\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\Bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\Bigr)^*+C^*\times\mathbb{R}_+$$

is closed and convex, then the following statements are equivalent:

  1. (i)

    $\bar{x}$ is an $\epsilon$-solution of (RFP);

  2. (ii)

    there exist $\bar{w}_i\in W_i$ and $\bar{\lambda}_i\ge 0$, $i=1,\ldots,m$, $\epsilon_0^1\ge 0$, $\epsilon_0^2\ge 0$, and $\epsilon_i\ge 0$, $i=1,\ldots,m+1$, such that

    $$0\in\partial_{\epsilon_0^1}\Bigl(\max_{u\in U}f(\cdot,u)\Bigr)(\bar{x})+\partial_{\epsilon_0^2}\Bigl(-\bar{r}\min_{v\in V}g(\cdot,v)\Bigr)(\bar{x})+\sum_{i=1}^m\partial_{\epsilon_i}\bigl(\bar{\lambda}_ih_i(\cdot,\bar{w}_i)\bigr)(\bar{x})+N_C^{\epsilon_{m+1}}(\bar{x}),$$
    (7)
    $$\max_{u\in U}f(\bar{x},u)-\bar{r}\min_{v\in V}g(\bar{x},v)=\epsilon\min_{v\in V}g(\bar{x},v)\quad\text{and}$$
    (8)
    $$\epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in V}g(\bar{x},v)=\sum_{i=1}^m\bar{\lambda}_ih_i(\bar{x},\bar{w}_i).$$
    (9)

Proof [(i) (ii)] We assume that x ¯ is an ϵ-solution of (RFP). Then, by Lemma 2.1, x ¯ is an ϵ ¯ -solution of ( RNCP ) r ¯ , where r ¯ = max ( u , v ) U × V f ( x ¯ , u ) g ( x ¯ , v ) ϵ and ϵ ¯ =ϵ min v V g( x ¯ ,v), that is, for any xA, max u U f(x,u) r ¯ min v V g(x,v) max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v). Since max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v)=0, we have A{xC max u U f(x,u) r ¯ min v V g(x,v)0}. By Lemma 3.1,

( 0 , 0 ) epi ( max u U f ( , u ) ) + epi ( r ¯ min v V g ( , v ) ) + cl co ( w i W i , λ i 0 epi ( i = 1 m λ i h i ( , w i ) ) + C × R + ) .

By assumption,

( 0 , 0 ) epi ( max u U f ( , u ) ) + epi ( r ¯ min v V g ( , v ) ) + w i W i , λ i 0 epi ( i = 1 m λ i h i ( , w i ) ) + C × R + .

So, there exist w ¯ i W i and λ ¯ i 0, i=1,,m, such that

(0,0)epi ( max u U f ( , u ) ) +epi ( r ¯ min v V g ( , v ) ) +epi ( i = 1 m λ ¯ i h i ( , w ¯ i ) ) +epi δ C .

By Proposition 2.2, we obtain

( 0 , 0 ) ϵ 0 1 0 { ( ξ 0 1 , ξ 0 1 , x ¯ + ϵ 0 1 max u U f ( x ¯ , u ) ) | ξ 0 1 ϵ 0 1 ( max u U f ( , u ) ) ( x ¯ ) } + ϵ 0 2 0 { ( ξ 0 2 , ξ 0 2 , x ¯ + ϵ 0 2 + r ¯ min v V g ( x ¯ , v ) ) | ξ 0 2 ϵ 0 2 ( r ¯ min v V g ( , v ) ) ( x ¯ ) } + ϵ 0 { ( ξ , ξ , x ¯ + ϵ i = 1 m λ ¯ i h i ( x ¯ , w ¯ i ) ) | ξ ϵ ( i = 1 m λ ¯ i h i ( , w ¯ i ) ) ( x ¯ ) } + ϵ m + 1 0 { ( ξ m + 1 , ξ m + 1 , x ¯ + ϵ m + 1 δ C ( x ¯ ) ) ξ m + 1 ϵ m + 1 δ C ( x ¯ ) } .

So, there exist ξ ¯ 0 1 ϵ 0 1 ( max u U f(,u))( x ¯ ), ξ ¯ 0 2 ϵ 0 2 ( r ¯ min v V g(,v))( x ¯ ), ξ ¯ ϵ ( i = 1 m λ ¯ i h i (, w ¯ i ))( x ¯ ), ξ ¯ m + 1 ϵ m + 1 δ C ( x ¯ ), ϵ 0 1 0, ϵ 0 2 0, ϵ 0, and ϵ m + 1 0 such that

0 = ξ ¯ 0 1 + ξ ¯ 0 2 + ξ ¯ + ξ ¯ m + 1 and ϵ 0 1 + ϵ 0 2 + ϵ + ϵ m + 1 = max u U f ( x ¯ , u ) r ¯ min v V g ( x ¯ , v ) + i = 1 m λ ¯ i h i ( x ¯ , w ¯ i ) .

By Proposition 2.5, there exist ξ ¯ 0 1 ϵ 0 1 ( max u U f(,u))( x ¯ ), ξ ¯ 0 2 ϵ 0 2 ( r ¯ min v V g(,v))( x ¯ ), ξ ¯ i ϵ i ( λ ¯ i h i (, w ¯ i ))( x ¯ ), ξ ¯ m + 1 ϵ m + 1 δ C ( x ¯ ), ϵ 0 1 0, ϵ 0 2 0, ϵ i 0, i=1,,m, and ϵ m + 1 0 such that

0 ϵ 0 1 ( max u U f ( , u ) ) ( x ¯ ) + ϵ 0 2 ( r ¯ min v V g ( , v ) ) ( x ¯ ) 0 + i = 1 m ϵ i ( λ ¯ i h i ( , w ¯ i ) ) ( x ¯ ) + N C ϵ m + 1 ( x ¯ ) and ϵ 0 1 + ϵ 0 2 + i = 1 m + 1 ϵ i = max u U f ( x ¯ , u ) r ¯ min v V g ( x ¯ , v ) + i = 1 m λ ¯ i h i ( x ¯ , w ¯ i ) .
(10)

Since r ¯ = max ( u , v ) U × V f ( x ¯ , u ) g ( x ¯ , v ) ϵ,

max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v)=0.
(11)

So, (8) holds, and so, from (10) and (11), we have

0 ϵ 0 1 ( max u U f ( , u ) ) ( x ¯ ) + ϵ 0 2 ( r ¯ min v V g ( , v ) ) ( x ¯ ) 0 + i = 1 m ϵ i ( λ ¯ i h i ( , w ¯ i ) ) ( x ¯ ) + N C ϵ m + 1 ( x ¯ ) and ϵ 0 1 + ϵ 0 2 + i = 1 m + 1 ϵ i ϵ min v V g ( x ¯ , v ) = i = 1 m λ ¯ i h i ( x ¯ , w ¯ i ) .

Thus the conditions (7) and (9) hold.

[(ii) (i)] Taking account of the converse of the process for proving (i) (ii), we can easily check that the statement (ii) (i) holds. □

If, for all $x\in\mathbb{R}^n$, $f(x,\cdot)$ is concave and $g(x,\cdot)$ is convex, then, using Lemmas 2.1 and 3.1, we can obtain the following characterization of an $\epsilon$-solution for (RFP).

Theorem 3.3 (ϵ-Optimality theorem)

Let $f:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\to\mathbb{R}$, $i=1,\ldots,m$, be functions such that, for any $u\in U$, $f(\cdot,u)$ and, for each $w_i\in W_i$, $h_i(\cdot,w_i)$ are convex functions, and, for all $x\in\mathbb{R}^n$, $f(x,\cdot)$ is a concave function. Let $g:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ be a function such that, for any $v\in V$, $g(\cdot,v)$ is a concave function, and, for all $x\in\mathbb{R}^n$, $g(x,\cdot)$ is a convex function. Let $U\subset\mathbb{R}^p$, $V\subset\mathbb{R}^p$, and $W_i\subset\mathbb{R}^q$, $i=1,\ldots,m$. Let $\bar{x}\in A$ and let $\epsilon\ge 0$. Let $\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$. If $\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}<\epsilon$, then $\bar{x}$ is an $\epsilon$-solution of (RFP). If $\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}\ge\epsilon$ and

$$\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\Bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\Bigr)^*+C^*\times\mathbb{R}_+$$

is closed and convex, then the following statements are equivalent:

  1. (i)

    $\bar{x}$ is an $\epsilon$-solution of (RFP);

  2. (ii)

    there exist $\bar{u}\in U$, $\bar{v}\in V$, $\bar{w}_i\in W_i$, $\bar{\lambda}_i\ge 0$, $i=1,\ldots,m$, $\epsilon_0^1\ge 0$, $\epsilon_0^2\ge 0$, and $\epsilon_i\ge 0$, $i=1,\ldots,m+1$, such that

    $$0\in\partial_{\epsilon_0^1}\bigl(f(\cdot,\bar{u})\bigr)(\bar{x})+\partial_{\epsilon_0^2}\bigl(-\bar{r}g(\cdot,\bar{v})\bigr)(\bar{x})+\sum_{i=1}^m\partial_{\epsilon_i}\bigl(\bar{\lambda}_ih_i(\cdot,\bar{w}_i)\bigr)(\bar{x})+N_C^{\epsilon_{m+1}}(\bar{x}),$$
    (12)
    $$\max_{u\in U}f(\bar{x},u)-\bar{r}\min_{v\in V}g(\bar{x},v)=\epsilon\min_{v\in V}g(\bar{x},v)\quad\text{and}$$
    (13)
    $$\epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in V}g(\bar{x},v)\le\sum_{i=1}^m\bar{\lambda}_ih_i(\bar{x},\bar{w}_i).$$
    (14)

Proof [(i) (ii)] Let x ¯ be an ϵ-solution of (RFP). Then, by Lemma 2.1, x ¯ is an ϵ ¯ -solution of ( RNCP ) r ¯ , where r ¯ = max ( u , v ) U × V f ( x ¯ , u ) g ( x ¯ , v ) ϵ and ϵ ¯ =ϵ min v V g( x ¯ ,v), that is, for any xA, max u U f(x,u) r ¯ min v V g(x,v) max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v). Since max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)ϵ min v V g( x ¯ ,v)=0, we have A{xC max u U f(x,u) r ¯ min v V g(x,v)0}. By Lemma 3.1,

( 0 , 0 ) u U epi ( f ( , u ) ) + v V epi ( r ¯ g ( , v ) ) + cl co ( w i W i , λ i 0 epi ( i = 1 m λ i h i ( , w i ) ) + C × R + ) .

By assumption,

(0,0) u U epi ( f ( , u ) ) + v V epi ( r ¯ g ( , v ) ) + w i W i , λ i 0 epi ( i = 1 m λ i h i ( , w i ) ) + C × R + .

Since C × R + =epi δ C , there exist u ¯ U, v ¯ V, w ¯ i W i , and λ ¯ i 0, i=1,,m, such that

(0,0)epi ( f ( , u ¯ ) ) +epi ( r ¯ g ( , v ¯ ) ) +epi ( i = 1 m λ ¯ i h i ( , w ¯ i ) ) +epi δ C .

By Proposition 2.2, we obtain

( 0 , 0 ) ϵ 0 1 0 { ( ξ 0 1 , ξ 0 1 , x ¯ + ϵ 0 1 f ( x ¯ , u ¯ ) ) ξ 0 1 ϵ 0 1 ( f ( , u ¯ ) ) ( x ¯ ) } + ϵ 0 2 0 { ( ξ 0 2 , ξ 0 2 , x ¯ + ϵ 0 2 + r ¯ g ( x ¯ , v ¯ ) ) ξ 0 2 ϵ 0 2 ( r ¯ g ( , v ¯ ) ) ( x ¯ ) } + ϵ 0 { ( ξ , ξ , x ¯ + ϵ i = 1 m λ ¯ i h i ( x ¯ , w ¯ i ) ) | ξ ϵ ( i = 1 m λ ¯ i h i ( , w ¯ i ) ) ( x ¯ ) } + ϵ m + 1 0 { ( ξ m + 1 , ξ m + 1 , x ¯ + ϵ m + 1 δ C ( x ¯ ) ) ξ m + 1 ϵ m + 1 δ C ( x ¯ ) } .

So, there exist ξ ¯ 0 1 ϵ 0 1 (f(, u ¯ ))( x ¯ ), ξ ¯ 0 2 ϵ 0 2 ( r ¯ g(, v ¯ ))( x ¯ ), ξ ¯ ϵ ( i = 1 m λ ¯ i h i (, w ¯ i ))( x ¯ ), ξ ¯ m + 1 ϵ m + 1 δ C ( x ¯ ), ϵ 0 1 0, ϵ 0 2 0, ϵ 0, and ϵ m + 1 0 such that

0= ξ ¯ 0 1 + ξ ¯ 0 2 + ξ ¯ + ξ ¯ m + 1 and ϵ 0 1 + ϵ 0 2 + ϵ + ϵ m + 1 =f( x ¯ , u ¯ ) r ¯ g( x ¯ , v ¯ )+ i = 1 m λ ¯ i h i ( x ¯ , w ¯ i ).

By Proposition 2.5, there exist ξ ¯ 0 1 ϵ 0 1 (f(, u ¯ ))( x ¯ ), ξ ¯ 0 2 ϵ 0 2 ( r ¯ g(, v ¯ ))( x ¯ ), ξ ¯ i ϵ i ( λ ¯ i h i (, w ¯ i ))( x ¯ ), ξ ¯ m + 1 ϵ m + 1 δ C ( x ¯ ), ϵ 0 1 0, ϵ 0 2 0, ϵ i 0, i=1,,m, and ϵ m + 1 0 such that

0 ϵ 0 1 ( f ( , u ¯ ) ) ( x ¯ ) + ϵ 0 2 ( r ¯ g ( , v ¯ ) ) ( x ¯ ) 0 + i = 1 m ϵ i ( λ ¯ i h i ( , w ¯ i ) ) ( x ¯ ) + N C ϵ m + 1 ( x ¯ ) and ϵ 0 1 + ϵ 0 2 + i = 1 m + 1 ϵ i = f ( x ¯ , u ¯ ) r ¯ g ( x ¯ , v ¯ ) + i = 1 m λ ¯ i h i ( x ¯ , w ¯ i ) .
(15)

Since r ¯ = max ( u , v ) U × V f ( x ¯ , u ) g ( x ¯ , v ) ϵ, we have max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)=ϵ min v V g( x ¯ ,v). So, we have

f( x ¯ , u ¯ ) r ¯ g( x ¯ , v ¯ ) max u U f( x ¯ ,u) r ¯ min v V g( x ¯ ,v)=ϵ min v V g( x ¯ ,v).
(16)

So, the condition (13) holds. Also, from (15) and (16), we have

0 ϵ 0 1 ( max u U f ( , u ) ) ( x ¯ ) + ϵ 0 2 ( r ¯ min v V g ( , v ) ) ( x ¯ ) 0 + i = 1 m ϵ i ( λ ¯ i h i ( , w ¯ i ) ) ( x ¯ ) + N C ϵ m + 1 ( x ¯ ) and ϵ 0 1 + ϵ 0 2 + i = 1 m + 1 ϵ i ϵ min v V g ( x ¯ , v ) i = 1 m λ ¯ i h i ( x ¯ , w ¯ i ) .

Consequently, the conditions (12) and (14) hold.

[(ii) (i)] Taking account of the converse of the process for proving (i) (ii), we can easily check that the statement (ii) (i) holds. □

Remark 3.3 Assume that $f:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ and $g:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}$ are functions such that, for all $x\in\mathbb{R}^n$, $f(x,\cdot)$ is concave and $g(x,\cdot)$ is convex. Then, by Lemma 3.1, Theorem 3.2 is immediately equivalent to Theorem 3.3.

4 ϵ-Duality theorems

Following the approach in [13], we formulate a dual problem (RFD) for (RFP) as follows:

$$\begin{aligned}
(\mathrm{RFD})\quad \max\ \ & r\\
\text{s.t.}\ \ & 0\in\partial_{\epsilon_0^1}\Bigl(\max_{u\in U}f(\cdot,u)\Bigr)(x)+\partial_{\epsilon_0^2}\Bigl(-r\min_{v\in V}g(\cdot,v)\Bigr)(x)+\sum_{i=1}^m\partial_{\epsilon_i}\bigl(\lambda_ih_i(\cdot,w_i)\bigr)(x)+N_C^{\epsilon_{m+1}}(x),\\
& \max_{u\in U}f(x,u)-r\min_{v\in V}g(x,v)\ge\epsilon\min_{v\in V}g(x,v),\\
& \epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in V}g(x,v)\le\sum_{i=1}^m\lambda_ih_i(x,w_i),\\
& r\ge 0,\ w_i\in W_i,\ \lambda_i\ge 0,\ i=1,\ldots,m,\ \epsilon_0^1\ge 0,\ \epsilon_0^2\ge 0,\ \epsilon_i\ge 0,\ i=1,\ldots,m+1.
\end{aligned}$$

Clearly,

$$\begin{aligned}
F:=\Bigl\{(x,w,\lambda,r)\ \Big|\ & 0\in\partial_{\epsilon_0^1}\Bigl(\max_{u\in U}f(\cdot,u)\Bigr)(x)+\partial_{\epsilon_0^2}\Bigl(-r\min_{v\in V}g(\cdot,v)\Bigr)(x)+\sum_{i=1}^m\partial_{\epsilon_i}\bigl(\lambda_ih_i(\cdot,w_i)\bigr)(x)+N_C^{\epsilon_{m+1}}(x),\\
& \max_{u\in U}f(x,u)-r\min_{v\in V}g(x,v)\ge\epsilon\min_{v\in V}g(x,v),\\
& \epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in V}g(x,v)\le\sum_{i=1}^m\lambda_ih_i(x,w_i),\\
& r\ge 0,\ w_i\in W_i,\ \lambda_i\ge 0,\ \epsilon_0^1\ge 0,\ \epsilon_0^2\ge 0,\ \epsilon_i\ge 0,\ i=1,\ldots,m,\ \epsilon_{m+1}\ge 0\Bigr\}
\end{aligned}$$

is the feasible set of (RFD).

Let $\epsilon\ge 0$. Then $(\bar{x},\bar{w},\bar{\lambda},\bar{r})\in F$ is called an $\epsilon$-solution of (RFD) if, for any $(y,w,\lambda,r)\in F$, $\bar{r}\ge r-\epsilon$.

When $\epsilon=0$, $\max_{u\in U}f(x,u)=f(x)$, $\min_{v\in V}g(x,v)=g(x)$, and $h_i(x,w_i)=h_i(x)$, $i=1,\ldots,m$, (RFP) becomes (FP), and (RFD) collapses to the following Mond-Weir type dual problem (FD) for (FP) [28]:

$$\begin{aligned}
(\mathrm{FD})\quad \max\ \ & r\\
\text{s.t.}\ \ & 0\in\partial f(x)+\partial(-rg)(x)+\sum_{i=1}^m\partial\bigl(\lambda_ih_i\bigr)(x)+N_C(x),\\
& f(x)-rg(x)\ge 0,\quad \lambda_ih_i(x)\ge 0,\quad r\ge 0,\ \lambda_i\ge 0,\ i=1,\ldots,m.
\end{aligned}$$

Now, we prove ϵ-weak and ϵ-strong duality theorems which hold between (RFP) and (RFD).

Theorem 4.1 (ϵ-Weak duality theorem)

For any feasible x of (RFP) and any feasible (y,w,λ,r) of (RFD),

$$\max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)}\ \ge\ r-\epsilon.$$

Proof Let $x$ and $(y,w,\lambda,r)$ be feasible solutions of (RFP) and (RFD), respectively. Then there exist $\bar{\xi}_0^1\in\partial_{\epsilon_0^1}\bigl(\max_{u\in U}f(\cdot,u)\bigr)(y)$, $\bar{\xi}_0^2\in\partial_{\epsilon_0^2}\bigl(-r\min_{v\in V}g(\cdot,v)\bigr)(y)$, $\bar{\xi}_i\in\partial_{\epsilon_i}\bigl(\lambda_ih_i(\cdot,w_i)\bigr)(y)$, $\bar{\xi}_{m+1}\in N_C^{\epsilon_{m+1}}(y)$, $\epsilon_0^1\ge 0$, $\epsilon_0^2\ge 0$, $\epsilon_i\ge 0$, $i=1,\ldots,m$, and $\epsilon_{m+1}\ge 0$ such that

$$\bar{\xi}_0^1+\bar{\xi}_0^2+\sum_{i=1}^{m+1}\bar{\xi}_i=0,\qquad \max_{u\in U}f(y,u)-r\min_{v\in V}g(y,v)\ge\epsilon\min_{v\in V}g(y,v)$$
$$\text{and}\qquad \epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in V}g(y,v)\le\sum_{i=1}^m\lambda_ih_i(y,w_i).$$

Thus, we have

$$\begin{aligned}
&\max_{u\in U}f(x,u)-r\min_{v\in V}g(x,v)+\epsilon\min_{v\in V}g(x,v)\\
&\quad\ge \max_{u\in U}f(y,u)-r\min_{v\in V}g(y,v)+\bigl\langle\bar{\xi}_0^1+\bar{\xi}_0^2,x-y\bigr\rangle-\epsilon_0^1-\epsilon_0^2+\epsilon\min_{v\in V}g(x,v)\\
&\quad= \max_{u\in U}f(y,u)-r\min_{v\in V}g(y,v)-\Bigl\langle\sum_{i=1}^{m+1}\bar{\xi}_i,x-y\Bigr\rangle-\epsilon_0^1-\epsilon_0^2+\epsilon\min_{v\in V}g(x,v)\\
&\quad\ge \max_{u\in U}f(y,u)-r\min_{v\in V}g(y,v)+\sum_{i=1}^m\lambda_ih_i(y,w_i)-\sum_{i=1}^m\lambda_ih_i(x,w_i)-\epsilon_0^1-\epsilon_0^2-\sum_{i=1}^{m+1}\epsilon_i+\epsilon\min_{v\in V}g(x,v)\\
&\quad\ge \max_{u\in U}f(y,u)-r\min_{v\in V}g(y,v)+\sum_{i=1}^m\lambda_ih_i(y,w_i)-\epsilon_0^1-\epsilon_0^2-\sum_{i=1}^{m+1}\epsilon_i\\
&\quad\ge \max_{u\in U}f(y,u)-r\min_{v\in V}g(y,v)-\epsilon\min_{v\in V}g(y,v)\ \ge\ 0.
\end{aligned}$$

Hence, we have $\max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)}\ge r-\epsilon$. □
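A quick numerical spot-check (ours, not from the paper) of this $\epsilon$-weak duality inequality on the data of Example 4.1 below, using the dual feasible value $r=\frac{1}{6}$ obtained there for $\epsilon=\frac{1}{3}$:

```python
# Spot-check of Theorem 4.1 on the Example 4.1 data: for every x feasible to (RFP),
# the robust objective is at least r - eps for the dual feasible value r = 1/6.
import numpy as np

eps = 1.0 / 3.0
phi = lambda x: (2 * x + 1) / (x + 2)        # robust objective: max_u f / min_v g on R_+
A = np.linspace(0.0, 0.75, 2001)             # feasible set of (RFP)
r = 1.0 / 6.0                                # dual objective value of the feasible point
print(np.all(phi(A) >= r - eps))             # True, as Theorem 4.1 asserts
```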

Theorem 4.2 (ϵ-Strong duality theorem)

Suppose that

$$\bigcup_{w_i\in W_i,\ \lambda_i\ge 0}\operatorname{epi}\Bigl(\sum_{i=1}^m\lambda_ih_i(\cdot,w_i)\Bigr)^*+C^*\times\mathbb{R}_+$$

is closed and convex. If $\bar{x}$ is an $\epsilon$-solution of (RFP) and $\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon\ge 0$, then there exist $\bar{w}_i\in W_i$, $i=1,\ldots,m$, $\bar{\lambda}\in\mathbb{R}^m_+$, and $\bar{r}\in\mathbb{R}_+$ such that $(\bar{x},\bar{w},\bar{\lambda},\bar{r})$ is a $2\epsilon$-solution of (RFD).

Proof Let $\bar{x}\in A$ be an $\epsilon$-solution of (RFP). Let $\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$. Then, by Theorem 3.2, there exist $\bar{w}_i\in W_i$, $\bar{\lambda}_i\ge 0$, $\epsilon_0^1\ge 0$, $\epsilon_0^2\ge 0$, $\epsilon_i\ge 0$, $i=1,\ldots,m$, and $\epsilon_{m+1}\ge 0$ such that

$$\begin{aligned}
& 0\in\partial_{\epsilon_0^1}\Bigl(\max_{u\in U}f(\cdot,u)\Bigr)(\bar{x})+\partial_{\epsilon_0^2}\Bigl(-\bar{r}\min_{v\in V}g(\cdot,v)\Bigr)(\bar{x})+\sum_{i=1}^m\partial_{\epsilon_i}\bigl(\bar{\lambda}_ih_i(\cdot,\bar{w}_i)\bigr)(\bar{x})+N_C^{\epsilon_{m+1}}(\bar{x}),\\
& \max_{u\in U}f(\bar{x},u)-\bar{r}\min_{v\in V}g(\bar{x},v)=\epsilon\min_{v\in V}g(\bar{x},v)\quad\text{and}\\
& \epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in V}g(\bar{x},v)=\sum_{i=1}^m\bar{\lambda}_ih_i(\bar{x},\bar{w}_i).
\end{aligned}$$

So, $(\bar{x},\bar{w},\bar{\lambda},\bar{r})$ is a feasible solution of (RFD). For any feasible $(y,w,\lambda,r)$ of (RFD), it follows from Theorem 4.1 ($\epsilon$-weak duality theorem) that

$$\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon\ \ge\ r-\epsilon-\epsilon\ =\ r-2\epsilon.$$

Thus ( x ¯ , w ¯ , λ ¯ , r ¯ ) is a 2ϵ-solution of (RFD). □

Remark 4.1 Using the optimality conditions of Theorem 3.2, the robust fractional dual problem (RFD) is formulated for the robust fractional problem (RFP) with uncertain convex constraint functions. However, when the dual problem is formulated using the optimality conditions of Theorem 3.3, we do not know whether the ϵ-weak duality theorem holds. This is an open question.

Now we give an example illustrating our duality theorems.

Example 4.1 Consider the following fractional programming problem with uncertainty:

$$(\mathrm{RFP})\quad \min\ \max_{(u,v)\in U\times V}\frac{ux+1}{vx+2}\quad \text{s.t.}\ \ 2w_1x-3\le 0,\ \forall w_1\in[1,2],\ x\in\mathbb{R}_+,$$

where $U=[1,2]$ and $V=[1,2]$.

Now we transform the problem (RFP) into the robust non-fractional convex optimization problem $(\mathrm{RNCP})_r$ with a parameter $r\in\mathbb{R}_+$:

$$(\mathrm{RNCP})_r\quad \min\ \max_{u\in[1,2]}(ux+1)-r\min_{v\in[1,2]}(vx+2)\quad \text{s.t.}\ \ 2w_1x-3\le 0,\ \forall w_1\in[1,2],\ x\in\mathbb{R}_+.$$

Let $f(x,u)=ux+1$, $g(x,v)=vx+2$, $h_1(x,w_1)=2w_1x-3$, and $\epsilon\in[0,\frac{9}{22}]$. Then $A:=\{x\in\mathbb{R}\mid 0\le x\le\frac{3}{4}\}$ is the set of all robust feasible solutions of (RFP) and $\bar{A}:=\{x\in\mathbb{R}\mid 0\le x\le\frac{4\epsilon}{3-2\epsilon}\}$ is the set of all $\epsilon$-solutions of (RFP). Let $F:=\{(y,w_1,\lambda_1,r)\mid 0\in\partial_{\epsilon_0^1}(\max_{u\in U}f(\cdot,u))(y)+\partial_{\epsilon_0^2}(-r\min_{v\in V}g(\cdot,v))(y)+\partial_{\epsilon_1}(\lambda_1h_1(\cdot,w_1))(y)+N_{\mathbb{R}_+}^{\epsilon_2}(y),\ \max_{u\in U}f(y,u)-r\min_{v\in V}g(y,v)\ge\epsilon\min_{v\in V}g(y,v),\ \epsilon_0^1+\epsilon_0^2+\epsilon_1+\epsilon_2-\epsilon\min_{v\in V}g(y,v)\le\lambda_1h_1(y,w_1),\ r\ge 0,\ w_1\in[1,2],\ \lambda_1\ge 0,\ \epsilon_0^1\ge 0,\ \epsilon_0^2\ge 0,\ \epsilon_1\ge 0,\ \epsilon_2\ge 0\}$. Then we formulate a dual problem (RFD) for (RFP) as follows:

$$(\mathrm{RFD})\quad \max\ r\quad \text{s.t.}\ \ (y,w_1,\lambda_1,r)\in F.$$

Then F is the set of all robust feasible solutions of (RFD).

Now we calculate the set $F=\tilde{A}\cup\tilde{B}$.

A ˜ : = { ( 0 , w 1 , λ 1 , r ) | 0 ϵ 0 1 ( max u U f ( , u ) ) ( 0 ) + ϵ 0 2 ( r min v V g ( , v ) ) ( 0 ) + ϵ 1 ( λ 1 h 1 ( , w 1 ) ) ( 0 ) + N R + ϵ 2 ( 0 ) , max u U f ( 0 , u ) r min v V g ( 0 , v ) ϵ min v V g ( 0 , v ) , ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 ϵ min v V g ( 0 , v ) λ 1 h 1 ( 0 , w 1 ) , r 0 , u [ 1 , 2 ] , λ 1 0 , ϵ 0 1 0 , ϵ 0 2 0 , ϵ 1 0 , ϵ 2 0 } = { ( 0 , w 1 , λ 1 , r ) 0 { 2 } + { r } + { 2 λ 1 w 1 } + ( , 0 ] , 1 2 r 2 ϵ , ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 2 ϵ 3 λ 1 , r 0 , w 1 [ 1 , 2 ] , λ 1 0 , ϵ 0 1 0 , ϵ 0 2 0 , ϵ 1 0 , ϵ 2 0 } = { ( 0 , w 1 , λ 1 , r ) | r 2 + 2 λ 1 w 1 , r 1 2 ϵ 2 , ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 2 ϵ 3 λ 1 , r 0 , w 1 [ 1 , 2 ] , λ 1 0 , ϵ 0 1 0 , ϵ 0 2 0 , ϵ 1 0 , ϵ 2 0 } , B ˜ : = { ( y , w 1 , λ 1 , r ) | 0 ϵ 0 1 ( max u U f ( , u ) ) ( y ) + ϵ 0 2 ( r min v V g ( , v ) ) ( y ) + ϵ 1 ( λ 1 h 1 ( , w 1 ) ) ( y ) + N R + ϵ 2 ( y ) , max u U f ( y , u ) r min v V g ( y , v ) ϵ min v V g ( y , v ) , ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 ϵ min v V g ( y , v ) λ 1 h 1 ( y , w 1 ) , y > 0 , r 0 , w 1 [ 1 , 2 ] , λ 1 0 , ϵ 0 1 0 , ϵ 0 2 0 , ϵ 1 0 , ϵ 2 0 } = { ( y , w 1 , λ 1 , r ) | 0 { 2 } + { r } + { 2 λ 1 w 1 } + [ ϵ 2 y , 0 ] , 2 y + 1 r ( y + 2 ) ϵ ( y + 2 ) , y > 0 , ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 ϵ ( y + 2 ) λ 1 ( 2 w 1 y 3 ) , r 0 , w 1 [ 1 , 2 ] , λ 1 0 , ϵ 0 1 0 , ϵ 0 2 0 , ϵ 1 0 , ϵ 2 0 } = { ( y , w 1 , λ 1 , r ) | 0 [ 2 r + 2 λ 1 w 1 ϵ 2 y , 2 r + 2 λ 1 w 1 ] , 2 y + 1 r ( y + 2 ) ϵ ( y + 2 ) , ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 ϵ ( y + 2 ) λ 1 ( 2 w 1 y 3 ) , y > 0 , r 0 , w 1 [ 1 , 2 ] , λ 1 0 , ϵ 0 1 0 , ϵ 0 2 0 , ϵ 1 0 , ϵ 2 0 } .

We can check that, for any $x\in A$ and any $(y,w_1,\lambda_1,r)\in F$,

$$\max_{(u,v)\in U\times V}\frac{f(x,u)}{g(x,v)}\ \ge\ r-\epsilon,$$

that is, $\epsilon$-weak duality holds. Indeed, let $x\in A$ and $(y,w_1,\lambda_1,r)\in\tilde{A}$ be fixed. Then

max u [ 1 , 2 ] f ( x , u ) r min v [ 1 , 2 ] g ( x , v ) + ϵ min v [ 1 , 2 ] g ( x , v ) = 2 x + 1 r ( x + 2 ) + ϵ ( x + 2 ) = ( 2 r ) x + 1 2 r + ϵ ( x + 2 ) 2 λ 1 w 1 x + 2 ϵ + ϵ ( x + 2 ) 3 λ 1 + 2 ϵ + ϵ ( x + 2 ) 3 λ 1 + ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 + 3 λ 1 + ϵ ( x + 2 ) 0 .

Moreover, let $x\in A$ and $(y,w_1,\lambda_1,r)\in\tilde{B}$ be fixed.

max u [ 1 , 2 ] f ( x , u ) r min v [ 1 , 2 ] g ( x , v ) + ϵ min v [ 1 , 2 ] g ( x , v ) = 2 x + 1 r ( x + 2 ) + ϵ ( x + 2 ) = 2 y + 1 r ( y + 2 ) + ( 2 r ) ( x y ) + ϵ ( x + 2 ) .

If $x-y\ge 0$, then

max u [ 1 , 2 ] f ( x , u ) r min v [ 1 , 2 ] g ( x , v ) + ϵ min v [ 1 , 2 ] g ( x , v ) = 2 y + 1 r ( y + 2 ) + ( 2 r ) ( x y ) + ϵ ( x + 2 ) 2 y + 1 r ( y + 2 ) 2 λ 1 w 1 ( x y ) + ϵ ( x + 2 ) 2 y + 1 r ( y + 2 ) + 2 λ 1 w 1 y 2 λ 1 w 1 x + ϵ ( x + 2 ) ϵ ( y + 2 ) + 2 λ 1 w 1 y 3 λ 1 + ϵ ( x + 2 ) ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 + 3 λ 1 3 λ 1 + ϵ ( x + 2 ) 0 .

If $x-y<0$, then

max u [ 1 , 2 ] f ( x , u ) r min v [ 1 , 2 ] g ( x , v ) + ϵ min v [ 1 , 2 ] g ( x , v ) = 2 y + 1 r ( y + 2 ) + ( 2 r ) ( x y ) + ϵ ( x + 2 ) 2 y + 1 r ( y + 2 ) + ( 2 λ 1 w 1 + ϵ 2 y ) ( x y ) + ϵ ( x + 2 ) 2 y + 1 r ( y + 2 ) + 2 λ 1 w 1 y ϵ 2 2 λ 1 w 1 x + ϵ 2 y x + ϵ ( x + 2 ) ϵ ( y + 2 ) + 2 λ 1 w 1 y ϵ 2 3 λ 1 + ϵ 2 y x + ϵ ( x + 2 ) ϵ 0 1 + ϵ 0 2 + ϵ 1 + ϵ 2 + 3 λ 1 ϵ 2 3 λ 1 + ϵ 2 y x + ϵ ( x + 2 ) 0 .

Let $\epsilon=\frac{1}{3}$. Then $\bar{A}:=\{x\in\mathbb{R}\mid 0\le x\le\frac{4}{7}\}$ is the set of all $\epsilon$-solutions of (RFP), and, for $\bar{x}\in\bar{A}$, $\frac{1}{6}\le\bar{r}\le\frac{1}{2}$.

If $\bar{x}=0$, then $\bar{r}=\frac{1}{6}$. When $\epsilon=\frac{1}{3}$, we can calculate the set $\tilde{A}$ as follows:

$$\tilde{A}:=\bigl\{(0,w_1,\lambda_1,r)\ \big|\ 0\le r\le\tfrac{1}{6},\ 0\le\lambda_1\le\tfrac{2}{9},\ w_1\in[1,2]\bigr\}.$$

Let $\bar{w}_1=2$ and $\bar{\lambda}_1=\frac{1}{9}$. Then $(0,2,\frac{1}{9},\frac{1}{6})\in\tilde{A}$. So, we have

$$\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon=\frac{1}{6}=\frac{5}{6}-2\epsilon\ \ge\ r-2\epsilon.$$

Hence, $(0,2,\frac{1}{9},\frac{1}{6})$ is a $2\epsilon$-solution of (RFD). So, $\epsilon$-strong duality holds. If $0<\bar{x}\le\frac{4}{7}$, then $\frac{1}{6}<\bar{r}\le\frac{1}{2}$. When $\epsilon=\frac{1}{3}$, we can calculate the set $\tilde{B}$ as follows:

$$\tilde{B}:=\Bigl\{(y,w_1,\lambda_1,r)\ \Big|\ y>0,\ 2+2\lambda_1w_1-\frac{\epsilon_2}{y}\le r\le\frac{5y+1}{3(y+2)},\ \epsilon_2-\frac{1}{3}(y+2)\le\lambda_1(2w_1y-3),\ r\ge 0,\ w_1\in[1,2],\ \lambda_1\ge 0,\ \epsilon_2\ge 0\Bigr\}.$$

Let $\bar{w}_1=2$, $\bar{\lambda}_1=0$, and $\epsilon_2=\frac{\bar{x}+2}{3}$. Then $\bigl(\bar{x},2,0,\frac{5\bar{x}+1}{3(\bar{x}+2)}\bigr)\in\tilde{B}$. So, we have

$$\bar{r}=\max_{(u,v)\in U\times V}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon=\frac{5\bar{x}+1}{3(\bar{x}+2)}\ \ge\ r-2\epsilon.$$

Hence, $\bigl(\bar{x},2,0,\frac{5\bar{x}+1}{3(\bar{x}+2)}\bigr)$ is a $2\epsilon$-solution of (RFD). So, $\epsilon$-strong duality holds.
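The computations of Example 4.1 can be spot-checked numerically; the following sketch (ours, with our own sampling choices) recovers the $\epsilon$-solution set $\bar{A}$ for $\epsilon=\frac{1}{3}$ and confirms that the dual points constructed above attain the value $\bar{r}$, hence are $2\epsilon$-solutions of (RFD).

```python
# Numerical check of the quantities in Example 4.1 for eps = 1/3.
import numpy as np

eps = 1.0 / 3.0
phi = lambda x: (2 * x + 1) / (x + 2)                  # robust objective of (RFP)
A = np.linspace(0.0, 0.75, 3001)                       # feasible set A = [0, 3/4]

# eps-solution set: all x with phi(x) <= min phi + eps; should be [0, 4/7] ~ [0, 0.571]
A_bar = A[phi(A) <= phi(A).min() + eps + 1e-12]
print(A_bar.min(), A_bar.max())                        # approximately 0.0 and 4/7

for x_bar in [0.0, 0.25, 4.0 / 7.0]:
    r_bar = phi(x_bar) - eps                           # value defining (RNCP)_{r_bar}
    r_dual = (5 * x_bar + 1) / (3 * (x_bar + 2))       # dual value of the point built above
    print(np.isclose(r_bar, r_dual), r_bar >= r_dual - 2 * eps)   # True, True
```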

References

  1. Govil MG, Mehra A: ϵ-Optimality for multiobjective programming on a Banach space. Eur. J. Oper. Res. 2004, 157(1):106–112. 10.1016/S0377-2217(03)00206-6

  2. Gutiérrez C, Jiménez B, Novo V: Multiplier rules and saddle-point theorems for Helbig’s approximate solutions in convex Pareto problems. J. Glob. Optim. 2005, 32(3):367–383. 10.1007/s10898-004-5904-4

  3. Hamel A: An ϵ-Lagrange multiplier rule for a mathematical programming problem on Banach spaces. Optimization 2001, 49(1–2):137–149. 10.1080/02331930108844524

  4. Liu JC: ϵ-Duality theorem of nondifferentiable nonconvex multiobjective programming. J. Optim. Theory Appl. 1991, 69(1):153–167. 10.1007/BF00940466

  5. Liu JC: ϵ-Pareto optimality for nondifferentiable multiobjective programming via penalty function. J. Math. Anal. Appl. 1996, 198(1):248–261. 10.1006/jmaa.1996.0080

  6. Strodiot JJ, Nguyen VH, Heukemes N: ϵ-Optimal solutions in nondifferentiable convex programming and some related questions. Math. Program. 1983, 25(3):307–328. 10.1007/BF02594782

  7. Yokoyama K: Epsilon approximate solutions for multiobjective programming problems. J. Math. Anal. Appl. 1996, 203(1):142–149. 10.1006/jmaa.1996.0371

  8. Lee GM, Lee JH: ϵ-Duality for convex semidefinite optimization problem with conic constraints. J. Inequal. Appl. 2010, 2010: Article ID 363012

  9. Lee JH, Lee GM: On ϵ-solutions for convex optimization problems with uncertainty data. Positivity 2012, 16(3):509–526. 10.1007/s11117-012-0186-4

  10. Barros A, Frenk JBG, Schaible S, Zhang S: Using duality to solve generalized fractional programming problems. J. Glob. Optim. 1996, 8(2):139–170. 10.1007/BF00138690

  11. Chinchuluun A, Yuan D, Pardalos PM: Optimality conditions and duality for nondifferentiable multiobjective fractional programming with generalized convexity. Ann. Oper. Res. 2007, 154(1):133–147. 10.1007/s10479-007-0180-6

  12. Frenk JBG, Schaible S: Fractional programming. In Handbook of Generalized Convexity and Monotonicity. Springer, Berlin; 2004:333–384.

  13. Gupta P, Mehra A, Shiraishi S, Yokoyama K: ϵ-Optimality for minimax programming problems. J. Nonlinear Convex Anal. 2006, 7(2):277–288.

  14. Gupta P, Shiraishi S, Yokoyama K: ϵ-Optimality without constraint qualification for multiobjective fractional program. J. Nonlinear Convex Anal. 2005, 6(2):347–357.

  15. Liang ZA, Huang HX, Pardalos PM: Optimality conditions and duality for a class of nonlinear fractional programming problems. J. Optim. Theory Appl. 2001, 110(3):611–619. 10.1023/A:1017540412396

  16. Beck A, Ben-Tal A: Duality in robust optimization: primal worst equals dual best. Oper. Res. Lett. 2009, 37(1):1–6. 10.1016/j.orl.2008.09.010

  17. Ben-Tal A, El Ghaoui L, Nemirovski A: Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton; 2009.

  18. Ben-Tal A, Nemirovski A: Selected topics in robust convex optimization. Math. Program., Ser. B 2008, 112(1):125–158.

  19. Bertsimas D, Brown D: Constructing uncertainty sets for robust linear optimization. Oper. Res. 2009, 57(6):1483–1495. 10.1287/opre.1080.0646

  20. Bertsimas D, Pachamanova D, Sim M: Robust linear optimization under general norms. Oper. Res. Lett. 2004, 32(6):510–516. 10.1016/j.orl.2003.12.007

  21. Jeyakumar V, Li GY: Strong duality in robust convex programming: complete characterizations. SIAM J. Optim. 2010, 20(6):3384–3407. 10.1137/100791841

  22. Jeyakumar V, Li GY: Robust duality for fractional programming problems with constraint-wise data uncertainty. J. Optim. Theory Appl. 2011, 151(2):292–303. 10.1007/s10957-011-9896-1

  23. Hiriart-Urruty JB, Lemarechal C: Convex Analysis and Minimization Algorithms, vols. I and II. Springer, Berlin; 1993.

  24. Jeyakumar V: Asymptotic dual conditions characterizing optimality for convex programs. J. Optim. Theory Appl. 1997, 93(1):153–165. 10.1023/A:1022606002804

  25. Jeyakumar V, Lee GM, Dinh N: New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs. SIAM J. Optim. 2003, 14(2):534–547. 10.1137/S1052623402417699

  26. Jeyakumar V, Lee GM, Dinh N: Characterization of solution sets of convex vector minimization problems. Eur. J. Oper. Res. 2006, 174(3):1380–1395. 10.1016/j.ejor.2005.05.007

  27. Rockafellar RT: Convex Analysis. Princeton University Press, Princeton; 1970.

  28. Liu JC, Kimura Y, Tanaka K: Three types dual model for minimax fractional programming. Comput. Math. Appl. 1999, 38(7–8):143–155. 10.1016/S0898-1221(99)00245-X


Acknowledgements

The authors are grateful to the referees and the editor for their valuable comments and constructive suggestions, which have contributed to the final version of the paper. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2005378).


Corresponding author

Correspondence to Gue Myung Lee.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally in this paper and they read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lee, J.H., Lee, G.M. On ϵ-solutions for robust fractional optimization problems. J Inequal Appl 2014, 501 (2014). https://doi.org/10.1186/1029-242X-2014-501
