Open Access

On minimax programming problems involving right upper-Dini-derivative functions

  • Anurag Jayswal (1),
  • Izhar Ahmad (2),
  • Krishna Kummari (1) and
  • Suliman Al-Homidan (2)
Journal of Inequalities and Applications 2014, 2014:326

https://doi.org/10.1186/1029-242X-2014-326

Received: 26 May 2014

Accepted: 29 July 2014

Published: 26 August 2014

Abstract

In this paper, we derive necessary and sufficient optimality conditions for a general minimax programming problem involving some classes of generalized convexity, using the right upper-Dini-derivative as the main tool. Moreover, based on these optimality conditions, a Mond-Weir type duality theory is developed for such a minimax programming problem.

MSC:26A51, 49J35, 90C32.

Keywords

minimax programming; upper-Dini-derivative; optimality; duality

1 Introduction

The minimax approach to optimization theory is certainly not new. It has its origins in von Neumann's game theory. The broad spectrum of existing results and applications of minimax theory in the field of optimization is captured in the book edited by Du et al. [1]. Starting with the work of Schmittendorf [2], minimax programming problems have been studied by several authors; see, for example, [3–9] and the references cited therein.

Convexity is a sufficient but not a necessary condition for many important results of mathematical programming, since there are diverse extensions of the notion of convexity bearing the same properties. Moreover, it is well known that a function is convex if and only if its restriction to each line segment in its domain is convex. This property inspired Ortega and Rheinboldt [10] to introduce an important generalization of convex functions by replacing the line segment joining two points by a continuous arc; the resulting functions, defined on arcwise connected sets, are called arcwise connected functions.

Following the idea of arcwise convexity, Avriel and Zang [11] introduced Q-connected (QCN) functions and P-connected (PCN) functions and discussed necessary and sufficient local-global minimum properties of these functions. Some elementary properties of these functions in terms of their directional derivatives were studied by Bhatia and Mehra [12], who also established optimality conditions for scalar-valued nonlinear programming problems involving these functions.

To relax the definition of arcwise convexity in terms of the directional derivative, Yuan and Liu [13] recently introduced the concept of (α, ρ)-right upper-Dini-derivative locally arcwise connected functions with respect to an arc H and established optimality and duality results for a nonlinear multiobjective programming problem. In this paper, we use generalized convex functions, defined in terms of the right upper-Dini-derivative, to derive necessary and sufficient optimality conditions for a general minimax programming problem and duality results for its Mond-Weir type dual model.

This paper is structured as follows: Some preliminary concepts and properties regarding generalized convex functions are given in Section 2. In Section 3, we establish necessary and sufficient optimality conditions for a general minimax programming problem involving generalized convex functions. In Section 4, we establish appropriate duality theorems for a Mond-Weir type dual problem. Finally, in Section 5 we summarize our main results and also point out some further research opportunities.

2 Preliminaries

Let R^n denote the n-dimensional Euclidean space, R^n_+ its nonnegative orthant, and let X ⊆ R^n. For a nonempty set Q in a topological vector space E, Q̄ denotes the closure of Q and

$$Q^{*} = \{ \nu \in E^{*} \mid \nu(q) \ge 0,\ \forall q \in Q \}$$

denotes the dual cone of Q, where E^{*} is the dual space of E.

For a nonempty set Y, let R^Y = Π_{y∈Y} R denote the product space equipped with the product topology. Then the topological dual space of R^Y is the generalized finite sequence space consisting of all functions u : Y → R with finite support [14]. Let R^Y_+ = Π_{y∈Y} R_+ denote the convex cone of all nonnegative functions on Y. Then the topological dual of R^Y_+ is given by

$$(R^{Y}_{+})^{*} = \bigl\{ \lambda = (\lambda_y)_{y \in Y} \mid \exists \text{ a finite set } Y_0 \subseteq Y \text{ such that } \lambda_y = 0,\ \forall y \in Y \setminus Y_0, \text{ and } \lambda_y \ge 0,\ \forall y \in Y_0 \bigr\}.$$

Now, we recall some well-known results and concepts which will be used in the sequel.

Definition 2.1 [15]

A set X ⊆ R^n is said to be an arcwise connected set if, for every x_1 ∈ X and x_2 ∈ X, there exists a continuous vector-valued function H_{x_1,x_2} : [0,1] → X, called an arc, such that

$$H_{x_1,x_2}(0) = x_1, \qquad H_{x_1,x_2}(1) = x_2.$$

Definition 2.2 [13]

Let φ be a real-valued function defined on an arcwise connected set X ⊆ R^n. Let x_1, x_2 ∈ X, and let H_{x_1,x_2} be the arc connecting x_1 and x_2 in X. The right upper-Dini-derivative of φ with respect to H_{x_2,x_1}(t) at t = 0 is defined as follows:

$$(d\varphi)^{+}\bigl(H_{x_2,x_1}(0^{+})\bigr) = \limsup_{t \to 0^{+}} \frac{\varphi\bigl(H_{x_2,x_1}(t)\bigr) - \varphi(x_2)}{t}. \tag{1}$$
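For intuition, the limsup in (1) can be approximated numerically by sampling difference quotients along the arc for small t. The following sketch (an illustration of ours, not part of the paper) uses the linear arc H_{x̄,x}(t) = x̄ + t(x − x̄) and a smooth test function, for which the right upper-Dini-derivative reduces to the ordinary directional derivative; the function names, grid sizes and tolerances are illustrative assumptions.

```python
import numpy as np

def right_upper_dini(phi, x_bar, x, t_min=1e-8, t_max=1e-3, n=4000):
    """Crude numerical estimate of (d phi)^+(H_{x_bar,x}(0+)) along the linear
    arc H_{x_bar,x}(t) = x_bar + t*(x - x_bar): the limsup of the difference
    quotients in (1) is approximated by their maximum over a grid of small
    positive t values."""
    ts = np.logspace(np.log10(t_max), np.log10(t_min), n)
    quotients = [(phi(x_bar + t * (x - x_bar)) - phi(x_bar)) / t for t in ts]
    return max(quotients)

# Smooth test case: phi(u) = u^2, x_bar = 1, x = 3.  Here the right
# upper-Dini-derivative equals the directional derivative
# phi'(x_bar) * (x - x_bar) = 2 * 1 * 2 = 4.
phi = lambda u: u ** 2
print(right_upper_dini(phi, 1.0, 3.0))   # approximately 4
```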

Using this upper-Dini-derivative concept, Yuan and Liu [13] introduced a class of functions which they called (α, ρ)-right upper-Dini-derivative functions. For convenience, we use the following notations.

Definition 2.3 [13]

A set X ⊆ R^n is said to be locally arcwise connected at x̄ if, for any x ∈ X with x ≠ x̄, there exist a positive number a(x, x̄), with 0 < a(x, x̄) ≤ 1, and a continuous arc H_{x̄,x} such that H_{x̄,x}(t) ∈ X for any t ∈ (0, a(x, x̄)).

The set X is locally arcwise connected if it is locally arcwise connected at every x ∈ X.

Definition 2.4 [13]

Let X ⊆ R^n be a locally arcwise connected set and let φ : X → R be a real-valued function defined on X. The function φ is said to be (α, ρ)-right upper-Dini-derivative locally arcwise connected with respect to H at x̄ if there exist real functions α : X × X → R and ρ : X × X → R such that

$$\varphi(x) - \varphi(\bar{x}) \ge \alpha(x,\bar{x})\,(d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) + \rho(x,\bar{x}), \quad \forall x \in X.$$

If φ is ( α , ρ ) -right upper-Dini-derivative locally arcwise connected with respect to H at x ¯ for any x ¯ X , then φ is called ( α , ρ ) -right upper-Dini-derivative locally arcwise connected with respect to H on X.

Remark 2.1 An example given in [13] reveals that there exists a function which is (α, ρ)-right upper-Dini-derivative locally arcwise connected but neither d-ρ-(η, θ)-invex [16] nor d-invex [17] nor directionally differentiable B-arcwise connected [15].

Now we define the notions of ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected, strictly ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected and ρ-generalized-quasi-right upper-Dini-derivative locally arcwise connected functions.

Definition 2.5 The function φ : X → R is said to be ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄ if there exists a real function ρ : X × X → R such that

$$(d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) \ge -\rho(x,\bar{x}) \ \Longrightarrow\ \varphi(x) \ge \varphi(\bar{x}), \quad \forall x \in X,$$

or, equivalently,

$$\varphi(x) < \varphi(\bar{x}) \ \Longrightarrow\ (d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) < -\rho(x,\bar{x}), \quad \forall x \in X.$$

The function φ : X → R is said to be ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) on X if it is ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at every x̄ ∈ X.

The following example shows that there exists a function which is ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected but not ( α , ρ ) -right upper-Dini-derivative locally arcwise connected with respect to the arc H.

Example 2.1 Let X = (−1, 1) and let the function φ : X → R be defined by

$$\varphi(x) = \begin{cases} |x| \sin^{2}\dfrac{1}{x}, & \text{if } x \in (-1,0) \cup (0,1), \\ 0, & \text{if } x = 0. \end{cases}$$

For any x, y ∈ X, define the arc H_{y,x} : [0,1] → X by

$$H_{y,x}(t) = t x + (1-t) y, \quad t \in [0,1].$$

Note that, by the definition (1) of the right upper-Dini-derivative, for x ∈ (−1,0) ∪ (0,1) we have

$$(d\varphi)^{+}\bigl(H_{0,x}(0^{+})\bigr) = |x|.$$

Let ρ : X × X → R be defined by

$$\rho(x,y) = \begin{cases} |x|, & \text{if } x \in (-1,0) \cup (0,1), \\ 0, & \text{if } x = 0. \end{cases}$$

Now, for x̄ = 0, it follows that

$$(d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) \ge -\rho(x,\bar{x}) \ \Longrightarrow\ \varphi(x) \ge \varphi(\bar{x}), \quad \forall x \in X.$$

This means that φ is ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄ = 0. But φ is not (α, ρ)-right upper-Dini-derivative locally arcwise connected with respect to the same arc H and the same ρ at x̄ = 0, because for x ∈ (−1,0) ∪ (0,1) and α(x, x̄) = 1 we can see that

$$\varphi(x) - \varphi(\bar{x}) - \alpha(x,\bar{x})\,(d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) - \rho(x,\bar{x}) < 0.$$
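The two claims of Example 2.1 can be checked numerically. The sketch below (our own illustration, not part of the paper) samples points x ∈ (−1,0) ∪ (0,1), confirms that φ(x) ≥ φ(0) (so the pseudo-type implication holds at x̄ = 0, its hypothesis |x| ≥ −ρ(x,0) being always true), and verifies that the (α, ρ) inequality with α ≡ 1 fails at every sampled point; the sampling grid is an illustrative assumption.

```python
import numpy as np

phi  = lambda x: abs(x) * np.sin(1.0 / x) ** 2 if x != 0 else 0.0
dini = lambda x: abs(x)                 # (d phi)^+(H_{0,x}(0+)) from Example 2.1
rho  = lambda x: abs(x) if x != 0 else 0.0

x_bar = 0.0
# sample points in (-1, 0) U (0, 1), avoiding 0 by construction
xs = np.concatenate([np.linspace(-0.99, -0.01, 99), np.linspace(0.01, 0.99, 99)])

# pseudo-type property at x_bar = 0: hypothesis always holds and phi(x) >= phi(0)
assert all(dini(x) >= -rho(x) for x in xs)
assert all(phi(x) >= phi(x_bar) for x in xs)

# failure of the (alpha, rho) inequality with alpha = 1:
# phi(x) - phi(0) - 1 * dini(x) - rho(x) = |x| * (sin^2(1/x) - 2) < 0
gap = [phi(x) - phi(x_bar) - dini(x) - rho(x) for x in xs]
print(max(gap) < 0)   # True: the inequality fails at every sampled point
```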
Definition 2.6 The function φ : X → R is said to be strictly ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄ if there exists a real function ρ : X × X → R such that

$$(d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) \ge -\rho(x,\bar{x}) \ \Longrightarrow\ \varphi(x) > \varphi(\bar{x}), \quad \forall x \in X,\ x \ne \bar{x},$$

or, equivalently,

$$\varphi(x) \le \varphi(\bar{x}) \ \Longrightarrow\ (d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) < -\rho(x,\bar{x}), \quad \forall x \in X,\ x \ne \bar{x}.$$

The function φ : X → R is said to be strictly ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) on X if it is strictly ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at every x̄ ∈ X.

Definition 2.7 The function φ : X → R is said to be ρ-generalized-quasi-right upper-Dini-derivative locally arcwise connected with respect to H at x̄ if there exists a real function ρ : X × X → R such that

$$\varphi(x) \le \varphi(\bar{x}) \ \Longrightarrow\ (d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) \le -\rho(x,\bar{x}), \quad \forall x \in X,$$

or, equivalently,

$$(d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) > -\rho(x,\bar{x}) \ \Longrightarrow\ \varphi(x) > \varphi(\bar{x}), \quad \forall x \in X.$$

The function φ : X → R is said to be ρ-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to H) on X if it is ρ-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to H) at every x̄ ∈ X.

The next example shows that there exists a function which is ρ-generalized-quasi-right upper-Dini-derivative locally arcwise connected but neither ( α , ρ ) -right upper-Dini-derivative locally arcwise connected nor ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected with respect to the arc H.

Example 2.2 Let X = (−π/2, π/2) and let the function φ : X → R be defined by

$$\varphi(x) = \begin{cases} \pi |x| \sin^{2}\dfrac{\pi}{x} - 2\pi |x|, & \text{if } x \in \bigl(-\tfrac{\pi}{2},0\bigr) \cup \bigl(0,\tfrac{\pi}{2}\bigr), \\ 0, & \text{if } x = 0. \end{cases}$$

For any x, y ∈ X, define the arc H_{y,x} : [0,1] → X by

$$H_{y,x}(t) = t x + (1-t) y, \quad t \in [0,1].$$

Clearly, for x ∈ (−π/2, 0) ∪ (0, π/2) we have

$$(d\varphi)^{+}\bigl(H_{0,x}(0^{+})\bigr) = -\pi |x|.$$

Let ρ : X × X → R be defined by

$$\rho(x,y) = \begin{cases} \pi |x|, & \text{if } x \in \bigl(-\tfrac{\pi}{2},0\bigr) \cup \bigl(0,\tfrac{\pi}{2}\bigr), \\ 0, & \text{if } x = 0. \end{cases}$$

Now, we can easily verify that φ is ρ-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄ = 0. However, for x ∈ (−π/2, 0) ∪ (0, π/2), α(x, x̄) = 1 and x̄ = 0, we can deduce that

$$\varphi(x) - \varphi(\bar{x}) - \alpha(x,\bar{x})\,(d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) - \rho(x,\bar{x}) < 0,$$

and

$$\varphi(x) - \varphi(\bar{x}) < 0 \quad \text{while} \quad (d\varphi)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) + \rho(x,\bar{x}) = 0.$$

Hence, φ is neither (α, ρ)-right upper-Dini-derivative locally arcwise connected nor ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected with respect to the same arc H and the same ρ at x̄ = 0.

Definition 2.8 [13]

A function f : X → R is called preinvex (with respect to η : X × X → R^n) on X if there exists a vector-valued function η such that

$$f\bigl(u + t\,\eta(x,u)\bigr) \le t f(x) + (1-t) f(u)$$

holds for all x, u ∈ X and any t ∈ [0,1].

Definition 2.9 [13]

A function f : X → R is said to be convexlike if for any x, y ∈ X and 0 ≤ θ ≤ 1 there is z ∈ X such that

$$f(z) \le \theta f(x) + (1-\theta) f(y).$$

Remark 2.2 The convex and the preinvex functions are convexlike functions.

In the next section we will use the following version of Theorem 2.3 from [9].

Lemma 2.1 Let G : X × Y → R and ψ : X → R, where X and Y are arbitrary nonempty sets. Let the pair (G, ψ) be convexlike on X. Assume that, for some neighborhood U of 0 in R^Y and a constant ν > 0, the set Ω_0 ∩ (Ū × (−∞, ν]) is a nonempty closed subset of R^Y × R, where

$$\Omega_0 = \bigl\{ (u, r) \mid u : Y \to R \text{ and } \exists x \in X \text{ such that } \psi(x) \le r,\ G(x,y) \le u(y),\ \forall y \in Y \bigr\}.$$

Then exactly one of the following systems is solvable:

  (I) ∃ x ∈ X such that G(x, y) ≤ 0 for all y ∈ Y and ψ(x) < 0;

  (II) ∃ an integer s > 0, scalars λ_i ≥ 0, 1 ≤ i ≤ s, μ ≥ 0, and vectors y_i ∈ Y, 1 ≤ i ≤ s, such that (λ_1, …, λ_s, μ) ≠ 0 and ∑_{i=1}^{s} λ_i G(x, y_i) + μ ψ(x) ≥ 0 for all x ∈ X.

3 Optimality conditions

Consider the following general minimax programming problem:
$$\text{Minimize } \max_{y \in Y} f(x,y) \quad \text{subject to } g(x) \le 0,\ x \in X, \tag{P}$$

where f : X × Y → R, g = (g_1, g_2, …, g_m) : X → R^m, X is an open arcwise connected subset of R^n, Y is a compact subset of R^m, and f(x, ·) is continuous on Y for every x ∈ X. Let X^0 = {x ∈ X | g_j(x) ≤ 0, 1 ≤ j ≤ m} denote the set of feasible solutions of (P).

For x ∈ X, we define

$$I(x) = \{ j \mid g_j(x) = 0 \}, \qquad J(x) = \{1, 2, \ldots, m\} \setminus I(x), \qquad Y(x) = \Bigl\{ y \in Y \ \Big|\ f(x,y) = \sup_{z \in Y} f(x,z) \Bigr\}.$$

In view of the continuity of f(x, ·) on Y and the compactness of Y, it is clear that Y(x) is a nonempty compact subset of Y for every x ∈ X. Throughout this paper we assume that the right upper-Dini-derivatives of the functions f(·, y) and g_j(·), j ∈ {1, 2, …, m}, with respect to an arc H_{x̄,x} at t = 0 exist for all x̄, x ∈ X and all y ∈ Y, and that (df)^+(H_{x̄,x}(0^+), ·) is continuous on Y for all x̄, x ∈ X. We also assume that g_j(·), 1 ≤ j ≤ m, is continuous on X.
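To make the model (P) concrete, the following sketch (our own toy instance, not taken from the paper) solves a small problem of the form (P) numerically: f(x, y) = y x² + (1 − y)(x − 2)² on Y = [0, 1], with the single constraint g₁(x) = 1 − x ≤ 0. The inner maximum is handled by discretizing Y and introducing an epigraph variable t, a standard reformulation; the grid size, starting point and solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

y_grid = np.linspace(0.0, 1.0, 21)        # discretization of Y = [0, 1]

def f(x, y):
    return y * x[0] ** 2 + (1.0 - y) * (x[0] - 2.0) ** 2

def g1(x):
    return 1.0 - x[0]                      # constraint g1(x) <= 0, i.e. x >= 1

# decision vector v = (x, t); minimize t subject to f(x, y_k) <= t and g1(x) <= 0
objective = lambda v: v[1]
constraints = [{'type': 'ineq', 'fun': (lambda v, y=y: v[1] - f(v[:1], y))}
               for y in y_grid]
constraints.append({'type': 'ineq', 'fun': lambda v: -g1(v[:1])})

res = minimize(objective, x0=np.array([1.5, 5.0]),
               constraints=constraints, method='SLSQP')
print(res.x)   # expected: x near 1 and t near 1, since max_y f(1, y) = 1
```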

The following lemma can be proved without difficulty along the same lines as Lemma 3.1 of Mehra and Bhatia [9].

Lemma 3.1 Let x̄ be an optimal solution of (P). Then the system

$$\begin{cases} (df)^{+}\bigl(H_{\bar{x},x}(0^{+}), y\bigr) < 0, & \forall y \in Y(\bar{x}), \\ (dg_j)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) < 0, & \forall j \in I(\bar{x}), \end{cases} \tag{2}$$

has no solution x ∈ X.

We now prove the following theorem by using Lemmas 2.1 and 3.1, which gives the necessary optimality conditions for an optimal solution of problem (P).

Theorem 3.1 (Necessary optimality conditions)

Let x̄ be an optimal solution of (P). Further, let (df)^+(H_{x̄,x}(0^+), y), y ∈ Y(x̄), and (dg_j)^+(H_{x̄,x}(0^+)), j ∈ I(x̄), be convexlike functions of x on X, and let there exist a neighborhood U of 0 in R^{Y(x̄)} and a vector ν = (ν_j)_{j∈I(x̄)} of positive constants such that Ω(x̄) ∩ (Ū × Π_{j∈I(x̄)} (−∞, ν_j]) is a nonempty closed set, where

$$\Omega(\bar{x}) = \bigl\{ (u, r) \mid r = (r_j)_{j \in I(\bar{x})},\ u : Y \to R \text{ and } \exists x \in X \text{ such that } f(x,y) \le u(y),\ \forall y \in Y(\bar{x}),\ g_j(x) \le r_j,\ \forall j \in I(\bar{x}) \bigr\}.$$

Then there exist an integer s > 0, scalars λ_i ≥ 0, 1 ≤ i ≤ s, μ_j ≥ 0, 1 ≤ j ≤ m, and vectors y_i ∈ Y(x̄), 1 ≤ i ≤ s, such that

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},x}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) \ge 0, \quad \forall x \in X,$$

$$\mu_j g_j(\bar{x}) = 0, \quad 1 \le j \le m, \qquad \sum_{i=1}^{s} \lambda_i + \sum_{j=1}^{m} \mu_j \ne 0.$$
Proof If x̄ is an optimal solution of (P), then, by Lemma 3.1, the system (2) has no solution x ∈ X. Since the assumptions of Lemma 2.1 also hold and the system (2) has no solution x ∈ X, there exist an integer s > 0, scalars λ_i ≥ 0, 1 ≤ i ≤ s, μ_j ≥ 0, j ∈ I(x̄), and vectors y_i ∈ Y(x̄), 1 ≤ i ≤ s, such that

$$\bigl(\lambda_1, \lambda_2, \ldots, \lambda_s, (\mu_j)_{j \in I(\bar{x})}\bigr) \ne 0 \tag{3}$$

and

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},x}(0^{+}), y_i\bigr) + \sum_{j \in I(\bar{x})} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) \ge 0, \quad \forall x \in X. \tag{4}$$

Setting μ_j = 0 for j ∈ J(x̄), we obtain the required result from (3) and (4). □
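As an illustration of Theorem 3.1 (not from the paper), consider again the toy instance f(x, y) = y x² + (1 − y)(x − 2)², Y = [0, 1], g₁(x) = 1 − x, whose optimal solution is x̄ = 1. Since f(1, ·) ≡ 1, we have Y(x̄) = Y, and along the linear arc H_{x̄,x}(t) = x̄ + t(x − x̄) the right upper-Dini-derivatives reduce to directional derivatives. Taking s = 1, y₁ = 1, λ₁ = 1 and μ₁ = 2 (so that μ₁ g₁(x̄) = 0 holds), the sketch below checks numerically that the multiplier inequality of Theorem 3.1 holds for sampled x; these particular multiplier values are our own choice for this toy data.

```python
import numpy as np

x_bar = 1.0

def df_dini(x, y):
    # directional derivative of f(., y) at x_bar along the linear arc towards x:
    # d/dx [y*x^2 + (1 - y)*(x - 2)^2] at x_bar, times (x - x_bar)
    grad = 2.0 * y * x_bar + 2.0 * (1.0 - y) * (x_bar - 2.0)
    return grad * (x - x_bar)

def dg1_dini(x):
    # g1(x) = 1 - x, so the directional derivative at x_bar is -(x - x_bar)
    return -(x - x_bar)

s, y1, lam1, mu1 = 1, 1.0, 1.0, 2.0     # chosen multipliers (illustrative)
assert mu1 * (1.0 - x_bar) == 0.0        # complementary slackness, cf. (6)

xs = np.linspace(-5.0, 5.0, 1001)
lhs = [lam1 * df_dini(x, y1) + mu1 * dg1_dini(x) for x in xs]
print(min(lhs) >= -1e-12)   # True: the multiplier inequality holds for all sampled x
```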

Now, we prove sufficient optimality conditions for the considered minimax problem (P) under generalized convexity defined via the upper-Dini-derivative concept.

Theorem 3.2 (Sufficient optimality conditions)

Let x̄ ∈ X^0 and suppose there exist an integer s > 0, scalars λ_i ≥ 0, 1 ≤ i ≤ s, with ∑_{i=1}^{s} λ_i ≠ 0, μ_j ≥ 0, 1 ≤ j ≤ m, and vectors y_i ∈ Y(x̄), 1 ≤ i ≤ s, such that α(x, x̄) > 0 for all x ∈ X,

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},x}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},x}(0^{+})\bigr) \ge 0, \quad \forall x \in X, \tag{5}$$

$$\mu_j g_j(\bar{x}) = 0, \quad 1 \le j \le m. \tag{6}$$

Also, assume that

  (i) for 1 ≤ i ≤ s, f(·, y_i) is (α, ρ̄_i)-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄,

  (ii) for 1 ≤ j ≤ m, g_j(·) is (α, ρ̆_j)-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄,

  (iii) ∑_{i=1}^{s} λ_i ρ̄_i(x, x̄) + ∑_{j=1}^{m} μ_j ρ̆_j(x, x̄) ≥ 0 for all x ∈ X.

Then x̄ is an optimal solution of (P).

Proof Suppose, to the contrary, that x̄ is not an optimal solution of (P). Then there exists x̃ ∈ X^0 such that

$$\sup_{z \in Y} f(\tilde{x}, z) < \sup_{z \in Y} f(\bar{x}, z).$$

Further, since y_i ∈ Y(x̄), we have

$$\sup_{z \in Y} f(\bar{x}, z) = f(\bar{x}, y_i), \quad 1 \le i \le s.$$

Also, since y_i ∈ Y, 1 ≤ i ≤ s, we have

$$f(\tilde{x}, y_i) \le \sup_{z \in Y} f(\tilde{x}, z), \quad 1 \le i \le s.$$

Thus, from the above three inequalities, we get

$$f(\tilde{x}, y_i) < f(\bar{x}, y_i), \quad 1 \le i \le s.$$

Using λ_i ≥ 0, 1 ≤ i ≤ s, and ∑_{i=1}^{s} λ_i ≠ 0, we obtain

$$\sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) < \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i). \tag{7}$$

For x̃ ∈ X^0 and μ_j ≥ 0, 1 ≤ j ≤ m, we have μ_j g_j(x̃) ≤ 0, which in view of (6) implies that

$$\sum_{j=1}^{m} \mu_j g_j(\tilde{x}) \le \sum_{j=1}^{m} \mu_j g_j(\bar{x}). \tag{8}$$

Now, (7) and (8) give

$$\sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\tilde{x}) < \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\bar{x}). \tag{9}$$

On the other hand, since f(·, y_i), 1 ≤ i ≤ s, and g_j(·), 1 ≤ j ≤ m, are, respectively, (α, ρ̄_i)- and (α, ρ̆_j)-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄, we have

$$f(\tilde{x}, y_i) - f(\bar{x}, y_i) \ge \alpha(\tilde{x},\bar{x})\,(df)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+}), y_i\bigr) + \bar{\rho}_i(\tilde{x},\bar{x}), \quad 1 \le i \le s, \tag{10}$$

$$g_j(\tilde{x}) - g_j(\bar{x}) \ge \alpha(\tilde{x},\bar{x})\,(dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) + \breve{\rho}_j(\tilde{x},\bar{x}), \quad 1 \le j \le m. \tag{11}$$

Multiplying (10) by λ_i and (11) by μ_j and summing, since λ_i ≥ 0, 1 ≤ i ≤ s, and μ_j ≥ 0, 1 ≤ j ≤ m, we get

$$\begin{aligned} &\Bigl\{\sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\tilde{x})\Bigr\} - \Bigl\{\sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\bar{x})\Bigr\} \\ &\quad \ge \alpha(\tilde{x},\bar{x}) \Bigl\{\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr)\Bigr\} + \sum_{i=1}^{s} \lambda_i \bar{\rho}_i(\tilde{x},\bar{x}) + \sum_{j=1}^{m} \mu_j \breve{\rho}_j(\tilde{x},\bar{x}). \end{aligned}$$

By (5), using α(x̃, x̄) > 0 and ∑_{i=1}^{s} λ_i ρ̄_i(x̃, x̄) + ∑_{j=1}^{m} μ_j ρ̆_j(x̃, x̄) ≥ 0, it follows that

$$\Bigl\{\sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\tilde{x})\Bigr\} - \Bigl\{\sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\bar{x})\Bigr\} \ge 0,$$

which contradicts (9). Hence x̄ is an optimal solution of (P), and the theorem is proved. □

Theorem 3.3 (Sufficient optimality conditions)

Let x̄ ∈ X^0 and suppose there exist an integer s > 0, scalars λ_i ≥ 0, 1 ≤ i ≤ s, with ∑_{i=1}^{s} λ_i ≠ 0, μ_j ≥ 0, 1 ≤ j ≤ m, and vectors y_i ∈ Y(x̄), 1 ≤ i ≤ s, such that the conditions (5) and (6) of Theorem 3.2 hold. Also, assume that

  (i) ∑_{i=1}^{s} λ_i f(·, y_i) is ρ̄-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄,

  (ii) ∑_{j=1}^{m} μ_j g_j(·) is ρ̆-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄,

  (iii) ρ̄(x, x̄) + ρ̆(x, x̄) ≥ 0 for all x ∈ X.

Then x̄ is an optimal solution of (P).

Proof Suppose, to the contrary, that x̄ is not an optimal solution of (P). Then, following the proof of Theorem 3.2, we have

$$\sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) < \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i),$$

which, by the ρ̄-generalized-pseudo-right upper-Dini-derivative local arcwise connectedness (with respect to H) of ∑_{i=1}^{s} λ_i f(·, y_i) at x̄, gives

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+}), y_i\bigr) < -\bar{\rho}(\tilde{x},\bar{x}). \tag{12}$$

For x̃ ∈ X^0 and μ_j ≥ 0, 1 ≤ j ≤ m, we have μ_j g_j(x̃) ≤ 0, which in view of (6) implies that

$$\sum_{j=1}^{m} \mu_j g_j(\tilde{x}) \le \sum_{j=1}^{m} \mu_j g_j(\bar{x}),$$

which, by the ρ̆-generalized-quasi-right upper-Dini-derivative local arcwise connectedness (with respect to H) of ∑_{j=1}^{m} μ_j g_j(·) at x̄, gives

$$\sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) \le -\breve{\rho}(\tilde{x},\bar{x}). \tag{13}$$

By (12) and (13), we get

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) < -\bar{\rho}(\tilde{x},\bar{x}) - \breve{\rho}(\tilde{x},\bar{x}) \le 0,$$

where the last inequality follows from ρ̄(x̃, x̄) + ρ̆(x̃, x̄) ≥ 0. Therefore,

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) < 0,$$

which contradicts (5). Hence x̄ is an optimal solution of (P), and the theorem is proved. □

Theorem 3.4 (Sufficient optimality conditions)

Let x̄ ∈ X^0 and suppose there exist an integer s > 0, scalars λ_i ≥ 0, 1 ≤ i ≤ s, with ∑_{i=1}^{s} λ_i ≠ 0, μ_j ≥ 0, 1 ≤ j ≤ m, and vectors y_i ∈ Y(x̄), 1 ≤ i ≤ s, such that the conditions (5) and (6) of Theorem 3.2 hold. Also, assume that

  (i) ∑_{i=1}^{s} λ_i f(·, y_i) is strictly ρ̄-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄,

  (ii) ∑_{j=1}^{m} μ_j g_j(·) is ρ̆-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to H) at x̄,

  (iii) ρ̄(x, x̄) + ρ̆(x, x̄) ≥ 0 for all x ∈ X.

Then x̄ is an optimal solution of (P).

Proof The proof follows along lines similar to the proof of Theorem 3.3 and hence is omitted. □

Theorem 3.5 (Sufficient optimality conditions)

Let x̄ ∈ X^0 and suppose there exist an integer s > 0, scalars λ_i ≥ 0, 1 ≤ i ≤ s, with ∑_{i=1}^{s} λ_i ≠ 0, μ_j ≥ 0, 1 ≤ j ≤ m, and vectors y_i ∈ Y(x̄), 1 ≤ i ≤ s, such that the conditions (5) and (6) of Theorem 3.2 hold. Also, assume that

  (i) ∑_{i=1}^{s} λ_i f(·, y_i) is ρ̄-generalized-quasi-right upper-Dini-derivative locally arcwise connected with respect to H at x̄,

  (ii) for j ∈ I(x̄), j ≠ l, g_j(·) is ρ̆_j-generalized-quasi-right upper-Dini-derivative locally arcwise connected and g_l(·) is strictly ρ̆_l-generalized-pseudo-right upper-Dini-derivative locally arcwise connected with respect to H at x̄, with μ_l > 0,

  (iii) ρ̄(x, x̄) + ∑_{j=1}^{m} μ_j ρ̆_j(x, x̄) ≥ 0 for all x ∈ X.

Then x̄ is an optimal solution of (P).

Proof Suppose, to the contrary, that x̄ is not an optimal solution of (P). Then, following the proof of Theorem 3.2, we have

$$\sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) < \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i),$$

which, by the ρ̄-generalized-quasi-right upper-Dini-derivative local arcwise connectedness of ∑_{i=1}^{s} λ_i f(·, y_i) with respect to H at x̄, gives

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+}), y_i\bigr) \le -\bar{\rho}(\tilde{x},\bar{x}). \tag{14}$$

Since x̃ ∈ X^0, for j ∈ I(x̄) we have

$$g_j(\tilde{x}) \le 0 = g_j(\bar{x}),$$

which, by the ρ̆_j-generalized-quasi-right upper-Dini-derivative local arcwise connectedness of g_j(·), j ∈ I(x̄), j ≠ l, and the strict ρ̆_l-generalized-pseudo-right upper-Dini-derivative local arcwise connectedness of g_l(·) with respect to H at x̄, gives

$$(dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) \le -\breve{\rho}_j(\tilde{x},\bar{x}), \quad \text{for } j \in I(\bar{x}),\ j \ne l, \tag{15}$$

$$(dg_l)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) < -\breve{\rho}_l(\tilde{x},\bar{x}). \tag{16}$$

Since μ_j ≥ 0, j ∈ I(x̄), μ_l > 0, and, by (6), μ_j = 0 for j ∈ J(x̄), from (15) and (16) we get

$$\sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) < -\sum_{j=1}^{m} \mu_j \breve{\rho}_j(\tilde{x},\bar{x}). \tag{17}$$

By (14) and (17), we get

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) < -\bar{\rho}(\tilde{x},\bar{x}) - \sum_{j=1}^{m} \mu_j \breve{\rho}_j(\tilde{x},\bar{x}) \le 0,$$

where the last inequality follows from ρ̄(x̃, x̄) + ∑_{j=1}^{m} μ_j ρ̆_j(x̃, x̄) ≥ 0. Therefore,

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{\bar{x},\tilde{x}}(0^{+})\bigr) < 0,$$

which contradicts (5). Hence x̄ is an optimal solution of (P), and the theorem is proved. □

4 Duality

This section deals with the duality theorems for the following Mond-Weir type dual (D) of minimax problem (P):
$$\max_{(s,\lambda,y) \in K} \ \sup_{(z,\mu) \in H(s,\lambda,y)} \ \sum_{i=1}^{s} \lambda_i f(z, y_i), \tag{D}$$

where K = {(s, λ, y) : s is a positive integer, λ = (λ_1, …, λ_s) ∈ R^s_+ with ∑_{i=1}^{s} λ_i = 1, y = (y_1, y_2, …, y_s) with y_i ∈ Y(x) for some x ∈ X, 1 ≤ i ≤ s}, and H(s, λ, y) denotes the set of all (z, μ) ∈ X × R^m_+ satisfying

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{z,x}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{z,x}(0^{+})\bigr) \ge 0, \quad \text{for all } x \in X, \tag{18}$$

$$\sum_{j=1}^{m} \mu_j g_j(z) \ge 0. \tag{19}$$

If, for a triplet (s, λ, y) ∈ K, the set H(s, λ, y) is empty, then we define the supremum over it to be −∞.
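Continuing the toy instance used above (f(x, y) = y x² + (1 − y)(x − 2)², Y = [0, 1], g₁(x) = 1 − x, primal optimal value 1 at x = 1), the sketch below builds dual feasible points of (D) by hand and checks (18), (19) and the weak duality inequality numerically. With linear arcs, the Dini-derivative combination in (18) is a multiple of (x − z), which suggests choosing μ₁ = λ₁[2 y₁ z + 2(1 − y₁)(z − 2)]; the specific choices s = 1, y₁ = 1, λ₁ = 1 and the points z are our own illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def f(x, y):
    return y * x ** 2 + (1.0 - y) * (x - 2.0) ** 2

def g1(x):
    return 1.0 - x

primal_value = 1.0                 # max_y f(1, y) at the primal optimum x = 1
s, y1, lam1 = 1, 1.0, 1.0          # (s, lambda, y) in K, with sum of lambdas = 1

for z in (1.0, 0.5):
    grad = 2.0 * y1 * z + 2.0 * (1.0 - y1) * (z - 2.0)   # d/dx f(z, y1)
    mu1 = lam1 * grad                                     # makes (18) hold with equality
    xs = np.linspace(-5.0, 5.0, 1001)
    # (18) along linear arcs: (lam1*grad - mu1) * (x - z) >= 0 for all sampled x
    lhs18 = [(lam1 * grad - mu1) * (x - z) for x in xs]
    ok18 = min(lhs18) >= -1e-12
    ok19 = mu1 * g1(z) >= 0.0                             # condition (19)
    dual_value = lam1 * f(z, y1)
    print(z, ok18, ok19, dual_value <= primal_value)      # weak duality check
```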

Theorem 4.1 (Weak duality)

Let x and (z, μ, s, λ, y) be feasible solutions of (P) and (D), respectively. Assume that

  (i) ∑_{i=1}^{s} λ_i f(·, y_i) is ρ̄-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at z,

  (ii) ∑_{j=1}^{m} μ_j g_j(·) is ρ̆-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to H) at z,

  (iii) ρ̄(x, z) + ρ̆(x, z) ≥ 0.

Then

$$\sup_{y \in Y} f(x, y) \ge \sum_{i=1}^{s} \lambda_i f(z, y_i).$$
Proof Suppose, to the contrary, that

$$\sup_{y \in Y} f(x, y) < \sum_{i=1}^{s} \lambda_i f(z, y_i).$$

Thus, for the vectors y_i ∈ Y, we have

$$f(x, y_i) \le \sup_{y \in Y} f(x, y) < \sum_{i=1}^{s} \lambda_i f(z, y_i), \quad 1 \le i \le s.$$

It follows from λ_i ≥ 0, 1 ≤ i ≤ s, and ∑_{i=1}^{s} λ_i = 1 that

$$\sum_{i=1}^{s} \lambda_i f(x, y_i) < \sum_{i=1}^{s} \lambda_i f(z, y_i),$$

which, by the ρ̄-generalized-pseudo-right upper-Dini-derivative local arcwise connectedness (with respect to H) of ∑_{i=1}^{s} λ_i f(·, y_i) at z, gives

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{z,x}(0^{+}), y_i\bigr) < -\bar{\rho}(x, z). \tag{20}$$

For x ∈ X^0 and μ_j ≥ 0, 1 ≤ j ≤ m, we have μ_j g_j(x) ≤ 0, which in view of (19) implies that

$$\sum_{j=1}^{m} \mu_j g_j(x) \le \sum_{j=1}^{m} \mu_j g_j(z),$$

which, by the ρ̆-generalized-quasi-right upper-Dini-derivative local arcwise connectedness (with respect to H) of ∑_{j=1}^{m} μ_j g_j(·) at z, gives

$$\sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{z,x}(0^{+})\bigr) \le -\breve{\rho}(x, z). \tag{21}$$

By (20) and (21), we get

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{z,x}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{z,x}(0^{+})\bigr) < -\bar{\rho}(x, z) - \breve{\rho}(x, z) \le 0,$$

where the last inequality follows from ρ̄(x, z) + ρ̆(x, z) ≥ 0. Therefore,

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{z,x}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{z,x}(0^{+})\bigr) < 0,$$

which contradicts (18). Hence the theorem is proved. □

Theorem 4.2 (Strong duality)

Let x be an optimal solution of (P). Assume that the conditions of Theorem 3.1 are satisfied. Then there exist (s, λ, y) ∈ K and (x, μ) ∈ H(s, λ, y) such that (x, μ, s, λ, y) is a feasible solution of (D) and the two objective values are equal. If, in addition, the assumptions of the weak duality Theorem 4.1 hold for all feasible solutions of (D), then (x, μ, s, λ, y) is an optimal solution of (D).

Proof Since x is an optimal solution of (P) and all the conditions of Theorem 3.1 are satisfied, there exist (s, λ, y) ∈ K and (x, μ) ∈ H(s, λ, y) such that (x, μ, s, λ, y) is a feasible solution of (D) and the two objective values are equal. The optimality of (x, μ, s, λ, y) for (D) then follows from Theorem 4.1. □

Theorem 4.3 (Strict converse duality)

Let x and (z, μ, s, λ, y) be optimal solutions of (P) and (D), respectively. Assume that the hypotheses of Theorem 4.2 are fulfilled. Also, assume that

  (i) ∑_{i=1}^{s} λ_i f(·, y_i) is strictly ρ̄-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to H) at z,

  (ii) ∑_{j=1}^{m} μ_j g_j(·) is ρ̆-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to H) at z,

  (iii) ρ̄(x, z) + ρ̆(x, z) ≥ 0.

Then z = x.

Proof Suppose, to the contrary, that z ≠ x. By Theorem 4.2, we know that there exist (s, λ, y) ∈ K and (x, μ) ∈ H(s, λ, y) such that (x, μ, s, λ, y) is a feasible solution of (D) and

$$\sup_{y \in Y} f(x, y) = \sum_{i=1}^{s} \lambda_i f(z, y_i).$$

Thus, for the vectors y_i ∈ Y, we have

$$f(x, y_i) \le \sum_{i=1}^{s} \lambda_i f(z, y_i), \quad 1 \le i \le s.$$

It follows from λ_i ≥ 0, 1 ≤ i ≤ s, and ∑_{i=1}^{s} λ_i = 1 that

$$\sum_{i=1}^{s} \lambda_i f(x, y_i) \le \sum_{i=1}^{s} \lambda_i f(z, y_i). \tag{22}$$

Now, proceeding along the same lines as in the proof of Theorem 4.1 (using the strict pseudo hypothesis (i) together with z ≠ x), we get

$$\sum_{i=1}^{s} \lambda_i (df)^{+}\bigl(H_{z,x}(0^{+}), y_i\bigr) + \sum_{j=1}^{m} \mu_j (dg_j)^{+}\bigl(H_{z,x}(0^{+})\bigr) < 0,$$

which contradicts (18). Hence the theorem is proved. □

5 Conclusion

In this study we have established necessary and sufficient optimality conditions for a general minimax programming problem under generalized convexity, using the right upper-Dini-derivative as the main tool. A Mond-Weir type duality theory has also been obtained. These results can be extended to the following semi-infinite minimax programming problem (SIP) with the same tool:

$$\begin{aligned} &\text{Minimize } \max_{y \in Y} f(x,y) \\ &\text{subject to } G_j(x,t) \le 0 \ \text{ for all } t \in T_j,\ j \in \underline{q} = \{1,2,\ldots,q\}, \\ &\hphantom{\text{subject to }} H_k(x,s) = 0 \ \text{ for all } s \in S_k,\ k \in \underline{r} = \{1,2,\ldots,r\}, \\ &\hphantom{\text{subject to }} x \in X, \end{aligned} \tag{SIP}$$

where X ⊆ R^n is a nonempty open arcwise connected set, Y is a compact metrizable topological space, and f(·, y) is a real-valued function defined on X. Here T_j and S_k are compact subsets of complete metric spaces; for each j ∈ q̲, G_j(·, t) is a real-valued function defined on X for all t ∈ T_j; for each k ∈ r̲, H_k(·, s) is a real-valued function defined on X for all s ∈ S_k; and, for each j ∈ q̲ and k ∈ r̲, G_j(x, ·) and H_k(x, ·) are continuous real-valued functions defined, respectively, on T_j and S_k for all x ∈ X. We shall investigate this semi-infinite programming problem in subsequent papers.

Declarations

Acknowledgements

The research of the second and fourth authors is financially supported by King Fahd University of Petroleum and Minerals, Saudi Arabia, under the Internal Research Project No. IN131026.

Authors’ Affiliations

(1)
Department of Applied Mathematics, Indian School of Mines
(2)
Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals

References

  1. Du D, Pardalos PM, Wu WZ: Minimax and Applications. Kluwer Academic, Dordrecht; 1995.
  2. Schmittendorf WE: Necessary conditions and sufficient conditions for static minimax problems. J. Math. Anal. Appl. 1977, 57: 683-693. doi:10.1016/0022-247X(77)90255-4
  3. Ahmad I: Higher order duality in nondifferentiable minimax fractional programming involving generalized convexity. J. Inequal. Appl. 2012, 2012: Article ID 306.
  4. Ahmad I: Second order nondifferentiable minimax fractional programming with square root terms. Filomat 2013, 27: 135-142. doi:10.2298/FIL1301135A
  5. Ahmad I, Husain Z: Optimality conditions and duality in nondifferentiable minimax fractional programming with generalized convexity. J. Optim. Theory Appl. 2006, 129: 255-275. doi:10.1007/s10957-006-9057-0
  6. Gupta SK, Dangar D: On second order duality for nondifferentiable minimax fractional programming. J. Comput. Appl. Math. 2014, 255: 878-886.
  7. Jayswal A, Prasad AK, Kummari K: On nondifferentiable minimax fractional programming involving higher order generalized convexity. Filomat 2013, 27: 1497-1504. doi:10.2298/FIL1308497J
  8. Jayswal A, Stancu-Minasian I, Ahmad I, Kummari K: Generalized minimax fractional programming problems with generalized nonsmooth (F, α, ρ, d, θ)-univex functions. J. Nonlinear Anal. Optim. 2013, 4: 227-239.
  9. Mehra A, Bhatia D: Optimality and duality for minmax problems involving arcwise connected and generalized arcwise connected functions. J. Math. Anal. Appl. 1999, 231: 425-445. doi:10.1006/jmaa.1998.6231
  10. Ortega JM, Rheinboldt WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York; 1970.
  11. Avriel M, Zang I: Generalized arcwise connected functions and characterization of local-global minimum properties. J. Optim. Theory Appl. 1980, 32: 407-425. doi:10.1007/BF00934030
  12. Bhatia D, Mehra A: Optimality conditions and duality involving arcwise connected and generalized arcwise connected functions. J. Optim. Theory Appl. 1999, 100: 181-194. doi:10.1023/A:1021725200423
  13. Yuan D, Liu X: Mathematical programming involving (α, ρ)-right upper-Dini-derivative functions. Filomat 2013, 27: 899-908. doi:10.2298/FIL1305899Y
  14. Kelley JL, Namioka I: Linear Topological Spaces. Van Nostrand, Princeton; 1963.
  15. Zhang QX: Optimality conditions and duality for semi-infinite programming involving B-arcwise connected functions. J. Glob. Optim. 2009, 45: 615-629. doi:10.1007/s10898-009-9400-8
  16. Nahak C, Mohapatra RN: d-ρ-(η, θ)-invexity in multiobjective optimization. Nonlinear Anal. 2009, 70: 2288-2296. doi:10.1016/j.na.2008.03.008
  17. Ye YL: d-Invexity and optimality conditions. J. Math. Anal. Appl. 1991, 162: 242-249. doi:10.1016/0022-247X(91)90190-B

Copyright

© Jayswal et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.