
On minimax programming problems involving right upper-Dini-derivative functions

Abstract

In this paper, we derive necessary and sufficient optimality conditions for a general minimax programming problem involving several classes of generalized convex functions defined via the right upper-Dini-derivative. Moreover, using these optimality conditions, Mond-Weir type duality theory is developed for such a minimax programming problem.

MSC: 26A51, 49J35, 90C32.

1 Introduction

The minimax approach to optimization theory is certainly not new; it takes its origins in von Neumann's game theory. The broad spectrum of existing results and applications of minimax theory in the field of optimization is captured in the book edited by Du et al. [1]. Starting with the work of Schmittendorf [2], minimax programming problems have been studied by several authors; for example, see [3–9] and the references cited therein.

Convexity is a sufficient but not a necessary condition for many important results of mathematical programming, and diverse extensions of the notion of convexity share the same properties. Moreover, it is well known that a function is convex if and only if its restriction to each line segment in its domain is convex. This property inspired Ortega and Rheinboldt [10] to introduce an important generalization of convex functions by replacing the line segment joining two points by a continuous arc; such functions, defined on arcwise connected sets, are called arcwise connected functions.

Following the idea of arcwise convexity, Avriel and Zang [11] introduced Q-connected (QCN) functions and P-connected (PCN) functions and discussed necessary and sufficient local-global minimum properties of these functions. Some elementary properties of these functions in terms of their directional derivatives were studied by Bhatia and Mehra [12], who also established optimality conditions for scalar-valued nonlinear programming problems involving these functions.

To relax the definition of arcwise convexity in terms of the directional derivative, Yuan and Liu [13] recently introduced the concept of $(\alpha,\rho)$-right upper-Dini-derivative locally arcwise connected functions with respect to an arc $H$ and established optimality and duality results for a nonlinear multiobjective programming problem. In this paper, we use generalized convex functions defined in terms of the right upper-Dini-derivative to derive necessary and sufficient optimality conditions for a general minimax programming problem and duality results for its Mond-Weir type dual model.

This paper is structured as follows: Some preliminary concepts and properties regarding generalized convex functions are given in Section 2. In Section 3, we establish necessary and sufficient optimality conditions for a general minimax programming problem involving generalized convex functions. In Section 4, we establish appropriate duality theorems for a Mond-Weir type dual problem. Finally, in Section 5 we summarize our main results and also point out some further research opportunities.

2 Preliminaries

Let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean space, $\mathbb{R}^n_+$ its nonnegative orthant, and $X \subseteq \mathbb{R}^n$. For a nonempty set $Q$ in a topological vector space $E$, $\bar{Q}$ denotes the closure of $Q$ and

\[ Q^* = \{ \nu \in E^* \mid \nu(q) \ge 0, \ \forall q \in Q \} \]

denotes the dual cone of $Q$, where $E^*$ is the dual space of $E$.

For a nonempty subset $Y$, let $\mathbb{R}^Y = \prod_{Y} \mathbb{R}$ denote the product space equipped with the product topology. The topological dual of $\mathbb{R}^Y$ is the generalized finite sequence space consisting of all functions $u\colon Y \to \mathbb{R}$ with finite support [14]. The set $\mathbb{R}^Y_+ = \prod_{Y} \mathbb{R}_+$ denotes the convex cone of all nonnegative functions on $Y$. The topological dual cone of $\mathbb{R}^Y_+$ is given by

\[ \big(\mathbb{R}^Y_+\big)^* = \big\{ \lambda = (\lambda_y)_{y \in Y} \ \big|\ \exists \text{ a finite set } Y_0 \subset Y \text{ such that } \lambda_y = 0, \ \forall y \in Y \setminus Y_0, \text{ and } \lambda_y \ge 0, \ \forall y \in Y_0 \big\}. \]

Now, we recall some well-known results and concepts which will be used in the sequel.

Definition 2.1 [15]

A set $X \subseteq \mathbb{R}^n$ is said to be an arcwise connected set if, for every $x_1, x_2 \in X$, there exists a continuous vector-valued function $H_{x_1,x_2}\colon [0,1] \to X$, called an arc, such that

\[ H_{x_1,x_2}(0) = x_1, \qquad H_{x_1,x_2}(1) = x_2. \]

Definition 2.2 [13]

Let $\varphi$ be a real-valued function defined on an arcwise connected set $X \subseteq \mathbb{R}^n$. Let $x_1, x_2 \in X$, and let $H_{x_2,x_1}$ be the arc connecting $x_2$ and $x_1$ in $X$. The right upper-Dini-derivative of $\varphi$ with respect to $H_{x_2,x_1}(t)$ at $t = 0$ is defined as

\[ (d\varphi)^+\big(H_{x_2,x_1}(0^+)\big) = \limsup_{t \to 0^+} \frac{\varphi(H_{x_2,x_1}(t)) - \varphi(x_2)}{t}. \tag{1} \]
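Definition 2.2 can be illustrated numerically. The sketch below is our own illustration (the geometric grid of $t$ values and its bounds are assumptions, not part of the paper): it approximates the limsup in (1) by the supremum of difference quotients over shrinking $t$.

```python
import math

def dini_plus(phi, arc, t_min=1e-9):
    """Estimate (d phi)^+(H(0+)): the limsup, as t -> 0+, of
    [phi(H(t)) - phi(H(0))] / t.  The limsup is approximated by the
    supremum of difference quotients over a geometric grid of small t
    (an illustrative assumption, not an exact computation)."""
    base = phi(arc(0.0))
    sup = -math.inf
    t = 1e-3
    while t > t_min:
        sup = max(sup, (phi(arc(t)) - base) / t)
        t *= 0.99
    return sup

# phi(u) = |u| along the linear arc H_{0,x}(t) = t*x with x = 0.5:
# every difference quotient equals |t*0.5|/t = 0.5.
est = dini_plus(lambda u: abs(u), lambda t: t * 0.5)
print(round(est, 6))  # 0.5
```

For a smooth function along a linear arc this estimate reduces to the ordinary directional derivative; the limsup only matters for oscillating functions such as those in the examples below.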

Using this upper-Dini-derivative concept, Yuan and Liu [13] introduced a class of functions which they called $(\alpha,\rho)$-right upper-Dini-derivative functions. For convenience, we use the following notions.

Definition 2.3 [13]

A set $X \subseteq \mathbb{R}^n$ is said to be locally arcwise connected at $\bar{x}$ if, for any $x \in X$ with $x \ne \bar{x}$, there exist a positive number $a(x,\bar{x})$, with $0 < a(x,\bar{x}) \le 1$, and a continuous arc $H_{\bar{x},x}$ such that $H_{\bar{x},x}(t) \in X$ for any $t \in (0, a(x,\bar{x}))$.

The set $X$ is locally arcwise connected on $X$ if it is locally arcwise connected at any $x \in X$.

Definition 2.4 [13]

Let $X \subseteq \mathbb{R}^n$ be a locally arcwise connected set and $\varphi\colon X \to \mathbb{R}$ a real-valued function on $X$. The function $\varphi$ is said to be $(\alpha,\rho)$-right upper-Dini-derivative locally arcwise connected with respect to $H$ at $\bar{x}$ if there exist real-valued functions $\alpha\colon X \times X \to \mathbb{R}$ and $\rho\colon X \times X \to \mathbb{R}$ such that

\[ \varphi(x) - \varphi(\bar{x}) \ge \alpha(x,\bar{x})\,(d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) + \rho(x,\bar{x}), \quad \forall x \in X. \]

If $\varphi$ is $(\alpha,\rho)$-right upper-Dini-derivative locally arcwise connected with respect to $H$ at $\bar{x}$ for any $\bar{x} \in X$, then $\varphi$ is called $(\alpha,\rho)$-right upper-Dini-derivative locally arcwise connected with respect to $H$ on $X$.

Remark 2.1 An example given in [13] reveals that there exists a function which is $(\alpha,\rho)$-right upper-Dini-derivative locally arcwise connected but neither $d$-$\rho$-$(\eta,\theta)$-invex [16], nor $d$-invex [17], nor directionally differentiable B-arcwise connected [15].

Now we define the notions of ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected, strictly ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected and ρ-generalized-quasi-right upper-Dini-derivative locally arcwise connected functions.

Definition 2.5 The function $\varphi\colon X \to \mathbb{R}$ is said to be $\rho$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$ if there exists a real-valued function $\rho\colon X \times X \to \mathbb{R}$ such that

\[ (d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) \ge \rho(x,\bar{x}) \ \Longrightarrow\ \varphi(x) \ge \varphi(\bar{x}), \quad \forall x \in X, \]

or, equivalently,

\[ \varphi(x) < \varphi(\bar{x}) \ \Longrightarrow\ (d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) < \rho(x,\bar{x}), \quad \forall x \in X. \]

The function $\varphi\colon X \to \mathbb{R}$ is said to be $\rho$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) on $X$ if it is $\rho$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at any $\bar{x} \in X$.

The following example shows that there exists a function which is ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected but not (α,ρ)-right upper-Dini-derivative locally arcwise connected with respect to the arc H.

Example 2.1 Let $X = (-1,1)$ and let the function $\varphi\colon X \to \mathbb{R}$ be defined by

\[ \varphi(x) = \begin{cases} |x|\sin^2\frac{1}{x}, & \text{if } x \in (-1,0)\cup(0,1), \\ 0, & \text{if } x = 0. \end{cases} \]

For any $x, y \in X$, define the arc $H_{y,x}\colon [0,1] \to X$ by

\[ H_{y,x}(t) = tx + (1-t)y, \quad t \in [0,1]. \]

By the definition (1) of the right upper-Dini-derivative, for $x \in (-1,0)\cup(0,1)$ we have

\[ (d\varphi)^+\big(H_{0,x}(0^+)\big) = |x|. \]

Let $\rho\colon X \times X \to \mathbb{R}$ be defined by

\[ \rho(x,y) = \begin{cases} |x|, & \text{if } x \in (-1,0)\cup(0,1), \\ 0, & \text{if } x = 0. \end{cases} \]

Now, for $\bar{x} = 0$, it follows that

\[ (d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) \ge \rho(x,\bar{x}) \ \Longrightarrow\ \varphi(x) \ge \varphi(\bar{x}), \quad \forall x \in X. \]

This means that $\varphi$ is $\rho$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x} = 0$. But $\varphi$ is not $(\alpha,\rho)$-right upper-Dini-derivative locally arcwise connected with respect to the same arc $H$ and $\rho$ at $\bar{x} = 0$, because for $x \in (-1,0)\cup(0,1)$ and $\alpha(x,\bar{x}) = 1$ we have

\[ \varphi(x) - \varphi(\bar{x}) - \alpha(x,\bar{x})\,(d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) - \rho(x,\bar{x}) = |x|\Big(\sin^2\frac{1}{x} - 2\Big) < 0. \]
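The computations in Example 2.1 can be checked numerically. The sketch below (grid sizes and tolerances are our own choices, not part of the paper) approximates $(d\varphi)^+(H_{0,x}(0^+))$ along the linear arc and confirms both the pseudo-connectedness inequality at $\bar{x}=0$ and the failure of the $(\alpha,\rho)$-inequality with $\alpha = 1$.

```python
import math

def phi(x):
    # phi(x) = |x| * sin^2(1/x) for x != 0, and phi(0) = 0
    return abs(x) * math.sin(1.0 / x) ** 2 if x != 0 else 0.0

def dini_plus_at0(x, n=20000):
    # limsup_{t->0+} phi(t*x)/t along H_{0,x}(t) = t*x, approximated
    # by the max difference quotient over a geometric grid (a sketch).
    sup = -math.inf
    t = 1e-2
    for _ in range(n):
        sup = max(sup, phi(t * x) / t)
        t *= 0.999
    return sup

x = 0.4
d = dini_plus_at0(x)      # close to |x| = 0.4
rho = abs(x)
# pseudo-connectedness at 0: (d phi)^+ >= rho and phi(x) >= phi(0)
print(d >= rho - 1e-2 and phi(x) >= phi(0))
# failure of the (alpha,rho)-inequality with alpha = 1:
# phi(x) - phi(0) - d - rho = |x|(sin^2(1/x) - 2) < 0
print(phi(x) - phi(0) - d - rho < 0)
```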

Definition 2.6 The function $\varphi\colon X \to \mathbb{R}$ is said to be strictly $\rho$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$ if there exists a real-valued function $\rho\colon X \times X \to \mathbb{R}$ such that

\[ (d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) \ge \rho(x,\bar{x}) \ \Longrightarrow\ \varphi(x) > \varphi(\bar{x}), \quad \forall x \in X,\ x \ne \bar{x}, \]

or, equivalently,

\[ \varphi(x) \le \varphi(\bar{x}) \ \Longrightarrow\ (d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) < \rho(x,\bar{x}), \quad \forall x \in X,\ x \ne \bar{x}. \]

The function $\varphi\colon X \to \mathbb{R}$ is said to be strictly $\rho$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) on $X$ if it is strictly $\rho$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at any $\bar{x} \in X$.

Definition 2.7 The function $\varphi\colon X \to \mathbb{R}$ is said to be $\rho$-generalized-quasi-right upper-Dini-derivative locally arcwise connected with respect to $H$ at $\bar{x}$ if there exists a real-valued function $\rho\colon X \times X \to \mathbb{R}$ such that

\[ \varphi(x) \le \varphi(\bar{x}) \ \Longrightarrow\ (d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) \le \rho(x,\bar{x}), \quad \forall x \in X, \]

or, equivalently,

\[ (d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) > \rho(x,\bar{x}) \ \Longrightarrow\ \varphi(x) > \varphi(\bar{x}), \quad \forall x \in X. \]

The function $\varphi\colon X \to \mathbb{R}$ is said to be $\rho$-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to $H$) on $X$ if it is $\rho$-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at any $\bar{x} \in X$.

The next example shows that there exists a function which is ρ-generalized-quasi-right upper-Dini-derivative locally arcwise connected but neither (α,ρ)-right upper-Dini-derivative locally arcwise connected nor ρ-generalized-pseudo-right upper-Dini-derivative locally arcwise connected with respect to the arc H.

Example 2.2 Let $X = (-\frac{\pi}{2}, \frac{\pi}{2})$ and let the function $\varphi\colon X \to \mathbb{R}$ be defined by

\[ \varphi(x) = \begin{cases} \pi|x|\sin\frac{2\pi}{x} - 2\pi|x|, & \text{if } x \in (-\frac{\pi}{2},0)\cup(0,\frac{\pi}{2}), \\ 0, & \text{if } x = 0. \end{cases} \]

For any $x, y \in X$, define the arc $H_{y,x}\colon [0,1] \to X$ by

\[ H_{y,x}(t) = tx + (1-t)y, \quad t \in [0,1]. \]

Clearly, for $x \in (-\frac{\pi}{2},0)\cup(0,\frac{\pi}{2})$ we have

\[ (d\varphi)^+\big(H_{0,x}(0^+)\big) = -\pi|x|. \]

Let $\rho\colon X \times X \to \mathbb{R}$ be defined by

\[ \rho(x,y) = \begin{cases} -\pi|x|, & \text{if } x \in (-\frac{\pi}{2},0)\cup(0,\frac{\pi}{2}), \\ 0, & \text{if } x = 0. \end{cases} \]

Now we can easily verify that $\varphi$ is $\rho$-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x} = 0$; indeed, $\varphi(x) \le \varphi(0) = 0$ for all $x \in X$, and $(d\varphi)^+(H_{0,x}(0^+)) = -\pi|x| \le \rho(x,0)$. However, taking $\alpha(x,\bar{x}) = 1$ and $\bar{x} = 0$, for $x \in (-\frac{\pi}{2},0)\cup(0,\frac{\pi}{2})$ we can deduce that

\[ \varphi(x) - \varphi(\bar{x}) - \alpha(x,\bar{x})\,(d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) - \rho(x,\bar{x}) = \pi|x|\sin\frac{2\pi}{x}, \]

which is negative whenever $\sin\frac{2\pi}{x} < 0$, and

\[ \varphi(x) - \varphi(\bar{x}) < 0 \quad \text{while} \quad (d\varphi)^+\big(H_{\bar{x},x}(0^+)\big) - \rho(x,\bar{x}) = 0. \]

Hence $\varphi$ is neither $(\alpha,\rho)$-right upper-Dini-derivative locally arcwise connected nor $\rho$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected with respect to the same arc $H$ and $\rho$ at $\bar{x} = 0$.

Definition 2.8 [13]

A function $f\colon X \to \mathbb{R}$ is called preinvex (with respect to $\eta\colon X \times X \to \mathbb{R}^n$) on $X$ if there exists a vector-valued function $\eta$ such that

\[ f\big(u + t\eta(x,u)\big) \le t f(x) + (1-t) f(u) \]

holds for all $x, u \in X$ and any $t \in [0,1]$.

Definition 2.9 [13]

A function $f\colon X \to \mathbb{R}$ is said to be convexlike if for any $x, y \in X$ and $0 \le \theta \le 1$ there is $z \in X$ such that

\[ f(z) \le \theta f(x) + (1-\theta) f(y). \]

Remark 2.2 Convex functions and preinvex functions are convexlike.
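Convexlikeness is much weaker than convexity: any function that attains its minimum on $X$ is convexlike, since a minimizer $z$ satisfies $f(z) \le \theta f(x) + (1-\theta)f(y)$ for every pair $(x,y)$ and every $\theta$. The sketch below (function, grid, and tolerance are our own choices, not from the paper) checks this for the nonconvex function $\sin$ on $[0, 2\pi]$.

```python
import math
import random

# Approximate a minimizer of f = sin on [0, 2*pi] by grid search; the
# tolerance 1e-6 absorbs the gap between the grid minimum and the true
# minimum -1 attained at 3*pi/2.
f = math.sin
grid = [2 * math.pi * k / 10000 for k in range(10001)]
z_star = min(grid, key=f)

random.seed(0)
pairs = ((random.uniform(0, 2 * math.pi),
          random.uniform(0, 2 * math.pi),
          random.random()) for _ in range(1000))
# Convexlike condition with the fixed witness z_star:
ok = all(f(z_star) <= th * f(x) + (1 - th) * f(y) + 1e-6
         for x, y, th in pairs)
print(ok)  # True
```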

In the next section we will use the following version of Theorem 2.3 from [9].

Lemma 2.1 Let $G\colon X \times Y \to \mathbb{R}$ and $\psi\colon X \to \mathbb{R}$, where $X$ and $Y$ are arbitrary nonempty sets. Let the pair $(G,\psi)$ be convexlike on $X$. Assume that for some neighborhood $U$ of $0$ in $\mathbb{R}^Y$ and a constant $\nu > 0$, the set $\Omega_0 \cap \big(\bar{U} \times (-\infty,\nu]\big)$ is a nonempty closed subset of $\mathbb{R}^Y \times \mathbb{R}$, where

\[ \Omega_0 = \big\{ (u,r) \ \big|\ u\colon Y \to \mathbb{R} \text{ and } \exists x \in X \text{ such that } \psi(x) \le r,\ G(x,y) \le u(y),\ \forall y \in Y \big\}. \]

Then exactly one of the following systems is solvable:

(I) there exists $x \in X$ such that $G(x,y) \le 0$, $\forall y \in Y$, and $\psi(x) < 0$;

(II) there exist an integer $s > 0$, scalars $\lambda_i \ge 0$, $1 \le i \le s$, $\mu \ge 0$, and vectors $y_i \in Y$, $1 \le i \le s$, such that $(\lambda_1, \ldots, \lambda_s, \mu) \ne 0$ and $\sum_{i=1}^{s} \lambda_i G(x, y_i) + \mu \psi(x) \ge 0$, $\forall x \in X$.

3 Optimality conditions

Consider the following general minimax programming problem:

\[ \text{Minimize } \max_{y \in Y} f(x,y) \quad \text{subject to } g(x) \le 0, \ x \in X, \tag{P} \]

where $f\colon X \times Y \to \mathbb{R}$, $g = (g_1, g_2, \ldots, g_m)\colon X \to \mathbb{R}^m$, $X$ is an open arcwise connected subset of $\mathbb{R}^n$, $Y$ is a compact subset of $\mathbb{R}^m$, and $f(x,\cdot)$ is continuous on $Y$ for every $x \in X$. Let $X_0 = \{x \in X \mid g_j(x) \le 0,\ 1 \le j \le m\}$ denote the set of feasible solutions of (P).
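A tiny numerical instance of (P) may help fix ideas. The sketch below is entirely our own illustration (the instance, the discretization of $Y$, and the grids are assumptions, not data from the paper): it minimizes the worst case of $f$ over a discretized $Y$ by grid search.

```python
# Illustrative instance of (P): f(x, y) = x**2 + x*y, Y = [-1, 1],
# g(x) = 1 - x <= 0, so the feasible set is x >= 1.  For x >= 1 the
# inner maximum is attained at y = 1, giving x**2 + x, which is
# minimized at x = 1 with optimal value 2.
def f(x, y):
    return x**2 + x * y

def g(x):
    return 1.0 - x

Y = [-1.0 + k / 500 for k in range(1001)]   # discretized compact Y
X = [k / 100 for k in range(301)]           # sample points x in [0, 3]

feasible = [x for x in X if g(x) <= 0]
best_x = min(feasible, key=lambda x: max(f(x, y) for y in Y))
best_val = max(f(best_x, y) for y in Y)
print(best_x, best_val)  # 1.0 2.0
```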

For $x \in X$, we define

\[ I(x) = \{ j \mid g_j(x) = 0 \}, \qquad J(x) = \{1,2,\ldots,m\} \setminus I(x), \qquad Y(x) = \Big\{ y \in Y \ \Big|\ f(x,y) = \sup_{z \in Y} f(x,z) \Big\}. \]

In view of the continuity of $f(x,\cdot)$ on $Y$ and the compactness of $Y$, it is clear that $Y(x)$ is a nonempty compact subset of $Y$ for every $x \in X$. Throughout this paper we assume that the right upper-Dini-derivatives of the functions $f(\cdot,y)$ and $g_j(\cdot)$, $j \in \{1,2,\ldots,m\}$, with respect to an arc $H_{\bar{x},x}$ at $t = 0$ exist for all $\bar{x}, x \in X$ and $y \in Y$, and that $(df)^+(H_{\bar{x},x}(0^+),\cdot)$ is continuous on $Y$ for all $\bar{x}, x \in X$. We also assume that each $g_j(\cdot)$, $1 \le j \le m$, is continuous on $X$.

The following lemma can be proved without difficulty along the same lines as Lemma 3.1 of Mehra and Bhatia [9].

Lemma 3.1 Let $\bar{x}$ be an optimal solution of (P). Then the system

\[ \begin{cases} (df)^+\big(H_{\bar{x},x}(0^+), y\big) < 0, & \forall y \in Y(\bar{x}), \\ (dg_j)^+\big(H_{\bar{x},x}(0^+)\big) < 0, & \forall j \in I(\bar{x}) \end{cases} \tag{2} \]

has no solution $x \in X$.

We now prove the following theorem by using Lemmas 2.1 and 3.1, which gives the necessary optimality conditions for an optimal solution of problem (P).

Theorem 3.1 (Necessary optimality conditions)

Let $\bar{x}$ be an optimal solution of (P). Further, let $(df)^+(H_{\bar{x},x}(0^+), y)$, $y \in Y(\bar{x})$, and $(dg_j)^+(H_{\bar{x},x}(0^+))$, $j \in I(\bar{x})$, be convexlike functions of $x$ on $X$, and let there exist a neighborhood $U$ of $0$ in $\mathbb{R}^{Y(\bar{x})}$ and a constant $\nu = (\nu_j)_{j \in I(\bar{x})}$ such that $\Omega(\bar{x}) \cap \big(\bar{U} \times \prod_{j \in I(\bar{x})} (-\infty,\nu_j]\big)$ is a nonempty closed set, where

\[ \Omega(\bar{x}) = \big\{ (u,r) \ \big|\ r = (r_j)_{j \in I(\bar{x})},\ u\colon Y \to \mathbb{R} \text{ and } \exists x \in X \text{ such that } f(x,y) \le u(y),\ \forall y \in Y(\bar{x}),\ g_j(x) \le r_j,\ \forall j \in I(\bar{x}) \big\}. \]

Then there exist an integer $s > 0$, scalars $\lambda_i \ge 0$, $1 \le i \le s$, $\mu_j \ge 0$, $1 \le j \le m$, and vectors $y_i \in Y(\bar{x})$, $1 \le i \le s$, such that

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},x}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},x}(0^+)\big) \ge 0, \quad \forall x \in X, \]
\[ \mu_j g_j(\bar{x}) = 0, \quad 1 \le j \le m, \qquad \sum_{i=1}^{s} \lambda_i + \sum_{j=1}^{m} \mu_j \ne 0. \]

Proof If $\bar{x}$ is an optimal solution of (P) then, by Lemma 3.1, the system (2) has no solution $x \in X$. Since the assumptions of Lemma 2.1 also hold, there exist an integer $s > 0$, scalars $\lambda_i \ge 0$, $1 \le i \le s$, $\mu_j \ge 0$, $j \in I(\bar{x})$, and vectors $y_i \in Y(\bar{x})$, $1 \le i \le s$, such that

\[ \big(\lambda_1, \lambda_2, \ldots, \lambda_s, (\mu_j)_{j \in I(\bar{x})}\big) \ne 0 \tag{3} \]

and

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},x}(0^+), y_i\big) + \sum_{j \in I(\bar{x})} \mu_j (dg_j)^+\big(H_{\bar{x},x}(0^+)\big) \ge 0, \quad \forall x \in X. \tag{4} \]

Setting $\mu_j = 0$ for $j \in J(\bar{x})$, from (3) and (4) we obtain the required result. □

We now prove the following sufficient optimality conditions for the minimax problem (P) under generalized convexity defined via the upper-Dini-derivative.

Theorem 3.2 (Sufficient optimality conditions)

Let $\bar{x} \in X_0$, and let there exist an integer $s > 0$, scalars $\lambda_i \ge 0$, $1 \le i \le s$, with $\sum_{i=1}^{s} \lambda_i \ne 0$, $\mu_j \ge 0$, $1 \le j \le m$, and vectors $y_i \in Y(\bar{x})$, $1 \le i \le s$, such that $\alpha(x,\bar{x}) > 0$ for all $x \in X$ and

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},x}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},x}(0^+)\big) \ge 0, \quad \forall x \in X, \tag{5} \]
\[ \mu_j g_j(\bar{x}) = 0, \quad 1 \le j \le m. \tag{6} \]

Also, assume that

(i) for $1 \le i \le s$, $f(\cdot, y_i)$ is $(\alpha, \bar{\rho}_i)$-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$;

(ii) for $1 \le j \le m$, $g_j(\cdot)$ is $(\alpha, \breve{\rho}_j)$-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$;

(iii) $\sum_{i=1}^{s} \lambda_i \bar{\rho}_i(x,\bar{x}) + \sum_{j=1}^{m} \mu_j \breve{\rho}_j(x,\bar{x}) \ge 0$, $\forall x \in X$.

Then $\bar{x}$ is an optimal solution of (P).

Proof Suppose to the contrary that $\bar{x}$ is not an optimal solution of (P). Then there exists $\tilde{x} \in X_0$ such that

\[ \sup_{z \in Y} f(\tilde{x}, z) < \sup_{z \in Y} f(\bar{x}, z). \]

Further, since $y_i \in Y(\bar{x})$, we have

\[ \sup_{z \in Y} f(\bar{x}, z) = f(\bar{x}, y_i), \quad 1 \le i \le s. \]

Also, since $y_i \in Y$, $1 \le i \le s$, we have

\[ f(\tilde{x}, y_i) \le \sup_{z \in Y} f(\tilde{x}, z), \quad 1 \le i \le s. \]

Thus, from the above three inequalities, we get

\[ f(\tilde{x}, y_i) < f(\bar{x}, y_i), \quad 1 \le i \le s. \]

Using $\lambda_i \ge 0$, $1 \le i \le s$, and $\sum_{i=1}^{s} \lambda_i \ne 0$, we obtain

\[ \sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) < \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i). \tag{7} \]

For $\tilde{x} \in X_0$ and $\mu_j \ge 0$, $1 \le j \le m$, we have $\mu_j g_j(\tilde{x}) \le 0$, which in view of (6) implies that

\[ \sum_{j=1}^{m} \mu_j g_j(\tilde{x}) \le \sum_{j=1}^{m} \mu_j g_j(\bar{x}). \tag{8} \]

Now, by (7) and (8) we obtain

\[ \sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\tilde{x}) < \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\bar{x}). \tag{9} \]

On the other hand, from the assumptions that $f(\cdot, y_i)$, $1 \le i \le s$, and $g_j(\cdot)$, $1 \le j \le m$, are $(\alpha, \bar{\rho}_i)$- and $(\alpha, \breve{\rho}_j)$-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$, respectively, we have

\[ f(\tilde{x}, y_i) - f(\bar{x}, y_i) \ge \alpha(\tilde{x},\bar{x})\,(df)^+\big(H_{\bar{x},\tilde{x}}(0^+), y_i\big) + \bar{\rho}_i(\tilde{x},\bar{x}), \quad 1 \le i \le s, \tag{10} \]
\[ g_j(\tilde{x}) - g_j(\bar{x}) \ge \alpha(\tilde{x},\bar{x})\,(dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) + \breve{\rho}_j(\tilde{x},\bar{x}), \quad 1 \le j \le m. \tag{11} \]

From (10) and (11), together with $\lambda_i \ge 0$, $1 \le i \le s$, and $\mu_j \ge 0$, $1 \le j \le m$, we get

\[ \Big\{ \sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\tilde{x}) \Big\} - \Big\{ \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\bar{x}) \Big\} \ge \alpha(\tilde{x},\bar{x}) \Big\{ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},\tilde{x}}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) \Big\} + \sum_{i=1}^{s} \lambda_i \bar{\rho}_i(\tilde{x},\bar{x}) + \sum_{j=1}^{m} \mu_j \breve{\rho}_j(\tilde{x},\bar{x}). \]

By (5), and using $\alpha(\tilde{x},\bar{x}) > 0$ and $\sum_{i=1}^{s} \lambda_i \bar{\rho}_i(\tilde{x},\bar{x}) + \sum_{j=1}^{m} \mu_j \breve{\rho}_j(\tilde{x},\bar{x}) \ge 0$, it follows that

\[ \Big\{ \sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\tilde{x}) \Big\} - \Big\{ \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i) + \sum_{j=1}^{m} \mu_j g_j(\bar{x}) \Big\} \ge 0, \]

which contradicts (9). Hence $\bar{x}$ is an optimal solution of (P), and the theorem is proved. □
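Conditions (5) and (6) of Theorem 3.2 can be verified by hand on a small instance. The sketch below uses an instance and multipliers chosen by us purely for illustration: $f(x,y) = x^2 + xy$, $g(x) = 1 - x$, linear arcs $H_{\bar{x},x}(t) = \bar{x} + t(x - \bar{x})$ (along which the right upper-Dini-derivative of a differentiable function reduces to the ordinary directional derivative), $\bar{x} = 1$, $Y(\bar{x}) = \{1\}$, $s = 1$, $\lambda_1 = 1$, $\mu_1 = 3$.

```python
# Check (5) and (6) at xb = 1 for the illustrative instance
# f(x, y) = x**2 + x*y, g(x) = 1 - x with the linear arc: the
# combination 1*(2*xb + y1)*(x - xb) + 3*(-(x - xb)) vanishes
# identically, so (5) holds with equality, and g(xb) = 0 gives (6).
xb, y1, lam, mu = 1.0, 1.0, 1.0, 3.0

def df_plus(x):
    # directional derivative of f(., y1) at xb along x - xb
    return (2 * xb + y1) * (x - xb)

def dg_plus(x):
    # directional derivative of g at xb along x - xb
    return -(x - xb)

grid = [k / 50 for k in range(151)]   # sample points x in [0, 3]
cond5 = all(lam * df_plus(x) + mu * dg_plus(x) >= -1e-12 for x in grid)
cond6 = (mu * (1.0 - xb) == 0.0)
print(cond5, cond6)  # True True
```

Since $f(\cdot,y)$ and $g$ are convex here, they are $(\alpha,\rho)$-connected along linear arcs with $\alpha = 1$ and $\rho = 0$, so hypotheses (i)-(iii) hold as well and Theorem 3.2 confirms that $\bar{x} = 1$ is optimal.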

Theorem 3.3 (Sufficient optimality conditions)

Let $\bar{x} \in X_0$, and let there exist an integer $s > 0$, scalars $\lambda_i \ge 0$, $1 \le i \le s$, with $\sum_{i=1}^{s} \lambda_i \ne 0$, $\mu_j \ge 0$, $1 \le j \le m$, and vectors $y_i \in Y(\bar{x})$, $1 \le i \le s$, such that conditions (5) and (6) of Theorem 3.2 hold. Also, assume that

(i) $\sum_{i=1}^{s} \lambda_i f(\cdot, y_i)$ is $\bar{\rho}$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$;

(ii) $\sum_{j=1}^{m} \mu_j g_j(\cdot)$ is $\breve{\rho}$-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$;

(iii) $\bar{\rho}(x,\bar{x}) + \breve{\rho}(x,\bar{x}) \le 0$, $\forall x \in X$.

Then $\bar{x}$ is an optimal solution of (P).

Proof Suppose to the contrary that $\bar{x}$ is not an optimal solution of (P). Following the proof of Theorem 3.2, we have

\[ \sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) < \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i), \]

which, by the $\bar{\rho}$-generalized-pseudo-right upper-Dini-derivative local arcwise connectedness (with respect to $H$) of $\sum_{i=1}^{s} \lambda_i f(\cdot, y_i)$ at $\bar{x}$, gives

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},\tilde{x}}(0^+), y_i\big) < \bar{\rho}(\tilde{x},\bar{x}). \tag{12} \]

For $\tilde{x} \in X_0$ and $\mu_j \ge 0$, $1 \le j \le m$, we have $\mu_j g_j(\tilde{x}) \le 0$, which in view of (6) implies that

\[ \sum_{j=1}^{m} \mu_j g_j(\tilde{x}) \le \sum_{j=1}^{m} \mu_j g_j(\bar{x}), \]

which, by the $\breve{\rho}$-generalized-quasi-right upper-Dini-derivative local arcwise connectedness (with respect to $H$) of $\sum_{j=1}^{m} \mu_j g_j(\cdot)$ at $\bar{x}$, gives

\[ \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) \le \breve{\rho}(\tilde{x},\bar{x}). \tag{13} \]

By (12) and (13), we get

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},\tilde{x}}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) < \bar{\rho}(\tilde{x},\bar{x}) + \breve{\rho}(\tilde{x},\bar{x}) \le 0, \]

where the last inequality follows from $\bar{\rho}(\tilde{x},\bar{x}) + \breve{\rho}(\tilde{x},\bar{x}) \le 0$. Therefore,

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},\tilde{x}}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) < 0, \]

which contradicts (5). Hence $\bar{x}$ is an optimal solution of (P), and the theorem is proved. □

Theorem 3.4 (Sufficient optimality conditions)

Let $\bar{x} \in X_0$, and let there exist an integer $s > 0$, scalars $\lambda_i \ge 0$, $1 \le i \le s$, with $\sum_{i=1}^{s} \lambda_i \ne 0$, $\mu_j \ge 0$, $1 \le j \le m$, and vectors $y_i \in Y(\bar{x})$, $1 \le i \le s$, such that conditions (5) and (6) of Theorem 3.2 hold. Also, assume that

(i) $\sum_{i=1}^{s} \lambda_i f(\cdot, y_i)$ is strictly $\bar{\rho}$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$;

(ii) $\sum_{j=1}^{m} \mu_j g_j(\cdot)$ is $\breve{\rho}$-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $\bar{x}$;

(iii) $\bar{\rho}(x,\bar{x}) + \breve{\rho}(x,\bar{x}) \le 0$, $\forall x \in X$.

Then $\bar{x}$ is an optimal solution of (P).

Proof The proof follows along the same lines as the proof of Theorem 3.3 and hence is omitted. □

Theorem 3.5 (Sufficient optimality conditions)

Let $\bar{x} \in X_0$, and let there exist an integer $s > 0$, scalars $\lambda_i \ge 0$, $1 \le i \le s$, with $\sum_{i=1}^{s} \lambda_i \ne 0$, $\mu_j \ge 0$, $1 \le j \le m$, and vectors $y_i \in Y(\bar{x})$, $1 \le i \le s$, such that conditions (5) and (6) of Theorem 3.2 hold. Also, assume that

(i) $\sum_{i=1}^{s} \lambda_i f(\cdot, y_i)$ is $\bar{\rho}$-generalized-quasi-right upper-Dini-derivative locally arcwise connected with respect to $H$ at $\bar{x}$;

(ii) for $j \in I(\bar{x})$, $j \ne l$, $g_j(\cdot)$ is $\breve{\rho}_j$-generalized-quasi-right upper-Dini-derivative locally arcwise connected, and $g_l(\cdot)$ is strictly $\breve{\rho}_l$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected, with respect to $H$ at $\bar{x}$, with $\mu_l > 0$;

(iii) $\bar{\rho}(x,\bar{x}) + \sum_{j=1}^{m} \mu_j \breve{\rho}_j(x,\bar{x}) \le 0$, $\forall x \in X$.

Then $\bar{x}$ is an optimal solution of (P).

Proof Suppose to the contrary that $\bar{x}$ is not an optimal solution of (P). Following the proof of Theorem 3.2, we have

\[ \sum_{i=1}^{s} \lambda_i f(\tilde{x}, y_i) < \sum_{i=1}^{s} \lambda_i f(\bar{x}, y_i), \]

which, by the $\bar{\rho}$-generalized-quasi-right upper-Dini-derivative local arcwise connectedness of $\sum_{i=1}^{s} \lambda_i f(\cdot, y_i)$ with respect to $H$ at $\bar{x}$, gives

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},\tilde{x}}(0^+), y_i\big) \le \bar{\rho}(\tilde{x},\bar{x}). \tag{14} \]

Since $\tilde{x} \in X_0$, for $j \in I(\bar{x})$ we have

\[ g_j(\tilde{x}) \le 0 = g_j(\bar{x}), \]

which, by the $\breve{\rho}_j$-generalized-quasi-right upper-Dini-derivative local arcwise connectedness of $g_j(\cdot)$, $j \in I(\bar{x})$, $j \ne l$, and the strict $\breve{\rho}_l$-generalized-pseudo-right upper-Dini-derivative local arcwise connectedness of $g_l(\cdot)$ with respect to $H$ at $\bar{x}$, gives

\[ (dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) \le \breve{\rho}_j(\tilde{x},\bar{x}), \quad \text{for } j \in I(\bar{x}),\ j \ne l, \tag{15} \]
\[ (dg_l)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) < \breve{\rho}_l(\tilde{x},\bar{x}). \tag{16} \]

Since $\mu_j \ge 0$ for $j \in I(\bar{x})$, $\mu_l > 0$, and $\mu_j = 0$ for $j \in J(\bar{x})$ (by (6), because $g_j(\bar{x}) < 0$ for $j \in J(\bar{x})$), from (15) and (16) we get

\[ \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) < \sum_{j=1}^{m} \mu_j \breve{\rho}_j(\tilde{x},\bar{x}). \tag{17} \]

By (14) and (17), we get

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},\tilde{x}}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) < \bar{\rho}(\tilde{x},\bar{x}) + \sum_{j=1}^{m} \mu_j \breve{\rho}_j(\tilde{x},\bar{x}) \le 0, \]

where the last inequality follows from hypothesis (iii). Therefore,

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{\bar{x},\tilde{x}}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{\bar{x},\tilde{x}}(0^+)\big) < 0, \]

which contradicts (5). Hence $\bar{x}$ is an optimal solution of (P), and the theorem is proved. □

4 Duality

This section deals with the duality theorems for the following Mond-Weir type dual (D) of minimax problem (P):

max ( s , λ , y ) K sup ( z , μ ) H ( s , λ , y ) i = 1 s λ i f ( z , y i ) ,
(D)

where K = {(s,λ,y)|s is an integer, λ R + s , i = 1 s λ i =1, y=( y 1 , y 2 ,, y s ), y i Y(x) for some xX, 1is}, and H(s,λ,y) denotes the set of all (z,μ)X× R + m satisfying

i = 1 s λ i ( d f i ) + ( H z , x ( 0 + ) , y i ) + j = 1 m μ j ( d g j ) + ( H z , x ( 0 + ) ) 0,for all xX,
(18)
j = 1 m μ j g j (z)0.
(19)

If for a triplet (s,λ,y) in K the set H(s,λ,y) is empty then we define the supremum over it to be −∞.
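The dual constraints (18)-(19) and the weak duality inequality can be checked numerically on a toy instance. All data below are our own illustrative assumptions (the same instance as before: $f(x,y) = x^2 + xy$, $Y = [-1,1]$, $g(x) = 1-x$, linear arcs): with $(s,\lambda,y) = (1, 1, 1)$ and $(z,\mu) = (1, 3)$, the dual objective $f(z, y_1) = 2$ never exceeds the primal objective $\sup_y f(x,y)$ at any feasible $x$.

```python
# Feasibility of (z, mu) = (1, 3) for (D), and a weak-duality check,
# on the illustrative instance f(x, y) = x**2 + x*y, g(x) = 1 - x,
# Y = [-1, 1], with linear arcs (so Dini derivatives are ordinary
# directional derivatives).
def f(x, y):
    return x**2 + x * y

z, mu, y1 = 1.0, 3.0, 1.0
# (18): lambda_1 * (2z + y1)(x - z) + mu * (-(x - z)) >= 0 for all x
cond18 = all((2 * z + y1) * (x - z) - mu * (x - z) >= -1e-12
             for x in [k / 50 for k in range(151)])
# (19): mu * g(z) >= 0
cond19 = mu * (1.0 - z) >= 0

dual_val = f(z, y1)                               # = 2
Y = [-1.0 + k / 500 for k in range(1001)]         # discretized Y
weak = all(max(f(x, y) for y in Y) >= dual_val    # sup_y f(x,y) >= 2
           for x in [1.0 + k / 100 for k in range(201)])
print(cond18, cond19, weak)  # True True True
```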

Theorem 4.1 (Weak duality)

Let $x$ and $(z,\mu,s,\lambda,y)$ be feasible solutions of (P) and (D), respectively. Assume that

(i) $\sum_{i=1}^{s} \lambda_i f(\cdot, y_i)$ is $\bar{\rho}$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $z$;

(ii) $\sum_{j=1}^{m} \mu_j g_j(\cdot)$ is $\breve{\rho}$-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $z$;

(iii) $\bar{\rho}(x,z) + \breve{\rho}(x,z) \le 0$.

Then

\[ \sup_{y \in Y} f(x,y) \ge \sum_{i=1}^{s} \lambda_i f(z, y_i). \]

Proof Suppose to the contrary that

\[ \sup_{y \in Y} f(x,y) < \sum_{i=1}^{s} \lambda_i f(z, y_i). \]

Then, since $f(x, y_i) \le \sup_{y \in Y} f(x,y)$ for each $i$, we have

\[ f(x, y_i) < \sum_{i=1}^{s} \lambda_i f(z, y_i), \quad 1 \le i \le s. \]

It follows from $\lambda_i \ge 0$, $1 \le i \le s$, and $\sum_{i=1}^{s} \lambda_i = 1$ that

\[ \sum_{i=1}^{s} \lambda_i f(x, y_i) < \sum_{i=1}^{s} \lambda_i f(z, y_i), \]

which, by the $\bar{\rho}$-generalized-pseudo-right upper-Dini-derivative local arcwise connectedness (with respect to $H$) of $\sum_{i=1}^{s} \lambda_i f(\cdot, y_i)$ at $z$, gives

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{z,x}(0^+), y_i\big) < \bar{\rho}(x,z). \tag{20} \]

For $x \in X_0$ and $\mu_j \ge 0$, $1 \le j \le m$, we have $\mu_j g_j(x) \le 0$, which in view of (19) implies that

\[ \sum_{j=1}^{m} \mu_j g_j(x) \le \sum_{j=1}^{m} \mu_j g_j(z), \]

which, by the $\breve{\rho}$-generalized-quasi-right upper-Dini-derivative local arcwise connectedness (with respect to $H$) of $\sum_{j=1}^{m} \mu_j g_j(\cdot)$ at $z$, gives

\[ \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{z,x}(0^+)\big) \le \breve{\rho}(x,z). \tag{21} \]

By (20) and (21), we get

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{z,x}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{z,x}(0^+)\big) < \bar{\rho}(x,z) + \breve{\rho}(x,z) \le 0, \]

where the last inequality follows from hypothesis (iii). Therefore,

\[ \sum_{i=1}^{s} \lambda_i (df)^+\big(H_{z,x}(0^+), y_i\big) + \sum_{j=1}^{m} \mu_j (dg_j)^+\big(H_{z,x}(0^+)\big) < 0, \]

which contradicts (18). Hence the theorem is proved. □

Theorem 4.2 (Strong duality)

Let $x^*$ be an optimal solution of (P). Assume that the conditions of Theorem 3.1 are satisfied. Then there exist $(s^*, \lambda^*, y^*) \in K$ and $(x^*, \mu^*) \in H(s^*, \lambda^*, y^*)$ such that $(x^*, \mu^*, s^*, \lambda^*, y^*)$ is a feasible solution of (D) and the two objectives have the same value. If, in addition, the assumptions of the weak duality Theorem 4.1 hold for all feasible solutions of (D), then $(x^*, \mu^*, s^*, \lambda^*, y^*)$ is an optimal solution of (D).

Proof Since $x^*$ is an optimal solution of (P) and all the conditions of Theorem 3.1 are satisfied, there exist $(s^*, \lambda^*, y^*) \in K$ and $(x^*, \mu^*) \in H(s^*, \lambda^*, y^*)$ such that $(x^*, \mu^*, s^*, \lambda^*, y^*)$ is a feasible solution of (D) and the two objective values are equal. The optimality of $(x^*, \mu^*, s^*, \lambda^*, y^*)$ for (D) then follows from Theorem 4.1. □

Theorem 4.3 (Strict converse duality)

Let $x^*$ and $(z^*, \mu^*, s^*, \lambda^*, y^*)$ be optimal solutions of (P) and (D), respectively. Assume that the hypotheses of Theorem 4.2 are fulfilled. Also, assume that

(i) $\sum_{i=1}^{s^*} \lambda_i^* f(\cdot, y_i^*)$ is strictly $\bar{\rho}$-generalized-pseudo-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $z^*$;

(ii) $\sum_{j=1}^{m} \mu_j^* g_j(\cdot)$ is $\breve{\rho}$-generalized-quasi-right upper-Dini-derivative locally arcwise connected (with respect to $H$) at $z^*$;

(iii) $\bar{\rho}(x^*, z^*) + \breve{\rho}(x^*, z^*) \le 0$.

Then $z^* = x^*$.

Proof Suppose to the contrary that $z^* \ne x^*$. By Theorem 4.2, there exist $(s^*, \lambda^*, y^*) \in K$ and $(x^*, \mu^*) \in H(s^*, \lambda^*, y^*)$ such that $(x^*, \mu^*, s^*, \lambda^*, y^*)$ is a feasible solution of (D) and

\[ \sup_{y \in Y} f(x^*, y) = \sum_{i=1}^{s^*} \lambda_i^* f(z^*, y_i^*). \]

Thus, we have

\[ f(x^*, y_i^*) \le \sum_{i=1}^{s^*} \lambda_i^* f(z^*, y_i^*), \quad 1 \le i \le s^*. \]

It follows from $\lambda_i^* \ge 0$, $1 \le i \le s^*$, and $\sum_{i=1}^{s^*} \lambda_i^* = 1$ that

\[ \sum_{i=1}^{s^*} \lambda_i^* f(x^*, y_i^*) \le \sum_{i=1}^{s^*} \lambda_i^* f(z^*, y_i^*). \tag{22} \]

Now, proceeding along the same lines as in the proof of Theorem 4.1, with the strict pseudo-connectedness of hypothesis (i) applied to (22), we get

\[ \sum_{i=1}^{s^*} \lambda_i^* (df)^+\big(H_{z^*,x^*}(0^+), y_i^*\big) + \sum_{j=1}^{m} \mu_j^* (dg_j)^+\big(H_{z^*,x^*}(0^+)\big) < 0, \]

which contradicts (18). Hence the theorem is proved. □

5 Conclusion

In this paper we have established necessary and sufficient optimality conditions for a general minimax programming problem under generalized convexity defined via the right upper-Dini-derivative, and we have developed Mond-Weir type duality theory. These results can be extended to the following semiinfinite minimax programming problem (SIP):

\[ \text{Minimize } \max_{y \in Y} f(x,y) \quad \text{subject to} \quad G_j(x,t) \le 0, \ \forall t \in T_j,\ j \in \underline{q} = \{1,2,\ldots,q\}, \qquad H_k(x,s) = 0, \ \forall s \in S_k,\ k \in \underline{r} = \{1,2,\ldots,r\}, \qquad x \in X, \tag{SIP} \]

where $X \subseteq \mathbb{R}^n$ is a nonempty open arcwise connected set, $Y$ is a compact metrizable topological space, and $f(\cdot,y)$ is a real-valued function defined on $X$. The sets $T_j$ and $S_k$ are compact subsets of complete metric spaces; for each $j \in \underline{q}$, $G_j(\cdot,t)$ is a real-valued function defined on $X$ for all $t \in T_j$; for each $k \in \underline{r}$, $H_k(\cdot,s)$ is a real-valued function defined on $X$ for all $s \in S_k$; and for each $j \in \underline{q}$ and $k \in \underline{r}$, $G_j(x,\cdot)$ and $H_k(x,\cdot)$ are continuous real-valued functions defined on $T_j$ and $S_k$, respectively, for all $x \in X$. We shall investigate this semiinfinite programming problem in subsequent papers.

References

  1. Du D, Pardalos PM, Wu WZ: Minimax and Applications. Kluwer Academic, Dordrecht; 1995.


  2. Schmittendorf WE: Necessary conditions and sufficient conditions for static minimax problems. J. Math. Anal. Appl. 1977, 57: 683-693. 10.1016/0022-247X(77)90255-4


  3. Ahmad I: Higher order duality in nondifferentiable minimax fractional programming involving generalized convexity. J. Inequal. Appl. 2012, Article ID 306 (2012)


  4. Ahmad I: Second order nondifferentiable minimax fractional programming with square root terms. Filomat 2013, 27: 135-142. 10.2298/FIL1301135A


  5. Ahmad I, Husain Z: Optimality conditions and duality in nondifferentiable minimax fractional programming with generalized convexity. J. Optim. Theory Appl. 2006, 129: 255-275. 10.1007/s10957-006-9057-0


  6. Gupta SK, Dangar D: On second order duality for nondifferentiable minimax fractional programming. J. Comput. Appl. Math. 2014, 255: 878-886.


  7. Jayswal A, Prasad AK, Kummari K: On nondifferentiable minimax fractional programming involving higher order generalized convexity. Filomat 2013, 27: 1497-1504. 10.2298/FIL1308497J


  8. Jayswal A, Stancu-Minasian I, Ahmad I, Kummari K: Generalized minimax fractional programming problems with generalized nonsmooth (F,α,ρ,d,θ)-univex functions. J. Nonlinear Anal. Optim. 2013, 4: 227-239.


  9. Mehra A, Bhatia D: Optimality and duality for minmax problems involving arcwise connected and generalized arcwise connected functions. J. Math. Anal. Appl. 1999, 231: 425-445. 10.1006/jmaa.1998.6231


  10. Ortega JM, Rheinboldt WC: Iterative Solutions of Nonlinear Equations in Several Variables. Academic Press, New York; 1970.


  11. Avriel M, Zang I: Generalized arcwise connected functions and characterization of local-global minimum properties. J. Optim. Theory Appl. 1980, 32: 407-425. 10.1007/BF00934030


  12. Bhatia D, Mehra A: Optimality conditions and duality involving arcwise connected and generalized arcwise connected functions. J. Optim. Theory Appl. 1999, 100: 181-194. 10.1023/A:1021725200423


  13. Yuan D, Liu X: Mathematical programming involving (α,ρ)-right upper-Dini-derivative functions. Filomat 2013, 27: 899-908. 10.2298/FIL1305899Y


  14. Kelley JL, Namioka I: Linear Topological Spaces. Van Nostrand, Princeton; 1963.


  15. Zhang QX: Optimality conditions and duality for semi-infinite programming involving B-arcwise connected functions. J. Glob. Optim. 2009, 45: 615-629. 10.1007/s10898-009-9400-8


  16. Nahak C, Mohapatra RN: d - ρ - (η,θ) -invexity in multiobjective optimization. Nonlinear Anal. 2009, 70: 2288-2296. 10.1016/j.na.2008.03.008


  17. Ye YL: d -Invexity and optimality conditions. J. Math. Anal. Appl. 1991, 162: 242-249. 10.1016/0022-247X(91)90190-B



Acknowledgements

The research of the second and fourth author is financially supported by King Fahd University of Petroleum and Minerals, Saudi Arabia under the Internal Research Project No. IN131026.

Author information


Correspondence to Suliman Al-Homidan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors carried out the proof. All authors conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Jayswal, A., Ahmad, I., Kummari, K. et al. On minimax programming problems involving right upper-Dini-derivative functions. J Inequal Appl 2014, 326 (2014). https://doi.org/10.1186/1029-242X-2014-326


Keywords

  • minimax programming
  • upper-Dini-derivative
  • optimality
  • duality