
Nondifferentiable mathematical programming involving (G,β)-invexity

Abstract

In this paper, we define a new vector generalized convexity, namely nondifferentiable vector $(G_f,\beta_f)$-invexity, for a given locally Lipschitz vector function f. Based on this new nondifferentiable vector generalized invexity, we study nondifferentiable nonlinear programming problems under suitable assumptions. First, we present G-Karush-Kuhn-Tucker necessary optimality conditions for nonsmooth mathematical programming problems. Under the new vector generalized invexity assumption, we also obtain G-Karush-Kuhn-Tucker sufficient optimality conditions for the same problems. Moreover, we establish duality results for this kind of multiobjective programming problem. Finally, a suitable example illustrates that the new optimality results apply to a class of optimization problems for which optimality conditions based on invex functions do not.

MSC:90C26.

1 Introduction

Convexity plays a central role in many aspects of mathematical programming, including the analysis of stability, sufficient optimality conditions, and duality. Based on convexity assumptions, nonlinear programming problems can be solved efficiently. In order to treat many practical problems, there have been many attempts to weaken the convexity assumptions, and many concepts of generalized convex functions have been introduced and applied to mathematical programming problems in the literature [1–4]. One of these concepts, invexity, was introduced by Hanson in [1]. He showed that invexity shares an important property with convexity in mathematical programming: the Karush-Kuhn-Tucker conditions are sufficient for global optimality of nonlinear programs under invexity assumptions. Ben-Israel and Mond [2] also introduced the concept of preinvex functions, which is a special case of invexity. Many researchers, such as Mordukhovich [5], Mishra [6, 7], Ahmad [8, 9], and Soleimani-Damaneh [10], have contributed to this topic. Furthermore, Ansari and Yao [11] edited a book which provides a good review of different variants of invexity. Under generalized convexity, sufficiency and duality results can be obtained; we refer to [12–14] and the references therein for more results.

In [3], Antczak introduced new definitions of a p-invex set and a (p,r)-preinvex function, which generalize the corresponding concepts in [2]. He also discussed differentiable and nondifferentiable nonlinear programming problems involving (p,r)-invexity-type functions in [15]. With respect to fixed functions η and b, Antczak extended (p,r)-invexity to B-(p,r)-invexity and generalized B-(p,r)-invexity in [16]. Ahmad et al. [8] derived sufficient conditions for an optimal solution to the minimax fractional problem and then established weak, strong, and strict converse duality theorems for the problem and its dual under B-(p,r)-invexity assumptions. Antczak [4] considered a special kind of (p,r)-invexity, (0,r)-invexity, which is called r-invexity in both the differentiable and nondifferentiable cases. Later, Antczak [17] generalized the concept of (scalar) differentiable r-invex functions to the vectorial case and defined a class of V-r-invex functions. In [18], Antczak further generalized the notion of V-r-invexity to the nondifferentiable case. Note that other researchers have also studied mathematical programming problems involving V-r-invex functions; see [6, 7, 9] and the references therein.

To further enlarge the class of mathematical models for which the theoretical tools hold, Antczak extended invexity to G-invexity [19] for scalar differentiable functions. In a natural way, he extended the definition of G-invexity to the case of differentiable vector-valued functions. He [20] also applied this vector G-invexity to develop optimality conditions for differentiable multiobjective programming problems with both inequality and equality constraints and established the so-called G-Karush-Kuhn-Tucker necessary optimality conditions for this kind of programming under the Kuhn-Tucker constraint qualification. With vector G-invexity, he proved new duality results for nonlinear differentiable multiobjective programming problems, and a number of new vector dual problems, such as the G-Mond-Weir, G-Wolfe, and G-mixed dual vector problems to the primal one, were defined in [21]. Further, Kim et al. [22] considered a special kind of nondifferentiable multiobjective programming with G-invexity.

Motivated by [20, 21, 23], in this paper we enlarge the class of mathematical models for which the theoretical tools hold. Here, we present a new generalized convexity, namely nondifferentiable vector $(G_f,\beta_f)$-invexity, for a given locally Lipschitz vector function f. We point out why it is worthwhile to consider nondifferentiable vector $(G_f,\beta_f)$-invexity; our reasons are as follows:

  • In some cases, choosing G suitably can simplify the computation of the Clarke derivative of f; see Examples 1 and 2;

  • The concept of $(G_f,\beta_f)$-invexity not only unifies but also extends the concepts of α-invexity and G-invexity; see Example 3. Moreover, $(G_f,\beta_f)$-invexity, together with Lemma 1, makes it easier to choose a vector-valued function η; see Example 3.

Based on this new nondifferentiable vector generalized invexity, we deal with nonlinear programming problems under suitable assumptions. The rest of the paper is organized as follows. In Section 2, we present the concept of nondifferentiable vector $(G_f,\beta_f)$-invexity pertaining to a given locally Lipschitz vector function f. For a given function f, we discuss the relation between $(G_f,\beta_f)$-invexity and $(b,G_f)$-preinvexity in Section 3. In Section 4, we present the G-Karush-Kuhn-Tucker necessary optimality conditions for nondifferentiable mathematical programming problems. Moreover, under this nondifferentiable vector generalized invexity assumption, we prove the G-Karush-Kuhn-Tucker sufficient optimality conditions for the same problems. In Section 5, we establish duality results for this kind of nonsmooth multiobjective programming problem as applications of the new generalized invexity. In Section 6, we give our conclusion and present a suitable example which illustrates that the optimality results in this paper apply to a class of optimization problems for which optimality conditions based on existing invexity notions do not; see Example 6.

2 Notations and definitions

In this section, we provide some notations and results about nondifferentiable vector $(G_f,\beta_f)$-invex functions. The following convention for vector inequalities will be used throughout the paper. For any $x = (x_1, x_2, \ldots, x_n)^T$ and $y = (y_1, y_2, \ldots, y_n)^T$: $x \leqq y$ if and only if $x_i \leq y_i$ for all $i$; $x \leq y$ if and only if $x \leqq y$ and $x \neq y$; $x < y$ if and only if $x_i < y_i$ for all $i$.

For any function f defined on a nonempty set $X \subseteq \mathbb{R}^n$, $I_f(X)$ denotes the range of f, that is, the image of X under f. Moreover, let $K = \{1,\ldots,k\}$ and $M = \{1,2,\ldots,m\}$.

Definition 1 Let $d \in \mathbb{R}^n$, X be a nonempty set of $\mathbb{R}^n$, and $f: X \to \mathbb{R}$. If

$$f^0(x;d) := \limsup_{\substack{y \to x \\ \mu \downarrow 0}} \frac{1}{\mu}\big(f(y+\mu d) - f(y)\big)$$

exists, then $f^0(x;d)$ is called the Clarke derivative of f at x in the direction d. If this limit superior exists for all $d \in \mathbb{R}^n$, then f is called Clarke differentiable at x. The set

$$\partial f(x) = \big\{\zeta \in \mathbb{R}^n : f^0(x;d) \geq \langle \zeta, d\rangle \text{ for all } d \in \mathbb{R}^n\big\}$$

is called the Clarke subdifferential of f at x.
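For a standard illustration (not one of the functions studied later in the paper), consider the locally Lipschitz function $f(x) = |x|$ on $\mathbb{R}$. Then

$$f^0(0;d) = \limsup_{\substack{y\to 0\\ \mu\downarrow 0}} \frac{|y+\mu d| - |y|}{\mu} = |d|, \qquad \partial f(0) = \{\zeta \in \mathbb{R} : |d| \geq \zeta d \text{ for all } d \in \mathbb{R}\} = [-1,1],$$

while at any $x \neq 0$ the function is differentiable and $\partial f(x) = \{\operatorname{sign}(x)\}$.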

We give a direct proof for the following useful lemma, which can also be deduced from Theorem 2.3.9 in [24].

Lemma 1 (Chain rule)

Let ϕ be a real-valued Lipschitz continuous function defined on X, and denote the image of X under ϕ by $I_\phi(X)$; let $\varphi: I_\phi(X) \to \mathbb{R}$ be a differentiable function such that $\varphi'(\gamma)$ is continuous on $I_\phi(X)$ and $\varphi'(\gamma) > 0$ for each $\gamma \in I_\phi(X)$. Then the chain rule

$$(\varphi\circ\phi)^0(x;d) = \varphi'(\phi(x))\,\phi^0(x;d)$$

holds for each $d \in \mathbb{R}^n$. Therefore,

$$\partial(\varphi\circ\phi)(x) = \varphi'(\phi(x))\,\partial\phi(x).$$

Proof On the one hand, from Definition 1 and the assumption that $\varphi'(\gamma) > 0$ for all $\gamma \in I_\phi(X)$, we obtain

$$\begin{aligned} (\varphi\circ\phi)^0(x;d) &= \limsup_{\substack{y\to x\\ \mu\downarrow 0}} \frac{1}{\mu}\big(\varphi(\phi(y+\mu d)) - \varphi(\phi(y))\big)\\ &= \limsup_{\substack{y\to x\\ \mu\downarrow 0}} \left(\frac{\varphi(\phi(y+\mu d)) - \varphi(\phi(y))}{\phi(y+\mu d)-\phi(y)}\cdot \frac{\phi(y+\mu d)-\phi(y)}{\mu}\right)\\ &\leq \limsup_{\substack{y\to x\\ \mu\downarrow 0}} \frac{\varphi(\phi(y+\mu d)) - \varphi(\phi(y))}{\phi(y+\mu d)-\phi(y)} \cdot \limsup_{\substack{y\to x\\ \mu\downarrow 0}} \frac{\phi(y+\mu d)-\phi(y)}{\mu}\\ &= \varphi'(\phi(x))\,\phi^0(x;d). \end{aligned}$$

On the other hand, by the definition of $\phi^0(x;d)$, there exist a sequence $\{y_n\} \subset X$ and a real sequence $\{\mu_n\} \subset \mathbb{R}_+$ such that $y_n \to x$, $\mu_n \downarrow 0$ as $n \to \infty$, and

$$\phi^0(x;d) = \limsup_{\substack{y\to x\\ \mu\downarrow 0}} \frac{\phi(y+\mu d)-\phi(y)}{\mu} = \lim_{n\to\infty} \frac{\phi(y_n+\mu_n d)-\phi(y_n)}{\mu_n}.$$
(1)

Note that

$$\frac{\varphi(\phi(y_n+\mu_n d)) - \varphi(\phi(y_n))}{\phi(y_n+\mu_n d)-\phi(y_n)} \cdot \frac{\phi(y_n+\mu_n d)-\phi(y_n)}{\mu_n} = \frac{\varphi(\phi(y_n+\mu_n d)) - \varphi(\phi(y_n))}{\mu_n}$$

and

$$\lim_{n\to\infty} \frac{\varphi(\phi(y_n+\mu_n d)) - \varphi(\phi(y_n))}{\phi(y_n+\mu_n d)-\phi(y_n)} = \varphi'(\phi(x)).$$

Therefore, by (1) and the definition of $(\varphi\circ\phi)^0(x;d)$, we obtain

$$\varphi'(\phi(x))\,\phi^0(x;d) = \lim_{n\to\infty} \frac{\varphi(\phi(y_n+\mu_n d)) - \varphi(\phi(y_n))}{\mu_n} \leq (\varphi\circ\phi)^0(x;d).$$

Thus, we obtain the desired result. □

With the above chain rule, we can compute the Clarke derivative of a real-valued function f more easily than by using the definition of the Clarke derivative itself; see the following Examples 1 and 2.

Example 1 Denote

Then $f(x) = (g\circ h)(x)$, and it is easy to check that

h 0 (0,d)={ 1 , d > 0 , 1 , d < 0 and g (0)=1.

Thus, by the chain rule in Lemma 1,

f 0 (0,d)={ 1 , d > 0 , 1 , d < 0 .

Example 2 Let X be a nonempty subset of R n , f be a locally Lipschitz function on X, and r be an arbitrary real number. Denote

$$\varphi(a) := \begin{cases} \dfrac{1}{r}\, e^{ra}, & r \neq 0,\\[4pt] a, & r = 0,\end{cases}$$

for all $a \in \mathbb{R}$. By the chain rule in Lemma 1,

$$(\varphi\circ f)^0(x;d) = \varphi'(f(x))\, f^0(x;d).$$
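The chain rule can also be checked numerically. The sketch below is our own illustration; the test function $f(x) = |x|$ and the value $r = 1$ are assumed, not taken from the paper. It crudely approximates the limit superior in Definition 1 by sampling and compares $(\varphi\circ f)^0(0;d)$ with $\varphi'(f(0))\, f^0(0;d)$:

```python
import numpy as np

def clarke_approx(F, x, d, eps=1e-4, n=4000, seed=0):
    """Crude numerical approximation of the Clarke derivative
    F^0(x; d) = limsup_{y -> x, mu -> 0+} (F(y + mu*d) - F(y)) / mu,
    obtained by sampling points y near x and small steps mu > 0
    and taking the largest difference quotient."""
    rng = np.random.default_rng(seed)
    y = x + eps * rng.uniform(-1.0, 1.0, size=n)    # points y close to x
    mu = 10.0 ** rng.uniform(-6.0, -3.0, size=n)    # small positive step sizes
    return np.max((F(y + mu * d) - F(y)) / mu)

# Assumed test data (not from the paper): f(x) = |x| is locally Lipschitz
# but not differentiable at 0; phi is the transform of Example 2 with r = 1.
r = 1.0
f = np.abs
phi = lambda a: np.exp(r * a) / r

for d in (1.0, -1.0, 0.5):
    lhs = clarke_approx(lambda t: phi(f(t)), 0.0, d)       # (phi o f)^0(0; d)
    rhs = np.exp(r * f(0.0)) * clarke_approx(f, 0.0, d)    # phi'(f(0)) * f^0(0; d)
    print(f"d = {d:+.1f}:  lhs ~ {lhs:.4f},  rhs ~ {rhs:.4f}")  # both close to |d|
```

Both printed quantities approach $|d|$, in agreement with Lemma 1; such sampling, of course, only approximates the limit superior and is not a proof.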

For differentiable functions, Antczak introduced G-invexity in [20]. Note from Example 2 that the function $\varphi(f)$ may not be differentiable even if $\varphi$ is differentiable. Thus, it is necessary to introduce the following vector $(G_f,\beta_f)$-invexity concept for a given nondifferentiable function f.

Definition 2 Let $f = (f_1,\ldots,f_k)$ be a vector-valued locally Lipschitz function defined on a nonempty set $X \subseteq \mathbb{R}^n$. Consider the functions $\eta: X\times X \to \mathbb{R}^n$, $G_{f_i}: I_{f_i}(X) \to \mathbb{R}$, and $\beta_i^f: X\times X \to \mathbb{R}_+$ for $i \in K$, where each $G_{f_i}$ is differentiable and strictly increasing on its domain $I_{f_i}(X)$. If

$$G_{f_i}(f_i(x)) - G_{f_i}(f_i(u)) \geq (>)\ \beta_i^f(x,u)\, G'_{f_i}(f_i(u))\, \langle \zeta_i, \eta(x,u)\rangle, \quad \forall \zeta_i \in \partial f_i(u),$$
(2)

holds for all $x \in X$ ($x \neq u$) and $i \in K$, then f is said to be (strictly) nondifferentiable vector $(G_f,\beta_f)$-invex at u on X (with respect to η), or, shortly, $(G_f,\beta_f)$-invex at u on X, where $G_f = (G_{f_1},\ldots,G_{f_k})$ and $\beta_f := (\beta_1^f, \beta_2^f,\ldots,\beta_k^f)$. If f is (strictly) nondifferentiable vector $(G_f,\beta_f)$-invex at u on X (with respect to η) for all $u \in X$, then f is (strictly) nondifferentiable vector $(G_f,\beta_f)$-invex on X with respect to η.
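As a simple scalar illustration of Definition 2 (our own construction, not one of the examples below), let $X = \mathbb{R}$, $f(x) = \ln(1+|x|)$, $G_f(t) = e^t$, $\beta_1^f \equiv 1$, and $\eta(x,u) = x - u$. Then $G_f(f(x)) = 1 + |x|$, $G'_f(f(u)) = 1 + |u|$, and

$$\partial f(u) = \begin{cases} \left\{ \dfrac{\operatorname{sign}(u)}{1+|u|} \right\}, & u \neq 0,\\[8pt] [-1, 1], & u = 0, \end{cases}$$

so inequality (2) reduces to $|x| - |u| \geq \operatorname{sign}(u)(x - u)$ for $u \neq 0$ and to $|x| \geq \zeta x$, $\zeta \in [-1,1]$, for $u = 0$; both hold for every $x \in \mathbb{R}$. Hence this nondifferentiable f is $(G_f,\beta_f)$-invex on $\mathbb{R}$ with respect to η.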

Remark 1 In order to define (strictly) nondifferentiable vector ( G f , β f )-incave functions with respect to η for given f, the direction of the inequality (2) in Definition 2 should be changed to the opposite one.

Remark 2 (1) Let $f: X \to \mathbb{R}$ be differentiable $(G_f,\beta_f)$-invex; then, by Definition 2 of this paper and the definition of α-invexity in [23], $G_f(f)$ is α-invex with $\alpha = \beta_f$.

(2) Let $f: X \to \mathbb{R}$ be differentiable $(G_f,\beta_f)$-invex with $G_f(a) = a$ for $a \in \mathbb{R}$; then f is α-invex as defined in [23], where $\alpha = \beta_f$.

(3) Let $f = (f_1,\ldots,f_k)$ be differentiable vector $(G_f,\beta_f)$-invex with $\beta_i^f(x,u) = 1$ for all $x, u \in X$ ($i \in K$); then f is vector G-invex as defined in [20]. Further, if $|K| = 1$, then f is G-invex as defined in [19].

Hence, the concept of $(G_f,\beta_f)$-invexity defined in this paper not only unifies but also extends the concepts of α-invexity and G-invexity. Example 3 illustrates that there exists a function which is neither α-invex as defined in [23] nor G-invex as defined in [20], but is $(G_f,\beta_f)$-invex as defined in this paper. Moreover, Definition 2 together with Lemma 1 makes it simple to choose a vector-valued function η; see Example 3.

Example 3 Let $X = [-1/2, 1/2] \subset \mathbb{R}$. Define $f = (f_1, f_2, f_3): X \to \mathbb{R}^3$ as follows:

From Lemma 1,

$$\partial f_1(0) = [-4,4], \qquad \partial f_2(0) = [-4,4], \qquad \partial f_3(0) = [-1,1].$$

Define

Then, by Definition 2, f is nondifferentiable vector $(G_f,\beta_f)$-invex with respect to η. Note that f is nondifferentiable; hence f is neither α-invex as defined in [23] nor G-invex as defined in [20], since both of those notions require differentiability.

3 Relations between (b, G f )-preinvexity and ( G f , β f )-invexity

In this section, we present the concept of (b, G f )-preinvexity and discuss its relations with ( G f , β f )-invexity introduced in the above section.

Definition 3 Let $X \subseteq \mathbb{R}^n$, $\alpha: X\times X \to \mathbb{R}_+$, and $\eta: X\times X \to \mathbb{R}^n$. The set X is said to be α-invex at $u \in X$ with respect to η if, for all $x \in X$,

$$u + \lambda\,\alpha(x,u)\,\eta(x,u) \in X, \quad \forall \lambda \in [0,1].$$

X is said to be an α-invex set with respect to η if X is α-invex at each $u \in X$. If $\alpha(x,u) = 1$ for all $x,u \in X$, then an α-invex set X with respect to η is simply called an invex set with respect to η.
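For instance (a standard observation, not specific to this paper), every convex set X is invex with respect to $\eta(x,u) = x - u$ and $\alpha \equiv 1$, since $u + \lambda\,\alpha(x,u)\,\eta(x,u) = (1-\lambda)u + \lambda x \in X$ for all $\lambda \in [0,1]$. A nonconvex example is $X = \mathbb{R}\setminus\{0\}$, which is invex with respect to $\eta(x,u) = x - u$ when $xu > 0$ and $\eta(x,u) = u - x$ when $xu < 0$, because $u + \lambda\eta(x,u)$ never vanishes for $\lambda \in [0,1]$.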

Definition 4 Let X be an invex set (with respect to η) in R n as defined in Definition 3. Consider the functions f i :XR and b i :X×X×[0,1] R + (iK). If

(3)

hold for all $x \in X$ ($x \neq u$), then $f = (f_1,\ldots,f_k)$ is said to be (strictly) vector b-preinvex at u on X with respect to η, where $b = (b_1,\ldots,b_k)$. If f is (strictly) vector b-preinvex at u on X with respect to η for each $u \in X$, then f is (strictly) vector b-preinvex on X with respect to η.

Definition 5 Let X be an invex set (with respect to η) of R n as defined in Definition 3. Consider the functions f i :XR, G f i : I f i (X)R, and b i :X×X×[0,1] R + (iK). Moreover, G f i is strictly increasing on I f i (X) for iK. If

(4)

hold for all $x \in X$ ($x \neq u$), then $f = (f_1,\ldots,f_k)$ is said to be (strictly) vector (b, $G_f$)-preinvex at u on X with respect to η, where $G_f = (G_{f_1},\ldots,G_{f_k})$ and $b = (b_1,\ldots,b_k)$. If f is (strictly) vector (b, $G_f$)-preinvex at u on X for all $u \in X$, then f is (strictly) vector (b, $G_f$)-preinvex on X with respect to η.

Example 4 Let X=R. Define

$$f(x) = \ln\big(|x|+1\big), \qquad G(x) = e^x, \qquad x \in X.$$

Then it is easy to check that f is (b, G)-preinvex on X with respect to the function η defined by $\eta(x,u) = x - u$, where $b(x,u;\lambda) \equiv 1$ for all $x,u \in \mathbb{R}$. However, f is not b-preinvex at $u = 1$ with respect to the same η and b, since

$$f\big(u + \lambda\,\eta(x,u)\big) > \lambda f(x) + (1-\lambda) f(u) \quad \text{for } \lambda = 0.5,\ x = 0,\ u = 1.$$
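To see the numbers behind Example 4: $G\circ f(x) = e^{\ln(|x|+1)} = |x| + 1$, which is convex on $\mathbb{R}$ and therefore satisfies the b-preinvexity inequality with $b \equiv 1$ and $\eta(x,u) = x - u$; this is exactly the (b, G)-preinvexity of f (cf. Theorem 1 below). On the other hand, for $\lambda = 0.5$, $x = 0$, $u = 1$ we have $u + \lambda\eta(x,u) = 0.5$ and

$$f(0.5) = \ln 1.5 \approx 0.405 > 0.347 \approx 0.5\,\ln 1 + 0.5\,\ln 2 = \lambda f(x) + (1-\lambda) f(u),$$

so f itself violates the b-preinvexity inequality.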

Example 4 above shows that there exists a function which is (b, G)-preinvex but not b-preinvex. Next, we give another useful lemma, whose proof is omitted.

Lemma 2 Let φ be a strictly increasing function defined on $A \subseteq \mathbb{R}$. Then $\varphi^{-1}$ exists and is strictly increasing on $I_\varphi(A)$.

Theorem 1 Let X be an invex set (with respect to η) in $\mathbb{R}^n$ and $f = (f_1,\ldots,f_k)$ be a function defined on X; let $G_f = (G_{f_1},\ldots,G_{f_k})$ be such that $G_{f_i}: I_{f_i}(X) \to \mathbb{R}$ is strictly increasing on $I_{f_i}(X)$ for $i \in K$; let $b := (b_1,\ldots,b_k)$, where $b_i: X\times X\times[0,1] \to \mathbb{R}_+$ ($i \in K$). Then f is (strictly) vector (b, $G_f$)-preinvex at u on X with respect to η if and only if $G_f\circ f = (G_{f_1}\circ f_1,\ldots,G_{f_k}\circ f_k)$ is (strictly) vector b-preinvex at u on X with respect to the same η.

Proof ‘if’ part. Let G f f=( G f 1 f 1 ,, G f k f k ) be (strictly) vector b-preinvex at u on X with respect to η. We get from Definition 4

Thus, we obtain with Lemma 2

By Definition 5, we deduce f is (strictly) vector (b, G f )-preinvex at u on X with respect to the same η.

Moreover, the above steps are invertible, so the result follows. □

Theorem 2 Let X be an invex set (with respect to η) in $\mathbb{R}^n$; let $f = (f_1,\ldots,f_k)$ be (strictly) vector (b, $G_f$)-preinvex on X with respect to η; assume that $G_{f_i}(\cdot)$ is differentiable and strictly increasing on $I_{f_i}(X)$ and that $b_i(x,u;\lambda)$ is continuous on $X\times X\times[0,1]$ for each $i \in K$. Moreover, $\limsup_{\lambda\to 0^+} b_i(x,u;\lambda) > 0$ for any $x,u \in X$. Then f is vector $(G_f,\beta_f)$-invex on X with respect to η, where $\beta_i^f(x,u) = 1/\limsup_{\lambda\to 0^+} b_i(x,u;\lambda)$ for $i \in K$.

Proof Since f=( f 1 ,, f k ) is (strictly) vector (b, G f )-preinvex on X with respect to η, then from Theorem 1 G f f=( G f 1 f 1 ,, G f k f k ) is (strictly) vector b-preinvex on X with respect to η. That is, for any x,uX (xu),

Hence,

Therefore, by the definition of the superior limit and continuity, one obtains

$$\frac{(G_{f_i}\circ f_i)^0\big(u;\eta(x,u)\big)}{\limsup_{\lambda\to 0^+} b_i(x,u;\lambda)} \leq (<)\ G_{f_i}(f_i(x)) - G_{f_i}(f_i(u)), \quad i \in K,$$

which together with Lemma 1 gives

$$\begin{aligned} G_{f_i}(f_i(x)) - G_{f_i}(f_i(u)) &\geq \frac{1}{\limsup_{\lambda\to 0^+} b_i(x,u;\lambda)}\, G'_{f_i}(f_i(u))\, f_i^0\big(u;\eta(x,u)\big)\\ &= \beta_i^f(x,u)\, G'_{f_i}(f_i(u))\, f_i^0\big(u;\eta(x,u)\big)\\ &\geq \beta_i^f(x,u)\, G'_{f_i}(f_i(u))\, \langle \zeta_i, \eta(x,u)\rangle, \quad \forall \zeta_i \in \partial f_i(u),\ i \in K. \end{aligned}$$

Thus, the result follows. □

Example 5 Let X be an invex set (with respect to η) of $\mathbb{R}^n$ and $f = (f_1,\ldots,f_k)$ be (strictly) (b, $G_f$)-preinvex on X with respect to η. For any given real number r, let φ be the function defined in Example 2 and set $G_f\circ f := (\varphi\circ f_1,\ldots,\varphi\circ f_k)$, that is, $G_{f_i} = \varphi$ for each $i \in K$. Then, by Theorem 2, f is nondifferentiable vector $(G_f,\beta_f)$-invex on X with respect to η, where $\beta_i^f(x,u) = 1/\limsup_{\lambda\to 0^+} b_i(x,u;\lambda)$ for $i \in K$. That is, the inequalities

hold for any $\zeta_i \in \partial f_i(u)$ and each $i \in K$. Thus, f is exactly a locally Lipschitz V-r-invex function with respect to η on X (an r-invex function in the scalar case).

Remark 3 By Definition 2 and Example 5, we know that both a V-r-invex function and an r-invex function are nondifferentiable vector ( G f , β f )-invex.

In general, a multiobjective programming problem is formulated as the following vector minimization problem:

(CVP) $\quad \min\ f(x) := \big(f_1(x), \ldots, f_k(x)\big)$ subject to $g_j(x) \leq 0$, $j \in M$, $x \in X$,

where X is a nonempty set of $\mathbb{R}^n$, and $f_i$ ($i \in K$) and $g_j$ ($j \in M$) are real-valued Lipschitz functions on X.

Let $E_{\mathrm{CVP}} := \{x \in X : g_j(x) \leq 0,\ j \in M\}$ be the set of all feasible solutions for the problem (CVP). Further, denote by $J(\bar{x}) := \{j \in M : g_j(\bar{x}) = 0\}$ the set of constraint indices active at $\bar{x} \in E_{\mathrm{CVP}}$.

The multiobjective programming problem (CVP) above is widely used in the applied sciences. Recently, this kind of programming has been used to solve problems arising in fields such as bioinformatics, computational biology, molecular biology, wastewater treatment, drug discovery, and food processing.

For convenience, we also need the following vector minimization problem:

(G-CVP) $\quad \min\ G_f(f(x)) := \big(G_{f_1}(f_1(x)), \ldots, G_{f_k}(f_k(x))\big)$ subject to $G_{g_j}(g_j(x)) \leq G_{g_j}(0)$, $j \in M$, $x \in X$,

where $G_g(0) := (G_{g_1}(0),\ldots,G_{g_m}(0))$. Denote $E_{G\text{-}\mathrm{CVP}} := \{x \in X : G_g(g(x)) \leq G_g(0)\}$ and $J'(\bar{x}) := \{j \in M : G_{g_j}(g_j(\bar{x})) = G_{g_j}(0)\}$. Since each $G_{g_j}$ is strictly increasing, $g_j(x) \leq 0$ if and only if $G_{g_j}(g_j(x)) \leq G_{g_j}(0)$; hence $E_{\mathrm{CVP}} = E_{G\text{-}\mathrm{CVP}}$ and $J(\bar{x}) = J'(\bar{x})$. So the set of all feasible solutions and the set of active constraint indices for either (CVP) or (G-CVP) are denoted by E and $J(\bar{x})$, respectively.

Before studying optimality in multiobjective programming, we have to define clearly the concepts of optimality and solutions in relation to a multiobjective programming problem. Note that in vector optimization problems, there is a multitude of competing definitions and approaches. One of the dominating ones is (weak) Pareto optimality. The (weak) Pareto optimality in multiobjective programming associates the concept of a solution with some property that seems intuitively natural.

Definition 6 A feasible point $\bar{x}$ is said to be a (weakly) efficient solution for the multiobjective programming problem (CVP) if and only if there exists no $x \in E$ such that

$$f(x) \leq f(\bar{x}) \quad \big(f(x) < f(\bar{x})\big).$$

Lemma 3 Let G f i be strictly increasing on I f i (X) for each iK and G g j be strictly increasing on I g j (X) for each jM. Further, let 0 I g j (X), jM. Then x ¯ is a (weakly) efficient solution for (CVP) if and only if x ¯ is a (weakly) efficient solution for (G-CVP).
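As a simple scalar illustration of this equivalence (our own hypothetical data, not an example from the paper), take $X = \mathbb{R}$, $k = m = 1$, $f(x) = e^{|x|}$, $g(x) = e^x - 1$, and choose $G_f(t) = \ln t$ on $I_f(X) = [1,\infty)$ and $G_g(t) = \ln(t+1)$ on $I_g(X) = (-1,\infty)$, both strictly increasing with $0 \in I_g(X)$. Then (G-CVP) becomes

$$\min\ G_f(f(x)) = |x| \quad \text{subject to} \quad G_g(g(x)) = x \leq 0 = G_g(0),$$

so both problems have the same feasible set $E = \{x : x \leq 0\}$ (since $e^x - 1 \leq 0 \Leftrightarrow x \leq 0$) and the same solution $\bar{x} = 0$, while the transformed objective $|x|$ and constraint $x \leq 0$ are easier to handle than $e^{|x|}$ and $e^x - 1 \leq 0$.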

4 Optimality conditions in nondifferentiable multiobjective programming

The first necessary conditions for the inequality-constrained problem were presented in 1948 by Fritz John, while stronger necessary conditions for the same problem were obtained in 1951 by Kuhn and Tucker. Since then, optimality conditions of Fritz John and Karush-Kuhn-Tucker type for differentiable or nondifferentiable nonconvex multiobjective programming problems have been established under various assumptions. For example, optimality conditions of Fritz John and Karush-Kuhn-Tucker type for nondifferentiable convex multiobjective programming problems were established by Kanniappan. Later, Craven proved these conditions for nondifferentiable multiobjective programming problems involving locally Lipschitz functions. Also, under some constraint qualifications, Lee proved the Karush-Kuhn-Tucker necessary optimality conditions for multiobjective programming problems involving Lipschitz functions. Moreover, Soleimani-Damaneh characterized the weak Pareto-optimal solutions of nonsmooth multiobjective programs in Asplund spaces under locally Lipschitz and generalized convexity conditions. Further, he established some sufficient conditions for optimality and proper optimality for multiple-objective programs in Banach spaces after extending the concept of vector invexity.

Recently, Antczak [19] introduced the so-called G-Karush-Kuhn-Tucker necessary optimality conditions for a differentiable mathematical programming problem. In a natural way, he [20] extended these G-Karush-Kuhn-Tucker necessary optimality conditions to the vectorial case for differentiable multiobjective programming problems. From the discussion in the previous sections, it is interesting to consider nondifferentiable nonlinear programming problems. Hence, we present not only G-Karush-Kuhn-Tucker necessary optimality conditions but also G-Karush-Kuhn-Tucker sufficient optimality conditions for this kind of nondifferentiable mathematical programming problem.

Theorem 3 (G-Fritz John necessary optimality condition)

Let $G_{f_i}$ be a differentiable function defined on $I_{f_i}(X)$ such that $G'_{f_i}$ is nonnegative and continuous on $I_{f_i}(X)$ for each $i \in K$; let $G_{g_j}$ be a differentiable function defined on $I_{g_j}(X)$ such that $G'_{g_j}$ is nonnegative and continuous on $I_{g_j}(X)$ for each $j \in M$. If $\bar{x}$ is a (weakly) efficient solution for (CVP), then there exist $\bar{\lambda} \in \mathbb{R}^k$ and $\bar{\xi} \in \mathbb{R}^m$ such that

(5)
(6)
(7)

Proof Since x ¯ is a (weakly) efficient solution for (CVP), then by Lemma 3, x ¯ is a (weakly) efficient solution for (G-CVP). Therefore, from Theorem 10 of [18], we have

Hence, by Lemma 1, we get the desired result. □

The G-Karush-Kuhn-Tucker necessary optimality conditions for x ¯ to be (weak) Pareto optimal are obtained from the above Fritz John necessary optimality conditions under some constraint qualifications.

Now, we give a generalized Slater type constraint qualification. Under this regularity constraint qualification, we establish the G-Karush-Kuhn-Tucker necessary optimality conditions for the considered nonsmooth multiobjective programming problem (CVP).

Definition 7 The program (CVP) is said to satisfy the generalized Slater type constraint qualification at $\bar{x}$ if there exists $x_0 \in E$ such that $g_J(x_0) < 0$ and $g_J$ is $(G_{g_J}, \beta^{g_J})$-invex with respect to η at $\bar{x}$ on E, where $J = J(\bar{x})$.

Theorem 4 (G-Karush-Kuhn-Tucker necessary optimality condition)

Let $G_{f_i}$ be a differentiable function defined on $I_{f_i}(X)$ such that $G'_{f_i}$ is nonnegative and continuous on $I_{f_i}(X)$ for each $i \in K$; let $G_{g_j}$ be a differentiable function defined on $I_{g_j}(X)$ such that $G'_{g_j}$ is nonnegative and continuous on $I_{g_j}(X)$ for each $j \in M$. Assume that $\bar{x}$ is a (weakly) efficient solution for (CVP) and that (CVP) satisfies the generalized Slater type constraint qualification at $\bar{x}$. Then there exist $\bar{\lambda} \in \mathbb{R}^k$ and $\bar{\xi} \in \mathbb{R}^m$ such that

(8)
(9)
(10)

Proof On the one hand, since $\bar{x}$ is a (weakly) efficient solution for (CVP), the necessary optimality conditions of G-Fritz John type (5)-(7) for (CVP) are fulfilled. Let us suppose that $\bar{\lambda} = 0$. Then by (6) we have $\bar{\xi}_j = 0$ for all $j \notin J$, and there exists at least one $j \in J$ such that $\bar{\xi}_j > 0$. Thus, from (5), Lemma 1, and subdifferential calculus (see [24]), it follows that

$$0 \in \partial\Big(\sum_{j=1}^m \bar{\xi}_j\, G_{g_j}\circ g_j\Big)(\bar{x}) = \partial\Big(\sum_{j\in J} \bar{\xi}_j\, G_{g_j}\circ g_j\Big)(\bar{x}) \subseteq \sum_{j\in J} \bar{\xi}_j\, G'_{g_j}(g_j(\bar{x}))\, \partial g_j(\bar{x}).$$

This implies that there exist $\zeta_j \in \partial g_j(\bar{x})$, $j \in J$, such that

$$\sum_{j\in J} \bar{\xi}_j\, G'_{g_j}(g_j(\bar{x}))\,\zeta_j = 0.$$

Note that g J is assumed to be ( G g J , β g J )-invex with respect to η at x ¯ . Then

$$\sum_{j\in J} \bar{\xi}_j\, \frac{G_{g_j}(g_j(x)) - G_{g_j}(0)}{\beta_j^{g_J}(x,\bar{x})} = \sum_{j\in J} \bar{\xi}_j\, \frac{G_{g_j}(g_j(x)) - G_{g_j}(g_j(\bar{x}))}{\beta_j^{g_J}(x,\bar{x})} \geq \sum_{j\in J} \bar{\xi}_j\, G'_{g_j}(g_j(\bar{x}))\, \big\langle\zeta_j, \eta(x,\bar{x})\big\rangle = 0.$$
(11)

On the other hand, it follows from the generalized Slater type constraint qualification that there exists $x_0 \in E$ such that $g_j(x_0) < 0$ for all $j \in J$, so that $G_{g_j}(g_j(x_0)) < G_{g_j}(0)$. Since $\bar{\xi}_j > 0$ for at least one $j \in J$, we obtain the inequality

$$0 > \sum_{j\in J} \bar{\xi}_j\, \frac{G_{g_j}(g_j(x_0)) - G_{g_j}(0)}{\beta_j^{g_J}(x_0,\bar{x})} = \sum_{j\in J} \bar{\xi}_j\, \frac{G_{g_j}(g_j(x_0)) - G_{g_j}(g_j(\bar{x}))}{\beta_j^{g_J}(x_0,\bar{x})},$$

which contradicts (11) with $x = x_0$. Hence $\bar{\lambda} \neq 0$, and the result follows. □

Now, under the assumption of generalized invexity defined in Section 2, we can establish sufficient optimality conditions for nonsmooth multiobjective programming problems involving locally Lipschitz functions.

Theorem 5 (G-Karush-Kuhn-Tucker sufficient optimality conditions)

Let $\bar{x}$ be a feasible point for (CVP); let $G_{f_i}$ be differentiable and strictly increasing on $I_{f_i}(X)$ for each $i \in K$, and let $G_{g_j}$ be differentiable and strictly increasing on $I_{g_j}(X)$ for each $j \in M$. Moreover, assume that the G-Karush-Kuhn-Tucker necessary optimality conditions (8)-(10) are satisfied at $\bar{x}$. If f is nondifferentiable vector $(G_f,\beta_f)$-invex at $\bar{x}$ on X with respect to η and g is nondifferentiable vector $(G_g,\beta_g)$-invex at $\bar{x}$ on X with respect to the same η, then $\bar{x}$ is a (weakly) efficient solution for (CVP).

Proof Suppose, contrary to the result, that $\bar{x}$ is not a weakly efficient solution for (CVP). By Lemma 3, $\bar{x}$ is not a weakly efficient solution for (G-CVP). Hence, there exists a feasible point $x_0 \in E$ such that

$$G_{f_i}(f_i(x_0)) < G_{f_i}(f_i(\bar{x})), \quad i \in K.$$
(12)

By the generalized invexity assumption of f and g, we have

(13)
(14)

where $\zeta_i^f \in \partial f_i(\bar{x})$ ($i \in K$) and $\zeta_j^g \in \partial g_j(\bar{x})$ ($j \in M$). Multiplying (14) by $\bar{\xi}_j \geq 0$, we get

$$\bar{\xi}_j\big(G_{g_j}(g_j(x_0)) - G_{g_j}(g_j(\bar{x}))\big) \geq \bar{\xi}_j\, \beta_j^g(x_0,\bar{x})\, G'_{g_j}(g_j(\bar{x}))\, \big\langle \zeta_j^g, \eta(x_0,\bar{x})\big\rangle, \quad j \in M.$$
(15)

From (8), (9), (13), and (15), we have

Note that $\bar{\lambda} \geq 0$ and $\bar{\lambda} \neq 0$. Then

$$\Big\langle \sum_{i=1}^k \bar{\lambda}_i\, G'_{f_i}(f_i(\bar{x}))\,\zeta_i^f + \sum_{j=1}^m \bar{\xi}_j\, G'_{g_j}(g_j(\bar{x}))\,\zeta_j^g,\ \eta(x_0,\bar{x})\Big\rangle < 0,$$

which contradicts the G-Karush-Kuhn-Tucker necessary optimality condition (8). Hence, x ¯ is a weakly efficient solution for (CVP), and the proof is complete. □

Theorem 6 (G-Karush-Kuhn-Tucker sufficient optimality conditions)

Let $\bar{x}$ be a feasible point for (CVP); let $G_{f_i}$ be differentiable and strictly increasing on $I_{f_i}(X)$ for each $i \in K$, and let $G_{g_j}$ be differentiable and strictly increasing on $I_{g_j}(X)$ for each $j \in M$. Moreover, assume that the G-Karush-Kuhn-Tucker necessary optimality conditions (8)-(10) are satisfied at $\bar{x}$. If f is strictly nondifferentiable vector $(G_f,\beta_f)$-invex at $\bar{x}$ on X with respect to η and g is nondifferentiable vector $(G_g,\beta_g)$-invex at $\bar{x}$ on X with respect to the same η, then $\bar{x}$ is an efficient solution for (CVP).

Proof The proof is similar to that of Theorem 5. □

5 Duality

Duality is an important concept in the study of optimization problems. Several duals, including the Mond-Weir dual and the Wolfe dual, have been introduced for various nonlinear programming problems. For example, Ahmad et al. [9] considered the Mond-Weir type dual program of nonsmooth multiobjective programming involving generalized V-r-invex functions. Further, Soleimani-Damaneh considered Mond-Weir type and Wolfe type duals for a general nonsmooth optimization problem in Banach algebras. As applications of our new generalized invexity, we also establish dual results following the approaches of Mond and Weir. We formulate the following dual problem for (CVP):

(MWD)

Let W denote the set of all feasible solutions for the dual problem (MWD). Further, denote by Y the set $Y := \{y \in X : (y,\lambda,\mu) \in W\}$.

Theorem 7 (Weak duality)

Let x and $(y,\lambda,\mu)$ be feasible solutions for (CVP) and (MWD), respectively. Moreover, assume that $f_I$ and $g_J$ are $(G_{f_I}, \beta^{f_I})$-invex and $(G_{g_J}, \beta^{g_J})$-invex at y on $E \cup Y$ with respect to the same η, respectively, where $I = I(y)$ and $J = J(y)$. Then $f(x) \nless f(y)$.

Proof Let x and $(y,\lambda,\mu)$ be feasible solutions for (CVP) and (MWD), respectively. Then there exist $\zeta_i^f \in \partial f_i(y)$, $i \in K$, and $\zeta_j^g \in \partial g_j(y)$, $j \in M$, such that

$$\sum_{i=1}^k \lambda_i\, G'_{f_i}(f_i(y))\,\zeta_i^f + \sum_{j=1}^m \mu_j\, G'_{g_j}(g_j(y))\,\zeta_j^g = 0.$$
(16)

We proceed by contradiction. Suppose that

f(x)<f(y).
(17)

Since $f_I$ and $g_J$ are $(G_{f_I},\beta^{f_I})$-invex and $(G_{g_J},\beta^{g_J})$-invex at y on $E \cup Y$ with respect to the same η, respectively, by Definition 2 the system

holds for all $x \in E$. Hence, we deduce that the inequality

$$\Big\langle \sum_{i=1}^k \lambda_i\, G'_{f_i}(f_i(y))\,\zeta_i^f + \sum_{j=1}^m \mu_j\, G'_{g_j}(g_j(y))\,\zeta_j^g,\ \eta(x,y)\Big\rangle < 0$$

holds for all $\zeta_i^f \in \partial f_i(y)$, $i \in I$, and $\zeta_j^g \in \partial g_j(y)$, $j \in J$. This contradicts (16). □

Theorem 8 (Strong duality)

Let $\bar{x}$ be a (weakly) efficient solution in (CVP). Then there exist $\bar{\lambda} \in \mathbb{R}^k$, $\bar{\lambda} \geq 0$, and $\bar{\mu} \in \mathbb{R}^m$, $\bar{\mu} \geq 0$, such that $(\bar{x},\bar{\lambda},\bar{\mu})$ is feasible in (MWD). If, in addition, the weak duality theorem holds for problems (CVP) and (MWD), then $(\bar{x},\bar{\lambda},\bar{\mu})$ is a (weakly) efficient solution in (MWD) and the optimal values of both problems are the same.

Proof Let $\bar{x}$ be a (weakly) efficient solution in (CVP). Then there exist $\bar{\lambda} \in \mathbb{R}^k$, $\bar{\lambda} \geq 0$, and $\bar{\mu} \in \mathbb{R}^m$, $\bar{\mu} \geq 0$, such that the G-Karush-Kuhn-Tucker optimality conditions (5)-(7) are fulfilled at $\bar{x}$. Thus, by conditions (5)-(7), we conclude that $(\bar{x},\bar{\lambda},\bar{\mu})$ is feasible in (MWD). Suppose that $(\bar{x},\bar{\lambda},\bar{\mu})$ is not a (weakly) efficient solution in (MWD). Then there exists $(\tilde{x},\tilde{\lambda},\tilde{\mu}) \in W$ such that

f( x ˜ )(<)f( x ¯ ).

But the above inequality is a contradiction to weak duality. Thus, ( x ¯ , λ ¯ , μ ¯ ) is a (weakly) efficient solution in (MWD), and the optimal values in both problems are the same. □

Theorem 9 (Converse duality)

Let $(\bar{y},\bar{\lambda},\bar{\mu})$ be a (weakly) efficient solution for (MWD) such that $\bar{y} \in E$. Moreover, assume that $f_I$ and $g_J$ are (strictly) $(G_{f_I},\beta^{f_I})$-invex and (strictly) $(G_{g_J},\beta^{g_J})$-invex at $\bar{y}$ on $E \cup Y$ with respect to the same η, respectively, where $I = I(\bar{y})$ and $J = J(\bar{y})$. Then $\bar{y}$ is a (weakly) efficient solution in (CVP).

Proof Since $(\bar{y},\bar{\lambda},\bar{\mu})$ is a (weakly) efficient point in (MWD), it is feasible in (MWD). Hence, $\bar{y} \in X$, $\bar{\mu} \geq 0$, and the second constraint of (MWD) is fulfilled at $\bar{y}$. Thus, we have

$$\sum_{j=1}^m \bar{\mu}_j\, g_j(\bar{y}) = 0.$$

We proceed by contradiction. Suppose that $\bar{y}$ is not a (weakly) efficient solution in (CVP). Then there exists $\tilde{x} \in E$ such that

$$f(\tilde{x}) \leq f(\bar{y}) \quad \big(f(\tilde{x}) < f(\bar{y})\big).$$

Since $f_I$ and $g_J$ are (strictly) $(G_{f_I},\beta^{f_I})$-invex and (strictly) $(G_{g_J},\beta^{g_J})$-invex at $\bar{y}$ on $E \cup Y$ with respect to the same η, respectively, then, by Definition 2, the inequalities

hold for all xE. Hence, it is also true for x= x ˜ . Thus, we deduce that the inequality

$$\Big\langle \sum_{i=1}^k \bar{\lambda}_i\, G'_{f_i}(f_i(\bar{y}))\,\zeta_i^f + \sum_{j=1}^m \bar{\mu}_j\, G'_{g_j}(g_j(\bar{y}))\,\zeta_j^g,\ \eta(x,\bar{y})\Big\rangle < 0$$

holds for all $\zeta_i^f \in \partial f_i(\bar{y})$, $i \in K$, and $\zeta_j^g \in \partial g_j(\bar{y})$, $j \in M$, which contradicts the feasibility of $(\bar{y},\bar{\lambda},\bar{\mu})$ in (MWD). □

6 Conclusion

This paper presents a new type of generalized invexity, namely nondifferentiable $(G_f,\beta_f)$-invexity for a given locally Lipschitz function f defined on $X \subseteq \mathbb{R}^n$. This new invexity not only unifies but also extends the existing G-invexity and α-invexity presented in the literature. We have constructed an auxiliary mathematical programming problem (G-CVP) and discussed the relations between (G-CVP) and (CVP). With (G-CVP), we have proved the G-Karush-Kuhn-Tucker necessary optimality conditions for (CVP); the conditions established in this paper are more general than the classical Karush-Kuhn-Tucker necessary optimality conditions found in the literature. We have also proved the sufficiency of the introduced G-Karush-Kuhn-Tucker necessary optimality conditions for (CVP) under the new nondifferentiable vector invexity assumption. More precisely, this result has been proved for multiobjective programming problems in which the objective functions and the constraints are nondifferentiable vector generalized invex with respect to the same η defined in Section 2, but not necessarily with respect to the same G; see the following example. As applications of the new generalized invexity, we have established duality results for (CVP) and its Mond-Weir dual program. Note that many researchers have been interested in studying minimax programming or fractional programming with different generalized invexities; see [6, 8, 10, 15]. As pointed out by an anonymous referee, we will study minimax programming and fractional programming under the invexity proposed here in future work.

To illustrate the approach to optimality considered in the paper, we here give an example of a nonsmooth multiobjective programming problem involving nondifferentiable vector generalized invex functions with respect to the same function η defined in Section 2.

Example 6 Let $X = [-1/2, 1/2] \subset \mathbb{R}$. We consider the following problem (CVP):

where

It is not difficult to see that $f_1$, $f_2$, g are locally Lipschitz functions and, moreover, the set of all feasible solutions is $E = X = [-1/2, 1/2] \subset \mathbb{R}$. Note also that the feasible solution $\bar{x} = 0$ is an efficient solution of the considered nonsmooth vector optimization problem. From Example 3, f and g are nondifferentiable vector $(G_f,\beta_f)$-invex and $(G_g,\beta_g)$-invex with respect to the same η, respectively, where η, $\beta_f$, and $\beta_g$ are defined in Example 3. Also, it can be verified that the G-Karush-Kuhn-Tucker necessary optimality conditions (8)-(10) are satisfied at $\bar{x}$. Since all the hypotheses of Theorem 6 are fulfilled, $\bar{x}$ is an efficient solution of the considered multiobjective programming problem. Further, note that the sufficient optimality Theorem 20 in [19] for efficiency is not applicable to the considered multiobjective programming problem (CVP), since all functions involved in it are nondifferentiable.

References

  1. Hanson MA: On sufficiency of the Kuhn-Tucker conditions. J. Math. Anal. Appl. 1981, 80: 545–550. 10.1016/0022-247X(81)90123-2
  2. Ben-Israel A, Mond B: What is invexity? J. Aust. Math. Soc. Ser. B 1986, 28: 1–9. 10.1017/S0334270000005142
  3. Antczak T: (p,r)-invex sets and functions. J. Math. Anal. Appl. 2001, 263: 355–379. 10.1006/jmaa.2001.7574
  4. Antczak T: r-preinvexity and r-invexity in mathematical programming. Comput. Math. Appl. 2005, 50(3–4): 551–566. 10.1016/j.camwa.2005.01.024
  5. Mordukhovich BS: Multiobjective optimization problems with equilibrium constraints. Math. Program. 2009, 117(1): 331–354. 10.1007/s10107-007-0172-y
  6. Mishra SK, Shukla K: Nonsmooth minimax programming problems with V-r-invex functions. Optimization 2010, 59(1): 95–103. 10.1080/02331930903500308
  7. Mishra SK, Singh V, Wang SY, Lai KK: Optimality and duality for nonsmooth multiobjective optimization problems with generalized V-r-invexity. J. Appl. Anal. 2010, 16: 49–58.
  8. Ahmad I, Gupta SK, Kailey N, Agarwal RP: Duality in nondifferentiable minimax fractional programming with B-(p,r)-invexity. J. Inequal. Appl. 2011, 2011: Article ID 1.
  9. Ahmad I, Gupta SK, Jayswal A: On sufficiency and duality for nonsmooth multiobjective programming problems involving generalized V-r-invex functions. Nonlinear Anal., Theory Methods Appl. 2011, 74(17): 5920–5928. 10.1016/j.na.2011.05.058
  10. Soleimani-Damaneh M: Optimality for nonsmooth fractional multiple objective programming. Nonlinear Anal., Theory Methods Appl. 2008, 68(10): 2873–2878. 10.1016/j.na.2007.02.033
  11. Ansari QH, Yao J-C: Recent Developments in Vector Optimization. Springer, Berlin; 2012.
  12. Li J, Gao Y: Non-differentiable multiobjective mixed symmetric duality under generalized convexity. J. Inequal. Appl. 2011, 2011: Article ID 23. doi:10.1186/1029-242X-2011-23
  13. Gao Y: Higher-order symmetric duality for a class of multiobjective fractional programming problems. J. Inequal. Appl. 2012, 2012: Article ID 142. doi:10.1186/1029-242X-2012-142
  14. Gupta SK, Dangar D, Kumar S: Second-order duality for a nondifferentiable minimax fractional programming under generalized α-univexity. J. Inequal. Appl. 2012, 2012: Article ID 187. doi:10.1186/1029-242X-2012-187
  15. Antczak T: Minimax programming under (p,r)-invexity. Eur. J. Oper. Res. 2004, 158: 1–19. 10.1016/S0377-2217(03)00352-7
  16. Antczak T: Generalized B-(p,r)-invexity functions and nonlinear mathematical programming. Numer. Funct. Anal. Optim. 2009, 30(1–2): 1–22. 10.1080/01630560802678549
  17. Antczak T: V-r-invexity in multiobjective programming. J. Appl. Anal. 2005, 11(1): 63–80.
  18. Antczak T: Optimality and duality for nonsmooth multiobjective programming problems with V-r-invexity. J. Glob. Optim. 2009, 45(2): 319–334. 10.1007/s10898-008-9377-8
  19. Antczak T: New optimality conditions and duality results of G-type in differentiable mathematical programming. Nonlinear Anal., Theory Methods Appl. 2007, 66: 1617–1632. 10.1016/j.na.2006.02.013
  20. Antczak T: On G-invex multiobjective programming. Part I. Optimality. J. Glob. Optim. 2009, 43(1): 97–109. 10.1007/s10898-008-9299-5
  21. Antczak T: On G-invex multiobjective programming. Part II. Duality. J. Glob. Optim. 2009, 43(1): 111–140. 10.1007/s10898-008-9298-6
  22. Kim HJ, Seo YY, Kim DS: Optimality conditions in nondifferentiable G-invex multiobjective programming. J. Inequal. Appl. 2010, 2010: Article ID 172059. doi:10.1155/2010/172059
  23. Noor M: On generalized preinvex functions and monotonicities. J. Inequal. Pure Appl. Math. 2004, 5(4): 1–9.
  24. Clarke FH: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York; 1983.


Acknowledgements

The authors are grateful to the referees for their valuable suggestions that helped to improve the paper in its present form. This research is supported by the Science Foundation of Hanshan Normal University (LT200801).


Corresponding author

Correspondence to Xiaoling Liu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors carried out the proof. All authors conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Yuan, D., Liu, X., Yang, S. et al. Nondifferentiable mathematical programming involving (G,β)-invexity. J Inequal Appl 2012, 256 (2012). https://doi.org/10.1186/1029-242X-2012-256
