Optimality Conditions in Nondifferentiable G-Invex Multiobjective Programming
Journal of Inequalities and Applications volume 2010, Article number: 172059 (2010)
Abstract
We consider a class of nondifferentiable multiobjective programs with inequality and equality constraints in which each component of the objective function contains a term involving the support function of a compact convex set. We introduce G-Karush-Kuhn-Tucker conditions and G-Fritz John conditions for these nondifferentiable multiobjective programs. By using suitable G-invex functions, we establish G-Karush-Kuhn-Tucker and G-Fritz John necessary and sufficient optimality conditions for our nondifferentiable multiobjective programs. Our optimality conditions generalize and improve the results of Antczak (2009) to the nondifferentiable case.
1. Introduction and Preliminaries
A number of different forms of invexity have appeared in the literature. In [1], Martin defined Kuhn-Tucker invexity and weak duality invexity. In [2], Ben-Israel and Mond presented some new results for invex functions. Hanson [3] introduced the concept of invex functions, and Type I and Type II functions were introduced by Hanson and Mond [4]. Craven and Glover [5] established Kuhn-Tucker-type optimality conditions for cone-invex programs, and Jeyakumar and Mond [6] introduced the class of so-called V-invex functions and proved optimality results for a class of differentiable vector optimization problems under assumptions weaker than invexity. Egudo [7] established some duality results for differentiable multiobjective programming problems with invex functions. Kaul et al. [8] considered Wolfe-type and Mond-Weir-type duals and generalized the duality results of Weir [9] under weaker invexity assumptions.
Based on the paper by Mond and Schechter [10], Yang et al. [11] studied a class of nondifferentiable multiobjective programs. They replaced the objective function by the support function of a compact convex set, constructed a more general dual model for a class of nondifferentiable multiobjective programs, and established only weak duality theorems for efficient solutions under suitable weak convexity conditions. Subsequently, Kim et al. [12] established necessary and sufficient optimality conditions and duality results for weakly efficient solutions of nondifferentiable multiobjective fractional programming problems.
Recently, Antczak [13, 14] studied optimality and duality for G-invex multiobjective programming problems. He defined a new class of differentiable nonconvex vector-valued functions, namely, the vector G-invex (G-incave) functions with respect to $\eta$, and used vector G-invexity to develop optimality conditions for differentiable multiobjective programming problems with both inequality and equality constraints. Considering the concept of a (weak) Pareto solution, he established the so-called G-Karush-Kuhn-Tucker necessary optimality conditions for differentiable vector optimization problems under the Kuhn-Tucker constraint qualification.
In this paper, we extend the results in [13], which were established in the differentiable case, to the nondifferentiable case. We propose a class of nondifferentiable multiobjective programming problems in which each component of the objective function contains a term involving the support function of a compact convex set. We obtain G-Karush-Kuhn-Tucker necessary and sufficient conditions and G-Fritz John necessary and sufficient conditions for weak Pareto solutions. The necessary optimality theorems are established by using a theorem of the alternative [15] and the Mangasarian-Fromovitz constraint qualification [16]. In addition, we give sufficient optimality theorems under suitable G-invexity assumptions.
We provide some definitions and some results that we shall use in the sequel. Throughout the paper, the following convention will be used.
For any $x = (x_1, \dots, x_n)^{T}$, $y = (y_1, \dots, y_n)^{T} \in \mathbb{R}^{n}$, we write

$$ x < y \iff x_i < y_i,\ i = 1,\dots,n; \qquad x \leqq y \iff x_i \leq y_i,\ i = 1,\dots,n; \qquad x \leq y \iff x \leqq y \ \text{and}\ x \neq y. $$

Throughout the paper, we will use the same notation for row and column vectors when the interpretation is obvious. We say that a vector $v \in \mathbb{R}^{n}$ is negative if $v \leq 0$ and strictly negative if $v < 0$.
Definition 1.1.
A real-valued function $G$ is said to be strictly increasing if and only if, for all $x, y$ in its domain,

$$ x < y \;\Longrightarrow\; G(x) < G(y). $$
Let $f = (f_1,\dots,f_k) : X \to \mathbb{R}^{k}$ be a vector-valued differentiable function defined on a nonempty open set $X \subseteq \mathbb{R}^{n}$, and let $I_{f_i}(X)$, $i = 1,\dots,k$, denote the range of $f_i$, that is, the image of $X$ under $f_i$.
Definition 1.2 (see [11]).
Let $C$ be a compact convex set in $\mathbb{R}^{n}$. The support function $s(\cdot \mid C)$ of $C$ is defined by

$$ s(x \mid C) = \max\{\, x^{T} y : y \in C \,\}. $$

The support function $s(\cdot \mid C)$, being convex and everywhere finite, has a subdifferential, that is, there exists $z$ such that

$$ s(y \mid C) \geq s(x \mid C) + z^{T}(y - x) \quad \text{for all } y. $$

Equivalently,

$$ z^{T} x = s(x \mid C). $$

The subdifferential of $s(\cdot \mid C)$ at $x$ is given by

$$ \partial s(x \mid C) = \{\, z \in C : z^{T} x = s(x \mid C) \,\}. $$
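As a standard illustration (a hypothetical example, not taken from the original text), let $C$ be the closed Euclidean unit ball. Then the support function is the Euclidean norm:

$$ C = \{\, y \in \mathbb{R}^{n} : \|y\|_2 \leq 1 \,\} \quad\Longrightarrow\quad s(x \mid C) = \max_{\|y\|_2 \leq 1} x^{T} y = \|x\|_2, \qquad \partial s(x \mid C) = \begin{cases} \{\, x / \|x\|_2 \,\}, & x \neq 0, \\ C, & x = 0. \end{cases} $$

In particular, objective terms of the form $s(x \mid C_i)$ cover norm terms such as $\|x\|_2$, which are convex but not differentiable at the origin.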
Now, in the natural way, we generalize the definition of a real-valued G-invex function to the vector-valued case. Let $f = (f_1,\dots,f_k) : X \to \mathbb{R}^{k}$ be a vector-valued differentiable function defined on a nonempty open set $X \subseteq \mathbb{R}^{n}$, and let $I_{f_i}(X)$, $i = 1,\dots,k$, denote the range of $f_i$, that is, the image of $X$ under $f_i$.
Definition 1.3.
Let $f = (f_1,\dots,f_k) : X \to \mathbb{R}^{k}$ be a vector-valued differentiable function defined on a nonempty set $X \subseteq \mathbb{R}^{n}$ and $u \in X$. If there exist a differentiable vector-valued function $G_f = (G_{f_1},\dots,G_{f_k})$ such that any of its components $G_{f_i} : I_{f_i}(X) \to \mathbb{R}$ is a strictly increasing function on its domain and a vector-valued function $\eta : X \times X \to \mathbb{R}^{n}$ such that, for all $x \in X$ and for any $i = 1,\dots,k$,

$$ G_{f_i}\bigl(f_i(x)\bigr) - G_{f_i}\bigl(f_i(u)\bigr) \;\geq\; G_{f_i}'\bigl(f_i(u)\bigr)\,\nabla f_i(u)\,\eta(x,u) \qquad (1.7) $$

(with strict inequality whenever $x \neq u$ in the strict case), then $f$ is said to be a (strictly) vector $G_f$-invex function at $u$ on $X$ (with respect to $\eta$) (or, shortly, a (strictly) $G_f$-invex function at $u$ on $X$). If (1.7) is satisfied for each $u \in X$, then $f$ is vector $G_f$-invex on $X$ with respect to $\eta$.
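To illustrate Definition 1.3 with a concrete (hypothetical, not taken from the paper) scalar example, take $k = 1$, $X = \mathbb{R}$, $f(x) = e^{x^{2}}$, $G_f(t) = \ln t$, which is strictly increasing and differentiable on the range $I_f(X) = [1, \infty)$, and $\eta(x, u) = x - u$. Then inequality (1.7) becomes

$$ G_f(f(x)) - G_f(f(u)) = x^{2} - u^{2} \;\geq\; G_f'(f(u))\, f'(u)\, \eta(x, u) = e^{-u^{2}} \cdot 2u\, e^{u^{2}} \cdot (x - u) = 2u(x - u), $$

which holds for all $x, u \in \mathbb{R}$ because $x^{2} - u^{2} - 2u(x - u) = (x - u)^{2} \geq 0$. Hence $f$ is $G_f$-invex on $\mathbb{R}$ with respect to this $\eta$.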
Lemma 1.4 (see [13]).
In order to define an analogous class of (strictly) vector $G_f$-incave functions with respect to $\eta$, the direction of the inequality in the definition of a $G_f$-invex function should be changed to the opposite one.
We consider the following multiobjective programming problem:

$$ \text{(NMP)} \qquad \begin{aligned} \text{Minimize} \quad & \bigl(\, f_1(x) + s(x \mid C_1),\ \dots,\ f_k(x) + s(x \mid C_k) \,\bigr) \\ \text{subject to} \quad & g_j(x) \leq 0, \quad j = 1,\dots,m, \\ & h_t(x) = 0, \quad t = 1,\dots,q, \qquad x \in X, \end{aligned} $$

where $f_i$, $i = 1,\dots,k$, $g_j$, $j = 1,\dots,m$, and $h_t$, $t = 1,\dots,q$, are differentiable functions on a nonempty open set $X \subseteq \mathbb{R}^{n}$, and each $C_i$, $i = 1,\dots,k$, is a compact convex set in $\mathbb{R}^{n}$. Moreover, $G_{f_i}$, $i = 1,\dots,k$, are differentiable real-valued strictly increasing functions defined on the range of $f_i(\cdot) + s(\cdot \mid C_i)$, $G_{g_j}$, $j = 1,\dots,m$, are differentiable real-valued strictly increasing functions defined on the range of $g_j$, and $G_{h_t}$, $t = 1,\dots,q$, are differentiable real-valued strictly increasing functions defined on the range of $h_t$. Let $D = \{\, x \in X : g_j(x) \leq 0,\ j = 1,\dots,m,\ h_t(x) = 0,\ t = 1,\dots,q \,\}$ be the set of all feasible solutions for problem (NMP). Further, we denote by $J(\bar{x}) = \{\, j : g_j(\bar{x}) = 0 \,\}$ the set of indices of inequality constraints active at $\bar{x}$, and we also consider the set of objective function indices for which the corresponding Lagrange multiplier is not equal to zero. For such optimization problems, minimization means in general obtaining weak Pareto optimal solutions in the sense of Definition 1.5 below.
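For concreteness, here is a small hypothetical instance of (NMP), not taken from the paper, with $n = 2$, $k = 2$, $m = 1$, and $q = 1$:

$$ \text{Minimize } \bigl(\, x_1^{2} + x_2^{2} + \|x\|_2,\ \ e^{x_1} \,\bigr) \quad \text{subject to } x_1 + x_2 - 1 \leq 0, \quad x_1 - x_2 = 0, \quad x \in \mathbb{R}^{2}, $$

obtained by taking $f_1(x) = x_1^{2} + x_2^{2}$, $f_2(x) = e^{x_1}$, $C_1$ the Euclidean unit ball (so that $s(x \mid C_1) = \|x\|_2$), and $C_2 = \{0\}$ (so that $s(x \mid C_2) = 0$). The first objective component is not differentiable at the origin, which is exactly the situation the support-function formulation is designed to cover.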
Definition 1.5.
A feasible point $\bar{x} \in D$ is said to be a weak Pareto solution (a weakly efficient solution, a weak minimum) of (NMP) if there exists no other $x \in D$ such that

$$ f_i(x) + s(x \mid C_i) < f_i(\bar{x}) + s(\bar{x} \mid C_i), \qquad i = 1,\dots,k. $$
Definition 1.6 (see [17]).
Let $W$ be a given set in $\mathbb{R}^{k}$ ordered by $\leq$ or by $<$. Specifically, we call the minimal element of $W$ defined by $\leq$ a minimal vector, and that defined by $<$ a weak minimal vector. Formally speaking, a vector $w^{*} \in W$ is called a minimal vector in $W$ if there exists no vector $w$ in $W$ such that $w \leq w^{*}$; it is called a weak minimal vector if there exists no vector $w$ in $W$ such that $w < w^{*}$.
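As a quick illustration of the distinction (a hypothetical example, not from the paper), consider

$$ W = \{\, (0,0),\ (0,1),\ (1,1) \,\} \subset \mathbb{R}^{2}. $$

The vector $(0,0)$ is a minimal (and hence weak minimal) vector. The vector $(0,1)$ is not minimal, since $(0,0) \leq (0,1)$, but it is a weak minimal vector, because no element of $W$ is strictly smaller in every component. The vector $(1,1)$ is not even weak minimal, since $(0,0) < (1,1)$.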
By using the result of Antczak [13] and the definition of a weak minimal vector, we obtain the following proposition.
Proposition 1.7.
Let $\bar{x}$ be a feasible solution in a multiobjective programming problem and let $G_{f_i}$, $i = 1,\dots,k$, be a continuous real-valued strictly increasing function defined on the range of $f_i(\cdot) + s(\cdot \mid C_i)$. Further, we denote

$$ W = \bigl\{\, \bigl(G_{f_1}(f_1(x) + s(x \mid C_1)), \dots, G_{f_k}(f_k(x) + s(x \mid C_k))\bigr) : x \in D \,\bigr\} $$

and $\bar{w} = \bigl(G_{f_1}(f_1(\bar{x}) + s(\bar{x} \mid C_1)), \dots, G_{f_k}(f_k(\bar{x}) + s(\bar{x} \mid C_k))\bigr)$. Then, $\bar{x}$ is a weak Pareto solution in the set of all feasible solutions $D$ for the multiobjective programming problem if and only if the corresponding vector $\bar{w}$ is a weak minimal vector in the set $W$.
Proof.
Let $\bar{x}$ be a weak Pareto solution. Then there does not exist $x \in D$ such that

By the strict monotonicity of each $G_{f_i}$ applied to the terms involving the support functions, we have

Therefore, $\bar{w}$ is a weak minimal vector in the set $W$. The converse part is proved similarly.
Lemma 1.8 (see [13]).
In the case when $G_{f_i}(a) = a$ for any $i = 1,\dots,k$ (that is, each $G_{f_i}$ is the identity map), we obtain the definition of a vector-valued invex function.
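For clarity, this reduction can be written out explicitly in the notation of Definition 1.3: since $G_{f_i}(a) = a$ gives $G_{f_i}'(a) = 1$, inequality (1.7) becomes the classical invexity inequality

$$ f_i(x) - f_i(u) \;\geq\; \nabla f_i(u)\, \eta(x, u), \qquad i = 1, \dots, k. $$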
2. Optimality Conditions
In this section, we establish G-Fritz John and G-Karush-Kuhn-Tucker necessary and sufficient conditions for a weak Pareto optimal point of (NMP).
Theorem 2.1 (G-Fritz John Necessary Optimality Conditions).
Suppose that $G_{f_i}$, $i = 1,\dots,k$, are differentiable real-valued strictly increasing functions defined on the range of $f_i(\cdot) + s(\cdot \mid C_i)$, that $G_{g_j}$, $j = 1,\dots,m$, are differentiable real-valued strictly increasing functions defined on the range of $g_j$, and that $G_{h_t}$, $t = 1,\dots,q$, are differentiable real-valued strictly increasing functions defined on the range of $h_t$. Let $\bar{x} \in D$ be a weak Pareto optimal point in problem (NMP). Then there exist $\bar{\lambda} \in \mathbb{R}^{k}$, $\bar{\mu} \in \mathbb{R}^{m}$, and $\bar{\xi} \in \mathbb{R}^{q}$, not all zero, such that

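The conditions referred to above are, in hedged outline only, of the following type (a sketch in the spirit of [13], adapted to the support-function terms of (NMP); the exact form, numbering, and normalization in the original statement may differ, and the vectors $w_i$ are an assumed ingredient of the conditions):

$$ \sum_{i=1}^{k} \bar{\lambda}_i\, G_{f_i}'\bigl(f_i(\bar{x}) + \bar{x}^{T} w_i\bigr)\bigl(\nabla f_i(\bar{x}) + w_i\bigr) + \sum_{j=1}^{m} \bar{\mu}_j\, G_{g_j}'\bigl(g_j(\bar{x})\bigr)\nabla g_j(\bar{x}) + \sum_{t=1}^{q} \bar{\xi}_t\, G_{h_t}'\bigl(h_t(\bar{x})\bigr)\nabla h_t(\bar{x}) = 0, $$

$$ \bar{\mu}_j\, G_{g_j}\bigl(g_j(\bar{x})\bigr) = 0, \quad j = 1,\dots,m, \qquad w_i \in C_i, \quad \bar{x}^{T} w_i = s(\bar{x} \mid C_i), \quad i = 1,\dots,k, $$

$$ \bar{\lambda}_i \geq 0, \quad \bar{\mu}_j \geq 0 \ \text{ for all } i, j, \qquad (\bar{\lambda}, \bar{\mu}, \bar{\xi}) \neq (0, 0, 0). $$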
Proof.
Let $i \in \{1,\dots,k\}$. Since $C_i$ is convex and compact,

is finite. Also, there exists $w_i \in C_i$ such that

Since $\bar{x}$ is a weak Pareto optimal point in (NMP), the system

has no solution $x \in X$. By [15, Corollary ], there exist $\bar{\lambda}$, $\bar{\mu}$, and $\bar{\xi}$, not all zero, such that for any $x \in X$,

Assume to the contrary that this is not the case. Then, by the separation theorem, there exists a vector such that

This contradicts (2.5).
Letting , we get

Since , we obtain the desired result.
Theorem 2.2 (G-Karush-Kuhn-Tucker Necessary Optimality Conditions).
Suppose that $G_{f_i}$, $i = 1,\dots,k$, are differentiable real-valued strictly increasing functions defined on the range of $f_i(\cdot) + s(\cdot \mid C_i)$, that $G_{g_j}$, $j = 1,\dots,m$, are differentiable real-valued strictly increasing functions defined on the range of $g_j$, and that $G_{h_t}$, $t = 1,\dots,q$, are differentiable real-valued strictly increasing functions defined on the range of $h_t$. Let $\bar{x} \in D$, and suppose that the gradients $\nabla h_t(\bar{x})$, $t = 1,\dots,q$, are linearly independent. Moreover, we assume that there exists $z \in \mathbb{R}^{n}$ such that $\nabla g_j(\bar{x})\, z < 0$ for all $j \in J(\bar{x})$ and $\nabla h_t(\bar{x})\, z = 0$ for all $t = 1,\dots,q$. If $\bar{x}$ is a weak Pareto optimal point in problem (NMP), then there exist $\bar{\lambda} \in \mathbb{R}^{k}$, $\bar{\mu} \in \mathbb{R}^{m}$, and $\bar{\xi} \in \mathbb{R}^{q}$ such that

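In hedged outline, the G-Karush-Kuhn-Tucker conditions consist of the same stationarity, complementarity, and support-function relations sketched after Theorem 2.1, with the multiplier requirement strengthened so that the objective multipliers do not all vanish; for instance (again, the original normalization may differ):

$$ \bar{\lambda}_i \geq 0,\ i = 1,\dots,k, \qquad \bar{\lambda} \neq 0, \qquad \bar{\mu}_j \geq 0,\ j = 1,\dots,m. $$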
Proof.
Since $\bar{x}$ is a weak Pareto optimal point of (NMP), by Theorem 2.1 there exist $\bar{\lambda}$, $\bar{\mu}$, and $\bar{\xi}$, not all zero, such that

Assume that there exists $z \in \mathbb{R}^{n}$ such that $\nabla g_j(\bar{x})\, z < 0$ for $j \in J(\bar{x})$ and $\nabla h_t(\bar{x})\, z = 0$ for $t = 1,\dots,q$. We claim that $\bar{\lambda} \neq 0$. Assume to the contrary that $\bar{\lambda} = 0$. If, in addition, $\bar{\mu} = 0$, then (2.9) reduces to a vanishing linear combination of the gradients $\nabla h_t(\bar{x})$, $t = 1,\dots,q$. Since $\nabla h_t(\bar{x})$, $t = 1,\dots,q$, are linearly independent, this system has only the trivial solution $\bar{\xi} = 0$, which contradicts the fact that $(\bar{\lambda}, \bar{\mu}, \bar{\xi}) \neq 0$. So $\bar{\mu} \neq 0$. Multiplying (2.9) by $z$ and using $\nabla g_j(\bar{x})\, z < 0$ for $j \in J(\bar{x})$ together with $\nabla h_t(\bar{x})\, z = 0$, we obtain a strictly negative quantity that must equal zero. This is a contradiction. Hence $\bar{\lambda} \neq 0$. Indeed, it is sufficient only to show that there exist $\lambda$, $\mu$, and $\xi$ satisfying the G-Karush-Kuhn-Tucker conditions; we set

It is not difficult to see that the G-Karush-Kuhn-Tucker necessary optimality conditions are satisfied with the Lagrange multipliers $\lambda$, $\mu$, and $\xi$ given by (2.10).
We denote by $T^{+}$ and $T^{-}$ the sets of equality constraint indices for which the corresponding Lagrange multiplier is positive and negative, respectively, that is, $T^{+} = \{\, t \in \{1,\dots,q\} : \bar{\xi}_t > 0 \,\}$ and $T^{-} = \{\, t \in \{1,\dots,q\} : \bar{\xi}_t < 0 \,\}$.
Theorem 2.3 (G-Fritz John Sufficient Optimality Conditions).
Let a feasible point $\bar{x} \in D$, together with multipliers $\bar{\lambda}$, $\bar{\mu}$, $\bar{\xi}$ and vectors $w_i \in C_i$, $i = 1,\dots,k$, satisfy the following G-Fritz John optimality conditions (2.11)–(2.14):




Further, assume that $\bigl(f_1(\cdot) + (\cdot)^{T} w_1, \dots, f_k(\cdot) + (\cdot)^{T} w_k\bigr)$ is vector $G_f$-invex with respect to $\eta$ at $\bar{x}$, $g$ is strictly $G_g$-invex with respect to $\eta$ at $\bar{x}$, $h_t$, $t \in T^{+}$, is $G_{h_t}$-invex with respect to $\eta$ at $\bar{x}$, and $h_t$, $t \in T^{-}$, is $G_{h_t}$-incave with respect to $\eta$ at $\bar{x}$. Moreover, suppose that $G_{g_j}(0) = 0$ for $j = 1,\dots,m$ and $G_{h_t}(0) = 0$ for $t = 1,\dots,q$. Then $\bar{x}$ is a weak Pareto optimal point in problem (NMP).
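In outline (a hedged summary using the notation of the sketch after Theorem 2.1): if $\bar{x}$ were not a weak Pareto optimal point, the G-invexity and G-incavity assumptions, the feasibility of the competing point, and the signs of the multipliers would combine the individual inequalities in the proof below into

$$ \Bigl[\,\sum_{i=1}^{k} \bar{\lambda}_i\, G_{f_i}'\bigl(f_i(\bar{x}) + \bar{x}^{T} w_i\bigr)\bigl(\nabla f_i(\bar{x}) + w_i\bigr) + \sum_{j=1}^{m} \bar{\mu}_j\, G_{g_j}'\bigl(g_j(\bar{x})\bigr)\nabla g_j(\bar{x}) + \sum_{t=1}^{q} \bar{\xi}_t\, G_{h_t}'\bigl(h_t(\bar{x})\bigr)\nabla h_t(\bar{x})\,\Bigr]\, \eta(x, \bar{x}) \;<\; 0, $$

which is incompatible with the stationarity condition (2.11), whose left-hand side vanishes; this is the contradiction reached at the end of the proof.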
Proof.
Suppose that $\bar{x}$ is not a weak Pareto optimal point in problem (NMP). Then there exists $x \in D$ such that $f_i(x) + s(x \mid C_i) < f_i(\bar{x}) + s(\bar{x} \mid C_i)$ for all $i = 1,\dots,k$. Since

Thus we get

By assumption, $\bigl(f_1(\cdot) + (\cdot)^{T} w_1, \dots, f_k(\cdot) + (\cdot)^{T} w_k\bigr)$ is vector $G_f$-invex with respect to $\eta$ at $\bar{x}$ on $D$. Then, by Definition 1.3, for any $i = 1,\dots,k$,

Hence by (2.16) and (2.17), we obtain

Since $\bar{x}$, together with $\bar{\lambda}$, $\bar{\mu}$, $\bar{\xi}$, and $w$, satisfies the G-Fritz John conditions, we have

Since $g$ is strictly $G_g$-invex with respect to $\eta$ at $\bar{x}$ on $D$,

Thus, by the feasibility of $x$ for (NMP),

Then, (2.12) implies

By assumption, $h_t$, $t \in T^{+}$, is $G_{h_t}$-invex with respect to $\eta$ at $\bar{x}$ on $D$, and $h_t$, $t \in T^{-}$, is $G_{h_t}$-incave with respect to $\eta$ at $\bar{x}$ on $D$. Then, by Definition 1.3, we have

Thus, for any $t = 1,\dots,q$,

Since $\bar{\xi}_t > 0$ for $t \in T^{+}$ and $\bar{\xi}_t < 0$ for $t \in T^{-}$, the inequality above implies

Adding both sides of inequalities (2.19), (2.22), and (2.25), and using (2.14), we obtain

which contradicts (2.11). Hence, $\bar{x}$ is a weak Pareto optimal point for (NMP).
Theorem 2.4 (G-Karush-Kuhn-Tucker Sufficient Optimality Conditions).
Let a feasible point $\bar{x} \in D$, together with multipliers $\bar{\lambda}$, $\bar{\mu}$, $\bar{\xi}$ and vectors $w_i \in C_i$, $i = 1,\dots,k$, satisfy the following G-Karush-Kuhn-Tucker optimality conditions (2.27)–(2.30):




Further, assume that $\bigl(f_1(\cdot) + (\cdot)^{T} w_1, \dots, f_k(\cdot) + (\cdot)^{T} w_k\bigr)$ is vector $G_f$-invex with respect to $\eta$ at $\bar{x}$, $g$ is strictly $G_g$-invex with respect to $\eta$ at $\bar{x}$, $h_t$, $t \in T^{+}$, is $G_{h_t}$-invex with respect to $\eta$ at $\bar{x}$, and $h_t$, $t \in T^{-}$, is $G_{h_t}$-incave with respect to $\eta$ at $\bar{x}$. Moreover, suppose that $G_{g_j}(0) = 0$ for $j = 1,\dots,m$ and $G_{h_t}(0) = 0$ for $t = 1,\dots,q$. Then $\bar{x}$ is a weak Pareto optimal point in problem (NMP).
Proof.
Suppose that $\bar{x}$ is not a weak Pareto optimal point in problem (NMP). Then there exists $x \in D$ such that $f_i(x) + s(x \mid C_i) < f_i(\bar{x}) + s(\bar{x} \mid C_i)$ for all $i = 1,\dots,k$. Since

Thus we get

By assumption, $\bigl(f_1(\cdot) + (\cdot)^{T} w_1, \dots, f_k(\cdot) + (\cdot)^{T} w_k\bigr)$ is vector $G_f$-invex with respect to $\eta$ at $\bar{x}$ on $D$. Then, by Definition 1.3, for any $i = 1,\dots,k$,

Hence by (2.32) and (2.33), we obtain

Since $\bar{x}$, together with $\bar{\lambda}$, $\bar{\mu}$, $\bar{\xi}$, and $w$, satisfies the G-Karush-Kuhn-Tucker conditions, we have

Since $g$ is strictly $G_g$-invex with respect to $\eta$ at $\bar{x}$ on $D$,

Thus, by the feasibility of $x$ for (NMP),

Then, (2.28) and (2.30) imply

By assumption, $h_t$, $t \in T^{+}$, is $G_{h_t}$-invex with respect to $\eta$ at $\bar{x}$ on $D$, and $h_t$, $t \in T^{-}$, is $G_{h_t}$-incave with respect to $\eta$ at $\bar{x}$ on $D$. Then, by Definition 1.3, we have

Thus, for any $t = 1,\dots,q$,

Since $\bar{\xi}_t > 0$ for $t \in T^{+}$ and $\bar{\xi}_t < 0$ for $t \in T^{-}$, the inequality above implies

Adding both sides of inequalities (2.35), (2.38), and (2.41), we obtain

which contradicts (2.27). Hence, $\bar{x}$ is a weak Pareto optimal point for (NMP).
References
Martin DH: The essence of invexity. Journal of Optimization Theory and Applications 1985, 47(1):65–76. 10.1007/BF00941316
Ben-Israel A, Mond B: What is invexity? Journal of the Australian Mathematical Society. Series B 1986, 28(1):1–9. 10.1017/S0334270000005142
Hanson MA: On sufficiency of the Kuhn-Tucker conditions. Journal of Mathematical Analysis and Applications 1981, 80(2):545–550. 10.1016/0022-247X(81)90123-2
Hanson MA, Mond B: Necessary and sufficient conditions in constrained optimization. Mathematical Programming 1987, 37(1):51–58. 10.1007/BF02591683
Craven BD, Glover BM: Invex functions and duality. Journal of the Australian Mathematical Society. Series A 1985, 39(1):1–20. 10.1017/S1446788700022126
Jeyakumar V, Mond B: On generalised convex mathematical programming. Journal of the Australian Mathematical Society. Series B 1992, 34(1):43–53. 10.1017/S0334270000007372
Egudo RR: Efficiency and generalized convex duality for multiobjective programs. Journal of Mathematical Analysis and Applications 1989, 138(1):84–94. 10.1016/0022-247X(89)90321-1
Kaul RN, Suneja SK, Srivastava MK: Optimality criteria and duality in multiple-objective optimization involving generalized invexity. Journal of Optimization Theory and Applications 1994, 80(3):465–482. 10.1007/BF02207775
Weir T: A note on invex functions and duality in multiple objective optimization. Opsearch 1988, 25(2):98–104.
Mond B, Schechter M: Nondifferentiable symmetric duality. Bulletin of the Australian Mathematical Society 1996, 53(2):177–188. 10.1017/S0004972700016890
Yang XM, Teo KL, Yang XQ: Duality for a class of nondifferentiable multiobjective programming problems. Journal of Mathematical Analysis and Applications 2000, 252(2):999–1005. 10.1006/jmaa.2000.6991
Kim DS, Kim SJ, Kim MH: Optimality and duality for a class of nondifferentiable multiobjective fractional programming problems. Journal of Optimization Theory and Applications 2006, 129(1):131–146. 10.1007/s10957-006-9048-1
Antczak T: On G-invex multiobjective programming. I. Optimality. Journal of Global Optimization 2009, 43(1):97–109. 10.1007/s10898-008-9299-5
Antczak T: On G-invex multiobjective programming. II. Duality. Journal of Global Optimization 2009, 43(1):111–140. 10.1007/s10898-008-9298-6
Mangasarian OL: Nonlinear Programming. McGraw-Hill, New York, NY, USA; 1969:xiii+220.
Clarke FH: Optimization and Nonsmooth Analysis, Canadian Mathematical Society Series of Monographs and Advanced Texts. John Wiley & Sons, New York, NY, USA; 1983:xiii+308.
Lin JG: Maximal vectors and multi-objective optimization. Journal of Optimization Theory and Applications 1976, 18(1):41–64. 10.1007/BF00933793
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Kim, H., Seo, Y. & Kim, D. Optimality Conditions in Nondifferentiable G-Invex Multiobjective Programming. J Inequal Appl 2010, 172059 (2010). https://doi.org/10.1155/2010/172059
Keywords
- Sufficient Optimality Condition
- Multiobjective Programming Problem
- Invex Function
- Multiobjective Fractional Programming
- Multiobjective Fractional Programming Problem