Nondifferentiable mathematical programming involving (G, β)-invexity
Journal of Inequalities and Applications volume 2012, Article number: 256 (2012)
Abstract
In this paper, we define a new vector generalized convexity, namely nondifferentiable vector ({G}_{f},{\beta}_{f})-invexity, for a given locally Lipschitz vector function f. Based on this new nondifferentiable vector generalized invexity, we deal with nondifferentiable nonlinear programming problems under some assumptions. Firstly, we present G-Karush-Kuhn-Tucker necessary optimality conditions for nonsmooth mathematical programming problems. Under the new vector generalized invexity assumption, we also obtain G-Karush-Kuhn-Tucker sufficient optimality conditions for the same programming problems. Moreover, we establish duality results for this kind of multiobjective programming problems. Finally, a suitable example illustrates that the new optimality results are more useful for some classes of optimization problems than the optimality conditions with invex functions.
MSC:90C26.
1 Introduction
Convexity plays a central role in many aspects of mathematical programming, including the analysis of stability, sufficient optimality conditions, and duality. Based on convexity assumptions, nonlinear programming problems can be solved efficiently. In order to treat many practical problems, there have been many attempts to weaken the convexity assumptions, and many concepts of generalized convex functions have been introduced and applied to mathematical programming problems in the literature [1–4]. One of these concepts, invexity, was introduced by Hanson in [1]. He showed that invexity shares a key property of convexity in mathematical programming: the Karush-Kuhn-Tucker conditions are sufficient for global optimality of nonlinear programming under invexity assumptions. Ben-Israel and Mond [2] also introduced the concept of preinvex functions, a special case of invexity. Many researchers, such as Mordukhovich [5], Mishra [6, 7], Ahmad [8, 9], Soleimani-Damaneh [10], and others, have contributed to this active topic. Furthermore, Ansari and Yao [11] edited a book which provides a good review of the different variants of invexity. With generalized convexity, sufficiency and duality results can be obtained; we refer to [12–14] and the references therein for more results.
In [3], Antczak introduced new definitions of a p-invex set and a (p,r)-preinvex function, which generalize the concepts in [2]. He also discussed differentiable and nondifferentiable nonlinear programming problems involving (p,r)-invexity-type functions in [15]. With respect to fixed functions η and b, Antczak extended (p,r)-invexity to B-(p,r)-invexity and generalized B-(p,r)-invexity in [16]. Ahmad et al. [8] derived sufficient conditions for an optimal solution to the minimax fractional problem and then established weak, strong, and strict converse duality theorems for the problem and its dual under B-(p,r)-invexity assumptions. Antczak [4] considered a special kind of (p,r)-invexity, (0,r)-invexity, which is called r-invexity in the cases of differentiability and nondifferentiability. Later, Antczak [17] generalized the concept of (scalar) differentiable r-invex functions to the vectorial case and defined a class of V-r-invex functions. In [18], Antczak further generalized the notion of V-r-invexity to the nondifferentiable case. Note that other researchers have also studied mathematical programming involving V-r-invex functions; see [6, 7, 9] and the references therein.
To further enlarge the class of mathematical models for which the theoretical tools hold, Antczak extended invexity to G-invexity [19] for scalar differentiable functions. In a natural way, he extended the definition of G-invexity to the case of differentiable vector-valued functions. He [20] also applied this vector G-invexity to develop optimality conditions for differentiable multiobjective programming problems with both inequality and equality constraints, and established the so-called G-Karush-Kuhn-Tucker necessary optimality conditions for this kind of programming under the Kuhn-Tucker constraint qualification. With vector G-invexity, he proved new duality results for nonlinear differentiable multiobjective programming problems, and a number of new vector dual problems to the primal one, such as the G-Mond-Weir, G-Wolfe, and G-mixed dual vector problems, were defined in [21]. Further, Kim et al. [22] considered a special kind of nondifferentiable multiobjective programming with G-invexity.
Motivated by [20, 21, 23], we enlarge the class of mathematical models for which the theoretical tools hold in this paper. Here, we present a new generalized convexity, namely nondifferentiable vector ({G}_{f},{\beta}_{f})-invexity, for a given locally Lipschitz vector function f. We point out that it is worthwhile to consider nondifferentiable vector ({G}_{f},{\beta}_{f})-invexity, for the following reasons:

In some cases, choosing G suitably can simplify the computation of the Clarke derivative of f; see Examples 1 and 2;

The concept of ({G}_{f},{\beta}_{f})-invexity not only unifies but also extends the concepts of α-invexity and G-invexity; see Example 3. Moreover, ({G}_{f},{\beta}_{f})-invexity, together with Lemma 1, makes the choice of a vector-valued function η easier; see Example 3.
Based on the new nondifferentiable vector generalized invexity, we deal with nonlinear programming problems under some assumptions. The rest of the paper is organized as follows. In Section 2, we present the concept of nondifferentiable vector ({G}_{f},{\beta}_{f})-invexity pertaining to a given locally Lipschitz vector function f. For a given function f, we discuss the relation between ({G}_{f},{\beta}_{f})-invexity and (b,{G}_{f})-preinvexity in Section 3. In Section 4, we present the G-Karush-Kuhn-Tucker necessary optimality conditions for nondifferentiable mathematical programming problems. Moreover, under this nondifferentiable vector generalized invexity assumption, we prove the G-Karush-Kuhn-Tucker sufficient optimality conditions for the same problems. In Section 5, we establish duality results for this kind of nonsmooth multiobjective programming problems as applications of the new generalized invexity. In Section 6, we give our conclusion. Moreover, we present a suitable example which illustrates that the optimality results in this paper are more useful for some classes of optimization problems than the optimality conditions with existing invexity; see Example 6.
2 Notations and definitions
In this section, we provide some notations and results about nondifferentiable vector ({G}_{f},{\beta}_{f})-invex functions. The following convention will be used throughout the paper. For any x={({x}_{1},{x}_{2},\dots ,{x}_{n})}^{T}, y={({y}_{1},{y}_{2},\dots ,{y}_{n})}^{T}: x=y if and only if {x}_{i}={y}_{i} for all i=1,\dots ,n; x\leqq y if and only if {x}_{i}\le {y}_{i} for all i=1,\dots ,n; x\le y if and only if x\leqq y and x\ne y; x<y if and only if {x}_{i}<{y}_{i} for all i=1,\dots ,n.
For any function f defined on a nonempty set X\subset {\mathbb{R}}^{n}, {I}_{f}(X) denotes the range of f or the image of X under f. Moreover, let K=\{1,\dots ,k\} and M=\{1,2,\dots ,m\}.
Definition 1 Let d\in {\mathbb{R}}^{n}, X be a nonempty set of {\mathbb{R}}^{n} and f:X\to \mathbb{R}. If

{f}^{0}(x;d):=\underset{y\to x,\ \lambda \downarrow 0}{\limsup}\ \frac{f(y+\lambda d)-f(y)}{\lambda}

exists, then {f}^{0}(x;d) is called the Clarke derivative of f at x in the direction d. If this limit superior exists for all d\in {\mathbb{R}}^{n}, then f is called Clarke differentiable at x. The set

\partial f(x):=\{\zeta \in {\mathbb{R}}^{n}:{f}^{0}(x;d)\ge {\zeta}^{T}d\ \text{for all}\ d\in {\mathbb{R}}^{n}\}

is called the Clarke subdifferential of f at x.
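To make Definition 1 concrete, the limit superior can be approximated numerically by a finite supremum of difference quotients over points y near x and small steps λ > 0. The following Python sketch (an illustration added here, not part of the original paper; the grid sizes are arbitrary choices) recovers {f}^{0}(0;d)=|d| for the nondifferentiable function f(x)=|x|:

```python
def clarke_derivative(f, x, d, eps=1e-4, n=200):
    """Numerically approximate the Clarke derivative f^0(x; d):
    the limsup of (f(y + t*d) - f(y)) / t as y -> x and t -> 0+.
    Here we take a finite sup over y in [x-eps, x+eps] and small t > 0."""
    best = float("-inf")
    for i in range(n + 1):
        y = x - eps + 2 * eps * i / n          # sample y near x
        for t in (eps, eps / 10, eps / 100):   # sample small steps t
            q = (f(y + t * d) - f(y)) / t      # difference quotient
            best = max(best, q)
    return best

# f(x) = |x| is Lipschitz but not differentiable at 0; f^0(0; d) = |d|.
print(clarke_derivative(abs, 0.0, 1.0))   # close to 1
print(clarke_derivative(abs, 0.0, -2.0))  # close to 2
```

The finite supremum is a lower bound of the true limsup that is already tight for this example because the quotient is maximized at y = 0, which lies on the sampling grid.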
We give a direct proof for the following useful lemma, which can also be deduced from Theorem 2.3.9 in [24].
Lemma 1 (Chain rule)
Let ϕ be a real-valued Lipschitz continuous function defined on X, and denote the image of X under ϕ by {I}_{\varphi}(X); let \phi :{I}_{\varphi}(X)\to \mathbb{R} be a differentiable function such that {\phi}^{\mathrm{\prime}}(\gamma ) is continuous on {I}_{\varphi}(X) and {\phi}^{\mathrm{\prime}}(\gamma )\ge 0 for each \gamma \in {I}_{\varphi}(X). Then the chain rule

{(\phi \circ \varphi )}^{0}(x;d)={\phi}^{\mathrm{\prime}}(\varphi (x)){\varphi}^{0}(x;d)

holds for each d\in {\mathbb{R}}^{n}. Therefore,

\partial (\phi \circ \varphi )(x)={\phi}^{\mathrm{\prime}}(\varphi (x))\partial \varphi (x).
Proof On the one hand, from Definition 1 and the assumption that {\phi}^{\mathrm{\prime}}(\gamma )\ge 0 for all \gamma \in {I}_{\varphi}(X), we obtain
On the other hand, by the definition of {\varphi}^{0}(x;d), there exist a vector sequence \{{y}_{n}\}\subset X and a real sequence \{{\mu}_{n}\}\subset {\mathbb{R}}_{+} such that {y}_{n}\to x (n\to \mathrm{\infty}), {\mu}_{n}\to 0 (n\to \mathrm{\infty}) and
Note that
and
Therefore, by (1) and the definition of {(\phi \circ \varphi )}^{0}(x;d), we obtain
Thus, we obtain the desired result. □
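The chain rule of Lemma 1 can also be sanity-checked numerically (again an illustration added here, with the inner function |x| and the outer function \phi (t)={e}^{t} chosen for convenience): since {\phi}^{\mathrm{\prime}}(t)={e}^{t}>0, Lemma 1 predicts {(\phi \circ |\cdot |)}^{0}(0;d)={\phi}^{\mathrm{\prime}}(0)\cdot |d|=|d|.

```python
import math

def clarke_deriv(f, x, d, eps=1e-4, n=200):
    # finite-sample approximation of the limsup in Definition 1
    best = float("-inf")
    for i in range(n + 1):
        y = x - eps + 2 * eps * i / n
        for t in (eps, eps / 10, eps / 100):
            best = max(best, (f(y + t * d) - f(y)) / t)
    return best

phi = abs                                  # inner locally Lipschitz function
composed = lambda x: math.exp(abs(x))      # outer function exp with exp' > 0

lhs = clarke_deriv(composed, 0.0, 1.0)                    # (exp∘|.|)^0(0; 1)
rhs = math.exp(abs(0.0)) * clarke_deriv(phi, 0.0, 1.0)    # exp'(|0|) · |.|^0(0; 1)
print(lhs, rhs)   # both close to 1
```

The small discrepancy between the two values comes only from the finite sampling of y and t.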
With the above chain rule, we can compute the Clarke derivative of a realvalued function f more easily than by using the definition of the Clarke derivative itself; see the following Examples 1 and 2.
Example 1 Denote
Then f(x)=g\circ h(x), and it is easy to check that
Thus, by the chain rule in Lemma 1,
Example 2 Let X be a nonempty subset of {\mathbb{R}}^{n}, f be a locally Lipschitz function on X, and r be an arbitrary real number. Denote
for all a\in \mathbb{R}. By the chain rule in Lemma 1,
For differentiable functions, Antczak introduced G-invexity in [20]. Note from Example 2 that the composition \phi (f) may fail to be differentiable even though the function φ itself is differentiable. Thus, it is necessary to introduce the following vector ({G}_{f},{\beta}_{f})-invexity concept for a given nondifferentiable function f.
Definition 2 Let f=({f}_{1},\dots ,{f}_{k}) be a vector-valued locally Lipschitz function defined on a nonempty set X\subset {\mathbb{R}}^{n}. Consider the functions \eta :X\times X\to {\mathbb{R}}^{n}, {G}_{{f}_{i}}:{I}_{{f}_{i}}(X)\to \mathbb{R}, and {\beta}_{i}^{f}:X\times X\to {\mathbb{R}}_{+} for i\in K. Moreover, {G}_{{f}_{i}} is strictly increasing on its domain {I}_{{f}_{i}}(X) for each i\in K. If
holds for all x\in X (x\ne u) and i\in K, then f is said to be (strictly) nondifferentiable vector ({G}_{f},{\beta}_{f})-invex at u on X (with respect to η) (or shortly, ({G}_{f},{\beta}_{f})-invex at u on X), where {G}_{f}=({G}_{{f}_{1}},\dots ,{G}_{{f}_{k}}) and {\beta}_{f}:=({\beta}_{1}^{f},{\beta}_{2}^{f},\dots ,{\beta}_{k}^{f}). If f is (strictly) nondifferentiable vector ({G}_{f},{\beta}_{f})-invex at u on X (with respect to η) for all u\in X, then f is said to be (strictly) nondifferentiable vector ({G}_{f},{\beta}_{f})-invex on X with respect to η.
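Inequality (2) is not reproduced above; in the scalar case our reading of it, consistent with the differentiable G-invexity of [19, 20], is {G}_{f}(f(x))-{G}_{f}(f(u))\ge {\beta}_{f}(x,u){G}_{f}^{\mathrm{\prime}}(f(u))\zeta \eta (x,u) for all \zeta \in \partial f(u). Under that assumed form, the following toy Python check (with f, {G}_{f}, η, and {\beta}_{f} all chosen here purely for illustration) verifies the inequality for f(x)=|x| at u=0:

```python
import math

def G(a):        # strictly increasing choice with G'(a) = exp(a)
    return math.exp(a)

def G_prime(a):
    return math.exp(a)

f = abs
eta = lambda x, u: x - u     # illustrative choice of eta
beta = lambda x, u: 1.0      # illustrative choice of beta
u = 0.0
subdiff_u = [-1.0, -0.5, 0.0, 0.5, 1.0]   # samples from ∂|.|(0) = [-1, 1]

# check G(f(x)) - G(f(u)) >= beta * G'(f(u)) * zeta * eta(x, u) on a grid
ok = all(
    G(f(x)) - G(f(u)) >= beta(x, u) * G_prime(f(u)) * zeta * eta(x, u) - 1e-12
    for x in [k / 10 for k in range(-20, 21)]
    for zeta in subdiff_u
)
print(ok)  # True: |x| is (G_f, beta_f)-invex at u = 0 under these choices
```

Here the inequality reduces to {e}^{|x|}-1\ge \zeta x, which holds since {e}^{|x|}-1\ge |x|\ge \zeta x for every \zeta \in [-1,1].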
Remark 1 In order to define (strictly) nondifferentiable vector ({G}_{f},{\beta}_{f})-incave functions with respect to η for a given f, the direction of inequality (2) in Definition 2 should be reversed.
Remark 2 (1) Let f:X\to \mathbb{R} be differentiable ({G}_{f},{\beta}_{f})-invex; then {G}_{f}(f) is α-invex (by Definition 2 in this paper and the α-invexity defined in [23]), where \alpha ={\beta}_{f}.

(2) Let f:X\to \mathbb{R} be differentiable ({G}_{f},{\beta}_{f})-invex and {G}_{f}(a)=a for a\in \mathbb{R}; then f is α-invex as defined in [23], where \alpha ={\beta}_{f}.

(3) Let f=({f}_{1},\dots ,{f}_{k}) be differentiable vector ({G}_{f},{\beta}_{f})-invex and {\beta}_{i}^{f}(x,u)=1 for all x,u\in X (i\in K); then f is vector G-invex as defined in [20]. Further, if k=1, then f is G-invex as defined in [19].
Hence, the concept of ({G}_{f},{\beta}_{f})-invexity defined in this paper not only unifies but also extends the concepts of α-invexity and G-invexity. Example 3 illustrates that there exists a function which is neither α-invex as defined in [23] nor G-invex as defined in [20], but is ({G}_{f},{\beta}_{f})-invex as defined in this paper. Moreover, Definition 2, together with Lemma 1, makes it simple to choose a suitable vector-valued function η; see Example 3 as well.
Example 3 Let X=[-1/2,1/2]\subset \mathbb{R}. Define f=({f}_{1},{f}_{2},{f}_{3}):X\to {\mathbb{R}}^{3} as follows:
From Lemma 1,
Define
Then, by Definition 2, f is nondifferentiable vector ({G}_{f},{\beta}_{f})-invex with respect to η. Note that f is nondifferentiable; hence f is neither α-invex as defined in [23] nor G-invex as defined in [20], since both of those notions require differentiability.
3 Relations between (b,{G}_{f})-preinvexity and ({G}_{f},{\beta}_{f})-invexity
In this section, we present the concept of (b,{G}_{f})-preinvexity and discuss its relations with the ({G}_{f},{\beta}_{f})-invexity introduced in the previous section.
Definition 3 Let X\subset {\mathbb{R}}^{n}, \alpha :X\times X\to {\mathbb{R}}_{+}, and \eta :X\times X\to {\mathbb{R}}^{n}. The set X is said to be α-invex at u\in X with respect to η if for all x\in X and \lambda \in [0,1],

u+\lambda \alpha (x,u)\eta (x,u)\in X.

X is said to be an α-invex set with respect to η if X is α-invex at each u\in X. If \alpha (x,u)=1 for all x,u\in X, then the α-invex set X with respect to η is called an invex set X with respect to η.
Definition 4 Let X be an invex set (with respect to η) in {\mathbb{R}}^{n} as defined in Definition 3. Consider the functions {f}_{i}:X\to \mathbb{R} and {b}_{i}:X\times X\times [0,1]\to {\mathbb{R}}_{+} (i\in K). If
hold for all x\in X (x\ne u), then f=({f}_{1},\dots ,{f}_{k}) is said to be (strictly) vector b-preinvex at u on X with respect to η, where b=({b}_{1},\dots ,{b}_{k}). If f is (strictly) vector b-preinvex at u on X with respect to η for each u\in X, then f is (strictly) vector b-preinvex on X with respect to η.
Definition 5 Let X be an invex set (with respect to η) of {\mathbb{R}}^{n} as defined in Definition 3. Consider the functions {f}_{i}:X\to \mathbb{R}, {G}_{{f}_{i}}:{I}_{{f}_{i}}(X)\to \mathbb{R}, and {b}_{i}:X\times X\times [0,1]\to {\mathbb{R}}_{+} (i\in K). Moreover, {G}_{{f}_{i}} is strictly increasing on {I}_{{f}_{i}}(X) for i\in K. If
hold for all x\in X (x\ne u), then f=({f}_{1},\dots ,{f}_{k}) is said to be (strictly) vector (b,{G}_{f})-preinvex at u on X with respect to η, where {G}_{f}=({G}_{{f}_{1}},\dots ,{G}_{{f}_{k}}) and b=({b}_{1},\dots ,{b}_{k}). If f is (strictly) vector (b,{G}_{f})-preinvex at u on X for all u\in X, then f is (strictly) vector (b,{G}_{f})-preinvex on X with respect to η.
Example 4 Let X=\mathbb{R}. Define
Then it is easy to check that f is (b,G)-preinvex on ℝ with respect to the function η defined by \eta (x,u)=u, where b(x,u;\lambda )\equiv 1 for all x,u\in \mathbb{R}. However, f is not b-preinvex at u=1 with respect to the same η and b, since
Example 4 above illustrates that there exists a function which is (b,G)-preinvex but not b-preinvex. Next, we give another useful lemma, whose proof is omitted.
Lemma 2 Let φ be a strictly increasing function defined on A\subset \mathbb{R}; then {\phi}^{-1} exists and {\phi}^{-1} is strictly increasing on {I}_{\phi}(A).
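Lemma 2 can be checked numerically on a concrete strictly increasing function (an illustration added here; \phi (t)={t}^{3}+t is our choice, inverted by bisection):

```python
def phi(t):
    return t ** 3 + t   # strictly increasing on R

def phi_inverse(y, lo=-10.0, hi=10.0, iters=80):
    # invert phi on [lo, hi] by bisection (valid since phi is strictly increasing)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if phi(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ys = [phi(t / 10) for t in range(-20, 21)]    # points in the image of phi
xs = [phi_inverse(y) for y in sorted(ys)]     # inverse applied to sorted inputs
print(all(xs[i] <= xs[i + 1] + 1e-9 for i in range(len(xs) - 1)))  # True
```

As the lemma predicts, applying {\phi}^{-1} to an increasing sequence of image points yields an increasing sequence.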
Theorem 1 Let X be an invex set (with respect to η) in {\mathbb{R}}^{n} and f=({f}_{1},\dots ,{f}_{k}) be a function defined on X; let {G}_{f}=({G}_{{f}_{1}},\dots ,{G}_{{f}_{k}}) be a function such that {G}_{{f}_{i}}:{I}_{{f}_{i}}(X)\to \mathbb{R} is strictly increasing on {I}_{{f}_{i}}(X) for i\in K; let b:=({b}_{1},\dots ,{b}_{k}), where {b}_{i}:X\times X\times [0,1]\to {\mathbb{R}}_{+} (i\in K). Then f is (strictly) vector (b,{G}_{f})-preinvex at u on X with respect to η if and only if {G}_{f}\circ f=({G}_{{f}_{1}}\circ {f}_{1},\dots ,{G}_{{f}_{k}}\circ {f}_{k}) is (strictly) vector b-preinvex at u on X with respect to the same η.
Proof ‘If’ part. Let {G}_{f}\circ f=({G}_{{f}_{1}}\circ {f}_{1},\dots ,{G}_{{f}_{k}}\circ {f}_{k}) be (strictly) vector b-preinvex at u on X with respect to η. We get from Definition 4
Thus, we obtain with Lemma 2
By Definition 5, we deduce that f is (strictly) vector (b,{G}_{f})-preinvex at u on X with respect to the same η.
Conversely, the above steps are reversible, so the ‘only if’ part follows. □
Theorem 2 Let X be an invex set (with respect to η) in {\mathbb{R}}^{n}; let f=({f}_{1},\dots ,{f}_{k}) be (strictly) vector (b,{G}_{f})-preinvex on X with respect to η; assume that {G}_{{f}_{i}}(\cdot ) is differentiable and strictly increasing on {I}_{{f}_{i}}(X) and that {b}_{i}(x,u;\lambda ) is continuous on X\times X\times [0,1] for each i\in K. Moreover, {\limsup}_{\lambda \downarrow 0}{b}_{i}(x,u;\lambda )>0 for any x,u\in X. Then f is vector ({G}_{f},{\beta}_{f})-invex on X with respect to η, where {\beta}_{i}^{f}(x,u)=\frac{1}{{\limsup}_{\lambda \downarrow 0}{b}_{i}(x,u;\lambda )} for i\in K.
Proof Since f=({f}_{1},\dots ,{f}_{k}) is (strictly) vector (b,{G}_{f})-preinvex on X with respect to η, from Theorem 1 {G}_{f}\circ f=({G}_{{f}_{1}}\circ {f}_{1},\dots ,{G}_{{f}_{k}}\circ {f}_{k}) is (strictly) vector b-preinvex on X with respect to η. That is, for any x,u\in X (x\ne u),
Hence,
Therefore, by the definition of the superior limit and continuity, one obtains
which together with Lemma 1 gives
Thus, the result follows. □
Example 5 Let X be an invex set (with respect to η) of {\mathbb{R}}^{n} and f=({f}_{1},\dots ,{f}_{k}) be (strictly) (b,{G}_{f})-preinvex on X with respect to η. For any given real number r, let φ be the function defined in Example 2 and denote {G}_{f}\circ f\triangleq (\phi \circ {f}_{1},\dots ,\phi \circ {f}_{k}). Then from Theorem 2, f is nondifferentiable vector ({G}_{f},{\beta}_{f})-invex on X with respect to η, where {\beta}_{i}^{f}(x,u)=\frac{1}{{\limsup}_{\lambda \downarrow 0}{b}_{i}(x,u;\lambda )} for i\in K. That is, the inequalities
hold for any {\zeta}_{i}\in \partial {f}_{i}(u) and for each i\in K. Thus, f is exactly locally Lipschitz V-r-invex with respect to η on X (r-invex in the scalar case).
Remark 3 By Definition 2 and Example 5, we know that both V-r-invex functions and r-invex functions are nondifferentiable vector ({G}_{f},{\beta}_{f})-invex.
In general, a multiobjective programming problem is formulated as the following vector minimization problem:
where X is a nonempty set of {\mathbb{R}}^{n}, and {f}_{i} (i\in K) and {g}_{j} (j\in M) are real-valued Lipschitz functions on X.
Let {E}_{\mathrm{CVP}}=\{x\in X:{g}_{j}(x)\leqq 0,j\in M\} be the set of all feasible solutions for the problem (CVP). Further, denote by J(\overline{x}):=\{j\in M:{g}_{j}(\overline{x})=0\} the set of constraint indices active at \overline{x}\in {E}_{\mathrm{CVP}}.
The multiobjective programming problem (CVP) above is widely used in the applied sciences. Recently, this kind of programming has been used to solve problems arising in fields such as bioinformatics, computational biology, molecular biology, wastewater treatment, drug discovery, and food processing.
For convenience, we need the following vector minimization problem:
where {G}_{g}(0):=({G}_{{g}_{1}}(0),\dots ,{G}_{{g}_{m}}(0)). Denote {E}_{G\text{-}\mathrm{CVP}}:=\{x\in X:{G}_{g}\circ g(x)\leqq {G}_{g}(0)\} and {J}^{\mathrm{\prime}}(\overline{x}):=\{j\in M:{G}_{{g}_{j}}\circ {g}_{j}(\overline{x})={G}_{{g}_{j}}(0)\}. Then it is easy to see that {E}_{\mathrm{CVP}}={E}_{G\text{-}\mathrm{CVP}} and J(\overline{x})={J}^{\mathrm{\prime}}(\overline{x}). So, the set of all feasible solutions and the set of active constraint indices for either (CVP) or (G-CVP) are denoted by E and J(\overline{x}), respectively.
Before studying optimality in multiobjective programming, we have to define clearly the concepts of optimality and solutions in relation to a multiobjective programming problem. Note that in vector optimization problems, there is a multitude of competing definitions and approaches. One of the dominating ones is (weak) Pareto optimality. The (weak) Pareto optimality in multiobjective programming associates the concept of a solution with some property that seems intuitively natural.
Definition 6 A feasible point \overline{x} is said to be a (weakly) efficient solution for a multiobjective programming problem (CVP) if and only if there exists no x\in E such that f(x)\le f(\overline{x}) (f(x)<f(\overline{x}), respectively).
Lemma 3 Let {G}_{{f}_{i}} be strictly increasing on {I}_{{f}_{i}}(X) for each i\in K and {G}_{{g}_{j}} be strictly increasing on {I}_{{g}_{j}}(X) for each j\in M. Further, let 0\in {I}_{{g}_{j}}(X), j\in M. Then \overline{x} is a (weakly) efficient solution for (CVP) if and only if \overline{x} is a (weakly) efficient solution for (G-CVP).
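The content of Lemma 3 (and the identity {E}_{\mathrm{CVP}}={E}_{G\text{-}\mathrm{CVP}} above) can be illustrated on a finite set of objective vectors, a discrete stand-in for (CVP) constructed here: strictly increasing transformations {G}_{{f}_{i}} applied componentwise leave the set of (weakly) efficient points unchanged.

```python
import math

def efficient(points):
    """Indices of Pareto-efficient points (minimization): a point survives
    if no other point is <= componentwise with at least one strict <."""
    def dominates(a, b):
        return (all(ai <= bi for ai, bi in zip(a, b))
                and any(ai < bi for ai, bi in zip(a, b)))
    return {i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)}

# objective vectors of a few feasible points of a toy bi-objective problem
F = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (1.0, 5.0)]

# strictly increasing transforms G_{f_1}, G_{f_2} applied componentwise
G = (lambda a: math.exp(a), lambda a: a ** 3 + a)
GF = [(G[0](p[0]), G[1](p[1])) for p in F]

print(efficient(F) == efficient(GF))  # True: efficiency is invariant
```

Here the efficient indices are {0, 1, 2} before and after the transformation: strictly increasing maps preserve all componentwise comparisons.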
4 Optimality conditions in nondifferentiable multiobjective programming
The first necessary conditions for the inequality-constrained problem were presented in 1948 by Fritz John, while stronger necessary conditions for the same problem were obtained in 1951 by Kuhn and Tucker. Since then, optimality conditions of Fritz John and Karush-Kuhn-Tucker type for differentiable or nondifferentiable nonconvex multiobjective programming problems have been established under various assumptions. For example, optimality conditions of Fritz John and Karush-Kuhn-Tucker type for nondifferentiable convex multiobjective programming problems were established by Kanniappan. Later, Craven proved these conditions for nondifferentiable multiobjective programming problems involving locally Lipschitz functions. Also, under some constraint qualifications, Lee proved the Karush-Kuhn-Tucker necessary optimality conditions for multiobjective programming problems involving Lipschitz functions. Moreover, Soleimani-Damaneh characterized the weakly Pareto-optimal solutions of nonsmooth multiobjective programs in Asplund spaces under locally Lipschitz and generalized convexity conditions. Further, he established some sufficient conditions for optimality and proper optimality for multiple-objective programs in Banach spaces after extending the concept of vector invexity.
Recently, Antczak [19] introduced the so-called G-Karush-Kuhn-Tucker necessary optimality conditions for a differentiable mathematical programming problem. In a natural way, he [20] extended these conditions to the vectorial case for differentiable multiobjective programming problems. From the discussion in the sections above, it is interesting to consider nondifferentiable nonlinear programming. Hence, we present both G-Karush-Kuhn-Tucker necessary and G-Karush-Kuhn-Tucker sufficient optimality conditions for this kind of nondifferentiable mathematical programming problems.
Theorem 3 (G-Fritz John necessary optimality condition)
Let {G}_{{f}_{i}} be a function defined on {I}_{{f}_{i}}(X) such that {G}_{{f}_{i}}^{\mathrm{\prime}} is nonnegative and continuous on {I}_{{f}_{i}}(X) for each i\in K; let {G}_{{g}_{j}} be a function defined on {I}_{{g}_{j}}(X) such that {G}_{{g}_{j}}^{\mathrm{\prime}} is nonnegative and continuous on {I}_{{g}_{j}}(X) for each j\in M. If \overline{x} is a (weakly) efficient solution for (CVP), then there exist \overline{\lambda}\in {\mathbb{R}}^{k} and \overline{\mu}\in {\mathbb{R}}^{m} such that
Proof Since \overline{x} is a (weakly) efficient solution for (CVP), by Lemma 3, \overline{x} is a (weakly) efficient solution for (G-CVP). Therefore, from Theorem 10 of [18], we have
Hence, by Lemma 1, we get the desired result. □
The G-Karush-Kuhn-Tucker necessary optimality conditions for \overline{x} to be (weakly) Pareto optimal are obtained from the above G-Fritz John necessary optimality conditions under some constraint qualifications.
Now, we give a generalized Slater-type constraint qualification. Under this regularity condition, we establish the G-Karush-Kuhn-Tucker necessary optimality conditions for the considered nonsmooth multiobjective programming problem (CVP).
Definition 7 The program (CVP) is said to satisfy the generalized Slater-type constraint qualification at \overline{x} if there exists {x}^{0}\in E such that {g}_{J}({x}^{0})<0 and {g}_{J} is ({G}_{{g}_{J}},{\beta}_{{g}_{J}})-invex with respect to η at \overline{x} on E, where J\triangleq J(\overline{x}).
Theorem 4 (G-Karush-Kuhn-Tucker necessary optimality condition)
Let {G}_{{f}_{i}} be a function defined on {I}_{{f}_{i}}(X) such that {G}_{{f}_{i}}^{\mathrm{\prime}} is nonnegative and continuous on {I}_{{f}_{i}}(X) for each i\in K; let {G}_{{g}_{j}} be a function defined on {I}_{{g}_{j}}(X) such that {G}_{{g}_{j}}^{\mathrm{\prime}} is nonnegative and continuous on {I}_{{g}_{j}}(X) for each j\in M. Assume that \overline{x} is a (weakly) efficient solution for (CVP) and that (CVP) satisfies the generalized Slater-type constraint qualification at \overline{x}. Then there exist \overline{\lambda}\in {\mathbb{R}}^{k} and \overline{\mu}\in {\mathbb{R}}^{m} such that
Proof On the one hand, since \overline{x} is a (weakly) efficient solution for (CVP), the necessary optimality conditions of G-Fritz John type (5)-(7) for (CVP) are fulfilled. Let us suppose that \overline{\lambda}=0. Then by (6) we have that {\overline{\mu}}_{j}=0 for all j\notin J, and there exists at least one j\in J such that {\overline{\mu}}_{j}>0. Thus, from (5), Lemma 1, and the subdifferential calculus (see [24]), it follows that
This implies that there exists {\zeta}_{j}\in \partial {g}_{j}(\overline{x}), j\in M, such that
Note that {g}_{J} is assumed to be ({G}_{{g}_{J}},{\beta}_{{g}_{J}})-invex with respect to η at \overline{x}. Then
On the other hand, it follows from the generalized Slater-type constraint qualification that there exists {x}^{0}\in E such that {g}_{j}({x}^{0})<0 for all j\in J. Since {\overline{\mu}}_{j}>0 for at least one j\in J, we obtain the following inequality:
which contradicts (11). □
Now, under the assumption of generalized invexity defined in Section 2, we can establish sufficient optimality conditions for nonsmooth multiobjective programming problems involving locally Lipschitz functions.
Theorem 5 (G-Karush-Kuhn-Tucker sufficient optimality conditions)
Let \overline{x} be a feasible point for (CVP); let {G}_{{f}_{i}} be differentiable and strictly increasing on {I}_{{f}_{i}}(X) for each i\in K, and let {G}_{{g}_{j}} be differentiable and strictly increasing on {I}_{{g}_{j}}(X) for each j\in M. Moreover, assume that the G-Karush-Kuhn-Tucker necessary optimality conditions (8)-(10) are satisfied at \overline{x}. If f is nondifferentiable vector ({G}_{f},{\beta}_{f})-invex at \overline{x} on X with respect to η and g is nondifferentiable vector ({G}_{g},{\beta}_{g})-invex at \overline{x} on X with respect to the same η, then \overline{x} is a (weakly) efficient solution for (CVP).
Proof Suppose, contrary to the result, that \overline{x} is not a weakly efficient solution for (CVP). By Lemma 3, \overline{x} is not a weakly efficient solution for (G-CVP). Hence, there exists {x}_{0}\in E such that
By the generalized invexity assumption of f and g, we have
where {\zeta}_{i}^{f}\in \partial {f}_{i}(\overline{x}) (i\in K) and {\zeta}_{j}^{g}\in \partial {g}_{j}(\overline{x}) (j\in M). Multiplying (14) by {\overline{\mu}}_{j}, we get
From (8), (9), (13), and (15), we have
Note that \overline{\lambda}⩾0. Then
which contradicts the G-Karush-Kuhn-Tucker necessary optimality condition (8). Hence, \overline{x} is a weakly efficient solution for (CVP), and the proof is complete. □
Theorem 6 (G-Karush-Kuhn-Tucker sufficient optimality conditions)
Let \overline{x} be a feasible point for (CVP); let {G}_{{f}_{i}} be differentiable and strictly increasing on {I}_{{f}_{i}}(X) for each i\in K, and let {G}_{{g}_{j}} be differentiable and strictly increasing on {I}_{{g}_{j}}(X) for each j\in M. Moreover, assume that the G-Karush-Kuhn-Tucker necessary optimality conditions (8)-(10) are satisfied at \overline{x}. If f is strictly nondifferentiable vector ({G}_{f},{\beta}_{f})-invex at \overline{x} on X with respect to η and g is nondifferentiable vector ({G}_{g},{\beta}_{g})-invex at \overline{x} on X with respect to the same η, then \overline{x} is an efficient solution for (CVP).
Proof The proof is similar to that of Theorem 5. □
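As a toy illustration of Theorem 5 (constructed here, not taken from the paper), take f(x)=|x| and g(x)={x}^{2}-1 with {G}_{f} and {G}_{g} the identity, so the G-Karush-Kuhn-Tucker conditions reduce to the classical ones. At \overline{x}=0 they hold with \overline{\lambda}=1, \overline{\mu}=0, and \zeta =0\in \partial f(0); both functions are invex at 0 with \eta (x,u)=x-u and \beta \equiv 1, and \overline{x} is indeed a global minimizer over the feasible set:

```python
f = abs
g = lambda x: x * x - 1.0

xbar, lam, mu, zeta = 0.0, 1.0, 0.0, 0.0   # candidate point and multipliers

# G-KKT conditions with identity G's: stationarity, feasibility, complementarity
assert lam * zeta + mu * 2 * xbar == 0.0   # 0 in lam*∂f(x̄) + mu*∂g(x̄)
assert g(xbar) <= 0.0                      # primal feasibility
assert mu * g(xbar) == 0.0                 # complementary slackness

# invexity of f and g at x̄ w.r.t. eta(x, u) = x - u (beta ≡ 1)
grid = [k / 100 for k in range(-100, 101)]  # the feasible set {x : g(x) <= 0}
assert all(f(x) - f(xbar) >= z * (x - xbar) for x in grid for z in (-1.0, 0.0, 1.0))
assert all(g(x) - g(xbar) >= 2 * xbar * (x - xbar) for x in grid)

# sufficiency (Theorem 5's conclusion): x̄ minimizes f over the feasible set
print(min(f(x) for x in grid) == f(xbar))  # True
```

The check is of course only a finite-grid verification, but it shows each hypothesis of the theorem in action on a concrete nonsmooth instance.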
5 Duality
Duality is an important concept in the study of optimization problems. Several duals, including the Mond-Weir dual and the Wolfe dual, have been introduced for various nonlinear programming problems. For example, Ahmad et al. [9] considered the Mond-Weir type dual program of nonsmooth multiobjective programming involving generalized V-r-invex functions. Further, Soleimani-Damaneh considered Mond-Weir type and Wolfe type duals for a general nonsmooth optimization problem in Banach algebras. As applications of our new generalized invexity, we also establish duality results following the approach of Mond and Weir. We formulate the following dual problem for (CVP):
Let W denote the set of all feasible solutions for the dual problem (MWD). Further, denote by Y the set Y=\{y\in X:(y,\lambda ,\mu )\in W\}.
Theorem 7 (Weak duality)
Let x and (y,\lambda ,\mu ) be feasible solutions for (CVP) and (MWD), respectively. Moreover, assume that {f}_{I} and {g}_{J} are ({G}_{{f}_{I}},{\beta}_{{f}_{I}})-invex and ({G}_{{g}_{J}},{\beta}_{{g}_{J}})-invex at y on E\cup Y with respect to the same η, respectively, where I\triangleq I(y) and J\triangleq J(y). Then f(x)\nless f(y).
Proof Let x and (y,\lambda ,\mu ) be feasible solutions for (CVP) and (MWD), respectively. Then there exist {\zeta}_{i}^{f}\in \partial {f}_{i}(y), i\in K and {\zeta}_{j}^{g}\in \partial {g}_{j}(y), j\in M, such that
We proceed by contradiction. Suppose that
Since {f}_{I} and {g}_{J} are ({G}_{{f}_{I}},{\beta}_{{f}_{I}})-invex and ({G}_{{g}_{J}},{\beta}_{{g}_{J}})-invex at y on E\cup Y with respect to the same η, respectively, by Definition 2 the system
holds for all x\in E. Hence, we deduce that the inequality
holds for all {\zeta}_{i}^{f}\in \partial {f}_{i}(y), i\in I, {\zeta}_{j}^{g}\in \partial {g}_{j}(y), j\in J. This contradicts (16). □
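Theorem 7 can be sanity-checked on a smooth scalar instance constructed here: minimize f(x)={x}^{2} subject to g(x)=1-x\le 0, with identity G's. Mond-Weir dual feasibility then reads \lambda {f}^{\mathrm{\prime}}(y)+\mu {g}^{\mathrm{\prime}}(y)=0, \mu g(y)\ge 0, \mu \ge 0; taking \lambda =1 forces \mu =2y and y\in [0,1], and weak duality predicts f(x)\ge f(y) for every primal-feasible x and dual-feasible (y,\lambda ,\mu ):

```python
f = lambda x: x * x          # objective
g = lambda x: 1.0 - x        # constraint g(x) <= 0, i.e. x >= 1

def mwd_feasible(y, lam, mu):
    """Mond-Weir dual feasibility for this smooth scalar instance:
    stationarity lam*f'(y) + mu*g'(y) = 0, mu*g(y) >= 0, mu >= 0."""
    stationarity = abs(lam * 2 * y + mu * (-1.0)) < 1e-9
    return stationarity and mu >= 0.0 and mu * g(y) >= -1e-9

primal = [1.0 + k / 10 for k in range(0, 21)]           # feasible x >= 1
dual = [(y, 1.0, 2 * y) for y in [k / 10 for k in range(0, 11)]]
assert all(mwd_feasible(y, lam, mu) for y, lam, mu in dual)

# weak duality: no dual-feasible objective value exceeds a primal-feasible one
print(all(f(x) >= f(y) for x in primal for (y, lam, mu) in dual))  # True
```

Here every dual value {y}^{2}\le 1 sits below every primal value {x}^{2}\ge 1, with equality exactly at the common optimum x=y=1, as strong duality (Theorem 8) suggests.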
Theorem 8 (Strong duality)
Let \overline{x} be a (weakly) efficient solution in (CVP). Then there exist \overline{\lambda}\in {\mathbb{R}}^{k}, \overline{\lambda}⩾0, and \overline{\mu}\in {\mathbb{R}}^{m}, \overline{\mu}\geqq 0, such that (\overline{x},\overline{\lambda},\overline{\mu}) is feasible in (MWD). If, moreover, the weak duality theorem holds for problems (CVP) and (MWD), then (\overline{x},\overline{\lambda},\overline{\mu}) is a (weakly) efficient solution in (MWD) and the optimal values of both problems are the same.
Proof Let \overline{x} be a (weakly) efficient solution in (CVP). Then there exist \overline{\lambda}\in {\mathbb{R}}^{k}, \overline{\lambda}⩾0, and \overline{\mu}\in {\mathbb{R}}^{m}, \overline{\mu}\geqq 0, such that the G-Karush-Kuhn-Tucker optimality conditions (5)-(7) are fulfilled at \overline{x}; hence (\overline{x},\overline{\lambda},\overline{\mu}) is feasible in (MWD). Suppose that (\overline{x},\overline{\lambda},\overline{\mu}) is not a (weakly) efficient solution in (MWD). Then there exists (\tilde{x},\tilde{\lambda},\tilde{\mu})\in W such that
But the above inequality contradicts weak duality. Thus, (\overline{x},\overline{\lambda},\overline{\mu}) is a (weakly) efficient solution in (MWD), and the optimal values of both problems are the same. □
Theorem 9 (Converse duality)
Let (\overline{y},\overline{\lambda},\overline{\mu}) be a (weakly) efficient solution for (MWD) such that \overline{y}\in E. Moreover, assume that {f}_{I} and {g}_{J} are (strictly) ({G}_{{f}_{I}},{\beta}_{{f}_{I}})-invex and (strictly) ({G}_{{g}_{J}},{\beta}_{{g}_{J}})-invex at \overline{y} on E\cup Y with respect to the same η, respectively, where I\triangleq I(\overline{y}) and J\triangleq J(\overline{y}). Then \overline{y} is a (weakly) efficient solution in (CVP).
Proof Since (\overline{y},\overline{\lambda},\overline{\mu}) is a (weakly) efficient point in (MWD), then it is feasible in (MWD). Hence, \overline{y}\in X, \overline{\mu}\geqq 0, and the second constraint of (MWD) is fulfilled at \overline{y}. Thus, we have
We proceed by contradiction. Suppose that \overline{y} is not a (weakly) efficient point in (CVP). Then there exists \tilde{x}\in E such that
Since {f}_{I} and {g}_{J} are (strictly) ({G}_{{f}_{I}},{\beta}_{{f}_{I}})-invex and (strictly) ({G}_{{g}_{J}},{\beta}_{{g}_{J}})-invex at \overline{y} on E\cup Y with respect to the same η, respectively, by Definition 2 the inequalities
hold for all x\in E. Hence, it is also true for x=\tilde{x}. Thus, we deduce that the inequality
holds for all {\zeta}_{i}^{f}\in \partial {f}_{i}(\overline{y}), i\in K, {\zeta}_{j}^{g}\in \partial {g}_{j}(\overline{y}), j\in M, which contradicts the feasibility of (\overline{y},\overline{\lambda},\overline{\mu}) in (MWD). □
6 Conclusion
This paper presents a new type of generalized invexity, namely nondifferentiable ({G}_{f},{\beta}_{f})-invexity for a given locally Lipschitz function f defined on X\subset {\mathbb{R}}^{n}. This new invexity not only unifies but also extends the existing G-invexity and α-invexity from the literature. We have constructed the auxiliary mathematical programming problem (GCVP) and discussed the relations between (GCVP) and (CVP). Using (GCVP), we have proved the G-Karush-Kuhn-Tucker necessary optimality conditions for (CVP). The G-Karush-Kuhn-Tucker necessary optimality conditions established in this paper are more general than the classical Karush-Kuhn-Tucker necessary optimality conditions found in the literature. We have also proved the sufficiency of these G-Karush-Kuhn-Tucker conditions for (CVP) under the new nondifferentiable vector invexity assumption. More precisely, this result has been proved for multiobjective programming problems in which the objective functions and the constraints are nondifferentiable vector generalized invex with respect to the same η defined in Section 2, but not necessarily with respect to the same G; see the example below. As an application of the new generalized invexity, we have established duality results for (CVP) via the Mond-Weir dual programming problem. Note that many researchers have studied minimax programming and fractional programming with different generalized invexities; see [6, 8, 10, 15]. As pointed out by an anonymous referee, studying minimax and fractional programming under the invexity proposed here is a topic for future work.
To illustrate the approach to optimality considered in this paper, we now give an example of a nonsmooth multiobjective programming problem involving nondifferentiable vector generalized invex functions with respect to the same function η defined in Section 2.
Example 6 Let X=[-1/2,1/2]\subset \mathbb{R}. We consider the following (CVP):
where
It is not difficult to see that {f}_{1}, {f}_{2}, g are locally Lipschitz functions and, moreover, that the set of all feasible solutions is E=X=[-1/2,1/2]\subset \mathbb{R}. Note also that the feasible solution \overline{x}=0 is efficient in the considered nonsmooth vector optimization problem. By Example 3, f and g are nondifferentiable vector ({G}_{f},{\beta}_{f})-invex and ({G}_{g},{\beta}_{g})-invex with respect to the same η, respectively, where η, {\beta}_{f}, and {\beta}_{g} are defined in Example 3. It can also be verified that the G-Karush-Kuhn-Tucker necessary optimality conditions (8)-(10) are satisfied at \overline{x}. Since all the hypotheses of Theorem 6 are fulfilled, \overline{x} is an efficient solution of the considered multiobjective programming problem. Further, note that the sufficient optimality result, Theorem 20 in [19], for efficiency is not applicable to the considered multiobjective programming problem (CVP). This follows from the fact that all functions involved in the problem are nondifferentiable.
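The subdifferential check behind conditions such as (8)-(10) can be illustrated numerically. The following sketch uses hypothetical stand-in functions (f1(x) = |x|, f2(x) = |x| + x^2, g(x) = -|x|; these are NOT the functions of Example 6, whose definitions involve the transformations G and are given above) whose Clarke subdifferentials at \overline{x}=0 are intervals, and verifies the stationarity inclusion 0 ∈ Σ λ_i ∂f_i(0) + μ ∂g(0) by simple interval arithmetic:

```python
# Hedged sketch: checking a nonsmooth KKT-type stationarity inclusion at
# x_bar = 0 for one-dimensional functions whose Clarke subdifferentials
# are intervals. The functions and multipliers below are illustrative
# stand-ins, not the data of Example 6.

def scale_interval(c, iv):
    """Scale the interval iv = (lo, hi) by a nonnegative multiplier c."""
    lo, hi = iv
    return (c * lo, c * hi)

def add_intervals(a, b):
    """Minkowski sum of two intervals."""
    return (a[0] + b[0], a[1] + b[1])

def kkt_inclusion_holds(multipliers_and_subdiffs, tol=1e-12):
    """Check 0 in sum_k c_k * I_k, where each I_k is an interval (lo, hi)."""
    total = (0.0, 0.0)
    for c, iv in multipliers_and_subdiffs:
        assert c >= 0, "multipliers must be nonnegative"
        total = add_intervals(total, scale_interval(c, iv))
    lo, hi = total
    return lo <= tol and hi >= -tol

# Hypothetical Clarke subdifferentials at x_bar = 0:
#   f1(x) = |x|        ->  subdiff_f1 = [-1, 1]
#   f2(x) = |x| + x^2  ->  subdiff_f2 = [-1, 1]
#   g(x)  = -|x|       ->  subdiff_g  = [-1, 1]
subdiff_f1 = (-1.0, 1.0)
subdiff_f2 = (-1.0, 1.0)
subdiff_g = (-1.0, 1.0)

lam1, lam2, mu = 0.5, 0.5, 0.0  # lambda >= 0, mu >= 0
ok = kkt_inclusion_holds([(lam1, subdiff_f1), (lam2, subdiff_f2), (mu, subdiff_g)])
print(ok)  # True: 0 lies in the weighted Minkowski sum of the intervals
```

The same routine rejects a point at which stationarity fails, e.g. a single subdifferential interval (2, 3) with multiplier 1 does not contain 0.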
References
Hanson MA: On sufficiency of the Kuhn-Tucker conditions. J. Math. Anal. Appl. 1981, 80: 545–550. 10.1016/0022-247X(81)90123-2
Ben-Israel A, Mond B: What is invexity? J. Aust. Math. Soc. Ser. B 1986, 28: 1–9. 10.1017/S0334270000005142
Antczak T: (p,r)-invex sets and functions. J. Math. Anal. Appl. 2001, 263: 355–379. 10.1006/jmaa.2001.7574
Antczak T: r-preinvexity and r-invexity in mathematical programming. Comput. Math. Appl. 2005, 50(3–4): 551–566. 10.1016/j.camwa.2005.01.024
Mordukhovich BS: Multiobjective optimization problems with equilibrium constraints. Math. Program. 2009, 117(1): 331–354. 10.1007/s10107-007-0172-y
Mishra SK, Shukla K: Nonsmooth minimax programming problems with V-r-invex functions. Optimization 2010, 59(1): 95–103. 10.1080/02331930903500308
Mishra SK, Singh V, Wang SY, Lai KK: Optimality and duality for nonsmooth multiobjective optimization problems with generalized V-r-invexity. J. Appl. Anal. 2010, 16: 49–58.
Ahmad I, Gupta SK, Kailey N, Agarwal RP: Duality in nondifferentiable minimax fractional programming with B-(p,r)-invexity. J. Inequal. Appl. 2011, 2011: Article ID 1
Ahmad I, Gupta SK, Jayswal A: On sufficiency and duality for nonsmooth multiobjective programming problems involving generalized V-r-invex functions. Nonlinear Anal., Theory Methods Appl. 2011, 74(17): 5920–5928. 10.1016/j.na.2011.05.058
SoleimaniDamaneh M: Optimality for nonsmooth fractional multiple objective programming. Nonlinear Anal., Theory Methods Appl. 2008, 68(10):2873–2878. 10.1016/j.na.2007.02.033
Ansari QH, Yao JC: Recent Developments in Vector Optimization. Springer, Berlin; 2012.
Li J, Gao Y: Nondifferentiable multiobjective mixed symmetric duality under generalized convexity. J. Inequal. Appl. 2011, 2011: Article ID 23. doi:10.1186/1029-242X-2011-23
Gao Y: Higher-order symmetric duality for a class of multiobjective fractional programming problems. J. Inequal. Appl. 2012, 2012: Article ID 142. doi:10.1186/1029-242X-2012-142
Gupta SK, Dangar D, Kumar S: Second-order duality for a nondifferentiable minimax fractional programming under generalized α-univexity. J. Inequal. Appl. 2012, 2012: Article ID 187. doi:10.1186/1029-242X-2012-187
Antczak T: Minimax programming under (p,r)-invexity. Eur. J. Oper. Res. 2004, 158: 1–19. 10.1016/S0377-2217(03)00352-7
Antczak T: Generalized B-(p,r)-invexity functions and nonlinear mathematical programming. Numer. Funct. Anal. Optim. 2009, 30(1–2): 1–22. 10.1080/01630560802678549
Antczak T: V-r-invexity in multiobjective programming. J. Appl. Anal. 2005, 11(1): 63–80.
Antczak T: Optimality and duality for nonsmooth multiobjective programming problems with V-r-invexity. J. Glob. Optim. 2009, 45(2): 319–334. 10.1007/s10898-008-9377-8
Antczak T: New optimality conditions and duality results of G-type in differentiable mathematical programming. Nonlinear Anal., Theory Methods Appl. 2007, 66: 1617–1632. 10.1016/j.na.2006.02.013
Antczak T: On G-invex multiobjective programming. Part I. Optimality. J. Glob. Optim. 2009, 43(1): 97–109. 10.1007/s10898-008-9299-5
Antczak T: On G-invex multiobjective programming. Part II. Duality. J. Glob. Optim. 2009, 43(1): 111–140. 10.1007/s10898-008-9298-6
Kim HJ, Seo YY, Kim DS: Optimality conditions in nondifferentiable G-invex multiobjective programming. J. Inequal. Appl. 2010, 2010: Article ID 172059. doi:10.1155/2010/172059
Noor M: On generalized preinvex functions and monotonicities. J. Inequal. Pure Appl. Math. 2004, 5(4): 1–9.
Clarke FH: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York; 1983.
Acknowledgements
The authors are grateful to the referees for their valuable suggestions that helped to improve the paper in its present form. This research is supported by the Science Foundation of Hanshan Normal University (LT200801).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors carried out the proof. All authors conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Yuan, D., Liu, X., Yang, S. et al. Nondifferentiable mathematical programming involving (G,\beta)-invexity. J Inequal Appl 2012, 256 (2012). https://doi.org/10.1186/1029-242X-2012-256
Keywords
 ({G}_{f},{\beta}_{f})-invexity
 G-Karush-Kuhn-Tucker sufficient optimality conditions
 G-Karush-Kuhn-Tucker necessary optimality conditions
 duality