Nonconvex composite multiobjective nonsmooth fractional programming
Journal of Inequalities and Applications volume 2013, Article number: 508 (2013)
Abstract
We consider nonsmooth multiobjective programs where the objective function is a fractional composition of invex functions and locally Lipschitz and Gâteaux differentiable functions. Kuhn-Tucker necessary and sufficient optimality conditions for weakly efficient solutions are presented. We formulate dual problems and establish weak, strong and converse duality theorems for a weakly efficient solution.
MSC: 90C46, 90C29, 90C32.
1 Introduction
Recently there has been increasing interest in developing optimality conditions and duality relations for nonsmooth multiobjective programming problems involving locally Lipschitz functions. Many authors have studied such problems under various kinds of generalized convexity, and a number of results have been obtained. Schaible [1] and Bector et al. [2] derived Kuhn-Tucker necessary and sufficient optimality conditions for multiobjective fractional programming. By using ρ-invexity of a fractional function, Kim [3] obtained necessary and sufficient optimality conditions and duality theorems for nonsmooth multiobjective fractional programming problems. Lai and Ho [4] established sufficient optimality conditions for multiobjective fractional programming problems involving exponential V-r-invex Lipschitz functions. In [5], Kim and Schaible considered nonsmooth multiobjective programming problems with inequality and equality constraints involving locally Lipschitz functions and presented several sufficient optimality conditions under various invexity assumptions and regularity conditions. Nobakhtian [6] obtained optimality conditions and a mixed dual model for nonsmooth fractional multiobjective programming problems. Jeyakumar and Yang [7] considered nonsmooth constrained multiobjective optimization problems where the objective function and the constraints are compositions of convex functions and locally Lipschitz and Gâteaux differentiable functions; Lagrangian necessary conditions and new sufficient optimality conditions for efficient and properly efficient solutions were presented. Mishra and Mukherjee [8] extended the work of Jeyakumar and Yang [7] to the case where the objective and constraint functions are compositions of V-invex functions.
The present article begins with an extension of the results in [7, 8] from the non-fractional to the fractional case. We consider nonsmooth multiobjective programs where the objective functions are fractional compositions of invex functions and locally Lipschitz and Gâteaux differentiable functions. Kuhn-Tucker necessary conditions and sufficient optimality conditions for weakly efficient solutions are presented. We formulate dual problems and establish weak, strong and converse duality theorems for a weakly efficient solution.
2 Preliminaries
Let {\mathbb{R}}^{n} be the n-dimensional Euclidean space and {\mathbb{R}}_{+}^{n} be its nonnegative orthant. Throughout the paper, the following convention for inequalities will be used for x,y\in {\mathbb{R}}^{n}: x\geqq y if and only if {x}_{i}\geqq {y}_{i} for all i=1,\dots ,n; x\geq y if and only if x\geqq y and x\ne y; and x>y if and only if {x}_{i}>{y}_{i} for all i=1,\dots ,n.
The real-valued function f:{\mathbb{R}}^{n}\to \mathbb{R} is said to be locally Lipschitz if for any z\in {\mathbb{R}}^{n} there exist a positive constant K and a neighbourhood N of z such that, for each x,y\in N, |f(x)-f(y)|\leqq K\parallel x-y\parallel .
The Clarke generalized directional derivative of a locally Lipschitz function f at x in the direction d, denoted by {f}^{\circ}(x;d) (see, e.g., Clarke [9]), is given by {f}^{\circ}(x;d)={limsup}_{y\to x,t\downarrow 0}\frac{f(y+td)-f(y)}{t}.
The Clarke generalized subgradient of f at x is defined by \partial f(x)=\{\xi \in {\mathbb{R}}^{n}:{f}^{\circ}(x;d)\geqq {\xi}^{T}d,\mathrm{\forall}d\in {\mathbb{R}}^{n}\}.
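As a standard illustration of these two constructions, consider the absolute value function on \mathbb{R}, which is locally Lipschitz but not differentiable at the origin:

```latex
% Worked example: f(x) = |x|.
% For y near 0 and t > 0 one has \bigl||y+td| - |y|\bigr| \le t|d|,
% and the bound is attained along y = 0, so
f^{\circ}(0;d) = \limsup_{y\to 0,\; t\downarrow 0} \frac{|y+td|-|y|}{t} = |d|,
\qquad
\partial f(0) = \{\xi \in \mathbb{R} : |d| \ge \xi d,\ \forall d \in \mathbb{R}\} = [-1,1].
```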
Proposition 2.1 [9]
Let f, h be Lipschitz near x, and suppose h(x)\ne 0. Then \frac{f}{h} is Lipschitz near x, and one has \partial (\frac{f}{h})(x)\subseteq \frac{h(x)\partial f(x)-f(x)\partial h(x)}{{h}^{2}(x)}.
If, in addition, f(x)\geqq 0, h(x)>0 and if f and −h are regular at x, then equality holds and \frac{f}{h} is regular at x.
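In the smooth case, the subdifferentials in Proposition 2.1 reduce to singletons and the inclusion becomes the familiar quotient rule. A minimal numerical sanity check of this reduction, with hypothetical test functions f(x) = x^2 + 1 and h(x) = x + 2 chosen purely for illustration:

```python
# Sanity check of the quotient rule in the smooth case, where the Clarke
# subdifferential of f/h reduces to the single gradient
#   (h(x) f'(x) - f(x) h'(x)) / h(x)^2.
# f and h are hypothetical test functions chosen for illustration.

def f(x):
    return x * x + 1.0          # smooth, f(x) >= 0 everywhere

def h(x):
    return x + 2.0              # smooth, h(x) > 0 near x0

def quotient_rule(x):
    fp, hp = 2.0 * x, 1.0       # exact derivatives of f and h
    return (h(x) * fp - f(x) * hp) / h(x) ** 2

def central_diff(phi, x, eps=1e-6):
    # symmetric finite-difference approximation of phi'(x)
    return (phi(x + eps) - phi(x - eps)) / (2.0 * eps)

x0 = 1.5
numeric = central_diff(lambda t: f(t) / h(t), x0)
exact = quotient_rule(x0)
```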
In this paper, we consider the following composite multiobjective fractional programming problem:
(P): minimize (\frac{{f}_{1}({F}_{1}(x))}{{h}_{1}({F}_{1}(x))},\dots ,\frac{{f}_{p}({F}_{p}(x))}{{h}_{p}({F}_{p}(x))}) subject to {g}_{j}({G}_{j}(x))\leqq 0, j=1,2,\dots ,m, x\in C,
where
(1) C is an open convex subset of a Banach space X,
(2) {f}_{i}, {h}_{i}, i=1,2,\dots ,p, and {g}_{j}, j=1,2,\dots ,m, are real-valued locally Lipschitz functions on {\mathbb{R}}^{n}, and {F}_{i} and {G}_{j} are locally Lipschitz and Gâteaux differentiable functions from X into {\mathbb{R}}^{n} with Gâteaux derivatives {F}_{i}^{\prime}(\cdot ) and {G}_{j}^{\prime}(\cdot ), respectively, but are not necessarily continuously Fréchet differentiable or strictly differentiable [9],
(3) {f}_{i}(x)\geqq 0, {h}_{i}(x)>0, i=1,2,\dots ,p,
(4) {f}_{i}(x) and {h}_{i}(x) are regular.
Definition 2.1 A feasible point {x}_{0} is said to be a weakly efficient solution for (P) if there exists no feasible point x for which \frac{{f}_{i}({F}_{i}(x))}{{h}_{i}({F}_{i}(x))}<\frac{{f}_{i}({F}_{i}({x}_{0}))}{{h}_{i}({F}_{i}({x}_{0}))}, i=1,2,\dots ,p.
Definition 2.2 [10]
A locally Lipschitz function f is invex on {X}_{0}\subset {\mathbb{R}}^{n} if there exists a function \eta :{X}_{0}\times {X}_{0}\to {\mathbb{R}}^{n} such that, for all x,u\in {X}_{0}, f(x)-f(u)\geqq {\xi}^{T}\eta (x,u) for every \xi \in \partial f(u).
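Invexity strictly generalizes convexity: a differentiable function with no stationary points is always invex, with the explicit kernel η(x, u) = (f(x) − f(u))/f′(u). A small numerical illustration using the non-convex function f(x) = x³ + x (an illustrative choice, not from the paper):

```python
# f(x) = x**3 + x is not convex (f'' = 6x < 0 for x < 0), yet it is invex:
# f'(x) = 3x**2 + 1 > 0 never vanishes, so the kernel
#   eta(x, u) = (f(x) - f(u)) / f'(u)
# satisfies the invexity inequality f(x) - f(u) >= f'(u) * eta(x, u)
# (here with equality) for all x, u.

def f(x):
    return x ** 3 + x

def fprime(u):
    return 3.0 * u ** 2 + 1.0

def eta(x, u):
    return (f(x) - f(u)) / fprime(u)

grid = [i / 10.0 for i in range(-30, 31)]
invex_ok = all(f(x) - f(u) >= fprime(u) * eta(x, u) - 1e-9
               for x in grid for u in grid)

# by contrast, the convexity inequality f(x) - f(u) >= f'(u) * (x - u)
# fails (e.g. at u = -1, x = 0), so f is invex but not convex
convex_ok = all(f(x) - f(u) >= fprime(u) * (x - u)
                for x in grid for u in grid)
```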
Definition 2.3 [10]
A vector-valued function f=({f}_{1},\dots ,{f}_{p}):{X}_{0}\to {\mathbb{R}}^{p} is V-invex on {X}_{0}\subset {\mathbb{R}}^{n} if there exist functions \eta :{X}_{0}\times {X}_{0}\to {\mathbb{R}}^{n} and {\alpha}_{i}:{X}_{0}\times {X}_{0}\to {\mathbb{R}}_{+}\setminus \{0\} such that, for all x,u\in {X}_{0} and i=1,2,\dots ,p, {f}_{i}(x)-{f}_{i}(u)\geqq {\alpha}_{i}(x,u){\xi}_{i}^{T}\eta (x,u) for every {\xi}_{i}\in \partial {f}_{i}(u).
The following lemma is needed for the necessary optimality conditions and for the weak and converse duality theorems.
Lemma 2.1 [3]
If {f}_{i}\geqq 0, {h}_{i}>0, {f}_{i} and {h}_{i} are invex at u with respect to \eta (x,u), and {f}_{i} and {h}_{i} are regular at u, then \frac{{f}_{i}}{{h}_{i}} is Vinvex at u with respect to \overline{\eta}, where \overline{\eta}(x,u)=\frac{{h}_{i}(u)}{{h}_{i}(x)}\eta (x,u).
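A quick numerical check of Lemma 2.1 on a simple instance: take the hypothetical invex pair f(x) = x² (convex, nonnegative) and h(x) = x + 2 (affine, positive on [0, 3]) with η(x, u) = x − u; the quotient f/h should then satisfy the V-invexity inequality with respect to the scaled kernel η̄(x, u) = (h(u)/h(x))(x − u):

```python
# Grid check of Lemma 2.1: with f(x) = x**2, h(x) = x + 2 and
# eta(x, u) = x - u, the ratio phi = f/h satisfies
#   phi(x) - phi(u) >= phi'(u) * eta_bar(x, u),
# where eta_bar(x, u) = h(u)/h(x) * (x - u).  Data are illustrative.

def f(x):
    return x * x

def h(x):
    return x + 2.0

def phi(x):
    return f(x) / h(x)

def phiprime(u):
    # exact derivative of f/h at u (quotient rule)
    return (h(u) * 2.0 * u - f(u) * 1.0) / h(u) ** 2

def eta_bar(x, u):
    return h(u) / h(x) * (x - u)

grid = [i / 10.0 for i in range(0, 31)]      # domain [0, 3]
v_invex_ok = all(phi(x) - phi(u) >= phiprime(u) * eta_bar(x, u) - 1e-12
                 for x in grid for u in grid)
```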
3 Optimality conditions
Note that if F:X\to {\mathbb{R}}^{n} is locally Lipschitz near a point x\in X and Gâteaux differentiable at x, and if f:{\mathbb{R}}^{n}\to \mathbb{R} is locally Lipschitz near F(x), then the continuous sublinear function {\pi}_{x}:X\to \mathbb{R}, defined by
{\pi}_{x}(h)=max\{{v}^{T}{F}^{\prime}(x)h:v\in \partial f(F(x))\}, (3.1)
satisfies the inequality {(f\circ F)}_{+}^{\prime}(x;h)\leqq {\pi}_{x}(h) for all h\in X.
Recall that {q}_{+}^{\prime}(x;h)={limsup}_{\lambda \downarrow 0}{\lambda}^{-1}(q(x+\lambda h)-q(x)) is the upper Dini directional derivative of q:X\to \mathbb{R} at x in the direction of h, and \partial f(F(x)) is the Clarke subdifferential of f at F(x). The function {\pi}_{x}(\cdot ) in (3.1) is called an upper convex approximation of f\circ F at x, see [11, 12].
Note that for a set C, int C denotes the interior of C, and {C}^{+}=\{v\in {X}^{\prime}:v(x)\geqq 0,\mathrm{\forall}x\in C\} denotes the dual cone of C, where {X}^{\prime} is the topological dual space of X. It is also worth noting that for a convex set C, the closure of the cone generated by the set C at a point a, cl cone(C-a), is the tangent cone of C at a, and the dual cone {(C-a)}^{+} is the normal cone of C at a, see [9, 13].
Theorem 3.1 (Necessary optimality conditions)
Suppose that {f}_{i}, {h}_{i} and {g}_{j} are locally Lipschitz functions, and that {F}_{i} and {G}_{j} are locally Lipschitz and Gâteaux differentiable functions. If a\in C is a weakly efficient solution for (P), then there exist Lagrange multipliers {\lambda}_{i}\geqq 0, i=1,2,\dots ,p, and {\mu}_{j}\geqq 0, j=1,2,\dots ,m, not all zero, satisfying
Proof Let I=\{1,2,\dots ,p\}, {J}_{p}=\{p+j:j=1,2,\dots ,m\}, {J}_{p}(a)=\{p+j:{g}_{j}({G}_{j}(a))=0,j\in \{1,2,\dots ,m\}\}.
For convenience, we define
Suppose that the following system has a solution:
where {\pi}_{a}^{k}(d) is given by
Then the system
has a solution. So, there exists {\alpha}_{1}>0 such that a+\alpha d\in C, {l}_{k}(a+\alpha d)<{l}_{k}(a), k\in I\cup {J}_{p}(a), whenever 0<\alpha \leqq {\alpha}_{1}. Since {l}_{k}(a)<0 for k\in {J}_{p}\setminus {J}_{p}(a) and {l}_{k} is continuous in a neighbourhood of a, there exists {\alpha}_{2}>0 such that {l}_{k}(a+\alpha d)<0, whenever 0<\alpha \leqq {\alpha}_{2}, k\in {J}_{p}\setminus {J}_{p}(a). Let {\alpha}^{\ast}=min\{{\alpha}_{1},{\alpha}_{2}\}. Then a+\alpha d is a feasible solution for (P) and {l}_{k}(a+\alpha d)<{l}_{k}(a), k\in I for sufficiently small α such that 0<\alpha \leqq {\alpha}^{\ast}.
This contradicts the fact that a is a weakly efficient solution for (P). Hence (3.2) has no solution.
Since, for each k, {\pi}_{a}^{k}(\cdot ) is sublinear and cone(Ca) is convex, it follows from a separation theorem [12, 14] that there exist {\lambda}_{i}\geqq 0, i=1,\dots ,p, {\mu}_{j}\geqq 0, j\in {J}_{p}(a), not all zero, such that
Then, by applying standard arguments of convex analysis (see [15, 16]) and choosing {\mu}_{j}=0 whenever j\in {J}_{p}\setminus {J}_{p}(a), we have
So, there exist {\nu}_{i}\in {T}_{i}(a), {w}_{j}\in \partial {g}_{j}({G}_{j}(a)) satisfying
Hence, the conclusion holds. □
We now impose the following generalized Slater constraint qualification:
where J(a)=\{j:{g}_{j}({G}_{j}(a))=0,j=1,\dots ,m\}.
Under this constraint qualification, \lambda \ne 0. Choosing q\in {\mathbb{R}}^{p}, q>0, with {\lambda}^{T}q=1 and defining \mathrm{\Lambda}=q{q}^{T}, we can select the multipliers \overline{\lambda}=\mathrm{\Lambda}\lambda =q{q}^{T}\lambda =q>0 and \overline{\mu}=\mathrm{\Lambda}\mu =q{q}^{T}\mu \geqq 0. Hence the following Kuhn-Tucker type optimality conditions (KT) for (P) are obtained:
We present new conditions under which the optimality conditions (KT) become sufficient for weakly efficient solutions.
The following null space condition (NC) is taken from [7]:
Define K:X\to {\mathbb{R}}^{n(p+m)} by K(x)=({F}_{1}(x),\dots ,{F}_{p}(x),{G}_{1}(x),\dots ,{G}_{m}(x)). For each x,a\in X, the linear mapping {A}_{x,a}:X\to {\mathbb{R}}^{n(p+m)} is given by
where {\alpha}_{i}(x,a), i=1,2,\dots ,p and {\beta}_{j}(x,a), j=1,2,\dots ,m, are real positive constants. Let us denote the null space of a function H by N[H].
Recall, from the generalized Farkas lemma [14], that K(x)-K(a)\in {A}_{x,a}(X) if and only if {A}_{x,a}^{T}(u)=0\Rightarrow {u}^{T}(K(x)-K(a))=0. This observation prompts us to define the following general null space condition:
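The range/null-space equivalence invoked here is, in finite dimensions, the elementary fact that b lies in the range of A exactly when b is orthogonal to the null space of Aᵀ. A tiny hand-worked instance (the matrix is an arbitrary illustrative choice, not from the paper):

```python
# A: R^2 -> R^3 with A(u, v) = (u, v, u + v).  Its range is the plane
# { b : b3 = b1 + b2 }, and null(A^T) is spanned by n = (1, 1, -1).
# Then b in range(A)  <=>  b orthogonal to n, matching the Farkas-type
# statement: A^T u = 0 implies u^T b = 0.

n = (1.0, 1.0, -1.0)            # spans the null space of A^T

def in_range(b):
    return abs(b[2] - (b[0] + b[1])) < 1e-12

def orthogonal_to_null(b):
    return abs(b[0] * n[0] + b[1] * n[1] + b[2] * n[2]) < 1e-12

b_in = (1.0, 2.0, 3.0)          # = A(1, 2), so it lies in the range
b_out = (1.0, 2.0, 4.0)         # not of the form (u, v, u + v)
```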
For each x,a\in X, there exist real constants {\alpha}_{i}(x,a)>0, i=1,2,\dots ,p, and {\beta}_{j}(x,a)>0, j=1,2,\dots ,m, such that
where
Equivalently, the null space condition means that for each x,a\in X, there exist real constants {\alpha}_{i}(x,a)>0, i=1,2,\dots ,p, {\beta}_{j}(x,a)>0, j=1,2,\dots ,m, and \zeta (x,a)\in X such that {F}_{i}(x)-{F}_{i}(a)={\alpha}_{i}(x,a){F}_{i}^{\prime}(a)\zeta (x,a) and {G}_{j}(x)-{G}_{j}(a)={\beta}_{j}(x,a){G}_{j}^{\prime}(a)\zeta (x,a). For our problem (P), we assume the following generalized null space condition for invex functions (GNCI):
For each x,a\in C, there exist real constants {\alpha}_{i}(x,a)>0, i=1,2,\dots ,p, {\beta}_{j}(x,a)>0, j=1,2,\dots ,m, and \zeta (x,a)\in (C-a) such that \eta ({F}_{i}(x),{F}_{i}(a))={\alpha}_{i}(x,a){F}_{i}^{\prime}(a)\zeta (x,a) and \eta ({G}_{j}(x),{G}_{j}(a))={\beta}_{j}(x,a){G}_{j}^{\prime}(a)\zeta (x,a).
Note that when C=X, \eta ({F}_{i}(x),{F}_{i}(a))={F}_{i}(x)-{F}_{i}(a) and \eta ({G}_{j}(x),{G}_{j}(a))={G}_{j}(x)-{G}_{j}(a), the generalized null space condition for invex functions (GNCI) reduces to (NC).
Theorem 3.2 (Sufficient optimality conditions)
For the problem (P), assume that {f}_{i}, {h}_{i} and {g}_{j} are invex functions and {F}_{i} and {G}_{j} are locally Lipschitz and Gâteaux differentiable functions. Let u be feasible for (P). Suppose that the optimality conditions (KT) hold at u. If (GNCI) holds at each feasible point x of (P), then u is a weakly efficient solution of (P).
Proof From the optimality conditions (KT), there exist {\nu}_{i}\in {T}_{i}(u), {w}_{j}\in \partial {g}_{j}({G}_{j}(u)) such that
Suppose that u is not a weakly efficient solution of (P). Then there exists a feasible x\in C for (P) with \frac{{f}_{i}({F}_{i}(x))}{{h}_{i}({F}_{i}(x))}<\frac{{f}_{i}({F}_{i}(u))}{{h}_{i}({F}_{i}(u))}, i=1,2,\dots ,p.
By (GNCI), there exists \zeta (x,u)\in (C-u), the same for each {F}_{i} and {G}_{j}, such that \eta ({F}_{i}(x),{F}_{i}(u))={\alpha}_{i}(x,u){F}_{i}^{\prime}(u)\zeta (x,u), i=1,2,\dots ,p, and \eta ({G}_{j}(x),{G}_{j}(u))={\beta}_{j}(x,u){G}_{j}^{\prime}(u)\zeta (x,u), j=1,2,\dots ,m. Hence
This is a contradiction and hence u is a weakly efficient solution for (P). □
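A concrete scalar instance of Theorem 3.2 can be checked numerically. Take p = m = 1 with F and G the identity map (so the composite structure is trivial), f(y) = y², h(y) = y + 2, g(y) = −y and C = (−2, ∞); all of this data is an illustrative assumption, not an example from the paper. At u = 0 the conditions (KT) hold with λ = 1, μ = 0, and u is indeed optimal:

```python
# Scalar sketch of Theorem 3.2: minimize f(F(x))/h(F(x)) = x**2/(x + 2)
# subject to g(G(x)) = -x <= 0, with F = G = identity.  The invexity and
# regularity assumptions hold for this convex data on C = (-2, inf).

def phi(x):                     # composite fractional objective
    return x * x / (x + 2.0)

def g(x):                       # inequality constraint, feasible iff x >= 0
    return -x

def phiprime(u):                # exact derivative of phi
    return (2.0 * u * (u + 2.0) - u * u) / (u + 2.0) ** 2

u, lam, mu = 0.0, 1.0, 0.0      # candidate point and multipliers

# (KT): stationarity lam*phi'(u) + mu*g'(u) = 0 (here g'(u) = -1) and
# complementarity mu*g(u) = 0
kt_stationarity = abs(lam * phiprime(u) + mu * (-1.0)) < 1e-12
kt_complementarity = abs(mu * g(u)) < 1e-12

# sufficiency: u minimizes phi over a sample of the feasible set [0, 5]
feasible_grid = [i / 100.0 for i in range(0, 501)]
u_optimal = all(phi(x) >= phi(u) - 1e-12 for x in feasible_grid)
```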
4 Duality theorems
In this section, we introduce a dual programming problem and establish weak, strong and converse duality theorems. Now we propose the following dual (D) to (P).
Theorem 4.1 (Weak duality)
Let x be feasible for (P), and let (u,\lambda ,\mu ) be feasible for (D). Assume that (GNCI) holds with {\alpha}_{i}(x,u)={\beta}_{j}(x,u)=1. Moreover, {f}_{i}, {h}_{i} and {g}_{j} are invex functions and {F}_{i} and {G}_{j} are locally Lipschitz and Gâteaux differentiable functions. Then
Proof Since (u,\lambda ,\mu ) is feasible for (D), there exist {\lambda}_{i}>0, {\mu}_{j}\geqq 0, {\nu}_{i}\in {T}_{i}(u), i=1,2,\dots ,p, {w}_{j}\in \partial {g}_{j}({G}_{j}(u)), j=1,2,\dots ,m, satisfying {\mu}_{j}{g}_{j}({G}_{j}(u))\geqq 0 for j=1,2,\dots ,m and
Suppose that x\ne u and
Then
By the invexity of {f}_{i} and {h}_{i}, we have
Since \frac{{h}_{i}({F}_{i}(u))}{{h}_{i}({F}_{i}(x))}>0 and {\lambda}_{i}>0, it follows that
From the feasibility conditions, we get {\mu}_{j}{g}_{j}({G}_{j}(x))\leqq 0, {\mu}_{j}{g}_{j}({G}_{j}(u))\geqq 0, and so
Similarly, by the invexity of {g}_{j}, positivity of {\beta}_{j}(x,u) and by (GNCI), we have
By (4.1) and (4.2), we get
This is a contradiction. The proof is completed by noting that when x=u the conclusion trivially holds. □
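The display of the dual (D) is not reproduced above; assuming a standard Mond-Weir type form (maximize the fractional objective subject to the stationarity condition, μg(G(u)) ≧ 0, λ > 0, μ ≧ 0, as indicated in the authors' contributions), the weak duality inequality can be observed numerically on a scalar instance with objective x²/(x + 2) and constraint −x ≦ 0. Both the dual form and the data are illustrative assumptions:

```python
# Weak duality sketch, assuming a Mond-Weir type dual of
#   (P): minimize phi(x) = x**2/(x + 2) subject to -x <= 0,
# namely: maximize phi(u) subject to lam*phi'(u) + mu*(-1) = 0,
# mu*(-u) >= 0, lam > 0, mu >= 0.  Dual form and data are assumptions.

def phi(x):
    return x * x / (x + 2.0)

def phiprime(u):
    return (2.0 * u * (u + 2.0) - u * u) / (u + 2.0) ** 2

lam = 1.0
dual_values = []
for i in range(-150, 301):              # scan u over [-1.5, 3.0]
    u = i / 100.0
    mu = lam * phiprime(u)              # forced by stationarity
    if mu >= 0.0 and mu * (-u) >= 0.0:  # remaining dual feasibility
        dual_values.append(phi(u))

primal_values = [phi(i / 100.0) for i in range(0, 301)]   # feasible x >= 0
weak_duality = max(dual_values) <= min(primal_values) + 1e-12
```

On this instance only u = 0 is dual feasible, and its dual value 0 matches the primal optimum, so weak duality holds with equality.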
Theorem 4.2 (Strong duality)
For the problem (P), assume that the generalized Slater constraint qualification holds. If u is a weakly efficient solution for (P), then there exist \lambda \in {\mathbb{R}}^{p}, {\lambda}_{i}>0, \mu \in {\mathbb{R}}^{m}, {\mu}_{j}\geqq 0 such that (u,\lambda ,\mu ) is a weakly efficient solution for (D).
Proof It follows from Theorem 3.1 that there exist \lambda \in {\mathbb{R}}^{p}, {\lambda}_{i}>0, \mu \in {\mathbb{R}}^{m}, {\mu}_{j}\geqq 0 such that
Then (u,\lambda ,\mu ) is a feasible solution for (D). By weak duality,
Since (u,\lambda ,\mu ) is a feasible solution for (D), (u,\lambda ,\mu ) is a weakly efficient solution for (D). Hence the result holds. □
Theorem 4.3 (Converse duality)
Let (u,\lambda ,\mu ) be a weakly efficient solution of (D), and let a be a feasible solution of (P). Assume that {f}_{i}, {h}_{i} and {g}_{j} are invex functions and {F}_{i} and {G}_{j} are locally Lipschitz and Gâteaux differentiable functions. Moreover, (GNCI) holds with {\alpha}_{i}(x,u)={\beta}_{j}(x,u)=1. Then u is a weakly efficient solution of (P).
Proof Suppose, contrary to the result, that u is not a weakly efficient solution of (P). Then there exists a feasible point x of (P) such that
Since {f}_{i}, {h}_{i} are invex functions, for each {\nu}_{i}\in {T}_{i}(x), we have
Since (u,\lambda ,\mu ) is feasible for (D), we get
Since {\mu}_{j}{g}_{j}({G}_{j}(x))\leqq {\mu}_{j}{g}_{j}({G}_{j}(u)) by hypothesis and {g}_{j} is an invex function, for each {w}_{j}\in \partial {g}_{j}({G}_{j}(x)) it follows that
and, since {\mu}_{j}\geqq 0, j=1,2,\dots ,m, we have
From (4.3) and (4.4), we get
This is a contradiction. □
References
Schaible S: Fractional programming. In Handbook of Global Optimization. Edited by: Horst R, Pardalos PM. Kluwer Academic, Dordrecht; 1995:495–608.
Bector CR, Chandra S, Husain I: Optimality conditions and subdifferentiable multiobjective fractional programming. J. Optim. Theory Appl. 1993, 39: 105–125.
Kim DS: Nonsmooth multiobjective fractional programming with generalized invexity. Taiwan. J. Math. 2009, 10(2):467–478.
Lai HC, Ho SC: Optimality and duality for nonsmooth multiobjective fractional programming problems involving exponential V-r-invexity. Nonlinear Anal. 2012, 75: 3157–3166. 10.1016/j.na.2011.12.013
Kim DS, Schaible S: Optimality and duality for invex nonsmooth multiobjective programming problems. Optimization 2004, 53(2):165–176. 10.1080/0233193042000209435
Nobakhtian S: Optimality and duality for nonsmooth multiobjective fractional programming with mixed constraints. J. Glob. Optim. 2008, 41: 103–115. 10.1007/s10898-007-9168-7
Jeyakumar V, Yang XQ: Convex composite multiobjective nonsmooth programming. Math. Program. 1993, 59: 325–343. 10.1007/BF01581251
Mishra SK, Mukherjee RN: Generalized convex composite multiobjective nonsmooth programming and conditional proper efficiency. Optimization 1995, 34: 53–66. 10.1080/02331939508844093
Clarke FH: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York; 1983.
Egudo RR, Hanson MA: On sufficiency of Kuhn-Tucker conditions in nonsmooth multiobjective programming. FSU Technical Report No. M888, 51–58 (1993)
Jeyakumar V: Composite nonsmooth programming with Gâteaux differentiability. SIAM J. Control Optim. 1991, 1: 30–41. 10.1137/0801004
Jeyakumar V: On optimality conditions in nonsmooth inequality constrained minimization. Numer. Funct. Anal. Optim. 1987, 9: 535–546. 10.1080/01630568708816246
Rockafellar RT: Convex Analysis. Princeton University Press, Princeton; 1969.
Craven BD: Mathematical Programming and Control Theory. Chapman & Hall, London; 1978.
Jahn J: Scalarization in multiobjective optimization. Math. Program. 1984, 29: 203–219. 10.1007/BF02592221
Mangasarian OL: A simple characterization of solution sets of convex programs. Oper. Res. Lett. 1988, 7: 21–26. 10.1016/0167-6377(88)90047-8
Acknowledgements
This work was supported by a research grant of Pukyong National University (2013). The authors wish to thank the anonymous referees for their suggestions and comments.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
DSK presented necessary and sufficient optimality conditions, formulated MondWeir type dual problem and established weak, strong and converse duality theorems for nonconvex composite multiobjective nonsmooth fractional programs. HJK carried out the optimality and duality studies, participated in the sequence alignment and drafted the manuscript. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Kim, H.J., Kim, D.S. Nonconvex composite multiobjective nonsmooth fractional programming. J Inequal Appl 2013, 508 (2013). https://doi.org/10.1186/1029-242X-2013-508