
Optimality conditions of E-convex programming for an E-differentiable function

Abstract

In this paper we introduce a new definition of an E-differentiable convex function, which transforms a non-differentiable function into a differentiable one under an operator E: R n → R n . Using this definition, we can apply the Fritz-John and Kuhn-Tucker conditions to obtain the optimal solution of a mathematical programming problem with a non-differentiable function.

1 Introduction

The concepts of E-convex sets and E-convex functions were introduced by Youness in [1, 2], and they have important applications in various branches of the mathematical sciences. Youness in [1] introduced the classes of E-convex sets and E-convex functions by relaxing the definitions of convex sets and convex functions. This kind of generalized convexity is based on the effect of an operator E: R n → R n on the sets and on the domains of definition of functions. In [2] Youness discussed the optimality criteria of E-convex programming. Chen [3] introduced a new concept of semi E-convex functions and discussed their properties. Syau and Lee [4] established some properties of E-convex functions, while Emam and Youness in [5] introduced a new class of E-convex sets and E-convex functions, called strongly E-convex sets and strongly E-convex functions, by taking the images of two points x and y under an operator E: R n → R n together with the two points themselves. In [6] Megahed et al. introduced a combined interactive approach for solving E-convex multiobjective nonlinear programming. Also, in [7, 8] Iqbal et al. introduced geodesic E-convex sets, geodesic E-convex functions, and some properties of geodesic semi-E-convex functions.

In this paper we present the concept of an E-differentiable convex function, which transforms a non-differentiable convex function into a differentiable function under an operator E: R n → R n , for which we can apply the Fritz-John and Kuhn-Tucker conditions [9, 10] to find a solution of a mathematical programming problem with a non-differentiable function.

In the following, we present the definitions of E-convex sets, E-convex functions, and semi E-convex functions.

Definition 1[1]

A set M is said to be an E-convex set with respect to an operator E: R n → R n if and only if λE(x)+(1−λ)E(y)∈M for each x,y∈M and λ∈[0,1].

Definition 2[1]

A function f: R n →R is said to be an E-convex function with respect to an operator E: R n → R n on an E-convex set M⊆ R n if and only if

f(λE(x) + (1−λ)E(y)) ≤ λ(f∘E)(x) + (1−λ)(f∘E)(y)

for each x,y∈M and λ∈[0,1].

Definition 3[3]

A real-valued function f:M⊆ R n →R is said to be a semi E-convex function with respect to an operator E: R n → R n on M if M is an E-convex set and

f(λE(x) + (1−λ)E(y)) ≤ λf(x) + (1−λ)f(y)

for each x,y∈M and λ∈[0,1].

Proposition 4[1]

1- Let a set M ⊆ R n be an E-convex set with respect to an operator E: R n → R n ; then E(M) ⊆ M.

2- If E(M) is a convex set and E(M) ⊆ M, then M is an E-convex set.

3- If M 1 and M 2 are E-convex sets with respect to E, then M 1 ∩ M 2 is an E-convex set with respect to E.

Lemma 5[1]

Let M ⊆ R n be an E 1 - and E 2 -convex set; then M is an ( E 1 ∘ E 2 )- and ( E 2 ∘ E 1 )-convex set.

Lemma 6[1]

LetE: R n → R n be a linear map and let M 1 , M 2 ⊂ R n be E-convex sets, then M 1 + M 2 is an E-convex set.

Definition 7[1]

Let S ⊂ R n × R and E: R n → R n . We say that the set S is E-convex if for each (x,α),(y,β)∈S and each λ∈[0,1], we have

(λE(x) + (1−λ)E(y), λα + (1−λ)β) ∈ S.

2 Generalized E-convex function

Definition 8[1]

Let M ⊆ R n be an E-convex set with respect to an operator E: R n → R n . A function f:M→R is said to be a pseudo E-convex function if for each x 1 , x 2 ∈ M, ∇(f∘E)( x 1 )( x 2 − x 1 ) ≥ 0 implies f(E( x 2 )) ≥ f(E( x 1 )); equivalently, f(E( x 2 )) < f(E( x 1 )) implies ∇(f∘E)( x 1 )( x 2 − x 1 ) < 0.

Definition 9[1]

Let M⊆ R n be an E-convex set with respect to an operator E: R n → R n . A function f:M→R is said to be a quasi-E-convex function if and only if

f(λE(x) + (1−λ)E(y)) ≤ max{(f∘E)(x), (f∘E)(y)}

for each x,y∈M and λ∈[0,1].

3 E-differentiable function

Definition 10 Let f:M⊆ R n →R be a non-differentiable function at x ¯ and let E: R n → R n be an operator. A function f is said to be E-differentiable at x ¯ if and only if (f∘E) is a differentiable function at x ¯ and

(f∘E)(x) = (f∘E)( x ¯ ) + ∇(f∘E)( x ¯ )(x − x ¯ ) + ∥x − x ¯ ∥ α( x ¯ , x − x ¯ ), where α( x ¯ , x − x ¯ ) → 0 as x → x ¯ .

Example 11 Let f(x)=|x| be a non-differentiable function at the point x=0 and let E:R→R be an operator such that E(x)= x 2 , then the function (f∘E)(x)=f(Ex)= x 2 is a differentiable function at the point x=0, and hence f is an E-differentiable function.
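Since Example 11 is purely computational, it can be sanity-checked numerically. The sketch below is only an illustration: the helper one_sided_slopes and the step size h are our own choices, not part of the paper. It compares one-sided difference quotients of f and f∘E at 0.

```python
# Numerical check of Example 11: f(x) = |x| is not differentiable at 0,
# but (f o E)(x) = f(E(x)) = x**2 is, where E(x) = x**2.

def f(x):
    return abs(x)

def E(x):
    return x ** 2

def fE(x):
    return f(E(x))

def one_sided_slopes(g, x0, h=1e-6):
    """Right and left difference quotients of g at x0."""
    right = (g(x0 + h) - g(x0)) / h
    left = (g(x0) - g(x0 - h)) / h
    return right, left

r, l = one_sided_slopes(f, 0.0)
print(r, l)      # about +1 and -1: the slopes disagree, so f is not differentiable at 0
r2, l2 = one_sided_slopes(fE, 0.0)
print(r2, l2)    # both about 0: (f o E) is differentiable at 0 with derivative 0
```

The disagreement of the one-sided slopes of f (and their agreement for f∘E) mirrors the definition of E-differentiability at x̄ = 0.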

3.1 Problem formulation

Now, we formulate problems P and P E , which have a non-differentiable function and an E-differentiable function, respectively.

Let E: R n → R n be an operator, M be an E-convex set and f be an E-differentiable function. The problem P is defined as

P: Min f(x), subject to M = {x : g i (x) ≤ 0, i = 1, 2, …, m},

where f is a non-differentiable function, and the problem P E is defined as

P E : Min (f∘E)(x), subject to M′ = {x : ( g i ∘E)(x) ≤ 0, i = 1, 2, …, m},

where f is an E-differentiable function.

Now, we will discuss the relationship between the solutions of problems P and P E .

Lemma 12[11]

Let E: R n → R n be a one-to-one and onto operator and let M′ = {x : ( g i ∘E)(x) ≤ 0, i = 1, 2, …, m}. Then E(M′) = M, where M and M′ are the feasible regions of problems P and P E , respectively.

Theorem 13 Let E: R n → R n be a one-to-one and onto operator and let f be an E-differentiable function. If f is non-differentiable at x ¯ , and x ¯ is an optimal solution of the problem P, then there exists y ¯ ∈ M′ such that x ¯ = E( y ¯ ) and y ¯ is an optimal solution of the problem P E .

Proof Let x ¯ be an optimal solution of the problem P. By Lemma 12 there exists y ¯ ∈ M′ such that x ¯ = E( y ¯ ). Suppose y ¯ is not an optimal solution of the problem P E ; then there is y ˆ ∈ M′ such that (f∘E)( y ˆ ) < (f∘E)( y ¯ ). Also, there exists x ˆ ∈ M such that x ˆ = E( y ˆ ). Then f( x ˆ ) < f( x ¯ ), which contradicts the optimality of x ¯ for the problem P. Hence the proof is complete. □

Theorem 14 LetE: R n → R n be a one-to-one and onto operator, and let f be an E-differentiable function and strictly quasi-E-convex. If x ¯ is an optimal solution of the problem P, then there exists y ¯ ∈ M ′ such that x ¯ =E( y ¯ )and y ¯ is an optimal solution of the problem  P E .

Proof Let x ¯ be an optimal solution of the problem P. Then from Lemma 12 there is y ¯ ∈ M′ such that x ¯ = E( y ¯ ). Suppose y ¯ is not an optimal solution of the problem P E ; then there is y ˆ ∈ M′ , and also x ˆ ∈ M with x ˆ = E( y ˆ ), such that (f∘E)( y ˆ ) < (f∘E)( y ¯ ). Since f is a strictly quasi-E-convex function, then

f(λE( y ¯ ) + (1−λ)E( y ˆ )) < max{(f∘E)( y ¯ ), (f∘E)( y ˆ )} = max{f( x ¯ ), f( x ˆ )} = f( x ¯ ).

Since M is an E-convex set and E(M) ⊂ M, then λE( y ¯ ) + (1−λ)E( y ˆ ) ∈ M, which contradicts the assumption that x ¯ is a solution of the problem P. Hence y ¯ ∈ M′ is a solution of the problem P E with x ¯ = E( y ¯ ). □

Theorem 15 Let M be an E-convex set, let E: R n → R n be a one-to-one and onto operator and let f:M⊆ R n →R be an E-differentiable function at x ¯ . If there is a vector d ∈ R n such that ∇(f∘E)( x ¯ )d < 0, then there exists δ > 0 such that

(f∘E)( x ¯ +λd)<(f∘E)( x ¯ ) for each λ∈(0,δ).

Proof Since f is an E-differentiable function at x ¯ , then

(f∘E)( x ¯ + λd) = (f∘E)( x ¯ ) + λ∇(f∘E)( x ¯ )d + λ∥d∥ α( x ¯ , λd), where α( x ¯ , λd) → 0 as λ → 0.

Since ∇(f∘E)( x ¯ )d<0 and α( x ¯ ,λd)→0 as λ→0, then there exists δ>0 such that

∇(f∘E)( x ¯ )d + ∥d∥ α( x ¯ , λd) < 0 for each λ∈(0,δ)

and thus (f∘E)( x ¯ +λd)<(f∘E)( x ¯ ). □

Corollary 16 Let M be an E-convex set, let E: R n → R n be a one-to-one and onto operator, and let f:M⊆ R n →R be an E-differentiable and strictly E-convex function at x ¯ . If x ¯ is a local minimum of the function (f∘E), then ∇(f∘E)( x ¯ ) = 0.

Proof Suppose that ∇(f∘E)( x ¯ )≠0 and let d=−∇(f∘E)( x ¯ ), then ∇(f∘E)( x ¯ )d=− ∥ ∇ ( f ∘ E ) ( x ¯ ) ∥ 2 <0. By Theorem 15 there exists δ>0 such that

(f∘E)( x ¯ + λd) < (f∘E)( x ¯ ) for each λ∈(0,δ),

contradicting the assumption that x ¯ is a local minimum of (f∘E)(x), and thus ∇(f∘E)( x ¯ )=0. □

Theorem 17 Let M be an E-convex set, E: R n → R n be a one-to-one and onto operator, and f:M⊆ R n →R be a twice E-differentiable and strictly E-convex function at x ¯ . If x ¯ is a local minimum of (f∘E), then ∇(f∘E)( x ¯ ) = 0 and the Hessian matrix H( x ¯ ) = ∇ 2 (f∘E)( x ¯ ) is positive semidefinite.

Proof Suppose that d is an arbitrary direction. Since f is a twice E-differentiable function at x ¯ , then

(f∘E)( x ¯ + λd) = (f∘E)( x ¯ ) + λ∇(f∘E)( x ¯ )d + (1/2) λ 2 d t ∇ 2 (f∘E)( x ¯ )d + λ 2 ∥d∥ 2 α( x ¯ , λd),

where α( x ¯ ,λd)→0 as λ→0.

From Corollary 16 we have ∇(f∘E)( x ¯ ) = 0, and

((f∘E)( x ¯ + λd) − (f∘E)( x ¯ )) / λ 2 = (1/2) d t ∇ 2 (f∘E)( x ¯ )d + ∥d∥ 2 α( x ¯ , λd).

Since x ¯ is a local minimum of (f∘E), then (f∘E)( x ¯ ) ≤ (f∘E)( x ¯ + λd) for all sufficiently small λ > 0. Letting λ → 0, we get

d t ∇ 2 (f∘E)( x ¯ )d ≥ 0, i.e., H( x ¯ ) = ∇ 2 (f∘E)( x ¯ ) is positive semidefinite.

 □

Example 18 Let f(x,y) = x + 2 y 2 − 2 x 1/3 , which is non-differentiable at (0,y), and let E(x,y) = ( x 3 , y). Then (f∘E)(x,y) = x 3 + 2 y 2 − 2x, and

∂(f∘E)/∂x = 3 x 2 − 2 = 0 implies x = ±√(2/3), ∂(f∘E)/∂y = 4y = 0 implies y = 0,
∂ 2 (f∘E)/∂ x 2 = 6x, ∂ 2 (f∘E)/∂y∂x = 0, ∂ 2 (f∘E)/∂ y 2 = 4, ∂ 2 (f∘E)/∂x∂y = 0.

Then ( x 1 , y 1 ) = (√(2/3), 0) and ( x 2 , y 2 ) = (−√(2/3), 0) are the stationary points of (f∘E)(x,y). The Hessian matrix H(√(2/3), 0) = [ 6√(2/3) 0 ; 0 4 ] is positive definite, and thus the point (√(2/3), 0) is a local minimum of the function (f∘E)(x,y), while the Hessian matrix H(−√(2/3), 0) = [ −6√(2/3) 0 ; 0 4 ] is indefinite.
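The stationary-point analysis of Example 18 can be verified mechanically. The sketch below is illustrative only: grad and hessian are hand-coded from the partial derivatives given in the text, and the minor test is the standard 2×2 definiteness criterion.

```python
import math

# Example 18 check: (f o E)(x, y) = x**3 + 2*y**2 - 2*x, where E(x, y) = (x**3, y).

def grad(x, y):
    """Gradient of (f o E): (3x^2 - 2, 4y)."""
    return (3 * x**2 - 2, 4 * y)

def hessian(x, y):
    """Hessian of (f o E); only the (1,1) entry depends on x."""
    return ((6 * x, 0.0), (0.0, 4.0))

xs = math.sqrt(2.0 / 3.0)        # stationary points: (+xs, 0) and (-xs, 0)

g_plus = grad(xs, 0.0)           # should vanish
g_minus = grad(-xs, 0.0)         # should vanish

(a, _), (_, d) = hessian(xs, 0.0)
det_plus = a * d                 # a > 0 and det > 0  -> positive definite
(a2, _), (_, d2) = hessian(-xs, 0.0)
det_minus = a2 * d2              # a2 < 0 and det < 0 -> indefinite (saddle)

print(g_plus, a, det_plus)       # local minimum at (+sqrt(2/3), 0)
print(g_minus, a2, det_minus)    # saddle at (-sqrt(2/3), 0)
```

The signs of the leading principal minors reproduce the conclusion in the text: a minimum at (√(2/3), 0) and an indefinite Hessian at (−√(2/3), 0).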

Theorem 19 Let M be an E-convex set, letE: R n → R n be a one-to-one and onto operator, and letf:M⊆ R n →Rbe a twice E-differentiable and strictly E-convex function at x ¯ . If∇(f∘E)( x ¯ )=0and the Hessian matrixH( x ¯ )= ∇ 2 (f∘E)( x ¯ )is positive definite, then x ¯ is a local minimum of(f∘E).

Proof Suppose that x ¯ is not a local minimum of (f∘E)(x); then there exists a sequence { x k } converging to x ¯ such that (f∘E)( x k ) < (f∘E)( x ¯ ) for each k. Since ∇(f∘E)( x ¯ ) = 0 and f is twice E-differentiable at x ¯ , then

(f∘E)( x k ) = (f∘E)( x ¯ ) + ∇(f∘E)( x ¯ )( x k − x ¯ ) + (1/2)( x k − x ¯ ) t ∇ 2 (f∘E)( x ¯ )( x k − x ¯ ) + ∥ x k − x ¯ ∥ 2 α( x ¯ , x k − x ¯ ),

where α( x ¯ ,( x k − x ¯ ))→0 as k→∞, and

1 2 ( x k − x ¯ ) t ∇ 2 (f∘E)( x ¯ )( x k − x ¯ )+ ∥ ( x k − x ¯ ) ∥ 2 α ( x ¯ , ( x k − x ¯ ) ) <0for each k.

Dividing by ∥ x k − x ¯ ∥ 2 and letting d k = ( x k − x ¯ )/∥ x k − x ¯ ∥, we get

1 2 d k t ∇ 2 (f∘E)( x ¯ ) d k +α ( x ¯ , ( x k − x ¯ ) ) <0for each k.

But ∥ d k ∥ = 1 for each k, and hence there exists an index set K such that { d k } K → d, where ∥d∥ = 1. Considering this subsequence and the fact that α( x ¯ , x k − x ¯ ) → 0 as k → ∞, we get d t ∇ 2 (f∘E)( x ¯ )d ≤ 0. This contradicts the assumption that H( x ¯ ) is positive definite. Therefore x ¯ is indeed a local minimum. □

Example 20 Let f(x,y) = x 2/3 + y 2 − 1, which is non-differentiable at the point (0,y), and let E(x,y) = ( x 3 , y). Then (f∘E)(x,y) = x 2 + y 2 − 1, and

∂(f∘E)/∂x = 2x, ∂(f∘E)/∂y = 2y, ∂ 2 (f∘E)/∂ x 2 = 2, ∂ 2 (f∘E)/∂ y 2 = 2, ∂ 2 (f∘E)/∂y∂x = ∂ 2 (f∘E)/∂x∂y = 0.

The necessary condition for x ¯ to be a local minimum of (f∘E) is ∇(f∘E)( x ¯ ) = 0, which gives x ¯ = (0,0), and the Hessian matrix

H= [ ∂ 2 ( f ∘ E ) ∂ x 2 ∂ 2 ( f ∘ E ) ∂ y ∂ x ∂ 2 ( f ∘ E ) ∂ x ∂ y ∂ 2 ( f ∘ E ) ∂ y 2 ] = [ 2 0 0 2 ]

is positive definite.
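As a quick sanity check on Example 20 (a sketch, not part of the original derivation; the random spot-check and sample count are our own choices), the transformed function can be compared against its claimed minimizer:

```python
import random

# Example 20 check: (f o E)(x, y) = x**2 + y**2 - 1, where E(x, y) = (x**3, y).
def fE(x, y):
    return x**2 + y**2 - 1

# The gradient (2x, 2y) vanishes only at the origin, and the Hessian is 2*I
# (positive definite), so (0, 0) should be the minimizer; spot-check below.
random.seed(0)
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
is_min = all(fE(0, 0) <= fE(x, y) for x, y in samples)
print(fE(0, 0), is_min)   # -1 True
```

Here (0, 0) is in fact a global minimum, since (f∘E)(x,y) = x² + y² − 1 ≥ −1 everywhere.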

Example 21 Let f(x,y) = x 1/3 + y − 1, which is non-differentiable at the point (0,y), and let E(x,y) = ( x 3 , y). Then (f∘E)(x,y) = x + y − 1.

Now, let M={ λ 1 (0,0)+ λ 2 (0,3)+ λ 3 (1,2)+ λ 4 (1,0)}∪{ λ 1 (0,0)+ λ 2 (0,−3)+ λ 3 (1,−2)+ λ 4 (1,0)}, ∑ i = 1 4 λ i =1, λ i ≥0 be an E-convex set with respect to operator E (the feasible region is shown in Figure 1) and

f(0,0) = −1, (f∘E)(0,0) = −1; f(0,3) = 2, (f∘E)(0,3) = 2; f(0,−3) = −4, (f∘E)(0,−3) = −4; f(1,2) = 2, (f∘E)(1,2) = 2; f(1,0) = 0, (f∘E)(1,0) = 0; f(1,−2) = −2, (f∘E)(1,−2) = −2.
Figure 1. The feasible region M.

Then y ¯ = (0,−3) is a solution of the problem P E and x ¯ = E(0,−3) = (0,−3) is a solution of the problem P.
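Because (f∘E)(x,y) = x + y − 1 is linear, its minimum over the polyhedral feasible region is attained at a vertex, so checking the vertices suffices. A small sketch (the vertex list is read off the definition of M in the text):

```python
# Example 21 check: minimize the linear function (f o E)(x, y) = x + y - 1
# over the vertices of the feasible region M.
def fE(x, y):
    return x + y - 1

# Vertices of the two convex pieces of M given in the text.
vertices = [(0, 0), (0, 3), (1, 2), (1, 0), (0, -3), (1, -2)]
values = {v: fE(*v) for v in vertices}
best = min(values, key=values.get)
print(values)
print(best, values[best])   # (0, -3) -4, matching the solution in the text
```

The minimizing vertex (0, −3) with value −4 agrees with the table of function values above.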

Definition 22 Let M be a nonempty E-convex set in R n and let E( x ¯ ) ∈ cl M. The cone of feasible directions of M at E( x ¯ ), denoted by D, is given by

D = {d : d ≠ 0, E( x ¯ ) + λd ∈ M for each λ ∈ [0,δ] for some δ > 0}.

Lemma 23 Let M be an E-convex set with respect to an operator E: R n → R n , and let f:M⊆ R n →R be E-differentiable at x ¯ . If x ¯ is a local minimum of the problem P E , then F 0 ∩ D = ϕ, where F 0 = {d : ∇(f∘E)( x ¯ )d < 0} and D is the cone of feasible directions of M at x ¯ .

Proof Suppose that there exists a vector d∈ F 0 ∩D. Then by Theorem 15, there exists δ 1 such that

(f∘E)( x ¯ + λd) < (f∘E)( x ¯ ) for each λ∈(0, δ 1 ).
(3.1)

By the definition of the cone of feasible direction, there exists δ 2 such that

E( x ¯ ) + λd ∈ M for each λ∈(0, δ 2 ).
(3.2)

From (3.1) and (3.2) we have (f∘E)( x ¯ + λd) < (f∘E)( x ¯ ) for each λ∈(0,δ), where δ = min{ δ 1 , δ 2 }, which contradicts the assumption that x ¯ is a local optimal solution; hence F 0 ∩ D = ϕ. □

Lemma 24 Let M be an open E-convex set with respect to an operator E: R n → R n , let f:M⊆ R n →R be E-differentiable at x ¯ and let g i : R n →R for i = 1, 2, …, m. Let x ¯ be a feasible solution of the problem P E and let I = {i : ( g i ∘E)( x ¯ ) = 0}. Furthermore, suppose that g i for i∈I is E-differentiable at x ¯ and that g i for i∉I is continuous at x ¯ . If x ¯ is a local optimal solution, then F 0 ∩ G 0 = ϕ, where

F 0 = { d : ∇ ( f ∘ E ) ( x ¯ ) d < 0 } , G 0 = { d : ∇ ( g i ∘ E ) ( x ¯ ) d < 0 , for each i ∈ I }

and E is one-to-one and onto.

Proof Let d∈ G 0 . Since E( x ¯ )∈M and M is an open E-convex set, there exists a δ 1 >0 such that

E( x ¯ ) + λd ∈ M for λ∈(0, δ 1 ).
(3.3)

Also, since ( g i ∘E)( x ¯ )<0 and since g i is continuous at x ¯ for i∉I, there exists a δ 2 >0 such that

( g i ∘E)( x ¯ + λd) < 0 for λ∈(0, δ 2 ) and for i∉I.
(3.4)

Finally, since d∈ G 0 , ∇( g i ∘E)( x ¯ )d<0 for each i∈I and by Theorem 15, there exists δ 3 >0 such that

( g i ∘E)( x ¯ + λd) < ( g i ∘E)( x ¯ ) for λ∈(0, δ 3 ) and i∈I.
(3.5)

From (3.3), (3.4) and (3.5), it is clear that points of the form E( x ¯ ) + λd are feasible for the problem P E for each λ∈(0,δ), where δ = min( δ 1 , δ 2 , δ 3 ). Thus d∈D, where D is the cone of feasible directions of the feasible region at x ¯ . We have shown that d ∈ G 0 implies d ∈ D, and hence G 0 ⊂ D. By Lemma 23, since x ¯ is a local solution of the problem P E , F 0 ∩ D = ϕ. It follows that F 0 ∩ G 0 = ϕ. □

Theorem 25 (Fritz-John optimality conditions)

Let M be an open E-convex set with respect to the one-to-one and onto operator E: R n → R n , let f:M⊆ R n →R be E-differentiable at x ¯ and let g i : R n →R for i = 1, 2, …, m. Let x ¯ be a feasible solution of the problem P E and let I = {i : ( g i ∘E)( x ¯ ) = 0}. Furthermore, suppose that g i for i∈I is differentiable at x ¯ and that g i for i∉I is continuous at x ¯ . If x ¯ is a local optimal solution, then there exist scalars u 0 and u i for i∈I, not all zero, such that

u 0 ∇(f∘E)( x ¯ ) + ∑ i∈I u i ∇( g i ∘E)( x ¯ ) = 0, u 0 , u i ≥ 0 for i∈I,

and E( x ¯ ) is a local solution of the problem P.

Proof Let x ¯ be a local solution of the problem P E ; then there is no vector d such that ∇(f∘E)( x ¯ )d < 0 and ∇( g i ∘E)( x ¯ )d < 0 for each i∈I. Let A be the matrix whose rows are ∇(f∘E)( x ¯ ) and ∇( g i ∘E)( x ¯ ), i∈I. Since the system Ad < 0 is inconsistent, by Gordan's theorem [10] there exists a nonzero vector b ≥ 0 such that A t b = 0, where b = ( u 0 , u i , i∈I). And thus

u 0 ∇(f∘E)( x ¯ ) + ∑ i∈I u i ∇( g i ∘E)( x ¯ ) = 0

holds and E( x ¯ ) is a local solution of the problem P. □

Theorem 26 Let E: R n → R n be a one-to-one and onto operator and let f:M⊆ R n →R be an E-differentiable function. If x ¯ is an optimal solution of the problem P, then there exists y ¯ ∈ M′ with x ¯ = E( y ¯ ) such that y ¯ is an optimal solution of the problem P E and the Fritz-John optimality conditions of the problem P E are satisfied.

Proof Let x ¯ be an optimal solution of the problem P. Since E is one-to-one and onto, by Theorem 13 there exists y ¯ ∈ M′ with x ¯ = E( y ¯ ) such that y ¯ is an optimal solution of the problem P E . Hence there exist scalars u 0 , u i satisfying the Fritz-John optimality conditions of the problem P E :

u 0 ∇(f∘E)( y ¯ ) + ∑ i∈I u i ∇( g i ∘E)( y ¯ ) = 0, ( u 0 , u I ) ≠ (0,0), u 0 , u i ≥ 0.

 □

Theorem 27 (Kuhn-Tucker necessary condition)

Let M be an open E-convex set with respect to the one-to-one and onto operator E: R n → R n , let f:M⊆ R n →R be E-differentiable and strictly E-convex at x ¯ and let g i : R n →R for i = 1, 2, …, m. Let y ¯ be a feasible solution of the problem P E and let I = {i : ( g i ∘E)( y ¯ ) = 0}. Furthermore, suppose that ( g i ∘E) is continuous at y ¯ for i∉I and that the gradients ∇( g i ∘E)( y ¯ ) for i∈I are linearly independent. If x ¯ is a solution of the problem P, x ¯ = E( y ¯ ) and y ¯ is a local solution of the problem P E , then there exist scalars u i for i∈I such that

∇(f∘E)( y ¯ )+ ∑ i ∈ I u i ∇( g i ∘E)( y ¯ )=0, u i ≥0 for each i∈I.

Proof From the Fritz-John optimality condition theorem, there exist scalars u 0 and u i for each i∈I such that

u 0 ∇(f∘E)( y ¯ ) + ∑ i∈I u ˆ i ∇( g i ∘E)( y ¯ ) = 0, u 0 , u ˆ i ≥ 0 for each i∈I.

If u 0 = 0, the linear independence of the gradients ∇( g i ∘E)( y ¯ ), i∈I, would be violated; hence u 0 > 0. Taking u i = u ˆ i / u 0 , we get ∇(f∘E)( y ¯ ) + ∑ i∈I u i ∇( g i ∘E)( y ¯ ) = 0 with u i ≥ 0 for each i∈I. □

Theorem 28 Let M be an open E-convex set with respect to the one-to-one and onto operator E: R n → R n , let g i : R n →R for i = 1, 2, …, m, and let f:M⊆ R n →R be E-differentiable and strictly E-convex at x ¯ . Let y ¯ be a feasible solution of the problem P E with x ¯ = E( y ¯ ), and let I = {i : ( g i ∘E)( y ¯ ) = 0}. Suppose that f is pseudo E-convex at y ¯ and that g i is quasi-E-convex and differentiable at y ¯ for each i∈I. Furthermore, suppose that the Kuhn-Tucker conditions hold at y ¯ . Then y ¯ is a global optimal solution of the problem P E and hence x ¯ = E( y ¯ ) is a solution of the problem P.

Proof Let y ˆ be a feasible solution of the problem P E ; then ( g i ∘E)( y ˆ ) ≤ ( g i ∘E)( y ¯ ) for each i∈I, since ( g i ∘E)( y ˆ ) ≤ 0 and ( g i ∘E)( y ¯ ) = 0. As g i is quasi-E-convex at y ¯ , then

( g i ∘ E ) ( y ¯ + λ ( y ˆ − y ¯ ) ) = ( g i ∘ E ) ( λ y ˆ + ( 1 − λ ) y ¯ ) ≤ max { ( g i ∘ E ) ( y ˆ ) , ( g i ∘ E ) ( y ¯ ) } = ( g i ∘ E ) ( y ¯ ) .

This means that ( g i ∘E) does not increase by moving from y ¯ along the direction y ˆ − y ¯ . Then, by Theorem 15, we must have ∇( g i ∘E)( y ¯ )( y ˆ − y ¯ ) ≤ 0. Multiplying by u i and summing over I, we get

[ ∑ i∈I u i ∇( g i ∘E)( y ¯ ) ] ( y ˆ − y ¯ ) ≤ 0.

But since

∇(f∘E)( y ¯ )+ ∑ i ∈ I u i ∇( g i ∘E)( y ¯ )=0,

it follows that ∇(f∘E)( y ¯ )( y ˆ − y ¯ ) ≥ 0. Since f is pseudo E-convex at y ¯ , we get

(f∘E)( y ˆ ) ≥ (f∘E)( y ¯ ).

Then y ¯ is a global solution of the problem P E and from Theorem 13 x ¯ =E( y ¯ ) is a global solution of the problem P. □

Example 29 Consider the following problem (problem P):

Min f(x,y) = x 2/3 + y 2 , subject to x 2 + y 2 ≤ 5, x + 2y ≤ 4, x, y ≥ 0.

The feasible region of this problem is shown in Figure 2.

Figure 2. The feasible region M.

Let E(x,y)=( 1 8 x 3 , 1 3 y), then the problem P E is as follows:

Min (f∘E)(x,y) = x 2 /4 + y 2 /9, subject to x 6 /64 + y 2 /9 ≤ 5, x 3 /8 + 2y/3 ≤ 4, x, y ≥ 0.

We note that E(M)⊂M, where

(√5, 0) ∈ M implies E(√5, 0) = (5√5/8, 0) ∈ M; (0, 2) ∈ M implies E(0, 2) = (0, 2/3) ∈ M; (0, 0) ∈ M implies E(0, 0) = (0, 0) ∈ M; (2, 1) ∈ M implies E(2, 1) = (1, 1/3) ∈ M.

The Kuhn-Tucker conditions are as follows:

∇(f∘E)(x,y) + u 1 ∇( g 1 ∘E)(x,y) + u 2 ∇( g 2 ∘E)(x,y) = 0,

[ x/2 ; 2y/9 ] + u 1 [ 6 x 5 /64 ; 2y/9 ] + u 2 [ 3 x 2 /8 ; 2/3 ] = 0,

u 1 [ x 6 /64 + y 2 /9 − 5 ] = 0, u 2 [ x 3 /8 + 2y/3 − 4 ] = 0.

The solution is x = 0, y = 0, u 1 = 0, u 2 = 0, so z ¯ = (0,0), and x ¯ = E( z ¯ ) = (0,0) is a solution of the problem P.
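The Kuhn-Tucker system above can be verified mechanically at the reported point. The sketch below is illustrative only: the helper names grad_fE, g1E, g2E are our own, coded directly from the transformed problem P E in the text.

```python
# Example 29 check: Kuhn-Tucker conditions of the transformed problem P_E
# at (x, y) = (0, 0) with multipliers u1 = u2 = 0.

def grad_fE(x, y):            # gradient of (f o E)(x, y) = x**2/4 + y**2/9
    return (x / 2, 2 * y / 9)

def g1E(x, y):                # constraint x**6/64 + y**2/9 - 5 <= 0
    return x**6 / 64 + y**2 / 9 - 5

def g2E(x, y):                # constraint x**3/8 + 2*y/3 - 4 <= 0
    return x**3 / 8 + 2 * y / 3 - 4

x, y, u1, u2 = 0.0, 0.0, 0.0, 0.0
stationary = grad_fE(x, y) == (0.0, 0.0)   # u1 = u2 = 0, so only grad(f o E) remains
feasible = g1E(x, y) <= 0 and g2E(x, y) <= 0 and x >= 0 and y >= 0
complementary = u1 * g1E(x, y) == 0 and u2 * g2E(x, y) == 0
print(stationary, feasible, complementary)   # True True True
```

Since the gradient of f∘E already vanishes at the origin and both constraints are inactive there, zero multipliers satisfy stationarity, feasibility, and complementary slackness simultaneously.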

4 Conclusion

In this paper we introduced a new definition of an E-differentiable convex function, which transforms a non-differentiable function into a differentiable one under an operator E: R n → R n , and we studied the Kuhn-Tucker and Fritz-John conditions for obtaining an optimal solution of a mathematical programming problem with a non-differentiable function. Finally, some examples were presented to clarify the results.

References

1. Youness EA: E-convex sets, E-convex functions and E-convex programming. J. Optim. Theory Appl. 1999, 102(3):439–450.

2. Youness EA: Optimality criteria in E-convex programming. Chaos Solitons Fractals 2001, 12:1737–1745.

3. Chen X: Some properties of semi-E-convex functions. J. Math. Anal. Appl. 2002, 275:251–262.

4. Syau Y-R, Lee ES: Some properties of E-convex functions. Appl. Math. Lett. 2005, 18:1074–1080.

5. Emam T, Youness EA: Semi strongly E-convex function. J. Math. Stat. 2005, 1(1):51–57.

6. Megahed AA, Gomma HG, Youness EA, El-Banna AH: A combined interactive approach for solving E-convex multiobjective nonlinear programming. Appl. Math. Comput. 2011, 217:6777–6784.

7. Iqbal A, Ahmad I, Ali S: Some properties of geodesic semi-E-convex functions. Nonlinear Anal., Theory Methods Appl. 2011, 74:6805–6813.

8. Iqbal A, Ali S, Ahmad I: On geodesic E-convex sets, geodesic E-convex functions and E-epigraphs. J. Optim. Theory Appl. 2012.

9. Mangasarian OL: Nonlinear Programming. McGraw-Hill, New York; 1969.

10. Bazaraa MS, Shetty CM: Nonlinear Programming: Theory and Algorithms. Wiley, New York; 1979.

11. Youness EA: Characterization of efficient solutions of multiobjective E-convex programming problems. Appl. Math. Comput. 2004, 151(3):755–761.


Acknowledgements

The authors express their deep thanks and their respect to the referees and the Journal for these valuable comments in the evaluation of this paper.

Correspondence to Abd El-Monem A Megahed.

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Megahed, A.EM.A., Gomma, H.G., Youness, E.A. et al. Optimality conditions of E-convex programming for an E-differentiable function. J Inequal Appl 2013, 246 (2013). https://doi.org/10.1186/1029-242X-2013-246