# Optimality conditions of E-convex programming for an E-differentiable function

## Abstract

In this paper we introduce a new definition of an E-differentiable convex function, which transforms a non-differentiable function into a differentiable one under an operator $E: R^n \to R^n$. With this definition, the Kuhn-Tucker and Fritz-John conditions can be applied to obtain the optimal solution of a mathematical programming problem with a non-differentiable objective function.

## 1 Introduction

The concepts of E-convex sets and E-convex functions were introduced by Youness in [1, 2], and they have important applications in various branches of the mathematical sciences. Youness in [1] introduced the classes of E-convex sets and E-convex functions by relaxing the definitions of convex sets and convex functions. This kind of generalized convexity is based on the effect of an operator $E: R^n \to R^n$ on the sets and on the domain of definition of the functions. In [2] Youness discussed the optimality criteria of E-convex programming. Xiusu Chen [3] introduced a new concept of semi-E-convex functions and discussed its properties. Syau and Lee [4] established some properties of E-convex functions, while Emam and Youness in [5] introduced a new class of E-convex sets and E-convex functions, called strongly E-convex sets and strongly E-convex functions, by taking the images of two points x and y under an operator $E: R^n \to R^n$ besides the two points themselves. In [6] Megahed et al. introduced a combined interactive approach for solving E-convex multiobjective nonlinear programming problems. Also, in [7, 8] Iqbal et al. introduced geodesic E-convex sets, geodesic E-convex functions, and some properties of geodesic semi-E-convex functions.

In this paper we present the concept of an E-differentiable convex function, which transforms a non-differentiable convex function into a differentiable function under an operator $E: R^n \to R^n$, so that the Fritz-John and Kuhn-Tucker conditions [9, 10] can be applied to find a solution of a mathematical programming problem with a non-differentiable objective function.

In the following, we present the definitions of E-convex sets, E-convex functions, and semi E-convex functions.

Definition 1 [1]

A set M is said to be an E-convex set with respect to an operator $E: R^n \to R^n$ if and only if $\lambda E(x) + (1-\lambda)E(y) \in M$ for each $x, y \in M$ and $\lambda \in [0,1]$.

Definition 2 [1]

A function $f: R^n \to R$ is said to be an E-convex function with respect to an operator $E: R^n \to R^n$ on an E-convex set $M \subseteq R^n$ if and only if

$f(\lambda E(x) + (1-\lambda)E(y)) \le \lambda (f \circ E)(x) + (1-\lambda)(f \circ E)(y)$

for each $x, y \in M$ and $\lambda \in [0,1]$.

Definition 3 [3]

A real-valued function $f: M \subseteq R^n \to R$ is said to be a semi-E-convex function with respect to an operator $E: R^n \to R^n$ on M if M is an E-convex set and

$f(\lambda E(x) + (1-\lambda)E(y)) \le \lambda f(x) + (1-\lambda) f(y)$

for each $x, y \in M$ and $\lambda \in [0,1]$.

Proposition 4 [1]

1- Let a set $M \subseteq R^n$ be an E-convex set with respect to an operator $E: R^n \to R^n$; then $E(M) \subseteq M$.

2- If $E(M)$ is a convex set and $E(M) \subseteq M$, then M is an E-convex set.

3- If $M_1$ and $M_2$ are E-convex sets with respect to E, then $M_1 \cap M_2$ is an E-convex set with respect to E.

Lemma 5 [1]

Let $M \subseteq R^n$ be an $E_1$- and $E_2$-convex set; then M is an $(E_1 \circ E_2)$- and $(E_2 \circ E_1)$-convex set.

Lemma 6 [1]

Let $E: R^n \to R^n$ be a linear map and let $M_1, M_2 \subset R^n$ be E-convex sets; then $M_1 + M_2$ is an E-convex set.

Definition 7 [1]

Let $S \subset R^n \times R$ and $E: R^n \to R^n$. We say that the set S is E-convex if for each $(x, \alpha), (y, \beta) \in S$ and each $\lambda \in [0,1]$ we have

$(\lambda E x + (1-\lambda)E y, \lambda \alpha + (1-\lambda)\beta) \in S.$

## 2 Generalized E-convex function

Definition 8 [1]

Let $M \subseteq R^n$ be an E-convex set with respect to an operator $E: R^n \to R^n$. A function $f: M \to R$ is said to be a pseudo-E-convex function if for each $x_1, x_2 \in M$, $\nabla(f \circ E)(x_1)(x_2 - x_1) \ge 0$ implies $f(E x_2) \ge f(E x_1)$; equivalently, for all $x_1, x_2 \in M$, $f(E x_2) < f(E x_1)$ implies $\nabla(f \circ E)(x_1)(x_2 - x_1) < 0$.

Definition 9 [1]

Let $M \subseteq R^n$ be an E-convex set with respect to an operator $E: R^n \to R^n$. A function $f: M \to R$ is said to be a quasi-E-convex function if and only if

$f(\lambda E x + (1-\lambda)E y) \le \max\{(f \circ E)(x), (f \circ E)(y)\}$

for each $x, y \in M$ and $\lambda \in [0,1]$.

## 3 E-differentiable function

Definition 10 Let $f: M \subseteq R^n \to R$ be a non-differentiable function at $\bar{x}$ and let $E: R^n \to R^n$ be an operator. A function f is said to be E-differentiable at $\bar{x}$ if and only if $(f \circ E)$ is a differentiable function at $\bar{x}$, that is,

$(f \circ E)(\bar{x} + \lambda d) = (f \circ E)(\bar{x}) + \lambda \nabla(f \circ E)(\bar{x}) d + \lambda \|d\| \alpha(\bar{x}, \lambda d),$

where $\alpha(\bar{x}, \lambda d) \to 0$ as $\lambda \to 0$.

Example 11 Let $f(x) = |x|$ be a non-differentiable function at the point $x = 0$ and let $E: R \to R$ be an operator such that $E(x) = x^2$. Then the function $(f \circ E)(x) = f(E x) = x^2$ is differentiable at the point $x = 0$, and hence f is an E-differentiable function.
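As a quick sanity check, the composition in Example 11 can be verified symbolically; this sketch uses SymPy, which is an assumption of the illustration and not part of the paper:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Abs(x)                    # |x| is not differentiable at x = 0
E = x**2                         # the operator E(x) = x^2
fE = sp.simplify(f.subs(x, E))   # (f o E)(x) = |x^2| = x^2 for real x

# the composition reduces to x^2, which is differentiable everywhere
assert sp.simplify(fE - x**2) == 0
assert sp.diff(fE, x).subs(x, 0) == 0   # derivative at 0 exists and is 0
```

The key point is that $E$ folds the kink of $|x|$ away: the composition is a polynomial, so ordinary differential calculus applies to it.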

### 3.1 Problem formulation

Now, we formulate problems P and $P_E$, which have a non-differentiable objective function and an E-differentiable objective function, respectively.

Let $E: R^n \to R^n$ be an operator, M be an E-convex set and f be an E-differentiable function. The problem P is defined as

$P: \quad \min f(x) \quad \text{subject to} \quad x \in M = \{x : g_i(x) \le 0, i = 1, 2, \ldots, m\},$

where f is a non-differentiable function, and the problem $P_E$ is defined as

$P_E: \quad \min (f \circ E)(x) \quad \text{subject to} \quad x \in M' = \{x : (g_i \circ E)(x) \le 0, i = 1, 2, \ldots, m\},$

where f is an E-differentiable function.

Now, we will discuss the relationship between the solutions of problems P and ${P}_{E}$.

Lemma 12 [11]

Let $E: R^n \to R^n$ be a one-to-one and onto operator and let $M' = \{x : (g_i \circ E)(x) \le 0, i = 1, 2, \ldots, m\}$. Then $E(M') = M$, where M and $M'$ are the feasible regions of problems P and $P_E$, respectively.

Theorem 13 Let $E: R^n \to R^n$ be a one-to-one and onto operator and let f be an E-differentiable function. If f is non-differentiable at $\bar{x}$, and $\bar{x}$ is an optimal solution of the problem P, then there exists $\bar{y} \in M'$ such that $\bar{x} = E(\bar{y})$ and $\bar{y}$ is an optimal solution of the problem $P_E$.

Proof Let $\bar{x}$ be an optimal solution of the problem P. From Lemma 12 there exists $\bar{y} \in M'$ such that $\bar{x} = E(\bar{y})$. Suppose that $\bar{y}$ is not an optimal solution of the problem $P_E$; then there is $\hat{y} \in M'$ such that $(f \circ E)(\hat{y}) < (f \circ E)(\bar{y})$. Also, there exists $\hat{x} \in M$ such that $\hat{x} = E(\hat{y})$. Then $f(\hat{x}) < f(\bar{x})$, which contradicts the optimality of $\bar{x}$ for the problem P. Hence the proof is complete. □

Theorem 14 Let $E: R^n \to R^n$ be a one-to-one and onto operator, and let f be an E-differentiable and strictly quasi-E-convex function. If $\bar{x}$ is an optimal solution of the problem P, then there exists $\bar{y} \in M'$ such that $\bar{x} = E(\bar{y})$ and $\bar{y}$ is an optimal solution of the problem $P_E$.

Proof Let $\bar{x}$ be an optimal solution of the problem P. Then from Lemma 12 there is $\bar{y} \in M'$ such that $\bar{x} = E(\bar{y})$. Suppose that $\bar{y}$ is not an optimal solution of the problem $P_E$; then there are $\hat{y} \in M'$ and $\hat{x} \in M$, $\hat{x} = E(\hat{y})$, such that $(f \circ E)(\hat{y}) \le (f \circ E)(\bar{y})$. Since f is a strictly quasi-E-convex function, then for $\lambda \in (0,1)$

$f(\lambda E(\bar{y}) + (1-\lambda)E(\hat{y})) < \max\{(f \circ E)(\bar{y}), (f \circ E)(\hat{y})\} = \max\{f(\bar{x}), f(\hat{x})\} = f(\bar{x}).$

Since M is an E-convex set and $E(M) \subset M$, the point $\lambda E(\bar{y}) + (1-\lambda)E(\hat{y})$ belongs to M, which contradicts the assumption that $\bar{x}$ is a solution of the problem P. Hence there exists $\bar{y} \in M'$, a solution of the problem $P_E$, such that $\bar{x} = E(\bar{y})$. □

Theorem 15 Let M be an E-convex set, $E: R^n \to R^n$ be a one-to-one and onto operator and $f: M \subseteq R^n \to R$ be an E-differentiable function at $\bar{x}$. If there is a vector $d \in R^n$ such that $\nabla(f \circ E)(\bar{x}) d < 0$, then there exists $\delta > 0$ such that

$(f \circ E)(\bar{x} + \lambda d) < (f \circ E)(\bar{x}) \quad \text{for each } \lambda \in (0, \delta).$

Proof Since f is an E-differentiable function at $\bar{x}$, then

$(f \circ E)(\bar{x} + \lambda d) = (f \circ E)(\bar{x}) + \lambda \nabla(f \circ E)(\bar{x}) d + \lambda \|d\| \alpha(\bar{x}, \lambda d),$

that is,

$\frac{(f \circ E)(\bar{x} + \lambda d) - (f \circ E)(\bar{x})}{\lambda} = \nabla(f \circ E)(\bar{x}) d + \|d\| \alpha(\bar{x}, \lambda d).$

Since $\nabla(f \circ E)(\bar{x}) d < 0$ and $\alpha(\bar{x}, \lambda d) \to 0$ as $\lambda \to 0$, there exists $\delta > 0$ such that

$\nabla(f \circ E)(\bar{x}) d + \|d\| \alpha(\bar{x}, \lambda d) < 0 \quad \text{for each } \lambda \in (0, \delta),$

and thus $(f \circ E)(\bar{x} + \lambda d) < (f \circ E)(\bar{x})$. □
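The descent property of Theorem 15 can be checked numerically on a concrete E-differentiable composition. The function below is the one that appears later in Example 18, and the point and step sizes are arbitrary values chosen only for this illustration:

```python
import numpy as np

# (f o E)(x, y) = x^3 + 2y^2 - 2x, with E(x, y) = (x^3, y) as in Example 18
def fE(p):
    return p[0]**3 + 2*p[1]**2 - 2*p[0]

def grad_fE(p):
    return np.array([3*p[0]**2 - 2, 4*p[1]])

p = np.array([0.0, 1.0])
d = -grad_fE(p)                  # a direction with grad(fE)(p) . d < 0
assert grad_fE(p) @ d < 0
for lam in (0.1, 0.05, 0.01):    # (f o E)(p + lam*d) < (f o E)(p) for small lam
    assert fE(p + lam * d) < fE(p)
```

Here the negative gradient serves as the descent direction d, and the assertions confirm the strict decrease for several small values of λ.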

Corollary 16 Let M be an E-convex set, let $E: R^n \to R^n$ be a one-to-one and onto operator, and let $f: M \subseteq R^n \to R$ be an E-differentiable and strictly E-convex function at $\bar{x}$. If $\bar{x}$ is a local minimum of the function $(f \circ E)$, then $\nabla(f \circ E)(\bar{x}) = 0$.

Proof Suppose that $\nabla(f \circ E)(\bar{x}) \ne 0$ and let $d = -\nabla(f \circ E)(\bar{x})$; then $\nabla(f \circ E)(\bar{x}) d = -\|\nabla(f \circ E)(\bar{x})\|^2 < 0$. By Theorem 15 there exists $\delta > 0$ such that

$(f \circ E)(\bar{x} + \lambda d) < (f \circ E)(\bar{x}) \quad \text{for each } \lambda \in (0, \delta),$

contradicting the assumption that $\bar{x}$ is a local minimum of $(f \circ E)(x)$. Thus $\nabla(f \circ E)(\bar{x}) = 0$. □

Theorem 17 Let M be an E-convex set, $E: R^n \to R^n$ be a one-to-one and onto operator, and $f: M \subseteq R^n \to R$ be a twice E-differentiable and strictly E-convex function at $\bar{x}$. If $\bar{x}$ is a local minimum of $(f \circ E)$, then $\nabla(f \circ E)(\bar{x}) = 0$ and the Hessian matrix $H(\bar{x}) = \nabla^2 (f \circ E)(\bar{x})$ is positive semidefinite.

Proof Let d be an arbitrary direction. Since f is a twice E-differentiable function at $\bar{x}$, then

$(f \circ E)(\bar{x} + \lambda d) = (f \circ E)(\bar{x}) + \lambda \nabla(f \circ E)(\bar{x}) d + \frac{1}{2}\lambda^2 d^t \nabla^2 (f \circ E)(\bar{x}) d + \lambda^2 \|d\|^2 \alpha(\bar{x}, \lambda d),$

where $\alpha(\bar{x}, \lambda d) \to 0$ as $\lambda \to 0$.

From Corollary 16 we have $\nabla(f \circ E)(\bar{x}) = 0$, and

$\frac{(f \circ E)(\bar{x} + \lambda d) - (f \circ E)(\bar{x})}{\lambda^2} = \frac{1}{2} d^t \nabla^2 (f \circ E)(\bar{x}) d + \|d\|^2 \alpha(\bar{x}, \lambda d).$

Since $\bar{x}$ is a local minimum of $(f \circ E)$, we have $(f \circ E)(\bar{x}) \le (f \circ E)(\bar{x} + \lambda d)$ for all sufficiently small $\lambda$; letting $\lambda \to 0$ gives

$d^t \nabla^2 (f \circ E)(\bar{x}) d \ge 0, \quad \text{i.e.,} \quad H(\bar{x}) = \nabla^2 (f \circ E)(\bar{x}) \text{ is positive semidefinite.}$

□

Example 18 Let $f(x, y) = x + 2y^2 - 2x^{1/3}$ be a non-differentiable function at $(0, y)$, and let $E(x, y) = (x^3, y)$; then $(f \circ E)(x, y) = x^3 + 2y^2 - 2x$, and

$\frac{\partial (f \circ E)}{\partial x} = 3x^2 - 2 = 0 \quad \text{implies} \quad x = \pm\sqrt{\tfrac{2}{3}},$

$\frac{\partial (f \circ E)}{\partial y} = 4y = 0 \quad \text{implies} \quad y = 0,$

$\frac{\partial^2 (f \circ E)}{\partial x^2} = 6x, \qquad \frac{\partial^2 (f \circ E)}{\partial y \, \partial x} = 0, \qquad \frac{\partial^2 (f \circ E)}{\partial y^2} = 4, \qquad \frac{\partial^2 (f \circ E)}{\partial x \, \partial y} = 0.$

Then $(x_1, y_1) = (\sqrt{2/3}, 0)$ and $(x_2, y_2) = (-\sqrt{2/3}, 0)$ are the stationary points of $(f \circ E)(x, y)$. The Hessian matrix $H(\sqrt{2/3}, 0) = \begin{bmatrix} 6\sqrt{2/3} & 0 \\ 0 & 4 \end{bmatrix}$ is positive definite, and thus the point $(\sqrt{2/3}, 0)$ is a local minimum of the function $(f \circ E)(x, y)$, while the Hessian matrix $H(-\sqrt{2/3}, 0) = \begin{bmatrix} -6\sqrt{2/3} & 0 \\ 0 & 4 \end{bmatrix}$ is indefinite.
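The stationary points and Hessians of Example 18 can be reproduced symbolically; this SymPy sketch is an illustration added by the editor, not part of the original text:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
fE = x**3 + 2*y**2 - 2*x                    # (f o E)(x, y) from Example 18

# stationary points: 3x^2 - 2 = 0 and 4y = 0
crit = sp.solve([sp.diff(fE, x), sp.diff(fE, y)], [x, y], dict=True)
H = sp.hessian(fE, (x, y))                  # [[6x, 0], [0, 4]]

assert len(crit) == 2                       # x = +-sqrt(2/3), y = 0
pos = H.subs({x: sp.sqrt(sp.Rational(2, 3)), y: 0})
neg = H.subs({x: -sp.sqrt(sp.Rational(2, 3)), y: 0})
assert pos.is_positive_definite             # local minimum at (sqrt(2/3), 0)
assert not neg.is_positive_definite         # indefinite at (-sqrt(2/3), 0)
```

This confirms that only the positive root gives a local minimum of the composition.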

Theorem 19 Let M be an E-convex set, let $E: R^n \to R^n$ be a one-to-one and onto operator, and let $f: M \subseteq R^n \to R$ be a twice E-differentiable and strictly E-convex function at $\bar{x}$. If $\nabla(f \circ E)(\bar{x}) = 0$ and the Hessian matrix $H(\bar{x}) = \nabla^2 (f \circ E)(\bar{x})$ is positive definite, then $\bar{x}$ is a local minimum of $(f \circ E)$.

Proof Suppose that $\bar{x}$ is not a local minimum of $(f \circ E)(x)$; then there exists a sequence $\{x_k\}$ converging to $\bar{x}$ such that $(f \circ E)(x_k) < (f \circ E)(\bar{x})$ for each k. Since $\nabla(f \circ E)(\bar{x}) = 0$ and f is twice E-differentiable at $\bar{x}$, then

$(f \circ E)(x_k) = (f \circ E)(\bar{x}) + \frac{1}{2}(x_k - \bar{x})^t \nabla^2 (f \circ E)(\bar{x})(x_k - \bar{x}) + \|x_k - \bar{x}\|^2 \alpha(\bar{x}, x_k - \bar{x}),$

where $\alpha(\bar{x}, x_k - \bar{x}) \to 0$ as $k \to \infty$, and hence

$\frac{1}{2}(x_k - \bar{x})^t \nabla^2 (f \circ E)(\bar{x})(x_k - \bar{x}) + \|x_k - \bar{x}\|^2 \alpha(\bar{x}, x_k - \bar{x}) < 0.$

Dividing by $\|x_k - \bar{x}\|^2$ and letting $d_k = \frac{x_k - \bar{x}}{\|x_k - \bar{x}\|}$, we get

$\frac{1}{2} d_k^t \nabla^2 (f \circ E)(\bar{x}) d_k + \alpha(\bar{x}, x_k - \bar{x}) < 0.$

But $\|d_k\| = 1$ for each k, and hence there exists an index set K such that $\{d_k\}_K \to d$, where $\|d\| = 1$. Considering this subsequence and the fact that $\alpha(\bar{x}, x_k - \bar{x}) \to 0$ as $k \to \infty$, we obtain $d^t \nabla^2 (f \circ E)(\bar{x}) d \le 0$ with $d \ne 0$. This contradicts the assumption that $H(\bar{x})$ is positive definite. Therefore $\bar{x}$ is indeed a local minimum. □

Example 20 Let $f(x, y) = x^{2/3} + y^2 - 1$ be non-differentiable at the point $(0, y)$, and let $E(x, y) = (x^3, y)$; then $(f \circ E)(x, y) = x^2 + y^2 - 1$ and

$\frac{\partial (f \circ E)}{\partial x} = 2x, \qquad \frac{\partial^2 (f \circ E)}{\partial y \, \partial x} = 0, \qquad \frac{\partial^2 (f \circ E)}{\partial x^2} = 2,$

$\frac{\partial (f \circ E)}{\partial y} = 2y, \qquad \frac{\partial^2 (f \circ E)}{\partial x \, \partial y} = 0, \qquad \frac{\partial^2 (f \circ E)}{\partial y^2} = 2.$

The necessary condition for $\bar{x}$ to be a local minimum of $(f \circ E)$ is $\nabla(f \circ E)(\bar{x}) = 0$, which gives $\bar{x} = (0, 0)$, and the Hessian matrix

$H = \begin{bmatrix} \frac{\partial^2 (f \circ E)}{\partial x^2} & \frac{\partial^2 (f \circ E)}{\partial y \, \partial x} \\ \frac{\partial^2 (f \circ E)}{\partial x \, \partial y} & \frac{\partial^2 (f \circ E)}{\partial y^2} \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$

is positive definite.
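The computation in Example 20 admits the same kind of symbolic check (a SymPy sketch added for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
fE = x**2 + y**2 - 1                        # (f o E)(x, y) for Example 20

# the gradient system 2x = 0, 2y = 0 is linear, so solve returns a dict
stat = sp.solve([sp.diff(fE, x), sp.diff(fE, y)], [x, y])
H = sp.hessian(fE, (x, y))

assert stat == {x: 0, y: 0}                 # the only stationary point
assert H == 2 * sp.eye(2)                   # Hessian 2*I is positive definite
```

Since the Hessian is a positive multiple of the identity, $(0, 0)$ is a strict local (in fact global) minimum of the composition.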

Example 21 Let $f(x, y) = x^{1/3} + y - 1$ be non-differentiable at the point $(0, y)$, and let $E(x, y) = (x^3, y)$; then $(f \circ E)(x, y) = x + y - 1$.

Now, let $M = \{\lambda_1 (0,0) + \lambda_2 (0,3) + \lambda_3 (1,2) + \lambda_4 (1,0)\} \cup \{\lambda_1 (0,0) + \lambda_2 (0,-3) + \lambda_3 (1,-2) + \lambda_4 (1,0)\}$, $\sum_{i=1}^{4} \lambda_i = 1$, $\lambda_i \ge 0$, be an E-convex set with respect to the operator E (the feasible region is shown in Figure 1), and

$f(0,0) = -1, \qquad (f \circ E)(0,0) = -1, \qquad f(0,-3) = -4, \qquad (f \circ E)(0,-3) = -4,$

$f(1,2) = 2, \qquad (f \circ E)(1,2) = 2, \qquad f(1,0) = 0, \qquad (f \circ E)(1,0) = 0,$

$f(0,3) = 2, \qquad (f \circ E)(0,3) = 2, \qquad f(1,-2) = -2, \qquad (f \circ E)(1,-2) = -2.$

Then $\bar{x} = (0,-3)$ is a solution of the problem $P_E$ and $E(\bar{x}) = E(0,-3) = (0,-3)$ is a solution of the problem P.
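Since $(f \circ E)(x, y) = x + y - 1$ is linear, its minimum over the polytope of Example 21 is attained at a vertex, so enumerating the vertices (a small illustrative script) recovers the solution $(0, -3)$:

```python
# vertices of the two convex pieces of M in Example 21;
# E(x, y) = (x^3, y) fixes each of them since x is 0 or 1
vertices = [(0, 0), (0, 3), (1, 2), (1, 0), (0, -3), (1, -2)]

def fE(x, y):
    return x + y - 1            # (f o E)(x, y), linear in (x, y)

vals = {v: fE(*v) for v in vertices}
best = min(vals, key=vals.get)  # vertex with the smallest objective value
assert best == (0, -3) and vals[best] == -4
```

The enumeration matches the table of values above: the minimum value −4 is attained at $(0, -3)$.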

Definition 22 Let M be a nonempty E-convex set in $R^n$ and let $E(\bar{x}) \in \operatorname{cl} M$. The cone of feasible directions of $E(M)$ at $E(\bar{x})$, denoted by D, is given by

$D = \{d : d \ne 0 \text{ and } E(\bar{x}) + \lambda d \in E(M) \text{ for each } \lambda \in (0, \delta) \text{ for some } \delta > 0\}.$

Lemma 23 Let M be an E-convex set with respect to an operator $E: R^n \to R^n$, and let $f: M \subseteq R^n \to R$ be E-differentiable at $\bar{x}$. If $\bar{x}$ is a local minimum of the problem $P_E$, then $F_0 \cap D = \emptyset$, where $F_0 = \{d : \nabla(f \circ E)(\bar{x}) d < 0\}$ and D is the cone of feasible directions of M at $\bar{x}$.

Proof Suppose that there exists a vector $d \in F_0 \cap D$. Then by Theorem 15 there exists $\delta_1 > 0$ such that

$(f \circ E)(\bar{x} + \lambda d) < (f \circ E)(\bar{x}) \quad \text{for each } \lambda \in (0, \delta_1).$ (3.1)

By the definition of the cone of feasible directions, there exists $\delta_2 > 0$ such that

$\bar{x} + \lambda d \in M \quad \text{for each } \lambda \in (0, \delta_2).$ (3.2)

From (3.1) and (3.2) we have $(f \circ E)(\bar{x} + \lambda d) < (f \circ E)(\bar{x})$ for each $\lambda \in (0, \delta)$, where $\delta = \min\{\delta_1, \delta_2\}$, which contradicts the assumption that $\bar{x}$ is a local optimal solution. Hence $F_0 \cap D = \emptyset$. □

Lemma 24 Let M be an open E-convex set with respect to an operator $E: R^n \to R^n$, let $f: M \subseteq R^n \to R$ be E-differentiable at $\bar{x}$ and let $g_i: R^n \to R$ for $i = 1, 2, \ldots, m$. Let $\bar{x}$ be a feasible solution of the problem $P_E$ and let $I = \{i : (g_i \circ E)(\bar{x}) = 0\}$. Furthermore, suppose that $g_i$ for $i \in I$ is E-differentiable at $\bar{x}$ and that $g_i$ for $i \notin I$ is continuous at $\bar{x}$. If $\bar{x}$ is a local optimal solution, then $F_0 \cap G_0 = \emptyset$, where

$F_0 = \{d : \nabla(f \circ E)(\bar{x}) d < 0\},$

$G_0 = \{d : \nabla(g_i \circ E)(\bar{x}) d < 0 \text{ for each } i \in I\}$

and E is one-to-one and onto.

Proof Let $d \in G_0$. Since $E(\bar{x}) \in M$ and M is an open E-convex set, there exists $\delta_1 > 0$ such that

$E(\bar{x}) + \lambda d \in M \quad \text{for each } \lambda \in (0, \delta_1).$ (3.3)

Also, since $(g_i \circ E)(\bar{x}) < 0$ for $i \notin I$ and $g_i$ is continuous at $\bar{x}$ for $i \notin I$, there exists $\delta_2 > 0$ such that

$(g_i \circ E)(\bar{x} + \lambda d) < 0 \quad \text{for each } \lambda \in (0, \delta_2) \text{ and } i \notin I.$ (3.4)

Finally, since $d \in G_0$, we have $\nabla(g_i \circ E)(\bar{x}) d < 0$ for each $i \in I$, and by Theorem 15 there exists $\delta_3 > 0$ such that

$(g_i \circ E)(\bar{x} + \lambda d) < (g_i \circ E)(\bar{x}) = 0 \quad \text{for each } \lambda \in (0, \delta_3) \text{ and } i \in I.$ (3.5)

From (3.3), (3.4) and (3.5), it is clear that points of the form $E(\bar{x}) + \lambda d$ are feasible for the problem $P_E$ for each $\lambda \in (0, \delta)$, where $\delta = \min(\delta_1, \delta_2, \delta_3)$. Thus $d \in D$, where D is the cone of feasible directions of the feasible region at $\bar{x}$. We have shown that $d \in G_0$ implies $d \in D$, and hence $G_0 \subset D$. By Lemma 23, since $\bar{x}$ is a local solution of the problem $P_E$, $F_0 \cap D = \emptyset$. It follows that $F_0 \cap G_0 = \emptyset$. □

Theorem 25 (Fritz-John optimality conditions)

Let M be an open E-convex set with respect to the one-to-one and onto operator $E: R^n \to R^n$, let $f: M \subseteq R^n \to R$ be E-differentiable at $\bar{x}$ and let $g_i: R^n \to R$ for $i = 1, 2, \ldots, m$. Let $\bar{x}$ be a feasible solution of the problem $P_E$ and let $I = \{i : (g_i \circ E)(\bar{x}) = 0\}$. Furthermore, suppose that $g_i$ for $i \in I$ is differentiable at $\bar{x}$ and that $g_i$ for $i \notin I$ is continuous at $\bar{x}$. If $\bar{x}$ is a local optimal solution, then there exist scalars $u_0$ and $u_i$ for $i \in I$, not all zero, such that

$u_0 \nabla(f \circ E)(\bar{x}) + \sum_{i \in I} u_i \nabla(g_i \circ E)(\bar{x}) = 0, \qquad u_0, u_i \ge 0 \text{ for } i \in I,$

and $E(\bar{x})$ is a local solution of the problem P.

Proof Let $\bar{x}$ be a local solution of the problem $P_E$; then there is no vector d such that $\nabla(f \circ E)(\bar{x}) d < 0$ and $\nabla(g_i \circ E)(\bar{x}) d < 0$ for each $i \in I$. Let A be the matrix whose rows are $\nabla(f \circ E)(\bar{x})$ and $\nabla(g_i \circ E)(\bar{x})$, $i \in I$. Since the system $A d < 0$ is inconsistent, Gordan's theorem [10] gives a nonzero vector $b \ge 0$ such that $A^t b = 0$, where $b = (u_0, u_i)$ for $i \in I$. Thus

$u_0 \nabla(f \circ E)(\bar{x}) + \sum_{i \in I} u_i \nabla(g_i \circ E)(\bar{x}) = 0$

holds and $E(\bar{x})$ is a local solution of the problem P. □

Theorem 26 Let $E: R^n \to R^n$ be a one-to-one and onto operator and let $f: M \subseteq R^n \to R$ be an E-differentiable function. If $\bar{x}$ is an optimal solution of the problem P, then there exists $\bar{y} \in M'$ such that $\bar{x} = E(\bar{y})$ is an optimal solution of the problem $P_E$ and the Fritz-John optimality conditions of the problem $P_E$ are satisfied.

Proof Let $\bar{x}$ be an optimal solution of the problem P. Since E is one-to-one and onto, by Theorem 13 there exists $\bar{y} \in M'$ such that $\bar{x} = E(\bar{y})$ and $\bar{y}$ is an optimal solution of the problem $P_E$. Hence there exist scalars $u_0, u_i$ satisfying the Fritz-John optimality conditions of the problem $P_E$:

$u_0 \nabla(f \circ E)(\bar{y}) + \sum_{i \in I} u_i \nabla(g_i \circ E)(\bar{y}) = 0,$

$(u_0, u_I) \ne (0, 0),$

$u_0, u_i \ge 0.$

□

Theorem 27 (Kuhn-Tucker necessary condition)

Let M be an open E-convex set with respect to the one-to-one and onto operator $E: R^n \to R^n$, let $f: M \subseteq R^n \to R$ be E-differentiable and strictly E-convex at $\bar{x}$ and let $g_i: R^n \to R$ for $i = 1, 2, \ldots, m$. Let $\bar{y}$ be a feasible solution of the problem $P_E$ and let $I = \{i : (g_i \circ E)(\bar{y}) = 0\}$. Furthermore, suppose that $(g_i \circ E)$ is continuous at $\bar{y}$ for $i \notin I$ and that the gradients $\nabla(g_i \circ E)(\bar{y})$ for $i \in I$ are linearly independent. If $\bar{x}$ is a solution of the problem P, $\bar{x} = E(\bar{y})$ and $\bar{y}$ is a local solution of the problem $P_E$, then there exist scalars $u_i$ for $i \in I$ such that

$\nabla(f \circ E)(\bar{y}) + \sum_{i \in I} u_i \nabla(g_i \circ E)(\bar{y}) = 0, \qquad u_i \ge 0 \text{ for each } i \in I.$

Proof From the Fritz-John optimality conditions, there exist scalars $\hat{u}_0$ and $\hat{u}_i$ for each $i \in I$, not all zero, such that

$\hat{u}_0 \nabla(f \circ E)(\bar{y}) + \sum_{i \in I} \hat{u}_i \nabla(g_i \circ E)(\bar{y}) = 0, \qquad \hat{u}_0, \hat{u}_i \ge 0.$

If $\hat{u}_0 = 0$, the assumption of linear independence of the gradients $\nabla(g_i \circ E)(\bar{y})$ does not hold; hence $\hat{u}_0 > 0$. Taking $u_i = \frac{\hat{u}_i}{\hat{u}_0}$, we obtain $\nabla(f \circ E)(\bar{y}) + \sum_{i \in I} u_i \nabla(g_i \circ E)(\bar{y}) = 0$ with $u_i \ge 0$ for each $i \in I$. □

Theorem 28 Let M be an open E-convex set with respect to the one-to-one and onto operator $E: R^n \to R^n$, $g_i: R^n \to R$ for $i = 1, 2, \ldots, m$, and let $f: M \subseteq R^n \to R$ be E-differentiable and strictly E-convex at $\bar{x}$. Let $\bar{x} = E(\bar{y})$ be a feasible solution of the problem $P_E$ and $I = \{i : (g_i \circ E)(\bar{y}) = 0\}$. Suppose that f is pseudo-E-convex at $\bar{y}$ and that $g_i$ is quasi-E-convex and differentiable at $\bar{y}$ for each $i \in I$. Furthermore, suppose that the Kuhn-Tucker conditions hold at $\bar{y}$. Then $\bar{y}$ is a global optimal solution of the problem $P_E$, and hence $\bar{x} = E(\bar{y})$ is a solution of the problem P.

Proof Let $\hat{y}$ be a feasible solution of the problem $P_E$; then $(g_i \circ E)(\hat{y}) \le (g_i \circ E)(\bar{y})$ for each $i \in I$. Since $(g_i \circ E)(\hat{y}) \le 0$, $(g_i \circ E)(\bar{y}) = 0$ and $g_i$ is quasi-E-convex at $\bar{y}$, then

$(g_i \circ E)(\bar{y} + \lambda(\hat{y} - \bar{y})) = (g_i \circ E)(\lambda \hat{y} + (1-\lambda)\bar{y}) \le \max\{(g_i \circ E)(\hat{y}), (g_i \circ E)(\bar{y})\} = (g_i \circ E)(\bar{y}).$

This means that $(g_i \circ E)$ does not increase by moving from $\bar{y}$ along the direction $\hat{y} - \bar{y}$. By Theorem 15 we must then have $\nabla(g_i \circ E)(\bar{y})(\hat{y} - \bar{y}) \le 0$. Multiplying by $u_i$ and summing over I, we get

$\left[\sum_{i \in I} u_i \nabla(g_i \circ E)(\bar{y})\right](\hat{y} - \bar{y}) \le 0.$

But since

$\nabla(f \circ E)(\bar{y}) + \sum_{i \in I} u_i \nabla(g_i \circ E)(\bar{y}) = 0,$

it follows that $\nabla(f \circ E)(\bar{y})(\hat{y} - \bar{y}) \ge 0$. Since f is pseudo-E-convex at $\bar{y}$, we get

$(f \circ E)(\hat{y}) \ge (f \circ E)(\bar{y}).$

Then $\bar{y}$ is a global solution of the problem $P_E$, and from Theorem 13 $\bar{x} = E(\bar{y})$ is a global solution of the problem P. □

Example 29 Consider the following problem (problem P):

$\min f(x, y) = x^{2/3} + y^2$

$\text{subject to} \quad g_1(x, y) = x^2 + y^2 - 5 \le 0, \qquad g_2(x, y) = x + 2y - 4 \le 0.$

The feasible region of this problem is shown in Figure 2.

Let $E(x, y) = (\frac{1}{8}x^3, \frac{1}{3}y)$; then the problem $P_E$ is as follows:

$\min (f \circ E)(x, y) = \frac{x^2}{4} + \frac{y^2}{9}$

$\text{subject to} \quad (g_1 \circ E)(x, y) = \frac{x^6}{64} + \frac{y^2}{9} - 5 \le 0, \qquad (g_2 \circ E)(x, y) = \frac{1}{8}x^3 + \frac{2}{3}y - 4 \le 0.$

We note that $E(M) \subset M$, where

$(\sqrt{5}, 0) \in M \quad \text{implies} \quad E(\sqrt{5}, 0) = (\tfrac{5\sqrt{5}}{8}, 0) \in M,$

$(0, 2) \in M \quad \text{implies} \quad E(0, 2) = (0, \tfrac{2}{3}) \in M,$

$(0, 0) \in M \quad \text{implies} \quad E(0, 0) = (0, 0) \in M,$

$(2, 1) \in M \quad \text{implies} \quad E(2, 1) = (1, \tfrac{1}{3}) \in M.$

The Kuhn-Tucker conditions are as follows:

$\nabla(f \circ E)(x, y) + u_1 \nabla(g_1 \circ E)(x, y) + u_2 \nabla(g_2 \circ E)(x, y) = 0,$

$\begin{bmatrix} \frac{1}{2}x \\ \frac{2}{9}y \end{bmatrix} + u_1 \begin{bmatrix} \frac{6}{64}x^5 \\ \frac{2}{9}y \end{bmatrix} + u_2 \begin{bmatrix} \frac{3}{8}x^2 \\ \frac{2}{3} \end{bmatrix} = 0,$

$u_1 \left[\frac{x^6}{64} + \frac{y^2}{9} - 5\right] = 0,$

$u_2 \left[\frac{1}{8}x^3 + \frac{2}{3}y - 4\right] = 0.$

The solution is $x = 0$, $y = 0$, $u_1 = 0$, $u_2 = 0$, so $\bar{z} = (0, 0)$, and $\bar{x} = E(\bar{z}) = (0, 0)$ is a solution of the problem P.
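The reported Kuhn-Tucker point of Example 29 can be verified directly; the following SymPy sketch checks stationarity, feasibility and complementary slackness at $(x, y) = (0, 0)$ with $u_1 = u_2 = 0$ (an illustrative check added by the editor, not part of the original computation):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
fE  = x**2/4 + y**2/9                       # (f o E)(x, y)
g1E = x**6/64 + y**2/9 - 5                  # (g1 o E)(x, y)
g2E = x**3/8 + sp.Rational(2, 3)*y - 4      # (g2 o E)(x, y)

pt = {x: 0, y: 0}
u1, u2 = 0, 0

# stationarity: grad(fE) + u1*grad(g1E) + u2*grad(g2E) = 0 at (0, 0)
for v in (x, y):
    assert (sp.diff(fE, v) + u1*sp.diff(g1E, v) + u2*sp.diff(g2E, v)).subs(pt) == 0

# feasibility (both constraints inactive) and complementary slackness
assert g1E.subs(pt) <= 0 and g2E.subs(pt) <= 0
assert u1 * g1E.subs(pt) == 0 and u2 * g2E.subs(pt) == 0
```

Both constraints are inactive at the origin ($g_1 = -5$, $g_2 = -4$), so the multipliers vanish and the point is an unconstrained minimizer of the convex objective $\frac{x^2}{4} + \frac{y^2}{9}$.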

## 4 Conclusion

In this paper we introduced a new definition of an E-differentiable convex function, which transforms a non-differentiable function into a differentiable one under an operator $E: R^n \to R^n$, and we studied the Kuhn-Tucker and Fritz-John conditions for obtaining an optimal solution of mathematical programming with a non-differentiable function. Finally, some examples were presented to clarify the results.

## References

1. Youness EA: E-convex sets, E-convex functions and E-convex programming. J. Optim. Theory Appl. 1999, 102(3):439–450.

2. Youness EA: Optimality criteria in E-convex programming. Chaos Solitons Fractals 2001, 12:1737–1745. 10.1016/S0960-0779(00)00036-9

3. Chen X: Some properties of semi-E-convex functions. J. Math. Anal. Appl. 2002, 275:251–262. 10.1016/S0022-247X(02)00325-6

4. Syau Y-R, Lee ES: Some properties of E-convex functions. Appl. Math. Lett. 2005, 18:1074–1080. 10.1016/j.aml.2004.09.018

5. Emam T, Youness EA: Semi strongly E-convex function. J. Math. Stat. 2005, 1(1):51–57.

6. Megahed AA, Gomma HG, Youness EA, El-Banna AH: A combined interactive approach for solving E-convex multiobjective nonlinear programming. Appl. Math. Comput. 2011, 217:6777–6784. 10.1016/j.amc.2010.12.086

7. Iqbal A, Ahmad I, Ali S: Some properties of geodesic semi-E-convex functions. Nonlinear Anal., Theory Methods Appl. 2011, 74:6805–6813. 10.1016/j.na.2011.07.005

8. Iqbal A, Ali S, Ahmad I: On geodesic E-convex sets, geodesic E-convex functions and E-epigraphs. J. Optim. Theory Appl. (2012, available online)

9. Mangasarian OL: Nonlinear Programming. McGraw-Hill, New York; 1969.

10. Bazaraa MS, Shetty CM: Nonlinear Programming Theory and Algorithms. Wiley, New York; 1979.

11. Youness EA: Characterization of efficient solutions of multiobjective E-convex programming problems. Appl. Math. Comput. 2004, 151(3):755–761. 10.1016/S0096-3003(03)00526-5

## Acknowledgements

The authors express their deep thanks and respect to the referees and the Journal for their valuable comments in the evaluation of this paper.

## Author information


### Corresponding author

Correspondence to Abd El-Monem A Megahed.



Megahed, A.EM.A., Gomma, H.G., Youness, E.A. et al. Optimality conditions of E-convex programming for an E-differentiable function. J Inequal Appl 2013, 246 (2013). https://doi.org/10.1186/1029-242X-2013-246