# Optimality conditions of E-convex programming for an E-differentiable function

## Abstract

In this paper we introduce a new definition of an E-differentiable convex function, which transforms a non-differentiable function into a differentiable function under an operator $E:{R}^{n}\to {R}^{n}$. With this definition, we can apply the Kuhn-Tucker and Fritz-John conditions to obtain an optimal solution of a mathematical programming problem with a non-differentiable objective function.

## 1 Introduction

The concepts of E-convex sets and E-convex functions were introduced by Youness in [1, 2], and they have important applications in various branches of the mathematical sciences. Youness in [1] introduced a class of sets and functions, called E-convex sets and E-convex functions, by relaxing the definitions of convex sets and convex functions. This kind of generalized convexity is based on the effect of an operator $E:{R}^{n}\to {R}^{n}$ on the sets and on the domains of the functions. Also, in [2] Youness discussed the optimality criteria of E-convex programming. Chen [3] introduced a new concept of semi E-convex functions and discussed its properties. Syau and Lee [4] introduced some properties of E-convex functions, while Emam and Youness in [5] introduced a new class of E-convex sets and E-convex functions, called strongly E-convex sets and strongly E-convex functions, obtained by taking the images of two points x and y under an operator $E:{R}^{n}\to {R}^{n}$ besides the two points themselves. In [6] Megahed et al. introduced a combined interactive approach for solving E-convex multiobjective nonlinear programming. Also, in [7, 8] Iqbal et al. introduced geodesic E-convex sets, geodesic E-convex functions and some properties of geodesic semi-E-convex functions.

In this paper we present the concept of an E-differentiable convex function which transforms a non-differentiable convex function to a differentiable function under an operator $E:{R}^{n}\to {R}^{n}$, for which we can apply the Fritz-John and Kuhn-Tucker conditions [9, 10] to find a solution of mathematical programming with a non-differentiable function.

In the following, we present the definitions of E-convex sets, E-convex functions, and semi E-convex functions.

Definition 1

A set M is said to be an E-convex set with respect to an operator $E:{R}^{n}\to {R}^{n}$ if and only if $\lambda E\left(x\right)+\left(1-\lambda \right)E\left(y\right)\in M$ for each $x,y\in M$ and $\lambda \in \left[0,1\right]$.

Definition 2

A function $f:{R}^{n}\to R$ is said to be an E-convex function with respect to an operator $E:{R}^{n}\to {R}^{n}$ on an E-convex set $M\subseteq {R}^{n}$ if and only if

$f\left(\lambda E\left(x\right)+\left(1-\lambda \right)E\left(y\right)\right)\le \lambda \left(f\circ E\right)\left(x\right)+\left(1-\lambda \right)\left(f\circ E\right)\left(y\right)$

for each $x,y\in M$ and $\lambda \in \left[0,1\right]$.

Definition 3

A real-valued function $f:M\subseteq {R}^{n}\to R$ is said to be a semi E-convex function with respect to an operator $E:{R}^{n}\to {R}^{n}$ on M if M is an E-convex set and

$f\left(\lambda E\left(x\right)+\left(1-\lambda \right)E\left(y\right)\right)\le \lambda f\left(x\right)+\left(1-\lambda \right)f\left(y\right)$

for each $x,y\in M$ and $\lambda \in \left[0,1\right]$.

Proposition 4

1- Let a set $M\subseteq {R}^{n}$ be an E-convex set with respect to an operator $E:{R}^{n}\to {R}^{n}$. Then $E\left(M\right)\subseteq M$.

2- If $E\left(M\right)$ is a convex set and $E\left(M\right)\subseteq M$, then M is an E-convex set.

3- If ${M}_{1}$ and ${M}_{2}$ are E-convex sets with respect to E, then ${M}_{1}\cap {M}_{2}$ is an E-convex set with respect to E.

Lemma 5

Let $M\subseteq {R}^{n}$ be an ${E}_{1}$- and ${E}_{2}$-convex set. Then M is an $\left({E}_{1}\circ {E}_{2}\right)$- and $\left({E}_{2}\circ {E}_{1}\right)$-convex set.

Lemma 6

Let $E:{R}^{n}\to {R}^{n}$ be a linear map and let ${M}_{1},{M}_{2}\subset {R}^{n}$ be E-convex sets. Then ${M}_{1}+{M}_{2}$ is an E-convex set.

Definition 7

Let $S\subset {R}^{n}×R$ and $E:{R}^{n}\to {R}^{n}$, we say that the set S is E-convex if for each $\left(x,\alpha \right),\left(y,\beta \right)\in S$ and each $\lambda \in \left[0,1\right]$, we have

$\left(\lambda Ex+\left(1-\lambda \right)Ey,\lambda \alpha +\left(1-\lambda \right)\beta \right)\in S.$

## 2 Generalized E-convex function

Definition 8

Let $M\subseteq {R}^{n}$ be an E-convex set with respect to an operator $E:{R}^{n}\to {R}^{n}$. A function $f:M\to R$ is said to be a pseudo E-convex function if for each ${x}_{1},{x}_{2}\in M$, $\mathrm{\nabla }\left(f\circ E\right)\left({x}_{1}\right)\left({x}_{2}-{x}_{1}\right)\ge 0$ implies $f\left(E{x}_{2}\right)\ge f\left(E{x}_{1}\right)$; equivalently, for all ${x}_{1},{x}_{2}\in M$, $f\left(E{x}_{2}\right)<f\left(E{x}_{1}\right)$ implies $\mathrm{\nabla }\left(f\circ E\right)\left({x}_{1}\right)\left({x}_{2}-{x}_{1}\right)<0$.

Definition 9

Let $M\subseteq {R}^{n}$ be an E-convex set with respect to an operator $E:{R}^{n}\to {R}^{n}$. A function $f:M\to R$ is said to be a quasi-E-convex function if and only if

$f\left(\lambda Ex+\left(1-\lambda \right)Ey\right)\le max\left\{\left(f\circ E\right)x,\left(f\circ E\right)y\right\}$

for each $x,y\in M$ and $\lambda \in \left[0,1\right]$.

## 3 E-differentiable function

Definition 10 Let $f:M\subseteq {R}^{n}\to R$ be a non-differentiable function at $\overline{x}$ and let $E:{R}^{n}\to {R}^{n}$ be an operator. A function f is said to be E-differentiable at $\overline{x}$ if and only if $\left(f\circ E\right)$ is a differentiable function at $\overline{x}$, i.e.,

$\left(f\circ E\right)\left(\overline{x}+\lambda d\right)=\left(f\circ E\right)\left(\overline{x}\right)+\lambda \mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d+\lambda \parallel d\parallel \alpha \left(\overline{x},\lambda d\right),$

where $\alpha \left(\overline{x},\lambda d\right)\to 0$ as $\lambda \to 0$.

Example 11 Let $f\left(x\right)=|x|$ be a non-differentiable function at the point $x=0$ and let $E:R\to R$ be an operator such that $E\left(x\right)={x}^{2}$, then the function $\left(f\circ E\right)\left(x\right)=f\left(Ex\right)={x}^{2}$ is a differentiable function at the point $x=0$, and hence f is an E-differentiable function.
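A numeric sketch can make Example 11 concrete. The following check (my own illustration, not from the paper) compares one-sided difference quotients of f and of $f\circ E$ at 0:

```python
# f(x) = |x| is not differentiable at 0, but (f o E)(x) = |x^2| = x^2 is.

def f(x):
    return abs(x)

def E(x):
    return x ** 2

def f_E(x):
    return f(E(x))

h = 1e-6
# one-sided difference quotients of f at 0 disagree, so f'(0) does not exist
left = (f(0) - f(-h)) / h        # -1.0
right = (f(h) - f(0)) / h        # 1.0

# the same quotients for (f o E) both tend to 0, so (f o E)'(0) = 0
left_E = (f_E(0) - f_E(-h)) / h
right_E = (f_E(h) - f_E(0)) / h

print(left, right, left_E, right_E)
```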

### 3.1 Problem formulation

Now, we formulate problems P and ${P}_{E}$, which have a non-differentiable function and an E-differentiable function, respectively.

Let $E:{R}^{n}\to {R}^{n}$ be an operator, M be an E-convex set and f be an E-differentiable function. The problem P is defined as

$P:\phantom{\rule{1em}{0ex}}min f\left(x\right)\phantom{\rule{1em}{0ex}}\text{subject to}\phantom{\rule{1em}{0ex}}x\in M=\left\{x:{g}_{i}\left(x\right)\le 0,i=1,2,\dots ,m\right\},$

where f is a non-differentiable function, and the problem ${P}_{E}$ is defined as

${P}_{E}:\phantom{\rule{1em}{0ex}}min \left(f\circ E\right)\left(x\right)\phantom{\rule{1em}{0ex}}\text{subject to}\phantom{\rule{1em}{0ex}}x\in {M}^{\mathrm{\prime }}=\left\{x:\left({g}_{i}\circ E\right)\left(x\right)\le 0,i=1,2,\dots ,m\right\},$

where f is an E-differentiable function.

Now, we will discuss the relationship between the solutions of problems P and ${P}_{E}$.

Lemma 12

Let $E:{R}^{n}\to {R}^{n}$ be a one-to-one and onto operator and let ${M}^{\mathrm{\prime }}=\left\{x:\left({g}_{i}\circ E\right)\left(x\right)\le 0,i=1,2,\dots ,m\right\}$. Then $E\left({M}^{\mathrm{\prime }}\right)=M$, where M and ${M}^{\mathrm{\prime }}$ are the feasible regions of problems P and ${P}_{E}$, respectively.

Theorem 13 Let $E:{R}^{n}\to {R}^{n}$ be a one-to-one and onto operator and let f be an E-differentiable function. If f is non-differentiable at $\overline{x}$ and $\overline{x}$ is an optimal solution of the problem P, then there exists $\overline{y}\in {M}^{\mathrm{\prime }}$ such that $\overline{x}=E\left(\overline{y}\right)$ and $\overline{y}$ is an optimal solution of the problem ${P}_{E}$.

Proof Let $\overline{x}$ be an optimal solution of the problem P. From Lemma 12 there exists $\overline{y}\in {M}^{\mathrm{\prime }}$ such that $\overline{x}=E\left(\overline{y}\right)$. Suppose that $\overline{y}$ is not an optimal solution of the problem ${P}_{E}$. Then there is $\stackrel{ˆ}{y}\in {M}^{\mathrm{\prime }}$ such that $\left(f\circ E\right)\left(\stackrel{ˆ}{y}\right)<\left(f\circ E\right)\left(\overline{y}\right)$. Also, there exists $\stackrel{ˆ}{x}\in M$ such that $\stackrel{ˆ}{x}=E\left(\stackrel{ˆ}{y}\right)$. Then $f\left(\stackrel{ˆ}{x}\right)<f\left(\overline{x}\right)$, which contradicts the optimality of $\overline{x}$ for the problem P. Hence the proof is complete. □

Theorem 14 Let $E:{R}^{n}\to {R}^{n}$ be a one-to-one and onto operator, and let f be an E-differentiable and strictly quasi-E-convex function. If $\overline{x}$ is an optimal solution of the problem P, then there exists $\overline{y}\in {M}^{\mathrm{\prime }}$ such that $\overline{x}=E\left(\overline{y}\right)$ and $\overline{y}$ is an optimal solution of the problem ${P}_{E}$.

Proof Let $\overline{x}$ be an optimal solution of the problem P. Then from Lemma 12 there is $\overline{y}\in {M}^{\mathrm{\prime }}$ such that $\overline{x}=E\left(\overline{y}\right)$. Suppose that $\overline{y}$ is not an optimal solution of the problem ${P}_{E}$. Then there is $\stackrel{ˆ}{y}\in {M}^{\mathrm{\prime }}$, and also $\stackrel{ˆ}{x}\in M$ with $\stackrel{ˆ}{x}=E\left(\stackrel{ˆ}{y}\right)$, such that $\left(f\circ E\right)\left(\stackrel{ˆ}{y}\right)\le \left(f\circ E\right)\left(\overline{y}\right)$. Since f is a strictly quasi-E-convex function, then

$\begin{array}{rcl}f\left(\lambda E\left(\overline{y}\right)+\left(1-\lambda \right)E\left(\stackrel{ˆ}{y}\right)\right)& <& max\left\{\left(f\circ E\right)\left(\overline{y}\right),\left(f\circ E\right)\left(\stackrel{ˆ}{y}\right)\right\}\\ & =& max\left\{f\left(\overline{x}\right),f\left(\stackrel{ˆ}{x}\right)\right\}\\ & =& f\left(\overline{x}\right).\end{array}$

Since M is an E-convex set and $E\left(M\right)\subset M$, the point $\lambda E\left(\overline{y}\right)+\left(1-\lambda \right)E\left(\stackrel{ˆ}{y}\right)$ belongs to M, which contradicts the assumption that $\overline{x}$ is a solution of the problem P. Hence there exists $\overline{y}\in {M}^{\mathrm{\prime }}$, a solution of the problem ${P}_{E}$, such that $\overline{x}=E\left(\overline{y}\right)$. □

Theorem 15 Let M be an E-convex set, let $E:{R}^{n}\to {R}^{n}$ be a one-to-one and onto operator and let $f:M\subseteq {R}^{n}\to R$ be an E-differentiable function at $\overline{x}$. If there is a vector $d\in {R}^{n}$ such that $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d<0$, then there exists $\delta >0$ such that

$\left(f\circ E\right)\left(\overline{x}+\lambda d\right)<\left(f\circ E\right)\left(\overline{x}\right)\phantom{\rule{1em}{0ex}}\mathit{\text{for each}}\lambda \in \left(0,\delta \right).$

Proof Since f is an E-differentiable function at $\overline{x}$, then

$\left(f\circ E\right)\left(\overline{x}+\lambda d\right)=\left(f\circ E\right)\left(\overline{x}\right)+\lambda \mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d+\lambda \parallel d\parallel \alpha \left(\overline{x},\lambda d\right),$

where $\alpha \left(\overline{x},\lambda d\right)\to 0$ as $\lambda \to 0$. Since $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d<0$ and $\alpha \left(\overline{x},\lambda d\right)\to 0$ as $\lambda \to 0$, there exists $\delta >0$ such that

$\lambda \mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d+\lambda \parallel d\parallel \alpha \left(\overline{x},\lambda d\right)<0\phantom{\rule{1em}{0ex}}\text{for each }\lambda \in \left(0,\delta \right),$

and thus $\left(f\circ E\right)\left(\overline{x}+\lambda d\right)<\left(f\circ E\right)\left(\overline{x}\right)$. □

Corollary 16 Let M be an E-convex set, let $E:{R}^{n}\to {R}^{n}$ be a one-to-one and onto operator, and let $f:M\subseteq {R}^{n}\to R$ be an E-differentiable and strictly E-convex function at $\overline{x}$. If $\overline{x}$ is a local minimum of the function $\left(f\circ E\right)$, then $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)=0$.

Proof Suppose that $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)\ne 0$ and let $d=-\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)$, then $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d=-{\parallel \mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)\parallel }^{2}<0$. By Theorem 15 there exists $\delta >0$ such that

$\left(f\circ E\right)\left(\overline{x}+\lambda d\right)<\left(f\circ E\right)\left(\overline{x}\right)\phantom{\rule{1em}{0ex}}\text{for each }\lambda \in \left(0,\delta \right),$

contradicting the assumption that $\overline{x}$ is a local minimum of $\left(f\circ E\right)\left(x\right)$, and thus $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)=0$. □

Theorem 17 Let M be an E-convex set, $E:{R}^{n}\to {R}^{n}$ be a one-to-one and onto operator, and $f:M\subseteq {R}^{n}\to R$ be a twice E-differentiable and strictly E-convex function at $\overline{x}$. If $\overline{x}$ is a local minimum of $\left(f\circ E\right)$, then $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)=0$ and the Hessian matrix $H\left(\overline{x}\right)={\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)$ is positive semidefinite.

Proof Suppose that d is an arbitrary direction. Since f is a twice E-differentiable function at $\overline{x}$, then

$\begin{array}{rcl}\left(f\circ E\right)\left(\overline{x}+\lambda d\right)& =& \left(f\circ E\right)\left(\overline{x}\right)+\lambda \mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d+\frac{1}{2}{\lambda }^{2}{d}^{t}{\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)d\\ +{\lambda }^{2}{\parallel d\parallel }^{2}\alpha \left(\overline{x},\lambda d\right),\end{array}$

where $\alpha \left(\overline{x},\lambda d\right)\to 0$ as $\lambda \to 0$.

From Corollary 16 we have $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)=0$, and

$\frac{\left(f\circ E\right)\left(\overline{x}+\lambda d\right)-\left(f\circ E\right)\left(\overline{x}\right)}{{\lambda }^{2}}=\frac{1}{2}{d}^{t}{\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)d+{\parallel d\parallel }^{2}\alpha \left(\overline{x},\lambda d\right).$

Since $\overline{x}$ is a local minimum of $\left(f\circ E\right)$, then $\left(f\circ E\right)\left(\overline{x}\right)\le \left(f\circ E\right)\left(\overline{x}+\lambda d\right)$ for all sufficiently small $\lambda >0$; letting $\lambda \to 0$, we get

${d}^{t}{\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)d\ge 0,\phantom{\rule{1em}{0ex}}\mathit{\text{i.e.}}\text{,}\phantom{\rule{2em}{0ex}}H\left(\overline{x}\right)={\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)\phantom{\rule{1em}{0ex}}\text{is positive semidefinite}.$

□

Example 18 Let $f\left(x,y\right)=x+2{y}^{2}-2{x}^{\frac{1}{3}}$ be a non-differentiable function at $\left(0,y\right)$, and let $E\left(x,y\right)=\left({x}^{3},y\right)$, then $\left(f\circ E\right)\left(x,y\right)={x}^{3}+2{y}^{2}-2x$, and

$\begin{array}{c}\frac{\partial \left(f\circ E\right)}{\partial x}=3{x}^{2}-2=0\phantom{\rule{1em}{0ex}}\text{implies}\phantom{\rule{1em}{0ex}}x=±\sqrt{\frac{2}{3}},\hfill \\ \frac{\partial \left(f\circ E\right)}{\partial y}=4y=0\phantom{\rule{1em}{0ex}}\text{implies}\phantom{\rule{1em}{0ex}}y=0,\hfill \\ \frac{{\partial }^{2}\left(f\circ E\right)}{\partial {x}^{2}}=6x,\phantom{\rule{2em}{0ex}}\frac{{\partial }^{2}\left(f\circ E\right)}{\partial y\phantom{\rule{0.2em}{0ex}}\partial x}=0,\phantom{\rule{2em}{0ex}}\frac{{\partial }^{2}\left(f\circ E\right)}{\partial {y}^{2}}=4,\phantom{\rule{2em}{0ex}}\frac{{\partial }^{2}\left(f\circ E\right)}{\partial x\phantom{\rule{0.2em}{0ex}}\partial y}=0.\hfill \end{array}$

Then $\left({x}_{1},{y}_{1}\right)=\left(\sqrt{\frac{2}{3}},0\right)$ and $\left({x}_{2},{y}_{2}\right)=\left(-\sqrt{\frac{2}{3}},0\right)$ are the stationary points of $\left(f\circ E\right)\left(x,y\right)$. The Hessian matrix $H\left(\sqrt{\frac{2}{3}},0\right)=\left[\begin{array}{cc}6\sqrt{\frac{2}{3}}& 0\\ 0& 4\end{array}\right]$ is positive definite, and thus the point $\left(\sqrt{\frac{2}{3}},0\right)$ is a local minimum of the function $\left(f\circ E\right)\left(x,y\right)$, while the Hessian matrix $H\left(-\sqrt{\frac{2}{3}},0\right)=\left[\begin{array}{cc}-6\sqrt{\frac{2}{3}}& 0\\ 0& 4\end{array}\right]$ is indefinite, so $\left(-\sqrt{\frac{2}{3}},0\right)$ is not a local minimum.
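The stationary-point computation in Example 18 can be verified numerically. The following finite-difference check is my own sketch (the scheme and step size are assumptions, not the paper's method):

```python
# Example 18: (f o E)(x, y) = x^3 + 2y^2 - 2x.
import math

def f_E(x, y):
    return x ** 3 + 2 * y ** 2 - 2 * x

h = 1e-5
x0 = math.sqrt(2 / 3)   # candidate stationary point (sqrt(2/3), 0)

# central-difference gradient: vanishes at a stationary point
gx = (f_E(x0 + h, 0) - f_E(x0 - h, 0)) / (2 * h)
gy = (f_E(x0, h) - f_E(x0, -h)) / (2 * h)

# diagonal second differences: the Hessian here is diag(6x, 4)
hxx = (f_E(x0 + h, 0) - 2 * f_E(x0, 0) + f_E(x0 - h, 0)) / h ** 2
hyy = (f_E(x0, h) - 2 * f_E(x0, 0) + f_E(x0, -h)) / h ** 2

print(gx, gy, hxx, hyy)
# hxx ~ 6*sqrt(2/3) > 0 and hyy ~ 4 > 0: a local minimum at (sqrt(2/3), 0);
# repeating the check at x0 = -sqrt(2/3) gives hxx < 0, so that point is not a minimum.
```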

Theorem 19 Let M be an E-convex set, let $E:{R}^{n}\to {R}^{n}$ be a one-to-one and onto operator, and let $f:M\subseteq {R}^{n}\to R$ be a twice E-differentiable and strictly E-convex function at $\overline{x}$. If $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)=0$ and the Hessian matrix $H\left(\overline{x}\right)={\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)$ is positive definite, then $\overline{x}$ is a local minimum of $\left(f\circ E\right)$.

Proof Suppose that $\overline{x}$ is not a local minimum of $\left(f\circ E\right)\left(x\right)$. Then there exists a sequence $\left\{{x}_{k}\right\}$ converging to $\overline{x}$ such that $\left(f\circ E\right)\left({x}_{k}\right)<\left(f\circ E\right)\left(\overline{x}\right)$ for each k. Since $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)=0$ and f is twice E-differentiable at $\overline{x}$, then

$\begin{array}{rcl}\left(f\circ E\right)\left({x}_{k}\right)& =& \left(f\circ E\right)\left(\overline{x}\right)+\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)\left({x}_{k}-\overline{x}\right)\\ & & +\frac{1}{2}{\left({x}_{k}-\overline{x}\right)}^{t}{\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)\left({x}_{k}-\overline{x}\right)+{\parallel {x}_{k}-\overline{x}\parallel }^{2}\alpha \left(\overline{x},{x}_{k}-\overline{x}\right),\end{array}$

where $\alpha \left(\overline{x},{x}_{k}-\overline{x}\right)\to 0$ as $k\to \mathrm{\infty }$. Since $\left(f\circ E\right)\left({x}_{k}\right)<\left(f\circ E\right)\left(\overline{x}\right)$ and $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)=0$, it follows that

$\frac{1}{2}{\left({x}_{k}-\overline{x}\right)}^{t}{\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)\left({x}_{k}-\overline{x}\right)+{\parallel {x}_{k}-\overline{x}\parallel }^{2}\alpha \left(\overline{x},{x}_{k}-\overline{x}\right)<0.$

By dividing by ${\parallel {x}_{k}-\overline{x}\parallel }^{2}$ and letting ${d}_{k}=\frac{{x}_{k}-\overline{x}}{\parallel {x}_{k}-\overline{x}\parallel }$, we get

$\frac{1}{2}{d}_{k}^{t}{\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right){d}_{k}+\alpha \left(\overline{x},{x}_{k}-\overline{x}\right)<0.$

But $\parallel {d}_{k}\parallel =1$ for each k, and hence there exists an index set K such that ${\left\{{d}_{k}\right\}}_{K}\to d$, where $\parallel d\parallel =1$. Considering this subsequence and the fact that $\alpha \left(\overline{x},{x}_{k}-\overline{x}\right)\to 0$ as $k\to \mathrm{\infty }$, we obtain ${d}^{t}{\mathrm{\nabla }}^{2}\left(f\circ E\right)\left(\overline{x}\right)d\le 0$ with $d\ne 0$. This contradicts the assumption that $H\left(\overline{x}\right)$ is positive definite. Therefore $\overline{x}$ is indeed a local minimum. □

Example 20 Let $f\left(x,y\right)={x}^{\frac{2}{3}}+{y}^{2}-1$ be non-differentiable at the point $\left(0,y\right)$, and let $E\left(x,y\right)=\left({x}^{3},y\right)$, then $\left(f\circ E\right)\left(x,y\right)={x}^{2}+{y}^{2}-1$, and

$\begin{array}{c}\frac{\partial \left(f\circ E\right)}{\partial x}=2x,\phantom{\rule{2em}{0ex}}\frac{{\partial }^{2}\left(f\circ E\right)}{\partial y\phantom{\rule{0.2em}{0ex}}\partial x}=0,\phantom{\rule{2em}{0ex}}\frac{{\partial }^{2}\left(f\circ E\right)}{\partial {x}^{2}}=2,\hfill \\ \frac{\partial \left(f\circ E\right)}{\partial y}=2y,\phantom{\rule{2em}{0ex}}\frac{{\partial }^{2}\left(f\circ E\right)}{\partial x\phantom{\rule{0.2em}{0ex}}\partial y}=0,\phantom{\rule{2em}{0ex}}\frac{{\partial }^{2}\left(f\circ E\right)}{\partial {y}^{2}}=2.\hfill \end{array}$

The necessary condition for $\overline{x}$ to be a local minimum of $\left(f\circ E\right)$ is $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)=0$, which gives $\overline{x}=\left(0,0\right)$, and the Hessian matrix $H\left(\overline{x}\right)$

$H=\left[\begin{array}{cc}\frac{{\partial }^{2}\left(f\circ E\right)}{\partial {x}^{2}}& \frac{{\partial }^{2}\left(f\circ E\right)}{\partial y\phantom{\rule{0.2em}{0ex}}\partial x}\\ \frac{{\partial }^{2}\left(f\circ E\right)}{\partial x\phantom{\rule{0.2em}{0ex}}\partial y}& \frac{{\partial }^{2}\left(f\circ E\right)}{\partial {y}^{2}}\end{array}\right]=\left[\begin{array}{cc}2& 0\\ 0& 2\end{array}\right]$

is positive definite.
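A quick grid check (my own sketch, not part of the paper) confirms that the stationary point $(0,0)$ of Example 20 is indeed the minimizer:

```python
# Example 20: (f o E)(x, y) = x^2 + y^2 - 1 attains its minimum value -1 at (0, 0).

def f_E(x, y):
    return x ** 2 + y ** 2 - 1

# sample a grid around the stationary point: no value falls below f_E(0, 0) = -1
grid = [i * 0.1 for i in range(-10, 11)]
values = [f_E(x, y) for x in grid for y in grid]

print(min(values))   # -1.0, attained at (0, 0)
```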

Example 21 Let $f\left(x,y\right)={x}^{\frac{1}{3}}+y-1$ be non-differentiable at the point $\left(0,y\right)$, and let $E\left(x,y\right)=\left({x}^{3},y\right)$, then $\left(f\circ E\right)\left(x,y\right)=x+y-1$.

Now, let $M=\left\{{\lambda }_{1}\left(0,0\right)+{\lambda }_{2}\left(0,3\right)+{\lambda }_{3}\left(1,2\right)+{\lambda }_{4}\left(1,0\right)\right\}\cup \left\{{\lambda }_{1}\left(0,0\right)+{\lambda }_{2}\left(0,-3\right)+{\lambda }_{3}\left(1,-2\right)+{\lambda }_{4}\left(1,0\right)\right\}$, ${\sum }_{i=1}^{4}{\lambda }_{i}=1$, ${\lambda }_{i}\ge 0$ be an E-convex set with respect to operator E (the feasible region is shown in Figure 1) and

$\begin{array}{c}f\left(0,0\right)=-1,\phantom{\rule{2em}{0ex}}\left(f\circ E\right)\left(0,0\right)=-1,\phantom{\rule{2em}{0ex}}f\left(0,-3\right)=-4,\phantom{\rule{2em}{0ex}}\left(f\circ E\right)\left(0,-3\right)=-4,\hfill \\ f\left(1,2\right)=2,\phantom{\rule{2em}{0ex}}\left(f\circ E\right)\left(1,2\right)=2,\phantom{\rule{2em}{0ex}}f\left(1,0\right)=0,\phantom{\rule{2em}{0ex}}\left(f\circ E\right)\left(1,0\right)=0,\hfill \\ f\left(0,3\right)=2,\phantom{\rule{2em}{0ex}}\left(f\circ E\right)\left(0,3\right)=2,\phantom{\rule{2em}{0ex}}f\left(1,-2\right)=-2,\phantom{\rule{2em}{0ex}}\left(f\circ E\right)\left(1,-2\right)=-2.\hfill \end{array}$

Then $\overline{x}=\left(0,-3\right)$ is a solution of the problem ${P}_{E}$ and $E\left(\overline{x}\right)=E\left(0,-3\right)=\left(0,-3\right)$ is a solution of the problem P.
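The vertex comparison in Example 21 can be reproduced in a few lines. This is my own shortcut, justified because $(f\circ E)(x,y)=x+y-1$ is linear, so its minimum over the region is attained at a vertex:

```python
def f_E(x, y):
    return x + y - 1

# vertices of the feasible region from Example 21
vertices = [(0, 0), (0, 3), (1, 2), (1, 0), (0, -3), (1, -2)]
best = min(vertices, key=lambda v: f_E(*v))

print(best, f_E(*best))   # (0, -3) -4
```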

Definition 22 Let M be a nonempty E-convex set in ${R}^{n}$ and let $E\left(\overline{x}\right)\in cl M$. The cone of feasible directions of $E\left(M\right)$ at $E\left(\overline{x}\right)$, denoted by D, is given by

$D=\left\{d:d\ne 0\text{ and }E\left(\overline{x}\right)+\lambda d\in E\left(M\right)\text{ for each }\lambda \in \left(0,\delta \right)\text{ for some }\delta >0\right\}.$

Lemma 23 Let M be an E-convex set with respect to an operator $E:{R}^{n}\to {R}^{n}$, and let $f:M\subseteq {R}^{n}\to R$ be E-differentiable at $\overline{x}$. If $\overline{x}$ is a local minimum of the problem ${P}_{E}$, then ${F}_{0}\cap D=\varphi$, where ${F}_{0}=\left\{d:\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d<0\right\}$ and D is the cone of feasible directions of M at $\overline{x}$.

Proof Suppose that there exists a vector $d\in {F}_{0}\cap D$. Then by Theorem 15, there exists ${\delta }_{1}>0$ such that

$\left(f\circ E\right)\left(\overline{x}+\lambda d\right)<\left(f\circ E\right)\left(\overline{x}\right)\phantom{\rule{1em}{0ex}}\text{for each }\lambda \in \left(0,{\delta }_{1}\right).$

(3.1)

By the definition of the cone of feasible directions, there exists ${\delta }_{2}>0$ such that

$\overline{x}+\lambda d\in {M}^{\mathrm{\prime }}\phantom{\rule{1em}{0ex}}\text{for each }\lambda \in \left(0,{\delta }_{2}\right).$

(3.2)

From (3.1) and (3.2) we have $\left(f\circ E\right)\left(\overline{x}+\lambda d\right)<\left(f\circ E\right)\left(\overline{x}\right)$ for each $\lambda \in \left(0,\delta \right)$, where $\delta =min\left\{{\delta }_{1},{\delta }_{2}\right\}$, which contradicts the assumption that $\overline{x}$ is a local optimal solution. Hence ${F}_{0}\cap D=\varphi$. □

Lemma 24 Let M be an open E-convex set with respect to an operator $E:{R}^{n}\to {R}^{n}$, let $f:M\subseteq {R}^{n}\to R$ be E-differentiable at $\overline{x}$ and let ${g}_{i}:{R}^{n}\to R$ for $i=1,2,\dots ,m$. Let $\overline{x}$ be a feasible solution of the problem ${P}_{E}$ and let $I=\left\{i:\left({g}_{i}\circ E\right)\left(\overline{x}\right)=0\right\}$. Furthermore, suppose that ${g}_{i}$ for $i\in I$ is E-differentiable at $\overline{x}$ and that ${g}_{i}$ for $i\notin I$ is continuous at $\overline{x}$. If $\overline{x}$ is a local optimal solution, then ${F}_{0}\cap {G}_{0}=\varphi$, where

$\begin{array}{c}{F}_{0}=\left\{d:\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d<0\right\},\hfill \\ {G}_{0}=\left\{d:\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{x}\right)d<0,\mathit{\text{for each}}i\in I\right\}\hfill \end{array}$

and E is one-to-one and onto.

Proof Let $d\in {G}_{0}$. Since $E\left(\overline{x}\right)\in M$ and M is an open E-convex set, there exists a ${\delta }_{1}>0$ such that

$E\left(\overline{x}\right)+\lambda d\in M\phantom{\rule{1em}{0ex}}\text{for each }\lambda \in \left(0,{\delta }_{1}\right).$

(3.3)

Also, since $\left({g}_{i}\circ E\right)\left(\overline{x}\right)<0$ and since ${g}_{i}$ is continuous at $\overline{x}$ for $i\notin I$, there exists a ${\delta }_{2}>0$ such that

$\left({g}_{i}\circ E\right)\left(\overline{x}+\lambda d\right)<0\phantom{\rule{1em}{0ex}}\text{for each }\lambda \in \left(0,{\delta }_{2}\right)\text{ and }i\notin I.$

(3.4)

Finally, since $d\in {G}_{0}$, $\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{x}\right)d<0$ for each $i\in I$ and, by Theorem 15, there exists ${\delta }_{3}>0$ such that

$\left({g}_{i}\circ E\right)\left(\overline{x}+\lambda d\right)<\left({g}_{i}\circ E\right)\left(\overline{x}\right)=0\phantom{\rule{1em}{0ex}}\text{for each }\lambda \in \left(0,{\delta }_{3}\right)\text{ and }i\in I.$

(3.5)

From (3.3), (3.4) and (3.5), it is clear that points of the form $E\left(\overline{x}\right)+\lambda d$ are feasible to the problem ${P}_{E}$ for each $\lambda \in \left(0,\delta \right)$, where $\delta =min\left({\delta }_{1},{\delta }_{2},{\delta }_{3}\right)$. Thus $d\in D$, where D is the cone of feasible directions of the feasible region at $\overline{x}$. We have shown that $d\in {G}_{0}$ implies $d\in D$, and hence ${G}_{0}\subset D$. By Lemma 23, since $\overline{x}$ is a local solution of the problem ${P}_{E}$, ${F}_{0}\cap D=\varphi$. It follows that ${F}_{0}\cap {G}_{0}=\varphi$. □

Theorem 25 (Fritz-John optimality conditions)

Let M be an open E-convex set with respect to the one-to-one and onto operator $E:{R}^{n}\to {R}^{n}$, let $f:M\subseteq {R}^{n}\to R$ be E-differentiable at $\overline{x}$ and let ${g}_{i}:{R}^{n}\to R$ for $i=1,2,\dots ,m$. Let $\overline{x}$ be a feasible solution of the problem ${P}_{E}$ and let $I=\left\{i:\left({g}_{i}\circ E\right)\left(\overline{x}\right)=0\right\}$. Furthermore, suppose that ${g}_{i}$ for $i\in I$ is differentiable at $\overline{x}$ and that ${g}_{i}$ for $i\notin I$ is continuous at $\overline{x}$. If $\overline{x}$ is a local optimal solution, then there exist scalars ${u}_{0}$ and ${u}_{i}$ for $i\in I$, not all zero, such that

${u}_{0}\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)+\sum _{i\in I}{u}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{x}\right)=0,\phantom{\rule{1em}{0ex}}{u}_{0},{u}_{i}\ge 0,$

and $E\left(\overline{x}\right)$ is a local solution of the problem P.

Proof Let $\overline{x}$ be a local solution of the problem ${P}_{E}$, then there is no vector d such that $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)d<0$ and $\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{x}\right)d<0$ for each $i\in I$. Let A be the matrix whose rows are $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)$ and $\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{x}\right)$, $i\in I$. Since the system $Ad<0$ is inconsistent, by Gordan's theorem there exists a nonzero vector $b\ge 0$ such that ${A}^{t}b=0$, where $b=\left({u}_{0},{u}_{i},i\in I\right)$. And thus

${u}_{0}\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)+\sum _{i\in I}{u}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{x}\right)=0$

holds and $E\left(\overline{x}\right)$ is a local solution of the problem P. □

Theorem 26 Let $E:{R}^{n}\to {R}^{n}$ be a one-to-one and onto operator and let $f:M\subseteq {R}^{n}\to R$ be an E-differentiable function. If $\overline{x}$ is an optimal solution of the problem P, then there exists $\overline{y}\in {M}^{\mathrm{\prime }}$ such that $\overline{x}=E\left(\overline{y}\right)$ is an optimal solution of the problem ${P}_{E}$ and the Fritz-John optimality condition of the problem ${P}_{E}$ is satisfied.

Proof Let $\overline{x}$ be an optimal solution of the problem P. Since E is one-to-one and onto, according to Theorem 13, there exists $\overline{y}\in {M}^{\mathrm{\prime }}$ such that $\overline{x}=E\left(\overline{y}\right)$ is an optimal solution of the problem ${P}_{E}$. Hence there exist scalars ${u}_{0},{u}_{i}$ satisfying the Fritz-John optimality conditions of the problem ${P}_{E}$:

$\begin{array}{c}{u}_{0}\mathrm{\nabla }\left(f\circ E\right)\left(\overline{x}\right)+\sum _{i\in I}{u}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{x}\right)=0,\hfill \\ \left({u}_{0},{u}_{I}\right)\ne \left(0,0\right),\hfill \\ {u}_{0},{u}_{i}\ge 0.\hfill \end{array}$

□

Theorem 27 (Kuhn-Tucker necessary condition)

Let M be an open E-convex set with respect to the one-to-one and onto operator $E:{R}^{n}\to {R}^{n}$, let $f:M\subseteq {R}^{n}\to R$ be E-differentiable and strictly E-convex at $\overline{x}$ and let ${g}_{i}:{R}^{n}\to R$ for $i=1,2,\dots ,m$. Let $\overline{y}$ be a feasible solution of the problem ${P}_{E}$ and let $I=\left\{i:\left({g}_{i}\circ E\right)\left(\overline{y}\right)=0\right\}$. Furthermore, suppose that $\left({g}_{i}\circ E\right)$ is continuous at $\overline{y}$ for $i\notin I$ and that the gradients $\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)$ for $i\in I$ are linearly independent. If $\overline{x}$ is a solution of the problem P, $\overline{x}=E\left(\overline{y}\right)$ and $\overline{y}$ is a local solution of the problem ${P}_{E}$, then there exist scalars ${u}_{i}$ for $i\in I$ such that

$\mathrm{\nabla }\left(f\circ E\right)\left(\overline{y}\right)+\sum _{i\in I}{u}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)=0,\phantom{\rule{1em}{0ex}}{u}_{i}\ge 0\mathit{\text{for each}}i\in I.$

Proof From the Fritz-John optimality condition theorem, there exist scalars ${\stackrel{ˆ}{u}}_{0}$ and ${\stackrel{ˆ}{u}}_{i}$ for each $i\in I$, not all zero, such that

${\stackrel{ˆ}{u}}_{0}\mathrm{\nabla }\left(f\circ E\right)\left(\overline{y}\right)+\sum _{i\in I}{\stackrel{ˆ}{u}}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)=0,\phantom{\rule{1em}{0ex}}{\stackrel{ˆ}{u}}_{0},{\stackrel{ˆ}{u}}_{i}\ge 0.$

If ${\stackrel{ˆ}{u}}_{0}=0$, then ${\sum }_{i\in I}{\stackrel{ˆ}{u}}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)=0$ with the ${\stackrel{ˆ}{u}}_{i}$ not all zero, which contradicts the linear independence of $\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)$ for $i\in I$; hence ${\stackrel{ˆ}{u}}_{0}>0$. By taking ${u}_{i}=\frac{{\stackrel{ˆ}{u}}_{i}}{{\stackrel{ˆ}{u}}_{0}}$, then $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{y}\right)+{\sum }_{i\in I}{u}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)=0$, ${u}_{i}\ge 0$ holds for each $i\in I$. From Theorem 26, $\overline{y}$ is a local solution of the problem ${P}_{E}$. □

Theorem 28 Let M be an open E-convex set with respect to the one-to-one and onto operator $E:{R}^{n}\to {R}^{n}$, ${g}_{i}:{R}^{n}\to R$ for $i=1,2,\dots ,m$, and let $f:M\subseteq {R}^{n}\to R$ be E-differentiable at $\overline{x}$ and strictly E-convex at $\overline{x}$. Let $\overline{x}=E\left(\overline{y}\right)$ be a feasible solution of the problem ${P}_{E}$ and $I=\left\{i:\left({g}_{i}\circ E\right)\left(\overline{y}\right)=0\right\}$. Suppose that f is pseudo E-convex at $\overline{y}$ and that ${g}_{i}$ is quasi-E-convex and differentiable at $\overline{y}$ for each $i\in I$. Furthermore, suppose that the Kuhn-Tucker conditions hold at $\overline{y}$. Then $\overline{y}$ is a global optimal solution of the problem ${P}_{E}$, and hence $\overline{x}=E\left(\overline{y}\right)$ is a solution of the problem P.

Proof Let $\stackrel{ˆ}{y}$ be a feasible solution of the problem ${P}_{E}$, then $\left({g}_{i}\circ E\right)\left(\stackrel{ˆ}{y}\right)\le \left({g}_{i}\circ E\right)\left(\overline{y}\right)$ for each $i\in I$. Since $\left({g}_{i}\circ E\right)\left(\stackrel{ˆ}{y}\right)\le 0$, $\left({g}_{i}\circ E\right)\left(\overline{y}\right)=0$ and ${g}_{i}$ is quasi-E-convex at $\overline{y}$, then

$\begin{array}{rcl}\left({g}_{i}\circ E\right)\left(\overline{y}+\lambda \left(\stackrel{ˆ}{y}-\overline{y}\right)\right)& =& \left({g}_{i}\circ E\right)\left(\lambda \stackrel{ˆ}{y}+\left(1-\lambda \right)\overline{y}\right)\\ \le & max\left\{\left({g}_{i}\circ E\right)\left(\stackrel{ˆ}{y}\right),\left({g}_{i}\circ E\right)\left(\overline{y}\right)\right\}\\ =& \left({g}_{i}\circ E\right)\left(\overline{y}\right).\end{array}$

This means that $\left({g}_{i}\circ E\right)$ does not increase by moving from $\overline{y}$ along the direction $\stackrel{ˆ}{y}-\overline{y}$. Then we must have from Theorem 15 that $\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)\left(\stackrel{ˆ}{y}-\overline{y}\right)\le 0$. Multiplying by ${u}_{i}$ and summing over I, we get

$\left[\sum _{i\in I}{u}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)\right]\left(\stackrel{ˆ}{y}-\overline{y}\right)\le 0.$

But since

$\mathrm{\nabla }\left(f\circ E\right)\left(\overline{y}\right)+\sum _{i\in I}{u}_{i}\mathrm{\nabla }\left({g}_{i}\circ E\right)\left(\overline{y}\right)=0,$

it follows that $\mathrm{\nabla }\left(f\circ E\right)\left(\overline{y}\right)\left(\stackrel{ˆ}{y}-\overline{y}\right)\ge 0$. Since f is pseudo E-convex at $\overline{y}$, we get

$\left(f\circ E\right)\left(\stackrel{ˆ}{y}\right)\ge \left(f\circ E\right)\left(\overline{y}\right).$

Then $\overline{y}$ is a global solution of the problem ${P}_{E}$ and from Theorem 13 $\overline{x}=E\left(\overline{y}\right)$ is a global solution of the problem P. □

Example 29 Consider the following problem (problem P):

$min f\left(x,y\right)={x}^{\frac{2}{3}}+{y}^{2}\phantom{\rule{1em}{0ex}}\text{subject to}\phantom{\rule{1em}{0ex}}{x}^{2}+{y}^{2}-5\le 0,\phantom{\rule{2em}{0ex}}x+2y-4\le 0.$

The feasible region of this problem is shown in Figure 2.

Let $E\left(x,y\right)=\left(\frac{1}{8}{x}^{3},\frac{1}{3}y\right)$, then the problem ${P}_{E}$ is as follows:

$min \left(f\circ E\right)\left(x,y\right)=\frac{{x}^{2}}{4}+\frac{{y}^{2}}{9}\phantom{\rule{1em}{0ex}}\text{subject to}\phantom{\rule{1em}{0ex}}\frac{{x}^{6}}{64}+\frac{{y}^{2}}{9}-5\le 0,\phantom{\rule{2em}{0ex}}\frac{1}{8}{x}^{3}+\frac{2}{3}y-4\le 0.$

We note that $E\left(M\right)\subset M$; for example,

$\begin{array}{c}\left(\sqrt{5},0\right)\in M\phantom{\rule{1em}{0ex}}\text{implies}\phantom{\rule{1em}{0ex}}E\left(\sqrt{5},0\right)=\left(5\frac{\sqrt{5}}{8},0\right)\in M,\hfill \\ \left(0,2\right)\in M\phantom{\rule{1em}{0ex}}\text{implies}\phantom{\rule{1em}{0ex}}E\left(0,2\right)=\left(0,\frac{2}{3}\right)\in M,\hfill \\ \left(0,0\right)\in M\phantom{\rule{1em}{0ex}}\text{implies}\phantom{\rule{1em}{0ex}}E\left(0,0\right)=\left(0,0\right)\in M,\hfill \\ \left(2,1\right)\in M\phantom{\rule{1em}{0ex}}\text{implies}\phantom{\rule{1em}{0ex}}E\left(2,1\right)=\left(1,\frac{1}{3}\right)\in M.\hfill \end{array}$

The Kuhn-Tucker conditions are as follows:

$\begin{array}{c}\mathrm{\nabla }\left(f\circ E\right)\left(x,y\right)+{u}_{1}\mathrm{\nabla }\left({g}_{1}\circ E\right)\left(x,y\right)+{u}_{2}\mathrm{\nabla }\left({g}_{2}\circ E\right)\left(x,y\right)=0,\hfill \\ \left[\begin{array}{c}\frac{1}{2}x\\ \frac{2}{9}y\end{array}\right]+{u}_{1}\left[\begin{array}{c}\frac{6}{64}{x}^{5}\\ \frac{2}{9}y\end{array}\right]+{u}_{2}\left[\begin{array}{c}\frac{3}{8}{x}^{2}\\ \frac{2}{3}\end{array}\right]=0,\hfill \\ {u}_{1}\left[\frac{{x}^{6}}{64}+\frac{{y}^{2}}{9}-5\right]=0,\hfill \\ {u}_{2}\left[\frac{1}{8}{x}^{3}+\frac{2}{3}y-4\right]=0.\hfill \end{array}$

Solving this system gives $x=0$, $y=0$, ${u}_{1}=0$, ${u}_{2}=0$. Thus $\overline{z}=\left(0,0\right)$ is a solution of the problem ${P}_{E}$, and $\overline{x}=E\left(\overline{z}\right)=\left(0,0\right)$ is a solution of the problem P.
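The Kuhn-Tucker system above can be checked mechanically at the reported point. In this sketch the function names are mine, not the paper's; it verifies stationarity, feasibility and complementary slackness for the problem ${P}_{E}$ with ${u}_{1}={u}_{2}=0$:

```python
def grad_f_E(x, y):
    return (0.5 * x, (2 / 9) * y)

def g1_E(x, y):
    return x ** 6 / 64 + y ** 2 / 9 - 5

def g2_E(x, y):
    return x ** 3 / 8 + (2 / 3) * y - 4

x, y, u1, u2 = 0.0, 0.0, 0.0, 0.0

stationarity = grad_f_E(x, y) == (0.0, 0.0)      # grad f + u1 grad g1 + u2 grad g2 = 0
feasible = g1_E(x, y) < 0 and g2_E(x, y) < 0     # both constraints inactive at the origin
slack = u1 * g1_E(x, y) == 0 and u2 * g2_E(x, y) == 0  # complementary slackness

print(stationarity, feasible, slack)
```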

## 4 Conclusion

In this paper we introduced a new definition of an E-differentiable convex function, which transforms a non-differentiable function into a differentiable function under an operator $E:{R}^{n}\to {R}^{n}$, and we studied the Kuhn-Tucker and Fritz-John conditions for obtaining an optimal solution of mathematical programming with a non-differentiable function. Finally, some examples were presented to clarify the results.

## References

1. Youness EA: E -convex sets, E -convex functions and E -convex programming. J. Optim. Theory Appl. 1999, 102(3):439–450.

2. Youness EA: Optimality criteria in E -convex programming. Chaos Solitons Fractals 2001, 12: 1737–1745. 10.1016/S0960-0779(00)00036-9

3. Chen X: Some properties of semi- E -convex functions. J. Math. Anal. Appl. 2002, 275: 251–262. 10.1016/S0022-247X(02)00325-6

4. Syau Y-R, Lee ES: Some properties of E -convex functions. Appl. Math. Lett. 2005, 18: 1074–1080. 10.1016/j.aml.2004.09.018

5. Emam T, Youness EA: Semi strongly E -convex function. J. Math. Stat. 2005, 1(1):51–57.

6. Megahed AA, Gomma HG, Youness EA, El-Banna AH: A combined interactive approach for solving E -convex multi- objective nonlinear programming. Appl. Math. Comput. 2011, 217: 6777–6784. 10.1016/j.amc.2010.12.086

7. Iqbal A, Ahmad I, Ali S: Some properties of geodesic semi- E -convex functions. Nonlinear Anal., Theory Methods Appl. 2011, 74: 6805–6813. 10.1016/j.na.2011.07.005

8. Iqbal A, Ali S, Ahmad I: On geodesic E -convex sets, geodesic E -convex functions and E -epigraphs. J. Optim. Theory Appl. 2012. (Available online)

9. Mangasarian OL: Nonlinear Programming. McGraw-Hill, New York; 1969.

10. Bazaraa MS, Shetty CM: Nonlinear Programming Theory and Algorithms. Wiley, New York; 1979.

11. Youness EA: Characterization of efficient solution of multiobjective E -convex programming problems. Appl. Math. Comput. 2004, 151(3):755–761. 10.1016/S0096-3003(03)00526-5

## Acknowledgements

The authors express their deep thanks and respect to the referees and the Journal for their valuable comments in the evaluation of this paper.

## Author information


### Corresponding author

Correspondence to Abd El-Monem A Megahed.



Megahed, A.EM.A., Gomma, H.G., Youness, E.A. et al. Optimality conditions of E-convex programming for an E-differentiable function. J Inequal Appl 2013, 246 (2013). https://doi.org/10.1186/1029-242X-2013-246


### Keywords

• E-convex set
• E-convex function
• semi E-convex function
• E-differentiable function 