
Optimality conditions for interval-valued univex programming

Abstract

We introduce the concept of interval-valued univex mappings, establish optimality conditions for interval-valued univex functions in the constrained interval-valued minimization problem, and present illustrative examples.

1 Introduction

Convexity and generalized convexity are important in mathematical programming. Invex functions, introduced by Hanson [17], are an important class of generalized convex functions and have been successfully used in optimization and equilibrium problems. For example, necessary and sufficient conditions are obtained for K-invex functions in [14]. The concept of G-invex functions was introduced by Antczak [3]. Optimality and duality for differentiable G-multiobjective problems are considered in [4, 5]. Noor [26] considered equilibrium problems in the context of invexity. Extending and refining the results of Noor [26], Farajzadeh [15] gave some results for invex Ky Fan inequalities in topological vector spaces.

Another important class of generalized convex functions, univex and preunivex functions, was introduced in [8]. Suppose \(\emptyset \neq X\subseteq R^{n}\), \(\eta :X \times X\rightarrow R^{n}\), \(\varPhi :R\rightarrow R\), and \(b=b(x,y):X \times X \rightarrow R^{+}\). A differentiable function \(F: X\rightarrow R\) is said to be univex at \(y\in X\) with respect to η, Φ, b if, for all \(x\in X\),

$$\begin{aligned} b(x,y)\varPhi \bigl[F(x)-F(y)\bigr]\geq \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
(1)

Later, some generalized optimality conditions for primal and dual problems were considered by Hanson and Mond [18]. Combining generalized type I and univex functions, optimality conditions and duality for several mathematical programming problems were considered by many researchers [1, 16, 29], and type I and univex functions continue to attract attention [24, 25, 34, 35].

The authors of [2, 6, 9, 12, 27, 30, 33, 36,37,38,39] have studied generalized convex interval-valued mappings and their connection with interval-valued optimization. For example, Steuer [33] proposed three algorithms, called the F-cone algorithm, E-cone algorithm, and emanating algorithms, to solve linear programming problems with interval-valued objective functions. To prove strong duality theorems, Wu [37] derived KKT optimality conditions for interval-valued problems under convexity hypotheses. Wu [36] also obtained KKT conditions in an optimization problem with an interval-valued objective function using H-derivatives and the concept of weakly differentiable functions. Since the H-derivative suffers certain disadvantages, Chalco-Cano et al. [10] gave KKT-type optimality conditions obtained using the gH-derivatives of interval-valued functions. They also studied the relationship between their approach and the approaches given by Wu [36]. However, these methods cannot handle a class of optimization problems whose interval-valued objective functions are not LU-convex but univex. Antczak [6] used the classical exact \(l_{1}\) penalty function method for solving nondifferentiable interval-valued optimization problems under convexity hypotheses. Optimality conditions in invex optimization problems with an interval-valued objective function were discussed by Zhang et al. [39]. Using gH-differentiability, Li et al. [21] introduced interval-valued invex mappings and gave optimality conditions for interval-valued objective functions under invexity. Using the weak derivative of fuzzy functions, Li et al. [22] defined fuzzy weakly univex functions and considered optimality conditions for the fuzzy minimization problem.

Following [21] and [22], in this paper we introduce the concept of interval-valued univex mappings, establish optimality conditions for interval-valued univex functions in the constrained interval-valued minimization problem, and present illustrative examples. The present paper can be seen as a continuation and extension of [20]. The method presented in this paper differs from that in [6]: our method cannot solve Example 3.1 of [6] because the objective function there is not gH-differentiable. Example 4.1 shows that the methods of [6, 33, 36, 37] cannot solve a class of optimization problems for interval-valued univex mappings. Example 4.2 shows that the method of Li et al. [22] cannot solve a class of fuzzy optimization problems for interval-valued univex mappings. Finally, Example 4.3 shows that the method of [10] cannot solve a class of optimization problems for interval-valued univex mappings. In Sect. 3, we introduce the concept of interval-valued univex mappings and discuss some of their properties. Section 4 deals with optimality conditions for the constrained interval-valued minimization problem under the assumption of interval-valued univexity.

2 Preliminaries

In this paper, a closed interval in R is denoted by \(A=[a^{L}, a ^{U}]\). Every \(a\in R\) is considered as a particular closed interval \(a=[a,a]\). The set of closed intervals is denoted by \(\mathcal{I}\).

Given \(A=[a^{L},a^{U}]\) and \(B=[b^{L},b^{U}] \in \mathcal{I}\), the arithmetic operations and order are defined in [32] as follows:

  1. (1)

    \(A+B=[a^{L}+b^{L}, a^{U}+b^{U}] \) and \(-A=\{-a:a\in A\}=[-a^{U},-a ^{L}]\);

  2. (2)

    \(A\ominus _{gH} B=[\min (a^{L}-b^{L},a^{U}-b^{U}),\max (a^{L}-b ^{L},a^{U}-b^{U})]\);

  3. (3)

    \(A\preceq B\Leftrightarrow a^{L}\leq b^{L}\) and \(a^{U}\leq b^{U}\); \(A\prec B \Leftrightarrow A\preceq B\) and \(A\neq B\).
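The operations (1)–(3) can be exercised numerically. The following minimal Python sketch (our illustration, not part of the original development; an interval \([a^{L},a^{U}]\) is represented as a pair) mirrors the definitions above:

```python
# Intervals are represented as pairs (aL, aU) with aL <= aU (our convention).

def add(A, B):
    """A + B = [aL + bL, aU + bU]."""
    return (A[0] + B[0], A[1] + B[1])

def neg(A):
    """-A = [-aU, -aL]."""
    return (-A[1], -A[0])

def gh_diff(A, B):
    """A ominus_gH B = [min(aL - bL, aU - bU), max(aL - bL, aU - bU)]."""
    d1, d2 = A[0] - B[0], A[1] - B[1]
    return (min(d1, d2), max(d1, d2))

def leq(A, B):
    """A preceq B  <=>  aL <= bL and aU <= bU."""
    return A[0] <= B[0] and A[1] <= B[1]

A, B = (1.0, 3.0), (0.0, 2.0)
print(add(A, B))      # (1.0, 5.0)
print(gh_diff(A, B))  # (1.0, 1.0)
print(gh_diff(A, A))  # (0.0, 0.0)
print(leq(B, A))      # True
```

Note that \(A\ominus _{gH}A=0\), whereas \(A+(-A)=[a^{L}-a^{U},a^{U}-a^{L}]\neq 0\) for a nondegenerate interval; this is a main reason the gH-difference is used.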

For \(X\subseteq R^{n}\), a mapping \(F:X\rightarrow \mathcal{I}\) is called an interval-valued function. Then \(F(x)=[F^{L}(x),F^{U}(x)]\), where \(F^{L}(x)\) and \(F^{U}(x)\) are two real-valued functions defined on \(R^{n}\) and satisfying \(F^{L}(x)\leq F^{U}(x)\) for every \(x\in X\). If \(F^{L}(x)\) and \(F^{U}(x)\) are continuous, then \(F(x)\) is said to be continuous.

It is well known that the derivative and subderivative of a function are important in the study of generalized convexity and mathematical programming. For example, a classic subdifferential was introduced by Azimov and Gasimov [7]. Some theorems connecting operations on the weak subdifferential in nonsmooth and nonconvex analysis are provided in [13]. The derivative and subderivative of interval-valued functions extend those of real-valued functions. Due to the different arithmetics of intervals, several definitions of derivatives of interval-valued functions have been introduced, such as weakly differentiable functions [36], H-differentiable functions (based on the Hukuhara difference of two closed intervals [36]), gH-differentiable functions (based on the operation \(\ominus _{gH}\) of two closed intervals [11, 31]), and subdifferentiable functions (based on the difference \(A- B=[a^{L}-b^{U}, a^{U}-b^{L}]\) of two closed intervals [6]). In this paper, we always use weakly differentiable and gH-differentiable functions, which are defined as follows.

Let X be an open set in \(R^{n}\), and let \(F(x)=[F^{L}(x),F^{U}(x)]\). Then \(F(x)\) is called weakly differentiable at \(x_{0}\) if \(F^{L}(x)\) and \(F^{U}(x)\) are differentiable at \(x_{0}\).

Let \(x_{0} \in (a, b)\) and h be such that \(x_{0} + h \in (a, b)\). Then

$$\begin{aligned} F^{\prime }(x_{0})= \lim_{h\rightarrow 0} \frac{1}{h}\bigl[F(x_{0}+h)\ominus _{gH}F(x_{0})\bigr]. \end{aligned}$$
(2)

If \(F^{\prime }(x_{0})\in \mathcal{I}\) exists, then F is gH-differentiable at \(x_{0}\).

If \(F^{L}(x)\) and \(F^{U}(x)\) are differentiable functions at \(x\in (a, b)\), then \(F(x)\) is gH-differentiable at x, and

$$\begin{aligned} F^{\prime }(x)= \bigl[\min \bigl\{ \bigl(F^{L}\bigr)^{\prime }(x), \bigl(F^{U}\bigr)^{\prime }(x)\bigr\} , \max \bigl\{ \bigl(F^{L}\bigr)^{\prime }(x),\bigl(F^{U} \bigr)^{\prime }(x)\bigr\} \bigr]. \end{aligned}$$
(3)
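Formula (3) can be checked against the limit (2) numerically. The following Python sketch (our illustration) compares a gH-difference quotient with the endpoint formula for \(F(x)=[1,2]x^{3}\) on \(x<0\), where \(F^{L}(x)=2x^{3}\) and \(F^{U}(x)=x^{3}\):

```python
# Numeric check (our sketch) that formula (3) agrees with the limit (2)
# for F(x) = [1,2]x^3 on x < 0.

def F(x):
    lo, hi = x**3, 2.0 * x**3
    return (min(lo, hi), max(lo, hi))

def gh_diff(A, B):
    d1, d2 = A[0] - B[0], A[1] - B[1]
    return (min(d1, d2), max(d1, d2))

def gh_quotient(F, x, h=1e-6):
    """(1/h)[F(x + h) ominus_gH F(x)], a finite-h approximation of (2)."""
    d = gh_diff(F(x + h), F(x))
    q1, q2 = d[0] / h, d[1] / h
    return (min(q1, q2), max(q1, q2))

x = -1.5
# Formula (3): [min{6x^2, 3x^2}, max{6x^2, 3x^2}] = [3x^2, 6x^2] here
exact = (3 * x**2, 6 * x**2)
approx = gh_quotient(F, x)
print(exact)   # (6.75, 13.5)
print(approx)  # agrees with `exact` to about 1e-5
```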

We say that an interval-valued function F is gH-differentiable at \(x=(x_{1},\ldots ,x_{n})\in X\) if all the partial gH-derivatives \(( \frac{\partial F}{\partial x_{1}})(x),\ldots , ( \frac{\partial F}{ \partial x_{n}})(x)\) exist on some neighborhood of x and are continuous at x. We write

$$\begin{aligned} \nabla F(x)=\biggl(\biggl( \frac{\partial F}{\partial x_{1}}\biggr) (x),\biggl( \frac{\partial F}{\partial x_{2}}\biggr) (x),\ldots ,\biggl( \frac{\partial F}{\partial x_{n}}\biggr) (x) \biggr)^{t}, \end{aligned}$$

and we call \(\nabla F(x)\) the gradient of a gH-differentiable interval-valued function F at x.

Let \(\mathbb{H}(R^{n})\) denote the family of nonempty compact subsets of \(R^{n}\). For \(A,B\in \mathbb{H}(R^{n})\), the Hausdorff metric \(h(A,B)\) on \(\mathbb{H}(R^{n})\) is defined by

$$\begin{aligned} h(A,B)=\inf \bigl\{ \varepsilon \mid A\subseteq N(B,\varepsilon ),B\subseteq N(A, \varepsilon )\bigr\} , \end{aligned}$$

where

$$\begin{aligned} N(A,\varepsilon )=\bigl\{ x\in R^{n}\mid d(x,A)< \varepsilon \bigr\} , \quad d(x,A)= \inf_{a\in A} \Vert x-a \Vert . \end{aligned}$$
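For finite subsets of R, the metric can be computed with the equivalent formulation \(h(A,B)=\max (\sup_{a\in A}d(a,B), \sup_{b\in B}d(b,A))\); a small Python sketch (our illustration):

```python
# Hausdorff distance between finite subsets of R, via the equivalent
# sup-inf formulation of the definition above (our sketch).

def d(x, A):
    """Point-to-set distance d(x, A) = min_{a in A} |x - a|."""
    return min(abs(x - a) for a in A)

def hausdorff(A, B):
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

A = [0.0, 1.0, 2.0]
B = [0.5, 2.5]
print(hausdorff(A, B))  # 0.5
```

For two closed intervals this reduces to \(h([a^{L},a^{U}],[b^{L},b^{U}])=\max (|a^{L}-b^{L}|,|a^{U}-b^{U}|)\).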

The following basic result of mathematical analysis is well known (see Lemma 3.1 of [19]):

Suppose that \(\varPhi :R^{n}\rightarrow R^{n}\) is continuous and let \(X\in \mathbb{H}(R^{n})\). Then the mapping

$$\begin{aligned} \varPsi : \mathbb{H}\bigl(R^{n}\bigr)\rightarrow \mathbb{H} \bigl(R^{n}\bigr), \qquad \varPsi (A)=\bigl\{ \varPhi (a)\mid a\in A\bigr\} \end{aligned}$$

is uniformly continuous in h-metric.

We say that \(\varPsi :\mathcal{I}\rightarrow \mathcal{I}\) is increasing if \(A\preceq B\) implies \(\varPsi (A)\preceq \varPsi (B)\). From the above we can prove the following:

If the function \(\varPhi : R\rightarrow R\) is continuous and increasing, then the induced \(\varPsi : \mathcal{I}\rightarrow \mathcal{I}\) is increasing. Moreover, \(\varPsi ([a^{L}, a^{U}])=[\varPhi (a^{L}),\varPhi (a^{U})]\).
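The induced map is easy to exercise numerically; a Python sketch (our illustration; the name `induced_psi` is ours):

```python
# For an increasing Phi, the induced Psi maps [aL, aU] to [Phi(aL), Phi(aU)]
# and preserves the order preceq (our sketch).

def induced_psi(phi, A):
    return (phi(A[0]), phi(A[1]))  # valid because phi is increasing

phi = lambda t: t**3 + t  # increasing on R

print(induced_psi(phi, (-1.0, 2.0)))  # (-2.0, 10.0)

A, B = (0.0, 1.0), (0.5, 2.0)  # A preceq B
PA, PB = induced_psi(phi, A), induced_psi(phi, B)
print(PA[0] <= PB[0] and PA[1] <= PB[1])  # True: Psi(A) preceq Psi(B)
```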

3 Interval-valued univex functions

In this section, we define interval-valued univex functions as a generalization of univex functions [8] and discuss some of their properties.

Let X be an invex set in \(R^{n}\) (the concept of an invex set can be found in [8]), and let F be an interval-valued function. The following definition is a particular case of the fuzzy weakly univex functions introduced in [22].

Suppose F is a weakly differentiable interval-valued function. Then F is weakly univex at \(y\in X\) with respect to η, Φ, b if and only if both \(F^{L}(x)\) and \(F^{U}(x)\) are univex at \(y\in X\), that is, for all \(x\in X\),

$$\begin{aligned}& b(x,y)\varPhi \bigl[F^{L}(x)-F^{L}(y)\bigr]\geq \eta ^{t}(x,y)\nabla F^{L}(y), \end{aligned}$$
(4)
$$\begin{aligned}& b(x,y)\varPhi \bigl[F^{U}(x)-F^{U}(y)\bigr]\geq \eta ^{t}(x,y)\nabla F^{U}(y), \end{aligned}$$
(5)

where \(\eta =\eta (x,y):X \times X\rightarrow R^{n}\), \(\varPhi :R\rightarrow R\), and \(b = b(x,y):X\times X \rightarrow R^{+}\).

Remark 3.1

The concept of LU-invexity for interval-valued functions was introduced in [39]; since it is defined via the endpoint functions, in this paper we call such functions weakly invex. Every interval-valued weakly invex function is interval-valued weakly univex with respect to η, b, Φ, where

$$\begin{aligned}& \varPhi (x) = x,\quad b=1, \end{aligned}$$

but the converse is not true.

Example 3.1

Consider the function \(F: (-\infty ,0)\rightarrow \mathcal{I}\) defined by

$$\begin{aligned}& F(x)=[1,2]x^{3}, \\& \eta (x,y)=\textstyle\begin{cases} x^{2}+xy+y^{2}, & x>y, \\ x-y, & x\leq y, \end{cases}\displaystyle \\& b(x,y)=\textstyle\begin{cases} \frac{y^{2}}{x-y}, & x>y, \\ 0, & x\leq y. \end{cases}\displaystyle \end{aligned}$$

Let \(\varPhi :R\rightarrow R\) be defined by \(\varPhi (V)=3V\). Since \(x<0\), we have \(F^{L}(x)=2x^{3}\) and \(F^{U}(x)=x^{3}\), so that \(\nabla F^{L}(x)=6x^{2}\) and \(\nabla F^{U}(x)=3 x^{2}\). Then F is interval-valued weakly univex but not interval-valued weakly invex, since for \(x=-2\) and \(y=-1\), \(F^{U}(x)-F^{U}(y)=-7< -3=\eta ^{t}(x,y)\nabla F^{U}(y)\).
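The two claims of this example can be verified numerically; the following Python sketch (our check, with a small tolerance for rounding) tests the univexity inequalities (4)–(5) on a sample grid in \((-\infty ,0)\) and the failure of invexity at \(x=-2\), \(y=-1\):

```python
# Our numeric check of Example 3.1: weakly univex on a sampled grid,
# but the invexity inequality fails at x = -2, y = -1.

def FL(x): return 2 * x**3
def FU(x): return x**3
def dFL(y): return 6 * y**2
def dFU(y): return 3 * y**2
def eta(x, y): return x * x + x * y + y * y if x > y else x - y
def b(x, y): return y * y / (x - y) if x > y else 0.0
def phi(v): return 3 * v

pts = [-3.0, -2.5, -2.0, -1.5, -1.0, -0.5]
ok = all(
    b(x, y) * phi(FL(x) - FL(y)) >= eta(x, y) * dFL(y) - 1e-9
    and b(x, y) * phi(FU(x) - FU(y)) >= eta(x, y) * dFU(y) - 1e-9
    for x in pts for y in pts
)
print(ok)  # True: (4) and (5) hold on the sampled grid

x, y = -2.0, -1.0
print(FU(x) - FU(y) >= eta(x, y) * dFU(y))  # False: not weakly invex
```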

Let X be a nonempty open set in \(R^{n}\), \(\eta :X \times X\rightarrow R^{n}\), \(\varPsi :\mathcal{I}\rightarrow \mathcal{I}\), and \(b = b(x,y): X \times X \rightarrow R^{+}\).

Definition 3.1

Suppose F is a gH-differentiable interval-valued function. Then F is univex at \(y\in X\) with respect to η, Ψ, b if for all \(x\in X\),

$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr]\succeq \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
(6)

The following example shows that an interval-valued univex function may not be an interval-valued weakly univex function.

Example 3.2

Suppose \(F(x)=[-|x|,|x|]\), \(x\in R\), and \(b=1\). Let \(\varPsi [a,b]=[a,b]\) be the map induced by \(\varPhi (a)=a\), and let

$$\begin{aligned} \eta (x,y)=\textstyle\begin{cases} x-y, &x y\geq 0, \\ x+y, &x y< 0. \end{cases}\displaystyle \end{aligned}$$

Then \(F(x)\) is gH-differentiable on R, and \(F^{\prime }(y)=[-1, 1]\). We can prove that

$$\begin{aligned} b\varPsi \bigl[F(x)\ominus _{gH} F(y)\bigr]\succeq \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$

Therefore \(F(x)\) is univex with respect to η, b, Ψ, but \(F(x)\) is not weakly univex since \(F^{L}(x)\) is not univex with respect to η, b, Φ.
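Inequality (6) for this example can be checked numerically; our Python sketch below evaluates \(\eta (x,y)\nabla F(y)\) as the scalar multiple \(\eta \cdot [-1,1]\) and verifies (6) on sample points (in fact it holds with equality):

```python
# Our check of (6) for F(x) = [-|x|, |x|] with F'(y) = [-1, 1], b = 1,
# Psi the identity, and the eta defined above.

def F(x): return (-abs(x), abs(x))

def gh_diff(A, B):
    d1, d2 = A[0] - B[0], A[1] - B[1]
    return (min(d1, d2), max(d1, d2))

def eta(x, y): return x - y if x * y >= 0 else x + y

def scale(t, A):
    """Scalar multiple t * [aL, aU]."""
    lo, hi = t * A[0], t * A[1]
    return (min(lo, hi), max(lo, hi))

def succeq(A, B):
    """A succeq B, up to rounding."""
    return A[0] >= B[0] - 1e-12 and A[1] >= B[1] - 1e-12

pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
print(all(succeq(gh_diff(F(x), F(y)), scale(eta(x, y), (-1.0, 1.0)))
          for x in pts for y in pts))  # True

# By contrast, F^L(x) = -|x| is not even differentiable at x = 0,
# so the endpoint inequality (1) is not available and F is not weakly univex.
```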

Theorem 3.1

Suppose \(F(x)\) is gH-differentiable. If \(F(x)\) is an interval-valued weakly univex function with respect to η, b, Φ and Φ is increasing, then \(F(x)\) is an interval-valued univex function with respect to the same η, b, and Ψ, where Ψ is an extension of Φ.

Proof

Since \(F(x)\) is weakly univex at y, the real-valued functions \(F^{L}\) and \(F^{U}\) are univex at y, that is,

$$\begin{aligned}& b(x,y)\varPhi \bigl[F^{L}(x)-F^{L}(y)\bigr]\geq \eta ^{t}(x,y)\nabla F^{L}(y)\quad \text{and} \\& b(x,y)\varPhi \bigl[F^{U}(x)-F^{U}(y)\bigr]\geq \eta ^{t}(x,y)\nabla F^{U}(y) \end{aligned}$$

for all \(x\in X\).

(i) Under the condition \(\eta ^{t}(x,y)\nabla F^{L}(y) \leq \eta ^{t}(x,y) \nabla F^{U}(y)\), we have

$$\begin{aligned}& \eta ^{t}(x,y)\nabla F(y)=\bigl[\eta ^{t}(x,y)\nabla F^{L}(y), \eta ^{t}(x,y) \nabla F^{U}(y)\bigr]. \end{aligned}$$

If \(F(x)\ominus _{gH} F(y)=[F^{L}(x)-F^{L}(y), F^{U}(x)-F^{U}(y)]\), then since Φ is increasing, we have

$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr] \\& \quad =\bigl[b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr), b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr)\bigr] \\& \quad \succeq \bigl[\eta ^{t}(x,y)\nabla F^{L}(y), \eta ^{t}(x,y)\nabla F^{U}(y)\bigr] \\& \quad = \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$

If \(F(x)\ominus _{gH} F(y)=[F^{U}(x)-F^{U}(y), F^{L}(x)-F^{L}(y)]\), then

$$\begin{aligned}& b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr) \\& \quad \geq b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr) \\& \quad \geq \eta ^{t}(x,y)\nabla F^{U}(y) \\& \quad \geq \eta ^{t}(x,y)\nabla F^{L}(y), \end{aligned}$$

and since Φ is increasing, we have

$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr] \\& \quad =b(x,y)\varPsi \bigl[F^{U}(x)-F^{U}(y), F^{L}(x)-F^{L}(y)\bigr] \\& \quad =\bigl[b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr), b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr)\bigr] \\& \quad \succeq \bigl[\eta ^{t}(x,y)\nabla F^{L}(y), \eta ^{t}(x,y)\nabla F^{U}(y)\bigr] \\& \quad = \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$

(ii) Under the condition \(\eta ^{t}(x,y)\nabla F^{L}(y) > \eta ^{t}(x,y) \nabla F^{U}(y)\), we have

$$\begin{aligned} \eta ^{t}(x,y)\nabla F(y)=\bigl[\eta ^{t}(x,y)\nabla F^{U}(y),\eta ^{t}(x,y) \nabla F^{L}(y)\bigr]. \end{aligned}$$

If \(F(x)\ominus _{gH} F(y)=[F^{U}(x)-F^{U}(y), F^{L}(x)-F^{L}(y)]\), then since Φ is increasing, we have

$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr] \\& \quad =\bigl[b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr), b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr)\bigr] \\& \quad \succeq \bigl[\eta ^{t}(x,y)\nabla F^{U}(y), \eta ^{t}(x,y)\nabla F^{L}(y)\bigr] \\& \quad = \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$

If \(F(x)\ominus _{gH} F(y)=[F^{L}(x)-F^{L}(y), F^{U}(x)-F^{U}(y)]\), then

$$\begin{aligned}& b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr) \\& \quad \geq b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr) \\& \quad \geq \eta ^{t}(x,y)\nabla F^{L}(y) \\& \quad \geq \eta ^{t}(x,y)\nabla F^{U}(y). \end{aligned}$$

Since Φ is increasing, we have

$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr] \\& \quad =b(x,y)\varPsi \bigl[F^{L}(x)-F^{L}(y), F^{U}(x)-F^{U}(y)\bigr] \\& \quad =\bigl[b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y) \bigr),b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr)\bigr] \\& \quad \succeq \bigl[ \eta ^{t}(x,y)\nabla F^{U}(y), \eta ^{t}(x,y)\nabla F^{L}(y)\bigr] \\& \quad = \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$

 □

Remark 3.2

If Φ is nonincreasing, then Theorem 3.1 may not be true (as shown in the following Example 3.3).

Example 3.3

Suppose \(F(x)=[-2,1]x^{2}\), \(x<0\). Then \(F(x)\) is gH-differentiable and weakly differentiable. It is easy to check that \(F(x)\) is weakly univex with respect to \(\eta (x,y)=x-y\),

$$b(x,y)=\textstyle\begin{cases} 1, & x\leq y< 0, \\ \frac{-2y(x-y)}{-x^{2}+y^{2}}, & y< x< 0, \end{cases} $$

and \(\varPhi (a)=|a|\). However, \(F(x)\) is not univex with respect to the same \(\eta (x,y)\), b, and Ψ, where Ψ is defined by the extension of \(\varPhi (a)=|a|\).

4 Optimality criteria for interval-valued univex mappings

In this section, for gH-differentiable interval-valued univex functions, we establish sufficient optimality conditions for a feasible solution \(x^{\ast }\) to be an optimal solution or a nondominated solution for \((P)\).

Suppose \(F(x)\), \(g_{1}(x),\ldots , g_{m}(x)\) are gH-differentiable interval-valued mappings defined on a nonempty open set \(X\subseteq R ^{n}\). Then, we consider the primal problem:

$$\begin{aligned}& (P)\quad \min F(x) \\& \hphantom{(P)\quad} \text{s.t.}\quad g(x)\preceq 0. \end{aligned}$$

Let \(P:=\{x\in X :g(x)\preceq 0\}\) denote the feasible set of \((P)\).

Since \(\preceq \) is only a partial order, an optimal solution may not exist for some interval-valued optimization problems. Therefore one usually considers the concept of a nondominated solution in this situation. We recall the concepts of optimal and nondominated solutions as follows.

Definition 4.1

  1. (i)

    \(x^{\ast }\in P\) is an optimal solution of \((P)\Leftrightarrow F(x^{\ast })\preceq F(x)\) for all \(x\in P\). In this case, \(F(x^{\ast })\) is called the optimal objective value of F.

  2. (ii)

    \(x^{\ast }\in P\) is a nondominated solution of \((P)\Leftrightarrow \) there exists no \(x_{0}\in P\) such that \(F(x_{0})\prec F(x^{\ast })\). In this case, \(F(x^{\ast })\) is called the nondominated objective value of F.
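To see why nondominated solutions are the natural notion here, note that two objective values can be incomparable under \(\preceq \); a minimal Python sketch (our illustration):

```python
# Two interval objective values that are incomparable under preceq,
# so neither can serve as "the" optimal value (our illustration).

def leq(A, B):
    """A preceq B on intervals represented as pairs (aL, aU)."""
    return A[0] <= B[0] and A[1] <= B[1]

A, B = (0.0, 3.0), (1.0, 2.0)
print(leq(A, B), leq(B, A))  # False False: A and B are incomparable
```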

Theorem 4.1

Let \(x^{\ast }\) be P-feasible. Suppose that:

  1. (i)

    there exist η, \(\varPsi _{0}\), \(b_{0}\), \(\varPsi _{i}\), \(b_{i}\), \(i=1, 2 , \ldots ,m \), such that

    $$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)\varPsi _{0}\bigl[F(x)\ominus _{gH}F \bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t}\bigl(x,x ^{\ast }\bigr)\nabla F\bigl(x^{\ast }\bigr) {} \end{aligned}$$
    (7)

    and

    $$\begin{aligned}& -b_{i}\bigl(x,x^{\ast }\bigr)\varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t}\bigl(x,x^{ \ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr) \end{aligned}$$
    (8)

    for all feasible x;

  2. (ii)

    there exists \(y^{\ast }\in R^{m}\) such that

    $$\begin{aligned}& \nabla F\bigl(x^{\ast }\bigr)=-y^{\ast t}\nabla g \bigl(x^{\ast }\bigr), \end{aligned}$$
    (9)
    $$\begin{aligned}& y^{\ast }\geq 0. \end{aligned}$$
    (10)

    Further suppose that

    $$\begin{aligned}& \varPsi _{0}(\mu )\succeq 0 \quad \Rightarrow \quad \mu \succeq 0, \end{aligned}$$
    (11)
    $$\begin{aligned}& \mu \preceq 0 \quad \Rightarrow\quad \varPsi _{i}( \mu )\succeq 0, \end{aligned}$$
    (12)

    and

    $$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)> 0,\qquad b_{i} \bigl(x,x^{\ast }\bigr)\geq 0, \end{aligned}$$
    (13)

    for all feasible x. Then \(x^{\ast }\) is an optimal solution of \((P)\).

Proof

Let x be P-feasible. Since \(x^{\ast }\) is also P-feasible,

$$\begin{aligned} g\bigl(x^{\ast }\bigr)\preceq 0. \end{aligned}$$

This, along with (12), yields

$$\begin{aligned} \varPsi _{i}\bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\succeq 0. \end{aligned}$$

From (7)–(13) it follows that

$$\begin{aligned} b_{0}\bigl(x,x^{\ast }\bigr)\varPsi _{0}\bigl[F(x) \ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr] \succeq &\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F\bigl(x^{\ast }\bigr) \\ =&-\eta ^{t}\bigl(x,x^{\ast }\bigr) \sum_{i=1} ^{m}y_{i}^{\ast }\nabla g_{i} \bigl(x^{\ast }\bigr) \\ \succeq &\sum_{i=1} ^{m}b_{i} \bigl(x,x^{\ast }\bigr)y_{i}^{\ast }\varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr] \\ \succeq &0. \end{aligned}$$

From (13) it follows that

$$\begin{aligned} \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast } \bigr)\bigr]\succeq 0. \end{aligned}$$

By (11) we have

$$\begin{aligned} F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\succeq 0. \end{aligned}$$

Thus

$$\begin{aligned} F(x)\succeq F\bigl(x^{\ast }\bigr). \end{aligned}$$

Therefore \(x^{\ast }\) is an optimal solution of \((P)\). □

Remark 4.1

If we replace the condition

$$\begin{aligned} \varPsi _{0}(\mu )\succeq 0 \quad \Rightarrow\quad \mu \succeq 0 \end{aligned}$$

in Theorem 4.1 with

$$\begin{aligned} \varPsi _{0}(\mu )\nprec 0 \quad \Rightarrow\quad \mu \nprec 0, \end{aligned}$$
(14)

then \(x^{\ast }\) is a nondominated solution of \((P)\).

In Theorem 18 of [20], the authors also gave a sufficient optimality condition for a feasible solution \(x^{\ast }\) to be an optimal solution. In this theorem, the equation

$$\begin{aligned} \nabla F\bigl(x^{\ast }\bigr)+y^{\ast t}\nabla g \bigl(x^{\ast }\bigr)=0 \end{aligned}$$

was used instead of (9) of Theorem 4.1. We can show that this equation is very restrictive. Indeed, in the one-dimensional case, suppose \(\nabla F(x^{\ast })=[a,b]\) and \(y^{\ast t} \nabla g(x^{\ast })=[yc,yd]\). Then \([a,b]+[yc,yd]=[a+yc,b+yd]=0\), where \(a\leq b\) and \(yc\leq yd\), which forces \(a=b\) and \(yc=yd\). That is to say, \(\nabla F(x^{\ast })\) must be a real number instead of a nondegenerate interval. In the following example, \(x^{\ast }\) is an optimal solution of \((P)\), but \(x^{\ast }\) does not satisfy this equation. The example also shows the advantages of our method over [6, 33, 36, 37].

Example 4.1

$$\begin{aligned}& \min F(x)=\biggl[\frac{1}{2},\frac{3}{2}\biggr]\sin ^{2}x_{1}+\biggl[\frac{1}{2},\frac{3}{2} \biggr] \sin ^{2}x_{2} \\& \textit{s.t.}\quad g(x)=\biggl[\frac{1}{2},\frac{3}{2}\biggr](\sin x_{1}-1)^{2}+\biggl[\frac{1}{2}, \frac{3}{2} \biggr](\sin x_{2}-1)^{2}\preceq \frac{1}{4}\biggl[ \frac{1}{2}, \frac{3}{2}\biggr], \\& \hphantom{\mbox{s.t.}\quad} x_{1},x_{2}\in \biggl(0,\frac{\pi }{2} \biggr). \end{aligned}$$

We can observe that \(F(x)\) is weakly differentiable, H-differentiable, and gH-differentiable. Since the interval-valued function \(F(x)\) is not convex, the methods in [6, 33, 36, 37] cannot be used.

The function \(F(x)\) is interval-valued univex with respect to

$$\begin{aligned}& \eta (x,y)=\textstyle\begin{cases} (\frac{\sin x_{1}-\sin y_{1}}{\cos y_{1}},\frac{\sin x_{2}-\sin y_{2}}{ \cos y_{2}})^{t}, &(x_{1},x_{2})\geq (y_{1},y_{2}), \\ 0 & \text{otherwise}, \end{cases}\displaystyle \\& b_{0}(x,y)=\textstyle\begin{cases} 1, &(x_{1},x_{2})\geq (y_{1},y_{2}), \\ 0 & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$

and \(\varPsi _{0}\) is induced by \(\varPhi _{0}(a)=2a\), \(b_{1}(x,y)=b_{0}(x,y)\), and \(\varPsi _{1}\) is induced by \(\varPhi _{1}(a)=|a|\), where \(x=(x_{1},x_{2})^{t}\) and \(y=(y_{1},y_{2})^{t}\). The point \(x^{\ast }=(\sin ^{-1}(1-\frac{1}{2\sqrt{2}}),\sin ^{-1}(1-\frac{1}{2\sqrt{2}}))^{t}\) is a feasible solution, and \((F,g)\) satisfies the hypotheses of Theorem 4.1. Therefore \(x^{\ast }\) is an optimal solution.
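The optimality of \(x^{\ast }\) can be cross-checked numerically. Since the objective is \([\frac{1}{2},\frac{3}{2}](\sin ^{2}x_{1}+\sin ^{2}x_{2})\), it suffices to minimize the scalar factor over the feasible set; a brute-force Python sketch (our check, with \(s_{i}\) standing for \(\sin x_{i}\)):

```python
# Our numeric check of Example 4.1: on a grid of feasible points, the factor
# sin^2 x1 + sin^2 x2 is minimized where sin x1 = sin x2 = 1 - 1/(2*sqrt(2)).
import math

s_star = 1 - 1 / (2 * math.sqrt(2))
f_star = 2 * s_star**2  # value of sin^2 x1 + sin^2 x2 at x*

def feasible(s1, s2):
    # [1/2,3/2]*t preceq (1/4)[1/2,3/2] componentwise iff t <= 1/4
    return (s1 - 1)**2 + (s2 - 1)**2 <= 0.25

grid = [i / 200 for i in range(1, 200)]  # s = sin(x) ranges over (0, 1)
vals = [s1**2 + s2**2 for s1 in grid for s2 in grid if feasible(s1, s2)]
print(min(vals) >= f_star - 1e-9)  # True: no feasible point beats x*
```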

Theorem 4.2

Let \(x^{\ast }\) be P-feasible. Suppose that:

  1. (i)

    there exist η, \(\varPsi _{0}\), \(b_{0}\), \(\varPsi _{i}\), \(b_{i}\), \(i=1, 2 , \ldots ,m \), such that

    $$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)\varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t} \bigl(x,x ^{\ast }\bigr)\nabla F\bigl(x^{\ast }\bigr) \end{aligned}$$
    (15)

    and

    $$\begin{aligned}& -b_{i}\bigl(x,x^{\ast }\bigr)\varPsi _{i}\bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t}\bigl(x,x^{ \ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr) \end{aligned}$$
    (16)

    for all feasible x;

  2. (ii)

    there exists \(y^{\ast }\in R^{m}\) such that

    $$\begin{aligned}& \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{L}= \bigl\{ -y^{\ast t}\nabla g\bigl(x^{\ast }\bigr)\bigr\} ^{L}, \end{aligned}$$
    (17)
    $$\begin{aligned}& y^{\ast }\geq 0. \end{aligned}$$
    (18)

    Further, suppose that

    $$\begin{aligned}& \varPsi _{0}(\mu )\nprec 0 \quad \Rightarrow \quad \mu \nprec 0, \end{aligned}$$
    (19)
    $$\begin{aligned}& \mu \preceq 0 \quad \Rightarrow \quad \varPsi _{i}( \mu )\succeq 0, \end{aligned}$$
    (20)

    and

    $$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)> 0,\qquad b_{i}\bigl(x,x^{\ast }\bigr)\geq 0 \end{aligned}$$
    (21)

    for all feasible x. Then \(x^{\ast }\) is a nondominated solution of \((P)\).

Proof

Let x be P-feasible. Since \(x^{\ast }\) is also P-feasible,

$$\begin{aligned}& g\bigl(x^{\ast }\bigr)\preceq 0. \end{aligned}$$

From (20) we conclude that

$$\begin{aligned}& \varPsi _{i}\bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\succeq 0. \end{aligned}$$

From (15), (16) it follows that

$$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)\bigl\{ \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{L}\geq \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F\bigl(x^{\ast } \bigr)\bigr\} ^{L}, \\& b_{0}\bigl(x,x^{\ast }\bigr)\bigl\{ \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{U}\geq \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F\bigl(x^{\ast } \bigr)\bigr\} ^{U}, \end{aligned}$$

and

$$\begin{aligned}& b_{i}\bigl(x,x^{\ast }\bigr)\bigl\{ \varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{L}\leq \bigl\{ -\eta ^{t}\bigl(x,x ^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{L}, \\& b_{i}\bigl(x,x^{\ast }\bigr)\bigl\{ \varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{U}\leq \bigl\{ -\eta ^{t}\bigl(x,x ^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{U}. \end{aligned}$$

Since

$$\begin{aligned} \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F\bigl(x^{\ast } \bigr) =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl[\bigl\{ \nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L},\bigl\{ \nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{U}\bigr] \\ =&\textstyle\begin{cases} [\eta ^{t}(x,x^{\ast })\{\nabla F(x^{\ast })\}^{L},\eta ^{t}(x,x^{ \ast })\{\nabla F(x^{\ast })\}^{U}], &\eta ^{t}(x,x^{\ast })\geq 0, \\ [\eta ^{t}(x,x^{\ast })\{\nabla F(x^{\ast })\}^{U},\eta ^{t}(x,x^{ \ast })\{\nabla F(x^{\ast })\}^{L}], &\eta ^{t}(x,x^{\ast })< 0, \end{cases}\displaystyle \end{aligned}$$

and

$$\begin{aligned} -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr) =&-\eta ^{t}\bigl(x,x^{\ast }\bigr) \bigl[ \bigl\{ \nabla g_{i}\bigl(x^{\ast }\bigr)\bigr\} ^{L},\bigl\{ \nabla g_{i}\bigl(x^{\ast }\bigr)\bigr\} ^{U}\bigr] \\ =&\textstyle\begin{cases} [\eta ^{t}(x,x^{\ast })\{-\nabla g_{i}(x^{\ast })\}^{U},\eta ^{t}(x,x ^{\ast })\{-\nabla g_{i}(x^{\ast })\}^{L}], &\eta ^{t}(x,x^{\ast }) \geq 0, \\ [\eta ^{t}(x,x^{\ast })\{-\nabla g_{i}(x^{\ast })\}^{L},\eta ^{t}(x,x ^{\ast })\{-\nabla g_{i}(x^{\ast })\}^{U}], &\eta ^{t}(x,x^{\ast })< 0, \end{cases}\displaystyle \end{aligned}$$

we consider the following two cases.

Case (i)

$$\begin{aligned}& \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L}=\eta ^{t} \bigl(x,x^{\ast }\bigr) \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{L} \end{aligned}$$

and

$$\begin{aligned}& \bigl\{ -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{L}=\eta ^{t}\bigl(x,x ^{\ast }\bigr)\bigl\{ -\nabla g_{i}\bigl(x^{\ast }\bigr) \bigr\} ^{U} \end{aligned}$$

yield

$$\begin{aligned}& \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{U}=\eta ^{t} \bigl(x,x^{\ast }\bigr) \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{U} \end{aligned}$$

and

$$\begin{aligned}& \bigl\{ -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{U}=\eta ^{t}\bigl(x,x ^{\ast }\bigr)\bigl\{ -\nabla g_{i}\bigl(x^{\ast }\bigr) \bigr\} ^{L}. \end{aligned}$$

Thus

$$\begin{aligned} b_{0}\bigl(x,x^{\ast }\bigr)\bigl\{ \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{L} \geq & \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl\{ \nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl\{ -y^{\ast t} \nabla g\bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ \geq &\sum_{i=1} ^{m}b_{i} \bigl(x,x^{\ast }\bigr)y_{i}^{\ast }\bigl\{ \varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{L} \\ \geq &0. \end{aligned}$$

From (21) it follows that

$$\begin{aligned}& \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast } \bigr)\bigr]\succeq 0. \end{aligned}$$

Then

$$\begin{aligned}& F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\nprec 0, \end{aligned}$$

and thus

$$\begin{aligned}& F(x)\nprec F\bigl(x^{\ast }\bigr). \end{aligned}$$

Therefore \(x^{\ast }\) is a nondominated solution of \((P)\).

Case (ii)

$$\begin{aligned}& \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L}=\eta ^{t} \bigl(x,x^{\ast }\bigr) \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{U} \end{aligned}$$

and

$$\begin{aligned}& \bigl\{ -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{L}=\eta ^{t}\bigl(x,x ^{\ast }\bigr)\bigl\{ -\nabla g_{i}\bigl(x^{\ast }\bigr) \bigr\} ^{L} \end{aligned}$$

yield

$$\begin{aligned}& \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{U}=\eta ^{t} \bigl(x,x^{\ast }\bigr) \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{L} \end{aligned}$$

and

$$\begin{aligned}& \bigl\{ -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{U}=\eta ^{t}\bigl(x,x ^{\ast }\bigr)\bigl\{ -\nabla g_{i}\bigl(x^{\ast }\bigr) \bigr\} ^{U}. \end{aligned}$$

Thus

$$\begin{aligned} b_{0}\bigl(x,x^{\ast }\bigr)\bigl\{ \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{U} \geq & \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{U} \\ =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl\{ \nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl\{ -y^{\ast t} \nabla g\bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ \geq &\sum_{i=1} ^{m}b_{i} \bigl(x,x^{\ast }\bigr)y_{i}^{\ast }\bigl\{ \varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{L} \\ \geq &0. \end{aligned}$$

From (21) it follows that

$$\begin{aligned}& \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast } \bigr)\bigr]\nprec 0. \end{aligned}$$

Then

$$\begin{aligned}& F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\nprec 0, \end{aligned}$$

and thus

$$\begin{aligned}& F(x)\nprec F\bigl(x^{\ast }\bigr). \end{aligned}$$

Therefore \(x^{\ast }\) is a nondominated solution of \((P)\). □

Theorem 4.3

Let \(x^{\ast }\) be P-feasible. Suppose that:

  1. (i)

    there exist η, \(\varPsi _{0}\), \(b_{0}\), \(\varPsi _{i}\), \(b_{i}\), \(i=1, 2 , \ldots ,m \), such that

    $$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)\varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t} \bigl(x,x ^{\ast }\bigr)\nabla F\bigl(x^{\ast }\bigr) \end{aligned}$$
    (22)

    and

    $$\begin{aligned}& -b_{i}\bigl(x,x^{\ast }\bigr)\varPsi _{i}\bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t}\bigl(x,x^{ \ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr) \end{aligned}$$
    (23)

    for all feasible x;

  2. (ii)

    there exists \(y^{\ast }\in R^{m}\) such that

    $$\begin{aligned}& \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{U}= \bigl\{ -y^{\ast t}\nabla g\bigl(x^{\ast }\bigr)\bigr\} ^{U}, \end{aligned}$$
    (24)
    $$\begin{aligned}& y^{\ast }\geq 0. \end{aligned}$$
    (25)

    Further, suppose that

    $$\begin{aligned}& \varPsi _{0}(\mu )\nprec 0 \quad \Rightarrow\quad \mu \nprec 0, \end{aligned}$$
    (26)
    $$\begin{aligned}& \mu \preceq 0 \quad \Rightarrow\quad \varPsi _{i}( \mu )\succeq 0, \end{aligned}$$
    (27)

    and

    $$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)> 0,\qquad b_{i}\bigl(x,x^{\ast }\bigr)\geq 0 \end{aligned}$$
    (28)

    for all feasible x. Then \(x^{\ast }\) is a nondominated solution of \((P)\).

The following example shows the advantage of our method over the method of [22].

Example 4.2

$$\begin{aligned}& \min F(x)=[-1,1] \vert x \vert \\& \textit{s.t.}\quad g(x)=x-1\leq 0. \end{aligned}$$

Since \(F^{L}(x)=-|x|\) and \(F^{U}(x)=|x|\) are not differentiable at \(x=0\), \(F(x)\) is not weakly differentiable at \(x=0\). Therefore the method in [22] cannot be used.

Note that the objective function \(F(x)\) is gH-differentiable on R and that \(F^{\prime }(y)=[-1,1]\). Let

$$\begin{aligned} b_{0}(x,y)=\textstyle\begin{cases} 1, &x< y< 0 \text{ or } 0< x< y, \\ 0, &\text{otherwise}, \end{cases}\displaystyle \end{aligned}$$

let \(\varPsi _{0}[a,b]=[a,b]\) be the function induced by \(\varPhi _{0}(a)=a\), and let

$$\begin{aligned}& \eta (x,y)=\textstyle\begin{cases} x-y, &x< y< 0 \text{ or } 0< x< y, \\ 0, &\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$

Let \(b_{1}=1\), and let \(\varPsi _{1}\) be induced by \(\varPhi _{1}(a)=|a|\). The point \(x^{\ast }=1\) is a feasible solution. We can see that \((F,g)\) satisfies the hypotheses of Theorem 4.2. Therefore \(x^{\ast }=1\) is a nondominated solution.
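The nondominance of \(x^{\ast }=1\) asserted in this example can be checked numerically. The sketch below is our own illustration (the grid is an assumption, not part of the paper): it scans the feasible region for a point whose interval value strictly LU-dominates \(F(x^{\ast })=[-1,1]\).

```python
# Example 4.2: F(x) = [-1,1]|x| = [-|x|, |x|], feasible set {x : x - 1 <= 0}.
# x* = 1 is nondominated iff no feasible x gives F(x) strictly below
# F(x*) = [-1, 1] in the LU order (both endpoints <=, not identical).

def F(x):
    return (-abs(x), abs(x))

FL_star, FU_star = F(1.0)  # = (-1.0, 1.0)

dominated = any(
    F(x)[0] <= FL_star and F(x)[1] <= FU_star and F(x) != (FL_star, FU_star)
    for x in [i / 100.0 for i in range(-500, 101)]  # illustrative grid over [-5, 1]
)
print(dominated)  # → False: no feasible grid point dominates x* = 1
```

Indeed, \(F(x)\preceq F(1)\) forces \(|x|\geq 1\) and \(|x|\leq 1\) simultaneously, hence \(F(x)=F(1)\), so no strict domination is possible.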

The following example also shows the advantage of our method over the methods of [10] and [23, 28].

Example 4.3

$$\begin{aligned}& \min F(x)=[-2,1]x^{2},\quad x< 0, \\& \textit{s.t.}\quad g(x)=x+1\leq 0. \end{aligned}$$

Then \(F(x)\) is gH-differentiable and weakly differentiable. Since \(F(x)\) is not LU-convex, the methods of [10] cannot be used, and since \(F^{L}(x)+F^{U}(x)=-x^{2}\) is not convex, the methods of [23, 28] cannot be used.

Let

$$\begin{aligned}& b_{0}(x,y)=\textstyle\begin{cases} 1, & x\leq y< 0, \\ \frac{-2y(x-y)}{-x^{2}+y^{2}}, & y< x< 0, \end{cases}\displaystyle \quad \mbox{and} \quad \varPsi _{0}[a,b]=\textstyle\begin{cases} [a,b], & [a,b]\preceq 0, \\ \varPsi ([a,b]), &[a,b]\npreceq 0, \end{cases}\displaystyle \end{aligned}$$

where \(\varPsi ([a,b])\) is the function induced by \(\varPhi (a)=|a|\), and

$$\begin{aligned} \eta (x,y)=\textstyle\begin{cases} x-y, &x< y< 0 \text{ or } 0< x< y, \\ 0, &\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$

Let \(b_{1}(x,y)=1\) and \(\varPhi _{1}(a)=\varPhi (a)=|a|\). The point \(x^{\ast }=-1\) is a feasible solution. We can see that \((F,g)\) satisfies the hypotheses of Theorem 4.3, and therefore \(x^{\ast }=-1\) is a nondominated solution.
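As in the previous example, the claimed nondominance of \(x^{\ast }=-1\) can be verified numerically; the sketch below is our own illustration with an assumed grid, not part of the paper.

```python
# Example 4.3: since x^2 >= 0, F(x) = [-2,1]x^2 = [-2x^2, x^2];
# feasible set {x : x + 1 <= 0}, candidate x* = -1 with F(x*) = [-2, 1].

def F(x):
    return (-2.0 * x * x, x * x)

FL_star, FU_star = F(-1.0)  # = (-2.0, 1.0)

dominated = any(
    F(x)[0] <= FL_star and F(x)[1] <= FU_star and F(x) != (FL_star, FU_star)
    for x in [-1.0 - i / 100.0 for i in range(0, 401)]  # illustrative grid over [-5, -1]
)
print(dominated)  # → False: no feasible grid point dominates x* = -1
```

Here \(F(x)\preceq F(-1)\) would require \(x^{2}\geq 1\) (from the lower endpoints) and \(x^{2}\leq 1\) (from the upper endpoints), which forces \(x=-1\) itself, so strict domination cannot occur.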

5 Conclusion

The objective of this paper is to introduce the concept of gH-differentiable interval-valued univex mappings and to discuss the relationship between interval-valued univex and interval-valued weakly univex mappings. We derive sufficient optimality conditions for the constrained interval-valued minimization problem under interval-valued univexity. In future work, we hope to give sufficient optimality conditions for nondifferentiable interval-valued optimization problems under univexity hypotheses.

References

  1. Aghezzaf, B., Hachimi, M.: Generalized invexity and duality in multiobjective programming problems. J. Glob. Optim. 18, 91–101 (2000)


  2. Ahmad, I., Jayswal, A., Banerjee, J.: On interval-valued optimization problems with generalized invex functions. J. Inequal. Appl. 2013, 313 (2013)


  3. Antczak, T.: New optimality conditions and duality results of G-type in differentiable mathematical programming. Nonlinear Anal. 66, 1617–1632 (2007)


  4. Antczak, T.: On G-invex multiobjective programming. Part I. Optimality. J. Glob. Optim. 43, 97–109 (2009)


  5. Antczak, T.: On G-invex multiobjective programming. Part II. Duality. J. Glob. Optim. 43, 111–140 (2009)


  6. Antczak, T.: Exactness property of the exact absolute value penalty function method for solving convex nondifferentiable interval-valued optimization problems. J. Optim. Theory Appl. 176, 205–224 (2018)


  7. Azimov, A., Gasimov, R.N.: On weak conjugacy, weak subdifferentials and duality with zero gap in nonconvex optimization. Int. J. Appl. Math. 1, 171–192 (1999)


  8. Bector, C.R., Suneja, S.K., Gupta, S.: Univex functions and Univex nonlinear programming. In: Proceedings of the Administrative Sciences Association of Canada, pp. 115–124 (1992)


  9. Bhurjee, A.K., Panda, G.: Efficient solution of interval optimization problem. Math. Methods Oper. Res. 76, 273–288 (2012)


  10. Chalco-Cano, Y., Lodwick, W.A., Rufian-Lizana, A.: Optimality conditions of type KKT for optimization problem with interval-valued objective function via generalized derivative. Fuzzy Optim. Decis. Mak. 12, 305–322 (2013)


  11. Chalco-Cano, Y., Roman-Flores, H., Jimenez-Gamero, M.D.: Generalized derivative and pi-derivative for set-valued functions. Inf. Sci. 181, 2177–2188 (2011)


  12. Charnes, A., Granot, F., Phillips, F.: An algorithm for solving interval linear programming problems. Oper. Res. 25, 688–695 (1977)


  13. Cheraghia, P., Farajzadeh, A., Milovanovi, G.V.: Some notes on weak subdifferential. Filomat 31, 3407–3420 (2017)


  14. Craven, B.D.: Invex functions and constrained local minima. Bull. Aust. Math. Soc. 24, 357–366 (1981)


  15. Farajzadeh, A.P., Noor, M.A.: On dual invex Ky Fan inequalities. J. Optim. Theory Appl. 145, 407–413 (2010)


  16. Gulati, T.R., Ahmad, I., Agarwal, D.: Sufficiency and duality in multiobjective programming under generalized type I functions. J. Optim. Theory Appl. 135, 411–427 (2007)


  17. Hanson, M.A.: On sufficiency of the Kuhn–Tucker conditions. J. Math. Anal. Appl. 80, 545–550 (1981)


  18. Hanson, M.A., Mond, B.: Necessary and sufficient conditions in constrained optimization. Math. Program. 37, 51–58 (1987)


  19. Heriberto, R.F., Laécio, C.B., Rodney, C.B.: A note on Zadeh’s extensions. Fuzzy Sets Syst. 117, 327–331 (2001)


  20. Li, L.F., Liu, S.Y., Zhang, J.K.: Univex interval-valued mapping with differentiability and its application in nonlinear programming. J. Appl. Math. 2013, Article ID 383692 (2013)


  21. Li, L.F., Liu, S.Y., Zhang, J.K.: On interval-valued invex mappings and optimality conditions for interval-valued optimization problems. J. Inequal. Appl. 2015, 179 (2015)


  22. Li, L.F., Liu, S.Y., Zhang, J.K.: On fuzzy generalized convex mappings and optimality conditions for fuzzy weakly univex mappings. Fuzzy Sets Syst. 280, 107–132 (2015)


  23. Luhandjula, M.K., Rangoaga, M.J.: An approach for solving a fuzzy multiobjective programming problem. Eur. J. Oper. Res. 232, 249–255 (2014)


  24. Mishra, S.K., Wang, S.Y., Lai, K.K.: Nondifferentiable multiobjective programming under generalized d-univexity. Eur. J. Oper. Res. 160, 218–226 (2005)


  25. Mishra, S.K., Wang, S.Y., Lai, K.K., Shi, J.M.: Nondifferentiable minimax fractional programming under generalized univexity. J. Comput. Appl. Math. 158, 379–395 (2003)


  26. Noor, M.A.: Invex equilibrium problems. J. Math. Anal. Appl. 302, 463–475 (2005)


  27. Osuna-Gomez, R., Chalco-Cano, Y., Hernandez-Jimenez, B., Ruiz-Garzond, G.: Optimality conditions for generalized differentiable interval-valued functions. Inf. Sci. 321, 136–146 (2015)


  28. Panigrahi, M., et al.: Convex fuzzy mapping with differentiability and its application in fuzzy optimization. Eur. J. Oper. Res. 185, 47–62 (2007)


  29. Rueda, N.G., Hanson, M.A., Singh, C.: Optimality and duality with generalized convexity. J. Optim. Theory Appl. 86, 491–500 (1995)


  30. Singh, D., Dar, B.A., Kim, D.S.: KKT optimality conditions in interval valued multiobjective programming with generalized differentiable functions. Eur. J. Oper. Res. 254, 29–39 (2016)


  31. Stefanini, L.: Generalized Hukuhara differentiability of interval-valued functions and interval differential equations. Nonlinear Anal. 71, 1311–1328 (2009)


  32. Stefanini, L.: A generalization of Hukuhara difference for interval and fuzzy arithmetic. In: Dubois, D., Lubiano, M.A., et al. (eds.) Soft Methods for Handling Variability and Imprecision. Series on Advances in Soft Computing, vol. 48. Springer, Berlin (2008)


  33. Steuer, R.E.: Algorithms for linear programming problems with interval objective function coefficients. Math. Oper. Res. 6, 333–348 (1981)


  34. Tripathy, A.K.: Higher order duality in multiobjective fractional programming with square root term under generalized higher order \((F,\alpha ,\beta ,\rho ,\sigma ,d)-V\)-type I univex functions. Appl. Math. Comput. 247, 880–897 (2014)


  35. Tripathy, A.K., Devi, G.: Mixed type duality for nondifferentiable multiobjective fractional programming under generalized \((d,\rho , \eta ,\theta )\)-type I univex function. Appl. Math. Comput. 219, 9196–9201 (2013)


  36. Wu, H.C.: The Karush–Kuhn–Tucker optimality conditions in an optimization problem with interval-valued objective function. Eur. J. Oper. Res. 176, 46–59 (2007)


  37. Wu, H.C.: On interval-valued nonlinear programming problems. J. Math. Anal. Appl. 338, 299–316 (2008)


  38. Wu, H.C.: Wolfe duality for interval-valued optimization. J. Optim. Theory Appl. 138, 497–509 (2008)


  39. Zhang, J.K., Liu, S.Y., Li, L.L., Feng, Q.X.: The KKT optimality conditions in a class of generalized convex optimization problems with an interval-valued objective function. Optim. Lett. 8, 607–631 (2014)



Funding

This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 11401469, 11701446), the Natural Science Foundation of Shaanxi Province (2018JM1055), and the Shaanxi Key Disciplines Special Funds Projects.

Author information


Contributions

All authors have equal contributions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lifeng Li.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Li, L., Zhang, J. & Zhou, C. Optimality conditions for interval-valued univex programming. J Inequal Appl 2019, 49 (2019). https://doi.org/10.1186/s13660-019-2002-1



  • DOI: https://doi.org/10.1186/s13660-019-2002-1
