On robust approximate optimal solutions for fractional semi-infinite optimization with uncertainty data

Journal of Inequalities and Applications 2019, 2019:45

https://doi.org/10.1186/s13660-019-1997-7

  • Received: 11 December 2018
  • Accepted: 11 February 2019
  • Published:

Abstract

This paper provides some new results on robust approximate optimal solutions of a fractional semi-infinite optimization problem under data uncertainty in the constraint functions. By employing conjugate analysis and the robust optimization approach (worst-case approach), we obtain some necessary and sufficient optimality conditions for robust approximate optimal solutions of such a fractional semi-infinite optimization problem. In addition, we state a mixed type approximate dual problem to the reference problem and obtain some robust duality properties between them. The results obtained in this paper improve the corresponding results in the literature.

Keywords

  • Approximate optimal solutions
  • Mixed type duality
  • Fractional semi-infinite optimization

MSC

  • 49N15
  • 90C34
  • 90C46

1 Introduction

Let X be a locally convex vector space, and let T be a nonempty infinite index set. Let \(f : X \rightarrow \mathbb{R}\) be a continuous convex and nonnegative function, \(g : X \rightarrow \mathbb{R} \) be a continuous concave and positive function, and let \(h_{t}:X\rightarrow \mathbb{R}\), \(t\in T\), be continuous convex functions. Consider the following fractional optimization problem, which has an infinite number of inequality constraints:
$$ (\mbox{FP}) \quad \min_{x\in X} \biggl\{ \frac{f(x)}{g(x)}\Bigm| h_{t}(x)\leq 0, \forall t \in T \biggr\} . $$
Throughout this paper, we always assume that \(\mathcal{\overline{F}}:= \{x\in X:h_{t}(x)\leq 0, \forall t\in T \}\neq\emptyset \). This fractional optimization model has been recognized as a valuable modeling tool for many optimization problems arising from practical needs. Many papers have been devoted to fractional optimization problems in the absence of data uncertainty in the past years; see [1–10] and the references therein.

Recently, fractional optimization problems under data uncertainty have attracted a great deal of attention. Jeyakumar and Li [11] established robust duality results for a convex-concave fractional optimization problem in the face of data uncertainty in the constraints. Following the framework of robust optimization, Jeyakumar et al. [12] developed a duality theory for a minimax fractional optimization problem in the face of data uncertainty both in the objective and constraints. Sun and Chai [13] presented duality theory for fractional programming problems with uncertain cone constraints in locally convex vector spaces. Sun et al. [14] obtained some complete characterizations of robust optimal solutions of a fractional optimization problem in the face of data uncertainty both in the objective and constraints in terms of some robust type subdifferential constraint qualifications. Li et al. [15] obtained some necessary and sufficient optimality conditions for an uncertain minimax convex-concave fractional optimization problem under the robust subdifferentiable constraint qualification. They also obtained strong duality results between the robust counterpart of this uncertain optimization problem and the optimistic counterpart of its conventional Wolfe type and Mond–Weir type dual problems.

The above papers are mainly devoted to robust optimal solutions for fractional optimization problems with data uncertainty. It is well known that approximate solutions of optimization problems occur naturally, see, for example, [16–20]. However, to the best of our knowledge, no work deals with robust approximate optimal solutions for fractional semi-infinite optimization problems with data uncertainty, although some authors have investigated robust approximate optimal solutions for other kinds of uncertain optimization problems, see, for example, [21–24]. Thus, it is meaningful to consider robust approximate optimal solutions for fractional semi-infinite optimization problems with data uncertainty. To do this, let \(Z_{t}\), \(t\in T\), be locally convex vector spaces, \(h_{t}:X\times Z _{t}\rightarrow \mathbb{R}\), \(t\in T\), be continuous functions, and let \(v_{t}\in \mathcal{V}_{t}\) be the uncertain parameters which belong to the uncertainty set \(\mathcal{V}_{t}\subseteq Z_{t}\), \(t\in T\). The uncertain case of \((\mbox{FP})\) is given as follows:
$$ (\mbox{UFP}) \quad \min_{x\in X} \biggl\{ \frac{f(x)}{g(x)} \Bigm| h_{t}(x,v_{t})\leq 0, \forall t\in T \biggr\} . $$
The aim of this paper is to provide some approximate optimality conditions and duality results for the robust (worst-case) counterpart of \((\mbox{UFP})\), namely
$$ (\mbox{RUFP}) \quad \min_{x\in X} \biggl\{ \frac{f(x)}{g(x)} \Bigm| h_{t}(x,v_{t})\leq 0, \forall (t,v_{t})\in \operatorname{gph}\mathcal{V} \biggr\} , $$
where the uncertainty set-valued mapping \(\mathcal{V}:T\rightrightarrows Z_{t}\) is defined as \(\mathcal{V}(t):=\mathcal{V}_{t}\) for all \(t\in T\).

Our results are divided into two parts. In the first one, we deal with robust approximate optimal solutions for \((\mbox{UFP})\). We establish necessary and sufficient optimality conditions for robust approximate optimal solutions of \((\mbox{UFP})\) by using a robust type constraint qualification introduced in the literature. In particular, we give optimality conditions of robust approximate optimal solutions for convex semi-infinite optimization problems with data uncertainty. In the second part, we first propose a mixed type approximate dual problem of \((\mbox{UFP})\). Then we discuss robust approximate duality relationships between the robust counterpart of \((\mbox{UFP})\) and the optimistic counterpart of its conventional mixed type approximate dual problem. We also show that our results encompass as special cases some optimization problems considered recently in the literature.

The paper is organized as follows. In Sect. 2, we recall some notions and give some preliminary results. In Sect. 3, we obtain necessary and sufficient optimality conditions for robust approximate optimal solutions of (UFP). In Sect. 4, we investigate mixed type robust approximate duality theory for (UFP). In Sect. 5, we apply the proposed approach to investigate optimality conditions of robust approximate optimal solutions for a fractional optimization problem with uncertain cone constraints.

2 Preliminaries

In this section, we recall some notation and preliminary results which will be used in this paper, see [25]. Unless otherwise specified, all spaces under consideration are assumed to be locally convex vector spaces. The canonical pairing between the space X and its topological dual \(X ^{*}\) is denoted by \(\langle \cdot ,\cdot \rangle \). Let \(D\subseteq X^{*}\times \mathbb{R}\). The weak\(^{*}\) closure (resp. convex hull, convex cone hull) of D is denoted by \(\operatorname{cl}D\) (resp. \(\operatorname{co}D\), \(\operatorname{cone}D\)). Furthermore, for a nonempty set \(C\subseteq X\), the dual cone of C is defined by
$$ C^{*}= \bigl\{ x^{*}\in X^{*}\mid \bigl\langle x^{*},x\bigr\rangle \geq 0,\forall x\in C \bigr\} . $$
For the nonempty infinite index set T, consider the product space \(\mathbb{R}^{T}\) of multipliers \(\lambda =(\lambda _{t})_{t\in T}\) with \(\lambda _{t}\in \mathbb{R}\), and denote by \(\mathbb{R}^{(T)}\) the following linear space [26]:
$$ \mathbb{R}^{(T)}:=\bigl\{ \lambda =(\lambda _{t})_{t\in T} \mid \lambda _{t} =0 \mbox{ for all }t\in T \mbox{ except for finitely many }\lambda _{t} \neq0\bigr\} . $$
The nonnegative cone of \(\mathbb{R}^{(T)}\) is defined by
$$ \mathbb{R}^{(T)}_{+}:= \bigl\{ \lambda \in \mathbb{R}^{(T)}\mid \lambda _{t}\geq 0 , \forall t\in T \bigr\} . $$
Given \(u\in \mathbb{R}^{T}\) and \(\lambda \in \mathbb{R}^{(T)}\), and denoting \(T(\lambda ):=\{t\in T\mid \lambda _{t}\neq0\}\), we have
$$ \langle \lambda , u\rangle :=\sum_{t\in T}\lambda _{t} u_{t}= \sum_{t\in T(\lambda ) }\lambda _{t} u_{t}. $$
For an extended real-valued function \(f:X\rightarrow \mathbb{R}\cup \{+\infty \}\), we use the classical notations for the effective domain \(\operatorname{dom}f=\{x\in X\mid f(x)<+\infty \}\), the epigraph \(\operatorname{epi}{f}=\{(x,r)\in X\times \mathbb{R}\mid f(x)\leq r\}\), and the conjugate function \(f^{*}:X^{*}\rightarrow \mathbb{R}\cup \{+\infty \} \), \(f^{*}(x^{*})=\sup_{x\in X} \{\langle x^{*}, x\rangle -f(x) \}\). We say that f is proper iff its effective domain is nonempty, and that f is convex iff \(\operatorname{epi}f\) is a convex set. The function f is said to be concave whenever −f is convex. Moreover, we say that f is lower semicontinuous iff \(\operatorname{epi}f\) is closed. For any \(\varepsilon \geq 0\), the ε-subdifferential of f at \(\bar{x}\in \operatorname{dom}f\) is the convex set given by
$$ \partial _{\varepsilon } f(\bar{x})= \bigl\{ x^{*}\in X^{*} \mid f(x)- f( \bar{x})\geq \bigl\langle x^{*},x-\bar{x}\bigr\rangle - \varepsilon , \forall x \in X \bigr\} , $$
while if \(f(\bar{x})=+\infty \), we take by convention \(\partial _{\varepsilon } f(\bar{x})=\emptyset \). If \(\varepsilon = 0\), the set \(\partial f(\bar{x}) := \partial _{0} f(\bar{x})\) is the classical subdifferential of convex analysis, that is,
$$ \partial f(\bar{x})= \bigl\{ x^{*}\in X^{*}\mid f(x)- f( \bar{x})\geq \bigl\langle x^{*},x-\bar{x}\bigr\rangle , \forall x\in X \bigr\} . $$
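For intuition, take \(X=\mathbb{R}\) and \(f(x)=x^{2}\) at \(\bar{x}=0\): the inequality \(x^{2}\geq \xi x-\varepsilon \) for all x holds iff \(\xi ^{2}\leq 4\varepsilon \), so \(\partial _{\varepsilon }f(0)=[-2\sqrt{\varepsilon },2\sqrt{\varepsilon }]\). The following minimal numerical sketch of this membership test is purely illustrative (a finite grid check, not a proof):

```python
import numpy as np

def in_eps_subdiff(xi, eps, xbar=0.0, f=lambda x: x**2, grid=None):
    """Check the eps-subgradient inequality
    f(x) - f(xbar) >= xi*(x - xbar) - eps on a finite grid."""
    if grid is None:
        grid = np.linspace(-10.0, 10.0, 2001)
    lhs = f(grid) - f(xbar)
    rhs = xi * (grid - xbar) - eps
    return bool(np.all(lhs - rhs >= -1e-9))  # small tolerance for rounding

eps = 0.25                        # then 2*sqrt(eps) = 1.0
print(in_eps_subdiff(1.0, eps))   # boundary eps-subgradient: True
print(in_eps_subdiff(1.1, eps))   # just outside the interval: False
```

With \(\varepsilon =0.25\) the interval is \([-1,1]\), so \(\xi =1\) is accepted and \(\xi =1.1\) is rejected, matching the closed-form description above.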
On the other hand, for spaces X and Y, let \(h:X\rightarrow Y\) be a vector-valued function, and let \(K\subseteq Y\) be a nonempty closed convex cone which defines a partial order on Y. The function h is said to be K-convex iff, for any \(x, y\in X\) and \(\alpha \in [0,1]\),
$$ h\bigl(\alpha x+(1-\alpha )y\bigr)-\alpha h(x)-(1-\alpha )h(y)\in -K. $$
For each \(\lambda \in K^{*}\), the function \(\lambda h: X \rightarrow \mathbb{R}\) is defined by \((\lambda h)(x):=\langle \lambda , h(x)\rangle \) for any \(x\in X\). It is easy to see that h is K-convex if and only if λh is a convex function for each \(\lambda \in K^{*}\).

Now, let us recall the following results which will be used in the sequel.

Lemma 2.1

([27])

Let \(f: X\rightarrow \mathbb{R}\cup \{+\infty \}\) be a proper lower semicontinuous convex function, and let \(\bar{x}\in \operatorname{dom}f\). Then
$$\begin{aligned} \operatorname{epi} {f^{*}}=\bigcup_{\varepsilon \geq 0} \bigl\{ \bigl(\xi , \langle \xi ,\bar{x}\rangle +\varepsilon -f(\bar{x}) \bigr) \mid \xi \in \partial _{\varepsilon }f(\bar{x}) \bigr\} . \end{aligned}$$
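As a sanity check of Lemma 2.1 on a concrete instance, take \(f(x)=x^{2}\) and \(\bar{x}=0\), for which \(f^{*}(\xi )=\xi ^{2}/4\) in closed form. The smallest ε with \(\xi \in \partial _{\varepsilon }f(0)\) is \(\sup_{x}\{\xi x-x^{2}\}=\xi ^{2}/4\), so the pairs \((\xi ,\langle \xi ,0\rangle +\varepsilon -f(0))=(\xi ,\varepsilon )\) sweep out exactly \(\operatorname{epi}f^{*}\). A grid-based sketch (illustrative only, not part of any proof):

```python
import numpy as np

# Closed-form conjugate of f(x) = x**2.
f_star = lambda xi: xi**2 / 4.0

def min_eps(xi, xbar=0.0, grid=np.linspace(-10.0, 10.0, 4001)):
    """Smallest eps with xi in the eps-subdifferential of f(x)=x**2 at xbar:
    eps must dominate sup_x [xi*(x - xbar) - (f(x) - f(xbar))]."""
    return float(np.max(xi * (grid - xbar) - (grid**2 - xbar**2)))

# Lemma 2.1 at xbar = 0: the pair (xi, eps) hits the boundary of epi f*
# exactly when eps = min_eps(xi), which should coincide with f*(xi).
for xi in [0.0, 0.8, -1.6, 2.0]:
    assert abs(min_eps(xi) - f_star(xi)) < 1e-3
print("boundary of epi f* recovered from eps-subdifferentials")
```

The design point is that Lemma 2.1 trades a conjugate computation for a family of ε-subdifferential computations; for quadratics both sides are available explicitly, which makes the check exact up to grid resolution.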

Lemma 2.2

([28])

Let \(f: X\rightarrow \mathbb{R}\cup \{+\infty \}\) be a proper convex function, and let \(\alpha >0\). Then
$$\begin{aligned} \operatorname{epi} {(\alpha f)^{*}}= \alpha \operatorname{epi} {f^{*}}. \end{aligned}$$

Lemma 2.3

([28])

Let \(f_{1}, f_{2}: X\rightarrow \mathbb{R}\cup \{+\infty \}\) be proper convex functions such that \(\operatorname{dom} f_{1} \cap \operatorname{dom} f_{2} \neq\emptyset \).
  1. (i)
    If \(f_{1}\) and \(f_{2}\) are lower semicontinuous, then
    $$ \operatorname{epi} (f_{1}+f_{2})^{*} = \operatorname{cl}\bigl(\operatorname{epi} f^{*}_{1} + \operatorname{epi} f^{*}_{2} \bigr). $$
     
  2. (ii)
    If one of \(f_{1}\) and \(f_{2}\) is continuous at some \(\bar{x}\in \operatorname{dom} f_{1} \cap \operatorname{dom} f_{2} \), then
    $$ \operatorname{epi} (f_{1}+f_{2})^{*} = \operatorname{epi} f^{*}_{1} +\operatorname{epi} f^{*}_{2} . $$
     

3 Robust approximate optimality conditions

In this section, we investigate some optimality conditions for robust approximate optimal solutions of (UFP). First of all, let us recall some concepts which will be used in the sequel.

Definition 3.1

The robust feasible set of (UFP) is defined by
$$ \mathcal{F}:= \bigl\{ x\in X\mid h_{t}(x,v_{t})\leq 0, \forall v_{t}\in \mathcal{V}_{t}, t\in T \bigr\} . $$
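Computationally, membership in \(\mathcal{F}\) is a worst-case test: x is robust feasible iff \(\sup_{v_{t}\in \mathcal{V}_{t}}h_{t}(x,v_{t})\leq 0\) for every \(t\in T\). A toy sketch with a single constraint \(h(x,v)=vx-1\) and \(\mathcal{V}=[1,2]\) (hypothetical data; the uncertainty set is sampled on a grid), for which \(\mathcal{F}=(-\infty ,1/2]\):

```python
import numpy as np

def robust_feasible(x, h, V_grid):
    """Worst-case feasibility check: x is robust feasible iff
    the maximum of h(x, v) over the sampled uncertainty set is <= 0."""
    return bool(max(h(x, v) for v in V_grid) <= 1e-12)

# Toy uncertain constraint: h(x, v) = v*x - 1 with v in [1, 2].
h = lambda x, v: v * x - 1.0
V_grid = np.linspace(1.0, 2.0, 101)

print(robust_feasible(0.4, h, V_grid))   # worst case 2*0.4 - 1 = -0.2: True
print(robust_feasible(0.6, h, V_grid))   # worst case 2*0.6 - 1 =  0.2: False
```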

Definition 3.2

Let \(\varepsilon \geq 0\). We say that \(\bar{x}\in \mathcal{F}\) is a robust ε-optimal solution of (UFP) iff \(\bar{x}\in \mathcal{F}\) is an ε-optimal solution of (RUFP), i.e.,
$$ \frac{f(x)}{g(x)}\geq \frac{f(\bar{x})}{g(\bar{x})}-\varepsilon , \quad \forall x\in \mathcal{F}. $$

Remark 3.1

It is apparent that, if \(\varepsilon =0\), then the concept of robust ε-optimal solution coincides with the usual robust optimal solution for (UFP).

The following constraint qualification will play an important role in the study of (UFP).

Definition 3.3

([23])

We say that the robust type closed convex cone constraint qualification (RCQ) holds iff
$$ \bigcup_{v \in \mathcal{V}, \lambda \in \mathbb{R}_{+}^{(T)}}\operatorname{epi} \biggl( \sum _{t\in T}\lambda _{t} h_{t}(\cdot ,v_{t}) \biggr)^{*} \mbox{ is weak$^{*}$ closed and convex,} $$
where \(v\in \mathcal{V}\) means that v is a selection of \(\mathcal{V}\), i.e., \(v_{t}\in \mathcal{V}_{t}\) for all \(t\in T\).

The following result gives a robust version of Farkas lemma for uncertain infinite convex systems.

Lemma 3.1

([23])

Let \(\phi : X\rightarrow \mathbb{{R}}\) be a convex function, and let \(h_{t} : X\times Z_{t}\rightarrow \mathbb{R}\), \(t\in T\), be continuous functions such that, for any \(v_{t}\in Z_{t}\), \(h_{t}(\cdot ,v_{t})\) is a convex function. Let \(\mathcal{V}_{t} \subseteq Z_{t}\), \(t\in T\), be compact and let \(\mathcal{F}\neq \emptyset \). Then the following statements are equivalent:
  1. (i)

    \(\{x\in X \mid h_{t}(x,v_{t})\leq 0, \forall v_{t}\in \mathcal{V}_{t}, t\in T \}\subseteq \{x \in X\mid \phi (x )\geq 0 \}\).

     
  2. (ii)

    \((0,0)\in \operatorname{epi}\phi ^{*}+\operatorname{cl} \operatorname{co} (\bigcup_{v \in \mathcal{V}, \lambda \in \mathbb{R}_{+}^{(T)}} \operatorname{epi} ( \sum_{t\in T}\lambda _{t} h_{t}(\cdot ,v_{t}) ) ^{*} )\).

     
In order to give some optimality conditions for robust ε-optimal solutions of (UFP), by virtue of the parametric approach introduced in [1], we associate (RUFP) with the following optimization problem, with a parameter \(\mu \in \mathbb{R}_{+}\):
$$ (\mbox{RUFP})_{\mu } \quad \min_{x\in X} \bigl\{ f(x)-\mu g(x)\mid h_{t}(x,v_{t})\leq 0, \forall v _{t}\in \mathcal{V}_{t}, t\in T \bigr\} . $$

By using a method similar to that of [21], the following relation between the ε-optimal solutions of (RUFP) and \((\mbox{RUFP})_{\mu }\) is obtained.

Lemma 3.2

Let \(\bar{x}\in \mathcal{F}\) and \(\varepsilon \geq 0\). Let \(\bar{ \mu }:=\frac{f(\bar{x})}{g(\bar{x})}-\varepsilon \geq 0\). Then \(\bar{x}\in \mathcal{F}\) is a robust ε-optimal solution of (UFP) if and only if \(\bar{x}\in \mathcal{F}\) is an ε̄-optimal solution of \((\mathrm{RUFP})_{\bar{ \mu }}\), where \(\bar{\varepsilon }=\varepsilon g(\bar{x})\).
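Lemma 3.2 is an approximate analogue of the classical parametric (Dinkelbach-type) reduction: minimizing the ratio is traded for minimizing \(f-\bar{\mu }g\). A grid-based sketch on a hypothetical instance (\(f(x)=x^{2}+2\), \(g(x)=x+2\), feasible set \([0,3]\)): a grid minimizer x̄ of the ratio is in particular ε-optimal there, and the check below confirms the ε̄-optimality for \((\mathrm{RUFP})_{\bar{\mu }}\) asserted by the lemma.

```python
import numpy as np

# Hypothetical instance: f convex nonnegative, g concave positive on F = [0, 3].
f = lambda x: x**2 + 2.0
g = lambda x: x + 2.0
F = np.linspace(0.0, 3.0, 301)   # grid model of the feasible set

eps = 0.1
ratio = f(F) / g(F)
xbar = F[np.argmin(ratio)]            # exact grid minimizer, hence eps-optimal
mu_bar = f(xbar) / g(xbar) - eps      # parameter from Lemma 3.2
eps_bar = eps * g(xbar)               # approximation level of (RUFP)_mu_bar

# eps_bar-optimality of xbar for the parametric problem min f(x) - mu_bar*g(x):
param = f(F) - mu_bar * g(F)
assert np.all(param >= f(xbar) - mu_bar * g(xbar) - eps_bar - 1e-9)
print("parametric eps_bar-optimality holds")
```

Note that \(f(\bar{x})-\bar{\mu }g(\bar{x})=\varepsilon g(\bar{x})=\bar{\varepsilon }\), so the assertion reduces to \(f(x)/g(x)\geq \bar{\mu }\) on the grid, exactly the ε-optimality of x̄ for the ratio.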

Now, we are in a position to give some optimality conditions for robust ε-optimal solutions of (UFP) using Lemmas 3.1 and 3.2.

Theorem 3.1

Let \(\bar{x}\in \mathcal{F} \), \(\varepsilon \geq 0\), and \(\bar{\mu } =\frac{f( \bar{x} )}{ g(\bar{x})}-\varepsilon >0\). Let \(h_{t} : X\times Z_{t} \rightarrow \mathbb{{R}}\), \(t\in T\), be continuous functions such that, for any \(v_{t}\in \mathcal{V}_{t}\), \(h_{t}(\cdot ,v_{t})\) is a convex function. If (RCQ) holds, then \(\bar{x}\) is a robust ε-optimal solution of (UFP) if and only if there exist \((\bar{\lambda }_{t})_{t\in T}\in \mathbb{R}_{+}^{(T)}\), \(\bar{v}_{t}\in \mathcal{V}_{t}\), \(t\in T\), and \(\varepsilon _{0}^{\prime } \geq 0\), \(\varepsilon _{0}^{\prime \prime }\geq 0\), \(\varepsilon _{t}\geq 0\), \(t\in T\), such that
$$\begin{aligned} 0\in \partial _{\varepsilon _{0}^{\prime }} f(\bar{x} )+\bar{\mu } \partial _{\varepsilon _{0}^{\prime \prime }} (-g) (\bar{x} ) +\sum_{t\in T} \partial _{\varepsilon _{t}} \bigl(\bar{\lambda }_{t} h_{t} (\cdot , \bar{v}_{t}) \bigr) (\bar{x}) \end{aligned}$$
(1)
and
$$\begin{aligned} \varepsilon _{0}^{\prime }+\bar{\mu }\varepsilon _{0}^{\prime \prime }+\sum_{t\in T} \varepsilon _{t}-\varepsilon g(\bar{x})= \sum_{t\in T} \bar{\lambda } _{t}h_{t} (\bar{x},\bar{v}_{t}) . \end{aligned}$$
(2)

Proof

\((\Rightarrow )\): Let \(\bar{x}\) be a robust ε-optimal solution of (UFP). Then
$$ \frac{f(x)}{g(x)}\geq \frac{f(\bar{x})}{g(\bar{x})}-\varepsilon , \quad \forall x\in \mathcal{F}, $$
from which it follows that
$$ h_{t}(x,v_{t})\leq 0,\quad v_{t}\in \mathcal{V}_{t}, t\in T, x\in X\quad \Longrightarrow\quad \frac{f(x)}{g(x)}\geq \frac{f(\bar{x})}{g(\bar{x})}-\varepsilon . $$
For any \(x\in X\), set
$$ \phi (x):=f(x)- \biggl(\frac{f(\bar{x})}{g(\bar{x})}-\varepsilon \biggr)g(x)=f(x)-\bar{ \mu } g(x). $$
Then
$$ h_{t}(x,v_{t})\leq 0,\quad v_{t}\in \mathcal{V}_{t}, t\in T, x\in X\quad \Longrightarrow\quad \phi (x)\geq 0. $$
By Lemma 3.1, we have
$$ (0,0)\in \operatorname{epi}\phi ^{*}+\operatorname{cl} \operatorname{co} \biggl(\bigcup_{v \in \mathcal{V}, \lambda \in \mathbb{R}_{+}^{(T)}} \operatorname{epi} \biggl( \sum_{t\in T}\lambda _{t} h_{t}(\cdot ,v_{t}) \biggr)^{*} \biggr). $$
Since (RCQ) holds, one has
$$\begin{aligned} (0,0)\in \operatorname{epi}\phi ^{*}+ \bigcup _{v \in \mathcal{V}, \lambda \in \mathbb{R}_{+}^{(T)}} \operatorname{epi} \biggl( \sum _{t\in T}\lambda _{t} h_{t}(\cdot ,v_{t}) \biggr) ^{*} . \end{aligned}$$
(3)
By Lemmas 2.2 and 2.3, we obtain
$$\begin{aligned} \operatorname{epi}\phi ^{*}=\operatorname{epi}f^{*}+ \bar{\mu } \operatorname{epi}(-g)^{*} \end{aligned}$$
(4)
and
$$\begin{aligned} \operatorname{epi} \biggl( \sum_{t\in T} \lambda _{t} h_{t}(\cdot ,v_{t}) \biggr) ^{*} =\sum_{t\in T}\operatorname{epi} \bigl( \lambda _{t} h_{t}(\cdot ,v _{t}) \bigr)^{*} . \end{aligned}$$
(5)
Then, together with (3), (4), and (5), we obtain
$$\begin{aligned} (0,0)\in \operatorname{epi}f^{*}+\bar{\mu } \operatorname{epi}(-g)^{*}+ \bigcup_{v \in \mathcal{V}, \lambda \in \mathbb{R}_{+}^{(T)}} \biggl(\sum _{t\in T}\operatorname{epi} \bigl(\lambda _{t} h_{t}(\cdot ,v_{t}) \bigr) ^{*} \biggr) . \end{aligned}$$
So, there exist \((\bar{\lambda }_{t})_{t\in T}\in \mathbb{R}_{+}^{(T)}\) and \(\bar{v}_{t}\in \mathcal{V}_{t}\), \(t\in T\), such that
$$\begin{aligned} (0,0)\in \operatorname{epi}f^{*}+\bar{\mu } \operatorname{epi}(-g)^{*}+ \sum_{t\in T}\operatorname{epi} \bigl(\bar{\lambda }_{t} h_{t}(\cdot , \bar{v}_{t}) \bigr)^{*} . \end{aligned}$$
It follows that there exist \((\xi _{0}^{\prime },r_{0}^{\prime })\in \operatorname{epi}f^{*}\), \((\xi _{0}^{\prime \prime },r_{0}^{\prime \prime })\in \operatorname{epi}(-g)^{*}\), and \((\xi _{t} ,r _{t})\in \operatorname{epi} (\bar{\lambda }_{t} h_{t}(\cdot ,\bar{v} _{t}) )^{*}\) such that
$$\begin{aligned} (0,0 )= \biggl(\xi _{0}^{\prime }+\bar{\mu }\xi _{0}^{\prime \prime }+\sum_{t \in T}\xi _{t} ,r_{0}^{\prime }+\bar{\mu } r_{0}^{\prime \prime }+ \sum_{t\in T}r_{t} \biggr). \end{aligned}$$
(6)
Moreover, by Lemma 2.1, there exist \(\varepsilon _{0}^{\prime } \geq 0\), \(\varepsilon _{0}^{\prime \prime }\geq 0\), and \(\varepsilon _{t}\geq 0\), \(t\in T\), such that
$$\begin{aligned}& \xi _{0}^{\prime }\in \partial _{\varepsilon _{0}^{\prime }}f(\bar{x})\quad \mbox{and}\quad r _{0}^{\prime }=\bigl\langle \xi _{0}^{\prime }, \bar{x}\bigr\rangle +\varepsilon _{0}^{\prime }-f( \bar{x} ), \\& \xi _{0}^{\prime \prime }\in \partial _{\varepsilon _{0}^{\prime \prime }}(-g) (\bar{x})\quad \mbox{and}\quad r_{0}^{\prime \prime }=\bigl\langle \xi _{0}^{\prime \prime }, \bar{x}\bigr\rangle +\varepsilon _{0}^{\prime \prime }+g( \bar{x} ), \end{aligned}$$
and
$$ \xi _{t} \in \partial _{\varepsilon _{t}} \bigl(\bar{\lambda }_{t} h _{t}(\cdot ,\bar{v}_{t}) \bigr) ( \bar{x}), \quad \mbox{and}\quad r_{t}=\langle \xi _{t} ,\bar{x} \rangle +\varepsilon _{t}- \bar{\lambda }_{t} h_{t} ( \bar{x},\bar{v}_{t}). $$
It follows from (6) that
$$\begin{aligned} 0\in \partial _{\varepsilon _{0}^{\prime }} f(\bar{x} )+\bar{\mu } \partial _{\varepsilon _{0}^{\prime \prime }} (-g) (\bar{x} ) +\sum_{t\in T} \partial _{\varepsilon _{t}} \bigl(\bar{\lambda }_{t} h_{t} (\cdot , \bar{v}_{t}) \bigr) (\bar{x}) , \end{aligned}$$
(7)
and
$$\begin{aligned} 0 =& r_{0}^{\prime }+\bar{\mu } r_{0}^{\prime \prime }+ \sum_{t\in T}r_{t} \\ =& \biggl\langle \xi _{0}^{\prime }+\bar{\mu }\xi _{0}^{\prime \prime }+\sum_{t\in T}\xi _{t},\bar{x} \biggr\rangle +\varepsilon _{0}^{\prime }+ \bar{\mu }\varepsilon _{0}^{\prime \prime }+\sum _{t\in T}\varepsilon _{t}-f(\bar{x})+\bar{\mu }g( \bar{x})- \sum_{t\in T} \bar{\lambda }_{t} h_{t}(\bar{x},\bar{v}_{t}) \\ =& \varepsilon _{0}^{\prime }+\bar{\mu }\varepsilon _{0}^{\prime \prime }+\sum_{t\in T} \varepsilon _{t}-\varepsilon g(\bar{x})-\sum_{t\in T} \bar{\lambda } _{t} h_{t}(\bar{x},\bar{v}_{t}). \end{aligned}$$
Thus, (1) and (2) hold.
\((\Leftarrow )\): Suppose that there exist \((\bar{\lambda }_{t})_{t \in T}\in \mathbb{R}_{+}^{(T)}\), \(\bar{v}_{t}\in \mathcal{V}_{t}\), \(t\in T\), and \(\varepsilon _{0}^{\prime }\geq 0\), \(\varepsilon _{0}^{\prime \prime } \geq 0\), \(\varepsilon _{t}\geq 0\), \(t\in T\), such that (1) and (2) hold. By (1), there exist \(\xi _{0}^{\prime }\in \partial _{\varepsilon _{0}^{\prime }} f(\bar{x} )\), \(\xi _{0}^{\prime \prime }\in \partial _{\varepsilon _{0}^{\prime \prime }} (-g)(\bar{x} )\), and \(\xi _{t} \in \partial _{\varepsilon _{t}} (\bar{\lambda }_{t} h_{t}(\cdot , \bar{v}_{t}) )(\bar{x})\) such that
$$\begin{aligned} \xi _{0}^{\prime }+\bar{\mu }\xi _{0}^{\prime \prime }+\sum_{t\in T}\xi _{t} =0. \end{aligned}$$
(8)
Since \(\xi _{0}^{\prime }\in \partial _{\varepsilon _{0}^{\prime }} f(\bar{x} )\), \(\xi _{0}^{\prime \prime }\in \partial _{\varepsilon _{0}^{\prime \prime }} (-g)(\bar{x} )\), and \(\xi _{t} \in \partial _{\varepsilon _{t}} (\bar{\lambda }_{t} h _{t}(\cdot ,\bar{v}_{t}) )(\bar{x})\), we obtain that, for any \(x\in \mathcal{F}\),
$$\begin{aligned}& f(x )-f(\bar{x} )\geq \bigl\langle \xi _{0}^{\prime } ,x-\bar{x} \bigr\rangle - \varepsilon _{0}^{\prime }, \\& -g(x )+g(\bar{x} )\geq \bigl\langle \xi _{0}^{\prime \prime } ,x-\bar{x} \bigr\rangle - \varepsilon _{0}^{\prime \prime }, \end{aligned}$$
and
$$ \bar{\lambda }_{t} h_{t} (x,\bar{v}_{t})- \bar{ \lambda }_{t} h_{t} ( \bar{x},\bar{v}_{t})\geq \langle \xi _{t} ,x-\bar{x} \rangle - \varepsilon _{t}. $$
These imply that, for any \(x\in \mathcal{F}\),
$$\begin{aligned}& f(x )-f(\bar{x} )-\bar{\mu }g(x)+\bar{\mu }g(\bar{x})+\sum _{t\in T} \bar{\lambda }_{t} h_{t}(x, \bar{v}_{t})-\sum_{t\in T} \bar{\lambda } _{t} h_{t} (\bar{x},\bar{v}_{t}) \\& \quad \geq \biggl\langle \xi _{0}^{\prime }+\bar{\mu }\xi _{0}^{\prime \prime }+\sum_{t \in T}\xi _{t},x-\bar{x} \biggr\rangle -\varepsilon _{0}^{\prime }- \bar{ \mu }\varepsilon _{0}^{\prime \prime }-\sum _{t\in T}\varepsilon _{t}. \end{aligned}$$
Together with \(\bar{\lambda }_{t} h_{t}({x},\bar{v}_{t})\leq 0\) and (8), one has
$$\begin{aligned} f(x )-f(\bar{x} )-\bar{\mu }g(x)+\bar{\mu }g(\bar{x})-\sum _{t\in T} \bar{ \lambda }_{t} h_{t} (\bar{x},\bar{v}_{t}) \geq -\varepsilon _{0}^{\prime }- \bar{ \mu }\varepsilon _{0}^{\prime \prime }-\sum _{t\in T}\varepsilon _{t},\quad \forall x \in \mathcal{F}. \end{aligned}$$
(9)
From (2) and (9), one gets
$$\begin{aligned} f(x )-\bar{\mu }g(x) \geq f(\bar{x} )-\bar{\mu }g(\bar{x}) -\varepsilon g(\bar{x}),\quad \forall x\in \mathcal{F}. \end{aligned}$$
And so, \(\bar{x}\) is an ε̄-optimal solution of \(\mbox{(RUFP)}_{\bar{\mu }}\), where \(\bar{\varepsilon }=\varepsilon g( \bar{x})\). By Lemma 3.2, \(\bar{x}\) is a robust ε-optimal solution of (UFP), and the proof is complete. □

Remark 3.2

It is worth noticing that the robust approximate optimality conditions for (UFP) obtained in Theorem 3.1 have not, to the best of our knowledge, been considered in the literature. In [21, Theorems 3.2 and 3.3], the authors obtained similar results for a class of fractional optimization problems with a finite number of inequality constraints and a geometric constraint set. So, our results can be regarded as a generalization of the results obtained in [21].

In the special case when \(\mathcal{V}_{t}\), \(t\in T\), are singletons, we can easily obtain the following result.

Corollary 3.1

Let \(\bar{x}\in \mathcal{\overline{F}} \), \(\varepsilon \geq 0\), and \(\bar{\mu } =\frac{f(\bar{x} )}{ g(\bar{x})}-\varepsilon >0\). Let \(h_{t} : X \rightarrow \mathbb{{R}}\), \(t\in T\), be continuous convex functions. If \(\operatorname{cone} (\bigcup_{t\in T}\operatorname{epi} h _{t}^{*} )\) is weak\(^{*}\) closed, then \(\bar{x}\) is an ε-optimal solution of (FP) if and only if there exist \((\bar{\lambda }_{t})_{t\in T}\in \mathbb{R}_{+}^{(T)}\) and \(\varepsilon _{0}^{\prime }\geq 0\), \(\varepsilon _{0}^{\prime \prime }\geq 0\), \(\varepsilon _{t}\geq 0\), \(t\in T\), such that
$$\begin{aligned} 0\in \partial _{\varepsilon _{0}^{\prime }} f(\bar{x} )+\bar{\mu } \partial _{\varepsilon _{0}^{\prime \prime }} (-g) (\bar{x} ) +\sum_{t\in T}\bar{ \lambda }_{t}\partial _{\varepsilon _{t}} h_{t} (\bar{x}) \end{aligned}$$
and
$$\begin{aligned} \varepsilon _{0}^{\prime }+\bar{\mu }\varepsilon _{0}^{\prime \prime }+\sum_{t\in T} \varepsilon _{t}-\varepsilon g(\bar{x})= \sum_{t\in T} \bar{\lambda } _{t}h_{t} ( \bar{x}) . \end{aligned}$$

In the special case when \(\varepsilon =0\), we can get the following result which is a version of the robust optimality condition for nonsmooth fractional semi-infinite optimization problems.

Corollary 3.2

Let \(\bar{x}\in \mathcal{F} \) and \(\bar{\mu } =\frac{f(\bar{x} )}{ g( \bar{x})}>0\). Let \(h_{t} : X\times Z_{t}\rightarrow \mathbb{{R}}\), \(t\in T\), be continuous functions such that, for any \(v_{t}\in \mathcal{V}_{t}\), \(h_{t}(\cdot ,v_{t})\) is a convex function. If (RCQ) holds, then \(\bar{x}\) is a robust optimal solution of (UFP) if and only if there exist \((\bar{\lambda }_{t})_{t \in T}\in \mathbb{R}_{+}^{(T)}\) and \(\bar{v}_{t}\in \mathcal{V}_{t}\), \(t\in T\), such that
$$\begin{aligned} 0\in \partial f(\bar{x} )+\bar{\mu }\partial (-g) (\bar{x} ) +\sum _{t \in T}\partial \bigl(\bar{\lambda }_{t} h_{t} (\cdot ,\bar{v}_{t}) \bigr) ( \bar{x}) , \end{aligned}$$
and
$$\begin{aligned} h_{t} (\bar{x},\bar{v}_{t})=0,\quad \forall t\in T(\bar{\lambda }) . \end{aligned}$$

As a direct corollary of Theorem 3.1, we obtain optimality conditions for robust ε-optimal solutions of convex semi-infinite optimization problems with data uncertainty. Related results can be found in [23].

Theorem 3.2

Consider the following uncertain convex semi-infinite optimization problem:
$$ (\mathrm{UCP}) \quad \min_{x\in X} \bigl\{ {f(x)} : h_{t}(x,v_{t})\leq 0, \forall t\in T \bigr\} . $$
Let \(\bar{x}\in \mathcal{F} \) and \(\varepsilon \geq 0\). Suppose that \(f :X \rightarrow \mathbb{R}\) is a convex function, and let \(h_{t} : X\times Z_{t}\rightarrow \mathbb{{R}}\), \(t\in T\), be continuous functions such that, for any \(v_{t}\in \mathcal{V}_{t}\), \(h_{t}(\cdot ,v_{t})\) is a convex function. If (RCQ) holds, then \(\bar{x}\) is a robust ε-optimal solution of (UCP) if and only if there exist \((\bar{\lambda }_{t})_{t\in T}\in \mathbb{R}_{+}^{(T)}\), \(\bar{v}_{t}\in \mathcal{V}_{t}\), \(t\in T\), and \(\varepsilon _{0} \geq 0\), \(\varepsilon _{t}\geq 0\), \(t\in T\), such that
$$\begin{aligned} 0\in \partial _{\varepsilon _{0} } f(\bar{x} )+ \sum_{t\in T} \partial _{\varepsilon _{t}} \bigl( \bar{\lambda }_{t} h_{t} ( \cdot , \bar{v}_{t}) \bigr) (\bar{x}) \end{aligned}$$
and
$$\begin{aligned} \varepsilon _{0} +\sum_{t\in T}\varepsilon _{t}-\varepsilon = \sum_{t \in T}\bar{\lambda }_{t}h_{t} (\bar{x},\bar{v}_{t}) . \end{aligned}$$

Proof

Let \(g(x)\equiv 1\) for each \(x \in X\). Then the conclusion follows from Theorem 3.1. □
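To see the conditions of Theorem 3.2 on a concrete (hypothetical) instance, take \(X=\mathbb{R}\), \(f(x)=x^{2}\), and a single constraint \(h(x,v)=v-x\) with uncertainty set \(\mathcal{V}=[1,2]\). Then \(\mathcal{F}=[2,+\infty )\), and \(\bar{x}=2\) is the exact robust optimum, hence a robust ε-optimal solution for every \(\varepsilon \geq 0\). With \(\varepsilon =\frac{1}{2}\), the conditions hold with \(\bar{v}=2\), \(\bar{\lambda }=4\), \(\varepsilon _{0}=0\), and \(\varepsilon _{1}=\frac{1}{2}\): since \(\bar{\lambda }h(\cdot ,\bar{v})\) is affine,
$$ \partial _{\varepsilon _{1}} \bigl(\bar{\lambda } h (\cdot ,\bar{v}) \bigr) (\bar{x})=\{-4\}, \qquad \partial _{\varepsilon _{0}} f(\bar{x})=\partial f(2)=\{4\}, $$
so \(0\in \{4\}+\{-4\}\), and \(\varepsilon _{0}+\varepsilon _{1}-\varepsilon =0=\bar{\lambda } h(\bar{x},\bar{v})\).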

Similarly, we obtain the following result for nonsmooth convex semi-infinite optimization problems.

Corollary 3.3

Let \(\bar{x}\in \mathcal{F} \), and let \(h_{t} : X\times Z_{t}\rightarrow \mathbb{{R}}\), \(t\in T\), be continuous functions such that, for any \(v_{t}\in \mathcal{V}_{t}\), \(h_{t}(\cdot ,v_{t})\) is a convex function. If (RCQ) holds, then \(\bar{x}\) is a robust optimal solution of (UCP) if and only if there exist \((\bar{\lambda }_{t})_{t \in T}\in \mathbb{R}_{+}^{(T)}\) and \(\bar{v}_{t}\in \mathcal{V}_{t}\), \(t\in T\), such that
$$\begin{aligned} 0\in \partial f(\bar{x} ) +\sum_{t\in T}\partial \bigl( \bar{\lambda }_{t} h_{t} (\cdot ,\bar{v}_{t}) \bigr) (\bar{x}) \end{aligned}$$
and
$$\begin{aligned} h_{t} (\bar{x},\bar{v}_{t})=0,\quad \forall t\in T(\bar{\lambda }) . \end{aligned}$$

4 Mixed type approximate duality results

In this section, we first introduce a mixed type robust dual problem for (UFP), and then discuss the robust approximate weak and strong duality properties.

Let \(y\in X\), \(\lambda :=({\lambda }_{t})_{t}\in \mathbb{R}_{+}^{(T)}\), \(\beta :=(\beta _{t})_{t}\in \mathbb{R}_{+}^{(T)}\), \(\mu \geq 0\), and \(\varepsilon \geq 0\). For fixed \(v_{t}\in \mathcal{V}_{t}\), \(t\in T\), the conventional mixed type dual problem of (UFP) is given by
$$\begin{aligned} (\mathrm{MD }) \quad \textstyle\begin{cases} \max \mu \\ \mbox{s.t.}\quad 0\in \partial _{\varepsilon _{0}^{\prime }} f(y )+ {\mu } \partial _{\varepsilon _{0}^{\prime \prime }} (-g)(y) +\sum_{t\in T} \partial _{\varepsilon _{t}} ( ({\lambda }_{t}+\beta _{t}) h_{t} ( \cdot , {v}_{t}) )(y) , \\ \hphantom{\mbox{s.t.}\quad} f(y)- \mu g(y)+\sum_{t\in T} {\lambda }_{t}h_{t} (y, {v}_{t}) \geq \varepsilon g(y), \\ \hphantom{\mbox{s.t.}\quad} \varepsilon _{0}^{\prime }+ {\mu }\varepsilon _{0}^{\prime \prime }+\sum_{t\in T} \varepsilon _{t}-\varepsilon g(y)\leq \sum_{t\in T} {\beta } _{t}h_{t} (y, {v}_{t}), \\ \hphantom{\mbox{s.t.}\quad} \mu \geq 0, {\lambda }_{t} \geq 0 , {\beta }_{t} \geq 0, \varepsilon _{0}^{\prime }\geq 0 , \varepsilon _{0}^{\prime \prime }\geq 0 , \varepsilon _{t}\geq 0, t \in T. \end{cases}\displaystyle \end{aligned}$$
The optimistic counterpart of (MD), called the optimistic dual problem, is the deterministic maximization problem given by
$$\begin{aligned} (\mathrm{OMD }) \quad \textstyle\begin{cases} \max \mu \\ \mbox{s.t.}\quad 0\in \partial _{\varepsilon _{0}^{\prime }} f(y )+ {\mu } \partial _{\varepsilon _{0}^{\prime \prime }} (-g)(y) +\sum_{t\in T} \partial _{\varepsilon _{t}} ( ({\lambda }_{t}+\beta _{t}) h_{t} ( \cdot , {v}_{t}) )(y) , \\ \hphantom{\mbox{s.t.}\quad} f(y)- \mu g(y)+\sum_{t\in T} {\lambda }_{t}h_{t} (y, {v}_{t}) \geq \varepsilon g(y), \\ \hphantom{\mbox{s.t.}\quad} \varepsilon _{0}^{\prime }+ {\mu }\varepsilon _{0}^{\prime \prime }+\sum_{t\in T} \varepsilon _{t}-\varepsilon g(y)\leq \sum_{t\in T} {\beta } _{t}h_{t} (y, {v}_{t}), \\ \hphantom{\mbox{s.t.}\quad} \mu \geq 0, {\lambda }_{t} \geq 0 , {\beta }_{t} \geq 0, v_{t} \in \mathcal{V}_{t}, \varepsilon _{0}^{\prime }\geq 0 , \varepsilon _{0}^{\prime \prime } \geq 0 , \varepsilon _{t}\geq 0, t\in T. \end{cases}\displaystyle \end{aligned}$$

Remark 4.1

  1. (i)
    In the special case that \(\varepsilon =0\) and there is no uncertainty in the constraint functions, (UFP) becomes (FP) and (OMD) collapses to
    $$\begin{aligned} \textstyle\begin{cases} \max \mu \\ \mbox{s.t.}\quad 0\in \partial f(y )+ {\mu }\partial (-g)(y) +\sum_{t\in T}\partial ( ({\lambda }_{t}+{\beta }_{t}) h_{t} (\cdot , {v}_{t}) )(y) ,\\ \hphantom{\mbox{s.t.}\quad} f(y)-\mu g(y)+\sum_{t\in T}{\lambda }_{t}h_{t} (y, {v}_{t})\geq 0, {\beta }_{t}h_{t} (y, {v}_{t})\geq 0 ,\\ \hphantom{\mbox{s.t.}\quad} \mu \geq 0, {\lambda }_{t} \geq 0 , {\beta }_{t} \geq 0 , {v}_{t}\in \mathcal{V}_{t} , t\in T. \end{cases}\displaystyle \end{aligned}$$
     
  2. (ii)
    In the special case that \(\varepsilon =0\), and the objective functions and the constraint functions are continuously differentiable, (OMD) collapses to
    $$\begin{aligned} \textstyle\begin{cases} \max \mu \\ \mbox{s.t.}\quad \nabla f(y )- {\mu }\nabla g(y) +\sum_{t\in T} ({\lambda }_{t}+{\beta }_{t})\nabla h_{t} (y, {v}_{t})=0 ,\\ \hphantom{\mbox{s.t.}\quad} f(y)-\mu g(y)+\sum_{t\in T}{\lambda }_{t}h_{t} (y, {v}_{t})\geq 0 ,{\beta }_{t}h_{t} (y, {v}_{t})\geq 0,\\ \hphantom{\mbox{s.t.}\quad} \mu \geq 0, {\lambda }_{t} \geq 0 , {\beta }_{t} \geq 0, {v}_{t}\in \mathcal{V}_{t} , t\in T. \end{cases}\displaystyle \end{aligned}$$
     
  3. (iii)
    Obviously, if \(\varepsilon =0\) and \(\lambda =0\), (OMD) collapses to the Mond–Weir type optimistic dual problem as follows:
    $$\begin{aligned} \textstyle\begin{cases} \max \mu \\ \mbox{s.t.}\quad 0\in \partial f(y )+ {\mu }\partial (-g)(y) +\sum_{t\in T}\partial ( \beta _{t} h_{t} (\cdot , {v}_{t}) )(y) ,\\ \hphantom{\mbox{s.t.}\quad} f(y)- \mu g(y) \geq 0, {\beta }_{t}h_{t} (y, {v}_{t})\geq 0,\\ \hphantom{\mbox{s.t.}\quad} \mu \geq 0, {\beta }_{t} \geq 0, v_{t} \in \mathcal{V}_{t}, t\in T. \end{cases}\displaystyle \end{aligned}$$
    And if \(\varepsilon =0\) and \(\beta =0\), (OMD) collapses to the Wolfe type optimistic dual problem as follows:
    $$\begin{aligned} \textstyle\begin{cases} \max \mu \\ \mbox{s.t.}\quad 0\in \partial f(y )+ {\mu }\partial (-g)(y) +\sum_{t\in T}\partial ( {\lambda }_{t} h_{t} (\cdot , {v}_{t}) )(y) ,\\ \hphantom{\mbox{s.t.}\quad} f(y)- \mu g(y)+\sum_{t\in T} {\lambda }_{t}h_{t} (y, {v}_{t})\geq 0,\\ \hphantom{\mbox{s.t.}\quad} \mu \geq 0, {\lambda }_{t} \geq 0 , v_{t} \in \mathcal{V}_{t}, t\in T. \end{cases}\displaystyle \end{aligned}$$
     

Let us denote by \(\mathcal{F}^{\mathrm{(OMD)}}\) the feasible set of (OMD). Now, we give some robust ε-weak and ε-strong duality properties.

Theorem 4.1

(Mixed type robust ε-weak duality)

Let \(\varepsilon \geq 0\). For any feasible x of (RUFP) and any feasible \((y,\lambda ,\beta , v,\mu )\) of (OMD), we have
$$ \frac{f( {x})}{g(x)}\geq \mu -\varepsilon . $$

Proof

Since \((y,\lambda ,\beta ,v,\mu )\) is a feasible solution of (OMD), we have \(\mu \geq 0\), \({\lambda }_{t} \geq 0\), \({\beta }_{t} \geq 0\), \({v}_{t}\in \mathcal{V}_{t}\), \(\varepsilon _{0} ^{\prime }\geq 0\), \(\varepsilon _{0}^{\prime \prime }\geq 0 \), \(\varepsilon _{t}\geq 0\), \(t\in T\), and
$$\begin{aligned}& 0\in \partial _{\varepsilon _{0}^{\prime }} f(y )+ {\mu } \partial _{\varepsilon _{0}^{\prime \prime }} (-g) (y) +\sum_{t\in T} \partial _{\varepsilon _{t}} \bigl( ({\lambda }_{t}+{\beta }_{t}) h_{t} ( \cdot , {v}_{t}) \bigr) (y) , \end{aligned}$$
(10)
$$\begin{aligned}& f(y)- \mu g(y)+\sum_{t\in T} { \lambda }_{t}h_{t} (y, {v}_{t})\geq \varepsilon g(y), \end{aligned}$$
(11)
and
$$\begin{aligned} \varepsilon _{0}^{\prime }+ {\mu }\varepsilon _{0}^{\prime \prime }+\sum_{t\in T} \varepsilon _{t}-\varepsilon g(y)\leq \sum_{t\in T} { \beta }_{t}h_{t} (y, {v}_{t}). \end{aligned}$$
(12)
By (10), there exist \(\xi _{0}^{\prime }\in \partial _{\varepsilon _{0}^{\prime }} f(y)\), \(\xi _{0}^{\prime \prime }\in \partial _{\varepsilon _{0}^{\prime \prime }} (-g)(y )\), and \(\xi _{t} \in \partial _{\varepsilon _{t}} ( ({\lambda }_{t}+{\beta }_{t}) h_{t} ( \cdot , {v}_{t}) )(y)\) such that
$$\begin{aligned} \xi _{0}^{\prime }+ {\mu }\xi _{0}^{\prime \prime }+ \sum_{t\in T}\xi _{t} =0. \end{aligned}$$
(13)
Note that for any \(x\in \mathcal{F}\), one has \((\lambda _{t}+\beta _{t})h _{t}(x,v_{t})\leq 0\) and \(g(x)>0\). These, together with (11), (12), and (13), imply
$$\begin{aligned}& f(x )-\mu g(x)+\varepsilon g(x) \\& \quad \geq f(y)+ \bigl\langle \xi _{0}^{\prime } ,x-y \bigr\rangle - \varepsilon _{0}^{\prime } -\mu g(y)+\mu \bigl\langle \xi _{0}^{\prime \prime } ,x-y \bigr\rangle - \mu \varepsilon _{0}^{\prime \prime }+\varepsilon g(x) \\& \quad = f(y)-\mu g(y)- \biggl\langle \sum_{t\in T}\xi _{t} ,x-y \biggr\rangle - \varepsilon _{0}^{\prime }- \mu \varepsilon _{0}^{\prime \prime }+\varepsilon g(x) \\& \quad \geq f(y)-\mu g(y)- \sum_{t\in T} (\lambda _{t}+\beta _{t})h_{t}(x,v _{t})\\& \qquad{}+\sum _{t\in T} (\lambda _{t}+\beta _{t})h_{t}(y,v_{t})-\sum _{t \in T}\varepsilon _{t}-\varepsilon _{0}^{\prime }-\mu \varepsilon _{0}^{\prime \prime }+ \varepsilon g(x) \\& \quad \geq f(y)-\mu g(y) +\sum_{t\in T} \lambda _{t}h_{t}(y,v_{t})+ \sum _{t\in T} \beta _{t}h_{t}(y,v_{t})- \sum_{t\in T}\varepsilon _{t}- \varepsilon _{0}^{\prime }-\mu \varepsilon _{0}^{\prime \prime } \\& \quad \geq f(y)-\mu g(y)+\sum_{t\in T} \lambda _{t}h_{t}(y,v_{t})-\varepsilon g(y) \\& \quad \geq 0. \end{aligned}$$
Thus,
$$ \frac{f( {x})}{g(x)}\geq \mu -\varepsilon . $$
This completes the proof. □
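To make the bound concrete, the following self-contained Python sketch checks the weak duality inequality on a toy instance of our own choosing (not from the paper): \(X=\mathbb{R}\), \(f(x)=x^{2}+1\), \(g\equiv 2\), a single constraint \(h(x,v)=v-x\) with \(\mathcal{V}=[1,2]\), and \(\varepsilon =0\).

```python
# Numerical sanity check of the mixed type robust weak duality bound
# f(x)/g(x) >= mu - eps on a toy instance (our own illustrative choice):
# f(x) = x^2 + 1 (convex, nonnegative), g(x) = 2 (concave, positive),
# one constraint h(x, v) = v - x with uncertainty set V = [1, 2].
f = lambda x: x**2 + 1.0
g = lambda x: 2.0
h = lambda x, v: v - x

V = [1.0 + 0.02*i for i in range(51)]    # sampled uncertainty set [1, 2]
xs = [2.0 + 0.02*i for i in range(201)]  # robust feasible set is {x >= 2}
assert all(h(x, v) <= 0 for x in xs for v in V)

# A feasible point of (OMD) with eps = 0 and all approximation errors zero:
# y = 2, v = 2, lam = 4, beta = 0, mu = 5/2.  All functions are
# differentiable here, so the subdifferential condition reads
# 2y - (lam + beta) = 0.
y, v, lam, beta, mu, eps = 2.0, 2.0, 4.0, 0.0, 2.5, 0.0
assert 2*y - (lam + beta) == 0.0                  # stationarity
assert f(y) - mu*g(y) + lam*h(y, v) >= eps*g(y)   # second dual constraint
assert beta*h(y, v) >= -eps*g(y)                  # third dual constraint

# Weak duality: every robust feasible x satisfies f(x)/g(x) >= mu - eps;
# the gap vanishes at x = 2, so the bound is tight on this instance.
gap = min(f(x) / g(x) for x in xs) - (mu - eps)
print(round(gap, 6))  # 0.0
```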

Theorem 4.2

(Mixed type robust ε-strong duality)

Let \(\bar{x} \in \mathcal{F} \) and \(\varepsilon \geq 0\). Let \(h_{t} : X\times Z _{t}\rightarrow \mathbb{R}\), \(t\in T\), be continuous functions such that, for any \(v_{t}\in \mathcal{V}_{t}\), \(h_{t}(\cdot ,v_{t})\) is a convex function. Assume that (RCQ) holds. If \(\bar{x}\) is a robust ε-optimal solution of (UFP) and \(\frac{f(\bar{x} )}{ g(\bar{x})}-\varepsilon >0\), then there exist \((\bar{\lambda }_{t})_{t\in T}\in \mathbb{R}_{+}^{(T)}\), \(\bar{v}_{t} \in \mathcal{V}_{t}\), \(t\in T\), and \(\bar{\mu } \geq 0\) such that \((\bar{x},0,\bar{\lambda },\bar{v},\bar{\mu } )\) is a robust 2ε-optimal solution of (MD).

Proof

Suppose that \(\bar{x}\in \mathcal{F}\) is a robust ε-optimal solution of (UFP). Let \(\bar{\mu }:=\frac{f(\bar{x} )}{ g( \bar{x})}-\varepsilon >0\). Then
$$\begin{aligned} f(\bar{x})-\bar{\mu } g(\bar{x})=\varepsilon g(\bar{x}). \end{aligned}$$
(14)
Moreover, by Theorem 3.1, there exist \((\bar{\lambda } _{t})_{t\in T}\in \mathbb{R}_{+}^{(T)}\), \(\bar{v}_{t}\in \mathcal{V} _{t}\), \(t\in T\), and \(\varepsilon _{0}^{\prime }\geq 0\), \(\varepsilon _{0} ^{\prime \prime }\geq 0\), \(\varepsilon _{t}\geq 0\), \(t\in T\), such that
$$\begin{aligned} 0\in \partial _{\varepsilon _{0}^{\prime }} f(\bar{x} )+\bar{\mu } \partial _{\varepsilon _{0}^{\prime \prime }} (-g) (\bar{x} ) +\sum_{t\in T} \partial _{\varepsilon _{t}} \bigl(\bar{\lambda }_{t} h_{t} (\cdot , \bar{v}_{t}) \bigr) (\bar{x}) \end{aligned}$$
(15)
and
$$\begin{aligned} \varepsilon _{0}^{\prime }+\bar{\mu }\varepsilon _{0}^{\prime \prime }+\sum_{t\in T} \varepsilon _{t}-\varepsilon g(\bar{x})= \sum_{t\in T} \bar{\lambda } _{t}h_{t} (\bar{x},\bar{v}_{t}) . \end{aligned}$$
(16)
From (14), (15), and (16), we can deduce that \((\bar{x},0,\bar{ \lambda },\bar{v},\bar{\mu } )\) is a feasible solution of (OMD). Then, for any feasible solution \((y,\lambda ,\beta ,v,\mu )\) of (OMD),
$$\begin{aligned} \bar{\mu } -\mu =\frac{f(\bar{x})}{g(\bar{x})}-\varepsilon -\mu \geq \mu -\varepsilon - \varepsilon -\mu =-2\varepsilon , \end{aligned}$$
where the inequality follows from the mixed type robust ε-weak duality (Theorem 4.1). Thus, \((\bar{x},0,\bar{\lambda }, \bar{v},\bar{\mu } )\) is a robust 2ε-optimal solution of (MD). The proof is complete. □

Remark 4.2

In the special case when \(\varepsilon =0\) and/or \(\mathcal{V}_{t}\), \(t\in T\), are singletons, some similar results concerning the classical Wolfe type duality for smooth optimization problems have been established in [13, 28] based on different kinds of constraint qualifications.

Finally, in this section, we consider a special case of (UFP) with \(g(x)\equiv 1\). In this case, (UFP) becomes the uncertain convex semi-infinite optimization problem (UCP), and (OMD) reduces to the following optimization problem:
$$\begin{aligned} (\mathrm{OMD })_{1} \quad \textstyle\begin{cases} \max f(y) +\sum_{t\in T} {\lambda }_{t}h_{t} (y, {v}_{t}) \\ \mbox{s.t.}\quad 0\in \partial _{\varepsilon _{0} } f(y ) +\sum_{t\in T} \partial _{\varepsilon _{t}} ( ({\lambda }_{t}+\beta _{t}) h_{t} ( \cdot , {v}_{t}) )(y) , \\ \hphantom{\mbox{s.t.}\quad} \varepsilon _{0} +\sum_{t\in T}\varepsilon _{t}-\varepsilon \leq \sum_{t\in T} {\beta }_{t}h_{t} (y, {v}_{t}), \\ \hphantom{\mbox{s.t.}\quad} {\lambda }_{t} \geq 0 , {\beta }_{t} \geq 0, v_{t} \in \mathcal{V} _{t}, \varepsilon _{0} \geq 0 , \varepsilon _{t}\geq 0, t\in T. \end{cases}\displaystyle \end{aligned}$$

Remark 4.3

Note that if \(\beta _{t}=0\) for all \(t\in T\), then \((\mathrm{OMD })_{1}\) becomes the Wolfe type dual problem introduced in [23]. Thus, \((\mathrm{OMD })_{1}\) can be seen as a mixed type dual problem for the uncertain convex semi-infinite optimization problem (UCP).

Similarly, we obtain the following robust approximate weak and strong duality properties which generalize the corresponding results obtained in [23].

Theorem 4.3

Let \(\varepsilon \geq 0\). For any feasible x of (RUCP) and any feasible \((y,\lambda ,\beta , v )\) of \((\mathrm{OMD})_{1}\), we have
$$ f( {x}) \geq f(y) +\sum_{t\in T} {\lambda }_{t}h_{t} (y, {v} _{t})-\varepsilon . $$
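As a sanity check of this bound in the case \(g\equiv 1\), the following Python sketch uses a toy instance of our own choosing (not from the paper): \(f(x)=x^{2}\), one constraint \(h(x,v)=v-x\) with \(\mathcal{V}=[1,2]\), and \(\varepsilon =0\).

```python
# Toy check of the bound f(x) >= f(y) + sum_t lam_t*h_t(y, v_t) - eps for the
# convex case g(x) = 1; the instance below is our own illustrative choice:
# f(x) = x^2, one constraint h(x, v) = v - x, uncertainty set V = [1, 2].
f = lambda x: x**2
h = lambda x, v: v - x

V = [1.0 + 0.02*i for i in range(51)]    # sampled uncertainty set [1, 2]
xs = [2.0 + 0.02*i for i in range(201)]  # robust feasible set is {x >= 2}
assert all(h(x, v) <= 0 for x in xs for v in V)

# A feasible point of (OMD)_1 with eps = 0 and all approximation errors zero:
# y = 2, v = 2, lam = 4, beta = 0 gives 2y - (lam + beta) = 0 and
# beta*h(y, v) >= eps_0 + sum_t eps_t - eps = 0.
y, v, lam, beta, eps = 2.0, 2.0, 4.0, 0.0, 0.0
assert 2*y - (lam + beta) == 0.0
assert beta*h(y, v) >= -eps

dual_value = f(y) + lam*h(y, v)          # dual objective value at this point
assert all(f(x) >= dual_value - eps for x in xs)
print(dual_value)  # 4.0, attained by the primal at x = 2
```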

Theorem 4.4

Let \(\bar{x}\in \mathcal{F} \) and \(\varepsilon \geq 0\). Let \(h_{t} : X\times Z_{t}\rightarrow \mathbb{R}\), \(t\in T\), be continuous functions such that, for any \(v_{t}\in \mathcal{V}_{t}\), \(h_{t}( \cdot ,v_{t})\) is a convex function. Assume that (RCQ) holds. If \(\bar{x}\) is a robust ε-optimal solution of (UCP), then there exist \((\bar{\lambda }_{t})_{t\in T} \in \mathbb{R}_{+}^{(T)}\), \(\bar{v}_{t}\in \mathcal{V}_{t}\), \(t\in T\), such that \((\bar{x},0,\bar{\lambda },\bar{v} )\) is a 2ε-optimal solution of \((\mathrm{OMD})_{1}\).

5 Applications

In this section, let X, Y, and Z be real locally convex topological vector spaces. Let \(K\subseteq Y\) be a nonempty closed convex cone which defines a partial order on Y. Let \(f : X \rightarrow \mathbb{R}\) be a continuous convex and nonnegative function, and let \(g : X \rightarrow \mathbb{R} \) be a continuous concave and positive function. Consider the following uncertain fractional optimization problem with a cone constraint:
$$ (\mbox{UFP})_{c} \quad \min_{x\in X} \biggl\{ \frac{f(x)}{g(x)}\Bigm| h(x,v)\in -K \biggr\} , $$
where \(h :X\times Z\rightarrow Y\) is a continuous function, and the uncertain parameter v belongs to the convex compact set \(\mathcal{V}\subseteq Z\).
Pursuing the approach given in [29, 30], \((\mathrm{UFP})_{c}\) can be reformulated as an example of (UFP) by setting
$$ T:=K^{*}, \qquad h_{\lambda }(x,v_{\lambda }):= ( \lambda h) (x,v ) \quad \mbox{for any } \lambda \in T=K^{*}. $$
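For instance (an illustrative special case, not stated in the text), when \(Y=\mathbb{R}^{m}\) and \(K=\mathbb{R}^{m}_{+}\), so that \(K^{*}=\mathbb{R}^{m}_{+}\), this scalarization recovers the usual componentwise description of the constraints:

```latex
h(x,v)=\bigl(h_{1}(x,v),\dots ,h_{m}(x,v)\bigr)\in -\mathbb{R}^{m}_{+}
\quad \Longleftrightarrow \quad
(\lambda h)(x,v)=\sum_{i=1}^{m}\lambda _{i}h_{i}(x,v)\leq 0
\quad \text{for all } \lambda \in \mathbb{R}^{m}_{+}.
```

The forward implication is immediate; the converse follows by taking \(\lambda =e_{i}\), the ith unit vector, which yields \(h_{i}(x,v)\leq 0\) for each i.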
Here, we use \(\mathcal{F}_{c}\) to denote the feasible solution set of \((\mathrm{UFP})_{c}\), i.e.,
$$ \mathcal{F}_{c}:= \bigl\{ x\in X\mid ( \lambda h) (x,v )\leq 0,\forall v \in \mathcal{V} , \lambda \in K^{*} \bigr\} = \bigl\{ x\in X\mid h (x,v ) \in -K,\forall v \in \mathcal{V} \bigr\} . $$
Moreover, it is easy to obtain that
$$ \bigcup_{v \in \mathcal{V}, \beta \in \mathbb{R}_{+}^{(K^{*})}} \operatorname{epi} \biggl( \sum _{\lambda \in K^{*}}\beta _{\lambda }h_{ \lambda }(\cdot ,v_{\lambda }) \biggr)^{*}= \bigcup_{v \in \mathcal{V}, {\lambda } \in {K^{*}}} \operatorname{epi} \bigl( ( {\lambda } h ) (\cdot ,v ) \bigr)^{*}. $$
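A sketch of why this identity holds (assuming, as the notation suggests, that the parameters \(v_{\lambda }\) are all taken equal to a common \(v\in \mathcal{V}\)): since \(K^{*}\) is a convex cone, any finitely supported combination collapses to a single scalarization,

```latex
\sum_{\lambda \in K^{*}}\beta _{\lambda }h_{\lambda }(\cdot ,v_{\lambda })
=\Bigl( \Bigl( \sum_{\lambda \in K^{*}}\beta _{\lambda }\lambda \Bigr) h \Bigr) (\cdot ,v )
=(\tilde{\lambda }h)(\cdot ,v ),
\qquad \tilde{\lambda }:=\sum_{\lambda \in K^{*}}\beta _{\lambda }\lambda \in K^{*},
```

so the left-hand union is contained in the right-hand one; the converse inclusion follows by choosing β supported on a single \(\lambda \in K^{*}\) with \(\beta _{\lambda }=1\).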

Now, we establish the corresponding results of the problem \(( \mathrm{UFP})_{c}\) by using the similar methods of Sects. 3 and 4.

Theorem 5.1

Let \(\bar{x}\in \mathcal{F}_{c} \), \(\varepsilon \geq 0\), and \(\bar{\mu } =\frac{f(\bar{x} )}{ g(\bar{x})}-\varepsilon >0\). Let \(h : X\times Z\rightarrow Y\) be a continuous function such that \(h (\cdot , v )\) is a K-convex function for any \(v \in \mathcal{V} \). If \(\bigcup_{v \in \mathcal{V}, {\lambda } \in {K^{*}}}\operatorname{epi} ( ( {\lambda } h ) (\cdot ,v ) )^{*}\) is weak\(^{*}\) closed and convex, then \(\bar{x}\) is a robust ε-optimal solution of \(\mathrm{(UFP)}_{c}\) if and only if there exist \(\bar{\lambda } \in K^{*}\), \(\bar{v} \in \mathcal{V} \), and \(\varepsilon _{i}\geq 0\), \(i=1,2,3\), such that
$$\begin{aligned} 0\in \partial _{\varepsilon _{1}} f(\bar{x} )+\bar{\mu } \partial _{\varepsilon _{2}} (-g) (\bar{x} ) + \partial _{\varepsilon _{3}} \bigl((\bar{\lambda } h) (\cdot , \bar{v} ) \bigr) (\bar{x}) \end{aligned}$$
and
$$\begin{aligned} \varepsilon _{1}+\bar{\mu }\varepsilon _{2}+ \varepsilon _{3}-\varepsilon g(\bar{x})= (\bar{\lambda } h ) (\bar{x},\bar{v} ) . \end{aligned}$$

Corollary 5.1

Let \(\bar{x}\in \mathcal{F}_{c} \) and \(\bar{\mu } =\frac{f(\bar{x} )}{ g(\bar{x})} >0\). Let \(h : X\times Z\rightarrow Y\) be a continuous function such that \(h (\cdot , v )\) is a K-convex function for any \(v \in \mathcal{V} \). If \(\bigcup_{v \in \mathcal{V}, {\lambda } \in {K^{*}}}\operatorname{epi} ( ( {\lambda } h ) (\cdot ,v ) )^{*}\) is weak\(^{*}\) closed and convex, then \(\bar{x}\) is a robust optimal solution of \(\mathrm{(UFP)}_{c}\) if and only if there exist \(\bar{\lambda } \in K^{*}\) and \(\bar{v} \in \mathcal{V} \) such that
$$\begin{aligned} 0\in \partial f(\bar{x} )+\bar{\mu }\partial (-g) (\bar{x} ) + \partial \bigl(( \bar{\lambda } h) (\cdot ,\bar{v} ) \bigr) (\bar{x}) \end{aligned}$$
and
$$\begin{aligned} (\bar{\lambda } h ) (\bar{x},\bar{v} ) =0. \end{aligned}$$
Similarly, for any \(\lambda ,\beta \in K^{*}\) and \(\varepsilon \geq 0\), we define the mixed type dual problem of \(\mathrm{(UFP)}_{c}\) as follows:
$$\begin{aligned} (\mathrm{MD })_{c} \quad \textstyle\begin{cases} \max \mu \\ \mbox{s.t.}\quad 0\in \partial _{\varepsilon _{1}} f(y )+ {\mu }\partial _{\varepsilon _{2}} (-g)(y) + \partial _{\varepsilon _{3}} ( ({\lambda } +\beta ) h (\cdot , {v}) )(y) , \\ \hphantom{\mbox{s.t.}\quad} f(y)- \mu g(y)+ ({\lambda } h ) (y, {v} )\geq \varepsilon g(y), \\ \hphantom{\mbox{s.t.}\quad} \varepsilon _{1}+ {\mu }\varepsilon _{2}+ \varepsilon _{3}-\varepsilon g(y) \leq ({\beta } h) (y, {v} ), \\ \hphantom{\mbox{s.t.}\quad} \mu \geq 0, {\lambda }, {\beta } \in K^{*}, \varepsilon _{i}\geq 0, i=1,2,3. \end{cases}\displaystyle \end{aligned}$$
The optimistic counterpart of \((\mathrm{MD})_{c}\) is given by
$$\begin{aligned} (\mathrm{OMD })_{c} \quad \textstyle\begin{cases} \max \mu \\ \mbox{s.t.}\quad 0\in \partial _{\varepsilon _{1}} f(y )+ {\mu }\partial _{\varepsilon _{2}} (-g)(y) + \partial _{\varepsilon _{3}} ( ({\lambda } +\beta ) h (\cdot , {v}) )(y) , \\ \hphantom{\mbox{s.t.}\quad} f(y)- \mu g(y)+ ({\lambda } h ) (y, {v} )\geq \varepsilon g(y), \\ \hphantom{\mbox{s.t.}\quad} \varepsilon _{1}+ {\mu }\varepsilon _{2}+ \varepsilon _{3}-\varepsilon g(y) \leq ({\beta } h) (y, {v} ), \\ \hphantom{\mbox{s.t.}\quad} \mu \geq 0, {\lambda }, {\beta } \in K^{*}, v\in \mathcal{V}, \varepsilon _{i}\geq 0, i=1,2,3. \end{cases}\displaystyle \end{aligned}$$

Theorem 5.2

Let \(\varepsilon \geq 0\). For any feasible x of \(\mathrm{(UFP)}_{c}\) and any feasible \((y,\lambda ,\beta , v,\mu )\) of \((\mathrm{OMD})_{c}\), we have
$$ \frac{f( {x})}{g(x)}\geq \mu -\varepsilon . $$

Theorem 5.3

Let \(\bar{x}\in \mathcal{F}_{c} \) and \(\varepsilon \geq 0\). Let \(h : X\times Z\rightarrow Y\) be a continuous function such that \(h (\cdot , v )\) is a K-convex function for any \(v \in \mathcal{V} \). Assume that \(\bigcup_{v \in \mathcal{V}, {\lambda } \in {K^{*}}} \operatorname{epi} ( ( {\lambda } h ) (\cdot ,v ) ) ^{*}\) is weak\(^{*}\) closed and convex. If \(\bar{x}\) is a robust ε-optimal solution of \(\mathrm{(UFP)}_{c}\) and \(\frac{f(\bar{x} )}{ g(\bar{x})}-\varepsilon >0\), then there exist \(\bar{\lambda } \in K^{*}\), \(\bar{v} \in \mathcal{V} \), and \(\bar{\mu } \geq 0\) such that \((\bar{x},0,\bar{\lambda }, \bar{v},\bar{\mu } )\) is a 2ε-optimal solution of \((\mathrm{OMD})_{c}\).

6 Conclusions

In this paper, a nonsmooth fractional semi-infinite optimization problem under data uncertainty in the constraint functions (UFP) is considered. Under a new robust type constraint qualification (RCQ), some approximate optimality conditions and approximate duality results are established in the framework of the robust optimization approach. The obtained results encompass as special cases some optimization problems considered in the recent literature. It would be interesting to consider other concepts of approximate solutions for fractional semi-infinite optimization problems with data uncertainty; this may be the topic of forthcoming papers.

Declarations

Acknowledgements

We would like to express our sincere thanks to the anonymous referees for many helpful comments and constructive suggestions which have contributed to the final preparation of this paper.

Availability of data and materials

Not applicable.

Funding

This research was supported by the Basic and Advanced Research Project of Chongqing (cstc2016jcyjA0219, cstc2016jcyjA0178), the National Natural Science Foundation of China (71501162), the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN201800837), and the Program for University Innovation Team of Chongqing (CXTDX201601026).

Authors’ contributions

All the authors have contributed equally to this paper. All the authors have read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Chongqing Key Laboratory of Social Economy and Applied Statistics, College of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing, China
(2)
China Research Institute of Enterprise Governed by Law, Southwest University of Political Science and Law, Chongqing, China

References

  1. Dinkelbach, W.: On nonlinear fractional programming. Manag. Sci. 13, 492–498 (1967)
  2. Schaible, S.: Duality in fractional programming: a unified approach. Oper. Res. 24, 452–461 (1976)
  3. Craven, B.D.: Fractional Programming. Heldermann, Berlin (1988)
  4. Lai, H.C., Liu, J.C., Tanaka, K.: Duality without a constraint qualification for minimax fractional programming. J. Math. Anal. Appl. 230, 311–328 (1999)
  5. Liang, Z.A., Huang, H.X., Pardalos, P.M.: Optimality conditions and duality for a class of nonlinear fractional programming problems. J. Optim. Theory Appl. 110, 611–619 (2001)
  6. Yang, X.M., Teo, K.L., Yang, X.Q.: Symmetric duality for a class of nonlinear fractional programming problems. J. Math. Anal. Appl. 271, 7–15 (2002)
  7. Yang, X.M., Yang, X.Q., Teo, K.L.: Duality and saddle-point type optimality for generalized nonlinear fractional programming. J. Math. Anal. Appl. 289, 100–109 (2004)
  8. Long, X.J.: Optimality conditions and duality for nondifferentiable multiobjective fractional programming problems with \((C,\alpha , \rho , d)\)-convexity. J. Optim. Theory Appl. 148, 197–208 (2011)
  9. Sun, X.K., Long, X.J., Chai, Y.: Sequential optimality conditions for fractional optimization with applications to vector optimization. J. Optim. Theory Appl. 164, 479–499 (2015)
  10. Sun, X.K., Tang, L.P., Long, X.J., Li, M.H.: Some dual characterizations of Farkas-type results for fractional programming problems. Optim. Lett. 12, 1403–1420 (2018)
  11. Jeyakumar, V., Li, G.Y.: Robust duality for fractional programming problems with constraint-wise data uncertainty. J. Optim. Theory Appl. 151, 292–303 (2011)
  12. Jeyakumar, V., Li, G.Y., Srisatkunarajah, S.: Strong duality for robust minimax fractional programming problems. Eur. J. Oper. Res. 228, 331–336 (2013)
  13. Sun, X.K., Chai, Y.: On robust duality for fractional programming with uncertainty data. Positivity 18, 9–28 (2014)
  14. Sun, X.K., Long, X.J., Fu, H.Y., Li, X.B.: Some characterizations of robust optimal solutions for uncertain fractional optimization and applications. J. Ind. Manag. Optim. 13, 803–824 (2017)
  15. Li, X.B., Wang, Q.L., Lin, Z.: Optimality conditions and duality for minimax fractional programming problems with data uncertainty. J. Ind. Manag. Optim. https://doi.org/10.3934/jimo.2018089
  16. Loridan, P.: Necessary conditions for ε-optimality. Math. Program. 19, 140–152 (1982)
  17. Son, T.Q., Strodiot, J.J., Nguyen, V.H.: ε-Optimality and ε-Lagrangian duality for a nonconvex programming problem with an infinite number of constraints. J. Optim. Theory Appl. 141, 389–409 (2009)
  18. Sun, X.K., Guo, X.L., Zeng, J.: Necessary optimality conditions for DC infinite programs with inequality constraints. J. Nonlinear Sci. Appl. 9, 617–626 (2016)
  19. Long, X.J., Xiao, Y.B., Huang, N.J.: Optimality conditions of approximate solutions for nonsmooth semi-infinite programming problems. J. Oper. Res. Soc. China 6, 289–299 (2018)
  20. Kim, D.S., Son, T.Q.: An approach to ε-duality theorems for nonconvex semi-infinite multiobjective optimization problems. Taiwan. J. Math. 22, 1261–1287 (2018)
  21. Lee, J.H., Lee, G.M.: On ε-solutions for robust fractional optimization problems. J. Inequal. Appl. 2014, 501 (2014)
  22. Sun, X.K., Li, X.B., Long, X.J., Peng, Z.Y.: On robust approximate optimal solutions for uncertain convex optimization and applications to multi-objective optimization. Pac. J. Optim. 13, 621–643 (2017)
  23. Lee, J.H., Lee, G.M.: On ε-solutions for robust semi-infinite optimization problems. Positivity (2018). https://doi.org/10.1007/s11117-018-0630-1
  24. Sun, X., Fu, H., Zeng, J.: Robust approximate optimality conditions for uncertain nonsmooth optimization with infinite number of constraints. Mathematics 7, 12 (2019)
  25. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
  26. Goberna, M.A., López, M.A.: Linear Semi-Infinite Optimization. Wiley, Chichester (1998)
  27. Jeyakumar, V.: Asymptotic dual conditions characterizing optimality for convex programs. J. Optim. Theory Appl. 93, 153–165 (1997)
  28. Boţ, R.I.: Conjugate Duality in Convex Optimization. Springer, Berlin (2010)
  29. Sun, X.K., Li, S.J., Zhao, D.: Duality and Farkas-type results for DC infinite programming with inequality constraints. Taiwan. J. Math. 17, 1227–1244 (2013)
  30. Sun, X.K.: Regularity conditions characterizing Fenchel–Lagrange duality and Farkas-type results in DC infinite programming. J. Math. Anal. Appl. 414, 590–611 (2014)

Copyright

© The Author(s) 2019
