# Lagrange-type duality in DC programming problems with equivalent DC inequalities

## Abstract

In this paper, we provide Lagrange-type duality theorems for mathematical programming problems with DC objective and constraint functions. The class of problems to which Lagrange-type duality theorems can be applied is broader than the class in the previous research. The main idea is to consider equivalent inequality systems given by the maximization of the original functions. In order to compare the present results with the previously reported results, we describe the difference between their constraint qualifications, which are technical assumptions for the duality.

## Introduction

Lagrange duality is very effective in solving convex programming problems with inequality constraints. Constraint qualifications, which are technical assumptions for Lagrange duality, play an essential role in proving its duality theorems. For convex functions $$f_{i}: \mathbb{R}^{n}\to \mathbb{R}$$, $$i=1,\ldots ,m$$, the inequality system $$\{f_{i}\le 0, i=1,\ldots ,m\}$$ is said to have the Farkas-Minkowski property (FM, for short) if $$\operatorname {cone}\operatorname {co}\bigcup^{m}_{i=1}\operatorname {epi}f^{*} _{i}+\{0\}\times [0,+\infty )$$ is closed. FM is well known as a necessary and sufficient constraint qualification for Lagrange duality; see . It is also easy to check that the system $$\{f_{i}\le 0, i=1,\ldots ,m\}$$ has FM if and only if the system $$\{\max_{i=1,\ldots ,m}f_{i}\le 0\}$$ has FM.
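Indeed, this equivalence is immediate from identity (5) below: since $$\operatorname {epi}(\max_{i=1,\ldots ,m}f_{i})^{*}=\operatorname {co}\bigcup^{m}_{i=1}\operatorname {epi}f^{*}_{i}$$ and taking the conical hull absorbs the convex hull,

$$\operatorname {cone}\operatorname {epi}\Bigl(\max_{i=1,\ldots ,m}f_{i} \Bigr)^{*}+\{0\}\times [0,+\infty ) =\operatorname {cone}\operatorname {co}\bigcup^{m}_{i=1}\operatorname {epi}f^{*}_{i}+\{0\}\times [0,+\infty ),$$

so the two closedness conditions coincide.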

A function is said to be DC if it can be expressed as the difference of two convex functions. In this paper, we consider the following mathematical programming problem with DC objective and constraint functions:

\begin{aligned}& \begin{aligned} &\mbox{minimize } f_{0}(x)-g_{0}(x) \\ &\mbox{subject to } f_{i}(x)-g_{i}(x)\le 0,\quad i=1, \ldots ,m, \end{aligned} \hspace{330pt}\mbox{(P)} \end{aligned}

where $$f_{i},g_{i}:\mathbb{R}^{n}\to \mathbb{R}$$ are convex functions for each $$i=0,1,\ldots ,m$$. For the inequality system $$\{f_{i}-g_{i} \le 0, i=1,\ldots ,m\}$$, constraint qualifications for Lagrange-type duality have been studied in [2, 3]. Surprisingly, the constraint qualifications of the two DC inequality systems $$\{f_{i}-g_{i}\le 0, i=1,\ldots ,m\}$$ and $$\{F-G\le 0\}$$, where $$F=\max_{i=1,\ldots ,m}\{f_{i}+\sum_{j\ne i}g _{j}\}$$ and $$G=\sum^{m}_{j=1}g_{j}$$, can differ even though the two systems are equivalent.

The purpose of this paper is to provide new Lagrange-type duality theorems for DC programming problems via equivalent DC inequalities. The class of problems to which these Lagrange-type duality theorems can be applied is broader than the class in previous research. The main idea, motivated by the above observation, is to consider equivalent inequality systems given by the maximization of the original functions. In order to compare the present results with the previously reported ones, we describe the difference between their constraint qualifications. The outline of the paper is as follows. In Section 2, we introduce definitions and preliminary results used throughout the paper. In Section 3, we provide a Lagrange-type duality theorem for the equivalent inequality system $$\{F-G\le 0\}$$, give an application of this theorem, and describe the difference between the present and previous constraint qualifications. We also provide a unified Lagrange-type duality theorem which contains both the present theorem and the previous results in . In Section 4, we summarize our results. Finally, in the Appendix, we give proofs of the lemmas used in the proof of the main result.

## Notations and preliminaries

In this section, we describe our notation and present preliminary results. The inner product of two vectors x and y in the n-dimensional real Euclidean space $$\mathbb{R}^{n}$$ will be denoted by $$\langle x,y \rangle$$. For a set $$A\subseteq \mathbb{R}^{n}$$, we denote the closure, the convex hull, and the conical hull of A by $$\operatorname {cl}A$$, $$\operatorname {co}A$$, and $$\operatorname {cone}A$$, respectively. For a convex set $$C\subseteq \mathbb{R}^{n}$$ and $$\alpha , \beta \in [0,+\infty )$$, $$(\alpha +\beta )C=\alpha C+ \beta C$$, where $$\alpha A=\{\alpha x\mid x\in A\}$$ and $$A+B=\{x+y\mid x\in A, y\in B\}$$ for any $$\alpha \in \mathbb{R}$$ and $$A, B\subseteq \mathbb{R}^{n}$$. For an extended real-valued function $$f:\mathbb{R} ^{n}\to \mathbb{R}\cup \{+\infty \}$$, the domain, the epigraph, and the conjugate function of f are defined by

\begin{aligned}& \operatorname{dom} f= \bigl\{ x\in \mathbb{R}^{n}\mid f(x)< +\infty \bigr\} , \\& \operatorname {epi}f= \bigl\{ (x,r)\in \mathbb{R}^{n}\times \mathbb{R}\mid x\in \operatorname{dom} f,f(x)\le r \bigr\} ,\quad \mbox{and} \\& f^{*}(y)=\sup_{x\in \mathbb{R}^{n}} \bigl\{ \langle x,y \rangle -f(x) \bigr\} , \quad \forall y\in \mathbb{R}^{n}. \end{aligned}

The indicator function of $$A\subseteq \mathbb{R}^{n}$$ is denoted by $$\delta_{A}$$. For each $$x\in \operatorname{dom} f$$, the subdifferential of the function f at x is the set

$$\partial f(x)= \bigl\{ x^{*}\in \mathbb{R}^{n}\mid \bigl\langle x^{*},y-x \bigr\rangle +f(x) \le f(y), \forall y\in \mathbb{R}^{n} \bigr\} .$$

If $$x\in \operatorname{dom} f$$, then $$f(x)+f^{*}(y)\ge \langle y,x \rangle$$ (the Young-Fenchel inequality) holds for each $$y\in \mathbb{R}^{n}$$ and

$$f(x)+f^{*}(y)= \langle y,x \rangle\quad \Leftrightarrow\quad y\in \partial f(x).$$
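As a standard illustration, let $$f(x)=\frac{1}{2} \Vert x \Vert ^{2}$$ on $$\mathbb{R}^{n}$$; then $$f^{*}(y)=\frac{1}{2} \Vert y \Vert ^{2}$$ and

$$f(x)+f^{*}(y)- \langle y,x \rangle =\frac{1}{2} \Vert x-y \Vert ^{2}\ge 0,$$

with equality exactly when $$y=x$$, in agreement with $$\partial f(x)=\{x\}$$.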

For two extended real-valued functions $$f, g:\mathbb{R}^{n}\to \mathbb{R}\cup \{+\infty \}$$, the infimal convolution of f and g is defined by

$$(f\oplus g) (x)=\inf_{x_{1}+x_{2}=x} \bigl\{ f(x_{1})+g(x_{2}) \bigr\} , \quad \forall x \in \mathbb{R}^{n}.$$
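For example, if $$f=g=\frac{1}{2} \Vert \cdot \Vert ^{2}$$, the infimum is attained at $$x_{1}=x_{2}=\frac{x}{2}$$, and

$$(f\oplus g) (x)=\inf_{x_{1}+x_{2}=x} \frac{1}{2} \bigl( \Vert x_{1} \Vert ^{2}+ \Vert x_{2} \Vert ^{2} \bigr) =\frac{1}{4} \Vert x \Vert ^{2},$$

which agrees with identity (3) below, since $$(f+g)^{*}(y)=( \Vert \cdot \Vert ^{2})^{*}(y)=\frac{1}{4} \Vert y \Vert ^{2}$$.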

For extended real-valued convex functions $$f_{i}:\mathbb{R}^{n}\to \mathbb{R}\cup \{+\infty \}$$, $$i=1,\ldots ,m$$, if $$\bigcap^{m} _{i=1}\operatorname {int}\operatorname{dom} f_{i}\ne \emptyset$$, then

$$\partial (f_{1}+\cdots +f_{m}) (x)=\partial f_{1}(x)+\cdots +\partial f_{m}(x)$$
(1)

for all $$x\in \bigcap^{m}_{i=1}\operatorname{dom} f_{i}$$ and for each $$y\in \partial (f_{1}+\cdots +f_{m})(x)$$, there exists $$y_{i}\in \partial f_{i}(x)$$ ($$i=1,\ldots ,m$$) such that

$$(f_{1}+\cdots +f_{m})^{*}(y)=f^{*}_{1}(y_{1})+ \cdots +f^{*}_{m}(y_{m}).$$
(2)

Hence

$$(f_{1}+\cdots +f_{m})^{*}(y)= \bigl(f^{*}_{1}\oplus \cdots \oplus f^{*}_{m} \bigr) (y),$$
(3)

where the infimal convolution is attained for all y; see . It is easy to show that (3) implies that

$$\operatorname {epi}(f_{1}+\cdots +f_{m})^{*}=\operatorname {epi}f^{*}_{1}+\cdots +\operatorname {epi}f^{*} _{m} .$$
(4)

When all $$f_{i}$$ are real-valued convex functions,

$$\operatorname {epi}\Bigl(\max_{i=1,\ldots ,m}f_{i} \Bigr)^{*}=\operatorname {co}\Biggl(\bigcup^{m}_{i=1} \operatorname {epi}f^{*}_{i} \Biggr)$$
(5)

holds; see Theorem 2.4.7 in . The following theorem will be used in the proof of the main theorem.

### Theorem 1

(Sion)

Let X be a convex set, Y a compact convex set, and $$f:X\times Y\to \mathbb{R}$$ a function such that $$f(x,\cdot )$$ is upper semicontinuous and concave on Y for each $$x\in X$$, and $$f(\cdot ,y)$$ is lower semicontinuous and convex on X for each $$y\in Y$$. Then

$$\inf_{x\in X}\max_{y\in Y}f(x,y)=\max _{y\in Y}\inf_{x\in X}f(x,y) .$$

## Main results

We observe the following DC programming problem with inequality constraints:

\begin{aligned}& \begin{aligned} &\mbox{minimize } f_{0}(x)-g_{0}(x) \\ &\mbox{subject to } f_{i}(x)-g_{i}(x)\le 0,\quad i=1, \ldots ,m, \end{aligned} \hspace{330pt}\mbox{(P)} \end{aligned}

where $$f_{i},g_{i}:\mathbb{R}^{n}\to \mathbb{R}$$ are convex functions for each $$i=0,1,\ldots ,m$$. First, we give a real-valued version of a previous Lagrange-type duality result for (P) in  as follows, where $$\operatorname{Val}(\mathrm{P})$$ is the infimum value of (P):

### Theorem 2

Let $$f_{i},g_{i}:\mathbb{R} ^{n}\to \mathbb{R}$$ be convex functions for each $$i=0,1,\ldots ,m$$, $$S=\{x\in \mathbb{R}^{n}\mid f_{i}(x)-g_{i}(x)\le 0, \forall i=1, \ldots ,m\}$$, $$\bigcup_{x\in S}\partial g_{0}(x)\subseteq D _{0}\subseteq \mathbb{R}^{n}$$ and $$\bigcup_{x\in S} ( \prod^{m}_{i=1}\partial g_{i}(x) ) \subseteq D \subseteq \mathbb{R}^{nm}$$. If $$S((y_{i})_{i=1}^{m})=\{x\in \mathbb{R}^{n}\mid f _{i}(x)- \langle x,y_{i} \rangle +g^{*}_{i}(y_{i})\le 0, \forall i=1,\ldots ,m\}$$ is not empty and

$$\operatorname {cone}\operatorname {co}\bigcup^{m}_{i=1} \bigl(\operatorname {epi}f^{*}_{i}- \bigl(y_{i},g ^{*}_{i}(y_{i}) \bigr) \bigr) +\{0\}\times [0,+ \infty )\quad \textit{is closed}$$
(6)

for each $$(y_{i})_{i=1}^{m}\in D\cap \prod^{m}_{i=1} \operatorname{dom} g^{*}_{i}$$, then

\begin{aligned} \operatorname{Val}(\mathrm{P}) =&\inf_{(y_{0},(y_{i})_{i=1}^{m})\in D_{0}\times D}\max_{\lambda_{i}\ge 0} \inf_{x\in \mathbb{R}^{n}}\left \{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})\vphantom{\sum^{m}_{i=1}} \right . \\ &{}+\left .\sum^{m}_{i=1} \lambda_{i} \bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i}) \bigr) \right \} . \end{aligned}
(7)

We remark that, in the real-valued case, this theorem contains the previous theorems in . Clearly, problem (P) is equivalent to the following problem (P′):

\begin{aligned}& \begin{aligned} &\mbox{minimize } f_{0}(x)-g_{0}(x) \\ &\mbox{subject to } \max_{i=1,\ldots ,m} \bigl\{ f_{i}(x)-g_{i}(x) \bigr\} \le 0, \end{aligned} \hspace{327pt}\mbox{(P')} \end{aligned}

and problem (P′) is also a DC programming problem because

\begin{aligned} \max_{i=1,\ldots ,m} \{f_{i}-g_{i} \} =&\max_{i=1,\ldots ,m} \Biggl\{ f_{i}+\sum _{j\ne i}g_{j}-\sum^{m}_{j=1}g_{j} \Biggr\} =\max_{i=1,\ldots ,m} \biggl\{ f_{i}+\sum _{j\ne i}g_{j} \biggr\} -\sum ^{m}_{j=1}g_{j}=F-G, \end{aligned}
(8)

and F and G are convex functions. Surprisingly, the constraint qualifications of the two DC inequality systems $$\{f_{i}-g_{i}\le 0, i=1,\ldots ,m\}$$ and $$\{F-G\le 0\}$$ can differ even though the two systems are equivalent; this can be seen at the end of this section. Motivated by this observation, we give the first duality result.

### Theorem 3

Let $$f_{i},g_{i}:\mathbb{R}^{n}\to \mathbb{R}$$ be convex functions for each $$i=0,1,\ldots ,m$$, $$S=\{x\in \mathbb{R}^{n}\mid f_{i}(x)-g_{i}(x) \le 0, \forall i=1,\ldots ,m\}$$, $$\bigcup_{x\in S}\partial g _{0}(x)\subseteq D_{0}$$ and $$D=\bigcup_{x\in S}\sum^{m} _{i=1}\partial g_{i}(x)$$. If

$$\operatorname {cone}\operatorname {co}\Biggl(\bigcup^{m}_{i=1} \biggl(\operatorname {epi}f^{*}_{i}+ \sum_{j\ne i} \operatorname {epi}g^{*}_{j} \biggr)-\sum^{m}_{i=1} \bigl(y_{i},g^{*} _{i}(y_{i}) \bigr) \Biggr) +\{0\}\times [0,+\infty ) \quad \textit{is closed}$$
(9)

for each $$(y_{i})_{i=1}^{m}\in \bigcup_{x\in S}\prod^{m}_{i=1}\partial g_{i}(x)$$, then the following Lagrange-type duality holds:

\begin{aligned} \operatorname{Val}(\mathrm{P}) =&\inf_{(y_{0},\hat{y})\in D_{0}\times D} \max_{\substack{\hat{\lambda }, \lambda_{i}\ge 0 \\ \sum ^{m}_{i=1}\lambda_{i}=\hat{\lambda }}} \inf _{x\in \mathbb{R}^{n}}\Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0}) + \sum^{m}_{i=1}\lambda_{i} \bigl(f_{i}(x)-g_{i}(x) \bigr)\\ &{} +\hat{\lambda } \Biggl( \sum^{m}_{j=1}g_{j}(x)- \langle x, \hat{y} \rangle + \Biggl(\sum^{m}_{j=1}g_{j} \Biggr)^{*}(\hat{y}) \Biggr) \Bigg\} . \end{aligned}

Also we give a unified result of Theorem 2 and Theorem 3, as follows.

### Theorem 4

Let $$f_{i},g_{i}:\mathbb{R}^{n}\to \mathbb{R}$$ be convex functions for each $$i=0,1,\ldots ,m$$, $$S=\{x\in \mathbb{R}^{n}\mid f_{i}(x)-g_{i}(x) \le 0, \forall i=1,\ldots ,m\}$$, $$I\subseteq \{1,\ldots ,m\}$$, $$\bigcup_{x\in S}\partial g_{0}(x)\subseteq D_{0}$$ and $$D=\bigcup_{x\in S} ( \prod_{i\notin I}\partial g _{i}(x)\times \sum_{i\in I}\partial g_{i}(x) )$$. If

\begin{aligned}& \operatorname {cone}\operatorname {co}\biggl(\bigcup _{i\in I} \biggl( \biggl(\operatorname {epi}f ^{*}_{i}+\sum _{\substack{j\ne i\\j\in I}}\operatorname {epi}g^{*}_{j} \biggr)-\sum _{i\in I} \bigl(y_{i},g_{i}^{*}(y_{i}) \bigr) \biggr) \\& \quad {} \cup \bigcup_{i\notin I} \bigl(\operatorname {epi}f^{*}_{i}- \bigl(y_{i},g ^{*}_{i}(y_{i}) \bigr) \bigr) \biggr)+\{0\}\times [0,+\infty ) \end{aligned}
(10)

is closed for each $$(y_{i})_{i=1}^{m}\in \bigcup_{x\in S} \prod^{m}_{i=1}\partial g_{i}(x)$$, then

\begin{aligned} \operatorname{Val}(\mathrm{P}) = &\inf_{(y_{0},((y_{i})_{i\notin I},\hat{y}))\in D_{0}\times D} \max_{\substack{\hat{\lambda }, \lambda_{i}\ge 0\\\sum _{i \in I}\lambda_{i}=\hat{\lambda }}} \inf_{x\in \mathbb{R}^{n}} \Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0}) \\ &{}+\sum_{i\notin I}\lambda_{i} \bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i})\bigr) +\sum _{i\in I}\lambda_{i}\bigl(f_{i}(x)-g_{i}(x) \bigr)\\ &{}+\hat{\lambda } \biggl( \sum_{j\in I}g_{j}(x)- \langle x,\hat{y} \rangle +\biggl( \sum_{j\in I}g_{j} \biggr)^{*}(\hat{y}) \biggr) \Bigg\} . \end{aligned}

### Remark 1

If $$I=\emptyset$$, then Theorem 4 reduces to Theorem 2, and if $$I=\{1,\ldots ,m\}$$, then Theorem 4 reduces to Theorem 3. Also, the assumptions of Theorem 2 and Theorem 3 differ; this can be seen at the end of this section. Therefore Theorem 4 is a genuine generalization of both Theorem 2 and Theorem 3.

In order to prove Theorem 4, we provide Lemma 1 and Lemma 2.

### Lemma 1

For any $$m\in \mathbb{N}$$ and for any convex sets $$C_{i}\subseteq \mathbb{R}^{n}$$ ($$i=1,\ldots ,m$$),

$$\operatorname {co}\bigcup^{m}_{i=1}C_{i}= \bigcup_{\substack{\lambda_{i}\ge 0 \\ \sum ^{m}_{i=1}\lambda_{i}=1}}\sum^{m}_{i=1} \lambda_{i} C _{i}.$$
(11)

### Lemma 2

For any $$m\in \mathbb{N}$$ and for any convex sets $$A_{i}, B_{i} \subseteq \mathbb{R}^{n}$$ ($$i=1,\ldots ,m$$),

$$\operatorname {co}\bigcup_{\substack{\lambda_{i}\ge 0 \\ \sum ^{m}_{i=1}\lambda_{i}=1}}\sum ^{m}_{i=1} \bigl(\lambda_{i} A_{i}+(1- \lambda_{i})B_{i} \bigr) =\operatorname {co}\bigcup ^{m}_{i=1} \biggl(A_{i}+\sum _{j\ne i}B_{j} \biggr).$$
(12)

The proofs of Lemma 1 and Lemma 2 will be given in the Appendix.
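For $$m=2$$, the identity in Lemma 2 can be checked directly: writing $$\lambda =\lambda_{1}$$ (so $$\lambda_{2}=1-\lambda$$), each summand becomes

$$\lambda A_{1}+(1-\lambda )B_{1}+(1-\lambda )A_{2}+\lambda B_{2}= \lambda (A_{1}+B_{2})+(1-\lambda ) (A_{2}+B_{1}),$$

so, taking the union over $$\lambda \in [0,1]$$ and the convex hull, Lemma 1 applied to the two sets $$A_{1}+B_{2}$$ and $$A_{2}+B_{1}$$ gives (12).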

### Proof of Theorem 4

Let $$F=\max_{i\in I}\{f_{i}+ \sum_{\substack{j\ne i \\ j\in I}}g_{j}\}$$ and $$G=\sum_{i\in I}g_{i}$$. By (8), problem (P) can be rewritten as the following equivalent problem (P″):

\begin{aligned}& \mbox{minimize } f_{0}(x)-g_{0}(x) \\& \mbox{subject to } f_{i}(x)-g_{i}(x)\le 0, \quad \forall i \notin I ,\hspace{183pt}\mbox{(P'')} \\& F(x)-G(x)\le 0. \end{aligned}

From (1),

$$D=\bigcup_{x\in S} \biggl( \prod _{i\notin I}\partial g _{i}(x)\times \sum _{i\in I}\partial g_{i}(x) \biggr) = \bigcup _{x\in S} \biggl( \prod_{i\notin I}\partial g_{i}(x) \times \partial \sum_{i\in I}g_{i}(x) \biggr) .$$

For each $$((y_{i})_{i\notin I},\hat{y})\in D\cap (\prod_{i\notin I} \operatorname{dom} g^{*}_{i}\times \operatorname{dom} G^{*})$$, there exists $$\hat{x}\in S$$ such that $$y_{i}\in \partial g_{i}(\hat{x})$$ for each $$i\notin I$$ and $$\hat{y}\in \partial \sum_{i\in I}g_{i}(\hat{x})$$, that is,

$$g_{i}(\hat{x})+g^{*}_{i}(y_{i})= \langle \hat{x},y_{i} \rangle \quad (i\notin I), \qquad \biggl(\sum _{i\in I}g_{i} \biggr) (\hat{x})+ \biggl(\sum _{i\in I}g_{i} \biggr)^{*}(\hat{y})= \langle \hat{x}, \hat{y} \rangle .$$

From (3), there exists $$y_{i}$$ ($$i\in I$$) such that $$(\sum_{i\in I}g_{i})^{*}(\hat{y})=\sum_{i\in I}g^{*}_{i}(y _{i})$$ and $$\sum_{i\in I}y_{i}=\hat{y}$$. Then

$$\sum_{i\in I} \bigl(g_{i}( \hat{x})+g^{*}_{i}(y_{i}) \bigr)=\sum _{i\in I} \langle \hat{x},y _{i} \rangle ,$$

and since $$g_{i}(\hat{x})+g^{*}_{i}(y_{i})\ge \langle \hat{x},y _{i} \rangle$$ for each $$i\in I$$, we have

$$g_{i}(\hat{x})+g^{*}_{i}(y_{i})= \langle \hat{x},y_{i} \rangle, \quad \mbox{that is}, y_{i} \in \partial g_{i}(\hat{x})$$

for each $$i\in I$$. Therefore

$$(y_{i})_{i=1}^{m}\in \prod ^{m}_{i=1}\partial g_{i}(\hat{x})\subseteq \bigcup_{x\in S}\prod^{m}_{i=1} \partial g_{i}(x).$$
(13)

From $$\hat{y}\in \partial \sum_{i\in I}g_{i}(\hat{x})$$ and $$\hat{x} \in S$$,

\begin{aligned} F(\hat{x})- \langle \hat{x},\hat{y} \rangle +G^{*}( \hat{y}) =& \max_{i\in I} \biggl\{ f_{i}(\hat{x})+\sum _{\substack{j\ne i\\j\in I}}g_{j}( \hat{x}) \biggr\} - \langle \hat{x}, \hat{y} \rangle + \biggl(\sum_{i\in I}g _{i} \biggr)^{*}(\hat{y}) \\ =&\max_{i\in I} \biggl\{ f_{i}(\hat{x})+\sum _{\substack{j\ne i\\j\in I}}g _{j}(\hat{x}) \biggr\} -\sum _{i\in I}g_{i}(\hat{x}) \\ =&\max_{i\in I} \bigl\{ f_{i}(\hat{x})-g_{i}( \hat{x}) \bigr\} \le 0. \end{aligned}

From $$y_{i}\in \partial g_{i}(\hat{x})$$ for each $$i\notin I$$ and $$\hat{x}\in S$$, $$f_{i}(\hat{x})- \langle \hat{x},y_{i} \rangle +g ^{*}_{i}(y_{i})=f_{i}(\hat{x})-g_{i}(\hat{x})\le 0$$. Therefore $$\hat{x}$$ is an element of $$\{x\in \mathbb{R}^{n}\mid f_{i}(x)- \langle x,y _{i} \rangle +g^{*}_{i}(y_{i})\le 0, \forall i\notin I, F(x)- \langle x,\hat{y} \rangle +G^{*}(\hat{y})\le 0\}$$, so this set is non-empty. For each $$i\in I$$, let $$F_{i}=f_{i}+ \sum_{\substack{j\ne i \\ j\in I}}g_{j}$$. Now we have

\begin{aligned} \operatorname {epi}F^{*} =&\operatorname {co}\bigcup_{i\in I} \operatorname {epi}F^{*}_{i}\quad \bigl( \because \mbox{ from (5)} \bigr) \\ =& \bigcup_{ \substack{\lambda_{i}\ge 0\\\sum _{i\in I}\lambda_{i}=1}} \sum_{i\in I} \lambda_{i} \operatorname {epi}F^{*}_{i} \quad (\because \mbox{ by using Lemma 1}) \\ =& \bigcup_{ \substack{\lambda_{i}\ge 0\\\sum _{i\in I}\lambda_{i}=1}} \sum_{i\in I} \lambda_{i} \biggl(\operatorname {epi}f^{*}_{i}+\sum _{\substack{j\ne i\\j\in I}}\operatorname {epi}g^{*}_{j} \biggr) \quad \bigl( \because \mbox{ from (4)}\bigr) \\ =& \bigcup_{ \substack{\lambda_{i}\ge 0\\\sum _{i\in I}\lambda_{i}=1}} \sum_{i\in I} \bigl(\lambda_{i}\operatorname {epi}f^{*}_{i}+(1- \lambda_{i})\operatorname {epi}g^{*} _{i} \bigr) \\ =&\operatorname {co}\bigcup_{ \substack{\lambda_{i}\ge 0\\\sum _{i\in I}\lambda_{i}=1}} \sum _{i\in I} \bigl(\lambda_{i}\operatorname {epi}f^{*}_{i}+(1- \lambda_{i})\operatorname {epi}g^{*} _{i} \bigr) \\ =&\operatorname {co}\bigcup_{i\in I} \biggl(\operatorname {epi}f^{*}_{i}+ \sum_{\substack{j\ne i\\j\in I}}\operatorname {epi}g^{*}_{j} \biggr)\quad (\because \mbox{from Lemma 2}). \end{aligned}

Therefore

$$\operatorname {epi}F^{*}- \bigl(\hat{y},G^{*}(\hat{y}) \bigr)= \operatorname {co}\biggl( \bigcup_{i\in I} \biggl(\operatorname {epi}f^{*}_{i}+ \sum_{\substack{j\in I \\ j\ne i}}\operatorname {epi}g^{*}_{j} \biggr)- \sum_{i\in I} \bigl(y_{i},g^{*}_{i}(y_{i}) \bigr) \biggr),$$

and hence

\begin{aligned}& \operatorname {cone}\operatorname {co}\biggl(\bigcup_{i\notin I} \bigl( \operatorname {epi}f^{*}_{i}- \bigl(y_{i},g^{*} _{i}(y_{i}) \bigr) \bigr)\cup \bigl(\operatorname {epi}F^{*}- \bigl(\hat{y},G^{*}(\hat{y}) \bigr) \bigr) \biggr)+\{0 \}\times [0,+ \infty ) \\& \quad =\operatorname {cone}\operatorname {co}\biggl(\bigcup_{i\notin I} \bigl(\operatorname {epi}f^{*}_{i}- \bigl(y_{i},g ^{*}_{i}(y_{i}) \bigr) \bigr)\cup \biggl(\bigcup_{i\in I} \biggl(\operatorname {epi}f^{*}_{i}+ \sum_{\substack{j\in I\\j\ne i}}\operatorname {epi}g^{*}_{j} \biggr)-\sum_{i\in I} \bigl(y_{i},g ^{*}_{i}(y_{i}) \bigr) \biggr) \biggr) \\& \qquad {} +\{0\}\times [0,+\infty ), \end{aligned}

because $$\operatorname {co}(A\cup \operatorname {co}B)=\operatorname {co}(A\cup B)$$ for any $$A, B\subseteq \mathbb{R}^{n}$$. From (10), this set is closed. By using Theorem 2,

\begin{aligned} \operatorname{Val}(\mathrm{P}) = &\inf_{(y_{0},((y_{i})_{i\notin I},\hat{y}))\in D_{0}\times D} \max_{\hat{\lambda }, \lambda_{i}\ge 0}\inf _{x\in \mathbb{R}^{n}}\Biggl\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})\vphantom{\sum_{i\notin I}} \\ &{}+\sum_{i\notin I}\lambda_{i} \bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i})\bigr) +\hat{\lambda }\bigl(F(x)- \langle x,\hat{y} \rangle +G^{*}( \hat{y})\bigr) \Biggr\} \end{aligned}

holds. For any $$(y_{0},((y_{i})_{i\notin I},\hat{y}))\in D_{0}\times D$$,

\begin{aligned}& \max_{\substack{\lambda_{i}\ge 0(i\notin I)\\\hat{\lambda } \ge 0}}\inf_{x\in \mathbb{R}^{n}} \Biggl\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0}) + \sum_{i\notin I}\lambda_{i} \bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i})\bigr) \\& \qquad {}+\hat{\lambda }\bigl(F(x)- \langle x,\hat{y} \rangle +G^{*}( \hat{y})\bigr)\vphantom{\sum_{i\notin I}} \Biggr\} \\& \quad =\max_{\substack{\lambda_{i}\ge 0(i\notin I)\\\hat{\lambda } \ge 0}}\inf_{x\in \mathbb{R}^{n}}\Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})+ \sum_{i\notin I}\lambda_{i}\bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i}) \bigr)\\& \qquad {}+\hat{\lambda } \biggl(\max_{i\in I} \biggl\{ f_{i}(x)+\sum_{j\ne i, j\in I}g_{j}(x) \biggr\} - \langle x,\hat{y} \rangle +\biggl( \sum_{j\in I}g_{j} \biggr)^{*}(\hat{y}) \biggr) \Bigg\} \\& \quad =\max_{\substack{\lambda_{i}\ge 0(i\notin I)\\\hat{\lambda } \ge 0}} \inf_{x\in \mathbb{R}^{n}} \Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})+ \sum_{i\notin I}\lambda_{i} \bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i}) \bigr)\\& \qquad {}+\hat{\lambda } \biggl(\max_{ \substack{\lambda_{i}\ge 0(i\in I)\\\sum_{i\in I}\lambda_{i}=1}} \sum _{i\in I}\lambda_{i}\biggl(f_{i}(x)+ \sum_{j\ne i, j\in I}g_{j}(x)\biggr)- \langle x, \hat{y} \rangle +\biggl(\sum_{j\in I}g_{j} \biggr)^{*}( \hat{y}) \biggr) \Bigg\} \\& \quad =\max_{\substack{\lambda_{i}\ge 0(i\notin I)\\\hat{\lambda } \ge 0}}\inf_{x\in \mathbb{R}^{n}}\max _{ \substack{\lambda_{i}\ge 0(i\in I)\\\sum_{i\in I}\lambda_{i}=1}} \Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})+\sum _{i\notin I}\lambda_{i}\bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i}) \bigr)\\& \qquad {}+\hat{\lambda }\biggl(\sum_{i\in I} \lambda_{i}\biggl(f_{i}(x)+\sum _{j\ne i, j\in I}g_{j}(x)\biggr)- \langle x,\hat{y} \rangle + \biggl( \sum_{j\in I}g_{j} \biggr)^{*}(\hat{y})\biggr) \Bigg\} \\& \quad =\max_{\substack{\lambda_{i}\ge 0(i\notin I)\\\hat{\lambda } \ge 
0}}\max_{\substack{\lambda_{i}\ge 0(i\in I)\\\sum _{i\in I} \lambda_{i}=1}}\inf _{x\in \mathbb{R}^{n}} \Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})+\sum _{i\notin I}\lambda_{i}\bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i}) \bigr)\\& \qquad {}+\hat{\lambda }\biggl(\sum_{i\in I} \lambda_{i}\biggl(f_{i}(x)+\sum _{j\ne i, j\in I}g_{j}(x)\biggr)- \langle x,\hat{y} \rangle + \biggl( \sum_{j\in I}g_{j} \biggr)^{*}(\hat{y})\biggr)\Bigg\} \\& \quad =\max_{\substack{\hat{\lambda }, \lambda_{i}\ge 0\\\sum _{i \in I}\lambda_{i}=1}}\inf_{x\in \mathbb{R}^{n}} \Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})+ \sum_{i\notin I}\lambda_{i}\bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i}) \bigr)\\& \qquad {}+\hat{\lambda }\biggl(\sum_{i\in I} \lambda_{i}\biggl(f_{i}(x)-g_{i}(x)+\sum _{j\in I}g_{j}(x)\biggr)- \langle x,\hat{y} \rangle + \biggl( \sum_{j\in I}g_{j} \biggr)^{*}(\hat{y})\biggr) \Bigg\} \\& \quad =\max_{\substack{\hat{\lambda }, \lambda_{i}\ge 0\\\sum _{i \in I}\lambda_{i}=1}}\inf_{x\in \mathbb{R}^{n}} \Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})+ \sum_{i\notin I}\lambda_{i}\bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i}) \bigr)\\& \qquad {}+\hat{\lambda }\sum_{i\in I} \lambda_{i}\bigl(f_{i}(x)-g_{i}(x)\bigr)+ \hat{ \lambda }\biggl(\sum_{j\in I}g_{j}(x)- \langle x,\hat{y} \rangle +\biggl( \sum_{j\in I}g_{j} \biggr)^{*}(\hat{y})\biggr) \Bigg\} \\& \quad =\max_{\substack{\hat{\lambda }, \lambda_{i}\ge 0\\\sum _{i \in I}\lambda_{i}=\hat{\lambda }}}\inf_{x\in \mathbb{R}^{n}} \Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})+ \sum_{i\notin I}\lambda_{i}\bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i}) \bigr)\\& \qquad {}+\sum_{i\in I}\lambda_{i} \bigl(f_{i}(x)-g_{i}(x)\bigr) +\hat{\lambda }\biggl(\sum _{j\in I}g_{j}(x)- \langle x,\hat{y} \rangle + \biggl( \sum_{j\in I}g_{j} \biggr)^{*}(\hat{y})\biggr) \Bigg\} . \end{aligned}

The fourth equality in the chain above follows from Theorem 1. Hence we have

\begin{aligned} \operatorname{Val}(\mathrm{P}) = &\inf_{(y_{0},((y_{i})_{i\notin I},\hat{y}))\in D_{0}\times D} \max_{\substack{\hat{\lambda }, \lambda_{i}\ge 0\\\sum _{i \in I}\lambda_{i}=\hat{\lambda }}}\inf _{x\in \mathbb{R}^{n}} \Bigg\{ f_{0}(x)- \langle x,y_{0} \rangle +g^{*}_{0}(y_{0})\vphantom{\sum_{i\notin I}} \\ &{}+\sum_{i\notin I}\lambda_{i} \bigl(f_{i}(x)- \langle x,y_{i} \rangle +g ^{*}_{i}(y_{i})\bigr) +\sum _{i\in I}\lambda_{i}\bigl(f_{i}(x)-g_{i}(x) \bigr)\\ &{}+\hat{\lambda }\biggl( \sum_{j\in I}g_{j}(x)- \langle x,\hat{y} \rangle +\biggl( \sum_{j\in I}g_{j} \biggr)^{*}(\hat{y})\biggr) \Bigg\} . \end{aligned}

This completes the proof. □

Now we can apply Theorem 3 to DC programming problems.

### Example 1

Consider the following DC programming problem:

\begin{aligned}& \begin{aligned} &\mbox{minimize } f_{0}(x)-g_{0}(x) \\ &\mbox{subject to } x=(x_{1},x_{2})\in \mathbb{R}^{2},\quad f_{i}(x)-g_{i}(x) \le 0,\quad i=1,2, \end{aligned} \hspace{330pt}\mbox{(P)} \end{aligned}

where $$f_{0}(x_{1},x_{2})=x_{1}^{2}-x_{2}$$, $$g_{0}(x_{1},x_{2})=0$$, $$f_{1}(x_{1},x_{2})=x_{2}$$, $$g_{1}(x_{1},x_{2})=\vert x_{1}\vert$$, $$f_{2}(x _{1},x_{2})=-x_{2}$$, and $$g_{2}(x_{1},x_{2})=\vert x_{1}\vert$$. This mathematical programming problem is neither convex nor differentiable, so the previous theorems concerned with convex or differentiable programming problems cannot be applied directly. Let $$D_{0}=\bigcup_{x\in S} \partial g_{0}(x)=\{(0,0)\}$$ and $$D=\bigcup_{x\in S}(\partial g_{1}(x)+ \partial g_{2}(x))=[-2,2]\times \{0\}$$. We can check that the assumption of Theorem 3 holds. Therefore,

\begin{aligned} \operatorname{Val}(\mathrm{P}) = & \inf_{\hat{y}_{1}\in [-2,2]} \max_{\lambda_{1},\lambda_{2}\geq 0}\inf _{x_{1},x_{2}\in \mathbb{R}} \bigl( x_{1}^{2}-x_{2}+ \lambda_{1} \bigl(\vert x_{1}\vert +x_{2} \bigr)+ \lambda_{2} \bigl(\vert x _{1}\vert -x_{2} \bigr) -( \lambda_{1}+\lambda_{2}) \hat{y}_{1}x_{1} \bigr) \\ = & \inf_{\hat{y}_{1}\in [-2,2]}\max_{\lambda_{1},\lambda_{2}\geq 0} \inf _{x_{1},x_{2}\in \mathbb{R}} \bigl( x_{1}^{2}+( \lambda_{1}+ \lambda_{2}) \bigl(\vert x_{1} \vert - \hat{y}_{1}x _{1} \bigr) +(-1+ \lambda_{1}- \lambda_{2})x_{2} \bigr) \\ = & \inf_{\hat{y}_{1}\in [-2,2]}\max_{\lambda_{2}\geq 0} \inf _{x_{1}\in \mathbb{R}} \bigl( x_{1}^{2}+(2 \lambda_{2}+1) \bigl(\vert x_{1}\vert - \hat{y}_{1}x_{1} \bigr) \bigr) \\ = & \inf_{\hat{y}_{1}\in [-2,2]}\max_{\lambda_{2}\geq 0} \min \Bigl\{ \inf _{x_{1}\ge 0} \bigl( x_{1}^{2}+(2 \lambda_{2}+1) (1-\hat{y}_{1})x _{1} \bigr) , \inf _{x_{1}\le 0} \bigl( x_{1}^{2}-(2 \lambda_{2}+1) (1+\hat{y}_{1})x _{1} \bigr) \Bigr\} , \end{aligned}

and we can see that

\begin{aligned}& \inf_{x_{1}\ge 0} \bigl( x_{1}^{2}+(2 \lambda_{2}+1) (1- \hat{y}_{1})x _{1} \bigr) = \textstyle\begin{cases} -\frac{1}{4}(2\lambda_{2}+1)^{2}(1-\hat{y}_{1})^{2} & \mbox{if } \hat{y}_{1}\in [1,2], \\ 0 & \mbox{if }\hat{y}_{1}\in [-2,1), \end{cases}\displaystyle \\& \inf_{x_{1}\le 0} \bigl( x_{1}^{2}-(2 \lambda_{2}+1) (1+\hat{y}_{1})x _{1} \bigr) = \textstyle\begin{cases} -\frac{1}{4}(2\lambda_{2}+1)^{2}(1+\hat{y}_{1})^{2} & \mbox{if } \hat{y}_{1}\in [-2,-1], \\ 0 & \mbox{if }\hat{y}_{1}\in (-1,2], \end{cases}\displaystyle \end{aligned}

then we have

\begin{aligned} \operatorname{Val}(\mathrm{P}) = & \inf_{\vert \hat{y}_{1}\vert \in [1,2]}\max_{\lambda_{2}\ge 0} \biggl\{ - \frac{1}{4}(2\lambda_{2}+1)^{2} \bigl(1-\vert \hat{y}_{1}\vert \bigr)^{2} \biggr\} \\ = & \inf_{\vert \hat{y}_{1}\vert \in [1,2]} \biggl\{ -\frac{1}{4} \bigl(1-\vert \hat{y}_{1}\vert \bigr)^{2} \biggr\} \\ = & -\frac{1}{4}. \end{aligned}

This example shows that Theorem 3 contributes to solving DC programming problems.
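The value can also be confirmed by direct computation on the primal problem: the feasible set of Example 1 is $$\{(x_{1},x_{2})\mid \vert x_{2}\vert \le \vert x_{1}\vert \}$$, and minimizing $$x_{1}^{2}-x_{2}$$ over it gives $$-\frac{1}{4}$$ at $$(x_{1},x_{2})=(\pm \frac{1}{2},\frac{1}{2})$$. The following script is a minimal numerical sanity check of this value (an illustration only, not part of the argument).

```python
import numpy as np

# Example 1: minimize f0 - g0 = x1^2 - x2 subject to the DC constraints
#   f1 - g1 = x2 - |x1| <= 0  and  f2 - g2 = -x2 - |x1| <= 0,
# i.e. over the feasible set {(x1, x2) : |x2| <= |x1|}.
grid = np.linspace(-2.0, 2.0, 2001)   # step 0.002; the grid contains x = 1/2
X1, X2 = np.meshgrid(grid, grid)

feasible = np.abs(X2) <= np.abs(X1)   # both DC constraints combined
objective = X1**2 - X2                # f0 - g0

val_p = objective[feasible].min()
print(val_p)                          # -0.25, i.e. Val(P) = -1/4
```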

Next, we show that Theorem 3 and Theorem 2 are mutually independent. First, in the following example, we give a DC inequality system that satisfies the assumption of Theorem 3 but not the assumption of Theorem 2.

### Example 2

Define $$f_{1},f_{2},g_{1},g_{2}:\mathbb{R}\to \mathbb{R}$$ as

\begin{aligned}& f_{1}(x)= \textstyle\begin{cases} \frac{1}{4}x^{2}-x+1 & \mbox{if } x\ge 2, \\ 0 & \mbox{if }-2< x< 2, \\ \frac{1}{4}x^{2}+x+1 & \mbox{otherwise,} \end{cases}\displaystyle \qquad f_{2}(x)=\frac{1}{25}x^{2}- \frac{1}{4}, \\& g_{1}(x)=\frac{1}{5}x^{2} \quad \mbox{and}\quad g_{2}(x)= \biggl[ \frac{x+1}{2} \biggr] x- \biggl[ \frac{x+1}{2} \biggr] ^{2}, \end{aligned}

where $$[\cdot ]$$ is the greatest integer function. We have $$g_{2}(x)=kx-k ^{2}$$ if $$x\in [2k-1,2k+1)$$, where $$k\in \mathbb{Z}$$, so $$g_{2}$$ is also a (piecewise linear) convex function. Also we can see that

\begin{aligned}& f^{*}_{1}(y)= \textstyle\begin{cases} y^{2}+2y & \mbox{if } y\ge 0, \\ y^{2}-2y & \mbox{otherwise}, \end{cases}\displaystyle \qquad f^{*}_{2}(y)=5y^{2}+ \frac{1}{4}, \\& g^{*}_{1}(y)=\frac{5}{4}y^{2}\quad \mbox{and}\quad g^{*}_{2}(y)= \bigl(2[y]+1 \bigr)y-[y]^{2}-[y]. \end{aligned}
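These closed forms can be double-checked numerically: for instance, $$f^{*}_{1}(y)=\sup_{x}\{xy-f_{1}(x)\}$$ can be approximated by a maximum over a fine grid. The sketch below (an illustration only) verifies the displayed formula for $$f^{*}_{1}$$ at several points.

```python
import numpy as np

def f1(x):
    # piecewise convex function f1 from Example 2
    return np.where(x >= 2, 0.25 * x**2 - x + 1,
           np.where(x <= -2, 0.25 * x**2 + x + 1, 0.0))

def f1_conj(y):
    # claimed closed form of the conjugate f1*
    return y**2 + 2 * y if y >= 0 else y**2 - 2 * y

x = np.linspace(-20.0, 20.0, 200001)          # fine, wide grid
for y in (-2.0, -0.5, 0.0, 0.7, 1.5, 3.0):
    numeric = np.max(x * y - f1(x))           # grid approximation of sup_x {xy - f1(x)}
    assert abs(numeric - f1_conj(y)) < 1e-4   # agrees with the closed form
```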

Put $$F=\max \{f_{1}+g_{2},f_{2}+g_{1}\}$$ and $$G=g_{1}+g_{2}$$. For each $$\hat{y}\in D=\bigcup_{x\in S}(\partial g_{1}(x)+\partial g_{2}(x))$$, there exists $$\hat{x}\in S$$, $$y_{1}\in \partial g_{1}(\hat{x})$$, $$y _{2}\in \partial g_{2}(\hat{x})$$ such that $$\hat{y}=y_{1}+y_{2}$$ and $$G^{*}(\hat{y})=g^{*}_{1}(y_{1})+g^{*}_{2}(y_{2})$$ from (3). Since $$\operatorname {epi}F^{*}=\operatorname {co}((\operatorname {epi}f^{*}_{1}+\operatorname {epi}g^{*}_{2})\cup (\operatorname {epi}f^{*}_{2}+\operatorname {epi}g^{*}_{1}))$$,

\begin{aligned}& \operatorname {cone}\operatorname {co}\bigl(\operatorname {epi}F^{*}- \bigl( \hat{y},G^{*}( \hat{y}) \bigr) \bigr)+\{0\}\times [0,+ \infty ) \\& \quad =\operatorname {cone}\operatorname {co}\bigl( \bigl\{ \bigl(n,n^{2} \bigr)\mid n\in \mathbb{Z} \bigr\} - \bigl(y_{1}+y_{2},g^{*} _{1}(y_{1})+g^{*}_{2}(y_{2}) \bigr) \bigr)+\{0\}\times [0,+\infty ). \end{aligned}

The latter set is always closed. In general,

\begin{aligned}& \operatorname {cone}\operatorname {co}\bigl( \bigl\{ \bigl(n,n^{2} \bigr)\mid n\in \mathbb{Z} \bigr\} -(a,b) \bigr) \\& \quad = \textstyle\begin{cases} \operatorname {epi}h &\mbox{if } a\notin \mathbb{Z} \mbox{ and } \beta \le \alpha, \mbox{ or } a\in \mathbb{Z} \mbox{ and } a^{2}-b\ge 0, \\ \mathbb{R}^{2} & \mbox{otherwise}, \end{cases}\displaystyle \end{aligned}

where $$a, b\in \mathbb{R}$$, $$\alpha =\min \{ \frac{n^{2}-b}{n-a}\mid n\in \mathbb{Z}, n>a \}$$, $$\beta = \max \{ \frac{n^{2}-b}{n-a}\mid n\in \mathbb{Z}, n<a \}$$, and $$h(x)= \textstyle\begin{cases} \alpha x & \mbox{if } x\ge 0, \\ \beta x & \mbox{otherwise.} \end{cases}$$ From this, $$\operatorname {cone}\operatorname {co}(\{(n,n^{2})\mid n\in \mathbb{Z}\}-(a,b))$$ is always closed. Therefore the system $$\{F-G\le 0\}$$ satisfies condition (6). Also $$S(\hat{y})\ne \emptyset$$ because $$F(\hat{x})- \langle \hat{x}, \hat{y} \rangle +G^{*}(\hat{y})\le 0$$. Therefore the assumption of Theorem 3 holds for $$\{F-G \le 0\}$$. However,

\begin{aligned}& \operatorname {cone}\operatorname {co}\bigl( \bigl(\operatorname {epi}f^{*}_{1}- \bigl(0,g^{*}_{1}(0) \bigr) \bigr)\cup \bigl(\operatorname {epi}f^{*} _{2}- \bigl(0,g^{*}_{2}(0) \bigr) \bigr) \bigr)+\{0\}\times [0,+\infty ) \\& \quad = \bigl\{ (x,\alpha )\mid 2\vert x\vert < \alpha \bigr\} \cup \bigl\{ (0,0) \bigr\} \end{aligned}

is not closed; that is, the system $$\{f_{1}-g_{1}\le 0, f_{2}-g_{2}\le 0\}$$ does not satisfy (6).

Next, in the following example, we give a DC inequality system that satisfies the assumption of Theorem 2 but not the assumption of Theorem 3.

### Example 3

Define $$f_{1},f_{2},g_{1},g_{2}:\mathbb{R}\to \mathbb{R}$$ as

\begin{aligned}& f_{1}(x)= \biggl[ \frac{x+1}{2} \biggr] x- \biggl[ \frac{x+1}{2} \biggr] ^{2}, \qquad f_{2}(x)= \biggl[ \frac{2x+1}{2} \biggr] x- \frac{1}{2} \biggl[ \frac{2x+1}{2} \biggr] ^{2}, \\& g_{1}(x)=\frac{1}{4}x^{2},\quad \mbox{and}\quad g_{2}(x)= \frac{1}{2}x^{2}. \end{aligned}

We can see that

\begin{aligned}& f^{*}_{1}(y)= \bigl(2[y]+1 \bigr)y-[y]^{2}-[y], \qquad f^{*}_{2}(y)= \biggl([y]+ \frac{1}{2} \biggr)y- \frac{1}{2}[y]^{2}-\frac{1}{2}[y] , \\& g^{*}_{1}(y)=y^{2}\quad \mbox{and}\quad g^{*}_{2}(y)= \frac{1}{2}y^{2}, \end{aligned}

and then

\begin{aligned}& \operatorname {cone}\operatorname {co}\bigl( \bigl(\operatorname {epi}f^{*}_{1}- \bigl(y_{1},g^{*}_{1}(y_{1}) \bigr) \bigr)\cup \bigl(\operatorname {epi}f^{*}_{2}- \bigl(y_{2},g^{*}_{2}(y_{2}) \bigr) \bigr) \bigr) +\{0\}\times [0,+\infty ) \\& \quad =\operatorname {cone}\operatorname {co}\biggl( \bigl( \bigl\{ \bigl(n,n^{2} \bigr)\mid n\in \mathbb{Z} \bigr\} - \bigl(y_{1},g^{*} _{1}(y_{1}) \bigr) \bigr)\\& \qquad {} \cup \biggl( \biggl\{ \biggl( n,\frac{1}{2}n^{2} \biggr) \Bigm| n \in \mathbb{Z} \biggr\} - \bigl(y_{2},g^{*}_{2}(y_{2}) \bigr) \biggr) \biggr) +\{0 \}\times [0,+\infty ), \end{aligned}

for each $$(y_{1},y_{2})\in \bigcup_{x\in S}(\partial g_{1}(x)\times \partial g_{2}(x))$$. By an argument similar to that of Example 2, the latter set is always closed. Also, for each $$(y_{1},y_{2})\in \bigcup_{x\in S}(\partial g_{1}(x)\times \partial g_{2}(x))$$, there exists $$z\in \mathbb{R}$$ such that $$y_{1}=\frac{1}{2}z$$ and $$y_{2}=z$$, and then

\begin{aligned} S(y_{1},y_{2}) =& \bigl\{ x\in \mathbb{R}\mid f_{i}(x)-xy_{i}+g^{*}_{i}(y_{i}) \le 0, i=1,2 \bigr\} \\ =& \biggl\{ x\in \mathbb{R} \Bigm| \biggl[ \frac{x+1}{2} \biggr] x- \biggl[ \frac{x+1}{2} \biggr] ^{2}- \frac{1}{2}xz+ \frac{1}{4}z^{2}\le 0, \\ & \biggl[ \frac{2x+1}{2} \biggr] x-\frac{1}{2} \biggl[ \frac{2x+1}{2} \biggr] ^{2}-xz+\frac{1}{2}z^{2}\le 0 \biggr\} \\ \supseteq& \biggl\{ x\in \mathbb{R} \Bigm| \frac{1}{4}x^{2}- \frac{1}{2}xz+\frac{1}{4}z^{2}\le 0, \frac{1}{2}x^{2}-xz+ \frac{1}{2}z^{2}\le 0 \biggr\} \\ \ni& z, \end{aligned}

Hence $$S(y_{1},y_{2})$$ is non-empty, and therefore the assumption of Theorem 2 holds for $$\{f_{1}-g_{1}\le 0, f_{2}-g_{2}\le 0\}$$. However,
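The inclusion $$z\in S(y_{1},y_{2})$$ can also be checked numerically. The following Python sketch is our own (the sample values of $$z$$ and the tolerance are assumptions); it evaluates both constraint functions at $$x=z$$ with $$y_{1}=z/2$$, $$y_{2}=z$$, reading the bracket $$[\cdot ]$$ as the floor function:

```python
from math import floor

def f1(x):
    n = floor((x + 1) / 2)          # bracket [.] read as the floor function
    return n * x - n * n

def f2(x):
    n = floor((2 * x + 1) / 2)
    return n * x - 0.5 * n * n

def in_S(x, z, tol=1e-12):
    """Check x ∈ S(y1, y2) with y1 = z/2, y2 = z, using the conjugates
    g1*(y1) = y1^2 and g2*(y2) = y2^2 / 2 computed in the text."""
    y1, y2 = z / 2, z
    c1 = f1(x) - x * y1 + y1 * y1           # f1(x) - x y1 + g1*(y1)
    c2 = f2(x) - x * y2 + 0.5 * y2 * y2     # f2(x) - x y2 + g2*(y2)
    return c1 <= tol and c2 <= tol

# As shown in the text, x = z itself satisfies both inequalities:
checks = [in_S(z, z) for z in (-3.7, -1.0, 0.0, 0.25, 2.0, 5.9)]
print(all(checks))  # -> True
```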

\begin{aligned}& \operatorname {cone}\operatorname {co}\bigl( \bigl(\operatorname {epi}f^{*}_{1}+ \operatorname {epi}g^{*}_{2} \bigr)\cup \bigl(\operatorname {epi}f^{*} _{2}+\operatorname {epi}g^{*}_{1} \bigr) - \bigl(0+0,g^{*}_{1}(0)+g^{*}_{2}(0) \bigr) \bigr) +\{0\}\times [0,+\infty ) \\& \quad =\mathbb{R}\times (0,+\infty )\cup \bigl\{ (0,0) \bigr\} \end{aligned}

is not a closed set, that is, (9) does not hold.
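As a sanity check on the conjugate formulas used in this example, one can compare the closed form of $$f^{*}_{1}$$ with a brute-force evaluation of $$\sup_{x}(xy-f_{1}(x))$$. The Python sketch below is our own; the grid window and step size are assumptions, justified because $$f_{1}$$ grows like $$x^{2}/4$$, and the bracket $$[\cdot ]$$ is read as the floor function:

```python
from math import floor

def f1(x):
    # f1(x) = [ (x+1)/2 ] x - [ (x+1)/2 ]^2, with [.] read as the floor
    n = floor((x + 1) / 2)
    return n * x - n * n

def f1_conj_formula(y):
    # closed form from the text: f1*(y) = (2[y]+1) y - [y]^2 - [y]
    n = floor(y)
    return (2 * n + 1) * y - n * n - n

def f1_conj_numeric(y, lo=-60.0, hi=60.0, steps=12000):
    # brute-force sup_x ( x y - f1(x) ) over a uniform grid; since f1 grows
    # like x^2/4, the supremum is attained well inside this window
    step = (hi - lo) / steps
    return max((lo + i * step) * y - f1(lo + i * step)
               for i in range(steps + 1))

for y in (0.0, 0.5, -0.3, 1.7):
    assert abs(f1_conj_numeric(y) - f1_conj_formula(y)) < 1e-9
print("f1* formula agrees with brute force on sampled y")
```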

## Conclusions

In this paper, we studied Lagrange-type duality for DC programming problems with DC inequality constraints. It is well known that the maximum of finitely many DC functions is again a DC function. Based on this idea, we presented Theorem 3, a Lagrange-type duality theorem for the single maximum DC inequality constraint equivalent to the original DC inequality constraints. Theorem 3 is independent of Theorem 2, the previously known Lagrange-type duality theorem for DC programming problems: Theorem 3 does not imply Theorem 2, and Theorem 2 does not imply Theorem 3. We also proved Theorem 4, which unifies Theorem 2 and Theorem 3. Consequently, the class of DC programming problems to which Lagrange-type duality theorems can be applied is broader than the class in previous research.

## References

1. Goberna, MA, Jeyakumar, V, López, MA: Necessary and sufficient constraint qualifications for solvability of systems of infinite convex inequalities. Nonlinear Anal. 68, 1184-1194 (2008)

2. Martínez-Legaz, JE, Volle, M: Duality in DC programming: the case of several DC constraints. J. Math. Anal. Appl. 237, 657-671 (1999)

3. Harada, R, Kuroiwa, D: Lagrange-type duality in DC programming. J. Math. Anal. Appl. 418, 415-424 (2014)

4. Rockafellar, RT: Convex Analysis. Princeton Paperbacks, Princeton (1996)

5. Hiriart-Urruty, J-B, Lemaréchal, C: Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods. Grundlehren der Mathematischen Wissenschaften. Springer, Berlin (1993)

6. Sion, M: On general minimax theorems. Pac. J. Math. 8, 171-176 (1958)

## Acknowledgements

The authors are grateful to the anonymous referees for their valuable comments and suggestions, which improved the quality of the paper. This work has been partially supported by JSPS KAKENHI Grant Number 16K05274.

## Author information


### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

RH conceived of the study and drafted, completed, and approved the final manuscript. DK conceived of the study and drafted, read, and approved the final manuscript.

## Appendix


In this section, we give proofs of Lemma 1 and Lemma 2.

### Proof of Lemma 1

Clearly, (11) holds when $$m=1, 2$$. Assume that (11) holds for some $$m\in \mathbb{N}$$. Let $$C_{i}\subseteq \mathbb{R}^{n}$$ be convex sets for all $$i=1,\ldots ,m+1$$. Then

\begin{aligned} \operatorname {co}\bigcup^{m+1}_{i=1}C_{i} =&\operatorname {co}\Biggl(\bigcup^{m}_{i=1}C_{i} \cup C_{m+1} \Biggr) \\ =&\operatorname {co}\Biggl(\operatorname {co}\Biggl(\bigcup^{m}_{i=1}C_{i} \Biggr)\cup C_{m+1} \Biggr) \\ =&\bigcup_{\lambda \in [0,1]} \Biggl(\lambda \operatorname {co}\bigcup ^{m}_{i=1}C _{i}+(1-\lambda )C_{m+1} \Biggr)\quad (\because \mbox{from the case when m=2}) \\ =&\bigcup_{\lambda \in [0,1]} \Biggl(\lambda \bigcup _{ \substack{\lambda_{i}\ge 0\\\sum ^{m}_{i=1}\lambda_{i}=1}}\sum^{m}_{i=1} \lambda_{i}C_{i}+(1-\lambda )C_{m+1} \Biggr)\quad ( \because \mbox{from the assumption}) \\ =&\bigcup_{\lambda \in [0,1]} \bigcup _{ \substack{\lambda_{i}\ge 0\\\sum ^{m}_{i=1}\lambda_{i}=1}} \Biggl(\sum^{m}_{i=1} \lambda \lambda_{i}C_{i}+(1-\lambda )C_{m+1} \Biggr) \\ =& \bigcup_{\substack{\lambda_{i}\ge 0\\\sum ^{m+1}_{i=1}\lambda _{i}=1}}\sum ^{m+1}_{i=1} \lambda_{i}C_{i}. \end{aligned}

Therefore (11) holds for $$m+1$$. By mathematical induction, the proof is complete. □
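Identity (11) can be sanity-checked numerically in a simple special case. The sketch below is our own illustration (not part of the proof): it takes $$m=2$$ with the intervals $$C_{1}=[0,1]$$ and $$C_{2}=[3,4]$$ in $$\mathbb{R}$$, for which $$\operatorname {co}(C_{1}\cup C_{2})=[0,4]$$, and verifies by sampling that $$\bigcup_{\lambda \in [0,1]}(\lambda C_{1}+(1-\lambda )C_{2})$$ covers exactly this interval.

```python
def rhs_contains(t, c1=(0.0, 1.0), c2=(3.0, 4.0), grid=10001):
    """Is t in the union over lambda in [0,1] of lambda*C1 + (1-lambda)*C2,
    for the intervals C1, C2 (sampled on a lambda grid)?"""
    for i in range(grid):
        lam = i / (grid - 1)
        lo = lam * c1[0] + (1 - lam) * c2[0]
        hi = lam * c1[1] + (1 - lam) * c2[1]
        if lo - 1e-9 <= t <= hi + 1e-9:
            return True
    return False

# co(C1 ∪ C2) = [0, 4]: every point of it is covered ...
inside = all(rhs_contains(0.004 * i) for i in range(1001))
# ... and points outside [0, 4] are not.
outside = rhs_contains(-0.5) or rhs_contains(4.5)
print(inside, outside)  # -> True False
```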

### Proof of Lemma 2

We may assume that all $$A_{i}$$ and $$B_{i}$$ are non-empty. We prove this lemma by mathematical induction. It is clear that (12) holds when $$m=1$$. In the case of $$m=2$$, (12) holds from Lemma 1 by putting $$C_{1}=A _{1}+B_{2}$$ and $$C_{2}=A_{2}+B_{1}$$. Assume that (12) holds for some $$m\in \mathbb{N}$$. Let $$A_{i}, B_{i}\subseteq \mathbb{R} ^{n}$$ be convex sets for all $$i=1,\ldots ,m+1$$. Then

\begin{aligned}& \operatorname {co}\bigcup_{\substack{\lambda_{i}\ge 0\\\sum ^{m+1}_{i=1}\lambda _{i}=1}} \sum ^{m+1}_{i=1} \bigl(\lambda_{i} A_{i}+(1-\lambda_{i})B_{i} \bigr) \\& \quad =\operatorname {co}\bigcup_{0\le \lambda_{1}\le 1} \Biggl( \bigcup _{\substack{\lambda_{2},\ldots ,\lambda_{m+1}\ge 0\\\sum ^{m+1}_{i=1}\lambda_{i}=1}} \Biggl(\sum^{m+1}_{i=1} \bigl(\lambda_{i} A _{i}+(1-\lambda_{i})B_{i} \bigr) \Biggr) \Biggr) \\& \quad =\operatorname {co}\Biggl( \bigcup_{0\le \lambda_{1}< 1} \Biggl( \bigcup _{\substack{\lambda_{2},\ldots ,\lambda_{m+1}\ge 0\\\sum ^{m+1}_{i=1}\lambda_{i}=1}} \Biggl(\sum^{m+1}_{i=1} \bigl(\lambda_{i} A _{i}+(1-\lambda_{i})B_{i} \bigr) \Biggr) \Biggr) \cup \Biggl(A_{1}+\sum ^{m+1}_{i=2}B_{i} \Biggr) \Biggr) \\& \quad =\operatorname {co}\Biggl(\bigcup_{0\le \lambda_{1}< 1} \Biggl( \lambda_{1}A_{1}+(1- \lambda_{1})B_{1} \\& \qquad {} + \bigcup_{\substack{\lambda_{2},\ldots ,\lambda_{m+1}\ge 0\\\sum ^{m+1}_{i=1}\lambda_{i}=1}} \Biggl(\sum ^{m+1}_{i=2} \bigl(\lambda_{i} A _{i}+(1-\lambda_{i})B_{i} \bigr) \Biggr) \Biggr) \cup \Biggl(A_{1}+\sum^{m+1} _{i=2}B_{i} \Biggr) \Biggr) \\& \quad =\operatorname {co}\Biggl(\bigcup_{0\le \lambda_{1}< 1} \Biggl( \lambda_{1}A_{1}+(1-\lambda_{1})B_{1}+(1- \lambda_{1}) \bigcup_{\substack{\lambda_{2},\ldots ,\lambda_{m+1}\ge 0\\\sum ^{m+1}_{i=2}\frac{\lambda_{i}}{1-\lambda_{1}}=1}} \Biggl(\sum ^{m+1}_{i=2} \biggl( \frac{\lambda_{i}}{1-\lambda_{1}} A_{i}+\frac{1-\lambda_{i}}{1-\lambda _{1}}B_{i} \biggr) \Biggr) \Biggr) \\& \qquad {}\cup \Biggl(A_{1}+\sum^{m+1}_{i=2}B_{i} \Biggr) \Biggr). \end{aligned}
(14)

For all $$i=2,\ldots ,m+1$$, since $$B_{i}$$ are convex sets, $$1-\lambda _{i}=(1-\lambda_{1}-\lambda_{i})+\lambda_{1}$$, and $$1-\lambda_{1}- \lambda_{i}\ge 0$$, we have

$$\frac{1-\lambda_{i}}{1-\lambda_{1}}B_{i} =\frac{1-\lambda_{1}-\lambda _{i}}{1-\lambda_{1}}B_{i}+ \frac{\lambda_{1}}{1-\lambda_{1}}B_{i} = \biggl( 1-\frac{\lambda_{i}}{1-\lambda_{1}} \biggr) B_{i}+\frac{\lambda _{1}}{1-\lambda_{1}}B_{i}$$

and then

\begin{aligned}& \bigcup_{\substack{\lambda_{2},\ldots ,\lambda_{m+1}\ge 0\\\sum ^{m+1}_{i=2}\frac{\lambda_{i}}{1-\lambda_{1}}=1}} \Biggl(\sum ^{m+1}_{i=2} \biggl(\frac{\lambda_{i}}{1-\lambda_{1}} A_{i}+ \frac{1-\lambda _{i}}{1-\lambda_{1}}B_{i} \biggr) \Biggr) \\& \quad = \bigcup_{\substack{\lambda_{2},\ldots ,\lambda_{m+1}\ge 0\\\sum ^{m+1}_{i=2}\frac{\lambda_{i}}{1-\lambda_{1}}=1}} \sum ^{m+1}_{i=2} \biggl(\frac{\lambda_{i}}{1-\lambda_{1}} A_{i}+ \biggl(1-\frac{\lambda _{i}}{1-\lambda_{1}} \biggr)B_{i}+\frac{\lambda_{1}}{1-\lambda_{1}}B_{i} \biggr) \\& \quad =\frac{\lambda_{1}}{1-\lambda_{1}}\sum^{m+1}_{i=2}B_{i} + \bigcup_{\substack{\lambda_{2},\ldots ,\lambda_{m+1}\ge 0\\\sum ^{m+1}_{i=2}\frac{\lambda_{i}}{1-\lambda_{1}}=1}} \Biggl(\sum ^{m+1}_{i=2} \biggl(\frac{\lambda_{i}}{1-\lambda_{1}} A_{i}+ \biggl(1-\frac{\lambda _{i}}{1-\lambda_{1}} \biggr)B_{i} \biggr) \Biggr) \\& \quad =\frac{\lambda_{1}}{1-\lambda_{1}}\sum^{m+1}_{i=2}B_{i} + \bigcup_{\substack{\lambda^{\prime }_{2},\ldots ,\lambda^{\prime } _{m+1}\ge 0\\\sum ^{m+1}_{i=2}\lambda^{\prime }_{i}=1}} \Biggl( \sum ^{m+1}_{i=2} \bigl(\lambda^{\prime }_{i} A_{i}+ \bigl(1-\lambda^{\prime }_{i} \bigr)B _{i} \bigr) \Biggr). \end{aligned}

Hence,

\begin{aligned} (14) =&\operatorname {co}\Biggl(\bigcup _{0\le \lambda_{1}< 1} \Biggl( \lambda_{1}A_{1}+(1- \lambda_{1})B_{1}+ \lambda_{1}\sum ^{m+1}_{i=2}B_{i} \\ &{}+(1-\lambda_{1}) \bigcup_{\substack{\lambda^{\prime }_{2},\ldots ,\lambda^{\prime } _{m+1}\ge 0\\\sum ^{m+1}_{i=2}\lambda^{\prime }_{i}=1}} \Biggl( \sum^{m+1}_{i=2} \bigl( \lambda^{\prime }_{i} A_{i}+ \bigl(1- \lambda^{\prime }_{i} \bigr)B _{i} \bigr) \Biggr) \Biggr)\cup \Biggl(A_{1}+\sum^{m+1}_{i=2}B_{i} \Biggr) \Biggr) \\ =&\operatorname {co}\Biggl(\bigcup_{0\le \lambda_{1}< 1} \Biggl( \lambda_{1}A_{1}+(1- \lambda_{1})B_{1}+ \lambda_{1}\sum^{m+1}_{i=2}B_{i} \\ &{}+(1-\lambda_{1}) \operatorname {co}\bigcup_{\substack{\lambda^{\prime }_{2},\ldots ,\lambda^{\prime } _{m+1}\ge 0\\\sum ^{m+1}_{i=2}\lambda^{\prime }_{i}=1}} \Biggl( \sum^{m+1}_{i=2} \bigl( \lambda^{\prime }_{i} A_{i}+ \bigl(1- \lambda^{\prime }_{i} \bigr)B _{i} \bigr) \Biggr) \Biggr) \cup \Biggl(A_{1}+\sum^{m+1}_{i=2}B_{i} \Biggr) \Biggr). \end{aligned}
(15)

From the assumption,

\begin{aligned} (15) =&\operatorname {co}\Biggl(\bigcup _{0\le \lambda_{1}< 1} \Biggl( \lambda_{1}A_{1}+(1- \lambda_{1})B_{1}+\lambda_{1}\sum ^{m+1}_{i=2}B _{i} \\ &{} +(1-\lambda_{1})\operatorname {co}\bigcup^{m+1}_{i=2} \biggl(A_{i}+ \sum_{\substack{j\ne i\\2\le j\le m+1}}B_{j} \biggr) \Biggr)\cup \Biggl(A _{1}+\sum^{m+1}_{i=2}B_{i} \Biggr) \Biggr) \\ =&\operatorname {co}\Biggl(\bigcup_{0\le \lambda_{1}< 1} \Biggl( \lambda_{1}A_{1}+(1- \lambda_{1})B_{1}+ \lambda_{1}\sum^{m+1}_{i=2}B_{i} \\ &{} +(1-\lambda_{1})\bigcup^{m+1}_{i=2} \biggl(A_{i}+ \sum_{\substack{j\ne i\\2\le j\le m+1}}B_{j} \biggr) \Biggr)\cup \Biggl(A _{1}+\sum^{m+1}_{i=2}B_{i} \Biggr) \Biggr) \\ =&\operatorname {co}\Biggl(\bigcup_{0\le \lambda_{1}\le 1} \Biggl( \lambda_{1} \Biggl(A_{1}+\sum^{m+1}_{i=2}B_{i} \Biggr)+(1-\lambda_{1}) \Biggl(B_{1} +\bigcup ^{m+1}_{i=2} \biggl(A_{i}+ \sum _{\substack{j\ne i\\2\le j\le m+1}}B_{j} \biggr) \Biggr) \Biggr) \Biggr) \\ =&\operatorname {co}\Biggl(\bigcup_{0\le \lambda_{1}\le 1} \Biggl( \lambda_{1} \Biggl(A_{1}+\sum^{m+1}_{i=2}B_{i} \Biggr)+(1-\lambda_{1}) \Biggl(\bigcup^{m+1}_{i=2} \biggl(A_{i}+\sum_{j\ne i}B_{j} \biggr) \Biggr) \Biggr) \Biggr). \end{aligned}
(16)

By using Lemma 1,

\begin{aligned} (16) =&\operatorname {co}\Biggl( \Biggl(A_{1}+ \sum ^{m+1}_{i=2}B_{i} \Biggr)\cup \Biggl(\bigcup ^{m+1}_{i=2} \biggl(A_{i}+ \sum _{j\ne i}B_{j} \biggr) \Biggr) \Biggr) \\ =&\operatorname {co}\bigcup^{m+1}_{i=1} \biggl(A_{i}+\sum_{j\ne i}B_{j} \biggr). \end{aligned}

Consequently, (12) holds for $$m+1$$. □
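Identity (12) can likewise be illustrated numerically for $$m=2$$ with intervals in $$\mathbb{R}$$, where interval arithmetic makes both sides easy to compute. In this one-dimensional sketch (our own; the interval data are sample choices) it suffices to compare the convex hulls of the two sides, since (12) equates the convex hulls of the two unions.

```python
def interval_sum(*ivs):
    """Minkowski sum of closed intervals, each given as (lo, hi)."""
    return (sum(a for a, _ in ivs), sum(b for _, b in ivs))

def scale(lam, iv):
    return (lam * iv[0], lam * iv[1])  # valid for lam >= 0

A1, B1, A2, B2 = (0.0, 1.0), (0.0, 2.0), (5.0, 6.0), (1.0, 3.0)

# Left-hand side of (12): hull of the union over lambda1 + lambda2 = 1
# of lambda1*A1 + (1-lambda1)*B1 + lambda2*A2 + (1-lambda2)*B2.
lhs_pts = []
for i in range(1001):
    lam = i / 1000
    s = interval_sum(scale(lam, A1), scale(1 - lam, B1),
                     scale(1 - lam, A2), scale(lam, B2))
    lhs_pts.extend(s)
lhs_hull = (min(lhs_pts), max(lhs_pts))

# Right-hand side: hull of (A1 + B2) ∪ (A2 + B1).
r1, r2 = interval_sum(A1, B2), interval_sum(A2, B1)
rhs_hull = (min(r1[0], r2[0]), max(r1[1], r2[1]))

print(lhs_hull, rhs_hull)  # -> (1.0, 8.0) (1.0, 8.0)
```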
