To solve problem (P), we first introduce the concept of an approximate optimal solution.
Definition 3.1
For given \(\varepsilon ,\eta \ge 0\), let
$$ D_{\varepsilon }= \bigl\{ x\in D^{0} \mid G_{k}(x)< \varepsilon , k=0,1,\ldots ,m+N \bigr\} , $$
a point \(x\in D_{\varepsilon }\) is then said to be an ε-feasible solution to problem (P). If an ε-feasible solution \(\bar{x}=(\bar{x}_{0},\bar{x}_{1},\ldots ,\bar{x}_{n+N})\) to problem (P) satisfies
$$ \bar{x}_{0}\leq \min \{x_{0}\mid x\in D_{\varepsilon }\}+ \eta , $$
x̄ is called an \((\varepsilon ,\eta)\)-optimal solution to problem (P).
Remark 3.1
All feasible solutions to problem (P) are ε-feasible. When \(\eta =0\), an \((\varepsilon ,\eta)\)-optimal solution is optimal over all ε-feasible solutions of problem (P). When \(\eta >0\), an \((\varepsilon ,\eta)\)-optimal solution is η-optimal over all ε-feasible solutions to problem (P).
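To make Definition 3.1 concrete, ε-feasibility of a point can be checked directly from the decompositions \(G_{k}=G_{k}^{+}-G_{k}^{-}\). The following Python sketch is purely illustrative; the constraint functions below are toy examples (both parts increasing), not taken from the paper.

```python
# A minimal sketch of the epsilon-feasibility test in Definition 3.1.
def is_eps_feasible(x, G_plus_list, G_minus_list, eps):
    """True if G_k(x) = G_k^+(x) - G_k^-(x) < eps for every k."""
    return all(gp(x) - gm(x) < eps for gp, gm in zip(G_plus_list, G_minus_list))

# Toy constraint G_0(x) = x0 - x1, decomposed into increasing parts
# (these functions are illustrative, not from the paper):
G_plus = [lambda x: x[0] + x[1]]   # G_0^+
G_minus = [lambda x: 2 * x[1]]     # G_0^-

print(is_eps_feasible((1.0, 1.5), G_plus, G_minus, eps=1e-6))  # True:  1.0 - 1.5 < eps
print(is_eps_feasible((2.0, 1.0), G_plus, G_minus, eps=1e-6))  # False: 2.0 - 1.0 >= eps
```

A point passing this test for every k lies in \(D_{\varepsilon }\); an \((\varepsilon ,\eta)\)-optimal solution additionally comes within η of the smallest \(x_{0}\) over \(D_{\varepsilon }\).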
To seek an \((\varepsilon ,\eta)\)-optimal solution of problem (P), the division and cut algorithm to be developed includes three essential operations: the division operation, the deleting operation, and the reduction operation.
First, the division operation consists in a sequential box division of the original box \(D^{0}\) following an exhaustive subdivision principle, such that any infinite nested sequence of division sets generated by the algorithm shrinks to a singleton. This paper adopts an adaptive division operation, which extends the standard bisection commonly used in the exhaustive subdivision principle. Second, by using overestimation of the constraints, the deleting operation eliminates each subbox D generated by the division operation that contains no feasible solution. In addition, the reduction operation reduces the size of the current partition set (referred to as a node), aiming at tightening each subbox which contains the feasible portion currently still of interest.
For convenience, for any box \(D=[p,q]=\prod_{i=0}^{n+N}[p_{i},q_{i}]\subseteq D^{0}\), we will use the following functions throughout this paper:
$$\begin{aligned}& f_{k}^{i}(\alpha)=G_{k}^{+}(p)-G_{k}^{-} \bigl(q-\alpha (q_{i}-p_{i})e^{i}\bigr),\quad k=0,1,\dots ,m+N, \\& g_{k}^{i}(\beta)=G_{k}^{+} \bigl(p^{\prime }+\beta \bigl(q_{i}-p^{\prime }_{i} \bigr)e^{i}\bigr)-G_{k}^{-}(q), \quad k=0,1,\dots ,m+N, \end{aligned}$$
where \(\alpha ,\beta \in (0,1),p_{i}\le p^{\prime }_{i}\le q_{i} \), and \(e^{i}\) denotes the ith unit vector, i.e., a vector with 1 at the ith position and 0 everywhere else, \(i=0,1,\dots , n+N\).
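As an illustration, the auxiliary functions \(f_{k}^{i}\) and \(g_{k}^{i}\) can be evaluated as in the following Python sketch. The particular increasing functions \(G_{k}^{+}\), \(G_{k}^{-}\) below are toy choices for illustration only; here \(p'\) is taken equal to p, which the definition permits.

```python
def f_ki(G_plus, G_minus, p, q, i, alpha):
    """f_k^i(alpha) = G_k^+(p) - G_k^-(q - alpha*(q_i - p_i)*e^i)."""
    v = list(q)
    v[i] = q[i] - alpha * (q[i] - p[i])
    return G_plus(p) - G_minus(tuple(v))

def g_ki(G_plus, G_minus, p_prime, q, i, beta):
    """g_k^i(beta) = G_k^+(p' + beta*(q_i - p'_i)*e^i) - G_k^-(q)."""
    v = list(p_prime)
    v[i] = p_prime[i] + beta * (q[i] - p_prime[i])
    return G_plus(tuple(v)) - G_minus(q)

# Toy increasing decomposition (illustrative only):
Gp = lambda x: x[0] + 2 * x[1]
Gm = lambda x: x[0] + x[1]

p, q = (0.0, 0.0), (2.0, 2.0)
# Because G_k^+ and G_k^- are increasing, f_ki increases in alpha
# and g_ki increases in beta:
print(f_ki(Gp, Gm, p, q, 0, 0.0) < f_ki(Gp, Gm, p, q, 0, 0.9))  # True
print(g_ki(Gp, Gm, p, q, 1, 0.1) < g_ki(Gp, Gm, p, q, 1, 0.9))  # True
```

The monotonicity printed above is what later makes the univariate equations in the reduction rules easy to solve.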
At a given stage of the proposed algorithm for problem (P), let V represent the best objective function value for problem (P) found so far. Next, we describe these operations in detail.
3.1 Deleting operation
In this subsection, we give a suitable deleting operation, which makes it possible to remove a subbox D of \(D^{0}\) containing no feasible point. Toward this end, we consider a subproblem of problem (P) over a given box \(D=[p,q]=\prod_{i=0}^{n+N}[p_{i},q_{i}]\subseteq D^{0}\) as follows:
$$ \mathrm{P}(D): \textstyle\begin{cases} \min &x_{0}, \\ \mbox{s.t.} &\bar{F}(x)\triangleq \max \{G_{k}^{+}(x)-G_{k}^{-}(x)\mid k=0,1,\ldots ,m+N\}\leq 0, \\ & x=(x_{0},x_{1},\ldots ,x_{n+N})\in D. \end{cases} $$
Given \(\eta >0\), for solving \(\mathrm{P}(D)\), we need to seek a feasible solution \(\bar{x}=(\bar{x}_{0},\bar{x}_{1},\ldots , \bar{x}_{n+N}) \in D\) of \(\mathrm{P}(D)\) such that \(\bar{x}_{0}\leq V-\eta \), or to conclude that no such x̄ exists, where V is the best objective value of problem \(\mathrm{P}(D)\) known so far. Obviously, if \(p_{0}>V-\eta \), there exists no \(\bar{x}\in D\) with \(\bar{x}_{0}\leq V-\eta \). If \(p_{0}\leq V-\eta \) and \(G_{k}(p)\leq 0\) for each \(k=0,\ldots ,m+N\), then we update \(\bar{x}=p\). Therefore, without loss of generality, in the following discussion we shall suppose that
$$ \bar{F}(p)>\varepsilon , \qquad p_{0}\leq V-\eta ,\qquad \bigl\{ x\in D^{0} \mid \bar{F}(x)< \varepsilon \bigr\} \neq \emptyset . $$
(3.1)
Clearly, if \(\bar{F}(x)>\varepsilon \) for all \(x\in D\), there exists no ε-feasible solution in D, and so D is deleted from further discussion. However, since verifying that \(\bar{F}(x)>\varepsilon \) holds for all \(x\in D\) is not easy, we introduce an auxiliary problem of \(\mathrm{P}(D)\) as follows:
$$ \mathrm{Q}(D): \textstyle\begin{cases} \min & \bar{F}(x) \\ \mbox{s.t.}& x_{0}\leq V-\eta , \\ & x=(x_{0},x_{1},\ldots ,x_{n+N})\in D\subseteq D^{0}. \end{cases} $$
Observe that the objective and constraint functions are interchanged in \(\mathrm{P}(D)\) and \(\mathrm{Q}(D)\). Let \(V(\mathrm{P}(D))\) and \(V(\mathrm{Q}(D))\) be the optimal values of problems \(\mathrm{P}(D)\) and \(\mathrm{Q}(D)\), respectively. Then we give the following results about problems \(\mathrm{P}(D)\) and \(\mathrm{Q}(D)\).
Theorem 3.1
Let \(\varepsilon >0\), \(\eta >0\) be given, and let D be a box with \(D\subseteq D^{0}\). We have the following results:
(i) A feasible solution x̂ to \(\mathrm{Q}(D)\) satisfying \(\bar{F}(\hat{x})< \varepsilon \) is an ε-feasible solution of \(\mathrm{P}(D)\) with \(\hat{x}_{0}\leq V-\eta \).
(ii) If \(V(\mathrm{Q}(D^{0}))>0\), consider the following two cases: (a) problem \(\mathrm{P}(D^{0})\) has no feasible solution if \(V=x_{0}^{u}+\eta \), and (b) an ε-feasible solution \(\tilde{x}=(\tilde{x}_{0},\tilde{x}_{1},\ldots ,\tilde{x}_{n+N})\) of \(\mathrm{Q}(D^{0})\) is an \((\varepsilon , \eta)\)-optimal solution of \(\mathrm{P}(D^{0})\) if \(V=\tilde{x}_{0}\).
Proof
(i) This result follows directly from the definitions, so the proof is omitted.
(ii) By utilizing the assumption \(V(\mathrm{Q}(D^{0}))>0\), i.e.,
$$ \min \bigl\{ \bar{F}(x)\mid x_{0}\leq V-\eta ,x\in D^{0} \bigr\} >0, $$
(3.2)
we have the following conclusions:

(a) If \(V=x^{u}_{0}+\eta \), then \(V-\eta =x^{u}_{0}\), so the constraint \(x_{0}\leq V-\eta \) holds for every \(x\in D^{0}\); hence, by (3.2), \(\{x\mid \bar{F}(x) \le 0, x\in D^{0}\}=\emptyset \), which implies that problem \(\mathrm{P}(D^{0})\) has no feasible solution.

(b) If \(V=\tilde{x}_{0}\), from (3.2) it is easy to see that
$$ x_{0}>V-\eta =\tilde{x}_{0}-\eta , $$
for any \(x=(x_{0},x_{1},\ldots ,x_{n+N})\in \{x\mid \bar{F}(x)\leq 0 < \varepsilon , x\in D^{0}\}\), which means that
$$ \min \bigl\{ x_{0}\mid G_{k}(x)< \varepsilon , k=0,1, \ldots ,m+N, x \in D^{0} \bigr\} \geq \tilde{x}_{0}-\eta . $$
Consequently, x̃ is an \((\varepsilon ,\eta)\)-optimal solution of \(\mathrm{P}(D^{0})\), which completes the proof. □
Theorem 3.1 illustrates that, by solving problem \(\mathrm{Q}(D)\), one can determine whether there exists a feasible solution \(\hat{x}=(\hat{x}_{0}, \hat{x}_{1},\ldots ,\hat{x}_{n+N})\) of \(\mathrm{P}(D)\) that improves the current best objective value V, i.e., one with \(\hat{x}_{0}\leq V-\eta \). Further, if \(V(\mathrm{Q}(D^{0}))\ge \varepsilon >0\), then either an \((\varepsilon ,\eta)\)-optimal solution of \(\mathrm{P}(D^{0})\) can be obtained, or it can be confirmed that problem \(\mathrm{P}(D^{0})\) has no feasible solution.
Additionally, if \(V(\mathrm{Q}(D))>\varepsilon \), the box D cannot contain an ε-feasible solution improving V, so D is excluded from further consideration by the search. Unfortunately, solving problem \(\mathrm{Q}(D)\) may be as difficult as solving the original problem \(\mathrm{P}(D)\). Hence, a lower bound \(\operatorname{LB}(D)\) of \(V(\mathrm{Q}(D))\) is required for eliminating the parts of \(D^{0}\) which do not contain a solution to problem (P). Clearly, if \(\operatorname{LB}(D)\geq \varepsilon \), D is excluded from further consideration by the search. Since \(G_{k}^{+}(x)\) and \(G_{k}^{-}(x)\) are all increasing, an apparent lower bound is
$$ \underline{\operatorname{LB}}(D)=\max_{k=0,1,\ldots ,m+N} \bigl\{ G_{k}^{+}(p)-G_{k}^{-}(q)\bigr\} . $$
Although quite straightforward, this bound is sufficient to guarantee convergence of the algorithm, as we will see shortly. To strengthen the computational efficiency in solving problem \(\mathrm{P}(D)\), tighter lower bounds can be obtained by utilizing the following Theorem 3.2. More importantly, such tighter bounds can be gained by simple arithmetic computations alone, in contrast with the bounds used in the usual branch and bound methods for solving convex or linear programs [5, 6, 11, 12, 14].
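The apparent bound \(\underline{\operatorname{LB}}(D)\) is one line of arithmetic. The sketch below is illustrative; the constraint decompositions are toy increasing functions, not from the paper.

```python
def naive_lb(G_plus_list, G_minus_list, p, q):
    """underline{LB}(D) = max_k { G_k^+(p) - G_k^-(q) }.

    Valid since G_k^+ and G_k^- are increasing: every x in [p, q] has
    G_k^+(x) - G_k^-(x) >= G_k^+(p) - G_k^-(q).
    """
    return max(gp(p) - gm(q) for gp, gm in zip(G_plus_list, G_minus_list))

# Illustrative constraints (not from the paper):
Gp_list = [lambda x: x[0] + x[1], lambda x: 2 * x[0]]
Gm_list = [lambda x: x[1],        lambda x: x[0] + x[1]]

print(naive_lb(Gp_list, Gm_list, (1.0, 1.0), (2.0, 3.0)))  # -1.0
```

If this value reaches ε, the box [p, q] can be deleted outright.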
Theorem 3.2
For any \(D=[p,q]=\prod_{i=0}^{n+N}[p_{i},q_{i}]\subseteq D^{0}\), under assumption (3.1), let
$$ \alpha = \textstyle\begin{cases} \frac{q_{0}-V+\eta }{q_{0}-p_{0}}, & \textit{if } q_{0}>V-\eta, \\ 0, & \textit{otherwise}. \end{cases} $$
Then a lower bound \(\operatorname{LB}(D)\) satisfying \(\operatorname{LB}(D)\leq V(\mathrm{Q}(D))\) for problem \(\mathrm{Q}(D)\) is given by
$$ \operatorname{LB}(D)=\min_{i=0,1,\ldots ,n+N} \max_{k=0,1,\ldots ,m+N} \bigl\{ f_{k}^{i}( \alpha) \bigr\} . $$
Proof
(i) If \(\alpha =0 \), then \(f_{k}^{i}(0)=G_{k}^{+}(p)-G_{k}^{-}(q)\) for each i, so \(\operatorname{LB}(D)=\underline{\operatorname{LB}}(D)\), and the result is obvious.
(ii) If \(\alpha >0\), by the assumption it holds that \(p_{0}\leq V-\eta < q_{0}\). Then we get \(0<\alpha <1\), and there exists \(\tilde{x}=q-\alpha (q-p)\) such that \(\tilde{x}_{0}=V-\eta \) with \(\tilde{x}_{0}=q_{0}-\alpha (q_{0}-p_{0})\). Thus, for each \(\hat{x}=q-\beta (q-p)\) with \(\beta <\alpha \), it is easy to find that \(\hat{x}_{0}>\tilde{x}_{0}=V-\eta \); that is, for each \(x>\tilde{x}\), there exists \(\hat{x}=q-\beta (q-p)\) with \(\beta <\alpha \) such that \(x\geq \hat{x}\), and so it holds that \(x_{0}\geq \hat{x}_{0}>V-\eta \). Now, let us denote
$$ \Omega^{i}= \bigl\{ x\in [p,q]\mid p_{i}\leq x_{i}\leq \tilde{x_{i}} \bigr\} , $$
then we can acquire that
$$\begin{aligned} \bigl\{ x\in [p,q]\mid x_{0}\leq V-\eta \bigr\} &\subset [p,q] \setminus \{x\mid \tilde{x}< x\leq q\} \\ & = [p,q]\setminus \bigcap_{i=0}^{n+N}{ \bigl\{ x\in [p,q]\mid \tilde{x}_{i}< x_{i} \bigr\} } \\ & = \bigcup_{i=0}^{n+N} \bigl\{ x\in [p,q]\mid p_{i} \leq x_{i} \leq \tilde{x}_{i} \bigr\} \\ & = \bigcup_{i=0}^{n+N}{ \Omega^{i}}. \end{aligned}$$
Let \(\operatorname{LB}(\Omega^{i})= \max_{k=0,1,\ldots ,m+N}\{f_{k}^{i}( \alpha)\}\) and \(\operatorname{LB}(D)=\min \{\operatorname{LB}(\Omega^{i})\mid i=0,1,\ldots ,n+N\}\). Obviously, we get \(\operatorname{LB}(\Omega^{i})\leq \min \{\bar{F}(x) \mid x\in \Omega^{i}\}\), so that
$$ \operatorname{LB}(D) \leq \min \Biggl\{ \bar{F}(x)\Bigm| x\in \bigcup _{i=0}^{n+N} {\Omega^{i}} \Biggr\} \leq \min \bigl\{ \bar{F}(x)\mid x_{0}\leq V-\eta ,x\in [p,q] \bigr\} . $$
Thus, the proof is complete. □
Notice that \(\operatorname{LB}(D)\) in Theorem 3.2 satisfies
$$ \min \bigl\{ \bar{F}(x)\mid x_{0}\leq V-\eta ,x\in D \bigr\} \geq \operatorname{LB}(D)\geq \max_{k=0,1,\ldots ,m+N} \bigl\{ G_{k}^{+}(p)-G_{k}^{-}(q) \bigr\} . $$
(3.3)
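The bound of Theorem 3.2 needs only function evaluations. The following Python sketch is illustrative; the decomposition and the values of V, η below are toy assumptions, not from the paper.

```python
def lb_box(G_plus_list, G_minus_list, p, q, V, eta):
    """Lower bound LB(D) = min_i max_k f_k^i(alpha) from Theorem 3.2."""
    alpha = (q[0] - V + eta) / (q[0] - p[0]) if q[0] > V - eta else 0.0
    lb = float("inf")
    for i in range(len(p)):
        v = list(q)
        v[i] = q[i] - alpha * (q[i] - p[i])  # q - alpha*(q_i - p_i)*e^i
        fi = max(gp(p) - gm(tuple(v)) for gp, gm in zip(G_plus_list, G_minus_list))
        lb = min(lb, fi)
    return lb

# Illustrative increasing decomposition (not from the paper):
Gp_list = [lambda x: x[0] + x[1]]
Gm_list = [lambda x: x[0] + x[1] + 1.0]
p, q = (0.0, 0.0), (2.0, 2.0)

# With V - eta >= q_0, alpha = 0 and the bound reduces to
# max_k {G_k^+(p) - G_k^-(q)}; a smaller V - eta (alpha > 0) tightens it:
print(lb_box(Gp_list, Gm_list, p, q, V=3.0, eta=1.0))  # -5.0 (alpha = 0)
print(lb_box(Gp_list, Gm_list, p, q, V=1.5, eta=0.5))  # -4.0 (alpha = 0.5)
```

The second call illustrates inequality (3.3): the adaptive bound is at least as tight as the apparent bound.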
3.2 Adaptive division
The division operation repeatedly subdivides an \((n+N+1)\)dimensional box \(D^{0}\) into \((n+N+1)\)dimensional subboxes. This operation helps the algorithm confirm the position of a global optimal solution in \(D^{0}\) for problem (P). Throughout this algorithm, we take a new adaptive subdivision principle as follows.
Adaptive subdivision
For given \(\eta >0\), consider any box \(D=[p,q]=\{x\in R^{n+N+1}\mid p_{i} \le x_{i}\le q_{i},i=0,1,\ldots ,n+N\}\subseteq D^{0}\).
(i) If \(q_{0}>V-\eta \), then let \(\alpha =\frac{q_{0}-V+\eta }{q_{0}-p_{0}}\); otherwise let \(\alpha =0\).
(ii) Denote \(t=\operatorname{argmax}\{q_{i}-p_{i}\mid i=0,1,\dots ,n+N\}\). Let \(u_{t}=p_{t}\) and \(v_{t}=q_{t}-\alpha (q_{t}-p_{t})\). Set \(\bar{x}_{t}=(u_{t}+v_{t})/2\).
(iii) By using \(\bar{x}_{t}\), let us denote
$$ D_{1}= \bigl\{ x\in R^{n+N+1}\mid p_{i}\leq x_{i}\leq q_{i}, i\neq t; p_{t} \leq x_{t}\leq \bar{x}_{t} \bigr\} , $$
and
$$ D_{2}= \bigl\{ x\in R^{n+N+1}\mid p_{i}\leq x_{i}\leq q_{i}, i\neq t; \bar{x}_{t}\leq x_{t}\leq q_{t} \bigr\} . $$
Based on the above division operation, D is divided into two new boxes \(D_{1}\) and \(D_{2}\). In particular, when \(\alpha =0\), the adaptive subdivision simply reduces to the standard bisection. As we will see from the numerical experiments in Sect. 5, the adaptive subdivision is superior to the standard bisection. Moreover, the subdivision guarantees the convergence of the algorithm, and we have the following results.
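One step of this subdivision can be sketched as follows. The function below is an illustration of steps (i)-(iii) with boxes as plain tuples, not the authors' implementation.

```python
def adaptive_subdivide(p, q, V, eta):
    """One adaptive subdivision step for the box D = [p, q]."""
    # (i) adaptive parameter alpha
    alpha = (q[0] - V + eta) / (q[0] - p[0]) if q[0] > V - eta else 0.0
    # (ii) longest edge t; cut point is the midpoint of [p_t, q_t - alpha*(q_t - p_t)]
    t = max(range(len(p)), key=lambda i: q[i] - p[i])
    u_t, v_t = p[t], q[t] - alpha * (q[t] - p[t])
    xbar = 0.5 * (u_t + v_t)
    # (iii) split D at x_t = xbar
    q1 = list(q); q1[t] = xbar
    p2 = list(p); p2[t] = xbar
    return (p, tuple(q1)), (tuple(p2), q)

# With alpha = 0 (q_0 <= V - eta) this is the standard bisection:
D1, D2 = adaptive_subdivide((0.0, 0.0), (4.0, 2.0), V=10.0, eta=1.0)
print(D1)  # ((0.0, 0.0), (2.0, 2.0))
print(D2)  # ((2.0, 0.0), (4.0, 2.0))
```

When \(q_{0}>V-\eta \), the positive α shifts the cut point left along the longest edge, biasing the split toward the region where \(x_{0}\leq V-\eta \) can still hold.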
Theorem 3.3
Suppose that the above adaptive division operation is infinite. Then it generates a nested subsequence \(\{D^{s_{t}}\}\) of the partition sets \(\{D^{s}\}\) such that \(\operatorname{LB}(D^{s_{t}})\rightarrow V(\mathrm{Q}(D^{0}))\) as \(t\rightarrow +\infty \).
Proof
By the adaptive division operation, for each box \(D^{s}=[p^{s},q^{s}]\subseteq D^{0}\), we can acquire the points \(u^{s}, v^{s}(i)\in D^{s}\) (\(i=0,1,\dots ,n+N\)) satisfying
$$ u^{s}=p^{s},\qquad v^{s}(i)=q^{s}-\alpha^{s} \bigl(q^{s}_{i}-p^{s}_{i}\bigr)e^{i} \quad \mbox{with } \alpha^{s}=0 \mbox{ or } \alpha^{s}=\frac{q^{s}_{0}-V+\eta }{q^{s}_{0}-p^{s}_{0}} . $$
From Theorem 3.2, we can obtain
$$ \operatorname{LB} \bigl(D^{s} \bigr)=\min_{i=0,1,\ldots ,n+N}\max _{k=0,1,\ldots ,m+N} \bigl\{ G_{k}^{+} \bigl(u ^{s} \bigr)- G_{k}^{-} \bigl(v^{s}(i) \bigr) \bigr\} . $$
According to Ref. [18], this adaptive division ensures the existence of an infinite subsequence \(\{s_{t}\}\) with \(D^{s_{t}+1}\subseteq D^{s _{t}}\) and \(\operatorname{LB}(D^{s_{t}})\le V(Q(D^{0}))\) for each t, so that for each \(i=0,1,\dots ,n+N\),
$$\begin{aligned}& v^{s_{t}}(i)-u^{s_{t}}\rightarrow 0,\quad \mbox{as } t\rightarrow +\infty , \\& \lim_{t \to +\infty }v^{s_{t}}(i)=\lim_{t \to +\infty }u^{s_{t}}=\hat{u}\in D^{0}. \end{aligned}$$
Thus we can obtain that
$$ \lim_{t \to +\infty }\operatorname{LB} \bigl(D^{s_{t}} \bigr)=\min _{i=0,1,\ldots ,n+N} \max_{k=0,1,\ldots ,m+N} \bigl\{ G_{k}^{+}( \hat{u})-G_{k}^{-}(\hat{u}) \bigr\} = \bar{F}(\hat{u}). $$
In addition, by assumption (3.1), since \(u_{0}^{s_{t}}=p_{0}^{s_{t}} \leq V-\eta \), it holds that
$$ \lim_{t \to +\infty }u_{0}^{s_{t}}= \hat{u}_{0}\le V-\eta , $$
which implies that û is feasible to \(\mathrm{Q}(D^{0})\); then we get
$$ \operatorname{LB} \bigl(D^{s_{t}} \bigr)\le \min \bigl\{ \bar{F}(x)\mid x_{0} \leq V-\eta ,x\in D^{0} \bigr\} \le \bar{F}(\hat{u}). $$
Consequently, \(\operatorname{LB}(D^{s_{t}})\rightarrow V(\mathrm{Q}(D^{0}))\) as \(t \rightarrow +\infty \), which completes the proof. □
3.3 Reduction operation
At a given stage of the proposed algorithm for problem (P), let \(D=[p,q]\subseteq D^{0}\) be a box generated by the division operation and still of interest. Clearly, the smaller this box D is, the closer a feasible solution in it will be to the \((\varepsilon ,\eta)\)-optimal solution of problem (P). Hence, to effectively tighten the variable bounds in a particular node, a valid range reduction strategy is introduced by overestimating the constraints and by applying the monotonic decompositions to problem (P). Based on the above discussion, for any box \(D=[p,q]\subseteq D^{0}\) generated by the division operation and still of interest, we intend to identify whether or not the box D contains a feasible solution x̂ of \(\mathrm{P}(D)\) such that \(\hat{x}_{0}\leq V-\eta \). Consequently, seeking such a point x̂ can be confined to the set \(\hat{D}\cap D\), where
$$ \hat{D}= \bigl\{ x\mid x_{0}\leq V-\eta , G_{k}^{+}(x)-G_{k}^{-}(x)< \varepsilon , k=0,1,\dots ,m+N \bigr\} . $$
The reduction operation is based on special cuts that exploit the monotonic structure of problem (P), and it aims at substituting the box \(D=[p,q]\) with a smaller box \(D'\) without losing any valid point \(x\in \hat{D}\cap D\), i.e.,
$$ \hat{D}\cap D'=\hat{D}\cap D. $$
(3.4)
This will suppress the fast growth of the branching tree in the division operation for seeking the \((\varepsilon ,\eta)\)-optimal solution of problem (P).
For any \(D=[p,q]=\prod_{i=0}^{n+N}[p_{i},q_{i}]\subseteq D^{0}\) generated by the division operation, the box \(D'\) satisfying condition (3.4) is denoted by \(R[p,q]\). To see how \(R[p,q]\) is obtained, we first need to compute the parameters γ, \({\alpha }_{k}^{i}\), and \({\beta }_{k}^{i}\) for each \(k=0,1,\dots ,m+N\), \(i=0,1, \dots ,n+N\) by using the following rules.
Rule (i): Given the box \([p,q]\subseteq D^{0}\), if \(f_{k}^{i}(1) >\varepsilon \), let \({\alpha }_{k}^{i}\) be the solution of the univariate equation \(f_{k}^{i}({\alpha }_{k}^{i})=\varepsilon \); otherwise let \({\alpha }_{k}^{i}=1\).
Rule (ii): For given boxes \(D=[p,q]\) and \(D'=[\bar{p},q]\) with \(D'\subseteq D\subseteq D^{0}\), if \(g_{k}^{i}(1) >\varepsilon \), one can solve the univariate equation \(g_{k}^{i}({\beta }_{k}^{i})= \varepsilon \) to obtain \({\beta }_{k}^{i}\); otherwise let \({\beta }_{k}^{i}=1\). If \(\bar{p}_{0}< V-\eta < q_{0}\), then set \(\gamma =\frac{V-\eta -\bar{p}_{0}}{q_{0}-\bar{p}_{0}}\); otherwise let \(\gamma =1\).
Notice that it is easy to obtain \({\alpha }_{k}^{i}\) and \(\beta_{k}^{i}\), since the univariate functions \(f_{k}^{i}(\alpha)\) and \(g_{k}^{i}(\beta)\) are strictly monotonic in Rules (i) and (ii).
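Because \(f_{k}^{i}\) and \(g_{k}^{i}\) increase in their argument (as \(G_{k}^{\pm }\) are increasing), each equation in Rules (i) and (ii) can be solved by plain bisection. A minimal sketch, with an arbitrary increasing surrogate function standing in for \(f_{k}^{i}\):

```python
def solve_increasing(h, target, lo=0.0, hi=1.0, tol=1e-10):
    """Bisection for h(t) = target, h increasing with h(lo) <= target <= h(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative increasing surrogate; the root here is t = 0.5:
print(round(solve_increasing(lambda t: t ** 3, 0.125), 6))  # 0.5
```

Only function evaluations are needed, which is what makes the reduction operation cheap compared with solving auxiliary convex or linear subproblems.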
According to Rules (i) and (ii), let us denote
$$\begin{aligned}& \hat{\alpha }^{i}=\min_{k=0,1,\ldots ,m+N} \bigl\{ {\alpha }_{k}^{i} \bigr\} , \\& \hat{\beta }^{i}=\min_{k=0,1,\ldots ,m+N} \bigl\{ {\beta }_{k}^{i}, \gamma\bigr\} , \quad i=0,1,\ldots ,n+N, \end{aligned}$$
(3.5)
then \(R[p,q]\) can be obtained by Theorems 3.4 and 3.5.
Theorem 3.4
Given the box \(D=[p,q]=\prod_{i=0}^{n+N}[p_{i},q_{i}]\subseteq D^{0}\), it holds that:
(i) If \(p_{0}\leq V-\eta \) and \(\bar{F}(p)<\varepsilon \), then \(R[p,q]=[p,p]\); and
(ii) If \(p_{0}>V-\eta \) or \(\max \{f_{k}^{i}(0)\mid k=0,1,\dots ,m+N \}> \varepsilon \) holds for some \(i\in \{0,1,\ldots ,n+N\}\), then \(R[p,q]=\emptyset \).
Proof
(i) The proof of this result is straightforward and is omitted here.
(ii) The first statement is apparent; we only need to prove the second.
If there exists some \(i\in \{0,1,\ldots ,n+N\}\) such that \(\max \{f_{k}^{i}(0)\mid k=0,1,\dots ,m+N\}> \varepsilon \), we can obtain
$$ \bar{F}(x)\geq \max_{k=0,1,\dots ,m+N} \bigl\{ f_{k}^{i}(0)\bigr\} >\varepsilon \quad \mbox{for each } x\in [p,q]. $$
Therefore, we can acquire \(R[p,q]=\emptyset \), and this accomplishes the proof. □
Theorem 3.5
For any \(D=[p,q]\subseteq D^{0}\), under Rule (i) and (3.5) and the assumption
$$ p_{0}\leq V-\eta \quad \textit{and}\quad \max_{k=0,1,\dots ,m+N} \bigl\{ f_{k}^{i}(0) \bigr\} < \varepsilon \quad \textit{for each } i=0,1,\ldots ,n+N, $$
let \({\hat{p}}=q-\sum_{i=0}^{n+N}\hat{\alpha }^{i}(q_{i}-p_{i})e^{i}\). If the box \([\hat{p},q]\) satisfies the assumption of Theorem 3.4, then \(R[p,q]=[\hat{p},\hat{p}]\) or \(R[p,q]=\emptyset \). Otherwise, \(R[p,q]=[\hat{p},\hat{q}]\), where \(\hat{q}=\hat{p}+\sum_{i=0}^{n+N}\hat{\beta }^{i}(q_{i}-\hat{p}_{i})e^{i}\) with respect to Rule (ii) and (3.5).
Proof
For any given \(x=(x_{0},\ldots ,x_{n+N})^{T}\in \hat{D}\cap [p,q]\), we first confirm that \(x\geq \hat{p}\).
Assume that \(x\ngeq \hat{p}\), that is to say, there exists some \(i\in \{0,1,\ldots ,n+N\}\) such that
$$ x_{i}< \hat{p}_{i}=q_{i}-\hat{\alpha }^{i}(q_{i}-p_{i}), \quad \mbox{i.e. } x_{i}=q_{i}-\alpha (q_{i}-p_{i}) \mbox{ with } \alpha > \hat{\alpha }^{i}. $$
(3.6)
Then we discuss as follows:
If \(\hat{\alpha }^{i}=1\), we can obtain \(x_{i}<\hat{p}_{i}=q_{i}-\hat{\alpha }^{i}(q_{i}-p_{i})=p_{i}\) from (3.6), contradicting \(x\in [p,q]\).
If \(\hat{\alpha }^{i}\in (0,1)\), from Rule (i) and the definition of \(\hat{\alpha }^{i}\), there must exist some k such that \(f_{k}^{i}( \hat{\alpha }^{i})=\varepsilon \), i.e.,
$$ G_{k}^{+}(p)-G_{k}^{-} \bigl(q-\hat{ \alpha }^{i}(q_{i}-p_{i})e^{i} \bigr)= \varepsilon. $$
(3.7)
In addition, by Rule (i) and the monotonicity of \(G_{k}^{-}(x)\), it holds from (3.6) and (3.7) that
$$ G_{k}^{-} \bigl(q-(q_{i}-x_{i})e^{i} \bigr)= G_{k}^{-} \bigl(q-\alpha (q_{i}-p_{i})e ^{i} \bigr)< G_{k}^{-} \bigl(q-\hat{\alpha }^{i}(q_{i}-p_{i})e^{i} \bigr)=G_{k}^{+}(p)- \varepsilon . $$
Consequently,
$$ G_{k}^{-}(x)\leq G_{k}^{-} \bigl(q-(q_{i}-x_{i})e^{i} \bigr)< G_{k}^{+}(p)- \varepsilon \leq G_{k}^{+}(x)-\varepsilon , $$
contradicting \(G_{k}^{+}(x)-G_{k}^{-}(x)\leq \varepsilon \). According to the above discussions, we have demonstrated that \(x\geq \hat{p}\), i.e., \(x\in [\hat{p},q]\). Next we will show that \(x\leq \hat{q}\).
Assume that \(x\nleq \hat{q}\), then there exists some i such that
$$ x_{i}>\hat{q}_{i}=\hat{p}_{i}+\hat{\beta }^{i}(q_{i}-\hat{p}_{i}),\quad \mbox{i.e. } x_{i}=\hat{p}_{i}+\beta (q_{i}- \hat{p}_{i}) \mbox{ with } \beta >\hat{\beta }^{i}. $$
(3.8)
We discuss as follows:
If \(\hat{\beta }^{i}=1\), from (3.8) we can acquire \(x_{i}>\hat{q}_{i}=\hat{p}_{i}+\hat{\beta }^{i}(q_{i}-\hat{p}_{i})=q_{i}\), contradicting \(x\in [p,q]\).
If \(\hat{\beta }^{i}\in (0,1)\), from Rule (ii) and the definition of \(\hat{\beta }^{i}\), we can obtain \(g_{k}^{i}( \hat{\beta }^{i})=\varepsilon \) for some k, i.e.,
$$ G_{k}^{+} \bigl(\hat{p}+\hat{\beta }^{i}(q_{i}-\hat{p}_{i})e^{i} \bigr)-G_{k}^{-}(q)=\varepsilon , $$
(3.9)
or
$$ \hat{p}_{0}+\hat{\beta }^{0}(q_{0}- \hat{p}_{0})=V-\eta . $$
(3.10)
Suppose that (3.9) holds. Due to Rule (ii) and the monotonicity of \(G_{k}^{+}(x)\), it follows from (3.8) and (3.9) that
$$ G_{k}^{+} \bigl(\hat{p}+(x_{i}- \hat{p}_{i})e^{i} \bigr)=G_{k}^{+} \bigl( \hat{p}+\beta (q _{i}-\hat{p}_{i})e^{i} \bigr)> G_{k}^{+} \bigl(\hat{p}+\hat{\beta }^{i}(q_{i}- \hat{p}_{i})e^{i} \bigr)=G_{k}^{-}(q)+ \varepsilon ; $$
therefore,
$$ G_{k}^{+}(x)\geq G_{k}^{+} \bigl( \hat{p}+(x_{i}-\hat{p}_{i})e^{i} \bigr)>G_{k} ^{-}(q)+\varepsilon \geq G_{k}^{-}(x)+ \varepsilon \quad \mbox{with } x _{i}=\hat{p}_{i}+\beta (q_{i}-\hat{p}_{i}), $$
which contradicts \(G_{k}^{+}(x)-G_{k}^{-}(x)\leq \varepsilon \).
Suppose that (3.10) holds. From (3.8), (3.10), and Rule (ii), we conclude that
$$ x_{0}=\hat{p}_{0}+\beta (q_{0}- \hat{p}_{0})>\hat{p}_{0}+\hat{\beta } ^{0}(q_{0}- \hat{p}_{0})=V-\eta , $$
which contradicts \(x_{0}\leq V-\eta \). According to the above discussions, we obtain \(x\leq \hat{q}\), i.e., \(x\in [\hat{p},\hat{q}]\), and the proof is complete. □
From Theorem 3.5 and Rules (i) and (ii), the main computational effort in deriving \(R[p,q]\) is to solve some univariate equations in the variables \({\alpha }_{k}^{i}\) and \({\beta }_{k}^{i}\), which are easy to solve, for example, by the bisection approach. Moreover, as seen below, the main computational cost of the proposed algorithm also lies in forming \(R[p,q]\).
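The lower-end half of this computation (Rule (i), producing p̂ of Theorem 3.5) can be sketched as follows. The constraint and its decomposition below are toy choices for illustration; the upper end q̂ is obtained analogously from Rule (ii).

```python
def reduce_lower_end(G_plus_list, G_minus_list, p, q, eps, tol=1e-10):
    """Sketch of Rule (i): raise the lower corner p to p_hat, where
    p_hat_i = q_i - alpha_hat^i*(q_i - p_i) and alpha_hat^i = min_k alpha_k^i
    with f_k^i(alpha_k^i) = eps. Only univariate bisections are needed."""

    def f(k, i, a):  # f_k^i(a) = G_k^+(p) - G_k^-(q - a*(q_i - p_i)*e^i)
        v = list(q)
        v[i] = q[i] - a * (q[i] - p[i])
        return G_plus_list[k](p) - G_minus_list[k](tuple(v))

    p_hat = []
    for i in range(len(p)):
        a_hat = 1.0
        for k in range(len(G_plus_list)):
            if f(k, i, 1.0) > eps:          # Rule (i): solve f_k^i(a) = eps
                lo, hi = 0.0, 1.0
                while hi - lo > tol:        # bisection; f_k^i increases in a
                    mid = 0.5 * (lo + hi)
                    lo, hi = (mid, hi) if f(k, i, mid) < eps else (lo, mid)
                a_hat = min(a_hat, 0.5 * (lo + hi))
        p_hat.append(q[i] - a_hat * (q[i] - p[i]))
    return tuple(p_hat)

# Toy constraint 1 - x0 - x1 < eps (i.e. x0 + x1 > 1 - eps), decomposed into
# the increasing parts G^+ = 1 and G^- = x0 + x1 (illustrative only):
Gp_list = [lambda x: 1.0]
Gm_list = [lambda x: x[0] + x[1]]

p_hat = reduce_lower_end(Gp_list, Gm_list, (0.0, 0.0), (0.6, 0.6), eps=0.01)
print(p_hat)  # ~ (0.39, 0.39): any point with x_i < 0.39 has x0 + x1 < 0.99
```

In this toy instance every ε-feasible point of the box \([0,0.6]^{2}\) must lie in \([0.39,0.6]^{2}\), so the cut discards a large infeasible portion while preserving condition (3.4).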