Solving a class of generalized fractional programming problems using the feasibility of linear programs
Journal of Inequalities and Applications volume 2017, Article number: 147 (2017)
Abstract
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm consists in checking the feasibility of the linear programs associated with the interesting grid points. Based on the computational complexity result, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require the objective function of problem (P) to be quasiconcave or of low rank. Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
Introduction
In a variety of applications, we encounter a class of nonconvex optimization problems as follows:

\[ (\mathrm{P})\quad \min_{x\in\Omega} f(x)=G \biggl(\frac{c_{1}^{\top}x+c_{01}}{d_{1}^{\top}x+d_{01}},\ldots,\frac{c_{p}^{\top}x+c_{0p}}{d_{p}^{\top}x+d_{0p}} \biggr), \]

where \(c_{i}, d_{i}\in\mathbb{R}^{n}\), \(c_{0i}, d_{0i}\in \mathbb{R}\), \(A\in\mathbb{R}^{m\times n}\), \(b\in\mathbb{R}^{m}\), \(c_{i}^{\top}x+c_{0i}>0\), \(d_{i}^{\top}x+d_{0i}>0\) over a nonempty, compact set Ω for each \(i=1,\ldots,p\), and \(G:\mathbb{R}_{+} ^{p}\rightarrow\mathbb{R}_{+}\) is a continuous function.
Problem (P) is worth studying because some important special optimization problems that have been studied in the literature fall into the category of (P), such as multiplicative programs, sum-of-ratios optimization, and fractional polynomial optimization, namely:

(a)
Multiplicative programs (MP): In this case, the objective function G, with the form \(G(y_{1},\ldots,y_{p})=\prod_{i=1}^{p}y_{i}\) with \(y_{i}=\frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{ \top}x+d_{0i}}\), is quasiconcave, and its minimum is attained at some extreme point of the polytope [1]. Multiplicative objective functions arise in a variety of practical applications, such as economic analysis [2], robust optimization [3], VLSI chip design [4], combinatorial optimization [5], etc.

(b)
Sum-of-ratios (SOR) optimization: SOR functions have the form \(G(y_{1},\ldots,y_{p}) =\sum_{i=1}^{p}y_{i}\) with \(y_{i}=\frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{\top}x+d_{0i}}\). Matsui [6] points out that it is NP-hard to minimize SOR functions over a polytope. For many applications of this form, we refer to the survey paper by Schaible and Shi [7] and the references therein. In particular, a kind of SOR optimization problems of the form \(G(y_{1},\ldots,y_{p})=\sum_{i=1}^{p} \vert y_{i} \vert ^{q}\), where \(q\geq0\) and \(y_{i}=\frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{\top}x+d _{0i}}\), is considered by Kuno and Masaki [8] as well; such problems often occur in computer vision.

(c)
Fractional polynomial optimization: Polynomial functions with positive coefficients have the form \(G(y_{1},\ldots,y _{p})=\sum_{j=1}^{m}c_{j}\prod_{i=1}^{p}y_{i}^{\gamma_{ij}}\), where \(y_{i} =\frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{\top}x+d_{0i}}\), \(c_{j}\geq0\) and \(\gamma_{ij}\) is a positive integer. Problems of this form have many applications [9], including production planning, engineering design, etc. In addition, from a research point of view, these problems pose significant theoretical and computational challenges because they possess multiple local optima that are not globally optimal.
During the past years, many solution methods have been developed for globally solving special cases of problem (P). These methods can be classified into outer-approximation [10], branch-and-bound [11-14], mixed branch-and-bound and outer-approximation [15], cutting plane [16], parameter-based [17], vertex enumeration [8], heuristic methods [18], etc. However, most of these methods lack a theoretical analysis of the running time of the algorithm or a performance guarantee on the solutions obtained. To our knowledge, little work has been done on ε-approximation versions of problem (P) without the quasiconcavity and low-rank assumptions, although Locatelli [19] has developed an approximation algorithm for a general class of global optimization problems. We now introduce the definition of the ε-approximation problem related to global optimization.
Definition 1
Given \(\varepsilon>0\) and letting \(f_{\ast}=\min_{x\in\Omega}f(x)\), a point \(\bar{x}\in \Omega\) is said to be an ε-approximation solution for \(\min_{x\in\Omega}f(x)\) if

\[ f(\bar{x})\leq(1+\varepsilon)f_{\ast}. \]
This article focuses on presenting a fully polynomial time approximation scheme (FPTAS) for solving problem (P). An FPTAS for a minimization problem is an approximation algorithm that, for any given \(\varepsilon>0\), finds an ε-approximation solution for the problem in running time polynomial in the input size of the problem and \(1/\varepsilon\). As shown by Mittal and Schulz [20], the optimum value of problem (P) cannot be approximated to within any factor unless \(\mathit{NP}=P\). Therefore, in order to obtain an FPTAS for solving problem (P), some extra assumptions on the function G will be required (see Section 2).
For the special cases of problem (P), many ε-approximation algorithms have been developed. Depetrini and Locatelli [21] presented an approximation algorithm for linear fractional-multiplicative problems and pointed out that the algorithm is an FPTAS when the number p of ratio terms is fixed. This result has been extended to a wider class of optimization problems by Locatelli [19]. Also, Goyal and Ravi [22] exploited the fact that the minimum of a quasiconcave function is attained at an extreme point of the polytope and proposed an FPTAS for minimizing a class of low-rank quasiconcave functions over a convex set. Mittal and Schulz [20] developed an FPTAS for optimizing a class of low-rank nonconvex functions, without quasiconcavity, over a polytope. In addition, Depetrini et al. [23] and Goyal et al. [24] respectively gave an FPTAS for a class of optimization problems whose objective functions are products of two linear functions. Shen and Wang [25] presented a linear decomposition approximation algorithm for a class of nonconvex programming problems by dividing the input space into polynomially many grids. Nevertheless, these solution methods [20, 21, 23-25] cannot be directly applied to the case considered in this paper (i.e., problem (P)), where the objective function is a composition of ratios of affine functions without quasiconcavity or low rank.
The aim of this article is to present a solution approach for the class of fractional programming problems (P). By introducing some variables, the original problem (P) is first converted into a p-dimensional equivalent problem (Q). Through the establishment of a nonuniform grid based on problem (Q), the solving process of the original problem (P) is then transformed into checking the feasibility of a series of linear programming problems. Thus, a new approximation algorithm is presented for globally solving problem (P) based on the exploration of a nonuniform grid over a box. The algorithm does not require quasiconcavity or low rank of the function G in problem (P), and it is proved to be an FPTAS when the number p of terms in G is fixed. We emphasize here that the exploration technique used in this article is different from the ones given in [19, 21]: we utilize a different strategy from [19, 21] to update the incumbent best value of the objective function \(g(t)\) of problem (Q), which requires fewer interesting grid points to be stored and considered by our algorithm, compared with Refs. [19, 21]. Also, the main computational cost of the proposed algorithm lies in checking the feasibility of linear programs at the interesting grid points; hence it requires less computational cost and is more easily implementable. Finally, problem (P) generalizes the one investigated in [21], and the proposed algorithm can be directly applied to solve the problem in [19] by replacing the convex feasibility checks with linear ones. Numerical results show that the proposed algorithm requires much less computational time than the approaches in [19, 21] to obtain an approximate optimal solution of problem (P) with the same approximation error.
The paper is structured as follows. In Section 2, we discuss the reformulation of problem (P) as a p-dimensional problem. Section 3 presents an approximation algorithm for obtaining an ε-approximation solution for problem (P), which is shown to be an FPTAS through its computational complexity analysis. Some numerical results are reported in Section 4. Finally, conclusions are presented in Section 5.
Parametric reformulation of the problem
For solving problem (P), throughout this paper, we assume that G satisfies:

\(G(y)\leq G(y^{\prime})\) for all \(y, y^{\prime}\in\mathbb{R}_{+} ^{p}\) with \(y_{i}\leq y_{i}^{\prime}\), \(i=1,\ldots,p\), and

\(\delta^{k}G(y)\leq G(\delta y)\) for all \(y\in\mathbb{R}_{+}^{p}\), \(\forall\delta\in(0,1)\), and some constant k.
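As an illustration (not part of the paper), both conditions can be checked numerically for a candidate G; the sketch below does so for the product of p terms (which satisfies (ii) with \(k=p\)) and the sum of p terms (with \(k=1\)), using arbitrarily chosen sample points and tolerances:

```python
import math
import random

# Two candidate compositions G from the text:
def G_prod(y):   # product of the p terms: delta^p * prod(y) == prod(delta*y), so k = p
    out = 1.0
    for yi in y:
        out *= yi
    return out

def G_sum(y):    # sum of the p terms: delta * sum(y) == sum(delta*y), so k = 1
    return sum(y)

def check_assumptions(G, k, trials=200, p=3, seed=0):
    """Numerically test (i) monotonicity and (ii) delta^k * G(y) <= G(delta*y)
    on random sample points; returns False as soon as a condition fails."""
    rng = random.Random(seed)
    for _ in range(trials):
        y = [rng.uniform(0.1, 10.0) for _ in range(p)]
        yp = [yi + rng.uniform(0.0, 1.0) for yi in y]      # y' >= y componentwise
        if G(y) > G(yp) + 1e-9:                            # (i) violated
            return False
        delta = rng.uniform(0.01, 0.99)
        if delta ** k * G(y) > G([delta * yi for yi in y]) + 1e-9:  # (ii) violated
            return False
    return True
```

For instance, `check_assumptions(G_prod, 1)` fails, confirming that the product of three terms needs the larger exponent \(k=3\).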
There are a number of functions G which satisfy the above conditions, such as the product of a constant number (say p) of linear functions (with \(k=p\)), the sum of linear ratio functions (with \(k=1\)), etc. This paper will present an approximation algorithm for solving problem (P) under the above assumptions. For this purpose, let us introduce p variables \(y_{i}\), \(i=1,\ldots,p\); thus, problem (P) can be equivalently rewritten in the form:

\[ (\mathrm{P1})\quad \min G(y_{1},\ldots,y_{p})\quad\text{s.t.}\quad \frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{\top}x+d_{0i}}\leq y_{i},\ i=1,\ldots,p,\ x\in\Omega. \]
Theorem 1
\(x^{\ast}\) is a global optimal solution for problem (P) if and only if \((x^{\ast},y^{\ast})\) is a global optimal solution for problem (P1) with \(y^{\ast}_{i}=\frac{c_{i}^{\top}x^{\ast}+c_{0i}}{d _{i}^{\top}x^{\ast}+d_{0i}}\) for each \(i=1,\ldots,p\). The minimal objective function values of problems (P) and (P1) are equal, i.e., \(f(x^{\ast})=G(y^{\ast})\).
Proof
Let \((x^{\ast},y^{\ast})\) be a global optimal solution for problem (P1). We suppose that \(x^{\ast}\) is not a global optimal solution for problem (P), then there exists \(\bar{x}\in\Omega\) such that
Let \(\bar{y}_{i}=\frac{c_{i}^{\top}\bar{x}+c_{0i}}{d_{i}^{\top} \bar{x}+d_{0i}}\), \(i=1,\ldots,p\). Then \((\bar{x},\bar{y})\) is a feasible solution of problem (P1). We can have, from (2.1), that
On the other hand, since \((x^{\ast},y^{\ast})\) is a feasible solution of problem (P1), this implies that \(\frac{c_{i}^{\top}x^{\ast}+c _{0i}}{d_{i}^{\top}x^{\ast}+d_{0i}}\leq y^{\ast}_{i}\), \(i=1,\ldots,p\). Therefore, from the assumptions of G, it holds that
Combining (2.2) with (2.3), we can obtain \(G(\bar{y})< G(y ^{\ast})\). Since \((\bar{x},\bar{y})\) is a feasible solution of problem (P1), this contradicts the optimality of \((x^{\ast},y^{\ast})\) for problem (P1). Therefore, the supposition that \(x^{\ast}\) is not a global optimal solution for problem (P) must be false.
Next, we will show the converse case. Let \(x^{\ast}\) be a global optimal solution of problem (P), and let \(y^{\ast}_{i}=\frac{c_{i} ^{\top}x^{\ast}+c_{0i}}{d_{i}^{\top}x^{\ast}+d_{0i}}\), \(i=1,\ldots,p\). Then \((x^{\ast},y^{\ast})\) is a feasible solution of problem (P1). Suppose that there exists some feasible solution \((\bar{x},\bar{y})\) for problem (P1) such that
Then, from \(\frac{c_{i}^{\top}\bar{x}+c_{0i}}{d_{i}^{\top}\bar{x}+d _{0i}}\leq\bar{y}_{i}\), \(i=1,\ldots,p\), it follows that
By using (2.4)-(2.5), we have \(f(\bar{x})< G(y^{\ast})=f(x ^{\ast})\). Since \(\bar{x}\in\Omega\), this contradicts the fact that \(x^{\ast}\) is an optimal solution of problem (P). Hence, \((x^{\ast},y ^{\ast})\) must be an optimal solution to (P1). From the assumptions on G, it then clearly follows that \(f(x^{\ast})=G(y ^{\ast})\). □
Based on the above theorem, for solving problem (P), we may solve problem (P1) instead. Additionally, it is known that each single ratio \(\frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{\top}x+d_{0i}}\) is both quasiconcave and quasiconvex, and its minimum and maximum must be attained at some vertex of Ω (see, e.g., [26]). To this end, let us denote

\[ l_{i}=\min_{x\in\Omega}\frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{\top}x+d_{0i}},\qquad u_{i}=\max_{x\in\Omega}\frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{\top}x+d_{0i}},\quad i=1,\ldots,p. \tag{2.6} \]
And let

\[ H=\bigl\{t\in\mathbb{R}^{p}: l_{i}\leq t_{i}\leq u_{i}, i=1,\ldots,p\bigr\}. \]
Now, let us define a p-dimensional set for each \(t\in H\) as follows:

\[ S(t)=\bigl\{x\in\Omega: c_{i}^{\top}x+c_{0i}\leq t_{i}\bigl(d_{i}^{\top}x+d_{0i}\bigr), i=1,\ldots,p\bigr\}, \]
and the corresponding function \(g(t)\) is given by

\[ g(t)= \begin{cases} G(t), & S(t)\neq\emptyset, \\ +\infty, & S(t)=\emptyset. \end{cases} \]
Clearly, for a given \(t\in H\), we can determine whether \(S(t)\) is empty by checking the feasibility of a linear program, which can be done in polynomial time. Based on the above result, it turns out that problem (P1) is equivalent to the following p-dimensional problem:

\[ (\mathrm{Q})\quad \min_{t\in H} g(t). \]
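Concretely, since \(S(t)\) collects the points \(x\in\Omega\) whose ratios are bounded by the \(t_{i}\), each inequality \(c_{i}^{\top}x+c_{0i}\leq t_{i}(d_{i}^{\top}x+d_{0i})\) is linear in x for fixed t, so emptiness of \(S(t)\) reduces to one LP feasibility check. A minimal sketch using SciPy's `linprog` (the data layout `C, c0, D, d0` for the ratio coefficients is our own convention):

```python
import numpy as np
from scipy.optimize import linprog

def S_nonempty(t, A, b, C, c0, D, d0):
    """Check S(t) = {x >= 0 : Ax <= b, (c_i - t_i d_i)^T x <= t_i d0_i - c0_i}
    for emptiness by solving a zero-objective LP (a pure feasibility problem)."""
    # Rewrite c_i^T x + c0_i <= t_i (d_i^T x + d0_i) as linear inequalities in x.
    A_ratio = C - t[:, None] * D
    b_ratio = t * d0 - c0
    A_ub = np.vstack([A, A_ratio])
    b_ub = np.concatenate([b, b_ratio])
    res = linprog(np.zeros(A.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * A.shape[1], method="highs")
    return res.status == 0  # 0 = solved (feasible), 2 = infeasible
```

For example, with \(\Omega=\{x\geq0: x_{1}+x_{2}\leq1\}\) and the single ratio \((x_{1}+1)/(x_{2}+1)\), the set is nonempty for \(t_{1}=2\) but empty for \(t_{1}=0.1\).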
According to the definition of \(g(t)\), we have the following conclusion.
Theorem 2
Given \(\varepsilon>0\), let \(\delta=(\frac{1}{1+\varepsilon})^{1/k}\). Then, for each \(\bar{t}\in H\), it holds that

\[ g(\bar{t})\leq(1+\varepsilon)g(t)\quad\text{for all } t\in[\delta\bar{t},\bar{t}]. \]
Proof
From the definition of \(S(t)\) and \(\delta=(\frac{1}{1+\varepsilon})^{1/k} \in(0,1)\), we have \(S(\delta\bar{t})\subseteq S(\bar{t})\) for each \(\bar{t}\in H\). When \(S(\delta\bar{t})\neq\emptyset\), it implies that \(S(t)\neq\emptyset\) for each \(t\in[\delta\bar{t},\bar{t}]\). This means that \(g(t)=G(t)\) for each \(t\in[\delta\bar{t},\bar{t} ]\). With the assumptions of \(G(t)\), it holds that
When \(S(\delta\bar{t})=\emptyset\) and \(S(\bar{t})\neq\emptyset\), similarly, we have that
When \(S(\bar{t})=\emptyset\), it implies that \(S(t)=\emptyset\) for any \(t\in[\delta\bar{t},\bar{t}]\), and so the conclusion holds. □
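Reading Theorem 2 as the bound \(g(\bar{t})\leq(1+\varepsilon)g(t)\) for \(t\in[\delta\bar{t},\bar{t}]\), the inequality can be exercised on a synthetic \(g\) (our own construction: \(G(t)=t_{1}t_{2}\), so \(k=2\), with \(S(t)\) mocked by a componentwise threshold):

```python
import math
import random

# Synthetic setup: G(t) = t_1 * t_2 (assumption (ii) holds with k = 2),
# and S(t) is taken to be nonempty exactly when t >= (2, 3) componentwise.
T_MIN = (2.0, 3.0)

def g(t):
    if all(ti >= mi for ti, mi in zip(t, T_MIN)):
        return t[0] * t[1]
    return math.inf

def check_theorem2(eps, k, trials=2000, seed=1):
    """Sample pairs (t_bar, t) with t in [delta*t_bar, t_bar] and test the
    inequality g(t_bar) <= (1 + eps) * g(t)."""
    delta = (1.0 / (1.0 + eps)) ** (1.0 / k)
    rng = random.Random(seed)
    for _ in range(trials):
        tbar = (rng.uniform(0.5, 10.0), rng.uniform(0.5, 10.0))
        t = tuple(rng.uniform(delta * ti, ti) for ti in tbar)
        if g(tbar) > (1.0 + eps) * g(t) + 1e-9:
            return False
    return True
```

Running the check with the correct exponent \(k=2\) finds no violation, while pretending \(k=1\) (which makes δ too small for this G) does produce violations, matching the role of k in the theorem.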
The approximation algorithm
The algorithm and its convergence
In this subsection, by using Theorem 2 above, we present an approximation algorithm for solving problem (P), and prove that the algorithm can find an εapproximation solution for problem (P).
The proposed algorithm adopts an exploration technique over a suitably defined nonuniform grid on H. In the algorithm, let \(\mathcal{T}\) be the set of all stored interesting grid points which will be further analyzed, \(\mathcal{W}\) be the set of the grid points already discarded, and \(\mathcal{X}\) be the set of the remaining grid points at each iteration. Moreover, U represents the best value of the function \(g(t)\) obtained so far, and \(t^{\ast}\) denotes a point such that \(U=g(t^{\ast})\). The algorithm starts with \(t^{\ast}=(u_{1},\ldots,u_{p})\) and \(U=g(t^{\ast})\). In each iteration, we select a point \(\bar{t} \in\mathcal{T}\) and calculate \(\bar{a}=\min\{a\in\mathbb{N}: S( \delta^{a}\bar{t} )=\emptyset\}\), where \(\mathbb{N}\) denotes the set of natural numbers. If \(\bar{a}=0\), we select a new point t̄ from \(\mathcal{T}\). Otherwise, we have \(S(\delta^{\bar{a}-1} \bar{t})\neq\emptyset\), and so \(S(t)\neq\emptyset\) for each \(t\in[\delta^{\bar{a}-1}\bar{t},\bar{t}]\). This implies that \(g(t)=G(t)\) for each \(t\in[\delta^{\bar{a}-1}\bar{t},\bar{t}]\). Since G is nondecreasing, it holds that \(g(\delta^{\bar{a}-1} \bar{t})=\min_{t\in[\delta^{\bar{a}-1}\bar{t},\bar{t}]}g(t)\). In addition, for any \(t\in\{t:\delta^{\bar{a}}\bar{t}_{i}< t_{i} \leq\bar{t}_{i}, i=1,\ldots,p\}\triangleq(\delta^{\bar{a}}\bar{t}, \bar{t}]\), there exists an integer vector \(\tau=(\tau_{1},\ldots, \tau_{p})\) with \(\tau_{i}\in\{0,1,\ldots, \bar{a}-1\}\) for each i such that \(t_{i}\in(\delta^{\tau_{i}+1}\bar{t}_{i}, \delta^{\tau_{i}}\bar{t}_{i}]\); thus, from Theorem 2, we have \((1+\varepsilon)g(t)\geq g( \delta^{\tau}\bar{t})\) for any \(t\in(\delta^{\tau+1}\bar{t}, \delta^{\tau}\bar{t}]\). We see that all points \(\delta^{\tau}\bar{t}=(\delta^{\tau_{1}}\bar{t}_{1},\ldots, \delta^{\tau_{p}}\bar{t}_{p})\) with \(\tau_{i}\in\{0,1,\ldots, \bar{a}-1\}\) belong to \([\delta^{\bar{a}-1}\bar{t},\bar{t}]\); hence,

\[ (1+\varepsilon)\min_{t\in(\delta^{\bar{a}}\bar{t},\bar{t}]}g(t)\geq g\bigl(\delta^{\bar{a}-1}\bar{t}\bigr). \]
And so, it is reasonable to update \(U=\min\{U,g(\delta^{\bar{a}-1} \bar{t})\}\) and \(t^{*}\) such that \(g(t^{\ast})=U\). Next, we consider \(2^{p}\) new points \((\xi_{1}\bar{t}_{1},\ldots,\xi_{p}\bar{t}_{p})\) with \(\xi_{i}\in\{\delta^{\bar{a}},1\}\) for all i, discard all points which satisfy \(\xi_{i}\bar{t}_{i}< l_{i}\) for some i, and add the remaining points to \(\mathcal{X}\); then we update \(\mathcal{T}=(\mathcal{T}\cup \mathcal{X})\backslash\mathcal{W}\). This process is repeated until \(\mathcal{T}=\emptyset\). At termination, each point \(x^{\ast}\in S(t ^{\ast})\) is an ε-approximation solution of problem (P). The detailed algorithm is summarized as Algorithm 1.
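The exploration just described can be sketched as follows (a simplified reading, not the paper's verbatim Algorithm 1). An abstract oracle `feasible(t)` stands in for the LP feasibility check of \(S(t)\); the sketch assumes the oracle eventually fails as t decreases, which holds for (P) because the numerators stay positive on the compact set Ω:

```python
import itertools
import math

def approx_minimize(G, feasible, l, u, eps, k):
    """Grid-exploration sketch: grid points are tracked by integer exponent
    vectors s, with t(s) = (delta^{s_1} u_1, ..., delta^{s_p} u_p)."""
    p = len(u)
    delta = (1.0 / (1.0 + eps)) ** (1.0 / k)

    def point(s):
        return tuple((delta ** si) * ui for si, ui in zip(s, u))

    best_t, best_val = None, math.inf
    todo = [(0,) * p]            # stored interesting grid points, starting from u
    seen = {(0,) * p}
    while todo:
        s = todo.pop()
        t = point(s)
        # a_bar = min{a in N : S(delta^a * t) is empty}
        a = 0
        while feasible(tuple((delta ** a) * ti for ti in t)):
            a += 1
        if a > 0:
            cand = tuple((delta ** (a - 1)) * ti for ti in t)
            if G(cand) < best_val:
                best_val, best_t = G(cand), cand
            # spawn the 2^p points (xi_1 t_1, ..., xi_p t_p), xi_i in {delta^a, 1}
            for step in itertools.product((a, 0), repeat=p):
                ns = tuple(si + st for si, st in zip(s, step))
                if ns == s or ns in seen:
                    continue
                if any(pt < li for pt, li in zip(point(ns), l)):
                    continue     # discard points that fall below the box H
                seen.add(ns)
                todo.append(ns)
    return best_t, best_val
```

With a toy oracle whose minimum ratio value is known, the returned value stays within the \((1+\varepsilon)\) factor of the true optimum, as Theorem 3 below asserts.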
Theorem 3
The proposed algorithm can find an εapproximation solution for problem (P).
Proof
Note that the algorithm evaluates the function \(g(t)\) at points of the form

\[ \bigl(\delta^{s_{1}}u_{1},\ldots,\delta^{s_{p}}u_{p}\bigr), \]

where \(s_{i}\in\mathbb{N}\) satisfies

\[ 0\leq s_{i}\leq\bar{s}_{i}:=\min\bigl\{s\in\mathbb{N}:\delta^{s+1}u_{i}\leq l_{i}\bigr\},\quad i=1,\ldots,p. \tag{3.1} \]
For any \(t\in H\), there is an integer vector \((s_{1},\ldots,s_{p})\) with \(0\leq s_{i}\leq\bar{s_{i}}\), \(i=1,\ldots,p\), such that \(t\in\prod_{i=1}^{p}[\delta^{s_{i}+1}u_{i},\delta^{s_{i}}u_{i}]\). Thus, in view of Theorem 2 and the definition of δ, it holds that \(g(\delta^{s_{1}}u_{1},\ldots,\delta^{s_{p}}u_{p})\leq (1+\varepsilon )g(t)\) for each \(t\in\prod_{i=1}^{p}[\delta^{s_{i}+1}u_{i},\delta ^{s_{i}}u_{i}]\). Hence, we have
On the other hand, let us denote \(t^{*}=(\delta^{s_{1}^{*}}u_{1}, \ldots,\delta^{s_{p}^{*}}u_{p})\) such that
From Step (\(k_{2}\)) of the algorithm, we know \(S(t^{*})\neq\emptyset\). By using the definition of \(S(t)\), there exists a point \(x^{*}\) satisfying \(x^{*}\in S(t^{*})\). Now, let us denote \(\tilde{t}_{i}=\frac{c_{i} ^{\top}x^{*}+c_{0i}}{d_{i}^{\top}x^{*}+d_{0i}}\), \(i=1,\ldots,p\), then we have \(x^{*}\in S(\tilde{t})\) and \(\tilde{t}_{i}\leq t^{*}_{i}\). Combining the definition of \(g(t)\), we see that \(g(\tilde{t})\leq g(t ^{*})\). Thus, we conclude that
Therefore, the point \(x^{\ast}\) is an εapproximation solution of problem (P) by Definition 1. □
The complexity of the algorithm
In this subsection, the computational complexity of the algorithm is analyzed in order to show that the approximation algorithm is an FPTAS for fixed p. For this purpose, we need the following lemma from Ref. [27]. Let \(\Omega=\{x\in\mathbb{R}^{n}: Ax\leq b,x\geq0\}\) be a polyhedron with \(A\in\mathbb{R}^{m\times n}\), \(b\in\mathbb{R}^{m}\), and denote \(\bar{\lambda}=\max\{\vert a_{ij}\vert,\vert b_{i}\vert: i=1,\ldots,m, j=1,\ldots,n\}\).
Then we have the following lemma.
Lemma 1
[27]
Let \(x^{0}\) be a vertex of Ω, then, for each \(j=1,\ldots,n\), it holds that
where \(p_{j}\in\mathbb{R}\), \(q\in\mathbb{R}\) with
Lemma 2
Given \(\varepsilon>0\), let \(\delta=(\frac{1}{1+\varepsilon})^{1/k}\). The number of the points \((\delta^{s_{1}}u_{1},\ldots,\delta^{s_{p}}u _{p})\) satisfying (3.1), at which the feasibility of the corresponding linear programs is checked by the proposed algorithm, is not more than

\[ \prod_{i=1}^{p}\biggl[1+\frac{k}{\varepsilon}\ln\biggl(\frac{u_{i}}{l_{i}}\biggr)\biggr]. \]
Proof
Note that \(\delta=(\frac{1}{1+\varepsilon})^{1/k}\in(0,1)\) is fixed once \(\varepsilon>0\) is given. Since the points \((\delta^{s_{1}}u_{1}, \ldots, \delta^{s_{p}}u_{p})\) belonging to the nonuniform grid over H satisfy (3.1), the number of these grid points is equal to \(\prod_{i=1}^{p}(\bar{s}_{i}+1)\). Moreover, by the proposed algorithm, the number of the points \((\delta^{s_{1}}u_{1},\ldots,\delta^{s_{p}}u _{p})\) at which the feasibility of linear programs should be checked is not larger than \(\prod_{i=1}^{p}(\bar{s}_{i}+1)\). In view of the definition of \(\bar{s}_{i}\) and δ, we have that

\[ \bar{s}_{i}+1\leq1+\frac{k\ln(u_{i}/l_{i})}{\ln(1+\varepsilon)},\quad i=1,\ldots,p. \]
Since \(\ln(1+\varepsilon)\approx\varepsilon\) for sufficiently small \(\varepsilon>0\), we see that the number of points where the feasibility of linear programs should be checked is not larger than \(\prod_{i=1} ^{p}[1+\frac{k}{\varepsilon}\ln(\frac{u_{i}}{l_{i}})]\). □
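For fixed p, the bound of Lemma 2 is easy to evaluate; a small helper (illustrative only, using the exact factor with \(\ln(1+\varepsilon)\) rather than its approximation by ε):

```python
import math

def grid_point_bound(l, u, eps, k):
    """Upper bound from Lemma 2 on the number of grid points whose LPs may be
    checked: prod_i (s_bar_i + 1), with s_bar_i <= k*ln(u_i/l_i)/ln(1+eps)."""
    bound = 1.0
    for li, ui in zip(l, u):
        bound *= 1.0 + k * math.log(ui / li) / math.log(1.0 + eps)
    return bound
```

Multiplying the per-coordinate factors makes the exponential growth in p explicit: doubling p squares the bound, while shrinking ε only inflates each factor polynomially in \(1/\varepsilon\).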
By the proposed algorithm, to find an ε-approximation solution for problem (P), the computational cost includes the computation of the box H and the calculation of ā at Step (\(k_{1}\)) of the algorithm in each iteration. It is known that each \(l_{i}\) and \(u_{i}\) must be attained at some vertex of Ω (see, e.g., [26]) and can be computed in polynomial time; thus H can be determined in polynomial time. On the other hand, we notice that the main work of the algorithm is the calculation of ā at each iteration (see Step (\(k_{1}\))). This is because the calculation of ā at each iteration requires checking the feasibility of some linear problems with \(m+p\) constraints and n variables. In other words, the computational cost of the algorithm lies in checking the feasibility of linear problems at the interesting grid points. Let us denote by \(T(m+p,n)\) the cost of checking the feasibility of a linear programming problem with \(m+p\) constraints and n variables.
In order to give the computational cost of the proposed algorithm, without loss of generality, we can assume that \(c_{i}^{\top}x+c_{0i}\geq1\) and \(d_{i}^{\top}x+d_{0i}\geq1\) for all \(x\in\Omega\), \(i=1,\ldots,p\).
This is because

\[ \frac{c_{i}^{\top}x+c_{0i}}{d_{i}^{\top}x+d_{0i}}=\frac{M_{i}(c_{i}^{\top}x+c_{0i})}{M_{i}(d_{i}^{\top}x+d_{0i})} \]
by choosing sufficiently large \(M_{i}\in\mathbb{R}\) such that \(M_{i}(c_{i}^{\top}x+c_{0i})\geq1\), \(M_{i}(d_{i}^{\top}x+d_{0i}) \geq1\) for any \(x\in\Omega\). Based on the above discussion, combining Lemmas 1 and 2 finally leads to the following theorem.
Theorem 4
As p is a fixed positive integer, the number of operations required by the proposed algorithm to obtain an ε-approximate solution for problem (P) is not larger than
where \(\lambda=\max\{\bar{\lambda}, \vert c_{ij} \vert , \vert d_{ij} \vert , \vert c_{0i} \vert , \vert d_{0i} \vert : i=1,\ldots,p, j=1,\ldots,n\}\).
Proof
Let \(x^{l_{i}}\), \(x^{u_{i}}\) be vertices of Ω with \(l_{i}=\frac{c _{i}^{\top}x^{l_{i}}+c_{0i}}{d_{i}^{\top}x^{l_{i}}+d_{0i}}\), \(u_{i}=\frac{c_{i}^{\top}x^{u_{i}}+c_{0i}}{d_{i}^{\top}x^{u_{i}}+d _{0i}}\), \(i=1,2,\ldots,p\). Thus, it follows from Lemma 1 that
where \(p_{j}^{l_{i}}\), \(q^{l_{i}}\), \(p_{j}^{u_{i}}\), \(q^{u_{i}}\) satisfy (3.2). Let \(\rho=\max\{1, 1/q^{l_{i}},1/q^{u_{i}}: i=1,\ldots ,p\}\). Combining Lemma 1 and the definition of λ leads to
Thus, with (3.3), it holds that
Similarly, we can obtain that \(u_{i}\leq2\rho n^{n+1}\lambda^{n+1}\). And so
Since for each interesting grid point we require the solution of a linear feasibility problem with \(m+p\) constraints and n variables, by Lemma 2, for given p, we can claim that the number of operations required by the proposed algorithm is not larger than
□
Remark 1
From Theorem 4, we can conclude that the proposed algorithm is an FPTAS for problem (P) when p is fixed. On the other hand, the computational time of the proposed algorithm increases exponentially as p increases. These conclusions can also be observed in the numerical results of the next section.
Remark 2
Notice that the detailed complexity analysis of the proposed algorithm can be used as an indicator of the difficulty of some optimization problems, such as multiplicative programs, sum-of-ratios optimization, etc. Thus, to solve these problems more efficiently, one would need to design a more sophisticated approach whose performance is at least as good.
Numerical examples
Based on Theorem 4, although the computational complexity results of the algorithms ([19, 21] and ours) are similar, we should notice that worst-case time complexity is only one of the most frequently used criteria for evaluating optimization algorithms. In fact, these complexity results ([19, 21] and ours) are only upper bounds on the computational cost of the algorithms, attained in the worst case, i.e., when all the grid points are considered. Hence, to further verify the performance of the proposed algorithm, in this section we compare it with the algorithms in [19, 21] on numerical examples. Because ours is an approximation algorithm for solving the general fractional programming problem (P), we do not attempt comparisons with the solution methods for special cases of (P) (e.g., branch-and-bound [11, 12], outer-approximation [15], cutting plane [16], etc.), nor with the approximation algorithms in [20, 22], which are restricted to problems whose objective functions satisfy the quasiconcavity or low-rank assumptions. Additionally, the algorithms ([19, 21] and ours) are all based on the exploration of a suitably defined nonuniform grid over a rectangle, but, compared with [19, 21], we exploit a different exploration strategy to minimize the objective function over the feasible set and a different method to update the incumbent best objective value at each iteration.
We implemented the three algorithms ([19, 21] and ours) in MATLAB 2012b and ran some test experiments on a PC with an Intel(R) Core(TM) i3 dual-core CPU (2.33 GHz). Notice that these algorithms use different approaches for computing the lower bound \(l_{i}\) and the upper bound \(u_{i}\) of each ratio term in the objective functions. Hence, for comparison, each \(l_{i}\), \(u_{i}\) in the three algorithms is computed in the same way (i.e., using (2.6)) in our experiments.
Some notations in Tables 1, 2, 3 have been used for column headers: Solution: the approximate optimal solution; Optimum: the approximate optimal value; Iter: the number of the algorithm iterations; CPU(s): the execution time in seconds; Nodes: the maximal number of the interesting grid points stored; Avg: average performance by the algorithm; Std: standard deviation of performances by the algorithm.
We first solve several sample examples, where Examples 1-3 and Examples 4-5 come from Ref. [28] and Ref. [29], respectively. The corresponding computational results are summarized in Table 1.
Example 1
Example 2
Example 3
where
Example 4
where
Example 5
where
Note that for solving Examples 4 and 5 we chose \((l_{1},l_{2},l_{3})=(1,1,1)\), \((u_{1},u_{2},u_{3})=(12,7,12)\) and \((l_{1},l_{2},l_{3})=(13,54,2)\), \((u_{1},u_{2},u_{3})=(450,850,105)\), respectively, which come from Ref. [29]. In addition, the algorithm in [21] is not applicable to Examples 4 and 5, and so we do not use it for solving them.
From Table 1, it can easily be seen that the proposed algorithm requires less computational time for solving Examples 1-5 than the algorithms in [19, 21] with the same \(\varepsilon>0\). This is because, as Table 1 shows, its number of iterations and maximal number of stored interesting grid points are smaller than those of [19, 21], which means that the total number of interesting grid points considered by the proposed algorithm is smaller than that of the algorithms in [19, 21]. Also, in all three algorithms ([19, 21] and ours), the main computational time is spent checking the feasibility of linear programs at the interesting grid points. Hence, the more interesting grid points are considered, the more computational time is required.
Next, we apply the three algorithms ([19, 21] and our own) to randomly generated examples as follows.
where all elements of \(c_{i}\in\mathbb{R}^{n}\) and \(L\in\mathbb{R} ^{n}\) are random numbers generated from the interval \([0,1]\); \(b\in\mathbb{R}^{m}\), \(V\in\mathbb{R}^{n}\) are randomly generated vectors with all components belonging to \((1,2)\); and each element of \(A\in\mathbb{R}^{m\times n}\) is randomly generated in \([-1,1]\). Nineteen combinations of m (number of constraints), n (number of variables), and p (number of linear ratio terms in the objective function) are selected, and altogether 190 randomly generated test instances are solved. The approximation error is fixed at \(\varepsilon=0.01\), and the average computational results (with standard deviations) are obtained by running the algorithms ([19, 21] and ours) 10 times. Table 2 shows the numerical results for instances with \((m,n)=(50,50)\) and p varied over \(\{2,4,5,6,7,8,9,10\}\). Similarly, the computational results for \(p=4\) with \((m,n)\) varied are listed in Table 3. In Tables 2 and 3, '-' means the instance cannot be solved within two hours.
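The random data just described can be drawn as follows (a sketch; the exact way L and V enter the instance is not reproduced here, we only sample the stated distributions):

```python
import random

def random_instance(m, n, p, seed=0):
    """Draw the random instance data described above: c_i, L in [0,1],
    b, V in (1,2), and A in [-1,1], with the stated dimensions."""
    rng = random.Random(seed)
    c = [[rng.uniform(0.0, 1.0) for _ in range(n)] for _ in range(p)]  # p vectors in R^n
    L = [rng.uniform(0.0, 1.0) for _ in range(n)]
    b = [rng.uniform(1.0, 2.0) for _ in range(m)]
    V = [rng.uniform(1.0, 2.0) for _ in range(n)]
    A = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(m)]  # m x n matrix
    return c, L, b, V, A
```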
It can be seen from Tables 2 and 3 that the proposed algorithm needs fewer iterations and interesting grid points, and so requires less computational time for solving this kind of random problems, than the algorithms given by [19, 21]. Tables 2 and 3 also show that the performance of the algorithms is strongly affected by changes in n and p, especially as p increases. The reason is that, according to the corresponding computational complexity results, the number of operations required by the algorithms ([19, 21] and ours) increases exponentially with p.
It is worth mentioning that, in Tables 2 and 3, the computational time of the proposed algorithm increases as n and p increase, but not as sharply as for the algorithms in [19, 21]. For example, in Table 2, the instances cannot be solved by the algorithms in [19, 21] within two hours when \(p\geq6\) and \(p\geq7\), respectively, while the presented algorithm can solve all instances with p increasing from 2 to 10 in less than two hours. This is due to the fact that the main computational cost of the algorithms ([19, 21] and ours) is the solution of linear feasibility problems at the interesting grid points. That is to say, the computational time for solving this kind of problems is directly affected by the number of interesting grid points. We notice that, for the algorithms in [19, 21], the number of iterations and the number of interesting grid points checked at each iteration increase with p. For the proposed algorithm, however, p is related to the number of iterations (see Step (\(k_{2}\)) in the proposed algorithm) but is independent of the number of interesting grid points checked at each iteration (see Step (\(k_{1}\))). This means that the proposed algorithm considers fewer interesting grid points and requires less computational time than the algorithms in [19, 21] for solving this kind of random problems. Moreover, from Table 3, notice that the algorithms in [19, 21] cannot solve the instances within two hours when \(n\geq150\) and \(p=4\), whereas all the selected instances can be solved by the proposed algorithm in no more than two hours. This is mainly because the more interesting grid points are considered, the more linear programs with n variables must be checked for feasibility; and the interesting grid points considered by the proposed algorithm are much fewer than those considered by the algorithms in [19, 21]. Consequently, the computational time of the proposed algorithm does not increase as sharply as that of the algorithms in [19, 21] as n increases.
Comparing the performance of the algorithms ([19, 21] and ours), the numerical results in Tables 1-3 show that the proposed algorithm is effective and that the computational results can be obtained within a reasonable time.
Results and discussion
In this work, a new solution algorithm for globally solving a class of generalized fractional programming problems is presented. As further work, we think these ideas can be extended to more general optimization problems, in which each \(c_{i}^{\top}x+c_{0i}\) and \(d_{i}^{\top}x+d_{0i}\) in the objective function of problem (P) is replaced with a convex function, respectively.
Conclusion
This article proposes a new approximation algorithm for solving a class of fractional programming problems (P) without quasiconcavity or low-rank assumptions. To solve this problem, the original problem (P) is first converted into a p-dimensional equivalent problem over a box constraint set; we then give a new approximation algorithm which is more easily implemented than the ones given in [19, 21]. Moreover, the computational complexity of the algorithm is derived to show that it is an FPTAS when p is fixed, and that its computational time increases exponentially with p. Also, the complexity results can be used as an indicator of the difficulty of some optimization problems falling into the category of (P), so that a more sophisticated approach would need to perform at least as well. Additionally, this article not only gives a provable bound on the running time of the proposed algorithm, but also guarantees the quality of the solution obtained for problem (P).
References
Konno, H, Gao, C, Saitoh, I: Cutting plane/tabu search algorithms for low rank concave quadratic programming problems. J. Glob. Optim. 13, 225-240 (1998)
Henderson, JM, Quandt, RE: Microeconomic Theory: A Mathematical Approach. McGraw-Hill, New York (1971)
Mulvey, JM, Vanderbei, RJ, Zenios, SA: Robust optimization of large-scale systems. Oper. Res. 43, 264-281 (1995)
Maling, K, Mueller, SH, Heller, WR: On finding most optimal rectangular package plans. In: Proceedings of the 19th Design Automation Conference, pp. 663-670 (1982)
Kuno, T: Polynomial algorithms for a class of minimum rank-two cost path problems. J. Glob. Optim. 15, 405-417 (1999)
Matsui, T: NP-hardness of linear multiplicative programming and related problems. J. Glob. Optim. 9, 113-119 (1996)
Schaible, S, Shi, J: Fractional programming: the sum-of-ratios case. Optim. Methods Softw. 18, 219-229 (2003)
Kuno, T, Masaki, T: A practical but rigorous approach to sum-of-ratios optimization in geometric applications. Comput. Optim. Appl. 54, 93-109 (2013)
Teles, JP, Castro, PM, Matos, HA: Multiparametric disaggregation technique for global optimization of polynomial programming problems. J. Glob. Optim. 55, 227-251 (2013)
Gao, YL, Xu, CX, Yang, YJ: An outcome-space finite algorithm for solving linear multiplicative programming. Appl. Math. Comput. 179, 494-505 (2006)
Shen, P, Wang, C: Global optimization for sum of generalized fractional functions. J. Comput. Appl. Math. 214, 1-12 (2008)
Wang, C, Shen, P: A global optimization algorithm for linear fractional programming. Appl. Math. Comput. 204, 281-287 (2008)
Shen, P, Yang, L, Liang, Y: Range division and contraction algorithm for a class of global optimization problems. Appl. Math. Comput. 242, 116-126 (2014)
Shen, PP, Li, WM, Liang, YC: Branch-reduction-bound algorithm for linear sum-of-ratios fractional programs. Pac. J. Optim. 11(1), 79-99 (2015)
Benson, HP: An outcome space branch and bound-outer approximation algorithm for convex multiplicative programming. J. Glob. Optim. 15, 315-342 (1999)
Benson, HP, Boger, GM: Outcome-space cutting-plane algorithm for linear multiplicative programming. J. Optim. Theory Appl. 104, 301-332 (2000)
Konno, H, Yajima, Y, Matsui, T: Parametric simplex algorithms for solving a special class of nonconvex minimization problems. J. Glob. Optim. 1, 65-81 (1991)
Liu, XJ, Umegaki, T, Yamamoto, Y: Heuristic methods for linear multiplicative programming. J. Glob. Optim. 15, 433-447 (1999)
Locatelli, M: Approximation algorithm for a class of global optimization problems. J. Glob. Optim. 55, 13-25 (2013)
Mittal, S, Schulz, AS: An FPTAS for optimizing a class of low-rank functions over a polytope. Math. Program. 141, 103-120 (2013)
Depetrini, D, Locatelli, M: Approximation algorithm for linear fractional multiplicative problems. Math. Program. 128, 437-443 (2011)
Goyal, V, Ravi, R: An FPTAS for minimizing a class of low-rank quasi-concave functions over a convex set. Oper. Res. Lett. 41, 191-196 (2013)
Depetrini, D, Locatelli, M: A FPTAS for a class of linear multiplicative problems. Comput. Optim. Appl. 44, 276-288 (2009)
Goyal, V, Genc-Kaya, L, Ravi, R: An FPTAS for minimizing the product of two non-negative linear cost functions. Math. Program. 126, 401-405 (2011)
Shen, P, Wang, C: Linear decomposition approach for a class of nonconvex programming problems. J. Inequal. Appl. 2017, 74 (2017). doi:10.1186/s13660-017-1342-y
Schaible, S, Ibaraki, T: Fractional programming. Eur. J. Oper. Res. 12, 325-338 (1983)
Shen, P, Zhao, X: A fully polynomial time approximation algorithm for linear sum-of-ratios fractional program. Math. Appl. 26, 355-359 (2013)
Hoai-Phuong, NT, Tuy, H: A unified monotonic approach to generalized linear fractional programming. J. Glob. Optim. 26, 229-259 (2003)
Shao, LZ, Ehrgott, M: An objective space cut and bound algorithm for convex multiplicative programmes. J. Glob. Optim. 58, 711-728 (2014)
Acknowledgements
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper. This work was supported by the National Natural Science Foundation of China (11671122), the Key Scientific Research Project in University of Henan Province (17A110006), and the Program for Innovative Research Team (in Science and Technology) in University of Henan Province (14IRTSTHN023).
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
PPS carried out the idea of this paper, the description of the algorithm and drafted the manuscript. TLZ completed the computation for numerical examples, and CFW carried out the analysis of computational complexity of the algorithm. All authors read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Shen, P., Zhang, T. & Wang, C. Solving a class of generalized fractional programming problems using the feasibility of linear programs. J Inequal Appl 2017, 147 (2017). https://doi.org/10.1186/s13660-017-1420-1
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s13660-017-1420-1
MSC
 90C30
 90C33
 90C15
Keywords
 generalized fractional programming
 global optimization
 approximation algorithm
 computational complexity