Linear decomposition approach for a class of nonconvex programming problems
Journal of Inequalities and Applications volume 2017, Article number: 74 (2017)
Abstract
This paper presents a linear decomposition approach for a class of nonconvex programming problems, obtained by dividing the input space into polynomially many grid regions. It shows that, under certain assumptions, the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to these grid points, we can obtain a near-optimal solution of the original problem. Compared with existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from existing methods, giving an interesting alternative approach that solves the problem with a reduced running time.
Introduction
Consider a class of nonconvex programming problems:
where \(k\geq2\), \(\varphi:\mathbb{R}^{k}\rightarrow\mathbb{R}_{+}\) is a continuous function, Ω is a nonempty polytope, \(b\in\mathbb {R}^{s}\), \(A\in\mathbb{R}^{s\times n}\), and \(a_{1},a_{2},\ldots,a_{k}\in \mathbb{R}^{n}\) are linearly independent vectors. The function f is called a low-rank function with rank k over a polytope Ω, as defined by Kelner and Nikolova [1]. Under this broad definition, multiplicative programming, quadratic programming, bilinear programming, and polynomial programming all fall into the category of problem (P), whose important applications can be found in several surveys (e.g., [2–7]). In general, nonconvex programming problems of the form (P) are known to be NP-hard; even minimizing the product of two linear functions (rank two) over a polytope is NP-hard [8]. As shown by Mittal and Schulz [9], the optimal value of problem (P) cannot be approximated to within any factor unless \(\mathrm{P}=\mathrm{NP}\). Hence, for solving problem (P), the following extra assumptions (\(\mathbb{A}1\))-(\(\mathbb {A}3\)) on the function f will be required:
 (\(\mathbb{A}1\)):

\(\varphi(y)\leq\varphi(y^{\prime})\), if \(y_{i}\leq y_{i}^{\prime}\), for each \(i=1,\ldots,k\);
 (\(\mathbb{A}2\)):

\(\varphi(\lambda y)\leq\lambda^{c}\varphi(y)\) for all \(y\in\mathbb{R}_{+}^{k},\lambda>1\) and some constant c;
 (\(\mathbb{A}3\)):

\(a^{\top}_{i}x>0 \text{ for all } x\in\Omega, i=1,\ldots,k\).
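As a concrete illustration (ours, not from the paper), the multiplicative-programming objective \(\varphi(y)=y_{1}\cdots y_{k}\) satisfies (\(\mathbb{A}1\)) and (\(\mathbb{A}2\)) with \(c=k\); a minimal Python sketch checking both properties:

```python
def phi(y):
    """Multiplicative-programming objective: phi(y) = y_1 * ... * y_k."""
    out = 1.0
    for yi in y:
        out *= yi
    return out

# (A1): phi is componentwise non-decreasing on R_+^k.
assert phi([2.0, 3.0]) <= phi([2.0, 4.0])

# (A2): phi(lambda * y) = lambda^k * phi(y), so the constant c equals k.
lam, y = 1.5, [2.0, 3.0]
assert abs(phi([lam * yi for yi in y]) - lam ** len(y) * phi(y)) < 1e-12
```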
An exhaustive reference on optimizing low-rank functions can be found in Konno and Thach [10]. Konno et al. [11] proposed cutting plane and tabu-search algorithms for low-rank concave quadratic programming problems. Porembski [12] gave a cutting plane solution approach for general low-rank concave minimization problems with a small number of variables. Additionally, some solution algorithms have been developed for special cases of problem (P) (e.g. [13–16]). The above methods are efficient heuristics but provide no theoretical analysis of the running time or performance of the algorithms.
The main purpose of this article is to present an approximation scheme with provable performance bounds for globally solving problem (P), obtaining an ε-approximate solution for any \(\varepsilon>0\) in time polynomial in the input size and \(\frac{1}{\varepsilon}\). For special cases of problem (P), there exists extensive work on ε-approximation problems. Vavasis [17] gave an approximation scheme for low-rank quadratic optimization problems. Depetrini and Locatelli [18] presented a fully polynomial-time approximation scheme (FPTAS) for minimizing the sum or product of ratios of linear functions over a polyhedron. Kelner and Nikolova [1] developed an expected polynomial-time smoothed algorithm for a class of low-rank quasi-concave minimization problems whose objective function satisfies a Lipschitz condition. Depetrini and Locatelli [19] proposed an FPTAS for minimizing the product of two linear functions over a polyhedral set. Additionally, for minimizing the product of two nonnegative linear cost functions, Goyal et al. [20] gave an FPTAS under the condition that the convex hull of the feasible solutions is known in terms of linear inequalities. The algorithm in [21] works for minimizing a class of low-rank quasi-concave functions over a convex set, and it solves a polynomial number of linear optimization problems. Mittal and Schulz [9] presented an FPTAS for minimizing a general class of low-rank functions over a polytope; their algorithm is based on constructing an approximate Pareto-optimal front of the linear functions that constitute the objective function.
In this paper, by exploiting the structure of problem (P), a suitable non-uniform grid for solving problem (P) is first constructed over a given \((k-1)\)-dimensional box. Based on the exploration of the grid nodes, the original problem (P) can then be transformed and decomposed into a polynomial number of subproblems, in which each subproblem corresponds to a grid node and is easy to solve as a linear program. Thus, the main computational effort of the proposed algorithm consists only in solving linear programming problems related to all nodes, which do not grow in size from one grid node to the next. Furthermore, it is verified that, by solving these linear programs, we can obtain an ε-approximate solution of the primal problem (P). The proposed algorithm has several features. First, in contrast with [19, 20, 22], the rank k of the objective function considered by the proposed algorithm is not limited to two. Second, the proposed algorithm requires neither differentiability of the objective function nor the inverse of its associated single-variable function, and it works for minimizing a class of more general functions, whereas Goyal and Ravi [21] and Kelner and Nikolova [1] both require the quasi-concavity assumption on the objective function. Third, although the non-uniform grids constructed for the algorithm in [21] and for ours are both based on subdividing a \((k-1)\)-dimensional hyperrectangle, the algorithm in [21] requires iterations that are not necessary for our algorithm or the one in [9]. Moreover, at each iteration of the algorithm in [21], one must solve a single-variable equation and the corresponding linear optimization problem for each grid node.
Finally, we emphasize that the efficiency of these algorithms (those of [9, 21] and ours) depends strongly on the number of grid nodes (or subproblems solved), which is determined by the dimension of the grid, for the same input size and tolerance ε. In fact, the non-uniform grid in [9] derives from partitioning a k-dimensional hypercube. Therefore, from the procedure of the algorithm and its computational complexity analysis, it can be seen that our work is independent of [9, 21], and the proposed algorithm differs significantly, giving an interesting alternative approach that solves the problem with a reduced running time.
The structure of this paper is as follows. The next section describes the equivalent problem and its decomposition technique. Section 3 presents the algorithm and its computational cost. Finally, conclusions and discussion are given in Sections 4 and 5.
Equivalent problem and its decomposition technique
Equivalent problem
For solving problem (P), we will propose an equivalent problem (Q). To this end, let us first denote
Assume, without loss of generality, that \(k= \arg\max\{\frac {u_{i}}{l_{i}} \mid i=1,\ldots,k\}\), and define a rectangle H given by
Thus, by introducing a variable \(y\in R^{k-1}\), problem (P) is equivalent to the following problem:
The key equivalent theorem for problems (P) and (Q) is given as follows.
Theorem 1
\(x^{\ast}\in R^{n}\) is a global optimum solution of problem (P) if and only if \((x^{\ast},y^{\ast})\in R^{n+k-1}\) is a global optimum solution of problem (Q), where \(y_{i}^{\ast}=a^{\top }_{i}x^{\ast}\) for each \(i=1,\ldots,k-1\). In addition, the global optimal values of problems (P) and (Q) are equal.
Proof
If \(x^{\ast}\) is a global optimal solution of problem (P), let
It is obvious that \((x^{\ast},y^{\ast})\in R^{n+k-1}\) is a feasible solution of problem (Q). Let \((x,y)\) be any feasible solution of problem (Q), i.e.,
According to the definition of \(y^{\ast}\) and the optimality of \(x^{\ast }\), we must have
Additionally, from (2.3) and the assumption (\(\mathbb{A}1\)), it follows that
Thus, (2.4) and (2.5) mean that \((x^{\ast},y^{\ast})\) is a global optimal solution to problem (Q).
Conversely, suppose that \((x^{\ast},y^{\ast})\) is a global optimal solution for problem (Q), then we have
By the assumption of φ, we can obtain
For any given \(x\in\Omega\), if we let \(y_{i}=a^{\top}_{i}x, i=1,\ldots ,k-1\), then \((x,y)\) is a feasible solution to problem (Q) with \(y=(y_{1},\ldots,y_{k-1})\in R^{k-1}\). Thus, from the optimality of \((x^{\ast},y^{\ast})\) it follows that
This means that \(x^{\ast}\) is a global optimal solution to problem (P). □
By Theorem 1, we conclude that, to solve problem (P), we may globally solve its equivalent problem (Q) instead. Moreover, it is easy to see that problems (P) and (Q) have the same global optimal value. Hence, we propose a decomposition approach for problem (Q) below.
Linear decomposition technique
Problem (Q) has a relatively low-rank decomposition structure because, in contrast to problem (P), the nonconvexity of the objective function involves only the term \(a^{\top}_{k}x\) once we fix \(y=(y_{1},\ldots ,y_{k-1})\in H\). In order to solve problem (Q), based on this observation, for any given \(\theta\in(0,1)\) we construct a polynomial-size grid by subdividing H into smaller rectangles, such that the ratio of successive divisions equals \((1+\theta)\) in each dimension. Thus, a polynomial-size grid is generated over H, where the set of grid nodes can be given by
where \(\mathcal{D}^{\theta}_{i}=\{l_{i}, l_{i}(1+\theta), \ldots, l_{i}(1+\theta)^{r_{i}}\}\) with
Note that under the assumption (\(\mathbb{A}3\)), \(l_{i}>0\) must hold for each i. Clearly, for any \((y_{1},y_{2},\ldots, y_{k-1})\in H\), there exists a point \((\upsilon_{1},\upsilon_{2},\ldots,\upsilon _{k-1})\in B^{\theta}\) such that
Thus, H can be approximated by the set \(B^{\theta}\). Next, for each grid node \(\upsilon\in B^{\theta}\), consider the corresponding subproblem as follows:
Notice that, by the assumption (\(\mathbb{A}1\)) of φ, for a given \(\upsilon\in B^{\theta}\), problem \(\mathrm{P1}(\upsilon)\) is equivalent to a linear problem \(\mathrm{P2}(\upsilon)\):
That is, for a fixed point \(\upsilon\in B^{\theta}\), \(x^{\upsilon}\) is the optimal solution of problem \(\mathrm{P1}(\upsilon)\) if and only if \(x^{\upsilon}\) is an optimal solution for problem \(\mathrm{P2}(\upsilon)\).
Clearly, for each \(\upsilon\in B^{\theta}\), the corresponding subproblems \(\mathrm{P1}(\upsilon)\) can easily be solved by a linear program \(\mathrm{P2}(\upsilon)\). Thus, we can decompose a nonconvex programming problem (Q) into a series of subproblems, and we can obtain its approximation global solution via the solutions of those linear programming problems when concerning all nodes υ over \(B^{\theta}\).
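To make the construction concrete, here is a small Python sketch (ours, not the authors' code) of one grid axis \(\mathcal{D}^{\theta}_{i}\): a geometric sequence starting at \(l_{i}\) with ratio \(1+\theta\), where \(r_{i}\) is the smallest integer such that \(l_{i}(1+\theta)^{r_{i}}\geq u_{i}\), so every \(y\in[l_{i},u_{i}]\) lies within a factor \(1+\theta\) of some node:

```python
import math

def grid_axis(l, u, theta):
    """D_i = {l, l(1+theta), ..., l(1+theta)^r} with r = ceil(log_{1+theta}(u/l)).

    Requires l > 0, which is guaranteed by assumption (A3).
    """
    r = math.ceil(math.log(u / l) / math.log(1 + theta))
    return [l * (1 + theta) ** j for j in range(r + 1)]

nodes = grid_axis(1.0, 10.0, 0.5)
# Covering property: every y in [l, u] has a node v with v <= y <= (1+theta)*v.
y = 7.3
v = max(n for n in nodes if n <= y)
assert v <= y <= 1.5 * v
```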
Algorithm and its computational complexity
In this section, we will propose an effective algorithm for getting the approximation solution to problem (P), and then analyze its computational complexity.
εapproximation algorithm
In what follows we introduce an algorithm for solving problem (P); it returns an ε-approximate solution of problem (P).
Based on the particularities of problem (P), a given rectangle H is first subdivided to construct the necessary non-uniform grid \(B^{\theta }\). The primal problem (P) can then be transformed and decomposed into a series of subproblems via the exploration of the grid nodes. Each subproblem is associated with a grid node in the proposed algorithm, and it can be solved as a linear program. A specific description is given as follows. Given \(\varepsilon\in(0,1)\), let \(\theta=(1+\varepsilon)^{\frac {1}{c}}-1\). The grid node set \(B^{\theta}\) can be generated by (2.6)-(2.7). For each \(\upsilon\in B^{\theta}\), solve problem \(\mathrm{P2}(\upsilon)\) to get the solution \(x^{\upsilon}\); the optimal value of the corresponding problem \(\mathrm{P1}(\upsilon)\) is denoted by \(\omega(\upsilon)=\varphi(\upsilon,a^{\top}_{k}x^{\upsilon})\), where we let \(\omega({\upsilon})= +\infty\) if the feasible set of \(\mathrm{P2}(\upsilon)\) is empty. The process is repeated until all the points of \(B^{\theta}\) have been considered. The detailed algorithm is Algorithm 1.
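The loop just described can be sketched as follows. This is our illustrative Python rendering, not the authors' implementation: `solve_lp` stands in for a real LP solver for \(\mathrm{P2}(\upsilon)\), and the toy instance (Ω = [1,2]², \(a_{1}=e_{1}\), \(a_{2}=e_{2}\), \(\varphi(y)=y_{1}y_{2}\), \(c=2\)) is hypothetical and chosen so that each LP has a closed-form solution:

```python
import math
from itertools import product

def decompose_and_solve(phi, c, solve_lp, bounds, eps):
    """epsilon-approximation scheme: one LP per node of the geometric grid."""
    theta = (1 + eps) ** (1.0 / c) - 1         # grid ratio from Step 0
    axes = []
    for l, u in bounds:                        # one geometric axis per i = 1..k-1
        r = math.ceil(math.log(u / l) / math.log(1 + theta))
        axes.append([l * (1 + theta) ** j for j in range(r + 1)])
    best_val, best_x = math.inf, None
    for v in product(*axes):                   # subproblem P2(v) for each grid node
        sol = solve_lp(v)
        if sol is None:                        # omega(v) = +infinity if infeasible
            continue
        x, yk = sol
        val = phi(*v, yk)                      # omega(v) = phi(v, a_k^T x)
        if val < best_val:
            best_val, best_x = val, x
    return best_x, best_val

# Toy instance: minimize x1 * x2 over the box [1, 2] x [1, 2].
def solve_lp(v):
    # min x2 s.t. x in [1,2]^2 and x1 <= v[0]; infeasible when v[0] < 1.
    if v[0] < 1.0:
        return None
    return (min(v[0], 2.0), 1.0), 1.0

x, val = decompose_and_solve(lambda y1, y2: y1 * y2, 2, solve_lp, [(1.0, 2.0)], 0.1)
```

On this instance the exact optimum is 1 at (1, 1), and the scheme recovers it because the first grid node is \(l_{1}\) itself.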
The following theorem shows that the proposed algorithm can reach an optimal solution to problem (P).
Theorem 2
Given \(\varepsilon>0\), an ε-optimal solution x̃ to problem (P) can be obtained from the proposed algorithm in the sense that
where \(x^{\ast}\) is the optimal solution of problem (P).
Proof
Let
From \(x^{\ast}\) being the optimal solution of problem (P), we have
This implies that \((y^{\ast}_{1},y^{\ast}_{2},\ldots,y^{\ast}_{k-1})\in H\), so there exists some \(\upsilon^{\ast}\in B^{\theta}\) which satisfies
Thus, combining with the assumptions of φ, we have
Now, suppose that x̄ is the optimal solution of problem \(\mathrm{P1}(v^{\ast})\). Then \(x^{\ast}\in\Omega\) together with (3.1)-(3.2) implies that \(x^{\ast}\) is a feasible solution of problem \(\mathrm{P1}(v^{\ast})\). Thus we have
Additionally, let \(\tilde{\upsilon}=\arg\min\{\omega(\upsilon) \mid \upsilon\in B^{\theta}\}\). Since x̃ is the optimal solution of problem \(\mathrm{P1}(\tilde{\upsilon})\), it follows that \(a_{i}^{\top}\tilde {x}\leq\tilde{v}_{i}, i=1,\ldots,k-1\); thus, we can get
According to the definitions of ṽ and x̄, we have
Hence, from (3.3)-(3.6) and \(\theta=(1+\varepsilon)^{\frac {1}{c}}-1\), we can conclude that
and so x̃ is an ε-approximate solution to problem (P). □
By Theorem 1 we also have the following corollary.
According to the above discussion, the ε-approximate solution to problem (P) can be obtained by solving \(\vert B^{\theta } \vert \) (the number of grid nodes in \(B^{\theta}\)) linear programming problems \(\mathrm{P2}(\upsilon)\) with \(\upsilon\in B^{\theta}\). However, it is not necessary to solve each \(\mathrm{P2}(\upsilon)\) associated with each \(\upsilon\in B^{\theta}\) when searching for the solution of problem (P); that is, by using the following proposition we can obtain an improvement of the algorithm.
Proposition 1
Let \(\hat{x}=\arg\min\{a^{\top}_{k}x \mid x\in\Omega\}\). Then x̂ is an optimal solution of problem P1\((\upsilon)\) for any \(\upsilon\in\hat{B}^{\theta}\), where
Proof
Suppose that \(\bar{x}^{\upsilon}\) is any feasible solution of problem \(\mathrm{P1}(\upsilon)\) with \(\upsilon\in\hat{B}^{\theta}\). By the definition of x̂, we see that x̂ is a feasible solution of problem P1\((\upsilon)\) for any \(\upsilon\in\hat{B}^{\theta }\). Since φ is non-decreasing, it follows that
which concludes the proof. □
Proposition 1 shows that x̂ is the optimal solution of subproblem \(\mathrm{P1}(\upsilon)\) for any \(\upsilon\in\hat{B}^{\theta}\). Therefore, in practical implementations, we are only required to solve the subproblems \(\mathrm{P2}(\upsilon)\) associated with the points contained in the set \(B^{\theta} \setminus \hat{B}^{\theta}\). A further note on \(\hat {B}^{\theta}\) is as follows.
For any \(\theta\in(0,1)\), by the definition of H, let
where \(\hat{y}_{i}=a^{\top}_{i}\hat{x}\) with \(\hat{x}=\arg\min\{a^{\top }_{k}x \mid x\in\Omega\}\). Combining the definition of \(r_{i},i=1,\ldots,k-1\), with the above result, the set \(\hat{B}^{\theta}\) can be given by
Let
This means that the ε-approximate solution to problem (P) can be obtained by solving only \(\vert T^{\theta} \vert \) (the number of points in the set \(T^{\theta}\)) linear programming subproblems \(\mathrm{P2}(\upsilon)\) for all \(\upsilon\in T^{\theta}\). Thus the proposed algorithm can be improved as Algorithm 2.
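The node-pruning idea of Proposition 1 can be sketched in a few lines (our illustration, with hypothetical toy data): any grid node υ that dominates \(\hat{y}=(a^{\top}_{1}\hat{x},\ldots,a^{\top}_{k-1}\hat{x})\) componentwise needs no LP solve, since x̂ is already optimal for \(\mathrm{P1}(\upsilon)\):

```python
def split_nodes(nodes, y_hat):
    """Partition grid nodes per Proposition 1.

    Nodes dominating y_hat componentwise are solved for free by x_hat,
    the minimizer of a_k^T x over Omega; only the rest need an LP solve.
    """
    free = [v for v in nodes if all(vi >= yi for vi, yi in zip(v, y_hat))]
    need_lp = [v for v in nodes if v not in free]
    return need_lp, free

nodes = [(1.0, 1.0), (1.0, 2.0), (2.0, 1.0), (2.0, 2.0)]
need_lp, free = split_nodes(nodes, (1.5, 1.5))
assert free == [(2.0, 2.0)] and len(need_lp) == 3
```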
Notice that, when the proposed improved algorithm stops, we can obtain an εoptimal solution x̃ to problem (P) with the objective value L̃.
Computational complexity for the algorithm
Now we consider the complexity analysis of the proposed improved algorithm. By (3.8)-(3.10), we can conclude that the number of the grid nodes belonging to \(\hat{B}^{\theta}\) is at least
where \(\hat{y}_{i}=a^{\top}_{i}\hat{x}\) with \(\hat{x}=\arg\min\{a^{\top }_{k}x \mid x\in\Omega\}\). On the other hand, we know from (2.4) that the total number of points in the set \(B^{\theta}\) is equal to \(\prod_{i=1}^{k-1}r_{i}\), with \(r_{i}\) satisfying (2.5). Thus, it follows that the number of elements in \(B^{\theta}\) is at most
Combining (3.9) with (3.10), the proposed improved algorithm requires that the number of grid nodes considered in actual computation is not more than
Theorem 3
Let \(\hat{x}=\arg\min\{a^{\top}_{k}x \mid x\in \Omega\}\), \(L=\min_{i=1,\ldots,k-1}\{\frac{u_{i}}{\hat{y}_{i}}\}\) with \(\hat{y}_{i}=a^{\top}_{i}\hat{x}\), and let \(U=\max_{i=1,\ldots,k-1}\frac{u_{i}}{l_{i}}\). When k is fixed, the running time of the improved algorithm for obtaining an ε-optimal solution for problem (P) is bounded from above by
where \(\xi\in(\log L,\log U)\), and \(\operatorname{cost}( \vert \pi \vert ,n)\) is the time taken to solve a linear program in n variables and input size of \(\vert \pi \vert \) bits.
Proof
By Step 0 of the improved algorithm, it follows that
and
From the above results and (3.13), we have
Thus, the upper bound of the number of grid points Ξ is
The result of (3.15) holds because \(\log(1+\varepsilon)\approx \varepsilon\) for small ε values. By using the Lagrange mean value theorem, there exists some \(\xi\in (\log L,\log U)\) such that
Thus it follows from (3.14)-(3.16) that the total number of grid nodes considered in the improved algorithm is not more than
Note that \(\log U\) and \(\log L\) can be computed in time polynomial in the input size of the problem. Additionally, for each grid node υ in the set \(T^{\theta}\), a corresponding linear programming problem \(\mathrm{P2}(\upsilon)\) must be solved. Therefore, for a fixed k, the running time required by the improved algorithm to obtain an ε-optimal solution for problem (P) is bounded from above by
where \(\xi\in(\log L,\log U)\). □
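The counting behind Theorem 3 can be summarized as follows (a sketch in our notation, consistent with the quantities defined above):

```latex
% Each grid axis has r_i + 1 nodes; with \theta=(1+\varepsilon)^{1/c}-1 we get
% \log(1+\theta)=\tfrac{1}{c}\log(1+\varepsilon)\approx\varepsilon/c for small
% \varepsilon, so
\[
  r_i \le \left\lceil \frac{\log (u_i/l_i)}{\log(1+\theta)} \right\rceil
      \le \frac{c \log U}{\log(1+\varepsilon)} + 1
      \approx \frac{c \log U}{\varepsilon} + 1 .
\]
% The Lagrange mean value theorem applied to t \mapsto t^{k-1} gives, for some
% \xi \in (\log L, \log U),
\[
  (\log U)^{k-1} - (\log L)^{k-1} = (k-1)\,\xi^{\,k-2}\bigl(\log U - \log L\bigr),
\]
% which yields the stated count of
% O\!\left(\log\tfrac{U}{L}\cdot\frac{c^{\,k-1}\xi^{\,k-2}}{\varepsilon^{k-1}}\right)
% linear programs.
```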
In view of the above theorem, we conclude that the running time of the proposed improved algorithm is polynomial in the input size and \(\frac {1}{\varepsilon}\) for fixed k; hence the algorithm is an FPTAS (fully polynomial-time approximation scheme) for problem (P).
Comparison with [9, 21]: The algorithm in [9] searches for the optimal objective value in a k-dimensional grid, which requires checking the feasibility of a linear program for each grid node; thus the total number of linear programs solved by their method is \(O(\frac{c^{k}(\log\frac {M}{m})^{k}}{\varepsilon^{k}})\), where \(M=\max\{a_{i}^{\top}x \mid x\in\Omega, i=1,\ldots,k\}\) and \(m=\min\{a_{i}^{\top}x \mid x\in\Omega, i=1,\ldots,k\}\). In the algorithm of [21], the number of linear optimization problems solved over a convex set in each iteration is \(O(\frac{c^{k-1}(\log R)^{k-1}}{\varepsilon^{k-1}})\), where \(R=\max\{ \frac{u_{i}}{l_{i}} \mid i=1,\ldots,k\}\). Also, at each iteration of the algorithm of [21], the ratio of the upper and lower bounds of the objective value can be reduced by a constant factor; hence the number of iterations is \(O(\frac{c}{\varepsilon}\cdot\log\frac{z^{0}_{U}}{z^{0}_{L}})\), where \(z^{0}_{U}\) (\(z^{0}_{L}\)) denotes the initial upper (lower) bound on the objective value. This implies that the algorithm in [21] solves \(O(\log\frac{z^{0}_{U}}{z^{0}_{L}}\cdot\frac{c^{k}(\log R)^{k-1}}{\varepsilon^{k}})\) linear optimization problems over a convex set. In this article, as can be seen in (3.17), the proposed algorithm solves \(O (\log\frac{U}{L}\cdot\frac{c^{k-1}\xi^{k-2} }{\varepsilon ^{k-1}} )\) different linear programs, and the running time is of \((k-1)\)-th order in \(\frac{1}{\varepsilon}\), compared with the k-th order in \(\frac{1}{\varepsilon}\) in [9, 21].
Conclusions
In this article, we present a new linear decomposition algorithm for globally solving a class of nonconvex programming problems. First, the original problem is transformed and decomposed into a polynomial number of equivalent linear programming subproblems by exploiting a suitable non-uniform grid. Second, compared with existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function; furthermore, the rank k of the objective function is not limited to two. Finally, the computational complexity of the algorithm is given to show that it differs significantly from existing methods, giving an interesting alternative approach that solves problem (P) with a reduced running time.
Results and discussion
In this work, a new linear decomposition algorithm for globally solving a class of nonconvex programming problems is presented. As further work, we think the ideas can be extended to more general types of optimization problems, in which each \(a^{\top}_{i}x\) in the objective function of problem (P) is replaced with a convex function.
References
Kelner, JA, Nikolova, E: On the hardness and smoothed complexity of quasi-concave minimization. In: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, Providence, RI, pp. 472-482 (2007)
Bennett, KP: Global tree optimization: a non-greedy decision tree algorithm. In: Computing Sciences and Statistics 26, pp. 156-160 (1994)
Bloemhof-Ruwaard, JM, Hendrix, EMT: Generalized bilinear programming: an application in farm management. Eur. J. Oper. Res. 90, 102-114 (1996)
Konno, H, Shirakawa, H, Yamazaki, H: A mean-absolute deviation-skewness portfolio optimization model. Ann. Oper. Res. 45, 205-220 (1993)
Maranas, CD, Androulakis, IP, Floudas, CA, Berger, AJ, Mulvey, JM: Solving long-term financial planning problems via global optimization. J. Econ. Dyn. Control 21, 1405-1425 (1997)
Pardalos, PM, Vavasis, SA: Quadratic programming with one negative eigenvalue is NP-hard. J. Glob. Optim. 1, 15-22 (1991)
Quesada, I, Grossmann, IE: Alternative bounding approximations for the global optimization of various engineering design problems. In: Global Optimization in Engineering Design 9, pp. 309-331 (1996)
Matsui, T: NP-hardness of linear multiplicative programming and related problems. J. Glob. Optim. 9, 113-119 (1996)
Mittal, S, Schulz, AS: An FPTAS for optimizing a class of low-rank functions over a polytope. Math. Program. 141, 103-120 (2013)
Konno, H, Thach, PT, Tuy, H: Optimization on Low Rank Nonconvex Structures. Kluwer Academic, Dordrecht (1996)
Konno, H, Gao, C, Saitoh, I: Cutting plane/tabu search algorithms for low rank concave quadratic programming problems. J. Glob. Optim. 13, 225-240 (1998)
Porembski, M: Cutting planes for low-rank-like concave minimization problems. Oper. Res. 52, 942-953 (2004)
Shen, P, Wang, C: Global optimization for sum of linear ratios problem with coefficients. Appl. Math. Comput. 176(1), 219-229 (2006)
Wang, C, Shen, P: A global optimization algorithm for linear fractional programming. Appl. Math. Comput. 204, 281-287 (2008)
Wang, C, Liu, S: A new linearization method for generalized linear multiplicative programming. Comput. Oper. Res. 38(7), 1008-1013 (2011)
Jiao, HW, Liu, SY: A practicable branch and bound algorithm for sum of linear ratios problem. Eur. J. Oper. Res. 243, 723-730 (2015)
Vavasis, SA: Approximation algorithms for indefinite quadratic programming. Math. Program. 57, 279-311 (1992)
Depetrini, D, Locatelli, M: Approximation algorithm for linear fractional multiplicative problems. Math. Program. 128, 437-443 (2011)
Depetrini, D, Locatelli, M: A FPTAS for a class of linear multiplicative problems. Comput. Optim. Appl. 44, 275-288 (2009)
Goyal, V, Genc-Kaya, L, Ravi, R: An FPTAS for minimizing the product of two non-negative linear cost functions. Math. Program. 126, 401-405 (2011)
Goyal, V, Ravi, R: An FPTAS for minimizing a class of low-rank quasi-concave functions over a convex set. Oper. Res. Lett. 41, 191-196 (2013)
Kern, W, Woeginger, G: Quadratic programming and combinatorial minimum weight product problem. Math. Program. 100, 641-649 (2007)
Acknowledgements
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper.
This work was supported by the National Natural Science Foundation of China (11671122) and the Program for Innovative Research Team (in Science and Technology) in University of Henan Province (14IRTSTHN023).
Author information
Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
PPS carried out the idea of this paper, the description of linear decomposition algorithm and drafted the manuscript. CFW carried out the analysis of computational complexity of the algorithm. All authors read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Shen, P., Wang, C. Linear decomposition approach for a class of nonconvex programming problems. J Inequal Appl 2017, 74 (2017). https://doi.org/10.1186/s13660-017-1342-y
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s13660-017-1342-y
MSC
 90C30
 90C33
 90C15
Keywords
 nonconvex programming
 global optimization
 linear decomposition approach
 approximation algorithm
 computational complexity