Linear decomposition approach for a class of nonconvex programming problems
Peiping Shen^{1,2} and Chunfeng Wang^{1,2}
https://doi.org/10.1186/s13660-017-1342-y
© The Author(s) 2017
Received: 6 February 2017
Accepted: 23 March 2017
Published: 13 April 2017
Abstract
This paper presents a linear decomposition approach for a class of nonconvex programming problems, based on dividing the input space into polynomially many grids. It shows that, under certain assumptions, the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to the grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting alternative approach to solving the problem with a reduced running time.
1 Introduction
 (\(\mathbb{A}1\)):

\(\varphi(y)\leq\varphi(y^{\prime})\), if \(y_{i}\leq y_{i}^{\prime}\), for each \(i=1,\ldots,k\);
 (\(\mathbb{A}2\)):

\(\varphi(\lambda y)\leq\lambda^{c}\varphi(y)\) for all \(y\in\mathbb{R}_{+}^{k},\lambda>1\) and some constant c;
 (\(\mathbb{A}3\)):

\(a^{\top}_{i}x>0,\text{ for } i=1,\ldots,k\).
An exhaustive reference on optimizing low-rank functions can be found in Konno and Thach [10]. Konno et al. [11] proposed cutting plane and tabu-search algorithms for low-rank concave quadratic programming problems. Porembski [12] gave a cutting plane solution approach for general low-rank concave minimization problems with a small number of variables. Additionally, some solution algorithms have been developed for special cases of problem (P) (e.g. [13–16]). The above solution methods are efficient heuristics, but they provide no theoretical analysis of the running time or performance of the algorithms.
The main purpose of this article is to present an approximation scheme with provable performance bounds for globally solving problem (P), that is, one that obtains an ε-approximate solution for any \(\varepsilon>0\) in time polynomial in the input size and \(\frac{1}{\varepsilon}\). For special cases of problem (P), there exists extensive work on solving the ε-approximation problem. Vavasis [17] gave an approximation scheme for low-rank quadratic optimization problems. Depetrini and Locatelli [18] presented a fully polynomial-time approximation scheme (FPTAS) for minimizing the sum or product of ratios of linear functions over a polyhedron. Kelner and Nikolova [1] developed an expected polynomial-time smoothed algorithm for a class of low-rank quasi-concave minimization problems whose objective function satisfies the Lipschitz condition. Depetrini and Locatelli [19] proposed an FPTAS for minimizing the product of two linear functions over a polyhedral set. Additionally, for minimizing the product of two nonnegative linear cost functions, Goyal et al. [20] gave an FPTAS under the condition that the convex hull of the feasible solutions is known in terms of linear inequalities. The algorithm in [21] works for minimizing a class of low-rank quasi-concave functions over a convex set, and it solves a polynomial number of linear optimization problems. Mittal and Schulz [9] presented an FPTAS for minimizing a general class of low-rank functions over a polytope; their algorithm is based on constructing an approximate Pareto-optimal front of the linear functions that constitute the objective function.
In this paper, by exploiting the features of problem (P), a suitable nonuniform grid for solving problem (P) is first constructed over a given \((k-1)\)-dimensional box. Based on the exploration of the grid nodes, the original problem (P) can then be transformed and decomposed into a polynomial number of subproblems, where each subproblem corresponds to a grid node and is easy to solve as a linear program. Thus, the main computational effort of the proposed algorithm consists only in solving the linear programming problems associated with the nodes, whose size does not grow from one grid node to the next. Furthermore, it is verified that by solving these linear programs we can obtain an ε-approximate solution of the primal problem (P). The proposed algorithm has several features. First, in contrast with [19, 20, 22], the rank k of the objective function considered by the proposed algorithm is not limited to around two. Second, the proposed algorithm requires neither differentiability of the objective function nor the inverse of the single-variable function associated with it, and it works for minimizing a class of more general functions, whereas Goyal and Ravi [21] and Kelner and Nikolova [1] both require the quasi-concavity assumption on the objective function. Third, although the nonuniform grids constructed for the algorithm in [21] and for ours are both based on subdividing a \((k-1)\)-dimensional hyperrectangle, the algorithm in [21] requires iterations that are not necessary for our algorithm or the one in [9]. Moreover, at each iteration of the algorithm in [21], one is required to solve a single-variable equation and the corresponding linear optimization problem for each grid node.
Finally, we emphasize that the efficiency of these algorithms (those of [9, 21] and ours) strongly depends on the number of grid nodes (i.e., of subproblems solved), which is governed by the dimension of the grid, for the same input size and tolerance ε. In fact, the nonuniform grid in [9] derives from partitioning a k-dimensional hypercube. Therefore, from the procedure of the algorithm and its computational complexity analysis, it can be seen that our work is independent of [9, 21], and the proposed algorithm differs significantly from them, giving an interesting alternative approach to solving the problem with a reduced running time.
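For illustration only: the precise grid definition appears in the body of the paper, so the sketch below shows one standard way to build such a nonuniform grid over a box, spacing the points of each coordinate geometrically with ratio \(1+\varepsilon/c\). The ratio and the bound functions are our assumptions (motivated by the role of the constant c in assumption (A2)); the paper's exact spacing may differ.

```python
import math
from itertools import product

def geometric_axis(l, u, eps, c):
    """Points l, l*r, l*r^2, ... with r = 1 + eps/c, covering [l, u].

    Assumes 0 < l <= u (cf. assumption (A3), which keeps a_i^T x > 0);
    the last point may overshoot u so the whole interval is covered.
    """
    r = 1.0 + eps / c
    n = math.ceil(math.log(u / l, r)) if u > l else 0
    return [l * r ** j for j in range(n + 1)]

def build_grid(bounds, eps, c):
    """Cartesian product of the geometric subdivisions of each coordinate
    of a (k-1)-dimensional box given as bounds = [(l_1, u_1), ...]."""
    axes = [geometric_axis(l, u, eps, c) for (l, u) in bounds]
    return list(product(*axes))

# with eps = c = 1 the ratio is 2, so [1, 8] x [1, 4] yields a 4 x 3 grid
grid = build_grid([(1.0, 8.0), (1.0, 4.0)], eps=1.0, c=1.0)
```

Each coordinate then needs only \(O(\frac{c}{\varepsilon}\log\frac{u_{i}}{l_{i}})\) points, since \(\log_{1+\varepsilon/c}(u/l)=O(\frac{c}{\varepsilon}\log\frac{u}{l})\) for small ε, which is where polynomial node counts of this kind come from.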
The structure of this paper is as follows. The next section describes the equivalent problem and its decomposition technique. Section 3 presents the algorithm and its computational cost. Finally, conclusions are drawn and discussions presented in Sections 4 and 5.
2 Equivalent problem and its decomposition technique
2.1 Equivalent problem
Theorem 1
\(x^{\ast}\in\mathbb{R}^{n}\) is a global optimal solution of problem (P) if and only if \((x^{\ast},y^{\ast})\in\mathbb{R}^{n+k-1}\) is a global optimal solution of problem (Q), where \(y_{i}^{\ast}=a^{\top}_{i}x^{\ast}\) for each \(i=1,\ldots,k-1\). In addition, the global optimal values of problems (P) and (Q) are equal.
Proof
By Theorem 1, we conclude that, to solve problem (P), we may globally solve its equivalent problem (Q) instead. Moreover, problems (P) and (Q) have the same global optimal value. Hence, we propose a decomposition approach for problem (Q) below.
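As a toy numerical check of Theorem 1 (with a hypothetical φ and data chosen by us, since the explicit forms of (P) and (Q) are stated in the body of the paper), introducing \(y_{i}=a_{i}^{\top}x\) for \(i=1,\ldots,k-1\) leaves the objective value unchanged:

```python
# toy instance with k = 3, n = 2; this phi is monotone, as required by (A1)
a = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]        # rows a_1, a_2, a_3

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def phi(y):                                      # hypothetical phi
    return y[0] * y[1] + y[2]

x = (2.0, 3.0)                                   # a point with a_i^T x > 0

# objective of (P) at x: phi(a_1^T x, ..., a_k^T x)
f_p = phi([dot(ai, x) for ai in a])

# objective of (Q) at (x, y*) with y_i* = a_i^T x for i = 1, ..., k-1
y_star = [dot(a[0], x), dot(a[1], x)]
f_q = phi(y_star + [dot(a[2], x)])
# f_p == f_q: the two problems agree at corresponding points
```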
2.2 Linear decomposition technique
Clearly, for each \(\upsilon\in B^{\theta}\), the corresponding subproblem \(\mathrm{P1}(\upsilon)\) can easily be solved via a linear program \(\mathrm{P2}(\upsilon)\). Thus, we can decompose the nonconvex programming problem (Q) into a series of subproblems, and we can obtain an approximate global solution from the solutions of these linear programming problems over all nodes υ in \(B^{\theta}\).
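To make the shape of the decomposition concrete, the following is a minimal sketch with made-up data: the true subproblems \(\mathrm{P1}(\upsilon)\)/\(\mathrm{P2}(\upsilon)\) are defined in the body of the paper, so here a toy "LP" over a vertex-listed polytope stands in for \(\mathrm{P2}(\upsilon)\) (a linear program over a polytope attains its optimum at a vertex), and the loop keeps the best candidate under the original objective:

```python
def decompose_and_solve(grid, solve_subproblem, phi):
    """Solve one linear subproblem per grid node and keep the best
    candidate under the original objective phi."""
    best_x, best_val = None, float("inf")
    for v in grid:
        x = solve_subproblem(v)
        if x is None:                    # subproblem infeasible at this node
            continue
        val = phi(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# hypothetical instance: minimize phi(x) = x1 * x2 over a polytope
# listed by its vertices (all coordinates positive, cf. (A3))
vertices = [(1.0, 3.0), (2.0, 1.0), (3.0, 2.0)]

def toy_subproblem(v):
    """Stand-in for P2(v): minimize x2 over feasible vertices with x1 <= v."""
    feasible = [x for x in vertices if x[0] <= v[0]]
    return min(feasible, key=lambda x: x[1]) if feasible else None

grid = [(1.0,), (2.0,), (4.0,)]          # geometric grid on the y1-axis
best_x, best_val = decompose_and_solve(grid, toy_subproblem,
                                       lambda x: x[0] * x[1])
# best_x == (2.0, 1.0), best_val == 2.0: the true minimizer of x1*x2 here
```

In a real implementation, `toy_subproblem` would be replaced by a call to an LP solver on \(\mathrm{P2}(\upsilon)\); only the right-hand sides change from one node to the next, which is why the subproblem size does not grow.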
3 Algorithm and its computational complexity
In this section, we propose an effective algorithm for obtaining an approximate solution to problem (P), and then analyze its computational complexity.
3.1 ε-approximation algorithm
In what follows we introduce an algorithm for solving problem (P) that returns an ε-approximate solution of problem (P).
The following theorem shows that the proposed algorithm can reach an optimal solution to problem (P).
Theorem 2
Proof
By Theorem 1 we also have the following corollary.
According to the above discussion, an ε-approximate solution to problem (P) can be obtained by solving \(\vert B^{\theta}\vert\) (the number of grid nodes in \(B^{\theta}\)) linear programming problems \(\mathrm{P2}(\upsilon)\) with \(\upsilon\in B^{\theta}\). However, it is not necessary to solve every \(\mathrm{P2}(\upsilon)\) associated with each \(\upsilon\in B^{\theta}\) when searching for the solution of problem (P); that is, by using the following proposition we can obtain an improvement of the algorithm.
Proposition 1
Proof
Proposition 1 shows that x̂ is the optimal solution of subproblem \(\mathrm{P1}(\upsilon)\) for any \(\upsilon\in\hat{B}^{\theta}\). Therefore, in practical implementations, we are only required to solve the subproblems \(\mathrm{P2}(\upsilon)\) associated with the points contained in the set \(B^{\theta}\setminus\hat{B}^{\theta}\). A further note on \(\hat{B}^{\theta}\) follows.
Notice that, when the proposed improved algorithm stops, we obtain an ε-optimal solution x̃ to problem (P) with objective value L̃.
3.2 Computational complexity for the algorithm
Theorem 3
Proof
In view of the above theorem, the running time of the proposed improved algorithm is polynomial in the input size and \(\frac{1}{\varepsilon}\) for fixed k; hence the algorithm is an FPTAS (fully polynomial-time approximation scheme) for problem (P).
Comparison with [9, 21]: The algorithm in [9] searches for the optimal objective value in a k-dimensional grid, which requires checking the feasibility of a linear program for each grid node; thus the total number of linear programs solved by their method is \(O(\frac{c^{k}(\log\frac{M}{m})^{k}}{\varepsilon^{k}})\), where \(M=\max\{a_{i}^{\top}x : x\in\Omega, i=1,\ldots,k\}\) and \(m=\min\{a_{i}^{\top}x : x\in\Omega, i=1,\ldots,k\}\). In the algorithm of [21], the number of linear optimization problems solved over a convex set in each iteration is \(O(\frac{c^{k-1}(\log R)^{k-1}}{\varepsilon^{k-1}})\), where \(R=\max\{\frac{u_{i}}{l_{i}} : i=1,\ldots,k\}\). Also, at each iteration of the algorithm in [21], the ratio of the upper and lower bounds of the objective value is reduced by a constant factor; hence the number of iterations is \(O(\frac{c}{\varepsilon}\cdot\log\frac{z^{0}_{U}}{z^{0}_{L}})\), where \(z^{0}_{U}\) (\(z^{0}_{L}\)) denotes the initial upper (lower) bound on the objective value. This implies that the algorithm in [21] solves \(O(\log\frac{z^{0}_{U}}{z^{0}_{L}}\cdot\frac{c^{k}(\log R)^{k-1}}{\varepsilon^{k}})\) linear optimization problems over a convex set. In this article, as can be seen in (3.17), the proposed algorithm solves \(O (\log\frac{U}{L}\cdot\frac{c^{k-1}\xi^{k-2}}{\varepsilon^{k-1}} )\) different linear programs, so the running time is of \((k-1)\)th order in \(\frac{1}{\varepsilon}\), compared with the kth order in \(\frac{1}{\varepsilon}\) in [9, 21].
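Ignoring the constants hidden in the O(·) bounds, the improvement in the order of \(\frac{1}{\varepsilon}\) can be illustrated numerically; the functions below compute only the dominant terms of the counts (with parameter values chosen arbitrarily by us), not exact node counts:

```python
import math

def lp_count_ref9(c, M, m, eps, k):
    """Dominant term of the LP count in [9]: (c * log(M/m) / eps)^k."""
    return (c * math.log(M / m) / eps) ** k

def lp_count_proposed(c, U, L, xi, eps, k):
    """Dominant term here: log(U/L) * c^(k-1) * xi^(k-2) / eps^(k-1)."""
    return math.log(U / L) * c ** (k - 1) * xi ** (k - 2) / eps ** (k - 1)

# halving eps scales the count of [9] by 2^k, but ours only by 2^(k-1)
k, c, xi, U, L = 3, 1.0, 2.0, 100.0, 1.0
growth_ref9 = (lp_count_ref9(c, U, L, 0.05, k)
               / lp_count_ref9(c, U, L, 0.1, k))
growth_ours = (lp_count_proposed(c, U, L, xi, 0.05, k)
               / lp_count_proposed(c, U, L, xi, 0.1, k))
# growth_ref9 == 8.0 (= 2^3), growth_ours == 4.0 (= 2^2)
```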
4 Conclusions
In this article, we presented a new linear decomposition algorithm for globally solving a class of nonconvex programming problems. First, by exploiting a suitable nonuniform grid, the original problem is transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Second, compared with existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and, further, the rank k of the objective function is not limited to around two. Finally, the computational complexity of the algorithm is given, showing that it differs significantly from existing methods and provides an interesting alternative approach to solving problem (P) with a reduced running time.
5 Results and discussion
In this work, a new linear decomposition algorithm for globally solving a class of nonconvex programming problems has been presented. As further work, we think the ideas can be extended to more general types of optimization problems, in which each \(a^{\top}_{i}x\) in the objective function of problem (P) is replaced by a convex function.
Declarations
Acknowledgements
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper.
This work was supported by the National Natural Science Foundation of China (11671122) and the Program for Innovative Research Team (in Science and Technology) in University of Henan Province (14IRTSTHN023).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. Kelner, JA, Nikolova, E: On the hardness and smoothed complexity of quasi-concave minimization. In: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, Providence, RI, pp. 472–482 (2007)
2. Bennett, KP: Global tree optimization: a non-greedy decision tree algorithm. In: Computing Sciences and Statistics 26, pp. 156–160 (1994)
3. Bloemhof-Ruwaard, JM, Hendrix, EMT: Generalized bilinear programming: an application in farm management. Eur. J. Oper. Res. 90, 102–114 (1996)
4. Konno, H, Shirakawa, H, Yamazaki, H: A mean-absolute deviation-skewness portfolio optimization model. Ann. Oper. Res. 45, 205–220 (1993)
5. Maranas, CD, Androulakis, IP, Floudas, CA, Berger, AJ, Mulvey, JM: Solving long-term financial planning problems via global optimization. J. Econ. Dyn. Control 21, 1405–1425 (1997)
6. Pardalos, PM, Vavasis, SA: Quadratic programming with one negative eigenvalue is NP-hard. J. Glob. Optim. 1, 15–22 (1991)
7. Quesada, I, Grossmann, IE: Alternative bounding approximations for the global optimization of various engineering design problems. In: Global Optimization in Engineering Design 9, pp. 309–331 (1996)
8. Matsui, T: NP-hardness of linear multiplicative programming and related problems. J. Glob. Optim. 9, 113–119 (1996)
9. Mittal, S, Schulz, AS: An FPTAS for optimizing a class of low-rank functions over a polytope. Math. Program. 141, 103–120 (2013)
10. Konno, H, Thach, PT, Tuy, H: Optimization on Low Rank Nonconvex Structures. Kluwer Academic, Dordrecht (1996)
11. Konno, H, Gao, C, Saitoh, I: Cutting plane/tabu search algorithms for low rank concave quadratic programming problems. J. Glob. Optim. 13, 225–240 (1998)
12. Porembski, M: Cutting planes for low-rank-like concave minimization problems. Oper. Res. 52, 942–953 (2004)
13. Shen, P, Wang, C: Global optimization for sum of linear ratios problem with coefficients. Appl. Math. Comput. 176(1), 219–229 (2006)
14. Wang, C, Shen, P: A global optimization algorithm for linear fractional programming. Appl. Math. Comput. 204, 281–287 (2008)
15. Wang, C, Liu, S: A new linearization method for generalized linear multiplicative programming. Comput. Oper. Res. 38(7), 1008–1013 (2011)
16. Jiao, HW, Liu, SY: A practicable branch and bound algorithm for sum of linear ratios problem. Eur. J. Oper. Res. 243, 723–730 (2015)
17. Vavasis, SA: Approximation algorithms for indefinite quadratic programming. Math. Program. 57, 279–311 (1992)
18. Depetrini, D, Locatelli, M: Approximation algorithm for linear fractional multiplicative problems. Math. Program. 128, 437–443 (2011)
19. Depetrini, D, Locatelli, M: A FPTAS for a class of linear multiplicative problems. Comput. Optim. Appl. 44, 275–288 (2009)
20. Goyal, V, Genc-Kaya, L, Ravi, R: An FPTAS for minimizing the product of two nonnegative linear cost functions. Math. Program. 126, 401–405 (2011)
21. Goyal, V, Ravi, R: An FPTAS for minimizing a class of low-rank quasi-concave functions over a convex set. Oper. Res. Lett. 41, 191–196 (2013)
22. Kern, W, Woeginger, G: Quadratic programming and combinatorial minimum weight product problem. Math. Program. 100, 641–649 (2007)