An accelerating algorithm for globally solving nonconvex quadratic programming

To globally solve a nonconvex quadratic programming problem, this paper presents an accelerating linearizing algorithm within the branch-and-bound framework. By utilizing a new linear relaxation approach, the initial quadratic programming problem is reduced to a sequence of linear relaxation programming problems, which are used to obtain lower bounds of the optimal value of this problem. Then, a deleting operation on the investigated regions is used to improve the convergence speed of the proposed algorithm. The proposed algorithm is proved to be convergent, and numerical experiments are reported to demonstrate the feasibility and efficiency of the proposed algorithm.


Introduction
In this paper, we consider the following nonconvex quadratic programming problem:

$$(\mathrm{NQP}):\quad \min\ f(y) = y^{T} Q y + c^{T} y \quad \text{s.t.}\quad Ay \le b,$$

where $Q = (q_{jk})_{n\times n}$ is a real $n \times n$ symmetric matrix; $A = (a_{jk})_{m\times n}$ is a real $m \times n$ matrix; and $c, y \in R^{n}$, $b \in R^{m}$. The (NQP) is an important class of global optimization problems, not only because it has broad applications in the fields of statistics, design engineering, optimal control, economic equilibria, financial management, production planning, combinatorial optimization, and so on [1, 2], but also because many other nonlinear problems can be transformed into this form [3–7], such as the following linear multiplicative programming (LMP) problem:

$$(\mathrm{LMP}):\quad \min\ \sum_{i=1}^{p} (c_i^{T} y + d_i)(e_i^{T} y + f_i) \quad \text{s.t.}\quad Ay \le b,$$

where $p \ge 1$; $c_i = (c_{i1}, c_{i2}, \ldots, c_{in})^{T}$, $e_i = (e_{i1}, e_{i2}, \ldots, e_{in})^{T} \in R^{n}$; $A = (a_{jk})_{m\times n}$ is a real $m \times n$ matrix; $d_i, f_i \in R$, $i = 1, 2, \ldots, p$; and $b \in R^{m}$. Obviously, the (LMP) can be easily transformed into the (NQP), so the applications of the (NQP) also include all the applications of the (LMP).
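To make the (LMP)-to-(NQP) reduction concrete, the following sketch expands the product objective into a quadratic form with a symmetric matrix, up to the additive constant $\sum_i d_i f_i$. The instance data are made up, and the names `C`, `E` (rows $c_i$, $e_i$) and the explicit symmetrization are our own notation, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 4
C = rng.uniform(-1, 1, (p, n))   # rows are the vectors c_i
E = rng.uniform(-1, 1, (p, n))   # rows are the vectors e_i
d = rng.uniform(-1, 1, p)
f = rng.uniform(-1, 1, p)

# expanding sum_i (c_i^T y + d_i)(e_i^T y + f_i) gives a quadratic form
# with symmetric Q, linear term c_hat, and constant sum_i d_i*f_i
Q = 0.5 * (C.T @ E + E.T @ C)
c_hat = C.T @ f + E.T @ d
const = d @ f

# numerical check on a random point
y = rng.uniform(-1, 1, n)
lmp = sum((C[i] @ y + d[i]) * (E[i] @ y + f[i]) for i in range(p))
nqp = y @ Q @ y + c_hat @ y + const
assert abs(lmp - nqp) < 1e-10
```

The symmetrization $Q = \frac{1}{2}\sum_i (c_i e_i^{T} + e_i c_i^{T})$ leaves the quadratic form unchanged while matching the symmetric-$Q$ convention of the (NQP).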
In recent decades, many solution methods have been developed for globally solving the (NQP) and its special forms. For example: a branch-and-cut algorithm [8] and a branch-and-bound algorithm [9] for box-constrained nonconvex quadratic programs; two efficient algorithms for linearly constrained quadratic programming (LCQP) by Cambini et al. [10] and Li et al. [11]; a branch-and-reduce approach by Gao et al. [12]; a finite branch-and-bound approach by Burer and Vandenbussche [13]; a branch-and-bound method based on outer approximation and linear relaxation by Al-Khayyal et al. [14]; two simplicial branch-and-bound algorithms by Linderoth [15] and Raber [16]; and branch-and-cut algorithms by Sherali et al. [17] and Audet et al. [18]. In addition, a branch-and-bound algorithm for finding a global optimal solution of a nonconvex quadratic program with convex quadratic constraints has been proposed by Lu et al. [19]; based on different properties of a quadratic function, five different branch-and-bound approaches for solving the (NQP) have been proposed in Refs. [20–24].
Based on the above-mentioned methods, we will present an accelerating algorithm for effectively solving the (NQP) in this paper. Firstly, we present a new linear relaxation technique, which can be used to construct the linear relaxation programming (LRP) of the (NQP). In addition, we divide the initial feasible region into a number of sub-regions and use deleting techniques to reduce the scope of the investigation. Then, the initial problem (NQP) is converted into a series of linear relaxation programming subproblems. We can prove that the solutions of these subproblems can converge to the global optimal value of the original problem (NQP). Finally, numerical results and comparison with the methods mentioned in recent literature show that our algorithm works as well as or better than those methods. This paper is organized as follows. In Sect. 2, we describe a new linear relaxation approach for establishing the linear relaxation programming of the (NQP). In Sect. 3, a range deleting technique is given for improving the convergent speed of the proposed algorithm, then a range division and deleting algorithm is proposed and its convergence is proved. Some numerical experiments are reported to show the feasibility and effectiveness of our algorithm in Sect. 4.

New linear relaxation approach
In this section, we will show how to construct the (LRP) for the (NQP). Let the ith row of the matrix $Q$ be $Q_i$, and let

$$z_i = Q_i y = \sum_{k=1}^{n} q_{ik} y_k, \quad i = 1, 2, \ldots, n;$$

then we obtain $z = (z_1, z_2, \ldots, z_n)^{T} = (Q_1 y, Q_2 y, \ldots, Q_n y)^{T} = Qy$.
By introducing new variables $z_i$, $i = 1, 2, \ldots, n$, the function $f(y)$ can be expressed as

$$f(y, z) = \sum_{i=1}^{n} y_i z_i + c^{T} y,$$

and the set $D$ is defined as $D = \{y \in R^{n} : Ay \le b\}$. We compute the initial variable bounds by solving the following linear programming problems:

$$\underline{y}_i^{0} = \min_{y \in D} y_i, \qquad \overline{y}_i^{0} = \max_{y \in D} y_i, \qquad \underline{z}_i^{0} = \min_{y \in D} Q_i y, \qquad \overline{z}_i^{0} = \max_{y \in D} Q_i y, \quad i = 1, 2, \ldots, n,$$

and let

$$Y^{0} = [\underline{y}^{0}, \overline{y}^{0}], \qquad Z^{0} = [\underline{z}^{0}, \overline{z}^{0}], \qquad \Omega^{0} = Y^{0} \times Z^{0}.$$

Based on the above discussion, we can transform the initial problem into the following equivalent problem (EP):

$$(\mathrm{EP}):\quad \min\ f(y,z) = \sum_{i=1}^{n} y_i z_i + c^{T} y \quad \text{s.t.}\quad z = Qy,\ Ay \le b,\ (y,z) \in \Omega^{0}.$$

For globally solving the (NQP), the computation of lower bounds for this problem and its subdivided subproblems is the principal operation in the establishment of a branch-and-bound procedure. A lower bound on the optimal value of the (NQP) and its subdivided subproblems can be obtained by solving a linear relaxation programming of the (EP), which can be derived by a new linear relaxation approach, as follows. Let $\Omega = Y \times Z = [\underline{y}, \overline{y}] \times [\underline{z}, \overline{z}] \subseteq \Omega^{0}$ denote the current sub-range.

Since $\varphi(y_i) = y_i^2$ is a convex function of $y_i$ over $[\underline{y}_i, \overline{y}_i]$, its affine concave envelope is

$$\varphi^{u}(y_i) = (\underline{y}_i + \overline{y}_i)\,y_i - \underline{y}_i \overline{y}_i.$$

Moreover, the tangential supporting function for $\varphi(y_i)$ that is parallel to $\varphi^{u}(y_i)$ touches at the point $\frac{\underline{y}_i + \overline{y}_i}{2}$, and the corresponding tangential supporting function is

$$\varphi^{l}(y_i) = (\underline{y}_i + \overline{y}_i)\,y_i - \frac{(\underline{y}_i + \overline{y}_i)^2}{4}.$$

Hence, by the geometric property of the function $\varphi(y_i)$, we have

$$\varphi^{l}(y_i) \le y_i^2 \le \varphi^{u}(y_i). \tag{1}$$

Similarly, it follows that

$$(\underline{z}_i + \overline{z}_i)\,z_i - \frac{(\underline{z}_i + \overline{z}_i)^2}{4} \le z_i^2 \le (\underline{z}_i + \overline{z}_i)\,z_i - \underline{z}_i \overline{z}_i. \tag{2}$$

Furthermore, regarding $(y_i + z_i)$ as a single variable, $(y_i + z_i)^2$ is a convex function of $(y_i + z_i)$ over the interval $[\underline{s}_i, \overline{s}_i]$, where $\underline{s}_i = \underline{y}_i + \underline{z}_i$ and $\overline{s}_i = \overline{y}_i + \overline{z}_i$. By the conclusion above, we have

$$(\underline{s}_i + \overline{s}_i)(y_i + z_i) - \frac{(\underline{s}_i + \overline{s}_i)^2}{4} \le (y_i + z_i)^2 \le (\underline{s}_i + \overline{s}_i)(y_i + z_i) - \underline{s}_i \overline{s}_i. \tag{3}$$

Since $y_i z_i = \frac{1}{2}\bigl[(y_i + z_i)^2 - y_i^2 - z_i^2\bigr]$, by (1)–(3) we can obtain

$$y_i z_i \ge \psi_i^{l}(y_i, z_i) := \frac{1}{2}\Bigl[(\underline{s}_i + \overline{s}_i)(y_i + z_i) - \frac{(\underline{s}_i + \overline{s}_i)^2}{4} - (\underline{y}_i + \overline{y}_i)\,y_i + \underline{y}_i \overline{y}_i - (\underline{z}_i + \overline{z}_i)\,z_i + \underline{z}_i \overline{z}_i\Bigr]$$

and

$$y_i z_i \le \psi_i^{u}(y_i, z_i) := \frac{1}{2}\Bigl[(\underline{s}_i + \overline{s}_i)(y_i + z_i) - \underline{s}_i \overline{s}_i - (\underline{y}_i + \overline{y}_i)\,y_i + \frac{(\underline{y}_i + \overline{y}_i)^2}{4} - (\underline{z}_i + \overline{z}_i)\,z_i + \frac{(\underline{z}_i + \overline{z}_i)^2}{4}\Bigr].$$

Hence, we have

$$f^{l}(y,z) := \sum_{i=1}^{n} \psi_i^{l}(y_i, z_i) + c^{T} y \ \le\ f(y,z) \ \le\ \sum_{i=1}^{n} \psi_i^{u}(y_i, z_i) + c^{T} y =: f^{u}(y,z). \tag{4}$$

Furthermore, we define the difference functions

$$\Delta(y_i) = y_i^2 - \varphi^{l}(y_i), \qquad \nabla(y_i) = \varphi^{u}(y_i) - y_i^2.$$

Since $\Delta(y_i)$ is convex in $y_i$, for any $y_i \in [\underline{y}_i, \overline{y}_i]$ it attains its maximum at the endpoint $\underline{y}_i$ or $\overline{y}_i$. Then

$$\max_{y_i \in [\underline{y}_i, \overline{y}_i]} \Delta(y_i) = \frac{(\overline{y}_i - \underline{y}_i)^2}{4}. \tag{5}$$

On the other hand, since $\nabla(y_i)$ is concave in $y_i$, for any $y_i \in [\underline{y}_i, \overline{y}_i]$ it attains its maximum at the midpoint $\frac{\underline{y}_i + \overline{y}_i}{2}$, so that

$$\max_{y_i \in [\underline{y}_i, \overline{y}_i]} \nabla(y_i) = \frac{(\overline{y}_i - \underline{y}_i)^2}{4}. \tag{6}$$

Using the similar method, the corresponding difference functions for $z_i^2$ and $(y_i + z_i)^2$ are bounded by $\frac{(\overline{z}_i - \underline{z}_i)^2}{4}$ and $\frac{(\overline{s}_i - \underline{s}_i)^2}{4}$, respectively. \tag{7}

Since $y_i z_i - \psi_i^{l}(y_i, z_i)$ and $\psi_i^{u}(y_i, z_i) - y_i z_i$ are sums of such difference functions, by (5)–(7) we get that

$$\Delta(y_i, z_i) := y_i z_i - \psi_i^{l}(y_i, z_i) \to 0 \quad \text{as } \overline{y} - \underline{y} \to 0,\ \overline{z} - \underline{z} \to 0.$$

Similarly, we can prove that $\nabla(y_i, z_i) := \psi_i^{u}(y_i, z_i) - y_i z_i \to 0$ as $\overline{y} - \underline{y} \to 0$, $\overline{z} - \underline{z} \to 0$.

By utilizing the convexity of a univariate quadratic function, we have established an effective method for generating linear underestimators and overestimators of the functions $y_i^2$, $z_i^2$, and $y_i z_i$, respectively. By the conclusions above, the linear relaxation underestimation function $f^{l}(y, z)$ of the function $f(y, z)$ for the (EP) can be established. Thus, by the former discussions, we can construct the corresponding approximation linear relaxation programming (LRP) of the (EP) over $\Omega$ as follows:

$$(\mathrm{LRP}):\quad \min\ f^{l}(y,z) \quad \text{s.t.}\quad z = Qy,\ Ay \le b,\ (y,z) \in \Omega.$$

By (4), it is obvious that the optimal value of the (LRP) over $\Omega$ is less than or equal to that of the (EP) over $\Omega$.

Remark 1 From the above, we only need to solve the (LRP) instead of the (EP) to obtain the lower and upper bounds of the optimal value of problem (NQP).
Remark 2 Based on the construction of the (LRP), each point of the (NQP) that is feasible in the sub-range $\Omega$ is also feasible to the (LRP), and the global minimum value of the (LRP) is not more than that of the (NQP) over $\Omega$. Thus, the (LRP) provides a valid lower bound for the global optimum value of problem (NQP) over the sub-range $\Omega$.
Remark 3 The conclusions above ensure that the linear relaxation programming (LRP) approximates the (NQP) arbitrarily closely as $\overline{y} - \underline{y} \to 0$ (and, obviously, $\overline{z} - \underline{z} \to 0$); this guarantees the global convergence of the proposed algorithm.
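The two univariate bounding functions used above (the secant overestimator, i.e., the concave envelope, and the midpoint-tangent underestimator of a square) can be checked numerically. The function names below are illustrative only:

```python
import numpy as np

def secant_over(t, lo, hi):
    # concave envelope of t**2 on [lo, hi] (chord through the endpoints)
    return (lo + hi) * t - lo * hi

def tangent_under(t, lo, hi):
    # tangent line of t**2 at the midpoint (lo + hi)/2
    return (lo + hi) * t - (lo + hi) ** 2 / 4

lo, hi = -1.5, 2.0
t = np.linspace(lo, hi, 1001)
assert np.all(tangent_under(t, lo, hi) <= t**2 + 1e-12)
assert np.all(t**2 <= secant_over(t, lo, hi) + 1e-12)

# both gaps peak at (hi - lo)**2 / 4, so they vanish as the interval shrinks,
# matching the convergence argument of Remark 3
w = hi - lo
assert abs((t**2 - tangent_under(t, lo, hi)).max() - w**2 / 4) < 1e-6
assert abs((secant_over(t, lo, hi) - t**2).max() - w**2 / 4) < 1e-6
```

The tangent gap $(t - \frac{lo+hi}{2})^2$ is maximal at the endpoints, while the secant gap $(t - lo)(hi - t)$ is maximal at the midpoint; both maxima equal $\frac{(hi - lo)^2}{4}$.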

Accelerating branch-and-bound algorithm and its convergence
Now we establish an accelerating branch-and-bound algorithm based upon the former linear relaxation approach for globally solving the (NQP). We will introduce the algorithm process at first and then give the convergence analysis of the algorithm.

Branching process
The critical operation of the branching process is iteratively subdividing the initial range $\Omega^{0}$ into sub-ranges; each sub-range, denoted by $\Omega$, is associated with a node of the branch-and-bound tree, such that any infinite iterative sequence of partition sets shrinks to a singleton. Here, we adopt a standard range bisection approach, which is sufficient to ensure the global convergence of the proposed algorithm. The detailed process is described as follows.
For any selected sub-range $\Omega \subseteq \Omega^{0}$ with $Y = [\underline{y}, \overline{y}] \subseteq Y^{0}$, let $q \in \arg\max\{\overline{y}_i - \underline{y}_i : i = 1, 2, \ldots, n\}$ and $y_m = (\underline{y}_q + \overline{y}_q)/2$; then divide $\Omega$ into two sub-ranges $\Omega_1$ and $\Omega_2$ by subdividing the interval $[\underline{y}_q, \overline{y}_q]$ into the two subintervals $[\underline{y}_q, y_m]$ and $[y_m, \overline{y}_q]$. From the above branching operation, we can notice that the intervals $[\underline{z}, \overline{z}]$ of $z$ are never partitioned by the branching process. Hence, branching occurs only in a space of dimension $n$, i.e., the proposed algorithm economizes the required computations.
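A minimal sketch of this bisection rule (the function name and array layout are our own):

```python
import numpy as np

def bisect_longest_edge(ylo, yhi):
    """Split the y-box along its widest coordinate q at the midpoint,
    as in the branching rule above; the z-intervals are never split."""
    q = int(np.argmax(yhi - ylo))
    mid = (ylo[q] + yhi[q]) / 2.0
    left_hi, right_lo = yhi.copy(), ylo.copy()
    left_hi[q] = mid
    right_lo[q] = mid
    return (ylo, left_hi), (right_lo, yhi)

(llo, lhi), (rlo, rhi) = bisect_longest_edge(np.array([0.0, -1.0]),
                                             np.array([1.0, 3.0]))
# the widest edge is coordinate 1 (width 4), so that one is halved
assert list(lhi) == [1.0, 1.0] and list(rlo) == [0.0, 1.0]
```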

Range deleting technique
At the kth iteration of the algorithm, for any sub-range $\Omega^{k} = [\underline{y}^{k}, \overline{y}^{k}] \times [\underline{z}^{k}, \overline{z}^{k}] \subseteq \Omega^{0}$, we want to check whether $\Omega^{k}$ contains a global optimal solution of the (EP). Let $\bar f$ be the objective function value of the best feasible solution known so far, and let

$$\mathrm{ELB}^{k} = \sum_{i=1}^{n} \min_{(y,z)\in\Omega^{k}} \psi_i^{l}(y_i, z_i) + \sum_{j=1}^{n} \min\{c_j \underline{y}_j^{k},\, c_j \overline{y}_j^{k}\},$$

which is an easily computed lower bound of $f^{l}(y,z)$ over $\Omega^{k}$. For each $\tau \in \{1, 2, \ldots, n\}$ with $c_\tau \ne 0$, define

$$\rho_\tau = \frac{\bar f - \mathrm{ELB}^{k} + \min\{c_\tau \underline{y}_\tau^{k},\, c_\tau \overline{y}_\tau^{k}\}}{c_\tau}.$$

Case 1: If $c_\tau > 0$, then for any $(y,z) \in \Omega^{k}$ with $y_\tau > \rho_\tau$, we have $c_\tau y_\tau > \bar f - \mathrm{ELB}^{k} + \min\{c_\tau \underline{y}_\tau^{k}, c_\tau \overline{y}_\tau^{k}\}$, and hence

$$f(y,z) \ge f^{l}(y,z) \ge \mathrm{ELB}^{k} - \min\{c_\tau \underline{y}_\tau^{k}, c_\tau \overline{y}_\tau^{k}\} + c_\tau y_\tau > \bar f.$$

Therefore, the portion of $\Omega^{k}$ with $y_\tau > \rho_\tau$ contains no global optimal solution of the (EP).

Case 2: If $c_\tau < 0$, then for any $(y,z) \in \Omega^{k}$ with $y_\tau < \rho_\tau$, we again have $c_\tau y_\tau > \bar f - \mathrm{ELB}^{k} + \min\{c_\tau \underline{y}_\tau^{k}, c_\tau \overline{y}_\tau^{k}\}$. Similar to the proof of Case 1, we get $f(y,z) > \bar f$; thus this portion of $\Omega^{k}$ does not contain any global optimal solution of the (EP). Thus, the proof is completed.

According to Theorem 3.1, we can construct the following range reduction technique to reject the whole investigated sub-range $\Omega^{k}$, or to delete the part of $\Omega^{k}$ that does not contain any global optimal solution of the (EP): if $c_\tau > 0$ and $\rho_\tau < \overline{y}_\tau^{k}$, replace $\overline{y}_\tau^{k}$ by $\max\{\rho_\tau, \underline{y}_\tau^{k}\}$ (the whole sub-range is rejected when $\rho_\tau < \underline{y}_\tau^{k}$); if $c_\tau < 0$ and $\rho_\tau > \underline{y}_\tau^{k}$, replace $\underline{y}_\tau^{k}$ by $\min\{\rho_\tau, \overline{y}_\tau^{k}\}$ (the whole sub-range is rejected when $\rho_\tau > \overline{y}_\tau^{k}$).
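A hedged sketch of how the $\rho_\tau$ test might be applied coordinate by coordinate. The helper name is our own, and the way the bound $\mathrm{ELB}^{k}$ is supplied (precomputed elsewhere and passed in) is an assumption; the paper's exact bookkeeping may differ:

```python
import numpy as np

def reduce_box(c, ylo, yhi, f_bar, elb):
    """Shrink the y-box [ylo, yhi] using the rho_tau deletion test.

    f_bar is the best objective value found so far (an upper bound), and
    elb is a cheap lower bound of the relaxed objective over the box
    (the paper's ELB^k, assumed to be computed elsewhere).
    """
    ylo, yhi = ylo.astype(float).copy(), yhi.astype(float).copy()
    for t in range(len(c)):
        if c[t] == 0.0:
            continue
        rho = (f_bar - elb + min(c[t] * ylo[t], c[t] * yhi[t])) / c[t]
        if c[t] > 0 and rho < yhi[t]:
            yhi[t] = max(rho, ylo[t])   # points with y_t > rho cannot be optimal
        elif c[t] < 0 and rho > ylo[t]:
            ylo[t] = min(rho, yhi[t])   # points with y_t < rho cannot be optimal
    return ylo, yhi

# with c = (1,), box [0, 10], f_bar = 2 and elb = 0: rho = 2, so the
# upper edge of the box is cut from 10 down to 2
lo, hi = reduce_box(np.array([1.0]), np.array([0.0]),
                    np.array([10.0]), 2.0, 0.0)
assert lo[0] == 0.0 and hi[0] == 2.0
```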

Branch-and-bound algorithm
Assume that we have obtained the set of active nodes $\Lambda_k$ at the kth iteration of the algorithm, where each node is associated with a sub-range $\Omega \subseteq \Omega^{0}$, for all $\Omega \in \Lambda_k$. For each sub-range $\Omega$, use the proposed range deleting technique to compress it, still denote the remaining sub-range by $\Omega$, and obtain a lower bound $LB(\Omega)$ of the optimum value of the (NQP) by solving the (LRP) over $\Omega$.
Let $f^{r}(\Omega)$ and $(y^{r}(\Omega), z^{r}(\Omega))$ be the optimal value and an optimal solution of the (LRP) over the sub-range $\Omega$, respectively. Combining the former linear relaxation method, the branching process, and the range deleting technique, the basic steps of the proposed accelerating algorithm for globally solving the (NQP) are stated as follows.
Algorithm statement

Initialization step. Set the initial iteration counter $k := 0$, the initial set of active nodes $\Lambda_0 = \{\Omega^{0}\}$, the initial upper bound $\bar f = +\infty$, the convergence tolerance $\varepsilon > 0$, and the initial collection of feasible points $F := \emptyset$.
Range deleting step. For each sub-range $\Omega$, use the proposed range deleting technique in Sect. 3.2 to contract it, and still denote the remaining sub-range by $\Omega$.
Range division step. Apply the presented range division approach to subdivide $\Omega^{k}$ into two new sub-ranges, and denote the collection of new subdivided sub-ranges by $\bar\Lambda_k$.
Renewing the lower and upper bounds step. For each $\Omega \in \bar\Lambda_k$ with $\Omega \ne \emptyset$, solve the LRP($\Omega$) to get $f^{r}(\Omega)$ and $(y^{r}(\Omega), z^{r}(\Omega))$, and let $LB(\Omega) := f^{r}(\Omega)$; if $LB(\Omega) > \bar f$, set $\bar\Lambda_k := \bar\Lambda_k \setminus \Omega$. The residual subdivided set is now $\Lambda_k := (\Lambda_k \setminus \Omega^{k}) \cup \bar\Lambda_k$, and renew the lower bound $LB_k := \inf_{\Omega \in \Lambda_k} LB(\Omega)$.
If $F \ne \emptyset$, renew the upper bound $\bar f := \min_{(y,z) \in F} f(y,z)$, and denote the best known feasible solution by $(y^{best}, z^{best}) := \arg\min_{(y,z) \in F} f(y,z)$.

Termination checking step. If $\bar f - LB_k \le \varepsilon$, the algorithm terminates with $\bar f$ and $y^{best}$ as the $\varepsilon$-global optimum value and a global $\varepsilon$-optimal solution of problem (NQP). Otherwise, set $k := k + 1$, pick out an active node $\Omega^{k+1}$ satisfying $\Omega^{k+1} = \arg\min_{\Omega \in \Lambda_k} LB(\Omega)$, and return to the Range deleting step.
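Putting the pieces together, the following self-contained sketch implements the overall scheme (linear relaxation with auxiliary variables $z = Qy$, best-first node selection, and bisection of the longest $y$-edge) on a toy instance, using SciPy's `linprog`. It omits the range-deletion step, and all function and variable names are our own, so it illustrates the framework rather than reproducing the paper's exact algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def initial_box(A, b, n):
    # bounds on each y_i obtained by solving 2n LPs over D = {y : Ay <= b}
    lo, hi = np.empty(n), np.empty(n)
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        lo[i] = linprog(e, A_ub=A, b_ub=b, bounds=[(None, None)] * n,
                        method="highs").fun
        hi[i] = -linprog(-e, A_ub=A, b_ub=b, bounds=[(None, None)] * n,
                         method="highs").fun
    return lo, hi

def z_box(Q, ylo, yhi):
    # interval-arithmetic bounds on z = Qy over the y-box
    zlo = np.minimum(Q * ylo, Q * yhi).sum(axis=1)
    zhi = np.maximum(Q * ylo, Q * yhi).sum(axis=1)
    return zlo, zhi

def solve_lrp(Q, c, A, b, ylo, yhi):
    # LP minimizing a linear underestimator of sum_i y_i*z_i + c^T y:
    # tangent lower bound on (y+z)^2, secant upper bounds on y^2 and z^2
    n = len(c)
    zlo, zhi = z_box(Q, ylo, yhi)
    slo, shi = ylo + zlo, yhi + zhi
    cy = 0.5 * ((slo + shi) - (ylo + yhi)) + c
    cz = 0.5 * ((slo + shi) - (zlo + zhi))
    const = 0.5 * (-(slo + shi) ** 2 / 4 + ylo * yhi + zlo * zhi).sum()
    res = linprog(np.concatenate([cy, cz]),
                  A_ub=np.hstack([A, np.zeros((A.shape[0], n))]), b_ub=b,
                  A_eq=np.hstack([-Q, np.eye(n)]), b_eq=np.zeros(n),
                  bounds=list(zip(ylo, yhi)) + list(zip(zlo, zhi)),
                  method="highs")
    if not res.success:
        return None, np.inf
    return res.x[:n], res.fun + const

def branch_and_bound(Q, c, A, b, eps=1e-2, max_iter=5000):
    f = lambda y: y @ Q @ y + c @ y
    ylo, yhi = initial_box(A, b, len(c))
    y0, lb0 = solve_lrp(Q, c, A, b, ylo, yhi)
    best_y, best = y0, f(y0)
    nodes = [(lb0, ylo, yhi)]
    for _ in range(max_iter):
        if not nodes:
            break
        nodes.sort(key=lambda t: t[0])        # best-first node selection
        lb, ylo, yhi = nodes.pop(0)
        if best - lb <= eps:                  # eps-optimality reached
            break
        q = int(np.argmax(yhi - ylo))         # bisect the longest y-edge
        mid = (ylo[q] + yhi[q]) / 2.0
        lhi, rlo = yhi.copy(), ylo.copy()
        lhi[q] = mid; rlo[q] = mid
        for blo, bhi in ((ylo, lhi), (rlo, yhi)):
            y, lb_child = solve_lrp(Q, c, A, b, blo, bhi)
            if y is None or lb_child > best - eps:
                continue                      # infeasible or pruned node
            if f(y) < best:                   # LP point is NQP-feasible
                best, best_y = f(y), y
            nodes.append((lb_child, blo, bhi))
    return best_y, best

# toy instance: f(y) = 2*y1*y2 - y1 - y2 on the unit box, global min -1
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
c = np.array([-1.0, -1.0])
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
y_star, val = branch_and_bound(Q, c, A, b)
```

Because the LP keeps the constraints $z = Qy$ and $Ay \le b$, every LP solution yields an NQP-feasible point, so evaluating $f$ there gives a valid upper bound without any extra work.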

Convergence analysis
In this subsection, the convergence of the algorithm is discussed. Proof If the algorithm terminates finitely, suppose it terminates at the kth iteration, $k \ge 0$. Upon termination, by the steps of the algorithm, we have $\bar f - LB_k \le \varepsilon$. By the first five steps of the algorithm, there exists a feasible solution $y^{*}$ of the (NQP) satisfying $\bar f = f(y^{*})$, and hence $f(y^{*}) - LB_k \le \varepsilon$. Let $v$ denote the global optimal value of the (NQP). According to the algorithm, we have $LB_k \le v$. Since $y^{*}$ is a feasible solution of the (NQP), we have $f(y^{*}) \ge v$. Combining the above inequalities, we get $v \le f(y^{*}) \le LB_k + \varepsilon \le v + \varepsilon$. Therefore, $y^{*}$ is a global $\varepsilon$-optimal solution of the (NQP).
If the algorithm is infinite, then by its construction $\{LB_k\}$ is a nondecreasing sequence bounded above by $\min_{y \in D} f(y)$, which ensures the existence of the limit $LB := \lim_{k\to\infty} LB_k \le \min_{y \in D} f(y)$. Since the sequence $\{y^{k}\}$ is contained in the compact set $Y^{0}$, there exists a convergent subsequence $\{y^{s}\} \subseteq \{y^{k}\}$; assume $\lim_{s\to\infty} y^{s} = y^{*}$. Then, by the branching process and the deleting method, there exists a decreasing subsequence of sub-ranges $\{Y^{r}\} \subset \{Y^{s}\}$, where $(Y^{s}, Z^{s}) \in \Lambda_s$, with $y^{r} \in Y^{r}$, $LB_r = LB(Y^{r}) = f^{l}(y^{r})$ and $\lim_{r\to\infty} Y^{r} = \{y^{*}\}$. By the continuity of $f(y)$ and the conclusions of Sect. 2, we have $\lim_{r\to\infty} LB_r = \lim_{r\to\infty} f^{l}(y^{r}) = f(y^{*})$, i.e., $LB = f(y^{*})$. Since each $y^{r}$ is feasible to the (NQP) and the feasible set $D \cap Y^{0}$ is closed, we have $y^{*} \in Y^{0}$ and $y^{*}$ is a feasible solution of the (NQP). Hence $f(y^{*}) \ge \min_{y \in D} f(y) \ge LB = f(y^{*})$, so $y^{*}$ is a global optimal solution of the (NQP). Thus, the proof is completed.
Numerical experiments

Some randomly generated test examples with large numbers of variables and constraints are used to validate the robustness of the proposed algorithm. These randomly generated examples and their computational results are given as follows.
In these instances, $Q_{n\times n}$ is a real symmetric matrix; all the elements of $Q_{n\times n}$, $A_{m\times n}$, and $c_{n\times 1}$ are randomly generated in the interval $[-2, 2]$, and all the elements of $b_{m\times 1}$ are randomly generated in the interval $[1, 10]$. For Example 8, we solved 10 different random instances for each size and present statistics of the results. The computational results are summarized in Table 3, where the following notation is used in the column headers: Ave.Iter: the average number of iterations; Ave.Time (s): the average CPU execution time of the algorithm in seconds; m: the number of constraints; n: the number of variables.
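A sketch of the instance generator described above; the seeding and the way symmetry of $Q$ is enforced (symmetrizing a random matrix) are our assumptions, since the paper does not state them:

```python
import numpy as np

def random_nqp(m, n, seed=0):
    """Random (NQP) instance: entries of Q, A, c drawn from [-2, 2]
    (Q symmetrized), entries of b drawn from [1, 10]."""
    rng = np.random.default_rng(seed)
    M = rng.uniform(-2, 2, (n, n))
    Q = 0.5 * (M + M.T)            # real symmetric
    A = rng.uniform(-2, 2, (m, n))
    c = rng.uniform(-2, 2, n)
    b = rng.uniform(1, 10, m)
    return Q, c, A, b

Q, c, A, b = random_nqp(20, 10)
assert np.allclose(Q, Q.T) and Q.shape == (10, 10) and A.shape == (20, 10)
```

Note that $b \ge 1 > 0$ guarantees that $y = 0$ is feasible, so every generated instance has a nonempty feasible region.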
From Table 3 and Fig. 1, it can be seen that, when m and n are below 50, the algorithm finds the global optimal solution in a short time and with a small number of iterations. As the problem size grows, the average number of iterations and the average CPU time of our algorithm also increase, but they are not very sensitive to the problem size. From the experimental results in Table 3, we can see that the proposed algorithm with the given convergence tolerance can globally solve the (NQP) with large numbers of constraints and variables. The results in Tables 1–3 show that our algorithm is both feasible and efficient.

Concluding remarks
In this paper, we propose a new branch-and-bound algorithm for globally solving the nonconvex quadratic programming problem (NQP). In this algorithm, we present a new linear relaxation method, which can be used to derive the linear relaxation programming problem of the (NQP). To accelerate the computational speed of the proposed branch-and-bound algorithm, an interval deleting rule is used to reduce the investigated regions. By successively partitioning the initial region and solving a sequence of linear relaxation programming problems, the proposed algorithm converges to the global optimum of the (NQP). Finally, comparisons with some existing algorithms show the higher computational efficiency of the proposed algorithm.