A new filter QP-free method for the nonlinear inequality constrained optimization problem

In this paper, a filter QP-free infeasible method with a nonmonotone line search is proposed for minimizing a smooth function subject to smooth inequality constraints. The proposed method is based on the solution of nonsmooth equations, which are obtained from the Lagrangian multipliers and a nonlinear complementarity problem (NCP) function applied to the Karush–Kuhn–Tucker (KKT) optimality conditions. In particular, each iteration of this method can be viewed as a perturbation of a Newton or quasi-Newton iteration on both the primal and dual variables for the solution of the KKT optimality conditions. Moreover, the NCP function is also used in the filter, which allows the proposed algorithm to avoid the incompatibility that may arise in filter SQP methods. The global convergence of the proposed method is established, and under some mild conditions a superlinear convergence rate is obtained. Finally, some preliminary numerical results are reported to illustrate that the proposed filter QP-free infeasible method is quite promising.


Introduction
In this paper, we mainly consider solving the nonlinear optimization problem (NLP) with inequality constraints, where the objective function and the constraint functions are Lipschitz continuously differentiable. We give the Lagrangian function associated with this problem, from which the Karush-Kuhn-Tucker (KKT) optimality conditions can be obtained.
It is well known that the KKT optimality conditions constitute a mixed nonlinear complementarity problem (NCP). The NCP has attracted much attention due to its various applications [1-3], such as the economic equilibrium problem and the restructuring problems of electricity and gas markets. There are many efficient methods for solving the NCP; see [4-7]. One popular way to solve the NCP is to construct a Newton method for the related nonlinear equations, which are a reformulation of the KKT optimality conditions. Another way is to use a filter method to solve the NLP with inequality constraints directly. Recently Pu, Li, and Xue [8] proposed a new quadratic programming (QP)-free infeasible method for minimizing a smooth function subject to inequality constraints. This method is based on the solution of nonsmooth equations obtained from the multipliers and the Fischer-Burmeister NCP function for the KKT conditions, and it was proved to have a superlinear convergence rate under some mild conditions. Fletcher and Leyffer [9] proposed a filter method for solving the NLP problem as an alternative to the traditional merit-function approach: a trial point generated by solving a sequence of trust-region QP subproblems is accepted provided that it sufficiently decreases either the objective function or the constraint violation function. The computational results reported in [9, 10] are also very encouraging. For more related methods, one can refer to [11-16].
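To make the filter mechanism described above concrete, here is a minimal sketch of the Fletcher-Leyffer acceptance and dominance tests on pairs (h, f) of constraint violation and objective value. The margin parameters beta and gamma enforce sufficient decrease; their values here are illustrative and are not taken from the paper.

```python
# Sketch of a Fletcher-Leyffer-style filter.
# An entry (h_j, f_j) dominates a trial pair (h, f) when h_j <= h and f_j <= f.

def is_acceptable(h, f, filter_entries, beta=0.99, gamma=0.01):
    """Return True if the trial pair (h, f) is acceptable to the filter:
    against every entry it must improve the violation or the objective
    by a sufficient margin."""
    return all(h <= beta * h_j or f <= f_j - gamma * h
               for (h_j, f_j) in filter_entries)

def add_to_filter(h, f, filter_entries):
    """Add (h, f) to the filter and remove all entries it dominates."""
    kept = [(h_j, f_j) for (h_j, f_j) in filter_entries
            if not (h <= h_j and f <= f_j)]
    kept.append((h, f))
    return kept
```

For example, with filter entries [(1.0, 5.0), (0.5, 6.0)], a trial pair (0.2, 7.0) is acceptable (it improves the violation against both entries), while (1.2, 5.2) is rejected (the first entry dominates it).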
Stimulated by the progress in these two directions, in this paper we propose a nonmonotone filter QP-free infeasible method for minimizing a smooth function subject to smooth inequality constraints. The proposed iterative method is based on the solution of nonsmooth equations, which are obtained from the multipliers and some NCP functions for the first-order KKT optimality conditions, and each iteration can be viewed as a perturbation of a Newton or quasi-Newton iteration on both the primal and dual variables for the solution of the KKT optimality conditions. Specifically, we combine the filter with a line search under a nonmonotone acceptance mechanism [17, 18]. Moreover, we also use the NCP function in the filter, so our algorithm avoids the incompatibility that may appear in filter SQP algorithms. We establish the global convergence and the superlinear convergence rate of the proposed method under some mild conditions. Finally, we report some numerical tests to illustrate the effectiveness of the proposed filter QP-free infeasible method.
The rest of this paper is organized as follows. In Sect. 2, we give some preliminaries and the formulation of the problem to be solved, and then propose an infeasible filter QP-free method. In Sect. 3, we show that the proposed method is well defined and establish its global convergence and superlinear convergence rate under some mild conditions. Some numerical tests are given in Sect. 4. Finally, we give some brief conclusions in Sect. 5.

Preliminaries and algorithm
In this section, we first introduce the formulation of the problem to be solved. Then we give some preliminaries needed to construct the new filter QP-free method. Finally, we present the structure of the proposed method in detail.
In this paper, we mainly consider solving the nonlinear optimization problem (NLP) with inequality constraints, which can be formulated as

min f(x)  s.t.  g_i(x) ≥ 0,  i = 1, 2, . . . , m,    (1)

where f : R^n → R and G = (g_1, g_2, . . . , g_m)^T : R^n → R^m are Lipschitz continuously differentiable functions. The Lagrangian function associated with problem (1) is

L(x, μ) = f(x) − μ^T G(x),

where μ = (μ_1, μ_2, . . . , μ_m)^T ∈ R^m is the multiplier vector. For simplicity, we use (x, μ) to denote the column vector (x^T, μ^T)^T.
Then a KKT point (x̄, μ̄) ∈ R^n × R^m of problem (1) satisfies the necessary optimality conditions

∇f(x̄) − ∇G(x̄)μ̄ = 0,  g_i(x̄) ≥ 0,  μ̄_i ≥ 0,  μ̄_i g_i(x̄) = 0,    (2)

where 1 ≤ i ≤ m. We also say that x̄ ∈ D is a KKT point of problem (1) if there exists μ̄ ∈ R^m such that (x̄, μ̄) satisfies (2). It is well known that the KKT optimality conditions constitute a mixed NCP, and (2) can be reformulated as the nonlinear equation Φ(x, μ) = 0.

Preliminaries
In this subsection, we give the definition of the Fischer-Burmeister NCP function and the related Jacobians in different cases. Both theoretical results and computational experience have indicated that nonsmooth methods based on the Fischer-Burmeister NCP function are efficient. The Fischer-Burmeister function has a very simple structure:

ψ(a, b) = √(a² + b²) − a − b,

and ψ(a, b) = 0 if and only if a ≥ 0, b ≥ 0, and ab = 0. This function ψ is continuously differentiable everywhere except at the origin, where it is still strongly semismooth. That is, if a ≠ 0 or b ≠ 0, then ψ is continuously differentiable at (a, b) ∈ R², while the generalized Jacobian of ψ at (0, 0) is (see [14])

∂ψ(0, 0) = {(ξ − 1, η − 1) | ξ² + η² = 1}.
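The Fischer-Burmeister function and its gradient away from the origin are straightforward to evaluate; the following sketch illustrates the defining property ψ(a, b) = 0 if and only if a ≥ 0, b ≥ 0, ab = 0:

```python
import math

def fischer_burmeister(a, b):
    """Fischer-Burmeister NCP function: psi(a, b) = sqrt(a^2 + b^2) - a - b.
    psi(a, b) = 0 exactly when a >= 0, b >= 0 and a*b = 0."""
    return math.hypot(a, b) - a - b

def fischer_burmeister_grad(a, b):
    """Gradient of psi for (a, b) != (0, 0); at the origin psi is only
    strongly semismooth, with generalized Jacobian
    {(xi - 1, eta - 1) : xi^2 + eta^2 = 1}."""
    r = math.hypot(a, b)
    return (a / r - 1.0, b / r - 1.0)
```

For instance, ψ(3, 0) = 0 and ψ(0, 2) = 0 (complementary nonnegative pairs), while ψ(−1, 0) = 2 > 0 detects the violated sign condition.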
Given the above formulation of problem (1), we can define

Φ(x, μ) = ( ∇f(x) − ∇G(x)μ ; φ(x, μ) ),  φ_i(x, μ) = ψ(g_i(x), μ_i),  i = 1, . . . , m.

Clearly, the KKT optimality conditions (2) can be equivalently reformulated as the nonsmooth equations Φ(x, μ) = 0.
where e_i = (0, . . . , 0, 1, 0, . . . , 0)^T ∈ R^m is the ith column of the identity matrix, whose ith element is 1 and whose other elements are 0. If g_i(x) = 0 and μ_i = 0, 1 ≤ i ≤ m, then φ_i(x, μ) is still strongly semismooth and directionally differentiable at (x, μ). We may thus reformulate the KKT conditions at the point (x̄, μ̄) as the system of equations Φ(x̄, μ̄) = 0. To replace the constraint violation function p(G(x)) in the filter F of the Fletcher-Leyffer method [9], we use the violation function p(G(x), μ) = ‖Φ_1(x, μ)‖, where Φ_1 denotes the complementarity part of Φ.
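As a small illustration, the violation measure p(G(x), μ) = ‖Φ_1(x, μ)‖ can be evaluated by stacking the Fischer-Burmeister values of the constraint/multiplier pairs. This sketch assumes the convention g_i(x) ≥ 0 and the Euclidean norm:

```python
import math

def violation(g_vals, mu):
    """Constraint-violation measure p(G(x), mu) = ||Phi_1(x, mu)||, where
    Phi_1 stacks the Fischer-Burmeister values psi(g_i(x), mu_i).
    Zero exactly when every pair satisfies complementarity."""
    res = [math.hypot(g, m) - g - m for g, m in zip(g_vals, mu)]
    return math.sqrt(sum(r * r for r in res))
```

A KKT-feasible pair such as g = (1, 0), μ = (0, 2) gives p = 0, whereas an infeasible point with g = (−1) and μ = (0) gives p = 2.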

Algorithm
In this subsection, we give the framework of the filter QP-free method for solving problem (1). We first give some closed-form expressions needed to prepare the method.
where H_k is a positive definite matrix, which may be modified by the BFGS update. Here diag(ξ^k) or diag(η^k − c^k) denotes the diagonal matrix whose jth diagonal element is ξ_j^k or η_j^k − c_j^k, respectively, and c > 0 and ν > 1 are given parameters. Secondly, we give the nonmonotone sequences for structuring our method. We may assume that the elements of Φ̄_k and F̄_k are sorted in decreasing order, and we denote the maximal elements of Φ̄_k and F̄_k by p_max^k and F_max^k, respectively. Based on the above information, we now give the framework of the nonmonotone filter QP-free infeasible method (NFQPIM) for minimizing a smooth function subject to smooth inequality constraints in Algorithm 1.
If no such x^{k+1} and μ^{k+1} exist, or α_k becomes too small, we use the backtracking technique or the feasibility restoration phase to find x^{k+1} and μ^{k+1} such that the pair is acceptable to the filter and the QP(x^{k+1}) subproblem is compatible; go to Step 1. Otherwise, add the new pair to the filter, delete all pairs (f(x^l), ‖Φ_1^l‖) which are dominated by it, and obtain F_{k+1}; let k = k + 1 and go to Step 1. By appending the equality constraint residuals to Φ_1(x, μ), the above proposed NFQPIM can also be used to solve the following constrained NLP:

min f(x)  s.t.  g_i(x) ≥ 0, i = 1, . . . , m,  h_j(x) = 0, j = 1, . . . , p,

where f : R^n → R, G(x) = (g_1(x), g_2(x), . . . , g_m(x))^T : R^n → R^m, and H(x) = (h_1(x), h_2(x), . . . , h_p(x))^T : R^n → R^p are Lipschitz continuously differentiable functions.
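The paper only states that H_k is a positive matrix that may be modified by the BFGS update; one common safeguard that preserves positive definiteness is Powell damping, sketched below. The damping constant 0.2 is conventional and is not taken from the paper.

```python
import numpy as np

def bfgs_update(H, s, y, damping=0.2):
    """BFGS update of a positive-definite matrix H with Powell damping
    (an illustrative safeguard; the paper's exact update rule may differ).
    s = x_{k+1} - x_k, y = difference of Lagrangian gradients."""
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    if sy < damping * sHs:
        # Damp y toward Hs so the curvature condition s^T y > 0 holds.
        theta = (1.0 - damping) * sHs / (sHs - sy)
        y = theta * y + (1.0 - theta) * Hs
        sy = s @ y
    return H - np.outer(Hs, Hs) / sHs + np.outer(y, y) / sy
```

The updated matrix satisfies the (possibly damped) secant condition H_{k+1} s = y and stays positive definite even when the raw curvature s^T y is negative.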

Implementation
In this subsection, we give the implementation of the proposed NFQPIM. First, we suppose that the following assumptions A1-A3 hold. A1. The level set {x | f(x) ≤ f(x^0)} is bounded, and, for sufficiently large k, ‖μ^k‖ + ‖λ^{k0}‖ + ‖λ^{k1}‖ < μ̄. A2. f and g_i are Lipschitz continuously differentiable, and the corresponding Lipschitz conditions hold for all y, z ∈ R^{n+m}. From the definitions of ξ_j^k and η_j^k, we know that ξ_j^k ≥ 0 and η_j^k − c_j^k ≠ 0 for all j.
It is clear that the following lemma holds; see [8].
From Lemmas 3-6, we know that if Φ_1^k ≠ 0, then (d^k, λ^k) is a descent direction for Φ^k; if d^{k0} ≠ 0, then d^k is a descent direction for f^k. If Φ_1^k = 0 and d^{k0} = 0, then (x^k, μ^k) is a KKT point. We consider four cases for the line search.
Case 1. The (k − 1)th iteration has a Φ-step and Φ_1^k = 0. In this case, p_max^k = p_max^{k−1} and min{p_max^{k_j} | j ∈ F_k} > 0. Clearly, we can find α_k such that x^{k+1} satisfies (9).
Case 2. The (k − 1)th iteration has a Φ-step and Φ_1^k ≠ 0. In this case, it follows from Lemma 5 that, given any ε > 0, there is t̄ > 0 such that, for any 0 < t ≤ t̄, p_max^k > 0 is monotonically nonincreasing. So we can find α_k such that x^{k+1} satisfies (9). Case 3. The (k − 1)th iteration has an f-step and d^{k0} ≠ 0. In this case, it follows from Lemma 5 that f_max^k is monotonically nonincreasing, and we can find α_k such that x^{k+1} satisfies (10).
Case 4. The (k − 1)th iteration has an f-step and d^{k0} = 0. In this case, if Φ_1^k = 0, then (x^k, μ^k) is a KKT point; otherwise x^k may be an infeasible stationary point.
If no such x^{k+1} and μ^{k+1} exist, or α_k becomes too small, we use the backtracking technique or the feasibility restoration phase to find x^{k+1} and μ^{k+1} such that the pair is acceptable to the filter and the QP(x^{k+1}) subproblem is compatible.
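A minimal sketch of this backtracking step with a restoration signal is given below. The `accept` predicate stands in for the filter acceptance tests (9)/(10) (it is a hypothetical placeholder), and the reduction factor tau = 0.7 matches the parameter reported in the numerical section:

```python
def backtracking(accept, x, mu, dx, dmu, tau=0.7, alpha_min=1e-8):
    """Backtrack alpha <- tau * alpha until the trial point is acceptable.
    Returns (x_trial, mu_trial, alpha), or None when alpha falls below
    alpha_min, signalling that the feasibility restoration phase should
    be invoked instead."""
    alpha = 1.0
    while alpha >= alpha_min:
        x_trial = [xi + alpha * di for xi, di in zip(x, dx)]
        mu_trial = [mi + alpha * di for mi, di in zip(mu, dmu)]
        if accept(x_trial, mu_trial, alpha):
            return x_trial, mu_trial, alpha
        alpha *= tau
    return None
```

In the full method the `accept` test compares the trial pair (p, f) against the current filter; here it is left abstract so the control flow of the step-size loop is visible on its own.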

Convergence
In this section, we discuss the global convergence and the superlinear convergence rate of the proposed method. We state the following assumption A4 and suppose that assumptions A1-A4 hold throughout this section.

Lemma 7
Consider the sequences {‖Φ_1(x^k)‖²} and {f^k}, where ‖Φ_1(x^k)‖² ≥ 0 and {f^k} is monotonically decreasing and bounded below. Let the constant θ satisfy inequality (27) for all k and l ∈ F_k, where α_k ≥ α_min > 0 is the step length and θ is a given positive number. Then p_max^k → 0.
Proof Suppose that the lemma is not true. Then Φ_1(x^k) does not tend to 0, and there exist ε > 0 and an index set K with infinitely many members such that ‖Φ_1(x^k)‖ ≥ ε for all k ∈ K. Because {f^k} is monotonically decreasing, (27) implies f(x^k) → −∞ as k → +∞, which contradicts {f^k} being bounded below. Thus the lemma holds.

Lemma 8 Consider an infinite sequence of iterations on which
Proof It is obvious that Lemmas 8 and 2 imply that Theorem 1 holds.
Next we consider the superlinear convergence of the method; we first give the assumptions we need.
A7. The strict complementarity condition holds at each KKT point (x*, μ*). It follows that φ^k is differentiable at each KKT point (x*, μ*), and assumption A7 implies that Φ is continuously differentiable at each KKT point (x*, μ*). As in Lemma 1, the following lemmas hold.
Assumption A5 shows that the step in (x^k, μ^k) is a Newton direction with a high-order perturbation. We obtain the following lemma.
Furthermore, Lemma 10 implies that the following theorem holds.

Numerical tests
We use Algorithm 1 (NFQPIM) for the constrained optimization problems in [19]. H_k is updated by the BFGS method. The termination criterion is ‖Φ‖ ≤ 10^{-5}. The parameters are chosen as follows: c = 0.1, ν = 2, τ = 0.7, θ_1 = 0.8, θ = 0.6, μ̄ = 10,000. In the "NIT/NF/NG" entry of the table, NIT is the number of iterations, NF is the number of function evaluations, and NG is the number of gradient evaluations. The numerical results can be seen in Table 1. We tested the proposed NFQPIM on almost 100 optimization problems, and the numerical results illustrate that the proposed method is efficient and promising.
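For readers who want to experiment, the following toy example (not one of the paper's test problems) applies a plain semismooth Newton iteration to the Fischer-Burmeister reformulation of the KKT system for min (x − 2)² subject to x − 1 ≥ 0, whose KKT point is (x*, μ*) = (2, 0). The full NFQPIM additionally employs the filter, line search, and BFGS machinery described above; this sketch shows only the Newton core.

```python
import math

def semismooth_newton(x, mu, iters=25):
    """Semismooth Newton on Phi(x, mu) = (2(x-2) - mu, psi(x-1, mu)) = 0,
    the FB reformulation of: minimize (x-2)^2 s.t. x - 1 >= 0."""
    for _ in range(iters):
        g = x - 1.0
        r = math.hypot(g, mu)
        F1 = 2.0 * (x - 2.0) - mu          # gradient of the Lagrangian
        F2 = r - g - mu                    # psi(g, mu)
        if max(abs(F1), abs(F2)) < 1e-12:
            break
        # Generalized Jacobian row of psi; at r = 0 pick one element
        # of the generalized Jacobian dpsi(0, 0).
        if r > 0.0:
            pa, pb = g / r - 1.0, mu / r - 1.0
        else:
            pa = pb = math.sqrt(0.5) - 1.0
        # Solve the 2x2 Newton system J d = -Phi, J = [[2, -1], [pa, pb]].
        det = 2.0 * pb + pa
        dx = (-F1 * pb - F2) / det
        dmu = (pa * F1 - 2.0 * F2) / det
        x, mu = x + dx, mu + dmu
    return x, mu
```

Starting from (x, μ) = (1.5, 0.5), the iteration converges rapidly to (2, 0), illustrating the local superlinear behavior that the paper's analysis establishes for the globalized method.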