Smoothing approximation to the lower order exact penalty function for inequality constrained optimization

For the inequality constrained optimization problem, we first propose a new smoothing method for the lower order exact penalty function, and then show that an approximate global solution of the original problem can be obtained by computing a global solution of the smoothed lower order exact penalty problem. We propose an algorithm based on the smoothed lower order exact penalty function and prove its global convergence under mild conditions. Numerical experiments show the efficiency of the proposed method.


Introduction
Consider the following inequality constrained optimization problem:

(P)  min f_0(x)  s.t.  f_i(x) ≤ 0,  i ∈ I = {1, 2, . . . , m},

where f_i : R^n → R, i = 0, 1, . . . , m, are twice continuously differentiable functions. Throughout this paper, we use X_0 = {x ∈ R^n | f_i(x) ≤ 0, i ∈ I} to denote the feasible solution set.
To solve problem (P), penalty function methods have been introduced in many works in the literature (see, e.g., [16–24]). Zangwill [16] introduced the classical l_1 exact penalty function

f_0(x) + q Σ_{i∈I} max{f_i(x), 0},

where q > 0 is a penalty parameter, but this function is not smooth. The corresponding penalty optimization problem is

(P_1)  min_{x∈R^n} f_0(x) + q Σ_{i∈I} max{f_i(x), 0}.

The non-smoothness of this function restricts the application of gradient-type or Newton-type algorithms to problem (P_1). To avoid this shortcoming, smoothing methods for the l_1 exact penalty function were proposed in [17, 18].
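For concreteness, here is a minimal sketch (in Python) of evaluating such an l_1 penalty; the function names and the small example problem are illustrative choices, not taken from the paper.

```python
def l1_penalty(x, f0, constraints, q):
    """Classical l1 exact penalty: f0(x) + q * sum_i max{f_i(x), 0}.

    f0          -- objective function
    constraints -- list of constraint functions f_i (feasible when f_i(x) <= 0)
    q           -- penalty parameter, q > 0
    """
    violation = sum(max(fi(x), 0.0) for fi in constraints)
    return f0(x) + q * violation

# Illustrative use:  min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0
f0 = lambda x: x[0] ** 2 + x[1] ** 2
g1 = lambda x: 1.0 - x[0] - x[1]
print(l1_penalty((0.3, 0.3), f0, [g1], q=10.0))   # infeasible point, penalized
```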
In addition, to overcome the non-smoothness of the function, a smooth penalty function has also been proposed; however, that function is not exact.
Recently, Wu et al. [20] proposed the following lower order penalty function:

ϕ_{q,k}(x) = f_0(x) + q Σ_{i∈I} (max{f_i(x), 0})^k,  k ∈ (0, 1],    (1.3)

and proved that this lower order penalty function is exact under mild conditions. However, it is also non-smooth. When k = 1, ϕ_{q,k}(x) can be seen as the classical l_1 exact penalty function. The least exact penalty parameter corresponding to k ∈ (0, 1) is much smaller than that of the l_1 exact penalty function, which avoids the drawback of an excessively large penalty parameter in the algorithm. The smoothing of the lower order penalty function (1.3) has been studied in [20] and [21] only for k = 1/2, and a smoothing method for (1.3) is given in [24]. In this paper, using a different segmentation, we give a new piecewise smooth function, propose a new method to smooth the lower order penalty function (1.3) with k ∈ [1/2, 1), and compare it with the existing methods.

The remainder of this paper is organized as follows. In Sect. 2, a new smoothing function is proposed, and error estimates are obtained among the optimal objective function values of the smoothed penalty problem, the non-smooth penalty problem, and the original problem. In Sect. 3, a corresponding algorithm is proposed to obtain an approximate solution of (P), and its global convergence is proved. In Sect. 4, numerical experiments illustrate the efficiency of the algorithm. In Sect. 5, some conclusions are presented.
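As a quick illustration of why a smaller exponent k can work with a smaller penalty parameter, the sketch below evaluates the lower order penalty and compares (max{t, 0})^k with max{t, 0} for small violations t: for 0 < k < 1 the lower order term dominates near the constraint boundary. The code is an illustrative sketch only, not part of the paper.

```python
def lower_order_penalty(x, f0, constraints, q, k):
    """Lower order penalty  phi_{q,k}(x) = f0(x) + q * sum_i (max{f_i(x), 0})^k."""
    violation = sum(max(fi(x), 0.0) ** k for fi in constraints)
    return f0(x) + q * violation

# For a small constraint violation t, t^k (0 < k < 1) is much larger than t = t^1,
# so small violations are penalized more heavily than by the l1 term.
for t in (1e-2, 1e-4, 1e-6):
    print(f"t={t:.0e}  t^1={t:.1e}  t^(1/2)={t ** 0.5:.1e}  t^(2/3)={t ** (2 / 3):.1e}")
```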

A smoothing penalty function
For the lower order penalty problem, the following assumption is given in [20] in order to establish global exact penalization. We will consider the smoothing method under this assumption.

Assumption 2.1
(2) The optimal solution set G((P)) is a finite set.
Under Assumption 2.1, problem (P) is equivalent to the following problem:

min f_0(x)  s.t.  f_i(x) ≤ 0, i ∈ I,  x ∈ X,

where X ⊂ R^n is a box with int(X) ≠ ∅. For any k ∈ (0, 1), penalty problem (LP) is equivalent to the following penalty problem:

Now we consider a new smoothing technique for the lower order penalty function (1.3). Let p_k(t) = (max{t, 0})^k, so that

ϕ_{q,k}(x) = f_0(x) + q Σ_{i∈I} p_k(f_i(x)).

Define a piecewise function p_{k,ε}(t) (ε > 0) approximating p_k(t). It is easy to see that p_{k,ε}(t) is continuously differentiable. Figure 1 shows the behavior of p_{k,ε}(t) and p_k(t), illustrating how p_{k,ε}(t) approaches p_k(t) as ε decreases (plotted for k = 3/4). Based on this, we consider the following continuously differentiable penalty function:

ϕ_{q,k,ε}(x) = f_0(x) + q Σ_{i∈I} p_{k,ε}(f_i(x)),

where lim_{ε→0+} ϕ_{q,k,ε}(x) = ϕ_{q,k}(x). The corresponding optimization problem is

(SP)  min_{x∈X} ϕ_{q,k,ε}(x).

For problems (P), (LP), and (SP), we have the following conclusion.
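The paper's own piecewise definition of p_{k,ε}(t) is not reproduced here; as a stand-in, the sketch below uses a common C^1 under-approximation, ((max{t, 0})^2 + ε^2)^{k/2} − ε^k, purely to illustrate how a smoothing function of this kind approaches p_k(t) as ε → 0+. It should not be read as the function actually defined in this paper.

```python
import numpy as np

def p_k(t, k):
    """p_k(t) = (max{t, 0})^k, non-smooth at t = 0 for k < 1."""
    return max(t, 0.0) ** k

def p_k_eps(t, k, eps):
    """An illustrative C^1 smoothing of p_k (NOT the paper's piecewise function):
    ((max{t, 0})^2 + eps^2)^(k/2) - eps^k  ->  p_k(t)  as eps -> 0+."""
    return (max(t, 0.0) ** 2 + eps ** 2) ** (k / 2.0) - eps ** k

k = 0.75
grid = np.linspace(-1.0, 1.0, 401)
for eps in (1e-1, 1e-2, 1e-3):
    gap = max(p_k(t, k) - p_k_eps(t, k, eps) for t in grid)
    print(f"eps={eps:.0e}  max gap on [-1, 1] = {gap:.2e}")   # gap shrinks as eps -> 0
```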

Lemma 2.1
For any x ∈ X, ε > 0, and q > 0, it holds that

Proof For all i ∈ I, it holds that It is easy to see that the function F(t) is monotonically increasing with respect to t, since k ∈ [1/2, 1). One has It follows that by the fact that q > 0.

Theorem 2.1
For a positive sequence {ε_j} converging to 0 as j → ∞, assume that x_j is an optimal solution of min_{x∈X} ϕ_{q,k,ε_j}(x) for some given q > 0 and k ∈ [1/2, 1). If x̄ is an accumulation point of the sequence {x_j}, then x̄ is an optimal solution of min_{x∈X} ϕ_{q,k}(x).
Proof It follows from Lemma 2.1 that Since x_j is an optimal solution of min_{x∈X} ϕ_{q,k,ε_j}(x), one has It follows from (2.5) and (2.6) that Letting j → ∞ yields Thus, x̄ is an optimal solution of min_{x∈X} ϕ_{q,k}(x).
Theorem 2.2
Let x*_{q,k} ∈ X be an optimal solution of problem (LP), and x̄_{q,k,ε} ∈ X be an optimal solution of problem (SP) for some q > 0, k ∈ [1/2, 1), and ε > 0. Then

Proof Under the stated assumptions, it holds that Therefore, by Lemma 2.1, one has and This completes the proof.

Corollary 2.1
Suppose that Assumption 2.1 holds and that, for any x* ∈ G((P)), there exists λ* ∈ R^m_+ such that the pair (x*, λ*) satisfies the second order sufficient condition in [20]. Let x* ∈ X be an optimal solution of problem (P), and x̄_{q,k,ε} ∈ X be an optimal solution of problem (SP) for some q > 0, k ∈ [1/2, 1), and ε > 0. Then there exists q* > 0 such that, for any q > q*,

Proof It follows from Corollary 2.3 in [20] that x* ∈ X is an optimal solution of problem (LP). By Theorem 2.2, one has This completes the proof.
then x ∈ X is an ε-feasible solution of problem (P).

Theorem 2.3
Let x*_{q,k} ∈ X be an optimal solution of problem (LP), and x̄_{q,k,ε} ∈ X be an optimal solution of problem (SP) for some q > 0, k ∈ [1/2, 1), and ε > 0. If x*_{q,k} is a feasible solution of problem (P), and x̄_{q,k,ε} is an ε-feasible solution of problem (P), then Note that Thus, it follows from (2.2) that By (2.7) and (2.8), one has

Theorems 2.1 and 2.2 show that an optimal solution of (SP) is also an approximate optimal solution of (LP) when the error ε is sufficiently small. By Theorem 2.3, an optimal solution of (SP) is an approximate optimal solution of (P) if it is an ε-feasible solution of (P).

A smoothing method
Based on the discussion in the preceding section, we can design an algorithm to obtain an approximate optimal solution of (P) by solving (SP).
Step 2. Solve min_{x∈R^n} ϕ_{q_j,k,ε_j}(x) starting from x_j, and let x_{j+1} be the obtained optimal solution (x_{j+1} can be obtained by a quasi-Newton method).
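Since only Step 2 of Algorithm 3.1 appears above, the following Python sketch fills in the surrounding loop with assumed choices: the initial parameters q_0 and ε_0, the updates q_{j+1} = σ q_j and ε_{j+1} = η ε_j, a stopping test on constraint violation, and the illustrative smoothing function from the sketch in Sect. 2. None of these specifics should be read as the paper's algorithm; they only show how Step 2 is embedded in a smoothing penalty scheme.

```python
import numpy as np
from scipy.optimize import minimize

def smoothing_penalty_method(f0, constraints, x0, k=0.75,
                             q0=1.0, eps0=0.1, sigma=10.0, eta=0.1,
                             tol=1e-6, max_iter=50):
    """Sketch of a smoothing lower order penalty scheme (assumptions noted in text).

    Step 2 (from the paper): minimize the smooth penalty phi_{q_j,k,eps_j}
    starting from the current iterate; a quasi-Newton method (BFGS) is used here.
    The initialization, the updates q <- sigma*q and eps <- eta*eps, and the
    stopping test on constraint violation are illustrative assumptions, as is p_k_eps.
    """
    def p_k_eps(t, eps):
        # illustrative C^1 approximation of (max{t, 0})^k, NOT the paper's definition
        return (max(t, 0.0) ** 2 + eps ** 2) ** (k / 2.0) - eps ** k

    x, q, eps = np.asarray(x0, dtype=float), q0, eps0
    for _ in range(max_iter):
        phi = lambda y: f0(y) + q * sum(p_k_eps(fi(y), eps) for fi in constraints)
        x = minimize(phi, x, method="BFGS").x                   # Step 2
        if max(max(fi(x), 0.0) for fi in constraints) <= tol:   # violation small enough
            break
        q, eps = sigma * q, eta * eps                           # enlarge q, shrink eps
    return x

# Illustrative use:  min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0  (optimum near (0.5, 0.5))
f0 = lambda x: x[0] ** 2 + x[1] ** 2
g1 = lambda x: 1.0 - x[0] - x[1]
print(smoothing_penalty_method(f0, [g1], x0=[0.0, 0.0]))
```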
Under some mild conditions, the following conclusion shows the global convergence of Algorithm 3.1.
(2) Any limit point of {x_{j+1}} is an optimal solution of (P).

For the sake of contradiction, suppose that {x_{j+1}} is unbounded. Without loss of generality, we assume that ‖x_{j+1}‖ → ∞ as j → ∞.
To prove that x* is an optimal solution of (P), it suffices to show that x* ∈ X_0 and f_0(x*) ≤ f_0(x) for all x ∈ X_0.
To show that x* ∈ X_0, we argue by contradiction. Suppose that x* ∉ X_0; then there exist δ_0 > 0, i_0 ∈ I, and a subset J ⊂ N such that where N denotes the set of natural numbers. By Step 2, (2.2), and (2.3), for any x ∈ X_0, one has It follows that which contradicts q_j → +∞, ε_j → 0, and ε_j^{2k−1} q_j → 0 as j → ∞. Hence x* ∈ X_0.

Numerical examples
In this section, we carry out some numerical experiments to show the efficiency of Algorithm 3.1.
In [24], with three different starting points, similar numerical results are given for k = 2/3: the optimal solution (2.329517, 3.178421) with objective function value −5.507938. In [25], the optimal solution (2.3295, 3.1783) is given with objective function value −5.5079. The numerical results obtained for Example 4.3 are similar to those reported in [24] and [25].

Concluding remarks
In this paper, we proposed a method to smooth the lower order exact penalty function with k ∈ [1/2, 1) for inequality constrained optimization. Furthermore, we proved that the algorithm based on the smoothed penalty function is globally convergent under mild conditions. The numerical experiments show that the algorithm is effective.

Funding
This work was supported by the National Natural Science Foundation of China (71371107 and 61373027) and the Natural Science Foundation of Shandong Province (ZR2016AM10).