
# Smoothing of the lower-order exact penalty function for inequality constrained optimization

*Journal of Inequalities and Applications*
**volume 2016**, Article number: 185 (2016)

## Abstract

In this paper, we propose a method to smooth the general lower-order exact penalty function for inequality constrained optimization. We prove that an approximate global solution of the original problem can be obtained by finding a global solution of the smoothed penalty problem. We develop an algorithm based on the smoothed penalty function and show that it is convergent under mild conditions. The efficiency of the algorithm is illustrated with numerical examples.

## 1 Introduction

We consider the following nonlinear constrained optimization problem:

where \(f:R^{n}\rightarrow R\) and \(g_{i}:R^{n}\rightarrow R\), \(i\in I=\{1,2,\ldots,m\}\) are twice continuously differentiable functions. Let

Penalty function methods have been widely proposed for solving problem [P]. In Zangwill [1], the classical \(l_{1}\) exact penalty function is defined as follows:

where \(q>0\) is a penalty parameter; however, this function is not smooth. Differentiable approximations to the exact penalty function have been obtained in various places in the literature, such as [2–10].
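As a concrete illustration, the \(l_{1}\) penalty above takes only a few lines of code; the objective, constraint, and test points below are illustrative assumptions, not taken from the paper.

```python
# Classical l1 exact penalty: F_q(x) = f(x) + q * sum_i max{g_i(x), 0}.
# The objective f, constraints gs, and test points are illustrative only.
def l1_penalty(f, gs, q):
    def F(x):
        return f(x) + q * sum(max(g(x), 0.0) for g in gs)
    return F

# Example: minimize x^2 subject to 1 - x <= 0 (i.e., x >= 1).
f = lambda x: x ** 2
gs = [lambda x: 1.0 - x]
F = l1_penalty(f, gs, q=10.0)

print(F(0.0))  # infeasible point: 0 + 10*1 = 10.0
print(F(2.0))  # feasible point: penalty term vanishes, F = f = 4.0
```

Note that \(F\) is not differentiable wherever some \(g_{i}(x)=0\), which is precisely the motivation for the smoothing studied in this paper.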

Recently, lower-order penalty functions have been proposed in the literature. In [11], Luo gave a global exact penalty result for a lower-order penalty function of the form

where \(\alpha>0\) and \(\gamma\geq1\) are the penalty parameters. Obviously, it reduces to the \(l_{1}\) penalty function when \(\gamma=1\).

The nonlinear penalty function has been investigated in [12] and [13] as follows:

where \(f(x)\) is assumed to be positive, \(k>0\) is a given number, and \(d=(d_{1},d_{2},\ldots,d_{m})\in R_{+}^{m}\) is the penalty parameter. It was shown in [12] that the exact penalty parameter corresponding to \(k\in(0,1]\) is substantially smaller than that of the classical \(l_{1}\) exact penalty function.

In [14], the lower-order penalty functions

have been introduced and shown to be exact under some conditions, but their smoothing has not been discussed for \(k\in(0,1)\). When \(k=\frac{1}{2}\), we have the following function:

Its smoothing has been investigated in [14, 15] and [16]. The smoothing of the lower-order exact penalty function (1.4) has been investigated in [17] and [18].

In this paper, we aim to smooth the lower-order penalty function (1.4). The rest of this paper is organized as follows. In Section 2, a new smoothing function for the lower-order penalty function (1.4) is introduced. Error estimates are obtained among the optimal objective function values of the smoothed penalty problem, the nonsmooth penalty problem, and the original problem. In Section 3, we present an algorithm to compute an approximate solution to [P] based on the smoothed penalty function and show that it is globally convergent. In Section 4, three numerical examples are given to show the efficiency of the algorithm. In Section 5, we conclude the paper.

## 2 Smoothing exact lower-order penalty function

We consider the following lower-order penalty problem:

In order to establish the exact penalization property, we need the following assumptions as given in [14].

### Assumption 1

\(f(x)\) satisfies the following coercive condition:

Under Assumption 1, there exists a box *X* such that \(G([P])\subset \operatorname{int}(X)\), where \(G([P])\) is the set of global minima of problem \([P]\) and \(\operatorname{int}(X)\) denotes the interior of the set *X*. Consider the following problem:

Let \(G([P^{\prime}])\) denote the set of global minima of problem \([P^{\prime}]\). Then \(G([P^{\prime}])=G([P])\).

### Assumption 2

The set \(G([P])\) is a finite set.

Then for any \(k\in(0,1)\), we consider the penalty problem of the form

By [14], the lower-order penalty function \(\varphi_{q,k}(x)\) (\(k\in(0,1)\)) is an exact penalty function under Assumptions 1 and 2. However, \(\varphi_{q,k}(x)\) is nondifferentiable. We now consider its smoothing.

Let \(p_{k}(u)=(\max\{u,0\})^{k}\), that is,

$$ p_{k}(u)= \textstyle\begin{cases} u^{k}, & u>0, \\ 0, & u\leq0, \end{cases} $$

then the penalty function can be written as \(\varphi_{q,k}(x)=f(x)+q\sum_{i=1}^{m}p_{k}(g_{i}(x))\).

For any \(\epsilon>0\), let

It is easy to see that \(p_{\epsilon,k}(u)\) is continuously differentiable on *R*. Furthermore, \(p_{\epsilon,k}(u)\rightarrow p_{k}(u)\) as \(\epsilon\rightarrow0\).

Figure 1 shows the behavior of \(p_{2/3}(u)\) (solid line), \(p_{0.1,2/3}(u)\) (dotted line), \(p_{0.01,2/3}(u)\) (broken line), and \(p_{0.001,2/3}(u)\) (dash-dot line).
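The convergence \(p_{\epsilon,k}(u)\rightarrow p_{k}(u)\) can be checked numerically with the parameter values of Figure 1. The sketch below uses one representative \(C^{1}\) smoothing of the same shape (a cubic on \([0,\epsilon]\) matching \(u^{k}\) in value and slope at \(u=\epsilon\)); this particular construction is an illustrative assumption and need not coincide with the paper's \(p_{\epsilon,k}\).

```python
# p_k(u) = (max{u,0})^k, and a C^1 smoothing replacing u^k on (0, eps) by a
# cubic h(u) = a*u^2 + b*u^3 with h(eps) = eps^k and h'(eps) = k*eps^(k-1).
# This smoothing is an illustrative stand-in, not necessarily the paper's p_{eps,k}.
def p_k(u, k):
    return max(u, 0.0) ** k

def p_eps_k(u, eps, k):
    if u <= 0.0:
        return 0.0
    if u >= eps:
        return u ** k
    a = (3.0 - k) * eps ** (k - 2.0)
    b = (k - 2.0) * eps ** (k - 3.0)
    return a * u * u + b * u ** 3

def max_gap(eps, k, n=1000):
    # largest deviation |p_eps_k - p_k| on a grid over [0, eps]
    return max(abs(p_eps_k(i * eps / n, eps, k) - p_k(i * eps / n, k))
               for i in range(n + 1))

for eps in (0.1, 0.01, 0.001):
    print(eps, max_gap(eps, k=2.0 / 3.0))  # the gap shrinks as eps -> 0
```

The two functions agree exactly outside \((0,\epsilon)\), and the maximal gap on \([0,\epsilon]\) decreases with \(\epsilon\), mirroring the curves in Figure 1.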

Let

Then \(\varphi_{q,\epsilon,k}(x)\) is continuously differentiable on \(R^{n}\). Consider the following smoothed optimization problem:

### Lemma 2.1

*For any*
\(x\in X, \epsilon>0\), *we see that*

### Proof

Note that

Let

We get

When \(u\in(0,\epsilon)\), \(F^{\prime}(u)\geq0\), so \(F(u)\) is monotone increasing on \([0,\epsilon]\). When \(g_{i}(x)\in[0,\epsilon]\), we get

Thus we see that

This completes the proof. □

### Theorem 2.1

*Let*
\(\{\epsilon_{j}\}\rightarrow0^{+}\)
*be a sequence of positive numbers and assume that*
\(x_{j}\)
*is a solution to*
\(\min_{x\in X} \varphi_{q,\epsilon_{j},k}(x)\)
*for some*
\(q>0\), \(k\in(0,1)\). *Let*
\(\bar{x}\)
*be an accumulation point of the sequence*
\(\{x_{j}\}\). *Then*
\(\bar{x}\)
*is an optimal solution to*
\(\min_{x\in X} \varphi_{q,k}(x)\).

### Proof

Because \(x_{j}\) is a solution to \(\min_{x\in X} \varphi_{q,\epsilon_{j},k}(x)\), we see that

By Lemma 2.1, we see that

and

It follows that

Letting \(j\rightarrow\infty\), we see that

This completes the proof. □

### Theorem 2.2

*Let*
\(x_{q,k}^{*}\in X\)
*be an optimal solution of problem*
\([LOP^{\prime}]_{k}\)
*and*
\(\bar{x}_{q,\epsilon,k}\in X\)
*be an optimal solution of problem*
\([SP]\)
*for some*
\(q>0, k\in(0,1)\)
*and*
\(\epsilon>0\). *Then we see that*

### Proof

By Lemma 2.1, we see that

This completes the proof. □

Theorem 2.1 and Theorem 2.2 mean that an approximately optimal solution of [SP] is also an approximately optimal solution of \([LOP^{\prime}]_{k}\) when the error \(\epsilon\) is sufficiently small.

### Definition 2.1

For \(\epsilon>0\), a point \(x\in X\) is an \(\epsilon\)-feasible solution or an \(\epsilon\)-solution of problem [P] if

We say that the pair \((x^{*},\lambda^{*})\) satisfies the second-order sufficiency condition in [19] if

where \(L(x,\lambda)=f(x)+\sum_{i=1}^{m}\lambda_{i}g_{i}(x)\), and

### Theorem 2.3

*Suppose that Assumptions*
1
*and*
2
*hold*, *and that for any*
\(x^{*}\in G([P])\), *there exists a*
\(\lambda^{*}\in R_{+}^{m}\)
*such that the pair*
\((x^{*},\lambda^{*})\)
*satisfies the second*-*order sufficiency condition* (2.5). *Then for*
*any* \(k\in(0,1)\), *let*
\(x^{*}\in X \)
*be a global solution of problem*
\([P]\)
*and*
\(\bar{x}_{q,\epsilon,k}\in X \)
*be a global solution of problem*
\([SP]\)
*for*
\(k\in(0,1),\epsilon>0\). *Then there exists*
\(q^{*}>0\)
*such that for any*
\(q>q^{*}\),

*Furthermore*, *if*
\(\bar{x}_{q,\epsilon,k}\)
*is an*
\(\epsilon\)-*feasible solution of problem* [*P*], *then we see that*

*where*
\(q^{*}>0\)
*is defined in Corollary* 2.3 *in* [14].

### Proof

By Corollary 2.3 in [14], we see that \(x^{*}\in X\) is a global solution of problem \([LOP^{\prime}]_{k}\). Then by Theorem 2.2, we see that

Since \(\sum_{i=1}^{m}p_{k}(g_{i}(x^{*}))=0\), we have

By (2.7) and (2.8), we see that (2.6) holds.

Furthermore, it follows from (2.4) and (2.6) that

It follows that

From (2.3) and the fact that \(\bar{x}_{q,\epsilon,k}\) is an \(\epsilon\)-feasible solution of problem [P], we see that

Then it follows from (2.9) and (2.10) that

This completes the proof. □

Theorem 2.3 means that an approximately optimal solution of [SP] is an approximately optimal solution of [P] provided the solution of [SP] is \(\epsilon\)-feasible.

## 3 A smoothing method

We propose the following algorithm to solve [P].

### Algorithm 3.1

*Step* 1 Choose an initial point \(x_{0}\) and a stopping tolerance \(\epsilon>0\). Given \(\epsilon_{0}>0, q_{0}>0, 0<\eta<1\), and \(\sigma>1\), let \(j=0\) and go to Step 2.

*Step* 2 Use \(x_{j}\) as the starting point to solve \(\min_{x\in R^{n}} \varphi_{q_{j},\epsilon_{j},k}(x)\). Let \(x_{j}^{*}\) be the optimal solution obtained (in our experiments, \(x_{j}^{*}\) is computed by a quasi-Newton method with finite-difference gradients).

*Step* 3 If \(x_{j}^{*}\) is \(\epsilon\)-feasible for [P], then stop: \(x_{j}^{*}\) is an approximately optimal solution of the original problem [P]. Otherwise, let \(q_{j+1}=\sigma q_{j}\), \(\epsilon_{j+1}=\eta\epsilon_{j}\), \(x_{j+1}=x_{j}^{*}\), and \(j=j+1\), then go to Step 2.

From \(0<\eta<1\) and \(\sigma>1\), we can easily see that the sequence \(\{\epsilon_{j}\}\) decreases to 0 and the sequence \(\{q_{j}\}\) increases to +∞ as \(j\rightarrow+\infty\).
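The loop of Algorithm 3.1 can be sketched as follows. The smoothing function, the inner solver (a simple derivative-free compass search standing in for the quasi-Newton method), and the toy problem are all illustrative assumptions; in particular, the \(C^{1}\) stand-in for \(p_{\epsilon_{j},k}\) need not coincide with the paper's formula.

```python
# Sketch of Algorithm 3.1 on a toy problem. The smoothing p_eps_k, the inner
# solver (compass search instead of quasi-Newton), and the test problem are
# all illustrative assumptions.
def p_eps_k(u, eps, k):
    # C^1 illustrative stand-in for the smoothed penalty term p_{eps,k}
    if u <= 0.0:
        return 0.0
    if u >= eps:
        return u ** k
    a = (3.0 - k) * eps ** (k - 2.0)
    b = (k - 2.0) * eps ** (k - 3.0)
    return a * u * u + b * u ** 3

def compass_search(F, x, step=0.5, tol=1e-9, max_iter=200000):
    # crude derivative-free local minimizer (stand-in for Step 2)
    x, fx = list(x), F(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = F(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x

def algorithm_3_1(f, gs, x0, k=2.0 / 3.0, q=1.0, eps_j=0.1,
                  eta=0.1, sigma=2.0, eps=1e-6, max_outer=20):
    x = list(x0)
    for _ in range(max_outer):
        F = lambda z: f(z) + q * sum(p_eps_k(g(z), eps_j, k) for g in gs)
        x = compass_search(F, x)                    # Step 2: inner minimization
        if max(max(g(x), 0.0) for g in gs) <= eps:  # Step 3: eps-feasible?
            break
        q, eps_j = sigma * q, eta * eps_j           # update parameters
    return x

# Toy problem: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 - 1 <= 0; solution (1, 0).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
gs = [lambda x: x[0] + x[1] - 1.0]
x = algorithm_3_1(f, gs, [0.0, 0.0])
print(x)  # approximately [1.0, 0.0]
```

On this toy problem the iterates approach the constrained minimizer \((1,0)\) as \(q_{j}\) grows and \(\epsilon_{j}\) shrinks, mirroring the behavior described by Theorem 2.3.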

Now we prove the convergence of the algorithm under some mild conditions.

### Theorem 3.1

*Suppose that Assumption*
1
*holds and for any*
\(q\in[q_{0},+\infty), \epsilon\in(0,\epsilon_{0}]\), *the set*

*Let*
\(\{x_{j}^{*}\}\)
*be the sequence generated by Algorithm*
3.1. *If the sequence*
\(\{\varphi_{q_{j},\epsilon_{j},k}(x_{j}^{*})\}\)
*is bounded*, *then*
\(\{x_{j}^{*}\}\)
*is bounded and any limit point of*
\(\{x_{j}^{*}\}\)
*is the solution of* [*P*].

### Proof

First we show that \(\{x_{j}^{*}\}\) is bounded. Note that

By the assumptions, there is some number *L* such that

Suppose to the contrary that \(\{x_{j}^{*}\}\) is unbounded. Without loss of generality, we assume that \(\|x_{j}^{*}\|\rightarrow\infty\) as \(j\rightarrow \infty\). Then we get

which results in a contradiction since *f* is coercive.

We show next that any limit point of \(\{x_{j}^{*}\}\) is an optimal solution of [P]. Let \(\bar{x}\) be any limit point of \(\{x_{j}^{*}\}\). Then there exists an index set \(J\subseteq N\) such that \(x_{j}^{*}\rightarrow\bar{x}\) for \(j\in J\). If we can prove that (i) \(\bar{x}\in G_{0}\) and (ii) \(f(\bar{x})\leq\inf_{x\in G_{0}}f(x)\), then \(\bar{x}\) is an optimal solution of [P].

(i) Suppose to the contrary that \(\bar{x}\notin G_{0}\). Then there exist \(\delta_{0}>0\), \(i_{0}\in I\), and a subset \(J_{1}\subset J\) such that

for any \(j\in J_{1}\).

By Step 2 in Algorithm 3.1 and (2.3), we see that

for any \(x\in G_{0}\), which contradicts \(q_{j}\rightarrow+\infty\). Hence \(\bar{x}\in G_{0}\).

(ii) For any \(x\in G_{0}\), we have

so \(f(\bar{x})\leq\inf_{x\in G_{0}}f(x)\) holds.

This completes the proof. □

## 4 Numerical examples

In this section, we solve three numerical examples to show the applicability of Algorithm 3.1.

### Example 4.1

(Example 4.2 in [17], Example 3.3 in [18] and Example 4.1 in [20])

With \(k=1/3, x_{0}=(0.5, 1.5), q_{0}=1.0, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 1.

With \(k=2/3, x_{0}=(0.5, 1.5), q_{0}=10, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 2.

Numerical results for \(k=1/3\) and \(k=2/3\) are given in Table 1 and Table 2, respectively; the obtained approximate solutions are clearly similar. In [17], the solution given for Example 4.2 is \((0.7254,0.3993)\) with objective function value 1.8375 for \(k=1/3\). In [18], the solution given for Example 3.3 is \((0.7239410,0.3988712)\) with objective function value 1.837919 for \(k=1/3\), and \((0.7276356,0.3998984)\) with objective function value 1.838380 for \(k=2/3\). In [20], the solution given for Example 4.1 is \((0.7255,0.3993)\) with objective function value 1.8376. Our numerical results are similar to those of [17] and [20], and better than those of [18] in this example.

### Example 4.2

(Test Problem 6 in SectionÂ 4.6 in [21])

With \(k=2/3, x_{0}=(2.5,0), q_{0}=5, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 3.

With \(k=2/3, x_{0}=(0,4), q_{0}=5, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 4.

With \(k=2/3, x_{0}=(1.0,1.5), q_{0}=5, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 5.

With the different starting points \(x_{0}=(2.5,0)\), \(x_{0}=(0,4)\), and \(x_{0}=(1.0,1.5)\), numerical results are given in Table 3, Table 4, and Table 5, respectively. One can see that the numerical results in Tables 3-5 are similar, which means that Algorithm 3.1 is not overly sensitive to the choice of starting point in this example. In [21], the given solution is \((2.3295, 3.1783)\) with objective function value −5.5079. Our numerical results are similar to the result of [21] in this example.

For the *j*th iteration of the algorithm, we define a constraint error \(e_{j}\) by

It is clear that \(x_{j}^{*}\) is \(\epsilon\)-feasible for [P] when \(e_{j}<\epsilon\).
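In code, this check is one line. Here \(e_{j}\) is taken as \(\max_{i\in I}\max\{g_{i}(x_{j}^{*}),0\}\), an assumption consistent with the \(\epsilon\)-feasibility test; the constraints and test points are illustrative.

```python
# Constraint error e_j = max_i max{g_i(x), 0}; x is eps-feasible when e_j < eps.
# The specific form of e_j, the constraints, and the test points are
# illustrative assumptions.
def constraint_error(gs, x):
    return max(max(g(x), 0.0) for g in gs)

gs = [lambda x: x[0] + x[1] - 1.0,  # x1 + x2 <= 1
      lambda x: -x[0]]              # x1 >= 0

print(constraint_error(gs, [0.3, 0.5]))  # feasible point: error is 0.0
print(constraint_error(gs, [0.8, 0.5]))  # violates the first constraint by about 0.3
```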

### Example 4.3

(Example 4.5 in [15] and Example 3.4 in [18])

With \(k=2/3, x_{0}=(2,2,1,2,1,2), q_{0}=100, \epsilon_{0}=0.5, \eta=0.01, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 6.

It is clear from Table 6 that the obtained approximately optimal solution is \(x^{*}=(1.620468, 8.379530, 0.013229, 0.607246, 0.999994, 7.392764)\) with corresponding objective function value 117.000044. In [15], the approximately optimal solution obtained is \(x^{*}=(1.805996,8.194004,0.497669,0.308327,1.000000,7.691673)\) with corresponding objective function value 117.010399. In [18], the solution given for Example 3.4 is \((1.635022, 8.364978, 0.060010, 0.5812775, 0.9937346, 7.431253)\) with objective function value 117.0573 for \(k=2/3\). Our numerical results are better than those of [15] and [18] in this example.

## 5 Concluding remarks

In this paper, we propose a method for smoothing the nonsmooth lower-order exact penalty function for inequality constrained optimization. We prove that the algorithm based on the smoothed penalty function is convergent under mild conditions.

According to the numerical results given in Section 4, we can obtain an approximately optimal solution of the original problem [P] by Algorithm 3.1.

Finally, we offer some advice on choosing the parameters in the algorithm. Usually, the initial value \(q_{0}\) may be 1, 5, 10, 100, 1000, or 10000, with \(\sigma=2, 5, 10\), or 100. The initial value \(\epsilon_{0}\) may be 1, 0.5, 0.1, or 0.01, with \(\eta=0.5, 0.1, 0.05\), or 0.01.

## References

1. Zangwill, WI: Non-linear programming via penalty functions. Manag. Sci. **13**(5), 344-358 (1967)
2. Zang, I: A smoothing-out technique for min-max optimization. Math. Program. **19**(1), 61-77 (1980)
3. Ben-Tal, A, Teboulle, M: A smoothing technique for non-differentiable optimization problems. In: Lecture Notes in Mathematics, vol. 1405, pp. 1-11. Springer, Berlin (1989)
4. Pinar, M, Zenios, S: On smoothing exact penalty functions for convex constrained optimization. SIAM J. Optim. **4**, 486-511 (1994)
5. Liu, BZ: On smoothing exact penalty functions for nonlinear constrained optimization problems. J. Appl. Math. Comput. **30**, 259-270 (2009)
6. Lian, SJ: Smoothing approximation to \(l_{1}\) exact penalty function for inequality constrained optimization. Appl. Math. Comput. **219**(6), 3113-3121 (2012)
7. Liu, BZ, Zhao, WL: A modified exact smooth penalty function for nonlinear constrained optimization. J. Inequal. Appl. **2012**, 173 (2012)
8. Xu, XS, Meng, ZQ, Huang, LG, Shen, R: A second-order smooth penalty function algorithm for constrained optimization problems. Comput. Optim. Appl. **55**(1), 155-172 (2013)
9. Jiang, M, Shen, R, Xu, XS, Meng, ZQ: Second-order smoothing objective penalty function for constrained optimization problems. Numer. Funct. Anal. Optim. **35**(3), 294-309 (2014)
10. Binh, NT: Smoothing approximation to \(l_{1}\) exact penalty function for constrained optimization problems. J. Appl. Math. Inform. **33**(3-4), 387-399 (2015)
11. Luo, ZQ, Pang, JS, Ralph, D: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)
12. Rubinov, AM, Yang, XQ, Bagirov, AM: Penalty functions with a small penalty parameter. Optim. Methods Softw. **17**(5), 931-964 (2002)
13. Huang, XX, Yang, XQ: Convergence analysis of a class of nonlinear penalization methods for constrained optimization via first-order necessary optimality conditions. J. Optim. Theory Appl. **116**(2), 311-332 (2003)
14. Wu, ZY, Bai, FS, Yang, XQ, Zhang, LS: An exact lower order penalty function and its smoothing in nonlinear programming. Optimization **53**(1), 51-68 (2004)
15. Meng, ZQ, Dang, CY, Yang, XQ: On the smoothing of the square-root exact penalty function for inequality constrained optimization. Comput. Optim. Appl. **35**, 375-398 (2006)
16. Lian, SJ: Smoothing approximation to the square-order exact penalty functions for constrained optimization. J. Appl. Math. **2013**, Article ID 568316 (2013)
17. He, ZH, Bai, FS: A smoothing approximation to the lower order exact penalty function. Oper. Res. Trans. **14**(2), 11-22 (2010)
18. Lian, SJ: On the smoothing of the lower order exact penalty function for inequality constrained optimization. Oper. Res. Trans. **16**(2), 51-64 (2012)
19. Bazaraa, MS, Sherali, HD, Shetty, CM: Nonlinear Programming: Theory and Algorithms, 2nd edn. Wiley, New York (1993)
20. Sun, XL, Li, D: Value-estimation function method for constrained global optimization. J. Optim. Theory Appl. **102**(2), 385-409 (1999)
21. Floudas, CA, Pardalos, PM: A Collection of Test Problems for Constrained Global Optimization Algorithms. Springer, Berlin (1990)

## Acknowledgements

The authors wish to thank the anonymous referees for their endeavors and valuable comments. This work is supported by National Natural Science Foundation of China (71371107 and 61373027) and Natural Science Foundation of Shandong Province (ZR2013AM013).

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

YD drafted the manuscript. SL helped to draft the manuscript and revised it. All authors read and approved the final manuscript.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Lian, S., Duan, Y. Smoothing of the lower-order exact penalty function for inequality constrained optimization.
*J Inequal Appl* **2016**, 185 (2016). https://doi.org/10.1186/s13660-016-1126-9
