
Smoothing of the lower-order exact penalty function for inequality constrained optimization

Abstract

In this paper, we propose a method to smooth the general lower-order exact penalty function for inequality constrained optimization. We prove that an approximate global solution of the original problem can be obtained by searching for a global solution of the smoothed penalty problem. We develop an algorithm based on the smoothed penalty function and show that it is convergent under some mild conditions. The efficiency of the algorithm is illustrated with some numerical examples.

1 Introduction

We consider the following nonlinear constrained optimization problem:

$$[P]\quad \textstyle\begin{array}{@{}l@{\quad}l} \min f(x)\\ \mathrm{s.t.}\ g_{i}(x)\leq0,& i=1,2,\ldots,m, \end{array} $$

where \(f:R^{n}\rightarrow R\) and \(g_{i}:R^{n}\rightarrow R\), \(i\in I=\{1,2,\ldots,m\}\) are twice continuously differentiable functions. Let

$$G_{0}= \bigl\{ x\in R^{n}| g_{i}(x)\leq0, i=1,2, \ldots,m \bigr\} . $$

Penalty function methods have been widely proposed for solving problem [P] in the literature. In Zangwill [1], the classical \(l_{1}\) exact penalty function is defined as follows:

$$ p_{1}(x,q)=f(x)+q\sum_{i=1}^{m} \max \bigl\{ g_{i}(x),0 \bigr\} , $$
(1.1)

where \(q>0\) is a penalty parameter, but this function is not smooth. Differentiable approximations to exact penalty functions have been obtained in various places in the literature, such as [2–10].

Recently, lower-order penalty functions have been proposed in the literature. In [11], Luo et al. gave a global exact penalty result for a lower-order penalty function of the form

$$ f(x)+\alpha \Biggl(\sum_{i=1}^{m}\max \bigl\{ g_{i}(x),0 \bigr\} \Biggr)^{1/\gamma}, $$
(1.2)

where \(\alpha>0, \gamma\geq1\) are the penalty parameters. Obviously, it is the \(l_{1}\) penalty function when \(\gamma=1\).

The nonlinear penalty function has been investigated in [12] and [13] as follows:

$$ L^{k}(x,d)= \Biggl[f(x)^{k}+\sum _{i=1}^{m}d_{i} \bigl(\max \bigl\{ g_{i}(x),0 \bigr\} \bigr)^{k} \Biggr]^{1/k}, $$
(1.3)

where \(f(x)\) is assumed to be positive, \(k>0\) is a given number, and \(d=(d_{1},d_{2},\ldots,d_{m})\in R_{+}^{m}\) is the penalty parameter. It was shown in [12] that the exact penalty parameter corresponding to \(k\in(0,1]\) is substantially smaller than that of the classical \(l_{1}\) exact penalty function.

In [14], the lower-order penalty functions

$$ \varphi_{q,k}(x)=f(x)+q\sum_{i=1}^{m} \bigl(\max \bigl\{ g_{i}(x),0 \bigr\} \bigr)^{k}, \quad k \in(0,1), $$
(1.4)

have been introduced and shown to be exact under some conditions, but their smoothing was not discussed for general \(k\in(0,1)\). When \(k=\frac{1}{2}\), we have the following function:

$$ \varphi_{q}(x)=f(x)+q\sum_{i=1}^{m} \sqrt{\max \bigl\{ g_{i}(x),0 \bigr\} }. $$
(1.5)

Its smoothing has been investigated in [14, 15] and [16]. The smoothing of the lower-order exact penalty function (1.4) has been investigated in [17] and [18].

In this paper, we aim to smooth the lower-order penalty function (1.4). The rest of this paper is organized as follows. In Section 2, a new smoothing function for the lower-order penalty function (1.4) is introduced, and error estimates are obtained among the optimal objective function values of the smoothed penalty problem, the nonsmooth penalty problem, and the original problem. In Section 3, we present an algorithm, based on the smoothed penalty function, to compute an approximate solution to [P] and show that it is globally convergent. In Section 4, three numerical examples are given to show the efficiency of the algorithm. In Section 5, we conclude the paper.

2 Smoothing exact lower-order penalty function

We consider the following lower-order penalty problem:

$$[LOP]_{k}\quad \min_{x\in R^{n}} \varphi_{q,k}(x). $$

In order to establish the exact penalization property, we need the following assumptions as given in [14].

Assumption 1

\(f(x)\) satisfies the following coercive condition:

$$\lim_{\|x\|\rightarrow+\infty}f(x)=+\infty. $$

Under Assumption 1, there exists a box X such that \(G([P])\subset \operatorname{int}(X)\), where \(G([P])\) is the set of global minima of problem \([P]\) and \(\operatorname{int}(X)\) denotes the interior of the set X. Consider the following problem:

$$\bigl[P^{\prime}\bigr]\quad \textstyle\begin{array}{@{}l@{\quad}l} \min f(x)\\ \mathrm{s.t.}\ g_{i}(x)\leq0, & i=1,2,\ldots,m,\\ \hphantom{\mathrm{s.t.}}\ x\in X. \end{array} $$

Let \(G([P^{\prime}])\) denote the set of global minima of problem \([P^{\prime}]\). Then \(G([P^{\prime}])=G([P])\).

Assumption 2

The set \(G([P])\) is a finite set.

Then for any \(k\in(0,1)\), we consider the penalty problem of the form

$$\bigl[LOP^{\prime}\bigr]_{k}\quad \min_{x\in X} \varphi_{q,k}(x). $$

It was shown in [14] that, under Assumptions 1 and 2, the lower-order penalty function \(\varphi_{q,k}(x)\) (\(k\in(0,1)\)) is an exact penalty function. However, \(\varphi_{q,k}(x)\) (\(k\in(0,1)\)) is not differentiable. We now consider its smoothing.

Let \(p_{k}(u)=(\max\{u,0\})^{k}\), that is,

$$ p_{k}(u)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} u^{k} & \mbox{if } u> 0 ,\\ 0 & \mbox{otherwise}, \end{array}\displaystyle \right . $$
(2.1)

then

$$ \varphi_{q,k}(x)=f(x)+q\sum_{i=1}^{m}p_{k} \bigl(g_{i}(x) \bigr). $$
(2.2)

For any \(\epsilon>0\), let

$$ p_{\epsilon,k}(u)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0 & \mbox{if } u\leq0 , \\ \frac{1}{2}\epsilon^{-k}u^{2k} & \mbox{if } 0 < u\leq\epsilon, \\ u^{k}-\frac{\epsilon^{k}}{2} &\mbox{if } u>\epsilon. \end{array}\displaystyle \right . $$
(2.3)

It is easy to see that \(p_{\epsilon,k}(u)\) is continuously differentiable on \(R\). Furthermore, we see that \(p_{\epsilon,k}(u)\rightarrow p_{k}(u)\) as \(\epsilon\rightarrow0\).
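
For illustration only (not part of the original paper), \(p_{k}\) in (2.1) and \(p_{\epsilon,k}\) in (2.3) can be coded directly; the following minimal Python sketch uses our own names p_k and p_eps_k.

    def p_k(u, k):
        """Lower-order penalty term p_k(u) = (max{u, 0})^k, cf. (2.1)."""
        return max(u, 0.0) ** k

    def p_eps_k(u, eps, k):
        """Smoothed penalty term p_{eps,k}(u), cf. (2.3)."""
        if u <= 0.0:
            return 0.0
        if u <= eps:
            return 0.5 * eps ** (-k) * u ** (2 * k)
        return u ** k - 0.5 * eps ** k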

Figure 1 shows the behavior of \(p_{2/3}(u)\) (solid line), \(p_{0.1,2/3}(u)\) (dotted line), \(p_{0.01,2/3}(u)\) (dashed line) and \(p_{0.001,2/3}(u)\) (dash-dot line).

Figure 1. The behavior of \(p_{\epsilon,2/3}(u)\) and \(p_{2/3}(u)\).

Let

$$ \varphi_{q,\epsilon,k}(x)=f(x)+q\sum_{i=1}^{m}p_{\epsilon ,k} \bigl(g_{i}(x) \bigr). $$
(2.4)

Then \(\varphi_{q,\epsilon,k}(x)\) is continuously differentiable on \(R^{n}\). Consider the following smoothed optimization problem:

$$[SP] \quad \min_{x\in X} \varphi_{q,\epsilon,k}(x). $$
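
Assuming f and the constraint functions \(g_{i}\) are available as callables, the smoothed penalty function (2.4) can be assembled directly from the sketch of p_eps_k above. This is an illustrative sketch only; the name phi is ours.

    def phi(x, f, gs, q, eps, k):
        """Smoothed penalty phi_{q,eps,k}(x) = f(x) + q * sum_i p_{eps,k}(g_i(x)), cf. (2.4)."""
        return f(x) + q * sum(p_eps_k(g(x), eps, k) for g in gs)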

Lemma 2.1

For any \(x\in X\) and \(\epsilon>0\), we have

$$0\leq\varphi_{q,k}(x)-\varphi_{q,\epsilon,k}(x)\leq \frac{1}{2}mq \epsilon^{k}. $$

Proof

Note that

$$p_{k} \bigl(g_{i}(x) \bigr)-p_{\epsilon,k} \bigl(g_{i}(x) \bigr)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0 & \mbox{if } g_{i}(x)\leq0 , \\ (g_{i}(x))^{k}-\frac{1}{2}\epsilon^{-k}(g_{i}(x))^{2k} & \mbox{if } 0< g_{i}(x)\leq\epsilon, \\ \frac{\epsilon^{k}}{2} & \mbox{if } g_{i}(x)>\epsilon. \end{array}\displaystyle \right . $$

Let

$$F(u)=u^{k}-\frac{1}{2}\epsilon^{-k}u^{2k}. $$

We get

$$F^{\prime}(u)=ku^{k-1}-k\epsilon^{-k}u^{2k-1}=k \epsilon^{-k}u^{k-1} \bigl(\epsilon ^{k}-u^{k} \bigr). $$

When \(u\in(0,\epsilon)\), \(F^{\prime}(u)\geq0\), so \(F(u)\) is monotone increasing on \([0,\epsilon]\) with \(F(\epsilon)=\frac{1}{2}\epsilon^{k}\). Hence, when \(g_{i}(x)\in[0,\epsilon]\), we get

$$0\leq p_{k} \bigl(g_{i}(x) \bigr)-p_{\epsilon,k} \bigl(g_{i}(x) \bigr)\leq \frac{1}{2}\epsilon^{k}. $$

Since the difference equals \(\frac{\epsilon^{k}}{2}\) when \(g_{i}(x)>\epsilon\) and vanishes when \(g_{i}(x)\leq0\), summing over \(i=1,\ldots,m\) and multiplying by q gives

$$0\leq \varphi_{q,k}(x)-\varphi_{q,\epsilon,k}(x)\leq \frac{1}{2}mq\epsilon^{k}. $$

This completes the proof. □
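
As a quick numerical sanity check of the per-term bound \(0\leq p_{k}(u)-p_{\epsilon,k}(u)\leq\frac{1}{2}\epsilon^{k}\) used in Lemma 2.1, one could run the following snippet (it assumes the p_k and p_eps_k sketches above):

    import numpy as np

    eps, k = 0.1, 2.0 / 3.0
    us = np.linspace(-1.0, 1.0, 2001)
    gaps = [p_k(u, k) - p_eps_k(u, eps, k) for u in us]
    # the gap is nonnegative and never exceeds eps^k / 2
    assert min(gaps) >= 0.0 and max(gaps) <= 0.5 * eps ** k + 1e-12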

Theorem 2.1

Let \(\{\epsilon_{j}\}\rightarrow0^{+}\) be a sequence of positive numbers and assume that \(x_{j}\) is a solution to \(\min_{x\in X} \varphi_{q,\epsilon_{j},k}(x)\) for some \(q>0\), \(k\in(0,1)\). Let x̄ be an accumulation point of the sequence \(\{x_{j}\}\). Then x̄ is an optimal solution to \(\min_{x\in X} \varphi_{q,k}(x)\).

Proof

Because \(x_{j}\) is a solution to \(\min_{x\in X} \varphi_{q,\epsilon_{j},k}(x)\), we see that

$$\varphi_{q,\epsilon_{j},k}(x_{j})\leq\varphi_{q,\epsilon _{j},k}(x), \quad \forall x\in X. $$

By Lemma 2.1, we see that

$$\varphi_{q,\epsilon_{j},k}(x)\leq\varphi_{q,k}(x) $$

and

$$\varphi_{q,k}(x)\leq\varphi_{q,\epsilon_{j},k}(x)+ \frac{1}{2}mq \epsilon_{j}^{k}. $$

It follows that

$$\begin{aligned} \varphi_{q,k}(x_{j})\leq \varphi_{q,\epsilon_{j},k}(x_{j})+ \frac{1}{2}mq\epsilon_{j}^{k} \leq \varphi_{q,\epsilon_{j},k}(x)+ \frac{1}{2}mq\epsilon_{j}^{k} \leq \varphi_{q,k}(x)+ \frac{1}{2}mq\epsilon_{j}^{k}. \end{aligned}$$

Letting \(j\rightarrow\infty\) along the subsequence with \(x_{j}\rightarrow\bar{x}\) and using the continuity of \(\varphi_{q,k}\), we see that

$$\varphi_{q,k}(\bar{x})\leq\varphi_{q,k}(x). $$

This completes the proof. □

Theorem 2.2

Let \(x_{q,k}^{*}\in X\) be an optimal solution of problem \([LOP^{\prime}]_{k}\) and \(\bar{x}_{q,\epsilon,k}\in X\) be an optimal solution of problem \([SP]\) for some \(q>0, k\in(0,1)\) and \(\epsilon>0\). Then we see that

$$0\leq \varphi_{q,k} \bigl(x_{q,k}^{*} \bigr)- \varphi_{q,\epsilon,k}( \bar{x}_{q,\epsilon ,k})\leq \frac{1}{2}mq \epsilon^{k}. $$

Proof

By Lemma 2.1, we see that

$$\begin{aligned} 0 \leq&\varphi_{q,k} \bigl(x_{q,k}^{*} \bigr)- \varphi_{q,\epsilon ,k} \bigl(x_{q,k}^{*} \bigr) \\ \leq&\varphi_{q,k} \bigl(x_{q,k}^{*} \bigr)- \varphi_{q,\epsilon,k}( \bar {x}_{q,\epsilon,k}) \\ \leq&\varphi_{q,k}(\bar{x}_{q,\epsilon,k})-\varphi_{q,\epsilon,k}(\bar {x}_{q,\epsilon,k}) \\ \leq&\frac{1}{2}mq\epsilon^{k}. \end{aligned}$$

This completes the proof. □

Theorem 2.1 and Theorem 2.2 mean that an approximately optimal solution to [SP] is also an approximately optimal solution to \([LOP^{\prime}]_{k}\) when the error ϵ is sufficiently small.

Definition 2.1

For \(\epsilon>0\), a point \(x\in X \) is an ϵ-feasible solution or an ϵ-solution of problem [P], if

$$g_{i}(x)\leq\epsilon, \quad i=1,2,\ldots,m. $$

We say that the pair \((x^{*},\lambda^{*})\) satisfies the second-order sufficiency condition in [19] if

$$ \begin{aligned} &\nabla_{x}L \bigl(x^{*},\lambda^{*} \bigr)=0, \\ &g_{i} \bigl(x^{*} \bigr)\leq 0,\quad i=1,2,\ldots,m, \\ &\lambda_{i}^{*}\geq0,\quad i=1,2,\ldots,m, \\ &\lambda_{i}^{*}g_{i} \bigl(x^{*} \bigr)=0, \quad i=1,2,\ldots,m, \\ &y^{T}\nabla^{2}L \bigl(x^{*},\lambda^{*} \bigr)y>0,\quad \mbox{for any } y\in V \bigl(x^{*} \bigr), \end{aligned} $$
(2.5)

where \(L(x,\lambda)=f(x)+\sum_{i=1}^{m}\lambda_{i}g_{i}(x)\), and

$$\begin{aligned}& V \bigl(x^{*} \bigr)= \bigl\{ y\in R^{n}|{\nabla^{T}g_{i} \bigl(x^{*} \bigr)y=0, i\in A \bigl(x^{*} \bigr); \nabla^{T}g_{i} \bigl(x^{*} \bigr)y\leq0, i\in B \bigl(x^{*} \bigr)} \bigr\} , \\& A \bigl(x^{*} \bigr)= \bigl\{ i\in\{1,2,\ldots,m\}|g_{i} \bigl(x^{*} \bigr)=0, \lambda_{i}^{*}>0 \bigr\} , \\& B \bigl(x^{*} \bigr)= \bigl\{ i\in\{1,2,\ldots,m\}|g_{i} \bigl(x^{*} \bigr)=0, \lambda_{i}^{*}=0 \bigr\} . \end{aligned}$$

Theorem 2.3

Suppose that Assumptions 1 and 2 hold, and that for any \(x^{*}\in G([P])\) there exists a \(\lambda^{*}\in R_{+}^{m}\) such that the pair \((x^{*},\lambda^{*})\) satisfies the second-order sufficiency condition (2.5). Let \(k\in(0,1)\) and \(\epsilon>0\), let \(x^{*}\in X\) be a global solution of problem \([P]\), and let \(\bar{x}_{q,\epsilon,k}\in X\) be a global solution of problem \([SP]\). Then there exists \(q^{*}>0\), defined as in Corollary 2.3 of [14], such that for any \(q>q^{*}\),

$$ 0\leq f \bigl(x^{*} \bigr)-\varphi_{q,\epsilon,k}(\bar{x}_{q,\epsilon,k})\leq \frac {1}{2}mq\epsilon^{k}. $$
(2.6)

Furthermore, if \(\bar{x}_{q,\epsilon,k}\) is an ϵ-feasible solution of problem [P], then we see that

$$0 \leq f \bigl(x^{*} \bigr)-f(\bar{x}_{q,\epsilon,k})\leq mq \epsilon^{k}. $$

Proof

By Corollary 2.3 in [14], for any \(q>q^{*}\), \(x^{*}\in X\) is also a global solution of problem \([LOP^{\prime}]_{k}\). Then by Theorem 2.2, we see that

$$ 0\leq \varphi_{q,k} \bigl(x^{*} \bigr)-\varphi_{q,\epsilon,k}( \bar{x}_{q,\epsilon,k})\leq \frac{1}{2}mq\epsilon^{k}. $$
(2.7)

Since \(\sum_{i=1}^{m}p_{k}(g_{i}(x^{*}))=0\), we have

$$ \varphi_{q,k} \bigl(x^{*} \bigr)=f \bigl(x^{*} \bigr)+q\sum _{i=1}^{m}p_{k} \bigl(g_{i} \bigl(x^{*} \bigr) \bigr)=f \bigl(x^{*} \bigr). $$
(2.8)

By (2.7) and (2.8), we see that (2.6) holds.

Furthermore, it follows from (2.4) and (2.6) that

$$0 \leq f \bigl(x^{*} \bigr)- \Biggl(f(\bar{x}_{q,\epsilon,k})+q\sum _{i=1}^{m}p_{\epsilon ,k} \bigl(g_{i}( \bar{x}_{q,\epsilon,k}) \bigr) \Biggr) \leq\frac{1}{2}mq \epsilon^{k}. $$

It follows that

$$ q\sum_{i=1}^{m}p_{\epsilon,k} \bigl(g_{i}(\bar{x}_{q,\epsilon,k}) \bigr)\leq f \bigl(x^{*} \bigr)-f( \bar{x}_{q,\epsilon,k})\leq \frac{1}{2}mq\epsilon^{k}+q\sum _{i=1}^{m}p_{\epsilon,k} \bigl(g_{i}(\bar {x}_{q,\epsilon,k}) \bigr) . $$
(2.9)

From (2.3) and the fact that \(\bar{x}_{q,\epsilon,k}\) is an ϵ-feasible solution of problem [P], we see that

$$ 0\leq\sum_{i=1}^{m}p_{\epsilon,k} \bigl(g_{i}(\bar{x}_{q,\epsilon,k}) \bigr) \leq\frac{1}{2}m \epsilon^{k}. $$
(2.10)

Then it follows from (2.9) and (2.10) that

$$0 \leq f \bigl(x^{*} \bigr)-f(\bar{x}_{q,\epsilon,k})\leq mq\epsilon^{k}. $$

This completes the proof. □

Theorem 2.3 means that an approximately optimal solution to [SP] is an approximately optimal solution to [P] if the solution to [SP] is ϵ-feasible.

3 A smoothing method

We propose the following algorithm to solve [P].

Algorithm 3.1

Step 1 Choose an initial point \(x_{0}\) and a stopping tolerance \(\epsilon>0\). Given \(\epsilon_{0}>0, q_{0}>0, 0<\eta<1\), and \(\sigma>1\), let \(j=0\) and go to Step 2.

Step 2 Use \(x_{j}\) as the starting point to solve \(\min_{x\in R^{n}} \varphi_{q_{j},\epsilon_{j},k}(x)\). Let \(x_{j}^{*}\) be the optimal solution obtained (\(x_{j}^{*}\) is computed by a quasi-Newton method with finite-difference gradients).

Step 3 If \(x_{j}^{*}\) is ϵ-feasible to [P], then stop and we have obtained an approximately optimal solution \(x_{j}^{*}\) of the original problem [P]. Otherwise, let \(q_{j+1}=\sigma q_{j}, \epsilon_{j+1}=\eta\epsilon_{j}, x_{j+1}=x_{j}^{*}\), and \(j=j+1\), then go to Step 2.

Since \(0<\eta<1\) and \(\sigma>1\), the sequence \(\{\epsilon_{j}\}\) decreases to 0 and the sequence \(\{q_{j}\}\) increases to +∞ as \(j\rightarrow+\infty\).
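
The following Python sketch outlines one possible implementation of Algorithm 3.1. It assumes the helper functions p_eps_k and phi sketched in Section 2 and uses scipy.optimize.minimize with the BFGS quasi-Newton method, which falls back to finite-difference gradients when no analytic gradient is supplied; all names and default values are illustrative, not part of the paper.

    import numpy as np
    from scipy.optimize import minimize

    def algorithm_3_1(f, gs, x0, k, q0=1.0, eps0=0.1, eta=0.1, sigma=2.0,
                      tol=1e-15, max_iter=50):
        """Sketch of Algorithm 3.1: smoothing method for the lower-order penalty function."""
        x, q, eps = np.asarray(x0, dtype=float), q0, eps0
        for _ in range(max_iter):
            # Step 2: minimize the smoothed penalty function phi_{q_j, eps_j, k},
            # starting from the previous iterate.
            res = minimize(lambda y: phi(y, f, gs, q, eps, k), x, method="BFGS")
            x = res.x
            # Step 3: stop if x is eps-feasible for [P] (tol plays the role of the
            # stopping tolerance epsilon); otherwise update the parameters.
            if all(g(x) <= tol for g in gs):
                break
            q, eps = sigma * q, eta * eps
        return x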

Now we prove the convergence of the algorithm under some mild conditions.

Theorem 3.1

Suppose that Assumption 1 holds and for any \(q\in[q_{0},+\infty), \epsilon\in(0,\epsilon_{0}]\), the set

$$\arg\min_{x\in R^{n}} \varphi_{q,\epsilon,k}(x)\neq\emptyset. $$

Let \(\{x_{j}^{*}\}\) be the sequence generated by Algorithm 3.1. If the sequence \(\{\varphi_{q_{j},\epsilon_{j},k}(x_{j}^{*})\}\) is bounded, then \(\{x_{j}^{*}\}\) is bounded and any limit point of \(\{x_{j}^{*}\}\) is an optimal solution of [P].

Proof

First we show that \(\{x_{j}^{*}\}\) is bounded. Note that

$$\varphi_{q_{j},\epsilon_{j},k} \bigl(x_{j}^{*} \bigr)=f \bigl(x_{j}^{*} \bigr)+q_{j}\sum_{i=1}^{m}p_{\epsilon_{j},k} \bigl(g_{i} \bigl(x_{j}^{*} \bigr) \bigr),\quad j=0,1,2,\ldots. $$

By the assumptions, there is some number L such that

$$L>\varphi_{q_{j},\epsilon_{j},k} \bigl(x_{j}^{*} \bigr),\quad j=0,1,2,\ldots. $$

Suppose to the contrary that \(\{x_{j}^{*}\}\) is unbounded. Without loss of generality, we assume that \(\|x_{j}^{*}\|\rightarrow\infty\) as \(j\rightarrow \infty\). Since \(q_{j}\sum_{i=1}^{m}p_{\epsilon_{j},k}(g_{i}(x_{j}^{*}))\geq0\), we then get

$$L>f \bigl(x_{j}^{*} \bigr), \quad j=0,1,2,\ldots, $$

which contradicts the coercivity of f in Assumption 1.

We show next that any limit point of \(\{x_{j}^{*}\}\) is an optimal solution of [P]. Let x̄ be any limit point of \(\{x_{j}^{*}\}\). Then there exists an infinite index set \(J\subseteq N\) such that \(x_{j}^{*}\rightarrow\bar{x}\) as \(j\rightarrow\infty, j\in J\). If we can prove that (i) \(\bar{x}\in G_{0}\) and (ii) \(f(\bar{x})\leq\inf_{x\in G_{0}}f(x)\) hold, then x̄ is an optimal solution of [P].

(i) Suppose to the contrary that \(\bar{x}\notin G_{0}\). Then there exist \(\delta_{0}>0\), \(i_{0}\in I\), and an infinite subset \(J_{1}\subset J\) such that

$$g_{i_{0}} \bigl(x_{j}^{*} \bigr)\geq\delta_{0}> \epsilon_{j} $$

for any \(j\in J_{1}\).

By Step 2 of Algorithm 3.1 and (2.3), we see that

$$f \bigl(x_{j}^{*} \bigr)+\frac{1}{2}q_{j} \delta_{0}^{k}\leq f \bigl(x_{j}^{*} \bigr)+q_{j} \biggl( \bigl(g_{i_{0}} \bigl(x_{j}^{*} \bigr) \bigr)^{k}-\frac{1}{2}\epsilon_{j}^{k} \biggr) \leq \varphi_{q_{j},\epsilon_{j},k} \bigl(x_{j}^{*} \bigr)\leq \varphi_{q_{j},\epsilon _{j},k}(x)=f(x) $$

for any \(x\in G_{0}\). Since \(f(x_{j}^{*})\rightarrow f(\bar{x})\) for \(j\in J_{1}\) while \(q_{j}\rightarrow+\infty\), the left-hand side tends to +∞, a contradiction. Hence \(\bar{x}\in G_{0}\).

(ii) For any \(x\in G_{0}\), we have

$$f \bigl(x_{j}^{*} \bigr)\leq\varphi_{q_{j},\epsilon_{j},k} \bigl(x_{j}^{*} \bigr)\leq\varphi _{q_{j},\epsilon_{j},k}(x)=f(x), $$

Letting \(j\rightarrow\infty\) with \(j\in J\), we obtain \(f(\bar{x})\leq\inf_{x\in G_{0}}f(x)\).

This completes the proof. □

4 Numerical examples

In this section, we solve three numerical examples to show the applicability of Algorithm 3.1.

Example 4.1

(Example 4.2 in [17], Example 3.3 in [18] and Example 4.1 in [20])

$$\begin{aligned}& \min f(x)=x_{1}^{2}+x_{2}^{2}- \cos(17x_{1})-\cos(17x_{2})+3, \\& \mathrm{s.t.}\ g_{1}(x)= (x_{1}-2)^{2}+x_{2}^{2}-1.6^{2} \leq0, \\& g_{2}(x)= x_{1}^{2}+(x_{2}-3)^{2}-2.7^{2} \leq0, \\& 0 \leq x_{1} \leq2, \\& 0 \leq x_{2} \leq2. \end{aligned}$$

With \(k=1/3, x_{0}=(0.5, 1.5), q_{0}=1.0, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 1.

Table 1 Numerical results for Example 4.1 with \(\pmb{k=1/3}\)

With \(k=2/3, x_{0}=(0.5, 1.5), q_{0}=10, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 2.

Table 2 Numerical results for Example 4.1 with \(\pmb{k=2/3}\)

When \(k=1/3\) and \(k=2/3\), the numerical results are given in Table 1 and Table 2, respectively; the approximate solutions obtained in the two cases are similar. In [17], the solution given for Example 4.2 is \((0.7254,0.3993)\) with objective function value 1.8375 when \(k=1/3\). In [18], the solution given for Example 3.3 is \((0.7239410,0.3988712)\) with objective function value 1.837919 when \(k=1/3\), and \((0.7276356,0.3998984)\) with objective function value 1.838380 when \(k=2/3\). In [20], the solution given for Example 4.1 is \((0.7255,0.3993)\) with objective function value 1.8376. Our numerical results are similar to those of [17] and [20], and better than those of [18] in this example.
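
As a usage illustration under the same assumptions, Example 4.1 could be passed to the algorithm_3_1 sketch above roughly as follows; the box constraints \(0\leq x_{1},x_{2}\leq2\) are treated here as four additional inequality constraints, which is only one possible way to handle them.

    import numpy as np

    f = lambda x: x[0]**2 + x[1]**2 - np.cos(17*x[0]) - np.cos(17*x[1]) + 3
    gs = [lambda x: (x[0] - 2)**2 + x[1]**2 - 1.6**2,
          lambda x: x[0]**2 + (x[1] - 3)**2 - 2.7**2,
          lambda x: -x[0], lambda x: x[0] - 2.0,
          lambda x: -x[1], lambda x: x[1] - 2.0]
    x_approx = algorithm_3_1(f, gs, x0=(0.5, 1.5), k=1/3, q0=1.0,
                             eps0=0.1, eta=0.1, sigma=2.0, tol=1e-15)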

Example 4.2

(Test Problem 6 in Section 4.6 in [21])

$$\begin{aligned}& \min f(x)=-x-y, \\& \mathrm{s.t.}\ g_{1}(x,y)= y-2x^{4}+8x^{3}-8x^{2}-2 \leq0, \\& g_{2}(x,y)= y-4x^{4}+32x^{3}-88x^{2}+96x-36 \leq0, \\& 0\leq x \leq3, \\& 0\leq y \leq4. \end{aligned}$$

With \(k=2/3, x_{0}=(2.5,0), q_{0}=5, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 3.

Table 3 Numerical results for Example 4.2 with \(\pmb{x_{0}=(2.5,0)}\)

With \(k=2/3, x_{0}=(0,4), q_{0}=5, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 4.

Table 4 Numerical results for Example 4.2 with \(\pmb{x_{0}=(0,4)}\)

With \(k=2/3, x_{0}=(1.0,1.5), q_{0}=5, \epsilon_{0}=0.1, \eta=0.1, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 5.

Table 5 Numerical results for Example 4.2 with \(\pmb{x_{0}=(1.0,1.5)}\)

With the different starting points \(x_{0}=(2.5,0)\), \(x_{0}=(0,4)\), and \(x_{0}=(1.0,1.5)\), the numerical results are given in Table 3, Table 4, and Table 5, respectively. One can see that the numerical results in Tables 3-5 are similar, which suggests that, in this example, Algorithm 3.1 is not sensitive to the choice of starting point. In [21], the solution given for this problem is \((2.3295, 3.1783)\) with objective function value −5.5079. Our numerical results are similar to the result of [21] in this example.

For the jth iteration of the algorithm, we define a constraint error \(e_{j}\) by

$$e_{j}=\sum_{i=1}^{m} \max \bigl\{ g_{i} \bigl(x_{j}^{*} \bigr),0 \bigr\} . $$

It is clear that \(x_{j}^{*}\) is ϵ-feasible to [P] when \(e_{j}<\epsilon\).
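
In code, this quantity can be computed with a one-line helper consistent with the sketches above (the name is ours):

    def constraint_error(x, gs):
        """Constraint error e_j = sum_i max{g_i(x), 0} of an iterate x."""
        return sum(max(g(x), 0.0) for g in gs)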

Example 4.3

(Example 4.5 in [15] and Example 3.4 in [18])

$$\begin{aligned}& \min f(x)=10x_{2}+2x_{3}+x_{4}+3x_{5}+4x_{6}, \\& \mathrm{s.t.}\ g_{1}(x)= x_{1}+x_{2}-10 = 0, \\& g_{2}(x)= -x_{1}+x_{3}+ x_{4}+x_{5} = 0, \\& g_{3}(x)= -x_{2}-x_{3}+x_{5}+x_{6}= 0, \\& g_{4}(x)= 10x_{1}-2x_{3}+3x_{4}-2x_{5}-16 \leq0, \\& g_{5}(x)= x_{1}+4x_{3}+x_{5}-10 \leq0, \\& 0 \leq x_{1} \leq12, \\& 0 \leq x_{2} \leq18, \\& 0 \leq x_{3} \leq5, \\& 0 \leq x_{4} \leq12, \\& 0 \leq x_{5} \leq1, \\& 0 \leq x_{6} \leq16. \end{aligned}$$

With \(k=2/3, x_{0}=(2,2,1,2,1,2), q_{0}=100, \epsilon_{0}=0.5, \eta=0.01, \sigma=2\), and \(\epsilon=10^{-15}\), the results obtained by Algorithm 3.1 are shown in Table 6.

Table 6 Numerical results for Example 4.3

It is clear from Table 6 that the obtained approximately optimal solution is \(x^{*}=(1.620468, 8.379530, 0.013229, 0.607246, 0.999994, 7.392764)\) with corresponding objective function value 117.000044. In [15], the obtained approximately optimal solution is \(x^{*}=(1.805996,8.194004,0.497669,0.308327,1.000000,7.691673 )\) with corresponding objective function value 117.010399. In [18], the solution given for Example 3.4 is \((1.635022, 8.364978, 0.060010, 0.5812775, 0.9937346, 7.431253 )\) with objective function value 117.0573 when \(k=2/3\). Our numerical results are better than those of [15] and [18] in this example.

5 Concluding remarks

In this paper, we propose a method for smoothing the nonsmooth lower-order exact penalty function for inequality constrained optimization. We prove that the algorithm based on the smoothed penalty functions is convergent under mild conditions.

According to the numerical results given in Section 4, we can obtain an approximately optimal solution of the original problem [P] by Algorithm 3.1.

Finally, we give some advice on how to choose the parameters in the algorithm. Usually, the initial value of \(q_{0}\) may be 1, 5, 10, 100, 1000, or 10000, and \(\sigma=2, 5, 10 \mbox{ or } 100\). The initial value of \(\epsilon_{0}\) may be 1, 0.5, 0.1, or 0.01, and \(\eta=0.5, 0.1, 0.05\mbox{ or }0.01\).

References

  1. Zangwill, WI: Non-linear programming via penalty functions. Manag. Sci. 13(5), 344-358 (1967)

  2. Zang, I: A smoothing-out technique for min-max optimization. Math. Program. 19(1), 61-77 (1980)

  3. Ben-Tal, A, Teboulle, M: A smoothing technique for non-differentiable optimization problems. In: Lecture Notes in Mathematics, vol. 1405, pp. 1-11. Springer, Berlin (1989)

  4. Pinar, M, Zenios, S: On smoothing exact penalty functions for convex constrained optimization. SIAM J. Optim. 4, 486-511 (1994)

  5. Liu, BZ: On smoothing exact penalty functions for nonlinear constrained optimization problems. J. Appl. Math. Comput. 30, 259-270 (2009)

  6. Lian, SJ: Smoothing approximation to \(l_{1}\) exact penalty function for inequality constrained optimization. Appl. Math. Comput. 219(6), 3113-3121 (2012)

  7. Liu, BZ, Zhao, WL: A modified exact smooth penalty function for nonlinear constrained optimization. J. Inequal. Appl. 2012, 173 (2012)

  8. Xu, XS, Meng, ZQ, Huang, LG, Shen, R: A second-order smooth penalty function algorithm for constrained optimization problems. Comput. Optim. Appl. 55(1), 155-172 (2013)

  9. Jiang, M, Shen, R, Xu, XS, Meng, ZQ: Second-order smoothing objective penalty function for constrained optimization problems. Numer. Funct. Anal. Optim. 35(3), 294-309 (2014)

  10. Binh, NT: Smoothing approximation to \(l_{1}\) exact penalty function for constrained optimization problems. J. Appl. Math. Inform. 33(3-4), 387-399 (2015)

  11. Luo, ZQ, Pang, JS, Ralph, D: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)

  12. Rubinov, AM, Yang, XQ, Bagirov, AM: Penalty functions with a small penalty parameter. Optim. Methods Softw. 17(5), 931-964 (2002)

  13. Huang, XX, Yang, XQ: Convergence analysis of a class of nonlinear penalization methods for constrained optimization via first-order necessary optimality conditions. J. Optim. Theory Appl. 116(2), 311-332 (2003)

  14. Wu, ZY, Bai, FS, Yang, XQ, Zhang, LS: An exact lower order penalty function and its smoothing in nonlinear programming. Optimization 53(1), 51-68 (2004)

  15. Meng, ZQ, Dang, CY, Yang, XQ: On the smoothing of the square-root exact penalty function for inequality constrained optimization. Comput. Optim. Appl. 35, 375-398 (2006)

  16. Lian, SJ: Smoothing approximation to the square-order exact penalty functions for constrained optimization. J. Appl. Math. 2013, Article ID 568316 (2013)

  17. He, ZH, Bai, FS: A smoothing approximation to the lower order exact penalty function. Oper. Res. Trans. 14(2), 11-22 (2010)

  18. Lian, SJ: On the smoothing of the lower order exact penalty function for inequality constrained optimization. Oper. Res. Trans. 16(2), 51-64 (2012)

  19. Bazaraa, MS, Sherali, HD, Shetty, CM: Nonlinear Programming: Theory and Algorithms, 2nd edn. Wiley, New York (1993)

  20. Sun, XL, Li, D: Value-estimation function method for constrained global optimization. J. Optim. Theory Appl. 102(2), 385-409 (1999)

  21. Floudas, CA, Pardalos, PM: A Collection of Test Problems for Constrained Global Optimization Algorithms. Springer, Berlin (1990)


Acknowledgements

The authors wish to thank the anonymous referees for their endeavors and valuable comments. This work is supported by National Natural Science Foundation of China (71371107 and 61373027) and Natural Science Foundation of Shandong Province (ZR2013AM013).

Author information


Corresponding author

Correspondence to Shujun Lian.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

YD drafted the manuscript. SL helped to draft the manuscript and revised it. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Lian, S., Duan, Y. Smoothing of the lower-order exact penalty function for inequality constrained optimization. J Inequal Appl 2016, 185 (2016). https://doi.org/10.1186/s13660-016-1126-9
