
Smoothing approximation to the lower order exact penalty function for inequality constrained optimization

Abstract

For the inequality constrained optimization problem, we first propose a new smoothing method for the lower order exact penalty function, and then show that an approximate global solution of the original problem can be obtained by finding a global solution of the smoothed lower order exact penalty problem. We propose an algorithm based on the smoothed lower order exact penalty function. The global convergence of the algorithm is proved under some mild conditions. Some numerical experiments show the efficiency of the proposed method.

1 Introduction

Consider the following inequality constrained optimization problem:

$$ (P)\quad \min_{x\in R^{n}} f_{0}(x) \quad \text{s.t. } f_{i}(x)\leq0,\quad i\in I=\{1,2,\ldots,m\}, $$

where \(f_{i}:R^{n}\rightarrow R\), \(i=0,1,\ldots,m\), are twice continuously differentiable functions. Throughout this paper, we use \(X_{0}=\{x\in R^{n}| f_{i}(x)\leq0, i\in I\}\) to denote the feasible solution set.

This problem is widely applied in transportation, economics, mathematical programming, regional science, etc. [1–3], and related problems, such as variational inequalities, equilibrium problems, and the minimization of convex functions, have received extensive attention (see, e.g., [4–15]).

To solve problem (P), penalty function methods have been studied in many works (see, e.g., [16–24]). Zangwill [16] introduced the classical \(l_{1}\) exact penalty function

$$ F_{1}(x,q)=f_{0}(x)+q\sum_{i=1}^{m} \max\bigl\{ f_{i}(x),0\bigr\} , $$
(1.1)

where \(q>0\) is a penalty parameter. However, \(F_{1}(x,q)\) is not a smooth function. The corresponding penalty optimization problem is as follows:

$$ (P_{1})\quad \min_{x\in R^{n}} F_{1}(x,q). $$

The non-smoothness of the function restricts the application of a gradient-type or Newton-type algorithm to solving problem (\(P_{1}\)). In order to avoid this shortcoming, the smoothing of the \(l_{1}\) exact penalty function is proposed in [17, 18].

In addition, to overcome the non-smoothness of the function, the following smooth penalty function is proposed:

$$ F_{2}(x,q)=f_{0}(x)+q\sum_{i=1}^{m} \max\bigl\{ f_{i}(x),0\bigr\} ^{2}. $$
(1.2)

However, this penalty function is not exact.

Recently, Wu et al. [20] proposed the following low order penalty function:

$$ \varphi_{q,k}(x)=f_{0}(x)+q\sum_{i=1}^{m} \bigl(\max\bigl\{ f_{i}(x),0\bigr\} \bigr)^{k},\quad k \in(0,1), $$
(1.3)

and proved that the low order penalty function is exact under mild conditions. However, this penalty function is also non-smooth. When \(k=1\), \(\varphi _{q,k}(x)\) reduces to the classical \(l_{1}\) exact penalty function. The least exact penalty parameter corresponding to \(k\in(0,1)\) is much smaller than that of the \(l_{1}\) exact penalty function, which avoids the defect of an excessively large penalty parameter in the algorithm. The smoothing of the lower order penalty function (1.3) is studied in [20] and [21] only for \(k=\frac{1}{2}\). In [24], a smoothing method for the low order penalty function (1.3) is given. We hope to study a new smoothing method for the low order penalty function (1.3) and compare it with the existing methods. With a different segmentation method, we will give a new piecewise smooth function and propose a new method to smooth the lower order penalty function (1.3) with \(k\in[\frac{1}{2},1)\) in this paper.

The remainder of this paper is organized as follows. In Sect. 2, a new smoothing function is proposed. The error estimates are obtained among the optimal objective function values of the smoothed penalty problem, the non-smooth penalty problem, and the original problem. In Sect. 3, the corresponding algorithm is proposed to obtain an approximate solution to (P). The global convergence of the algorithm is proved. In Sect. 4, some numerical experiments are given to illustrate the efficiency of the algorithm. In Sect. 5, some conclusions are presented.

2 A smoothing penalty function

For the lower order penalty problem

$$ (\mathit{LP})\quad \min_{x\in R^{n}} \varphi_{q,k}(x), $$

in order to establish the global exact penalization, the following assumption is given in [20]. We will consider the smoothing method under the following assumption.

Assumption 2.1

  1. \(f_{0}(x)\) satisfies the coercive condition

     $$\lim_{\|x\|\rightarrow+\infty}f_{0}(x)=+\infty. $$

  2. The optimal solution set \(G(P)\) of problem (P) is a finite set.

Under Assumption 2.1, problem (P) is equivalent to the following problem:

$$ \min_{x\in X} f_{0}(x) \quad \text{s.t. } f_{i}(x)\leq0,\quad i\in I, $$

where X is a box with \(\operatorname{int}(X)\neq\emptyset\).

For any \(k\in(0,1)\), penalty problem (LP) is equivalent to the following penalty problem:

$$ (\mathit{LP}^{\prime})\quad \min_{x\in X} \varphi_{q,k}(x). $$

Now we consider a new smoothing technique to the lower order penalty function (1.3).

Let \(p_{k}(t)=(\max\{t,0\})^{k}\), then

$$ \varphi_{q,k}(x)=f_{0}(x)+q\sum_{i=1}^{m}p_{k} \bigl(f_{i}(x)\bigr). $$
(2.1)

Define a function \(p_{k,\epsilon}(t)\) (\(\epsilon> 0\)) by

$$ p_{k,\epsilon}(t)=
\begin{cases}
0, & \text{if } t\leq-\epsilon^{k},\\
\frac{k}{2}\epsilon^{-1}(t+\epsilon^{k})^{2}, & \text{if } -\epsilon^{k}< t< 0,\\
(t+\epsilon)^{k}+\frac{k}{2}\epsilon^{2k-1}-\epsilon^{k}, & \text{if } t\geq0,
\end{cases} $$
(2.2)

where \(\frac{1}{2}\leq k<1\). It is easy to see that \(p_{k,\epsilon }(t)\) is continuously differentiable and

$$\lim_{\epsilon\rightarrow0^{+}}p_{k,\epsilon}(t)= p_{k}(t). $$
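Indeed, the continuous differentiability can be verified directly from (2.2): differentiating each branch gives

$$ p^{\prime}_{k,\epsilon}(t)=
\begin{cases}
0, & \text{if } t\leq-\epsilon^{k},\\
k\epsilon^{-1}(t+\epsilon^{k}), & \text{if } -\epsilon^{k}< t< 0,\\
k(t+\epsilon)^{k-1}, & \text{if } t\geq0,
\end{cases} $$

and both the values and the one-sided derivatives of the adjacent branches agree at the breakpoints: at \(t=-\epsilon^{k}\) the middle branch gives the value 0 and the derivative 0, while at \(t=0\) the middle and the last branches both give the value \(\frac{k}{2}\epsilon^{2k-1}\) and the derivative \(k\epsilon^{k-1}\).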

The following figure shows the process of function \(p_{k,\epsilon}(t)\) approaching function \(p_{k}(t)\).

Figure 1 shows the behavior of \(p_{\frac{3}{4},0.01}(t)\) (represented by the dash and dot line), \(p_{\frac{3}{4},0.001}(t)\) (represented by the dot line), \(p_{\frac{3}{4},0.0001}(t)\) (represented by the dash line), and \(p_{\frac{3}{4}}(t)\) (represented by the solid line).

Figure 1: The behavior of \(p_{k,\epsilon}(t)\) and \(p_{k}(t)\)
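The comparison in Figure 1 is easy to reproduce numerically. Below is a minimal Python sketch (our own illustration; the function names `p_k` and `p_k_eps` are not from the paper) that evaluates \(p_{k}\) and \(p_{k,\epsilon}\) on a grid and reports the largest gap for several values of ϵ:

```python
import numpy as np

def p_k(t, k):
    """Lower order penalty term p_k(t) = max(t, 0)**k."""
    return np.maximum(t, 0.0) ** k

def p_k_eps(t, k, eps):
    """Smoothing approximation p_{k,eps}(t) defined in (2.2), 1/2 <= k < 1."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)                    # branch t <= -eps**k
    mid = (t > -eps ** k) & (t < 0.0)         # quadratic transition branch
    pos = t >= 0.0                            # shifted power branch
    out[mid] = 0.5 * k / eps * (t[mid] + eps ** k) ** 2
    out[pos] = (t[pos] + eps) ** k + 0.5 * k * eps ** (2 * k - 1) - eps ** k
    return out

k = 0.75
grid = np.linspace(-0.5, 1.0, 301)
for eps in (1e-2, 1e-3, 1e-4):
    gap = np.max(np.abs(p_k_eps(grid, k, eps) - p_k(grid, k)))
    print(f"eps = {eps:g}:  max |p_k_eps - p_k| on the grid = {gap:.3e}")
```

As ϵ decreases, the printed gap shrinks, in agreement with the limit above and with Figure 1.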

Based on this, we consider the following continuously differentiable penalty function:

$$ \varphi_{q,k,\epsilon}(x)=f_{0}(x)+q\sum_{i=1}^{m}p_{k,\epsilon} \bigl(f_{i}(x)\bigr), $$
(2.3)

Note that \(\lim_{\epsilon\rightarrow0^{+}} \varphi_{q,k,\epsilon}(x)=\varphi_{q,k}(x)\).

The corresponding optimization problem to \(\varphi_{q,k,\epsilon}(x)\) is as follows:

$$ (\mathit{SP})\quad \min_{x\in X} \varphi_{q,k,\epsilon}(x). $$

For problems (P), (\(\mathit {LP}^{\prime}\)), and (SP), we have the following conclusion.

Lemma 2.1

For any \(x\in X\), \(\epsilon>0\), and \(q >0\), it holds that

$$-\frac{k}{2}\epsilon^{2k-1}mq\leq\varphi_{q,k}(x)-\varphi _{q,k,\epsilon}(x)< \epsilon^{k}mq,\quad k\in\biggl[\frac{1}{2},1\biggr). $$

Proof

For all \(i\in I\), it holds that

$$p_{k}\bigl(f_{i}(x)\bigr)-p_{k,\epsilon}\bigl(f_{i}(x)\bigr)=
\begin{cases}
0, & \text{if } f_{i}(x)\leq-\epsilon^{k},\\
-\frac{k}{2}\epsilon^{-1}(f_{i}(x)+\epsilon^{k})^{2}, & \text{if } -\epsilon^{k}< f_{i}(x)< 0,\\
f_{i}(x)^{k}-(f_{i}(x)+\epsilon)^{k}-\frac{k}{2}\epsilon^{2k-1}+\epsilon^{k}, & \text{if } f_{i}(x)\geq0.
\end{cases}$$

Set

$$F(t)=t^{k}-(t+\epsilon)^{k},\quad t\geq0. $$

Then

$$F^{\prime}(t)=k\bigl[t^{k-1}-(t+\epsilon)^{k-1}\bigr]. $$

Since \(k\in[\frac{1}{2},1)\), we have \(F^{\prime}(t)>0\) for \(t>0\), so \(F(t)\) is monotonically increasing in t. Noting that \(F(0)=-\epsilon^{k}\) and \(F(t)\rightarrow0\) as \(t\rightarrow+\infty\), one has

$$-\epsilon^{k}\leq f_{i}(x)^{k}-\bigl(f_{i}(x)+\epsilon\bigr)^{k}\leq0,\quad \text{if } f_{i}(x)\geq0. $$

It follows that

$$-\frac{k}{2}\epsilon^{2k-1}\leq p_{k} \bigl(f_{i}(x)\bigr)-p_{k,\epsilon }\bigl(f_{i}(x)\bigr) \leq\epsilon^{k}, \quad \text{if }f_{i}(x)\geq0. $$

When \(-\epsilon^{k}< f_{i}(x)<0\), one has

$$-\frac{k}{2}\epsilon^{2k-1}< p_{k}\bigl(f_{i}(x) \bigr)-p_{k,\epsilon}\bigl(f_{i}(x)\bigr)< 0. $$

So,

$$ -\frac{k}{2}\epsilon^{2k-1}\leq p_{k} \bigl(f_{i}(x)\bigr)-p_{k,\epsilon}\bigl(f_{i}(x)\bigr)< \epsilon^{k}, \quad\forall i\in I. $$
(2.4)

It follows from (2.1), (2.3), and (2.4) that

$$-\frac{k}{2}\epsilon^{2k-1}mq\leq\varphi_{q,k}(x)-\varphi _{q,k,\epsilon}(x)< \epsilon^{k}mq $$

by the fact that \(q >0\). □
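The bound in Lemma 2.1 is easy to check numerically. The following Python sketch uses a small hypothetical test problem of our own choosing (one objective, two constraints, random sample points), not a problem from Sect. 4, and verifies the inequality at each sampled point:

```python
import numpy as np

def p_k(t, k):
    """p_k(t) = max(t, 0)**k, the lower order penalty term."""
    return max(t, 0.0) ** k

def p_k_eps(t, k, eps):
    """Smoothing approximation (2.2)."""
    if t <= -eps ** k:
        return 0.0
    if t < 0.0:
        return 0.5 * k / eps * (t + eps ** k) ** 2
    return (t + eps) ** k + 0.5 * k * eps ** (2 * k - 1) - eps ** k

# Hypothetical test data (ours, not from the paper).
f0 = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
cons = [lambda x: x[0] ** 2 + x[1] - 1.0,
        lambda x: -x[0] + x[1] ** 2 - 2.0]

k, eps, q, m = 0.75, 1e-3, 10.0, len(cons)
rng = np.random.default_rng(0)
for x in rng.uniform(-2.0, 2.0, size=(5, 2)):
    phi     = f0(x) + q * sum(p_k(fi(x), k) for fi in cons)       # phi_{q,k}
    phi_eps = f0(x) + q * sum(p_k_eps(fi(x), k, eps) for fi in cons)
    diff = phi - phi_eps
    lower = -0.5 * k * eps ** (2 * k - 1) * m * q
    upper = eps ** k * m * q
    assert lower <= diff < upper            # the inequality of Lemma 2.1
    print(f"{lower:+.2e} <= {diff:+.2e} < {upper:+.2e}")
```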

Theorem 2.1

For a positive sequence \(\{\epsilon_{j}\}\) which converges to 0 as \(j\to\infty\), assume that \(x^{j}\) is an optimal solution to \(\min_{x\in X} \varphi_{q,k,\epsilon_{j}}(x)\) for some given \(q>0\), \(k\in[\frac{1}{2},1)\). If \(\bar{x}\) is an accumulation point of the sequence \(\{x^{j}\}\), then \(\bar{x}\) is an optimal solution to \(\min_{x\in X} \varphi_{q,k}(x)\).

Proof

It follows from Lemma 2.1 that

$$ -\frac{k}{2}\epsilon_{j}^{2k-1}mq\leq \varphi_{q,k}(x)-\varphi _{q,k,\epsilon_{j}}(x)< \epsilon_{j}^{k}mq,\quad \forall x\in X. $$
(2.5)

Since \(x^{j}\) is an optimal solution to \(\min_{x\in X} \varphi_{q,k,\epsilon_{j}}(x)\), one has

$$ \varphi_{q,k,\epsilon_{j}}\bigl(x^{j}\bigr)\leq\varphi_{q,k,\epsilon_{j}}(x),\quad \forall x \in X. $$
(2.6)

It follows from (2.5) and (2.6) that

$$\begin{aligned}\varphi_{q,k}\bigl(x^{j}\bigr)&< \varphi_{q,k,\epsilon_{j}}\bigl(x^{j}\bigr)+ \epsilon_{j}^{k}mq \\ &\leq \varphi_{q,k,\epsilon_{j}}(x)+\epsilon_{j}^{k}mq \\ &\leq \varphi_{q,k}(x)+ \epsilon_{j}^{k}mq+ \frac{k}{2}\epsilon_{j}^{2k-1}mq. \end{aligned} $$

Letting \(j\rightarrow\infty\) along the subsequence converging to \(\bar{x}\), and using the continuity of \(\varphi_{q,k}\), yields

$$\varphi_{q,k}(\bar{x})\leq\varphi_{q,k}(x). $$

Thus, \(\bar{x}\) is an optimal solution to \(\min_{x\in X} \varphi _{q,k}(x)\). □

Theorem 2.2

Let \(x_{q,k}^{*}\in X\) be an optimal solution of problem (\(\mathit {LP}^{\prime}\)), and \(\bar{x}_{q,k,\epsilon}\in X\) be an optimal solution of problem (SP) for some \(q>0\), \(k\in[\frac{1}{2},1)\), and \(\epsilon>0\). Then

$$-\frac{k}{2}\epsilon^{2k-1}mq\leq \varphi_{q,k} \bigl(x_{q,k}^{*}\bigr)-\varphi_{q,k,\epsilon}(\bar {x}_{q,k,\epsilon})< \epsilon^{k}mq. $$

Proof

Under the stated assumptions, it holds that \(\varphi_{q,k}(x_{q,k}^{*})\leq\varphi_{q,k}(x)\) and \(\varphi _{q,k,\epsilon}(\bar{x}_{q,k,\epsilon})\leq\varphi_{q,k,\epsilon }(x)\) for all \(x\in X\).

Therefore, by Lemma 2.1, one has

$$-\frac{k}{2}\epsilon^{2k-1}mq\leq\varphi_{q,k} \bigl(x_{q,k}^{*}\bigr)-\varphi _{q,k,\epsilon}\bigl(x_{q,k}^{*} \bigr) \leq\varphi_{q,k}\bigl(x_{q,k}^{*}\bigr)- \varphi_{q,k,\epsilon}(\bar {x}_{q,k,\epsilon}) $$

and

$$\varphi_{q,k}\bigl(x_{q,k}^{*}\bigr)-\varphi_{q,k,\epsilon}(\bar {x}_{q,k,\epsilon}) \leq\varphi_{q,k}(\bar{x}_{q,k,\epsilon})- \varphi_{q,k,\epsilon }(\bar{x}_{q,k,\epsilon}) < \epsilon^{k}mq. $$

This completes the proof. □

Corollary 2.1

Suppose that Assumption 2.1 holds and, for any \(x^{*}\in G(P)\), there exists \(\lambda^{*}\in R_{+}^{m}\) such that the pair \((x^{*},\lambda ^{*})\) satisfies the second order sufficient condition given in [20]. Let \(x^{*}\in X \) be an optimal solution of problem (P), and let \(\bar{x}_{q,k,\epsilon}\in X \) be an optimal solution of problem (SP) for some \(q>0\), \(k\in[\frac{1}{2},1)\), and \(\epsilon>0\). Then there exists \(q^{*}>0\) such that, for any \(q>q^{*}\),

$$-\frac{k}{2}\epsilon^{2k-1}mq\leq f_{0}\bigl(x^{*}\bigr)- \varphi_{q,k,\epsilon }(\bar{x}_{q,k,\epsilon})< \epsilon^{k}mq. $$

Proof

It follows from Corollary 2.3 in [20] that \(x^{*}\in X\) is an optimal solution of problem (\(\mathit {LP}^{\prime}\)). By Theorem 2.2, one has

$$-\frac{k}{2}\epsilon^{2k-1}mq\leq \varphi_{q,k}\bigl(x^{*} \bigr)-\varphi_{q,k,\epsilon}(\bar{x}_{q,k,\epsilon})< \epsilon^{k}mq. $$

Since \(\sum_{i=1}^{m}p_{k}(f_{i}(x^{*}))=0\), it holds that

$$\varphi_{q,k}\bigl(x^{*}\bigr)=f_{0}\bigl(x^{*}\bigr)+q\sum _{i=1}^{m}p_{k} \bigl(f_{i}\bigl(x^{*}\bigr)\bigr)=f_{0}\bigl(x^{*}\bigr). $$

This completes the proof. □

Definition 1

For \(\epsilon>0\), if \(x\in X \) is such that

$$f_{i}(x)\leq\epsilon, \quad i=1,2,\ldots,m, $$

then \(x\in X \) is called an ϵ-feasible solution of problem (P).

Theorem 2.3

Let \(x_{q,k}^{*}\in X \) be an optimal solution of problem (\(\mathit {LP}^{\prime}\)), and let \(\bar{x}_{q,k,\epsilon}\in X \) be an optimal solution of problem (SP) for some \(q>0\), \(k\in[\frac{1}{2},1)\), and \(\epsilon>0\). If \(x_{q,k}^{*}\) is a feasible solution of problem (P) and \(\bar{x}_{q,k,\epsilon}\) is an ϵ-feasible solution of problem (P), then

$$-\frac{k}{2}\epsilon^{2k-1}mq \leq f_{0} \bigl(x_{q,k}^{*}\bigr)-f_{0}(\bar {x}_{q,k,\epsilon})< \biggl(2^{k}\epsilon^{k}+\frac{k}{2} \epsilon^{2k-1}\biggr)mq. $$

Proof

By (2.1), (2.3), and Theorem 2.2, one has

$$\begin{aligned}-\frac{k}{2}\epsilon^{2k-1}mq&\leq \varphi _{q,k}\bigl(x_{q,k}^{*}\bigr)-\varphi_{q,k,\epsilon}( \bar{x}_{q,k,\epsilon}) \\ &=f_{0}\bigl(x_{q,k}^{*}\bigr)+q\sum _{i=1}^{m}p_{k}\bigl(f_{i} \bigl(x_{q,k}^{*}\bigr)\bigr)-\Biggl(f_{0}(\bar {x}_{q,k,\epsilon})+q\sum_{i=1}^{m}p_{k,\epsilon} \bigl(f_{i}(\bar {x}_{q,k,\epsilon})\bigr)\Biggr) \\ &< \epsilon^{k}mq. \end{aligned} $$

Since \(\sum_{i=1}^{m}p_{k}(f_{i}(x_{q,k}^{*}))=0\), it holds that

$$\begin{aligned}[b] -\frac{k}{2}\epsilon^{2k-1}mq+q\sum_{i=1}^{m}p_{k,\epsilon} \bigl(f_{i}(\bar{x}_{q,k,\epsilon})\bigr) &\leq f_{0}\bigl(x_{q,k}^{*}\bigr)-f_{0}( \bar{x}_{q,k,\epsilon}) \\ &< \epsilon^{k}mq+q\sum_{i=1}^{m}p_{k,\epsilon} \bigl(f_{i}(\bar {x}_{q,k,\epsilon})\bigr).\end{aligned} $$
(2.7)

Note that

$$f_{i}(\bar{x}_{q,k,\epsilon})\leq\epsilon,\quad i\in I. $$

Thus, it follows from (2.2) that

$$ 0\leq q\sum_{i=1}^{m}p_{k,\epsilon} \bigl(f_{i}(\bar {x}_{q,k,\epsilon})\bigr) \leq\biggl(2^{k} \epsilon^{k}+\frac{k}{2}\epsilon^{2k-1}-\epsilon ^{k}\biggr)mq. $$
(2.8)

By (2.7) and (2.8), one has

$$-\frac{k}{2}\epsilon^{2k-1}mq \leq f_{0} \bigl(x_{q,k}^{*}\bigr)-f_{0}(\bar {x}_{q,k,\epsilon})< \biggl(2^{k}\epsilon^{k}+\frac{k}{2} \epsilon^{2k-1}\biggr)mq . $$

 □

Theorems 2.1 and 2.2 show that an optimal solution of (SP) is also an approximate optimal solution of (\(\mathit {LP}^{\prime}\)) when the error ϵ is sufficiently small. By Theorem 2.3, an optimal solution of (SP) is also an approximate optimal solution of (P) if it is an ϵ-feasible solution of (P).

3 A smoothing method

Based on the discussion in the last section, we can design an algorithm to obtain an approximate optimal solution of (P) by solving (SP).

Algorithm 3.1

  Step 1. Take \(x^{0}\), \(\epsilon_{0}>0\), \(0< a<1\), \(q_{0}>0\), \(b>1\), \(\epsilon>0\), and \(k\in[\frac{1}{2},1)\); let \(j=0\) and go to Step 2.

  Step 2. Solve \(\min_{x\in R^{n}} \varphi_{q_{j},k,\epsilon_{j}}(x)\) starting from \(x^{j}\). Let \(x^{j+1}\) be the optimal solution (\(x^{j+1}\) can be obtained by a quasi-Newton method).

  Step 3. Let \(\epsilon_{j+1}=a\epsilon_{j}\), \(q_{j+1}=b q_{j}\), and \(j=j+1\); then go to Step 2.

Remark

Since \(0< a<1\) and \(b>1\), if a and b are chosen such that \(a^{2k-1}b<1\), then, as \(j\rightarrow+\infty\), the sequence \(\{\epsilon_{j}\}\) decreases to 0, the sequence \(\{q_{j}\}\) increases to +∞, and \(\{\epsilon_{j}^{2k-1}q_{j}\}\) decreases to 0.
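For illustration only, here is a minimal Python sketch of Algorithm 3.1 (our own, not the implementation used for the experiments in Sect. 4). It uses `scipy.optimize.minimize` with BFGS as the quasi-Newton solver of Step 2; since the steps above give no explicit stopping test, the sketch simply stops once \(\epsilon_{j}\) falls below the tolerance ϵ, and that stopping rule is our assumption.

```python
import numpy as np
from scipy.optimize import minimize

def p_k_eps(t, k, eps):
    """Smoothing function p_{k,eps}(t) from (2.2)."""
    if t <= -eps ** k:
        return 0.0
    if t < 0.0:
        return 0.5 * k / eps * (t + eps ** k) ** 2
    return (t + eps) ** k + 0.5 * k * eps ** (2 * k - 1) - eps ** k

def smoothed_penalty_method(f0, cons, x0, k=0.75, eps0=0.01, a=0.01,
                            q0=1.0, b=2.0, tol=1e-16, max_outer=30):
    """Sketch of Algorithm 3.1.

    f0   -- objective f_0
    cons -- list of constraint functions f_i, with f_i(x) <= 0 required
    Choose a, b with a**(2k-1) * b < 1 (cf. the Remark above).
    """
    x, eps, q = np.asarray(x0, dtype=float), eps0, q0
    for _ in range(max_outer):
        # Step 2: unconstrained quasi-Newton solve of min phi_{q_j,k,eps_j}
        phi = lambda y: f0(y) + q * sum(p_k_eps(fi(y), k, eps) for fi in cons)
        x = minimize(phi, x, method="BFGS").x
        if eps < tol:                 # assumed stopping rule (not stated in Step 3)
            break
        # Step 3: shrink eps, enlarge q
        eps, q = a * eps, b * q
    return x
```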

Under some mild conditions, the following conclusion shows the global convergence of Algorithm 3.1.

Theorem 3.1

Suppose that Assumption 2.1 holds, and for any \(\epsilon\in(0,\epsilon_{0}]\), \(q\in[q_{0},+\infty)\), the solution set of \(\min_{x\in R^{n}} \varphi_{q,k,\epsilon}(x)\) is nonempty. If \(\{x^{j+1}\}\) is the sequence generated by Algorithm 3.1 satisfying \(a^{2k-1}b<1\), and the sequence \(\{\varphi_{q_{j},k,\epsilon_{j}}(x^{j+1})\}\) is bounded, then

  1. \(\{x^{j+1}\}\) is bounded;

  2. any limit point of \(\{x^{j+1}\}\) is an optimal solution of (P).

Proof

(1) It follows from (2.3) that

$$ \varphi_{q_{j},k,\epsilon _{j}}\bigl(x^{j+1}\bigr)=f_{0} \bigl(x^{j+1}\bigr)+q_{j}\sum_{i=1}^{m}p_{k,\epsilon _{j}} \bigl(f_{i}\bigl(x^{j+1}\bigr)\bigr),\quad j=0,1,2,\ldots. $$
(3.1)

By hypothesis, there exists some number L such that

$$ L>\varphi_{q_{j},k,\epsilon_{j}}\bigl(x^{j+1}\bigr),\quad j=0,1,2,\ldots. $$
(3.2)

For the sake of contradiction, suppose that \(\{x^{j+1}\}\) is unbounded. Without loss of generality, we assume that \(\|x^{j+1}\|\rightarrow\infty\) as \(j\rightarrow \infty\).

By (2.2), (3.1), and (3.2), one has

$$L>f_{0}\bigl(x^{j+1}\bigr), \quad j=0,1,2,\ldots, $$

which, together with \(\|x^{j+1}\|\rightarrow\infty\) as \(j\rightarrow\infty\), contradicts the coercive condition in Assumption 2.1(1). Hence \(\{x^{j+1}\}\) is bounded.

(2) Without loss of generality, we assume \(x^{j+1}\rightarrow x^{*}\) as \(j\rightarrow\infty\).

To prove that \(x^{*}\) is an optimal solution of (P), it suffices to show that \(x^{*}\in X_{0}\) and \(f_{0}(x^{*})\leq f_{0}(x)\), \(\forall x\in X_{0}\).

To show that \(x^{*}\in X_{0}\), we argue by contradiction. Suppose that \(x^{*}\notin X_{0}\). Then there exist \(\delta_{0}>0\), \(i_{0}\in I\), and an infinite subset \(J \subset N\) such that

$$f_{i_{0}}\bigl(x^{j+1}\bigr)\geq\delta_{0}> \epsilon_{j},\quad \forall j\in J, $$

where N is the natural number set.

By Step 2, (2.2), and (2.3), for any \(x\in X_{0}\), one has

$$\begin{aligned}f_{0}\bigl(x^{j+1} \bigr)+q_{j}{\biggl((\delta_{0}+\epsilon _{j})^{k}+\frac{k}{2}\epsilon_{j}^{2k-1}- \epsilon_{j}^{k}\biggr)} &\leq\varphi_{q_{j},k,\epsilon_{j}} \bigl(x^{j+1}\bigr) \\ &\leq\varphi_{q_{j},k,\epsilon_{j}}(x) \\ &\leq f_{0}(x)+m\frac{k}{2}\epsilon_{j}^{2k-1}q_{j}. \end{aligned} $$

It follows that

$$f_{0}\bigl(x^{j+1}\bigr)+q_{j}{\bigl(( \delta_{0}+\epsilon_{j})^{k}-\epsilon _{j}^{k}\bigr)}\leq f_{0}(x)+(m-1) \frac{k}{2}\epsilon_{j}^{2k-1}q_{j},\quad \forall x \in X_{0}, $$

Since \(q_{j}\rightarrow+\infty\), \(\epsilon_{j}\rightarrow0\), and \(\epsilon_{j}^{2k-1}q_{j}\rightarrow0\) as \(j\rightarrow\infty\), the left-hand side tends to \(+\infty\) while the right-hand side tends to \(f_{0}(x)\), a contradiction. Hence \(x^{*}\in X_{0}\).

Next, we show that \(f_{0}(x^{*})\leq f_{0}(x)\), \(\forall x\in X_{0}\).

For this, by Step 2, (2.2), and (2.3), it holds that

$$f_{0}\bigl(x^{j+1}\bigr)\leq\varphi_{q_{j},k,\epsilon_{j}} \bigl(x^{j+1}\bigr) \leq\varphi_{q_{j},k,\epsilon_{j}}(x)\leq f_{0}(x)+m\frac {k}{2}\epsilon_{j}^{2k-1}q_{j},\quad \forall x\in X_{0}. $$

Letting \(j\rightarrow\infty\) yields that

$$f_{0}\bigl(x^{*}\bigr)\leq f_{0}(x). $$

Therefore, any limit point of \(\{x^{j+1}\}\) is an optimal solution of (P). □

4 Numerical examples

In this section, we will do some numerical experiments to show the efficiency of Algorithm 3.1.

Example 4.1

Consider the following problem, which has also been considered in [18, 22, 23]:

$$\begin{gathered}\min f_{0}(x)=x_{1}^{2}+x_{2}^{2}+2x_{3}^{2}+x_{4}^{2}-5x_{1}-5x_{2}-21x_{3}+7x_{4} \\ \quad\text{s.t. } f_{1}(x)=2x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+2x_{1}+x_{2}+x_{4}-5 \leq0, \\ \quad\phantom{\text{s.t. }} f_{2}(x)=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}+x_{1}-x_{2}+ x_{3}-x_{4}-8 \leq0, \\ \quad\phantom{\text{s.t. }} f_{3}(x)= x_{1}^{2}+2x_{2}^{2}+x_{3}^{2}+2x_{4}^{2}-x_{1}-x_{4}-10 \leq0. \end{gathered} $$

For this problem, we let \(k=\frac{3}{4}\), \(\epsilon_{0}=0.01\), \(a=0.01\), \(q_{0}=1\), \(b =2\), \(\epsilon=10^{-16}\). With different starting points, the numerical results of Algorithm 3.1 are shown in Tables 1, 2, and 3.

Table 1 Numerical results for Example 4.1 with \(x^{0}=(0,0,0,0)\)
Table 2 Numerical results for Example 4.1 with \(x^{0}=(2,0,3.5,0)\)
Table 3 Numerical results for Example 4.1 with \(x^{0}=(2,2,2,0.5)\)
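For reference, the problem data of Example 4.1 can be coded as follows. This is our own illustration, and it assumes that the `smoothed_penalty_method` sketch given after Algorithm 3.1 is in scope:

```python
import numpy as np

# Objective and constraints of Example 4.1.
f0 = lambda x: (x[0]**2 + x[1]**2 + 2*x[2]**2 + x[3]**2
                - 5*x[0] - 5*x[1] - 21*x[2] + 7*x[3])
cons = [
    lambda x: 2*x[0]**2 + x[1]**2 + x[2]**2 + 2*x[0] + x[1] + x[3] - 5,
    lambda x: x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 + x[0] - x[1] + x[2] - x[3] - 8,
    lambda x: x[0]**2 + 2*x[1]**2 + x[2]**2 + 2*x[3]**2 - x[0] - x[3] - 10,
]

x_sol = smoothed_penalty_method(f0, cons, x0=np.zeros(4), k=0.75,
                                eps0=0.01, a=0.01, q0=1.0, b=2.0, tol=1e-16)
print(x_sol, f0(x_sol))   # an objective value around -44.23 is expected
```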

From Tables 1, 2, and 3, we know that the obtained approximate optimal solutions are similar, which shows that the numerical result of Algorithm 3.1 does not depend on the selection of the starting point for this example. In [18], the objective function value \(f_{0}(x^{*})=-44.23040\) was obtained in the fourth iteration. From the numerical results given in [22], we know that the optimal solution is \(x^{*}=(0.1585001,0.8339736,2.014753,-0.959688 )\) with the objective function value \(f_{0}(x^{*})=-44.22965\). In [23], the objective function value obtained in the 25th iteration is \(f_{0}(x^{*})=-44\). Hence, the numerical results obtained by Algorithm 3.1 are better than the numerical results given in [18, 22, 23] for this example.

Example 4.2

Consider the following problem, which has also been considered in [17]:

$$\begin{gathered}\min f_{0}(x)=-2x_{1}-6x_{2}+x_{1}^{2}-2x_{1}x_{2}+2x_{2}^{2} \\ \quad\text{s.t. }f_{1}(x)=x_{1}+x_{2}-2 \leq0, \\ \quad\phantom{\text{s.t. }} f_{2}(x)=-x_{1}+2x_{2}-2 \leq0, \\ \quad\phantom{\text{s.t. }} x_{1}, x_{2}\geq0. \end{gathered} $$

For this problem, we let \(x^{0}=(0,0)\), \(\epsilon_{0}=0.001\), \(a=0.001\), \(q_{0}=2\), \(b =10\), \(\epsilon=10^{-16}\). With different k, the numerical results of Algorithm 3.1 are shown in Tables 4, 5, and 6.

Table 4 Numerical results for Example 4.2 with \(k=\frac{3}{4}\)
Table 5 Numerical results for Example 4.2 with \(k=\frac{3}{5}\)
Table 6 Numerical results for Example 4.2 with \(k=\frac{8}{9}\)

From Tables 4, 5, and 6, we can see that almost the same approximate optimal solutions are obtained for different k in this example. The obtained objective function value is close to the value \(f_{0}(x^{*})=-7.2000\) with \(x^{*}=(0.8000,1.2000)\) obtained in the fourth iteration in [17].

Example 4.3

Consider the following problem, which has also been considered in [24] and in [25] (Test Problem 6 in Sect. 4.6):

$$\begin{gathered}\min f_{0}(x)=-x_{1}-x_{2} \\ \quad\text{s.t. }f_{1}(x)=x_{2}-2x_{1}^{4}+8x_{1}^{3}-8x_{1}^{2}-2 \leq 0, \\ \quad\phantom{\text{s.t. }} f_{2}(x)=x_{2}-4x_{1}^{4}+32x_{1}^{3}-88x_{1}^{2}+96x_{1}-36 \leq0, \\ \quad\phantom{\text{s.t. }}0\leq x_{1} \leq3,\qquad 0\leq x_{2} \leq4. \end{gathered} $$

For this problem, we set \(k=\frac{2}{3}\), \(x^{0}=(0,0)\), \(\epsilon_{0}=0.01\), \(a=0.01\), \(q_{0}=5\), \(b =2\), \(\epsilon=10^{-16}\). The numerical results of Algorithm 3.1 are shown in Table 7.

Table 7 Numerical results for Example 4.3 with \(x^{0}=(0,0)\)

We set \(k=\frac{8}{9}\), \(x^{0}=(1.0,1.5)\), \(\epsilon_{0}=0.1\), \(a=0.1\), \(q_{0}=5\), \(b =3\), \(\epsilon=10^{-16}\). The numerical results of Algorithm 3.1 are shown in Table 8.

Table 8 Numerical results for Example 4.3 with \(x^{0}=(1.0,1.5)\)

We set \(k=\frac{3}{4}\), \(x^{0}=(2,0.5)\), \(\epsilon_{0}=0.00001\), \(a=0.001\), \(q_{0}=2\), \(b =10\), \(\epsilon=10^{-16}\). The numerical results of Algorithm 3.1 are shown in Table 9.

Table 9 Numerical results for Example 4.3 with \(x^{0}=(2,0.5)\)

In [24], with three different starting points, similar numerical results are given with \({k=\frac{2}{3}}\). The optimal solution \((2.329517, 3.178421)\) is given with the objective function value −5.507938. In [25], the optimal solution \((2.3295, 3.1783)\) is given with the objective function value −5.5079. The numerical results obtained for Example 4.3 are similar to those of [24] and [25].

From Tables 7, 8, and 9, we can see that the parameters \(q_{0}\), \(\epsilon_{0}\), a, and b need to be adjusted to obtain better numerical results for different k and \(x^{0}\). Usually, \(\epsilon_{0}\) may be 0.5, 0.1, 0.01, 0.001, or smaller, and \(a=0.5, 0.1, 0.01\), or 0.001; \(q_{0}\) may be 1, 2, 3, 5, 10, 100, or larger, and \(b= 2, 3, 5, 10\), or 100.

5 Concluding remarks

In this paper, we proposed a new method to smooth the lower order exact penalty function with \(k\in[\frac{1}{2},1)\) for inequality constrained optimization. Furthermore, we proved that the algorithm based on the smoothed penalty function is globally convergent under mild conditions. The numerical experiments show that the algorithm is effective.

References

  1. Wang, C.W., Wang, Y.J.: A superlinearly convergent projection method for constrained systems of nonlinear equations. J. Glob. Optim. 40, 283–296 (2009)


  2. Qi, L.Q., Wang, F., Wang, Y.J.: Z-eigenvalue methods for a global polynomial optimization problem. Math. Program. 118, 301–316 (2009)


  3. Qi, L.Q., Tong, X.J., Wang, Y.J.: Computing power system parameters to maximize the small signal stability margin based on min-max models. Optim. Eng. 10, 465–476 (2009)


  4. Chen, H.B., Wang, Y.J.: A family of higher-order convergent iterative methods for computing the Moore–Penrose inverse. Appl. Math. Comput. 218, 4012–4016 (2011)


  5. Wang, G., Che, H.T., Chen, H.B.: Feasibility-solvability theorems for generalized vector equilibrium problem in reflexive Banach spaces. Fixed Point Theory Appl. 2012, Article ID 38 (2012)


  6. Wang, G., Yang, X.Q., Cheng, T.C.E.: Generalized Levitin–Polyak well-posedness for generalized semi-infinite programs. Numer. Funct. Anal. Optim. 34, 695–711 (2013)


  7. Wang, Y.J., Qi, L.Q., Luo, S.L., Xu, Y.: An alternative steepest direction method for optimization in evaluating geometric discord. Pac. J. Optim. 10, 137–149 (2014)


  8. Wang, Y.J., Caccetta, L., Zhou, G.L.: Convergence analysis of a block improvement method for polynomial optimization over unit spheres. Numer. Linear Algebra Appl. 22, 1059–1076 (2015)


  9. Wang, Y.J., Liu, W.Q., Caccetta, L., Zhou, G.: Parameter selection for nonnegative \(l_{1}\) matrix/tensor sparse decomposition. Oper. Res. Lett. 43, 423–426 (2015)


  10. Sun, M., Wang, Y.J., Liu, J.: Generalized Peaceman–Rachford splitting method for multi-block separable convex programming with applications to robust PCA. Calcolo 54, 77–94 (2017)


  11. Sun, H.C., Sun, M., Wang, Y.J.: Proximal ADMM with larger step size for two-block separable convex programming and its application to the correlation matrices calibrating problems. J. Nonlinear Sci. Appl. 10(9), 5038–5051 (2017)


  12. Konnov, I.V.: The method of pairwise variations with tolerances for linearly constrained optimization problems. J. Nonlinear Var. Anal. 1, 25–41 (2017)


  13. Qin, X., Yao, J.C.: Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 18, 925–935 (2007)


  14. Luu, D.V., Mai, T.T.: Optimality conditions for Henig efficient and superefficient solutions of vector equilibrium problems. J. Nonlinear Funct. Anal. 2018, Article ID 18 (2018)


  15. Chieu, N.H., Lee, G.M., Yen, N.D.: Second-order subdifferentials and optimality conditions for C1-smooth optimization problems. Appl. Anal. Optim. 1, 461–476 (2017)


  16. Zangwill, W.I.: Nonlinear programming via penalty functions. Manag. Sci. 13(5), 334–358 (1967)


  17. Wu, Z.Y., Lee, H.W.J., Bai, F.S., Zhang, L.S., Yang, X.M.: Quadratic smoothing approximation to \(l_{1}\) exact penalty function in global optimization. J. Ind. Manag. Optim. 1, 533–547 (2005)


  18. Lian, S.J.: Smoothing approximation to \(l_{1}\) exact penalty function for inequality constrained optimization. Appl. Math. Comput. 219(6), 3113–3121 (2012)


  19. Lian, S.J., Zhang, L.S.: A simple smooth exact penalty function for smooth optimization problem. J. Syst. Sci. Complex. 25(5), 521–528 (2012)


  20. Wu, Z.Y., Bai, F.S., Yang, X.Q., Zhang, L.S.: An exact lower order penalty function and its smoothing in nonlinear programming. Optimization 53, 51–68 (2004)


  21. Meng, Z.Q., Dang, C.Y., Yang, X.Q.: On the smoothing of the squareroot exact penalty function for inequality constrained optimization. Comput. Optim. Appl. 35, 375–398 (2006)


  22. Lian, S.J., Han, J.L.: Smoothing approximation to the square-order exact penalty functions for constrained optimization. J. Appl. Math. 2013, Article ID 568316 (2013)


  23. Lasserre, J.B.: A globally convergent algorithm for exact penalty functions. Eur. J. Oper. Res. 7, 389–395 (1981)


  24. Lian, S.J., Duan, Y.Q.: Smoothing of the lower-order exact penalty function for inequality constrained optimization. J. Inequal. Appl. 2016, Article ID 185 (2016). https://doi.org/10.1186/s13660-016-1126-9


  25. Floudas, C.A., Pardalos, P.M.: A Collection of Test Problems for Constrained Global Optimization Algorithms. Springer, Berlin (1990)



Funding

This work was supported by the National Natural Science Foundation of China (71371107 and 61373027) and the Natural Science Foundation of Shandong Province (ZR2016AM10).

Author information


Contributions

SL and NN drafted the manuscript. SL revised it. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Shujun Lian.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lian, S., Niu, N. Smoothing approximation to the lower order exact penalty function for inequality constrained optimization. J Inequal Appl 2018, 131 (2018). https://doi.org/10.1186/s13660-018-1723-x

