
# A modified exact smooth penalty function for nonlinear constrained optimization

Journal of Inequalities and Applications 2012, 2012:173

https://doi.org/10.1186/1029-242X-2012-173

• Received: 13 February 2012
• Accepted: 26 July 2012

## Abstract

In this paper, a modified simple penalty function is proposed for a constrained nonlinear programming problem by augmenting the dimension of the program with a variable that controls the weight of the penalty terms. This penalty function enjoys improved smoothness. Under mild conditions, it can be proved to be exact in the sense that local minimizers of the original constrained problem are precisely the local minimizers of the associated penalty problem.

MSC: 47H20, 35K55, 90C30.

## Keywords

• constrained minimization
• exact penalty function
• nonlinear programming problem

## 1 Introduction

Merit functions have always played an important role in optimization. Traditionally, they are constructed to solve nonlinear programs by augmenting the objective function, or a corresponding Lagrangian function, with penalty or barrier terms for the constraints. The resulting function can then be optimized by unconstrained or bound-constrained optimization software or by sequential quadratic programming (SQP) techniques. Whatever technique is involved, the merit function always depends on a small parameter ε or a large parameter $\rho ={\epsilon }^{-1}$. As $\epsilon \to 0$, the minimizer of a merit function such as a barrier function or the quadratic penalty function converges to a minimizer of the original problem. By using an exact penalty function such as the ${l}_{1}$ penalty function (see [1, 9, 20–22]), the minimizer of the corresponding penalty problem must be a minimizer of the original problem once ε is sufficiently small. There are also nonsmooth penalty functions for nonsmooth optimization problems, such as the exact penalty function using the distance function for nonsmooth variational inequality problems in Hilbert spaces  and the one in .

The traditional exact penalty functions  are always nonsmooth. When such a function is used as a merit function to accept a new iterate in an SQP method, it may cause the Maratos effect . On the other hand, a traditional smooth penalty function like the quadratic penalty function cannot be exact, so a sequence of minimization subproblems must be solved as $\epsilon \to 0$. Ill-conditioning may then occur when the penalty parameter becomes too large or too small, which also makes the computation difficult. In  and , some kinds of augmented Lagrangian penalty functions have been proposed with improved exactness under strong conditions. In , exact penalty functions via the regularized gap function for variational inequalities have also been given. All these functions enjoy some smoothness, but to exploit it one needs second-order or third-order derivative information of the problem functions, which is difficult to obtain in practice. Besides, all the above kinds of penalty functions (see [24, 7, 16] for a summary) may be unbounded below even when the constrained problem is bounded, which may make it difficult to locate a minimizer.

In the paper , a new penalty function is proposed for the constrained optimization problem. By augmenting the dimension of the program with an additional variable ε that controls the weight of the penalty terms, this new penalty function enjoys smoothness and exactness, and remains bounded below under reasonable conditions. The key new idea is that the penalty function is considered as a function of the variable x and the additional variable ε simultaneously. Under proper assumptions, the minimizer $\left({x}^{\ast },{\epsilon }_{\ast }\right)$ of the merit function satisfies ${\epsilon }_{\ast }=0$, and ${x}^{\ast }$ is a minimizer of the original problem. However, the penalty function given in  is not smooth in a small neighborhood of $\left({x}^{\ast },0\right)$, where the minimizer of the original constrained problem lies. In this paper, we give a penalty function which retains the properties of the penalty function given in  and has improved smoothness.

The rest of this paper is organized as follows. In Section 2, a penalty function is introduced for a smooth nonlinear optimization problem with equality constraints and bound constraints. The smoothness of this penalty function is discussed, as well as other properties, including boundedness from below under mild assumptions. Section 3 shows the exactness of our penalty function in the sense that, under certain conditions, every local minimizer of our penalty function has the form $\left({x}^{\ast },{\epsilon }_{\ast }\right)$ with ${\epsilon }_{\ast }=0$, where ${x}^{\ast }$ is a local minimizer of the original problem; a converse result also holds.

Notation Throughout this paper, we use the Euclidean norm $\parallel x\parallel =\sqrt{\sum {x}_{k}^{2}}$. The subvector of x indexed by the indices in J is denoted by ${x}_{J}$. We denote sets of the form
$\left[\underline{x},\overline{x}\right]:=\left\{x\in {R}^{n}|\underline{x}\le x\le \overline{x}\right\},$

where the lower bound $\underline{x}\in {\left(R\cup \left\{-\mathrm{\infty }\right\}\right)}^{n}$ and the upper bound $\overline{x}\in {\left(R\cup \left\{\mathrm{\infty }\right\}\right)}^{n}$ are vectors containing proper or infinite bounds on the components of x, and $\left[\underline{x},\overline{x}\right]$ is referred to as an n-dimensional box.

## 2 New penalty function

We consider the smooth nonlinear optimization problem with equality constraints and bound constraints:
$\left(P\right)\phantom{\rule{1em}{0ex}}\begin{array}{l}minf\left(x\right)\\ \text{s.t.}\phantom{\rule{1em}{0ex}}x\in \left[u,v\right],\phantom{\rule{1em}{0ex}}F\left(x\right)=0,\end{array}$
(2.1)
where $\left[u,v\right]$ is a box in ${\mathrm{\Re }}^{n}$ with nonempty interior, $f:D\to \mathrm{\Re }$ and $F:D\to {\mathrm{\Re }}^{m}$ are continuously differentiable in an open set D containing $\left[u,v\right]$, and $m<n$. We fix $w\in {\mathrm{\Re }}^{m}$ and consider the equivalent problem:
$\left(\overline{P}\right)\phantom{\rule{1em}{0ex}}\begin{array}{l}minf\left(x\right)\\ \text{s.t.}\phantom{\rule{1em}{0ex}}{F}_{j}\left(x\right)={\epsilon }^{\gamma }{w}_{j},\phantom{\rule{1em}{0ex}}j=1,\dots ,m,\\ \phantom{\rule{1em}{0ex}}x\in \left[u,v\right],\epsilon =0,\end{array}$
(2.2)

where $\gamma >0$.

Let $\overline{\epsilon }>0$ be an upper bound of the parameter ε. Then the corresponding penalty function ${f}_{\sigma }$ on $D×\left[0,\overline{\epsilon }\right]$ for ($\overline{P}$) is given as follows:
${f}_{\sigma }\left(x,\epsilon \right):=\begin{cases}f\left(x\right)+\frac{\mathrm{△}\left(x,\epsilon \right)}{{\epsilon }^{\alpha }\left(1-q\mathrm{△}\left(x,\epsilon \right)\right)}+\sigma {\epsilon }^{\beta }, & \text{if } \epsilon >0 \text{ and } q\mathrm{△}\left(x,\epsilon \right)<1,\\ f\left(x\right), & \text{if } \epsilon =0 \text{ and } F\left(x\right)=0,\\ +\mathrm{\infty }, & \text{otherwise},\end{cases}$
(2.3)
with the constraint violation measure
$\mathrm{△}\left(x,\epsilon \right):={\parallel F\left(x\right)-{\epsilon }^{\gamma }w\parallel }^{2},$

where, in addition, $\gamma >\alpha \ge 2\beta \ge 2$ and $q>0$ are fixed numbers, $\sigma >0$ is a penalty parameter, and $\parallel \cdot \parallel$ is the Euclidean norm, with $\parallel x\parallel =\sqrt{{x}^{T}x}$ for any vector x.

Obviously, $\epsilon =\mathrm{△}\left(x,\epsilon \right)=0$ if and only if $\epsilon =0$ and ${F}_{j}\left(x\right)=0$, $j=1,\dots ,m$. The corresponding penalty problem then reads
$\left({P}_{\sigma }\right)\phantom{\rule{1em}{0ex}}\begin{array}{l}min{f}_{\sigma }\left(x,\epsilon \right)\\ \text{s.t.}\phantom{\rule{1em}{0ex}}\left(x,\epsilon \right)\in \left[u,v\right]×\left[0,\overline{\epsilon }\right].\end{array}$
(2.4)
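For concreteness, the evaluation rule (2.3) can be sketched in a few lines of Python. This is only a sketch: the names `f_sigma`, `f`, `F` are ours, and the parameter defaults $q=1$, $\alpha =2$, $\beta =1$, $\gamma =3$ are illustrative choices satisfying $\gamma >\alpha \ge 2\beta \ge 2$, not values prescribed by the paper.

```python
import math

def f_sigma(f, F, x, eps, w, sigma, q=1.0, alpha=2.0, beta=1.0, gamma=3.0):
    """Penalty function (2.3) at (x, eps); parameter defaults are illustrative."""
    # constraint violation measure  triangle(x, eps) = ||F(x) - eps^gamma * w||^2
    delta = sum((Fj - eps**gamma * wj) ** 2 for Fj, wj in zip(F(x), w))
    if eps == 0.0:
        # on the slice eps = 0 the function equals f(x) on the feasible set
        return f(x) if delta == 0.0 else math.inf
    if q * delta >= 1.0:
        # outside the region q * triangle < 1 the function is set to +infinity
        return math.inf
    return f(x) + delta / (eps**alpha * (1.0 - q * delta)) + sigma * eps**beta
```

For instance, with $f\left(x\right)={x}_{1}$ and $F\left(x\right)={x}_{1}^{2}-1$, the value at a feasible point with $\epsilon =0$ reduces to $f\left(x\right)$, while any point with $q\mathrm{△}\left(x,\epsilon \right)\ge 1$ is assigned $+\mathrm{\infty }$.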

The main difference between (2.3) and the penalty function given in  is that in (2.3), $\beta \left(\epsilon \right)={\epsilon }^{\beta }$ with $\beta \ge 1$, which does not have the property that ${\beta }^{\prime }\left(\epsilon \right)\to +\mathrm{\infty }$ as $\epsilon \to {0}^{+}$.

It is easy to see that ${f}_{\sigma }\left(x,\epsilon \right)$ is continuously differentiable with respect to $\left(x,\epsilon \right)$ on
${D}_{q}:=\left\{\left(x,\epsilon \right)\in D×\left(0,\overline{\epsilon }\right]|0\le \mathrm{△}\left(x,\epsilon \right)<\frac{1}{q}\right\}.$

### 2.1 Boundedness of the penalty function

If $F\left(x\right)=0$, $\left(x,\epsilon \right)\in {D}_{q}$, then
${f}_{\sigma }\left(x,\epsilon \right)=f\left(x\right)+\frac{{\epsilon }^{2\gamma -\alpha }{\parallel w\parallel }^{2}}{1-q{\epsilon }^{2\gamma }{\parallel w\parallel }^{2}}+\sigma {\epsilon }^{\beta }\ge {f}_{\sigma }\left(x,0\right)=f\left(x\right).$
(2.5)
Therefore, ${f}_{\sigma }\left(x,\epsilon \right)$ is bounded below on the set
${D}^{\prime }=\left\{x\in \left[u,v\right]|\parallel F\left(x\right)\parallel \le {q}^{-1/2}+{\overline{\epsilon }}^{\gamma }\parallel w\parallel \right\},$

whenever $f\left(x\right)$ is bounded below on the set ${D}^{\prime }$. This is a reasonable condition since it usually holds when f is bounded below on the feasible set, $\overline{\epsilon }$ is small enough, and q is large enough.

The denominator $1-q\mathrm{△}\left(x,\epsilon \right)$ is included since it forces the level sets of ${f}_{\sigma }$ to remain in the set $\left\{\left(x,\epsilon \right)\in {\mathrm{\Re }}^{n+1}|\mathrm{△}\left(x,\epsilon \right)<{q}^{-1}\right\}$; hence, in some sense, the iterates do not move far away from the feasible set of (P).

We now look at a simple example:
$\begin{array}{l}min{x}^{7}\\ \text{s.t.}\phantom{\rule{1em}{0ex}}{x}^{2}-1=0,\\ \phantom{\rule{1em}{0ex}}x\in \mathrm{\Re }.\end{array}$
It has a bounded feasible domain, a global minimizer at ${x}^{\ast }=-1$ with $f\left({x}^{\ast }\right)=-1$, and a local minimizer $x=1$. The traditional quadratic penalty function for this problem
$P\left(x\right)={x}^{7}+\frac{1}{\epsilon }{\left({x}^{2}-1\right)}^{2},$
is unbounded below for every penalty parameter $\epsilon >0$ since, e.g., $P\left(x\right)\to -\mathrm{\infty }$ for $x=-s$, $s\to +\mathrm{\infty }$. The same is true for other traditional penalty functions, including multiplier penalty functions that add a term $+\lambda \left({x}^{2}-1\right)$. On the other hand, our new penalty function is bounded below. Setting $w=1$, it reads
${f}_{\sigma }\left(x,\epsilon \right)={x}^{7}+\frac{{r}^{2}}{{\epsilon }^{\alpha }\left(1-q{r}^{2}\right)}+\sigma {\epsilon }^{\beta },$
where $r=1+{\epsilon }^{\gamma }-{x}^{2}$, so that $\mathrm{△}\left(x,\epsilon \right)={r}^{2}$. Since ${f}_{\sigma }\left(x,\epsilon \right)=+\mathrm{\infty }$ whenever $|x|\ge \sqrt{{q}^{-1/2}+1+{\epsilon }^{\gamma }}$, it is obvious that our penalty function is bounded below. See Figure 1 for a contour plot of the penalty function for this example.

Figure 1: Contours of the penalty function with $\beta =1$, $\gamma =3$, $\alpha =2$, and $\sigma =2\text{,}000$.
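The contrast between the two penalty functions on this example can be checked numerically. The sketch below uses the parameters shown in Figure 1; the grid, the choice $q=1$ (q is not specified in the figure caption), and all variable names are our own.

```python
import math

# Parameters from Figure 1; q is not specified there, q = 1 is our choice.
beta, gamma, alpha, sigma, q = 1.0, 3.0, 2.0, 2000.0, 1.0

def quad_penalty(x, eps):
    """Classical quadratic penalty P(x) for min x^7 s.t. x^2 - 1 = 0."""
    return x**7 + (x**2 - 1.0)**2 / eps

def f_sigma(x, eps):
    """New penalty function for the example, with w = 1."""
    r = 1.0 + eps**gamma - x**2          # so that triangle(x, eps) = r^2
    if eps == 0.0:
        return x**7 if r == 0.0 else math.inf
    if q * r * r >= 1.0:
        return math.inf
    return x**7 + r * r / (eps**alpha * (1.0 - q * r * r)) + sigma * eps**beta

# quad_penalty is unbounded below (take x = -s, s large), while f_sigma stays
# bounded below by f(x*) = -1 on a sample grid over [-2, 2] x [0, 1].
grid = [f_sigma(i / 20.0 - 2.0, j / 100.0) for i in range(81) for j in range(101)]
```

On this grid the minimum value $-1$ is attained at $\left(x,\epsilon \right)=\left(-1,0\right)$, matching the global minimizer of the constrained problem.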

## 3 Exactness of the penalty function

In this section, we show that our penalty function is exact in the sense that, under certain conditions, every local minimizer of our penalty function has the form $\left({x}^{\ast },{\epsilon }_{\ast }\right)$ with ${\epsilon }_{\ast }=0$, where ${x}^{\ast }$ is a local minimizer of the original problem; a converse proposition also holds.

Firstly, recall the Mangasarian-Fromovitz condition. We say that the Mangasarian-Fromovitz condition (see ) for Problem (P) holds at $x\in \left[u,v\right]$ if ${F}^{\prime }\left(x\right)$ has full rank and there is a vector $p\in {\mathrm{\Re }}^{n}$ with ${F}^{\prime }\left(x\right)p=0$ and
${p}_{i}>0\phantom{\rule{1em}{0ex}}\text{if}\phantom{\rule{0.2em}{0ex}}{x}_{i}={u}_{i},\phantom{\rule{2em}{0ex}}{p}_{i}<0\phantom{\rule{1em}{0ex}}\text{if}\phantom{\rule{0.2em}{0ex}}{x}_{i}={v}_{i},\phantom{\rule{1em}{0ex}}i=1,\dots ,n.$
(3.1)

Theorem 3.1 Assume that the set ${D}^{\prime }$ defined in Section 2 is bounded, and the Mangasarian-Fromovitz condition holds at each ${x}^{\prime }\in {D}^{\prime }$. Let $\gamma >\alpha \ge 2\beta \ge 2$ in (${P}_{\sigma }$). If σ is sufficiently large, then there is no Kuhn-Tucker point $\left(x,\epsilon \right)$ of (${P}_{\sigma }$) with $\epsilon >0$.

Proof Let the Lagrangian function of (${P}_{\sigma }$) for $\sigma >0$ be
$L\left(x,\epsilon ,y,z\right)={f}_{\sigma }\left(x,\epsilon \right)+\sum _{i=1}^{n}{y}_{i}\left({u}_{i}-{x}_{i}\right)+\sum _{i=1}^{n}{z}_{i}\left({x}_{i}-{v}_{i}\right)+{y}_{n+1}\left(-\epsilon \right)+{z}_{n+1}\left(\epsilon -\overline{\epsilon }\right),$
where ${y}_{i},{z}_{i}\in \mathrm{\Re }$, $i=1,\dots ,n+1$ are the Lagrangian multipliers. If $\left(x,\epsilon \right)$ is a Kuhn-Tucker point of (${P}_{\sigma }$) with $\epsilon >0$, then there exist vectors $y,z\in {\mathrm{\Re }}^{n+1}$ such that
$\begin{array}{c}{\mathrm{\nabla }}_{\left(x,\epsilon \right)}L\left(x,\epsilon ,y,z\right)=0,\hfill \\ {u}_{i}-{x}_{i}\le 0,\phantom{\rule{2em}{0ex}}{x}_{i}-{v}_{i}\le 0,\phantom{\rule{1em}{0ex}}i=1,\dots ,n,\hfill \\ -\epsilon <0,\phantom{\rule{2em}{0ex}}\epsilon -\overline{\epsilon }\le 0,\hfill \\ {y}_{i}\left({u}_{i}-{x}_{i}\right)=0,\phantom{\rule{2em}{0ex}}{z}_{i}\left({v}_{i}-{x}_{i}\right)=0,\phantom{\rule{2em}{0ex}}{z}_{n+1}\left(\epsilon -\overline{\epsilon }\right)=0,\phantom{\rule{2em}{0ex}}{y}_{n+1}\epsilon =0,\hfill \end{array}$
then
${\mathrm{\nabla }}_{\left(x,\epsilon \right)}{f}_{\sigma }\left(x,\epsilon \right)=y-z,$
and
$\begin{array}{c}min\left({y}_{i},{x}_{i}-{u}_{i}\right)=min\left({z}_{i},{v}_{i}-{x}_{i}\right)=0,\phantom{\rule{1em}{0ex}}i=1,\dots ,n,\hfill \\ {y}_{n+1}=min\left({z}_{n+1},\overline{\epsilon }-\epsilon \right)=0,\hfill \end{array}$

where ${\mathrm{\nabla }}_{\left(x,\epsilon \right)}{f}_{\sigma }\left(x,\epsilon \right)$ is the gradient of ${f}_{\sigma }$ with respect to $\left(x,\epsilon \right)$. We prove the assertion of the theorem by contradiction.

Assume that there exists a sequence $\left\{\left({x}^{k},{\epsilon }_{k},{\sigma }_{k}\right)\right\}$ with ${\epsilon }_{k}>0$ for all k, ${\sigma }_{k}\to +\mathrm{\infty }$ as $k\to \mathrm{\infty }$, where $\left({x}^{k},{\epsilon }_{k}\right)$ is a Kuhn-Tucker point of (${P}_{{\sigma }_{k}}$). We use the abbreviation ${\mathrm{△}}_{k}:=\mathrm{△}\left({x}^{k},{\epsilon }_{k}\right)$. The point ${x}^{k}$ satisfies
$\parallel F\left({x}^{k}\right)\parallel \le {\mathrm{△}}_{k}^{1/2}+{\epsilon }_{k}^{\gamma }\parallel w\parallel \le {q}^{-1/2}+{\overline{\epsilon }}^{\gamma }\parallel w\parallel ,$
hence, ${x}^{k}\in {D}^{\prime }=\left\{x\in \left[u,v\right]|\parallel F\left(x\right)\parallel \le {q}^{-1/2}+{\overline{\epsilon }}^{\gamma }\parallel w\parallel \right\}$. Since ${D}^{\prime }$ is closed and bounded, we may restrict ourselves to a subsequence if necessary and assume that
$\underset{k\to \mathrm{\infty }}{lim}{\epsilon }_{k}={\epsilon }_{\ast }\in \left[0,\overline{\epsilon }\right]\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\underset{k\to \mathrm{\infty }}{lim}{x}^{k}={x}^{\ast }\in {D}^{\prime }.$
The condition $\frac{\partial }{\partial \epsilon }{f}_{\sigma }\left({x}^{k},{\epsilon }_{k}\right)\le 0$ yields
$\begin{array}{r}\alpha q{\mathrm{△}}_{k}^{2}+\left(2\gamma -\alpha \right){\epsilon }_{k}^{2\gamma }{\parallel w\parallel }^{2}+2\left(\alpha -\gamma \right){\epsilon }_{k}^{\gamma }\sum _{j=1}^{m}{F}_{j}\left({x}^{k}\right){w}_{j}\\ \phantom{\rule{1em}{0ex}}+\beta {\epsilon }_{k}^{\alpha +\beta }{\left(1-q{\mathrm{△}}_{k}\right)}^{2}{\sigma }_{k}\le \alpha \sum _{j=1}^{m}{F}_{j}^{2}\left({x}^{k}\right)\end{array}$
(3.2)
with equality in the case ${\epsilon }_{k}\ne \overline{\epsilon }$. Since the right-hand side of (3.2), $\alpha {\sum }_{j=1}^{m}{F}_{j}^{2}\left({x}^{k}\right)$, remains bounded while ${\sigma }_{k}\to \mathrm{\infty }$ (the remaining left-hand terms are bounded because ${D}^{\prime }$ is bounded), the term $\beta {\epsilon }_{k}^{\alpha +\beta }{\left(1-q{\mathrm{△}}_{k}\right)}^{2}{\sigma }_{k}$ must remain bounded, so ${\epsilon }_{k}^{\alpha +\beta }{\left(1-q{\mathrm{△}}_{k}\right)}^{2}\to 0$ and hence
$\underset{k\to \mathrm{\infty }}{lim}{\epsilon }_{k}={\epsilon }_{\ast }=0\phantom{\rule{1em}{0ex}}\text{or}\phantom{\rule{1em}{0ex}}\underset{k\to \mathrm{\infty }}{lim}\mathrm{△}\left({x}^{k},{\epsilon }_{k}\right)={\mathrm{△}}^{\ast }={q}^{-1},$
(3.3)
where ${\mathrm{△}}^{\ast }=\mathrm{△}\left({x}^{\ast },{\epsilon }_{\ast }\right)$. On the other hand, the derivative of the penalty function ${f}_{\sigma }$ with respect to x is given by
${\mathrm{\nabla }}_{x}{f}_{\sigma }\left(x,\epsilon \right)=\mathrm{\nabla }f\left(x\right)+\frac{2}{{\epsilon }^{\alpha }{\left(1-q\mathrm{△}\left(x,\epsilon \right)\right)}^{2}}{F}^{\prime }{\left(x\right)}^{T}\left(F\left(x\right)-{\epsilon }^{\gamma }w\right).$
(3.4)
Together with the complementarity conditions above, (3.4) shows that the Kuhn-Tucker conditions at $\left({x}^{k},{\epsilon }_{k}\right)$ are equivalent to
${\left(\mathrm{\nabla }f\left({x}^{k}\right)+\frac{2}{{\epsilon }_{k}^{\alpha }{\left(1-q{\mathrm{△}}_{k}\right)}^{2}}{F}^{\prime }{\left({x}^{k}\right)}^{T}\left(F\left({x}^{k}\right)-{\epsilon }_{k}^{\gamma }w\right)\right)}_{i}\phantom{\rule{1em}{0ex}}\begin{cases}\ge 0, & {x}_{i}^{k}={u}_{i},\\ \le 0, & {x}_{i}^{k}={v}_{i},\\ =0, & \text{otherwise}.\end{cases}$
(3.5)
Multiplying (3.5) by $\frac{1}{2}{\epsilon }_{k}^{\alpha }{\left(1-q{\mathrm{△}}_{k}\right)}^{2}$, which tends to zero since ${\epsilon }_{\ast }=0$ or ${\mathrm{△}}^{\ast }={q}^{-1}$, and letting $k\to +\mathrm{\infty }$, we have
${\left({F}^{\prime }{\left({x}^{\ast }\right)}^{T}\left(F\left({x}^{\ast }\right)-{\epsilon }_{\ast }^{\gamma }w\right)\right)}_{i}\phantom{\rule{1em}{0ex}}\begin{cases}\ge 0, & {x}_{i}^{\ast }={u}_{i},\\ \le 0, & {x}_{i}^{\ast }={v}_{i},\\ =0, & \text{otherwise}.\end{cases}$
(3.6)
Since ${x}^{\ast }\in {D}^{\prime }$, the Mangasarian-Fromovitz condition holds at ${x}^{\ast }$, so there exists a vector $p\in {\mathrm{\Re }}^{n}$ such that ${F}^{\prime }\left({x}^{\ast }\right)p=0$ and p satisfies (3.1). Let ${I}_{1}:=\left\{i|{x}_{i}^{\ast }={u}_{i}\right\}$, ${I}_{2}:=\left\{i|{x}_{i}^{\ast }={v}_{i}\right\}$ and $\overline{{\mathrm{△}}^{\ast }}:=F\left({x}^{\ast }\right)-{\epsilon }_{\ast }^{\gamma }w$. Then by (3.6), we have
$0={\left({F}^{\prime }\left({x}^{\ast }\right)p\right)}^{T}\overline{{\mathrm{△}}^{\ast }}=\sum _{i\in {I}_{1}}{p}_{i}{\left({F}^{\prime }{\left({x}^{\ast }\right)}^{T}\overline{{\mathrm{△}}^{\ast }}\right)}_{i}+\sum _{i\in {I}_{2}}{p}_{i}{\left({F}^{\prime }{\left({x}^{\ast }\right)}^{T}\overline{{\mathrm{△}}^{\ast }}\right)}_{i}.$
(3.7)
By (3.1) and (3.6), each summand in (3.7) is nonnegative, so (3.7) and the Mangasarian-Fromovitz condition (3.1) imply ${\left({F}^{\prime }{\left({x}^{\ast }\right)}^{T}\overline{{\mathrm{△}}^{\ast }}\right)}_{i}=0$ for $i\in {I}_{1}\cup {I}_{2}$; together with (3.6), this gives ${F}^{\prime }{\left({x}^{\ast }\right)}^{T}\overline{{\mathrm{△}}^{\ast }}=0$. Now the fact that ${F}^{\prime }\left({x}^{\ast }\right)$ has full rank yields $\overline{{\mathrm{△}}^{\ast }}=0$, i.e.,
$F\left({x}^{\ast }\right)-{\epsilon }_{\ast }^{\gamma }w=0,$
and by $\overline{{\mathrm{△}}^{\ast }}={lim}_{k\to \mathrm{\infty }}\left(F\left({x}^{k}\right)-{\epsilon }_{k}^{\gamma }w\right)=0$, it follows that
$\underset{k\to \mathrm{\infty }}{lim}\mathrm{△}\left({x}^{k},{\epsilon }_{k}\right)=\underset{k\to \mathrm{\infty }}{lim}{\parallel F\left({x}^{k}\right)-{\epsilon }_{k}^{\gamma }w\parallel }^{2}={\mathrm{△}}^{\ast }=0.$
By (3.3), we obtain
$\underset{k\to \mathrm{\infty }}{lim}{\epsilon }_{k}={\epsilon }_{\ast }=0.$

Thus, ${lim}_{k\to \mathrm{\infty }}F\left({x}^{k}\right)=F\left({x}^{\ast }\right)=0$.

Furthermore, by (3.2), it holds that
$\begin{array}{r}\frac{q}{{\epsilon }_{k}^{\alpha +\beta }}{\mathrm{△}}^{2}\left({x}^{k},{\epsilon }_{k}\right)+\frac{2\frac{\gamma }{\alpha }-1}{{\epsilon }_{k}^{\alpha +\beta -2\gamma }}{\parallel w\parallel }^{2}+2\left(1-\frac{\gamma }{\alpha }\right){\epsilon }_{k}^{\gamma -\left(\alpha +\beta \right)}\sum _{i=1}^{m}{F}_{i}\left({x}^{k}\right){w}_{i}\\ \phantom{\rule{1em}{0ex}}+\frac{\beta }{\alpha }{\left(1-q\mathrm{△}\left({x}^{k},{\epsilon }_{k}\right)\right)}^{2}{\sigma }_{k}\le \frac{{\sum }_{i=1}^{m}{F}_{i}^{2}\left({x}^{k}\right)}{{\epsilon }_{k}^{\alpha +\beta }}.\end{array}$
Letting $k\to \mathrm{\infty }$, the last term on the left-hand side tends to +∞, since ${\sigma }_{k}\to \mathrm{\infty }$ and $q\mathrm{△}\left({x}^{k},{\epsilon }_{k}\right)\to 0$. Hence the right-hand side must also tend to +∞, and the vectors ${y}^{k}=\frac{F\left({x}^{k}\right)}{{\epsilon }_{k}^{\frac{\alpha +\beta }{2}}}$ satisfy $\parallel {y}^{k}\parallel \to +\mathrm{\infty }$. The vectors ${z}^{k}=\frac{{y}^{k}}{\parallel {y}^{k}\parallel }$ have norm 1, and (3.5) implies that the numbers ${\mu }_{i}^{k}$ ($i=1,\dots ,n$), defined by
$\begin{array}{rcl}{\mu }_{i}^{k}& =& \frac{1}{\parallel {y}^{k}\parallel }\frac{\partial }{\partial {x}_{i}}f\left({x}^{k}\right)+\frac{1}{\parallel {y}^{k}\parallel }\frac{2}{{\epsilon }_{k}^{\alpha }{\left(1-q\mathrm{△}\left({x}^{k},{\epsilon }_{k}\right)\right)}^{2}}{\left({F}^{\prime }{\left({x}^{k}\right)}^{T}\left(F\left({x}_{k}\right)-{\epsilon }_{k}^{\gamma }w\right)\right)}_{i}\\ =& \frac{1}{\parallel {y}^{k}\parallel }\frac{\partial }{\partial {x}_{i}}f\left({x}^{k}\right)+\frac{2}{{\epsilon }_{k}^{\frac{\alpha -\beta }{2}}{\left(1-q\mathrm{△}\left({x}^{k},{\epsilon }_{k}\right)\right)}^{2}}{\left({F}^{\prime }{\left({x}^{k}\right)}^{T}\left({z}^{k}-\frac{{\epsilon }_{k}^{\gamma -\frac{\alpha +\beta }{2}}w}{\parallel {y}^{k}\parallel }\right)\right)}_{i},\end{array}$
satisfy
${\mu }_{i}^{k}\phantom{\rule{1em}{0ex}}\begin{cases}\ge 0, & {x}_{i}^{k}={u}_{i},\\ \le 0, & {x}_{i}^{k}={v}_{i},\\ =0, & \text{otherwise}.\end{cases}$

If we pick a convergent subsequence ${z}^{{n}_{k}}$ with the limit ${z}^{\ast }$, multiply ${\mu }_{i}^{k}$ by $\frac{1}{2}{\epsilon }_{k}^{\frac{\alpha -\beta }{2}}{\left(1-q\mathrm{△}\left({x}^{k},{\epsilon }_{k}\right)\right)}^{2}$ and pass to the limit (note that $\frac{1}{\parallel {y}^{k}\parallel }\to 0$, ${\epsilon }_{k}^{\frac{\alpha -\beta }{2}}\to 0$ and $\frac{{\epsilon }_{k}^{\gamma -\frac{\alpha +\beta }{2}}}{\parallel {y}^{k}\parallel }\to 0$), we obtain
${\left({F}^{\prime }{\left({x}^{\ast }\right)}^{T}{z}^{\ast }\right)}_{i}\phantom{\rule{1em}{0ex}}\begin{cases}\ge 0, & i\in {I}_{1},\\ \le 0, & i\in {I}_{2},\\ =0, & \text{otherwise}.\end{cases}$
Now, arguing as for (3.7) above, it follows that ${z}^{\ast }=0$, which contradicts $\parallel {z}^{\ast }\parallel =1$. Thus such a sequence $\left\{\left({x}^{k},{\epsilon }_{k},{\sigma }_{k}\right)\right\}$ cannot exist, and for sufficiently large $\sigma >0$, all Kuhn-Tucker points of (${P}_{\sigma }$) are of the form $\left(x,0\right)$. □
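The derivative formulas used in the proof, the x-gradient (3.4) and the ε-derivative underlying (3.2), can be sanity-checked against central finite differences. The sketch below uses a toy instance of our own ($f\left(x\right)={x}_{1}+2{x}_{2}$, $F\left(x\right)={x}_{1}^{2}+{x}_{2}^{2}-2$, $w=1$, with parameter values and a test point chosen so that $q\mathrm{△}\left(x,\epsilon \right)<1$); none of these choices come from the paper.

```python
# Toy data: n = 2, m = 1, f(x) = x1 + 2 x2, F(x) = x1^2 + x2^2 - 2, w = 1.
gamma, alpha, beta, q, sigma, w = 3.0, 2.0, 1.0, 0.5, 5.0, 1.0

def F(x):
    return x[0]**2 + x[1]**2 - 2.0

def f_sigma(x, eps):
    d = (F(x) - eps**gamma * w)**2                     # triangle(x, eps)
    return x[0] + 2.0*x[1] + d / (eps**alpha * (1.0 - q*d)) + sigma*eps**beta

def grad_f_sigma(x, eps):
    """Analytic gradient: formula (3.4) in x, chain rule in eps."""
    Fb = F(x) - eps**gamma * w                         # F(x) - eps^gamma w
    d = Fb**2
    c = 2.0 / (eps**alpha * (1.0 - q*d)**2)
    gx = [1.0 + c*Fb*2.0*x[0], 2.0 + c*Fb*2.0*x[1]]    # (3.4): grad f + c F'(x)^T Fb
    dd = -2.0 * gamma * eps**(gamma - 1.0) * w * Fb    # d(triangle)/d(eps)
    ge = (sigma * beta * eps**(beta - 1.0)
          + dd / (eps**alpha * (1.0 - q*d)**2)
          - alpha * d / (eps**(alpha + 1.0) * (1.0 - q*d)))
    return gx, ge

x0, e0, h = [1.0, 0.8], 0.3, 1.0e-6
gx, ge = grad_f_sigma(x0, e0)
fd = [  # central finite differences in x1, x2, eps
    (f_sigma([x0[0]+h, x0[1]], e0) - f_sigma([x0[0]-h, x0[1]], e0)) / (2*h),
    (f_sigma([x0[0], x0[1]+h], e0) - f_sigma([x0[0], x0[1]-h], e0)) / (2*h),
    (f_sigma(x0, e0+h) - f_sigma(x0, e0-h)) / (2*h),
]
```

The ε-derivative expression in `grad_f_sigma` is exactly the quantity whose sign condition, after multiplication by $\alpha ^{-1}{\epsilon }^{\alpha +1}{\left(1-q\mathrm{△}\right)}^{2}$, yields inequality (3.2).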

Theorem 3.2 Assume that $\left({x}^{\ast },{\epsilon }_{\ast }\right)$ is a local minimizer of (${P}_{\sigma }$) with finite ${f}_{\sigma }\left({x}^{\ast },{\epsilon }_{\ast }\right)$, where $\sigma >0$ is sufficiently large. If the hypotheses of Theorem 3.1 are fulfilled, then ${x}^{\ast }$ is a local minimizer of (P).

Proof Let $\left({x}^{\ast },{\epsilon }_{\ast }\right)$ be a local minimizer of (${P}_{\sigma }$) with finite ${f}_{\sigma }\left({x}^{\ast },{\epsilon }_{\ast }\right)$, where $\sigma >0$ is sufficiently large. If ${\epsilon }_{\ast }>0$, then $\left({x}^{\ast },{\epsilon }_{\ast }\right)$ must be a Kuhn-Tucker point of (${P}_{\sigma }$), contradicting Theorem 3.1. Therefore, ${\epsilon }_{\ast }=0$, and since ${f}_{\sigma }\left({x}^{\ast },{\epsilon }_{\ast }\right)$ is finite, $\mathrm{△}\left({x}^{\ast },{\epsilon }_{\ast }\right)=0$. This implies $F\left({x}^{\ast }\right)=0$, and by (2.5) there is a neighborhood $N\left({x}^{\ast }\right)$ of ${x}^{\ast }$ on which $f\left(x\right)\ge f\left({x}^{\ast }\right)$ for all feasible x. Therefore, ${x}^{\ast }$ is a local minimizer of (P). □

We now show a converse result of Theorem 3.2, which will use the following lemmas.

Lemma 3.1 Suppose $F\left({x}^{\ast }\right)=0$, and ${F}^{\prime }\left({x}^{\ast }\right)$ has full rank. Then there exist a neighborhood ${N}_{0}\left({x}^{\ast }\right)$ of ${x}^{\ast }$ and a constant ${\kappa }_{0}>0$ such that for each $x\in {N}_{0}\left({x}^{\ast }\right)$, and each subset J of $\left\{1,2,\dots ,m\right\}$, there exists a vector $y=y\left(x\right)\in {N}_{0}\left({x}^{\ast }\right)$ with ${F}_{i}\left(y\right)=0$, for $i\in J$ and ${F}_{i}\left(y\right)={F}_{i}\left(x\right)$, $i\in K=\left\{1,2,\dots ,m\right\}\setminus J$, such that
$\parallel x-y\parallel \le {\kappa }_{0}\parallel {F}_{J}\left(x\right)\parallel .$
Proof Since $F\left({x}^{\ast }\right)=0$ and ${F}^{\prime }\left({x}^{\ast }\right)$ has full rank, there exists a matrix $B\in {\mathrm{\Re }}^{\left(n-m\right)×n}$ such that the augmented matrix $\left(\begin{array}{c}{F}^{\prime }\left({x}^{\ast }\right)\\ B\end{array}\right)$ is nonsingular. By the continuity of ${F}^{\prime }\left(\cdot \right)$ at ${x}^{\ast }$, there exists a neighborhood ${N}_{1}\left({x}^{\ast }\right)\subset D$ of ${x}^{\ast }$ such that $\left(\begin{array}{c}{F}^{\prime }\left(x\right)\\ B\end{array}\right)$ is nonsingular, for any $x\in {N}_{1}\left({x}^{\ast }\right)$. Take for $\mathcal{A}$ the closed convex hull of $\left\{{F}^{\prime }\left(x\right)|x\in {N}_{1}\left({x}^{\ast }\right)\right\}$, then for all $A\in \mathcal{A}$, the matrix $\left(\begin{array}{c}A\\ B\end{array}\right)$ is nonsingular. We now show that for any $x,y\in {N}_{1}\left({x}^{\ast }\right)$, there exists a matrix $A\in \mathcal{A}$ such that
$F\left(x\right)-F\left(y\right)=A\left(x-y\right).$
(3.8)
In fact, given $x,y\in {N}_{1}\left({x}^{\ast }\right)$, it follows from the mean value theorem that
$\begin{array}{rl}F\left(x\right)-F\left(y\right)& ={\int }_{0}^{1}{F}^{\prime }\left(y+s\left(x-y\right)\right)\left(x-y\right)\phantom{\rule{0.2em}{0ex}}ds\\ ={A}_{x,y}\left(x-y\right),\end{array}$
where ${A}_{x,y}={\int }_{0}^{1}{F}^{\prime }\left(y+s\left(x-y\right)\right)\phantom{\rule{0.2em}{0ex}}ds\in \mathcal{A}$, so (3.8) holds. Set the mapping $H\left(z\right):=\left(\begin{array}{c}F\left(z\right)\\ B\left(z-{x}^{\ast }\right)\end{array}\right)$, for $z\in {N}_{1}\left({x}^{\ast }\right)$. By the proof of Theorem 4.5 in , there exists a neighborhood ${N}_{0}\left({x}^{\ast }\right)\subset {N}_{1}\left({x}^{\ast }\right)$ of ${x}^{\ast }$ such that for each $x\in {N}_{0}\left({x}^{\ast }\right)$ and each subset J of $\left\{1,2,\dots ,m\right\}$, there exists a vector $y=y\left(x\right)\in {N}_{0}\left({x}^{\ast }\right)$ with
$H\left(y\right)=\left(\begin{array}{c}F\left(y\right)\\ B\left(y-{x}^{\ast }\right)\end{array}\right)=\left(\begin{array}{c}0\\ {F}_{K}\left(x\right)\\ B\left(x-{x}^{\ast }\right)\end{array}\right),$

so ${F}_{i}\left(y\right)=0$, for $i\in J$ and ${F}_{i}\left(y\right)={F}_{i}\left(x\right)$, $i\in K$.

For $x\in {N}_{0}\left({x}^{\ast }\right)$ and $y=y\left(x\right)$ as above, we have
$H\left(x\right)-H\left(y\right)=\left(\begin{array}{c}A\\ B\end{array}\right)\left(x-y\right),$
for some $A\in \mathcal{A}$. On the other side, we have
$\parallel H\left(x\right)-H\left(y\right)\parallel =\parallel {F}_{J}\left(x\right)\parallel .$
(3.9)
Therefore, combining (3.8) with (3.9), we have
$\parallel x-y\parallel \le \parallel {\left(\begin{array}{c}A\\ B\end{array}\right)}^{-1}\parallel \parallel {F}_{J}\left(x\right)\parallel \le {\kappa }_{0}\parallel {F}_{J}\left(x\right)\parallel ,$
where
${\kappa }_{0}:=\underset{A\in \mathcal{A}}{sup}\parallel {\left(\begin{array}{c}A\\ B\end{array}\right)}^{-1}\parallel <+\mathrm{\infty },$

and this completes the proof. □
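Lemma 3.1 can be illustrated on a small instance. In the sketch below, the map $F\left(x\right)={x}_{1}^{2}+{x}_{2}^{2}-2$, the point ${x}^{\ast }=\left(1,1\right)$, the matrix $B=\left(0,1\right)$, and the constant ${\kappa }_{0}=1$ are all our own choices, the last verified only empirically at the sampled point.

```python
import math

# Instance: n = 2, m = 1, F(x) = x1^2 + x2^2 - 2, x* = (1, 1), F(x*) = 0.
# F'(x*) = (2, 2) has full rank, and B = (0, 1) makes (F'(x*); B) nonsingular.
def F(x):
    return x[0]**2 + x[1]**2 - 2.0

def y_of_x(x):
    """For J = {1}: the point y(x) with F(y) = 0 and the B-component of H
    unchanged, i.e. y2 = x2 (solvable in closed form for this instance)."""
    return [math.sqrt(2.0 - x[1]**2), x[1]]

x = [1.1, 1.0]
y = y_of_x(x)
dist = math.hypot(x[0] - y[0], x[1] - y[1])
# Lemma 3.1 bound: ||x - y|| <= kappa0 ||F_J(x)||; kappa0 = 1 suffices here.
```

Here $y=\left(1,1\right)$, $\parallel x-y\parallel =0.1$ and $\parallel {F}_{J}\left(x\right)\parallel =0.21$, so the bound holds with room to spare.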

Lemma 3.2 Assume that ${x}^{\ast }$ is a local minimizer of problem (2.1) with ${u}_{i}<{x}_{i}^{\ast }<{v}_{i}$, $i\in \left\{1,\dots ,p\right\}$, where $m\le p\le n$. Suppose that ${F}^{\prime }\left({x}^{\ast }\right)$ has full rank. Then there exists a constant ${\kappa }_{1}>0$ such that
$f\left(x\right)\ge f\left({x}^{\ast }\right)-{\kappa }_{1}\parallel F\left(x\right)\parallel \phantom{\rule{1em}{0ex}}\text{for all}\phantom{\rule{0.2em}{0ex}}x\in {N}_{0}\left({x}^{\ast }\right),$

where ${N}_{0}\left({x}^{\ast }\right)$ is defined in Lemma 3.1.

Proof Let ${N}_{0}\left({x}^{\ast }\right)$ and ${\kappa }_{0}$ be as in Lemma 3.1, and let $x\in {N}_{0}\left({x}^{\ast }\right)$. By Lemma 3.1 with $J=\left\{1,\dots ,m\right\}$, there exists a $y=y\left(x\right)$ with $F\left(y\right)=0$ and ${y}_{i}={x}_{i}^{\ast }$, $i=m+1,\dots ,n$, such that
$\parallel x-y\parallel \le {\kappa }_{0}\parallel F\left(x\right)\parallel .$
(3.10)
So y is a feasible point of problem (2.1), and $f\left(y\right)\ge f\left({x}^{\ast }\right)$. Since f is continuously differentiable, for any $x,y\in {N}_{0}\left({x}^{\ast }\right)$, there exists a vector $\xi \in {\mathrm{\Re }}^{n}$ such that
$f\left(x\right)-f\left(y\right)=\mathrm{\nabla }f{\left(\xi \right)}^{T}\left(x-y\right),$
where ξ lies in the segment between x and y. Set $L:={sup}_{z\in {N}_{0}\left({x}^{\ast }\right)}\parallel \mathrm{\nabla }f\left(z\right)\parallel$; then we have
$\begin{array}{rl}|f\left(x\right)-f\left(y\right)|& \le \parallel \mathrm{\nabla }f\left(\xi \right)\parallel \parallel x-y\parallel \\ \le L\parallel x-y\parallel \\ \le L{\kappa }_{0}\parallel F\left(x\right)\parallel ,\end{array}$

where the last inequality holds from (3.10).

Let ${\kappa }_{1}=L{\kappa }_{0}$, then
$f\left(x\right)=f\left(x\right)-f\left(y\right)+f\left(y\right)\ge f\left({x}^{\ast }\right)-{\kappa }_{1}\parallel F\left(x\right)\parallel ,$

which completes the proof. □

Theorem 3.3 If ${x}^{\ast }$ is a local minimizer of problem (P) with ${u}_{i}<{x}_{i}^{\ast }<{v}_{i}$, $i\in \left\{1,\dots ,p\right\}$, where $m\le p\le n$, and ${F}^{\prime }\left({x}^{\ast }\right)$ has full rank, then for sufficiently large $\sigma >0$, there are a neighborhood $N\left({x}^{\ast }\right)$ of ${x}^{\ast }$ and an ${\epsilon }^{\prime }\in \left(0,\overline{\epsilon }\right]$ such that
${f}_{\sigma }\left(x,\epsilon \right)\ge {f}_{\sigma }\left({x}^{\ast },0\right)=f\left({x}^{\ast }\right)\phantom{\rule{1em}{0ex}}\text{for all}\phantom{\rule{0.2em}{0ex}}\left(x,\epsilon \right)\in N\left({x}^{\ast }\right)×\left[0,{\epsilon }^{\prime }\right].$

In particular, $\left({x}^{\ast },0\right)$ is a local minimizer of ${f}_{\sigma }$.

Proof Let $N\left({x}^{\ast }\right)\subset {N}_{0}\left({x}^{\ast }\right)$ be a neighborhood of ${x}^{\ast }$ such that
$\underset{x\in N\left({x}^{\ast }\right)}{sup}\left(f\left({x}^{\ast }\right)-f\left(x\right)\right)<1,$
(3.11)

where ${N}_{0}\left({x}^{\ast }\right)$ is defined in Lemma 3.1.

For $\left(x,\epsilon \right)\in N\left({x}^{\ast }\right)×\left(0,{\epsilon }^{\prime }\right]$, where ${\epsilon }^{\prime }\in \left(0,\overline{\epsilon }\right]$ and ${\epsilon }^{\prime }\le 1$, we distinguish two cases.

Case 1. $\mathrm{△}\left(x,\epsilon \right)={\parallel F\left(x\right)-{\epsilon }^{\gamma }w\parallel }^{2}\ge {\epsilon }^{\alpha }$. For this case, we have
$\begin{array}{rl}{f}_{\sigma }\left(x,\epsilon \right)& \ge f\left(x\right)+1+\sigma {\epsilon }^{\beta }\\ \ge f\left({x}^{\ast }\right)+\sigma {\epsilon }^{\beta }\\ >f\left({x}^{\ast }\right)={f}_{\sigma }\left({x}^{\ast },0\right),\end{array}$

where the second inequality is by (3.11).

Case 2. $\mathrm{△}\left(x,\epsilon \right)<{\epsilon }^{\alpha }$. Then $\parallel F\left(x\right)\parallel \le {\mathrm{△}}^{\frac{1}{2}}+{\epsilon }^{\gamma }\parallel w\parallel \le {\epsilon }^{\frac{\alpha }{2}}+{\epsilon }^{\gamma }\parallel w\parallel$, and
$\begin{array}{rl}{f}_{\sigma }\left(x,\epsilon \right)& \ge f\left(x\right)+\sigma {\epsilon }^{\beta }\\ \ge f\left({x}^{\ast }\right)-{\kappa }_{1}\parallel F\left(x\right)\parallel +\sigma {\epsilon }^{\beta }\\ \ge f\left({x}^{\ast }\right)-{\kappa }_{1}\left({\epsilon }^{\frac{\alpha }{2}}+{\epsilon }^{\gamma }\parallel w\parallel \right)+\sigma {\epsilon }^{\beta }\\ \ge f\left({x}^{\ast }\right)+\left(\sigma -{\kappa }_{1}\left(1+{\epsilon }^{\gamma -\frac{\alpha }{2}}\parallel w\parallel \right)\right){\epsilon }^{\frac{\alpha }{2}}.\end{array}$
(3.12)

The last inequality holds since $\beta \le \frac{\alpha }{2}$ and $\epsilon \le {\epsilon }^{\prime }\le 1$, so that ${\epsilon }^{\beta }\ge {\epsilon }^{\frac{\alpha }{2}}$.

Let $\sigma >{\kappa }_{1}\left(1+\parallel w\parallel \right)$. Since $\epsilon \le 1$ and $\gamma >\frac{\alpha }{2}$, we have $\sigma >{\kappa }_{1}\left(1+{\epsilon }^{\gamma -\frac{\alpha }{2}}\parallel w\parallel \right)$, and we get
${f}_{\sigma }\left(x,\epsilon \right)\ge f\left({x}^{\ast }\right)={f}_{\sigma }\left({x}^{\ast },0\right).$

From Case 1 and Case 2, we obtain the conclusion. □

## 4 Concluding remarks

In this paper, a modified exact penalty function for equality-constrained nonlinear programming problems is constructed by augmenting the program with a new variable that controls the constraint violation. This function enjoys smoothness, and under very mild conditions it is proved to be an exact penalty function.

Since in practice many applied problems are nonsmooth, it is meaningful to extend the results in this paper to the nonsmooth case. By using the limiting subgradients presented in the two books by Mordukhovich [14, 15], as well as Clarke's generalized gradients in , we can extend the penalty function with the mentioned good properties to nonsmooth optimization problems, just as has been done in . That will be our future research direction.

## Declarations

### Acknowledgements

The authors wish to thank the anonymous referees for their endeavors and valuable comments. The authors would also like to thank Professor Zhang Liansheng for some very helpful comments on a preliminary version of this paper. This research was supported by the National Natural Science Foundation of China under Grants 10971118, 11101248, Natural Science Foundation of Shandong Province under Grants ZR2012AM016, and the foundation 4041-409012 of Shandong University of Technology.

## Authors’ Affiliations

(1)
School of Science, Shandong University of Technology, Zibo, Shandong, 255049, P.R. China
