
A modified exact smooth penalty function for nonlinear constrained optimization

Abstract

In this paper, a modified simple penalty function is proposed for a constrained nonlinear programming problem by augmenting the dimension of the program with a variable that controls the weight of the penalty terms. This penalty function enjoys improved smoothness. Under mild conditions, it can be proved to be exact in the sense that local minimizers of the original constrained problem are precisely the local minimizers of the associated penalty problem.

MSC:47H20, 35K55, 90C30.

1 Introduction

The merit function has always played an important role in optimization. It is traditionally constructed to solve nonlinear programs by augmenting the objective function, or a corresponding Lagrangian function, with penalty or barrier terms associated with the constraints. The resulting function can then be optimized by unconstrained or bound-constrained optimization software or by sequential quadratic programming (SQP) techniques. No matter which technique is involved, the merit function always depends on a small parameter $\varepsilon$ or a large parameter $\rho = \varepsilon^{-1}$. As $\varepsilon \to 0$, the minimizer of a merit function such as a barrier function or the quadratic penalty function converges to a minimizer of the original problem. By using an exact penalty function such as the $l_1$ penalty function (see [1, 9, 20–22]), the minimizer of the corresponding penalty problem must be a minimizer of the original problem when $\varepsilon$ is sufficiently small. There are also nonsmooth penalty functions for nonsmooth optimization problems, such as the exact penalty function using the distance function for the nonsmooth variational inequality problem in Hilbert spaces [18] and the one in [19].

The traditional exact penalty functions [8] are always nonsmooth. When such a function is used as a merit function to accept a new iterate in an SQP method, it may cause the Maratos effect [13]. On the other hand, a traditional smooth penalty function like the quadratic penalty function cannot be exact, so one must solve a sequence of minimization subproblems as $\varepsilon \to 0$. Ill-conditioning may then occur when the penalty parameter is too large or too small, which also makes the computation difficult. In [3] and [6], some kinds of augmented Lagrangian penalty functions have been proposed with improved exactness under strong conditions. In [11], exact penalty functions via the regularized gap function for variational inequalities have also been given. All these functions enjoy some smoothness, but to exploit it one needs second-order or third-order derivative information of the problem functions, which is difficult to obtain in practice. Besides, all the above kinds of penalty functions (see [2–4, 7, 16] for a summary) may be unbounded below even when the constrained problem is bounded, which may make it difficult to locate a minimizer.

In [10], a new penalty function was proposed for the constrained optimization problem. By augmenting the dimension of the program with an additional variable $\varepsilon$ that controls the weight of the penalty terms, this new penalty function enjoys smoothness and exactness, and remains bounded below under reasonable conditions. Its important new idea is that the penalty function is considered as a function of the variable $x$ and the additional variable $\varepsilon$ simultaneously. Under proper assumptions, the minimizer $(x^*, \varepsilon^*)$ of the merit function satisfies $\varepsilon^* = 0$, and $x^*$ is a minimizer of the original problem. However, the penalty function given in [10] is not smooth in a small neighborhood of $(x^*, 0)$, where the minimizer of the original constrained problem lies. In this paper, we give a penalty function which enjoys the properties of the penalty function given in [10] and has improved smoothness.

The rest of this paper is organized as follows. In Section 2, a penalty function is introduced for a smooth nonlinear optimization problem with equality constraints and bound constraints. The smoothness of this penalty function is discussed, as well as other properties, including boundedness below under mild assumptions. Section 3 shows the exactness of our penalty function in the sense that, under certain conditions, a local minimizer of our penalty function has the form $(x^*, \varepsilon^*)$ with $\varepsilon^* = 0$, where $x^*$ is a local minimizer of the original problem, and a converse result holds.

Notation Throughout this paper, we use the Euclidean norm $\|x\| = \sqrt{\sum_k x_k^2}$. The subvector of $x$ indexed by the indices in $J$ is denoted by $x_J$. We denote sets of the form

$$[\underline{x}, \bar{x}] := \{x \in \mathbb{R}^n \mid \underline{x} \le x \le \bar{x}\},$$

where the lower bound $\underline{x} \in (\mathbb{R} \cup \{-\infty\})^n$ and the upper bound $\bar{x} \in (\mathbb{R} \cup \{+\infty\})^n$ are vectors containing proper or infinite bounds on the components of $x$, and $[\underline{x}, \bar{x}]$ is referred to as an $n$-dimensional box.

2 New penalty function

We consider the smooth nonlinear optimization problem with equality constraints and bound constraints:

$$(P)\qquad \min\ f(x) \quad \text{s.t.}\ x \in [u,v],\ F(x) = 0,$$
(2.1)

where $[u,v]$ is a box in $\mathbb{R}^n$ with nonempty interior, $f: D \to \mathbb{R}$ and $F: D \to \mathbb{R}^m$ are continuously differentiable in an open set $D$ containing $[u,v]$, and $m < n$. We fix $w \in \mathbb{R}^m$ and consider the equivalent problem:

$$(\bar{P})\qquad \min\ f(x) \quad \text{s.t.}\ F_j(x) = \varepsilon^\gamma w_j,\ j = 1,\dots,m,\ x \in [u,v],\ \varepsilon = 0,$$
(2.2)

where $\gamma > 0$.

Let $\bar{\varepsilon} > 0$ be an upper bound for the parameter $\varepsilon$. Then the corresponding penalty function $f_\sigma$ on $D \times [0, \bar{\varepsilon}]$ for $(\bar{P})$ is given as follows:

$$f_\sigma(x,\varepsilon) = \begin{cases} f(x) & \text{if } \varepsilon = \ell(x,\varepsilon) = 0, \\ f(x) + \dfrac{1}{\varepsilon^\alpha}\dfrac{\ell(x,\varepsilon)}{1 - q\,\ell(x,\varepsilon)} + \sigma\varepsilon^\beta & \text{if } \varepsilon > 0,\ \ell(x,\varepsilon) < q^{-1}, \\ +\infty & \text{otherwise} \end{cases}$$
(2.3)

with the constraint violation measure

$$\ell(x,\varepsilon) := \|F(x) - \varepsilon^\gamma w\|^2,$$

where, in addition, $\gamma > \alpha \ge 2\beta \ge 2$ and $q > 0$ are fixed numbers, $\sigma > 0$ is a penalty parameter, and $\|\cdot\|$ is the Euclidean norm, with $\|x\| = \sqrt{x^T x}$ for any vector $x$.

Obviously, $\varepsilon = \ell(x,\varepsilon) = 0$ if and only if $\varepsilon = 0$ and $F_j(x) = 0$, $j = 1,\dots,m$. The corresponding penalty problem then reads

$$(P_\sigma)\qquad \min\ f_\sigma(x,\varepsilon) \quad \text{s.t.}\ (x,\varepsilon) \in [u,v] \times [0, \bar{\varepsilon}].$$
(2.4)

The main difference between (2.3) and the penalty function given in [10] is that in (2.3) the term $\beta(\varepsilon) = \varepsilon^\beta$ does not have the property that $\beta'(\varepsilon) \to +\infty$ as $\varepsilon \to 0^+$.

It is easy to see that $f_\sigma(x,\varepsilon)$ is continuously differentiable with respect to $(x,\varepsilon)$ on

$$D_q := \{(x,\varepsilon) \in D \times (0, \bar{\varepsilon}] \mid 0 \le \ell(x,\varepsilon) < 1/q\}.$$
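Although the paper gives no implementation, definition (2.3) translates directly into code. The following is a minimal numerical sketch (ours, with illustrative names; it assumes $f$ and $F$ are supplied as callables and that the fixed parameters satisfy the conditions above):

```python
import numpy as np

def make_penalty(f, F, w, alpha, beta, gamma, q, sigma):
    """Return f_sigma(x, eps) as in (2.3), built from an objective f and a
    constraint map F: R^n -> R^m. A sketch only; it assumes
    gamma > alpha >= 2*beta >= 2 and q, sigma > 0, as in the text."""
    def ell(x, eps):
        # constraint violation measure l(x, eps) = ||F(x) - eps^gamma * w||^2
        return float(np.sum((np.asarray(F(x)) - eps**gamma * np.asarray(w))**2))
    def f_sigma(x, eps):
        l = ell(x, eps)
        if eps == 0.0:
            return f(x) if l == 0.0 else np.inf   # first and third cases of (2.3)
        if eps > 0.0 and l < 1.0 / q:
            return f(x) + l / (eps**alpha * (1.0 - q * l)) + sigma * eps**beta
        return np.inf                             # outside the domain, value is +inf
    return f_sigma

# Usage on the example of Section 2.1 below (hypothetical parameter values):
# f_sig = make_penalty(lambda x: x[0]**7, lambda x: np.array([x[0]**2 - 1.0]),
#                      np.array([1.0]), 2.0, 1.0, 3.0, 1.0, 2000.0)
```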

2.1 Boundedness of the penalty function

If $F(x) = 0$ and $(x,\varepsilon) \in D_q$, then

$$f_\sigma(x,\varepsilon) = f(x) + \frac{\varepsilon^{2\gamma-\alpha}\|w\|^2}{1 - q\varepsilon^{2\gamma}\|w\|^2} + \sigma\varepsilon^\beta \ge f_\sigma(x,0) = f(x).$$
(2.5)

Therefore, $f_\sigma(x,\varepsilon)$ is bounded below on the set

$$D' = \{x \in [u,v] \mid \|F(x)\| \le q^{-1/2} + \bar{\varepsilon}^\gamma\|w\|\},$$

whenever $f(x)$ is bounded below on the set $D'$. This is a reasonable condition since it usually holds when $f$ is bounded below on the feasible set, $\bar{\varepsilon}$ is small enough, and $q$ is large enough.

The denominator $1 - q\,\ell(x,\varepsilon)$ is included since it forces the level sets of $f_\sigma$ to remain in the set $\{(x,\varepsilon) \in \mathbb{R}^{n+1} \mid \ell(x,\varepsilon) < q^{-1}\}$, so that in some sense they do not go far away from the feasible set of (P).

Now consider a simple example:

$$\min\ x^7 \quad \text{s.t.}\ x^2 - 1 = 0,\ x \in \mathbb{R}.$$

It has a bounded feasible domain, a global minimizer at $x^* = -1$ with $f(x^*) = -1$, and a local minimizer $x = 1$. The traditional quadratic penalty function for this problem,

$$P(x) = x^7 + \frac{1}{\varepsilon}(x^2 - 1)^2,$$

is unbounded below for every penalty parameter $\varepsilon > 0$ since, e.g., $P(x) \to -\infty$ for $x = -s$, $s \to +\infty$. The same holds for other traditional penalty functions, including multiplier penalty functions that use an additional term $+\lambda(x^2 - 1)$. On the other hand, our new penalty function is bounded below. Setting $w = 1$, it reads

$$f_\sigma(x,\varepsilon) = \begin{cases} x^7 & \text{if } \varepsilon = r = 0, \\ x^7 + \dfrac{1}{\varepsilon^\alpha}\dfrac{r^2}{1 - qr^2} + \sigma\varepsilon^\beta & \text{if } \varepsilon > 0,\ |r| < q^{-1/2}, \\ +\infty & \text{otherwise,} \end{cases}$$

where $r = x^2 - 1 - \varepsilon^\gamma$. Since $f_\sigma(x,\varepsilon) = +\infty$ if $|x| \ge \sqrt{q^{-1/2} + 1 + \varepsilon^\gamma}$, it is obvious that our penalty function is bounded below. See Figure 1 for a contour plot of the penalty function for this example.

Figure 1. Contour of the penalty function for this example, with $\beta = 1$, $\gamma = 3$, $\alpha = 2$, and $\sigma = 2{,}000$.
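A quick numerical check of these claims (ours, not part of the paper), using the parameters of Figure 1 and, as an extra assumption, $q = 1$:

```python
import numpy as np

alpha, beta, gamma, q, sigma = 2.0, 1.0, 3.0, 1.0, 2000.0   # q = 1 is our assumption

def penalty(x, eps):
    """Penalty (2.3) for min x^7 s.t. x^2 - 1 = 0, with w = 1."""
    r = x**2 - 1.0 - eps**gamma        # so l(x, eps) = r^2
    if eps == 0.0:
        return x**7 if r == 0.0 else np.inf
    if eps > 0.0 and r**2 < 1.0 / q:
        return x**7 + r**2 / (eps**alpha * (1.0 - q * r**2)) + sigma * eps**beta
    return np.inf

print(penalty(-1.0, 0.0))    # -1.0: value at the global minimizer of the original problem
print(penalty(-1.05, 0.05))  # finite and > -1: an infeasible point pays a penalty
print(penalty(2.0, 0.1))     # inf: |r| >= q^{-1/2}, outside the level set
```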

3 Exactness of the penalty function

In this section, we show that our penalty function is exact in the sense that, under certain conditions, a local minimizer of our penalty function has the form $(x^*, \varepsilon^*)$ with $\varepsilon^* = 0$, where $x^*$ is a local minimizer of the original problem, and a converse proposition holds.

Firstly, recall the Mangasarian-Fromovitz condition. We say that the Mangasarian-Fromovitz condition (see [12]) for problem (P) holds at $x \in [u,v]$ if $F'(x)$ has full rank and there is a vector $p \in \mathbb{R}^n$ with $F'(x)p = 0$ and

$$p_i \begin{cases} > 0 & \text{if } x_i = u_i, \\ < 0 & \text{if } x_i = v_i. \end{cases}$$
(3.1)
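Although the paper gives no algorithm for it, condition (3.1) can be tested numerically: maximize $t$ subject to $F'(x)p = 0$, $p_i \ge t$ for lower-active indices, $-p_i \ge t$ for upper-active indices, and $|p_i| \le 1$; the condition holds iff $F'(x)$ has full rank and the optimal $t$ is strictly positive. A hedged sketch (function name and toy data are ours):

```python
import numpy as np
from scipy.optimize import linprog

def mf_holds(Fp, lower_active, upper_active, n):
    """LP test for (3.1): maximize t s.t. Fp @ p = 0, p_i >= t on lower-active
    indices, -p_i >= t on upper-active indices, |p_i| <= 1."""
    m = Fp.shape[0]
    if np.linalg.matrix_rank(Fp) < m:
        return False
    c = np.zeros(n + 1)
    c[-1] = -1.0                          # variables (p, t); minimize -t
    A_eq = np.hstack([Fp, np.zeros((m, 1))])
    rows = []
    for i in lower_active:                # encode t - p_i <= 0
        r = np.zeros(n + 1); r[i] = -1.0; r[-1] = 1.0; rows.append(r)
    for i in upper_active:                # encode t + p_i <= 0
        r = np.zeros(n + 1); r[i] = 1.0; r[-1] = 1.0; rows.append(r)
    A_ub = np.vstack(rows) if rows else None
    b_ub = np.zeros(len(rows)) if rows else None
    bounds = [(-1.0, 1.0)] * n + [(0.0, 1.0)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.zeros(m), bounds=bounds)
    return res.status == 0 and -res.fun > 1e-9

# Toy data (ours): one constraint in R^2, lower bound active in the second coordinate.
print(mf_holds(np.array([[2.0, 0.0]]), lower_active=[1], upper_active=[], n=2))  # True
```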

Theorem 3.1 Assume that the set $D'$ defined in Section 2 is bounded, and that the Mangasarian-Fromovitz condition holds at each $x \in D'$. Let $\gamma > \alpha \ge 2\beta \ge 2$ in $(P_\sigma)$. If $\sigma$ is sufficiently large, then there is no Kuhn-Tucker point $(x,\varepsilon)$ of $(P_\sigma)$ with $\varepsilon > 0$.

Proof Let the Lagrangian function of $(P_\sigma)$ for $\sigma > 0$ be

$$L(x,\varepsilon,y,z) = f_\sigma(x,\varepsilon) + \sum_{i=1}^n y_i(u_i - x_i) + \sum_{i=1}^n z_i(x_i - v_i) + y_{n+1}(-\varepsilon) + z_{n+1}(\varepsilon - \bar{\varepsilon}),$$

where $y_i, z_i$, $i = 1,\dots,n+1$, are the Lagrange multipliers. If $(x,\varepsilon)$ is a Kuhn-Tucker point of $(P_\sigma)$ with $\varepsilon > 0$, then there exist vectors $y, z \in \mathbb{R}^{n+1}$, $y, z \ge 0$, such that

$$\nabla_{(x,\varepsilon)} L(x,\varepsilon,y,z) = 0,$$
$$u_i - x_i \le 0,\quad x_i - v_i \le 0,\quad i = 1,\dots,n,\qquad -\varepsilon < 0,\quad \varepsilon - \bar{\varepsilon} \le 0,$$
$$y_i(u_i - x_i) = 0,\quad z_i(x_i - v_i) = 0,\quad z_{n+1}(\varepsilon - \bar{\varepsilon}) = 0,\quad y_{n+1}\varepsilon = 0;$$

hence

$$\nabla_{(x,\varepsilon)} f_\sigma(x,\varepsilon) = y - z$$

and

$$\min(y_i, x_i - u_i) = \min(z_i, v_i - x_i) = 0,\quad i = 1,\dots,n,\qquad y_{n+1} = \min(z_{n+1}, \bar{\varepsilon} - \varepsilon) = 0,$$

where $\nabla_{(x,\varepsilon)} f_\sigma(x,\varepsilon)$ is the gradient of $f_\sigma$ with respect to $(x,\varepsilon)$. The assertion of the theorem is proved by contradiction.

Assume that there exists a sequence $\{(x^k, \varepsilon_k, \sigma_k)\}$ with $\varepsilon_k > 0$ for all $k$ and $\sigma_k \to +\infty$ as $k \to \infty$, where $(x^k, \varepsilon_k)$ is a Kuhn-Tucker point of $(P_{\sigma_k})$. We use the abbreviation $\ell_k := \ell(x^k, \varepsilon_k)$. The point $x^k$ satisfies

$$\|F(x^k)\| \le \ell_k^{1/2} + \varepsilon_k^\gamma\|w\| \le q^{-1/2} + \bar{\varepsilon}^\gamma\|w\|,$$

hence $x^k \in D' = \{x \in [u,v] \mid \|F(x)\| \le q^{-1/2} + \bar{\varepsilon}^\gamma\|w\|\}$. Since $D'$ is closed and bounded, we may restrict ourselves to a subsequence if necessary and assume that

$$\lim_{k\to\infty} \varepsilon_k = \varepsilon^* \in [0, \bar{\varepsilon}] \quad\text{and}\quad \lim_{k\to\infty} x^k = x^* \in D'.$$

The condition $\partial_\varepsilon f_\sigma(x^k, \varepsilon_k) \le 0$, after multiplying by $\varepsilon_k^{\alpha+1}(1 - q\ell_k)^2 > 0$ and rearranging, yields

$$\alpha q \ell_k^2 + (2\gamma - \alpha)\varepsilon_k^{2\gamma}\|w\|^2 + 2(\alpha - \gamma)\varepsilon_k^\gamma \sum_{j=1}^m F_j(x^k) w_j + \beta\varepsilon_k^{\alpha+\beta}(1 - q\ell_k)^2 \sigma_k \le \alpha \sum_{j=1}^m F_j^2(x^k)$$
(3.2)

with equality in the case $\varepsilon_k \ne \bar{\varepsilon}$. Since $\sigma_k \to \infty$ while the right-hand side of (3.2), $\alpha\sum_{j=1}^m F_j^2(x^k)$, remains bounded, we have

$$\lim_{k\to\infty} \varepsilon_k = \varepsilon^* = 0 \quad\text{or}\quad \lim_{k\to\infty} \ell(x^k, \varepsilon_k) = \ell^* = q^{-1},$$
(3.3)

where $\ell^* = \ell(x^*, \varepsilon^*)$. On the other hand, the derivative of the penalty function $f_\sigma$ with respect to $x$ is given by

$$\partial_{x_i} f_\sigma(x^k,\varepsilon_k) = \partial_{x_i} f(x^k) + \frac{1}{\varepsilon_k^\alpha}\left(\frac{\partial_{x_i}\ell(x^k,\varepsilon_k)}{1 - q\ell_k} + \frac{q\ell_k\,\partial_{x_i}\ell(x^k,\varepsilon_k)}{(1 - q\ell_k)^2}\right) = \partial_{x_i} f(x^k) + \frac{1}{\varepsilon_k^\alpha}\frac{\partial_{x_i}\ell(x^k,\varepsilon_k)}{(1 - q\ell_k)^2} \begin{cases} \ge 0 & \text{if } x_i^k = u_i, \\ = 0 & \text{if } u_i < x_i^k < v_i, \\ \le 0 & \text{if } x_i^k = v_i. \end{cases}$$
(3.4)

(3.4) is equivalent to

$$\varepsilon_k^\alpha(1 - q\ell_k)^2\,\partial_{x_i} f(x^k) + \partial_{x_i}\ell(x^k,\varepsilon_k) = \varepsilon_k^\alpha(1 - q\ell_k)^2\,\partial_{x_i} f(x^k) + 2\big(F'(x^k)^T(F(x^k) - \varepsilon_k^\gamma w)\big)_i \begin{cases} \ge 0 & \text{if } x_i^k = u_i, \\ = 0 & \text{if } u_i < x_i^k < v_i, \\ \le 0 & \text{if } x_i^k = v_i. \end{cases}$$
(3.5)

Letting $k \to +\infty$, since $\varepsilon^* = 0$ or $\ell^* = q^{-1}$, we have

$$\big(F'(x^*)^T(F(x^*) - (\varepsilon^*)^\gamma w)\big)_i \begin{cases} \ge 0 & \text{if } x_i^* = u_i, \\ = 0 & \text{if } u_i < x_i^* < v_i, \\ \le 0 & \text{if } x_i^* = v_i. \end{cases}$$
(3.6)

Because $x^* \in D'$, the Mangasarian-Fromovitz condition holds at $x^*$, so there exists a vector $p \in \mathbb{R}^n$ with $F'(x^*)p = 0$ satisfying (3.1). Let $I_1 := \{i \mid x_i^* = u_i\}$, $I_2 := \{i \mid x_i^* = v_i\}$, and $\bar{F} := F(x^*) - (\varepsilon^*)^\gamma w$. Then by (3.6) (the components with $u_i < x_i^* < v_i$ vanish), we have

$$0 = (F'(x^*)p)^T \bar{F} = \sum_{i \in I_1} p_i (F'(x^*)^T \bar{F})_i + \sum_{i \in I_2} p_i (F'(x^*)^T \bar{F})_i.$$
(3.7)

(3.7) and the Mangasarian-Fromovitz condition (3.1) imply $(F'(x^*)^T \bar{F})_i = 0$ for $i \in I_1 \cup I_2$. Thus $F'(x^*)^T \bar{F} = 0$. Now the fact that $F'(x^*)$ has full rank yields $\bar{F} = 0$, i.e.,

$$F(x^*) - (\varepsilon^*)^\gamma w = 0,$$

and by $\bar{F} = \lim_{k\to\infty}(F(x^k) - \varepsilon_k^\gamma w) = 0$, it follows that

$$\lim_{k\to\infty} \ell(x^k, \varepsilon_k) = \lim_{k\to\infty} \|F(x^k) - \varepsilon_k^\gamma w\|^2 = \ell^* = 0.$$

By (3.3), since $\ell^* = 0 \ne q^{-1}$, we obtain

$$\lim_{k\to\infty} \varepsilon_k = \varepsilon^* = 0.$$

Thus, $\lim_{k\to\infty} F(x^k) = F(x^*) = 0$.

Furthermore, dividing (3.2) by $\alpha\varepsilon_k^{\alpha+\beta}$, it holds that

$$\frac{q}{\varepsilon_k^{\alpha+\beta}}\ell^2(x^k,\varepsilon_k) + \Big(\frac{2\gamma}{\alpha} - 1\Big)\varepsilon_k^{2\gamma-\alpha-\beta}\|w\|^2 + 2\Big(1 - \frac{\gamma}{\alpha}\Big)\varepsilon_k^{\gamma-(\alpha+\beta)}\sum_{i=1}^m F_i(x^k)w_i + \frac{\beta}{\alpha}(1 - q\ell_k)^2\sigma_k \le \frac{\sum_{i=1}^m F_i^2(x^k)}{\varepsilon_k^{\alpha+\beta}}.$$

Letting $k \to \infty$, the last term on the left-hand side tends to $+\infty$ (since $\ell_k \to 0$ and $\sigma_k \to +\infty$). Thus the vectors $y^k := F(x^k)/\varepsilon_k^{(\alpha+\beta)/2}$ satisfy $\|y^k\| \to +\infty$. The vectors $z^k = y^k/\|y^k\|$ have norm 1, and (3.5) implies that the numbers $\mu_i^k$ ($i = 1,\dots,n$), defined by

$$\mu_i^k = \frac{1}{\|y^k\|}\partial_{x_i} f(x^k) + \frac{1}{\|y^k\|}\frac{2}{\varepsilon_k^\alpha(1 - q\ell_k)^2}\big(F'(x^k)^T(F(x^k) - \varepsilon_k^\gamma w)\big)_i = \frac{1}{\|y^k\|}\partial_{x_i} f(x^k) + \frac{2\varepsilon_k^{-\frac{\alpha-\beta}{2}}}{(1 - q\ell_k)^2}\Big(F'(x^k)^T\Big(z^k - \varepsilon_k^{\gamma-\frac{\alpha+\beta}{2}}\frac{w}{\|y^k\|}\Big)\Big)_i,$$

satisfy

$$\mu_i^k \begin{cases} \ge 0 & \text{if } x_i^k = u_i, \\ = 0 & \text{if } u_i < x_i^k < v_i, \\ \le 0 & \text{if } x_i^k = v_i. \end{cases}$$

If we pick a convergent subsequence $\{z^{n_k}\}$ with limit $z^*$ and pass to the limit, we obtain

$$(F'(x^*)^T z^*)_i \begin{cases} \ge 0 & \text{if } x_i^* = u_i, \\ = 0 & \text{if } u_i < x_i^* < v_i, \\ \le 0 & \text{if } x_i^* = v_i. \end{cases}$$

Now, similarly as above, this yields $z^* = 0$, which contradicts $\|z^*\| = 1$. Thus such a sequence $\{(x^k, \varepsilon_k, \sigma_k)\}$ cannot exist, and for sufficiently large $\sigma > 0$, all Kuhn-Tucker points of $(P_\sigma)$ are of the form $(x, 0)$. □

Theorem 3.2 Assume that $(x^*, \varepsilon^*)$ is a local minimizer of $(P_\sigma)$ with finite $f_\sigma(x^*, \varepsilon^*)$, where $\sigma > 0$ is sufficiently large. If the hypotheses of Theorem 3.1 are fulfilled, then $x^*$ is a local minimizer of (P).

Proof Let $(x^*, \varepsilon^*)$ be a local minimizer of $(P_\sigma)$ with finite $f_\sigma(x^*, \varepsilon^*)$, where $\sigma > 0$ is sufficiently large. If $\varepsilon^* > 0$, then $(x^*, \varepsilon^*)$ must be a Kuhn-Tucker point of $(P_\sigma)$, which contradicts Theorem 3.1. Therefore $\varepsilon^* = 0$, and since $f_\sigma(x^*, \varepsilon^*)$ is finite, $\ell(x^*, \varepsilon^*) = 0$. This implies that $F(x^*) = 0$, and by (2.5) there is a neighborhood $N(x^*)$ of $x^*$ where $f(x) \ge f(x^*)$ for feasible $x$. Therefore, $x^*$ is a local minimizer of (P). □

We now show a converse result of Theorem 3.2, which will use the following lemmas.

Lemma 3.1 Suppose $F(x^*) = 0$ and $F'(x^*)$ has full rank. Then there exist a neighborhood $N_0(x^*)$ of $x^*$ and a constant $\kappa_0 > 0$ such that for each $x \in N_0(x^*)$ and each subset $J$ of $\{1,2,\dots,m\}$, there exists a vector $y = y(x) \in N_0(x^*)$ with $F_i(y) = 0$ for $i \in J$ and $F_i(y) = F_i(x)$ for $i \in K = \{1,2,\dots,m\}\setminus J$, such that

$$\|x - y\| \le \kappa_0 \|F_J(x)\|.$$

Proof Since $F(x^*) = 0$ and $F'(x^*)$ has full rank, there exists a matrix $B \in \mathbb{R}^{(n-m)\times n}$ such that the augmented matrix $\binom{F'(x^*)}{B}$ is nonsingular. By the continuity of $F'(\cdot)$ at $x^*$, there exists a neighborhood $N_1(x^*) \subseteq D$ of $x^*$ such that $\binom{F'(x)}{B}$ is nonsingular for any $x \in N_1(x^*)$. Take for $\mathcal{A}$ the closed convex hull of $\{F'(x) \mid x \in N_1(x^*)\}$; then for all $A \in \mathcal{A}$, the matrix $\binom{A}{B}$ is nonsingular. We now show that for any $x, y \in N_1(x^*)$, there exists a matrix $A \in \mathcal{A}$ such that

$$F(x) - F(y) = A(x - y).$$
(3.8)

In fact, given $x, y \in N_1(x^*)$, it follows from the mean value theorem that

$$F(x) - F(y) = \int_0^1 F'(y + s(x-y))(x-y)\,ds = A_{x,y}(x-y),$$

where $A_{x,y} = \int_0^1 F'(y + s(x-y))\,ds \in \mathcal{A}$, so (3.8) holds. Define the mapping $H(z) := \binom{F(z)}{B(z-x)}$ for $z \in N_1(x^*)$. By the proof of [10, Theorem 4.5], there exists a neighborhood $N_0(x^*) \subseteq N_1(x^*)$ of $x^*$ such that for each $x \in N_0(x^*)$ and each subset $J$ of $\{1,2,\dots,m\}$, there exists a vector $y = y(x) \in N_0(x^*)$ with

$$H(y) = \binom{F(y)}{B(y-x)} = \begin{pmatrix} 0 \\ F_K(x) \\ B(x-x) \end{pmatrix},$$

so $F_i(y) = 0$ for $i \in J$ and $F_i(y) = F_i(x)$ for $i \in K$.

For $x, y \in N_0(x^*)$, we have

$$H(x) - H(y) = \binom{A}{B}(x - y)$$

for some $A \in \mathcal{A}$. On the other hand, we have

$$\|H(x) - H(y)\| = \|F_J(x)\|.$$
(3.9)

Therefore, combining (3.8) with (3.9), we have

$$\|x - y\| \le \left\|\binom{A}{B}^{-1}\right\| \|F_J(x)\| \le \kappa_0\|F_J(x)\|,$$

where

$$\kappa_0 := \sup_{A\in\mathcal{A}} \left\|\binom{A}{B}^{-1}\right\| < +\infty,$$

and this completes the proof. □
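To make the construction concrete, here is a small numerical sketch (our own illustration, not from the paper): we embed the example constraint of Section 2 in $\mathbb{R}^2$ via $F(x) = x_1^2 - 1$ (so $m = 1 < n = 2$), take $B = (0\ \ 1)$, and solve $H(y) = 0$ with $J = \{1\}$ by a root finder:

```python
import numpy as np
from scipy.optimize import fsolve

B = np.array([[0.0, 1.0]])              # makes (F'(x*); B) nonsingular at x* = (-1, 0)

def F(z):
    return np.array([z[0]**2 - 1.0])    # illustrative constraint, m = 1 < n = 2

def H_factory(x):
    # H(z) = (F(z); B (z - x)); a root y of H gives F(y) = 0 and keeps y_2 = x_2.
    return lambda z: np.concatenate([F(z), (B @ (z - x)).ravel()])

x = np.array([-1.1, 0.3])               # a point near x* = (-1, 0)
y = fsolve(H_factory(x), x)             # converges to roughly (-1.0, 0.3)
print(y, np.linalg.norm(x - y) / np.linalg.norm(F(x)))  # ratio ~ 0.48, a valid kappa_0
```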

Lemma 3.2 Assume that $x^*$ is a local minimizer of problem (2.1) with $u_i < x_i^* < v_i$, $i \in \{1,\dots,p\}$, where $m \le p \le n$. Suppose that $F'(x^*)$ has full rank. Then there exists a constant $\kappa_1 > 0$ such that

$$f(x) \ge f(x^*) - \kappa_1\|F(x)\| \quad\text{for all } x \in N_0(x^*),$$

where $N_0(x^*)$ is defined in Lemma 3.1.

Proof Let $N_0(x^*)$ and $\kappa_0$ be as in Lemma 3.1, and let $x \in N_0(x^*)$. Then by Lemma 3.1 with $J = \{1,\dots,m\}$, there exists a $y = y(x)$ with $F(y) = 0$ and $y_i = x_i$, $i = m+1,\dots,n$, such that

$$\|x - y\| \le \kappa_0\|F(x)\|.$$
(3.10)

So $y$ is a feasible point of problem (2.1), and $f(y) \ge f(x^*)$. Since $f$ is continuously differentiable, for any $x, y \in N_0(x^*)$ there exists a vector $\xi \in \mathbb{R}^n$ such that

$$f(x) - f(y) = \nabla f(\xi)^T(x - y),$$

where $\xi$ lies in the segment between $x$ and $y$. Setting $L := \sup_{z \in N_0(x^*)} \|\nabla f(z)\|$, we have

$$|f(x) - f(y)| \le \|\nabla f(\xi)\|\,\|x - y\| \le L\|x - y\| \le L\kappa_0\|F(x)\|,$$

where the last inequality follows from (3.10).

Let $\kappa_1 = L\kappa_0$; then

$$f(x) = f(x) - f(y) + f(y) \ge f(x^*) - \kappa_1\|F(x)\|,$$

which completes the proof. □
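As a sanity check of this error bound on the example of Section 2 (our own numerics, with an assumed constant): near $x^* = -1$ we have $f(x) = x^7$ and $\|F(x)\| = |x^2 - 1|$, and the bound $f(x) \ge f(x^*) - \kappa_1\|F(x)\|$ holds with, e.g., $\kappa_1 = 6$ on $[-1.1, -0.9]$:

```python
import numpy as np

kappa1 = 6.0                    # assumed constant; any kappa1 >= L*kappa0 works locally
xs = np.linspace(-1.1, -0.9, 2001)
print(bool(np.all(xs**7 >= -1.0 - kappa1 * np.abs(xs**2 - 1.0))))  # True
```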

Theorem 3.3 If $x^*$ is a local minimizer of problem (P) with $u_i < x_i^* < v_i$, $i \in \{1,\dots,p\}$, where $m \le p \le n$, and $F'(x^*)$ has full rank, then for sufficiently large $\sigma > 0$ there are a neighborhood $N(x^*)$ of $x^*$ and an $\varepsilon' \in (0, \bar{\varepsilon}]$ such that

$$f_\sigma(x,\varepsilon) > f_\sigma(x^*, 0) = f(x^*) \quad\text{for all } (x,\varepsilon) \in N(x^*) \times (0, \varepsilon'].$$

In particular, $(x^*, 0)$ is a local minimizer of $f_\sigma$.

Proof Let $N(x^*) \subseteq N_0(x^*)$ be a neighborhood of $x^*$ such that

$$\sup_{x \in N(x^*)} \big(f(x^*) - f(x)\big) < 1,$$
(3.11)

where $N_0(x^*)$ is defined in Lemma 3.1.

For $(x,\varepsilon) \in N(x^*) \times (0, \varepsilon']$, where $\varepsilon' \in (0, \bar{\varepsilon}]$ and $\varepsilon' \le 1$, we distinguish two cases.

Case 1. $\ell(x,\varepsilon) = \|F(x) - \varepsilon^\gamma w\|^2 \ge \varepsilon^\alpha$. In this case, since $\frac{1}{\varepsilon^\alpha}\frac{\ell(x,\varepsilon)}{1 - q\ell(x,\varepsilon)} \ge 1$, we have

$$f_\sigma(x,\varepsilon) \ge f(x) + 1 + \sigma\varepsilon^\beta \ge f(x^*) + \sigma\varepsilon^\beta > f(x^*) = f_\sigma(x^*, 0),$$

where the second inequality is by (3.11).

Case 2. $\ell(x,\varepsilon) < \varepsilon^\alpha$. Then $\|F(x)\| \le \ell(x,\varepsilon)^{1/2} + \varepsilon^\gamma\|w\| \le \varepsilon^{\alpha/2} + \varepsilon^\gamma\|w\|$, and

$$f_\sigma(x,\varepsilon) \ge f(x) + \sigma\varepsilon^\beta \ge f(x^*) - \kappa_1\|F(x)\| + \sigma\varepsilon^\beta \ge f(x^*) - \kappa_1\big(\varepsilon^{\alpha/2} + \varepsilon^\gamma\|w\|\big) + \sigma\varepsilon^\beta \ge f(x^*) + \big(\sigma - \kappa_1(1 + \varepsilon^{\gamma-\alpha/2})\|w\|\big)\varepsilon^{\alpha/2}.$$
(3.12)

The last inequality holds since $\beta \le \alpha/2$.

Taking $\sigma > \kappa_1(1 + (\varepsilon')^{\gamma-\alpha/2})\|w\|$, we get

$$f_\sigma(x,\varepsilon) > f(x^*) = f_\sigma(x^*, 0).$$

From Case 1 and Case 2, we obtain the conclusion. □
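A numerical illustration of Theorem 3.3 on the example of Section 2 (our own check, again with $q = 1$ assumed): sampling $(x,\varepsilon)$ with $x$ near the global minimizer $x^* = -1$ and $\varepsilon \in (0, 0.1)$ confirms $f_\sigma(x,\varepsilon) > f(x^*) = -1$:

```python
import numpy as np

alpha, beta, gamma, q, sigma = 2.0, 1.0, 3.0, 1.0, 2000.0  # Figure 1 parameters, q assumed

def f_sigma(x, eps):
    r = x**2 - 1.0 - eps**gamma
    if eps <= 0.0 or r**2 >= 1.0 / q:
        return np.inf                  # only the eps > 0 branch matters for this check
    return x**7 + r**2 / (eps**alpha * (1.0 - q * r**2)) + sigma * eps**beta

rng = np.random.default_rng(0)
xs = -1.0 + 0.1 * (rng.random(10000) - 0.5)   # x in (-1.05, -0.95)
es = 1e-6 + (0.1 - 1e-6) * rng.random(10000)  # eps in (0, 0.1)
print(all(f_sigma(x, e) > -1.0 for x, e in zip(xs, es)))   # True
```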

4 Concluding remarks

In this paper, a modified exact penalty function for the equality constrained nonlinear programming problem is constructed by augmenting the program with a new variable that controls the constraint violation. This function enjoys smoothness, and under very mild conditions it is proved to be an exact penalty function.

Since many applied problems are nonsmooth in practice, it is meaningful to extend the results in this paper to the nonsmooth case. By using the limiting subgradients presented in the two books by Mordukhovich [14, 15], as well as Clarke's generalized gradients in [5], we can extend the penalty function with the mentioned good properties to nonsmooth optimization problems, just as has been done in [17–19]. That will be our future research direction.

References

  1. Antczak T: Exact penalty functions method for mathematical programming problems involving invex functions. Eur. J. Oper. Res. 2009, 198: 29–36.
  2. Bazaraa MS, Sherali HD, Shetty CM: Nonlinear Programming: Theory and Algorithms. 2nd edition. Wiley, New York; 1993.
  3. Bertsekas DP: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York; 1982.
  4. Boukari D, Fiacco AV: Survey of penalty, exact-penalty and multiplier methods from 1968 to 1993. Optimization 1995, 32: 301–334.
  5. Clarke FH: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York; 1983.
  6. Di Pillo G: Exact penalty methods. In Algorithms for Continuous Optimization. Edited by Spedicato E. Kluwer Academic, Dordrecht; 1994: 209–253.
  7. Fletcher R: Practical Methods of Optimization. 2nd edition. Wiley, New York; 1987.
  8. Han SP, Mangasarian OL: Exact penalty functions in nonlinear programming. Math. Program. 1979, 17: 251–269.
  9. Hoheisel T, Kanzow C, Outrata J: Exact penalty results for mathematical programs with vanishing constraints. Nonlinear Anal. 2010, 72: 2514–2526.
  10. Huyer W, Neumaier A: A new exact penalty function. SIAM J. Optim. 2003, 13: 1141–1158.
  11. Li W, Peng J: Exact penalty functions for constrained minimization problems via regularized gap function for variational inequality. J. Glob. Optim. 2007, 37: 85–94.
  12. Mangasarian OL, Fromovitz S: The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. J. Math. Anal. Appl. 1967, 17: 37–47.
  13. Maratos N: Exact penalty function algorithms for finite dimensional and control optimization problems. PhD thesis, University of London; 1978.
  14. Mordukhovich BS: Variational Analysis and Generalized Differentiation, I: Basic Theory. Grundlehren der mathematischen Wissenschaften 330. Springer, Berlin; 2006.
  15. Mordukhovich BS: Variational Analysis and Generalized Differentiation, II: Applications. Grundlehren der mathematischen Wissenschaften 331. Springer, Berlin; 2006.
  16. Nocedal J, Wright SJ: Numerical Optimization. Springer, New York; 1999.
  17. Soleimani-damaneh M: The gap function for optimization problems in Banach spaces. Nonlinear Anal. 2008, 69: 716–723.
  18. Soleimani-damaneh M: Penalization for variational inequalities. Appl. Math. Lett. 2009, 22: 347–350.
  19. Soleimani-damaneh M: Nonsmooth optimization using Mordukhovich’s subdifferential. SIAM J. Control Optim. 2010, 48: 3403–3432.
  20. Zangwill WI: Nonlinear programming via penalty functions. Manag. Sci. 1967, 13: 344–358.
  21. Zaslavski AJ: A sufficient condition for exact penalty in constrained optimization. SIAM J. Optim. 2005, 16: 250–262.
  22. Zaslavski AJ: Stability of exact penalty for nonconvex inequality-constrained minimization problems. Taiwan. J. Math. 2010, 14: 1–19.


Acknowledgements

The authors wish to thank the anonymous referees for their efforts and valuable comments. The authors would also like to thank Professor Zhang Liansheng for some very helpful comments on a preliminary version of this paper. This research was supported by the National Natural Science Foundation of China under Grants 10971118 and 11101248, the Natural Science Foundation of Shandong Province under Grant ZR2012AM016, and Foundation 4041-409012 of Shandong University of Technology.

Author information

Correspondence to Liu Bingzhuang.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Bingzhuang, L., Wenling, Z. A modified exact smooth penalty function for nonlinear constrained optimization. J Inequal Appl 2012, 173 (2012). https://doi.org/10.1186/1029-242X-2012-173
