New cautious BFGS algorithm based on modified Armijo-type line search

Abstract

In this paper, a new inexact line search rule is presented, which is a modified version of the classical Armijo line search rule. With a lower computational cost, a larger descent magnitude of the objective function is obtained at every iteration. In addition, the initial step size in the modified line search is adjusted automatically at each iteration. On the basis of this line search, a new cautious Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is developed. Under some mild assumptions, the global convergence of the algorithm is established for nonconvex optimization problems. Numerical results demonstrate that the proposed method is promising, especially in comparison with existing methods.

1 Introduction

Consider the following unconstrained optimization problem:

$$\min_{x \in \mathbb{R}^{n}} f(x),$$
(1)

where $f: \mathbb{R}^{n} \to \mathbb{R}$ is a twice continuously differentiable function.

Among the various methods for solving problem (1), it is well known that the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method has achieved great success, both in theoretical research and in engineering applications; see, for example, [1–11] and the references therein.

In summary, in the framework of the BFGS method, a quasi-Newton direction $d_k$ at the current iterate $x_k$ is first obtained by solving the following linear system of equations:

$$B_k d_k = -g_k,$$
(2)

where $B_k$ is a given positive definite matrix, $g: \mathbb{R}^{n} \to \mathbb{R}^{n}$ is the gradient function of $f$, and $g_k$ is the value of $g$ at $x_k$. At the new iterate $x_{k+1}$, $B_k$ is updated by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k},$$
(3)

where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.

Next, along the search direction $d_k$, we choose a suitable stepsize $\alpha_k$ by employing some line search strategy. Thus, the iterate $x_k$ is updated by

$$x_{k+1} = x_k + \alpha_k d_k.$$
(4)
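
To make the framework (2)-(4) concrete, the following minimal Python sketch performs one BFGS iteration with the standard update (3). The gradient handle `grad` and the externally supplied step size `alpha_k` are illustrative assumptions; the line search that produces the step size is discussed below.

```python
import numpy as np

def bfgs_step(x_k, B_k, grad, alpha_k):
    """One iteration of the basic BFGS framework (2)-(4) with the update (3)."""
    g_k = grad(x_k)
    d_k = np.linalg.solve(B_k, -g_k)      # quasi-Newton direction, eq. (2)
    x_next = x_k + alpha_k * d_k          # iterate update, eq. (4)
    s_k = x_next - x_k
    y_k = grad(x_next) - g_k
    Bs = B_k @ s_k
    # Standard BFGS update, eq. (3)
    B_next = B_k - np.outer(Bs, Bs) / (s_k @ Bs) + np.outer(y_k, y_k) / (y_k @ s_k)
    return x_next, B_next
```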

Actually, it has been reported that the choice of a suitable line search rule is important for the efficiency and convergence of the BFGS method (see, for example, [7, 12–15] and [16]). It is well known that the Armijo line search is the cheapest and most popular method to obtain a step length among all line search methods. However, when the Armijo line search is used to find a step length $\alpha_k$, the matrix $B_{k+1}$ in (3) may not be positive definite even if $B_k$ is positive definite [17]. To remedy this, a cautious BFGS method (CBFGS) associated with the Armijo line search was presented in [5] for solving nonconvex unconstrained optimization problems. The update formula of $B_k$ in [5] is

$$B_{k+1} = \begin{cases} B_k - \dfrac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \dfrac{y_k y_k^{T}}{y_k^{T} s_k}, & \text{if } \dfrac{y_k^{T} s_k}{\|s_k\|^{2}} \ge \epsilon \|g_k\|^{\gamma}, \\ B_k, & \text{otherwise}, \end{cases}$$
(5)

where ϵ and γ are positive constants.
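
A minimal Python sketch of the cautious rule (5) follows, assuming NumPy arrays; the default values of `eps` and `gamma` below are placeholders rather than the settings used in the experiments. The matrix is updated only when the curvature test in (5) is passed.

```python
import numpy as np

def cautious_bfgs_update(B_k, s_k, y_k, g_k, eps=1e-6, gamma=0.01):
    """Cautious BFGS update, eq. (5): skip the update if the curvature test fails."""
    if y_k @ s_k >= eps * np.linalg.norm(g_k) ** gamma * (s_k @ s_k):
        Bs = B_k @ s_k
        return B_k - np.outer(Bs, Bs) / (s_k @ Bs) + np.outer(y_k, y_k) / (y_k @ s_k)
    return B_k  # keep the old matrix otherwise
```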

In this paper, we shall first present a modified Armijo-type line search rule. Then, on the basis of this line search, a new cautious BFGS algorithm is developed. It will be shown that in our line search, a larger descent magnitude of the objective function is obtained with a lower computational cost at every iteration. In addition, the initial step size is adjusted automatically at each iteration.

The rest of this paper is organized as follows. In Section 2, a modified Armijo-type inexact line search rule is presented and a new cautious BFGS algorithm is developed. Section 3 is devoted to establishing the global convergence of the proposed algorithm under some suitable assumptions. In Section 4, numerical results are reported to demonstrate the efficiency of the algorithm. Some conclusions are given in the last section.

2 Modified Armijo-type line search rule and new cautious BFGS algorithm

The classical Armijo line search is to find $\alpha_k$ such that the following inequality holds:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \sigma_1 \alpha_k g_k^{T} d_k,$$
(6)

where $\sigma_1 \in (0,1)$ is a given constant. In a practical implementation, $\alpha_k$ in (6) is obtained by searching the set $\{\beta, \beta\rho, \beta\rho^{2}, \ldots\}$ for the largest element satisfying (6), where $\rho \in (0,1)$ and $\beta > 0$ are given constants.
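
In code, the backtracking procedure just described is a short loop. The sketch below is a minimal Python illustration of rule (6); the iteration cap `max_backtracks` is an added safeguard, not part of the original description.

```python
def armijo_line_search(f, x_k, g_k, d_k, beta=1.0, rho=0.5, sigma1=1e-4, max_backtracks=50):
    """Classical Armijo backtracking, eq. (6): try beta, beta*rho, beta*rho^2, ..."""
    f_k = f(x_k)
    gTd = g_k @ d_k      # directional derivative, negative for a descent direction
    alpha = beta
    for _ in range(max_backtracks):
        if f(x_k + alpha * d_k) <= f_k + sigma1 * alpha * gTd:
            return alpha
        alpha *= rho
    return alpha         # fallback if the backtracking budget is exhausted
```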

Compared with other line search methods, the Armijo line search has the simplest implementation, and the computational cost of finding a feasible stepsize is very low, especially when $\rho \in (0,1)$ is close to 0. Its drawback is that only a small reduction of the objective function may be obtained at each iteration.

Inspired by this observation, we present a modified Armijo-type line search (MALS) rule as follows. Suppose that $g$ is a Lipschitz continuous function, let $L$ be the Lipschitz constant, and let $L_k$ be an approximation of $L$. Set

$$\beta_k = \frac{-g_k^{T} d_k}{L_k \|d_k\|^{2}}.$$

Different from the classical Armijo line search (6), we find a step size $\alpha_k$ as the largest element of the set $\{\beta_k, \beta_k\rho, \beta_k\rho^{2}, \ldots\}$ such that the inequality

$$f(x_k + \alpha_k d_k) \le f(x_k) + \sigma \alpha_k \left(g_k^{T} d_k - \tfrac{1}{2}\alpha_k \mu L_k \|d_k\|^{2}\right)$$
(7)

holds, where $\sigma \in (0,1)$, $\mu \in [0,+\infty)$ and $\rho \in (0,1)$ are given constants.
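
The MALS differs from the classical rule only in the automatically chosen initial trial step $\beta_k$ and in the extra quadratic term of the acceptance test (7). A minimal Python sketch, with the same added backtracking cap as before, might read:

```python
def mals(f, x_k, g_k, d_k, L_k, rho=0.3, sigma=0.2, mu=1.0, max_backtracks=50):
    """Modified Armijo-type line search (MALS), eq. (7), started from beta_k."""
    f_k = f(x_k)
    gTd = g_k @ d_k
    dnorm2 = d_k @ d_k
    alpha = -gTd / (L_k * dnorm2)     # automatic initial step size beta_k
    for _ in range(max_backtracks):
        rhs = f_k + sigma * alpha * (gTd - 0.5 * alpha * mu * L_k * dnorm2)
        if f(x_k + alpha * d_k) <= rhs:
            return alpha
        alpha *= rho
    return alpha
```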

In the following proposition, we show that the new line search (7) is well defined.

Proposition 1 Let $f: \mathbb{R}^{n} \to \mathbb{R}$ be a continuously differentiable function. Suppose that the gradient function $g$ of $f$ is Lipschitz continuous. Let $L_k > 0$ be an approximation of the Lipschitz constant. If $d_k$ is a descent direction of $f$ at $x_k$, then there is an $\alpha > 0$ in the set $\{\beta_k, \beta_k\rho, \beta_k\rho^{2}, \ldots\}$ such that the following inequality holds:

$$f(x_k + \alpha d_k) \le f(x_k) + \sigma\alpha \left(g_k^{T} d_k - \tfrac{1}{2}\alpha \mu L_k \|d_k\|^{2}\right),$$
(8)

where $\sigma \in (0,1)$, $\mu \in [0,+\infty)$ and $\rho \in (0,1)$ are given constants.

Proof In fact, we only need to prove that a step length $\alpha$ is obtained in finitely many backtracking steps. If this is not true, then for every sufficiently large positive integer $m$ we have

$$f(x_k + \beta_k\rho^{m} d_k) - f(x_k) > \sigma\beta_k\rho^{m} \left(g_k^{T} d_k - \tfrac{1}{2}\beta_k\rho^{m}\mu L_k \|d_k\|^{2}\right).$$
(9)

By the mean value theorem, there is a $\theta_k \in (0,1)$ such that

$$\beta_k\rho^{m}\, g(x_k + \theta_k\beta_k\rho^{m} d_k)^{T} d_k > \sigma\beta_k\rho^{m} \left(g_k^{T} d_k - \tfrac{1}{2}\beta_k\rho^{m}\mu L_k \|d_k\|^{2}\right).$$
(10)

Thus,

$$\left(g(x_k + \theta_k\beta_k\rho^{m} d_k) - g_k\right)^{T} d_k > (\sigma - 1)\, g_k^{T} d_k - \tfrac{1}{2}\sigma\beta_k\rho^{m}\mu L_k \|d_k\|^{2}.$$
(11)

Letting $m \to \infty$, we obtain

$$(\sigma - 1)\, g_k^{T} d_k \le 0.$$

Since $\sigma \in (0,1)$, it follows that $g_k^{T} d_k \ge 0$. This contradicts the fact that $d_k$ is a descent direction. □

Remark 1 Since the third term on the right-hand side of (7), namely $-\tfrac{1}{2}\sigma\alpha_k^{2}\mu L_k \|d_k\|^{2}$, is nonpositive, it is easy to see that the obtained step size $\alpha_k$ ensures a larger descent magnitude of the objective function than that in (6).

It is noted that (7) reduces to (6) (with $\sigma_1 = \sigma$) when $\mu = 0$.

Remark 2 In the MALS, the parameter $L_k$ should be estimated at each iteration. In this paper, for $k > 1$, we choose

$$L_k = \frac{s_{k-1}^{T} y_{k-1}}{\|s_{k-1}\|^{2}},$$
(12)

where $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$. Actually, $L_k$ in (12) is the solution of the minimization problem

$$\min_{L} \|L s_{k-1} - y_{k-1}\|.$$
(13)

Indeed, setting the derivative of $\|L s_{k-1} - y_{k-1}\|^{2}$ with respect to $L$ to zero yields (12). Therefore, it is acceptable to regard $L_k$ as an approximation of $L$.
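
A direct implementation of the estimate (12) is a one-liner, assuming NumPy arrays. In the sketch below, the lower bound `L_min` is an added safeguard (the analysis only requires $L_k > 0$) and is not part of formula (12).

```python
def estimate_lipschitz(x_k, x_prev, g_k, g_prev, L_min=1e-8):
    """Estimate L_k by eq. (12), guarded away from zero by L_min."""
    s = x_k - x_prev
    y = g_k - g_prev
    return max((s @ y) / (s @ s), L_min)
```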

Based on Proposition 1 and Remarks 1 and 2, a new cautious BFGS algorithm is developed for solving problem (1).

Algorithm 1 (New cautious BFGS algorithm)

Step 1. Choose an initial point $x_0 \in \mathbb{R}^{n}$ and a positive definite matrix $B_0$. Choose $\sigma \in (0,1)$, $\mu \ge 0$, $\epsilon > 0$ and $L_0 > 0$. Set $k := 0$.

Step 2. If $\|g_k\| \le \epsilon$, the algorithm stops. Otherwise, go to Step 3.

Step 3. Find $d_k$ as the solution of the following system of linear equations:

$$B_k d_k = -g_k.$$

Step 4. Determine a step size $\alpha_k$ satisfying (7).

Step 5. Set $x_{k+1} := x_k + \alpha_k d_k$. Compute $s_k$ and $y_k$. Update $B_k$ to $B_{k+1}$ by (5). Set $k := k+1$ and return to Step 2.
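
Putting Steps 1-5 together, the sketch below assembles Algorithm 1 from the helper functions sketched earlier (`mals`, `cautious_bfgs_update` and `estimate_lipschitz`); the iteration limit and the choice $B_0 = I$ are illustrative assumptions consistent with the experiments reported below.

```python
import numpy as np

def ncbfgs(f, grad, x0, eps=1e-6, L0=1.0, max_iter=1000):
    """A sketch of Algorithm 1 (new cautious BFGS) built from the helpers above."""
    x_k = np.asarray(x0, dtype=float)
    B_k = np.eye(x_k.size)                               # Step 1: B_0 = I
    L_k = L0
    g_k = grad(x_k)
    for _ in range(max_iter):
        if np.linalg.norm(g_k) <= eps:                   # Step 2: stopping test
            break
        d_k = np.linalg.solve(B_k, -g_k)                 # Step 3: B_k d_k = -g_k
        alpha_k = mals(f, x_k, g_k, d_k, L_k)            # Step 4: modified Armijo-type search
        x_next = x_k + alpha_k * d_k                     # Step 5: iterate update
        g_next = grad(x_next)
        s_k, y_k = x_next - x_k, g_next - g_k
        B_k = cautious_bfgs_update(B_k, s_k, y_k, g_k)   # cautious update (5)
        L_k = estimate_lipschitz(x_next, x_k, g_next, g_k)  # refresh L_k via (12)
        x_k, g_k = x_next, g_next
    return x_k
```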

3 Global convergence

In this section, we are going to prove the global convergence of Algorithm 1.

We need the following conditions.

Assumption 1

  1. The level set $\Omega = \{x \in \mathbb{R}^{n} \mid f(x) \le f(x_0)\}$ is bounded.

  2. In some neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous, namely, there exists a constant $L > 0$ such that

    $$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N.$$
    (14)

  3. The sequence $\{L_k\}$ satisfies $0 < L_k \le ML$, where $M$ is a positive constant.

Before the statement of the global convergence, we first prove the following useful lemmas.

Lemma 1 Let $\{x_k\}$ be a sequence generated by Algorithm 1. If $L_k > 0$ for each $k \ge 0$, then for any given initial point $x_0$, the following results hold:

  1. $\{f_k\}$ is a decreasing sequence.

  2. $\{x_k\} \subset \Omega$.

  3. $\sum_{k=1}^{\infty} (f_k - f_{k+1}) < +\infty$.

Proof The first and second results follow directly from the condition $L_k > 0$ for each $k \ge 0$, Proposition 1 and the definition of $\Omega$. We only need to prove the third result.

Since $\{f_k\}$ is a decreasing sequence and is bounded below, it is clear that there exists a constant $f^{*}$ such that

$$\lim_{k \to \infty} f_k = f^{*}.$$
(15)

From (15), we have

$$\sum_{k=1}^{\infty} (f_k - f_{k+1}) = \lim_{N \to \infty} \sum_{k=1}^{N} (f_k - f_{k+1}) = \lim_{N \to \infty} (f_1 - f_{N+1}) = f_1 - f^{*}.$$
(16)

Thus,

$$\sum_{k=1}^{\infty} (f_k - f_{k+1}) < +\infty.$$
(17)

 □

Lemma 2 Let $\{x_k\}$ be a sequence generated by Algorithm 1 and let $\{d_k\}$ be the sequence of search directions. If Assumption 1 holds, then

$$\sum_{k=1}^{\infty} \left(\frac{g_k^{T} d_k}{\|d_k\|}\right)^{2} < +\infty.$$
(18)

In particular,

$$\lim_{k \to \infty} \frac{g_k^{T} d_k}{\|d_k\|} = 0.$$
(19)

Proof Denote

$$K_1 = \{k \mid \alpha_k = \beta_k\}, \qquad K_2 = \{k \mid \alpha_k < \beta_k\}.$$

For $k \in K_1$, we have

$$\begin{aligned} f_k - f_{k+1} &\ge -\sigma \alpha_k \left(g_k^{T} d_k - \tfrac{1}{2}\alpha_k \mu L_k \|d_k\|^{2}\right) \\ &= -\sigma\, \frac{-g_k^{T} d_k}{L_k \|d_k\|^{2}} \left(g_k^{T} d_k - \tfrac{1}{2}\, \frac{-g_k^{T} d_k}{L_k \|d_k\|^{2}}\, \mu L_k \|d_k\|^{2}\right) \\ &= \sigma\left(1 + \tfrac{1}{2}\mu\right) \frac{(g_k^{T} d_k)^{2}}{L_k \|d_k\|^{2}} \ge \frac{\sigma(2+\mu)}{2ML} \left(\frac{g_k^{T} d_k}{\|d_k\|}\right)^{2}. \end{aligned}$$
(20)

For $k \in K_2$, the larger trial step $\rho^{-1}\alpha_k$ was rejected, that is, it does not satisfy (7). Hence

$$f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) > \sigma\rho^{-1}\alpha_k \left(g_k^{T} d_k - \tfrac{1}{2}\rho^{-1}\alpha_k \mu L_k \|d_k\|^{2}\right).$$
(21)

By the mean value theorem, there is a $\theta_k \in (0,1)$ such that

$$\rho^{-1}\alpha_k\, g(x_k + \theta_k\rho^{-1}\alpha_k d_k)^{T} d_k > \sigma\rho^{-1}\alpha_k \left(g_k^{T} d_k - \tfrac{1}{2}\rho^{-1}\alpha_k \mu L_k \|d_k\|^{2}\right).$$
(22)

Hence,

$$\left(g(x_k + \theta_k\rho^{-1}\alpha_k d_k) - g_k\right)^{T} d_k > (\sigma - 1)\, g_k^{T} d_k - \tfrac{1}{2}\sigma\rho^{-1}\alpha_k \mu L_k \|d_k\|^{2}.$$
(23)

From the Lipschitz continuity of g, it is obtained that

$$\left(L + \tfrac{1}{2}\sigma\mu L_k\right)\rho^{-1}\alpha_k \|d_k\|^{2} > (\sigma - 1)\, g_k^{T} d_k.$$
(24)

That is,

$$\alpha_k > \frac{2\rho(1-\sigma)}{2L + \sigma\mu L_k}\, \frac{-g_k^{T} d_k}{\|d_k\|^{2}}.$$
(25)

From (7) and (25), it is deduced that

$$\begin{aligned} f_k - f_{k+1} &\ge -\sigma \alpha_k \left(g_k^{T} d_k - \tfrac{1}{2}\alpha_k \mu L_k \|d_k\|^{2}\right) \ge -\sigma \alpha_k\, g_k^{T} d_k \\ &\ge \sigma\, \frac{2\rho(1-\sigma)}{2L + \sigma\mu L_k}\, \frac{(g_k^{T} d_k)^{2}}{\|d_k\|^{2}} \ge \frac{2\rho\sigma(1-\sigma)}{2L + \sigma\mu ML} \left(\frac{g_k^{T} d_k}{\|d_k\|}\right)^{2}. \end{aligned}$$
(26)

Denote

$$\eta = \min\left\{\frac{\sigma(2+\mu)}{2ML},\; \frac{2\rho\sigma(1-\sigma)}{2L + \sigma\mu ML}\right\}.$$
(27)

Then, from (20) and (26), we obtain

$$f_k - f_{k+1} \ge \eta \left(\frac{g_k^{T} d_k}{\|d_k\|}\right)^{2}.$$
(28)

From Lemma 1, it is clear that

$$\sum_{k=1}^{\infty} \eta \left(\frac{g_k^{T} d_k}{\|d_k\|}\right)^{2} < +\infty.$$
(29)

That is to say

$$\sum_{k=1}^{\infty} \left(\frac{g_k^{T} d_k}{\|d_k\|}\right)^{2} < +\infty,$$
(30)

since $\eta > 0$. It follows that

$$\lim_{k \to \infty} \frac{g_k^{T} d_k}{\|d_k\|} = 0.$$
(31)

 □

Lemma 3 Let $\{x_k\}$ be a sequence generated by Algorithm 1. Suppose that there exist constants $a_1, a_2 > 0$ such that the following relations hold for infinitely many $k$:

$$\|B_k s_k\| \le a_1 \|s_k\|, \qquad a_2 \|s_k\|^{2} \le s_k^{T} B_k s_k.$$
(32)

Then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$
(33)

Proof Let $\Lambda$ be the index set of $k$ satisfying (32).

From (32), $-g_k = B_k d_k$ and $s_k = \alpha_k d_k$ with $\alpha_k > 0$, it follows that for each $k \in \Lambda$,

$$a_2 \|d_k\|^{2} \le d_k^{T} B_k d_k = -g_k^{T} d_k.$$
(34)

Thus,

$$a_2 \|d_k\| \le \frac{-g_k^{T} d_k}{\|d_k\|}.$$
(35)

Combined with (31), it yields

$$\lim_{k \in \Lambda,\, k \to \infty} \|d_k\| = 0.$$
(36)

On the other hand, from (32) and $-g_k = B_k d_k$, it is deduced that for each $k \in \Lambda$,

$$0 \le \|g_k\| = \|B_k d_k\| \le a_1 \|d_k\|.$$
(37)

From (36) and (37), it is easy to see that

$$\lim_{k \in \Lambda,\, k \to \infty} \|g_k\| = 0.$$
(38)

The desired result (33) is proved. □

Lemma 3 indicates that, to establish global convergence, it suffices to show that (32) holds for infinitely many $k$ in Algorithm 1. The following lemma gives sufficient conditions for (32) to hold (see Theorem 2.1 in [18]).

Lemma 4 Let $B_0$ be a symmetric and positive definite matrix and let $B_k$ be updated by (3). Suppose that there are positive constants $m_1, m_2$ ($m_1 < m_2$) such that for all $k \ge 0$,

$$\frac{y_k^{T} s_k}{\|s_k\|^{2}} \ge m_1, \qquad \frac{\|y_k\|^{2}}{y_k^{T} s_k} \le m_2.$$
(39)

Then there exist constants $a_1, a_2 > 0$ such that, for any positive integer $t$, (32) holds for at least $\lceil t/2 \rceil$ values of $k \in \{1, 2, \ldots, t\}$.

Now, we come to establish the global convergence of Algorithm 1. For the sake of convenience, we define the index set

$$\tilde{K} = \left\{ i \;\middle|\; \frac{y_i^{T} s_i}{\|s_i\|^{2}} \ge \epsilon \|g_i\|^{\gamma} \right\}.$$
(40)

Theorem 1 Let $\{x_k\}$ be a sequence generated by Algorithm 1. Under Assumption 1, (33) holds.

Proof From Lemma 3, we only need to show that (32) holds for infinitely many k.

If $\tilde{K}$ is a finite set, then $B_k$ remains a constant matrix after a finite number of iterations. Hence, there are constants $a_1, a_2$ such that (32) holds for all sufficiently large $k$, and the desired result follows.

In the following, we prove (33) in the case that $\tilde{K}$ is an infinite set.

Suppose that (33) is not true. Then there is a constant $\delta > 0$ such that $\|g_k\| \ge \delta$ for all $k$. From (40), the inequality

$$\frac{y_k^{T} s_k}{\|s_k\|^{2}} \ge \epsilon \delta^{\gamma}$$
(41)

holds for all $k \in \tilde{K}$.

Combined with (14), it is obtained that

$$\frac{\|y_k\|^{2}}{y_k^{T} s_k} \le \frac{\|y_k\|^{2}}{\epsilon \delta^{\gamma} \|s_k\|^{2}} \le \frac{L^{2}}{\epsilon \delta^{\gamma}}.$$
(42)

From Lemma 4, it follows that there exist constants $a_1, a_2$ such that (32) holds for infinitely many $k$. Then Lemma 3 gives $\liminf_{k \to \infty} \|g_k\| = 0$, which contradicts $\|g_k\| \ge \delta$ for all $k$.

The proof is completed. □

4 Numerical experiments

In this section, we report the numerical performance of Algorithm 1. The numerical experiments are carried out on a set of 16 test problems from [19]. We make comparisons with the cautious BFGS method associated with the ordinary Armijo line search rule.

In order to study the numerical performance of Algorithm 1, we record the CPU run time, the total number of function evaluations required in the line search process, and the total number of iterations for each algorithm.

All MATLAB procedures are run in the following computer environment: a 2 GHz CPU and 1 GB of memory under the Windows XP operating system. The parameters are chosen as follows:

$$\epsilon = 10^{-6}, \quad B_0 = I_{n \times n}, \quad \rho = 0.3, \quad \sigma = 0.2, \quad \mu = 1, \quad L_0 = 1.$$

As to the parameters in the cautious update (5), we let $\gamma = 0.01$ if $\|g_k\| \ge 1$, and $\gamma = 3$ if $\|g_k\| < 1$.
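
As a usage illustration only (not the script used for the experiments), the sketch below applies the `ncbfgs` driver sketched in Section 2 to the two-dimensional Rosenbrock function, a classical test problem from the collection in [19].

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

x_star = ncbfgs(rosenbrock, rosenbrock_grad, x0=[-1.2, 1.0], eps=1e-6, L0=1.0)
print(x_star)  # should approach the minimizer (1, 1)
```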

The performance of the algorithms and the solution results are reported in Table 1. In this table, the following notation is used:

Dim: the dimension of the objective function;

GV: the gradient value of the objective function when the algorithm stops;

NI: the number of iterations;

NF: the number of function evaluations;

CT: the run time of CPU;

CBFGS: the CBFGS method associated with Armijo line search rule;

NCBFGS: the new cautious BFGS method proposed in this paper.

Table 1 Comparison of efficiency with the other method

Table 1 shows that the algorithm developed in this paper is promising. In some cases, it requires fewer iterations, fewer function evaluations, or less CPU time than the other algorithm to find an optimal solution with the same tolerance.

5 Conclusions

A modified Armijo-type line search with automatic adjustment of the initial step size has been presented in this paper. Combined with the cautious BFGS update, a new BFGS algorithm has been developed. Under some assumptions, the global convergence has been established for nonconvex optimization problems. Numerical results demonstrate that the proposed method is promising.

References

  1. Al-Baali M: Quasi-Newton algorithms for large-scale nonlinear least-squares. In High Performance Algorithms and Software for Nonlinear Optimization. Edited by: Pillo G, Murli A. Kluwer Academic, Dordrecht; 2003:1–21.

  2. Al-Baali M, Grandinetti L: On practical modifications of the quasi-Newton BFGS method. Adv. Model. Optim. 2009, 11(1):63–76.

  3. Gill PE, Leonard MW: Limited-Memory reduced-Hessian methods for large-scale unconstrained optimization. SIAM J. Optim. 2003, 14: 380–401. 10.1137/S1052623497319973

  4. Guo Q, Liu JG: Global convergence of a modified BFGS-type method for unconstrained nonconvex minimization. J. Appl. Math. Comput. 2006, 21: 259–267. 10.1007/BF02896404

  5. Li DH, Fukushima M: On the global convergence of the BFGS method for nonconvex unconstrained optimization problems. SIAM J. Optim. 2001, 11(4):1054–1064. 10.1137/S1052623499354242

  6. Mascarenhas WF: The BFGS method with exact line searches fails for non-convex objective functions. Math. Program. 2004, 99: 49–61. 10.1007/s10107-003-0421-7

  7. Nocedal J: Theory of algorithms for unconstrained optimization. Acta Numer. 1992, 1: 199–242.

  8. Xiao YH, Wei ZX, Zhang L: A modified BFGS method without line searches for nonconvex unconstrained optimization. Adv. Theor. Appl. Math. 2006, 1(2):149–162.

  9. Zhou W, Li D: A globally convergent BFGS method for nonlinear monotone equations without any merit function. Math. Comput. 2008, 77: 2231–2240. 10.1090/S0025-5718-08-02121-2

  10. Zhou W, Zhang L: Global convergence of the nonmonotone MBFGS method for nonconvex unconstrained minimization. J. Comput. Appl. Math. 2009, 223: 40–47. 10.1016/j.cam.2007.12.011

  11. Zhou W, Zhang L: Global convergence of a regularized factorized quasi-Newton method for nonlinear least squares problems. Comput. Appl. Math. 2010, 29(2):195–214.

  12. Cohen AI: Stepsize analysis for descent methods. J. Optim. Theory Appl. 1981, 33: 187–205. 10.1007/BF00935546

  13. Dennis JE, Schnabel RB: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs; 1983.

  14. Shi ZJ, Shen J: New inexact line search method for unconstrained optimization. J. Optim. Theory Appl. 2005, 127(2):425–446. 10.1007/s10957-005-6553-6

  15. Sun WY, Han JY, Sun J: Global convergence of nonmonotone descent methods for unconstrained optimization problems. J. Comput. Appl. Math. 2002, 146: 89–98. 10.1016/S0377-0427(02)00420-X

  16. Wolfe P: Convergence conditions for ascent methods. SIAM Rev. 1969, 11: 226–235. 10.1137/1011036

  17. Nocedal J, Wright SJ: Numerical Optimization. Springer, New York; 1999.

  18. Byrd R, Nocedal J: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 1989, 26: 727–739. 10.1137/0726042

  19. Moré JJ, Garbow BS, Hillstrom KE: Testing unconstrained optimization software. ACM Trans. Math. Softw. 1981, 7: 17–41. 10.1145/355934.355936


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 71210003 and 71071162).

Author information

Corresponding author

Correspondence to Zhong Wan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

ZW carried out the study of the new Armijo-type line search, participated in the proof of the main results and drafted the manuscript. SH participated in proving some of the main results. XZ participated in the design of the algorithm and performed the numerical experiments. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Wan, Z., Huang, S. & Zheng, X.D. New cautious BFGS algorithm based on modified Armijo-type line search. J Inequal Appl 2012, 241 (2012). https://doi.org/10.1186/1029-242X-2012-241
