Open Access

New cautious BFGS algorithm based on modified Armijo-type line search

Journal of Inequalities and Applications 2012, 2012:241

https://doi.org/10.1186/1029-242X-2012-241

Received: 8 October 2011

Accepted: 20 September 2012

Published: 17 October 2012

Abstract

In this paper, a new inexact line search rule is presented, which is a modified version of the classical Armijo line search rule. With a lower computational cost, a larger descent magnitude of the objective function is obtained at every iteration. In addition, the initial step size in the modified line search is adjusted automatically at each iteration. On the basis of this line search, a new cautious Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is developed. Under some mild assumptions, the global convergence of the algorithm is established for nonconvex optimization problems. Numerical results demonstrate that the proposed method is promising, especially in comparison with existing methods.

Keywords

unconstrained optimization; inexact line search rule; global convergence; BFGS

1 Introduction

Consider the following unconstrained optimization problem:
$$ \min_{x \in \mathbb{R}^n} f(x), \qquad (1) $$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a twice continuously differentiable function.

Among the various methods for solving problem (1), it is well known that the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method has achieved great success both in theoretical research and in engineering applications. In this connection, we refer, for example, to the literature [1–11] and the references therein.

In the framework of the BFGS method, a quasi-Newton direction $d_k$ at the current iterate $x_k$ is first obtained by solving the following system of linear equations:
$$ B_k d_k = -g_k, \qquad (2) $$
where $B_k$ is a given positive definite matrix, $g : \mathbb{R}^n \to \mathbb{R}^n$ is the gradient function of f, and $g_k$ is the value of g at $x_k$. At the new iterate $x_{k+1}$, $B_k$ is updated by
$$ B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}, \qquad (3) $$

where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.

Next, along the search direction $d_k$, we choose a suitable stepsize $\alpha_k$ by employing some line search strategy. Thus, the iterate $x_k$ is updated by
$$ x_{k+1} = x_k + \alpha_k d_k. \qquad (4) $$
Actually, it has been reported that the choice of a suitable line search rule is important to the efficiency and convergence of the BFGS method (see, for example, [7, 12–15] and [16]). It is well known that the Armijo line search is the cheapest and most popular way to obtain a step length among all line search methods. However, when the Armijo line search is implemented to find a step length $\alpha_k$, the matrix $B_{k+1}$ in (3) may fail to be positive definite even if $B_k$ is positive definite [17]. For this reason, a cautious BFGS method (CBFGS) associated with the Armijo line search was presented in [5] to solve nonconvex unconstrained optimization problems. The update formula of $B_k$ in [5] is
$$ B_{k+1} = \begin{cases} B_k - \dfrac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \dfrac{y_k y_k^T}{y_k^T s_k}, & \text{if } \dfrac{y_k^T s_k}{\|s_k\|^2} \ge \epsilon \|g_k\|^{\gamma}, \\ B_k, & \text{otherwise}, \end{cases} \qquad (5) $$

where $\epsilon$ and $\gamma$ are positive constants.
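To make the cautious rule (5) concrete, here is a minimal Python/NumPy sketch of one update of B; the function name cautious_bfgs_update and its argument list are our own illustration and are not part of the original paper.

```python
import numpy as np

def cautious_bfgs_update(B, s, y, g_norm, eps=1e-6, gamma=1.0):
    """One cautious BFGS update of the Hessian approximation B, following (5).

    The standard BFGS formula (3) is applied only when the curvature test
    y^T s / ||s||^2 >= eps * ||g_k||^gamma holds; otherwise B is left unchanged.
    """
    ys = float(y @ s)
    if ys / float(s @ s) >= eps * g_norm ** gamma:
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / float(s @ Bs) + np.outer(y, y) / ys
    return B
```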

In this paper, we shall first present a modified Armijo-type line search rule. Then, on the basis of this line search, a new cautious BFGS algorithm is developed. It will be shown that in our line search, a larger descent magnitude of the objective function is obtained with a lower computational cost at every iteration. In addition, the initial step size is adjusted automatically at each iteration.

The rest of this paper is organized as follows. In Section 2, a modified Armijo-type inexact line search rule is presented and a new cautious BFGS algorithm is developed. Section 3 is devoted to establishing the global convergence of the proposed algorithm under some suitable assumptions. In Section 4, numerical results are reported to demonstrate the efficiency of the algorithm. Some conclusions are given in the last section.

2 Modified Armijo-type line search rule and new cautious BFGS algorithm

The classical Armijo line search is to find $\alpha_k$ such that the following inequality holds:
$$ f(x_k + \alpha_k d_k) \le f(x_k) + \sigma_1 \alpha_k g_k^T d_k, \qquad (6) $$

where $\sigma_1 \in (0,1)$ is a given constant scalar. In a computer procedure, $\alpha_k$ in (6) is obtained by searching in the set $\{\beta, \beta\rho, \beta\rho^2, \dots\}$ such that $\alpha_k$ is the largest element satisfying (6), where $\rho \in (0,1)$ and $\beta > 0$ are given constant scalars.

Compared with other line search methods, the Armijo line search has the simplest implementation, and the computational cost of finding a feasible stepsize is very low, especially when $\rho \in (0,1)$ is close to 0. Its drawback is that, at each iteration, only a small reduction of the objective function may be obtained.
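As an illustration, the backtracking procedure just described can be sketched in a few lines of Python; the function name armijo_step and its default parameter values are ours, and x, g, d are assumed to be NumPy arrays.

```python
def armijo_step(f, x, g, d, beta=1.0, rho=0.5, sigma1=0.1):
    """Classical Armijo backtracking (6): try beta, beta*rho, beta*rho^2, ...
    and return the first (hence largest) trial step that satisfies (6)."""
    fx = f(x)
    gTd = float(g @ d)          # negative when d is a descent direction
    alpha = beta
    while f(x + alpha * d) > fx + sigma1 * alpha * gTd:
        alpha *= rho
    return alpha
```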

Inspired by this observation, we present a modified Armijo-type line search (MALS) rule as follows. Suppose that g is a Lipschitz continuous function. Let L be the Lipschitz constant and let $L_k$ be an approximation of L. Set
$$ \beta_k = -\frac{g_k^T d_k}{L_k \|d_k\|^2}. $$
Different from the classical Armijo line search (6), we find a step size $\alpha_k$ as the largest element in the set $\{\beta_k, \beta_k\rho, \beta_k\rho^2, \dots\}$ such that the inequality
$$ f(x_k + \alpha_k d_k) \le f(x_k) + \sigma \alpha_k \Big( g_k^T d_k - \tfrac{1}{2} \alpha_k \mu L_k \|d_k\|^2 \Big) \qquad (7) $$

holds, where $\sigma \in (0,1)$, $\mu \in [0, +\infty)$, and $\rho \in (0,1)$ are given constant scalars.
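A corresponding Python sketch of the MALS rule is given below (again the function name mals_step and the default values are our own, and x, g, d are NumPy arrays); note that with mu = 0 it reduces to the classical Armijo test (6).

```python
def mals_step(f, x, g, d, L_k, rho=0.3, sigma=0.2, mu=1.0):
    """Modified Armijo-type line search (MALS), rule (7).

    The initial trial step is beta_k = -g^T d / (L_k ||d||^2), so the starting
    step size adapts to the local Lipschitz estimate L_k, and the acceptance
    test subtracts 0.5*alpha*mu*L_k*||d||^2, demanding a larger decrease than
    the classical rule (6)."""
    fx = f(x)
    gTd = float(g @ d)               # < 0 for a descent direction
    dd = float(d @ d)                # ||d||^2
    alpha = -gTd / (L_k * dd)        # beta_k: automatic initial step size
    while f(x + alpha * d) > fx + sigma * alpha * (gTd - 0.5 * alpha * mu * L_k * dd):
        alpha *= rho
    return alpha
```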

In the following proposition, we show that the new line search (7) is well defined.

Proposition 1 Let $f : \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function. Suppose that the gradient function g of f is Lipschitz continuous with constant L, and let $L_k > 0$ be an approximation of L. If $d_k$ is a descent direction of f at $x_k$, then there is an $\alpha > 0$ in the set $\{\beta_k, \beta_k\rho, \beta_k\rho^2, \dots\}$ such that the following inequality holds:
$$ f(x_k + \alpha d_k) \le f(x_k) + \sigma \alpha \Big( g_k^T d_k - \tfrac{1}{2} \alpha \mu L_k \|d_k\|^2 \Big), \qquad (8) $$

where $\sigma \in (0,1)$, $\mu \in [0, +\infty)$, and $\rho \in (0,1)$ are given constant scalars.

Proof In fact, we only need to prove that a step length α is obtained in finitely many steps. If this is not true, then for every sufficiently large positive integer m, we have
$$ f(x_k + \beta_k \rho^m d_k) - f(x_k) > \sigma \beta_k \rho^m \Big( g_k^T d_k - \tfrac{1}{2} \beta_k \rho^m \mu L_k \|d_k\|^2 \Big). \qquad (9) $$
By the mean-value theorem, there is a $\theta_k \in (0,1)$ such that
$$ \beta_k \rho^m g(x_k + \theta_k \beta_k \rho^m d_k)^T d_k > \sigma \beta_k \rho^m \Big( g_k^T d_k - \tfrac{1}{2} \beta_k \rho^m \mu L_k \|d_k\|^2 \Big). \qquad (10) $$
Thus,
$$ \big( g(x_k + \theta_k \beta_k \rho^m d_k) - g_k \big)^T d_k > (\sigma - 1) g_k^T d_k - \tfrac{1}{2} \sigma \beta_k \rho^m \mu L_k \|d_k\|^2. \qquad (11) $$
Letting $m \to \infty$, it is obtained that
$$ (\sigma - 1) g_k^T d_k \le 0. $$

From $\sigma \in (0,1)$, it follows that $g_k^T d_k \ge 0$. This contradicts the fact that $d_k$ is a descent direction. □

Remark 1 Since the third term on the right-hand side of (7) is negative, it is easy to see that the obtained step size $\alpha$ ensures a larger descent magnitude of the objective function than that in (6).

It is noted that (7) reduces to (6) when $\mu = 0$.

Remark 2 In the MALS, the parameter $L_k$ should be estimated at each iteration. In this paper, for $k > 1$, we choose
$$ L_k = \frac{s_{k-1}^T y_{k-1}}{\|s_{k-1}\|^2}, \qquad (12) $$
where $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$. Actually, $L_k$ in (12) is a solution of the minimization problem
$$ \min_{L} \|L s_{k-1} - y_{k-1}\|. \qquad (13) $$

Therefore, it is reasonable to regard $L_k$ as an approximation of L.
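Indeed, a short calculation (not spelled out in the original text) confirms that (12) solves (13):
$$ \varphi(L) = \|L s_{k-1} - y_{k-1}\|^2 = L^2 \|s_{k-1}\|^2 - 2 L\, s_{k-1}^T y_{k-1} + \|y_{k-1}\|^2, \qquad \varphi'(L) = 2 L \|s_{k-1}\|^2 - 2\, s_{k-1}^T y_{k-1} = 0 \iff L = \frac{s_{k-1}^T y_{k-1}}{\|s_{k-1}\|^2}, $$
and $\varphi''(L) = 2 \|s_{k-1}\|^2 > 0$, so this stationary point is the minimizer.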

Based on Proposition 1, Remarks 1 and 2, a new cautious BFGS algorithm is developed for solving problem (1).

Algorithm 1 (New cautious BFGS algorithm)

Step 1. Choose an initial point $x_0 \in \mathbb{R}^n$ and a positive definite matrix $B_0$. Choose $\sigma \in (0,1)$, $\mu \ge 0$, $\epsilon > 0$ and $L_0 > 0$. Set $k := 0$.

Step 2. If $\|g_k\| \le \epsilon$, the algorithm stops. Otherwise, go to Step 3.

Step 3. Find $d_k$ as the solution of the following system of linear equations:
$$ B_k d_k = -g_k. $$

Step 4. Determine a step size $\alpha_k$ satisfying (7).

Step 5. Set $x_{k+1} := x_k + \alpha_k d_k$. Compute $s_k$ and $y_k$. Update $B_k$ to $B_{k+1}$ by (5). Set $k := k + 1$ and return to Step 2.
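The following Python/NumPy sketch puts the steps of Algorithm 1 together. The function and argument names (ncbfgs, eps_c, max_iter), the separation of the stopping tolerance eps from the cautious constant eps_c, and the small positive safeguard on $L_k$ are our own additions for illustration; the γ selection rule is the one reported later in Section 4.

```python
import numpy as np

def ncbfgs(f, grad, x0, eps=1e-6, rho=0.3, sigma=0.2, mu=1.0,
           L0=1.0, eps_c=1e-6, max_iter=10000):
    """Sketch of Algorithm 1: the new cautious BFGS method with the MALS rule (7).

    f, grad : callables returning f(x) and g(x); x0 : initial point (1-D array).
    eps     : stopping tolerance on ||g_k|| (Step 2).
    eps_c   : the constant epsilon in the cautious update (5); gamma is chosen
              as in Section 4 (0.01 if ||g_k|| >= 1, else 3).
    """
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                      # B_0 = I
    L_k = L0
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:        # Step 2: stopping test
            break
        d = np.linalg.solve(B, -g)          # Step 3: B_k d_k = -g_k
        # Step 4: modified Armijo-type line search (7)
        fx, gTd, dd = f(x), float(g @ d), float(d @ d)
        alpha = -gTd / (L_k * dd)           # beta_k: automatic initial step
        while (f(x + alpha * d)
               > fx + sigma * alpha * (gTd - 0.5 * alpha * mu * L_k * dd)):
            alpha *= rho
        # Step 5: update the iterate, the Lipschitz estimate (12) and B_k by (5)
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        L_k = max(float(s @ y) / float(s @ s), 1e-12)   # (12), kept positive as a safeguard
        gamma = 0.01 if np.linalg.norm(g) >= 1 else 3.0
        ys = float(y @ s)
        if ys / float(s @ s) >= eps_c * np.linalg.norm(g) ** gamma:   # cautious test
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / float(s @ Bs) + np.outer(y, y) / ys
        x, g = x_new, g_new
    return x
```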

3 Global convergence

In this section, we are going to prove the global convergence of Algorithm 1.

We need the following conditions.

Assumption 1

1. The level set $\Omega = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\}$ is bounded.

2. In some neighborhood N of Ω, f is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$ \|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N. \qquad (14) $$

3. The sequence $\{L_k\}$ satisfies $0 < L_k \le M L$, where M is a positive constant.

Before the statement of the global convergence, we first prove the following useful lemmas.

Lemma 1 Let $\{x_k\}$ be a sequence generated by Algorithm 1. If $L_k > 0$ for each $k \ge 0$, then for any given initial point $x_0$, the following results hold:

1. $\{f_k\}$ is a decreasing sequence.

2. $\{x_k\} \subset \Omega$.

3. $\sum_{k=1}^{\infty} (f_k - f_{k+1}) < +\infty$.

Proof The first and second results follow directly from the condition $L_k > 0$ for each $k \ge 0$, Proposition 1 and the definition of Ω. We only need to prove the third result.

Since $\{f_k\}$ is a decreasing sequence and is bounded below, it is clear that there exists a constant $f^*$ such that
$$ \lim_{k \to \infty} f_k = f^*. \qquad (15) $$
From (15), we have
$$ \sum_{k=1}^{\infty} (f_k - f_{k+1}) = \lim_{N \to \infty} \sum_{k=1}^{N} (f_k - f_{k+1}) = \lim_{N \to \infty} (f_1 - f_{N+1}) = f_1 - f^*. \qquad (16) $$
Thus,
$$ \sum_{k=1}^{\infty} (f_k - f_{k+1}) < +\infty. \qquad (17) $$

 □

Lemma 2 Let $\{x_k\}$ be a sequence generated by Algorithm 1 and let $\{d_k\}$ be the corresponding sequence of search directions. If Assumption 1 holds, then
$$ \sum_{k=1}^{\infty} \left( \frac{g_k^T d_k}{\|d_k\|} \right)^2 < +\infty. \qquad (18) $$
In particular,
$$ \lim_{k \to \infty} \frac{g_k^T d_k}{\|d_k\|} = 0. \qquad (19) $$
Proof Denote
$$ K_1 = \{ k \mid \alpha_k = \beta_k \}, \qquad K_2 = \{ k \mid \alpha_k < \beta_k \}. $$
For $k \in K_1$, we have
$$ f_k - f_{k+1} \ge -\sigma \alpha_k \Big( g_k^T d_k - \tfrac{1}{2} \alpha_k \mu L_k \|d_k\|^2 \Big) = \sigma \frac{g_k^T d_k}{L_k \|d_k\|^2} \Big( g_k^T d_k + \tfrac{1}{2} \frac{g_k^T d_k}{L_k \|d_k\|^2} \mu L_k \|d_k\|^2 \Big) = \sigma \Big( 1 + \tfrac{1}{2} \mu \Big) \frac{(g_k^T d_k)^2}{L_k \|d_k\|^2} \ge \frac{\sigma (2 + \mu)}{2 M L} \left( \frac{g_k^T d_k}{\|d_k\|} \right)^2. \qquad (20) $$
For $k \in K_2$, the trial step $\rho^{-1} \alpha_k$ does not satisfy (7), that is,
$$ f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) > \sigma \rho^{-1} \alpha_k \Big( g_k^T d_k - \tfrac{1}{2} \rho^{-1} \alpha_k \mu L_k \|d_k\|^2 \Big). \qquad (21) $$
By the mean-value theorem, there is a $\theta_k \in (0,1)$ such that
$$ \rho^{-1} \alpha_k g(x_k + \theta_k \rho^{-1} \alpha_k d_k)^T d_k > \sigma \rho^{-1} \alpha_k \Big( g_k^T d_k - \tfrac{1}{2} \rho^{-1} \alpha_k \mu L_k \|d_k\|^2 \Big). \qquad (22) $$
Hence,
$$ \big( g(x_k + \theta_k \rho^{-1} \alpha_k d_k) - g_k \big)^T d_k > (\sigma - 1) g_k^T d_k - \tfrac{1}{2} \sigma \rho^{-1} \alpha_k \mu L_k \|d_k\|^2. \qquad (23) $$
From the Lipschitz continuity of g, it is obtained that
$$ \Big( L + \tfrac{1}{2} \sigma \mu L_k \Big) \rho^{-1} \alpha_k \|d_k\|^2 > (\sigma - 1) g_k^T d_k. \qquad (24) $$
It reads
$$ \alpha_k > \frac{2 \rho (1 - \sigma)}{2 L + \sigma \mu L_k} \cdot \frac{-g_k^T d_k}{\|d_k\|^2}. \qquad (25) $$
From (7) and (25), it is deduced that
$$ f_k - f_{k+1} \ge -\sigma \alpha_k \Big( g_k^T d_k - \tfrac{1}{2} \alpha_k \mu L_k \|d_k\|^2 \Big) \ge \sigma \frac{2 \rho (1 - \sigma)}{2 L + \sigma \mu L_k} \cdot \frac{-g_k^T d_k}{\|d_k\|^2} \cdot \big( -g_k^T d_k \big) \ge \frac{2 \rho \sigma (1 - \sigma)}{2 L + \sigma \mu M L} \left( \frac{g_k^T d_k}{\|d_k\|} \right)^2. \qquad (26) $$
Denote
$$ \eta = \min \left\{ \frac{\sigma (2 + \mu)}{2 M L}, \ \frac{2 \rho \sigma (1 - \sigma)}{2 L + \sigma \mu M L} \right\}. \qquad (27) $$
Then from (20) and (26), we obtain
$$ f_k - f_{k+1} \ge \eta \left( \frac{g_k^T d_k}{\|d_k\|} \right)^2. \qquad (28) $$
From Lemma 1, it is clear that
$$ \sum_{k=1}^{\infty} \eta \left( \frac{g_k^T d_k}{\|d_k\|} \right)^2 < +\infty. \qquad (29) $$
Since $\eta > 0$, it follows that
$$ \sum_{k=1}^{\infty} \left( \frac{g_k^T d_k}{\|d_k\|} \right)^2 < +\infty, \qquad (30) $$
and consequently
$$ \lim_{k \to \infty} \frac{g_k^T d_k}{\|d_k\|} = 0. \qquad (31) $$

 □

Lemma 3 Let $\{x_k\}$ be a sequence generated by Algorithm 1. Suppose that there exist constants $a_1, a_2 > 0$ such that the following relations hold for infinitely many k:
$$ \|B_k s_k\| \le a_1 \|s_k\|, \qquad a_2 \|s_k\|^2 \le s_k^T B_k s_k. \qquad (32) $$
Then
$$ \liminf_{k \to \infty} \|g_k\| = 0. \qquad (33) $$

Proof Let Λ be the index set of k satisfying (32).

From (32) and $B_k d_k = -g_k$, it follows that for each $k \in \Lambda$,
$$ a_2 \|d_k\|^2 \le d_k^T B_k d_k = -g_k^T d_k. \qquad (34) $$
Thus,
$$ a_2 \|d_k\| \le \frac{-g_k^T d_k}{\|d_k\|}. \qquad (35) $$
Combined with (31), it yields
$$ \lim_{k \in \Lambda, k \to \infty} \|d_k\| = 0. \qquad (36) $$
On the other hand, from (32) and $B_k d_k = -g_k$, it is deduced that for each $k \in \Lambda$,
$$ 0 \le \|g_k\| = \|B_k d_k\| \le a_1 \|d_k\|. \qquad (37) $$
From (36) and (37), it is easy to see that
$$ \lim_{k \in \Lambda, k \to \infty} \|g_k\| = 0. \qquad (38) $$

The desired result (33) is proved. □

Lemma 3 indicates that for the establishment of the global convergence, it suffices to show that (32) holds for infinitely many k in Algorithm 1. The following lemma gives sufficient conditions for (32) to hold (see Theorem 2.1 in [18]).

Lemma 4 Let $B_0$ be a symmetric positive definite matrix and let $B_k$ be updated by (3). Suppose that there are positive constants $m_1, m_2$ ($m_1 < m_2$) such that for all $k \ge 0$,
$$ \frac{y_k^T s_k}{\|s_k\|^2} \ge m_1, \qquad \frac{\|y_k\|^2}{y_k^T s_k} \le m_2. \qquad (39) $$

Then there exist constants $a_1, a_2$ such that for any positive integer t, (32) holds for at least $[t/2]$ values of $k \in \{1, 2, \dots, t\}$.

Now, we come to establish the global convergence of Algorithm 1. For the sake of convenience, we define an index set
$$ \tilde{K} = \left\{ i \,\Big|\, \frac{y_i^T s_i}{\|s_i\|^2} \ge \epsilon \|g_i\|^{\gamma} \right\}. \qquad (40) $$

Theorem 1 Let $\{x_k\}$ be a sequence generated by Algorithm 1. Under Assumption 1, (33) holds.

Proof From Lemma 3, we only need to show that (32) holds for infinitely many k.

If $\tilde{K}$ is a finite set, then $B_k$ remains a constant matrix after a finite number of iterations. Hence, there are constants $a_1, a_2$ such that (32) holds for all sufficiently large k, and the result follows in this case.

In the following, we prove (33) in the case that $\tilde{K}$ is an infinite set.

Suppose that (33) is not true. Then there is a constant $\delta > 0$ such that $\|g_k\| \ge \delta$ for all k. From (40), the inequality
$$ \frac{y_k^T s_k}{\|s_k\|^2} \ge \epsilon \delta^{\gamma} \qquad (41) $$

holds for all $k \in \tilde{K}$.

Combined with (14), it is obtained that
$$ \frac{\|y_k\|^2}{y_k^T s_k} \le \frac{\|y_k\|^2}{\epsilon \delta^{\gamma} \|s_k\|^2} \le \frac{L^2}{\epsilon \delta^{\gamma}}. \qquad (42) $$

From Lemma 4, it follows that there exist constants $a_1, a_2$ such that (32) holds for infinitely many k. By Lemma 3, this implies $\liminf_{k \to \infty} \|g_k\| = 0$, which contradicts $\|g_k\| \ge \delta$ for all k.

The proof is completed. □

4 Numerical experiments

In this section, we report the numerical performance of Algorithm 1. The numerical experiments are carried out on a set of 16 test problems from [19]. We make comparisons with the cautious BFGS method associated with the ordinary Armijo line search rule.

In order to study the numerical performance of Algorithm 1, we record the CPU run time, the total number of function evaluations required in the line search process, and the total number of iterations for each algorithm.

All MATLAB procedures were run in the following computing environment: a 2 GHz CPU and 1 GB of memory under the Windows XP operating system. The parameters are chosen as follows:
$$ \epsilon = 10^{-6}, \quad B_0 = I_{n \times n}, \quad \rho = 0.3, \quad \sigma = 0.2, \quad \mu = 1, \quad L_0 = 1. $$

As to the parameters in the cautious update (5), we let $\gamma = 0.01$ if $\|g_k\| \ge 1$, and $\gamma = 3$ if $\|g_k\| < 1$.
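As a usage illustration, assuming the ncbfgs sketch given after Algorithm 1 in Section 2, these settings can be passed directly; the 2-D Rosenbrock test function from [19] and its standard starting point (-1.2, 1) are used here.

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0] ** 2)])

x_star = ncbfgs(rosenbrock, rosenbrock_grad, x0=np.array([-1.2, 1.0]),
                eps=1e-6, rho=0.3, sigma=0.2, mu=1.0, L0=1.0)
print(x_star)   # the minimizer of the Rosenbrock function is (1, 1)
```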

The performance of the algorithms and the solution results are reported in Table 1. In this table, we use the following notation:

Dim: the dimension of the objective function;

GV: the gradient value $\|g_k\|$ when the algorithm stops;

NI: the number of iterations;

NF: the number of function evaluations;

CT: the CPU run time;

CBFGS: the CBFGS method associated with the Armijo line search rule;

NCBFGS: the new cautious BFGS method (Algorithm 1) proposed in this paper.

Table 1

Comparison of efficiency with other method

Functions                   Algorithm   Dim   GV            NI      NF      CT
Rosenbrock                  CBFGS       2     6.2782e-007   35      74      0.0310s
                            NCBFGS      2     1.1028e-007   40      70      0.0310s
Freudenstein and Roth       CBFGS       2     7.9817e-007   28      82      0.0310s
                            NCBFGS      2     2.7179e-007   11      25      0.0320s
Beale                       CBFGS       2     7.2275e-007   40      55      0.0310s
                            NCBFGS      2     3.1136e-007   18      23      0.0470s
Brown badly                 CBFGS       2     7.7272e-007   36      223     0.0310s
                            NCBFGS      2     0             29      50      0.0620s
Broyden tridiagonal         CBFGS       4     7.5723e-007   26      126     0.0320s
                            NCBFGS      4     3.8712e-007   15      21      0.0310s
Powell singular             CBFGS       4     9.9993e-007   13,993  14,031  2.4530s
                            NCBFGS      4     9.4607e-007   31      38      0.0320s
Kowalik and Osborne         CBFGS       4     9.9783e-007   3126    3128    2.1250s
                            NCBFGS      4     4.4454e-007   30      45      0.0470s
Brown almost-linear         CBFGS       6     9.5864e-007   263     300     0.1100s
                            NCBFGS      6     1.2290e-007   22      30      0.0160s
Discrete boundary           CBFGS       6     8.6773e-007   79      85      0.0470s
                            NCBFGS      6     3.3650e-007   14      17      0.0320s
Variably dimensioned        CBFGS       8     3.4688e-008   7       51      0.0470s
                            NCBFGS      8     3.1482e-007   10      21      0.0320s
Extended Rosenbrock         CBFGS       8     8.2943e-007   91      190     0.0470s
                            NCBFGS      8     7.7959e-007   99      149     0.0320s
Extended Powell singular    CBFGS       8     9.9975e-007   6154    6199    1.4690s
                            NCBFGS      8     6.5685e-007   42      55      0.0630s
Brown almost-linear         CBFGS       8     9.8392e-007   364     379     0.1880s
                            NCBFGS      8     4.8080e-007   20      27      0.0780s
Broyden tridiagonal         CBFGS       9     4.4261e-007   38      86      0.0470s
                            NCBFGS      9     6.2059e-007   41      56      0.0310s
Linear-rank1                CBFGS       10    -             -       -       -
                            NCBFGS      10    2.6592e-007   4       15      0.0310s
Linear-full rank            CBFGS       12    9.5231e-007   18      36      0.0160s
                            NCBFGS      12    9.4206e-016   2       3       0.0150s

Table 1 shows that the algorithm developed in this paper is promising. In some cases, it requires fewer iterations, fewer function evaluations, or less CPU time than the other algorithm to find an optimal solution with the same tolerance.

5 Conclusions

A modified Armijo-type line search with automatic adjustment of the initial step size has been presented in this paper. Combined with the cautious BFGS method, a new BFGS algorithm has been developed. Under some mild assumptions, its global convergence has been established for nonconvex optimization problems. Numerical results demonstrate that the proposed method is promising.

Declarations

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 71210003 and 71071162).

Authors’ Affiliations

(1)
School of Mathematics and Statistics, Central South University, Changsha, China

References

  1. Al-Baali M: Quasi-Newton algorithms for large-scale nonlinear least-squares. In High Performance Algorithms and Software for Nonlinear Optimization. Edited by: Pillo G, Murli A. Kluwer Academic, Dordrecht; 2003:1–21.
  2. Al-Baali M, Grandinetti L: On practical modifications of the quasi-Newton BFGS method. Adv. Model. Optim. 2009, 11(1):63–76.
  3. Gill PE, Leonard MW: Limited-memory reduced-Hessian methods for large-scale unconstrained optimization. SIAM J. Optim. 2003, 14: 380–401. 10.1137/S1052623497319973
  4. Guo Q, Liu JG: Global convergence of a modified BFGS-type method for unconstrained nonconvex minimization. J. Appl. Math. Comput. 2006, 21: 259–267. 10.1007/BF02896404
  5. Li DH, Fukushima M: On the global convergence of the BFGS method for nonconvex unconstrained optimization problems. SIAM J. Optim. 2001, 11(4):1054–1064. 10.1137/S1052623499354242
  6. Mascarenhas WF: The BFGS method with exact line searches fails for non-convex objective functions. Math. Program. 2004, 99: 49–61. 10.1007/s10107-003-0421-7
  7. Nocedal J: Theory of algorithms for unconstrained optimization. Acta Numer. 1992, 1: 199–242.
  8. Xiao YH, Wei ZX, Zhang L: A modified BFGS method without line searches for nonconvex unconstrained optimization. Adv. Theor. Appl. Math. 2006, 1(2):149–162.
  9. Zhou W, Li D: A globally convergent BFGS method for nonlinear monotone equations without any merit function. Math. Comput. 2008, 77: 2231–2240. 10.1090/S0025-5718-08-02121-2
  10. Zhou W, Zhang L: Global convergence of the nonmonotone MBFGS method for nonconvex unconstrained minimization. J. Comput. Appl. Math. 2009, 223: 40–47. 10.1016/j.cam.2007.12.011
  11. Zhou W, Zhang L: Global convergence of a regularized factorized quasi-Newton method for nonlinear least squares problems. Comput. Appl. Math. 2010, 29(2):195–214.
  12. Cohen AI: Stepsize analysis for descent methods. J. Optim. Theory Appl. 1981, 33: 187–205. 10.1007/BF00935546
  13. Dennis JE, Schnabel RB: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs; 1983.
  14. Shi ZJ, Shen J: New inexact line search method for unconstrained optimization. J. Optim. Theory Appl. 2005, 127(2):425–446. 10.1007/s10957-005-6553-6
  15. Sun WY, Han JY, Sun J: Global convergence of nonmonotone descent methods for unconstrained optimization problems. J. Comput. Appl. Math. 2002, 146: 89–98. 10.1016/S0377-0427(02)00420-X
  16. Wolfe P: Convergence conditions for ascent methods. SIAM Rev. 1969, 11: 226–235. 10.1137/1011036
  17. Nocedal J, Wright SJ: Numerical Optimization. Springer, New York; 1999.
  18. Byrd R, Nocedal J: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 1989, 26: 727–739. 10.1137/0726042
  19. Moré JJ, Garbow BS, Hillstrom KE: Testing unconstrained optimization software. ACM Trans. Math. Softw. 1981, 7: 17–41. 10.1145/355934.355936

Copyright

© Wan et al.; licensee Springer 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
