 Research
 Open Access
A scaled three-term conjugate gradient method for unconstrained optimization
 Ibrahim Arzuka^{1},
 Mohd R Abu Bakar^{1} and
 Wah June Leong^{1}
https://doi.org/10.1186/s13660-016-1239-1
© Arzuka et al. 2016
Received: 30 May 2016
Accepted: 9 November 2016
Published: 13 December 2016
Abstract
Conjugate gradient methods play an important role in many fields of application due to their simplicity, low memory requirements, and global convergence properties. In this paper, we propose an efficient three-term conjugate gradient method that utilizes the DFP update for the inverse Hessian approximation and satisfies both the sufficient descent and the conjugacy conditions. The basic idea is that the DFP update is restarted with a multiple of the identity matrix at every iteration. An acceleration scheme is incorporated in the proposed method to enhance the reduction in function value. Numerical results from an implementation of the proposed method on a set of standard unconstrained optimization problems show that the proposed method is promising and exhibits superior numerical performance in comparison with other well-known conjugate gradient methods.
1 Introduction
2 Conjugate gradient method via memoryless quasi-Newton method
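As the abstract states, the method is built from the DFP update of the inverse Hessian approximation, restarted from a scaled identity at every iteration, which yields a three-term search direction. The paper's exact scaling \(\mu_{k}\) and coefficients \(\varphi_{1}\), \(\varphi_{2}\) from equations (11), (13), and (14) are not reproduced in this excerpt; the sketch below therefore uses the standard memoryless DFP update with an assumed spectral scaling \(\theta = s^{T}y/y^{T}y\) purely as an illustration of the three-term structure:

```python
import numpy as np

def memoryless_dfp_direction(g_new, s, y, theta=None):
    """Three-term direction d = -H g from a memoryless DFP update,
    where the inverse Hessian approximation is restarted from a
    scaled identity theta*I at every iteration.

    The paper's exact scaling (11) and coefficients (13), (14) are
    not shown in this excerpt; theta defaults here to the spectral
    scaling s^T y / y^T y as an illustrative (assumed) choice.
    """
    sy = s @ y
    yy = y @ y
    if theta is None:
        theta = sy / yy  # assumed scaling, not the paper's mu_k
    # H = theta*I - theta*(y y^T)/(y^T y) + (s s^T)/(s^T y), so
    # d = -H g expands into three terms along g, y, and s:
    return (-theta * g_new
            + theta * (y @ g_new) / yy * y
            - (s @ g_new) / sy * s)
```

When \(s^{T}y>0\) and \(\theta>0\), the Cauchy-Schwarz inequality gives \(g^{T}d\leq0\), which is the mechanism behind the descent property established in Section 3.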
2.1 Algorithm (STCG)
Algorithm 1
 Step 1.:

Select an initial point \(x_{0}\) and determine \(f (x_{0} )\) and \(g (x_{0} )\). Set \(d_{0}=-g_{0}\) and \(k=0\).
 Step 2.:

Test the stopping criterion \(\Vert g_{k}\Vert \leq\epsilon \); if it is satisfied, stop. Otherwise go to Step 3.
 Step 3.:

Determine the step length \(\alpha_{k}\) as follows. Given \(\delta\in ( 0,1 ) \) and \(p_{1},p_{2}\) with \(0< p_{1}< p_{2}<1\):
 (i) Set \(\alpha=1\).
 (ii) Test the relation$$ f (x_{k}+\alpha d_{k} )-f (x_{k} )\leq\alpha \delta g^{T}_{k} d_{k}. $$(16)
 (iii) If (16) is satisfied, then set \(\alpha_{k}=\alpha\) and go to Step 4; otherwise choose a new \(\alpha\in [p_{1}\alpha,p_{2}\alpha ]\) and return to (ii).
 Step 4.:

Determine \(z=x_{k}+\alpha_{k} d_{k}\), compute \(g_{z}=\nabla f (z )\) and \(y_{k}=g_{k}-g_{z}\).
 Step 5.:

Determine \(r_{k}=\alpha_{k} g^{T}_{k} d_{k}\) and \(q_{k}=\alpha_{k} y^{T}_{k} d_{k}\).
 Step 6.:

If \(q_{k}\neq 0\), then set \(\vartheta_{k}=\frac{r_{k}}{q_{k}}\) and \(x_{k+1}=x_{k}+\vartheta_{k}\alpha_{k} d_{k}\); otherwise \(x_{k+1}=x_{k}+\alpha_{k} d_{k}\).
 Step 7.:

Determine the search direction \(d_{k+1}\) by (12), where \(\mu_{k}\), \(\varphi_{1}\), and \(\varphi_{2}\) are computed by (11), (13), and (14), respectively.
 Step 8.:

Set \(k:=k+1\) and go to Step 2.
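The steps above can be sketched in a few lines of NumPy. Because the paper's search direction (12) and the quantities \(\mu_{k}\), \(\varphi_{1}\), \(\varphi_{2}\) are not reproduced in this excerpt, the sketch accepts the direction update as a callable and falls back to the steepest-descent direction \(-g\) as a stand-in; the backtracking factor \((p_{1}+p_{2})/2\) is likewise just one admissible choice from \([p_{1}\alpha,p_{2}\alpha]\):

```python
import numpy as np

def stcg(f, grad, x0, eps=1e-6, delta=1e-4, p1=0.1, p2=0.5,
         direction=None, max_iter=1000):
    """Sketch of Algorithm 1 (STCG): Armijo backtracking (Steps 2-3)
    plus the acceleration scheme (Steps 4-6).

    The paper's direction (12) is not shown in this excerpt, so
    `direction` defaults to -g as an illustrative stand-in.
    """
    if direction is None:
        direction = lambda x, g: -g
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                     # Step 1: d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:           # Step 2: stopping test
            break
        alpha, fx = 1.0, f(x)                  # Step 3: backtracking
        while f(x + alpha * d) - fx > alpha * delta * (g @ d):  # (16)
            alpha *= 0.5 * (p1 + p2)           # new alpha in [p1*a, p2*a]
        z = x + alpha * d                      # Step 4
        y = g - grad(z)                        # y_k = g_k - g_z
        r = alpha * (g @ d)                    # Step 5
        q = alpha * (y @ d)
        if q != 0:                             # Step 6: accelerated step
            x = x + (r / q) * alpha * d
        else:
            x = z
        g = grad(x)
        d = direction(x, g)                    # Step 7: direction (12)
    return x
```

On a convex quadratic the acceleration factor \(\vartheta_{k}=r_{k}/q_{k}\) reproduces the exact line-search step along \(d_{k}\), which is why the scheme enhances the reduction in function value beyond what the Armijo step alone achieves.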
3 Convergence analysis
In this section, we analyze the global convergence of the proposed method, where we assume that \(g_{k}\neq0\) for all \(k\geq0\); otherwise a stationary point has been obtained. First, we show that the search direction satisfies the sufficient descent and the conjugacy conditions. In order to present the results, the following assumptions are needed.
Assumption 1
Proof
Now, we shall state the sufficient descent property of the proposed search direction in the following lemma.
Lemma 3.2
Suppose that Assumption 1 holds on the objective function f. Then the search direction (12) satisfies the sufficient descent condition \(g_{k+1}^{T} d_{k+1}\leq-c\Vert g_{k+1}\Vert ^{2}\).
Proof
Lemma 3.3
Suppose that Assumption 1 holds, then the search direction (12) satisfies the conjugacy condition (27).
Proof
Lemma 3.4
Suppose that Assumption 1 holds. Then there exists a constant \(p>0\) such that \(\Vert d_{k+1}\Vert \leq p\Vert g_{k+1}\Vert \), where \(d_{k+1}\) is defined by (12).
Proof
In order to establish the convergence result, we give the following lemma.
Lemma 3.5
Proof
Theorem 3.6
Proof
4 Numerical results
5 Conclusion
We have presented a new three-term conjugate gradient method for solving nonlinear large-scale unconstrained optimization problems, obtained by modifying the memoryless quasi-Newton DFP update of the inverse Hessian approximation. A remarkable property of the proposed method is that both the sufficient descent and the conjugacy conditions are satisfied, and global convergence is established under some mild assumptions. The numerical results show that the proposed method is promising and more efficient than the other methods considered.
Declarations
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.