Three modified Polak-Ribière-Polyak conjugate gradient methods with sufficient descent property
Journal of Inequalities and Applications volume 2015, Article number: 125 (2015)
Abstract
In this paper, three modified Polak-Ribière-Polyak (PRP) conjugate gradient methods for unconstrained optimization are proposed. They are based on the two-term PRP method proposed by Cheng (Numer. Funct. Anal. Optim. 28:1217-1230, 2007), the three-term PRP method proposed by Zhang et al. (IMA J. Numer. Anal. 26:629-640, 2006), and the descent PRP method proposed by Yu et al. (Optim. Methods Softw. 23:275-293, 2008). These modified methods possess the sufficient descent property without any line search. Moreover, if the exact line search is used, they reduce to the classical PRP method. Under standard assumptions, we show that these three methods converge globally with a Wolfe line search. We also report some numerical results to show the efficiency of the proposed methods.
1 Introduction
Consider the unconstrained optimization problem:
where \(f: \mathcal{R}^{n}\rightarrow\mathcal{R}\) is continuously differentiable and its gradient \(g(x)\) is available. Conjugate gradient methods are efficient for solving (1), especially for large-scale problems. A conjugate gradient method generates an iterate sequence \(\{x_{k}\}\) by
where \(x_{k}\) is the current iterate, \(\alpha_{k}>0\) is the step size and computed by certain line search, and \(d_{k}\) is the search direction defined by
in which \(\beta_{k}\) is an important parameter. Generally, different conjugate gradient methods correspond to different choices of the parameter \(\beta_{k}\). Some well-known formulas for \(\beta_{k}\) include the Fletcher-Reeves (FR) [1], the Polak-Ribière-Polyak (PRP) [2, 3], the Liu-Storey (LS) [4], the Dai-Yuan (DY) [5], the Hestenes-Stiefel (HS) [6], and the conjugate descent (CD) [7] formulas. In this paper, we focus our attention on the PRP method, in which the parameter \(\beta_{k}\) is given by
where \(\|\cdot\|\) is the 2-norm. In the convergence analysis and implementation of conjugate gradient methods, one often requires an inexact line search such as a Wolfe line search, a strong Wolfe line search, or an Armijo line search. The Wolfe line search finds a step size \(\alpha_{k}\) satisfying
where \(0<\rho<\sigma<1\). The strong Wolfe line search computes \(\alpha_{k}\) such that
where \(0<\rho<1/2\) and \(\sigma\in(\rho,1)\). The Armijo line search finds a step size \(\alpha_{k}=\max\{\rho^{j}\mid j=0,1,\ldots\}\) satisfying
where \(\delta\in(0,1)\) and \(\rho\in(0,1)\) are two constants.
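For concreteness, the two Wolfe conditions (5) can be checked numerically. The following sketch (the quadratic test function, the starting point, and the step size are illustrative, not from the paper) verifies both conditions for a candidate step:

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, rho=0.1, sigma=0.5):
    """Check the (weak) Wolfe conditions (5) for step size alpha along d."""
    g = grad(x)
    armijo = f(x + alpha * d) <= f(x) + rho * alpha * (g @ d)   # sufficient decrease
    curvature = grad(x + alpha * d) @ d >= sigma * (g @ d)      # curvature condition
    return bool(armijo and curvature)

# Illustrative example: f(x) = ||x||^2, steepest-descent direction from x = (1, 1)
f = lambda x: x @ x
grad = lambda x: 2.0 * x
x = np.array([1.0, 1.0])
d = -grad(x)
print(satisfies_wolfe(f, grad, x, d, 0.25))   # True: alpha = 1/4 satisfies (5)
```

A very small step would pass the sufficient-decrease test but fail the curvature test, which is exactly what the curvature condition is designed to rule out.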
The PRP method is generally regarded as one of the most efficient conjugate gradient methods and has been studied by many researchers [2, 3, 8]. Polak and Ribière [2] proved that the PRP method with the exact line search is globally convergent under a strong convexity assumption on the objective function f. Gilbert and Nocedal [3] conducted an elegant analysis and showed that the PRP method is globally convergent if \(\beta_{k}^{\mathrm{PRP}}\) is restricted to be nonnegative (denoted \(\beta_{k}^{\mathrm{PRP+}}\)) and \(\alpha_{k}\) is determined by a line search step satisfying the sufficient descent condition
in addition to the Wolfe line search condition (5). Grippo and Lucidi [8] proposed new line search conditions which ensure that the PRP method is globally convergent for nonconvex problems. However, in numerical computations the method of Grippo and Lucidi [8] does not perform better than the PRP method that employs \(\beta_{k}^{\mathrm{PRP+}}\) and the Wolfe line search. Therefore, great attention has been given to the problem of finding methods that not only converge globally but also perform well numerically [9–16].
Recently, two new conjugate gradient methods obtained by modifying the PRP method, called the two-term PRP method (denoted CTPRP) and the three-term PRP method (denoted ZTPRP), have been proposed by Cheng [9] and Zhang et al. [10], respectively, in which the direction \(d_{k}\) is given by
or
where
An attractive feature of the CTPRP method and the ZTPRP method is that they satisfy \(g_{k}^{\top}d_{k}=-\|g_{k}\|^{2}\), independently of the line search used. Moreover, these two methods are globally convergent if a modified Armijo-type line search or a strong Wolfe line search is used, and the numerical results presented in [9, 10] show some potential advantages of the proposed methods. Furthermore, Yu et al. [11] proposed another variant of the PRP method, denoted YTPRP, whose direction is defined by
where
An attractive feature of \(d_{k}^{\mathrm{YTPRP}}\) is that it satisfies \(g_{k}^{\top}d_{k}\leq-(1-\frac{1}{4C})\|g_{k}\|^{2}\), which is also independent of the line search used.
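Both descent identities can be confirmed numerically. The sketch below builds the CTPRP and ZTPRP directions from arbitrary random vectors (illustrative data, not from the paper) and checks that \(g_{k}^{\top}d_{k}=-\|g_{k}\|^{2}\) holds regardless of the data:

```python
import numpy as np

# Arbitrary illustrative data standing in for g_{k-1}, g_k, d_{k-1}
rng = np.random.default_rng(0)
g_prev = rng.normal(size=3)
g = rng.normal(size=3)
d_prev = rng.normal(size=3)

y = g - g_prev
beta_prp = g @ y / (g_prev @ g_prev)          # classical PRP parameter

# Two-term direction of Cheng (CTPRP)
d_ct = -(1 + beta_prp * (g @ d_prev) / (g @ g)) * g + beta_prp * d_prev
# Three-term direction of Zhang et al. (ZTPRP)
theta = (g @ d_prev) / (g_prev @ g_prev)
d_zt = -g + beta_prp * d_prev - theta * y

# Both satisfy g^T d = -||g||^2 identically, for any data
print(np.isclose(g @ d_ct, -(g @ g)), np.isclose(g @ d_zt, -(g @ g)))
```

The cancellation is exact: in both directions, the term added to \(-g_{k}\) is constructed so that its inner product with \(g_{k}\) vanishes.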
Note that the global convergence of the above three methods is established under an Armijo-type line search or a strong Wolfe line search. It is well known that the step size generated by the Armijo line search may approach zero, in which case the reduction of the objective function value is very small; this slows down the optimization process. The strong Wolfe line search can avoid this phenomenon when the parameter \(\sigma\rightarrow0^{+}\), but in this case it approaches the exact line search, so its computational cost increases heavily. In fact, the Wolfe line search can also avoid the above phenomenon, and compared with the strong Wolfe line search it needs less computation to find a suitable step size at each iteration. Therefore, the Wolfe line search can enhance the efficiency of a conjugate gradient method.
In this paper, we investigate some variants of the PRP method under a Wolfe line search. Specifically, we slightly modify \(\beta_{k}^{\mathrm{PRP}}\) and propose three modified PRP methods based on the search directions \(d_{k}^{\mathrm{CTPRP}}\), \(d_{k}^{\mathrm{ZTPRP}}\), and \(d_{k}^{\mathrm{YTPRP}}\), which possess not only the sufficient descent property for any line search but also global convergence with a Wolfe line search. The remainder of the paper is organized as follows: In Section 2, we propose the modified PRP methods and prove their convergence. In Section 3, we present some numerical results using the test problems in [17]. Section 4 concludes the paper with final remarks.
2 Three modified PRP methods
First, we give the following basic assumptions as regards the objective function \(f(x)\).
Assumptions

(H1)
The level set \(R_{0}=\{x\mid f(x)\leq f(x_{0})\}\) is bounded.

(H2)
The gradient \(g(x)\) is Lipschitz continuous on an open convex set B that contains \(R_{0}\), i.e., there exists a constant \(L>0\) such that
$$\bigl\Vert g(x)-g(y)\bigr\Vert \leq L\Vert x-y\Vert , \quad \text{for any } x, y\in B. $$
Assumptions (H1) and (H2) imply that there exist positive constants γ and B such that
and
Recently, Wei et al. [18] proposed a variation of the FR method which we call the VFR method, in which the parameter \(\beta_{k}\) is defined by
where \(\mu_{1}\in(0,+\infty)\), \(\mu_{2}\in(\mu_{1}+\epsilon_{1},+\infty)\), \(\mu_{3}\in(0,+\infty)\), and \(\epsilon_{1}\) is any given positive constant. An attractive feature of the VFR method is that the sufficient descent condition \(g_{k}^{\top}d_{k}\leq-(1-\frac{\mu_{1}}{\mu_{2}})\|g_{k}\|^{2}\) always holds, independently of the line search used. The idea of Wei et al. [18] was further extended to the Wei-Yao-Liu method by Dai and Wen [19]. Here, motivated by the ideas of Wei et al. [18] and Dai and Wen [19], we construct two modified PRP methods, in which the parameter \(\beta_{k}\) is specified as follows:
or
where \(\mu\geq0\) is a constant. Obviously, if \(\mu=0\) or the line search is exact, the new parameter \(\beta_{k}^{\mathrm{MPRP}}\) or \(\beta _{k}^{\mathrm{MPRP+}}\) reduces to the classical parameter \(\beta _{k}^{\mathrm{PRP}}\) in [2] or \(\beta_{k}^{\mathrm{PRP+}}\) in [3].
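A minimal numerical sketch of the modified parameter. Since (11) and (12) are displayed equations, the denominator form \(\mu|g_{k}^{\top}d_{k-1}|+\|g_{k-1}\|^{2}\) used below is taken from (14) and (16); the test vectors are hypothetical. The check confirms that the parameter reduces to \(\beta_{k}^{\mathrm{PRP}}\) when \(\mu=0\):

```python
import numpy as np

def beta_mprp(g, g_prev, d_prev, mu=1e-4, plus=False):
    """Modified PRP parameter; the denominator mu*|g^T d_prev| + ||g_prev||^2
    is assumed from (14) and (16). With mu = 0 it reduces to classical PRP,
    and plus=True gives the nonnegative variant (MPRP+)."""
    y = g - g_prev
    beta = g @ y / (mu * abs(g @ d_prev) + g_prev @ g_prev)
    return max(beta, 0.0) if plus else beta

# Hypothetical data for illustration
g_prev = np.array([1.0, -2.0])
g = np.array([0.5, 1.0])
d_prev = np.array([-1.0, 2.0])

classical = g @ (g - g_prev) / (g_prev @ g_prev)   # beta_k^PRP
print(np.isclose(beta_mprp(g, g_prev, d_prev, mu=0.0), classical))   # True
```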
First, using the parameter \(\beta_{k}^{\mathrm{MPRP}}\) and the direction \(d_{k}^{\mathrm{CTPRP}}\), we present the following conjugate gradient method (denoted the TMPRP1 method).
TMPRP1 method
(Twoterm modified PRP method)
 Step 0.:

Give an initial point \(x_{0}\in\mathcal{R}^{n}\), \(\mu\geq0\), \(0<\rho<\sigma<1\), and set \(d_{0}=-g_{0}\), \(k:=0\).
 Step 1.:

If \(\|g_{k}\|=0\), then stop; otherwise go to Step 2.
 Step 2.:

Compute \(d_{k}\) by
$$ d_{k}=\left \{ \begin{array}{l@{\quad}l} -g_{k},& \text{if } k=0, \\ -\bigl(1+\beta_{k}^{\mathrm{MPRP}}\frac{g_{k}^{\top}d_{k-1}}{\|g_{k}\|^{2}}\bigr)g_{k}+\beta_{k}^{\mathrm{MPRP}}d_{k-1},& \text{if } k\geq1. \end{array} \right . $$(13)Determine the step size \(\alpha_{k}\) by the Wolfe line search (5).
 Step 3.:

Set \(x_{k+1}=x_{k}+\alpha_{k} d_{k}\), and \(k:=k+1\); go to Step 1.
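Steps 0-3 above can be sketched as follows. This is a sketch, not the authors' Matlab code: the \(\beta_{k}^{\mathrm{MPRP}}\) denominator is assumed from (14) and (16), and the Wolfe search (5) is implemented by a simple bisection/doubling scheme; the quadratic test problem is illustrative.

```python
import numpy as np

def wolfe_search(f, grad, x, d, rho=0.1, sigma=0.5, max_bisect=50):
    """Bisection search for a step size satisfying the weak Wolfe conditions (5)."""
    lo, hi, alpha = 0.0, np.inf, 1.0
    fx, gd = f(x), grad(x) @ d
    for _ in range(max_bisect):
        if f(x + alpha * d) > fx + rho * alpha * gd:   # sufficient decrease fails
            hi = alpha
        elif grad(x + alpha * d) @ d < sigma * gd:     # curvature condition fails
            lo = alpha
        else:
            break
        alpha = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * alpha
    return alpha

def tmprp1(f, grad, x0, mu=1e-4, tol=1e-5, max_iter=1000):
    x = x0
    g = grad(x)
    d = -g                                    # Step 0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:           # Step 1
            break
        alpha = wolfe_search(f, grad, x, d)   # Wolfe line search (5)
        x = x + alpha * d                     # Step 3
        g_prev, g = g, grad(x)
        # beta^MPRP with denominator assumed from (14)/(16); d is still d_{k-1} here
        beta = g @ (g - g_prev) / (mu * abs(g @ d) + g_prev @ g_prev)
        d = -(1 + beta * (g @ d) / (g @ g)) * g + beta * d   # direction (13)
    return x

# Convex quadratic test problem; its unique minimizer is the origin
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = tmprp1(f, grad, np.array([1.0, -1.5]))
print(np.linalg.norm(x_star) < 1e-4)
```

Note that the direction update uses the old \(d_{k-1}\) on the right-hand side, so the descent identity \(g_{k}^{\top}d_{k}=-\|g_{k}\|^{2}\) holds at every iterate by construction.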
Similarly, using the parameter \(\beta_{k}^{\mathrm{MPRP}}\) and the direction \(d_{k}^{\mathrm{ZTPRP}}\), we present the following conjugate gradient method (denoted the TMPRP2 method).
TMPRP2 method
(Threeterm modified PRP method)
 Step 0.:

Give an initial point \(x_{0}\in\mathcal{R}^{n}\), \(\mu\geq0\), \(0<\rho<\sigma<1\), and set \(d_{0}=-g_{0}\), \(k:=0\).
 Step 1.:

If \(\|g_{k}\|=0\), then stop; otherwise go to Step 2.
 Step 2.:

Compute \(d_{k}\) by
$$ d_{k}=\left \{ \begin{array}{l@{\quad}l} -g_{k},& \text{if } k=0, \\ -g_{k}+\beta_{k}^{\mathrm{MPRP}}d_{k-1}-\vartheta_{k} y_{k-1}, & \text{if } k\geq1, \end{array} \right . $$(14)where \(\vartheta_{k}=g_{k}^{\top}d_{k-1}/(\|g_{k-1}\|^{2}+\mu|g_{k}^{\top}d_{k-1}|)\). Determine the step size \(\alpha_{k}\) by the Wolfe line search (5).
 Step 3.:

Set \(x_{k+1}=x_{k}+\alpha_{k} d_{k}\), and \(k:=k+1\); go to Step 1.
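The three-term direction (14) also satisfies \(g_{k}^{\top}d_{k}=-\|g_{k}\|^{2}\) exactly, because \(\beta_{k}^{\mathrm{MPRP}}\) and \(\vartheta_{k}\) share the same denominator. A quick check on arbitrary illustrative data:

```python
import numpy as np

# Arbitrary illustrative vectors for g_{k-1}, g_k, d_{k-1}
rng = np.random.default_rng(1)
g_prev = rng.normal(size=4)
g = rng.normal(size=4)
d_prev = rng.normal(size=4)

y = g - g_prev
mu = 1e-4
den = g_prev @ g_prev + mu * abs(g @ d_prev)   # shared denominator, as in (14)
beta = g @ y / den                             # beta_k^MPRP
theta = (g @ d_prev) / den                     # vartheta_k

d = -g + beta * d_prev - theta * y             # three-term direction (14)
print(np.isclose(g @ d, -(g @ g)))             # True for any data
```

In the inner product \(g_{k}^{\top}d_{k}\), the terms \(\beta_{k}g_{k}^{\top}d_{k-1}\) and \(\vartheta_{k}g_{k}^{\top}y_{k-1}\) are equal and cancel, leaving \(-\|g_{k}\|^{2}\).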
Using a parameter similar to that of the YTPRP method, we present the following conjugate gradient method (denoted the TMPRP3 method).
TMPRP3 method
(Threeterm descent PRP method)
 Step 0.:

Give an initial point \(x_{0}\in\mathcal{R}^{n}\), \(\mu\geq0\), \(t>1\), \(0<\rho<\sigma<1\), and set \(d_{0}=-g_{0}\), \(k:=0\).
 Step 1.:

If \(\|g_{k}\|=0\), then stop; otherwise go to Step 2.
 Step 2.:

Compute \(d_{k}\) by
$$ d_{k}=\left \{ \begin{array}{l@{\quad}l} -g_{k}, &\text{if } k=0, \\ -g_{k}+\beta_{k}^{\mathrm{VPRP}}d_{k-1}+\nu_{k} (y_{k-1}-s_{k-1}), &\text{if } k\geq1, \end{array} \right . $$(15)where
$$ \begin{aligned} &\beta_{k}^{\mathrm{VPRP}}=\frac{g_{k}^{\top}(g_{k}-g_{k-1})}{\mu|g_{k}^{\top}d_{k-1}|+\|g_{k-1}\|^{2}}-t \frac{\|y_{k-1}\|^{2}g_{k}^{\top}d_{k-1}}{(\mu|g_{k}^{\top}d_{k-1}|+\|g_{k-1}\|^{2})^{2}}, \\ &\nu_{k}=\frac{g_{k}^{\top}d_{k-1}}{\mu|g_{k}^{\top}d_{k-1}|+\|g_{k-1}\|^{2}}. \end{aligned} $$(16)Determine the step size \(\alpha_{k}\) by the Wolfe line search (5).
 Step 3.:

Set \(x_{k+1}=x_{k}+\alpha_{k} d_{k}\), and \(k:=k+1\); go to Step 1.
Remark 2.1
If the constant \(\mu=0\), then the TMPRP1 method and TMPRP2 method reduce to the methods proposed by Cheng [9] and Zhang et al. [10], respectively, and the TMPRP3 method reduces to a method similar to that proposed by Yu et al. [20].
Remark 2.2
Obviously, if the line search is exact, then the direction generated by (13), (14), or (15) reduces to (3) with \(\beta_{k}=\beta_{k}^{\mathrm{PRP}}\). Therefore, in the following we assume that \(\mu>0\).
Remark 2.3
From (13) and (14), we can easily obtain
$$ g_{k}^{\top}d_{k}=-\|g_{k}\|^{2}. $$(17)
This indicates that the TMPRP1 method and the TMPRP2 method satisfy the sufficient descent property. In addition, from the following lemma, we can see that the TMPRP3 method also satisfies this property.
Lemma 2.1
Let \(\{x_{k}\}\) and \(\{d_{k}\}\) be generated by the TMPRP3 method. Then we have
$$ g_{k}^{\top}d_{k}\leq-\biggl(1-\frac{1}{t}\biggr)\|g_{k}\|^{2}. $$(18)
Proof
which indicates that (18) holds by induction since \(d_{0}=g_{0}\) and \(t>1\). This completes the proof. □
Remark 2.4
From the proof of Lemma 2.1, we can see that if the term \(s_{k1}\) in \(d_{k}\) is deleted, then the above sufficient descent property still holds.
The global convergence proofs of the above three methods are similar; here we only prove the global convergence of the TMPRP1 method, and for the other two methods the argument is analogous.
The following lemma, called the Zoutendijk condition, is often used to prove the global convergence of conjugate gradient methods. It was originally given by Zoutendijk in [21].
Lemma 2.2
Suppose that \(x_{0}\) is a starting point for which assumptions (H1) and (H2) hold. Consider any method in the form of (2), where \(d_{k}\) is a descent direction and \(\alpha_{k}\) satisfies the Wolfe condition (5) or the strong Wolfe condition (6). Then we have
This together with (17) shows that
Definition 2.1
The function \(f(x)\) is said to be uniformly convex on \(\mathcal{R}^{n}\) if there is a positive constant m such that
$$ d^{\top}\nabla^{2}f(x)d\geq m\|d\|^{2}, \quad \text{for all } x, d\in\mathcal{R}^{n}, $$
where \(\nabla^{2}f(x)\) is the Hessian matrix of the function \(f(x)\).
Now we prove the strong global convergence of the TMPRP1 method for uniformly convex functions.
Lemma 2.3
Let the sequences \(\{x_{k}\}\) and \(\{d_{k}\}\) be generated by the TMPRP1 method, and let the function \(f(x)\) be uniformly convex. Then we have
where \(c_{1}=(1-\rho)^{-1}m/2\).
Proof
See Lemma 2.1 in [22]. □
The proof of the following theorem is similar to that of Theorem 2.1 in [22]. For completeness, we give the proof.
Theorem 2.1
Suppose that assumptions (H1) and (H2) hold and \(f(x)\) is uniformly convex. Then we have
$$ \lim_{k\rightarrow\infty}\|g_{k}\|=0. $$
Proof
From (11), (20), and (H2), we have
This together with (13) shows that
Then, letting \(\sqrt{A}=1+\frac{2L}{c_{1}}\), we get \(\|d_{k}\|^{2}\leq A\|g_{k}\|^{2}\). So, by (19), we get
This completes the proof. □
We are going to investigate the global convergence of the TMPRP1 method with the Wolfe line search (5) for nonconvex functions. In the rest of this section, we use \(\beta_{k}^{\mathrm{MPRP}+}\) to replace \(\beta_{k}^{\mathrm{MPRP}}\) in (13).
The next lemma corresponds to Lemma 4.3 in [23] and Theorem 3.2 in [24].
Lemma 2.4
Suppose that assumptions (H1) and (H2) hold. Let \(\{x_{k}\}\) be the sequence generated by the TMPRP1 method. If there exists a constant \(\varepsilon>0\) such that \(\|g_{k}\|\geq\varepsilon\) for all \(k\geq0\), then we have
where \(u_{k}=d_{k}/\|d_{k}\|\).
Proof
From (17) and \(\|g_{k}\|\geq\varepsilon\) for all k, we have \(\|d_{k}\|>0\) for all k. Therefore, \(u_{k}\) is well defined. Define
Then we have
Since \(u_{k1}\) and \(u_{k}\) are unit vectors, we can write
Noting that \(\delta_{k}\geq0\), we get
From (10), (11), and (H2), we have
From (9), (10), and (23), it follows that there exists a constant \(M_{1}\geq0\) such that
Thus, from (19) and (24), we get
which together with (22) completes the proof. □
The following theorem establishes the global convergence of the TMPRP1 method with Wolfe line search (5) for general nonconvex functions. The proof is analogous to that of Theorem 3.2 in [24].
Theorem 2.2
Let assumptions (H1) and (H2) hold. Then the sequence \(\{x_{k}\}\) generated by the TMPRP1 method satisfies
$$ \liminf_{k\rightarrow\infty}\|g_{k}\|=0. $$(25)
Proof
Assume that the conclusion (25) is not true. Then there exists a constant \(\varepsilon>0\) such that \(\|g_{k}\|\geq\varepsilon\) for all \(k\geq0\).
The proof is divided into the following two steps.
Step I. A bound on the steps \(s_{k}\). We observe that for any \(l\geq k\),
where \(s_{j}=x_{j+1}-x_{j}\) and \(u_{k}\) is defined in Lemma 2.4. Using the triangle inequality and \(\|u_{k}\|=1\), we can write (26) as
Let Δ be an arbitrary but fixed positive integer. It follows from Lemma 2.4 that there is an index \(k_{\Delta}\) such that
If \(j>k\geq k_{\Delta}\) with \(j-k\leq\Delta\), then by (28) and the Cauchy-Schwarz inequality, we have
Combining this with (27) yields
where \(l>k\geq k_{\Delta}\) with \(l-k\leq\Delta\).
Step II. A bound on the direction \(d_{k}\). From (13) and (24), we have
By the same argument as in Case III of the proof of Theorem 3.2 in [3], we obtain the conclusion (25). This completes the proof. □
Remark 2.5
From Theorem 2.2, we can see that the TMPRP1 method possesses better convergence properties than the CTPRP method in [9]: the TMPRP1 method converges globally for nonconvex minimization problems with a Wolfe line search, while the CTPRP method requires a strong Wolfe line search. We also note that the term \(\mu|g_{k}^{\top}d_{k-1}|\) in the denominator of (11) plays an important role in the proof of Lemma 2.4.
3 Numerical results
In this section, we present some numerical results to compare the performance of the TMPRP1 method, the CG_DESCENT method in [24], and the DTPRP method in [19].

TMPRP1: the TMPRP1 method with Wolfe line search (5), with \(\mu=10^{-4}\), \(\rho=0.1\), \(\sigma=0.5\);

CG_DESCENT: the CG_DESCENT method with Wolfe line search (5), with \(\rho=0.1\), \(\sigma=0.5\);

DTPRP: the DTPRP method with Wolfe line search (5), with \(\mu =1.2\), \(\rho=0.1\), \(\sigma=0.5\).
All codes were written in Matlab 7.1 and run on a portable computer. We stopped the iteration if the number of iterations exceeded 1,000 or \(\|g_{k}\|<10^{-5}\). We use some test problems in [17] with different dimensions. Our numerical results are listed in the form NI/NF/CPU, where NI, NF, and CPU denote the number of iterations, the number of function evaluations, and the CPU time in seconds, respectively; 'F' means the method failed. The code of the Wolfe line search (5) is adapted from [25]. In Figures 1 and 2, we adopt the performance profiles of Dolan and Moré [12] to compare the CPU time of the TMPRP1, CG_DESCENT, and DTPRP methods. That is, for each method, we plot the fraction P of problems for which the method is within a factor τ of the best time. The left side of each figure gives the percentage of the test problems for which a method is fastest, while the right side gives the percentage of the test problems that are successfully solved by each method. The top curve corresponds to the method that solved the most problems in a time within a factor τ of the best time. From Table 1 and Figures 1 and 2, we can see that the TMPRP1 method performs better than the CG_DESCENT method and the DTPRP method; thus the proposed TMPRP1 method is computationally efficient.
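The Dolan-Moré performance profile used in Figures 1 and 2 is straightforward to compute. The sketch below uses hypothetical timings (not the paper's data), with `np.inf` marking a failure 'F':

```python
import numpy as np

def performance_profile(times):
    """Dolan-More performance profile. `times` is an (n_problems, n_solvers)
    array of CPU times, np.inf marking failures. Returns the sorted ratio
    thresholds tau and, for each tau, the fraction of problems each solver
    solved within a factor tau of the best time."""
    ratios = times / times.min(axis=1, keepdims=True)   # factor of best time
    taus = np.sort(np.unique(ratios[np.isfinite(ratios)]))
    profile = {t: (ratios <= t).mean(axis=0) for t in taus}
    return taus, profile

# Hypothetical timings for three solvers on three problems (illustration only)
times = np.array([[0.8, 1.0, 1.2],
                  [2.0, 1.5, np.inf],
                  [0.5, 0.9, 0.7]])
taus, profile = performance_profile(times)
print(profile[1.0])   # fraction of problems on which each solver was fastest
```

At \(\tau=1\) the profile gives the "left side" of the figure (fraction of problems on which a method is fastest); as \(\tau\rightarrow\infty\) it approaches the fraction of problems each method solves at all.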
4 Conclusion
This paper proposed three modified PRP conjugate gradient methods, which improve some recently proposed PRP conjugate gradient methods. The global convergence of the proposed methods is established under the Wolfe line search. The effectiveness of the proposed methods has been shown by some numerical examples. We find that the performance of the TMPRP1 method is related to the parameter μ in \(\beta_{k}^{\mathrm{MPRP}}\); therefore, how to choose a suitable parameter μ deserves further investigation.
References
Fletcher, R, Reeves, C: Function minimization by conjugate gradients. Comput. J. 7, 149-154 (1964)
Polak, E, Ribière, G: Note sur la convergence des méthodes de directions conjuguées. Rev. Fr. Inform. Rech. Opér. 16, 35-43 (1969)
Gilbert, JC, Nocedal, J: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21-42 (1992)
Liu, YL, Storey, CS: Efficient generalized conjugate gradient algorithms, part 1: theory. J. Optim. Theory Appl. 69, 129-137 (1991)
Dai, YH, Yuan, YX: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177-182 (2000)
Hestenes, MR, Stiefel, EL: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409-432 (1952)
Fletcher, R: Practical Methods of Optimization. Volume 1: Unconstrained Optimization. Wiley, New York (1987)
Grippo, L, Lucidi, S: A globally convergent version of the Polak-Ribière gradient method. Math. Program. 78, 375-391 (1997)
Cheng, WY: A two-term PRP-based descent method. Numer. Funct. Anal. Optim. 28, 1217-1230 (2007)
Zhang, L, Zhou, WJ, Li, DH: A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26, 629-640 (2006)
Yu, GH, Guan, LT, Chen, WF: Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization. Optim. Methods Softw. 23, 275-293 (2008)
Dolan, ED, Moré, JJ: Benchmarking optimization software with performance profiles. Math. Program. 91, 201-213 (2002)
Wei, ZX, Li, G, Qi, LQ: Global convergence of the Polak-Ribière-Polyak conjugate gradient method with inexact line searches for nonconvex unconstrained optimization problems. Math. Comput. 77, 2173-2193 (2008)
Li, G, Tang, CM, Wei, ZX: New conjugacy condition and related new conjugate gradient methods for unconstrained optimization. J. Comput. Appl. Math. 202, 523-539 (2007)
Yu, G, Guan, L, Li, G: Global convergence of modified Polak-Ribière-Polyak conjugate gradient methods with sufficient descent property. J. Ind. Manag. Optim. 3, 565-579 (2008)
Dai, YH, Kou, CX: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23, 296-320 (2013)
Andrei, N: Unconstrained optimization by direct searching (2007). http://camo.ici.ro/neculai/UNO/UNO.FOR
Wei, ZX, Yao, SW, Liu, LY: The convergence properties of some new conjugate gradient methods. Appl. Math. Comput. 183, 1341-1350 (2006)
Dai, ZF, Wen, FH: Another improved Wei-Yao-Liu nonlinear conjugate gradient method with sufficient descent property. Appl. Math. Comput. 218, 7421-7430 (2012)
Yu, GH, Zhao, YL, Wei, ZX: A descent nonlinear conjugate gradient method for large-scale unconstrained optimization. Appl. Math. Comput. 187, 636-643 (2007)
Zoutendijk, G: Nonlinear programming, computational methods. In: Abadie, J (ed.) Integer and Nonlinear Programming, pp. 37-86. North-Holland, Amsterdam (1970)
Dai, ZF, Tian, BS: Global convergence of some modified PRP nonlinear conjugate gradient methods. Optim. Lett. 5(4), 615-630 (2011)
Dai, YH, Liao, LZ: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 97-101 (2001)
Hager, WW, Zhang, HC: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2, 35-58 (2006)
Wu, QZ, Zheng, ZY, Deng, W: Operations Research and Optimization: MATLAB Programming, pp. 66-69. China Machine Press, Beijing (2010)
Acknowledgements
The authors gratefully acknowledge the helpful comments and suggestions of the anonymous reviewers. This work was partially supported by the domestic visiting scholar project funding of Shandong Province outstanding young teachers in higher schools, the foundation of Scientific Research Project of Shandong Universities (No. J13LI03), and the Shandong Province Statistical Research Project (No. 20143038).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The first author has designed the three methods and the second author has refined them. Both authors have equally contributed in the numerical results. All authors read and approved the final manuscript.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
About this article
Cite this article
Sun, M., Liu, J. Three modified Polak-Ribière-Polyak conjugate gradient methods with sufficient descent property. J. Inequal. Appl. 2015, 125 (2015). https://doi.org/10.1186/s13660-015-0649-9
DOI: https://doi.org/10.1186/s13660-015-0649-9