A quasi-Newton algorithm for large-scale nonlinear equations
Linghua Huang
https://doi.org/10.1186/s13660-017-1301-7
© The Author(s) 2017
Received: 24 November 2016
Accepted: 18 January 2017
Published: 3 February 2017
Abstract
In this paper, an algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm’s initial point has no restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to obtain the step length \(\alpha_{k}\). The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the \(1+q\)-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
Keywords
1 Introduction
Next we present some techniques for the calculation of \(d_{k}\). At present, there exist many well-known methods for computing \(d_{k}\), such as the Newton method, the trust region method, and the quasi-Newton method.
The earliest nonmonotone line search framework was developed by Grippo, Lampariello, and Lucidi in [40] for Newton’s methods. Many subsequent papers have exploited nonmonotone line search techniques of this nature (see [41–44] etc.), which shows that the nonmonotone technique works well in many cases. Considering these points, Zhu [31] proposed the nonmonotone line search (1.7). From (1.7), we can see that the Jacobian matrix \(\nabla e(x)\) must be computed at every iteration, which may be expensive when n is large. Thus, one might prefer to remove the matrix from the line search, leading to a new nonmonotone technique.
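To illustrate the idea of a Jacobian-free nonmonotone line search, the following sketch accepts a step when the merit function \(p(x)=\Vert e(x)\Vert^{2}\) falls below the maximum of the last M merit values minus a sufficient-decrease term. This is a generic Grippo-Lampariello-Lucidi-style test built only from function values, not the paper's exact condition (1.13); the parameter names `delta` and `M` are illustrative assumptions.

```python
import numpy as np

def nonmonotone_armijo(e, x, d, p_hist, M=12, delta=1e-4, r=0.1):
    """Backtracking nonmonotone Armijo-type line search on the merit
    function p(x) = ||e(x)||^2.  The acceptance test compares against
    the maximum of the last M merit values, so no Jacobian is needed.
    A generic sketch, not the paper's exact condition (1.13)."""
    p_max = max(p_hist[-M:])          # nonmonotone reference value
    alpha = 1.0                       # alpha = 1, r, r^2, r^3, ...
    while True:
        p_trial = np.linalg.norm(e(x + alpha * d)) ** 2
        # Jacobian-free sufficient-decrease test
        if p_trial <= p_max - delta * alpha ** 2 * np.dot(d, d):
            return alpha
        alpha *= r
        if alpha < 1e-12:             # safeguard against an endless loop
            return alpha
```

Because only values of \(e\) enter the test, each backtracking step costs one residual evaluation instead of a Jacobian evaluation.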

A sub-algorithm is designed to provide the initial point of the main algorithm.

A new nonmonotone line search technique is presented; moreover, the Jacobian matrix \(\nabla e_{k}\) need not be computed at every iteration.

The given method possesses the sufficient descent property for the norm function \(p(x)\).

The global convergence and the \(1+q\)-order convergence rate of the new method are established under suitable conditions.

Numerical results show that this method is more effective than other similar methods.
We organize the paper as follows. In Section 2, the algorithms are stated. Convergence results are established in Section 3, and numerical results are reported in Section 4. In the last section, our conclusion is given. Throughout this paper, we use the following notation: \(\Vert \cdot \Vert \) is the Euclidean norm, and \(e(x_{k})\) and \(e(x_{k+1})\) are abbreviated as \(e_{k}\) and \(e_{k+1}\), respectively.
2 Algorithm
In this section, we design a sub-algorithm and the main algorithm, respectively. These two algorithms are listed as follows.
 Step 0:
Given any \(x_{0} \in\Re^{n}, \delta_{1},\delta_{2} \in(0,1), \epsilon_{k}>0\), \(r \in(0,1), \epsilon\in[0,1)\), let \(k:=0\).
 Step 1:
If \(\Vert e_{k}\Vert \leq\epsilon\), stop. Otherwise, let \(d_{k}=-e_{k}\) and go to the next step.
 Step 2:
Choose \(\epsilon_{k+1}\) satisfying (1.4) and let \(\alpha_{k}=1,r,r^{2},r^{3},\ldots \) until (1.3) holds.
 Step 3:
Let \(x_{k+1}=x_{k} +\alpha_{k} d_{k}\).
 Step 4:
If \(\Vert e_{k+1}\Vert \leq\epsilon\), stop. Otherwise, go to the next step.
 Step 5:
Compute \(d_{k+1}=-e_{k+1}+\beta_{k}d_{k}\), set \(k:=k+1\), and go to Step 2.
Remark
(i) \(\beta_{k}\) in Step 5 is a scalar, and different choices of \(\beta_{k}\) determine different CG methods.
(ii) From Step 2 and [28], it is easy to deduce that there exists \(\alpha_{k}\) such that (1.3) holds. Thus, the sub-algorithm is well defined.
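A minimal sketch of the sub-algorithm above might look as follows. The PRP-style choice of \(\beta_{k}\) and the simple Armijo-type acceptance test (standing in for condition (1.3)) are illustrative assumptions; the paper leaves \(\beta_{k}\) generic.

```python
import numpy as np

def cg_subalgorithm(e, x0, eps=1e-4, r=0.1, delta=1e-4, max_iter=500):
    """Sketch of the CG sub-algorithm that supplies the main
    algorithm's starting point.  beta_k is taken as a PRP-style
    formula purely for illustration, and a simple Armijo-type test
    stands in for the paper's condition (1.3)."""
    x = np.asarray(x0, dtype=float)
    ek = e(x)
    d = -ek                                   # Step 1: d_k = -e_k
    for _ in range(max_iter):
        if np.linalg.norm(ek) <= eps:         # Step 1/4: stopping test
            break
        alpha = 1.0                           # Step 2: alpha = 1, r, r^2, ...
        while (np.linalg.norm(e(x + alpha * d)) ** 2
               > np.linalg.norm(ek) ** 2 - delta * alpha ** 2 * np.dot(d, d)):
            alpha *= r
            if alpha < 1e-12:
                break
        x = x + alpha * d                     # Step 3: next iterate
        e_new = e(x)
        # Step 5: PRP-style beta_k (illustrative choice only)
        beta = np.dot(e_new, e_new - ek) / max(np.dot(ek, ek), 1e-16)
        d = -e_new + beta * d
        ek = e_new
    return x                                  # terminal point for the main algorithm
```

The returned point then serves as the starting point of the main algorithm below.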
In the following, we state the main algorithm. First, assume that the final iterate of the sub-algorithm is \(x_{sup}\); then the given algorithm is defined as follows.
Algorithm 1
Main algorithm
 Step 1:
Stop if \(\Vert e_{k}\Vert \leq\epsilon_{\mathrm{main}}\). Otherwise solve (1.10) to get \(d_{k}\).
 Step 2:
Let \(\alpha_{k}=1,r,r^{2},r^{3},\ldots \) until (1.13) holds.
 Step 3:
Let the next iterate be \(x_{k+1}=x_{k}+\alpha_{k}d_{k}\).
 Step 4:
Update \(B_{k}\) by a quasi-Newton update formula and ensure that the updated matrix \(B_{k+1}\) is positive definite.
 Step 5:
Let \(k:=k+1\). Go to Step 1.
Remark
Step 4 of Algorithm 1 ensures that \(B_{k}\) is always positive definite, which means that (1.10) has a unique solution \(d_{k}\). By the positive definiteness of \(B_{k}\), it is easy to obtain \(e_{k}^{T}d_{k}<0\). In the following sections, we concentrate only on the convergence of the main algorithm.
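The main algorithm can be sketched as follows. Here \(B_{k}d_{k}=-e_{k}\) stands in for subproblem (1.10), a simple Armijo-type test replaces the nonmonotone condition (1.13), and Step 4 applies the BFGS update only when \(y_{k}^{T}s_{k}>0\) so that \(B_{k+1}\) stays positive definite (the safeguard used in the numerical section); all parameter values are illustrative.

```python
import numpy as np

def main_algorithm(e, x0, eps=1e-5, r=0.1, delta=1e-4, max_iter=200):
    """Sketch of the quasi-Newton main algorithm."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                         # positive definite B_0
    ek = e(x)
    for _ in range(max_iter):
        if np.linalg.norm(ek) <= eps:          # Step 1: stopping test
            break
        d = np.linalg.solve(B, -ek)            # Step 1: solve B_k d_k = -e_k
        alpha = 1.0                            # Step 2: alpha = 1, r, r^2, ...
        while (np.linalg.norm(e(x + alpha * d)) ** 2
               > np.linalg.norm(ek) ** 2 - delta * alpha ** 2 * np.dot(d, d)):
            alpha *= r
            if alpha < 1e-12:
                break
        s = alpha * d                          # Step 3: x_{k+1} = x_k + alpha_k d_k
        x = x + s
        e_new = e(x)
        y = e_new - ek
        if np.dot(y, s) > 0:                   # Step 4: safeguarded BFGS update
            Bs = B @ s
            B = (B - np.outer(Bs, Bs) / np.dot(s, Bs)
                   + np.outer(y, y) / np.dot(y, s))
        ek = e_new                             # Step 5: k := k + 1
    return x
```

Skipping the update when \(y_{k}^{T}s_{k}\le 0\) preserves positive definiteness, so each linear system above has a unique solution.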
3 Convergence analysis
Assumption A
(i) e is continuously differentiable on an open convex set \(\Omega_{1}\) containing Ω.
Assumption B
By Assumption B and the von Neumann lemma, we deduce that \(B_{k}\) is also bounded (see [31]).
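For reference, the von Neumann (Banach perturbation) lemma invoked here is commonly stated as follows; this is the standard textbook form, not a quotation from the paper.

```latex
% Von Neumann lemma: if A is a square matrix with \|I - A\| < 1,
% then A is nonsingular and
\[
  A^{-1} = \sum_{j=0}^{\infty} (I - A)^{j},
  \qquad
  \bigl\Vert A^{-1} \bigr\Vert \le \frac{1}{1 - \Vert I - A \Vert}.
\]
```

Boundedness of the update quantities thus yields a uniform bound on \(B_{k}\) and its inverse.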
Lemma 3.1
Proof
The following lemma shows that the line search technique (1.13) is reasonable, so Algorithm 1 is well defined.
Lemma 3.2
Let Assumptions A and B hold. Then Algorithm 1 will produce an iteration \(x_{k+1}=x_{k}+\alpha_{k}d_{k}\) in a finite number of backtracking steps.
Proof
Now we establish the global convergence theorem of Algorithm 1.
Theorem 3.1
Proof
Lemma 3.3
See Lemma 4.1 in [31].
Theorem 3.2
Proof
4 Numerical results
Function 1
Function 2
Function 3
Function 4
Function 5
Function 6
Function 7
Strictly convex function 2 [52, p. 30]
Function 8
Function 9
Function 10
The parameters were chosen as \(r=0.1\), \(\sigma=0.9\), \(M=12\), \(\epsilon=10^{-4}\), and \(\epsilon_{\mathrm{main}}=10^{-5}\). In order to ensure the positive definiteness of \(B_{k}\) in Step 4 of the main algorithm, \(B_{k}\) is updated by (1.11) if \(y_{k}^{T}s_{k}>0\); otherwise \(B_{k+1}=B_{k}\). The program is also stopped if the number of iterations of the main algorithm exceeds 200. Since the line search cannot always ensure the descent conditions \(d_{k}^{T}e_{k}<0\) and \(d_{k}^{T}\nabla e(x_{k}) e_{k}<0\), an uphill search direction may occur in the numerical experiments, in which case the line search rule may fail. To avoid this, the step size \(\alpha_{k}\) is accepted if the number of backtracking steps exceeds six in the inner loop for the test problems.
Numerical results
Columns 3-6 report the new method with CG; columns 7-10 report the normal method (main algorithm only).

P  Dim  NI/NG  GF  GD  cpu time  NI/NG  GF  GD  cpu time
1  1000  1/1  6.676674e−006  1.335335e−005  0.000000e+000  0/2  6.676674e−006  1.335335e−005  0.000000e+000 
2000  1/1  3.335834e−006  6.671668e−006  0.000000e+000  0/2  3.335834e−006  6.671668e−006  0.000000e+000  
3000  1/1  2.223334e−006  4.446667e−006  1.560010e−002  0/2  2.223334e−006  4.446667e−006  3.120020e−002  
2  1000  12/17  1.570352e−007  1.214954e−007  1.544410e+000  199/2879  1.624268e−004  1.228551e+000  1.338801e+002 
2000  200/2927  8.144022e−005  3.138945e−001  8.647135e+002  199/2928  8.144022e−005  3.138945e−001  8.680832e+002  
3000  200/2326  5.434381e−005  2.481121e−003  1.614626e+003  199/2327  5.434381e−005  2.481121e−003  1.622785e+003  
3  1000  8/8  4.194859e−006  8.389718e−006  1.560010e−002  115/1009  5.838535e−008  1.171251e−007  7.996611e+001 
2000  8/8  7.775106e−006  1.555021e−005  7.800050e−002  117/1040  1.161670e−007  2.328056e−007  5.662368e+002  
3000  9/9  1.614597e−012  3.231630e−012  1.553770e+001  137/1362  1.739498e−007  3.484891e−007  2.141410e+003  
4  1000  92/165  5.632576e−006  5.669442e−006  2.215214e+000  199/285  3.703283e+001  1.196051e+001  1.356897e+002 
2000  87/156  6.245922e−006  6.085043e−006  1.502290e+001  199/230  3.637504e+000  9.570779e+000  9.591097e+002  
3000  94/169  6.678153e−006  6.437585e−006  4.731510e+001  199/234  2.639260e+002  1.904865e+001  3.096277e+003  
5  1000  22/51  8.288299e−006  6.268946e−007  2.106014e+000  199/2570  3.195300e+004  4.649782e+006  1.779971e+001 
2000  21/50  4.114462e−006  1.943351e−006  5.179233e+000  199/2652  6.395300e+004  1.354972e+005  8.327333e+001  
3000  21/51  9.843373e−006  8.504719e−006  1.597450e+001  199/2853  9.595300e+004  1.135356e+005  1.314776e+002  
6  1000  9/11  5.984185e−012  1.197596e−011  7.176046e−001  6/9  6.069722e−007  1.118437e−006  4.149627e+000 
2000  9/11  1.505191e−006  3.010383e−006  7.800050e−002  6/9  1.210931e−006  2.231765e−006  2.898499e+001  
3000  9/11  2.251571e−006  4.503142e−006  1.404009e−001  6/9  1.814891e−006  3.345093e−006  9.300780e+001  
7  1000  200/602  1.208240e−003  4.697005e+000  3.424222e+001  199/573  3.156137e+005  3.893380e+004  1.378113e+002 
2000  200/760  1.612671e+001  9.034319e+000  2.420200e+002  199/644  1.014481e+006  3.131576e+005  9.872367e+002  
3000  200/693  5.570227e−003  9.501149e+001  7.698181e+002  199/743  9.357473e+007  2.087488e+007  3.087119e+003  
8  1000  2/2  0.000000e+000  0.000000e+000  0.000000e+000  1/3  0.000000e+000  0.000000e+000  6.552042e−001 
2000  2/2  0.000000e+000  0.000000e+000  0.000000e+000  1/3  0.000000e+000  0.000000e+000  4.820431e+000  
3000  2/2  0.000000e+000  0.000000e+000  6.240040e−002  1/3  0.000000e+000  0.000000e+000  1.538170e+001  
9  1000  67/118  7.138941e−006  1.820053e−005  3.010819e+000  2/5  2.358640e−006  4.611203e−006  1.404009e+000 
2000  70/124  6.342724e−006  1.607326e−005  2.062333e+001  2/5  5.917002e−007  1.169969e−006  9.703262e+000  
3000  74/131  7.447187e−006  1.799920e−005  6.450641e+001  2/5  2.632811e−007  5.225655e−007  3.084140e+001  
10  1000  26/49  2.044717e−008  3.900140e−008  2.359983e+002  121/125  7.382123e−006  1.467673e−005  4.987196e+002 
2000  24/47  9.030382e−006  2.717060e−006  1.847286e+003  121/125  7.454090e−006  1.481981e−005  3.852538e+003  
3000  27/51  6.468831e−009  1.138377e−008  6.632227e+003  121/125  7.523322e−006  1.495745e−005  1.299774e+004 
Dim: the dimension.
NI: the number of iterations.
NG: the number of function evaluations.
cpu time: the cpu time in seconds.
GF: the final value of the norm function \(p(x)\) when the program stops.
GD: the final norm of the search direction \(d_{k}\).
fails: the method fails to find the final value of \(p(x)\) before the program stops.
Numerical results of the VIM1 method
P  Dim  NI/NG  GF  cpu time  P  Dim  NI/NG  GF  cpu time 

1  1000  1/1  6.676674e−006  1.560010e−002  6  1000  5/5  4.591162e−011  9.656462e+000 
2000  1/1  3.335834e−006  0.000000e+000  2000  5/5  9.140464e−011  7.439688e+001  
3000  1/1  2.223334e−006  3.120020e−002  3000  5/5  1.368978e−010  2.484628e+002  
2  1000  18/18  2.840705e−007  5.494355e+001  7  1000  5/5  4.058902e−006  9.656462e+000 
2000  27/27  2.532474e−006  6.315544e+002  2000  6/6  1.983880e−017  8.993458e+001  
3000  22/22  9.781547e−007  1.669476e+003  3000  6/6  6.708054e−017  3.007543e+002  
3  1000  5/5  5.430592e−007  9.578461e+000  8  1000  fails  
2000  5/5  5.619751e−007  7.435008e+001  2000  fails  
3000  5/5  5.870798e−007  2.484160e+002  3000  fails  
4  1000  4/4  4.559227e−009  1.243328e+001  9  1000  fails  
2000  4/4  9.082664e−009  1.026487e+002  2000  fails  
3000  4/4  1.360090e−008  3.708768e+002  3000  fails  
5  1000  9/9  2.648764e−006  3.196460e+001  10  1000  fails  
2000  9/9  2.649263e−006  2.529244e+002  2000  fails  
3000  9/9  2.649430e−006  8.258849e+002  3000  fails 
The tool of Dolan and Moré [57] is used to analyze the efficiency of these three algorithms.
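The Dolan-Moré performance profile mentioned above can be computed as in the following sketch: for each solver s, \(\rho_{s}(\tau)\) is the fraction of problems that s solves within a factor \(\tau\) of the best solver's measure. The function name and encoding of failures as `np.inf` are illustrative conventions.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile.  T is an (n_problems, n_solvers)
    array of positive performance measures (e.g. cpu time); failures
    may be encoded as np.inf.  Returns rho[t, s], the fraction of
    problems solver s solves within a factor taus[t] of the best."""
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)        # best measure per problem
    ratios = T / best                          # performance ratios r_{p,s}
    # rho_s(tau) = |{p : r_{p,s} <= tau}| / n_problems
    return np.array([[np.mean(ratios[:, s] <= tau)
                      for s in range(T.shape[1])] for tau in taus])
```

A solver whose curve lies above the others for small \(\tau\) wins most often; the limiting value as \(\tau\) grows is the solver's overall success rate.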
5 Conclusion
In this paper, we focused on two algorithms for solving a class of large-scale nonlinear equations. In the first step, a CG algorithm, called the sub-algorithm, was used to generate the initial points of the main algorithm. Then a quasi-Newton algorithm with the initial points supplied by the CG sub-algorithm was defined as the main algorithm. In order to avoid computing the Jacobian matrix, a nonmonotone line search technique was used in the algorithms. Convergence results were established and numerical results were reported.
According to the numerical performance, it is clear that the CG technique is very effective for large-scale nonlinear equations. This observation inspires us to design CG methods that directly solve nonlinear equations in the future.
Declarations
Acknowledgements
Only the author contributed in writing this paper. The author thanks the referees and the Editor for their valuable comments, which greatly improved the paper.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
1. Chen, B, Shu, H, Coatrieux, G, Chen, G, Sun, X, Coatrieux, J: Color image analysis by quaternion-type moments. J. Math. Imaging Vis. 51, 124-144 (2015)
2. Fu, Z, Ren, K, Shu, J, Sun, X, Huang, F: Enabling personalized search over encrypted outsourced data with efficiency improvement. IEEE Trans. Parallel Distrib. Syst. (2015). doi:10.1109/TPDS.2015.2506573
3. Gu, B, Sheng, VS: A robust regularization path algorithm for ν-support vector classification. IEEE Trans. Neural Netw. Learn. Syst. (2016). doi:10.1109/TNNLS.2016.2527796
4. Gu, B, Sheng, VS, Tay, KY, Romano, W, Li, S: Incremental support vector learning for ordinal regression. IEEE Trans. Neural Netw. Learn. Syst. 26, 1403-1416 (2015)
5. Guo, P, Wang, J, Li, B, Lee, S: A variable threshold-value authentication architecture for wireless mesh networks. J. Internet Technol. 15, 929-936 (2014)
6. Li, J, Li, X, Yang, B, Sun, X: Segmentation-based image copy-move forgery detection scheme. IEEE Trans. Inf. Forensics Secur. 10, 507-518 (2015)
7. Pan, Z, Zhang, Y, Kwong, S: Efficient motion and disparity estimation optimization for low complexity multiview video coding. IEEE Trans. Broadcast. 61, 166-176 (2015)
8. Shen, J, Tan, H, Wang, J, Wang, J, Lee, S: A novel routing protocol providing good transmission reliability in underwater sensor networks. J. Internet Technol. 16, 171-178 (2015)
9. Xia, Z, Wang, X, Sun, X, Wang, Q: A secure and dynamic multi-keyword ranked search scheme over encrypted cloud data. IEEE Trans. Parallel Distrib. Syst. 27, 340-352 (2015)
10. Fu, Z, Wu, X, Guan, C, Sun, X, Ren, K: Towards efficient multi-keyword fuzzy search over encrypted outsourced data with accuracy improvement. IEEE Trans. Inf. Forensics Secur. (2016). doi:10.1109/TIFS.2016.2596138
11. Gu, B, Sun, X, Sheng, VS: Structural minimax probability machine. IEEE Trans. Neural Netw. Learn. Syst. (2016). doi:10.1109/TNNLS.2016.2544779
12. Ma, T, Zhou, J, Tang, M, Tian, Y, Al-Dhelaan, A, Al-Rodhaan, M, Lee, S: Social network and tag sources based augmenting collaborative recommender system. IEICE Trans. Inf. Syst. 98, 902-910 (2015)
13. Ren, Y, Shen, J, Wang, JN, Han, J, Lee, S: Mutual verifiable provable data auditing in public cloud storage. J. Internet Technol. 16, 317-323 (2015)
14. Yuan, G: Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems. Optim. Lett. 3, 11-21 (2009)
15. Yuan, G, Duan, X, Liu, W, Wang, X, et al.: Two new PRP conjugate gradient algorithms for minimization optimization models. PLoS ONE 10, e0140071 (2015)
16. Yuan, G, Lu, X: A modified PRP conjugate gradient method. Ann. Oper. Res. 166, 73-90 (2009)
17. Yuan, G, Lu, X, Wei, Z: A conjugate gradient method with descent direction for unconstrained optimization. J. Comput. Appl. Math. 233, 519-530 (2009)
18. Yuan, G, Wei, Z: New line search methods for unconstrained optimization. J. Korean Stat. Soc. 38, 29-39 (2009)
19. Yuan, G, Wei, Z: The superlinear convergence analysis of a nonmonotone BFGS algorithm on convex objective functions. Acta Math. Sin. Engl. Ser. 24(1), 35-42 (2008)
20. Yuan, G, Wei, Z: Convergence analysis of a modified BFGS method on convex minimizations. Comput. Optim. Appl. 47, 237-255 (2010)
21. Yuan, G, Wei, Z: A trust region algorithm with conjugate gradient technique for optimization problems. Numer. Funct. Anal. Optim. 32, 212-232 (2011)
22. Yuan, G, Wei, Z: The Barzilai and Borwein gradient method with nonmonotone line search for nonsmooth convex optimization problems. Math. Model. Anal. 17, 203-216 (2012)
23. Yuan, G, Wei, Z, Wang, Z: Gradient trust region algorithm with limited memory BFGS update for nonsmooth convex minimization. Comput. Optim. Appl. 54, 45-64 (2013)
24. Yuan, G, Wei, Z, Wu, Y: Modified limited memory BFGS method with nonmonotone line search for unconstrained optimization. J. Korean Math. Soc. 47, 767-788 (2010)
25. Yuan, G, Wei, Z, Zhao, Q: A modified Polak-Ribière-Polyak conjugate gradient algorithm for large-scale optimization problems. IIE Trans. 46, 397-413 (2014)
26. Yuan, G, Zhang, M: A modified Hestenes-Stiefel conjugate gradient algorithm for large-scale optimization. Numer. Funct. Anal. Optim. 34, 914-937 (2013)
27. Zhang, Y, Sun, X, Wang, B: Efficient algorithm for K-barrier coverage based on integer linear programming. China Communications 13, 16-23 (2016)
28. Li, D, Fukushima, M: A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations. SIAM J. Numer. Anal. 37, 152-172 (1999)
29. Gu, G, Li, D, Qi, L, Zhou, S: Descent directions of quasi-Newton methods for symmetric nonlinear equations. SIAM J. Numer. Anal. 40, 1763-1774 (2002)
30. Brown, PN, Saad, Y: Convergence theory of nonlinear Newton-Krylov algorithms. SIAM J. Optim. 4, 297-330 (1994)
31. Zhu, D: Nonmonotone backtracking inexact quasi-Newton algorithms for solving smooth nonlinear equations. Appl. Math. Comput. 161, 875-895 (2005)
32. Yuan, G, Lu, X: A new backtracking inexact BFGS method for symmetric nonlinear equations. Comput. Math. Appl. 55, 116-129 (2008)
33. Nash, SG: A survey of truncated-Newton methods. J. Comput. Appl. Math. 124, 45-59 (2000)
34. Dembo, RS, Eisenstat, SC, Steihaug, T: Inexact Newton methods. SIAM J. Numer. Anal. 19, 400-408 (1982)
35. Griewank, A: The 'global' convergence of Broyden-like methods with a suitable line search. J. Aust. Math. Soc. Ser. B, Appl. Math. 28, 75-92 (1986)
36. Ypma, T: Local convergence of inexact Newton methods. SIAM J. Numer. Anal. 21, 583-590 (1984)
37. Yuan, G, Wei, Z, Lu, X: A BFGS trust-region method for nonlinear equations. Computing 92, 317-333 (2011)
38. Yuan, G, Wei, Z, Lu, S: Limited memory BFGS method with backtracking for symmetric nonlinear equations. Math. Comput. Model. 54, 367-377 (2011)
39. Yuan, G, Yao, S: A BFGS algorithm for solving symmetric nonlinear equations. Optimization 62, 82-95 (2013)
40. Grippo, L, Lampariello, F, Lucidi, S: A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal. 23, 707-716 (1986)
41. Birgin, EG, Martinez, JM, Raydan, M: Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 10, 1196-1211 (2000)
42. Han, J, Liu, G: Global convergence analysis of a new nonmonotone BFGS algorithm on convex objective functions. Comput. Optim. Appl. 7, 277-289 (1997)
43. Liu, G, Peng, J: The convergence properties of a nonmonotonic algorithm. J. Comput. Math. 1, 65-71 (1992)
44. Zhou, J, Tits, A: Nonmonotone line search for minimax problems. J. Optim. Theory Appl. 76, 455-476 (1993)
45. Yuan, G: A new method with descent property for symmetric nonlinear equations. Numer. Funct. Anal. Optim. 31, 974-987 (2010)
46. Yuan, G, Meng, Z, Li, Y: A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations. J. Optim. Theory Appl. 168, 129-152 (2016)
47. Yuan, G, Lu, S, Wei, Z: A new trust-region method with line search for solving symmetric nonlinear equations. Int. J. Comput. Math. 88, 2109-2123 (2011)
48. Yuan, G, Wei, Z, Li, G: A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs. J. Comput. Appl. Math. 255, 86-96 (2014)
49. Yuan, G, Zhang, M: A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations. J. Comput. Appl. Math. 286, 186-195 (2015)
50. Yuan, G, Lu, X, Wei, Z: BFGS trust-region method for symmetric nonlinear equations. J. Comput. Appl. Math. 230, 44-58 (2009)
51. Gomez-Ruggiero, M, Martinez, J, Moretti, A: Comparing algorithms for solving sparse nonlinear systems of equations. SIAM J. Sci. Comput. 23, 459-483 (1992)
52. Raydan, M: The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM J. Optim. 7, 26-33 (1997)
53. Moré, J, Garbow, B, Hillström, K: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7, 17-41 (1981)
54. Aslam Noor, M, Waseem, M, Inayat Noor, K, Al-Said, E: Variational iteration technique for solving a system of nonlinear equations. Optim. Lett. 7, 991-1007 (2013)
55. Polak, E, Ribière, G: Note sur la convergence de méthodes de directions conjuguées. Rev. Française Informat. Recherche Opérationnelle 3, 35-43 (1969)
56. Polyak, BT: The conjugate gradient method in extremal problems. USSR Comput. Math. Math. Phys. 9, 94-112 (1969)
57. Dolan, ED, Moré, JJ: Benchmarking optimization software with performance profiles. Math. Program. 91, 201-213 (2002)