A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations
 Gonglin Yuan^{1} and
 Wujie Hu^{1}
https://doi.org/10.1186/s13660-018-1703-1
© The Author(s) 2018
 Received: 14 January 2018
 Accepted: 26 April 2018
 Published: 11 May 2018
Abstract
For large-scale unconstrained optimization problems and nonlinear equations, we propose a new three-term conjugate gradient algorithm under the Yuan–Wei–Lu line search technique. It combines the steepest descent method with the famous conjugate gradient algorithm, utilizing both the relevant function trait and the current point feature. It possesses the following properties: (i) the search direction has a sufficient descent feature and a trust region trait, and (ii) the proposed algorithm is globally convergent. Numerical results show that the proposed algorithm is competitive with other similar optimization algorithms.
Keywords
 Conjugate gradient
 Descent property
 Global convergence
MSC
 90C26
1 Introduction
It is well known that small and medium-scale smooth optimization problems are comparatively easy to handle since many optimization algorithms are available for them, such as Newton, quasi-Newton, and bundle methods. However, these three algorithms fail to effectively address large-scale optimization problems because they need to store and compute the relevant matrices, whereas the conjugate gradient algorithm is successful because of its simplicity and efficiency.
The optimization model is an important mathematical problem since it has been applied to various fields such as economics, engineering, and physics (see [1–12]). Fletcher and Reeves [13] successfully addressed large-scale unconstrained optimization problems on the basis of the conjugate gradient algorithm and obtained remarkable achievements. The conjugate gradient algorithm has become increasingly popular because of its simplicity and low storage and computation requirements. In general, a good conjugate gradient algorithm consists of a good conjugate gradient direction and an inexact line search technique (see [14–18]). At present, the conjugate gradient algorithm is mostly applied to smooth optimization problems, and thus, in this paper, we propose a modified LS conjugate gradient algorithm to solve large-scale nonlinear equations and smooth problems. The common algorithms for addressing nonlinear equations include Newton and quasi-Newton methods (see [19–21]), gradient-based and CG methods (see [22–24]), trust region methods (see [25–27]), and derivative-free methods (see [28]), and all of them fail to address large-scale problems. The spectral gradient method, the limited-memory quasi-Newton method, and the conjugate gradient algorithm are well suited to large-scale problems. Li and Li [29] proposed various algorithms on the basis of a modified PRP conjugate gradient method, which successfully solve large-scale nonlinear equations.
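As background, the classical nonlinear conjugate gradient iteration can be sketched in a few lines. The following is a minimal Fletcher–Reeves CG with a simple Armijo backtracking step; it illustrates only the generic framework discussed above and is not the paper's Algorithm 2.1 (which uses a three-term direction and the Yuan–Wei–Lu line search):

```python
import numpy as np

def fletcher_reeves_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    """Minimal nonlinear Fletcher-Reeves CG with Armijo backtracking.

    A generic sketch of the CG framework; NOT the paper's Algorithm 2.1.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking for a step length alpha
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves formula
        d = -g_new + beta * d                # new conjugate direction
        x, g = x_new, g_new
    return x

# minimize f(x) = 0.5 * ||x - 1||^2, whose unique minimizer is the all-ones vector
f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
grad = lambda x: x - 1.0
sol = fletcher_reeves_cg(f, grad, np.zeros(4))
```

The only stored quantities are the current point, gradient, and direction, which is why CG methods scale to large problems.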
The main features of the proposed algorithm are as follows:
 (i)
The search direction has a sufficient descent feature and a trust region trait.
 (ii)
Under mild assumptions, the proposed algorithm is globally convergent.
 (iii)
The new algorithm combines the steepest descent method with the conjugate gradient algorithm.
 (iv)
Numerical results show that it is competitive with other similar algorithms.
The rest of the paper is organized as follows. The next section presents the proposed algorithm. The global convergence is stated in Sect. 3. In Sect. 4, we report the corresponding numerical results. In Sect. 5, we introduce the large-scale nonlinear equations and present the new algorithm for them. Some necessary properties are listed in Sect. 6. The numerical results for nonlinear equations are reported in Sect. 7. Throughout the paper, \(f(x_{k})\) and \(f(x_{k+1})\) are abbreviated as \(f_{k}\) and \(f_{k+1}\), and \(\Vert \cdot \Vert \) denotes the Euclidean norm.
2 New modified conjugate gradient algorithm
3 Important characteristics
This section lists some important properties of Algorithm 2.1, namely sufficient descent, the trust region trait, and global convergence, and gives the necessary proofs.
Lemma 3.1
Proof
It is obvious that formulas (3.1) and (3.2) hold for \(k=0\).
By (3.1) and (3.2), the algorithm has a sufficient descent feature and a trust region trait. To obtain the global convergence, we make the following necessary assumptions.
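The two properties just mentioned can be written as the inequalities \(g_{k}^{T}d_{k} \leq -c\Vert g_{k}\Vert ^{2}\) (sufficient descent) and \(\Vert d_{k}\Vert \leq C\Vert g_{k}\Vert \) (trust region) for positive constants c and C. A small numeric check of both inequalities, with illustrative constants that are not the paper's, might look like:

```python
import numpy as np

def check_direction(g, d, c=0.5, C=2.0):
    """Check the two direction properties:
       sufficient descent:  g^T d <= -c * ||g||^2
       trust region:        ||d|| <= C * ||g||
    The constants c and C are illustrative, not those proved in the paper."""
    descent = g @ d <= -c * (g @ g)
    trust = np.linalg.norm(d) <= C * np.linalg.norm(g)
    return descent, trust

g = np.array([3.0, -4.0])
# For the steepest descent direction d = -g both inequalities hold with c = C = 1.
descent, trust = check_direction(g, -g, c=1.0, C=1.0)
```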
Assumption 1
 (i)
The level set \(\pi =\{x \mid f(x) \leq f(x_{0})\}\) is bounded.
 (ii)The objective function \(f \in C^{2}\) is bounded from below, and its gradient function g is Lipschitz continuous, that is, there exists a constant \(\zeta >0\) such that$$ \bigl\Vert g(x)-g(y)\bigr\Vert \leq \zeta \Vert x-y\Vert ,\quad x, y \in R^{n}. $$(3.3)The existence and necessity of the step length \(\alpha_{k}\) are established in [43]. In view of this discussion and the established technique, the global convergence of the proposed algorithm is stated as follows.
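For a concrete instance of (3.3): if \(f(x)=\frac{1}{2}x^{T}Ax\), then \(g(x)=Ax\), and the Lipschitz constant ζ can be taken as the spectral norm of A, since \(\Vert Ax-Ay\Vert \leq \Vert A\Vert _{2}\Vert x-y\Vert \). A quick empirical verification (an illustration only, not part of the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# For f(x) = 0.5 * x^T A x the gradient g(x) = A x satisfies (3.3)
# with zeta = ||A||_2 (the spectral norm of A).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
grad = lambda x: A @ x
zeta = np.linalg.norm(A, 2)

# Check ||g(x) - g(y)|| <= zeta * ||x - y|| on random pairs of points.
ok = all(
    np.linalg.norm(grad(x) - grad(y)) <= zeta * np.linalg.norm(x - y) + 1e-12
    for x, y in (rng.standard_normal((2, 2)) for _ in range(100))
)
```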
Theorem 3.1
Proof
4 Numerical results
In this section, we report the numerical results in terms of the characteristics NI, NFG, and CPU, where NI is the total number of iterations, NFG is the total number of evaluations of the objective function and its gradient, and CPU is the computation time in seconds.
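A minimal way to collect these three statistics is to wrap the function and gradient in a counter object. The sketch below is an illustration of the bookkeeping only, not the authors' actual test harness:

```python
import time

class Counter:
    """Wrap f and g to collect the statistics reported in the tables:
    NI (iterations), NFG (f- plus g-evaluations), CPU (seconds)."""
    def __init__(self, f, g):
        self._f, self._g = f, g
        self.nfg = 0                      # combined evaluation count
        self.ni = 0                       # iteration count (caller increments)
        self._t0 = time.process_time()
    def f(self, x):
        self.nfg += 1
        return self._f(x)
    def g(self, x):
        self.nfg += 1
        return self._g(x)
    @property
    def cpu(self):
        # elapsed CPU time since construction
        return time.process_time() - self._t0

# hypothetical 1-D problem, just to exercise the counters
c = Counter(lambda x: x * x, lambda x: 2 * x)
c.f(3.0); c.g(3.0); c.ni += 1
```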
4.1 Problems and test experiments
Test problems
No.  Problem 

1  Extended Freudenstein and Roth Function 
2  Extended Trigonometric Function 
3  Extended Rosenbrock Function 
4  Extended White and Holst Function 
5  Extended Beale Function 
6  Extended Penalty Function 
7  Perturbed Quadratic Function 
8  Raydan 1 Function 
9  Raydan 2 Function 
10  Diagonal 1 Function 
11  Diagonal 2 Function 
12  Diagonal 3 Function 
13  Hager Function 
14  Generalized Tridiagonal 1 Function 
15  Extended Tridiagonal 1 Function 
16  Extended Three Exponential Terms Function 
17  Generalized Tridiagonal 2 Function 
18  Diagonal 4 Function 
19  Diagonal 5 Function 
20  Extended Himmelblau Function 
21  Generalized PSC1 Function 
22  Extended PSC1 Function 
23  Extended Powell Function 
24  Extended Block Diagonal BD1 Function 
25  Extended Maratos Function 
26  Extended Cliff Function 
27  Quadratic Diagonal Perturbed Function 
28  Extended Wood Function 
29  Extended Hiebert Function 
30  Quadratic Function QF1 Function 
31  Extended Quadratic Penalty QP1 Function 
32  Extended Quadratic Penalty QP2 Function 
33  A Quadratic Function QF2 Function 
34  Extended EP1 Function 
35  Extended Tridiagonal2 Function 
36  BDQRTIC Function (CUTE) 
37  TRIDIA Function (CUTE) 
38  ARWHEAD Function (CUTE) 
40  NONDQUAR Function (CUTE) 
41  DQDRTIC Function (CUTE) 
42  EG2 Function (CUTE) 
43  DIXMAANA Function (CUTE) 
44  DIXMAANB Function (CUTE) 
45  DIXMAANC Function (CUTE) 
46  DIXMAANE Function (CUTE) 
47  Partial Perturbed Quadratic Function 
48  Broyden Tridiagonal Function 
49  Almost Perturbed Quadratic Function 
50  Tridiagonal Perturbed Quadratic Function 
51  EDENSCH Function (CUTE) 
52  VARDIM Function (CUTE) 
53  STAIRCASE S1 Function 
54  LIARWHD Function (CUTE) 
55  DIAGONAL 6 Function 
56  DIXON3DQ Function (CUTE) 
57  DIXMAANF Function (CUTE) 
58  DIXMAANG Function (CUTE) 
59  DIXMAANH Function (CUTE) 
60  DIXMAANI Function (CUTE) 
61  DIXMAANJ Function (CUTE) 
62  DIXMAANK Function (CUTE) 
63  DIXMAANL Function (CUTE) 
64  DIXMAAND Function (CUTE) 
65  ENGVAL1 Function (CUTE) 
66  FLETCHCR Function (CUTE) 
67  COSINE Function (CUTE) 
68  Extended DENSCHNB Function (CUTE) 
69  DENSCHNF Function (CUTE) 
70  SINQUAD Function (CUTE) 
71  BIGGSB1 Function (CUTE) 
72  Partial Perturbed Quadratic PPQ2 Function 
73  Scaled Quadratic SQ1 Function 
Stopping rule: If \(\vert f(x_{k})\vert > e_{1}\), let \(stop1=\frac{f(x_{k})-f(x_{k+1})}{\vert f(x_{k})\vert }\); otherwise, let \(stop1= f(x_{k})-f(x_{k+1})\). The algorithm stops when one of the following conditions is satisfied: \(\Vert g(x)\Vert <\epsilon \), the iteration number is greater than 2000, or \(stop1 < e_{2}\), where \(e_{1}=e_{2}=10^{-5}\) and \(\epsilon =10^{-6}\). In Table 1, “No.” and “Problem” represent the index and the name of the tested problem, respectively.
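The stopping rule above translates directly into code; the following sketch uses the constants \(e_{1}\), \(e_{2}\), and ϵ from this subsection:

```python
def should_stop(f_k, f_k1, gnorm, ni, e1=1e-5, e2=1e-5, eps=1e-6, max_ni=2000):
    """Stopping rule of Sect. 4.1: relative (or absolute) decrease test
    combined with the gradient-norm and iteration-count tests.

    f_k, f_k1 : objective values at x_k and x_{k+1}
    gnorm     : ||g(x)||
    ni        : current iteration number
    """
    if abs(f_k) > e1:
        stop1 = (f_k - f_k1) / abs(f_k)   # relative decrease
    else:
        stop1 = f_k - f_k1                # absolute decrease near zero
    return gnorm < eps or ni > max_ni or stop1 < e2
```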
Initiation: \(\iota =0.3\), \(\iota_{1}=0.1\), \(\tau =0.65\), \(\eta_{1}=0.65\), \(\eta_{2}=0.001\), \(\eta_{3}=0.001\), \(\eta_{4}=0.001\), \(\eta_{5}=0.1\).
Dimension: 1200, 3000, 6000, 9000.
Calculation environment: The experiments were run on a computer with 2 GB of memory, a Pentium(R) Dual-Core CPU E5800 @ 3.20 GHz, and the 64-bit Windows 7 operating system.
Numerical results
NO  Dim  Algorithm 2.1  Algorithm 2  Algorithm 3  

NI  NFG  CPU  NI  NFG  CPU  NI  NFG  CPU  
1  9000  4  20  0.124801  14  48  0.405603  5  26  0.249602 
2  9000  71  327  1.965613  27  89  0.670804  32  136  0.858005 
3  9000  7  20  0.0312  37  160  0.249602  27  147  0.202801 
4  9000  12  49  0.280802  34  161  0.717605  42  219  0.951606 
5  9000  13  56  0.202801  20  63  0.249602  5  24  0.0624 
6  9000  65  252  0.421203  43  143  0.280802  3  9  0.0312 
7  9000  11  37  0.0624  478  979  2.215214  465  1479  2.558416 
8  9000  5  20  0.0624  22  55  0.156001  14  54  0.156001 
9  9000  6  16  0.0312  5  21  0.0624  3  8  0.0312 
10  9000  2  13  0.0156  2  13  0.000001  2  13  0.000001 
11  9000  3  17  0.0312  7  34  0.0624  17  87  0.218401 
12  9000  3  10  0.0312  19  40  0.202801  14  50  0.202801 
13  9000  3  24  0.0624  3  24  0.0312  3  24  0.0156 
14  9000  4  12  4.305628  5  14  5.382034  5  14  5.226033 
15  9000  19  77  9.984064  22  66  9.516061  21  71  10.296066 
16  9000  3  11  0.0624  6  27  0.078  6  18  0.0624 
17  9000  11  45  0.374402  27  69  0.780005  27  87  0.811205 
18  9000  5  23  0.0312  3  10  0.000001  3  10  0.0312 
19  9000  3  9  0.0624  3  9  0.0312  3  19  0.0312 
20  9000  19  76  0.124801  15  36  0.0624  3  9  0.0312 
21  9000  12  47  0.156001  13  61  0.187201  15  59  0.218401 
22  9000  7  46  0.795605  8  70  0.577204  6  46  0.686404 
23  9000  9  45  0.218401  101  357  2.090413  46  150  0.873606 
24  9000  5  47  0.093601  14  88  0.156001  14  97  0.249602 
25  9000  9  28  0.0312  40  214  0.249602  8  46  0.0624 
26  9000  24  102  0.327602  24  100  0.249602  3  24  0.0312 
27  9000  6  20  0.0312  34  109  0.187201  92  321  0.530403 
28  9000  13  50  0.124801  20  83  0.109201  23  84  0.140401 
29  9000  6  36  0.0468  4  21  0.0312  4  21  0.0312 
30  9000  11  37  0.0624  454  931  1.450809  424  1346  1.747211 
31  9000  18  63  0.124801  15  51  0.093601  3  10  0.0312 
32  9000  18  70  0.218401  23  61  0.218401  3  18  0.0624 
33  9000  2  5  0.000001  2  5  0.0312  2  5  0.000001 
34  9000  8  16  0.0312  6  12  0.0312  3  6  0.0312 
35  9000  4  13  0.0312  4  10  0.0312  3  8  0.000001 
36  9000  7  23  4.602029  8  28  5.569236  10  47  8.673656 
37  9000  7  23  0.0624  1412  2829  6.942044  2000  6021  11.356873 
38  9000  4  18  0.0312  8  35  0.187201  4  11  0.0312 
39  9000  5  19  0.0312  28  56  0.124801  3  8  0.0312 
40  9000  13  43  0.561604  835  2936  36.223432  9  41  0.421203 
41  9000  10  32  0.0624  17  41  0.093601  22  81  0.124801 
42  9000  4  33  0.0624  13  35  0.124801  9  47  0.109201 
43  9000  16  62  1.029607  16  38  0.951606  13  48  0.780005 
44  9000  3  17  0.156001  9  50  0.624004  3  17  0.187201 
45  9000  21  118  1.49761  12  81  0.858006  3  24  0.202801 
46  9000  20  81  1.435209  209  443  11.247672  110  362  6.630042 
47  9000  11  37  27.066173  30  97  68.64044  37  112  87.220159 
48  9000  13  54  9.718862  31  92  18.610919  23  50  11.980877 
49  9000  11  37  0.0624  478  979  1.51321  504  1592  1.887612 
50  9000  11  37  7.971651  472  967  263.68849  444  1273  299.381519 
51  9000  6  31  0.156001  7  25  0.218401  3  17  0.124801 
52  9000  62  186  0.998406  63  195  0.842405  4  21  0.0624 
53  9000  10  32  0.0312  2000  4059  7.72205  1865  5618  7.971651 
54  9000  4  11  0.0312  21  79  0.156001  17  79  0.124801 
55  9000  10  24  3.010819  7  25  3.213621  3  10  1.076407 
56  9000  7  21  0.0156  2000  4003  6.489642  1390  4107  5.335234 
57  9000  5  39  0.358802  67  220  4.024826  3  24  0.202801 
58  9000  5  24  0.343202  114  282  6.411641  82  315  5.257234 
59  9000  5  39  0.343202  68  310  4.72683  3  23  0.171601 
60  9000  18  74  1.294808  206  437  11.107271  119  363  6.957645 
61  9000  5  39  0.358802  85  247  4.929632  3  24  0.218401 
62  9000  4  32  0.234001  4  32  0.249602  3  22  0.187201 
63  9000  3  22  0.187201  3  22  0.187201  3  22  0.187201 
64  9000  5  39  0.343202  23  147  1.747211  3  23  0.218401 
65  9000  12  59  15.334898  14  51  14.944896  7  21  6.130839 
66  9000  3  9  1.62241  2000  4022  1114.767546  529  2196  443.526443 
67  9000  5  28  0.093601  15  58  0.280802  3  23  0.0312 
68  9000  13  55  0.109201  11  27  0.0624  9  25  0.0624 
69  9000  16  73  0.218401  24  55  0.187201  20  70  0.171601 
70  9000  4  13  2.542816  41  203  36.332633  35  231  37.783442 
71  9000  11  35  0.093601  2000  4014  6.708043  1491  4631  5.600436 
72  9000  9  30  21.85574  1089  3897  2675.588751  287  1015  704.391315 
73  9000  19  65  0.093601  607  1269  1.856412  669  2062  2.293215 
Test problems
No.  Problem 

1  Exponential function 1 
2  Exponential function 2 
3  Trigonometric function 
4  Singular function 
5  Logarithmic function 
6  Broyden tridiagonal function 
7  Trigexp function 
8  Strictly convex function 1 
9  Strictly convex function 2 
10  Zero Jacobian function 
11  Linear function (full rank) 
12  Penalty function 
13  Variable dimensioned function 
14  Extended Powell singular function 
15  Tridiagonal system 
16  Fivediagonal system 
17  Extended Freudenstein and Roth function 
18  Extended Wood problem 
19  Discrete boundary value problem 
Other cases: To save space, we list only the data for dimension 9000; the remaining data are given in the attachment.
4.2 Results and discussion
5 Nonlinear equations
6 The global convergence of Algorithm 5.1
First, we make the following necessary assumptions.
Assumption 2
 (i)
The objective model of (5.1) has a nonempty solution set.
 (ii)The function h is Lipschitz continuous on \(R^{n}\), which means that there is a positive constant L such that$$ \bigl\Vert h(x)-h(y)\bigr\Vert \leq L\Vert x-y\Vert , \quad \forall x, y \in R^{n}. $$(6.1)
Lemma 6.1
The proof is omitted since it is similar to that in [61].
Lemma 6.2
If Assumption 2 holds, then Algorithm 5.1 generates, within a finite number of iteration steps, an iteration point satisfying \(x_{k+1}=x_{k}+\alpha_{k}d_{k}\).
Proof
It is obvious that this formula fails to meet the definition of the step length \(\alpha_{k^{*}}^{(l)}\). Thus, we conclude that the proposed line search technique is reasonable and well defined. In other words, the line search technique generates a positive step length \(\alpha_{k}\) within a finite number of backtracking repetitions. By this conclusion we obtain the following theorem on the global convergence of the proposed algorithm. □
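A backtracking line search of the kind discussed here tries \(\alpha =s\rho ^{l}\) for \(l=0,1,2,\ldots \) until an acceptance test holds, so it terminates after finitely many reductions. The sketch below uses a simplified Armijo-type decrease test on the merit function \(\Vert h(x)\Vert ^{2}\) with the constants from Sect. 7.1; it is a stand-in for, not a reproduction of, the line search in Algorithm 5.1:

```python
import numpy as np

def backtracking(theta, x, d, s=1.0, rho=0.9, sigma=0.8, max_back=60):
    """Return the first alpha = s * rho**l achieving sufficient decrease
    of the merit function theta(x) = ||h(x)||^2.

    Simplified Armijo-type stand-in for the paper's line search; the
    constants s, rho, sigma mirror the initiation in Sect. 7.1, but the
    acceptance test here is not the paper's exact condition."""
    alpha = s
    for _ in range(max_back):
        if theta(x + alpha * d) <= theta(x) - sigma * alpha**2 * (d @ d):
            return alpha
        alpha *= rho                       # shrink the trial step
    return alpha                           # fall back after max_back reductions

# h(x) = x, so theta(x) = ||x||^2 and d = -h(x) is a descent direction
theta = lambda x: float(x @ x)
x = np.array([1.0, 2.0])
alpha = backtracking(theta, x, -x)
```

For this example the full step is already acceptable, so the loop returns after a single test.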
Theorem 6.1
Proof
7 The results of nonlinear equations
In this section, we list the relevant numerical results of nonlinear equations and present the objective function \(h(x)=(f_{1}(x), f_{2}(x), \ldots, f_{n}(x))\), where the relevant functions’ information is listed in Table 1.
7.1 Problems and test experiments
To measure the efficiency of the proposed algorithm, in this section, we compare it with the method based on (1.10) (denoted Algorithm 6), which is otherwise identical to Algorithm 5.1, using the three characteristics “NI”, “NG”, and “CPU”. “NI” denotes the number of iterations, “NG” the number of function evaluations, and “CPU” the computation time for the tested problems. In Table 1, “No.” and “Problem” give the indices and the names of the test problems.
Stopping rule: If \(\Vert g_{k}\Vert \leq \varepsilon \) or the total iteration number is greater than 2000, then the algorithm stops.
Initiation: \(\varepsilon =10^{-5}\), \(\sigma =0.8\), \(s=1\), \(\rho =0.9\), \(\eta_{1}=0.85\), \(\eta_{2}=\eta_{3}=0.001\), \(\eta_{4}= \eta_{5}=0.1\).
Dimension: 3000, 6000, 9000.
Calculation environment: The experiments were run on a computer with 2 GB of memory, a Pentium(R) Dual-Core CPU E5800 @ 3.20 GHz, and the 64-bit Windows 7 operating system.
Numerical results
NO  Dim  Algorithm 5.1  Algorithm 6  

NI  NFG  CPU  NI  NFG  CPU  
1  3000  161  162  3.931225  146  147  4.149627 
1  6000  126  127  12.760882  115  116  11.122871 
1  9000  111  112  22.464144  99  100  19.515725 
2  3000  5  76  1.185608  5  76  1.060807 
2  6000  6  91  4.758031  5  76  4.009226 
2  9000  5  62  6.926444  5  62  6.754843 
3  3000  33  228  3.276021  18  106  1.778411 
3  6000  40  275  15.490899  18  106  6.084039 
3  9000  40  285  33.243813  18  106  12.54248 
4  3000  4  61  0.842405  4  61  0.936006 
4  6000  4  47  2.698817  4  61  3.322821 
4  9000  4  47  5.226033  4  61  6.817244 
5  3000  23  237  3.244821  23  237  3.354022 
5  6000  25  263  14.133691  25  263  13.930889 
5  9000  26  278  30.186193  26  278  30.092593 
6  3000  1999  29986  382.951255  1999  29986  365.369942 
6  6000  88  1307  68.141237  1999  29986  1484.240314 
6  9000  65  962  101.806253  1999  29986  3113.998361 
7  3000  4  47  0.748805  3  46  0.624004 
7  6000  4  47  2.589617  3  46  2.386815 
7  9000  4  47  5.257234  3  46  5.054432 
8  3000  25  156  2.854818  17  142  1.872012 
8  6000  32  189  10.826469  18  162  8.377254 
8  9000  28  192  21.512538  19  174  18.938521 
9  3000  10  151  1.934412  5  76  1.014007 
9  6000  4  61  3.510023  5  76  3.884425 
9  9000  4  61  6.614442  6  91  9.609662 
10  3000  1999  29986  386.804479  1999  29986  359.816306 
10  6000  1999  29986  1523.068963  1999  29986  1469.59182 
10  9000  1999  29986  3164.339884  1999  29986  3087.712193 
11  3000  498  7457  98.32743  499  7472  93.101397 
11  6000  498  7457  385.026068  499  7472  367.787958 
11  9000  498  7457  794.07629  498  7457  774.825767 
12  3000  1999  2000  51.059127  1999  2000  46.238696 
12  6000  1999  2000  199.322478  1999  2000  185.71919 
12  9000  1999  2000  405.680601  1999  2000  391.234908 
13  3000  1  2  0.0312  1  2  0.0624 
13  6000  1  2  0.156001  1  2  0.187201 
13  9000  1  2  0.140401  1  2  0.249602 
14  3000  1999  29972  400.220565  1999  29973  362.671125 
14  6000  1999  29972  1544.316299  1999  29973  1460.294161 
14  9000  1999  29972  3197.287295  1999  29973  3105.168705 
15  3000  4  61  0.733205  4  61  0.733205 
15  6000  4  61  3.790824  4  61  3.026419 
15  9000  4  61  6.552042  4  61  6.146439 
16  3000  5  62  1.060807  5  62  0.858006 
16  6000  5  62  3.400822  5  62  3.291621 
16  9000  5  62  6.942044  5  62  6.25564 
17  3000  6  77  1.326009  6  91  1.216808 
17  6000  6  77  4.243227  6  91  4.570829 
17  9000  6  77  8.548855  6  91  9.40686 
18  3000  5  76  0.936006  5  76  0.920406 
18  6000  5  76  3.900025  5  76  3.775224 
18  9000  5  76  8.533255  5  76  7.86245 
19  3000  108  1060  15.5689  141  1272  17.565713 
19  6000  81  788  44.429085  114  1029  53.820345 
19  9000  63  628  70.512452  100  903  99.715839 
7.2 Results and discussion
8 Conclusion
 (i)
The proposed three-term conjugate gradient formula possesses the sufficient descent property and the trust region feature without any additional conditions. The sufficient descent property ensures that the objective function value decreases at each iteration, so that the iteration sequence \(\{x_{k}\}\) converges to a limit point. Moreover, the trust region feature makes the global convergence proof of the presented algorithm easier to carry out.
 (ii)
The given algorithm can be applied not only to normal unconstrained optimization problems but also to nonlinear equations. The algorithms for both problem classes are globally convergent under general conditions.
 (iii)
Large-scale problems are solved in the numerical experiments, which shows that the new algorithms are very effective.
Declarations
Acknowledgements
The authors would like to thank the editor and the referees for their interesting comments, which greatly improved our paper. This work is supported by the National Natural Science Foundation of China (Grant No. 11661009), the Guangxi Science Fund for Distinguished Young Scholars (No. 2015GXNSFGA139001), the Guangxi Natural Science Key Fund (No. 2017GXNSFDA198046), and the Innovation Project of Guangxi Graduate Education (No. YCSW2018046).
Authors’ contributions
Dr. GY organized and checked this paper; Dr. WH performed the experiments of the algorithms and wrote the code. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 1. Birindelli, I., Leoni, F., Pacella, F.: Symmetry and spectral properties for viscosity solutions of fully nonlinear equations. J. Math. Pures Appl. 107(4), 409–428 (2017)
 2. Ganji, D.D., Fakour, M., Vahabzadeh, A., et al.: Accuracy of VIM, HPM and ADM in solving nonlinear equations for the steady three-dimensional flow of a Walter’s B fluid in vertical channel. Walailak J. Sci. Technol. 11(7), 203–204 (2014)
 3. Georgiades, F.: Nonlinear equations of motion of L-shaped beam structures. Eur. J. Mech. A, Solids 65, 91–122 (2017)
 4. Dai, Z., Wen, F.: Some improved sparse and stable portfolio optimization problems. Finance Res. Lett. (2018). https://doi.org/10.1016/j.frl.2018.02.026
 5. Dai, Z., Li, D., Wen, F.: Worst-case conditional value-at-risk for asymmetrically distributed asset scenarios returns. J. Comput. Anal. Appl. 20, 237–251 (2016)
 6. Dong, X., Liu, H., He, Y.: A self-adjusting conjugate gradient method with sufficient descent condition and conjugacy condition. J. Optim. Theory Appl. 165(1), 225–241 (2015)
 7. Dong, X., Liu, H., He, Y., Yang, X.: A modified Hestenes–Stiefel conjugate gradient method with sufficient descent condition and conjugacy condition. J. Comput. Appl. Math. 281, 239–249 (2015)
 8. Liu, Y.: Approximate solutions of fractional nonlinear equations using homotopy perturbation transformation method. Abstr. Appl. Anal. 2012(2), 374 (2014)
 9. Chen, P., Schwab, C.: Sparse-grid, reduced-basis Bayesian inversion: non-affine-parametric nonlinear equations. J. Comput. Phys. 316(C), 470–503 (2016)
 10. Shah, F.A., Noor, M.A.: Some numerical methods for solving nonlinear equations by using decomposition technique. Appl. Math. Comput. 251(C), 378–386 (2015)
 11. Waziri, M., Aisha, H.A., Mamat, M.: A structured Broyden’s-like method for solving systems of nonlinear equations. World Appl. Sci. J. 8(141), 7039–7046 (2014)
 12. Wen, F., He, Z., Dai, Z., et al.: Characteristics of investors’ risk preference for stock markets. Econ. Comput. Econ. Cybern. Stud. Res. 48, 235–254 (2014)
 13. Fletcher, R., Reeves, C.M.: Function minimization by conjugate gradients. Comput. J. 7(2), 149–154 (1964)
 14. Al-Baali, M., Narushima, Y., Yabe, H.: A family of three-term conjugate gradient methods with sufficient descent property for unconstrained optimization. Comput. Optim. Appl. 60(1), 89–110 (2015)
 15. Egido, J.L., Lessing, J., Martin, V., et al.: On the solution of the Hartree–Fock–Bogoliubov equations by the conjugate gradient method. Nucl. Phys. A 594(1), 70–86 (2016)
 16. Huang, C., Chen, C.: A boundary element-based inverse-problem in estimating transient boundary conditions with conjugate gradient method. Int. J. Numer. Methods Eng. 42(5), 943–965 (2015)
 17. Huang, N., Ma, C.: The modified conjugate gradient methods for solving a class of generalized coupled Sylvester-transpose matrix equations. Comput. Math. Appl. 67(8), 1545–1558 (2014)
 18. Mostafa, E.S.M.E.: A nonlinear conjugate gradient method for a special class of matrix optimization problems. J. Ind. Manag. Optim. 10(3), 883–903 (2014)
 19. Al-Baali, M., Spedicato, E., Maggioni, F.: Broyden’s quasi-Newton methods for a nonlinear system of equations and unconstrained optimization: a review and open problems. Optim. Methods Softw. 29(5), 937–954 (2014)
 20. Fang, X., Ni, Q., Zeng, M.: A modified quasi-Newton method for nonlinear equations. J. Comput. Appl. Math. 328, 44–58 (2018)
 21. Luo, Y.Z., Tang, G.J., Zhou, L.N.: Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method. Appl. Soft Comput. 8(2), 1068–1073 (2008)
 22. Tarzanagh, D.A., Nazari, P., Peyghami, M.R.: A nonmonotone PRP conjugate gradient method for solving square and underdetermined systems of equations. Comput. Math. Appl. 73(2), 339–354 (2017)
 23. Wan, Z., Hu, C., Yang, Z.: A spectral PRP conjugate gradient method for nonconvex optimization problems based on modified line search. Discrete Contin. Dyn. Syst., Ser. B 16(4), 1157–1169 (2017)
 24. Yuan, G., Sheng, Z., Wang, B., et al.: The global convergence of a modified BFGS method for nonconvex functions. J. Comput. Appl. Math. 327, 274–294 (2018)
 25. Amini, K., Shiker, M.A.K., Kimiaei, M.: A line search trust-region algorithm with nonmonotone adaptive radius for a system of nonlinear equations. 4OR 14, 133–152 (2016)
 26. Qi, L., Tong, X.J., Li, D.H.: Active-set projected trust-region algorithm for box-constrained nonsmooth equations. J. Optim. Theory Appl. 120(3), 601–625 (2004)
 27. Yang, Z., Sun, W., Qi, L.: Global convergence of a filter-trust-region algorithm for solving nonsmooth equations. Int. J. Comput. Math. 87(4), 788–796 (2010)
 28. Yu, G.: A derivative-free method for solving large-scale nonlinear systems of equations. J. Ind. Manag. Optim. 6(1), 149–160 (2017)
 29. Li, Q., Li, D.H.: A class of derivative-free methods for large-scale nonlinear monotone equations. IMA J. Numer. Anal. 31(4), 1625–1635 (2011)
 30. Sheng, Z., Yuan, G., Cui, Z.: A new adaptive trust region algorithm for optimization problems. Acta Math. Sci. 38B(2), 479–496 (2018)
 31. Sheng, Z., Yuan, G., Cui, Z., et al.: An adaptive trust region algorithm for large-residual nonsmooth least squares problems. J. Ind. Manag. Optim. 34, 707–718 (2018)
 32. Yuan, G., Sheng, Z., Liu, W.: The modified HZ conjugate gradient algorithm for large-scale nonsmooth optimization. PLoS ONE 11, 1–15 (2016)
 33. Yuan, G., Sheng, Z.: Nonsmooth Optimization Algorithms. Press of Science, Beijing (2017)
 34. Narushima, Y., Yabe, H., Ford, J.A.: A three-term conjugate gradient method with sufficient descent property for unconstrained optimization. SIAM J. Optim. 21(1), 212–230 (2016)
 35. Zhou, W.: Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. 22(4), 697–711 (2007)
 36. Cardenas, S.: Efficient generalized conjugate gradient algorithms. I. Theory. J. Optim. Theory Appl. 69(1), 129–137 (1991)
 37. Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177–182 (1999)
 38. Hestenes, M.R., Stiefel, E.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49(6), 409–436 (1952)
 39. Wei, Z., Yao, S., Liu, L.: The convergence properties of some new conjugate gradient methods. Appl. Math. Comput. 183(2), 1341–1350 (2006)
 40. Yuan, G., Lu, X.: A modified PRP conjugate gradient method. Ann. Oper. Res. 166(1), 73–90 (2009)
 41. Yuan, G., Lu, X., Wei, Z.: A conjugate gradient method with descent direction for unconstrained optimization. J. Comput. Appl. Math. 233(2), 519–530 (2009)
 42. Yuan, G., Meng, Z., Li, Y.: A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations. J. Optim. Theory Appl. 168(1), 129–152 (2016)
 43. Yuan, G., Wei, Z., Lu, X.: Global convergence of BFGS and PRP methods under a modified weak Wolfe–Powell line search. Appl. Math. Model. 47, 811–825 (2017)
 44. Zhang, L., Zhou, W., Li, D.H.: A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26(4), 629–640 (2006)
 45. Nazareth, L.: A conjugate direction algorithm without line searches. J. Optim. Theory Appl. 23(3), 373–387 (1977)
 46. Yuan, G., Wei, Z., Li, G.: A modified Polak–Ribière–Polyak conjugate gradient algorithm for nonsmooth convex programs. J. Comput. Appl. Math. 255, 86–96 (2014)
 47. Yuan, G., Zhang, M.: A three-terms Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations. J. Comput. Appl. Math. 286, 186–195 (2015)
 48. Andrei, N.: An unconstrained optimization test functions collection. Environ. Sci. Technol. 10(1), 6552–6558 (2008)
 49. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2001)
 50. Ahmad, F., Tohidi, E., Ullah, M.Z., et al.: Higher order multi-step Jarratt-like method for solving systems of nonlinear equations: application to PDEs and ODEs. Comput. Math. Appl. 70(4), 624–636 (2015)
 51. Kang, S.M., Nazeer, W., Tanveer, M., et al.: Improvements in Newton–Raphson method for nonlinear equations using modified Adomian decomposition method. Int. J. Math. Anal. 9(39), 1910–1928 (2015)
 52. Matinfar, M., Aminzadeh, M.: Three-step iterative methods with eighth-order convergence for solving nonlinear equations. J. Comput. Appl. Math. 225(1), 105–112 (2016)
 53. Papp, Z., Rapajić, S.: FR type methods for systems of large-scale nonlinear monotone equations. Appl. Math. Comput. 269(C), 816–823 (2015)
 54. Yuan, G., Wei, Z., Lu, X.: A new backtracking inexact BFGS method for symmetric nonlinear equations. Comput. Math. Appl. 55(1), 116–129 (2008)
 55. Li, Q., Li, D.H.: A class of derivative-free methods for large-scale nonlinear monotone equations. IMA J. Numer. Anal. 31(4), 1625–1635 (2011)
 56. Solodov, M.V., Svaiter, B.F.: A hybrid projection-proximal point algorithm. J. Convex Anal. 6, 59–70 (1999)
 57. Ceng, L.C., Wen, C.F., Yao, Y.: Relaxed extragradient-like methods for systems of generalized equilibria with constraints of mixed equilibria, minimization and fixed point problems. J. Nonlinear Var. Anal. 1, 367–390 (2017)
 58. Cho, S.Y.: Generalized mixed equilibrium and fixed point problems in a Banach space. J. Nonlinear Sci. Appl. 9, 1083–1092 (2016)
 59. Cho, S.Y.: Strong convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 8, 19–31 (2018)
 60. Liu, Y.: A modified hybrid method for solving variational inequality problems in Banach spaces. J. Nonlinear Funct. Anal. 2017, Article ID 31 (2017)
 61. Solodov, M.V., Svaiter, B.F.: A globally convergent inexact Newton method for systems of monotone equations. In: Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 1411–1414. Springer, Berlin (1998)