A class of new derivative-free gradient type methods for large-scale nonlinear systems of monotone equations

*Correspondence: fangxiaowei@163.com
1 Department of Mathematics, Huzhou University, Huzhou, China

Abstract: In this paper, we present a class of new derivative-free gradient type methods for large-scale nonlinear systems of monotone equations. The methods combine the RMIL conjugate gradient method, the hyperplane projection strategy, and a derivative-free line search technique. Under appropriate assumptions, the global convergence of the given methods is established. Numerical experiments indicate that the proposed methods are effective.


Introduction
In this paper, we consider the following nonlinear system of monotone equations:

F(x) = 0, (1)

where F : R^n → R^n is continuous and monotone, which means F satisfies

(F(x) − F(y))^T (x − y) ≥ 0 for all x, y ∈ R^n.

Nonlinear systems of monotone equations arise widely in economics, finance, engineering, industry, and many other fields, so numerous iterative algorithms have been developed for solving (1).
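As a concrete illustration of the monotonicity condition above, the following sketch numerically checks (F(x) − F(y))^T (x − y) ≥ 0 for the componentwise map F(x)_i = e^{x_i} − 1; this map and all names are illustrative only and are not taken from the paper.

```python
# Check (F(x) - F(y))^T (x - y) >= 0 on random pairs for a sample monotone map.
# F(x)_i = exp(x_i) - 1 is monotone because each component is a nondecreasing
# function of x_i alone.  Purely illustrative; not one of the paper's problems.
import math
import random

def F(x):
    return [math.exp(t) - 1.0 for t in x]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-2.0, 2.0) for _ in range(5)]
    y = [random.uniform(-2.0, 2.0) for _ in range(5)]
    lhs = inner([a - b for a, b in zip(F(x), F(y))],
                [a - b for a, b in zip(x, y)])
    assert lhs >= 0.0
```

Since exp is increasing, each term (e^{x_i} − e^{y_i})(x_i − y_i) is nonnegative, so the check can never fail for this particular map.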
Recently, La Cruz [3] presented a spectral method that uses the residual vector as the search direction for solving large-scale systems of nonlinear monotone equations. Solodov and Svaiter [12] proposed a method combining projection, proximal point, and Newton techniques. Building on the work of Solodov and Svaiter [12], Zhang and Zhou [17] developed a spectral gradient projection method for solving nonlinear monotone equations.
In particular, conjugate gradient methods are widely used for solving large-scale nonlinear equations because of their low memory requirements and simplicity [8, 10]. In recent years, several authors have proposed methods for solving nonlinear monotone equations that combine conjugate gradient methods with the projection method [12]. For instance, Cheng [2] extended the PRP conjugate gradient method to monotone equations. Yan et al. [13] proposed two modified HS conjugate gradient methods. Li and Li [7] designed a class of derivative-free methods based on a line search technique. Ahookhosh et al. [1] developed two derivative-free conjugate gradient procedures. Papp and Rapajić [9] described some new FR-type methods. Yuan et al. [14, 15] proposed new three-term conjugate gradient methods. Dai et al. [4] gave a modified Perry conjugate gradient method. Zhou et al. [20, 21] developed a class of related methods. Zhang [16] developed a residual method based on a secant condition.
For unconstrained optimization problems, Rivaie et al. [11] designed an RMIL conjugate gradient method. Fang and Ni [6] extended the RMIL method to large-scale nonlinear systems of equations using the ideas of a nonmonotone line search. Numerical experiments show that the RMIL method is effective in practice.
For systems of monotone equations, we describe a class of new derivative-free gradient type methods, inspired by the efficiency of the RMIL method [11] and the projection strategy of [12].
This paper is organized as follows. In Sect. 2, we propose the algorithm. In Sect. 3, we establish the global convergence. In Sect. 4, numerical results show the efficiency of the proposed methods. In Sect. 5, we give some conclusions. Throughout, ‖·‖ denotes the Euclidean norm.

Algorithm
In this section, we first consider the conjugate gradient method for the following unconstrained optimization problem:

min f(x), x ∈ R^n, (2)

where f : R^n → R is smooth. Quite recently, Rivaie et al. [11] developed the RMIL conjugate gradient method, in which the search direction d_k is given by

d_0 = −g_0, d_k = −g_k + β_k^{RMIL} d_{k−1} with β_k^{RMIL} = g_k^T (g_k − g_{k−1}) / ‖d_{k−1}‖^2, (3)

where g_k is the gradient of f at x_k. Numerical results show that the RMIL conjugate gradient method is more efficient than many other conjugate gradient methods.
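In code, the RMIL direction update can be sketched as follows, using the coefficient β_k = g_k^T (g_k − g_{k−1}) / ‖d_{k−1}‖^2 commonly reported for this method; all function and variable names are ours.

```python
# Sketch of the RMIL search direction: d_0 = -g_0 and, for k >= 1,
# d_k = -g_k + beta_k d_{k-1} with beta_k = g_k^T (g_k - g_{k-1}) / ||d_{k-1}||^2.

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def rmil_direction(g_k, g_prev=None, d_prev=None):
    if g_prev is None or d_prev is None:
        return [-t for t in g_k]              # first iteration: steepest descent
    denom = inner(d_prev, d_prev)
    if denom == 0.0:                          # safeguard against division by zero
        return [-t for t in g_k]
    beta = inner(g_k, [a - b for a, b in zip(g_k, g_prev)]) / denom
    return [-g + beta * d for g, d in zip(g_k, d_prev)]
```

For example, with g_k = (0, 1), g_{k−1} = (1, 0), and d_{k−1} = (−1, 0), the coefficient is β_k = 1 and the direction is d_k = (−1, −1).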
We now focus on solving the monotone equations (1). Following the projection procedure in [12], a line search is performed to find a point

z_k = x_k + α_k d_k (4)

such that

F(z_k)^T (x_k − z_k) > 0. (5)

On the other hand, for any x* such that F(x*) = 0, the monotonicity of F yields

F(z_k)^T (x* − z_k) ≤ 0. (6)

Equations (5) and (6) imply that the hyperplane

H_k = {x ∈ R^n : F(z_k)^T (x − z_k) = 0}

strictly separates the zeros of the system of monotone equations (1) from x_k. Therefore, Solodov and Svaiter [12] compute the next iterate x_{k+1} by projecting x_k onto the hyperplane H_k. Specifically, x_{k+1} is obtained by

x_{k+1} = x_k − (F(z_k)^T (x_k − z_k) / ‖F(z_k)‖^2) F(z_k).

The steplength α_k of (4) is determined by a proper line search technique. Recently, Zhang and Zhou [17] and Zhou [18] presented the following derivative-free line search rule: α_k = max{s, ρs, ρ^2 s, . . .} is the largest element satisfying

−F(x_k + α_k d_k)^T d_k ≥ σ α_k ‖d_k‖^2,

where d_k is the search direction and σ > 0, s > 0, 0 < ρ < 1 are constants. We now extend the RMIL conjugate gradient method [11] to nonlinear systems of monotone equations, combining it with the projection method [12] and the derivative-free line search technique [17, 18]. The steps of our algorithm are listed as follows.
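The hyperplane projection step of Solodov and Svaiter can be sketched as below; by construction the returned point satisfies F(z_k)^T (x_{k+1} − z_k) = 0, i.e. it lies on the separating hyperplane H_k. The helper names are ours.

```python
# Hyperplane projection:
# x_{k+1} = x_k - (F(z_k)^T (x_k - z_k) / ||F(z_k)||^2) F(z_k).

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_step(x_k, z_k, Fz):
    """Project x_k onto H_k = {x : Fz^T (x - z_k) = 0}."""
    diff = [a - b for a, b in zip(x_k, z_k)]
    lam = inner(Fz, diff) / inner(Fz, Fz)
    return [x - lam * f for x, f in zip(x_k, Fz)]
```

The design choice here is that the projection needs only one evaluation F(z_k), already available from the line search, so the step adds essentially no cost per iteration.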

Algorithm 1 (MRMIL)
Step 0: Choose an initial point x_0 ∈ R^n and set k = 0.
Step 1: Choose a search direction d_k that satisfies the sufficient descent condition F(x_k)^T d_k ≤ −c ‖F(x_k)‖^2 for some constant c > 0, and determine the initial steplength α = s.
Step 2: If the derivative-free line search condition holds for α, then set α_k = α, z_k = x_k + α_k d_k, and go to Step 3; otherwise set α = ρα and repeat Step 2.
Step 3: If ‖F(z_k)‖ > ε, then compute x_{k+1} by projecting x_k onto the hyperplane H_k and go to Step 4; otherwise stop.
Step 4: If ‖F(x_{k+1})‖ > ε and k < k_max, then set k = k + 1 and go to Step 1; otherwise stop.

MRMIL1 direction: The MRMIL1 method is Algorithm 1 with the MRMIL1 direction defined by (13).
MRMIL2 direction: The MRMIL2 method is Algorithm 1 with the MRMIL2 direction defined by (15).
MRMIL3 direction: The MRMIL3 method is Algorithm 1 with the MRMIL3 direction defined by (17).
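Putting the pieces together, one pass of Algorithm 1 can be sketched as follows. Since the displayed formulas (13), (15), and (17) are not reproduced here, the sketch substitutes the plain residual direction d_k = −F(x_k), which trivially satisfies a sufficient descent condition; everything below is an illustrative reading of the algorithm under that assumption, not the authors' code.

```python
# Illustrative sketch of Algorithm 1: derivative-free backtracking line search
# followed by hyperplane projection.  The stand-in direction d_k = -F(x_k)
# replaces the MRMIL directions (13)/(15)/(17), which are not shown here.
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u))

def solve_monotone(F, x0, s=1.0, rho=0.5, sigma=1e-4, eps=1e-8, k_max=500):
    x = list(x0)
    for _ in range(k_max):
        Fx = F(x)
        if norm(Fx) <= eps:
            break
        d = [-t for t in Fx]                  # stand-in descent direction
        alpha = s
        while True:                           # backtracking; terminates for
            z = [xi + alpha * di for xi, di in zip(x, d)]   # continuous F
            Fz = F(z)
            if -inner(Fz, d) >= sigma * alpha * inner(d, d):
                break
            alpha *= rho
        if norm(Fz) <= eps:                   # z is already a good zero
            x = z
            break
        lam = inner(Fz, [a - b for a, b in zip(x, z)]) / inner(Fz, Fz)
        x = [xi - lam * fi for xi, fi in zip(x, Fz)]   # projection onto H_k
    return x

# Toy monotone system F(x) = x, whose unique zero is the origin.
sol = solve_monotone(lambda v: list(v), [2.0, -3.0])
```

For F(x) = x, each iteration halves the iterate, so the loop converges to the origin well within the default iteration budget.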

Convergence analysis
In this section, in order to establish the global convergence of the MRMIL1, MRMIL2, and MRMIL3 methods, we make the following assumptions.

Assumption 3.1
(1) The solution set of (1) is nonempty.
(2) F is Lipschitz continuous, i.e., ‖F(x) − F(y)‖ ≤ L ‖x − y‖ for all x, y ∈ R^n, where L is a positive constant.

Assumption 3.1 implies that

‖F(x_k)‖ ≤ κ for all k,

where κ is a positive constant. Now, we state Lemma 3.1, whose proof is similar to those in [12] and is therefore omitted.

Lemma 3.1 For any x* such that F(x*) = 0, we have

‖x_{k+1} − x*‖^2 ≤ ‖x_k − x*‖^2 − ‖x_{k+1} − x_k‖^2.

In addition, the sequence {x_k} satisfies a bound expressed in terms of the constants κ, γ, s, and L, where κ > 0, 0 < γ < 1, s > 0, and L > 0.
Since (41) and (42) contradict each other, conclusion (36) holds. From Assumption 3.1, Lemma 3.1, and (36), we conclude that the sequence {x_k} converges to a point x* such that F(x*) = 0.

Numerical experiments
In this section, we report numerical results for the MRMIL1, MRMIL2, and MRMIL3 methods and compare them with the DFPB1 method in [1] and the M3TFR2 method in [9]. Our tests are implemented in Matlab R2011a and run on a personal computer with 8 GB RAM and an Intel Core i5-3470 CPU.
In order to compare all methods, we employ the performance profiles of [5], which measure, for each solver v ∈ V, the fraction of problems p ∈ P whose performance ratio t_{p,v} / min_{w ∈ V} t_{p,w} is within a threshold determined by τ, where P is the test set, |P| is the number of problems in P, V is the set of solvers, and t_{p,v} is the CPU time (or the number of function evaluations, or the number of iterations) for p ∈ P and v ∈ V. We test the following problems with different starting points and various sizes: (1) problems 1-7 from [2, 7, 13, 19] with sizes 1000, 5000, 10,000, and 50,000; (2) problem 8 from [7] with sizes 10,000, 20,164, and 40,000. All problems are initialized with eight starting points. Following [1, 9], we used the same parameter values (including the initial steplength s) for all five methods. Figure 1 shows the iteration performance profiles of the five methods. As can be seen, MRMIL3 gives better results than M3TFR2, DFPB1, MRMIL1, and MRMIL2, as it solves a higher percentage of problems when τ ≥ 0.2. It can also be seen that DFPB1, MRMIL1, and MRMIL2 have similar performance, especially when τ ≥ 0.3. Furthermore, M3TFR2 gives better results than DFPB1, MRMIL1, and MRMIL2, although the difference becomes small as the performance ratio τ increases.
The performance profiles for the number of function evaluations are reported in Figure 2. We note that MRMIL3 performs better than the other four methods when τ ≥ 0.2. In addition, we observe that MRMIL1 and MRMIL2 are more efficient than DFPB1 when τ ≥ 0.3, but less efficient than M3TFR2. Figure 3 shows the CPU time performance profiles. When τ ≤ 0.15, M3TFR2 and DFPB1 use the shortest CPU time, but MRMIL3 gives the best result when τ ≥ 0.15.
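For reference, the performance profile fraction of [5] can be computed as in the sketch below, assuming the standard Dolan-Moré ratio r_{p,v} = t_{p,v} / min_w t_{p,w} (the figures may plot τ on a logarithmic scale, which would explain τ values below 1); the solver names and timings are made up for illustration.

```python
# Performance profile: for each solver v, the fraction of problems p whose
# ratio t_{p,v} / min_w t_{p,w} is at most tau (Dolan-Moré style).

def profile(times, tau):
    """times: dict mapping solver name -> list of costs, one per problem."""
    solvers = list(times)
    n = len(next(iter(times.values())))
    best = [min(times[v][p] for v in solvers) for p in range(n)]
    return {v: sum(1 for p in range(n) if times[v][p] / best[p] <= tau) / n
            for v in solvers}

# Made-up timings: solver "A" wins problem 1, solver "B" wins problem 2.
times = {"A": [1.0, 2.0], "B": [2.0, 1.0]}
```

With these timings, profile(times, 1.0) gives 0.5 for each solver (each is fastest on one of the two problems), and profile(times, 2.0) gives 1.0 for both.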

Conclusion
In this paper, we give a class of new derivative-free gradient type methods for large-scale nonlinear systems of monotone equations. Under mild assumptions, we prove that the methods possess global convergence properties. Numerical experiments show that the proposed methods are promising, especially the MRMIL3 method, which is the most efficient one.

Appendix: Test problems
We now list the following test problems.