A family of conjugate gradient methods for large-scale nonlinear equations

In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method needs only low storage, and its subproblem can be solved easily. Compared with existing solution methods for the problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.


Introduction
Consider the following nonlinear equations problem of finding x ∈ C such that

F(x) = 0,  (1.1)

where F : R^n → R^n is a continuous nonlinear mapping and C is a nonempty closed convex subset of R^n. The problem finds wide applications in areas such as ballistic trajectory computation and vibration systems [, ], the power flow equations [-], economic equilibrium problems [-], etc.

Generally, there are two categories of solution methods for problem (1.1). The first consists of first-order methods, including the trust region method, the Levenberg-Marquardt method and the projection method; the second consists of second-order methods, including the Newton method and quasi-Newton methods. Among the first category, Zhang et al. [] proposed a spectral gradient method for problem (1.1) with C = R^n, and Wang et al. [] proposed a projection method for problem (1.1). Later, Yu et al. [] proposed a spectral gradient projection method for constrained nonlinear equations. Compared with the projection method in [], the methods in [, ] need Lipschitz continuity of the underlying mapping F(·); on the other hand, the former needs to solve a linear equation at each iteration, and its variants [, ] inherit this shortcoming.

Different from the above, in this paper we consider the conjugate gradient method for solving problem (1.1). To this end, we briefly review the well-known conjugate gradient method for the unconstrained optimization problem

min f(x), x ∈ R^n.  (1.2)

The conjugate gradient method generates the sequence of iterates recurrently by

x_{k+1} = x_k + α_k d_k,  (1.3)

where x_k is the current iterate, α_k > 0 is the step-size determined by some line search, and d_k is the search direction defined by

d_k = -g_k if k = 0;  d_k = -g_k + β_k d_{k-1} if k ≥ 1,  (1.4)

where g_k = ∇f(x_k) and β_k is a scalar parameter whose different choices yield different conjugate gradient methods; in safeguarded variants, |β_k| is bounded in terms of a constant t > 0.

In this paper, motivated by the projection methods in [, ] and the conjugate gradient methods in [, ], we propose a new family of conjugate gradient projection methods for solving problem (1.1). The newly designed method is derivative-free, as it needs neither the Jacobian matrix of the underlying mapping of (1.1) nor any approximation of it. Further, the new method does not need to solve any linear equation at each iteration, so it is suitable for solving the large-scale problem (1.1).
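To make the recursion (1.3)-(1.4) concrete, here is a minimal NumPy sketch of one direction update; the Fletcher-Reeves formula used for β_k is our illustrative choice, not one singled out above.

```python
import numpy as np

def cg_direction(g_k, g_prev, d_prev):
    """One conjugate gradient direction update d_k = -g_k + beta_k * d_{k-1},
    with the classical Fletcher-Reeves choice beta_k = ||g_k||^2 / ||g_{k-1}||^2."""
    beta_k = g_k.dot(g_k) / g_prev.dot(g_prev)
    return -g_k + beta_k * d_prev

g_prev, g_k = np.array([1.0, 0.0]), np.array([0.5, 0.5])
d_prev = -g_prev                          # first direction is the steepest descent step
d_k = cg_direction(g_k, g_prev, d_prev)   # -> array([-1. , -0.5])
```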
The remainder of this paper is organized as follows. Section 2 describes the new method and establishes its global convergence. Numerical results are reported in Section 3. Some concluding remarks are drawn in the last section.

Algorithm and convergence analysis
Throughout this paper, we assume that the mapping F(·) is monotone, or more generally pseudo-monotone, on R^n in the sense of Karamardian []. That is, for all x, y ∈ R^n,

⟨F(y), x - y⟩ ≥ 0 implies ⟨F(x), x - y⟩ ≥ 0,

where ⟨·, ·⟩ denotes the usual inner product in R^n. In particular, since F(x*) = 0 for any solution x* of problem (1.1), pseudo-monotonicity yields

⟨F(x), x - x*⟩ ≥ 0 for all x ∈ R^n.  (2.1)

Further, we use P_C(x) to denote the projection of a point x ∈ R^n onto the convex set C, i.e., P_C(x) = argmin{‖y - x‖ : y ∈ C}, which satisfies the following property:

‖P_C(x) - y‖² ≤ ‖x - y‖² - ‖P_C(x) - x‖² for all x ∈ R^n and y ∈ C.  (2.2)
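Closed-form projections exist only for special sets C; as a small illustration (the box constraint below is our assumption, since C is left general here), P_C reduces to componentwise clipping:

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box C = {y : lower <= y <= upper}:
    each component of x is clipped to its interval independently."""
    return np.clip(x, lower, upper)

x, y = np.array([2.0, -3.0]), np.array([0.5, 0.0])
p = project_box(x, 0.0, 1.0)   # p = [1.0, 0.0]
# Property (2.2): ||p - y||^2 = 0.25 <= ||x - y||^2 - ||p - x||^2 = 1.25 here.
```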
Step . If F(x k ) < , stop; otherwise go to Step .
Step . Compute where β k is such that Step . Find the trial point Step . Compute Set k := k +  and go to Step .
Obviously, Algorithm . is different from the methods in [, ]. Now, we give some comment on the searching direction d k defined by (.). We claim that it is derived from Schmidt orthogonalization. In fact, in order to make d k = -F(x k ) + β k d k- satisfy the property we only need to ensure that β k d k- is vertical to F(x k ). As a matter of fact, by Schmidt orthogonalization, we have Equality (.) together with the Cauchy-Schwarz inequality implies that d k ≥ F(x k ) . In addition, by (.) and (.), we have Therefore, for all k ≥ , it holds that Furthermore, it is easy to see that the line search (.) is well defined if F(x k ) = . For parameter β k defined by (.), it has many choices such as β S From the structure of H k , the orthogonal projection onto H k has a closed-form expression. That is, In particular, if x k = y k , then h k (x k ) > .
Lemma 2.1 Let x* be an arbitrary solution of problem (1.1). Then h_k(x*) ≤ 0 for all k. In particular, if x_k ≠ y_k, then h_k(x_k) > 0.

Proof From x_k - y_k = -α_k d_k and the line search (2.5), we have

h_k(x_k) = ⟨F(y_k), x_k - y_k⟩ = -α_k ⟨F(y_k), d_k⟩ ≥ σ α_k² ‖d_k‖²,

which is positive whenever x_k ≠ y_k. On the other hand, from condition (2.1) applied at the point y_k, we can obtain

h_k(x*) = ⟨F(y_k), x* - y_k⟩ ≤ 0.

This completes the proof.
Lemma . indicates that the hyperplane ∂H k = {x ∈ R n |h k (x) = } strictly separates the current iterate from the solutions of problem (.) if x k is not a solution. In addition, from Lemma ., we also can derive that the solution set S * of problem (.) is included in H k for all k.
Certainly, if Algorithm . terminates at step k, then x k is a solution of problem (.). So, in the following analysis, we assume that Algorithm . always generates an infinite sequence {x k }. Based on the lemma, we can establish the convergence of the algorithm.
Theorem . If F is continuous and condition (.) holds, then the sequence {x k } generated by Algorithm . globally converges to a solution of problem (.).
Proof First, we show that the sequences {x_k} and {y_k} are both bounded. In fact, since x* ∈ C ∩ H_k by Lemma 2.1, it follows from (2.2) and (2.6) that

‖x_{k+1} - x*‖² ≤ ‖x_k - x*‖² - ‖x_{k+1} - x_k‖².

Thus the sequence {‖x_k - x*‖} is decreasing and convergent, which also implies ‖x_{k+1} - x_k‖ → 0; hence the sequence {x_k} is bounded, and from (2.8), the sequence {d_k} is also bounded. Then, by y_k = x_k + α_k d_k with α_k ≤ 1, the sequence {y_k} is also bounded. By the continuity of F(·), there exists a constant M > 0 such that ‖F(y_k)‖ ≤ M for all k. So, since x_{k+1} ∈ H_k, Lemma 2.1 and the closed form of P_{H_k} give

‖x_{k+1} - x_k‖ ≥ dist(x_k, H_k) = h_k(x_k) / ‖F(y_k)‖ ≥ σ α_k² ‖d_k‖² / M,

from which we can deduce that lim_{k→∞} α_k ‖d_k‖ = 0. If lim inf_{k→∞} ‖F(x_k)‖ = 0, then by continuity the bounded sequence {x_k} has an accumulation point x̄ with F(x̄) = 0, and since {‖x_k - x̄‖} converges, the whole sequence converges to x̄. Suppose, on the contrary, that lim inf_{k→∞} ‖F(x_k)‖ > 0. Then, by (2.8), lim inf_{k→∞} ‖d_k‖ > 0, and hence α_k → 0. By the line search (2.5), the step-size ρ^{-1} α_k fails the acceptance test, i.e.,

-⟨F(x_k + ρ^{-1} α_k d_k), d_k⟩ < σ ρ^{-1} α_k ‖d_k‖².

Choosing subsequences with x_k → x̄ and d_k → d̄ and letting k → ∞, we get -⟨F(x̄), d̄⟩ ≤ 0. On the other hand, (2.7) gives -⟨F(x_k), d_k⟩ = ‖F(x_k)‖², whose limit ‖F(x̄)‖² is positive, a contradiction. Hence lim inf_{k→∞} ‖F(x_k)‖ = 0, and the whole sequence {x_k} converges to a solution of problem (1.1). This completes the proof.

Problem  The mapping F(·) is taken as
Problem  The mapping F(x) : R  → R  is given by This problem has a degenerate solution x * = (, , , ).  For Problem , the initial point is set as x  = (, , . . . , ), and Table  gives the numerical results by Algorithm . with different dimensions, where Iter. denotes the iteration number and CPU denotes the CPU time in seconds when the algorithm terminates. Table  lists the numerical results of Problem  with different initial points. The numerical results given in Table  and Table  show that the proposed method is efficient for solving the given two test problems.

Conclusion
In this paper, we extended the conjugate gradient method to nonlinear constrained equations. The major advantage of the method is that it needs neither the Jacobian matrix (nor any approximation of it) nor the solution of linear equations at each iteration; thus it is suitable for solving large-scale nonlinear constrained equations. Under mild conditions, the proposed method possesses global convergence. In Step 4 of Algorithm 2.1, we have to compute a projection onto the intersection of the feasible set C and a half-space at each iteration, which is equivalent to solving a quadratic program and can be quite time-consuming. Hence, how to remove this projection step is one of our future research topics.