A superlinearly convergent hybrid algorithm for systems of nonlinear equations
Journal of Inequalities and Applications volume 2012, Article number: 180 (2012)
Abstract
We propose a new algorithm for solving systems of nonlinear equations with convex constraints, combining elements of the Newton method, the proximal point method, and the projection method. Convergence of the whole iterative sequence is established under conditions weaker than those used in existing projection-type methods. We also establish the superlinear convergence rate of the new method when, in addition, a certain error bound condition holds. Preliminary numerical experiments show that our method is efficient.
MSC: 90C25, 90C30.
1 Introduction
Let $F:\mathbb{R}^n\to\mathbb{R}^n$ be a continuous mapping and $C\subseteq\mathbb{R}^n$ be a nonempty, closed, and convex set. The inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Consider the problem of finding

$$x \in C \quad \text{such that} \quad F(x) = 0. \tag{1.1}$$

Let S denote the solution set of (1.1). Throughout this paper, we assume that S is nonempty and that F has the property

$$\langle F(y),\, y - x \rangle \ge 0 \quad \text{for all } y \in C \text{ and all } x \in S. \tag{1.2}$$
Property (1.2) holds if F is monotone or, more generally, pseudomonotone on C in the sense of Karamardian [1].
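For instance, when F is monotone, i.e., $\langle F(y) - F(x),\, y - x\rangle \ge 0$ for all $x, y \in C$, property (1.2) follows at once from $F(x) = 0$ for every $x \in S$:

$$\langle F(y),\, y - x \rangle = \langle F(y) - F(x),\, y - x \rangle \ge 0 \quad \text{for all } y \in C,\ x \in S.$$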
Nonlinear equations have wide practical applications. For example, many problems arising in chemical engineering, economics, and communications can be transformed into nonlinear equations; see [2–5]. In recent years, many numerical methods have been proposed for problem (1.1) with a smooth mapping F, including the Newton method, quasi-Newton methods, the Levenberg-Marquardt method, trust region methods, and their variants; see [6–14].
Recently, Solodov and Svaiter [15] proposed a hybrid method for solving problem (1.1) which combines the Newton, proximal point, and projection methodologies. The method possesses a very nice global convergence property when F is monotone and continuous, and under the additional assumptions of differentiability and nonsingularity, it converges locally superlinearly. The nonsingularity condition, however, is quite strong. Relaxing it, Wang and Wang [16] proposed a modified version of the method with a different projection step and showed that, under a local error bound condition weaker than nonsingularity, their method converges superlinearly to a solution of problem (1.1). The numerical results reported in [16] show that the method is efficient. However, both [15] and [16] require the mapping F to be monotone, which seems too stringent a requirement for the purpose of ensuring global convergence and local superlinear convergence of the hybrid method.
To further relax the monotonicity assumption on F, in this paper we propose a new hybrid algorithm for problem (1.1) which includes the method of [16] as a special case. Global convergence of our method requires only that F satisfies property (1.2), which is weaker than monotonicity or even pseudomonotonicity. We also establish the superlinear convergence of our method under mild conditions. Preliminary numerical experiments show that our method is efficient.
2 Preliminaries and algorithms
For a nonempty, closed, and convex set $\Omega \subseteq \mathbb{R}^n$ and a vector $x \in \mathbb{R}^n$, the projection of x onto Ω is defined as

$$P_\Omega(x) = \arg\min\{\|z - x\| : z \in \Omega\}.$$

We have the following property of the projection operator; see [17].

Lemma 2.1 Let $\Omega \subseteq \mathbb{R}^n$ be a closed convex set. Then, for any $x \in \mathbb{R}^n$ and any $z \in \Omega$,

$$\|P_\Omega(x) - z\|^2 \le \|x - z\|^2 - \|P_\Omega(x) - x\|^2.$$
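To make the lemma concrete, the following small MATLAB check verifies the stated inequality for a box-shaped Ω, where the projection reduces to componentwise clipping; the box, the test points, and the helper proj are illustrative assumptions, not part of the paper.

```matlab
% Check the inequality of Lemma 2.1 on a box-shaped Omega.
% For Omega = {z : lo <= z <= hi}, the projection is componentwise clipping.
lo   = -ones(3,1);  hi = ones(3,1);
proj = @(v) min(max(v, lo), hi);     % P_Omega for the box (illustrative choice)

x  = [ 2.5; -3.0; 0.4];              % arbitrary point, generally outside the box
z  = [ 0.5;  0.5; -0.5];             % arbitrary point inside the box
Px = proj(x);

lhs = norm(Px - z)^2;
rhs = norm(x - z)^2 - norm(Px - x)^2;
fprintf('lhs = %.4f <= rhs = %.4f\n', lhs, rhs);   % Lemma 2.1 inequality
```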
Algorithm 2.1 Choose an initial point $x^0 \in C$ and parameters a, λ, … ; set $k := 0$.

Step 1. Compute $F(x^k)$. If $\|F(x^k)\| = 0$, stop. Otherwise, let $\mu_k = \ldots$ and … . Choose a positive semidefinite matrix $G_k$ and compute $d^k$ such that

… (2.1)

where

… (2.2)

Stop if $d^k = 0$. Otherwise, go to Step 2.

Step 2. Compute $y^k = \ldots$, where … and $m_k$ is the smallest nonnegative integer m satisfying

… (2.3)

Step 3. Compute

…

where … and

… (2.4)

Let $k := k + 1$ and return to Step 1.
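Because the displayed formulas (2.1)-(2.4) define Algorithm 2.1 precisely, the MATLAB sketch below shows only the generic Newton/line-search/projection template that hybrid methods of this family share. The test mapping F, the box constraint C, all parameter values, and the Solodov-Svaiter-style choices of $\mu_k$, line search, and halfspace projection are illustrative assumptions, not the paper's exact rules.

```matlab
% Generic sketch of a hybrid Newton / line-search / projection iteration.
% NOT the paper's exact update: mapping, set, and parameters are assumptions.
F     = @(x) x + sin(x);                   % hypothetical monotone test mapping
J     = @(x) eye(numel(x)) + diag(cos(x)); % its Jacobian
lo    = -2*ones(3,1);  hi = 2*ones(3,1);   % C taken as a box
projC = @(x) min(max(x, lo), hi);

x = projC([1.5; -1.2; 0.7]);               % feasible starting point
beta = 0.5;  sigma = 1e-4;  tol = 1e-8;
for k = 0:100
    Fx = F(x);
    if norm(Fx) <= tol, break; end
    mu = norm(Fx);                         % proximal regularization weight
    d  = -((J(x) + mu*eye(3)) \ Fx);       % regularized Newton direction, cf. (2.1)
    t  = 1;                                % Armijo-type backtracking, cf. (2.3)
    while -F(x + t*d)' * d < sigma * t * mu * norm(d)^2
        t = t * beta;
    end
    y  = x + t*d;   Fy = F(y);
    % Project x onto the halfspace {u : <F(y), u - y> <= 0}, then back onto C.
    x  = projC(x - (Fy' * (x - y)) / norm(Fy)^2 * Fy);
end
fprintf('iterations: %d, ||F(x)|| = %.2e\n', k, norm(F(x)));
```

The three ingredients mirror Steps 1-3 of Algorithm 2.1: a regularized Newton system, an Armijo-type backtracking search, and a cheap projection step requiring no additional linear solves.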
Remark 2.1 When we take the parameters …, …, and the search direction …, our algorithm reduces to the one in [16]. Moreover, in the step that produces the next iterate, both our projection step and our projection region differ from those in [15].

Now we analyze the feasibility of Algorithm 2.1. It is obvious that a direction $d^k$ satisfying conditions (2.1) and (2.2) exists; in fact, if we take $d^k = \ldots$, then (2.1) and (2.2) are satisfied. It remains only to show the feasibility of the line search (2.3).
Lemma 2.2 For every nonnegative integer k, there exists a nonnegative integer m satisfying (2.3).

Proof If $d^k = 0$, then it follows from (2.1) and (2.2) that $F(x^k) = 0$, which means that Algorithm 2.1 terminates with $x^k$ being a solution of problem (1.1).

Now assume that $d^k \ne 0$ for all k. By the definition of $d^k$, the Cauchy-Schwarz inequality, and the positive semidefiniteness of $G_k$, we have

… (2.5)

Suppose that the conclusion of Lemma 2.2 does not hold. Then there exists some k such that (2.3) is not satisfied for any nonnegative integer m, i.e.,

…

Letting $m \to \infty$ and using the continuity of F, we have

…

This, together with (2.5), …, and …, implies that …, which contradicts … . This completes the proof. □
3 Convergence analysis
In this section, we first prove two lemmas, and then analyze the global convergence of Algorithm 2.1.
Lemma 3.1 Suppose that the sequences $\{x^k\}$ and $\{d^k\}$ are generated by Algorithm 2.1, that $\{x^k\}$ is bounded, and that F is continuous. Then $\{d^k\}$ is also bounded.
Proof Combining inequality (2.5) with the Cauchy-Schwarz inequality, we obtain

…

By the definitions of … and …, it follows that

…

From the boundedness of $\{x^k\}$ and the continuity of F, we conclude that $\{F(x^k)\}$ is bounded, and hence so is $\{d^k\}$. This completes the proof. □
Lemma 3.2 Let $\bar{x}$ be a solution of problem (1.1) and let the function … be defined by (2.4). If condition (1.2) holds, then

…

In particular, if …, then … .

Proof We have

…

where the inequality follows from (2.3). Moreover,

…

where the inequality follows from condition (1.2) and the definition of … .

If …, then … because … . The proof is completed. □
Remark 3.1 Lemma 3.2 means that the hyperplane

$$H_k = \{x \in \mathbb{R}^n : \langle F(y^k),\, x - y^k \rangle = 0\}$$

strictly separates the current iterate $x^k$ from the solution set of problem (1.1).

Let $\bar{x} \in S$ and define $h(x) := \frac{1}{2}\|x - \bar{x}\|^2$. Since

$$\langle \nabla h(x^k),\, -F(y^k) \rangle = \langle x^k - \bar{x},\, -F(y^k) \rangle = -\langle F(y^k),\, y^k - \bar{x} \rangle - \langle F(y^k),\, x^k - y^k \rangle \le -\langle F(y^k),\, x^k - y^k \rangle \le \ldots < 0,$$

where the first inequality follows from condition (1.2), the second one follows from (2.3), and the last one follows from $d^k \ne 0$, we see that $-F(y^k)$ is a descent direction of the function h at the point $x^k$.
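This separation is also what makes the projection step inexpensive: for a halfspace of the form $H_k^- = \{u \in \mathbb{R}^n : \langle F(y^k),\, u - y^k \rangle \le 0\}$ (our notation; the paper's actual projection region in (2.4) may differ), the projection of any point x has the closed form

$$P_{H_k^-}(x) = x - \frac{\max\{0,\ \langle F(y^k),\, x - y^k \rangle\}}{\|F(y^k)\|^2}\, F(y^k).$$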
We next prove our main result. Clearly, if Algorithm 2.1 terminates at iteration k, then $x^k$ is a solution of problem (1.1). Therefore, in the following analysis, we assume that Algorithm 2.1 always generates an infinite sequence.

Theorem 3.1 If F is continuous on C, condition (1.2) holds, and …, then the sequence $\{x^k\}$ generated by Algorithm 2.1 globally converges to a solution of (1.1).
Proof Let $\bar{x}$ be a solution of problem (1.1). Since …, it follows from Lemma 2.1 that

…

i.e.,

…

which shows that the sequence $\{\|x^k - \bar{x}\|\}$ is nonincreasing and hence convergent. Therefore, $\{x^k\}$ is bounded and

…

From Lemma 3.1 and the continuity of F, the sequence $\{F(y^k)\}$ is bounded; that is, there exists a positive constant M such that

…

By (2.3) and the choices of … and λ, we have

…

Combining this with inequality (3.4), we deduce that

…
Now, we consider the following two possible cases:
Suppose first that … . Then we must have

…

From the definition of … and the choices of … and …, in each case it follows that

…

Since F is continuous and $\{x^k\}$ is bounded, the sequence $\{x^k\}$ has an accumulation point $\hat{x}$ such that

…

This shows that $\hat{x}$ is a solution of problem (1.1). Replacing $\bar{x}$ by $\hat{x}$ in the preceding argument, we obtain that the sequence $\{\|x^k - \hat{x}\|\}$ is nonincreasing and hence converges. Since $\hat{x}$ is an accumulation point of $\{x^k\}$, some subsequence of $\{\|x^k - \hat{x}\|\}$ converges to zero, which implies that the whole sequence $\{\|x^k - \hat{x}\|\}$ converges to zero, and hence $x^k \to \hat{x}$.
Suppose now that … . Let $\hat{x}$ be any accumulation point of $\{x^k\}$ and let $\{x^{k_j}\}$ be the corresponding subsequence converging to $\hat{x}$. By the choice of …, (2.3) implies that

…

Since F is continuous, letting $j \to \infty$ yields

…

From (2.5) and (3.5), we conclude that …, which contradicts … . Hence, this case is not possible. This completes the proof. □
Remark 3.2 The conditions used here for global convergence are weaker than those used in [15, 16].
4 Convergence rate
In this section, we provide a result on the convergence rate of the iterative sequence generated by Algorithm 2.1. To establish this result, we need the following conditions (4.1) and (4.2).
For $x^* \in S$, there are positive constants δ, $c_1$, and $c_2$ such that

$$c_1 \operatorname{dist}(x, S) \le \|F(x)\| \quad \text{for all } x \text{ with } \|x - x^*\| \le \delta, \tag{4.1}$$

and

… (4.2)

where $\operatorname{dist}(x, S)$ denotes the distance from x to the solution set S, and

…

If F is differentiable and ∇F is locally Lipschitz continuous with modulus L, then there exists a constant … such that

… (4.3)

In fact, by the mean value theorem for vector-valued functions, we have

$$F(y) - F(x) - \nabla F(x)(y - x) = \int_0^1 \big(\nabla F(x + t(y - x)) - \nabla F(x)\big)(y - x)\, dt.$$

Under assumption (4.2) or (4.3), it is readily shown that there exists a constant … such that

… (4.4)
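For completeness, the standard estimate behind a quadratic bound of the type in (4.3) can be written out under the stated Lipschitz assumption on ∇F (with modulus L):

$$\|F(y) - F(x) - \nabla F(x)(y - x)\| \le \int_0^1 \|\nabla F(x + t(y - x)) - \nabla F(x)\|\, \|y - x\|\, dt \le \int_0^1 L t \|y - x\|^2\, dt = \frac{L}{2}\,\|y - x\|^2,$$

so a constant of the form L/2 suffices for such a bound.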
In 1998, Solodov and Svaiter [15] showed that their method converges superlinearly when the underlying function F is monotone and differentiable, with a nonsingular Jacobian at the solution, and ∇F is locally Lipschitz continuous. It is known that the local error bound condition (4.1) is weaker than nonsingularity. Recently, under conditions (4.1) and (4.2) and with the underlying function F monotone and continuous, Wang and Wang [16] obtained a locally superlinear rate of convergence for their method.
Next, we analyze the superlinear convergence rate of the iterative sequence under a weaker condition. In the rest of this section, we assume that … and …, where … .
Lemma 4.1 Let $G \in \mathbb{R}^{n \times n}$ be a positive semidefinite matrix and let $\mu > 0$. Then

(1) $\|(G + \mu I)^{-1}\| \le 1/\mu$;

(2) $\|(G + \mu I)^{-1} G\| \le 2$.
Proof See [18]. □
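A quick MATLAB sanity check of the two bounds as stated above; the matrix and the value of μ are arbitrary illustrative choices.

```matlab
% Numeric check of Lemma 4.1 for a positive semidefinite G.
mu = 0.3;
B  = [1 2 0; 0 1 1; 1 0 1];
G  = B*B';                                   % symmetric positive semidefinite
M  = inv(G + mu*eye(3));
fprintf('||inv(G+mu*I)||   = %.4f  (bound 1/mu = %.4f)\n', norm(M), 1/mu);
fprintf('||inv(G+mu*I)*G|| = %.4f  (bound 2)\n', norm(M*G));
```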
Lemma 4.2 Suppose that F is continuous and satisfies conditions (1.2), (4.1), and (4.2). If there exists a positive constant N such that $\|G_k\| \le N$ for all k, then for all k sufficiently large,

(1) …;

(2) …, where …, …, and … are all positive constants.
Proof For (1), let … and let $\bar{x}^k$ denote the solution closest to $x^k$, so that $\|x^k - \bar{x}^k\| = \operatorname{dist}(x^k, S)$. We have

…

i.e., … . Thus, by (2.1), (2.2), (4.2), and Lemma 4.1, we have

…

By … and …, it follows that

…

From (4.1) and the choice of …, it holds that

…

From the boundedness of …, there exists a positive constant … such that

…

Therefore,

…
We obtain the left-hand side of (1) by setting … .

For the right-hand side, it follows from (2.1) and (2.2) that

…

We obtain the right-hand side by setting … .

For (2), using (2.1) and (2.2), we have

…

By setting …, we obtain the desired result. □
Lemma 4.3 Suppose that the assumptions in Lemma 4.2 hold. Then for all k sufficiently large, it holds that

…

Proof By … and the continuity of F, we have

…

By Lemma 4.2(1), we obtain

…

which means that … for all k sufficiently large. Hence, it follows from (4.2) that

… (4.6)

where … . Using (2.1) and (2.2), (4.6) can be written as

…

Hence,

…

By Lemma 4.2(1) and the choices of … and …, for k sufficiently large we obtain

…

where the last inequality follows from … . Therefore,

…

which implies that (2.3) holds with $m = 0$ for all k sufficiently large, i.e., $y^k = x^k + d^k$. This completes the proof. □
From now on, we assume that k is large enough so that $y^k = x^k + d^k$.
Lemma 4.4 Suppose that the assumptions in Lemma 4.2 hold. Set … . Then for all k sufficiently large, there exists a positive constant … such that

…

Proof Set

…

Then … and … . Hence, the vectors … and … are orthogonal, that is,

… (4.7)

where … is the angle between … and … . Because … and …, the angle between … and … is also … . By (4.7), we obtain

… (4.8)

which implies that the vectors …, …, and … constitute a triangle. Since … and …, for all k sufficiently large we have

…

which, together with (4.8) and Lemma 4.2(1), yields

…

where … . This completes the proof. □
Now, we turn our attention to the local rate of convergence.

Theorem 4.1 Suppose that the assumptions in Lemma 4.2 hold. Then the sequence $\{\operatorname{dist}(x^k, S)\}$ converges Q-superlinearly to 0.

Proof By the definition of …, Lemma 4.2(1), and (4.4), for sufficiently large k we have

…

which implies that … because … . Thus, … for k sufficiently large, which, together with (4.2), Lemma 4.2, Lemma 4.4, and the definition of …, gives

…

Because … is bounded, there exists a positive constant … such that

… (4.9)

On the other hand, from Lemma 3.2, we know that

…

where S is the solution set of problem (1.1). Since …, it follows from Lemma 2.1 that

…

which implies that

…

Therefore, combining inequalities (4.1), (4.5), and (4.9), we have

…

which implies that the order of superlinear convergence is at least 1.5. This completes the proof. □
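Concretely, superlinear convergence of order at least 1.5 means that there is a constant $c > 0$ such that, for all k sufficiently large,

$$\operatorname{dist}(x^{k+1}, S) \le c\, \bigl(\operatorname{dist}(x^k, S)\bigr)^{3/2},$$

which in particular forces the ratio $\operatorname{dist}(x^{k+1}, S)/\operatorname{dist}(x^k, S)$ to tend to zero.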
Remark 4.1 Compared with the local superlinear convergence results in [15, 16], our conditions are weaker.
5 Numerical experiments
In this section, we present some numerical results to show the efficiency of our method. The MATLAB codes were run on a notebook computer with a 2.10 GHz CPU under MATLAB Version 7.0. Just as in [16], we take … and use the left division operation in MATLAB to solve the system of linear equations (2.1) at each iteration. We choose …, …, …, …, and … . 'Iter.' denotes the number of iterations and 'CPU' denotes the CPU time in seconds. We use … as the stopping criterion. The following example was also tested in [16].
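In schematic form, the per-iteration left-division solve mentioned above looks as follows; here G, mu, and Fx are placeholders for $G_k$, $\mu_k$, and $F(x^k)$, and we assume (2.1) is a regularized Newton system of the usual form, which is our assumption rather than the paper's displayed formula.

```matlab
% Placeholder data standing in for G_k, mu_k, and F(x^k).
G  = [2 0; 0 1];                 % positive semidefinite matrix
mu = 0.5;
Fx = [1; -1];
% Solve (G + mu*I) d = -F(x^k) by MATLAB left division, as described in the text.
d  = (G + mu*eye(2)) \ (-Fx);
```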
Example Let

…

and let the constraint set C be taken as

…
From Tables 1 and 2, we can see that our algorithm is efficient when the parameters are chosen properly. We can also observe that the algorithm's results change with the value of a. When we take …, the results are not the best; that is to say, the direction … is not an optimal one.
References
1. Karamardian S: Complementarity problems over cones with monotone and pseudomonotone maps. J. Optim. Theory Appl. 1976, 18: 445–454. 10.1007/BF00932654
2. Dirkse SP, Ferris MC: MCPLIB: a collection of nonlinear mixed complementarity problems. Optim. Methods Softw. 1995, 5: 319–345. 10.1080/10556789508805619
3. El-Hawary ME: Optimal Power Flow: Solution Techniques, Requirement and Challenges. IEEE Service Center, Piscataway; 1996.
4. Meintjes K, Morgan AP: A methodology for solving chemical equilibrium systems. Appl. Math. Comput. 1987, 22: 333–361. 10.1016/0096-3003(87)90076-2
5. Wood AJ, Wollenberg BF: Power Generation, Operation, and Control. Wiley, New York; 1996.
6. Bertsekas DP: Nonlinear Programming. Athena Scientific, Belmont; 1995.
7. Dennis JE, Schnabel RB: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Englewood Cliffs; 1983.
8. Ortega JM, Rheinboldt WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, San Diego; 1970.
9. Polyak BT: Introduction to Optimization. Optimization Software, Inc. Publications Division, New York; 1987.
10. Tong XJ, Qi L: On the convergence of a trust-region method for solving constrained nonlinear equations with degenerate solution. J. Optim. Theory Appl. 2004, 123: 187–211.
11. Zhang JL, Wang Y: A new trust region method for nonlinear equations. Math. Methods Oper. Res. 2003, 58: 283–298. 10.1007/s001860300302
12. Fan JY, Yuan YX: On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption. Computing 2005, 74: 23–39. 10.1007/s00607-004-0083-1
13. Fan JY: Convergence rate of the trust region method for nonlinear equations under local error bound condition. Comput. Optim. Appl. 2006, 34: 215–227. 10.1007/s10589-005-3078-8
14. Fan JY, Pan JY: An improved trust region algorithm for nonlinear equations. Comput. Optim. Appl. 2011, 48: 59–70. 10.1007/s10589-009-9236-7
15. Solodov MV, Svaiter BF: A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: Piecewise Smooth, Semismooth and Smoothing Methods. Edited by: Fukushima M, Qi L. Kluwer Academic, Dordrecht; 1998:355–369.
16. Wang CW, Wang YJ: A superlinearly convergent projection method for constrained systems of nonlinear equations. J. Glob. Optim. 2009, 44: 283–296. 10.1007/s10898-008-9324-8
17. Zarantonello EH: Projections on Convex Sets in Hilbert Spaces and Spectral Theory. Academic Press, New York; 1971.
18. Zhou GL, Toh KC: Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optim. Theory Appl. 2005, 125: 205–221. 10.1007/s10957-004-1721-7
Acknowledgements
The author wishes to thank the anonymous referees for their suggestions and comments. This work was supported by the Educational Science Foundation of Chongqing, Chongqing of China (Grant No. KJ111309).
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Zheng, L. A superlinearly convergent hybrid algorithm for systems of nonlinear equations. J Inequal Appl 2012, 180 (2012). https://doi.org/10.1186/1029-242X-2012-180