On a new semilocal convergence analysis for the Jarratt method
Journal of Inequalities and Applications volume 2013, Article number: 194 (2013)
Abstract
We develop a new semilocal convergence analysis for the Jarratt method. Through our new idea of recurrent functions, we develop new sufficient convergence conditions and tighter error bounds. Numerical examples are also provided in this study.
MSC: 65H10, 65G99, 65J15, 47H17, 49M15.
1 Introduction
In this study, we are concerned with the problem of approximating a locally unique solution {x}^{\star} of the equation
F(x)=0,\phantom{\rule{2em}{0ex}}(1.1)
where F is a Fréchet-differentiable operator defined on a convex subset \mathcal{D} of a Banach space \mathcal{X} with values in a Banach space \mathcal{Y}.
A large number of problems in applied mathematics and also in engineering are solved by finding the solutions of certain equations. For example, dynamic systems are mathematically modeled by difference or differential equations and their solutions usually represent the states of the systems. For the sake of simplicity, assume that a time-invariant system is driven by the equation \dot{x}=Q(x) for some suitable operator Q, where x is the state. Then the equilibrium states are determined by solving equation (1.1). Similar equations are used in the case of discrete systems. The unknowns of engineering equations can be functions (difference, differential and integral equations), vectors (systems of linear or nonlinear algebraic equations) or real or complex numbers (single algebraic equations with single unknowns). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. Iteration methods are also applied for solving optimization problems. In such cases, the iteration sequences converge to an optimal solution of the problem at hand. Since all of these methods have the same recursive structure, they can be introduced and discussed in a general framework.
Many authors have developed high-order methods for generating a sequence approximating {x}^{\star}. A survey of such results can be found in [[1], and the references there] (see also [2–11]). The natural generalization of the Newton method is to apply a multipoint scheme. Suppose that we know the analytic expressions of F({x}_{n}), {F}^{\prime}({x}_{n}) and {F}^{\prime}{({x}_{n})}^{-1} at a recurrent step {x}_{n} for each n\ge 0. In order to increase the order of convergence and to avoid the computation of the second Fréchet-derivative, we can add one more evaluation of F({c}_{1}{x}_{n}+{c}_{2}{y}_{n}) or {F}^{\prime}({c}_{1}{x}_{n}+{c}_{2}{y}_{n}), where {c}_{1} and {c}_{2} are real constants that are independent of {x}_{n} and {y}_{n}, whereas {y}_{n} is generated by a Newton step. A two-point scheme for functions of one variable was found and developed by Ostrowski [11]. Following this idea, we provide a semilocal as well as a local convergence analysis for a fourth-order inverse-free Jarratt-type method (JM) [1, 4] given by
\begin{array}{l}{y}_{n}={x}_{n}-{F}^{\prime}{({x}_{n})}^{-1}F({x}_{n}),\\ {A}_{n}={F}^{\prime}{({x}_{n})}^{-1}({F}^{\prime}({x}_{n}+\frac{2}{3}({y}_{n}-{x}_{n}))-{F}^{\prime}({x}_{n})),\\ {x}_{n+1}={y}_{n}-\frac{3}{4}{A}_{n}(I-\frac{3}{2}{A}_{n})({y}_{n}-{x}_{n})\end{array}
for each n\ge 0. The fourth order of (JM) is the same as that of a two-step Newton method [1], but the computational cost is less: in each step, we save one evaluation of the derivative and the computation of one inverse.
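The structure of one (JM) step can be sketched numerically. The code below is a minimal illustration, assuming the inverse-free Jarratt-type form of [1, 4] that is mirrored by the scalar recursion (2.11): a Newton step, one extra Jacobian evaluation at an intermediate point, and a polynomial correction in the operator {A}_{n}, with no second inverse. The function names and the quadratic test system are our own illustrative choices.

```python
import numpy as np

def jarratt_type(F, J, x0, tol=1e-12, max_iter=25):
    """Sketch of an inverse-free Jarratt-type iteration.

    Per step: one Jacobian factorization (reused three times) and one
    extra Jacobian evaluation, but no second matrix inverse.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))              # Newton step
        d = y - x
        # A_n = F'(x_n)^{-1} (F'(x_n + (2/3)(y_n - x_n)) - F'(x_n))
        A = np.linalg.solve(Jx, J(x + (2.0 / 3.0) * d) - Jx)
        # x_{n+1} = y_n - (3/4) A_n (I - (3/2) A_n)(y_n - x_n)
        x_next = y - 0.75 * (A @ (d - 1.5 * (A @ d)))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Hypothetical smooth test system: F(x) = (x_1^2 - 2, x_2^2 - 3).
F = lambda x: np.array([x[0] ** 2 - 2.0, x[1] ** 2 - 3.0])
J = lambda x: np.diag([2.0 * x[0], 2.0 * x[1]])
root = jarratt_type(F, J, [2.0, 2.0])
```

On this toy system the iteration reaches the root (\sqrt{2},\sqrt{3}) in a handful of steps, consistent with fourth-order behaviour.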
Here, we use our new idea of recurrent functions in order to provide new sufficient convergence conditions, which can be weaker than before [4]. Using this approach, the error bounds and the example on the distances are improved (see Example 3.5 and Remarks 3.6). This new idea can be used on other iterative methods [1].
2 Semilocal convergence analysis of (JM)
We present Theorem 2.1 of [4] in an affine invariant form, since {F}^{\prime}{({x}_{0})}^{-1}F can be used in place of F in the original proof of Theorem 2.1.
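The affine invariance rests on a standard fact: composing F with an invertible linear map leaves the correction {F}^{\prime}{(x)}^{-1}F(x), and hence the iterates, unchanged, so hypotheses phrased through {F}^{\prime}{({x}_{0})}^{-1}F lose no generality. A small sketch (the test system and the map A below are hypothetical choices of ours, and Newton iterates stand in for any scheme built from such corrections):

```python
import numpy as np

def newton_iterates(F, J, x0, steps=5):
    """Collect Newton iterates; used only to illustrate affine invariance."""
    x = np.asarray(x0, dtype=float)
    out = [x.copy()]
    for _ in range(steps):
        x = x - np.linalg.solve(J(x), F(x))
        out.append(x.copy())
    return out

# Hypothetical system and an arbitrary invertible linear map A.
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[1] ** 2 - 2.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [0.0, 2.0 * x[1]]])
A = np.array([[3.0, 1.0], [1.0, 2.0]])       # invertible: det = 5
G = lambda x: A @ F(x)                        # G = A o F
JG = lambda x: A @ J(x)

iters_F = newton_iterates(F, J, [2.0, 2.0])
iters_G = newton_iterates(G, JG, [2.0, 2.0])
# The two iterate sequences coincide (up to rounding), since
# (A J)^{-1} (A F) = J^{-1} F at every point.
```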
Theorem 2.1 Let F:\mathcal{D}\subseteq \mathcal{X}\to \mathcal{Y} be thrice differentiable. Assume that there exist {x}_{0}\in \mathcal{D}, L\ge 0, M\ge 0, N\ge 0 and \eta \ge 0 such that
for each x,y\in \mathcal{D},
and
where {v}^{\star} and {v}^{\star \star} are the zeros of functions
given by
Then the following hold:

(1)
The scalar sequences \{{v}_{n}\} and \{{w}_{n}\} given by
\begin{array}{l}{w}_{n}={v}_{n}-{g}^{\mathrm{\prime}}{({v}_{n})}^{-1}g({v}_{n}),\\ {b}_{n}={g}^{\mathrm{\prime}}{({v}_{n})}^{-1}({g}^{\mathrm{\prime}}({v}_{n}+\frac{2}{3}({w}_{n}-{v}_{n}))-{g}^{\mathrm{\prime}}({v}_{n})),\\ {v}_{n+1}={w}_{n}-\frac{3}{4}{b}_{n}(1-\frac{3}{2}{b}_{n})({w}_{n}-{v}_{n})\end{array}\}(2.11)
for each n\ge 0 are nondecreasing and converge to their common limit {v}^{\star}, so that

(2)
The sequences \{{x}_{n}\} and \{{y}_{n}\} generated by (JM) are well defined, remain in \overline{U}({x}_{0},{v}^{\star}) for all n\ge 0 and converge to a unique solution {x}^{\star}\in \overline{U}({x}_{0},{v}^{\star}) of the equation F(x)=0, which is the unique solution of the equation F(x)=0 in U({x}_{0},{v}^{\star \star}). Moreover, the following estimates hold for all n\ge 0:
(2.13)
where
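The monotone behaviour of the scalar recursion (2.11) is easy to observe numerically. Since the function g of [4] is built from the constants L, M, N and η, the sketch below runs (2.11) on a stand-in: a hypothetical majorizing function g(t)=(t-1)(t-3), whose smallest zero {v}^{\star}=1 plays the role of the common limit of \{{v}_{n}\} and \{{w}_{n}\}.

```python
# Recursion (2.11) for a hypothetical majorizing function
# g(t) = (t - 1)(t - 3); smallest zero v* = 1 is the common limit.
g = lambda t: (t - 1.0) * (t - 3.0)
dg = lambda t: 2.0 * t - 4.0

v, seq = 0.0, [0.0]
for _ in range(10):
    w = v - g(v) / dg(v)                              # Newton step on g
    b = (dg(v + (2.0 / 3.0) * (w - v)) - dg(v)) / dg(v)
    v = w - 0.75 * b * (1.0 - 1.5 * b) * (w - v)
    seq.append(v)
# seq increases monotonically toward v* = 1 from below.
```

Starting from {v}_{0}=0 the iterates 0, 0.9434, 0.99999, … approach 1 from below, illustrating the nondecreasing convergence asserted in part (1).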
Remarks 2.2 The bounds of Theorem 2.1 can be improved under the same hypotheses and computational cost in two cases as follows.
Case 1. Define a function {g}_{0} by
In view of (2.2), there exists {M}_{0}\in [0,M] such that
for all x\in \mathcal{D}. We can find upper bounds on the norms \parallel {F}^{\prime}{(x)}^{-1}{F}^{\prime}({x}_{0})\parallel using {M}_{0}, which is what is actually needed, instead of the constant K used in [4].
Note that
and K/{M}_{0} can be arbitrarily large [1–3]. Using (2.19), it follows that, for any x\in \overline{U}({x}_{0},{v}^{\star}),
It follows from (2.21) and the Banach lemma on invertible operators [1] that {F}^{\prime}{(x)}^{-1} exists and
We can use (2.21) instead of the less precise one used in [4]:
This suggests that more precise scalar majorizing sequences \{{\overline{v}}_{n}\}, \{{\overline{w}}_{n}\} can be used; they are defined as follows for initial iterates {\overline{v}}_{0}=0, {\overline{w}}_{0}=\eta:
A simple induction argument shows that, if {M}_{0}<K, then
and
where
Note also that if {M}_{0}=K, then {\overline{v}}_{n}={v}_{n}, {\overline{w}}_{n}={w}_{n}.
Case 2. In view of the upper bound for \parallel F({x}_{n+1})\parallel obtained in Theorem 2.1 in [4] and (2.21), \{{t}_{n}\}, \{{s}_{n}\} given in (3.9) and (3.10) are also even more precise majorizing sequences for \{{x}_{n}\} and \{{y}_{n}\}. Therefore, if they converge under certain conditions (see Lemma 3.2), then we can produce a new semilocal convergence theorem for (JM) with sufficient convergence conditions or bounds that can be better than the ones of Theorem 2.1 (see also Theorem 3.4 and Example 3.5).
Similar favorable comparisons (due to (2.20)) can be made with other iterative methods of the fourth order [1, 11].
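The gap between the center constant {M}_{0} and the full constant K can be made concrete with a small numerical illustration (the scalar function f(x)={e}^{x} on [0,1] with {x}_{0}=0 is our own hypothetical choice; here {f}^{\prime}({x}_{0})=1, so the affine-invariant constants reduce to plain Lipschitz constants). The Lipschitz constant of {f}^{\prime} on [0,1] is e, while the center constant at {x}_{0} is only e-1; examples in [1–3] push the ratio arbitrarily high.

```python
import math

# Hypothetical illustration: f(x) = exp(x) on D = [0, 1], x0 = 0.
# Full Lipschitz constant of f' on D:  K  = sup |f''| = e
# Center constant at x0:               M0 = sup_x |f'(x) - f'(x0)| / |x - x0|
xs = [i / 1000.0 for i in range(1, 1001)]
K = math.e
M0 = max((math.exp(x) - 1.0) / x for x in xs)   # increasing in x; max = e - 1

ratio = K / M0   # about e / (e - 1), already > 1 in this mild example
```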
3 Semilocal convergence analysis of (JM) using recurrent functions
We show the semilocal convergence of (JM) using recurrent functions. First, we need the following definition.
Definition 3.1 Let L\ge 0, {M}_{0}>0, M>0, N\ge 0 and \eta >0 be given constants. Define the polynomials on [0,+\mathrm{\infty}) for some \alpha >0 by
Moreover, define a scalar {\varphi}_{0} by
The polynomials {f}_{1}, g, {g}_{1} have unique positive roots, denoted by {\varphi}_{{f}_{1}}, {\varphi}_{g} and {\varphi}_{{g}_{1}} (given in explicit form), respectively, by Descartes' rule of signs. Moreover, assume
and
Under the conditions (3.1), (3.2), respectively,
and the polynomial {h}_{1} has a unique positive root {\varphi}_{{h}_{1}}.
Set {\varphi}_{1}=min\{{\varphi}_{{h}_{1}},{\varphi}_{{f}_{1}},{\varphi}_{g},{\varphi}_{{g}_{1}},1\}. Furthermore, assume
If {\varphi}_{1}=1, then assume that (3.3) holds as a strict inequality. From now on, conditions (3.1)-(3.3) constitute the (C) conditions.
We can show the following result on the majorizing sequences for (JM).
Lemma 3.2 Under the (C) conditions, choose
Then the scalar sequences \{{s}_{n}\}, \{{t}_{n}\} given by
are nondecreasing, bounded from above by
and converge to their unique least upper bound {t}^{\star}\in [0,{t}^{\star \star}]. Moreover, the following estimate holds:
where
Proof We show, using induction on k, that
and
The estimate (3.8) holds for k=0 by the choice of α. Moreover, the estimates (3.7) and (3.9) hold for n=0 by (3.5), the choice of {\varphi}_{0} and (3.4). Let us assume that (3.7)-(3.9) hold for all k\le n. We have in turn by the induction hypotheses:
or
or
and
Hence, instead of (3.9), we can show
The estimate (3.8) can be written as
or
So, we can show, instead of (3.8),
or
The estimate (3.11) motivates us to define polynomials {\overline{f}}_{k} on [0,1) (for \varphi =t) by
or, since {t}^{2}\le t for t\in [0,1], define the polynomials {f}_{k} on [0,1) by
We need a relationship between two consecutive polynomials {f}_{k}:
where g and its unique positive root {\varphi}_{g}\in [0,1) are given in Definition 3.1. The estimate (3.11) is true if
or, if
since by (3.14) we have
But (3.16) is true by the definition of {\varphi}_{{f}_{1}} and (3.4). Define
Then we also have
This completes the induction for (3.8). The estimate (3.10) is true if
or
The estimate (3.20) motivates us to define polynomials {h}_{k} on [0,1) by
We need a relationship between two consecutive polynomials {h}_{k}:
and so
where {g}_{1} and the unique positive root {\varphi}_{{g}_{1}} are given in Definition 3.1. The estimate (3.20) is true if
or, if
since
But (3.24) is true by the definition of {\varphi}_{{h}_{1}} and (3.4). Define a function {h}_{\mathrm{\infty}} on [0,1) by
Then we have
This completes the induction for (3.7)-(3.9). It follows that the sequences \{{s}_{n}\} and \{{t}_{n}\} are nondecreasing, bounded from above by {t}^{\star \star} given in closed form by (3.6) and converge to their unique least upper bound {t}^{\star}\in [0,{t}^{\star \star}]. This completes the proof. □
Proposition 3.3 Under the hypotheses of Lemma 3.2, further assume
where
Fix
Define the parameters {p}_{0}, p by
and a function {g}_{3} on [1,1/q) by
Moreover, assume
Then the following estimates hold for all k\ge 0:
Proof We show
If the estimate (3.34) holds, then we have
which implies the second equation in (3.33). We have the estimate
that is, we have
Instead of showing (3.34), we can show
or
or
By the hypothesis (3.32), we have
Assume
We also have
We get in turn
which completes the induction for (3.38). This completes the proof. □
Theorem 3.4 Under the hypotheses (3.1)-(3.5) and (3.23), further assume that the hypotheses of Lemma 3.2 hold and
Then the sequences \{{x}_{n}\} and \{{y}_{n}\} generated by (JM) are well defined, remain in \overline{U}({x}_{0},{t}^{\star}) for all n\ge 0 and converge to a unique solution {x}^{\star} of the equation F(x)=0 in \overline{U}({x}_{0},{t}^{\star}). Moreover, the following estimates hold:
Furthermore, under the hypotheses of Proposition 3.3, the estimates (3.33) also hold. Finally, if R\ge {t}^{\star} is such that
and
then the solution {x}^{\star} is unique in U({x}_{0},R).
Example 3.5 Let \mathcal{X}=\mathcal{Y}={\mathbb{R}}^{2}, \mathcal{D}={[1,3]}^{2}, {x}_{0}={(2,2)}^{T} and define a function F on \mathcal{D} by
Using (2.1)-(2.7), we obtain
and
Hence the conclusions of Theorem 2.1 hold for the equation F(x)=0. Considering the hypotheses of Theorem 3.4, from Lemma 3.2, we have
and, from Definition 3.1, we get
Consequently, from the definition of {\varphi}_{1} (see Definition 3.1), we obtain
and from the definition of {\varphi}_{0} (see Definition 3.1), we obtain
We see that the assumption {\varphi}_{0}<{\varphi}_{1} (see equation (3.3) in Definition 3.1) is also valid. Furthermore, from equation (3.4) (see Lemma 3.2), we choose
From the equation (3.6),
Hence the conditions of Theorem 3.4 are also satisfied. Additionally, to verify the claims about the sequences \{{s}_{n}\} and \{{t}_{n}\} (see equation (3.5)), we produce Table 1.
From Table 1, we observe the following:
⧫ The sequences \{{t}_{n}\} and \{{s}_{n}\} are nondecreasing.
⧫ The sequences \{{t}_{n}\} and \{{s}_{n}\} are bounded from above by {t}^{\star \star}.
⧫ The estimate (3.7) holds.
Let us now compare the bounds between Theorems 2.1 and 3.4. From equation (2.10), we get
From equation (2.11), for {v}_{0}=0, we obtain Table 2.
Comparing Tables 1 and 2, we observe that the bounds of Theorem 3.4 are finer than those of Theorem 2.1. That is, {s}_{n}-{t}_{n}\le {w}_{n}-{v}_{n} for all n=0,1,2,\dots . Considering the hypotheses of Proposition 3.3, we have for q=4
From Table 1 and the preceding data, we note that min\{{t}_{1},{g}_{3}(\eta )\}<{p}_{0}. Consequently, the assumption (3.32) is true. Additionally, to ascertain the estimate (3.33), we form Table 3.
In Table 3, we observe that the estimates (3.33) are also true. Hence the conclusions of Proposition 3.3 also hold for the equation F(x)=0.
Remarks 3.6

(1)
The condition (3.32) can be replaced by the stronger, but easier to check, condition
\frac{2\eta}{2-\delta}\le {p}_{0},\phantom{\rule{2em}{0ex}}(3.44)
for \delta \in I (see (3.13) and (3.21)).
The best possible choice for δ seems to be \delta ={\delta}_{3}. Let
In this case, (3.44) is written as

(2)
The ratio of convergence ‘qη’ given in Proposition 3.3 can be smaller than ‘\sqrt[3]{5}\theta’ given in Theorem 2.1 for q close to \sqrt[3]{b} and M, N, L not all zero and \eta >0.
Set \alpha =\sqrt[3]{b}\eta and \beta =\sqrt[3]{5}\theta. Note that b<K and 40{K}^{3}>b. By comparing α and β, we have
Case 1. If 2.666{M}^{3}+0.444NM-6.740740L\le 0 or 2.666{M}^{3}+0.444NM-6.740740L>0 and \eta >{h}_{0}, then we have
Case 2. If 2.666{M}^{3}+0.444NM-6.740740L>0 and \eta <{h}_{0}, then we have
Case 3. If 0<\eta ={h}_{0}, then we have
Note that the p-Jarratt-type method (p\in [0,1]) given in [8] uses (2.1)-(2.5), but the sufficient convergence conditions are different from the ones given in this study and guarantee only third-order convergence (not the fourth order obtained here) in the case of the Jarratt method (for p=2/3).
4 Conclusions
We developed a semilocal convergence analysis, using recurrent functions, for the Jarratt method to approximate a locally unique solution of a nonlinear equation in a Banach space. A numerical example and some favorable comparisons with previous works are also reported.
References
Argyros IK: Convergence and Applications of Newton-Type Iterations. Springer, New York; 2008.
Argyros IK: On the Newton-Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 2004, 169: 315–332. 10.1016/j.cam.2004.01.029
Argyros IK: A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach space. J. Math. Anal. Appl. 2004, 298: 374–397. 10.1016/j.jmaa.2004.04.008
Argyros IK, Chen D, Qian Q: An inverse-free Jarratt type approximation in a Banach space. Approx. Theory Its Appl. 1996, 12: 19–30.
Argyros IK, Cho YJ, Hilout S: Numerical Methods for Equations and Its Applications. CRC Press, New York; 2012.
Candela S, Marquina A: Recurrence relations for rational cubic methods: I. The Halley method. Computing 1990, 44: 169–184. 10.1007/BF02241866
Candela S, Marquina A: Recurrence relations for rational cubic methods. II. The Chebyshev method. Computing 1990, 45: 355–367. 10.1007/BF02238803
Ezquerro JA, Hernández MA: Avoiding the computation of the second Fréchet-derivative in the convex acceleration of Newton's method. J. Comput. Appl. Math. 1998, 96: 1–12.
Jarratt P: Multipoint iterative methods for solving certain equations. Comput. J. 1965/1966, 8: 398–400.
Jarratt P: Some efficient fourth order multipoint methods for solving equations. Nordisk Tidskr. Informationsbehandling BIT 1969, 9: 119–124.
Ostrowski AM: Solution of Equations in Euclidean and Banach Spaces. Pure and Applied Mathematics 9. Academic Press, New York; 1973. Third edition of Solution of Equations and Systems of Equations.
Acknowledgements
The second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 20120008170).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors jointly worked on the results and they read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Argyros, I.K., Cho, Y.J. & Khattri, S.K. On a new semilocal convergence analysis for the Jarratt method. J Inequal Appl 2013, 194 (2013). https://doi.org/10.1186/1029-242X-2013-194
DOI: https://doi.org/10.1186/1029-242X-2013-194
Keywords
 Jarratt method
 Newton-type methods
 Banach space
 Fréchet-derivative
 majorizing sequence
 recurrent functions