
# The convergence theorem for fourth-order super-Halley method in weaker conditions

## Abstract

In this paper, we establish a Newton-Kantorovich convergence theorem for a fourth-order super-Halley method, used to solve nonlinear equations, under weaker conditions in Banach spaces. Finally, some examples are provided to show the application of our theorem.

## 1 Introduction

For a number of problems arising in scientific and engineering areas, one often needs to find solutions of nonlinear equations in Banach spaces

$$F(x)=0,$$
(1)

where F is a third-order Fréchet-differentiable operator defined on a convex subset Ω of a Banach space X with values in a Banach space Y.

There are various methods to find a solution of equation (1). Generally, iterative methods are used to solve this problem [1]. The best-known iterative method is Newton's method

$$x_{n+1}=x_{n}-F'(x_{n})^{-1}F(x_{n}),$$
(2)

which converges quadratically. Recently, much research has been carried out to provide improvements. Third-order iterative methods such as Halley's method, Chebyshev's method, the super-Halley method, Chebyshev-like methods, etc. [2–12] are used to solve equation (1). To improve the convergence order, fourth-order iterative methods have also been discussed in [13–19].
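For a scalar equation, iteration (2) can be sketched in a few lines of Python; the test equation $$x^{2}-2=0$$ and all names below are our own illustrative choices, not part of the paper:

```python
# Newton's method (2) for a scalar equation F(x) = 0.
# Illustrative sketch: F(x) = x^2 - 2, whose positive root is sqrt(2).
def newton(F, dF, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)      # F'(x_n)^{-1} F(x_n)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Starting from $$x_{0}=1$$, the iterates reach machine precision in a handful of steps, reflecting the quadratic convergence.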

Kou et al. [20] presented a variant of the super-Halley method which improves its order from three to four by using the value of the second derivative at $$( x_{n}-\frac{1}{3}f(x_{n})/f'(x_{n}) )$$ instead of at $$x_{n}$$. Wang et al. [15] established the semilocal convergence of the fourth-order super-Halley method in Banach spaces by using recurrence relations. In Banach spaces, this method can be written as

$$x_{n + 1}= x_{n} - \biggl[I+\frac{1}{2}K_{F}(x_{n}) \bigl[I-K_{F}(x_{n}) \bigr]^{-1} \biggr] \Gamma_{n} F(x_{n}),$$
(3)

where $$\Gamma_{n}=[F'(x_{n})]^{-1}$$, $$K_{F}(x_{n})=\Gamma_{n}F''(u_{n})\Gamma_{n}F(x_{n})$$, and $$u_{n}=x_{n}-\frac{1}{3}\Gamma_{n}F(x_{n})$$.
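For a scalar equation the iteration (3) can be sketched directly from these formulas. The sketch below assumes the definitions of $$u_{n}$$ and $$K_{F}(x_{n})$$ above; the test equation $$x^{3}-2=0$$ is our own illustrative choice:

```python
# A scalar sketch of the fourth-order super-Halley iteration (3):
#   u_n = x_n - F(x_n)/(3 F'(x_n)),   K_F = F''(u_n) F(x_n) / F'(x_n)^2,
#   x_{n+1} = x_n - [1 + K_F / (2 (1 - K_F))] F(x_n)/F'(x_n).
def super_halley4(F, dF, d2F, x0, tol=1e-14, max_iter=30):
    x = x0
    for _ in range(max_iter):
        fx, dfx = F(x), dF(x)
        if abs(fx) < tol:
            break
        step = fx / dfx              # Gamma_n F(x_n)
        u = x - step / 3.0           # u_n
        Kf = d2F(u) * fx / dfx ** 2  # K_F(x_n)
        x = x - (1.0 + 0.5 * Kf / (1.0 - Kf)) * step
    return x

# Illustration on x^3 - 2 = 0 (our own test equation).
r = super_halley4(lambda x: x**3 - 2, lambda x: 3 * x**2,
                  lambda x: 6 * x, 1.0)
```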

Let $$x_{0} \in\Omega$$ and the nonlinear operator $$F: \Omega\subset X\rightarrow Y$$ be continuously third-order Fréchet differentiable, where Ω is an open set and X and Y are Banach spaces. Assume that

1. (C1)

$$\Vert \Gamma_{0}F(x_{0})\Vert \leq\eta$$,

2. (C2)

$$\Vert \Gamma_{0} \Vert \leq\beta$$,

3. (C3)

$$\Vert F^{\prime\prime}(x) \Vert \leq M, x \in\Omega$$,

4. (C4)

$$\Vert F'''(x)\Vert \leq N, x \in\Omega$$,

5. (C5)

there exists a positive real number L such that

$$\bigl\Vert F^{\prime\prime\prime}(x)- F^{\prime\prime\prime}(y) \bigr\Vert \leq L \Vert x-y \Vert ,\quad \forall x, y \in\Omega.$$

Under the above assumptions, in [21] we applied majorizing functions to prove the semilocal convergence of the method (3) for solving nonlinear equations in Banach spaces and established its convergence theorem. The main result is as follows.

### Theorem 1

[21]

Let X and Y be two Banach spaces and $$F:\Omega\subseteq X \rightarrow Y$$ be third-order Fréchet differentiable on a non-empty open convex subset Ω. Assume that conditions (C1)-(C5) hold, $$x_{0} \in \Omega$$, $$h=K \beta\eta\leq1/2$$, and $$\overline{B(x_{0}, t^{*})} \subset\Omega$$. Then the sequence $$\{x_{n}\}$$ generated by the method (3) is well defined, $$x_{n} \in \overline{B(x_{0}, t^{*})}$$ and converges to the unique solution $$x^{*}\in B(x_{0}, t^{**})$$ of $$F(x)=0$$, and $$\Vert x_{n}-x^{*}\Vert \leq t^{*}-t_{n}$$, where

\begin{aligned} &t^{*} =\frac{1-\sqrt{1-2h}}{h}\eta,\qquad t^{**} = \frac{1+\sqrt{1-2h}}{h} \eta, \\ & K \geq M \biggl[ 1 + \frac{N}{M^{2} \beta} +\frac{35 L}{36 M^{3} \beta^{2}} \biggr]. \end{aligned}
(4)

We note that the conditions of Theorem 1 fail for some quite ordinary nonlinear operator equations. For example, consider

$$F(x)=\frac{1}{6}x^{3}+\frac{1}{6}x^{2}- \frac{5}{6}x+\frac{1}{3}=0.$$
(5)

Let the initial point $$x_{0}=0$$, $$\Omega= [-1, 1]$$. Then we know

\begin{aligned} &\beta= \bigl\Vert F'(x_{0})^{-1} \bigr\Vert = \frac{6}{5}, \qquad\eta= \bigl\Vert F'(x_{0})^{-1}F(x_{0}) \bigr\Vert =\frac{2}{5}, \\ &M=\frac{4}{3},\qquad N=1,\qquad L=0. \end{aligned}

From (4), we can get $$K \geq M$$, so

$$h=K \beta\eta\geq M \beta\eta=\frac{16}{25}> \frac{1}{2}.$$

Thus the conditions of Theorem 1 are not satisfied, and we cannot conclude whether the sequence $$\{x_{n}\}$$ generated by the method (3) converges to the solution $$x^{*}$$.
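This computation can be reproduced in exact rational arithmetic (a sketch; the variable names are ours):

```python
from fractions import Fraction as Fr

# Exact check that Theorem 1 fails for equation (5) with x0 = 0 on [-1, 1].
dF0  = Fr(-5, 6)        # F'(0) = -5/6
beta = abs(1 / dF0)     # |F'(0)^{-1}| = 6/5
eta  = beta * Fr(1, 3)  # |F'(0)^{-1} F(0)|, since F(0) = 1/3
M    = Fr(4, 3)         # max over [-1,1] of |F''(x)| = |x + 1/3|
h    = M * beta * eta   # lower bound for K*beta*eta, since K >= M by (4)
```

The result is $$h \geq 16/25 > 1/2$$, exactly as claimed.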

In this paper, we consider weaker conditions and establish a new Newton-Kantorovich convergence theorem. The paper is organized as follows: in Section 2 the convergence analysis based on the weaker conditions is given; in Section 3 a new Newton-Kantorovich convergence theorem is established; in Section 4 some numerical examples are worked out. We finish the work with conclusions and references.

## 2 Analysis of convergence

Let $$x_{0} \in\Omega$$ and let the nonlinear operator $$F: \Omega\subset X\rightarrow Y$$ be continuously third-order Fréchet differentiable, where Ω is an open set and X and Y are Banach spaces. We assume that:

1. (C6)

$$\Vert \Gamma_{0}F(x_{0})\Vert \leq\eta$$,

2. (C7)

$$\Vert F'(x_{0})^{-1}F^{\prime\prime}(x_{0}) \Vert \leq\gamma$$,

3. (C8)

$$\Vert F'(x_{0})^{-1}F'''(x)\Vert \leq N, x \in\Omega$$,

4. (C9)

there exists a positive real number L such that

$$\bigl\Vert F'(x_{0})^{-1} \bigl[F^{\prime\prime\prime}(x)- F^{\prime\prime\prime}(y) \bigr] \bigr\Vert \leq L \Vert x-y \Vert ,\quad \forall x, y \in\Omega.$$
(6)

Denote

$$g(t)=\frac{1}{6}Kt^{3}+\frac{1}{2}\gamma t^{2}-t+\eta,$$
(7)

where $$K, \gamma, \eta$$ are positive real numbers and

$$\frac{5L}{12N \eta+ 36\gamma}+N \leq K.$$
(8)

### Lemma 1

[19]

Let $$\alpha=\frac{2}{\gamma+\sqrt{\gamma^{2}+2K}}, \beta=\alpha-\frac{1}{6}K \alpha^{3}-\frac{1}{2}\gamma \alpha^{2}=\frac{2 (\gamma+2\sqrt{\gamma^{2}+2K} )}{3 (\gamma+\sqrt{\gamma^{2}+2K} )^{2}}$$. If $$\eta\leq \beta$$, then the equation $$g(t)=0$$ has two positive real roots $$r_{1}, r_{2}$$ ($$r_{1}\leq r_{2}$$) and a negative root $$-r_{0}$$ ($$r_{0}>0$$).
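Lemma 1 can be checked numerically for sample parameters of our own choosing (here $$K=1$$, $$\gamma=1/2$$, $$\eta=2/5$$, for which $$\eta\leq\beta$$ holds):

```python
import numpy as np

# Quantities of Lemma 1 for illustrative parameters (our own choice).
K, gamma, eta = 1.0, 0.5, 0.4
s = np.sqrt(gamma**2 + 2 * K)                       # sqrt(gamma^2 + 2K) = 1.5
alpha = 2.0 / (gamma + s)
beta = 2.0 * (gamma + 2.0 * s) / (3.0 * (gamma + s)**2)

# The closed form of beta agrees with its defining expression.
check = alpha - K * alpha**3 / 6.0 - gamma * alpha**2 / 2.0

# Since eta <= beta, g(t) = K t^3/6 + gamma t^2/2 - t + eta
# should have roots -r0 < 0 < r1 <= r2.
roots = np.sort(np.roots([K / 6.0, gamma / 2.0, -1.0, eta]).real)
```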

### Lemma 2

Let $$r_{1}, r_{2}, -r_{0}$$ be the three roots of $$g(t)=0$$ with $$r_{1} \leq r_{2}$$ and $$r_{0}>0$$. Write $$u=r_{0}+t$$, $$a=r_{1}-t$$, $$b=r_{2}-t$$, and

$$q(t)=\frac{{b - u}}{{a - u}} \cdot\frac{{(a - u)b^{3} + u(u - a)b^{2} + u^{2} (a - u)b - au^{3} }}{{(b - u)a^{3} + u(u - b)a^{2} + u^{2} (b - u)a - bu^{3} }} .$$
(9)

Then, for $$0\leq t\leq r_{1}$$, we have

$$q(0)\leq q(t) \leq q(r_{1})\leq1.$$
(10)

### Proof

Since $$g(t)=\frac{K}{6}abu$$ and $$g''(t)\geq 0$$ ($$t\geq0$$), we have

$$u-a-b \geq0.$$
(11)

Differentiating q and noticing $$q'(t)\geq0$$ ($$0\leq t\leq r_{1}$$), we obtain

$$q(0)\leq q(t)\leq q(r_{1}).$$
(12)

On the other hand, since

$$q(t)-1 \leq0,$$

we get $$q(r_{1})\leq1$$, and the lemma is proved. □

Now we consider the majorizing sequences $$\{t_{n}\}$$, $$\{ s_{n} \}$$ $$(n\geq0)$$ with $$t_{0}=0$$:

$$\textstyle\begin{cases} s_{n} = t_{n} - \frac{{g(t_{n} )}}{{g'(t_{n} )}}, \\ h_{n} = - g'(t_{n} )^{ - 1} g''(r_{n} ) (s_{n} - t_{n} ), \\ t_{n + 1} = t_{n} - [ {1 + \frac{1}{2} \frac{{h_{n} }}{{1 - h_{n} }}} ]\frac{{g(t_{n} )}}{{g'(t_{n} )}} , \end{cases}$$
(13)

where $$r_{n}=t_{n}+\frac{1}{3}(s_{n}-t_{n})$$.
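The behavior of (13) can be observed numerically; the parameters below (those of Example 1 in Section 4) are one admissible choice, and the sequence increases monotonically to the smallest positive root $$r_{1}$$ of $$g$$:

```python
import numpy as np

# Majorizing sequence (13) for sample parameters K = 6/5, gamma = eta = 2/5.
K, gamma, eta = 1.2, 0.4, 0.4
g   = lambda t: K / 6 * t**3 + gamma / 2 * t**2 - t + eta
dg  = lambda t: K / 2 * t**2 + gamma * t - 1.0
d2g = lambda t: K * t + gamma

ts = [0.0]
for _ in range(10):
    t = ts[-1]
    s = t - g(t) / dg(t)                   # s_n
    r = t + (s - t) / 3.0                  # r_n = t_n + (s_n - t_n)/3
    h = -d2g(r) * (s - t) / dg(t)          # h_n
    ts.append(t - (1.0 + 0.5 * h / (1.0 - h)) * g(t) / dg(t))

# Smallest positive root r1 of g, for comparison.
r1 = min(x for x in np.roots([K / 6, gamma / 2, -1.0, eta]).real if x > 0)
```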

### Lemma 3

Let $$g(t)$$ be defined by (7) and let the condition $$\eta\leq\beta$$ hold. Then we have

$$\frac{{(\sqrt[3]{\lambda_{2}} \theta)^{4^{n} } }}{{\sqrt[3]{\lambda_{2}} - (\sqrt[3]{\lambda_{2}} \theta)^{4^{n} } }}(r_{2} - r_{1} ) \le r_{1} - t_{n} \le\frac{{(\sqrt[3]{\lambda_{1}} \theta)^{4^{n} } }}{{\sqrt[3]{\lambda_{1}} - (\sqrt[3]{\lambda_{1}} \theta)^{4^{n} } }}(r_{2} - r_{1} ),\quad n = 0,1, \ldots,$$
(14)

where $$\theta = \frac{{r_{1} }}{{r_{2} }}, \lambda_{1} = q(r_{1} ), \lambda_{2} = q(0)$$.

### Proof

Let $$a_{n}=r_{1}-t_{n}, b_{n}=r_{2}-t_{n}, u_{n}=r_{0}+t_{n}$$, then

\begin{aligned} & g(t_{n} ) = \frac{K}{6}a_{n} b_{n} u_{n}, \end{aligned}
(15)
\begin{aligned} &g'(t_{n} ) = -{\frac{K}{6}} [ {a_{n} u_{n} + b_{n} u_{n} - a_{n} b_{n} } ], \end{aligned}
(16)
\begin{aligned} &g''(t_{n} ) = { \frac{K}{3}} [ {u_{n} - b_{n} - a_{n} } ]. \end{aligned}
(17)

Write $$\varphi(t_{n})= a_{n} u_{n} + b_{n} u_{n} - a_{n} b_{n}$$, then we have

\begin{aligned} & a_{n + 1} = a_{n} + \biggl[ {1 + \frac{1}{2}\frac{{h_{n} }}{{1 - h_{n} }}} \biggr]\frac{{g(t_{n} )}}{{g'(t_{n} )}} \\ &\phantom{a_{n + 1}} = \frac{{a_{n}^{4} (b_{n} - u_{n} ) [ {(a_{n} - u_{n} )b_{n}^{3} + u_{n} (u_{n} - a_{n} )b_{n}^{2} + u_{n}^{2} (a_{n} - u_{n} )b_{n} - a_{n} u_{n}^{3} } ]}}{\varphi^{2} (t_{n} )(a_{n}^{2} u_{n}^{2} + b_{n}^{2} u_{n}^{2} + a_{n}^{2} b_{n}^{2} ) - 2a_{n}^{2} b_{n}^{2} \varphi(t_{n} )}, \end{aligned}
(18)
\begin{aligned} & b_{n + 1}= b_{n} + \biggl[ {1 + \frac{1}{2}\frac{{h_{n} }}{{1 - h_{n} }}} \biggr]\frac{{g(t_{n} )}}{{g'(t_{n} )}} \\ &\phantom{b_{n + 1}} = \frac{{b_{n}^{4} (a_{n} - u_{n} ) [ {(b_{n} - u_{n} )a_{n}^{3} + u_{n} (u_{n} - b_{n} )b_{n}^{2} + u_{n}^{2} (b_{n} - u_{n} )a_{n} - b_{n} u_{n}^{3} } ]}}{\varphi^{2} (t_{n} )(a_{n}^{2} u_{n}^{2} + b_{n}^{2} u_{n}^{2} + a_{n}^{2} b_{n}^{2} ) - 2a_{n}^{2} b_{n}^{2} \varphi(t_{n} )}. \end{aligned}
(19)

We can obtain

$$\frac{{a_{n + 1} }}{{b_{n + 1} }} = \frac{{b_{n} - u_{n} }}{{a_{n} - u_{n} }} \cdot\frac{{(a_{n} - u_{n} )b_{n}^{3} + u_{n} (u_{n} - a_{n} )b_{n}^{2} + u_{n}^{2} (a_{n} - u_{n} )b_{n} - a_{n} u_{n}^{3} }}{{(b_{n} - u_{n} )a_{n}^{3} + u_{n} (u_{n} - b_{n} )a_{n}^{2} + u_{n}^{2} (b_{n} - u_{n} )a_{n} - b_{n} u_{n}^{3} }} \cdot \biggl( {\frac{{a_{n} }}{{b_{n} }}} \biggr)^{4}.$$
(20)

From Lemma 2, we have $$\lambda_{2}\leq q(t)\leq\lambda_{1}$$. Thus

$$\frac{{a_{n} }}{{b_{n} }} \le\lambda_{1} \biggl( { \frac{{a_{n - 1} }}{{b_{n - 1} }}} \biggr)^{4} \le \cdots \le ( {\lambda_{1}} )^{1 + 4 + \cdots + 4^{n - 1} } \biggl( {\frac{{a_{0} }}{{b_{0} }}} \biggr)^{4^{n} } = \frac{1}{{\sqrt[3]{\lambda_{1}} }} \bigl( {\sqrt[3]{\lambda_{1}} \theta} \bigr)^{4^{n} }.$$
(21)

In a similar way,

$$\frac{{a_{n} }}{{b_{n} }} \ge\frac{1}{{\sqrt[3]{\lambda_{2}} }} \bigl( {\sqrt[3]{ \lambda_{2}} \theta} \bigr)^{4^{n} }.$$
(22)

That completes the proof of the lemma. □

### Lemma 4

Suppose $$\{t_{n}\}, \{s_{n}\}$$ are generated by (13). If $$\eta< \beta$$, then the sequences $$\{t_{n}\}, \{s_{n}\}$$ are increasing, converge to $$r_{1}$$, and satisfy

$$0\leq t_{n} \leq s_{n} \leq t_{n+1} < r_{1}.$$
(23)

### Proof

Let

\begin{aligned} &U(t) = t - g'(t)^{ - 1} g(t), \\ &H(t) = \bigl( {g'(t)^{ - 1} } \bigr)^{2} g''(T)g(t), \\ &V(t) = t + \biggl[ {1 + \frac{1}{2}\frac{{H(t)}}{{1 - H(t)}}} \biggr] \bigl( {U(t) - t} \bigr), \end{aligned}
(24)

where $$T= (2t+U(t) ) /3$$.

When $$0 \leq t \leq r_{1}$$, we can obtain $$g(t)\geq0$$, $$g'(t)< 0$$, $$g''(t)> 0$$. Hence

$$U(t)=t-\frac{g(t)}{g'(t)} \geq t \geq0.$$
(25)

So $$\forall t \in[0,r_{1}]$$, we always have $$U(t)\geq t$$.

Since $$T=\frac{2t+U(t)}{3}\geq t \geq0$$, we have

$$H(t)= \frac{g''(T)g(t)}{g'(t)^{2}} \geq0.$$
(26)

On the other hand, $$g''(T)g(t)-g'(t)^{2}< 0$$, so

$$0\leq H(t)= \frac{g''(T)g(t)}{g'(t)^{2}}< 1.$$
(27)

Thus

$$V(t)=U(t)+\frac{1}{2}\frac{{H(t)}}{{1 - H(t)}} \bigl( {U(t) - t} \bigr) \geq0,$$

and $$\forall t \in[0, r_{1}]$$, we always have $$V(t)\geq U(t)$$.

Since

\begin{aligned} V'(t)={}& \frac{g(t) [ 3g'(t)^{2} ( {g''(T) - g''(t)} ) ( {g''(T)g(t) - 2g'(t)^{2} } )- Kg(t)^{2} g'(t)g''(t) }{{6g'(t)^{2} [ {g''(T)g(t) - g'(t)^{2} } ]^{2} }} \\ &{} +\frac{ - Kg(t)^{2} g''(t)g'(t) + 3g(t)^{2} g''(t)g''(T) ]}{{6g'(t)^{2} [ {g''(T)g(t) - g'(t)^{2} } ]^{2} }} \\ ={}& \frac{{g(t) [ { - Kg(t)^{2} g'(t)g''(t) - Kg(t)^{2} g''(t)g'(t) + 3g(t)^{2} g''(t)g''(T)} ]}}{{6g'(t)^{2} [ {g''(T)g(t) - g'(t)^{2} } ]^{2} }}, \end{aligned}
(28)

we know $$V'(t) > 0$$ for $$0\leq t \leq r_{1}$$; that is, $$V(t)$$ is monotonically increasing. Using this, we prove by induction that

$$0\leq t_{n}\leq s_{n} \leq t_{n+1} < V(r_{1})=r_{1}.$$
(29)

In fact, (29) is obviously true for $$n=0$$. Assume that (29) holds for some n. Since $$t_{n+1}< r_{1}$$, the terms $$s_{n+1}, t_{n+2}$$ are well defined and $$t_{n+1}\leq s_{n+1}\leq t_{n+2}$$. On the other hand, by the monotonicity of $$V(t)$$, we also have

$$t_{n+2}=V(t_{n+1})< V(r_{1})=r_{1}.$$

Thus, (29) also holds for $$n+1$$.

From Lemma 3, we can see that $$\{t_{n}\}$$ converges to $$r_{1}$$. That completes the proof of the lemma. □

### Lemma 5

Assume F satisfies conditions (C6)-(C9). Then for every $$x \in\overline{B(x_{0}, r_{1})}$$, $$F'(x)^{-1}$$ exists and the following inequalities hold:

1. (I)

$$\Vert F'(x_{0})^{-1}F''(x)\Vert \leq g''(\Vert x-x_{0}\Vert )$$,

2. (II)

$$\Vert F'(x)^{-1}F'(x_{0})\Vert \leq -g'(\Vert x-x_{0}\Vert )^{-1}$$.

### Proof

(I) From the above assumptions, we have

\begin{aligned} \bigl\Vert F'(x_{0})^{-1}F''(x) \bigr\Vert &= \bigl\Vert F'(x_{0})^{-1}F''(x_{0})+F'(x_{0})^{-1} \bigl[F''(x)-F''(x_{0}) \bigr] \bigr\Vert \\ &\leq\gamma+ N\Vert x-x_{0}\Vert \leq\gamma+ K\Vert x-x_{0}\Vert \\ &= g'' \bigl(\Vert x-x_{0}\Vert \bigr). \end{aligned}

(II) When $$t\in[0,r_{1})$$, we know $$g'(t)<0$$. Hence when $$x \in \overline{B(x_{0}, r_{1})}$$,

\begin{aligned} &\bigl\Vert F'(x_{0})^{-1}F'(x)-I \bigr\Vert \\ &\quad= \bigl\Vert F'(x_{0})^{-1} \bigl[F'(x)-F'(x_{0})-F''(x_{0}) (x-x_{0})+F''(x_{0}) (x-x_{0}) \bigr] \bigr\Vert \\ &\quad\le \biggl\Vert { \int_{0}^{1} {F'(x_{0} )^{ - 1} \bigl[ {F'' \bigl( {x_{0} + t(x - x_{0} )} \bigr) - F''(x_{0} )} \bigr]\,dt(x - x_{0} )} } \biggr\Vert + \gamma \Vert {x - x_{0} } \Vert \\ &\quad\le \biggl\Vert { \int_{0}^{1} {Nt\,dt(x - x_{0} )^{2} } } \biggr\Vert + \gamma \Vert {x - x_{0} } \Vert \le\frac{1}{2}K\Vert {x - x_{0} } \Vert ^{2} + \gamma \Vert {x - x_{0} } \Vert \\ &\quad=1+g' \bigl(\Vert x-x_{0}\Vert \bigr)< 1. \end{aligned}

By the Banach lemma, we know $$(F'(x_{0})^{-1}F'(x) )^{-1}=F'(x)^{-1}F'(x_{0})$$ exists and

$$\bigl\Vert {F'(x)^{ - 1} F'(x_{0} )} \bigr\Vert \le\frac{1}{{1 - \Vert {I - F'(x_{0} )^{ - 1} F'(x)} \Vert }} \le - g' \bigl( {\Vert {x - x_{0} } \Vert } \bigr)^{ - 1}.$$

That completes the proof of the lemma. □

### Lemma 6

[21]

Assume that the nonlinear operator $$F:\Omega\subset X \rightarrow Y$$ is continuously third-order Fréchet differentiable, where Ω is an open set and X and Y are Banach spaces. The sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$ are generated by (3). Then we have

\begin{aligned} F(x_{n+1})={}& \frac{1}{2}F^{\prime\prime}(u_{n}) (x_{n+1}-y_{n})^{2}+\frac {1}{6}F'''(x_{n}) (x_{n+1}-y_{n}) (x_{n+1}-x_{n})^{2} \\ &{}- \frac{1}{6} \int^{1}_{0} \biggl[F''' \biggl(x_{n}+\frac{1}{3}t(y_{n}-x_{n}) \biggr)-F'''(x_{n}) \biggr]\,dt(y_{n}-x_{n}) (x_{n+1}-x_{n})^{2} \\ &{}+\frac{1}{2} \int^{1}_{0} \bigl[F''' \bigl(x_{n}+t(x_{n+1}-x_{n}) \bigr)-F'''(x_{n}) \bigr](1-t)^{2}\,dt(x_{n+1}-x_{n})^{3} , \end{aligned}

where $$y_{n}=x_{n}-\Gamma_{n}F(x_{n})$$ and $$u_{n}=x_{n}+\frac{1}{3}(y_{n}-x_{n})$$.

## 3 Newton-Kantorovich convergence theorem

Now we give a theorem that establishes, under the weaker conditions, the semilocal convergence of the method (3), the existence and uniqueness of the solution, the domain in which it is located, and a priori error bounds, which lead to an R-order of convergence of at least four for the iteration (3).

### Theorem 2

Let X and Y be two Banach spaces, and let $$F:\Omega\subseteq X \rightarrow Y$$ be third-order Fréchet differentiable on a non-empty open convex subset Ω. Assume that conditions (C6)-(C9) hold and $$x_{0} \in\Omega$$. If $$\eta< \beta$$ and $$\overline{B(x_{0}, r_{1})}\subset\Omega$$, then the sequence $$\{ x_{n}\}$$ generated by (3) is well defined, $$x_{n} \in\overline{B(x_{0}, r_{1})}$$, and converges to the unique solution $$x^{*} \in B(x_{0}, \alpha)$$ of $$F(x)=0$$, with $$\Vert x_{n}-x^{*}\Vert \leq r_{1}-t_{n}$$. Further, we have

$$\bigl\Vert x_{n}-x^{*} \bigr\Vert \leq \frac{{(\sqrt[3]{\lambda_{1}} \theta)^{4^{n} } }}{{\sqrt[3]{\lambda_{1}} - (\sqrt[3]{\lambda_{1}} \theta)^{4^{n} } }}(r_{2} - r_{1} ),\quad n = 0,1, \ldots,$$
(30)

where $$\theta=\frac{{r_{1} }}{{r_{2} }}, \lambda_{1} = q(r_{1} )$$, $$\alpha=\frac{2}{\gamma+\sqrt{\gamma^{2}+2K}}$$.

### Proof

We prove the following assertions by induction:

$$(I_{n})$$ $$x_{n} \in\overline{B(x_{0}, t_{n})}$$,

$$(II_{n})$$ $$\Vert F'(x_{n})^{-1}F'(x_{0})\Vert \leq-g'(t_{n})^{-1}$$,

$$(III_{n})$$ $$\Vert F'(x_{0})^{-1}F''(x_{n}) \Vert \leq g''(\Vert x_{n}-x_{0}\Vert )\leq g''(t_{n})$$,

$$(IV_{n})$$ $$\Vert y_{n}-x_{n}\Vert \leq s_{n}-t_{n}$$,

$$(V_{n})$$ $$y_{n}\in\overline{B(x_{0}, s_{n})}$$,

$$(VI_{n})$$ $$\Vert x_{n+1}-y_{n}\Vert \leq t_{n+1}-s_{n}$$.

It is easy to verify that ($$I_{0}$$)-($$VI_{0}$$) hold by the initial conditions. Now assume that $$(I_{k})$$-$$(VI_{k})$$ hold for all integers $$k \leq n$$.

$$(I_{n+1})$$ From the above assumptions, we have

\begin{aligned} \Vert x_{n+1}-x_{0} \Vert & \leq \Vert x_{n+1}-y_{n}\Vert +\Vert y_{n}-x_{n}\Vert +\Vert x_{n}-x_{0} \Vert \\ & \leq(t_{n+1}-s_{n})+(s_{n}-t_{n})+(t_{n}-t_{0})=t_{n+1}. \end{aligned}
(31)

$$(II_{n+1})$$ From (II) of Lemma 5, we can obtain

$$\bigl\Vert {F'(x_{n + 1} )^{ - 1} F'(x_{0} )} \bigr\Vert \le - g' \bigl( { \Vert {x_{n + 1} - x_{0} } \Vert } \bigr)^{ - 1} \le - g'(t_{n + 1} )^{ - 1}.$$
(32)

$$(III_{n+1})$$ From (I) of Lemma 5, we can obtain

$$\bigl\Vert {F'(x_{0} )^{ - 1} F''(x_{n + 1} )} \bigr\Vert \le g'' \bigl( {\Vert {x_{n + 1} - x_{0} } \Vert } \bigr) \le g''(t_{n + 1} ).$$
(33)

$$(IV_{n+1})$$ From Lemmas 5 and 6, we have

\begin{aligned} \bigl\Vert {F'(x_{0} )^{ - 1} F(x_{n + 1} )} \bigr\Vert \le{}& \frac{1}{2}g''(r_{n} ) (t_{n + 1} - s_{n} )^{2} + \frac{1}{6}N(t_{n + 1} - s_{n} ) (t_{n + 1} - t_{n} )^{2} \\ &{} + \frac{L}{{36}}(s_{n} - t_{n} )^{2} (t_{n + 1} - t_{n} )^{2} + \frac{L}{{24}}(t_{n + 1} - t_{n} )^{4} \\ \le{}&\frac{1}{2}g''(r_{n}) (t_{n + 1} - s_{n} )^{2} \\ &{}+ \frac{1}{6} \biggl( N + \frac{L}{{36}}\cdot\frac{(s_{n} - t_{n})^{2}}{t_{n + 1} - s_{n}} +\frac{L}{24} \cdot \frac{(t_{n + 1} - t_{n} )^{2} }{t_{n + 1} - s_{n} } \biggr) ( {t_{n + 1} - s_{n} } ) (t_{n + 1} - t_{n} )^{2} \\ \le{}& \frac{1}{2}g''(r_{n}) (t_{n + 1} - s_{n} )^{2} + \frac{1}{6}K ( {t_{n + 1} - s_{n} } ) (t_{n + 1} - t_{n} )^{2} \\ \le{}& g(t_{n+1}). \end{aligned}
(34)

Thus, we have

\begin{aligned} \Vert y_{n+1}-x_{n+1} \Vert & \leq \bigl\Vert F'(x_{n+1})^{-1}F'(x_{0}) \bigr\Vert \bigl\Vert F'(x_{0})^{-1}F(x_{n+1}) \bigr\Vert \\ &\leq- \frac{g(t_{n+1})}{g'(t_{n+1})}=s_{n+1}-t_{n+1}. \end{aligned}
(35)

$$(V_{n+1})$$ From the above assumptions and (35), we obtain

\begin{aligned} \Vert y_{n+1}-x_{0} \Vert & \leq \Vert y_{n+1}-x_{n+1}\Vert +\Vert x_{n+1}-y_{n}\Vert +\Vert y_{n}-x_{n} \Vert +\Vert x_{n}-x_{0}\Vert \\ & \leq (s_{n+1}-t_{n+1})+(t_{n+1}-s_{n})+(s_{n}-t_{n})+(t_{n}-t_{0})=s_{n+1}, \end{aligned}
(36)

so $$y_{n+1}\in\overline{B(x_{0}, s_{n+1})}$$.

$$(VI_{n+1})$$ Since

\begin{aligned} \bigl\Vert I-K_{F}(x_{n+1}) \bigr\Vert &\geq1- \bigl\Vert K_{F}(x_{n+1}) \bigr\Vert \geq 1- \bigl(-g'(t_{n+1}) \bigr)^{-1}g''(r_{n+1}) (s_{n+1}-t_{n+1}) \\ &=1+g'(t_{n+1})^{-1}g''(r_{n+1}) (s_{n+1}-t_{n+1})=1-h_{n+1}, \end{aligned}

we have

\begin{aligned} \Vert x_{n+2}-y_{n+1}\Vert & \leq \frac{1}{2} \bigl\Vert K_{F}(x_{n+1}) \bigl[I-K_{F}(x_{n+1}) \bigr]^{-1} \bigr\Vert \bigl\Vert F'(x_{n+1})^{-1}F(x_{n+1}) \bigr\Vert \\ & \leq \frac{1}{2}\frac{h_{n+1}}{1-h_{n+1}}\cdot\frac {g(t_{n+1})}{-g'(t_{n+1})}=t_{n+2}-s_{n+1}. \end{aligned}
(37)

Further, we have

$$\Vert x_{n+2}-x_{n+1}\Vert \leq \Vert x_{n+2}-y_{n+1}\Vert +\Vert y_{n+1}-x_{n+1} \Vert \leq t_{n+2}-t_{n+1},$$
(38)

and when $$m > n$$

$$\Vert x_{m}-x_{n}\Vert \leq \Vert x_{m}-x_{m-1}\Vert +\cdots+\Vert x_{n+1}-x_{n} \Vert \leq t_{m}-t_{n}.$$
(39)

It follows that the sequence $$\{x_{n}\}$$ is a Cauchy sequence and hence converges to a limit $$x^{*}$$. Letting $$n \rightarrow\infty$$ in (34), we deduce $$F(x^{*})=0$$. From (39), we also get

$$\bigl\Vert x^{*}-x_{n} \bigr\Vert \leq r_{1}-t_{n}.$$
(40)

Now, we prove the uniqueness. Suppose $$x^{**}$$ is also a solution of $$F(x)=0$$ in $$B(x_{0}, \alpha)$$. By Taylor expansion, we have

$$0=F \bigl(x^{**} \bigr)-F \bigl(x^{*} \bigr)= \int^{1}_{0}F^{\prime} \bigl((1-t)x^{*}+tx^{**} \bigr)\,dt \bigl(x^{**}-x^{*} \bigr).$$
(41)

Since

\begin{aligned} & \biggl\Vert F'(x_{0})^{-1} \int^{1}_{0} \bigl[F' \bigl((1-t)x^{*}+tx^{**} \bigr)-F'(x_{0}) \bigr]\,dt \biggr\Vert \\ &\quad \leq \biggl\Vert F'(x_{0})^{-1} \int^{1}_{0}{ \int^{1}_{0}F'' \bigl[x_{0}+s \bigl(x^{*}-x_{0}+t \bigl(x^{**}-x^{*} \bigr) \bigr) \bigr]}\,ds\,dt \bigl[x^{*}-x_{0}+t \bigl(x^{**}-x^{*} \bigr) \bigr] \biggr\Vert \\ &\quad \leq \int^{1}_{0}{ \int^{1}_{0}g'' \bigl[s \bigl\Vert x^{*}-x_{0}+t \bigl(x^{**}-x^{*} \bigr) \bigr\Vert \bigr]}\,ds\,dt \bigl\Vert x^{*}-x_{0}+t \bigl(x^{**}-x^{*} \bigr) \bigr\Vert \\ &\quad = \int^{1}_{0}g' \bigl( \bigl\Vert \bigl(x^{*}-x_{0} \bigr)+t \bigl(x^{**}-x^{*} \bigr) \bigr\Vert \bigr)\,dt-g'(0) \\ &\quad = \int^{1}_{0}g' \bigl( \bigl\Vert (1-t) \bigl(x^{*}-x_{0} \bigr)+t \bigl(x^{**}-x_{0} \bigr) \bigr\Vert \bigr)\,dt+1 \\ &\quad < \frac{g'(r_{1})+g'(\alpha)}{2}+1 \leq1, \end{aligned}
(42)

we can find that the inverse of $$\int^{1}_{0}F' ((1-t)x^{*}+tx^{**} )\,dt$$ exists, so $$x^{**}=x^{*}$$.

From Lemma 3, we get

$$\bigl\Vert x_{n}-x^{*} \bigr\Vert \leq \frac{{(\sqrt[3]{\lambda_{1}} \theta)^{4^{n} } }}{{\sqrt[3]{\lambda_{1}} - (\sqrt[3]{\lambda_{1}} \theta)^{4^{n} } }}(r_{2} - r_{1} ), \quad n = 0,1, \ldots.$$
(43)

This completes the proof of the theorem. □

## 4 Numerical examples

In this section, we illustrate the previous study with an application to the following nonlinear equations.

### Example 1

Let $$X=Y=R$$, and

$$F(x)=\frac{1}{6}x^{3}+\frac{1}{6}x^{2}- \frac{5}{6}x+\frac{1}{3}=0.$$
(44)

We consider the initial point $$x_{0}=0$$ and $$\Omega= [-1, 1]$$; then we can get

$$\eta=\gamma=\frac{2}{5},\qquad N=\frac{6}{5},\qquad L=0.$$

Hence, from (8), we can take $$K=N=\frac{6}{5}$$ and

$$\beta= \frac{{2 ( {\gamma+ 2\sqrt{\gamma^{2} + 2K} } )}}{{3 ( {\gamma + \sqrt{\gamma^{2} + 2K} } )^{2} }} = \frac{3}{5},\quad \eta< \beta.$$

This means that the hypotheses of Theorem 2 are satisfied, so the sequence $$\{x_{n}\}_{n \geq 0}$$ generated by the method (3) is well defined and converges.
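Since $$\gamma^{2}+2K=64/25$$ is a perfect square here, the check $$\eta<\beta$$ can be reproduced in exact rational arithmetic (a sketch, with our own variable names):

```python
from fractions import Fraction as Fr

# Exact verification of eta < beta for Example 1.
K, gamma, eta = Fr(6, 5), Fr(2, 5), Fr(2, 5)
s = Fr(8, 5)                          # sqrt(gamma^2 + 2K) = sqrt(64/25)
assert s * s == gamma**2 + 2 * K      # the square root is exact here
beta = 2 * (gamma + 2 * s) / (3 * (gamma + s)**2)
```

This yields $$\beta=3/5$$ exactly, so $$\eta=2/5<\beta$$.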

### Example 2

Consider an interesting case as follows:

$$x(s)=1+\frac{1}{4}x(s) \int^{1}_{0}\frac{s}{s+t}x(t)\,dt,$$
(45)

where we have the space $$X=C[0,1]$$ with norm

$$\Vert x \Vert = \mathop{\max } _{0 \le s \le1} \bigl\vert {x(s)} \bigr\vert .$$

This equation arises in the theory of radiative transfer, neutron transport, and the kinetic theory of gases.

Let us define the operator F on X by

$$F(x)=\frac{1}{4}x(s) \int^{1}_{0}{\frac{s}{s+t}x(t)\,dt}-x(s)+1.$$
(46)

Then for $$x_{0}=1$$ we can obtain

\begin{aligned} &N=0, \qquad L=0,\qquad K=0,\qquad \bigl\Vert F'(x_{0})^{-1} \bigr\Vert =1.5304,\qquad \eta=0.2652, \\ &\gamma = \bigl\Vert {F'(x_{0} )^{ - 1} F''(x_{0} )} \bigr\Vert = 1.5304 \times 2 \cdot\frac{1}{4}\mathop{\max} _{0 \le s \le1} \biggl\vert { \int _{0}^{1} {\frac{s}{{s + t}}\,dt} } \biggr\vert = 1.5304 \times\frac{{\ln2}}{2} = 0.5303, \\ &\frac{{2 ( {\gamma + 2\sqrt{\gamma^{2} + 2K} } )}}{{3 ( {\gamma + \sqrt{\gamma^{2} + 2K} } )^{2} }} = 0.9429 > \eta. \end{aligned}

That means that the hypotheses of Theorem 2 are satisfied.
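As a numerical sanity check (not part of the paper's analysis), one can discretize (45) with a midpoint quadrature rule and solve the resulting finite-dimensional system by fixed-point iteration; the grid size and iteration scheme below are our own choices:

```python
import numpy as np

# Nystrom discretization of equation (45) on a midpoint grid.
n = 64
t = (np.arange(n) + 0.5) / n                    # quadrature nodes in (0, 1)
w = 1.0 / n                                     # midpoint weights
Kmat = t[:, None] / (t[:, None] + t[None, :])   # kernel s/(s+t) on the grid

# Fixed-point iteration x <- 1 + (1/4) x * integral(s/(s+t) x(t) dt).
x = np.ones(n)
for _ in range(200):
    x = 1.0 + 0.25 * x * (Kmat @ (w * x))

# Residual of the discrete equation after iterating.
residual = np.max(np.abs(x - (1.0 + 0.25 * x * (Kmat @ (w * x)))))
```

The discrete solution stays above 1 and increases in s, as expected for this equation.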

### Example 3

Consider the problem of finding the minimizer of the chained Rosenbrock function [22]:

$$g(\mathbf{x})=\sum_{i=1}^{m-1} \bigl[4 \bigl(x_{i}-x^{2}_{i+1} \bigr)^{2}+ \bigl(1-x_{i+1} \bigr)^{2} \bigr],\quad \mathbf{x} \in \mathbf{R}^{m} .$$
(47)

For finding the minimum of g, one needs to solve the nonlinear system $$F(\mathbf{x})=0$$, where $$F(\mathbf{x})=\nabla g(\mathbf{x})$$. Here, we apply the method (3) and compare it with the Chebyshev method (CM), the Halley method (HM), and the super-Halley method (SHM).

In the numerical tests, the stopping criterion for each method is $$\Vert \mathbf{x}_{k}-\mathbf{x}^{*}\Vert _{2} \leq 10^{-15}$$, where $$\mathbf{x}^{*}=(1,1,\ldots,1)^{T}$$ is the exact solution. We choose $$m = 30$$ and $$\mathbf{x}_{0} = 1.2\mathbf{x}^{*}$$. Listed in Table 1 are the iterative errors ($$\Vert \mathbf{x}_{k}-\mathbf{x}^{*}\Vert _{2}$$) of the various methods. From Table 1 we see that, in these tests, the method (3) performs best.
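A sketch of this experiment under simplifying assumptions of our own: plain Newton iteration on $$F=\nabla g$$ with a finite-difference Jacobian and $$m=8$$ instead of $$m=30$$, rather than the method (3) and the comparison methods of Table 1:

```python
import numpy as np

def grad_rosen(x):
    """Gradient of the chained Rosenbrock function g (the system F = grad g)."""
    m = len(x)
    F = np.zeros(m)
    for i in range(m - 1):                    # term i couples x_i and x_{i+1}
        r = x[i] - x[i + 1] ** 2
        F[i] += 8.0 * r
        F[i + 1] += -16.0 * x[i + 1] * r - 2.0 * (1.0 - x[i + 1])
    return F

def newton_system(F, x, n_iter=50, h=1e-7):
    """Plain Newton iteration with a central-difference Jacobian."""
    for _ in range(n_iter):
        m = len(x)
        J = np.empty((m, m))
        for j in range(m):                    # Jacobian column j
            e = np.zeros(m)
            e[j] = h
            J[:, j] = (F(x + e) - F(x - e)) / (2.0 * h)
        x = x - np.linalg.solve(J, F(x))
    return x

x = newton_system(grad_rosen, 1.2 * np.ones(8))   # start at 1.2 * x*, as in the paper
```

The iterates approach $$\mathbf{x}^{*}=(1,\ldots,1)^{T}$$; the paper's Table 1 compares the error decay of the four third- and fourth-order methods instead.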

## 5 Conclusions

In this paper, a new Newton-Kantorovich convergence theorem for a fourth-order super-Halley method is established. Compared with [21], the differentiability conditions required here are milder. Finally, some examples are provided to show the application of the convergence theorem.

## References

1. Ortega, JM, Rheinboldt, WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)

2. Ezquerro, JA, Gutiérrez, JM, Hernández, MA, Salanova, MA: Chebyshev-like methods and quadratic equations. Rev. Anal. Numér. Théor. Approx. 28, 23-35 (1999)

3. Candela, V, Marquina, A: Recurrence relations for rational cubic methods I: the Halley method. Computing 44, 169-184 (1990)

4. Candela, V, Marquina, A: Recurrence relations for rational cubic methods II: the Chebyshev method. Computing 45, 355-367 (1990)

5. Amat, S, Busquier, S, Gutiérrez, JM: Third-order iterative methods with applications to Hammerstein equations: a unified approach. J. Comput. Appl. Math. 235, 2936-2943 (2011)

6. Hernández, MA, Romero, N: General study of iterative processes of R-order at least three under weak convergence conditions. J. Optim. Theory Appl. 133, 163-177 (2007)

7. Amat, S, Bermúdez, C, Busquier, S, Plaza, S: On a third-order Newton-type method free of bilinear operators. Numer. Linear Algebra Appl. 17, 639-653 (2010)

8. Argyros, IK, Cho, YJ, Hilout, S: On the semilocal convergence of the Halley method using recurrent functions. J. Appl. Math. Comput. 37, 221-246 (2011)

9. Argyros, IK: On a class of Newton-like methods for solving nonlinear equations. J. Comput. Appl. Math. 228, 115-122 (2009)

10. Parida, PK, Gupta, DK: Recurrence relations for semilocal convergence of a Newton-like method in Banach spaces. J. Math. Anal. Appl. 345, 350-361 (2008)

11. Parida, PK, Gupta, DK: Recurrence relations for a Newton-like method in Banach spaces. J. Comput. Appl. Math. 206, 873-887 (2007)

12. Gutiérrez, JM, Hernández, MA: Recurrence relations for the super-Halley method. Comput. Math. Appl. 36, 1-8 (1998)

13. Ezquerro, JA, Hernández, MA: New iterations of R-order four with reduced computational cost. BIT Numer. Math. 49, 325-342 (2009)

14. Ezquerro, JA, Gutiérrez, JM, Hernández, MA, Salanova, MA: The application of an inverse-free Jarratt-type approximation to nonlinear integral equations of Hammerstein-type. Comput. Math. Appl. 36, 9-20 (1998)

15. Wang, X, Gu, C, Kou, J: Semilocal convergence of a multipoint fourth-order super-Halley method in Banach spaces. Numer. Algorithms 56, 497-516 (2011)

16. Amat, S, Busquier, S, Gutiérrez, JM: An adaptative version of a fourth-order iterative method for quadratic equations. J. Comput. Appl. Math. 191, 259-268 (2006)

17. Wang, X, Kou, J, Gu, C: Semilocal convergence of a class of modified super-Halley methods in Banach spaces. J. Optim. Theory Appl. 153, 779-793 (2012)

18. Hernández, MA, Salanova, MA: Sufficient conditions for semilocal convergence of a fourth order multipoint iterative method for solving equations in Banach spaces. Southwest J. Pure Appl. Math. 1, 29-40 (1999)

19. Wu, Q, Zhao, Y: A Newton-Kantorovich convergence theorem for inverse-free Jarratt method in Banach space. Appl. Math. Comput. 179, 39-46 (2006)

20. Kou, J, Li, Y, Wang, X: A variant of super-Halley method with accelerated fourth-order convergence. Appl. Math. Comput. 186, 535-539 (2007)

21. Zheng, L, Gu, C: Fourth-order convergence theorem by using majorizing functions for super-Halley method in Banach spaces. Int. J. Comput. Math. 90, 423-434 (2013)

22. Powell, MJD: On the convergence of trust region algorithms for unconstrained minimization without derivatives. Comput. Optim. Appl. 53, 527-555 (2012)


## Acknowledgements

This work is supported by the National Natural Science Foundation of China (11371243, 11301001, 61300048), and the Natural Science Foundation of Universities of Anhui Province (KJ2014A003).

## Author information


### Corresponding author

Correspondence to Lin Zheng.

## Additional information

### Competing interests

The author declares no competing interests.

### Author's contributions

Only the author contributed in writing this paper.

## Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


## About this article

### Cite this article

Zheng, L. The convergence theorem for fourth-order super-Halley method in weaker conditions. J Inequal Appl 2016, 289 (2016). https://doi.org/10.1186/s13660-016-1227-5
