An effective finite element Newton method for 2D p-Laplace equation with particular initial iterative function
Journal of Inequalities and Applications volume 2016, Article number: 281 (2016)
Abstract
In this article, a functional minimum problem equivalent to the p-Laplace equation is introduced, a finite element-Newton iteration formula is established, and a well-posed condition to be satisfied by the iterative functions is provided. Based on the well-posed condition, an effective initial iterative function is presented. Using this particular initial function and Newton iterations with iterative step length equal to 1, an effective sequence of iterative functions is obtained. By the decreasing property of the gradient modulus on each subdivision finite element, it is proved that this function sequence converges to the solution of the finite element formulation of the p-Laplace equation. Moreover, the local convergence rate of the iterative functions is discussed. In summary, the iterative method based on the effective particular initial function not only remedies the shortcoming of the Newton algorithm, which requires an exploratory reduction of the iterative step length, but also retains the benefit of a fast convergence rate, as verified by theoretical analysis and numerical experiments.
Introduction
Let \(\Omega\subset R^{2}\) be a bounded and connected domain. Consider the following p-Laplace equation with a Dirichlet boundary condition.
Problem I
Find u such that
where \(p>2\), and the source term f is smooth enough to ensure the validity of the following analysis and does not vanish on any set \(K\subset\Omega\) of nonzero measure.
The p-Laplace equation is not only a tool for studying the special theory of Sobolev spaces [1], but also an important mathematical model in physics and other applied sciences; for example, it describes a variety of nonlinear media such as phase transitions in water and ice at the transition temperature [2], elasticity [3], population models [4], non-Newtonian fluid motion in the boundary layer [5], and digital image processing [6]. Since the equation contains a very strong nonlinearity, numerical methods are an important approach to solving it. The finite element method, combined with a Newton iteration scheme, is one of the most efficient such methods. A posteriori error estimates for the finite element approximation of the p-Laplace equation were developed by Carstensen et al. [7, 8]. Carstensen [9, 10] applied these a posteriori error estimates to a control method for solving the equation. The control method is based on the Newton iteration. However, the Newton iteration for the p-Laplace equation is not discussed in detail in their study; it depends strongly on the selection of the initial iterative function and also requires an exploratory reduction of the iterative step length (the default step length is 1; see [11]). Therefore, it is necessary to study how to select a suitable initial function. On the other hand, although the Newton algorithm has the advantage of a very fast convergence rate near the solution (see [11]), many factors (for instance, ill-posedness) affect the convergence of Newton iterations for the p-Laplace equation. Bermejo and Infante [12] applied Polak-Ribiere iterations instead of Newton iterations in a multigrid algorithm for the p-Laplace equation because of computational difficulties related to the ill-posed coefficient matrix. In order to overcome this ill-posedness, it is necessary to develop a well-posed condition on the iterative functions.
To the best of our knowledge, a well-posed condition on the iterative functions of finite element-Newton iterations for the p-Laplace equation has not been provided so far. Therefore, in this paper, we mainly aim to establish such a well-posed condition and to provide a theoretical analysis. To this end, we transform the p-Laplace equation into a functional minimum problem (see Section 2 in [12]) solved by Newton iterations. According to the well-posed condition, an effective particular initial function is selected, and an effective sequence of iterative functions is constructed. In addition, using the gradient modulus and gradient direction on an element, we discuss the factors affecting the convergence of Newton iterations.
To this end, we first introduce some special Sobolev spaces and two preparative definitions as follows. Let
with inner product and norm
In particular, we set \(H^{1}_{0}(\Omega)=W_{0}^{1,2}(\Omega)\) when \(p=2\). Since \(p>2\) and Ω is bounded, the imbedding of \(W_{0}^{1,p}(\Omega)\) into \(H^{1}_{0}(\Omega)\) is continuous (see [1]).
Definition 1
For positive numbers a and b, if there exist two constants \(c_{1}\) and \(c_{2}\) (\(c_{1}\ge c_{2}>0\)) independent of a and b such that
then a is known as the same order large (small, respectively) with b. Similarly, a is said to be high order large (small, respectively) with b if there exists \(s>1\) such that
Definition 2
For \(p>2\), a function \(u(x,y)\) is called well-posed with respect to \(w(x,y)\) (in short, \(u(x,y)\) is well-posed) if there exist two functions \(c_{1}(x,y)\) and \(c_{2}(x,y)\) such that: \(c_{1}(x,y)\ge c_{2}(x,y)>0\); when \(\vert \nabla w(x,y) \vert \le1\), \(c_{1}(x,y)\) is at most the same order small with \(\vert \nabla w(x,y) \vert \) (i.e., there is no situation of high order), and \(c_{2}(x,y)\) is at most the same order large with \(1/\vert \nabla w(x,y) \vert \); when \(\vert \nabla w(x,y) \vert >1\), \(c_{1}(x,y)\) is at most the same order small with \(1/\vert \nabla w(x,y) \vert \), and \(c_{2}(x,y)\) is at most the same order large with \(\vert \nabla w(x,y) \vert \); and
The paper is organized as follows. In Section 2, a functional minimum problem equivalent to the p-Laplace equation is introduced, a finite element-Newton iteration formula is established, and the classical Newton algorithm is presented. In Section 3, we discuss the well-posed condition on the iterative functions. Even if the initial function fails to satisfy the well-posed condition, after a sufficient number of Newton iterations with the default step length, well-posed iterative functions can always be obtained, as shown by the well-posed theorem of Section 3. However, the required number of iterations is often large, so an effective particular initial iterative function satisfying the well-posed condition should be selected in order to greatly reduce the number of iterations; this is addressed in Section 5. In Section 4, an effective particular sequence of iterative functions is provided. These functions have the property that their gradient moduli on each subdivision finite element decrease monotonically and have a certain lower bound. Using these properties, we prove that this sequence converges to the solution of the finite element formulation of the p-Laplace equation and present some results on its convergence speed. In Section 5, based on the well-posed condition and the properties mentioned, we select an effective particular initial iterative function, which produces the special iterative functions of Section 4 by finite element-Newton iterations with the default step length. In Section 6, some numerical experiments are provided to show that the results on the convergence rate and gradient fields are consistent with the theoretical conclusions.
A Newton algorithm for the p-Laplace equation
The variational formulation of Problem I is as follows.
Problem II
Find \(u\in W_{0}^{1,p}(\Omega)\) such that
Problem II is equivalent to solving the following functional minimum problem (see Section 2 in [12]):
where
Problem II and the corresponding minimum problem have the same unique solution (see [13]).
This is an unconstrained optimization problem for a scalar function, which can be solved by the Newton method (see [11]). According to Section 2 in [12], the first derivative operator of J with respect to u is
where the operator \(J'(u)\) is defined on the space \(H^{1}_{0}(\Omega )\), which is an inner product space and more convenient for numerical computing than \(W_{0}^{1,p}(\Omega)\). By the continuous imbedding, the uniqueness of the solution to Problem II in \(W_{0}^{1,p}(\Omega)\) implies its uniqueness in \(H^{1}_{0}(\Omega)\) as well.
Similarly, the second-derivative operator (Section 2 in [12]) is written as
for all \(\delta_{1} u,\delta_{2} u\in H^{1}_{0}(\Omega)\). By using the classical Newton algorithm, we can establish the following Newton iteration formula. Find \(u-u_{k}\in H^{1}_{0}(\Omega)\) such that
where \(u-u_{k}\) is the Newton descent direction. Furthermore, (2.2) can be expressed as the following equation:
In order to find the solution of this problem, we apply the finite element method. Let \(\{S_{h}\}\) be a uniformly regular family of triangulations of Ω̄ with diameters bounded by h (see [14]). We denote by \(e\in S_{h}\) a subdivision element and by P the set of nodes. Let us introduce the following finite element space:
where \(P_{1}(e)\) denotes the space of polynomials of degree at most 1 on the triangular element e. We take \(\{\lambda_{i}\}_{i=1}^{M}\) as a basis of \(H_{h}\), where M is the total number of nodes. Thus, the finite element formulation of (2.2) can be written as follows.
Problem III
Find \(\delta u_{k}\in H_{h}\) such that
For any nonzero measure set \(K\subset\Omega\), the property of f ensures that the iterative function \(u_{k}\) is not constant on K. Set \(a(u_{h},v_{h}):=\int_{\Omega}[(p-2)\vert \nabla u_{k} \vert ^{p-4}(\nabla u_{k}\cdot\nabla u_{h})(\nabla u_{k}\cdot\nabla v_{h})+\vert \nabla u_{k} \vert ^{p-2}\nabla u_{h}\cdot\nabla v_{h}]{\,\mathrm{d}}x{\,\mathrm{d}}y\). Then, for any \(v_{h}\in H_{h}\) that does not vanish on Ω, we have the following formula:
By the symmetry and positive definiteness of \(a(\cdot,\cdot)\) there exists a unique \(u_{h}\in H_{h}\) such that
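For P1 elements, the integrand of the bilinear form \(a(\cdot,\cdot)\) defined above is constant on each triangle, so the local element matrix can be computed in closed form. The following sketch (our illustration, not code from the paper; it assumes \(\vert \nabla u_{k} \vert >0\) on the element, in line with the non-constancy of \(u_{k}\) noted above, and \(p\ge4\) so that the exponent \(p-4\) is nonnegative) assembles the \(3\times3\) matrix of \(a(\lambda_{i},\lambda_{j})\) on a single triangle:

```python
import numpy as np

def p1_gradients(tri):
    # tri: 3x2 array of vertex coordinates; returns (area, grads), where
    # grads[i] is the (constant) gradient of the i-th P1 barycentric basis
    # function on the triangle.
    x, y = tri[:, 0], tri[:, 1]
    detJ = (x[1]-x[0])*(y[2]-y[0]) - (x[2]-x[0])*(y[1]-y[0])  # 2 * signed area
    area = abs(detJ) / 2.0
    grads = np.array([[y[1]-y[2], x[2]-x[1]],
                      [y[2]-y[0], x[0]-x[2]],
                      [y[0]-y[1], x[1]-x[0]]]) / detJ
    return area, grads

def element_matrix(tri, grad_uk, p):
    # Local 3x3 matrix a(lambda_i, lambda_j) on one triangle for the form
    # of Problem III; the integrand is constant on e, so the integral is
    # just area * integrand.
    area, g = p1_gradients(tri)
    m = np.linalg.norm(grad_uk)          # |grad u_k(e)|, assumed > 0
    A = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            A[i, j] = area * ((p-2) * m**(p-4) * (grad_uk @ g[i]) * (grad_uk @ g[j])
                              + m**(p-2) * (g[i] @ g[j]))
    return A
```

The local matrix is symmetric and positive semidefinite (its kernel is spanned by the constants, since the basis gradients sum to zero); positive definiteness of the assembled form on \(H_{h}\) then follows from the zero boundary condition, consistent with the unique solvability stated above.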
Now, we recall the classical Newton algorithm.
Step 1. Set \(k=0\) and a termination tolerance \(\varepsilon>0\), select an initial iterative function \(u_{0}\in H_{h}\), and compute \(J(u_{0})\).
Step 2. Iterative formula: solve equation (2.4) for \(\delta u_{k}\in H_{h}\) and set the iterative step length \(\alpha:=1\).
Step 3. Set \(u_{k+1}:=u_{k}+\alpha\delta u_{k}\) (\(k=0,1,2,\ldots\)).
Step 4. If \(J(u_{k+1})< J(u_{k})\), then go to Step 5. Otherwise, set \(\alpha:=\alpha/2\) and go to Step 3.
Step 5. If \([ \sum_{i=1}^{M} ( -\int_{\Omega }\vert \nabla u_{k+1} \vert ^{p-2}\nabla u_{k+1}\cdot\nabla\lambda _{i}{\,\mathrm{d}}x{\,\mathrm{d}}y+\int_{\Omega} f\lambda_{i}{\,\mathrm{d}}x{\,\mathrm{d}}y) ^{2} ] ^{1/2}<\varepsilon\), the algorithm terminates, and output \(u_{k+1}\). Otherwise, go to Step 2.
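The structure of Steps 1-5 can be sketched on a one-dimensional analogue of the functional, \(J(u)=\vert u\vert^{p}/p-fu\), whose minimizer solves \(\vert u\vert^{p-2}u=f\). This is a toy illustration of the damped Newton loop (our own helper names, not the finite element implementation):

```python
def damped_newton(J, dJ, d2J, u0, eps=1e-8, max_iter=100):
    # Mirrors Steps 1-5: full Newton step (alpha = 1), halving alpha until
    # the functional decreases, terminating on the residual |J'(u)|.
    u = u0
    for _ in range(max_iter):
        if abs(dJ(u)) < eps:                 # Step 5: residual test
            return u
        du = -dJ(u) / d2J(u)                 # Step 2: Newton direction
        alpha = 1.0                          # default step length
        while J(u + alpha * du) >= J(u):     # Step 4: exploratory reduction
            alpha *= 0.5
            if alpha < 1e-12:
                break
        u = u + alpha * du                   # Step 3
    return u

# Scalar analogue of the p-Laplace functional with p = 5, f = 1; the
# minimizer is u* = f^(1/(p-1)) = 1.
p, f = 5.0, 1.0
J   = lambda u: abs(u)**p / p - f * u
dJ  = lambda u: abs(u)**(p - 2) * u - f
d2J = lambda u: (p - 1) * abs(u)**(p - 2)
```

Note that \(J''\) degenerates where the "gradient" \(u\) is small, which is the scalar analogue of the ill-posedness discussed in Section 3.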
Remark 1
As will be seen in Section 4, the convergence speed of the Newton iterative functions is very fast when these functions are near the solution to Problem II. However, the overall convergence speed relies heavily on the selection of the initial iterative function, so it is important to find a good initial function. On the other hand, in order to ensure descent, an exploratory reduction of the iterative step length may be necessary; see Step 4 of the Newton algorithm. Sometimes, several attempts to shorten the step length slow down the iteration considerably. Therefore, in the following sections, we give some improvements and modifications to achieve better convergence.
Well-posed condition and well-posed theorem of iterative function
Well-posed condition
To begin with, consider the following problem: find w such that
Its finite element formulation can be written as follows.
Problem IV
Find \(w_{0}\in H_{h}\) such that
Noting the relationship between the solution of Problem IV and the finite element solution of Problem II, denoted by \(u^{*}\in H_{h}\), we get that, for each triangular element \(e\in S_{h}\),
Since \(p-1>1\), the relationship (3.2) between the element gradient moduli indicates that \(u^{*}\) is steep where \(w_{0}\) is gentle and, conversely, \(u^{*}\) has small steepness where the gradient of \(w_{0}\) is large.
Since \(H_{h}\) consists of piecewise continuous functions of degree 1, for each triangular element \(e\in S_{h}\) and all \((x,y)\in e\), \(\vert \nabla w_{0}(x,y) \vert \) is independent of x and y. If f vanishes at a point of the domain Ω, then the value of \(\vert \nabla w_{0} \vert \) is correspondingly very small in a small neighborhood of that point. Since the domain is partitioned into small triangular elements, there exist elements on which \(\vert \nabla w_{0}(e) \vert \) is less than 1 and often far less than 1. As will be seen below, these triangular elements determine the convergence behaviour of the Newton iterations.
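Because \(w_{0}\in H_{h}\) is piecewise linear, \(\nabla w_{0}\) can be computed exactly on each triangle from the three nodal values. The following sketch (illustrative only; the mesh data layout is our assumption) computes the constant element gradient and flags the elements with \(\vert \nabla w_{0}(e) \vert <1\) discussed above:

```python
import numpy as np

def element_gradient(tri, nodal_vals):
    # Gradient of a P1 function on one triangle: constant on the element,
    # equal to sum_i w_i * grad(lambda_i) for the barycentric basis.
    x, y = tri[:, 0], tri[:, 1]
    detJ = (x[1]-x[0])*(y[2]-y[0]) - (x[2]-x[0])*(y[1]-y[0])
    grads = np.array([[y[1]-y[2], x[2]-x[1]],
                      [y[2]-y[0], x[0]-x[2]],
                      [y[0]-y[1], x[1]-x[0]]]) / detJ
    return nodal_vals @ grads

def small_gradient_elements(tris, vals, tol=1.0):
    # Indices of elements with |grad w_0(e)| < tol (< 1 in the text);
    # these are the elements that drive the convergence behaviour.
    return [k for k, (t, v) in enumerate(zip(tris, vals))
            if np.linalg.norm(element_gradient(t, v)) < tol]
```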
A natural idea is to select \(w_{0}\) as the Newton initial function, that is, \(u_{0}=w_{0}\). According to iterative formula (2.4) and (3.1), for each element \(e\in S_{h}\), the following equation holds:
where the coefficient on the left-hand side is \(\vert \nabla w_{0} \vert ^{p-2}\), whereas the modulus of the second term on the right-hand side is \(\vert \nabla w_{0} \vert \). If \(p>3\) and some elements satisfy
then \(\vert \nabla\delta u_{0} \vert \) in (2.4) may be far greater than 1 on these elements, which often occurs in the numerical experiments of Section 6. According to the classical Newton algorithm, obtaining \(J(u_{1})\) less than \(J(w_{0})\) may then require many attempts to shorten the step length, which greatly affects the iteration speed. This situation does not occur for \(2< p\le3\).
Generally, for a certain iterative function \(u_{k}\), on each triangular element \(e\in S_{h}\), the iterative formula is written as
Likewise, if some elements satisfy
then \(\delta u_{k}\) of iterative formula (3.3) often has the property
The other extreme situation is that
where \(u_{k}\) cannot be the finite element solution of Problem II according to (3.2). Therefore, both the initial function and the iterative functions should avoid the two extreme conditions (3.4) and (3.5) on each element \(e\in S_{h}\). This means that \(u_{k}\) should be well posed with respect to \(w_{0}\), which is called the well-posed condition on the iterative functions, that is,
where \(c_{1}(e)\) and \(c_{2}(e)\) meet the requirements of Definition 2.
Remark 2
The function \(w_{0}\) introduced here plays a very important role in the next subsection. Moreover, the elements where \(\vert \nabla w_{0}(e) \vert \) is far less than 1 can be vital for the convergence behaviour of the Newton iterations. For a better convergence effect, the initial iterative function needs to be selected to satisfy the well-posed condition, which will be discussed in detail in Section 5.
Well-posed theorem of Newton iteration
Although an iterative function may fail to meet the well-posed condition, the Newton iteration always produces an iterative function satisfying the condition, as the following theorem shows.
Theorem 1
Well-posed theorem
If there exists a domain \(\tau\subset\Omega\) such that
then we can always obtain a certain \(k'>k\) such that \(u_{k'}\) satisfies the well-posed condition by the Newton iteration with \(\alpha=1\).
Proof
We only need to discuss the iteration on τ. By the description in Section 3.1, we have the following estimate:
Since \(u_{k+1}=u_{k}+\delta u_{k}\) and \(p-2>0\), the estimate on τ yields
Thus, the terms on the left-hand side of iterative formula (3.3) can be approximated by
In the same way, the terms on the right-hand side of (3.3) have the following estimate:
Combining the preceding two estimates, we obtain that
Evidently, the gradient of \(u_{k+2}\) can be expressed as
Besides, we obtain that
which indicates that the gradient moduli of the iterative functions decrease at a fixed rate. The geometric decrease stops only when (3.8) fails to hold; that is, there exists a certain \(k'>k\) such that \(u_{k'}\) satisfies
where C is a small constant, and c is defined by
which completes the proof of Theorem 1. □
Corollary 1
Let \(\{\lambda_{i}\} _{i=1}^{M}\) be a basis of finite element space \(H_{h}\) and set
Under the hypotheses of Theorem 1, we have the following estimate:
Remark 3
The well-posed theorem shows that even though some iterative functions have poor properties, well-posed iterative functions are always obtained by Newton iterations. However, the geometric reduction often requires many iterations, as validated by the numerical experiments in Section 6. Besides, at the beginning of the iterations, the value of \(G(u_{k})\) is quite large, which easily leads to data overflow. On the other hand, the well-posed theorem does not tell us whether or not the subsequent iterative functions produced by Newton iterations with the default step length are all well posed. Therefore, it is necessary to find a better, well-posed initial function.
Convergence and its rate of an effective particular iterative function sequence
To begin with, we assume that the initial function satisfies
Since the finite element solution of Problem II satisfies (3.2), combining the previous inequality and (3.2), we study a sequence of iterative functions whose gradient moduli decrease on subdivision elements \(e\in S_{h}\), that is, for each \(e\in S_{h}\),
We will use this function sequence to approximate the finite element solution of Problem II, which is the main issue discussed in this section.
Decreasing conditions of gradient modulus
For each triangular element \(e\in S_{h}\), we introduce some useful notations:
Take a unit vector \(\boldsymbol{n}=(-\xi_{2},\xi_{1})^{T}/\vert \nabla u_{k}(e) \vert \) and set
where \(r_{2}^{k}(e)\) may be positive or not.
Lemma 1
For given function \(u_{k}\in H_{h}\), we have the following inequality on each triangular element \(e\in S_{h}\):
If \(\delta u_{k}\in H_{h}\) is the solution of Problem III and \(u_{k+1}=u_{k}+\delta u_{k}\) and if \(r_{1}^{k}\) and \(r_{2}^{k}\) satisfy
then the gradient moduli on element \(e\in S_{h}\) decrease, that is,
Proof
According to (4.1) and (4.2), the gradient of \(w_{0}\) on a triangular element e is denoted by
where \(0\le r_{1}^{k}(e)\le1\) and \(\vert r_{2}^{k}(e) \vert \le1\). By condition (4.3) we have
On the element e, taking the scalar product of (3.3) with \(2\nabla u_{k}\), we get
Evidently, (4.7) means that
Likewise, taking the scalar product of (3.3) with n, we have
and
Due to (4.8) and (4.9), \(\nabla\delta u_{k}\) is written as
Taking the scalar product of (3.3) with \(\nabla\delta u_{k}\) yields
We consider the following equation:
Combining (4.7)-(4.11) with (4.12) yields the equation
Condition (4.4) results in
Therefore, (4.12) is not greater than zero, which means
that is,
which completes the proof of Lemma 1. □
Remark 4
Lemma 1 indicates that, in order to make the gradient modulus of the next iterative function \(u_{k+1}\) less than that of \(u_{k}\), the projection of \(\nabla w_{0}\) onto the orthogonal complement of \(\nabla u_{k}\) needs to be small enough. Furthermore, this ensures a small projection of \(\nabla\delta u_{k}\) onto the orthogonal complement of \(\nabla w_{0}\), so that the direction of the gradient field of \(u_{k+1}\) is almost consistent with that of \(w_{0}\). This consistency is very important since the direction of the gradient of \(u^{*}\) is the same as that of \(w_{0}\), as a result of (3.2). As will be seen in the next subsection, the decrease of the gradient modulus on an element \(e\in S_{h}\) is an important precondition for the convergence of the iterative functions.
Convergence analysis
In order to derive the convergence of the effective particular iterative functions, we first introduce the following Lemma 2 corresponding to Lemma 1.
Lemma 2
For \(u_{k}\in H_{h}\) and \(\delta u_{k}\in H_{h}\), let \(u_{k+1}=u_{k}+\delta u_{k}\), and let q be the conjugate exponent of p, that is, \(1/p+1/q=1\). If \(u_{k}\) (\(k=1,2,\ldots\)) satisfy the requirements of Lemma 1 and if \(w_{0}\in W^{1,p}(\Omega)\) and \(u_{0}\in H_{0}^{1}(\Omega)\), then we have
and
where C in this context indicates a positive constant that is possibly different at different occurrences.
Proof
Taking \(v_{h}=u_{k+1}-u_{k}\) in (2.4) of Problem III yields that
According to the decreasing result of Lemma 1, the third term on the left-hand side of (4.15) yields that
Combining (4.16) with (4.15) yields the following estimate:
Summing (4.17) from \(k=0,1, \ldots, N-1\) and using the Young and Cauchy inequalities, we obtain that
Furthermore, (4.18) can be written as
Evidently, (4.13) and (4.14) hold, which completes the proof of Lemma 2. □
The compact embedding theorem (see [1]) shows that, for two-dimensional space and \(p>2\), if Ω is bounded and its boundary is Lipschitz continuous, then the following imbedding is compact:
The following theorem can be derived from this compact embedding result.
Theorem 2
Let \(\eta\in P\) denote a node of \(S_{h}\), and let \(\{\lambda_{i}\}_{i=1}^{M}\) be a basis of \(H_{h}\). Under the hypotheses of Lemma 2, there exists a unique \(\bar {u}\in H_{h}\) such that the iterative functions \(u_{k}\) converge to ū in the following sense:
Proof
Since (4.13) holds, due to the compact embedding theorem, (4.20) is easily derived, and also \(u_{k}\stackrel{W}{\to}{\bar{u}}\) in \(W_{0}^{1,p}(\Omega)\), that is,
where \(( W_{0}^{1,p}(\Omega) ) '\) is the dual space of \(W_{0}^{1,p}(\Omega)\). Let us introduce the space
We note that X is isomorphic to \(W_{0}^{1,p}(\Omega)\) with the one-to-one operator
and its corresponding conjugate operator
satisfies
Evidently, by (4.13), \(X\hookrightarrow\hookrightarrow L^{1}(\Omega )\). For a given triangulation \(S_{h}\) of Ω, the basis functions of the finite element space \(H_{h}\) satisfy
which indicates that \(\nabla\lambda_{i}\in L^{\infty}(\Omega)\) (\(i=1,2,\ldots,M\)). Due to the Riesz representation theorem (see [1]), for each \(\nabla\lambda_{i}\), there exists a unique \(\varphi _{i}\in( L^{1}(\Omega) ) '\) such that
and
Since \(( L^{1}(\Omega) ) '\subset X'\), for each \(\varphi _{i}\) mentioned, there exists a unique \(g_{i}\in( W_{0}^{1,p}(\Omega) ) '\) such that \(\varphi_{i}=A^{*}\circ g_{i}\). From (4.22) and (4.24) we derive (4.21) by passing to the limit
Thus, the proof of Theorem 2 is completed. □
Although \(u_{k}\) converges to ū by Theorem 2, it is still not clear whether ū is the finite element solution of Problem II. This question is answered by Theorem 3. To this end, it is necessary to introduce the following lemma.
Lemma 3
For vectors \(\boldsymbol{a}\in R^{2}\) and \(\boldsymbol{b}\in R^{2}\), there exists a constant \(c_{0}>0\) independent of a and b such that
Theorem 3
Under the hypotheses of Lemma 1, let \(\bar{u}\in H_{h}\) be the convergence function of \(u_{k}\), and \(u^{*}\in H_{h}\) be the finite element solution of Problem II. For each node \(\eta\in P\) and each element \(e\in S_{h}\), we have
Proof
Taking \(v_{h}=\lambda_{i}\) in (2.4), by (4.22) and the Cauchy inequality we get
Since \(\Vert \nabla u_{k} \Vert _{0,p-2,\Omega}\le \Vert \nabla u_{k} \Vert _{0,p,\Omega }\) (see [1]) and \(\Vert \nabla u_{k} \Vert _{0,p,\Omega}\) is bounded, (4.25) can be written as
Summing (4.26) for \(k=0,1,2, \ldots, N-1\) and using (4.19), we obtain that
which means that, for each i (\(1\le i\le M\)), we have the limit
By (4.21) we observe that, for each i (\(1\le i\le M\)),
Since \(\{\lambda_{i}\}_{i=1}^{M}\) is a basis of the finite element space \(H_{h}\), for any \(v_{h}\in H_{h}\), we have
On the other hand, \(u^{*}\in H_{h}\) is the unique finite element solution of Problem II such that
Owing to Lemma 3, subtracting (4.27) and (4.28) and taking \(v_{h}=\bar {u}-u^{*}\) yield that
where \(c_{0}>0\). Evidently, we have
Thus, the proof of Theorem 3 is completed. □
Remark 5
Theorem 2 shows that the gradient of \(u_{k}\) is only weakly convergent in the sense of (4.21). A stronger convergence is discussed in the study of the convergence rate of the iterative functions in the next subsection.
Convergence rate of iterative functions near the solution
It is well known that Newton's method for algebraic equations has a local quadratic convergence rate (see [15]). Among the usual optimization algorithms, such as descent methods and the conjugate gradient method, the convergence rate of Newton iterations near the solution is the fastest (see [15]). Whether this also holds for the p-Laplace equation is studied in this subsection.
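For a single algebraic equation, the local quadratic rate is easy to observe numerically. A small illustration of our own (not from the paper) with \(x^{2}-2=0\), where the error roughly squares at each step near the root:

```python
import math

def newton(f, df, x0, n):
    # Plain Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k); returns all iterates.
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

# Solve x^2 - 2 = 0 starting near the root sqrt(2).
xs = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.5, 4)
errs = [abs(x - math.sqrt(2.0)) for x in xs]
# Each error is below the square of the previous one, the signature of
# local quadratic convergence.
```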
Theorem 4
Local convergence rate theorem
Assume that \(u^{*}\in H_{h}\) is the solution of Problem II such that
Let \(u_{k-1}\), \(u_{k}\), and \(u_{k+1}\) (\(k=1,2,\ldots\)) be iterative functions by Newton iteration with \(\alpha=1\) satisfying
where \(d>0\) is constant. Then, we have the estimate
where \(C(p)\) is a positive number that depends only on p.
Proof
First, we consider the third derivative of operator \(J(u)\) as follows:
From (4.29), the Newton iterative formula (2.2), and (2.5), by the Taylor expansion and (4.32), we derive the estimate
where \(C(p)\) in this context indicates a positive constant that only depends on p and is possibly different at different occurrences. For convenience, we set
By using (4.30) and the estimate described we have
Evidently, (4.31) is an immediate consequence of (4.30) and the last estimate. The proof of Theorem 4 is complete. □
Corollary 2
Under the assumptions of Theorem 4, if \(u_{k}\) and \(u_{k-1}\) satisfy
then we have the inequality
which indicates that
Remark 6
In the statement of the local convergence rate theorem, it is significant to introduce the definition of μ, which depends on the relationship between the solution \(u^{*}\) and the iterative functions \(u_{k}\) and \(u_{k-1}\). More precisely, this relationship is represented by the ratio of the gradient moduli of \(u^{*}\), \(u_{k}\), and \(u_{k-1}\) on an element \(e\in S_{h}\), whose powers satisfy
On the other hand, the boundedness of μ is similar to the well-posed condition for \(u^{*}\), \(u_{k}\), and \(u_{k-1}\), so that there is no high-order relation among them. Moreover, it implies that the iterative functions \(u_{k}\) and \(u_{k-1}\) are in a neighborhood of the solution \(u^{*}\). This is why we call Theorem 4 a local convergence rate theorem. In fact, (4.31) shows the convergence rate by the fact that the term \(\vert \nabla(u_{k}-u^{*}) \vert ^{4}\) on its right-hand side has a higher power than the term \(\vert \nabla(u_{k+1}-u^{*}) \vert ^{2}\) on its left-hand side, which is similar to the case of algebraic equations, whereas the power of \(\vert \nabla u_{k-1} \vert ^{p-4}\) on the right-hand side is lower than that of \(\vert \nabla u_{k} \vert ^{p-2}\) on the left-hand side. These changes in powers mean that the iterative functions approach the solution faster and faster.
Remark 7
Corollary 2 is a result on the stronger convergence of the gradient, compared with (4.21) in Theorem 3. However, due to the description of (4.33), it is different from the strong convergence in \(H_{0}^{1}(\Omega)\) or \(W_{0}^{1,p}(\Omega)\), which is related to the value of \(\vert \nabla u_{k} \vert ^{p-2}\) on each element \(e\in S_{h}\) and consistent with the statement of (4.14) in Lemma 2.
An effective particular initial iterative function
To begin with, we introduce the particular problem
whose finite element formulation is as follows.
Problem V
Find \(\phi\in H_{h}\) such that
Evidently, on each element \(e\in S_{h}\), the solution ϕ satisfies
Thus, a particular initial iterative function is
which satisfies the following inequality on each element \(e\in S_{h}\):
According to the theory of elliptic equations (see [16]), there exists a constant \(C>0\) such that, on each element \(e\in S_{h}\),
Therefore, we take \(c_{2}(e)=1+C\) and
such that \(u_{0}\) is well posed. For this particular initial function, (5.4) plays a very important role in the construction of the special iterative function sequence mentioned in Section 4, as stated in the following lemma.
Lemma 4
Take the initial function \(u_{0}\) defined as in (5.3) and set \(u_{k}\in H_{h}\) as the iterative function by Newton formula (2.4) with step length \(\alpha=1\). For any integer \(k\ge0\) and triangular element \(e\in S_{h}\), we have the following inequalities:
Proof
We use mathematical induction. First, we study the case \(k=0\). According to (5.2), (4.1), and (4.2), for each element \(e\in S_{h}\), we have
Since \(\delta u_{0}\in H_{h}\) is the solution of (2.4), due to (4.10) and (5.7), \(u_{1}\) is characterized by
which indicates that \(u_{1}\) and \(w_{0}\) have the same gradient field direction, so that \(u_{1}\) satisfies \(r_{2}^{1}(e)=0\). Therefore, for \(k=0\), Lemma 1 shows that, on each element \(e\in S_{h}\), we have \(\vert \nabla u_{1} \vert \le \vert \nabla u_{0} \vert \). Furthermore, from (4.3), (4.10), and (5.7) we get the following equations on each triangular element \(e\in S_{h}\):
In order to study the relationship of these two equations, we introduce the function
Since \(g(1)=0\) and \(g'(x)=1-[1-(1-x)/(p-1)]^{p-2}\ge0\) for \(x\in[0,1]\), we obtain \(g(x)\le0\) on \([0,1]\), which means that \(u_{1}\) satisfies (5.5).
Assuming that \(u_{k}\) satisfies
we consider
Likewise, from the iterative formula (3.3), it follows that \(r_{2}^{k+1}(e)=0\). Applying the method in the case \(k=0\) to the general situation mentioned before, we obtain that \(u_{k+1}\) satisfies
which completes the proof of Lemma 4. □
Theorem 5
If the initial function \(u_{0}\) is taken as in (5.3) and \(u_{k}\in H_{h}\) is the iterative function obtained by Newton formula (2.4) with step length \(\alpha =1\), then \(u_{k}\) converges to the finite element solution of Problem II.
Remark 8
As described before, in order to achieve descent, the classical Newton algorithm needs a few attempts to shorten the iterative step length, whereas the particular initial function introduced in this section allows the step length to remain equal to 1. Besides, it is proved that the iterative functions based on this initial function converge to the solution of the p-Laplace equation, and the convergence rate theorem also holds. On the other hand, owing to the decreasing property and the existence of a nontrivial lower bound, these iterative functions are all well posed, whereas the well-posed theorem in Section 3.2 cannot guarantee that the subsequent iterative functions are always well posed. In fact, this is due to the absence of a nontrivial lower bound on the gradient modulus in the well-posed theorem.
Some numerical experiments
In this section, we present some experiments of the Newton method to solve the p-Laplace equation based on two different initial iterative functions so as to validate the conclusion of theoretical analysis in the preceding section. To begin with, we take \(p=5\), the two-dimensional domain
and the source function f defined by \(f(x,y)=\sin(\pi x)\cos(\pi y)\).
We divide the domain Ω̄ into small triangular elements, which leads to a uniformly regular triangulation \(S_{h}\) with \(h\le 0.05\). The triangulation is depicted graphically on the left-hand side of Figure 1.
First, we consider the solution \(w_{0}\) of Problem IV as the initial function. As shown on the right-hand side of Figure 1, the solution is characterized by a gentle and smooth shape at the peaks and troughs. Applying the Newton iteration with step \(\alpha=1\), we obtain the iterative function \(u_{1}\), depicted on the left-hand side of Figure 2. Evidently, \(u_{1}\) is not well posed: its values can reach ±100 in some regions owing to the large gradient modulus there. Continuing the Newton iteration with step length equal to 1, 29 iterations are needed to obtain the well-posed function \(u_{29}\), depicted on the right-hand side of Figure 2.
All 29 iterations are recorded in Table 1, together with two parameters, G and θ. According to (3.9), \(G(u_{k})\) indicates whether \(u_{k}\) approximates the solution of Problem II: the smaller G is, the better the approximation, and the decline rate of G is measured by θ. From Table 1, the large values of G during the first 20 iterations show that there exist areas with very large gradient modulus, which can also be inferred from the decline rate θ. Indeed, the numerical experiment verifies the conclusion of Corollary 1 on the rate at which G declines.
Not until the 27th iteration does θ become slightly smaller. This slight change shows that there are still some areas whose gradients affect the overall convergence more significantly than those of other areas.
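The decline rate θ can be monitored as the ratio of successive values of G. The following sketch computes \(\theta_{k} = G(u_{k+1})/G(u_{k})\) from a hypothetical convergence log (the numbers are illustrative only, not those of Table 1); a roughly constant θ below 1 corresponds to the geometric decline asserted by Corollary 1.

```python
# Hypothetical values of the indicator G(u_k) over Newton iterations
# (illustrative numbers only, not the paper's Table 1).
G = [1.0e12, 2.1e11, 4.5e10, 9.6e9, 2.1e9]

# theta_k = G(u_{k+1}) / G(u_k): a roughly constant theta < 1 means
# that G declines geometrically.
theta = [g1 / g0 for g0, g1 in zip(G, G[1:])]
for k, t in enumerate(theta):
    print(f"iteration {k + 1}: theta = {t:.3f}")
```

A stall in θ near 1, by contrast, would indicate regions whose gradient modulus dominates and slows the overall convergence, as observed for the initial function \(w_{0}\).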
According to the previous discussion, although selecting \(w_{0}\) as the initial function eventually yields a well-posed function after sufficiently many iterations, the drawback is that the geometric decrease is slow; moreover, at the first iteration the value of G can reach \(10^{12}\), which is likely to cause data overflow. We conclude that \(w_{0}\) is not suitable as the initial function. Therefore, we take the expression (5.3) as the initial function \(u_{0}\), depicted on the left-hand side of Figure 3. According to the theoretical analysis in Sections 4 and 5, this particular initial function leads to a particular sequence of iterative functions with the decreasing property of the gradient modulus on each element e, which is proved to converge to the solution of the p-Laplace equation. The numerical experiments show that only 8 iterations are needed to achieve a better result, depicted on the right-hand side of Figure 3 and recorded in Table 2. Compared with \(w_{0}\), the shape of \(u_{8}\) at the peaks and troughs becomes steep and sharp. As seen in Table 2, G becomes very small after 8 iterations, and the rate θ indicates that the decline gradually accelerates owing to the good selection of the initial function.
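The effect of a well-posed initial iterate on full-step Newton can be illustrated on a one-dimensional analogue. The sketch below, which does not reproduce the paper's 2D triangulation or the specific function (5.3), solves \(-(|u'|^{p-2}u')'=1\) on \((0,1)\) with \(u(0)=0\), \(u(1)=1\) by centred finite differences and Newton iterations with step length 1. The initial iterate \(u_{0}(x)=x\) has \(|u_{0}'|=1\) on every element, i.e. a nontrivial lower bound on the gradient modulus, so each Newton system is well posed (the tridiagonal Jacobian, whose weights \((p-1)|u'|^{p-2}\) would vanish where the gradient vanishes, never degenerates).

```python
import numpy as np

p = 5.0
n = 40                       # number of subintervals
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.ones(n - 1)           # source term at the interior nodes

u = x.copy()                 # particular initial iterate u0(x) = x, |u0'| = 1

def residual(u):
    d = np.diff(u) / h                      # gradient on each element
    flux = np.abs(d) ** (p - 2) * d         # |u'|^{p-2} u' at element midpoints
    return -(flux[1:] - flux[:-1]) / h - f  # interior residual F(u)

for k in range(50):
    d = np.diff(u) / h
    w = (p - 1) * np.abs(d) ** (p - 2)      # flux derivative on each element
    # Tridiagonal Jacobian of F at the interior nodes
    J = (np.diag((w[:-1] + w[1:]) / h**2)
         - np.diag(w[1:-1] / h**2, 1)
         - np.diag(w[1:-1] / h**2, -1))
    F = residual(u)
    if np.linalg.norm(F, np.inf) < 1e-10:
        break
    u[1:-1] -= np.linalg.solve(J, F)        # full Newton step, alpha = 1

print(k, np.linalg.norm(residual(u), np.inf))
```

Because the gradient modulus of every iterate stays bounded away from zero here, the full-step iteration converges rapidly, mirroring the behaviour reported for the initial function (5.3); starting instead from an iterate whose gradient nearly vanishes on some element would make the Jacobian nearly singular, the 1D counterpart of the ill-posedness observed for \(w_{0}\).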
According to Remark 4 of Section 4, the direction of the gradient on each element e is an important factor affecting the convergence, which is taken into account in the selection of the initial function in Section 5. Accordingly, Figure 4 shows the gradient fields of \(u_{0}\) and \(w_{0}\), whose directions are very similar. On the other hand, we can apply (3.2) to determine whether \(u_{8}\) approximates the solution of Problem II, mainly by comparing the gradient field of \(w_{0}\) with the vector field \(\vert \nabla u_{8} \vert ^{p-2}\nabla u_{8}\), depicted in Figure 5. As seen in Figure 5, the directions and lengths of the arrows in the left- and right-hand side figures are almost identical, so \(u_{8}\) can be regarded as an approximate solution of Problem II.
Conclusions
At the beginning of this paper, the classical Newton algorithm for the p-Laplace equation is presented. However, the convergence and convergence rate of the Newton iterations depend heavily on the selection of the initial iterative function and the iterative step length.
In order to find a suitable initial function, a well-posedness condition on the iterative function is put forward, which guarantees the absence of singularity in the iterations. The well-posedness theorem ensures that well-posed iterative functions exist, but it does not guarantee that all subsequent iterates with step length 1 remain well posed: even after several well-posed iterates, a subsequent iterate may fail to be well posed, so the iterates may alternate between well-posed and non-well-posed states. Upon further study, this is attributed to the absence of a nontrivial lower bound of the gradient modulus. In view of this analysis, we select the particular initial iterative function (5.3). Under the Newton iteration with step length 1, this particular initial function yields a particular sequence of iterative functions with the decreasing property of the gradient modulus on each subdivision element, and in Section 4 this sequence is proved to converge to the solution of the finite element formulation of the p-Laplace equation. Moreover, the local convergence rate analysis shows that the convergence is very fast, which is further validated by the numerical experiments in Section 6.
On the other hand, by Remark 4 in Section 4.1, it is important that the direction of the gradient of the iterative functions be consistent with that of the solution of Problem IV. According to the study in Section 5, the initial function and the corresponding iterative functions satisfy this consistency, so that a better convergence effect is achieved, as verified by the numerical experiments in Section 6. Evidently, for the iterative method based on the particular initial function, it is not necessary to change the iterative step length. In summary, the method not only makes up for the shortcoming of the classical Newton algorithm, which requires an exploratory reduction in the iterative step length, but also retains the benefit of a fast convergence rate.
References
Brezis, H: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, New York (2010)
Fusco, G, Hale, JK: Slow-motion manifolds, dormant instability, and singular perturbations. J. Dyn. Differ. Equ. 1, 75-94 (1989)
Alikakos, N, Bates, PW, Fusco, G: Slow motion for the Cahn-Hilliard equation in one space dimension. J. Differ. Equ. 90, 81-135 (1991)
Oruganti, S, Shi, J, Shivaji, R: Diffusive logistic equation with constant yield harvesting. I. Steady states. Trans. Am. Math. Soc. 354, 3601-3619 (2002)
Atkinson, C, Jones, CW: Similarity solutions in some nonlinear diffusion problems and in boundary-layer flow of a pseudoplastic fluid. Q. J. Mech. Appl. Math. 27, 193-211 (1974)
Zhang, Y, Pu, Y, Zhou, J: Two new nonlinear PDE image inpainting models. Comput. Sci. Environ. Eng. Ecoinformatics 158(5), 341-347 (2011)
Carstensen, C, Liu, W, Yan, N: A posteriori error estimates for finite element approximation of parabolic p-Laplacian. SIAM J. Numer. Anal. 43(6), 2294-2319 (2006)
Liu, W, Yan, N: Some a posteriori error estimators for p-Laplacian based on residual estimation or gradient recovery. J. Sci. Comput. 16(4), 435-477 (2001)
Carstensen, C, Klose, R: A posteriori finite element error control for the p-Laplace problem. SIAM J. Sci. Comput. 25(3), 792-814 (2003)
Carstensen, C, Liu, W, Yan, N: A posteriori FE error control for p-Laplacian by gradient recovery in quasi-norm. Math. Comput. 75(256), 1599-1616 (2006)
Böhmer, K: Numerical Methods for Nonlinear Elliptic Differential Equations: A Synopsis. Oxford University Press, Oxford (2010)
Bermejo, R, Infante, JA: A multigrid algorithm for the p-Laplacian. SIAM J. Sci. Comput. 21(5), 1774-1789 (2000)
Ciarlet, PG: The Finite Element Method for Elliptic Problems. North-Holland, Amsterdam (1978)
Luo, ZD: Mixed Finite Element Methods and Applications. Science Press, Beijing (2006) (in Chinese)
Yuan, YX: Nonlinear Optimization Method. Science Press, Beijing (2008)
Hackbusch, W: Elliptic Differential Equations - Theory and Numerical Treatment. Springer, Berlin (2003)
Acknowledgements
This research was supported by the National Natural Science Foundation of China grants 11271127 and 11671106.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Luo, Z., Teng, F. An effective finite element Newton method for 2D p-Laplace equation with particular initial iterative function. J Inequal Appl 2016, 281 (2016). https://doi.org/10.1186/s13660-016-1223-9
MSC
- 65N30
- 35Q10
Keywords
- finite element formulation
- Newton method
- p-Laplace equation
- well-posed condition
- initial iterative function