- Research
- Open Access
An effective finite element Newton method for 2D p-Laplace equation with particular initial iterative function
- Zhendong Luo^{1} and
- Fei Teng^{2}
https://doi.org/10.1186/s13660-016-1223-9
© Luo and Teng 2016
Received: 5 October 2016
Accepted: 27 October 2016
Published: 11 November 2016
Abstract
In this article, a functional minimum problem equivalent to the p-Laplace equation is introduced, a finite element Newton iteration formula is established, and a well-posed condition satisfied by the iterative functions is provided. Based on this well-posed condition, an effective initial iterative function is presented. Using this particular initial function and Newton iterations with iterative step length equal to 1, an effective sequence of iterative functions is obtained. By means of the decreasing property of the gradient modulus on each subdivision finite element, it is proved that this function sequence converges to the solution of the finite element formulation of the p-Laplace equation. Moreover, the local convergence rate of the iterative functions is discussed. In summary, the iterative method based on the effective particular initial function not only overcomes the shortcoming of the Newton algorithm, which requires an exploratory reduction of the iterative step length, but also retains the benefit of a fast convergence rate, as verified by theoretical analysis and numerical experiments.
Keywords
- finite element formulation
- Newton method
- p-Laplace equation
- well-posed condition
- initial iterative function
MSC
- 65N30
- 35Q10
1 Introduction
Let \(\Omega\subset R^{2}\) be a bounded and connected domain. Consider the following p-Laplace equation with Dirichlet boundary conditions.
Problem I
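The statement of Problem I is not reproduced in this excerpt; for orientation, the p-Laplace Dirichlet problem classically takes the following standard form (with f a given source term and \(p>1\)):

```latex
\text{Find } u \text{ such that}\quad
\begin{cases}
-\operatorname{div}\bigl(\vert\nabla u\vert^{p-2}\nabla u\bigr) = f & \text{in } \Omega,\\
u = 0 & \text{on } \partial\Omega.
\end{cases}
```

For \(p=2\) this reduces to the ordinary Poisson problem; the nonlinearity enters through the gradient-dependent coefficient \(\vert\nabla u\vert^{p-2}\).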
The p-Laplace equation is not only a tool for studying the special theory of Sobolev spaces [1], but is also an important mathematical model for many physical processes and other applied sciences; for example, it can describe a variety of nonlinear media such as phase transitions of water and ice at the transition temperature [2], elasticity [3], population models [4], non-Newtonian fluid flow in the boundary layer [5], and digital image processing [6]. However, since the equation contains a very strong nonlinearity, numerical methods are an important approach to solving it. A finite element method combined with a Newton iteration scheme is one of the most efficient numerical methods. Some a posteriori error estimates for the finite element approximation of the p-Laplace equation were developed by Carstensen et al. [7, 8]. Carstensen [9, 10] applied these a posteriori error estimates to a control method for solving the equation. The control method is based on the Newton iteration. However, the Newton iteration for the p-Laplace equation is not discussed in detail in their study; it depends heavily on the selection of the initial iterative function and also requires an exploratory reduction of the iterative step length (the default step length is 1; see [11]). Therefore, it is necessary to study how to select a suitable initial function. On the other hand, although the Newton algorithm has the advantage of a very fast convergence rate near the solution (see [11]), many factors (for instance, ill-posedness) affect the convergence of Newton iterations for the p-Laplace equation. Bermejo and Infante [12] applied Polak-Ribière iterations instead of Newton iterations in their multigrid algorithm for the p-Laplace equation because of computational difficulties related to the ill-posed coefficient matrix. In order to overcome this ill-posedness, it is necessary to develop a well-posed condition for the iterative functions.
To the best of our knowledge, a well-posed condition for the iterative functions of finite element Newton iterations for the p-Laplace equation has not been provided so far. Therefore, in this paper, we aim mainly to establish such a well-posed condition and to provide a theoretical analysis. To this end, we transform the p-Laplace equation into a functional minimum problem (see Section 2 in [12]) solved by Newton iterations. According to the well-posed condition, an effective particular initial function is selected, and an effective sequence of iterative functions is constructed. In addition, utilizing the gradient modulus and gradient direction on each element, we discuss the factors affecting the convergence of Newton iterations.
Definition 1
Definition 2
The paper is organized as follows. In Section 2, a functional minimum problem equivalent to the p-Laplace equation is introduced, a finite element Newton iteration formula is established, and the classical Newton algorithm is presented. In Section 3, we discuss the well-posed condition for iterative functions. Even if the initial function fails to satisfy the well-posed condition, well-posed iterative functions can always be obtained after a sufficient number of Newton iterations with the default step length, as shown in the well-posed theorem of Section 3. However, the number of iterations required is often large, so an effective particular initial iterative function satisfying the well-posed condition should be selected in order to reduce the number of iterations greatly; this is the subject of Section 5. In Section 4, an effective particular sequence of iterative functions is provided. These functions have the property that the gradient moduli on each subdivision finite element decrease monotonically and are bounded below. Using these properties, we prove that this sequence converges to the solution of the finite element formulation of the p-Laplace equation and present some results on its convergence speed. In Section 5, considering the well-posed condition and the properties mentioned above, we select an effective particular initial iterative function, which yields the special iterative functions of Section 4 through finite element Newton iterations with the default step length. In Section 6, some numerical experiments are provided to show that the results on the convergence rate and the gradient fields are consistent with the theoretical conclusions.
2 A Newton algorithm for the p-Laplace equation
The variational formulation of Problem I is as follows.
Problem II
As far as we know, Problem II and the corresponding minimum problem have the same unique solution (see [13]).
Problem III
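The statement of Problem III is likewise not reproduced in this excerpt; the minimum problem associated with the p-Laplace equation is classically the minimization of the energy functional (a standard form, stated here for orientation only):

```latex
\min_{v} J(v), \qquad
J(v) = \frac{1}{p}\int_{\Omega} \vert\nabla v\vert^{p} \,\mathrm{d}x\,\mathrm{d}y
     - \int_{\Omega} f v \,\mathrm{d}x\,\mathrm{d}y,
```

whose Euler-Lagrange equation is the weak form of the p-Laplace equation, so that Problem II and the minimum problem share the same unique solution.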
Now, we recall the classical Newton algorithm.
Step 1. Set \(k=0\) and the termination condition \(\varepsilon>0\), select an initial iterative function \(u_{0}\in H_{h}\), and compute \(J(u_{0})\).
Step 2. Iterative formula: apply equation (2.4) to find \(\delta u_{k}\in H_{h}\) and set the iterative step length \(\alpha:=1\).
Step 3. Set \(u_{k+1}:=u_{k}+\alpha\delta u_{k}\) (\(k=0,1,2,\ldots\)).
Step 4. If \(J(u_{k+1})< J(u_{k})\), then go to Step 5. Otherwise, set \(\alpha:=\alpha/2\) and go to Step 3.
Step 5. If \([ \sum_{i=1}^{M} ( -\int_{\Omega }\vert \nabla u_{k+1} \vert ^{p-2}\nabla u_{k+1}\cdot\nabla\lambda _{i}{\,\mathrm{d}}x{\,\mathrm{d}}y+\int_{\Omega} f\lambda_{i}{\,\mathrm{d}}x{\,\mathrm{d}}y) ^{2} ] ^{1/2}<\varepsilon\), the algorithm terminates, and output \(u_{k+1}\). Otherwise, go to Step 2.
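The five steps above can be sketched in code. The following is not the authors' 2D finite element implementation; it is a minimal 1D analogue (P1 elements on a uniform grid for \(-(|u'|^{p-2}u')' = f\), \(u(0)=u(1)=0\)) written for illustration, using the linear \(p=2\) solution as the initial guess and the step-halving rule of Step 4:

```python
import numpy as np

def p_laplace_newton_1d(p=3.0, n=64, tol=1e-10, max_iter=50):
    """Newton iteration with step halving for the 1D p-Laplace problem
    -(|u'|^{p-2} u')' = f on (0,1), u(0) = u(1) = 0, P1 elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.ones(n - 1)                           # source term at interior nodes

    def energy(u):                               # J(u) = (1/p)||u'||_p^p - (f, u)
        s = np.diff(u) / h                       # constant gradient per element
        return h / p * np.sum(np.abs(s) ** p) - h * f @ u[1:-1]

    def gradient(u):                             # residual of the weak form
        s = np.diff(u) / h
        phi = np.abs(s) ** (p - 2) * s           # nonlinear flux per element
        return phi[:-1] - phi[1:] - h * f

    def hessian(u):                              # tridiagonal Jacobian of gradient
        s = np.diff(u) / h
        w = (p - 1) * np.abs(s) ** (p - 2) / h
        H = np.diag(w[:-1] + w[1:])
        H -= np.diag(w[1:-1], 1) + np.diag(w[1:-1], -1)
        return H

    # Step 1: initial guess from the linear (p = 2) problem -u'' = f
    u = np.zeros(n + 1)
    H2 = (2.0 / h) * np.eye(n - 1) - (1.0 / h) * (np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))
    u[1:-1] = np.linalg.solve(H2, h * f)

    for k in range(max_iter):
        g = gradient(u)
        if np.linalg.norm(g) < tol:              # Step 5: termination test
            break
        du = np.linalg.solve(hessian(u), -g)     # Step 2: Newton correction
        alpha, J0 = 1.0, energy(u)
        while True:                              # Steps 3-4: halve until descent
            u_new = u.copy()
            u_new[1:-1] += alpha * du
            if energy(u_new) < J0 or alpha < 1e-12:
                break
            alpha /= 2.0
        u = u_new
    return x, u, k
```

With a well-chosen initial function, as Section 5 argues, the inner `while` loop accepts `alpha = 1` immediately and the algorithm reduces to pure Newton iteration.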
Remark 1
As will be seen in Section 5, the convergence of the Newton iterative functions is very fast once these functions are near the solution to Problem II. However, the overall convergence speed relies heavily on the selection of the initial iterative function, so it is important to find a good initial function. On the other hand, in order to ensure descent, an exploratory reduction of the iterative step length is necessary; see Step 4 of the Newton algorithm. Several attempts to shorten the step length can slow down the iteration considerably. Therefore, we give some improvements and modifications in the following sections to achieve a better convergence behavior.
3 Well-posed condition and well-posed theorem of iterative function
3.1 Well-posed condition
Problem IV
Since \(H_{h}\) consists of piecewise continuous functions of degree 1, for each triangular element \(e\in S_{h}\) and all \((x,y)\in e\), \(\vert \nabla w_{0}(x,y) \vert \) is independent of x and y. If f vanishes at a point of the domain Ω, then \(\vert \nabla w_{0} \vert \) is correspondingly very small in a small neighborhood of that point. Since the domain is partitioned into small triangular elements, there exist elements on which \(\vert \nabla w_{0}(e) \vert \) is less than 1 and often far less than 1. As will be seen below, these triangular elements determine the convergence behavior of the Newton iterations.
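The fact that a degree-1 function has a constant gradient on each triangle can be seen concretely: the gradient is recovered by solving a 2x2 linear system built from the vertex coordinates and nodal values. A small illustrative sketch (the function name `p1_gradient` is ours, not the paper's):

```python
import numpy as np

def p1_gradient(verts, vals):
    """Constant gradient of a degree-1 (P1) function on a triangle.

    verts: (3, 2) array of vertex coordinates; vals: the 3 nodal values.
    A linear u(x, y) = a + b*x + c*y satisfies two difference equations
    along the triangle edges, which determine (b, c) = grad u uniquely."""
    A = np.array([verts[1] - verts[0], verts[2] - verts[0]], dtype=float)
    b = np.array([vals[1] - vals[0], vals[2] - vals[0]], dtype=float)
    return np.linalg.solve(A, b)   # [du/dx, du/dy], independent of (x, y)
```

For example, sampling \(u(x,y)=2x+3y\) at the vertices of any nondegenerate triangle returns the gradient \((2,3)\), independently of where the triangle lies; this is why \(\vert\nabla w_{0}(e)\vert\) is a single number per element.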
Remark 2
The function \(w_{0}\) introduced above plays a very important role in the next subsection. Moreover, the elements on which \(\vert \nabla w_{0}(e) \vert \) is far less than 1 can be vital for the convergence behavior of the Newton iterations. For a better convergence behavior, the initial iterative function needs to be selected so as to satisfy the well-posed condition, which is discussed in detail in Section 5.
3.2 Well-posed theorem of Newton iteration
Although an iterative function may fail to meet the well-posed condition, the Newton iteration always produces some iterative function satisfying the condition, as the following theorem says.
Theorem 1
Well-posed theorem
Proof
Corollary 1
Remark 3
The well-posed theorem shows that even though some iterative functions have poor properties, well-posed iterative functions are always obtained by Newton iterations. However, the geometric reduction often requires many iterations, as validated by the numerical experiments in Section 6. Besides, at the beginning of the iterations, the value of \(G(u_{k})\) is quite large, which easily leads to data overflow. On the other hand, the well-posed theorem does not tell us whether all subsequent iterative functions produced by Newton iterations with the default step length are well posed. Therefore, it is necessary to find a better, well-posed initial function.
4 Convergence and its rate of an effective particular iterative function sequence
4.1 Decreasing conditions of gradient modulus
Lemma 1
Proof
Remark 4
Lemma 1 indicates that in order to make the gradient modulus of the next iterative function \(u_{k+1}\) less than that of \(u_{k}\), the projection of \(\nabla w_{0}\) onto the orthogonal complement of \(\nabla u_{k}\) needs to be small enough. Furthermore, this ensures a small projection of \(\nabla\delta u_{k}\) onto the orthogonal complement of \(\nabla w_{0}\), so that the direction of the gradient field of \(u_{k+1}\) is almost consistent with that of \(w_{0}\). This consistency is very important since the direction of the gradient of \(u^{*}\) is the same as that of \(w_{0}\), which is the result of (3.2). As will be seen in the next subsection, the decrease of the gradient modulus on an element \(e\in S_{h}\) is an important precondition for the convergence of the iterative functions.
4.2 Convergence analysis
In order to derive the convergence of the effective particular iterative functions, we first introduce the following Lemma 2 corresponding to Lemma 1.
Lemma 2
Proof
The following theorem can be derived from this compact embedding result.
Theorem 2
Proof
Though \(u_{k}\) converges to ū by Theorem 2, it is still not clear whether or not ū represents the finite element solution of Problem II. This question will be answered by Theorem 3. To this end, it is necessary to introduce the following lemma.
Lemma 3
Theorem 3
Proof
4.3 Convergence rate of iterative functions near the solution
It is well known that Newton's method for algebraic equations has a local quadratic convergence rate (see [15]). Among the usual optimization algorithms, such as the steepest descent method and the conjugate gradient method, the convergence rate of Newton iterations near the solution is the fastest (see [15]). Whether this also holds for the p-Laplace equation is studied in this subsection.
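Local quadratic convergence means that near the solution the error is roughly squared at every step, so the ratio \(e_{k+1}/e_{k}^{2}\) stays bounded. A quick scalar illustration (Newton for \(x^{2}-2=0\), where this ratio tends to \(1/(2\sqrt{2})\approx 0.354\)):

```python
# Newton iteration for x^2 - 2 = 0: the error is roughly squared per step.
x, errs = 2.0, []
for _ in range(6):
    errs.append(abs(x - 2 ** 0.5))        # current error |x_k - sqrt(2)|
    x = x - (x * x - 2.0) / (2.0 * x)     # Newton update x - f(x)/f'(x)

# e_{k+1} / e_k^2 stays bounded: the signature of quadratic convergence.
ratios = [errs[i + 1] / errs[i] ** 2 for i in range(4)]
```

Within a handful of steps the error drops below \(10^{-10}\), while the ratios stay near 0.35 rather than growing; a merely linear method would instead show a constant ratio \(e_{k+1}/e_{k}\), as in the table of Section 6.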
Theorem 4
Local convergence rate theorem
Proof
Corollary 2
Remark 6
Remark 7
Corollary 2 is a result on the stronger convergence of the gradient, compared with (4.21) in Theorem 3. However, by the description in (4.33), it differs from strong convergence in \(H_{0}^{1}(\Omega)\) or \(W_{0}^{1,p}(\Omega)\): it is related to the value of \(\vert \nabla u_{k} \vert ^{p-2}\) on each element \(e\in S_{h}\) and is consistent with the statement of (4.14) in Lemma 2.
5 An effective particular initial iterative function
Problem V
Lemma 4
Proof
Theorem 5
Let the initial function \(u_{0}\) be taken as in (5.3), and let \(u_{k}\in H_{h}\) be the iterative function produced by Newton formula (2.4) with step length \(\alpha =1\). Then \(u_{k}\) converges to the finite element solution of Problem II.
Remark 8
As described before, in order to achieve the descent effect, the classical Newton algorithm needs several attempts to shorten the iterative step length, whereas the particular initial function introduced in this section allows the step length to remain equal to 1. Besides, it is proved that the iterative functions based on this initial function converge to the solution of the p-Laplace equation, and the convergence rate theorem also holds. On the other hand, owing to the decreasing properties and the existence of a nontrivial lower bound, these iterative functions are all well posed, whereas the well-posed theorem in Section 3.2 cannot guarantee that the subsequent iterative functions are always well posed. In fact, this is attributed to the absence of a nontrivial lower bound of the gradient modulus in the well-posed theorem.
6 Some numerical experiments
The data record 29 iterations with \(w_{0}\) as the initial function and step length equal to 1, where k is the iteration number, G is defined by (3.9), and the rate θ denotes \(G(u_{k})/G(u_{k-1})\)
k | G | θ | k | G | θ | k | G | θ |
---|---|---|---|---|---|---|---|---|
0 | 0.0337 | - | 10 | 2.5505e + 08 | 0.3164 | 20 | 2.5649e + 03 | 0.3164 |
1 | 8.0246e + 12 | - | 11 | 8.0700e + 07 | 0.3164 | 21 | 811.5647 | 0.3164 |
2 | 2.5390e + 12 | 0.3164 | 12 | 2.5534e + 07 | 0.3164 | 22 | 256.7842 | 0.3164 |
3 | 8.0336e + 11 | 0.3164 | 13 | 8.0791e + 06 | 0.3164 | 23 | 81.2481 | 0.3164 |
4 | 2.5419e + 11 | 0.3164 | 14 | 2.5563e + 06 | 0.3164 | 24 | 25.7074 | 0.3164 |
5 | 8.0427e + 10 | 0.3164 | 15 | 8.0882e + 05 | 0.3164 | 25 | 8.1338 | 0.3164 |
6 | 2.5448e + 10 | 0.3164 | 16 | 2.5592e + 05 | 0.3164 | 26 | 2.5733 | 0.3164 |
7 | 8.0518e + 09 | 0.3164 | 17 | 8.0974e + 04 | 0.3164 | 27 | 0.8137 | 0.3164 |
8 | 2.5476e + 09 | 0.3164 | 18 | 2.5621e + 04 | 0.3164 | 28 | 0.2569 | 0.3164 |
9 | 8.0609e + 08 | 0.3164 | 19 | 8.1065e + 03 | 0.3164 | 29 | 0.0808 | 0.3164 |
From the 27th iteration on, the value of θ decreases slightly. This slight change shows that there are still some regions whose gradients have a more significant impact on the overall convergence than those of other regions.
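As a quick consistency check, the geometric rate can be recomputed directly from the first few G values transcribed from the table above:

```python
# G(u_k) values for k = 1..5 from the table of Section 6.
G = [8.0246e12, 2.5390e12, 8.0336e11, 2.5419e11, 8.0427e10]

# Successive ratios G(u_k)/G(u_{k-1}) reproduce the tabulated theta.
theta = [G[i + 1] / G[i] for i in range(len(G) - 1)]
```

Each ratio agrees with the tabulated value θ ≈ 0.3164 to four digits, confirming that the reduction of G is geometric with an essentially constant rate over these iterations.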
7 Conclusions
At the beginning of this paper, the classical Newton algorithm for the p-Laplace equation is presented. However, the convergence and convergence rate of the Newton iterations depend heavily on the selection of the initial iterative function and of the iterative step length.
In order to find a suitable initial function, the well-posed condition for iterative functions is put forward, which guarantees the absence of singularity in the iterations. Furthermore, by the well-posed theorem, well-posed iterative functions always exist; however, the theorem does not state that all subsequent iterative functions with step length 1 are well posed, which means that, despite preceding well-posed functions, a subsequent function may fail to be well posed. This may cause the iterative functions to switch back and forth between well-posed and non-well-posed states, which is attributed to the absence of a nontrivial lower bound of the gradient modulus. Considering this analysis, we select the particular initial iterative function (5.3). Through Newton iterations with step length 1, this particular initial function yields a particular sequence of iterative functions whose gradient moduli decrease on each subdivision element, and it is proved in Section 4 that this sequence converges to the solution of the finite element formulation of the p-Laplace equation. Moreover, the local convergence rate theorem shows that the convergence rate is very fast, which is further validated by the numerical experiments in Section 6.
On the other hand, by Remark 4 in Section 4.1, it is important to make the direction of the gradient of the iterative functions consistent with that of the solution of Problem IV. Indeed, according to the study in Section 5, the initial function and the corresponding iterative functions satisfy this consistency, so that a better convergence behavior is achieved, as verified by the numerical experiments in Section 6. Evidently, for the iterative method based on the particular initial function, it is not necessary to change the iterative step length. In summary, the method not only overcomes the shortcoming of the classical Newton algorithm, which requires an exploratory reduction of the iterative step length, but also retains the benefit of a fast convergence rate.
Declarations
Acknowledgements
This research was supported by the National Natural Science Foundation of China grants 11271127 and 11671106.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Brezis, H: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, New York (2010)
- Fusco, G, Hale, JK: Slow-motion manifolds, dormant instability, and singular perturbations. J. Dyn. Differ. Equ. 1, 75-94 (1989)
- Alikakos, N, Bates, PW, Fusco, G: Slow motion for the Cahn-Hilliard equation in one space dimension. J. Differ. Equ. 90, 81-135 (1991)
- Oruganti, S, Shi, J, Shivaji, R: Diffusive logistic equation with constant yield harvesting. I. Steady states. Trans. Am. Math. Soc. 354, 3601-3619 (2002)
- Atkinson, C, Jones, CW: Similarity solutions in some nonlinear diffusion problems and in boundary-layer flow of a pseudoplastic fluid. Q. J. Mech. Appl. Math. 27, 193-211 (1974)
- Zhang, Y, Pu, Y, Zhou, J: Two new nonlinear PDE image inpainting models. Comput. Sci. Environ. Eng. Ecoinformatics 158(5), 341-347 (2011)
- Carstensen, C, Liu, W, Yan, N: A posteriori error estimates for finite element approximation of parabolic p-Laplacian. SIAM J. Numer. Anal. 43(6), 2294-2319 (2006)
- Liu, W, Yan, N: Some a posteriori error estimators for p-Laplacian based on residual estimation or gradient recovery. J. Sci. Comput. 16(4), 435-477 (2001)
- Carstensen, C, Klose, R: A posteriori finite element error control for the p-Laplace problem. SIAM J. Sci. Comput. 25(3), 792-814 (2003)
- Carstensen, C, Liu, W, Yan, N: A posteriori FE error control for p-Laplacian by gradient recovery in quasi-norm. Math. Comput. 75(256), 1599-1616 (2006)
- Böhmer, K: Numerical Methods for Nonlinear Elliptic Differential Equations: A Synopsis. Oxford University Press, Oxford (2010)
- Bermejo, R, Infante, JA: A multigrid algorithm for the p-Laplacian. SIAM J. Sci. Comput. 21(5), 1774-1789 (2000)
- Ciarlet, PG: The Finite Element Method for Elliptic Problems. North-Holland, Amsterdam (1978)
- Luo, ZD: Mixed Finite Element Methods and Applications. Science Press, Beijing (2006) (in Chinese)
- Yuan, YX: Nonlinear Optimization Method. Science Press, Beijing (2008)
- Hackbusch, W: Elliptic Differential Equations: Theory and Numerical Treatment. Springer, Berlin (2003)