Quadratic convergence of monotone iterates for semilinear elliptic obstacle problems
Journal of Inequalities and Applications, volume 2017, Article number: 238 (2017)
Abstract
In this paper, we consider the numerical solution for the discretization of semilinear elliptic complementarity problems. A monotone algorithm is established based on the upper and lower solutions of the problem. It is proved that the iterates generated by the algorithm form a pair of upper and lower solution sequences which converge monotonically from above and below, respectively, to the solution of the problem. Moreover, we investigate the convergence rate of the monotone algorithm and prove quadratic convergence of the algorithm. The monotone and quadratic convergence results are also extended to the discrete problems of two-sided obstacle problems with a semilinear elliptic operator. We also present some simple numerical experiments.
1 Introduction
In this paper, we consider the following semilinear elliptic complementarity problem of finding \({\mathcal {U}}\in{\mathcal {K}}=\{{\mathcal {V}}\in H_{0}^{1}(\Omega):{\mathcal {V}}\ge\varphi \mbox{ a.e. in } \Omega\}\) such that
where \(\Omega\subset R^{2}\) is a bounded convex polygonal domain with boundary ∂Ω, \({\mathcal {F}}({\mathcal {V}},x)\) is continuously differentiable in the variable \({\mathcal {V}}\) with \(\frac{\partial{\mathcal {F}}}{\partial{\mathcal {V}}}\geq C^{*}\ge0\) on \({\mathcal {K}}\times\bar{\Omega}\), \(\varphi\in H^{2}(\Omega)\) with \(\varphi|_{\partial\Omega}\leq0\), and
Here, \(\vec{\beta}=(\beta_{1},\beta_{2})\in(L^{\infty}(\Omega))^{2}\), \(\gamma \in L^{\infty}(\Omega)\), and \(\gamma\ge0\) on Ω.
Problem (1.1) arises in many scientific, engineering, or economic applications, e.g., in diffusion problems involving Michaelis-Menten or second-order irreversible reactions [1-5].
To solve problem (1.1), we generally apply a finite difference or finite element approximation to obtain a discrete problem. If we use a standard finite difference or lumped finite element approximation with a Delaunay triangulation (see, e.g., [6-8]), the discrete problem is a finite-dimensional nonlinear complementarity problem of finding \({ u}\in K=\{{ v}\in R^{n}:{ v}\ge\phi\}\) such that
where \(A=(a_{ij})\in R^{n\times n}\) is an M-matrix and \({f}:R^{n}\rightarrow R^{n}\) is a diagonal, nondecreasing mapping. In other words, matrix A has a nonnegative inverse \(A^{-1}\) as well as nonpositive off-diagonal entries, and the mapping f has the form \(f(v)=(f_{i}(v_{i}))\in R^{n}\) with
In this paper, we also assume that A is a positive definite matrix. That is, there exists a positive constant σ such that, for any vector \(v\in R^{n}\),
These conditions are satisfied for suitably small mesh sizes; see, e.g., [6, 7, 9].
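To make the assumptions on A concrete, the following sketch builds a standard example of such a matrix, the 5-point finite-difference Laplacian on the unit square (a common discretization of (1.1); the specific stencil and grid size here are our own illustrative choices, not taken from the paper), and checks the M-matrix and positive definiteness properties numerically.

```python
import numpy as np

def laplacian_2d(m):
    """5-point finite-difference Laplacian (scaled by 1/h^2) on the
    m-by-m interior grid of the unit square, with h = 1/(m+1)."""
    h = 1.0 / (m + 1)
    A = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            k = i * m + j
            A[k, k] = 4.0 / h**2
            if i > 0:
                A[k, k - m] = -1.0 / h**2
            if i < m - 1:
                A[k, k + m] = -1.0 / h**2
            if j > 0:
                A[k, k - 1] = -1.0 / h**2
            if j < m - 1:
                A[k, k + 1] = -1.0 / h**2
    return A

A = laplacian_2d(4)
# M-matrix properties: nonpositive off-diagonal entries and A^{-1} >= 0
off_diag = A - np.diag(np.diag(A))
assert np.all(off_diag <= 0)
assert np.all(np.linalg.inv(A) >= -1e-12)
# positive definiteness: the smallest eigenvalue sigma is positive
sigma = np.linalg.eigvalsh(A).min()
assert sigma > 0
```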
Numerical algorithms for solving problem (1.2) have been developing rapidly. Among them is a class of monotone algorithms based on upper or lower solutions of the problem; we refer to [7, 8, 10, 11] and the references therein for details. These algorithms can be regarded as extensions of the monotone algorithms for solving elliptic boundary value problems or their discretizations (see, e.g., [6, 12-14]). In these algorithms, the generated iterates form an upper (or lower) solution sequence which converges monotonically to the solution. In this paper, we extend the monotone iterative approach for elliptic boundary value equations, presented in [6], to complementarity problems as well as two-sided obstacle problems. By using a pair of upper and lower solutions as two initial iterates, one can construct two monotone sequences which converge monotonically from above and below, respectively, to the solutions of the problems. In particular, the initial iterates of the monotone algorithms can be obtained directly by solving two discrete linear complementarity problems, without any knowledge of the exact solution. Quadratic convergence is also proved for the algorithms.
The structure of the paper is as follows. In Section 2, we provide two procedures in which only a pair of linear complementarity problems need to be solved; following these procedures, we can obtain a pair of upper and lower solutions of the nonlinear complementarity problem under consideration. In Sections 3 and 4, we propose a monotone iterative algorithm and establish the quadratic convergence of the monotone iterates, respectively. In Section 5, we extend the results obtained in Sections 2-4 to the two-sided obstacle problem. In Section 6, we present some simple numerical experiments.
2 Upper and lower solutions and their initializations
The approach presented in this paper is based on the upper and lower solutions of the problems. In this section, we introduce the definitions of upper and lower solutions of problem (1.2) and discuss their properties.
Nonlinear complementarity problem (1.2) is equivalent to the following system of nonsmooth equations:
According to (2.1), let
and
Then \(S_{1}\) and \(S_{-1}\) are called the upper and lower solution sets of problem (1.2), respectively. Moreover, any element of \(S_{1}\) (\(S_{-1}\)) is called an upper (lower) solution of problem (1.2). It is well known that the solution of the problem is the minimal (maximal) element of the upper (lower) solution set (see, e.g., [9, 15, 16]).
It is obvious that \(v\in S_{1}\) is equivalent to
that is, \(v\in K\) and \(Av+f(v)\geq0\). On the other hand, \(v\in S_{-1}\) is equivalent to \((Av+f(v))_{i}\leq0\) for each index i satisfying \(v_{i}>\phi_{i}\). Therefore, a lower solution v may not belong to the feasible set K. For instance, any v satisfying \(Av+f(v)\le0\) is in \(S_{-1}\). Let
That is, \(\tilde{S}_{-1}\) consists of the lower solutions in K. Since the solution of problem (1.2) is in K, in the sequel we consider lower solutions in \({\tilde{S}}_{-1}\). Obviously, \(\phi\) is one such lower solution.
Lemma 2.1
For any \(w\in R^{n}\), let \(\bar{u}\) be a solution of the following linear complementarity problem (LCP) of finding \(\bar{u}\in K\) such that
where \(\Lambda^{*}=\operatorname{diag}(c^{*}_{1},c^{*}_{2},\ldots,c^{*}_{n})\in R^{n\times n}\) is a nonnegative diagonal matrix and
Then \(\bar{u}\in S_{1}\).
Proof
Assuming that \(\bar{u}\) satisfies (2.2) and \(\vert Aw+f(w) \vert +Aw+f(w)\ge0\),
where \(z=\bar{u}-w\). It follows from (2.2) again that
Since A is an M-matrix, so is \(A+\Lambda^{*}\) (see, e.g., [13, 14]). That is, \((A+\Lambda^{*})^{-1}\geq0\). This together with (2.4) implies \(z\geq0\), and hence, by (1.3), \(\int_{0}^{1}[f'(w+tz)-\Lambda^{*}]z\,dt\ge0\). Therefore, by (2.3), we get
which together with \(\bar{u} \in K\) implies \(\bar{u}\in S_{1}\). □
For the lower solution, we have a similar result as follows.
Lemma 2.2
For any \(w\in R^{n}\), let \(\underline{u}\) be a solution of the following LCP of finding \(\underline{u}\in K\) such that
where
Then \(\underline{u}\in\tilde{S}_{-1}\).
Problems (2.2) and (2.5) can be regarded as lower and upper obstacle problems with variables \(\bar{u}\) and \(\underline{u}\), respectively. According to Lemmas 2.1 and 2.2, we can obtain a pair of upper and lower solutions of the nonlinear complementarity problem (1.2) by solving the two linear complementarity problems (2.2) and (2.5). For linear complementarity problems, many classic and efficient iterative or direct algorithms are available. We refer to [11, 15-17] for further discussion.
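One such classic iterative method for the lower-obstacle LCPs of this form is projected SOR (PSOR), which the numerical experiments in Section 6 also use for the subproblems. The following is a minimal sketch (the function name and the toy data are ours); it assumes the LCP is given in the form \(u\ge\phi\), \(Au\ge g\), \((u-\phi)^{T}(Au-g)=0\) with A an M-matrix.

```python
import numpy as np

def psor_lcp(A, g, phi, omega=1.5, tol=1e-10, max_iter=10000):
    """Projected SOR for the lower-obstacle LCP:
    find u >= phi with A u >= g and (u - phi)^T (A u - g) = 0."""
    u = np.array(phi, dtype=float).copy()
    n = len(g)
    for _ in range(max_iter):
        u_old = u.copy()
        for i in range(n):
            # Gauss-Seidel value for component i, then relax and project
            gs = (g[i] - A[i] @ u + A[i, i] * u[i]) / A[i, i]
            u[i] = max(phi[i], (1 - omega) * u[i] + omega * gs)
        if np.linalg.norm(u - u_old) < tol:
            break
    return u

# tiny example: A is an M-matrix and the unconstrained solution (1, 1)
# is feasible, so it is also the LCP solution
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
u = psor_lcp(A, g=np.array([1.0, 1.0]), phi=np.zeros(2))
```

The projection `max(phi[i], ...)` is what distinguishes PSOR from plain SOR: it keeps every sweep inside the feasible set K.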
3 Monotone iterative algorithm for complementarity problem
In this section, we propose an algorithm for solving the nonlinear complementarity problem (1.2) and discuss its monotone convergence. Firstly, we present the algorithm as follows.
Algorithm 3.1
Let the initial iterates \(u^{(0)}_{1}\in S_{1}\) and \(u^{(0)}_{-1}\in{\tilde{S}}_{-1}\) be given. For \(k\ge1\), we calculate a pair of iterates by solving the following linear complementarity problems of finding \(u_{\alpha}^{(k)}\in K\), \(\alpha=1,-1\), respectively, such that
where
\(A^{(k)}=A+\Lambda^{(k-1)}\), and \(\Lambda^{(k-1)}\) is a diagonal matrix with diagonal entries
Direct algorithms with polynomial computational complexity are available for subproblems (3.1). In particular, there are many polynomial algorithms for linear complementarity problems with an M-matrix. We refer to [15-19] for more details.
By Lemmas 2.1 and 2.2, for any \(w\in R^{n}\), \(\bar{u}\) and \(\underline{u}\), generated by (2.2) and (2.5), are in \(S_{1}\) and \({\tilde{S}}_{-1}\), respectively. So we may let the initial iterates in Algorithm 3.1 be obtained by solving (2.2) and (2.5). That is,
Remark 3.1
Noting that A is an M-matrix and \(\Lambda^{(k-1)}\) is a diagonal matrix with nonnegative diagonal entries, \(A^{(k)}\) is also an M-matrix. Therefore, similar to \(S_{1}\) and \({\tilde{S}}_{-1}\), we can define the upper and lower solution sets of problem (3.1), respectively, as follows:
Moreover, the solutions \(u_{\alpha}^{(k)}\) of problems (3.1), \(\alpha=1,-1\), are the minimal and maximal elements of \(S_{1}^{(k,\alpha)}\) and \({\tilde{S}}_{-1}^{(k,\alpha)}\), respectively.
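The outer loop of Algorithm 3.1 can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: since the displayed formulas for \(\Lambda^{(k-1)}\) and the right-hand side are not reproduced here, we assume the standard Newton-type choices, a diagonal \(\Lambda^{(k-1)}\) dominating f′ on the current bracket and right-hand side \(\Lambda^{(k-1)}w-f(w)\) at the previous iterate w; the inner LCP solver is a simple projected Gauss-Seidel.

```python
import numpy as np

def lcp_pgs(A, g, phi, sweeps=500):
    """Inner solver: projected Gauss-Seidel for the lower-obstacle LCP
    u >= phi, A u >= g, (u - phi)^T (A u - g) = 0."""
    u = np.array(phi, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(g)):
            u[i] = max(phi[i], (g[i] - A[i] @ u + A[i, i] * u[i]) / A[i, i])
    return u

def algorithm_3_1(A, f, fp, phi, u_up, u_lo, tol=1e-8, max_iter=50):
    """Sketch of the monotone iteration: refine an upper iterate u_up and
    a lower iterate u_lo until they (nearly) coincide.  Assumed choices:
    Lambda^(k-1) bounds f' componentwise on [u_lo, u_up], and the
    right-hand side is Lambda^(k-1) w - f(w) at the previous iterate."""
    for _ in range(max_iter):
        lam = np.maximum(fp(u_up), fp(u_lo))  # valid bound when f' is monotone
        A_k = A + np.diag(lam)
        u_up = lcp_pgs(A_k, lam * u_up - f(u_up), phi)
        u_lo = lcp_pgs(A_k, lam * u_lo - f(u_lo), phi)
        if np.linalg.norm(u_up - u_lo) < tol:
            break
    return u_up, u_lo

# toy data: A an M-matrix, f(u) = e^u - 2 nondecreasing; u = (2, 2) is an
# upper solution and u = phi = 0 a lower one
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
f = lambda u: np.exp(u) - 2.0
fp = lambda u: np.exp(u)
u_up, u_lo = algorithm_3_1(A, f, fp, np.zeros(2), np.full(2, 2.0), np.zeros(2))
```

On this toy problem both iterates are squeezed toward the root of \(t+e^{t}=2\) in each component, the upper one from above and the lower one from below, matching the monotone behavior established below.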
Lemma 3.1
Let \(u_{\alpha}^{(k)}\), \(\alpha=1,-1\), be the solutions of problems (3.1), respectively. If \(u^{(k-1)}_{1}\ge u^{(k-1)}_{-1}\), then \(u_{1}^{(k)}\geq u_{-1}^{(k)}\).
Proof
For any \(v\in R^{n}\), we have \(v=v^{+}+v^{-}\), where \(v^{+}=\max\{v,0\}\geq0\) and \(v^{-}=\min\{v,0\}\leq0\). Let \(z^{(k)}=u_{-1}^{(k)}-u_{1}^{(k)}\). It suffices to prove that \((z^{(k)})^{+}=0\). By (3.1) and \((z^{(k)})^{+}\ge0\), we have
If \(((z^{(k)})^{+})_{i}>0\), then \((u_{-1}^{(k)})_{i}>(u_{1}^{(k)})_{i}\ge\phi_{i}\). Thus, by (3.1), we have
Thereby,
This together with (3.6) gives
By the use of (3.2), we have then
Hence,
where
Since \(u^{(k-1)}_{1}\ge u^{(k-1)}_{-1}\), we then have
This together with (3.3) implies that the right-hand side of the equality in (3.7) is nonpositive, and then
which implies
where the last equality follows from \(((z^{(k)})^{+})_{i}((z^{(k)})^{-})_{i}=0\), and the last inequality follows from \(((z^{(k)})^{+})_{i}((z^{(k)})^{-})_{j}\leq0\) and \(a_{ij}\le0\) for \(i\neq j\). By the positive definiteness assumption on matrix A, we conclude \((z^{(k)})^{+}=0\), and the proof is complete. □
The following theorem gives the monotone convergence of Algorithm 3.1.
Theorem 3.1
Algorithm 3.1 is well defined, and the sequences \(\{ u_{\alpha}^{(k)}\}\), \(\alpha=1,-1\), generated by (3.1), converge monotonically to the unique solution u of problem (1.2):
Moreover, \(\{u^{(k)}_{1}\}\subset S_{1}\), \(\{u^{(k)}_{-1}\}\subset{\tilde{S}}_{-1}\).
Proof
It follows immediately from \(u_{-1}^{(0)}\in{\tilde{S}}_{-1}\) and \(u_{1}^{(0)}\in S_{1}\) that
Assume that \(u^{(k-1)}_{1}\in S_{1}\) and \(u^{(k-1)}_{-1}\in{\tilde{S}}_{-1}\). Then, by (3.1), (3.2) and (3.4), we have that
and, for indices i satisfying \((u_{-1}^{(k-1)})_{i}>\phi_{i}\),
That is, \(u^{(k-1)}_{1}\in S_{1}^{(k,1)}\) and \(u^{(k-1)}_{-1}\in{\tilde{S}}_{-1}^{(k,-1)}\), where \(S_{1}^{(k,1)}\) and \({\tilde{S}}_{-1}^{(k,-1)}\) are defined by (3.4) and (3.5), respectively. Therefore, by Remark 3.1, we have
Let \(z_{\alpha}^{(k)}=u_{\alpha}^{(k)}-u_{\alpha}^{(k-1)}\), \(\alpha=1,-1\). Then (3.9) implies
From (3.1), we have
where
By LemmaÂ 3.1 and (3.9), for \(t\in[0,1]\), we have
and
And then, by (3.3), we get
It then follows from (3.1) and (3.10)-(3.12) that
which implies \(u_{1}^{(k)}\in S_{1}\). On the other hand, if \((u_{-1}^{(k)})_{i}>\phi_{i}\), we have by (3.1) and (3.10)-(3.12) that
which implies \(u_{-1}^{(k)}\in{\tilde{S}}_{-1}\). By the principle of induction, we obtain (3.8) as well as \(\{u^{(k)}_{1}\}\subset S_{1}\) and \(\{u^{(k)}_{-1}\}\subset{\tilde{S}}_{-1}\). Furthermore, by (3.8), the sequences \(\{u^{(k)}_{1}\}\) and \(\{u^{(k)}_{-1}\}\) are monotone and bounded and therefore convergent. Let \(\lim u_{\alpha}^{(k)}=u_{\alpha}\), \(\alpha=1,-1\). By (3.1), we have \(u_{\alpha}\ge\phi\) and
which implies \(u_{1}=u_{-1}=u\) by (3.2). We then complete the proof. □
4 Quadratic convergence rate of the monotone algorithm
Introduce the notations
and
Here, we have assumed that f is twice continuously differentiable. The following theorem gives the quadratic convergence of Algorithm 3.1.
Theorem 4.1
Let the sequences \(\{u_{\alpha}^{(k)}\}\), \(\alpha=1,-1\), be generated by Algorithm 3.1. Then the following estimate holds:
where \(\Vert v \Vert =\sqrt{v^{T}v}\).
Proof
The linear complementarity problems (3.1) can be reformulated as the following variational inequality problems of finding \(u_{\alpha}^{(k)}\in K\), \(\alpha=1,-1\), respectively, such that
where \((v,w)=v^{T}w\). Letting \(v_{1}=u_{-1}^{(k)}\) and \(v_{-1}=u_{1}^{(k)}\) in the above variational forms, we get
and
Denoting \(u_{1}^{(k)}-u_{-1}^{(k)}\) by \(z^{(k)}\) and adding the above two inequalities, we obtain
Then, by (3.2), we have
That is,
Taking into account (3.3), we conclude that
where
and
Therefore, by (4.2),
This together with (1.4), (4.1) as well as (4.4) and (4.5) implies
That is, (4.3) holds. The proof is then completed. □
By estimate (4.3), quadratic convergence holds for the difference \(u_{1}^{(k)}-u_{-1}^{(k)}\) between the upper and lower iterates. This differs from the usual quadratic convergence estimate. Nevertheless, similar to the proof of Theorem 4.1, we can get
and
And then,
Noting that \(u_{-1}^{(k)}\leq u\leq u_{1}^{(k)}\), we can obtain the following standard estimates in a similar way.
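The step from the two-sided bound to standard one-sided error estimates is elementary: since \(u_{-1}^{(k)}\leq u\leq u_{1}^{(k)}\) holds componentwise,

\[
0 \le u_{1}^{(k)} - u \le u_{1}^{(k)} - u_{-1}^{(k)},
\qquad
0 \le u - u_{-1}^{(k)} \le u_{1}^{(k)} - u_{-1}^{(k)},
\]

and since \(0\le a\le b\) componentwise implies \(\Vert a\Vert\le\Vert b\Vert\) for the Euclidean norm,

\[
\max\bigl\{\bigl\Vert u_{1}^{(k)}-u\bigr\Vert ,\ \bigl\Vert u-u_{-1}^{(k)}\bigr\Vert \bigr\}
\le \bigl\Vert u_{1}^{(k)}-u_{-1}^{(k)}\bigr\Vert ,
\]

so the quadratic bound on the gap between the two iterates carries over to each one-sided error.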
Theorem 4.2
Let the sequences \(\{u_{\alpha}^{(k)}\}\), \(\alpha=1,-1\), be generated by (3.1). Then the following estimates hold:
if \(f''_{i}(v_{i})\geq0\) for \((u_{-1}^{(0)})_{i}\leq v_{i}\leq(u_{1}^{(0)})_{i}\), and
if \(f''_{i}(v_{i})\leq0\) for \((u_{-1}^{(0)})_{i}\leq v_{i}\leq(u_{1}^{(0)})_{i}\).
Estimates (4.6) and (4.7) indicate that when the \(f_{i}(v_{i})\) (\(i=1,2,\ldots,n\)) are convex (or concave) in a neighborhood of the solution of problem (1.2), the maximal (or minimal) sequence generated by (3.1) converges quadratically to the solution of the problem.
5 Extensions to a two-sided obstacle problem
In this section, we extend the results obtained in the previous sections to the case of a two-sided obstacle problem. Consider the discrete two-sided obstacle problem of finding \(u\in K\) such that
where A and f are defined as before, \(K=\{v\in R^{n}:\phi\leq v\leq\psi\}\) with \(\phi<\psi\).
Problem (5.1) is equivalent to the following system of nonsmooth equations:
and the following variational inequality problem of finding \(u\in K\) such that
According to (5.2), we define the upper and lower solution sets of problem (5.1), respectively, as follows:
which is equivalent to
and
which is equivalent to
Similar to the case of the complementarity problem, the solution of problem (5.1) is the minimal (maximal) element of \(S_{1}\) (\(S_{-1}\)).
Obviously, \(\phi\) and \(\psi\) are candidates of \(S_{-1}\) and \(S_{1}\), respectively. In the following, we present two schemes which produce an upper or a lower solution of problem (5.1) from any \(w\in R^{n}\).
Scheme 5.1
Let \(w\in R^{n}\).
 Step 1.:

Solve the following LCP of finding \(\bar{u}\ge\phi\) such that
$$ \bigl(A+\Lambda^{*}\bigr)\bar{u}\ge\bar{g}, \qquad (\bar{u}-\phi)^{T}\bigl[\bigl(A+\Lambda^{*}\bigr)\bar{u}-\bar{g}\bigr]=0, $$(5.6)where \(\Lambda^{*}\) and \(\bar{g}\) are the same as those given in Section 2.
 Step 2.:

Let
$$\tau_{i}= \textstyle\begin{cases} 0 & \mbox{if } \bar{u}_{i}\leq\psi_{i},\\ \psi_{i}-\bar{u}_{i} & \mbox{if } \bar{u}_{i}> \psi_{i} \end{cases} $$and \(\Gamma=\operatorname{diag}(\tau_{1},\tau_{2},\ldots,\tau_{n})\). Define
$$\tilde{\bar{u}}=\bar{u}+\Gamma e, $$where \(e\in R^{n}\) is the vector of ones.
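In code, Step 2 of Scheme 5.1 amounts to a componentwise minimum, since \(\tau_{i}=0\) where \(\bar{u}_{i}\leq\psi_{i}\) and \(\tau_{i}=\psi_{i}-\bar{u}_{i}\) elsewhere. A minimal sketch (the function name and sample data are ours):

```python
import numpy as np

def truncate_at_upper_obstacle(u_bar, psi):
    """Step 2 of Scheme 5.1: adding Gamma e to u_bar clips it at the
    upper obstacle psi, i.e. the componentwise minimum of u_bar and psi."""
    return np.minimum(u_bar, psi)

u_tilde = truncate_at_upper_obstacle(np.array([0.5, 3.0]), np.array([2.0, 2.0]))
# only the component exceeding psi is truncated: u_tilde = [0.5, 2.0]
```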
By Scheme 5.1, \(\tilde{\bar{u}}\leq\bar{u}\) and \(\tilde{\bar{u}}\in K\). Moreover, if \(\tilde{\bar{u}}_{i}<\psi_{i}\),
where the first inequality follows from Lemma 2.1, \(\tau_{i}=0\), and \(\bar{u}_{i}=\tilde{\bar{u}}_{i}\), and the second inequality follows from the definition of an M-matrix and \(\tau_{j}\leq0\) for each j.
Thereby, we obtain the following result.
Lemma 5.1
Let \(\tilde{\bar{u}}\) be produced by Scheme 5.1. Then \(\tilde{\bar{u}}\in S_{1}\), where \(S_{1}\) is defined by (5.4).
Similarly, we can obtain a lower solution of the problem by the following scheme.
Scheme 5.2
Let \(w\in R^{n}\).
 Step 1.:

Solve the following LCP of finding \(\underline{u}\leq\psi\) such that
$$ \bigl(A+\Lambda^{*}\bigr)\underline{u}\leq\underline{g},\qquad (\underline{u}-\psi )^{T}\bigl[\bigl(A+\Lambda^{*}\bigr)\underline{u}-\underline{g}\bigr]=0, $$(5.7)where \(\Lambda^{*}\) and \(\underline{g}\) are the same as those given in Section 2.
 Step 2.:

Let
$$\tau_{i}= \textstyle\begin{cases} 0 & \mbox{if } \underline{u}_{i}\geq\phi_{i},\\ \phi_{i}-\underline{u}_{i} & \mbox{if } \underline{u}_{i}< \phi_{i} \end{cases} $$and \(\Gamma=\operatorname{diag}(\tau_{1},\tau_{2},\ldots,\tau_{n})\). Define
$$\tilde{\underline{u}}=\underline{u}+\Gamma e. $$
Similar to LemmaÂ 5.1, we have the following result.
Lemma 5.2
Let \(\tilde{\underline{u}}\) be produced by Scheme 5.2. Then \(\tilde{\underline{u}}\in S_{-1}\), where \(S_{-1}\) is defined by (5.5).
By Schemes 5.1 and 5.2, we can obtain a pair of upper and lower solutions of the two-sided obstacle problem (5.1) by solving the two affine upper and lower obstacle problems (5.6) and (5.7), instead of solving a two-sided obstacle problem. To our knowledge, direct algorithms with polynomial computational complexity for the two-sided obstacle problem are few.
Algorithm 5.1
Let the initial iterates \(u^{(0)}_{1}\in S_{1}\) and \(u^{(0)}_{-1}\in S_{-1}\) be given. For \(k\ge1\), we calculate a pair of iterates by solving the following affine two-sided obstacle problems of finding \(u_{\alpha}^{(k)}\in K\), \(\alpha=1,-1\), respectively, such that
where matrix \(A^{(k)}\) and mapping r are the same as those in Algorithm 3.1.
Similar to LemmaÂ 3.1, the following lemma holds.
Lemma 5.3
Let \(u_{\alpha}^{(k)}\), \(\alpha=1,-1\), be the solutions of the subproblems in Algorithm 5.1, respectively. If \(u^{(k-1)}_{1}\ge u^{(k-1)}_{-1}\), then \(u_{1}^{(k)}\geq u_{-1}^{(k)}\).
According to LemmaÂ 5.3, we have the following monotone and quadratic convergence similar to Theorems 3.1, 4.1 and 4.2.
Theorem 5.1
Let the sequences \(\{u_{\alpha}^{(k)}\}\), \(\alpha=1,-1\), be generated by Algorithm 5.1. Then the iterates converge to the solution u of problem (5.1). Moreover, \(\{u^{(k)}_{1}\}\subset S_{1}\), \(\{u^{(k)}_{-1}\}\subset S_{-1}\), and
where the constants are the same as those in TheoremÂ 4.1. Moreover,
if \(f''_{i}(v_{i})\geq0\) for \((u_{-1}^{(0)})_{i}\leq v_{i}\leq(u_{1}^{(0)})_{i}\), and
if \(f''_{i}(v_{i})\leq0\) for \((u_{-1}^{(0)})_{i}\leq v_{i}\leq(u_{1}^{(0)})_{i}\).
6 Numerical experiments
In this section, we present numerical experiments to investigate the performance of the proposed algorithms. The programs are coded in Visual C++ 6.0 and run on a computer with a 2.0 GHz CPU. We consider the following two problems.
Problem 1
We consider the following nonlinear complementarity problem which is the same as Problem 5.1 in [20]:
Here \(F(u)=Au+D(u)+f\), where
and
\(h=\frac{1}{\sqrt{n}+1}\), and \(D(u)=(D_{i}):R^{n}\rightarrow R^{n}\) is a given diagonal mapping with \(D_{i}:R\rightarrow R\) for \(i=1,2,\ldots,n\); that is, component \(D_{i}\) of D is a function of the ith variable \(u_{i}\) only. We set \(D_{i}(u_{i})=\lambda e^{u_{i}}\) to obtain the diagonal mapping \(D(u)=(D_{i}(u_{i}))\). In our test, we fix \(\lambda=0.8\) and let \(f_{i}=\max\{0,v_{i}-0.5\}\times10^{w_{i}-0.5}\), where \(w_{i}\) and \(v_{i}\) are random numbers in \([0,1]\), \(i=1,2,\ldots,n\).
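The nonlinear part of Problem 1 is straightforward to generate; the sketch below follows the formulas above (the random seed and the dimension n are our own illustrative choices, not taken from the paper).

```python
import numpy as np

# Illustrative construction of the data of Problem 1
rng = np.random.default_rng(0)
n, lam = 100, 0.8

D = lambda u: lam * np.exp(u)                      # D_i(u_i) = lambda * e^{u_i}
v, w = rng.random(n), rng.random(n)                # v_i, w_i uniform on [0, 1]
f = np.maximum(0.0, v - 0.5) * 10.0 ** (w - 0.5)   # f_i = max{0, v_i - 0.5} * 10^{w_i - 0.5}
```

Note that roughly half of the entries of f vanish (those with \(v_{i}\le0.5\)) and the rest lie in \((0,10^{0.5})\), so f is nonnegative by construction.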
Problem 2
We discuss the following nonlinear complementarity problem:
Here \(F(u)=Au+D(u)+f\), with A the same as in Problem 1. The vector f is generated from a uniform distribution on the interval \((-10,10)\). Let \(D(u)=(D_{i}):R^{n}\rightarrow R^{n}\) be a given diagonal mapping with \(D_{i}:R\rightarrow R\) for \(i=1,2,\ldots,n\). The components of \(D(u)\) are \(D_{i}(u)=\arctan(u_{i})\).
We compare the algorithms in terms of iteration numbers and cpu times (in seconds). Here, we consider three algorithms: Algorithm 3.1, denoted by AL; the semismooth equation approach proposed in [21], denoted by SSN; and the primal-dual algorithm proposed in [22], denoted by PDA.
In the algorithm AL, we choose the initial point \(u^{(0)}_{-1}=0\), and \(u^{(0)}_{1}\) is obtained by Lemma 2.1 for all problems. All subproblems are solved by PSOR, and the tolerance in PSOR is set to \(10^{-7}\) in the \(\Vert \cdot \Vert _{2}\) norm. In order to determine the relaxation parameter ω, we consider the following nonlinear complementarity problem:
Here \(F(u)=Au+D(u)+f\), where A is the same as in Problems 1 and 2, and \(D(u)=(D_{i}):R^{n}\rightarrow R^{n}\) is a given diagonal mapping as in Problem 1. Set \(D_{i}(u_{i})=u_{i}^{2}\), and let \(f_{i}=1\). Fix \(h=\frac{1}{20}\), and use AL to solve the above problem. We vary ω and present the results in Table 1. From the table, we can see that the relaxation parameter \(\omega=1.9\) is a good choice, and we use it in all problems.
The termination criterion of the algorithm AL is \(\Vert u^{(k)}_{1}-u^{(k)}_{-1} \Vert \le10^{-6}\). In the algorithm SSN, we choose the initial point \(u^{0}=0\), tolerance \(\epsilon=10^{-6}\), \(p=3\), \(\rho=0.5\), \(\beta=0.3\), and \(H_{k}\in\partial_{B} \Phi\) is defined by the procedure proposed in [21]. In the algorithm PDA, we fix \(c=1\) and choose the initial point \(u^{0}=0\); the stopping criterion is that the active sets do not change between two iterations. In this algorithm, the subproblems are systems of nonlinear equations, which are solved by Newton iteration. The numerical results are listed in Tables 2 and 3, where 'iter' denotes the number of iterations needed for the algorithm to converge to the solution and 'cpu' denotes the execution time.
From Tables 2 and 3, we can see that the iteration numbers of Algorithm 3.1 are stable, which suggests that the initial iterates obtained by (2.2) provide a good initial guess. For SSN and PDA, by contrast, the iteration numbers increase as the dimension of Problem 2 grows. As Table 3 shows, the proposed algorithm seems to be more effective for large-scale problems. The main reason may be as follows. Algorithm 3.1 takes only a few iterations for all problems, and each iteration requires solving only two linear complementarity problems, which PSOR handles rapidly. SSN spends considerable time solving the system of linear equations that determines the search direction, especially for large-scale problems. PDA, thanks to its active set strategy, only needs to solve a reduced system of linear equations in each iteration, whose dimension is much smaller than that of the original system; its execution time is therefore also smaller than that of SSN.
7 Conclusions
In this paper, we have considered the numerical solution of discretized semilinear elliptic complementarity problems. Based on the upper and lower solutions of the problem, we have proposed a monotone algorithm and proved that its iterates form a pair of upper and lower solution sequences which converge monotonically from above and below, respectively, to the solution of the problem. Moreover, we have established quadratic convergence of the algorithm. Our limited numerical results show that the proposed algorithm is effective.
References
Bensoussan, A, Lions, JL: Impulse Control and Quasi-variational Inequalities. Gauthier-Villars, Paris (1984)
Billups, SC, Murty, KG: Complementarity problems. J. Comput. Appl. Math. 124, 303-318 (2000)
Elliott, CM, Ockendon, JR: Weak and Variational Methods for Moving Boundary Problems. Research Notes in Mathematics, vol. 59. Pitman, London (1982)
Meyer, GH: Free boundary problems with nonlinear source terms. Numer. Math. 43, 463-482 (1984)
Viglialoro, G, Murcia, J: A singular elliptic problem related to the membrane equilibrium equations. Int. J. Comput. Math. 90, 2185-2196 (2013)
Boglaev, I: Uniform quadratic convergence of monotone iterates for semilinear singularly perturbed elliptic problems. Lect. Notes Comput. Sci. Eng. 81, 37-46 (2011)
Hoffmann, KH, Zou, J: Parallel solution of variational inequality problems with nonlinear source terms. IMA J. Numer. Anal. 16, 31-45 (1996)
Li, CL, Zeng, JP: Two-level Schwarz method for solving variational inequality with nonlinear source terms. J. Comput. Appl. Math. 211, 67-75 (2008)
Jiang, YJ, Zeng, JP: A multiplicative Schwarz algorithm for the nonlinear complementarity problem with an M-function. Bull. Aust. Math. Soc. 82, 353-366 (2010)
Sun, Z, Zeng, JP: A monotone semismooth Newton type method for a class of complementarity problems. J. Comput. Appl. Math. 235, 1261-1274 (2011)
Zeng, JP, Zhou, SZ: On monotone and geometric convergence of Schwarz methods for two-sided obstacle problems. SIAM J. Numer. Anal. 35, 600-616 (1998)
Pao, CV: Numerical analysis of coupled systems of nonlinear parabolic equations. SIAM J. Numer. Anal. 36, 393-416 (1999)
Ortega, JM, Rheinboldt, WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)
Varga, R: Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs (1962)
Cottle, RW, Pang, JS, Stone, RE: The Linear Complementarity Problem. Academic Press, New York (1992)
Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)
Jiang, YJ, Zeng, JP: Direct algorithm for the solution of two-sided obstacle problems with M-matrix. Numer. Linear Algebra Appl. 18, 167-173 (2011)
Zeng, JP, Jiang, YJ: Direct algorithms to solve the two-sided obstacle for an M-matrix. Numer. Linear Algebra Appl. 13, 543-551 (2006)
Zeng, JP, Zhou, SZ: Two-sided obstacle problem and its equivalent linear complementarity problem. Chin. Sci. Bull. 39, 1057-1062 (1994)
Wang, ZY, Fukushima, M: A finite algorithm for almost linear complementarity problems. Numer. Funct. Anal. Optim. 28, 1387-1403 (2007)
De Luca, T, Facchinei, F, Kanzow, C: A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Program. 75, 407-439 (1996)
Kärkkäinen, T, Kunisch, K, Tarvainen, P: Augmented Lagrangian active set methods for obstacle problem. J. Optim. Theory Appl. 119, 499-533 (2003)
Acknowledgements
The work was supported by the NSF of China (Grant Nos. 11271069, 11601188) and by the Training Program for Outstanding Young Teachers in Guangdong Province (Grant No. 20140202).
Author information
Authors and Affiliations
Contributions
All authors jointly worked on the results. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zeng, J., Chen, H. & Xu, H. Quadratic convergence of monotone iterates for semilinear elliptic obstacle problems. J Inequal Appl 2017, 238 (2017). https://doi.org/10.1186/s13660-017-1513-x
DOI: https://doi.org/10.1186/s13660-017-1513-x
Keywords
 complementarity problem
 obstacle problem
 upper and lower solution
 monotone iteration
 quadratic convergence