A Hermite-Gauss method for the approximation of eigenvalues of regular Sturm-Liouville problems
Journal of Inequalities and Applications volume 2016, Article number: 154 (2016)
Abstract
Recently, some authors have used the sinc-Gaussian sampling technique, rather than the classical sinc technique, to approximate eigenvalues of boundary value problems, because the sinc-Gaussian technique has a convergence rate of exponential order, \(O (e^{-(\pi-h\sigma)N/2}/\sqrt{N} )\), where σ, h are positive numbers and N is the number of terms in the sinc-Gaussian expansion. As is well known, the other sampling techniques (classical sinc, generalized sinc, Hermite) have convergence rates of polynomial order. In this paper, we use the Hermite-Gauss operator, established by Asharabi and Prestin (Numer. Funct. Anal. Optim. 36:419-437, 2015), to construct a new sampling technique for approximating eigenvalues of regular Sturm-Liouville problems. This technique is new, and its accuracy is higher than that of the sinc-Gaussian technique because the Hermite-Gauss operator has a convergence rate of order \(O (e^{-(2\pi-h\sigma)N/2}/\sqrt {N} )\). Numerical examples are given, with comparisons with the best sampling technique to date, the sinc-Gaussian.
1 Introduction
Let \(E_{\sigma}(\varphi)\), \(\sigma> 0\), be the class of entire functions satisfying the following condition:
where φ is a non-decreasing, non-negative function on \([0,\infty )\). On the class \(E_{\sigma}(\varphi)\), Schmeisser and Stenger [2] have introduced the so-called sinc-Gaussian operator
where \(\mathrm{h}\in(0,\pi/\sigma]\), \(\alpha:=(\pi-\mathrm{h} \sigma )/2\), \(N\in\mathbb{N}\), and
Note that the summation in (1.2) depends on the real part of ζ. Here, the sinc function is defined as
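For the reader's convenience, the standard definitions behind (1.2) can be recalled (following [2, 3]; the displayed formulas here are reconstructions from those sources, and the exact form of the index set should be checked against them):

```latex
\operatorname{sinc}(t) :=
\begin{cases}
  \dfrac{\sin(\pi t)}{\pi t}, & t \neq 0,\\[4pt]
  1, & t = 0,
\end{cases}
\qquad
\mathbb{Z}_{N}(x) := \bigl\{\, n\in\mathbb{Z} :
  \lfloor \Re x\rfloor - N + 1 \le n \le \lfloor \Re x\rfloor + N \,\bigr\},
% with these, the sinc-Gaussian operator (1.2) takes the form
\mathcal{G}_{h,N}[f](\zeta)
 = \sum_{n\in\mathbb{Z}_{N}(h^{-1}\zeta)} f(nh)\,
   \operatorname{sinc}\bigl(h^{-1}\zeta - n\bigr)\,
   \exp\Bigl(-\tfrac{\alpha}{N}\bigl(h^{-1}\zeta - n\bigr)^{2}\Bigr).
```

The dependence of the summation on \(\Re\zeta\) enters only through the index set \(\mathbb{Z}_{N}(h^{-1}\zeta)\), which collects the 2N sampling nodes nearest the real part of \(h^{-1}\zeta\).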
The authors of [2] investigated a bound for the approximation of functions from the class \(E_{\sigma}(\varphi)\) by the sinc-Gaussian operator. They proved [2] that if \(f\in E_{\sigma}(\varphi)\), then for \(\zeta\in\mathbb{C}\) with \(\vert \Im \zeta \vert < N\) we have
where
Annaby and Asharabi [3] constructed a new sampling technique to approximate eigenvalues of second order Birkhoff-regular eigenvalue problems using the sinc-Gaussian operator. Some authors have since used this technique, called the sinc-Gaussian technique, to approximate eigenvalues of boundary value problems rather than the classical sinc technique; see, for example, [4–6]. The convergence rate of the sinc-Gaussian technique is of order \(O (e^{-(\pi-h\sigma )N/2}/\sqrt {N} )\), where σ, h are defined above, which is considerably faster than that of the classical sinc method. The classical sinc method was introduced by Boumenir and Chanane [7]; several studies then followed for different classes of boundary value problems, see e.g. [4, 8–10].
For the class \(E_{\sigma}(\varphi)\), Asharabi and Prestin [1] defined another localization operator \(\mathcal {H}_{h,N}\), which is called a Hermite-Gauss operator, as follows:
where \(h\in(0,2\pi/\sigma]\) and \(\beta:=(2\pi-h \sigma)/2\). For \(f\in E_{\sigma}(\varphi)\) and \(\zeta\in\mathbb{C}\) with \(\vert \Im\zeta \vert < N\), we have [1]
where
The bound in (1.6) shows that the Hermite-Gauss operator has higher accuracy than the sinc-Gaussian operator because it has a convergence rate of order \(O (e^{-(2\pi-h\sigma)N/2}/\sqrt {N} )\). We mention here that the sinc-Gaussian and Hermite-Gauss operators are generalized in [11] and have been extended to entire functions of two variables satisfying analogous conditions; see [12].
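The gap between the two exponential rates can be made explicit: for a common choice of h and σ (with \(h\le\pi/\sigma\), so that both operators apply), the exponents differ by \(\beta-\alpha=\pi/2\), so the Hermite-Gauss bound gains a factor \(e^{-\pi N/2}\) over the sinc-Gaussian one:

```latex
e^{-(2\pi - h\sigma)N/2} \;=\; e^{-\pi N/2}\, e^{-(\pi - h\sigma)N/2}.
```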
This paper is concerned with constructing a new sampling technique to approximate eigenvalues of Sturm-Liouville problems with separate-type boundary conditions using the Hermite-Gauss operator \(\mathcal{H}_{h,N}\). This sampling technique, which we call the Hermite-Gauss technique, is new, and it is expected to yield higher-accuracy results. Since approximate samples will be used in our sampling operator, an amplitude error appears in our scheme. For this reason, we derive estimates for the amplitude error associated with the Hermite-Gauss operator \(\mathcal{H}_{h,N}\). This will be done in the next section. Section 3 contains the technique adopted and the associated error analysis. The last section contains numerical examples and comparisons.
2 Amplitude error
In this section, we investigate the amplitude error associated with the Hermite-Gauss operator (1.5). The amplitude error arises when the exact values \(f^{(i)}(nh)\), \(i=0,1\), in (1.5) are replaced by close approximations. We assume that \(\widetilde{f}^{(i)}(nh)\) are close to \(f^{(i)}(nh)\), i.e., there is a sufficiently small \(\varepsilon>0\) such that
for all \(i=0,1\). Now, we define the amplitude error as follows:
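In the notation of (1.5), the amplitude error is the difference between the operator applied to the exact samples and to the perturbed ones; explicitly (a reconstruction, writing \(\mathcal{H}_{h,N}[\widetilde f\,]\) for the operator evaluated with \(\widetilde f^{(i)}(nh)\) in place of \(f^{(i)}(nh)\)):

```latex
\mathcal{A}_{N}(\zeta) := \mathcal{H}_{h,N}[f](\zeta)
  - \mathcal{H}_{h,N}[\widetilde f\,](\zeta),
\qquad \zeta\in\mathbb{C}.
```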
In the following theorem, we estimate a bound for the amplitude error \(A_{N}(z)\) on the whole complex domain \(\mathbb{C}\). Admittedly, in this paper we need the bound of the amplitude error only on the real line, because the eigenvalues of the Sturm-Liouville problem (3.1)-(3.2) are real; in general, however, eigenvalues are not necessarily real, and this technique can be used to approximate eigenvalues of other classes of boundary value problems.
Theorem 2.1
Let \(\sigma>0\), \(h\in(0,2\pi/\sigma]\), and \(\beta:=(2\pi-h \sigma)/2\). Assume that (2.1) holds. Then for \(\zeta\in \mathbb{C}\) with \(\vert \Im\zeta \vert < N\) we have
Proof
From the definition of the amplitude error (2.2) and in view of (2.1), we get
Since sinc and sin are entire functions of exponential type, we have
Therefore
Using the inequality
in (2.5) with the hypothesis \(\vert \Im\zeta \vert < N\) implies
The summation in (2.6) is estimated in [3], Eq. (28), as follows:
Combining (2.7) and (2.6) yields (2.3). □
On the real line, the bound of the amplitude error becomes
which is uniform. This bound will be used when we investigate the error analysis of the technique.
3 The technique and its error analysis
In this section, we discuss the technique and study its error analysis. The error analysis is derived with two types of errors. Now consider the regular Sturm-Liouville problem
with separate-type boundary conditions
where \(\alpha_{ij}\in\mathbb{R}\), \(1\leq i,j\leq2\), are such that \(\vert \alpha_{k1}\vert +\vert \alpha_{k2}\vert \neq0\), \(k=1,2\), and \(q(\cdot) \in L^{1}[0,b]\) is a real-valued function. Let \(y(\cdot,\mu)\) denote the solution of (3.1) satisfying the following initial conditions:
From the theory of Sturm-Liouville problems, cf. e.g. [13, 14], the solution \(y(x,\mu)\) and its derivative \(y'(x,\mu)\) are entire functions of μ for each \(x \in[0, b]\), and problem (3.1)-(3.3) has a countable set of real and simple eigenvalues \(\{\mu^{2}_{j}\}_{j=0}^{\infty}\), which can be ordered as an increasing sequence tending to infinity,
Moreover, the eigenvalues are the zeros of the characteristic function, which is defined by
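For separated conditions of the type (3.2), a standard construction of \(\mathcal{D}\) (recalled here under a common sign convention, which may differ from the paper's own initial conditions) fixes the initial values so that the first boundary condition holds automatically and then tests the second one at \(x=b\):

```latex
y(0,\mu) = \alpha_{12}, \qquad y'(0,\mu) = -\alpha_{11},
\qquad
\mathcal{D}(\mu) := \alpha_{21}\, y(b,\mu) + \alpha_{22}\, y'(b,\mu),
```

so that \(\mu^{2}\) is an eigenvalue of (3.1)-(3.2) if and only if \(\mathcal{D}(\mu)=0\).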
The authors of [15] used successive iterations to prove that \(\mathcal{D}(\cdot)\) can be written as
where the operators T and T̃ are Volterra operators acting in the space of continuous functions, \(C[0,b] \), which are defined, respectively, by
and \(T^{0}\) is the identity operator. All series in (3.6) converge uniformly on \([0,b]\) for any \(\mu\in\mathbb{C}\). As in [15], we split \(\mathcal{D}(\cdot)\) into two parts via
where \(\mathcal{K}_{k}\) is known,
and \(\mathcal{U}_{k}(\mu)\) involves the infinite sum of integral operators
Lemma 3.1
Assume that \(q(\cdot) \in L^{1}[0,b]\). Then \(\mathcal {U}_{k}(\cdot)\in E_{b}(\varphi)\) for all \(k\in\mathbb{N}_{0}\).
Proof
Since \(q(\cdot) \in L^{1}[0,b]\), the solution \(y(b,\mu)\) and its derivative \(y'(b,\mu)\) are entire functions of μ, and hence \(\mathcal {D}(\mu)\) is an entire function. Therefore \(\mathcal{U}_{k}\) is also entire, and it remains to prove that \(\mathcal{U}_{k}\) satisfies condition (1.1) of the class \(E_{b}(\varphi)\). Since \(q(\cdot) \in L^{1}[0,b]\), we have, cf. e.g. [16], Eqs. (2.2)-(2.4), for all \(k\in\mathbb{N}_{0}\) and \(\mu\in\mathbb{C}\)
and
where \(\rho_{k}:= \sum_{n= k}^{\infty}\frac{ (c b \tau )^{n}}{n!}\), \(\tau:=\int_{0}^{b}\vert q(t)\vert \,dt\), and \(c:=1.709\). Combining (3.12), (3.13), and (3.11), we obtain for all \(k\in\mathbb{N}_{0}\)
and thus \(\mathcal{U}_{k}(\cdot)\in E_{b}(\varphi)\), where \(\varphi :=M_{k}\) is the constant function given by
□
Since \(\mathcal{U}_{k}(\cdot)\in E_{b}(\varphi)\), we approximate the function \(\mathcal{U}_{k}\) using the Hermite-Gauss operator (1.5), where \(h\in(0,2\pi/b]\) and \(\beta:=(2\pi-bh)/2\); then, from (1.6), we obtain
where the function \(\mathcal{T}_{k,h,N}\) is defined as
In (3.14), we take \(\mu\in\mathbb{R}\) because all eigenvalues of problem (3.1)-(3.3) are real. The samples \(\mathcal{U}_{k}(nh)=\mathcal{D}(nh)-\mathcal{K}_{k}(nh)\) and \(\mathcal{U}'_{k}(nh)=\mathcal{D}'(nh)-\mathcal {K}'_{k}(nh)\), \(n\in\mathbb{Z}_{N}(h^{-1}\mu)\), cannot in general be computed explicitly, so we compute them numerically; this is the reason for the appearance of the amplitude error. According to (3.5), we have
The solution \(y(1,nh)\) and its derivative with respect to t, \(\partial _{t}y(1,nh)\), can be computed directly by solving the initial value problem defined by (3.1) and (3.4) at the nodes \(\{ nh \}_{n\in\mathbb{Z}_{N}(h^{-1}\mu)}\). Alternatively, we can solve the initial value problem (3.1), (3.4) approximately to find the solution \(y(1,\mu)\) and its derivative \(\partial_{t}y(1,\mu)\) as functions of the parameter μ; consequently, we can easily calculate the derivatives \(\partial_{\mu}y(1,nh)\) and \(\partial^{2}_{\mu t}y(1,nh)\) at the nodes \(\{nh \}_{n\in \mathbb{Z}_{N}(h^{-1}\mu)}\). In all examples of Section 4, we use the Mathematica command ‘ParametricNDSolve’ to compute these values numerically. Now let \(\widetilde{\mathcal{U}}_{k}(nh)\) and \(\widetilde{\mathcal{U}}'_{k}(nh)\) be the approximations of the samples \(\mathcal{U}_{k}(nh)\) and \(\mathcal{U}'_{k}(nh)\), \(n\in \mathbb {Z}_{N}(h^{-1}\mu)\), respectively. Let
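The same computation can be sketched outside Mathematica. The following Python fragment is a minimal analogue of the ‘ParametricNDSolve’ step: it integrates the initial value problem together with its variational system to obtain \(\mathcal{D}(\mu)=\alpha_{21}y(b,\mu)+\alpha_{22}y'(b,\mu)\) and \(\partial_{\mu}\mathcal{D}(\mu)\) at given nodes. The sign convention for the initial values and the test potential \(q\equiv-1\) used below are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def char_function_samples(q, b, alpha, mu_nodes):
    """Samples of D(mu) = a21*y(b,mu) + a22*y'(b,mu) and of dD/dmu.

    Integrates -y'' + q(t) y = mu^2 y with the (assumed) initial values
    y(0) = a12, y'(0) = -a11, so the first boundary condition holds,
    together with the variational system for v = dy/dmu:
        v'' = (q(t) - mu^2) v - 2 mu y,   v(0) = v'(0) = 0.
    """
    a11, a12, a21, a22 = alpha
    D, dD = [], []
    for mu in mu_nodes:
        def rhs(t, u):
            y, yp, v, vp = u
            return [yp, (q(t) - mu**2) * y,
                    vp, (q(t) - mu**2) * v - 2.0 * mu * y]
        sol = solve_ivp(rhs, (0.0, b), [a12, -a11, 0.0, 0.0],
                        rtol=1e-10, atol=1e-12)
        y, yp, v, vp = sol.y[:, -1]
        D.append(a21 * y + a22 * yp)
        dD.append(a21 * v + a22 * vp)
    return np.array(D), np.array(dD)
```

With \(q\equiv-1\), \(b=1\), and Dirichlet conditions (the data of Example 4.2), the closed form \(\mathcal{D}(\mu)=-\sin(\omega)/\omega\), \(\omega=\sqrt{\mu^{2}+1}\), is available, so the computed samples can be checked directly against it.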
Therefore we get, cf. Theorem 2.1,
where \(\mathcal{A}(\varepsilon,N)\) is defined in (2.8). Now let \(\widetilde{\mathcal{D}}_{N,k}(\mu):=\mathcal{K}_{k}(\mu)+\mathcal {H}_{h,N}[\widetilde{\mathcal{U}}_{k}](\mu)\). Combining (3.9), (3.16), and (3.18) implies
Now we determine enclosure intervals for the eigenvalues. Let \((\mu ^{*})^{2}\) be an eigenvalue, i.e. \(\mathcal{D}(\mu^{*})=0\), and let \((\mu_{N,k})^{2}\) be its approximation, i.e. \(\widetilde {\mathcal{D}}_{N,k}(\mu_{N,k})=0\). In view of (3.19), we obtain
Since \(\widetilde{\mathcal{D}}_{N,k}(\mu^{*})\) is given and \(\mathcal {T}_{k,h,N}(\mu^{*}) +\mathcal{A}(\varepsilon,N)\) is computable, we can define an enclosure for \(\mu^{*}\) by solving the following system of inequalities:
Its solution is an interval, which will be denoted by \(I_{N,k,\varepsilon}\). In the following theorem, we find a bound for the error \(\vert \mu^{*}-\mu_{N,k}\vert \).
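Locating the approximate eigenvalues, i.e. the zeros of \(\widetilde{\mathcal{D}}_{N,k}\), then reduces to one-dimensional root finding. The sketch below applies a sign-change scan followed by bracketed refinement to a closed-form stand-in characteristic function, \(D(\mu)=\sin(\omega)/\omega\) with \(\omega=\sqrt{\mu^{2}+1}\), which is consistent with the exact eigenvalues \((\pi l)^{2}-1\) stated in Example 4.2 (taking \(q\equiv-1\) there is an assumption):

```python
import numpy as np
from scipy.optimize import brentq

def D(mu):
    """Stand-in characteristic function: zeros at omega = pi*l, i.e. mu^2 = (pi*l)^2 - 1."""
    om = np.sqrt(mu * mu + 1.0)
    return np.sin(om) / om

# Scan a grid for sign changes, then refine each bracketed zero.
grid = np.linspace(0.5, 20.0, 4000)
mus = [brentq(D, a, b) for a, b in zip(grid[:-1], grid[1:]) if D(a) * D(b) < 0]
eigenvalues = [mu * mu for mu in mus]   # the eigenvalues are mu^2
```

In the actual technique, \(D\) would be replaced by \(\widetilde{\mathcal{D}}_{N,k}\), and the inequalities above additionally yield the enclosure interval \(I_{N,k,\varepsilon}\) around each root.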
Theorem 3.2
Let \((\mu^{*})^{2}\) be an eigenvalue of problem (3.1)-(3.3). For sufficiently large N, we have the following estimate:
for all \(k\in\mathbb{N}_{0}\). Moreover, \(\vert \mu^{*}-\mu _{N,k}\vert \longrightarrow0\) as \(N\rightarrow\infty\) and \(\varepsilon\rightarrow0\).
Proof
Since \(\mathcal{D} (\mu_{N,k} )-\widetilde{\mathcal {D}}_{N,k} (\mu_{N,k} )=\mathcal{D} (\mu_{N,k} )-\mathcal{D}(\mu^{*})\), replacing μ by \(\mu_{N,k}\) in (3.19) gives
Using the mean value theorem yields
for some \(\zeta\in J_{N,k}:= (\min\{\mu^{*},\mu_{N,k}\},\max\{ \mu ^{*},\mu_{N,k}\} )\). Since the zeros of \(\mathcal{D}(\mu)\) are simple, for sufficiently large N we have \(\inf_{\zeta\in I_{N,k,\varepsilon} } \vert \mathcal{D}'(\zeta)\vert >0\), and then we get (3.20). In view of (3.17) and (2.8), the right-hand side of (3.20) goes to zero uniformly as \(N\rightarrow\infty\) and \(\varepsilon\rightarrow0\); therefore \(\vert \mu^{*}-\mu _{N,k}\vert \rightarrow0\) for all \(k\in\mathbb{N}_{0}\). □
4 Examples and comparisons
This section includes three examples to illustrate our technique. All examples are computed in [3] with the Hermite sampling technique, and the authors compare their results with those of the classical sinc technique. In our approximations, \(\mathcal{K}_{k}\) of (3.10) has fewer terms than is used in [3]. Note that the accuracy of any sampling technique increases when N is fixed but k increases. As is well known, the sinc-Gaussian technique is better than the other sampling techniques (classical sinc, generalized sinc, Hermite) because the convergence rates of those techniques are only of polynomial order; see e.g. [7, 8, 10, 15–17]. As mentioned before, the sinc-Gaussian technique has a convergence rate of exponential order. Therefore, we compare our results only with those of the sinc-Gaussian technique. As predicted by the error estimates, the Hermite-Gauss technique gives more accurate results than the sinc-Gaussian technique, and the accuracy increases when N is fixed but h decreases, without any additional cost except that the function is approximated on a smaller domain. Denote by \(E_{G}\) and \(E_{H}\) the absolute errors associated with the results of the sinc-Gaussian and Hermite-Gauss techniques, respectively. We use Mathematica to compute the following examples.
Example 4.1
Consider the Sturm-Liouville problem
with the separate boundary condition of the form
In this case, the characteristic function is
and the exact eigenvalues are \(\mu^{2}_{l}:=(2 l+1)^{2} \pi^{2}/4-1\), \(l \in\mathbb{Z}\). Taking \(k=2\) in (3.10) and performing some computations gives
and then \(\mathcal{U}_{2}\in B^{\infty}_{1}\). Table 1 shows the first five approximate eigenvalues of problem (4.1)-(4.2), computed using our technique with \(N=7\) and \(h=1\), compared with the results of the sinc-Gaussian technique.
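As a sanity check on the stated exact eigenvalues, one can reproduce them numerically. Assuming problem (4.1)-(4.2) is \(-y''-y=\mu^{2}y\) on \([0,1]\) with \(y(0)=y'(1)=0\) (an assumption, chosen because it is consistent with \(\mu_{l}^{2}=(2l+1)^{2}\pi^{2}/4-1\)), the characteristic function is proportional to \(\cos(\sqrt{\mu^{2}+1})\), and the squares of its zeros match the stated values:

```python
import numpy as np
from scipy.optimize import brentq

def D(mu):
    # Characteristic function (up to a nonzero factor) under the assumed data:
    # zeros at omega = (2l+1)*pi/2, where omega = sqrt(mu^2 + 1).
    return np.cos(np.sqrt(mu * mu + 1.0))

grid = np.linspace(0.0, 15.0, 3000)
approx = [brentq(D, a, b) ** 2
          for a, b in zip(grid[:-1], grid[1:]) if D(a) * D(b) < 0]
exact = [(2 * l + 1) ** 2 * np.pi ** 2 / 4.0 - 1.0 for l in range(len(approx))]
```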
Example 4.2
The boundary value problem
is a special case of problem (3.1)-(3.3) when \(\alpha _{11}=\alpha_{21}=1\) and \(\alpha_{12}=\alpha_{22}=0\). The characteristic function of this problem is
and the exact eigenvalues are \(\mu^{2}_{l}:=(\pi l)^{2}-1\), \(l \in \mathbb{Z}\). Taking \(k=2\) in (3.10), we have after some calculations
Table 2 lists the first five approximate eigenvalues using our technique with \(N=7\) and \(h=1\) in comparison with the results of the sinc-Gaussian technique.
Example 4.3
In this example, we introduce the Sturm-Liouville problem
The characteristic function is
where \({}_{1}F_{1}\) is the confluent hypergeometric function. In this case, putting \(k=0\) in (3.10) implies, after some calculations,
As in the previous examples, we summarize the results of this example in Table 3. To compute the absolute error, the exact eigenvalues are computed approximately by Mathematica.
References
Asharabi, RM, Prestin, J: A modification of Hermite sampling with a Gaussian multiplier. Numer. Funct. Anal. Optim. 36, 419-437 (2015)
Schmeisser, G, Stenger, F: Sinc approximation with a Gaussian multiplier. Sampl. Theory Signal Image Process., Int. J. 6, 199-221 (2007)
Annaby, MH, Asharabi, RM: Computing eigenvalues of boundary-value problems using sinc-Gaussian method. Sampl. Theory Signal Image Process., Int. J. 7, 293-311 (2008)
Annaby, MH, Tharwat, MM: A sinc-Gaussian technique for computing eigenvalues of second-order linear pencils. Appl. Numer. Math. 63, 129-137 (2013)
Tharwat, MM, Al-Harbi, SM: Approximation of eigenvalues of boundary value problems. Bound. Value Probl. (2014). doi:10.1186/1687-2770-2014-51
Tharwat, MM, Bhrawy, AH, Alofi, AS: Approximation of eigenvalues of discontinuous Sturm-Liouville problems with eigenparameter in all boundary conditions. Bound. Value Probl. (2013). doi:10.1186/1687-2770-2013-132
Boumenir, A, Chanane, B: Eigenvalues of Sturm-Liouville systems using sampling theory. Appl. Anal. 62, 323-334 (1996)
Annaby, MH, Asharabi, RM: On sinc-based method in computing eigenvalues of boundary-value problems. SIAM J. Numer. Anal. 46(2), 671-690 (2008)
Annaby, MH, Asharabi, RM: Approximating eigenvalues of discontinuous problems by sampling theorems. J. Numer. Math. 3, 163-183 (2008)
Annaby, MH, Tharwat, MM: On computing eigenvalues of second-order linear pencils. IMA J. Numer. Anal. 27, 366-380 (2007)
Asharabi, RM: Generalized sinc-Gaussian sampling involving derivatives. Numer. Algorithms (2016). doi:10.1007/s11075-016-0129-4
Asharabi, RM, Prestin, J: On two-dimensional classical and Hermite sampling. IMA J. Numer. Anal. 36, 851-871 (2016)
Levitan, BM, Sargsjan, IS: Introduction to Spectral Theory: Selfadjoint Ordinary Differential Operators. Translation of Mathematical Monographs, vol. 39. Am. Math. Soc., Providence (1975)
Titchmarsh, EC: Eigenfunction Expansions Associated with Second-Order Differential Equations, 2nd edn. Clarendon Press, Oxford (1962)
Annaby, MH, Asharabi, RM: Computing eigenvalues of Sturm-Liouville problems by Hermite interpolations. Numer. Algorithms 60, 355-367 (2012)
Boumenir, A: The sampling method for Sturm-Liouville problems with the eigenvalue parameter in the boundary condition. Numer. Funct. Anal. Optim. 21, 67-75 (2000)
Chanane, B: Computing the eigenvalues of singular Sturm-Liouville problems using the regularized sampling method. Appl. Math. Comput. 184, 972-978 (2007)
Acknowledgements
The author would like to thank the anonymous referees for their constructive comments.
Competing interests
The author declares that he has no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Asharabi, R.M. A Hermite-Gauss method for the approximation of eigenvalues of regular Sturm-Liouville problems. J Inequal Appl 2016, 154 (2016). https://doi.org/10.1186/s13660-016-1098-9