
A Hermite-Gauss method for the approximation of eigenvalues of regular Sturm-Liouville problems

Abstract

Recently, some authors have used the sinc-Gaussian sampling technique, rather than the classical sinc technique, to approximate eigenvalues of boundary value problems, because the sinc-Gaussian technique has an exponential convergence rate, \(O (e^{-(\pi-h\sigma)N/2}/\sqrt{N} )\), where σ, h are positive numbers and N is the number of terms in the sinc-Gaussian formula. As is well known, the other sampling techniques (classical sinc, generalized sinc, Hermite) have convergence rates of polynomial order. In this paper, we use the Hermite-Gauss operator, established by Asharabi and Prestin (Numer. Funct. Anal. Optim. 36:419-437, 2015), to construct a new sampling technique for approximating eigenvalues of regular Sturm-Liouville problems. This technique is new, and its accuracy is higher than that of the sinc-Gaussian technique because the Hermite-Gauss operator has a convergence rate of order \(O (e^{-(2\pi-h\sigma)N/2}/\sqrt {N} )\). Numerical examples are given, with comparisons against the best sampling technique available so far, namely the sinc-Gaussian.

Introduction

Let \(E_{\sigma}(\varphi)\), \(\sigma> 0\), be the class of entire functions f satisfying the following condition:

$$ \bigl\vert f(\zeta)\bigr\vert \leq\varphi \bigl(\vert \Re \zeta \vert \bigr) \mathrm {e}^{\sigma \vert \Im\zeta \vert },\quad \zeta\in\Bbb {C}, $$
(1.1)

where φ is a non-decreasing, non-negative function on \([0,\infty )\). On the class \(E_{\sigma}(\varphi)\), Schmeisser and Stenger [2] have introduced the so-called sinc-Gaussian operator

$$ \mathcal{G}_{\mathrm{h},N}[f](\zeta):=\sum _{n\in\mathbb {Z}_{N} (\mathrm{h}^{-1}\zeta )}f(n\mathrm{h}) \operatorname {sinc}\bigl(\mathrm{h}^{-1} \zeta-n \bigr) \mathrm{e}^{-\frac{\alpha}{N} ( \mathrm{h}^{-1}\zeta-n )^{2}}, $$
(1.2)

where \(\mathrm{h}\in(0,\pi/\sigma]\), \(\alpha:=(\pi-\mathrm{h} \sigma )/2\), \(N\in\mathbb{N}\), and

$$\mathbb{Z}_{N}(\zeta):= \bigl\{ n \in\mathbb{Z}: \bigl\vert [\Re \zeta+1/2]-n\bigr\vert \leq N \bigr\} . $$

Note that the summation in (1.2) depends on the real part of ζ. Here, the sinc function is defined as

$$ \operatorname {sinc}(t):= \begin{cases} \frac{\sin(\pi t)}{\pi t}, & t\neq0, \\ 1, & t= 0. \end{cases} $$
(1.3)
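As an illustrative sketch (our own code, not part of the paper), the operator (1.2) can be implemented directly; here it is tested on \(f(t)=\sin(t)/t\), which is entire of exponential type \(\sigma=1\) and bounded on the real line, so it belongs to \(E_{1}(\varphi)\) with a constant φ:

```python
import math

def sinc(t):
    """Normalized sinc function, as in (1.3)."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def sinc_gauss(f, x, h, sigma, N):
    """Sinc-Gaussian operator G_{h,N}[f](x) of (1.2) at a real point x."""
    alpha = (math.pi - h * sigma) / 2.0
    center = math.floor(x / h + 0.5)              # integer part [x/h + 1/2]
    total = 0.0
    for n in range(center - N, center + N + 1):   # n in Z_N(x/h)
        u = x / h - n
        total += f(n * h) * sinc(u) * math.exp(-(alpha / N) * u * u)
    return total

f = lambda t: 1.0 if t == 0 else math.sin(t) / t   # exponential type sigma = 1
approx = sinc_gauss(f, 0.4, h=1.0, sigma=1.0, N=10)
print(abs(approx - f(0.4)))    # error decays like e^{-alpha N}/sqrt(N)
```

With h = σ = 1 we have α = (π − 1)/2, and the observed error at x = 0.4 is well below \(10^{-4}\), in line with the bound (1.4).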

The authors of [2] established a bound for approximating a function from the class \(E_{\sigma}(\varphi)\) by the sinc-Gaussian operator. They proved [2] that if \(f\in E_{\sigma}(\varphi)\), then for \(\zeta\in\mathbb{C}\) with \(\vert \Im \zeta \vert < N\),

$$ \bigl\vert f(\zeta)-\mathcal{G}_{\mathrm{h},N}[f](\zeta) \bigr\vert \leq 2 \bigl\vert \sin \bigl(\mathrm{h}^{-1}\pi\zeta \bigr) \bigr\vert \varphi \bigl(\vert \Re\zeta \vert +\mathrm{h}(N+1) \bigr)E_{N} \bigl( \mathrm {h}^{-1}\Im\zeta \bigr) \frac{\mathrm{e}^{-\alpha N}}{\sqrt{\pi\alpha N}}, $$
(1.4)

where

$$ E_{N}(t):=\cosh(2\alpha t)+O \bigl(N^{-1/2} \bigr), \quad \mbox{as } N\rightarrow\infty. $$

Annaby and Asharabi [3] constructed a new sampling technique to approximate eigenvalues of second order Birkhoff-regular eigenvalue problems using the sinc-Gaussian operator. Several authors have since used this technique, called the sinc-Gaussian technique, rather than the classical sinc technique, to approximate eigenvalues of boundary value problems; see, for example, [4-6]. The convergence rate of the sinc-Gaussian technique is of order \(O (e^{-(\pi-h\sigma )N/2}/\sqrt {N} )\), where σ, h are defined above, which is considerably faster than that of the classical sinc method. The classical sinc method was investigated by Boumenir and Chanane [7]; several studies then followed for different classes of boundary value problems, see e.g. [4, 8-10].

For the class \(E_{\sigma}(\varphi)\), Asharabi and Prestin [1] defined another localization operator \(\mathcal {H}_{h,N}\), which is called a Hermite-Gauss operator, as follows:

$$\begin{aligned} \mathcal{H}_{h,N}[f](\zeta):={}&\sum _{n\in\mathbb{Z}_{N}(h^{-1}\zeta )} \biggl\{ \biggl(1+ \frac{2\beta(\zeta-nh)^{2}}{h^{2}N} \biggr)f(nh)+(\zeta -nh)f'(nh) \biggr\} \\ &{}\times \operatorname {sinc}^{2} \bigl(h^{-1} \zeta-n \bigr) \mathrm{e}^{-\frac{\beta}{N} ( h^{-1}\zeta-n )^{2}}, \end{aligned}$$
(1.5)

where \(h\in(0,2\pi/\sigma]\) and \(\beta:=(2\pi-h \sigma)/2\). For \(f\in E_{\sigma}(\varphi)\) and \(\zeta\in\mathbb{C}\), \(\vert \Im\zeta \vert < N\) we have [1]

$$ \bigl\vert f(\zeta)-\mathcal{H}_{h,N}[f](\zeta) \bigr\vert \leq2 \bigl\vert \sin ^{2} \bigl(h^{-1}\pi\zeta \bigr) \bigr\vert \varphi \bigl(\vert \Re\zeta \vert +h(N+1) \bigr) \mathcal{E}_{N} \bigl(h^{-1}\Im \zeta \bigr) \frac{\mathrm{e}^{-\beta N}}{\sqrt{\pi\beta N}}, $$
(1.6)

where

$$\begin{aligned} \mathcal{E}_{N}(t) :=& \frac{4 \mathrm{e}^{\beta t^{2}/N}}{\sqrt{\pi\beta N} (1- (t/N )^{2} )}+ \frac{\mathrm{e}^{-2\beta t}}{(1-\mathrm{e}^{-2\pi(N+t)})^{2}}+ \frac{\mathrm{e}^{2\beta t}}{(1-\mathrm{e}^{-2\pi (N-t)})^{2}} \\ =& 2\cosh(2\beta t)+O \bigl(N^{-1/2} \bigr), \quad\mbox{as } N \rightarrow \infty. \end{aligned}$$
(1.7)

The bound in (1.6) shows that the Hermite-Gauss operator is more accurate than the sinc-Gaussian operator because it has a convergence rate of order \(O (e^{-(2\pi-h\sigma)N/2}/\sqrt {N} )\). We mention here that the sinc-Gaussian and Hermite-Gauss operators have been generalized in [11] and extended to entire functions of two variables satisfying certain conditions; see [12].
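For comparison, here is a minimal sketch (again our own, not from [1]) of the Hermite-Gauss operator (1.5) on the same kind of test function; with the derivative samples supplied, the error at a real point decays like \(e^{-\beta N}\):

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def hermite_gauss(f, df, x, h, sigma, N):
    """Hermite-Gauss operator H_{h,N}[f](x) of (1.5) at a real point x."""
    beta = (2.0 * math.pi - h * sigma) / 2.0
    center = math.floor(x / h + 0.5)
    total = 0.0
    for n in range(center - N, center + N + 1):
        u = x / h - n                                   # h^{-1} x - n
        w = sinc(u) ** 2 * math.exp(-(beta / N) * u * u)
        total += ((1.0 + 2.0 * beta * (x - n * h) ** 2 / (h * h * N)) * f(n * h)
                  + (x - n * h) * df(n * h)) * w
    return total

f  = lambda t: 1.0 if t == 0 else math.sin(t) / t
df = lambda t: 0.0 if t == 0 else (t * math.cos(t) - math.sin(t)) / t**2
err = abs(hermite_gauss(f, df, 0.4, 1.0, 1.0, 8) - f(0.4))
print(err)   # decays like e^{-beta N}, beta = (2*pi - 1)/2
```

With h = σ = 1 and N = 8 we have β = (2π − 1)/2 ≈ 2.64, so the error is already far below that of the sinc-Gaussian operator at the same N, at the cost of one derivative sample per node.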

This paper is concerned with constructing a new sampling technique to approximate eigenvalues of Sturm-Liouville problems with separate-type boundary conditions using the Hermite-Gauss operator \(\mathcal{H}_{h,N}\). This sampling technique, which we call the Hermite-Gauss technique, is new and is expected to give more accurate results. Since approximate samples will be used in our sampling operator, an amplitude error appears in the scheme. For this reason, we derive estimates for the amplitude error associated with the Hermite-Gauss operator \(\mathcal {H}_{h,N}\); this is done in the next section. Section 3 contains the technique and the associated error analysis. The last section presents numerical examples and comparisons.

Amplitude error

In this section, we investigate the amplitude error associated with the Hermite-Gauss operator (1.5). The amplitude error arises when the exact values \(f^{(i)}(nh)\), \(i=0,1\), in (1.5) are replaced by approximate ones. We assume that the approximations \(\widetilde{f}^{(i)}(nh)\) are close to \(f^{(i)}(nh)\), i.e. there is a sufficiently small \(\varepsilon>0\) such that

$$ \sup_{n \in\mathbb{Z}_{N}(h^{-1}\zeta) }\bigl\vert \widetilde {f}^{(i)}(nh)-f^{(i)}(nh)\bigr\vert < \varepsilon, $$
(2.1)

for all \(i=0,1\). Now, we define the amplitude error as follows:

$$\begin{aligned} A_{N}(\zeta) :=& \mathcal{H}_{h,N}[f]( \zeta)-\mathcal {H}_{h,N}[\widetilde{f}](\zeta) \\ =&\sum_{n\in\mathbb{Z}_{N}(h^{-1}\zeta)} \bigl\{ f(nh)-\widetilde{f}(nh) \bigr\} \biggl(1+ \frac{2\beta(\zeta-nh)^{2}}{h^{2}N} \biggr) \operatorname {sinc}^{2} \bigl(h^{-1} \zeta-n \bigr) \mathrm{e}^{-\frac{\beta}{N} ( h^{-1}\zeta-n )^{2}} \\ &{}+ \sum_{n\in\mathbb{Z}_{N}(h^{-1}\zeta)} \bigl\{ f'(nh)- \widetilde {f}'(nh) \bigr\} (\zeta-nh) \operatorname {sinc}^{2} \bigl(h^{-1}\zeta-n \bigr) \mathrm{e}^{-\frac{\beta }{N} ( h^{-1}\zeta-n )^{2}}. \end{aligned}$$
(2.2)

In the following theorem, we establish a bound for the amplitude error \(A_{N}(\zeta)\) on the complex domain \(\mathbb{C}\). Strictly speaking, in this paper we need the bound of the amplitude error only on a real domain, because the eigenvalues of the Sturm-Liouville problem (3.1)-(3.3) are real numbers; in general, however, eigenvalues are not necessarily real, and this technique can be used to approximate eigenvalues of other classes of boundary value problems.

Theorem 2.1

Let \(\sigma>0\), \(h\in(0,2\pi/\sigma]\), and \(\beta:=(2\pi-h \sigma)/2\). Assume that (2.1) holds. Then we have for \(\zeta\in \mathbb{C}\), \(\vert \Im\zeta \vert < N\)

$$ \bigl\vert A_{N}(\zeta)\bigr\vert \leq2\varepsilon \biggl(1+ \frac{2\beta}{\pi ^{2}N}+\frac {h}{\pi} \biggr) (1+\sqrt{N/\beta\pi} ) \mathrm {e}^{ (2\pi +\beta h^{-1} )h^{-1} \vert \Im\zeta \vert }\mathrm{e}^{-\beta/4N}. $$
(2.3)

Proof

From the definition of the amplitude error (2.2) and in view of (2.1), we get

$$\begin{aligned} \bigl\vert A_{N}(\zeta)\bigr\vert \leq& \varepsilon \sum_{n\in\mathbb {Z}_{N}(h^{-1}\zeta)} \biggl( \bigl\vert \operatorname {sinc}^{2} \bigl(h^{-1}\zeta-n \bigr) \bigr\vert + \frac{2\beta \vert \sin^{2}(\pi(h^{-1}\zeta-n))\vert }{\pi ^{2}N} \biggr) \bigl\vert \mathrm{e}^{-\frac{\beta}{N} ( h^{-1}\zeta-n )^{2}} \bigr\vert \\ &{}+ \frac{\varepsilon h}{\pi} \sum_{n\in\mathbb{Z}_{N}(h^{-1}\zeta)} \bigl\vert \sin \bigl(\pi \bigl(h^{-1}\zeta-n \bigr) \bigr) \operatorname {sinc}\bigl(h^{-1}\zeta-n \bigr) \mathrm{e}^{-\frac {\beta}{N} ( h^{-1}\zeta-n )^{2}} \bigr\vert . \end{aligned}$$
(2.4)

Since sinc and sin are entire functions of exponential type, we have

$$ \bigl\vert \operatorname {sinc}\bigl(h^{-1}\zeta-n \bigr) \bigr\vert \leq \mathrm{e}^{h^{-1}\pi \vert \Im \zeta \vert },\qquad \bigl\vert \sin \bigl(\pi \bigl(h^{-1}\zeta-n \bigr) \bigr) \bigr\vert \leq\mathrm{e}^{h^{-1}\pi \vert \Im\zeta \vert }. $$

Therefore

$$ \bigl\vert A_{N}(\zeta)\bigr\vert \leq\varepsilon \biggl(1+ \frac{2\beta}{\pi ^{2}N}+\frac {h}{\pi} \biggr)\mathrm{e}^{2h^{-1}\pi \vert \Im\zeta \vert }\sum _{n\in \mathbb {Z}_{N}(h^{-1}\zeta)} \bigl\vert \mathrm{e}^{-\frac{\beta}{N} ( h^{-1}\zeta-n )^{2}} \bigr\vert . $$
(2.5)

Using the inequality

$$ \bigl\vert \mathrm{e}^{-\zeta^{2}} \bigr\vert \leq\mathrm{e}^{- (\Re \zeta )^{2}} \mathrm{e}^{ (\Im\zeta )^{2}} $$

in (2.5) with the hypothesis \(\vert \Im\zeta \vert < N\) implies

$$ \bigl\vert A_{N}(\zeta)\bigr\vert \leq\varepsilon \biggl(1+ \frac{2\beta}{\pi ^{2}N}+\frac {h}{\pi} \biggr)\mathrm{e}^{ (2\pi+\beta h^{-1} )h^{-1} \vert \Im \zeta \vert }\sum _{n\in\mathbb{Z}_{N}(h^{-1}\zeta)} \mathrm{e}^{-\frac {\beta }{N} ( h^{-1}\Re\zeta-n )^{2}}. $$
(2.6)

The summation in (2.6) is estimated in [3], Eq. (28), as follows:

$$ \sum_{n\in\mathbb{Z}_{N}(h^{-1}\zeta)} \mathrm{e}^{-\frac{\beta }{N} ( h^{-1}\Re\zeta-n )^{2}} \leq2 (1+\sqrt {N/\beta\pi } )\mathrm{e}^{-\beta/4N}. $$
(2.7)

Combining (2.7) and (2.6) yields (2.3). □

In the real domain, the bound on the amplitude error becomes

$$ \mathcal{A}(\varepsilon,N):=2\varepsilon \biggl(1+ \frac{2\beta}{\pi ^{2}N}+\frac{h}{\pi} \biggr) (1+\sqrt{N/\beta\pi} )\mathrm {e}^{-\beta/4N}, $$
(2.8)

which is uniform in the real variable. This bound will be used in the error analysis of the technique.
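The uniform bound (2.8) can be checked numerically. In the sketch below (our own; the perturbation \(0.9\varepsilon\cos t\) is an arbitrary choice satisfying (2.1), and we read the last factor of (2.8) as \(e^{-\beta/(4N)}\)), every sample and derivative sample is perturbed by less than ε, and the resulting deviation of the operator stays below \(\mathcal{A}(\varepsilon,N)\):

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def hermite_gauss(f, df, x, h, sigma, N):
    """Hermite-Gauss operator H_{h,N} of (1.5) at a real point x."""
    beta = (2.0 * math.pi - h * sigma) / 2.0
    center = math.floor(x / h + 0.5)
    total = 0.0
    for n in range(center - N, center + N + 1):
        u = x / h - n
        w = sinc(u) ** 2 * math.exp(-(beta / N) * u * u)
        total += ((1.0 + 2.0 * beta * (x - n * h) ** 2 / (h * h * N)) * f(n * h)
                  + (x - n * h) * df(n * h)) * w
    return total

h, sigma, N, eps = 1.0, 1.0, 8, 1e-6
beta = (2.0 * math.pi - h * sigma) / 2.0

f  = lambda t: 1.0 if t == 0 else math.sin(t) / t
df = lambda t: 0.0 if t == 0 else (t * math.cos(t) - math.sin(t)) / t**2
# perturb every sample (value and derivative) by strictly less than eps
ft  = lambda t: f(t)  + 0.9 * eps * math.cos(t)
dft = lambda t: df(t) - 0.9 * eps * math.sin(t)

x = 0.4
diff = abs(hermite_gauss(f, df, x, h, sigma, N)
           - hermite_gauss(ft, dft, x, h, sigma, N))
A = (2 * eps * (1 + 2 * beta / (math.pi**2 * N) + h / math.pi)
       * (1 + math.sqrt(N / (beta * math.pi))) * math.exp(-beta / (4 * N)))
print(diff <= A)   # the amplitude error stays below the bound (2.8)
```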

The technique and its error analysis

In this section, we describe the technique and study its error analysis, which accounts for two types of error: the truncation-type bound of the Hermite-Gauss operator and the amplitude error of Section 2. Consider the regular Sturm-Liouville problem

$$ -y''(t)+q(t)y(t)= \mu^{2}y(t),\quad t\in[0,b], \mu\in\Bbb {C}, $$
(3.1)

with separate-type boundary conditions

$$\begin{aligned}& \alpha_{11}y(0,\mu)+\alpha_{12}y'(0, \mu)=0, \end{aligned}$$
(3.2)
$$\begin{aligned}& \alpha_{21}y(b,\mu)+\alpha_{22}y'(b, \mu)=0, \end{aligned}$$
(3.3)

where \(\alpha_{ij}\in\mathbb{R}\), \(1\leq i,j\leq2\), such that \(\vert \alpha_{k1}\vert +\vert \alpha_{k2}\vert \neq0\), \(k=1,2\), and \(q(\cdot) \in L^{1}[0,b]\) is a real-valued function. Let \(y(\cdot,\mu)\) denote the solution of (3.1) satisfying the following initial conditions:

$$ y(0,\mu):=\alpha_{12},\qquad y'(0,\mu):=- \alpha_{11}. $$
(3.4)

From the theory of Sturm-Liouville problems, cf. e.g. [13, 14], the solution \(y(t,\mu)\) and its derivative \(y'(t,\mu)\) are entire functions of μ for each \(t \in[0, b]\), and problem (3.1)-(3.3) has a countable set of real and simple eigenvalues \(\{\mu^{2}_{j}\}_{j=0}^{\infty}\), which can be ordered as an increasing sequence tending to infinity,

$$ \mu^{2}_{0}< \mu^{2}_{1}< \mu^{2}_{2}< \cdots\rightarrow\infty. $$

Moreover, the eigenvalues are the zeros of the characteristic function, which is defined by

$$ \mathcal{D}(\mu):=\alpha_{21}y(b,\mu)+ \alpha_{22}y'(b,\mu). $$
(3.5)

The authors of [15] used successive approximations to prove that \(\mathcal{D}(\cdot)\) can be written as

$$\begin{aligned} \mathcal{D}(\mu) =&- \alpha_{12} \alpha_{22} \mu\sin (\mu b)-\alpha_{11}\alpha_{22} \cos(\mu b)+\alpha_{21}\alpha_{12}\sum _{n = 0}^{\infty}T^{n} \cos(\mu b) \\ &{}- \alpha_{21}\alpha_{11}\sum_{n = 0}^{\infty}T^{n} \frac{\sin(\mu b)}{\mu} + \alpha_{22}\alpha_{12}\sum _{n = 0}^{\infty}\widetilde{T}T^{n} \cos (\mu b) \\ &{}- \alpha_{22}\alpha_{11} \sum_{n = 0}^{\infty} \widetilde{T}T^{n}\frac{\sin(\mu b)}{\mu}, \end{aligned}$$
(3.6)

where the operators T and T̃ are Volterra operators acting on the space of continuous functions, \(C[0,b]\), defined, respectively, by

$$\begin{aligned}& (Ty) (x,\mu):= \int_{0}^{x}\frac{\sin\mu(x-t)}{\mu} q(t) y(t, \mu) \,dt, \end{aligned}$$
(3.7)
$$\begin{aligned}& (\widetilde{T}y) (x,\mu):= \int_{0}^{x}\cos \bigl(\mu(x-t) \bigr) q(t) y(t, \mu)\,dt, \end{aligned}$$
(3.8)

and \(T^{0}\) is the identity operator. All series in (3.6) converge uniformly on \([0,b]\) for any \(\mu\in\Bbb {C}\). As in [15], we split \(\mathcal{D}(\cdot)\) into two parts via

$$ \mathcal{D}(\mu)=\mathcal{K}_{k}(\mu)+ \mathcal{U}_{k}(\mu),\quad k \in \mathbb{N}_{0}, $$
(3.9)

where \(\mathcal{K}_{k}\) is known,

$$\begin{aligned} \mathcal{K}_{k}(\mu) :=&- \alpha_{12}\alpha_{22} \mu \sin(b \mu)-\alpha_{11} \alpha_{22}\cos(b \mu)+\alpha_{21}\alpha _{12}\sum _{n = 0}^{k}T^{n} \cos(b \mu) \\ &{}- \alpha_{21}\alpha_{11}\sum_{n = 0}^{k-1}T^{n} \frac{\sin(b \mu)}{\mu} + \alpha_{22}\alpha_{12}\sum _{n = 0}^{k}\widetilde{T}T^{n} \cos (b \mu ) \\ &{}-\alpha_{22}\alpha_{11} \sum_{n = 0}^{k-1} \widetilde{T}T^{n}\frac{\sin(b \mu)}{\mu}, \end{aligned}$$
(3.10)

and \(\mathcal{U}_{k}(\mu)\) involves the infinite sum of integral operators

$$\begin{aligned} \mathcal{U}_{k}(\mu) =& \alpha_{21}\alpha_{12}\sum_{n = k+1}^{\infty}T^{n} \cos(b \mu)-\alpha_{21}\alpha_{11}\sum _{n = k}^{\infty}T^{n} \frac{\sin(b \mu)}{\mu} \\ &{}+ \alpha_{22}\alpha_{12}\sum _{n = k+1}^{\infty}\widetilde{T}T^{n} \cos(b \mu)- \alpha_{22}\alpha_{11} \sum_{n = k}^{\infty} \widetilde{T}T^{n}\frac{\sin(b \mu)}{\mu}. \end{aligned}$$
(3.11)

Lemma 3.1

Assume that \(q(\cdot) \in L^{1}[0,b]\). Then \(\mathcal {U}_{k}(\cdot)\in E_{b}(\varphi)\) for all \(k\in\mathbb{N}_{0}\).

Proof

Since \(q(\cdot) \in L^{1}[0,b]\), the solution \(y(b,\mu)\) and its derivative \(y'(b,\mu)\) are entire functions of μ, and hence \(\mathcal {D}(\mu)\) is an entire function. Therefore \(\mathcal{U}_{k}\) is also entire, and it remains to show that \(\mathcal{U}_{k}\) satisfies condition (1.1) of the class \(E_{b}(\varphi)\). Since \(q(\cdot) \in L^{1}[0,b]\), we have, cf. e.g. [16], Eqs. (2.2)-(2.4), for all \(k\in\mathbb{N}_{0}\) and \(\mu\in\mathbb{C}\)

$$ \begin{aligned} & \Biggl\vert \sum_{n= k}^{\infty} T^{n} \bigl[\cos(t \mu) \bigr](b) \Biggr\vert \leq \rho_{k} \mathrm{e}^{b \vert \Im\mu \vert }, \\ & \Biggl\vert \sum_{n= k}^{\infty} \widetilde{T} T^{n} \bigl[\cos(t \mu ) \bigr](b) \Biggr\vert \leq\tau \rho_{k} \mathrm{e}^{b \vert \Im\mu \vert }, \end{aligned} $$
(3.12)

and

$$ \begin{aligned} & \Biggl\vert \sum_{n= k}^{\infty} T^{n} \biggl[\frac{\sin(t \mu )}{\mu} \biggr](b) \Biggr\vert \leq c b \rho_{k}\mathrm{e}^{b \vert \Im\mu \vert }, \\ & \Biggl\vert \sum_{n= k}^{\infty} \widetilde{T} T^{n} \biggl[\frac {\sin(t \mu)}{\mu} \biggr](b) \Biggr\vert \leq c b \tau \rho_{k}\mathrm{e}^{b \vert \Im\mu \vert }, \end{aligned} $$
(3.13)

where \(\rho_{k}:= \sum_{n= k}^{\infty}\frac{ (c b \tau )^{n}}{n!}\), \(\tau:=\int_{0}^{b}\vert q(t)\vert \,dt\), and \(c:=1.709\). Combining (3.12), (3.13), and (3.11), we obtain for all \(k\in\mathbb{N}_{0}\)

$$ \bigl\vert \mathcal{U}_{k}(\mu) \bigr\vert \leq M_{k} \mathrm{e}^{b \vert \Im\mu \vert }, $$
(3.14)

and thus \(\mathcal{U}_{k}(\cdot)\in E_{b}(\varphi)\), where \(\varphi :=M_{k}\) is the constant function given by

$$ M_{k}:=\vert \alpha_{21} \alpha_{12}\vert \rho_{k+1}+\vert \alpha_{21} \alpha _{11}\vert cb \rho_{k}+ \tau \vert \alpha_{22}\alpha_{12}\vert \rho_{k+1}+cb\tau \vert \alpha _{22}\alpha_{11}\vert \rho_{k}. $$
(3.15)

 □
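The constants of the lemma are easy to evaluate numerically. Below is a short sketch (our own; the choices \(b=\tau=1\), \(\alpha_{11}=\alpha_{21}=1\), \(\alpha_{12}=\alpha_{22}=0\) are illustrative, in which case \(M_{k}\) reduces to \(cb\rho_{k}\)):

```python
import math

C = 1.709   # the constant c of Lemma 3.1

def rho(k, b=1.0, tau=1.0, terms=60):
    """rho_k = sum_{n >= k} (c b tau)^n / n!  (truncated; terms=60 is ample)."""
    x = C * b * tau
    return sum(x**n / math.factorial(n) for n in range(k, terms))

def M(k, b=1.0, tau=1.0, a11=1.0, a12=0.0, a21=1.0, a22=0.0):
    """The constant M_k of (3.15) for given boundary coefficients."""
    return (abs(a21 * a12) * rho(k + 1, b, tau)
            + abs(a21 * a11) * C * b * rho(k, b, tau)
            + tau * abs(a22 * a12) * rho(k + 1, b, tau)
            + C * b * tau * abs(a22 * a11) * rho(k, b, tau))

print(rho(0), M(2))   # rho_0 = e^{c b tau}; M_k decreases rapidly in k
```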

Since \(\mathcal{U}_{k}(\cdot)\in E_{b}(\varphi)\), we approximate \(\mathcal{U}_{k}\) by the Hermite-Gauss operator (1.5), where \(h\in(0,2\pi/b]\) and \(\beta:=(2\pi-bh)/2\); then, from (1.6), we obtain

$$ \bigl\vert \mathcal{U}_{k}(\mu)- \mathcal{H}_{h,N}[\mathcal{U}_{k}](\mu ) \bigr\vert \leq \mathcal{T}_{N,h,k}(\mu), \quad\mu\in\mathbb{R}, $$
(3.16)

where the function \(\mathcal{T}_{N,h,k}\) is defined as

$$ \mathcal{T}_{N,h,k}(\mu):= 4M_{k} \bigl\vert \sin^{2} \bigl(h^{-1}\pi\mu \bigr) \bigr\vert \biggl(1+ \frac{2}{\sqrt{\pi\beta N}} \biggr) \frac{\mathrm{e}^{-\beta N}}{\sqrt{\pi\beta N}}. $$
(3.17)

In (3.16), we let \(\mu\in\mathbb{R}\) because all eigenvalues of problem (3.1)-(3.3) are real. The samples \(\mathcal{U}_{k}(nh)=\mathcal{D}(nh)-\mathcal{K}_{k}(nh)\) and \(\mathcal{U}'_{k}(nh)=\mathcal{D}'(nh)-\mathcal {K}'_{k}(nh)\), \(n\in\mathbb{Z}_{N}(h^{-1}\mu)\), cannot in general be computed explicitly, so we compute them numerically; this is why the amplitude error appears. According to (3.5), we have

$$\begin{aligned}& \mathcal{D}(nh)=\alpha_{21}y(b,nh)+\alpha_{22} \partial_{t}y(b,nh), \\& \mathcal{D}'(nh)=\alpha_{21}\partial_{\mu}y(b,nh)+ \alpha _{22}\partial ^{2}_{\mu t}y(b,nh). \end{aligned}$$

The solution \(y(b,nh)\) and its derivative with respect to t, \(\partial _{t}y(b,nh)\), can be computed directly by solving the initial value problem defined by (3.1) and (3.4) at the nodes \(\{ nh \}_{n\in\mathbb{Z}_{N}(h^{-1}\mu)}\). Alternatively, we can solve the initial value problem (3.1), (3.4) approximately to find the solution \(y(b,\mu)\) and its derivative \(\partial_{t}y(b,\mu)\) as functions of the parameter μ; consequently, we can easily calculate the derivatives \(\partial_{\mu}y(b,nh)\) and \(\partial^{2}_{\mu t}y(b,nh)\) at the nodes \(\{nh \}_{n\in \mathbb{Z}_{N}(h^{-1}\mu)}\). In all examples of Section 4, we use the Mathematica command ParametricNDSolve to compute these values numerically. Now let \(\widetilde{\mathcal{U}}_{k}(nh)\) and \(\widetilde{\mathcal{U}}'_{k}(nh)\) be the approximations of the samples \(\mathcal{U}_{k}(nh)\) and \(\mathcal{U}'_{k}(nh)\), \(n\in \mathbb {Z}_{N}(h^{-1}\mu)\), respectively. Let

$$ \sup_{n \in\mathbb{Z}_{N}(h^{-1}\mu) } \bigl\vert \mathcal {U}_{k}^{(i)}(nh)- \widetilde{\mathcal{U}}_{k}^{(i)}(nh) \bigr\vert < \varepsilon,\quad i=0,1. $$

Therefore we get, cf. Theorem 2.1,

$$ \bigl\vert \mathcal{H}_{h,N}[\mathcal{U}_{k}]( \mu)-\mathcal {H}_{h,N}[\widetilde{\mathcal{U}}_{k}](\mu) \bigr\vert \leq\mathcal {A}(\varepsilon,N),\quad \mu\in\mathbb{R}, $$
(3.18)

where \(\mathcal{A}(\varepsilon,N)\) is defined in (2.8). Now let \(\widetilde{\mathcal{D}}_{N,k}(\mu):=\mathcal{K}_{k}(\mu)+\mathcal {H}_{h,N}[\widetilde{\mathcal{U}}_{k}](\mu)\). Combining (3.9), (3.16), and (3.18) implies

$$ \bigl\vert \mathcal{D}(\mu)-\widetilde{\mathcal{D}}_{N,k}( \mu) \bigr\vert \leq \mathcal{T}_{N,h,k}(\mu) +\mathcal{A}( \varepsilon,N), \quad \mu\in \mathbb{R}. $$
(3.19)

Now we determine enclosure intervals for the eigenvalues. Let \((\mu ^{*})^{2}\) be an eigenvalue, that is, \(\mathcal{D}(\mu^{*})=0\), and let \((\mu_{N,k})^{2}\) be its approximation, i.e. \(\widetilde {\mathcal{D}}_{N,k}(\mu_{N,k})=0\). In view of (3.19), we obtain

$$ \bigl\vert \widetilde{\mathcal{D}}_{N,k} \bigl(\mu^{*} \bigr) \bigr\vert \leq\mathcal {T}_{N,h,k} \bigl(\mu^{*} \bigr) + \mathcal{A}(\varepsilon,N). $$

Since \(\widetilde{\mathcal{D}}_{N,k}\) is known and \(\mathcal {T}_{N,h,k}(\mu^{*}) +\mathcal{A}(\varepsilon,N)\) is computable, we can define an enclosure for \(\mu^{*}\) by solving the following system of inequalities:

$$ -\mathcal{T}_{N,h,k} \bigl(\mu^{*} \bigr) -\mathcal{A}( \varepsilon,N) \leq \widetilde{\mathcal{D}}_{N,k} \bigl( \mu^{*} \bigr)\leq\mathcal{T}_{N,h,k} \bigl(\mu^{*} \bigr) +\mathcal{A}( \varepsilon,N). $$

The solution set is an interval, which we denote by \(I_{N,k,\varepsilon}\). In the following theorem, we find a bound for the error \(\vert \mu^{*}-\mu_{N,k}\vert \).

Theorem 3.2

Let \((\mu^{*})^{2}\) be an eigenvalue of problem (3.1)-(3.3). For sufficiently large N, we have the following estimate:

$$ \bigl\vert \mu^{*}-\mu_{N,k}\bigr\vert < \frac{\mathcal{T}_{N,h,k}(\mu_{N,k}) +\mathcal {A}(\varepsilon,N)}{ \inf_{\zeta\in I_{N,k,\varepsilon} } \vert \mathcal{D}'(\zeta)\vert }, $$
(3.20)

for all \(k\in\mathbb{N}_{0}\). Moreover, \(\vert \mu^{*}-\mu _{N,k}\vert \longrightarrow0\) when \(N\rightarrow\infty\) and \(\varepsilon\rightarrow0\).

Proof

Since \(\mathcal{D} (\mu_{N,k} )-\widetilde{\mathcal {D}}_{N,k} (\mu_{N,k} )=\mathcal{D} (\mu_{N,k} )-\mathcal{D}(\mu^{*})\), replacing μ by \(\mu_{N,k}\) in (3.19) gives

$$ \bigl\vert \mathcal{D}(\mu_{N,k})- \mathcal{D} \bigl( \mu^{*} \bigr) \bigr\vert \leq\mathcal{T}_{N,h,k}( \mu_{N,k}) +\mathcal {A}(\varepsilon,N). $$

Using the mean value theorem yields

$$ \bigl\vert \bigl(\mu^{*}-\mu_{N,k} \bigr) \mathcal{D}'(\zeta)\bigr\vert \leq\mathcal {T}_{N,h,k}(\mu _{N,k}) +\mathcal{A}(\varepsilon,N), \quad\zeta \in J_{N,k} \subset I_{N,k,\varepsilon}, $$
(3.21)

for some \(\zeta\in J_{N,k}:= (\min\{\mu^{*},\mu_{N,k}\},\max\{ \mu ^{*},\mu_{N,k}\} )\). Since the zeros of \(\mathcal{D}(\mu)\) are simple, for sufficiently large N we have \(\inf_{\zeta\in I_{N,k,\varepsilon} } \vert \mathcal{D}'(\zeta)\vert >0\), and then we get (3.20). In view of (3.17) and (2.8), the right-hand side of (3.20) goes to zero uniformly as \(N\rightarrow\infty\) and \(\varepsilon\rightarrow0\), and therefore \(\vert \mu^{*}-\mu _{N,k}\vert \rightarrow0\) for all \(k\in\mathbb{N}_{0}\). □

Examples and comparisons

This section includes three examples to illustrate our technique. All of these examples were computed in [15] with the Hermite sampling technique, where the results were compared with those of the classical sinc technique. In our approximations, \(\mathcal{K}_{k}\) of (3.10) contains fewer terms than are used in [15]. Note that, for fixed N, the accuracy of any sampling technique increases as k increases. As is well known, the sinc-Gaussian outperforms the other sampling techniques (classical sinc, generalized sinc, Hermite) because the convergence rates of those techniques are only of polynomial order; see e.g. [7, 8, 10, 15-17]. As mentioned before, the sinc-Gaussian has a convergence rate of exponential order. Therefore, we compare our results only with those of the sinc-Gaussian technique. As predicted by the error estimates, the Hermite-Gauss technique gives more accurate results than the sinc-Gaussian technique, and for fixed N the accuracy increases as h decreases, at no additional cost except that the function is approximated on a smaller domain. Denote by \(E_{G}\) and \(E_{H}\) the absolute errors of the sinc-Gaussian and Hermite-Gauss techniques, respectively. We use Mathematica to compute the following examples.

Example 4.1

Consider the Sturm-Liouville problem

$$ -y''(t)- y(t)=\mu^{2}y(t), \quad t \in[0,1], $$
(4.1)

with the separate boundary condition of the form

$$ y'(0,\mu)=y(1,\mu)=0. $$
(4.2)

In this case, the characteristic function is

$$\begin{aligned} \mathcal{D}(\mu):=\cos \bigl(\sqrt{1+\mu^{2}} \bigr), \end{aligned}$$
(4.3)

and the exact eigenvalues are \(\mu^{2}_{l}:=(2 l+1)^{2} \pi^{2}/4-1\), \(l \in\mathbb{Z}\). Taking \(k=2\) in (3.10) and making some computations gives

$$ \mathcal{K}_{2}(\mu)=\cos(\mu)- \frac{\sin(\mu)}{2\mu }+ \frac{\sin(\mu)-\mu\cos(\mu)}{8\mu^{3}}, $$

and then \(\mathcal{U}_{2}\in E_{1}(\varphi)\) with a constant φ. Table 1 shows the first five approximate eigenvalues of problem (4.1)-(4.2) using our technique with \(N=7\) and \(h=1\), compared with the results of the sinc-Gaussian technique.

Table 1 Comparison of Hermite-Gauss and sinc-Gaussian, \(\pmb{N=7}\) and \(\pmb{h=1}\)

Example 4.2

The boundary value problem

$$\begin{aligned}& -y''(t)- y(t)=\mu^{2}y(t),\quad t \in[0,1], \end{aligned}$$
(4.4)
$$\begin{aligned}& y(0,\mu)=y(1,\mu)=0, \end{aligned}$$
(4.5)

is a special case of problem (3.1)-(3.3) when \(\alpha _{11}=\alpha_{21}=1\) and \(\alpha_{12}=\alpha_{22}=0\). The characteristic function of this problem is

$$ \mathcal{D}(\mu):=-\frac{\sin ( \sqrt{1+\mu^{2}} )}{\sqrt {1+\mu^{2}}}, $$
(4.6)

and the exact eigenvalues are \(\mu^{2}_{l}:=(\pi l)^{2}-1\), \(l \in \mathbb{Z}\). Taking \(k=2\) in (3.10), we have after some calculations

$$ \mathcal{K}_{2}(\mu)=- \frac{\sin\mu}{\mu}+\frac{\sin(\mu )-\mu\cos(\mu)}{2\mu^{3}}. $$

Table 2 lists the first five approximate eigenvalues using our technique with \(N=7\) and \(h=1\) in comparison with the results of the sinc-Gaussian technique.
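This example can be reproduced end to end in a short script (our own sketch: \(\mathcal{U}_{2}=\mathcal{D}-\mathcal{K}_{2}\) is sampled from the closed form (4.6), \(\mathcal{U}'_{2}\) is approximated by central differences rather than an exact formula, and the first eigenvalue is located by bisection on \(\widetilde{\mathcal{D}}_{N,2}\)):

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def hermite_gauss(f, df, x, h, sigma, N):
    """Hermite-Gauss operator H_{h,N} of (1.5) at a real point x."""
    beta = (2.0 * math.pi - h * sigma) / 2.0
    center = math.floor(x / h + 0.5)
    total = 0.0
    for n in range(center - N, center + N + 1):
        u = x / h - n
        w = sinc(u) ** 2 * math.exp(-(beta / N) * u * u)
        total += ((1.0 + 2.0 * beta * (x - n * h) ** 2 / (h * h * N)) * f(n * h)
                  + (x - n * h) * df(n * h)) * w
    return total

def D(mu):                         # characteristic function (4.6)
    w = math.sqrt(1.0 + mu * mu)
    return -math.sin(w) / w

def K2(mu):                        # known part K_2, with its limit at mu = 0
    if abs(mu) < 1e-6:
        return -1.0 + 1.0 / 6.0
    return -math.sin(mu) / mu + (math.sin(mu) - mu * math.cos(mu)) / (2.0 * mu**3)

U2  = lambda mu: D(mu) - K2(mu)                                  # unknown part
dU2 = lambda mu, d=1e-5: (U2(mu + d) - U2(mu - d)) / (2.0 * d)   # central difference

def D_tilde(mu, h=1.0, N=7):       # K_2 + H_{h,N}[U_2], with b = sigma = 1
    return K2(mu) + hermite_gauss(U2, dU2, mu, h, 1.0, N)

# bisection for the first eigenvalue; exact value is mu_1 = sqrt(pi^2 - 1)
a, b = 2.5, 3.5
for _ in range(60):
    m = 0.5 * (a + b)
    if D_tilde(a) * D_tilde(m) <= 0:
        b = m
    else:
        a = m
print(0.5 * (a + b), math.sqrt(math.pi**2 - 1.0))
```

The computed root agrees with \(\mu_{1}=\sqrt{\pi^{2}-1}\approx2.97837\) to roughly the accuracy predicted by (3.20).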

Table 2 Comparison of Hermite-Gauss and sinc-Gaussian, \(\pmb{N=7}\) and \(\pmb{h=1}\)

Example 4.3

In this example, we introduce the Sturm-Liouville problem

$$\begin{aligned}& -y''(t)+t^{2} y(t)= \mu^{2}y(t),\quad t \in[0,1], \end{aligned}$$
(4.7)
$$\begin{aligned}& y'(0,\mu)=y'(1,\mu)=0. \end{aligned}$$
(4.8)

The characteristic function is

$$ \mathcal{D}(\mu):=-{}_{1}F_{1} \biggl( \frac{1}{4} \bigl(1-\mu ^{2} \bigr),\frac{1}{2};1 \biggr)+ \bigl(1-\mu^{2} \bigr){}_{1}F_{1} \biggl(1+\frac {1}{4} \bigl(1-\mu^{2} \bigr),\frac{3}{2};1 \biggr), $$
(4.9)

where \({}_{1}F_{1}\) is the confluent hypergeometric function. In this case, putting \(k=0\) in (3.10) yields, after some calculations,

$$ \mathcal{K}_{0}(\mu):=-\mu\sin(\mu)+ \frac{\mu(3+2\mu ^{2})\cos\mu+3(\mu^{2}-1)\sin\mu}{12\mu^{3}}. $$

As in the previous examples, we summarize the results of this example in Table 3. To compute the absolute errors, the exact eigenvalues are themselves computed to high precision with Mathematica.

Table 3 Comparison of Hermite-Gauss and sinc-Gaussian, \(\pmb{N=5}\) and \(\pmb{h=1}\)

References

  1. Asharabi, RM, Prestin, J: A modification of Hermite sampling with a Gaussian multiplier. Numer. Funct. Anal. Optim. 36, 419-437 (2015)


  2. Schmeisser, G, Stenger, F: Sinc approximation with a Gaussian multiplier. Sampl. Theory Signal Image Process., Int. J. 6, 199-221 (2007)


  3. Annaby, MH, Asharabi, RM: Computing eigenvalues of boundary-value problems using sinc-Gaussian method. Sampl. Theory Signal Image Process., Int. J. 7, 293-311 (2008)


  4. Annaby, MH, Tharwat, MM: A sinc-Gaussian technique for computing eigenvalues of second-order linear pencils. Appl. Numer. Math. 63, 129-137 (2013)


  5. Tharwat, MM, Al-Harbi, SM: Approximation of eigenvalues of boundary value problems. Bound. Value Probl. (2014). doi:10.1186/1687-2770-2014-51


  6. Tharwat, MM, Bhrawy, AH, Alofi, AS: Approximation of eigenvalues of discontinuous Sturm-Liouville problems with eigenparameter in all boundary conditions. Bound. Value Probl. (2013). doi:10.1186/1687-2770-2013-132


  7. Boumenir, A, Chanane, B: Eigenvalues of Sturm-Liouville systems using sampling theory. Appl. Anal. 62, 323-334 (1996)


  8. Annaby, MH, Asharabi, RM: On sinc-based method in computing eigenvalues of boundary-value problems. SIAM J. Numer. Anal. 46(2), 671-690 (2008)


  9. Annaby, MH, Asharabi, RM: Approximating eigenvalues of discontinuous problems by sampling theorems. J. Numer. Math. 3, 163-183 (2008)


  10. Annaby, MH, Tharwat, MM: On computing eigenvalues of second-order linear pencils. IMA J. Numer. Anal. 27, 366-380 (2007)


  11. Asharabi, RM: Generalized sinc-Gaussian sampling involving derivatives. Numer. Algorithms (2016). doi:10.1007/s11075-016-0129-4


  12. Asharabi, RM, Prestin, J: On two-dimensional classical and Hermite sampling. IMA J. Numer. Anal. 36, 851-871 (2016)


  13. Levitan, BM, Sargsjan, IS: Introduction to Spectral Theory: Selfadjoint Ordinary Differential Operators. Translation of Mathematical Monographs, vol. 39. Am. Math. Soc., Providence (1975)


  14. Titchmarsh, EC: Eigenfunction Expansions Associated with Second-Order Differential Equations, 2nd edn. Clarendon Press, Oxford (1962)


  15. Annaby, MH, Asharabi, RM: Computing eigenvalues of Sturm-Liouville problems by Hermite interpolations. Numer. Algorithms 60, 355-367 (2012)


  16. Boumenir, A: The sampling method for Sturm-Liouville problems with the eigenvalue parameter in the boundary condition. Numer. Funct. Anal. Optim. 21, 67-75 (2000)


  17. Chanane, B: Computing the eigenvalues of singular Sturm-Liouville problems using the regularized sampling method. Appl. Math. Comput. 184, 972-978 (2007)



Acknowledgements

The author would like to thank the anonymous referees for their constructive comments.

Author information

Corresponding author

Correspondence to Rashad M Asharabi.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Asharabi, R.M. A Hermite-Gauss method for the approximation of eigenvalues of regular Sturm-Liouville problems. J Inequal Appl 2016, 154 (2016). https://doi.org/10.1186/s13660-016-1098-9


MSC

  • 34L16
  • 65L15
  • 94A20

Keywords

  • sinc methods
  • Sturm-Liouville problem
  • error bounds
  • convergence rate