# On conjectures of Stenger in the theory of orthogonal polynomials

## Abstract

The conjectures in the title deal with the zeros $$x_{j}$$, $$j=1,2, \ldots ,n$$, of an orthogonal polynomial of degree $$n>1$$ relative to a nonnegative weight function w on an interval $$[a,b]$$ and with the respective elementary Lagrange interpolation polynomials $$\ell _{k} ^{(n)}$$ of degree $$n-1$$ taking on the value 1 at the zero $$x_{k}$$ and the value 0 at all the other zeros $$x_{j}$$. They involve matrices of order n whose elements are integrals of $$\ell _{k}^{(n)}$$, either over the interval $$[a,x_{j}]$$ or the interval $$[x_{j},b]$$, possibly containing w as a weight function. The claim is that all eigenvalues of these matrices lie in the open right half of the complex plane. This is proven to be true for Legendre polynomials and a special Jacobi polynomial. Ample evidence for the validity of the claim is provided for a variety of other classical, and nonclassical, weight functions when the integrals are weighted, but not necessarily otherwise. Even in the case of weighted integrals, however, the conjecture is found by computation to be false for a piecewise constant positive weight function. Connections are mentioned with the theory of collocation Runge–Kutta methods in ordinary differential equations.

## Introduction

Let w be a nonnegative weight function on $$[a,b]$$, $$-\infty \leq a< b \leq \infty$$, and $$p_{n}$$ be the orthonormal polynomial of degree n relative to the weight function w. Let $$\{x_{j}\}_{j=1}^{n}$$ be the zeros of $$p_{n}$$ and

$$\ell _{k}^{(n)}(x)=\prod _{\stackrel{1\leq j\leq n}{j\neq k}} \frac{x-x _{j}}{x_{k}-x_{j}}, \quad k=1,2,\ldots ,n,$$
(1)

the elementary Lagrange interpolation polynomial of degree $$n-1$$ having the value 1 at $$x_{k}$$ and 0 at all the other zeros $$x_{j}$$. The Stenger conjectures relate to the eigenvalues of matrices of order n whose elements are certain integrals involving the elementary Lagrange polynomials (1), the claim being that the real part of all eigenvalues is positive. We distinguish between the restricted Stenger conjecture [8, §2.3, Remark 2.2], in which the matrices are

\begin{aligned} &U_{n}=\bigl[u_{jk}^{(n)} \bigr], \quad u_{jk}^{(n)}= \int _{a}^{x_{j}}\ell _{k} ^{(n)}(x) \,\mathrm{d}x, \\ &V_{n}=\bigl[v_{jk}^{(n)}\bigr], \quad v_{jk}^{(n)}= \int _{x_{j}}^{b}\ell _{k} ^{(n)}(x)\, \mathrm{d}x, \end{aligned}\quad j, k=1,2,\ldots ,n,
(2)

and the extended Stenger conjecture (called “new conjecture” in [8, §2.4]), in which the matrices are

\begin{aligned} &U_{n}=\bigl[u_{jk}^{(n)} \bigr], \quad u_{jk}^{(n)}= \int _{a}^{x_{j}}\ell _{k} ^{(n)}(x)w(x)\,\mathrm{d}x, \\ &V_{n}=\bigl[v_{jk}^{(n)}\bigr], \quad v_{jk}^{(n)}= \int _{x_{j}}^{b}\ell _{k} ^{(n)}(x)w(x)\,\mathrm{d}x, \end{aligned} \quad j, k=1,2,\ldots ,n,
(3)

where w is assumed to be positive a.e. on $$[a,b]$$. (For the fact that this assumption is essential, see Sects. 7 and 8.) Thus, in the latter conjecture the elements of $$U_{n}$$, $$V_{n}$$ depend on the weight function w not only through the polynomials $$\ell _{k}^{(n)}$$, but also by virtue of w being part of the integration process. Note that, unlike for the extended conjecture, the restricted conjecture requires $$[a,b]$$ to be a finite interval, at least for one of the two matrices $$U_{n}$$, $$V_{n}$$.
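To make the definitions concrete, here is a small numerical sketch (in Python with NumPy rather than the Matlab routines used elsewhere in the paper; all helper names are ours) that assembles the restricted-conjecture matrix $$U_{n}$$ of (2) for the Legendre weight $$w=1$$ on $$[-1,1]$$ and inspects the real parts of its eigenvalues. Since $$\ell _{k}^{(n)}$$ has degree $$n-1$$, an n-point Gauss–Legendre rule on the mapped interval evaluates each entry exactly.

```python
import numpy as np

def lagrange_values(xz, k, pts):
    """Values at pts of the elementary Lagrange polynomial ell_k of (1)."""
    n = len(xz)
    num = np.prod([pts - xz[j] for j in range(n) if j != k], axis=0)
    den = np.prod([xz[k] - xz[j] for j in range(n) if j != k])
    return num / den

def restricted_U(n):
    """Matrix U_n of (2) for the Legendre weight w = 1 on [-1, 1]."""
    xz, _ = np.polynomial.legendre.leggauss(n)   # zeros of P_n
    t, wq = np.polynomial.legendre.leggauss(n)   # exact up to degree 2n-1
    U = np.empty((n, n))
    for j in range(n):
        # map [-1, 1] onto [-1, x_j]; the integrand has degree n-1
        s = 0.5 * (1 + xz[j]) * t - 0.5 * (1 - xz[j])
        for k in range(n):
            U[j, k] = 0.5 * (1 + xz[j]) * (wq @ lagrange_values(xz, k, s))
    return U

for n in (2, 5, 10, 20):
    ev = np.linalg.eigvals(restricted_U(n))
    print(n, ev.real.min() > 0)   # True: proved for Legendre in Sect. 4
```

As a sanity check, $$\sum _{k}\ell _{k}^{(n)}(x)=1$$ implies that row j of $$U_{n}$$ must sum to $$1+x_{j}$$.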

We also note that the order in which the $$x_{j}$$ are arranged is immaterial: a permutation of the indices $$j=1,2,\ldots ,n$$ permutes the rows and columns of $$U_{n}$$ (resp. $$V_{n}$$) in the same way, which amounts to a similarity transformation by a permutation matrix and therefore leaves the eigenvalues unchanged.

The weight function $$w(x)=1$$ on $$[-1,1]$$ is special in that the extended conjecture coincides with the restricted one; in this case we speak simply of the Stenger conjecture. Its proof will be given in Sect. 4. In Sect. 2 we will prove that the eigenvalues of $$U_{n}$$ and $$V_{n}$$ in the restricted as well as in the extended Stenger conjecture are the same if w is a symmetric weight function. In Sect. 3 we show that, both in the restricted and the extended conjecture, the matrix $$U_{n}^{(\alpha , \beta )}$$ belonging to the Jacobi weight function $$w(x)=(1-x)^{\alpha }(1+x)^{\beta }$$ on $$[-1,1]$$ with parameters α, β is the same as the matrix $$V_{n}^{(\beta ,\alpha )}$$ with the Jacobi parameters interchanged. Section 5, devoted to the restricted Stenger conjecture, shows, partly by numerical computation, that the conjecture may be true for large classes of weight functions, but can also fail for others. In contrast, Sect. 6 provides ample computational support for the validity of the extended Stenger conjecture for a variety of classical and nonclassical weight functions. Discrete weight functions are considered in Sect. 7. In Sect. 8 the extended Stenger conjecture is challenged in the case of a piecewise constant positive weight function. Related work on collocation Runge–Kutta methods is mentioned in the Appendix.

## Symmetric weight functions

We assume here the weight function $$w(x)$$ to be symmetric, i.e., $$w(-x)=w(x)$$ on $$[-b,b]$$, $$0< b\leq \infty$$, and the zeros $$x_{j}$$ of the corresponding orthonormal polynomial $$p_{n}$$ ordered increasingly:

$$-b< x_{1}< x_{2}< \cdots < x_{n}< b.$$

We then have, by symmetry,

$$x_{j}+x_{n+1-j}=0, \quad j=1,2,\ldots ,n.$$
(4)

### Theorem 1

If w is symmetric, the eigenvalues of $$V_{n}$$ are the same as those of $$U_{n}$$, both in the case of the restricted (where $$b<\infty$$) and the extended Stenger conjecture.

### Proof

We present the proof for the extended conjecture, the one for the restricted conjecture being the same (just drop the factor $$w(t)$$ in all integrals). From the definition of $$V_{n}$$ in (3), we have

$$v_{jk}= \int _{x_{j}}^{b} \ell _{k}^{(n)}(x)w(x) \,\mathrm{d}x= \int _{-b} ^{-x_{j}} \ell _{k}^{(n)}(-t)w(t) \,\mathrm{d}t,$$

and, therefore, by (4),

$$v_{jk}= \int _{-b}^{x_{n+1-j}} \ell _{k}^{(n)}(-t)w(t) \,\mathrm{d}t.$$

Since $$\ell _{k}^{(n)}(-t)=1$$ if $$-t=x_{k}$$, that is, $$t=-x_{k}=x_{n+1-k}$$, and $$\ell _{k}^{(n)}(-t)=0$$ if $$t=x_{j}$$, $$j\neq n+1-k$$, we get

$$v_{jk}= \int _{-b}^{x_{n+1-j}} \ell _{n+1-k}^{(n)}(x)w(x) \,\mathrm{d}x,$$

thus, by (3) (with $$a=-b$$),

$$v_{jk}=u_{n+1-j,n+1-k}.$$

In matrix form, this can be written as

$$V_{n}=J_{n} U_{n} J_{n},$$

where $$J_{n}$$ is the exchange matrix of order n, having ones on the antidiagonal and zeros elsewhere. Since $$J_{n}^{-1}=J_{n}$$, this is a similarity transformation of $$U_{n}$$. Hence, $$V_{n}$$ and $$U_{n}$$ have the same eigenvalues. □
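The relation $$v_{jk}=u_{n+1-j,n+1-k}$$ is easy to confirm numerically. The sketch below (Python/NumPy, with our own helper names; a minimal check, not the paper's code) builds both matrices of (2) for the symmetric Legendre weight and verifies that $$V_{n}$$ equals $$U_{n}$$ conjugated by the exchange matrix (ones on the antidiagonal).

```python
import numpy as np

def ell(xz, k, pts):
    """Elementary Lagrange polynomial ell_k of (1), evaluated at pts."""
    n = len(xz)
    num = np.prod([pts - xz[j] for j in range(n) if j != k], axis=0)
    return num / np.prod([xz[k] - xz[j] for j in range(n) if j != k])

def matrices_UV(n):
    """U_n and V_n of (2) for the (symmetric) Legendre weight on [-1, 1]."""
    xz, _ = np.polynomial.legendre.leggauss(n)   # zeros of P_n
    t, wq = np.polynomial.legendre.leggauss(n)   # exact up to degree 2n-1
    U = np.empty((n, n)); V = np.empty((n, n))
    for j in range(n):
        sU = 0.5 * (1 + xz[j]) * t - 0.5 * (1 - xz[j])   # [-1, x_j]
        sV = 0.5 * (1 - xz[j]) * t + 0.5 * (1 + xz[j])   # [x_j, 1]
        for k in range(n):
            U[j, k] = 0.5 * (1 + xz[j]) * (wq @ ell(xz, k, sU))
            V[j, k] = 0.5 * (1 - xz[j]) * (wq @ ell(xz, k, sV))
    return U, V

n = 7
U, V = matrices_UV(n)
J = np.fliplr(np.eye(n))               # exchange matrix
print(np.allclose(V, J @ U @ J))       # v_{jk} = u_{n+1-j,n+1-k}
```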

## Jacobi weight functions

In this section we look at Jacobi weight functions

$$w^{(\alpha ,\beta )}(x)=(1-x)^{\alpha }(1+x)^{\beta }\quad \text{on } [-1,1],$$
(5)

where α, β are greater than −1.

Switching Jacobi parameters has the effect of turning a U-matrix into a V-matrix and vice versa. More precisely, we have the following.

### Theorem 2

Let $$U_{n}^{(\alpha ,\beta )}$$ be the matrix $$U_{n}$$ for Jacobi polynomials with parameters α, β, and $$V_{n}^{(\beta ,\alpha )}$$ be the matrix $$V_{n}$$ for Jacobi polynomials with parameters β, α. Then

$$U_{n}^{(\alpha ,\beta )}=V_{n}^{(\beta ,\alpha )},$$
(6)

both in the restricted and extended Stenger conjecture.

### Proof

We give the proof for the restricted Stenger conjecture. It is the same for the extended conjecture, using $$w^{(\alpha ,\beta )} (-x)=w^{(\beta ,\alpha )}(x)$$.

We denote quantities x related to Jacobi parameters α, β by $$x^{*}$$ after interchange of the parameters. Since the Jacobi polynomial satisfies $$P_{n}^{(\alpha ,\beta )}(x)=(-1)^{n} P _{n}^{(\beta ,\alpha )}(-x)$$ (cf. [9, Eq. (4.1.3)]), we can take $$x_{j}^{*}=x_{j}^{(\beta ,\alpha )}=-x_{j}=-x_{j}^{(\alpha ,\beta )}$$ for the zeros of $$P_{n}^{(\beta ,\alpha )}$$. Noting that

$$\ell _{k}^{(n)}(x;\alpha ,\beta )=\prod _{j\neq k}\frac{x-x_{j}}{x_{k}-x _{j}} =(-1)^{n-1}\prod_{j\neq k} \frac{x+x_{j}^{*}}{x_{k}^{*}-x_{j}^{*}}= \prod_{j\neq k} \frac{(-x)-x_{j}^{*}}{x_{k}^{*}-x_{j}^{*}}=\ell _{k} ^{(n)}(-x;\beta ,\alpha ),$$

we get

$$u_{jk}^{(\alpha ,\beta )}= \int _{-1}^{x_{j}} \ell _{k}^{(n)}(t; \alpha , \beta )\,\mathrm{d}t = \int _{-1}^{x_{j}}\ell _{k}^{(n)}(-t; \beta ,\alpha )\,\mathrm{d}t= \int _{x_{j}^{*}}^{1} \ell _{k}^{(n)}(x; \beta ,\alpha ) \,\mathrm{d}x=v_{jk}^{(\beta ,\alpha )}.$$

□
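Theorem 2 lends itself to a quick numerical confirmation. In the sketch below (Python/NumPy; the Golub–Welsch helper jacobi_zeros and the rest are our own, not the paper's OPQ routines) the zeros of $$P_{n}^{(\beta ,\alpha )}$$ are taken as $$-x_{j}$$, exactly as in the proof, and the two restricted-conjecture matrices of (2) are compared entrywise.

```python
import numpy as np

def jacobi_zeros(n, a, b):
    """Zeros of P_n^{(a,b)} via the Golub-Welsch (Jacobi-matrix) method,
    using the standard monic three-term recurrence coefficients."""
    al = [(b - a) / (a + b + 2)]
    be = []
    for k in range(1, n):
        al.append((b*b - a*a) / ((2*k + a + b) * (2*k + a + b + 2)))
        if k == 1:
            be.append(4*(1 + a)*(1 + b) / ((a + b + 2)**2 * (a + b + 3)))
        else:
            be.append(4*k*(k + a)*(k + b)*(k + a + b)
                      / ((2*k + a + b)**2 * ((2*k + a + b)**2 - 1)))
    T = np.diag(al) + np.diag(np.sqrt(be), 1) + np.diag(np.sqrt(be), -1)
    return np.sort(np.linalg.eigvalsh(T))

def ell(xz, k, pts):
    n = len(xz)
    num = np.prod([pts - xz[j] for j in range(n) if j != k], axis=0)
    return num / np.prod([xz[k] - xz[j] for j in range(n) if j != k])

def restricted_UV(xz):
    """U_n and V_n of (2) on [-1, 1] for the given nodes xz."""
    n = len(xz)
    t, wq = np.polynomial.legendre.leggauss(n)   # exact: ell_k has degree n-1
    U = np.empty((n, n)); V = np.empty((n, n))
    for j in range(n):
        sU = 0.5 * (1 + xz[j]) * t - 0.5 * (1 - xz[j])
        sV = 0.5 * (1 - xz[j]) * t + 0.5 * (1 + xz[j])
        for k in range(n):
            U[j, k] = 0.5 * (1 + xz[j]) * (wq @ ell(xz, k, sU))
            V[j, k] = 0.5 * (1 - xz[j]) * (wq @ ell(xz, k, sV))
    return U, V

a, b, n = 1.5, -0.4, 6
xz = jacobi_zeros(n, a, b)
Uab, _ = restricted_UV(xz)
_, Vba = restricted_UV(-xz)    # zeros of P_n^{(b,a)} are -x_j, as in the proof
print(np.allclose(Uab, Vba))   # eq. (6)
```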

## Proof of the Stenger conjecture for Legendre polynomials

By virtue of Theorem 1, it suffices to consider the matrix $$U_{n}$$.

Let $$\lambda \in {\mathbb{C}}$$ be an eigenvalue of $$U_{n}$$ and $$y=[y_{1},y_{2},\ldots ,y_{n}]^{T}\in {\mathbb{C}}^{n}$$ be a corresponding eigenvector,

$$U_{n} y=\lambda y, \quad y\neq [0,0,\ldots ,0]^{T},$$
(7)

so that

$$\int _{-1}^{x_{i}} \Biggl( \sum _{j=1}^{n} \ell _{j}^{(n)}(x)y_{j} \Biggr) \,\mathrm{d}x =\lambda y_{i}, \quad i=1,2,\ldots ,n.$$
(8)

Let $$y(x)\in {\mathbb{P}}_{n-1}$$ be the unique polynomial of degree $$\leq n-1$$ interpolating to $$y_{j}$$ at $$x_{j}$$, $$j=1,2,\ldots ,n$$. By the Lagrange interpolation formula and (8), we then have

$$\int _{-1}^{x_{i}} y(t)\,\mathrm{d}t=\lambda y(x_{i}), \quad i=1,2, \ldots ,n.$$
(9)

With $$w_{i}$$, $$i=1,2,\ldots ,n$$, denoting the weights of the n-point Gauss–Legendre quadrature formula, multiply (9) by $$w_{i}\overline{y(x_{i})}$$ and sum over i to get

$$\sum_{i=1}^{n} w_{i} \overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) \,\mathrm{d}t =\lambda \sum _{i=1}^{n} w_{i} \bigl\vert y(x_{i}) \bigr\vert ^{2}.$$

Since $$\overline{y(x)}\int _{-1}^{x} y(t)\,\mathrm{d}t$$ is a polynomial of degree $$2n-1$$, and n-point Gauss quadrature is exact for any such polynomial, and since $$|y(x)|^{2}$$ is a polynomial of degree $$2n-2$$, we have

$$\int _{-1}^{1} \overline{y(x)} \biggl( \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr) \, \mathrm{d}x =\lambda \int _{-1}^{1} \bigl\vert y(x) \bigr\vert ^{2} \,\mathrm{d}x.$$
(10)

Integration by parts on the left yields the identity

$$\int _{-1}^{1} \overline{y(x)} \biggl( \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr) \, \mathrm{d}x + \int _{-1}^{1} y(x) \biggl( \int _{-1}^{x} \overline{y(t)}\,\mathrm{d}t \biggr) \,\mathrm{d}x = \biggl\vert \int _{-1}^{1} y(t)\,\mathrm{d}t \biggr\vert ^{2}.$$
(11)

The real part of the left-hand side of (10) is

$$\frac{1}{2} \biggl[ \int _{-1}^{1} \overline{y(x)} \biggl( \int _{-1} ^{x} y(t)\,\mathrm{d}t \biggr) \, \mathrm{d}x + \int _{-1}^{1} y(x) \biggl( \int _{-1}^{x} \overline{y(t)}\,\mathrm{d}t \biggr) \,\mathrm{d}x \biggr],$$

which, by (11), equals $$\frac{1}{2} \vert \int _{-1}^{1} y(t) \,\mathrm{d}t \vert ^{2}$$. Therefore, taking the real part on the right of (10) yields

$$\operatorname{Re}\lambda \int _{-1}^{1} \bigl\vert y(x) \bigr\vert ^{2}\,\mathrm{d}x= \frac{1}{2} \biggl\vert \int _{-1}^{1} y(t)\,\mathrm{d}t \biggr\vert ^{2}.$$
(12)

From this, it follows that $$\operatorname{Re}\lambda \geq 0$$.

To prove strict positivity of Reλ, we have to show that the integral on the right of (12) does not vanish. To do this, we look at $$\int _{-1}^{x} y(t)\,\mathrm{d}t-\lambda y(x)$$, which is a polynomial of degree n vanishing at $$x_{i}$$, $$i=1,2,\ldots ,n$$, by (9). Therefore,

$$\int _{-1}^{x} y(t)\,\mathrm{d}t-\lambda y(x)= \mathrm{const}\, P_{n}(x),$$
(13)

where $$P_{n}$$ is the Legendre polynomial of degree n. We now multiply (13) by $$(1-x)^{k-1}$$, $$1\leq k\leq n$$, and integrate over $$[-1,1]$$. Then, by orthogonality, we get

$$\int _{-1}^{1} (1-x)^{k-1} \biggl( \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr) \, \mathrm{d}x = \lambda \int _{-1}^{1} (1-x)^{k-1} y(x)\, \mathrm{d}x.$$

On the left, integrating by parts, letting

\begin{aligned} &u(x)= \int _{-1}^{x} y(t)\,\mathrm{d}t, \qquad v^{\prime }(x)=(1-x)^{k-1}, \\ &u^{\prime }(x)=y(x), \qquad v(x)= \int _{1}^{x} (1-t)^{k-1}\, \mathrm{d}t=-(1-x)^{k}/k , \end{aligned}

and noting that $$u(-1)=v(1)=0$$, we get

$$\int _{-1}^{1} \frac{(1-x)^{k}}{k} y(x)\,\mathrm{d}x= \lambda \int _{-1} ^{1} (1-x)^{k-1} y(x)\, \mathrm{d}x, \quad 1\leq k\leq n.$$
(14)

Now suppose that $$\int _{-1}^{1} y(x)\,\mathrm{d}x=0$$. Then (14) for $$k=1$$ implies that $$y(x)$$ is orthogonal to all linear functions. Putting $$k=2$$ in (14) then implies orthogonality of $$y(x)$$ to all quadratic functions. Proceeding in this manner up to $$k=n-1$$, we conclude that $$y(x)$$ is orthogonal to all polynomials of degree $$n-1$$, in particular to its own complex conjugate, so that $$\int _{-1}^{1} \vert y(x)\vert ^{2}\,\mathrm{d}x =0$$, hence $$y(x)\equiv 0$$. This contradicts (7). Thus, by (12), $$\operatorname{Re}\lambda >0$$. □
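The identity (12) at the heart of this proof can also be observed numerically: for each computed eigenpair of $$U_{n}$$, the two sides agree to machine precision when the integrals are evaluated by the n-point Gauss–Legendre rule, which is exact here. A minimal sketch in Python/NumPy (helper names ours):

```python
import numpy as np

def ell(xz, k, pts):
    n = len(xz)
    num = np.prod([pts - xz[j] for j in range(n) if j != k], axis=0)
    return num / np.prod([xz[k] - xz[j] for j in range(n) if j != k])

n = 9
xz, w = np.polynomial.legendre.leggauss(n)     # Legendre zeros and weights
t, wq = np.polynomial.legendre.leggauss(n)
U = np.empty((n, n))                           # matrix U_n of (2)
for j in range(n):
    s = 0.5 * (1 + xz[j]) * t - 0.5 * (1 - xz[j])
    for k in range(n):
        U[j, k] = 0.5 * (1 + xz[j]) * (wq @ ell(xz, k, s))

lam, Y = np.linalg.eig(U)
for i in range(n):
    y = Y[:, i]                                # values y(x_j) of the interpolant
    lhs = lam[i].real * (w @ np.abs(y)**2)     # Re(lambda) * int |y|^2
    rhs = 0.5 * abs(w @ y)**2                  # (1/2) |int y|^2
    assert abs(lhs - rhs) < 1e-10              # identity (12)
print("identity (12) holds for all eigenpairs, n =", n)
```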

## The restricted Stenger conjecture

### Proof of the restricted Stenger conjecture for a special Jacobi polynomial

Here we consider the weight function $$w(x)=1-x$$ on $$[-1,1]$$, that is, the Jacobi weight function $$(1-x)^{\alpha }(1+x)^{\beta }$$ with parameters $$\alpha =1$$, $$\beta =0$$, and denote by $$x_{i}$$, $$i=1,2, \ldots ,n$$, the zeros of the Jacobi polynomial $$P_{n}^{(1,0)}$$ and by $$U_{n}$$ the matrix in (2) formed with these zeros $$x_{i}$$. As is well known, the $$x_{i}$$ are the internal nodes of the $$(n+1)$$-point Gauss–Radau quadrature formula

$$\int _{-1}^{1} f(x)\,\mathrm{d}x=\sum _{i=1}^{n} w_{i} f(x_{i})+w_{n+1}f(x _{n+1}), \quad f\in {\mathbb{P}}_{2n},$$
(15)

where $$x_{n+1}=1$$.

Let again $$\lambda \in {\mathbb{C}}$$ be an eigenvalue of $$U_{n}$$ and $$y=[y_{1},y_{2},\ldots ,y_{n}]^{T}\in {\mathbb{C}}^{n}$$ be a corresponding eigenvector, and $$y(x)$$ be as defined in Sect. 4. Multiplying (9) now by $$w_{i}(1-x_{i}) \overline{y(x_{i})}$$ and summing over $$i=1,2,\ldots ,n+1$$, we obtain

$$\sum_{i=1}^{n+1} w_{i}(1-x_{i}) \overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) \,\mathrm{d}t=\lambda \sum _{i=1}^{n+1} w_{i}(1-x_{i}) \bigl\vert y(x_{i}) \bigr\vert ^{2}.$$

(The last term in the sums on the left and right, of course, is zero.) Therefore, by (15), since $$(1-x)\overline{ y(x)}\int _{-1} ^{x} y(t)\,\mathrm{d}t$$ is a polynomial of degree $$\leq 2n$$ and $$(1-x)|y(x)|^{2}$$ a polynomial of degree $$\leq 2n-1$$,

$$\int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t) \,\mathrm{d}t \biggr)\, \mathrm{d}x=\lambda \int _{-1}^{1} (1-x) \bigl\vert y(x) \bigr\vert ^{2} \,\mathrm{d}x.$$
(16)

The real part of the left-hand side of (16) is

\begin{aligned}[b] & \frac{1}{2} \biggl[ \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr)\, \mathrm{d}x+ \int _{-1}^{1} (1-x)y(x) \biggl( \int _{-1}^{x} \overline{y(t)}\,\mathrm{d}t \biggr) \,\mathrm{d}x \biggr] \\ &\quad =\frac{1}{2} \int _{-1}^{1} (1-x)\frac{\mathrm{d}}{\mathrm{d}x} \biggl\vert \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr\vert ^{2} \,\mathrm{d}x, \end{aligned}
(17)

having used the product rule of differentiation on the right. Integration by parts then yields

$$\frac{1}{2} \int _{-1}^{1} \biggl\vert \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr\vert ^{2} \,\mathrm{d}x= \operatorname{Re}\lambda \int _{-1}^{1} (1-x) \bigl\vert y(x) \bigr\vert ^{2} \,\mathrm{d}x.$$

Since the integral on the right is positive, and the one on the left is also positive (it could vanish only if $$\int _{-1}^{x} y(t)\,\mathrm{d}t\equiv 0$$, i.e., $$y\equiv 0$$, contradicting (7)), there follows $$\operatorname{Re}\lambda >0$$. □

It may be thought that the same kind of proof might work also for Jacobi weight functions with parameters $$\alpha =0$$, $$\beta =1$$, or $$\alpha =\beta =1$$ using Gauss–Radau quadrature with fixed node −1 or Gauss–Lobatto quadrature, respectively. The last step in the proof (integration by parts of the integral on the right of (17)), however, fails to produce the desired conclusion, the first factor in that integral being $$1+x$$, resp. $$1-x^{2}$$.

### A counterexample

The simplest counterexample we came across involves a Gegenbauer polynomial of small degree.

### Counterexample

$$p_{n}(x)=C_{n}^{(\alpha )}(x), \quad n=5, \alpha =10,$$
(18)

where $$C_{n}^{(\alpha )}$$ is the Gegenbauer polynomial of degree n.

From [1, Eq. 22.3.4] one finds

$$C_{5}^{(\alpha )}(x)=\alpha (\alpha +1) (\alpha +2) x \biggl[ \frac{4}{15} (\alpha +3) (\alpha +4) x^{4}-\frac{4}{3} ( \alpha +3) x^{2}+1 \biggr].$$

One zero of $$C_{5}^{(\alpha )}$$, of course, is 0, while the other four are the zeros of the polynomial P in brackets. When $$\alpha =10$$, one finds

$$P(x)=\frac{1}{3} \biggl( \frac{728}{5} x^{4}-52 x^{2}+3 \biggr).$$

This is a quadratic polynomial in $$x^{2}$$, the zeros of which could be found explicitly. However, we proceed computationally, using Matlab, since the eigenvalues have to be computed numerically in any case.

The Matlab routine doing the computations is counterex.m. It computes the elements of $$U_{n}$$ in (2) (where $$n=5$$) exactly by 3-point Gauss–Legendre quadrature of the last integral in

$$u_{jk}= \int _{-1}^{x_{j}} \ell _{k}^{(5)}(x) \,\mathrm{d}x=\frac{1}{2} (1+x _{j}) \int _{-1}^{1} \ell _{k}^{(5)} \biggl(\frac{1}{2} (1+x_{j}) t - \frac{1}{2} (1-x_{j}) \biggr)\,\mathrm{d}t$$
(19)

and uses a routine lagrange.m for calculating the elementary Lagrange interpolation polynomials as well as the OPQ routines r_jacobi.m, gauss.m. For the latter, see [4, pp. 301, 304].

The output, showing the five eigenvalues d of $$U_{5}$$, is

```
>> counterex

d =

   0.431796388637445 + 0.000000000000000i
   0.285123529721968 + 0.272861054932517i
   0.285123529721968 - 0.272861054932517i
  -0.001021724040688 + 0.286723270044925i
  -0.001021724040688 - 0.286723270044925i

>>
```

The last pair of eigenvalues has negative real part, disproving, at least computationally, the restricted Stenger conjecture. The extended conjecture, however, seems to be valid for this example; see Sect. 6.2, Example 1.
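The computation is easy to reproduce outside of Matlab. The sketch below (Python/NumPy; a reconstruction of counterex.m, not the original routine) recovers the zeros of $$C_{5}^{(10)}$$ from the quadratic in $$x^{2}$$ above and evaluates (19) by 3-point Gauss–Legendre quadrature, which is exact since $$\ell _{k}^{(5)}$$ has degree 4.

```python
import numpy as np

# zeros of C_5^{(10)}: x = 0 together with +-sqrt(z), where z solves
# (728/5) z^2 - 52 z + 3 = 0  (the quadratic in x^2 above)
z = np.roots([728 / 5, -52, 3]).real
xz = np.sort(np.concatenate((-np.sqrt(z), [0.0], np.sqrt(z))))

def ell(xz, k, pts):
    n = len(xz)
    num = np.prod([pts - xz[j] for j in range(n) if j != k], axis=0)
    return num / np.prod([xz[k] - xz[j] for j in range(n) if j != k])

n = 5
t, wq = np.polynomial.legendre.leggauss(3)    # exact for (19): degree 4
U = np.empty((n, n))
for j in range(n):
    s = 0.5 * (1 + xz[j]) * t - 0.5 * (1 - xz[j])
    for k in range(n):
        U[j, k] = 0.5 * (1 + xz[j]) * (wq @ ell(xz, k, s))

d = np.linalg.eigvals(U)
print(np.sort(d.real))       # the smallest real part is about -0.00102
```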

### Conjectures

The counterexample in Sect. 5.2 is symptomatic of more general failures, not only for Gegenbauer but also for many other weight functions. These are formulated here as separate conjectures, all firmly rooted in computational evidence.

### Conjecture 5.1

The restricted Stenger conjecture for $$U_{n}$$ (and, by Theorem 1, also for $$V_{n}$$) is true for all Gegenbauer polynomials $$C_{n}^{(\alpha )}$$ with $$2\leq n\leq 4$$, but for $$n\geq 5$$ it is true only for $$-1<\alpha \leq \alpha _{n}$$, where $$\alpha _{n}>1$$.

The routine Uconj_restr_jac.m evaluates the matrix $$U_{n}$$ (for Jacobi polynomials) in Matlab double-precision arithmetic and its eigenvalues in 32-digit variable-precision arithmetic. Since the eigenvalues become more ill-conditioned as n increases, we first make sure that they are accurate to at least four significant decimal digits by running the routine entirely in 32-digit arithmetic for selected values of α (and also of β) in $$(-1,1]$$ and selected values of n, using the routine sUconj_restr_jac.m, and comparing the results with those obtained in double precision.

Conjecture 5.1 has then been confirmed for all $$\alpha = -0.9:0.1:10$$, using the routine run_Uconj_restr_jac.m. Estimates of $$\alpha _{n}$$ have been obtained by a bisection-type procedure and are shown in Table 1. They are “estimates” in the sense that the conjecture is true for $$\alpha \leq \alpha _{n}$$, but false for $$\alpha =\alpha _{n}+0.001$$.

It appears that $$\alpha _{n}$$ converges monotonically down to 1 as $$n\rightarrow \infty$$.

### Conjecture 5.2

The restricted Stenger conjecture for $$U_{n}$$ holds true in the case of Jacobi polynomials $$P_{n}^{(\alpha ,\beta )}$$ for all $$n>1$$ if $$-1<\alpha ,\beta \leq 1$$, but not necessarily otherwise.

The positive part of the conjecture has been confirmed for $$[\alpha , \beta ] =-0.9:0.1:1$$, and in each case for $$n=2:40$$, using the routine run_Uconj_restr_jac.m. The negative part follows from Conjecture 5.1, Table 1 (if true). By Theorem 2, the same conjecture can be made for the matrix $$V_{n}$$.

#### Algebraic/logarithmic weight functions

Here we first examine weight functions of the type

$$w_{\alpha }(x)=x^{\alpha }\log (1/x) \quad \text{on } [0,1] \text{ with } \alpha >-1.$$
(20)

### Conjecture 5.3

For the matrix $$U_{n}$$, the restricted Stenger conjecture holds true in the case of the weight function (20) for all $$n>1$$ if $$-1<\alpha \leq \alpha _{1}$$, where $$1<\alpha _{1}<2$$, but not necessarily otherwise. For the matrix $$V_{n}$$, in contrast, the conjecture is true for all $$\alpha >-1$$.

In order to compute the zeros $$x_{j}$$ of the required orthogonal polynomials (needed to obtain the Lagrange polynomials $$\ell _{k}^{(n)}$$) for degrees $$2\leq n\leq 40$$ and arbitrary $$\alpha >-1$$, we need a routine that generates the respective recurrence coefficients for the orthogonal polynomials. This can be done by applying a multicomponent discretization procedure, using appropriate quadrature rules to discretize the integral $$\int _{0}^{1} f(x)x^{\alpha }\log (1/x) \,\mathrm{d}x$$, where f is a polynomial of degree $$\leq 2n-1$$. It was found to be helpful to split the integral in two integrals, one extended from 0 to ξ, and the other from ξ to 1, $$0<\xi <1$$, and use ξ to optimize the rate of convergence (that is, to minimize the parameter Mcap in the discretization routine mcdis.m). Using obvious changes of variables, one finds

\begin{aligned}& \begin{aligned}[b] \int _{0}^{\xi }f(x)x^{\alpha }\log (1/x)\, \mathrm{d}x&=\xi ^{\alpha +1} \biggl[ \log (1/\xi ) \int _{0}^{1} f(t\xi )t^{\alpha }\, \mathrm{d}t \\ &\quad{} +\frac{1}{(1+\alpha )^{2}} \int _{0}^{\infty }f\bigl(\xi \mathrm{e}^{-t/(1+\alpha )} \bigr) t\mathrm{e}^{-t}\,\mathrm{d}t \biggr], \end{aligned} \end{aligned}
(21)
\begin{aligned}& \int _{\xi }^{1} f(x)x^{\alpha }\log (1/x)\, \mathrm{d}x=(1-\xi ) \int _{0} ^{1} f\bigl(x(t)\bigr)\bigl[x(t) \bigr]^{\alpha }\log \bigl(1/x(t)\bigr)\,\mathrm{d}t, \end{aligned}
(22)

where in (22), $$x(t)=(1-\xi )t+\xi$$ maps the interval $$[0,1]$$ onto $$[\xi ,1]$$. In (21), the first integral on the right can be discretized (without error) by n-point Gauss–Jacobi quadrature on $$[0,1]$$ with Jacobi parameters 0 and α, and the second integral (with small error) by sufficiently high-order generalized Gauss–Laguerre quadrature with Laguerre parameter 1. The integral in (22) can be discretized by sufficiently high-order Gauss–Legendre quadrature on $$[0,1]$$. For the optimal ξ, one can use, as found empirically (using the routine run_r_alglog1.m),

$$\xi =\textstyle\begin{cases} [1+10(\alpha +0.9)]/1000 & \text{if } -0.9\leq \alpha \leq 1, \\ 0.02 & \text{if } \alpha >1. \end{cases}$$

This is implemented in the routine r_alglog1.m.
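As a check on the splitting (21)–(22), specialize to $$\alpha =0$$, so that the Gauss–Jacobi rule in (21) reduces to Gauss–Legendre and the generalized Gauss–Laguerre weight $$t\mathrm{e}^{-t}$$ can be handled by the ordinary Gauss–Laguerre rule applied to $$t f(\xi \mathrm{e}^{-t})$$. The two pieces can then be compared with the exact moments $$\int _{0}^{1} x^{m}\log (1/x)\,\mathrm{d}x=1/(m+1)^{2}$$. The following is our own sketch in Python/NumPy, not the paper's mcdis.m machinery:

```python
import numpy as np

def split_integral(f, xi, nleg=100, nlag=80):
    """int_0^1 f(x) log(1/x) dx computed as in (21)-(22) with alpha = 0."""
    t, w = np.polynomial.legendre.leggauss(nleg)
    t01, w01 = 0.5 * (t + 1), 0.5 * w                # Gauss-Legendre on [0, 1]
    # (21): int_0^xi = xi * [ log(1/xi) int_0^1 f(xi t) dt
    #                         + int_0^inf f(xi e^{-t}) t e^{-t} dt ]
    tl, wl = np.polynomial.laguerre.laggauss(nlag)   # weight e^{-t}
    part1 = xi * (np.log(1 / xi) * (w01 @ f(xi * t01))
                  + wl @ (tl * f(xi * np.exp(-tl))))
    # (22): int_xi^1 via Gauss-Legendre, with x(t) = (1 - xi) t + xi
    x = (1 - xi) * t01 + xi
    part2 = (1 - xi) * (w01 @ (f(x) * np.log(1 / x)))
    return part1 + part2

xi = 0.01    # the empirical choice above: [1 + 10*(0 + 0.9)]/1000 for alpha = 0
for m in range(6):
    err = split_integral(lambda x: x**m, xi) - 1 / (m + 1)**2
    print(m, err)
```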

The routine sUconj_restr_log1.m, run with $$\mathtt{dig} =32$$, generates the matrix $$U_{n}$$ and its eigenvalues in 32-digit arithmetic. It relies on the global $$n\times 2$$ arrays ab and ableg containing the first n recurrence coefficients of the (monic) orthogonal polynomials relative to the weight functions $$w_{\alpha }$$ and 1, respectively (both supported on $$[0,1]$$). For $$\alpha =-1/2,0,1/2,1,2$$, the array ab is available, partly in [5, §§2.3.1, 2.4.1, 2.4.3], to 32 digits for n at least as large as 100, whereas ableg can easily be generated by the routine sr_jacobi01.m. For these five values of α, we can therefore produce high-precision reference values for the eigenvalues of $$U_{n}$$.

The Matlab double-precision routine Uconj_restr_log1.m, also run with $$\mathtt{dig} =32$$, generates the matrix $$U_{n}$$ in double-precision arithmetic and the eigenvalues in 32-digit arithmetic for arbitrary values of $$\alpha >-1$$, its global array ab being produced by the routine r_alglog1.m. When the eigenvalues so obtained are compared with the reference values, for the above five values of α, it is found that for $$n\leq 40$$ they all are accurate to at least four decimal digits (cf. test_Uconj_restr_log1.m). This provides us with some confidence that the routine Uconj_restr_log1.m, when $$n\leq 40$$, will produce eigenvalues to the same accuracy, also when α is arbitrary in the range from $$-1/2$$ to 2.

The routine run_Uconj_restr_log1.m validates the restricted Stenger conjecture for the matrix $$U_{n}$$ when $$\alpha =-1/2,0,1/2,1$$, at least for all n between 2 and 40, but refutes it when $$\alpha =2$$ and $$n=8$$, producing a pair of eigenvalues with negative real part $$-1.698\ldots \times 10^{-3}$$. This provides some indication that Conjecture 5.3 for the matrix $$U_{n}$$ may be valid. We strengthen this expectation by running the routine for additional values of α, and at the same time try to estimate the value of $$\alpha _{1}$$ in dependence of n by applying a bisection-type procedure. It is found that, when $$n\leq 40$$, Conjecture 5.3 for $$U_{n}$$ is true with $$\alpha _{1}$$ as shown in Table 2.

It appears that $$\alpha _{1}$$ is monotonically decreasing. Since it is bounded below by 1, it would then have to converge to a limit value (perhaps =1).

The routines dealing with the matrix $$V_{n}$$ are Vconj_restr_log1.m and run_Vconj_restr_log1.m. They validate Conjecture 5.3 for the matrix $$V_{n}$$ when $$\alpha =-1/2,0,1/2,1,2,5,10$$, in each case for $$2\leq n\leq 40$$.

For illustration, the eigenvalues of $$U_{n}$$ are shown in Fig. 1 for $$\alpha =0$$ and $$n=10,20,40$$, and those of $$V_{n}$$ in Fig. 2 for the same α and n.

For the weight function

$$w(x)=x^{\alpha }\log ^{2}(1/x) \quad \text{on } [0,1], \text{with } \alpha >-1,$$
(23)

our conjecture for $$U_{n}$$ is the same as the one in Conjecture 5.3, but not so for $$V_{n}$$.

### Conjecture 5.4

For the matrix $$U_{n}$$, the restricted Stenger conjecture holds true in the case of the weight function (23) for all $$n>1$$ if $$-1<\alpha < \alpha _{2}$$, where $$\alpha _{2}$$ is a number between 1 and 2, but not necessarily otherwise. For the matrix $$V_{n}$$, the conjecture is false for all $$\alpha >-1$$.

The routines used to make this conjecture are the same as those used for Conjecture 5.3 but with “log1” replaced by “log2”. The statements regarding the matrix $$U_{n}$$ are arrived at in the same way as in Conjecture 5.3, the values of $$\alpha _{2}$$ now being as shown in Table 3.

With regard to $$V_{n}$$, the conjecture is found to be false for $$\alpha =-1/2,0,1/2, 1,2,5$$ and $$n=7$$ in each case, there being a single pair of conjugate complex eigenvalues with negative real part.

We illustrate by showing in Fig. 3 the eigenvalues of $$U_{n}$$ for $$\alpha =0$$ and $$n=10,20,40$$.

#### Laguerre and generalized Laguerre weight functions

For generalized Laguerre weight functions

$$w(x)=x^{\alpha }\mathrm{e}^{-x} \quad \text{on } [0,\infty ), \alpha >-1,$$
(24)

it only makes sense to look at the U-conjecture, since the integrals defining $$V_{n}$$ in (2) diverge on the unbounded interval.

### Conjecture 5.5

For the matrix $$U_{n}$$, the restricted Stenger conjecture is true in the case of the weight function (24) for all $$n>1$$ if $$-1<\alpha \leq \alpha _{0}$$, where $$1<\alpha _{0}<2$$, but not necessarily otherwise.

The routines written for this conjecture are Uconj_restr_lag.m and run_Uconj_restr_lag.m. The latter, run for $$\alpha =-0.9:0.1:2$$, $$n=2:40$$, confirms the conjecture up to, and including, $$\alpha =1.2$$, but refutes it when $$\alpha =1.3$$ and $$n=40$$, producing a single pair of conjugate complex eigenvalues with negative real part. The case $$\alpha =1.3$$ was checked by running the routine run_sUconj_restr_lag.m in 32-digit arithmetic, which produced eigenvalues agreeing with those obtained in double precision to at least 12 digits. (This check may take as many as five hours to run.) A bisection-type procedure, run in double precision, yields the values of $$\alpha _{0}$$ shown in Table 4 in dependence of n.

Figure 4 shows the eigenvalues of $$U_{n}$$ when $$\alpha =0$$ and $$n=10,20,40$$.
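For a concrete instance of the case covered by Conjecture 5.5, the following sketch (Python/NumPy, our own code, not the paper's Uconj_restr_lag.m) assembles $$U_{n}$$ of (2) at the zeros of the Laguerre polynomial ($$\alpha =0$$) and checks that all eigenvalues lie in the right half-plane, as the conjecture predicts for this α:

```python
import numpy as np

def ell(xz, k, pts):
    n = len(xz)
    num = np.prod([pts - xz[j] for j in range(n) if j != k], axis=0)
    return num / np.prod([xz[k] - xz[j] for j in range(n) if j != k])

n = 12
xz = np.sort(np.polynomial.laguerre.laggauss(n)[0])  # zeros of L_n
t, wq = np.polynomial.legendre.leggauss(n)           # exact: ell_k has degree n-1
U = np.empty((n, n))
for j in range(n):
    s = 0.5 * xz[j] * (t + 1)        # map [-1, 1] onto [0, x_j]
    for k in range(n):
        U[j, k] = 0.5 * xz[j] * (wq @ ell(xz, k, s))

ev = np.linalg.eigvals(U)
print(ev.real.min() > 0)             # True for alpha = 0, per Conjecture 5.5
```

Here too the rows carry a sanity check: row j of $$U_{n}$$ must sum to $$\int _{0}^{x_{j}}1\,\mathrm{d}x=x_{j}$$.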

## The extended Stenger conjecture

To avoid extensive and time-consuming Matlab variable-precision computations, we restrict ourselves in Sects. 6.2–6.6 to values of $$n\leq 30$$. Also note that in all figures of this section the horizontal axis carries a logarithmic scale.

### Proof of a weak form of the extended Stenger conjecture for a special Jacobi polynomial

We consider here, as in Sect. 5.1, the Jacobi weight function $$w(x)=(1-x)^{\alpha }(1+x)^{\beta }$$ on $$[-1,1]$$, with $$\alpha =1$$, $$\beta =0$$, and continue using the same notations as in that section. In particular, we again use the $$(n+1)$$-point Gauss–Radau quadrature formula

$$\int _{-1}^{1} f(x)\,\mathrm{d}x = \sum _{i=1}^{n+1} w_{i} f(x_{i})+R_{n}(f),$$
(25)

where $$x_{n+1}=1$$, but this time we include the remainder term

$$R_{n}(f)=-\gamma _{n} \frac{f^{(2n+1)}(\xi )}{(2n+1)!}, \quad \gamma _{n}=2^{2n+1} \frac{(n+1)n!^{4}}{(2n+1)!^{2}}$$
(26)

(cf. [3, top of p. 158, where $$\gamma ^{b}$$ should read $$\gamma _{n} ^{b}$$]). In place of (9), we now have

$$\int _{-1}^{x_{i}} y(t) (1-t)\,\mathrm{d}t=\lambda y(x_{i}), \quad i=1,2, \ldots ,n.$$
(27)

Multiplying this, as in Sect. 5.1, by $$w_{i}(1-x_{i})\overline{y(x_{i})}$$ and summing over $$i=1,2,\ldots ,n+1$$, we obtain

$$\sum_{i=1}^{n+1} w_{i}(1-x_{i})\overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) (1-t) \,\mathrm{d}t=\lambda \sum _{i=1}^{n+1} w_{i}(1-x_{i}) \bigl\vert y(x_{i}) \bigr\vert ^{2}.$$
(28)

Since

$$f(x):=(1-x)\overline{y(x)} \int _{-1}^{x} y(t) (1-t)\,\mathrm{d}t$$
(29)

is a polynomial of degree $$2n+1$$ and the left-hand side of (28) is equal to the quadrature sum on the right of (25) with f as in (29), we get

\begin{aligned} &\sum_{i=1}^{n+1} w_{i}(1-x_{i})\overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) (1-t)\,\mathrm{d}t \\ &\quad = \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t) (1-t) \,\mathrm{d}t \biggr)\, \mathrm{d}x+\gamma _{n} \frac{f^{(2n+1)}(\xi )}{(2n+1)!}, \end{aligned}

where $$f^{(2n+1)}$$ is a nonnegative constant, namely

$$f^{(2n+1)}(\xi )=\frac{(2n+1)!}{n+1} \vert a_{n-1} \vert ^{2},$$

with $$a_{n-1}$$ the leading coefficient (of $$x^{n-1}$$) of the polynomial $$y(x)$$. Thus,

\begin{aligned}[b] &\sum_{i=1}^{n+1} w_{i}(1-x_{i})\overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) (1-t)\,\mathrm{d}t \\ &\quad = \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t) (1-t) \,\mathrm{d}t \biggr) \, \mathrm{d}x+C_{n}, \end{aligned}
(30)

where

$$C_{n}=\frac{\gamma _{n}}{n+1} \vert a_{n-1} \vert ^{2}.$$

Now the real part of the left-hand side of (28), by (30), is

\begin{aligned} &\frac{1}{2} \biggl[ \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t) (1-t)\,\mathrm{d}t \biggr)\, \mathrm{d}x \\ &\qquad {}+ \int _{-1}^{1} (1-x)y(x) \biggl( \int _{-1}^{x} \overline{y(t)} (1-t)\,\mathrm{d}t \biggr)\,\mathrm{d}x \biggr] + C_{n} \\ &\quad = \frac{1}{2} \int _{-1}^{1} \frac{\mathrm{d}}{\mathrm{d}x} \biggl\vert \int _{-1}^{x} y(t) (1-t)\,\mathrm{d}t \biggr\vert ^{2} \,\mathrm{d}x + C_{n} \\ &\quad = \frac{1}{2} \biggl\vert \int _{-1}^{1} y(t) (1-t)\,\mathrm{d}t \biggr\vert ^{2} + C_{n}, \end{aligned}

so that, by (28),

$$\frac{1}{2} \biggl\vert \int _{-1}^{1} y(t) (1-t)\,\mathrm{d}t \biggr\vert ^{2}+C _{n} = \operatorname{Re}\lambda \int _{-1}^{1} (1-x) \bigl\vert y(x) \bigr\vert ^{2}\,\mathrm{d}x.$$
(31)

the integrand on the right being a polynomial of degree $$2n-1$$. From this, it follows that $$\operatorname{Re}\lambda \geq 0$$. □

Strict positivity of Reλ holds if $$|a_{n-1}|>0$$, that is, if $$y(x)$$ is a polynomial of exact degree $$n-1$$, or if the integral on the left of (31) does not vanish. Computation, using the routines check_pos.m and run_check_pos.m, confirms that both are indeed the case, at least for $$n\leq 40$$. Table 5 shows, for selected values of n, the minimum values of $$\vert \int _{-1} ^{1} y(t)(1-t)\,\mathrm{d}t \vert$$ and $$|a_{n-1}|$$, the minimum being taken over all eigenvalues/vectors. For checking purposes, the computations have also been carried out entirely in 32-digit arithmetic.

### Jacobi weight functions

The element $$u_{jk}^{(n)}$$ of the matrix $$U_{n}$$ in (3) for the Jacobi weight function $$w(x)=(1-x)^{\alpha }(1+x)^{\beta }$$ on $$[-1,1]$$ is

$$u_{jk}^{(n)}= \int _{-1}^{x_{j}} \ell _{k}^{(n)}(x)w(x) \,\mathrm{d}x = \frac{1}{2} (1+x_{j}) \int _{-1}^{1} \ell _{k}^{(n)} \bigl(x(t)\bigr)w\bigl(x(t)\bigr) \,\mathrm{d}t,$$

where

$$x(t)=\frac{1}{2} (1+x_{j}) t-\frac{1}{2} (1-x_{j})$$

maps $$[-1,1]$$ onto $$[-1,x_{j}]$$. An elementary computation yields

$$u_{jk}^{(n)}= \biggl(\frac{1+x_{j}}{2} \biggr)^{\alpha +\beta +1} \int _{-1}^{1} \ell _{k}^{(n)} \bigl(x(t)\bigr) \biggl[ \frac{3-x_{j}}{1+x_{j}}- t \biggr] ^{\alpha }(1+t)^{\beta } \,\mathrm{d}t.$$
(32)

Although the second factor in the integrand of (32) may be algebraically singular at a point close to, but larger than, 1 (when $$x_{j}<1$$ is close to 1), we simply apply Gauss–Jacobi quadrature with Jacobi parameters 0 and β to the integral in (32) and choose the number of quadrature points large enough so as to produce eigenvalues of $$U_{n}$$ accurate to at least four decimal places (which is good enough for plotting purposes). This is implemented by the Matlab function Uconj_ext_jac.m and can be run with the Matlab script run_Uconj_ext_jac.m.
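In the Legendre case $$\alpha =\beta =0$$, no quadrature is needed at all: every entry of $$U_{n}$$ is the integral of a polynomial and can be computed exactly from its coefficients. The following pure-Python sketch (ours, a stand-in for the paper's Matlab routines; all function names are our own) assembles $$U_{2}$$ from the zeros $$\pm 1/\sqrt{3}$$ of $$p_{2}$$ and checks, via the quadratic formula, that both eigenvalues lie in the open right half-plane:

```python
import cmath
from math import sqrt

def lagrange_coeffs(nodes, k):
    """Ascending-power coefficients of the Lagrange basis polynomial ell_k."""
    c = [1.0]
    for j, xj in enumerate(nodes):
        if j == k:
            continue
        d = nodes[k] - xj
        out = [0.0] * (len(c) + 1)   # multiply c(x) by (x - xj)/d
        for i, ci in enumerate(c):
            out[i + 1] += ci / d
            out[i] -= xj * ci / d
        c = out
    return c

def poly_integral(c, a, b):
    """Exact integral over [a, b] of the polynomial with coefficients c."""
    return sum(ci * (b ** (i + 1) - a ** (i + 1)) / (i + 1)
               for i, ci in enumerate(c))

# zeros of the Legendre polynomial p_2 on [-1, 1]
nodes = [-1 / sqrt(3), 1 / sqrt(3)]
n = len(nodes)

# u_jk = int_{-1}^{x_j} ell_k(x) dx  (weight w = 1)
U = [[poly_integral(lagrange_coeffs(nodes, k), -1.0, xj) for k in range(n)]
     for xj in nodes]

# eigenvalues of the 2x2 matrix U_2 via the quadratic formula
tr = U[0][0] + U[1][1]
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
eigs = [(tr + s * cmath.sqrt(tr * tr - 4 * det)) / 2 for s in (1, -1)]
```

For $$n=2$$ the eigenvalues come out as $$(1\pm i/\sqrt{3})/2$$, both with real part $$1/2$$, consistent with the conjecture for Legendre polynomials.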

### Example 1

Gegenbauer weight function $$w(x)=(1-x^{2})^{\alpha }$$ on $$[-1,1]$$ with $$\alpha =10$$.

This is the weight function for which the restricted Stenger conjecture is false already for $$n=5$$ (cf. Sect. 5.2). The extended conjecture, however, is found to be true for all $$2\leq n \leq 30$$; see Fig. 5 for the cases $$n=5,15,30$$.

### Example 2

Jacobi weight function with parameters α and β each taken from $$[-0.9:0.6:0.9, 1.7:0.7:3.8, 4.7:0.9:7.4]$$ (Matlab colon notation).

We used the script run_Uconj_ext_jac.m to check the extended U-conjecture for all these Jacobi weight functions, separately for $$n=5,15,30$$, and found in every case that the conjecture is valid. By Theorem 2, the same is true for the matrix $$V_{n}$$.

To illustrate, we show in Fig. 6 the eigenvalues of $$U_{n}$$ for the three parameter choices $$\alpha =\beta =-0.9$$, $$\alpha =-0.3$$, $$\beta =-0.9$$, and $$\alpha =5.6$$, $$\beta =1.7$$, in each case with $$n=30$$.

### Algebraic/logarithmic weight functions

#### The weight function $$w(x)=x^{\alpha }\log (1/x)$$ on $$[0,1]$$

Here, for the matrix $$U_{n}$$, we use the change of variables $$x=x_{j} t$$ in

$$u_{jk}^{(n)}= \int _{0}^{x_{j}}\ell _{k}^{(n)}(x) x^{\alpha }\log (1/x) \,\mathrm{d}x =x_{j}^{\alpha +1} \int _{0}^{1}\ell _{k}^{(n)}(x_{j} t) t ^{\alpha }\log \bigl(1/(x_{j} t)\bigr)\,\mathrm{d}t$$

to get

$$u_{jk}^{(n)}=x_{j}^{\alpha +1} \biggl[ \log (1/x_{j}) \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t) t^{\alpha }\,\mathrm{d}t + \int _{0}^{1} \ell _{k} ^{(n)}(x_{j} t) t^{\alpha }\log (1/t)\,\mathrm{d}t \biggr].$$
(33)

Both integrals can be evaluated exactly, the first by m-point Gauss–Jacobi quadrature on $$[0,1]$$ with Jacobi parameters 0 and α, where $$m=\lceil n/2\rceil$$, and the second by m-point Gauss quadrature relative to the weight function $$w(t)=t^{\alpha } \log (1/t)$$ on $$[0,1]$$. For the latter, the recurrence coefficients for the relevant orthogonal polynomials (when $$\alpha =0, -1/2, 1/2, 1, 2, 5$$) are available to 32 decimal digits, partly in [5, 2.3.1, 2.4.1, 2.4.3], which allow us to generate the Gaussian quadrature rule in a well-known manner (cf., e.g., [3, §3.1.1]) using the OPQ routine gauss.m (see [4, p. 304]). This is implemented by the Matlab function Uconj_ext_log1.m and can be run with the Matlab script run_Uconj_ext_log1.m.
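The splitting in (33) rests on the identity $$\log (1/(x_{j}t))=\log (1/x_{j})+\log (1/t)$$ and on the closed form $$\int _{0}^{a}x^{s}\log (1/x)\,\mathrm{d}x=a^{s+1}(\log (1/a)/(s+1)+1/(s+1)^{2})$$. As a quick sanity check (ours, not part of the paper's routines), the following sketch verifies (33) with a monomial $$x^{m}$$ standing in for $$\ell _{k}^{(n)}$$; by linearity the check extends to any polynomial:

```python
from math import log, isclose

def int_xs_log(a, s):
    """Closed form of  int_0^a x^s log(1/x) dx  for s > -1, 0 < a <= 1."""
    return a ** (s + 1) * (log(1 / a) / (s + 1) + 1.0 / (s + 1) ** 2)

# check (33) with the monomial x**m standing in for ell_k^{(n)}
alpha, m, xj = 1.0, 3, 0.4
s = alpha + m
lhs = int_xs_log(xj, s)               # int_0^{xj} x^m * x^alpha * log(1/x) dx
# right-hand side of (33): the two bracketed integrals in closed form,
# int_0^1 t^s dt = 1/(s+1) and int_0^1 t^s log(1/t) dt = 1/(s+1)^2
rhs = xj ** (alpha + 1) * (log(1 / xj) * xj ** m / (s + 1)
                           + xj ** m / (s + 1) ** 2)
assert isclose(lhs, rhs, rel_tol=1e-12)
```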

Alternatively, when $$n\leq 40$$, we may compute the recurrence coefficients for arbitrary $$\alpha >-1$$ as described in Sect. 5.3.3. This is implemented by the routines r_alglog1.m, Uconj_ext_log1.m, and run0_Uconj_ext_log1.m.

### Example 3

Algebraic/logarithmic weight function $$w(x)=x^{ \alpha }\log (1/x)$$ on $$[0,1]$$ with $$\alpha =[-0.9:0.1:5, 5.2:0.2:7, 7.5:0.5:10]$$ (Matlab colon notation).

Our routines validate the extended Stenger conjecture for all these values of α and $$2\leq n\leq 30$$. The eigenvalues of $$U_{n}$$ are shown in the case $$\alpha =0$$ in Fig. 7, and in the cases $$\alpha =-1/2, 1/2$$ in Figs. 8 and 9, respectively, for $$n=5,15,30$$. They are similar when $$\alpha =1,2,5$$.

With regard to $$V_{n}$$, the conjecture has been similarly validated, using the routines Vconj_ext_log1 and run_Vconj_ext_log1.m, for the same values of n and α as in Example 3. To compute the matrix $$V_{n}$$, we have used

$$v_{jk}^{(n)}= \int _{0}^{1} \ell _{k}^{(n)}(x)x^{\alpha } \log (1/x) \,\mathrm{d}x -u_{jk}^{(n)}$$
(34)

with $$u_{jk}^{(n)}$$ as in (33) and the integral evaluated by $$\lceil n/2\rceil$$-point Gaussian quadrature relative to the weight function $$w(x)$$. The eigenvalues of $$V_{n}$$ are found to be similar to those for $$U_{n}$$ shown in Figs. 7–9.

#### Algebraic/square-logarithmic weight function $$w(x)=x^{\alpha }\log ^{2}(1/x)$$ on $$[0,1]$$, $$\alpha >-1$$

Similarly as in Sect. 6.3.1, one finds

\begin{aligned}[b] u_{jk}^{(n)}&=x_{j}^{\alpha +1} \biggl[ \log ^{2}(1/x_{j}) \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t) t^{\alpha }\,\mathrm{d}t +2\log (1/x_{j}) \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t) t^{\alpha }\log (1/t)\,\mathrm{d}t \\ &\quad{} + \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t) t^{\alpha }\log ^{2}(1/t)\,\mathrm{d}t \biggr], \end{aligned}
(35)

where again the integrals can be evaluated exactly and some of the required recurrence coefficients taken from [5, 2.3.2], [5, 2.4.5], [5, 2.4.7]. This is implemented by the Matlab function Uconj_ext_log2.m and driver run_Uconj_ext_log2.m.

### Example 4

Algebraic/square-logarithmic weight function $$w(x)=x^{\alpha }\log ^{2}(1/x)$$ on $$[0,1]$$ with $$\alpha =0,-1/2,1/2, 1,2,5$$.

Our routines validate the extended Stenger conjecture for all these values of α and $$2\leq n\leq 30$$. The eigenvalues of $$U_{n}$$ in the case $$\alpha =0$$ are found to be similar to those depicted in Fig. 7 for the weight function $$\log (1/x)$$. For the cases $$\alpha =-1/2, 1/2,5$$, they are shown respectively in Figs. 10–12 for $$n=5,15,30$$. Interestingly, all eigenvalues appear to be real when $$\alpha =-1/2$$.

Similar results and validations, using the routines Vconj_ext_log2.m and run_Vconj_ext_log2.m, are obtained for the matrix $$V_{n}$$, which, as in (34), is computed exactly by

$$v_{jk}^{(n)}= \int _{0}^{1} \ell _{k}^{(n)}(x)x^{\alpha } \log ^{2}(1/x) \,\mathrm{d}x-u_{jk}^{(n)}$$
(36)

with $$u_{jk}^{(n)}$$ as in (35).

### Laguerre and generalized Laguerre weight functions

Here, the weight function is assumed to be $$w(x)=x^{\alpha } \mathrm{e}^{-x}$$ on $$[0,\infty ]$$, where $$\alpha >-1$$. We write

$$u_{jk}^{(n)}= \int _{0}^{\infty }\ell _{k}^{(n)}(x)x^{\alpha } \mathrm{e} ^{-x} \,\mathrm{d}x- \int _{x_{j}}^{\infty }\ell _{k}^{(n)}(x)x^{\alpha } \mathrm{e}^{-x}\,\mathrm{d}x$$

and, in the second integral, make the change of variables $$x=x_{j}+t$$ to get

$$u_{jk}^{(n)}= \int _{0}^{\infty }\ell _{k}^{(n)}(x)x^{\alpha } \mathrm{e} ^{-x} \,\mathrm{d}x-\mathrm{e}^{-x_{j}} \int _{0}^{\infty }\ell _{k}^{(n)}(x _{j}+t) (x_{j}+t)^{\alpha }\mathrm{e}^{-t}\, \mathrm{d}t.$$
(37)

The first integral can be evaluated exactly by $$\lceil n/2 \rceil$$-point generalized Gauss–Laguerre quadrature. The second integral, similarly as in (32) for Jacobi weight functions, has an algebraic singularity close to, and to the left of, the origin when $$x_{j}$$ is close to zero (and α not an integer). As in Sect. 6.2, we ignore this and simply apply Gauss–Laguerre quadrature of sufficiently high order so as to obtain plotting accuracy for all the eigenvalues of $$U_{n}$$. However, there is yet another complication: Around $$n=25$$, the Gauss–Laguerre weights, in Matlab double precision, start becoming increasingly inaccurate (in terms of relative accuracy) and adversely affect the accuracy of the second integral in (37). For this reason, we use 32-digit variable-precision arithmetic to compute these weights and convert them to Matlab double precision, once computed. At the same time we lower the accuracy requirement from 4- to 3-digit accuracy.

### Example 5

Generalized Laguerre weight function $$w(x)=x^{ \alpha }\mathrm{e}^{-x}$$ on $$[0,\infty ]$$ for the same values of α and n as in Example 2.

The Matlab routines implementing this and validating the conjecture in each case are Uconj_ext_lag.m and run_Uconj_ext_lag.m. They may take several hours to run because of the extensive variable-precision work involved. The accuracy achieved for the eigenvalues is consistently of the order of $$10^{-4}$$ or better, but the necessary number of quadrature points is found to be as large as 440 (for $$\alpha =-0.9$$ and $$n=30$$).

For illustration, we show in Fig. 13 the eigenvalues obtained in the case of the ordinary Laguerre weight function ($$\alpha =0$$) and for $$n=5, 15,30$$. Notice the extremely small real eigenvalues when $$n=30$$, the smallest being of the order $$10^{-43}$$.

Using

$$v_{jk}^{(n)}= \int _{0}^{\infty }\ell _{k}^{(n)}(x)x^{\alpha } \mathrm{e} ^{-x} \,\mathrm{d}x-u_{jk}^{(n)}$$
(38)

with $$u_{jk}^{(n)}$$ as in (37), the conjecture has been similarly validated with the help of the routines Vconj_ext_lag.m, run_Vconj_ext_lag.m.

### Hermite and generalized Hermite weight functions

These are the weight functions $$w(x)=|x|^{2\mu }\mathrm{e}^{-x^{2}}$$ on $$[-\infty ,\infty ]$$, $$\mu >-1/2$$. Since they are symmetric, it suffices, by Theorem 1, to consider $$U_{n}$$. To simplify matters, we assume 2μ to be a nonnegative integer.

For the evaluation of $$u_{jk}^{(n)}$$, we distinguish the cases $$x_{j}<0$$ and $$x_{j}\geq 0$$. In the former case, by the change of variables $$x=x_{j}-t$$, one gets

$$u_{jk}^{(n)}=\mathrm{e}^{-x_{j}^{2}} \int _{0}^{\infty }\ell _{k}^{(n)}(x _{j}-t) (t-x_{j})^{2\mu }\mathrm{e}^{2x_{j} t} \mathrm{e}^{-t^{2}} \,\mathrm{d}t, \quad x_{j}< 0.$$
(39)

Here, half-range Gauss–Hermite quadrature (cf. [5, 2.9.1]) is expected to converge rapidly. When $$x_{j}\geq 0$$, breaking up the first integral in (3) (with $$a=-\infty$$) into two parts, one extended from −∞ to 0 and the other from 0 to $$x_{j}$$, and making appropriate changes of variables in each yield

$$u_{jk}^{(n)}= \int _{0}^{\infty }\ell _{k}^{(n)}(-t)t^{2\mu } \mathrm{e} ^{-t^{2}}\,\mathrm{d}t +x_{j}^{2\mu +1} \int _{0}^{1} \ell _{k}^{(n)}(x _{j} t)\mathrm{e}^{-x_{j}^{2} t^{2}} t^{2\mu }\,\mathrm{d}t, \quad x _{j}\geq 0.$$
(40)

The first integral can be evaluated exactly by $$\lceil (n+2\mu )/2 \rceil$$-point half-range Gauss–Hermite quadrature. The second integral may be approximated by Gauss–Jacobi quadrature on $$[0,1]$$ with Jacobi parameters 0 and 2μ. This, too, is expected to converge quickly.

### Example 6

Generalized Hermite weight function $$w(x)=|x|^{2 \mu }\mathrm{e}^{-x^{2}}$$ on $$[-\infty ,\infty ]$$, $$\mu =0:1/2:25$$ and $$n=5,15,30$$.

The conjecture has been validated in all cases using the routines Uconj_ext_herm.m, run_Uconj_ext_herm.m. For illustration, the eigenvalues of $$U_{n}$$ are shown in Fig. 14 for the case $$\mu =0$$.

### A weight function supported on two disjoint intervals

We now consider a weight function which is not positive a.e.:

$$w(x)=\textstyle\begin{cases} \vert x \vert (x^{2}-\xi ^{2})^{p}(1-x^{2})^{q} & \text{if }x\in [-1,-\xi ]\cup [\xi ,1], \\ 0 & \text{otherwise}, \end{cases}$$
(41)

where $$0<\xi <1$$, $$p>-1$$, $$q>-1$$. This weight function, of interest in theoretical chemistry when $$p=q=-1/2$$, has been studied in [2]. In our present context, we assume, for simplicity, that p and q are nonnegative integers. Then only integrations of polynomials are required, which, as before, can be done exactly.

Since the weight function w is symmetric, it suffices, by Theorem 1, to look at the matrices $$U_{n}$$ only.

Any polynomial $$\pi _{n}$$ orthogonal with respect to w can have at most one zero in the interval $$[-\xi ,\xi ]$$ where w is zero [3, Theorem 1.20]. By symmetry, therefore, all zeros of $$\pi _{n}$$ are located in the intervals $$(-1,-\xi )$$ or $$(\xi ,1)$$, except when n is odd, in which case there is a zero at the origin.

The recurrence coefficients $$\alpha _{k}$$, $$\beta _{k}$$ for the (monic) polynomials $$\pi _{n}$$ are known explicitly [2, Eq. (4.1)]: All $$\alpha _{k}=0$$, by symmetry, and

\begin{aligned}& \beta _{0} =\bigl(1-\xi ^{2} \bigr)^{p+q+1}\varGamma (p+1)\varGamma (q+1)/\varGamma ( p+q+2), \\& \beta _{1} = \frac{1}{2}\bigl(1-\xi ^{2}\bigr) \alpha _{0}^{J}+ {\frac{1}{2}}\bigl(1+\xi ^{2}\bigr), \\& \left.\textstyle\begin{array}{l} \beta _{2k} =({\frac{1}{2}}(1-\xi ^{2}))^{2} \beta _{k}^{J}/\beta _{2k-1} \\ \beta _{2k+1} ={\frac{1}{2}}(1-\xi ^{2}) \alpha _{k}^{J}+{\frac{1}{2}}(1+\xi ^{2})-\beta _{2k} \end{array}\displaystyle \right\} \quad k=1,2,3,\ldots , \end{aligned}

where $$\alpha _{k}^{J}$$, $$\beta _{k}^{J}$$ are the recurrence coefficients of the monic Jacobi polynomials with parameters $$\alpha =q$$, $$\beta =p$$. Therefore, the zeros of $$\pi _{n}$$ are easily computed by the OPQ routine gauss.m (see [4, p. 304]).
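For $$p=q=0$$, the Jacobi coefficients reduce to the monic Legendre ones, $$\alpha _{k}^{J}=0$$, $$\beta _{0}^{J}=2$$, $$\beta _{k}^{J}=k^{2}/(4k^{2}-1)$$, and the displayed recurrence can be transcribed directly. The following pure-Python sketch (ours, not part of the OPQ suite) does so:

```python
from math import isclose

def twoint_beta(xi, nmax):
    """Recurrence coefficients beta_0, ..., beta_{2*nmax+1} for the
    two-interval weight (41) in the special case p = q = 0, where the
    Jacobi coefficients reduce to the monic Legendre ones:
    aJ_k = 0, bJ_0 = 2, bJ_k = k^2/(4k^2 - 1)."""
    aJ = [0.0] * (nmax + 1)
    bJ = [2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, nmax + 1)]
    beta = [0.0] * (2 * nmax + 2)
    beta[0] = 1.0 - xi ** 2      # = (1-xi^2)^{p+q+1} G(p+1)G(q+1)/G(p+q+2)
    beta[1] = 0.5 * (1 - xi ** 2) * aJ[0] + 0.5 * (1 + xi ** 2)
    for k in range(1, nmax + 1):
        beta[2 * k] = (0.5 * (1 - xi ** 2)) ** 2 * bJ[k] / beta[2 * k - 1]
        beta[2 * k + 1] = (0.5 * (1 - xi ** 2)) * aJ[k] \
            + 0.5 * (1 + xi ** 2) - beta[2 * k]
    return beta

beta = twoint_beta(0.5, 10)      # xi = 1/2, as in Fig. 15
```

All $$\beta _{k}$$ so produced are positive, and $$\beta _{0}=1-\xi ^{2}$$ equals the total mass $$\int w$$ of the weight function, a useful sanity check before feeding the coefficients to gauss.m.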

The computation of $$u_{jk}^{(n)}$$ is different, depending on where the zero $$x_{j}$$ is located. In fact,

$$u_{jk}^{(n)}=-\frac{1+x_{j}}{2} \int _{-1}^{1} \ell _{k}^{(n)} \bigl( x_{1}(t)\bigr)x _{1}(t) \bigl(x_{1}^{2}(t)- \xi ^{2}\bigr)^{p}\bigl(1-x_{1}^{2}(t) \bigr)^{q} \,\mathrm{d}t \quad \text{if } x_{j}< -\xi ,$$

where $$x_{1}(t)=\frac{1+x_{j}}{2}t+\frac{x_{j}-1}{2}$$ maps $$[-1,1]$$ onto $$[-1,x_{j}]$$;

$$u_{jk}^{(n)}=-\frac{1-\xi }{2} \int _{-1}^{1} \ell _{k}^{(n)} \bigl( x_{2}(t)\bigr)x _{2}(t) \bigl(x_{2}^{2}(t)- \xi ^{2}\bigr)^{p}\bigl(1-x_{2}^{2}(t) \bigr)^{q} \,\mathrm{d}t \quad \text{if } x_{j}=0,$$

where $$x_{2}(t)=\frac{1-\xi }{2}t-\frac{1+\xi }{2}$$ maps $$[-1,1]$$ onto $$[-1,-\xi ]$$; and

$$u_{jk}^{(n)}= \bigl( u_{jk}^{(n)} \bigr)_{x_{j}=0}+\frac{ x_{j}- \xi }{2} \int _{-1}^{1} \ell _{k}^{(n)} \bigl(x_{3}(t)\bigr)x_{3}(t) \bigl(x_{3}^{2}(t)- \xi ^{2}\bigr)^{p} \bigl(1-x_{3}^{2}(t) \bigr)^{q} \,\mathrm{d}t \quad \text{if } x_{j}>\xi ,$$

where $$x_{3}(t)=\frac{x_{j}-\xi }{2}t+\frac{x_{j}+\xi }{2}$$ maps $$[-1,1]$$ onto $$[\xi ,x_{j}]$$.

All integrals can be computed exactly by $$(\lceil (n+1)/2\rceil +p+q)$$-point Gauss–Legendre quadrature.

### Example 7

The weight function (41) with $$\xi =0.1:0.2:0.9$$ and $$p,q=0:5$$ for $$n=5,15,30$$.

The routines Uconj_ext_twoint.m, run_Uconj_ext_twoint.m can be used to validate the conjecture in all cases, even though the weight function is not in the class of weight functions assumed in the conjecture. (For another such example, see Example 9 with $$N=1$$.)

To illustrate, we show in Fig. 15 the eigenvalues of $$U_{n}$$, $$n=5,15,30$$, in the case $$\xi =1/2$$, $$p=q=0$$, i.e., for the weight function $$w(x)$$ on $$[-1,1]$$ equal to $$|x|$$ outside of $$[-1/2,1/2]$$ and 0 inside.

## Discrete weight functions

To demonstrate that an assumption about the weight function like the one made for the extended Stenger conjecture is called for, we now consider a discrete measure $$\mathrm{d}\lambda _{N+1}$$ supported on $$N+1$$ points $$0,1,2,\ldots ,N$$ with jumps $$w_{k}>0$$ at the points k, $$k=0,1, \ldots ,N$$. The corresponding orthogonal polynomials, now $$N+1$$ in number, are again denoted by $$p_{n}$$, $$n=0,1,\ldots ,N$$. If $$w_{0}=w_{1}=\cdots =w_{N}=1$$, we are dealing with the classical discrete orthogonal polynomials attributed to Chebyshev [3, Example 1.15]. They are the special case $$\alpha =\beta =0$$ of Hahn polynomials with parameters α, β (cf. [3, last entry of Table 1.2]). Both the weight function and the zeros of $$p_{n}$$ are symmetric about the midpoint $$N/2$$. In particular, when N is even and n odd, one of the zeros is equal to $$N/2$$, hence an integer.

For the elements of $$U_{n}$$, we have

$$u_{jk}^{(n)}=\sum _{i=0}^{i_{j}} w_{i} \ell _{k}^{(n)}(i), \quad i _{j}=\lfloor x_{j}\rfloor ,$$
(42)

where $$x_{j}$$ are the zeros of $$p_{n}$$ (assumed in increasing order). These can be generated by the functions r_hahn.m and gauss.m.
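In pure Python (as a stand-in for r_hahn.m and gauss.m), the zeros of $$p_{n}$$ can be obtained by bisection from the three-term recurrence with $$\alpha _{k}=N/2$$ and $$\beta _{k}=k^{2}((N+1)^{2}-k^{2})/(4(4k^{2}-1))$$, the known coefficients for the discrete Chebyshev case $$w_{i}\equiv 1$$, after which (42) is a plain sum. Since $$\sum_{k}\ell _{k}^{(n)}\equiv 1$$, row j of $$U_{n}$$ must sum to $$\lfloor x_{j}\rfloor +1$$, which serves as a built-in check:

```python
from math import floor

def p_discrete_cheb(x, n, N):
    """Monic discrete Chebyshev polynomial p_n for unit weights on
    {0, 1, ..., N}, evaluated via the three-term recurrence with
    alpha_k = N/2 and beta_k = k^2((N+1)^2 - k^2)/(4(4k^2 - 1))."""
    pm1, p = 0.0, 1.0
    for k in range(n):
        # beta_0 multiplies p_{-1} = 0, so its value is irrelevant here
        beta = k * k * ((N + 1) ** 2 - k * k) / (4.0 * (4 * k * k - 1)) if k else 0.0
        pm1, p = p, (x - N / 2.0) * p - beta * pm1
    return p

def zeros_of_pn(n, N, grid=4000, tol=1e-12):
    """The n simple zeros of p_n in (0, N), located by bisection."""
    xs = [N * i / grid for i in range(grid + 1)]
    zs = []
    for a, b in zip(xs, xs[1:]):
        fa, fb = p_discrete_cheb(a, n, N), p_discrete_cheb(b, n, N)
        if fa == 0.0:
            zs.append(a)
        elif fa * fb < 0:
            while b - a > tol:
                mid = 0.5 * (a + b)
                if fa * p_discrete_cheb(mid, n, N) <= 0:
                    b = mid
                else:
                    a, fa = mid, p_discrete_cheb(mid, n, N)
            zs.append(0.5 * (a + b))
    return zs

def ell(nodes, k, x):
    """Elementary Lagrange polynomial ell_k^{(n)} for the given nodes."""
    v = 1.0
    for j, xj in enumerate(nodes):
        if j != k:
            v *= (x - xj) / (nodes[k] - xj)
    return v

N, n = 5, 3
x = zeros_of_pn(n, N)
# u_jk per (42), with unit jumps w_i = 1
U = [[sum(ell(x, k, i) for i in range(floor(xj) + 1)) for k in range(n)]
     for xj in x]
rowsums = [sum(row) for row in U]
```

(The floor pitfall for N even and n odd is deliberately avoided here by taking $$N=5$$, $$n=3$$; the middle zero is then exactly $$N/2=2.5$$, a half-integer, and the row sums come out as 1, 3, 5.)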

### Example 8

The measure $$\mathrm{d}\lambda _{N+1}$$, $$N\geq 2$$, with $$w_{0}=w_{1}=\cdots =w_{N}=1$$, and $$p_{n}$$ with $$2\leq n\leq N$$.

It is important to note that when the zeros of $$p_{n}$$ are computed by the routine gauss.m, and when N is even and n odd, the integer zero $$x_{j}=N/2$$ may come out slightly smaller than $$N/2$$, in which case $$\lfloor x_{j}\rfloor$$ in (42) yields an incorrect result. Similarly, the computed smallest zero may turn out to be slightly negative, or the computed largest zero equal to N. To avoid these pitfalls, we overwrite the middle zero, once computed, by $$N/2$$, and reset $$\lfloor x_{1} \rfloor$$ to 0 and $$\lfloor x_{n} \rfloor$$ to $$N-1$$.

On running the script run_Uconj_ext_hahn.m, using Uconj_ext_hahn.m, to compute $$U_{n}$$ and its eigenvalues, we found that the extended Stenger conjecture is still true for all $$N\leq 10$$ and all $$2\leq n\leq N$$, but no longer when $$N>10$$. The values of N and n for which eigenvalues with negative real parts appear are shown in Table 6 for $$11\leq N\leq 30$$.

Asterisks indicate the presence of two pairs of delinquent complex conjugate eigenvalues rather than the usual single pair. (48-digit arithmetic was used for the last two entries in Table 6.)

Since the weight function is symmetric (with respect to the midpoint $$N/2$$), by Theorem 1 the same pattern of validity and nonvalidity holds also for the V-conjecture.

We illustrate by showing in Fig. 16 the eigenvalues of $$U_{n}$$, $$n=N$$, for $$N=11,15,30$$.

Since there are no approximations involved, the results obtained should be quite accurate. In fact, we reran Example 8 in 48-digit arithmetic and found the double-precision eigenvalues accurate to 13, 12, and 10 digits for $$n=11,15,30$$, respectively.

With regard to the restricted Stenger conjecture, the routines used are run_Uconj_restr_hahn.m and Uconj_restr_hahn.m. They, too, confirm the validity of the conjecture for $$N\leq 10$$ and $$2\leq n\leq N$$. But for $$N>11$$, there are now more values of n than shown in Table 6 for which there are eigenvalues with negative real parts, and there can be as many as four pairs of delinquent eigenvalues.

## Block-discrete and ε-block-discrete weight functions

It may be interesting to see whether the eigenvalues of $$U_{n}$$ behave similarly to those in Example 8 when the weight function is not ($$N+1$$)-discrete, but ($$N+1$$)-block-discrete, that is, of the form

$$w(x;N+1)= \textstyle\begin{cases} w_{\nu }& \text{if } 2\nu \leq x\leq 2\nu +1, \nu =0,1,\ldots ,N, \\ 0 & \text{otherwise}, \end{cases}$$
(43)

where $$w_{0},w_{1},\ldots ,w_{N}$$, $$N\geq 1$$, are positive numbers. Thus, the weight function is made up of $$N+1$$ “blocks” with base 1 and heights $$w_{\nu }$$, $$\nu =0,1,\ldots ,N$$, any two consecutive blocks being separated by a zero-block. More generally, we may consider $$(N+1)$$-ε-block-discrete weight functions, where the separating zero-blocks are replaced by ε-blocks, that is,

$$w(x;N+1,\varepsilon )= \textstyle\begin{cases} w_{\nu }& \text{if } 2\nu \leq x< 2\nu +1, \nu =0,1,\ldots ,N, \\ \varepsilon & \text{if } 2\nu -1\leq x< 2\nu , \nu =1,2,\ldots ,N, \\ 0 & \text{otherwise}. \end{cases}$$
(44)

The orthogonal polynomials $$p_{n}$$ associated with the weight function $$w(x; N+1,\varepsilon )$$ can be generated from their three-term recurrence relation, which in turn can be computed (exactly) by a ($$2N+1$$)-component discretization procedure (cf. [3, §2.2.4]) using $$\lceil n/2\rceil$$-point Gauss–Legendre quadrature on $$[0,1]$$. This is implemented in Matlab double and variable precision by the routines ab_blockhahn.m, sab_blockhahn.m. (For checking purposes, the same recurrence relation was also computed by a moment-based routine in sufficiently high precision.)

The elements $$u_{jk}$$ of the matrix $$U_{n}$$

$$u_{jk}= \int _{0}^{x_{j}} \ell _{k}^{(n)}(x) w(x;N+1,\varepsilon ) \,\mathrm{d}x,$$

where $$x_{j}$$ are the zeros of $$p_{n}$$, can be computed (exactly) as follows. Let $$m=\lfloor x_{j} \rfloor$$.

If $$m=0$$,

$$u_{jk}^{(n)}=w_{0} \int _{0}^{x_{j}}\ell _{k}^{(n)}(x) \,\mathrm{d}x= w _{0} x_{j} \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t)\,\mathrm{d}t;$$

if $$m=1$$,

\begin{aligned} u_{jk} & = w_{0} \int _{0}^{1} \ell _{k}^{(n)}(x) \,\mathrm{d}x+\varepsilon \int _{1}^{x_{j}} \ell _{k}^{(n)}(x) \,\mathrm{d}x \\ & = \int _{0}^{1} \bigl[ w_{0} \ell _{k}^{(n)}(t)+\varepsilon (x _{j}-1) \ell _{k}^{(n)}\bigl((x_{j}-1)t+1\bigr) \bigr] \,\mathrm{d}t; \end{aligned}

if $$m>0$$ is even,

\begin{aligned} u_{jk} & = \sum _{\nu =0}^{(m-2)/2} w_{\nu } \int _{2\nu }^{2\nu +1} \ell _{k} ^{(n)}(x)\,\mathrm{d}x+w_{m/2} \int _{m}^{x_{j}}\ell _{k}^{(n)}(x) \,\mathrm{d}x +\varepsilon \sum_{\nu =1}^{m/2} \int _{2\nu -1}^{2\nu } \ell _{k}^{(n)}(x) \,\mathrm{d}x \\ & = \int _{0}^{1} \Biggl( \sum _{\nu =0}^{(m-2)/2} w_{\nu } \ell _{k} ^{(n)} (2\nu +t)+w_{m/2}(x_{j}-m)\ell _{k}^{(n)}\bigl((x_{j}-m)t+m\bigr) \\ & \quad{} + \varepsilon \sum_{\nu =1}^{m/2} \ell _{k}^{(n)} (2 \nu -1+t) \Biggr) \,\mathrm{d}t; \end{aligned}

if $$m>1$$ is odd,

\begin{aligned} u_{jk} & = \sum _{\nu =0}^{(m-1)/2} w_{\nu } \int _{2\nu }^{2\nu +1} \ell _{k} ^{(n)}(x)\,\mathrm{d}x+\varepsilon \sum_{\nu =1}^{(m-1)/2} \int _{2 \nu -1}^{2\nu } \ell _{k}^{(n)}(x) \,\mathrm{d}x+\varepsilon \int _{m}^{x _{j}}\ell _{k}^{(n)}(x) \,\mathrm{d}x \\ & = \int _{0}^{1} \Biggl( w_{0} \ell _{k}^{(n)}(t)+\sum_{\nu =1}^{(m-1)/2} \bigl[ w_{\nu }\ell _{k}^{(n)}(2\nu +t)+\varepsilon \ell _{k}^{(n)} (2\nu -1+t) \bigr] \\ & \quad{} + \varepsilon (x_{j}-m)\ell _{k}^{(n)} \bigl((x_{j}-m)t+m\bigr) \Biggr) \,\mathrm{d}t. \end{aligned}

All integrals on the far right of these equations can be computed exactly by $$\lceil n/2 \rceil$$-point Gauss–Legendre quadrature on $$[0,1]$$. The first pitfall mentioned in Example 8, associated with computing the floor of $$x_{j}$$, is no longer an issue since the midpoint is now $$N+1/2$$, a half-integer, not an integer.

### Example 9

The ($$N+1$$)-block-discrete Hahn weight function with parameters $$\alpha =\beta =0$$ and $$p_{n}$$ with $$2\leq n\leq N$$.

This is the weight function (43) with $$w_{0}=w_{1} =\cdots =w_{N}=1$$. To check the behavior of the eigenvalues in this case, we have run the script run_Uconj_ext_blockhahn.m using the function Uconj_ext_blockhahn.m and $$\mathtt{epsilon}=0$$ for $$N=1:10$$ and $$2\leq n\leq 30$$ for each N. It was found that the extended Stenger conjecture is still true for $$2\leq n\leq 30$$ (and probably for all $$n\geq 2$$) when $$N=1$$, i.e., for a 2-block-discrete Hahn weight function. When $$N>1$$, however, eigenvalues with negative real parts again show up, starting from some $$n\geq 9$$, and frequently, but not always, thereafter. The values of N and n, for which this occurs, are shown in Table 7. There is usually one pair of delinquent complex conjugate eigenvalues, but in some cases there are two such pairs. These are identified by an asterisk in Table 7.

The validity of the extended Stenger conjecture for $$N=1$$ is interesting. It may well be for the same (unknown) reason that validates the conjecture in the case of the two-interval weight function of Sect. 6.6; cf. Example 7.

To illustrate, we show in Fig. 17 the eigenvalues in the cases $$(N,n)=(2,30),(5,28), (10,26)$$.

The restricted Stenger conjecture, in this example, fares much better, though it, too, fails in a few cases. Using the routines run_Uconj_restr_blockhahn.m and Uconj_restr_blockhahn.m for $$N=1:10$$, $$2\leq n\leq 30$$, we found the conjecture to be true for $$N=[1,2,3,4,9]$$, $$2\leq n\leq 30$$, and false in only five cases: $$(N,n)=(5,30),(6,28),(7,30),(8,28),(10,25)$$. To rule out severe numerical instabilities as a cause for this unexpected behavior, all cases have been rerun, and confirmed, in 32-digit arithmetic. The double-precision eigenvalues were compared with those obtained in 32-digit precision and found to agree to 5–15 digits, the delinquent ones always to at least 11 digits.

For illustration, we show in Fig. 18 the eigenvalues in the cases $$(N,n)=(2,30),(6,28), (10,25)$$, the last two containing a pair of eigenvalues with negative real part.

The presence of delinquent eigenvalues in this example, strictly speaking, does not invalidate the extended Stenger conjecture, since the weight function (43) does not satisfy the positivity a.e. condition imposed by Stenger. However, the matrix $$U_{n}$$ associated with the weight function (44), which depends on the positive parameter ε, will by a continuity argument have the same pattern of delinquent eigenvalues as the matrix $$U_{n}$$ associated with the weight function (43) when ε is sufficiently small. This shows that the extended Stenger conjecture cannot be valid for all admissible weight functions. We illustrate this with a final example.

### Example 10

The $$(N+1)$$-ε-block-discrete weight function (44) for $$N=2$$, $$\varepsilon = 1/100$$, and $$n=9$$.

This relates to the first item in Table 7. The routine run_Uconj_ext_epsilon_blockhahn_N2_n9.m, using r_blockhahn to generate the required recurrence coefficients by an ($$N+1$$)-component discretization procedure ($$N=2$$) implemented by the routines mcdis.m and quad_blockhahn.m, computes the eigenvalues of $$U_{n}$$ for $$n=9$$. They are shown in Table 8.

Recomputing them in 32-digit arithmetic proves them correct to all digits shown.

1. All Matlab routines referenced in this paper, and all textfiles used, can be accessed at CONJS of the website https://www.cs.purdue.edu/archives/2002/wxg/codes.

## References

1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Appl. Math. Ser., vol. 55. National Bureau of Standards, Washington (1964)

2. Gautschi, W.: On some orthogonal polynomials of interest in theoretical chemistry. BIT Numer. Math. 24, 473–483 (1984) [Also in Selected Works, v. 2, 101–111.]

3. Gautschi, W.: Orthogonal Polynomials: Computation and Approximation, Numerical Mathematics and Scientific Computation. Oxford University Press, Oxford (2004)

4. Gautschi, W.: Orthogonal Polynomials in MATLAB: Exercises and Solutions. SIAM, Philadelphia (2016)

5. Gautschi, W.: A Software Repository for Orthogonal Polynomials. SIAM, Philadelphia (2018)

6. Hairer, E., Nørsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd rev. edn. Springer Series in Computational Mathematics, vol. 8. Springer, Berlin (1993)

7. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, 2nd rev. edn. Springer Series in Computational Mathematics, vol. 14. Springer, Berlin (1996)

8. Stenger, F., Baumann, G., Koures, V.G.: Computational methods for chemistry and physics, and Schrödinger in $$3 + 1$$. In: Sabin, J.R., Cabrera-Trujillo, R. (eds.) Advances in Quantum Chemistry, Ch. 11, pp. 265–298. Academic Press, San Diego (2015)

9. Szegö, G.: Orthogonal Polynomials, 4th edn. Colloquium Publications, vol. 23. Am. Math. Soc., Providence (1975)

### Acknowledgements

The authors thank Martin J. Gander for having alerted them to the sensitivity, when n is large, of the eigenvalues of the matrices $$U_{n}$$, $$V_{n}$$ to small changes in their elements, and they acknowledge helpful correspondence with Frank Stenger.


## Author information


### Contributions

The authors have equal contributions. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Walter Gautschi.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

Dedicated to Gradimir V. Milovanović on his 70th birthday

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Appendix:  Relation to Runge–Kutta methods

Let $$x_{1},x_{2}, \ldots ,x_{n}$$ be distinct real numbers (typically in the interval $$[0,1]$$). The corresponding (collocation) Runge–Kutta method (see [6, Theorem II.7.7]) is then given by the coefficients

$$a_{jk}= \int _{0}^{x_{j}} \ell _{k}(x)\,\mathrm{d}x, \qquad b_{k}= \int _{0} ^{1} \ell _{k}(x)\, \mathrm{d}x,$$
(45)

where $$\ell _{k}(x)$$ is the kth elementary Lagrange interpolation polynomial of degree $$n-1$$. We collect the coefficients in the $$n\times n$$ matrix $$A=(a_{jk})_{j,k=1}^{n}$$, in the column vector $$b=(b_{k})_{k=1}^{n}$$, and we denote the column vector with all elements equal to 1 by $\mathbb{1}$.

An application of the Runge–Kutta method with step size h to the Dahlquist test equation $$\dot{y}=\lambda y$$ yields (with $$z=h\lambda$$)

$$y_{1}=R(z)y_{0}, \qquad R(z)=1+zb^{\mathrm{T}}(I-zA)^{-1}\mathbb{1},$$
(46)

where $$R(z)$$ is the stability function of the method. Note that for an invertible matrix A, the eigenvalues of A are the reciprocals of the poles of the rational function $$R(z)$$.

The adjoint method of (45) is given by the coefficients (cf. [6, Theorem II. 8.3])

$$a_{n+1-j,n+1-k}^{*}=b_{k}-a_{jk}= \int _{x_{j}}^{1} \ell _{k}(x) \,\mathrm{d}x, \qquad b_{n+1-k}^{*}=b_{k} .$$
(47)

Its stability function is related to that of (45) by $$R^{*}(z)=1/R(-z)$$.

Connection to the Stenger conjecture. The $$n\times n$$ matrix with coefficients $$a_{jk}$$ of (45) is equal to the matrix $$U_{n}$$ (with $$a=0$$) of (2) in Sect. 1, and the matrix with coefficients $$a_{jk}^{*}$$ of (47) is equal to $$V_{n}$$ (with $$b=1$$). Since the nonzero eigenvalues of A are the reciprocals of the poles of the stability function (46), there is a close connection between the Stenger conjecture and A-stability of a Runge–Kutta method.

The (shifted) Legendre polynomials are orthogonal with respect to the constant weight function $$w(x)=1$$ on $$[0,1]$$. The corresponding collocation Runge–Kutta method is the so-called Gauss method of order 2n, which is A-stable (see [7, Section IV.5]). Its stability function is the diagonal Padé approximant $$R_{n,n}(z)$$ to the exponential function, for which all poles are in the right half of the complex plane. This provides another proof of the Stenger conjecture for Legendre polynomials.
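This last statement can be checked directly from (45). The following pure-Python sketch (ours; the paper's computations are in Matlab) builds the Butcher matrix A of the 2-stage Gauss method from the zeros $$1/2\pm \sqrt{3}/6$$ of the shifted Legendre polynomial of degree 2 and verifies that both eigenvalues have real part $$1/4$$:

```python
import cmath
from math import sqrt

def lagrange_coeffs(nodes, k):
    """Ascending-power coefficients of ell_k for the given nodes."""
    c = [1.0]
    for j, xj in enumerate(nodes):
        if j == k:
            continue
        d = nodes[k] - xj
        out = [0.0] * (len(c) + 1)   # multiply c(x) by (x - xj)/d
        for i, ci in enumerate(c):
            out[i + 1] += ci / d
            out[i] -= xj * ci / d
        c = out
    return c

def poly_integral(c, a, b):
    """Exact integral over [a, b] of the polynomial with coefficients c."""
    return sum(ci * (b ** (i + 1) - a ** (i + 1)) / (i + 1)
               for i, ci in enumerate(c))

# collocation nodes: zeros of the shifted Legendre polynomial of degree 2
c_nodes = [0.5 - sqrt(3) / 6, 0.5 + sqrt(3) / 6]

# Butcher coefficients (45): a_jk = int_0^{c_j} ell_k,  b_k = int_0^1 ell_k
A = [[poly_integral(lagrange_coeffs(c_nodes, k), 0.0, cj) for k in range(2)]
     for cj in c_nodes]
b = [poly_integral(lagrange_coeffs(c_nodes, k), 0.0, 1.0) for k in range(2)]

# eigenvalues of the 2x2 matrix A via the quadratic formula
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
eigs = [(tr + s * cmath.sqrt(tr * tr - 4 * det)) / 2 for s in (1, -1)]
```

The weights come out as $$b=(1/2,1/2)$$ and the eigenvalues as $$1/4\pm i\sqrt{3}/12$$; the poles of $$R_{2,2}$$, their reciprocals, thus indeed lie in the right half-plane.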
