An explicit version of the Chebyshev-Markov-Stieltjes inequalities and its applications

Abstract

Consider the Borel probability space on the set of real numbers. The algebraic-analytical structure of the set of all finite atomic random variables on it with a given even number of moments is determined. It is used to derive an explicit version of the Chebyshev-Markov-Stieltjes inequalities suitable for computation. These inequalities are based on the theory of orthogonal polynomials, linear algebra, and the polynomial majorant/minorant method. The result is used to derive generalized Laguerre-Samuelson bounds for finite real sequences and generalized Chebyshev-Markov value-at-risk bounds. A financial market case study illustrates how the upper value-at-risk bounds perform in practice.

1 Introduction

Let I be a real interval and consider probability measures μ on I with moments \(\mu_{k} = \int_{I} x^{k}\,d\mu (x)\), \(k = 0,1,2,\ldots\) , such that \(\mu_{0} = 1\). Given a sequence of real numbers \(\{ \mu_{k} \}_{k \ge 0}\), the moment problem on I consists in asking the following questions. Does there exist a measure μ on I with the given moments? If it exists, is it uniquely determined? If not, describe all measures μ on I with the given moments (see e.g. Kaz’min and Rehmann [1]).

There are essentially three different types of intervals: both end-points are finite, typically \(I = [0,1]\); one end-point is finite, typically \(I = [0,\infty)\); or no end-point is finite, i.e. \(I = ( - \infty,\infty )\). The corresponding moment problems are called the Stieltjes moment problem if \(I = [0,\infty)\), the Hausdorff moment problem if \(I = [0,1]\), and the Hamburger moment problem if \(I = ( - \infty,\infty )\). Besides these, the algebraic moment problem of Mammana [2] asks for the existence and construction of a finite atomic random variable with a given finite moment structure. It is well known that the latter plays a crucial role in the construction of explicit bounds for probability measures and integrals given a fixed number of moments.

In the present study, the focus is on the interval \(I = ( - \infty,\infty )\). Motivated by a previous result of the author in a special case, we ask for a possibly explicit description of the set of all finite atomic random variables with a given moment structure, for an arbitrary even number of moments. Based on some basic results from the theory of orthogonal polynomials, as summarized in Section 2, we characterize these sets of finite atomic random variables in the main Theorem 3.1. This constitutes a first specific answer to a research topic suggested by the author in the Preface of Hürlimann [3], namely the search for a general structure underlying the sets of finite atomic random variables with a given moment structure. As an immediate application, we derive in Section 4 an explicit version of the Chebyshev-Markov-Stieltjes inequalities that is suitable for computation. It is used to derive generalized Laguerre-Samuelson bounds for finite real sequences in Section 5, and generalized Chebyshev-Markov value-at-risk bounds in Section 6.

The historical origin of the Chebyshev-Markov-Stieltjes inequalities dates back to Chebyshev [4], who first formulated this famous problem and also proposed a solution, though without proof. Proofs were later given by Markov [5], Possé [6] and Stieltjes [7, 8]. Twentieth century developments include among others Uspensky [9], Shohat and Tamarkin [10], Royden [11], Krein [12], Akhiezer [13], Karlin and Studden [14], Freud [15] and Krein and Nudelman [16]. A short account is also found in Whittle [17], pp.110-118. It seems that the Chebyshev-Markov inequalities were stated in full generality for the first time by Zelen [18]. Explicit analytical results for moments up to order four have been given in particular by Zelen [18], Simpson and Welch [19], Kaas and Goovaerts [20], and Hürlimann [21] (see also Hürlimann [3]).

2 Basic classical results on orthogonal polynomials

Let \(( \Omega,A,P )\) be the Borel probability space on the set of real numbers such that Ω is the sample space, A is the σ-field of events and P is the probability measure. For a measurable real-valued random variable X on this probability space, that is, a map \(X: \Omega \to R\), the probability distribution of X is defined and denoted by \(F_{X}(x) = P(X \le x)\).

Consider a real random variable X with finite moments \(\mu_{k} = E[X^{k}]\), \(k = 0,1,2,\ldots\) . If X takes an infinite number of values, then the Hankel determinants

$$ D_{n} = \begin{vmatrix} \mu_{0} & \mu_{1} & \cdot & \cdot & \mu_{n} \\ \mu_{1} & \mu_{2} & \cdot & \cdot & \mu_{n + 1} \\ \cdot & \cdot & & & \cdot \\ \cdot & \cdot & & & \cdot \\ \mu_{n} & \mu_{n + 1} & \cdot & \cdot & \mu_{2n} \end{vmatrix}, \quad n = 0,1,2,\ldots, $$
(2.1)

are non-zero. Otherwise, only a finite number of them are non-zero (e.g. Cramér [22], Section 12.6). We will assume that all are non-zero. By convention one sets \(p_{0}(x) = \mu_{0} = 1\).

Definition 2.1

An orthogonal polynomial of degree \(n \ge 1\) with respect to the moment structure \(\{ \mu_{k} \}_{k = 0,\ldots,2n - 1}\), also called orthogonal polynomial with respect to X, is a polynomial \(p_{n}(x)\) of degree n, with positive leading coefficient, that satisfies the n linear expected value equations

$$ E\bigl[p_{n}(X) \cdot X^{i}\bigr] = 0, \quad i = 0,1, \ldots,n - 1. $$
(2.2)

The terminology ‘orthogonal’ refers to the scalar product induced by the expectation operator \(\langle X,Y \rangle = E[XY]\), where X, Y are random variables for which this quantity exists. An orthogonal polynomial is uniquely defined if either its leading coefficient is one (so-called monic polynomial) or \(\langle p_{n}(X),p_{n}(X) \rangle = 1\) (so-called orthonormal property). A monic orthogonal polynomial is throughout denoted by \(q_{n}(x)\), while an orthogonal polynomial with the orthonormal property is denoted by \(P_{n}(x)\). A few classical results are required.

Lemma 2.1

(Chebyshev determinant representation)

The monic orthogonal polynomial of degree n identifies with the determinant

$$ q_{n}(x) = \frac{1}{D_{n - 1}} \begin{vmatrix} \mu_{0} & \mu_{1} & \cdot & \cdot & \mu_{n} \\ \mu_{1} & \mu_{2} & \cdot & \cdot & \mu_{n + 1} \\ \cdot & \cdot & & & \cdot \\ \cdot & \cdot & & & \cdot \\ \mu_{n - 1} & \mu_{n} & \cdot & \cdot & \mu_{2n - 1} \\ 1 & x & \cdot & \cdot & x^{n} \end{vmatrix}, \quad n = 1,2,\ldots . $$
(2.3)

Proof

See e.g. Akhiezer [13], Chapter 1, Section 1, or Hürlimann [3], Lemma I.1.1. □
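
Lemma 2.1 is directly computable: since the determinant in (2.3) is linear in its bottom row \((1, x, \ldots, x^{n})\), the coefficient of \(x^{j}\) in \(q_{n}(x)\) equals the determinant obtained by replacing the bottom row with the jth unit vector, divided by \(D_{n-1}\). The following sketch (our own helper names, assuming numpy is available) illustrates this; for the standard normal moments it recovers the probabilists’ Hermite polynomial \(He_{3}(x) = x^{3} - 3x\).

```python
import numpy as np

def hankel(mu, n):
    """(n+1)x(n+1) Hankel moment matrix H with H[i][j] = mu_{i+j}."""
    return np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)], float)

def monic_orthogonal_poly(mu, n):
    """Coefficients c[0..n] of q_n(x) = sum_j c[j] x^j via (2.3), for n >= 1.
    The determinant is linear in its bottom row (1, x, ..., x^n), so the
    coefficient of x^j is the determinant with the bottom row replaced
    by the j-th unit vector, divided by D_{n-1}."""
    D_prev = np.linalg.det(hankel(mu, n - 1)) if n >= 1 else 1.0
    M = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n)], float)
    coeffs = []
    for j in range(n + 1):
        e = np.zeros(n + 1)
        e[j] = 1.0
        coeffs.append(np.linalg.det(np.vstack([M, e])) / D_prev)
    return np.array(coeffs)

# moments of the standard normal: 1, 0, 1, 0, 3, 0, 15
mu = [1, 0, 1, 0, 3, 0, 15]
print(monic_orthogonal_poly(mu, 3))  # ~ [0, -3, 0, 1], i.e. He_3(x) = x^3 - 3x
```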

The monic orthogonal polynomials form an orthogonal system of functions.

Lemma 2.2

(Orthogonality relations)

$$\begin{aligned}& \bigl\langle q_{m}(X),q_{n}(X) \bigr\rangle = 0 \quad \textit{for } m \ne n, \end{aligned}$$
(2.4)
$$\begin{aligned}& \bigl\langle q_{n}(X),q_{n}(X) \bigr\rangle = \frac{D_{n}}{D_{n - 1}} \ne 0, \quad n = 1,2,\ldots. \end{aligned}$$
(2.5)

Proof

See e.g. Akhiezer [13], Chapter 1, Section 1, or Hürlimann [3], Lemma I.1.2. □

Using Lemma 2.2 one sees that the monic orthogonal polynomials and the orthonormal polynomials are linked by the relationship

$$ P_{n}(x) = \sqrt{\frac{D_{n - 1}}{D_{n}}} q_{n}(x), \quad n = 1,2,\ldots. $$
(2.6)

The following relationship plays an essential role in the derivation of the new explicit solution to the algebraic moment problem in the next section.

Lemma 2.3

(Christoffel [23] and Darboux [24])

Let \(K_{n}(x,y) = \sum_{i = 0}^{n} P_{i}(x)P_{i}(y)\) be the Christoffel-Darboux kernel polynomial of degree n. Then one has the determinantal identity

$$ D_{n} \cdot K_{n}(x,y) = - \begin{vmatrix} 0 & 1 & x & \cdot & \cdot & x^{n} \\ 1 & \mu_{0} & \mu_{1} & \cdot & \cdot & \mu_{n} \\ y & \mu_{1} & \mu_{2} & \cdot & \cdot & \mu_{n + 1} \\ \cdot & \cdot & \cdot & & & \cdot \\ \cdot & \cdot & \cdot & & & \cdot \\ y^{n} & \mu_{n} & \mu_{n + 1} & \cdot & \cdot & \mu_{2n} \end{vmatrix}. $$
(2.7)

Proof

This is shown in Akhiezer [13], Chapter 1, Section 2, p.9. □

Note that Brezinski [25] has shown that the Christoffel-Darboux formula is equivalent to the well-known three term recurrence relation for orthogonal polynomials. For further information on orthogonal polynomials consult the recent introduction by Koornwinder [26] as well as several books by Szegö [27], Freud [15], Chihara [28], Nevai [29], etc.
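
The identity (2.7) is easy to verify numerically. In the sketch below (our own construction, assuming numpy), the orthonormal polynomials are obtained at once from a Cholesky factorization \(H = LL^{T}\) of the Hankel moment matrix \(H_{ij} = \mu_{i+j}\): the rows of \(L^{-1}\) are the coefficient vectors of \(P_{0},\ldots,P_{n}\), since a coefficient matrix A yields orthonormal polynomials exactly when \(AHA^{T} = I\). Both sides of (2.7) then agree up to rounding.

```python
import numpy as np

def orthonormal_coeffs(mu, n):
    """Rows are the coefficient vectors (ascending powers) of P_0,...,P_n.
    With H[i][j] = mu_{i+j} = <x^i, x^j> and H = L L^T (Cholesky), the
    coefficient matrix A = L^{-1} satisfies A H A^T = I."""
    H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)], float)
    return np.linalg.inv(np.linalg.cholesky(H))

def kernel_via_sum(mu, n, x, y):
    """Left-hand side of (2.7): K_n(x,y) = sum_i P_i(x) P_i(y)."""
    A = orthonormal_coeffs(mu, n)
    px = A @ np.array([x ** j for j in range(n + 1)], float)
    py = A @ np.array([y ** j for j in range(n + 1)], float)
    return px @ py

def kernel_via_det(mu, n, x, y):
    """Right-hand side of (2.7): minus the bordered Hankel determinant over D_n."""
    H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)], float)
    top = np.array([0.0] + [x ** j for j in range(n + 1)])
    left = np.array([[y ** i] for i in range(n + 1)], float)
    M = np.vstack([top, np.hstack([left, H])])
    return -np.linalg.det(M) / np.linalg.det(H)

mu = [1, 0, 1, 0, 3, 0, 15]  # standard normal moments up to order 6
print(kernel_via_sum(mu, 2, 0.7, -1.2), kernel_via_det(mu, 2, 0.7, -1.2))  # equal
```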

3 An explicit solution to the algebraic moment problem

Given the first \(2n + 1\) moments of some real random variable X, the algebraic moment problem of order \(r = n + 1\), abbreviated \(\operatorname{AMP}(r)\), asks for the existence and construction of a finite atomic random variable with ordered support \(\{ x_{1},\ldots,x_{r} \}\), that is, \(x_{1} < x_{2} <\cdots < x_{r}\), and probabilities \(\{ p_{1},\ldots,p_{r} \}\), such that the system of non-linear equations

$$ \sum_{i = 1}^{r} p_{i}x_{i}^{k} = \mu_{k}, \quad k = 0,\ldots,2n + 1, $$
(3.1)

is solvable. For computational purposes it suffices to know that if a solution exists, then the atoms of the random variable solving \(\operatorname{AMP}(r)\) must be identical with the distinct real zeros of the orthogonal polynomial \(q_{r}(x)\) of degree \(r = n + 1\), as shown by the following precise recipe.

Lemma 3.1

(Mammana [2])

Let \(p_{1},\ldots,p_{r}\) be positive numbers and \(x_{1} < x_{2} <\cdots < x_{r}\) distinct real numbers such that the system \(\operatorname{AMP}(r)\) is solvable. Then the \(x_{i}\) ’s are the distinct real zeros of the orthogonal polynomial of degree r, that is, \(q_{r}(x_{i}) = 0\), \(i = 1,\ldots,r\), and

$$ p_{i} = \prod_{j \ne i} (x_{i} - x_{j})^{ - 1} \cdot E\biggl[\prod _{j \ne i} (Z - x_{j}) \biggr], \quad i = 1, \ldots,r, $$
(3.2)

with Z the discrete random variable with support \(\{ x_{1},\ldots,x_{r} \}\) and probabilities \(\{ p_{1},\ldots,p_{r} \}\).

Next, we assume only the knowledge of the moment sequence \(\{ \mu_{k} \}_{k = 0,1,\ldots,2n}\) and suppose that the Hankel determinants \(D_{k}\), \(k = 0,1,\ldots,n\), are positive. These are the moment inequalities required for the existence of random variables on the whole real line with given first 2n moments (e.g. De Vylder [30], part II, Chapter 3.3). Clearly, only the orthogonal polynomials \(q_{k}(x)\), \(k = 1,2,\ldots,n\), are defined. However, an observation similar to the remark by Akhiezer [13], Chapter 2, Section 5, p.60, is helpful. If, in addition to \(\{ \mu_{k} \}_{k = 0,1,\ldots,2n}\), an arbitrary real number \(\mu_{2n + 1}\) is also given, it is possible by Lemma 2.1 to construct the polynomial

$$ q_{r}(y) = \frac{1}{D_{n}} \begin{vmatrix} \mu_{0} & \mu_{1} & \cdot & \cdot & \mu_{n + 1} \\ \mu_{1} & \mu_{2} & \cdot & \cdot & \mu_{n + 2} \\ \cdot & \cdot & & & \cdot \\ \cdot & \cdot & & & \cdot \\ \mu_{n} & \mu_{n + 1} & \cdot & \cdot & \mu_{2n + 1} \\ 1 & y & \cdot & \cdot & y^{r} \end{vmatrix}, $$
(3.3)

which satisfies the orthogonality relations \(E[q_{r}(Y) \cdot Y^{i}] = 0\), \(i = 0,1,\ldots,r - 1\), for a random variable Y with the ‘moments’ \(\{ \mu_{k} \}_{k = 0,1,\ldots,2n + 1}\), and thus plays the role of the ‘non-existent’ orthogonal polynomial of degree \(r = n + 1\). What does the solution of the system \(\operatorname{AMP}(r)\), as generated by \(q_{r}(y)\), look like in this modified setting?

Instead of the arbitrary variable moment \(\mu_{2n + 1}\) one can equivalently specify a variable real number x and include it in the support \(\{ x_{1},\ldots,x_{n},x \}\) that solves \(\operatorname{AMP}(r)\) based on the variable ‘orthogonal polynomial’ \(q_{r}(x,y)\). Let \(e_{i}\), \(i = 1,\ldots,n\), denote the elementary symmetric functions in \(\{ x_{1},\ldots,x_{n} \}\) and set \(e_{0} = 1\), \(e_{r} = 0\). Then the variable functions

$$ e_{i}(x) = x \cdot e_{i - 1} + e_{i}, \quad i = 1, \ldots,r, $$
(3.4)

coincide with the elementary symmetric functions in \(\{ x_{1},\ldots,x_{n},x \}\), and one has necessarily

$$ q_{r}(x,y) = \sum_{i = 0}^{r} ( - 1)^{i}e_{i}(x)y^{r - i}, \qquad e_{0}(x) = 1. $$
(3.5)

Writing out the linear expected value equations \(E[q_{r}(x,Y) \cdot Y^{i}] = 0\), \(i = 0,1,\ldots,n - 1\), one sees that they are equivalent to the system of linear equations defined in matrix form by \(A \cdot z = b\) with

$$ \begin{aligned} &A_{ij} = \mu_{i + j - 2}x - \mu_{i + j - 1},\quad i,j = 1,\ldots,n, \\ &b_{i} = \mu_{n + i - 1}x - \mu_{n + i}, \quad i = 1, \ldots,n, \\ &z_{j} = ( - 1)^{n - j}e_{n - j + 1},\quad j = 1,\ldots,n. \end{aligned} $$
(3.6)

For \(i = 1,\ldots,n\), let \(A^{i}\) be the matrix formed by replacing the ith column of A by the column vector b. Then, with Cramer’s rule, one obtains from (3.6) the solution

$$ e_{i} = ( - 1)^{i - 1}z_{n - i + 1} = ( - 1)^{i - 1} \frac{\vert A^{n - i + 1} \vert }{\vert A \vert }, \quad i = 1,\ldots,n, $$
(3.7)

where \(\vert M \vert \) denotes the determinant of the matrix M. Clearly, the first n atoms of the support \(\{ x_{1},\ldots,x_{n},x \}\) are by construction the distinct real zeros of the polynomial of degree n given by

$$ q_{n}^{*}(x,y) = \sum_{i = 0}^{n} ( - 1)^{i}e_{i}y^{n - i}. $$
(3.8)

For application to the Chebyshev-Markov-Stieltjes inequalities in Section 4, one is interested in the probabilities (3.2) associated to the discrete random variable Z with support \(\{ x_{1},\ldots,x_{n},x \}\) that solves \(\operatorname{AMP}(r)\). By (3.2) the probability associated to the atom x coincides with the function of one variable

$$ \begin{aligned} &p(x) = \frac{P(x)}{Q(x)}, \qquad P(x) = E\Biggl[ \sum_{i = 0}^{n} ( - 1)^{i}e_{i}Z^{n - i} \Biggr] = \sum_{i = 0}^{n} ( - 1)^{i}e_{i}\mu_{n - i}, \\ &Q(x) = \sum_{i = 0}^{n} ( - 1)^{i}e_{i}x^{n - i}. \end{aligned} $$
(3.9)

We are ready for the following main result.

Theorem 3.1

(Discrete random variables on \((- \infty,\infty)\) with an even number of given moments \(\{ \mu_{k} \}_{k = 0,1,\ldots,2n}\))

Suppose that the Hankel determinants \(D_{k}\), \(k = 0,1,\ldots,n\), are positive. Let \(x_{n + 1} \in ( - \infty,\infty )\) be an arbitrary fixed real number, and let B be the matrix defined by

$$ B = \begin{pmatrix} \mu_{0} & \mu_{1} & \cdot & \cdot & \mu_{n} \\ \mu_{1} & \mu_{2} & \cdot & \cdot & \mu_{n + 1} \\ \cdot & \cdot & & & \cdot \\ \cdot & \cdot & & & \cdot \\ \mu_{n - 1} & \mu_{n} & \cdot & \cdot & \mu_{2n - 1} \\ 1 & x_{n + 1} & \cdot & \cdot & x_{n + 1}^{n} \end{pmatrix}. $$
(3.10)

Further, for \(i = 1,\ldots,n\), let \(B^{i}\) be the matrix formed by replacing the ith row of B by the row vector \(( \mu_{n} \ \mu_{n + 1} \ \cdot \ \cdot \ \mu_{2n})\). Then the support \(\{ x_{1},\ldots,x_{n},x_{n + 1} \}\) and the probabilities \(\{ p_{1},\ldots,p_{n},p_{n + 1} \}\) of a discrete random variable on \((- \infty,\infty)\) with moment sequence \(\{ \mu_{k} \}_{k = 0,1,\ldots,2n}\) are uniquely determined as follows. The atoms \(\{ x_{1},\ldots,x_{n} \}\) are the distinct real zeros of the polynomial

$$ q_{n}^{*}(x_{n + 1},y) = \sum _{i = 0}^{n} ( - 1)^{i}e_{i}y^{n - i}, \quad e_{i} = ( - 1)^{i - 1}\frac{\vert B^{n - i + 1} \vert }{\vert B \vert }, i = 1, \ldots,n, $$
(3.11)

and the probabilities \(\{ p_{1},\ldots,p_{n},p_{n + 1} \}\) are determined by

$$ p_{i} = p(x_{i}) = \Biggl\{ \sum _{k = 0}^{n} P_{k}(x_{i})^{2} \Biggr\} ^{ - 1},\quad i = 1,\ldots,n + 1, $$
(3.12)

where the \(P_{k}(x)\), \(k = 1,\ldots,n\), are the orthonormal polynomials defined in (2.6), with \(P_{0}(x) = 1\) by convention, and the function \(p(x)\) has been defined in (3.9).

Remark 3.1

Before proceeding with the proof, it is important to note that Theorem 3.1 is a generalization of the special case \(n = 2\) in Hürlimann [21], Proposition II.2.

Proof

First of all, one notes that the matrices A, \(A^{i}\), \(i = 1,\ldots,n\), and B, \(B^{i}\), \(i = 1,\ldots,n\), have equal determinants, that is, \(\vert A \vert = \vert B \vert \), \(\vert A^{i} \vert = \vert B^{i} \vert \), \(i = 1,\ldots,n\). Indeed, multiply each column in (3.10) by \(x_{n + 1}\) and subtract it from the next column to see that \(\vert A \vert = \vert B \vert \). The other determinantal equalities are shown similarly. Equations (3.11) follow by (3.7) and (3.8). Equation (3.12) for \(x = x_{n + 1}\) follows from (3.9) using (3.11) and the following determinantal identities:

$$\begin{aligned}& \vert B \vert \cdot P(x) = \sum_{i = 0}^{n} ( - 1)^{i}\mu_{n - i} \cdot \vert B \vert \cdot e_{i} = \mu_{n} \cdot \vert B \vert - \sum _{i = 1}^{n} \mu_{n - i} \cdot \bigl\vert B^{n - i + 1} \bigr\vert = D_{n}, \end{aligned}$$
(3.13)
$$\begin{aligned}& \vert B \vert \cdot Q(x) = \sum_{i = 0}^{n} ( - 1)^{i} \cdot \vert B \vert \cdot e_{i}x^{n - i} = \vert B \vert \cdot x^{n} - \sum_{i = 1}^{n} \bigl\vert B^{n - i + 1} \bigr\vert \cdot x^{n - i} = D_{n} \cdot \sum_{k = 0}^{n} P_{k}(x)^{2}. \end{aligned}$$
(3.14)

To show the last equality in (3.13) we use Laplace’s formula to expand the involved determinants in powers of \(x = x_{n + 1}\) with coefficients in the cofactors \(C_{ij}\) (= signed minors) of the Hankel determinant \(D_{n}\). One sees that

$$\vert B \vert = \sum_{j = 1}^{n + 1} C_{j,n + 1}x^{j - 1}, \qquad \bigl\vert B^{i} \bigr\vert = - \sum_{j = 1}^{n + 1} C_{j,i}x^{j - 1},\quad i = 1,\ldots,n. $$

Inserting this into (3.13) and rearranging, it follows that

$$\mu_{n} \cdot \vert B \vert - \sum_{i = 1}^{n} \mu_{n - i} \cdot \bigl\vert B^{n - i + 1} \bigr\vert = \sum _{i = 0}^{n} \mu_{i}C_{1,i + 1} + \sum_{j = 1}^{n} x^{j} \cdot \sum_{i = 0}^{n} \mu_{i}C_{j + 1,i + 1}. $$

Now, the first sum is identical with Laplace’s cofactor expansion of \(D_{n}\). Similarly, the coefficients of the powers of x are cofactor expansions of determinants with two equal columns, and hence vanish. This shows (3.13). For the last equality in (3.14) we use again Laplace’s formula. An expansion with respect to the first column of the displayed determinant shows the following identity:

$$\begin{aligned} \vert B \vert \cdot Q(x) &= \sum_{i = 0}^{n} ( - 1)^{i} \cdot \vert B \vert \cdot e_{i}x^{n - i} = \vert B \vert \cdot x^{n} - \sum_{i = 1}^{n} \bigl\vert B^{n - i + 1} \bigr\vert \cdot x^{n - i} \\ &= - \begin{vmatrix} 0 & 1 & x & \cdot & \cdot & x^{n} \\ 1 & \mu_{0} & \mu_{1} & \cdot & \cdot & \mu_{n} \\ x & \mu_{1} & \mu_{2} & \cdot & \cdot & \mu_{n + 1} \\ \cdot & \cdot & \cdot & & & \cdot \\ \cdot & \cdot & \cdot & & & \cdot \\ x^{n} & \mu_{n} & \mu_{n + 1} & \cdot & \cdot & \mu_{2n} \end{vmatrix}. \end{aligned}$$

From the special case \(y = x\) of the Christoffel-Darboux formula in Lemma 2.3 one obtains (3.14). Inserting (3.13) and (3.14) into (3.9) implies (3.12) for \(x = x_{n + 1}\). Finally, cyclic permutations of the atoms \(x_{1},\ldots, x_{n}, x_{n + 1}\) imply (3.12) for all \(i = 1,\ldots,n + 1\). □
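
Theorem 3.1 translates directly into an algorithm. The sketch below (our own helper names, assuming numpy; the positivity of the Hankel determinants is implicitly checked by the Cholesky factorization) computes the \(e_{i}\) from (3.11) by Cramer’s rule, the atoms as the real zeros of \(q_{n}^{*}\), and the probabilities as reciprocal kernel values per (3.12); as a check, the resulting discrete law reproduces the prescribed moments \(\mu_{0},\ldots,\mu_{2n}\).

```python
import numpy as np

def theorem_3_1(mu, n, x_extra):
    """Support and probabilities of a discrete law matching mu_0,...,mu_{2n},
    with prescribed extra atom x_extra (Theorem 3.1). A numerical sketch;
    the Cholesky step fails unless all Hankel determinants are positive."""
    # matrix B of (3.10): n moment rows plus the bottom row (1, x, ..., x^n)
    rows = [[mu[i + j] for j in range(n + 1)] for i in range(n)]
    B = np.array(rows + [[x_extra ** j for j in range(n + 1)]], float)
    detB = np.linalg.det(B)
    b = [mu[n + j] for j in range(n + 1)]  # replacement row (mu_n, ..., mu_{2n})
    e = [1.0]                              # e_0 = 1
    for i in range(1, n + 1):              # e_i by Cramer's rule, see (3.11)
        Bi = B.copy()
        Bi[n - i] = b                      # row n-i+1 of B, 0-indexed
        e.append((-1) ** (i - 1) * np.linalg.det(Bi) / detB)
    # atoms: the (real) zeros of q_n^*(y) = sum_i (-1)^i e_i y^{n-i}, plus x_extra
    zeros = np.real(np.roots([(-1) ** i * e[i] for i in range(n + 1)]))
    atoms = np.append(np.sort(zeros), x_extra)
    # probabilities (3.12): p = 1/K_n(x,x), orthonormal coefficients via Cholesky
    H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)], float)
    A = np.linalg.inv(np.linalg.cholesky(H))
    P = A @ np.vander(atoms, n + 1, increasing=True).T  # P[i, j] = P_i(atoms[j])
    return atoms, 1.0 / (P ** 2).sum(axis=0)

mu = [1, 0, 1, 0, 3]  # standard normal moments up to order 4 (n = 2)
atoms, probs = theorem_3_1(mu, 2, 2.0)
print(atoms, probs)   # three atoms, probabilities summing to 1
print([(probs * atoms ** k).sum() for k in range(5)])  # ~ [1, 0, 1, 0, 3]
```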

4 An explicit version of the Chebyshev-Markov-Stieltjes inequalities

As a main application of Theorem 3.1, we get a completely explicit version of the Chebyshev-Markov-Stieltjes inequalities, which go back to Chebyshev, Markov, Possé and Stieltjes, and have been stated and studied in many places. Our formulation corresponds essentially to the inequalities found in the appendix of Zelen [18] for the infinite interval \((- \infty,\infty)\), and herewith obtains an implementation suitable for computation, as demonstrated in the next two sections. The probabilistic bounds (4.1) below are nothing other than the special case \(f(x) = 1\) of (5.10), Section 1.5, in Freud [15]. For the interested reader the essential steps of the modern probabilistic proof are sketched.

Theorem 4.1

(Chebyshev-Markov-Stieltjes inequalities)

Let \(F_{X}(x)\) be the distribution function of an arbitrary random variable X defined on \((- \infty,\infty)\) with the moments \(\{ \mu_{k} \}_{k = 0,1,\ldots,2n}\). Let \(\{ x_{1},\ldots,x_{n},x \}\) be the distinct zeros of the polynomial \(q_{r}(y) = (y - x)q_{n}^{*}(x,y)\) of degree \(r = n + 1\) as defined in (3.11). Then the following inequalities hold:

$$ \sum_{x_{i} < x} p(x_{i}) \le F_{X}(x) \le \sum_{x_{i} < x} p(x_{i}) + p(x), $$
(4.1)

where \(p(x) = \{ \sum_{i = 0}^{n} P_{i}(x)^{2} \}^{ - 1}\) is the reciprocal of the Christoffel-Darboux kernel \(K_{n}(x,x)\).

Proof

The main ingredient of the proof is the well-known polynomial majorant/minorant method as applied to the Heaviside indicator function \(I_{ [ x,\infty )}(z)\) (e.g. Hürlimann [21], Appendix I, or Hürlimann [3], Chapter III.2). For ease of notation, one sets \(x_{r} = x\). A polynomial majorant \(q_{x}(z) \ge I_{ [ x,\infty )}(z)\) can only be obtained at supports \(\{ x_{1},\ldots,x_{r} \}\) of atomic random variables \(Z_{x}\) (depending on the choice of x) with moments \(\{ \mu_{k} \}_{k = 0,1,\ldots,2n}\) such that \(\Pr ( q_{x}(Z_{x}) = I_{ [ x,\infty )}(Z_{x}) ) = 1\), and one has the lower bound

$$F_{X}(x) \ge 1 - E\bigl[I_{ [ x,\infty )}(Z_{x})\bigr]. $$

The polynomial \(q_{x}(z)\) of degree 2n is uniquely determined by the following conditions. For \(i = 1,\ldots,r\), one has \(q_{x}(x_{i}) = 0\) if \(x_{i} < x\) and \(q_{x}(x_{i}) = 1\) if \(x_{i} \ge x\), and for \(i = 1,\ldots,n\) one has \(q_{x}'(x_{i}) = 0\). Through application of Theorem 3.1, one sees that the obtained lower bound is necessarily equal to \(1 - \sum_{x_{i} \ge x} p(x_{i}) = \sum_{x_{i} < x} p(x_{i})\) with \(p(x_{i})\) determined by (3.12). Similarly, the polynomial minorant yields the formula for the upper bound in (4.1). The result follows. □
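
A self-contained numerical sketch of the bounds (4.1) follows (our own helper names, assuming numpy): for each evaluation point x it rebuilds the extremal support \(\{x_{1},\ldots,x_{n},x\}\) via (3.11), weights each atom by the reciprocal kernel, and sums. For the standard normal moments up to order eight the bounds visibly bracket \(\Phi(x)\).

```python
import numpy as np
from math import erf, sqrt

def cms_bounds(mu, n, x):
    """Chebyshev-Markov-Stieltjes bounds (4.1) on F_X(x) from mu_0,...,mu_{2n}."""
    rows = [[mu[i + j] for j in range(n + 1)] for i in range(n)]
    B = np.array(rows + [[x ** j for j in range(n + 1)]], float)
    detB = np.linalg.det(B)
    b = [mu[n + j] for j in range(n + 1)]
    e = [1.0]
    for i in range(1, n + 1):              # e_i from (3.11) by Cramer's rule
        Bi = B.copy()
        Bi[n - i] = b
        e.append((-1) ** (i - 1) * np.linalg.det(Bi) / detB)
    others = np.real(np.roots([(-1) ** i * e[i] for i in range(n + 1)]))
    H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)], float)
    A = np.linalg.inv(np.linalg.cholesky(H))

    def p(t):  # p(t) = 1 / K_n(t,t), the reciprocal Christoffel-Darboux kernel
        Pt = A @ np.array([t ** j for j in range(n + 1)], float)
        return 1.0 / (Pt @ Pt)

    lower = sum(p(t) for t in others if t < x)
    return lower, lower + p(x)

mu = [1, 0, 1, 0, 3, 0, 15, 0, 105]  # standard normal moments up to order 8
for x in (-1.0, 0.5, 2.0):
    lo, hi = cms_bounds(mu, 4, x)
    phi = 0.5 * (1 + erf(x / sqrt(2)))
    print(f"{x:5.1f}:  {lo:.4f} <= {phi:.4f} <= {hi:.4f}")
```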

5 Generalized Laguerre-Samuelson bounds

Let \(x_{1}, x_{2},\ldots, x_{n}\) be n real numbers with first and second order moments \(s_{k} = \frac{1}{n}\sum_{i = 1}^{n} x_{i}^{k}\), \(k = 1,2\). The Laguerre-Samuelson inequality (see Jensen and Styan [31] and Samuelson [32]) asserts that for a sample of size n no observation lies more than \(\sqrt{n - 1}\) standard deviations away from the arithmetic mean, that is,

$$ \vert x_{i} - \mu \vert \le \sqrt{n - 1} \cdot \sigma, \quad i = 1,\ldots,n, \mu = s_{1}, \sigma = \sqrt{s_{2} - s_{1}^{2}}. $$
(5.1)

An improved version, which takes into account the sample moments of order three and four, has been derived in Hürlimann [33]. It has also been suggested that further improvement based on the Chebyshev-Markov extremal distributions with known higher moments, though more complex, should be possible. It is shown how a sequence of increasingly accurate generalized Laguerre-Samuelson bounds can be constructed. Based on the first 10 orders of approximation, we demonstrate through simulation the reasonable convergence of the obtained probabilistic approximation algorithm to the maximum possible sample standardized deviation associated with a given probability distribution on the real line.

5.1 The theoretical bounds

The number \(m \ge 1\) denotes the order of approximation of the generalized Laguerre-Samuelson (LS) bounds. Let \(x_{1}, x_{2},\ldots, x_{n}\) be n real numbers with sample moments \(s_{k}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{k}\), \(k=1,2,\ldots,2m\). The mean and standard deviation are \(\mu = s_{1}\), \(\sigma = \sqrt{s_{2} - s_{1}^{2}}\). Consider the discrete uniform random variable X defined by \(P(X=\frac{x_{i}-\mu}{\sigma})=\frac{1}{n}\), \(i=1,\ldots,n\). Clearly, X is a standard random variable with moments \(\mu_{k}=\frac{1}{n}\sum_{i=1}^{n}[(x_{i}-\mu)/\sigma]^{k}\), \(k=1,2,\ldots,2m\). In particular, one has \(\mu_{1} = 0\), \(\mu_{2} = 1\). We proceed now similarly to Section 3 in Hürlimann [33]. According to Theorem 4.1, one has necessarily the inequalities

$$ P(X\leq x)\leq p(x),\quad x\leq c_{L}< 0,\qquad P(X\leq x)\geq 1-p(x), \quad x\geq c_{U}>0, $$
(5.2)

where \(c_{L}\), \(c_{U}\) are the smallest and largest zeros of the polynomial \(q_{m}^{*}(x,y)\) defined in (3.11). Substituting \(x_{L}=(x_{\min}-\mu)/\sigma\leq c_{L}<0\) into the first inequality and \(x_{U}=(x_{\max}-\mu)/\sigma\geq c_{U}>0\) into the second inequality, one gets the bounds

$$ x_{L}\cdot \sigma\leq x_{i} -\mu\leq x_{U}\cdot \sigma,\quad i=1,\ldots,n, $$
(5.3)

where \(x_{L}\leq c_{L}<0\), \(x_{U}\geq c_{U}>0\) are both solutions of the equation \(p(x)=\frac{1}{n}\). With the Christoffel-Darboux kernel determinantal representation (2.7) for \(1/p(x)\) this is equivalent to the polynomial equation of degree 2m given by

$$ \begin{vmatrix} 0 & 1 &x & \cdot & \cdot &x^{m} \\ 1&\mu_{0} & \mu_{1} & \cdot & \cdot & \mu_{m} \\ x & \mu_{1} & \mu_{2} & \cdot & \cdot & \mu_{m+1} \\ \cdot & \cdot & \cdot & & & \cdot \\ \cdot & \cdot & \cdot & & & \cdot \\ x^{m} & \mu_{m} & \mu _{m+1} &\cdot & \cdot & \mu_{2m} \end{vmatrix} + n\cdot |D_{m}|=0. $$
(5.4)

Since the probability function \(p(x)\) is monotone decreasing, the condition \(x_{U}\geq c_{U}>0\) is equivalent to the inequalities \(p(x_{U})=\frac{1}{n}\leq p(c_{U}) \Leftrightarrow n\geq p(c_{U})^{-1}\). Similarly, the condition \(x_{L}\leq c_{L}<0\) is equivalent to the inequalities \(p(x_{L})=\frac{1}{n}\leq p(c_{L}) \Leftrightarrow n\geq p(c_{L})^{-1}\). Therefore, a necessary condition for the validity of (5.3) is \(n\geq\max\{p(c_{L})^{-1}, p(c_{U})^{-1}\}\). The following main result has been shown.

Theorem 5.1

(Generalized Laguerre-Samuelson \((LS)\) bounds)

Let \(x_{1}, x_{2},\ldots, x_{n}\) be n real numbers with mean μ, standard deviation σ, and set \(\mu_{k}=\frac{1}{n}\sum_{i=1}^{n} [(x_{i}-\mu)/\sigma]^{k}\), \(k=1,2,\ldots,2m\), for some order of approximation \(m \ge 1\). Suppose that \(n\geq \max\{p(c_{L})^{-1},p(c_{U})^{-1}\}\) and that the polynomial equation (5.4) of degree 2m has two real solutions \(x_{L}\leq c_{L}<0\) and \(x_{U}\geq c_{U} >0\). Then, one has the bounds

$$ x_{L} \leq \frac{x_{i} - \mu}{\sigma}\leq x_{U},\quad i=1,\ldots,n. $$
(5.5)

Remark 5.1

Setting \(m = 1\) one recovers the original Laguerre-Samuelson bound (5.1). The case \(m = 2\) is Theorem 3 in Hürlimann [33].
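
Since \(p(x) = \{K_{m}(x,x)\}^{-1}\), the defining equation \(p(x) = \frac{1}{n}\) is the degree-2m polynomial equation \(K_{m}(x,x) - n = 0\), equivalent to (5.4). The following is a sketch of Theorem 5.1 along these lines (our own helper names, assuming numpy); for \(m = 1\) it recovers the classical bounds \(\pm\sqrt{n-1}\) of (5.1).

```python
import numpy as np

def generalized_ls_bounds(x_data, m):
    """Generalized Laguerre-Samuelson bounds of order m (Theorem 5.1): solve
    K_m(x,x) = n for the standardized data and return the extreme real roots."""
    x = np.asarray(x_data, float)
    n = len(x)
    z = (x - x.mean()) / x.std()  # standardized sample, mu_1 = 0, mu_2 = 1
    mu = [np.mean(z ** k) for k in range(2 * m + 1)]
    H = np.array([[mu[i + j] for j in range(m + 1)] for i in range(m + 1)])
    A = np.linalg.inv(np.linalg.cholesky(H))  # orthonormal polynomial coefficients
    # K_m(x,x) - n as a polynomial: equivalent to the degree-2m equation (5.4)
    K = sum(np.polynomial.Polynomial(A[i]) ** 2 for i in range(m + 1))
    roots = (K - n).roots()
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    return real[0], real[-1]  # x_L and x_U

rng = np.random.default_rng(1)
data = rng.standard_normal(1000)
for m in (1, 2, 3):
    print(m, generalized_ls_bounds(data, m))  # m = 1: +/- sqrt(999) = +/- 31.607...
```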

5.2 The applied bounds

To illustrate the generalized LS bounds consider first ‘completely symmetric’ sequences of length \(n=2p\geq 2\) of the type

$$ -x_{p} \leq \cdots\leq -x_{2}\leq -x_{1}< 0< x_{1} \leq x_{2}\leq \cdots \leq x_{p}, $$
(5.6)

for which \(s_{2k-1}=0\), \(k=1,\ldots,m\), where m is the maximal order of approximation. Results for the order of approximation \(m=2\) have been discussed in Hürlimann [33]. We extend the numerical approximation up to the order \(m = 10\).

It is interesting to observe that, with increasing order of approximation, the generalized LS bounds seem to converge to the true empirical bound. Attained bounds (up to four significant figures) are marked in bold face. For completely symmetric Cauchy sequences the analysis of this property was suggested to us by corresponding simulations for the order \(m=2\). A rigorous proof (or disproof) of it remains an open question.

In Applied Statistics one does not usually encounter completely symmetric sequences. In empirical studies or simulations, the property \(s_{2k-1}\neq 0\), \(k=1,\ldots,m\), is the rule rather than the exception. It is therefore worthwhile to present some simulations for arbitrary sequences stemming from distributions defined on the whole real line, whether symmetric or not.

Our illustration includes only simulated sequences from symmetric distributions. Besides the Laplace distribution, the interesting and important Pearson type VII (generalized Student t, normal inverted gamma mixture) is considered. Its density function is given by

$$ f_{X}(x)=\frac{1}{B(\alpha,\frac{1}{2})\cdot(1+x^{2})^{\alpha+\frac{1}{2}}}, \quad x\in(-\infty, \infty). $$
(5.7)

For simulation, one needs its percentile function, which reads (Eq. (3.5) in Hürlimann [34])

$$ F_{X}^{-1}(u)= \textstyle\begin{cases} -\sqrt{\beta^{-1}(2u;\frac{1}{2},\alpha)^{-1}-1}, & u\leq \frac{1}{2},\\ \sqrt{\beta^{-1}(2(1-u);\frac{1}{2},\alpha)^{-1}-1}, & u\geq \frac{1}{2}, \end{cases} $$
(5.8)

where \(\beta(x;a,b)\) denotes the beta distribution function with parameters a, b, and \(\beta^{-1}\) its inverse. While the mean and skewness of (5.7) are zero, the variance (if \(\alpha >1\)) and excess kurtosis (if \(\alpha >2\)) are given by

$$ \sigma^{2}_{X}=\frac{1}{2(\alpha-1)}, \qquad K_{X}=\frac{3}{\alpha-2}. $$
(5.9)

In the case \(\alpha=\frac{1}{2}\) one has a Cauchy distribution, and for \(\alpha=1\) a Bowers distribution (see Hürlimann [3] and the references in Hürlimann [34]). As \(\alpha \to \infty\) it converges to a normal distribution. Besides its original use in Statistics, this distribution has found many applications in Actuarial Science and Finance (see e.g. Hürlimann [34, 35], Nadarajah et al. [36], among many others).
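
For completeness, a simulation sketch based on (5.8) follows (assuming scipy is available; the identity \(V = 1/(1+X^{2}) \sim \operatorname{Beta}(\alpha, \frac{1}{2})\) fixes the parameter order passed to scipy’s betaincinv, which is our reading of the \(\beta(\cdot;a,b)\) convention in Hürlimann [34]). The empirical variance check against (5.9) validates the construction.

```python
import numpy as np
from scipy.special import betaincinv  # inverse of the regularized incomplete beta

def pearson_vii_ppf(u, alpha):
    """Percentile function (5.8) of the Pearson type VII density (5.7).
    Uses V = 1/(1+X^2) ~ Beta(alpha, 1/2); the (alpha, 1/2) parameter order
    follows scipy's betaincinv(a, b, y) convention -- our reading of (5.8)."""
    u = np.asarray(u, float)
    t = np.where(u <= 0.5, 2.0 * u, 2.0 * (1.0 - u))  # two-sided tail probability
    v = betaincinv(alpha, 0.5, t)                     # V = 1/(1+X^2)
    x = np.sqrt(1.0 / v - 1.0)
    return np.where(u <= 0.5, -x, x)

rng = np.random.default_rng(0)
alpha = 3.0
sample = pearson_vii_ppf(rng.uniform(size=100_000), alpha)
print(sample.var(), 1 / (2 * (alpha - 1)))  # empirical vs sigma_X^2 in (5.9)
```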

In contrast to Table 1, the simulated odd order moments in Table 2 never vanish and sometimes deviate considerably from zero. Nevertheless, the same observations as for Table 1 can be made. Convergence of the bounds for the Pearson type VII distributions occurs at quite low orders of approximation. For the Cauchy some special simulation runs have been chosen. They show that the original Samuelson bound is sometimes almost attained for a specific distribution. Mathematical explanations of this phenomenon are left open for further investigations.

Table 1 Simulation of generalized LS bounds for completely symmetric sequences
Table 2 Simulation of generalized LS bounds for arbitrary sequences

6 Generalized Chebyshev-Markov VaR bounds

Let \(r_{1}, r_{2},\ldots, r_{n}\) be n observed returns from a financial market with first and second order sample moments \(s_{k} = \frac{1}{n}\sum_{i = 1}^{n} r_{i}^{k}\), \(k = 1,2\). The classical Chebyshev-Markov Value-at-Risk (VaR) bound (e.g. Hürlimann [21], Theorem 3.1, Case 2) asserts that, for a given sufficiently small loss tolerance level \(\varepsilon > 0\), the VaR functional given mean and variance satisfies the inequality

$$ \operatorname{VaR}_{\varepsilon} [L] \le - \mu + \sqrt{\varepsilon^{ - 1} - 1} \cdot \sigma, \quad \mu = s_{1}, \sigma = \sqrt {s_{2} - s_{1}^{2}}, $$
(6.1)

where \(L = - R\) is the loss random variable associated to the random return R with observed values \(r_{1}, r_{2},\ldots, r_{n}\).

An improved version, which takes into account the sample moments of order three and four, has been derived in Hürlimann [21], Theorem 4.1, Case 1 (see also the monograph Hürlimann [3] for a more general exposition). This approach has been further investigated in Hürlimann [37–39]. Based on Theorem 4.1, it is shown how a sequence of increasingly accurate generalized Chebyshev-Markov (CM) VaR bounds can be constructed. Based on the first 10 orders of approximation, we then demonstrate the use of the obtained algorithm through a real-life case study. A comparison with the estimated VaR of some important return distributions is instructive and justifies the application and further development of the CM VaR bounds. Similar generalizations to bounds for the conditional value-at-risk measure (CVaR) can also be obtained (see Hürlimann [21], Theorem 4.2, for the special case \(m = 2\)).

6.1 The theoretical bounds

The number \(m \ge 1\) denotes the order of approximation of the generalized CM VaR bounds. Let \(r_{1}, r_{2},\ldots, r_{n}\) be n observed returns from a financial market with sample moments \(s_{k} = \frac{1}{n}\sum_{i = 1}^{n} r_{i}^{k}\), \(k = 1,2,\ldots,2m\). The mean and standard deviation are denoted \(\mu = s_{1}\), \(\sigma = \sqrt{s_{2} - s_{1}^{2}}\). Let \(X = - R\) be the negative of the standardized random return R with observed values \((r_{i} - \mu )/\sigma\), \(i = 1,\ldots,n\). Clearly X is a standard random variable with moments \(\mu_{k} = \frac{1}{n}( - 1)^{k}\sum_{i = 1}^{n} [(r_{i} - \mu )/\sigma ]^{k}\), \(k = 1,2,\ldots,2m\). In particular, one has \(\mu_{1} = 0\), \(\mu_{2} = 1\). According to Theorem 4.1, one has necessarily the inequality

$$ P(X \le x) \ge 1 - p(x),\quad x \ge c_{L} > 0, $$
(6.2)

where \(c_{L}\) is the largest zero of the polynomial \(q_{m}^{*}(x,y)\) defined in (3.11). Now, for a fixed (sufficiently small) loss tolerance level \(\varepsilon > 0\) one has \(P(X \le \operatorname{VaR}_{\varepsilon} [X]) \ge 1 - \varepsilon\) provided

$$ \operatorname{VaR}_{\varepsilon} [X] \le - \mu + x_{\varepsilon} \cdot \sigma, $$
(6.3)

where \(x_{\varepsilon} \ge c_{L} > 0\) solves the equation \(p(x_{\varepsilon} ) = \varepsilon\). With the determinantal representation (2.7) for \(1/p(x)\) this is equivalent to the polynomial equation of degree 2m given by

$$ \begin{vmatrix} 0 & 1 & x & \cdot & \cdot & x^{m} \\ 1 & \mu_{0} & \mu_{1} & \cdot & \cdot & \mu_{m} \\ x & \mu_{1} & \mu_{2} & \cdot & \cdot & \mu_{m + 1} \\ \cdot & \cdot & \cdot & & & \cdot \\ \cdot & \cdot & \cdot & & & \cdot \\ x^{m} & \mu_{m} & \mu_{m + 1} & \cdot & \cdot & \mu_{2m} \end{vmatrix} + \varepsilon^{ - 1} \cdot \vert D_{m} \vert = 0. $$
(6.4)

Since the probability function \(p(x)\) is monotone decreasing, the condition \(x_{\varepsilon} \ge c_{L} > 0\) is equivalent to the inequalities \(p(x_{ \varepsilon} ) = \varepsilon \le p(c_{L})\). Therefore, a necessary condition for the validity of (6.3) is \(\varepsilon \le p(c_{L})\). The following main result has been shown.

Theorem 6.1

(Generalized CM VaR bound)

Let \(r_{1}, r_{2},\ldots, r_{n}\) be n observed returns from a financial market with sample mean μ, standard deviation σ, and set \(\mu_{k} = \frac{1}{n}( - 1)^{k}\sum_{i = 1}^{n} [(r_{i} - \mu )/\sigma ]^{k}\), \(k = 1,2,\ldots,2m\), for some order of approximation \(m \ge 1\). Suppose that \(\varepsilon \le p(c_{L})\) and that the polynomial equation (6.4) of degree 2m has a real solution \(x_{\varepsilon} \ge c_{L} > 0\). Then, the following CM VaR bound holds:

$$ \operatorname{VaR}_{\varepsilon} [X] \le - \mu + x_{\varepsilon} \cdot \sigma. $$
(6.5)

Remark 6.1

Setting \(m = 1\) one recovers the classical Chebyshev-Markov bound (6.1). The case \(m = 2\) is Theorem 4.1, Case 1, in Hürlimann [21].
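
As in Section 5, the defining equation \(p(x_{\varepsilon}) = \varepsilon\) is the degree-2m polynomial equation \(K_{m}(x,x) = \varepsilon^{-1}\), equivalent to (6.4). The sketch below (our own helper names, assuming numpy; the validity conditions of Theorem 6.1 are taken for granted) computes the CM VaR bound from raw returns; for \(m = 1\) it reproduces the classical bound (6.1).

```python
import numpy as np

def cm_var_bound(returns, m, eps=0.005):
    """Generalized CM VaR upper bound of Theorem 6.1: solve p(x) = eps, i.e.
    K_m(x,x) = 1/eps, and return -mu + x_eps * sigma (validity conditions of
    the theorem are assumed to hold)."""
    r = np.asarray(returns, float)
    mu_r, sigma = r.mean(), r.std()
    z = -(r - mu_r) / sigma  # standardized losses X = -R
    mu = [np.mean(z ** k) for k in range(2 * m + 1)]
    H = np.array([[mu[i + j] for j in range(m + 1)] for i in range(m + 1)])
    A = np.linalg.inv(np.linalg.cholesky(H))  # orthonormal polynomial coefficients
    K = sum(np.polynomial.Polynomial(A[i]) ** 2 for i in range(m + 1))
    roots = (K - 1.0 / eps).roots()
    x_eps = roots[np.abs(roots.imag) < 1e-9].real.max()  # largest real solution
    return -mu_r + x_eps * sigma

rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_t(df=5, size=6000)  # synthetic daily returns
for m in (1, 2, 3):
    print(m, cm_var_bound(returns, m))  # m = 1: -mu + sqrt(1/eps - 1) * sigma
```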

6.2 The applied bounds

In current risk management practice there are three basic types of VaR models: the normal linear VaR model (also called parametric VaR or variance-covariance VaR), the historical VaR simulation model, and the Monte Carlo VaR model (see Alexander [40] for a readable account). For a short discussion of the strengths/weaknesses of the common VaR models we refer to Hürlimann [39], Section 1. In the development of improved VaR models, it is important to consider methods that overcome the limitations shared by parametric VaR, historical VaR and Monte Carlo VaR. The main advantages of the proposed CM VaR method are three-fold:

  1. (i)

    Given a set of moments up to a fixed even order, one expects a single solution. This assertion may depend on the choice of a sufficiently small loss tolerance level, the properties of the largest real solution to the polynomial equation (6.4), as well as computational feasibility (representation of the higher order moments in machine precision). In this respect, the results of Table 3 are well behaved.

    Table 3 Generalized CM VaR bounds versus FFT VaR from best fitted return distributions
  2. (ii)

    The evaluation of the CM VaR bounds is easy to implement and its computation is fast. In this respect the method overcomes the lack of fast computation encountered with the historical VaR and the Monte Carlo VaR models.

  3. (iii)

    The CM VaR bounds yield a sequence of increasingly accurate upper bounds to VaR that are consistent with the actuarial principle of safe or conservative pricing.

To illustrate the generalized CM VaR bounds consider return observations stemming from the following five different Swiss Market (SMI) and Standard & Poors 500 (SP500) data sets:

  • SMI 3Y/1D: 758 historic daily closing prices over 3 years from 04.01.2010 to 28.12.2012

  • SMI 24Y/1D: 6030 historic daily closing prices over 24 years from 03.01.1989 to 28.12.2012

  • SP500 3Y/1D: 754 historic daily closing prices over 3 years from 04.01.2010 to 31.12.2012

  • SP500 24Y/1D: 6049 historic daily closing prices over 24 years from 03.01.1989 to 31.12.2012

  • SP500 63Y/1M: 756 historic monthly closing prices over 63 years from Jan. 1950 to Dec. 2012

These data sets are typical as they contain short to medium high volatile periods (short term period of 3 years, long term period of 24 years), and very long term periods (63 years of monthly data). In order to be comparable with the results in Hürlimann [41–43] the most recent data has not been included. The observed sample logarithmic returns of stock-market indices are negatively skewed and have a much higher excess kurtosis than is allowed by a normal distribution, at least over shorter daily and even monthly periods. In the mentioned papers, a number of important return distributions have been fitted to these data sets using the first four sample moments by solving specific equations (main theorems in Sections 3 of Hürlimann [41–43]). Their goodness-of-fit statistics have been optimized through application of the fast Fourier transform (FFT) approximation, which only requires knowledge of the characteristic function. Though the moment method is not the optimal tool in statistical inference, as compared to maximum likelihood estimation for example, its controlled use in conjunction with the Anderson-Darling and Cramér-von Mises goodness-of-fit statistics remains a first choice in applications depending heavily on moments, such as the one considered in Hürlimann [41–43]. This last statement also applies to the CM VaR bounds.

Table 3 contains estimations of the standardized VaR of the negative return or loss defined by

$$ St\operatorname{VaR}_{\varepsilon} [L] = \frac{\operatorname{VaR}_{\varepsilon} [L] - \mu_{L}}{\sigma_{L}}. $$
(6.6)

The CM StVaR bounds for the loss tolerance level \(\varepsilon = 0.005\) (in current use in the Basel III and Solvency II regulatory environments) are shown up to the order \(m = 10\). They can be immediately compared with the standardized maximum losses defined by

$$ L_{\max} = \max_{1 \le i \le n}\frac{L_{i} - \mu_{L}}{\sigma_{L}}, $$
(6.7)

as well as with the standardized VaR of a normal distribution, which is equal to 2.57583. Quite instructive is a comparison with the standardized VaR associated to the best FFT fit of important return distributions according to their minimum Anderson-Darling statistics. The displayed standardized FFT VaR is defined by

$$ FFT \operatorname{VaR}_{\varepsilon} [L] = \frac{FFTq_{1 - \varepsilon} [R] - \mu_{L}}{\sigma_{L}}, $$
(6.8)

where \(FFTq_{1 - \varepsilon} [R]\) is the linearly interpolated return quantile defined by

$$ FFTq_{1 - \varepsilon} [R] = FFTq_{1 - \varepsilon^{ -}} [R] - \bigl(\varepsilon - \varepsilon^{ -} \bigr) \cdot \frac{FFTq_{1 - \varepsilon^{ -}} [R] - FFTq_{1 - \varepsilon^{ +}} [R]}{\varepsilon^{ +} - \varepsilon^{ -}}, $$
(6.9)

where \(FFTq_{1 - \varepsilon^{ -}} [R]\), \(FFTq_{1 - \varepsilon^{ +}} [R]\) are the two estimated FFT quantiles of the return just above and below the \(1 - \varepsilon = 0.995\) quantile, with \(1 - \varepsilon^{ -} \ge 0.995 \ge 1 - \varepsilon^{ +}\). The fitted return distributions are the variance-gamma (VG) and the normal variance-gamma (NVG) (see Hürlimann [41]), the truncated Lévy flight (TLF) and its bilateral gamma special case (TLF-BG) (see Hürlimann [42]), and the normal tempered stable distribution (NTS) and its normal inverse Gaussian (NIG) special case (see Hürlimann [43]). These return distributions reveal the possible range of variation of VaR given the first four sample moments. The maximum FFT VaR associated with these return distributions can be either below or above the CM VaR bound of order \(m = 10\), which takes into account the first 20 sample moments. Since the CM VaR bounds lie on the safe side, their use is herewith justified. Further research on them should be encouraged.

References

  1. Kaz’min, YA, Rehmann, U: Moment problem. Encyclopedia of Mathematics (2011/2012). http://www.encyclopediaofmath.org/index.php/Moment_problem

  2. Mammana, C: Sul problema algebrico dei momenti. Ann. Sc. Norm. Super. Pisa, Cl. Sci. (3) 8, 133-140 (1954)

  3. Hürlimann, W: Extremal moment methods and stochastic orders: application in actuarial science. Bol. Asoc. Mat. Venez. 15(1), 5-110; 15(2), 153-301 (2008)

  4. Chebyshev, PL: Sur les valeurs limites des intégrales. J. Math. 19, 157-160 (1874)

  5. Markov, A: Démonstration de certaines inégalités de M. Chebyshev. Math. Ann. 24, 172-180 (1884)

  6. Possé, C: Sur quelques applications des fractions continues algébriques. Académie Impériale des Sciences, St. Petersburg (1886)

  7. Stieltjes, TJ: Sur l’évaluation approchée des intégrales. C. R. Math. Acad. Sci. Paris, Sér. I 97, 740-742, 798-799 (1883)

  8. Stieltjes, TJ: Recherches sur les fractions continues. Ann. Fac. Sci. Toulouse 8, 1-22; 9, 45-47 (1894/1895)

  9. Uspensky, JV: Introduction to Mathematical Probability. McGraw-Hill, New York (1937)

  10. Shohat, JA, Tamarkin, JD: The Problem of Moments. Am. Math. Soc., New York (1943)

  11. Royden, HL: Bounds on a distribution function when its first n moments are given. Ann. Math. Stat. 24, 361-376 (1953)

  12. Krein, MG: The ideas of P.L. Chebyshev and A.A. Markov in the theory of limiting values of integrals and their further developments. Transl. Am. Math. Soc. 12, 1-122 (1951)

  13. Akhiezer, NI: The Classical Moment Problem and Some Related Questions in Analysis. Oliver & Boyd, Edinburgh (1965)

  14. Karlin, S, Studden, WJ: Tchebycheff Systems: With Applications in Analysis and Statistics. Wiley-Interscience, New York (1966)

  15. Freud, G: Orthogonal Polynomials. Pergamon Press/Akadémiai Kiadó, Oxford/Budapest (1971)

  16. Krein, MG, Nudelman, AA: The Markov Moment Problem and Extremal Problems. Translations of Mathematical Monographs, vol. 50. Am. Math. Soc., Providence (1977)

  17. Whittle, P: Optimisation Under Constraints. Wiley, New York (1971)

  18. Zelen, M: Bounds on a distribution function that are functions of moments to order four. J. Res. Natl. Bur. Stand. 53(6), 377-381 (1954)

  19. Simpson, JH, Welch, BL: Table of the bounds of the probability integral when the first four moments are given. Biometrika 47, 399-410 (1960)

  20. Kaas, R, Goovaerts, MJ: Application of the problem of moments to various insurance problems in non-life. In: Goovaerts, M, De Vylder, F, Haezendonck, J (eds.) Insurance and Risk Theory. NATO ASI Series, vol. 171, pp. 79-118. Reidel, Dordrecht (1986)

  21. Hürlimann, W: Analytical bounds for two value-at-risk functionals. ASTIN Bull. 32(2), 235-265 (2002)

  22. Cramér, H: Mathematical Methods of Statistics. Princeton University Press, Princeton (1946); (Thirteenth printing 1974)

  23. Christoffel, EB: Über die Gaussische Quadratur und eine Verallgemeinerung derselben. J. Reine Angew. Math. 55, 61-82 (1858)

  24. Darboux, G: Mémoire sur l’approximation des fonctions de très-grand nombres, et sur une classe étendue de développements en série. J. Math. Pures Appl. 4, 5-56, 377-416 (1878)

  25. Brezinski, C: A direct proof of the Christoffel-Darboux identity and its equivalence to the recurrence relationship. J. Comput. Appl. Math. 32, 17-25 (1990)

  26. Koornwinder, TH: Orthogonal polynomials, a short introduction (2013). Preprint. Available at arXiv:1303.2825v1 [math.CA]

  27. Szegö, G: Orthogonal Polynomials, 3rd edn. American Mathematical Society Colloquium Publications, vol. 23. Am. Math. Soc., Providence (1967)

  28. Chihara, TS: An Introduction to Orthogonal Polynomials. Gordon & Breach, New York (1978); (Reprint 2011, Dover)

  29. Nevai, P: Orthogonal polynomials: theory and practice. In: Proc. NATO Adv. Study Institute on Orthogonal Polynomials and Their Applications, Columbus, Ohio, May 22-June 3, 1989. Nato Series C, vol. 294 (1990)

  30. De Vylder, F: Advanced Risk Theory. A Self-Contained Introduction. Editions de l’Université de Bruxelles, Collection Actuariat (1996)

  31. Jensen, ST, Styan, GPH: Some comments and a bibliography on the Laguerre-Samuelson inequality with extensions and applications in statistics and matrix theory. In: Rassias, TM, Srivastava, HM (eds.) Analytic and Geometric Inequalities and Applications. Kluwer Academic, Dordrecht (1999)

  32. Samuelson, PA: How deviant can you be? J. Am. Stat. Assoc. 63, 1522-1525 (1968)

  33. Hürlimann, W: An improved Laguerre-Samuelson inequality of Chebyshev-Markov type. J. Optim. 2014, Article ID 832123 (2014)

  34. Hürlimann, W: Financial data analysis with two symmetric distributions. ASTIN Bull. 31, 187-211 (2001)

  35. Hürlimann, W: Fitting bivariate cumulative returns with copulas. Comput. Stat. Data Anal. 45(2), 355-372 (2004)

  36. Nadarajah, S, Zhang, B, Chan, S: Estimation methods for expected shortfall. Quant. Finance 14(2), 271-291 (2014)

  37. Hürlimann, W: Robust variants of the Cornish-Fischer approximation and Chebyshev-Markov bounds: application to value-at-risk and expected shortfall. Adv. Appl. Math. Sci. 1(2), 239-260 (2009)

  38. Hürlimann, W: Solvency risk capital for the short and long term: probabilistic versus stability criterion. Belg. Actuar. Bull. 9(1), 8-16 (2010)

  39. Hürlimann, W: Market Value-at-Risk: ROM simulation, Cornish-Fisher VaR and Chebyshev-Markov VaR bound. Br. J. Math. Comput. Sci. 4(13), 1797-1814 (2014)

  40. Alexander, C: Market Risk Analysis. Volume IV: Value-at-Risk Models. Wiley, Chichester (2008)

  41. Hürlimann, W: Portfolio ranking efficiency (I) Normal variance gamma returns. Int. J. Math. Arch. 4(5), 192-218 (2013)

  42. Hürlimann, W: Portfolio ranking efficiency (II) Truncated Lévy flight returns. Int. J. Math. Arch. 4(8), 39-48 (2013)

  43. Hürlimann, W: Portfolio ranking efficiency (III) Normal tempered stable returns. Int. J. Math. Arch. 4(10), 20-29 (2013)

Author information

Correspondence to Werner Hürlimann.

Additional information

Competing interests

The author declares that he has no competing interests, financial or otherwise.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Hürlimann, W. An explicit version of the Chebyshev-Markov-Stieltjes inequalities and its applications. J Inequal Appl 2015, 192 (2015). https://doi.org/10.1186/s13660-015-0709-1
