Sharp bounds on moments of random variables

Abstract

We give new inequalities involving the expectation and variance of a random variable defined on a finite interval and having an essentially bounded probability density function. Our inequalities sharpen previous inequalities by N.S. Barnett and S.S. Dragomir. We prove our bounds are optimal, and we explicitly give the cases in which equality occurs.

1 Introduction

There has been considerable interest in the study of absolutely continuous random variables X that take values in a compact interval \([ a,b ] \) and whose probability density functions ρ are essentially bounded. See, for example, [1–6] and [7].

One result of particular importance, introduced by Barnett and Dragomir in [5], and discussed in [1–3, 5, 7] and elsewhere, gives good general bounds involving the mean and variance of X:

Theorem 1

([5, Theorem 1(ii)])

If X is an absolutely continuous random variable whose pdf ρ satisfies \(m\leq \rho ( x ) \leq M\) for \(\mathcal{L}^{1}\) a.e. \(x\in [ a,b ] \) and \(\int _{a}^{b}\rho ( x ) \,dx =1\), then

$$ \frac{m ( b-a ) ^{3}}{6}\leq \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] - \sigma ^{2} ( X ) \leq \frac{M ( b-a ) ^{3}}{6} $$
(1)

and

$$ \biggl\vert \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] - \sigma ^{2} ( X ) - \frac{ ( b-a ) ^{3}}{6} \biggr\vert \leq \frac{\sqrt{5} ( b-a ) ^{3} ( M-m ) }{60}. $$
(2)

In this paper we establish, for the first time, optimal lower and upper bounds for

$$ \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) $$

for such random variables X, in particular sharpening the key results (1) and (2). Moreover, we show that the distributions yielding the optimal bounds are unique, and we give them explicitly.

Since pdfs that differ on sets having Lebesgue measure zero correspond to identically distributed random variables, without loss of generality we will identify such pdfs. In what follows, \(\mathcal{L}^{1} ( E ) = \vert E \vert \) denotes the Lebesgue measure of the set E.

2 Results

Theorem 2

Suppose X is an absolutely continuous random variable whose pdf ρ satisfies \(0\leq m\leq \rho ( x ) \leq M<\infty \) for \(\mathcal{L}^{1}\) a.e. \(x\in [ a,b ] \) and \(\int _{a}^{b}\rho ( x ) \,dx =1\).

  (1)

    If \(m=M\), then

    $$ \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) = \frac{ ( b-a ) ^{2}}{6}. $$
  (2)

    If \(m< M\), then

    $$\begin{aligned} & \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) \\ &\quad \geq \frac{1}{6}m ( b-a ) ^{3}+\frac{1}{12} \bigl( 2 ( b-a ) ( M-m ) +M ( b-a ) -1 \bigr) \frac{ ( 1-m ( b-a ) ) ^{2}}{ ( M-m ) ^{2}}. \end{aligned}$$

    This lower bound is sharp, and we have equality if and only if the function

    $$ \rho ( x ) = \textstyle\begin{cases} M, & \textit{if }x\in {}[ a,a+\frac{G}{2})\cup (b-\frac{G}{2},b], \\ m, & \textit{if }x\in [ a+\frac{G}{2}, b-\frac{G}{2} ], \end{cases}$$

    with

    $$ G=\frac{1-m ( b-a ) }{M-m}, $$

    is a pdf of X.

  (3)

    If \(m< M\), then

    $$\begin{aligned} & \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) \\ &\quad \leq \frac{1}{4} ( b-a ) ^{2}-\frac{1}{12} \biggl( m ( b-a ) ^{3}+ \frac{ ( 1-m ( b-a ) ) ^{3}}{ ( M-m ) ^{2}} \biggr) . \end{aligned}$$

    This upper bound is sharp, and we have equality if and only if the function

    $$ \rho ( x ) = \textstyle\begin{cases} M, & \textit{if }x\in ( ( \frac{1}{2}a+\frac{1}{2}b ) -\frac{G}{2}, ( \frac{1}{2}a+\frac{1}{2}b ) +\frac{G}{2} ), \\ m, & \textit{if }x\in [ a, ( \frac{1}{2}a+\frac{1}{2}b ) -\frac{G}{2} ] \cup [ ( \frac{1}{2}a+\frac{1}{2}b ) +\frac{G}{2}, b ], \end{cases}$$

    with G defined as in part (2), is a pdf of X.
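
As an informal numerical illustration (not part of the article), the following Python sketch builds the two extremal step densities from parts (2) and (3) for sample values \(a=0\), \(b=2\), \(m=0.1\), \(M=1.5\) (our own choice, satisfying \(m ( b-a ) \leq 1\leq M ( b-a ) \)), approximates \([ b-E ( X ) ] [ E ( X ) -a ] -\sigma ^{2} ( X ) \) by a midpoint quadrature rule, and compares the results with the closed-form bounds; all variable and function names are ours.

```python
# Informal numerical check of Theorem 2, parts (2) and (3); sample values only.
import numpy as np

a, b = 0.0, 2.0                  # interval [a, b] (sample values)
m, M = 0.1, 1.5                  # essential bounds on the pdf, with m*(b-a) <= 1 <= M*(b-a)
c = b - a
G = (1.0 - m * c) / (M - m)      # G as defined in the theorem

N = 1_000_000
dx = c / N
x = a + (np.arange(N) + 0.5) * dx  # midpoints for a midpoint-rule quadrature

def functional(w):
    """Return [b - E(X)][E(X) - a] - Var(X) for a density w sampled at the midpoints x."""
    E = np.sum(x * w) * dx
    V = np.sum(x**2 * w) * dx - E**2
    return (b - E) * (E - a) - V

rho_min = np.where((x < a + G/2) | (x > b - G/2), M, m)   # part (2): value M near the endpoints
rho_max = np.where(np.abs(x - (a + b)/2) < G/2, M, m)     # part (3): value M around the midpoint

lower = m*c**3/6 + (2*c*(M - m) + M*c - 1) * (1 - m*c)**2 / (12*(M - m)**2)
upper = c**2/4 - (m*c**3 + (1 - m*c)**3 / (M - m)**2) / 12

print(functional(rho_min), lower)   # should agree to quadrature accuracy
print(functional(rho_max), upper)   # should agree to quadrature accuracy
```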

3 Proofs

We first record a useful lemma, proved by Barnett and Dragomir in [5] using a clever algebraic manipulation.

Lemma 3

([5, (2.5)])

Suppose X is as in Theorem 2. Then,

$$ \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) = \int _{a}^{b} ( b-x ) ( x-a ) \rho ( x ) \,dx . $$
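
For the reader's convenience, we note that the identity follows by expanding the integrand and using \(E ( X ) = \int _{a}^{b}x\rho ( x ) \,dx \), \(E ( X^{2} ) = \int _{a}^{b}x^{2}\rho ( x ) \,dx \), and \(\sigma ^{2} ( X ) =E ( X^{2} ) -E ( X ) ^{2}\):

$$\begin{aligned} \int _{a}^{b} ( b-x ) ( x-a ) \rho ( x ) \,dx & = \int _{a}^{b} \bigl( -x^{2}+ ( a+b ) x-ab \bigr) \rho ( x ) \,dx \\ & = -E \bigl( X^{2} \bigr) + ( a+b ) E ( X ) -ab \\ & = -\sigma ^{2} ( X ) -E ( X ) ^{2}+ ( a+b ) E ( X ) -ab \\ & = \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) . \end{aligned}$$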

Proof of (1)

In this case, X is a continuous uniform random variable on \([ a,b ] \), and so its expected value and variance are \(E ( X ) = ( a+b ) /2\) and \(\sigma ^{2} ( X ) = ( b-a ) ^{2}/12\). Substituting these values yields the result immediately.
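
In detail,

$$ \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) = \frac{b-a}{2}\cdot \frac{b-a}{2}- \frac{ ( b-a ) ^{2}}{12}= \frac{ ( b-a ) ^{2}}{4}- \frac{ ( b-a ) ^{2}}{12}= \frac{ ( b-a ) ^{2}}{6}. $$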

Proof of (2)

Suppose \(m< M\). Let \(c=b-a\), and define

$$\begin{aligned}& f ( x ) = x ( c-x ) \quad \text{for all }x\in [ 0,c ] , \\& \tau _{a} ( x ) = x+a\quad \text{for all real }x, \\& \phi ( x ) = \frac{x-m}{M-m}\quad \text{for all }x\in [ m,M ] . \end{aligned}$$

The translation operator \(\tau _{a}\) is invertible, with \(\tau _{a}^{-1} ( x ) =x-a\) for all real x. Since \(1/ ( M-m ) >0\), the affine transformation \(\phi : [ m,M ] \rightarrow [ 0,1 ] \) is invertible with \(\phi ^{-1} ( x ) = ( M-m ) x+m\) for all \(x\in [ 0,1 ] \). Define

$$\begin{aligned} &\mathcal{C}_{\rho } = \biggl\{ \rho :m\leq \rho ( x ) \leq M\text{ for }\mathcal{L}^{1}\text{ a.e. }x\in [ a,b ] , \text{and }\int _{a}^{b}\rho ( x ) \,dx =1 \biggr\} , \\ &\mathcal{C}_{h} = \biggl\{ h:m\leq h ( x ) \leq M \text{ for }\mathcal{L}^{1}\text{ a.e. }x\in [ 0,c ] ,\text{and }\int _{0}^{c}h ( x ) \,dx =1 \biggr\} , \\ &\mathcal{C}_{g} = \biggl\{ g:0\leq g ( x ) \leq 1 \text{ for }\mathcal{L}^{1}\text{ a.e. }x\in [ 0,c ] ,\text{and }\int _{0}^{c}g ( x ) \,dx =G \biggr\} . \end{aligned}$$

We say that a function ρ, h, or g is admissible provided it is an element of \(\mathcal{C}_{\rho }\), \(\mathcal{C}_{h}\), or \(\mathcal{C}_{g}\), respectively. Whenever \(\rho _{\ast }\), \(h_{\ast }\), and \(g_{\ast }\) are admissible, define

$$\begin{aligned}& I_{\rho _{\ast }} = \int _{a}^{b} ( b-x ) ( x-a ) \rho _{\ast } ( x ) \,dx , \\& I_{h_{\ast }} = \int _{0}^{c} ( c-x ) x\cdot h_{\ast } ( x ) \,dx = \int _{0}^{c}f ( x ) \cdot h_{\ast } ( x ) \,dx , \\& I_{g_{\ast }} = \int _{0}^{c} ( c-x ) x\cdot g_{\ast } ( x ) \,dx = \int _{0}^{c}f ( x ) \cdot g_{\ast } ( x ) \,dx . \end{aligned}$$

If \(\rho \in \mathcal{C}_{\rho }\), \(h=\rho \circ \tau _{a}\), and \(g=\phi \circ h\), then \(m\leq h ( x ) \leq M\) for \(\mathcal{L}^{1}\) a.e. \(x\in [ 0,c ] \), \(0\leq g ( x ) \leq 1\) for \(\mathcal{L}^{1}\) a.e. \(x\in [ 0,c ] \), \(\rho =h\circ \tau _{a}^{-1}\), \(h=\phi ^{-1}\circ g\), and

$$\begin{aligned} 1 & = \int _{a}^{b}\rho ( x ) \,dx = \int _{0}^{c}h ( x ) \,dx = \int _{0}^{c} \bigl[ ( M-m ) g ( x ) +m \bigr] \,dx \\ & = ( M-m ) \int _{0}^{c}g ( x ) \,dx +mc, \end{aligned}$$

from which it follows that \(h\in \mathcal{C}_{h}\) and \(g\in \mathcal{C}_{g}\).

If \(g\in \mathcal{C}_{g}\), \(h=\phi ^{-1}\circ g\), and \(\rho =h\circ \tau _{a}^{-1}\), then \(m\leq h ( x ) \leq M\) for \(\mathcal{L}^{1}\) a.e. \(x\in [ 0,c ] \), \(m\leq \rho ( x ) \leq M\) for \(\mathcal{L}^{1}\) a.e. \(x\in [ a,b ] \), \(g=\phi \circ h\), \(h=\rho \circ \tau _{a}\), and

$$ \frac{1-mc}{M-m}=G= \int _{0}^{c}g ( x ) \,dx = \int _{0}^{c} \frac{h ( x ) -m}{M-m}\,dx = \frac{1}{M-m} \biggl( \int _{0}^{c}h ( x ) \,dx -mc \biggr) , $$

from which it follows that

$$ 1= \int _{0}^{c}h ( x ) \,dx = \int _{a}^{b}\rho ( x ) \,dx , $$

and hence, \(h\in \mathcal{C}_{h}\) and \(\rho \in \mathcal{C}_{\rho }\).

We say that a function ρ, h, or g is a minimizer provided it is an element of \(\mathcal{C}_{\rho }\), \(\mathcal{C}_{h}\), or \(\mathcal{C}_{g}\), respectively, and provided it results in the lowest possible value for \(I_{\rho _{\ast }}\), \(I_{h_{\ast }}\), or \(I_{g_{\ast }}\), respectively, among all admissible functions \(\rho _{\ast }\), \(h_{\ast }\), or \(g_{\ast }\).

If ρ is a minimizer and \(h=\rho \circ \tau _{a}\), then h is admissible, as shown above, and

$$ I_{h}= \int _{0}^{c} ( c-x ) x\cdot h ( x ) \,dx = \int _{0}^{c} ( c-x ) x\cdot \rho ( x+a ) \,dx = \int _{a}^{b} ( b-u ) ( u-a ) \cdot \rho ( u ) \,du =I_{\rho }, $$

and so h is a minimizer. If, additionally, \(g=\phi \circ h\), then g is admissible, as shown above, and we have

$$\begin{aligned} I_{g} & = \int _{0}^{c} ( c-x ) x\cdot g ( x ) \,dx \\ & = \int _{0}^{c} ( c-x ) x\cdot \frac{h ( x ) -m}{M-m}\,dx \\ & = \frac{1}{M-m} \biggl( I_{h}-\frac{1}{6}mc^{3} \biggr) . \end{aligned}$$

Thus, g is a minimizer as well, as \(I_{g}\) is a strictly increasing affine function of \(I_{h}\).

Similarly, if g is a minimizer and \(h=\phi ^{-1}\circ g\), then h is admissible, as shown above, and we have

$$\begin{aligned} I_{h} & = \int _{0}^{c} ( c-x ) x\cdot h ( x ) \,dx \\ & = \int _{0}^{c} ( c-x ) x\cdot \bigl( ( M-m ) g ( x ) +m \bigr) \,dx \\ & = ( M-m ) I_{g}+\frac{1}{6}mc^{3}. \end{aligned}$$

Then, h is a minimizer, as \(I_{h}\) is a strictly increasing affine function of \(I_{g}\). If, additionally, \(\rho =h\circ \tau _{a}^{-1}\), then ρ is admissible, as shown above, and we have

$$ I_{\rho }= \int _{a}^{b} ( b-x ) ( x-a ) \rho ( x ) \,dx = \int _{0}^{c} ( c-u ) u\cdot h ( u ) \,du =I_{h}, $$

and so ρ is a minimizer.

By Lemma 3, X minimizes

$$ \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) $$

if and only if ρ is admissible and minimizes \(I_{\rho }\). By the argument above, that occurs if and only if the function g defined by \(g=\phi \circ h\), where \(h=\rho \circ \tau _{a}\), minimizes \(I_{g}\).

We will now use the Bathtub Principle [8, Theorem 1.14] with Ω, μ, f, G, \(\mathcal{C}\), and I there replaced by \([ 0,c ]\), the Lebesgue measure, f, G, \(\mathcal{C}_{g}\), and \(I_{g}\), respectively, to find a function g that minimizes \(I_{g}\) and to prove that it is unique among minimizers of \(I_{g}\). It will then follow from our reasoning above that the function ρ defined by \(\rho =h\circ \tau _{a}^{-1}\), where \(h=\phi ^{-1}\circ g\), uniquely minimizes \(I_{\rho }\), and hence solves our problem. Let

$$\begin{aligned} s & = \sup \bigl\{ t: \bigl\vert \bigl\{ x\in [ 0,c ] :f ( x ) < t \bigr\} \bigr\vert \leq G \bigr\} \\ & = \sup \biggl\{ t: \biggl\vert \biggl\{ x\in [ 0,c ] : \frac{1}{4}c^{2}- \biggl( x-\frac{c}{2} \biggr) ^{2}< t \biggr\} \biggr\vert \leq G \biggr\} \\ & = \sup \biggl\{ t: \biggl\vert \biggl\{ x\in [ 0,c ] :x> \frac{c}{2}+\sqrt{\frac{1}{4}c^{2}-t}\text{ or }x< \frac{c}{2}-\sqrt{\frac{1}{4}c^{2}-t} \biggr\} \biggr\vert \leq G \biggr\} \\ & = \sup \bigl\{ t:c-\sqrt{c^{2}-4t}\leq G \bigr\} \\ & = \sup \biggl\{ t:t\leq \frac{1}{4} \bigl( c^{2}- ( c-G ) ^{2} \bigr) \biggr\} \\ & = \frac{1}{4} \bigl( c^{2}- ( c-G ) ^{2} \bigr) . \end{aligned}$$

For this s, we have \(\vert \{ x\in [ 0,c ] :f ( x ) =s \} \vert =0\) and

$$\begin{aligned} & \bigl\vert \bigl\{ x\in [ 0,c ] :f ( x ) < s \bigr\} \bigr\vert \\ &\quad = \biggl\vert \biggl\{ x\in [ 0,c ] :\frac{1}{4}c^{2}- \biggl( x-\frac{c}{2} \biggr) ^{2}< \frac{1}{4} \bigl( c^{2}- ( c-G ) ^{2} \bigr) \biggr\} \biggr\vert \\ &\quad = \biggl\vert \biggl\{ x\in [ 0,c ] : \biggl( x- \frac{c}{2} \biggr) ^{2}>\frac{1}{4} ( c-G ) ^{2} \biggr\} \biggr\vert \\ &\quad = \biggl\vert \biggl[ 0, \frac{G}{2} \biggr) \cup \biggl( c-\frac{G}{2}, c \biggr] \biggr\vert \\ &\quad = G. \end{aligned}$$

Since \(\vert \{ x\in [ 0,c ] :f ( x ) < s \} \vert =G\), by the Bathtub Principle [8, Theorem 1.14] there is a unique \(g\in \mathcal{C}_{g}\) that minimizes the integral \(I_{g}\), and it is given by \(g ( x ) =\chi _{ \{ f< s \} } ( x ) \).

Thus,

$$ g ( x ) = \textstyle\begin{cases} 1, & \text{if }f ( x ) < s, \\ 0, & \text{if }f ( x ) \geq s\end{cases}\displaystyle = \textstyle\begin{cases} 1, & \text{if }x\in {}[ 0,\frac{G}{2})\cup (c-\frac{G}{2}, c], \\ 0, & \text{if }x\in [ \frac{G}{2}, c-\frac{G}{2} ] .\end{cases}$$
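
As a brief numerical illustration of the Bathtub Principle at work (not part of the proof; the values of c and G below are our own samples with \(0< G\leq c\)), the following Python sketch compares \(I_{g}\) for this bathtub function with \(I_{g}\) for two other admissible choices of g:

```python
# Illustration: among admissible g (0 <= g <= 1 a.e., integral G), the bathtub
# function concentrated where f(x) = x(c - x) is smallest gives the least I_g.
import numpy as np

c, G = 2.0, 0.8          # sample values with 0 < G <= c
N = 1_000_000
dx = c / N
x = (np.arange(N) + 0.5) * dx
f = x * (c - x)

def I(g):
    return np.sum(f * g) * dx

g_bathtub = ((x < G/2) | (x > c - G/2)).astype(float)  # indicator of {f < s}
g_uniform = np.full(N, G / c)                          # constant admissible function
g_middle  = (np.abs(x - c/2) < G/2).astype(float)      # mass concentrated near c/2

print(I(g_bathtub), I(g_uniform), I(g_middle))  # the first value is the smallest
```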

We then calculate

$$ h ( x ) = ( M-m ) g ( x ) +m= \textstyle\begin{cases} M, & \text{if }x\in {}[ 0,\frac{G}{2})\cup (c-\frac{G}{2}, c], \\ m, & \text{if }x\in [ \frac{G}{2}, c-\frac{G}{2} ] \end{cases}$$

and

$$\begin{aligned} \rho ( x ) & = h ( x-a ) = \textstyle\begin{cases} M, & \text{if }x\in {}[ a, a+\frac{G}{2})\cup (a+c-\frac{G}{2}, a+c], \\ m, & \text{if }x\in [ a+\frac{G}{2}, a+c-\frac{G}{2} ] \end{cases}\displaystyle \\ & = \textstyle\begin{cases} M, & \text{if }x\in {}[ a, a+\frac{G}{2})\cup (b-\frac{G}{2}, b], \\ m, & \text{if }x\in [ a+\frac{G}{2}, b-\frac{G}{2} ] .\end{cases}\displaystyle \end{aligned}$$

This ρ is the desired minimizer. Finally, we will calculate the minimum value of \(I_{\rho }\),

$$\begin{aligned} I_{\rho } & = I_{h} \\ & = \int _{0}^{c}f ( x ) h ( x ) \,dx = \int _{0}^{G/2}Mf ( x ) \,dx + \int _{G/2}^{c-G/2}mf ( x ) \,dx + \int _{c-G/2}^{c}Mf ( x ) \,dx \\ & = \frac{1}{24}MG^{2} ( 3c-G ) +\frac{1}{12}m \bigl( 2c^{3}-G^{2} ( 3c-G ) \bigr) +\frac{1}{24}MG^{2} ( 3c-G ) \\ & = \frac{1}{12}G^{2} ( 3c-G ) ( M-m ) + \frac{1}{6}mc^{3}. \end{aligned}$$
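
The two endpoint integrals in the second line follow from the elementary antiderivative \(\int _{0}^{u}x ( c-x ) \,dx =\frac{cu^{2}}{2}-\frac{u^{3}}{3}\), evaluated at \(u=G/2\):

$$ \int _{0}^{G/2}Mf ( x ) \,dx =M \biggl( \frac{c}{2}\cdot \frac{G^{2}}{4}-\frac{1}{3}\cdot \frac{G^{3}}{8} \biggr) = \frac{1}{24}MG^{2} ( 3c-G ) , $$

and, by the symmetry of f about \(c/2\), the integral over \([ c-G/2,c ] \) has the same value, while the middle integral equals \(m ( \int _{0}^{c}f ( x ) \,dx -\frac{1}{12}G^{2} ( 3c-G ) ) \) with \(\int _{0}^{c}x ( c-x ) \,dx =c^{3}/6\).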

We have

$$ G ( M-m ) =1-mc $$

and also

$$\begin{aligned} G ( 3c-G ) & = \frac{1-mc}{M-m} \biggl( 3c-\frac{1-mc}{M-m} \biggr) \\ & = \frac{1}{ ( M-m ) ^{2}} ( 1-mc ) ( 3Mc-2mc-1 ) . \end{aligned}$$

Substituting these expressions, we obtain

$$\begin{aligned} & \int _{0}^{c}f ( x ) h ( x ) \,dx \\ &\quad = \frac{1}{6}mc^{3}+\frac{1}{12} ( 1-mc ) \frac{1}{ ( M-m ) ^{2}} ( 1-mc ) ( 3Mc-2mc-1 ) \\ &\quad = \frac{1}{6}mc^{3}+\frac{1}{12} \bigl( 2c ( M-m ) +Mc-1 \bigr) \frac{ ( 1-mc ) ^{2}}{ ( M-m ) ^{2}} \\ &\quad = \frac{1}{6}m ( b-a ) ^{3}+\frac{1}{12} \bigl( 2 ( b-a ) ( M-m ) +M ( b-a ) -1 \bigr) \frac{ ( 1-m ( b-a ) ) ^{2}}{ ( M-m ) ^{2}}, \end{aligned}$$

as asserted in (2).

Proof of (3)

Suppose \(m< M\). Let c, G, f, \(\tau _{a}\), ϕ, \(\mathcal{C}_{\rho }\), \(\mathcal{C}_{h}\), \(\mathcal{C}_{g}\), \(I_{\rho _{\ast }}\), \(I_{h_{\ast }}\), and \(I_{g_{\ast }}\) be as in the proof of (2). We say that a function ρ, h, or g is a maximizer provided it is an element of \(\mathcal{C}_{\rho }\), \(\mathcal{C}_{h}\), or \(\mathcal{C}_{g}\), respectively, and provided it results in the highest possible value for \(I_{\rho _{\ast }}\), \(I_{h_{\ast }}\), or \(I_{g_{\ast }}\), respectively, among all admissible functions \(\rho _{\ast }\), \(h_{\ast }\), or \(g_{\ast }\).

If ρ is a maximizer and \(h=\rho \circ \tau _{a}\), then h is admissible and \(I_{h}=I_{\rho }\), as shown above, and so h is a maximizer. If, additionally, \(g=\phi \circ h\), then g is admissible and

$$ I_{g}=\frac{1}{M-m} \biggl( I_{h}- \frac{1}{6}mc^{3} \biggr) , $$

as shown above, and so g is a maximizer, as \(I_{g}\) is a strictly increasing affine function of \(I_{h}\).

Similarly, if g is a maximizer and \(h=\phi ^{-1}\circ g\), then h is admissible and

$$ I_{h}= ( M-m ) I_{g}+\frac{1}{6}mc^{3}, $$

as shown above, and so h is a maximizer, as \(I_{h}\) is a strictly increasing affine function of \(I_{g}\). If, additionally, \(\rho =h\circ \tau _{a}^{-1}\), then ρ is admissible and \(I_{\rho }=I_{h}\), as shown above, and so ρ is a maximizer.

By Lemma 3, X maximizes

$$ \bigl[ b-E ( X ) \bigr] \bigl[ E ( X ) -a \bigr] -\sigma ^{2} ( X ) $$

if and only if ρ is admissible and maximizes \(I_{\rho }\). By the argument above, that occurs if and only if the function g defined by \(g=\phi \circ h\), where \(h=\rho \circ \tau _{a}\), maximizes \(I_{g}\).

Define

$$ F ( x ) =\frac{c^{2}}{4}-f ( x ) = \biggl( x- \frac{c}{2} \biggr) ^{2}\quad \text{for all }x\in [ 0,c ] . $$

Whenever \(g_{\ast }\) is admissible, define

$$ I_{g_{\ast }}^{\prime }= \int _{0}^{c}F ( x ) \cdot g_{ \ast } ( x ) \,dx . $$

For each admissible \(g_{\ast }\), we have

$$ I_{g_{\ast }}+I_{g_{\ast }}^{\prime }= \int _{0}^{c}f ( x ) \cdot g_{\ast } ( x ) \,dx + \int _{0}^{c}F ( x ) \cdot g_{\ast } ( x ) \,dx = \int _{0}^{c}\frac{c^{2}}{4} \cdot g_{\ast } ( x ) \,dx =\frac{c^{2}}{4}G. $$

Since this sum is constant, g maximizes \(I_{g}\), as needed, if and only if g minimizes \(I_{g}^{\prime }\).

We will now use the Bathtub Principle [8, Theorem 1.14] with Ω, μ, f, G, \(\mathcal{C}\), and I there replaced by \([ 0,c ]\), the Lebesgue measure, F, G, \(\mathcal{C}_{g}\), and \(I_{g}^{\prime }\), respectively, to find a function g that minimizes \(I_{g}^{\prime }\) and to prove that it is unique among minimizers of \(I_{g}^{\prime }\). It will then follow from our reasoning above that the function ρ defined by \(\rho =h\circ \tau _{a}^{-1}\), where \(h=\phi ^{-1}\circ g\), uniquely maximizes \(I_{\rho }\), and hence solves our problem. Let

$$\begin{aligned} s & = \sup \bigl\{ t: \bigl\vert \bigl\{ x\in [ 0,c ] :F ( x ) < t \bigr\} \bigr\vert \leq G \bigr\} \\ & = \sup \biggl\{ t: \biggl\vert \biggl\{ x\in [ 0,c ] : \biggl( x- \frac{c}{2} \biggr) ^{2}< t \biggr\} \biggr\vert \leq G \biggr\} \\ & = \sup \biggl\{ t: \biggl\vert \biggl( \frac{c}{2}-\sqrt{t}, \frac{c}{2}+\sqrt{t} \biggr) \biggr\vert \leq G \biggr\} \\ & = \sup \{ t:2\sqrt{t}\leq G \} \\ & = \frac{G^{2}}{4}. \end{aligned}$$

For this s, we have \(\vert \{ x\in [ 0,c ] :F ( x ) =s \} \vert =0\) and

$$\begin{aligned} & \bigl\vert \bigl\{ x\in [ 0,c ] :F ( x ) < s \bigr\} \bigr\vert \\ &\quad = \biggl\vert \biggl\{ x\in [ 0,c ] : \biggl( x- \frac{c}{2} \biggr) ^{2}< \frac{G^{2}}{4} \biggr\} \biggr\vert \\ &\quad = \biggl\vert \biggl( \frac{c}{2}-\frac{G}{2}, \frac{c}{2}+ \frac{G}{2} \biggr) \biggr\vert \\ &\quad = G. \end{aligned}$$

Since \(\vert \{ x\in [ 0,c ] :F ( x ) < s \} \vert =G\), by the Bathtub Principle [8, Theorem 1.14] there is a unique \(g\in \mathcal{C}_{g}\) that minimizes the integral \(I_{g}^{\prime }\), and it is given by \(g ( x ) =\chi _{ \{ F< s \} } ( x ) \).

Thus,

$$ g ( x ) = \textstyle\begin{cases} 1, & \text{if }F ( x ) < s, \\ 0, & \text{if }F ( x ) \geq s\end{cases}\displaystyle = \textstyle\begin{cases} 1, & \text{if } x\in ( \frac{c}{2}-\frac{G}{2}, \frac{c}{2}+\frac{G}{2} ) , \\ 0, & \text{if } x\in [ 0, \frac{c}{2}-\frac{G}{2} ] \cup [ \frac{c}{2}+\frac{G}{2}, c ]. \end{cases}$$

We then calculate

$$ h ( x ) = ( M-m ) g ( x ) +m= \textstyle\begin{cases} M, & \text{if } x\in ( \frac{c}{2}-\frac{G}{2}, \frac{c}{2}+\frac{G}{2} ) , \\ m, & \text{if } x\in [ 0, \frac{c}{2}-\frac{G}{2} ] \cup [ \frac{c}{2}+\frac{G}{2}, c ] \end{cases}$$

and

$$ \rho ( x ) =h ( x-a ) = \textstyle\begin{cases} M, & \text{if }x\in ( ( \frac{1}{2}a+\frac{1}{2}b ) -\frac{G}{2}, ( \frac{1}{2}a+\frac{1}{2}b ) +\frac{G}{2} ) , \\ m, & \text{if }x\in [ a, ( \frac{1}{2}a+\frac{1}{2}b ) -\frac{G}{2} ] \cup [ ( \frac{1}{2}a+\frac{1}{2}b ) +\frac{G}{2}, b ]. \end{cases}$$

This ρ is the desired maximizer. Finally, we will calculate the maximum value of \(I_{\rho }\),

$$\begin{aligned} I_{\rho } & = I_{h} \\ & = \int _{0}^{\frac{c}{2}-\frac{G}{2}}mf ( x ) \,dx + \int _{ \frac{c}{2}-\frac{G}{2}}^{ \frac{c}{2}+\frac{G}{2}}Mf ( x ) \,dx + \int _{ \frac{c}{2}+\frac{G}{2}}^{c}mf ( x ) \,dx \\ & = \frac{1}{24}m ( c-G ) ^{2} ( 2c+G ) + \frac{1}{12}GM \bigl( 3c^{2}-G^{2} \bigr) +\frac{1}{24}m ( c-G ) ^{2} ( 2c+G ) \\ & = \frac{1}{12}m ( c-G ) ^{2} ( 2c+G ) + \frac{1}{12}GM \bigl( 3c^{2}-G^{2} \bigr) . \end{aligned}$$
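
The outer integrals in the second line use the same antiderivative \(\int _{0}^{u}x ( c-x ) \,dx =\frac{cu^{2}}{2}-\frac{u^{3}}{3}\) as in the proof of (2), now evaluated at \(u= ( c-G ) /2\):

$$ \int _{0}^{\frac{c}{2}-\frac{G}{2}}mf ( x ) \,dx =m \biggl( \frac{c}{2}\cdot \frac{ ( c-G ) ^{2}}{4}-\frac{1}{3}\cdot \frac{ ( c-G ) ^{3}}{8} \biggr) = \frac{1}{24}m ( c-G ) ^{2} ( 2c+G ) , $$

and, by symmetry, the last integral has the same value, while the middle integral equals \(M ( c^{3}/6-\frac{1}{12} ( c-G ) ^{2} ( 2c+G ) ) =\frac{1}{12}GM ( 3c^{2}-G^{2} ) \).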

We now substitute \(GM=Gm+1-mc\), which follows from \(G ( M-m ) =1-mc\), into the previous expression:

$$\begin{aligned} I_{\rho } & = \frac{1}{12}m ( c-G ) ^{2} ( 2c+G ) +\frac{1}{12} ( Gm+1-mc ) \bigl( 3c^{2}-G^{2} \bigr) \\ & = \frac{1}{4}c^{2}-\frac{1}{12}mc^{3}- \frac{1}{12}G^{2}+\frac{1}{12}G^{2}mc \\ & = \frac{1}{4}c^{2}-\frac{1}{12}mc^{3}- \frac{1}{12}G^{2} ( 1-mc ) \\ & = \frac{1}{4}c^{2}-\frac{1}{12} \biggl( mc^{3}+ \frac{ ( 1-mc ) ^{3}}{ ( M-m ) ^{2}} \biggr) \\ & = \frac{1}{4} ( b-a ) ^{2}-\frac{1}{12} \biggl( m ( b-a ) ^{3}+ \frac{ ( 1-m ( b-a ) ) ^{3}}{ ( M-m ) ^{2}} \biggr) , \end{aligned}$$

as asserted in (3). □

Finally, we comment on the choice of f and F in the proof. The proof makes essential use of the relationship between \(I_{\rho }\) and \(I_{g}\); \(I_{h}\) is an intermediate quantity, related to \(I_{\rho }\) by the translation \(\tau _{a}\) and to \(I_{g}\) by the affine transformation ϕ. The function f is simply what results after starting with \(( b-x ) ( x-a ) \) in \(I_{\rho }\) and translating by a to obtain \(I_{h}\):

$$ I_{\rho }= \int _{a}^{b} ( b-x ) ( x-a ) \rho ( x ) \,dx = \int _{0}^{c} ( c-x ) x\cdot h ( x ) \,dx = \int _{0}^{c}f ( x ) \cdot h ( x ) \,dx =I_{h}. $$

In the proof of part (3) of the theorem, we need to maximize an integral, but the version of the Bathtub Principle that we use is stated for minimizations. It is for this reason that we apply the Bathtub Principle with the complementary function \(F ( x ) =-f ( x ) +c^{2}/4\). Here, the \(c^{2}/4\) constant term ensures that \(F ( x ) \) is nonnegative on \([ 0,c ] \). The specific choices of f and F do not affect the bounds in the theorem, which are absolute.

Availability of data and materials

Not applicable.

References

  1. Agarwal, R.P., Barnett, N.S., Cerone, P., Dragomir, S.S.: A survey on some inequalities for expectation and variance. Comput. Math. Appl. 49, 429–480 (2005)

  2. Barnett, N.S., Cerone, P., Dragomir, S.S.: Further inequalities for the expectation and variance of a random variable defined on a finite interval. Math. Inequal. Appl. 6(1) (2000)

  3. Barnett, N.S., Cerone, P., Dragomir, S.S. (eds.): Inequalities for Random Variables over a Finite Interval. RGMIA Monographs (2004)

  4. Barnett, N.S., Cerone, P., Dragomir, S.S., Roumeliotis, J.: Some inequalities for the dispersion of a random variable whose pdf is defined on a finite interval. J. Inequal. Pure Appl. Math. 2(1), Art. 1 (2001)

  5. Barnett, N.S., Dragomir, S.S.: Some elementary inequalities for the expectation and variance of a random variable whose pdf is defined on a finite interval. RGMIA Res. Rep. Collect. 2(7), Art. 12 (1999)

  6. Cerone, P., Dragomir, S.S.: On some inequalities for the expectation and variance. Korean J. Comput. Appl. Math. 8(2), 357–380 (2001)

  7. Kumar, P.: Moments inequalities of a random variable defined over a finite interval. J. Inequal. Pure Appl. Math. 3(3), Art. 41 (2002)

  8. Lieb, E.H., Loss, M.: Analysis. Am. Math. Soc., Providence (1997)

Acknowledgements

Not applicable.

Authors’ information

This is given on the title page.

Funding

Not applicable.

Author information

Contributions

D.C. wrote the manuscript. This is a single-author work.

Corresponding author

Correspondence to David Caraballo.

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Caraballo, D. Sharp bounds on moments of random variables. J Inequal Appl 2023, 72 (2023). https://doi.org/10.1186/s13660-023-02962-w
