
# An inequality for the gamma function via statistics and applications

Journal of Inequalities and Applications 2015, 2015:185

https://doi.org/10.1186/s13660-015-0705-5

• Received: 23 December 2014
• Accepted: 20 May 2015
• Published:

## Abstract

The aim of this paper is to establish an inequality for the gamma function, using a statistical method. Applications of the inequality are also given, including some estimates of π.

## Keywords

• inequality
• variance
• unbiased estimate
• UMVUE

• MSC: 62F11, 33B15, 33B10

## 1 Introduction and main result

Recently, there have been many papers about ratios of gamma functions in the literature; see . Some of these papers use statistical methods. Gurland  gave an inequality satisfied by the gamma function, using the so-called Cramér-Rao lower bound for the variance of unbiased estimators. Olkin  gave an extension of Gurland’s inequality. Gokhale  gave another inequality, which uses an analogue of the Cramér-Rao lower bound derived by Rao . Rao gave a stronger version of Wallis’ formula . Inspired by the above papers, we give an inequality concerning the gamma function, together with some applications. We first recall some definitions, notation, and well-known results from statistical theory that will be used in this paper.

A normal distribution $$N(\mu,\sigma^{2})$$ is described by the probability density function,
$$p(x)=\frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}},\quad x\in \mathbb{R}.$$
(1.1)
When a random variable X is distributed normally with mean μ and variance $$\sigma^{2}$$, we write $$X\sim N(\mu,\sigma^{2})$$.

Suppose that $$x_{1},x_{2},\ldots,x_{n}$$ is a sample from a population with a distribution function $$F_{\theta}(x)$$ ($$\theta\in\Omega$$). Let $$\hat{g}=\hat{g}(x_{1},x_{2},\ldots,x_{n})$$ be an estimator of a parametric function $$g(\theta)$$. If $$E(\hat{g})=g(\theta)$$ for all values of the parameter $$\theta\in\Omega$$, we call $$\hat{g}$$ an unbiased estimator of $$g(\theta)$$.
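To make the definition of unbiasedness concrete, here is a small numerical sketch (not from the paper; the values of μ, σ, n, and the trial count are arbitrary) showing that the sample mean, averaged over many independent samples, recovers μ up to Monte Carlo error:

```python
import random

# Illustrative sketch: the sample mean is an unbiased estimator of mu,
# so its average over many independent samples should be close to mu.
random.seed(0)
mu, sigma, n, trials = 3.0, 2.0, 5, 20000
avg = sum(
    sum(random.gauss(mu, sigma) for _ in range(n)) / n for _ in range(trials)
) / trials
print(abs(avg - mu))  # small Monte Carlo error
```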

Consider an estimation of $$g(\theta)$$ based on a sample $$x_{1},x_{2},\ldots,x_{n}$$ from some member of a family of distribution functions $$F_{\theta}(x)$$, $$\theta\in\Omega$$, where Ω is the parameter space. An unbiased estimator $$\hat{g}(x_{1},x_{2},\ldots,x_{n})$$ of $$g(\theta)$$ is the uniformly minimum variance unbiased estimator (UMVUE) if, for all $$\theta\in\Omega$$,
$$\operatorname{var}_{\theta}\bigl(\hat{g}(x_{1},x_{2}, \ldots,x_{n})\bigr)\leq \operatorname{var}_{\theta}\bigl(\tilde{g}(x_{1},x_{2}, \ldots,x_{n})\bigr),$$
(1.2)
for any other unbiased estimator $$\tilde{g}$$.
Euler’s gamma function Γ is defined for $$x>0$$ by
$$\Gamma(x)=\int_{0}^{\infty}t^{x-1}e^{-t}\,dt.$$
(1.3)
If $$x_{1},x_{2},\ldots,x_{n}$$ is a sample from a population with distribution $$N(0,\sigma^{2})$$, then
$$\hat{\sigma}=\frac{\Gamma (\frac {n}{2} )}{ \sqrt{2}\Gamma (\frac{n+1}{2} )}\sqrt{\sum _{i=1}^{n}x_{i}^{2}}$$
(1.4)
is the UMVUE of σ.
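A quick simulation supports (1.4); in the sketch below (not from the paper; the values of σ, n, and the trial count are arbitrary) the estimator, averaged over repeated samples from a mean-zero normal distribution, comes out close to σ:

```python
import math
import random

# Monte Carlo sketch of (1.4): for x_1,...,x_n drawn from N(0, sigma^2),
# E[ c_n * sqrt(sum x_i^2) ] = sigma, where
# c_n = Gamma(n/2) / (sqrt(2) * Gamma((n+1)/2)).
random.seed(1)
sigma, n, trials = 1.5, 4, 40000
c = math.gamma(n / 2) / (math.sqrt(2) * math.gamma((n + 1) / 2))
avg = sum(
    c * math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(n)))
    for _ in range(trials)
) / trials
print(avg)  # close to sigma = 1.5
```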

The main result of this paper is the following theorem.

### Theorem 1.1

Suppose that $$n_{k}$$ ($$k=1,2,\ldots,m$$) are positive integers and $$\lambda_{k}\in\mathbb{R}$$ satisfy $$0\leq\lambda_{k}\leq1$$, $$\sum_{k=1}^{m}\lambda_{k}=1$$. Then we have
$$\frac{n\Gamma^{2}(\frac{n}{2})}{2\Gamma^{2}(\frac{n+1}{2})}-1\leq\sum_{k=1}^{m} \lambda_{k}^{2} \biggl(\frac{n_{k}\Gamma^{2}(\frac{n_{k}}{2})}{ 2\Gamma^{2}(\frac{n_{k}+1}{2})}-1 \biggr),$$
(1.5)
where $$n=\sum_{k=1}^{m}n_{k}$$.
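Before turning to the proof, the inequality can be spot-checked numerically. The following sketch (with arbitrarily chosen $$n_{k}$$ and equal weights $$\lambda_{k}=1/m$$; `v` is a hypothetical helper name) evaluates both sides of (1.5):

```python
import math

# Numerical spot-check of (1.5) (a sketch, not a proof). v(n) is the
# variance factor n*Gamma(n/2)^2 / (2*Gamma((n+1)/2)^2) - 1 appearing
# on both sides; the weights are taken equal, lambda_k = 1/m.
def v(n):
    return n * math.gamma(n / 2) ** 2 / (2 * math.gamma((n + 1) / 2) ** 2) - 1

checks = []
for nk in ([2, 3], [1, 4, 5], [6, 6, 6, 6]):
    m = len(nk)
    lhs = v(sum(nk))
    rhs = sum((1 / m) ** 2 * v(n) for n in nk)
    checks.append(lhs <= rhs + 1e-12)
print(all(checks))
```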

## 2 Proof of the main result

In this section, we use statistical methods to prove the theorem.

### Proof

Let $$x_{11}, x_{12}, \ldots, x_{1n_{1}}, x_{21}, x_{22}, \ldots, x_{2n_{2}}, \ldots, x_{m1}, x_{m2}, \ldots, x_{mn_{m}}$$ be a random sample from a normal distribution $$N(0,\sigma^{2})$$. From (1.4), it is known that
$$\hat{\sigma}=\frac{\Gamma (\frac {n}{2} )}{ \sqrt{2}\Gamma (\frac{n+1}{2} )}\sqrt{\sum _{i=1}^{m}\sum_{j=1}^{n_{i}}x_{ij}^{2}}$$
(2.1)
is the UMVUE of σ, where $$n=\sum_{i=1}^{m}n_{i}$$.
For each fixed k, $$1\leq k\leq m$$, the subsample $$x_{k1}, x_{k2}, \ldots, x_{kn_{k}}$$ gives rise to
$$\frac{\Gamma(\frac{n_{k}}{2})}{ \sqrt{2}\Gamma(\frac{n_{k}+1}{2})}\sqrt{\sum_{i=1}^{n_{k}}x_{ki}^{2}},$$
(2.2)
which is an unbiased estimate of σ.
Using (2.2), we construct a new unbiased estimate of σ, i.e.,
$$\hat{\sigma}_{1}=\sum_{k=1}^{m} \frac{\lambda_{k}\Gamma(\frac{n_{k}}{2})}{ \sqrt{2}\Gamma(\frac{n_{k}+1}{2})}\sqrt{\sum_{i=1}^{n_{k}}x_{ki}^{2}},$$
(2.3)
where $$0\leq\lambda_{k}\leq1$$, $$\sum_{k=1}^{m}\lambda_{k}=1$$.
Due to the definition of the UMVUE, the following inequality holds:
$$\operatorname{Var}\hat{\sigma}_{1}\geq \operatorname{Var} \hat{\sigma}.$$
(2.4)
Since $$E(\hat{\sigma})=\sigma$$, $$E (\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}x_{ij}^{2} )=n\sigma^{2}$$, and the m subsamples are independent, some simple computations give
$$\operatorname{Var}\hat{\sigma}= \biggl(\frac{n\Gamma^{2}(\frac{n}{2})}{2\Gamma^{2}(\frac {n+1}{2})}-1 \biggr) \sigma^{2}$$
(2.5)
and
$$\operatorname{Var}\hat{\sigma}_{1}=\sum_{k=1}^{m} \lambda_{k}^{2} \biggl(\frac {n_{k}\Gamma^{2}(\frac{n_{k}}{2})}{2\Gamma^{2}(\frac{n_{k}+1}{2})}-1 \biggr) \sigma^{2}.$$
(2.6)
Substituting (2.5) and (2.6) into (2.4) and cancelling $$\sigma^{2}$$ gives (1.5). This completes the proof. □
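The variance formula (2.5) can also be confirmed by simulation. In this sketch (not from the paper; n, σ, and the trial count are arbitrary) the Monte Carlo variance of $$\hat{\sigma}$$ is compared with the closed form:

```python
import math
import random

# Monte Carlo sketch of (2.5): the simulated variance of sigma_hat should
# match (n*Gamma(n/2)^2/(2*Gamma((n+1)/2)^2) - 1)*sigma^2 for N(0, sigma^2).
random.seed(2)
sigma, n, trials = 1.0, 5, 60000
c = math.gamma(n / 2) / (math.sqrt(2) * math.gamma((n + 1) / 2))
vals = [
    c * math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(n)))
    for _ in range(trials)
]
mean = sum(vals) / trials
mc_var = sum((x - mean) ** 2 for x in vals) / trials
theory = (n * math.gamma(n / 2) ** 2
          / (2 * math.gamma((n + 1) / 2) ** 2) - 1) * sigma ** 2
print(mc_var, theory)  # the two values agree up to Monte Carlo error
```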

## 3 Some applications of Theorem 1.1

In this section, we show some applications of the main result of this paper. First, we give inequalities that involve the gamma function and trigonometric functions.

### Theorem 3.1

Suppose that n is a positive integer and $$\theta\in\mathbb{R}$$. Then
$$\frac{\Gamma^{2}(2n+1)}{\Gamma^{2}(2n+\frac{1}{2})}\leq n\sin^{2}\theta+\bigl(2- \sin^{2}\theta\bigr)\frac{\Gamma^{2}(n+1)}{\Gamma ^{2}(n+\frac{1}{2})}.$$
(3.1)

### Proof

Letting $$m=2$$ and $$\lambda_{1}=\sin^{2}\theta$$, $$\lambda _{2}=\cos^{2}\theta$$ in (1.5) gives
\begin{aligned}& \frac{(n_{1}+n_{2})\Gamma^{2}(\frac{n_{1}+n_{2}}{2})}{2\Gamma ^{2}(\frac{n_{1}+n_{2}+1}{2})}-1 \\& \quad \leq\sin^{4}\theta \biggl(\frac{n_{1}\Gamma^{2}(\frac{n_{1}}{2})}{2\Gamma ^{2}(\frac{n_{1}+1}{2})}-1 \biggr)+ \cos^{4}\theta \biggl(\frac{n_{2}\Gamma^{2}(\frac{n_{2}}{2})}{2\Gamma ^{2}(\frac{n_{2}+1}{2})}-1 \biggr). \end{aligned}
(3.2)
Letting $$n_{1}=n_{2}=2a$$ (where a is a positive integer) in (3.2) and multiplying both sides by 2a, one obtains
$$\frac{\Gamma^{2}(2a+1)}{\Gamma^{2}(2a+\frac{1}{2})}- 2\bigl(\sin^{4} \theta+\cos^{4}\theta\bigr)\frac{\Gamma^{2}(a+1)}{\Gamma^{2}(a+\frac {1}{2})}\leq2a \bigl(1- \sin^{4}\theta-\cos^{4}\theta \bigr).$$
(3.3)
Employing the following trigonometric formula:
$$\sin^{4}\theta+\cos^{4}\theta=1- \frac{\sin^{2}2\theta}{2},$$
(3.4)
(3.3) becomes
$$\frac{\Gamma^{2}(2a+1)}{\Gamma^{2}(2a+\frac{1}{2})}\leq a\sin^{2}(2\theta)+\bigl(2- \sin^{2}(2\theta)\bigr)\frac{\Gamma^{2}(a+1)}{\Gamma ^{2}(a+\frac{1}{2})}.$$
(3.5)
Replacing 2θ by θ and a by n, we obtain (3.1). This finishes the proof. □
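The inequality (3.1) can be spot-checked numerically. The following sketch (grid sizes chosen arbitrarily) evaluates both sides for small n and a range of θ:

```python
import math

# Grid check of (3.1) for small n and theta in [0, pi] (a numerical
# sketch, not a proof).
g = math.gamma
ok = True
for n in range(1, 16):
    for j in range(40):
        theta = j * math.pi / 39
        s2 = math.sin(theta) ** 2
        lhs = g(2 * n + 1) ** 2 / g(2 * n + 0.5) ** 2
        rhs = n * s2 + (2 - s2) * g(n + 1) ** 2 / g(n + 0.5) ** 2
        ok = ok and lhs <= rhs + 1e-9
print(ok)
```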
In , Gurland gave the following lower bound for π:
$$\frac{4n+3}{ (2n+1 )^{2}} \biggl(\frac{ (2n )!!}{ (2n-1 )!!} \biggr)^{2}< \pi.$$
(3.6)
Mortici  gave a refinement of Gurland’s bound:
$$\biggl(\frac{n+\frac{1}{4}}{n^{2}+\frac{1}{2}n+\frac{3}{32}}+\frac {9}{2048n^{5}}-\frac{45}{8192n^{6}} \biggr) \biggl(\frac{ (2n )!!}{ (2n-1 )!!} \biggr)^{2}< \pi.$$
(3.7)
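Both expressions can be evaluated numerically. The following sketch (at the arbitrary value n = 10, with a hypothetical helper `dfact` for the double factorial) compares (3.6) and (3.7):

```python
import math

# Numerical sketch comparing (3.6) and (3.7) at the arbitrary value n = 10.
def dfact(n):
    # double factorial n!!, with 0!! = (-1)!! = 1
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

n = 10
w = (dfact(2 * n) / dfact(2 * n - 1)) ** 2
gurland = (4 * n + 3) / (2 * n + 1) ** 2 * w
mortici = (
    (n + 0.25) / (n ** 2 + 0.5 * n + 3 / 32)
    + 9 / (2048 * n ** 5)
    - 45 / (8192 * n ** 6)
) * w
print(math.pi - gurland)  # Gurland's gap; Mortici's expression lies much closer to pi
```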
Using (3.1), we can get the following similar result.

### Corollary 3.2

Suppose that n is a positive integer. Then
$$\frac{1}{n} \biggl[ \biggl(\frac{(4n)!!}{(4n-1)!!} \biggr)^{2}- \biggl(\frac {(2n)!!}{(2n-1)!!} \biggr)^{2} \biggr]< \pi.$$
(3.8)

### Proof

Letting $$\theta=\frac{\pi}{2}$$ in (3.1) gives
$$\frac{\Gamma^{2}(2n+1)}{\Gamma^{2}(2n+\frac{1}{2})}\leq n+\frac{\Gamma^{2}(n+1)}{\Gamma^{2}(n+\frac{1}{2})}.$$
(3.9)
Recall the well-known identity (see, e.g., )
$$\frac{ (2q )!!}{ (2q-1 )!!}=\sqrt{\pi}\frac{\Gamma (q+1 )}{\Gamma (q+\frac{1}{2} )}.$$
(3.10)
Applying (3.10) with $$q=2n$$ and $$q=n$$ to (3.9) and rearranging, we obtain
$$\frac{1}{n} \biggl[ \biggl(\frac{(4n)!!}{(4n-1)!!} \biggr)^{2}- \biggl(\frac {(2n)!!}{(2n-1)!!} \biggr)^{2} \biggr]\leq \pi.$$
(3.11)
Since $$\hat{\sigma}_{1}\neq\hat{\sigma}$$ in this construction and the UMVUE is essentially unique, the inequality (2.4) is strict; hence equality in (3.11) cannot hold, and we get (3.8). □
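The bound (3.8) can be checked numerically. The sketch below (sample values of n chosen arbitrarily; `dfact` is a hypothetical double-factorial helper) evaluates the bound, which stays below π and approaches it as n grows:

```python
import math

# Numerical sketch of (3.8): the expression is below pi and tends to pi.
def dfact(n):
    # double factorial n!!, with 0!! = (-1)!! = 1
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

bounds = []
for n in (1, 5, 20):
    b = (
        (dfact(4 * n) / dfact(4 * n - 1)) ** 2
        - (dfact(2 * n) / dfact(2 * n - 1)) ** 2
    ) / n
    bounds.append(b)
print(bounds)  # increasing toward pi, always below it
```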
Now we give an inequality involving the binomial coefficients $${n\choose m}$$, defined by
$${n\choose m }=\frac{n!}{m!(n-m)!}.$$

### Theorem 3.3

Suppose that m and w are positive integers. Then
\begin{aligned}& \frac{1}{m} \biggl[\frac{(2mw)!!}{(2mw-1)!!} \biggr]^{2} +\frac{1}{(2^{m}-1)^{2}} \biggl[w\pi- \biggl(\frac{ (2w )!!}{ (2w-1 )!!} \biggr)^{2} \biggr] \left[{2m\choose m}-1 \right] \\& \quad\leq w\pi. \end{aligned}
(3.12)

### Proof

Since
$$\sum_{k=1}^{m}{m\choose k} \frac{1}{2^{m}-1}=1,$$
(3.13)
we may let $$n_{k}=2w$$ and $$\lambda_{k}={m\choose k}\frac{1}{2^{m}-1}$$ in (1.5); then $$n=\sum_{k=1}^{m}n_{k}=2mw$$, and we get
$$mw\frac{\Gamma^{2}(mw)}{\Gamma^{2}(mw+\frac{1}{2})}-1\leq\sum_{k=1}^{m} \biggl({m\choose k}\frac{1}{2^{m}-1} \biggr)^{2} \biggl(w \frac{\Gamma ^{2}(w)}{\Gamma^{2}(w+\frac{1}{2})}-1 \biggr).$$
(3.14)
Using the identity (3.10), after some simple derivations we have
\begin{aligned}& \Biggl[\frac{1}{m} \biggl(\frac{(2mw)!!}{(2mw-1)!!} \biggr)^{2} -\sum_{k=1}^{m} \biggl({m\choose k}\frac{(2w)!!}{(2w-1)!!}\frac{1}{2^{m}-1} \biggr)^{2} \Biggr]\frac{1}{\pi} \\& \quad\leq w \Biggl[1-\sum_{k=1}^{m} \biggl({m\choose k}\frac{1}{2^{m}-1} \biggr)^{2} \Biggr]. \end{aligned}
(3.15)
Substituting
$$\sum_{k=1}^{m}{m\choose k}^{2}={2m\choose m}-1$$
(3.16)
into (3.15), one obtains (3.12). □
The special case $$w=1$$ of (3.12) results in
$$\frac{1}{m} \biggl[\frac{(2m)!!}{(2m-1)!!} \biggr]^{2} +\frac{\pi-4}{(2^{m}-1)^{2}} \left[{2m\choose m}-1 \right]\leq\pi.$$
(3.17)
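The binomial identity used in the proof above, Vandermonde’s $$\sum_{k=0}^{m}{m\choose k}^{2}={2m\choose m}$$, can be verified quickly (a sketch with arbitrarily chosen values of m):

```python
import math

# Quick check of Vandermonde's identity sum_{k=0}^{m} C(m,k)^2 = C(2m,m),
# which underlies the substitution step in the proof.
ok = all(
    sum(math.comb(m, k) ** 2 for k in range(m + 1)) == math.comb(2 * m, m)
    for m in (1, 3, 8, 15)
)
print(ok)
```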

Finally, we give the following double inequality for π.

### Theorem 3.4

Let p and d be positive integers. Then
\begin{aligned}& \frac{1}{2pd} \biggl[ \biggl(\frac{ (4pd+2p )!!}{ (4pd+2p-1 )!!} \biggr)^{2}- \biggl(\frac{ (2p )!!}{ (2p-1 )!!} \biggr)^{2} \biggr] \\& \quad< \pi < \frac{4d}{ (2p+1 ) [ (2d+1 )^{2} (\frac { (4pd+2p+2d-1 )!!}{ (4pd+2p+2d )!!} )^{2}- (\frac{(2p-1)!!}{(2p)!!} )^{2} ]}. \end{aligned}
(3.18)

### Proof

Letting $$m=2d+1$$ and $$n_{1}=n_{2}=\cdots=n_{m}=2p+1$$ in (1.5) gives
\begin{aligned}& \frac{(2p+1)(2d+1)}{2}\frac{\Gamma^{2}(2pd+p+d+\frac{1}{2})}{\Gamma ^{2}(2pd+p+d+1)}-1 \\& \quad\leq\sum_{k=1}^{2d+1} \lambda_{k}^{2} \biggl[\frac{2p+1}{2}\frac {\Gamma^{2}(\frac{2p+1}{2})}{ \Gamma^{2}(\frac{2p+2}{2})}-1 \biggr]. \end{aligned}
(3.19)
Using (3.10), the inequality (3.19) can be written in the equivalent form
$$\pi< \frac{2 (1-\sum_{k=1}^{2d+1}\lambda_{k}^{2} )}{(2p+1) [(2d+1) (\frac{(4pd+2p+2d-1)!!}{(4pd+2p+2d)!!} )^{2}-\sum_{k=1}^{2d+1}\lambda_{k}^{2} (\frac{(2p-1)!!}{(2p)!!} )^{2} ]}.$$
(3.20)
Letting $$\lambda_{1}=\lambda_{2}=\cdots=\lambda_{m}=\frac{1}{2d+1}$$ in (3.20) gives
$$\pi< \frac{4d}{(2p+1) [(2d+1)^{2} (\frac {(4pd+2p+2d-1)!!}{(4pd+2p+2d)!!} )^{2}- (\frac{(2p-1)!!}{(2p)!!} )^{2} ]}.$$
(3.21)
Similarly, letting $$m=2d+1$$, $$n_{1}=n_{2}=\cdots=n_{m}=2p$$, and $$\lambda_{1}=\lambda_{2}=\cdots=\lambda_{m}=\frac{1}{2d+1}$$ in (1.5), one can obtain
$$\frac{1}{2pd} \biggl[ \biggl(\frac{(4pd+2p)!!}{(4pd+2p-1)!!} \biggr)^{2}- \biggl(\frac{(2p)!!}{(2p-1)!!} \biggr)^{2} \biggr]< \pi.$$
(3.22)

The double inequality (3.18) then follows by combining (3.21) and (3.22). □

The special case $$p=1$$ of (3.18) results in
$$\frac{1}{2d} \biggl[ \biggl(\frac{ (4d+2 )!!}{ (4d+1 )!!} \biggr)^{2}-4 \biggr]< \pi < \frac{4d}{3 [ (2d+1 )^{2} (\frac{ (6d+1 )!!}{ (6d+2 )!!} )^{2}-\frac{1}{4} ]}.$$
(3.23)
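As a final numerical sketch (values of d chosen arbitrarily; `dfact` is a hypothetical double-factorial helper), one can check the two-sided bound (3.23) directly:

```python
import math

# Numerical sketch of the two-sided bound (3.23), i.e. (3.18) with p = 1.
def dfact(n):
    # double factorial n!!, with 0!! = (-1)!! = 1
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

ok = True
for d in (1, 3, 10):
    lower = ((dfact(4 * d + 2) / dfact(4 * d + 1)) ** 2 - 4) / (2 * d)
    upper = 4 * d / (
        3 * ((2 * d + 1) ** 2 * (dfact(6 * d + 1) / dfact(6 * d + 2)) ** 2 - 0.25)
    )
    ok = ok and lower < math.pi < upper
print(ok)
```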

## Declarations

### Acknowledgements

The authors would like to thank the two anonymous referees for many helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (grant 11271057).