Open Access

Some new estimates of the ‘Jensen gap’

Journal of Inequalities and Applications 2016, 2016:39

https://doi.org/10.1186/s13660-016-0985-4

Received: 19 September 2015

Accepted: 19 January 2016

Published: 1 February 2016

Abstract

Let \(( \Omega,\mu ) \) be a probability measure space. We consider the so-called ‘Jensen gap’
$$ J ( \varphi,\mu,f ) = \int_{\Omega}\varphi \bigl( f ( s ) \bigr)\,d\mu ( s ) -\varphi \biggl( \int_{\Omega }f ( s )\,d\mu ( s ) \biggr) $$
for some classes of functions φ. Several new estimates and equalities are derived and compared with other results of this type. In particular, the case when φ has a Taylor expansion is treated, and the corresponding discrete results are pointed out.

Keywords

Jensen’s inequality, convex functions, γ-superconvex functions, superquadratic functions, Taylor expansion

MSC

26D10, 26D15, 26B25

1 Introduction

Let \(( \Omega,\mu ) \) be a probability measure space, i.e. \(\mu ( \Omega ) =1\), and let f be a μ-measurable function on Ω. If φ is convex, then Jensen’s inequality
$$ \varphi \biggl( \int_{\Omega}f ( s )\,d\mu ( s ) \biggr) \leq \int_{\Omega}\varphi \bigl( f ( s ) \bigr)\,d\mu ( s ) $$
(1.1)
holds. This inequality can be traced back to Jensen’s original papers [1, 2] and is one of the most fundamental mathematical inequalities, one reason being that a great number of classical inequalities can be derived from (1.1); see e.g. [3] and the references therein. The inequality (1.1) cannot in general be improved, since equality holds in (1.1) when \(\varphi ( x ) \equiv x\). However, for special classes of functions, (1.1) can be sharpened, e.g. by giving lower estimates of the so-called ‘Jensen gap’
$$ J ( \varphi,\mu,f ) = \int_{\Omega}\varphi \bigl( f ( s ) \bigr)\,d\mu ( s ) -\varphi \biggl( \int_{\Omega }f ( s )\,d\mu ( s ) \biggr) , $$
thus obtaining refined versions of (1.1).

We give a few examples of such results.

Example 1

(see [4])

Let φ be a superquadratic function, i.e. \(\varphi: [ 0,\infty ) \rightarrow \mathbb{R} \) is such that for each \(x\geq0\) there exists a constant \(C ( x ) \) such that
$$ \varphi ( y ) \geq\varphi ( x ) +C ( x ) ( y-x ) +\varphi \bigl( \vert y-x \vert \bigr) $$
for \(y\geq0\). For such functions we have the following estimate of the Jensen gap:
$$ J ( \varphi,\mu,f ) \geq \int_{\Omega}\varphi \biggl( \biggl\vert f ( s ) - \int_{\Omega}f ( s )\,d\mu ( s ) \biggr\vert \biggr)\,d\mu ( s ) . $$
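For a concrete illustration (not part of [4]), the discrete analogue of this estimate, with weights \(\alpha_{i}\) in place of the measure μ, can be checked numerically; the test function \(\varphi ( x ) =x^{4}\), which is superquadratic on \([0,\infty)\), and the weights below are hypothetical choices:

```python
# Sanity check of the superquadratic refinement in its discrete form
# (a sketch: weights alpha_i play the role of the probability measure mu).
phi = lambda t: t**4      # x**4 is superquadratic on [0, inf)

alpha = [0.2, 0.3, 0.5]   # probability weights (sum to 1)
x = [1.0, 2.0, 4.0]       # values of f

mean = sum(a * v for a, v in zip(alpha, x))
jensen_gap = sum(a * phi(v) for a, v in zip(alpha, x)) - phi(mean)
lower_bound = sum(a * phi(abs(v - mean)) for a, v in zip(alpha, x))

# The refinement: the Jensen gap dominates the weighted phi of the deviations.
assert jensen_gap >= lower_bound >= 0
```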

Example 2

(see [5] and [6])

We say that a function \(K ( x ) \) is γ-superconvex if \(\varphi ( x ) :=x^{-\gamma }K ( x ) \) is convex. If φ is a differentiable, convex, increasing function with \(\varphi ( 0 ) = \lim_{z\rightarrow0+} z\varphi^{\prime} ( z ) =0\), then we have the following estimate of the Jensen gap:
$$ J ( K,\mu,f ) \geq\varphi ( z ) \int_{\Omega} \bigl( \bigl( f ( s ) \bigr) ^{\gamma}-z^{\gamma} \bigr)\,d\mu ( s ) +\varphi^{\prime} ( z ) \int_{\Omega} \bigl( f ( s ) \bigr) ^{\gamma} \bigl( f ( s ) -z \bigr)\,d\mu ( s ) \geq0, $$
for \(z= \int_{\Omega}f ( s )\,d\mu ( s ) >0\), where \(f\geq 0\) and \(f^{\gamma}\) (\(\gamma\geq0\)) are integrable functions on the probability measure space \(( \Omega,\mu ) \).
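A small numerical sketch of the discrete case of this estimate; the choices \(\gamma=1\), \(\varphi ( x ) =x^{2}\) (hence \(K ( x ) =x^{3}\)), and the weights are illustrative assumptions, with φ convex, increasing, and \(\varphi ( 0 ) =\lim_{z\to0+}z\varphi^{\prime} ( z ) =0\):

```python
gamma = 1.0
phi = lambda t: t**2              # convex, increasing, phi(0) = 0
dphi = lambda t: 2.0 * t          # phi'
K = lambda t: t**gamma * phi(t)   # K(x) = x**3 is gamma-superconvex here

alpha = [0.2, 0.3, 0.5]           # probability weights
f = [1.0, 2.0, 4.0]
z = sum(a * v for a, v in zip(alpha, f))

gap_K = sum(a * K(v) for a, v in zip(alpha, f)) - K(z)
bound = (phi(z) * sum(a * (v**gamma - z**gamma) for a, v in zip(alpha, f))
         + dphi(z) * sum(a * v**gamma * (v - z) for a, v in zip(alpha, f)))

# The gap for K dominates the stated two-term lower bound, which is >= 0.
assert gap_K >= bound >= 0
```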

Remark 1

By using the results in Examples 1 and 2 it is possible to derive Hardy-type inequalities with other ‘breaking points’ (the point where the inequality reverses) than the usual breaking point \(p=1\). See [5, 7, 8] and [9].

Remark 2

In the recent paper [6] it was proved that the notion of γ-superconvexity makes sense also in the case \(-1\leq\gamma \leq0\), and this fact was used there to derive some new two-sided Jensen-type inequalities.

Example 3

(see [10])

In his paper Walker studied the Jensen gap for the special case \(f ( s ) =s\), i.e. for \(J ( \varphi,\mu ) :=J ( \varphi,\mu,\mathrm{id} ) \), and found an estimate of the type
$$ J ( \varphi,\mu ) \geq\frac{1}{2}C ( \varphi,\mu ) \biggl( \int_{\Omega}s^{2}\,d\mu ( s ) - \biggl( \int_{\Omega }s\,d\mu ( s ) \biggr) ^{2} \biggr) , $$
where the positive constant \(C=C ( \varphi,\mu ) \) is easily computed.

In his paper it was assumed that φ admits a Taylor power series representation \(\varphi ( x ) =\sum_{n=0}^{\infty }a_{n}x^{n}\), \(a_{n}\geq0\), \(n=0,1,2,\ldots\) , \(0< x\leq A<\infty\). In another recent paper Dragomir [11] derived some other Jensen integral inequalities for this power series case. A comparison between these two results and our results is given in our concluding remarks.

Inspired by these results, we derive some new results of the same type. In Theorem 1 we get an estimate like that of Walker in [10] but for the general case of \(J ( \varphi,\mu,f ) \). In Theorem 2 we prove another complement of the Walker result by considering the Jensen functional
$$ J_{\alpha} \bigl( t^{\alpha},\mu \bigr) = \int_{\Omega}y^{\alpha}\,d\mu ( y ) - \biggl( \int_{\Omega}y\,d\mu ( y ) \biggr) ^{\alpha }, \quad\alpha \geq2, $$
and get an estimate for this Jensen gap which even reduces to equality for \(\alpha=N\), \(N=2,3,\ldots\) . By using this result it is possible to derive a similar equality for the Jensen gap whenever it can be represented by a Taylor power series (see Theorem 3).

In Section 3 we show that our lower bound of the Jensen gap is better than the lower bound in [11] when the function under consideration has a Taylor series expansion with non-negative coefficients. Moreover, we prove that in such cases our technique also yields upper bounds, and not only lower bounds as in [10].

2 The main results

Our first main result reads as follows.

Theorem 1

Let \(\phi: [ 0,A ) \rightarrow \mathbb{R} \), \(0< A\leq\infty\), have a Taylor power series representation \(\phi ( x ) =\sum_{n=0}^{\infty }a_{n}x^{n}\) on \([ 0,A )\).

Let φ be a convex increasing function on \([ 0,A ) \) that is related to ϕ by
$$ \varphi ( x ) =\frac{\phi ( x ) -\phi(0)}{x}=\sum_{n=0}^{\infty}a_{n+1}x^{n}. $$
(a) If \(f\geq0\) and f, \(f^{2}\), and \(\phi\circ f\) are integrable functions on Ω, \(z=\int_{\Omega}f\,d\mu>0\), where μ is a probability measure on Ω, then
$$ \int_{\Omega}\phi ( f )\,d\mu-\phi ( z ) \geq \biggl( \frac{\phi ( z ) -\phi(0)}{z} \biggr) ^{\prime} \biggl( \int_{\Omega}f^{2}\,d\mu-z^{2} \biggr) \geq0. $$
In other words,
$$\begin{aligned} J ( \phi,\mu,f ) =& \int_{\Omega}\phi ( f )\,d\mu -\phi ( z ) \\ =&\sum_{n=0}^{\infty}a_{n+1} \int_{\Omega}f^{n+1}\,d\mu-\sum _{n=0}^{\infty }a_{n+1}z^{n+1} \\ \geq&\sum_{n=0}^{\infty} ( n+1 ) a_{n+2}z^{n} \biggl( \int _{\Omega }f^{2}\,d\mu-z^{2} \biggr) \geq0. \end{aligned}$$
(b) For \(\overline{x}=\sum_{i=1}^{m}\alpha_{i}x_{i}\), \(\sum_{i=1}^{m}\alpha_{i}=1\), \(0\leq\alpha_{i}\leq1\), \(0\leq x_{i}< A\), \(i=1,\ldots,m\), we have
$$ \sum_{i=1}^{m}\alpha_{i}\phi ( x_{i} ) -\phi ( \overline {x} ) \geq \biggl( \frac{\phi ( \overline{x} ) -\phi(0)}{\overline{x}} \biggr) ^{\prime} \Biggl( \sum _{i=1}^{m}\alpha _{i}x_{i}^{2}-\overline{x}^{2} \Biggr) \geq0. $$
In other words,
$$ \sum_{i=1}^{m}\sum _{n=0}^{\infty}\alpha _{i}a_{n+1}x_{i}^{n+1}- \sum_{n=0}^{\infty}a_{n+1} \overline{x}^{n+1}\geq \sum_{n=0}^{\infty} ( n+1 ) a_{n+2}\overline{x}^{n} \Biggl( \sum _{i=1}^{m}\alpha_{i}x_{i}^{2}- \overline{x}^{2} \Biggr) \geq0. $$

Proof

For \(\phi ( x ) =\sum_{n=0}^{\infty}a_{n}x^{n}\), \(0\leq x< A\), define \(\psi: [ 0,A ) \rightarrow \mathbb{R} _{+}\) by \(\psi ( x ) =\phi ( x ) -\phi ( 0 ) =\sum_{n=0}^{\infty}a_{n+1}x^{n+1}\), \(0\leq x< A\), and let \(\varphi ( x ) =\frac{\psi ( x ) }{x}\), i.e. \(x\varphi ( x ) =\psi ( x )\), \(0\leq x< A\). Then \(\psi ( x ) \) is a 1-quasiconvex function (see [6]), \(\varphi ( x ) =\sum_{n=0}^{\infty}a_{n+1}x^{n}\), \(0\leq x< A\), and \(\varphi ^{\prime} ( x ) =\sum_{n=0}^{\infty} ( n+1 ) a_{n+2}x^{n}\).

The functions ϕ, ψ, φ, and \(\varphi^{\prime}\) are differentiable functions on \([ 0,A ) \). From the convexity of \(\varphi ( x ) \) we have
$$ \varphi ( y ) -\varphi ( x ) \geq\varphi^{\prime } ( x ) ( y-x ) ,\quad x,y\in [ 0,A ), $$
and, therefore,
$$ \psi ( y ) -\psi ( x ) =y\varphi ( y ) -x\varphi ( x ) \geq\varphi ( x ) ( y-x ) +\varphi^{\prime} ( x ) y ( y-x ) ,\quad x,y\geq0. $$
Since \(\psi ( x ) =\phi ( x ) -\phi ( 0 ) \) we get
$$ \phi ( y ) -\phi ( x ) =\psi ( y ) -\psi ( x ) \geq\varphi ( x ) ( y-x ) + \varphi ^{\prime} ( x ) y ( y-x ) . $$
Now using this inequality with \(x=z\), \(y=f\), and integrating, we find that
$$\begin{aligned} &\int_{\Omega}\phi ( f )\,d\mu-\phi ( z )\\ &\quad\geq\varphi ( z ) \biggl( \int_{\Omega}f\,d\mu- \int_{\Omega }z\,d\mu \biggr) +\varphi^{\prime} ( z ) \biggl( \int_{\Omega }f^{2}\,d\mu-z^{2} \biggr)\\ &\quad=0+ \biggl( \frac{\phi ( z ) -\phi ( 0 ) }{z} \biggr) ^{\prime} \biggl( \int_{\Omega}f^{2}\,d\mu-z^{2} \biggr) \geq0. \end{aligned}$$
In the last inequality we used \(z=\int_{\Omega}f\,d\mu>0\) and the fact that \(\varphi ( z ) =\frac{\phi ( z ) -\phi ( 0 ) }{z}\) is convex and increasing.

Hence (a) is proved and since (b) is just a special case of (a), the proof is complete. □
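Part (b) of Theorem 1 can be checked numerically; the choice \(\phi ( x ) =e^{x}\) (so \(a_{n}=1/n!\geq0\)) and the weights and nodes below are illustrative assumptions, with the derivative of \(( e^{z}-1 ) /z\) taken in closed form:

```python
import math

phi = math.exp                    # a_n = 1/n! >= 0, convex and increasing

alpha = [0.2, 0.3, 0.5]           # probability weights
x = [1.0, 2.0, 4.0]
xbar = sum(a * v for a, v in zip(alpha, x))

# Closed form for phi = exp:  d/dz (e^z - 1)/z = (z e^z - e^z + 1) / z**2
deriv = (xbar * math.exp(xbar) - math.exp(xbar) + 1.0) / xbar**2

gap = sum(a * phi(v) for a, v in zip(alpha, x)) - phi(xbar)
moment_gap = sum(a * v**2 for a, v in zip(alpha, x)) - xbar**2

# Theorem 1(b): the Jensen gap dominates deriv * (second-moment gap) >= 0.
assert gap >= deriv * moment_gap >= 0
```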

For the proof of our next main result we need the following lemma, which is also of independent interest.

Lemma 1

Let φ be a differentiable function on an interval \(I\subset \mathbb{R} \), and let \(x,y\in I\). Then, for \(N=2,3,\ldots\) ,
$$\begin{aligned} &\varphi ( x ) \bigl( y^{N-1}-x^{N-1} \bigr) +\varphi ^{\prime } ( x ) y^{N-1} ( y-x ) \\ &\quad= \bigl( x^{N-1}\varphi ( x ) \bigr) ^{\prime} ( y-x ) + ( y-x ) ^{2}\sum_{k=1}^{N-1}y^{k-1} \bigl( x^{N-k-1}\varphi ( x ) \bigr) ^{\prime}. \end{aligned}$$
(2.1)
In particular, for \(N=2\) we have
$$ \varphi ( x ) ( y-x ) +\varphi^{\prime} ( x ) y ( y-x ) = \bigl( x\varphi ( x ) \bigr) ^{\prime } ( y-x ) +\varphi^{\prime} ( x ) ( y-x ) ^{2}. $$
(2.2)

Proof

A simple calculation shows that (2.2) holds, i.e., that (2.1) holds for \(N=2\). For \(N=3\) (2.1) reads
$$\begin{aligned} &\varphi ( x ) \bigl( y^{2}-x^{2} \bigr) + \varphi^{\prime } ( x ) y^{2} ( y-x ) = \bigl( x^{2}\varphi ( x ) \bigr) ^{\prime} ( y-x ) + ( y-x ) ^{2} \bigl( \bigl( x\varphi ( x ) \bigr) ^{\prime}+y \varphi^{\prime} ( x ) \bigr) . \end{aligned}$$
(2.3)
Moreover, it is easy to verify the identity
$$\begin{aligned} &\varphi ( x ) \bigl( y^{2}-x^{2} \bigr) + \varphi^{\prime } ( x ) y^{2} ( y-x ) =\varphi^{\prime} ( x ) y ( y-x ) ^{2}+x\varphi ( x ) ( y-x ) + \bigl( x\varphi ( x ) \bigr) ^{\prime }y ( y-x ). \end{aligned}$$
(2.4)
By using (2.4) together with (2.2) and making some straightforward calculations we obtain (2.3). The general proof follows in the same way using induction and the more general (than (2.4)) identity
$$\begin{aligned}[b] &\varphi ( x ) \bigl( y^{N-1}-x^{N-1} \bigr) +\varphi ^{\prime } ( x ) y^{N-1} ( y-x )\\ &\qquad{}- \bigl[ \bigl( x\varphi ( x ) \bigr) \bigl( y^{N-2}-x^{N-2} \bigr) + \bigl( x\varphi ( x ) \bigr) ^{\prime }y^{N-2} ( y-x ) \bigr]\\ &\quad=\varphi^{\prime} ( x ) y^{N-2} ( y-x ) ^{2},\quad N=2,3,4, \ldots. \end{aligned} $$
 □
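Identity (2.1) can also be verified numerically. Below φ = exp is an illustrative test function (so \(\varphi^{\prime}=\varphi\) and \(( x^{m}\varphi ( x ) ) ^{\prime}= ( mx^{m-1}+x^{m} ) e^{x}\)), and the evaluation points are arbitrary:

```python
import math

phi = math.exp
dphi = math.exp   # phi' = phi for the exponential

def d_xm_phi(m, x):
    # derivative of x**m * phi(x) for phi = exp
    if m == 0:
        return math.exp(x)
    return (m * x**(m - 1) + x**m) * math.exp(x)

def lhs(N, x, y):
    return phi(x) * (y**(N - 1) - x**(N - 1)) + dphi(x) * y**(N - 1) * (y - x)

def rhs(N, x, y):
    return (d_xm_phi(N - 1, x) * (y - x)
            + (y - x)**2 * sum(y**(k - 1) * d_xm_phi(N - k - 1, x)
                               for k in range(1, N)))

x0, y0 = 1.5, 2.5
identity_holds = all(
    abs(lhs(N, x0, y0) - rhs(N, x0, y0)) < 1e-9 * abs(lhs(N, x0, y0))
    for N in range(2, 8))
assert identity_holds
```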

Now we are ready to state our next main result.

Theorem 2

Let μ be a probability measure on \(\Omega= (0,\infty)\), \(z=\int_{\Omega}y\,d\mu ( y ) >0\), and \(N=2,3,\ldots\) . Then the refined Jensen-type inequality
$$ \int_{\Omega}y^{\alpha}\,d\mu ( y ) -z^{\alpha}\geq \int _{\Omega } ( y-z ) ^{2}\sum _{k=1}^{N-1} ( \alpha-k ) y^{k-1}z^{\alpha-k-1}\,d\mu ( y ) $$
(2.5)
holds for any \(\alpha\geq N\). Moreover, for \(N-1<\alpha\leq N\) (2.5) holds in the reversed direction. In particular, for \(\alpha=N\) we have equality in (2.5).

Proof

A convex differentiable function \(\varphi ( x ) \) is characterized by
$$ \varphi ( y ) -\varphi ( x ) \geq\varphi^{\prime } ( x ) ( y-x ) $$
and this inequality holds in the reversed direction if \(\varphi ( x ) \) is concave. For \(\varphi ( x ) =x\) we have equality. Therefore, when \(\varphi ( x ) \) is convex we obtain
$$ \varphi ( y ) y^{N-1}-\varphi ( x ) x^{N-1}\geq \varphi ( x ) \bigl( y^{N-1}-x^{N-1} \bigr) +\varphi^{\prime} ( x ) y^{N-1} ( y-x ) ,\quad x,y\geq0. $$
Hence in view of Lemma 1 we find that
$$ \varphi ( y ) y^{N-1}-\varphi ( x ) x^{N-1}\geq \bigl( x^{N-1}\varphi ( x ) \bigr) ^{\prime} ( y-x ) + ( y-x ) ^{2}\sum_{k=1}^{N-1}y^{k-1} \bigl( x^{N-k-1}\varphi ( x ) \bigr) ^{\prime}. $$
By using this inequality with the convex function \(\varphi ( x ) =x^{\alpha-N+1}\), \(x\geq0\), \(\alpha\geq N\), we obtain
$$ y^{\alpha}-x^{\alpha}\geq\alpha x^{\alpha-1} ( y-x ) + ( y-x ) ^{2}\sum_{k=1}^{N-1} ( \alpha-k ) y^{k-1}x^{\alpha -k-1}. $$
By now choosing \(x=z\), integrating over Ω, and using the fact that \(\int_{\Omega} ( y-z )\,d\mu ( y ) =0\) we obtain (2.5). For the reversed inequality we use the concave function \(\varphi ( x ) =x^{\alpha-N+1}\), \(( N-1 ) <\alpha\leq N\), and all inequalities above reverse. For \(\alpha=N\) we get an equality, so the proof is complete. □

Corollary 1

Let \(x_{i}\geq0\), \(\alpha_{i}\geq0\), \(i=1,2,\ldots,m\), \(\sum_{i=1}^{m}\alpha_{i}=1\), and \(\overline{x}=\sum_{i=1}^{m}\alpha _{i}x_{i}\). Then, for \(N=2,3,\ldots\) ,
$$ \sum_{i=1}^{m}\alpha_{i}x_{i}^{\alpha}- \overline{x}^{\alpha}\geq \sum_{i=1}^{m} \alpha_{i} ( x_{i}-\overline{x} ) ^{2}\sum _{k=1}^{N-1} ( \alpha-k ) x_{i}^{k-1} \overline {x}^{\alpha -k-1} $$
(2.6)
holds for any \(\alpha\geq N\). Moreover, for \(N-1<\alpha\leq N\), (2.6) holds in the reversed direction. In particular, for \(\alpha =N\), (2.6) reduces to an equality.
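A numerical sketch of (2.6), including the equality case \(\alpha=N\) and the reversal for \(N-1<\alpha<N\); the weights and nodes are illustrative assumptions:

```python
alpha_w = [0.2, 0.3, 0.5]         # probability weights
x = [1.0, 2.0, 4.0]
xbar = sum(a * v for a, v in zip(alpha_w, x))

def gap(expo):
    # left-hand side of (2.6): discrete Jensen gap for the power function
    return sum(a * v**expo for a, v in zip(alpha_w, x)) - xbar**expo

def bound(expo, N):
    # right-hand side of (2.6)
    return sum(a * (v - xbar)**2
               * sum((expo - k) * v**(k - 1) * xbar**(expo - k - 1)
                     for k in range(1, N))
               for a, v in zip(alpha_w, x))

N = 3
assert gap(3.7) >= bound(3.7, N)               # alpha >= N: inequality
assert abs(gap(3.0) - bound(3.0, N)) < 1e-9    # alpha = N: equality
assert gap(2.5) <= bound(2.5, N)               # N-1 < alpha < N: reversed
```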

Our final main result reads as follows.

Theorem 3

Let \(0< A\leq\infty\) and let \(\phi: ( 0,A ] \rightarrow \mathbb{R} \) have a Taylor expansion \(\phi ( x ) =\sum_{n=0}^{\infty }a_{n}x^{n}\), on \(( 0,A ] \). If μ is a probability measure on \(( 0,A ] \) and \(z=\int_{0}^{A}x\,d\mu ( x ) >0\), then
$$ \int_{0}^{A}\phi ( x )\,d\mu-\phi ( z ) =\sum _{n=2}^{\infty}a_{n} \int_{0}^{A} ( x-z ) ^{2}\sum _{k=1}^{n-1} ( n-k ) x^{k-1}z^{n-k-1}\,d\mu. $$
(2.7)

Proof

We note that
$$ \int_{0}^{A}\phi ( x )\,d\mu-\phi ( z ) = \int_{0}^{A}\sum_{n=0}^{\infty}a_{n} \bigl( x^{n}-z^{n} \bigr)\,d\mu =\sum _{n=0}^{\infty}a_{n} \int_{0}^{A} \bigl( x^{n}-z^{n} \bigr)\,d\mu. $$
Obviously, \(\int_{0}^{A} ( x^{n}-z^{n} )\,d\mu=0\), for \(n=0,1\), and hence (2.7) follows from the equality cases in (2.5) in Theorem 2, i.e. when \(\alpha=N=2,3,\ldots\) .

The proof is complete. □

Corollary 2

Let \(0< A\leq\infty\) and let \(\phi: [ 0,A ) \rightarrow \mathbb{R} \) have a Taylor expansion \(\phi ( x ) =\sum_{n=0}^{\infty}a_{n}x^{n}\) on \([ 0,A ) \). If \(\overline{x}=\sum_{i=1}^{m}\alpha_{i}x_{i}\), \(\sum_{i=1}^{m}\alpha_{i}=1\), \(0\leq \alpha_{i}\leq1\), \(0\leq x_{i}< A\), \(i=1,2,\ldots,m\), then
$$ J=\sum_{i=1}^{m}\alpha_{i}\phi ( x_{i} ) -\phi ( \overline{x} ) =\sum _{n=2}^{\infty}a_{n} \sum _{i=1}^{m}\alpha _{i} ( x_{i}-\overline{x} ) ^{2} \sum_{k=1}^{n-1} ( n-k ) x_{i}^{k-1}\overline{x}^{n-k-1}. $$
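As a sanity check of this discrete identity (note the innermost k-sum depends on \(x_{i}\) and therefore sits inside the i-sum), take the illustrative choice \(\phi ( x ) =e^{x}\), \(a_{n}=1/n!\), and truncate the series:

```python
import math

alpha_w = [0.2, 0.3, 0.5]   # probability weights
x = [0.5, 1.0, 2.0]
xbar = sum(a * v for a, v in zip(alpha_w, x))

# Jensen gap for phi = exp
lhs = sum(a * math.exp(v) for a, v in zip(alpha_w, x)) - math.exp(xbar)

# Series side, truncated at n = 60 (the neglected tail is far below 1e-10 here)
rhs = 0.0
for n in range(2, 61):
    a_n = 1.0 / math.factorial(n)
    rhs += a_n * sum(a * (v - xbar)**2
                     * sum((n - k) * v**(k - 1) * xbar**(n - k - 1)
                           for k in range(1, n))
                     for a, v in zip(alpha_w, x))

assert abs(lhs - rhs) < 1e-10
```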

Corollary 3

Let \(0< a< b<\infty\), and μ be a probability measure on \(( a,b ) \). Then we have the following estimate of the Jensen gap \(J_{N}:=\int_{a}^{b}x^{N}\,d\mu- ( \int_{a}^{b}x\,d\mu ) ^{N}\), \(N=2,3,\ldots\) :
$$ \frac{N ( N-1 ) }{2}a^{N-2}J_{2}\leq J_{N}\leq \frac{N ( N-1 ) }{2}b^{N-2}J_{2}. $$
(2.8)

Proof

We use Theorem 2 with \(\alpha=N\) and find that
$$ J_{N}= \int_{a}^{b} ( x-z ) ^{2}\sum _{k=1}^{N-1} ( N-k ) x^{k-1}z^{N-k-1}\,d\mu. $$
We note that if \(a< x< b\), then \(a< z< b \) so that \(a^{N-2}\leq x^{k-1}z^{N-k-1}\leq b^{N-2}\). Moreover, \(\sum_{k=1}^{N-1} ( N-k ) =\frac{N ( N-1 ) }{2}\) and
$$ \int_{a}^{b} ( x-z ) ^{2}\,d\mu= \int_{a}^{b}x^{2}\,d\mu- \biggl( \int_{a}^{b}x\,d\mu \biggr) ^{2}=J_{2}, $$
so (2.8) is proved. □
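The two-sided bound (2.8) can be observed numerically for a hypothetical two-point probability measure on \(( a,b ) = ( 1,3 ) \):

```python
# Two-point probability measure supported inside (a, b) = (1, 3)
a_pt, b_pt = 1.0, 3.0
weights = [0.5, 0.5]
pts = [1.5, 2.5]
N = 4

z = sum(w * p for w, p in zip(weights, pts))
J2 = sum(w * p**2 for w, p in zip(weights, pts)) - z**2
JN = sum(w * p**N for w, p in zip(weights, pts)) - z**N

# (2.8): N(N-1)/2 * a**(N-2) * J2 <= J_N <= N(N-1)/2 * b**(N-2) * J2
coef = N * (N - 1) / 2
assert coef * a_pt**(N - 2) * J2 <= JN <= coef * b_pt**(N - 2) * J2
```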

Remark 3

For the case \(N=2\) both inequalities in (2.8) reduce to equalities. Moreover, for the discrete case we have: If \(0< a< x_{i}< b\), \(\alpha_{i}\geq0\), \(i=1,2,\ldots,m\), \(\sum_{i=1}^{m}\alpha_{i}=1\), \(\overline{x}=\sum_{i=1}^{m}\alpha_{i}x_{i}\), then, for \(N=2,3,\ldots\) ,
$$\begin{aligned} &\frac{N ( N-1 ) }{2}a^{N-2} \Biggl( \sum_{i=1}^{m}\alpha_{i}x_{i}^{2}-\overline{x}^{2} \Biggr) \\ &\quad\leq\sum_{i=1}^{m}\alpha_{i}x_{i}^{N}- \overline{x}^{N}\leq\frac{N ( N-1 ) }{2}b^{N-2} \Biggl( \sum _{i=1}^{m}\alpha_{i}x_{i}^{2}- \overline{x}^{2} \Biggr) . \end{aligned}$$
(2.9)

3 Final remarks and examples

In this section we present some recent interesting results of Dragomir [11] and Walker [10]. Moreover, we point out the corresponding special cases of our results and compare these results with those of [11] and [10].

Example 4

In Dragomir’s paper [11], Theorem 2, it was proved that for
$$ \phi ( x ) =\sum_{n=0}^{\infty}a_{n}x^{n},\quad a_{n}\geq0, $$
(3.1)
which converges on \(0< x< R\leq\infty\), the following lower bound of the Jensen gap holds:
$$\begin{aligned} &\int_{\Omega}\phi\circ f\,d\mu-\phi \biggl( \int_{\Omega}f\,d\mu \biggr) \\ &\quad\geq\frac{1}{2} \biggl[ \int_{\Omega}f^{2}\,d\mu- \biggl( \int_{\Omega }f\,d\mu \biggr) ^{2} \biggr] \frac{\phi^{\prime} ( \int_{\Omega}f\,d\mu ) -\phi^{\prime} ( 0 ) }{\int_{\Omega}f\,d\mu}, \end{aligned}$$
(3.2)
when \(( \Omega,\mu ) \) is a probability measure space, \(f\geq 0\), and f, \(f^{2}\), and \(\phi\circ f\) are integrable on Ω and \(\int_{\Omega}f\,d\mu>0\).

Example 5

In Theorem 1 we proved that for convex increasing functions we get the inequalities
$$\begin{aligned} &\int_{\Omega}\phi\circ f\,d\mu-\phi \biggl( \int_{\Omega}f\,d\mu \biggr) \\ &\quad\geq \biggl[ \int_{\Omega}f^{2}\,d\mu- \biggl( \int_{\Omega}f\,d\mu \biggr) ^{2} \biggr] \biggl( \frac{\phi ( \int_{\Omega}f\,d\mu ) -\phi ( 0 ) }{\int_{\Omega}f\,d\mu} \biggr) ^{\prime}\geq0. \end{aligned}$$
(3.3)
A function satisfying (3.1) is convex and increasing, so Theorem 1 applies, which gives the inequalities in (3.3).

Remark 4

It is easily computed that when ϕ is of the form (3.1), then
$$ \frac{1}{2}\frac{\phi^{\prime} ( \int_{\Omega}f\,d\mu ) -\phi ^{\prime} ( 0 ) }{\int_{\Omega}f\,d\mu}\leq \biggl( \frac{\phi ( \int_{\Omega}f\,d\mu ) -\phi ( 0 ) }{\int_{\Omega }f\,d\mu} \biggr) ^{\prime} $$
(3.4)
holds, and from this we conclude that our bound in (3.3), when (3.1) is satisfied, is stronger than Dragomir’s (3.2). Indeed,
$$ \frac{1}{2}\frac{\phi^{\prime} ( z ) -\phi^{\prime} ( 0 ) }{z}=\sum_{n=0}^{\infty} \frac{1}{2} ( n+2 ) a_{n+2}z^{n} $$
and
$$ \biggl( \frac{\phi ( \int_{\Omega}f\,d\mu ) -\phi ( 0 ) }{\int_{\Omega}f\,d\mu} \biggr) ^{\prime}=\sum _{n=0}^{\infty} ( n+1 ) a_{n+2}z^{n}, $$
and our claim is obvious.
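The comparison can also be observed numerically; with the illustrative choice \(\phi ( x ) =e^{x}\) both sides of (3.4) have closed forms, and the bound (3.3) indeed dominates Dragomir’s bound (3.2):

```python
import math

alpha_w = [0.2, 0.3, 0.5]   # probability weights
f = [1.0, 2.0, 4.0]
z = sum(a * v for a, v in zip(alpha_w, f))
var = sum(a * v**2 for a, v in zip(alpha_w, f)) - z**2

gap = sum(a * math.exp(v) for a, v in zip(alpha_w, f)) - math.exp(z)

# Dragomir's bound (3.2):  (1/2) (phi'(z) - phi'(0)) / z * variance term
dragomir = 0.5 * (math.exp(z) - 1.0) / z * var
# The bound (3.3):  ((phi(z) - phi(0)) / z)' * variance term, for phi = exp
ours = (z * math.exp(z) - math.exp(z) + 1.0) / z**2 * var

assert gap >= ours >= dragomir >= 0
```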

Example 6

In Theorem 3.1 in Walker’s paper [10], a lower bound for the Jensen gap is given for a function ϕ that satisfies (3.1):
$$ \int_{\Omega}\phi ( s )\,d\mu ( s ) -\phi \biggl( \int_{\Omega}s\,d\mu ( s ) \biggr) \geq\mu ( 1,R ) \tau \frac{1}{2}\sum_{n=2}^{\infty}a_{n}n ( n-1 ) , $$
where
$$ \tau= \int_{\Omega}s^{2}\,d\mu_{2} ( s ) - \biggl( \int_{\Omega }s\,d\mu_{2} ( s ) \biggr) ^{2} $$
when μ is a probability measure defined on \(\Omega= ( 0,R ) \) and \(\mu_{2}\) is μ restricted and normalized to \(( 1,R ) \).
More generally, in Section 4 in [10], \(\mu ( 1,R ) \) was replaced by \(\mu ( a,R ) \) and we have
$$ \int_{\Omega}\phi ( s )\,d\mu ( s ) -\phi \biggl( \int_{\Omega}s\,d\mu ( s ) \biggr) \geq\mu ( a,R ) \tau \frac{1}{2}\sum_{n=2}^{\infty}a^{n}a_{n}n ( n-1 ) , $$
(3.5)
where
$$ \tau= \int_{\Omega}s^{2}\,d\mu_{a} ( s ) - \biggl( \int_{\Omega }s\,d\mu_{a} ( s ) \biggr) ^{2}, $$
when \(\mu_{a}\) is μ restricted and normalized to \(\Omega= ( a,R ) \).

From Corollary 3 and Remark 3 we easily get the following.

Example 7

Let \(0< A\leq\infty\) and let \(\phi: ( 0,A ] \rightarrow \mathbb{R} \) have a Taylor expansion \(\phi ( x ) =\sum_{n=0}^{\infty }a_{n}x^{n}\), \(a_{n}\geq0\), \(n=2,3,\ldots\) , on \(( 0,A ] \). If μ is a probability measure on \(( 0,A ] \), \(0\leq a< b\leq A \), and \(z=\int_{0}^{A}x\,d\mu ( x ) >0\), then
$$ \sum_{n=2}^{\infty}a_{n} \frac{n ( n-1 ) }{2}a^{n-2}J_{2}\leq J ( \phi,\mu ) \leq\sum _{n=2}^{\infty}a_{n} \frac{n ( n-1 ) }{2}b^{n-2}J_{2}. $$
(3.6)
Moreover, for the discrete case we have: if \(0< a< x_{i}< b\), \(\alpha _{i}\geq 0\), \(i=1,2,\ldots,m\), \(\sum_{i=1}^{m}\alpha_{i}=1\), and \(\overline{x}=\sum_{i=1}^{m}\alpha_{i}x_{i}\), then
$$\begin{aligned} &\sum_{n=2}^{\infty}a_{n} \frac{n ( n-1 ) }{2}a^{n-2} \Biggl( \sum_{i=1}^{m} \alpha_{i}x_{i}^{2}-\overline{x}^{2} \Biggr) \\ &\quad\leq\sum_{i=1}^{m}\alpha_{i} \bigl( \phi ( x_{i} ) -\phi ( \overline{x} ) \bigr) \leq\sum _{n=2}^{\infty}a_{n}\frac{n ( n-1 ) }{2}b^{n-2} \Biggl( \sum_{i=1}^{m}\alpha _{i}x_{i}^{2}-\overline{x}^{2} \Biggr) . \end{aligned}$$

Remark 5

The lower bound in (3.5) coincides with that in (3.6) when \(a=1\). The lower bound in (3.6) is better than that in (3.5) when \(a<1\), while Walker’s bound (3.5) is better for \(a>1\). It does not seem possible to derive an upper bound like that in (3.6) by using the method in [10].

Declarations

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, University of Haifa
(2)
Department of Engineering Sciences and Mathematics, Luleå University of Technology
(3)
UiT The Arctic University of Norway

References

  1. Jensen, JLWV: Om konvexe Funktioner og Uligeder mellem Middelvaerdier. Nyt Tidsskr. Math. 16B, 49-69 (1905) (in Danish) Google Scholar
  2. Jensen, JLWV: Sur les fonctions convexes et les inégalités entre les moyennes. Acta Math. 30, 175-193 (1906) (in French) View ArticleMathSciNetMATHGoogle Scholar
  3. Persson, L-E, Samko, N: Inequalities and Convexity, Operator Theory: Advances and Applications, vol. 242, 29 pp. Birkhäuser, Basel (2014) MATHGoogle Scholar
  4. Abramovich, S, Jameson, G, Sinnamon, G: Refining of Jensen’s inequality. Bull. Math. Soc. Sci. Math. Roum. 47, 3-14 (2004) MathSciNetMATHGoogle Scholar
  5. Abramovich, S, Persson, L-E: Some new scales of refined Hardy type inequalities via functions related to superquadracity. Math. Inequal. Appl. 16, 679-695 (2013) MathSciNetMATHGoogle Scholar
  6. Abramovich, S, Persson, L-E, Samko, N: On γ-quasiconvexity, superquadracity and two sided reversed Jensen type inequalities. Math. Inequal. Appl. 18(2), 615-627 (2015) MathSciNetMATHGoogle Scholar
  7. Oguntuase, J, Persson, L-E: Refinement of Hardy’s inequalities via superquadratic and subquadratic functions. J. Math. Anal. Appl. 339, 1305-1312 (2008) View ArticleMathSciNetMATHGoogle Scholar
  8. Abramovich, S, Persson, L-E: Some new refined Hardy type inequalities with breaking points \(p=2\) and \(p=3\). In: Proceedings of the IWOTA 2011, vol. 236, pp. 1-10. Birkhäuser, Basel (2014) Google Scholar
  9. Abramovich, S, Persson, L-E, Samko, N: Some new scales of refined Jensen and Hardy type inequalities. Math. Inequal. Appl. 17, 1105-1114 (2014) MathSciNetMATHGoogle Scholar
  10. Walker, SG: On a lower bound for Jensen inequality. SIAM J. Math. Anal. (to appear) Google Scholar
  11. Dragomir, SS: Jensen integral inequality for power series with nonnegative coefficients and applications. RGMIA Res. Rep. Collect. 17, 42 (2014) Google Scholar

Copyright

© Abramovich and Persson 2016