
How sharp is the Jensen inequality?


Abstract

We study how sharp the Jensen inequality is, that is, the discrepancy between \(\int_{0}^{1} \varphi(f(x)) \,dx\) and \(\varphi ( \int_{0}^{1} f(x) \,dx )\), φ being convex and \(f(x)\) a nonnegative \(L^{1}\) function. Such an estimate can be useful to provide error bounds for certain approximations in \(L^{p}\), or in Orlicz spaces, where convex modular functionals are often involved. Estimates for the case of \(C^{2}\) functions, as well as for merely Lipschitz continuous convex functions φ, are established. Some examples are given to illustrate how sharp our results are, and a comparison is made with some other estimates existing in the literature. Finally, some applications involving the Gamma function are obtained.


Introduction

The celebrated Jensen inequality,

$$ \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \leq\int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx, $$

valid for every real-valued convex function \(\varphi: I \to{\mathbf {R}}\), where \(I \subset {\mathbf{R}}\) is a bounded interval, and for every real-valued nonnegative function \(f: [0, 1] \to I\), \(f \in L^{1}(0, 1)\), plays an important role in convex analysis. It was established by the Danish mathematician J.L.W.V. Jensen in [1]; see also [2, 3]. Among other things, it can be used to generalize the triangle inequality. In addition, the Jensen inequality is an important tool in \(L^{p}\)-spaces and in connection with modular and Orlicz spaces; see, e.g., [4, 5]. In particular, Orlicz spaces are often generated by convex φ-functions [6], and the convexity allows one to prove several important properties of such spaces.
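As a numerical sanity check of (1), here is a short sketch; the choices \(\varphi(y) = y^{2}\), \(f(x) = x\), and the simple midpoint quadrature are ours, for illustration only:

```python
import math

# Numerical check of the Jensen inequality (1) on [0, 1], with the
# illustrative (hypothetical) choices phi(y) = y^2 (convex) and f(x) = x.
phi = lambda y: y * y
f = lambda x: x

def integrate(g, n=10_000):
    """Composite midpoint rule on [0, 1]."""
    h = 1.0 / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

lhs = phi(integrate(f))               # phi( int f ) = (1/2)^2 = 1/4
rhs = integrate(lambda x: phi(f(x)))  # int phi(f)   = int x^2 dx = 1/3
assert lhs <= rhs                     # Jensen: 1/4 <= 1/3
```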

Inequality (1) is written on the interval \([0, 1]\), but a more general version, on arbitrary intervals, can be promptly obtained by a linear change of the independent variable,

$$ \varphi \biggl( \int_{a}^{b} f(x) \,dx \biggr) \leq\frac{1}{b - a} \int_{a}^{b} \varphi\bigl((b - a) f(x)\bigr) \,dx. $$

Other forms, pertaining to discrete sums, probability, and other fields, are found in the literature; see, e.g., [2, 3, 7-9].

Inequality (1) reduces to an equality whenever either (i) φ is affine, or (ii) f is constant. In this paper, we assume that φ is strictly convex on I. Since (1) and (2) reduce to equalities when φ is affine, we expect that the discrepancy between the two sides of such inequalities depends on the departure of φ from the affine behavior. Such a departure has to be quantified in some way.

It is a natural question to ask how sharp the Jensen inequality can be, which amounts to estimating the difference

$$ \int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr). $$

Estimates for such a quantity can be useful to determine the degree of accuracy in the approximation by families of linear as well as nonlinear integral operators, in various settings, such as in the case of \(L^{p}\) convergence (see, e.g., [10]), or in the more general case of modular convergence in Orlicz spaces; see, e.g., [6, 1115].

In this paper, we derive several estimates for (3), considering the case of convex functions of class \(C^{2}\), as well as the case of merely Lipschitz continuous convex functions. We also provide a few examples for the purpose of illustration. Moreover, we compare our estimates with the bounds derived from some other results already known in the literature, in order to test their quality. Some applications involving the Gamma function are also made.

Some estimates

A first estimate for (3) can be established as follows. Assume, for simplicity, that φ is smooth, say a \(C^{2}\) function. We then expand \(\varphi(f(x))\) around a given value of f, say \(c = f(x_{0})\), which can be chosen arbitrarily in the domain I of φ, such that \(c \in I^{\circ}\), i.e., c is an interior point of I. Thus,

$$\varphi\bigl(f(x)\bigr) = \varphi(c) + \varphi'(c) \bigl[f(x) - c \bigr] + \frac{1}{2} \varphi''\bigl(c^{*}(x)\bigr) \bigl[f(x) - c\bigr]^{2}, $$

where \(c^{*}(x)\) is a suitable value between \(f(x)\) and \(f(x_{0}) = c\), hence a function of x. Then we have

$$\int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx = \varphi(c) + \varphi'(c) \int_{0}^{1} \bigl[f(x) - c\bigr] \,dx + \frac{1}{2} \int_{0}^{1} \varphi''\bigl(c^{*}(x)\bigr) \bigl[f(x) - c \bigr]^{2} \,dx, $$


$$\begin{aligned}& \varphi \biggl(\int_{0}^{1} f(x) \,dx \biggr) \\& \quad = \varphi(c) + \varphi'(c) \biggl( \int_{0}^{1} f(x) \,dx - c \biggr) + \frac{1}{2} \varphi'' \bigl(c^{**}\bigr) \biggl(\int_{0}^{1} f(x) \,dx - c \biggr)^{2}, \end{aligned}$$

where \(c^{**}\) is a suitable number between c and \(\int_{0}^{1} f(x) \,dx\). Therefore,

$$\begin{aligned} 0 &\leq\int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \\ &= \frac{1}{2} \int_{0}^{1} \varphi''\bigl(c^{*}(x)\bigr) \bigl[f(x) - c \bigr]^{2} \,dx - \frac{1}{2} \varphi'' \bigl(c^{**}\bigr) \biggl( \int_{0}^{1} f(x) \,dx - c \biggr)^{2}. \end{aligned}$$

Thus, we obtain immediately

$$\begin{aligned} 0 \leq& \int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \\ \leq&\frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \cdot \Vert f - c \Vert _{L^{2}}^{2} + \frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \cdot \Vert f - c \Vert _{L^{1}}^{2} \\ =& \frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \cdot \bigl[ \Vert f - c \Vert _{L^{2}}^{2} + \Vert f - c \Vert _{L^{1}}^{2} \bigr], \end{aligned}$$

where \(I_{2}\) denotes the domain of \(\varphi''\).
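To illustrate, here is a quick numerical check of this last bound, under the hypothetical choices \(\varphi(y) = e^{y}\), \(f(x) = x\), and \(c = 1/2\), so that \(\Vert \varphi'' \Vert_{L^{\infty}(0,1)} = e\):

```python
import math

# Check of the first estimate: the Jensen gap is bounded by
#   (1/2) * sup|phi''| * ( ||f - c||_{L^2}^2 + ||f - c||_{L^1}^2 ).
# Hypothetical test case: phi(y) = exp(y), f(x) = x on [0, 1], c = 1/2.
def integrate(g, n=20_000):
    """Composite midpoint rule on [0, 1]."""
    h = 1.0 / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

c = 0.5
gap = integrate(math.exp) - math.exp(integrate(lambda x: x))  # (e-1) - e^{1/2}
l2sq = integrate(lambda x: (x - c) ** 2)        # ||f - c||_{L^2}^2 = 1/12
l1sq = integrate(lambda x: abs(x - c)) ** 2     # ||f - c||_{L^1}^2 = 1/16
bound = 0.5 * math.e * (l2sq + l1sq)            # sup of phi'' on [0, 1] is e
assert 0.0 <= gap <= bound                      # gap ~ 0.0696, bound ~ 0.1982
```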

Since \(\varphi\in C^{2}\), the ‘second mean-value theorem for integrals’ can be used to write

$$ \int_{0}^{1} \varphi'' \bigl(c^{*}(x)\bigr) \bigl[f(x) - c\bigr]^{2} \,dx = \varphi''( \tilde{c}) \int_{0}^{1} \bigl[f(x) - c \bigr]^{2} \,dx, $$

for a suitable \(\tilde{c} \in I_{2} \subseteq I\), \(I_{2}\) denoting the domain of \(\varphi''\). If, in addition, \(\varphi''\) is Lipschitz continuous with Lipschitz constant \(L_{\varphi''}\), we obtain

$$\begin{aligned}& \frac{1}{2} \biggl\{ \varphi''(\tilde{c}) \int_{0}^{1} \bigl(f(x) - c\bigr)^{2} \,dx - \varphi''\bigl(c^{**}\bigr) \biggl[ \biggl(\int_{0}^{1} \bigl(f(x) - c\bigr) \,dx \biggr)^{2} - \int_{0}^{1} \bigl(f(x) - c \bigr)^{2} \,dx \biggr] \\& \qquad {} - \varphi''\bigl(c^{**}\bigr) \int_{0}^{1} \bigl(f(x) - c\bigr)^{2} \,dx\biggr\} \\& \quad = \frac{1}{2} \biggl\{ \bigl[\varphi''( \tilde{c}) - \varphi ''\bigl(c^{**}\bigr) \bigr] \int_{0}^{1} \bigl(f(x) - c \bigr)^{2} \,dx \\& \qquad {} + \varphi''\bigl(c^{**}\bigr) \biggl[ \int_{0}^{1} \bigl(f(x) - c \bigr)^{2} \,dx - \biggl(\int_{0}^{1} \bigl(f(x) - c\bigr) \,dx \biggr)^{2} \biggr] \biggr\} \\& \quad \leq \frac{1}{2} \biggl\{ \bigl\vert \varphi''( \tilde{c}) - \varphi ''\bigl(c^{**}\bigr) \bigr\vert \int_{0}^{1} \bigl(f(x) - c \bigr)^{2} \,dx + \sup_{I_{2}} \varphi'' \cdot \bigl[ \| f - c \|_{L^{2}}^{2} + \| f - c \|_{L^{1}}^{2} \bigr] \biggr\} \\& \quad \leq \frac{1}{2} L_{\varphi''} \bigl\vert \tilde{c} - c^{**} \bigr\vert \cdot\| f - c \|_{L^{2}}^{2} + \frac{1}{2} \sup_{I_{2}} \varphi'' \bigl[ \| f - c \|_{L^{2}}^{2} + \| f - c \|_{L^{1}}^{2} \bigr]. \end{aligned}$$

Recall that \(\varphi''(x) > 0\) for every \(x \in I_{2}\), φ being strictly convex on I. The estimate in (7) is quite elegant but does not seem very useful in practice, since it depends on some unknown constants, such as \(\tilde{c}\) and \(c^{**}\). Note, however, that \(| \tilde{c} - c^{**} | \leq |I_{2}| \leq |I|\).

On the other hand, we can also write, from (5) and (6),

$$\begin{aligned} 0 &\leq \int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \\ &\leq \frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \cdot\| f - c \|_{L^{2}}^{2} - \frac{1}{2} \inf_{I_{2}} \varphi'' \biggl[ \int_{0}^{1}\bigl( f(x) - c\bigr) \,dx \biggr]^{2}. \end{aligned}$$

Using the Cauchy-Schwarz inequality, instead, we have from (4)

$$\begin{aligned} 0 &\leq \int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \\ &\leq \frac{1}{2} \bigl\langle \bigl(\varphi'' \circ c^{*}\bigr) (\cdot), \bigl(f(\cdot) - c\bigr)^{2} \bigr\rangle + \frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \cdot \Vert f - c \Vert _{L^{1}}^{2} \\ &\leq\frac{1}{2} \bigl\Vert \varphi'' \circ c^{*} \bigr\Vert _{L^{2}} \bigl\Vert (f - c)^{2} \bigr\Vert _{L^{2}} + \frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \Vert f - c \Vert _{L^{1}}^{2} \\ &= \frac{1}{2} \bigl[ \bigl\Vert \varphi'' \circ c^{*} \bigr\Vert _{L^{2}} \cdot \Vert f - c \Vert _{L^{4}}^{2} + \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \Vert f - c \Vert _{L^{1}}^{2} \bigr]. \end{aligned}$$


where

$$\bigl\Vert \varphi'' \circ c^{*} \bigr\Vert _{L^{2}} \equiv\bigl\Vert \varphi'' \bigl(c^{*}(\cdot)\bigr) \bigr\Vert _{L^{2}} \equiv\bigl\Vert \varphi''\bigl(c^{*}( \cdot)\bigr) \bigr\Vert _{L^{2}(0,1)} . $$

In this case, we should have \(c^{*}(x) \in I_{2} \subseteq I\), for every \(x \in[0, 1]\), but this is obviously true whenever \(I_{2} = I\) and \(\varphi\in C^{2}(I)\). Hence,

$$\int_{0}^{1} \bigl[\varphi'' \bigl(c^{*}(x)\bigr)\bigr]^{2} \,dx \leq\int _{I_{2}} \bigl[\varphi''(x) \bigr]^{2} \,dx \equiv\bigl\Vert \varphi'' \bigr\Vert _{L^{2}(I_{2})}^{2}, $$

so that

$$\begin{aligned} 0 &\leq \int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \\ &\leq\frac{1}{2} \bigl[ \bigl\Vert \varphi'' \bigr\Vert _{L^{2}(I_{2})} \bigl\Vert f(\cdot) - c \bigr\Vert _{L^{2}}^{2} + \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \bigl\Vert f(\cdot) - c \bigr\Vert _{L^{1}}^{2} \bigr]. \end{aligned}$$
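A small numerical sketch of this \(L^{4}\)-based bound, again under the hypothetical choices \(\varphi(y) = e^{y}\), \(f(x) = x\), \(c = 1/2\), with \(I_{2} = [0, 1]\):

```python
import math

# Check of the Cauchy-Schwarz-based estimate: the Jensen gap is bounded by
#   (1/2) [ ||phi''||_{L^2(I_2)} ||f - c||_{L^4}^2
#           + ||phi''||_{L^inf(I_2)} ||f - c||_{L^1}^2 ].
# Hypothetical test case: phi(y) = exp(y), f(x) = x, c = 1/2, I_2 = [0, 1].
def integrate(g, n=20_000):
    """Composite midpoint rule on [0, 1]."""
    h = 1.0 / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

c = 0.5
gap = integrate(math.exp) - math.exp(0.5)        # phi( int f ) = e^{1/2}
phi2_L2 = math.sqrt(integrate(lambda x: math.exp(x) ** 2))  # ||phi''||_{L^2}
l4sq = integrate(lambda x: (x - c) ** 4) ** 0.5  # ||f - c||_{L^4}^2 = (1/80)^{1/2}
l1sq = integrate(lambda x: abs(x - c)) ** 2      # ||f - c||_{L^1}^2 = 1/16
bound = 0.5 * (phi2_L2 * l4sq + math.e * l1sq)
assert 0.0 <= gap <= bound                       # gap ~ 0.0696, bound ~ 0.1849
```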

Clearly, one may try to optimize (i.e., to minimize) the right-hand side of each estimate, suitably choosing the value of the constant c, which is only constrained to belong to the range of \(f(x)\).

Remark 2.1

Note that the estimate (10) can be generalized by using the Hölder inequality in place of the Cauchy-Schwarz inequality. We recall that, for all \(1 \leq p, q \leq+\infty\) with \(1/p + 1/q = 1\), and for all measurable real- or complex-valued functions \(f \in L^{p}\) and \(g \in L^{q}\), the relation

$$\int_{D} \bigl\vert f(t) g(t)\bigr\vert \,dt \equiv\| f g \|_{L^{1}} \leq\|f \|_{L^{p}} \| g \|_{L^{q}} $$

holds. If \(\varphi'' \circ c^{*} \in L^{p}\) and \(f \in L^{2 q}\), we obtain

$$\begin{aligned} \int_{0}^{1} \varphi'' \bigl(c^{*}(x)\bigr) \bigl[f(x) -c \bigr]^{2} \,dx &\equiv\bigl\Vert \bigl(\varphi'' \circ c^{*}\bigr) \cdot\bigl[f(\cdot) - c \bigr]^{2} \bigr\Vert _{L^{1}} \leq\bigl\Vert \varphi'' \circ c^{*}\bigr\Vert _{L^{p}} \cdot \bigl\Vert (f -c )^{2} \bigr\Vert _{L^{q}} \\ &= \bigl\Vert \varphi'' \circ c^{*} \bigr\Vert _{L^{p}} \cdot\| f - c \|_{L^{2 q}}^{2}. \end{aligned}$$

From (4) and by the Hölder inequality, we have

$$\begin{aligned} 0 &\leq \int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \\ &\leq\frac{1}{2} \bigl\Vert \varphi'' \circ c^{*} \bigr\Vert _{L^{p}} \bigl\Vert (f - c)^{2} \bigr\Vert _{L^{q}} + \frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \Vert f - c \Vert _{L^{1}}^{2} \\ &= \frac{1}{2} \bigl[ \bigl\Vert \varphi'' \circ c^{*} \bigr\Vert _{L^{p}} \cdot \Vert f - c \Vert _{L^{2q}}^{2} + \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \Vert f - c \Vert _{L^{1}}^{2} \bigr]. \end{aligned}$$

Now, observing that

$$\bigl\Vert \varphi'' \circ c^{*} \bigr\Vert ^{p}_{L^{p}} = \int_{0}^{1} \bigl[\varphi''\bigl(c^{*}(x)\bigr) \bigr]^{p} \,dx \leq\int_{I_{2}} \bigl[ \varphi''(x)\bigr]^{p} \,dx \equiv\bigl\Vert \varphi'' \bigr\Vert _{L^{p}(I_{2})}^{p}, $$

and assuming, as before, that \(c^{*}(x) \in I_{2} \subseteq I\) for every \(x \in [0, 1]\) (which certainly holds whenever \(I_{2} = I\)), we obtain, finally,

$$\begin{aligned} 0 &\leq \int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \\ &\leq\frac{1}{2} \bigl[ \bigl\Vert \varphi'' \bigr\Vert _{L^{p}(I_{2})} \cdot\| f - c \| _{L^{2q}}^{2} + \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(I_{2})} \| f - c \|_{L^{1}}^{2} \bigr]. \end{aligned}$$
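A quick numerical check of the Hölder inequality itself, with the illustrative (hypothetical) choices \(f(t) = t\), \(g(t) = 1 - t\) and the conjugate pair \(p = 3\), \(q = 3/2\):

```python
import math

# Numerical check of the Hoelder inequality on [0, 1]:
#   || f g ||_{L^1} <= ||f||_{L^p} ||g||_{L^q},  1/p + 1/q = 1.
# Hypothetical test case: f(t) = t, g(t) = 1 - t, p = 3, q = 3/2.
def integrate(g, n=20_000):
    """Composite midpoint rule on [0, 1]."""
    h = 1.0 / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

p, q = 3.0, 1.5                                       # conjugate exponents
lhs = integrate(lambda t: t * (1 - t))                # ||fg||_{L^1} = 1/6
f_p = integrate(lambda t: t ** p) ** (1 / p)          # = (1/4)^{1/3}
g_q = integrate(lambda t: (1 - t) ** q) ** (1 / q)    # = (2/5)^{2/3}
assert lhs <= f_p * g_q                               # 1/6 <= ~0.342
```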

Another estimate can be established under weaker conditions on φ. When φ is (strictly convex but) merely Lipschitz continuous on every bounded set \(J \subseteq{\mathbf{R}}\), with Lipschitz constant \(L_{\varphi}\) (which, in general, depends on J), we have in (3)

$$\begin{aligned} 0 &\leq \int_{0}^{1} \varphi\bigl(f(x)\bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) = \int_{0}^{1} \varphi\bigl(f(t)\bigr) \,dt - \int ^{1}_{0} \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \,dt \\ &= \int_{0}^{1} \biggl[ \varphi\bigl(f(t)\bigr) - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \biggr] \,dt \leq L_{\varphi} \int_{0}^{1} \biggl\vert f(t) - \int_{0}^{1} f(x) \,dx \biggr\vert \,dt \\ &\equiv L_{\varphi} \bigl\Vert f - \Vert f \Vert _{L^{1}} \bigr\Vert _{L^{1}}. \end{aligned}$$
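To illustrate this Lipschitz-based estimate, here is a sketch with the hypothetical choices \(\varphi(y) = |y - 1/2|\) (Lipschitz with \(L_{\varphi} = 1\), though only convex, not strictly convex) and \(f(x) = x\); in this case the bound is attained:

```python
import math

# Check of the Lipschitz-based estimate: the Jensen gap is bounded by
#   L_phi * || f - ||f||_{L^1} ||_{L^1}.
# Hypothetical test case: phi(y) = |y - 1/2| (L_phi = 1), f(x) = x.
def integrate(g, n=20_000):
    """Composite midpoint rule on [0, 1] (n even, so the kink at 1/2 is a node)."""
    h = 1.0 / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

phi = lambda y: abs(y - 0.5)
mean_f = integrate(lambda x: x)                     # int f = 1/2
gap = integrate(phi) - phi(mean_f)                  # = 1/4 - 0
bound = 1.0 * integrate(lambda x: abs(x - mean_f))  # L_phi * ||f - 1/2||_{L^1} = 1/4
assert 0.0 <= gap <= bound + 1e-12                  # equality: the bound is sharp here
```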


Examples

It is easy to provide some simple examples showing how sharp the Jensen inequality can be in practice.

Example 3.1

Let \(\varphi(y) = - \sin\pi y\) and \(f(x) = x^{2}\). Then

$$- \int_{0}^{1} \sin \bigl( \pi x^{2} \bigr) \,dx $$

is not elementary, but we can use the Fresnel integral \(S(x) := \int_{0}^{x} \sin (\frac{\pi}{2} t^{2} ) \,dt\); see [16]. Setting \(x = t/\sqrt{2}\), we obtain \(S(\sqrt{2}) \approx0.7139\), and hence

$$\int_{0}^{1} \sin\bigl(\pi x^{2}\bigr) \,dx = \frac{1}{\sqrt{2}} \int_{0}^{\sqrt{2}} \sin \biggl( \frac{\pi t^{2}}{2} \biggr) \,dt = \frac{1}{\sqrt{2}} S(\sqrt{2}) \approx0.50480, $$


while

$$\sin \biggl( \pi\int_{0}^{1} x^{2} \,dx \biggr) = \sin\frac{\pi}{3} = \frac{\sqrt{3}}{2} \approx0.8660. $$

Thus, the true discrepancy between the two sides of the Jensen inequality is

$$E_{1} := - \int_{0}^{1} \sin \bigl( \pi x^{2} \bigr) \,dx + \sin \biggl( \pi\int_{0}^{1} x^{2} \,dx \biggr) \approx-0.50480 + 0.8660 \approx0.3612, $$

while inequality (5) would read

$$\begin{aligned} \frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}} \cdot \bigl[ \| f - c \|^{2}_{L^{2}} + \| f - c \|^{2}_{L^{1}} \bigr] =& \frac{\pi^{2}}{2} \biggl[ c^{2}-\frac{2}{3}c+\frac{1}{5} + \biggl( \frac{4}{3} c^{3/2} - c + 1/3 \biggr)^{2} \biggr] \\ =:& \frac{\pi^{2}}{2} g(c), \end{aligned}$$

and hence, since the function \(g(c)\) reaches its minimum at \(c \approx 0.31\) (evaluated with the help of Maple), we have \(E_{1} \leq\frac{\pi^{2}}{2} g(0.31) \approx1.3627\ldots\) .

Moreover, estimating \(E_{1}\) using inequality (8), and observing that \(\inf_{x\in[0,1]}|\varphi''(x)| = 0\), we obtain

$$E_{1} \leq\frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}} \cdot \| f - c \|^{2}_{L^{2}} \leq \frac{\pi^{2}}{2} \biggl( c^{2} - \frac{2}{3} c + \frac{1}{5} \biggr) =: \frac{\pi^{2}}{2} \cdot h(c). $$

Since \(h(c)\) attains its minimum on \([0,1]\) at \(c \approx0.33\), we finally obtain

$$E_{1} \leq\bigl(\pi^{2}/2\bigr) \cdot0.0889 \approx 0.43870 \ldots. $$

Such a value is very close to the true discrepancy \(E_{1}\). It is thus evident that, under the \(C^{2}\) regularity assumption on the convex function φ, the estimate in (8) turns out to be much better than that in (5). Indeed, the relative errors inherent in the estimates in (8) and (5) are, respectively, about 21% and 277%.
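The numbers of this example can be reproduced in a few lines; the following sketch (using a simple midpoint rule of our own choosing) recovers both \(E_{1}\) and the bound \((\pi^{2}/2) h(1/3)\):

```python
import math

# Reproducing Example 3.1: the true gap E_1 and the bound (pi^2/2) h(c),
# with h(c) = c^2 - 2c/3 + 1/5 minimized at c = 1/3, where h(1/3) = 4/45.
def integrate(g, n=20_000):
    """Composite midpoint rule on [0, 1]."""
    h = 1.0 / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

E1 = math.sin(math.pi / 3) - integrate(lambda x: math.sin(math.pi * x ** 2))
h_min = (1 / 3) ** 2 - (2 / 3) * (1 / 3) + 1 / 5   # h(1/3) = 4/45
bound = (math.pi ** 2 / 2) * h_min                 # ~ 0.4387
assert E1 <= bound                                 # E1 ~ 0.3612
```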

Remark 3.2

Generally speaking, using a kind of Cauchy mean-value theorem, some bounds could be established for the error made using the Jensen inequality; see, e.g., [17, 18]. In particular, from Theorem 6 of [18], the following estimate can easily be proved:

$$ \int_{0}^{1} \varphi\bigl(f(x) \bigr) \,dx - \varphi \biggl( \int_{0}^{1} f(x) \,dx \biggr) \leq\frac{1}{2} \sup_{I}\bigl\vert \varphi''\bigr\vert \biggl[ \int _{0}^{1} f^{2}(x) \,dx - \biggl( \int _{0}^{1} f(x) \,dx \biggr)^{2} \biggr], $$

where \(\varphi\in C^{2}(I)\). This inequality can be compared with that in (8). In particular, in the case of Example 3.1, estimating \(E_{1}\) using (13) we obtain \(E_{1} \leq\pi^{2} \cdot(2/45) \approx0.4386\ldots\) , which is essentially the same value obtained using the estimate in (8). Therefore, our estimate (8) provides a result comparable to that given by (13). Yet, these two results have been obtained by different methods.

Example 3.3

Consider the case of a nonsmooth convex function φ, say \(\varphi(x) := |x - (1/2)|^{p}\) on \([0,1]\), with \(1 \leq p < 2\), so that φ fails to be \(C^{2}\). Here, only the bound in (12) can be used to assess how sharp the Jensen inequality might be. Choosing the function \(f(x) = x\), we obtain the discrepancy

$$\begin{aligned} E_{2} &:= \int^{1}_{0} \biggl\vert x - \frac{1}{2} \biggr\vert ^{p} \,dx - \biggl\vert \int ^{1}_{0} x \,dx - \frac{1}{2} \biggr\vert ^{p} \\ &= \int^{1}_{0} \biggl\vert x - \frac{1}{2} \biggr\vert ^{p} \,dx = \frac{1}{(p + 1) 2^{p}}, \end{aligned}$$

while we have from (12) \(E_{2} \leq p/4\).

We stress that, in this case, the inequalities shown in (5), (8), and (13) cannot be applied to the functions of Example 3.3, since there \(\varphi\) is not of class \(C^{2}\), but merely Lipschitz continuous.
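A numerical check for the particular exponent \(p = 1\), i.e., \(\varphi(x) = |x - 1/2|\), where the gap and the bound \(p/4\) from (12) actually coincide:

```python
import math

# Numerical check of Example 3.3 for the exponent p = 1, i.e.
# phi(x) = |x - 1/2| and f(x) = x: here the gap E_2 equals the
# Lipschitz bound p/4 = 1/4, so the Lipschitz-based estimate is sharp.
def integrate(g, n=20_000):
    """Composite midpoint rule on [0, 1] (n even: kink at 1/2 is a node)."""
    h = 1.0 / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

p = 1
E2 = integrate(lambda x: abs(x - 0.5) ** p) - abs(0.5 - 0.5) ** p
assert E2 <= p / 4 + 1e-12     # E2 = 1/4 = p/4: equality
```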

Example 3.4

The Gamma function, Γ, is known to be strictly convex on the real positive halfline. Keep in mind that it attains its minimum at \(x_{0} \approx 1.4616\), with \(\Gamma(x_{0}) \approx0.8856\); see [19, 20].

Consider \(\varphi(y) := \Gamma(y)\), and \(f(x) := x + 1\) for \(0 \leq x \leq1\). Thus we have, by the Jensen inequality,

$$\Gamma \biggl(\int_{0}^{1} (x + 1) \,dx \biggr) \leq\int_{0}^{1} \Gamma(x + 1) \,dx, $$


that is,

$$\Gamma \biggl( \frac{3}{2} \biggr) = \frac{\sqrt{\pi}}{2} \leq\int _{0}^{1} \Gamma(x + 1) \,dx\quad \mbox{or} \quad \frac{\sqrt{\pi}}{2} \leq\int_{1}^{2} \Gamma(x) \,dx. $$

Therefore, the ‘true’ discrepancy between the two sides of this inequality is

$$ E_{3} := \int_{1}^{2} \Gamma(x) \,dx - \frac{\sqrt{\pi}}{2}, $$

which can be evaluated by numerical integration. Using, e.g., the Simpson rule, we have

$$\int_{1}^{2} \Gamma(x) \,dx \approx \frac{1}{6} \biggl[ \Gamma(1) + 4 \Gamma \biggl( \frac{3}{2} \biggr) + \Gamma(2) \biggr] = \frac{1 + \sqrt{\pi}}{3} \approx0.92415, $$


so that

$$ E_{3} = \int_{1}^{2} \Gamma(x) \,dx - \frac{\sqrt{\pi}}{2} \approx \frac{1}{3} - \frac{\sqrt{\pi}}{6} \approx0.0379. $$
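The Simpson-rule computation above can be reproduced in a couple of lines (math.gamma is Python's standard Gamma function):

```python
import math

# Reproducing Example 3.4: single-interval Simpson's rule for
# int_1^2 Gamma(x) dx, and the resulting discrepancy E_3.
simpson = (math.gamma(1) + 4 * math.gamma(1.5) + math.gamma(2)) / 6
E3 = simpson - math.sqrt(math.pi) / 2       # = 1/3 - sqrt(pi)/6 ~ 0.0379
assert E3 > 0.0                             # consistent with Jensen
```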

Now we wish to test our estimates, evaluating the right-hand sides of (5), (10), and (12). Recall that c should belong to \([1, 2]\), and an easy though lengthy argument shows that the minimum values are attained for \(c = 3/2\), resulting in

$$\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{1}} = \frac{1}{4}, \qquad \bigl(\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{2}} \bigr)^{2} = \frac{1}{12},\qquad \bigl(\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{4}}\bigr)^{4} = \frac{1}{80}. $$

To estimate \(\Gamma''(x + 1)\), we need to estimate Γ, the digamma function \(\psi:= \Gamma'/\Gamma\), and \(\psi'\). From the definition of ψ it follows that \(\psi' = \Gamma''/\Gamma- (\Gamma '/\Gamma)^{2}\), and hence

$$\varphi''(x) = \Gamma''(x + 1) = \Gamma(x + 1) \bigl[ \psi'(x + 1) + \psi^{2}(x + 1) \bigr]. $$

Now, for \(0 \leq x \leq1\), \(\Gamma(x_{0}) \approx 0.8856 \leq\Gamma(x + 1) \leq 1\), while \(\psi(1) = - \gamma\leq\psi(x + 1) \leq\psi(2) = 1 - \gamma\); see [19]. Here, \(\gamma= 0.57721 \ldots\) is the Euler-Mascheroni constant. Thus, \(|\psi(x + 1)| \leq\max\{\gamma, 1 - \gamma\} = \gamma\). Moreover, it is well known that \(\psi'(x)\) is monotone decreasing for \(x > 0\) (see the representations of ψ and its derivatives in [19], Section 6.4.1, p.260); hence, in particular, \(\psi'(2) \leq\psi'(x + 1) \leq\psi'(1)\), with \(\psi'(1) = \pi^{2}/6\) and \(\psi'(2) = \pi^{2}/6 - 1\). Therefore, we have \(\| \varphi'' \|_{L^{\infty}(0,1)} \leq\pi^{2}/6 + \gamma^{2} \approx1.9776\), and finally

$$ E_{3} \leq\frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(0,1)} \bigl[ \bigl(\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{2}}\bigr)^{2} + \bigl(\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{1}}\bigr)^{2} \bigr] \approx0.1442. $$

This numerical value should be compared with the ‘true’ discrepancy computed in (15) above, which is about 0.0379. Therefore, the estimate in (5) provides a relative error of 280%, which is similar to that computed in Example 3.1.

However, we can do better, using the estimate which exploits the value of \(\inf\varphi''\), i.e., the estimate in (8). We have in fact, for \(0 \leq x \leq1\),

$$ \varphi''(x) = \Gamma(x + 1) \bigl\{ \psi'(x + 1) + \psi^{2}(x + 1) \bigr\} \geq \Gamma(x_{0}) \psi'(2) \approx0.5711, $$

hence the estimate

$$ E_{3} \leq\frac{1}{2} \bigl\Vert \varphi'' \bigr\Vert _{L^{\infty}(0,1)} \cdot \bigl(\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{2}}\bigr)^{2} - \frac{1}{2} \inf\varphi''(x) \cdot \bigl(\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{1}} \bigr)^{2} \approx0.0645, $$

which definitely represents an appreciable improvement with respect to the previous estimate. This is in agreement with what was observed in Example 3.1. At this point, we can also test the estimate in (9), which involves the \(L^{4}\) norm of \(f(x) - 3/2\). We have

$$\begin{aligned} \bigl(\bigl\Vert \varphi'' \bigr\Vert _{L^{2}(0,1)} \bigr)^{2} &= \int_{1}^{2} \bigl( \Gamma''(t) \bigr)^{2} \,dt = \int _{1}^{2} \bigl( \Gamma(t) \bigl[ \psi'(t) + \psi^{2}(t) \bigr] \bigr)^{2} \,dt \\ &\leq \biggl( \frac{\pi^{2}}{6} + \gamma^{2} \biggr)^{2} \approx3.9109, \end{aligned}$$

and \(\| f(\cdot) - 3/2 \|_{L^{4}}^{4} = 1/80\), so that

$$ E_{3} \leq \frac{1}{2} \biggl( \frac{\pi^{2}}{6} + \gamma^{2} \biggr) \bigl\{ \bigl(\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{4}} \bigr)^{8} + \bigl(\bigl\Vert f(\cdot) - 3/2 \bigr\Vert _{L^{1}} \bigr)^{2} \bigr\} \approx0.0618, $$

only slightly better than the previous one (where we obtained about 0.0645), the ‘true’ discrepancy being about 0.0379.

In closing, we observe that, in Example 3.4, the estimate in (13) yields the result \(E_{3} \leq0.0824\), which is clearly worse than that obtained from (8). In this case, we are able to improve the estimate given by (13), which was derived from [18].

Final remarks and conclusions

The purpose of this paper is to establish estimates concerning the Jensen inequality, which involves convex functions. Such estimates can be useful in many instances, such as, e.g., modular estimates in Orlicz spaces, or \(L^{p}\)-estimates for linear and nonlinear integral operators. For instance, the convex function can be \(\varphi(x) := |x|^{p}\) with \(p \geq1\), namely the convex ‘φ-function’ used in the general theory of Orlicz spaces to generate the \(L^{p}\)-spaces; see, e.g., [4, 5]. Thus, the previous estimates, aimed at assessing how sharp the Jensen inequality might be, can be used to obtain estimates for the \(L^{p}\)-norms of certain given functions.

Besides, one can consider similar functions φ with \(p < 0\), for instance \(\varphi(x) := x^{-1/2}\) or \(\varphi(x) := x^{-1}\), which are smooth and convex on every interval \([a, b]\) with \(0 < a < b \leq+\infty\). Estimates have been derived for both smooth and Lipschitz continuous convex φ. They depend, respectively, on the uniform norm of \(\varphi''\) and on the Lipschitz constant of φ, as well as on the \(L^{1}\) and \(L^{2}\) norms of the function f involved in the inequality. From the numerical experiments it is clear that, in general, the estimate in (8) is sharper than the other ones established in this paper. Moreover, the estimate in (8) improves that in (13), established in [18], when the function φ is a \(C^{2}\) convex function. Finally, we stress the usefulness of estimate (12), which allows one to obtain a rough bound for the error made using the Jensen inequality when φ is merely Lipschitz continuous; this is a major advantage when the \(C^{2}\)-assumption on φ is not satisfied.


References

  1. Jensen, JLWV: Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 30(1), 175-193 (1906) (French)

  2. Kuczma, M: An Introduction to the Theory of Functional Equations and Inequalities: Cauchy's Equation and Jensen's Inequality. Birkhäuser, Basel (2008)

  3. Mukhopadhyay, N: On sharp Jensen's inequality and some unusual applications. Commun. Stat., Theory Methods 40, 1283-1297 (2011)

  4. Musielak, J: Orlicz Spaces and Modular Spaces. Lecture Notes in Math., vol. 1034. Springer, Berlin (1983)

  5. Musielak, J, Orlicz, W: On modular spaces. Stud. Math. 28, 49-65 (1959)

  6. Bardaro, C, Musielak, J, Vinti, G: Nonlinear Integral Operators and Applications. de Gruyter Series in Nonlinear Analysis and Applications, vol. 9. de Gruyter, Berlin (2003)

  7. Dragomir, SS, Ionescu, NM: Some converse of Jensen's inequality and applications. Rev. Anal. Numér. Théor. Approx. 23, 71-78 (1994)

  8. Dragomir, SS, Pečarić, J, Persson, LE: Properties of some functionals related to Jensen's inequality. Acta Math. Hung. 69(4), 129-143 (1995)

  9. Dragomir, SS: Bounds for the normalized Jensen functional. Bull. Aust. Math. Soc. 74(3), 471-478 (2006)

  10. Costarelli, D, Spigler, R: Convergence of a family of neural network operators of the Kantorovich type. J. Approx. Theory 185, 80-90 (2014)

  11. Costarelli, D, Vinti, G: Approximation by multivariate generalized sampling Kantorovich operators in the setting of Orlicz spaces. Boll. Unione Mat. Ital. (9) IV, 445-468 (2011); special volume dedicated to Prof. Giovanni Prodi

  12. Costarelli, D, Vinti, G: Approximation by nonlinear multivariate sampling Kantorovich type operators and applications to image processing. Numer. Funct. Anal. Optim. 34(8), 819-844 (2013)

  13. Costarelli, D, Vinti, G: Order of approximation for nonlinear sampling Kantorovich operators in Orlicz spaces. Comment. Math. 53(2), 271-292 (2013); special volume dedicated to Prof. Julian Musielak

  14. Costarelli, D, Vinti, G: Order of approximation for sampling Kantorovich operators. J. Integral Equ. Appl. 26(3), 345-368 (2014)

  15. Cluni, F, Costarelli, D, Minotti, AM, Vinti, G: Applications of sampling Kantorovich operators to thermographic images for seismic engineering. J. Comput. Anal. Appl. 19(4), 602-617 (2015)

  16. van Wijngaarden, A, Scheen, WL: Table of Fresnel integrals.

  17. Mercer, AMcD: Some new inequalities involving elementary mean values. J. Math. Anal. Appl. 229, 677-681 (1999)

  18. Pečarić, JE, Perić, I, Srivastava, HM: A family of the Cauchy type mean-value theorems. J. Math. Anal. Appl. 306, 730-739 (2005)

  19. Abramowitz, M, Stegun, IA (eds.): Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Applied Mathematics Series, vol. 55. U.S. Government Printing Office, Washington (1964)

  20. Olver, FWJ, Lozier, DW, Boisvert, RF, Clark, CW (eds.): NIST Digital Library of Mathematical Functions. Cambridge University Press, Cambridge (2010)



Acknowledgements

This work was accomplished within the GNFM and GNAMPA research groups of the Italian INdAM.

Author information



Corresponding author

Correspondence to Renato Spigler.

Additional information

Competing interests

The authors declare that they have no financial or other competing interests.

Authors’ contributions

Both authors, DC and RS, contributed substantially to this paper, participated in drafting and checking the manuscript and have approved the version to be published.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Costarelli, D., Spigler, R.: How sharp is the Jensen inequality? J. Inequal. Appl. 2015, 69 (2015).



Mathematics Subject Classification

  • 26B25
  • 39B72


Keywords

  • Jensen inequality
  • convex functions
  • Orlicz spaces
  • convex modular functionals
  • Cauchy-Schwarz inequality
  • Hölder inequality