 Research
 Open Access
Levinson’s type generalization of the Jensen inequality and its converse for real Stieltjes measure
 Rozarija Mikić^{1},
 Josip Pečarić^{1} and
 Mirna Rodić^{1}Email authorView ORCID ID profile
https://doi.org/10.1186/s13660-016-1274-y
© The Author(s) 2017
 Received: 3 November 2016
 Accepted: 6 December 2016
 Published: 3 January 2017
Abstract
We derive the Levinson type generalization of the Jensen and the converse Jensen inequality for a real Stieltjes measure, not necessarily positive. As a consequence, the Levinson type generalization of the Hermite-Hadamard inequality is also obtained. Similarly, we derive the Levinson type generalization of Giaccardi’s inequality. The obtained results are then applied to establish new mean-value theorems. The results of this paper generalize several recent results.
Keywords
 Jensen’s inequality
 converse Jensen’s inequality
 Hermite-Hadamard’s inequality
 Giaccardi’s inequality
 Levinson’s inequality
 Green function
 mean-value theorems
MSC
 26D15
 26A51
 26A24
1 Introduction and preliminary results
Steffensen [1] showed that inequality (1.1) also holds in the case when \((x_{1},\ldots,x_{n})\) is a monotonic n-tuple of numbers from the interval I and \((p_{1},\ldots,p_{n})\) is an arbitrary real n-tuple such that \(0\le P_{k}\le P_{n}\) (\(k=1,\ldots,n\)), \(P_{n}>0\), where \(P_{k}=\sum_{i=1}^{k}p_{i}\). His result is called the Jensen-Steffensen inequality.
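No code appears in the original article; the following small numerical sketch only illustrates the Jensen-Steffensen conditions just stated. The sample data are our own assumptions: some weights are negative, yet every partial sum \(P_{k}\) stays in \([0,P_{n}]\), and the Jensen gap remains nonnegative for a convex f.

```python
# Illustrative check of the Jensen-Steffensen inequality (data are assumptions):
# for convex f, monotone (x_i), and real weights with 0 <= P_k <= P_n, P_n > 0,
# we have f(x-bar) <= (1/P_n) * sum p_i f(x_i).
import math

def jensen_steffensen_gap(f, x, p):
    """Right-hand side minus left-hand side of the Jensen-Steffensen inequality."""
    Pn = sum(p)
    assert Pn > 0
    Pk = 0.0
    for pi in p:                      # verify 0 <= P_k <= P_n for every k
        Pk += pi
        assert -1e-12 <= Pk <= Pn + 1e-12
    xbar = sum(pi * xi for pi, xi in zip(p, x)) / Pn
    return sum(pi * f(xi) for pi, xi in zip(p, x)) / Pn - f(xbar)

x = [0.0, 1.0, 2.0, 3.0, 4.0]         # monotone n-tuple
p = [1.0, -0.5, 0.7, -0.4, 1.2]       # partial sums: 1.0, 0.5, 1.2, 0.8, 2.0
gap = jensen_steffensen_gap(math.exp, x, p)
print(gap >= 0)  # True
```

Note that the weights here would not be admissible for the classical Jensen inequality, which is exactly the point of Steffensen's relaxation.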
Boas [2] gave the integral analog of the Jensen-Steffensen inequality.
Theorem 1.1
[2]
The generalization of this result is also given by Boas in [2]. It is the so-called Jensen-Boas inequality (see also [3]).
Theorem 1.2
[2]
The following theorem states the well-known Levinson inequality.
Theorem 1.3
[4]
Numerous papers have been devoted to extensions and generalizations of this result, as well as to weakening the assumptions under which inequality (1.4) is valid (see for instance [5–8], and [9]).
A function \(f\colon I\to \mathbb{R}\) is called k-convex if \([x_{0},\ldots,x_{k}]f\ge 0\) for all choices of \(k+1\) distinct points \(x_{0},x_{1},\ldots,x_{k}\in I\). If the k-th derivative of a k-convex function f exists, then \(f^{(k)}\ge 0\), but \(f^{(k)}\) need not exist (for properties of divided differences and k-convex functions see [3]).
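The divided-difference criterion above can be tested numerically; the sketch below (an illustration of ours, not part of the paper) takes f = exp, whose third derivative is positive, and checks that every four-point divided difference, i.e. every \([x_{0},x_{1},x_{2},x_{3}]f\), is nonnegative, so that exp is 3-convex.

```python
# 3-convexity via divided differences: f is 3-convex iff [x0,x1,x2,x3]f >= 0
# for all choices of four distinct points.
import math
from itertools import combinations

def divided_difference(f, xs):
    """Recursive divided difference [x_0,...,x_k]f for distinct nodes xs."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

points = [0.1 * i for i in range(11)]   # grid of distinct nodes in [0, 1]
f = math.exp                            # exp''' > 0, so exp is 3-convex
dds = [divided_difference(f, q) for q in combinations(points, 4)]
print(all(d >= 0 for d in dds))  # True
```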
Remark 1.4
 (i)Bullen [6] rescaled Levinson’s inequality to a general interval \([a,b]\) and showed that if the function f is 3-convex and \(p_{i}, x_{i}, y_{i}\), \(i=1,\ldots,n\), are such that \(p_{i}>0\), \(\sum_{i=1}^{n}p_{i}=1\), \(a\le x_{i},y_{i}\le b\), (1.3) holds for some \(c\in \langle a,b\rangle \) and
$$ \max \{x_{1},\ldots,x_{n}\}\le \min \{y_{1},\ldots,y_{n}\}, $$
(1.5)
then (1.4) holds.
 (ii)
 (iii)Mercer [7] made a significant improvement by replacing condition (1.3) with a weaker one, i.e. he proved that inequality (1.4) holds under the following conditions:
$$\begin{aligned}& f'''\ge 0, \qquad p_{i}>0, \qquad \sum_{i=1}^{n}p_{i}=1, \qquad a\le x_{i},y_{i} \le b, \\& \max \{x_{1},\ldots,x_{n}\}\le \min \{y_{1}, \ldots,y_{n}\}, \\& \sum_{i=1}^{n}p_{i}(x_{i}-\bar{x})^{2}=\sum_{i=1}^{n}p_{i}(y_{i}-\bar{y})^{2}. \end{aligned}$$
(1.6)
 (iv)
Witkowski [9] showed that in Mercer’s assumptions it is enough to assume that f is 3-convex. Furthermore, Witkowski weakened assumption (1.6) and showed that the equality there can be replaced by an inequality in a certain direction.
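An illustrative numerical check of Levinson’s inequality (1.4) under Mercer-type conditions; the data below are our own assumptions, chosen so that all \(p_{i}>0\) sum to one, \(\max x_{i}\le \min y_{i}\), and the two weighted variances coincide.

```python
# Levinson's inequality (1.4) under Mercer-type conditions, checked numerically:
# sum p_i f(x_i) - f(x-bar)  <=  sum p_i f(y_i) - f(y-bar)  for f''' >= 0.
import math

def levinson_sides(f, p, x, y):
    xbar = sum(pi * xi for pi, xi in zip(p, x))
    ybar = sum(pi * yi for pi, yi in zip(p, y))
    lhs = sum(pi * f(xi) for pi, xi in zip(p, x)) - f(xbar)
    rhs = sum(pi * f(yi) for pi, yi in zip(p, y)) - f(ybar)
    return lhs, rhs

p = [0.5, 0.5]
x, y = [0.0, 2.0], [3.0, 5.0]      # max(x) <= min(y); both weighted variances equal 1
lhs, rhs = levinson_sides(math.exp, p, x, y)   # exp''' > 0
print(lhs <= rhs)  # True
```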
Furthermore, Baloch, Pečarić, and Praljak in their paper [5] introduced a new class of functions \(\mathcal{K}_{1}^{c}(a,b)\) that extends 3-convex functions and can be interpreted as functions that are ‘3-convex at the point \(c\in \langle a,b\rangle \)’. They showed that \(\mathcal{K}_{1}^{c}(a,b)\) is the largest class of functions for which Levinson’s inequality (1.4) holds under Mercer’s assumptions, i.e. that \(f\in \mathcal{K}_{1}^{c}(a,b)\) if and only if inequality (1.4) holds for arbitrary weights \(p_{i}>0\), \(\sum_{i=1}^{n}p_{i}=1\) and sequences \(x_{i}\) and \(y_{i}\) that satisfy \(x_{i}\le c\le y_{i}\) for \(i=1,2,\ldots,n\).
We give the definition of the class \(\mathcal{K}_{1}^{c}(a,b)\) extended to an arbitrary interval I.
Definition 1.5
Let \(f\colon I \to \mathbb{R}\) and \(c\in I^{\circ }\), where \(I^{\circ }\) is the interior of I. We say that \(f\in \mathcal{K}_{1}^{c}(I)\) (\(f\in \mathcal{K}_{2}^{c}(I)\)) if there exists a constant D such that the function \(F(x)=f(x)-\frac{D}{2}x^{2}\) is concave (convex) on \(\langle -\infty ,c]\cap I\) and convex (concave) on \([c,+\infty \rangle \cap I\).
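A concrete instance of this definition can be probed numerically; the choices f = exp, c = 0, and \(D = f''(0) = 1\) below are our illustrative assumptions. The sketch checks, through the sign of second differences, that \(F(x)=e^{x}-x^{2}/2\) is concave to the left of c and convex to the right of it.

```python
# Sketch of Definition 1.5 for f = exp on [-2, 2] with c = 0 and D = f''(0) = 1:
# F(x) = f(x) - (D/2) x^2 should be concave on [-2, 0] and convex on [0, 2].
import math

def F(x, D=1.0):
    return math.exp(x) - 0.5 * D * x * x

def second_difference(g, x, h=1e-3):
    return g(x - h) - 2.0 * g(x) + g(x + h)   # sign matches that of g'' near x

left  = [-1.9 + 0.1 * i for i in range(19)]   # interior points in (-2, 0)
right = [0.1 + 0.1 * i for i in range(19)]    # interior points in (0, 2)
concave_left = all(second_difference(F, x) <= 0 for x in left)
convex_right = all(second_difference(F, x) >= 0 for x in right)
print(concave_left and convex_right)  # True
```

Since \(F''(x)=e^{x}-1\) changes sign exactly at \(c=0\), this matches the choice \(D=f''(c)\) mentioned in Remark 1.6 below.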
Remark 1.6
 (1):

If \(f \in \mathcal{K}_{i}^{c}(a,b)\), \(i=1,2\), and \(f''(c)\) exists, then \(f''(c)=D\).
 (2):

The function \(f:(a,b)\to \mathbb{R}\) is 3-convex (3-concave) if and only if \(f \in \mathcal{K}_{1}^{c}(a,b)\) (\(f \in \mathcal{K}_{2}^{c}(a,b)\)) for every \(c\in (a,b)\).
Jakšetić, Pečarić, and Praljak in [10] gave the following Levinson type generalization of the Jensen-Boas inequality.
Theorem 1.7
[10]
Theorem 1.8
[11]
 (1):

For every continuous convex function \(\varphi : [\alpha, \beta ]\rightarrow \mathbb{R}\) the inequality
$$ \varphi \biggl( \frac{\int_{a}^{b} g(x)\,d\lambda (x)}{\int_{a}^{b}d\lambda (x)} \biggr) \leq \frac{\int_{a}^{b} \varphi ( g(x) )\,d\lambda (x)}{\int_{a}^{b}d\lambda (x)} $$
(1.10)
holds.
 (2):

For all \(s\in [\alpha , \beta ]\) the inequality
$$ G \biggl( \frac{\int_{a}^{b} g(x)\,d\lambda (x)}{\int_{a}^{b}d\lambda (x)},s \biggr) \leq \frac{\int_{a}^{b} G ( g(x),s )\,d\lambda (x)}{\int_{a}^{b}d\lambda (x)} $$
(1.11)
holds, where the function \(G: [\alpha , \beta ]\times [\alpha , \beta ]\rightarrow \mathbb{R}\) is defined in (1.8).
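The definition (1.8) of G is not reproduced in this excerpt; the sketch below assumes the standard Green function used in [11] (this form is our assumption) and numerically verifies the representation \(f(x)=\frac{\beta -x}{\beta -\alpha }f(\alpha )+\frac{x-\alpha }{\beta -\alpha }f(\beta )+\int_{\alpha }^{\beta }G(x,s)f''(s)\,ds\) on which conditions such as (1.11) rest.

```python
# Assumed form of the Green function (1.8) on [alpha, beta]^2:
#   G(t,s) = (t - beta)(s - alpha)/(beta - alpha)  for s <= t,
#   G(t,s) = (s - beta)(t - alpha)/(beta - alpha)  for t <= s.
# We verify the second-order representation of f numerically for f(x) = x^3.

alpha, beta = 0.0, 2.0

def G(t, s):
    if s <= t:
        return (t - beta) * (s - alpha) / (beta - alpha)
    return (s - beta) * (t - alpha) / (beta - alpha)

def f(x):   return x ** 3
def fpp(x): return 6.0 * x

x = 0.7
n = 20_000
h = (beta - alpha) / n
# midpoint rule for the integral of G(x, .) * f''(.)
integral = sum(G(x, alpha + (k + 0.5) * h) * fpp(alpha + (k + 0.5) * h)
               for k in range(n)) * h
reconstructed = ((beta - x) / (beta - alpha) * f(alpha)
                 + (x - alpha) / (beta - alpha) * f(beta) + integral)
print(abs(reconstructed - f(x)) < 1e-5)  # True
```

With this G one also checks directly that \(G(\cdot ,s)\) and \(G(t,\cdot )\) are convex, which is the property used throughout the paper.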
Note that for every continuous concave function \(\varphi : [\alpha , \beta ]\rightarrow \mathbb{R}\) inequality (1.10) is reversed, i.e. the following corollary holds.
Corollary 1.9
[11]
The main aim of our paper is to give a Levinson type generalization of the result from Theorem 1.8. In that way, a generalization of Theorem 1.7 for a real Stieltjes measure, not necessarily positive or increasing, will also be obtained.
2 Main results
Theorem 2.1
Proof
Let \(\varphi \in \mathcal{K}_{1}^{c}([\alpha ,\beta ])\) be a continuous function on \([\alpha ,\beta ]\) and let \(\phi (x)=\varphi (x)-\frac{D}{2}x^{2}\), where D is the constant from Definition 1.5.
Corollary 2.2
 (i)
If for all \(s_{1}\in [\alpha , c]\) and \(s_{2}\in [c,\beta ]\) inequalities (2.2) hold, where the function G is defined in (1.8), then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the reverse inequalities in (2.3) hold.
 (ii)
If for all \(s_{1}\in [\alpha , c]\) and \(s_{2}\in [c,\beta ]\) the reverse inequalities in (2.2) hold, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) (2.3) holds.
Remark 2.3
Remark 2.4
3 Discrete case
In this section we give the results for the discrete case. The proofs are similar to those in the integral case given in the previous section, so we will state these results without the proofs.
In Levinson’s inequality (1.3) and its generalizations (see [5]) the \(p_{i}\) (\(i=1,\ldots,n\)) are positive real numbers. Here we give a generalization of that result which allows the \(p_{i}\) to be negative as well, with nonzero sum, under a supplementary condition on \(p_{i}\) and \(x_{i}\) expressed via the Green function G defined in (1.8).
Here we use the common notation: for real n-tuples \((x_{1},\ldots,x_{n})\) and \((p_{1},\ldots,p_{n})\) we set \(P_{k}=\sum_{i=1}^{k}p_{i}\), \(\bar{P_{k}}=P_{n}-P_{k-1}\) (\(k=1,\ldots,n\)) and \(\bar{x}= \frac{1}{P_{n}}\sum_{i=1}^{n}p_{i}x_{i}\). Analogously, for real m-tuples \((y_{1},\ldots,y_{m})\) and \((q_{1},\ldots,q_{m})\) we define \(Q_{k}\), \(\bar{Q_{k}}\) (\(k=1,\ldots,m\)) and ȳ.
Using that fact, the authors in [11] derived discrete analogs of Theorem 1.8 and Corollary 1.9, and here, as in the previous section, we obtain the following results.
Theorem 3.1
Inequality (3.3) is reversed if we change the signs of inequalities in (3.2).
Corollary 3.2
 (i)
If for all \(s_{1}\in [\alpha , c]\) and \(s_{2}\in [c,\beta ]\) the inequalities in (3.2) hold, where the function G is defined in (1.8), then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the reverse inequalities in (3.3) hold.
 (ii)
If for all \(s_{1}\in [\alpha , c]\) and \(s_{2}\in [c, \beta ]\) the reversed inequalities in (3.2) hold, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([ \alpha ,\beta ])\) (3.3) holds.
Remark 3.3
Theorem 3.1 is a generalization of the Levinson type inequality given in [5]. Namely, since the function G is convex in both variables, in the case when all \(p_{i}>0\) and \(q_{j}>0\) we can apply the Jensen inequality to see that inequalities (3.2) hold for all \(s_{1}\in [\alpha , c]\) and \(s_{2}\in [c,\beta ]\). Now from Theorem 3.1 and Corollary 3.2 we recover the result from [5].
4 Converses of the Jensen inequality
Inequality (4.1) was obtained in 1973 by Lah and Ribarič in their paper [12]. Since then, many papers have been written on its generalizations and converses (see, for instance, [13] and [3]).
In [14] the authors gave a Levinson type generalization of inequality (4.1) for positive measures. In this section we obtain a similar result involving signed measures, under a supplementary condition expressed via the Green function G defined in (1.8). In order to do so, we first need to state a result from [11], which gives a version of the Edmundson-Lah-Ribarič inequality for signed measures.
Theorem 4.1
[11]
 (1):

For every continuous convex function \(\varphi : [\alpha, \beta ]\rightarrow \mathbb{R}\) the inequality
$$ \frac{\int_{a}^{b} \varphi ( g(x) )\,d\lambda (x)}{\int_{a}^{b}d\lambda (x)}\leq \frac{M-\bar{g}}{M-m}\varphi (m)+ \frac{\bar{g}-m}{M-m}\varphi (M) $$
(4.2)
holds, where \(\bar{g}=\frac{\int_{a}^{b} g(x)\,d\lambda (x)}{\int_{a}^{b}d\lambda (x)}\).
 (2):

For all \(s\in [\alpha , \beta ]\) the inequality
$$ \frac{\int_{a}^{b} G ( g(x),s )\,d\lambda (x)}{\int_{a}^{b}d\lambda (x)}\leq \frac{M-\bar{g}}{M-m}G(m,s)+ \frac{\bar{g}-m}{M-m}G(M,s) $$
(4.3)
holds, where the function \(G: [\alpha , \beta ]\times [\alpha , \beta ]\rightarrow \mathbb{R}\) is defined in (1.8).
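As an illustrative numerical check of the Edmundson-Lah-Ribarič bound (4.2) in the classical case of a positive (here discrete) measure; the sample data are our own assumptions. Convexity of φ places each value \(\varphi (g_{i})\) below the chord of φ over \([m,M]\), and averaging preserves this.

```python
# Edmundson-Lah-Ribarič bound for a positive discrete measure: the weighted mean
# of phi(g) is bounded by the chord of phi over [m, M] evaluated at g-bar.
import math

phi = math.exp                      # convex
g = [0.3, 1.1, 0.7, 1.9, 0.5]       # values of g, so m = 0.3, M = 1.9
w = [1.0, 2.0, 1.0, 0.5, 1.5]       # positive weights (a discrete measure)

m, M = min(g), max(g)
gbar = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
mean_phi = sum(wi * phi(gi) for wi, gi in zip(w, g)) / sum(w)
chord = (M - gbar) / (M - m) * phi(m) + (gbar - m) / (M - m) * phi(M)
print(mean_phi <= chord)  # True
```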
Note that for every continuous concave function \(\varphi : [\alpha , \beta ]\rightarrow \mathbb{R}\) inequality (4.2) is reversed, i.e. the following corollary holds.
Corollary 4.2
[11]
In the following theorem we give the Levinson type generalization of the above result, using a method similar to that of Section 2.
Theorem 4.3
Proof
Let \(\varphi \in \mathcal{K}_{1}^{c}([\alpha ,\beta ])\) be a continuous function on \([\alpha ,\beta ]\) and let \(\phi (x)=\varphi (x)-\frac{D}{2}x^{2}\), where D is the constant from Definition 1.5.
Corollary 4.4
 (i)
If for all \(s\in [\alpha , c]\) the inequality in (4.5) holds, and for all \(s\in [c, \beta ]\) the inequality in (4.6) holds, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the reverse inequalities in (4.7) hold.
 (ii)
If for all \(s\in [\alpha , c]\) the reversed inequality in (4.5) holds, and for all \(s\in [c, \beta ]\) the reversed inequality in (4.6) holds, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the inequalities in (4.7) hold.
Remark 4.5
5 Discrete form of the converses of the Jensen inequality
In this section we give the Levinson type generalization for converses of Jensen’s inequality in the discrete case. The proofs are similar to those in the integral case given in the previous section, so we state these results with the proofs omitted.
In [14] the authors obtained the following Levinson type generalization of the discrete Edmundson-Lah-Ribarič inequality.
Theorem 5.1
[14]
Our first result is a generalization of the result from [14] stated above, in which the \(p_{i}, q_{j}\) are allowed to be negative as well, with nonzero sums, under supplementary conditions on \(p_{i},q_{j}\) and \(x_{i},y_{j}\) expressed via the Green function G defined in (1.8).
Theorem 5.2
Remark 5.3
If we set all \(p_{i},q_{j}\) to be positive, then Theorem 5.2 becomes the result from [14] which is stated above in Theorem 5.1.
Corollary 5.4
 (i)
If for all \(s\in [\alpha , c]\) inequality (5.2) holds and for all \(s\in [c, \beta ]\) inequality (5.3) holds, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the reverse inequalities in (5.4) hold.
 (ii)
If for all \(s\in [\alpha , c]\) the reversed inequality in (5.2) holds and for all \(s\in [c, \beta ]\) the reversed inequality in (5.3) holds, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) (5.4) holds.
6 The Hermite-Hadamard inequality
Fink in [16] discussed generalizations of (6.1) by examining the left- and right-hand sides of the inequality separately and considering certain signed measures. In their paper [17], the authors gave a complete characterization of the right-hand side of the Hermite-Hadamard inequality.
Rodić Lipanović, Pečarić, and Perić in [11] obtained the complete characterization of the left- and right-hand sides of the generalized Hermite-Hadamard inequality for a real Stieltjes measure.
In this section a Levinson type generalization of the Hermite-Hadamard inequality for signed measures will be given as a consequence of the results of Sections 2 and 4.
Corollary 6.1
Remark 6.2
 (i)
If for all \(s_{1}\in [\alpha , c]\) and \(s_{2}\in [c, \beta ]\) inequalities (6.4) hold, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the reverse inequalities in (6.5) hold.
 (ii)
If for all \(s_{1}\in [\alpha , c]\) and \(s_{2}\in [c, \beta ]\) the reversed inequalities in (6.4) hold, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([ \alpha ,\beta ])\) (6.5) holds.
Note that for the Levinson type generalization of the left-hand side of the generalized Hermite-Hadamard inequality it is necessary to require that \(\tilde{x}\in [\alpha , c]\) and \(\tilde{y} \in [c,\beta ]\).
Remark 6.3
If in Remark 2.3 we put \(f(x)=x\) and \(g(x)=x\), we can obtain conditions weaker than equality (6.3) under which inequality (6.5) holds.
Similarly, from the results of the fourth section we get the Levinson type generalization of the right-hand side of the generalized Hermite-Hadamard inequality. Here we allow the mean value x̃ to lie outside the interval \([\alpha , c]\) and ỹ outside the interval \([c,\beta ]\).
Corollary 6.4
Remark 6.5
 (i)
If for all \(s\in [\alpha , c]\) inequality (6.7) holds and for all \(s\in [c,\beta ]\) inequality (6.8) holds, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the reverse inequalities in (6.9) hold.
 (ii)
If for all \(s\in [\alpha , c]\) the reversed inequality in (6.7) holds and for all \(s\in [c,\beta ]\) the reversed inequality in (6.8) holds, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) (6.9) holds.
Remark 6.6
If in Remark 4.5 we put \(f(x)=x\) and \(g(x)=x\), we can obtain analogous conditions weaker than equality (6.6) under which inequality (6.9) holds.
It is easy to see that for \(\lambda (x)=x\) and \(\mu (x)=x\) conditions (6.4), (6.7), and (6.8) are always fulfilled. In this way we can obtain a Levinson type generalization of both sides of the classical weighted Hermite-Hadamard inequality.
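The classical two-sided Hermite-Hadamard inequality underlying this section can be checked numerically; the choice φ = exp on \([0,1]\) below is our illustrative assumption.

```python
# Classical Hermite-Hadamard inequality for convex phi on [a, b]:
# phi((a+b)/2) <= (1/(b-a)) * int_a^b phi(x) dx <= (phi(a) + phi(b))/2.
import math

a, b = 0.0, 1.0
n = 10_000
h = (b - a) / n
mean = sum(math.exp(a + (k + 0.5) * h) for k in range(n)) * h / (b - a)  # midpoint rule
left, right = math.exp((a + b) / 2), (math.exp(a) + math.exp(b)) / 2
print(left <= mean <= right)  # True
```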
Corollary 6.7
 (i)If \(C:=\frac{1}{12}(b_{2}-a_{2})^{2}=\frac{1}{12}(b_{1}-a_{1})^{2}\) holds, then for every continuous function \(\varphi \in \mathcal{K}_{1}^{c}([\alpha ,\beta ])\)$$\begin{aligned}& \frac{1}{b_{1}-a_{1}} \int_{a_{1}}^{b_{1}} \varphi (x)\,dx-\varphi \biggl(\dfrac{a_{1}+b_{1}}{2} \biggr) \\& \quad \le \frac{D}{2}C \le \frac{1}{b_{2}-a_{2}} \int_{a_{2}}^{b_{2}} \varphi (x)\,dx- \varphi \biggl( \dfrac{a_{2}+b_{2}}{2} \biggr) . \end{aligned}$$
 (ii)If \(C:=\frac{1}{6}(b_{2}-a_{2})^{2}=\frac{1}{6}(b_{1}-a_{1})^{2}\) holds, then for every continuous function \(\varphi \in \mathcal{K}_{1}^{c}([\alpha ,\beta ])\)$$\begin{aligned}& \frac{\varphi (a_{1})+\varphi (b_{1})}{2}- \frac{1}{b_{1}-a_{1}} \int_{a_{1}}^{b_{1}} \varphi ( x )\,dx \\& \quad \le \frac{D}{2}C\le \frac{\varphi (a_{2})+\varphi (b_{2})}{2}-\frac{1}{b_{2}-a_{2}} \int_{a_{2}}^{b_{2}} \varphi ( x )\,dx . \end{aligned}$$
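A numerical probe of part (i) of the corollary above. The intervals \([0,1]\) and \([2,3]\), the point \(c=1.5\), and the choice φ = exp are our illustrative assumptions; both intervals have length 1, so the common value is \(C=1/12\), and by Remark 1.6 we may take \(D=\varphi ''(c)\) for this smooth φ.

```python
# Levinson-type Hermite-Hadamard (midpoint form): for phi = exp (3-convex),
# [a1,b1] = [0,1], [a2,b2] = [2,3], c = 1.5, D = phi''(c), C = 1/12, check
#   HH-gap on [a1,b1]  <=  (D/2) * C  <=  HH-gap on [a2,b2].
import math

def hh_midpoint_gap(a, b, n=10_000):
    """(1/(b-a)) * int_a^b exp - exp((a+b)/2), via the midpoint rule."""
    h = (b - a) / n
    mean = sum(math.exp(a + (k + 0.5) * h) for k in range(n)) * h / (b - a)
    return mean - math.exp((a + b) / 2)

c, C = 1.5, 1.0 / 12.0
D = math.exp(c)                     # D = phi''(c) for phi = exp
middle = 0.5 * D * C
print(hh_midpoint_gap(0.0, 1.0) <= middle <= hh_midpoint_gap(2.0, 3.0))  # True
```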
7 The inequalities of Giaccardi and Petrović
The following generalization of (7.1) was given by Giaccardi (see [3] and [19]).
Theorem 7.1
Giaccardi, [19]
In this section we use a technique analogous to that of the previous sections to obtain a Levinson type generalization of the Giaccardi inequality for n-tuples p of real numbers that are not necessarily nonnegative. As a simple consequence, we obtain a Levinson type generalization of the original Giaccardi inequality (7.2). In order to do so, we first need to state two results from [11].
Theorem 7.2
[11]
 (1):

For every continuous convex function \(f: [\alpha , \beta ]\rightarrow \mathbb{R}\) the inequality
$$ \frac{1}{P_{n}}\sum_{i=1}^{n} p_{i} f(x_{i}) \leq \frac{b-\bar{x}}{b-a}f(a)+ \frac{\bar{x}-a}{b-a}f(b) $$
(7.3)
holds.
 (2):

For all \(s\in [\alpha , \beta ]\) the inequality
$$ \frac{1}{P_{n}}\sum_{i=1}^{n} p_{i} G(x_{i},s)\leq \frac{b-\bar{x}}{b-a}G(a,s)+ \frac{\bar{x}-a}{b-a}G(b,s) $$
(7.4)
holds, where the function \(G: [\alpha , \beta ]\times [\alpha , \beta ] \rightarrow \mathbb{R}\) is defined in (1.8).
Corollary 7.3
[11]
Our first result is a Levinson type generalization of the Giaccardi inequality for n-tuples p and m-tuples q of arbitrary real numbers instead of nonnegative real numbers.
Theorem 7.4
Proof
We follow the same idea as in the proof of Theorem 4.3 from Section 4. We apply Theorem 7.2 and Corollary 7.3 to the function \(\phi (x)=\varphi (x)-\frac{D}{2}x^{2}\), which is concave on \([\alpha ,c]\) and convex on \([c,\beta ]\). We set \(a=\min \{x_{0}, \sum_{i=1}^{n}p_{i}x_{i}\}\), \(b=\max \{x_{0}, \sum_{i=1}^{n}p_{i}x_{i}\}\) on \([\alpha , c]\), and then we set \(a=\min \{y_{0}, \sum_{j=1}^{m}q_{j}y_{j}\}\) and \(b=\max \{y_{0}, \sum_{j=1}^{m}q_{j}y_{j}\}\) on \([c, \beta ]\), as well as consider the signs of \(P_{n}\) and \(Q_{m}\). We omit the details. □
Corollary 7.5
 (i)
If \(P_{n}>0\) and \(Q_{m}<0\) and if for all \(s\in [\alpha , \beta ]\) inequality (7.8) holds and inequality (7.9) is reversed, then (7.10) holds.
 (ii)
If \(P_{n}<0\) and \(Q_{m}>0\) and if for all \(s\in [\alpha , \beta ]\) inequality (7.8) is reversed and inequality (7.9) holds, then (7.10) holds.
Corollary 7.6
 (i)
If \(P_{n}\cdot Q_{m}>0\) and if for all \(s\in [\alpha , \beta ]\) inequalities (7.8) and (7.9) hold, then for every continuous function \(\varphi \in \mathcal{K}_{2} ^{c}([\alpha ,\beta ])\) the reversed inequality in (7.10) holds.
 (ii)
If \(P_{n}\cdot Q_{m}>0\) and if for all \(s\in [\alpha , \beta ]\) inequalities (7.8) and (7.9) are reversed, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) (7.10) holds.
 (iii)
If \(P_{n}>0\) and \(Q_{m}<0\) and if for all \(s\in [\alpha ,\beta ]\) inequality (7.8) holds and inequality (7.9) is reversed, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the reversed inequality in (7.10) holds.
 (iv)
If \(P_{n}<0\) and \(Q_{m}>0\) and if for all \(s\in [\alpha , \beta ]\) inequality (7.8) is reversed and inequality (7.9) holds, then for every continuous function \(\varphi \in \mathcal{K}_{2}^{c}([\alpha ,\beta ])\) the reversed inequality in (7.10) holds.
Remark 7.7
Notice that if we take all \(p_{i}\) (\(i=1,\ldots,n\)) and \(q_{j}\) (\(j=1,\ldots,m\)) to be positive, Theorem 7.4 becomes the Levinson type generalization of the original Giaccardi inequality (7.2).
Remark 7.8
8 Mean-value theorems
Let \(f\colon [a_{1},b_{1}]\to \mathbb{R}\) and \(g\colon [a_{2},b_{2}] \to \mathbb{R}\) be continuous functions, \([\alpha , \beta ]\subseteq \mathbb{R}\) an interval and \(c\in \langle \alpha ,\beta \rangle \) such that \(f([a_{1},b_{1}])=[m_{1},M_{1}]\subseteq [\alpha , c]\) and \(g([a_{2},b_{2}])=[m_{2},M_{2}]\subseteq [c,\beta ]\). Let \(\lambda \colon [a_{1},b_{1}]\to \mathbb{R}\) and \(\mu \colon [a_{2},b_{2}] \to \mathbb{R}\) be continuous functions or functions of bounded variation such that \(\lambda (a_{1})\neq \lambda (b_{1})\) and \(\mu (a_{2})\neq \mu (b_{2})\), and let \(\varphi \in \mathcal{K}_{1} ^{c}([\alpha ,\beta ])\) be a continuous function.
In the following two theorems we give the mean-value theorems of the Lagrange and Cauchy type, respectively.
Theorem 8.1
 (i)If (2.1) holds and for all \(s_{1}\in [\alpha ,c]\), \(s_{2}\in [c,\beta ]\) (2.2) holds, then there exists \(\xi_{1} \in [\alpha ,\beta ]\) such that$$\begin{aligned} \Gamma_{J} (\varphi )=\dfrac{\varphi '''(\xi_{1})}{6} \biggl[\frac{ \int_{a_{2}}^{b_{2}} g^{3}(x)\,d\mu (x)}{\int_{a_{2}}^{b_{2}}d\mu (x)}- \frac{ \int_{a_{1}}^{b_{1}}f^{3}(x)\,d\lambda (x)}{\int_{a_{1}}^{b_{1}}d\lambda (x)}+\bar{f}^{3}- \bar{g}^{3} \biggr]. \end{aligned}$$(8.3)
 (ii)If (4.4) holds, and for all \(s\in [\alpha ,c]\) (4.5) holds and for all \(s\in [c, \beta ]\) (4.6) holds, then there exists \(\xi_{2} \in [\alpha ,\beta ]\) such that$$\begin{aligned} \Gamma_{\mathrm{ELR}} (\varphi ) ={}&\dfrac{\varphi '''(\xi_{2})}{6} \biggl[ \frac{M _{2}-\bar{g}}{M_{2}-m_{2}}m_{2}^{3}+\frac{\bar{g}-m_{2}}{M_{2}-m _{2}}M_{2}^{3}- \frac{\int_{a_{2}}^{b_{2}} g^{3}(x)\,d\mu (x)}{\int_{a _{2}}^{b_{2}}d\mu (x)} \\ &{}-\frac{M_{1}-\bar{f}}{M_{1}-m_{1}}m_{1}^{3}-\frac{\bar{f}-m_{1}}{M _{1}-m_{1}}M_{1}^{3}+ \frac{\int_{a_{1}}^{b_{1}} f^{3}(x)\,d\lambda (x)}{ \int_{a_{1}}^{b_{1}}d\lambda (x)} \biggr]. \end{aligned}$$(8.4)
Proof
Theorem 8.2
Proof
Remark 8.3
Note that if in Theorem 8.2 we set the function ψ to be \(\psi (x)=x^{3}\), we get exactly Theorem 8.1.
Remark 8.4
Note that if we set the functions f, g, λ, and μ from our theorems to fulfill the conditions of Jensen’s integral inequality, or of the Jensen-Steffensen, Jensen-Brunk, or Jensen-Boas inequality, then, applying that inequality to the function G, which is continuous and convex in both variables, we see that in these cases the inequalities in (2.2) hold for all \(s_{1}\in [\alpha ,c]\), \(s_{2}\in [c,\beta ]\), and so from our results we directly obtain the results from the paper [10].
Remark 8.5
If in the definition of the functional \(\Gamma_{J}\) (resp. \(\Gamma _{\mathrm{ELR}}\)) we set \(f(x)=x\) and \(g(x)=x\), then we get a functional that represents the difference between the right and the left side of the left-hand part (resp. right-hand part) of the generalized Hermite-Hadamard inequality. In the same manner, adequate results of Lagrange and Cauchy type for those functionals can be derived directly from Theorem 8.1 and Theorem 8.2.
8.1 Discrete case
Let \([\alpha , \beta ] \subseteq \mathbb{R}\) and \(c\in \langle \alpha ,\beta \rangle \). Let \(x_{i}\in [a_{1},b_{1}]\subseteq [\alpha ,c]\), \(p_{i}\in \mathbb{R}\) (\(i=1,\ldots,n\)) be such that \(P_{n}\neq 0\), and let \(y_{j}\in [a_{2},b_{2}]\subseteq [c, \beta ]\), \(q_{j}\in \mathbb{R}\) (\(j=1,\ldots,m\)) be such that \(Q_{m}\neq 0\). Let \(\varphi \in \mathcal{K}_{1}^{c}([\alpha ,\beta ])\) be a continuous function.
 (i)
\(\Gamma_{J_{D}} (\varphi )\ge 0\), when (3.1) holds and for all \(s_{1}\in [\alpha ,c]\), \(s_{2}\in [c,\beta ]\) (3.2) holds;
 (ii)
\(\Gamma_{ELR_{D}} (\varphi )\ge 0\), when (5.1) holds, and for all \(s\in [\alpha ,c]\) (5.2) holds and for all \(s\in [c,\beta ]\) (5.3) holds;
 (iii)
\(\Gamma_{G} (\varphi )\ge 0\), when \(P_{n}\cdot Q_{m}>0\) and (7.6) holds, and for all \(s\in [\alpha ,\beta ]\) (7.8) and (7.9) hold.
The following two results are mean-value theorems of the Lagrange and Cauchy type, respectively; they are obtained analogously to the theorems of the same type in the previous sections, so we omit the proofs.
Theorem 8.6
 (i)If (3.1) holds and for all \(s_{1}\in [\alpha ,c]\), \(s_{2}\in [c,\beta ]\) (3.2) holds, then there exists \(\xi_{1} \in [\alpha ,\beta ]\) such that$$\begin{aligned} \Gamma_{J_{D}} (\varphi )=\dfrac{\varphi '''(\xi_{1})}{6} \Biggl[\frac{1}{Q _{m}} \sum_{j=1}^{m}q_{j}y_{j}^{3}- \frac{1}{P_{n}}\sum_{i=1}^{n}p_{i}x _{i}^{3} +\bar{x}^{3}-\bar{y}^{3} \Biggr] . \end{aligned}$$(8.10)
 (ii)If (5.1) holds, and for all \(s\in [\alpha ,c]\) (5.2) holds and for all \(s\in [c, \beta ]\) (5.3) holds, then there exists \(\xi_{2} \in [\alpha ,\beta ]\) such that$$\begin{aligned} \Gamma_{ELR_{D}} (\varphi ) ={}&\dfrac{\varphi '''(\xi_{2})}{6} \Biggl[ \frac{b _{2}-\bar{y}}{b_{2}-a_{2}}a_{2}^{3}+\frac{\bar{y}-a_{2}}{b_{2}-a_{2}}b _{2}^{3}-\frac{1}{Q_{m}}\sum _{j=1}^{m} q_{j} y_{j}^{3} \\ & {}-\frac{b_{1}-\bar{x}}{b_{1}-a_{1}}a_{1}^{3}-\frac{\bar{x}-a_{1}}{b _{1}-a_{1}}b_{1}^{3}+ \frac{1}{P_{n}}\sum_{i=1}^{n} p_{i} x_{i}^{3} \Biggr]. \end{aligned}$$(8.11)
 (iii)If \(P_{n}\cdot Q_{m}>0\) and (7.6) holds, and for all \(s\in [\alpha ,\beta ]\) (7.8) and (7.9) hold, then there exists \(\xi_{3} \in [\alpha , \beta ]\) such that$$\begin{aligned} \Gamma_{G} (\varphi ) ={}&\dfrac{\varphi '''(\xi_{3})}{6} \Biggl[A_{2} \Biggl( \sum_{j=1}^{m} q_{j}y_{j} \Biggr)^{3}+B_{2} \Biggl( \sum_{j=1}^{m} q _{j}-1 \Biggr) y_{0}^{3}-\sum _{j=1}^{m} q_{j}y_{j}^{3} \\ & {}-A_{1} \Biggl( \sum_{i=1}^{n} p_{i}x_{i} \Biggr)^{3}-B_{1} \Biggl( \sum_{i=1}^{n} p_{i}-1 \Biggr) x_{0}^{3}+\sum_{i=1}^{n} p_{i}x_{i}^{3} \Biggr]. \end{aligned}$$(8.12)
Theorem 8.7
Remark 8.8
Note that if in Theorem 8.7 we set the function ψ to be \(\psi (x)=x^{3}\), we get exactly Theorem 8.6.
As a consequence of the previous two theorems, we now give some further results with explicit conditions on \(p_{i}, x_{i}\) (\(i=1,\ldots,n\)) and \(q_{j}, y_{j}\) (\(j=1,\ldots,m\)) under which (8.10) and (8.13) hold, where the properties of the function G allow us to skip the supplementary conditions on that function.
Corollary 8.9
Proof
Note that \(p_{i},q_{j}>0\) implies that \(\bar{x}\in [ \alpha ,c]\) and \(\bar{y}\in [c,\beta ]\), so we can set the interval \([a_{1},b_{1}]\) to be \([\alpha ,c]\) and \([a_{2},b_{2}]\) to be \([c,\beta ]\). The function G is convex, so by Jensen’s inequality we see that the inequalities in (3.2) hold for all \(s_{1} \in [\alpha ,c]\), \(s_{2}\in [c,\beta ]\). Now we can apply Theorem 8.6 and Theorem 8.7 to get the statements of this corollary. □
Corollary 8.10
Proof
Declarations
Acknowledgements
This research was supported by the Croatian Science Foundation under project 5435.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Steffensen, JF: On certain inequalities and methods of approximation. J. Inst. Actuar. 51, 274-297 (1919)
 Boas, RP: The Jensen-Steffensen inequality. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 302-319, 1-8 (1970)
 Pečarić, JE, Proschan, F, Tong, YL: Convex Functions, Partial Orderings, and Statistical Applications. Mathematics in Science and Engineering, vol. 187. Academic Press, San Diego (1992)
 Levinson, N: Generalisation of an inequality of Ky Fan. J. Math. Anal. Appl. 8, 133-134 (1964)
 Baloch, IA, Pečarić, J, Praljak, M: Generalization of Levinson’s inequality. J. Math. Inequal. 9(2), 571-586 (2015)
 Bullen, PS: An inequality of N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 421-460, 109-112 (1985)
 Mercer, AMcD: Short proof of Jensen’s and Levinson’s inequality. Math. Gaz. 94, 492-495 (2010)
 Pečarić, J: On an inequality of N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 678-715, 71-74 (1980)
 Witkowski, A: On Levinson’s inequality. RGMIA Research Report Collection 15, Art. 68 (2012)
 Jakšetić, J, Pečarić, J, Praljak, M: Generalized Jensen-Steffensen inequality and exponential convexity. J. Math. Inequal. 9(4), 1287-1302 (2015)
 Pečarić, J, Perić, I, Rodić Lipanović, M: Uniform treatment of Jensen type inequalities. Math. Rep. 16(66)(2), 183-205 (2014)
 Lah, P, Ribarič, M: Converse of Jensen’s inequality for convex functions. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 412-460, 201-205 (1973)
 Edmundson, HP: Bounds on the expectation of a convex function of a random variable. The Rand Corporation, Paper No. 982 (1956)
 Jakšić, R, Pečarić, J: Levinson’s type generalization of the Edmundson-Lah-Ribarič inequality. Mediterr. J. Math. 13(1), 483-496 (2016)
 Fejér, L: Über die Fourierreihen, II. Math. Naturwiss. Anz. Ungar. Akad. Wiss. 24, 369-390 (1906)
 Fink, AM: A best possible Hadamard inequality. Math. Inequal. Appl. 1, 223-230 (1998)
 Florea, A, Niculescu, CP: A Hermite-Hadamard inequality for convex-concave symmetric functions. Bull. Math. Soc. Sci. Math. Roum. 50(98)(2), 149-156 (2007)
 Petrović, M: Sur une fonctionnelle. Publ. Math. Univ. Belgrade 1, 149-156 (1932)
 Vasić, PM, Pečarić, JE: On the Jensen inequality for monotone functions I. An. Univ. Vest. Timiş., Ser. Mat.-Inform. 1, 95-104 (1979)