
Continuous refinements of some Jensen-type inequalities via strong convexity with applications

Abstract

In this paper we prove new continuous refinements of some Jensen type inequalities in both direct and reversed forms. As applications we also derive some continuous refinements of Hermite–Hadamard, Hölder, and Popoviciu type inequalities. As particular cases we point out the corresponding results for sums and integrals showing that our results contain both several well-known but also some new results for these special cases.

1 Preliminaries

Classical inequalities are extremely important in several parts of the mathematical sciences as well as in their applications in engineering and the natural sciences. This is one reason for the great and increasing interest in further developing and applying this important area. Here we mention just two types of fairly recent developments:

1. It is well known that the classical Jensen inequality is more or less equivalent to the concept of convexity. It is also well known that the Jensen inequality implies several of the classical inequalities, see e.g. the books [10, 18] and the related material of P.-L. Lions [19]. Moreover, by using some variations of the concept of convexity, some refined versions of classical inequalities have been proved. The first paper concerning a refinement of the Jensen inequality based on this idea is [1], and for the first application of this result, see [16]. See also [6] and [20]. Moreover, by using other variations of the concept of convexity, further refinements of some classical inequalities have been obtained, see e.g. [2] and [3] and the references in these papers.

2. The classical inequalities can in several cases be given in a more unified continuous setting and/or, e.g., in a Banach function space setting. See e.g. [12–14] and [15] and the references given there.

The main aim of this paper is to further complement and develop 1. and 2. by first proving some new continuous versions of Jensen type inequalities in both direct and reversed form by using the concept of strong convexity. Moreover, as applications we derive some corresponding continuous Hermite–Hadamard, Hölder, and Popoviciu type inequalities. For another interesting use of strongly convex functions, we also refer to [11].

In this paper we use some usual notations for measure spaces. Let \((X, \mu )\) and \((Z, \lambda ) \) be two probability measure spaces. Let \(\alpha : X\times Z \rightarrow [ 0, \infty \rangle \) be a measurable mapping such that

$$\begin{aligned} \int _{X} \alpha (x,z) \,d\mu (x) =1 \quad \text{for each }z \in Z \end{aligned}$$
(1)

and

$$\begin{aligned} \int _{Z} \alpha (x,z) \,d\lambda (z) =1 \quad \text{for each }x\in X. \end{aligned}$$
(2)

In [20] a continuous refinement of the Jensen inequality is given.

Theorem 1.1

Let \((X, \mu )\) and \((Z, \lambda ) \) be two probability measure spaces, and let \(\alpha : X \times Z \rightarrow [0,\infty \rangle \) be a measurable function on \(X\times Z\) satisfying (1) and (2). If φ is a real convex function on the interval \(I\subseteq {\mathbf{R}}\), then for any function \(f:X \rightarrow I\) with \(f, \varphi \circ f\in L^{1}(\mu )\), it holds that

$$ \varphi \biggl( \int _{X} f \,d\mu \biggr) \leq \int _{Z} \varphi \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d\lambda (z) \leq \int _{X} (\varphi \circ f) \,d\mu . $$
(3)

If φ is concave, then the reversed signs of the inequalities hold in (3).

For the case when λ is a discrete measure, this refinement of the Jensen inequality has been rediscovered recently, see e.g. [7], while similar results for some particular cases of α can be found in [5, 6, 17].
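For a concrete discrete illustration (our construction, not taken from the paper, but consistent with conditions (1) and (2)): if μ and λ are uniform probability measures on n-point sets and P is any doubly stochastic n×n matrix, then \(\alpha (x,z)=nP_{xz}\) satisfies both conditions, and the inner integral in (3) becomes \((P^{T}f)(z)\). A short NumPy sketch verifying the chain (3) for an arbitrarily chosen convex φ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
f = rng.uniform(-1.0, 2.0, size=n)        # values of f on a uniform n-point space X
phi = np.exp                              # an arbitrarily chosen convex function

# alpha(x, z) = n * P[x, z] satisfies (1) and (2) whenever P is doubly stochastic.
P = 0.5 * np.full((n, n), 1.0 / n) + 0.5 * np.eye(n)

left = phi(f.mean())                      # phi of the mean
middle = phi(P.T @ f).mean()              # the refining middle term of (3)
right = phi(f).mean()                     # the mean of phi
assert left <= middle + 1e-12 and middle <= right + 1e-12
```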

One of our objectives is to give results for strongly convex functions. So, let us recall a definition and some useful facts about this class of functions.

Definition 1.2

Let I be an interval of the real line. A function \(\varphi : I \rightarrow {\mathbf{R}}\) is called a strongly convex function with modulus \(c>0\) if

$$\begin{aligned} \varphi \bigl(tu+(1-t)v\bigr) \leq t \varphi (u) +(1-t)\varphi (v) -ct(1-t) (u-v)^{2} \end{aligned}$$

for all \(u,v \in I\) and all \(t\in [0,1]\).

The theory of strongly convex functions is vast, but here we point out only a very useful characterization of this class. Namely, a function φ is strongly convex with modulus \(c>0\) if and only if the function \(\psi (x)=\varphi (x) -cx^{2}\) is convex [11]. The Jensen inequality for strongly convex functions is given in its discrete and integral forms in [9]. A slightly modified result is the following theorem.
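This characterization is easy to test numerically. The sketch below (Python/NumPy; the choice \(\varphi (x)=x^{2}+e^{x}\) is ours, for illustration only) samples the inequality of Definition 1.2 with modulus \(c=1\), which is admissible because \(\varphi (x)-1\cdot x^{2}=e^{x}\) is convex:

```python
import numpy as np

rng = np.random.default_rng(1)
# Strongly convex with modulus c = 1, since phi(x) - x^2 = e^x is convex.
phi = lambda x: x**2 + np.exp(x)
c = 1.0

u, v = rng.uniform(-3, 3, size=(2, 1000))
t = rng.uniform(0, 1, size=1000)
lhs = phi(t * u + (1 - t) * v)
rhs = t * phi(u) + (1 - t) * phi(v) - c * t * (1 - t) * (u - v)**2
assert np.all(lhs <= rhs + 1e-9)          # the inequality of Definition 1.2
```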

Theorem 1.3

Let \((X,\mu )\) be a probability measure space, and let I be an interval in \({\mathbf{R}}\). Let \(\varphi : I \rightarrow {\mathbf{R}}\) be a strongly convex function with modulus \(c>0\), and let \(f:X\rightarrow I\) be a function such that \(f, f^{2} \in L^{1}(\mu )\). Then

$$\begin{aligned} \varphi \biggl( \int _{X} f \,d\mu \biggr) \leq \int _{X} (\varphi \circ f) \,d\mu - c \int _{X} (f-\bar{f})^{2} \,d\mu , \end{aligned}$$
(4)

where \(\bar{f}= \int _{X} f \,d\mu \).
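For a uniform probability measure on a finite set, inequality (4) can be checked directly; the following sketch again uses the illustrative choice \(\varphi (x)=x^{2}+e^{x}\) with \(c=1\):

```python
import numpy as np

rng = np.random.default_rng(2)
phi = lambda x: x**2 + np.exp(x)   # strongly convex with modulus c = 1
c = 1.0
f = rng.uniform(-2, 2, size=50)    # f on a uniform 50-point probability space
fbar = f.mean()                    # the mean value f-bar

# Inequality (4): the Jensen gap is at least c times the variance of f.
assert phi(fbar) <= phi(f).mean() - c * ((f - fbar)**2).mean() + 1e-12
```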

The paper is organized as follows: in Sect. 2 we derive the announced refinement of the Jensen inequality (see Theorem 2.1). In order to be able to see that our results generalize and unify some other recent results in the literature (see [5, 7, 17], and [21]), we point out some more or less direct consequences of Theorem 2.1 (see Corollaries 2.3 and 2.4). The corresponding results both for convex and strongly convex functions are discussed and proved in Sect. 3 (see Theorems 3.1 and 3.2). As applications, we derive in Sect. 4 some corresponding new continuous versions of the Hermite–Hadamard inequality (see Theorem 4.1). Moreover, in Sect. 5 the corresponding results concerning the Hölder and Popoviciu inequalities are discussed and proved (see Theorem 5.1). Finally, in order to shed more light on the “gaps” in some of our inequalities, we use Sect. 6 to derive some important properties of the functionals describing these gaps (see Theorems 6.1 and 6.2).

2 Refinements of the Jensen inequality for strongly convex functions

The main result in this section reads as follows.

Theorem 2.1

Let the assumptions of Theorem 1.1 hold. If \(\varphi : I \rightarrow {\mathbf{R}}\) is a strongly convex function with modulus \(c>0\) and \(f:X\rightarrow I\) is a function such that \(f, f^{2} \in L^{1}(\mu )\), then

$$\begin{aligned} \varphi \biggl( \int _{X} f \,d\mu \biggr) \leq & \int _{Z} \varphi \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d \lambda (z) \\ &{}- c \int _{Z} \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) - \bar{f} \biggr)^{2} \,d\lambda (z) \\ \leq & \int _{X} (\varphi \circ f) \,d\mu - c \int _{X} (f-\bar{f})^{2} \,d\mu , \end{aligned}$$
(5)

where \(\bar{f}= \int _{X} f \,d\mu \).

Proof

Since the function φ is strongly convex, the function \(\varphi - c(\cdot )^{2}\) is convex, and the refinement (3) holds for it. Therefore, after adding the term \(c ( \int _{X} f \,d\mu )^{2} = c\bar{f}^{2}\) to each side of the refinement of the Jensen inequality, we get

$$\begin{aligned} \varphi \biggl( \int _{X} f \,d\mu \biggr) \leq & \int _{Z} \varphi \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d \lambda (z) \\ &{}- c \biggl[ \int _{Z} \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr)^{2} \,d\lambda (z) - \bar{f}^{2} \biggr] \\ \leq & \int _{X} (\varphi \circ f) \,d\mu - c \biggl[ \int _{X} f^{2} \,d\mu - \bar{f}^{2} \biggr]. \end{aligned}$$
(6)

Let us transform the term \(\int _{X} f^{2} \,d\mu - \bar{f}^{2}\) as follows:

$$\begin{aligned} \int _{X} f^{2} \,d\mu - \bar{f}^{2} =& \int _{X} f^{2} \,d\mu -2 \bar{f} \cdot \bar{f} + \bar{f}^{2} \\ =& \int _{X} f^{2} \,d\mu -2 \int _{X} \bar{f} f \,d\mu + \int _{X} \bar{f}^{2} \,d\mu = \int _{X} ( f-\bar{f})^{2} \,d\mu . \end{aligned}$$

We denote \(F(z):= \int _{X}f(x) \alpha (x,z) \,d\mu (x) \). Using the Fubini theorem and the properties of the weight α, we obtain

$$\begin{aligned} \bar{F} :=& \int _{Z} F(z) \,d\lambda (z) = \int _{Z} \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d\lambda (z) \\ =& \int _{X} \biggl( \int _{Z}\alpha (x,z) \,d\lambda (z) \biggr) f(x) \,d\mu (x) \\ =& \int _{X} 1\cdot f(x) \,d\mu (x) = \bar{f}. \end{aligned}$$
(7)

With this notation, and using the same method as previously, we find that the second term in the middle expression of (6) is equal to

$$\begin{aligned} -c \biggl[ \int _{Z} F^{2}(z) \,d\lambda (z) - \bar{F}^{2} \biggr] =& -c \int _{Z} \biggl[ \int _{X} f(x)\alpha (x,z)\,d\mu (x) - \bar{f} \biggr]^{2} \,d\lambda (z). \end{aligned}$$
(8)

By combining (6)–(8) we obtain (5), and the proof is complete. □

Remark 2.2

Since \(c>0\), the chain of inequalities in (5) can be extended by the term \(\leq \int _{X} (\varphi \circ f) \,d\mu \). So Theorem 2.1 is indeed a genuine refinement of the Jensen inequality.
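The whole chain (5) can be verified numerically in the discrete setting sketched earlier (uniform measures and \(\alpha =nP\) for a doubly stochastic P; the concrete strongly convex φ is again an illustrative choice of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
phi = lambda x: x**2 + np.exp(x)   # strongly convex with modulus c = 1
c = 1.0
f = rng.uniform(-2, 2, size=n)
P = 0.5 * np.full((n, n), 1.0 / n) + 0.5 * np.eye(n)  # doubly stochastic

fbar = f.mean()
F = P.T @ f                        # F(z) = int_X f(x) alpha(x, z) dmu(x)

left = phi(fbar)
middle = phi(F).mean() - c * ((F - fbar)**2).mean()
right = phi(f).mean() - c * ((f - fbar)**2).mean()
top = phi(f).mean()                # plain Jensen bound, cf. Remark 2.2
assert left <= middle + 1e-12 and middle <= right + 1e-12 and right <= top + 1e-12
```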

It is interesting to state the corresponding refinements in some particular cases, such as the discrete Jensen inequality and the integral Jensen inequality with finitely many functions.

Corollary 2.3

(i) Let \(-\infty \leq a < b \leq \infty \), \(\varphi : I \rightarrow {\mathbf{R}}\) be a strongly convex function with modulus c. Let \(w, f, \alpha _{i} : [a,b] \rightarrow {\mathbf{R}}\), \(i=1,2,\ldots , n\), be integrable functions such that \(w, \alpha _{i} \geq 0\), \(\sum_{i=1}^{n} \alpha _{i}(x)=1\) for each \(x\in [a,b]\), \(\int _{a}^{b} w \,dx \neq 0\), \(\int _{a}^{b} \alpha _{i} w \,dx \neq 0\), and \(f([a,b]) \subseteq I\). Then the following refinement of the Jensen inequality holds:

$$\begin{aligned} \varphi \biggl( \frac{1}{W} \int _{a}^{b} f w \,dx \biggr) \leq & \frac{1}{W} \sum_{i=1}^{n} \biggl( \int _{a}^{b} \alpha _{i} w \,dx \biggr) \varphi \biggl( \frac{\int _{a}^{b} \alpha _{i} fw \,dx}{\int _{a}^{b} \alpha _{i} w \,dx} \biggr) \\ &{}- \frac{c}{W} \sum_{i=1}^{n} \biggl( \int _{a}^{b} \alpha _{i} w \,dx \biggr) \biggl( \frac{\int _{a}^{b} \alpha _{i} fw \,dx}{\int _{a}^{b} \alpha _{i} w \,dx} - \bar{f} \biggr)^{2} \\ \leq & \frac{1}{W} \int _{a}^{b} (\varphi \circ f)w \,dx - \frac{c}{W} \int _{a}^{b} (f-\bar{f})^{2}w \,dx \\ \leq & \frac{1}{W} \int _{a}^{b} (\varphi \circ f)w \,dx, \end{aligned}$$
(9)

where \(W= \int _{a}^{b} w \,dx\) and \(\bar{f}=\frac{1}{W} \int _{a}^{b} fw \,dx\).

(ii) Let \(w_{j}\), \(j=1, \ldots , m\), be nonnegative numbers such that \(\sum_{j=1}^{m} w_{j} \neq0\), let \(\alpha _{ij}\), \(i=1, \ldots , n\), \(j=1, \ldots , m\), be nonnegative numbers such that \(\sum_{j=1}^{m} w_{j} \alpha _{ij}\neq 0\), \(i=1, \ldots , n\), and \(\sum_{i=1}^{n} \alpha _{ij}=1\), \(j=1, \ldots , m\). Let \(f_{j}\), \(j=1, \ldots , m\), be real numbers from an interval I. Then, for any strongly convex function \(\varphi :I\rightarrow {\mathbf{R}}\) with modulus c, the following refinement of the discrete Jensen inequality holds:

$$\begin{aligned} \varphi \Biggl( \frac{1}{W} \sum _{j=1}^{m} w_{j} f_{j} \Biggr) \leq & \frac{1}{W} \sum_{i=1}^{n} \Biggl(\sum_{j=1}^{m} w_{j} \alpha _{ij} \Biggr) \varphi \biggl( \frac{\sum_{j=1}^{m} w_{j}\alpha _{ij} f_{j}}{\sum_{j=1}^{m} w_{j} \alpha _{ij}} \biggr) \\ &{}- \frac{c}{W} \sum_{i=1}^{n} \Biggl(\sum_{j=1}^{m} w_{j} \alpha _{ij} \Biggr) \biggl( \frac{\sum_{j=1}^{m} w_{j} \alpha _{ij}f_{j}}{\sum_{j=1}^{m} w_{j} \alpha _{ij}} - \bar{f} \biggr)^{2} \\ \leq & \frac{1}{W} \sum_{j=1}^{m} w_{j} \varphi (f_{j}) - \frac{c}{W} \sum _{j=1}^{m} w_{j} (f_{j}- \bar{f})^{2} \\ \leq & \frac{1}{W} \sum_{j=1}^{m} w_{j} \varphi (f_{j}), \end{aligned}$$
(10)

where \(W= \sum_{j=1}^{m} w_{j}\) and \(\bar{f}=\frac{1}{W} \sum_{j=1}^{m} w_{j} f_{j}\).

Proof

(i) By applying Theorem 2.1 for

$$\begin{aligned}& Z=Z_{1} \cup \cdots \cup Z_{n}, \qquad Z_{i}=[i-1, i \rangle , \qquad Z_{n}=[n-1,n], \quad i=1,2,\ldots , n-1, \\& X=[a,b], \qquad d\mu (x)=\frac{w(x)}{W} \,dx, \qquad d\lambda (z)= \frac{1}{W} \biggl( \int _{a}^{b} \alpha _{i} w \,dx \biggr) \,dz \quad \text{for }z\in Z_{i}, \\& \alpha (x,z)=W \frac{\alpha _{i}(x)}{\int _{a}^{b} \alpha _{i} w \,dx} \quad \text{for }z\in Z_{i}, \end{aligned}$$

the inequalities in (5) become (9).

(ii) By applying Theorem 2.1 for the same substitutions for Z and λ as we did in the proof of the first part, together with the following:

$$\begin{aligned}& X=X_{1} \cup \cdots \cup X_{m}, \qquad X_{i}=[i-1, i \rangle , \qquad X_{m}=[m-1,m], \quad i=1,2,\ldots , m-1, \\& d\mu (x)=\frac{w(x)}{W} \,dx, \qquad w(x)=w_{j}, \qquad f(x)=f_{j}, \qquad \alpha _{i}(x)=\alpha _{ij} \quad \text{for }x\in X_{j}, \end{aligned}$$

the inequalities in (5) and trivial arguments give (10). The proof is complete. □
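A numerical check of the discrete refinement (10), with random nonnegative weights \(w_{j}\) and a random array \(\alpha _{ij}\) normalized so that \(\sum_{i} \alpha _{ij}=1\) (the strongly convex φ is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 8, 3
phi = lambda x: x**2 + np.exp(x)          # strongly convex with modulus c = 1
c = 1.0
w = rng.uniform(0.1, 1.0, size=m)
f = rng.uniform(-2, 2, size=m)
A = rng.uniform(0.1, 1.0, size=(n, m))
A /= A.sum(axis=0)                        # alpha_ij >= 0 with sum_i alpha_ij = 1 for each j

W = w.sum()
fbar = (w * f).sum() / W
Wi = (A * w).sum(axis=1)                  # sum_j w_j alpha_ij
Fi = (A * (w * f)).sum(axis=1) / Wi       # the local weighted means
left = phi(fbar)
mid = (Wi * phi(Fi)).sum() / W - c * (Wi * (Fi - fbar)**2).sum() / W
right = (w * phi(f)).sum() / W - c * (w * (f - fbar)**2).sum() / W
top = (w * phi(f)).sum() / W
assert left <= mid + 1e-12 and mid <= right + 1e-12 and right <= top + 1e-12
```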

Also, we state a refinement with finitely many functions based on a partition of the space X. Namely, we get the following.

Corollary 2.4

Let the assumptions of Theorem 1.3 hold. Let \(X_{1}, \ldots , X_{n}\) be a partition of the set X. Let μ have the additional property that \(\int _{X_{i}} \,d\mu \neq 0\), \(i=1,2,\ldots , n\).

Then, for any strongly convex function \(\varphi :I\rightarrow {\mathbf{R}}\) with modulus c, the following refinement of the Jensen inequality holds:

$$\begin{aligned} \varphi \biggl( \int _{X} f \,d\mu \biggr) \leq & \sum _{i=1}^{n} \biggl( \int _{X_{i}} \,d\mu \biggr) \varphi \biggl( \frac{\int _{X_{i}} f \,d\mu}{\int _{X_{i}} \,d\mu} \biggr) \\ &{}- c \sum_{i=1}^{n} \biggl( \int _{X_{i}} \,d\mu \biggr) \biggl( \frac{\int _{X_{i}} f \,d\mu}{\int _{X_{i}} \,d\mu} - \bar{f} \biggr)^{2} \\ \leq & \int _{X} (\varphi \circ f) \,d\mu - c \int _{X} (f-\bar{f})^{2} \,d\mu \\ \leq & \int _{X} (\varphi \circ f) \,d\mu , \end{aligned}$$
(11)

where \(\bar{f}= \int _{X} f \,d\mu \).

Proof

Let us use the same partition of the space Z as in the proof of Corollary 2.3, where the functions \(\alpha _{i}\) are defined as

$$ \alpha _{i} = \chi _{X_{i}}, \quad i=1,2,\ldots ,n.$$

Here, \(\chi _{S}\) denotes the characteristic function of the set S. Then the assumptions of Theorem 2.1 are satisfied. Hence, (5) and a trivial estimate show that the inequalities in (11) hold. The proof is complete. □

If in Corollary 2.4 we put \(X=[a,b]\), \(a=a_{0}< a_{1}< a_{2}< \cdots <a_{n}=b\) and \(X_{i}=[a_{i-1}, a_{i} \rangle \) for \(i=1,2,\ldots ,n\), \(d\mu = \frac{w(x)}{W} \,dx\), then the inequalities in (11) become as follows:

$$\begin{aligned} \varphi \biggl( \frac{1}{W} \int _{a}^{b} fw \,dx \biggr) \leq & \frac{1}{W} \sum_{i=1}^{n} \biggl( \int _{a_{i-1}}^{a_{i}} w \,dx \biggr) \varphi \biggl( \frac{\int _{a_{i-1}}^{a_{i}} fw \,dx}{\int _{a_{i-1}}^{a_{i}} w \,dx} \biggr) \\ &{}- \frac{c}{W} \sum_{i=1}^{n} \biggl( \int _{a_{i-1}}^{a_{i}} w \,dx \biggr) \biggl( \frac{\int _{a_{i-1}}^{a_{i}} fw \,dx}{\int _{a_{i-1}}^{a_{i}} w \,dx} - \bar{f} \biggr)^{2} \\ \leq & \frac{1}{W} \int _{a}^{b} (\varphi \circ f)w \,dx - \frac{c}{W} \int _{a}^{b} (f-\bar{f})^{2} w \,dx \\ \leq & \frac{1}{W} \int _{a}^{b} (\varphi \circ f)w \,dx, \end{aligned}$$
(12)

where \(W= \int _{a}^{b} w \,dx\) and \(\bar{f}=\frac{1}{W} \int _{a}^{b} f w \,dx\).

Remark 2.5

When the function φ is merely convex, i.e., when \(c=0\), some of the above-mentioned results are already known.

The refinement via two functions \(\alpha _{1}\), \(\alpha _{2}\) with \(\alpha _{1}+\alpha _{2}=1\), involving integrals for a convex function φ, was published very recently in [7], together with applications in information theory. The results for finite sequences for a convex function φ are given in [21].

The result of Corollary 2.4 for \(n=2\) and \(c=0\) is the main result of [5].

If \(c=0\), i.e., if φ is a convex function, then the corresponding version of (12) is given in [17].

3 Refinements of the reverse Jensen inequality for convex and strongly convex functions

The simplest form of the reverse Jensen inequality is the following inequality where one weight is positive while the second one is negative:

Let φ be a real convex function on I. If p and q are positive numbers such that \(p-q>0\), then

$$\begin{aligned} (p-q) \varphi \biggl( \frac{pa-qb}{p-q} \biggr) \geq p\varphi (a) - q \varphi (b) \end{aligned}$$
(13)

for all \(a,b\in I\) such that \(\frac{pa-qb}{p-q} \in I\).

This follows from the definition of a convex function: \(\varphi (tx+(1-t)y)\leq t \varphi (x) +(1-t)\varphi (y) \), \(t\in [0,1]\), \(x,y\in I\) after the substitutions

$$ t=\frac{p-q}{p}, \qquad x=\frac{pa-qb}{p-q}, \qquad y=b. $$
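The elementary reverse inequality (13) is easy to check numerically; the sketch below uses \(\varphi =\exp \) and a few arbitrary choices of a, b with \(p=3\), \(q=1\):

```python
import numpy as np

phi = np.exp                 # a convex function on R
p, q = 3.0, 1.0              # p > q > 0

for a, b in [(0.5, 2.0), (-1.0, 1.5), (2.0, -0.5)]:
    x = (p * a - q * b) / (p - q)
    # Inequality (13): (p - q) * phi(x) >= p * phi(a) - q * phi(b)
    assert (p - q) * phi(x) >= p * phi(a) - q * phi(b) - 1e-12
```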

The reverse Jensen inequality for integrals follows from Lemma 4.25 in the book [18, p. 124] and has the following form.

Theorem 3.1

Let \((X, \mu )\) be a probability measure space. Let \(u_{0}, f_{0}\in {\mathbf{R}}\), \(u_{0}>1\). Let φ be a real convex function on an interval I and \(f_{0}\in I\). Let f be a function on X such that f and \(\varphi \circ f\) are integrable and \(\frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \in I\). Then

$$\begin{aligned} (u_{0} - 1)\cdot \varphi \biggl( \frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \biggr) \geq u_{0} \varphi (f_{0}) - \int _{X} (\varphi \circ f) \,d\mu . \end{aligned}$$
(14)

If φ is concave, then the reversed inequality holds.

The best-known consequences of the previous inequality are the Popoviciu and Bellman inequalities, which are reverses of the Hölder and Minkowski inequalities, respectively. Here we give a proof of Theorem 3.1 since we will use one step of it in our further investigation.

Proof

By putting in (13)

$$ p=u_{0}, \qquad q=1, \qquad a=f_{0}, \qquad b= \int _{X} f \,d\mu , $$

we obtain that

$$\begin{aligned} (u_{0} - 1) \cdot \varphi \biggl( \frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \biggr) \geq & u_{0} \varphi (f_{0}) - \varphi \biggl( \int _{X} f \,d\mu \biggr) \\ \geq & u_{0} \varphi (f_{0}) - \int _{X} (\varphi \circ f) \,d\mu , \end{aligned}$$
(15)

where in the last inequality we use the Jensen inequality for integrals. □

The following theorem is a continuous refinement of the previously mentioned reverse Jensen inequality for integrals.

Theorem 3.2

(Continuous refinement of the reverse Jensen inequality for convex function)

Let the assumptions of Theorem 1.1 hold. Additionally, let \(u_{0}\in {\mathbf{R}}\) be such that \(u_{0}>1\). Let φ be a real convex function on an interval I and \(f_{0}\in I\). Let f be a function on X such that f and \(\varphi \circ f\) are integrable and \(\frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \in I\). Then

$$\begin{aligned} &(u_{0} - 1)\cdot \varphi \biggl( \frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} -1} \biggr) \\ &\quad \geq u_{0} \varphi (f_{0}) - \int _{Z} \varphi \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d\lambda (z) \\ &\quad \geq u_{0} \varphi (f_{0}) - \int _{X} (\varphi \circ f) \,d\mu . \end{aligned}$$
(16)

If φ is concave, then the reversed signs of the inequalities hold.

Proof

Using the first inequality in (15) and the result of Theorem 1.1, we get

$$\begin{aligned} (u_{0} - 1)\cdot \varphi \biggl( \frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} -1} \biggr) &\geq u_{0} \varphi (f_{0}) - \varphi \biggl( \int _{X} f \,d\mu \biggr) \\ & \geq u_{0} \varphi (f_{0}) - \int _{Z} \varphi \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d\lambda (z) \\ & \geq u_{0} \varphi (f_{0}) - \int _{X} (\varphi \circ f) \,d\mu , \end{aligned}$$
(17)

and the proof is complete. □
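Chain (16) can also be tested in the finite uniform setting with \(\alpha =nP\) for a doubly stochastic P (again with the illustrative choice \(\varphi =\exp \)):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
phi = np.exp                                          # convex
f = rng.uniform(-1, 1, size=n)                        # f on a uniform n-point space X
P = 0.5 * np.full((n, n), 1.0 / n) + 0.5 * np.eye(n)  # alpha = n * P satisfies (1) and (2)
u0, f0 = 2.0, 0.3
fbar = f.mean()
F = P.T @ f

lhs = (u0 - 1) * phi((u0 * f0 - fbar) / (u0 - 1))
mid = u0 * phi(f0) - phi(F).mean()
rhs = u0 * phi(f0) - phi(f).mean()
assert lhs >= mid - 1e-12 and mid >= rhs - 1e-12      # the chain (16)
```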

By using the previous theorem, it is easy to obtain a continuous refinement of the reverse Jensen inequality for a strongly convex function.

Theorem 3.3

(Refinement of the reverse Jensen inequality for a strongly convex function)

Let the assumptions of Theorems 1.1 and 3.2 hold. Then, for the strongly convex function φ, the following holds:

$$\begin{aligned} &(u_{0} - 1)\cdot \varphi \biggl( \frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \biggr) \\ &\quad \geq u_{0} \varphi (f_{0}) - \int _{Z} \varphi \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d\lambda (z) \\ &\qquad{} - c \biggl( u_{0} f_{0}^{2} - \int _{Z} \biggl( \int _{X} f(x) \alpha (x,z) \,d\mu (x) \biggr)^{2} \,d\lambda (z) - \frac{(u_{0}f_{0} - \int _{X} f \,d\mu )^{2}}{u_{0} - 1} \biggr) \\ &\quad \geq u_{0} \varphi (f_{0}) - \int _{X} (\varphi \circ f) \,d\mu - \frac{c}{u_{0}-1} \int _{X} \bigl[ (f-\overline{f})^{2} - u_{0}(f_{0} -f)^{2} \bigr] \,d\mu , \end{aligned}$$

where \(\overline{f} = \int _{X} f \,d\mu \).

Proof

By applying the result of Theorem 3.2 for the convex function \(\varphi -c(\cdot )^{2}\), we get the desired result. □

4 Refinements of some Hermite–Hadamard inequalities for strongly convex functions

As a first application of Theorem 2.1 we note the following: a particular choice of the measure μ and the function f gives a continuous refinement of the left-hand side of the well-known Hermite–Hadamard inequality. In fact, by putting in (5) \(X=[a,b]\), \(d\mu (x)=\frac{1}{b-a} \,dx\), and \(f(x)=x\) for \(x\in [a,b]\), we get

$$\begin{aligned} \varphi \biggl( \frac{a+b}{2} \biggr) \leq & \int _{Z} \varphi \biggl( \frac{1}{b-a} \int _{a}^{b} x \alpha (x,z) \,dx \biggr) \,d \lambda (z) \\ &{}- c \int _{Z} \biggl(\frac{1}{b-a} \int _{a}^{b} x \alpha (x,z) \,dx - \frac{a+b}{2} \biggr)^{2} \,d\lambda (z) \\ \leq & \frac{1}{b-a} \int _{a}^{b} \varphi (x) \,dx - \frac{c}{12}(b-a)^{2}, \end{aligned}$$
(18)

where α satisfies (1) and (2).

The inequality between the first and the third term in chain (18) is already known. It is the left-hand side of the Hermite–Hadamard inequality for a strongly convex function, and it is given in [9]. Hence, (18) is a continuous refinement of this result.

A discrete refinement of the left-hand side of the Hermite–Hadamard inequality for a convex function is given in [17]. Here we give a generalization of it, namely, a discrete refinement of the left-hand side of the Hermite–Hadamard inequality for a strongly convex function. It follows from (12) applied with \(w(x)=1\) and \(f(x)=x\) for \(x\in [a,b]\):

$$\begin{aligned} \varphi \biggl( \frac{a+b}{2} \biggr) \leq & \frac{1}{b-a} \sum_{i=1}^{n} ({a_{i}}- a_{i-1} ) \varphi \biggl( \frac{a_{i}+ a_{i-1}}{2} \biggr) \\ &{}- \frac{c}{b-a} \sum_{i=1}^{n} ({a_{i}}- a_{i-1} ) \biggl(\frac{a_{i}+ a_{i-1}}{2} - \frac{a+b}{2} \biggr)^{2} \\ \leq & \frac{1}{b-a} \int _{a}^{b} \varphi (x) \,dx - \frac{c}{12}(b-a)^{2}. \end{aligned}$$
(19)

The particular case of (19) for \(n=2\), \(a_{0}=a\), \(a_{1}=\frac{a+b}{2}\), \(a_{2}=b\) is given in [4].
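Chain (19) can be verified numerically; in the sketch below \([a,b]=[0,1]\), the partition points and the strongly convex \(\varphi (x)=x^{2}+e^{x}\) (modulus \(c=1\)) are illustrative choices, and the integral of φ is computed in closed form:

```python
import numpy as np

phi = lambda x: x**2 + np.exp(x)       # strongly convex on [0, 1] with modulus c = 1
c, a, b = 1.0, 0.0, 1.0
pts = np.array([0.0, 0.3, 0.7, 1.0])   # a = a_0 < a_1 < ... < a_n = b
d = np.diff(pts)                       # interval lengths a_i - a_{i-1}
mids = (pts[1:] + pts[:-1]) / 2        # interval midpoints

left = phi((a + b) / 2)
mid = (d * phi(mids)).sum() / (b - a) - c * (d * (mids - (a + b) / 2)**2).sum() / (b - a)
# (1/(b-a)) * int_a^b phi(x) dx, computed exactly for this phi
mean_phi = ((b**3 - a**3) / 3 + np.exp(b) - np.exp(a)) / (b - a)
right = mean_phi - c * (b - a)**2 / 12
assert left <= mid + 1e-12 and mid <= right + 1e-12
```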

The refinement of the right-hand side of the Hermite–Hadamard inequality is based on the Lah–Ribarič inequality, and we cannot directly obtain a continuous refinement. A discrete refinement of the right-hand side of the Hermite–Hadamard inequality for a convex function is given in [17]. Here we derive a refinement of the Lah–Ribarič inequality for a strongly convex function, which follows from the result for a convex function applied with the convex function \(\varphi - c(\cdot )^{2}\).

Theorem 4.1

Let f, w be integrable functions on \([a,b]\), \(w\geq 0\), \(W:=\int _{a}^{b} w(t) \,dt \neq 0\), and let \(a_{0},a_{1}, \ldots ,a_{n}\) be such that \(a=a_{0}< a_{1}<\cdots <a_{n}=b\) and \(m_{i} \leq f(t) \leq M_{i}\) for \(t\in [a_{i-1}, a_{i}]\), \(m_{i} \neq M_{i}\), \(i=1,2,\ldots ,n\), and \(m=\min \{ m_{1}, \ldots ,m_{n}\}\), \(M=\max \{ M_{1}, \ldots ,M_{n}\}\). If \(\varphi : I \rightarrow {\mathbf{R}}\) is a strongly convex function with modulus c, \(f([a,b]) \subseteq I\), then

(i)

$$\begin{aligned} &\frac{1}{W} \int _{a}^{b} \varphi \bigl(f(t)\bigr) w(t) \,dt - \frac{c}{W} \int _{a}^{b} f^{2}(t) w(t) \,dt \\ &\quad \leq \frac{1}{W} \sum_{i=1}^{n} w_{i} \biggl[ \frac{M_{i}-\bar{f}_{i}}{M_{i}-m_{i}} \varphi (m_{i}) + \frac{\bar{f}_{i} -m_{i}}{M_{i}-m_{i}} \varphi (M_{i}) \biggr] \\ &\qquad{} - \frac{c}{W} \sum_{i=1}^{n} w_{i} \bigl[\bar{f}_{i} (M_{i}+m_{i}) -m_{i}M_{i}\bigr] \\ &\quad \leq \frac{M-\bar{f}}{M-m} \varphi (m) + \frac{\bar{f}-m}{M-m} \varphi (M) - c \bigl[\bar{f} (M+m)-mM\bigr], \end{aligned}$$
(20)

where \(\bar{f} = \frac{1}{W} \int _{a}^{b} f(t)w(t) \,dt\), \(w_{i} = \int _{a_{i-1}}^{a_{i}} w(t) \,dt\), and \(\bar{f}_{i} = \frac{1}{w_{i}} \int _{a_{i-1}}^{a_{i}} f(t)w(t) \,dt\).

(ii)

$$\begin{aligned} &\frac{1}{b-a} \int _{a}^{b} \varphi \bigl(f(t)\bigr) \,dt \\ &\quad \leq \frac{1}{b-a} \sum_{i=1}^{n} (a_{i}-a_{i-1}) \frac{\varphi (a_{i-1})+\varphi (a_{i})}{2} \\ &\qquad{} - \frac{c}{b-a} \Biggl[ \sum _{i=1}^{n} (a_{i}-a_{i-1}) \frac{a_{i}^{2}+a_{i-1}^{2}}{2} \Biggr] +\frac{c}{3}\bigl(a^{2}+ab+b^{2} \bigr) \\ &\quad \leq \frac{\varphi (a) + \varphi (b)}{2} - c \frac{(b-a)^{2}}{6}. \end{aligned}$$

Proof

(i) If φ is a strongly convex function, then the function \(\varphi - c(\cdot )^{2}\) is convex. Applying Theorem 2.3 from [17] with the convex function \(\varphi - c(\cdot )^{2}\), after simple calculations we get the statement of this theorem.

(ii) This result follows from inequalities (20) using \(w(t)=1\) and \(f(t)=t\). □

As we can see, the chain of inequalities (20) is a refinement of the right-hand side of the Hermite–Hadamard inequality for a strongly convex function. If \(n=2\), \(a_{0}=a\), \(a_{1}=\frac{a+b}{2}\), \(a_{2}=b\), we get the result from [4, Theorem 5].
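Part (ii) of Theorem 4.1 can be checked the same way (again with the illustrative \(\varphi (x)=x^{2}+e^{x}\), \(c=1\), and an arbitrary partition of \([0,1]\)):

```python
import numpy as np

phi = lambda x: x**2 + np.exp(x)   # strongly convex with modulus c = 1
c, a, b = 1.0, 0.0, 1.0
p = np.array([0.0, 0.4, 1.0])      # partition a = a_0 < a_1 < a_2 = b
d = np.diff(p)

left = ((b**3 - a**3) / 3 + np.exp(b) - np.exp(a)) / (b - a)   # exact (1/(b-a)) int phi
mid = (((d * (phi(p[:-1]) + phi(p[1:])) / 2).sum()
        - c * (d * (p[:-1]**2 + p[1:]**2) / 2).sum()) / (b - a)
       + c * (a * a + a * b + b * b) / 3)
right = (phi(a) + phi(b)) / 2 - c * (b - a)**2 / 6
assert left <= mid + 1e-12 and mid <= right + 1e-12
```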

5 Refinements of the Hölder and Popoviciu inequalities

It is known that the Hölder inequality is a consequence of the Jensen inequality for an appropriate function φ. In what follows we use a version of (3) for a general measure μ, not only for a probability measure. In that case (3) has the following form:

$$\begin{aligned} \varphi \biggl(\frac{1}{W} \int _{X} f(x)w(x) \,d\mu (x) \biggr) \leq & \int _{Z} \varphi \biggl(\frac{1}{W} \int _{X} f(x) \alpha (x,z)w(x) \,d\mu (x) \biggr) \,d\lambda (z) \\ \leq & \frac{1}{W} \int _{X} (\varphi \circ f) (x) w(x) \,d\mu (x), \end{aligned}$$
(21)

where φ is convex, \(w:X\rightarrow [0,\infty \rangle \) is a measurable function such that \(W:= \int _{X} w \,d\mu \neq 0\) and

$$ \int _{X} \alpha w \,d\mu = \int _{X} w \,d\mu \quad \text{for each } z\in Z, \qquad \int _{Z} \alpha \,d\lambda =1\quad \text{for each } x\in X. $$
(22)

If φ is concave, then the reversed signs of the inequalities hold in (21).

If \(r,s >1\) are numbers such that \(\frac{1}{r} + \frac {1}{s}=1\), then (21) for the concave function \(\varphi (x)=x^{1/r}\) with the substitutions \(w=wg^{s}\), \(f=f^{r} g^{-s}\), where α satisfies assumption (22) and \(\int _{X} wg^{s}\alpha \,d\mu = \int _{X} wg^{s} \,d\mu \), gives the following continuous refinement of the Hölder inequality:

$$\begin{aligned} \Vert fg \Vert _{1} \leq \int _{Z} \bigl\Vert \alpha ^{1/r}(\cdot , z) f \bigr\Vert _{r} \cdot \bigl\Vert \alpha ^{1/s}(\cdot , z) g \bigr\Vert _{s} \,d\lambda \leq \Vert f \Vert _{r} \cdot \Vert g \Vert _{s}. \end{aligned}$$
(23)

As usual, by \(\| F \|_{p}\) we denote \(\| F \|_{p}= ( \int _{X} |F(x)|^{p} w(x) \,d\mu (x) )^{1/p}\).
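To illustrate (23) numerically one needs an α satisfying (22) together with \(\int _{X} wg^{s}\alpha \,d\mu = \int _{X} wg^{s} \,d\mu \). In the sketch below (our construction, not from the paper) Z has two points with λ uniform, and \(\alpha (\cdot ,z)=1\pm \delta \), where δ is chosen orthogonal to \(wg^{s}\) and small enough to keep α nonnegative:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r, s = 5, 3.0, 1.5                         # 1/r + 1/s = 1
w = rng.uniform(0.5, 1.5, size=n)             # weight; mu uniform on n points
f = rng.uniform(0.5, 2.0, size=n)
g = rng.uniform(0.5, 2.0, size=n)

beta = w * g**s
d = rng.uniform(-1, 1, size=n)
d -= beta * (beta @ d) / (beta @ beta)        # now sum_x beta[x] * d[x] = 0
d *= 0.9 / np.abs(d).max()                    # keep 1 +/- d > 0
alpha = np.stack([1 + d, 1 - d], axis=1)      # columns alpha(., z), z = 1, 2

norm = lambda F, p: ((np.abs(F)**p * w).mean())**(1 / p)   # weighted p-norm
left = (np.abs(f * g) * w).mean()                          # ||fg||_1
mid = np.mean([norm(alpha[:, z]**(1 / r) * f, r) * norm(alpha[:, z]**(1 / s) * g, s)
               for z in range(2)])
right = norm(f, r) * norm(g, s)
assert left <= mid + 1e-12 and mid <= right + 1e-12        # the chain (23)
```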

Let us write the weighted version of (16).

Let w be a nonnegative measurable function on X, \(w_{0}\in {\mathbf{R}}\) be such that \(0< \int _{X} w \,d\mu < w_{0}\). Let φ be a real convex function on an interval I and \(f_{0}\in I\). Let f be a function on X such that wf and \(w(\varphi \circ f)\) are integrable and \(\frac{w_{0}f_{0} - \int _{X} wf \,d\mu}{w_{0} - \int _{X} w \,d\mu} \in I\). If α satisfies (22), then

$$\begin{aligned} &(w_{0} - W)\cdot \varphi \biggl( \frac{w_{0}f_{0} - \int _{X} fw \,d\mu}{w_{0} - W} \biggr) \\ &\quad \geq w_{0} \varphi (f_{0}) - W \int _{Z} \varphi \biggl(\frac{1}{W} \int _{X} f(x) \alpha (x,z) w(x) \,d\mu (x) \biggr) \,d\lambda (z) \\ &\quad \geq w_{0} \varphi (f_{0}) - \int _{X} (\varphi \circ f)w \,d\mu , \end{aligned}$$
(24)

where \(W= \int _{X} w \,d\mu \).

Putting in (24) the substitutions \(\varphi (x)=x^{1/r}\), \(w=wg^{s}\), \(f=f^{r} g^{-s}\), \(w_{0}=w_{0}c_{2}^{s}\), \(f_{0}=c_{1}^{r}c_{2}^{-s}\), where α satisfies assumption (22) and \(\int _{X} wg^{s}\alpha \,d\mu = \int _{X} wg^{s} \,d\mu \), we obtain the following refinement of the Popoviciu inequality:

$$\begin{aligned} w_{0}c_{1}c_{2} - \Vert fg \Vert _{1} \geq & w_{0}c_{1}c_{2} - \int _{Z} \bigl\Vert \alpha ^{1/r}(\cdot , z) f \bigr\Vert _{r} \bigl\Vert \alpha ^{1/s}(\cdot , z) g \bigr\Vert _{s} \,d \lambda \\ \geq & \bigl( w_{0}c_{1}^{r} - \Vert f \Vert _{r}^{r} \bigr)^{1/r} \bigl( w_{0}c_{2}^{s}- \Vert g \Vert _{s}^{s} \bigr)^{1/s}, \end{aligned}$$
(25)

provided that all integrals exist. We note that both (23) and (25) are stated and proved in [14]. A particular case of (23) when the continuous refinement collapses to the sum of two functions u, v, such that \(u(x)+v(x)=1\) on \(X=[a,b]\), was described in [7].

We derive the following refinement of the Hölder and the Popoviciu inequalities.

Theorem 5.1

Let \(r,s >1\) be numbers such that \(\frac {1}{r} + \frac {1}{s}=1\). Let \((X, \mu )\) and \((Z, \lambda ) \) be two measure spaces, \(\int _{Z} \,d\lambda =1\), \(w:X \rightarrow [0,\infty \rangle \) be a measurable mapping on X such that \(\int _{X} w \,d\mu \neq 0\), and \(\alpha : X \times Z \rightarrow [0,\infty \rangle \) be a function which satisfies (2). Let \(c_{1},c_{2},w_{0} >0\) and \(f,g: X \rightarrow [0,\infty \rangle \) be such that \(w_{0}c_{1}^{r} - \| f\|_{r}^{r} >0\) and \(w_{0}c_{2}^{s}- \| g\|_{s}^{s}>0\) and \(\int _{X} \alpha (x,z)w(x)g^{s}(x) \,d\mu (x)= \int _{X} w(x)g^{s}(x) \,d\mu (x)\), \(z\in Z\), hold. Then

(i) The following continuous refinement of the Hölder inequality holds:

$$\begin{aligned} \Vert fg \Vert _{1} \leq \biggl( \int _{Z} \Vert fg \alpha \Vert _{1}^{r} \,d\lambda \biggr)^{1/r} \leq \Vert f \Vert _{r} \cdot \Vert g \Vert _{s}, \end{aligned}$$
(26)

provided that all integrals exist.

(ii) The following continuous refinement of the Popoviciu inequality holds:

$$\begin{aligned} w_{0}c_{1}c_{2} - \Vert fg \Vert _{1} \geq & \bigl( w_{0}c_{2}^{s}- \Vert g \Vert _{s}^{s} \bigr)^{1/s} \biggl(w_{0}c_{1}^{r}- \frac{1}{ \Vert g \Vert _{s}^{r}} \int _{Z} \bigl\Vert fg \alpha (\cdot , z) \bigr\Vert _{1}^{r} \,d\lambda (z) \biggr)^{1/r} \\ \geq & \bigl( w_{0}c_{1}^{r} - \Vert f \Vert _{r}^{r} \bigr)^{1/r} \bigl( w_{0}c_{2}^{s}- \Vert g \Vert _{s}^{s} \bigr)^{1/s}, \end{aligned}$$
(27)

provided that all integrals exist.

Proof

(i) By making the substitutions

$$ \varphi (x)=x^{r}, \qquad w=wg^{s}, \quad \text{and} \quad f=f g^{-s/r} $$

in (21) for the convex function φ, we get inequality (26).

(ii) After the substitutions

$$ \varphi (x)=x^{r}, \qquad w_{0}=w_{0}c_{2}^{s}, \qquad f_{0}=c_{1} c_{2}^{-s/r}, \qquad w=wg^{s}, \quad \text{and} \quad f=f g^{-s/r} $$

in (24) for the convex function φ, we get inequality (27). The proof is complete. □

Remark 5.2

In the paper [7] we find some special cases of part (i) of the previous theorem when \(X=[a,b]\), \(d\mu =dx\), and the continuous refinement becomes just a refinement via two functions u, v such that \(u(x)+v(x)=1\) on \([a,b]\).

6 Properties of some related functionals

A well-known idea for shedding further light on various inequalities is to investigate separately the properties of the functionals describing the “gaps” in these inequalities. In this section we give some examples of results of this type.

We fix the following objects: a measure space \((X,\mu )\), a probability measure space \((Z, \lambda )\), a convex function \(\varphi : I \rightarrow {\mathbf{R}}\), functions \(f: X \rightarrow I\), \(\alpha : X\times Z \rightarrow [0,\infty \rangle \) such that \(\int _{Z} \alpha (x,z) \,d\lambda (z)=1\) (\(x\in X\)), and a positive number \(f_{0}\). By \(K_{\varphi , f, \alpha}\) we denote the following set of weights:

$$ K_{\varphi , f, \alpha} := \biggl\{ w: X\rightarrow [0,\infty \rangle : wf, w(\varphi \circ f) \in L^{1}(\mu ), \int _{X} \alpha w \,d \mu = \int _{X} w \,d\mu \neq0 \biggr\} . $$

By \(K_{\varphi ,f_{0}, f, \alpha}\) we denote a class of pairs \((w_{0},w)\):

$$\begin{aligned} K_{\varphi ,f_{0}, f, \alpha}&:= \biggl\{ (w_{0},w) : w_{0} \in \langle 0, \infty \rangle , w: X\rightarrow [0, \infty \rangle , wf, w(\varphi \circ f) \in L^{1}(\mu ), \\ &\hphantom{{}:={}}\int _{X} \alpha w \,d\mu = \int _{X} w \,d\mu \neq0, w_{0} - \int _{X} w \,d\mu >0, \frac{w_{0}f_{0}-\int _{X} fw \,d\mu }{w_{0}-\int _{X} w \,d\mu} \in I \biggr\} . \end{aligned}$$

Let us define the functionals \(L_{J}\), \(M_{J}\), \(R_{J}\), \(K_{J}\):

$$\begin{aligned}& L_{J}(w) := W\cdot \varphi \biggl(\frac{1}{W} \int _{X} fw \,d\mu \biggr), \\& M_{J}(w) := W\cdot \int _{Z} \varphi \biggl(\frac{1}{W} \int _{X} f(x) \alpha (x,z)w(x) \,d\mu (x) \biggr) \,d\lambda (z), \\& R_{J}(w) := \int _{X} (\varphi \circ f)w \,d\mu, \\& K_{J}(w_{0},w) := (w_{0} - W)\cdot \varphi \biggl( \frac{w_{0}f_{0} - \int _{X} fw \,d\mu}{w_{0} - W} \biggr). \end{aligned}$$
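For intuition, the following discrete sketch (sums in place of the integrals over X and Z; the construction of α is ours, chosen so that both \(\int _{Z} \alpha \,d\lambda =1\) and \(\int _{X} \alpha w \,d\mu =\int _{X} w \,d\mu \) hold) implements the functionals \(L_{J}\), \(M_{J}\), \(R_{J}\) and verifies the refinement chain \(L_{J} \leq M_{J} \leq R_{J}\) underlying the nonnegativity of \(J_{1}\) and \(J_{2}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
mu = rng.uniform(0.1, 1.0, n)     # measure weights on a discrete X
w = rng.uniform(0.1, 1.0, n)      # weight function w
f = rng.uniform(-2.0, 2.0, n)
W = np.sum(w * mu)
phi = np.exp                      # a convex function on all of R

# Z = {0, 1} with uniform probability lambda; alpha(x, z) = 1 +/- eps*h(x),
# where h is centred so that sum_x alpha(x, z) w(x) mu(x) = W for both z.
lam = np.array([0.5, 0.5])
h = rng.uniform(-1.0, 1.0, n)
h -= np.sum(h * w * mu) / W
eps = 0.9 / np.max(np.abs(h))     # keeps alpha nonnegative
alpha = np.stack([1 + eps * h, 1 - eps * h], axis=1)   # shape (n, 2)

assert np.allclose(alpha @ lam, 1.0)       # int_Z alpha dlambda = 1
assert np.allclose((w * mu) @ alpha, W)    # int_X alpha w dmu = W

L_J = W * phi(np.sum(f * w * mu) / W)
inner = (f * w * mu) @ alpha / W           # one Jensen mean per z in Z
M_J = W * np.sum(phi(inner) * lam)
R_J = np.sum(phi(f) * w * mu)

assert L_J <= M_J <= R_J   # the refinement chain, i.e. J1 >= 0 and J2 >= 0
```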

First we state the following complementary information about the “gaps” in the inequalities in (21).

Theorem 6.1

The functionals \(L_{J}\) and \(M_{J}\) are subadditive on \(K_{\varphi , f, \alpha}\), i.e.,

$$\begin{aligned}& L_{J}(p+q) \leq L_{J}(p) + L_{J}(q), \\& M_{J}(p+q) \leq M_{J}(p) + M_{J}(q) \end{aligned}$$

for all \(p,q \in K_{\varphi , f, \alpha}\), and the functionals \(J_{1}=R_{J}-L_{J}\) and \(J_{2}=R_{J}-M_{J}\) are superadditive.

Moreover, if \(p, q \in K_{\varphi , f, \alpha}\) satisfy \(p \leq q\), then

$$ J_{1}(p) \leq J_{1}(q) \quad \textit{and} \quad J_{2}(p) \leq J_{2}(q).$$

Proof

Since φ is a convex function, we get

$$\begin{aligned} L_{J}(p+q) &= (P+Q) \varphi \biggl( \frac{1}{P+Q} \int _{X} (p+q)f \,d\mu \biggr) \\ & = (P+Q) \varphi \biggl( \frac{P}{P+Q} \biggl( \frac {1}{P} \int _{X} pf \,d\mu \biggr) +\frac{Q}{P+Q} \biggl( \frac {1}{Q} \int _{X} qf \,d\mu \biggr) \biggr) \\ & \leq (P+Q) \cdot \biggl[ \frac{P}{P+Q}\varphi \biggl( \frac {1}{P} \int _{X} pf \,d\mu \biggr) +\frac{Q}{P+Q} \varphi \biggl(\frac {1}{Q} \int _{X} qf \,d\mu \biggr) \biggr] \\ & = P\cdot \varphi \biggl(\frac{1}{P} \int _{X} pf \,d\mu \biggr) + Q \cdot \varphi \biggl( \frac{1}{Q} \int _{X} qf \,d\mu \biggr) \\ & = L_{J}(p) + L_{J}(q), \end{aligned}$$

where \(P=\int _{X} p \,d\mu \) and \(Q=\int _{X} q \,d\mu \).

Let us denote \(S(w):= W\cdot \varphi (\frac{1}{W} \int _{X} f(x) \alpha (x,z) w(x) \,d\mu (x) )\), where \(W=\int _{X} w \,d\mu \). Using the same method as in the first part of the proof, we get that \(S(p+q) \leq S(p)+S(q)\). By integrating the terms in this inequality over Z, we get the subadditivity of \(M_{J}\).

In the literature the functional \(J_{1}\) is called the Jensen functional, and its superadditivity is well known; many results connected with \(J_{1}\) are given in [8]. The superadditivity of \(J_{2}\) follows from the linearity of \(R_{J}\) and the subadditivity of \(M_{J}\).

By using the results of Sect. 3, we see that the functionals \(J_{1}\) and \(J_{2}\) are nonnegative on \(K_{\varphi , f, \alpha}\). If \(p\leq q\), then from the superadditivity of \(J_{i}\), \(i=1,2\), we get

$$ J_{i}(q) = J_{i}\bigl(p + (q-p)\bigr) \geq J_{i}(p) +J_{i}(q-p) \geq J_{i}(p) $$

i.e. \(J_{i}\), \(i=1,2\), are nondecreasing functionals. The proof is complete. □
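Theorem 6.1 can be checked numerically. The sketch below (our own discrete example with \(\varphi =\exp \); only \(L_{J}\), \(R_{J}\), and \(J_{1}\), which do not involve α, are tested) verifies the subadditivity of \(L_{J}\) and the superadditivity and monotonicity of \(J_{1}\):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
mu = rng.uniform(0.1, 1.0, n)     # measure weights on a discrete X
f = rng.uniform(-2.0, 2.0, n)
phi = np.exp                      # a convex function on all of R

def L_J(w):
    W = np.sum(w * mu)
    return W * phi(np.sum(f * w * mu) / W)

def R_J(w):
    return np.sum(phi(f) * w * mu)

def J1(w):                        # the Jensen functional R_J - L_J
    return R_J(w) - L_J(w)

p = rng.uniform(0.1, 1.0, n)
q = p + rng.uniform(0.0, 1.0, n)  # q >= p, so q - p is again an admissible weight

# Subadditivity of L_J and superadditivity of J1:
assert L_J(p + q) <= L_J(p) + L_J(q) + 1e-12
assert J1(p + q) >= J1(p) + J1(q) - 1e-12

# Monotonicity: p <= q pointwise implies J1(p) <= J1(q):
assert J1(p) <= J1(q) + 1e-12
```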

Now we define the following functionals, which are connected with the reverse of the Jensen inequality:

$$\begin{aligned}& C(w_{0}) := w_{0}\varphi (f_{0}), \\& P_{1}(w_{0}, w) := C(w_{0}) - R_{J}(w)-K_{J}(w_{0},w), \\& P_{2}(w_{0}, w) := C(w_{0}) - L_{J}(w)-K_{J}(w_{0},w), \\& P_{3}(w_{0}, w) := C(w_{0}) - M_{J}(w)-K_{J}(w_{0},w). \end{aligned}$$

Our corresponding result for the “gaps” in inequality (24) reads as follows.

Theorem 6.2

The functionals \(P_{1}\), \(P_{2}\), \(P_{3}\) are superadditive on \(K_{\varphi ,f_{0}, f, \alpha}\), i.e.,

$$ P_{i}(p_{0}+q_{0}, p+q) \geq P_{i}(p_{0}, p) + P_{i}(q_{0}, q), \quad i=1,2,3, $$

for all \((p_{0},p), (q_{0},q) \in K_{\varphi ,f_{0}, f, \alpha}\). Moreover,

$$\begin{aligned} P_{1} \leq P_{3} \leq P_{2} \leq 0. \end{aligned}$$
(28)

Proof

Substituting into the defining inequality of the convex function φ,

$$ (r+s) \varphi \biggl( \frac{rx+sy}{r+s} \biggr) \leq r\varphi (x) + s \varphi (y) $$

the following substitutions:

$$\begin{aligned}& r=p_{0}-P, \qquad s=q_{0}-Q, \quad \text{where } P= \int _{X} p \,d\mu , \qquad Q= \int _{X} q \,d\mu , \\& x=\frac{p_{0}f_{0}-\int _{X} pf \,d\mu}{p_{0}-P}, \qquad y= \frac{q_{0}f_{0}-\int _{X} qf \,d\mu}{q_{0}-Q}, \end{aligned}$$

we get that

$$\begin{aligned} & K_{J}(p_{0}+q_{0}, p+q) \\ &\quad= \bigl((p_{0}+q_{0}) -(P+Q) \bigr) \cdot \varphi \biggl( \frac{(p_{0}+q_{0})f_{0} -\int _{X} (p+q)f \,d\mu}{(p_{0}+q_{0}) -(P+Q)} \biggr) \\ &\quad\leq (p_{0} -P) \cdot \varphi \biggl( \frac{p_{0}f_{0} -\int _{X} pf \,d\mu}{p_{0} -P} \biggr) + (q_{0} -Q) \cdot \varphi \biggl( \frac{q_{0}f_{0} -\int _{X} qf \,d\mu}{q_{0} -Q} \biggr) \\ &\quad= K_{J}(p_{0}, p) + K_{J}(q_{0}, q). \end{aligned}$$

Hence, the subadditivity of \(K_{J}\) is proved. From this fact and from the linearity of C and \(R_{J}\), the superadditivity of \(P_{1}\) also follows. The other statements are proved in similar ways, so we omit the details.

The inequalities in (28) follow from the refinements of the Jensen inequality and from the inequalities in (15). The proof is complete. □
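A discrete numerical check of Theorem 6.2 (our own sketch; \(M_{J}\) and \(P_{3}\) are omitted so that α need not be constructed, and only the α-free part of the chain (28) is tested):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
mu = rng.uniform(0.1, 1.0, n)     # measure weights on a discrete X
f = rng.uniform(-1.0, 1.0, n)
f0 = 0.5
phi = np.exp                      # convex on all of R, so membership in I is automatic

def W_(w): return np.sum(w * mu)
def L_J(w): return W_(w) * phi(np.sum(f * w * mu) / W_(w))
def R_J(w): return np.sum(phi(f) * w * mu)
def K_J(w0, w):
    return (w0 - W_(w)) * phi((w0 * f0 - np.sum(f * w * mu)) / (w0 - W_(w)))
def C(w0): return w0 * phi(f0)
def P1(w0, w): return C(w0) - R_J(w) - K_J(w0, w)
def P2(w0, w): return C(w0) - L_J(w) - K_J(w0, w)

p = rng.uniform(0.1, 0.5, n); p0 = W_(p) + 1.0   # ensures w0 - W > 0
q = rng.uniform(0.1, 0.5, n); q0 = W_(q) + 2.0

# Subadditivity of K_J, hence superadditivity of P1 (C and R_J are linear):
assert K_J(p0 + q0, p + q) <= K_J(p0, p) + K_J(q0, q) + 1e-12
assert P1(p0 + q0, p + q) >= P1(p0, p) + P1(q0, q) - 1e-12

# The alpha-free part of the chain (28): P1 <= P2 <= 0.
assert P1(p0, p) <= P2(p0, p) <= 1e-12
```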

Availability of data and materials

Not applicable.

References

  1. Abramovich, S., Jameson, G., Sinnamon, G.: Refining of Jensen’s inequality. Bull. Math. Soc. Sci. Math. Roum. 47, 3–14 (2004)


  2. Abramovich, S., Persson, L.E.: Fejer and Hermite-Hadamard type inequalities for quasiconvex functions. Math. Notes Acad. Sci. 102(5–6), 599–609 (2017)


  3. Abramovich, S., Persson, L.E.: Some new Hermite-Hadamard and Fejer type inequalities without convexity/concavity. Math. Inequal. Appl. 23(2), 447–458 (2020)


  4. Azócar, A., Nikodem, K., Roa, G.: Fejér-type inequalities for strongly convex functions. Ann. Math. Sil. 26, 43–54 (2012)


  5. Dragomir, S.S., Khan, M.A., Abathun, A.: Refinement of the Jensen integral inequality. Open Math. J. 14, 221–228 (2016)


  6. Horváth, L., Khan, K.A., Pečarić, J.: Cyclic refinements of the discrete and integral form of Jensen’s inequality with applications. Analysis (Berlin) 36, 253–262 (2016)


  7. Khan, M.A., Pečarić, G., Pečarić, J.: New refinement of the Jensen inequality associated to certain functions with applications. J. Inequal. Appl. 2020, 76 (2020). https://doi.org/10.1186/s13660-020-02343-7


  8. Krnić, M., Lovričević, N., Pečarić, J., Perić, J.: Superadditivity and Monotonicity of the Jensen Functionals. Element, Zagreb (2016)


  9. Merentes, N., Nikodem, K.: Remarks on strongly convex functions. Aequ. Math. 80(1–2), 193–199 (2010)


  10. Niculescu, C.P., Persson, L.E.: Convex Functions and Their Applications. A Contemporary Approach, 2nd edn. CMS Books in Mathematics. Springer, Berlin (2017) (first edition 2006)


  11. Nikodem, K., Páles, Z.: Characterizations of inner product spaces by strongly convex functions. Banach J. Math. Anal. 5, 83–87 (2011)


  12. Nikolova, L., Persson, L.E., Varošanec, S.: Continuous forms of classical inequalities. Mediterr. J. Math. 13(5), 3483–3497 (2016)


  13. Nikolova, L., Persson, L.E., Varošanec, S.: A new look at classical inequalities involving Banach lattice norms. J. Inequal. Appl. 2017, 302 (2017). https://doi.org/10.1186/s13660-017-1576-8


  14. Nikolova, L., Persson, L.E., Varošanec, S.: Refinement of continuous forms of classical inequalities. Eurasian Math. J. 12(2), 59–73 (2021)


  15. Nikolova, L., Varošanec, S.: Continuous forms of Gauss-Polya type inequalities involving derivatives. Math. Inequal. Appl. 22(4), 1385–1395 (2019)


  16. Oguntuase, J., Persson, L.E.: Refinement of Hardy’s inequalities via superquadratic and subquadratic functions. J. Math. Anal. Appl. 339(2), 1305–1312 (2008)


  17. Pečarić, J., Perić, J.: Refinements of the integral form of Jensen’s and the Lah-Ribarič inequalities and applications for Csiszár divergence. J. Inequal. Appl. 2020, 108 (2020). https://doi.org/10.1186/s13660-020-02369-x


  18. Pečarić, J., Proschan, F., Tong, Y.L.: Convex Functions, Partial Ordering, and Statistical Applications. Academic Press, New York (1992)


  19. Persson, L.E.: Lecture Notes. P.L. Lions seminar (2015)

  20. Rooin, J.: A refinement of Jensen’s inequality. J. Inequal. Pure Appl. Math. 6(2), Art. 38 (2005)


  21. Rooin, J.: Some refinements of discrete Jensen’s inequality and some of its applications (2006). arXiv:math/0610736v1


Acknowledgements

The authors would like to thank the referee for his/her valuable comments and suggestions that helped us to improve the quality of the manuscript.

Funding

The publication charges for this manuscript are supported by the University of Zagreb, Faculty of Science, Department of Mathematics.

Author information


Contributions

All authors contributed equally and significantly in writing this paper. LEP analyzed and interpreted results regarding the position in recent research, and he is the main author concerning the results from Sect. 5. LN is the main author concerning the results in Sects. 2 and 6, and she merged all the results in one whole. SV’s main contributions are Sects. 3 and 4. SV typed the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sanja Varošanec.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Nikolova, L., Persson, LE. & Varošanec, S. Continuous refinements of some Jensen-type inequalities via strong convexity with applications. J Inequal Appl 2022, 63 (2022). https://doi.org/10.1186/s13660-022-02801-4

