Continuous refinements of some Jensen-type inequalities via strong convexity with applications
Journal of Inequalities and Applications volume 2022, Article number: 63 (2022)
Abstract
In this paper we prove new continuous refinements of some Jensen-type inequalities in both direct and reversed forms. As applications we also derive some continuous refinements of Hermite–Hadamard, Hölder, and Popoviciu type inequalities. As particular cases we point out the corresponding results for sums and integrals, showing that our results contain several well-known results as well as some new results for these special cases.
1 Preliminaries
Classical inequalities are extremely important in several parts of the mathematical sciences, as well as in their applications in engineering and the natural sciences. This is one reason for the great and increasing interest in further developing and applying this important area. Here we mention just two types of fairly recent developments:
1. It is well known that the classical Jensen inequality is more or less equivalent to the concept of convexity. It is also well known that the Jensen inequality implies several of the classical inequalities, see e.g. the books [10, 18] and the related material from the P.-L. Lions seminar [19]. Moreover, by using some variations of the concept of convexity, refined versions of classical inequalities have been proved. The first paper concerning a refinement of the Jensen inequality based on this idea is [1], and for the first application of this result see [16]; see also [6] and [20]. Moreover, by using other variations of the concept of convexity, further refinements of some classical inequalities have been obtained, see e.g. [2] and [3] and the references in these papers.
2. The classical inequalities can in several cases be given in a more unified continuous setting and/or, for example, in a Banach function space setting. See e.g. [12–14] and [15] and the references given there.
The main aim of this paper is to further complement and develop 1. and 2. by first proving some new continuous versions of Jensen type inequalities in both direct and reversed form by using the concept of strong convexity. Moreover, as applications we derive some corresponding continuous Hermite–Hadamard, Hölder, and Popoviciu type inequalities. For another interesting use of strongly convex functions, we also refer to [11].
In this paper we use standard notation for measure spaces. Let \((X, \mu )\) and \((Z, \lambda ) \) be two probability measure spaces. Let \(\alpha : X\times Z \rightarrow [ 0, \infty \rangle \) be a measurable mapping such that
\[ \int _{X} \alpha (x,z) \,d\mu (x)=1, \quad z\in Z, \qquad (1) \]
and
\[ \int _{Z} \alpha (x,z) \,d\lambda (z)=1, \quad x\in X. \qquad (2) \]
In [20] a continuous refinement of the Jensen inequality is given.
Theorem 1.1
Let \((X, \mu )\) and \((Z, \lambda ) \) be two probability measure spaces, and let \(\alpha : X \times Z \rightarrow [0,\infty \rangle \) be a measurable function on \(X\times Z\) satisfying (1) and (2). If φ is a real convex function on the interval \(I\subseteq {\mathbf{R}}\), then for every function \(f:X \rightarrow I\) with \(f, \varphi \circ f\in L^{1}(\mu )\) it holds that
\[ \varphi \biggl( \int _{X} f \,d\mu \biggr) \leq \int _{Z}\varphi \biggl( \int _{X}f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d\lambda (z) \leq \int _{X} (\varphi \circ f) \,d\mu . \qquad (3) \]
If φ is concave, then the reversed signs of the inequalities hold in (3).
If λ is a discrete measure, the refinement of the Jensen inequality has been rediscovered recently, see e.g. [7], while similar results can be found in [5, 6, 17] for some particular cases of α.
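To get a feeling for the role of the weight α, note (a simple observation of ours, not stated explicitly in the source) that the constant weight \(\alpha \equiv 1\) satisfies (1) and (2), and for it the middle term in (3) collapses to the left-hand side, since λ is a probability measure:
\[ \int _{Z}\varphi \biggl( \int _{X}f(x)\cdot 1 \,d\mu (x) \biggr) \,d\lambda (z) = \varphi \biggl( \int _{X} f \,d\mu \biggr) . \]
Thus it is the nonconstant admissible weights that produce a genuine interpolation between the two sides of the Jensen inequality.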
One of our objectives is to give results for strongly convex functions, so let us recall a definition and some useful facts about this class of functions.
Definition 1.2
Let I be an interval of the real line. A function \(\varphi : I \rightarrow {\mathbf{R}}\) is called strongly convex with modulus \(c>0\) if
\[ \varphi \bigl( tu+(1-t)v \bigr) \leq t\varphi (u)+(1-t)\varphi (v) - ct(1-t) (u-v)^{2} \]
for all \(u,v \in I\) and all \(t\in [0,1]\).
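For example (an illustration of ours): \(\varphi (x)=e^{x}\) is strongly convex on \(I=[0,1]\) with modulus \(c=\frac{1}{2}\), since \(\varphi ''(x)=e^{x}\geq 1=2c\) there. A minimal numerical sketch of the defining inequality in Python:
import numpy as np
phi, c = np.exp, 0.5                                   # exp is strongly convex on [0, 1] with modulus 1/2
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 10000)
v = rng.uniform(0.0, 1.0, 10000)
t = rng.uniform(0.0, 1.0, 10000)
lhs = phi(t * u + (1 - t) * v)
rhs = t * phi(u) + (1 - t) * phi(v) - c * t * (1 - t) * (u - v) ** 2
print(np.all(lhs <= rhs + 1e-12))                      # expected output: True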
The theory of strongly convex functions is vast, but here we point out only a very useful characterization of it. Namely, a function φ is strongly convex with modulus \(c>0\) if and only if the function \(\psi (x)=\varphi (x) -cx^{2}\) is convex [11]. The Jensen inequality for strongly convex functions is given in its discrete and integral form in [9]. A slightly modified result is the following theorem.
Theorem 1.3
Let \((X,\mu )\) be a probability measure space, and let I be an interval in \({\mathbf{R}}\). Let \(\varphi : I \rightarrow {\mathbf{R}}\) be a strongly convex function with modulus \(c>0\), and let \(f:X\rightarrow I\) be a function such that \(f, f^{2} \in L^{1}(\mu )\). Then
\[ \varphi ( \bar{f} ) \leq \int _{X} (\varphi \circ f) \,d\mu - c\int _{X} (f-\bar{f} )^{2} \,d\mu , \qquad (4) \]
where \(\bar{f}= \int _{X} f \,d\mu \).
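As an illustration (our own sketch, assuming (4) has the form displayed above), here is a numerical check for a discrete probability measure with \(\varphi =\exp \), which is strongly convex with modulus \(c=\frac{1}{2}\) on \([0,1]\):
import numpy as np
rng = np.random.default_rng(1)
n = 50
p = rng.random(n); p /= p.sum()        # probability weights mu({i}) = p_i
f = rng.uniform(0.0, 1.0, n)           # f maps X = {1, ..., n} into I = [0, 1]
phi, c = np.exp, 0.5                   # exp'' >= 1 = 2c on [0, 1]
fbar = np.dot(p, f)                    # integral of f with respect to mu
lhs = phi(fbar)
rhs = np.dot(p, phi(f)) - c * np.dot(p, (f - fbar) ** 2)
print(lhs <= rhs)                      # expected output: True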
The paper is organized as follows: in Sect. 2 we derive the announced refinement of the Jensen inequality (see Theorem 2.1). In order to show that our results generalize and unify some other recent results in the literature (see [5, 7, 17], and [21]), we point out some more or less direct consequences of Theorem 2.1 (see Corollaries 2.3 and 2.4). The corresponding results, both for convex and for strongly convex functions, are discussed and proved in Sect. 3 (see Theorems 3.1 and 3.2). As applications, we derive in Sect. 4 some corresponding new continuous versions of the Hermite–Hadamard inequality (see Theorem 4.1). Moreover, in Sect. 5 the corresponding results concerning the Hölder and Popoviciu inequalities are discussed and proved (see Theorem 5.1). Finally, in order to shed more light on the “gaps” in some of our inequalities, we use Sect. 6 to derive some important properties of the functionals describing these gaps (see Theorems 6.1 and 6.2).
2 Refinements of the Jensen inequality for strongly convex functions
The main result in this section reads as follows.
Theorem 2.1
Let the assumptions of Theorem 1.1 hold. If \(\varphi : I \rightarrow {\mathbf{R}}\) is a strongly convex function with modulus \(c>0\) and \(f:X\rightarrow I\) is a function such that \(f, f^{2} \in L^{1}(\mu )\), then
\[ \varphi ( \bar{f} ) \leq \int _{Z}\varphi \biggl( \int _{X}f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d\lambda (z) - c\int _{Z} \biggl( \int _{X}f(x) \alpha (x,z) \,d\mu (x) - \bar{f} \biggr)^{2} \,d\lambda (z) \leq \int _{X} (\varphi \circ f) \,d\mu - c\int _{X} (f-\bar{f} )^{2} \,d\mu , \qquad (5) \]
where \(\bar{f}= \int _{X} f \,d\mu \).
Proof
Since the function φ is strongly convex, the function \(\varphi - c(\cdot )^{2}\) is convex, and the refinement (3) holds for it. Therefore, after adding the term \(c ( \int _{X} f \,d\mu )^{2} = c\bar{f}^{2}\) to each side of that refinement, we get
\[ \varphi ( \bar{f} ) \leq \int _{Z}\varphi \biggl( \int _{X}f(x) \alpha (x,z) \,d\mu (x) \biggr) \,d\lambda (z) - c \biggl( \int _{Z} \biggl( \int _{X}f(x) \alpha (x,z) \,d\mu (x) \biggr)^{2} \,d\lambda (z) - \bar{f}^{2} \biggr) \leq \int _{X} (\varphi \circ f) \,d\mu - c \biggl( \int _{X} f^{2} \,d\mu - \bar{f}^{2} \biggr) . \qquad (6) \]
Let us transform the term \(\int _{X} f^{2} \,d\mu - \bar{f}^{2}\) as follows:
\[ \int _{X} f^{2} \,d\mu - \bar{f}^{2} = \int _{X} ( f - \bar{f} )^{2} \,d\mu . \qquad (7) \]
We denote \(F(z):= \int _{X}f(x) \alpha (x,z) \,d\mu (x) \). Using the Fubini theorem and the properties of the weight α, we obtain
\[ \int _{Z} F(z) \,d\lambda (z) = \int _{X} f(x) \biggl( \int _{Z} \alpha (x,z) \,d\lambda (z) \biggr) \,d\mu (x) = \int _{X} f \,d\mu = \bar{f}. \]
Under this notation, and using the same method as previously, we find that the second term in the middle expression of (6) is equal to
\[ \int _{Z} F^{2}(z) \,d\lambda (z) - \bar{f}^{2} = \int _{Z} \bigl( F(z) - \bar{f} \bigr)^{2} \,d\lambda (z). \qquad (8) \]
By combining (6)–(8) we obtain (5), and the proof is complete. □
Remark 2.2
Since \(c>0\), the chain of inequalities in (5) can be followed by \(\leq \int _{X} (\varphi \circ f) \,d\mu \). So Theorem 2.1 is indeed a genuine refinement of the Jensen inequality.
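For a quick numerical sanity check of a chain of the type (5) (a sketch under choices of our own, not taken from the paper): let \(X=Z=[0,1]\) with Lebesgue measure, let \(\alpha (x,z)=1+\frac{1}{2}\sin (2\pi x)\sin (2\pi z)\), which is nonnegative and satisfies (1) and (2), let \(f(x)=e^{x}\), and let \(\varphi =\exp \), which is strongly convex with modulus \(c=1\) on \(I=[1,e]\). A midpoint-rule discretization in Python:
import numpy as np
n = 400
x = (np.arange(n) + 0.5) / n           # midpoint nodes for X = [0, 1]
z = (np.arange(n) + 0.5) / n           # midpoint nodes for Z = [0, 1]
dx = dz = 1.0 / n
f = np.exp(x)                          # f(x) = e^x, with values in I = [1, e]
phi, c = np.exp, 1.0                   # exp'' = exp >= 2 = 2c on [1, e]
alpha = 1.0 + 0.5 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * z))
fbar = f.sum() * dx                    # integral of f over X
F = alpha.T @ f * dx                   # F(z) = int_X f(x) alpha(x, z) dmu(x)
left = phi(fbar)
middle = phi(F).sum() * dz - c * ((F - fbar) ** 2).sum() * dz
right = phi(f).sum() * dx - c * ((f - fbar) ** 2).sum() * dx
print(left <= middle <= right)         # expected output: True, with strict gaps
Printing the three quantities also shows the actual sizes of the two gaps in this example.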
It is interesting to state the corresponding refinements for some particular cases, such as the discrete Jensen inequality and the integral Jensen inequality with finitely many functions.
Corollary 2.3
(i) Let \(-\infty \leq a < b \leq \infty \), \(\varphi : I \rightarrow {\mathbf{R}}\) be a strongly convex function with modulus c. Let \(w, f, \alpha _{i} : [a,b] \rightarrow {\mathbf{R}}\), \(i=1,2,\ldots , n\), be integrable functions such that \(w, \alpha _{i} \geq 0\), \(\sum_{i=1}^{n} \alpha _{i}(x)=1\) for each \(x\in [a,b]\), \(\int _{a}^{b} w \,dx \neq 0\), \(\int _{a}^{b} \alpha _{i} w \,dx \neq 0\), and \(f([a,b]) \subseteq I\). Then the following refinement of the Jensen inequality holds:
where \(W= \int _{a}^{b} w \,dx\) and \(\bar{f}=\frac{1}{W} \int _{a}^{b} fw \,dx\).
(ii) Let \(w_{j}\), \(j=1, \ldots , m\), be nonnegative numbers such that \(\sum_{j=1}^{m} w_{j} \neq0\), let \(\alpha _{ij}\), \(i=1, \ldots , n\), \(j=1, \ldots , m\), be nonnegative numbers such that \(\sum_{j=1}^{m} w_{j} \alpha _{ij}\neq 0\), \(i=1, \ldots , n\), and \(\sum_{i=1}^{n} \alpha _{ij}=1\), \(j=1, \ldots , m\). Let \(f_{j}\), \(j=1, \ldots , m\), be real numbers from an interval I. Then, for any strongly convex function \(\varphi :I\rightarrow {\mathbf{R}}\) with modulus c, the following refinement of the discrete Jensen inequality holds:
where \(W= \sum_{j=1}^{m} w_{j}\) and \(\bar{f}=\frac{1}{W} \sum_{j=1}^{m} w_{j} f_{j}\).
Proof
(i) By applying Theorem 2.1 for
the inequalities in (5) become (9).
(ii) By applying Theorem 2.1 for the same substitutions for Z and λ as we did in the proof of the first part, together with the following:
the inequalities in (5) and trivial arguments give (10). The proof is complete. □
Also, we state a refinement with finitely many functions based on a partition of the space X. Namely, we get the following.
Corollary 2.4
Let the assumptions of Theorem 1.3 hold. Let \(X_{1}, \ldots , X_{n}\) be a partition of the set X. Let μ have the additional property that \(\int _{X_{i}} \,d\mu \neq 0\), \(i=1,2,\ldots , n\).
Then, for any strongly convex function \(\varphi :I\rightarrow {\mathbf{R}}\) with modulus c, the following refinement of the Jensen inequality holds:
where \(\bar{f}= \int _{X} f \,d\mu \).
Proof
Let us use the same partition of the space Z as in the proof of Corollary 2.3, where the functions \(\alpha _{i}\) are defined as
Here, \(\chi _{S}\) denotes the characteristic function of the set S. Then the assumptions of Theorem 2.1 are satisfied. Hence, (5) and a trivial estimate show that the inequalities in (11) hold. The proof is complete. □
If in Corollary 2.4 we put \(X=[a,b]\), \(a=a_{0}< a_{1}< a_{2}< \cdots <a_{n}=b\) and \(X_{i}=[a_{i-1}, a_{i} \rangle \) for \(i=1,2,\ldots ,n\), \(d\mu = \frac{w(x)}{W} \,dx\), then the inequalities in (11) become as follows:
where \(W= \int _{a}^{b} w \,dx\) and \(\bar{f}=\frac{1}{W} \int _{a}^{b} f w \,dx\).
Remark 2.5
When the function φ is convex, i.e. when \(c=0\), some of the above-mentioned results are already known.
The refinement via two functions \(\alpha _{1}\), \(\alpha _{2}\) with \(\alpha _{1}+\alpha _{2}=1\), involving integrals and a convex function φ, has been published very recently in [7], together with applications in information theory. The results for finite sequences and a convex function φ are given in [21].
The result of Corollary 2.4 for \(n=2\) and \(c=0\) is the main result of [5].
If \(c=0\), i.e. if φ is a convex function, then the corresponding version of (12) is given in [17].
3 Refinements of the reverse Jensen inequality for convex and strongly convex functions
The simplest form of the reverse Jensen inequality is the following inequality, in which one weight is positive while the other one is negative:
Let φ be a real convex function on I. If p and q are positive numbers such that \(p-q>0\), then
\[ \varphi \biggl( \frac{pa-qb}{p-q} \biggr) \geq \frac{p\varphi (a)-q\varphi (b)}{p-q} \qquad (13) \]
for all \(a,b\in I\) such that \(\frac{pa-qb}{p-q} \in I\).
This follows from the definition of a convex function, \(\varphi (tx+(1-t)y)\leq t \varphi (x) +(1-t)\varphi (y) \), \(t\in [0,1]\), \(x,y\in I\), after suitable substitutions.
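One admissible choice of substitutions (a sketch of ours; the particular choice intended in the original is not shown here) is \(t=\frac{p-q}{p}\), \(x=\frac{pa-qb}{p-q}\), \(y=b\), for which \(tx+(1-t)y=a\), so convexity gives
\[ \varphi (a) \leq \frac{p-q}{p}\varphi \biggl( \frac{pa-qb}{p-q} \biggr) + \frac{q}{p}\varphi (b), \]
and multiplying by p and rearranging yields (13).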
The reverse Jensen inequality for integrals follows from Lemma 4.25 in the book [18, p. 124] and has the following form.
Theorem 3.1
Let \((X, \mu )\) be a probability measure space. Let \(u_{0}, f_{0}\in {\mathbf{R}}\), \(u_{0}>1\). Let φ be a real convex function on an interval I and \(f_{0}\in I\). Let f be a function on X such that f and \(\varphi \circ f\) are integrable and \(\frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \in I\). Then
\[ \varphi \biggl( \frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \biggr) \geq \frac{u_{0}\varphi (f_{0}) - \int _{X} (\varphi \circ f) \,d\mu}{u_{0} - 1}. \qquad (14) \]
If φ is concave, then the reversed inequality holds.
The best-known consequences of the previous inequality are the Popoviciu inequality and the Bellman inequality, which are reverses of the Hölder and the Minkowski inequalities, respectively. Here we give a proof of Theorem 3.1, since we will use one step of that proof in our further investigation.
Proof
By putting \(a=f_{0}\), \(b=\int _{X} f \,d\mu \), \(p=u_{0}\), and \(q=1\) in (13), we obtain that
\[ \varphi \biggl( \frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \biggr) \geq \frac{u_{0}\varphi (f_{0}) - \varphi ( \int _{X} f \,d\mu )}{u_{0} - 1} \geq \frac{u_{0}\varphi (f_{0}) - \int _{X} (\varphi \circ f) \,d\mu}{u_{0} - 1}, \qquad (15) \]
where in the last inequality we use the Jensen inequality for integrals. □
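As a numerical illustration (ours) of (14), in the form displayed above, take a discrete probability measure and \(\varphi =\exp \), which is convex on \(I={\mathbf{R}}\), so that the membership condition is automatic:
import numpy as np
rng = np.random.default_rng(2)
n = 50
p = rng.random(n); p /= p.sum()         # probability weights
f = rng.normal(size=n)                  # f maps X into I = R
phi = np.exp
u0, f0 = 2.0, 0.3
lhs = (u0 * phi(f0) - np.dot(p, phi(f))) / (u0 - 1.0)
rhs = phi((u0 * f0 - np.dot(p, f)) / (u0 - 1.0))
print(lhs <= rhs)                       # expected output: True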
The following theorem is a continuous refinement of the previously mentioned reverse Jensen inequality for integrals.
Theorem 3.2
(Continuous refinement of the reverse Jensen inequality for a convex function)
Let the assumptions of Theorem 1.1 hold. Additionally, let \(u_{0}\in {\mathbf{R}}\) be such that \(u_{0}>1\). Let φ be a real convex function on an interval I and \(f_{0}\in I\). Let f be a function on X such that f and \(\varphi \circ f\) are integrable and \(\frac{u_{0}f_{0} - \int _{X} f \,d\mu}{u_{0} - 1} \in I\). Then
If φ is concave, then the reversed signs of the inequalities hold.
Proof
Using the first inequality in (15) and the result of Theorem 1.1, we get
and the proof is complete. □
By using the previous theorem, it is easy to obtain a continuous refinement of the reverse Jensen inequality for a strongly convex function.
Theorem 3.3
(Refinement of the reverse Jensen inequality for a strongly convex function)
Let the assumptions of Theorem 1.1 and Theorem 3.2 hold. Then, for the strongly convex function φ, the following holds:
where \(\overline{f} = \int _{X} f \,d\mu \).
Proof
By applying the result of Theorem 3.2 for the convex function \(\varphi -c(\cdot )^{2}\), we get the desired result. □
4 Refinements of some Hermite–Hadamard inequalities for strongly convex functions
As a first application of Theorem 2.1 we note the following: a particular choice of the measure μ and of the function f gives a continuous refinement of the left-hand side of the well-known Hermite–Hadamard inequality. In fact, by putting \(X=[a,b]\), \(d\mu (x)=\frac{1}{b-a} \,dx\), and \(f(x)=x\) for \(x\in [a,b]\) in (5), we get
where α satisfies (1) and (2).
The inequality between the first and the third term in chain (18) is already known. It is the left-hand side of the Hermite–Hadamard inequality for a strongly convex function, and it is given in [9]. Hence, (18) is a continuous refinement of this result.
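A quick numerical check (ours) of this first-versus-third inequality, that is, of the known left-hand Hermite–Hadamard inequality for strongly convex functions from [9], \(\varphi ( \frac{a+b}{2} ) \leq \frac{1}{b-a}\int _{a}^{b}\varphi (x) \,dx - \frac{c(b-a)^{2}}{12}\), here with \(\varphi =\exp \) on \([0,1]\) and \(c=\frac{1}{2}\):
import numpy as np
a, b = 0.0, 1.0
phi, c = np.exp, 0.5                                          # exp is strongly convex on [0, 1] with modulus 1/2
n = 200000
x = a + (np.arange(n) + 0.5) * (b - a) / n                    # midpoint nodes on [a, b]
mean_phi = phi(x).mean()                                      # (1/(b-a)) * int_a^b phi(x) dx
print(phi((a + b) / 2) <= mean_phi - c * (b - a) ** 2 / 12)   # expected output: True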
A discrete refinement of the left-hand side of the Hermite–Hadamard inequality for a convex function is given in [17]. Here we give a generalization of it, namely, a discrete refinement of the left-hand side of the Hermite–Hadamard inequality for a strongly convex function. It follows from (12) applied with \(w(x)=1\) and \(f(x)=x\) for \(x\in [a,b]\):
The particular case of (19) for \(n=2\), \(a_{0}=a\), \(a_{1}=\frac{a+b}{2}\), \(a_{2}=b\) is given in [4].
The refinement of the right-hand side of the Hermite–Hadamard inequality is based on the Lah–Ribarič inequality, and we cannot directly obtain a continuous refinement. A discrete refinement of the right-hand side of the Hermite–Hadamard inequality for a convex function is given in [17]. Here we derive a refinement of the Lah–Ribarič inequality for a strongly convex function, which follows from the result for a convex function applied with the convex function \(\varphi - c(\cdot )^{2}\).
Theorem 4.1
Let f, w be integrable functions on \([a,b]\), \(w\geq 0\), \(W:=\int _{a}^{b} w(t) \,dt \neq 0\), and let \(a_{0},a_{1}, \ldots ,a_{n}\) be such that \(a=a_{0}< a_{1}<\cdots <a_{n}=b\) and \(m_{i} \leq f(t) \leq M_{i}\) for \(t\in [a_{i-1}, a_{i}]\), \(m_{i} \neq M_{i}\), \(i=1,2,\ldots ,n\), and \(m=\min \{ m_{1}, \ldots ,m_{n}\}\), \(M=\max \{ M_{1}, \ldots ,M_{n}\}\). If \(\varphi : I \rightarrow {\mathbf{R}}\) is a strongly convex function with modulus c, \(f([a,b]) \subseteq I\), then
(i)
where \(\bar{f} = \frac{1}{W} \int _{a}^{b} f(t)w(t) \,dt\), \(w_{i} = \int _{a_{i-1}}^{a_{i}} w(t) \,dt\), and \(\bar{f}_{i} = \frac{1}{w_{i}} \int _{a_{i-1}}^{a_{i}} f(t)w(t) \,dt\).
(ii)
Proof
(i) If φ is a strongly convex function, then the function \(\varphi - c(\cdot )^{2}\) is convex. Putting \(\varphi - c(\cdot )^{2}\) in place of the convex function in Theorem 2.3 of [17], after simple calculations we get the statement of this theorem.
(ii) This result follows from inequalities (20) using \(w(t)=1\) and \(f(t)=t\). □
As we can see, the chain of inequalities (20) is a refinement of the right-hand side of the Hermite–Hadamard inequality for a strongly convex function. If \(n=2\), \(a_{0}=a\), \(a_{1}=\frac{a+b}{2}\), \(a_{2}=b\), we get the result from [4, Theorem 5].
5 Refinements of the Hölder and Popoviciu inequalities
It is known that the Hölder inequality is a consequence of the Jensen inequality for an appropriate function φ. In what follows we use a version of (3) given for a general measure μ, not only for a probability measure. In that case (3) has the following form:
where φ is convex, \(w:X\rightarrow [0,\infty \rangle \) is a measurable function such that \(W:= \int _{X} w \,d\mu \neq 0\) and
If φ is concave, then the reversed signs of the inequalities hold in (21).
If \(r,s >1\) are numbers such that \(\frac{1}{r} + \frac {1}{s}=1\), then (21) for the concave function \(\varphi (x)=x^{1/r}\) with the substitutions \(w=wg^{s}\), \(f=f^{r} g^{-s}\), where α satisfies assumption (22) and \(\int _{X} wg^{s}\alpha \,d\mu = \int _{X} wg^{s} \,d\mu \), gives the following continuous refinement of the Hölder inequality:
As usual, by \(\| F \|_{p}\) we denote \(\| F \|_{p}= ( \int _{X} |F(x)|^{p} w(x) \,d\mu (x) )^{1/p}\).
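For orientation, here is a sketch of the classical endpoint computation behind (23) (the middle refinement term, whose precise form is given in (23), is omitted here). Writing \(\tilde{w}=wg^{s}\) and \(\tilde{f}=f^{r}g^{-s}\), so that \(\tilde{w}\tilde{f}^{1/r}=wfg\), the reversed (concave) form of (21) without its middle term gives
\[ \int _{X} wfg \,d\mu = \int _{X} \tilde{w} \tilde{f}^{1/r} \,d\mu \leq \biggl( \int _{X} \tilde{w} \,d\mu \biggr)^{1-\frac{1}{r}} \biggl( \int _{X} \tilde{w}\tilde{f} \,d\mu \biggr)^{\frac{1}{r}} = \biggl( \int _{X} wg^{s} \,d\mu \biggr)^{\frac{1}{s}} \biggl( \int _{X} wf^{r} \,d\mu \biggr)^{\frac{1}{r}} = \| g \|_{s}\, \| f \|_{r}, \]
which is the classical weighted Hölder inequality.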
Let us write the weighted version of (16).
Let w be a nonnegative measurable function on X, \(w_{0}\in {\mathbf{R}}\) be such that \(0< \int _{X} w \,d\mu < w_{0}\). Let φ be a real convex function on an interval I and \(f_{0}\in I\). Let f be a function on X such that wf and \(w(\varphi \circ f)\) are integrable and \(\frac{w_{0}f_{0} - \int _{X} wf \,d\mu}{w_{0} - \int _{X} w \,d\mu} \in I\). If α satisfies (22), then
where \(W= \int _{X} w \,d\mu \).
Putting in (24) the substitutions \(\varphi (x)=x^{1/r}\), \(w=wg^{s}\), \(f=f^{r} g^{-s}\), \(w_{0}=w_{0}c_{2}^{s}\), \(f_{0}=c_{1}^{r}c_{2}^{-s}\), and if α satisfies assumption (22) and \(\int _{X} wg^{s}\alpha \,d\mu = \int _{X} wg^{s} \,d\mu \), then we have the following refinement of the Popoviciu inequality:
provided that all integrals exist. We note that both (23) and (25) are stated and proved in [14]. A particular case of (23) when the continuous refinement collapses to the sum of two functions u, v, such that \(u(x)+v(x)=1\) on \(X=[a,b]\), was described in [7].
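For a concrete impression, here is a numerical sketch (ours) of the classical, unrefined discrete Popoviciu inequality, presumably the outer estimate that (25) and (27) refine; the data are generated so that the bracketed quantities are positive, as required in Theorem 5.1 below:
import numpy as np
rng = np.random.default_rng(3)
r = 3.0; s = r / (r - 1.0)                             # conjugate exponents, 1/r + 1/s = 1
n = 40
w = rng.random(n); f = rng.random(n); g = rng.random(n)
w0 = w.sum() + 1.0
c1 = (2.0 * np.dot(w, f ** r) / w0) ** (1.0 / r)       # makes w0*c1^r - ||f||_r^r > 0
c2 = (2.0 * np.dot(w, g ** s) / w0) ** (1.0 / s)       # makes w0*c2^s - ||g||_s^s > 0
lhs = (w0 * c1 ** r - np.dot(w, f ** r)) ** (1.0 / r) * (w0 * c2 ** s - np.dot(w, g ** s)) ** (1.0 / s)
rhs = w0 * c1 * c2 - np.dot(w, f * g)
print(lhs <= rhs + 1e-12)                              # expected output: True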
We derive the following refinement of the Hölder and the Popoviciu inequalities.
Theorem 5.1
Let \(r,s >1\) be numbers such that \(\frac {1}{r} + \frac {1}{s}=1\). Let \((X, \mu )\) and \((Z, \lambda ) \) be two measure spaces, \(\int _{Z} \,d\lambda =1\), \(w:X \rightarrow [0,\infty \rangle \) be a measurable mapping on X such that \(\int _{X} w \,d\mu \neq 0\), and \(\alpha : X \times Z \rightarrow [0,\infty \rangle \) be a function which satisfies (2). Let \(c_{1},c_{2},w_{0} >0\) and \(f,g: X \rightarrow [0,\infty \rangle \) be such that \(w_{0}c_{1}^{r} - \| f\|_{r}^{r} >0\) and \(w_{0}c_{2}^{s}- \| g\|_{s}^{s}>0\) and \(\int _{X} \alpha (x,z)w(x)g^{s}(x) \,d\mu (x)= \int _{X} w(x)g^{s}(x) \,d\mu (x)\), \(z\in Z\), hold. Then
(i) The following continuous refinement of the Hölder inequality holds:
provided that all integrals exist.
(ii) The following continuous refinement of the Popoviciu inequality holds:
provided that all integrals exist.
Proof
(i) By making the substitutions
in (21) for the convex function φ, we get inequality (26).
(ii) After the substitutions
in (24) for the convex function φ, we get inequality (27). The proof is complete. □
Remark 5.2
In paper [7] one finds some special cases of part (i) of the previous theorem when \(X=[a,b]\), \(d\mu =dx\), and the continuous refinement becomes just a refinement via two functions u, v such that \(u(x)+v(x)=1\) on \([a,b]\).
6 Properties of some related functionals
A well-known way to shed further light on various inequalities is to investigate separately the properties of the functionals describing the “gaps” in these inequalities. In this section we give some examples of results of this type.
We fix the following objects: a measure space \((X,\mu )\), a probability measure space \((Z, \lambda )\), a convex function \(\varphi : I \rightarrow {\mathbf{R}}\), functions \(f: X \rightarrow I\), \(\alpha : X\times Z \rightarrow [0,\infty \rangle \) such that \(\int _{Z} \alpha (x,z) \,d\lambda (z)=1\) (\(x\in X\)), and a positive number \(f_{0}\). By \(K_{\varphi , f, \alpha}\) we denote the following set of weights:
By \(K_{\varphi ,f_{0}, f, \alpha}\) we denote a class of pairs \((w_{0},w)\):
Let us define the functionals \(L_{J}\), \(M_{J}\), \(R_{J}\), \(K_{J}\):
First we state the following complementary information about the “gaps” in inequalities (21).
Theorem 6.1
The functionals \(L_{J}\) and \(M_{J}\) are subadditive on \(K_{\varphi , f, \alpha}\) i.e.
for all \(p,q \in K_{\varphi , f, \alpha}\), and the functionals \(J_{1}=R_{J}-L_{J}\) and \(J_{2}=R_{J}-M_{J}\) are superadditive.
Moreover, if \(p, q \in K_{\varphi , f, \alpha}\) satisfy \(p \leq q\), then
Proof
Since φ is a convex function, we get
where \(P=\int _{X} p \,d\mu \) and \(Q=\int _{X} q \,d\mu \).
Let us denote \(S(w):= W\cdot \varphi (\frac{1}{W} \int _{X} f(x) \alpha (x,z) w(x) \,d\mu (x) )\), where \(W=\int _{X} w \,d\mu \). Using the same method as in the first part of the proof, we get that \(S(p+q) \leq S(p)+S(q)\). By integrating the terms in this inequality over Z, we get the subadditivity of \(M_{J}\).
In the literature, the functional \(J_{1}\) is called the Jensen functional, and its superadditivity is already known. Many results connected with \(J_{1}\) are given in [8]. The superadditivity of \(J_{2}\) follows from the linearity of \(R_{J}\) and the subadditivity of \(M_{J}\).
By using the results of Sect. 3, we see that the functionals \(J_{1}\) and \(J_{2}\) are nonnegative on \(K_{\varphi , f, \alpha}\). If \(p\leq q\), then from the superadditivity of \(J_{i}\), \(i=1,2\), we get
i.e. \(J_{i}\), \(i=1,2\), are nondecreasing functionals. The proof is complete. □
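As a numerical illustration of the known superadditivity recalled in the proof (a sketch under our reading of the omitted definitions, namely \(R_{J}(w)=\int _{X} w(\varphi \circ f) \,d\mu \) and \(L_{J}(w)=W\varphi (\frac{1}{W}\int _{X} wf \,d\mu )\), so that \(J_{1}=R_{J}-L_{J}\) is the classical Jensen functional), with discrete data:
import numpy as np
rng = np.random.default_rng(4)
n = 30
f = rng.normal(size=n)                  # fixed data values
phi = np.exp                            # convex on R
def J(w):
    # classical Jensen functional for the weight vector w
    W = w.sum()
    return np.dot(w, phi(f)) - W * phi(np.dot(w, f) / W)
p = rng.random(n)
q = rng.random(n)
print(J(p + q) >= J(p) + J(q) - 1e-12)  # superadditivity; expected output: True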
Now we define the following functionals, which are connected with the reverse of the Jensen inequality:
Our corresponding result for the “gaps” in inequality (24) reads as follows.
Theorem 6.2
The functionals \(P_{1}\), \(P_{2}\), \(P_{3}\) are superadditive on \(K_{\varphi ,f_{0}, f, \alpha}\) i.e.
for all \((p_{0},p), (q_{0},q) \in K_{\varphi ,f_{0}, f, \alpha}\). Moreover,
Proof
Putting in the definition of the convex function φ
the following substitutions:
we get that
Hence, the subadditivity of \(K_{J}\) is proved. From this fact and from the linearity of C and \(R_{J}\), the superadditivity of \(P_{1}\) also follows. The other statements are proved in similar ways, so we omit the details.
The inequalities in (28) follow from the refinements of the Jensen inequality and from the inequalities in (15). The proof is complete. □
Availability of data and materials
Not applicable.
References
Abramovich, S., Jameson, G., Sinnamon, G.: Refining of Jensen’s inequality. Bull. Math. Soc. Sci. Math. Roum. 47, 3–14 (2004)
Abramovich, S., Persson, L.E.: Fejer and Hermite-Hadamard type inequalities for quasiconvex functions. Math. Notes Acad. Sci. 102(5–6), 599–609 (2017)
Abramovich, S., Persson, L.E.: Some new Hermite-Hadamard and Fejer type inequalities without convexity/concavity. Math. Inequal. Appl. 23(2), 447–458 (2020)
Azócar, A., Nikodem, K., Roa, G.: Fejér-type inequalities for strongly convex functions. Ann. Math. Sil. 26, 43–54 (2012)
Dragomir, S.S., Khan, M.A., Abathun, A.: Refinement of the Jensen integral inequality. Open Math. J. 14, 221–228 (2016)
Horváth, L., Khan, K.A., Pečarić, J.: Cyclic refinements of the discrete and integral form of Jensen’s inequality with applications. Analysis (Berlin) 36, 253–262 (2016)
Khan, M.A., Pečarić, G., Pečarić, J.: New refinement of the Jensen inequality associated to certain functions with applications. J. Inequal. Appl. 2020, 76 (2020). https://doi.org/10.1186/s13660-020-02343-7
Krnić, M., Lovričević, N., Pečarić, J., Perić, J.: Superadditivity and Monotonicity of the Jensen Functionals. Element, Zagreb (2016)
Merentes, N., Nikodem, K.: Remarks on strongly convex functions. Aequ. Math. 80(1–2), 193–199 (2010)
Niculescu, C.P., Persson, L.E.: Convex Functions and Their Applications. A Contemporary Approach, 2nd edn. CMS Books of Mathematics. Springer, Berlin (2017) (First Edition 2006)
Nikodem, K., Páles, Z.: Characterizations of inner product spaces by strongly convex functions. Banach J. Math. Anal. 5, 83–87 (2011)
Nikolova, L., Persson, L.E., Varošanec, S.: Continuous forms of classical inequalities. Mediterr. J. Math. 13(5), 3483–3497 (2016)
Nikolova, L., Persson, L.E., Varošanec, S.: A new look at classical inequalities involving Banach lattice norms. J. Inequal. Appl. 2017, 302 (2017). https://doi.org/10.1186/s13660-017-1576-8
Nikolova, L., Persson, L.E., Varošanec, S.: Refinement of continuous forms of classical inequalities. Eurasian Math. J. 12(2), 59–73 (2021)
Nikolova, L., Varošanec, S.: Continuous forms of Gauss-Polya type inequalities involving derivatives. Math. Inequal. Appl. 22(4), 1385–1395 (2019)
Oguntuase, J., Persson, L.E.: Refinement of Hardy’s inequalities via superquadratic and subquadratic functions. J. Math. Anal. Appl. 339(2), 1305–1312 (2008)
Pečarić, J., Perić, J.: Refinements of the integral form of Jensen’s and the Lah-Ribarič inequalities and applications for Csiszár divergence. J. Inequal. Appl. 2020, 108 (2020). https://doi.org/10.1186/s13660-020-02369-x
Pečarić, J., Proschan, F., Tong, Y.L.: Convex Functions, Partial Ordering, and Statistical Applications. Academic Press, New York (1992)
Persson, L.E.: Lecture Notes. P.L. Lions seminar (2015)
Rooin, J.: A refinement of Jensen’s inequality. J. Inequal. Pure Appl. Math. 6(2), Art. 38 (2005)
Rooin, J.: Some refinements of discrete Jensen’s inequality and some of its applications. arXiv:math/0610736v1 (2006)
Acknowledgements
The authors would like to thank the referee for his/her valuable comments and suggestions that helped us to improve the quality of the manuscript.
Funding
The publication charges for this manuscript are supported by the University of Zagreb, Faculty of Science, Department of Mathematics.
Contributions
All authors contributed equally and significantly in writing this paper. LEP analyzed and interpreted results regarding the position in recent research, and he is the main author concerning the results from Sect. 5. LN is the main author concerning the results in Sects. 2 and 6, and she merged all the results in one whole. SV’s main contributions are Sects. 3 and 4. SV typed the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.