Strong \(\mathcal {F}\)-convexity and concavity and refinements of some classical inequalities
Journal of Inequalities and Applications volume 2024, Article number: 96 (2024)
Abstract
The concept of strong \({\mathcal {F}}\)-convexity is a natural generalization of strong convexity. Although strongly concave functions are rarely mentioned and used, we show that in more refined and specific analysis this concept is very useful, especially its generalization, strong \({\mathcal {F}}\)-concavity. Using this concept, refinements of the Young inequality are given as a model case. A general form of the self-improving property of Jensen type inequalities is presented. We show that a careful choice of control functions for convex or concave functions can give control over these refinements and produce refinements of the power mean inequalities.
1 Introduction
Let \(I \subseteq \mathbb{R}\) be an interval and c be a positive real number. A function \(f \colon I \to \mathbb{R}\) is called strongly convex with modulus c if
$$ f\left (tx+(1-t)y\right )\leq tf(x)+(1-t)f(y)-ct(1-t)(x-y)^{2} $$
holds for every \(x,y\in I\) and \(t\in [0,1]\). Strongly convex functions were introduced by B. T. Polyak in [16].
We introduce in a natural way the class of strongly \({\mathcal {F}}\)-convex (concave) functions, which is wider than the class of strongly convex (concave) functions.
Definition 1.1
Let \(I\subseteq \mathbb{R}\) be an interval and \(F \colon I \to \mathbb{R}\) be a convex function. We say that a function \(f \colon I \to \mathbb{R}\) is strongly \({\mathcal {F}}\)-convex with control function F if
$$ f\left (tx+(1-t)y\right )\leq tf(x)+(1-t)f(y)-\left [tF(x)+(1-t)F(y)-F\left (tx+(1-t)y\right )\right ] $$
holds for all \(x,y\in I\) and \(t\in [0,1]\). If −f is strongly \({\mathcal {F}}\)-convex, then we say that f is a strongly \({\mathcal {F}}\)-concave function.
Some related notions can be found in [1, 2, 8, 17], and the references therein.
It is obvious that f is strongly \({\mathcal {F}}\)-convex (concave) iff \(f-F\) (\(f+F\)) is convex (concave) on I.
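This characterization is convenient for numerical experiments as well. The following sketch (with \(f(x)=e^{x}\), \(F(x)=x^{2}/2\) and the grid chosen purely for illustration; here \(f-F\) is convex on \([0,2]\) because \((e^{x}-x^{2}/2)''=e^{x}-1\geq 0\)) checks the defining inequality of strong \({\mathcal {F}}\)-convexity on a grid:

```python
import math

def f(x):
    """f(x) = e^x; f - F is convex on [0, 2] because (f - F)'' = e^x - 1 >= 0."""
    return math.exp(x)

def F(x):
    """Convex control function F(x) = x^2 / 2."""
    return x * x / 2

def strongly_F_convex_on_grid(f, F, xs, ts, tol=1e-12):
    """Check f(tx+(1-t)y) <= t f(x) + (1-t) f(y) - [t F(x) + (1-t) F(y) - F(tx+(1-t)y)]."""
    for x in xs:
        for y in xs:
            for t in ts:
                z = t * x + (1 - t) * y
                gap_F = t * F(x) + (1 - t) * F(y) - F(z)  # Jensen gap of the control
                if f(z) > t * f(x) + (1 - t) * f(y) - gap_F + tol:
                    return False
    return True

grid = [i * 0.1 for i in range(21)]      # points of [0, 2]
ts = [i * 0.05 for i in range(21)]       # t in [0, 1]
print(strongly_F_convex_on_grid(f, F, grid, ts))  # True
```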
Let I be an interval in \(\mathbb{R}\) and \(f \colon I \to \mathbb{R}\) be a convex function. If \(\boldsymbol{x}=\left ( x_{1},\ldots ,x_{n}\right ) \) is any n-tuple in \(I^{n}\) and \(\boldsymbol{p}=\left ( p_{1},\ldots ,p_{n}\right ) \) is a nonnegative n-tuple such that \(P_{n}=\sum _{i=1}^{n}p_{i}>0\), then the well-known Jensen inequality
$$ f\left (\frac{1}{P_{n}}\sum _{i=1}^{n}p_{i}x_{i}\right )\leq \frac{1}{P_{n}}\sum _{i=1}^{n}p_{i}f\left (x_{i}\right ) $$(1.1)
holds (see [4] or for example [15, p. 43]). If f is strictly convex, then (1.1) is strict unless \(x_{i}=c\) for all \(i\in \left \{ j:p_{j}>0\right \} \).
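Numerically, (1.1) is easy to illustrate; the following sketch uses the convex function \(f(x)=x^{2}\) with arbitrary weights and nodes:

```python
p = [1.0, 2.0, 3.0]    # nonnegative weights, P_n = 6 > 0
x = [0.5, 1.0, 2.0]    # points of the interval I
P = sum(p)
mean = sum(pi * xi for pi, xi in zip(p, x)) / P
lhs = mean ** 2                                        # f at the weighted mean, f(x) = x^2
rhs = sum(pi * xi ** 2 for pi, xi in zip(p, x)) / P    # weighted mean of f
print(lhs <= rhs)  # True; the gap rhs - lhs is the weighted variance
```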
Jensen’s inequality is probably the most important of all inequalities: it has many applications in mathematics and statistics, and some other well-known inequalities are its special cases (such as Young’s inequality, Cauchy’s inequality, Hölder’s inequality, the AGH inequality, etc.).
One of many generalizations of Jensen’s inequality is the integral form of Jensen’s inequality (see [11, 15]).
Theorem 1.2
(Integral form of Jensen’s inequality)
Let μ be a probability measure on \([a,b]\subset \mathbb{R}\), and let \(g \colon [a, b] \to \mathbb{R}\) be a μ-integrable function. If f is a convex function given on an interval I that includes the image of g, then
$$ f\left ( \int _{a}^{b} g(t)\, d \mu (t) \right )\leq \int _{a}^{b} f\left ( g(t) \right ) d \mu (t). $$(1.2)
The reversed inequality holds in (1.2) if f is a concave function.
The integral Jensen inequality for strongly convex functions is obtained in [9].
Theorem 1.3
Let \(\left ( X, \Sigma , \mu \right )\) be a probability measure space, I be an open interval, and \(\varphi \colon X \to I\) be a μ-integrable function. If \(f \colon I \to \mathbb{R}\) is strongly convex with modulus c, then
$$ f\left ( \int _{X} \varphi \, d \mu \right )\leq \int _{X} f\left ( \varphi (x) \right ) d \mu (x)-c\int _{X}\left ( \varphi (x)-m\right )^{2} d \mu (x), $$
where \(m = \int _{X} \varphi (x)\, d \mu (x)\).
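As a discrete sanity check (point masses as the probability measure, data chosen ad hoc): \(f(x)=x^{4}\) is strongly convex with modulus \(c=6\) on \([1,2]\), since \(f''(x)=12x^{2}\geq 12=2c\) there, and the refined Jensen inequality can be verified directly:

```python
w = [0.2, 0.3, 0.5]      # probabilities (point masses), sum to 1
phi = [1.0, 1.5, 2.0]    # values of phi in [1, 2]
c = 6.0                  # modulus of strong convexity of x^4 on [1, 2]
m = sum(wi * pi for wi, pi in zip(w, phi))                  # m = integral of phi
mean_f = sum(wi * pi ** 4 for wi, pi in zip(w, phi))        # integral of f(phi)
refinement = c * sum(wi * (pi - m) ** 2 for wi, pi in zip(w, phi))
print(m ** 4 <= mean_f - refinement)  # True: refined Jensen inequality
```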
Strongly related to Jensen’s inequality is the Lah–Ribarič inequality (see [5] or for example [7], [10, p. 9], [14]). Its integral form is given in the following theorem.
Theorem 1.4
(Integral form of the Lah–Ribarič inequality)
Let μ be a probability measure on \([a,b]\), and let \(g \colon [a, b] \to \mathbb{R}\) be a μ-integrable function such that \(m \leq g(t) \leq M\) for all \(t \in [a, b]\), \(m < M\). Suppose that \(I\subseteq \mathbb{R}\) is an interval such that \([m,M]\subseteq I\). If \(f:I\to \mathbb{R}\) is a convex function, then
$$ \int _{a}^{b} f\left ( g(t) \right ) d \mu (t)\leq \frac{M-\bar{g}}{M-m}f(m)+\frac{\bar{g}-m}{M-m}f(M), $$(1.3)
where \(\bar{g} = \int _{a}^{b} g(t)\, d \mu (t)\). The reversed inequality holds in (1.3) if f is a concave function.
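Inequality (1.3) says that the values \(f(g(t))\) lie, on average, below the chord of f over \([m,M]\); a small numerical check with \(f=\exp \) and ad-hoc data:

```python
import math

w = [0.25, 0.25, 0.5]    # probabilities
g = [0.1, 0.4, 0.9]      # values of g in [m, M] = [0, 1]
m, M = 0.0, 1.0
gbar = sum(wi * gi for wi, gi in zip(w, g))
lhs = sum(wi * math.exp(gi) for wi, gi in zip(w, g))    # integral of f(g), f = exp
rhs = (M - gbar) / (M - m) * math.exp(m) + (gbar - m) / (M - m) * math.exp(M)
print(lhs <= rhs)  # True: f(g) lies on average below the chord of f over [m, M]
```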
Several forms of Lah–Ribarič type inequality for strongly convex functions are obtained in [6]. We give its integral version.
Theorem 1.5
Let μ be a probability measure, and let \(g \colon [a, b] \to \mathbb{R}\) be a μ-integrable function such that \(m \leq g(t) \leq M\) for all \(t \in [a, b]\), \(m < M\). Suppose that \(I\subseteq \mathbb{R}\) is an interval such that \([m,M]\subseteq I\). If \(f:I\to \mathbb{R}\) is a strongly convex function with modulus c, then
$$ \int _{a}^{b} f\left ( g(t) \right ) d \mu (t)\leq \frac{M-\bar{g}}{M-m}f(m)+\frac{\bar{g}-m}{M-m}f(M)-c\left [\left (M-\bar{g}\right )\left (\bar{g}-m\right )-\int _{a}^{b}\left (g(t)-\bar{g}\right )^{2} d \mu (t)\right ], $$(1.4)
where \(\bar{g} = \int _{a}^{b} g(t)\, d \mu (t)\).
One of the most influential classical inequalities is the Young inequality
$$ ab\leq \frac{a^{p}}{p}+\frac{b^{q}}{q}, $$(1.5)
where a and b are nonnegative real numbers, \(p>1\), \(\frac{1}{p}+\frac{1}{q}=1\). The reversed inequality holds in (1.5) if \(a,b>0\) and \(0<p<1\).
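Both the Young inequality and its reversed form are easy to check numerically (the values of a, b, p below are arbitrary):

```python
a, b = 2.0, 3.0

p, q = 3.0, 1.5          # conjugate exponents: 1/3 + 1/1.5 = 1
young_ok = a * b <= a ** p / p + b ** q / q

p, q = 0.5, -1.0         # 0 < p < 1 forces q = p/(p - 1) = -1 < 0
reversed_ok = a ** p / p + b ** q / q <= a * b

print(young_ok, reversed_ok)  # True True
```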
In [12], using strong convexity of the exponential function on \(\left [\log{(2c)},\infty \right )\) for any \(c>0\), the following refinements of the Young and the reversed Young inequality were proved:
assuming \(a,b>0\), \(p,q>1\), \(1/p+1/q=1\),
assuming \(a,b>0\), \(0<p<1\), \(q<0\), \(1/p+1/q=1\). Using known substitutions (see Sect. 2), it is easy to show that these refinements are equivalent (as are the Young and the reversed Young inequality).
In Sect. 2, by proving that the exponential function is strongly \({\mathcal {F}}\)-convex with a suitable class of control functions, generalizations of (1.6) and (1.7) are given. A multiplicative improvement of the Young inequality and its reverse is obtained by proving that the logarithmic function is strongly \({\mathcal {F}}\)-concave. In Sect. 3 a general form of refinements of the Jensen, the Lah–Ribarič, and the generalized Hermite–Hadamard inequality is given, emphasizing their self-improving property for strongly \({\mathcal {F}}\)-convex or concave functions. Section 4 deals with a specific family of control functions, naturally generalizing the strongly convex and strongly concave cases, and gives some criteria for optimality of the obtained refinements. Finally, we give a series of inequalities involving the integral power means, showing how a suitable choice of control functions can produce interesting inequalities.
2 Strong \({\mathcal {F}}\)-convexity and concavity and refinements of the Young inequality
In this section we give two types of refinements of the Young and the reversed Young inequality. These refinements are obtained by noticing that the exponential and the logarithmic functions can be regarded as strongly \({\mathcal {F}}\)-convex and strongly \({\mathcal {F}}\)-concave functions, respectively.
Theorem 2.1
Let \(a,b>0\), \(\alpha \geq 2\), \(p>1, q>1\) such that \(\frac{1}{p}+\frac{1}{q}=1\). Then the following claims hold:

(i)
For every \(x_{0}\geq \max \{-\log{a^{p}},-\log{b^{q}}\}\),
$$\begin{aligned} ab\leq{}& \frac{a^{p}}{p}+\frac{b^{q}}{q} \\ & {}-\frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}e^{-x_{0}}\left [ \frac{1}{p}\left (\log{a^{p}}+x_{0}\right )^{\alpha}+\frac{1}{q} \left (\log{b^{q}}+x_{0}\right )^{\alpha}\right . \\ & {}-\left .\left (\frac{1}{p}\log{a^{p}}+\frac{1}{q}\log{b^{q}}+x_{0} \right )^{\alpha}\right ]\leq \frac{a^{p}}{p}+\frac{b^{q}}{q}. \end{aligned}$$(2.1) 
(ii)
$$\begin{aligned} ab &\leq \frac{a^{p}}{p}+\frac{b^{q}}{q}-\frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}\min \{a^{p},b^{q}\}C_{a,b} \left (p,q,\alpha \right )\left |\log \left (\frac{b^{q}}{a^{p}} \right )\right |^{\alpha} \\ & \leq \frac{a^{p}}{p}+\frac{b^{q}}{q}, \end{aligned}$$(2.2)
where \(\displaystyle C_{a,b}\left (p,q,\alpha \right )=\left \{ \begin{array}{c@{\quad}c} \frac{1}{q}-\frac{1}{q^{\alpha}}, & a^{p}< b^{q} \\ \frac{1}{p}-\frac{1}{p^{\alpha}}, & a^{p}>b^{q} \end{array} \right .\). If \(a^{p}=b^{q}\), then \(C_{a,b}\left (p,q,\alpha \right )\) is arbitrary.
Proof
(i) The function \(f(x)=e^{x}\) is strongly \({\mathcal {F}}\)-convex with control function \(F(x)=c\left (x+x_{0}\right )^{\alpha}\), \(c>0\), \(\alpha >2\), on \(\left (-x_{0},\infty \right )\) iff
$$ c\leq \frac{1}{\alpha (\alpha -1)} \frac{e^{x}}{\left (x+x_{0}\right )^{\alpha -2}}. $$
Since \(\displaystyle x\mapsto \frac{e^{x}}{\left (x+x_{0}\right )^{\alpha -2}}\) attains its minimum at \(x=\alpha -2-x_{0}\geq -x_{0}\), it follows that the function
$$ x\mapsto e^{x}-c\left (x+x_{0}\right )^{\alpha} $$
is convex on \(\left [-x_{0},\infty \right )\), where
$$ c=\frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}e^{-x_{0}}. $$
Applying the defining inequality of strong \({\mathcal {F}}\)-convexity at the points \(\log{a^{p}}\) and \(\log{b^{q}}\) with weights \(\frac{1}{p}\) and \(\frac{1}{q}\), and taking \(x_{0}\geq \max \{-\log{a^{p}},-\log{b^{q}}\}\), we easily obtain (2.1).
The second inequality in (2.1) follows from the Jensen inequality for the function \(x\mapsto x^{\alpha}\), \(\alpha >1\).
(ii) If \(a^{p}< b^{q}\), set \(x_{0}=-\log{a^{p}}\) in (2.1).
If \(b^{q}< a^{p}\), set \(x_{0}=-\log{b^{q}}\) in (2.1). □
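A numerical sanity check of the chain in (2.2), with the refining term read as \(\frac{1}{\alpha (\alpha -1)}\frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}\min \{a^{p},b^{q}\}C_{a,b}(p,q,\alpha )\left |\log (b^{q}/a^{p})\right |^{\alpha}\) (signs and absolute values restored as above; at \(\alpha =2\) the factor \(e^{\alpha -2}/(\alpha -2)^{\alpha -2}\) is read as its limit 1; the values of a, b, p are arbitrary):

```python
import math

def young_refined_bound(a, b, p, alpha):
    """Middle term of (2.2): the refined upper bound for ab (signs as restored here)."""
    q = p / (p - 1)
    ap, bq = a ** p, b ** q
    C = 1 / q - 1 / q ** alpha if ap < bq else 1 / p - 1 / p ** alpha
    # e^(alpha-2)/(alpha-2)^(alpha-2), read as its limit 1 at alpha = 2
    factor = math.e ** (alpha - 2) / (alpha - 2) ** (alpha - 2) if alpha > 2 else 1.0
    ref = factor / (alpha * (alpha - 1)) * min(ap, bq) * C * abs(math.log(bq / ap)) ** alpha
    return ap / p + bq / q - ref

a, b, p = 1.5, 2.0, 3.0
q = p / (p - 1)
upper = a ** p / p + b ** q / q
oks = [a * b <= young_refined_bound(a, b, p, alpha) <= upper for alpha in (2.0, 3.0)]
print(all(oks))  # True
```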
The reversed Young inequality can be refined, starting from the refinement of the Young inequality given in Theorem 2.1, using familiar substitutions.
Corollary 2.2
Let \(a,b>0\), \(\alpha \geq 2\), \(0< p<1\), \(q<0\) such that \(\frac{1}{p}+\frac{1}{q}=1\). The following claims hold:

(i)
For every \(x_{0}\geq \max \{-\log{(ab)},-\log{b^{q}}\}\),
$$\begin{aligned} &\frac{a^{p}}{p}+\frac{b^{q}}{q}\leq ab \\ & \quad{}-\frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}e^{-x_{0}}\left [p\left ( \log{(ab)}+x_{0}\right )^{\alpha}-\frac{p}{q}\left (\log{b^{q}}+x_{0} \right )^{\alpha}\right . \\ &\quad{}-\left .\left (p\log{(ab)}-\frac{p}{q}\log{b^{q}}+x_{0}\right )^{ \alpha}\right ]\leq ab. \end{aligned}$$(2.3) 
(ii)
$$\begin{aligned} \frac{a^{p}}{p}+\frac{b^{q}}{q} &\leq ab-\frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}\min \{ab,b^{q}\} \frac{1}{p^{\alpha}}C_{a,b}^{r}\left (p,q,\alpha \right )\left |\log \left (\frac{b^{q}}{a^{p}}\right )\right |^{\alpha} \\ & \leq ab, \end{aligned}$$(2.4)
where \(\displaystyle C_{a,b}^{r}\left (p,q,\alpha \right )=\left \{ \begin{array}{c@{\quad}c} 1-p-(1-p)^{\alpha}, & a^{p}< b^{q} \\ p-p^{\alpha}, & a^{p}>b^{q} \end{array} \right .\). If \(a^{p}=b^{q}\), then \(C_{a,b}^{r}\left (p,q,\alpha \right )\) is arbitrary.
Proof
Both claims follow from Theorem 2.1 by replacing p with \(1/p\), q with \(-q/p\), a with \((ab)^{p}\), and b with \(b^{-p}\). We just notice that \(b^{q}/a^{p}\) is replaced with \(b^{q}/(ab)=(b^{q}/a^{p})^{1/p}\). □
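The substitutions used here (read with the stripped signs restored: p replaced by \(1/p\), q by \(-q/p\), a by \((ab)^{p}\), b by \(b^{-p}\)) can be verified numerically; the values of a, b, p below are arbitrary:

```python
p = 0.4
q = p / (p - 1)                    # conjugate exponent, q < 0
a, b = 1.7, 2.3

p_new, q_new = 1 / p, -q / p       # new conjugate pair with p_new > 1
a_new, b_new = (a * b) ** p, b ** (-p)

checks = [
    abs(1 / p_new + 1 / q_new - 1) < 1e-12,                        # still conjugate
    abs(a_new * b_new - a ** p) < 1e-12,                           # a'b' = a^p
    abs(a_new ** p_new - a * b) < 1e-12,                           # a'^{p'} = ab
    abs(b_new ** q_new - b ** q) < 1e-12,                          # b'^{q'} = b^q
    abs(b ** q / (a * b) - (b ** q / a ** p) ** (1 / p)) < 1e-12,  # ratio identity
]
print(all(checks))  # True
```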
Lemmas 5.3 and 5.4 in [12] follow from the second claims in Theorem 2.1 and Corollary 2.2.
Remark 2.3
It is difficult to establish the optimal (maximal) value of the expression refining the Young inequality in (2.1) and (2.3), but it is easy to see (by differentiating the expression with respect to \(x_{0}\)) that the optimal value is attained at some \(x_{0}\) in the interval \(\left (\min \{\alpha -\log{a^{p}},\alpha -\log{b^{q}}\}, \max \{\alpha -\log{a^{p}},\alpha -\log{b^{q}}\}\right )\) and at some \(x_{0}\) in the interval \(\left (\min \{\alpha -\log{(ab)},\alpha -\log{b^{q}}\}, \max \{\alpha -\log{(ab)},\alpha -\log{b^{q}}\}\right )\), respectively.
Remark 2.4
The same problem appears in establishing the optimal value of the expression refining the Young inequality in (2.2) and (2.4). However, it is easy to see (using the logarithmic derivative) that the derivative of this expression with respect to α at \(\alpha =2\) is +∞; hence the optimal value is attained at some \(\alpha >2\).
An alternative way to obtain refinements of the Young inequality is to consider strong \({\mathcal {F}}\)-concavity of the logarithmic function.
Theorem 2.5
Let \(a,b>0\), \(\alpha > 1\), \(p>1, q>1\) such that \(\frac{1}{p}+\frac{1}{q}=1\). Then
Proof
The function \(f(x)=\log x\) is strongly \({\mathcal {F}}\)-concave (on an interval to be specified below) with control function \(F(x)=cx^{\alpha}\), \(c>0\), \(\alpha >1\), iff \(c\leq \frac{1}{\alpha (\alpha -1)}\frac{1}{x^{\alpha}}\).
This shows that if \(a^{p}< b^{q}\), then the function
$$ x\mapsto \log x+\frac{1}{\alpha (\alpha -1)}\frac{1}{b^{q\alpha}}x^{\alpha} $$
is a concave function on \(\left (0,b^{q}\right )\). Rearranging
we get
This shows that the first term in (2.5) is not less than the third term (in the case \(a^{p}< b^{q}\)).
The claim is that the function
is a decreasing function on \((0,\infty )\). The inequality \(\mathrm{R}\left (\alpha _{1}\right )\geq \mathrm{R}\left (\alpha _{2}\right )\) for \(\alpha _{1}<\alpha _{2}\) follows trivially from convexity of the function
on \((0,1)\), which is obvious since \(\psi ''(x)=x^{\alpha _{1}-2}\left (1-x^{\alpha _{2}-\alpha _{1}}\right )\geq 0\) there.
The first inequality now follows from
The case \(a^{p}>b^{q}\) can be treated similarly. □
The reversed Young inequality of this type has a somewhat different form, in some sense more natural (note that \(a^{p}/p+b^{q}/q<0\) for q small enough) than the usual form (see (1.5) and (1.7)).
Theorem 2.6
Let \(a,b>0\), \(\alpha >1\), \(0< p<1\), \(q<0\) such that \(\frac{1}{p}+\frac{1}{q}=1\). Then
Proof
The first two inequalities follow from Theorem 2.5 using the same replacements as in the proof of Corollary 2.2. The third inequality again follows from the Jensen inequality for the function \(t\mapsto t^{\alpha}\), \(\alpha >1\). □
An important application of the Young inequality (also as the weighted AG inequality) and its reversed forms (for \(0<p<1\), or for \(p>1\) using the Specht ratio, the Kantorovich constant, and their generalizations; see [13, Chap. 2]) is in the theory of operator inequalities (see [13] and the references therein). One basic property that enables producing these types of inequalities is that the inequalities actually depend only on the ratio \(a/b\) (if one uses the weighted AG inequality) or the ratio \(a^{p}/b^{q}\) (if the Young inequality is used); see [13, Chap. 2, Sects. 6 and 7]. This is also true for the refinements of the Young and the reversed Young inequality given in Theorems 2.5 and 2.6.
Set: \(h=a^{p}/b^{q}\). The following claims are straightforward.
The inequalities in Theorem 2.5 can be written as
where
Similarly, the inequalities in Theorem 2.6 can be written as
where
3 Self-improving property of Jensen type inequalities for strongly \({\mathcal {F}}\)-convex and concave functions
In this section we present how self-improvement for strongly \({\mathcal {F}}\)-convex and concave functions works in the case of the two probably most important inequalities, the Jensen inequality and the Lah–Ribarič inequality (or the converse Jensen inequality). We also consider the case of a general Hermite–Hadamard inequality as a unification of the former two inequalities.
Although known proofs of this type of refinement for strongly convex functions are based on the notions of quadratic support functions and generalized Beckenbach convexity (see [6] and [9]), our approach is simpler (and more general). It would be of some interest to develop analogous theories in the case of strongly \({\mathcal {F}}\)-convex (and concave) functions.
All results in this section also hold, with the same proofs, in a very general setting (as in Theorem 1.3), but we state them only for real intervals, since the examples and applications in the next sections are presented only for this case. We also emphasize the concave case, since it is somehow completely neglected: for example, in [6] and [9] this notion is neither defined nor mentioned. We saw in the preceding section that it is important to work with this notion as well.
First we give a variant of the integral form of the Jensen inequality for strongly \(\mathcal {F}\)-concave (convex) functions.
Theorem 3.1
Let μ be a probability measure on \([a,b]\subset \mathbb{R}\). Let \(g \colon [a, b] \to \mathbb{R}\) be a μ-integrable function. Suppose that \(I\subseteq \mathbb{R}\) contains the image of g. If \(f:I\to \mathbb{R}\) is a strongly \(\mathcal {F}\)-concave function with control function F, then
$$\begin{aligned} f\left ( \int _{a}^{b} g(t)\, d \mu (t) \right )&\geq \int _{a}^{b} f\left ( g(t) \right ) d \mu (t)+\left [ \int _{a}^{b} F\left ( g(t) \right ) d \mu (t)-F\left ( \int _{a}^{b} g(t)\, d \mu (t) \right )\right ] \\ &\geq \int _{a}^{b} f\left ( g(t) \right ) d \mu (t). \end{aligned}$$(3.1)
If f is strongly \({\mathcal {F}}\)-convex (with control function F), then (3.1) holds with the reversed inequalities and a minus sign in front of the brackets in the middle term.
Proof
The first inequality follows from the integral form of the Jensen inequality (1.2) for the concave function \(f + F\).
The second inequality follows from (1.2) for the convex function F. □
Notice that \(\int _{a}^{b} F \left ( g(t) \right ) d \mu (t)-F \left ( \int _{a}^{b} g(t)\, d \mu (t) \right )=c\int _{a}^{b}\left (g(t)-\overline{g}\right )^{2}d \mu (t)\) if \(F(t)=ct^{2}\), where \(\overline{g}=\int _{a}^{b}g(t)\,d\mu (t)\) (see (1.4)).
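A discrete illustration of the chain in (3.1): \(f(x)=\log x\) is strongly \({\mathcal {F}}\)-concave on \([1,4]\) with \(F(t)=ct^{2}\) whenever \(c\leq 1/32\) (since \((\log x+cx^{2})''=-1/x^{2}+2c\leq 0\) there), together with the variance identity for the bracket; the data below are ad hoc:

```python
import math

w = [0.3, 0.3, 0.4]      # probabilities
g = [1.0, 2.0, 4.0]      # values of g in [1, 4]
c = 1.0 / 32.0           # log x + c x^2 is concave on [1, 4] iff c <= 1/32
gbar = sum(wi * gi for wi, gi in zip(w, g))
int_f = sum(wi * math.log(gi) for wi, gi in zip(w, g))
bracket = c * (sum(wi * gi ** 2 for wi, gi in zip(w, g)) - gbar ** 2)   # Jensen gap of F
variance_form = c * sum(wi * (gi - gbar) ** 2 for wi, gi in zip(w, g))

print(abs(bracket - variance_form) < 1e-12)     # the identity for F(t) = c t^2
print(int_f + bracket <= math.log(gbar))        # the refined chain in (3.1)
```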
Next, we give a variant of the integral Lah–Ribarič inequality for strongly \(\mathcal {F}\)-concave (convex) functions.
Theorem 3.2
Let μ be a probability measure on \([a,b]\), and let \(g \colon [a, b] \to \mathbb{R}\) be a μ-integrable function such that \(m \leq g(t) \leq M\) for all \(t \in [a, b]\), \(m < M\). Suppose that \(I\subseteq \mathbb{R}\) is such that \([m,M]\subseteq I\). If \(f:I\to \mathbb{R}\) is a strongly \(\mathcal {F}\)-concave function with control function F, then
$$\begin{aligned} \int _{a}^{b} f\left ( g(t) \right ) d \mu (t)\geq{}& \frac{M-\bar{g}}{M-m}f(m)+\frac{\bar{g}-m}{M-m}f(M) \\ &{}+\left [\frac{M-\bar{g}}{M-m}F(m)+\frac{\bar{g}-m}{M-m}F(M)-\int _{a}^{b} F\left ( g(t) \right ) d \mu (t)\right ] \\ \geq{}& \frac{M-\bar{g}}{M-m}f(m)+\frac{\bar{g}-m}{M-m}f(M), \end{aligned}$$(3.2)
where \(\bar{g} = \int _{a}^{b} g(t) d \mu (t)\).
If f is strongly \({\mathcal {F}}\)-convex with control function F, then (3.2) holds with the reversed inequalities and a minus sign in front of the brackets in the middle term.
Proof
The first inequality follows from the integral form of the Lah–Ribarič inequality (1.3) for the concave function \(f + F\). The second inequality follows from (1.3) for the convex function F. □
Finally, we give a variant of Theorem 1.5 for strong \(\mathcal {F}\)-concavity. We present it in such a way that the refinements of the general Hermite–Hadamard inequality are more explicit.
Corollary 3.3
Let μ be a probability measure on \([a,b]\), and let \(g \colon [a, b] \to \mathbb{R}\) be a μ-integrable function such that \(m \leq g(t) \leq M\) for all \(t \in [a, b]\), \(m < M\). Suppose that \(I\subseteq \mathbb{R}\) is such that \([m,M]\subseteq I\). If \(f:I\to \mathbb{R}\) is a strongly \(\mathcal {F}\)-concave function with control function F, then
$$\begin{aligned} f\left ( \bar{g} \right )&\geq \int _{a}^{b} f\left ( g(t) \right ) d \mu (t)+\left [ \int _{a}^{b} F\left ( g(t) \right ) d \mu (t)-F\left ( \bar{g} \right )\right ] \\ &\geq \frac{M-\bar{g}}{M-m}f(m)+\frac{\bar{g}-m}{M-m}f(M)+\left [\frac{M-\bar{g}}{M-m}F(m)+\frac{\bar{g}-m}{M-m}F(M)-F\left ( \bar{g} \right )\right ], \end{aligned}$$(3.3)
where \(\bar{g} = \int _{a}^{b} g(t) d \mu (t)\).
If f is strongly \({\mathcal {F}}\)-convex with control function F, then (3.3) holds with the reversed inequalities and minus signs in front of the brackets in the second and third terms.
Proof
The first inequality is the first inequality in (3.1).
The second inequality follows from the Lah–Ribarič inequality (1.3) for the concave function \(f+F\). □
It is straightforward to see that
$$ \frac{M-\bar{g}}{M-m}F(m)+\frac{\bar{g}-m}{M-m}F(M)-F\left ( \bar{g} \right )=c\left (M-\bar{g}\right )\left (\bar{g}-m\right ) $$
for \(F(x)=cx^{2}\) (see (1.4)).
4 Strong \({\mathcal {F}}\)-convexity and concavity with control functions \(F(x)=c\left |x-x_{0}\right |^{\alpha}\). Applications and examples
Although many papers have been written on refining classical inequalities for strongly convex functions, surprisingly few deal with the effectiveness of these refinements, especially with determining the modulus of strong convexity.
Results from Sect. 3 are generalizations of Theorems 1.3 and 1.5. In this section we give a more detailed analysis of how the choice of the control function F in these generalizations can give new refinements.
We will consider a suitable class of convex functions F that naturally generalizes the class of functions \(x\mapsto c x^{2}\), \(c>0\), which generates strongly convex and concave functions. Let
$$ F_{\alpha}(x)=c\left (x-x_{0}\right )^{\alpha},\quad c>0, \alpha >1, $$
assuming \(x_{0}\leq m\).
We denote the refining expression in (3.1) for the functions \(F_{\alpha}\) by
$$ Ref_{1}(\alpha )=\int _{a}^{b} F_{\alpha}\left ( g(t) \right ) d \mu (t)-F_{\alpha}\left ( \int _{a}^{b} g(t)\, d \mu (t) \right ) $$(4.1)
and the refining expression in (3.2) for the functions \(F_{\alpha}\) by
$$ Ref_{2}(\alpha )=\frac{M-\bar{g}}{M-m}F_{\alpha}(m)+\frac{\bar{g}-m}{M-m}F_{\alpha}(M)-\int _{a}^{b} F_{\alpha}\left ( g(t) \right ) d \mu (t). $$(4.2)
In the variants given in Sect. 3, the function f influences the refining expression through the constant c. It is interesting to find conditions on a general function f and a convex function \(F_{\alpha}\) under which the theorems from the previous section can be applied.
Suppose that \(f\in C^{2}\left ([m,M]\right )\) and suppose \(x_{0}\leq m\). Then
\(f(x)+c (x-x_{0})^{\alpha}\), \(\alpha >1\), is concave on \([m,M]\) iff
$$ c\,\alpha (\alpha -1)\left (x-x_{0}\right )^{\alpha -2}\leq -f''(x)\quad \text{for every } x\in [m,M]. $$
Hence if
$$ 0< c\leq \frac{1}{\alpha (\alpha -1)}\min _{x\in [m,M]} \frac{-f''(x)}{\left (x-x_{0}\right )^{\alpha -2}}, $$
then f is strongly \({\mathcal {F}}\)-concave on \([m,M]\) with control function
\(F_{\alpha}(x)=c (x-x_{0})^{\alpha}\), \(x_{0}\leq m\).
Since \(Ref_{1}\) and \(Ref_{2}\) are increasing in \(c>0\), it is optimal to take
$$ c=\frac{1}{\alpha (\alpha -1)}\min _{x\in [m,M]} \frac{-f''(x)}{\left (x-x_{0}\right )^{\alpha -2}}. $$
A more insightful case is given in the following theorem.
Theorem 4.1
Let \(f\in C^{2}([m,M])\) be a concave (convex) function and \(x_{0} \leq m\).

1.
If there is \(\alpha _{0}\geq 1\) such that
$$ \frac{\left |f''(x)\right |}{(x-x_{0})^{\alpha _{0}-2}}\; \textrm{is decreasing on}\;[m, M],\;\textrm{and}\; \left |f''(M)\right | > 0, $$then

(a)
f is strongly \(\mathcal {F}\)-concave (strongly \({\mathcal {F}}\)-convex) with control function
$$ F_{\alpha} (x) = \frac{1}{\alpha (\alpha -1)} \frac{\left |f''(M)\right |}{(M-x_{0})^{\alpha -2}} (x-x_{0})^{\alpha} $$for every \(\alpha \geq \alpha _{0}\). Especially, for \(\alpha = 1\), we define
$$ F_{1}(x)=\left |f''(M)\right |(M-x_{0})(x-x_{0})\log{(x-x_{0})}; $$ 
(b)
the refining expressions \(Ref_{1}\) and \(Ref_{2}\) defined as in (4.1) and (4.2) are decreasing functions in \(\alpha \geq \alpha _{0}\);

(c)
the maximal refinements are obtained for \(\alpha = \alpha _{0}\).


2.
If there is \(\alpha _{0}\geq 1\) such that
$$ \frac{\left |f''(x)\right |}{(x-x_{0})^{\alpha _{0}-2}}\; \textrm{is increasing on}\;[m, M],\;\textrm{and}\; \frac{\left |f''(m)\right |}{(m-x_{0})^{\alpha _{0}-2}} > 0, $$then

(a)
f is strongly \(\mathcal {F}\)-concave (strongly \({\mathcal {F}}\)-convex) with control function
$$ F_{\alpha} (x) = \frac{1}{\alpha (\alpha -1)} \frac{\left |f''(m)\right |}{(m-x_{0})^{\alpha -2}} (x-x_{0})^{\alpha} $$for every \(1 < \alpha \leq \alpha _{0}\). Especially, for \(\alpha = 1\), we define
$$ F_{1}(x)=\left |f''(m)\right |(m-x_{0})(x-x_{0})\log{(x-x_{0})}; $$ 
(b)
the refining expressions \(Ref_{1}\) and \(Ref_{2}\) are increasing functions in \(1\leq \alpha \leq \alpha _{0}\);

(c)
the maximal refinements are obtained for \(\alpha = \alpha _{0}\).

Proof
We prove the concave case. The proof of the convex case is analogous: replace f with −f.

1.
Strong \({\mathcal {F}}\)-concavity of the function f with control function
$$ F_{\alpha} (x) = \frac{1}{\alpha (\alpha -1)} \frac{-f''(M)}{(M-x_{0})^{\alpha -2}} (x-x_{0})^{\alpha} $$is equivalent to
$$ \frac{-f''(x)}{(x-x_{0})^{\alpha -2}}\geq \frac{-f''(M)}{\left (M-x_{0}\right )^{\alpha -2}}, $$which is obvious for \(\alpha \geq \alpha _{0}\) since \(\frac{-f''(x)}{(x-x_{0})^{\alpha -2}}= \frac{1}{(x-x_{0})^{\alpha -\alpha _{0}}} \frac{-f''(x)}{(x-x_{0})^{\alpha _{0}-2}}\) and \(\frac{-f''(x)}{(x-x_{0})^{\alpha _{0}-2}}\) is a decreasing function.
Notice that \(Ref_{1}(\alpha )\) and \(Ref_{2}(\alpha )\) are defined for \(F_{\alpha}\) independently of the context (assuming \(g(t)\in [m,M]\) for every \(t\in [a,b]\)). A simple application of the L’Hospital rule gives \(\lim _{\alpha \to 1}Ref_{1}(\alpha )=Ref_{1}(1)\) and \(\lim _{\alpha \to 1}Ref_{2}(\alpha )=Ref_{2}(1)\).
The claim is that refining expressions \(Ref_{1}\) and \(Ref_{2}\) are decreasing in \(\alpha \geq \alpha _{0}\). We give the proof for \(Ref_{1}\) (for \(Ref_{2}\) the proof is analogous). For \(\alpha _{0} \leq \alpha _{1}<\alpha _{2}\), the claim is that
$$\begin{aligned} & \int _{a}^{b} F_{\alpha _{1}} \left ( g(t) \right ) d \mu (t)-F_{\alpha _{1}} \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \\ & \quad \geq \int _{a}^{b} F_{\alpha _{2}} \left ( g(t) \right ) d \mu (t)-F_{\alpha _{2}} \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \end{aligned}$$holds. Using Theorem 1.2 (the convex part of this theorem), it is enough to prove that
$$\begin{aligned} \phi (x)&=F_{\alpha _{1}}(x)-F_{\alpha _{2}}(x) \\ & = \frac{1}{\alpha _{1} (\alpha _{1}-1)} \frac{-f''(M)}{(M-x_{0})^{\alpha _{1}-2}}(x-x_{0})^{\alpha _{1}}-\frac{1}{\alpha _{2} (\alpha _{2}-1)} \frac{-f''(M)}{(M-x_{0})^{\alpha _{2}-2}} (x-x_{0})^{\alpha _{2}} \end{aligned}$$is a convex function on \([m,M]\). This is immediate from
$$ \phi ''(x)=-f''(M) \left ( \frac{x-x_{0}}{M-x_{0}} \right )^{\alpha _{1}-2} \left ( 1-\left (\frac{x-x_{0}}{M-x_{0}}\right )^{\alpha _{2}-\alpha _{1}}\right ) \geq 0. $$The claim about the maximal refinements follows directly.

2.
Analogous to the proof of (1).
□
Remark 4.2
Theorem 4.1 heavily depends on the monotonicity of the function \(x\mapsto \left |f''(x)\right |/(x-x_{0})^{\alpha -2}\) on \([m,M]\subset \left [x_{0},\infty \right )\) for some \(\alpha >1\). If this monotonicity cannot be obtained, one can argue as follows. Suppose that \(\left |f''(x)\right |\geq m_{2}>0\) on \([m,M]\). Then \(f\in C^{2}\left ([m,M]\right )\) is strongly \({\mathcal {F}}\)-concave (convex) with control function \(x\mapsto c(x-x_{0})^{\alpha}\) if
$$ c\,\alpha (\alpha -1)\left (x-x_{0}\right )^{\alpha -2}\leq m_{2}\quad \text{for every } x\in [m,M]. $$
If \(1<\alpha \leq 2\), then the optimal value is \(c=\frac{m_{2}}{\alpha (\alpha -1)}(m-x_{0})^{2-\alpha}\), which gives that either the function f is not strongly \({\mathcal {F}}\)-concave (convex) or the maximal refinements are \(Ref_{1}(2)\) and \(Ref_{2}(2)\) (see (2) in Theorem 4.1). If \(\alpha \geq 2\), then the optimal value is \(c=\frac{m_{2}}{\alpha (\alpha -1)}(M-x_{0})^{2-\alpha}\), and again we get that the maximal refinements are \(Ref_{1}(2)\) and \(Ref_{2}(2)\) (see (1) in Theorem 4.1). In this situation we cannot improve the refinements \(Ref_{1}\) and \(Ref_{2}\) obtained in the strong concavity (convexity) case. The optimality is lost in the initial step since in general \(m_{2}\) is only a lower bound for \(\left |f''(x)\right |\) on \([m,M]\).
Example 4.3
The function \(x\mapsto e^{x}/x\) is decreasing on \((0,1]\) and increasing on \([1,\infty )\), and the function \(x\mapsto e^{x}/x^{2}\) is decreasing on \((0,2]\) and increasing on \([2,\infty )\). According to Theorem 4.1, the function \(f(x)=e^{x}\) is strongly \({\mathcal {F}}\)-convex on \([1,2]\) with control function \(F_{\alpha}^{1}(x)=\frac{1}{\alpha (\alpha -1)}ex^{\alpha}\) for \(1<\alpha \leq 3\) and with control function \(F_{\alpha}^{2}(x)=\frac{1}{\alpha (\alpha -1)}\frac{e^{2}}{2^{\alpha -2}}x^{\alpha}\) for \(\alpha \geq 4\). It seems that the method of comparing \(Ref_{1}^{1}(3)\) (i.e., \(Ref_{1}(\alpha )\) defined using \(F_{\alpha}^{1}\)) and \(Ref_{1}^{2}(4)\) (i.e., \(Ref_{1}(\alpha )\) defined using \(F_{\alpha}^{2}\)) given in the proof of Theorem 4.1 does not work here. Analogous statements hold for \(Ref_{2}\).
Suppose that \(f\in C^{2}([a,b])\). Then \(f(x)+cx^{2}\) is concave iff \(f''(x)\leq -2c\). This means that if there is an \(x_{0}\in [a,b]\) such that \(f''(x_{0})=0\), then such \(c>0\) cannot exist. It follows that the method of strongly concave (and similarly strongly convex) functions cannot be used to improve Jensen type inequalities in this case.
Example 4.4
Let \(f(x)=\sin \left ( \frac{\pi}{2} x\right )\), \(x\in [0,1]\). Since \(f''(0)=0\), Theorem 1.3 cannot be used to improve Jensen type inequalities.
Let \(F_{\alpha}(x)=cx^{\alpha}\), \(x\in [0,1]\), \(c>0\), \(\alpha >1\). Then \(f+F_{\alpha}\) is concave iff
$$ c\leq \frac{\pi ^{2}}{4\alpha (\alpha -1)}\sin \left ( \frac{\pi}{2} x \right ) x^{2-\alpha}\quad \text{for every } x\in (0,1]. $$
For \(\alpha \in (1, 3)\), f is not strongly \(\mathcal {F}\)-concave with control function \(F_{\alpha}(x)=cx^{\alpha}\) for any \(c > 0\). For \(\alpha \in [3, + \infty )\), \(\sin \left ( \frac{\pi}{2} x \right ) x^{2-\alpha}\) is a decreasing function on \([0, 1]\), so we infer that for
$$ 0< c\leq \frac{\pi ^{2}}{4\alpha (\alpha -1)} $$
the function f is strongly \({\mathcal {F}}\)-concave with control function \(F_{\alpha} (x)= c x^{\alpha}\). We can use Theorem 3.1 for refining the integral Jensen inequality in this case.
Let \(F_{\alpha} (x) = \frac{\pi ^{2}}{4 \alpha (\alpha -1)} x^{\alpha}\). Using Theorem 4.1, the refining expressions \(Ref_{1}\) and \(Ref_{2}\) are decreasing in α, \(\alpha \geq 3\), for every \(g \colon [0, 1] \to \mathbb{R}\) such that \(g \left ( [0, 1] \right ) \subseteq [0, 1]\). The best possible improvements are obtained for \(\alpha = 3\).
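The concavity claim for \(\alpha =3\), \(c=\pi ^{2}/24\), can be probed on a grid via second differences (a numerical sketch, not a proof):

```python
import math

alpha = 3.0
c = math.pi ** 2 / (4 * alpha * (alpha - 1))   # = pi^2 / 24

def h(x):
    """h = sin(pi x / 2) + c x^alpha, expected concave on [0, 1] for alpha = 3."""
    return math.sin(math.pi * x / 2) + c * x ** alpha

step = 1e-3
concave = all(
    h(x - step) - 2 * h(x) + h(x + step) <= 1e-12   # nonpositive second differences
    for x in [i / 500 for i in range(1, 500)]
)
print(concave)  # True
```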
Similar analysis can be carried out for \(f(x)=\cos{\left (\frac{\pi}{2}x \right )}\), \(F_{\alpha}(x)=c(1-x)^{\alpha}\), \(x\in [0,1]\), \(\alpha >1\), \(c>0\).
As an illustration of the many possible applications of strong \({\mathcal {F}}\)-concavity and convexity, we give the following corollary. All terms in this corollary can be expressed in terms of the power means
$$ M_{r}\left (g;\mu \right )=\left ( \int _{a}^{b} g(t)^{r}\, d \mu (t) \right )^{1/r},\quad r\neq 0, $$
where μ is a probability measure on \([a,b]\) and g is a nonnegative μ-integrable function on \([a,b]\), but we will not use this, so as not to overburden the corollary with unnecessary notation.
Corollary 4.5
Let \(\alpha >1\), \(0< p<1\), and \(M>0\). Let μ be a probability measure on \([a,b]\subset \mathbb{R}\) and \(g \colon [a,b] \to \mathbb{R}\) be a μ-integrable function such that \(0\leq g(t)\leq M\) for every \(t\in [a,b]\). Then
Proof
First we prove inequalities using the integral Jensen inequality, and then we give an alternative proof using the method from the proof of Theorem 4.1.
Application of the integral Jensen inequality is assumed in all cases. Denote (4.3) as

1.
\(I_{1}\leq I_{2}\) follows from convexity on \([0, M]\) of the function
$$ x\mapsto x\log x-\frac{1}{\alpha (\alpha -1)}\frac{1}{M^{\alpha -1}}x^{\alpha}. $$ 
2.
\(I_{1}\leq I_{3}\) follows from concavity on \([0, M]\) of the function
$$ x\mapsto x^{p}+\frac{p(1-p)}{\alpha (\alpha -1)} \frac{1}{M^{\alpha -p}}x^{\alpha}. $$ 
3.
\(I_{1}\leq I_{4}\) follows from concavity on \([0, M]\) of the function
$$ x\mapsto \log x+\frac{1}{\alpha (\alpha -1)}\frac{1}{M^{\alpha}}x^{\alpha}. $$ 
4.
\(I_{2}\leq I_{3}\) follows from concavity on \([0, M]\) of the function
$$ x\mapsto x^{p}+\frac{1}{p(1-p)}\frac{1}{M^{1-p}}x\log x. $$ 
5.
\(I_{2}\leq I_{4}\) follows from concavity on \([0, M]\) of the function
$$ x\mapsto \log x+\frac{1}{M}x\log x. $$ 
6.
\(I_{3}\leq I_{4}\) follows from concavity on \([0, M]\) of the function
$$ x\mapsto \log x+\frac{1}{p (1-p)}\frac{1}{M^{p}}\left (-x^{p}\right ). $$
The reason for this strange form of writing the last function is to fit it into our concept of strong \({\mathcal {F}}\)-concavity.
Another possibility to prove the theorem is as follows. The inequality \(I_{1}\leq I_{3}\) (proven above) written explicitly reads
Using convexity of the function
and concavity of the function
on \([0,M]\) for \(1<\alpha _{1}<\alpha _{2}\) and \(0< p_{1}< p_{2}<1\), it follows (see the proof of Theorem 4.1) that \(I_{1}\) decreases with α and \(I_{3}\) decreases with p. The remaining inequalities \(I_{1}\leq I_{2}\leq I_{3}\) and \(I_{3}\leq I_{4}\) follow by taking \(\lim _{\alpha \to 1}I_{1}\) and \(\lim _{p\to 0}I_{3}\). □
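The convexity and concavity statements in the proof are easy to probe numerically. The following sketch checks, via second differences on a grid, the auxiliary function of item 1 (with the minus sign restored, so the function should be convex) and of item 5 (concave), for the ad-hoc choices \(M=2\), \(\alpha =3\):

```python
import math

M, alpha = 2.0, 3.0

def item1(x):
    # x log x - x^alpha / (alpha (alpha - 1) M^(alpha - 1)): expected convex on (0, M]
    return x * math.log(x) - x ** alpha / (alpha * (alpha - 1) * M ** (alpha - 1))

def item5(x):
    # log x + (x log x) / M: expected concave on (0, M]
    return math.log(x) + x * math.log(x) / M

step = 1e-4
xs = [0.01 + i * (M - 0.02) / 1000 for i in range(1001)]   # interior grid of (0, M)
convex1 = all(item1(x - step) - 2 * item1(x) + item1(x + step) >= -1e-12 for x in xs)
concave5 = all(item5(x - step) - 2 * item5(x) + item5(x + step) <= 1e-12 for x in xs)
print(convex1, concave5)  # True True
```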
An analogous result holds for the Lah–Ribarič inequality (1.3). For the Lah–Ribarič difference set
where μ is a probability measure on \([a,b]\), \(g:[a,b]\to \mathbb{R}\) is such that \(m\leq g(t)\leq M\), and \(f\circ g\) is μ-integrable, where \(f:I\to \mathbb{R}\) and \(I\subseteq \mathbb{R}\) is an interval such that \([m,M]\subseteq I\).
Corollary 4.6
Let \(\alpha >1\), \(0< p<1\), and \(0< m< M\). Let μ be a probability measure on \([a,b]\subset \mathbb{R}\) and \(g \colon [a,b] \to \mathbb{R}\) be a μ-integrable function such that \(m\leq g(t)\leq M\) for every \(t\in [a,b]\). Then
Proof
The notation is different, but the proof is identical to the proof of Corollary 4.5.
Many refinements of related inequalities can be obtained using the methods of this paper. We mention [3], the results from [14], and many papers on similar subjects as obvious candidates for such refinements.
Data Availability
No datasets were generated or analysed during the current study.
References
Dragomir, S.S.: On a reverse of Jessen’s inequality for isotonic linear functionals. JIPAM. J. Inequal. Pure Appl. Math. 2(3), Article ID 36 (2001)
Dragomir, S.S., Nikodem, K.: Jensen’s and Hermite–Hadamard’s type inequalities for lower and strongly convex functions on normed spaces. Bull. Iranian Math. Soc. 44(5), 1337–1349 (2018)
Ivelić Bradanović, S.: Improvements of Jensen’s inequality and its converse for strongly convex functions with applications to strongly f-divergences. J. Math. Anal. Appl. 531(2), Article ID 127866 (2024)
Jensen, J.L.W.V.: Om konvexe funktioner og uligheder mellem Middelvaerdier. Nyt Tidsskr. Math. 16B, 49–69 (1905)
Klaričić Bakula, M., Pečarić, J., Perić, J.: On the converse Jensen inequality. Appl. Math. Comput. 218(11), 6566–6575 (2012)
Klaričić Bakula, M., Nikodem, K.: On the converse Jensen inequality for strongly convex functions. J. Math. Anal. Appl. 434, 516–522 (2016)
Lah, P., Ribarič, M.: Converse of Jensen’s inequality for convex functions. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 412–460, 201–205 (1973)
Marinescu, D.S., Păltănea, E.: Properties of Pečarićtype functions and applications. Results Math. 76(3), Article ID 149 (2021)
Merentes, N., Nikodem, K.: Remarks on strongly convex functions. Aequ. Math. 80, 193–199 (2010)
Mitrinović, D.S., Pečarić, J.E., Fink, A.M.: Classical and New Inequalities in Analysis. Mathematics and Its Applications (East European Series), vol. 61. Kluwer Academic, Dordrecht (1993)
Niculescu, C.P., Persson, L.E.: Convex Functions and Their Applications. A Contemporary Approach. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, vol. 23. Springer, New York (2006)
Nikolova, L., Persson, L.E., Varošanec, S.: Some new refinements of the Young, Hölder, and Minkowski inequalities. J. Inequal. Appl. 2023, Article ID 28 (2023)
Pečarić, J., Furuta, T., Mićić Hot, J., Seo, Y.: MondPečarić Method in Operator Inequalities. Monogr. Inequal., vol. 1. Element, Zagreb (2005)
Pečarić, J., Perić, J.: Refinements of the integral form of Jensen’s and the Lah–Ribarič inequalities and applications for Csiszár divergence. J. Inequal. Appl. 2020, Article ID 108 (2020)
Pečarić, J.E., Proschan, F., Tong, Y.L.: Convex Functions, Partial Orderings, and Statistical Applications. Mathematics in Science and Engineering, vol. 187. Academic Press, Boston (1992)
Polyak, B.T.: Existence theorems and convergence of minimizing sequences for extremal problems with constraints. Dokl. Akad. Nauk SSSR 166(2), 287–290 (1966)
Veselý, L., Zajíček, L.: Delta-convex mappings between Banach spaces and applications. Diss. Math. 289 (1989)
Funding
There is no funding for this work.
Author information
Authors and Affiliations
Contributions
J.P. wrote the manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Perić, J. Strong \(\mathcal {F}\)-convexity and concavity and refinements of some classical inequalities. J Inequal Appl 2024, 96 (2024). https://doi.org/10.1186/s13660-024-03178-2
DOI: https://doi.org/10.1186/s13660-024-03178-2