# Strong $$\mathcal {F}$$-convexity and concavity and refinements of some classical inequalities

## Abstract

The concept of strong $${\mathcal {F}}$$-convexity is a natural generalization of strong convexity. Although strongly concave functions are rarely mentioned or used, we show that this concept, and especially its generalization, strong $${\mathcal {F}}$$-concavity, is very useful in finer and more specific analysis. Using this concept, refinements of the Young inequality are given as a model case. A general form of the self-improving property for Jensen type inequalities is presented. We show that a careful choice of control functions for convex or concave functions can give control over these refinements and produce refinements of the power mean inequalities.

## 1 Introduction

Let $$I \subseteq \mathbb{R}$$ be an interval and c be a positive real number. A function $$f \colon I \to \mathbb{R}$$ is called strongly convex with modulus c if

$$f\left (tx+(1-t)y\right )\leq tf(x)+(1-t)f(y)-ct(1-t)(x-y)^{2}$$

holds for every $$x,y\in I$$ and $$t\in [0,1]$$. Strongly convex functions were introduced by B. T. Polyak in [16].
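To illustrate the definition, here is a minimal numerical sketch (Python; not part of the paper). For $$f(x)=x^{2}$$ the gap $$tf(x)+(1-t)f(y)-f(tx+(1-t)y)$$ equals exactly $$t(1-t)(x-y)^{2}$$, so f is strongly convex with modulus $$c=1$$:

```python
# Sketch: f(x) = x**2 is strongly convex with modulus c = 1, since the
# convexity gap equals t*(1-t)*(x-y)**2 exactly.
import random

def strongly_convex_gap(f, x, y, t):
    """t f(x) + (1-t) f(y) - f(t x + (1-t) y)."""
    return t * f(x) + (1 - t) * f(y) - f(t * x + (1 - t) * y)

f = lambda x: x * x
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    t = random.random()
    gap = strongly_convex_gap(f, x, y, t)
    # the defining inequality with modulus c = 1 (here with equality)
    assert gap >= t * (1 - t) * (x - y) ** 2 - 1e-9
```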

We introduce in a natural way the class of strongly $${\mathcal {F}}$$-convex (concave) functions, which is wider than the class of strongly convex (concave) functions.

### Definition 1.1

Let $$I\subseteq \mathbb{R}$$ be an interval and $$F \colon I \to \mathbb{R}$$ be a convex function. We say that a function $$f \colon I \to \mathbb{R}$$ is strongly $${\mathcal {F}}$$-convex with control function F if

$$tf(x)+(1-t)f(y)-f\left (tx+(1-t)y\right )\geq tF(x)+(1-t)F(y)-F\left (tx+(1-t)y \right )$$

holds for all $$x,y\in I$$ and $$t\in [0,1]$$. If −f is strongly $${\mathcal {F}}$$-convex, then we say that f is a strongly $${\mathcal {F}}$$-concave function.

Some related notions can be found in [1, 2, 8, 17], and the references therein.

It is obvious that f is strongly $${\mathcal {F}}$$-convex (concave) iff $$f-F$$ ($$f + F$$) is convex (concave) on I.
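This equivalence is easy to test numerically. A quick sketch (the interval $$[0,\infty)$$ and the pair f, F below are our own illustrative choices): $$f(x)=e^{x}$$ is strongly $${\mathcal {F}}$$-convex with control $$F(x)=x^{2}/2$$ on $$[0,\infty)$$, since $$(f-F)''=e^{x}-1\geq 0$$ there.

```python
# Sketch: check Definition 1.1 for f = exp and F(x) = x**2 / 2 on [0, 4],
# where f - F is convex because (f - F)'' = exp(x) - 1 >= 0.
import math
import random

def jensen_gap(h, x, y, t):
    return t * h(x) + (1 - t) * h(y) - h(t * x + (1 - t) * y)

f = math.exp
F = lambda x: x * x / 2
random.seed(1)
for _ in range(1000):
    x, y = random.uniform(0, 4), random.uniform(0, 4)
    t = random.random()
    # Definition 1.1: the gap of f dominates the gap of F
    assert jensen_gap(f, x, y, t) >= jensen_gap(F, x, y, t) - 1e-9
```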

Let I be an interval in $$\mathbb{R}$$ and $$f \colon I \to \mathbb{R}$$ be a convex function. If $$\boldsymbol{x}=\left ( x_{1},\ldots ,x_{n}\right )$$ is any n-tuple in $$I^{n}$$ and $$\boldsymbol{p}=\left ( p_{1},\ldots ,p_{n}\right )$$ is a nonnegative n-tuple such that $$P_{n}=\sum _{i=1}^{n}p_{i}>0$$, then the well-known Jensen inequality

$$f\left ( \frac{1}{P_{n}}\sum _{i=1}^{n}p_{i}x_{i}\right ) \leq \frac{1}{P_{n}}\sum _{i=1}^{n}p_{i}f\left ( x_{i}\right )$$
(1.1)

holds (see [4] or for example [15, p. 43]). If f is strictly convex, then (1.1) is strict unless $$x_{i}=c$$ for some constant c and all $$i\in \left \{ j:p_{j}>0\right \}$$.
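A minimal numeric check of (1.1) (Python; the weights and nodes below are arbitrary choices):

```python
# Sketch: discrete Jensen inequality (1.1) for the convex function f(t) = t**2.
x = [0.5, 1.0, 2.0, 4.0]
p = [1.0, 2.0, 3.0, 4.0]
Pn = sum(p)
mean = sum(pi * xi for pi, xi in zip(p, x)) / Pn
f = lambda t: t * t  # convex on the real line
lhs = f(mean)
rhs = sum(pi * f(xi) for pi, xi in zip(p, x)) / Pn
assert lhs <= rhs
```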

Jensen’s inequality is probably the most important of all inequalities: it has many applications in mathematics and statistics, and some other well-known inequalities are its special cases (such as Young’s inequality, Cauchy’s inequality, Hölder’s inequality, the AM–GM–HM inequality, etc.).

One of many generalizations of Jensen’s inequality is the integral form of Jensen’s inequality (see [11, 15]).

### Theorem 1.2

(Integral form of Jensen’s inequality)

Let μ be a probability measure on $$[a,b]\subset \mathbb{R}$$, and let $$g \colon [a, b] \to \mathbb{R}$$ be a μ-integrable function. If f is a convex function given on an interval I that includes the image of g, then

$$f \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \leq \int _{a}^{b} f \left ( g(t) \right ) d \mu (t).$$
(1.2)

The reversed inequality holds in (1.2) if f is a concave function.

The integral Jensen inequality for strongly convex functions is obtained in [9].

### Theorem 1.3

Let $$\left ( X, \Sigma , \mu \right )$$ be a probability measure space, I be an open interval, and $$\varphi \colon X \to I$$ be a μ-integrable function. If $$f \colon I \to \mathbb{R}$$ is strongly convex with modulus c, then

\begin{aligned} & f \left ( \int _{X} \varphi (x) d \mu (x) \right ) \leq \int _{X} f \left ( \varphi (x) \right ) d \mu (x) - c \int _{X} \left ( \varphi (x) - m \right )^{2} d \mu (x), \end{aligned}

where $$m = \int _{X} \varphi (x) d \mu (x)$$.

Strongly related to Jensen’s inequality is the Lah–Ribarič inequality (see [5] or for example [7], [10, p. 9], [14]). Its integral form is given in the following theorem.

### Theorem 1.4

(Integral form of the Lah–Ribarič inequality)

Let μ be a probability measure on $$[a,b]$$, and let $$g \colon [a, b] \to \mathbb{R}$$ be a μ-integrable function such that $$m \leq g(t) \leq M$$ for all $$t \in [a, b]$$, $$m < M$$. Suppose that $$I\subseteq \mathbb{R}$$ is an interval such that $$[m,M]\subseteq I$$. If $$f:I\to \mathbb{R}$$ is a convex function, then

$$\int _{a}^{b} f \left ( g(t) \right ) d \mu (t) \leq \frac{M - \bar{g}}{M - m} f(m) + \frac{\bar{g} - m}{M - m} f(M),$$
(1.3)

where $$\bar{g} = \int _{a}^{b} g(t) d \mu (t)$$. The reversed inequality holds in (1.3) if f is a concave function.

Several forms of Lah–Ribarič type inequality for strongly convex functions are obtained in [6]. We give its integral version.

### Theorem 1.5

Let μ be a probability measure on $$[a,b]$$, and let $$g \colon [a, b] \to \mathbb{R}$$ be a μ-integrable function such that $$m \leq g(t) \leq M$$ for all $$t \in [a, b]$$, $$m < M$$. Suppose that $$I\subseteq \mathbb{R}$$ is an interval such that $$[m,M]\subseteq I$$. If $$f:I\to \mathbb{R}$$ is a strongly convex function with modulus c, then

\begin{aligned} & f \left ( \int _{a}^{b} g(x) d \mu (x) \right ) \\ & \leq \int _{a}^{b} f \left ( g ( x ) \right ) d \mu (x) - c \int _{a}^{b} \left (g(x)-\overline{g}\right )^{2} d \mu (x) \\ & \leq \frac{M - \bar{g}}{M-m} f(m) + \frac{\bar{g} - m}{M-m} f(M) - c \left ( M - \bar{g} \right ) \left ( \bar{g} - m \right ), \end{aligned}
(1.4)

where $$\bar{g} = \int _{a}^{b} g(t) d \mu (t)$$.

One of the most influential classical inequalities is the Young inequality

$$a b \leq \frac{a^{p}}{p} + \frac{b^{q}}{q},$$
(1.5)

where a and b are nonnegative real numbers, $$p > 1$$, $$\frac{1}{p} + \frac{1}{q} = 1$$. The reversed inequality holds in (1.5) if $$a,b>0$$ and $$0< p<1$$.
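A quick numerical sanity check of (1.5) and its reversal (Python; the values of a, b, p are arbitrary):

```python
# Sketch: Young inequality (1.5) and its reversed form for 0 < p < 1.
def young_rhs(a, b, p):
    q = p / (p - 1)          # conjugate exponent: 1/p + 1/q = 1
    return a**p / p + b**q / q

a, b = 1.7, 0.6
assert a * b <= young_rhs(a, b, 3.0)   # p > 1: Young inequality
assert a * b >= young_rhs(a, b, 0.4)   # 0 < p < 1 (so q < 0): reversed Young
```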

In [12], using strong convexity of the exponential function on $$\left [\log{(2c)},\infty \right )$$ for any $$c>0$$, the following refinements of the Young and the reversed Young inequality were proved:

$$ab \leq \frac{a^{p}}{p}+\frac{b^{q}}{q}- \frac{\min \left \{a^{p}, b^{q}\right \}}{2pq}\log ^{2} \frac{a^{p}}{b^{q}}\leq \frac{a^{p}}{p}+\frac{b^{q}}{q}$$
(1.6)

assuming $$a,b>0$$, $$p,q>1$$, $$1/p+1/q=1$$,

$$ab \geq \frac{a^{p}}{p}+\frac{b^{q}}{q}-\frac{p}{2q}\min \left \{ab, b^{q} \right \}\log ^{2}{ab^{q-1}}\geq \frac{a^{p}}{p}+\frac{b^{q}}{q}$$
(1.7)

assuming $$a,b>0$$, $$0< p<1$$, $$q<0$$, $$1/p+1/q=1$$. Using known replacements (see Sect. 2), it is easy to show that these refinements are equivalent (as are the Young and the reversed Young inequality).
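The refinement (1.6) is easy to verify numerically. A sketch (Python; the chosen a, b, p are arbitrary test values):

```python
# Sketch: the refined Young inequality (1.6), whose middle term is
# a^p/p + b^q/q - min{a^p, b^q}/(2pq) * log^2(a^p / b^q).
import math

def refined_young_16(a, b, p):
    """Middle term of (1.6) for p > 1 and conjugate q."""
    q = p / (p - 1)
    refinement = min(a**p, b**q) / (2 * p * q) * math.log(a**p / b**q) ** 2
    return a**p / p + b**q / q - refinement

a, b, p = 1.3, 0.8, 2.5
q = p / (p - 1)
mid = refined_young_16(a, b, p)
assert a * b <= mid <= a**p / p + b**q / q
```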

In Sect. 2, by proving that the exponential function is strongly $${\mathcal {F}}$$-convex with a suitable class of control functions, generalizations of (1.6) and (1.7) are given. A multiplicative improvement of the Young inequality and its reversed form is obtained by proving that the logarithmic function is strongly $${\mathcal {F}}$$-concave. In Sect. 3 a general form of refinements of the Jensen, the Lah–Ribarič, and the generalized Hermite–Hadamard inequality is given, emphasizing their self-improving property for strongly $${\mathcal {F}}$$-convex or concave functions. Section 4 deals with a specific family of control functions, naturally generalizing the strongly convex and strongly concave cases, and gives some criteria for optimality of the obtained refinements. Finally, we give a series of inequalities involving the integral power means, showing how a suitable choice of control functions can produce interesting inequalities.

## 2 Strong $${\mathcal {F}}$$-convexity and concavity and refinements of the Young inequality

In this section we give two types of refinements of the Young and the reversed Young inequality. These refinements are obtained by noticing that the exponential and the logarithmic function can be regarded as strongly $${\mathcal {F}}$$-convex and strongly $${\mathcal {F}}$$-concave function, respectively.

### Theorem 2.1

Let $$a,b>0$$, $$\alpha \geq 2$$, $$p>1, q>1$$ such that $$\frac{1}{p}+\frac{1}{q}=1$$. Then the following claims hold:

1. (i)

For every $$x_{0}\geq \max \{-\log{a^{p}},-\log{b^{q}}\}$$,

\begin{aligned} ab\leq{}& \frac{a^{p}}{p}+\frac{b^{q}}{q} \\ & {} -\frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}e^{-x_{0}}\left [ \frac{1}{p}\left (\log{a^{p}}+x_{0}\right )^{\alpha}+\frac{1}{q} \left (\log{b^{q}}+x_{0}\right )^{\alpha}\right . \\ & {} -\left .\left (\frac{1}{p}\log{a^{p}}+\frac{1}{q}\log{b^{q}}+x_{0} \right )^{\alpha}\right ]\leq \frac{a^{p}}{p}+\frac{b^{q}}{q}. \end{aligned}
(2.1)
2. (ii)
\begin{aligned} ab &\leq \frac{a^{p}}{p}+\frac{b^{q}}{q} - \frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}\min \{a^{p},b^{q}\}C_{a,b} \left (p,q,\alpha \right )\left |\log \left (\frac{b^{q}}{a^{p}} \right )\right |^{\alpha} \\ & \leq \frac{a^{p}}{p}+\frac{b^{q}}{q}, \end{aligned}
(2.2)

where $$\displaystyle C_{a,b}\left (p,q,\alpha \right )=\left \{ \begin{array}{c@{\quad}c} \frac{1}{q}-\frac{1}{q^{\alpha}}, & a^{p}< b^{q} \\ \frac{1}{p}-\frac{1}{p^{\alpha}}, & a^{p}>b^{q} \end{array} \right .$$. If $$a^{p}=b^{q}$$, then $$C_{a,b}\left (p,q,\alpha \right )$$ is arbitrary.

### Proof

(i) The function $$f(x)=e^{x}$$ is strongly $${\mathcal {F}}$$-convex with control function $$F(x)=c\left (x+x_{0}\right )^{\alpha}$$, $$c>0$$, $$\alpha >2$$, on $$\left (-x_{0},\infty \right )$$ iff

$$c\leq \frac{1}{\alpha (\alpha -1)} \frac{e^{x}}{\left (x+x_{0}\right )^{\alpha -2}} \quad \textrm{for all } x>-x_{0}.$$

Since $$\displaystyle x\mapsto \frac{e^{x}}{\left (x+x_{0}\right )^{\alpha -2}}$$ has its minimum at $$x=\alpha -2-x_{0}\geq -x_{0}$$, it follows that the function

$$\phi (x)=e^{x}-c\left (\alpha ,x_{0}\right )\left (x+x_{0}\right )^{ \alpha},\;\alpha \geq 2,$$

is convex on $$\left [-x_{0},\infty \right )$$, where

$$c\left (\alpha ,x_{0}\right )=\frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}e^{-x_{0}}.$$

Rearranging

$$\phi \left (\frac{1}{p}\log{a^{p}}+\frac{1}{q}\log{b^{q}}\right ) \leq \frac{1}{p}\phi \left (\log{a^{p}}\right )+\frac{1}{q}\phi \left (\log{b^{q}}\right )$$

taking $$x_{0}\geq \max \{-\log{a^{p}},-\log{b^{q}}\}$$, we easily obtain (2.1).

The second inequality in (2.1) follows from the Jensen inequality for the function $$x\mapsto x^{\alpha}$$, $$\alpha >1$$.

(ii) If $$a^{p}< b^{q}$$, set in (2.1) $$x_{0}=-\log{a^{p}}$$.

If $$b^{q}< a^{p}$$, set in (2.1) $$x_{0}=-\log{b^{q}}$$. □
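Inequality (2.2) can be checked numerically for concrete values. A sketch (Python; $$\alpha =3$$ and the values of a, b, p are our own arbitrary choices, and $$\alpha =2$$ would need the convention $$0^{0}=1$$):

```python
# Sketch: the refined Young inequality (2.2) of Theorem 2.1 (ii).
import math

def refined_young_22(a, b, p, alpha):
    """Middle expression of (2.2); assumes alpha > 2 and conjugate p, q > 1."""
    q = p / (p - 1)
    K = math.e**(alpha - 2) / ((alpha - 2)**(alpha - 2) * alpha * (alpha - 1))
    C = 1 / q - 1 / q**alpha if a**p < b**q else 1 / p - 1 / p**alpha
    ref = K * min(a**p, b**q) * C * abs(math.log(b**q / a**p))**alpha
    return a**p / p + b**q / q - ref

a, b, p, alpha = 1.3, 0.8, 2.5, 3.0
q = p / (p - 1)
mid = refined_young_22(a, b, p, alpha)
assert a * b <= mid <= a**p / p + b**q / q
```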

The reversed Young inequality can be refined from the refinement of the Young inequality given in Theorem 2.1 using familiar substitutions.

### Corollary 2.2

Let $$a,b>0$$, $$\alpha \geq 2$$, $$0< p<1$$, $$q<0$$ such that $$\frac{1}{p}+\frac{1}{q}=1$$. The following claims hold:

1. (i)

For every $$x_{0}\geq \max \{-\log{(ab)},-\log{b^{q}}\}$$,

\begin{aligned} &\frac{a^{p}}{p}+\frac{b^{q}}{q}\leq ab \\ & \quad -\frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}e^{-x_{0}}\left [p\left ( \log{(ab)}+x_{0}\right )^{\alpha}-\frac{p}{q}\left (\log{b^{q}}+x_{0} \right )^{\alpha}\right . \\ &\quad -\left .\left (p\log{(ab)}-\frac{p}{q}\log{b^{q}}+x_{0}\right )^{ \alpha}\right ]\leq ab. \end{aligned}
(2.3)
2. (ii)
\begin{aligned} \frac{a^{p}}{p}+\frac{b^{q}}{q} &\leq ab - \frac{1}{\alpha (\alpha -1)} \frac{e^{\alpha -2}}{(\alpha -2)^{\alpha -2}}\min \{ab,b^{q}\} \frac{1}{p^{\alpha}}C_{a,b}^{r}\left (p,q,\alpha \right )\left |\log \left (\frac{b^{q}}{a^{p}}\right )\right |^{\alpha} \\ & \leq ab, \end{aligned}
(2.4)

where $$\displaystyle C_{a,b}^{r}\left (p,q,\alpha \right )=\left \{ \begin{array}{c@{\quad}c} 1-p-(1-p)^{\alpha}, & a^{p}< b^{q} \\ p-p^{\alpha}, & a^{p}>b^{q} \end{array} \right .$$. If $$a^{p}=b^{q}$$, then $$C_{a,b}^{r}\left (p,q,\alpha \right )$$ is arbitrary.

### Proof

Both claims follow from Theorem 2.1 replacing p with $$1/p$$, q with $$-q/p$$, a with $$(ab)^{p}$$, and b with $$b^{-p}$$. We just notice that $$b^{q}/a^{p}$$ is replaced with $$b^{q}/(ab)=(b^{q}/a^{p})^{1/p}$$. □
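Analogously, (2.4) can be checked numerically. A sketch (Python; again $$\alpha =3$$ and the test values are arbitrary assumptions):

```python
# Sketch: the refined reversed Young inequality (2.4) of Corollary 2.2 (ii).
import math

def refined_reversed_young_24(a, b, p, alpha):
    """Middle expression of (2.4); assumes 0 < p < 1, conjugate q < 0, alpha > 2."""
    q = p / (p - 1)
    K = math.e**(alpha - 2) / ((alpha - 2)**(alpha - 2) * alpha * (alpha - 1))
    Cr = 1 - p - (1 - p)**alpha if a**p < b**q else p - p**alpha
    ref = K * min(a * b, b**q) * Cr / p**alpha * abs(math.log(b**q / a**p))**alpha
    return a * b - ref

a, b, p, alpha = 1.7, 0.6, 0.4, 3.0
q = p / (p - 1)
mid = refined_reversed_young_24(a, b, p, alpha)
assert a**p / p + b**q / q <= mid <= a * b
```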

Lemmas 5.3 and 5.4 in [12] follow from the second claims in Theorem 2.1 and Corollary 2.2.

### Remark 2.3

It is difficult to establish the optimal (maximal) value of the expression refining the Young inequality in (2.1) and (2.3), but it is easy to see (by differentiating the expression with respect to $$x_{0}$$) that the optimal value is achieved for some $$x_{0}$$ in the interval $$\left (\min \{\alpha -\log{a^{p}},\alpha -\log{b^{q}}\}, \max \{ \alpha -\log{a^{p}},\alpha -\log{b^{q}}\}\right )$$ and some $$x_{0}$$ in the interval $$\left (\min \{\alpha -\log{ab},\alpha -\log{b^{q}}\}, \max \{\alpha - \log{ab},\alpha -\log{b^{q}}\}\right )$$, respectively.

### Remark 2.4

The same problem appears in establishing the optimal value of the expression refining the Young inequality in (2.2) and (2.4). However, it is easy to see (using the logarithmic derivative) that the derivative of this expression with respect to α at $$\alpha =2$$ is +∞; hence the optimal value is achieved for some $$\alpha >2$$.

An alternative way to obtain refinements of the Young inequality is to consider strong $${\mathcal {F}}$$-concavity of the logarithmic function.

### Theorem 2.5

Let $$a,b>0$$, $$\alpha > 1$$, $$p>1, q>1$$ such that $$\frac{1}{p}+\frac{1}{q}=1$$. Then

\begin{aligned} &\frac{a^{p}}{p}+\frac{b^{q}}{q} \\ & \quad \geq ab \left ( \frac{a^{a^{p}}b^{b^{q}}}{\left (\frac{1}{p}a^{p}+\frac{1}{q}b^{q}\right )^{\frac{1}{p}a^{p}+\frac{1}{q}b^{q}}} \right )^{\displaystyle \min \left \{\frac{1}{a^{ p}}, \frac{1}{b^{ q}}\right \}} \\ & \quad \geq ab\left ( \frac{\exp \left (\frac{1}{p}a^{\alpha p}+\frac{1}{q}b^{\alpha q}\right )}{\exp \left (\left (\frac{a^{p}}{p}+\frac{b^{q}}{q}\right )^{\alpha}\right )} \right )^{\displaystyle \frac{1}{\alpha (\alpha -1)}\min \left \{ \frac{1}{a^{\alpha p}},\frac{1}{b^{\alpha q}}\right \}}\geq ab. \end{aligned}
(2.5)

### Proof

The function $$f(x)=\log x$$ is strongly $${\mathcal {F}}$$-concave (on some interval to be specified later) with control function $$F(x)=cx^{\alpha}$$, $$c>0$$, $$\alpha >1$$, iff $$c\leq \frac{1}{\alpha (\alpha -1)}\frac{1}{x^{\alpha}}$$.

This shows that if $$a^{p}< b^{q}$$, then the function

$$\phi (x)=\log x+\frac{1}{\alpha (\alpha -1)}\frac{1}{b^{\alpha q}}x^{ \alpha},\;\alpha >1,$$

is a concave function on $$\left (0,b^{q}\right )$$. Rearranging

$$\phi \left (\frac{a^{p}}{p}+\frac{b^{q}}{q}\right )\geq \frac{1}{p} \phi \left (a^{p}\right )+\frac{1}{q}\phi \left (b^{q}\right ),$$

we get

\begin{aligned} &\log \left (\frac{a^{p}}{p}+\frac{b^{q}}{q}\right )\geq \log{(ab)} \\ &\quad{} +\frac{1}{\alpha (\alpha -1)}\left (\frac{1}{p}\left ( \frac{a^{p}}{b^{q}}\right )^{\alpha}+\frac{1}{q}-\left (\frac{1}{p} \frac{a^{p}}{b^{q}}+\frac{1}{q}\right )^{\alpha}\right ). \end{aligned}

This shows that the first term in (2.5) is not less than the third term (in the case $$a^{p}< b^{q}$$).

The claim is that the function

$$\alpha \mapsto \mathrm{R}(\alpha )=\frac{1}{\alpha (\alpha -1)}\left ( \frac{1}{p}\left (\frac{a^{p}}{b^{q}}\right )^{\alpha}+\frac{1}{q}- \left (\frac{1}{p}\frac{a^{p}}{b^{q}}+\frac{1}{q}\right )^{\alpha} \right )$$

is a decreasing function on $$(0,\infty )$$. The inequality $$\mathrm{R}\left (\alpha _{1}\right )\geq \mathrm{R}\left (\alpha _{2} \right )$$ for $$\alpha _{1}<\alpha _{2}$$ follows trivially from convexity of the function

$$\psi (x)=\frac{1}{\alpha _{1}\left (\alpha _{1}-1\right )}x^{\alpha _{1}}- \frac{1}{\alpha _{2}\left (\alpha _{2}-1\right )}x^{\alpha _{2}}$$

on $$(0,1)$$, which is obvious since $$\psi ''(x)=x^{\alpha _{1}-2}\left (1-x^{\alpha _{2}-\alpha _{1}} \right )$$.

The first inequality now follows from

$$\lim _{\alpha \to 1}\mathrm{R}(\alpha )=\frac{1}{b^{q}}\left (a^{p} \log a+b^{q}\log b-\left (\frac{1}{p}a^{p}+\frac{1}{q}b^{q}\right ) \log \left (\frac{1}{p}a^{p}+\frac{1}{q}b^{q}\right )\right ).$$

The case $$a^{p}>b^{q}$$ can be treated similarly. □
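The whole chain (2.5) can be verified numerically. A sketch (Python; the values and $$\alpha =2$$ are arbitrary choices, with $$a^{\alpha p}=(a^{p})^{\alpha}$$ used to simplify):

```python
# Sketch: the four expressions of (2.5), left to right.
import math

def young_multiplicative_chain(a, b, p, alpha):
    q = p / (p - 1)
    A, B = a**p, b**q
    S = A / p + B / q
    T1 = (a**A * b**B / S**S) ** min(1 / A, 1 / B)
    T2 = math.exp((A**alpha / p + B**alpha / q - S**alpha)
                  * min(1 / A**alpha, 1 / B**alpha) / (alpha * (alpha - 1)))
    return S, a * b * T1, a * b * T2, a * b

S, t1, t2, ab = young_multiplicative_chain(1.3, 0.8, 2.5, 2.0)
assert S >= t1 >= t2 >= ab
```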

The reversed Young inequality of this type has a somewhat different form, in some sense more natural than the usual form (note that $$a^{p}/p+b^{q}/q<0$$ for q small enough; see (1.5) and (1.7)).

### Theorem 2.6

Let $$a,b>0$$, $$\alpha >1$$, $$0< p<1$$, $$q<0$$ such that $$\frac{1}{p}+\frac{1}{q}=1$$. Then

\begin{aligned} &ab-\frac{1}{q}b^{q}\geq \frac{1}{p}a^{p} \\ &\quad \cdot \left ( \frac{(ab)^{pab}b^{-pb^{q}}}{\left (pab -\frac{p}{q}b^{q}\right )^{pab-\frac{p}{q}b^{q}}} \right )^{\displaystyle \min \left \{\frac{1}{ab},\frac{1}{b^{q}} \right \}} \\ & \quad \geq \frac{1}{p}a^{p} \left ( \frac{\exp \left (p(ab)^{\alpha}-\frac{p}{q}b^{\alpha q}\right )}{\exp \left (\left (pab-\frac{p}{q}b^{q}\right )^{\alpha}\right )} \right )^{\displaystyle \frac{1}{\alpha (\alpha -1)}\min \left \{ \frac{1}{(ab)^{\alpha}},\frac{1}{b^{\alpha q}}\right \}} \\ & \quad\geq \frac{1}{p}a^{p}. \end{aligned}
(2.6)

### Proof

The first two inequalities follow from Theorem 2.5 using the same replacements as in the proof of Corollary 2.2. The third inequality again follows from the Jensen inequality for the function $$t\mapsto t^{\alpha}$$, $$\alpha >1$$. □

An important application of the Young inequality (also as the weighted AG-inequality) and its reversed forms (for $$0< p<1$$, or for $$p>1$$ using the Specht ratio, the Kantorovich constant, and their generalizations; see [13, Chap. 2]) is in the theory of operator inequalities (see [13] and the references therein). One basic property that enables producing these types of inequalities is that the inequalities actually depend only on the ratio $$a/b$$ (if one uses the weighted AG-inequality) or the ratio $$a^{p}/b^{q}$$ (if the Young inequality is used). See [13, Chap. 2, Sects. 6 and 7]. This is also true for the refinements of the Young and the reversed Young inequality given in Theorems 2.5 and 2.6.

Set $$h=a^{p}/b^{q}$$. The following claims are straightforward.

The inequalities in Theorem 2.5 can be written as

$$\frac{a^{p}}{p}+\frac{b^{q}}{q}\geq abY_{1}\left (h,p,q\right )\geq ab Y_{2}\left (h,p,q,\alpha \right )\geq ab,$$

where

\begin{aligned}& Y_{1}\left (h,p,q\right )=\left ( \frac{h^{\frac{1}{p}h}}{\left (\frac{1}{p}h+\frac{1}{q}\right )^{\frac{1}{p}h+\frac{1}{q}}} \right )^{\displaystyle \min \left \{\frac{1}{h},1\right \}},\\& Y_{2}\left (h,p,q,\alpha \right )=\left ( \frac{\exp \left (\frac{1}{p}h^{\alpha }+\frac{1}{q}\right )}{\exp \left (\left (\frac{1}{p}h+\frac{1}{q}\right )^{\alpha}\right )} \right )^{\displaystyle \frac{1}{\alpha (\alpha -1)}\min \left \{ \frac{1}{h^{\alpha }},1\right \}}. \end{aligned}

Similarly, the inequalities in Theorem 2.6 can be written as

$$ab-\frac{1}{q}b^{q}\geq \frac{1}{p}a^{p}RY_{1}\left (h,p,q\right ) \geq \frac{1}{p}a^{p}RY_{2}\left (h,p,q,\alpha \right )\geq \frac{1}{p}a^{p},$$

where

\begin{aligned} &RY_{1}\left (h,p,q\right )=\left ( \frac{h^{\frac{1}{q}h^{-\frac{1}{p}}}}{\left (p -\frac{p}{q}h^{-\frac{1}{p}}\right )^{p-\frac{p}{q}h^{-\frac{1}{p}}}} \right )^{\displaystyle \min \left \{h^{\frac{1}{p}},1\right \}},\\ &RY_{2}\left (h,p,q,\alpha \right )=\left ( \frac{\exp \left (p-\frac{p}{q}h^{-\frac{\alpha}{p}}\right )}{\exp \left (\left (p-\frac{p}{q}h^{-\frac{1}{p}}\right )^{\alpha}\right )} \right )^{\displaystyle \frac{1}{\alpha (\alpha -1)}\min \left \{h^{ \frac{\alpha}{p}},1\right \}}. \end{aligned}
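The claim that the refining factor depends only on h can be tested numerically. A sketch (Python; the pairs $$(a,b)$$ below are arbitrary, the second pair constructed to share the same h):

```python
# Sketch: the factor Y1 from the text equals the middle factor in (2.5)
# and depends on (a, b) only through h = a^p / b^q.
def Y1(h, p):
    q = p / (p - 1)
    base = h ** (h / p) / (h / p + 1 / q) ** (h / p + 1 / q)
    return base ** min(1 / h, 1)

def T1(a, b, p):
    """Middle factor of (2.5): (a^{a^p} b^{b^q} / S^S)^{min(1/a^p, 1/b^q)}."""
    q = p / (p - 1)
    A, B = a ** p, b ** q
    S = A / p + B / q
    return (a ** A * b ** B / S ** S) ** min(1 / A, 1 / B)

p = 2.5
a1, b1 = 1.3, 0.8
h = a1 ** p / b1 ** (p / (p - 1))
a2 = 2.0
b2 = (a2 ** p / h) ** ((p - 1) / p)   # solve b2^q = a2^p / h for b2
assert abs(T1(a1, b1, p) - T1(a2, b2, p)) < 1e-9   # same h, same factor
assert abs(T1(a1, b1, p) - Y1(h, p)) < 1e-9        # factor is a function of h
```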

## 3 Self-improving property of Jensen type inequalities for strongly $${\mathcal {F}}$$-convex and concave functions

In this section we present how self-improvement for strongly $${\mathcal {F}}$$-convex and concave functions works in the case of two of the most important inequalities, the Jensen inequality and the Lah–Ribarič inequality (or the converse Jensen inequality). We also consider the case of a general Hermite–Hadamard inequality as a unification of the former two inequalities.

Although the known proofs of this type of refinement for strongly convex functions are based on the notions of quadratic support functions and generalized Beckenbach convexity (see [6] and [9]), our approach is simpler (and more general). It would be of some interest to develop analogous theories in the case of strongly $${\mathcal {F}}$$-convex (and concave) functions.

All results in this section also hold, with the same proofs, in a very general setting (as in Theorem 1.3), but we give them only for real intervals since the examples and applications in the next sections are presented only for this case. We also emphasize the concave case since it is largely neglected: for example, in [6] and [9] this notion is neither defined nor mentioned, although we saw in the preceding section that it is important to work with it as well.

First we give a variant of the integral form of the Jensen inequality for strongly $$\mathcal {F}$$-concave (convex) functions.

### Theorem 3.1

Let μ be a probability measure on $$[a,b]\subset \mathbb{R}$$. Let $$g \colon [a, b] \to \mathbb{R}$$ be a μ-integrable function. Suppose that $$I\subseteq \mathbb{R}$$ contains the image of g. If $$f:I\to \mathbb{R}$$ is a strongly $$\mathcal {F}$$-concave function with control function F, then

\begin{aligned} & f \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \\ & \geq \int _{a}^{b} f \left ( g(t) \right ) d \mu (t) + \left [ \int _{a}^{b} F \left ( g(t) \right ) d \mu (t) - F \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \right ] \\ & \geq \int _{a}^{b} f \left ( g(t) \right ) d \mu (t). \end{aligned}
(3.1)

If f is strongly $${\mathcal {F}}$$-convex (with control function F), then (3.1) holds with the reversed inequalities and minus sign in front of the brackets in the middle term.

### Proof

The first inequality follows from the integral form of the Jensen inequality (1.2) for the concave function $$f + F$$.

The second inequality follows from (1.2) for the convex function F. □

Notice that $$\int _{a}^{b} F \left ( g(t) \right ) d \mu (t) - F \left ( \int _{a}^{b} g(t) d \mu (t) \right )=c\int _{a}^{b}\left (g(t)-\overline{g}\right )^{2}d \mu (t)$$ if $$F(t)=ct^{2}$$, where $$\overline{g}=\int _{a}^{b}g(t)d\mu (t)$$ (see (1.4)).
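A discrete numerical sketch of (3.1) (Python; the discrete measure, the bound M, and the pair $$f=\log$$, $$F(x)=x^{2}/(2M^{2})$$ are our own assumptions, chosen so that $$f+F$$ is concave on $$(0,M]$$ because $$(f+F)''=-1/x^{2}+1/M^{2}\leq 0$$ there):

```python
# Sketch: the refined Jensen inequality (3.1) for a discrete probability measure.
import math

g = [0.5, 1.2, 2.0, 3.5]            # values of g
w = [0.1, 0.4, 0.3, 0.2]            # weights summing to 1
M = 4.0                             # upper bound for the values of g
f = math.log                        # concave on (0, M]
F = lambda x: x * x / (2 * M * M)   # control: f + F is concave on (0, M]

mean = sum(wi * gi for wi, gi in zip(w, g))
int_f = sum(wi * f(gi) for wi, gi in zip(w, g))
int_F = sum(wi * F(gi) for wi, gi in zip(w, g))
refinement = int_F - F(mean)        # the bracket in (3.1); nonnegative by Jensen
assert refinement >= 0
assert f(mean) >= int_f + refinement >= int_f
```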

Next, we give a variant of the integral Lah–Ribarič inequality for strongly $$\mathcal {F}$$-concave (convex) functions.

### Theorem 3.2

Let μ be a probability measure on $$[a,b]$$, and let $$g \colon [a, b] \to \mathbb{R}$$ be a μ-integrable function such that $$m \leq g(t) \leq M$$ for all $$t \in [a, b]$$, $$m < M$$. Suppose that $$I\subseteq \mathbb{R}$$ is such that $$[m,M]\subseteq I$$. If $$f:I\to \mathbb{R}$$ is a strongly $$\mathcal {F}$$-concave function with control function F, then

\begin{aligned} & \int _{a}^{b} f \left ( g(t) \right ) d \mu (t) \\ & \geq \frac{M - \bar{g}}{M - m} f(m) + \frac{\bar{g} - m}{M - m} f(M) \\ & + \left [ \frac{M - \bar{g}}{M - m} F(m) + \frac{\bar{g} - m}{M - m} F(M) - \int _{a}^{b} F \left ( g(t) \right ) d \mu (t) \right ] \\ & \geq \frac{M - \bar{g}}{M-m} f(m) + \frac{\bar{g} - m}{M-m} f(M), \end{aligned}
(3.2)

where $$\bar{g} = \int _{a}^{b} g(t) d \mu (t)$$.

If f is strongly $${\mathcal {F}}$$-convex with control function F, then (3.2) holds with the reversed inequalities and minus sign in front of the brackets in the middle term.

### Proof

The first inequality follows from the integral form of the Lah–Ribarič inequality (1.3) for the concave function $$f + F$$. The second inequality follows from (1.3) for the convex function F. □
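A discrete sketch of (3.2) in the same spirit (Python; the measure, the bounds m, M, and the choice $$f=\log$$ with control $$F(x)=x^{2}/(2M^{2})$$, which makes $$f+F$$ concave on $$(0,M]$$, are our own assumptions):

```python
# Sketch: the refined Lah-Ribaric inequality (3.2) for a discrete measure.
import math

g = [0.5, 1.2, 2.0, 3.5]
w = [0.1, 0.4, 0.3, 0.2]
m, M = 0.5, 3.5                     # m <= g <= M
f = math.log
F = lambda x: x * x / (2 * M * M)

gbar = sum(wi * gi for wi, gi in zip(w, g))
int_f = sum(wi * f(gi) for wi, gi in zip(w, g))
int_F = sum(wi * F(gi) for wi, gi in zip(w, g))
lr = lambda h: (M - gbar) / (M - m) * h(m) + (gbar - m) / (M - m) * h(M)
bracket = lr(F) - int_F             # the bracket in (3.2); nonnegative for convex F
assert bracket >= 0
assert int_f >= lr(f) + bracket >= lr(f)
```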

Finally, we give a variant of Theorem 1.5 for strong $$\mathcal {F}$$-concavity. We present it in such a way that the refinements of the general Hermite–Hadamard inequality are more explicit.

### Corollary 3.3

Let μ be a probability measure on $$[a,b]$$, and let $$g \colon [a, b] \to \mathbb{R}$$ be a μ-integrable function such that $$m \leq g(t) \leq M$$ for all $$t \in [a, b]$$, $$m < M$$. Suppose that $$I\subseteq \mathbb{R}$$ is such that $$[m,M]\subseteq I$$. If $$f:I\to \mathbb{R}$$ is a strongly $$\mathcal {F}$$-concave function with control function F, then

\begin{aligned} & f\left (\int _{a}^{b} g(t) d \mu (t)\right ) \\ & \geq \int _{a}^{b} f \left ( g(t) \right ) d \mu (t) + \left [ \int _{a}^{b} F \left ( g (t) \right ) d \mu (t) - F \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \right ] \\ & \geq \frac{M - \bar{g}}{M - m} f(m) + \frac{\bar{g} - m}{M - m} f(M) \\ & + \left [ \frac{M - \bar{g}}{M - m} F(m) + \frac{\bar{g} - m}{M - m} F(M) - F \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \right ] \\ & \geq \frac{M - \bar{g}}{M - m} f(m) + \frac{\bar{g} - m}{M - m} f(M), \end{aligned}
(3.3)

where $$\bar{g} = \int _{a}^{b} g(t) d \mu (t)$$.

If f is strongly $${\mathcal {F}}$$-convex with control function F, then (3.3) holds with the reversed inequalities and minus signs in front of the brackets in the second and third term.

### Proof

The first inequality is the first inequality in (3.1).

The second inequality follows from the Lah–Ribarič inequality (1.3) for the concave function $$f+F$$. □

It is straightforward to see that

$$\frac{M - \bar{g}}{M - m} F(m) + \frac{\bar{g} - m}{M - m} F(M) - F \left ( \int _{a}^{b} g(t) d \mu (t) \right ) =c \left ( M-\bar{g} \right )(\bar{g} - m)$$

for $$F(x)=cx^{2}$$ (see (1.4)).
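This identity can be checked directly; a one-line numerical sanity check (Python; the values are arbitrary):

```python
# Sketch: for F(x) = c x**2 the Lah-Ribaric gap equals c (M - gbar)(gbar - m).
c, m, M, gbar = 1.7, 0.3, 2.5, 1.1
F = lambda x: c * x * x
lhs = (M - gbar) / (M - m) * F(m) + (gbar - m) / (M - m) * F(M) - F(gbar)
assert abs(lhs - c * (M - gbar) * (gbar - m)) < 1e-9
```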

## 4 Strong $${\mathcal {F}}$$-convexity and concavity with control functions $$F(x)=c\left |x-x_{0}\right |^{\alpha}$$. Applications and examples

Although many papers have been written on refining classical inequalities for strongly convex functions, surprisingly few deal with the effectiveness of these refinements, especially with determining the modulus of strong convexity.

Results from Sect. 3 are generalizations of Theorem 1.3 and Theorem 1.5. In this section we give a more detailed analysis of how the choice of control functions F in these generalizations can yield new refinements.

We will consider a suitable class of convex functions F that naturally generalizes the class of functions $$x\mapsto c x^{2}$$, $$c>0$$, which generates strongly convex and concave functions. Let

$$F_{\alpha}(x)=c \left |x-x_{0}\right |^{\alpha},\; c>0,\; x_{0}\in \mathbb{R},\; \alpha >1,$$

assuming $$x_{0}\leq m$$.

We denote the refining expression in (3.1) for the functions $$F_{\alpha}$$ by

$$Ref_{1} ( \alpha ) := \int _{a}^{b} F_{\alpha} \left ( g(t) \right ) d \mu (t) - F_{\alpha} \left ( \int _{a}^{b} g(t) d \mu (t) \right ),$$
(4.1)

and the refining expression in (3.2) for the functions $$F_{\alpha}$$ by

$$Ref_{2} ( \alpha ) := \frac{M - \bar{g}}{M - m} F_{\alpha}(m) + \frac{\bar{g} - m}{M - m} F_{\alpha}(M) - \int _{a}^{b} F_{\alpha} \left ( g(t) \right ) d \mu (t).$$
(4.2)

In the variants given in Sect. 3, the function f influences the refining expression only through the constant c. It is therefore of interest to find conditions on a general function f and a convex function $$F_{\alpha}$$ under which the theorems from the previous section can be applied.

Suppose that $$f\in C^{2}\left ([m,M]\right )$$ and $$x_{0}\leq m$$. Then

$$f(x)+c (x-x_{0})^{\alpha}$$, $$\alpha >1$$, is concave on $$[m,M]$$ iff

$$c\leq -\frac{f''(x)}{\alpha (\alpha -1)} \frac{1}{(x-x_{0})^{\alpha -2}} \quad \textrm{for all } x \in [m,M].$$

Hence if

$$0< c\leq \frac{1}{\alpha (\alpha -1)}\min _{x\in [m,M]} \frac{-f''(x)}{(x-x_{0})^{\alpha -2}},$$

then f is strongly $${\mathcal {F}}$$-concave on $$[m,M]$$ with control function

$$F_{\alpha}(x)=c (x-x_{0})^{\alpha}$$, $$x_{0}\leq m$$.

Since $$Ref_{1}$$ and $$Ref_{2}$$ are increasing in $$c>0$$, it is optimal to take

$$c=c \left ( \alpha \right )= \frac{1}{\alpha (\alpha -1)}\min _{x\in [m,M]} \frac{-f''(x)}{(x-x_{0})^{\alpha -2}}.$$
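The optimal constant $$c(\alpha )$$ can be determined numerically when no closed form is available. A sketch (Python; the concrete choices $$f=\log$$, $$[m,M]=[1,4]$$, $$x_{0}=0$$, $$\alpha =3$$ are assumptions, and a simple grid minimum is used):

```python
# Sketch: optimal modulus c(alpha) for f = log on [m, M] = [1, 4], x0 = 0.
# Here -f''(x) = 1/x**2, so -f''(x)/x**(alpha-2) = 1/x**3 for alpha = 3,
# which is decreasing; hence the minimum sits at x = M.
m, M, alpha = 1.0, 4.0, 3.0
xs = [m + (M - m) * i / 100000 for i in range(100001)]
c = min(1.0 / x**2 / x**(alpha - 2) for x in xs) / (alpha * (alpha - 1))
assert abs(c - 1.0 / (alpha * (alpha - 1) * M**3)) < 1e-9
```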

A more insightful case is given in the following theorem.

### Theorem 4.1

Let $$f\in C^{2}([m,M])$$ be a concave (convex) function and $$x_{0} \leq m$$.

1. 1.

If there is $$\alpha _{0}\geq 1$$ such that

$$\frac{\left |f''(x)\right |}{(x-x_{0})^{\alpha _{0} - 2}}\; \textrm{is decreasing on}\;[m, M],\;\textrm{and}\; \left |f''(M)\right | > 0,$$

then

1. (a)

f is strongly $$\mathcal {F}$$-concave (strongly $${\mathcal {F}}$$-convex) with control function

$$F_{\alpha} (x) = \frac{1}{\alpha (\alpha - 1)} \frac{\left |f''(M)\right |}{(M-x_{0})^{\alpha - 2}} (x-x_{0})^{ \alpha}$$

for every $$\alpha \geq \alpha _{0}$$. In particular, for $$\alpha = 1$$, we define

$$F_{1}(x)=\left |f''(M)\right |(M-x_{0})(x-x_{0})\log{(x-x_{0})};$$
2. (b)

the refining expressions $$Ref_{1}$$ and $$Ref_{2}$$ defined as in (4.1) and (4.2) are decreasing functions in $$\alpha \geq \alpha _{0}$$;

3. (c)

the maximal refinements are obtained for $$\alpha = \alpha _{0}$$.

2. 2.

If there is $$\alpha _{0}\geq 1$$ such that

$$\frac{\left |f''(x)\right |}{(x-x_{0})^{\alpha _{0} - 2}}\; \textrm{is increasing on}\;[m, M],\;\textrm{and}\; \frac{\left |f''(m)\right |}{(m-x_{0})^{\alpha _{0} - 2}} > 0,$$

then

1. (a)

f is strongly $$\mathcal {F}$$-concave (strongly $${\mathcal {F}}$$-convex) with control function

$$F_{\alpha} (x) = \frac{1}{\alpha (\alpha - 1)} \frac{\left |f''(m)\right |}{(m-x_{0})^{\alpha - 2}} (x-x_{0})^{ \alpha}$$

for every $$1 < \alpha \leq \alpha _{0}$$. In particular, for $$\alpha = 1$$, we define

$$F_{1}(x)=\left |f''(m)\right |(m-x_{0})(x-x_{0})\log{(x-x_{0})};$$
2. (b)

the refining expressions $$Ref_{1}$$ and $$Ref_{2}$$ are increasing functions in $$1\leq \alpha \leq \alpha _{0}$$;

3. (c)

the maximal refinements are obtained for $$\alpha = \alpha _{0}$$.

### Proof

We prove the concave case. The proof of the convex case is analogous: replace f with −f.

1. 1.

Strong $${\mathcal {F}}$$-concavity of the function f with control function

$$F_{\alpha} (x) = \frac{1}{\alpha (\alpha - 1)} \frac{- f''(M)}{(M-x_{0})^{\alpha - 2}} (x-x_{0})^{\alpha}$$

is equivalent to

$$\frac{-f''(x)}{(x-x_{0})^{\alpha -2}}\geq \frac{-f''(M)}{\left (M-x_{0}\right )^{\alpha -2}},$$

which is obvious for $$\alpha \geq \alpha _{0}$$ since $$\frac{-f''(x)}{(x-x_{0})^{\alpha -2}}= \frac{1}{(x-x_{0})^{\alpha -\alpha _{0}}} \frac{-f''(x)}{(x-x_{0})^{\alpha _{0}-2}}$$ and $$\frac{-f''(x)}{(x-x_{0})^{\alpha _{0}-2}}$$ is a decreasing function.

Notice that $$Ref_{1}(\alpha )$$ and $$Ref_{2}(\alpha )$$ are defined for $$F_{\alpha}$$ independently of the context (assuming $$g(t)\in [m,M]$$ for every $$t\in [a,b]$$). A simple application of the L’Hospital rule gives $$\lim _{\alpha \to 1}Ref_{1}(\alpha )=Ref_{1}(1)$$ and $$\lim _{\alpha \to 1}Ref_{2}(\alpha )=Ref_{2}(1)$$.

The claim is that the refining expressions $$Ref_{1}$$ and $$Ref_{2}$$ are decreasing in $$\alpha \geq \alpha _{0}$$. We give the proof for $$Ref_{1}$$ (for $$Ref_{2}$$ the proof is analogous). For $$\alpha _{0} \leq \alpha _{1}<\alpha _{2}$$, we must show that

\begin{aligned} & \int _{a}^{b} F_{\alpha _{1}} \left ( g(t) \right ) d \mu (t) - F_{ \alpha _{1}} \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \\ & \geq \int _{a}^{b} F_{\alpha _{2}} \left ( g(t) \right ) d \mu (t) - F_{\alpha _{2}} \left ( \int _{a}^{b} g(t) d \mu (t) \right ) \end{aligned}

holds. Using Theorem 1.2 (the convex part of this theorem), it is enough to prove that

\begin{aligned} \phi (x)&=F_{\alpha _{1}}(x)-F_{\alpha _{2}}(x) \\ & = \frac{1}{\alpha _{1} (\alpha _{1} - 1)} \frac{- f''(M)}{(M-x_{0})^{\alpha _{1} - 2}}(x-x_{0})^{\alpha _{1}} - \frac{1}{\alpha _{2} (\alpha _{2} - 1)} \frac{- f''(M)}{(M-x_{0})^{\alpha _{2} - 2}} (x-x_{0})^{\alpha _{2}} \end{aligned}

is a convex function on $$[m,M]$$. This is immediate from

$$\phi ''(x)= - f''(M) \left ( \frac{x-x_{0}}{M-x_{0}} \right )^{ \alpha _{1} - 2} \left ( 1 - \left (\frac{x-x_{0}}{M-x_{0}}\right )^{ \alpha _{2} - \alpha _{1}}\right ) \geq 0.$$

The maximality claim in (c) follows directly.

2. 2.

Analogous to the proof of (1).

□

### Remark 4.2

Theorem 4.1 heavily depends on the monotonicity of the function $$x\mapsto \left |f''(x)\right |/(x-x_{0})^{\alpha -2}$$ on $$[m,M]\subset \left [x_{0},\infty \right )$$ for some $$\alpha >1$$. If this monotonicity cannot be obtained, one can argue as follows. Suppose that $$\left |f''(x)\right |\geq m_{2}>0$$ on $$[m,M]$$. Then $$f\in C^{2}\left ([m,M]\right )$$ is strongly $${\mathcal {F}}$$-concave (convex) with control function $$x\mapsto c(x-x_{0})^{\alpha}$$ if

$$0< c\leq \frac{m_{2}}{\alpha (\alpha -1)} \frac{1}{(x-x_{0})^{\alpha -2}} \quad \text{for every } x\in [m,M].$$

If $$1<\alpha \leq 2$$, then the optimal value is $$c=\frac{m_{2}}{\alpha (\alpha -1)}(m-x_{0})^{2-\alpha}$$, which gives that either the function f is not strongly $${\mathcal {F}}$$-concave (convex) or the maximal refinements are $$Ref_{1}(2)$$ and $$Ref_{2}(2)$$ (see (2) in Theorem 4.1). If $$\alpha \geq 2$$, then the optimal value is $$c=\frac{m_{2}}{\alpha (\alpha -1)}(M-x_{0})^{2-\alpha}$$, and again the maximal refinements are $$Ref_{1}(2)$$ and $$Ref_{2}(2)$$ (see (1) in Theorem 4.1). In this situation we cannot improve the refinements $$Ref_{1}$$ and $$Ref_{2}$$ obtained in the strong concavity (convexity) case. The optimality is lost in the initial step since generally

$$\min _{x \in [m, M]} \left \{\frac{1}{(x-x_{0})^{\alpha - 2}} \right \} \cdot \min _{x \in [m, M]} \left \{ \left |f''(x)\right | \right \} \leq \min _{x \in [m, M]}\frac{\left |f''(x)\right |}{(x-x_{0})^{\alpha - 2}}.$$

### Example 4.3

The function $$x\mapsto e^{x}/x$$ is decreasing on $$(0,1]$$ and increasing on $$[1,\infty )$$, and the function $$x\mapsto e^{x}/x^{2}$$ is decreasing on $$(0,2]$$ and increasing on $$[2,\infty )$$. According to Theorem 4.1, the function $$f(x)=e^{x}$$ is strongly $${\mathcal {F}}$$-convex on $$[1,2]$$ with control function $$F_{\alpha}^{1}(x)=\frac{1}{\alpha (\alpha -1)}ex^{\alpha}$$ for $$1<\alpha \leq 3$$ and with control function $$F_{\alpha}^{2}(x)=\frac{1}{\alpha (\alpha -1)}\frac{e^{2}}{2^{2}}x^{ \alpha}$$ for $$\alpha \geq 4$$. It seems that the method of comparing $$Ref_{1}^{1}(3)$$ (i.e. $$Ref_{1}(\alpha )$$ defined using $$F_{\alpha}^{1}$$) and $$Ref_{1}^{2}(4)$$ (i.e. $$Ref_{1}(\alpha )$$ defined using $$F_{\alpha}^{2}$$) given in the proof of Theorem 4.1 does not work here. Analogous statements hold for $$Ref_{2}$$.
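The monotonicity claims and the resulting strong $${\mathcal {F}}$$-convexity in this example can be verified numerically. The sketch below checks, on a grid over $$[1,2]$$, that the stated minima of $$e^{x}/x$$ and $$e^{x}/x^{2}$$ are attained at the endpoints and that $$f''-\left (F_{\alpha}^{1}\right )''\geq 0$$ for $$\alpha =3$$ and $$f''-\left (F_{\alpha}^{2}\right )''\geq 0$$ for $$\alpha =4$$ (so $$f-F_{\alpha}$$ is convex); this is an illustrative check, not a proof.

```python
import numpy as np

# Numerical sketch of Example 4.3 on the interval [1, 2] for f(x) = e^x.
x = np.linspace(1.0, 2.0, 401)

# e^x / x attains its minimum on [1, 2] at x = 1 (value e),
# e^x / x^2 attains its minimum on [1, 2] at x = 2 (value e^2/4).
assert np.isclose((np.exp(x) / x).min(), np.e)
assert np.isclose((np.exp(x) / x**2).min(), np.e**2 / 4)

# Strong F-convexity means f - F_alpha is convex, i.e. f'' - F_alpha'' >= 0:
gap1 = np.exp(x) - np.e * x                 # f'' - (F^1_3)''  (alpha = 3)
gap2 = np.exp(x) - (np.e**2 / 4) * x**2     # f'' - (F^2_4)''  (alpha = 4)
assert gap1.min() >= -1e-12 and gap2.min() >= -1e-12   # nonnegative up to rounding
```

Both gaps vanish exactly at the endpoint where the corresponding minimum of $$e^{x}/x^{\alpha -2}$$ is attained, which is why the constants in $$F_{\alpha}^{1}$$ and $$F_{\alpha}^{2}$$ are optimal.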

Suppose that $$f\in C^{2}([a,b])$$. Then $$f(x)+cx^{2}$$ is concave iff $$f''(x)\leq -2c$$ for every $$x\in [a,b]$$. This means that if there is an $$x_{0}\in [a,b]$$ such that $$f''(x_{0})=0$$, then no such $$c>0$$ can exist. It follows that in such cases the method of strongly concave (and similarly strongly convex) functions cannot be used to improve Jensen type inequalities.

### Example 4.4

Let $$f(x)=\sin \left ( \frac{\pi}{2} x\right )$$, $$x\in [0,1]$$. Since $$f''(0)=0$$, Theorem 1.3 cannot be used to improve Jensen type inequalities.

Let $$F_{\alpha}(x)=cx^{\alpha}$$, $$x\in [0,1]$$, $$c>0$$, $$\alpha >1$$. Then $$f+F_{\alpha}$$ is concave iff

$$c \leq \frac{\pi ^{2}}{4 \alpha (\alpha - 1)} \sin \left ( \frac{\pi}{2} x \right ) x^{2 - \alpha}.$$

For $$\alpha \in (1, 3)$$, f is not strongly $$\mathcal {F}$$-concave with control function $$F_{\alpha}(x)=cx^{\alpha}$$ for any $$c > 0$$. For $$\alpha \in [3, + \infty )$$, $$\sin \left ( \frac{\pi}{2} x \right ) x^{2 - \alpha}$$ is a decreasing function on $$[0, 1]$$, so we infer that for

$$\alpha \geq 3,\; 0< c \leq \frac{\pi ^{2}}{4 \alpha (\alpha -1)}$$

the function f is strongly $${\mathcal {F}}$$-concave with control function $$F_{\alpha} (x)= c x^{\alpha}$$. We can use Theorem 3.1 for refining the integral Jensen inequality in this case.

Let $$F_{\alpha} (x) = \frac{\pi ^{2}}{4 \alpha (\alpha -1)} x^{\alpha}$$. Using Theorem 4.1, the refining expressions $$Ref_{1}$$ and $$Ref_{2}$$ are decreasing in α for $$\alpha \geq 3$$ for every $$g \colon [0, 1] \to \mathbb{R}$$ such that $$g \left ( [0, 1] \right ) \subseteq [0, 1]$$. The best possible improvements are obtained for $$\alpha = 3$$.
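For a concrete illustration, take the hypothetical choice μ = Lebesgue measure on $$[0,1]$$ and $$g(t)=t$$ (neither is prescribed by the example). The sketch below checks that $$f+F_{3}$$ is concave on $$[0,1]$$ and that the Jensen gap for the concave f dominates the refining term $$\int F(g)\,d\mu -F\left (\int g\,d\mu \right )$$, using a midpoint rule for the integrals.

```python
import numpy as np

# Refined Jensen inequality from Example 4.4, checked for sample data.
# Assumptions (illustrative): mu = Lebesgue on [0, 1], g(t) = t, alpha = 3.
f = lambda x: np.sin(np.pi * x / 2)
c = np.pi**2 / 24                        # c = pi^2 / (4 alpha (alpha - 1)), alpha = 3
F = lambda x: c * x**3                   # control function F_3

n = 100000
t = (np.arange(n) + 0.5) / n             # midpoint rule on [0, 1]
g = t                                    # g([0, 1]) is contained in [0, 1]
mean_g = g.mean()                        # integral of g d mu = 1/2

# f + F is concave: (f + F)''(x) = (pi^2 / 4)(x - sin(pi x / 2)) <= 0 on [0, 1]
assert ((np.pi**2 / 4) * (t - f(t))).max() <= 0

jensen_gap = f(mean_g) - np.mean(f(g))   # >= 0 since f is concave
refinement = np.mean(F(g)) - F(mean_g)   # here equals pi^2 / 192
assert jensen_gap >= refinement > 0
```

With this data the Jensen gap is about 0.07 while the refining term is $$\pi ^{2}/192\approx 0.051$$, so the refinement captures a substantial part of the gap.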

Similar analysis can be carried out for $$f(x)=\cos{\left (\frac{\pi}{2}x \right )}$$, $$F_{\alpha}(x)=c(1-x)^{\alpha}$$, $$x\in [0,1]$$, $$\alpha >1$$, $$c>0$$.

As an illustration of many possible applications of strong $${\mathcal {F}}$$-concavity and convexity, we give the following corollary. All terms in this corollary can be expressed in terms of the power means

$$M_{p}\left (g; d\mu \right )=\left (\int _{a}^{b}g^{p}(t)\,d\mu (t) \right )^{\frac{1}{p}},\quad p\neq 0,\qquad M_{0}\left (g; d\mu \right )=\exp{ \left (M_{1}\left (\log{g};d\mu \right )\right )},$$

where μ is a probability measure on $$[a,b]$$ and g is a nonnegative μ-integrable function on $$[a,b]$$; we do not use this notation explicitly so as not to overburden the corollary.
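A minimal sketch of these power means, for a discrete probability measure with hypothetical data, also displays the classical monotonicity $$M_{p}\leq M_{q}$$ for $$p\leq q$$, which the refinements of this section sharpen.

```python
import numpy as np

# Power means M_p(g; d mu) for a discrete probability measure (illustrative data).
def power_mean(g, w, p):
    """M_p(g) with weights w summing to 1; p = 0 is the geometric mean M_0."""
    g, w = np.asarray(g, float), np.asarray(w, float)
    if p == 0:
        return np.exp(np.sum(w * np.log(g)))    # M_0 = exp(M_1(log g; d mu))
    return np.sum(w * g**p) ** (1.0 / p)

g = [0.5, 1.0, 2.0, 4.0]                 # hypothetical sample values of g
w = [0.25, 0.25, 0.25, 0.25]             # uniform weights (a probability measure)
ps = [-1, 0, 0.5, 1, 2, 3]
means = [power_mean(g, w, p) for p in ps]
# p -> M_p is nondecreasing (the power mean inequality)
assert all(m1 <= m2 + 1e-12 for m1, m2 in zip(means, means[1:]))
```

For this data $$M_{0}$$ is the geometric mean $$(0.5\cdot 1\cdot 2\cdot 4)^{1/4}=\sqrt{2}$$.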

### Corollary 4.5

Let $$\alpha >1$$, $$0< p<1$$, and $$M>0$$. Let μ be a probability measure on $$[a,b]\subset \mathbb{R}$$ and $$g \colon [a,b] \to \mathbb{R}$$ be a μ-integrable function such that $$0\leq g(t)\leq M$$ for every $$t\in [a,b]$$. Then

\begin{aligned} &\frac{1}{\alpha (\alpha -1)}\frac{1}{M^{\alpha}}\left (\int _{a}^{b}g^{\alpha}(t)d\mu (t)-\left (\int _{a}^{b}g(t)d\mu (t)\right )^{\alpha}\right ) \\ & \quad\leq \frac{1}{M}\left (\int _{a}^{b}g(t)\log{g(t)}d\mu (t)-\int _{a}^{b}g(t)d \mu (t)\log \int _{a}^{b}g(t)d\mu (t)\right ) \\ & \quad\leq \frac{1}{p (1-p)}\frac{1}{M^{p}}\left (\left (\int _{a}^{b}g(t)d \mu (t)\right )^{p}-\int _{a}^{b}g^{p}(t)d\mu (t)\right ) \\ & \quad\leq \log{\int _{a}^{b}g(t)d\mu (t)}-\int _{a}^{b}\log{g(t)}d\mu (t). \end{aligned}
(4.3)

### Proof

First we prove inequalities using the integral Jensen inequality, and then we give an alternative proof using the method from the proof of Theorem 4.1.

The integral Jensen inequality is applied in all cases. Denote the chain (4.3) as

$$I_{1}\leq I_{2}\leq I_{3}\leq I_{4}.$$
1.

$$I_{1}\leq I_{2}$$ follows from convexity on $$[0, M]$$ of the function

$$x\mapsto x\log x-\frac{1}{\alpha (\alpha -1)}\frac{1}{M^{\alpha -1}}x^{ \alpha}.$$
2.

$$I_{1}\leq I_{3}$$ follows from concavity on $$[0, M]$$ of the function

$$x\mapsto x^{p}+\frac{p(1-p)}{\alpha (\alpha -1)} \frac{1}{M^{\alpha -p}}x^{\alpha}.$$
3.

$$I_{1}\leq I_{4}$$ follows from concavity on $$[0, M]$$ of the function

$$x\mapsto \log x-\frac{1}{\alpha (\alpha -1)}\frac{1}{M^{\alpha}}x^{ \alpha}.$$
4.

$$I_{2}\leq I_{3}$$ follows from concavity on $$[0, M]$$ of the function

$$x\mapsto x^{p}+\frac{p(1-p)}{M^{1-p}}x\log x.$$
5.

$$I_{2}\leq I_{4}$$ follows from concavity on $$[0, M]$$ of the function

$$x\mapsto \log x+\frac{1}{M}x\log x.$$
6.

$$I_{3}\leq I_{4}$$ follows from concavity on $$[0, M]$$ of the function

$$x\mapsto \log x+\frac{1}{p (1-p)}\frac{1}{M^{p}}\left (-x^{p}\right ).$$

The last function is written in this somewhat unusual form so that it fits our notation for strong $${\mathcal {F}}$$-concavity.

The corollary can also be proved as follows. The inequality $$I_{1}\leq I_{3}$$ (proven above) written explicitly reads

\begin{aligned} &\frac{1}{\alpha (\alpha -1)}\frac{1}{M^{\alpha}}\left (\int _{a}^{b}g^{\alpha}(t)d\mu (t)-\left (\int _{a}^{b}g(t)d\mu (t)\right )^{\alpha}\right ) \\ &\quad \leq \frac{1}{p (1-p)}\frac{1}{M^{p}}\left (\left (\int _{a}^{b}g(t)d \mu (t)\right )^{p}-\int _{a}^{b}g^{p}(t)d\mu (t)\right ). \end{aligned}

Using convexity of the function

$$\phi (x)=\frac{1}{\alpha _{1}(\alpha _{1}-1)} \frac{1}{M^{\alpha _{1}}}x^{\alpha _{1}}- \frac{1}{\alpha _{2}(\alpha _{2}-1)}\frac{1}{M^{\alpha _{2}}}x^{ \alpha _{2}}$$

and concavity of the function

$$\psi (x)=\frac{1}{p_{1}(1-p_{1})}\frac{1}{M^{p_{1}}}x^{p_{1}}- \frac{1}{p_{2}(1-p_{2})}\frac{1}{M^{p_{2}}}x^{p_{2}}$$

on $$[0,M]$$ for $$1<\alpha _{1}<\alpha _{2}$$ and $$0< p_{1}< p_{2}<1$$, it follows (see the proof of Theorem 4.1) that $$I_{1}$$ decreases in α and $$I_{3}$$ decreases in p. The remaining inequalities $$I_{1}\leq I_{2}\leq I_{3}$$ and $$I_{3}\leq I_{4}$$ follow from the limits $$\lim _{\alpha \to 1}I_{1}=I_{2}$$ and $$\lim _{p\to 0}I_{3}=I_{4}$$. □
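The chain (4.3) can be checked numerically. The sketch below uses the illustrative data μ = Lebesgue on $$[0,1]$$, $$g(t)=t+0.1$$, $$M=1.1$$, $$\alpha =2$$, $$p=1/2$$ (none of which is prescribed by the corollary), with a midpoint rule standing in for the integrals.

```python
import numpy as np

# Check I1 <= I2 <= I3 <= I4 from (4.3) for sample data (illustrative only):
# mu = Lebesgue on [0, 1], g(t) = t + 0.1, so 0 <= g(t) <= M = 1.1.
alpha, p = 2.0, 0.5
n = 100000
t = (np.arange(n) + 0.5) / n            # midpoint rule for integrals d mu
g = t + 0.1
M = 1.1
Eg = g.mean()                           # integral of g d mu (about 0.6)

I1 = (np.mean(g**alpha) - Eg**alpha) / (alpha * (alpha - 1) * M**alpha)
I2 = (np.mean(g * np.log(g)) - Eg * np.log(Eg)) / M
I3 = (Eg**p - np.mean(g**p)) / (p * (1 - p) * M**p)
I4 = np.log(Eg) - np.mean(np.log(g))
assert I1 <= I2 <= I3 <= I4             # the chain (4.3) holds for this data
```

For this data all four quantities are strictly positive and strictly increasing along the chain, so none of the four inequalities is tight.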

An analogous result holds for the Lah–Ribarič inequality (1.3). Define the Lah–Ribarič difference as

$$LRd(f,g;d\mu )=\left |\frac{M-\overline{g}}{M-m}f(m)+ \frac{\overline{g}-m}{M-m}f(M)-\int _{a}^{b}f\left (g(t)\right )d\mu (t) \right |,\quad \overline{g}=\int _{a}^{b}g(t)\,d\mu (t),$$

where μ is a probability measure on $$[a,b]$$, $$g \colon [a,b]\to \mathbb{R}$$ is such that $$m\leq g(t)\leq M$$ for every $$t\in [a,b]$$, and $$f \colon I\to \mathbb{R}$$, with $$I\subseteq \mathbb{R}$$ an interval such that $$[m,M]\subseteq I$$, is such that $$f\circ g$$ is μ-integrable.

### Corollary 4.6

Let $$\alpha >1$$, $$0< p<1$$, and $$0< m< M$$. Let μ be a probability measure on $$[a,b]\subset \mathbb{R}$$ and $$g \colon [a,b] \to \mathbb{R}$$ be a μ-integrable function such that $$m\leq g(t)\leq M$$ for every $$t\in [a,b]$$. Then

\begin{aligned} &\frac{1}{\alpha (\alpha -1)}\frac{1}{M^{\alpha}}LRd\left (( \cdot )^{\alpha},g;d\mu \right )\leq \frac{1}{M}LRd\left ( ( \cdot ) \log ( \cdot ),g;d\mu \right ) \\ &\quad \leq \frac{1}{p (1-p)}\frac{1}{M^{p}}LRd\left ( ( \cdot )^{p},g;d \mu \right ) \leq LRd\left (\log ( \cdot ),g;d\mu \right ). \end{aligned}

### Proof

Notations are different but the proof is identical to the proof of Corollary 4.5. □
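As with Corollary 4.5, the chain can be verified numerically for sample data. The sketch below uses the hypothetical choices μ = Lebesgue on $$[0,1]$$, $$g(t)=1+t$$ (so $$m=1$$, $$M=2$$), $$\alpha =2$$, $$p=1/2$$, again with a midpoint rule for the integrals.

```python
import numpy as np

# Check the Lah-Ribaric chain of Corollary 4.6 for sample data (illustrative only):
# mu = Lebesgue on [0, 1], g(t) = 1 + t, so m = 1 <= g(t) <= M = 2.
m, M, alpha, p = 1.0, 2.0, 2.0, 0.5
n = 100000
t = (np.arange(n) + 0.5) / n            # midpoint rule for integrals d mu
g = 1.0 + t
gbar = g.mean()                         # integral of g d mu (about 1.5)

def LRd(f):
    """|(M - gbar)/(M - m) f(m) + (gbar - m)/(M - m) f(M) - int f(g) d mu|."""
    chord = (M - gbar) / (M - m) * f(m) + (gbar - m) / (M - m) * f(M)
    return abs(chord - np.mean(f(g)))

T1 = LRd(lambda x: x**alpha) / (alpha * (alpha - 1) * M**alpha)
T2 = LRd(lambda x: x * np.log(x)) / M
T3 = LRd(lambda x: x**p) / (p * (1 - p) * M**p)
T4 = LRd(np.log)
assert T1 <= T2 <= T3 <= T4             # the chain of Corollary 4.6 holds here
```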

Many refinements of related inequalities can be given using the methods of this paper. We mention [3], the results from [14], and many papers on similar subjects as obvious candidates for such refinements.

## Data Availability

No datasets were generated or analysed during the current study.

## References

1. Dragomir, S.S.: On a reverse of Jessen’s inequality for isotonic linear functionals. JIPAM. J. Inequal. Pure Appl. Math. 2(3), Article ID 36 (2001)

2. Dragomir, S.S., Nikodem, K.: Jensen’s and Hermite-Hadamard’s type inequalities for lower and strongly convex functions on normed spaces. Bull. Iranian Math. Soc. 44(5), 1337–1349 (2018)

3. Ivelić Bradanović, S.: Improvements of Jensen’s inequality and its converse for strongly convex functions with applications to strongly f-divergences. J. Math. Anal. Appl. 531(2), Article ID 127866 (2024)

4. Jensen, J.L.W.V.: Om konvexe funktioner og uligheder mellem Middelvaerdier. Nyt Tidsskr. Math. 16B, 49–69 (1905)

5. Klaričić Bakula, M., Pečarić, J., Perić, J.: On the converse Jensen inequality. Appl. Math. Comput. 218(11), 6566–6575 (2012)

6. Klaričić, M., Nikodem, K.: On the converse Jensen inequality for strongly convex functions. J. Math. Anal. Appl. 434, 516–522 (2016)

7. Lah, P., Ribarič, M.: Converse of Jensen’s inequality for convex functions. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 412–460, 201–205 (1973)

8. Marinescu, D.S., Păltănea, E.: Properties of Pečarić-type functions and applications. Results Math. 76(3), Article ID 149 (2021)

9. Merentes, N., Nikodem, K.: Remarks on strongly convex functions. Aequ. Math. 80, 193–199 (2010)

10. Mitrinović, D.S., Pečarić, J.E., Fink, A.M.: Classical and New Inequalities in Analysis. Mathematics and Its Applications (East European Series), vol. 61. Kluwer Academic, Dordrecht (1993)

11. Niculescu, C.P., Persson, L.-E.: Convex Functions and Their Applications. A Contemporary Approach. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, vol. 23. Springer, New York (2006)

12. Nikolova, L., Persson, L.-E., Varošanec, S.: Some new refinements of the Young, Hölder, and Minkowski inequalities. J. Inequal. Appl. 2023, Article ID 28 (2023)

13. Pečarić, J., Furuta, T., Mićić Hot, J., Seo, Y.: Mond-Pečarić Method in Operator Inequalities. Monogr. Inequal., vol. 1. Element, Zagreb (2005)

14. Pečarić, J., Perić, J.: Refinements of the integral form of Jensen’s and the Lah–Ribarič inequalities and applications for Csiszár divergence. J. Inequal. Appl. 2020, Article ID 108 (2020)

15. Pečarić, J.E., Proschan, F., Tong, Y.L.: Convex Functions, Partial Orderings, and Statistical Applications. Mathematics in Science and Engineering, vol. 187. Academic Press, Boston (1992)

16. Polyak, B.T.: Existence theorems and convergence of minimizing sequences for extremal problems with constraints. Dokl. Akad. Nauk SSSR 166(2), 287–290 (1966)

17. Veselý, L., Zajíček, L.: Delta-convex mappings between Banach spaces and applications. Diss. Math. 289 (1989)

## Funding

There is no funding for this work.

## Author information

Authors

### Contributions

J.P. wrote the manuscript.

### Corresponding author

Correspondence to Jurica Perić.

## Ethics declarations

Not applicable.

### Competing interests

The authors declare no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions


Perić, J. Strong $$\mathcal {F}$$-convexity and concavity and refinements of some classical inequalities. J Inequal Appl 2024, 96 (2024). https://doi.org/10.1186/s13660-024-03178-2