# Estimation of f-divergence and Shannon entropy by using Levinson type inequalities for higher order convex functions via Hermite interpolating polynomial

## Abstract

Levinson type inequalities are generalized by using the Hermite interpolating polynomial for the class of $$\mathfrak{n}$$-convex functions. As an application to information theory, estimates for new functionals based on f-divergence are obtained. Inequalities involving Shannon entropy are also discussed.

## 1 Introduction

The theory of convex functions has developed rapidly. This can be attributed to several causes: first, convex functions appear directly in many parts of modern analysis; second, numerous important inequalities are consequences of convexity, and convex functions are closely related to the theory of inequalities (see [1]).

In [2], Levinson generalized Ky Fan’s inequality (see also [3, p. 32, Theorem 1]) as follows.

### Theorem A

Let $$f :\mathbb{I}=(0, 2\hat{\alpha }) \rightarrow \mathbb{R}$$ with $$f^{(3)}(t) \geq 0$$. Also, let $$x_{\mu } \in (0, \hat{\alpha })$$ and $$p_{\mu }>0$$. Then

\begin{aligned}[b] &\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }f(x_{\mu })- f\Biggl(\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }x_{\mu } \Biggr)\\ &\quad \leq \frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }f(2 \hat{\alpha }-x_{\mu })-f\Biggl(\frac{1}{\mathcal{P}_{\mathfrak{n}}} \sum _{\mu =1}^{\mathfrak{n}}p_{\mu }(2\hat{\alpha }-x_{\mu })\Biggr). \end{aligned}
(1)

Functional form of (1) is defined as follows:

\begin{aligned} \mathfrak{J}_{1}\bigl(f(\cdot )\bigr) =&\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum _{\mu =1}^{\mathfrak{n}}p_{\mu }f(2\hat{\alpha }-x_{\mu })-f\Biggl(\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }(2 \hat{\alpha }-x_{\mu })\Biggr)- \frac{1}{\mathcal{P}_{\mathfrak{n}}} \sum _{\mu =1}^{\mathfrak{n}}p_{\mu }f(x_{\mu }) \\ &{}+f\Biggl(\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }x_{\mu } \Biggr). \end{aligned}
(2)
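As a quick numerical illustration (not part of the formal development), functional (2) can be evaluated directly. The minimal Python sketch below, with illustrative weights and points of our choosing, shows $$\mathfrak{J}_{1}(\exp (\cdot ))\geq 0$$, in line with Theorem A, since exp has a positive third derivative:

```python
import math

def J1(f, p, x, alpha_hat):
    # Functional (2): Jensen gap at the reflected points 2*alpha_hat - x_mu
    # minus the Jensen gap at the points x_mu themselves.
    P = sum(p)
    w = lambda vals: sum(pm * v for pm, v in zip(p, vals)) / P
    y = [2.0 * alpha_hat - xm for xm in x]
    return (w([f(t) for t in y]) - f(w(y))) - (w([f(t) for t in x]) - f(w(x)))

p = [0.2, 0.5, 0.3]          # positive weights
x = [0.3, 1.1, 0.7]          # all points lie in (0, alpha_hat), alpha_hat = 1.5
print(J1(math.exp, p, x, 1.5))   # nonnegative: exp has positive third derivative
```

For $$f(t)=t$$ the functional vanishes identically, consistent with Remark 1 below.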

In [4], Popoviciu noticed that (1) is legitimate on $$(0, 2\hat{\alpha })$$ for 3-convex functions, while in [5] (see additionally [3, p. 32, Theorem 2]) Bullen gave another proof of Popoviciu’s result and furthermore the converse of (1).

### Theorem B

1. (a)

Let $$f:\mathbb{I}=[\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}$$ with $$f^{(3)}(t) \geq 0$$ and $$x_{\mu }, y_{\mu } \in [\zeta _{1}, \zeta _{2}]$$ for $$\mu =1, 2, \ldots , \mathfrak{n}$$ be such that

$$\max \{x_{1}, \ldots , x_{\mathfrak{n}}\} \leq \min \{y_{1}, \ldots , y_{\mathfrak{n}}\}, \quad x_{1}+y_{1}= \cdots =x_{\mathfrak{n}}+y_{\mathfrak{n}}$$
(3)

and$$p_{\mu }>0$$, then

$$\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}} p_{\mu }f(x_{\mu })-f\Biggl(\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum _{\mu =1}^{\mathfrak{n}}p_{\mu }x_{\mu }\Biggr)\leq \frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }f(y_{\mu })-f \Biggl(\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }y_{\mu } \Biggr).$$
(4)
2. (b)

If the function f is continuous, $$p_{\mu }>0$$, and (4) holds for all $$x_{\mu }$$, $$y_{\mu }$$ satisfying (3), then f is 3-convex.

Functional form of (4) is defined as follows:

\begin{aligned}[b] \mathfrak{J}_{2}\bigl(f(\cdot )\bigr)&=\frac{1}{\mathcal{P}_{\mathfrak{n}}} \sum_{\mu =1}^{\mathfrak{n}}p_{\mu }f(y_{\mu })-f \Biggl(\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}q_{\mu }y_{\mu } \Biggr)\\ &\quad {}-\frac{1}{\mathcal{P}_{\mathfrak{n}}} \sum_{\mu =1}^{\mathfrak{n}}p_{\mu }f(x_{\mu })+ f\Biggl(\frac{1}{\mathcal{P}_{\mathfrak{n}}}\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }x_{\mu } \Biggr). \end{aligned}
(5)

### Remark 1

If the function f is 3-convex, then $$\mathfrak{J}_{i}(f(\cdot ))\geq 0$$ for $$i=1, 2$$, and $$\mathfrak{J}_{i}(f(\cdot ))=0$$ for $$f(t)=t$$ or $$f(t)=t^{2}$$ or f is a constant function.
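Remark 1 can be checked numerically. A minimal Python sketch follows (the data are ours, chosen to satisfy condition (3) with $$\breve{c}=1$$):

```python
import math

def J2(f, p, x, y):
    # Functional (5) with one common set of positive weights p.
    P = sum(p)
    w = lambda vals: sum(pm * v for pm, v in zip(p, vals)) / P
    return (w([f(t) for t in y]) - f(w(y))) - (w([f(t) for t in x]) - f(w(x)))

p = [0.3, 0.3, 0.4]
x = [0.1, 0.5, 0.3]
y = [2.0 - xm for xm in x]   # x_mu + y_mu = 2 and max(x) <= min(y): condition (3)
print(J2(math.exp, p, x, y))          # nonnegative for a 3-convex f such as exp
print(J2(lambda t: t * t, p, x, y))   # vanishes, as stated in Remark 1
```

The value for $$f(t)=t^{2}$$ vanishes because the weighted variances of the $$x_{\mu }$$ and of the reflected points $$y_{\mu }=2-x_{\mu }$$ coincide.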

In the following result Pečarić [6] (see also [3, p. 32, Theorem 4]) weakened condition (3) and proved that inequality (4) still holds.

### Theorem C

Let $$f:\mathbb{I}=[\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}$$ with $$f^{(3)}(t) \geq 0$$, $$p_{\mu }>0$$, and let $$x_{\mu }, y_{\mu } \in [\zeta _{1}, \zeta _{2}]$$ be such that $$x_{\mu }+y_{\mu }=2\breve{c}$$ for $$\mu =1, \ldots , \mathfrak{n}$$, $$x_{\mu }+x_{\mathfrak{n}-\mu +1}\leq 2\breve{c}$$, and $$\frac{p_{\mu }x_{\mu }+p_{\mathfrak{n}-\mu +1}x_{\mathfrak{n}-\mu +1}}{p_{\mu }+p_{\mathfrak{n}-\mu +1}} \leq \breve{c}$$. Then inequality (4) holds.

In [7], Mercer proved inequality (4) by replacing the condition of symmetric distribution of points $$x_{\mu }$$ and $$y_{\mu }$$ with symmetric variances of points $$x_{\mu }$$ and $$y_{\mu }$$.

### Theorem D

Let $$f:\mathbb{I}=[\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}$$ with $$f^{(3)}(t) \geq 0$$, and let $$p_{\mu }$$ be positive weights such that $$\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }=1$$. Also let $$x_{\mu }$$, $$y_{\mu }$$ satisfy $$\max \{x_{1}, \ldots , x_{\mathfrak{n}}\} \leq \min \{y_{1}, \ldots , y_{\mathfrak{n}}\}$$ and

$$\sum_{\mu =1}^{\mathfrak{n}}p_{\mu } \Biggl(x_{\mu }-\sum_{\mu =1}^{\mathfrak{n}}p_{\mu }x_{\mu } \Biggr)^{2}=\sum_{\mu =1}^{\mathfrak{n}}p_{\mu } \Biggl(y_{\mu }- \sum_{\mu =1}^{\mathfrak{n}}p_{\mu }y_{\mu } \Biggr)^{2},$$
(6)

then (4) holds.
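Since shifting every point by the same constant preserves the weighted variance, pairs $$y_{\mu }=x_{\mu }+c$$ with c large enough satisfy the hypotheses of Theorem D. A minimal Python sketch (sample data ours, not from the paper):

```python
import math

def jensen_gap(f, p, z):
    # sum p_i f(z_i) - f(sum p_i z_i), weights p assumed to sum to 1
    mean = sum(pm * zm for pm, zm in zip(p, z))
    return sum(pm * f(zm) for pm, zm in zip(p, z)) - f(mean)

def variance(p, z):
    mean = sum(pm * zm for pm, zm in zip(p, z))
    return sum(pm * (zm - mean) ** 2 for pm, zm in zip(p, z))

p = [1 / 3, 1 / 3, 1 / 3]
x = [0.1, 0.4, 0.2]
y = [xm + 1.0 for xm in x]   # shift preserves the weighted variance: condition (6)
assert abs(variance(p, x) - variance(p, y)) < 1e-12
print(jensen_gap(math.exp, p, y) - jensen_gap(math.exp, p, x))  # nonnegative by Theorem D
```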

In [8], the Hermite interpolating polynomial is given as follows.

Let $$\zeta _{1}, \zeta _{2} \in \mathbb{R}$$ with $$\zeta _{1} < \zeta _{2}$$ and $$\zeta _{1} = c_{1} < c_{2} < \cdots < c_{l}=\zeta _{2}$$ ($$l \geq 2$$) be the points. For $$f \in C^{\mathfrak{n}}[\zeta _{1}, \zeta _{2}]$$, a unique polynomial $$\sigma _{\mathcal{H}}(s)$$ of degree $$(\mathfrak{n}-1)$$ exists satisfying the following conditions:

$$(i)$$:

Hermite conditions

\begin{aligned} \sigma _{\mathcal{H}}^{(i)}(c_{j}) = f^{(i)}(c_{j}); \quad 0 \leq i \leq k_{j}, 1 \leq j \leq l, \sum _{j=1}^{l}k_{j} + l = \mathfrak{n}. \end{aligned}

It is noted that Hermite conditions include the following particular cases:

(Case-1):

Lagrange conditions ($$l=\mathfrak{n}$$, $$k_{j}=0$$ for all $$j$$)

$$\sigma _{L}(c_{j}) = f(c_{j}), \quad 1 \leq j \leq \mathfrak{n}.$$
(Case-2):

Type $$(q, \mathfrak{n}-q)$$ conditions ($$l = 2$$, $$1 \leq q \leq \mathfrak{n}-1$$, $$k_{1} =q-1$$, $$k_{2} = \mathfrak{n} - q - 1$$)

\begin{aligned}& \sigma _{(q, \mathfrak{n})}^{(i)}(\zeta _{1}) = f^{(i)}( \zeta _{1}), \quad 0 \leq i \leq q-1, \\& \sigma _{(q, \mathfrak{n})}^{(i)}(\zeta _{2}) = f^{(i)}( \zeta _{2}), \quad 0 \leq i \leq \mathfrak{n}-q-1. \end{aligned}
(Case-3):

Two-point Taylor conditions ($$\mathfrak{n}=2q$$, $$l=2$$, $$k_{1} = k_{2} = q-1$$)

$$\sigma _{2T}^{(i)}(\zeta _{1}) = f^{(i)}( \zeta _{1}), \qquad \sigma _{2T}^{(i)}(\zeta _{2}) =f^{(i)}(\zeta _{2}), \quad 0 \leq i \leq q-1.$$

In [8], the following result is given.

### Theorem E

Let $$-\infty < \zeta _{1} < \zeta _{2} < \infty$$ and $$\zeta _{1} < c_{1} < c_{2} < \cdots < c_{l} \leq \zeta _{2}$$ ($$l \geq 2$$) be the given points and $$f \in C^{\mathfrak{n}}([\zeta _{1}, \zeta _{2}])$$. Then we have

\begin{aligned} f(u)= \sigma _{\mathcal{H}}(u) + R_{\mathcal{H}}(f, u), \end{aligned}
(7)

where$$\sigma _{\mathcal{H}}(u)$$is the Hermite interpolation polynomial, that is,

\begin{aligned} \sigma _{\mathcal{H}}(u) = \sum_{j=1}^{l} \sum_{i=0}^{k_{j}}\mathcal{H}_{i_{j}}(u)f^{(i)}(c_{j}); \end{aligned}

the$$\mathcal{H}_{i_{j}}$$are the fundamental polynomials of the Hermite basis given as

\begin{aligned} \mathcal{H}_{i_{j}}(u)=\frac{1}{i!}\frac{\omega (u)}{(u - c_{j})^{k_{j} + 1 - i}} \sum_{k=0}^{k_{j} - i}\frac{1}{k!} \frac{d^{k}}{du^{k}} \biggl(\frac{(u -c_{j})^{k_{j} + 1}}{\omega (u)} \biggr)\bigg|_{u = c_{j}}(u - c_{j})^{k}, \end{aligned}
(8)

with

\begin{aligned} \omega (u) = \prod_{j=1}^{l}(u - c_{j})^{k_{j} + 1}, \end{aligned}

and the remainder is given by

\begin{aligned} R_{\mathcal{H}}(f, u) = \int _{\zeta _{1}}^{\zeta _{2}}\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(u, s)f^{(\mathfrak{n})}(s)\,ds, \end{aligned}

where$$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(u, s)$$is defined by

\begin{aligned} \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(u, s)= \textstyle\begin{cases} \sum_{j=1}^{r}\sum_{i=0}^{k_{j}}\frac{(c_{j} - s)^{\mathfrak{n} - i -1}}{(\mathfrak{n} - i - 1)!}\mathcal{H}_{i_{j}}(u) , & s \leq u; \\ {-}\sum_{j=r+1}^{l}\sum_{i=0}^{k_{j}}\frac{(c_{j} - s)^{\mathfrak{n} - i -1}}{(\mathfrak{n} - i - 1)!}\mathcal{H}_{i_{j}}(u) , & s \geq u, \end{cases}\displaystyle \end{aligned}
(9)

for all$$c_{r} \leq s \leq c_{r+1}$$; $$r = 0, 1, \ldots , l$$, with$$c_{0} = \zeta _{1}$$and$$c_{l+1}= \zeta _{2}$$.

We note that $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}-3}(u, s) \geq 0$$, where $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}-3}$$ denotes the third derivative of $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}$$ with respect to the first variable.

### Remark 2

In particular cases, for Lagrange condition from Theorem E, we have

\begin{aligned} f(u) = \sigma _{L}(u) + R_{L}(f, u), \end{aligned}

where $$\sigma _{L}(u)$$ is the Lagrange interpolating polynomial, that is,

\begin{aligned} \sigma _{L}(u) = \sum_{j=1}^{\mathfrak{n}} \sum_{k=1, k \neq j}^{\mathfrak{n}} \biggl(\frac{u - c_{k}}{c_{j} - c_{k}} \biggr)f(c_{j}), \end{aligned}

and the remainder $$R_{L}(f, u)$$ is given by

\begin{aligned} R_{L}(f, u) = \int _{\zeta _{1}}^{\zeta _{2}}\mathcal{G}_{L}(u, s)f^{(\mathfrak{n})}(s)\,ds, \end{aligned}

with

\begin{aligned} \mathcal{G}_{L}(u, s) = \frac{1}{(\mathfrak{n}-1)!} \textstyle\begin{cases} \sum_{j=1}^{r}(c_{j} - s)^{\mathfrak{n}-1}\prod_{k=1, k\neq j}^{\mathfrak{n}} (\frac{u - c_{k}}{c_{j} - c_{k}} ), & s \leq u; \\ {-} \sum_{j=r+1}^{\mathfrak{n}}(c_{j} - s)^{\mathfrak{n}-1}\prod_{k=1, k\neq j}^{\mathfrak{n}} ( \frac{u - c_{k}}{c_{j} - c_{k}} ), & s \geq u, \end{cases}\displaystyle \end{aligned}
(10)

for $$c_{r} \leq s \leq c_{r+1}$$, $$r=1, 2, \ldots , \mathfrak{n}-1$$, with $$c_{1} = \zeta _{1}$$ and $$c_{\mathfrak{n}} =\zeta _{2}$$.
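The Lagrange case can be illustrated numerically: $$\sigma _{L}$$ reproduces f at the nodes and approximates it in between. A minimal Python sketch (nodes and test function ours):

```python
import math

def sigma_L(nodes, f, u):
    # Lagrange interpolating polynomial through the data (c_j, f(c_j))
    total = 0.0
    for j, cj in enumerate(nodes):
        basis = 1.0
        for k, ck in enumerate(nodes):
            if k != j:
                basis *= (u - ck) / (cj - ck)   # fundamental Lagrange basis
        total += basis * f(cj)
    return total

nodes = [0.0, 0.5, 1.0, 1.5]
print(sigma_L(nodes, math.exp, 0.75))   # close to exp(0.75)
```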

For the type $$(q, \mathfrak{n}-q)$$ conditions, from Theorem E we have

\begin{aligned} f(u) = \sigma _{(q, \mathfrak{n})}(u) + R_{q, \mathfrak{n}}(f, u), \end{aligned}

where $$\sigma _{(q, \mathfrak{n})}(u)$$ is the $$(q, \mathfrak{n}-q)$$ interpolating polynomial, that is,

\begin{aligned} \sigma _{(q, \mathfrak{n})}(u) = \sum_{i=0}^{q-1} \tau _{i}(u)f^{(i)}(\zeta _{1}) + \sum _{i=0}^{\mathfrak{n}-q-1}\eta _{i}(u)f^{(i)}( \zeta _{2}), \end{aligned}

with

\begin{aligned} \tau _{i}(u) = \frac{1}{i!}(u - \zeta _{1})^{i} \biggl(\frac{u - \zeta _{2}}{\zeta _{1} - \zeta _{2}} \biggr)^{\mathfrak{n}-q} \sum_{k=0}^{q - 1 -i}\binom{{\mathfrak{n}-q+k-1}}{{k}} \biggl( \frac{u - \zeta _{1}}{\zeta _{2} - \zeta _{1}} \biggr)^{k} \end{aligned}
(11)

and

\begin{aligned} \eta _{i}(u) = \frac{1}{i!}(u - \zeta _{2})^{i} \biggl(\frac{u - \zeta _{1}}{\zeta _{2} - \zeta _{1}} \biggr)^{q} \sum_{k=0}^{\mathfrak{n}-q-1-i} \binom{{q+k-1}}{{k}} \biggl( \frac{\zeta _{2} - u}{\zeta _{2} - \zeta _{1}} \biggr)^{k}, \end{aligned}
(12)

and the remainder $$R_{(q, \mathfrak{n})}(f, u)$$ is defined as

\begin{aligned} R_{(q, \mathfrak{n})}(f, u) = \int _{\zeta _{1}}^{\zeta _{2}}\mathcal{G}_{q, \mathfrak{n}}(u, s)f^{(\mathfrak{n})}(s)\,ds, \end{aligned}

with

\begin{aligned} \mathcal{G}_{(q, \mathfrak{n})}(u, s) = \textstyle\begin{cases} \sum_{j=0}^{q-1} [\sum_{p=0}^{q-1-j}\binom{{\mathfrak{n}-q+p-1}}{{p}} (\frac{u - \zeta _{1}}{\zeta _{2} - \zeta _{1}} )^{p} ]\\ \quad {}\times \frac{(u - \zeta _{1})^{j}(\zeta _{1} - s)^{\mathfrak{n}-j-1}}{j!(\mathfrak{n} - j - 1)!} (\frac{\zeta _{2} - u}{\zeta _{2} - \zeta _{1}} )^{\mathfrak{n} - q}, & \zeta _{1} \leq s \leq u \leq \zeta _{2}; \\ - \sum_{j=0}^{\mathfrak{n}-q-1} [\sum_{\lambda =0}^{\mathfrak{n}-q-j-1}\binom{{q+\lambda -1}}{{\lambda }} (\frac{\zeta _{2} - u}{\zeta _{2} - \zeta _{1}} )^{\lambda } ]\\ \quad {}\times \frac{(u - \zeta _{2})^{j}(\zeta _{2} - s)^{\mathfrak{n}-j-1}}{j!(\mathfrak{n} - j - 1)!} (\frac{u - \zeta _{1}}{\zeta _{2} - \zeta _{1}} )^{q}, & \zeta _{1} \leq u \leq s \leq \zeta _{2}. \end{cases}\displaystyle \end{aligned}
(13)
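The basis functions in (11) and (12) can be checked numerically. For $$\mathfrak{n}=3$$, $$q=2$$, the interpolant must match f and $$f'$$ at $$\zeta _{1}$$ and f at $$\zeta _{2}$$. A Python sketch of this special case, using the standard form of the $$(q, \mathfrak{n}-q)$$ basis (the function names and test data are ours):

```python
import math

def tau(i, u, z1, z2, n, q):
    # tau_i(u): vanishes to order n-q at z2; standard (q, n-q) basis form
    w = (u - z1) / (z2 - z1)
    s = sum(math.comb(n - q + k - 1, k) * w**k for k in range(q - i))
    return (u - z1)**i / math.factorial(i) * ((z2 - u) / (z2 - z1))**(n - q) * s

def eta(i, u, z1, z2, n, q):
    # eta_i(u): vanishes to order q at z1
    w = (z2 - u) / (z2 - z1)
    s = sum(math.comb(q + k - 1, k) * w**k for k in range(n - q - i))
    return (u - z2)**i / math.factorial(i) * ((u - z1) / (z2 - z1))**q * s

def sigma_qn(f, df, u, z1=0.0, z2=1.0, n=3, q=2):
    # n = 3, q = 2: interpolant matching f, f' at z1 and f at z2
    return (tau(0, u, z1, z2, n, q) * f(z1)
            + tau(1, u, z1, z2, n, q) * df(z1)
            + eta(0, u, z1, z2, n, q) * f(z2))

print(sigma_qn(math.exp, math.exp, 0.5))  # close to exp(0.5)
```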

For the two-point Taylor conditions, from Theorem E we have

\begin{aligned} f(u) = \sigma _{2T}(u) + R_{2T}(f, u), \end{aligned}

where

\begin{aligned} \sigma _{2T}(u) =& \sum_{i=0}^{q-1} \sum_{k=0}^{q - 1 - i}\binom{{q + k - 1}}{{k}} \biggl[ \frac{(u - \zeta _{1})^{i}}{i!} \biggl(\frac{u - \zeta _{2}}{\zeta _{1} - \zeta _{2}} \biggr)^{q} \biggl( \frac{u - \zeta _{1}}{\zeta _{2} - \zeta _{1}} \biggr)^{k}f^{(i)}(\zeta _{1}) \\ &{}+ \frac{(u - \zeta _{2})^{i}}{i!} \biggl(\frac{u - \zeta _{1}}{\zeta _{2} - \zeta _{1}} \biggr)^{q} \biggl(\frac{u - \zeta _{2}}{\zeta _{1} - \zeta _{2}} \biggr)^{k}f^{(i)}(\zeta _{2}) \biggr], \end{aligned}

and the remainder $$R_{2T}(f, u)$$ is given by

\begin{aligned} R_{2T}(f, u) = \int _{\zeta _{1}}^{\zeta _{2}}\mathcal{G}_{2T}(u, s)f^{(\mathfrak{n})}(s)\,ds \end{aligned}

with

\begin{aligned} \mathcal{G}_{2T}(u, s) = \textstyle\begin{cases} \frac{(-1)^{q}}{(2q - 1)!}p^{q}(u, s)\sum_{j=0}^{q-1}\binom{{q-1+j}}{{j}}(u - s)^{q-1-j}\delta ^{j}(u, s), & \zeta _{1} \leq s \leq u \leq \zeta _{2}; \\ \frac{(-1)^{q}}{(2q - 1)!}\delta ^{q}(u, s)\sum_{j=0}^{q-1}\binom{{q-1+j}}{{j}}(s - u)^{q-1-j}p^{j}(u, s), & \zeta _{1} \leq u \leq s \leq \zeta _{2} , \end{cases}\displaystyle \end{aligned}
(14)

where $$p(u, s) = \frac{(s - \zeta _{1})(\zeta _{2} - u)}{\zeta _{2} - \zeta _{1}}$$ and $$\delta (u, s) = p(s, u)$$ for all $$u, s \in [\zeta _{1}, \zeta _{2}]$$.
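For $$q=2$$, the two-point Taylor interpolant is the classical cubic Hermite interpolant, matching f and $$f'$$ at both endpoints. A minimal Python sketch of this special case, using the standard cubic Hermite basis as a convenient closed form (an assumption of the example, not taken from the text):

```python
import math

def sigma_2T(f, df, u, z1, z2):
    # q = 2 two-point Taylor interpolant: matches f and f' at z1 and z2,
    # written via the standard cubic Hermite basis on [z1, z2].
    h = z2 - z1
    t = (u - z1) / h
    h00 = 2 * t**3 - 3 * t**2 + 1        # multiplies f(z1)
    h10 = t**3 - 2 * t**2 + t            # multiplies h * f'(z1)
    h01 = -2 * t**3 + 3 * t**2           # multiplies f(z2)
    h11 = t**3 - t**2                    # multiplies h * f'(z2)
    return h00 * f(z1) + h * h10 * df(z1) + h01 * f(z2) + h * h11 * df(z2)

print(sigma_2T(math.exp, math.exp, 0.5, 0.0, 1.0))   # close to exp(0.5)
```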

In [9] and [10] the positivity of Green functions is given as follows.

### Lemma 1

For the Green function $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(u, s)$$ defined in (9), the following results hold:

$$(i)$$:

$$\frac{\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(u, s)}{\omega (u)} > 0$$ for $$c_{1} \leq u \leq c_{l}$$, $$c_{1} \leq s \leq c_{l}$$;

$$(\mathit{i}i)$$:

$$\vert \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(u, s) \vert \leq \frac{1}{(\mathfrak{n}-1)!(\zeta _{2} - \zeta _{1})} \vert \omega (u) \vert$$;

$$(\mathit{iii})$$:

$$\int _{\zeta _{1}}^{\zeta _{2}}\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(u, s)\,ds = \frac{\omega (u)}{\mathfrak{n}!}$$.

Under Mercer’s assumptions (6), Pečarić et al. [11] gave a probabilistic version of (1) for the family of 3-convex functions at a point. An operator version of the probabilistic Levinson inequality was discussed in [12]. All generalizations existing in the literature deal with only one type of data points; see [13–17]. Motivated by the above discussion, in this paper Levinson type inequalities are generalized via the Hermite interpolating polynomial, involving two types of data points, for higher order convex functions.

## 2 Main results

Motivated by identity (5), we construct the following identities.

### 2.1 Bullen type inequalities for higher order convex functions

First we define the following functional:

$$\mathcal{F}$$::

Let $$f: \mathbb{I}_{1}= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}$$ be a function. Also, let $$(p_{1}, \ldots , p_{\mathfrak{n}_{1}}) \in \mathbb{R}^{\mathfrak{n}_{1}}$$ and $$(q_{1}, \ldots , q_{m_{1}}) \in \mathbb{R}^{m_{1}}$$ be such that $$\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }=1$$, $$\sum_{\nu =1}^{m_{1}}q_{\nu }=1$$, and $$x_{\mu }$$, $$y_{\nu }$$, $$\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu }$$, $$\sum_{\nu =1}^{m_{1}}q_{\nu }y_{\nu } \in \mathbb{I}_{1}$$. Then

$$\breve{\mathfrak{J}}\bigl(f(\cdot )\bigr)=\sum _{\nu =1}^{m_{1}}q_{\nu }f(y_{\nu })-f \Biggl(\sum_{\nu =1}^{m_{1}}q_{\nu }y_{\nu } \Biggr)- \sum_{\mu =1}^{\mathfrak{n}_{1}} p_{\mu }f(x_{\mu })+ f\Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu } \Biggr).$$
(15)

### Theorem 1

Assume $$\mathcal{F}$$. Let $$f \in C^{\mathfrak{n}}[\zeta _{1}, \zeta _{2}]$$ and $$\zeta _{1} = c_{1} < c_{2} < \cdots < c_{l} = \zeta _{2}$$ ($$l \geq 2$$) be the points. Moreover, $$\mathcal{H}_{i_{j}}$$ and $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}$$ are the same as defined in (8) and (9) respectively. Then we have the following identity:

\begin{aligned} \breve{\mathfrak{J}}\bigl(f(\cdot )\bigr) = \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \breve{\mathfrak{J}}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr) + \int _{\zeta _{1}}^{\zeta _{2}}\breve{\mathfrak{J}} \bigl( \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr)f^{(\mathfrak{n})}(s)\,ds, \end{aligned}
(16)

where$$\breve{\mathfrak{J}}(f(\cdot ))$$is defined in (15),

\begin{aligned} \breve{\mathfrak{J}}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr) =&\sum _{\nu =1}^{m_{1}}q_{\nu } \bigl( \mathcal{H}_{i_{j}}(y_{\nu }) \bigr)- \mathcal{H}_{i_{j}} \Biggl(\sum_{\nu =1}^{m_{1}}q_{\nu }y_{\nu } \Biggr) \\ &{}-\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl( \mathcal{H}_{i_{j}}(x_{\mu }) \bigr) +\mathcal{H}_{i_{j}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu } \Biggr) \end{aligned}
(17)

and

\begin{aligned} \breve{\mathfrak{J}} \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr) =&\sum _{\nu =1}^{m_{1}}q_{\nu } \bigl( \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(y_{\nu }, s) \bigr)-\mathcal{G}_{\mathcal{H}, \mathfrak{n}} \Biggl(\sum_{\nu =1}^{m_{1}}q_{\nu }y_{\nu }, s \Biggr) \\ &{}-\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl( \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(x_{\mu }, s) \bigr) +\mathcal{G}_{\mathcal{H}, \mathfrak{n}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu }, s \Biggr). \end{aligned}
(18)

### Proof

Using (7) in (15) and the linearity of $$\breve{\mathfrak{J}}(\cdot )$$, we get (16). □

Next we obtain a generalization of Bullen type inequality (4) for real weights.

### Theorem 2

Assume that all the conditions of Theorem 1 hold with f being an $$\mathfrak{n}$$-convex function.

If

$$\breve{\mathfrak{J}} \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr)\geq 0,\quad s \in \mathbb{I}_{1},$$
(19)

then

$$\breve{\mathfrak{J}}\bigl(f(\cdot )\bigr) \geq \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \breve{\mathfrak{J}}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr).$$
(20)

### Proof

Since the function f is $$\mathfrak{n}$$-convex, we have $$f^{(\mathfrak{n})}(x)\ge 0$$ for all $$x\in \mathbb{I}_{1}$$. Applying Theorem 1 together with (19), we obtain (20). □

### Remark 3

(i) In Theorem 2, inequality in (20) holds in reverse direction if the inequality in (19) is reversed.

(ii) Inequality (20) also holds in reverse direction if f is $$\mathfrak{n}$$-concave.

### Corollary 1

Assume that all the conditions of Theorem 1 hold with f being an $$\mathfrak{n}$$-convex function. If $$\mathcal{G}_{L}$$ is the Green function defined in (10), and

\begin{aligned} \breve{\mathfrak{J}} \bigl(\mathcal{G}_{L}(\cdot , s) \bigr) \geq 0 \quad \textit{for all } s \in \mathbb{I}_{1}, \end{aligned}

then

\begin{aligned} \breve{\mathfrak{J}}\bigl(f(\cdot )\bigr) \geq \sum _{j=1}^{\mathfrak{n}}f(c_{j})\breve{ \mathfrak{J}} \Biggl(\prod_{k=1, k \neq j}^{\mathfrak{n}} \biggl( \frac{(\cdot ) - c_{k}}{c_{j} - c_{k}} \biggr) \Biggr). \end{aligned}
(21)

### Proof

By using the Lagrange conditions in (16), we get (21). □

### Corollary 2

Assume that all the conditions of Theorem 1 hold with f being an $$\mathfrak{n}$$-convex function. Let $$\mathcal{G}_{(q, \mathfrak{n})}$$ be the Green function defined in (13) and $$\tau _{i}$$, $$\eta _{i}$$ be defined in (11) and (12) respectively. If

\begin{aligned} \breve{\mathfrak{J}} \bigl(\mathcal{G}_{(q, \mathfrak{n})}(\cdot , s) \bigr) \geq 0 \quad \textit{for all } s \in \mathbb{I}_{1}, \end{aligned}

then

\begin{aligned} \breve{\mathfrak{J}}\bigl(f(\cdot )\bigr) \geq \sum _{i=0}^{q-1}f^{(i)}(\zeta _{1}) \breve{\mathfrak{J}}\bigl(\tau _{i}(\cdot )\bigr) + \sum _{i=0}^{\mathfrak{n}-q-1}f^{(i)} (\zeta _{2} ) \breve{\mathfrak{J}}\bigl(\eta _{i}(\cdot )\bigr). \end{aligned}
(22)

### Proof

By using the type $$(q, \mathfrak{n}-q)$$ conditions in (16), we get (22). □

### Corollary 3

Assume that all the conditions of Theorem 1 hold with f being an $$\mathfrak{n}$$-convex function. Let $$\mathcal{G}_{2T}$$ be the Green function defined in (14). If

\begin{aligned} \breve{\mathfrak{J}} \bigl(\mathcal{G}_{2T}(\cdot , s) \bigr) \geq 0 \quad \textit{for all } s \in \mathbb{I}_{1}, \end{aligned}

then

\begin{aligned} \breve{\mathfrak{J}}\bigl(f(\cdot )\bigr) \geq & \sum _{i=0}^{q-1}\sum_{k=0}^{q-1-i} \binom{{q+k-1}}{{k}} \biggl[f^{(i)}(\zeta _{1})\breve{ \mathfrak{J}} \biggl(\frac{((\cdot )-\zeta _{1})^{i}}{i!} \biggl(\frac{(\cdot )-\zeta _{2}}{\zeta _{1}-\zeta _{2}} \biggr)^{q} \biggl(\frac{(\cdot )-\zeta _{1}}{\zeta _{2}-\zeta _{1}} \biggr)^{k} \biggr) \\ &{}+ f^{(i)}(\zeta _{2})\breve{\mathfrak{J}} \biggl( \frac{((\cdot )-\zeta _{2})^{i}}{i!} \biggl(\frac{(\cdot )-\zeta _{1}}{\zeta _{2}-\zeta _{1}} \biggr)^{q} \biggl( \frac{(\cdot )-\zeta _{2}}{\zeta _{1}-\zeta _{2}} \biggr)^{k} \biggr) \biggr]. \end{aligned}
(23)

### Proof

By using two-point Taylor condition in (16), we get (23). □

If we put $$m_{1}=\mathfrak{n}_{1}$$ and $$p_{\mu }=q_{\mu }$$ and use positive weights in (15), then $$\breve{\mathfrak{J}}(\cdot )$$ reduces to the functional $$\mathfrak{J}_{2}(\cdot )$$ defined in (5); in this case, (16), (17), (18), (19), and (20) become

\begin{aligned} \mathfrak{J}_{2}\bigl(f(\cdot )\bigr) = \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \mathfrak{J}_{2}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr) + \int _{\zeta _{1}}^{\zeta _{2}}\mathfrak{J}_{2} \bigl( \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr)f^{(\mathfrak{n})}(s)\,ds, \end{aligned}
(24)

where $$\mathfrak{J}_{2}(f(\cdot ))$$ is defined in (5),

\begin{aligned}& \begin{aligned}[b] \mathfrak{J}_{2}\bigl(\mathcal{H}_{i_{j}}( \cdot )\bigr)&=\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl(\mathcal{H}_{i_{j}}(y_{\mu }) \bigr)- \mathcal{H}_{i_{j}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }y_{\mu } \Biggr) \\ &\quad {}-\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl(\mathcal{H}_{i_{j}}(x_{\mu }) \bigr) +\mathcal{H}_{i_{j}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu } \Biggr), \end{aligned} \end{aligned}
(25)
\begin{aligned}& \begin{aligned}[b] \mathfrak{J}_{2} \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}( \cdot , s) \bigr)&=\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(y_{\mu }, s) \bigr)- \mathcal{G}_{\mathcal{H}, \mathfrak{n}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }y_{\mu }, s \Biggr) \\ &\quad {}-\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(x_{\mu }, s) \bigr) + \mathcal{G}_{\mathcal{H}, \mathfrak{n}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu }, s \Biggr), \end{aligned} \end{aligned}
(26)
\begin{aligned}& \mathfrak{J}_{2} \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr)\geq 0, \quad s \in \mathbb{I}_{1}, \end{aligned}
(27)

and

$$\mathfrak{J}_{2}\bigl(f(\cdot )\bigr) \geq \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \mathfrak{J}_{2}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr).$$
(28)

### Theorem 3

Let $$f: \mathbb{I}_{1}= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}$$ be an $$\mathfrak{n}$$-convex function and $$(p_{1}, \ldots , p_{\mathfrak{n}_{1}}) \in \mathbb{R}_{+}^{\mathfrak{n}_{1}}$$ be such that $$\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }=1$$. Also let $$f \in C^{\mathfrak{n}}([\zeta _{1}, \zeta _{2}])$$ and $$\zeta _{1} = c_{1} < c_{2} < \cdots < c_{l} = \zeta _{2}$$ ($$l \geq 2$$) be the points. Moreover, $$\mathcal{H}_{i_{j}}$$, $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}$$ are defined in (8) and (9) respectively. Then, for the functional $$\mathfrak{J}_{2}(\cdot )$$ defined in (5), we have the following:

$$(i)$$:

If$$k_{j}$$is odd for each$$j = 2, \ldots , l$$, then (28) holds.

$$(\mathit{ii})$$:

Let (28) be satisfied and the function

\begin{aligned} F(\cdot ) = \sum_{j=1}^{l} \sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \mathcal{H}_{i_{j}}(\cdot ) \end{aligned}
(29)

be 3-convex. Then$$\sum_{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j})\mathfrak{J}_{2}(\mathcal{H}_{i_{j}}(\cdot ))\geq 0$$and

\begin{aligned} \mathfrak{J}_{2}\bigl(f(\cdot )\bigr) \geq 0. \end{aligned}
(30)

### Proof

$$(i)$$ Since $$\omega (\cdot ) \geq 0$$ for odd values of $$k_{j}$$, Lemma 1 gives $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}-3}(\cdot , s) \geq 0$$. Hence $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}$$ is 3-convex in the first variable, so $$\mathfrak{J}_{2} (\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) ) \geq 0$$ by virtue of Remark 1, because the weights are assumed to be positive. Now, using Theorem 2, we get (28).

$$(\mathit{ii})$$ Since $$\mathfrak{J}_{2}(\cdot )$$ is linear, we can write $$\sum_{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j})\mathfrak{J}_{2}(\mathcal{H}_{i_{j}}(\cdot ))$$ in the form $$\mathfrak{J}_{2}(F(\cdot ))$$. Therefore $$\sum_{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j})\mathfrak{J}_{2}(\mathcal{H}_{i_{j}}(\cdot ))\geq 0$$ by Remark 1, because F is assumed to be 3-convex; hence, from (28), $$\mathfrak{J}_{2}(f(\cdot ))\geq 0$$. □

In the next result, we generalize Levinson type inequality for $$2\mathfrak{n}$$ points given in [6] (see also [3]).

### Theorem 4

Let $$f: [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}$$ be an $$\mathfrak{n}$$-convex function and $$(p_{1}, \ldots , p_{\mathfrak{n}_{1}}) \in \mathbb{R}_{+}^{\mathfrak{n}_{1}}$$ be such that $$\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }=1$$. Also, let $$x_{1}, \ldots , x_{\mathfrak{n}_{1}}$$ and $$y_{1}, \ldots , y_{\mathfrak{n}_{1}} \in \mathbb{I}_{1}$$ be such that $$x_{\mu }+y_{\mu }=2\breve{c}$$, $$x_{\mu }+x_{\mathfrak{n}_{1}-\mu +1} \leq 2\breve{c}$$, and $$\frac{p_{\mu }x_{\mu }+p_{\mathfrak{n}_{1}-\mu +1}x_{\mathfrak{n}_{1}-\mu +1}}{p_{\mu }+p_{\mathfrak{n}_{1}-\mu +1}} \leq \breve{c}$$ for $$\mu =1, \ldots , \mathfrak{n}_{1}$$.

$$(i)$$:

If$$k_{j}$$is odd for each$$j = 2, \ldots , l$$, then (28) holds.

$$(\mathit{ii})$$:

Let (28) be satisfied and the function

\begin{aligned} F(\cdot ) = \sum_{j=1}^{l} \sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \mathcal{H}_{i_{j}}(\cdot ) \end{aligned}
(31)

be 3-convex. Then$$\sum_{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j})\mathfrak{J}_{2}(\mathcal{H}_{i_{j}}(\cdot ))\geq 0$$and (30) holds.

### Proof

The proof is similar to that of Theorem 3. □

In the next result, Levinson type inequality is given (for positive weights) under Mercer’s conditions.

### Corollary 4

Let $$f \in C^{\mathfrak{n}}([\zeta _{1}, \zeta _{2}])$$, $$(p_{1}, \ldots , p_{\mathfrak{n}_{1}}) \in \mathbb{R}^{\mathfrak{n}_{1}}$$ be such that $$\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }=1$$, and let $$x_{\mu }$$, $$y_{\mu }$$ satisfy (6) and $$\max \{x_{1}, \ldots , x_{\mathfrak{n}_{1}}\} \leq \min \{y_{1}, \ldots , y_{\mathfrak{n}_{1}}\}$$. Also, let $$\zeta _{1} = c_{1} < c_{2} < \cdots < c_{l} = \zeta _{2}$$ ($$l \geq 2$$) be the points. Moreover, $$\mathcal{H}_{i_{j}}$$, $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}$$ are defined by (8) and (9) respectively. Then (24) holds.

### Proof

Using (7) in (5) and the linearity of $$\mathfrak{J}_{2}(\cdot )$$, we get (24). □

### 2.2 Levinson’s inequality for higher order convex functions

For the next results, motivated by identity (2), we construct the following identities.

First we define the following functional:

$$\mathcal{H}$$::

Let $$f: \mathbb{I}_{2}= [0, 2\hat{\alpha }] \rightarrow \mathbb{R}$$ be a function, $$x_{1}, \ldots , x_{\mathfrak{n}_{1}} \in (0, \hat{\alpha })$$, and let $$(p_{1}, \ldots , p_{\mathfrak{n}_{1}})\in \mathbb{R}^{\mathfrak{n}_{1}}$$, $$(q_{1}, \ldots , q_{m_{1}})\in \mathbb{R}^{m_{1}}$$ be real numbers such that $$\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }=1$$ and $$\sum_{\nu =1}^{m_{1}}q_{\nu }=1$$. Also, let $$x_{\mu }$$, $$\sum_{\nu =1}^{m_{1}}q_{\nu }(2\hat{\alpha }-x_{\nu })$$, and $$\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu } \in \mathbb{I}_{2}$$. Then

\begin{aligned} \tilde{\mathfrak{J}}\bigl(f(\cdot )\bigr) =&\sum_{\nu =1}^{m_{1}}q_{\nu }f(2 \hat{\alpha }-x_{\nu })-f \Biggl(\sum_{\nu =1}^{m_{1}}q_{\nu } (2\hat{\alpha }-x_{\nu }) \Biggr) -\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }f(x_{\mu }) \\ &{}+ f \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu } \Biggr). \end{aligned}
(32)

### Theorem 5

Assume $$\mathcal{H}$$. Let $$f \in C^{\mathfrak{n}}([\zeta _{1}, \zeta _{2}])$$ and $$\zeta _{1} = c_{1} < c_{2} < \cdots < c_{l} = \zeta _{2}$$ ($$l \geq 2$$) be the points. Moreover, $$\mathcal{H}_{i_{j}}$$, $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}$$ are defined by (8) and (9) respectively. Then, for $$0 \leq \zeta _{1}<\zeta _{2}\leq 2\hat{\alpha }$$, we have the following identity:

\begin{aligned} \tilde{\mathfrak{J}}\bigl(f(\cdot )\bigr) = \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \tilde{\mathfrak{J}}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr) + \int _{\zeta _{1}}^{\zeta _{2}}\tilde{\mathfrak{J}} \bigl( \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr)f^{(\mathfrak{n})}(s)\,ds, \end{aligned}
(33)

where$$\tilde{\mathfrak{J}}(f(\cdot ))$$is defined in (32),

\begin{aligned} \tilde{\mathfrak{J}}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr) =&\sum _{\nu =1}^{m_{1}}q_{\nu } \bigl( \mathcal{H}_{i_{j}}(2\hat{\alpha }-x_{\nu }) \bigr)- \mathcal{H}_{i_{j}} \Biggl(\sum_{\nu =1}^{m_{1}}q_{\nu }(2 \hat{\alpha }-x_{\nu }) \Biggr) \\ &{}-\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl( \mathcal{H}_{i_{j}}(x_{\mu }) \bigr) +\mathcal{H}_{i_{j}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu } \Biggr) \end{aligned}
(34)

and

\begin{aligned} \tilde{\mathfrak{J}} \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr) =&\sum _{\nu =1}^{m_{1}}q_{\nu } \bigl( \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(2\hat{\alpha }-x_{\nu }, s) \bigr)- \mathcal{G}_{\mathcal{H}, \mathfrak{n}} \Biggl(\sum_{\nu =1}^{m_{1}}q_{\nu }(2 \hat{\alpha }-x_{\nu }), s \Biggr) \\ &{}-\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl( \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(x_{\mu }, s) \bigr) +\mathcal{G}_{\mathcal{H}, \mathfrak{n}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu }, s \Biggr). \end{aligned}
(35)

### Proof

Replace $$\mathbb{I}_{1}$$, $$\breve{\mathfrak{J}}(\cdot )$$, and $$y_{\nu }$$ with $$\mathbb{I}_{2}$$, $$\tilde{\mathfrak{J}}(\cdot )$$, and $$2\hat{\alpha }-x_{\nu }$$ respectively in Theorem 1 to get the required result. □

In the next result we generalize Levinson’s inequality (for real weights) for $$\mathfrak{n}$$-convex functions.

### Theorem 6

Assume that all the conditions of Theorem 5 hold with f being an $$\mathfrak{n}$$-convex function.

If

$$\tilde{\mathfrak{J}} \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr)\geq 0, \quad s \in \mathbb{I}_{2},$$
(36)

then for $$0 \leq \zeta _{1}<\zeta _{2}\leq 2\hat{\alpha }$$

$$\tilde{\mathfrak{J}}\bigl(f(\cdot )\bigr) \geq \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \tilde{\mathfrak{J}}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr).$$
(37)

### Proof

Proof is similar to that of Theorem 2 with the conditions given in the statement. □

If we put $$m_{1}=\mathfrak{n}_{1}$$ and $$p_{\mu }=q_{\mu }$$ and use positive weights in (32), then $$\tilde{\mathfrak{J}}(\cdot )$$ reduces to the functional $$\mathfrak{J}_{1}(\cdot )$$ defined in (2); in this case, for $$0 \leq \zeta _{1}<\zeta _{2}\leq 2\hat{\alpha }$$, (33), (34), (35), (36), and (37) become

\begin{aligned}& \mathfrak{J}_{1}\bigl(f(\cdot )\bigr) = \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \mathfrak{J}_{1}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr) + \int _{\zeta _{1}}^{\zeta _{2}}\mathfrak{J}_{1} \bigl( \mathcal{G}_{\mathcal{H}, \mathfrak{n}}(\cdot , s) \bigr)f^{(\mathfrak{n})}(s)\,ds, \end{aligned}
(38)
\begin{aligned}& \begin{aligned}[b] \mathfrak{J}_{1}\bigl( \mathcal{H}_{i_{j}}(\cdot )\bigr)&=\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl(\mathcal{H}_{i_{j}}(2\hat{\alpha }-x_{\mu }) \bigr)- \mathcal{H}_{i_{j}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }(2 \hat{\alpha }-x_{\mu }) \Biggr) \\ &\quad {}-\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl(\mathcal{H}_{i_{j}}(x_{\mu }) \bigr) +\mathcal{H}_{i_{j}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu } \Biggr), \end{aligned} \end{aligned}
(39)
\begin{aligned}& \begin{aligned}[b] \mathfrak{J}_{1} \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}( \cdot , s) \bigr)&=\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(2\hat{\alpha }-x_{\mu }, s) \bigr) - \mathcal{G}_{\mathcal{H}, \mathfrak{n}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }(2 \hat{\alpha }-x_{\mu }), s \Biggr) \\ &\quad {}-\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu } \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}(x_{\mu }, s) \bigr) + \mathcal{G}_{\mathcal{H}, \mathfrak{n}} \Biggl(\sum_{\mu =1}^{\mathfrak{n}_{1}}p_{\mu }x_{\mu }, s \Biggr), \end{aligned} \end{aligned}
(40)
\begin{aligned}& \mathfrak{J}_{1} \bigl(\mathcal{G}_{\mathcal{H}, \mathfrak{n}}( \cdot , s) \bigr)\geq 0,\quad s \in \mathbb{I}_{2}, \end{aligned}
(41)
\begin{aligned}& \mathfrak{J}_{1}\bigl(f(\cdot )\bigr) \geq \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \mathfrak{J}_{1}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr), \end{aligned}
(42)

respectively.
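To make the sign behavior of these reduced formulas tangible, the functional $$\mathfrak{J}_{1}(\cdot )$$ of (2) can be evaluated numerically. The following Python sketch (an illustration under the hypotheses of Theorem A, with function and variable names of our own choosing, not from the paper) checks that a 3-convex function such as $$f(t)=t^{3}$$ yields a nonnegative value, in line with (1) and (44):

```python
import math

def J1(f, p, x, alpha_hat):
    """Levinson functional J1(f) from (2), for normalized positive weights p
    and points x in (0, alpha_hat)."""
    assert abs(sum(p) - 1.0) < 1e-12 and all(0 < xi < alpha_hat for xi in x)
    mean_x = sum(pi * xi for pi, xi in zip(p, x))
    refl = [2 * alpha_hat - xi for xi in x]           # reflected points 2a - x
    mean_y = sum(pi * yi for pi, yi in zip(p, refl))
    return (sum(pi * f(yi) for pi, yi in zip(p, refl)) - f(mean_y)
            - sum(pi * f(xi) for pi, xi in zip(p, x)) + f(mean_x))

p = [0.2, 0.5, 0.3]
x = [0.3, 0.6, 0.9]
# f(t) = t**3 has f''' = 6 > 0, so the functional is nonnegative, as in (1)
assert J1(lambda t: t ** 3, p, x, alpha_hat=1.0) >= 0
# for affine f both sides of (1) coincide, so the functional vanishes
assert abs(J1(lambda t: t, p, x, alpha_hat=1.0)) < 1e-12
```

For quadratic $$f$$ the functional also vanishes, since reflection about $$\hat{\alpha }$$ preserves the weighted variance; this gives another quick consistency check.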

### Theorem 7

Let all the assumptions of Theorem 5 hold, and let $$f: [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}$$ be an $$\mathfrak{n}$$-convex function.

$$(i)$$:

If $$k_{j}$$ is odd for each $$j = 2, \ldots , l$$, then (42) holds.

$$(\mathit{ii})$$:

Let (42) be satisfied and the function

\begin{aligned} F(\cdot ) = \sum_{j=1}^{l} \sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \mathcal{H}_{i_{j}}(\cdot ) \end{aligned}
(43)

be 3-convex. Then the right-hand side of (42) is nonnegative, and we have

\begin{aligned} \mathfrak{J}_{1}\bigl(f(\cdot )\bigr) \geq 0, \end{aligned}
(44)

where $$0 \leq \zeta _{1}<\zeta _{2}\leq 2\hat{\alpha }$$.

### Proof

The result follows by using Theorem 6 and Remark 1. □

### Remark 4

Čebyšev, Grüss, and Ostrowski-type new bounds related to the obtained generalizations can also be discussed. Moreover, we can also give related mean value theorems by using nonnegative functionals (16) and (33), and we can construct the new families of $$\mathfrak{n}$$-exponentially convex functions and Cauchy means related to these functionals as given in Sect. 4 of [17].

## 3 Application to information theory

Shannon entropy plays a central role in information theory and is sometimes referred to as a measure of uncertainty. The entropy of a random variable is defined in terms of its probability distribution and can be shown to be a good measure of randomness. Shannon entropy allows us to estimate the average minimum number of bits needed to encode a string of symbols, based on the alphabet size and the frequency of the symbols.

Divergences between probability distributions are introduced to measure the difference between them. A variety of divergences exist, for example, the f-divergences (in particular, the Kullback–Leibler divergence, the Hellinger distance, and the total variation distance), Rényi divergences, the Jensen–Shannon divergence, etc. (see [18, 19]). There are many papers dealing with inequalities and entropies, see, e.g., [20–28] and the references therein. Jensen's inequality plays an essential role in some of these inequalities. However, Jensen's inequality deals with one type of data points, whereas Levinson's inequality deals with two types of data points.

### 3.1 Csiszár divergence

The following definition is given by Csiszár in [29, 30].

### Definition 1

Let $$f: \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}$$ be a convex function. Also, let $$\tilde{\mathbf{r}}, \tilde{\mathbf{k}} \in \mathbb{R}_{+}^{\mathfrak{n}_{1}}$$ be such that $$\sum_{v=1}^{\mathfrak{n}_{1}}r_{v}=1$$ and $$\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}=1$$. Then the f-divergence functional is defined by

\begin{aligned} I_{f}(\tilde{\mathbf{r}}, \tilde{\mathbf{k}}) := \sum _{v=1}^{\mathfrak{n}_{1}}k_{v}f \biggl(\frac{r_{v}}{k_{v}} \biggr). \end{aligned}

By defining the following:

\begin{aligned} f(0) := \lim_{x \rightarrow 0^{+}}f(x); \qquad 0f \biggl(\frac{0}{0} \biggr):=0; \qquad 0f \biggl(\frac{a}{0} \biggr):= \lim_{x \rightarrow 0^{+}}xf \biggl(\frac{a}{x} \biggr), \quad a>0, \end{aligned}

he stated that nonnegative probability distributions can also be used.
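To make the limiting conventions above concrete, here is a small Python sketch (illustrative only; the function names are ours, not from [29, 30]) of the f-divergence functional. Specializing to $$f(x)=x\log x$$ recovers the Kullback–Leibler divergence:

```python
import math

def f_divergence(f, r, k):
    """Csiszar f-divergence I_f(r, k) = sum_v k_v f(r_v / k_v), with the
    conventions 0 f(0/0) := 0 and 0 f(a/0) := lim_{x->0+} x f(a/x),
    evaluated here for the case f(x) = x log x."""
    total = 0.0
    for rv, kv in zip(r, k):
        if kv == 0:
            # for f(x) = x log x the limit is 0 when a = 0 and +inf when a > 0
            total += 0.0 if rv == 0 else math.inf
        else:
            total += kv * f(rv / kv)
    return total

# Kullback-Leibler divergence is the case f(x) = x log x
kl = lambda r, k: f_divergence(lambda x: x * math.log(x) if x > 0 else 0.0, r, k)
r = [0.5, 0.25, 0.25]
k = [0.25, 0.25, 0.5]
assert kl(r, r) == 0.0      # divergence of a distribution from itself
assert kl(r, k) > 0.0       # strictly positive for distinct distributions
```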

First we give the following definition.

### Definition 2

Let I be an interval contained in $$\mathbb{R}$$ and $$f: I \rightarrow \mathbb{R}$$ be an $$\mathfrak{n}$$-convex function. Also, let $$\tilde{\mathbf{r}}=(r_{1}, \ldots , r_{\mathfrak{n}_{1}})\in \mathbb{R}^{\mathfrak{n_{1}}}$$ and $$\tilde{\mathbf{k}}=(k_{1}, \ldots , k_{\mathfrak{n}_{1}})\in (0, \infty )^{\mathfrak{n}_{1}}$$ be such that

\begin{aligned} \frac{r_{v}}{k_{v}} \in I, \quad v= 1, \ldots , \mathfrak{n}_{1}. \end{aligned}

Then

\begin{aligned} \hat{I}_{f}(\tilde{\mathbf{r}}, \tilde{\mathbf{k}}) : = \sum_{v=1}^{\mathfrak{n}_{1}}k_{v}f \biggl( \frac{r_{v}}{k_{v}} \biggr). \end{aligned}
(45)

We apply Theorem 2 for $$\mathfrak{n}$$-convex functions to $$\hat{I}_{f}(\tilde{\mathbf{r}}, \tilde{\mathbf{k}})$$.

### Theorem 8

Let $$\tilde{\mathbf{r}}= (r_{1}, \ldots , r_{\mathfrak{n}_{1}} ) \in \mathbb{R}^{\mathfrak{n}_{1}}$$, $$\tilde{\mathbf{w}}= (w_{1}, \ldots , w_{\mathfrak{m}_{1}} )\in \mathbb{R}^{\mathfrak{m}_{1}}$$, $$\tilde{\mathbf{k}}= (k_{1}, \ldots , k_{\mathfrak{n}_{1}} )\in (0, \infty )^{\mathfrak{n}_{1}}$$, and $$\tilde{\mathbf{t}}= (t_{1}, \ldots , t_{\mathfrak{m}_{1}} ) \in (0, \infty )^{\mathfrak{m}_{1}}$$ be such that

\begin{aligned} \frac{r_{v}}{k_{v}} \in I, \quad v = 1, \ldots , \mathfrak{n}_{1}, \end{aligned}

and

\begin{aligned} \frac{w_{u}}{t_{u}} \in I, \quad u = 1, \ldots , \mathfrak{m}_{1}. \end{aligned}

Also, let $$f \in C^{\mathfrak{n}}[\zeta _{1}, \zeta _{2}]$$ be an $$\mathfrak{n}$$-convex function. If $$k_{j}$$ is odd for each $$j = 2, \ldots , l$$, then

\begin{aligned} \mathfrak{J}_{cis}\bigl(f(\cdot )\bigr)\geq \sum _{j=1}^{l}\sum_{i=0}^{k_{j}}f^{(i)}(c_{j}) \mathfrak{J} \bigl(\mathcal{H}_{i_{j}}(\cdot ) \bigr), \end{aligned}
(46)

where

\begin{aligned} \mathfrak{J}_{cis}\bigl(f(\cdot )\bigr) =& \frac{1}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}}\hat{I}_{f}(\tilde{\mathbf{w}}, \tilde{\mathbf{t}})- f\Biggl(\sum_{u=1}^{\mathfrak{m}_{1}}\frac{w_{u}}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}} \Biggr)-\frac{1}{\sum_{s=1}^{\mathfrak{n}_{1}}k_{s}} \hat{I}_{f}(\tilde{\mathbf{r}}, \tilde{ \mathbf{k}}) \\ &{}+f\Biggl(\sum_{s=1}^{\mathfrak{n}_{1}} \frac{r_{s}}{\sum_{s=1}^{\mathfrak{n}_{1}}k_{s}}\Biggr) \end{aligned}
(47)

and

\begin{aligned} \mathfrak{J}\bigl(\mathcal{H}_{i_{j}}(\cdot )\bigr) =&\sum _{u =1}^{\mathfrak{m}_{1}}\frac{t_{u}}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}} \mathcal{H}_{i_{j}} \biggl(\frac{w_{u}}{t_{u}} \biggr)- \mathcal{H}_{i_{j}} \Biggl(\sum_{u =1}^{\mathfrak{m}_{1}} \frac{w_{u}}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}} \Biggr) \\ &{}-\sum_{v =1}^{\mathfrak{n}_{1}}\frac{k_{v}}{\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}} \mathcal{H}_{i_{j}} \biggl(\frac{r_{v}}{k_{v}} \biggr) + \mathcal{H}_{i_{j}} \Biggl(\sum_{v =1}^{\mathfrak{n}_{1}} \frac{r_{v}}{\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}} \Biggr). \end{aligned}
(48)

### Proof

Since $$k_{j}$$ is odd for each $$j=2, \ldots , l$$, we have $$\omega (\cdot ) \geq 0$$, and by Lemma 1 we have $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}-3}(\cdot , s) \geq 0$$, so $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}$$ is 3-convex and (19) holds. Hence, using $$p_{\mu } = \frac{k_{v}}{\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}}$$, $$x_{\mu } = \frac{r_{v}}{k_{v}}$$, $$q_{\nu } = \frac{t_{u}}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}}$$, $$y_{\nu } = \frac{w_{u}}{t_{u}}$$ in Theorem 2, (20) becomes (46), where $$\hat{I}_{f}(\tilde{\mathbf{r}}, \tilde{\mathbf{k}})$$ is defined in (45) and

\begin{aligned} \hat{I}_{f}(\tilde{\mathbf{w}}, \tilde{\mathbf{t}}) : = \sum_{u=1}^{\mathfrak{m}_{1}}t_{u}f \biggl( \frac{w_{u}}{t_{u}} \biggr). \end{aligned}
(49)

□

### Definition 3

(see [31]) The Shannon entropy of a positive probability distribution $$\tilde{\mathbf{k}}=(k_{1}, \ldots , k_{\mathfrak{n}_{1}})$$ is defined by

\begin{aligned} \mathcal{S} : = - \sum_{v=1}^{\mathfrak{n}_{1}}k_{v} \log (k_{v}). \end{aligned}
(50)
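Definition (50) translates directly into code. The following Python sketch (an illustration with names of our own choosing) computes $$\mathcal{S}$$ for a positive probability distribution in an arbitrary log base:

```python
import math

def shannon_entropy(k, base=math.e):
    """Shannon entropy S = -sum_v k_v log(k_v) of (50), for a positive
    probability distribution k, in the given log base."""
    assert abs(sum(k) - 1.0) < 1e-12 and all(kv > 0 for kv in k)
    return -sum(kv * math.log(kv, base) for kv in k)

# the uniform distribution on n symbols attains the maximum log(n)
assert abs(shannon_entropy([0.25] * 4, base=2) - 2.0) < 1e-12
# a skewed distribution is strictly less uncertain
assert shannon_entropy([0.7, 0.1, 0.1, 0.1], base=2) < 2.0
```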

### Corollary 5

Let $$\tilde{\mathbf{k}}=(k_{1}, \ldots , k_{\mathfrak{n}_{1}})$$ and $$\tilde{\mathbf{t}}=(t_{1}, \ldots , t_{\mathfrak{m}_{1}})$$ be positive probability distributions. Also, let $$\tilde{\mathbf{r}}=(r_{1}, \ldots , r_{\mathfrak{n}_{1}}) \in (0, \infty )^{\mathfrak{n}_{1}}$$ and $$\tilde{\mathbf{w}}=(w_{1}, \ldots , w_{\mathfrak{m}_{1}}) \in (0, \infty )^{\mathfrak{m}_{1}}$$.

$$(i)$$:

If the base of log is greater than 1 and $$\mathfrak{n}$$ is odd ($$\mathfrak{n}=3, 5, \ldots$$), then

\begin{aligned} \mathfrak{J}_{s}(\cdot ) \geq &\sum _{j=1}^{l}\sum_{i=0}^{k_{j}} \frac{(-1)^{i-1}(i-1)!}{(c_{j})^{i}}\mathfrak{J} \bigl(\mathcal{H}_{i_{j}}(\cdot ) \bigr), \end{aligned}
(51)

where

\begin{aligned} \mathfrak{J}_{s}(\cdot ) =&\sum _{u=1}^{\mathfrak{m}_{1}}t_{u}\log (w_{u})+ \tilde{\mathcal{S}} -\log \Biggl(\sum_{u=1}^{\mathfrak{m}_{1}}w_{u} \Biggr)-\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}\log (r_{v})+\mathcal{S} \\ &{}+\log \Biggl(\sum_{v=1}^{\mathfrak{n}_{1}}r_{v} \Biggr) \end{aligned}
(52)

and $$\mathfrak{J} (\mathcal{H}_{i_{j}}(\cdot ) )$$ is defined in (48).

$$(\mathit{ii})$$:

If $$k_{j}$$ is odd, and the base of log is less than 1 or $$\mathfrak{n}$$ is even ($$\mathfrak{n}=4, 6, \ldots$$), then the inequality in (51) is reversed.

### Proof

$$(i)$$ The function $$f(x)=\log (x)$$ is $$\mathfrak{n}$$-convex for $$\mathfrak{n}=3, 5, \ldots$$ when the base of log is greater than 1. Therefore, using $$f(x)=\log (x)$$ in Theorem 8, we get (51), where $$\mathcal{S}$$ is defined in (50) and

$$\tilde{\mathcal{S}}= - \sum_{u=1}^{\mathfrak{m}_{1}}t_{u} \log (t_{u}).$$

$$(\mathit{ii})$$ Since $$k_{j}$$ is odd and the function $$f(x) = \log (x)$$ is $$\mathfrak{n}$$-concave for $$\mathfrak{n}=4, 6, \ldots$$, by Remark 3(ii), (20) holds in the reverse direction. Therefore, using $$f(x)=\log (x)$$ and $$p_{\mu } = \frac{k_{v}}{\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}}$$, $$x_{\mu } = \frac{r_{v}}{k_{v}}$$, $$q_{\nu } = \frac{t_{u}}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}}$$, $$y_{\nu } = \frac{w_{u}}{t_{u}}$$ in the reversed inequality (20), we have

\begin{aligned} \mathfrak{J}_{s}(\cdot ) \leq &\sum _{j=1}^{l}\sum_{i=0}^{k_{j}} \frac{(-1)^{i-1}(i-1)!}{(c_{j})^{i}}\mathfrak{J} \bigl(\mathcal{H}_{i_{j}}(\cdot ) \bigr). \end{aligned}
(53)

□
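The coefficients $$\frac{(-1)^{i-1}(i-1)!}{(c_{j})^{i}}$$ appearing in (51) and (53) are the successive derivatives $$f^{(i)}(c_{j})$$ of $$f(x)=\log (x)$$. A stdlib-only numerical check (a sanity illustration, not part of the proof) confirms the closed form by central-difference differentiation of each derivative against the next:

```python
import math

def d(g, x, h=1e-5):
    """Central finite-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

def log_deriv(i, x):
    """Claimed closed form f^(i)(x) = (-1)^(i-1) (i-1)! / x^i for f = log."""
    return (-1) ** (i - 1) * math.factorial(i - 1) / x ** i

x0 = 1.7
# the i = 1 case of the formula is f'(x) = 1/x
assert abs(d(math.log, x0) - log_deriv(1, x0)) < 1e-7
# differentiating the i-th closed form reproduces the (i+1)-th
for i in range(1, 5):
    assert abs(d(lambda t: log_deriv(i, t), x0) - log_deriv(i + 1, x0)) < 1e-5
```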

### Corollary 6

Let $$\tilde{\mathbf{r}}=(r_{1}, \ldots , r_{\mathfrak{n}_{1}})$$ and $$\tilde{\mathbf{w}}=(w_{1}, \ldots , w_{\mathfrak{m}_{1}})$$ be positive probability distributions. Also, let $$\tilde{\mathbf{k}}=(k_{1}, \ldots , k_{\mathfrak{n}_{1}}) \in (0, \infty )^{\mathfrak{n}_{1}}$$, $$\tilde{\mathbf{t}}=(t_{1}, \ldots , t_{\mathfrak{m}_{1}}) \in (0, \infty )^{\mathfrak{m}_{1}}$$, and let $$k_{j}$$ be odd.

$$(i)$$:

If the base of log is greater than 1 and $$\mathfrak{n}$$ is even ($$\mathfrak{n} \geq 4$$), then

\begin{aligned} \mathfrak{J}_{\mathbf{{s}}}(\cdot ) \geq & \sum _{j=1}^{l}\sum_{i=0}^{k_{j}} \frac{(-1)^{i-1}(i-2)!}{(c_{j})^{i-1}}\mathfrak{J} \bigl(\mathcal{H}_{i_{j}}(\cdot ) \bigr), \end{aligned}
(54)

where

\begin{aligned} \mathfrak{J}_{\mathbf{{s}}}(\cdot ) =&\frac{1}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}} \Biggl( \tilde{\mathfrak{S}}+\sum_{u=1}^{\mathfrak{m}_{1}} w_{u}\log (t_{u}) \Biggr)-\frac{1}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}}\log \Biggl( \sum_{u=1}^{\mathfrak{m}_{1}}t_{u} \Biggr) \\ &{}-\frac{1}{\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}} \Biggl(\mathfrak{S}+\sum _{v=1}^{\mathfrak{n}_{1}}r_{v}\log (k_{v}) \Biggr) +\frac{1}{\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}}\log \Biggl(\sum_{v=1}^{\mathfrak{n}_{1}}k_{v} \Biggr) \end{aligned}
(55)

and $$\mathfrak{J} (\mathcal{H}_{i_{j}}(\cdot ) )$$ is defined in (48).

$$(\mathit{ii})$$:

If the base of log is less than 1 or $$\mathfrak{n}$$ is odd ($$\mathfrak{n} \geq 3$$), then (54) holds in the reverse direction.

### Proof

$$(i)$$ Since the function $$f(x)= -x\log (x)$$ is $$\mathfrak{n}$$-convex ($$\mathfrak{n}=4, 6, \ldots$$) and $$k_{j}$$ is odd for each $$j=2, \ldots , l$$, we have $$\omega (\cdot ) \geq 0$$, and by Lemma 1 we have $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}-3}(\cdot , s) \geq 0$$, which implies that $$\mathcal{G}_{\mathcal{H}, \mathfrak{n}}$$ is 3-convex; hence (19) holds. Therefore, using $$f(x)=-x\log (x)$$ and $$p_{\mu } = \frac{k_{v}}{\sum_{v=1}^{\mathfrak{n}_{1}}k_{v}}$$, $$x_{\mu } = \frac{r_{v}}{k_{v}}$$, $$q_{\nu } = \frac{t_{u}}{\sum_{u=1}^{\mathfrak{m}_{1}}t_{u}}$$, $$y_{\nu } = \frac{w_{u}}{t_{u}}$$ in Theorem 2, (20) becomes (54), where

$$\tilde{\mathfrak{S}}= - \sum_{u=1}^{\mathfrak{m}_{1}}w_{u} \log (w_{u})$$

and

$$\mathfrak{S}=- \sum_{v=1}^{\mathfrak{n}_{1}}r_{v} \log (r_{v}).$$

$$(\mathit{ii})$$ The function $$f(x)= -x\log (x)$$ is $$\mathfrak{n}$$-concave ($$\mathfrak{n}=3, 5, \ldots$$), so by Remark 3(ii), (20) holds in the reverse direction. Therefore, using the same substitutions as in $$(i)$$ in the reversed inequality (20), we obtain (54) in the reverse direction. □
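Analogously to Corollary 5, the coefficients $$\frac{(-1)^{i-1}(i-2)!}{(c_{j})^{i-1}}$$ in (54) are, for $$i \geq 2$$, the derivatives $$f^{(i)}(c_{j})$$ of $$f(x)=-x\log (x)$$ (the $$i=0, 1$$ terms involve $$f$$ and $$f'$$ directly). The same kind of stdlib-only finite-difference check (again an illustration, not part of the proof) verifies this closed form:

```python
import math

def d(g, x, h=1e-5):
    """Central finite-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

def f(x):
    return -x * math.log(x)

def f_deriv(i, x):
    """Claimed closed form f^(i)(x) = (-1)^(i-1) (i-2)! / x^(i-1), i >= 2."""
    return (-1) ** (i - 1) * math.factorial(i - 2) / x ** (i - 1)

x0 = 1.3
# f'(x) = -(log x + 1); differentiating it gives f''(x) = -1/x, the i = 2 case
assert abs(d(lambda t: -(math.log(t) + 1), x0) - f_deriv(2, x0)) < 1e-7
# differentiating the i-th closed form reproduces the (i+1)-th
for i in range(2, 6):
    assert abs(d(lambda t: f_deriv(i, t), x0) - f_deriv(i + 1, x0)) < 1e-5
```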

## 4 Conclusion

This paper is concerned with a generalization of Levinson type inequalities (for real weights) for two types of data points involving higher order convex functions. The Hermite interpolating polynomial is used for the class of $$\mathfrak{n}$$-convex functions, where $$\mathfrak{n} \geq 3$$. As an application to information theory, the main results are used to obtain estimates via f-divergence and Shannon entropy.

## References

1. Pečarić, J., Proschan, F., Tong, Y.L.: Convex Functions, Partial Orderings and Statistical Applications. Academic Press, New York (1992)

2. Levinson, N.: Generalization of an inequality of Ky Fan. J. Math. Anal. Appl. 6, 133–134 (1969)

3. Mitrinović, D.S., Pečarić, J., Fink, A.M.: Classical and New Inequalities in Analysis, vol. 61. Kluwer Academic, Dordrecht (1993)

4. Popoviciu, T.: Sur une inegalite de N. Levinson. Mathematica 6, 301–306 (1969)

5. Bullen, P.S.: An inequality of N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 109–112 (1973)

6. Pečarić, J.: On an inequality on N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 71–74 (1980)

7. Mercer, A.McD.: A variant of Jensen’s inequality. J. Inequal. Pure Appl. Math. 4(4), article 73 (2003)

8. Agarwal, R.P., Wong, P.J.Y.: Error Inequalities in Polynomial Interpolation and Their Applications. Kluwer Academic, Dordrecht (1993)

9. Beesack, P.: On the Green’s function of an N-point boundary value problem. Pac. J. Math. 12(3), 801–812 (1962)

10. Levin, A.Y.: Some problems bearing on the oscillation of solution of linear differential equations. Sov. Math. Dokl. 4, 121–124 (1963)

11. Pečarić, J., Praljak, M., Witkowski, A.: Generalized Levinson’s inequality and exponential convexity. Opusc. Math. 35, 397–410 (2015)

12. Pečarić, J., Praljak, M., Witkowski, A.: Linear operators inequality for n-convex functions at a point. Math. Inequal. Appl. 18, 1201–1217 (2015)

13. Khan, M.A., Pečarić, Ð., Pečarić, J.: Bounds for Csiszár divergence and hybrid Zipf–Mandelbrot entropies. Math. Methods Appl. Sci. 42, 7411–7424 (2019)

14. Khan, M.A., Hanif, M., Khan, Z.A., Ahmad, K., Chu, Y.M.: Association of Jensen inequality for s-convex function. J. Inequal. Appl. 2019, 164 (2019)

15. Mehmood, N., Butt, S.I., Pečarić, Ð., Pečarić, J.: Generalization of cyclic refinements of Jensen’s inequality by Lidstone’s polynomial with applications in information theory. J. Math. Inequal. 14(1), 249–271 (2020)

16. Butt, S.I., Mehmood, N., Pečarić, Ð., Pečarić, J.: New bounds for Shannon, relative and Mandelbrot entropies via Abel–Gontscharoff interpolating polynomial. Math. Inequal. Appl. 22(4), 1283–1301 (2019)

17. Butt, S.I., Khan, K.A., Pečarić, J.: Generalization of Popoviciu inequality for higher order convex function via Taylor’s polynomial. Acta Univ. Apulensis, Mat.-Inform. 42, 181–200 (2015)

18. Liese, F., Vajda, I.: Convex Statistical Distances. Teubner-Texte zur Mathematik, vol. 95. Teubner, Leipzig (1987)

19. Vajda, I.: Theory of Statistical Inference and Information. Kluwer, Dordrecht (1989)

20. Khan, M.A., Mohammad, Z., Sahwi, A., Chu, Y.M.: New estimations for Shannon and Zipf-Mandelbrot entropies. Entropy 20(608), 1–10 (2018)

21. Khan, M.A., Pečarić, Ð., Pečarić, J.: On Zipf–Mandelbrot entropies. J. Comput. Appl. Math. 346(219), 192–204 (2018)

22. Khan, M.A., Pečarić, Ð., Pečarić, J.: Bounds for Shannon and Zipf–Mandelbrot entropies. Math. Methods Appl. Sci. 40(18), 7316–7322 (2017)

23. Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Generalization of the Levinson inequality with applications to information theory. J. Inequal. Appl. 2019, 230 (2019)

24. Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Levinson type inequalities for higher order convex functions via Abel–Gontscharoff interpolation. Adv. Differ. Equ. 2019, 430 (2019)

25. Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Estimation of f-divergence and Shannon entropy by Levinson type inequalities via new Green’s functions and Lidstone polynomial. Adv. Differ. Equ. 2020, 27 (2020)

26. Khan, K.A., Niaz, T., Pečarić, Ð., Pečarić, J.: Refinement of Jensen’s inequality and estimation of f- and Rényi divergence via Montgomery identity. J. Inequal. Appl. 2018, 318 (2018)

27. Gibbs, A.L.: On choosing and bounding probability metrics. Int. Stat. Rev. 70, 419–435 (2002)

28. Sason, I., Verdú, S.: f-Divergence inequalities. IEEE Trans. Inf. Theory 62, 5973–6006 (2016)

29. Csiszár, I.: Information measures: a critical survey. In: Trans. 7th Prague Conf. on Info. Th., Statist. Decis. Funct., Random Process and 8th European Meeting of Statist., Vol. B, pp. 73–86. Academia, Prague (1978)

30. Csiszár, I.: Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hung. 2, 299–318 (1967)

31. Horváth, L., Pečarić, Ð., Pečarić, J.: Estimations of f- and Rényi divergences by using a cyclic refinement of the Jensen’s inequality. Bull. Malays. Math. Sci. Soc. 42, 933–946 (2019)

### Acknowledgements

The authors wish to thank the anonymous referees for their very careful reading of the manuscript and fruitful comments and suggestions. The research of the 4th author is supported by the Ministry of Education and Science of the Russian Federation (the Agreement number No. 02.a03.21.0008).


## Funding

There is no funding for this work.

## Author information


### Contributions

All authors jointly worked on the results and they read and approved the final manuscript.

## Ethics declarations

### Competing interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Adeel, M., Khan, K.A., Pečarić, Ð. et al. Estimation of f-divergence and Shannon entropy by using Levinson type inequalities for higher order convex functions via Hermite interpolating polynomial. J Inequal Appl 2020, 137 (2020). https://doi.org/10.1186/s13660-020-02403-y