Generalizations of Levinson type inequalities via new Green functions with applications to information theory
Journal of Inequalities and Applications volume 2023, Article number: 124 (2023)
Abstract
In this paper, four new Green functions are used to generalize Levinson-type inequalities for the class of 3-convex functions. The f-divergence, Renyi entropy, Renyi divergence, Shannon entropy, and the Zipf–Mandelbrot law are also used to apply the main results to information theory.
1 Introduction and preliminaries
Information theory is a branch of science concerned with the storage, quantification, and transmission of data. Information is difficult to quantify since it is an abstract object. In his first of two important and essential theorems, Claude Shannon [1], who created the idea of information theory, presented entropy as the fundamental unit of information in 1948. The probability density function may also be used to measure the information. The distance between the two probability distributions is calculated using the divergence measure. Divergence measure is a concept in probability theory that is used to overcome a number of problems. Divergence measures, which compare two probability distributions and are employed in statistics and information theory, are described in the literature. Sensor networks [2], finance [3], economics [4], and approximation of probability distributions [5] are all areas where information and divergence measures are highly valuable and play a significant role.
In [6], Adeel et al. generalized the Levinson inequality and gave fruitful results in information theory. Adeel et al. [7] used Abel–Gontscharoff interpolation to present Levinson-type inequalities for higher-order convex functions. In [8], Adeel et al. estimated the Shannon entropy and the f-divergence by combining new Lidstone polynomials and Green functions with Levinson-type inequalities. Using the Hermite interpolating polynomial, Adeel et al. [9] obtained Levinson-type inequalities for higher-order convex functions and provided estimates for the Shannon entropy and the f-divergence. In [10], Adeel et al. used Bullen-type inequalities to estimate different entropies and the f-divergence via Fink’s identity. Adeel et al. [11] gave various entropy results related to Levinson-type inequalities using Green functions and also presented results for the Hermite interpolating polynomial. For 2n-convex functions, Adeel et al. [12] generalized Levinson-type inequalities by applying the Lidstone interpolating polynomial. In [13], Adeel et al. used Green functions to obtain generalized Levinson-type inequalities via the Montgomery identity; they also found bounds for different entropies and divergences. However, in [6, 11, 13], all the generalizations and results are proved using only two Green functions. In this study, four newly defined 3-convex Green functions are used to generalize the Levinson-type inequalities, and bounds for different entropies are given.
Higher-order convex functions are defined using divided difference techniques.
Definition 1.1
[14, p. 14] For a function \(h:[\varpi _{1},\varpi _{2}] \rightarrow \mathbb{R}\), the divided difference of order n at mutually distinct points \(u_{0},\ldots,u_{n}\in [\varpi _{1},\varpi _{2}]\) is defined recursively by
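For the reader's convenience, the standard recursion and its equivalent closed (symmetric) form read as follows:

```latex
\[
[u_{j};h] = h(u_{j}),\qquad
[u_{0},\ldots ,u_{n};h]
 =\frac{[u_{1},\ldots ,u_{n};h]-[u_{0},\ldots ,u_{n-1};h]}{u_{n}-u_{0}},
\]
\[
[u_{0},\ldots ,u_{n};h]
 =\sum_{j=0}^{n}\frac{h(u_{j})}{\prod_{k\neq j}(u_{j}-u_{k})}.
\]
```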
It is clear that (1) is identical to
The nth-order divided difference is used to define a real-valued convex function in the following formulation (see [14, p. 15]).
Definition 1.2
For \((n+1 )\) distinct points \(u_{0},\ldots ,u_{n} \in [\varpi _{1},\varpi _{2}]\), a function \(f : [\varpi _{1},\varpi _{2}] \rightarrow \mathbb{R}\) is called n-convex \((n\geq 0 )\) if and only if
holds.
If \([u_{0},\ldots ,u_{n};f ] \leq 0\), then f is n-concave.
A criterion for n-convex functions is given in [14, p. 16] as follows:
Theorem 1.1
f is n-convex if and only if \(f^{(n)} \geq 0\), given that \(f^{(n)}\) exists.
Levinson [15] extended Ky Fan’s inequality to the class of 3-convex functions as follows:
Theorem A
Suppose \(f :\mathbb{I}_{2}=(0, 2\gamma ) \rightarrow \mathbb{R}\) is such that \(f^{(3)}(z)\) is non-negative. Consider \(x_{\sigma} \in (0, \gamma )\) and \(p_{\sigma}>0\). Then
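For context, Levinson's inequality (2) is commonly written in the following form, with \(P_{\eta}=\sum_{\sigma =1}^{\eta}p_{\sigma}\):

```latex
\[
\frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}f(x_{\sigma})
- f\Biggl(\frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}\Biggr)
\;\leq\;
\frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}f(2\gamma -x_{\sigma})
- f\Biggl(\frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}(2\gamma -x_{\sigma})\Biggr).
\]
```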
Popoviciu [16] noted that Levinson’s Inequality (2) plays an important role on \((0, 2\gamma )\), and in [17] Bullen gave a different proof of Popoviciu’s result together with a converse of (2).
Theorem B
(i) Let \(f:\mathbb{I}_{0}=[\varpi _{1}, \varpi _{2}] \rightarrow \mathbb{R}\) be a 3-convex function and let \(x_{\sigma}, y_{\sigma} \in \mathbb{I}_{0}\), \(p_{\sigma}>0\) for \(\sigma =1, 2, \ldots , \eta \), be such that
then
(ii) Let \(p_{\sigma}>0\). If f is continuous and (4) holds for all \(x_{\sigma}\), \(y_{\sigma}\) satisfying (3), then f is 3-convex.
In the following result, Pečarić [18] proved Inequality (4) under a weakened form of Condition (3).
Theorem C
Let \(f:\mathbb{I}_{0} \rightarrow \mathbb{R}\) be such that \(f^{(3)}(t)\geq 0\) and let \(p_{\sigma}>0\). Also, let \(x_{\sigma}, y_{\sigma} \in \mathbb{I}_{0}\) be such that \(x_{\sigma}+y_{\sigma}=2\breve{c}\) for \(\sigma =1, \ldots , \eta \), \(x_{\sigma}+x_{\eta -\sigma +1}\leq 2 \breve{c}\), and \(\frac{p_{\sigma}x_{\sigma}+p_{\eta - \sigma +1}x_{\eta -\sigma +1}}{p_{\sigma}+p_{\eta -\sigma +1}} \leq \breve{c}\). Then (4) is true.
In [19], Mercer showed that (4) remains true when the symmetry conditions are replaced by equality of the weighted variances of the two data sets.
Theorem D
Let f be a 3-convex function defined on \(\mathbb{I}_{0}\) and let \(p_{\sigma}\) be such that \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\). Choose \(x_{\sigma}\), \(y_{\sigma}\) such that \(\min \{y_{1},\ldots, y_{\eta}\}\geq \max \{x_{1}, \ldots, x_{\eta}\}\) and
then (4) holds.
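Mercer's replacement for the symmetry hypothesis is the equality of the weighted variances of the two samples; in the notation above, condition (5) reads:

```latex
\[
\sum_{\sigma =1}^{\eta}p_{\sigma}
\Biggl(x_{\sigma}-\sum_{j=1}^{\eta}p_{j}x_{j}\Biggr)^{2}
=
\sum_{\sigma =1}^{\eta}p_{\sigma}
\Biggl(y_{\sigma}-\sum_{j=1}^{\eta}p_{j}y_{j}\Biggr)^{2}.
\]
```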
Let \(\mathfrak{g}=[\hat{e}_{1},\hat{e}_{2}]\subset \mathbb{R}\), \(\hat{e}_{1}<\hat{e}_{2}\), and \(\lambda =1,\ldots ,4\). In [20], Pečarić et al. defined new Green functions \(\hat{G}_{\lambda}:\mathfrak{g}\times \mathfrak{g}\rightarrow \mathbb{R}\), given as follows:
Utilising these Green functions, they also proved Abel–Gontscharoff-type identities, which are given by
where \(f:[\hat{e}_{1},\hat{e}_{2}]\rightarrow \mathbb{R}\).
This work is arranged as follows: in Sect. 2, a new type of 3-convex Green function is defined, identities related to these Green functions are established, and Levinson’s inequality is generalized for 3-convex functions via this new class. In Sect. 3, the results are applied to information theory via the f-divergence, the Rényi entropy, the Rényi divergence, the Zipf–Mandelbrot law, and the Shannon entropy.
2 Main results
First, we define a new type of 3-convex Green function, illustrate the four functions graphically, and state a lemma based on them. Then we present Levinson-type inequalities obtained from these new Green functions.
Let \(\mathfrak{g}=[\hat{e}_{1},\hat{e}_{2}]\subset (-\infty ,\infty )\) and \(\lambda \in \{1,2,3,4\}\). The new 3-convex Green functions \(G_{\lambda}:\mathfrak{g}\times \mathfrak{g}\rightarrow \mathbb{R}\) are defined as follows:
Figure 1 shows graphs of \(G_{k}\) (\(k=1,\ldots ,4\)). These new Green functions are used in the following lemma.
Lemma 2.1
Let f be a function defined on \(\mathfrak{g}\) such that \(f'''\) exists, and let \(G_{\lambda}\) (\(\lambda =1,\dots ,4\)) be the two-point right focal problem-type Green functions given by (14)–(17). Then the following identities hold:
Proof
All four identities are proved by the same integration technique, so we give the proof of (18) only.
After rearranging, we get Identity (18). □
Remark 2.1
If we apply integration by parts to the integral parts of (10)–(13), choosing \(f''(\vartheta )\) as the first function and \(\widehat{G}_{k}(\hat{\varphi},\vartheta )\) (\(k=1,2,3,4\)) as the second, we obtain (18)–(21), respectively.
Next, identities relating the Jensen differences of two distinct data sets are given using the newly defined 3-convex Green functions (14)–(17).
Theorem 2.2
Let \(f\in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\), \(f: \mathfrak{g}=[\hat{e}_{1}, \hat{e}_{2}] \rightarrow \mathbb{R}\), and let \((q_{1}, \ldots , q_{\varrho}) \in \mathbb{R}^{\varrho}\), \((p_{1}, \ldots , p_{\eta}) \in \mathbb{R}^{\eta}\) be such that \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\) and \(\sum_{\tau =1}^{\varrho}q_{\tau}=1\). Also, let \(x_{\sigma}\), \(y_{\tau}\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}\), \(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \in \mathfrak{g}\). Then for \(G_{k}\) (\(k=1, 4\))
and for \(G_{k}\) (\(k=2, 3\))
where
and \(G_{k}\) (\(k=1 ,2,3,4\)) are defined in (14)–(17), respectively.
Proof
Let \(k=4\) and using (21) in (24), we have
Similar steps are followed to get (23). □
Corollary 2.1
Suppose \(f\in C^{3}[0, 2\hat{\gamma}]\), \(f: \mathbb{I}_{2}= [0, 2\hat{\gamma}] \rightarrow \mathbb{R}\), \(x_{1}, \ldots , x_{\eta }\in (0, \hat{\gamma})\), and \((p_{1}, \ldots , p_{\eta}) \in \mathbb{R}^{\eta}\) with \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\). Let \(x_{\sigma}\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}(2\hat{\gamma}-x_{\sigma})\), and \(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \in \mathfrak{g}\). Then for \(k=1,4\)
and \(k=2,3\)
where \(\mathfrak{D}(f(\cdot ))\) and \(\mathfrak{D}(G_{k}(\cdot , \vartheta ))\) are given in (24) and (25), respectively.
Proof
Taking \(\mathbb{I}_{2}=[0, 2\hat{\gamma}]\), \(y_{\tau}=2\hat{\gamma}-x_{\sigma}\), \(x_{1}, \ldots , x_{\eta }\in (0, \hat{\gamma})\), \(p_{\sigma}=q_{\tau}\), and \(\eta =\varrho \) in Theorem 2.2, after simplification we get (26) and (27). □
To avoid repeating assumptions, we introduce the following class:
ℜ: Let \(f: \mathfrak{g}= [\hat{e}_{1},\hat{e}_{2}] \rightarrow \mathbb{R}\) be a 3-convex function. Assume \((p_{1},\ldots , p_{\eta}) \in \mathbb{R}^{\eta}\) and \((q_{1}, \ldots , q_{\varrho})\in \mathbb{R}^{\varrho}\) are such that \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\), \(\sum_{\tau =1}^{\varrho}q_{\tau}=1\), and \(x_{\sigma}\), \(y_{\tau}\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}\), \(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \in \mathfrak{g}\).
Theorem 2.3
Assume ℜ. If
and
then the following statements are equivalent:
For \(f \in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\)
For each \(\vartheta \in \mathfrak{g}\)
where \(G_{k}(\cdot , \vartheta )\) are given by (14)–(17) for \(k=1,\dots ,4\), respectively.
Proof
(30) ⇒ (31): Assume (30) holds. Since the function \(G_{k}(\cdot , \vartheta )\) (\(\vartheta \in \mathfrak{g}\)) is continuous and 3-convex, (30) holds for it as well, i.e., (31) is true.
(31) ⇒ (30): Let f be a 3-convex function with \(f \in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\); then \(f'''\) exists. If (31) holds, f can be written in the form (18). After simple calculations, we have
The 3-convexity of f implies \(f^{(3)}(\vartheta ) \geq 0\) for all \(\vartheta \in \mathfrak{g}\). Hence, if (31) holds for each \(\vartheta \in \mathfrak{g}\), then (30) is valid for every 3-convex function f defined on \(\mathfrak{g}\) with \(f \in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\). □
Remark 2.2
If the expression
and the quantity \((2f^{(2)}(\hat{e}_{1})-f^{(2)}(\hat{e}_{2}))\) have opposite signs in (28) and (29), respectively, then inequalities (30) and (31) are reversed.
The next results generalize the Bullen-type inequality (for real weights) presented in [17] (see also [21]).
Theorem 2.4
Assume ℜ with
and
If (28) and (29) hold, then (30) and (31) are equivalent.
Proof
Taking \(x_{\sigma}\) and \(y_{\tau}\) such that (32) and (33) hold in Theorem 2.3 gives the desired result. □
Remark 2.3
If \(x_{\sigma}\), \(y_{\tau}\) satisfy (32) and (33) and \(p_{\sigma}=q_{\tau}\) are positive, then for \(\varrho =\eta \) inequality (30) reduces to the Bullen inequality [21, p. 32, Theorem 2].
Theorem 2.5
Assume ℜ. Also, let \(x_{1},\ldots ,x_{\eta}\) and \(y_{1},\ldots ,y_{\varrho}\) be such that \(x_{\sigma}+y_{\tau}=2\breve{c}\) (\(\sigma =1,\ldots ,\eta \)), \(x_{\sigma}+x_{\eta -\sigma +1} \leq 2\breve{c}\), and \(\frac{p_{\sigma}x_{\sigma}+p_{\eta -\sigma +1}x_{\eta -\sigma +1}}{p_{\sigma}+p_{\eta -\sigma +1}} \leq \breve{c}\). If (28) and (29) hold, then (30) and (31) are equivalent.
Proof
Applying Theorem 2.3 under the conditions of the statement, we get the desired result. □
Remark 2.4
If we put \(\varrho =\eta \), take \(p_{\sigma}=q_{\tau}\) positive, and assume \(x_{\sigma}+y_{\tau}=2\breve{c}\), \(x_{\sigma}+x_{\eta -\sigma +1} \leq 2\breve{c}\), and \(\frac{p_{\sigma}x_{\sigma}+p_{\eta -\sigma +1}x_{\eta -\sigma +1}}{p_{\sigma}+p_{\eta -\sigma +1}} \leq \breve{c}\) in Theorem 2.3, then (30) becomes the extended form of the Bullen inequality presented in [21, p. 32, Theorem 4].
Next, we recover Mercer’s condition (5) when \(\sigma =\tau \) and \(\varrho =\eta \).
Theorem 2.6
Suppose \(f: \mathfrak{g}= [\hat{e}_{1}, \hat{e}_{2}] \rightarrow \mathbb{R}\) with \(f \in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\), and let \(p_{\sigma}\), \(q_{\sigma}\) be positive with \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\) and \(\sum_{\sigma =1}^{\eta}q_{\sigma}=1\). Let \(x_{\sigma}\), \(y_{\sigma}\) satisfy (32) for \(\eta =\varrho \) and
If (28) and (29) hold, then (30) and (31) are equivalent.
Proof
For positive weights, statements (30) and (31) are equivalent once (34) and (32) are used in Theorem 2.3. □
The following results depend on the generalized form of the Levinson-type inequality given in [15] (see also [21]).
Theorem 2.7
Let \(f: \mathbb{I}_{2}= [0, 2\hat{\gamma}] \rightarrow \mathbb{R}\) be a 3-convex function with \(f \in C^{3}[0, 2\hat{\gamma}]\), let \(x_{1}, \ldots , x_{\eta}\in (0, \hat{\gamma})\), and let \((p_{1}, \ldots , p_{\eta}) \in \mathbb{R}^{\eta}\) with \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\). Also, assume \(x_{\sigma}\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}(2\hat{\gamma}-x_{\sigma})\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \in \mathbb{I}_{2}\). Then the following are equivalent:
and
where \(G_{k}(\cdot , \vartheta )\) (\(k=1,\ldots ,4\)) are given in (14)–(17).
Proof
Taking \(\mathbb{I}_{2}=[0, 2\hat{\gamma}]\), \(x_{1}, \ldots , x_{\eta} \in (0, \hat{\gamma})\), \(p_{\sigma}=q_{\tau}\), \(\varrho =\eta \), and \(y_{\tau}=2\hat{\gamma}-x_{\sigma}\) in Theorem 2.3, with \(0 \leq \hat{e}_{1} < \hat{e}_{2} \leq 2\hat{\gamma}\), we obtain the required result. □
Remark 2.5
If the \(p_{\sigma}\) are positive in Theorem 2.7, then inequality (35) becomes the Levinson inequality given in [21, p. 32, Theorem 1].
3 Applications to information theory
Levinson-type inequalities play an essential role in generalizing inequalities for the divergence between probability distributions. In [6, 11, 13], Adeel et al. applied their findings to information theory by using two 3-convex Green functions. In this section, the key findings of Sect. 2 are linked to information theory via the f-divergence, the Rényi entropy and divergence, the Shannon entropy, and the Zipf–Mandelbrot law, using the newly defined 3-convex Green functions (14)–(17).
3.1 Csiszár divergence
Csiszár [22, 23] presented the following definition.
Definition 3.1
Let \(f: \mathbb{R}_{+}\to \mathbb{R}_{+}\) be a convex function and choose \(\tilde{\mathbf{v}}, \tilde{\mathbf{l}} \in \mathbb{R}_{+}^{\eta}\) such that \(\sum_{\sigma =1}^{\eta}v_{\sigma}=1\) and \(\sum_{\sigma =1}^{\eta}l_{\sigma}=1\). Then the Csiszár f-divergence is defined as follows:
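In symbols, \(D_{f}(\tilde{\mathbf{v}}, \tilde{\mathbf{l}})=\sum_{\sigma =1}^{\eta}l_{\sigma}f (\frac{v_{\sigma}}{l_{\sigma}} )\). A minimal numerical sketch follows; the helper name is ours, not from the paper:

```python
import math

def csiszar_f_divergence(f, v, l):
    """Csiszar f-divergence: sum_i l_i * f(v_i / l_i), where f is convex
    on (0, inf) and v, l are positive probability vectors of equal length."""
    assert len(v) == len(l)
    return sum(li * f(vi / li) for vi, li in zip(v, l))

# The choice f(t) = t*log(t) recovers the Kullback-Leibler divergence KL(v || l).
kl = csiszar_f_divergence(lambda t: t * math.log(t), [0.2, 0.8], [0.5, 0.5])
```

Since \(f(1)=0\) for this choice of f, the divergence of a distribution from itself is zero, as expected.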
In [24], Horváth et al. gave the following generalization of (37):
Definition 3.2
Let \(f: \mathbb{I} \rightarrow \mathbb{R}\) with \(\mathbb{I} \subset \mathbb{R}\), and choose \(\tilde{\mathbf{v}}=(v_{1}, \ldots , v_{\eta})\in \mathbb{R}^{\eta}\) and \(\tilde{\mathbf{l}}=(l_{1}, \ldots , l_{\eta})\in (0, \infty )^{\eta}\) such that
Then
Throughout the paper, we assume that:
\(\mathbb{E}\): Suppose \(\tilde{\mathbf{v}}= (v_{1}, \ldots , v_{\eta} )\) and \(\tilde{\mathbf{l}}= (l_{1}, \ldots , l_{\eta} )\) are in \((0, \infty )^{\eta}\), and \(\tilde{\mathbf{s}}= (s_{1}, \ldots , s_{\varrho} )\) and \(\tilde{\mathbf{u}}= (u_{1}, \ldots , u_{\varrho} )\) are in \((0, \infty )^{\varrho}\).
And
Theorem 3.1
Let the hypothesis \(\mathbb{E}\) hold and
and
If
and
then the following are equivalent:
(i) For each 3-convex and continuous function \(f: \mathbb{I} \rightarrow \mathbb{R}\),
(ii)
where
Proof
Using \(p_{\sigma} = \frac{l_{\sigma}}{\sum_{\sigma =1}^{\eta}l_{\sigma}}\), \(x_{\sigma} = \frac{v_{\sigma}}{l_{\sigma}}\), \(q_{\tau} = \frac{u_{\tau}}{\sum_{\tau =1}^{\varrho}u_{\tau}}\) and \(y_{\tau} = \frac{s_{\tau}}{u_{\tau}}\) in Theorem 2.3, we get the required results. □
Remark 3.1
(i) In Remark 2.1, put \(\hat{e}_{2}=\hat{e}_{1}\) and set the constant of integration equal to zero in the first part of the piecewise function \(G_{1}\); then the results of Theorem 3.1 coincide with [6, p. 12, Theorem 6].
(ii) Similarly, in Remark 2.1, set the constant of integration equal to zero in the second part of the piecewise function \(G_{2}\), and replace \(\hat{e}_{2}\) with \(\hat{e}_{1}\) and φ̂ with ϑ; then the results of Theorem 3.1 coincide with [6, p. 12, Theorem 6].
3.2 Shannon entropy
Definition 3.3
(See [25]) For a positive probability distribution \(\tilde{\mathbf{l}}=(l_{1}, \ldots , l_{\eta})\), the Shannon entropy is given by
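That is, \(\mathbb{S}=-\sum_{\sigma =1}^{\eta}l_{\sigma}\log (l_{\sigma})\). A small illustrative sketch (our helper, not from the paper):

```python
import math

def shannon_entropy(l, base=math.e):
    """Shannon entropy S(l) = -sum_i l_i * log(l_i) of a positive
    probability vector l, with a configurable log base."""
    return -sum(p * math.log(p, base) for p in l)

# The uniform distribution on n outcomes maximizes entropy: S = log(n).
h_uniform = shannon_entropy([0.25] * 4, base=2)
```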
Corollary 3.1
Let the hypothesis \(\mathbb{E}\) hold. If
and
then
where
and \(\mathbb{S}\) is defined in (44) and
If the base of the log is less than 1, then (46) and (47) are reversed.
Proof
If the base of the log is greater than 1, then the function \(x \mapsto -x\log (x)\) is 3-convex. Hence, substituting \(f(x):= -x\log (x)\) in (39) and (41), we obtain the required results by Remark 2.2. □
Remark 3.2
(i) In Remark 2.1, put \(\hat{e}_{2}=\hat{e}_{1}\) and set the constant of integration equal to zero in the first part of the piecewise function \(G_{1}\); then the results of Corollary 3.1 coincide with [6, p. 13, Corollary 6].
(ii) In Remark 2.1, set the constant of integration equal to zero in the second part of the piecewise function \(G_{2}\), and replace \(\hat{e}_{2}\) with \(\hat{e}_{1}\) and φ̂ with ϑ; then the results of Corollary 3.1 coincide with [6, p. 13, Corollary 6].
3.3 Rényi divergence and entropy
In [26], the Rényi divergence and entropy are defined as follows:
Definition 3.4
Suppose \(\tilde{\mathbf{v}} \in \mathbb{R}_{+}^{\eta}\) and \(\tilde{\mathbf{q}} \in \mathbb{R}_{+}^{\varrho}\) are such that \(\sum_{\sigma =1}^{\eta}v_{\sigma}=1\) and \(\sum_{\tau =1}^{\varrho}q_{\tau}=1\), and let \(\Delta \geq 0\), \(\Delta \neq 1\).
The Rényi divergence of order Δ is
and the Rényi entropy of order Δ is given by
For non-negative probability distributions, these definitions are also valid.
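A small numerical sketch of the two quantities, in their standard forms \(H_{\Delta}(\mathbf{p})=\frac{1}{1-\Delta}\log \sum_{\sigma}p_{\sigma}^{\Delta}\) and \(D_{\Delta}(\mathbf{v}\|\mathbf{q})=\frac{1}{\Delta -1}\log \sum_{\sigma}v_{\sigma}^{\Delta}q_{\sigma}^{1-\Delta}\) (helper names are ours):

```python
import math

def renyi_entropy(p, delta):
    """Renyi entropy of order delta (delta >= 0, delta != 1):
    H_delta(p) = log(sum_i p_i**delta) / (1 - delta)."""
    return math.log(sum(pi ** delta for pi in p)) / (1.0 - delta)

def renyi_divergence(v, q, delta):
    """Renyi divergence of order delta:
    D_delta(v || q) = log(sum_i v_i**delta * q_i**(1-delta)) / (delta - 1)."""
    s = sum(vi ** delta * qi ** (1.0 - delta) for vi, qi in zip(v, q))
    return math.log(s) / (delta - 1.0)
```

As Δ → 1, the Rényi entropy tends to the Shannon entropy and the Rényi divergence to the Kullback–Leibler divergence.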
Theorem 3.2
Let the hypothesis \(\mathbb{E}\) hold and
If either \(\Delta > 1\) and the base of the log is greater than 1, or \(\Delta \in [0, 1)\) and the base of the log is less than 1, or if
and
then
If either the base of the log is less than 1 and \(\Delta > 1\), or \(\Delta \in [0, 1)\) and the base of the log is greater than 1, then (52) and (53) are reversed.
Proof
We give the proof for \(\Delta \in [0, 1)\) with the base of the log greater than 1; the other cases are proved similarly.
Take \(\mathbb{I} = (0, \infty )\) and \(f(x)=\log (x)\); then \(f^{(3)}(x)>0\), so f is 3-convex. Thus, putting \(f(x)=\log (x)\) and making the substitutions
and
in the reverse of (30) (by Remark 2.2), we obtain
Dividing (54) by \((\Delta -1)\) and using
we get (53). □
Remark 3.3
Using all the conditions of Remark 3.1(i), (ii), the inequality (53) coincides with results given in [6, p. 14, inequality (48)].
Corollary 3.2
Let the hypothesis \(\mathbb{E}\) hold with \(\sum_{\sigma =1}^{\eta}v_{\sigma}=1\) and \(\sum_{\tau =1}^{\varrho}s_{\tau}=1\). Also, let
and
If Δ and the base of the log are greater than 1, then
If the base of the log is less than 1, then (56) and (57) are reversed.
Proof
Suppose \(\tilde{\mathbf{l}}= (\frac{1}{\eta}, \ldots , \frac{1}{\eta} )\) and \(\tilde{\mathbf{u}}= (\frac{1}{\varrho}, \ldots , \frac{1}{\varrho} )\). Then from (49), we have
and
This implies
and
Using \(\tilde{\mathbf{l}}= (\frac{1}{\eta}, \ldots , \frac{1}{\eta} )\), \(\tilde{\mathbf{u}}= (\frac{1}{\varrho}, \ldots , \frac{1}{\varrho} )\), (58), and (59) in Theorem 3.2, we have
After simple calculations, we obtain (57). □
Remark 3.4
If all the assumptions of part (i) and (ii) of Remark 3.1 are applied to Corollary 3.2, then the results of Corollary 3.2 meet with [6, p. 16, Corollary 7].
3.4 Zipf–Mandelbrot law
The Zipf law is given as follows (see [27]).
Definition 3.5
A discrete probability distribution with three parameters \(\phi \in [0, \infty )\), \(\mathcal{M} \in \{1, 2, \ldots \}\), and \(w> 0\) is called the Zipf–Mandelbrot law and is given by
where
For each value of \(\mathcal{M}\), if the entire mass of the law is considered, then the density function of the Zipf–Mandelbrot law takes the following form
for \(\phi \geq 0\), \(w>1\), \(\vartheta \in \{1, \ldots , \mathcal{M}\}\), where
If \(\phi = 0\), the Zipf–Mandelbrot law reduces to the Zipf law.
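A minimal sketch of the law in its standard form, \(f(\sigma ;\mathcal{M},\phi ,w)=\frac{1}{(\sigma +\phi )^{w}\,\mathcal{K}_{\mathcal{M},\phi ,w}}\) with \(\mathcal{K}_{\mathcal{M},\phi ,w}=\sum_{j=1}^{\mathcal{M}}(j+\phi )^{-w}\) (the function name is ours):

```python
def zipf_mandelbrot_pmf(M, phi, w):
    """Zipf-Mandelbrot probabilities f(i) = 1 / ((i + phi)**w * K)
    for i = 1..M, where K = sum_{j=1..M} (j + phi)**(-w) normalizes
    the mass to 1. Setting phi = 0 recovers the (finite) Zipf law."""
    K = sum((j + phi) ** (-w) for j in range(1, M + 1))
    return [1.0 / ((i + phi) ** w * K) for i in range(1, M + 1)]
```

With \(\phi =0\) and \(w=1\), successive probabilities fall off as \(1/\sigma \), so \(f(1)/f(2)=2\).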
Theorem 3.3
Assume that \(\tilde{\mathbf{s}}\) and \(\tilde{\mathbf{v}}\) are Zipf–Mandelbrot laws, and let (55) and (56) hold for \(v_{\sigma}= \frac{1}{(\sigma +l)^{\sigma}{\mathcal{K}_{\mathcal{M}, l, \sigma}}}\) and \(s_{\tau}= \frac{1}{(\tau +s)^{\tau}{\mathcal{K}_{\mathcal{M}, s, \tau}}}\).
If log has base greater than 1, then
If the base of the log is less than 1, then (56) and (61) hold in the opposite direction.
Proof
As in Corollary 3.2, the proof uses Definition 3.5 and the hypotheses of the statement to obtain the desired result. □
Remark 3.5
Under the same conditions and methodology as in parts (i) and (ii) of Remark 3.1, Inequality (61) becomes [6, p. 17, inequality (56)].
4 Conclusion
Four newly defined 3-convex Green functions are utilized to derive generalized Levinson-type inequalities for the class of 3-convex functions. Applications to information theory are given, together with bounds for the resulting entropies and divergences. The newly established Green functions generalize the Green functions given in [20], as they are 3-convex. Other interpolations, e.g., the Lidstone interpolation, the Hermite interpolating polynomial, and the Montgomery identity, could also be used to explore related results.
Availability of data and materials
Data sharing is not applicable to this paper as no datasets were generated or analyzed during the current study.
References
Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(3), 379–423 (1948)
Pardo, L.: Statistical Inference Based on Divergence Measures. Chapman & Hall, London (2018)
Sen, A., Sen, M.A., Foster, J.E., Amartya, S., Foster, J.E.: On Economic Inequality. Oxford University Press, London (1997)
Theil, H.: Economics and Information Theory. North-Holland, Amsterdam (1967)
Chow, C.K.C.N., Liu, C.: Approximating discrete probability distributions with dependence trees. IEEE Trans. Inf. Theory 14(3), 462–467 (1968)
Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Generalization of the Levinson inequality with applications to information theory. J. Inequal. Appl. 2019(1), 230 (2019)
Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Levinson type inequalities for higher order convex functions via Abel-Gontscharoff interpolation. Adv. Differ. Equ. 2019(1), 430 (2019)
Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Estimation of f-divergence and Shannon entropy by Levinson type inequalities via new Green’s functions and Lidstone polynomial. Adv. Differ. Equ. 2020(1), 27 (2020)
Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Estimation of f-divergence and Shannon entropy by using Levinson type inequalities for higher order convex functions via Hermite interpolating polynomial. J. Inequal. Appl. 2020(1), 137 (2020)
Adeel, M., Khan, K.A., Pečarić, Đ., Pečarić, J.: Estimation of f-divergence and Shannon entropy by Bullen type inequalities via Fink’s identity. Filomat 36(2), 527–538 (2022)
Adeel, M., Khan, K.A., Pečarić, Đ., Pečarić, J.: Entropy results for Levinson-type inequalities via Green functions and Hermite interpolating polynomial. Aequ. Math. 96(1), 1–16 (2022)
Adeel, M., Khan, K.A., Pečarić, Đ., Pečarić, J.: Estimation of f-divergence and Shannon entropy by Levinson type inequalities via Lidstone interpolating polynomial. Trans. A. Razmadze Math. Inst. 175(1), 1–11 (2021)
Adeel, M., Khan, K.A., Pečarić, Đ., Pečarić, J.: Levinson-type inequalities via new Green functions and Montgomery identity. Open Math. 18(1), 632–652 (2020)
Pečarić, J., Proschan, F., Tong, Y.L.: Convex Functions, Partial Orderings and Statistical Applications. Academic Press, New York (1992)
Levinson, N.: Generalization of an inequality of Ky Fan. J. Math. Anal. Appl. 6, 133–134 (1969)
Popoviciu, T.: Sur une inegalite de N. Levinson. Mathematica 6, 301–306 (1969)
Bullen, P.S.: An inequality of N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz., 109–112 (1973)
Pečarić, J.: On an inequality on N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz., 71–74 (1980)
Mercer, A.M.: 94.33 short proofs of Jensen’s and Levinson’s inequalities. Math. Gaz. 94(531), 492–495 (2010)
Mehmood, N., Agarwal, R.P., Butt, S.I., Pečarić, J.: New generalizations of Popoviciu type inequalities via new Green’s functions and Montgomery identity. J. Inequal. Appl., 1–21 (2017)
Mitrinović, D.S., Pečarić, J., Fink, A.M.: Classical and New Inequalities in Analysis, vol. 61. Kluwer Academic, Dordrecht (1993)
Csiszár, I.: Information measures: a critical survey. In: Trans. 7th Prague Conf. on Info. Th., Statist. Decis. Funct., Random Process and 8th European Meeting of Statist., B, pp. 73–86. Academia, Prague (1978)
Csiszár, I.: Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hung. 2, 299–318 (1967)
Horváth, L., Pečarić, Đ., Pečarić, J.: Estimations of f- and Rényi divergences by using a cyclic refinement of the Jensen’s inequality. Bull. Malays. Math. Sci. Soc. 42, 933–946 (2019)
Matic, M., Pearce, C.E.M., Pečarić, J.: Shannon’s and related inequalities in information theory. In: Rassias, T.M. (ed.) Survey on classical inequalities, pp. 127–164. Kluwer Academic, Dordrecht (2000)
Rényi, A.: On measures of information and entropy. In: Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability, pp. 547–561 (1960)
Zipf, G.K.: Human Behaviour and the Principle of Least-Effort. Addison-Wesley, Reading (1949)
Acknowledgements
The authors wish to thank the editor for the possibility to publish.
Funding
There is no funding for this work.
Author information
Contributions
AR initiated the work and made calculations. KAK supervised and validated the draft. JP dealt with the formal analysis and investigation. GP included the applications to information theory. All the authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Rasheed, A., Khan, K.A., Pečarić, J. et al. Generalizations of Levinson type inequalities via new Green functions with applications to information theory. J Inequal Appl 2023, 124 (2023). https://doi.org/10.1186/s13660-023-03040-x