A monotonic refinement of Levinson’s inequality
- Julije Jakšetić^{1},
- Josip Pečarić^{2} and
- Marjan Praljak^{3}
https://doi.org/10.1186/s13660-015-0682-8
© Jakšetić et al.; licensee Springer. 2015
Received: 2 November 2014
Accepted: 30 April 2015
Published: 16 May 2015
Abstract
In this paper we give a monotonic refinement of the probabilistic version of Levinson’s inequality based on the monotonic refinement of Jensen’s inequality obtained by Cho et al. (Panam. Math. J. 12:43-50, 2002).
Keywords
Levinson’s inequality · Jensen’s inequality · 3-convexity at a point
MSC
26D15
1 Introduction
Levinson’s inequality and its converse are summarized in the following result taken from Bullen [1].
Theorem 1.1
(b) If for a continuous function f inequality (3) holds for all n, all \(c\in[a,b]\), all 2n distinct points \(x_{i}, y_{i} \in [a,b]\) satisfying (1) and (2) and all weights \(p_{i}>0\) such that \(\sum_{i=1}^{n}p_{i}=1\), then f is 3-convex.
Levinson [2] originally proved the inequality for functions \(f: (0,2c) \to\mathbb{R}\) such that \(f''' \geq0\). Popoviciu [3] showed that the assumption of nonnegativity of the third derivative can be weakened to 3-convexity of f. Bullen [1] gave another proof of the inequality (rescaled to a general interval \([a,b]\)) as well as its converse given in part (b) of Theorem 1.1. Pečarić and Raşa [4] extended the inequality by using the method of index set functions; in the process they weakened assumption (1) and obtained a monotonic refinement of the inequality.
The above version of the inequality assumes that the points \(x_{i}\) and \(y_{i}\) are symmetrically distributed around the point c. Mercer [5] made a significant improvement by replacing this symmetry condition with the weaker requirement that the variances of the two sequences are equal.
Theorem 1.2
Witkowski [6] extended Mercer’s result to 3-convex functions and a more general probabilistic setting. Baloch et al. [7] showed that inequality (3) holds for a larger class of functions they introduced and called 3-convex functions at a point.
Definition 1.3
Let I be an interval in \(\mathbb{R}\) and \(c\in I\). A function \(f: I\to\mathbb{R} \) is said to be 3-convex at point c if there exists a constant A such that the function \(F(s) = f(s) - \frac{A}{2} s^{2} \) is concave on \(I\cap(-\infty,c]\) and convex on \(I \cap[c ,\infty)\).
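As a concrete illustration of Definition 1.3 (not part of the original text), the function \(f(s) = s^{3}\) is 3-convex at \(c = 0\) with \(A = 0\): then \(F(s) = s^{3}\) itself, which is concave on \([-1,0]\) and convex on \([0,1]\). A minimal numerical sketch, checking concavity/convexity through discrete second differences:

```python
import numpy as np

# Illustration: f(s) = s**3 is 3-convex at c = 0 with the choice A = 0,
# since F(s) = f(s) - (A/2)s**2 = s**3 is concave left of 0, convex right of 0.
f = lambda s: s**3
A = 0.0
F = lambda s: f(s) - A / 2 * s**2

def second_differences(g, grid):
    """Discrete analogue of g'' on a uniform grid:
    nonpositive values indicate concavity, nonnegative values convexity."""
    vals = g(grid)
    return vals[:-2] - 2 * vals[1:-1] + vals[2:]

left = np.linspace(-1.0, 0.0, 101)   # I ∩ (-∞, c] restricted to [-1, 0]
right = np.linspace(0.0, 1.0, 101)   # I ∩ [c, ∞) restricted to [0, 1]
assert np.all(second_differences(F, left) <= 1e-12)    # F concave on [-1, 0]
assert np.all(second_differences(F, right) >= -1e-12)  # F convex on [0, 1]
```

For a cubic the second difference at \(s\) with step \(h\) equals \(6sh^{2}\) exactly, so the sign change at \(c=0\) is clean.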
Baloch et al. [7] also proved the converse of the inequality, i.e., 3-convex functions at a point form the largest class of functions for which Levinson’s inequality holds under the equal variances assumption. The probabilistic version of Levinson’s inequality and its converse are summarized in the following result taken from Pečarić et al. [8].
Theorem 1.4
(b) Let \(f:[a,b] \to\mathbb{R}\) be continuous and \(c\in(a,b)\) fixed. Suppose that inequality (4) holds for all discrete random variables X and Y taking two values \(x_{1}, x_{2} \in[a,c]\) and \(y_{1}, y_{2} \in[c,b]\), respectively, each with probability \(\frac{1}{2}\) and such that \(\operatorname{Var}(X) = \operatorname{Var}(Y)\) (i.e. \(|x_{2}- x_{1}| = |y_{2} - y_{1}|\)). Then f is 3-convex at c.
Remark 1.5
Results in [8] were stated for f defined on an arbitrary interval I. In that case, the finiteness of \(\operatorname{Var}(X) = \operatorname{Var}(Y)\), \(\mathbb{E}[f(X)]\) and \(\mathbb{E}[f(Y)]\) needs to be assumed. For simplicity, in this paper we will work with the closed interval \([a,b]\) since in this case the function f and all random variables are bounded and the aforementioned finiteness assumptions are satisfied.
If X and Y are discrete random variables taking values \(x_{i}\) and \(y_{i}\), respectively, with probabilities \(p_{i}\), then Theorem 1.4(a) gives Theorem 1.2. In [8] it was proven that a function defined on an interval is 3-convex if and only if it is 3-convex at every point of the interval. Therefore, the converse stated in Theorem 1.4(b) strengthens the converse stated in Theorem 1.1(b).
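The two-point discrete case appearing in Theorem 1.4(b) can be sanity-checked numerically. The sketch below is an illustration under stated assumptions, not part of the paper: it takes \(f(s) = s^{3}\) (3-convex at \(c = 0\)), the interval \([-1,1]\), and random equally weighted two-point variables X in \([-1,0]\) and Y in \([0,1]\) with equal spreads; the helper name `levinson_gap` is hypothetical.

```python
import random

f = lambda s: s**3  # 3-convex at c = 0

def levinson_gap(v1, v2):
    """E[f(V)] - f(E[V]) for a two-point variable with weights 1/2, 1/2."""
    mean = (v1 + v2) / 2
    return (f(v1) + f(v2)) / 2 - f(mean)

# Inequality (4) in this setting: E[f(X)] - f(E[X]) <= E[f(Y)] - f(E[Y])
# whenever X lives in [-1, 0], Y in [0, 1] and |x2 - x1| = |y2 - y1|.
rng = random.Random(0)
for _ in range(1000):
    d = rng.uniform(0.0, 1.0)          # common spread, so Var(X) = Var(Y)
    x1 = rng.uniform(-1.0, -d)         # then x2 = x1 + d stays in [-1, 0]
    y1 = rng.uniform(0.0, 1.0 - d)     # then y2 = y1 + d stays in [0, 1]
    assert levinson_gap(x1, x1 + d) <= levinson_gap(y1, y1 + d) + 1e-12
```

For the cubic the gap evaluates exactly to \(3m h^{2}\) with \(m\) the midpoint and \(h = d/2\), so it is nonpositive for X (midpoint \(\leq 0\)) and nonnegative for Y, confirming the inequality.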
The following is a result from [9].
Theorem 1.6
- (a)
the mappings H and V are convex on \([0,1]\),
- (b)
the mapping H is nondecreasing on \([0,1]\), while the mapping V is nonincreasing on \([0,\frac{1}{2} ]\) and nondecreasing on \([\frac{1}{2} ,1]\),
- (c)the following equalities hold:$$\begin{aligned} & \inf_{t\in[0,1]} H(t) = H(0) = f\bigl(\mathbb{E}[x]\bigr), \\ & \sup_{t\in[0,1]} H(t) = H(1) = \mathbb{E}\bigl[f(x)\bigr], \\ & \inf_{t\in[0,1]} V(t) = V\biggl(\frac{1}{2} \biggr) = \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int_{\Omega} f \biggl(\frac{ x(s) + x(u)}{2} \biggr)\,d\mu(s)\,d\mu(u), \\ & \sup_{t\in[0,1]} V(t) = V(0)=V(1) = \mathbb{E}\bigl[f(x)\bigr], \end{aligned}$$
- (d)the following inequality holds for all \(t\in[0,1]\):$$V(t) \geq\max\bigl\{ H(t), H(1-t)\bigr\} . $$
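The statement of Theorem 1.6 defining the mappings H and V is not reproduced in this excerpt; the forms used below are an assumption, inferred from [9] so as to match the endpoint values in part (c) (for a uniform measure on n points and a convex f). With that caveat, properties (b)-(d) can be checked numerically:

```python
import numpy as np

# Assumed (inferred) definitions, consistent with part (c):
#   H(t) = (1/n)   * sum_s     f(t*x[s] + (1-t)*mean(x))
#   V(t) = (1/n^2) * sum_{s,u} f(t*x[s] + (1-t)*x[u])
f = np.exp                            # a convex function
x = np.array([0.1, 0.5, 0.9, 1.7])    # hypothetical sample values of x

def H(t):
    return f(t * x + (1 - t) * x.mean()).mean()

def V(t):
    return f(t * x[:, None] + (1 - t) * x[None, :]).mean()

ts = np.linspace(0.0, 1.0, 51)        # ts[25] = 1/2
Hs = np.array([H(t) for t in ts])
Vs = np.array([V(t) for t in ts])
assert np.all(np.diff(Hs) >= -1e-12)                   # (b): H nondecreasing
assert np.all(np.diff(Vs[:26]) <= 1e-12)               # (b): V nonincreasing on [0, 1/2]
assert np.all(np.diff(Vs[25:]) >= -1e-12)              # (b): V nondecreasing on [1/2, 1]
assert np.all(Vs >= np.maximum(Hs, Hs[::-1]) - 1e-12)  # (d): V(t) >= max{H(t), H(1-t)}
assert abs(Hs[0] - f(x.mean())) < 1e-12                # (c): H(0) = f(E[x])
assert abs(Hs[-1] - f(x).mean()) < 1e-12               # (c): H(1) = E[f(x)]
```

Since the grid `ts` is symmetric about \(\frac{1}{2}\), `Hs[::-1]` plays the role of \(H(1-t)\) in the check of part (d).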
Remark 1.7
Theorem 1.6 was proven in [9] for the case when Ω is an interval in \(\mathbb{R}\) and μ is a measure with density, i.e., \(d\mu(s) = p(s)\,ds\). However, it is clear from the proofs given there that the statements also hold in the more general setting considered here.
In this paper we will construct the corresponding two mappings in connection with Levinson’s inequality and show their monotonicity and convexity properties.
2 Main results
The following is our main result.
Theorem 2.1
- (a)
the mappings H and V are convex on \([0,1]\),
- (b)
the mapping H is nondecreasing on \([0,1]\), while the mapping V is nonincreasing on \([0,\frac{1}{2} ]\) and nondecreasing on \([\frac{1}{2} ,1]\),
- (c)the following equalities hold:$$\begin{aligned}& \inf_{t\in[0,1]} H(t) = H(0) = f\bigl(\mathbb{E}[y]\bigr) - f\bigl(\mathbb{E}[x] \bigr), \\& \sup_{t\in[0,1]} H(t) = H(1) = \mathbb{E}\bigl[f(y)\bigr] - \mathbb{E}\bigl[f(x) \bigr], \\& \begin{aligned}[b] \inf_{t\in[0,1]} V(t) ={}& V\biggl( \frac{1}{2} \biggr) = \frac{1}{\mu(\Omega)^{2}} \int_{\Omega} \int _{\Omega} \biggl[ f \biggl(\frac{ y(s) + y(u)}{2} \biggr) \\ & {} - f \biggl(\frac{ x(s) + x(u)}{2} \biggr) \biggr]\,d\mu(s)\,d\mu(u), \end{aligned} \\& \sup_{t\in[0,1]} V(t) = V(0) = V(1) = \mathbb{E}\bigl[f(y)\bigr] - \mathbb{E}\bigl[f(x)\bigr], \end{aligned}$$
- (d)the following inequality holds for all \(t\in[0,1]\):$$V(t) \geq\max\bigl\{ H(t), H(1-t)\bigr\} . $$
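The full statement of Theorem 2.1 defining its mappings H and V is likewise not reproduced in this excerpt. Assuming they are the Levinson analogues of the Jensen mappings of Theorem 1.6 (differences of the corresponding y- and x-expressions, consistent with the endpoint values in part (c)), a hedged numerical check with \(f(s) = s^{3}\), which is 3-convex at \(c = 0\), and a uniform three-point measure:

```python
import numpy as np

f = lambda s: s**3                  # 3-convex at c = 0
x = np.array([-0.9, -0.5, -0.1])    # values in [a, c] = [-1, 0]
y = x + 1.0                         # values in [c, b] = [0, 1]; Var(x) = Var(y)

def single(v, t):   # (1/n)   * sum_s     f(t*v[s] + (1-t)*mean(v))
    return f(t * v + (1 - t) * v.mean()).mean()

def double(v, t):   # (1/n^2) * sum_{s,u} f(t*v[s] + (1-t)*v[u])
    return f(t * v[:, None] + (1 - t) * v[None, :]).mean()

def H(t): return single(y, t) - single(x, t)   # assumed Levinson analogue
def V(t): return double(y, t) - double(x, t)   # assumed Levinson analogue

ts = np.linspace(0.0, 1.0, 51)       # ts[25] = 1/2
Hs = np.array([H(t) for t in ts])
Vs = np.array([V(t) for t in ts])
assert np.all(np.diff(Hs) >= -1e-12)                      # (b): H nondecreasing
assert np.all(np.diff(Vs[:26]) <= 1e-12)                  # (b): V nonincreasing on [0, 1/2]
assert np.all(np.diff(Vs[25:]) >= -1e-12)                 # (b): V nondecreasing on [1/2, 1]
assert np.all(Vs >= np.maximum(Hs, Hs[::-1]) - 1e-12)     # (d)
assert abs(Hs[0] - (f(y.mean()) - f(x.mean()))) < 1e-12   # (c): H(0) = f(E[y]) - f(E[x])
```

In this example the mappings can be computed in closed form: \(H(t) = 3t^{2}\operatorname{Var}(y) + \frac{1}{4}\) and \(V(t) = 3(t^{2} + (1-t)^{2})\operatorname{Var}(y) + \frac{1}{4}\), so \(V(t) - H(t) = 3(1-t)^{2}\operatorname{Var}(y) \geq 0\), mirroring the term \(\frac{1}{2}B(1-t)^{2}\) in Remark 2.3.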
Proof
Let the constant A be as in Definition 1.3, i.e., such that the function \(F(s) = f(s) - \frac{A}{2} s^{2}\) is concave on \([a,c]\) and convex on \([c,b]\).
Finally, as for part (d), since V is symmetric around \(t=\frac{1}{2}\) and H is nondecreasing, it is enough to prove that \(V(t) \geq H(t)\) for \(t\in[\frac{1}{2},1]\). This inequality holds since \(V_{1} (t) \geq H_{1} (t)\) and \(V_{2} (t) \geq H_{2} (t)\) by Theorem 1.6(d), while \(V_{3}(t) = H_{3}(t)\) because \(\operatorname{Var}(x) =\operatorname{Var}(y)\). This finishes the proof. □
Remark 2.2
The convexity and monotonicity properties of the mapping H in the case when x and y are two discrete random variables taking values \(x_{i}\) and \(y_{i}\), respectively, with probabilities \(p_{i}\), \(i=1,\ldots,n\), were proven in [7].
Remark 2.3
Furthermore, \(V_{3} (t) - H_{3} (t) = \frac{1}{2} B\), so from \(V_{1}(t) \geq H_{1}(t)\), \(V_{2} (t) \geq H_{2} (t)\) and \(V_{3}(t)+V_{4}(t) - H_{3}(t) - H_{4}(t) = \frac{1}{2} B (1-t)^{2} \geq0 \) it follows that \(V(t) \geq H(t)\), i.e., part (d) also holds.
Declarations
Acknowledgements
This work has been fully supported by Croatian Science Foundation under the project 5435.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- Bullen, P: An inequality of N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 421(460), 109-112 (1973)
- Levinson, N: Generalization of an inequality of Ky Fan. J. Math. Anal. Appl. 8, 133-134 (1964)
- Popoviciu, T: Sur une inégalité de N. Levinson. Mathematica 6, 301-306 (1964)
- Pečarić, J, Raşa, I: On an index set function. Southeast Asian Bull. Math. 24, 431-434 (2000)
- Mercer, A: Short proof of Jensen’s and Levinson’s inequalities. Math. Gaz. 94, 492-495 (2010)
- Witkowski, A: On Levinson’s inequality. Ann. Univ. Paedagog. Crac. Stud. Math. 12, 59-67 (2013)
- Baloch, I, Pečarić, J, Praljak, M: Generalization of Levinson’s inequality. J. Math. Inequal. 9(2), 571-586 (2015)
- Pečarić, J, Praljak, M, Witkowski, A: Generalized Levinson’s inequality and exponential convexity. Opusc. Math. 35(3), 397-410 (2015)
- Cho, Y, Matić, M, Pečarić, J: Two mappings in connection to Jensen’s inequality. Panam. Math. J. 12, 43-50 (2002)