A necessary and sufficient condition for the inequality of generalized weighted means
- Mateu Sbert^{1, 2} and
- Jordi Poch^{2}
https://doi.org/10.1186/s13660-016-1233-7
© Sbert and Poch 2016
- Received: 21 June 2016
- Accepted: 8 November 2016
- Published: 22 November 2016
Abstract
We present in this paper a necessary and sufficient condition for the inequality between generalized weighted means which share the same sequence of numbers but differ in the weights. We first present a sufficient condition, and then obtain the more general, necessary and sufficient, condition. Our results were motivated by an inequality, involving harmonic means, found in the study of the multiple importance sampling Monte Carlo technique. We present new proofs of Chebyshev’s sum inequality, the Cauchy-Schwarz inequality, and the rearrangement inequality, and derive several interesting inequalities, some of them related to the Shannon entropy, to the Tsallis and Rényi entropies with different entropic indices, and to the logsumexp mean. Those inequalities are obtained as particular cases of our general inequality, and they show the potential and practical interest of our approach. We also show the relationship of our inequality with sequence majorization.
Keywords
- inequalities
- weighted arithmetic mean
- weighted harmonic mean
- weighted geometric mean
- weighted power mean
- weighted quasi-arithmetic mean
- weighted Kolmogorov mean
- generalized weighted mean
- Chebyshev’s sum inequality
- Cauchy-Schwarz inequality
- rearrangement inequality
- entropy
- Tsallis entropy
- Rényi entropy
- logsumexp mean
- majorization
1 Introduction
In our research on the multiple importance sampling Monte Carlo integration problem [1–3], we were confronted with several inequalities relating harmonic means, which were either described in the literature [4–7] or easy to prove from it. However, we were unable to find in the literature the following inequality (we give an interpretation of this inequality in an Appendix), which we conjectured to be true.
Conjecture 1
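As a numerical sketch of the kind of comparison the conjecture involves, a weighted harmonic mean of a fixed sequence grows when weight is shifted toward the larger elements. The sequence and weight vectors below are illustrative only; they are not the specific quantities of Conjecture 1.

```python
# Illustrative comparison of weighted harmonic means of one sequence under
# two weight vectors; the values are not those of Conjecture 1.

def weighted_harmonic_mean(b, alpha):
    """H(b; alpha) = 1 / sum_k (alpha_k / b_k), with sum_k alpha_k = 1."""
    assert abs(sum(alpha) - 1.0) < 1e-12
    return 1.0 / sum(a / x for a, x in zip(alpha, b))

b = [1.0, 2.0, 4.0]        # increasing positive sequence
alpha = [0.2, 0.3, 0.5]    # weights favouring the larger b_k
alpha_p = [0.5, 0.3, 0.2]  # weights favouring the smaller b_k

h_high = weighted_harmonic_mean(b, alpha)
h_low = weighted_harmonic_mean(b, alpha_p)
# Shifting weight toward the larger elements increases the mean:
print(h_high > h_low)  # True
```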
2 Results: inequalities for generalized weighted mean
2.1 A new inequality for generalized weighted mean
By taking the \(\{ \alpha_{k} \}\) to be any weights obeying equation (6), we generalize Conjecture 1 to the following theorem.
Theorem 1
Lemma 1
Proof
We prove now Theorem 1 under the \(\{b_{k}\}\)-increasing condition:
Proof
Corollary 1
A sufficient condition for strict inequality in equation (9) is that \(b_{1} < b_{M}\) (i.e., the non-trivial case) and \(\alpha_{M} \alpha'_{1}- \alpha'_{M} \alpha_{1} >0 \).
Proof
Observe that \(\alpha_{M} \alpha'_{1}- \alpha'_{M} \alpha_{1} = 0\) would imply, by equation (11), that \(\alpha_{1} = \alpha'_{1}\) and \(\alpha_{M} = \alpha'_{M}\), and, applying condition (7) with \(i=1\), we would obtain \(\alpha_{j} \ge \alpha'_{j}\) for all \(j \ge 1\). Using the fact that \(\sum_{j} \alpha_{j} = \sum_{j} \alpha'_{j}=1\), we immediately arrive at \(\alpha_{j} = \alpha'_{j}\) for all j, which is the trivial case. Excluding this trivial case, the inequality (13) is strict. □
Corollary 2
Proof
Observe that Conjecture 1 is the particular case of equation (21) with \(k_{1}=k_{2}=1\) and \(C_{2}=C_{1}\).
Corollary 3
Proof
Corollary 4
Proof
Let us see now that Theorem 1, for the weighted harmonic mean, also holds for the weighted arithmetic mean.
Theorem 2
Proof
Remark 1
Corollary 5
A sufficient condition for strict inequality in equation (33) is that \(b_{1} < b_{M}\) (i.e., the non-trivial case) and \(\alpha_{M} \alpha'_{1}- \alpha'_{M} \alpha_{1} >0\).
Proof
From equation (35) and equation (36), we apply Corollary 1 to the sequence \(\{b_{k}^{-1}\}\) with weights \(\{ \alpha_{k}' \}\) and \(\{ \alpha_{k} \}\). As this ordering is the reverse of that of \(\{b_{k}\}\), and the weights are switched, \(\alpha_{M}\) takes the role of \(\alpha'_{1}\) and vice versa, and similarly for \(\alpha'_{M}\) and \(\alpha_{1}\). □
Corollary 6
Chebyshev’s sum inequality
Proof
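A quick numerical check of Chebyshev’s sum inequality in its classical equal-weights form: for similarly ordered sequences \(\{x_k\}, \{y_k\}\) of length M, \(M \sum_k x_k y_k \ge (\sum_k x_k)(\sum_k y_k)\), with the inequality reversed when the sequences are oppositely ordered. The sequences below are illustrative.

```python
# Classical (equal-weights) Chebyshev sum inequality on similarly and
# oppositely ordered sequences; values are illustrative.

x = [1.0, 2.0, 3.0, 5.0]
y = [0.5, 1.5, 2.0, 4.0]   # ordered the same way as x
M = len(x)

same_order = M * sum(a * b for a, b in zip(x, y))
product_of_sums = sum(x) * sum(y)
opposite = M * sum(a * b for a, b in zip(x, reversed(y)))

# Similarly ordered dominates the product of sums; oppositely ordered
# is dominated by it:
print(same_order >= product_of_sums >= opposite)  # True
```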
Corollary 6 can easily be extended to the following one.
Corollary 7
Chebyshev’s sum inequality extended
Given the positive sequences \(\{x_{1}, x_{2}, \ldots, x_{M} \}, \{y_{1} , y_{2}, \ldots, y_{M} \}\), then the following holds:
Corollary 8
Cauchy-Schwarz-Buniakowski inequality
Proof
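The Cauchy-Schwarz-Buniakowski inequality itself is easy to verify numerically: \((\sum_k x_k y_k)^2 \le (\sum_k x_k^2)(\sum_k y_k^2)\), with equality exactly when one sequence is proportional to the other. The sequences below are illustrative.

```python
# Sanity check of the Cauchy-Schwarz-Buniakowski inequality and its
# equality case; values are illustrative.

x = [1.0, 2.0, 3.0]
y = [4.0, 1.0, 2.0]

lhs = sum(a * b for a, b in zip(x, y)) ** 2
rhs = sum(a * a for a in x) * sum(b * b for b in y)
print(lhs <= rhs)  # True

# Equality case: y proportional to x.
y_prop = [2.0 * a for a in x]
lhs_eq = sum(a * b for a, b in zip(x, y_prop)) ** 2
rhs_eq = sum(a * a for a in x) * sum(b * b for b in y_prop)
print(abs(lhs_eq - rhs_eq) < 1e-9)  # True
```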
Since \(\mathcal{A}( \{ b_{k}^{\gamma} \}) = 1/\mathcal{H}( \{ b_{k}^{-\gamma} \})\) and reciprocally, Corollary 6, applied to the sequences \(\{ b_{k}^{\gamma_{1}} \}, \{ b_{k}^{\gamma_{2}} \}\), \(\gamma_{1}, \gamma_{2} \ge 0\), together with Corollary 4, allows us to establish the following corollary.
Corollary 9
Observe that we can only guarantee that equation (51) holds when \(\gamma_{1}\) and \(\gamma_{2}\) have the same sign. For instance, taking \(\gamma_{1}= \gamma= -\gamma_{2}\), equation (51) would read \(\mathcal{H}( \{ b_{k}^{\gamma} \}) \ge \mathcal{A}( \{ b_{k}^{\gamma} \})\), which is false in general (indeed, the reverse inequality holds).
Consider now \(H(\{p_{k}\})= -\sum_{k} p_{k}\log p_{k} \), the Shannon entropy of \(\{p_{k}\}\).
Corollary 10
Proof
In information theory [8], the value \(-\log p_{i}\) is considered the information of outcome i; thus Corollary 10 says that the expected value of the information is less than or equal to its average value.
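This statement is easy to check numerically: the Shannon entropy, i.e. the expectation of \(-\log p_i\) under \(\{p_k\}\), is at most the unweighted arithmetic mean of \(-\log p_i\) over the M outcomes. The distribution below is illustrative.

```python
import math

# Entropy (expected information) vs. unweighted average information;
# the distribution is illustrative.

p = [0.5, 0.25, 0.15, 0.10]
M = len(p)

entropy = -sum(pk * math.log(pk) for pk in p)          # E[-log p]
avg_information = -sum(math.log(pk) for pk in p) / M   # arithmetic mean of -log p

print(entropy <= avg_information)  # True
```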
Corollary 11
Proof
Corollary 12
Proof
Corollary 13
Proof
Let us see now that Theorem 1 extends to a weighted geometric mean.
Theorem 3
Proof
Let us see now that Theorem 1 also extends to weighted generalized (or power) mean:
Theorem 4
Proof
When \(p < 0\) we just apply Theorem 1. □
Theorem 5 below extends Theorem 1 to the quasi-arithmetic or Kolmogorov generalized weighted mean.
Theorem 5
Proof
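For concreteness, the weighted quasi-arithmetic (Kolmogorov) mean is \(M_f(\{b_k\}; \{\alpha_k\}) = f^{-1}(\sum_k \alpha_k f(b_k))\) for a continuous strictly monotonic f; the choices \(f(x)=x\), \(f(x)=\log x\), and \(f(x)=1/x\) recover the weighted arithmetic, geometric, and harmonic means. A minimal sketch (sequence and weights illustrative):

```python
import math

# Weighted quasi-arithmetic (Kolmogorov) mean: f^{-1}(sum_k alpha_k f(b_k)).
# The sequence and weights are illustrative.

def quasi_arithmetic_mean(b, alpha, f, f_inv):
    return f_inv(sum(a * f(x) for a, x in zip(alpha, b)))

b = [1.0, 2.0, 4.0]
alpha = [0.2, 0.3, 0.5]

geometric = quasi_arithmetic_mean(b, alpha, math.log, math.exp)
harmonic = quasi_arithmetic_mean(b, alpha, lambda x: 1 / x, lambda x: 1 / x)
arithmetic = quasi_arithmetic_mean(b, alpha, lambda x: x, lambda x: x)

# The classical chain of means: harmonic <= geometric <= arithmetic.
print(harmonic <= geometric <= arithmetic)  # True
```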
Corollary 14
Proof
Corollary 15
Proof
Theorem 6
Finally, we consider the following theorem.
Theorem 7
Proof
Apply Theorem 5 to the sequence \(\{ g(b_{k}) \}\) with weights \(\alpha'_{k}= \frac{h'(b_{k})}{\sum_{k} h'(b_{k})} \) and \(\alpha_{k} = \frac{h(b_{k})}{\sum_{k} h(b_{k})}\). □
Observe that Corollaries 3-15 can be considered as applications of Theorem 7.
2.2 Relationship to majorization
Lemma 2
Proof
We can then state Theorem 8.
Theorem 8
Proof
Observe first that equation (91) (equation (10)) is equivalent to equation (7) for sequences of M increasing strictly positive numbers \(\{ b_{k} \}\). We can then apply Theorem 1 and obtain condition (89) in Lemma 2. It then suffices to apply Lemma 2, which guarantees that the majorization inequalities, equation (88), are fulfilled for the decreasing sequences \(\{ \alpha_{M+1-k} \}, \{ \alpha '_{M+1-k}\}\). □
Observe that the weights in Corollary 6 are such that \(\{ \alpha_{k} \} \succ\{ \alpha'_{k}\}\), and in this way Corollary 6 can be proved by direct application of Lemma 1 in [12].
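The majorization relation \(\{ \alpha_{k} \} \succ \{ \alpha'_{k}\}\) used here is the standard one: after sorting both sequences in decreasing order, every partial sum of the first dominates the corresponding partial sum of the second, and the total sums are equal. A small sketch (the weight vectors are illustrative, not those of Corollary 6):

```python
# Standard majorization check x ≻ y on decreasingly sorted sequences;
# the weight vectors are illustrative.

def majorizes(x, y, tol=1e-12):
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:   # equal totals required
        return False
    px = py = 0.0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px < py - tol:              # every partial sum must dominate
            return False
    return True

alpha = [0.6, 0.3, 0.1]      # more spread-out weights
alpha_p = [0.4, 0.35, 0.25]  # more uniform weights, same total

print(majorizes(alpha, alpha_p))  # True
print(majorizes(alpha_p, alpha))  # False
```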
A similar theorem can be proved for decreasing weights.
Theorem 9
Proof
The same proof as for Theorem 8 holds. □
Theorem 10
Proof
It is enough to apply Theorems 8 and 9 together with Hardy-Littlewood-Pólya theorem [5, 13] on majorization. □
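The Hardy-Littlewood-Pólya theorem invoked here states that \(\{x_k\} \succ \{y_k\}\) if and only if \(\sum_k \phi(x_k) \ge \sum_k \phi(y_k)\) for every convex \(\phi\). A numerical illustration on one majorizing pair, testing a few convex functions (all values illustrative):

```python
# Hardy-Littlewood-Pólya: if x majorizes y, then sum phi(x_k) >= sum phi(y_k)
# for every convex phi. One illustrative pair, a few convex test functions.

x = [0.6, 0.3, 0.1]      # majorizes y (same total, more spread out)
y = [0.4, 0.35, 0.25]

convex_functions = [
    lambda t: t * t,          # phi(t) = t^2
    lambda t: abs(t - 0.3),   # phi(t) = |t - c|
    lambda t: 2.0 ** t,       # phi(t) = 2^t
]

results = [sum(phi(a) for a in x) >= sum(phi(b) for b in y)
           for phi in convex_functions]
print(all(results))  # True
```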
Theorem 11
Proof
It is enough to normalize the sequences and apply Theorems 8 and 9, taking into account that majorization is invariant to a change in scale. □
Theorem 12
Proof
It is enough to normalize the translated sequences and apply Theorems 8 and 9, taking into account that majorization is invariant to a change of scale and a translation. □
Theorem 13
Proof
We can also obtain, for Theorems 11, 12, and 13, results for convex functions similar to those in Theorem 10.
2.3 Not a necessary condition
We can see with counterexamples that the sufficient condition, equation (7), appearing in Theorems 1-5 is not a necessary condition for the inequality of the means to hold for every strictly positive sequence \(\{b_{k}\}\) when \(M\ge 3\) (although it is easy to prove that it is necessary for \(M=2\)).
We leave it to the reader to check with the other means considered in this paper.
3 Results: a necessary and sufficient condition
We will see in this section that the condition found in Lemma 2 is a necessary and sufficient condition.
Theorem 14
Consider the sequences of M numbers \(\{ x_{k} \}\) and \(\{ y_{k} \}\), \(\sum_{k} x_{k} =\sum_{k} y_{k} \). Then propositions (1) to (4) are equivalent:
Proof
Repeating the proof for the sequences \(\{y_{M-k+1}\}\) and \(\{x_{M-k+1}\} \) we find that (4) implies (2). □
Remark
The case when the sequences \(\{x_{k}\}, \{y_{k}\}\) are strictly positive and \(\sum_{k} x_{k} = \sum_{k} y_{k} = 1\) is interesting: it proves that the condition in Lemma 2 is necessary and sufficient, and that it can be extended to harmonic, geometric, and power means, in the same way as Theorem 1 has been extended.
Corollary 16
Rearrangement inequality
Given the sequences of real numbers \(\{x_{1} , x_{2} , \ldots, x_{M} \}\), \(\{y_{1}, y_{2} , \ldots, y_{M} \} \), then the maximum of their crossed sum is obtained when both are paired in the same (increasing or decreasing) order, while the minimum is obtained when both are paired in inverse order.
Proof
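The rearrangement inequality as stated can be verified exhaustively for short sequences: among all pairings \(\sum_k x_k y_{\sigma(k)}\), the same-order pairing attains the maximum and the opposite-order pairing the minimum. The sequences below are illustrative.

```python
from itertools import permutations

# Exhaustive check of the rearrangement inequality on short illustrative
# sequences: same order maximizes the crossed sum, opposite order minimizes.

x = sorted([3.0, 1.0, 4.0, 2.0])
y = sorted([5.0, 2.0, 7.0, 1.0])

crossed_sums = [sum(a * b for a, b in zip(x, p)) for p in permutations(y)]

same_order = sum(a * b for a, b in zip(x, y))
opposite_order = sum(a * b for a, b in zip(x, reversed(y)))

print(max(crossed_sums) == same_order)      # True
print(min(crossed_sums) == opposite_order)  # True
```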
4 Conclusions
Motivated by an inequality that appeared in our research on Monte Carlo multiple importance sampling (MIS), we first identified a sufficient condition for an inequality between generalized weighted means where only the weights change. We then obtained a necessary and sufficient condition. We have given new proofs of Chebyshev’s sum inequality, the Cauchy-Schwarz inequality, and the rearrangement inequality, as well as other interesting inequalities, and we have obtained results for the Shannon, Tsallis, and Rényi entropies and the logsumexp mean. We have also shown the relationship to majorization.
Declarations
Acknowledgements
The authors are funded in part by grant TIN2013-47276-C6-1-R from the Spanish Government and by grant 2014-SGR-1232 from the Catalan Government. The authors acknowledge the comments by David Juher on an earlier draft, and the comments by anonymous reviewers that helped improve the final version of the paper and suggested simplified proofs for Lemma 2 and Theorem 14.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Veach, E, Guibas, LJ: Optimally combining sampling techniques for Monte Carlo rendering. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH ’95, pp. 419-428. ACM, New York (1995). doi:10.1145/218380.218498
- Havran, V, Sbert, M: Optimal combination of techniques in multiple importance sampling. In: Proceedings of the 13th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry. VRCAI ’14, pp. 141-150. ACM, New York (2014). doi:10.1145/2670473.2670496
- Havran, V, Sbert, M: Optimal Combination of Techniques in Multiple Importance Sampling. Technical Report Series of DCGI CS-TR-DCGI-2014-2, Department of Computer Graphics and Interaction, Czech Technical University, FEE (August 2014)
- Bullen, PS: Handbook of Means and Their Inequalities. Springer, Dordrecht (2003)
- Hardy, GH, Littlewood, JE, Pólya, G: Inequalities. Cambridge Mathematical Library. Cambridge University Press, Cambridge (1952). https://books.google.es/books?id=t1RCSP8YKt8C
- Korovkin, PP: Inequalities. Little Mathematics Library. Mir Publishers, Moscow (1975). https://archive.org/details/InequalitieslittleMathematicsLibrary
- Hoehn, L, Niven, I: Averages on the move. Math. Mag. 58(3), 151-156 (1985)
- Cover, TM, Thomas, JA: Elements of Information Theory. Wiley, New York (2006)
- Tsallis, C: Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 52(1/2), 479-487 (1988)
- Tsallis, C: Generalized entropy-based criterion for consistent testing. Phys. Rev. E 58, 1442-1445 (1998)
- Rényi, A: On measures of entropy and information. In: Proc. Fourth Berkeley Symp. Math. Stat. and Probability ’60, vol. 1, pp. 547-561. University of California Press, Berkeley (1961)
- Marjanović, MM, Kadelburg, Z: A proof of Chebyshev’s inequality. In: The Teaching of Mathematics, vol. X, 2, pp. 107-108 (2007)
- Karamata, J: Sur une inégalité rélative aux fonctions convexes. Publ. Math. Univ. Belgrade 1, 145-148 (1932)
- Rubinstein, RY, Kroese, DP: Simulation and the Monte Carlo Method. Wiley Series in Probability and Statistics. Wiley, New York (2008). http://books.google.com.au/books?id=1-ffZVmazvwC
- Kalos, MH, Whitlock, PA: Monte Carlo Methods: Basics. Monte Carlo Methods. Wiley, New York (1986). http://books.google.com.au/books?id=87UdAQAAMAAJ