Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation
Journal of Inequalities and Applications volume 2017, Article number: 261 (2017)
Abstract
In this paper, we study complete convergence and complete moment convergence for weighted sums of extended negatively dependent (END) random variables in a sub-linear expectation space under the condition \(C_{\mathbb{V}}[|X|^{p}l(|X|^{1/\alpha})]<\infty\) and, further, \(\hat{\mathbb{E}}(|X|^{p}l(|X|^{1/\alpha}))\leq C_{\mathbb{V}}[|X|^{p}l(|X|^{1/\alpha})]<\infty\), \(1< p<2\), where \(l(x)>0\) is a slowly varying and monotone nondecreasing function. As an application, a Baum-Katz type result for weighted sums of extended negatively dependent random variables is established in the sub-linear expectation setting. The results obtained in this article extend the complete convergence and complete moment convergence known in the classical linear expectation setting.
1 Introduction
Additivity has generally been regarded as a natural assumption, so classical probability theorems have traditionally been formulated for additive probabilities and linear expectations. However, many uncertain phenomena do not satisfy this assumption, so Peng [1–5] introduced the notion of sub-linear expectation to extend the classical linear expectation and established the general theoretical framework of the sub-linear expectation space. Theorems under sub-linear expectations are widely used to assess financial risk under uncertainty. There are few results on complete convergence and complete moment convergence under sub-linear expectations. This paper aims to obtain complete convergence and complete moment convergence in a sub-linear expectation space under the condition \(C_{\mathbb{V}}[|X|^{p}l(|X|^{1/\alpha})]<\infty\) and, further, \(\hat{\mathbb{E}}(|X|^{p}l(|X|^{1/\alpha}))\leq C_{\mathbb{V}}[|X|^{p}l(|X|^{1/\alpha})]<\infty\), \(1< p<2\). In addition, the results and conditions of this paper involve a slowly varying and monotone nondecreasing function, so the theorems are more general than traditional complete convergence results. In short, this paper extends complete convergence and complete moment convergence to the sub-linear expectation setting.
Sub-linear expectations exhibit many interesting properties that linear expectations do not, and problems under sub-linear expectations are more challenging, so many scholars have studied them. A number of results have been established; for example, Peng [1–5] obtained a weak law of large numbers and a central limit theorem under sub-linear expectations. Chen [6] obtained the law of large numbers for independent identically distributed random variables under the condition \(\hat{\mathbb{E}}(|X|^{1+\alpha})<\infty\). Powerful tools such as moment inequalities and Kolmogorov-type exponential inequalities were established by Zhang [7–9], who also obtained the Hartman-Wintner law of the iterated logarithm and Kolmogorov's strong law of large numbers for identically distributed and extended negatively dependent random variables. Wu and Chen [10] studied the law of the iterated logarithm, and Cheng [11] studied the strong law of large numbers under the general moment condition \(\sup_{i\geq1}\hat{\mathbb{E}}[|X_{i}|\psi(|X_{i}|)]<\infty\). Since many powerful inequalities and conventional methods for linear expectations and probabilities are no longer valid, the study of limit theorems under sub-linear expectations becomes much more challenging.
Complete convergence has a relatively complete development in probability limit theory. The notion of complete convergence was introduced by Hsu and Robbins [12], and complete moment convergence was established by Chow [13]; complete moment convergence is a more general version of complete convergence. Many results on complete convergence and complete moment convergence for various sequences have been obtained in the classical probability setting; see, for example, Shen et al. [14], Wang and Hu [15], and Wu and Jiang [16]. Some recent papers present new results on complete convergence and complete moment convergence: for instance, Wang et al. [17] obtained general results on complete convergence and complete moment convergence for weighted sums of a class of random variables, and Wang and Wu [18] studied complete convergence and complete moment convergence for a class of random variables. The theorems of this paper extend the results of [14] to the sub-linear expectation setting, and we prove them under the condition \(C_{\mathbb{V}}[|X|^{p}l(|X|^{1/\alpha})]<\infty\) and, further, \(\hat{\mathbb{E}}(|X|^{p}l(|X|^{1/\alpha}))\leq C_{\mathbb{V}}[|X|^{p}l(|X|^{1/\alpha})]<\infty\), \(1< p<2\), where \(l(x)>0\) is a slowly varying function.
In the next section, we briefly introduce basic notation, concepts, and related properties under sub-linear expectations, together with preliminary lemmas that are useful for proving the main theorems. In Section 3, the complete convergence and complete moment convergence results under sub-linear expectations are stated. The proofs of these theorems are given in the last section.
2 Basic settings
This paper uses the framework and notation established by Peng [1–5], so we omit the definitions of the sub-linear expectation \(\hat{\mathbb{E}}\), the capacities \((\mathbb{V},v)\), countable sub-additivity, the Choquet integrals/expectations \((C_{\mathbb{V}},C_{v})\), and so on.
Definition 2.1
(i)
(Identical distribution) Assume that \(\textbf{X}_{1}\) and \(\textbf{X}_{2}\) are two n-dimensional random vectors defined respectively in the sub-linear expectation spaces \((\Omega_{1},\mathcal{H}_{1},\hat{\mathbb{E}}_{1})\) and \((\Omega_{2},\mathcal{H}_{2},\hat{\mathbb{E}}_{2})\). They are said to be identically distributed if
$$ \hat{\mathbb{E}}_{1}\bigl[\varphi(\textbf{{X}}_{1}) \bigr]=\hat{\mathbb {E}}_{2}\bigl[\varphi(\textbf{{X}}_{2}) \bigr],\quad\forall\varphi\in C_{l,\mathrm{Lip}}\bigl(\mathbb{R}^{n}\bigr), $$whenever the sub-expectations are finite. A sequence \(\{X_{n}, n\geq1\}\) of random variables is said to be identically distributed if, for each \(i\geq1\), \(X_{i}\) and \(X_{1}\) are identically distributed.
(ii)
(Extended negative dependence) A sequence of random variables \(\{X_{n}, n\geq1\}\) is said to be upper (resp. lower) extended negatively dependent if there is some dominating constant \(K\geq1\) such that
$$ \hat{\mathbb{E}} \Biggl(\prod_{i=1}^{n}g_{i}(X_{i}) \Biggr)\leq K\prod_{i=1}^{n}\hat{\mathbb{E}} \bigl(g_{i}(X_{i})\bigr), \quad\forall n\geq2, $$whenever the nonnegative functions \(g_{i}(X_{i})\in C_{b,\mathrm{Lip}}(\mathbb{R})\), \(i=1,2,\ldots\) , are all nondecreasing (resp. all nonincreasing). They are said to be extended negatively dependent if they are both upper extended negatively dependent and lower extended negatively dependent.
It is clear that if \(\{X_{n}, n\geq1\}\) is a sequence of extended independent random variables and \(f_{1}(x),f_{2}(x),\ldots{}\in C_{l,\mathrm{Lip}}(\mathbb{R})\), then \(\{f_{n}(X_{n}), n\geq1\}\) is also a sequence of extended independent random variables with \(K=1\); if \(\{X_{n}, n\geq1\}\) is a sequence of upper extended negatively dependent random variables and \(f_{1}(x),f_{2}(x),\ldots{}\in C_{l,\mathrm{Lip}}(\mathbb{R})\) are all nondecreasing (resp. all nonincreasing) functions, then \(\{f_{n}(X_{n}), n\geq1\}\) is also a sequence of upper (resp. lower) extended negatively dependent random variables. It should be noted that the extended negative dependence of \(\{X_{n}, n\geq1\}\) under \(\hat{\mathbb{E}}\) does not imply extended negative dependence under the conjugate expectation ε̂.
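A classical probability space is a special case of a sub-linear expectation space, and independent random variables there satisfy the product inequality of Definition 2.1(ii) with \(K=1\) (in fact with equality). The following Monte Carlo sketch illustrates this; the test function `g`, the `Uniform(-1,1)` choice, and all parameter values are our own illustrative assumptions, not taken from the paper.

```python
import random

def mc(f, trials=20000, seed=1):
    # crude Monte Carlo mean of f over a fresh seeded random stream
    rng = random.Random(seed)
    return sum(f(rng) for _ in range(trials)) / trials

# bounded, nondecreasing, Lipschitz test function (illustrative choice)
g = lambda x: min(max(x, 0.0), 1.0)

# independent X1, X2 ~ Uniform(-1, 1): the product inequality holds with K = 1
lhs = mc(lambda r: g(r.uniform(-1, 1)) * g(r.uniform(-1, 1)))
rhs = mc(lambda r: g(r.uniform(-1, 1))) * mc(lambda r: g(r.uniform(-1, 1)))
print(lhs, rhs)  # both are close to E[g(U)]^2 = 0.0625
```

For dependent sequences the factor \(K\geq1\) absorbs the failure of exact factorization; independence is simply the boundary case.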
In what follows, let \(\{X_{n}, n\geq1\}\) be a sequence of random variables in \((\Omega,\mathcal{H},\hat{\mathbb{E}})\), and let \(S_{n}=\sum_{i=1}^{n}X_{i}\). The symbol C denotes a generic positive constant which may differ from place to place. Let \(a_{n}\ll b_{n}\) denote that there exists a constant \(C>0\) such that \(a_{n}\leq Cb_{n}\) for sufficiently large n, let \(I(\cdot)\) denote an indicator function, and let \(a_{x}\sim b_{x}\) denote \(\lim_{x\rightarrow\infty}\frac{a_{x}}{b_{x}}=1\). Also, let \(a_{n}\approx b_{n}\) denote that there exist constants \(c_{1}>0\) and \(c_{2}>0\) such that \(c_{1}a_{n}\leq b_{n}\leq c_{2}a_{n}\) for sufficiently large n.
The following three lemmas are needed in the proofs of our theorems.
Lemma 2.1
([19])
\(l(x)\) is a slowly varying function if and only if it can be written in the form
$$ l(x)=c(x)\exp \biggl\{ \int_{1}^{x}\frac{f(u)}{u}\,du \biggr\} ,\quad x\geq1, $$(2.1)
where \(c(x)\geq0\), \(\lim_{x\rightarrow\infty}c(x)=c>0\), and \(\lim_{x\rightarrow\infty}f(x)=0\).
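As a quick numerical illustration of Lemma 2.1 (not part of the original text): the function \(l(x)=\log(x+e)\) is positive, monotone nondecreasing, and slowly varying, i.e. \(l(\lambda x)/l(x)\rightarrow1\) for every fixed \(\lambda>0\). The sketch below checks this convergence at a few points; the particular \(l\) is our illustrative choice.

```python
import math

def l(x):
    # positive, monotone nondecreasing, slowly varying (illustrative choice)
    return math.log(x + math.e)

# slow variation: l(lam * x) / l(x) -> 1 as x -> infinity for fixed lam > 0
for lam in (0.5, 2.0, 10.0):
    print(lam, [l(lam * x) / l(x) for x in (1e2, 1e4, 1e8)])
```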
Lemma 2.2
Suppose \(X\in\mathcal{H}\), \(p>0\), \(\alpha>0\), and \(l(x)\) is a slowly varying function.
(i)
Then, for any \(c>0\),
$$ C_{\mathbb{V}}\bigl[|X|^{p}l\bigl(|X|^{1/\alpha}\bigr)\bigr]< \infty\quad\Leftrightarrow\quad\sum_{n=1}^{\infty}n^{\alpha p-1}l(n)\mathbb{V}\bigl(|X|>c n^{\alpha}\bigr)< \infty. $$(2.2)
(ii)
If \(C_{\mathbb{V}}[|X|^{p}l(|X|^{1/\alpha})]<\infty\), then for any \(\theta>1\) and \(c>0\),
$$ \sum_{k=1}^{\infty}\theta^{k\alpha p}l\bigl( \theta^{k}\bigr)\mathbb{V}\bigl(|X|>c\theta ^{k\alpha}\bigr) < \infty. $$(2.3)
Proof
(i) By Lemma 2.1, \(l(x)\) can be written as in (2.1), with \(f(u)\rightarrow0\) as \(u\rightarrow\infty\) and \(c(x)\rightarrow c\) as \(x\rightarrow\infty\). Let \(Z(x)=|x|^{p}l(|x|^{1/\alpha})\) and let \(Z^{-1}(x)\) denote the inverse function of \(Z(x)\). Since \(l(x)\) is slowly varying, for any \(c>0\) we have
So,
(ii) By the proof of (i), we deduce that for any \(\theta>1\)
 □
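A hedged numerical sanity check of the equivalence (2.2), not part of the original proof: on a classical probability space (a special capacity, with \(\mathbb{V}\) a probability measure and \(l\equiv1\)), take a Pareto-type tail \(\mathbb{V}(|X|>x)=\min(1,x^{-\beta})\). Then \(C_{\mathbb{V}}[|X|^{p}]<\infty\) exactly when \(\beta>p\), and the series in (2.2) converges in exactly the same case. The parameters \(\alpha=1\), \(p=1.5\), and the two values of \(\beta\) below are our illustrative choices.

```python
def tail(x, beta):
    # classical Pareto-type tail: V(|X| > x) = min(1, x^(-beta))
    return min(1.0, x ** (-beta))

def partial_sum(beta, alpha=1.0, p=1.5, c=1.0, N=10**5):
    # partial sum of  sum_n n^{alpha*p - 1} l(n) V(|X| > c n^alpha)  with l = 1
    return sum(n ** (alpha * p - 1) * tail(c * n ** alpha, beta)
               for n in range(1, N + 1))

# beta = 2.5 > p = 1.5: the p-th moment is finite, the series stays bounded in N
# beta = 1.0 < p = 1.5: the p-th moment is infinite, the series diverges
print(partial_sum(beta=2.5), partial_sum(beta=1.0))
```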
Lemma 2.3
(Zhang [9], Rosenthal-type inequalities)
Let \(\{X_{n},n\geq1\}\) be a sequence of upper extended negatively dependent random variables in \((\Omega,\mathcal{H},\hat{\mathbb{E}})\) with \(\hat{\mathbb{E}}[X_{k}]\leq0\), \(k=1,\ldots,n\). Then
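The precise form of Zhang's Rosenthal-type inequality, with its dominating constant and capacities, is given in [9]. In the classical independent case (\(K=1\)), a fourth-moment version can be verified by direct computation: for centered independent \(X_{i}\), \(\mathbb{E}S_{n}^{4}=\sum_{i}\mathbb{E}X_{i}^{4}+3\sum_{i\neq j}\mathbb{E}X_{i}^{2}\,\mathbb{E}X_{j}^{2}\leq\sum_{i}\mathbb{E}X_{i}^{4}+3 (\sum_{i}\mathbb{E}X_{i}^{2} )^{2}\). The sketch below checks this with \(X_{i}\sim\mathrm{Uniform}(-1,1)\), our illustrative choice.

```python
# X_i ~ Uniform(-1, 1), i.i.d. and centered: E X^2 = 1/3, E X^4 = 1/5
def fourth_moment(n):
    # exact E S_n^4 = n * E X^4 + 3 n (n - 1) * (E X^2)^2
    return n / 5 + n * (n - 1) / 3

def rosenthal_bound(n):
    # sum_i E X_i^4 + 3 * (sum_i E X_i^2)^2
    return n / 5 + 3 * (n / 3) ** 2

for n in (1, 10, 1000):
    print(n, fourth_moment(n), rosenthal_bound(n))
```

The bound captures the two regimes of Rosenthal-type inequalities: for small n the individual fourth moments dominate; for large n the \((\sum\mathbb{E}X_{i}^{2})^{q/2}\) term does.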
3 Main results
Theorem 3.1
Let \(0< p<2\), \(\alpha>0\), \(\alpha p>1\), and let \(\{X_{n},n\geq1\}\) be a sequence of END and identically distributed random variables under sub-linear expectations. Let \(l(x)>0\) be a slowly varying and monotone nondecreasing function, and let \(\{a_{ni},1\leq i\leq n,n\geq1\}\) be an array of real numbers such that
If
further, for \(1< p<2\),
then, for any \(\varepsilon>0\),
where \(b_{i}=0\) if \(p\leq1\), and \(b_{i}=\hat{\mathbb{E}}X_{i}\) if \(p>1\);
where \(b_{i}=0\) if \(p\leq1\), and \(b_{i}=\hat{\varepsilon}X_{i}\) if \(p>1\).
In particular, if \(\hat{\mathbb{E}}X_{i}=\hat{\varepsilon}X_{i}\), then
where \(b_{i}=0\) if \(p\leq1\), and \(b_{i}=\hat{\mathbb{E}}X_{i}=\hat{\varepsilon}X_{i}\) if \(p>1\).
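To give some intuition for why a series such as (3.5) can converge, here is a Monte Carlo sketch under classical assumptions of our own choosing (i.i.d. bounded centered summands, weights \(a_{ni}=1\), \(\alpha=1\)): the tail probability \(\mathbb{V}(|S_{n}|>\varepsilon n^{\alpha})\) decays very rapidly in n, which is what the polynomially weighted series exploits.

```python
import random

def tail_prob(n, eps=0.5, alpha=1.0, trials=2000, seed=0):
    # Monte Carlo estimate of V(|S_n| > eps * n^alpha) for i.i.d. centered
    # Uniform(-1, 1) summands with weights a_ni = 1 (illustrative choices)
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.uniform(-1, 1) for _ in range(n))
        if abs(s) > eps * n ** alpha:
            hits += 1
    return hits / trials

# the tail probability falls off rapidly as n grows, so a weighted series
# of such probabilities has a chance to converge
print([tail_prob(n) for n in (5, 20, 80)])
```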
Theorem 3.2
Suppose that the conditions of Theorem 3.1 hold and that \(\hat{\mathbb{E}}X_{i}=\hat{\varepsilon}X_{i}=b_{i}\), \(1< p<2\). Then, for any \(\varepsilon>0\),
Theorem 3.3
Suppose that \(1/2<\alpha\leq1\) and the other conditions of Theorem 3.1 hold. Let \(l(x)>0\) be a monotone nondecreasing function. Assume further that \(\{a_{ni},1\leq i\leq n,n\geq1\}\) is an array of real numbers such that (3.1) holds and that \(\hat{\mathbb{E}}X_{i}=\hat{\varepsilon}X_{i}=b_{i}\). If
then, for any \(\varepsilon>0\),
4 Proof
Proof of Theorem 3.1
Without loss of generality, we can assume that \(\hat{\mathbb{E}}X_{i}=0\) when \(p>1\). We only need to prove (3.5), since applying (3.5) to \(\{-X_{n};n\geq1\}\) instead of \(\{X_{n};n\geq1\}\) yields (3.6). Noting that \(a_{ni}\geq0\), without loss of generality, we can assume that
and \(a_{ni}\geq0\) for all \(1\leq i\leq n\) and \(n\geq1\). It follows by (3.2) and Hölder’s inequality that
For fixed \(n\geq1\), denote for \(1\leq i\leq n\) that
It is easily checked that for any \(\varepsilon>0\),
which implies that
For \(0<\mu<1\), let \(g(x)\) be a decreasing function and \(g(x)\in C_{l,\mathrm{Lip}}(\mathbb{R})\), \(0\leq g(x)\leq1\) for all x and \(g(x)=1\) if \(|x|\leq\mu\), \(g(x)=0\) if \(|x|>1\). Then
In order to prove (3.5), it suffices to show that \(I_{1}<\infty\) and \(I_{2}<\infty\). By Lemma 2.2(i) and the identical distribution of the random variables, we get
In the following, we prove that \(I_{2}<\infty\). First, we prove that
Case 1: \(0< p\leq1\).
For any \(r>0\), by the \(C_{r}\) inequality and (4.4),
So, by (4.3) we obtain
By (2.3), we have
and \(\mathbb{V}(|X|>\mu n^{\alpha})\downarrow\), so \(n\mathbb{V}(|X|>\mu n^{\alpha})\rightarrow0\) as \(n\rightarrow\infty\). Next, we estimate \(I_{21}\). Let \(g_{j}(x)\in C_{l,\mathrm{Lip}}(\mathbb{R})\), \(j\geq1\), be such that \(0\leq g_{j}(x)\leq1\) for all x, \(g_{j}(\frac{x}{2^{j\alpha}})=1\) if \(2^{(j-1)\alpha}<|x|\leq2^{j\alpha}\), and \(g_{j}(\frac{x}{2^{j\alpha}})=0\) if \(|x|\leq2^{(j-1)\alpha}\) or \(|x|>(1+\mu)2^{j\alpha}\). Then
For every n there exists k such that \(2^{k-1}\leq n<2^{k}\); thus, by (4.7), \(g(x)\downarrow\), and \(n^{-\alpha+1}\downarrow0\) (since \(\alpha>\frac{1}{p}\geq1\)), we get
Noting that by (2.4), \(\alpha p>1\),
It follows that
from the Kronecker lemma and \(2^{j(\alpha-1)}\uparrow\infty\).
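The Kronecker lemma invoked here states that if \(\sum_{j}x_{j}/b_{j}\) converges and \(0<b_{j}\uparrow\infty\), then \(b_{n}^{-1}\sum_{j\leq n}x_{j}\rightarrow0\). A small numerical illustration with \(b_{j}=2^{j(\alpha-1)}\) for \(\alpha=1.5\), and an \(x_{j}\) of our own choosing whose plain partial sums diverge:

```python
def b(j, alpha=1.5):
    # b_j = 2^{j(alpha - 1)} increases to infinity for alpha > 1
    return 2.0 ** (j * (alpha - 1))

def kronecker_average(n):
    # x_j = b_j / j^2: sum_j x_j diverges, yet sum_j x_j / b_j = sum_j 1/j^2
    # converges, so Kronecker's lemma forces (1/b_n) * sum_{j<=n} x_j -> 0
    s = sum(b(j) / j ** 2 for j in range(1, n + 1))
    return s / b(n)

print([kronecker_average(n) for n in (10, 50, 200)])  # decreasing toward 0
```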
Case 2: \(1< p<2\).
By (3.4), we can get that
By (4.9) and \(\alpha p>1\), \(1< p<2\), one can get that
It follows that for all n large enough,
which implies that
By Definition 2.1(ii), for fixed \(n\geq1\), \(\{a_{ni}(X_{i}^{(n)}-\hat{\mathbb{E}}X_{i}^{(n)}),1\leq i\leq n\}\) are still END random variables. Hence, by Lemma 2.3 (taking \(x=\varepsilon n^{\alpha}\)), we have
By (4.6), we have
By Lemma 2.2(i), we can get \(I_{4}<\infty\). Noting that by (4.8)
By \(p<2\), we get \(I_{31}<\infty\). Next we estimate \(I_{32}\). By (2.4), we obtain
Hence, it follows that
By \(I_{3}<\infty\) and \(I_{4}<\infty\), we can get \(I_{2}<\infty\).
This finishes the proof of Theorem 3.1. □
Proof of Theorem 3.2
Without loss of generality, we can assume that \(\hat{\mathbb{E}}X_{i}=0\) when \(p>1\), and that \(a_{ni}\geq0\). For any \(\varepsilon>0\), we have by Theorem 3.1 that
Hence, it suffices to show that
For \(t>n^{\alpha}\), denote
and
Since \(X_{i}=U_{ti}+Z_{ti}\), it follows that
Note that by Lemma 2.2(i)
In the following, we prove that \(H_{2}<\infty\). First, we show that
Case 1: \(0< p\leq1\).
Noting (4.10) and (4.4), we obtain
So, for \(t>n^{\alpha}\), we get
We showed that \(n\mathbb{V}(|X|>\mu n^{\alpha})\rightarrow0\) as \(n\rightarrow\infty\) in the proof of (4.7). Next, we estimate \(H_{21}\). For every n there exists k such that \(2^{k-1}\leq n<2^{k}\); thus, by (4.8), (4.13), \(g(x)\downarrow\), \(t>n^{\alpha}\), and \(n^{-\alpha+1}\downarrow0\) (since \(\alpha>1\)), we get
Noting that by (2.4), \(\alpha p>1\),
It follows that
from the Kronecker lemma and \(2^{j(\alpha-1)}\uparrow\infty\).
Case 2: \(1< p<2\).
By \(\hat{\mathbb{E}}X_{i}=0\) and \(\alpha p>1\), \(t>n^{\alpha}\), we can get that
It follows that for all n large enough,
which implies that
For fixed \(t>n^{\alpha}\) and \(n\geq1\), it is easily seen that \(\{a_{ni}(Z_{ti}-\hat{\mathbb{E}}Z_{ti}),i\geq1\}\) are still END random variables. Hence, by Markov's inequality, Lemma 2.3, (4.3), (4.12), (4.13), and Lemma 2.2(i), we have
Hence, this finishes the proof of Theorem 3.2. □
Proof of Theorem 3.3
We use the same notation as in Theorem 3.1. The proof is similar to that of Theorem 3.1; we only need to show that
Because \(l(x)>0\) is a monotone nondecreasing function, we have
which together with (3.8) yields that \(C_{\mathbb{V}}|X|^{1/\alpha }< C_{\mathbb{V}}[|X|^{1/\alpha}l(|X|^{1/\alpha})]<\infty\). Noting that \(1\leq1/\alpha<2\) and \(\hat{\mathbb{E}}X_{i}=0\), we have
 □
References
Peng, SG: G-expectation, G-Brownian motion and related stochastic calculus of Itô type. Stoch. Anal. Appl. 2, 541-567 (2007)
Peng, SG: Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Process. Appl. 118, 2223-2253 (2008)
Peng, SG: Nonlinear expectations and nonlinear Markov chains. Chin. Ann. Math., Ser. B 26, 159-184 (2005)
Peng, SG: Law of large numbers and central limit theorem under nonlinear expectations. arXiv:math/0702358v1 (2007)
Peng, SG: Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations. Sci. China Ser. A 52, 1391-1411 (2009)
Chen, ZJ: Strong laws of large numbers for sub-linear expectations. Sci. China Math. 59(5), 945-954 (2016)
Zhang, LX: Strong limit theorems for extended independent and extended negatively dependent random variables under non-linear expectations. arXiv:1608.00710 (2016)
Zhang, LX: Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm. Sci. China Math. 59(2), 2503-2526 (2016)
Zhang, LX: Rosenthal’s inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci. China Math. 59(4), 751-768 (2016)
Wu, PY, Chen, ZJ: Invariance principles for the law of the iterated logarithm under G-framework. Sci. China Math. 58(6), 1251-1264 (2015)
Cheng, H: A strong law of large numbers for sub-linear expectation under a general moment condition. Stat. Probab. Lett. 119, 248-258 (2016)
Hsu, P, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25-31 (1947)
Chow, YS: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177-201 (1988)
Shen, A, Xue, MX, Wang, WJ: Complete convergence for weighted sums of extended negatively dependent random variables. Commun. Stat., Theory Methods 46(3), 1433-1444 (2016)
Wang, XJ, Hu, SH: Complete convergence and complete moment convergence for martingale difference sequences. Acta Math. Sin. Engl. Ser. 30(1), 119-132 (2014)
Wu, Q, Jiang, YY: Complete convergence and complete moment convergence for negatively associated sequences of random variables. J. Inequal. Appl. 2016, Article ID 157 (2016)
Wang, XJ, Hu, SH, Volodin, A: General results of complete convergence and complete moment convergence for weighted sums of some class of random variables. Commun. Stat., Theory Methods 45(15), 4494-4508 (2016)
Wang, XJ, Wu, Y: On complete convergence and complete moment convergence for a class of random variables. J. Korean Math. Soc. 54(3), 877-896 (2017)
Seneta, E: Regularly Varying Functions. Lecture Notes in Mathematics, vol. 508. Springer, Berlin (1976)
Acknowledgements
Supported by the National Natural Science Foundation of China (11661029, 11361019) and the Support Program of the Guangxi China Science Foundation (2015GXNSFAA139008).
Author information
Contributions
All authors contributed equally and read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Zhong, H., Wu, Q. Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation. J Inequal Appl 2017, 261 (2017). https://doi.org/10.1186/s13660-017-1538-1