Complete moment convergence of moving average processes for m-WOD sequence
Journal of Inequalities and Applications volume 2021, Article number: 16 (2021)
Abstract
In this paper, the complete moment convergence for the partial sums of moving average processes \(\{X_{n}=\sum_{i=-\infty }^{\infty }a_{i}Y_{i+n},n\geq 1\}\) is established under some mild conditions, where \(\{Y_{i},-\infty < i<\infty \}\) is a sequence of m-widely orthant dependent (m-WOD, for short) random variables which is stochastically dominated by a random variable Y, and \(\{a_{i},-\infty < i<\infty \}\) is an absolutely summable sequence of real numbers. These conclusions extend and improve the corresponding results from m-extended negatively dependent (m-END, for short) sequences to m-WOD sequences.
1 Introduction and main results
Let \(\{Y_{i},-\infty < i<\infty \}\) be a sequence of random variables and \(\{a_{i},-\infty < i<\infty \}\) be an absolutely summable sequence of real numbers, and for \(n\geq 1\) set \(X_{n}=\sum_{i=-\infty }^{\infty }a_{i}Y_{i+n}\). The limit properties of the moving average process \(\{X_{n},n\geq 1\}\) have been extensively investigated by many authors. For example, Burton and Dehling [1] obtained a large deviation principle, Ibragimov [2] established the central limit theorem, Račkauskas and Suquet [3] proved functional central limit theorems for self-normalized partial sums of linear processes, and An [4], Chen et al. [5], Kim and Ko [6], Li et al. [7], Li and Zhang [8], Wang and Hu [9], Yang and Hu [10], Zhang [11], Zhou [12], Zhou and Lin [13], Zhang [14], Zhang and Ding [15], and Song and Zhu [16, 17] obtained the complete (moment) convergence of moving average processes based on sequences of various dependent (or mixing) random variables. But few results for moving average processes based on m-WOD random variables are known. First, we introduce some definitions.
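For intuition, the partial sums studied below can be computed directly once the coefficient sequence is truncated to its finitely many nonzero terms. The sketch below is illustrative only (the function name and the finitely supported coefficients are our choices, not the paper's); note that independent innovations are a legitimate test case, since independent sequences are a special case of m-WOD sequences.

```python
import numpy as np

def moving_average(coeffs, offsets, innovation, n):
    """X_n = sum_i a_i * Y_{i+n} for a finitely supported coefficient
    sequence: coeffs[k] is a_{offsets[k]}; innovation(j) returns Y_j
    (negative indices j are allowed)."""
    return sum(a * innovation(i + n) for a, i in zip(coeffs, offsets))

# Deterministic innovations make the sum easy to check by hand:
# with a_{-1} = 0.5, a_0 = 1, a_1 = 0.25 and Y_j = j,
# X_n = 0.5(n-1) + n + 0.25(n+1) = 1.75n - 0.25.
x2 = moving_average([0.5, 1.0, 0.25], [-1, 0, 1], lambda j: j, 2)
print(x2)  # 3.25

# The degenerate choice a_0 = 1, a_i = 0 otherwise (cf. Remark 3.6)
# reduces X_n to Y_n.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
assert moving_average([1.0], [0], lambda j: y[j], 7) == y[7]
```

The same function works for any absolutely summable coefficient sequence after truncating to a finite support, since the neglected tail is controlled by \(\sum_{i}|a_{i}|<\infty\).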
Definition 1.1
A sequence \(\{Y_{i},-\infty < i<\infty \}\) of random variables is said to be stochastically dominated by a random variable Y if there exists a constant \(C>0\) such that \(P(|Y_{i}|>x)\leq CP(|Y|>x)\) for all \(x\geq 0\) and \(-\infty < i<\infty \).
Definition 1.2
A real-valued function \(l(x)\), positive and measurable on \([a,\infty )\), \(a>0\), is said to be slowly varying at infinity if, for each \(\lambda >0\), \(\lim_{x\to \infty }\frac{l(\lambda x)}{l(x)}=1\).
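As a quick numerical illustration (ours, not part of the paper): \(l(x)=\log x\) is slowly varying, while the power \(x^{1/2}\) is not, since the ratio \(l(\lambda x)/l(x)\) must tend to 1 for every fixed \(\lambda >0\).

```python
import math

def ratio(l, lam, x):
    """l(lambda * x) / l(x): tends to 1 as x -> infinity iff l is slowly varying."""
    return l(lam * x) / l(x)

# log is slowly varying: for fixed lambda = 10 the ratio approaches 1.
for x in (1e3, 1e6, 1e12):
    print(ratio(math.log, 10.0, x))  # 4/3, 7/6, 13/12

# x**0.5 is not slowly varying: the ratio stays at sqrt(10) for every x.
print(ratio(math.sqrt, 10.0, 1e12))  # 3.162...
```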
The concept of widely orthant dependence structure was introduced by Wang et al. [18] as follows.
Definition 1.3
For the random variables \(\{X_{n},n\geq 1\}\), if there exists a finite positive sequence \(\{g_{U}(n),n\geq 1\}\) satisfying, for each \(n\geq 1\) and for all \(x_{i}\in R\), \(1\leq i\leq n\),
\(P(X_{1}>x_{1},X_{2}>x_{2},\ldots ,X_{n}>x_{n})\leq g_{U}(n)\prod_{i=1}^{n}P(X_{i}>x_{i}),\) (1.1)
then we say that the random variables \(\{X_{n},n\geq 1\}\) are widely upper orthant dependent (WUOD, for short); if there exists a finite positive sequence \(\{g_{L}(n),n\geq 1\}\) satisfying, for each \(n\geq 1\) and for all \(x_{i}\in R\), \(1\leq i\leq n\),
\(P(X_{1}\leq x_{1},X_{2}\leq x_{2},\ldots ,X_{n}\leq x_{n})\leq g_{L}(n)\prod_{i=1}^{n}P(X_{i}\leq x_{i}),\) (1.2)
then we say that the random variables \(\{X_{n},n\geq 1\}\) are widely lower orthant dependent (WLOD, for short); if they are both WUOD and WLOD, then we say that the random variables \(\{X_{n},n\geq 1\}\) are widely orthant dependent (WOD, for short), and \(g_{U}(n)\), \(g_{L}(n)\), \(n\geq 1\), are called dominating coefficients.
Inspired by the concepts of WOD and m-NA sequences, Fang et al. [19] introduced the following notion.
Definition 1.4
Let \(m\geq 1\) be a fixed integer. A sequence of random variables \(\{X_{n},n\geq 1\}\) is said to be m-WOD if, for any \(n\geq 2\) and \(i_{1},i_{2},\ldots ,i_{n}\) such that \(|i_{k}-i_{j}|\geq m\) for all \(1\leq k\neq j\leq n\), we have that \(X_{i_{1}},X_{i_{2}},\ldots ,X_{i_{n}}\) are WOD.
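Definition 1.4 only constrains subfamilies whose indices are pairwise at least m apart. A small helper (hypothetical, for illustration only) extracts such an index set greedily:

```python
def spaced_indices(indices, m):
    """Greedily keep indices whose pairwise gaps are all >= m, as required
    of the index sets i_1, ..., i_n in Definition 1.4 (m = 1 keeps every
    distinct index, recovering plain WOD)."""
    kept = []
    for i in sorted(indices):
        if not kept or i - kept[-1] >= m:
            kept.append(i)
    return kept

out = spaced_indices(range(1, 11), 3)
print(out)  # [1, 4, 7, 10]
# every pair of kept indices differs by at least m = 3, so for an m-WOD
# sequence the corresponding variables must be WOD
assert all(b - a >= 3 for a, b in zip(out, out[1:]))
```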
By (1.1) and (1.2), we can see that \(g_{U}(n)\geq 1\) and \(g_{L}(n)\geq 1\). Recall that if \(g_{U}(n)=g_{L}(n)=M\) for some positive constant M and all \(n\geq 1\), then the random variables \(\{X_{n},n\geq 1\}\) are called extended negatively dependent (END, for short); the definition of END was introduced by Liu [20]. If both (1.1) and (1.2) hold with \(g_{U}(n)=g_{L}(n)=1\) for all \(n\geq 1\), then the random variables \(\{X_{n},n\geq 1\}\) are called negatively orthant dependent (NOD, for short), a notion introduced by Ebrahimi and Ghosh [21]. It is well known that negatively associated (NA, for short) random variables are NOD, and Hu [22] pointed out that negatively superadditive dependent (NSD, for short) random variables are NOD. Hence, the class of m-WOD random variables includes independent sequences, m-NA sequences, NSD sequences, m-NOD sequences, and m-END sequences as special cases. Studying the probability limit theory and its applications for m-WOD random variables is therefore of great interest. However, there are few results on the complete moment convergence of moving average processes based on m-WOD sequences. In this paper, we establish some results on the complete moment convergence of partial sums of moving average processes.
Throughout the sequel, C represents a positive constant whose value may change from one appearance to the next, \(I\{A\}\) denotes the indicator function of the set A, \([x]\) denotes the integer part of x, and \(X^{+}=\max \{X,0\}\), \(X^{-}=\max \{-X,0\}\).
2 Preliminary lemmas
In this section, we give some lemmas which will be useful to prove our main results.
Lemma 2.1
(Fang et al. [19])
Let \(\{X_{n},n\geq 1\}\) be a sequence of m-WOD random variables with dominating coefficients \(g(n)=\max \{g_{U}(n),g_{L}(n)\}\). If \(\{f_{n}(\cdot ),n\geq 1\}\) are all nondecreasing (or all nonincreasing), then \(\{f_{n}(X_{n}),n\geq 1\}\) are still m-WOD with dominating coefficients \(\{g(n),n\geq 1\}\).
Lemma 2.2
(Fang et al. [19])
Let \(q\geq 2\). Suppose that \(\{X_{n},n\geq 1\}\) is a sequence of mean zero m-WOD random variables with dominating coefficients \(g(n)=\max \{g_{U}(n),g_{L}(n)\}\) and that \(E|X_{i}|^{q}<\infty \) for every \(i\geq 1\). Then for all \(n\geq 1\) there exist positive constants \(C_{1}(m,q)\) and \(C_{2}(m,q)\) depending only on m and q such that
\(E \bigl(\max_{1\leq j\leq n} \bigl|\sum_{i=1}^{j}X_{i} \bigr| \bigr)^{q}\leq C_{1}(m,q)\sum_{i=1}^{n}E|X_{i}|^{q}+C_{2}(m,q)g(n) \bigl(\sum_{i=1}^{n}EX_{i}^{2} \bigr)^{q/2}.\)
Lemma 2.3
(Zhou [12])
If l is slowly varying at infinity, then
(1) \(\sum_{n=1}^{m}n^{s}l(n)\leq C m^{s+1}l(m)\) for \(s>-1\) and positive integer m,
(2) \(\sum_{n=m}^{\infty }n^{s}l(n)\leq C m^{s+1}l(m)\) for \(s<-1\) and positive integer m.
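The two bounds of Lemma 2.3 can be sanity-checked numerically. This sketch is ours; the concrete choices \(l(n)=\log (n+1)\), the exponents \(s=1/2\) and \(s=-2\), and the constants \(C=1\) and \(C=2\) were picked by hand for this example.

```python
import math

def l(n):
    """A concrete slowly varying function (log(n + 1) avoids l(1) = 0)."""
    return math.log(n + 1)

def head_sum(s, m):
    """sum_{n=1}^{m} n^s l(n), the case s > -1 of Lemma 2.3(1)."""
    return sum(n**s * l(n) for n in range(1, m + 1))

def tail_sum(s, m, N=200_000):
    """Truncated sum_{n=m}^{N} n^s l(n), the case s < -1 of Lemma 2.3(2)."""
    return sum(n**s * l(n) for n in range(m, N + 1))

for m in (10, 100, 1000):
    # (1): head_sum(s, m) <= C * m^{s+1} l(m) with s = 1/2 (C = 1 suffices here)
    assert head_sum(0.5, m) <= m**1.5 * l(m)
    # (2): tail_sum(s, m) <= C * m^{s+1} l(m) with s = -2 (C = 2 suffices here)
    assert tail_sum(-2, m) <= 2 * m**(-1) * l(m)
```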
Lemma 2.4
(Wang et al. [23])
Let \(\{X_{n}, n\geq 1\}\) be a sequence of random variables which is stochastically dominated by a random variable X. Then, for any \(a>0\) and \(b>0\),
\(E|X_{n}|^{a}I\{|X_{n}|\leq b\}\leq C_{1} [E|X|^{a}I\{|X|\leq b\}+b^{a}P(|X|>b) ]\)
and
\(E|X_{n}|^{a}I\{|X_{n}|> b\}\leq C_{2}E|X|^{a}I\{|X|>b\},\)
where \(C_{1}\) and \(C_{2}\) are positive constants.
3 Main results and proofs
Theorem 3.1
Let l be a function slowly varying at infinity, \(p\geq 1\), \(\alpha >1/2\), and \(\alpha p> 1\). Assume that \(\{a_{i},-\infty < i<\infty \}\) is an absolutely summable sequence of real numbers. Suppose that \(\{X_{n}=\sum_{i=-\infty }^{\infty } a_{i}Y_{i+n}, n\geq 1\}\) is a moving average process generated by a sequence \(\{Y_{i},-\infty < i<\infty \}\) of m-WOD random variables with dominating coefficients \(g(n)=O(n^{\delta })\) for some \(\delta \geq 0\), which is stochastically dominated by a random variable Y. If \(EY_{i}=0\) for \(1/2<\alpha \leq 1\), \(E|Y|^{p}l(|Y|^{1/{\alpha }})<\infty \) for \(p>1\), and \(E|Y|^{1+\lambda }<\infty \) for \(p=1\) and some \(\lambda >0\), then for any \(\varepsilon >0\)
\(\sum_{n=1}^{\infty }n^{\alpha p-2-\alpha }l(n)E \bigl(\max_{1\leq k\leq n} \bigl|\sum_{j=1}^{k}X_{j} \bigr|-\varepsilon n^{\alpha } \bigr)^{+}<\infty .\) (3.1)
Proof
Let \(f(n)=n^{\alpha p-2-\alpha }l(n)\), and let \(Y^{(1)}_{xj}=-xI\{Y_{j}< -x\}+Y_{j}I\{|Y_{j}|\leq x\}+xI\{Y_{j}> x\}\) and \(Y^{(2)}_{xj}=Y_{j}-Y^{(1)}_{xj}\) be the monotone truncations of \(\{Y_{j},-\infty < j<\infty \}\) for \(x>0\). Then, by Lemma 2.1, \(\{Y^{(1)}_{xj}-EY^{(1)}_{xj},-\infty < j<\infty \}\) and \(\{Y^{(2)}_{xj},-\infty < j<\infty \}\) are two sequences of m-WOD random variables. Since \(\sum_{k=1}^{n}X_{k}=\sum_{i=-\infty }^{\infty }a_{i}\sum_{j=i+1}^{i+n}Y_{j}\) and \(\sum_{i=-\infty }^{\infty }|a_{i}|<\infty \), by Lemma 2.4 we have, for \(x>n^{\alpha }\) and \(\alpha >1\),
If \(1/2<\alpha \leq 1\), then since \(\alpha p> 1\), we have \(p>1\). Since \(E|Y|^{p}l(|Y|^{1/{\alpha }})<\infty \) and l is slowly varying at infinity, it is easy to see that \(E|Y|^{p-\epsilon }<\infty \) for any \(0<\epsilon <p-1/{\alpha }\). Then, noting that \(EY_{i}=0\), by Lemma 2.4 we can obtain
Therefore, by the above discussion, for \(x>n^{\alpha }\) large enough, we know
Then
First we prove \(I_{1}<\infty \). Noting that \(|Y^{(2)}_{xj}|\leq |Y_{j}|I\{|Y_{j}|> x\}\), by Markov’s inequality and Lemma 2.4 we have
If \(p>1\), then \(\alpha p-1-\alpha >-1\), by Lemma 2.3, we can get
If \(p=1\), then \(E|Y|^{1+\lambda }<\infty \) implies \(E|Y|^{1+\lambda '}l(|Y|^{1/{\alpha }})<\infty \) for any \(0<\lambda '<\lambda \), so by Lemma 2.3 we get
So, we conclude
Next we show \(I_{2}<\infty \). By Markov’s inequality, Hölder’s inequality, and Lemma 2.2, we can obtain
where \(r\geq 2\) will be specified later.
For \(I_{21}\), if \(p>1\), taking \(r>\max \{2,p\}\), then by \(C_{r}\) inequality, Lemma 2.3, and Lemma 2.4, we know
For \(I_{21}\), if \(p=1\), taking \(r>\max \{1+\lambda ',2\}\), where \(0<\lambda '<\lambda \), then by the same argument as above we know
For \(I_{22}\), if \(1\leq p<2\), noting that \(g(n)=O(n^{\delta })\), taking \(r>2\) such that \(\alpha p+r/2-\alpha pr/2-1+\delta =(\alpha p-1)(1-r/2)+\delta <0\), then by \(C_{r}\) inequality, Lemma 2.3, and Lemma 2.4, we obtain
For \(I_{22}\), if \(p\geq 2\), noting that \(g(n)=O(n^{\delta })\), taking \(r>(\alpha p-1)/({\alpha -1/2})\geq p\) such that \(\alpha (p-r)+r/2+\delta -1<0\), then by \(C_{r}\) inequality, Lemma 2.3, and Lemma 2.4, similar to the proof of (3.7), one gets
Thus, (3.1) can be deduced immediately by combining (3.2)–(3.8). □
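The monotone truncation used in the proof above is exactly a clipping of \(Y_{j}\) to \([-x,x]\), which is why it is monotone in \(Y_{j}\) and Lemma 2.1 applies. A quick numerical check of its basic properties (our sketch, not part of the proof):

```python
import numpy as np

def truncate(y, x):
    """Y1 = -x*1{Y < -x} + Y*1{|Y| <= x} + x*1{Y > x}, i.e. clip to [-x, x];
    Y2 = Y - Y1, so that Y = Y1 + Y2 and |Y2| = (|Y| - x)*1{|Y| > x}."""
    y1 = np.clip(y, -x, x)
    return y1, y - y1

y = np.array([-5.0, -1.0, 0.0, 2.0, 7.0])
y1, y2 = truncate(y, 3.0)
print(y1)  # [-3. -1.  0.  2.  3.]
print(y2)  # [-2.  0.  0.  0.  4.]
assert np.allclose(y1 + y2, y)
# clipping is nondecreasing (y is sorted, so y1 is too), and the
# remainder satisfies |Y2| <= |Y| * 1{|Y| > x}:
assert np.all(np.diff(y1) >= 0)
assert np.all(np.abs(y2) <= np.abs(y) * (np.abs(y) > 3.0))
```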
The next theorem will discuss the case \(\alpha p=1\).
Theorem 3.2
Let l be a function slowly varying at infinity and \(1\leq p<2\). Assume that \(\sum_{i=-\infty }^{\infty }|a_{i}|^{\theta }<\infty \), where θ belongs to \((0,1)\) if \(p=1\) and \(\theta =1\) if \(1< p<2\). Suppose that \(\{X_{n}=\sum_{i=-\infty }^{\infty } a_{i}Y_{i+n}, n\geq 1\}\) is a moving average process generated by a sequence \(\{Y_{i},-\infty < i<\infty \}\) of m-WOD random variables with dominating coefficients \(g(n)=O(n^{\delta })\) for some \(0\leq \delta <(2-p)/p\), which is stochastically dominated by a random variable Y. If \(EY_{i}=0\) and \(E|Y|^{p(1+\delta )}l(|Y|^{p})<\infty \), then for any \(\varepsilon >0\)
\(\sum_{n=1}^{\infty }n^{-1-1/p}l(n)E \bigl(\max_{1\leq k\leq n} \bigl|\sum_{j=1}^{k}X_{j} \bigr|-\varepsilon n^{1/p} \bigr)^{+}<\infty .\) (3.9)
Proof
Let \(h(n)=n^{-1-1/p}l(n)\). Similar to the proof of (3.2), we obtain
For \(J_{1}\), by Markov’s inequality, \(C_{r}\) inequality, Lemma 2.3, and Lemma 2.4, one gets
For \(J_{2}\), by the same argument as for \(I_{2}\), noting that \(g(n)=O(n^{\delta })\) for some \(0\leq \delta <(2-p)/p\) and taking \(r=2\), by Lemma 2.2, Lemma 2.3, and Lemma 2.4 we conclude
Hence, by combining (3.10)–(3.12), (3.9) holds. □
For the complete convergence, the following corollary follows immediately from the above theorems.
Corollary 3.3
Under the assumptions of Theorem 3.1, for any \(\varepsilon >0\), we have
\(\sum_{n=1}^{\infty }n^{\alpha p-2}l(n)P \bigl(\max_{1\leq k\leq n} \bigl|\sum_{j=1}^{k}X_{j} \bigr|>\varepsilon n^{\alpha } \bigr)<\infty .\)
Under the assumptions of Theorem 3.2, for any \(\varepsilon >0\), we have
\(\sum_{n=1}^{\infty }n^{-1}l(n)P \bigl(\max_{1\leq k\leq n} \bigl|\sum_{j=1}^{k}X_{j} \bigr|>\varepsilon n^{1/p} \bigr)<\infty .\)
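The corollary follows from the theorems by the standard lower bound for the truncated expectation (a routine step, spelled out here for completeness): for any \(\varepsilon >0\),

```latex
E\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{j=1}^{k}X_{j}\Bigr|-\varepsilon n^{\alpha }\Bigr)^{+}
=\int_{0}^{\infty }P\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{j=1}^{k}X_{j}\Bigr|>\varepsilon n^{\alpha }+t\Bigr)\,dt
\geq \int_{0}^{\varepsilon n^{\alpha }}P\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{j=1}^{k}X_{j}\Bigr|>2\varepsilon n^{\alpha }\Bigr)\,dt
=\varepsilon n^{\alpha }\,P\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{j=1}^{k}X_{j}\Bigr|>2\varepsilon n^{\alpha }\Bigr).
```

Multiplying the weight \(f(n)=n^{\alpha p-2-\alpha }l(n)\) from the proof of Theorem 3.1 by \(n^{\alpha }\) gives \(n^{\alpha p-2}l(n)\), and since \(\varepsilon >0\) is arbitrary the complete convergence statement follows; the case of Theorem 3.2, with \(h(n)=n^{-1-1/p}l(n)\) and \(\alpha =1/p\), is identical.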
Remark 3.4
Since m-WOD random variables include independent, m-NA, NSD, WOD, m-NOD, and m-END random variables, our results also hold for all of these sequences; therefore Theorem 3.1 and Theorem 3.2 improve upon the known results.
Remark 3.5
Obviously, the assumption that \(\{Y_{i},-\infty < i<\infty \}\) is stochastically dominated by a random variable Y is weaker than the assumption that the random variables \(\{Y_{i},-\infty < i<\infty \}\) are identically distributed; therefore the results of Theorem 3.1 and Theorem 3.2 also hold for identically distributed random variables.
Remark 3.6
Let \(a_{0}=1\) and \(a_{i}=0\) for \(i\neq 0\); then \(S_{n}=\sum_{k=1}^{n}X_{k}=\sum_{k=1}^{n}Y_{k}\). Hence the results of Theorem 3.1 and Theorem 3.2 also hold when \(\{X_{k},k\geq 1\}\) itself is a sequence of m-WOD random variables which is stochastically dominated by a random variable Y.
Remark 3.7
The results obtained in this paper and in Fang et al. [19] are different: here we mainly discuss the complete moment convergence of moving average processes for m-WOD sequences, whereas Fang et al. [19] proved asymptotic approximations of ratio moments based on m-WOD sequences.
4 Conclusions
In this paper, using the moment inequality for m-WOD sequences and the truncation method, we established the complete moment convergence for the partial sums of moving average processes \(\{X_{n}=\sum_{i=-\infty }^{\infty }a_{i}Y_{i+n},n\geq 1\}\), where \(\{Y_{i},-\infty < i<\infty \}\) is a sequence of m-WOD random variables which is stochastically dominated by a random variable Y, and \(\{a_{i},-\infty < i<\infty \}\) is an absolutely summable sequence of real numbers. These conclusions extend and improve the corresponding results from m-END sequences to m-WOD sequences.
Availability of data and materials
Data sharing not applicable to this article as no data sets were generated or analysed during the current study.
References
Burton, R.M., Dehling, H.: Large deviations for some weakly dependent random processes. Stat. Probab. Lett. 9(5), 397–401 (1990)
Ibragimov, I.A.: Some limit theorems for stationary processes. Theory Probab. Appl. 7, 349–382 (1962)
Račkauskas, A., Suquet, C.: Functional central limit theorems for self-normalized partial sums of linear processes. Lith. Math. J. 51(2), 251–259 (2011)
An, J.: Complete moment convergence of weighted sums for processes under asymptotically almost negatively associated assumptions. Proc. Indian Acad. Sci. Math. Sci. 124, 267–279 (2014)
Chen, P.Y., Hu, T.C., Volodin, A.: Limiting behaviour of moving average processes under φ-mixing assumption. Stat. Probab. Lett. 79(1), 105–111 (2009)
Kim, T.S., Ko, M.H.: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 78(7), 839–846 (2008)
Li, D.L., Rao, M.B., Wang, X.C.: Complete convergence of moving average processes. Stat. Probab. Lett. 14(2), 111–114 (1992)
Li, Y.X., Zhang, L.X.: Complete moment convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 70(3), 191–197 (2004)
Wang, X.J., Hu, S.H.: Complete convergence and complete moment convergence for martingale difference sequence. Acta Math. Sin. Engl. Ser. 30(1), 119–132 (2014)
Yang, W.Z., Hu, S.H.: Complete moment convergence of pairwise NQD random variables. Stochastics 87(2), 199–208 (2015)
Zhang, L.X.: Complete convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 30(2), 165–170 (1996)
Zhou, X.C.: Complete moment convergence of moving average processes under φ-mixing assumptions. Stat. Probab. Lett. 80(5–6), 285–292 (2010)
Zhou, X.C., Lin, J.G.: Complete moment convergence of moving average processes under ρ-mixing assumption. Math. Slovaca 61(6), 979–992 (2011)
Zhang, Y.: Complete moment convergence for moving average process generated by \(\rho ^{-}\)-mixing random variables. J. Inequal. Appl. (2015). https://doi.org/10.1186/s13660-015-0766-5
Zhang, Y., Ding, X.: Further research on complete moment convergence for moving average process of a class of random variables. J. Inequal. Appl. (2017). https://doi.org/10.1186/s13660-016-1287-6
Song, M.Z., Zhu, Q.X.: The strong convergence properties of weighted sums for a class of dependent random variables. Commun. Stat., Theory Methods 49(4), 3455–3465 (2020)
Song, M.Z., Zhu, Q.X.: Complete moment convergence of extended negatively dependent random variables. J. Inequal. Appl. (2020). https://doi.org/10.1186/s13660-020-02416-7
Wang, K.Y., Wang, Y.B., Gao, Q.W.: Uniform asymptotics for the finite-time ruin probability of a new dependent risk model with a constant interest rate. Methodol. Comput. Appl. Probab. 15(1), 109–124 (2013)
Fang, H.Y., Ding, S.S., Li, X.Q., Yang, W.Z.: Asymptotic approximations of ratio moments based on dependent sequences. Mathematics 8(3), 361 (2020). https://doi.org/10.3390/math8030361
Liu, L.: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 79(9), 1290–1298 (2009)
Ebrahimi, N., Ghosh, M.: Multivariate negative dependence. Commun. Stat., Theory Methods 10(4), 307–337 (1981)
Hu, T.Z.: Negatively superadditive dependence of random variables with applications. Chinese J. Appl. Probab. Statist. 16(2), 133–144 (2000)
Wang, X.J., Li, X.Q., Yang, W.Z., Hu, S.H.: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 25(11), 1916–1920 (2012)
Acknowledgements
The authors thank the editor and the referees for constructive and pertinent suggestions, which have improved the quality of the manuscript greatly.
Funding
This work was supported by the National Natural Science Foundation of China (Grant No. 11701077) and Team Project of Jilin Provincial Department of Science and Technology (Grant No. 20200301036RQ).
Contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Guan, L., Xiao, Y. & Zhao, Y. Complete moment convergence of moving average processes for m-WOD sequence. J Inequal Appl 2021, 16 (2021). https://doi.org/10.1186/s13660-021-02546-6