Open Access

Complete moment convergence of double-indexed randomly weighted sums of mixing sequences

Journal of Inequalities and Applications 2016, 2016:313

https://doi.org/10.1186/s13660-016-1260-4

Received: 6 August 2016

Accepted: 22 November 2016

Published: 29 November 2016

Abstract

In this paper, we study the complete moment convergence of sums of ρ̃-mixing sequences which are double-indexed randomly weighted and stochastically dominated by a random variable X. Under different moment conditions on X and the weights, many complete moment convergence and complete convergence results are obtained. Moreover, some simulations are given for illustration.

Keywords

complete convergence; double-indexed randomly weighted sums; ρ̃-mixing sequences

MSC

60F15

1 Introduction

Let \(\{X_{n},n\geq1\}\) be a random variable sequence defined on a fixed probability space \((\Omega,\mathscr{F},P)\). Denote \(\mathscr{F}_{S}=\sigma(X_{i},i\in S\subset\mathbb{N})\). For given sub-σ-algebras \(\mathscr{B},\mathscr{R}\) of \(\mathscr{F}\), let
$$\rho(\mathscr{B},\mathscr{R})=\sup_{X\in L_{2}(\mathscr{B}),Y\in L_{2}(\mathscr{R})}\frac{|EXY-EXEY|}{(\operatorname{Var}X\cdot\operatorname{Var}Y)^{1/2}}. $$
Define
$$\tilde{\rho}(k)=\sup\rho(\mathscr{F}_{S},\mathscr{F}_{T}), $$
where the supremum is taken over all finite subsets \(S,T \subset\mathbb{N}\) such that
$$\operatorname{dist}(S,T)=\min_{j\in S,h\in T}|j-h|\geq k,\quad k\geq0. $$

Obviously, one has \(0\leq\tilde{\rho}(k+1)\leq \tilde{\rho}(k)\leq1\) and \(\tilde{\rho}(0)=1\).

Definition 1

A sequence of random variables \(\{X_{n},n\geq1\}\) is said to be a ρ̃-mixing sequence if there exists \(k\in\mathbb{N}\) such that \(\tilde{\rho}(k)<1\).

The concept of ρ̃-mixing random variables dates back to Stein [1]. Bradley [2] studied the properties of ρ̃-mixing random variables and obtained a central limit theorem. There are many examples of ρ̃-mixing random variables. Let \(\{e_{n},n\geq1\}\) be a sequence of independent identically distributed (i.i.d.) random variables with zero mean and finite variance. For \(n\geq1\), let \(X_{n}=\sum_{i=0}^{p}c_{i}e_{n-i}\), where p is a positive integer and the \(c_{i}\) are constants, \(i=0,1,2,\ldots,p\). Then \(\{X_{n}\}\) is a moving average process of order p, and it can be checked that \(\{X_{n}\}\) is a ρ̃-mixing process. Moreover, if \(\{X_{n}\}\) is a strictly stationary, finite-state, irreducible, and aperiodic Markov chain, then it is a ρ̃-mixing sequence (see Bradley [3]). There are many results for ρ̃-mixing sequences; see Peligrad and Gut [4] and Utev and Peligrad [5] for moment inequalities, Sung [6] and Hu et al. [7] for inverse moments, Yang et al. [8] for the nonlinear regression model, Wang et al. [9] for the Bahadur representation, etc.
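To make the moving average example concrete, here is a short Python sketch (illustrative only; the coefficient values, sample size, and innovation distribution are our own choices, not taken from the paper). It simulates \(X_{n}=\sum_{i=0}^{p}c_{i}e_{n-i}\) with \(p=4\) and checks the dependence structure numerically: terms more than p apart share no innovations and are uncorrelated, while adjacent terms are positively correlated.

```python
import numpy as np

def moving_average(e, c):
    """X_n = sum_{i=0}^{p} c[i] * e[n-i]: an MA(p), hence p-dependent, process."""
    p = len(c) - 1
    # e supplies p extra initial innovations so that X is defined from index p on
    return np.array([sum(c[i] * e[n - i] for i in range(p + 1))
                     for n in range(p, len(e))])

rng = np.random.default_rng(0)
c = [0.5, 0.4, 0.3, 0.2, 0.1]            # p = 4, arbitrary positive coefficients
e = rng.standard_normal(20_000 + 4)      # i.i.d. N(0,1) innovations, zero mean
x = moving_average(e, c)

# X_n and X_{n+5} share no innovations, so they are independent (4-dependence);
# adjacent terms are positively correlated (theoretical lag-1 corr = 0.4/0.55).
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
lag5 = np.corrcoef(x[:-5], x[5:])[0, 1]
```

The sample lag-1 correlation is far from zero while the lag-5 correlation is negligible, matching the p-dependence that makes \(\{X_{n}\}\) ρ̃-mixing.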

On the one hand, since Hsu and Robbins [10] introduced the concept of complete convergence, it has been an important tool in the study of convergence in probability and statistics. Baum and Katz [11] extended the complete convergence result of Hsu and Robbins [10], and Chow [12] first investigated complete moment convergence. Many authors have extended the results on complete convergence from the independent case to various dependent cases. For strong convergence, complete convergence, and their applications for NOD sequences, we refer to Sung [13, 14], Wu [15], and Chen et al. [16], among others. Similarly, for NSD sequences, see Shen et al. [17], Wang et al. [18], Deng et al. [19], Shen et al. [20], and Wang et al. [21], among others. For END sequences, we refer to Wang et al. [22], Hu et al. [23], Wang et al. [24], etc. For more results on strong convergence, complete convergence, and their applications, one can refer to Hu et al. [25], Rosalsky and Volodin [26], Wang et al. [27], Wu et al. [28], Yang et al. [29], Yang and Hu [30], Wang et al. [31], and so on. In addition, for ρ̃-mixing sequences, we refer to Kuczmaszewska [32], An and Yuan [33], Wang et al. [34], Sung [35], and Wu et al. [36] for the study of convergence and its applications.

On the other hand, many authors have studied the convergence of randomly weighted sums of random variables. For example, Thanh and Yin [37] established the almost sure and complete convergence of randomly weighted sums of independent random elements in Banach spaces; Thanh et al. [38] investigated the complete convergence of randomly weighted sums of ρ̃-mixing sequences and gave an application to linear-time-invariant systems; Cabrera et al. [39] investigated the conditional mean convergence and conditional almost sure convergence of randomly weighted sums of dependent random variables; Shen et al. [40] obtained the conditional convergence of randomly weighted sums of random variables based on conditional residual h-integrability. For randomly weighted sums of martingale differences, Yang et al. [41] and Yao and Lin [42] obtained some results on complete convergence and on the moments of the maximum of normed sums. For the tail behavior and ruin theory of randomly weighted sums of random variables, we refer to Gao and Wang [43], Tang and Yuan [44], Leng and Hu [45], Yang et al. [46], Mao and Ng [47], and the references therein.

For \(n\geq1\), let \(S_{n}=\sum_{i=1}^{n}A_{ni}X_{i}\), where \(\{X_{i}\}\) is a ρ̃-mixing sequence and \(\{A_{ni}\}\) are double-indexed random weights. Inspired by the papers above, we study the complete moment convergence of the sums \(S_{n}\) of ρ̃-mixing sequences \(\{X_{i}\}\) which are double-indexed randomly weighted and stochastically dominated by a random variable X. Under different moment conditions on X and the weights, many complete moment convergence results are obtained, and some simulations are given for illustration. For the details, please see the results and simulations in Section 3. Some lemmas are presented in Section 2. Finally, the proofs of the main results are presented in Section 4. For a given ρ̃-mixing sequence of random variables \(\{X_{n},n\geq1\}\), we denote the dependence coefficient \(\tilde{\rho}(k)\) by \(\tilde{\rho}(X,k)\). In addition, for convenience, let \(C,C_{1},C_{2},\ldots\) denote positive constants that are independent of n and may take different values in different expressions, and let \(x^{+}=\max(x,0)\) and \(x^{-}=\max(-x,0)\).

2 Some lemmas

Lemma 2.1

Utev and Peligrad [5]

Let \(0\leq r<1\), \(p\geq2\), and k be a positive integer. Assume that \(\{X_{n}, n\geq1\}\) is a mean zero sequence of ρ̃-mixing random variables satisfying \(\tilde{\rho}(X,k)\leq r\). Let \(E|X_{n}|^{p}<\infty\) for all \(n\geq1\). Then there exists a positive constant C not depending on n such that
$$E \Biggl(\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}X_{i} \Biggr|^{p} \Biggr) \leq C \Biggl\{ \sum_{i=1}^{n} E|X_{i}|^{p}+ \Biggl(\sum_{i=1}^{n} \operatorname{Var}(X_{i}) \Biggr)^{p/2} \Biggr\} . $$

Lemma 2.2

Thanh et al. [38]

Let \(0\leq r<1\) and k be a positive integer. Let \(X=\{X_{n}, n\geq1\}\) and \(Y=\{Y_{n},n\geq1\}\) be two sequences of ρ̃-mixing random variables satisfying \(\tilde{\rho}(X,k)\leq r\) and \(\tilde{\rho}(Y,k)\leq r\), respectively. Suppose that \(f:\mathbb{R}\times\mathbb {R}\rightarrow\mathbb{R}\) is a Borel function. Assume that X is independent of Y. Then the sequence \(f(X,Y)=\{f(X_{n},Y_{n}),n\geq1\}\) is also a ρ̃-mixing sequence of random variables satisfying \(\tilde{\rho}(f(X,Y),k)\leq r\).

Lemma 2.3

Sung [48]

Let \(\{X_{i},1\leq i\leq n\}\) and \(\{Y_{i},1\leq i\leq n\}\) be the sequences of random variables. Then, for any \(q>1\), \(\varepsilon>0\), and \(a>0\),
$$\begin{aligned} E \Biggl(\max_{1\leq k\leq n} \Biggl|\sum_{i=1}^{k}(X_{i}+Y_{i}) \Biggr|-\varepsilon a \Biggr)^{+} \leq& \biggl(\frac{1}{\varepsilon^{q}}+ \frac{1}{q-1} \biggr)\frac{1}{a^{q-1}}E \Biggl(\max_{1\leq k\leq n} \Biggl| \sum_{i=1}^{k}X_{i} \Biggr|^{q} \Biggr) \\ &{}+E \Biggl(\max_{1\leq k\leq n} \Biggl|\sum_{i=1}^{k}Y_{i} \Biggr| \Biggr). \end{aligned}$$

Lemma 2.4

Adler and Rosalsky [49] and Adler et al. [50]

Let \(\{X_{n},n\geq1\}\) be a sequence of random variables, which is stochastically dominated by a random variable X, i.e.
$$\sup_{n\geq1}P\bigl(|X_{n}|>x\bigr)\leq CP\bigl(|X|>x\bigr), \quad \forall x\geq0. $$
Then, for any \(\alpha>0\) and \(b>0\), the following two statements hold:
$$\begin{aligned} &E\bigl[|X_{n}|^{\alpha}I\bigl(|X_{n}|\leq b\bigr)\bigr]\leq C_{1}\bigl\{ E\bigl[|X|^{\alpha}I\bigl(|X|\leq b\bigr)\bigr]+b^{\alpha}P\bigl(|X|>b\bigr) \bigr\} , \\ &E\bigl[|X_{n}|^{\alpha}I\bigl(|X_{n}|>b\bigr)\bigr]\leq C_{2}E\bigl[|X|^{\alpha}I\bigl(|X|>b\bigr)\bigr]. \end{aligned}$$
Consequently, \(E[|X_{n}|^{\alpha}]\leq C_{3}E|X|^{\alpha}\) for all \(n\geq1\).

3 Main results and simulations

Theorem 3.1

Let \(\alpha>1/2\), \(1< p<2\), \(0\leq r<1\), and k be a positive integer. Assume that \(\{X_{n},n\geq1\}\) is a mean zero sequence of ρ̃-mixing random variables with \(\tilde{\rho}(X,k)\leq r\), which is stochastically dominated by a random variable X with \(E|X|^{p}<\infty\). Let \(\{A_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of random variables. Suppose that, for all \(n\geq1\), the sequence \(A_{n}=\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\) and satisfies \(\tilde{\rho}(A_{n},k)\leq r\) and
$$ \sum_{i=1}^{n}EA_{ni}^{2}=O(n). $$
(3.1)
Then, for all \(\varepsilon>0\),
$$ \sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}E \Biggl( \max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|-\varepsilon n^{\alpha} \Biggr)^{+}< \infty $$
(3.2)
and so
$$ \sum_{n=1}^{\infty}n^{\alpha p-2}P \Biggl( \max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|> \varepsilon n^{\alpha} \Biggr)< \infty. $$
(3.3)

Theorem 3.2

Let \(\alpha>1/2\), \(p\geq2\), \(0\leq r<1\), and k be a positive integer. Assume that \(\{X_{n},n\geq1\}\) is a mean zero sequence of ρ̃-mixing random variables with \(\tilde{\rho}(X,k)\leq r\), which is stochastically dominated by a random variable X with \(E|X|^{p}<\infty\). Let \(\{A_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of random variables. Suppose that, for all \(n\geq1\), the sequence \(A_{n}=\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\) and satisfies \(\tilde{\rho}(A_{n},k)\leq r\) and
$$ \sum_{i=1}^{n}E|A_{ni}|^{q}=O(n), \quad\textit{for some } q>\frac{2(\alpha p-1)}{2\alpha-1}. $$
(3.4)
Then, for all \(\varepsilon>0\), (3.2) holds and so (3.3) also holds.

For the case \(1\leq l<2\), we take \(p=2l\) and \(\alpha=2/p\) in Theorem 3.2 and obtain the following result.

Theorem 3.3

Let \(1\leq l<2\), \(0\leq r<1\), and k be a positive integer. Assume that \(\{X_{n},n\geq1\}\) is a mean zero sequence of ρ̃-mixing random variables with \(\tilde{\rho}(X,k)\leq r\), which is stochastically dominated by a random variable X with \(E|X|^{2l}<\infty\). Let \(\{A_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of random variables. Suppose that, for all \(n\geq1\), the sequence \(A_{n}=\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\) and satisfies \(\tilde{\rho}(A_{n},k)\leq r\) and
$$ \sum_{i=1}^{n}E|A_{ni}|^{q}=O(n), \quad\textit{for some } q>\frac{2l}{2-l}. $$
(3.5)
Then, for all \(\varepsilon>0\),
$$ \sum_{n=1}^{\infty}n^{-1/l}E \Biggl( \max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|-\varepsilon n^{1/l} \Biggr)^{+}< \infty $$
(3.6)
and
$$ \sum_{n=1}^{\infty}P \Biggl(\max _{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|> \varepsilon n^{1/l} \Biggr)< \infty, $$
(3.7)
which implies the Marcinkiewicz-Zygmund-type strong law of large numbers
$$ \lim_{n\rightarrow\infty}\frac{1}{n^{1/l}}\sum_{i=1}^{n}A_{ni}X_{i}=0, \quad \textit{a.s.} $$
(3.8)

When \(\alpha\geq1\) and \(E|X|<\infty\), we have the following result.

Theorem 3.4

Let \(\alpha\ge1\), \(0\leq r<1\), and k be a positive integer. Assume that \(\{X_{n},n\geq1\}\) is a sequence of ρ̃-mixing random variables with \(\tilde{\rho}(X,k)\leq r\), which is stochastically dominated by a random variable X with \(E|X|<\infty\). Suppose that \(EX_{n} =0\) for all \(n\geq1\) if \(\alpha=1\). Let \(\{A_{ni},1\leq i\leq n,n\geq1\}\) be a triangular array of random variables. Suppose that, for all \(n\geq1\), the sequence \(A_{n}=\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\) and satisfies \(\tilde{\rho}(A_{n},k)\leq r\) and (3.1) holds. Then, for all \(\varepsilon>0\), we have
$$ \sum_{n=1}^{\infty}n^{\alpha-2}P \Biggl( \max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni}X_{i} \Biggr|>\varepsilon n^{\alpha} \Biggr)< \infty. $$
(3.9)

Remark 3.1

In Theorem 3.12 of Thanh et al. [38], the authors obtained the complete convergence results (3.3) of Theorems 3.1 and 3.2 and (3.7) and (3.8) of Theorem 3.3. Here we also obtain the complete moment convergence results (3.2) of Theorems 3.1 and 3.2 and (3.6) of Theorem 3.3. Meanwhile, we establish the complete convergence result (3.9) of Theorem 3.4 under only a first moment condition on X, so we extend the results of Thanh et al. [38]. In addition, if \(A_{ni}\equiv1\), then Wang et al. [27] established (3.2) and (3.3) for φ-mixing sequences (see Corollaries 3.2 and 3.3 of Wang et al. [27]). Similarly, if the \(A_{ni}=a_{ni}\) are constant weights in (3.4) and (3.5), then Yang et al. [29] obtained (3.2), (3.6), and (3.8) for martingale differences (see Theorem 5 and Corollary 6 of Yang et al. [29]). Therefore, we extend the results of Wang et al. [27] and Yang et al. [29] to double-indexed randomly weighted sums of ρ̃-mixing sequences. Moreover, some simulations are presented to illustrate (3.8).

Simulation 3.1

On the one hand, for all \(n\geq1\), define \(X_{n}=\sum_{i=0}^{p}c_{i}e_{n-i}\) for some positive integer p and positive constants \(c_{i}\), \(i=0,1,2,\ldots,p\), where \(\{e_{i}\}\) are independent random variables. Then \(\{X_{n}\}\) is an m-dependent sequence (with \(m=p\)), which is also a ρ̃-mixing sequence. For example, for \(n\geq1\), let
$$ X_{n}=0.5e_{n}+0.4e_{n-1}+0.3e_{n-2}+0.2e_{n-3}+0.1e_{n-4}, $$
(3.10)
where \(\{e_{i}\}\) are i.i.d. random variables, for example \(e_{0}\sim N(0,\sigma^{2})\) with \(\sigma^{2}>0\) or \(e_{0}\sim U(-a,a)\) with \(a>0\). On the other hand, there are two cases of assumptions on \(\{A_{ni},1\leq i\leq n,n\geq1\}\):

(1) For all \(n\geq1\), let \(\{A_{ni},1\leq i\leq n\}\) be i.i.d. random variables satisfying \(A_{11}\sim t(m)\) with \(m>0\), which are also independent of \(\{e_{i}\}\).

(2) For all \(n\geq2\), let \(A_{n1},A_{n2},\ldots,A_{nn}\) be independent of \(\{e_{i}\}\) with \((A_{n1},A_{n2},\ldots,A_{nn})\sim N_{n}(0,\Sigma)\), where 0 is the zero vector,
$$\Sigma= \begin{bmatrix} 1&\rho& \rho^{2} & 0& \cdots&0&0 &0 & 0\\ \rho& 1& \rho& \rho^{2} &\cdots&0&0 & 0 &0 \\ \rho^{2} &\rho& 1 & \rho& \cdots&0&0 & 0 &0 \\ 0&\rho^{2} &\rho& 1 & \cdots& 0&0 &0 & 0 \\ \vdots& \vdots& \vdots& \vdots& & \vdots&\vdots& \vdots& \vdots\\ 0 & 0 & 0& 0 & \cdots&1& \rho&\rho^{2}&0 \\ 0 & 0 & 0 & 0& \cdots&\rho& 1 & \rho&\rho^{2} \\ 0 & 0 & 0 & 0& \cdots&\rho^{2}& \rho& 1 &\rho\\ 0 & 0 & 0 & 0& \cdots&0 & \rho^{2} & \rho&1 \end{bmatrix} _{n\times n}, $$
and \(-1<\rho<1\).
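As a sketch of how the banded covariance Σ above can be generated (illustrative Python, though the paper's simulations use MATLAB; the values of n and ρ here are our own choices), the following builds Σ and draws one weight vector. For |ρ| small enough, Σ is strictly diagonally dominant and hence a valid (positive definite) covariance matrix.

```python
import numpy as np

def banded_sigma(n, rho):
    """Covariance with 1 on the diagonal and rho, rho**2 on the first
    two off-diagonals, zero elsewhere (the matrix displayed above)."""
    sigma = np.eye(n)
    sigma += rho * (np.eye(n, k=1) + np.eye(n, k=-1))
    sigma += rho ** 2 * (np.eye(n, k=2) + np.eye(n, k=-2))
    return sigma

n, rho = 8, 0.3                      # illustrative choices
sigma = banded_sigma(n, rho)
# 2*(|rho| + rho**2) < 1 here, so sigma is strictly diagonally dominant,
# hence positive definite, and N_n(0, Sigma) is well defined.
rng = np.random.default_rng(1)
a = rng.multivariate_normal(np.zeros(n), sigma)   # one row (A_n1, ..., A_nn)
```

Each row of weights in case (2) is one such draw, taken independently of the innovations \(\{e_{i}\}\).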
Using MATLAB, we draw box plots to illustrate
$$ \frac{1}{n^{1/l}}\sum_{i=1}^{n}A_{ni}X_{i} \rightarrow0,\quad n\rightarrow \infty. $$
(3.11)
For \(l=1.5\), the distributions \(e_{0}\sim N(0,1)\), \(A_{11}\sim t(4)\) or \(A_{11}\sim t(20)\), and sample sizes \(n=100,200,\ldots,1{,}000\), we repeat each experiment 10,000 times and obtain the box plots in Figures 1 and 2. In Figures 1 and 2, the y-axis gives the value of (3.11) and the x-axis gives the sample size n. From Figures 1 and 2, for the cases \(l=1.5\), \(e_{0}\sim N(0,1)\), \(A_{11}\sim t(4)\), and \(A_{11}\sim t(20)\), it can be seen that the medians are close to 0 and their variation ranges become smaller as the sample size n increases. Comparing Figure 1 with Figure 2, the variation range in Figure 2 is smaller than that in Figure 1, which can be explained by the variance of \(t(20)\) being smaller than that of \(t(4)\).
Figure 1

The Box plots for normal distribution and t distribution.

Figure 2

The Box plots for normal distribution and t distribution.

Similarly, for \(l=1.2\) and \(l=1.3\), \(e_{0}\sim U(-1,1)\), and \((A_{n1},\ldots,A_{nn})\sim N_{n}(0,\Sigma)\) with \(\rho=-0.5\) or \(\rho=0.3\), we obtain Figures 3 and 4. From Figures 3 and 4, it is also seen that the medians are close to 0 and their variation ranges become smaller as the sample size n increases.
Figure 3

The Box plots for uniform distribution and multivariate normal distribution.

Figure 4

The Box plots for uniform distribution and multivariate normal distribution.
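The experiment behind Figures 1 and 2 (case (1)) can be sketched in Python as well; this is an illustrative reimplementation with a reduced replication count, not the authors' MATLAB code. It computes \(n^{-1/l}\sum_{i=1}^{n}A_{ni}X_{i}\) with \(X_{i}\) from (3.10), \(A_{ni}\) i.i.d. \(t(4)\), and \(l=1.5\):

```python
import numpy as np

rng = np.random.default_rng(2)
c = np.array([0.5, 0.4, 0.3, 0.2, 0.1])   # coefficients of (3.10)
l = 1.5

def weighted_sum_stat(n, df=4):
    """One replication of n**(-1/l) * sum_i A_ni * X_i for case (1)."""
    e = rng.standard_normal(n + 4)          # i.i.d. N(0,1) innovations
    x = np.convolve(e, c, mode="valid")     # X_1, ..., X_n from (3.10)
    a = rng.standard_t(df, size=n)          # i.i.d. weights A_ni ~ t(4)
    return (a * x).sum() / n ** (1 / l)

reps = 200                                  # the paper uses 10,000 replications
stats = {n: np.median([weighted_sum_stat(n) for _ in range(reps)])
         for n in (100, 1000)}
# The medians of the replications sit near 0, in line with (3.8)/(3.11).
```

With the full 10,000 replications and the full grid of sample sizes, one recovers box plots like Figures 1 and 2, whose medians tighten around 0 as n grows.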

4 The proofs of main results

Proof of Theorem 3.1

For all \(n\geq1\), let \(X_{ni}=X_{i}I(|X_{i}|\leq n^{\alpha})\), \(\tilde{X}_{ni}=X_{i}I(|X_{i}|> n^{\alpha})\), \(1\leq i\leq n\). Obviously, for \(1\leq i\leq n\), \(A_{ni}X_{i}\) decomposes as
$$\begin{aligned} A_{ni}X_{i}&=A_{ni}X_{ni}+A_{ni} \tilde{X}_{ni} \\ &=\bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr]+E(A_{ni}X_{ni})+A_{ni}\tilde {X}_{ni}. \end{aligned}$$
(4.1)
Then, by (4.1) and Lemma 2.3 with \(a=n^{\alpha}\) and \(q=2\), we have
$$\begin{aligned} &\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}E \Biggl(\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j}A_{ni}X_{i} \Biggr|- \varepsilon n^{\alpha} \Biggr)^{+} \\ &\quad\leq C_{1}\sum_{n=1}^{\infty}n^{\alpha p-2-2\alpha}E \Biggl(\max_{1\leq j\leq n} \Biggl(\sum _{i=1}^{j}\bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr] \Biggr)^{2} \Biggr) \\ &\qquad{}+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}E \Biggl(\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j}A_{ni} \tilde{X}_{ni} \Biggr| \Biggr) +\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha} \Biggl(\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j}E(A_{ni}X_{ni}) \Biggr| \Biggr) \\ &\quad:=I_{1}+I_{2}+I_{3}. \end{aligned}$$
(4.2)
By (3.1) and Hölder’s inequality, it is easy to establish that
$$ \sum_{i=1}^{n} E|A_{ni}|=O(n). $$
(4.3)
Since, for all \(n\geq1\), \(\{A_{ni},1\leq i\leq n\}\) is independent of the sequence \(\{X_{n},n\geq1\}\), by Markov’s inequality, (4.3), Lemma 2.4, \(E|X|^{p}<\infty\), and \(p>1\), we obtain
$$\begin{aligned} I_{2} \leq&\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum_{i=1}^{n}E|A_{ni}|E|X_{i}|I \bigl(|X_{i}|>n^{\alpha}\bigr) \\ \leq& C_{1}\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}E\bigl[|X|I\bigl(|X|>n^{\alpha }\bigr)\bigr] \\ =&C_{1}\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}\sum_{m=n}^{\infty}E\bigl[|X|I \bigl(m< |X|^{1/\alpha}\leq m+1\bigr)\bigr] \\ =&C_{1}\sum_{m=1}^{\infty}E \bigl[|X|I\bigl(m< |X|^{1/\alpha}\leq m+1\bigr)\bigr]\sum _{n=1}^{m}n^{\alpha(p-1)-1} \\ \leq&C_{2}\sum_{m=1}^{\infty}m^{\alpha p-\alpha}E\bigl[|X|I\bigl(m< |X|^{1/\alpha}\leq m+1\bigr)\bigr]\leq C_{3}E|X|^{p}< \infty. \end{aligned}$$
(4.4)
By the independence and \(EX_{i}=0\), we have \(E(A_{ni}X_{i})=EA_{ni}EX_{i}=0\), which implies
$$E(A_{ni}X_{ni})=EA_{ni}E(X_{i}- \tilde{X}_{ni})=-EA_{ni}E\bigl(X_{i}I \bigl(|X_{i}|> n^{\alpha}\bigr)\bigr),\quad 1\leq i\leq n. $$
Therefore, by Lemma 2.4 and the proof of (4.4), it follows that
$$\begin{aligned} I_{3} =&\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha} \Biggl|\sum_{i=1}^{n} \bigl[-EA_{ni}E\bigl(X_{i}I\bigl(|X_{i}|> n^{\alpha }\bigr)\bigr)\bigr] \Biggr| \\ \leq&\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum _{i=1}^{n}E|A_{ni}|E \bigl[|X_{i}|I\bigl(|X_{i}|>n^{\alpha}\bigr)\bigr]\leq C_{1}E|X|^{p}< \infty. \end{aligned}$$
(4.5)
By Lemma 2.2, we find that \(AX=\{[A_{ni}X_{ni}-E(A_{ni}X_{ni})],1\leq i\leq n\}\) is also a mean zero sequence of ρ̃-mixing random variables with \(\tilde{\rho}(AX,k)\leq r\). Then, by Lemma 2.1 with \(p=2\), Lemma 2.4, and (3.1), we establish that
$$\begin{aligned} I_{1} \leq&C_{1}\sum _{n=1}^{\infty}n^{\alpha p-2-2\alpha}\sum _{i=1}^{n}E(A_{ni}X_{ni})^{2}=C_{1} \sum_{n=1}^{\infty}n^{\alpha p-2-2\alpha}\sum _{i=1}^{n}EA^{2}_{ni}EX_{ni}^{2} \\ \leq&C_{2}\sum_{n=1}^{\infty}n^{\alpha p-1-2\alpha}E\bigl[|X|^{2}I\bigl(|X|\leq n^{\alpha}\bigr) \bigr]+C_{3}\sum_{n=1}^{\infty}n^{\alpha p-1} P\bigl(|X|>n^{\alpha}\bigr) \\ :=&C_{2}I_{11}+C_{3}I_{12}. \end{aligned}$$
(4.6)
By \(p<2, \alpha>1/2\) and \(E|X|^{p}<\infty\), it can be checked that
$$\begin{aligned} I_{11} =&\sum_{n=1}^{\infty}n^{\alpha p-1-2\alpha}\sum_{i=1}^{n} E \bigl[X^{2}I\bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr) \bigr] \\ =&\sum_{i=1}^{\infty}E\bigl[X^{2}I \bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr)\bigr] \sum _{n=i}^{\infty}n^{\alpha(p-2)-1} \\ \leq&C_{1}\sum_{i=1}^{\infty}E \bigl[|X|^{p}|X|^{2-p}I\bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr)\bigr]i^{\alpha p-2\alpha} \\ \leq& C_{1}E|X|^{p}< \infty. \end{aligned}$$
(4.7)
In addition, by the proof of (4.4), we have
$$ I_{12}\leq\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}E\bigl[|X|I\bigl(|X|>n^{\alpha}\bigr)\bigr]\leq C_{1}E|X|^{p}< \infty. $$
(4.8)

Consequently, combining (4.2) with (4.4)-(4.8), we get (3.2) immediately. Moreover, by Remark 2.6 of Sung [48], (3.3) also holds true. □

Proof of Theorem 3.2

We use the same notation as in the proof of Theorem 3.1. Obviously, by \(p\geq2\), it is easy to see that \(q>2(\alpha p-1)/(2\alpha-1)\geq2\). Consequently, by Hölder’s inequality and (3.4), it follows that
$$ \sum_{i=1}^{n} E|A_{ni}|=O(n) \quad \mbox{and} \quad\sum_{i=1}^{n} EA_{ni}^{2}=O(n). $$
(4.9)
From (4.2), (4.4), (4.5), and (4.9), we have \(I_{2}<\infty\) and \(I_{3}<\infty\). So it remains to prove \(I_{1}<\infty\) under the conditions of Theorem 3.2. Since \(q>2\), by an argument similar to the proof of (4.6) and by Lemma 2.1 (applied with exponent q), we have
$$\begin{aligned} I_{1} \leq&C_{1}\sum _{n=1}^{\infty}n^{\alpha p-2-q\alpha}\sum _{i=1}^{n}E\bigl|A_{ni}X_{ni}-E(A_{ni}X_{ni})\bigr|^{q} \\ &{}+C_{2}\sum_{n=1}^{\infty}n^{\alpha p-2-q\alpha} \Biggl(\sum_{i=1}^{n}E \bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr]^{2} \Biggr)^{q/2} \\ :=&C_{1}I_{11}+C_{2}I_{12}. \end{aligned}$$
(4.10)
For \(p\geq2\), \(E|X|^{p}<\infty\) implies \(EX^{2}<\infty\). Thus, by (4.9) and Lemma 2.4, we have
$$\begin{aligned} I_{12} \leq& C_{3}\sum _{n=1}^{\infty}n^{\alpha p-2-q\alpha} \Biggl(\sum _{i=1}^{n}EA^{2}_{ni}EX^{2} \Biggr)^{q/2} \\ \leq& C_{4}\sum_{n=1}^{\infty}n^{\alpha p-2-q\alpha+q/2}< \infty, \end{aligned}$$
(4.11)
by the fact \(q>2(\alpha p-1)/(2\alpha-1)\). Moreover, by Lemma 2.4 and (3.4),
$$\begin{aligned} I_{11} \leq& C_{1}\sum _{n=1}^{\infty}n^{\alpha p-1-q\alpha}E\bigl[|X|^{q}I \bigl(|X|\leq n^{\alpha}\bigr)\bigr]+C_{5}\sum _{n=1}^{\infty}n^{\alpha p-1}P\bigl(|X|>n^{\alpha} \bigr) \\ \leq& C_{2}\sum_{n=1}^{\infty}n^{\alpha p-1-q\alpha}E\bigl[|X|^{q}I\bigl(|X|\leq n^{\alpha}\bigr) \bigr]+C_{3}\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha }E\bigl[|X|I\bigl(|X|>n^{\alpha}\bigr)\bigr] \\ :=&C_{2}I_{111}+C_{3}I_{112}. \end{aligned}$$
(4.12)
By \(p\geq2\) and \(\alpha>1/2\), it follows that \(2(\alpha p-1)/(2\alpha-1)-p\geq0\), which yields \(q>p\). So, by \(E|X|^{p}<\infty\),
$$\begin{aligned} I_{111} =&\sum_{n=1}^{\infty}n^{\alpha p-1-q\alpha}\sum_{i=1}^{n} E \bigl[|X|^{q}I\bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr) \bigr] \\ =&\sum_{i=1}^{\infty}E\bigl[|X|^{q}I \bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr)\bigr] \sum _{n=i}^{\infty}n^{\alpha(p-q)-1} \\ \leq&C_{1}\sum_{i=1}^{\infty}E \bigl[|X|^{p}|X|^{q-p}I\bigl((i-1)^{\alpha}< |X|\leq i^{\alpha}\bigr)\bigr]i^{\alpha p-q\alpha} \\ \leq& C_{1}E|X|^{p}< \infty. \end{aligned}$$
(4.13)
It follows from (4.4) that
$$ I_{112}=\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}E\bigl[|X|I\bigl(|X|>n^{\alpha}\bigr)\bigr]\leq CE|X|^{p}< \infty. $$
(4.14)

Consequently, by (4.10) and (4.11)-(4.14), we obtain \(I_{1}<\infty\). So we have (3.2). Similarly, combining Remark 2.6 of Sung [48] with (3.2), we obtain (3.3). □

Proof of Theorem 3.3

On the one hand, by \(p=2l\), \(\alpha=2/p\), we have \(\alpha p=2\). On the other hand, by the fact \(1\leq l<2\), we have that (3.4) is the same as (3.5). Then, as an application of Theorem 3.2, we obtain (3.6) immediately. Moreover, by (3.3) with \(\alpha p=2\), we establish (3.7). Finally, by the Borel-Cantelli lemma, (3.8) holds true. □

Proof of Theorem 3.4

It is easy to see that
$$ P \Biggl(\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j} A_{ni}X_{i} \Biggr|>\varepsilon n^{\alpha} \Biggr)\leq \sum _{i=1}^{n}P\bigl(|X_{i}|>n^{\alpha} \bigr)+P \Biggl(\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j} A_{ni}X_{ni} \Biggr|>\varepsilon n^{\alpha} \Biggr), $$
(4.15)
where \(X_{ni}=X_{i}I(|X_{i}|\leq n^{\alpha})\). If \(\alpha>1\), then by Lemma 2.4, \(E|X|<\infty\), and (4.3), we obtain
$$\begin{aligned} \frac{1}{n^{\alpha}}\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j}E(A_{ni}X_{ni}) \Biggr| =& \frac{1}{n^{\alpha}}\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j} EA_{ni}E\bigl[X_{i}I\bigl(|X_{i}|\leq n^{\alpha}\bigr)\bigr] \Biggr| \\ \leq& \frac{C_{1}}{n^{\alpha}}\sum_{i=1}^{n}E|A_{ni}| \bigl\{ E\bigl[|X|I\bigl(|X|\leq n^{\alpha}\bigr)\bigr]+n^{\alpha}P\bigl(|X|>n^{\alpha}\bigr)\bigr\} \\ \leq&C_{2}n^{1-\alpha}E|X|\rightarrow 0, \quad\mbox{as } n \rightarrow\infty. \end{aligned}$$
(4.16)
If \(\alpha=1\), then by \(EX_{i}=0\), \(1\leq i\leq n\), Lemma 2.4, and \(E|X|<\infty\), we obtain
$$\begin{aligned} \frac{1}{n^{\alpha}}\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j} E(A_{ni}X_{ni}) \Biggr| =& \frac{1}{n}\max_{1\leq j\leq n} \Biggl|\sum_{i=1}^{j} \bigl[-E(A_{ni})E\bigl(X_{i}I\bigl(|X_{i}|>n\bigr)\bigr) \bigr] \Biggr| \\ \leq& \frac{C_{1}}{n}\sum_{i=1}^{n}E|A_{ni}|E \bigl[|X|I\bigl(|X|>n\bigr)\bigr] \\ \leq&C_{2}E\bigl[|X|I\bigl(|X|>n\bigr)\bigr]\rightarrow 0, \quad \mbox{as } n \rightarrow\infty. \end{aligned}$$
(4.17)
Moreover,
$$ \sum_{n=1}^{\infty}n^{\alpha-2}\sum _{i=1}^{n}P\bigl(|X_{i}|>n^{\alpha} \bigr)\leq C_{1}\sum_{n=1}^{\infty}n^{\alpha-1}P\bigl(|X|>n^{\alpha}\bigr) \leq C_{2}E|X|< \infty. $$
(4.18)
So, by (4.15)-(4.18), to prove (3.9), it suffices to show that
$$I^{*}:=\sum_{n=1}^{\infty}n^{\alpha-2}P \Biggl(\max_{1\leq j\leq n} \Biggl|\sum _{i=1}^{j} \bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr] \Biggr|> \frac{\varepsilon n^{\alpha}}{2} \Biggr)< \infty. $$
Obviously, by Markov’s inequality and the proofs of (4.6), (4.7), and (4.18), we establish that
$$\begin{aligned} I^{*} \leq&\frac{4}{\varepsilon^{2}}\sum_{n=1}^{\infty}n^{-2-\alpha}E\Biggl(\max_{1\leq j\leq n} \Biggl| \sum _{i=1}^{j}\bigl[A_{ni}X_{ni}-E(A_{ni}X_{ni}) \bigr] \Biggr|^{2}\Biggr) \\ \leq&C_{1}\sum_{n=1}^{\infty}n^{-1-\alpha}E\bigl[X^{2}I\bigl(|X|\leq n^{\alpha}\bigr) \bigr]+C_{2}\sum_{n=1}^{\infty}n^{\alpha-1} P\bigl(|X|>n^{\alpha}\bigr) \leq C_{3}E|X|< \infty. \end{aligned}$$
Hence, the proof of the theorem is concluded. □

Declarations

Acknowledgements

The authors are deeply grateful to the editors and the anonymous referee for their careful reading and insightful comments, which helped in improving the earlier version of this paper.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Finance, Chongqing Technology and Business University
(2)
International Business School, Brandeis University

References

  1. Stein, C: A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Vol. II, pp. 583-602 (1972)
  2. Bradley, RC: On the spectral density and asymptotic normality of weakly dependent random fields. J. Theor. Probab. 5(2), 355-373 (1992)
  3. Bradley, RC: Every ‘lower psi-mixing’ Markov chain is ‘interlaced rho-mixing’. Stoch. Process. Appl. 72(2), 221-239 (1997)
  4. Peligrad, M, Gut, A: Almost-sure results for a class of dependent random variables. J. Theor. Probab. 12(1), 87-104 (1999)
  5. Utev, S, Peligrad, M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 16(1), 101-115 (2003)
  6. Sung, SH: On inverse moments for a class of nonnegative random variables. J. Inequal. Appl. 2010, Article ID 823767 (2010)
  7. Hu, SH, Wang, XH, Yang, WZ, Wang, XJ: A note on the inverse moment for the nonnegative random variables. Commun. Stat., Theory Methods 43(8), 1750-1757 (2014)
  8. Yang, WZ, Hu, SH: Large deviation for a least squares estimator in a nonlinear regression model. Stat. Probab. Lett. 91, 135-144 (2014)
  9. Wang, YW, Yang, WZ, Hu, SH: The Bahadur representation of sample quantiles for weakly dependent sequences. Stochastics 88(3), 428-436 (2016)
  10. Hsu, PL, Robbins, H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33(2), 25-31 (1947)
  11. Baum, LE, Katz, M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 120(1), 108-123 (1965)
  12. Chow, Y: On the rate of moment convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177-201 (1988)
  13. Sung, SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 52(2), 447-454 (2011)
  14. Sung, SH: A note on the complete convergence for weighted sums of negatively dependent random variables. J. Inequal. Appl. 2012, Article ID 158 (2012)
  15. Wu, QY: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J. Inequal. Appl. 2012, Article ID 50 (2012)
  16. Chen, P, Sung, SH: Complete convergence and strong laws of large numbers for weighted sums of negatively orthant dependent random variables. Acta Math. Hung. 148(1), 83-95 (2016)
  17. Shen, Y, Wang, XJ, Yang, WZ, Hu, SH: Almost sure convergence theorem and strong stability for weighted sums of NSD random variables. Acta Math. Sin. Engl. Ser. 29(4), 743-756 (2013)
  18. Wang, XJ, Chen, ZY, Hu, SH: Complete convergence for weighted sums of NSD random variables and its application in the EV regression model. Test 24(1), 166-184 (2015)
  19. Deng, X, Wang, XJ, Wu, Y, Ding, Y: Complete moment convergence and complete convergence for weighted sums of NSD random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. a Mat. 110(1), 97-120 (2016)
  20. Shen, AT, Xue, MX, Volodin, A: Complete moment convergence for arrays of rowwise NSD random variables. Stochastics 88(4), 606-621 (2016)
  21. Wang, XH, Li, XQ, Hu, SH: On the complete convergence of weighted sums for an array of rowwise negatively superadditive dependent random variables. ScienceAsia 42(1), 66-74 (2016)
  22. Wang, XJ, Hu, TC, Volodin, A, Hu, SH: Complete convergence for weighted sums and arrays of rowwise extended negatively dependent random variables. Commun. Stat., Theory Methods 42(13), 2391-2401 (2013)
  23. Hu, TC, Rosalsky, A, Wang, KL: Complete convergence theorems for extended negatively dependent random variables. Sankhya A 77(1), 1-29 (2015)
  24. Wang, XJ, Zheng, LL, Xu, C, Hu, SH: Complete consistency for the estimator of nonparametric regression models based on extended negatively dependent errors. Statistics 49(2), 396-407 (2015)
  25. Hu, TC, Rosalsky, A, Volodin, A: A complete convergence theorem for row sums from arrays of rowwise independent random elements in Rademacher type p Banach spaces. Stoch. Anal. Appl. 30(2), 343-353 (2012)
  26. Rosalsky, A, Volodin, A: On almost sure convergence of series of random variables irrespective of their joint distributions. Stoch. Anal. Appl. 32(4), 575-590 (2014)
  27. Wang, XH, Li, XQ, Hu, SH: Complete convergence of weighted sums for arrays of rowwise ϕ-mixing random variables. Appl. Math. 59(5), 589-607 (2014)
  28. Wu, YF, Hu, TC, Volodin, A: Complete convergence and complete moment convergence for weighted sums of m-NA random variables. J. Inequal. Appl. 2015, Article ID 200 (2015)
  29. Yang, WZ, Wang, XH, Li, XQ, Hu, SH: The convergence of double-indexed weighted sums of martingale differences and its application. Abstr. Appl. Anal. 2014, Article ID 893906 (2014)
  30. Yang, WZ, Hu, SH: Complete moment convergence of pairwise NQD random variables. Stochastics 87(2), 199-208 (2015)
  31. Wang, XJ, Hu, SH, Volodin, A: General results of complete convergence and complete moment convergence for weighted sums of some class of random variables. Commun. Stat., Theory Methods 45(15), 4494-4508 (2016)
  32. Kuczmaszewska, A: On complete convergence for arrays of rowwise dependent random variables. Stat. Probab. Lett. 77(11), 1050-1060 (2007)
  33. An, J, Yuan, DM: Complete convergence of weighted sums for \(\rho^{*}\)-mixing sequence of random variables. Stat. Probab. Lett. 78(12), 1466-1472 (2008)
  34. Wang, XJ, Li, XQ, Yang, WZ, Hu, SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 25(11), 1916-1920 (2012)
  35. Sung, SH: On the strong convergence for weighted sums of \(\rho^{*}\)-mixing random variables. Stat. Pap. 54(3), 773-781 (2013) MathSciNetView ArticleMATHGoogle Scholar
  36. Wu, YF, Sung, SH, Volodin, A: A note on the rates of convergence for weighted sums of ρ̃-mixing random variables. Lith. Math. J. 54(2), 220-228 (2014) MathSciNetView ArticleMATHGoogle Scholar
  37. Thanh, LV, Yin, G: Almost sure and complete convergence of randomly weighted sums of independent random elements in Banach spaces. Taiwan. J. Math. 15(4), 1759-1781 (2011) MathSciNetMATHGoogle Scholar
  38. Thanh, LV, Yin, G, Wang, LY: State observers with random sampling times and convergence analysis of double-indexed and randomly weighted sums of mixing processes. SIAM J. Control Optim. 49(1), 106-124 (2011) MathSciNetView ArticleMATHGoogle Scholar
  39. Cabrera, MO, Rosalsky, A, Volodin, A: Some theorems on conditional mean convergence and conditional almost sure convergence for randomly weighted sums of dependent random variables. Test 21(2), 369-385 (2012) MathSciNetView ArticleMATHGoogle Scholar
  40. Shen, AT, Wu, RC, Chen, Y, Zhou, Y: Conditional convergence for randomly weighted sums of random variables based on conditional residual h-integrability. J. Inequal. Appl. 2013, Article ID 122 (2013) MathSciNetView ArticleMATHGoogle Scholar
  41. Yang, WZ, Wang, YW, Wang, XH, Hu, SH: Complete moment convergence for randomly weighted sums of martingale differences. J. Inequal. Appl. 2013, Article ID 396 (2013) MathSciNetView ArticleMATHGoogle Scholar
  42. Yao, M, Lin, L: The moment of maximum normed randomly weighted sums of martingale differences. J. Inequal. Appl. 2015, Article ID 264 (2015) MathSciNetView ArticleMATHGoogle Scholar
  43. Gao, QW, Wang, YB: Randomly weighted sums with dominated varying-tailed increments and application to risk theory. J. Korean Stat. Soc. 39(3), 305-314 (2010) MathSciNetView ArticleMATHGoogle Scholar
  44. Tang, QH, Yuan, ZY: Randomly weighted sums of subexponential random variables with application to capital allocation. Extremes 17(3), 467-493 (2014) MathSciNetView ArticleMATHGoogle Scholar
  45. Leng, X, Hu, TZ: The tail behavior of randomly weighted sums of dependent random variables. Stat. Interface 7(3), 331-338 (2014) MathSciNetView ArticleMATHGoogle Scholar
  46. Yang, Y, Ignataviciute, E, Siaulys, J: Conditional tail expectation of randomly weighted sums with heavy-tailed distributions. Stat. Probab. Lett. 105, 20-28 (2015) MathSciNetView ArticleMATHGoogle Scholar
  47. Mao, TT, Ng, KW: Second-order properties of tail probabilities of sums and randomly weighted sums. Extremes 18(3), 403-435 (2015) MathSciNetView ArticleMATHGoogle Scholar
  48. Sung, SH: Moment inequalities and complete moment convergence. J. Inequal. Appl. 2009, Article ID 271265 (2009) MathSciNetView ArticleMATHGoogle Scholar
  49. Adler, A, Rosalsky, A: Some general strong laws for weighted sums of stochastically dominated random variables. Stoch. Anal. Appl. 5(1), 1-16 (1987) MathSciNetView ArticleMATHGoogle Scholar
  50. Adler, A, Rosalsky, A, Taylor, RL: Strong laws of large numbers for weighted sums of random elements in normed linear spaces. Int. J. Math. Math. Sci. 12(3), 507-530 (1989) MathSciNetView ArticleMATHGoogle Scholar

Copyright

© The Author(s) 2016